Toolchain Threats, AI Productivity Math, and Local‑First Pragmatism

NEWSLETTER | Amplifi Labs
Obsidian Plugins Abused to Deliver PHANTOMPULSE RAT via Blockchain C2
Around the web • May 10, 2026
Researchers detail REF6598, a targeted campaign that weaponizes shared Obsidian vaults: victims are socially engineered over LinkedIn/Telegram to enable community plugin sync, activating trojanized “Shell Commands”/“Hider” plugins that stage PHANTOMPULL and inject the PHANTOMPULSE RAT on Windows and macOS. The RAT resolves its C2 by querying Ethereum transactions from a hard‑coded wallet, hindering takedowns, and supports keylogging, screenshots, file exfiltration, and remote command execution. Teams—especially in finance/crypto—should restrict third‑party plugins and vault sync, monitor for Obsidian spawning shells or AppleScript and unexpected blockchain traffic, and enforce EDR/allowlisting.
Security and Platform Control
curl’s Mythos audit: one low-severity CVE, zero memory-safety flaws
Around the web • May 11, 2026
Anthropic’s Mythos audit of curl (178K LOC) flagged five “confirmed” issues, but curl maintainers validated only one: a low-severity vulnerability to be disclosed with curl 8.21.0 in late June, alongside ~20 non-vulnerability bugs. Daniel Stenberg reports no memory-safety flaws and concludes Mythos isn’t markedly stronger than the other AI scanners the project has already used (which drove 200–300 fixes and a dozen-plus CVEs), while reaffirming that modern AI analyzers are essential for continuous security review.
Apple and Google Push Hardware Attestation From Apps to the Web
Around the web • May 10, 2026
Apple’s App Attest and Google’s Play Integrity APIs are gaining traction, and Apple has already extended the approach to the web via Privacy Pass, with Google planning similar moves. Developers should expect more services to require attestation signals—curbing fraud and bots but increasing platform lock-in and complicating support for rooted, modified, or alternative clients.
AI in the Dev Workflow: Speed vs. Upkeep
Double Output? Then Halve Maintenance: The AI Coding Math
Around the web • May 10, 2026
AI code generation only delivers durable productivity if maintenance costs drop inversely with output (e.g., 2x code requires 0.5x maintenance per unit); otherwise teams hit a maintenance wall as debt compounds. Engineering leaders should pair agents with maintainability gates and ops metrics—rigorous reviews, smaller PRs, strong tests, dependency hygiene, automated refactoring—and invest in AI that accelerates maintenance work itself.
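The break-even condition above reduces to simple arithmetic. A sketch with made-up numbers (the 2x/0.5x figures come from the article; everything else is illustrative):

```python
# Total maintenance load = (units of code) x (maintenance cost per unit).
# If AI doubles output, per-unit maintenance must halve to keep load flat.

def maintenance_load(code_units: float, maint_per_unit: float) -> float:
    """Ongoing maintenance burden for a codebase of a given size."""
    return code_units * maint_per_unit

baseline = maintenance_load(code_units=1.0, maint_per_unit=1.0)

# AI doubles output AND per-unit maintenance halves: burden stays flat.
sustainable = maintenance_load(code_units=2.0, maint_per_unit=0.5)

# AI doubles output but per-unit maintenance is unchanged: burden doubles,
# and the team eventually hits the "maintenance wall".
wall = maintenance_load(code_units=2.0, maint_per_unit=1.0)

print(baseline, sustainable, wall)  # 1.0 1.0 2.0
```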
AI writes features, not architecture: k10s dev rewrites in Rust
Around the web • May 10, 2026
After 234 vibe-coded commits with Claude, the author’s Kubernetes GPU TUI (k10s) collapsed under a 1,690-line god object, leaky shared state, global keymaps, and racy background mutations—evidence that LLMs ship features, not architecture. He’s archiving the Go/Bubble Tea code and rewriting in Rust with explicit guardrails in a CLAUDE.md: per‑view isolation, typed data models, message‑passing on the main loop, and tighter product scope. Takeaway for teams using AI copilots: humans must define architecture and invariants up front or velocity amplifies entropy.
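The “message-passing on the main loop” guardrail generalizes beyond Rust. A minimal sketch in Python (not the author’s code; message and state names are invented for illustration): views and background workers never mutate shared state directly, they emit typed messages that a single owner applies in order.

```python
# Sketch: typed messages + one state owner, instead of leaky shared state
# and racy background mutations.
from dataclasses import dataclass, field
from queue import Queue

@dataclass(frozen=True)
class SelectPod:          # typed message from a view's key handler
    name: str

@dataclass(frozen=True)
class SetGpuUsage:        # typed message from a background poller
    pod: str
    percent: float

@dataclass
class AppState:
    selected: str = ""
    gpu: dict = field(default_factory=dict)

def main_loop(state: AppState, inbox: Queue) -> AppState:
    """Single owner of state: drain messages and apply them in order."""
    while not inbox.empty():
        msg = inbox.get()
        if isinstance(msg, SelectPod):
            state.selected = msg.name
        elif isinstance(msg, SetGpuUsage):
            state.gpu[msg.pod] = msg.percent
    return state

inbox: Queue = Queue()
inbox.put(SelectPod("trainer-0"))
inbox.put(SetGpuUsage("trainer-0", 87.5))
state = main_loop(AppState(), inbox)
print(state.selected, state.gpu)  # trainer-0 {'trainer-0': 87.5}
```

Because only the main loop touches `AppState`, there is no window for a background task to race a view update; concurrency is reduced to message ordering.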
adamsreview supercharges Claude Code with multi‑agent, auto‑fix PR reviews
Around the web • May 11, 2026
adamsreview is an open‑source, multi‑stage code review plugin for Claude Code that runs up to seven parallel “lenses” (correctness, security, UX), deduplicates and validates findings, and can batch‑apply high‑confidence fixes with an Opus cross‑check to prevent regressions. It optionally ensembles Codex, supports interactive walkthroughs for ambiguous issues, ingests external findings, and in v0.4.0 adds auto‑fix hints—running on a standard Claude Code subscription; the author claims higher real‑bug catch rates than built‑ins with fewer false positives.
Run Qwen 3.5‑9B locally on M4: 40 tok/s, 128K
Around the web • May 10, 2026
A practitioner shares a reliable local LLM setup on a 24GB M4 Mac using LM Studio with Qwen 3.5‑9B (Q4_K_S), achieving ~40 tok/s, working tool use, a 128K context window, and “thinking” enabled. It includes concrete inference settings for coding (temperature 0.6, top_p 0.95, top_k 20) plus ready-to-use configs for pi and OpenCode that point to LM Studio’s OpenAI‑compatible server (localhost:1234/v1), including a prompt‑template flag to enable thinking. The post stresses trade‑offs versus SOTA—more stepwise guidance and occasional failures (e.g., git rebase handling)—but highlights offline, low‑cost, privacy‑friendly workflows.
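Those inference settings slot directly into a request against LM Studio’s OpenAI-compatible endpoint. A sketch (the model id `qwen3.5-9b` is a placeholder; use whatever id LM Studio lists for your download):

```python
# Build a chat request for LM Studio's OpenAI-compatible server using the
# coding settings from the post: temperature 0.6, top_p 0.95, top_k 20.
import json

BASE_URL = "http://localhost:1234/v1"  # LM Studio's default local server

def build_chat_request(prompt: str) -> dict:
    return {
        "model": "qwen3.5-9b",   # placeholder id, check LM Studio's model list
        "temperature": 0.6,
        "top_p": 0.95,
        "top_k": 20,             # extra sampling field accepted by local servers
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_chat_request("Write a git pre-commit hook in bash.")
print(json.dumps(payload, indent=2))
# Send with any HTTP client, e.g.:
#   requests.post(f"{BASE_URL}/chat/completions", json=payload)
```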
Designing Agents and Tools People Trust
Build Better AI Agents: 4 UX Lessons from Qwen
Nielsen Norman Group • May 8, 2026
A Nielsen Norman Group usability study of Alibaba’s Qwen agent (six participants completing food-ordering and travel tasks) reveals how consumers actually use AI agents. Key takeaways for builders: provide redundant entry points with clarifying prompts, mirror familiar app UIs alongside chat, handle personal data with explicit, minimal, contextual permissions, and make fees and other decision-critical details transparent to protect autonomy and trust.
Local-First Web Apps in 2026: Practical Architecture, Tools, Tradeoffs
Smashing Magazine • May 6, 2026
A veteran’s guide argues for treating the client as a full replica node and recommends a 2026-ready stack: SQLite (WASM) on OPFS with Postgres and a sync engine like PowerSync, reserving CRDTs/Yjs for rich text. It clarifies when local-first fits (user-generated, collaborative, offline) versus when it doesn’t (server-generated data, strong consistency), and details pragmatic conflict handling (field-level LWW plus server-side semantic validation), auth at the sync boundary, and client-side migrations. Operational gotchas include Safari OPFS quirks, bundle/memory costs, initial sync latency, and the need to abstract the sync layer to hedge against tool churn.
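Field-level last-write-wins is the least exotic piece of that conflict strategy, and it fits in a few lines. A minimal sketch (record shape invented for illustration; the article’s server-side semantic validation would run after a merge like this):

```python
# Field-level LWW merge: each field carries a timestamp, and on conflict
# the newer write per field survives; untouched fields pass through.

def merge_lww(local: dict, remote: dict) -> dict:
    """Merge two records of {field: (value, timestamp)} pairs."""
    merged = dict(local)
    for name, (value, ts) in remote.items():
        if name not in merged or ts > merged[name][1]:
            merged[name] = (value, ts)
    return merged

local  = {"title": ("Draft A", 100), "body": ("hello", 105)}
remote = {"title": ("Draft B", 102), "tags": (["sync"], 101)}
print(merge_lww(local, remote))
# title resolves to "Draft B" (102 > 100); body and tags pass through.
```

The per-field (rather than per-record) granularity is what lets two clients edit different fields of the same row offline without clobbering each other, which is why rich text, where edits overlap inside one field, still needs CRDTs/Yjs.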
Rethinking Utility UX: Turn Maintenance Tools Into Trusted Experiences
Smashing Magazine • May 5, 2026
The piece argues that system maintenance tools are overdue for emotionally intelligent UX and proposes three principles: translate complexity into human language, expose process and progress, and design a clear, satisfying completion—illustrated by Linear, Vercel, and CleanMyMac. For teams building developer utilities and infra dashboards, applying these ideas (aesthetic–usability and peak–end effects) can reduce anxiety, build trust, and boost adoption and retention as user expectations rise.