Blog & News
AI-adaptive red team: we read your MCP server's source to generate targeted attacks
The deterministic 53-pattern suite finds the easy stuff. The AI-adaptive layer reads your server's source code and generates attacks targeted at the vulnerabilities it can actually see in your implementation. Here's what happened when we ran it against Anthropic's filesystem reference server.
Decoy Red Team: we built an attacker for your MCP servers
Scanning catches bad code. Red teaming catches bad assumptions. `decoy-redteam` executes 53 adversarial attacks against your live MCP servers and tells you what's exploitable.
We tightened our scanner's regex. Here's what's actually in Anthropic's reference MCP servers
An earlier pass at Anthropic's reference servers reported 12 findings including critical poisoning. After fixing a false-positive-heavy regex, the real picture is quieter, more hygiene-shaped, and more actionable.
Why MCP needs its own security layer
The Model Context Protocol turned agents into a platform. That means the attack surface is a platform too, and traditional AppSec isn't enough.
Anatomy of a tripwire: how we detect compromised agents with zero false positives
Tripwires are decoy tools installed alongside real MCP servers. Honest agents never call them, so every trigger is signal. Here's how the design holds up.
Scanning MCP servers in CI: the shift-left pattern for agent security
Pull request scanning catches poisoned tool descriptions before a developer installs them. Here's the GitHub Action that makes it work.
Why we built Decoy
Every protocol eventually gets its dedicated security layer. MCP is a year in and doesn't have one yet. That's the gap.