Secure the system of action — not just the model.
The security boundary has moved from the model to the system that acts on its behalf. This is a living map of the risks, controls, benchmarks, and architectures that secure agentic, multi-agent, and tool-using AI in production.
See, interpret, constrain, audit — every action.
AI systems that act need controls that operate at runtime, not only at the model boundary. The Defense Plane is the operating model that makes that possible.
Capture prompts, context, tool calls, memory, approvals, outputs, and downstream actions.
Understand intent, authority, sensitivity, tool risk, policy fit, and likely impact.
Limit actions through policy, tool brokers, credential brokers, sandboxing, and approval gates.
Preserve evidence for review, response, governance, assurance, and continuous improvement.
Pick the entry point that matches your job.
Each entry is a working map. Open one, follow its links, leave with a sharper picture of risk, controls, or evidence.
What changes when AI can act
The model that maps how language, tools, memory, and authority compose into real-world risk.
Open map → ATTACK CHAINSHow breaches actually compose
Multi-step paths from prompt injection through tool misuse, credential abuse, and unsafe action.
Walk a chain → PATTERNSDefenses you can implement
Secure runtime, tool calling, MCP, memory, credentials, approval, and observability patterns.
Browse patterns → BENCHMARKSEvaluate behaviour, not just answers
Benchmarks and rubrics for tool use, autonomy, memory, and multi-agent control effectiveness.
See benchmarks →Model security is necessary. It is no longer sufficient.
Securing agentic AI means securing the system of action around the model — prompts, context, tools, memory, credentials, code execution, delegated authority, and multi-agent workflows.
Risks chain together. Defenses have to chain with them.
A single compromised instruction combines with tool permissions, retrieved context, stored memory, delegated authority, weak approvals, and poor observability into a breach chain. The defensive task is to break that chain deliberately.
Patterns, not posters.
Concrete, implementable patterns for the parts of an agent system most likely to break under adversarial pressure.
Secure agent runtime
Sandboxing, isolation, policy enforcement, and observability inside the execution loop.
Read → TOOLSSecure tool calling
Tool brokers, schemas, scopes, allow-lists, side-effect controls, and approval gates.
Read → MCPSecure MCP
Trust boundaries, transport hardening, capability scoping, and untrusted-context handling.
Read → MEMORYMemory security
Write paths, provenance, poisoning detection, and retention controls for agent memory.
Read → CREDENTIALSCredential & token boundaries
Delegated authority, scoped tokens, credential brokers, and least-privilege impersonation.
Read → ARCHITECTUREDefense architecture
Reference architecture for runtime, gateway, observability, audit, and governance layers.
Read →Evidence over endorsement.
Catalogue entries are evidence for security judgement, not endorsements. Every substantial entry passes a rubric before it lands in the map.
- ✔ Type and producer identified
- ✔ Relevance to agentic AI security stated
- ✔ Risks / behaviours / controls covered
- ✔ Maturity level + limitations declared
- ✔ Last-checked date present
- ○ Independent corroboration optional
Resource quality
Sets the bar for standards, papers, tools, and vendor research.
Read → RUBRICBenchmark quality
Tests benchmarks on coverage, realism, and proof limits.
Read → RUBRICCase study
Standardises incident reports for evidence value.
Read → RUBRICAgent security readiness
A scorecard for agent systems before they ship.
Read →Five audiences. One shared map.
How agentic systems change enterprise risk, architecture, and assurance.
Design safer runtimes, tools, memory, approvals, and evaluation loops.
Map attack surfaces, breach chains, controls, and IR evidence.
Connect delegated authority, accountability, audit, and assurance.
Your agents deserve controlled action.
Open the map. Walk a chain. Pick a pattern. Ship a control.