OPEN MAP · AGENTIC AI SECURITY

Secure the system of action — not just the model.

The security boundary has moved from the model to the system that acts on its behalf. This is a living map of the risks, controls, benchmarks, and architectures that secure agentic, multi-agent, and tool-using AI in production.

Execution-first · not model-first
Risks chain · defenses chain with them
Evidence over endorsement

[Diagram: user prompt, retrieved context, and system rules feed agentic reasoning, where goals emerge at runtime; the agent draws on internal knowledge, external APIs, and operational tools. Each step is a permitted step that passes a local check, yet the composed outcome may exceed the approved scope.]
THE AI DEFENSE PLANE

See, interpret, constrain, audit — every action.

AI systems that act need controls that operate at runtime, not only at the model boundary. The Defense Plane is the operating model that makes that possible.

[Diagram: the Defense Plane wraps the core agent runtime, observing, interpreting, and constraining prompts, tool calls, memory writes, and credentials, with audit, evidence, and review around the whole.]
OBSERVE

Capture prompts, context, tool calls, memory, approvals, outputs, and downstream actions.

INTERPRET

Understand intent, authority, sensitivity, tool risk, policy fit, and likely impact.

CONSTRAIN

Limit actions through policy, tool brokers, credential brokers, sandboxing, and approval gates.

AUDIT

Preserve evidence for review, response, governance, assurance, and continuous improvement.
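
A minimal sketch of how these four stages could wrap a single tool call at runtime, assuming a policy object and an append-only audit log behind a broker. The names here (DefensePlane, policy.classify, policy.decide, audit_log.append) are illustrative assumptions, not the API of any specific framework.

```python
# Hypothetical sketch: one tool call passed through observe -> interpret -> constrain -> audit.
# All class and method names are illustrative; the policy and audit_log objects are assumed.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ToolCall:
    agent_id: str
    tool: str
    args: dict
    context: dict            # prompts, retrieved documents, approvals already granted

@dataclass
class Verdict:
    allowed: bool
    reason: str
    requires_approval: bool = False

class DefensePlane:
    def __init__(self, policy, audit_log):
        self.policy = policy          # assumed: classifies args, rates tools, decides allow/deny/approve
        self.audit_log = audit_log    # assumed: append-only evidence store

    def observe(self, call: ToolCall) -> dict:
        # Capture enough to reconstruct the action later.
        return {
            "ts": datetime.now(timezone.utc).isoformat(),
            "agent": call.agent_id,
            "tool": call.tool,
            "args": call.args,
            "context_keys": sorted(call.context),
        }

    def interpret(self, call: ToolCall) -> dict:
        # Classify intent, data sensitivity, and the blast radius of the requested tool.
        return {
            "sensitivity": self.policy.classify(call.args),
            "tool_risk": self.policy.tool_risk(call.tool),
        }

    def constrain(self, call: ToolCall, assessment: dict) -> Verdict:
        # Apply policy: deny, allow, or route to a human approval gate.
        decision = self.policy.decide(call.tool, assessment)
        return Verdict(
            allowed=decision in ("allow", "approve"),
            reason=f"policy decision: {decision}",
            requires_approval=(decision == "approve"),
        )

    def execute(self, call: ToolCall, run_tool) -> Verdict:
        record = self.observe(call)
        assessment = self.interpret(call)
        verdict = self.constrain(call, assessment)
        if verdict.allowed and not verdict.requires_approval:
            record["result"] = run_tool(call)    # the tool only runs behind the broker
        # Audit: persist the full decision trail whether or not the call ran.
        self.audit_log.append({**record, **assessment, "verdict": vars(verdict)})
        return verdict
```

The shape is the point: observation and interpretation happen before the call, the constraint decides whether it runs, and the audit record is written either way.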

THE SHIFT

Model security is necessary. It is no longer sufficient.

Securing agentic AI means securing the system of action around the model — prompts, context, tools, memory, credentials, code execution, delegated authority, and multi-agent workflows.

Model-centred security → Agentic execution security

  • Protects prompts, completions, and model-facing data flows → protects the full system of action around the model.
  • Asks whether the model revealed something unsafe → asks what the system can do, who authorised it, and whether the outcome is controlled.
  • Treats language as input and output → treats language as part of the execution layer.
  • Evaluates single responses or short conversations → evaluates multi-step behaviour across context, tools, credentials, memory, approvals, and agents.
  • Relies on isolated controls around prompt and output → relies on layered controls that observe, interpret, constrain, and govern behaviour as it unfolds.
BREACH CHAINS

Risks chain together. Defenses have to chain with them.

A single compromised instruction combines with tool permissions, retrieved context, stored memory, delegated authority, weak approvals, and poor observability to form a breach chain. The defensive task is to break that chain deliberately.

STAGE 1 · Prompt injection
STAGE 2 · Intent compromise
STAGE 3 · Tool misuse
STAGE 4 · Credential abuse
STAGE 5 · Memory poisoning
STAGE 6 · Cross-agent propagation
STAGE 7 · Unsafe action

RISKS COMPOSE · EACH STEP PASSES A LOCAL CHECK
DEFENSES MUST BREAK THE CHAIN, NOT PATCH ENDPOINTS
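
A hedged sketch of why the local checks above are not enough: every step below is covered by some permission the agent holds, so each endpoint check passes, yet the composed effects of the run exceed the scope that was approved. The permission strings and scope model are assumptions made for illustration only.

```python
# Illustrative only: each step passes a local permission check,
# yet the composed run exceeds the scope that was actually approved.
AGENT_TOOL_PERMISSIONS = {
    "read:tickets", "read:customer_pii",
    "send:email:internal", "send:email:external",
}
APPROVED_SCOPE = {"read:tickets", "send:email:internal"}   # what this run was authorised to do

steps = [
    {"stage": "prompt injection", "effects": set()},                    # hostile instruction enters context
    {"stage": "tool misuse",      "effects": {"read:tickets"}},         # benign-looking, permitted
    {"stage": "credential abuse", "effects": {"read:customer_pii"}},    # token is over-scoped, so it passes
    {"stage": "unsafe action",    "effects": {"send:email:external"}},  # exfiltration channel, also "permitted"
]

def local_check(effects: set) -> bool:
    # Per-step check: is each effect covered by *some* permission the agent holds?
    return effects <= AGENT_TOOL_PERMISSIONS

composed: set = set()
for step in steps:
    assert local_check(step["effects"]), step["stage"]   # every endpoint check passes
    composed |= step["effects"]

# The chain-level check is the one that actually catches the breach.
excess = composed - APPROVED_SCOPE
print("composed effects:", sorted(composed))
print("exceeds approved scope by:", sorted(excess))
```
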
QUALITY BAR

Evidence over endorsement.

Catalogue entries are evidence for security judgement, not endorsements. Every substantial entry passes a rubric before it lands in the map.

SCORESHEET · ENTRY · ● PASS
  • ✔ Type and producer identified
  • ✔ Relevance to agentic AI security stated
  • ✔ Risks / behaviours / controls covered
  • ✔ Maturity level + limitations declared
  • ✔ Last-checked date present
  • ○ Independent corroboration optional
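
One way to make the rubric machine-checkable is a small entry schema; the field names below are an assumed encoding of the checklist above, not the map's actual data model.

```python
# Hypothetical encoding of the scoresheet as a checkable record.
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class CatalogueEntry:
    entry_type: str                  # e.g. "benchmark", "control", "paper"
    producer: str                    # who published or maintains it
    relevance: str                   # stated link to agentic AI security
    coverage: list[str]              # risks / behaviours / controls addressed
    maturity: str                    # e.g. "research", "preview", "production"
    limitations: str                 # declared limitations
    last_checked: date               # last-checked date
    corroboration: Optional[str] = None   # independent corroboration stays optional

    def passes_rubric(self) -> bool:
        required = [
            self.entry_type, self.producer, self.relevance,
            self.coverage, self.maturity, self.limitations, self.last_checked,
        ]
        return all(required)         # corroboration deliberately excluded: it is optional
```
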
BUILT FOR

Four audiences. One shared map.

CTOs · LEADERS

How agentic systems change enterprise risk, architecture, and assurance.

AI ENGINEERS

Design safer runtimes, tools, memory, approvals, and evaluation loops.

SECURITY ENGINEERS

Map attack surfaces, breach chains, controls, and IR evidence.

GOVERNANCE

Connect delegated authority, accountability, audit, and assurance.

MAP READY IN 30 MINUTES

Your agents deserve controlled action.

Open the map. Walk a chain. Pick a pattern. Ship a control.