AISecOps · aisecops.net

Your AI agent
is acting.
Is it governed?

Agentic AI systems retrieve data, call tools, and take real actions — with your credentials, on your behalf. AISecOps is the runtime control plane for governing them: local enforcement, context validation, capability-gated execution, deterministic execution boundaries, and structured audit logging.

aisecops-gateway · audit.jsonl
// Local / edge guard — injection precheck
{
  "event": "prompt_injection_detected",
  "severity": "high",
  "action": "blocked_before_llm",
  "tenant": "acme-corp"
}

// Runtime control plane — plan evaluated
{
  "tool": "send_email",
  "phase": "evaluate",
  "capability": "cap_notification_ops",
  "decision": "BLOCK",
  "reason": "recipient_not_allowlisted",
  "correlation_id": "cid-8821"
}

// Structured audit event emitted
aisecops_runtime_decision_total{decision="block"} 1
jsonl persisted → replay pipeline ✓

Framework covers
Prompt Injection · Tool Abuse · Memory Poisoning · Capability Gates · Replayable Audit · Multi-Tenant Isolation
The Problem

Agentic AI changed the threat model.
Most teams haven't caught up.

Enterprises are deploying AI agents that browse the web, read email, query databases, and execute code. Traditional application security was not designed for autonomous execution systems.

Before AISecOps

Your AI agent calls a tool. You see a log entry. There's no policy engine — the call either succeeds or fails at the API level. You have no visibility into what the model was instructed to do, why it chose that tool, or whether the retrieved context was clean.

With AISecOps

Every execution plan passes through a runtime control plane. Retrieved context is sanitized before the model sees it. Capability gates validate requested actions before policy evaluation. Every runtime decision emits a structured audit event. Replayable JSONL logs enable explainability, forensics, and governance.
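The flow above can be sketched as a staged pipeline. This is a minimal illustration, not the AISecOps implementation: the `handle_plan` function, the plan fields, and the stage checks are all hypothetical names invented for this sketch.

```python
import json

def handle_plan(plan, audit_log):
    """Route one execution plan through sanitize -> gate -> policy,
    emitting a structured audit line at every decision point."""
    stages = [
        # Reject plans whose retrieved context carries an obvious injection.
        ("sanitize", lambda p: all("ignore previous" not in c.lower()
                                   for c in p["context"])),
        # Capability gate: the requested capability must have been granted.
        ("capability_gate", lambda p: p["capability"] in p["granted"]),
        # Policy evaluation: the tool itself must be allowlisted.
        ("policy", lambda p: p["tool"] in p["allowlist"]),
    ]
    for phase, check in stages:
        ok = check(plan)
        audit_log.append(json.dumps({"tool": plan["tool"], "phase": phase,
                                     "decision": "ALLOW" if ok else "BLOCK"}))
        if not ok:
            return "BLOCK"  # fail closed at the first failing stage
    return "ALLOW"
```

Note that the audit line is written before the block decision is returned, so even denied plans leave a replayable trace.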

What attackers exploit

Indirect prompt injection via RAG. Tool parameter manipulation. Memory context poisoning. Policy drift as models and prompts evolve. These are not theoretical — they have been demonstrated in production agentic systems.

What this framework provides

A layered runtime governance architecture: local enforcement and context validation, capability containment, execution governance with deterministic boundaries, and replayable observability. Open-source reference implementations. Enterprise adoption guidance. Threat models aligned to OWASP LLM risks.

The Framework

Four layers. No single one is enough.

Securing an agentic AI system requires runtime governance across every transition boundary — from input to planning, evaluation, execution, and audit.

L1 Context — Trust Boundaries

Optional local / edge guards stop obvious injection patterns before cloud model invocation. Validate and sanitize all external data before it enters the model's context window. Treat every retrieved document, memory chunk, and tool response as untrusted input that must be inspected for injection patterns.
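A local guard of this kind can be as simple as a pattern precheck that fires before any model call. The patterns and the event shape below are assumptions modeled on the audit example earlier on this page, not the framework's actual rule set.

```python
import re

# Illustrative injection patterns — a real deployment would use a
# maintained, tested rule set, not two regexes.
PATTERNS = {
    "instruction_override": re.compile(
        r"ignore (all )?(previous|prior) instructions", re.I),
    "role_hijack": re.compile(r"you are now (a|an|the)\b", re.I),
}

def precheck(text, tenant):
    """Inspect untrusted input (retrieved doc, memory chunk, tool
    response) before it enters the model's context window."""
    for name, pattern in PATTERNS.items():
        if pattern.search(text):
            return {
                "event": "prompt_injection_detected",
                "pattern": name,
                "severity": "high",
                "action": "blocked_before_llm",
                "tenant": tenant,
            }
    return None  # no match: safe to forward to the model
```

Running the guard locally keeps suspect content out of the cloud model invocation entirely, which is cheaper and leaks nothing.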

L2 Capability — Least-Privilege Tools

Enforce capability-gated execution and parameter validation before any external call is executed. The policy engine — not the model — decides what actions are permitted. Deny by default.
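A deny-by-default gate might look like the sketch below. The capability names, the validator signature, and the `@acme-corp.com` allowlist rule are invented for illustration; they echo the audit event in the hero example above.

```python
# Registry of known tools: anything absent from this map is denied.
CAPABILITIES = {
    "send_email": {
        "capability": "cap_notification_ops",
        # Parameter validation: recipient must be on the allowlisted domain.
        "validate": lambda p: p.get("to", "").endswith("@acme-corp.com"),
    },
}

def gate(tool, params, granted):
    """Capability gate — the policy engine, not the model, decides."""
    spec = CAPABILITIES.get(tool)
    if spec is None:                       # unknown tool: deny by default
        return ("BLOCK", "tool_not_registered")
    if spec["capability"] not in granted:  # capability never granted
        return ("BLOCK", "capability_not_granted")
    if not spec["validate"](params):       # parameters fail validation
        return ("BLOCK", "recipient_not_allowlisted")
    return ("ALLOW", "policy_passed")
```

The model can request anything; only registered tools with granted capabilities and valid parameters ever reach an external call.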

L3 Runtime Control Plane — Evaluate & Execute

Separate planning, evaluation, and execution into distinct runtime boundaries. Deterministic executors run only approved execution plans. High-risk actions require approval workflows and emit explainable audit trails.
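One way to sketch that separation: the evaluator stamps plans, and the executor refuses anything unstamped. Every name here (`evaluate_plan`, `execute_plan`, the `high_risk_tools` set) is hypothetical, not the framework's API.

```python
import uuid

APPROVED = set()  # correlation ids stamped by the evaluation phase

def evaluate_plan(plan, high_risk_tools=frozenset({"delete_records"})):
    """Evaluation boundary: high-risk tools need a human approval flag."""
    plan["correlation_id"] = f"cid-{uuid.uuid4().hex[:4]}"
    if plan["tool"] in high_risk_tools and not plan.get("human_approved"):
        return "PENDING_APPROVAL"   # parked for an approval workflow
    APPROVED.add(plan["correlation_id"])
    return "APPROVED"

def execute_plan(plan, tools):
    """Execution boundary: a deterministic executor that only runs
    plans the evaluation phase has approved."""
    if plan.get("correlation_id") not in APPROVED:
        raise PermissionError("plan not approved by evaluation phase")
    return tools[plan["tool"]](**plan["args"])
```

Because the executor checks the stamp rather than trusting the model's output, a planner that hallucinates a dangerous call still cannot execute it.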

L4 Observability — Replay & Audit

Emit structured JSONL runtime events at every decision point. Support replay, explainability, forensic reconstruction, governance analytics, and policy drift analysis across deployments.
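The JSONL shape makes replay trivial: one self-contained JSON object per line, read back in order. This is a generic sketch; the field names follow the audit example at the top of the page but are otherwise assumptions.

```python
import io
import json
import time

def emit(stream, **fields):
    """Append one structured decision event as a JSONL line."""
    fields.setdefault("ts", time.time())
    stream.write(json.dumps(fields, sort_keys=True) + "\n")

def replay(stream):
    """Reconstruct the ordered decision sequence for forensics,
    explainability, or policy drift analysis."""
    return [json.loads(line) for line in stream if line.strip()]

# Usage: write a decision, then replay the log.
buf = io.StringIO()
emit(buf, tool="send_email", phase="evaluate", decision="BLOCK",
     reason="recipient_not_allowlisted", correlation_id="cid-8821")
buf.seek(0)
events = replay(buf)
```

Append-only line-delimited JSON also survives partial writes: a truncated final line is skipped, and everything before it replays cleanly.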

View the Reference Architecture →
Resources

Everything is open and free.

Framework documentation, threat models, reference architecture, and working open-source code. No account required.