Wrap the tool layer.
One adapter call wraps the framework you already use. Every tool invocation now passes through the gate — including tools registered dynamically or generated by the model.
# langchain
tools = guard(tools, "./policy.yaml")
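A minimal sketch of what an adapter like guard() could do under the hood. This is illustrative only: the policy here is a plain Python callable rather than the YAML file the real call takes, and every name besides guard is invented.

```python
from functools import wraps

def guard(tools, policy):
    """Wrap each tool so every invocation passes through a policy check first."""
    def wrap(name, fn):
        @wraps(fn)
        def gated(*args, **kwargs):
            decision = policy(name, args, kwargs)   # "allow" or "block"
            if decision == "block":
                raise PermissionError(f"blocked by policy: {name}")
            return fn(*args, **kwargs)
        return gated
    return {name: wrap(name, fn) for name, fn in tools.items()}

# Toy policy standing in for the YAML file: block DELETEs reaching db.query
def policy(tool, args, kwargs):
    if tool == "db.query" and any("DELETE" in str(a) for a in args):
        return "block"
    return "allow"

tools = {"db.query": lambda sql: f"ran: {sql}"}
gated = guard(tools, policy)
```

Because the wrapper replaces the callable itself, it also covers tools the agent constructs at runtime, as long as they pass through the same registry.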
AgentGuard sits between your agent and its tools — deciding, in code, what reaches your database, your APIs, your customers. Open core. Self-hosted. Built so a hijacked prompt never becomes a hijacked production system.
Drop the SDK in front of your agent’s tool layer. Define policy. Stream the audit. Self-hosted by default — we never see your traffic.
Rules in YAML or Rego. Match on tool, args, host, time, identity. Allow, block, redact, or require human approval — all composable.
match: db.query
op: "DELETE"
decision: block
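A fuller policy file in this style might look like the sketch below. The list structure and field names beyond match, op, and decision are assumptions, not the documented schema:

```yaml
rules:
  - match: db.query
    op: "DELETE"
    decision: block
  - match: shell.exec
    decision: approve        # pause for human sign-off
  - match: http.request
    host: "*.internal.corp"
    decision: allow
  - match: crm.lookup
    decision: redact         # strip sensitive fields from results
```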
Hash-chained, signed audit log. Push to S3, Loki, or your SIEM. Replay any agent run end-to-end — what was asked, what was allowed, what reached the world.
# 14:02:11.482 shell.exec → block · sig d8f1a2
The agent issues a tool call; the gate evaluates policy; the call proceeds, is redacted, or is denied. Every decision is hash-chained to the next — whether the gate runs in-process or on the wire.
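The hash-chaining described above can be sketched in a few lines. The field names and chaining scheme here are illustrative, not AgentGuard's wire format: each entry commits to the digest of the previous one, so editing any record breaks every signature after it.

```python
import hashlib, json

GENESIS = "0" * 64

def chain(entries):
    """Append a prev-hash and signature to each decision, in order."""
    prev, out = GENESIS, []
    for e in entries:
        record = {**e, "prev": prev}
        digest = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        out.append({**record, "sig": digest})
        prev = digest                    # next entry commits to this one
    return out

def verify(log):
    """Recompute every digest; any tampering breaks the chain."""
    prev = GENESIS
    for e in log:
        body = {k: v for k, v in e.items() if k != "sig"}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if body["prev"] != prev or digest != e["sig"]:
            return False
        prev = e["sig"]
    return True

log = chain([
    {"tool": "db.query", "decision": "allow"},
    {"tool": "shell.exec", "decision": "block"},
])
```

The same scheme works whether the gate runs in-process or on the wire, because the chain depends only on the ordered decisions, not on where they were made.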
Policy evaluated in < 1 ms. One decision per call. Signed, replayable audit.
Embed it in the agent for tight integration, or run it on the wire for zero code changes. Same enforcement engine, your choice of install.
JSON-RPC bridge between MCP client and upstream server. Gates every tool call before it reaches the model — no agent code changes.
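At its core, gating an MCP-style bridge means intercepting tools/call requests before forwarding them upstream. A rough sketch, where the method name follows MCP but the blocklist stands in for a real policy engine:

```python
import json

BLOCKED_TOOLS = {"shell.exec"}           # stand-in for the policy engine

def gate(raw_request):
    """Return a JSON-RPC error to answer in place of upstream, or None to forward."""
    req = json.loads(raw_request)
    if req.get("method") == "tools/call":
        tool = req.get("params", {}).get("name")
        if tool in BLOCKED_TOOLS:
            return {"jsonrpc": "2.0", "id": req["id"],
                    "error": {"code": -32000,
                              "message": f"blocked by policy: {tool}"}}
    return None                          # None = forward upstream unchanged

resp = gate(json.dumps({
    "jsonrpc": "2.0", "id": 1, "method": "tools/call",
    "params": {"name": "shell.exec", "arguments": {"cmd": "rm -rf /"}},
}))
```

Because the bridge answers on the client's own protocol, a blocked call surfaces as an ordinary tool error and no agent code changes.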
HTTP proxy in front of OpenAI and Anthropic. Buffers and inspects tool-call blocks against policy, with streaming support.
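Streaming is the subtle part: tool-call arguments arrive as partial deltas, so the proxy has to buffer them before it can evaluate policy. A sketch, with chunk shapes loosely modeled on streamed tool-call deltas and the final check purely illustrative:

```python
def gate_stream(chunks):
    """Accumulate streamed tool-call deltas, then decide once arguments are whole."""
    name, buf = None, []
    for c in chunks:
        delta = c.get("tool_call", {})
        name = delta.get("name", name)   # name usually arrives on the first delta
        buf.append(delta.get("arguments", ""))
    args = "".join(buf)                  # full arguments only exist once buffered
    if name == "shell.exec":             # stand-in for the policy engine
        return ("block", name, args)
    return ("allow", name, args)

decision, name, args = gate_stream([
    {"tool_call": {"name": "db.query", "arguments": '{"sql": '}},
    {"tool_call": {"arguments": '"SELECT 1"}'}},
])
```

Text tokens can stream straight through; only tool-call blocks are held back until the decision lands.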
We took out the things you don’t want in your security path: an outbound dependency, a vendor with your prompts, a black-box decision. AgentGuard is yours, in your process.
Single binary or library. Runs in your VPC, your container, your laptop. There is no SaaS to call.
By default, prompts, tool args, and results never leave your network — the runtime makes no outbound calls. If you opt in to the hosted dashboard, you choose what gets sent.
Every decision hash-chained to the previous. Stream to S3, Loki, or your SIEM. Reconstruct any run end-to-end.
YAML for the common case, Rego when you need expression. Reviewable in PRs. Versioned, diff-able, deployable like any service.
Compiled rule tree, no network hops. The gate is fast enough to sit in front of every tool call without changing user-perceived latency.
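The shape of that claim: index rules once at startup so each decision is a dictionary lookup plus a short predicate scan, with no I/O on the hot path. A toy version, not the real engine:

```python
from collections import defaultdict

RULES = [
    {"match": "db.query",
     "when": lambda args: "DELETE" in args.get("sql", ""),
     "decision": "block"},
    {"match": "db.query",
     "when": lambda args: True,
     "decision": "allow"},
]

INDEX = defaultdict(list)
for r in RULES:
    INDEX[r["match"]].append(r)          # "compile" once at startup

def decide(tool, args):
    for r in INDEX.get(tool, []):        # first matching rule wins
        if r["when"](args):
            return r["decision"]
    return "block"                       # default-deny for unmatched tools
```

Even in Python this is microseconds per call; a compiled rule tree in the real runtime only tightens that bound.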
The runtime is Apache 2.0 and stays that way. The hosted dashboard is a paid layer on top — it never gates the open path.
The full runtime is open source — install, configure, self-host. The paid plan is a hosted dashboard that handles install, policy editing, and rollout for you.
The full runtime, the adapters, the policy engine. Apache 2.0, source on GitHub. You write the YAML, you run the binary, you keep every byte.
Same runtime, made easy. One-click install, a UI for writing and rolling out policy, and an opt-in storage layer if you want a managed audit trail instead of running your own.
Built in public. Shipped against a thesis, not a press release.
Reviewed. Versioned. Auditable. Get on the waitlist for early access — design partners go first, OSS ships free, forever.