The Firewall for your AI Agents.
Stop Prompt Injections and protect your LLM workflows in 2 minutes. Built for Moltbot, Crawlbot, and autonomous agents that touch real data and real systems.
The problem: AI agents are blind to attacks.
Your LLM stack executes whatever the model outputs. Once an attacker lands a prompt injection, it can read files, leak secrets, or rewrite your agent’s mission.
Prompt Injections
Hidden instructions inside user input hijack your agent, bypassing your system prompt and safety rules.
- • "Ignore previous instructions and..."
- • Hidden payloads in Markdown, HTML or JSON
Data Leaks
Once compromised, agents can exfiltrate secrets from files, tools or connected APIs without any visibility.
- • Source code & SSH keys
- • Internal documents & customer data
System Takeover
Autonomous agents loop over tools, code, and APIs. A single malicious prompt can turn them into a remote shell.
- • "Install this binary and execute it"
- • Infinite loops & unbounded tool calls
The solution: a firewall for every prompt.
Guardian sits in front of your agent, red-teaming every input and issuing a cryptographically signed safety certificate for each request.
Install our 1-file SDK
Drop `guardian_client.py` into your codebase. No infra changes, no custom servers.
Plug your License + OpenAI Key
Guardian uses your OpenAI key (BYOK). You keep full control over cost and data.
Get a signed safety certificate per request
Each audit returns a signed Ed25519 certificate you can verify independently.
Live Demo: Guardian in your agent loop.
One function call before you pass input to your LLM. If Guardian says unsafe, you block the request.
from guardian_client import GuardianClient
client = GuardianClient(license_key="GS-XXXX-YYYY")
def guarded_agent_call(user_input: str, openai_key: str):
# 1) Ask Guardian if this input is safe
result = client.verify_input(
agent_input=user_input,
user_openai_key=openai_key,
)
if result.status == "unsafe":
# Block execution & surface reason to user or logs
print("[Guardian] Blocked:", result.threat_report)
return {"error": "Blocked for security reasons."}
# 2) Safe – call your LLM as usual
return call_openai(user_input, openai_key)
Simple pricing for serious security.
One plan. Unlimited audits. Bring your own OpenAI/Anthropic key.
- • Unlimited `/v1/audit` calls
- • Works with OpenAI & Anthropic keys
- • Ed25519-signed security certificates
- • Simulation mode for testing (no token spend)
Pay in USDT/USDC. Enterprise & on-premise options available.