Guardian
AI Security Firewall
Live API · Built for agents

The Firewall for your AI Agents.

Stop prompt injections and protect your LLM workflows in 2 minutes. Built for Moltbot, Crawlbot, and autonomous agents that touch real data and real systems.

Get Access Now
Live on Railway · BYOK · We never see your prompts
Plug-in SDK · No infra changes · Works with OpenAI & Anthropic
[Product mockup: Guardian SaaS runtime security layer dashboard. Status: Protected · Last audit 68 ms (gpt-4o) · Prompt injection blocked · Threats (24h): 132, blocked: 131 (99.3%) · Agents live: 12 · Signed certificate ed25519 · 9d5c...f2a1]

The problem: AI agents are blind to attacks.

Your LLM stack executes whatever the model outputs. Once an attacker lands a prompt injection, your agent can be made to read files, leak secrets, or rewrite its own mission.

Prompt Injections

Hidden instructions inside user input hijack your agent, bypassing your system prompt and safety rules.

  • • "Ignore previous instructions and..."
  • • Hidden payloads in Markdown, HTML or JSON

Data Leaks

Once compromised, agents can exfiltrate secrets from files, tools or connected APIs without any visibility.

  • Source code & SSH keys
  • Internal documents & customer data

System Takeover

Autonomous agents loop over tools, code, and APIs. A single malicious prompt can turn them into a remote shell.

  • • "Install this binary and execute it"
  • • Infinite loops & unbounded tool calls

The solution: a firewall for every prompt.

Guardian sits in front of your agent, red-teaming every input and issuing a cryptographically signed safety certificate for each request.

1 Install

Install our 1-file SDK

Drop `guardian_client.py` into your codebase. No infra changes, no custom servers.

2 Connect

Plug in your license key + OpenAI key

Guardian uses your OpenAI key (BYOK). You keep full control over cost and data.
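
In practice the setup is two keys and one constructor call. A minimal sketch, assuming you keep both keys in environment variables (the variable names below are just a convention, not something the SDK requires):

import os

from guardian_client import GuardianClient

# Your Guardian license identifies your account; the OpenAI key (BYOK)
# is passed per audit so checks run against your own OpenAI account.
GUARDIAN_LICENSE_KEY = os.environ["GUARDIAN_LICENSE_KEY"]
OPENAI_API_KEY = os.environ["OPENAI_API_KEY"]

client = GuardianClient(license_key=GUARDIAN_LICENSE_KEY)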

3 Certify

Get a signed safety certificate per request

Each audit returns a signed Ed25519 certificate you can verify independently.
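
Verification does not require Guardian to be online. A minimal sketch using the `cryptography` package, assuming the audit response exposes the signed payload and signature as bytes and that you have Guardian's published Ed25519 public key (the exact field names and key-distribution details are assumptions, not the documented response shape):

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def verify_certificate(public_key_bytes: bytes, payload: bytes, signature: bytes) -> bool:
    """Return True if the Ed25519 signature over the audit payload is valid."""
    public_key = Ed25519PublicKey.from_public_bytes(public_key_bytes)
    try:
        public_key.verify(signature, payload)
        return True
    except InvalidSignature:
        return False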

Live Demo: Guardian in your agent loop.

One function call before you pass input to your LLM. If Guardian says unsafe, you block the request.

guardian_client_demo.py
Python · SDK example

from guardian_client import GuardianClient

client = GuardianClient(license_key="GS-XXXX-YYYY")

def guarded_agent_call(user_input: str, openai_key: str):
    # 1) Ask Guardian if this input is safe
    result = client.verify_input(
        agent_input=user_input,
        user_openai_key=openai_key,
    )

    if result.status == "unsafe":
        # Block execution & surface reason to user or logs
        print("[Guardian] Blocked:", result.threat_report)
        return {"error": "Blocked for security reasons."}

    # 2) Safe – call your LLM as usual
    return call_openai(user_input, openai_key)
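
The demo leaves `call_openai` to you. A minimal sketch using the official `openai` Python package (v1+); the model choice and return shape are up to you:

from openai import OpenAI

def call_openai(user_input: str, openai_key: str) -> str:
    """Forward the already-audited input to the model and return its reply."""
    openai_client = OpenAI(api_key=openai_key)
    response = openai_client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": user_input}],
    )
    return response.choices[0].message.content

# Usage: reply = guarded_agent_call(user_input, OPENAI_API_KEY)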

Simple pricing for serious security.

One plan. Unlimited audits. Bring your own OpenAI/Anthropic key.

Guardian SaaS
$19 /month
Unlimited audits · BYOK
  • Unlimited `/v1/audit` calls
  • Works with OpenAI & Anthropic keys
  • Ed25519-signed security certificates
  • Simulation mode for testing (no token spend)
Get Access Now

Pay in USDT/USDC. Enterprise & on-premise options available.