🛡 AI Security Infrastructure

The Security Layer
Every AI Agent Needs

Secra sits between your agents and the LLM. Catches prompt injection, persona hijacking, and data exfiltration in real time - before damage occurs.

No credit card required · Free forever plan

Security + Era — Secra was founded when prompt injection and AI-native attacks made a new layer of defence inevitable.

|
 
500K
Free tokens/month
<1ms
Pre-processor latency
3
Detection layers
30+
Injection patterns

Everything your agent needs to stay safe

Purpose-built for AI teams who need security without the overhead.

🎯
30+
Injection patterns detected
Every known prompt injection variant — direct overrides, persona hijacking, extraction attacks.
<1ms
Layer 0 response time
Pre-processor fires before your LLM even sees the prompt.
🧠
3-Layer Detection

Pre-processor → Rule engine → Groq LLM. Each layer only fires when the previous one is uncertain. Costs stay near zero for obvious attacks.

🔑
API Keys

Generate sk_secra_ scoped keys shown once, bcrypt-hashed at rest. Drop them into any HTTP client and you're protected.

🧹
Sanitize Mode Popular

Don't just block — rewrite. Secra strips injection payloads and returns a clean prompt your LLM can safely process. Get protection without breaking your flow.

🛡
Tool Validation

Validate LLM-generated tool calls before execution. Stops function-injection attacks that target your agent's action layer.

📊
Dashboard Pro

Real-time logs of every scanned prompt, verdict, threat category, and token spend. Understand your attack surface.

Three layers. Millisecond response.

Each layer only activates when the previous one is uncertain — keeping token costs near zero for obvious attacks.

Layer 0
Aho-Corasick Pre-Processor
Multi-pattern string matching across 30+ injection signatures. Zero LLM calls, zero tokens. Fires in under 1ms.
< 1ms · 0 tokens · Always free
Layer 1
Rule Engine
Regex + heuristic rules evaluate prompt structure, entropy, and instruction override patterns. Still no LLM cost.
2–5ms · 0 tokens
Layer 2
Groq LLM (Ambiguous Only)
Only when Layers 0 and 1 score 0.25–0.75 (uncertain) do we call Groq llama3-8b-8192 for the final verdict.
50–200ms · tokens charged
# Two lines to protect your agent from secra import Shield shield = Shield(api_key="sk_secra_xxxx") # Scan for threats result = shield.scan(user_prompt) if result.blocked: return "Request blocked." # Or sanitize + rewrite safe = shield.sanitize(user_prompt) response = call_llm(safe.sanitized_prompt) # Tool call validation shield.validate_tool(tool_name, args)

Up and running in 60 seconds

Install the SDK, grab your free key, and start blocking attacks before your first LLM call.

Need a key? Create a free account →

Simple REST API, ready in 2 minutes

One endpoint to protect your agent. Full SDK wrappers for Python and JavaScript.

POST/v1/scan1× words

Scan a prompt through 3 layers — Aho-Corasick, rules, and LLM. Returns a BLOCK / REVIEW / ALLOW decision with threat type and score.

POST/v1/sanitize2× words

Strip injection patterns from a dirty prompt. Returns a clean version safe to pass to your LLM, with a diff of what was removed.

POST/v1/scan-content1× words

Scan web pages or documents for indirect injection before injecting them into your agent's context window.

POST/v1/validate-tool50 flat

Validate an agent tool call before execution. Catches destructive shell commands, path traversal, and network exfiltration.

GET/v1/usage/balanceFree

Check your token balance, plan name, and days until next reset. Zero tokens consumed.

GET/v1/usage/historyFree

Full scan history with metadata. Raw prompts are never stored — only SHA-256 hashes for audit trails.

POST /v1/scanExample response
{
  "threat_score":     0.97,
  "recommendation":  "BLOCK",
  "threat_type":     "INJECTION",
  "tokens_consumed": 0,
  "tokens_remaining": 4987154,
  "latency_ms":      0.4
}
PyPI — secra-sdk ↗·npm — @secra/sdk ↗·Full API docs in dashboard →

Simple, token-based pricing

Pay for what you scan. Tokens reset monthly. No seat fees, no setup costs.

Free
$0/mo
500K tokens/mo included
Get Started Free
MOST POPULAR
Developer
$15/mo
5M tokens/mo included
Start Developer
Pro
$49/mo
50M tokens/mo included
Start Pro
See full pricing & token calculator →