Set two environment variables. SealVera intercepts every LLM call, logs it with cryptographic proof, and makes it defensible to regulators. No SDK integration required.
Loan rejections. Insurance denials. Hiring screens. Medical authorizations. Regulators, courts, and customers are demanding explanations. The question is not whether you will be asked. The question is whether you will be ready.
No SDK wrappers, no architecture changes, no code changes.
Add two environment variables to your agent's process. That's it. Your agent keeps running exactly as before.
export SEALVERA_API_KEY=sv_...
export NODE_OPTIONS="--require sealvera/autoload"
SealVera intercepts OpenAI, Anthropic, and OpenRouter calls at the process level — no wrappers, no code changes, no deployment changes. Each decision is logged with the full input, output, structured reasoning, and a cryptographic signature.
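Process-level interception can be pictured as a preload script that wraps the runtime's HTTP entry point before any application code loads. The sketch below is purely illustrative — `wrapFetch` and the audit-log shape are assumptions, not part of the SealVera SDK:

```javascript
// Illustrative sketch of how a --require preload can observe LLM calls
// at the process level without touching application code.
function wrapFetch(fetchImpl, auditLog) {
  return async (url, options = {}) => {
    // Only record calls to known LLM provider endpoints.
    const isLLMCall = /api\.(openai|anthropic)\.com|openrouter\.ai/.test(String(url));
    const response = await fetchImpl(url, options);
    if (isLLMCall) {
      auditLog.push({
        timestamp: new Date().toISOString(),
        url: String(url),
        body: options.body ?? null,
      });
    }
    return response;
  };
}

// A preload script would install it once, before any app code runs:
// global.fetch = wrapFetch(global.fetch, auditLog);
```

Because the wrapper sits below the SDKs, any client library that ultimately goes through the patched transport is covered.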
When a regulator or auditor asks for records, generate a signed compliance report in seconds. Every decision is tamper-evident from the moment it was logged.
SealVera's proof layer operates at three levels — each one independently defensible.
Human-readable
Every decision is logged with structured reasoning tied to actual input values. A compliance officer, judge, or regulator can read it and understand exactly what the AI saw and why it decided what it did.

Cryptographic
Every entry is cryptographically signed at the moment of logging. The hash chain means you cannot delete a record without detection. You can prove to any third party that the record is exactly as it was when the decision was made.

Behavioral
Behavioral baselines let you demonstrate your AI operated within defined parameters over time. Drift events are logged with timestamps. You can show when the system was operating normally and when it deviated.

Regulators across every major industry are establishing specific requirements for AI decision accountability. SealVera is built to meet them.
The first open standard for AI agent accountability. Free to use, cite, and implement. Published by SealVera under CC BY 4.0.
OpenClaw, Node.js, Python, Go, or any OTel-instrumented system. SealVera installs alongside your existing agents in minutes.
# Install the SealVera skill
clawhub install sealvera

# Set your API key — logging starts immediately
export SEALVERA_API_KEY=sv_...
export SEALVERA_AGENT=my-agent-name

# Every LLM call your OpenClaw agent makes is now audited
# No other changes needed
# Zero-friction path — no code changes needed
npm install sealvera
export NODE_OPTIONS="--require node_modules/sealvera/scripts/autoload.js"
export SEALVERA_API_KEY=sv_...
# Done — run your agent as normal

# Or wrap explicitly:
const SealVera = require('sealvera');
const { OpenAI } = require('openai');

SealVera.init({
  endpoint: 'https://app.sealvera.com',
  apiKey: process.env.SEALVERA_API_KEY
});

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
const agent = SealVera.createClient(openai, { agent: 'loan-underwriter' });

// Your existing code unchanged — every call is now a signed audit record
const result = await agent.chat.completions.create({ model: 'gpt-4o', messages });
# Connect SealVera to your existing client
import os

import anthropic
import openai
import sealvera

sealvera.init(endpoint="https://app.sealvera.com", api_key=os.environ["SEALVERA_API_KEY"])

# Pass your existing client — wrapped transparently
client = openai.OpenAI(api_key=os.environ["OPENAI_API_KEY"])
agent = sealvera.create_client(client, agent="loan-underwriter")

# Your existing code unchanged
result = agent.chat.completions.create(model="gpt-4o", messages=messages)

# Anthropic with extended thinking — thinking chains captured natively
anthropic_client = anthropic.Anthropic(api_key=os.environ["ANTHROPIC_API_KEY"])
uw_agent = sealvera.create_client(anthropic_client, agent="underwriter")
// Go SDK — explicit wrappers per provider
import sealvera "github.com/sealvera/sealvera-go"

sealvera.Init(sealvera.Config{
    Endpoint: "https://app.sealvera.com",
    APIKey:   os.Getenv("SEALVERA_API_KEY"),
})

agent := sealvera.NewAgent("loan-underwriter")

result, err := agent.WrapOpenAI(ctx, "evaluate_application", input,
    func() (any, error) {
        return openaiClient.Chat.Completions.New(ctx, params)
    },
)
// Anthropic with extended thinking chain capture
const SealVera = require('sealvera');
const Anthropic = require('@anthropic-ai/sdk');

SealVera.init({
  endpoint: 'https://app.sealvera.com',
  apiKey: process.env.SEALVERA_API_KEY
});

const anthropic = new Anthropic({ apiKey: process.env.ANTHROPIC_API_KEY });
const agent = SealVera.createClient(anthropic, { agent: 'my-claude-agent' });

// Extended thinking chains are captured as native evidence automatically
const result = await agent.messages.create({
  model: 'claude-3-7-sonnet-20250219',
  max_tokens: 16000,
  thinking: { type: 'enabled', budget_tokens: 10000 },
  messages: [{ role: 'user', content: '...' }]
});
# No SDK required — works with any language or framework
# Point your existing OTel exporter at SealVera:
OTEL_EXPORTER_OTLP_ENDPOINT=https://app.sealvera.com/api/otel
OTEL_EXPORTER_OTLP_HEADERS="X-SealVera-Key=sv_..."

# Add these attributes to your AI decision spans:
# ai.agent    = "my-agent-name"
# ai.action   = "evaluate"
# ai.decision = "APPROVED"
# ai.model    = "gpt-4o"
# ai.input    = '{"amount": 25000}'
# ai.output   = '{"decision": "APPROVED", "confidence": 0.94}'

# If you already run OTel, this is a single config change.
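The attribute list above can be assembled in code before handing it to whatever OTel SDK you already use. This helper is hypothetical — only the `ai.*` attribute names come from the list above:

```javascript
// Build the attribute map for an AI decision span.
// Inputs and outputs are serialized to JSON strings, matching the
// string-valued span attributes shown above.
function decisionSpanAttributes({ agent, action, decision, model, input, output }) {
  return {
    'ai.agent': agent,
    'ai.action': action,
    'ai.decision': decision,
    'ai.model': model,
    'ai.input': JSON.stringify(input),
    'ai.output': JSON.stringify(output),
  };
}
```

You would attach this map to the span via your SDK's set-attributes call at the moment the decision is recorded.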
From the moment a decision is made to the moment an auditor reviews it — SealVera covers the full chain.
Every decision captured with factor-level reasoning tied to actual input values. "Credit score 748 above threshold" not "the model approved it." Each factor is traceable to a specific data point. Anthropic Claude extended thinking chains are captured as native evidence automatically.
RSA signature on every entry. Hash chain linking entries in sequence — deletions break the chain. Independently verifiable with the public key. Replay any past decision with original inputs to confirm the AI's reasoning holds.
SealVera learns each agent's normal approval rates, decision patterns, confidence levels, and activity volume. When behavior shifts, you receive an alert before it becomes an incident — with the exact metrics that changed.
Pre-built templates for healthcare, fintech, insurance, and HR. Custom rules for any threshold, decision value, or pattern. Every alert is logged to a persistent history so you can demonstrate anomalies were detected and acted upon.
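As a toy illustration of the baseline idea — SealVera's actual statistics are not published here, and the tolerance threshold below is an assumption — drift detection reduces to comparing a recent metric against its learned band:

```javascript
// Flag a drift event when the recent approval rate deviates from the
// learned baseline by more than the tolerance. Returns the exact
// metrics that changed, as an alert payload would.
function driftAlert(baselineRate, recentDecisions, tolerance = 0.15) {
  const approvals = recentDecisions.filter(d => d === 'APPROVED').length;
  const recentRate = approvals / recentDecisions.length;
  const delta = Math.abs(recentRate - baselineRate);
  return { drifted: delta > tolerance, baselineRate, recentRate, delta };
}
```

A real system would track several such metrics (confidence, volume, decision mix) and persist each alert for the audit history.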
When multiple agents process the same case, SealVera links them into a single traceable chain automatically. Shared session IDs or request IDs are enough. See the full workflow with timing, models, and evidence at each step.
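Conceptually, the linking is a group-by on the shared identifier. The field names here (`sessionId`, `timestamp`) are illustrative, not SealVera's schema:

```javascript
// Group audit entries that share a session ID and order each group by
// time, reconstructing a multi-agent workflow as one traceable chain.
function linkDecisionChain(entries) {
  const bySession = new Map();
  for (const e of entries) {
    if (!bySession.has(e.sessionId)) bySession.set(e.sessionId, []);
    bySession.get(e.sessionId).push(e);
  }
  for (const steps of bySession.values()) {
    steps.sort((a, b) => a.timestamp - b.timestamp);
  }
  return bySession;
}
```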
One-click audit reports formatted for regulators and legal teams. Chain integrity verification included. Export as HTML, JSONL, or CSV. Retention status shows exactly how much coverage you have. Minutes, not weeks.
EU AI Act Article 12 requires 10-year retention for high-risk AI decisions. SealVera tracks your coverage — oldest record, total entries, days covered — and tells you exactly where you stand.
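The coverage arithmetic is simple enough to sketch. Field names are illustrative, and the 10-year requirement below is the figure cited above:

```javascript
// Report retention coverage from the oldest logged record: days
// covered so far, and progress toward the 10-year retention target.
function retentionStatus(oldestRecordISO, totalEntries, now = Date.now()) {
  const MS_PER_DAY = 86400000;
  const daysCovered = Math.floor((now - Date.parse(oldestRecordISO)) / MS_PER_DAY);
  const requiredDays = 10 * 365; // the 10-year figure, ignoring leap days
  return {
    oldestRecord: oldestRecordISO,
    totalEntries,
    daysCovered,
    percentOfRequirement: Math.min(100, Math.round((daysCovered / requiredDays) * 100)),
  };
}
```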
Enterprise customers get a dedicated instance running inside their own VPC. No data leaves their environment. Every component — server, database, keys — is isolated per customer.
Start free with no time limit. Scale when your audit requirements do.
5 design partners at $499/mo — locked for 12 months. You shape the roadmap. We build features around your compliance workflow, your vertical, and your specific regulatory requirements. If you are in fintech, healthcare, insurance, or HR and you have AI agents making real decisions, let's talk.
/api/public-key for independent verification by any third party.

Set two environment variables. Every LLM call your agents make is logged, signed, and ready for any regulator who asks.
Free tier available with no time limit · No credit card required · EU AI Act ready