Zero code changes required

Every AI decision
your agent makes,
audited automatically.

Set one environment variable. SealVera intercepts every LLM call, logs it with cryptographic proof, and makes it defensible to regulators. No SDK integration required.

Start free See how it works
Three questions you need to be able to answer
A regulator asks: "Show us every AI-assisted loan decision your system made in Q3 — with the reasoning behind each one."
Without SealVera, this takes weeks of engineering work. With SealVera, it's a one-click report.
A customer sues, claiming your AI denied their insurance claim unfairly. Their lawyer asks for the full decision record.
Without SealVera, you have a log entry. With SealVera, you have a cryptographically signed record of exactly what the AI saw, weighed, and decided.
Your approval rate dropped 30 points overnight. You find out three weeks later — from a news story.
SealVera monitors agent behavior continuously. When something drifts, you find out in minutes — not from a headline.

Your AI is already making decisions
that will be challenged.

Loan rejections. Insurance denials. Hiring screens. Medical authorizations. Regulators, courts, and customers are demanding explanations. The question is not whether you will be asked. The question is whether you will be ready.

One env var. Complete audit coverage.

No SDK wrappers, no architecture changes, no code changes. Your agent keeps running exactly as before.

1

Set two env vars

Add two environment variables to your agent's process. That's it.

export SEALVERA_API_KEY=sv_...
export NODE_OPTIONS="--require sealvera/autoload"
2

Every LLM call is captured

SealVera intercepts OpenAI, Anthropic, and OpenRouter calls at the process level — no wrappers, no code changes, no deployment changes. Each decision is logged with the full input, output, structured reasoning, and a cryptographic signature.

3

Compliance proof, on demand

When a regulator or auditor asks for records, generate a signed compliance report in seconds. Every decision is tamper-evident from the moment it was logged.

Three layers of protection.

SealVera's proof layer operates at three levels — each one independently defensible.

Layer 01

The record itself

Every decision is logged with structured reasoning tied to actual input values. A compliance officer, judge, or regulator can read it and understand exactly what the AI saw and why it decided what it did.

Human-readable
Layer 02

Proof it was not modified

Every entry is cryptographically signed at the moment of logging. The hash chain means you cannot delete a record without detection. You can prove to any third party that the record is exactly as it was when the decision was made.

Cryptographic
Layer 03

Proof the system behaved as expected

Behavioral baselines let you demonstrate your AI operated within defined parameters over time. Drift events are logged with timestamps. You can show when the system was operating normally and when it deviated.

Behavioral

The requirements are already here.

Regulators across every major industry are establishing specific requirements for AI decision accountability. SealVera is built to meet them.

EU AI Act
Enforcement: August 2026
Providers of high-risk AI systems must keep documentation and logs for up to 10 years. Operators must explain each decision and demonstrate the system has not drifted.
Covers: decision records, retention tracking, behavioral monitoring, compliance export
SOC 2 Type II
AI Controls (emerging)
SOC 2 auditors increasingly require evidence of logging, monitoring, and access controls around AI decision systems. Tamper-evident records and continuous monitoring are becoming standard.
Covers: audit logging, anomaly detection, alert history, chain integrity verification
FINRA / SEC
Automated Decision Supervision
Financial services firms using AI must maintain supervisory records. Regulators require the ability to reconstruct any automated decision with its full context and rationale.
Covers: decision reconstruction, full input capture, cryptographic attestation, export
GDPR Article 22
Right to Explanation
Individuals have the right to an explanation for automated decisions that affect them. Organizations must provide meaningful information about the logic and consequences of automated processing.
Covers: structured reasoning trail, factor-level evidence, plain-language explanation export

The 10 requirements every production AI agent must meet.

The first open standard for AI agent accountability. Free to use, cite, and implement. Published by SealVera under CC BY 4.0.

Read the standard Download PDF
AA-01 Every decision must produce a complete record automatically
AA-03 Records must be cryptographically tamper-evident
AA-07 Anomalies must be detected before external parties report them
AA-10 Compliance reports must be on-demand, not assembled under pressure

Connects to what you already run.
No architecture changes.

OpenClaw, Node.js, Python, Go, or any OTel-instrumented system. SealVera installs alongside your existing agents in minutes.

# Install the SealVera skill
clawhub install sealvera

# Set your API key — logging starts immediately
export SEALVERA_API_KEY=sv_...
export SEALVERA_AGENT=my-agent-name

# Every LLM call your OpenClaw agent makes is now audited
# No other changes needed
# Zero-friction path — no code changes needed
npm install sealvera
export NODE_OPTIONS="--require sealvera/autoload"
export SEALVERA_API_KEY=sv_...
# Done — run your agent as normal

// Or wrap explicitly:
const SealVera = require('sealvera');
const { OpenAI } = require('openai');

SealVera.init({ endpoint: 'https://app.sealvera.com', apiKey: process.env.SEALVERA_API_KEY });
const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
const agent  = SealVera.createClient(openai, { agent: 'loan-underwriter' });

// Your existing code unchanged — every call is now a signed audit record
const result = await agent.chat.completions.create({ model: 'gpt-4o', messages });
# Connect SealVera to your existing client
import os
import openai
import sealvera

sealvera.init(endpoint="https://app.sealvera.com", api_key=os.environ["SEALVERA_API_KEY"])

# Pass your existing client — wrapped transparently
client = openai.OpenAI(api_key=os.environ["OPENAI_API_KEY"])
agent  = sealvera.create_client(client, agent="loan-underwriter")

# Your existing code unchanged
result = agent.chat.completions.create(model="gpt-4o", messages=messages)

# Anthropic with extended thinking — thinking chains captured natively
import anthropic

anthropic_client = anthropic.Anthropic(api_key=os.environ["ANTHROPIC_API_KEY"])
uw_agent = sealvera.create_client(anthropic_client, agent="underwriter")
// Go SDK — explicit wrappers per provider
import sealvera "github.com/sealvera/sealvera-go"

sealvera.Init(sealvera.Config{
    Endpoint: "https://app.sealvera.com",
    APIKey:   os.Getenv("SEALVERA_API_KEY"),
})

agent := sealvera.NewAgent("loan-underwriter")

result, err := agent.WrapOpenAI(ctx, "evaluate_application", input,
    func() (any, error) {
        return openaiClient.Chat.Completions.New(ctx, params)
    },
)
// Anthropic with extended thinking chain capture
const SealVera = require('sealvera');
const Anthropic = require('@anthropic-ai/sdk');

SealVera.init({ endpoint: 'https://app.sealvera.com', apiKey: process.env.SEALVERA_API_KEY });

const anthropic = new Anthropic({ apiKey: process.env.ANTHROPIC_API_KEY });
const agent = SealVera.createClient(anthropic, { agent: 'my-claude-agent' });

// Extended thinking chains are captured as native evidence automatically
const result = await agent.messages.create({
  model: 'claude-3-7-sonnet-20250219',
  max_tokens: 16000,
  thinking: { type: 'enabled', budget_tokens: 10000 },
  messages: [{ role: 'user', content: '...' }]
});
# No SDK required — works with any language or framework
# Point your existing OTel exporter at SealVera:

OTEL_EXPORTER_OTLP_ENDPOINT=https://app.sealvera.com/api/otel
OTEL_EXPORTER_OTLP_HEADERS="X-SealVera-Key=sv_..."

# Add these attributes to your AI decision spans:
#   ai.agent      = "my-agent-name"
#   ai.action     = "evaluate"
#   ai.decision   = "APPROVED"
#   ai.model      = "gpt-4o"
#   ai.input      = '{"amount": 25000}'
#   ai.output     = '{"decision": "APPROVED", "confidence": 0.94}'

# If you already run OTel, this is a single config change.
OpenClaw skill
clawhub install sealvera — every agent turn audited with zero code changes.
Node.js / Python / Go
OpenAI, Anthropic, OpenRouter auto-detected. TypeScript types included.
OpenTelemetry
Any language, any framework. Single endpoint config — no SDK changes.

Everything the proof layer covers.

From the moment a decision is made to the moment an auditor reviews it — SealVera covers the full chain.

Records

Structured Evidence Trail

Every decision captured with factor-level reasoning tied to actual input values. "Credit score 748 above threshold" not "the model approved it." Each factor is traceable to a specific data point. Anthropic Claude extended thinking chains are captured as native evidence automatically.

Integrity

Cryptographic Attestation + Hash Chain

RSA signature on every entry. Hash chain linking entries in sequence — deletions break the chain. Independently verifiable with the public key. Replay any past decision with original inputs to confirm the AI's reasoning holds.
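The chain idea can be shown in a minimal sketch using only Python's standard library. The record fields, genesis value, and function names below are illustrative, not SealVera's actual format:

```python
import hashlib
import json

def entry_hash(entry: dict, prev_hash: str) -> str:
    # The hash covers the record's fields plus the previous entry's hash,
    # linking entries into a chain. sort_keys makes it deterministic.
    payload = json.dumps(entry, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

def build_chain(entries):
    chain, prev = [], "GENESIS"
    for e in entries:
        h = entry_hash(e, prev)
        chain.append({"entry": e, "prev": prev, "hash": h})
        prev = h
    return chain

def verify_chain(chain) -> bool:
    prev = "GENESIS"
    for link in chain:
        if link["prev"] != prev or entry_hash(link["entry"], prev) != link["hash"]:
            return False
        prev = link["hash"]
    return True

chain = build_chain([
    {"agent": "underwriter", "decision": "APPROVED"},
    {"agent": "underwriter", "decision": "DENIED"},
    {"agent": "underwriter", "decision": "APPROVED"},
])
assert verify_chain(chain)

# Deleting (or editing) the middle record breaks verification downstream.
tampered = [chain[0], chain[2]]
assert not verify_chain(tampered)
```

Because each entry's hash depends on its predecessor, removing or editing any record invalidates every hash that follows it.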

Monitoring

Behavioral Baseline + Drift Detection

SealVera learns each agent's normal approval rates, decision patterns, confidence levels, and activity volume. When behavior shifts, you receive an alert before it becomes an incident — with the exact metrics that changed.
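The core idea can be sketched as a rolling window compared against a learned baseline. The class name, window size, and tolerance below are illustrative, not SealVera's detection logic:

```python
from collections import deque

class DriftMonitor:
    """Illustrative drift check: compare a rolling approval rate
    against a learned baseline and flag large deviations."""

    def __init__(self, baseline_rate: float, window: int = 100, tolerance: float = 0.15):
        self.baseline = baseline_rate
        self.window = deque(maxlen=window)
        self.tolerance = tolerance

    def record(self, approved: bool):
        self.window.append(1 if approved else 0)

    def drifted(self) -> bool:
        if len(self.window) < self.window.maxlen:
            return False  # not enough data yet
        rate = sum(self.window) / len(self.window)
        return abs(rate - self.baseline) > self.tolerance

monitor = DriftMonitor(baseline_rate=0.70, window=50, tolerance=0.15)
for _ in range(50):
    monitor.record(approved=True)  # sudden 100% approval rate
print(monitor.drifted())  # → True: 1.00 vs 0.70 baseline exceeds tolerance
```

A production system would track multiple signals (decision mix, confidence, volume) the same way, each with its own baseline.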

Alerts

Alert Rules + Alert History

Pre-built templates for healthcare, fintech, insurance, and HR. Custom rules for any threshold, decision value, or pattern. Every alert is logged to a persistent history so you can demonstrate anomalies were detected and acted upon.

Tracing

Multi-Agent Decision Chains

When multiple agents process the same case, SealVera links them into a single traceable chain automatically. Shared session IDs or request IDs are enough. See the full workflow with timing, models, and evidence at each step.
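Conceptually, linking amounts to grouping entries by their shared session ID and ordering each group by timestamp. The entry shape below is hypothetical:

```python
from collections import defaultdict

# Hypothetical log entries; only "session_id" and "ts" matter for linking.
entries = [
    {"session_id": "s-1", "agent": "intake",      "ts": 1},
    {"session_id": "s-2", "agent": "intake",      "ts": 1},
    {"session_id": "s-1", "agent": "underwriter", "ts": 2},
    {"session_id": "s-1", "agent": "notifier",    "ts": 3},
]

def link_chains(entries):
    # Group by shared session ID, then order each chain by timestamp.
    chains = defaultdict(list)
    for e in entries:
        chains[e["session_id"]].append(e)
    return {sid: sorted(es, key=lambda e: e["ts"]) for sid, es in chains.items()}

chains = link_chains(entries)
print([e["agent"] for e in chains["s-1"]])  # → ['intake', 'underwriter', 'notifier']
```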

Reporting

Compliance Reports + Data Export

One-click audit reports formatted for regulators and legal teams. Chain integrity verification included. Export as HTML, JSONL, or CSV. Retention status shows exactly how much coverage you have. Minutes, not weeks.

Retention

Retention Policy Tracking

The EU AI Act requires providers of high-risk AI systems to retain records for up to 10 years. SealVera tracks your coverage — oldest record, total entries, days covered — and tells you exactly where you stand.

Isolation

Private Cloud + Data Sovereignty

Enterprise customers get a dedicated instance running inside their own VPC. No data leaves their environment. Every component — server, database, keys — is isolated per customer.

Built around real compliance needs.

Start free with no time limit. Scale when your audit requirements do.

Always free
Free
$0
Evaluate the full product. No time limit, no credit card.
  • 1 agent
  • 10,000 decisions / month
  • 30-day retention
  • Full decision records
  • Cryptographic attestation
  • Compliance report export
  • Community support
Start for free
Pro
$99 /mo
For teams running production agents with real compliance exposure.
  • 10 agents
  • 500,000 decisions / month
  • 1-year retention
  • Behavioral drift detection
  • Alert rules + alert history
  • Multi-agent trace viewer
  • Email support
Get started
Enterprise
Custom
Dedicated instance, private cloud, custom retention, SLA.
  • Unlimited agents + decisions
  • 10-year retention
  • Private cloud / VPC
  • Dedicated support
  • SOC 2 Type II (in progress)
  • Bring Your Own Key (BYOK)
  • Custom SLA
Contact us

Design partner program

5 design partners at $499/mo — locked for 12 months. You shape the roadmap. We build features around your compliance workflow, your vertical, and your specific regulatory requirements. If you are in fintech, healthcare, insurance, or HR and you have AI agents making real decisions, let's talk.

Common questions.

Does it really work with no code changes?
Yes. Set NODE_OPTIONS to require the SealVera autoload script, set your API key, and run your agent exactly as before. SealVera hooks into Node.js's module system and intercepts OpenAI, Anthropic, and OpenRouter calls at the process level. Your code does not change. For OpenClaw agents, install the skill via clawhub and set two env vars. For Python, Go, or any OTel-instrumented system, SDKs and a single-endpoint config are also available if you prefer an explicit approach.
How is a proof layer different from ordinary logging?
A log tells you what happened. A proof layer tells you what happened, why it happened, and proves the record is unaltered. SealVera captures the full decision record — inputs, reasoning, outcome — and cryptographically signs it at the moment of logging. Any modification after the fact breaks the signature. The hash chain means deletions are also detectable. Together, these mean you can hand a record to a court or regulator and prove it is exactly what the AI produced, unchanged, at the claimed time.
How does tamper-evidence work?
Every log entry is SHA-256 hashed and RSA-signed. The hash covers the input data, output, reasoning steps, agent name, and timestamp. If anyone modifies any of those fields after logging — even a single character — signature verification fails. The public key is available at /api/public-key for independent verification by any third party.
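The mechanism can be sketched in a few lines. This sketch is an illustrative stand-in only: it uses a symmetric HMAC where SealVera uses asymmetric RSA signatures (which let third parties verify with the public key), and the key and field names are hypothetical:

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key"  # stand-in: real deployments use asymmetric RSA keys

def sign_entry(entry: dict) -> str:
    # The digest covers every field; sort_keys makes it deterministic.
    payload = json.dumps(entry, sort_keys=True).encode()
    return hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()

def verify_entry(entry: dict, signature: str) -> bool:
    return hmac.compare_digest(sign_entry(entry), signature)

entry = {"agent": "underwriter", "input": {"amount": 25000}, "decision": "APPROVED"}
sig = sign_entry(entry)
assert verify_entry(entry, sig)

# Changing even a single character invalidates the signature.
entry["decision"] = "DENIED"
print(verify_entry(entry, sig))  # → False
```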
How do you handle long-term retention requirements?
SealVera tracks your retention coverage in the dashboard — oldest record date, total entries, days covered, and whether your current configuration meets your regulatory threshold. Enterprise plans support custom retention policies up to indefinite. The sooner you start logging, the more coverage you have by the time enforcement begins in August 2026.
Where is my data stored, and who can access it?
Decision logs are stored in SealVera's infrastructure with encryption at rest and in transit. For Enterprise customers, we offer private cloud deployment — your data never leaves your VPC. Every component, including the database and signing keys, is isolated per customer. We never sell or use your data for model training.
One env var away

Your agents are already making decisions.
Start auditing them today.

Set one environment variable. Every LLM call your agents make is logged, signed, and ready for any regulator who asks.

Start free Read the docs

Free tier available with no time limit  ·  No credit card required  ·  EU AI Act ready