OpenClaw Integration
The fastest path to SealVera. Install the skill once — every agent you run on OpenClaw is audited automatically.
Install the skill
clawhub install sealvera
Configure
export SEALVERA_API_KEY=sv_...
export SEALVERA_AGENT=my-agent-name # shown in dashboard
export SEALVERA_ENDPOINT=https://app.sealvera.com # default
That's it
Every LLM call your OpenClaw agent makes is now intercepted, logged, cryptographically signed, and visible in your dashboard. No code changes. No wrappers. No restarts.
Optional: verify setup
node ~/.openclaw/skills/sealvera/scripts/setup.js
node ~/.openclaw/skills/sealvera/scripts/status.js
Manual skill install (without clawhub)
cp -r /path/to/sealvera-skill ~/.openclaw/skills/sealvera
The skill sets `NODE_OPTIONS=--require autoload.js` under the hood. If you are running a Node.js agent outside OpenClaw, you can use the same autoload file directly — see the Zero-Friction section below.
Quick Start
Your first logged, signed, and explainable AI decision in under 5 minutes.
Zero-Friction Path
No SDK integration required. Works with any existing Node.js agent.
Install
npm install sealvera
Set env vars
export SEALVERA_API_KEY=sv_...
export SEALVERA_AGENT=my-agent-name
export NODE_OPTIONS="--require node_modules/sealvera/scripts/autoload.js"
Run your agent as normal
node my-agent.js
# [SealVera] autoload: active — agent="my-agent-name" endpoint="https://app.sealvera.com"
Every OpenAI, Anthropic, and OpenRouter call is now intercepted and logged.
SDK Quick Start
For full control and native evidence steps, integrate the SDK directly.
Create your account
Go to app.sealvera.com/signup. Enter your company name, email, and password — your account, org, and first API key are created instantly.
Copy your API key
After signup, your API key is shown once. It starts with sv_. Copy it now — the full value won't be shown again. You can generate additional keys anytime from Settings → API Keys.
Install
npm install sealvera
pip install sealvera
go get github.com/sealvera/sealvera-go
# No install needed — configure your existing OTel exporter:
OTEL_EXPORTER_OTLP_ENDPOINT=https://app.sealvera.com/api/otel
OTEL_EXPORTER_OTLP_HEADERS="X-SealVera-Key=sv_..."
# No install needed — POST directly to the ingest endpoint
Add two lines to your agent
const SealVera = require('sealvera');
SealVera.init({ endpoint: 'https://app.sealvera.com', apiKey: 'sv_...' });
// Pass your existing client — SealVera detects the SDK automatically
const agent = SealVera.createClient(new OpenAI(), { agent: 'my-agent' });
// Use it exactly like the original client — every call is now logged
await agent.chat.completions.create({ ... });
// Multi-agent: one createClient per agent, mix SDKs freely
const fraudAgent = SealVera.createClient(new OpenAI(), { agent: 'fraud-screener' });
const uwAgent = SealVera.createClient(new Anthropic(), { agent: 'underwriter' });
// Add session_id to inputs → linked as a trace automatically
import openai
import sealvera
sealvera.init(endpoint="https://app.sealvera.com", api_key="sv_...")
# Pass your already-configured client — sealvera detects the SDK automatically
openai_client = openai.OpenAI(api_key="sk-...")
agent = sealvera.create_client(openai_client, agent="my-agent")
# Use exactly like the original — every call is now logged
response = agent.chat.completions.create(model="gpt-4o", messages=[...])
# Works for Anthropic too — SDK detected automatically
import anthropic
anthropic_client = anthropic.Anthropic(api_key="sk-ant-...")
uw_agent = sealvera.create_client(anthropic_client, agent="underwriter")
import sealvera "github.com/sealvera/sealvera-go"
// Init once
sealvera.Init(sealvera.Config{
Endpoint: "https://app.sealvera.com",
APIKey: "sv_...",
})
// Create one Agent per logical agent in your application
fraudAgent := sealvera.NewAgent("fraud-screener")
uwAgent := sealvera.NewAgent("loan-underwriter")
// Wrap your LLM call — provider specified per call
result, err := fraudAgent.WrapOpenAI(ctx, "screen_application", input,
func() (any, error) {
return openaiClient.Chat.Completions.New(ctx, params)
},
)
result, err := uwAgent.WrapAnthropic(ctx, "evaluate", input,
func() (any, error) {
return anthropicClient.Messages.New(ctx, params)
},
)
const SealVera = require('sealvera');
const Anthropic = require('@anthropic-ai/sdk');
SealVera.init({ endpoint: 'https://app.sealvera.com', apiKey: 'sv_...' });
// Pass your configured Anthropic client — SDK detected automatically
const anthropic = new Anthropic({ apiKey: process.env.ANTHROPIC_API_KEY });
const agent = SealVera.createClient(anthropic, { agent: 'my-agent' });
// When extended thinking is enabled, Claude's reasoning chain is captured
// automatically as native evidence — no prompt changes needed.
const response = await agent.messages.create({ ... });
const SealVera = require('sealvera');
const OpenAI = require('openai');
SealVera.init({ endpoint: 'https://app.sealvera.com', apiKey: 'sv_...' });
// OpenRouter baseURL is detected — any model, Claude thinking auto-detected
const openrouter = new OpenAI({
baseURL: 'https://openrouter.ai/api/v1',
apiKey: process.env.OPENROUTER_API_KEY,
});
const agent = SealVera.createClient(openrouter, { agent: 'my-agent' });
// response.model logged per entry — GPT-4o, Claude, Llama, etc.
# Set these env vars — no code changes required.
OTEL_EXPORTER_OTLP_ENDPOINT=https://app.sealvera.com/api/otel
OTEL_EXPORTER_OTLP_HEADERS="X-SealVera-Key=sv_..."
# Add these attributes to your AI decision spans:
# ai.agent = "my-agent"
# ai.decision = "APPROVED"
# ai.input = '{"amount": 25000}'
# ai.output = '{"decision": "APPROVED", "confidence": 0.94}'
const SealVeraOpenClaw = require('./skills/sealvera/sealvera-openclaw');
const sealvera = new SealVeraOpenClaw({
endpoint: 'https://app.sealvera.com',
apiKey: 'sv_...',
agent: 'my-agent'
});
// After every agent turn:
await sealvera.captureAgentTurn({
action: 'respond',
decision: 'RESPONDED',
input: { message: userMsg },
output: { response: reply }
});
curl -X POST https://app.sealvera.com/api/ingest \
-H "X-SealVera-Key: sv_..." \
-H "Content-Type: application/json" \
-d '{"agent":"my-agent","action":"evaluate","decision":"APPROVED",
"input":{"amount":25000},"output":{"decision":"APPROVED","confidence":0.94}}'
See your first decision
Open your dashboard. Within seconds of your agent making a call, the decision appears in the log. Click to expand the evidence trail and verify the cryptographic attestation.
Integration Paths
Pass your existing client to SealVera.createClient(). It detects which SDK you're using and handles the rest — no configuration, no choosing between patch functions.
| You import | You pass | What SealVera detects |
|---|---|---|
| `openai` | `new OpenAI()` | OpenAI — patches `chat.completions.create` |
| `@anthropic-ai/sdk` | `new Anthropic()` | Anthropic — patches `messages.create`, extracts thinking blocks natively |
| `openai` with OpenRouter base URL | `new OpenAI({ baseURL: 'https://openrouter.ai/api/v1' })` | OpenRouter — any model, auto-detects Claude thinking blocks |
SealVera.createClient(yourClient, { agent: 'name' }) is the only call you need to know. It fingerprints the client and applies the right interceptor automatically.
createClient — one function, any SDK
Pass your client. SealVera detects whether it's OpenAI, Anthropic, or OpenRouter and applies the right interceptor automatically. Returns a proxy that behaves identically to the original — use it exactly as you would the underlying client.
SealVera.init({ endpoint: 'https://app.sealvera.com', apiKey: 'sv_...' });
// OpenAI — detected automatically
const agent = SealVera.createClient(new OpenAI(), { agent: 'fraud-detector' });
// Anthropic — detected automatically, thinking blocks extracted natively
const agent = SealVera.createClient(new Anthropic(), { agent: 'claims-processor' });
// OpenRouter — detected from baseURL, any model, Claude thinking auto-detected
const agent = SealVera.createClient(
new OpenAI({ baseURL: 'https://openrouter.ai/api/v1', apiKey: 'sk-or-...' }),
{ agent: 'underwriter' }
);
// Multi-agent — one client per agent, same pattern regardless of SDK
const fraudAgent = SealVera.createClient(new OpenAI(), { agent: 'fraud-screener' });
const uwAgent = SealVera.createClient(new Anthropic(), { agent: 'loan-underwriter' });
// Mix SDKs freely — each agent is independent
Add session_id to your inputs and agents are linked into a trace automatically. See Traces.
What auto-detection handles per SDK
- OpenAI — intercepts `chat.completions.create`. Extracts `reasoning_steps` from JSON responses as native evidence.
- Anthropic — intercepts `messages.create`. Extended thinking blocks are captured verbatim as native evidence — no prompt changes. The model's own reasoning is the audit record.
- OpenRouter — intercepts `chat.completions.create` and logs `response.model` so you know which model made each decision. Auto-detects Claude thinking blocks in responses.
Python — create_client
Same pattern as Node.js. Pass your configured client — the SDK type is detected automatically.
import openai, anthropic, sealvera
sealvera.init(endpoint="https://app.sealvera.com", api_key="sv_...")
# Your existing configured client — API key stays with you
openai_client = openai.OpenAI(api_key=os.environ["OPENAI_API_KEY"])
fraud_agent = sealvera.create_client(openai_client, agent="fraud-screener")
# Anthropic — detected automatically, thinking blocks extracted natively
anthropic_client = anthropic.Anthropic(api_key=os.environ["ANTHROPIC_API_KEY"])
uw_agent = sealvera.create_client(anthropic_client, agent="loan-underwriter")
# Use exactly like the original — every call is logged
response = fraud_agent.chat.completions.create(model="gpt-4o", messages=[...])
response = uw_agent.messages.create(model="claude-3-5-sonnet", messages=[...])
Go — NewAgent
Go can't monkey-patch or detect types at runtime. Instead, create one Agent per logical agent in your application, then call its typed wrap methods. The pattern maps directly to createClient in JS and Python — one handle per agent, scoped logging, nothing shared.
sealvera.Init(sealvera.Config{
Endpoint: "https://app.sealvera.com",
APIKey: "sv_...",
})
// One Agent per logical agent — equivalent to createClient() in JS/Python
fraudAgent := sealvera.NewAgent("fraud-screener")
uwAgent := sealvera.NewAgent("loan-underwriter")
// Wrap your LLM call — provider declared per call
result, err := fraudAgent.WrapOpenAI(ctx, "screen_application", input,
func() (any, error) { return openaiClient.Chat.Completions.New(ctx, params) },
)
result, err := uwAgent.WrapAnthropic(ctx, "evaluate_application", input,
func() (any, error) { return anthropicClient.Messages.New(ctx, params) },
)
// Add session_id to input → trace links automatically, same as JS/Python
input := map[string]any{"applicant_id": "APP-001", "session_id": caseID}
Evidence Trail
When a regulator asks "why did your agent deny this claim?", this is your answer. Every logged decision gets a Structured Evidence Trail — the factors, values, signals, and explanations that drove the outcome.
Three modes — from automatic to fully authoritative
| | Auto-extract | Auto-reasoning (default) | Native (compliance-grade) |
|---|---|---|---|
| Setup | Zero | Zero — on by default | Add reasoning_steps to your prompt |
| Dashboard badge | SealVera inferred | Agent-provided | Agent-provided |
| How it works | SealVera reconstructs from output | SealVera injects instruction into system prompt | Agent returns steps natively |
| Use when | Auto-reasoning disabled | Most production agents — default | Highest-stakes decisions, custom schemas |
Auto-reasoning (default — no config required)
SealVera automatically injects a compact instruction into your system prompt asking the model to return structured reasoning_steps. The injection is smart — it detects whether your prompt already handles it and does nothing if so.
| Scenario | What SealVera does |
|---|---|
| System prompt contains `reasoning_steps` | Nothing — you have it covered |
| System prompt exists but no `reasoning_steps` | Appends instruction at end |
| No system prompt | Injects a minimal system message |
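The decision logic in the table above can be sketched as a small helper. This is illustrative only — `planReasoningInjection` is a hypothetical name, not part of the SealVera SDK:

```javascript
// Sketch of the auto-reasoning injection decision (hypothetical helper,
// not the SDK's actual internals): inspect the request's messages and
// decide what to do before the call is forwarded to the provider.
function planReasoningInjection(messages) {
  const system = messages.find((m) => m.role === 'system');
  if (!system) return { action: 'inject-system-message' };
  if (system.content.includes('reasoning_steps')) return { action: 'none' };
  return { action: 'append-instruction' };
}

// No system prompt at all: a minimal system message would be injected.
planReasoningInjection([{ role: 'user', content: 'hi' }]);
// → { action: 'inject-system-message' }

// Prompt already asks for reasoning_steps: nothing to do.
planReasoningInjection([
  { role: 'system', content: 'You are an underwriter. Include "reasoning_steps" in your JSON.' },
]);
// → { action: 'none' }
```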
To disable (if you need full control over your prompt):
// Node.js
SealVera.init({ endpoint: '...', apiKey: '...', autoReasoning: false });
# Python
sealvera.init(endpoint="...", api_key="...", autoReasoning=False)
Native steps — the highest-fidelity approach
For maximum control — especially when you have a custom evidence schema or need specific field names — add reasoning_steps to your system prompt directly. SealVera detects it and skips the injection.
Add this to your system prompt:
Include "reasoning_steps" in your response JSON:
[{
"factor": "field_name",
"value": "actual_value",
"signal": "risk" or "safe",
"explanation": "one sentence"
}]
Example output:
{
"decision": "DENIED",
"reasoning_steps": [
{
"factor": "claim_amount",
"value": "$14,200",
"signal": "risk",
"explanation": "Exceeds $10,000 threshold for automatic review"
},
{
"factor": "prior_history",
"value": "clean",
"signal": "safe",
"explanation": "No prior claims in 5 years"
}
]
}
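If you emit native steps from your own agent, a minimal shape check can catch malformed output before it reaches the audit trail. This validator is an illustrative sketch against the schema shown above, not a SealVera SDK function:

```javascript
// Minimal shape check for the reasoning_steps schema shown above
// (illustrative helper, not part of the SealVera SDK).
function validateReasoningSteps(steps) {
  if (!Array.isArray(steps) || steps.length === 0) return false;
  return steps.every(
    (s) =>
      typeof s.factor === 'string' &&
      typeof s.value === 'string' &&
      (s.signal === 'risk' || s.signal === 'safe') &&
      typeof s.explanation === 'string'
  );
}

validateReasoningSteps([
  { factor: 'claim_amount', value: '$14,200', signal: 'risk',
    explanation: 'Exceeds $10,000 threshold for automatic review' },
]); // → true

// Unknown signal value fails the check.
validateReasoningSteps([
  { factor: 'x', value: 'y', signal: 'maybe', explanation: 'z' },
]); // → false
```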
Anthropic extended thinking: enable it on your request (`thinking: { type: 'enabled' }`) and Claude's thought process is extracted automatically as native evidence, with no prompt engineering needed. Use `createClient` with an Anthropic client and it just works.
Alert Rules
Alert rules catch problems before they become incidents — a denial streak, a high-value rejection, a volume spike. Start with templates. Build custom rules only if no template fits.
Templates by vertical
| Vertical | Templates available |
|---|---|
| Healthcare | High-value claim denied, prior auth denial streak, low-confidence decision, after-hours activity |
| Fintech | Large loan rejected, fraud flag spike, consecutive rejections, high-value payment denied |
| HR | Candidate decline streak, screening volume spike |
| General | Any flagged decision, rate spike |
Example: "Alert when any agent denies a claim over $10,000." Select the template, set the threshold, pick email or Slack — done. No field paths, no query syntax.
Custom rules
When no template fits, the Custom Rule builder gives you 5 condition types:
| Type | What it checks |
|---|---|
| `field_threshold` | A numeric field exceeds a value — e.g. `input.amount > 10000` |
| `decision_value` | Decision equals a specific value — e.g. `DENIED` |
| `consecutive_rejections` | N rejections in a row, optionally for a specific decision value |
| `rate_spike` | N decisions within M minutes |
| `time_window` | Decisions occurring between hour X and hour Y |
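To make the condition types concrete, here is a sketch of how three of them could be evaluated against a log entry. The `rule.condition` field names (`path`, `gt`, `equals`, `count`) are hypothetical, chosen for illustration; the actual rule builder defines its own schema:

```javascript
// Illustrative rule evaluation (not SealVera's actual rule engine).
// `recentDecisions` is the agent's recent decision history, newest last.
function matchesRule(rule, entry, recentDecisions = []) {
  const c = rule.condition;
  switch (c.type) {
    case 'field_threshold': {
      // e.g. { type: 'field_threshold', path: 'input.amount', gt: 10000 }
      const value = c.path.split('.').reduce((o, k) => (o ? o[k] : undefined), entry);
      return typeof value === 'number' && value > c.gt;
    }
    case 'decision_value':
      return entry.decision === c.equals;
    case 'consecutive_rejections': {
      // Last N decisions all match the rejection value (default DENIED).
      const last = recentDecisions.slice(-c.count);
      return last.length === c.count && last.every((d) => d === (c.decision ?? 'DENIED'));
    }
    default:
      return false;
  }
}

matchesRule(
  { condition: { type: 'field_threshold', path: 'input.amount', gt: 10000 } },
  { input: { amount: 25000 }, decision: 'DENIED' }
); // → true
```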
Alert channels
Console output is always active. Add any of:
- Email — Settings → Email
- Slack — paste your webhook URL in Settings → Slack
- Generic webhook — any HTTP endpoint; payload includes full alert context (works with PagerDuty, Teams, etc.)
Test all channels at once: Settings → Send Test Alert.
Behavioral Baseline
SealVera learns your agent's normal patterns and alerts when behavior drifts — without you writing a single rule. This is what catches regressions, model changes, and data drift before they become incidents.
Maturity levels
Shown as a colored dot next to each agent. Don't act on Risk tab alerts until your agent reaches the mature level.
| Level | Criteria | What it means |
|---|---|---|
| learning (amber) | < 10 decisions, < 2 days | Conservative thresholds. False positives are expected. Don't act on alerts yet. |
| developing (yellow) | 10–50 decisions, building hourly profiles | Alerts are directional. Treat as signals, not facts. |
| mature (green) | 50+ decisions, 5+ distinct days | Full hourly-aware detection. Trust these alerts. |
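The thresholds in the table reduce to a simple classification. This is a rough sketch of one reading of those criteria (the dashboard computes this server-side; the function name is hypothetical):

```javascript
// Maturity-level thresholds from the table, as an illustrative sketch.
function maturityLevel(decisionCount, distinctDays) {
  if (decisionCount >= 50 && distinctDays >= 5) return 'mature';
  if (decisionCount >= 10) return 'developing';
  return 'learning';
}

maturityLevel(8, 1);   // → 'learning'
maturityLevel(30, 3);  // → 'developing'
maturityLevel(120, 7); // → 'mature'
```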
What it detects
- Approval rate shift — > 15 percentage points from baseline for that hour
- Confidence drop — > 2 standard deviations below normal confidence for that hour
- Rate anomaly — > 3× or < 0.2× expected volume for that hour
- Distribution shift — APPROVED/DENIED mix changes significantly from baseline
- Silence — agent goes quiet during normally active hours
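Two of these checks are simple arithmetic against the hourly baseline. The sketch below shows the math only (illustrative, not the production detector; baseline values are assumed inputs):

```javascript
// Approval-rate shift, measured in percentage points against the
// baseline for that hour.
function approvalRateShift(baselineRate, currentRate) {
  return Math.abs(currentRate - baselineRate) * 100;
}

// Confidence drop, measured in standard deviations below the hourly
// baseline (positive result = drop).
function confidenceZ(baselineMean, baselineStd, currentMean) {
  return (baselineMean - currentMean) / baselineStd;
}

// Baseline 80% approvals, current hour 60%: a 20pp shift, over the 15pp threshold.
approvalRateShift(0.8, 0.6) > 15; // → true
// Baseline confidence 0.9 with std 0.03, current 0.8: over the 2-sigma threshold.
confidenceZ(0.9, 0.03, 0.8) > 2; // → true
```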
Drift alerts appear in the Risk tab with the drift type, magnitude, and baseline comparison. Acknowledge once investigated — they won't re-fire for the same event.
Traces
When a claim goes through fraud-check → underwriting → approval, you want to see it as one flow — not three disconnected log entries. Traces group related decisions together. Most of the time, you don't need to do anything.
Auto-detection — the default, no configuration needed
If your inputs contain any of these fields, SealVera automatically links related decisions into a trace. No setup required — it happens on ingest.
`session_id` · `request_id` · `correlation_id` · `conversation_id` · `workflow_id` · `job_id`
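Conceptually, the ingest side just scans the input for the first recognized correlation field. A sketch of that idea (illustrative only, not the actual server code):

```javascript
// Correlation fields recognized on ingest, in priority order.
const TRACE_FIELDS = [
  'session_id', 'request_id', 'correlation_id',
  'conversation_id', 'workflow_id', 'job_id',
];

// Return the first recognized field present in the input, or null
// if the entry carries no correlation field and stays unlinked.
function detectTraceKey(input) {
  for (const field of TRACE_FIELDS) {
    if (input && input[field] != null) return { field, value: input[field] };
  }
  return null;
}

detectTraceKey({ applicant_id: 'APP-001', session_id: 'case-42' });
// → { field: 'session_id', value: 'case-42' }
detectTraceKey({ applicant_id: 'APP-001' });
// → null
```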
The three agents below have no knowledge of each other. The shared session_id in their inputs is the only link — SealVera finds it automatically whether you use createClient, direct ingest, or any other method.
SealVera.init({ endpoint: 'https://app.sealvera.com', apiKey: 'sv_...' });
const fraudAgent = SealVera.createClient(new OpenAI(), { agent: 'fraud-screener' });
const uwAgent = SealVera.createClient(new OpenAI(), { agent: 'loan-underwriter' });
const committeeAgent = SealVera.createClient(new OpenAI(), { agent: 'approval-committee' });
const SESSION_ID = `case-${application.id}`; // or use your existing case/order/claim ID
await fraudAgent.chat.completions.create({
messages: [{ role: 'user', content: JSON.stringify({ ...application, session_id: SESSION_ID }) }]
});
await uwAgent.chat.completions.create({
messages: [{ role: 'user', content: JSON.stringify({ ...application, session_id: SESSION_ID }) }]
});
await committeeAgent.chat.completions.create({
messages: [{ role: 'user', content: JSON.stringify({ ...application, session_id: SESSION_ID }) }]
});
// Dashboard → Traces tab: all three agents appear as one chain.
Explicit traceId — when inputs share no common field
Pass traceId directly in the ingest payload. Use this when agents don't share any business ID in their inputs.
// Direct ingest with explicit traceId
await fetch('https://app.sealvera.com/api/ingest', {
method: 'POST',
headers: { 'X-SealVera-Key': 'sv_...' },
body: JSON.stringify({
agent: 'fraud-screener', action: 'screen', decision: 'CLEAR',
input: { ... }, output: { ... },
traceId: 'claim-C9182' // explicit — guaranteed grouping
})
});
When you don't need traces at all
- Single-agent deployments — nothing to group
- Agents that already pass a business ID in inputs — auto-detection handles it without any code change
The dashboard labels each trace's linking confidence: explicit (`traceId` set directly) vs. auto · session_id (auto-detected from an input field).
Chain Integrity + Completeness
Cryptographic proof that your audit log hasn't been tampered with — and a mechanism to prove that every decision was actually logged in the first place.
What chain integrity proves
- Every log entry is linked to the previous via SHA-256 hash chain
- Any deletion or modification after logging breaks the chain — detectable
- Sequence gaps surface automatically in Settings → Chain Integrity
Verifying the chain
Dashboard: gear icon → Chain Integrity → Verify. Or via API:
GET /api/agents/:name/chain-verify
// Response:
{
"ok": true,
"totalEntries": 1482,
"gaps": 0,
"brokenLinks": 0
}
SDK Completeness — proving nothing was skipped
Send a completeness heartbeat from your application periodically. This lets SealVera compare how many calls your SDK intercepted vs. how many were successfully logged — surfacing any gaps.
// Send periodically from your application
const { intercepted, logged } = SealVera.getCompletenessStats();
await fetch('https://app.sealvera.com/api/completeness-report', {
method: 'POST',
headers: {
'X-SealVera-Key': 'sv_...',
'Content-Type': 'application/json'
},
body: JSON.stringify({ agent: 'my-agent', intercepted, logged })
});
// < 2% discrepancy = ok · 2-10% = warning · > 10% = gap detected
Check results: gear icon → Chain Integrity → SDK Completeness section. A gap doesn't automatically mean tampering — it could be a timeout, retry failure, or a bug. Investigate, don't panic.
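The discrepancy thresholds above reduce to simple arithmetic. An illustrative helper (not SDK code) showing how a report would be classified:

```javascript
// Classify an intercepted-vs-logged discrepancy using the thresholds
// from the heartbeat example: < 2% ok, 2-10% warning, > 10% gap.
function completenessStatus(intercepted, logged) {
  if (intercepted === 0) return 'ok';
  const discrepancyPct = ((intercepted - logged) / intercepted) * 100;
  if (discrepancyPct < 2) return 'ok';
  if (discrepancyPct <= 10) return 'warning';
  return 'gap';
}

completenessStatus(1000, 995); // → 'ok'      (0.5% discrepancy)
completenessStatus(1000, 950); // → 'warning' (5%)
completenessStatus(1000, 850); // → 'gap'     (15%)
```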
Compliance Report
One click generates a formatted, audit-ready HTML report — the document you hand to a regulator or include in a SOC 2 audit package.
What it includes
- All decisions in the selected date range with full evidence trails
- Per-agent summary — approval rates, decision counts, confidence averages
- RSA signature blocks for every entry
- Chain integrity verification results
- Log completeness statement — honest about what's provable and what isn't
- EU AI Act / SOC 2 framing language
Generate
Dashboard header → Compliance Report. Filter by date range first for scoped reports — unscoped covers all time.
Via API:
GET /api/compliance-report?from=2026-01-01&to=2026-01-31
# Optional: &title=My+Custom+Title to override the report heading
Share `GET /api/public-key` with auditors: they can independently verify the RSA signature on any log entry without access to your dashboard.
Retention
EU AI Act Article 12 requires high-risk AI systems to retain decision records for 10 years. SealVera tracks your retention coverage and surfaces it in the dashboard and via API.
Retention status endpoint
GET /api/retention-status
Returns:
{
"totalEntries": 1842,
"oldestEntry": "2025-09-14T08:22:10.000Z",
"daysCovered": 167,
"requiredDays": 3650,
"coveragePct": 4.6,
"status": "insufficient",
"tier": "enterprise",
"note": "167 days of records. EU AI Act requires 3,650. Start date: 2025-09-14."
}
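The coverage figures in that response are straightforward to derive. An illustrative sketch of the arithmetic (the server computes this for you; the helper name is hypothetical):

```javascript
// Coverage percentage and status from days covered vs. the EU AI Act's
// 10-year (3,650-day) requirement, rounded to one decimal place.
function retentionCoverage(daysCovered, requiredDays = 3650) {
  const coveragePct = Math.round((daysCovered / requiredDays) * 1000) / 10;
  return {
    coveragePct,
    status: daysCovered >= requiredDays ? 'sufficient' : 'insufficient',
  };
}

retentionCoverage(167); // → { coveragePct: 4.6, status: 'insufficient' }
```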
Retention by plan
| Plan | Retention | EU AI Act (10yr) |
|---|---|---|
| Free | 30 days | Not sufficient |
| Design Partner | 1 year | Partial — covers reporting period |
| Enterprise | Configurable (up to indefinite) | Fully configurable |
SV-10 Standard
The SealVera SV-10 is an open checklist of the ten requirements every production AI agent system should meet to be considered accountable. Published under CC BY 4.0 — free to use, cite, and implement.
SealVera covers all ten requirements. The mapping:
| Requirement | SealVera feature |
|---|---|
| AA-01 — Complete decision record | Ingest API + SDK auto-capture |
| AA-02 — Factor-level reasoning with actual values | Auto-reasoning + native reasoning_steps |
| AA-03 — Tamper-evident records | RSA attestation on every entry |
| AA-04 — Deletion-detectable record set | Hash chain + /api/agents/:name/chain-verify |
| AA-05 — Regulatory retention | Retention tracking + /api/retention-status |
| AA-06 — Behavioral baseline monitoring | Baseline computation + drift detection |
| AA-07 — Proactive anomaly alerting | Alert rules + alert history |
| AA-08 — Multi-agent trace chain | Auto-correlation + trace viewer |
| AA-09 — Decision replay | /api/logs/:id/replay |
| AA-10 — On-demand compliance reports | One-click compliance report + JSONL/CSV export |
API Reference
All endpoints are relative to https://app.sealvera.com. Authenticated endpoints require either a JWT session cookie (browser) or an X-SealVera-Key header (API key). API key auth is preferred for server-to-server calls.
Authentication
- Sign up: `{ companyName, email, password }` → creates org + user → `{ ok, apiKey }`
- Log in: `{ username, password }` → sets JWT cookie

Ingest
- `POST /api/ingest`: requires `X-SealVera-Key`. Body: `{ agent, action, decision, input, output, reasoning?, reasoning_steps?, evidence_source?, model?, traceId?, role? }`
- `POST /api/otel`: requires `X-SealVera-Key`. Body: standard OTel JSON. Set `ai.agent`, `ai.decision`, `ai.input`, `ai.output` span attributes.
- `POST /api/completeness-report`: requires `X-SealVera-Key`. Body: `{ agent, intercepted, logged }`

Logs
- List logs: query params `agent`, `decision`, `from`, `to`, `search`, `limit`, `offset`

Agents + Baselines
- `GET /api/agents/:name/chain-verify`: returns `{ ok, totalEntries, gaps, brokenLinks }`

Risk / Drift
- List drift alerts: query params `agent`, `acknowledged`, `limit`

Alert Rules
- Create a rule: body `{ name, condition, severity, channels }`

Traces
- List traces: query params `status`, `limit`
- Name a trace: body `{ traceId, name }`
- Update trace status: body `{ status }`

Compliance
- `GET /api/compliance-report`: query `from`, `to`, `title`; returns an audit-ready HTML report
- Export logs: query `format=json|csv`, `agent`, `from`, `to`, `decision`, `search`; filtered export for legal/audit teams
- `GET /api/retention-status`: returns `{ totalEntries, daysCovered, requiredDays, coveragePct, status }`
- Alert history: returns `{ alerts, total }`
- Agent health summary: returns `{ velocity, approvalTrend, driftStatus, chainStatus, completenessStatus, lastSeen }`

Org + Settings
- Create API key: body `{ name }` → returns the key value once
- Slack settings: body `{ webhook_url }`
- Email settings: body `{ alert_email }`
- Generic webhook settings: body `{ webhook_url }`