
OpenClaw Integration

The fastest path to SealVera. Install the skill once — every agent you run on OpenClaw is audited automatically.

Install the skill

clawhub install sealvera

Configure

export SEALVERA_API_KEY=sv_...
export SEALVERA_AGENT=my-agent-name   # shown in dashboard
export SEALVERA_ENDPOINT=https://app.sealvera.com  # default

That's it

Every LLM call your OpenClaw agent makes is now intercepted, logged, cryptographically signed, and visible in your dashboard. No code changes. No wrappers. No restarts.

Optional: verify setup

node ~/.openclaw/skills/sealvera/scripts/setup.js
node ~/.openclaw/skills/sealvera/scripts/status.js

Manual skill install (without clawhub)

cp -r /path/to/sealvera-skill ~/.openclaw/skills/sealvera

The skill uses NODE_OPTIONS=--require autoload.js under the hood. If you are running a Node.js agent outside OpenClaw, you can use the same autoload file directly — see the Zero-Friction section below.

Quick Start

Your first logged, signed, and explainable AI decision in under 5 minutes.

Zero-Friction Path

No SDK integration required. Works with any existing Node.js agent.

1. Install

npm install sealvera

2. Set env vars

export SEALVERA_API_KEY=sv_...
export SEALVERA_AGENT=my-agent-name
export NODE_OPTIONS="--require node_modules/sealvera/scripts/autoload.js"

3. Run your agent as normal

node my-agent.js
# [SealVera] autoload: active — agent="my-agent-name" endpoint="https://app.sealvera.com"

Every OpenAI, Anthropic, and OpenRouter call is now intercepted and logged.

SDK Quick Start

For full control and native evidence steps, integrate the SDK directly.

1. Create your account

Go to app.sealvera.com/signup. Enter your company name, email, and password — your account, org, and first API key are created instantly.

2. Copy your API key

After signup, your API key is shown once. It starts with sv_. Copy it now — the full value won't be shown again. You can generate additional keys anytime from Settings → API Keys.

3. Install

Node.js

npm install sealvera

Python

pip install sealvera

Go

go get github.com/sealvera/sealvera-go

OpenTelemetry

# No install needed — configure your existing OTel exporter:
OTEL_EXPORTER_OTLP_ENDPOINT=https://app.sealvera.com/api/otel
OTEL_EXPORTER_OTLP_HEADERS="X-SealVera-Key=sv_..."

Direct API

# No install needed — POST directly to the ingest endpoint

4. Add two lines to your agent

Node.js (OpenAI)

const SealVera = require('sealvera');

SealVera.init({ endpoint: 'https://app.sealvera.com', apiKey: 'sv_...' });

// Pass your existing client — SealVera detects the SDK automatically
const agent = SealVera.createClient(new OpenAI(), { agent: 'my-agent' });

// Use it exactly like the original client — every call is now logged
await agent.chat.completions.create({ ... });

// Multi-agent: one createClient per agent, mix SDKs freely
const fraudAgent = SealVera.createClient(new OpenAI(),    { agent: 'fraud-screener' });
const uwAgent    = SealVera.createClient(new Anthropic(), { agent: 'underwriter' });
// Add session_id to inputs → linked as a trace automatically

Python

import openai
import sealvera

sealvera.init(endpoint="https://app.sealvera.com", api_key="sv_...")

# Pass your already-configured client — sealvera detects the SDK automatically
openai_client = openai.OpenAI(api_key="sk-...")
agent = sealvera.create_client(openai_client, agent="my-agent")

# Use exactly like the original — every call is now logged
response = agent.chat.completions.create(model="gpt-4o", messages=[...])

# Works for Anthropic too — SDK detected automatically
import anthropic
anthropic_client = anthropic.Anthropic(api_key="sk-ant-...")
uw_agent = sealvera.create_client(anthropic_client, agent="underwriter")

Go

import sealvera "github.com/sealvera/sealvera-go"

// Init once
sealvera.Init(sealvera.Config{
    Endpoint: "https://app.sealvera.com",
    APIKey:   "sv_...",
})

// Create one Agent per logical agent in your application
fraudAgent := sealvera.NewAgent("fraud-screener")
uwAgent    := sealvera.NewAgent("loan-underwriter")

// Wrap your LLM call — provider specified per call
result, err := fraudAgent.WrapOpenAI(ctx, "screen_application", input,
    func() (any, error) {
        return openaiClient.Chat.Completions.New(ctx, params)
    },
)

result, err = uwAgent.WrapAnthropic(ctx, "evaluate", input,
    func() (any, error) {
        return anthropicClient.Messages.New(ctx, params)
    },
)

Node.js (Anthropic)

const SealVera = require('sealvera');
const Anthropic = require('@anthropic-ai/sdk');

SealVera.init({ endpoint: 'https://app.sealvera.com', apiKey: 'sv_...' });

// Pass your configured Anthropic client — SDK detected automatically
const anthropic = new Anthropic({ apiKey: process.env.ANTHROPIC_API_KEY });
const agent = SealVera.createClient(anthropic, { agent: 'my-agent' });

// When extended thinking is enabled, Claude's reasoning chain is captured
// automatically as native evidence — no prompt changes needed.
const response = await agent.messages.create({ ... });

Node.js (OpenRouter)

const SealVera = require('sealvera');
const OpenAI = require('openai');

SealVera.init({ endpoint: 'https://app.sealvera.com', apiKey: 'sv_...' });

// OpenRouter baseURL is detected — any model, Claude thinking auto-detected
const openrouter = new OpenAI({
  baseURL: 'https://openrouter.ai/api/v1',
  apiKey: process.env.OPENROUTER_API_KEY,
});
const agent = SealVera.createClient(openrouter, { agent: 'my-agent' });
// response.model logged per entry — GPT-4o, Claude, Llama, etc.

OpenTelemetry

# Set these env vars — no code changes required.
OTEL_EXPORTER_OTLP_ENDPOINT=https://app.sealvera.com/api/otel
OTEL_EXPORTER_OTLP_HEADERS="X-SealVera-Key=sv_..."

# Add these attributes to your AI decision spans:
# ai.agent    = "my-agent"
# ai.decision = "APPROVED"
# ai.input    = '{"amount": 25000}'
# ai.output   = '{"decision": "APPROVED", "confidence": 0.94}'

OpenClaw skill

const SealVeraOpenClaw = require('./skills/sealvera/sealvera-openclaw');

const sealvera = new SealVeraOpenClaw({
    endpoint: 'https://app.sealvera.com',
    apiKey:   'sv_...',
    agent:    'my-agent'
});

// After every agent turn:
await sealvera.captureAgentTurn({
    action:   'respond',
    decision: 'RESPONDED',
    input:    { message: userMsg },
    output:   { response: reply }
});

Direct API (curl)

curl -X POST https://app.sealvera.com/api/ingest \
  -H "X-SealVera-Key: sv_..." \
  -H "Content-Type: application/json" \
  -d '{"agent":"my-agent","action":"evaluate","decision":"APPROVED",
       "input":{"amount":25000},"output":{"decision":"APPROVED","confidence":0.94}}'

5. See your first decision

Open your dashboard. Within seconds of your agent making a call, the decision appears in the log. Click to expand the evidence trail and verify the cryptographic attestation.

You're done. Every subsequent call is automatically intercepted, explained, and signed. No more code changes required unless you want native evidence steps or trace grouping.

Integration Paths

Pass your existing client to SealVera.createClient(). It detects which SDK you're using and handles the rest — no configuration, no choosing between patch functions.

| You import | You pass | What SealVera detects |
|---|---|---|
| openai | new OpenAI() | OpenAI — patches chat.completions.create |
| @anthropic-ai/sdk | new Anthropic() | Anthropic — patches messages.create, extracts thinking blocks natively |
| openai with OpenRouter base URL | new OpenAI({ baseURL: 'https://openrouter.ai/api/v1' }) | OpenRouter — any model, auto-detects Claude thinking blocks |
One function, any SDK. SealVera.createClient(yourClient, { agent: 'name' }) is the only call you need to know. It fingerprints the client and applies the right interceptor automatically.

createClient — one function, any SDK

Pass your client. SealVera detects whether it's OpenAI, Anthropic, or OpenRouter and applies the right interceptor automatically. Returns a proxy that behaves identically to the original — use it exactly as you would the underlying client.

SealVera.init({ endpoint: 'https://app.sealvera.com', apiKey: 'sv_...' });

// OpenAI — detected automatically
const fraudDetector = SealVera.createClient(new OpenAI(), { agent: 'fraud-detector' });

// Anthropic — detected automatically, thinking blocks extracted natively
const claimsProcessor = SealVera.createClient(new Anthropic(), { agent: 'claims-processor' });

// OpenRouter — detected from baseURL, any model, Claude thinking auto-detected
const underwriter = SealVera.createClient(
  new OpenAI({ baseURL: 'https://openrouter.ai/api/v1', apiKey: 'sk-or-...' }),
  { agent: 'underwriter' }
);

// Multi-agent — one client per agent, same pattern regardless of SDK
const fraudAgent = SealVera.createClient(new OpenAI(),    { agent: 'fraud-screener' });
const uwAgent    = SealVera.createClient(new Anthropic(), { agent: 'loan-underwriter' });
// Mix SDKs freely — each agent is independent

Add session_id to your inputs and agents are linked into a trace automatically. See Traces.

What auto-detection handles per SDK

  • OpenAI — intercepts chat.completions.create. Extracts reasoning_steps from JSON responses as native evidence.
  • Anthropic — intercepts messages.create. Extended thinking blocks captured verbatim as native evidence — no prompt changes. The model's own reasoning is the audit record.
  • OpenRouter — intercepts chat.completions.create, logs response.model so you know which model made each decision. Auto-detects Claude thinking blocks in responses.

Python — create_client

Same pattern as Node.js. Pass your configured client — the SDK type is detected automatically.

import openai, anthropic, sealvera

sealvera.init(endpoint="https://app.sealvera.com", api_key="sv_...")

# Your existing configured client — API key stays with you
openai_client = openai.OpenAI(api_key=os.environ["OPENAI_API_KEY"])
fraud_agent = sealvera.create_client(openai_client, agent="fraud-screener")

# Anthropic — detected automatically, thinking blocks extracted natively
anthropic_client = anthropic.Anthropic(api_key=os.environ["ANTHROPIC_API_KEY"])
uw_agent = sealvera.create_client(anthropic_client, agent="loan-underwriter")

# Use exactly like the original — every call is logged
response = fraud_agent.chat.completions.create(model="gpt-4o", messages=[...])
response = uw_agent.messages.create(model="claude-3-5-sonnet", messages=[...])

Go — NewAgent

Go doesn't support runtime monkey-patching, so the SDK can't wrap a client transparently. Instead, create one Agent per logical agent in your application, then call its typed wrap methods. The pattern maps directly to createClient in JS and Python — one handle per agent, scoped logging, nothing shared.

sealvera.Init(sealvera.Config{
    Endpoint: "https://app.sealvera.com",
    APIKey:   "sv_...",
})

// One Agent per logical agent — equivalent to createClient() in JS/Python
fraudAgent := sealvera.NewAgent("fraud-screener")
uwAgent    := sealvera.NewAgent("loan-underwriter")

// Wrap your LLM call — provider declared per call
result, err := fraudAgent.WrapOpenAI(ctx, "screen_application", input,
    func() (any, error) { return openaiClient.Chat.Completions.New(ctx, params) },
)
result, err = uwAgent.WrapAnthropic(ctx, "evaluate_application", input,
    func() (any, error) { return anthropicClient.Messages.New(ctx, params) },
)

// Add session_id to input → trace links automatically, same as JS/Python
input := map[string]any{"applicant_id": "APP-001", "session_id": caseID}

Evidence Trail

When a regulator asks "why did your agent deny this claim?", this is your answer. Every logged decision gets a Structured Evidence Trail — the factors, values, signals, and explanations that drove the outcome.

Three modes — from automatic to fully authoritative

| | Auto-extract | Auto-reasoning (default) | Native (compliance-grade) |
|---|---|---|---|
| Setup | Zero | Zero — on by default | Add reasoning_steps to your prompt |
| Dashboard badge | SealVera inferred | Agent-provided | Agent-provided |
| How it works | SealVera reconstructs from output | SealVera injects instruction into system prompt | Agent returns steps natively |
| Use when | Auto-reasoning disabled | Most production agents — default | Highest-stakes decisions, custom schemas |

Auto-reasoning (default — no config required)

SealVera automatically injects a compact instruction into your system prompt asking the model to return structured reasoning_steps. The injection is smart — it detects whether your prompt already handles it and does nothing if so.

| Scenario | What SealVera does |
|---|---|
| System prompt contains reasoning_steps | Nothing — you have it covered |
| System prompt exists but no reasoning_steps | Appends instruction at end |
| No system prompt | Injects a minimal system message |
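
A minimal sketch of the decision logic the table describes, assuming an OpenAI-style messages array; the instruction text and function name here are illustrative, not SealVera's actual implementation:

```javascript
// Illustrative reconstruction of the three scenarios above — not SealVera's code.
// The injected instruction text is an assumption for demonstration.
const INSTRUCTION = 'Include "reasoning_steps" in your response JSON.';

function withAutoReasoning(messages) {
  const system = messages.find((m) => m.role === 'system');
  if (system && system.content.includes('reasoning_steps')) {
    return messages; // already covered — inject nothing
  }
  if (system) {
    // system prompt exists but lacks reasoning_steps — append at the end
    return messages.map((m) =>
      m.role === 'system' ? { ...m, content: `${m.content}\n\n${INSTRUCTION}` } : m
    );
  }
  // no system prompt — prepend a minimal system message
  return [{ role: 'system', content: INSTRUCTION }, ...messages];
}
```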

To disable (if you need full control over your prompt):

// Node.js
SealVera.init({ endpoint: '...', apiKey: '...', autoReasoning: false });

# Python
sealvera.init(endpoint="...", api_key="...", autoReasoning=False)

Native steps — the highest-fidelity approach

For maximum control — especially when you have a custom evidence schema or need specific field names — add reasoning_steps to your system prompt directly. SealVera detects it and skips the injection.

Add this to your system prompt:

Include "reasoning_steps" in your response JSON:
[{
  "factor": "field_name",
  "value": "actual_value",
  "signal": "risk" or "safe",
  "explanation": "one sentence"
}]

Example output:

{
  "decision": "DENIED",
  "reasoning_steps": [
    {
      "factor":      "claim_amount",
      "value":       "$14,200",
      "signal":      "risk",
      "explanation": "Exceeds $10,000 threshold for automatic review"
    },
    {
      "factor":      "prior_history",
      "value":       "clean",
      "signal":      "safe",
      "explanation": "No prior claims in 5 years"
    }
  ]
}

Anthropic shortcut: Enable extended thinking (thinking: { type: 'enabled' }). Claude's thought process is extracted automatically as native evidence — no prompt engineering needed. Pass your Anthropic client to createClient and it just works.

Alert Rules

Alert rules catch problems before they become incidents — a denial streak, a high-value rejection, a volume spike. Start with templates. Build custom rules only if no template fits.

Start with templates. Most teams never need custom rules. The template library covers 80% of use cases, written in plain language, no field paths required.

Templates by vertical

| Vertical | Templates available |
|---|---|
| Healthcare | High-value claim denied, prior auth denial streak, low-confidence decision, after-hours activity |
| Fintech | Large loan rejected, fraud flag spike, consecutive rejections, high-value payment denied |
| HR | Candidate decline streak, screening volume spike |
| General | Any flagged decision, rate spike |

Example: "Alert when any agent denies a claim over $10,000." Select the template, set the threshold, pick email or Slack — done. No field paths, no query syntax.

Custom rules

When no template fits, the Custom Rule builder gives you 5 condition types:

| Type | What it checks |
|---|---|
| field_threshold | A numeric field exceeds a value — e.g. input.amount > 10000 |
| decision_value | Decision equals a specific value — e.g. DENIED |
| consecutive_rejections | N rejections in a row, optionally for a specific decision value |
| rate_spike | N decisions within M minutes |
| time_window | Decisions occurring between hour X and hour Y |

Alert channels

Console output is always active. Add any of:

  • Email — Settings → Email
  • Slack — paste your webhook URL in Settings → Slack
  • Generic webhook — any HTTP endpoint; payload includes full alert context (works with PagerDuty, Teams, etc.)

Test all channels at once: Settings → Send Test Alert.

Behavioral Baseline

SealVera learns your agent's normal patterns and alerts when behavior drifts — without you writing a single rule. This is what catches regressions, model changes, and data drift before they become incidents.

Maturity levels

Shown as a colored dot next to each agent. Don't act on Risk tab alerts until your agent reaches mature.

| Level | Criteria | What it means |
|---|---|---|
| learning (amber) | < 10 decisions, < 2 days | Conservative thresholds. False positives are expected. Don't act on alerts yet. |
| developing (yellow) | Building hourly profiles, 10–50 decisions | Alerts are directional. Treat as signals, not facts. |
| mature (green) | 50+ decisions, 5+ distinct days | Full hourly-aware detection. Trust these alerts. |

Don't panic during learning. The first few days will produce false positives as the baseline stabilizes. Expect about 1 week of real traffic before it's fully reliable.

What it detects

  • Approval rate shift — > 15 percentage points from baseline for that hour
  • Confidence drop — > 2 standard deviations below normal confidence for that hour
  • Rate anomaly — > 3× or < 0.2× expected volume for that hour
  • Distribution shift — APPROVED/DENIED mix changes significantly from baseline
  • Silence — agent goes quiet during normally active hours

Time-aware, not average-aware. The baseline asks "is this hour abnormal for this specific hour of the day?" — not "is this above the flat weekly average?" An agent that always spikes at 2pm won't trigger false alerts at 2pm.

Drift alerts appear in the Risk tab with the drift type, magnitude, and baseline comparison. Acknowledge once investigated — they won't re-fire for the same event.
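
The hour-aware comparison can be sketched as follows. This is a simplified illustration of the approval-rate check only (the 15-percentage-point rule above); the real detector also weighs confidence, volume, and distribution:

```javascript
// Simplified sketch of hour-aware drift detection — illustrative, not SealVera's code.
// baseline: learned approval rate per hour of day (index 0-23).
function approvalRateShift(baseline, hour, observedRate) {
  const expected = baseline[hour];
  const shiftPts = Math.abs(observedRate - expected) * 100; // percentage points
  // Flag only when the shift exceeds 15 points for that specific hour
  return { expected, shiftPts, drift: shiftPts > 15 };
}

// An agent that always approves 90% at 14:00 is normal at 14:00 —
// but a 60% hour against that same baseline is drift.
const baseline = Array(24).fill(0.75);
baseline[14] = 0.9;
```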

Traces

When a claim goes through fraud-check → underwriting → approval, you want to see it as one flow — not three disconnected log entries. Traces group related decisions together. Most of the time, you don't need to do anything.

Check auto-detection first. If your agents already pass common ID fields, traces are built automatically. Read below before adding any code.

Auto-detection — the default, no configuration needed

If your inputs contain any of these fields, SealVera automatically links related decisions into a trace. No setup required — it happens on ingest.

  • session_id
  • request_id
  • correlation_id
  • conversation_id
  • workflow_id
  • job_id

The three agents below have no knowledge of each other. The shared session_id in their inputs is the only link — SealVera finds it automatically whether you use createClient, direct ingest, or any other method.

SealVera.init({ endpoint: 'https://app.sealvera.com', apiKey: 'sv_...' });

const fraudAgent     = SealVera.createClient(new OpenAI(), { agent: 'fraud-screener' });
const uwAgent        = SealVera.createClient(new OpenAI(), { agent: 'loan-underwriter' });
const committeeAgent = SealVera.createClient(new OpenAI(), { agent: 'approval-committee' });

const SESSION_ID = `case-${application.id}`; // or use your existing case/order/claim ID

await fraudAgent.chat.completions.create({
  messages: [{ role: 'user', content: JSON.stringify({ ...application, session_id: SESSION_ID }) }]
});
await uwAgent.chat.completions.create({
  messages: [{ role: 'user', content: JSON.stringify({ ...application, session_id: SESSION_ID }) }]
});
await committeeAgent.chat.completions.create({
  messages: [{ role: 'user', content: JSON.stringify({ ...application, session_id: SESSION_ID }) }]
});
// Dashboard → Traces tab: all three agents appear as one chain.

Your case ID is probably already there. If your application, claim, order, or workflow already has an ID that travels through your pipeline, that is your session_id. Pass it through and tracing is automatic with no additional work.

Explicit traceId — when inputs share no common field

Pass traceId directly in the ingest payload. Use this when agents don't share any business ID in their inputs.

// Direct ingest with explicit traceId
await fetch('https://app.sealvera.com/api/ingest', {
  method: 'POST',
  headers: { 'X-SealVera-Key': 'sv_...', 'Content-Type': 'application/json' },
  body: JSON.stringify({
    agent: 'fraud-screener', action: 'screen', decision: 'CLEAR',
    input: { ... }, output: { ... },
    traceId: 'claim-C9182'   // explicit — guaranteed grouping
  })
});

When you don't need traces at all

  • Single-agent deployments — nothing to group
  • Agents that already pass a business ID in inputs — auto-detection handles it without any code change

The dashboard shows a link-confidence label for each trace: explicit (traceId set directly) or auto · session_id (auto-detected from an input field).

Chain Integrity + Completeness

Cryptographic proof that your audit log hasn't been tampered with — and a mechanism to prove that every decision was actually logged in the first place.

What chain integrity proves

  • Every log entry is linked to the previous via SHA-256 hash chain
  • Any deletion or modification after logging breaks the chain — detectable
  • Sequence gaps surface automatically in Settings → Chain Integrity

What it doesn't prove: that every decision was logged in the first place, or that the SDK wasn't bypassed selectively. Chain integrity is tamper-evidence, not completeness. For completeness, you need SDK Completeness below.

Verifying the chain

Dashboard: gear icon → Chain Integrity → Verify. Or via API:

GET /api/agents/:name/chain-verify

// Response:
{
  "ok":           true,
  "totalEntries": 1482,
  "gaps":         0,
  "brokenLinks":  0
}

SDK Completeness — proving nothing was skipped

Send a completeness heartbeat from your application periodically. This lets SealVera compare how many calls your SDK intercepted vs. how many were successfully logged — surfacing any gaps.

// Send periodically from your application
const { intercepted, logged } = SealVera.getCompletenessStats();
await fetch('https://app.sealvera.com/api/completeness-report', {
    method: 'POST',
    headers: {
        'X-SealVera-Key': 'sv_...',
        'Content-Type':   'application/json'
    },
    body: JSON.stringify({ agent: 'my-agent', intercepted, logged })
});

// < 2% discrepancy = ok · 2-10% = warning · > 10% = gap detected

Check results: gear icon → Chain Integrity → SDK Completeness section. A gap doesn't automatically mean tampering — it could be a timeout, retry failure, or a bug. Investigate, don't panic.
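
The thresholds map to a simple classification. A sketch of the comparison, with status labels taken from the comment in the example above:

```javascript
// Classify the gap between intercepted and logged calls.
// Thresholds come from the example above; status labels are assumed.
function completenessStatus(intercepted, logged) {
  if (intercepted === 0) return { discrepancyPct: 0, status: 'ok' };
  const discrepancyPct = ((intercepted - logged) / intercepted) * 100;
  if (discrepancyPct < 2) return { discrepancyPct, status: 'ok' };
  if (discrepancyPct <= 10) return { discrepancyPct, status: 'warning' };
  return { discrepancyPct, status: 'gap detected' };
}
```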

Compliance Report

One click generates a formatted, audit-ready HTML report — the document you hand to a regulator or include in a SOC 2 audit package.

What it includes

  • All decisions in the selected date range with full evidence trails
  • Per-agent summary — approval rates, decision counts, confidence averages
  • RSA signature blocks for every entry
  • Chain integrity verification results
  • Log completeness statement — honest about what's provable and what isn't
  • EU AI Act / SOC 2 framing language

Generate

Dashboard header → Compliance Report. Filter by date range first for scoped reports — unscoped covers all time.

Via API:

GET /api/compliance-report?from=2026-01-01&to=2026-01-31

# Optional: &title=My+Custom+Title to override the report heading

Sharing with auditors: Give them GET /api/public-key — they can independently verify the RSA signature on any log entry without access to your dashboard.

Retention

EU AI Act Article 12 requires high-risk AI systems to retain decision records for 10 years. SealVera tracks your retention coverage and surfaces it in the dashboard and via API.

Retention status endpoint

GET /api/retention-status

Returns:

{
  "totalEntries":     1842,
  "oldestEntry":      "2025-09-14T08:22:10.000Z",
  "daysCovered":      167,
  "requiredDays":     3650,
  "coveragePct":      4.6,
  "status":           "insufficient",
  "tier":             "enterprise",
  "note":             "167 days of records. EU AI Act requires 3,650. Start date: 2025-09-14."
}

The clock starts at first deployment, not first audit. The sooner you start logging, the more coverage you have by August 2026. You cannot retroactively create records.
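
The coveragePct figure follows directly from the day counts; a sketch of the arithmetic, with one-decimal rounding assumed from the example response:

```javascript
// coveragePct = daysCovered / requiredDays, as a percentage.
// One-decimal rounding is assumed from the example above.
function coveragePct(daysCovered, requiredDays) {
  return Math.round((daysCovered / requiredDays) * 1000) / 10;
}
```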

Retention by plan

| Plan | Retention | EU AI Act (10yr) |
|---|---|---|
| Free | 30 days | Not sufficient |
| Design Partner | 1 year | Partial — covers reporting period |
| Enterprise | Configurable (up to indefinite) | Fully configurable |

SV-10 Standard

The SealVera SV-10 is an open checklist of the ten requirements every production AI agent system should meet to be considered accountable. Published under CC BY 4.0 — free to use, cite, and implement.

SealVera covers all ten requirements. The mapping:

| Requirement | SealVera feature |
|---|---|
| AA-01 — Complete decision record | Ingest API + SDK auto-capture |
| AA-02 — Factor-level reasoning with actual values | Auto-reasoning + native reasoning_steps |
| AA-03 — Tamper-evident records | RSA attestation on every entry |
| AA-04 — Deletion-detectable record set | Hash chain + /api/agents/:name/chain-verify |
| AA-05 — Regulatory retention | Retention tracking + /api/retention-status |
| AA-06 — Behavioral baseline monitoring | Baseline computation + drift detection |
| AA-07 — Proactive anomaly alerting | Alert rules + alert history |
| AA-08 — Multi-agent trace chain | Auto-correlation + trace viewer |
| AA-09 — Decision replay | /api/logs/:id/replay |
| AA-10 — On-demand compliance reports | One-click compliance report + JSONL/CSV export |

Run the self-assessment →

API Reference

All endpoints are relative to https://app.sealvera.com. Authenticated endpoints require either a JWT session cookie (browser) or an X-SealVera-Key header (API key). API key auth is preferred for server-to-server calls.

Authentication

POST /auth/signup — { companyName, email, password } → creates org + user → { ok, apiKey }
POST /auth/login — { username, password } → sets JWT cookie
POST /auth/logout — Clears the session cookie
GET /auth/me — Returns current user + org info

Ingest

POST /api/ingest — Log a decision. Auth: X-SealVera-Key. Body: { agent, action, decision, input, output, reasoning?, reasoning_steps?, evidence_source?, model?, traceId?, role? }
POST /api/otel/v1/spans — OTel span ingest. Auth: X-SealVera-Key. Body: standard OTel JSON. Set ai.agent, ai.decision, ai.input, ai.output span attributes.
POST /api/completeness-report — SDK completeness heartbeat. Auth: X-SealVera-Key. Body: { agent, intercepted, logged }

Logs

GET /api/logs — Query: agent, decision, from, to, search, limit, offset
GET /api/logs/:id — Single entry with full evidence trail and attestation
GET /api/logs/:id/verify — Verify the cryptographic signature for this entry
POST /api/logs/:id/replay — Re-run the logged input through the current agent version
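
For scripted access, the query parameters above compose into a URL like this (the fetch call itself is omitted; parameter values are illustrative):

```javascript
// Build a filtered query for GET /api/logs from the documented parameters.
function logsUrl(base, filters) {
  const qs = new URLSearchParams(filters).toString();
  return `${base}/api/logs?${qs}`;
}

const url = logsUrl('https://app.sealvera.com', {
  agent: 'my-agent',
  decision: 'DENIED',
  limit: '50',
});
```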

Agents + Baselines

GET /api/agents — List all agents with maturity level and last-seen timestamp
GET /api/agents/:name/baseline — Hourly baseline data for the named agent
GET /api/agents/:name/chain-verify — Chain integrity check → { ok, totalEntries, gaps, brokenLinks }

Risk / Drift

GET /api/drift — Query: agent, acknowledged, limit
POST /api/drift/:id/acknowledge — Mark a drift alert as reviewed
GET /api/stats/sparkline — Decision volume sparkline data for charts

Alert Rules

GET /api/rules — List all configured rules
POST /api/rules — Create a rule. Body: { name, condition, severity, channels }
PATCH /api/rules/:id — Update a rule (partial)
DELETE /api/rules/:id — Delete a rule

Traces

GET /api/traces — Query: status, limit
GET /api/traces/:traceId — Full trace with all linked decisions in order
POST /api/traces — Create an explicit trace. Body: { traceId, name }
PATCH /api/traces/:traceId — Update trace status. Body: { status }

Compliance

GET /api/compliance-report — Query: from, to, title — returns audit-ready HTML report
GET /api/export — Query: format=json|csv, agent, from, to, decision, search — filtered export for legal/audit teams
GET /api/retention-status — Retention coverage vs regulatory requirements → { totalEntries, daysCovered, requiredDays, coveragePct, status }
POST /api/seal — Create a tamper-evident seal over a time window → signed summary hash of all entries
GET /api/seals — List all seals with window, entry count, and signature
GET /api/alerts/history — Alert history with pagination → { alerts, total }
GET /api/agents/:name/health — Unified agent health → { velocity, approvalTrend, driftStatus, chainStatus, completenessStatus, lastSeen }
GET /api/public-key — RSA public key for independent signature verification

Org + Settings

GET /api/org — Current org details
GET /api/org/usage — Decision count and quota usage
GET /api/org/api-keys — List API keys (names only — values are not retrievable)
POST /api/org/api-keys — Create an API key. Body: { name } → returns key value once
DELETE /api/org/api-keys/:id — Revoke an API key
GET /api/settings — Current alert channel settings
POST /api/settings/slack — Set Slack webhook. Body: { webhook_url }
POST /api/settings/email — Set alert email. Body: { alert_email }
POST /api/settings/webhook — Set generic webhook. Body: { webhook_url }
POST /api/settings/test-alert — Send a test alert to all configured channels