How VerifiedState works.

Ingest → Extract → Verify → Query.
Four steps. Cryptographic proof at every stage.

[Pipeline diagram: raw text ("User prefers dark mode, uses JWT auth, budget is $50k") → extracted assertions (dark_mode: preference; jwt_auth: architecture; budget: $50k) → signed receipt (rct_7f3a, Ed25519-signed, confidence 0.96) → query result (ranked results with receipts attached, all channels fused)]
1. Ingest

Send raw content — text, docs, conversation logs, code diffs. Stored as an artifact in a namespace.

POST /ingest { namespace_id, content, source_type } → { artifact_id }
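The call above can be sketched as a small client helper. The endpoint and body fields come from the line above; the base URL and bearer-auth header are assumptions for illustration, not documented values:

```typescript
// Request-building sketch for POST /ingest. The endpoint and field names are
// from the docs; the base URL and Authorization scheme are assumptions.
interface IngestRequest {
  namespace_id: string;
  content: string;
  source_type: string;
}

function buildIngestRequest(base: string, req: IngestRequest, apiKey: string) {
  return {
    url: `${base}/ingest`,
    init: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${apiKey}`, // auth scheme is an assumption
      },
      body: JSON.stringify(req),
    },
  };
}

// Usage: const { url, init } = buildIngestRequest(BASE, req, key);
//        const { artifact_id } = await (await fetch(url, init)).json();
```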
2. Extract

LLM-powered extraction breaks content into structured assertions: subject-predicate-object triples with temporal markers. Each gets a 768-dim embedding.
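For the sample text in the diagram above, the extracted triples might look like this. The shape follows the data model shown later on this page; the exact subjects and predicates are illustrative, and the 768-dim embedding is elided:

```typescript
// Shape of an extracted assertion: a subject-predicate-object triple with
// temporal markers. Embedding omitted for brevity; values are illustrative.
interface Assertion {
  subject: string;
  predicate: string;
  object: string;
  valid_from: string;       // ISO-8601 temporal marker
  valid_to: string | null;  // null = still valid
}

// Illustrative extraction of
// "User prefers dark mode, uses JWT auth, budget is $50k"
const assertions: Assertion[] = [
  { subject: "user", predicate: "prefers", object: "dark mode",
    valid_from: "2026-04-01T00:00:00Z", valid_to: null },
  { subject: "user", predicate: "uses", object: "JWT auth",
    valid_from: "2026-04-01T00:00:00Z", valid_to: null },
  { subject: "project", predicate: "has_budget", object: "$50k",
    valid_from: "2026-04-01T00:00:00Z", valid_to: null },
];
```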

Fallback chain: Llama 3.3 70B → DeepSeek R1 → Qwen3 480B → Mistral Small 3.1 → Gemini Flash Lite → Grok 4.1 Fast. Pipeline never breaks.

POST /extract { namespace_id, artifact_id } → { assertions: [...] }
3. Verify

Each assertion is grounded against its source spans; an LLM evaluates support. The result: an Ed25519-signed receipt with confidence score, evidence chain, and Merkle chain position.

The receipt is the product.

POST /verify { assertion_id, namespace_id } → { receipt_id, status, confidence, signature }
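Receipt signing rests on standard Ed25519. A sketch using Node's crypto module — the production service signs via Web Crypto on Workers, and the receipt payload layout here is an assumption; only the primitives (Ed25519 over a SHA-256 content hash) are from the stack described on this page:

```typescript
import { createHash, generateKeyPairSync, sign, verify } from "node:crypto";

// Illustrative receipt payload; field layout is an assumption.
const { publicKey, privateKey } = generateKeyPairSync("ed25519");

const receiptPayload = JSON.stringify({
  receipt_id: "rct_7f3a",
  assertion_id: "ast_9f2a",
  status: "verified",
  confidence: 0.96,
  content_hash: createHash("sha256").update("User prefers dark mode").digest("hex"),
});

// Ed25519 signs the raw payload directly (no separate digest step needed).
const signature = sign(null, Buffer.from(receiptPayload), privateKey);

// Anyone holding the public key can check the receipt offline.
const ok = verify(null, Buffer.from(receiptPayload), publicKey, signature);
```

Because verification needs only the payload, the signature, and the public key, receipts can be audited outside the service entirely.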
4. Query

Six retrieval channels fused with Reciprocal Rank Fusion + epistemic multipliers:

  1. Exact slot match
  2. Sparse lexical (FTS)
  3. Dense semantic (768-dim cosine)
  4. Temporal range (valid_from/valid_to)
  5. Graph traversal (assertion edges)
  6. Conflict lookup (contradictions)
POST /query { namespace_id, query } → { assertions, receipts, conflicts }
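Reciprocal Rank Fusion scores each result by summing 1/(k + rank) across the channels that returned it, with the conventional k = 60. A minimal sketch — the epistemic-multiplier scheme shown here (a per-assertion weight, e.g. receipt confidence) is an assumption about how the multipliers are applied:

```typescript
// Fuse ranked lists from multiple channels with Reciprocal Rank Fusion.
// rankings: one result list per channel, best first. k = 60 is conventional.
function rrfFuse(
  rankings: string[][],
  multipliers: Map<string, number> = new Map(), // epistemic weights (assumption)
  k = 60,
): string[] {
  const scores = new Map<string, number>();
  for (const channel of rankings) {
    channel.forEach((id, i) => {
      // rank is 1-based: the top result contributes 1/(k + 1)
      scores.set(id, (scores.get(id) ?? 0) + 1 / (k + i + 1));
    });
  }
  for (const [id, s] of scores) {
    scores.set(id, s * (multipliers.get(id) ?? 1)); // default weight 1
  }
  return [...scores.keys()].sort((a, b) => scores.get(b)! - scores.get(a)!);
}
```

An item ranked mid-list by several channels can outscore an item ranked first by only one, which is what makes RRF robust to any single channel's noise.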

The Speed Stack

Most queries never hit Tier 3.

Tier | Mechanism | Latency | Question answered
Tier 0 | Cuckoo filter | ~0.01ms | "Does this fact exist at all?"
Tier 1 | Agent cache | <1ms | "Is it in my session cache?"
Tier 2 | Materialized views | <5ms | "Current state snapshot"
Tier 3 | Full retrieval | <100ms | "All six channels, fused"

The Data Model

The atomic unit is an assertion: a subject-predicate-object triple with metadata. Assertions are append-only, never updated or deleted. When an assertion is superseded, a new one is appended that points back to it. The full temporal history is always queryable.

{
  "assertion_id": "ast_9f2a...",
  "subject": "auth_middleware",
  "predicate": "uses",
  "object": "JWT with RS256",
  "confidence": 0.94,
  "valid_from": "2026-04-01T00:00:00Z",
  "valid_to": null,
  "source_type": "coding_agent",
  "namespace_id": "ns_project_auth",
  "embedding": [0.023, -0.041, ...], // 768-dim
  "receipt_id": "rct_7f3a..."
}
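Supersession under an append-only log can be sketched as follows: the old record is never touched; a successor is appended with a back-pointer, and "current state" is whatever no later assertion supersedes. The `supersedes` field name is an assumption about how the back-pointer is stored:

```typescript
// Append-only supersession sketch. The back-pointer field name `supersedes`
// is an assumption; the append-only discipline is from the docs above.
interface LoggedAssertion {
  assertion_id: string;
  subject: string;
  predicate: string;
  object: string;
  valid_from: string;
  supersedes?: string; // assertion_id of the record this one replaces
}

// Never mutate or delete: the successor is appended with a back-pointer.
function supersede(
  log: LoggedAssertion[],
  oldId: string,
  next: LoggedAssertion,
): LoggedAssertion[] {
  return [...log, { ...next, supersedes: oldId }];
}

// Current state = assertions that no later assertion points back to.
function currentState(log: LoggedAssertion[]): LoggedAssertion[] {
  const replaced = new Set(
    log.map(a => a.supersedes).filter((s): s is string => !!s),
  );
  return log.filter(a => !replaced.has(a.assertion_id));
}
```

Because nothing is overwritten, a temporal query is just a filter over the same log at an earlier cutoff.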

Infrastructure

Layer | Technology | Purpose
Compute | Cloudflare Workers (20+) | Edge processing
Database | Supabase Postgres + pgvector | Assertions, receipts, FTS, vectors
Embeddings | Cloudflare Workers AI (bge-base-en-v1.5) | 768-dim, free, on-edge
LLM | OpenRouter (6-model fallback) | Extraction + verification
Object Storage | Cloudflare R2 | Compression sidecars
Cache | Cloudflare KV | Delta bloom filters
Real-time | Supabase Realtime | Delta push
Crypto | Web Crypto (Ed25519, SHA-256) | Receipt signing

Common questions

Is this just another vector database?

No. A vector DB stores embeddings. We store assertions, verify them, sign them, detect conflicts, and provide six retrieval channels. Vectors are one channel of six.

Why not just use Mem0?

Different primitives. Mem0 stores memories. We store assertions with cryptographic receipts. See /compare for the full breakdown.

How is this different from RAG?

RAG retrieves chunks. We extract structured facts, verify them, produce receipts. RAG: "the document said X." VS: "the fact X was verified at time T with confidence C, signed by key K."

What if I don't need verification?

Use VS as a memory layer and ignore the receipts. Six-channel retrieval, conflict detection, temporal validity, and governance all work without verification.

Start building with verified memory.

Free: 50,000 assertions/month. No credit card. No trial expiration.