VRIN is a knowledge reasoning engine that structures your documents into a temporal knowledge graph and reasons across it — giving AI answers you can trace to specific facts.

Why Vrin

Traditional retrieval systems rely on text similarity alone, which breaks down when questions span multiple documents or require temporal context. VRIN adds a knowledge graph layer that:
  • Extracts structured facts (entities, relationships, timestamps) from every document
  • Traverses entity connections to answer questions that span dozens of documents
  • Reasons across your knowledge graph to assemble precisely the context your LLM needs
  • Traces every answer to specific facts from specific source documents
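A structured fact like those above can be pictured as a simple record. Here is a minimal sketch of what one extracted fact might hold — the field names are illustrative, not VRIN's actual schema:

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class Fact:
    """One extracted fact: a subject-relation-object triple plus provenance."""
    subject: str      # entity the fact is about
    relation: str     # relationship type
    obj: str          # related entity or literal value
    timestamp: date   # temporal context for time-aware reasoning
    source: str       # document the fact was extracted from

fact = Fact("ACME", "reported_revenue", "$12M", date(2024, 9, 30), "q3-report.pdf")
print(fact.subject, fact.relation, fact.obj)
```

Keeping the timestamp and source on every fact is what makes temporal queries and answer tracing possible downstream.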

Key features

Deep Reasoning

Knowledge graph reasoning that answers questions spanning dozens of documents — not just keyword matching.

True streaming

Server-Sent Events deliver tokens as they are generated. No buffering, no polling.
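Server-Sent Events are plain text frames on a long-lived HTTP response, so tokens can be consumed as they arrive. A minimal parser for the `data:` lines of an SSE stream (a generic sketch of the wire format, not VRIN's client code) looks like this:

```python
def parse_sse(stream_lines):
    """Yield the data payload of each SSE event.

    Events are groups of lines: each `data:` line carries part of the
    payload, and a blank line terminates the event.
    """
    buffer = []
    for line in stream_lines:
        if line.startswith("data:"):
            buffer.append(line[5:].lstrip())
        elif line == "" and buffer:
            yield "\n".join(buffer)
            buffer = []

# Simulated stream: three tokens delivered as separate events
raw = ["data: Hello", "", "data: world", "", "data: !", ""]
tokens = list(parse_sse(raw))
print(tokens)  # ['Hello', 'world', '!']
```

Because each event is yielded the moment its blank-line terminator arrives, a consumer can render tokens incrementally instead of waiting for the full response.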

File upload

Upload PDFs, CSVs, and text files. VRIN automatically extracts facts and splits documents into chunks.
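Chunking is handled for you on upload, but the idea is simple: split documents into overlapping windows so a fact that straddles a boundary still appears intact in at least one chunk. A toy fixed-size chunker (purely illustrative — VRIN's actual chunking strategy is not documented here):

```python
def chunk_text(text, size=200, overlap=50):
    """Split text into overlapping fixed-size windows.

    The overlap keeps boundary-spanning content intact in at
    least one chunk.
    """
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

doc = "x" * 500
chunks = chunk_text(doc)
print(len(chunks), [len(c) for c in chunks])  # 3 [200, 200, 200]
```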

Enterprise data sovereignty

Enterprise API keys route queries through your own AWS account. Your data never leaves your cloud.

How it works

Every query follows a three-stage pipeline:
  1. Structure — VRIN extracts entities from your query and traverses the knowledge graph to find relevant facts, relationships, and temporal context.
  2. Reason — Graph facts and document chunks are intelligently fused, scored for confidence, and assembled into precisely the context your LLM needs.
  3. Trace — Every answer includes the specific facts used, the source documents they came from, and confidence scores — so you can verify every claim.
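The three stages can be sketched end to end with a toy in-memory graph (entirely illustrative — VRIN's internals are not public, and the entities and documents here are made up):

```python
# Toy knowledge graph: entity -> list of (relation, value, source document)
GRAPH = {
    "ACME": [
        ("revenue_q2", "$10M", "q2-report.pdf"),
        ("revenue_q3", "$12M", "q3-report.pdf"),
    ],
}

def answer(query_entities):
    # 1. Structure: look up graph facts for the entities found in the query
    facts = [(e, *f) for e in query_entities for f in GRAPH.get(e, [])]
    # 2. Reason: assemble the facts into LLM-ready context
    context = "\n".join(f"{e} {rel}: {val}" for e, rel, val, _ in facts)
    # 3. Trace: return the context alongside the documents it came from
    sources = sorted({src for *_, src in facts})
    return context, sources

context, sources = answer(["ACME"])
print(sources)  # ['q2-report.pdf', 'q3-report.pdf']
```

A real pipeline would also score facts for confidence and fuse them with retrieved document chunks, but the structure-reason-trace shape is the same.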

For AI Agent Builders

VRIN works as a drop-in reasoning layer for AI agents. Integrate via Python SDK, MCP server, or REST API:
from vrin import VRINClient

client = VRINClient(api_key="vrin_your_api_key")
result = client.query("What changed in ACME's revenue between Q2 and Q3?")

# Every answer traces to specific facts
for source in result.get("sources", []):
    print(f"  {source['document']}: {source['fact']}")

VRIN is model-agnostic — it works with GPT, Claude, Gemini, or any LLM. You bring the model, VRIN provides the structured reasoning.

Benchmarks

Benchmark       VRIN       Best Baseline                          Improvement
MultiHop-RAG    95.1%      78.9% (GPT-5.2 w/ same docs)           +16.2pp
MuSiQue         EM 0.478   EM 0.372 (HippoRAG 2, academic SOTA)   +28%
MultiHop-RAG: 384 stratified samples (seed=42), 609-article corpus. MuSiQue: 300 multi-hop questions, 4,848 paragraphs ingested.

Next steps

Quickstart

Install the SDK and run your first query in under 2 minutes.

Python SDK

Full reference for VRINClient methods and configuration.

API Reference

HTTP endpoints, request/response schemas, and interactive playground.

Enterprise

Deploy VRIN in your own AWS account with full data sovereignty.