Query the Vrin knowledge base. Returns an AI-generated answer backed by knowledge graph facts and vector search results.
Bearer token. Example: Bearer vrin_abc123
Natural-language question to answer.
If true, the response is delivered as Server-Sent Events (SSE). Each event contains a JSON object with type and data fields.
Answer depth: "chat" (concise), "thinking" (reasoning chains), "research" (exhaustive multi-hop).
Override retrieval depth: "basic", "thinking", "research".
LLM model override (e.g. "gpt-4o").
Conversation session ID to continue.
If true, maintain conversation context. A session_id will be returned in the response.
If true, include AI-generated summary. Set to false for raw fact retrieval only.
Enable web search augmentation.
Upload IDs to include as additional context.
Non-streaming response
{
  "success": true,
  "summary": "ACME Corp reported $50M revenue in Q4 2025, representing a 23% increase year-over-year. CEO Jane Smith attributed the growth to the enterprise segment.",
  "session_id": "sess_abc123",
  "total_facts": 12,
  "total_chunks": 5,
  "metadata": {
    "entities": ["ACME Corp", "Jane Smith"],
    "model": "gpt-4o-mini",
    "search_time": "1.2s"
  }
}
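A minimal sketch of consuming the non-streaming response shown above (the response body is reproduced from the example; error handling is omitted):

```python
import json

# Example response body, copied from the documentation above.
response_body = """{
  "success": true,
  "summary": "ACME Corp reported $50M revenue in Q4 2025, representing a 23% increase year-over-year. CEO Jane Smith attributed the growth to the enterprise segment.",
  "session_id": "sess_abc123",
  "total_facts": 12,
  "total_chunks": 5,
  "metadata": {"entities": ["ACME Corp", "Jane Smith"], "model": "gpt-4o-mini", "search_time": "1.2s"}
}"""

data = json.loads(response_body)
if data["success"]:
    print(data["summary"])
    # Pass session_id back on the next request to continue the conversation.
    session_id = data["session_id"]
```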
Streaming response (SSE)
When stream: true, the response is text/event-stream:
data: {"type": "metadata", "data": {"session_id": "sess_abc123", "total_facts": 12, "entities": ["ACME Corp"]}}
data: {"type": "content", "data": {"delta": "ACME Corp "}}
data: {"type": "content", "data": {"delta": "reported $50M "}}
data: {"type": "sources", "data": {"sources": [{"title": "ACME Q4 Earnings", "chunk_id": "c_123"}]}}
data: {"type": "done", "data": {}}
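The event lines above can be consumed with a small parser like this sketch, which assumes each event arrives as a single `data: <json>` line (a real SSE client should also handle multi-line data fields and reconnects):

```python
import json

def consume_sse(lines):
    """Reassemble an answer from SSE lines of the form 'data: <json>'."""
    answer, metadata, sources = [], {}, []
    for line in lines:
        if not line.startswith("data: "):
            continue  # skip blank keep-alives and comment lines
        event = json.loads(line[len("data: "):])
        etype, edata = event["type"], event["data"]
        if etype == "metadata":
            metadata = edata          # sent first: session_id, counts, entities
        elif etype == "content":
            answer.append(edata["delta"])  # incremental text token
        elif etype == "sources":
            sources = edata["sources"]
        elif etype == "done":
            break                     # stream complete
    return "".join(answer), metadata, sources

stream = [
    'data: {"type": "metadata", "data": {"session_id": "sess_abc123", "total_facts": 12}}',
    'data: {"type": "content", "data": {"delta": "ACME Corp "}}',
    'data: {"type": "content", "data": {"delta": "reported $50M "}}',
    'data: {"type": "done", "data": {}}',
]
text, meta, _ = consume_sse(stream)
print(text.strip())  # → ACME Corp reported $50M
```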
SSE event types
| Type | Data fields | Description |
|---|---|---|
| metadata | session_id, total_facts, total_chunks, entities, model | Retrieval metadata, sent first |
| content | delta | Incremental text token |
| reasoning | chains or steps | Reasoning chain steps |
| sources | sources | Source document references |
| done | error?, insufficient_coverage? | Stream complete |
| error | message | Fatal error |
Insufficient coverage
When the knowledge base has no relevant facts, LLM generation is skipped and the response includes insufficient_coverage: true.
{
  "success": true,
  "summary": "",
  "insufficient_coverage": true,
  "total_facts": 0,
  "total_chunks": 0
}
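Since success is still true in this case, callers should check the insufficient_coverage flag explicitly rather than relying on success alone. A minimal sketch:

```python
import json

# Example insufficient-coverage body, copied from the documentation above.
body = '{"success": true, "summary": "", "insufficient_coverage": true, "total_facts": 0, "total_chunks": 0}'
data = json.loads(body)

# Use .get(): the flag is absent on normal responses, not set to false.
if data.get("insufficient_coverage"):
    print("No relevant facts found; consider ingesting more documents.")
else:
    print(data["summary"])
```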