Query
POST /query
curl --request POST \
  --url https://api.example.com/query \
  --header 'Authorization: <authorization>' \
  --header 'Content-Type: application/json' \
  --data '
{
  "query": "<string>",
  "stream": true,
  "response_mode": "<string>",
  "query_depth": "<string>",
  "model": "<string>",
  "session_id": "<string>",
  "maintain_context": true,
  "include_summary": true,
  "web_search_enabled": true,
  "conversation_upload_ids": [
    "<string>"
  ]
}
'
Query the Vrin knowledge base. Returns an AI-generated answer backed by knowledge graph facts and vector search results.
Authorization
string
required
Bearer token. Example: Bearer vrin_abc123
query
string
required
Natural-language question to answer.
stream
boolean
default:"false"
If true, the response is delivered as Server-Sent Events (SSE). Each event contains a JSON object with type and data fields.
response_mode
string
default:"chat"
Answer depth: "chat" (concise), "thinking" (reasoning chains), "research" (exhaustive multi-hop).
query_depth
string
Override the retrieval depth: "basic", "thinking", or "research".
model
string
LLM model override (e.g. "gpt-4o").
session_id
string
Conversation session ID to continue.
maintain_context
boolean
default:"false"
If true, the server maintains conversation context across requests and returns a session_id in the response.
include_summary
boolean
default:"true"
If true, include an AI-generated summary in the response. Set to false for raw fact retrieval only.
web_search_enabled
boolean
default:"false"
Enable web search augmentation.
conversation_upload_ids
string[]
Upload IDs to include as additional context.
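
The request body above can be assembled programmatically. The sketch below is a minimal Python example, assuming the placeholder host `https://api.example.com/query` from the curl example; `build_query_payload` and `query_vrin` are illustrative helper names, not part of the API.

```python
import json
import urllib.request

API_URL = "https://api.example.com/query"  # placeholder host from the curl example


def build_query_payload(query, stream=False, response_mode="chat",
                        maintain_context=False, include_summary=True,
                        web_search_enabled=False, session_id=None,
                        conversation_upload_ids=None):
    """Assemble the JSON body for POST /query, omitting unset optional fields."""
    payload = {
        "query": query,
        "stream": stream,
        "response_mode": response_mode,
        "maintain_context": maintain_context,
        "include_summary": include_summary,
        "web_search_enabled": web_search_enabled,
    }
    if session_id is not None:
        payload["session_id"] = session_id
    if conversation_upload_ids:
        payload["conversation_upload_ids"] = conversation_upload_ids
    return payload


def query_vrin(token, payload):
    """Send the request with a Bearer token (network call; sketch only)."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Omitting optional fields (rather than sending nulls) lets the server apply its documented defaults.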

Non-streaming response

{
  "success": true,
  "summary": "ACME Corp reported $50M revenue in Q4 2025, representing a 23% increase year-over-year. CEO Jane Smith attributed the growth to the enterprise segment.",
  "session_id": "sess_abc123",
  "total_facts": 12,
  "total_chunks": 5,
  "metadata": {
    "entities": ["ACME Corp", "Jane Smith"],
    "model": "gpt-4o-mini",
    "search_time": "1.2s"
  }
}

Streaming response (SSE)

When stream: true, the response is text/event-stream:
data: {"type": "metadata", "data": {"session_id": "sess_abc123", "total_facts": 12, "entities": ["ACME Corp"]}}

data: {"type": "content", "data": {"delta": "ACME Corp "}}

data: {"type": "content", "data": {"delta": "reported $50M "}}

data: {"type": "sources", "data": {"sources": [{"title": "ACME Q4 Earnings", "chunk_id": "c_123"}]}}

data: {"type": "done", "data": {}}
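
A client consumes this stream by parsing each `data:` line as JSON and dispatching on the `type` field. The following is a minimal sketch of that parsing and assembly logic; the function names are illustrative, and it operates on already-received lines rather than managing the HTTP connection.

```python
import json


def parse_sse_stream(lines):
    """Parse `data: {...}` SSE lines into (event_type, data) tuples."""
    events = []
    for line in lines:
        line = line.strip()
        if line.startswith("data:"):
            event = json.loads(line[len("data:"):].strip())
            events.append((event["type"], event["data"]))
    return events


def assemble_answer(events):
    """Concatenate content deltas; collect metadata and sources from a stream."""
    deltas, metadata, sources = [], {}, []
    for etype, data in events:
        if etype == "metadata":
            metadata = data          # sent first: session_id, total_facts, ...
        elif etype == "content":
            deltas.append(data["delta"])
        elif etype == "sources":
            sources = data["sources"]
        elif etype == "error":
            raise RuntimeError(data["message"])
        # "done" carries optional error / insufficient_coverage flags
    return "".join(deltas), metadata, sources
```

Running it over the sample events above yields the partial answer "ACME Corp reported $50M " plus the session metadata and source references.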

SSE event types

| Type | Data fields | Description |
|------|-------------|-------------|
| metadata | session_id, total_facts, total_chunks, entities, model | Retrieval metadata, sent first |
| content | delta | Text token |
| reasoning | chains or steps | Reasoning chain steps |
| sources | sources | Source document references |
| done | error?, insufficient_coverage? | Stream complete |
| error | message | Fatal error |

Insufficient coverage

When the knowledge base has no relevant facts, the response includes insufficient_coverage: true and skips LLM generation.
{
  "success": true,
  "summary": "",
  "insufficient_coverage": true,
  "total_facts": 0,
  "total_chunks": 0
}
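
Callers should check this flag before using the summary. A minimal sketch, assuming the response has already been decoded to a dict; `extract_answer` is an illustrative helper name, and the retry suggestion in the comment is one possible strategy, not prescribed by the API.

```python
def extract_answer(response):
    """Return the summary string, or None when coverage is insufficient."""
    if response.get("insufficient_coverage"):
        # No relevant facts in the knowledge base; one option is to
        # retry the query with web_search_enabled=True.
        return None
    return response.get("summary", "")
```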