Documentation Index
Fetch the complete documentation index at: https://docs.graphorlm.com/llms.txt
Use this file to discover all available pages before exploring further.
Overview
The Document Chat API allows you to ask questions about your ingested documents and receive answers grounded in your content. The API supports conversational memory, enabling follow-up questions that maintain context.
Endpoint
POST https://sources.graphorlm.com/ask-sources
Authentication
Include your API token in the Authorization header:
Authorization: Bearer YOUR_API_TOKEN
Request
| Header | Value | Required |
| --- | --- | --- |
| Authorization | Bearer YOUR_API_TOKEN | Yes |
| Content-Type | application/json | Yes |
Body Parameters
| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| question | string | Yes | The question to ask about your documents |
| conversation_id | string | No | Conversation identifier to maintain memory context across questions |
| reset | boolean | No | When true, starts a new conversation and ignores previous history. Default: false |
| file_ids | string[] | No | Restrict search to specific documents by file ID (preferred) |
| file_names | string[] | No | Restrict search to specific documents by file name (deprecated, use file_ids) |
| output_schema | object (JSON Schema) | No | Optional JSON Schema to request a structured output. When provided, the API returns validated structured data in structured_output and the raw JSON-text candidate in raw_json |
| thinking_level | string | No | Controls model and thinking configuration. Values: "fast", "balanced", "accurate" (default). See Thinking Level for details |
Thinking Level
The thinking_level parameter controls the model and thinking configuration used for answering questions:
| Value | Description |
| --- | --- |
| "fast" | Uses a faster model without extended thinking. Best for simple questions where speed is prioritized |
| "balanced" | Uses a more capable model with low thinking. A good balance between quality and speed |
| "accurate" | Default. Uses a more capable model with high thinking. Best for complex questions requiring deep reasoning |
Example Request
curl -X POST "https://sources.graphorlm.com/ask-sources" \
-H "Authorization: Bearer YOUR_API_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"question": "What are the main findings in this report?"
}'
Example with Conversation Memory
# First question
curl -X POST "https://sources.graphorlm.com/ask-sources" \
-H "Authorization: Bearer YOUR_API_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"question": "What products are mentioned in the catalog?"
}'
# Response: { "answer": "...", "conversation_id": "conv_abc123" }
# Follow-up question using conversation_id
curl -X POST "https://sources.graphorlm.com/ask-sources" \
-H "Authorization: Bearer YOUR_API_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"question": "Which one is the most expensive?",
"conversation_id": "conv_abc123"
}'
Example with Specific Documents (using file_ids)
curl -X POST "https://sources.graphorlm.com/ask-sources" \
-H "Authorization: Bearer YOUR_API_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"question": "What is the total amount due?",
"file_ids": ["file_abc123", "file_def456"]
}'
Example with Specific Documents (using file_names - deprecated)
curl -X POST "https://sources.graphorlm.com/ask-sources" \
-H "Authorization: Bearer YOUR_API_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"question": "What is the total amount due?",
"file_names": ["invoice-2024.pdf", "invoice-2023.pdf"]
}'
Example with Thinking Level
curl -X POST "https://sources.graphorlm.com/ask-sources" \
-H "Authorization: Bearer YOUR_API_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"question": "Analyze the legal implications of the termination clause",
"file_names": ["contract.pdf"],
"thinking_level": "accurate"
}'
Example with Structured Output (JSON Schema)
When you pass output_schema, the API will attempt to return a schema-conformant JSON object/array in structured_output.
Notes and constraints:
- Schemas must use simplified JSON Schema
- Type unions are only allowed with null (e.g. ["string", "null"])
- Complex constructs such as oneOf/anyOf/allOf/$ref are not supported
curl -X POST "https://sources.graphorlm.com/ask-sources" \
-H "Authorization: Bearer YOUR_API_TOKEN" \
-H "Content-Type: application/json" \
-d '{
"question": "Extract the invoice number and total amount due.",
"file_names": ["invoice-2024.pdf"],
"output_schema": {
"type": "object",
"properties": {
"invoice_number": { "type": ["string", "null"] },
"total_amount_due": { "type": ["number", "null"] },
"currency": { "type": ["string", "null"] }
}
}
}'
Response
Success Response (200 OK)
| Field | Type | Description |
| --- | --- | --- |
| answer | string | The answer to your question. When output_schema is provided, this is a short status message; the structured data is in structured_output (and the raw JSON-text candidate in raw_json) |
| structured_output | any | Structured output validated against the requested output_schema. Present only when output_schema is provided |
| raw_json | string | Raw JSON text produced by the model before validation/correction. Present only when output_schema is provided |
| conversation_id | string | Conversation identifier for follow-up questions |
Example Response
{
  "answer": "The main findings in the report indicate that revenue increased by 15% year-over-year, with the strongest growth in the digital services segment. The report also highlights three key challenges: supply chain disruptions, increased competition, and regulatory changes.",
  "conversation_id": "conv_abc123"
}
Example Response (Structured Output)
{
  "answer": "Structured output generated.",
  "structured_output": {
    "invoice_number": "INV-2024-001",
    "total_amount_due": 1299.5,
    "currency": "USD"
  },
  "raw_json": "{\n  \"invoice_number\": \"INV-2024-001\",\n  \"total_amount_due\": 1299.50,\n  \"currency\": \"USD\"\n}",
  "conversation_id": "conv_abc123"
}
Error Responses
| Status Code | Description |
| --- | --- |
| 400 | Bad Request - Invalid parameters |
| 401 | Unauthorized - Invalid or missing API token |
| 404 | Not Found - Specified file not found |
| 422 | Unprocessable Entity - Invalid output_schema or structured output validation failed |
| 500 | Internal Server Error |
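As a sketch of how an application might act on these status codes, the hypothetical helper below (not part of any GraphorLM client library) maps each documented code to a coarse action:

```python
# Hypothetical helper: map the documented Document Chat API status codes
# to a coarse action for the calling application.
def handle_status(status_code: int) -> str:
    if status_code == 200:
        return "ok"
    if status_code == 401:
        return "fix-auth"       # invalid or missing API token
    if status_code in (400, 404, 422):
        return "fix-request"    # bad parameters, unknown file, or invalid output_schema
    if status_code >= 500:
        return "retry"          # server error; retry with backoff
    return "unexpected"
```

Only the 5xx branch is worth retrying automatically; the 4xx codes indicate a problem with the request itself that a retry will not fix.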
Usage Examples
Python
import requests

url = "https://sources.graphorlm.com/ask-sources"
headers = {
    "Authorization": "Bearer YOUR_API_TOKEN",
    "Content-Type": "application/json"
}

# Simple question
response = requests.post(url, headers=headers, json={
    "question": "What are the key terms in this contract?"
})
data = response.json()
print(data["answer"])

# Follow-up question
response = requests.post(url, headers=headers, json={
    "question": "When does it expire?",
    "conversation_id": data["conversation_id"]
})
print(response.json()["answer"])
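The same pattern extends to structured output. The sketch below is a hedged example, not an official client: ask_structured is a hypothetical wrapper, and the schema follows the documented constraints (plain types, unions only with null, no oneOf/anyOf/allOf/$ref):

```python
import requests

API_URL = "https://sources.graphorlm.com/ask-sources"
HEADERS = {
    "Authorization": "Bearer YOUR_API_TOKEN",
    "Content-Type": "application/json",
}

# Simplified JSON Schema only: plain types, with null as the only
# permitted union member.
INVOICE_SCHEMA = {
    "type": "object",
    "properties": {
        "invoice_number": {"type": ["string", "null"]},
        "total_amount_due": {"type": ["number", "null"]},
        "currency": {"type": ["string", "null"]},
    },
}

def ask_structured(question, file_ids):
    """POST a question with an output_schema and return the parsed body."""
    response = requests.post(API_URL, headers=HEADERS, json={
        "question": question,
        "file_ids": file_ids,
        "output_schema": INVOICE_SCHEMA,
    })
    response.raise_for_status()
    # Body contains answer, structured_output, raw_json, conversation_id
    return response.json()

# data = ask_structured("Extract the invoice number and total amount due.",
#                       ["file_abc123"])
# print(data["structured_output"])
```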
JavaScript
const API_URL = "https://sources.graphorlm.com/ask-sources";
const API_TOKEN = "YOUR_API_TOKEN";

async function askQuestion(
  question,
  conversationId = null,
  fileNames = null,
  outputSchema = null
) {
  const payload = {
    question,
    conversation_id: conversationId
  };
  if (fileNames && fileNames.length) {
    payload.file_names = fileNames;
  }
  if (outputSchema) {
    payload.output_schema = outputSchema;
  }
  const response = await fetch(API_URL, {
    method: "POST",
    headers: {
      "Authorization": `Bearer ${API_TOKEN}`,
      "Content-Type": "application/json"
    },
    body: JSON.stringify(payload)
  });
  return response.json();
}

// Usage
const result = await askQuestion("What products are available?");
console.log(result.answer);

// Follow-up
const followUp = await askQuestion(
  "Tell me more about the first one",
  result.conversation_id
);
console.log(followUp.answer);
Best Practices
- Use conversation memory — Pass conversation_id on follow-up questions to maintain context
- Be specific — Clear, specific questions get better answers
- Scope when needed — Use file_ids to focus the search on specific documents
- Use structured output when integrating — Provide output_schema to get JSON you can reliably parse in code
- Reset when changing topics — Set reset: true when switching to unrelated questions
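The reset: true recommendation above can be sketched as a request body. The helper below is hypothetical, and conv_abc123 is the conversation identifier from the earlier examples:

```python
# Hypothetical helper: keep the same conversation_id but discard its history.
def reset_payload(question, conversation_id):
    return {
        "question": question,
        "conversation_id": conversation_id,
        "reset": True,  # previous history is ignored for this request
    }

# requests.post("https://sources.graphorlm.com/ask-sources",
#               headers=headers,
#               json=reset_payload("Switching topics: summarize the Q3 report.",
#                                  "conv_abc123"))
```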
- Document Chat Guide — Learn best practices for chatting with your documents
- Data Ingestion — Improve parsing quality for better chat responses