chatCompletion

Sends a query to a configured LLM and returns the generated text. Optionally assembles a structured prompt from context data: extracted facts, reference citations, source documents, and metadata. Actual API cost is tracked per execution.

Parameters

| Param | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| `query` | string | Yes | | User query or question. Minimum length: 1 |
| `system_prompt` | string | No | `"You are a helpful AI assistant that provides accurate, well-reasoned responses."` | System prompt to guide LLM behavior |
| `context_data` | object? | No | `null` | Optional context to inject into the prompt (see below) |
| `model` | string | No | `"gpt-4o-mini"` | LLM model identifier |
| `provider` | string? | No | `null` | LLM provider. Auto-detected from the model when not set |
| `temperature` | float | No | `0.1` | Sampling temperature. Range: 0.0–2.0 |
| `max_tokens` | integer? | No | `null` | Maximum tokens in the response |
| `response_format` | string | No | `"text"` | One of: `text`, `markdown`, `json` |
| `include_citations` | boolean | No | `true` | Count citation references that appear in the response |
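The provider auto-detection mechanism is not specified here. A plausible sketch, assuming a simple model-name-prefix lookup (the `PROVIDER_PREFIXES` table and `detect_provider` helper below are hypothetical, not the node's actual implementation):

```python
# Hypothetical prefix table; the real mapping is internal to the node.
PROVIDER_PREFIXES = {
    "gpt-": "openai",
    "claude-": "anthropic",
    "gemini-": "google",
}

def detect_provider(model: str) -> str:
    """Resolve a provider from the model identifier's prefix."""
    for prefix, provider in PROVIDER_PREFIXES.items():
        if model.startswith(prefix):
            return provider
    return "unknown"
```

Setting `provider` explicitly would bypass any such detection, which matters for models whose names do not carry a recognizable prefix.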

`context_data` object:

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| `facts` | object[] | `[]` | Extracted facts with optional `fact`, `supporting_citations`, and `confidence_score` keys |
| `citations` | object[] | `[]` | Reference citations with optional `id`, `title`, and `doi` keys |
| `documents` | string[] | `[]` | Source document texts |
| `metadata` | object | `{}` | Arbitrary key-value pairs appended to the prompt |
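The documentation does not show how these fields are merged into the prompt. A hypothetical sketch of one plausible assembly, using only the field names from the table (`build_context_block` and the exact text layout are assumptions):

```python
def build_context_block(context_data: dict) -> str:
    """Flatten context_data into a plain-text block for prompt injection.

    Layout is illustrative only; the node's actual formatting may differ.
    """
    parts = []
    # Source documents, numbered in order.
    for i, doc in enumerate(context_data.get("documents", []), 1):
        parts.append(f"Document {i}:\n{doc}")
    # Extracted facts, with confidence when present.
    for fact in context_data.get("facts", []):
        line = f"- {fact.get('fact', '')}"
        if "confidence_score" in fact:
            line += f" (confidence: {fact['confidence_score']})"
        parts.append(line)
    # Reference citations in a compact [id] Title DOI form.
    for cit in context_data.get("citations", []):
        parts.append(
            f"[{cit.get('id', '?')}] {cit.get('title', '')} {cit.get('doi', '')}".strip()
        )
    # Arbitrary metadata key-value pairs.
    for key, value in context_data.get("metadata", {}).items():
        parts.append(f"{key}: {value}")
    return "\n\n".join(parts)
```

All four fields are optional, so an empty `context_data` simply contributes nothing to the prompt.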

Output

| Field | Type | Description |
| --- | --- | --- |
| `response` | string | Generated response text from the LLM |
| `model_used` | string | The model name passed to the LLM |
| `provider_used` | string | Provider resolved from the model, or `"error"` on failure |
| `token_count` | integer | Estimated token count (word_count × 1.3) |
| `context_used` | boolean | `true` if `context_data` was provided |
| `citations_included` | integer | Count of `[cite-id]` references found in the response |
| `execution_time_seconds` | float | Wall-clock time in seconds |
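The two derived fields, `token_count` and `citations_included`, can be reproduced from the documented rules. A minimal sketch; the regex used to match `[cite-id]` references is an assumption, since the exact pattern is not specified:

```python
import re

def estimate_tokens(text: str) -> int:
    """Rough token estimate: word count scaled by 1.3, per the output table."""
    return int(len(text.split()) * 1.3)

def count_citations(text: str) -> int:
    """Count bracketed [cite-id] references, e.g. [doc-1]. Pattern is assumed."""
    return len(re.findall(r"\[[A-Za-z0-9_-]+\]", text))
```

Because `token_count` is word-based rather than tokenizer-based, treat it as an estimate; it will not match the provider's billed token count exactly.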

Example

```json
{
  "id": "llmCompletion",
  "type": "chatCompletion",
  "data": {
    "label": "Chat Completion",
    "isExecuted": false,
    "handles": ["inputs", "outputs"],
    "schema": {},
    "params": {
      "query": { "value": "{{ $input.question }}", "isExpression": true, "isAttachedToInputNode": false },
      "system_prompt": { "value": "You are a scientific assistant. Answer using only the provided context.", "isExpression": false, "isAttachedToInputNode": false },
      "model": { "value": "gpt-4o-mini", "isExpression": false, "isAttachedToInputNode": false },
      "context_data": {
        "value": {
          "documents": "{{ @vectorSearch.documents }}",
          "citations": [],
          "facts": [],
          "metadata": {}
        },
        "isExpression": true,
        "isAttachedToInputNode": false
      }
    },
    "inputs": [],
    "outputs": [],
    "errors": []
  },
  "position": { "x": 600, "y": 0 },
  "isSelected": false,
  "isDragging": false
}
```
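For reference, a `context_data` value with `facts` and `citations` populated might look like the fragment below. Only the key names come from the table above; every value is a placeholder:

```json
{
  "facts": [
    {
      "fact": "Example extracted fact text",
      "supporting_citations": ["cite-1"],
      "confidence_score": 0.92
    }
  ],
  "citations": [
    { "id": "cite-1", "title": "Example Source Title", "doi": "10.1000/example" }
  ],
  "documents": ["Full text of a retrieved source document."],
  "metadata": { "collection": "example-collection" }
}
```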