chatCompletion
Sends a query to a configured LLM and returns the generated text. Optionally assembles a structured prompt from context data: extracted facts, reference citations, source documents, and metadata. The actual API cost is tracked for each execution.
Parameters
| Param | Type | Required | Default | Description |
|---|---|---|---|---|
| query | string | Yes | — | User query or question. Minimum length: 1 |
| system_prompt | string | No | "You are a helpful AI assistant that provides accurate, well-reasoned responses." | System prompt to guide LLM behavior |
| context_data | object? | No | null | Optional context to inject into the prompt (see below) |
| model | string | No | "gpt-4o-mini" | LLM model identifier |
| provider | string? | No | null | LLM provider. Auto-detected from the model when not set |
| temperature | float | No | 0.1 | Sampling temperature. Range: 0.0–2.0 |
| max_tokens | integer? | No | null | Maximum tokens in the response |
| response_format | string | No | "text" | One of: text, markdown, json |
| include_citations | boolean | No | true | Count citation references that appear in the response |
context_data object:
| Field | Type | Default | Description |
|---|---|---|---|
| facts | object[] | [] | Extracted facts with optional fact, supporting_citations, and confidence_score keys |
| citations | object[] | [] | Reference citations with optional id, title, and doi keys |
| documents | string[] | [] | Source document texts |
| metadata | object | {} | Arbitrary key-value pairs appended to the prompt |
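A minimal context_data payload might look like the following. All fields are optional and default to empty; the values here are placeholders for illustration only:

```json
{
  "facts": [
    {
      "fact": "Compound X inhibits enzyme Y.",
      "supporting_citations": ["smith2021"],
      "confidence_score": 0.92
    }
  ],
  "citations": [
    {
      "id": "smith2021",
      "title": "Inhibition of enzyme Y by compound X",
      "doi": "10.1234/placeholder"
    }
  ],
  "documents": ["Full text or excerpt of a source document."],
  "metadata": { "project": "demo" }
}
```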
Output
| Field | Type | Description |
|---|---|---|
| response | string | Generated response text from the LLM |
| model_used | string | The model name passed to the LLM |
| provider_used | string | Provider resolved from the model, or "error" on failure |
| token_count | integer | Estimated token count (word_count × 1.3) |
| context_used | boolean | true if context_data was provided |
| citations_included | integer | Count of [cite-id] references found in the response |
| execution_time_seconds | float | Wall-clock time in seconds |
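An illustrative output object, assuming the provider resolves to "openai" for a gpt-4o-mini model (all values are placeholders, not real measurements):

```json
{
  "response": "Compound X inhibits enzyme Y [smith2021].",
  "model_used": "gpt-4o-mini",
  "provider_used": "openai",
  "token_count": 9,
  "context_used": true,
  "citations_included": 1,
  "execution_time_seconds": 1.42
}
```

Note that citations_included is 1 here because the response contains a single [cite-id]-style reference, [smith2021].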
Example
```json
{
  "id": "llmCompletion",
  "type": "chatCompletion",
  "data": {
    "label": "Chat Completion",
    "isExecuted": false,
    "handles": ["inputs", "outputs"],
    "schema": {},
    "params": {
      "query": { "value": "{{ $input.question }}", "isExpression": true, "isAttachedToInputNode": false },
      "system_prompt": { "value": "You are a scientific assistant. Answer using only the provided context.", "isExpression": false, "isAttachedToInputNode": false },
      "model": { "value": "gpt-4o-mini", "isExpression": false, "isAttachedToInputNode": false },
      "context_data": {
        "value": {
          "documents": "{{ @vectorSearch.documents }}",
          "citations": [],
          "facts": [],
          "metadata": {}
        },
        "isExpression": true,
        "isAttachedToInputNode": false
      }
    },
    "inputs": [],
    "outputs": [],
    "errors": []
  },
  "position": { "x": 600, "y": 0 },
  "isSelected": false,
  "isDragging": false
}
```
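In this example, the context_data expression pulls the documents output from an upstream node with id vectorSearch. Since documents is declared as string[], the resolved context_data value at runtime might look like the following sketch (document contents are illustrative):

```json
{
  "documents": [
    "Excerpt from the first retrieved source document.",
    "Excerpt from the second retrieved source document."
  ],
  "citations": [],
  "facts": [],
  "metadata": {}
}
```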