# OpenAI LLM
Sends a chat-style prompt to an OpenAI model via the LiteLLM service and returns the model's text response. It supports a system prompt, dynamic placeholders in the user prompt, temperature control, and an optional max_tokens limit. The node auto-populates available OpenAI models (with 1-hour caching) and falls back to a built-in list if the service is unavailable.
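
Under the hood, the node's request is roughly equivalent to the LiteLLM call below. This is a minimal sketch assuming the standard litellm.completion API; the node's actual wiring (model caching, heartbeats, service routing) is internal and may differ.

```python
# Minimal sketch of the chat-style request this node issues via LiteLLM.
from litellm import completion

response = completion(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "Respond concisely in bullet points."},
        {"role": "user", "content": "Summarize the following document: ..."},
    ],
    temperature=0.5,
    max_tokens=1024,
)

# The node's STRING output corresponds to the first choice's text.
print(response.choices[0].message.content)
```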

## Usage
Use this node for text generation, reasoning, summarization, or transformation with OpenAI models. Select a model (e.g., gpt-4o), set a system prompt to define behavior and style, and provide a user prompt. You can connect additional text inputs (input_1–input_4) to knowledge sources or context and reference them in the prompt with placeholders like {{input_1}}. Adjust temperature for creativity and max_tokens to cap response length.
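
To make the placeholder behavior concrete, the snippet below shows the kind of substitution the node performs at runtime. The helper name is hypothetical; only the observable behavior ({{input_n}} replaced by the connected input's text) is what the node guarantees.

```python
# Hypothetical helper illustrating {{input_n}} substitution.
def substitute_placeholders(prompt: str, inputs: dict[str, str]) -> str:
    for name, value in inputs.items():
        prompt = prompt.replace("{{" + name + "}}", value)
    return prompt

prompt = "Summarize the following document: {{input_1}}"
print(substitute_placeholders(prompt, {"input_1": "Quarterly financial report text"}))
# -> Summarize the following document: Quarterly financial report text
```
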
## Inputs
| Field | Required | Type | Description | Example |
|---|---|---|---|---|
| model | True | STRING | OpenAI model to use. Populated from the LLM service; falls back to a curated list if fetching fails. | gpt-4o |
| system_prompt | True | STRING | System instructions that set the assistant's behavior, style, and constraints. | Respond concisely in bullet points and include actionable next steps. |
| prompt | True | DYNAMIC_STRING | User prompt. Supports dynamic placeholders for inputs: {{input_1}}, {{input_2}}, {{input_3}}, {{input_4}}. | Summarize the following document: {{input_1}} |
| temperature | True | FLOAT | Controls creativity and variability. Lower values yield more deterministic responses. | 0.5 |
| max_tokens | True | INT | Upper limit for the number of tokens in the response. Set to 0 to use provider defaults. | 1024 |
| input_1 | False | STRING | Optional dynamic input for additional context (referenced in prompt via {{input_1}}). | Quarterly financial report text |
| input_2 | False | STRING | Optional dynamic input for additional context (referenced via {{input_2}}). | Customer feedback notes |
| input_3 | False | STRING | Optional dynamic input for additional context (referenced via {{input_3}}). | Product requirements |
| input_4 | False | STRING | Optional dynamic input for additional context (referenced via {{input_4}}). | Support ticket summaries |
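
One convention worth spelling out: max_tokens set to 0 means "use the provider default". A plausible reading, sketched below under the assumption that the node simply omits the parameter when it is 0:

```python
def build_request_kwargs(model: str, temperature: float, max_tokens: int) -> dict:
    # Assumption: max_tokens == 0 means the parameter is omitted from the
    # request, so the provider's own default limit applies.
    kwargs = {"model": model, "temperature": temperature}
    if max_tokens > 0:
        kwargs["max_tokens"] = max_tokens
    return kwargs

print(build_request_kwargs("gpt-4o", 0.5, 0))     # no max_tokens -> provider default
print(build_request_kwargs("gpt-4o", 0.5, 1024))  # explicit cap
```
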
## Outputs
| Field | Type | Description | Example |
|---|---|---|---|
| Output | STRING | The generated text response from the selected OpenAI model. | Here are the key insights from the document... |
## Important Notes
- Model sourcing: The model dropdown is fetched from the LLM service and cached for ~1 hour. If fetching fails, a fallback list of OpenAI models is shown.
- Dynamic placeholders: You can embed {{input_1}}–{{input_4}} in the prompt. These will be replaced with the corresponding inputs at runtime.
- Response truncation: If the response hits the max_tokens limit, it may be truncated (finish_reason: length). Increase max_tokens to allow longer output (see the sketch after this list).
- Timeouts: Requests have a default timeout of about 180 seconds. Long-running requests are kept alive internally with heartbeats.
- Additional parameters: Temperature and max_tokens are passed to the underlying service; extended nodes may expose additional provider-specific parameters.
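
If you need to detect truncation programmatically, finish_reason on the response is the standard signal in the OpenAI-style response shape LiteLLM returns. A sketch (the deliberately small max_tokens is only to force the truncated case):

```python
from litellm import completion

response = completion(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Write a long essay about caching."}],
    max_tokens=64,  # deliberately small to demonstrate truncation
)

choice = response.choices[0]
if choice.finish_reason == "length":
    # The output hit the max_tokens cap; raise the limit and retry if needed.
    print("Truncated:", choice.message.content)
else:
    print("Complete:", choice.message.content)
```
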
## Troubleshooting
- Model ID not found: Ensure the selected model exists in the dropdown. If you typed a custom value, select a listed model or wait for the list to refresh.
- Truncated output: If the response seems cut off, increase max_tokens and try again.
- No placeholder substitution: Verify your prompt uses the exact placeholder names (e.g., {{input_1}}) and that the corresponding input is connected or provided.
- Request timeout: For large contexts or complex models, increase the timeout (if exposed by your environment; see the sketch after this list) or simplify the prompt/context.
- Empty or low-quality output: Reduce temperature for more focused responses, or provide clearer instructions and examples in the system/user prompts.
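
Where your environment lets you pass parameters through to LiteLLM, its standard timeout (seconds) and num_retries options cover transient failures and slow requests. A sketch, assuming those parameters are reachable from your setup:

```python
from litellm import completion

response = completion(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Summarize: ..."}],
    timeout=180,    # matches the node's ~180 s default
    num_retries=2,  # simple retry on transient failures
)
print(response.choices[0].message.content)
```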