OpenAI LLM

Sends a chat-style prompt to an OpenAI model via the LiteLLM service and returns the model's text response. It supports a system prompt, dynamic placeholders in the user prompt, temperature control, and an optional max_tokens limit. The node auto-populates available OpenAI models (with 1-hour caching) and falls back to a built-in list if the service is unavailable.
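
Under the hood, the call is a standard chat completion. A minimal sketch, assuming the litellm Python client as the transport (the node's actual service wiring, caching, and heartbeat logic are internal):

```python
# Minimal sketch of the underlying chat completion, assuming the litellm client.
import litellm

response = litellm.completion(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "Respond concisely in bullet points."},
        {"role": "user", "content": "Summarize the following document: ..."},
    ],
    temperature=0.5,
    max_tokens=1024,  # the node treats 0 as "use provider defaults"
    timeout=180,      # matches the node's ~180-second default timeout
)
print(response.choices[0].message.content)
```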

Usage

Use this node to perform text generation, reasoning, summarization, or transformation with OpenAI models. Select a model (e.g., gpt-4o), set a system prompt for behavior and style, and provide a user prompt. You can connect additional text inputs (input_1–input_4) to knowledge sources or context and reference them in the prompt using placeholders like {{input_1}}. Adjust temperature for creativity and max_tokens for response length.
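
Placeholder substitution is plain string replacement at runtime. An illustrative sketch (the helper name is hypothetical, not part of the node's API):

```python
# Hypothetical helper showing how {{input_N}} placeholders resolve at runtime.
def resolve_placeholders(prompt: str, inputs: dict[str, str]) -> str:
    for name, value in inputs.items():
        prompt = prompt.replace("{{" + name + "}}", value)
    return prompt

prompt = "Summarize the following document: {{input_1}}"
resolved = resolve_placeholders(prompt, {"input_1": "Quarterly financial report text"})
# -> "Summarize the following document: Quarterly financial report text"
```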

Inputs

| Field | Required | Type | Description | Example |
|---|---|---|---|---|
| model | True | STRING | OpenAI model to use. Populated from the LLM service; falls back to a curated list if fetching fails. | gpt-4o |
| system_prompt | True | STRING | System instructions that set the assistant's behavior, style, and constraints. | Respond concisely in bullet points and include actionable next steps. |
| prompt | True | DYNAMIC_STRING | User prompt. Supports dynamic placeholders for inputs: {{input_1}}, {{input_2}}, {{input_3}}, {{input_4}}. | Summarize the following document: {{input_1}} |
| temperature | True | FLOAT | Controls creativity and variability. Lower values yield more deterministic responses. | 0.5 |
| max_tokens | True | INT | Upper limit for the number of tokens in the response. Set to 0 to use provider defaults. | 1024 |
| input_1 | False | STRING | Optional dynamic input for additional context (referenced in the prompt via {{input_1}}). | Quarterly financial report text |
| input_2 | False | STRING | Optional dynamic input for additional context (referenced via {{input_2}}). | Customer feedback notes |
| input_3 | False | STRING | Optional dynamic input for additional context (referenced via {{input_3}}). | Product requirements |
| input_4 | False | STRING | Optional dynamic input for additional context (referenced via {{input_4}}). | Support ticket summaries |

Outputs

| Field | Type | Description | Example |
|---|---|---|---|
| Output | STRING | The generated text response from the selected OpenAI model. | Here are the key insights from the document... |

Important Notes

  • Model sourcing: The model dropdown is fetched from the LLM service and cached for ~1 hour. If fetching fails, a fallback list of OpenAI models is shown (a caching sketch follows this list).
  • Dynamic placeholders: You can embed {{input_1}}–{{input_4}} in the prompt. These will be replaced with the corresponding inputs at runtime.
  • Response truncation: If the response hits the max_tokens limit, it may be truncated (finish_reason: length). Increase max_tokens to allow longer output.
  • Timeouts: Requests have a default timeout of about 180 seconds. Long-running requests are kept alive internally with heartbeats.
  • Additional parameters: Temperature and max_tokens are passed to the underlying service; extended nodes may expose further provider-specific parameters.
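
The model-sourcing behavior can be pictured as a time-based cache with a static fallback. A sketch under assumptions (the fetch function, fallback list, and cache shape are illustrative, not the node's actual internals):

```python
# Sketch of 1-hour model-list caching with a built-in fallback (names illustrative).
import time

FALLBACK_MODELS = ["gpt-4o", "gpt-4o-mini", "gpt-4-turbo"]  # assumed fallback list
CACHE_TTL = 3600  # ~1 hour
_cache = {"models": None, "fetched_at": 0.0}

def get_openai_models(fetch) -> list[str]:
    now = time.time()
    if _cache["models"] is not None and now - _cache["fetched_at"] < CACHE_TTL:
        return _cache["models"]      # fresh enough: serve from cache
    try:
        models = fetch()             # query the LLM service for model IDs
        _cache.update(models=models, fetched_at=now)
        return models
    except Exception:
        return FALLBACK_MODELS       # service unavailable: show the built-in list
```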

Troubleshooting

  • Model ID not found: Ensure the selected model exists in the dropdown. If you typed a custom value, select a listed model or wait for the list to refresh.
  • Truncated output: If the response seems cut off, increase max_tokens and try again (a detection sketch follows this list).
  • No placeholder substitution: Verify your prompt uses the exact placeholder names (e.g., {{input_1}}) and that the corresponding input is connected or provided.
  • Request timeout: For large contexts or complex models, increase the timeout (if exposed by your environment) or simplify the prompt/context.
  • Empty or low-quality output: Reduce temperature for more focused responses, or provide clearer instructions and examples in the system/user prompts.
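
For the truncated-output case, truncation can be confirmed programmatically by checking finish_reason. A sketch, again assuming the litellm client (the retry policy here is illustrative):

```python
# Sketch: detect a max_tokens truncation and retry with a larger budget.
import litellm

def complete(prompt: str, max_tokens: int = 1024, retries: int = 2) -> str:
    resp = litellm.completion(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
        max_tokens=max_tokens,
    )
    choice = resp.choices[0]
    if choice.finish_reason == "length" and retries > 0:  # hit the token cap
        return complete(prompt, max_tokens * 2, retries - 1)
    return choice.message.content
```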