
OpenAI LLM

Calls OpenAI chat/completions via Salt’s LiteLLM layer. You select an OpenAI model, provide prompts and generation settings, and the node returns the model’s text response. It includes built-in fallback mappings for common/legacy model names to help keep workflows working as models change over time.

Usage

Use this node whenever you need OpenAI-powered text generation in a workflow. Typical flow: prepare or construct your prompt (and optional system prompt), select an OpenAI model, tune temperature and max_tokens, then feed the output downstream (e.g., to parsers, tools, or display nodes). It is suited for general-purpose chat, writing, and reasoning tasks that require OpenAI models.
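Conceptually, the node’s fields map directly onto a chat-completions request. The sketch below shows that mapping, assuming LiteLLM’s Python `completion` API; the `build_messages` helper is illustrative, not part of the node:

```python
def build_messages(system_prompt: str, prompt: str) -> list[dict]:
    """Assemble the chat message list sent to the model."""
    messages = []
    if system_prompt:
        messages.append({"role": "system", "content": system_prompt})
    messages.append({"role": "user", "content": prompt})
    return messages

# With litellm installed and an OpenAI credential configured,
# the node's call is roughly equivalent to:
#
#   import litellm
#   resp = litellm.completion(
#       model="gpt-4o",
#       messages=build_messages(
#           "You are a concise technical assistant.",
#           "Summarize the following article in three bullet points.",
#       ),
#       temperature=0.7,
#       max_tokens=512,
#   )
#   output = resp.choices[0].message.content
```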

Inputs

| Field | Required | Type | Description | Example |
| --- | --- | --- | --- | --- |
| model | True | STRING | The OpenAI model to use. A dropdown of available models is provided; legacy names are auto-mapped to current equivalents when possible. | gpt-4o |
| system_prompt | True | STRING | System-level instruction that sets behavior and role for the model. Keep concise and consistent across turns. | You are a concise technical assistant. |
| prompt | True | STRING | The main user prompt or message content sent to the model. | Summarize the following article in three bullet points. |
| temperature | True | FLOAT | Controls randomness. Lower is more deterministic; higher is more creative. | 0.7 |
| max_tokens | True | INT | Upper bound on the number of tokens generated in the response. Must be within the selected model’s limits. | 512 |
| input_1 | False | STRING | Optional additional input that you can interpolate into your prompt template or concatenate upstream. | Customer profile JSON |
| input_2 | False | STRING | Optional additional input for prompt composition. | Knowledge base excerpt |
| input_3 | False | STRING | Optional additional input for prompt composition. | Previous conversation context |
| input_4 | False | STRING | Optional additional input for prompt composition. | Formatting instructions |
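
The optional `input_1`–`input_4` fields are plain strings, so a common pattern is to interpolate whichever are connected into the prompt before the call. A minimal sketch of that composition step (the `compose_prompt` helper is hypothetical, not part of the node):

```python
def compose_prompt(template: str, **inputs: str) -> str:
    """Append connected optional inputs to a prompt template,
    skipping any that are disconnected (None) or empty."""
    parts = [template]
    for name, value in inputs.items():
        if value:
            parts.append(f"{name}:\n{value}")
    return "\n\n".join(parts)
```

For example, `compose_prompt("Summarize for this customer.", input_1=profile_json, input_3=history)` yields a single prompt string with only the connected inputs attached.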

Outputs

| Field | Type | Description | Example |
| --- | --- | --- | --- |
| output | STRING | The OpenAI model’s text response. | Here are three concise bullet points summarizing the article... |

Important Notes

  • Model availability and naming can change; this node includes fallback mappings (e.g., GPT-4o mini → gpt-4o-mini) to reduce breakage, but you should periodically validate model selections.
  • You must have a valid OpenAI API credential configured in your Salt environment or workspace settings for this node to run.
  • max_tokens must respect the chosen model’s context window; if set too high, the service may error or truncate.
  • Temperature interacts with prompts and model choice; for reproducibility, keep temperature low and prompts specific.
  • Usage is subject to organization-level limits and quotas; heavy use may trigger rate limits.
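
The legacy-name fallback described above amounts to a simple lookup before the request is made. A sketch, with an illustrative mapping (the node’s actual table may differ):

```python
# Illustrative legacy-to-current mapping; the node's real table may differ.
LEGACY_MODEL_MAP = {
    "GPT-4o mini": "gpt-4o-mini",
    "gpt-4-turbo-preview": "gpt-4-turbo",
}

def resolve_model(name: str) -> str:
    """Return the current model id for a legacy name,
    or the name unchanged if no mapping applies."""
    return LEGACY_MODEL_MAP.get(name, name)
```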

Troubleshooting

  • Model not found or deprecated: Pick a currently listed model from the dropdown or rely on the node’s automatic legacy-to-current mapping where available.
  • Authentication error: Ensure a valid OpenAI API key is configured in your Salt credentials store or environment variables.
  • Rate limit or quota exceeded: Reduce request frequency, lower token usage, or upgrade plan/quotas; consider batching or adding delays.
  • max_tokens too large: Decrease max_tokens to fit within the model’s context window and your prompt length.
  • Empty or low-quality output: Lower temperature for more deterministic results, refine the system_prompt and prompt, and provide more concrete instructions or examples.
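
For the rate-limit case, a common client-side mitigation is exponential backoff around the call. A minimal sketch, assuming you wrap the request function yourself (the `with_backoff` helper is illustrative, not provided by the node):

```python
import time

def with_backoff(fn, retries=4, base_delay=1.0, sleep=time.sleep):
    """Call fn(), retrying with exponentially growing delays
    (base_delay, 2*base_delay, ...); re-raise after the last attempt."""
    for attempt in range(retries):
        try:
            return fn()
        except Exception:
            if attempt == retries - 1:
                raise
            sleep(base_delay * (2 ** attempt))
```

In practice you would catch only the provider’s rate-limit error rather than bare `Exception`, so that authentication or validation failures surface immediately instead of being retried.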