
OpenAI LLM

Sends prompts to OpenAI language models through the Salt LiteLLM layer and returns the model’s response. Supports a wide range of current and legacy OpenAI model IDs with built-in fallback mapping. Designed to handle typical chat/completions use cases as well as newer OpenAI reasoning and small-footprint variants.

Usage

Use this node when you need text generation, chat-style responses, reasoning, or structured output from OpenAI models within a Salt workflow. Place it after any prompt-building or context-assembling nodes, then route its outputs to parsers, evaluators, or downstream application logic. Choose a model from the supported list (e.g., GPT-4o family, o-series, GPT-5 variants) and tune generation controls like temperature and max_tokens as needed.

Inputs

| Field | Required | Type | Description | Example |
|---|---|---|---|---|
| model | True | STRING | The OpenAI model identifier to use. Supports many current options and legacy names that are auto-mapped to current equivalents. | gpt-4o |
| input | True | STRING | Your prompt or message content sent to the model. Typically a combined user/system prompt or a prepared message string. | Summarize the following article in 3 bullet points. |
| temperature | False | NUMBER | Controls randomness. Higher values produce more diverse outputs. | 0.7 |
| top_p | False | NUMBER | Nucleus sampling parameter to control output diversity. | 0.9 |
| max_tokens | False | NUMBER | Maximum number of tokens to generate in the response. | 1024 |
| system_prompt | False | STRING | Optional system-level instructions to steer the assistant's behavior. | You are a concise assistant that answers with bullet points. |
| response_format | False | STRING | Optional desired response format hint (e.g., plain text or JSON). | json |
| tools | False | Not specified | Optional tool/function definitions to enable tool calling if supported by the chosen model. | Not specified |
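Conceptually, these inputs map onto a standard chat-completions request: optional fields are only included when set, so provider defaults apply otherwise. A minimal sketch of that assembly (the field names follow the table above; the exact payload Salt builds internally is an assumption):

```python
def build_request(model, user_input, system_prompt=None,
                  temperature=None, top_p=None, max_tokens=None,
                  response_format=None):
    """Assemble a chat-style request payload from the node's inputs.

    Illustrative only -- Salt's internal payload shape is an assumption.
    Unset optional parameters are omitted so provider defaults apply.
    """
    messages = []
    if system_prompt:
        messages.append({"role": "system", "content": system_prompt})
    messages.append({"role": "user", "content": user_input})

    payload = {"model": model, "messages": messages}
    for key, value in {
        "temperature": temperature,
        "top_p": top_p,
        "max_tokens": max_tokens,
        "response_format": response_format,
    }.items():
        if value is not None:
            payload[key] = value
    return payload

req = build_request(
    "gpt-4o",
    "Summarize the following article in 3 bullet points.",
    system_prompt="You are a concise assistant.",
    temperature=0.7,
    max_tokens=1024,
)
```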

Outputs

| Field | Type | Description | Example |
|---|---|---|---|
| text | STRING | The primary text output from the model. | • Point 1 • Point 2 • Point 3 |
| raw | JSON | Full raw response payload with metadata such as token usage and model info. | {"id":"resp_abc123","model":"gpt-4o","usage":{"total_tokens":345},"output":"..."} |
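Downstream nodes can read usage metadata out of the raw output. A sketch parsing the example payload from the table above (field names mirror that example; real payloads may carry additional fields):

```python
import json

# Example raw payload, matching the shape shown in the outputs table.
raw = '{"id":"resp_abc123","model":"gpt-4o","usage":{"total_tokens":345},"output":"..."}'

payload = json.loads(raw)
# .get() with defaults avoids KeyErrors if a field is absent.
total_tokens = payload.get("usage", {}).get("total_tokens", 0)
model_used = payload.get("model", "unknown")
```

This kind of extraction is useful for logging token spend per workflow run or routing on which model actually served the request.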

Important Notes

  • Provider is OpenAI. Display name appears as "OpenAI LLM" in the Salt UI.
  • Supported/fallback models include: gpt-4, gpt-3.5-turbo, gpt-4-turbo, gpt-4o, gpt-4o-mini, chatgpt-4o-latest, o1, o1-mini, o3, o3-mini, o4-mini, gpt-4.1, gpt-4.1-mini, gpt-4.1-nano, gpt-5, gpt-5-chat-latest, gpt-5-nano.
  • Some legacy model names are automatically mapped to current equivalents in the UI.
  • This node is subject to LLM usage limits in Salt (per plan/remote config). If you hit limits, you may be blocked from adding or running more LLM nodes.
  • An OpenAI API key must be configured in your Salt environment by an administrator; the node itself does not take a secret key input.
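The legacy-name mapping can be pictured as a simple lookup with a pass-through default. The specific pairings below are illustrative assumptions, not Salt's actual table:

```python
# Hypothetical legacy -> current mapping; Salt's real table may differ.
LEGACY_MODEL_MAP = {
    "gpt-4": "gpt-4.1",
    "gpt-3.5-turbo": "gpt-4o-mini",
    "gpt-4-turbo": "gpt-4o",
}

def resolve_model(name: str) -> str:
    """Return the current equivalent of a legacy model ID, or the name unchanged."""
    return LEGACY_MODEL_MAP.get(name, name)
```

The pass-through default means current model IDs flow through untouched, so only genuinely legacy names are rewritten.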

Troubleshooting

  • Model not found or deprecated: Switch to a currently supported model (e.g., gpt-4o or gpt-4.1). The UI may auto-map some legacy names.
  • Rate limit or quota errors: Reduce request frequency, lower max_tokens, or wait for your quota window to reset. Verify your organization's OpenAI billing and Salt plan limits.
  • Empty or low-quality output: Lower temperature for more deterministic results, provide clearer system instructions, or increase max_tokens.
  • Tool calling not working: Ensure the chosen model supports tools and that tool definitions are correctly structured. If issues persist, try another compatible model.
  • Node limit reached: Salt may restrict the number of LLM nodes per workflow/account tier. Remove unused LLM nodes or upgrade your plan.
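For transient rate-limit errors, retrying with exponential backoff in the calling layer is a common workaround. A generic sketch (this is not a Salt API; `call` stands in for whatever function issues the request):

```python
import time

def with_backoff(call, retries=3, base_delay=1.0):
    """Retry `call` on failure, doubling the wait between attempts.

    Raises the last exception if every attempt fails.
    """
    for attempt in range(retries):
        try:
            return call()
        except Exception:
            if attempt == retries - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))
```

In practice you would catch only the provider's rate-limit exception type rather than bare `Exception`, so genuine errors surface immediately.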