# OpenAI LLM
Sends a prompt to an OpenAI-compatible large language model via Salt's LLM service and returns the generated text. Supports a system prompt for instructions, a user prompt with dynamic placeholders, and temperature and max-token controls. The node automatically lists the available OpenAI models from the service and falls back to a built-in set if the service is unavailable.

## Usage
Use this node to generate text, analyze content, or perform instruction-following tasks with OpenAI models. Provide a clear prompt, optionally combine it with context inputs (input_1–input_4) using {{placeholders}} in the prompt, select a model, and tune temperature and max_tokens to control creativity and length. Typically used in workflows for drafting, summarization, Q&A, content transformation, and reasoning steps.
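Conceptually, the node's parameters map onto a standard OpenAI-style chat completion request. The sketch below is illustrative only: the node actually routes through Salt's LLM service, so the direct SDK client, environment-based API key, and example values shown here are assumptions rather than the node's implementation.

```python
# Illustrative sketch only: the node routes through Salt's LLM service rather than
# calling the OpenAI SDK directly. Client setup and values here are assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

response = client.chat.completions.create(
    model="gpt-4o",          # node input: model
    messages=[
        # node input: system_prompt
        {"role": "system", "content": "You are a helpful, concise assistant."},
        # node input: prompt, after {{input_1}}-{{input_4}} substitution
        {"role": "user", "content": "Summarize the following document in 5 bullet points: <input_1 text>"},
    ],
    temperature=0.5,         # node input: temperature
    max_tokens=1024,         # node input: max_tokens
)

print(response.choices[0].message.content)  # corresponds to the node's Output
```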
## Inputs
| Field | Required | Type | Description | Example | 
|---|---|---|---|---|
| model | True | SELECT | Choose an OpenAI model to run. The list is fetched from the LLM service; if unavailable, a fallback list (e.g., gpt-4o, gpt-4o-mini, gpt-4, gpt-3.5-turbo, o-series) is shown. | gpt-4o | 
| system_prompt | True | STRING | Instructions that guide how the AI should respond (style, format, tone). Keep the rules concise for best results. | You are a helpful, concise assistant. Use bullet points when appropriate. | 
| prompt | True | DYNAMIC_STRING | User message to the model. You can include placeholders like {{input_1}}–{{input_4}} to inject context dynamically. | Summarize the following document in 5 bullet points: {{input_1}} | 
| temperature | True | FLOAT | Controls creativity and randomness. Lower for deterministic outputs, higher for more varied responses. | 0.5 | 
| max_tokens | True | INT | Maximum tokens in the response. Higher allows longer outputs but uses more compute. | 1024 | 
| input_1 | False | STRING | Optional contextual input that can be referenced in the prompt as {{input_1}}. | Full text of a report to summarize | 
| input_2 | False | STRING | Optional contextual input that can be referenced in the prompt as {{input_2}}. | User profile JSON for personalization | 
| input_3 | False | STRING | Optional contextual input that can be referenced in the prompt as {{input_3}}. | Glossary of terms | 
| input_4 | False | STRING | Optional contextual input that can be referenced in the prompt as {{input_4}}. | Style guide or formatting rules | 
## Outputs
| Field | Type | Description | Example | 
|---|---|---|---|
| Output | STRING | The generated text response from the selected OpenAI model. | Here are five concise bullet points summarizing your document: ... | 
## Important Notes
- This node targets the OpenAI provider via the Salt LLM service and automatically retrieves the current model list; if retrieval fails, it uses a predefined fallback set.
- Dynamic placeholders {{input_1}}–{{input_4}} in the prompt are replaced with the corresponding input values before the request is sent to the model (see the sketch after this list).
- Default timeout is 90 seconds. Very long or complex requests may require prompt simplification or reduced max_tokens to avoid timeouts.
- If you select a model not present in the available options, the node will raise an error. Refresh model options or select a listed model.
- Temperature ranges from 0 to 1. max_tokens typically allows up to 4096 tokens per response, but the actual limit depends on the selected model.
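For reference, the placeholder replacement described above behaves roughly like the following hypothetical helper. The node's internal code may differ, and treating missing inputs as empty strings is an assumption.

```python
# Hypothetical sketch of the {{input_N}} substitution described above;
# the node's actual implementation may differ.
def fill_placeholders(prompt: str, inputs: dict[str, str]) -> str:
    for i in range(1, 5):
        key = f"input_{i}"
        # Missing/unconnected inputs are treated as empty strings here (an assumption).
        prompt = prompt.replace("{{" + key + "}}", inputs.get(key, ""))
    return prompt

filled = fill_placeholders(
    "Summarize the following document in 5 bullet points: {{input_1}}",
    {"input_1": "Full text of a report to summarize"},
)
print(filled)  # -> "Summarize the following document in 5 bullet points: Full text of a report to summarize"
```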
## Troubleshooting
- Model not found: Ensure you picked a model from the provided selector. If the list looks outdated, try refreshing the workflow or reloading the model list.
- Empty or truncated output: Increase max_tokens or reduce prompt/context length. Some models may have stricter token limits.
- Inconsistent answers: Lower temperature for more deterministic outputs (e.g., 0.1–0.3).
- Placeholders not replaced: Verify your prompt uses {{input_1}}–{{input_4}} exactly and that the respective inputs are connected/provided.
- Timeouts or service errors: Simplify the prompt, lower max_tokens, or try again (a retry sketch for programmatic access follows below). If the problem persists, check LLM service availability or credentials.
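If you hit repeated timeouts while calling the same OpenAI-compatible endpoint from your own script (outside the node), a simple retry-with-backoff pattern reflects the advice above. This is a hypothetical sketch: the function name, model, and parameters are assumptions and are not part of the node.

```python
# Hypothetical retry sketch for transient timeouts when calling an
# OpenAI-compatible endpoint directly; not part of the node itself.
import time
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def complete_with_retry(prompt: str, max_tokens: int = 1024, attempts: int = 3) -> str:
    for attempt in range(attempts):
        try:
            response = client.chat.completions.create(
                model="gpt-4o-mini",
                messages=[{"role": "user", "content": prompt}],
                max_tokens=max_tokens,
                timeout=90,  # mirrors the node's 90-second default timeout
            )
            return response.choices[0].message.content or ""
        except Exception:
            # Back off, then reduce max_tokens before retrying, as suggested above.
            time.sleep(2 ** attempt)
            max_tokens = max(256, max_tokens // 2)
    raise RuntimeError("LLM request failed after retries")
```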