DeepSeek LLM¶
Runs a text generation request against DeepSeek models through Salt’s LiteLLM connector. Supports dynamic prompts with contextual inputs, temperature control, and token limits. Automatically populates available models from the LLM service and falls back to a curated list if discovery is unavailable.

Usage¶
Use this node when you want to generate or transform text with DeepSeek (e.g., general chat, coding assistance, or reasoning). Connect optional context inputs from upstream nodes or knowledge sources, reference them inside the prompt with {{input_1}}–{{input_4}}, pick a model, and adjust temperature and max_tokens as needed. The node returns a single string containing the model’s reply.
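For example, placeholder substitution works roughly like the following minimal sketch. The node performs this step internally, so you never write this code yourself; the exact rules (such as how unconnected inputs are treated) are assumptions for illustration.

```python
# Minimal sketch of how {{input_N}} placeholders resolve before the request is sent.
prompt_template = "Summarize the following context in 5 bullet points: {{input_1}}"

connected_inputs = {
    "input_1": "Q3 report: revenue grew 12%, churn fell to 3%, support volume stayed flat.",
    # input_2 .. input_4 left unconnected in this example
}

rendered_prompt = prompt_template
for name in ("input_1", "input_2", "input_3", "input_4"):
    # Unconnected inputs are assumed to resolve to an empty string here.
    rendered_prompt = rendered_prompt.replace("{{" + name + "}}", connected_inputs.get(name, ""))

print(rendered_prompt)
# Summarize the following context in 5 bullet points: Q3 report: revenue grew 12%, ...
```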
Inputs¶
| Field | Required | Type | Description | Example |
|---|---|---|---|---|
| model | True | SELECT | DeepSeek model to use. The list is fetched from the LLM service; if unavailable, a fallback list is provided. | deepseek-chat |
| system_prompt | True | STRING | High-level instruction that sets behavior, tone, and style for the model. Leave empty to omit a system message. | You are a concise assistant. Prefer bullet points and short sentences. |
| prompt | True | DYNAMIC_STRING | The main user prompt. You can reference optional inputs using {{input_1}}–{{input_4}} to inject external context. | Summarize the following context in 5 bullet points: {{input_1}} |
| temperature | True | FLOAT | Controls randomness and creativity (0 = deterministic, 1 = most creative). | 0.5 |
| max_tokens | True | INT | Maximum number of tokens in the response. Set to 0 to use the provider default. | 1024 |
| input_1 | False | STRING | Optional context string. Refer to it in the prompt as {{input_1}}. | Long document text or retrieved notes |
| input_2 | False | STRING | Optional context string. Refer to it in the prompt as {{input_2}}. | User profile or preferences |
| input_3 | False | STRING | Optional context string. Refer to it in the prompt as {{input_3}}. | Product specs or API output |
| input_4 | False | STRING | Optional context string. Refer to it in the prompt as {{input_4}}. | Previous conversation turns or extracted insights |
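As a rough picture of how these fields translate into a request, here is a minimal LiteLLM sketch. The model naming, message layout, and parameter handling are assumptions for illustration and may differ from the node's actual internals.

```python
import os
from litellm import completion  # pip install litellm

# Assumed auth for a standalone run; inside the platform, credentials are managed for you.
os.environ["DEEPSEEK_API_KEY"] = "sk-..."

system_prompt = "You are a concise assistant. Prefer bullet points and short sentences."
user_prompt = "Summarize the following context in 5 bullet points: ..."

messages = []
if system_prompt:  # an empty system_prompt simply omits the system message
    messages.append({"role": "system", "content": system_prompt})
messages.append({"role": "user", "content": user_prompt})

response = completion(
    model="deepseek/deepseek-chat",  # LiteLLM's <provider>/<model> naming
    messages=messages,
    temperature=0.5,
    max_tokens=1024,  # the node passes the provider default when this field is 0
    timeout=180,      # mirrors the node's default request timeout
)

print(response.choices[0].message.content)  # corresponds to the node's Output string
print(response.choices[0].finish_reason)    # "length" means the reply was cut off by max_tokens
```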
Outputs¶
| Field | Type | Description | Example |
|---|---|---|---|
| Output | STRING | The DeepSeek model’s response text. | Here are five concise bullet points summarizing the provided context... |
Important Notes¶
- Model options: The node queries the platform's LLM service for available DeepSeek models; if that fails, it falls back to a curated list (e.g., deepseek-chat, deepseek-coder, deepseek-reasoner). A sketch of this behavior follows this list.
- Dynamic variables: Use {{input_1}}–{{input_4}} inside the prompt to inject optional context inputs.
- Timeouts: Requests default to a 180s timeout. Long-running requests are kept alive via internal heartbeats.
- Token limits: If the response is truncated due to max_tokens, you may see a length-related finish reason; increase max_tokens to allow longer outputs.
- Caching: The discovered model list is cached per provider for about 1 hour to reduce repeated lookups.
- Provider: This node targets the DeepSeek provider via the LiteLLM integration.
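The discovery, fallback, and caching behavior noted above can be pictured with a sketch like this. The function name, service call, and cache structure are hypothetical; only the fallback list and the roughly one-hour cache window come from the notes.

```python
import time

FALLBACK_MODELS = ["deepseek-chat", "deepseek-coder", "deepseek-reasoner"]
CACHE_TTL_SECONDS = 60 * 60  # discovered lists are cached for about an hour
_model_cache: dict[str, tuple[float, list[str]]] = {}

def list_deepseek_models(fetch_from_service) -> list[str]:
    """Return DeepSeek model options, preferring the LLM service and falling back to a curated list."""
    cached = _model_cache.get("deepseek")
    if cached and time.time() - cached[0] < CACHE_TTL_SECONDS:
        return cached[1]  # serve the cached discovery result
    try:
        models = fetch_from_service("deepseek")  # hypothetical call to the platform's LLM service
        _model_cache["deepseek"] = (time.time(), models)
        return models
    except Exception:
        return FALLBACK_MODELS  # discovery unavailable: use the curated fallback list
```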
Troubleshooting¶
- Model ID not found: Ensure you selected a model from the list or that the list has loaded. If discovery fails, try a known fallback like "deepseek-chat".
- Truncated output (finish_reason: length): Increase max_tokens to allow a longer response.
- Slow or timed-out requests: Reduce prompt size/context, choose a lighter model, or increase the timeout if supported by your environment.
- Unexpected or generic replies: Refine the system_prompt to set clearer behavior, lower temperature for precision, and provide structured context via inputs.
- Variables not replaced in prompt: Verify you used the correct placeholders (e.g., {{input_1}}) and that the corresponding inputs are connected or non-empty; a small checker along these lines is sketched after this list.
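If you suspect placeholder problems, a helper like the one below can flag typos and missing connections. It is purely illustrative, not part of the node, and assumes the {{input_1}}–{{input_4}} convention described above.

```python
import re

ALLOWED = {f"input_{i}" for i in range(1, 5)}

def check_placeholders(prompt_template: str, connected: set[str]) -> list[str]:
    """Illustrative helper: flag placeholders that will never be replaced."""
    problems = []
    for name in re.findall(r"\{\{(\w+)\}\}", prompt_template):
        token = "{{" + name + "}}"
        if name not in ALLOWED:
            problems.append(f"{token} is not a supported placeholder (use input_1 to input_4).")
        elif name not in connected:
            problems.append(f"{token} has no connected or non-empty input.")
    return problems

print(check_placeholders("Summarize {{input1}} and {{input_2}}", {"input_1"}))
# ['{{input1}} is not a supported placeholder (use input_1 to input_4).',
#  '{{input_2}} has no connected or non-empty input.']
```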