DeepSeek LLM¶
Queries DeepSeek language models via the LiteLLM service. Builds a message list from a system prompt and a user prompt (with optional dynamic placeholders), then sends it to the DeepSeek provider and returns the model’s text reply.
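Under the hood, the request is roughly equivalent to the sketch below, assuming the litellm Python library is what backs the LiteLLM service; the model ID and defaults shown are illustrative, not the node's actual implementation:

```python
import litellm

def call_deepseek(system_prompt: str, prompt: str,
                  model: str = "deepseek/deepseek-coder",
                  temperature: float = 0.5,
                  max_tokens: int = 1024) -> str:
    # Build the message list; an empty system prompt is skipped.
    messages = []
    if system_prompt:
        messages.append({"role": "system", "content": system_prompt})
    messages.append({"role": "user", "content": prompt})

    # Send the request to the DeepSeek provider via LiteLLM.
    response = litellm.completion(
        model=model,
        messages=messages,
        temperature=temperature,
        max_tokens=max_tokens,
    )
    # Return the model's text reply.
    return response.choices[0].message.content
```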

Usage¶
Use this node when you want to call DeepSeek models for text generation, reasoning, or coding tasks. Choose a model, provide a system prompt to guide behavior, write your prompt (optionally referencing the {{input_1}}..{{input_4}} placeholders), and tune temperature and max_tokens. Connect upstream nodes to input_1..input_4 to inject dynamic context, such as retrieved knowledge or user data.
Inputs¶
| Field | Required | Type | Description | Example | 
|---|---|---|---|---|
| model | True | CHOICE | DeepSeek model to use. The list is fetched from the service; if unavailable, fallback options like deepseek-coder and deepseek-reasoner are provided. | deepseek-coder | 
| system_prompt | True | STRING | High-level instructions and rules for how the model should respond. Leave empty to skip a system message. | You are a concise assistant. Prefer clear bullet points. | 
| prompt | True | DYNAMIC_STRING | Main user prompt. Supports placeholder variables {{input_1}}..{{input_4}}, which are replaced with the connected inputs before sending; see the substitution sketch below the table. | Summarize the following notes: {{input_1}} | 
| temperature | True | FLOAT | Sampling temperature from 0 to 1; lower values give more deterministic output, higher values more varied, creative output. | 0.5 | 
| max_tokens | True | INT | Maximum number of tokens to generate in the response. | 1024 | 
| input_1 | False | STRING | Optional dynamic input available to the prompt as {{input_1}}. | Meeting notes text | 
| input_2 | False | STRING | Optional dynamic input available to the prompt as {{input_2}}. | Project requirements | 
| input_3 | False | STRING | Optional dynamic input available to the prompt as {{input_3}}. | Code snippet | 
| input_4 | False | STRING | Optional dynamic input available to the prompt as {{input_4}}. | Customer message | 
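Placeholder substitution behaves roughly like this minimal sketch; replace_placeholders is a hypothetical helper for illustration, not the node's actual code:

```python
def replace_placeholders(prompt: str, inputs: dict[str, str]) -> str:
    # Only {{input_1}}..{{input_4}} are substituted; any other
    # {{...}} placeholder is left unchanged.
    for i in range(1, 5):
        key = f"input_{i}"
        if key in inputs:
            prompt = prompt.replace("{{" + key + "}}", inputs[key])
    return prompt

# {{input_1}} is filled in; {{topic}} is not a supported placeholder
# and stays as-is.
print(replace_placeholders(
    "Summarize these {{topic}} notes: {{input_1}}",
    {"input_1": "Meeting notes text"},
))
# -> Summarize these {{topic}} notes: Meeting notes text
```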
Outputs¶
| Field | Type | Description | Example | 
|---|---|---|---|
| Output | STRING | The generated text message returned by the DeepSeek model. | Here is a concise summary of your notes... | 
Important Notes¶
- Model sourcing: The node attempts to fetch available DeepSeek models from the service. If the fetch fails, it falls back to predefined options (e.g., deepseek-coder, deepseek-reasoner); see the sketch after this list.
- Dynamic variables: Only {{input_1}}..{{input_4}} are replaced. Other placeholders will remain unchanged.
- Timeouts: Long-running requests are supported and will keep a heartbeat; default timeout is 90 seconds unless overridden by system configuration.
- Max tokens: If the response would exceed max_tokens, it may be truncated. Adjust as needed for longer outputs.
- Model validation: Selecting a model not present in the fetched/fallback list will cause an error.
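The model-sourcing and validation notes above amount to a fetch-then-fallback pattern, sketched below; fetch_deepseek_models stands in for the actual service call and is hypothetical:

```python
FALLBACK_MODELS = ["deepseek-coder", "deepseek-reasoner"]

def fetch_deepseek_models() -> list[str]:
    # Placeholder for the real service call that lists DeepSeek models.
    raise ConnectionError("model list service unavailable")

def available_models() -> list[str]:
    # Prefer the live model list; fall back to the predefined options.
    try:
        return fetch_deepseek_models() or FALLBACK_MODELS
    except Exception:
        return FALLBACK_MODELS

def validate_model(model: str) -> None:
    # Selecting a model outside the fetched/fallback list is an error.
    if model not in available_models():
        raise ValueError(f"Model ID not found: {model}")

validate_model("deepseek-coder")  # passes via the fallback list
```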
Troubleshooting¶
- Model ID not found: Ensure the selected model exists in the model list. If the live list fails, use one of the fallback models.
- Service error or timeout: Check connectivity and credentials for the DeepSeek provider in your environment. Increase the timeout if supported in your setup.
- Empty or poor responses: Lower temperature for more deterministic outputs, or increase max_tokens to avoid truncation.
- Placeholders not replaced: Verify that input_1..input_4 are connected and that your prompt uses the correct placeholder format (e.g., {{input_1}}).
- Rate limits or provider errors: Try a different model, slow down request frequency, or validate account limits with the provider.
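For rate limits specifically, if you are calling the provider from your own scripts, a simple exponential backoff like the sketch below is one common mitigation; this assumes litellm's OpenAI-style RateLimitError, and the retry count and delays are illustrative:

```python
import time
import litellm

def complete_with_backoff(model: str, messages: list, retries: int = 3):
    # Retry on rate-limit errors, doubling the wait each time (1s, 2s, ...).
    for attempt in range(retries):
        try:
            return litellm.completion(model=model, messages=messages)
        except litellm.RateLimitError:
            if attempt == retries - 1:
                raise  # out of retries; surface the provider error
            time.sleep(2 ** attempt)
```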