
DeepSeek LLM

Runs a chat-style Large Language Model request using the DeepSeek provider via the LiteLLM gateway. Supports dynamic prompt composition, system instructions, temperature, and max token control. Automatically lists available models from the LLM service and falls back to known DeepSeek model IDs if the service is unavailable.

Usage

Use this node when you need DeepSeek models to generate or transform text in a workflow. Connect optional context inputs to enrich the prompt and reference them with placeholders inside the main prompt. Select the desired DeepSeek model, set creativity (temperature) and response length (max_tokens), then route the single text output to downstream nodes.
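
As a rough sketch, the request this node sends is a standard chat-completion payload routed through the LiteLLM gateway. The helper below is an illustration, not the node's actual implementation: it only assembles the payload, and the gateway endpoint, credentials, and any model-ID prefixing (all workspace-specific) are omitted.

```python
def build_request(model, system_prompt, prompt, temperature=0.5, max_tokens=1024):
    """Assemble a chat-completion payload like the one this node sends.

    The system prompt and user prompt map to separate chat messages;
    temperature and max_tokens pass through unchanged.
    """
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": prompt},
        ],
        "temperature": temperature,
        "max_tokens": max_tokens,
    }
```

The single STRING output of the node corresponds to the generated message content returned for this request.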

Inputs

| Field | Required | Type | Description | Example |
| --- | --- | --- | --- | --- |
| model | True | SELECT | The DeepSeek model to use. The node fetches available models from the LLM service and falls back to known DeepSeek models if listing fails. | deepseek-chat |
| system_prompt | True | STRING | System-level instructions that set behavior, tone, and formatting rules for the model. | You are a concise assistant. Respond with bullet points when appropriate. |
| prompt | True | DYNAMIC_STRING | The user prompt. Supports placeholders {{input_1}} to {{input_4}}, which are replaced with the corresponding optional inputs when provided. | Summarize the following context: {{input_1}} |
| temperature | True | FLOAT | Controls creativity and randomness. Lower values are more deterministic; higher values are more creative. | 0.5 |
| max_tokens | True | INT | Maximum number of tokens to generate in the response. Controls response length. | 1024 |
| input_1 | False | STRING | Optional context string merged into the prompt via {{input_1}}. | Quarterly financial report text... |
| input_2 | False | STRING | Optional context string merged into the prompt via {{input_2}}. | Customer feedback snippets |
| input_3 | False | STRING | Optional context string merged into the prompt via {{input_3}}. | Product specifications |
| input_4 | False | STRING | Optional context string merged into the prompt via {{input_4}}. | Internal policy guidelines |
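
The placeholder behavior described above (missing inputs replaced with empty strings) can be sketched as plain string substitution. `fill_placeholders` is a hypothetical helper for illustration, not the node's internal code:

```python
def fill_placeholders(prompt, inputs):
    """Replace {{input_1}}..{{input_4}} in the prompt.

    inputs: dict such as {"input_1": "..."}; any key that is missing
    (i.e. the optional input is not connected) becomes an empty string.
    """
    for i in range(1, 5):
        key = f"input_{i}"
        prompt = prompt.replace("{{" + key + "}}", inputs.get(key, ""))
    return prompt
```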

Outputs

| Field | Type | Description | Example |
| --- | --- | --- | --- |
| Output | STRING | The model’s generated text response. | Here’s a concise summary of the provided context... |

Important Notes

  • Model choices are fetched and cached per provider. If fetching fails, the node falls back to known DeepSeek models: deepseek-chat, deepseek-coder, deepseek-reasoner.
  • Use placeholders {{input_1}} to {{input_4}} inside the prompt to inject optional inputs; missing inputs are replaced with empty strings.
  • The default request timeout is approximately 3 minutes; very long generations can exceed it, so lower max_tokens or simplify the prompt if requests time out.
  • This node contributes to LLM usage limits defined by your workspace or plan.
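
The fallback behavior from the first note can be sketched as follows. Here `fetch` is a hypothetical stand-in for the real model-listing client, assumed to raise on failure:

```python
# Known DeepSeek model IDs used when listing from the LLM service fails.
FALLBACK_DEEPSEEK_MODELS = ["deepseek-chat", "deepseek-coder", "deepseek-reasoner"]

def list_models(fetch):
    """Return model IDs from the service, or the fallback list.

    fetch: callable returning a list of model IDs; raises on failure.
    An empty listing is treated the same as a failed one.
    """
    try:
        models = fetch()
    except Exception:
        return FALLBACK_DEEPSEEK_MODELS
    return models if models else FALLBACK_DEEPSEEK_MODELS
```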

Troubleshooting

  • Model not found error: Ensure you select a model from the provided dropdown. If the list is empty, try using a known model like "deepseek-chat".
  • Empty or unexpected output: Verify the main prompt is not blank and that placeholders (e.g., {{input_1}}) have corresponding inputs connected.
  • Timeouts or slow responses: Reduce max_tokens, lower temperature for more deterministic output, or try a lighter model.
  • Service unavailable or listing fails: The node will use fallback models. If requests still fail, try again later or check your network and service status.
  • Formatting issues with placeholders: Make sure placeholders exactly match {{input_1}} to {{input_4}} syntax without extra spaces.
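
To debug the last point, a small check can flag placeholder-like tokens that contain extra spaces and would therefore not be substituted. This is an illustrative snippet, not part of the node:

```python
import re

# Loose pattern: tolerates spaces inside the braces, e.g. "{{ input_1 }}".
LOOSE_RE = re.compile(r"\{\{\s*input_[1-4]\s*\}\}")
# Strict pattern: the exact syntax the node substitutes.
STRICT_RE = re.compile(r"\{\{input_[1-4]\}\}")

def malformed_placeholders(prompt):
    """Return placeholder-like tokens that the node would NOT substitute."""
    return [m.group(0) for m in LOOSE_RE.finditer(prompt)
            if not STRICT_RE.fullmatch(m.group(0))]
```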