Anthropic LLM

Runs prompts against Anthropic models via the LiteLLM integration. You provide a model, system prompt, user prompt, and generation settings; the node returns the generated text. Includes a built-in fallback mapping for common Anthropic model names.

Usage

Use this node when you need text generation, reasoning, or assistant-style responses from Anthropic’s Claude family. Typical workflow: choose a Claude model, set a system prompt to define behavior, pass your prompt content (optionally composed from other node outputs via input_1–input_4), tune temperature and max_tokens, then feed the resulting text to downstream nodes for parsing or display.
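Under the hood, the node forwards these settings to LiteLLM. A minimal sketch of how the node's fields map onto an equivalent `litellm.completion` call (the helper below only assembles the request; credentials are managed by the platform, so the actual call is shown commented out):

```python
# Sketch of how the node's fields map onto a LiteLLM completion call.
# Assumes the `litellm` package; the call itself is commented out because
# credentials are configured at the platform level, not in code.

def build_request(model, system_prompt, prompt, temperature=0.5, max_tokens=1024):
    """Assemble keyword arguments for litellm.completion()."""
    return {
        "model": model,  # e.g. "claude-3-7-sonnet-20250219"
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": prompt},
        ],
        "temperature": temperature,
        "max_tokens": max_tokens,
    }

request = build_request(
    model="claude-3-7-sonnet-20250219",
    system_prompt="You are a helpful assistant. Be concise.",
    prompt="Summarize the following report and list three key risks.",
)
# import litellm
# response = litellm.completion(**request)
# text = response.choices[0].message.content
```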

Inputs

| Field | Required | Type | Description | Example |
| --- | --- | --- | --- | --- |
| model | True | STRING | Anthropic model to use. If the service cannot resolve the exact name, the node uses an internal fallback mapping for well-known Claude models. | claude-3-7-sonnet-20250219 |
| system_prompt | True | STRING | System instructions that steer the model’s role, tone, and constraints. | You are a helpful assistant. Be concise and cite sources when possible. |
| prompt | True | STRING | Main user prompt content to send to the model. You can compose this with the optional inputs. | Summarize the following report and list three key risks. |
| temperature | True | FLOAT | Controls randomness. Lower is more deterministic; higher is more creative. | 0.5 |
| max_tokens | True | INT | Maximum number of tokens to generate in the response. | 1024 |
| input_1 | False | STRING | Optional extra text input for composing prompts (e.g., context, retrieved passages). | Previous conversation transcript |
| input_2 | False | STRING | Optional extra text input for composing prompts. | User profile or preferences |
| input_3 | False | STRING | Optional extra text input for composing prompts. | Knowledge base excerpt |
| input_4 | False | STRING | Optional extra text input for composing prompts. | Structured context or instructions |

Outputs

| Field | Type | Description | Example |
| --- | --- | --- | --- |
| Output | STRING | The model’s generated text output. | Here’s a concise summary of the report along with three key risks... |
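Because the node emits a single string, structured data has to be recovered downstream. A hedged sketch of parsing JSON that the model was instructed to produce (the fence-stripping step is an assumption about how models sometimes wrap their output, not platform behavior):

```python
import json

def parse_model_json(output: str):
    """Parse a JSON object from the node's string output.

    Models sometimes wrap JSON in ```json fences even when asked for
    bare JSON, so strip a leading/trailing fence before parsing.
    """
    text = output.strip()
    if text.startswith("```"):
        # Drop the opening fence line (with optional language tag)
        # and the closing fence.
        text = text.split("\n", 1)[1]
        text = text.rsplit("```", 1)[0]
    return json.loads(text)

result = parse_model_json('```json\n{"summary": "ok", "risks": ["a", "b", "c"]}\n```')
```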

Important Notes

  • Model options include common Anthropic Claude variants and are backed by an internal fallback map. If a model alias is used, it resolves to a provider-qualified name like anthropic/claude-... depending on availability.
  • Token limits and supported parameters can vary by model. Ensure max_tokens fits within the model’s context window together with your input size.
  • Temperature meaning: lower values yield more focused, repeatable outputs; higher values increase creativity and variability.
  • Optional inputs (input_1–input_4) are not merged automatically; construct your final prompt string accordingly in the prompt field if you need to include them.
  • Access to Anthropic requires proper service configuration and credentials managed by the platform; ensure they are set by your administrator.
  • This node returns a single STRING response; if you need structured output, instruct the model to format JSON and parse it in subsequent nodes.
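Since input_1–input_4 are not merged automatically, the prompt field has to stitch them together itself. A minimal sketch of one way to compose them (the section labels are illustrative, not a platform convention):

```python
def compose_prompt(base_prompt, *optional_inputs):
    """Join the main prompt with any non-empty optional inputs."""
    sections = [base_prompt]
    for i, extra in enumerate(optional_inputs, start=1):
        if extra:  # skip inputs that were left unset
            sections.append(f"--- Context {i} ---\n{extra}")
    return "\n\n".join(sections)

prompt = compose_prompt(
    "Summarize the following report and list three key risks.",
    "Previous conversation transcript",  # input_1
    None,                                # input_2 unused
    "Knowledge base excerpt",            # input_3
)
```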

Troubleshooting

  • Invalid model name or unsupported version: Pick a model from the available options or use one of the mapped Claude names (e.g., claude-3-7-sonnet-20250219).
  • Authorization or key errors: Verify that Anthropic access is configured by your administrator. The node itself does not accept keys directly.
  • Context length or token limit exceeded: Reduce prompt size, lower max_tokens, or choose a model with a larger context window.
  • Empty or low-quality outputs: Decrease temperature for more deterministic results, refine the system_prompt, and provide clearer instructions.
  • Rate limits or timeouts: Retry with backoff, reduce request frequency, or simplify prompts to lower latency.
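The retry-with-backoff advice above can be sketched as a simple wrapper (the exponential delays and the broad exception handling are illustrative; adapt them to the specific errors your platform surfaces):

```python
import time

def call_with_backoff(fn, retries=4, base_delay=1.0):
    """Call fn(), retrying on failure with exponential backoff."""
    for attempt in range(retries):
        try:
            return fn()
        except Exception:
            if attempt == retries - 1:
                raise  # out of retries; surface the last error
            time.sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...
```

Wrap the function that issues the node's request with `call_with_backoff` so transient rate-limit errors or timeouts are retried instead of failing the workflow immediately.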