
Anthropic LLM

Sends a text prompt to Anthropic models (Claude family) through the LiteLLM provider and returns the model’s text response. Supports a system prompt, temperature control, and maximum token limits, with convenient optional inputs for templating or dynamic context. Includes built-in fallback model name mappings to help resolve common Anthropic model identifiers.
Usage

Use this node whenever you need a Claude (Anthropic) model to generate, summarize, translate, or reason over text. Typically, you will set the model, provide a system prompt to define behavior, pass the main user prompt (optionally enriched with input_1..input_4), and tune temperature and max_tokens. Connect its string output to downstream logic, routing, or storage nodes.
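The request the node assembles can be sketched as follows. This is a hypothetical illustration: `build_request` and its field names are assumptions about how a system prompt, user prompt, temperature, and max_tokens map onto a LiteLLM-style chat-completion call, not the node's actual internals.

```python
def build_request(model, system_prompt, prompt, temperature, max_tokens):
    """Assemble chat-completion arguments in the shape LiteLLM expects."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": prompt},
        ],
        "temperature": temperature,
        "max_tokens": max_tokens,
    }

req = build_request(
    "claude-3-5-sonnet-20241022",
    "You are a concise, helpful assistant. Provide clear bullet-point answers.",
    "Summarize the release notes in three bullets.",
    0.5,
    1024,
)
# The node then forwards this to the provider, roughly:
#   import litellm                      # requires the litellm package
#   response = litellm.completion(**req)  # needs ANTHROPIC_API_KEY in the env
#   text = response.choices[0].message.content
print(req["model"])
```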

Inputs

| Field | Required | Type | Description | Example |
| --- | --- | --- | --- | --- |
| model | True | STRING | The Anthropic model identifier to use. Accepts Anthropic Claude variants; common examples are mapped via internal fallbacks. | claude-3-5-sonnet-20241022 |
| system_prompt | True | STRING | High-level instructions guiding the model's behavior and style for the entire session. | You are a concise, helpful assistant. Provide clear bullet-point answers. |
| prompt | True | STRING | The main user message or task description sent to the model. You may incorporate optional inputs as context. | Summarize the following text and extract key actions: {input_1} |
| temperature | True | FLOAT | Controls randomness (0 = deterministic, 1 = creative). Higher values increase variability. | 0.5 |
| max_tokens | True | INT | Maximum number of tokens to generate in the response. Must be within the selected model's limits. | 1024 |
| input_1 | False | STRING | Optional auxiliary text input for templating or adding extra context to the prompt. | Customer message: I can't log in to my account after resetting my password. |
| input_2 | False | STRING | Optional auxiliary text input for templating or adding extra context to the prompt. | Relevant policy: Users must confirm their email after password reset. |
| input_3 | False | STRING | Optional auxiliary text input for templating or adding extra context to the prompt. | Known issue: Email delivery delays reported in region X. |
| input_4 | False | STRING | Optional auxiliary text input for templating or adding extra context to the prompt. | Support guidelines: Always verify account ownership before changes. |
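One way the optional inputs can feed a prompt template is plain placeholder substitution. The template and field names below are illustrative; the node's real templating syntax may differ.

```python
# Hypothetical templating: substitute input_1..input_4 into the prompt.
prompt_template = (
    "Summarize the following and extract key actions: {input_1}\n"
    "Relevant policy: {input_2}"
)
inputs = {
    "input_1": "Customer message: I can't log in after resetting my password.",
    "input_2": "Users must confirm their email after password reset.",
    "input_3": "",
    "input_4": "",
}
final_prompt = prompt_template.format(**inputs)
```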

Outputs

| Field | Type | Description | Example |
| --- | --- | --- | --- |
| Output | STRING | The generated text from the selected Anthropic model. | Here's a concise summary with key actions and suggested next steps... |

Important Notes

  • Model selection: Use valid Anthropic model IDs (e.g., Claude family). The node includes fallback mappings to resolve common identifiers.
  • Credentials: Ensure Anthropic access is configured in your environment or project settings before use. Never paste secrets directly into prompts; store them securely as environment or project variables.
  • Token limits: max_tokens must not exceed the model’s limit. If set too high, the request may fail.
  • Temperature tuning: Lower values (e.g., 0.0–0.3) are better for deterministic outputs; higher values (e.g., 0.7–1.0) encourage creativity.
  • Optional inputs are pass-through strings you can reference in your prompt to add context or build templates.
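The fallback model-name mapping mentioned above can be pictured as a simple alias table. The aliases and resolved IDs below are assumptions for illustration, not the node's actual mapping.

```python
# Hypothetical alias table; real entries belong to the node's internals.
FALLBACK_MODELS = {
    "claude-3-5-sonnet": "claude-3-5-sonnet-20241022",
    "claude-3-haiku": "claude-3-haiku-20240307",
}

def resolve_model(name: str) -> str:
    """Return the mapped full identifier, or the name unchanged if unknown."""
    return FALLBACK_MODELS.get(name, name)
```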

Troubleshooting

  • Invalid or missing API credentials: Configure the Anthropic API key in your project or environment settings and retry.
  • Model not found: Verify the model name matches an available Anthropic model or use a known fallback identifier.
  • Output too long or truncated: Reduce response length by lowering max_tokens or by requesting a shorter answer in your prompt.
  • Empty or low-quality responses: Provide a clearer system_prompt, add structured context via input_1..input_4, or lower temperature for more focused results.
  • Rate limits or timeouts: Retry with exponential backoff, reduce request frequency, or simplify prompts to shorten processing time.
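The retry-with-exponential-backoff advice can be sketched as a small wrapper. This is a generic pattern, not code shipped with the node; in practice you would catch the provider's specific rate-limit and timeout exceptions rather than bare `Exception`.

```python
import random
import time

def call_with_backoff(fn, max_attempts=5, base_delay=1.0):
    """Retry fn on failure, doubling the delay each attempt and adding jitter."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the error
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.1))
```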