Anthropic LLM

Sends a text prompt to Anthropic models via the LiteLLM integration and returns the model’s generated text. You select an Anthropic model, provide a system prompt and user prompt, and control generation with temperature and max token settings. Includes fallback model mappings to keep workflows running if a preferred model name isn’t available.

Usage

Use this node whenever you need text generation from Anthropic’s Claude family within a Salt workflow. Typical usage: set the model (e.g., a Claude Sonnet/Haiku/Opus variant), provide a system prompt to steer behavior, pass the main prompt (optionally augmented by auxiliary inputs), and adjust temperature/max tokens. Chain the output into downstream nodes for further processing or display.
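
For reference, the node’s behavior corresponds to a plain LiteLLM completion call. A minimal sketch, assuming LiteLLM is installed and an Anthropic API key is configured in the environment; the model name, prompts, and settings below are placeholder values, not defaults the node enforces:

```python
# Minimal sketch of the LiteLLM call this node wraps (illustrative, not the node's source).
import litellm

response = litellm.completion(
    model="claude-3-5-sonnet-20241022",  # any available Anthropic model name
    messages=[
        {"role": "system", "content": "You are a concise assistant that answers with clear bullet points."},
        {"role": "user", "content": "Summarize the following article in 5 bullets: ..."},
    ],
    temperature=0.5,  # 0-1 scale; lower is more deterministic
    max_tokens=1024,  # capped by the selected model's allowance
)

print(response.choices[0].message.content)  # the STRING output the node emits
```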

Inputs

| Field | Required | Type | Description | Example |
| --- | --- | --- | --- | --- |
| model | True | STRING | The Anthropic model to use. Choose from available options; the node provides fallback mappings for common Claude variants if direct names are not available. | claude-3-5-sonnet-20241022 |
| system_prompt | True | STRING | High-level instructions defining the assistant’s role, style, or constraints. Applied before the main prompt to steer the model’s behavior. | You are a concise assistant that answers with clear bullet points. |
| prompt | True | STRING | The main user input or task to complete. This is the content the model will respond to. | Summarize the following article in 5 bullets: |
| temperature | True | FLOAT | Controls randomness of the output. Lower values make responses more deterministic; higher values make them more creative. | 0.5 |
| max_tokens | True | INT | Maximum number of tokens to generate in the response. Actual limits may depend on the selected model. | 1024 |
| input_1 | False | STRING | Optional auxiliary input to provide extra context or variables to your prompt. | Customer profile data JSON |
| input_2 | False | STRING | Optional auxiliary input to provide extra context or variables to your prompt. | Conversation history text |
| input_3 | False | STRING | Optional auxiliary input to provide extra context or variables to your prompt. | Knowledge base excerpt |
| input_4 | False | STRING | Optional auxiliary input to provide extra context or variables to your prompt. | Task parameters JSON |
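
How the node interpolates the auxiliary inputs into the final prompt is not specified here; one plausible pattern, useful if you pre-assemble prompts upstream, is to append each connected input as a labeled context block. A sketch with a hypothetical helper name and labels:

```python
# Illustrative only: one way auxiliary inputs might be folded into the main prompt.
def build_prompt(prompt: str, *aux_inputs: str) -> str:
    """Append each non-empty auxiliary input as a labeled context block."""
    blocks = [prompt]
    for i, aux in enumerate(aux_inputs, start=1):
        if aux:  # skip unconnected or empty inputs
            blocks.append(f"--- Context {i} ---\n{aux}")
    return "\n\n".join(blocks)

full_prompt = build_prompt(
    "Summarize the following article in 5 bullets:",
    '{"customer_id": 42, "tier": "pro"}',       # e.g., input_1: customer profile JSON
    "User: Hi, I need help with my invoice.",   # e.g., input_2: conversation history
)
```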

Outputs

| Field | Type | Description | Example |
| --- | --- | --- | --- |
| Output | STRING | The text generated by the selected Anthropic model. | Here are the five key points from the article... |

Important Notes

  • Model selection: The node exposes Anthropic models and includes fallback mappings (e.g., various Claude Haiku/Sonnet/Opus versions). If an exact name isn’t available at runtime, a mapped alternative may be used; the sketch after this list illustrates the pattern.
  • Token limits: Effective max token limits depend on the chosen model. If you set max_tokens above a model’s allowance, the service may cap it or return an error.
  • Temperature scale: Temperature is on a 0–1 scale; lower values yield more deterministic outputs.
  • Auxiliary inputs: input_1 to input_4 are optional and can be used to pass additional context for your prompts.
  • Service configuration: Access to Anthropic models depends on your Salt environment’s configured providers and credentials. Ensure Anthropic access is enabled by your admin.
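
The node’s actual mapping table is internal, but the fallback pattern itself is a simple dictionary lookup. A sketch with made-up entries, shown only to illustrate the shape:

```python
# Hypothetical fallback mappings -- the node's real table may differ.
FALLBACK_MODELS = {
    "claude-3-5-sonnet": "claude-3-5-sonnet-20241022",
    "claude-3-haiku": "claude-3-haiku-20240307",
    "claude-3-opus": "claude-3-opus-20240229",
}

def resolve_model(requested: str, available: set[str]) -> str:
    """Return the requested model if available, otherwise its mapped fallback."""
    if requested in available:
        return requested
    fallback = FALLBACK_MODELS.get(requested)
    if fallback and fallback in available:
        return fallback
    raise ValueError(f"No available model or fallback for {requested!r}")
```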

Troubleshooting

  • Model not found: If a selected model isn’t available, choose another from the list or rely on the provided fallback mappings.
  • Empty or truncated output: Raise max_tokens if responses are cut off before finishing (or lower it if requests exceed the model’s cap), reduce temperature for more stable completions, or simplify and shorten your prompt.
  • Provider/credential errors: Verify that Anthropic access is configured in your environment and that your organization has the necessary permissions; the sketch after this list shows how such errors surface through LiteLLM.
  • Inconsistent style or behavior: Strengthen the system_prompt with clearer, explicit instructions, or reduce temperature.
  • Long-running requests or timeouts: Decrease max_tokens, simplify your prompt, or switch to a smaller/faster model variant.
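
When reproducing these failures outside Salt, LiteLLM raises OpenAI-style exception classes (AuthenticationError, NotFoundError, RateLimitError, Timeout), which makes the failure mode explicit. A rough sketch; the exception coverage here is illustrative, not exhaustive:

```python
import litellm

try:
    response = litellm.completion(
        model="claude-3-5-sonnet-20241022",
        messages=[{"role": "user", "content": "ping"}],
        max_tokens=16,
    )
except litellm.AuthenticationError:
    print("Check that Anthropic credentials are configured for this environment.")
except litellm.NotFoundError:
    print("Model name not recognized -- pick another model or rely on fallbacks.")
except litellm.RateLimitError:
    print("Provider rate limit hit -- retry with backoff.")
except litellm.Timeout:
    print("Request timed out -- lower max_tokens or try a faster model variant.")
else:
    print(response.choices[0].message.content)
```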