Groq LLM

Sends a chat-style prompt to Groq-hosted language models through the Salt LLM service and returns the model’s reply as text. The node automatically lists available Groq models from the service (and falls back to built-in mappings if the service list is unavailable) and supports dynamic prompt composition using optional context inputs.

Usage

Use this node to generate or transform text with Groq models. Select a model, set the system prompt (behavior/rules), compose the user prompt (optionally inserting {{input_1}}–{{input_4}}), and adjust temperature and max_tokens. It is commonly placed after retrieval or data-prep nodes so relevant context can be injected via input_1–input_4.
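Placeholder substitution is plain string replacement. The sketch below illustrates the idea in Python; the compose_prompt helper name and the empty-string handling for unset inputs are assumptions, not the node's documented implementation.

```python
# Illustrative sketch of {{input_N}} substitution (hypothetical helper).
def compose_prompt(template: str, inputs: dict) -> str:
    """Replace {{input_1}}..{{input_4}} with the supplied optional inputs."""
    prompt = template
    for n in range(1, 5):
        key = f"input_{n}"
        # Assumption: unset inputs resolve to empty strings.
        prompt = prompt.replace("{{%s}}" % key, inputs.get(key, ""))
    return prompt

print(compose_prompt(
    "Summarize the following research notes: {{input_1}}",
    {"input_1": "Key findings from report A..."},
))
# -> Summarize the following research notes: Key findings from report A...
```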

Inputs

| Field | Required | Type | Description | Example |
|---|---|---|---|---|
| model | True | STRING | The Groq model to use. A selectable list is fetched from the LLM service; if unavailable, a fallback list is provided. | llama-3.1-8b-instant |
| system_prompt | True | STRING | High-level instructions that shape the assistant's behavior and tone. Leave empty to omit the system message. | You are a concise assistant. Provide step-by-step reasoning only when asked. |
| prompt | True | DYNAMIC_STRING | User message to the model. Supports placeholders {{input_1}}–{{input_4}}, which are replaced with the optional inputs below. | Summarize the following research notes: {{input_1}} |
| temperature | True | FLOAT | Controls randomness/creativity. Lower is more deterministic; higher is more diverse. | 0.5 |
| max_tokens | True | INT | Maximum number of tokens to generate in the response. | 1024 |
| input_1 | False | STRING | Optional context inserted via {{input_1}} in the prompt. | Key findings from report A... |
| input_2 | False | STRING | Optional context inserted via {{input_2}} in the prompt. | Customer feedback highlights... |
| input_3 | False | STRING | Optional context inserted via {{input_3}} in the prompt. | Product specs v2.1 |
| input_4 | False | STRING | Optional context inserted via {{input_4}} in the prompt. | Constraints: budget <= $50k |

Outputs

| Field | Type | Description | Example |
|---|---|---|---|
| Output | STRING | The generated text reply from the selected Groq model. | Here is a concise summary of the provided notes... |

Important Notes

  • Model list resolution: the node queries the LLM service for available Groq models and caches the result per provider. If the query fails, it falls back to a built-in list (see the first sketch after this list).
  • Supported prompt placeholders: {{input_1}}, {{input_2}}, {{input_3}}, {{input_4}} are replaced with the corresponding optional inputs.
  • Default behavior: if a non-empty system_prompt is provided, it is sent as the system message; otherwise only the user prompt is sent (see the second sketch after this list).
  • Timeouts: the default request timeout is 90 seconds. Long-running models may require a higher value if your environment supports changing it.
  • Unknown models: selecting a model name that appears in neither the service list nor the fallback list causes the node to raise an error.
  • Usage limits: this node counts toward LLM-type limits in project-level usage controls.
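For illustration, the fetch-then-fallback model resolution could look like the sketch below. The endpoint path, response shape, and fallback contents are assumptions; only the overall flow (query the service, cache per provider, fall back on failure) comes from the note above.

```python
import requests

# Hypothetical built-in fallback; the real mapping ships with the node.
FALLBACK_MODELS = ["llama-3.1-8b-instant"]
_model_cache = {}  # results cached per provider, as noted above

def list_groq_models(service_url: str) -> list:
    """Return available Groq models, falling back to the built-in list."""
    if "groq" in _model_cache:
        return _model_cache["groq"]
    try:
        # Assumed endpoint and response shape; the Salt LLM service may differ.
        resp = requests.get(f"{service_url}/models",
                            params={"provider": "groq"}, timeout=10)
        resp.raise_for_status()
        models = resp.json()["models"]
    except Exception:
        models = FALLBACK_MODELS  # service unreachable: use built-in list
    _model_cache["groq"] = models
    return models
```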
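A second sketch covers the conditional system message and the 90-second default timeout. The chat-completions payload shape and endpoint are assumed conventions here, not the service's confirmed schema.

```python
import requests

def build_messages(system_prompt: str, prompt: str) -> list:
    """Include a system message only when system_prompt is non-empty."""
    messages = []
    if system_prompt.strip():
        messages.append({"role": "system", "content": system_prompt})
    messages.append({"role": "user", "content": prompt})
    return messages

def call_groq(service_url, model, system_prompt, prompt,
              temperature=0.5, max_tokens=1024):
    payload = {
        "model": model,
        "messages": build_messages(system_prompt, prompt),
        "temperature": temperature,
        "max_tokens": max_tokens,
    }
    # Default request timeout is 90 seconds (see "Timeouts" above).
    resp = requests.post(f"{service_url}/chat/completions",
                         json=payload, timeout=90)
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]
```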

Troubleshooting

  • Model not found error: Choose a model from the provided dropdown list. If the list seems incomplete, retry after the service refreshes or use one from the fallback set.
  • Empty or unexpected response: Verify that the prompt resolves correctly (all {{input_N}} placeholders are replaced) and that max_tokens is high enough for the expected output.
  • Timeouts or failures: Increase the timeout if supported by the environment and ensure the LLM service endpoint is reachable.
  • Low-quality outputs: Reduce temperature for more deterministic responses, or provide more explicit instructions/context in system_prompt and prompt.