Groq LLM¶
Executes a text generation request against Groq-hosted large language models via the LiteLLM service layer. It supports dynamic prompt composition, forwards common provider parameters (such as temperature and max_tokens), and automatically lists available Groq models (with a safe fallback list if discovery fails).

Usage¶
Use this node when you need a Groq-backed LLM to generate or transform text. Typical flow: select a Groq model, provide a system prompt to set behavior, craft a user prompt (optionally referencing {{input_1}}–{{input_4}}), and tune temperature/max_tokens. Connect external context (e.g., knowledge base outputs) into input_1–input_4 to enrich the prompt before it is sent to the LLM. The node outputs the model’s text response as a single string.
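Placeholder substitution works roughly like the following sketch. The function name and the exact replacement rules (e.g., leaving unmatched placeholders untouched) are illustrative assumptions, not the node's actual implementation:

```python
import re

def render_prompt(template: str, inputs: dict) -> str:
    """Replace {{input_N}} placeholders with connected input values.

    Placeholders with no matching input are left as-is; this is an
    assumption about the node's behavior, not a documented guarantee.
    """
    def substitute(match: re.Match) -> str:
        value = inputs.get(match.group(1))
        return value if value is not None else match.group(0)

    return re.sub(r"\{\{(input_[1-4])\}\}", substitute, template)

prompt = render_prompt(
    "Summarize the following context: {{input_1}}",
    {"input_1": "Company knowledge base excerpt A"},
)
# prompt == "Summarize the following context: Company knowledge base excerpt A"
```

Only {{input_1}} through {{input_4}} are recognized; any other double-brace token passes through to the model verbatim.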
Inputs¶
| Field | Required | Type | Description | Example |
|---|---|---|---|---|
| model | True | CHOICE | The Groq model to use. The list is fetched from the LLM service and falls back to a curated Groq model list if the service is unavailable. | llama-3.1-8b-instant |
| system_prompt | True | STRING | Instructions that set behavior, tone, and formatting for the assistant. If non-empty, it is sent as a system message. | Respond concisely and use bullet points when appropriate. |
| prompt | True | DYNAMIC_STRING | The main user message. You can reference optional inputs using double-brace placeholders like {{input_1}}–{{input_4}} to inject external context. | Summarize the following context: {{input_1}} |
| temperature | True | FLOAT | Controls randomness/creativity: lower values yield more focused, repeatable output; higher values yield more diverse output. Passed to the provider. | 0.5 |
| max_tokens | True | INT | Maximum number of tokens in the generated response; 0 lets the provider decide. | 512 |
| input_1 | False | STRING | Optional context to merge into the prompt via {{input_1}}. | Company knowledge base excerpt A |
| input_2 | False | STRING | Optional context to merge into the prompt via {{input_2}}. | Company knowledge base excerpt B |
| input_3 | False | STRING | Optional context to merge into the prompt via {{input_3}}. | User profile details |
| input_4 | False | STRING | Optional context to merge into the prompt via {{input_4}}. | Latest ticket transcript |
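Putting the inputs together, the request the node sends can be pictured with this minimal sketch. The function name and payload shape are assumptions for illustration; the real node routes the call through the LiteLLM service layer rather than building a raw dict:

```python
def build_request(model: str, system_prompt: str, prompt: str,
                  temperature: float, max_tokens: int) -> dict:
    """Assemble an OpenAI-style chat request from the node's inputs."""
    messages = []
    if system_prompt.strip():  # assumption: empty system prompts are skipped
        messages.append({"role": "system", "content": system_prompt})
    messages.append({"role": "user", "content": prompt})

    provider_params = {"temperature": temperature}
    if max_tokens > 0:  # 0 lets the provider decide, so omit the bound
        provider_params["max_tokens"] = max_tokens

    return {"model": model, "messages": messages, **provider_params}

request = build_request(
    "llama-3.1-8b-instant",
    "Respond concisely and use bullet points when appropriate.",
    "Summarize the following context: ...",
    temperature=0.5,
    max_tokens=512,
)
```

Note that the system prompt becomes a separate system message only when non-empty, matching the system_prompt field description above.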
Outputs¶
| Field | Type | Description | Example |
|---|---|---|---|
| Output | STRING | The LLM’s generated text response. | Here is a concise summary of your context: ... |
Important Notes¶
- Model list discovery is cached per provider for about 1 hour; newly added models may not appear until the cache expires.
- If the model catalog service is unavailable, the node uses a built-in Groq fallback list (e.g., Llama-family and compatible models).
- You can inject external context into the prompt using {{input_1}}–{{input_4}} placeholders; ensure the placeholders match the connected inputs.
- temperature and max_tokens (and any other advanced parameters you add) are forwarded to the LLM service as provider parameters.
- A default timeout of approximately 180 seconds applies; very long or slow generations may require adjusting the timeout via advanced parameters.
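The cache-with-fallback behavior described above can be sketched as follows. The function, cache structure, and fallback list contents are assumptions; only the roughly one-hour TTL and the fallback-on-failure behavior come from the notes above:

```python
import time

FALLBACK_MODELS = ["llama-3.1-8b-instant"]  # placeholder; the real built-in list differs
CACHE_TTL_SECONDS = 3600  # "about 1 hour"

_cache: dict = {}

def list_groq_models(fetch) -> list:
    """Return the cached model list, refetching after the TTL expires.

    `fetch` stands in for the LLM service's model-catalog call. On
    failure with no cached result, the built-in fallback list is used.
    """
    now = time.time()
    if "models" in _cache and now - _cache["at"] < CACHE_TTL_SECONDS:
        return _cache["models"]
    try:
        models = fetch()
    except Exception:
        return FALLBACK_MODELS  # service unreachable: fall back, do not cache
    _cache.update(models=models, at=now)
    return models
```

This is why a newly released model can be missing from the dropdown: a successful fetch from before the release stays cached until the TTL elapses.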
Troubleshooting¶
- Model ID not found: Select a model from the offered list or wait for model cache refresh. If using a newly released model, try again after the cache window.
- Truncated response or incomplete answer: Increase max_tokens; a finish reason of 'length' indicates the limit was hit.
- Slow or timed-out requests: Reduce prompt size/complexity, pick a smaller model, or increase the timeout via advanced parameters.
- Placeholders not replaced: Verify you used double braces (e.g., {{input_1}}) and that the corresponding inputs are connected or have values.
- Request failed errors: Ensure the LLM service is reachable and provider credentials/configuration for Groq are correctly set in your environment.
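For the truncation case above, the check boils down to inspecting the finish reason. This sketch assumes an OpenAI-style response payload (a `choices` list with a `finish_reason` per choice), which is how LiteLLM-compatible services commonly shape responses; the exact payload your deployment exposes may differ:

```python
def is_truncated(response: dict) -> bool:
    """Return True if the first choice stopped because max_tokens was hit."""
    choices = response.get("choices", [])
    return bool(choices) and choices[0].get("finish_reason") == "length"

if is_truncated({"choices": [{"finish_reason": "length"}]}):
    print("Response was cut off; raise max_tokens and retry.")
```

A finish reason of 'stop' means the model ended naturally; 'length' means the configured limit was the cause, so raising max_tokens is the right fix.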