# Groq LLM
Sends a text prompt to Groq-hosted large language models and returns the model's response. Supports a system prompt for behavior control, optional context inputs, and temperature and max-token settings. Model options are fetched from the Salt LLM service, with a built-in fallback list if the service is unavailable.

## Usage
Use this node when you need to generate or transform text using Groq-backed models (e.g., Llama, Qwen, Kimi). Select a model, provide a clear prompt, optionally include a system prompt to shape the assistant’s behavior, and set temperature/max_tokens to control style and length. You can pass in up to four optional context strings and reference them in your prompt with placeholders (e.g., {{input_1}}). Chain the output into downstream nodes that need the generated text.
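For intuition, the node's request is roughly equivalent to a direct chat-completion call. The sketch below is illustrative only; it assumes the official `groq` Python SDK and a `GROQ_API_KEY` environment variable, and the node's internal client may differ:

```python
# Illustrative sketch only -- the node performs this call internally.
# Assumes the official `groq` Python SDK and a GROQ_API_KEY env var.
from groq import Groq

client = Groq()  # picks up GROQ_API_KEY from the environment

response = client.chat.completions.create(
    model="llama-3.1-8b-instant",
    messages=[
        {"role": "system", "content": "You are a concise technical assistant."},
        {"role": "user", "content": "Summarize the following notes in 5 bullets: ..."},
    ],
    temperature=0.5,   # lower = more deterministic, higher = more diverse
    max_tokens=1024,   # the node treats 0 as "use the provider default"
)
print(response.choices[0].message.content)
```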
## Inputs
| Field | Required | Type | Description | Example |
|---|---|---|---|---|
| model | True | STRING | The Groq model to use. The list is loaded from the Salt LLM service and falls back to a curated set if the service is not reachable. | llama-3.1-8b-instant |
| system_prompt | True | STRING | Instructions that set role, tone, and formatting rules for the assistant. Leave empty to skip. Multiline supported. | You are a concise technical assistant. Prefer bullet points and short code comments. |
| prompt | True | DYNAMIC_STRING | The main user prompt. You can reference optional inputs using placeholders like {{input_1}} ... {{input_4}} to inject external context. | Summarize the following notes in 5 bullets: {{input_1}} |
| temperature | True | FLOAT | Controls creativity and variability. Lower is more deterministic; higher is more diverse. | 0.5 |
| max_tokens | True | INT | Maximum number of tokens to generate. Set to 0 to use the provider's default. | 1024 |
| input_1 | False | STRING | Optional context string that can be referenced in the prompt as {{input_1}}. | Meeting notes from 2025-09-15... |
| input_2 | False | STRING | Optional context string that can be referenced in the prompt as {{input_2}}. | Customer feedback CSV excerpt... |
| input_3 | False | STRING | Optional context string that can be referenced in the prompt as {{input_3}}. | Style guide: use active voice. |
| input_4 | False | STRING | Optional context string that can be referenced in the prompt as {{input_4}}. | Audience: non-technical executives. |
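The `{{input_N}}` placeholders are plain string substitutions into the prompt. A minimal sketch of the assumed behavior follows; `resolve_prompt` is a hypothetical helper, not part of the node's public API, and the node's actual templating may differ:

```python
# Minimal sketch of {{input_N}} substitution; `resolve_prompt` is a
# hypothetical helper, not part of the node's public API.
def resolve_prompt(prompt: str, **inputs: str) -> str:
    for key in ("input_1", "input_2", "input_3", "input_4"):
        value = inputs.get(key)
        if value is not None:
            # Replace every occurrence of the matching placeholder.
            prompt = prompt.replace("{{" + key + "}}", value)
    return prompt

resolved = resolve_prompt(
    "Summarize the following notes in 5 bullets: {{input_1}}",
    input_1="Meeting notes from 2025-09-15...",
)
# -> "Summarize the following notes in 5 bullets: Meeting notes from 2025-09-15..."
```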
## Outputs
| Field | Type | Description | Example |
|---|---|---|---|
| Output | STRING | The text generated by the selected Groq model. | Here are five concise bullet points summarizing the notes... |
## Important Notes
- Model list and fallback: The node fetches available Groq models from the Salt LLM service and caches them for performance. If the fetch fails, it falls back to a built-in mapping (e.g., groq/llama-3.1-8b-instant); see the sketch after this list.
- Prompt context placeholders: You can inject optional inputs into your prompt using {{input_1}} to {{input_4}}. Ensure the placeholders match the input names exactly.
- Max tokens and truncation: If the response hits the max_tokens limit, the output may be truncated by the provider.
- Temperature range: Valid range is 0.0 to 1.0; adjust to balance determinism vs. creativity.
- Latency considerations: Large models or long prompts can take time to respond. The node is designed to handle long-running requests.
- Default behavior: If no system prompt is specified, the node applies a concise default to keep responses direct and focused.
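The fetch-with-fallback behavior from the first note above might look like the following. This is a sketch under stated assumptions: the endpoint URL, response shape, and fallback entry are illustrative placeholders, not the node's actual values.

```python
# Sketch of the model-list fetch with caching and a built-in fallback.
# URL, response shape, and fallback entries are illustrative placeholders.
import requests

FALLBACK_MODELS = ["groq/llama-3.1-8b-instant"]
_cached_models: list[str] | None = None

def get_groq_models() -> list[str]:
    global _cached_models
    if _cached_models is None:
        try:
            resp = requests.get("https://salt-llm.example.com/models", timeout=10)
            resp.raise_for_status()
            _cached_models = resp.json()["models"]
        except (requests.RequestException, KeyError, ValueError):
            # Service unreachable or malformed response: use the built-in list.
            _cached_models = FALLBACK_MODELS
    return _cached_models
```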
## Troubleshooting
- Model ID not found: If you see an error about an unknown model, reselect a model from the dropdown to refresh the list, or choose a known fallback option.
- Truncated responses: If outputs seem cut off, increase max_tokens or simplify the prompt.
- No placeholder substitution: Ensure your prompt uses valid placeholders (e.g., {{input_1}}) and that the corresponding inputs are connected/populated.
- Timeout or service error: Check network connectivity and service status. Retry the request or select a smaller model to reduce latency.
- Unexpected style or tone: Set or adjust the system_prompt to explicitly direct behavior (tone, format, and constraints).