# Groq LLM
Runs a Large Language Model hosted by Groq through Salt’s LiteLLM integration. Supports dynamic prompt templating and standard LLM controls like temperature and max tokens. Automatically fetches available Groq models (with caching) and falls back to a built-in list if the service model list is unavailable.

## Usage
Use this node to generate text with Groq-hosted models in any workflow that needs summarization, analysis, drafting, or reasoning. Provide a system prompt to set behavior and tone, write a user prompt (with optional {{input_1}}–{{input_4}} placeholders for dynamic context), select a model, and tune temperature and max_tokens as needed. Chain it after context-gathering nodes and before downstream processing or output nodes.
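The placeholder templating described above can be sketched as follows. This is a minimal illustration, not the node's actual implementation; the function name `render_prompt` and the choice to leave unmatched placeholders untouched are assumptions.

```python
import re

def render_prompt(prompt: str, inputs: dict) -> str:
    # Replace {{input_1}}..{{input_4}} with the provided values, exactly as given.
    # Placeholders with no corresponding input are left as-is here; the node
    # may handle missing inputs differently.
    return re.sub(
        r"\{\{(input_[1-4])\}\}",
        lambda m: inputs.get(m.group(1), m.group(0)),
        prompt,
    )
```

For example, `render_prompt("Summarize the following notes: {{input_1}}", {"input_1": "Q3 sales notes"})` produces `"Summarize the following notes: Q3 sales notes"`.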
## Inputs
| Field | Required | Type | Description | Example |
|---|---|---|---|---|
| model | True | CHOICE | Groq model to use. The list is fetched from the service and cached; falls back to a predefined set if fetching fails. | llama-3.1-8b-instant |
| system_prompt | True | STRING | High-level instructions controlling style, tone, and constraints. Leave empty to omit a system message. | Respond as a concise technical assistant. Use bullet points where helpful. |
| prompt | True | DYNAMIC_STRING | User prompt. Supports placeholders {{input_1}}–{{input_4}} that are replaced with the provided optional inputs. | Summarize the following notes: {{input_1}}. Focus on action items. |
| temperature | True | FLOAT | Creativity vs. determinism. Lower is more focused; higher is more diverse. | 0.5 |
| max_tokens | True | INT | Maximum tokens to generate for the response. | 1024 |
| input_1 | False | STRING | Optional dynamic context inserted via {{input_1}} in the prompt. | Quarterly sales notes: ... |
| input_2 | False | STRING | Optional dynamic context inserted via {{input_2}} in the prompt. | Competitor highlights: ... |
| input_3 | False | STRING | Optional dynamic context inserted via {{input_3}} in the prompt. | Customer feedback snippets: ... |
| input_4 | False | STRING | Optional dynamic context inserted via {{input_4}} in the prompt. | Product roadmap bullets: ... |
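The system_prompt and prompt fields combine into a standard chat message list, with the system message included only when the system prompt is non-empty (see Important Notes below). A minimal sketch, assuming a helper named `build_messages` that the node does not actually expose:

```python
def build_messages(system_prompt: str, user_prompt: str) -> list:
    # The system message is added only if the system prompt is non-empty;
    # clearing it omits system-level instructions entirely.
    messages = []
    if system_prompt.strip():
        messages.append({"role": "system", "content": system_prompt})
    messages.append({"role": "user", "content": user_prompt})
    return messages
```

An empty system prompt therefore yields a single user message rather than a blank system entry.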
## Outputs
| Field | Type | Description | Example |
|---|---|---|---|
| Output | STRING | The generated text response from the selected Groq model. | Here are the key action items for next quarter... |
## Important Notes
- Model availability is fetched and cached per provider; if fetching fails, a fallback list of Groq models is used.
- Valid model selection is required; using a model name not in the available list will raise an error.
- Prompt supports {{input_1}}–{{input_4}} placeholders that get replaced with the optional input values exactly as provided.
- Default timeout is designed for longer LLM responses; very large prompts or slow models may approach this limit.
- System prompt is included only if non-empty; clear it to omit system-level instructions.
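The fetch-with-cache-and-fallback behavior from the first note can be sketched like this. The cache structure, the TTL value, and the fallback list below are illustrative assumptions; the node's real cache policy and model list are not documented here.

```python
import time

# Illustrative fallback list; the node ships its own predefined set.
FALLBACK_MODELS = ["llama-3.1-8b-instant", "llama-3.3-70b-versatile"]

CACHE_TTL = 300.0  # seconds; assumed, the real TTL is not specified
_cache = {"models": None, "fetched_at": 0.0}

def get_models(fetch, now=time.time):
    """Return the cached model list, refetching after CACHE_TTL expires.

    If fetching fails, return the last cached list, or the built-in
    fallback list when nothing has been cached yet.
    """
    if _cache["models"] is not None and now() - _cache["fetched_at"] < CACHE_TTL:
        return _cache["models"]
    try:
        _cache["models"] = fetch()
        _cache["fetched_at"] = now()
    except Exception:
        return _cache["models"] or list(FALLBACK_MODELS)
    return _cache["models"]
```

With this shape, a transient service outage degrades gracefully: stale cached models (or the fallback set) keep the dropdown populated instead of raising.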
## Troubleshooting
- Model not found: Ensure you select a model from the provided dropdown. If the dynamic list failed to load, use one of the fallback models.
- Timeouts or long waits: Reduce max_tokens or simplify the prompt; try a faster model; verify network stability.
- Empty or generic output: Lower temperature for more focused results, add clearer instructions, or provide concrete context via optional inputs.
- Placeholders not replaced: Confirm the prompt contains {{input_1}}–{{input_4}} and the corresponding inputs are populated.
- Rate limits or service errors: Retry after a short delay or switch to another available model. If persistent, reduce request frequency or output length.
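The retry-after-a-delay advice for rate limits can be sketched as a small exponential-backoff wrapper. This is a generic pattern, not part of the node; the function name, retry count, and base delay are assumptions to tune for your own limits.

```python
import time

def call_with_retry(call, retries=3, base_delay=1.0, sleep=time.sleep):
    """Invoke `call`, retrying on failure with exponential backoff.

    Waits base_delay, then 2*base_delay, etc., between attempts;
    re-raises the last error once retries are exhausted.
    """
    for attempt in range(retries):
        try:
            return call()
        except Exception:
            if attempt == retries - 1:
                raise
            sleep(base_delay * (2 ** attempt))
```

In practice you would wrap the LLM request in `call` and catch only the error types that indicate rate limiting or transient service failures, rather than every exception.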