Gemini LLM¶
Runs a text generation/completion request against Google Gemini models through Salt’s LiteLLM service. It supports dynamic prompt composition with injected context inputs and configurable generation controls (temperature and max tokens). The node fetches an available model list from the service (with caching) and falls back to a predefined set if the service is unavailable.

Usage¶
Use this node when you need Gemini-family LLM responses (analysis, summaries, drafting, Q&A). Provide a clear prompt and optionally pass up to four context strings to be merged into the prompt using placeholders like {{input_1}}. Select a Gemini model appropriate for your task and tune temperature and max_tokens to control creativity and response length. Chain this node after nodes that produce textual context, and connect its output to downstream logic or display nodes.
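The snippet below is a minimal illustration (not the node's actual implementation) of how the {{input_N}} placeholders resolve: each placeholder in the prompt is swapped for the matching context string before the request is sent.

```python
# Minimal illustration of {{input_N}} substitution; not the node's actual code.
prompt = "Summarize the following notes: {{input_1}}\nKnown issues: {{input_4}}"
inputs = {
    "input_1": "Meeting notes from 2025-01-02",
    "input_4": "Known issues list",
}

final_prompt = prompt
for name, value in inputs.items():
    final_prompt = final_prompt.replace("{{" + name + "}}", value)

print(final_prompt)
# Summarize the following notes: Meeting notes from 2025-01-02
# Known issues: Known issues list
```

Placeholders without a corresponding connected input are left untouched, so keep the placeholder numbers aligned with the inputs you actually wire up.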
Inputs¶
| Field | Required | Type | Description | Example |
|---|---|---|---|---|
| model | True | CHOICE | Select which Gemini (or Gemma) model to use. The list is fetched from the LLM service; if unavailable, a fallback list is shown. | gemini-2.5-flash |
| system_prompt | True | STRING | High-level instructions that set behavior, tone, and formatting rules for the model. | Respond concisely and use bullet points when listing steps. |
| prompt | True | DYNAMIC_STRING | The main user message. You can reference optional inputs using placeholders like {{input_1}} ... {{input_4}} to inject context. | Summarize the following notes: {{input_1}} |
| temperature | True | FLOAT | Controls randomness/creativity. Lower is more deterministic; higher is more diverse. | 0.5 |
| max_tokens | True | INT | Maximum tokens in the response. Set higher for longer answers; 0 lets the backend/provider default apply. | 1024 |
| input_1 | False | STRING | Optional context string to inject into the prompt via {{input_1}}. | Meeting notes from 2025-01-02 |
| input_2 | False | STRING | Optional context string to inject into the prompt via {{input_2}}. | Customer feedback excerpts |
| input_3 | False | STRING | Optional context string to inject into the prompt via {{input_3}}. | Product specification |
| input_4 | False | STRING | Optional context string to inject into the prompt via {{input_4}}. | Known issues list |
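As a rough sketch of how these fields map onto a completion request, the example below uses the open-source `litellm` Python client as a stand-in for Salt's hosted LiteLLM service; the `gemini/` model prefix and the API-key environment variable are assumptions about provider routing, not details of the node itself.

```python
import litellm

# Hedged sketch: the node routes through Salt's LiteLLM service; here the
# open-source litellm client stands in. Assumes a Gemini API key is configured
# (e.g. GEMINI_API_KEY in the environment).
response = litellm.completion(
    model="gemini/gemini-2.5-flash",          # "model" field (prefix is an assumption)
    messages=[
        # "system_prompt" field
        {"role": "system", "content": "Respond concisely and use bullet points when listing steps."},
        # "prompt" field, after placeholder substitution
        {"role": "user", "content": "Summarize the following notes: Meeting notes from 2025-01-02"},
    ],
    temperature=0.5,   # lower = more deterministic, higher = more diverse
    max_tokens=1024,   # upper bound on response length
)

print(response.choices[0].message.content)    # corresponds to the node's STRING Output
```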
Outputs¶
| Field | Type | Description | Example |
|---|---|---|---|
| Output | STRING | The LLM-generated text response. | Here is a concise summary of the meeting notes... |
Important Notes¶
- Provider: This node targets the Gemini provider and queries models via the Salt LiteLLM service.
- Dynamic prompt variables: The node replaces placeholders {{input_1}}..{{input_4}} in the prompt with the corresponding optional input values.
- Model list caching: Available models are cached for about 1 hour. If the service cannot list models, a fallback set (e.g., gemini-2.5-flash, gemini-2.5-pro, gemma-3 variants) is shown; see the sketch after this list.
- Max tokens: If the response stops early due to length, increase max_tokens.
- Quotas: This node counts toward LLM node usage limits in workflows.
- Service configuration: Your workspace must have Gemini access configured in the Salt LLM service (e.g., valid provider credentials) for requests to succeed.
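The sketch below illustrates the caching behavior described above (roughly a 1-hour TTL with a hard-coded fallback list). It is not the node's actual code; `fetch_from_service` is a hypothetical callable and the exact gemma variant names are assumptions.

```python
import time

# Illustrative sketch of model-list caching with fallback; not the actual service code.
FALLBACK_MODELS = ["gemini-2.5-flash", "gemini-2.5-pro", "gemma-3-27b-it"]  # variant names assumed
CACHE_TTL_SECONDS = 3600  # "about 1 hour"

_cache = {"models": None, "fetched_at": 0.0}

def list_gemini_models(fetch_from_service) -> list[str]:
    """Return cached models, refresh when stale, fall back if the service errors."""
    now = time.time()
    if _cache["models"] and now - _cache["fetched_at"] < CACHE_TTL_SECONDS:
        return _cache["models"]
    try:
        models = fetch_from_service()  # hypothetical call to the Salt LLM service
        _cache.update(models=models, fetched_at=now)
        return models
    except Exception:
        # Service unavailable: reuse stale cache if present, else the fallback set.
        return _cache["models"] or FALLBACK_MODELS
```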
Troubleshooting¶
- Model ID not found: Select a model from the provided dropdown (don’t type a custom name) and retry.
- Response truncated or cut off: Increase max_tokens or reduce prompt/context length.
- Slow or timeout errors: Simplify the prompt, reduce context size, or try a faster model (e.g., a 'flash' variant). If persistent, contact your admin to verify Gemini provider availability and service health.
- Empty or generic responses: Lower temperature for more focused output, or improve the system_prompt to better constrain style and structure.
- Placeholders not replaced: Ensure you actually used {{input_1}}..{{input_4}} in the prompt and provided the corresponding optional inputs; see the check sketched below.
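A quick way to debug the placeholder case is to list the {{input_N}} references in your prompt and compare them against the inputs you actually connected. The helper below is a hypothetical convenience, not part of the node:

```python
import re

def find_unfilled_placeholders(prompt: str, inputs: dict[str, str]) -> list[str]:
    """List {{input_N}} placeholders in the prompt that have no non-empty input."""
    referenced = re.findall(r"\{\{(input_[1-4])\}\}", prompt)
    return [name for name in referenced if not inputs.get(name)]

# Example: input_2 is referenced but never wired up, so it is reported.
print(find_unfilled_placeholders(
    "Compare {{input_1}} with {{input_2}}",
    {"input_1": "Product specification"},
))  # ['input_2']
```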