# Gemini LLM
Queries Google Gemini models through the LiteLLM service. The node builds a chat-style request from a system prompt and a user prompt (with optional placeholders), then returns the model's text response. It automatically lists the available Gemini models from the service and falls back to a curated set if listing is unavailable.
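The request follows the standard chat-message format. Below is a minimal sketch, assuming the `litellm` Python client and a `gemini/` provider prefix; the node's actual service call and model naming may differ:

```python
import litellm

def query_gemini(model: str, system_prompt: str, prompt: str,
                 temperature: float = 0.5, max_tokens: int = 1024) -> str:
    messages = []
    if system_prompt:
        # A non-empty system prompt is prepended as a system message.
        messages.append({"role": "system", "content": system_prompt})
    messages.append({"role": "user", "content": prompt})

    response = litellm.completion(
        model=f"gemini/{model}",   # assumed provider prefix for Gemini via LiteLLM
        messages=messages,
        temperature=temperature,
        max_tokens=max_tokens,
    )
    return response.choices[0].message.content
```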

## Usage
Use this node when you need a Gemini LLM to generate or transform text from prompts and optional contextual inputs. Typical workflows include summarization, content generation, classification, and instruction following, where you compose the final prompt with the {{input_1}}–{{input_4}} placeholders and pick the Gemini model tier that best balances speed against quality.
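Placeholder substitution is plain text replacement. A minimal sketch of the assumed behavior, where each `{{input_N}}` token is replaced by the matching optional input and unconnected inputs are treated as empty strings:

```python
def fill_placeholders(prompt: str, inputs: dict[str, str]) -> str:
    """Replace {{input_1}}..{{input_4}} with the connected input values."""
    for i in range(1, 5):
        key = f"input_{i}"
        prompt = prompt.replace(f"{{{{{key}}}}}", inputs.get(key, ""))
    return prompt

# Example: inject meeting notes into {{input_1}}.
filled = fill_placeholders(
    "Summarize the following notes: {{input_1}}. Highlight 3 key insights.",
    {"input_1": "Meeting notes from 2025-09-14"},
)
```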
## Inputs
| Field | Required | Type | Description | Example | 
|---|---|---|---|---|
| model | True | SELECT | Choose a Gemini model to query. The list is fetched from the service; if unavailable, a fallback list is used. | gemini-2.5-pro | 
| system_prompt | True | STRING | High-level instructions and tone guidelines for the AI. If not empty, it is prepended as a system message. | You are a concise, helpful assistant. Respond with clear bullet points. | 
| prompt | True | DYNAMIC_STRING | Main user prompt. Supports placeholders {{input_1}}–{{input_4}} for injecting connected inputs at runtime. | Summarize the following notes: {{input_1}}. Highlight 3 key insights. | 
| temperature | True | FLOAT | Controls randomness and creativity. Lower values are more deterministic. | 0.5 | 
| max_tokens | True | INT | Maximum number of tokens in the response. Use lower values for shorter, faster outputs. | 1024 | 
| input_1 | False | STRING | Optional context injected into {{input_1}} in the prompt. | Meeting notes from 2025-09-14 | 
| input_2 | False | STRING | Optional context injected into {{input_2}} in the prompt. | Customer support transcript | 
| input_3 | False | STRING | Optional context injected into {{input_3}} in the prompt. | Product specification V3.2 | 
| input_4 | False | STRING | Optional context injected into {{input_4}} in the prompt. | Known issues and FAQs | 
## Outputs
| Field | Type | Description | Example | 
|---|---|---|---|
| Output | STRING | The text response returned by the selected Gemini model. | Here are three key insights from the notes: 1) ... 2) ... 3) ... | 
## Important Notes
- Model selection is scoped to the Gemini provider and may fall back to a predefined list if live model listing is unavailable (see the sketch after this list).
- Use {{input_1}}–{{input_4}} in the prompt to inject optional inputs dynamically.
- Temperature ranges from 0 to 1; start low for reliable results and increase for more creative output.
- Max tokens controls response length; higher values may increase latency and cost.
- Long-running requests are supported; however, very large prompts or responses may still hit service timeouts if the timeout is set too low.
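As a rough illustration of the fallback behavior in the first note, the sketch below queries an OpenAI-compatible `/v1/models` endpoint and filters for Gemini models. The endpoint, base URL, and curated list are assumptions about the LiteLLM deployment, not confirmed details of this node:

```python
import requests

FALLBACK_MODELS = ["gemini-2.5-pro", "gemini-2.5-flash"]  # illustrative curated set

def list_gemini_models(base_url: str, api_key: str) -> list[str]:
    """Return Gemini model ids from the service, or the fallback list on failure."""
    try:
        resp = requests.get(
            f"{base_url}/v1/models",
            headers={"Authorization": f"Bearer {api_key}"},
            timeout=10,
        )
        resp.raise_for_status()
        models = [m["id"] for m in resp.json().get("data", [])]
        gemini = [m for m in models if "gemini" in m.lower()]
        return gemini or FALLBACK_MODELS
    except requests.RequestException:
        return FALLBACK_MODELS
```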
## Troubleshooting
- Model not found: Ensure you pick a model from the dropdown. If custom text was entered, select a listed option instead.
- Empty output or truncated response: Increase max_tokens or simplify your prompt/context.
- Inconsistent formatting: Lower the temperature and provide clearer instructions in the system_prompt.
- Slow or timed-out requests: Reduce prompt size, choose a faster model (e.g., a Flash variant), or increase the timeout setting if your environment exposes one (see the sketch after this list).
- Unexpected literal placeholders: Make sure your prompt uses {{input_1}}–{{input_4}} exactly, and provide those optional inputs if needed.
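If your environment lets you adjust the underlying LiteLLM call, timeouts and retries can typically be set per request. A hedged sketch, assuming `litellm`'s `timeout` and `num_retries` parameters are honored by your deployment:

```python
import litellm

response = litellm.completion(
    model="gemini/gemini-2.5-flash",  # a Flash variant for lower latency
    messages=[{"role": "user", "content": "Summarize: ..."}],
    max_tokens=512,   # smaller responses return faster
    timeout=120,      # seconds before the request is abandoned
    num_retries=2,    # retry transient failures (assumed supported)
)
print(response.choices[0].message.content)
```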