Gemini LLM

Runs a text generation/completion request against Google Gemini models through Salt’s LiteLLM service. It supports dynamic prompt composition with injected context inputs and configurable generation controls (temperature and max tokens). The node fetches an available model list from the service (with caching) and falls back to a predefined set if the service is unavailable.
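
For orientation, here is a minimal sketch of the kind of request this node issues. It assumes an OpenAI-compatible LiteLLM proxy; the base URL, API key, and client usage below are illustrative placeholders, not the node's actual implementation.

```python
# Illustrative only: approximates the chat-completion request this node sends
# through an OpenAI-compatible LiteLLM proxy. URL and key are hypothetical.
from openai import OpenAI

client = OpenAI(
    base_url="https://litellm.example.com/v1",  # hypothetical proxy URL
    api_key="YOUR_PROXY_KEY",                   # hypothetical credential
)

response = client.chat.completions.create(
    model="gemini-2.5-flash",
    messages=[
        {"role": "system", "content": "Respond concisely and use bullet points."},
        {"role": "user", "content": "Summarize the following notes: ..."},
    ],
    temperature=0.5,   # generation controls exposed by the node
    max_tokens=1024,
)
print(response.choices[0].message.content)
```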

Usage

Use this node when you need Gemini-family LLM responses (analysis, summaries, drafting, Q&A). Provide a clear prompt and optionally pass up to four context strings to be merged into the prompt using placeholders like {{input_1}}. Select a Gemini model appropriate for your task and tune temperature and max_tokens to control creativity and response length. Chain this node after nodes that produce textual context, and connect its output to downstream logic or display nodes.
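
As a rough illustration of how the {{input_N}} placeholders are merged into the prompt (the node's internal implementation may differ, but the observable behavior is a plain substitution), consider this sketch:

```python
# Sketch of the placeholder merge described above: each {{input_1}}..{{input_4}}
# token in the prompt is replaced with the corresponding connected input string.
def render_prompt(prompt: str, inputs: dict[str, str]) -> str:
    for i in range(1, 5):
        value = inputs.get(f"input_{i}", "")
        prompt = prompt.replace(f"{{{{input_{i}}}}}", value)
    return prompt

rendered = render_prompt(
    "Summarize the following notes: {{input_1}}",
    {"input_1": "Meeting notes from 2025-01-02"},
)
print(rendered)  # -> Summarize the following notes: Meeting notes from 2025-01-02
```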

Inputs

| Field | Required | Type | Description | Example |
|---|---|---|---|---|
| model | True | CHOICE | Select which Gemini (or Gemma) model to use. The list is fetched from the LLM service; if unavailable, a fallback list is shown. | gemini-2.5-flash |
| system_prompt | True | STRING | High-level instructions that set behavior, tone, and formatting rules for the model. | Respond concisely and use bullet points when listing steps. |
| prompt | True | DYNAMIC_STRING | The main user message. You can reference optional inputs using placeholders like {{input_1}} ... {{input_4}} to inject context. | Summarize the following notes: {{input_1}} |
| temperature | True | FLOAT | Controls randomness/creativity. Lower is more deterministic; higher is more diverse. | 0.5 |
| max_tokens | True | INT | Maximum tokens in the response. Set higher for longer answers; 0 lets the backend/provider default apply. | 1024 |
| input_1 | False | STRING | Optional context string to inject into the prompt via {{input_1}}. | Meeting notes from 2025-01-02 |
| input_2 | False | STRING | Optional context string to inject into the prompt via {{input_2}}. | Customer feedback excerpts |
| input_3 | False | STRING | Optional context string to inject into the prompt via {{input_3}}. | Product specification |
| input_4 | False | STRING | Optional context string to inject into the prompt via {{input_4}}. | Known issues list |

Outputs

| Field | Type | Description | Example |
|---|---|---|---|
| Output | STRING | The LLM-generated text response. | Here is a concise summary of the meeting notes... |

Important Notes

  • Provider: This node targets the Gemini provider and queries models via the Salt LiteLLM service.
  • Dynamic prompt variables: The node replaces placeholders {{input_1}}..{{input_4}} in the prompt with the corresponding optional input values.
  • Model list caching: Available models are cached for about 1 hour. If the service cannot list models, a fallback set (e.g., gemini-2.5-flash, gemini-2.5-pro, gemma-3 variants) is shown; see the sketch after this list.
  • Max tokens: If the response stops early due to length, increase max_tokens.
  • Quotas: This node counts toward LLM node usage limits in workflows.
  • Service configuration: Your workspace must have Gemini access configured in the Salt LLM service (e.g., valid provider credentials) for requests to succeed.
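
The caching-with-fallback behavior noted above can be pictured roughly as follows. The endpoint path, timeout, and exact fallback entries are illustrative; this is not the node's source code.

```python
# Rough sketch of the model-list behavior: fetch from the service, cache the
# result for about an hour, and fall back to a fixed list if the fetch fails.
import time
import requests

FALLBACK_MODELS = ["gemini-2.5-flash", "gemini-2.5-pro", "gemma-3"]  # illustrative
_cache = {"models": None, "fetched_at": 0.0}

def list_models(base_url: str, ttl_seconds: float = 3600.0) -> list[str]:
    now = time.time()
    if _cache["models"] and now - _cache["fetched_at"] < ttl_seconds:
        return _cache["models"]  # serve the cached list while it is fresh
    try:
        resp = requests.get(f"{base_url}/v1/models", timeout=10)
        resp.raise_for_status()
        models = [m["id"] for m in resp.json()["data"]]
        _cache.update(models=models, fetched_at=now)
        return models
    except requests.RequestException:
        return FALLBACK_MODELS  # service unavailable: show the predefined set
```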

Troubleshooting

  • Model ID not found: Select a model from the provided dropdown (don’t type a custom name) and retry.
  • Response truncated or cut off: Increase max_tokens or reduce prompt/context length.
  • Slow or timeout errors: Simplify the prompt, reduce context size, or try a faster model (e.g., a 'flash' variant). If persistent, contact your admin to verify Gemini provider availability and service health.
  • Empty or generic responses: Lower temperature for more focused output, or improve the system_prompt to better constrain style and structure.
  • Placeholders not replaced: Ensure you actually used {{input_1}}..{{input_4}} in the prompt and provided the corresponding optional inputs.