Gemini LLM

Runs a prompt against Google’s Gemini models via the LiteLLM service. Builds a chat-style message array from a system prompt and a user prompt (with optional context variables), then returns the model’s text response. The node auto-fetches available Gemini models from the service and falls back to a curated list if the service is unavailable.
Usage

Use this node whenever you need a Gemini-family LLM to generate, analyze, summarize, or transform text. Choose a model, set a system prompt to control tone and behavior, and write a user prompt. Optionally inject up to four context strings into the prompt using {{input_1}}–{{input_4}} placeholders. Temperature and max_tokens control creativity and response length. Connect this node after upstream nodes that produce context or knowledge and pass those into input_1–input_4.
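The prompt-assembly flow described above can be sketched in Python. Note that `build_messages` and its signature are illustrative, not the node's actual internals; the commented-out `litellm.completion` call only indicates the shape of the final request, which the node routes through the LiteLLM service.

```python
def build_messages(system_prompt: str, prompt: str, **inputs: str) -> list[dict]:
    """Build a chat-style message array, substituting {{input_n}} placeholders.

    Placeholders with no matching input are left unchanged.
    """
    for key, value in inputs.items():
        prompt = prompt.replace("{{" + key + "}}", value)
    messages = []
    if system_prompt:
        messages.append({"role": "system", "content": system_prompt})
    messages.append({"role": "user", "content": prompt})
    return messages

messages = build_messages(
    "Respond concisely and use bullet points where helpful.",
    "Summarize the following notes: {{input_1}}",
    input_1="Meeting notes from 2025-10-05 ...",
)

# The messages would then be sent along with the generation parameters,
# conceptually something like (not executed here):
# response = litellm.completion(model="gemini-2.5-pro", messages=messages,
#                               temperature=0.7, max_tokens=1024)
```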

Inputs

| Field | Required | Type | Description | Example |
| --- | --- | --- | --- | --- |
| model | True | CHOICE | Select the Gemini model to use. Options are fetched from the LiteLLM service; if unavailable, a fallback list is shown. | gemini-2.5-pro |
| system_prompt | True | STRING | Instructions that set behavior, style, or constraints for the model. If left blank, only the user prompt is sent. | Respond concisely and use bullet points where helpful. |
| prompt | True | DYNAMIC_STRING | The user message. Supports placeholders {{input_1}}–{{input_4}} that are replaced with the optional inputs. | Summarize the following notes: {{input_1}} |
| temperature | True | FLOAT | Controls randomness/creativity. Higher values produce more varied outputs. | 0.7 |
| max_tokens | True | INT | Upper bound on response length in tokens. | 1024 |
| input_1 | False | STRING | Optional context string. Use via {{input_1}} in the prompt. | Meeting notes from 2025-10-05 ... |
| input_2 | False | STRING | Optional context string. Use via {{input_2}} in the prompt. | Product specs v3.2 ... |
| input_3 | False | STRING | Optional context string. Use via {{input_3}} in the prompt. | Customer email thread ... |
| input_4 | False | STRING | Optional context string. Use via {{input_4}} in the prompt. | Support ticket #12345 details ... |

Outputs

| Field | Type | Description | Example |
| --- | --- | --- | --- |
| Output | STRING | The text response returned by the selected Gemini model. | Here is a concise summary of your notes... |

Important Notes

  • Model options are cached per provider for approximately 1 hour; if live fetching fails, a built-in fallback model list is used.
  • Use {{input_1}}–{{input_4}} exactly in the prompt to inject the optional inputs; unmatched placeholders remain unchanged.
  • Temperature and max_tokens are passed to the model as generation parameters and directly influence style and length.
  • A sensible default system prompt is provided; clear, specific prompts with relevant context produce the best results.
  • If a selected model is not in the available list, the node will raise an error prompting you to choose a valid option.
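The cache-and-fallback behavior in the first note can be sketched as follows. The function, TTL constant, and fallback model names here are illustrative assumptions, not the node's actual code; `fetch` stands in for whatever call queries the LiteLLM service.

```python
import time

FALLBACK_MODELS = ["gemini-2.5-pro", "gemini-2.5-flash"]  # illustrative names
CACHE_TTL = 3600  # roughly one hour, per the note above
_cache: dict = {"models": None, "fetched_at": 0.0}

def get_models(fetch) -> list[str]:
    """Return cached model options, refetching after the TTL.

    If a live fetch fails, fall back to the built-in list without
    poisoning the cache, so the next call retries the service.
    """
    now = time.time()
    if _cache["models"] is None or now - _cache["fetched_at"] > CACHE_TTL:
        try:
            _cache["models"] = fetch()
            _cache["fetched_at"] = now
        except Exception:
            return FALLBACK_MODELS
    return _cache["models"]
```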

Troubleshooting

  • Model not found: Ensure the selected model name matches an available option. Refresh the model list or pick from the dropdown.
  • Service request failed: Network or service issues can cause failures. Retry later or use a different model. The node will fall back to a predefined model list for selection, but requests still require service availability.
  • Empty or overly short output: Increase max_tokens or reduce prompt constraints. Provide more context via input_1–input_4.
  • Incoherent or too-random responses: Lower the temperature to make outputs more deterministic.
  • Placeholders not replaced: Verify you used {{input_1}}–{{input_4}} syntax in the prompt and provided corresponding input values.
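The "Model not found" check above amounts to a simple membership test before the request is sent. This is a minimal sketch; the function name and error message are hypothetical.

```python
def validate_model(selected: str, available: list[str]) -> str:
    """Raise if the selected model is not among the offered options."""
    if selected not in available:
        raise ValueError(
            f"Model '{selected}' not found; available options: "
            + ", ".join(available)
        )
    return selected
```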