
Gemini LLM

Executes Large Language Model calls to Google’s Gemini models through the LiteLLM provider layer. The node selects the Gemini backend (provider="gemini") and supports multiple Gemini/Gemma model aliases via an internal fallback mapping to ensure compatibility with expected model names.

Usage

Use this node anywhere you need LLM-generated text or reasoning powered by Google Gemini within a workflow. Select a Gemini or Gemma model, provide your prompt/messages, and connect its output to downstream logic, parsing, or display nodes. It is typically paired with input nodes (for dynamic prompts) and output or inspection nodes (to capture responses).
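As a rough illustration of how the node's three inputs fit together, the sketch below assembles them into a LiteLLM-style chat completion request. The field names (`model`, `messages`, and flattened generation parameters) follow LiteLLM's `completion()` conventions, but the exact payload this platform builds is defined in its shared base node, so treat this as an illustrative assumption rather than the platform's API.

```python
# Illustrative sketch: combining the node's inputs into a
# LiteLLM-style completion request. The payload shape is an
# assumption based on LiteLLM conventions, not the platform API.

def build_request(model, prompt, params=None):
    """Assemble model, prompt, and tuning parameters into one request dict."""
    request = {
        "model": model,
        # A plain text prompt becomes a single user chat message.
        "messages": [{"role": "user", "content": prompt}],
    }
    # Generation parameters (temperature, max_tokens, top_p, ...) are
    # merged in alongside the model and messages.
    request.update(params or {})
    return request

req = build_request(
    "gemini/gemini-2.5-flash",
    "Write a short haiku about the ocean.",
    {"temperature": 0.7, "max_tokens": 256},
)
```

A request like `req` would then be handed to the LiteLLM provider layer, which dispatches it to the Gemini backend.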

Inputs

| Field | Required | Type | Description | Example |
|---|---|---|---|---|
| model | False | Not specified | Model identifier to use. If a plain name is provided, the node maps it to a provider-qualified name when needed. Supported aliases include: gemini-2.5-flash, gemini-2.5-pro, gemini-2.0-flash, gemini-2.0-flash-001, gemini-2.0-flash-lite-001, gemini-2.0-flash-lite, gemma-3-1b-it, gemma-3-4b-it, gemma-3-12b-it, gemma-3n-e4b-it, gemma-3n-e2b-it, gemini-flash-latest, gemini-flash-lite-latest, gemini-pro-latest, gemini-2.5-flash-lite. | gemini-2.5-flash |
| prompt_or_messages | False | Not specified | The text prompt or chat messages to send to the model. Exact structure depends on the base LLM interface. | Write a short haiku about the ocean. |
| generation_parameters | False | Not specified | Optional tuning parameters such as temperature, max tokens, and top_p. Specific names and ranges depend on the underlying base node. | {"temperature": 0.7, "max_tokens": 256} |
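The alias handling described for the model field can be sketched as a simple lookup: known plain names are prefixed with the provider, while anything else passes through unchanged. The alias set below is taken from the table above; the prefixing behavior (adding "gemini/") is an assumption based on the fully-qualified form shown in the notes, not the node's actual source.

```python
# Sketch of the fallback model mapping, assuming plain aliases are
# qualified by prefixing "gemini/" and other names pass through.

GEMINI_ALIASES = {
    "gemini-2.5-flash", "gemini-2.5-pro", "gemini-2.5-flash-lite",
    "gemini-2.0-flash", "gemini-2.0-flash-001",
    "gemini-2.0-flash-lite", "gemini-2.0-flash-lite-001",
    "gemma-3-1b-it", "gemma-3-4b-it", "gemma-3-12b-it",
    "gemma-3n-e4b-it", "gemma-3n-e2b-it",
    "gemini-flash-latest", "gemini-flash-lite-latest",
    "gemini-pro-latest",
}

def qualify_model(name: str) -> str:
    """Map a plain Gemini/Gemma alias to its provider-qualified form."""
    if name in GEMINI_ALIASES:
        return f"gemini/{name}"
    return name  # already qualified, or an unrecognized name

qualify_model("gemini-2.5-flash")       # "gemini/gemini-2.5-flash"
qualify_model("gemini/gemini-2.5-pro")  # unchanged
```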

Outputs

| Field | Type | Description | Example |
|---|---|---|---|
| response | Not specified | The model’s response payload (usually text; exact structure depends on the base LiteLLM node). | The ocean whispers / blue horizons stretch afar / secrets in the deep |

Important Notes

  • This node is backed by LiteLLM with provider set to "gemini".
  • It supports a fallback model mapping so you can use simpler names (e.g., "gemini-2.5-flash") which are internally mapped to provider-qualified forms (e.g., "gemini/gemini-2.5-flash").
  • You must configure valid credentials for the Gemini provider in your environment or platform settings. Never hardcode secrets; use secure configuration.
  • This node is counted under LLM usage limits in certain plan tiers. If your workflow reports limit errors, review your LLM node quota.
  • Exact input/output shapes (prompt/messages format, return structure) are defined in the shared LiteLLM base implementation and may vary by platform version.
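On the credentials point: LiteLLM's Gemini provider typically reads the key from the `GEMINI_API_KEY` environment variable. A minimal sketch of a startup check, assuming that variable name (the helper `require_gemini_key` is hypothetical, not part of the platform):

```python
import os

# Hedged sketch: fail fast if the Gemini credential is missing.
# Assumes LiteLLM's usual GEMINI_API_KEY environment variable;
# this helper is illustrative, not part of the platform.

def require_gemini_key(env=os.environ):
    """Return the configured Gemini API key, or raise if unset."""
    key = env.get("GEMINI_API_KEY")
    if not key:
        raise RuntimeError(
            "GEMINI_API_KEY is not set; configure it in your "
            "environment or platform secret store."
        )
    return key
```

Keeping the key in the environment (or a secret store) rather than in workflow definitions avoids leaking it when workflows are shared or exported.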

Troubleshooting

  • Model not recognized: Ensure the model name is one of the supported aliases or the fully-qualified provider form (e.g., "gemini/gemini-2.5-flash").
  • Authentication error: Verify that a valid Gemini API key is configured in your environment and that your account has access to the chosen model.
  • Empty or truncated output: Increase max tokens (if supported) or reduce prompt size. Check any rate/usage limits.
  • Rate limit or quota errors: Reduce request frequency, choose a lighter model, or upgrade your plan to increase LLM quotas.
  • Unexpected response structure: Confirm downstream nodes expect the correct format and that you are using compatible parameters supported by the base LiteLLM interface.