
google-gemini / gemini-cli

An open-source AI agent that brings the power of Gemini directly into your terminal.

Top Related Projects

  • openai/openai-python - The official Python library for the OpenAI API
  • microsoft/semantic-kernel - Integrate cutting-edge LLM technology quickly and easily into your apps
  • huggingface/transformers - 🤗 Transformers: the model-definition framework for state-of-the-art machine learning models in text, vision, audio, and multimodal models, for both inference and training
  • langchain-ai/langchain - 🦜🔗 The platform for reliable agents
  • deepset-ai/haystack - AI orchestration framework to build customizable, production-ready LLM applications. Connect components (models, vector DBs, file converters) to pipelines or agents that can interact with your data. With advanced retrieval methods, it's best suited for building RAG, question answering, semantic search or conversational agent chatbots.

Quick Overview

Gemini-CLI is a command-line interface tool for interacting with Google's Gemini AI model. It allows users to access Gemini's capabilities directly from the terminal, enabling quick queries, text generation, and other AI-powered tasks without the need for a graphical interface or complex setup.

Pros

  • Easy to use and integrate into existing command-line workflows
  • Provides quick access to Gemini AI capabilities without leaving the terminal
  • Lightweight and doesn't require extensive system resources
  • Potential for scripting and automation of AI-powered tasks

Cons

  • Limited to text-based interactions, lacking visual or graphical output
  • May require API key management and can incur usage costs
  • Could have a learning curve for users not familiar with CLI tools
  • Possibly limited in advanced features compared to a full SDK or web interface

Code Examples

# Simple query to Gemini (non-interactive mode)
gemini -p "What is the capital of France?"

# Generate a short story
gemini -p "Write a 100-word story about a time traveler"

# Analyze the sentiment of a given text
gemini -p "Analyze the sentiment of this text: I love using this CLI tool for AI tasks!"

Getting Started

  1. Install Gemini CLI (requires Node.js 20 or higher):

    npm install -g @google/gemini-cli
    
  2. Set up your API key (or use Login with Google; see Authentication Options below):

    export GEMINI_API_KEY=your_api_key_here
    
  3. Run your first query:

    gemini -p "Hello, Gemini! How are you today?"
    

Competitor Comparisons

openai/openai-python - The official Python library for the OpenAI API

Pros of openai-python

  • More comprehensive API coverage for OpenAI services
  • Better documentation and examples
  • Larger community and more frequent updates

Cons of openai-python

  • Specific to OpenAI, not usable with other AI providers
  • More complex setup and configuration required

Code Comparison

openai-python:

from openai import OpenAI

client = OpenAI(api_key="your-api-key")
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Translate the following English text to French: '{}'"}],
    max_tokens=60,
)
print(response.choices[0].message.content)

gemini-cli:

# gemini-cli is a terminal tool rather than a Python library; the equivalent
# one-shot call uses the CLI's non-interactive -p flag
gemini -p "Translate the following English text to French: '{}'"

The openai-python library offers a more extensive API with various models and parameters, while gemini-cli provides a simpler interface focused on Gemini's capabilities. openai-python requires more setup but offers greater flexibility, whereas gemini-cli is more straightforward for quick interactions with Gemini models.

microsoft/semantic-kernel - Integrate cutting-edge LLM technology quickly and easily into your apps

Pros of Semantic Kernel

  • More comprehensive framework for building AI applications
  • Supports multiple programming languages (C#, Python, Java)
  • Extensive documentation and examples available

Cons of Semantic Kernel

  • Steeper learning curve due to its complexity
  • Requires more setup and configuration
  • May be overkill for simple AI integrations

Code Comparison

Semantic Kernel (C#):

using Microsoft.SemanticKernel;

// A chat model connector must be registered; the OpenAI connector is one option
var kernel = Kernel.CreateBuilder()
    .AddOpenAIChatCompletion(modelId: "gpt-4o-mini", apiKey: "your-api-key")
    .Build();
var result = await kernel.InvokePromptAsync("What is the capital of France?");
Console.WriteLine(result);

Gemini CLI (shell):

gemini -p "What is the capital of France?"

Key Differences

  • Semantic Kernel offers a more structured approach to building AI applications
  • Gemini CLI provides a simpler, command-line focused interface
  • Semantic Kernel supports multiple AI models, while Gemini CLI is specific to Google's Gemini model
  • Semantic Kernel has a larger community and more extensive ecosystem
  • Gemini CLI is more lightweight and easier to get started with for basic tasks

huggingface/transformers - 🤗 Transformers: the model-definition framework for state-of-the-art machine learning models in text, vision, audio, and multimodal models, for both inference and training

Pros of transformers

  • Extensive library supporting a wide range of models and tasks
  • Well-documented with a large community and ecosystem
  • Seamless integration with PyTorch and TensorFlow

Cons of transformers

  • Steeper learning curve due to its comprehensive nature
  • Potentially higher resource requirements for some models

Code Comparison

transformers:

from transformers import pipeline

classifier = pipeline("sentiment-analysis")
result = classifier("I love this product!")[0]
print(f"Label: {result['label']}, Score: {result['score']:.4f}")

gemini-cli:

# gemini-cli is invoked from the shell rather than imported as a library
gemini -p "Analyze the sentiment: I love this product!"

Key Differences

  • transformers offers a broader range of pre-trained models and tasks
  • gemini-cli focuses specifically on Google's Gemini model
  • transformers provides more fine-grained control over model parameters
  • gemini-cli aims for simplicity and ease of use with Google's API

Use Cases

  • transformers: Ideal for researchers and developers working with various NLP tasks and models
  • gemini-cli: Best for quick prototyping and leveraging Google's Gemini model capabilities

Community and Support

  • transformers: Large, active community with extensive documentation and third-party resources
  • gemini-cli: Newer project with growing community, backed by Google's support

langchain-ai/langchain - 🦜🔗 The platform for reliable agents

Pros of LangChain

  • More comprehensive framework for building LLM applications
  • Supports multiple LLM providers and integrations
  • Extensive documentation and active community support

Cons of LangChain

  • Steeper learning curve due to its broader scope
  • May be overkill for simple CLI applications
  • Requires more setup and configuration

Code Comparison

LangChain example:

from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

llm = OpenAI(temperature=0.9)
prompt = PromptTemplate(
    input_variables=["product"],
    template="What is a good name for a company that makes {product}?",
)
chain = LLMChain(llm=llm, prompt=prompt)
print(chain.run("shoes"))

Gemini CLI example:

gemini -p "What is a good name for a company that makes shoes?"

The LangChain example demonstrates its flexibility with prompt templates and different LLM providers, while the Gemini CLI example shows a more straightforward approach for quick interactions with the Gemini model. LangChain offers more customization options, but Gemini CLI provides a simpler interface for specific Gemini model interactions.

deepset-ai/haystack - AI orchestration framework to build customizable, production-ready LLM applications. Connect components (models, vector DBs, file converters) to pipelines or agents that can interact with your data. With advanced retrieval methods, it's best suited for building RAG, question answering, semantic search or conversational agent chatbots.

Pros of Haystack

  • More comprehensive NLP framework with broader capabilities
  • Extensive documentation and community support
  • Modular architecture allowing for customization and flexibility

Cons of Haystack

  • Steeper learning curve due to its complexity
  • Potentially overkill for simple CLI applications
  • Requires more setup and configuration

Code Comparison

Haystack example:

from haystack import Pipeline
from haystack.nodes import TextConverter, PreProcessor, FARMReader

pipeline = Pipeline()
pipeline.add_node(component=TextConverter(), name="TextConverter", inputs=["File"])
pipeline.add_node(component=PreProcessor(), name="PreProcessor", inputs=["TextConverter"])
pipeline.add_node(component=FARMReader(model_name_or_path="deepset/roberta-base-squad2"), name="Reader", inputs=["PreProcessor"])

Gemini CLI example:

gemini -p "Tell me about the solar system"

While Haystack offers a more complex pipeline for advanced NLP tasks, Gemini CLI provides a simpler interface for quick content generation using Google's Gemini model. Haystack is better suited for building comprehensive NLP applications, while Gemini CLI is ideal for rapid prototyping and simple AI-powered CLI tools.

README

Gemini CLI

Gemini CLI Screenshot

Gemini CLI is an open-source AI agent that brings the power of Gemini directly into your terminal. It provides lightweight access to Gemini, giving you the most direct path from your prompt to our model.

🚀 Why Gemini CLI?

  • 🎯 Free tier: 60 requests/min and 1,000 requests/day with a personal Google account.
  • 🧠 Powerful Gemini 2.5 Pro: Access to 1M token context window.
  • 🔧 Built-in tools: Google Search grounding, file operations, shell commands, web fetching.
  • 🔌 Extensible: MCP (Model Context Protocol) support for custom integrations.
  • 💻 Terminal-first: Designed for developers who live in the command line.
  • 🛡️ Open source: Apache 2.0 licensed.

📦 Installation

Quick Install

Run instantly with npx

# Using npx (no installation required)
npx https://github.com/google-gemini/gemini-cli

Install globally with npm

npm install -g @google/gemini-cli

Install globally with Homebrew (macOS/Linux)

brew install gemini-cli

System Requirements

  • Node.js version 20 or higher
  • macOS, Linux, or Windows

Release Cadence and Tags

See Releases for more details.

Preview

New preview releases are published each Tuesday at 23:59 UTC. These releases have not been fully vetted and may contain regressions or other outstanding issues. Please help us test them, and install with the preview tag:

npm install -g @google/gemini-cli@preview

Stable

New stable releases are published each Tuesday at 20:00 UTC. Each stable release is the full promotion of the previous week's preview release, plus any bug fixes and validations. Install with the latest tag:

npm install -g @google/gemini-cli@latest

Nightly

New nightly releases are published each day at 00:00 UTC and contain all changes from the main branch at the time of release. Assume there are pending validations and outstanding issues. Install with the nightly tag:

npm install -g @google/gemini-cli@nightly

📋 Key Features

Code Understanding & Generation

  • Query and edit large codebases
  • Generate new apps from PDFs, images, or sketches using multimodal capabilities (see the sketch after this list)
  • Debug issues and troubleshoot with natural language
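
For example, using the CLI's @ syntax to attach a local file to a prompt (the file name here is only a placeholder):

gemini -p "@./design-mockup.png Generate a React app that implements this design"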

Automation & Integration

  • Automate operational tasks like querying pull requests or handling complex rebases
  • Use MCP servers to connect new capabilities, including media generation with Imagen, Veo or Lyria
  • Run non-interactively in scripts for workflow automation, as sketched below
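
A minimal sketch of that scripted, non-interactive use (this assumes authentication is already configured; Gemini CLI reads piped stdin in this mode):

# Summarize yesterday's commits for a changelog entry
git log --since=yesterday --oneline | gemini -p "Summarize these commits for a changelog entry"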

Advanced Capabilities

  • Ground your queries with built-in Google Search for real-time information
  • Conversation checkpointing to save and resume complex sessions
  • Custom context files (GEMINI.md) to tailor behavior for your projects; an illustrative example follows this list
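
A GEMINI.md file is plain Markdown. The contents below are purely illustrative of the kind of project instructions you might include:

# GEMINI.md (illustrative project context)
- This is a TypeScript monorepo; prefer pnpm over npm for all commands.
- Run `pnpm test` before proposing any commit.
- Follow the existing ESLint rules; do not add new dependencies without asking.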

GitHub Integration

Integrate Gemini CLI directly into your GitHub workflows with the Gemini CLI GitHub Action:

  • Pull Request Reviews: Automated code review with contextual feedback and suggestions
  • Issue Triage: Automated labeling and prioritization of GitHub issues based on content analysis
  • On-demand Assistance: Mention @gemini-cli in issues and pull requests for help with debugging, explanations, or task delegation
  • Custom Workflows: Build automated, scheduled and on-demand workflows tailored to your team's needs

🔐 Authentication Options

Choose the authentication method that best fits your needs:

Option 1: Login with Google (OAuth login using your Google Account)

✨ Best for: Individual developers, as well as anyone who has a Gemini Code Assist License (see quota limits and terms of service for details)

Benefits:

  • Free tier: 60 requests/min and 1,000 requests/day
  • Gemini 2.5 Pro with 1M token context window
  • No API key management - just sign in with your Google account
  • Automatic updates to latest models

Start Gemini CLI, then choose Login with Google and follow the browser authentication flow when prompted:

gemini

If you are using a paid Code Assist License from your organization, remember to set your Google Cloud project:

# Set your Google Cloud Project
export GOOGLE_CLOUD_PROJECT="YOUR_PROJECT_ID"
gemini

Option 2: Gemini API Key

✨ Best for: Developers who need specific model control or paid tier access

Benefits:

  • Free tier: 100 requests/day with Gemini 2.5 Pro
  • Model selection: Choose specific Gemini models
  • Usage-based billing: Upgrade for higher limits when needed

# Get your key from https://aistudio.google.com/apikey
export GEMINI_API_KEY="YOUR_API_KEY"
gemini

Option 3: Vertex AI

✨ Best for: Enterprise teams and production workloads

Benefits:

  • Enterprise features: Advanced security and compliance
  • Scalable: Higher rate limits with billing account
  • Integration: Works with existing Google Cloud infrastructure

# Get your key from Google Cloud Console
export GOOGLE_API_KEY="YOUR_API_KEY"
export GOOGLE_GENAI_USE_VERTEXAI=true
gemini

For Google Workspace accounts and other authentication methods, see the authentication guide.

🚀 Getting Started

Basic Usage

Start in current directory

gemini

Include multiple directories

gemini --include-directories ../lib,../docs

Use specific model

gemini -m gemini-2.5-flash

Non-interactive mode for scripts

Get a simple text response:

gemini -p "Explain the architecture of this codebase"

For more advanced scripting, including how to parse JSON and handle errors, use the --output-format json flag to get structured output:

gemini -p "Explain the architecture of this codebase" --output-format json
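
A script can then pull fields out of the structured output, for example with jq. The exact JSON schema depends on your installed version, so treat the .response field below as an assumption to verify against real output:

# '.response' is assumed here; inspect the actual JSON your CLI version emits
gemini -p "Explain the architecture of this codebase" --output-format json | jq -r '.response'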

Quick Examples

Start a new project

cd new-project/
gemini
> Write me a Discord bot that answers questions using a FAQ.md file I will provide

Analyze existing code

git clone https://github.com/google-gemini/gemini-cli
cd gemini-cli
gemini
> Give me a summary of all of the changes that went in yesterday

📚 Documentation

Getting Started

Core Features

Tools & Extensions

Advanced Topics

Troubleshooting & Support

  • Troubleshooting Guide - Common issues and solutions.
  • FAQ - Frequently asked questions.
  • Use /bug command to report issues directly from the CLI.

Using MCP Servers

Configure MCP servers in ~/.gemini/settings.json to extend Gemini CLI with custom tools; a sketch of the settings file follows the prompt examples below:

> @github List my open pull requests
> @slack Send a summary of today's commits to #dev channel
> @database Run a query to find inactive users
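
As a rough sketch of what a server entry can look like (the server name, command, and token variable are placeholders; see the integration guide for the exact schema your version expects):

{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": { "GITHUB_PERSONAL_ACCESS_TOKEN": "YOUR_TOKEN" }
    }
  }
}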

See the MCP Server Integration guide for setup instructions.

🤝 Contributing

We welcome contributions! Gemini CLI is fully open source (Apache 2.0), and we encourage the community to:

  • Report bugs and suggest features.
  • Improve documentation.
  • Submit code improvements.
  • Share your MCP servers and extensions.

See our Contributing Guide for development setup, coding standards, and how to submit pull requests.

Check our Official Roadmap for planned features and priorities.

📖 Resources

Uninstall

See the Uninstall Guide for removal instructions.

📄 Legal


Built with ❤️ by Google and the open source community