Sentry MCP With Gemini CLI

Gemini CLI is Google's open-source terminal AI agent powered by the Gemini model family. It supports the Model Context Protocol, making it possible to connect Sentry MCP and perform error investigation entirely from the command line with Gemini as the reasoning engine. This topic covers the full setup, practical usage patterns, a complete root cause analysis walkthrough, and an honest comparison of how Sentry MCP output differs between Gemini CLI and Claude Code.


Installing and Connecting Sentry MCP to Gemini CLI

Prerequisites
- Gemini CLI installed — follow the official Gemini CLI installation guide
- A Sentry auth token (see Module 3, Topic 1 for how to generate one)

Gemini CLI MCP configuration

Gemini CLI reads MCP server definitions from ~/.gemini/settings.json. Add Sentry MCP using HTTP transport to connect to the official Sentry MCP server:

{
  "mcpServers": {
    "sentry": {
      "httpUrl": "https://mcp.sentry.dev/mcp",
      "httpHeaders": {
        "Authorization": "Bearer YOUR_SENTRY_AUTH_TOKEN"
      }
    }
  }
}

Replace YOUR_SENTRY_AUTH_TOKEN with your actual Sentry auth token.
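Gemini CLI's settings files also support environment-variable references (using $VAR or ${VAR} syntax); if your version supports this, you can export SENTRY_AUTH_TOKEN in your shell and keep the plaintext token out of settings.json:

```json
{
  "mcpServers": {
    "sentry": {
      "httpUrl": "https://mcp.sentry.dev/mcp",
      "httpHeaders": {
        "Authorization": "Bearer ${SENTRY_AUTH_TOKEN}"
      }
    }
  }
}
```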

Verify the connection

Start a Gemini CLI session and confirm the Sentry tools are available:

gemini

Inside the session:

What Sentry MCP tools are available?

You should see sentry_list_issues, sentry_get_event, and other tools listed.

Tips
- Restart Gemini CLI after editing settings.json — changes are not picked up in a running session.
- Store sensitive tokens in a secrets manager or environment variable rather than hardcoding them directly in settings.json.


Using Sentry MCP Tools for Error Triage and Analysis in Gemini CLI

Gemini CLI's conversational interface works well for structured triage workflows. Gemini's strength with tabular data and multi-step reasoning makes it particularly effective for categorization and prioritization tasks.

Basic triage prompt:

Use Sentry to list the 10 most frequent unresolved issues in project "backend-api". Format results as a table: Issue ID | Error Type | Event Count | Last Seen | Affected Users.

Filtering for recent regressions:

Query Sentry for all unresolved issues in project "checkout-service" with firstSeen after 2026-04-30. Sort by event count descending. This was after our v5.0.0 deploy.

Error pattern analysis across projects:

List the top 5 issues from each of these Sentry projects: "api-gateway", "user-service", "payment-service". Identify if any error patterns appear in multiple services simultaneously — this would suggest a shared dependency problem.
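The cross-service correlation this prompt asks for boils down to intersecting error types across issue lists. A minimal sketch of that logic (the issue lists below are illustrative stand-ins for what the Sentry tools would return, not real data):

```javascript
// Find error types that appear in more than one service — a hint that
// a shared dependency, rather than a single service, is at fault.
function sharedErrorPatterns(issuesByService) {
  const seenIn = new Map();
  for (const [service, issues] of Object.entries(issuesByService)) {
    for (const issue of issues) {
      if (!seenIn.has(issue.type)) seenIn.set(issue.type, new Set());
      seenIn.get(issue.type).add(service);
    }
  }
  return [...seenIn.entries()]
    .filter(([, services]) => services.size > 1)
    .map(([type, services]) => ({ type, services: [...services] }));
}

const shared = sharedErrorPatterns({
  'api-gateway': [{ type: 'ConnectionResetError' }, { type: 'TimeoutError' }],
  'user-service': [{ type: 'TimeoutError' }],
  'payment-service': [{ type: 'TimeoutError' }, { type: 'ValidationError' }],
});
// TimeoutError appears in all three services — a shared-dependency signal
```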

Getting full event detail:

Get Sentry issue GATEWAY-441. Retrieve the most recent event. Show me:
- The full exception chain (all nested exceptions, not just the outermost one)
- Request method, URL, and status code
- All breadcrumbs with timestamps
- The SDK version and runtime environment

Analyzing stack trace frames:

Get Sentry event "e1f2a3b4c5d6e7f8a9b0c1d2e3f4a5b6". List all stack frames that belong to our application code (exclude node_modules, vendor, stdlib). For each in-app frame, tell me the file path, line number, and function name.
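The in-app filtering requested here is a simple predicate over frame paths. A sketch of the idea (the frame shape loosely mirrors Sentry's event JSON, where each stack frame carries filename, lineno, and function fields; the exclusion patterns are assumptions to adjust for your stack):

```javascript
// Keep only frames from application code, echoing Sentry's own
// "in-app" heuristic: drop dependency and runtime frames.
const EXCLUDED = [/node_modules\//, /^internal\//, /vendor\//];

function isInAppFrame(frame) {
  return !EXCLUDED.some((pattern) => pattern.test(frame.filename));
}

const frames = [
  { filename: 'node_modules/express/lib/router.js', lineno: 47, function: 'handle' },
  { filename: 'src/repositories/order.repository.ts', lineno: 89, function: 'findByUserId' },
  { filename: 'internal/process/task_queues.js', lineno: 95, function: 'runMicrotasks' },
];

const inApp = frames.filter(isInAppFrame);
// Only the src/ frame survives the filter
```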

Finding issues by user:

Search Sentry project "web-app" for issues where the affected user ID is "usr_789012". What errors has this user hit in the last 7 days?

Tips
- Gemini excels at structured output. Always ask for tables or numbered lists when dealing with multiple issues — the formatting is consistent and easy to scan.
- When Gemini returns a long stack trace in its response, ask it to "highlight only the frame where the error originated, and explain why that frame is the root cause."
- Gemini CLI handles multi-turn conversations well. Build context incrementally: list issues first, then drill into one, then ask for a fix — rather than trying to do everything in one massive prompt.
- For issues involving async operations, ask Gemini to "reconstruct the async call chain from the stack trace" — Gemini is good at reasoning about Promise chains and async/await flows.


Practical Example: Root Cause Analysis with Gemini CLI and Sentry MCP

This walkthrough demonstrates a complete root cause analysis session using Gemini CLI for a DatabaseConnectionError affecting a Node.js API service.

Scenario: Monitoring alerts fire for issue DB-7723 in the orders-api project. The error is intermittent but spiking — 400 events in the last hour.

Step 1: Get the big picture.

Get Sentry issue DB-7723 from the "orders-api" project. Give me a concise summary: error type, affected function, event count trend over the past 6 hours, and whether this error correlates with a specific release.

Gemini retrieves the issue and reports:
- DatabaseConnectionError: Connection timeout after 5000ms
- First seen in release v2.8.0, deployed 2026-05-07 18:00 UTC
- Events: 12 in hour 1, 40 in hour 2, 120 in hour 3, 400 in hour 4 — exponential growth
- Culprit: OrderRepository.findByUserId() in src/repositories/order.repository.ts:89

Step 2: Get the most severe recent event.

Get the most recent event for Sentry issue DB-7723. Show me the complete stack trace with all frames, the database connection config from the extra data, and the breadcrumbs leading up to the error.

Gemini returns the event payload and identifies:
- The error occurs specifically in connection pool acquisition
- Breadcrumbs show: health check → GET /orders → DB pool exhausted → timeout

Step 3: Analyze the root cause.

Based on the Sentry data you retrieved, answer these questions:
1. Why is the connection pool being exhausted?
2. Is this a connection leak, a traffic spike, or a misconfiguration?
3. What is the exponential growth pattern suggesting?

Gemini's analysis:
- The breadcrumbs show health check endpoints are acquiring DB connections but not releasing them (connection leak)
- The exponential growth is consistent with pool exhaustion under sustained traffic — each health check leaks one connection
- Release v2.8.0 likely introduced a health check endpoint that doesn't properly release connections
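The leak mechanics Gemini describes can be checked with arithmetic: a fixed-size pool leaking one connection per health check is fully exhausted after pool-size checks, after which every acquisition times out. A toy model (the pool size and check frequency are made-up numbers, not from the Sentry data):

```javascript
// Toy model: a connection pool leaking one connection per health check.
// Returns the minute at which the pool is exhausted, or null if it survives.
function minutesUntilExhaustion({ poolSize, healthChecksPerMinute, minutes }) {
  let leaked = 0;
  for (let m = 0; m < minutes; m++) {
    leaked += healthChecksPerMinute; // each check leaks one connection
    if (leaked >= poolSize) return m + 1;
  }
  return null;
}

// With a pool of 20 and a health check every 30 seconds, the pool is
// exhausted after 10 minutes — no traffic spike required.
const exhaustedAt = minutesUntilExhaustion({
  poolSize: 20,
  healthChecksPerMinute: 2,
  minutes: 60,
});
// exhaustedAt === 10
```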

Step 4: Verify with a second event.

Get two more recent events for issue DB-7723 (not the one you already fetched). Do the breadcrumbs also show health check requests before the DB error? This will confirm whether the health check is the common factor.

Gemini confirms: all three events show a health check breadcrumb 2-5 seconds before the connection timeout.
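This cross-event check is mechanical enough to script as well. A sketch that scans each event's breadcrumbs for a health-check request shortly before the error timestamp (the event shapes are illustrative, not Sentry's exact schema):

```javascript
// True only if every event has a health-check breadcrumb within
// `windowMs` before the error occurred.
function healthCheckPrecedesError(events, windowMs = 5000) {
  return events.every((event) =>
    event.breadcrumbs.some((crumb) => {
      const gap = event.timestamp - crumb.timestamp;
      return crumb.message.includes('/health') && gap > 0 && gap <= windowMs;
    })
  );
}

const events = [
  { timestamp: 10000, breadcrumbs: [{ timestamp: 7000, message: 'GET /health' }] },
  { timestamp: 20000, breadcrumbs: [{ timestamp: 18000, message: 'GET /health' }] },
];
// Both events show a health check 2-3 seconds before the error
```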

Step 5: Request a fix proposal.

The root cause is a connection leak in the health check endpoint introduced in v2.8.0. I'm using Knex.js as the query builder. The health check likely runs a raw query without releasing the connection. Write the correct pattern for a Knex.js health check that properly releases the connection after use.

Gemini proposes:

// Incorrect pattern (introduced in v2.8.0) — acquires a connection from
// the pool manually and never releases it, leaking one per health check:
async function healthCheck() {
  const conn = await db.client.acquireConnection();
  await db.raw('SELECT 1').connection(conn);
  return { status: 'ok' }; // conn is never returned to the pool
}

// Correct pattern — the connection is released in a finally block,
// even if the query throws:
async function healthCheck() {
  const conn = await db.client.acquireConnection();
  try {
    await db.raw('SELECT 1').connection(conn);
    return { status: 'ok' };
  } finally {
    db.client.releaseConnection(conn);
  }
}

Step 6: Estimate impact and write the postmortem summary.

Based on the Sentry data (event count, affected users, time range, and release version), write a one-paragraph incident summary suitable for a postmortem. Include: what broke, when it started, why it happened, and what the fix is.

Tips
- For exponential growth error patterns, always ask Gemini to check whether the issue correlates with a specific deployment — the firstSeen vs. release timeline is a critical diagnostic signal.
- Gemini's reasoning is strong for multi-hypothesis analysis. Ask it to "generate three possible root causes ranked by likelihood" before committing to one — this prevents tunnel vision.
- After getting a fix proposal, follow up with: "What monitoring alert in Sentry should I set up to catch this class of error in the future?" Gemini can draft the alert configuration.
- Preserve the investigation as documentation: use Gemini CLI's /chat save command to store the conversation for later resumption, or capture the full terminal session to a file (for example with the script command) such as ~/incident-db-7723.md.


Comparing Sentry MCP Output Between Gemini CLI and Claude Code

Both Gemini CLI and Claude Code connect to the same Sentry MCP server at https://mcp.sentry.dev/mcp, so the raw data they receive from Sentry is identical. The differences emerge in how each model reasons about that data and presents results.

Response format and structure:

| Aspect | Gemini CLI | Claude Code |
| --- | --- | --- |
| Default output format | More structured tables and lists | More narrative prose with embedded code |
| Stack trace presentation | Tends to produce numbered lists of frames | Tends to inline frames with commentary |
| Fix proposal style | Often shows before/after code blocks | Often generates diffs or explains changes inline |
| Context retention | Good across multi-turn sessions | Excellent — retains file context from open editor |

Reasoning style:

Gemini CLI tends to be more systematic in triage scenarios — it naturally generates comparative tables and can process multiple issues in parallel within one prompt. Claude Code tends to reason more deeply about a single issue, especially when local file context is available alongside the Sentry data.

Codebase integration:

Claude Code has a distinct advantage when your local codebase is involved. It can read the exact source file referenced in the Sentry stack trace and propose a fix grounded in the actual code. Gemini CLI can do this too (it has filesystem tools), but the VS Code integration makes Claude Code more fluid for in-editor workflows.

When to use each:

Use Gemini CLI when:
- You need to triage a large number of issues across multiple projects simultaneously
- You want structured tabular output for reporting or team standup prep
- You're on a machine without the Claude Code VS Code extension

Use Claude Code when:
- You're actively debugging in VS Code with source files open
- You need the AI to read local code files and cross-reference them with Sentry data
- You want the fix applied directly in the editor

Example of the same prompt, different output style:

Prompt: "Get Sentry issue ORDERS-891. Summarize the root cause."

Gemini CLI output style:

Issue: ORDERS-891
Type: TypeError: Cannot read properties of undefined (reading 'price')
Root Cause: `order.items` is undefined when `calculateTotal()` is called.
Likely Cause: Orders created before the v3.1.0 migration may lack an `items` field.
Recommendation: Add a null guard before iterating `order.items`.

Claude Code output style:

The error in ORDERS-891 originates at calculateTotal() in orders/pricing.ts:67.
The stack trace shows that order.items is undefined — likely an order record fetched
from the legacy database schema before the v3.1.0 migration that added the items array.

The fix is straightforward: add an early return or default assignment at the top of
calculateTotal() before the .map() call on line 67...

Both are useful. The Gemini output is faster to scan; the Claude Code output is more contextual.
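The null guard both outputs recommend looks roughly like this in practice (a hedged sketch: calculateTotal and the order shape are inferred from the example above, not taken from real source):

```javascript
// Defensive version of calculateTotal: orders created before the
// v3.1.0 migration may lack an `items` array entirely.
function calculateTotal(order) {
  const items = order.items ?? []; // null guard for pre-migration records
  return items.reduce((sum, item) => sum + item.price * (item.quantity ?? 1), 0);
}

const legacyOrder = { id: 'ORD-1' }; // pre-migration record, no items field
const modernOrder = { id: 'ORD-2', items: [{ price: 5, quantity: 2 }] };
// legacy orders total 0 instead of throwing; modern orders are unaffected
```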

Tips
- Don't pick one tool exclusively — use Gemini CLI for morning triage (scan many issues quickly) and Claude Code for deep dives (investigate and fix one issue with code context).
- The Sentry MCP server itself doesn't care which client calls it. You can start an investigation in Gemini CLI and finish it in Claude Code — just bring the issue ID along.
- If Gemini's response for a complex bug is too brief or misses the root cause, try the same prompt in Claude Code with the source file open. The additional file context often produces a more accurate diagnosis.
- Test your prompt strategy with a known bug first — one where you already know the root cause. This lets you calibrate how much context each tool needs to reach the correct conclusion.