
What Is Sentry MCP

Sentry MCP is a Model Context Protocol server that exposes Sentry's error monitoring data — issues, events, stack traces, distributed traces, and release information — as structured tools that AI coding agents can call directly. Instead of manually copying error details from the Sentry dashboard and pasting them into a chat window, you connect Sentry MCP once and your AI agent can autonomously query, analyze, and reason about production errors in real time.

Sentry operates an official remote MCP server at https://mcp.sentry.dev/mcp. Your AI client connects to it over HTTP — no local process to install, no npx command, no webhook or polling required. You authenticate once with a Sentry auth token, and the client handles the rest.

This topic covers what Sentry MCP can do, how authentication works, and the privacy boundaries you need to understand before connecting production error data to an LLM.


Core Sentry MCP Tools: Issues, Events, Stack Traces, Traces, and Releases

Sentry MCP exposes a set of tools that map directly onto the Sentry data model. Understanding what each tool returns helps you write better prompts and know exactly what your AI agent has access to.

Issues — An issue in Sentry is a deduplicated group of error events sharing the same fingerprint. The MCP server can list issues by project, filter by status (unresolved, ignored, resolved), sort by frequency or last seen, and return issue metadata including title, culprit, first seen, last seen, and event count.

Events — Individual error occurrences that belong to an issue. Each event contains the full exception payload: exception type, message, stack frames (file path, line number, function name, local variables if configured), request context (URL, headers, method), user context, and breadcrumbs (the sequence of actions leading up to the error).

Stack Traces — Stack frames are surfaced as part of the event payload. The MCP server returns them in a structured format that AI agents can reason about: which frames are in-app vs. library code, the exact file and line for each frame, and which local variable values were captured at the time of the error.

Distributed Traces — When your application uses Sentry's performance monitoring, errors are correlated with traces. The MCP server can retrieve a trace by its trace ID, showing the full span tree: which services were involved, which spans were slow, and where the error occurred within the distributed execution path.

Releases — Sentry tracks which release version an error first appeared in and whether it has been seen in newer releases. This allows AI agents to answer questions like "was this error introduced in v2.4.1?" or "is this regression still present in the latest release?"

Example tool invocations your AI agent will make internally:

list_issues(organization="your-org", project="backend-api", query="is:unresolved", limit=25)
get_issue(issue_id="BACKEND-1234")
get_event(event_id="abc123...", organization="your-org")
get_trace(trace_id="d4e5f6...", organization="your-org")
list_releases(organization="your-org", project="backend-api")
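Under the hood, each tool call resolves to a request against Sentry's REST API. The sketch below is a hypothetical helper (not part of Sentry MCP) illustrating that mapping; the endpoint paths follow Sentry's public Web API, but the function name and parameters are assumptions for illustration:

```python
# Hypothetical helper: show which Sentry REST endpoint a given MCP
# tool call would hit. Only URL construction is shown -- no network
# request is made.

SENTRY_API = "https://sentry.io/api/0"

def tool_to_url(tool: str, **params: str) -> str:
    org = params.get("organization", "")
    if tool == "list_issues":
        # Issues are listed per project, filtered by a search query
        return (f"{SENTRY_API}/projects/{org}/{params['project']}/issues/"
                f"?query={params.get('query', 'is:unresolved')}")
    if tool == "get_issue":
        return f"{SENTRY_API}/issues/{params['issue_id']}/"
    if tool == "list_releases":
        return f"{SENTRY_API}/organizations/{org}/releases/"
    raise ValueError(f"unknown tool: {tool}")

print(tool_to_url("list_issues", organization="your-org",
                  project="backend-api", query="is:unresolved"))
# -> https://sentry.io/api/0/projects/your-org/backend-api/issues/?query=is:unresolved
```

You never call these endpoints yourself — the MCP server does — but knowing the shape of the underlying API helps when debugging permission errors.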

Tips
- Ask the AI to list the top 10 unresolved issues by frequency before diving into a single bug — it gives you triage context and often surfaces systemic problems rather than one-offs.
- The get_event tool returns breadcrumbs in chronological order. Always ask the AI to summarize the breadcrumb sequence when investigating crashes — it often reveals the exact user path that triggered the error.
- Distributed traces are only available if you have Sentry Performance enabled. Check your plan and SDK config (traces_sample_rate > 0) before relying on trace-based prompts.
- Issue fingerprinting in Sentry can be misconfigured. If you see thousands of separate issues for the same logical bug, ask the AI to check whether custom fingerprinting rules are set on the project.


Sentry MCP Authentication: Auth Token Setup and Organization Config

Sentry MCP authenticates using a Sentry auth token. You need to generate a token with the correct scopes. When you connect to the official Sentry MCP server at https://mcp.sentry.dev/mcp, you provide this token once — your AI client stores it and reuses it automatically. No manual config file editing or environment variable injection required.

Generating the token:

  1. Log in to Sentry and navigate to Settings > Account > API > Auth Tokens (personal token) or Settings > Organization > Auth Tokens (org-level token for CI/shared use).
  2. Create a new token with the following scopes:
     - event:read — read error events and stack traces
     - project:read — read project configuration and metadata
     - org:read — read organization-level data including releases and members

For most debugging workflows, these three read-only scopes are sufficient. Do not grant event:write, project:write, or admin scopes to the MCP token.

You can verify the token and its scopes with a quick API call before connecting your AI client:

curl -H "Authorization: Bearer YOUR_TOKEN" \
  https://sentry.io/api/0/organizations/your-org-slug/projects/ \
  | jq '.[].name'

How authentication works with the remote MCP server:

When you connect your AI client to https://mcp.sentry.dev/mcp, you provide your auth token during the initial connection setup. The client stores it securely and passes it as a bearer token on every request — no SENTRY_AUTH_TOKEN env var, no settings.json editing needed. See the per-client setup topics in this module for the exact connection steps.

Finding your organization slug: It appears in the URL when you're in the Sentry dashboard — https://your-org-slug.sentry.io or https://sentry.io/organizations/your-org-slug/.

Tips
- Use an organization-level auth token for team environments rather than personal tokens — if the developer who created the personal token leaves, the MCP integration breaks.
- Rotate MCP tokens on the same schedule as other API credentials. Because the token is stored by your AI client, treat it as a long-lived secret.
- Store the token in a secrets manager (1Password, AWS Secrets Manager, macOS Keychain) and use it when prompted during connection setup — don't hard-code it in config files committed to source control.
- The org:read scope is needed even for single-project queries because Sentry's API validates org membership at the organization level before returning project data.


What AI Can Automate with Sentry MCP: Triage, Analysis, and Fix Proposals

With Sentry MCP connected, an AI agent's capabilities shift from "I can answer questions about code I can see" to "I can investigate production errors end-to-end." Here are the concrete workflows that become possible:

Automated triage: The AI can list all unresolved high-priority issues in a project, group them by affected component, and produce a prioritized list with reasoning — all from a single prompt like "What are the most critical unresolved errors in the payment-service project right now?"

Stack trace analysis: Given an issue ID, the AI retrieves the full event, reads the stack frames, and cross-references them against your local codebase (if the AI client has file access). It can identify exactly which function threw, which caller passed the bad input, and what the values were at the time.

Root cause hypothesis: Using the stack trace, breadcrumbs, and request context together, the AI can reason about causal chains: "The user hit /checkout with an empty cart, the CartService.total() method received null for items, and a missing null check in calculateTax() caused the TypeError."

Fix proposal with code context: If your AI client also has access to your local source files, it can read the offending function, propose a concrete code fix, and explain why the fix addresses the root cause.

Release regression detection: The AI can compare issue firstSeen and lastSeen against your release history to tell you "this error was first introduced in v3.2.0, which shipped last Tuesday."

Example prompts that exercise these capabilities:

List the top 5 unresolved issues in the "backend-api" project sorted by frequency. For each one, tell me the error type, affected function, and event count.
Get Sentry issue BACKEND-4521. Read the stack trace and breadcrumbs. Hypothesize the root cause and suggest what code change would fix it.
Find all Sentry issues in the "checkout" project that have appeared since the v4.1.0 release. Group them by error type.

Tips
- Combine Sentry MCP with filesystem access in your AI client for the most powerful workflows — the AI can read the Sentry stack trace and then immediately open the referenced file to propose a fix.
- Ask the AI to summarize breadcrumbs as a numbered user journey — "1. User opened /cart, 2. User clicked checkout, 3. API returned 500" — before diving into the technical root cause. It helps you understand the blast radius.
- For recurring errors, ask the AI to check if there are duplicate issues with different fingerprints. Sentry sometimes creates separate issues for the same bug if stack frames vary slightly between environments.
- The AI can write Sentry search queries for you. Ask it to build a query string like is:unresolved !has:assignee level:error to find unowned high-severity errors.
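The numbered user journey described in the breadcrumbs tip above can be sketched in a few lines. The category and message fields follow Sentry's breadcrumb schema; the formatter itself is illustrative:

```python
# Illustrative formatter: turn a Sentry breadcrumb list (oldest first)
# into a numbered user journey, as suggested in the tips above.
def journey(breadcrumbs: list[dict]) -> str:
    return "\n".join(
        f"{i}. {b.get('category', 'event')}: {b.get('message', '')}".rstrip()
        for i, b in enumerate(breadcrumbs, 1)
    )

crumbs = [
    {"category": "navigation", "message": "User opened /cart"},
    {"category": "ui.click", "message": "User clicked checkout"},
    {"category": "http", "message": "POST /api/checkout -> 500"},
]
print(journey(crumbs))
# 1. navigation: User opened /cart
# 2. ui.click: User clicked checkout
# 3. http: POST /api/checkout -> 500
```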


Privacy and Security Considerations When Using Sentry MCP

Sentry error data frequently contains sensitive information: user email addresses, session tokens, request headers including Authorization and Cookie, user-provided form data captured in breadcrumbs, and PII in exception messages. Before connecting Sentry MCP to an AI agent, understand the data flow and apply appropriate controls.

Data flow: When your AI client invokes a Sentry MCP tool, the server fetches data from the Sentry API and returns it to the AI model's context window. The LLM processes that data as part of its inference — meaning sensitive fields in events are effectively passed to the model provider.

Sentry-side controls:

  • Enable Data Scrubbing in Sentry project settings to automatically redact known sensitive fields (credit card numbers, passwords, SSNs) before they are stored.
  • Configure Safe Fields and Additional Sensitive Fields to control what Sentry strips from events at ingest time.
  • Use Data Privacy settings under Settings > Security & Privacy to set global scrubbing rules.

For example, a raw event payload containing sensitive request data:

"request": {
  "headers": { "Authorization": "Bearer eyJhbGciOi..." },
  "data": { "password": "hunter2", "email": "[email protected]" }
}

becomes, after scrubbing:

"request": {
  "headers": { "Authorization": "[Filtered]" },
  "data": { "password": "[Filtered]", "email": "[Filtered]" }
}
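Sentry performs this scrubbing at ingest, but the same idea can be applied defensively in your own tooling before payloads ever reach a model's context window. A minimal sketch, assuming a hand-picked key list (the function and key set are illustrations, not part of Sentry MCP):

```python
# Keys whose values should never reach an LLM context window.
SENSITIVE_KEYS = {"authorization", "cookie", "password", "email", "ssn"}

# Recursively replace values of known-sensitive keys, mirroring the
# "[Filtered]" convention Sentry uses in its own scrubbing.
def scrub(payload):
    if isinstance(payload, dict):
        return {
            k: "[Filtered]" if k.lower() in SENSITIVE_KEYS else scrub(v)
            for k, v in payload.items()
        }
    if isinstance(payload, list):
        return [scrub(item) for item in payload]
    return payload

event = {"request": {"headers": {"Authorization": "Bearer eyJ..."},
                     "data": {"password": "hunter2"}}}
print(scrub(event))
# {'request': {'headers': {'Authorization': '[Filtered]'}, 'data': {'password': '[Filtered]'}}}
```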

Token scoping: The event:read scope gives the MCP server access to the full event payload. There is currently no finer-grained scope to limit access to only anonymized data. Apply data scrubbing at the Sentry level, not at the MCP level.

Audit logging: Every MCP tool call results in an API call to Sentry. These calls appear in your Sentry audit log under Settings > Audit Log. Enable audit log retention if you need to track which errors were accessed via the MCP integration.

On-premise considerations: If you run self-hosted Sentry, event data stays within your network when the MCP server is configured to point at your own Sentry host rather than sentry.io. This removes the concern of sending event data to Sentry's cloud but does not eliminate the concern of sending it to your AI model provider.

Tips
- Run a spot check before onboarding your team: take a real Sentry event from your production project and manually inspect it for PII. If you find sensitive data, configure Sentry's scrubbing rules before enabling MCP access.
- Create a dedicated MCP auth token with a descriptive name (e.g., "claude-code-mcp-readonly") so you can quickly identify and revoke it if needed without affecting other integrations.
- For regulated industries (HIPAA, PCI-DSS, GDPR), consult your compliance team before routing production error events through an external LLM provider. Consider restricting Sentry MCP to staging environments only.
- Instruct your AI agent to avoid echoing raw event payloads verbatim in its response — ask it to summarize instead. This reduces the risk of sensitive values appearing in logs or chat history.