
Puppeteer MCP With OpenCode

Installing and Connecting Puppeteer MCP to OpenCode

OpenCode is an open-source, terminal-native AI coding assistant that supports MCP servers through a TOML configuration file. It runs entirely from the command line, making it a natural fit for developers who prefer keyboard-driven workflows and want to integrate AI-assisted browser automation without leaving the terminal.

Install OpenCode via the official install script or npm:

npm install -g opencode-ai

Puppeteer MCP connects to OpenCode through the ~/.config/opencode/config.toml file. Add the following entry under the [mcp] namespace:

[mcp.puppeteer]
command = "npx"
args = ["-y", "@modelcontextprotocol/server-puppeteer"]

OpenCode will spawn the Puppeteer MCP server as a subprocess when a session starts. The server communicates over stdio using the MCP protocol — no ports, no sockets, no daemon required.

To enable headful mode so you can watch the browser during a session:

[mcp.puppeteer]
command = "npx"
args = ["-y", "@modelcontextprotocol/server-puppeteer"]

[mcp.puppeteer.env]
PUPPETEER_HEADLESS = "false"

For CI/Docker environments that require disabling the sandbox:

[mcp.puppeteer]
command = "npx"
args = ["-y", "@modelcontextprotocol/server-puppeteer"]

[mcp.puppeteer.env]
PUPPETEER_LAUNCH_ARGS = "--no-sandbox --disable-setuid-sandbox"

To confirm the integration is working, start an OpenCode session and run:

List all available MCP tools.

The response should include puppeteer_navigate, puppeteer_screenshot, puppeteer_click, puppeteer_type, puppeteer_evaluate, and puppeteer_select.

Tips
- Run opencode --verbose on first connection to see the MCP server startup output — it will surface Chromium binary download issues or PATH problems before you start a debugging session.
- OpenCode's TOML config supports inline comments — annotate your Puppeteer config with the target environment (local, staging) so you don't accidentally point the agent at a production URL.
- If you maintain multiple projects, consider a per-project opencode.toml in the project root. OpenCode will merge project-level config with the global config, letting you set project-specific base URLs or environment variables.
- Pin a specific version of @modelcontextprotocol/server-puppeteer in the args entry for long-running projects to avoid unexpected behavior from upstream package updates.
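
Combining two of the tips above, a per-project config sketch in the same TOML layout shown earlier; the version pin "X.Y.Z" is a placeholder, not a real release:

```toml
# Puppeteer MCP -- local dev only; never point the agent at production URLs.
[mcp.puppeteer]
command = "npx"
# Pin a verified release; "X.Y.Z" is a placeholder, not a real version.
args = ["-y", "@modelcontextprotocol/server-puppeteer@X.Y.Z"]

[mcp.puppeteer.env]
# Headful while debugging locally; set to "true" for CI.
PUPPETEER_HEADLESS = "false"
```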

Letting the OpenCode Agent Reproduce and Debug Frontend Rendering Issues

With Puppeteer MCP active in OpenCode, the agent can operate a full Chromium browser and execute browser automation as part of its problem-solving loop. For frontend rendering issues, this means the agent can navigate to the broken page, capture the visual state, inspect the DOM, and evaluate JavaScript — all within the same terminal session where you're reading the source code.

A typical rendering bug investigation prompt in OpenCode:

The sidebar at http://localhost:3000/app collapses on first load and doesn't expand when
the toggle button is clicked. Please:
1. Navigate to the page
2. Screenshot the initial state
3. Evaluate: document.querySelector('.sidebar') — report its className and computed width
4. Click the toggle button (selector: '[data-testid="sidebar-toggle"]')
5. Screenshot again
6. Evaluate the sidebar's className and computed width again
7. Check for any console errors related to sidebar or toggle
Report whether the toggle changed the state and what the DOM shows before and after.
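
Steps 3 and 6 can be phrased as a concrete evaluate body. A sketch of what the agent might run via puppeteer_evaluate — '.sidebar' comes from the prompt above; the helper name is made up:

```javascript
// Report an element's class list and computed width (runs in the page context).
// Passing getComputedStyle in as a parameter keeps the helper testable outside
// the browser.
function describeElement(el, computeStyle) {
  if (!el) return { present: false };
  const style = computeStyle(el);
  return {
    present: true,
    className: el.className,
    width: style.width,
    display: style.display,
  };
}
// In the browser:
// describeElement(document.querySelector('.sidebar'), getComputedStyle)
```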

For CSS-driven rendering failures that only appear at specific viewport sizes:

The product card grid at http://localhost:3000/shop/all breaks at 768px width — cards
overflow horizontally. Please:
1. Set viewport to 1280x800 and screenshot (baseline)
2. Evaluate: document.querySelectorAll('.product-card') — how many cards are visible?
3. Set viewport to 768x1024 and screenshot
4. Evaluate: for the first product card, report its getBoundingClientRect() width and the
   parent container's scrollWidth vs clientWidth
5. Set viewport to 375x812 and screenshot
6. Report which viewport(s) show overflow and the exact pixel values that indicate the issue
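
Step 4's overflow check reduces to comparing scrollWidth against clientWidth. A minimal sketch (the selectors come from the prompt above):

```javascript
// An element overflows horizontally when its content width (scrollWidth)
// exceeds its visible width (clientWidth).
function horizontalOverflow(el) {
  return Math.max(0, el.scrollWidth - el.clientWidth);
}
// In the browser, via puppeteer_evaluate:
// const card = document.querySelector('.product-card');
// ({ cardWidth: card.getBoundingClientRect().width,
//    overflowPx: horizontalOverflow(card.parentElement) })
```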

For JavaScript-driven rendering issues (hydration errors, React state issues):

At http://localhost:3000/profile, the user avatar appears briefly then disappears.
This might be a hydration issue. Please:
1. Navigate to the page and immediately screenshot
2. Wait 2 seconds (evaluate: new Promise(r => setTimeout(r, 2000)))
3. Screenshot again
4. Evaluate: document.querySelector('.user-avatar') — is it present in the DOM? What are its
   styles?
5. Check console for any React hydration warnings or "unmount" related messages
6. Evaluate window.__USER_SESSION__ to see if auth state is correctly set
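
Step 5 can be made concrete: React hydration warnings are emitted through console.error, so the agent can wrap it before the page re-renders. If your Puppeteer MCP server exposes console logs directly, prefer that; this evaluate-based fallback and the window.__hydrationWarnings name are illustrative:

```javascript
// Wrap console.error so matching messages are captured for later reporting.
// Taking the console object and sink as parameters keeps the helper testable.
function wrapConsoleError(con, sink, pattern) {
  const original = con.error.bind(con);
  con.error = (...args) => {
    const text = args.map(String).join(' ');
    if (pattern.test(text)) sink.push(text); // record hydration-related errors
    original(...args);                       // still log normally
  };
}
// In the browser, before the re-render:
// window.__hydrationWarnings = [];
// wrapConsoleError(console, window.__hydrationWarnings, /hydrat|did not match/i);
```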

Tips
- For timing-sensitive rendering bugs, ask the agent to use puppeteer_evaluate to inject a MutationObserver that records DOM changes, then screenshot and report the mutation log after the triggering interaction.
- When the bug involves network-dependent rendering, ask the agent to evaluate performance.getEntriesByType('resource') after page load and check for any failed requests (status 4xx/5xx) that might explain missing data.
- If the selector for the broken element isn't obvious, ask the agent to evaluate document.body.innerHTML.substring(0, 3000) to get a truncated view of the DOM, then identify the right selector from that output.
- OpenCode maintains conversation context within a session — use this to chain investigations. First ask the agent to reproduce the bug, then follow up with "now check what happens if the user is not logged in" without re-explaining the full context.
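
The MutationObserver tip above might look like this in practice; the window.__mutationLog name and the summary shape are illustrative, not part of the MCP server:

```javascript
// Reduce a MutationRecord to a compact, reportable summary.
function summarizeMutation(m) {
  return {
    type: m.type,
    target: m.target ? m.target.nodeName : null,
    added: m.addedNodes ? m.addedNodes.length : 0,
    removed: m.removedNodes ? m.removedNodes.length : 0,
  };
}

// Install the observer in the page (call via puppeteer_evaluate before the
// triggering interaction); the agent reads win.__mutationLog afterwards.
function installMutationLogger(win, doc) {
  win.__mutationLog = [];
  const observer = new win.MutationObserver((mutations) => {
    for (const m of mutations) win.__mutationLog.push(summarizeMutation(m));
  });
  observer.observe(doc.body, { childList: true, subtree: true, attributes: true });
  return observer;
}
// In the browser: installMutationLogger(window, document);
```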

Running UI Automation and QA Test Scenarios in OpenCode with Puppeteer MCP

OpenCode's terminal-native design makes it straightforward to integrate Puppeteer MCP-driven QA into CLI-based workflows. You can run QA automation sessions from the same terminal where you run builds, run tests, and deploy to staging — keeping the entire development loop in one environment.

A multi-scenario QA prompt for a login flow:

Run QA tests on the login flow at http://localhost:3000/login:

Test 1 - Valid credentials:
  - Enter email: "user@example.com", password: "correctpassword"
  - Click Login
  - Screenshot the result
  - Report: did it redirect to dashboard? Any console errors?

Test 2 - Invalid password:
  - Enter email: "user@example.com", password: "wrongpassword"
  - Click Login
  - Screenshot the error state
  - Report: what error message appears? Is it shown in the DOM or as a JS alert?

Test 3 - Empty form submission:
  - Click Login without entering anything
  - Screenshot
  - Report: what validation feedback appears and how?

Test 4 - SQL injection in email field:
  - Enter: "'; DROP TABLE users;--" in email, any text in password
  - Click Login
  - Report: does the app crash, show an error, or behave unexpectedly?

Produce a pass/fail summary for all 4 tests.
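
Test 2 asks whether the error surfaces in the DOM or as a JS alert; a sketch of how the agent could answer via puppeteer_evaluate (the .error-message selector and window.__alerts name are guesses for illustration):

```javascript
// Record alerts before clicking Login so they don't block automation.
function recordAlerts(win) {
  win.__alerts = [];
  win.alert = (msg) => win.__alerts.push(String(msg));
}

// After the click, report both possible error channels.
function errorChannels(win, doc, selector) {
  const el = doc.querySelector(selector);
  return {
    domError: el ? el.textContent.trim() : null,
    alerts: win.__alerts || [],
  };
}
// In the browser: recordAlerts(window); /* click Login */
// then: errorChannels(window, document, '.error-message')
```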

For regression testing a specific component after a refactor:

The DatePicker component was refactored today. Please run regression checks at
http://localhost:3000/components/datepicker-demo:

1. Click the datepicker input to open the calendar
2. Screenshot the open calendar state
3. Navigate to the next month using the ">" button, screenshot
4. Click on the 15th of the displayed month
5. Screenshot and report the selected date shown in the input field
6. Try typing a date directly in the input: "2025-06-20"
7. Screenshot and report whether the calendar updates to reflect the typed date
8. Click outside the datepicker to close it
9. Screenshot the final state

Report any unexpected behavior, console errors, or visual glitches.

Tips
- For QA sessions that cover many scenarios, use a numbered list format in your prompt so the agent's output maps directly to each test case — this makes copy-pasting results into a test report trivial.
- OpenCode supports file operations — ask the agent to save QA results to a markdown file in your project's qa-reports/ directory so they become versioned artifacts.
- For accessibility QA, ask the agent to evaluate document.querySelectorAll('[role]') and report ARIA roles, or evaluate axe.run() if you have the axe-core library loaded in your dev build.
- When the QA flow involves file uploads, note that browsers block scripts from setting a file input's value directly; instead, the agent can use puppeteer_evaluate to build a DataTransfer, assign it to the input's files property, and dispatch a change event when the standard file picker dialog is otherwise inaccessible.
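
The file-upload tip above can be sketched concretely; browsers block setting a file input's .value from script, so the workaround goes through DataTransfer (the selector and file contents are illustrative):

```javascript
// Attach a synthetic file to a file input and notify the app (run in the page).
// The DataTransfer constructor is injected so the helper is testable outside
// the browser.
function attachFile(input, file, DataTransferCtor) {
  const dt = new DataTransferCtor();
  dt.items.add(file);
  input.files = dt.files; // assigning .files works where .value does not
  input.dispatchEvent(new Event('change', { bubbles: true }));
}
// In the browser:
// attachFile(document.querySelector('input[type=file]'),
//            new File(['hello'], 'hello.txt', { type: 'text/plain' }),
//            DataTransfer);
```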

Known Limitations for Puppeteer MCP Frontend Debugging in OpenCode

Understanding the constraints of Puppeteer MCP in OpenCode helps you design prompts that work within them and avoid frustrating debugging sessions where the agent hits a wall.

No cross-tab or multi-window support. Puppeteer MCP manages a single browser page. If your application opens links in new tabs (target="_blank"), the agent cannot follow them. Work around this by asking the agent to evaluate the href attribute of the link and then navigate directly to that URL with puppeteer_navigate in the same page.

No file download verification. When a UI action triggers a file download, Puppeteer MCP does not expose the downloaded file to the agent. If you need to verify download behavior, ask the agent to confirm the download request fired using puppeteer_evaluate with performance.getEntriesByType('resource'), then re-request the same URL with fetch — resource timing entries do not expose response headers — and check that the headers indicate a downloadable content type.
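
A hedged sketch of that header check, run via puppeteer_evaluate; the export path is a placeholder, and it assumes the endpoint tolerates a repeated HEAD request:

```javascript
// Re-request a download URL and report the headers that signal a file download.
// Resource timing entries confirm the request happened, but only a fresh fetch
// exposes Content-Type and Content-Disposition.
async function describeDownload(url) {
  const res = await fetch(url, { method: 'HEAD' });
  return {
    status: res.status,
    contentType: res.headers.get('content-type'),
    disposition: res.headers.get('content-disposition'),
  };
}
// In the browser:
// describeDownload('/export/report.csv')  // placeholder path
```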

No browser extension interaction. Puppeteer launches a clean Chromium profile. If your application's behavior depends on browser extensions (ad blockers, cookie managers), those will not be present. Design your QA prompts around the extension-free state.

Limited interaction with OS-level dialogs. Native file picker dialogs, authentication dialogs, and browser-native alerts can sometimes block automation. For window.alert, window.confirm, and window.prompt, ask the agent to handle them by evaluating overrides before triggering the action:

// Override before the action that triggers the dialog
window.alert = (msg) => { window.__lastAlert = msg; };  // record instead of blocking
window.confirm = () => true;                            // auto-accept confirmations
window.prompt = () => "test input";                     // supply a canned answer

Pass this as a puppeteer_evaluate call before the interaction that would trigger the dialog.

Screenshot resolution depends on device pixel ratio. On high-DPI displays, screenshots may render at 2x resolution. Note that window.devicePixelRatio is read-only from page scripts, so asking the agent to assign to it via puppeteer_evaluate will not change the capture scale; that is determined by the browser's device scale factor. If the agent's screenshot analysis seems off, request explicit screenshot dimensions where the screenshot tool supports them, and have the agent report element sizes from getBoundingClientRect() rather than inferring them from pixel counts.

Tips
- Document the limitations relevant to your application in your project's AI agent instructions file so every team member's OpenCode sessions handle them consistently.
- For multi-tab workflows, restructure the test to explicitly use puppeteer_navigate to visit each URL in sequence on the same page, rather than relying on links that open new tabs.
- When dialogs block automation, ask the agent to inject overrides at the start of every session using puppeteer_evaluate — this is a one-time setup that prevents dialog-blocked sessions.
- If screenshot quality affects the agent's visual analysis, explicitly request headful mode with a specific viewport and device scale factor at the start of the debugging prompt.