
Real-World Workflow: Frontend Debugging and QA Automation

Workflow Overview: From Frontend Bug Report to Verified Fix Using Puppeteer MCP

A typical frontend bug report arrives as a user complaint, a Sentry alert, or a QA ticket. It contains a description of unexpected behavior, sometimes a screenshot, sometimes a set of reproduction steps, and almost always incomplete information about the exact DOM state when the failure occurred. The developer's job is to reproduce it, understand why it happened, fix it, and verify the fix without introducing regressions.

With Puppeteer MCP connected to an AI agent, this workflow transforms: the agent handles the browser-facing steps (reproduction, inspection, verification) while you focus on the code-facing steps (reading source, applying fixes, reviewing changes). The full workflow looks like this:

  1. Reproduce the bug — Agent navigates the UI, follows the reported steps, and confirms the failure exists. It captures visual evidence (screenshots) and structural evidence (DOM state, console errors).
  2. Diagnose root cause — Agent inspects the DOM, evaluates JavaScript state, checks network requests, and correlates findings with the source code (if your AI client supports file context).
  3. Apply the fix — Developer (or agent, depending on your workflow) modifies the source code.
  4. Verify the fix and run regression checks — Agent re-executes the reproduction scenario on the fixed build, confirms the original failure is resolved, then runs a broader set of UI checks to catch any regressions introduced by the fix.

The entire loop — from "I have a bug report" to "the fix is verified in the browser" — can complete in minutes rather than the hours that manual browser debugging, context switching, and ad-hoc regression checking typically require.

This document walks through each step with real prompts, real commands, and a concrete example bug: a modal dialog that submits a form with empty required fields when the user clicks outside the modal.

Tips
- Start every debugging session with PUPPETEER_HEADLESS=false during the reproduction phase so you can visually monitor what the agent is doing — this helps you catch prompt misunderstandings before the agent goes too far down the wrong path.
- Establish a consistent staging environment for Puppeteer MCP sessions. Use a dedicated staging URL with a clean database state so agent actions (form submissions, account creation) don't pollute shared test data.
- Track which pages and flows your team uses Puppeteer MCP on most frequently — these become candidates for a formal regression suite that runs on every PR.
- Document the agent prompts that successfully reproduced, diagnosed, and verified fixes for significant bugs. These become a library of reusable debugging templates for similar issues in the future.

Step 1: Reproducing the Bug — Agent Navigates the UI and Captures the Failure

The goal of reproduction is to get a confirmed, documented failure state: a screenshot showing the wrong behavior, DOM evidence proving the failure happened, and a verified sequence of steps that reliably triggers it.

The bug report: "When I click outside the 'Create Project' modal without filling in the required fields, the form submits with empty data instead of just closing the modal."

Start a Claude Code (or Cursor, OpenCode, or Gemini CLI) session with Puppeteer MCP active and send this reproduction prompt:

I need to reproduce a bug in the Create Project modal at http://localhost:3000/projects.
The reported behavior is: clicking outside the modal (on the overlay) while the form
fields are empty submits the form with empty data instead of just closing the modal.

Please reproduce this:
1. Navigate to http://localhost:3000/projects
2. Screenshot the initial projects page
3. Click the "New Project" button (selector: '[data-testid="new-project-btn"]' or
   look for a button with "New Project" text)
4. Screenshot the open modal
5. DO NOT fill in any fields
6. Click the modal overlay (outside the modal dialog itself — the darkened background)
   Selector to try: '.modal-overlay' or '[data-testid="modal-backdrop"]'
7. Screenshot immediately after clicking
8. Evaluate: check if any network requests were made to '/api/projects' in the last
   5 seconds: performance.getEntriesByType('resource').filter(r => r.name.includes('/api/projects'))
9. Evaluate: is the modal still open? Check: document.querySelector('.modal') !== null
10. Check console for any errors or form submission related logs

Report: was the bug reproduced? Include screenshots and the network request check result.
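Steps 8 and 9 of the prompt boil down to a browser-side evaluation. A minimal sketch of that logic, extracted as a plain function so the check is explicit (the selector `.modal` and the `/api/projects` path are the assumptions already made in the prompt above):

```javascript
// Pure decision logic behind the network + modal-state checks.
// resourceEntries: the array from performance.getEntriesByType('resource')
// modalElement:    the result of document.querySelector('.modal')
function checkForBugEvidence(resourceEntries, modalElement) {
  // Any resource entry hitting the projects API during the test window
  // means the form was submitted when the overlay was clicked.
  const apiCalls = resourceEntries.filter((r) => r.name.includes('/api/projects'));
  return {
    formSubmitted: apiCalls.length > 0,    // bug evidence: a request fired
    modalStillOpen: modalElement !== null, // did the modal close as expected?
  };
}

// In the page context the agent would call it as:
// checkForBugEvidence(performance.getEntriesByType('resource'),
//                     document.querySelector('.modal'));
```

If `formSubmitted` is true while the fields were empty, the bug is confirmed regardless of what the screenshots show.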

If the agent's first attempt uses the wrong selector for the overlay, follow up:

The click on '.modal-overlay' didn't land on the right element. Please evaluate:
document.querySelectorAll('[class*="overlay"], [class*="backdrop"], [class*="modal"]')
and list all matching elements with their class names, so we can identify the correct
selector for the darkened background behind the modal.

Once reproduction is confirmed, the agent will have produced:
- A screenshot of the modal in its open state
- A screenshot of the state after clicking outside (showing whether the modal closed or stayed open)
- A network request log showing whether a POST to /api/projects was made during the click
- Console output showing any form submission events

This constitutes a confirmed, documented reproduction with browser evidence.

Tips
- If the first reproduction attempt fails because the agent used the wrong selector, ask it to evaluate a broad attribute-selector query (e.g. document.querySelectorAll('[class*="overlay"], [class*="modal"]')) and list the matching class names — this is faster than manually inspecting the DOM in a separate browser window.
- Ask the agent to capture a screenshot immediately before the suspected failure action and immediately after — this gives you a before/after visual pair that definitively shows what changed.
- For intermittent bugs, ask the agent to repeat the sequence 3 times and report which attempts triggered the bug — this establishes whether the failure is timing-dependent.
- Save the reproduction prompt that successfully confirmed the bug to a debug-prompts/ directory in your project. Future developers (or the agent on a future session) can reuse it to verify the bug is fixed.

Step 2: Diagnosing Root Cause — Agent Inspects DOM, Console Errors, and Network Requests

With the bug reproduced, the diagnosis phase extracts the structural and behavioral evidence needed to understand why the failure occurs. The agent acts as a methodical investigator, checking each potential cause layer: event propagation, form state, component lifecycle, and network behavior.

The diagnosis prompt for the modal bug:

The bug is confirmed: clicking the modal overlay triggers a form submission.
Now let's diagnose the root cause. Please:

1. Navigate back to http://localhost:3000/projects and open the Create Project modal again
2. Evaluate before clicking: what event listeners are attached to the modal overlay element?
   Try: getEventListeners(document.querySelector('.modal-overlay')) if available,
   or evaluate: document.querySelector('.modal-overlay').onclick
3. Evaluate: what is the modal overlay's HTML — specifically, does the overlay element
   contain the form, or is it a sibling/parent of the form?
   document.querySelector('.modal-overlay').innerHTML.substring(0, 500)
4. Click the overlay and immediately evaluate:
   - What was the event.target of the last click event? We need to inject a listener:
     First evaluate: document.addEventListener('click', (e) => { window.__lastClick = e.target.className + ' | ' + e.target.tagName }, { once: true })
     Then click the overlay
     Then evaluate: window.__lastClick
5. Check if there's a form submit event being triggered:
   Evaluate before clicking: document.querySelector('form')?.addEventListener('submit',
   (e) => { window.__formSubmitTriggered = true; window.__formSubmitEvent = e.type }, { once: true })
   Then click the overlay
   Then evaluate: window.__formSubmitTriggered

6. Report: is the form element inside the overlay? Does clicking the overlay fire a submit event?
   What is the actual click target?

This investigation typically reveals one of several common root causes:

  • The overlay <div> wraps the <form>, and the click event bubbles to a submit handler on the form's parent
  • A mousedown on the overlay triggers a blur on the last focused input, which fires a submit handler listening to blur events
  • The modal's "close on outside click" handler uses event.target === overlayElement but a CSS pointer-events misconfiguration causes the form to be the actual click target

Follow up with a code-correlation prompt (for Claude Code or Cursor with file access):

Based on the DOM evidence — the form submit is being triggered when clicking the overlay —
please look at @src/components/Modal.tsx and @src/components/ProjectForm.tsx.

Find:
1. Where is the overlay click handler defined?
2. Is there a submit handler that could be triggered by an overlay click?
3. Is there a useEffect or event listener that submits the form on blur or focus-out?

Point to the specific function or event binding that is causing the premature submission.

Tips
- The getEventListeners() API is only available in Chrome DevTools console — it will not work in puppeteer_evaluate. Use the injection pattern (add an event listener before the action, then read window.__lastEvent) instead.
- For React applications, DOM event listener inspection via Puppeteer is limited because React attaches synthetic events at the root rather than on individual nodes. Ask the agent to locate the fiber property instead — its name has a randomized suffix per page load, so it cannot be accessed by a fixed name: Object.keys(document.querySelector('[data-testid="modal-overlay"]')).find(k => k.startsWith('__reactFiber$'))
- Network request inspection via performance.getEntriesByType('resource') won't show request bodies. If you need to confirm what data was submitted, ask the agent to intercept at the JavaScript level by patching fetch or XMLHttpRequest before the test action.
- Keep the diagnosis prompt focused on a specific hypothesis (event propagation, form state, network call) rather than asking the agent to investigate everything at once — focused prompts produce more actionable findings.
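The fetch-patching approach mentioned in the tips above can be sketched as a single evaluate payload: run it before the test action, then read the captured array afterwards. `window.__capturedRequests` is an arbitrary scratch name for the session, and this only covers fetch (an app using XMLHttpRequest would need the equivalent patch on XHR):

```javascript
// Wrap window.fetch so request bodies are recorded before the real
// request goes out. Inject via puppeteer_evaluate BEFORE the action,
// then read window.__capturedRequests in a later evaluate call.
function patchFetch(win) {
  win.__capturedRequests = [];
  const originalFetch = win.fetch;
  win.fetch = function (url, options = {}) {
    // Record URL, method, and body for later inspection.
    win.__capturedRequests.push({
      url: String(url),
      method: options.method || 'GET',
      body: options.body || null,
    });
    // Forward to the real fetch unchanged.
    return originalFetch.call(win, url, options);
  };
}
```

With this in place, confirming "the form submitted empty data" becomes a matter of reading the recorded `body` field rather than inferring it from timing entries.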

Step 3: Verifying the Fix and Running UI Regression Checks with Puppeteer MCP

After the fix is applied — in this case, the overlay click handler was corrected to call event.stopPropagation() before closing the modal — the verification and regression phase confirms that:

  1. The original bug is resolved
  2. No related functionality was broken by the fix
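A corrected overlay handler along the lines described above might look like this — the function name and `closeModal` callback are hypothetical, but the two safeguards are the point of the fix:

```javascript
// Hypothetical corrected overlay click handler.
// Safeguard 1: only act when the click landed on the overlay itself,
//   not on a child (the form) whose click bubbled up to the overlay.
// Safeguard 2: stop propagation so the click cannot reach any ancestor
//   handlers (such as a submit handler on the form's parent).
function handleOverlayClick(event, closeModal) {
  if (event.target !== event.currentTarget) {
    return; // click originated inside the dialog; ignore it
  }
  event.stopPropagation();
  closeModal();
}
```

The target/currentTarget check alone fixes the "click inside the dialog closes the modal" failure mode; stopPropagation alone fixes the bubbling-to-submit failure mode. Using both makes the handler robust to either DOM structure.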

The fix verification prompt:

I've applied a fix to the modal close behavior. Please verify the fix at
http://localhost:3000/projects (make sure the dev server has the latest build):

Verification 1 - Original bug scenario:
  1. Open the Create Project modal
  2. Leave all fields empty
  3. Click the modal overlay
  4. Screenshot the result
  5. Evaluate: was a network request made to '/api/projects'?
  6. Is the modal closed? (document.querySelector('.modal') === null)
  Expected result: modal closes, no API call made. Pass or fail?

Verification 2 - Modal closes normally via close button:
  1. Open the modal again
  2. Click the X/close button (selector: '[data-testid="modal-close"]' or a button with × text)
  3. Screenshot
  Expected result: modal closes. Pass or fail?

Verification 3 - Form submits correctly when filled in:
  1. Open the modal
  2. Fill in the project name: "Test Project Verification"
  3. Fill in any other required fields
  4. Click the submit/create button
  5. Screenshot the result
  6. Check for success state (modal closes and new project appears in list, or success message)
  Expected result: form submits successfully and project is created. Pass or fail?

Report pass/fail for all 3 verifications with screenshots.

After the fix is verified, run a broader regression check on modal behavior across the application:

The fix involved changes to modal event handling. Please run a regression check on all
modals in the application. Start at http://localhost:3000 and visit the following pages
that have modals:
- /projects (Create Project modal)
- /settings (Delete Account modal, Change Password modal)
- /team (Invite Member modal)

For each modal on each page:
1. Open the modal
2. Screenshot the open state
3. Click the overlay — does the modal close (expected) or trigger unexpected behavior?
4. Re-open the modal and use the close button — does it close correctly?
5. Check for console errors during each interaction

Produce a regression report: for each modal, pass/fail on overlay close and button close,
plus any console errors.

For a final pre-deploy smoke test that covers the changed code paths:

Run a pre-deploy smoke test focused on the modal and form submission flows.
Build is at http://localhost:3000.

Smoke tests:
1. Create a new project via the modal (full happy path)
2. Cancel project creation via overlay click (should close without submitting)
3. Cancel project creation via the close button (should close without submitting)
4. Attempt to submit the Create Project form with an empty name field (should show validation)
5. Invite a team member via the /team modal (full happy path)
6. Close the invite modal via overlay click

Screenshot each test step. Report pass/fail with console error status.
Produce a final summary table: | Test | Status | Console Errors |

This structured output can be pasted directly into the PR description or a QA ticket as evidence that the fix is verified and regressions are checked.

Tips
- Always run the original reproduction scenario first in the verification phase — not just "does the fix seem to work" but the exact sequence that confirmed the bug. This is your definitive pass criterion.
- For regression checks that span multiple pages, structure the prompt with one scenario per bullet point. The agent's output will then have a clear one-to-one correspondence with your regression checklist.
- Ask the agent to produce its regression report in a markdown table format — this is directly pasteable into GitHub PR descriptions, Jira tickets, or Confluence pages.
- After a successful verify-and-regress cycle, save the verification prompt to your project's debug-prompts/ library. The next time a modal bug appears, you have a ready-made regression suite that covers all known modal behaviors.