What Is an AI Agent? LLMs, Tool Use, and Autonomous Loops Explained
An AI agent is a software system that uses a large language model as its reasoning core, equips that model with tools it can invoke, and runs it in a feedback loop — letting it observe results and decide what to do next, repeatedly, until a task is complete.
To use agents effectively, you need to understand the three layers that make them work.
Layer 1: The LLM Core
Large language models (Claude, Gemini, GPT-4) are text-in, text-out systems trained on massive corpora. At their most fundamental level, they predict the most probable next token given the tokens before it. That sounds simple, but at scale it produces something that functions like reasoning: the ability to follow complex instructions, synthesize information from long documents, write and explain code, and generate structured outputs like JSON or markdown.
For agents, the key LLM capability is tool-calling awareness — the model has been trained to recognize when it needs information it doesn't have, and to emit a structured function call rather than hallucinating an answer.
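What that structured call looks like varies by provider, but the shape is always the same idea: a tool name plus typed arguments rather than free-form text. A minimal illustrative sketch (not any vendor's exact wire format):

```python
# Illustrative shape of a tool call an LLM might emit. Real providers
# (Anthropic, OpenAI, Google) each define their own schema for this.
tool_call = {
    "name": "run_command",
    "arguments": {"cmd": "pytest tests/api/test_login.py -q"},
}
```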
Layer 2: Tool Use
A bare LLM only produces text. An agent gains the ability to act by being given a set of tools — defined functions with names, descriptions, and typed parameters. The LLM reads those descriptions and decides which tool to call, with what arguments, based on what the task requires at that moment.
Common tools in QA agentic systems:
| Tool | What it does for QA |
|---|---|
| `read_file(path)` | Read source code, test files, configs, OpenAPI specs |
| `run_command(cmd)` | Execute test suites, linters, build steps |
| `search_codebase(query)` | Find relevant test files, page objects, fixtures |
| `http_request(url, method, body)` | Call APIs directly to validate behavior |
| `write_file(path, content)` | Save generated test cases, test plans, bug reports |
| `list_directory(path)` | Navigate project structure to understand test organization |
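To make "defined functions with names, descriptions, and typed parameters" concrete, here is a minimal sketch of one tool plus the description an agent runtime might advertise to the model. The JSON-schema style shown is a common convention, not the required format of any specific runtime:

```python
from pathlib import Path

def read_file(path: str) -> str:
    """Read a text file from the repository and return its contents."""
    return Path(path).read_text(encoding="utf-8")

# What the runtime exposes to the LLM so it knows when and how to call
# the tool: a name, a description, and typed parameters.
READ_FILE_SPEC = {
    "name": "read_file",
    "description": "Read source code, test files, configs, or OpenAPI specs.",
    "parameters": {
        "type": "object",
        "properties": {
            "path": {"type": "string", "description": "Repo-relative file path"},
        },
        "required": ["path"],
    },
}
```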
The LLM doesn't execute these tools itself — it decides to call them and emits a structured call. Your agent runtime intercepts that call, executes it, and feeds the result back into the model's context. Claude Code and Gemini CLI handle this loop for you automatically.
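A rough sketch of that intercept-execute-feed-back step, assuming a hypothetical `model.generate` interface and the `read_file` tool defined above (Claude Code and Gemini CLI implement this internally, so you never write it yourself):

```python
TOOLS = {"read_file": read_file}

def handle_turn(model, messages):
    """Run one model turn: dispatch a requested tool call, or accept plain text."""
    response = model.generate(messages)  # hypothetical LLM API call
    if response.get("tool_call"):
        call = response["tool_call"]
        result = TOOLS[call["name"]](**call["arguments"])  # runtime executes the tool
        messages.append({"role": "tool", "name": call["name"], "content": str(result)})
        return messages, False  # not finished: the model must see the result next turn
    messages.append({"role": "assistant", "content": response["text"]})
    return messages, True  # plain text answer: this turn concludes the task
```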
Layer 3: The Autonomous Loop
The loop is what makes an agent fundamentally different from a chat response. An agent operates in a cycle:
1. PERCEIVE — receive the task + any initial context provided
2. REASON — decide what step to take next toward the goal
3. ACT — call a tool or generate intermediate output
4. OBSERVE — receive the tool result, add it to context
5. REPEAT — go back to step 2 until the task is done or a stop condition is met
This is the ReAct pattern (Reason + Act). Each loop iteration grows the agent's working context with observed results, letting it build knowledge and correct course.
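Expressed as code, the loop is just the single-turn step sketched earlier, repeated until the model signals completion or a step cap is hit. A hedged sketch reusing the hypothetical `handle_turn` helper:

```python
def run_agent(model, task, max_steps=20):
    """PERCEIVE the task, then REASON/ACT/OBSERVE until done or capped."""
    messages = [{"role": "user", "content": task}]      # PERCEIVE
    for _ in range(max_steps):                          # REPEAT
        messages, done = handle_turn(model, messages)   # REASON + ACT + OBSERVE
        if done:
            return messages[-1]["content"]              # final answer or report
    raise RuntimeError("Agent hit the step limit without finishing the task")
```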
Concrete QA example: You give an agent the task "Review this PR and generate an integration test plan."
- Loop 1: Agent calls `read_file` to read the PR diff
- Loop 2: Agent identifies two changed API endpoints, calls `search_codebase` to find their OpenAPI spec
- Loop 3: Agent calls `read_file` to read existing integration tests and understand the project's test style
- Loop 4: Agent calls `write_file` to save a markdown test plan
- Loop 5: Agent reports completion with a summary
From your perspective, you see one output. Inside, the agent ran three rounds of information gathering before writing the plan and reporting back.
Learning Tip: The loop is also why agents occasionally get stuck or loop too long. If you give an agent an ambiguous task and no stopping condition, it may keep searching for context it can never fully satisfy. Always frame tasks with clear completion criteria: "Generate a test plan and stop" beats "Help me test this feature."
How Are AI Agents Different from Copilots and Autocomplete Tools?
Before agents, "AI for QA" meant two things:
Autocomplete tools (Tabnine, older GitHub Copilot, IntelliSense) suggest the next line or block as you type. They're reactive, stateless, and contextually limited to what's visible in the current editor pane. You type `expect(response.status).toBe(` and it suggests `200`. Fast, frictionless, and useful — but you remain the one doing the work.
Copilots (GitHub Copilot Chat, Cursor, Claude.ai in chat mode) answer natural language questions. You can ask "what edge cases am I missing in this test?" and get a thoughtful answer. But the output is text you must manually evaluate and apply. Each message is largely independent — the copilot isn't tracking the overall task state across turns. It cannot run your tests or update your files.
Agents take on the task. Given "generate E2E tests for the user login flow," an agent will read your codebase, find existing patterns, generate the test file, write it to disk, run it, observe the failure, fix the assertion, and report what it did. You weren't needed for any of those individual steps.
| Dimension | Autocomplete | Copilot | Agent |
|---|---|---|---|
| Who initiates? | You (by typing) | You (by asking) | You (by assigning) |
| Takes file actions? | No | No | Yes |
| Runs commands? | No | No | Yes |
| Context scope | Cursor position | Single conversation | Entire repo + runtime |
| Number of steps | 1 | 1 | Many (autonomous) |
| Blocks your work? | No | No | No (runs independently) |
| Verifies its own output? | No | No | Can (runs tests, reads results) |
The practical implication for QA: A copilot helps you write one test case faster. An agent can execute the test case generation workflow for an entire feature — analyze requirements, generate scenarios, write scripts, run them, fix failures — while you focus on something else.
The shift is from AI as typing assistant to AI as autonomous workflow executor.
Learning Tip: Get in the habit of asking: "Is this tool taking actions on my behalf, or producing text I must apply?" If the answer is the latter, you have a copilot. Copilots are valuable for reasoning and drafting. Agents are valuable for execution and iteration. Knowing which you're using prevents you from expecting agent behavior from a copilot — and vice versa.
Why Do AI Agents Matter for QA Engineers Specifically?
QA sits at the intersection of every problem agents are built to solve: large volumes of repetitive analytical work, complex multi-source context synthesis, and tasks where the bottleneck is human bandwidth, not human intelligence.
The testing bottleneck is structural
In most agile teams, QA is the last gate before release. Features are built in sprints of one to two weeks, but test case generation, execution, and coverage verification happen at the end of that cycle. As teams grow, the number of features-in-flight scales faster than QA bandwidth. Agents are the first technology that lets QA work in parallel — running a test generation job for next sprint's stories while you're executing exploratory sessions on the current sprint.
QA work is context-heavy and repetitive
Good QA requires synthesizing context from multiple sources — requirements docs, acceptance criteria, UI specs, API schemas, existing test coverage, recent code changes, historical defect data — and then performing structurally similar analysis repeatedly across features. This is precisely the category of work LLMs are strongest at. An agent can hold your entire test suite, the PR diff, and the relevant spec doc in context simultaneously and produce a gap analysis that would take a human hours.
Test suites drift and degrade
Most production test suites have significant coverage debt: tests that no longer accurately reflect the feature they claim to test because the feature changed and the tests weren't updated. No team has the bandwidth to audit thousands of tests against current requirements. Agents can systematically analyze the delta between what tests assert and what the current code does.
Human-written coverage is biased toward happy paths
When QA engineers write tests under time pressure, they naturally over-cover happy paths and under-cover error states, boundary conditions, and negative paths. An agent given systematic test design methodologies (boundary value analysis, equivalence partitioning, SFDPOT heuristics) will generate more comprehensive coverage without cognitive bias.
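For example, an agent applying boundary value analysis to a hypothetical password field that accepts 8–64 characters would cover both edges and their neighbors rather than a single mid-range value. A pytest sketch, with `validate_password` as a stand-in for the real system under test:

```python
import pytest

def validate_password(pw: str) -> bool:
    """Stub for the system under test; assume passwords must be 8-64 characters."""
    return 8 <= len(pw) <= 64

# Boundary value analysis: exercise each edge of the valid range and its
# neighbors, not just one happy-path value in the middle.
@pytest.mark.parametrize(
    "length, should_pass",
    [
        (7, False),   # just below lower bound
        (8, True),    # lower bound
        (9, True),    # just above lower bound
        (63, True),   # just below upper bound
        (64, True),   # upper bound
        (65, False),  # just above upper bound
    ],
)
def test_password_length_boundaries(length, should_pass):
    assert validate_password("a" * length) is should_pass
```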
What this means in practice:
| QA task | Typical time without agents | With agents |
|---|---|---|
| Test case generation for a user story | 2–3 hours | 20-min AI run + 30-min review |
| Regression scope analysis for a PR | 1–2 hours mental model work | 15-min automated diff + risk matrix |
| Bug report from a failed CI run | 30–45 min investigation + write-up | AI trace analysis + draft report in 10 min |
| Test suite coverage audit | Multi-day manual review | AI analysis in one prompt session |
| Update test cases after a feature change | 1–3 hours per affected test | AI identifies affected tests + generates update diffs |
Learning Tip: Reframe what "getting value from agents" means. Don't measure it as "how much faster did I write one test case." Measure it as "how much more test coverage did my team ship this sprint." Agents change the volume of work you can take on — that's the multiplier that matters.
What Should QA Engineers Realistically Expect from AI Agents Today?
Setting accurate expectations is as critical as understanding the technology. Here is an honest picture of the current state.
What agents do well today
Reading and synthesizing large codebases: An agent can scan hundreds of files and identify what's relevant to a test task faster and more reliably than a human.
Generating test case structure from clear requirements: Given a well-written user story with acceptance criteria, agents produce solid first drafts of test scenarios — often catching edge cases human writers miss.
Writing boilerplate E2E and API test scripts: For well-documented frameworks (Playwright, Pytest, Jest, REST-assured), agents generate syntactically correct, runnable scaffolding. The generated code typically passes basic tests but may need assertion tuning.
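As an illustration, the scaffold an agent typically produces for an API check looks like the sketch below. The endpoint, payload, and response fields are placeholders for your system's real contract, and the final assertion is where the tuning usually happens:

```python
import requests

BASE_URL = "https://staging.example.com"  # placeholder environment

def test_login_returns_token_for_valid_credentials():
    resp = requests.post(
        f"{BASE_URL}/api/v1/login",  # hypothetical endpoint
        json={"email": "qa@example.com", "password": "correct-password"},
        timeout=10,
    )
    assert resp.status_code == 200
    body = resp.json()
    assert "token" in body  # assertion usually needs tuning to the real contract
```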
Drafting structured bug reports: Given logs, traces, and a test failure output, agents produce well-structured bug reports with reproduction steps, environment details, and root cause hypotheses — ready for human review and publication.
Identifying obvious coverage gaps: Against a clear feature spec, agents reliably flag missing negative paths, unhandled error states, and untested boundary conditions.
What agents still struggle with today
Business domain knowledge: Agents don't know that `PENDING_APPROVAL` is a valid intermediate payment state in your system, or that `user_type=3` represents a corporate account in your data model. Without explicit domain context, agents will hallucinate plausible-sounding but incorrect assertions. Providing domain context is your job.
Dynamic and stateful UI flows: Agents can write Playwright test code, but if your application has complex multi-step flows with session state, conditional UI, or frequent selector changes, the generated tests need meaningful correction before they'll run reliably.
Cold generation without context: "Write tests for the checkout flow" with no code, no spec, and no existing test examples produces plausible but shallow output. The agent needs context to produce quality output — providing that context is a skill you'll develop through this course.
Reliable flaky test diagnosis without data: Agents can speculate about why a test is flaky if given logs and history. Without CI data and reproduction evidence, the analysis is mostly pattern-matching guesses.
Knowing when to stop and when to ask: Without clear stopping criteria, agents may over-generate — producing 80 test cases when 15 focused, well-scoped ones would serve the sprint better.
Your role shifts; it doesn't shrink
The senior QA engineer's role with agents shifts from author to reviewer and decision-maker. You become the person who:
- Provides the context that makes AI output accurate (domain knowledge, project conventions, risk framing)
- Reviews AI output with expert judgment (not rubber-stamping)
- Decides what to approve, reject, or refine
- Escalates what requires human judgment (novel failure modes, business rule interpretation)
- Improves the prompts and context setups that drive quality across the team
This is a better use of senior QA expertise. It's also only valuable if you actually review AI output critically — teams that auto-approve AI-generated test cases without review create the illusion of coverage without the substance.
A realistic productivity frame: Expect AI agents to make you 2–4× more productive on test generation and bug analysis tasks within four to six weeks of active use. The first week, output quality will be poor as you learn how to provide effective context. By week four, you'll have a personal context library and prompt patterns that consistently produce high-quality output.
Do not promise your team "AI will write all our tests." Do promise: "AI will help us cover more ground with the same team."
Learning Tip: For your first 30 days, keep a short prompt log: every time you get notably bad or notably good AI output, note what context you gave and what was different. This log is the foundation of your personal prompt library — and it's the fastest path to consistent, high-quality agent output.