AI assistants write code that compiles and passes tests — but specific structural patterns cause silent failures, data loss, and outages that only surface under production load.
Why AI-Generated Code Has Predictable Failure Modes
AI models are trained to produce code that looks correct and satisfies the immediate request. They optimize for readability and functional completeness based on the prompt — not for the edge cases your production environment will discover at 2 AM. The result is a consistent set of structural blind spots: error handling that swallows exceptions, async patterns that race under load, resource handles that are never released.
Understanding why these patterns occur is the first step to catching them. The model has no knowledge of your system's failure modes, your upstream service reliability, or the shape of your production traffic. When you ask it to "add a function that fetches user data," it will produce something that works — for the happy path. The unhappy path is where AI-generated code consistently falls short.
The good news is that these patterns are finite and learnable. Once you have cataloged the failures your team has encountered, you can codify them into review checklists, static analysis rules, and targeted follow-up prompts that close the gap before code ships.
Learning tip: Keep a running document called ai-failure-patterns.md in your team wiki. Every time you find a bug introduced by AI-generated code, add the pattern, the symptom it caused, and the fix. After a month, you will have a team-specific checklist that is more valuable than any generic linting rule.
The Seven Most Common AI Code Failure Patterns
1. Missing Error Propagation
AI frequently generates code that catches an exception, logs it, and returns a default value — without propagating the error to the caller. The function appears to succeed from the outside while silently swallowing a real failure.
// What AI often generates
async function getUserProfile(userId: string): Promise<UserProfile> {
  try {
    return await db.users.findById(userId);
  } catch (err) {
    console.error("Failed to fetch user", err);
    return {} as UserProfile; // Caller never knows this failed
  }
}
Production symptom: Downstream code receives an empty object, renders a blank UI, or writes corrupt data to storage. No alert fires because no exception escaped.
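The fix is to re-throw after logging so the caller can react. A minimal corrected sketch, reusing db and UserProfile from the example above (the UserFetchError class is illustrative):

// A corrected sketch: log, wrap, and re-throw instead of returning a default
class UserFetchError extends Error {
  constructor(message: string, public readonly cause: unknown) {
    super(message);
    this.name = "UserFetchError";
  }
}

async function getUserProfile(userId: string): Promise<UserProfile> {
  try {
    return await db.users.findById(userId);
  } catch (err) {
    console.error("Failed to fetch user", { userId, err });
    // Re-throw a typed error so the caller can decide whether to retry or surface it
    throw new UserFetchError(`Failed to fetch user ${userId}`, err);
  }
}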
2. Overly Broad try/catch
The AI wraps entire function bodies in a single try/catch instead of handling specific failure modes at the right level. This prevents you from distinguishing a transient network error (which should retry) from a validation error (which should not).
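The shape to watch for looks something like this; the Order and Receipt types and the helper functions are hypothetical:

// What AI often generates: one catch flattens three distinct failure modes
async function placeOrder(order: Order): Promise<Receipt> {
  try {
    validateOrder(order);            // ValidationError: should never retry
    await reserveInventory(order);   // Network timeout: safe to retry
    return await chargeCard(order);  // Payment decline: needs its own handling
  } catch (err) {
    // Every failure mode collapses into one generic, non-retryable error
    throw new Error("Order failed");
  }
}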
Production symptom: Retry logic never triggers. All errors look the same in logs. MTTR increases because on-call engineers cannot tell whether a failure is retryable.
3. Silent Failures on Chained Null Access
When AI generates code that chains property access without null guards, it creates TypeError: Cannot read properties of undefined bombs that detonate on edge-case data.
// What AI often generates
const city = order.customer.address.city; // Crashes if address is null
Production symptom: A percentage of requests crash silently. In JavaScript/TypeScript, if this is inside an event handler, the crash may be swallowed by the runtime, producing no visible error — just missing data.
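Two sketch fixes, depending on what a missing value means in your domain (the order.id field in the error message is assumed):

// Fix 1: optional chaining, when a missing address is a valid state
const city = order.customer?.address?.city ?? "unknown";

// Fix 2: explicit guard, when a missing address means corrupt data
if (!order.customer?.address) {
  throw new Error(`Order ${order.id} has no customer address`);
}
const verifiedCity = order.customer.address.city;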
4. Incorrect async/await Usage
AI generates async functions that forget to await a Promise, await something that is not a Promise, or fire async operations without awaiting them inside a loop.
// Common AI pattern — missing await inside forEach
items.forEach(async (item) => {
  await processItem(item); // The forEach doesn't await these — they all fire concurrently
});
Production symptom: Race conditions, partial writes, database connection pool exhaustion, and operations that appear to succeed but were never completed.
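A sketch of both intended behaviors, using items and processItem from the example above:

// Sequential: each item finishes before the next starts
for (const item of items) {
  await processItem(item);
}

// Concurrent on purpose: start everything, then wait for all to finish
await Promise.all(items.map((item) => processItem(item)));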
5. Off-by-One in Pagination
AI-generated pagination logic frequently miscalculates offsets, producing duplicate records on page boundaries, skipped records, or a dropped final page, depending on how the total count falls relative to the page size.
// Common off-by-one
const offset = page * pageSize; // Should be (page - 1) * pageSize for 1-indexed pages
Production symptom: Customer support tickets about "missing orders" or duplicate rows in exports. Intermittent — only reproducible on datasets whose size falls near a page boundary.
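A corrected sketch, assuming the API exposes 1-indexed pages and a totalCount variable is in scope:

// Correct offset for 1-indexed pages
const offset = (page - 1) * pageSize;
// Math.ceil, not Math.floor, or the final partial page is dropped
const totalPages = Math.ceil(totalCount / pageSize);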
6. Resource Leaks
AI-generated code opens database connections, file handles, or HTTP streams but closes them only inside the success branch. Errors exit the function before the cleanup code runs.
// What AI often generates
async function exportData(query: Query): Promise<Buffer> {
  const conn = await db.getConnection();
  const stream = await conn.query(query);
  const result = await streamToBuffer(stream);
  conn.release(); // Never called if streamToBuffer throws
  return result;
}
Production symptom: Connection pool exhaustion after a spike in errors. The service degrades progressively over hours, not instantly.
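A corrected sketch of the same function: the release moves into a finally block so every exit path cleans up:

// Corrected: release in finally so errors cannot leak the connection
async function exportData(query: Query): Promise<Buffer> {
  const conn = await db.getConnection();
  try {
    const stream = await conn.query(query);
    return await streamToBuffer(stream);
  } finally {
    conn.release(); // Runs whether the try block returns or throws
  }
}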
7. Missing Retry on Idempotent Operations
AI generates single-attempt calls to external services without retry logic, exponential backoff, or circuit breakers. Every transient failure becomes a user-visible error.
Production symptom: Error rate spikes that correlate with upstream service blips. P99 latency increases during periods of infrastructure noise.
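A minimal retry sketch with exponential backoff; the helper name and defaults are illustrative, and production code should also distinguish retryable from permanent errors and add jitter:

// Retry an idempotent operation with exponential backoff
async function withRetry<T>(fn: () => Promise<T>, attempts = 3, baseDelayMs = 200): Promise<T> {
  let lastErr: unknown;
  for (let attempt = 0; attempt < attempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastErr = err;
      if (attempt < attempts - 1) {
        // Backoff doubles each attempt: 200ms, 400ms, 800ms, ...
        await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** attempt));
      }
    }
  }
  throw lastErr;
}

// Usage: const user = await withRetry(() => fetchUserFromService(userId));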
Learning tip: For each pattern above, write a minimal failing test that demonstrates the bug. Having the test helps you verify that the AI's "fix" actually addresses the root cause and not just the symptom.
Using Static Analysis to Catch Patterns Early
Most of these patterns have linting rules or can be detected with existing tools. Setting them up as part of your CI pipeline means the AI's output is checked automatically before it reaches review.
TypeScript / JavaScript:
- @typescript-eslint/no-floating-promises — catches unawaited Promises
- no-unsafe-optional-chaining — flags chained access on potentially null values
- unicorn/no-array-for-each — encourages for...of loops where async behavior is more predictable
- SonarQube's resource leak detector covers open handles in many cases
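A sketch of a classic .eslintrc.json enabling the first three rules above; note that no-floating-promises needs type information, so parserOptions.project must point at your tsconfig (newer ESLint versions use a flat config file instead):

{
  "parser": "@typescript-eslint/parser",
  "parserOptions": { "project": "./tsconfig.json" },
  "plugins": ["@typescript-eslint", "unicorn"],
  "rules": {
    "@typescript-eslint/no-floating-promises": "error",
    "no-unsafe-optional-chaining": "error",
    "unicorn/no-array-for-each": "error"
  }
}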
Python:
- pylint with broad-exception-caught — flags except Exception blocks
- mypy with strict optional checking — catches unguarded access on Optional values
- pylint's consider-using-with — flags resources opened without a with block
Go:
- errcheck — fails the build if any error return is ignored
- staticcheck — covers a wide range of resource and async patterns
Create a pre-commit hook or CI step that runs these checks on all AI-generated code before it is merged. Treat a linting failure as a signal to go back to the AI with a targeted correction prompt rather than a manual fix — you want the AI to learn the shape of your codebase's expectations.
Learning tip: Add a comment like // AI-generated to functions produced by an AI assistant. This makes it easy to audit which functions are candidates for pattern review, and it creates accountability in your team's review culture.
Hands-On: Diagnosing and Fixing AI Failure Patterns
Work through the following steps on a recent piece of AI-generated code in your codebase, or use the examples in the previous section as your subject.
Step 1: Collect the subject code
Copy the AI-generated function you want to audit into a scratch file. Include any types or interfaces it depends on.
Step 2: Run the pattern audit prompt
Review the following TypeScript function for these specific failure patterns:
1. Missing error propagation (catching exceptions and returning defaults without informing the caller)
2. Overly broad try/catch that swallows distinct error types
3. Missing null guards on chained property access
4. Incorrect async/await usage (missing awaits, forEach with async callbacks, fire-and-forget)
5. Resource leaks (connections, file handles, streams not closed on error paths)
6. Missing retry on idempotent external calls
For each pattern found, quote the exact lines, explain the production risk, and provide a corrected version.
[PASTE YOUR FUNCTION HERE]
Expected output: A numbered list where each item quotes the problematic lines, describes the failure mode, and shows a corrected snippet. If the AI says "no issues found," run it again with a more specific version of the function — this prompt works best when the code is genuinely suspect.
Step 3: Address the error propagation issues first
Error propagation failures are the most dangerous because they make every other debugging task harder. Use this targeted prompt:
This function catches exceptions and returns a default value. Rewrite it so that:
1. Transient errors (network timeouts, DB connection failures) are wrapped in a custom RetryableError and re-thrown.
2. Permanent errors (validation failures, not-found) are wrapped in a DomainError with a human-readable message and re-thrown.
3. The caller always knows whether the function succeeded or failed.
4. Add a log line before re-throwing that includes the original error, the function name, and any relevant input identifiers (not PII).
[PASTE THE FUNCTION]
Expected output: A revised function with typed error classes, structured logging, and no swallowed exceptions.
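The error classes the prompt names might look like this; a minimal sketch, with field names that are illustrative:

// Sketch of the typed error classes the prompt asks for
class RetryableError extends Error {
  constructor(message: string, public readonly cause: unknown) {
    super(message);
    this.name = "RetryableError";
  }
}

class DomainError extends Error {
  constructor(message: string, public readonly cause?: unknown) {
    super(message);
    this.name = "DomainError";
  }
}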
Step 4: Fix async/await issues
Identify every place in this function where async operations may not be awaited correctly, including:
- forEach or map callbacks with async functions
- Promise chains that are returned but not awaited by the caller
- Operations inside a loop that should run sequentially but may run concurrently
For each issue, explain the race condition or ordering problem it can cause, then rewrite the affected section using for...of loops or Promise.all where appropriate.
[PASTE THE FUNCTION]
Expected output: Rewritten loop structures with explicit sequencing or parallelism that matches the intended behavior.
Step 5: Add null safety
Add null and undefined guards to every chained property access in this code. Use optional chaining (?.) where a missing value is a valid state and should return undefined. Throw a descriptive error where a missing value indicates a programming error or corrupt data. Do not add guards that hide real bugs.
[PASTE THE FUNCTION]
Step 6: Fix resource leaks with try/finally
This function acquires a resource (database connection / file handle / HTTP stream) but only releases it on the success path. Rewrite it to use try/finally so that the resource is always released, even if an error is thrown. If the language supports it, use a using statement or equivalent RAII pattern instead.
[PASTE THE FUNCTION]
Step 7: Build your team checklist
After completing steps 2–6, document what you found in a shared checklist:
Based on the bugs I just found in this AI-generated function, write a 7-item code review checklist specifically for TypeScript async service functions. Each item should be one sentence and describe something a reviewer should actively look for. Format it as a Markdown checklist.
Expected output: A Markdown checklist your team can paste into pull request templates or code review guidelines.
Step 8: Validate with a regression test
Write a Jest unit test for the rewritten version of this function that:
1. Verifies that transient errors are thrown as RetryableError, not swallowed
2. Verifies that the resource (mock it) is always released, even when an error occurs
3. Verifies that the async operations complete in the correct order
Use Jest mocks and async/await throughout. Do not use done() callbacks.
Expected output: A complete test file with three describe blocks, one per behavior under test.
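As a reference point, the resource-release check (item 2) might look like this; the module paths and the empty query argument are hypothetical:

// Sketch: verify the connection is released even when the query throws
import { exportData } from "./exportData"; // hypothetical module under test
import { db } from "./db";

jest.mock("./db");

describe("exportData resource handling", () => {
  it("releases the connection when the query throws", async () => {
    const release = jest.fn();
    (db.getConnection as jest.Mock).mockResolvedValue({
      query: jest.fn().mockRejectedValue(new Error("boom")),
      release,
    });

    await expect(exportData({} as any)).rejects.toThrow("boom");
    expect(release).toHaveBeenCalledTimes(1);
  });
});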
Learning tip: Run your static analysis tools on the AI's corrected output, not just the original. AI models sometimes introduce new linting violations while fixing existing ones. Treat each round of AI correction as new code that needs a full pass.
Key Takeaways
- AI-generated code has predictable structural failure modes: missing error propagation, overly broad catches, null chain explosions, async ordering bugs, resource leaks, and missing retries. Knowing the catalog means you know where to look.
- Static analysis tools catch most of these patterns automatically. Integrate them into CI so every piece of AI-generated code is checked before review.
- Targeted correction prompts are more effective than asking the AI to "fix the bugs." Tell it exactly which pattern to address, what the corrected behavior should be, and what invariants to preserve.
- A team failure-pattern checklist built from real incidents is more valuable than generic guidance. Maintain it as a living document.
- Always validate AI corrections with a regression test. The fix might address the reported symptom while introducing a different structural problem.