
Hands-On: Building a Project Context File

A well-structured project context file is the single highest-leverage artifact you can create to make AI tools produce accurate, idiomatic code in your specific codebase. This chapter walks you through building one from scratch.

Why AI Gets Your Codebase Wrong (Without Context)

Out of the box, an AI coding assistant knows general programming patterns. It does not know that your team uses a custom result type instead of throwing exceptions, that your mobile app splits networking into a DataLayer versus a DomainLayer, or that you have a strict rule against using "any" in TypeScript. Without this knowledge, the AI produces code that compiles but does not fit — you spend more time editing suggestions than you would have spent writing from scratch.

The solution is a project context file. Tools like Claude Code look for a CLAUDE.md file at the repository root (or nested for monorepos). GitHub Copilot reads .github/copilot-instructions.md. Cursor reads .cursorrules. Regardless of the tool, the principle is identical: a persistent, version-controlled document that tells the AI what it needs to know about your codebase before it writes a single line.

Context files work because AI assistants process the file at the start of every session or on every request. It is essentially a standing system prompt written by engineers for engineers. The investment is roughly one to two hours to write it initially, and a few minutes per sprint to keep it updated. The return is that every AI interaction from that point forward starts from an accurate mental model of your codebase.

A common mistake is treating the context file as documentation for humans. It is not. You are writing instructions for a code generator. Be explicit about what the AI should and should not do. State rules that would be obvious to any senior engineer on your team but are invisible to an outsider — that is exactly the gap you are filling.

Learning tip: Before writing a single line of your context file, spend five minutes with your last ten pull request review comments. Those comments — "use the repository pattern here", "don't import directly from the store", "we format errors like this" — are exactly the rules your context file should encode.

Anatomy of a Production-Ready Context File

A useful context file covers six areas. Each area answers a question the AI would otherwise have to guess at.

Architecture overview answers: What is the high-level structure? An AI that knows your app uses a hexagonal architecture will not suggest putting business logic in a controller. One that knows you use a monorepo with shared packages will not duplicate utilities that already exist.

Tech stack and conventions answers: What libraries and patterns are canonical? This is where you list your state management library, your HTTP client, your ORM, your test framework, your formatter, and any non-default configuration for each. If you use React Query but not Redux, say so — otherwise the AI may suggest Redux.

Key file map answers: Where do things live? A brief map of your directory structure prevents the AI from generating files in the wrong place or importing from paths that do not exist.

Testing approach answers: How should tests be written? Cover the test runner, assertion style, mocking strategy, and naming conventions. If you have a helper renderWithProviders that every component test should use, name it here.

Gotchas answers: What mistakes does everyone make in this codebase? This is the most valuable section and the one most often skipped. It is a list of the anti-patterns, footguns, and implicit rules that trip up new engineers and AI alike.

Security rules answers: What must never appear in generated code? Hard constraints on things like logging sensitive fields, using eval, skipping input validation, or committing secrets.

Learning tip: Treat the gotchas section as a living document. Every time a code review catches an AI-generated mistake, add it to gotchas. The file gets smarter with every sprint.

Hands-On: Building a Complete CLAUDE.md for an Example Codebase

The example codebase is a TypeScript monorepo containing a Node.js REST API (packages/api) and a React web app (packages/web). It uses Prisma for the database, React Query for data fetching, Zod for validation, and Vitest for tests. Work through the following steps to build the full context file.

Step 1: Generate a first draft through an AI interview

Start by asking the AI to interview you about your codebase structure rather than trying to write the file from scratch.

I am building a CLAUDE.md context file for my codebase. Ask me up to 10 questions about the project architecture, tech stack, conventions, and known gotchas so you can help me draft it. Ask one question at a time and wait for my answer before asking the next.

This surfaces the things you know but would not think to write down unprompted. Work through the interview, then ask for a draft:

Based on my answers, write a CLAUDE.md file with these sections: Architecture Overview, Tech Stack and Conventions, Key File Map, Testing Approach, Gotchas, and Security Rules. Use markdown headers. Be specific and imperative — write rules, not descriptions.

Expected output: A 200–400 line draft file that you will refine in the next steps. It will be approximately 60–70% correct at this point.

Step 2: Write the architecture section

Edit the architecture section to be precise about boundaries. For the example codebase:

Rewrite the Architecture Overview section of my CLAUDE.md to reflect this structure: the API uses a layered architecture — HTTP handlers in src/routes, business logic in src/services, database access only in src/repositories. Services must never import from routes. Repositories must never contain business logic. The web app uses a feature-folder structure under src/features/, with each feature owning its components, hooks, and API calls. Shared UI components live in src/components/ui/.

Expected output: A concise architecture section with explicit rules about import directions and where logic belongs — the AI will now refuse to put business logic in a handler when generating code.
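To make the import-direction rules concrete, here is a minimal sketch of the layered flow those rules enforce. All names (findUserById, activateUser, the in-memory Map standing in for Prisma) are hypothetical illustrations, not the actual codebase.

```typescript
// src/repositories/userRepository.ts — data access only, no business logic.
// An in-memory Map stands in for the database so the sketch is self-contained.
export type User = { id: string; email: string; active: boolean };

const db = new Map<string, User>([
  ["u1", { id: "u1", email: "a@example.com", active: false }],
]);

export async function findUserById(id: string): Promise<User | undefined> {
  return db.get(id);
}

export async function saveUser(user: User): Promise<User> {
  db.set(user.id, user);
  return user;
}

// src/services/userService.ts — business logic; imports repositories, never routes.
export async function activateUser(id: string): Promise<User> {
  const user = await findUserById(id);
  if (!user) throw new Error(`User ${id} not found`);
  return saveUser({ ...user, active: true });
}

// src/routes/users.ts — a thin handler that only delegates to the service.
// (Express types omitted to keep the sketch runnable standalone.)
export async function activateUserHandler(params: { id: string }) {
  const user = await activateUser(params.id);
  return { status: 200, body: user };
}
```

The point the context file encodes is the direction of the arrows: routes call services, services call repositories, and never the reverse.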

Step 3: Lock in your tech stack and conventions

Add a Tech Stack and Conventions section to my CLAUDE.md with these specifics:
- Node.js 20, TypeScript 5.3, strict mode enabled
- Express 4 for routing — no other HTTP frameworks
- Prisma 5 as the ORM — never write raw SQL
- Zod for all input validation — define schemas in src/schemas/, not inline
- React 18, React Query v5 for all server state — no useEffect for data fetching
- Zustand for client-only UI state
- Tailwind CSS — utility classes only, no custom CSS files
- ESLint + Prettier — run "pnpm lint" before committing
- All async functions must be explicitly typed — no implicit any return types

Expected output: A section that reads like a rulebook. The AI will now default to React Query instead of useEffect, define Zod schemas in the right directory, and use Prisma instead of raw queries.

Step 4: Build the key file map

Paste a directory tree into the prompt:

Here is the output of "tree -L 3 --gitignore" for my project:

packages/
  api/
    src/
      routes/
      services/
      repositories/
      schemas/
      middleware/
      utils/
  web/
    src/
      features/
        auth/
        dashboard/
        settings/
      components/
        ui/
      hooks/
      lib/

Write a Key File Map section for my CLAUDE.md that explains what each directory is for and, where important, what does NOT belong there.

Expected output: A map section that prevents the AI from inventing directories or misplacing files. It will know that a new API endpoint needs a file in routes/, a schema in schemas/, and may need a new file in services/ and repositories/.

Step 5: Define your testing approach

Write a Testing Approach section for my CLAUDE.md:
- Test runner: Vitest
- Unit tests co-located with source files as *.test.ts
- Integration tests in packages/api/tests/integration/
- Use "describe" blocks named after the module under test
- Mock external services with vi.mock() at the top of the test file
- Use supertest for API route tests
- React component tests use @testing-library/react
- All component tests use the custom "renderWithProviders" helper from src/test-utils.tsx — never render components without it
- Test file naming: [feature].test.ts for units, [feature].integration.test.ts for integration

Expected output: The AI will now generate tests with the correct structure, use the right assertion patterns, and include renderWithProviders automatically in component tests.
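The shape these rules produce looks roughly like the sketch below: a co-located *.test.ts file whose describe block is named after the module under test. The module under test is inlined as a stub, and a three-line shim stands in for Vitest's describe/it/expect so the sketch runs standalone; in the real repo you would import those from "vitest" instead.

```typescript
// Minimal stand-ins for Vitest's API, only so this sketch is self-contained.
const queued: Array<() => void> = [];
const describe = (_name: string, body: () => void) => body();
const it = (_name: string, body: () => void) => queued.push(body);
const expect = (actual: unknown) => ({
  toEqual(expected: unknown) {
    if (JSON.stringify(actual) !== JSON.stringify(expected)) {
      throw new Error("assertion failed");
    }
  },
});

// Stub of src/services/settingsService.ts, inlined for the sketch.
type Settings = { emailEnabled: boolean; smsEnabled: boolean };
function mergeSettings(current: Settings, patch: Partial<Settings>): Settings {
  return { ...current, ...patch };
}

// settingsService.test.ts — co-located with the source file, describe block
// named after the module under test.
describe("settingsService", () => {
  it("applies a partial settings patch", () => {
    expect(mergeSettings({ emailEnabled: false, smsEnabled: false }, { emailEnabled: true }))
      .toEqual({ emailEnabled: true, smsEnabled: false });
  });
});

queued.forEach((t) => t());
```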

Step 6: Write the gotchas section — the most important section

This section has the highest return on investment. Populate it with your actual pain points:

Write a Gotchas section for my CLAUDE.md covering these known issues in our codebase:
1. Prisma client must be imported from src/lib/prisma.ts (a singleton), never instantiated directly
2. All API errors must use our AppError class from src/utils/errors.ts — never throw plain Error objects
3. React Query mutation callbacks (onSuccess, onError) run in a stale closure — use queryClient.invalidateQueries instead of reading state inside the callback
4. Zustand store slices must use the "immer" middleware — state mutations without immer will silently fail in production builds
5. Never use process.env directly in application code — always use the config object from src/lib/config.ts
6. Date handling: always use dayjs, never the native Date constructor for formatting — timezones will break
7. The "user" object on Express requests is typed via a declaration merge in src/types/express.d.ts — add new fields there, not with type assertions

Expected output: A gotchas section that reads like a cheat sheet for a new hire. AI tools treat these as hard rules and will actively avoid the listed anti-patterns.
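Two of these gotchas are worth seeing in code. Below is a hedged sketch of the singleton-import pattern and an AppError class of the shape the rules describe — the real implementations may differ, and PrismaClientStub merely stands in for @prisma/client so the sketch runs standalone.

```typescript
// src/lib/prisma.ts — the singleton. Application code imports `prisma` from
// here and never calls `new PrismaClient()` itself.
class PrismaClientStub {} // stand-in for @prisma/client's PrismaClient
export const prisma = new PrismaClientStub();

// src/utils/errors.ts — every thrown error goes through this class, never a
// plain Error, so middleware can map statusCode/code to HTTP responses.
export class AppError extends Error {
  constructor(
    message: string,
    public readonly statusCode: number,
    public readonly code: string,
  ) {
    super(message);
    this.name = "AppError";
  }
}

// Typical usage in a service: a hypothetical lookup helper.
export function requireFound<T>(value: T | undefined, what: string): T {
  if (value === undefined) {
    throw new AppError(`${what} not found`, 404, "NOT_FOUND");
  }
  return value;
}
```

Encoding these in the gotchas section means the AI reaches for AppError and the shared prisma export by default instead of reinventing either.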

Step 7: Add security rules

Add a Security Rules section to my CLAUDE.md with these non-negotiables:
- Never log request bodies, authentication tokens, passwords, or PII — use our sanitizeForLog() utility
- Never use eval() or Function() constructors
- All user-supplied strings must be validated with Zod before use in any query or file path
- SQL string interpolation is forbidden — Prisma parameterizes queries automatically, use it
- Never commit .env files or hardcode secrets — use environment variables via src/lib/config.ts
- Authentication middleware (requireAuth) must be applied to all routes except those in the public whitelist in src/routes/public.ts

Expected output: A security section the AI treats as inviolable constraints. It will apply requireAuth, use sanitizeForLog, and never suggest eval.
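As an illustration of the first rule, here is a minimal sketch of what a sanitizeForLog() utility might look like — the actual implementation in the codebase, and its list of redacted keys, are assumptions for the example.

```typescript
// Keys whose values must never reach a log line. Illustrative list only.
const REDACTED_KEYS = new Set(["password", "token", "authorization", "ssn"]);

// Recursively replaces sensitive values with a placeholder before logging.
export function sanitizeForLog(value: unknown): unknown {
  if (Array.isArray(value)) return value.map(sanitizeForLog);
  if (value !== null && typeof value === "object") {
    const out: Record<string, unknown> = {};
    for (const [k, v] of Object.entries(value as Record<string, unknown>)) {
      out[k] = REDACTED_KEYS.has(k.toLowerCase()) ? "[REDACTED]" : sanitizeForLog(v);
    }
    return out;
  }
  return value;
}

// logger.info(sanitizeForLog(req.body)) — never logger.info(req.body)
```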

Step 8: The before/after test — measuring your context file's impact

This is the most revealing step. Run the same prompt with and without your context file active.

Without context (run in a fresh session with no CLAUDE.md):

Write an Express route handler for POST /api/users/:id/settings that updates a user's notification preferences. The body should contain an object with keys emailEnabled (boolean) and smsEnabled (boolean). Save to the database and return the updated settings.

You will likely see: direct Prisma client instantiation, plain Error throws, inline validation, process.env usage, no requireAuth, and the handler containing business logic directly.

With context (run after the AI has read your CLAUDE.md):

Write an Express route handler for POST /api/users/:id/settings that updates a user's notification preferences. The body should contain an object with keys emailEnabled (boolean) and smsEnabled (boolean). Save to the database and return the updated settings.

The same prompt now produces: import from src/lib/prisma.ts, a Zod schema defined in src/schemas/, an AppError for validation failures, requireAuth middleware applied, business logic delegated to a service function, and a repository function for the database write.

The prompt is identical. The output quality difference is entirely due to the context file.
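The shape of the "with context" output can be sketched as follows. Express, Zod, and the project's own helpers are replaced with inline stubs so the sketch runs standalone; the structure — auth first, shared-schema validation, then delegation to a service — is the part the context file buys you.

```typescript
// Minimal request/response shapes standing in for Express types.
type Req = { params: { id: string }; body: unknown; user?: { id: string } };
type Res = { status: number; body: unknown };
type Settings = { emailEnabled: boolean; smsEnabled: boolean };

// Stub for the shared Zod schema (src/schemas/settings.ts in the real repo).
function parseSettings(body: unknown): Settings {
  const b = body as Record<string, unknown> | null;
  if (typeof b?.emailEnabled !== "boolean" || typeof b?.smsEnabled !== "boolean") {
    throw new Error("ValidationError"); // AppError in the real codebase
  }
  return { emailEnabled: b.emailEnabled, smsEnabled: b.smsEnabled };
}

// Stub service (src/services/settingsService.ts); the real version would
// call a repository function for the database write.
async function updateSettings(userId: string, s: Settings) {
  return { userId, ...s };
}

// The handler stays thin: check auth, validate, delegate.
export async function updateSettingsHandler(req: Req): Promise<Res> {
  if (!req.user) return { status: 401, body: { error: "Unauthorized" } }; // requireAuth
  const settings = parseSettings(req.body);
  const updated = await updateSettings(req.params.id, settings);
  return { status: 200, body: updated };
}
```

Compare this skeleton to the "without context" failure list above: every item there corresponds to a line here that the context file made explicit.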

Learning tip: Treat the before/after test as your acceptance criteria for the context file. If the AI output still violates your conventions after the context file is active, add a more explicit rule to gotchas and retest.

Step 9: Validate the context file with a stress test prompt

Run this prompt to test whether the AI has internalized your rules across multiple concerns simultaneously:

Generate the full implementation for a new feature: a user can export their account data as a JSON file. This requires a new API endpoint, a service function, a repository query, a Zod schema, and a React component with a download button that calls the endpoint. Follow all project conventions.

Read through every generated file. Mark any violation — wrong import path, missing middleware, incorrect test structure, inline validation. Each violation becomes a new rule in your context file.

Step 10: Version control and team adoption

Commit your CLAUDE.md to the repository root and add a brief note to your team's contributing guide pointing to it. The context file is most valuable when it is kept current — assign ownership to update it during your sprint retrospective when a new gotcha is discovered.

For monorepos with distinct packages, create package-level CLAUDE.md files in each package directory with package-specific rules. The root file handles global conventions; nested files handle local concerns.

Key Takeaways

  • A project context file is a standing instruction set for your AI tools — it encodes architecture rules, conventions, and anti-patterns that are invisible to the AI without it.
  • The six essential sections are: architecture overview, tech stack and conventions, key file map, testing approach, gotchas, and security rules. The gotchas section delivers the highest return on investment.
  • The before/after test (running the same prompt with and without the context file) is the fastest way to measure whether your file is working — run it after every significant edit to the file.
  • Treat the context file as living documentation. Every AI-generated bug that gets caught in code review is a new rule waiting to be added.
  • Version-control the file and keep it in the repo root so it is automatically loaded by tools like Claude Code and stays synchronized with the codebase it describes.