
Setting Up a Production-Grade AI-Assisted Development Environment

The difference between a developer who dabbles with AI and one who consistently ships faster with it is almost always environment setup — the right configuration transforms a novelty into a reliable engineering tool.

Why Environment Setup Is Not a One-Time Task

Most engineers install an AI tool, run a few prompts, and assume they are done. In practice, an AI-assisted environment is a living piece of infrastructure. It has configuration files that encode your team's conventions, context files that orient the model toward your codebase, and integration points that need to be kept aligned as your project evolves.

The good news is that the initial setup takes less than an hour. The discipline is treating it like any other infrastructure: version-controlled, reviewed, and kept up to date. A CLAUDE.md or .cursorrules file checked into the repo means every engineer on the team — and every AI session — starts from the same shared context. That consistency is what makes AI output reliable rather than random.

Before writing a single line of configuration, it helps to decide what you want the AI to be good at in your project. Is this a TypeScript monorepo with strict lint rules? A Python service with a specific test framework? A mobile app with custom navigation patterns? The clearer you are about those expectations upfront, the more targeted your configuration will be.

Learning tip: Treat your AI configuration files the same way you treat your eslint.config.js or pyproject.toml — commit them early, review changes with the team, and update them when your conventions change. AI tools read these files on every session, so every improvement compounds.

Installing and Configuring Claude Code

Claude Code is Anthropic's official CLI agent. It operates directly in your terminal, reads your files, runs commands, and can autonomously complete multi-step engineering tasks. It is distinct from chat-based interfaces — it has agency, which means configuration matters more, not less.

Installation requires Node.js 18+:

npm install -g @anthropic-ai/claude-code

Authenticate once using your Anthropic API key:

export ANTHROPIC_API_KEY=sk-ant-...
claude

For persistent configuration without exposing your key in shell history, add it to a secrets manager or your shell's environment file. On macOS you can store it in your keychain and load it via a shell function. The key should never sit as a plaintext assignment in your .bashrc or .zshrc, and it must never appear in any file committed to version control.
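As one hedged sketch of that pattern (the file path and function name here are arbitrary choices for illustration, not an Anthropic convention), a shell function can read the key on demand from a permission-restricted file outside any repository:

```shell
# Sketch: keep the key in a file with 600 permissions outside any repo,
# and export it only when needed. Path is an arbitrary choice.
load_anthropic_key() {
  key_file="$HOME/.config/anthropic/api_key"
  if [ -r "$key_file" ]; then
    export ANTHROPIC_API_KEY="$(cat "$key_file")"
  else
    echo "load_anthropic_key: $key_file not found or unreadable" >&2
    return 1
  fi
}
```

Call load_anthropic_key before starting claude. On macOS the same function body can instead shell out to security find-generic-password to read the key from the keychain.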

Once installed, run claude from your project root. Claude Code automatically searches for a CLAUDE.md file in the current directory and its parents. This file is the single most important configuration lever you have.

For Cursor, installation is a standard desktop download. The equivalent configuration file is .cursorrules in your project root. The principles described throughout this topic apply equally to both tools — the file names differ but the strategy is identical.

Learning tip: After installing, run claude --version and claude /help before doing anything else. Spending five minutes with the help output reveals capabilities you would otherwise discover accidentally weeks later — particularly around /memory, file permissions, and MCP server configuration.

Writing a CLAUDE.md That Actually Works

A CLAUDE.md is not a README. It is a system-level instruction document that Claude reads before every session. Think of it as onboarding documentation written for an AI rather than a human — dense, precise, and focused on the constraints that matter most.

A minimal but effective CLAUDE.md for a backend Node.js service looks like this:


## Tech Stack
- Node.js 20, TypeScript 5.4 (strict mode)
- Express 4, Prisma ORM, PostgreSQL 15
- Vitest for unit tests, Supertest for integration tests

## Repository Layout
- `src/` — application source
- `src/routes/` — Express route handlers (one file per domain)
- `src/services/` — business logic (no direct DB calls here)
- `src/repositories/` — all Prisma queries live here
- `tests/` — mirrors src/ structure

## Key Conventions
- All public functions must have JSDoc with @param and @returns
- Never use `any` — use `unknown` and narrow with guards
- Database access only through the repository layer
- Errors bubble up as typed `AppError` instances (see src/errors.ts)
- All monetary values are stored and computed as integers (cents)

## Test Requirements
- Unit tests for all service-layer functions
- Integration tests for all route handlers
- No mocking of the database in integration tests — use the test DB

## Commands
- `npm run dev` — start dev server
- `npm run test` — run all tests
- `npm run lint` — run ESLint + TypeScript check
- `npm run db:migrate` — run pending Prisma migrations

For Cursor, this same content goes into .cursorrules at the repo root. The structure is identical — it is just plain text that the model reads as a high-priority instruction.

The most common mistake is writing a CLAUDE.md that is too vague. "Follow best practices" tells the model nothing it does not already know. "All monetary values are stored as integers in cents" tells it something specific to your project that it cannot infer.

Learning tip: After writing your initial CLAUDE.md, open a new Claude Code session and immediately ask: "Based on the CLAUDE.md, describe the architectural rules for this project and tell me anything that seems ambiguous." The model's response reveals exactly which rules are clear and which need more specificity.

Configuring MCP Servers for Real Project Integration

Model Context Protocol (MCP) servers extend Claude Code with tools beyond file editing — database inspection, documentation lookup, ticket management, and more. They are configured in ~/.claude/settings.json (global) or .claude/settings.json (project-level).

A typical project configuration for a development environment might include a Postgres MCP server for live schema inspection:

{
  "mcpServers": {
    "postgres": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-postgres"],
      "env": {
        "POSTGRES_CONNECTION_STRING": "${env:DEV_DATABASE_URL}"
      }
    },
    "filesystem": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-filesystem",
        "/Users/yourname/projects/payments-service"
      ]
    }
  }
}

The ${env:DEV_DATABASE_URL} syntax pulls the value from your environment at runtime — the connection string never sits in the config file itself. This is the correct pattern for any credential-bearing configuration.
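As a hypothetical launch sequence (the connection string below is a placeholder, not a real credential), the variable simply needs to exist in the shell that starts Claude Code:

```shell
# Placeholder credentials for a local dev database; set the real value
# via your secrets tooling rather than typing it into shell history.
export DEV_DATABASE_URL="postgres://user:pass@localhost:5432/payments_dev"
# claude   # on startup, ${env:DEV_DATABASE_URL} in settings.json resolves to this value
```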

With the Postgres MCP server active, Claude can inspect your live schema, query data during debugging sessions, and generate migrations that match your actual table structure — without you pasting DDL into the chat.

Learning tip: Start with one MCP server, not five. Add the database server first, verify it works with claude /mcp, then add others. Each server multiplies the tools available to the model; adding too many at once makes it harder to diagnose configuration problems.

Managing API Keys and Secrets Safely

This section exists because misconfigured secrets are a common failure mode when setting up AI tools. The rules are simple, but worth stating explicitly.

Never do any of the following:
- Put API keys in CLAUDE.md or .cursorrules
- Put connection strings in .claude/settings.json as literal values
- Paste secrets into a prompt so the model "knows" them for context
- Commit a .env file that contains real credentials

The correct approach:

  1. Store secrets in a .env file that is listed in .gitignore
  2. Reference them in MCP configs using ${env:VAR_NAME} interpolation
  3. For CI/CD environments, use your platform's secrets management (GitHub Actions secrets, AWS Secrets Manager, etc.)
  4. For local development on teams, use a tool like direnv or 1Password CLI to inject environment variables per project

A safe .env setup for Claude Code:

ANTHROPIC_API_KEY=sk-ant-api03-...
DEV_DATABASE_URL=postgres://user:pass@localhost:5432/payments_dev
And the matching entries in .gitignore:

.env
.env.local
.env.*.local

Then load it in your shell before starting Claude. Note that plain source sets the variables without exporting them to child processes, so enable auto-export first:

set -a && source .env && set +a && claude

Or use direnv to automate this per-directory.
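As a sketch of the direnv route (the .envrc filename and the dotenv helper are direnv conventions, not Claude Code ones), a one-line .envrc auto-exports your .env whenever you cd into the project:

```shell
# Hypothetical direnv setup: dotenv is a direnv stdlib helper that exports
# every variable in .env. Guard against clobbering an existing .envrc.
[ -f .envrc ] || echo 'dotenv' > .envrc
# direnv allow .   # required after every .envrc change; needs direnv hooked into your shell
```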

Learning tip: Run git grep -r "sk-ant" and git grep -r "postgres://" before your first commit with AI tooling in place. These quick checks catch secrets that slipped into config files during setup. Add a pre-commit hook to automate this check permanently.
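One hedged sketch of such a pre-commit hook (the patterns and file location are starting-point assumptions; a dedicated scanner like gitleaks is more thorough):

```shell
#!/bin/sh
# Hypothetical .husky/pre-commit sketch: refuse to commit staged changes
# that contain obvious credential patterns. Extend SECRET_RE for your stack.
SECRET_RE='sk-ant-|postgres://[^[:space:]]*:[^[:space:]]*@'

has_secret() {
  # exit status 0 (true) when stdin matches a secret pattern
  grep -Eq "$SECRET_RE"
}

if git diff --cached 2>/dev/null | has_secret; then
  echo "pre-commit: possible secret in staged changes; aborting." >&2
  exit 1
fi
```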

Configuring Git Hooks for AI-Assisted Commit Messages

One of the most immediately valuable integrations is generating commit messages from your staged diff. This pays off every day — commit messages are often written under time pressure and end up vague. An AI-assisted hook catches that before it hits the repo.

Install husky for hook management:

npm install --save-dev husky
npx husky init

Create a prepare-commit-msg hook that calls Claude to draft the message:

#!/bin/sh
# .husky/prepare-commit-msg

COMMIT_MSG_FILE=$1
COMMIT_SOURCE=$2

# Skip when a message already exists (git commit -m, merges, squashes, templates)
if [ -n "$COMMIT_SOURCE" ]; then
  exit 0
fi

# Nothing staged: leave the message empty
DIFF=$(git diff --cached --stat)
if [ -z "$DIFF" ]; then
  exit 0
fi

GENERATED=$(git diff --cached | claude --print --no-preamble \
  "Write a concise git commit message for this diff. Use the conventional commits format (type: description). First line max 72 chars. Add a brief body if the change warrants explanation. Output only the commit message, nothing else.")

# Only pre-fill the editor if Claude returned something
if [ -n "$GENERATED" ]; then
  echo "$GENERATED" > "$COMMIT_MSG_FILE"
fi

Make it executable: chmod +x .husky/prepare-commit-msg

Now when you run git commit, Claude reads the staged diff and pre-fills the commit message editor with a conventional-commits formatted message. You review and edit before it is finalized.

Learning tip: Add --no-preamble and --print flags when calling Claude from scripts. --print outputs the result to stdout and exits (non-interactive), and --no-preamble suppresses the conversational wrapper so you get only the content you asked for.

Hands-On: Setting Up the Environment End-to-End

Work through these steps on a real project — even a small side project works. The goal is a fully wired environment you can verify.

Step 1: Install Claude Code and authenticate.

npm install -g @anthropic-ai/claude-code
export ANTHROPIC_API_KEY=your-key-here
claude --version

Expected: version string printed. Run claude /help and read the output.

Step 2: Create your CLAUDE.md at the project root.

Open a new file and fill in your stack, directory layout, conventions, and key commands. Be specific about patterns that AI tools tend to get wrong in your codebase (e.g., "never use class components — this codebase uses React hooks only").
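If it helps to start from a skeleton, a throwaway scaffold like the following can be filled in (the section names mirror the example earlier in this topic; the guard avoids overwriting an existing file):

```shell
# Throwaway scaffold for a starter CLAUDE.md; refuses to overwrite one that exists.
if [ ! -f CLAUDE.md ]; then
  cat > CLAUDE.md <<'EOF'
## Tech Stack
- (runtime, language, and framework versions)

## Repository Layout
- (what belongs in each top-level directory)

## Key Conventions
- (the rules AI tools tend to get wrong in this codebase)

## Commands
- (dev server, tests, lint, migrations)
EOF
fi
```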

Step 3: Start a Claude Code session and verify context loading.

You've just read the CLAUDE.md for this project. Summarize:
1. The tech stack
2. The directory structure and what belongs in each directory
3. The three most important code conventions I should know
4. The commands for running tests and linting

If anything in CLAUDE.md is ambiguous or contradictory, point it out.

Expected: Claude accurately reflects your project's structure. Any misses or confusions indicate CLAUDE.md sections that need more detail.

Step 4: Set up a .env file and add it to .gitignore.

echo "ANTHROPIC_API_KEY=your-key" > .env
echo ".env" >> .gitignore
git status  # confirm .env is not tracked
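As a belt-and-braces check (assuming you run it inside the repo), git check-ignore exits 0 only when the path is matched by an ignore rule:

```shell
# Verify the ignore rule directly rather than eyeballing git status output.
if git check-ignore -q .env 2>/dev/null; then
  echo ".env is ignored"
else
  echo "WARNING: .env is NOT ignored"
fi
```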

Step 5: Configure a project-level MCP server.

Create .claude/settings.json with the filesystem server scoped to your project directory. Start a new session and run /mcp to verify the server is listed and connected.

Step 6: Run your first real task through the configured environment.

Look at the current state of the src/services/ directory. Identify any service functions that:
1. Make direct database calls (violating the repository pattern in CLAUDE.md)
2. Are missing JSDoc comments
3. Use `any` as a type

List each violation with the file path, line number, and a one-sentence description of the issue.

Expected: Claude uses your CLAUDE.md conventions as the standard for its audit, not generic best practices. The response should reference your specific rules (repository pattern, JSDoc requirement, no any).

Step 7: Install Husky and configure the commit message hook.

npm install --save-dev husky
npx husky init
chmod +x .husky/prepare-commit-msg

Stage a small change and run git commit to verify the hook fires and pre-fills a message.

Step 8: Verify the full setup with a small feature task.

Using the conventions in CLAUDE.md, create a new repository function in src/repositories/userRepository.ts that:
- Retrieves a user by email address
- Returns null if not found (do not throw)
- Includes a JSDoc comment
- Follows the existing patterns in that file

Then write a unit test for it in tests/repositories/userRepository.test.ts using Vitest.

Expected: Claude produces code that matches your stack (TypeScript, Prisma, Vitest), follows the patterns in CLAUDE.md, and places files in the correct directories. If it deviates, update CLAUDE.md to close the gap and retry.

Learning tip: The end-to-end verification step (Step 8) is not optional. It is the only way to confirm that your configuration is actually influencing the model's output. A CLAUDE.md that the model ignores gives you false confidence.

Key Takeaways

  • CLAUDE.md and .cursorrules are the highest-leverage configuration files in your AI environment — specific, project-focused conventions produce dramatically better output than generic guidance.
  • MCP servers extend Claude Code with live access to your infrastructure (databases, APIs, file systems); use ${env:VAR} interpolation for all credentials.
  • Secrets belong in .env (gitignored) and injected at runtime — never in config files or prompts.
  • Git hooks with claude --print --no-preamble turn AI-assisted commit messages into a zero-overhead daily practice.
  • Verification is mandatory: run a real task immediately after setup to confirm the configuration is influencing model behavior, not just sitting in a file.