Comprehensive Code Review
A four-category review structure (Architecture, Code Quality, Maintainability, UX) that ensures comprehensive coverage while preventing confirmation bias through fresh-context execution.
The Prompt
You are an expert code reviewer. Analyze the current changeset and provide a critical review.
The changes in the working tree were meant to: $DESCRIBE_CHANGES
Think step-by-step through each aspect below, focusing solely on the changes in the working tree.
1. **Architecture & Design**
- Verify conformance to project architecture
- Check module responsibilities are respected
- Ensure changes align with the original intent
2. **Code Quality**
- Code must be self-explanatory and readable
- Style must match surrounding code patterns
- Changes must be minimal - nothing unneeded
- Follow KISS principle
3. **Maintainability**
- Optimize for future LLM agents working on the codebase
- Ensure intent is clear and unambiguous
- Verify comments and docs remain in sync with code
4. **User Experience**
- Identify areas where extra effort would significantly improve UX
- Balance simplicity with meaningful enhancements
Review the changes critically. Focus on issues that matter.
Use ChunkHound's code research.
DO NOT EDIT ANYTHING - only review.
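To show how the template is typically instantiated, here is a minimal sketch that substitutes $DESCRIBE_CHANGES using Python's standard string.Template. The prompt body is abbreviated, and send_to_agent is a hypothetical placeholder for whatever agent runner you use (CLI, API, or IDE integration), not a real API.

```python
from string import Template

# Abbreviated copy of the review prompt above; $DESCRIBE_CHANGES is the only variable.
REVIEW_PROMPT = Template(
    "You are an expert code reviewer. Analyze the current changeset and provide a critical review.\n"
    "The changes in the working tree were meant to: $DESCRIBE_CHANGES\n"
    "...\n"  # remaining sections (Architecture, Code Quality, Maintainability, UX) omitted for brevity
    "DO NOT EDIT ANYTHING - only review.\n"
)

# Be specific: state the architectural goal, not just "fix bugs".
prompt = REVIEW_PROMPT.substitute(
    DESCRIBE_CHANGES="add a Redis caching layer to the user service"
)

# send_to_agent is a hypothetical stand-in for your agent interface.
# review = send_to_agent(prompt)
print(prompt)
```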
Overview
Why the four-category framework works: The persona directive ("expert code reviewer") biases vocabulary toward critical analysis rather than descriptive summarization ("violates single responsibility" instead of "this function does multiple things"). The explicit change description ($DESCRIBE_CHANGES) anchors grounding by stating intent, which makes misalignment between goal and execution detectable (the change was meant to add caching but actually introduced side effects). The sequential numbered structure implements Chain-of-Thought reasoning across the review dimensions and prevents premature conclusions: you can't evaluate maintainability without first understanding the architecture. The grounding directive ("Use ChunkHound") forces actual codebase investigation instead of hallucinating patterns from training data. The "DO NOT EDIT" constraint keeps review separate from implementation, preventing premature fixes before the analysis is complete. Finally, the four categories ensure systematic coverage: Architecture (structural correctness, pattern conformance, module boundaries), Code Quality (readability, style consistency, KISS adherence), Maintainability (future LLM comprehension, documentation sync, intent clarity), and UX (meaningful enhancements, balancing simplicity against value).
When to use (critical fresh-context requirement): Always run the review in a fresh context, separate from the conversation in which the code was written; an agent reviewing its own work in the same conversation defends prior decisions instead of analyzing them objectively (confirmation bias from accumulated context). Run it after implementation is complete (Execute phase done), after refactoring (architecture changes), and before submitting a PR (Validate phase). Be specific with $DESCRIBE_CHANGES: vague descriptions ("fix bugs", "update code") make it impossible to compare intent against implementation, while effective descriptions state the architectural goal ("add Redis caching layer to user service", "refactor authentication to use JWT tokens"). Review is iterative: review in a fresh context, fix the issues, run the tests, re-review in a new conversation, and repeat until you get a green light or hit diminishing returns. Stop when tests pass and the remaining feedback is minor (3-4 cycles at most); excessive iteration introduces review-induced regressions, where fixes address the critique without improving the code.
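The iteration loop above can be pictured as a small driver. This is a sketch under the assumption that the review, fix, and test steps are supplied as callables; their names and signatures are illustrative, not part of the pattern.

```python
from typing import Callable

def review_until_stable(
    run_fresh_review: Callable[[], list[str]],   # major findings from a review in a brand-new conversation
    apply_fixes: Callable[[list[str]], None],    # address findings in a separate session
    tests_pass: Callable[[], bool],              # True when the test suite is green
    max_cycles: int = 4,                         # beyond this, fixes tend to chase critique, not quality
) -> bool:
    """Iterate fresh-context review -> fix -> test until green light or diminishing returns."""
    for _ in range(max_cycles):
        findings = run_fresh_review()            # never reuse the conversation that wrote the code
        if tests_pass() and not findings:
            return True                          # green light: tests pass, no major issues remain
        apply_fixes(findings)                    # fix, then loop back to another fresh review
    return False                                 # stop: further cycles risk review-induced regressions
```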
Prerequisites: The agent needs code research capabilities (semantic search across the codebase, architectural pattern discovery, implementation reading), access to the git working tree (git diff, git status), and the project's architecture documentation (CLAUDE.md, AGENTS.md, README). It also requires an explicit description of the intended changes ($DESCRIBE_CHANGES) and access to both the changed files and their surrounding context for pattern conformance. The agent should return structured feedback by category, with file paths, line numbers, specific issues, and actionable recommendations (evidence requirements). The pattern adapts to specialized reviews: security (attack surface mapping, input validation boundaries, authentication flows, credential handling, OWASP Top 10), performance (algorithmic complexity, database query efficiency, memory allocation, I/O operations, caching strategy), accessibility (semantic HTML structure, keyboard navigation, ARIA labels, screen reader compatibility, color contrast ratios), and API design (REST conventions, error responses, versioning, backwards compatibility).
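One way to picture the structured, evidence-backed feedback described above is as one record per finding. The field names and the example values below are illustrative assumptions, not a format the prompt mandates.

```python
from dataclasses import dataclass

@dataclass
class ReviewFinding:
    """Illustrative shape of a single review finding (field names are assumptions)."""
    category: str         # "Architecture & Design", "Code Quality", "Maintainability", or "User Experience"
    file_path: str        # where the issue lives
    line: int             # line number of the issue
    issue: str            # what is wrong, stated concretely
    recommendation: str   # actionable fix, grounded in surrounding code patterns

# Hypothetical example of what an agent might report:
example = ReviewFinding(
    category="Code Quality",
    file_path="src/services/user_cache.py",
    line=42,
    issue="New helper duplicates existing serialization logic",
    recommendation="Reuse the existing serializer to keep the change minimal (KISS)",
)
```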
Related Lessons
- Lesson 3: High-Level Methodology - Four-phase workflow (Research > Plan > Execute > Validate) — review is the Validate phase
- Lesson 4: Prompting 101 - Persona directives, Chain-of-Thought, constraints, structured prompting
- Lesson 5: Grounding - ChunkHound for codebase research, preventing hallucination
- Lesson 7: Planning & Execution - Evidence requirements, grounding techniques
- Lesson 8: Tests as Guardrails - Fresh-context validation, preventing confirmation bias through three-context workflow
- Lesson 9: Reviewing Code - Iterative review cycles, diminishing returns, mixed vs agent-only codebases