
# AI-Assisted PR Review


Two-step PR review workflow: generate structured analysis, validate findings, then create dual-audience output files.

## Step 1: Review Generation

You are ChunkHound's maintainer reviewing {PR_LINK}. Ensure code quality, prevent technical debt, and maintain architectural consistency.

# Review Process
1. Use `gh` CLI to read the complete PR - all files, commits, comments, related issues.
2. Think step by step, but keep only a minimal draft for each thinking step (5 words at most). End the assessment with a `####` separator.
3. Never speculate about code you haven't read - investigate files before commenting.
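The read step can be sketched as a dry-run helper that prints the `gh` commands to execute (the helper name and PR number `123` are illustrative, not part of the prompt):

```shell
# Print the gh commands for a complete PR read (dry run; 123 is a placeholder PR).
pr_read_cmds() {
  local pr="$1"
  printf 'gh pr view %s --comments\n' "$pr"             # metadata + discussion threads
  printf 'gh pr diff %s\n' "$pr"                        # complete diff, all files
  printf 'gh pr view %s --json commits,files\n' "$pr"   # commits and changed files
}
pr_read_cmds 123
```

Linked issues typically surface in the `gh pr view` body and comments; follow them with `gh issue view` as needed.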

# Critical Checks
Before approving, verify:
- Can existing code be extended instead of creating new code (DRY)?
- Does this respect module boundaries and responsibilities?
- Are there similar patterns elsewhere? Search the codebase.
- Is this introducing duplication?

# Output Format
```markdown
**Summary**: [One sentence verdict]
**Strengths**: [2-3 items]
**Issues**: [By severity: Critical/Major/Minor with file:line refs]
**Reusability**: [Specific refactoring opportunities]
**Decision**: [APPROVE/REQUEST CHANGES/REJECT]
```

Start by executing `gh pr view {PR_LINK} --comments`, then follow with the Code Research tool for codebase understanding.

Below is the contributor's description of the changes. Do NOT trust it; validate it yourself.
```
{AGENT_DESC}
```

Replace: {PR_LINK} (GitHub PR URL or number), {AGENT_DESC} (AI-optimized description from PR Description Generator).
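Filling the single-line `{PR_LINK}` placeholder can be scripted; a minimal sketch using `sed` (the template filename and URL are illustrative, and a multi-line value like `{AGENT_DESC}` is easier to paste in manually):

```shell
# Fill {PR_LINK} into the Step 1 prompt (file name and URL are illustrative).
PR_LINK="https://github.com/owner/repo/pull/123"
printf 'Review {PR_LINK} as maintainer.\n' > step1_template.md   # stand-in template
sed "s|{PR_LINK}|$PR_LINK|g" step1_template.md > step1_prompt.md
cat step1_prompt.md
```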

After Step 1: Review the output. Correct any hallucinated issues, validate file:line references, then continue with Step 2 in the same context.
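Validating a `file:line` reference from the Step 1 output can be as simple as printing the cited line; a sketch (the `check_ref` helper and the example path in the usage comment are made up):

```shell
# check_ref FILE LINE: print the cited line, or fail if the file is missing.
check_ref() {
  local file="$1" line="$2"
  [ -f "$file" ] || { echo "no such file: $file" >&2; return 1; }
  sed -n "${line}p" "$file"
}
# usage: check_ref chunkhound/parser.py 128   (illustrative reference)
```

If the printed line does not match the claimed issue, treat the finding as a hallucination and strike it before Step 2.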

## Step 2: Output Generation

Summarize and explain as you would to a co-worker:
- Direct and concise
- Professional but conversational
- Assume competence and intelligence
- Skip obvious explanations

Building on this, draft two markdown files: one for a human reviewer or maintainer of the project, and a complementary one optimized for the reviewer's agent. Explain:
- What was done and the reasoning behind it
- Breaking changes, if any exist
- What value it adds to the project

**CONSTRAINTS:**
- The human-optimized markdown file should be 1-3 paragraphs max
- ALWAYS prefix the human review file with
```
**Note**: This review was generated by an AI agent. If you'd like to talk with other humans, drop by our [Discord](https://discord.gg/BAepHEXXnX)!

---
```
- Do NOT restate what was done; instead, praise where due and focus on what needs to be done
- The agent-optimized markdown should focus on explaining the changes efficiently
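The mandatory prefix can be scaffolded before drafting the body, for example with a heredoc:

```shell
# Scaffold HUMAN_REVIEW.md with the mandatory AI-disclosure prefix.
cat > HUMAN_REVIEW.md <<'EOF'
**Note**: This review was generated by an AI agent. If you'd like to talk with other humans, drop by our [Discord](https://discord.gg/BAepHEXXnX)!

---
EOF
# The 1-3 paragraph review body is then appended below the separator.
```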

Use ArguSeek to learn how to explain and optimize for both humans and LLMs. Use the Code Research tool to learn the overall architecture, module responsibilities, and coding style.

**DELIVERABLES:**
- HUMAN_REVIEW.md - Human-optimized review
- AGENT_REVIEW.md - Agent-optimized review

Start by executing:
```bash
rm -f HUMAN_REVIEW.md AGENT_REVIEW.md
```
then proceed with creating the output files.
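A final sanity check (a sketch, not part of the prompt; the `check_deliverables` helper is illustrative) confirms both deliverables were written and are non-empty:

```shell
# Verify both review files exist and are non-empty before handing them off.
check_deliverables() {
  local ok=0
  for f in HUMAN_REVIEW.md AGENT_REVIEW.md; do
    [ -s "$f" ] || { echo "missing or empty: $f" >&2; ok=1; }
  done
  return "$ok"
}
# usage: check_deliverables && echo "reviews ready"
```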

No replacements needed; this step runs in the same context after Step 1 validation.

## Overview

Why a two-step workflow with human validation: Step 1 generates structured analysis using Chain of Draft (CoD) reasoning (5 words max per thinking step), reducing token consumption by 60-80% while preserving reasoning quality. GitHub CLI integration (`gh pr view --comments`) provides multi-source grounding beyond the code diff: PR metadata, discussion threads, linked issues, and author intent. The persona directive ("maintainer") biases toward critical analysis rather than generic approval. The evidence requirement ("never speculate... investigate files") forces code research before claims, preventing hallucinated issues. Between steps, you validate the findings, because LLMs can be confidently wrong about architectural violations: cross-check file:line references and challenge vague criticisms. Step 2 then transforms the validated analysis into dual-audience output: HUMAN_REVIEW.md (concise, actionable) and AGENT_REVIEW.md (efficient technical context). This mirrors the PR Description Generator pattern: same-context continuation, not fresh analysis.

When to use (primary use cases): Systematic PR review for architectural changes touching core modules, introducing new patterns, or performing significant refactoring. Most effective with an AI-optimized description from the PR Description Generator (explicit file paths, module boundaries, breaking changes). Best used pre-merge as final validation in the Validate phase, not during active development (use Comprehensive Review for worktree changes). The dual output files enable HUMAN_REVIEW.md for maintainer discussion (1-3 paragraphs, praise plus actionable focus) and AGENT_REVIEW.md for downstream AI tools that may process the review. Less effective for trivial PRs (documentation-only, dependency updates, simple bug fixes), where the review overhead exceeds the value.

Prerequisites: GitHub CLI (`gh`) installed and authenticated (`gh auth status`), ChunkHound for codebase semantic search, and ArguSeek for learning human/LLM optimization patterns in Step 2. Requires a PR link (URL or number) and the AI-optimized description from the PR Description Generator workflow.

Outputs: Step 1 produces a structured verdict (Summary/Strengths/Issues/Decision); Step 2 produces the HUMAN_REVIEW.md and AGENT_REVIEW.md files.

Adapt the Critical Checks for a specialized focus: security (input validation, auth checks, credential handling, injection vectors), performance (complexity, N+1 queries, memory patterns, caching), or accessibility (semantic HTML, keyboard nav, ARIA, contrast).
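The prerequisite checks can be wrapped in a small preflight sketch (the `require` helper is illustrative; `gh auth status` in the usage comment is the real CLI check):

```shell
# require CMD: fail with a message if CMD is not on PATH.
require() {
  command -v "$1" >/dev/null 2>&1 || { echo "missing: $1" >&2; return 1; }
}
# usage: require gh && gh auth status   # confirm the GitHub CLI is authenticated
```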