PR Description Generator
Generate coordinated PR descriptions for two audiences: human maintainers (concise summaries) and AI review assistants (comprehensive technical context).
The Prompt
You are a contributor to {PROJECT_NAME} creating a GitHub pull request for the current branch. Using the sub-task tool to conserve context, explore the changes in the git history relative to the main branch. Summarize and explain as you would to a co-worker:
- Direct and concise
- Professional but conversational
- Assume competence and intelligence
- Skip obvious explanations
The intent of the changes is:
```
{CHANGES_DESC}
```
Building upon this, draft two markdown files: one for a human reviewer/maintainer of the project and a complementary one optimized for the reviewer's AI agent. Explain:
- What was done and the reasoning behind it
- Breaking changes, if any exist
- What value it adds to the project
**CONSTRAINTS:**
- The human optimized markdown file should be 1-3 paragraphs max
- ALWAYS prefix the human review file with
```
**Note**: This PR was generated by an AI agent. If you'd like to talk with other humans, drop by our [Discord](https://discord.gg/BAepHEXXnX)!
---
```
- Do NOT state what was done; instead, praise where due and focus on what needs to be done
- Agent optimized markdown should focus on explaining the changes efficiently
Use ArguSeek to learn how to explain and optimize for both humans and LLMs. Use codebase research to learn the overall architecture, module responsibilities, and coding style.
**DELIVERABLES:**
- HUMAN_REVIEW.md - Human optimized review
- AGENT_REVIEW.md - Agent optimized review
Start by executing:
```bash
rm -f HUMAN_REVIEW.md AGENT_REVIEW.md
```
then proceed with creating the output files.
Replace: {PROJECT_NAME} (repo name), {CHANGES_DESC} (1-2 sentence PR summary).
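As a rough sketch of how you might fill in those placeholders before pasting the prompt into your agent session (the file name and example values below are illustrative, not part of the prompt):

```bash
# Assumes the template above is saved as pr_description_prompt.md (name is arbitrary).
PROJECT_NAME="example-repo"
CHANGES_DESC="Add incremental indexing so unchanged files are skipped on re-scan."

# Substitute the placeholders and copy the result into your agent session.
sed -e "s/{PROJECT_NAME}/${PROJECT_NAME}/g" \
    -e "s/{CHANGES_DESC}/${CHANGES_DESC}/g" \
    pr_description_prompt.md > prompt_filled.md
```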
Overview
Why dual-audience optimization works: Sub-agents conserve context by spawning separate agents to explore git history—without this delegation, 20-30 file changes consume 40K+ tokens, pushing critical constraints into the attention curve's ignored middle. Multi-source grounding combines ArguSeek (PR best practices from GitHub docs and engineering blogs) with ChunkHound (project-specific architecture patterns, module responsibilities), preventing generic advice divorced from your codebase realities. The "co-worker" persona framing with explicit style constraints (direct, concise, assume competence, skip obvious) prevents verbose explanations that waste reviewer attention. Dual constraints balance audiences: "1-3 paragraphs max" for humans prevents overwhelming maintainers with walls of text, while "explain efficiently" keeps AI context comprehensive but structured—critical because AI reviewers need architectural context (file relationships, module boundaries) that humans infer from experience.
When to use (workflow integration): Before submitting PRs with complex changesets (10+ files, multiple modules touched, cross-cutting concerns) or for cross-team collaboration where reviewers lack deep module familiarity. Integrate it into your PR workflow: complete implementation → validate with tests → self-review for issues → fix discovered problems → generate the dual descriptions → submit the PR with both files. Be specific with {CHANGES_DESC}: vague descriptions ("fix bugs", "update logic") produce generic output because grounding requires concrete intent; without a specific change description, the agent has no anchor for deciding what matters in the git diff. Critical: if you manually edit the generated descriptions, regenerate BOTH files. Stale context in the AI-optimized description causes hallucinations during review when its architectural explanations contradict the actual changes. For teams without AI reviewers yet, the human-optimized output alone provides concise summaries that respect reviewer time.
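A minimal sketch of that flow on the command line, assuming the GitHub CLI (`gh`) is installed; the branch name and test command are stand-ins for your project's own:

```bash
git checkout feature/incremental-indexing   # hypothetical feature branch
npm test                                    # or pytest, cargo test, etc.: validate first
# ... run the agent with the filled-in prompt; it writes the two review files ...
gh pr create \
  --title "Incremental indexing for changed files" \
  --body-file HUMAN_REVIEW.md               # human-optimized description becomes the PR body
gh pr comment --body-file AGENT_REVIEW.md   # attach the AI-optimized context for AI reviewers
```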
Prerequisites: Sub-agent/task tool (the Claude Code CLI provides a built-in Task tool; other platforms require manual context management via sequential prompts), ArguSeek (web research for PR best practices), ChunkHound (codebase research via multi-hop semantic search and iterative exploration), and git history access with committed changes on a feature branch. It also requires a base branch for comparison (typically main or develop) and architecture documentation (CLAUDE.md for project context, AGENTS.md for agentic workflows). The agent generates two markdown files: human-optimized (1-3 paragraphs covering what changed, why it matters, breaking changes if any, and the value delivered) and AI-optimized (file paths with line numbers, module responsibilities, architectural patterns followed, boundary changes, testing coverage, and edge cases addressed). Adapt this pattern for other documentation needs: release notes (user-facing features vs technical changelog), incident postmortems (executive summary vs technical root cause analysis), design docs (stakeholder overview vs implementation deep-dive).
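Before invoking the prompt, a quick pre-flight check along these lines (assuming main is the base branch) confirms the agent will have history to work with:

```bash
git status --short            # should be empty: all changes committed
git log --oneline main..HEAD  # commits the agent will summarize
git diff --stat main...HEAD   # files touched relative to the base branch
```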
For actual review, use these prompts with the generated artifacts:
- AI-Assisted PR Review — Review PRs using the AI-optimized description with GitHub CLI integration
- Comprehensive Code Review — Review worktree changes before committing (pre-PR stage)
Related Lessons
- Lesson 2: Understanding Agents - Sub-agents, task delegation, context conservation
- Lesson 4: Prompting 101 - Persona, constraints, attention curves
- Lesson 5: Grounding - Multi-source grounding, preventing hallucination
- Lesson 9: Reviewing Code - Dual-audience optimization, PR workflows, AI-assisted review