Generate AGENTS.md (Project Context File)
Systematic workflow for bootstrapping project context files using ChunkHound codebase analysis and ArguSeek ecosystem research.
The Prompt
Generate AGENTS.md for this project.
Use the code research tool to learn the project architecture, tech stack,
how auth works, testing conventions, coding style, and deployment process.
Use ArguSeek to fetch current best practices for the tech stack used and the
latest security guidelines.
Create a concise file (≤500 lines) with sections:
- Tech Stack
- Development Commands (modified for non-interactive execution)
- Architecture (high-level structure)
- Coding Conventions and Style
- Critical Constraints
- Common Pitfalls (if found).
Do NOT duplicate information already in README or code comments—instead, focus
exclusively on AI-specific operations: environment variables, non-obvious
dependencies, and commands requiring modification for agents.
Adapt sections for your project: Security Guidelines (sensitive data), API Integration Patterns (microservices), Database Schema (data-intensive), Deployment Checklist (complex workflows).
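For orientation, here is a minimal sketch of the kind of file this prompt tends to produce. It assumes a hypothetical TypeScript/Node service backed by PostgreSQL; every name, path, command, and variable below is illustrative, not taken from a real project:

```markdown
# AGENTS.md

## Tech Stack
- TypeScript 5 on Node 20, Express API, PostgreSQL via Prisma (hypothetical stack)

## Development Commands
- Install dependencies: `npm ci`
- Build: `npm run build`
- Full test suite (non-interactive): `npm run test:ci`

## Architecture
- `src/api/` HTTP routes → `src/services/` business logic → `src/db/` data access

## Coding Conventions and Style
- Strict TypeScript, named exports only, errors surfaced as typed results rather than thrown

## Critical Constraints
- `DATABASE_URL` and `JWT_SECRET` must come from the environment; never commit `.env`
- Every schema change needs a migration; never hand-edit generated Prisma client code

## Common Pitfalls
- The default `npm test` starts a watcher and never exits; agents must use the CI variant
```

Note how every line records something an agent would act on directly; background explanation stays in the README.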
Overview
Why multi-source grounding works: ChunkHound provides codebase-specific context (patterns, conventions, architecture) while ArguSeek provides current ecosystem knowledge (framework best practices, security guidelines), combining empirical project reality with up-to-date ecosystem standards. The structured output format with explicit sections ensures comprehensive coverage by forcing systematic enumeration instead of free-form narrative. The ≤500-line conciseness constraint forces prioritization; without it, agents generate verbose documentation that gets ignored during actual use. The non-duplication directive keeps the focus on AI-specific operational details that agents can't easily infer from code alone: environment setup, non-interactive command modifications, and deployment gotchas. This implements the Research phase of the four-phase workflow, letting agents build their own foundation before tackling implementation tasks.
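To make the non-interactive requirement concrete, here is a hedged sketch of how interactive defaults get rewritten rather than copied from the README. The commands assume a hypothetical Jest- and Prisma-based project in a Debian-like environment; substitute your project's real tooling:

```markdown
## Development Commands
<!-- README documents the interactive defaults; AGENTS.md records the agent-safe variants -->
- Tests:       `CI=true npm test -- --watchAll=false`   (instead of watch-mode `npm test`)
- Migrations:  `npx prisma migrate deploy`              (instead of interactive `prisma migrate dev`)
- OS packages: `apt-get install -y <package>`           (instead of prompting `apt-get install`)
```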
When to use this pattern: new project onboarding (establish baseline context before the first implementation task), documenting legacy projects (capture tribal knowledge systematically), and refreshing context after architectural changes (migrations, framework upgrades, major refactors). Run it early in project adoption to establish the baseline context file, re-run it after major changes, then manually add the tribal knowledge (production incidents, team conventions, non-obvious gotchas) that AI can't discover from code. Without this initial grounding, agents hallucinate conventions based on training patterns instead of reading your actual codebase, which shows up as style violations, incorrect assumptions about the architecture, and ignored project-specific constraints.
Prerequisites: ChunkHound code research (deep codebase exploration via multi-hop semantic search, query expansion, and iterative follow-ups), ArguSeek web research (ecosystem documentation and current best practices), and write access to the project root. The pattern requires an existing codebase with source files plus a README or basic documentation, so the agent knows what not to duplicate. After generation, validate by testing with a typical task; if the agent doesn't follow the documented conventions, the context file needs iteration. Without validation, you risk cementing incorrect assumptions into project context that compound across future tasks.
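A lightweight way to run that validation is to hand the agent a small, representative task and compare its behavior against the documented conventions. A hypothetical follow-up prompt (the task, endpoint, and file names are illustrative) might look like:

```markdown
Read AGENTS.md, then add an integration test for the `/health` endpoint
following the project's testing conventions. Run the test suite
non-interactively and report the exact command you used.
```

If the agent reaches for watch mode, invents environment variables, or ignores documented style rules, tighten the corresponding section before relying on the file for larger tasks.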
Related Lessons
- Lesson 3: High-Level Methodology - Four-phase workflow (Research > Plan > Execute > Validate), iteration decisions
- Lesson 4: Prompting 101 - Structured prompting, constraints as guardrails, information density
- Lesson 5: Grounding - Multi-source grounding (ChunkHound + ArguSeek), semantic search, sub-agents
- Lesson 6: Project Onboarding - Context files (AGENTS.md, CLAUDE.md), hierarchical context, tribal knowledge