Edge Case Discovery

A two-step pattern for discovering what needs testing through systematic research, grounding test generation in your actual code rather than generic advice.

The Prompt (Step 1: Discover Existing Edge Cases)

How does validateUser() work? What edge cases exist in the current implementation?
What special handling exists for different auth providers?
Search for related tests and analyze what they cover.

Replace validateUser() with your target function/module.
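
For illustration, a hypothetical validateUser() might look like the sketch below; the provider-specific branches are exactly the kind of special handling Step 1 should surface. The types, providers, and rules here are invented for the example, not taken from a real codebase.

```typescript
// Hypothetical implementation sketch: invented for illustration only.
interface User {
  id: string;
  email?: string;
  emailVerified: boolean;
  provider: "password" | "oauth" | "saml";
  role: "user" | "admin";
}

function validateUser(user: User | null | undefined): boolean {
  if (!user) return false;                     // guard against null/undefined
  if (user.provider === "oauth") return true;  // OAuth users skip email verification
  if (user.role === "admin") return true;      // admins bypass further checks
  return Boolean(user.email) && user.emailVerified;
}
```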

Follow-Up Prompt (Step 2: Identify Gaps)

Based on the implementation you found, what edge cases are NOT covered by tests?
What happens with:
- Null or undefined inputs
- Users mid-registration (incomplete profile)
- Concurrent validation requests

Adapt the bulleted questions to your domain. Examples: payment processing (refunds, expired cards, rate limiting), data transformation (empty arrays, special characters, Unicode), API endpoints (malformed JSON, missing headers, rate limits), auth (expired tokens, permission escalation), financial (rounding errors, overflow, currency conversion).
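
Once Step 2 names concrete gaps, they translate almost directly into test cases. A minimal sketch, assuming the hypothetical validateUser() above and a Jest-style runner (describe/it/expect); the user fixtures are invented:

```typescript
// Gap-driven tests for the three example categories; names and behavior are illustrative.
describe("validateUser: edge cases missing from existing tests", () => {
  it("rejects null and undefined inputs", () => {
    expect(validateUser(null)).toBe(false);
    expect(validateUser(undefined)).toBe(false);
  });

  it("rejects users mid-registration (no email on record yet)", () => {
    const partial: User = { id: "u1", emailVerified: false, provider: "password", role: "user" };
    expect(validateUser(partial)).toBe(false);
  });

  it("stays consistent under concurrent validation requests", async () => {
    const user: User = { id: "u2", email: "a@example.com", emailVerified: true, provider: "password", role: "user" };
    // Synchronous in this sketch; with shared state or I/O, real races become possible.
    const results = await Promise.all(
      Array.from({ length: 10 }, () => Promise.resolve(validateUser(user)))
    );
    expect(results.every(Boolean)).toBe(true);
  });
});
```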

Overview

Why the two-step pattern prevents generic advice: Step 1 loads concrete constraints. The agent searches for the function, reads the implementation, and finds existing tests, which populates context with actual edge cases from your codebase ("OAuth users skip email verification," "admin users bypass rate limits"). Step 2 then identifies gaps: with the implementation in context, the agent analyzes what is NOT tested instead of listing generic test categories. Grounding directives force a codebase search before any tests are suggested, existing tests reveal coverage patterns and domain-specific edge cases, and implementation details expose actual failure modes rather than hypothetical ones. Discovering edge cases separately from the implementation also prevents specification gaming, similar to the fresh-context requirement for objective analysis.
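
For a concrete sense of what Step 1 surfaces from an existing suite, a test like the following (hypothetical, matching the sketch above) is the evidence that "OAuth users skip email verification" is already covered, which is precisely what lets Step 2 focus on what is not:

```typescript
// Hypothetical existing test that Step 1 would cite as coverage evidence.
it("accepts OAuth users without a verified email", () => {
  const oauthUser: User = { id: "u3", provider: "oauth", role: "user", emailVerified: false };
  expect(validateUser(oauthUser)).toBe(true);
});
```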

When to use (research-first requirement): before implementing new features (discover existing patterns), during test-driven development (identify edge cases before implementation), when increasing coverage (find gaps in existing suites), when refactoring legacy code (understand implicit edge case handling), and during code review (validate that PRs include relevant tests). Critical: don't skip Step 1. Asking directly "what edge cases should I test?" produces generic advice with no codebase grounding. Be specific in Step 2, using domain-relevant categories (see the examples in the prompt). If you generate edge cases and the implementation in the same conversation, the tests will simply match the implementation's assumptions, so use this pattern in a fresh context or before implementation. Without grounding, agents hallucinate from training patterns instead of analyzing your actual code.

Prerequisites: code research capabilities (deep codebase exploration via multi-hop semantic search, query expansion, and iterative follow-ups), access to implementation files and existing test suites, and the name of the function or module to analyze. After Step 1, the agent provides an implementation summary with file paths, the currently tested edge cases with evidence from test files, and the special handling logic and conditional branches. After Step 2, the agent identifies untested code paths with line numbers, missing edge case coverage with concrete examples from your domain, and potential failure modes based on analysis of the implementation.