# Test Fixing

Systematically identify and fix all failing tests using smart grouping strategies.
## When to Use

Use this approach when the user:

- Explicitly asks to fix tests ("fix these tests", "make tests pass")
- Reports test failures ("tests are failing", "the test suite is broken")
- Completes an implementation and wants the tests passing
- Mentions CI/CD failures caused by tests
## Systematic Approach

### 1. Initial Test Run

Run `make test` to identify all failing tests.

Analyze the output for:

- Total number of failures
- Error types and patterns
- Affected modules/files
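As a sketch of this analysis step, the headline failure count can be pulled out of the pytest summary line. The function name and the exact summary format handled here are assumptions about typical pytest output, not part of this workflow's tooling:

```python
import re

def parse_pytest_summary(line: str) -> dict:
    """Extract result counts from a pytest-style summary line,
    e.g. '=== 15 failed, 42 passed in 3.21s ==='."""
    counts = {}
    for count, status in re.findall(r"(\d+) (failed|passed|errors?|skipped)", line):
        counts[status] = int(count)
    return counts
```

From these counts you can decide whether to fix tests directly (a handful of failures) or invest in grouping first (dozens of failures).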
### 2. Smart Error Grouping

Group similar failures by:

- **Error type**: ImportError, AttributeError, AssertionError, etc.
- **Module/file**: the same file causing multiple test failures
- **Root cause**: missing dependencies, API changes, refactoring impacts

Prioritize groups by:

- Number of affected tests (highest impact first)
- Dependency order (fix infrastructure before functionality)
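The grouping and impact-ordering above can be sketched in a few lines of Python. The function and data names here are illustrative, not part of any real tooling:

```python
from collections import defaultdict

def group_failures(failures):
    """Group (test_name, error_type) pairs by error type,
    returning groups largest-first so the highest-impact
    group is fixed first."""
    groups = defaultdict(list)
    for test, error_type in failures:
        groups[error_type].append(test)
    return sorted(groups.items(), key=lambda kv: len(kv[1]), reverse=True)

# Hypothetical failure list parsed from a test run
failures = [
    ("test_a", "ImportError"),
    ("test_b", "ImportError"),
    ("test_c", "AssertionError"),
]
```

Calling `group_failures(failures)` on the sample data puts the two ImportErrors first, matching the highest-impact-first rule.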
### 3. Systematic Fixing Process

For each group (starting with the highest impact):

1. **Identify the root cause**
   - Read the relevant code
   - Check recent changes with `git diff`
   - Understand the error pattern
2. **Implement the fix**
   - Use the Edit tool for code changes
   - Follow project conventions (see CLAUDE.md)
   - Make minimal, focused changes
3. **Verify the fix**
   - Run the subset of tests for this group
   - Use pytest markers or file patterns:

     ```sh
     uv run pytest tests/path/to/test_file.py -v
     uv run pytest -k "pattern" -v
     ```
   - Ensure the group passes before moving on
4. **Move to the next group**
### 4. Fix Order Strategy

**Infrastructure first:**

- Import errors
- Missing dependencies
- Configuration issues

**Then API changes:**

- Function signature changes
- Module reorganization
- Renamed variables/functions

**Finally, logic issues:**

- Assertion failures
- Business logic bugs
- Edge case handling
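One way to encode this three-tier ordering is a priority map. The map and function below are a hypothetical sketch; the tier assignments simply mirror the categories above:

```python
# Assumed tier mapping: 0 = infrastructure, 1 = API changes, 2 = logic
FIX_PRIORITY = {
    "ImportError": 0,
    "ModuleNotFoundError": 0,
    "AttributeError": 1,
    "TypeError": 1,
    "AssertionError": 2,
}

def fix_order(error_types):
    """Sort error types: infrastructure first, then API changes,
    then logic issues; unknown types go last."""
    return sorted(error_types, key=lambda e: FIX_PRIORITY.get(e, 3))
```

This keeps the ordering explicit and easy to adjust if a project's failure patterns differ.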
### 5. Final Verification

After all groups are fixed:

- Run the complete test suite: `make test`
- Verify no regressions
- Check that test coverage remains intact
## Best Practices

- Fix one group at a time
- Run focused tests after each fix
- Use `git diff` to understand recent changes
- Look for patterns in failures
- Don't move to the next group until the current one passes
- Keep changes minimal and focused
## Example Workflow

User: "The tests are failing after my refactor"

1. Run `make test` → 15 failures identified
2. Group the errors:
   - 8 ImportErrors (module renamed)
   - 5 AttributeErrors (function signature changed)
   - 2 AssertionErrors (logic bugs)
3. Fix ImportErrors first → run subset → verify
4. Fix AttributeErrors → run subset → verify
5. Fix AssertionErrors → run subset → verify
6. Run full suite → all pass ✓
## Gap Analysis Rule

Always identify remaining gaps and suggest next steps to the user. If no gaps remain, state clearly that none are left.