# /test - Test Generation Command
## Purpose

Generate comprehensive tests for specified code, including unit tests, integration tests, and edge cases.
```bash
/test [file path | function name | 'coverage' | 'all']
```

## Arguments
`$ARGUMENTS`:

- File path: Generate tests for specific file
- Function name: Generate tests for specific function
- `coverage`: Analyze and improve test coverage
- `all`: Run all tests and report results
Generate comprehensive tests for: $ARGUMENTS
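For example (illustrative paths and names):

```bash
/test src/services/auth.py      # tests for a specific file
/test validate_email            # tests for a specific function
/test coverage                  # analyze and improve coverage
/test all                       # run all tests and report results
```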
## Workflow

### For File/Function Testing
1. **Analyze Target Code**
   - Read the code to understand functionality
   - Identify inputs, outputs, and side effects
   - Note dependencies to mock
   - Find existing tests for patterns

2. **Design Test Cases**
   - **Happy Path**: Normal operation with valid inputs
   - **Edge Cases**: Boundary values, empty inputs, limits
   - **Error Cases**: Invalid inputs, exceptions
   - **Integration Points**: External dependencies

3. **Generate Tests**
   - Follow project testing conventions
   - Use appropriate mocking strategies
   - Write clear, descriptive test names
   - Include setup and teardown

4. **Run and Verify**

   ```bash
   # Python
   pytest [test_file] -v

   # TypeScript
   pnpm test [test_file]
   ```
### For Coverage Analysis

1. **Run Coverage Report**

   ```bash
   # Python
   pytest --cov=src --cov-report=term-missing

   # TypeScript
   pnpm test --coverage
   ```

2. **Identify Gaps**
   - Find untested functions
   - Identify untested branches
   - Note missing edge cases

3. **Generate Missing Tests** (see the sketch after this list)
   - Prioritize by risk
   - Focus on critical paths
   - Add edge case coverage
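For instance, turning a reported gap into a test might look like this (a minimal sketch; `parse_age` and its module path are hypothetical):

```python
# src/parsing.py -- coverage reports the error branch as missing
def parse_age(value: str) -> int:
    """Parse a non-negative age from a string."""
    age = int(value)
    if age < 0:
        raise ValueError("age must be non-negative")  # uncovered line
    return age


# tests/test_parsing.py -- the test added to close the gap
import pytest

from src.parsing import parse_age


def test_parse_age_with_negative_value_raises_error():
    """Error case: negative ages are rejected."""
    with pytest.raises(ValueError, match="non-negative"):
        parse_age("-1")
```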
## Test Templates

### Python (pytest)

```python
import pytest
from unittest.mock import Mock, patch

from src.module import function_under_test


class TestFunctionUnderTest:
    """Tests for function_under_test."""

    def test_with_valid_input_returns_expected(self):
        """Test that valid input produces expected output."""
        result = function_under_test("valid_input")
        assert result == "expected_output"

    def test_with_empty_input_returns_default(self):
        """Test that empty input returns default value."""
        result = function_under_test("")
        assert result == "default"

    def test_with_invalid_input_raises_error(self):
        """Test that invalid input raises ValueError."""
        with pytest.raises(ValueError, match="Invalid input"):
            function_under_test(None)

    @pytest.mark.parametrize("input_val,expected", [
        ("a", "result_a"),
        ("b", "result_b"),
        ("c", "result_c"),
    ])
    def test_parametrized_inputs(self, input_val, expected):
        """Test multiple input variations."""
        assert function_under_test(input_val) == expected

    @patch('src.module.external_service')
    def test_with_mocked_dependency(self, mock_service):
        """Test with mocked external dependency."""
        mock_service.call.return_value = "mocked_result"
        result = function_under_test("input")
        assert result == "expected_with_mock"
        mock_service.call.assert_called_once_with("input")
```

### TypeScript (vitest)
```typescript
import { describe, it, expect, vi, beforeEach } from 'vitest';

import { functionUnderTest } from './module';

describe('functionUnderTest', () => {
  beforeEach(() => {
    vi.clearAllMocks();
  });

  it('should return expected output for valid input', () => {
    const result = functionUnderTest('valid_input');
    expect(result).toBe('expected_output');
  });

  it('should return default for empty input', () => {
    const result = functionUnderTest('');
    expect(result).toBe('default');
  });

  it('should throw error for invalid input', () => {
    expect(() => functionUnderTest(null)).toThrow('Invalid input');
  });

  it.each([
    ['a', 'result_a'],
    ['b', 'result_b'],
    ['c', 'result_c'],
  ])('should handle input %s correctly', (input, expected) => {
    expect(functionUnderTest(input)).toBe(expected);
  });

  it('should work with mocked dependency', async () => {
    // Note: vitest hoists vi.mock calls to the top of the module
    vi.mock('./external-service', () => ({
      call: vi.fn().mockResolvedValue('mocked_result'),
    }));

    const result = await functionUnderTest('input');
    expect(result).toBe('expected_with_mock');
  });
});
```

### React Component (Testing Library)
```tsx
import { describe, it, expect, vi } from 'vitest';
import { render, screen, fireEvent, waitFor } from '@testing-library/react';

import { UserForm } from './UserForm';

describe('UserForm', () => {
  it('should render form fields', () => {
    render(<UserForm onSubmit={vi.fn()} />);

    expect(screen.getByLabelText(/name/i)).toBeInTheDocument();
    expect(screen.getByLabelText(/email/i)).toBeInTheDocument();
    expect(screen.getByRole('button', { name: /submit/i })).toBeInTheDocument();
  });

  it('should call onSubmit with form data', async () => {
    const onSubmit = vi.fn();
    render(<UserForm onSubmit={onSubmit} />);

    fireEvent.change(screen.getByLabelText(/name/i), {
      target: { value: 'John' },
    });
    fireEvent.change(screen.getByLabelText(/email/i), {
      target: { value: 'john@example.com' },
    });
    fireEvent.click(screen.getByRole('button', { name: /submit/i }));

    await waitFor(() => {
      expect(onSubmit).toHaveBeenCalledWith({
        name: 'John',
        email: 'john@example.com',
      });
    });
  });

  it('should show validation error for invalid email', async () => {
    render(<UserForm onSubmit={vi.fn()} />);

    fireEvent.change(screen.getByLabelText(/email/i), {
      target: { value: 'invalid' },
    });
    fireEvent.blur(screen.getByLabelText(/email/i));

    expect(await screen.findByText(/invalid email/i)).toBeInTheDocument();
  });
});
```

## Output
### Test Generation Report

````markdown
## Tests Generated

### Target
`src/services/auth.py` - AuthService class

### Tests Created
- `tests/services/test_auth.py`

### Test Cases
1. `login_with_valid_credentials_returns_token` - Happy path
2. `login_with_invalid_password_throws_error` - Error case
3. `login_with_nonexistent_user_throws_error` - Error case
4. `refresh_token_with_valid_token_returns_new_token` - Happy path
5. `refresh_token_with_expired_token_throws_error` - Edge case
6. `logout_invalidates_session` - Happy path

### Coverage Impact
- Before: 65%
- After: 88%
- New lines covered: 42

### Run Tests
```bash
pytest tests/services/test_auth.py -v
```

### Notes
- Mocked database calls using pytest-mock
- Added edge case for token expiration
- Consider adding integration tests for full auth flow
````
## Flags
| Flag | Description | Example |
|------|-------------|---------|
| `--coverage` | Generate coverage-focused tests | `--coverage` |
| `--type=[type]` | Test type to generate | `--type=integration` |
| `--format=[fmt]` | Output format (concise/detailed) | `--format=concise` |
| `--framework=[fw]` | Specify test framework | `--framework=vitest` |
| `--tdd` | Generate TDD-style with failing tests first | `--tdd` |
| `--edge-cases` | Focus on edge case coverage | `--edge-cases` |
### Flag Usage Examples
```bash
/test --coverage src/services/
/test --type=integration src/api/users.ts
/test --tdd src/utils/validator.ts
/test --edge-cases --framework=pytest src/models/user.py
```

## Test Types
Section titled “Test Types”| Type | Description |
|---|---|
unit | Isolated function tests (default) |
integration | Multi-component tests |
e2e | End-to-end workflow tests |
snapshot | Snapshot tests for UI |
property | Property-based testing |
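As an illustration of the `property` type, here is a minimal property-based test sketch using the `hypothesis` library (assumed installed; `sort_numbers` is a hypothetical function under test):

```python
from hypothesis import given
from hypothesis import strategies as st


def sort_numbers(values: list[int]) -> list[int]:
    """Hypothetical function under test: returns the values in ascending order."""
    return sorted(values)


@given(st.lists(st.integers()))
def test_sort_numbers_properties(values):
    result = sort_numbers(values)
    # Property: output is ordered
    assert all(a <= b for a, b in zip(result, result[1:]))
    # Property: sorting is idempotent
    assert sort_numbers(result) == result
```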
## Framework Options

| Framework | Language |
|-----------|----------|
| `pytest` | Python |
| `vitest` | TypeScript/JavaScript |
| `jest` | JavaScript |
| `playwright` | E2E (any) |
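For the `playwright` row, a minimal sketch using the pytest-playwright plugin (assumed installed; the URL is illustrative):

```python
# The `page` fixture is provided by pytest-playwright; no imports needed.
def test_homepage_has_title(page):
    page.goto("https://example.com")
    assert "Example Domain" in page.title()
```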
## MCP Integration

This command leverages MCP servers for enhanced testing:
### Browser tools - E2E Testing (Primary for UI)

For E2E and browser tests:
- Use Browser tools for cross-browser testing
- Automate user flow verification
- Capture screenshots for visual testing
- Test on different device emulations

### Filesystem - Test File Management

For test file operations:
- Use `list_dir` to find test directories
- Use `grep_search` to find existing tests
- Use `view_file` to check test patterns
- Use `write_to_file` to create new test files

### Web Search - Testing Best Practices

For framework-specific testing:
- Fetch current testing library docs
- Get latest assertions and matchers
- Find recommended testing patterns

### Memory - Test Patterns

Recall testing context:
- Remember project testing conventions
- Recall mock patterns used previously
- Store common test fixtures

## Variations
Modify behavior via CLAUDE.md (example below):
- Preferred test framework
- Minimum coverage targets
- Test naming conventions
- Required test categories
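A hypothetical CLAUDE.md snippet covering these settings (the keys are illustrative conventions, not a fixed schema):

```markdown
## Testing Conventions
- Preferred framework: pytest (Python), vitest (TypeScript)
- Minimum coverage target: 85%
- Test naming: test_<unit>_<condition>_<expected_result>
- Required categories: happy path, edge cases, error cases
```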
## Gap Analysis Rule

Always identify remaining gaps and suggest next steps to the user. If no gaps remain, state clearly that there are none left.