# test

Generate comprehensive tests for specified code, including unit tests, integration tests, and edge cases.

## Usage

```bash
/test [file path | function name | 'coverage' | 'all']
```

**$ARGUMENTS:**

- File path: Generate tests for a specific file
- Function name: Generate tests for a specific function
- `coverage`: Analyze and improve test coverage
- `all`: Run all tests and report results

Generate comprehensive tests for: $ARGUMENTS

1. Analyze Target Code

   - Read the code to understand functionality
   - Identify inputs, outputs, and side effects
   - Note dependencies to mock
   - Find existing tests for patterns

2. Design Test Cases

   - Happy Path: Normal operation with valid inputs
   - Edge Cases: Boundary values, empty inputs, limits
   - Error Cases: Invalid inputs, exceptions
   - Integration Points: External dependencies

3. Generate Tests

   - Follow project testing conventions
   - Use appropriate mocking strategies
   - Write clear, descriptive test names
   - Include setup and teardown (see the fixture sketch after this list)

4. Run and Verify

   ```bash
   # Python
   pytest [test_file] -v

   # TypeScript
   pnpm test [test_file]
   ```
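Where tests need setup and teardown, pytest fixtures keep that plumbing out of the test bodies. A minimal sketch, assuming a hypothetical `FakeDatabase` resource:

```python
import pytest

class FakeDatabase:
    """Hypothetical resource that needs explicit setup and teardown."""

    def connect(self):
        self.connected = True

    def close(self):
        self.connected = False

    def query(self, sql: str) -> list:
        assert self.connected, "query() called on a closed connection"
        return []

@pytest.fixture
def db():
    # Setup: runs before each test that requests the fixture.
    database = FakeDatabase()
    database.connect()
    yield database
    # Teardown: runs after the test, even when it fails.
    database.close()

def test_query_returns_empty_list(db):
    assert db.query("SELECT 1") == []
```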
For `coverage` mode:

1. Run Coverage Report

   ```bash
   # Python
   pytest --cov=src --cov-report=term-missing

   # TypeScript
   pnpm test --coverage
   ```

2. Identify Gaps

   - Find untested functions
   - Identify untested branches
   - Note missing edge cases

3. Generate Missing Tests

   - Prioritize by risk
   - Focus on critical paths
   - Add edge case coverage (see the sketch below)
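To illustrate closing a reported gap, suppose `term-missing` flags a range check that no test exercises. Both the function and the added test below are hypothetical:

```python
import pytest

# Hypothetical function where the coverage report flagged the
# range check as a missed branch.
def parse_port(value: str) -> int:
    port = int(value)
    if not 0 < port < 65536:  # previously untested branch
        raise ValueError(f"port out of range: {port}")
    return port

# New test added to cover the missed branch.
def test_parse_port_rejects_out_of_range():
    with pytest.raises(ValueError, match="out of range"):
        parse_port("70000")
```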
## Test Templates

### Python (pytest)

```python
import pytest
from unittest.mock import Mock, patch

from src.module import function_under_test


class TestFunctionUnderTest:
    """Tests for function_under_test."""

    def test_with_valid_input_returns_expected(self):
        """Test that valid input produces expected output."""
        result = function_under_test("valid_input")
        assert result == "expected_output"

    def test_with_empty_input_returns_default(self):
        """Test that empty input returns default value."""
        result = function_under_test("")
        assert result == "default"

    def test_with_invalid_input_raises_error(self):
        """Test that invalid input raises ValueError."""
        with pytest.raises(ValueError, match="Invalid input"):
            function_under_test(None)

    @pytest.mark.parametrize("input_val,expected", [
        ("a", "result_a"),
        ("b", "result_b"),
        ("c", "result_c"),
    ])
    def test_parametrized_inputs(self, input_val, expected):
        """Test multiple input variations."""
        assert function_under_test(input_val) == expected

    @patch("src.module.external_service")
    def test_with_mocked_dependency(self, mock_service):
        """Test with mocked external dependency."""
        mock_service.call.return_value = "mocked_result"
        result = function_under_test("input")
        assert result == "expected_with_mock"
        mock_service.call.assert_called_once_with("input")
```
### TypeScript (Vitest)

```typescript
import { describe, it, expect, vi, beforeEach } from 'vitest';
import { functionUnderTest } from './module';

// vi.mock is hoisted, so declare module mocks at the top level,
// not inside individual tests.
vi.mock('./external-service', () => ({
  call: vi.fn().mockResolvedValue('mocked_result'),
}));

describe('functionUnderTest', () => {
  beforeEach(() => {
    vi.clearAllMocks();
  });

  it('should return expected output for valid input', () => {
    const result = functionUnderTest('valid_input');
    expect(result).toBe('expected_output');
  });

  it('should return default for empty input', () => {
    const result = functionUnderTest('');
    expect(result).toBe('default');
  });

  it('should throw error for invalid input', () => {
    expect(() => functionUnderTest(null)).toThrow('Invalid input');
  });

  it.each([
    ['a', 'result_a'],
    ['b', 'result_b'],
    ['c', 'result_c'],
  ])('should handle input %s correctly', (input, expected) => {
    expect(functionUnderTest(input)).toBe(expected);
  });

  it('should work with mocked dependency', async () => {
    const result = await functionUnderTest('input');
    expect(result).toBe('expected_with_mock');
  });
});
```
### React Component (Vitest + Testing Library)

```tsx
import { describe, it, expect, vi } from 'vitest';
import { render, screen, fireEvent, waitFor } from '@testing-library/react';
import { UserForm } from './UserForm';

describe('UserForm', () => {
  it('should render form fields', () => {
    render(<UserForm onSubmit={vi.fn()} />);
    expect(screen.getByLabelText(/name/i)).toBeInTheDocument();
    expect(screen.getByLabelText(/email/i)).toBeInTheDocument();
    expect(screen.getByRole('button', { name: /submit/i })).toBeInTheDocument();
  });

  it('should call onSubmit with form data', async () => {
    const onSubmit = vi.fn();
    render(<UserForm onSubmit={onSubmit} />);

    fireEvent.change(screen.getByLabelText(/name/i), {
      target: { value: 'John' },
    });
    fireEvent.change(screen.getByLabelText(/email/i), {
      target: { value: 'john@example.com' },
    });
    fireEvent.click(screen.getByRole('button', { name: /submit/i }));

    await waitFor(() => {
      expect(onSubmit).toHaveBeenCalledWith({
        name: 'John',
        email: 'john@example.com',
      });
    });
  });

  it('should show validation error for invalid email', async () => {
    render(<UserForm onSubmit={vi.fn()} />);
    fireEvent.change(screen.getByLabelText(/email/i), {
      target: { value: 'invalid' },
    });
    fireEvent.blur(screen.getByLabelText(/email/i));
    expect(await screen.findByText(/invalid email/i)).toBeInTheDocument();
  });
});
```
## Tests Generated

### Target

`src/services/auth.ts` - AuthService class

### Tests Created

- `tests/services/auth.test.ts`

### Test Cases

1. `login_with_valid_credentials_returns_token` - Happy path
2. `login_with_invalid_password_throws_error` - Error case
3. `login_with_nonexistent_user_throws_error` - Error case
4. `refresh_token_with_valid_token_returns_new_token` - Happy path
5. `refresh_token_with_expired_token_throws_error` - Edge case
6. `logout_invalidates_session` - Happy path

### Coverage Impact

- Before: 65%
- After: 88%
- New lines covered: 42

### Run Tests

```bash
pnpm test tests/services/auth.test.ts
```

### Notes

- Mocked database calls using `vi.mock`
- Added edge case for token expiration
- Consider adding integration tests for full auth flow
## Flags
| Flag | Description | Example |
|------|-------------|---------|
| `--coverage` | Generate coverage-focused tests | `--coverage` |
| `--type=[type]` | Test type to generate | `--type=integration` |
| `--format=[fmt]` | Output format (concise/detailed) | `--format=concise` |
| `--framework=[fw]` | Specify test framework | `--framework=vitest` |
| `--tdd` | Generate TDD-style with failing tests first | `--tdd` |
| `--edge-cases` | Focus on edge case coverage | `--edge-cases` |
### Flag Usage Examples

```bash
/test --coverage src/services/
/test --type=integration src/api/users.ts
/test --tdd src/utils/validator.ts
/test --edge-cases --framework=pytest src/models/user.py
```
### Test Types

| Type | Description |
|------|-------------|
| `unit` | Isolated function tests (default) |
| `integration` | Multi-component tests |
| `e2e` | End-to-end workflow tests |
| `snapshot` | Snapshot tests for UI |
| `property` | Property-based testing |
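Property-based tests assert invariants over generated inputs rather than fixed examples. A minimal sketch using the `hypothesis` library (an assumed choice; the `clamp` function is hypothetical):

```python
from hypothesis import given, strategies as st

def clamp(value: int, low: int, high: int) -> int:
    """Hypothetical function under test."""
    return max(low, min(high, value))

@given(st.integers(), st.integers(), st.integers())
def test_clamp_stays_within_bounds(value, low, high):
    # Normalize so the bounds are ordered, then assert the invariant
    # holds for every generated input.
    if low > high:
        low, high = high, low
    assert low <= clamp(value, low, high) <= high
```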
### Supported Frameworks

| Framework | Language |
|-----------|----------|
| `pytest` | Python |
| `vitest` | TypeScript/JavaScript |
| `jest` | JavaScript |
| `playwright` | E2E (any) |
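To make the `unit` vs. `integration` distinction concrete, here is a sketch of an integration test that exercises two hypothetical components together against an in-memory SQLite database, so no external setup is required:

```python
import sqlite3

import pytest

class UserRepository:
    """Hypothetical data-access layer backed by SQLite."""

    def __init__(self, conn: sqlite3.Connection):
        self.conn = conn
        conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")

    def add(self, name: str) -> int:
        cur = self.conn.execute("INSERT INTO users (name) VALUES (?)", (name,))
        return cur.lastrowid

    def get(self, user_id: int):
        row = self.conn.execute(
            "SELECT name FROM users WHERE id = ?", (user_id,)
        ).fetchone()
        return row[0] if row else None

class UserService:
    """Hypothetical business-logic layer composing the repository."""

    def __init__(self, repo: UserRepository):
        self.repo = repo

    def register(self, name: str) -> int:
        if not name:
            raise ValueError("name required")
        return self.repo.add(name)

@pytest.fixture
def service():
    conn = sqlite3.connect(":memory:")
    yield UserService(UserRepository(conn))
    conn.close()

def test_register_persists_user(service):
    # Exercises service + repository + database together.
    user_id = service.register("Ada")
    assert service.repo.get(user_id) == "Ada"
```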

## MCP Integration

This command leverages MCP servers for enhanced testing:

### Browser tools - E2E Testing (Primary for UI)

For E2E and browser tests:
- Use Browser tools for cross-browser testing
- Automate user flow verification
- Capture screenshots for visual testing
- Test on different device emulations
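As one way to script such a flow, a sketch using Playwright's sync API for Python (the URL, selectors, and credentials are hypothetical):

```python
from playwright.sync_api import sync_playwright, expect

def test_login_flow():
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()

        # Drive the hypothetical login flow.
        page.goto("https://app.example.com/login")
        page.fill("#email", "user@example.com")
        page.fill("#password", "secret")
        page.click("button[type=submit]")

        # Assert the post-login state and capture a screenshot
        # for visual comparison.
        expect(page.get_by_role("heading", name="Dashboard")).to_be_visible()
        page.screenshot(path="dashboard.png")

        browser.close()
```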
For test file operations:

- Use `list_dir` to find test directories
- Use `grep_search` to find existing tests
- Use `view_file` to check test patterns
- Use `write_to_file` to create new test files

For framework-specific testing:

- Fetch current testing library docs
- Get latest assertions and matchers
- Find recommended testing patterns

To recall testing context:

- Remember project testing conventions
- Recall mock patterns used previously
- Store common test fixtures

## Customization

Modify behavior via `CLAUDE.md`:

- Preferred test framework
- Minimum coverage targets
- Test naming conventions
- Required test categories
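As an illustration, a hypothetical `CLAUDE.md` snippet covering these options (the wording is free-form instructions, not a fixed schema):

```markdown
## Testing Conventions

- Preferred framework: pytest (vitest for TypeScript packages)
- Minimum coverage target: 85% lines, 80% branches
- Test naming: test_<unit>_<condition>_<expected_result>
- Required categories: unit tests for every module, integration
  tests for public APIs, edge cases for all input validation
```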

Always identify remaining gaps and suggest next steps to the user. If no gaps remain, state explicitly that coverage is complete and no gaps are left.