Initial commit

Matt Pocock
2026-02-03 11:15:48 +00:00
commit 985d8fce76
10 changed files with 553 additions and 0 deletions
---
name: tdd
description: Test-driven development with red-green-refactor loop. Use when user wants to build features or fix bugs using TDD, mentions "red-green-refactor", wants integration tests, or asks for test-first development.
---
# Test-Driven Development
## Philosophy
**Core principle**: Tests should verify behavior through public interfaces, not implementation details. Code can change entirely; tests shouldn't.
**Good tests** are integration-style: they exercise real code paths through public APIs. They describe _what_ the system does, not _how_ it does it. A good test reads like a specification - "user can checkout with valid cart" tells you exactly what capability exists. These tests survive refactors because they don't care about internal structure.
**Bad tests** are coupled to implementation. They mock internal collaborators, test private methods, or verify through external means (like querying a database directly instead of using the interface). The warning sign: your test breaks when you refactor, but behavior hasn't changed. If you rename an internal function and tests fail, those tests were testing implementation, not behavior.
See [tests.md](tests.md) for examples and [mocking.md](mocking.md) for mocking guidelines.
## Workflow
### 1. Planning
Before writing any code:
- [ ] Confirm with user what interface changes are needed
- [ ] Confirm with user which behaviors to test (prioritize)
- [ ] Identify opportunities for [deep modules](deep-modules.md) (small interface, deep implementation)
- [ ] Design interfaces for [testability](interface-design.md)
- [ ] List the behaviors to test (not implementation steps)
- [ ] Get user approval on the plan
Ask: "What should the public interface look like? Which behaviors are most important to test?"
**You can't test everything.** Confirm with the user exactly which behaviors matter most. Focus testing effort on critical paths and complex logic, not every possible edge case.
### 2. Tracer Bullet
Write ONE test that confirms ONE thing about the system:
```
RED: Write test for first behavior → test fails
GREEN: Write minimal code to pass → test passes
```
This is your tracer bullet: it proves the path works end-to-end.
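A minimal sketch of one tracer-bullet cycle, using a hypothetical `createCart` API (all names are illustrative):

```typescript
// RED: write the first test before any implementation exists — it fails.
function testEmptyCartTotalsZero(): void {
  const cart = createCart();
  if (cart.total() !== 0) throw new Error("empty cart should total 0");
}

// GREEN: the minimal implementation that makes this one test pass.
function createCart() {
  return { total: () => 0 };
}

testEmptyCartTotalsZero();
```

Note that `total` is hard-coded to `0` — that is genuinely all this test demands. Later tests will force the real logic into existence.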
### 3. Incremental Loop
For each remaining behavior:
```
RED: Write next test → fails
GREEN: Minimal code to pass → passes
```
Rules:
- One test at a time
- Only enough code to pass current test
- Don't anticipate future tests
- Keep tests focused on observable behavior
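One way to picture a single RED→GREEN iteration, as a self-contained sketch of a hypothetical cart (names are illustrative):

```typescript
type Item = { price: number };

// RED: the next test demands a new behavior — added items change the total.
function testAddedItemsAreSummed(): void {
  const cart = createCart();
  cart.add({ price: 10 });
  cart.add({ price: 5 });
  if (cart.total() !== 15) throw new Error("total should sum item prices");
}

// GREEN: just enough code to pass — no quantities, no discounts,
// nothing anticipating future tests.
function createCart() {
  const items: Item[] = [];
  return {
    add: (item: Item) => { items.push(item); },
    total: () => items.reduce((sum, item) => sum + item.price, 0),
  };
}

testAddedItemsAreSummed();
```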
### 4. Refactor
After all tests pass, look for [refactor candidates](refactoring.md):
- [ ] Extract duplication
- [ ] Deepen modules (move complexity behind simple interfaces)
- [ ] Apply SOLID principles where natural
- [ ] Consider what new code reveals about existing code
- [ ] Run tests after each refactor step
**Never refactor while RED.** Get to GREEN first.
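As a sketch of the "extract duplication" step — hypothetical tax logic, where behavior is unchanged so existing tests keep passing:

```typescript
// Before: the same tax rule duplicated in two places.
function invoiceTotalBefore(net: number): number {
  return net + net * 0.2;
}
function quoteTotalBefore(net: number): number {
  return net + net * 0.2;
}

// After: the duplicated rule extracted behind one function.
function withTax(net: number, rate: number = 0.2): number {
  return net + net * rate;
}
const invoiceTotal = (net: number) => withTax(net);
const quoteTotal = (net: number) => withTax(net);
```

Because the public results are identical before and after, no test needs to change — which is exactly the signal that the tests were written against behavior.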
## Checklist Per Cycle
```
[ ] Test describes behavior, not implementation
[ ] Test uses public interface only
[ ] Test would survive internal refactor
[ ] Code is minimal for this test
[ ] No speculative features added
```
# Deep Modules
From "A Philosophy of Software Design":
**Deep module** = small interface + lots of implementation
```
┌─────────────────────┐
│ Small Interface │ ← Few methods, simple params
├─────────────────────┤
│ │
│ │
│ Deep Implementation│ ← Complex logic hidden
│ │
│ │
└─────────────────────┘
```
**Shallow module** = large interface + little implementation (avoid)
```
┌─────────────────────────────────┐
│ Large Interface │ ← Many methods, complex params
├─────────────────────────────────┤
│ Thin Implementation │ ← Just passes through
└─────────────────────────────────┘
```
When designing interfaces, ask:
- Can I reduce the number of methods?
- Can I simplify the parameters?
- Can I hide more complexity inside?
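A sketch of what a deep module can look like in code: a hypothetical `SessionStore` whose two-method interface hides all expiry bookkeeping (names are illustrative):

```typescript
// Deep: small interface (set/get), expiry logic hidden inside.
class SessionStore {
  private sessions = new Map<string, { user: string; expiresAt: number }>();

  set(id: string, user: string, ttlMs: number, now: number): void {
    this.sessions.set(id, { user, expiresAt: now + ttlMs });
  }

  get(id: string, now: number): string | undefined {
    const session = this.sessions.get(id);
    if (!session || session.expiresAt <= now) {
      this.sessions.delete(id); // expired entries cleaned up transparently
      return undefined;
    }
    return session.user;
  }
}
```

A shallow version would expose the `Map` directly, forcing every caller to re-implement the expiry check.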
# Interface Design for Testability
Good interfaces make testing natural:
1. **Accept dependencies, don't create them**
```typescript
// Testable
function processOrder(order, paymentGateway) {}
// Hard to test
function processOrder(order) {
const gateway = new StripeGateway();
}
```
2. **Return results, don't produce side effects**
```typescript
// Testable
function calculateDiscount(cart): Discount {}
// Hard to test: mutates its input and returns nothing
function applyDiscount(cart, discount): void {
  cart.total -= discount.amount;
}
```
3. **Small surface area**
- Fewer methods = fewer tests needed
- Fewer params = simpler test setup
# When to Mock
Mock at **system boundaries** only:
- External APIs (payment, email, etc.)
- Databases (sometimes - prefer test DB)
- Time/randomness
- File system (sometimes)
Don't mock:
- Your own classes/modules
- Internal collaborators
- Anything you control
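For the time boundary specifically, injecting a clock function makes the "mock" trivial — a sketch with hypothetical names:

```typescript
type Clock = () => number; // returns a timestamp in ms

// Production code takes the clock as a dependency, defaulting to the real one.
function isExpired(expiresAt: number, now: Clock = Date.now): boolean {
  return now() >= expiresAt;
}

// In tests, the mock is just a frozen clock — no mocking library needed.
const frozenClock: Clock = () => 1_000;
```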
## Designing for Mockability
At system boundaries, design interfaces that are easy to mock:
**1. Use dependency injection**
Pass external dependencies in rather than creating them internally:
```typescript
// Easy to mock
function processPayment(order, paymentClient) {
return paymentClient.charge(order.total);
}
// Hard to mock
function processPayment(order) {
const client = new StripeClient(process.env.STRIPE_KEY);
return client.charge(order.total);
}
```
**2. Prefer SDK-style interfaces over generic fetchers**
Create specific functions for each external operation instead of one generic function with conditional logic:
```typescript
// GOOD: Each function is independently mockable
const api = {
getUser: (id) => fetch(`/users/${id}`),
getOrders: (userId) => fetch(`/users/${userId}/orders`),
createOrder: (data) => fetch('/orders', { method: 'POST', body: data }),
};
// BAD: Mocking requires conditional logic inside the mock
const api = {
fetch: (endpoint, options) => fetch(endpoint, options),
};
```
The SDK approach means:
- Each mock returns one specific shape
- No conditional logic in test setup
- Easier to see which endpoints a test exercises
- Type safety per endpoint
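In a test, mocking the SDK shape is then just an object literal — a sketch with hypothetical types:

```typescript
type Api = {
  getUser: (id: string) => Promise<{ id: string; name: string }>;
};

async function greet(id: string, api: Api): Promise<string> {
  const user = await api.getUser(id);
  return `Hello, ${user.name}`;
}

// Each mock returns one specific shape; no conditionals, no spy setup.
const mockApi: Api = {
  getUser: async (id) => ({ id, name: "Alice" }),
};
```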
# Refactor Candidates
After TDD cycle, look for:
- **Duplication** → Extract function/class
- **Long methods** → Break into private helpers (keep tests on public interface)
- **Shallow modules** → Combine or deepen
- **Feature envy** → Move logic to where data lives
- **Primitive obsession** → Introduce value objects
- **Existing code** the new code reveals as problematic
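For "primitive obsession", one common move is a small value object — a hypothetical `Money` type replacing bare numbers:

```typescript
// Value object: the currency travels with the amount, so it can't be mixed up.
class Money {
  constructor(readonly cents: number, readonly currency: string) {}

  add(other: Money): Money {
    if (other.currency !== this.currency) {
      throw new Error("cannot add different currencies");
    }
    return new Money(this.cents + other.cents, this.currency);
  }
}
```

Because callers only touch the public `add` method, this refactor leaves behavior-focused tests untouched.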
# Good and Bad Tests
## Good Tests
**Integration-style**: Test through real interfaces, not mocks of internal parts.
```typescript
// GOOD: Tests observable behavior
test("user can checkout with valid cart", async () => {
const cart = createCart();
cart.add(product);
const result = await checkout(cart, paymentMethod);
expect(result.status).toBe("confirmed");
});
```
Characteristics:
- Tests behavior users/callers care about
- Uses public API only
- Survives internal refactors
- Describes WHAT, not HOW
- One logical assertion per test
## Bad Tests
**Implementation-detail tests**: Coupled to internal structure.
```typescript
// BAD: Tests implementation details
test("checkout calls paymentService.process", async () => {
  const processSpy = jest.spyOn(paymentService, "process");
  await checkout(cart, payment);
  expect(processSpy).toHaveBeenCalledWith(cart.total);
});
```
Red flags:
- Mocking internal collaborators
- Testing private methods
- Asserting on call counts/order
- Test breaks when refactoring without behavior change
- Test name describes HOW not WHAT
- Verifying through external means instead of interface
```typescript
// BAD: Bypasses interface to verify
test("createUser saves to database", async () => {
await createUser({ name: "Alice" });
const row = await db.query("SELECT * FROM users WHERE name = ?", ["Alice"]);
expect(row).toBeDefined();
});
// GOOD: Verifies through interface
test("createUser makes user retrievable", async () => {
const user = await createUser({ name: "Alice" });
const retrieved = await getUser(user.id);
expect(retrieved.name).toBe("Alice");
});
```