| name | description |
|---|---|
| diagnose | Disciplined diagnosis loop for hard bugs and performance regressions. Reproduce → minimise → hypothesise → instrument → fix → regression-test. Use when user says "diagnose this" / "debug this", reports a bug, says something is broken/throwing/failing, or describes a performance regression. |
# Diagnose
A discipline for hard bugs. Skip phases only when explicitly justified.
## Phase 1 — Build a feedback loop
This is the skill. Everything else is mechanical. If you have a fast, deterministic, agent-runnable pass/fail signal for the bug, you will find the cause — bisection, hypothesis-testing, and instrumentation all just consume that signal. If you don't have one, no amount of staring at code will save you.
Spend disproportionate effort here. Be aggressive. Be creative. Refuse to give up.
### Ways to construct one — try them in roughly this order
- Failing test at whatever seam reaches the bug — unit, integration, e2e.
- Curl / HTTP script against a running dev server.
- CLI invocation with a fixture input, diffing stdout against a known-good snapshot.
- Headless browser script (Playwright / Puppeteer) — drives the UI, asserts on DOM/console/network.
- Replay a captured trace. Save a real network request / payload / event log to disk; replay it through the code path in isolation.
- Throwaway harness. Spin up a minimal subset of the system (one service, mocked deps) that exercises the bug code path with a single function call.
- Property / fuzz loop. If the bug is "sometimes wrong output", run 1000 random inputs and look for the failure mode.
- Bisection harness. If the bug appeared between two known states (commit, dataset, version), automate "boot at state X, check, repeat" so you can `git bisect run` it (see the sketch after this list).
- Differential loop. Run the same input through old-version vs new-version (or two configs) and diff outputs.
- HITL bash script. Last resort. If a human must click, drive them with `scripts/hitl-loop.template.sh` so the loop is still structured. Captured output feeds back to you.
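To make the bisection harness concrete, here is a minimal check script that `git bisect run` can drive. It is a sketch: `npm run build`, `dist/repro.js`, and the `tsx` runner are assumptions about the project, not part of the skill.

```ts
// bisect-check.ts: the exit code tells `git bisect run` what it found.
// Usage (tsx is one way to run TS directly; an assumption, not required):
//   git bisect run npx tsx bisect-check.ts
import { execSync } from "node:child_process";

try {
  // Rebuild at the current bisect commit. Exit 125 ("skip") if the build
  // itself is broken, so git bisect does not mislabel the commit.
  execSync("npm run build", { stdio: "ignore" });
} catch {
  process.exit(125);
}

try {
  // The probe: any deterministic command that fails iff the bug is present.
  // Here, a hypothetical repro script that exits non-zero on bad output.
  execSync("node dist/repro.js", { stdio: "ignore" });
  process.exit(0); // bug absent: good commit
} catch {
  process.exit(1); // bug present: bad commit
}
```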
Build the right feedback loop, and the bug is 90% fixed.
### Iterate on the loop itself
Treat the loop as a product. Once you have a loop, ask:
- Can I make it faster? (Cache setup, skip unrelated init, narrow the test scope.)
- Can I make the signal sharper? (Assert on the specific symptom, not "didn't crash".)
- Can I make it more deterministic? (Pin time, seed RNG, isolate filesystem, freeze network; see the sketch below.)
A 30-second flaky loop is barely better than no loop. A 2-second deterministic loop is a debugging superpower.
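As a sketch of the determinism point, the helper below freezes the clock and seeds `Math.random` around the loop body; `withFrozenEnv` and the mulberry32 PRNG are illustrative choices, not prescribed ones.

```ts
// deterministic-env.ts: pin the obvious sources of nondeterminism
// before running the loop body. Extend to filesystem/network as needed.
export function withFrozenEnv<T>(fn: () => T, seed = 42): T {
  const realNow = Date.now;
  const realRandom = Math.random;
  let s = seed >>> 0;
  Date.now = () => 1_700_000_000_000; // every run sees the same instant
  Math.random = () => {
    // mulberry32, a tiny seeded PRNG: repeatable across runs
    s = (s + 0x6d2b79f5) >>> 0;
    let t = s;
    t = Math.imul(t ^ (t >>> 15), t | 1);
    t ^= t + Math.imul(t ^ (t >>> 7), t | 61);
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
  try {
    return fn();
  } finally {
    Date.now = realNow; // always restore, even if fn throws
    Math.random = realRandom;
  }
}
```

Anything the loop touches that is still nondeterministic after this (scheduling, network timing) is a candidate for mocking or recording.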
### Non-deterministic bugs
The goal is not a clean repro but a higher reproduction rate. Loop the trigger 100×, parallelise, add stress, narrow timing windows, inject sleeps. A 50%-flake bug is debuggable; 1% is not — keep raising the rate until it's debuggable.
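A minimal sketch of measuring and raising that rate, assuming the trigger is callable as an async function (`trigger` is a placeholder):

```ts
// flake-rate.ts: hammer the trigger in parallel batches and report how
// often it reproduces. Parallelism also adds scheduler stress, which by
// itself often raises the rate for race-condition bugs.
async function flakeRate(
  trigger: () => Promise<void>, // placeholder: one attempt at the repro
  runs = 100,
  parallelism = 10,
): Promise<number> {
  let failures = 0;
  for (let i = 0; i < runs; i += parallelism) {
    const batch = Array.from({ length: Math.min(parallelism, runs - i) }, () =>
      trigger().then(
        () => 0,
        () => 1, // a rejection counts as one reproduction
      ),
    );
    failures += (await Promise.all(batch)).reduce((a, b) => a + b, 0);
  }
  return failures / runs; // 0.5 is debuggable; 0.01 is not
}
```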
### When you genuinely cannot build a loop
Stop and say so explicitly. List what you tried. Ask the user for: (a) access to whatever environment reproduces it, (b) a captured artifact (HAR file, log dump, core dump, screen recording with timestamps), or (c) permission to add temporary production instrumentation. Do not proceed to hypothesise without a loop.
Do not proceed to Phase 2 until you have a loop you believe in.
## Phase 2 — Reproduce
Run the loop. Watch the bug appear.
Confirm:
- The loop produces the failure mode the user described — not a different failure that happens to be nearby. Wrong bug = wrong fix.
- The failure is reproducible across multiple runs (or, for non-deterministic bugs, reproducible at a high enough rate to debug against).
- You have captured the exact symptom (error message, wrong output, slow timing) so later phases can verify the fix actually addresses it.
Do not proceed until you reproduce the bug.
## Phase 3 — Hypothesise
Generate 3–5 ranked hypotheses before testing any of them. Single-hypothesis generation anchors on the first plausible idea.
Each hypothesis must be falsifiable: state the prediction it makes.
Format: "If [X] is the cause, then [change A] will make the bug disappear / [change B] will make it worse." For example: "If the write race between the two workers is the cause, then serialising the workers will make the bug disappear / doubling the worker count will make it worse."
If you cannot state the prediction, the hypothesis is a vibe — discard or sharpen it.
Show the ranked list to the user before testing. They often have domain knowledge that re-ranks instantly ("we just deployed a change to #3"), or know hypotheses they've already ruled out. Cheap checkpoint, big time saver. Don't block on it — proceed with your ranking if the user is AFK.
## Phase 4 — Instrument
Each probe must map to a specific prediction from Phase 3. Change one variable at a time.
Tool preference:
- Debugger / REPL inspection if the env supports it. One breakpoint beats ten logs.
- Targeted logs at the boundaries that distinguish hypotheses.
- Never "log everything and grep".
Tag every debug log with a unique prefix, e.g. `[DEBUG-a4f2]`. Cleanup at the end becomes a single grep. Untagged logs survive; tagged logs die.
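A sketch of the convention; the tag value is arbitrary, generate a fresh one per session:

```ts
// debug-tag.ts: every probe goes through one helper with a unique tag,
// so cleanup is a single `grep -rn "DEBUG-a4f2"` and one sweep of deletions.
const TAG = "[DEBUG-a4f2]"; // fresh random suffix per debugging session

export function probe(label: string, value: unknown): void {
  console.error(TAG, label, JSON.stringify(value));
}
```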
Perf branch. For performance regressions, logs are usually the wrong tool. Instead: establish a baseline measurement (timing harness, `performance.now()`, profiler, query plan), then bisect. Measure first, fix second.
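A minimal baseline harness, assuming the hot path can be called in isolation (`hotPath` is a placeholder):

```ts
// perf-baseline.ts: establish a number before touching anything.
import { performance } from "node:perf_hooks";

function median(xs: number[]): number {
  const s = [...xs].sort((a, b) => a - b);
  return s[Math.floor(s.length / 2)];
}

export function baseline(hotPath: () => void, runs = 50): number {
  for (let i = 0; i < 5; i++) hotPath(); // warm-up: JIT, caches
  const samples: number[] = [];
  for (let i = 0; i < runs; i++) {
    const t0 = performance.now();
    hotPath();
    samples.push(performance.now() - t0);
  }
  return median(samples); // median resists GC-pause outliers
}
```

Record the number before the fix; the same harness verifies the fix afterwards.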
## Phase 5 — Fix + regression test
Write the regression test before the fix — but only if there is a correct seam for it.
A correct seam is one where the test exercises the real bug pattern as it occurs at the call site. If the only available seam is too shallow (single-caller test when the bug needs multiple callers, unit test that can't replicate the chain that triggered the bug), a regression test there gives false confidence.
If no correct seam exists, that itself is the finding. Note it. The codebase architecture is preventing the bug from being locked down. Flag this for the next phase.
If a correct seam exists:
- Turn the minimised repro into a failing test at that seam (sketched after this list).
- Watch it fail.
- Apply the fix.
- Watch it pass.
- Re-run the Phase 1 feedback loop against the original (un-minimised) scenario.
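A sketch of that shape using Node's built-in test runner; `mergeConfigs` and the colliding-key scenario are hypothetical stand-ins for the real seam and minimised repro:

```ts
// regression.test.ts: the minimised repro, frozen as a test at the seam
// where the bug actually occurred. Run with: node --test
import { test } from "node:test";
import assert from "node:assert/strict";
import { mergeConfigs } from "./config"; // hypothetical seam under test

test("merge keeps the later override when keys collide", () => {
  // Two callers writing the same key: the pattern that triggered the bug.
  const merged = mergeConfigs({ retries: 1 }, { retries: 5 });
  assert.equal(merged.retries, 5); // failed before the fix
});
```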
## Phase 6 — Cleanup + post-mortem
Required before declaring done:
- Original repro no longer reproduces (re-run the Phase 1 loop)
- Regression test passes (or absence of seam is documented)
- All `[DEBUG-...]` instrumentation removed (grep the prefix)
- Throwaway prototypes deleted (or moved to a clearly-marked debug location)
- The hypothesis that turned out correct is stated in the commit / PR message — so the next debugger learns from it
Then ask: what would have prevented this bug? If the answer involves architectural change (no good test seam, tangled callers, hidden coupling), hand off to the /improve-codebase-architecture skill with the specifics. Make the recommendation after the fix is in, not before — you have more information now than when you started.