33 Commits

Author SHA1 Message Date
Matt Pocock b7bf8cc3c1 Enhance link-skills.sh to check for symlink conflicts and provide user guidance 2026-04-28 12:37:23 +01:00
Matt Pocock d0592f4cfb Separate content skills from backlog backend
Decouples to-prd, to-issues, and triage from GitHub. Content skills now
own their templates and the conceptual state vocabulary; a new /github
backend skill translates (state, artifact) into gh calls.

- Rename github-triage → triage; remove all gh calls; emit handoff hints
- Create /github skill with publishPRD, publishIssues, applyTriageOutcome
- to-issues: strip gh issue create; AFK slices use AGENT-BRIEF format;
  states assigned (AFK→ready-for-agent, HITL→ready-for-human)
- to-prd: strip GitHub submit; default to ready-for-agent
- Delete triage-issue (its diagnosis half is covered by /diagnose; its
  triage half by the new /triage)
- Update plugin.json, top-level README, engineering README, deprecated README

Closes #80

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-28 12:16:40 +01:00
Matt Pocock f76e3ba4af Add DOMAIN-AWARENESS.md and update SKILL.md files to reference it for code exploration 2026-04-28 12:04:01 +01:00
Matt Pocock ecdadb4076 Refine README to improve clarity and conciseness in the explanation of agent communication challenges and solutions 2026-04-28 11:21:32 +01:00
Matt Pocock 20861cb0a1 Enhance README by adding links to skill documentation and clarifying the purpose of feedback loops in AI development 2026-04-28 11:20:15 +01:00
Matt Pocock c21cf6ec93 Fix grammatical errors and enhance clarity in README section on agent skills 2026-04-28 10:55:14 +01:00
Matt Pocock 1eed8a689b Enhance README with detailed explanations of common AI failure modes and solutions 2026-04-28 10:50:40 +01:00
Matt Pocock edd9893326 Remove installation commands for individual skills from README 2026-04-28 10:19:48 +01:00
Matt Pocock 3911642d96 Add quickstart setup instructions to README 2026-04-28 10:18:47 +01:00
Matt Pocock fb847c6ade Refactor skill references and documentation for grill-with-docs skill
- Updated skill reference from domain-model to grill-with-docs in plugin.json and README.md
- Added detailed descriptions and context for grill-with-docs in its SKILL.md
- Created ADR-FORMAT.md and CONTEXT-FORMAT.md for grill-with-docs to standardize decision recording
- Adjusted references in improve-codebase-architecture to align with new grill-with-docs structure
2026-04-28 10:17:37 +01:00
Matt Pocock 51384f4e70 Remove deprecated skills from plugin.json 2026-04-28 10:11:06 +01:00
Matt Pocock 71542f9d1c Update skill references in README files and add new skills to deprecated and personal sections 2026-04-28 09:44:54 +01:00
Matt Pocock 62f43a1817 Add new skills for TDD, issue management, PRD creation, and productivity tools
- Introduced TDD skills including deep modules, interface design, mocking, refactoring, and testing guidelines.
- Added skills for breaking plans into GitHub issues and creating PRDs from conversation context.
- Implemented productivity skills for scaffolding exercises, setting up pre-commit hooks, and managing notes in Obsidian.
- Created a caveman communication mode for concise technical responses and a grilling technique for thorough plan discussions.
- Developed a skill for writing new agent skills with structured templates and guidelines.
- Included git guardrails to prevent dangerous git commands and a migration guide for using @total-typescript/shoehorn in tests.
2026-04-28 09:42:34 +01:00
Matt Pocock 3e3ca9b9fa Add initial implementation of design-an-interface skill and linking script
Co-authored-by: Copilot <copilot@github.com>
2026-04-28 09:23:06 +01:00
Matt Pocock 383b6a06d5 Moved to ./skills directory 2026-04-28 08:00:37 +01:00
Matt Pocock e7f0b58a4b Added diagnose 2026-04-28 07:58:41 +01:00
Matt Pocock 90ea8eec03 Revise README.md to clarify the purpose of agent skills and promote newsletter subscription 2026-04-26 20:49:06 +01:00
Matt Pocock 77b06d1022 Update definition of 'Order' in CONTEXT-FORMAT.md for clarity and consistency 2026-04-26 20:47:17 +01:00
Matt Pocock 60aa99c023 Enhance documentation in DEEPENING.md and INTERFACE-DESIGN.md for clarity and consistency; introduce LANGUAGE.md for shared vocabulary in architectural discussions. 2026-04-24 20:26:50 +01:00
Matt Pocock 949472a791 Added DDD-awareness to improve-codebase-architecture 2026-04-23 11:25:36 +01:00
Test 1186cf6d42 Update terminology in SKILL.md: replace 'grilling session' with '/domain-model session' for consistency 2026-04-22 16:58:03 +01:00
Test c2225f9865 Refine description in SKILL.md for clarity and conciseness 2026-04-22 12:10:22 +01:00
Test 3e251ea792 Clarify guidance on CONTEXT.md updates: emphasize meaningful terms for domain experts and avoid coupling to implementation details. 2026-04-22 12:10:11 +01:00
Test 8868f54212 Rename skills for clarity: update 'write-a-prd' to 'to-prd' and 'prd-to-issues' to 'to-issues' in README.md 2026-04-17 13:11:41 +01:00
Test aaf3050857 Updated write-a-prd to to-prd 2026-04-17 13:10:02 +01:00
Test a77fa6e7a6 Remove prd-to-plan skill
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-17 13:02:21 +01:00
Test ab45d5e6e1 Added domain-model, updates to github-triage, and caveman 2026-04-17 10:36:26 +01:00
Test 1f25956a56 Add AI disclaimer requirement for GitHub issue comments 2026-04-15 06:17:41 +01:00
Matt Pocock 651eab033b Add agent brief and out-of-scope documentation for GitHub issue triage 2026-04-01 12:57:15 +01:00
Matt Pocock eebfb3c99a Refactor glossary tables and enhance example dialogue in Ubiquitous Language skill 2026-03-26 19:57:24 +00:00
Matt Pocock 98fecc7619 Merge branch 'main' of https://github.com/mattpocock/skills 2026-03-26 19:57:17 +00:00
Matt Pocock a6bdfd9fed Add instruction to ask questions one at a time 2026-03-26 14:28:04 +00:00
Matt Pocock 6a87ed07b0 Add issue breakdown with blocking relationships to QA skill
Inspired by prd-to-issues, the QA skill now assesses whether a report
should be filed as a single issue or broken into multiple issues with
blocking relationships to maximize parallelism.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-25 20:08:57 +00:00
51 changed files with 1712 additions and 551 deletions
+17
@@ -0,0 +1,17 @@
{
"name": "mattpocock-skills",
"skills": [
"./skills/engineering/diagnose",
"./skills/engineering/github",
"./skills/engineering/grill-with-docs",
"./skills/engineering/improve-codebase-architecture",
"./skills/engineering/triage",
"./skills/engineering/tdd",
"./skills/engineering/to-issues",
"./skills/engineering/to-prd",
"./skills/engineering/zoom-out",
"./skills/productivity/caveman",
"./skills/productivity/grill-me",
"./skills/productivity/write-a-skill"
]
}
+13
@@ -0,0 +1,13 @@
Skills are organized into bucket folders under `skills/`:
- `engineering/` — daily code work
- `productivity/` — daily non-code workflow tools
- `misc/` — kept around but rarely used
- `personal/` — tied to my own setup, not promoted
- `deprecated/` — no longer used
Every skill in `engineering/`, `productivity/`, or `misc/` must have a reference in the top-level `README.md` and an entry in `.claude-plugin/plugin.json`. Skills in `personal/` and `deprecated/` must not appear in either.
Each skill entry in the top-level `README.md` must link the skill name to its `SKILL.md`.
Each bucket folder has a `README.md` that lists every skill in the bucket with a one-line description, with the skill name linked to its `SKILL.md`.
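These invariants are mechanically checkable. As a hedged sketch in TypeScript (the `checkBuckets` helper and its shape are hypothetical, not part of this repo):

```typescript
// Buckets whose skills must appear in .claude-plugin/plugin.json;
// skills in personal/ and deprecated/ must NOT appear there.
const PROMOTED = ["engineering", "productivity", "misc"];

// Given the skill paths from plugin.json and the skill directories
// found on disk (as "bucket/name"), report violations of the rules above.
function checkBuckets(pluginSkills: string[], skillDirs: string[]): string[] {
  // Normalize "./skills/engineering/tdd" -> "engineering/tdd"
  const listed = new Set(
    pluginSkills.map((p) => p.replace(/^\.\/skills\//, ""))
  );
  const violations: string[] = [];
  for (const dir of skillDirs) {
    const bucket = dir.split("/")[0];
    if (PROMOTED.includes(bucket) && !listed.has(dir)) {
      violations.push(`${dir} missing from plugin.json`);
    }
    if (!PROMOTED.includes(bucket) && listed.has(dir)) {
      violations.push(`${dir} must not appear in plugin.json`);
    }
  }
  return violations;
}
```

Wired into a pre-commit hook (for instance via the setup-pre-commit skill), a check like this would catch a promoted skill that is missing from `plugin.json`, or a deprecated one that lingers there.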
+116 -76
@@ -1,117 +1,157 @@
-# Agent Skills
+# Agent Skills For Real Engineers
-A collection of agent skills that extend capabilities across planning, development, and tooling.
+My agent skills that I use every day to do real engineering - not vibe coding.
-## Planning & Design
+Developing real applications is hard. Approaches like GSD, BMAD, and Spec-Kit try to help by owning the process. But while doing so, they take away your control and make bugs in the process hard to resolve.
-These skills help you think through problems before writing code.
+These skills are designed to be small, easy to adapt, and composable. They work with any model. They're based on decades of engineering experience. Hack around with them. Make them your own. Enjoy.
-- **write-a-prd** — Create a PRD through an interactive interview, codebase exploration, and module design. Filed as a GitHub issue.
+If you want to keep up with changes to these skills, and any new ones I create, you can join ~60,000 other devs on my newsletter:
-```
-npx skills@latest add mattpocock/skills/write-a-prd
-```
+[Sign Up To The Newsletter](https://www.aihero.dev/s/skills-newsletter)
-- **prd-to-plan** — Turn a PRD into a multi-phase implementation plan using tracer-bullet vertical slices.
+## Quickstart (30-second setup)
-```
-npx skills@latest add mattpocock/skills/prd-to-plan
-```
+1. Run the skills.sh installer:
-- **prd-to-issues** — Break a PRD into independently-grabbable GitHub issues using vertical slices.
+```bash
+npx skills@latest add mattpocock/skills
+```
-```
-npx skills@latest add mattpocock/skills/prd-to-issues
-```
+2. Pick the skills you want, and which coding agents you want to install them on.
-- **grill-me** — Get relentlessly interviewed about a plan or design until every branch of the decision tree is resolved.
+3. Bam - you're ready to go.
-```
-npx skills@latest add mattpocock/skills/grill-me
-```
+## Why These Skills Exist
-- **design-an-interface** — Generate multiple radically different interface designs for a module using parallel sub-agents.
+I built these skills as a way to fix common failure modes I see with Claude Code, Codex, and other coding agents.
-```
-npx skills@latest add mattpocock/skills/design-an-interface
-```
+### #1: The Agent Didn't Do What I Want
-- **request-refactor-plan** — Create a detailed refactor plan with tiny commits via user interview, then file it as a GitHub issue.
+> "No-one knows exactly what they want"
+>
+> David Thomas & Andrew Hunt, [The Pragmatic Programmer](https://www.amazon.co.uk/Pragmatic-Programmer-Anniversary-Journey-Mastery/dp/B0833F1T3V)
-```
-npx skills@latest add mattpocock/skills/request-refactor-plan
-```
+**The Problem**: The most common failure mode in software development is misalignment. You think the dev knows what you want. Then you see what they've built - and you realize they didn't understand you at all.
-## Development
+It's just the same in the AI age. There is a communication gap between you and the agent. The fix for this is a **grilling session** - getting the agent to ask you detailed questions about what you're building.
-These skills help you write, refactor, and fix code.
+**The Fix** is to use:
-- **tdd** — Test-driven development with a red-green-refactor loop. Builds features or fixes bugs one vertical slice at a time.
+- [`/grill-me`](./skills/productivity/grill-me/SKILL.md) - for non-code uses
+- [`/grill-with-docs`](./skills/engineering/grill-with-docs/SKILL.md) - same as [`/grill-me`](./skills/productivity/grill-me/SKILL.md), but adds more goodies (see below)
-```
-npx skills@latest add mattpocock/skills/tdd
-```
+These are my most popular skills. They help you align with the agent before you get started, and think deeply about the change you're making. Use them _every_ time you want to make a change.
-- **triage-issue** — Investigate a bug by exploring the codebase, identify the root cause, and file a GitHub issue with a TDD-based fix plan.
+### #2: The Agent Is Way Too Verbose
-```
-npx skills@latest add mattpocock/skills/triage-issue
-```
+> With a ubiquitous language, conversations among developers and expressions of the code are all derived from the same domain model.
+>
+> Eric Evans, [Domain-Driven Design](https://www.amazon.co.uk/Domain-Driven-Design-Tackling-Complexity-Software/dp/0321125215)
-- **improve-codebase-architecture** — Explore a codebase for architectural improvement opportunities, focusing on deepening shallow modules and improving testability.
+**The Problem**: At the start of a project, devs and the people they're building the software for (the domain experts) are usually speaking different languages.
-```
-npx skills@latest add mattpocock/skills/improve-codebase-architecture
-```
+I felt the same tension with my agents. Agents are usually dropped into a project and asked to figure out the jargon as they go. So they use 20 words where 1 will do.
-- **migrate-to-shoehorn** — Migrate test files from `as` type assertions to @total-typescript/shoehorn.
+**The Fix** for this is a shared language. It's a document that helps agents decode the jargon used in the project.
-```
-npx skills@latest add mattpocock/skills/migrate-to-shoehorn
-```
+<details>
+<summary>
+Example
+</summary>
-- **scaffold-exercises** — Create exercise directory structures with sections, problems, solutions, and explainers.
+Here's an example [`CONTEXT.md`](https://github.com/mattpocock/course-video-manager/blob/076a5a7a182db0fe1e62971dd7a68bcadf010f1c/CONTEXT.md), from my `course-video-manager` repo. Which of these is easier to read?
-```
-npx skills@latest add mattpocock/skills/scaffold-exercises
-```
+- **BEFORE**: "There's a problem when a lesson inside a section of a course is made 'real' (i.e. given a spot in the file system)"
+- **AFTER**: "There's a problem with the materialization cascade"
-## Tooling & Setup
+This concision pays off session after session.
-- **setup-pre-commit** — Set up Husky pre-commit hooks with lint-staged, Prettier, type checking, and tests.
+</details>
-```
-npx skills@latest add mattpocock/skills/setup-pre-commit
-```
+This is built into [`/grill-with-docs`](./skills/engineering/grill-with-docs/SKILL.md). It's a grilling session that also helps you build a shared language with the AI and document hard-to-explain decisions in ADRs.
-- **git-guardrails-claude-code** — Set up Claude Code hooks to block dangerous git commands (push, reset --hard, clean, etc.) before they execute.
+It's hard to explain how powerful this is. It might be the single coolest technique in this repo. Try it, and see.
-```
-npx skills@latest add mattpocock/skills/git-guardrails-claude-code
-```
+> [!TIP]
+> A shared language has many benefits beyond reducing verbosity:
+>
+> - **Variables, functions and files are named consistently**, using the shared language
+> - As a result, the **codebase is easier to navigate** for the agent
+> - The agent also **spends fewer tokens on thinking**, because it has access to a more concise language
-## Writing & Knowledge
+### #3: The Code Doesn't Work
-- **write-a-skill** — Create new skills with proper structure, progressive disclosure, and bundled resources.
+> "Always take small, deliberate steps. The rate of feedback is your speed limit. Never take on a task that's too big."
+>
+> David Thomas & Andrew Hunt, [The Pragmatic Programmer](https://www.amazon.co.uk/Pragmatic-Programmer-Anniversary-Journey-Mastery/dp/B0833F1T3V)
-```
-npx skills@latest add mattpocock/skills/write-a-skill
-```
+**The Problem**: Let's say that you and the agent are aligned on what to build. What happens when the agent _still_ produces crap?
-- **edit-article** — Edit and improve articles by restructuring sections, improving clarity, and tightening prose.
+It's time to look at your feedback loops. Without feedback on how the code it produces actually runs, the agent will be flying blind.
-```
-npx skills@latest add mattpocock/skills/edit-article
-```
+**The Fix**: You need the usual tranche of feedback loops: static types, browser access, and automated tests.
-- **ubiquitous-language** — Extract a DDD-style ubiquitous language glossary from the current conversation.
+For automated tests, a red-green-refactor loop is critical. This is where the agent writes a failing test first, then makes it pass. This gives the agent a consistent feedback signal that results in far better code.
-```
-npx skills@latest add mattpocock/skills/ubiquitous-language
-```
+I've built a **[`/tdd`](./skills/engineering/tdd/SKILL.md) skill** you can slot into any project. It encourages red-green-refactor and gives the agent plenty of guidance on what makes good and bad tests.
-- **obsidian-vault** — Search, create, and manage notes in an Obsidian vault with wikilinks and index notes.
+For debugging, I've also built a **[`/diagnose`](./skills/engineering/diagnose/SKILL.md)** skill that wraps best debugging practices into a simple loop.
-```
-npx skills@latest add mattpocock/skills/obsidian-vault
-```
+### #4: We Built A Ball Of Mud
+> "Invest in the design of the system _every day_."
+>
+> Kent Beck, [Extreme Programming Explained](https://www.amazon.co.uk/Extreme-Programming-Explained-Embrace-Change/dp/0321278658)
+> "The best modules are deep. They allow a lot of functionality to be accessed through a simple interface."
+>
+> John Ousterhout, [A Philosophy Of Software Design](https://www.amazon.co.uk/Philosophy-Software-Design-2nd/dp/173210221X)
+**The Problem**: Most apps built with agents are complex and hard to change. Because agents can radically speed up coding, they also accelerate software entropy. Codebases get more complex at an unprecedented rate.
+**The Fix** for this is a radical new approach to AI-powered development: caring about the design of the code.
+This is built into every layer of these skills:
+- [`/to-prd`](./skills/engineering/to-prd/SKILL.md) quizzes you about which modules you're touching before creating a PRD
+- [`/zoom-out`](./skills/engineering/zoom-out/SKILL.md) tells the agent to explain code in the context of the whole system
+And crucially, [`/improve-codebase-architecture`](./skills/engineering/improve-codebase-architecture/SKILL.md) helps you rescue a codebase that has become a ball of mud. I recommend running it on your codebase once every few days.
+### Summary
+Software engineering fundamentals matter more than ever. These skills are my best effort at condensing those fundamentals into repeatable practices, to help you ship the best apps of your career. Enjoy.
+## Reference
+### Engineering
+Skills I use daily for code work.
+- **[diagnose](./skills/engineering/diagnose/SKILL.md)** — Disciplined diagnosis loop for hard bugs and performance regressions: reproduce → minimise → hypothesise → instrument → fix → regression-test.
+- **[github](./skills/engineering/github/SKILL.md)** — GitHub backend for the engineering skills. Publishes PRDs and issues, applies triage outcomes via `gh`.
+- **[grill-with-docs](./skills/engineering/grill-with-docs/SKILL.md)** — Grilling session that challenges your plan against the existing domain model, sharpens terminology, and updates `CONTEXT.md` and ADRs inline.
+- **[improve-codebase-architecture](./skills/engineering/improve-codebase-architecture/SKILL.md)** — Find deepening opportunities in a codebase, informed by the domain language in `CONTEXT.md` and the decisions in `docs/adr/`.
+- **[tdd](./skills/engineering/tdd/SKILL.md)** — Test-driven development with a red-green-refactor loop. Builds features or fixes bugs one vertical slice at a time.
+- **[to-issues](./skills/engineering/to-issues/SKILL.md)** — Break any plan, spec, or PRD into independently-grabbable issues using vertical slices. Hands off to a backend skill (`/github`) to publish.
+- **[to-prd](./skills/engineering/to-prd/SKILL.md)** — Turn the current conversation context into a PRD. Hands off to a backend skill (`/github`) to publish. No interview — just synthesizes what you've already discussed.
+- **[triage](./skills/engineering/triage/SKILL.md)** — Triage issues through a label-based state machine. Backend-agnostic — pairs with `/github` to apply outcomes.
+- **[zoom-out](./skills/engineering/zoom-out/SKILL.md)** — Tell the agent to zoom out and give broader context or a higher-level perspective on an unfamiliar section of code.
+### Productivity
+General workflow tools, not code-specific.
+- **[caveman](./skills/productivity/caveman/SKILL.md)** — Ultra-compressed communication mode. Cuts token usage ~75% by dropping filler while keeping full technical accuracy.
+- **[grill-me](./skills/productivity/grill-me/SKILL.md)** — Get relentlessly interviewed about a plan or design until every branch of the decision tree is resolved.
+- **[write-a-skill](./skills/productivity/write-a-skill/SKILL.md)** — Create new skills with proper structure, progressive disclosure, and bundled resources.
+### Misc
+Tools I keep around but rarely use.
+- **[git-guardrails-claude-code](./skills/misc/git-guardrails-claude-code/SKILL.md)** — Set up Claude Code hooks to block dangerous git commands (push, reset --hard, clean, etc.) before they execute.
+- **[migrate-to-shoehorn](./skills/misc/migrate-to-shoehorn/SKILL.md)** — Migrate test files from `as` type assertions to @total-typescript/shoehorn.
+- **[scaffold-exercises](./skills/misc/scaffold-exercises/SKILL.md)** — Create exercise directory structures with sections, problems, solutions, and explainers.
+- **[setup-pre-commit](./skills/misc/setup-pre-commit/SKILL.md)** — Set up Husky pre-commit hooks with lint-staged, Prettier, type checking, and tests.
@@ -1,78 +0,0 @@
# Reference
## Dependency Categories
When assessing a candidate for deepening, classify its dependencies:
### 1. In-process
Pure computation, in-memory state, no I/O. Always deepenable — just merge the modules and test directly.
### 2. Local-substitutable
Dependencies that have local test stand-ins (e.g., PGLite for Postgres, in-memory filesystem). Deepenable if the test substitute exists. The deepened module is tested with the local stand-in running in the test suite.
### 3. Remote but owned (Ports & Adapters)
Your own services across a network boundary (microservices, internal APIs). Define a port (interface) at the module boundary. The deep module owns the logic; the transport is injected. Tests use an in-memory adapter. Production uses the real HTTP/gRPC/queue adapter.
Recommendation shape: "Define a shared interface (port), implement an HTTP adapter for production and an in-memory adapter for testing, so the logic can be tested as one deep module even though it's deployed across a network boundary."
### 4. True external (Mock)
Third-party services (Stripe, Twilio, etc.) you don't control. Mock at the boundary. The deepened module takes the external dependency as an injected port, and tests provide a mock implementation.
## Testing Strategy
The core principle: **replace, don't layer.**
- Old unit tests on shallow modules are waste once boundary tests exist — delete them
- Write new tests at the deepened module's interface boundary
- Tests assert on observable outcomes through the public interface, not internal state
- Tests should survive internal refactors — they describe behavior, not implementation
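To make categories 3 and 4 and the "replace, don't layer" principle concrete, here is a minimal TypeScript sketch. The `Notifier`, `OrderService`, and `InMemoryNotifier` names are hypothetical illustrations, not modules from this repo:

```typescript
// Port: the interface the deep module depends on (hypothetical example).
interface Notifier {
  send(to: string, message: string): Promise<void>;
}

// Deep module: owns the logic; the transport is injected through the port.
class OrderService {
  private shipped: string[] = [];

  constructor(private notifier: Notifier) {}

  async ship(orderId: string, customerEmail: string): Promise<void> {
    this.shipped.push(orderId);
    await this.notifier.send(customerEmail, `Order ${orderId} shipped`);
  }

  isShipped(orderId: string): boolean {
    return this.shipped.includes(orderId);
  }
}

// In-memory adapter for tests; production would inject an HTTP/SMTP adapter.
class InMemoryNotifier implements Notifier {
  sent: { to: string; message: string }[] = [];
  async send(to: string, message: string): Promise<void> {
    this.sent.push({ to, message });
  }
}

// Boundary test: asserts observable outcomes through the public interface,
// never internal state, so it survives internal refactors.
async function boundaryTest(): Promise<boolean> {
  const notifier = new InMemoryNotifier();
  const service = new OrderService(notifier);
  await service.ship("ord_1", "a@example.com");
  return service.isShipped("ord_1") && notifier.sent.length === 1;
}
```

Swapping the production adapter in behind the same `Notifier` port leaves the boundary test untouched, which is the point of testing at the interface.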
## Issue Template
<issue-template>
## Problem
Describe the architectural friction:
- Which modules are shallow and tightly coupled
- What integration risk exists in the seams between them
- Why this makes the codebase harder to navigate and maintain
## Proposed Interface
The chosen interface design:
- Interface signature (types, methods, params)
- Usage example showing how callers use it
- What complexity it hides internally
## Dependency Strategy
Which category applies and how dependencies are handled:
- **In-process**: merged directly
- **Local-substitutable**: tested with [specific stand-in]
- **Ports & adapters**: port definition, production adapter, test adapter
- **Mock**: mock boundary for external services
## Testing Strategy
- **New boundary tests to write**: describe the behaviors to verify at the interface
- **Old tests to delete**: list the shallow module tests that become redundant
- **Test environment needs**: any local stand-ins or adapters required
## Implementation Recommendations
Durable architectural guidance that is NOT coupled to current file paths:
- What the module should own (responsibilities)
- What it should hide (implementation details)
- What it should expose (the interface contract)
- How callers should migrate to the new interface
</issue-template>
-76
@@ -1,76 +0,0 @@
---
name: improve-codebase-architecture
description: Explore a codebase to find opportunities for architectural improvement, focusing on making the codebase more testable by deepening shallow modules. Use when user wants to improve architecture, find refactoring opportunities, consolidate tightly-coupled modules, or make a codebase more AI-navigable.
---
# Improve Codebase Architecture
Explore a codebase like an AI would, surface architectural friction, discover opportunities for improving testability, and propose module-deepening refactors as GitHub issue RFCs.
A **deep module** (John Ousterhout, "A Philosophy of Software Design") has a small interface hiding a large implementation. Deep modules are more testable, more AI-navigable, and let you test at the boundary instead of inside.
## Process
### 1. Explore the codebase
Use the Agent tool with subagent_type=Explore to navigate the codebase naturally. Do NOT follow rigid heuristics — explore organically and note where you experience friction:
- Where does understanding one concept require bouncing between many small files?
- Where are modules so shallow that the interface is nearly as complex as the implementation?
- Where have pure functions been extracted just for testability, but the real bugs hide in how they're called?
- Where do tightly-coupled modules create integration risk in the seams between them?
- Which parts of the codebase are untested, or hard to test?
The friction you encounter IS the signal.
### 2. Present candidates
Present a numbered list of deepening opportunities. For each candidate, show:
- **Cluster**: Which modules/concepts are involved
- **Why they're coupled**: Shared types, call patterns, co-ownership of a concept
- **Dependency category**: See [REFERENCE.md](REFERENCE.md) for the four categories
- **Test impact**: What existing tests would be replaced by boundary tests
Do NOT propose interfaces yet. Ask the user: "Which of these would you like to explore?"
### 3. User picks a candidate
### 4. Frame the problem space
Before spawning sub-agents, write a user-facing explanation of the problem space for the chosen candidate:
- The constraints any new interface would need to satisfy
- The dependencies it would need to rely on
- A rough illustrative code sketch to make the constraints concrete — this is not a proposal, just a way to ground the constraints
Show this to the user, then immediately proceed to Step 5. The user reads and thinks about the problem while the sub-agents work in parallel.
### 5. Design multiple interfaces
Spawn 3+ sub-agents in parallel using the Agent tool. Each must produce a **radically different** interface for the deepened module.
Prompt each sub-agent with a separate technical brief (file paths, coupling details, dependency category, what's being hidden). This brief is independent of the user-facing explanation in Step 4. Give each agent a different design constraint:
- Agent 1: "Minimize the interface — aim for 1-3 entry points max"
- Agent 2: "Maximize flexibility — support many use cases and extension"
- Agent 3: "Optimize for the most common caller — make the default case trivial"
- Agent 4 (if applicable): "Design around the ports & adapters pattern for cross-boundary dependencies"
Each sub-agent outputs:
1. Interface signature (types, methods, params)
2. Usage example showing how callers use it
3. What complexity it hides internally
4. Dependency strategy (how deps are handled — see [REFERENCE.md](REFERENCE.md))
5. Trade-offs
Present designs sequentially, then compare them in prose.
After comparing, give your own recommendation: which design you think is strongest and why. If elements from different designs would combine well, propose a hybrid. Be opinionated — the user wants a strong read, not just a menu.
### 6. User picks an interface (or accepts recommendation)
### 7. Create GitHub issue
Create a refactor RFC as a GitHub issue using `gh issue create`. Use the template in [REFERENCE.md](REFERENCE.md). Do NOT ask the user to review before creating — just create it and share the URL.
-88
@@ -1,88 +0,0 @@
---
name: prd-to-issues
description: Break a PRD into independently-grabbable GitHub issues using tracer-bullet vertical slices. Use when user wants to convert a PRD to issues, create implementation tickets, or break down a PRD into work items.
---
# PRD to Issues
Break a PRD into independently-grabbable GitHub issues using vertical slices (tracer bullets).
## Process
### 1. Locate the PRD
Ask the user for the PRD GitHub issue number (or URL).
If the PRD is not already in your context window, fetch it with `gh issue view <number>` (with comments).
### 2. Explore the codebase (optional)
If you have not already explored the codebase, do so to understand the current state of the code.
### 3. Draft vertical slices
Break the PRD into **tracer bullet** issues. Each issue is a thin vertical slice that cuts through ALL integration layers end-to-end, NOT a horizontal slice of one layer.
Slices may be 'HITL' (human-in-the-loop) or 'AFK' (away from keyboard). HITL slices require human interaction, such as an architectural decision or a design review. AFK slices can be implemented and merged without human interaction. Prefer AFK over HITL where possible.
<vertical-slice-rules>
- Each slice delivers a narrow but COMPLETE path through every layer (schema, API, UI, tests)
- A completed slice is demoable or verifiable on its own
- Prefer many thin slices over few thick ones
</vertical-slice-rules>
### 4. Quiz the user
Present the proposed breakdown as a numbered list. For each slice, show:
- **Title**: short descriptive name
- **Type**: HITL / AFK
- **Blocked by**: which other slices (if any) must complete first
- **User stories covered**: which user stories from the PRD this addresses
Ask the user:
- Does the granularity feel right? (too coarse / too fine)
- Are the dependency relationships correct?
- Should any slices be merged or split further?
- Are the correct slices marked as HITL and AFK?
Iterate until the user approves the breakdown.
### 5. Create the GitHub issues
For each approved slice, create a GitHub issue using `gh issue create`. Use the issue body template below.
Create issues in dependency order (blockers first) so you can reference real issue numbers in the "Blocked by" field.
<issue-template>
## Parent PRD
#<prd-issue-number>
## What to build
A concise description of this vertical slice. Describe the end-to-end behavior, not layer-by-layer implementation. Reference specific sections of the parent PRD rather than duplicating content.
## Acceptance criteria
- [ ] Criterion 1
- [ ] Criterion 2
- [ ] Criterion 3
## Blocked by
- Blocked by #<issue-number> (if any)
Or "None - can start immediately" if no blockers.
## User stories addressed
Reference by number from the parent PRD:
- User story 3
- User story 7
</issue-template>
Do NOT close or modify the parent PRD issue.
-107
@@ -1,107 +0,0 @@
---
name: prd-to-plan
description: Turn a PRD into a multi-phase implementation plan using tracer-bullet vertical slices, saved as a local Markdown file in ./plans/. Use when user wants to break down a PRD, create an implementation plan, plan phases from a PRD, or mentions "tracer bullets".
---
# PRD to Plan
Break a PRD into a phased implementation plan using vertical slices (tracer bullets). Output is a Markdown file in `./plans/`.
## Process
### 1. Confirm the PRD is in context
The PRD should already be in the conversation. If it isn't, ask the user to paste it or point you to the file.
### 2. Explore the codebase
If you have not already explored the codebase, do so to understand the current architecture, existing patterns, and integration layers.
### 3. Identify durable architectural decisions
Before slicing, identify high-level decisions that are unlikely to change throughout implementation:
- Route structures / URL patterns
- Database schema shape
- Key data models
- Authentication / authorization approach
- Third-party service boundaries
These go in the plan header so every phase can reference them.
### 4. Draft vertical slices
Break the PRD into **tracer bullet** phases. Each phase is a thin vertical slice that cuts through ALL integration layers end-to-end, NOT a horizontal slice of one layer.
<vertical-slice-rules>
- Each slice delivers a narrow but COMPLETE path through every layer (schema, API, UI, tests)
- A completed slice is demoable or verifiable on its own
- Prefer many thin slices over few thick ones
- Do NOT include specific file names, function names, or implementation details that are likely to change as later phases are built
- DO include durable decisions: route paths, schema shapes, data model names
</vertical-slice-rules>
### 5. Quiz the user
Present the proposed breakdown as a numbered list. For each phase show:
- **Title**: short descriptive name
- **User stories covered**: which user stories from the PRD this addresses
Ask the user:
- Does the granularity feel right? (too coarse / too fine)
- Should any phases be merged or split further?
Iterate until the user approves the breakdown.
### 6. Write the plan file
Create `./plans/` if it doesn't exist. Write the plan as a Markdown file named after the feature (e.g. `./plans/user-onboarding.md`). Use the template below.
<plan-template>
# Plan: <Feature Name>
> Source PRD: <brief identifier or link>
## Architectural decisions
Durable decisions that apply across all phases:
- **Routes**: ...
- **Schema**: ...
- **Key models**: ...
- (add/remove sections as appropriate)
---
## Phase 1: <Title>
**User stories**: <list from PRD>
### What to build
A concise description of this vertical slice. Describe the end-to-end behavior, not layer-by-layer implementation.
### Acceptance criteria
- [ ] Criterion 1
- [ ] Criterion 2
- [ ] Criterion 3
---
## Phase 2: <Title>
**User stories**: <list from PRD>
### What to build
...
### Acceptance criteria
- [ ] ...
<!-- Repeat for each phase -->
</plan-template>
@@ -0,0 +1,38 @@
#!/usr/bin/env bash
set -euo pipefail
# Links all skills in the repository to ~/.claude/skills, so that
# they can be used by the local Claude CLI.
REPO="$(cd "$(dirname "$0")/.." && pwd)"
DEST="$HOME/.claude/skills"
# If ~/.claude/skills is a symlink that resolves into this repo, we'd end up
# writing the per-skill symlinks back into the repo's own skills/ tree. Detect
# and bail out instead of polluting the working copy.
if [ -L "$DEST" ]; then
resolved="$(readlink -f "$DEST")"
case "$resolved" in
"$REPO"|"$REPO"/*)
echo "error: $DEST is a symlink into this repo ($resolved)." >&2
echo "Remove it (rm \"$DEST\") and re-run; the script will recreate it as a real dir." >&2
exit 1
;;
esac
fi
mkdir -p "$DEST"
find "$REPO/skills" -name SKILL.md -not -path '*/node_modules/*' -print0 |
while IFS= read -r -d '' skill_md; do
src="$(dirname "$skill_md")"
name="$(basename "$src")"
target="$DEST/$name"
if [ -e "$target" ] && [ ! -L "$target" ]; then
rm -rf "$target"
fi
ln -sfn "$src" "$target"
echo "linked $name -> $src"
done
@@ -0,0 +1,8 @@
# Deprecated
Skills I no longer use.
- **[design-an-interface](./design-an-interface/SKILL.md)** — Generate multiple radically different interface designs for a module using parallel sub-agents.
- **[qa](./qa/SKILL.md)** — Interactive QA session where user reports bugs conversationally and the agent files GitHub issues.
- **[request-refactor-plan](./request-refactor-plan/SKILL.md)** — Create a detailed refactor plan with tiny commits via user interview, then file it as a GitHub issue.
- **[ubiquitous-language](./ubiquitous-language/SKILL.md)** — Extract a DDD-style ubiquitous language glossary from the current conversation.
@@ -0,0 +1,130 @@
---
name: qa
description: Interactive QA session where user reports bugs or issues conversationally, and the agent files GitHub issues. Explores the codebase in the background for context and domain language. Use when user wants to report bugs, do QA, file issues conversationally, or mentions "QA session".
---
# QA Session
Run an interactive QA session. The user describes problems they're encountering. You clarify, explore the codebase for context, and file GitHub issues that are durable, user-focused, and use the project's domain language.
## For each issue the user raises
### 1. Listen and lightly clarify
Let the user describe the problem in their own words. Ask **at most 2-3 short clarifying questions** focused on:
- What they expected vs what actually happened
- Steps to reproduce (if not obvious)
- Whether it's consistent or intermittent
Do NOT over-interview. If the description is clear enough to file, move on.
### 2. Explore the codebase in the background
While talking to the user, kick off an Agent (subagent_type=Explore) in the background to understand the relevant area. The goal is NOT to find a fix — it's to:
- Learn the domain language used in that area (check UBIQUITOUS_LANGUAGE.md)
- Understand what the feature is supposed to do
- Identify the user-facing behavior boundary
This context helps you write a better issue — but the issue itself should NOT reference specific files, line numbers, or internal implementation details.
### 3. Assess scope: single issue or breakdown?
Before filing, decide whether this is a **single issue** or needs to be **broken down** into multiple issues.
Break down when:
- The fix spans multiple independent areas (e.g. "the form validation is wrong AND the success message is missing AND the redirect is broken")
- There are clearly separable concerns that different people could work on in parallel
- The user describes something that has multiple distinct failure modes or symptoms
Keep as a single issue when:
- It's one behavior that's wrong in one place
- The symptoms are all caused by the same root behavior
### 4. File the GitHub issue(s)
Create issues with `gh issue create`. Do NOT ask the user to review first — just file and share URLs.
Issues must be **durable** — they should still make sense after major refactors. Write from the user's perspective.
#### For a single issue
Use this template:
```
## What happened
[Describe the actual behavior the user experienced, in plain language]
## What I expected
[Describe the expected behavior]
## Steps to reproduce
1. [Concrete, numbered steps a developer can follow]
2. [Use domain terms from the codebase, not internal module names]
3. [Include relevant inputs, flags, or configuration]
## Additional context
[Any extra observations from the user or from codebase exploration that help frame the issue — e.g. "this only happens when using the Docker layer, not the filesystem layer" — use domain language but don't cite files]
```
#### For a breakdown (multiple issues)
Create issues in dependency order (blockers first) so you can reference real issue numbers.
Use this template for each sub-issue:
```
## Parent issue
#<parent-issue-number> (if you created a tracking issue) or "Reported during QA session"
## What's wrong
[Describe this specific behavior problem — just this slice, not the whole report]
## What I expected
[Expected behavior for this specific slice]
## Steps to reproduce
1. [Steps specific to THIS issue]
## Blocked by
- #<issue-number> (if this issue can't be fixed until another is resolved)
Or "None — can start immediately" if no blockers.
## Additional context
[Any extra observations relevant to this slice]
```
When creating a breakdown:
- **Prefer many thin issues over few thick ones** — each should be independently fixable and verifiable
- **Mark blocking relationships honestly** — if issue B genuinely can't be tested until issue A is fixed, say so. If they're independent, mark both as "None — can start immediately"
- **Create issues in dependency order** so you can reference real issue numbers in "Blocked by"
- **Maximize parallelism** — the goal is that multiple people (or agents) can grab different issues simultaneously
#### Rules for all issue bodies
- **No file paths or line numbers** — these go stale
- **Use the project's domain language** (check UBIQUITOUS_LANGUAGE.md if it exists)
- **Describe behaviors, not code** — "the sync service fails to apply the patch" not "applyPatch() throws on line 42"
- **Reproduction steps are mandatory** — if you can't determine them, ask the user
- **Keep it concise** — a developer should be able to read the issue in 30 seconds
After filing, print all issue URLs (with blocking relationships summarized) and ask: "Next issue, or are we done?"
### 5. Continue the session
Keep going until the user says they're done. Each issue is independent — don't batch them.
@@ -1,6 +1,7 @@
---
name: ubiquitous-language
description: Extract a DDD-style ubiquitous language glossary from the current conversation, flagging ambiguities and proposing canonical terms. Saves to UBIQUITOUS_LANGUAGE.md. Use when user wants to define domain terms, build a glossary, harden terminology, create a ubiquitous language, or mentions "domain model" or "DDD".
disable-model-invocation: true
---
# Ubiquitous Language
@@ -28,14 +29,14 @@ Write a `UBIQUITOUS_LANGUAGE.md` file with this structure:
## Order lifecycle
| Term | Definition | Aliases to avoid |
| ----------- | ------------------------------------------------------- | --------------------- |
| **Order** | A customer's request to purchase one or more items | Purchase, transaction |
| **Invoice** | A request for payment sent to a customer after delivery | Bill, payment request |
## People
| Term | Definition | Aliases to avoid |
| ------------ | ------------------------------------------- | ---------------------- |
| **Customer** | A person or organization that places orders | Client, buyer, account |
| **User** | An authentication identity in the system | Login, account |
@@ -60,12 +61,27 @@ Write a `UBIQUITOUS_LANGUAGE.md` file with this structure:
- **Be opinionated.** When multiple words exist for the same concept, pick the best one and list the others as aliases to avoid.
- **Flag conflicts explicitly.** If a term is used ambiguously in the conversation, call it out in the "Flagged ambiguities" section with a clear recommendation.
- **Only include terms relevant for domain experts.** Skip the names of modules or classes unless they have meaning in the domain language.
- **Keep definitions tight.** One sentence max. Define what it IS, not what it does.
- **Show relationships.** Use bold term names and express cardinality where obvious.
- **Only include domain terms.** Skip generic programming concepts (array, function, endpoint) unless they have domain-specific meaning.
- **Group terms into multiple tables** when natural clusters emerge (e.g. by subdomain, lifecycle, or actor). Each group gets its own heading and table. If all terms belong to a single cohesive domain, one table is fine — don't force groupings.
- **Write an example dialogue.** A short conversation (3-5 exchanges) between a dev and a domain expert that demonstrates how the terms interact naturally. The dialogue should clarify boundaries between related concepts and show terms being used precisely.
<example>
## Example dialogue
> **Dev:** "How do I test the **sync service** without Docker?"
> **Domain expert:** "Provide the **filesystem layer** instead of the **Docker layer**. It implements the same **Sandbox service** interface but uses a local directory as the **sandbox**."
> **Dev:** "So **sync-in** still creates a **bundle** and unpacks it?"
> **Domain expert:** "Exactly. The **sync service** doesn't know which layer it's talking to. It calls `exec` and `copyIn` — the **filesystem layer** just runs those as local shell commands."
</example>
## Re-running
When invoked again in the same conversation:
@@ -73,12 +89,5 @@ When invoked again in the same conversation:
1. Read the existing `UBIQUITOUS_LANGUAGE.md`
2. Incorporate any new terms from subsequent discussion
3. Update definitions if understanding has evolved
4. Mark changed entries with "(updated)" and new entries with "(new)"
5. Re-flag any new ambiguities
6. Rewrite the example dialogue to incorporate new terms
## Post-output instruction
After writing the file, state:
> I've written/updated `UBIQUITOUS_LANGUAGE.md`. From this point forward I will use these terms consistently. If I drift from this language or you notice a term that should be added, let me know.
@@ -0,0 +1,13 @@
# Engineering
Skills I use daily for code work.
- **[diagnose](./diagnose/SKILL.md)** — Disciplined diagnosis loop for hard bugs and performance regressions: reproduce → minimise → hypothesise → instrument → fix → regression-test.
- **[github](./github/SKILL.md)** — GitHub backend for the engineering skills. Publishes PRDs and issues, applies triage outcomes via `gh`.
- **[grill-with-docs](./grill-with-docs/SKILL.md)** — Grilling session that challenges your plan against the existing domain model, sharpens terminology, and updates `CONTEXT.md` and ADRs inline.
- **[improve-codebase-architecture](./improve-codebase-architecture/SKILL.md)** — Find deepening opportunities in a codebase, informed by the domain language in `CONTEXT.md` and the decisions in `docs/adr/`.
- **[tdd](./tdd/SKILL.md)** — Test-driven development with a red-green-refactor loop. Builds features or fixes bugs one vertical slice at a time.
- **[to-issues](./to-issues/SKILL.md)** — Break any plan, spec, or PRD into independently-grabbable issues using vertical slices. Hands off to a backend skill (`/github`) to publish.
- **[to-prd](./to-prd/SKILL.md)** — Turn the current conversation context into a PRD. Hands off to a backend skill (`/github`) to publish.
- **[triage](./triage/SKILL.md)** — Triage issues through a label-based state machine. Backend-agnostic — pairs with `/github` to apply outcomes.
- **[zoom-out](./zoom-out/SKILL.md)** — Tell the agent to zoom out and give broader context or a higher-level perspective on an unfamiliar section of code.
@@ -0,0 +1,117 @@
---
name: diagnose
description: Disciplined diagnosis loop for hard bugs and performance regressions. Reproduce → minimise → hypothesise → instrument → fix → regression-test. Use when user says "diagnose this" / "debug this", reports a bug, says something is broken/throwing/failing, or describes a performance regression.
---
# Diagnose
A discipline for hard bugs. Skip phases only when explicitly justified.
Before exploring the codebase, follow [../grill-with-docs/DOMAIN-AWARENESS.md](../grill-with-docs/DOMAIN-AWARENESS.md). Use the `CONTEXT.md` vocabulary to get a clear mental model of the relevant modules.
## Phase 1 — Build a feedback loop
**This is the skill.** Everything else is mechanical. If you have a fast, deterministic, agent-runnable pass/fail signal for the bug, you will find the cause — bisection, hypothesis-testing, and instrumentation all just consume that signal. If you don't have one, no amount of staring at code will save you.
Spend disproportionate effort here. **Be aggressive. Be creative. Refuse to give up.**
### Ways to construct one — try them in roughly this order
1. **Failing test** at whatever seam reaches the bug — unit, integration, e2e.
2. **Curl / HTTP script** against a running dev server.
3. **CLI invocation** with a fixture input, diffing stdout against a known-good snapshot.
4. **Headless browser script** (Playwright / Puppeteer) — drives the UI, asserts on DOM/console/network.
5. **Replay a captured trace.** Save a real network request / payload / event log to disk; replay it through the code path in isolation.
6. **Throwaway harness.** Spin up a minimal subset of the system (one service, mocked deps) that exercises the bug code path with a single function call.
7. **Property / fuzz loop.** If the bug is "sometimes wrong output", run 1000 random inputs and look for the failure mode.
8. **Bisection harness.** If the bug appeared between two known states (commit, dataset, version), automate "boot at state X, check, repeat" so you can `git bisect run` it.
9. **Differential loop.** Run the same input through old-version vs new-version (or two configs) and diff outputs.
10. **HITL bash script.** Last resort. If a human must click, drive _them_ with `scripts/hitl-loop.template.sh` so the loop is still structured. Captured output feeds back to you.
Build the right feedback loop, and the bug is 90% fixed.
### Iterate on the loop itself
Treat the loop as a product. Once you have _a_ loop, ask:
- Can I make it faster? (Cache setup, skip unrelated init, narrow the test scope.)
- Can I make the signal sharper? (Assert on the specific symptom, not "didn't crash".)
- Can I make it more deterministic? (Pin time, seed RNG, isolate filesystem, freeze network.)
A 30-second flaky loop is barely better than no loop. A 2-second deterministic loop is a debugging superpower.
### Non-deterministic bugs
The goal is not a clean repro but a **higher reproduction rate**. Loop the trigger 100×, parallelise, add stress, narrow timing windows, inject sleeps. A 50%-flake bug is debuggable; 1% is not — keep raising the rate until it's debuggable.
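The loop-and-measure idea can be sketched in a few lines of shell. Here `flaky_trigger` is a deterministic stand-in for the real repro step (assumed, not from the source); in practice it would invoke the actual flaky test or script:

```shell
#!/usr/bin/env bash
# Sketch: raise a flake's visibility by looping the trigger and measuring
# the failure rate. 'flaky_trigger' is a stand-in that "fails" every 3rd run.
i=0
flaky_trigger() { i=$((i + 1)); [ $((i % 3)) -ne 0 ]; }
runs=99 fails=0
for _ in $(seq "$runs"); do
  flaky_trigger || fails=$((fails + 1))
done
echo "failed $fails/$runs runs"
```

Once the rate is measured, each stress tweak (parallelism, injected sleeps) either raises or lowers that number — a signal you can optimise.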
### When you genuinely cannot build a loop
Stop and say so explicitly. List what you tried. Ask the user for: (a) access to whatever environment reproduces it, (b) a captured artifact (HAR file, log dump, core dump, screen recording with timestamps), or (c) permission to add temporary production instrumentation. Do **not** proceed to hypothesise without a loop.
Do not proceed to Phase 2 until you have a loop you believe in.
## Phase 2 — Reproduce
Run the loop. Watch the bug appear.
Confirm:
- [ ] The loop produces the failure mode the **user** described — not a different failure that happens to be nearby. Wrong bug = wrong fix.
- [ ] The failure is reproducible across multiple runs (or, for non-deterministic bugs, reproducible at a high enough rate to debug against).
- [ ] You have captured the exact symptom (error message, wrong output, slow timing) so later phases can verify the fix actually addresses it.
Do not proceed until you reproduce the bug.
## Phase 3 — Hypothesise
Generate **3-5 ranked hypotheses** before testing any of them. Single-hypothesis generation anchors on the first plausible idea.
Each hypothesis must be **falsifiable**: state the prediction it makes.
> Format: "If <X> is the cause, then <changing Y> will make the bug disappear / <changing Z> will make it worse."
If you cannot state the prediction, the hypothesis is a vibe — discard or sharpen it.
**Show the ranked list to the user before testing.** They often have domain knowledge that re-ranks instantly ("we just deployed a change to #3"), or know hypotheses they've already ruled out. Cheap checkpoint, big time saver. Don't block on it — proceed with your ranking if the user is AFK.
## Phase 4 — Instrument
Each probe must map to a specific prediction from Phase 3. **Change one variable at a time.**
Tool preference:
1. **Debugger / REPL inspection** if the env supports it. One breakpoint beats ten logs.
2. **Targeted logs** at the boundaries that distinguish hypotheses.
3. Never "log everything and grep".
**Tag every debug log** with a unique prefix, e.g. `[DEBUG-a4f2]`. Cleanup at the end becomes a single grep. Untagged logs survive; tagged logs die.
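A sketch of the cleanup grep over a throwaway directory — the tag, the probe file, and its contents are hypothetical:

```shell
#!/usr/bin/env bash
set -eu
# Sketch of tag-based cleanup; everything below is a throwaway example.
tag='DEBUG-a4f2'
work="$(mktemp -d)"
printf 'console.log("[%s] user =", user);\n' "$tag" > "$work/probe.js"
# One grep finds every file still carrying a probe from this session:
hits="$(grep -rl "$tag" "$work" | wc -l | tr -d ' ')"
echo "files still containing probes: $hits"
rm -rf "$work"
```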
**Perf branch.** For performance regressions, logs are usually wrong. Instead: establish a baseline measurement (timing harness, `performance.now()`, profiler, query plan), then bisect. Measure first, fix second.
## Phase 5 — Fix + regression test
Write the regression test **before the fix** — but only if there is a **correct seam** for it.
A correct seam is one where the test exercises the **real bug pattern** as it occurs at the call site. If the only available seam is too shallow (single-caller test when the bug needs multiple callers, unit test that can't replicate the chain that triggered the bug), a regression test there gives false confidence.
**If no correct seam exists, that itself is the finding.** Note it. The codebase architecture is preventing the bug from being locked down. Flag this for the next phase.
If a correct seam exists:
1. Turn the minimised repro into a failing test at that seam.
2. Watch it fail.
3. Apply the fix.
4. Watch it pass.
5. Re-run the Phase 1 feedback loop against the original (un-minimised) scenario.
## Phase 6 — Cleanup + post-mortem
Required before declaring done:
- [ ] Original repro no longer reproduces (re-run the Phase 1 loop)
- [ ] Regression test passes (or absence of seam is documented)
- [ ] All `[DEBUG-...]` instrumentation removed (`grep` the prefix)
- [ ] Throwaway prototypes deleted (or moved to a clearly-marked debug location)
- [ ] The hypothesis that turned out correct is stated in the commit / PR message — so the next debugger learns
**Then ask: what would have prevented this bug?** If the answer involves architectural change (no good test seam, tangled callers, hidden coupling), hand off to the `/improve-codebase-architecture` skill with the specifics. Make the recommendation **after** the fix is in, not before — you have more information now than when you started.
@@ -0,0 +1,41 @@
#!/usr/bin/env bash
# Human-in-the-loop reproduction loop.
# Copy this file, edit the steps below, and run it.
# The agent runs the script; the user follows prompts in their terminal.
#
# Usage:
# bash hitl-loop.template.sh
#
# Two helpers:
# step "<instruction>" → show instruction, wait for Enter
# capture VAR "<question>" → show question, read response into VAR
#
# At the end, captured values are printed as KEY=VALUE for the agent to parse.
set -euo pipefail
step() {
printf '\n>>> %s\n' "$1"
read -r -p " [Enter when done] " _
}
capture() {
local var="$1" question="$2" answer
printf '\n>>> %s\n' "$question"
read -r -p " > " answer
printf -v "$var" '%s' "$answer"
}
# --- edit below ---------------------------------------------------------
step "Open the app at http://localhost:3000 and sign in."
capture ERRORED "Click the 'Export' button. Did it throw an error? (y/n)"
capture ERROR_MSG "Paste the error message (or 'none'):"
# --- edit above ---------------------------------------------------------
printf '\n--- Captured ---\n'
printf 'ERRORED=%s\n' "$ERRORED"
printf 'ERROR_MSG=%s\n' "$ERROR_MSG"
@@ -0,0 +1,92 @@
---
name: github
description: GitHub backend for the engineering skills. Publishes PRDs and issues, applies triage outcomes via gh. Use when a content skill (`/to-prd`, `/to-issues`, `/triage`) has produced an artifact and the user wants to push it to GitHub.
---
# GitHub Backend
The GitHub backend for content skills in the engineering bucket. Reads an artifact + state from conversation context and translates it into `gh` calls.
This skill does **not** explore the codebase, generate templates, or own state vocabulary. Those concerns live in the content skills (`/to-prd`, `/to-issues`, `/triage`). This skill is pure integration — swap it for `/linear`, `/beads`, etc. by implementing the same three operations.
## Prerequisites
- `gh` is authenticated and the working directory is a git repo with a GitHub remote
- The repo has labels matching the state vocabulary defined in `/triage` (`bug`, `enhancement`, `needs-triage`, `needs-info`, `ready-for-agent`, `ready-for-human`, `wontfix`). If any are missing, create them with `gh label create` before publishing.
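A one-time setup sketch for the missing-label case — shown as a dry run (drop the leading `echo` to actually create them); the color is a placeholder:

```shell
#!/usr/bin/env bash
set -eu
# Sketch of one-time label setup; the color is an arbitrary placeholder.
labels='bug enhancement needs-triage needs-info ready-for-agent ready-for-human wontfix'
count=0
for label in $labels; do
  echo gh label create "$label" --color BFD4F2 --force
  count=$((count + 1))
done
echo "would create $count labels"
```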
## State → label mapping
The state vocabulary maps 1:1 to GitHub labels of the same name. Every issue gets exactly one category label (`bug` or `enhancement`) and one state label.
## AI Disclaimer
Every comment or issue body posted by this skill **must** start with:
```
> *This was generated by AI during triage.*
```
Prepend it to the body before calling `gh`.
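The prepend step can be sketched as follows — the comment body is hypothetical, and the final commented-out line shows where the real `gh` call would go:

```shell
#!/usr/bin/env bash
set -eu
# Sketch of disclaimer prepending; the comment body is a made-up example.
disclaimer='> *This was generated by AI during triage.*'
body='Repro confirmed on main; moving to ready-for-agent.'
full="${disclaimer}

${body}"
printf '%s\n' "$full"
# then: gh issue comment "$issue_id" --body "$full"
```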
## Operations
### `publishPRD(artifact)`
Input: a PRD body produced by `/to-prd` (already in the desired template) and a state (defaults to `ready-for-agent`).
Steps:
1. Verify the `enhancement` and state labels exist in the repo; create any missing ones with `gh label create`.
2. Prepend the AI disclaimer to the body.
3. Run `gh issue create --title "<derived from PRD>" --body "<body>" --label enhancement --label <state>`.
4. Report the new issue URL to the user.
### `publishIssues(artifacts[])`
Input: an ordered list of vertical-slice artifacts produced by `/to-issues`. Each artifact has:
- title
- body (already formatted — agent brief for AFK slices, summary for HITL)
- type (`AFK` or `HITL`) → state mapping: AFK → `ready-for-agent`, HITL → `ready-for-human`
- category (defaults to `enhancement` unless the parent issue was a bug)
- blocked-by — references to other artifacts in this batch by their position
Steps:
1. Verify required labels exist; create any missing ones.
2. Create issues in dependency order (blockers first) so each issue's "Blocked by" section can reference real numbers.
3. For each artifact, after creation, substitute placeholder references in dependent artifacts with the real issue number.
4. Prepend the AI disclaimer to each body before posting.
5. Apply category + state labels to each.
6. Report the list of created issue URLs.
Do **not** close or modify any parent issue.
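Step 3 (placeholder substitution) can be sketched like this — the placeholder syntax `#<slice-1>` and the issue number 101 are both hypothetical, not prescribed by this skill:

```shell
#!/usr/bin/env bash
set -eu
# Sketch: slice 1 was just created as issue #101 (hypothetical number), so
# rewrite the placeholder in a dependent slice's body before posting it.
created=101
body='## Blocked by
- Blocked by #<slice-1>'
body="$(printf '%s\n' "$body" | sed "s/#<slice-1>/#$created/")"
printf '%s\n' "$body"
```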
### `applyTriageOutcome(issueId, outcome)`
Input: an issue identifier and a triage outcome from `/triage`. The outcome has:
- target state (one of the states above)
- category (`bug` or `enhancement`)
- comment body (may be empty for `needs-triage` with no progress to capture)
- close (boolean — true for `wontfix`)
Steps:
1. Verify the target state and category labels exist; create missing ones.
2. Read the issue's current labels. Remove any existing state label that conflicts with the target state. Remove existing category label if it differs from the target.
3. Apply the new state and category labels: `gh issue edit <id> --add-label <state> --add-label <category> --remove-label <old-state>`.
4. If a comment body is provided, prepend the AI disclaimer and post it: `gh issue comment <id> --body "<body>"`.
5. If `close` is true: `gh issue close <id>`.
6. For `wontfix` on an enhancement, ensure the corresponding `.out-of-scope/<concept>.md` file exists in the working directory before closing (the `/triage` skill is responsible for writing it; this skill just verifies the file is present and surfaces a warning if missing).
7. Report the actions taken.
## Inferring the operation
When invoked, infer which operation to run from the most recent artifact in conversation context:
- A PRD artifact (from `/to-prd`) → `publishPRD`
- A list of vertical-slice artifacts (from `/to-issues`) → `publishIssues`
- A triage outcome with a target state (from `/triage`) → `applyTriageOutcome`
If multiple are present or the situation is ambiguous, ask the user which to run.
@@ -0,0 +1,47 @@
# ADR Format
ADRs live in `docs/adr/` and use sequential numbering: `0001-slug.md`, `0002-slug.md`, etc.
Create the `docs/adr/` directory lazily — only when the first ADR is needed.
## Template
```md
# {Short title of the decision}
{1-3 sentences: what's the context, what did we decide, and why.}
```
That's it. An ADR can be a single paragraph. The value is in recording *that* a decision was made and *why* — not in filling out sections.
## Optional sections
Only include these when they add genuine value. Most ADRs won't need them.
- **Status** frontmatter (`proposed | accepted | deprecated | superseded by ADR-NNNN`) — useful when decisions are revisited
- **Considered Options** — only when the rejected alternatives are worth remembering
- **Consequences** — only when non-obvious downstream effects need to be called out
## Numbering
Scan `docs/adr/` for the highest existing number and increment by one.
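The scan-and-increment can be sketched over a throwaway `docs/adr/` — the two ADR filenames below are invented for the example:

```shell
#!/usr/bin/env bash
set -eu
# Sketch of next-number computation; the ADR filenames are made up.
adr="$(mktemp -d)"
touch "$adr/0001-monorepo.md" "$adr/0002-postgres-write-model.md"
# Highest existing four-digit prefix:
last="$(ls "$adr" | sed -n 's/^\([0-9][0-9][0-9][0-9]\)-.*/\1/p' | sort -n | tail -n 1)"
# Strip leading zeros before arithmetic, then re-pad:
n="$(echo "$last" | sed 's/^0*//')"
next="$(printf '%04d' $((n + 1)))"
echo "next ADR number: $next"
rm -rf "$adr"
```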
## When to offer an ADR
All three of these must be true:
1. **Hard to reverse** — the cost of changing your mind later is meaningful
2. **Surprising without context** — a future reader will look at the code and wonder "why on earth did they do it this way?"
3. **The result of a real trade-off** — there were genuine alternatives and you picked one for specific reasons
If a decision is easy to reverse, skip it — you'll just reverse it. If it's not surprising, nobody will wonder why. If there was no real alternative, there's nothing to record beyond "we did the obvious thing."
### What qualifies
- **Architectural shape.** "We're using a monorepo." "The write model is event-sourced, the read model is projected into Postgres."
- **Integration patterns between contexts.** "Ordering and Billing communicate via domain events, not synchronous HTTP."
- **Technology choices that carry lock-in.** Database, message bus, auth provider, deployment target. Not every library — just the ones that would take a quarter to swap out.
- **Boundary and scope decisions.** "Customer data is owned by the Customer context; other contexts reference it by ID only." The explicit no's are as valuable as the yes's.
- **Deliberate deviations from the obvious path.** "We're using manual SQL instead of an ORM because X." Anything where a reasonable reader would assume the opposite. These stop the next engineer from "fixing" something that was deliberate.
- **Constraints not visible in the code.** "We can't use AWS because of compliance requirements." "Response times must be under 200ms because of the partner API contract."
- **Rejected alternatives when the rejection is non-obvious.** If you considered GraphQL and picked REST for subtle reasons, record it — otherwise someone will suggest GraphQL again in six months.
@@ -0,0 +1,77 @@
# CONTEXT.md Format
## Structure
```md
# {Context Name}
{One or two sentence description of what this context is and why it exists.}
## Language
**Order**:
A customer's request to purchase one or more items.
_Avoid_: Purchase, transaction
**Invoice**:
A request for payment sent to a customer after delivery.
_Avoid_: Bill, payment request
**Customer**:
A person or organization that places orders.
_Avoid_: Client, buyer, account
## Relationships
- An **Order** produces one or more **Invoices**
- An **Invoice** belongs to exactly one **Customer**
## Example dialogue
> **Dev:** "When a **Customer** places an **Order**, do we create the **Invoice** immediately?"
> **Domain expert:** "No — an **Invoice** is only generated once a **Fulfillment** is confirmed."
## Flagged ambiguities
- "account" was used to mean both **Customer** and **User** — resolved: these are distinct concepts.
```
## Rules
- **Be opinionated.** When multiple words exist for the same concept, pick the best one and list the others as aliases to avoid.
- **Flag conflicts explicitly.** If a term is used ambiguously, call it out in "Flagged ambiguities" with a clear resolution.
- **Keep definitions tight.** One sentence max. Define what it IS, not what it does.
- **Show relationships.** Use bold term names and express cardinality where obvious.
- **Only include terms specific to this project's context.** General programming concepts (timeouts, error types, utility patterns) don't belong even if the project uses them extensively. Before adding a term, ask: is this a concept unique to this context, or a general programming concept? Only the former belongs.
- **Group terms under subheadings** when natural clusters emerge. If all terms belong to a single cohesive area, a flat list is fine.
- **Write an example dialogue.** A conversation between a dev and a domain expert that demonstrates how the terms interact naturally and clarifies boundaries between related concepts.
## Single vs multi-context repos
**Single context (most repos):** One `CONTEXT.md` at the repo root.
**Multiple contexts:** A `CONTEXT-MAP.md` at the repo root lists the contexts, where they live, and how they relate to each other:
```md
# Context Map
## Contexts
- [Ordering](./src/ordering/CONTEXT.md) — receives and tracks customer orders
- [Billing](./src/billing/CONTEXT.md) — generates invoices and processes payments
- [Fulfillment](./src/fulfillment/CONTEXT.md) — manages warehouse picking and shipping
## Relationships
- **Ordering → Fulfillment**: Ordering emits `OrderPlaced` events; Fulfillment consumes them to start picking
- **Fulfillment → Billing**: Fulfillment emits `ShipmentDispatched` events; Billing consumes them to generate invoices
- **Ordering ↔ Billing**: Shared types for `CustomerId` and `Money`
```
The skill infers which structure applies:
- If `CONTEXT-MAP.md` exists, read it to find contexts
- If only a root `CONTEXT.md` exists, single context
- If neither exists, create a root `CONTEXT.md` lazily when the first term is resolved
When multiple contexts exist, infer which one the current topic relates to. If unclear, ask.
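The inference order above can be sketched as a small decision function. This is illustrative only — the skill performs these checks in prose, and `inferStructure` is a hypothetical name, not part of any runtime:

```typescript
// Sketch of the structure-inference order: CONTEXT-MAP.md wins, then a
// root CONTEXT.md, otherwise defer creation until the first term resolves.
import { existsSync } from "node:fs";
import { join } from "node:path";

type RepoStructure =
  | { kind: "multi"; mapPath: string }      // CONTEXT-MAP.md lists the contexts
  | { kind: "single"; contextPath: string } // one root CONTEXT.md
  | { kind: "none" };                       // create CONTEXT.md lazily later

function inferStructure(repoRoot: string): RepoStructure {
  const mapPath = join(repoRoot, "CONTEXT-MAP.md");
  if (existsSync(mapPath)) return { kind: "multi", mapPath };

  const contextPath = join(repoRoot, "CONTEXT.md");
  if (existsSync(contextPath)) return { kind: "single", contextPath };

  return { kind: "none" };
}
```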
@@ -0,0 +1,51 @@
# Domain Awareness
Consumer rules for any skill that explores a codebase. Producer rules (writing `CONTEXT.md`, offering ADRs) live in [SKILL.md](./SKILL.md).
## Before exploring, read these
- **`CONTEXT.md`** at the repo root, or
- **`CONTEXT-MAP.md`** at the repo root if it exists — it points at one `CONTEXT.md` per context. Read each one relevant to the topic.
- **`docs/adr/`** — read ADRs that touch the area you're about to work in. In multi-context repos, also check `src/<context>/docs/adr/` for context-scoped decisions.
If any of these files don't exist, **proceed silently**. Don't flag their absence; don't suggest creating them upfront. The producer skill (`/grill-with-docs`) creates them lazily when terms or decisions actually get resolved.
## File structure
Single-context repo (most repos):
```
/
├── CONTEXT.md
├── docs/adr/
│ ├── 0001-event-sourced-orders.md
│ └── 0002-postgres-for-write-model.md
└── src/
```
Multi-context repo (presence of `CONTEXT-MAP.md` at the root):
```
/
├── CONTEXT-MAP.md
├── docs/adr/ ← system-wide decisions
└── src/
├── ordering/
│ ├── CONTEXT.md
│ └── docs/adr/ ← context-specific decisions
└── billing/
├── CONTEXT.md
└── docs/adr/
```
## Use the glossary's vocabulary
When your output names a domain concept (in an issue title, a refactor proposal, a hypothesis, a test name), use the term as defined in `CONTEXT.md`. Don't drift to synonyms the glossary explicitly avoids.
If the concept you need isn't in the glossary yet, that's a signal — either you're inventing language the project doesn't use (reconsider) or there's a real gap (note it for `/grill-with-docs`).
## Flag ADR conflicts
If your output contradicts an existing ADR, surface it explicitly rather than silently overriding:
> _Contradicts ADR-0007 (event-sourced orders) — but worth reopening because…_
@@ -0,0 +1,81 @@
---
name: grill-with-docs
description: Grilling session that challenges your plan against the existing domain model, sharpens terminology, and updates documentation (CONTEXT.md, ADRs) inline as decisions crystallise. Use when user wants to stress-test a plan against their project's language and documented decisions.
disable-model-invocation: true
---
Interview me relentlessly about every aspect of this plan until we reach a shared understanding. Walk down each branch of the design tree, resolving dependencies between decisions one-by-one. For each question, provide your recommended answer.
Ask the questions one at a time, waiting for feedback on each question before continuing.
If a question can be answered by exploring the codebase, explore the codebase instead.
## Domain awareness
During codebase exploration, also look for existing documentation:
### File structure
Most repos have a single context:
```
/
├── CONTEXT.md
├── docs/
│ └── adr/
│ ├── 0001-event-sourced-orders.md
│ └── 0002-postgres-for-write-model.md
└── src/
```
If a `CONTEXT-MAP.md` exists at the root, the repo has multiple contexts. The map points to where each one lives:
```
/
├── CONTEXT-MAP.md
├── docs/
│ └── adr/ ← system-wide decisions
├── src/
│ ├── ordering/
│ │ ├── CONTEXT.md
│ │ └── docs/adr/ ← context-specific decisions
│ └── billing/
│ ├── CONTEXT.md
│ └── docs/adr/
```
Create files lazily — only when you have something to write. If no `CONTEXT.md` exists, create one when the first term is resolved. If no `docs/adr/` exists, create it when the first ADR is needed.
## During the session
### Challenge against the glossary
When the user uses a term that conflicts with the existing language in `CONTEXT.md`, call it out immediately. "Your glossary defines 'cancellation' as X, but you seem to mean Y — which is it?"
### Sharpen fuzzy language
When the user uses vague or overloaded terms, propose a precise canonical term. "You're saying 'account' — do you mean the Customer or the User? Those are different things."
### Discuss concrete scenarios
When domain relationships are being discussed, stress-test them with specific scenarios. Invent scenarios that probe edge cases and force the user to be precise about the boundaries between concepts.
### Cross-reference with code
When the user states how something works, check whether the code agrees. If you find a contradiction, surface it: "Your code cancels entire Orders, but you just said partial cancellation is possible — which is right?"
### Update CONTEXT.md inline
When a term is resolved, update `CONTEXT.md` right there. Don't batch these up — capture them as they happen. Use the format in [CONTEXT-FORMAT.md](./CONTEXT-FORMAT.md).
Don't couple `CONTEXT.md` to implementation details. Only include terms that are meaningful to domain experts.
### Offer ADRs sparingly
Only offer to create an ADR when all three are true:
1. **Hard to reverse** — the cost of changing your mind later is meaningful
2. **Surprising without context** — a future reader will wonder "why did they do it this way?"
3. **The result of a real trade-off** — there were genuine alternatives and you picked one for specific reasons
If any of the three is missing, skip the ADR. Use the format in [ADR-FORMAT.md](./ADR-FORMAT.md).
@@ -0,0 +1,37 @@
# Deepening
How to deepen a cluster of shallow modules safely, given its dependencies. Assumes the vocabulary in [LANGUAGE.md](LANGUAGE.md) — **module**, **interface**, **seam**, **adapter**.
## Dependency categories
When assessing a candidate for deepening, classify its dependencies. The category determines how the deepened module is tested across its seam.
### 1. In-process
Pure computation, in-memory state, no I/O. Always deepenable — merge the modules and test through the new interface directly. No adapter needed.
### 2. Local-substitutable
Dependencies that have local test stand-ins (PGLite for Postgres, in-memory filesystem). Deepenable if the stand-in exists. The deepened module is tested with the stand-in running in the test suite. The seam is internal; no port at the module's external interface.
### 3. Remote but owned (Ports & Adapters)
Your own services across a network boundary (microservices, internal APIs). Define a **port** (interface) at the seam. The deep module owns the logic; the transport is injected as an **adapter**. Tests use an in-memory adapter. Production uses an HTTP/gRPC/queue adapter.
Recommendation shape: *"Define a port at the seam, implement an HTTP adapter for production and an in-memory adapter for testing, so the logic sits in one deep module even though it's deployed across a network."*
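A minimal sketch of that shape, using the Ordering/Fulfillment domain from the context-map example. All names here are invented for illustration — substitute the project's own `CONTEXT.md` vocabulary:

```typescript
// The port: the interface at the seam. The deep module depends on this,
// never on the transport.
interface FulfillmentPort {
  startPicking(orderId: string): Promise<void>;
}

// Production adapter — crosses the network. (The URL shape is a placeholder.)
class HttpFulfillmentAdapter implements FulfillmentPort {
  constructor(private baseUrl: string) {}
  async startPicking(orderId: string): Promise<void> {
    await fetch(`${this.baseUrl}/picking`, {
      method: "POST",
      body: JSON.stringify({ orderId }),
    });
  }
}

// Test adapter — in-memory, records calls for assertions.
class InMemoryFulfillmentAdapter implements FulfillmentPort {
  started: string[] = [];
  async startPicking(orderId: string): Promise<void> {
    this.started.push(orderId);
  }
}

// The deep module owns the logic; the transport is injected at the seam.
class OrderPlacement {
  constructor(private fulfillment: FulfillmentPort) {}
  async placeOrder(orderId: string): Promise<void> {
    // ...validate, persist, then hand off across the seam
    await this.fulfillment.startPicking(orderId);
  }
}
```

Two adapters (HTTP for production, in-memory for tests) make this a real seam, not a hypothetical one.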
### 4. True external (Mock)
Third-party services (Stripe, Twilio, etc.) you don't control. The deepened module takes the external dependency as an injected port; tests provide a mock adapter.
## Seam discipline
- **One adapter means a hypothetical seam. Two adapters means a real one.** Don't introduce a port unless at least two adapters are justified (typically production + test). A single-adapter seam is just indirection.
- **Internal seams vs external seams.** A deep module can have internal seams (private to its implementation, used by its own tests) as well as the external seam at its interface. Don't expose internal seams through the interface just because tests use them.
## Testing strategy: replace, don't layer
- Old unit tests on shallow modules become waste once tests at the deepened module's interface exist — delete them.
- Write new tests at the deepened module's interface. The **interface is the test surface**.
- Tests assert on observable outcomes through the interface, not internal state.
- Tests should survive internal refactors — they describe behaviour, not implementation. If a test has to change when the implementation changes, it's testing past the interface.
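A toy illustration of "the interface is the test surface" — the module and its fields are invented, but the point is that the test below asserts only on observable outcomes, so swapping the internal `Map` for an array would not break it:

```typescript
// Hypothetical deepened module. Internal state is private and never
// asserted on; tests cross only the external interface (add, total).
class Cart {
  private items = new Map<string, number>(); // internal — not part of the interface

  add(sku: string, qty = 1): void {
    this.items.set(sku, (this.items.get(sku) ?? 0) + qty);
  }

  total(prices: Record<string, number>): number {
    let sum = 0;
    for (const [sku, qty] of this.items) sum += (prices[sku] ?? 0) * qty;
    return sum;
  }
}
```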
@@ -0,0 +1,44 @@
# Interface Design
When the user wants to explore alternative interfaces for a chosen deepening candidate, use this parallel sub-agent pattern. Based on "Design It Twice" (Ousterhout) — your first idea is unlikely to be the best.
Uses the vocabulary in [LANGUAGE.md](LANGUAGE.md) — **module**, **interface**, **seam**, **adapter**, **leverage**.
## Process
### 1. Frame the problem space
Before spawning sub-agents, write a user-facing explanation of the problem space for the chosen candidate:
- The constraints any new interface would need to satisfy
- The dependencies it would rely on, and which category they fall into (see [DEEPENING.md](DEEPENING.md))
- A rough illustrative code sketch to ground the constraints — not a proposal, just a way to make the constraints concrete
Show this to the user, then immediately proceed to Step 2. The user reads and thinks while the sub-agents work in parallel.
### 2. Spawn sub-agents
Spawn 3+ sub-agents in parallel using the Agent tool. Each must produce a **radically different** interface for the deepened module.
Prompt each sub-agent with a separate technical brief (file paths, coupling details, dependency category from [DEEPENING.md](DEEPENING.md), what sits behind the seam). The brief is independent of the user-facing problem-space explanation in Step 1. Give each agent a different design constraint:
- Agent 1: "Minimise the interface — aim for 1-3 entry points max. Maximise leverage per entry point."
- Agent 2: "Maximise flexibility — support many use cases and extension."
- Agent 3: "Optimise for the most common caller — make the default case trivial."
- Agent 4 (if applicable): "Design around ports & adapters for cross-seam dependencies."
Include both [LANGUAGE.md](LANGUAGE.md) vocabulary and CONTEXT.md vocabulary in the brief so each sub-agent names things consistently with the architecture language and the project's domain language.
Each sub-agent outputs:
1. Interface (types, methods, params — plus invariants, ordering, error modes)
2. Usage example showing how callers use it
3. What the implementation hides behind the seam
4. Dependency strategy and adapters (see [DEEPENING.md](DEEPENING.md))
5. Trade-offs — where leverage is high, where it's thin
### 3. Present and compare
Present designs sequentially so the user can absorb each one, then compare them in prose. Contrast by **depth** (leverage at the interface), **locality** (where change concentrates), and **seam placement**.
After comparing, give your own recommendation: which design you think is strongest and why. If elements from different designs would combine well, propose a hybrid. Be opinionated — the user wants a strong read, not a menu.
@@ -0,0 +1,53 @@
# Language
Shared vocabulary for every suggestion this skill makes. Use these terms exactly — don't substitute "component," "service," "API," or "boundary." Consistent language is the whole point.
## Terms
**Module**
Anything with an interface and an implementation. Deliberately scale-agnostic — applies equally to a function, class, package, or tier-spanning slice.
_Avoid_: unit, component, service.
**Interface**
Everything a caller must know to use the module correctly. Includes the type signature, but also invariants, ordering constraints, error modes, required configuration, and performance characteristics.
_Avoid_: API, signature (too narrow — those refer only to the type-level surface).
**Implementation**
What's inside a module — its body of code. Distinct from **Adapter**, which names a role: the same slot can be filled by an adapter with a large implementation (a Postgres repo) or one with a tiny implementation (an in-memory fake). Reach for "adapter" when the seam is the topic; "implementation" otherwise.
**Depth**
Leverage at the interface — the amount of behaviour a caller (or test) can exercise per unit of interface they have to learn. A module is **deep** when a large amount of behaviour sits behind a small interface. A module is **shallow** when the interface is nearly as complex as the implementation.
**Seam** _(from Michael Feathers)_
A place where you can alter behaviour without editing in that place. The *location* at which a module's interface lives. Choosing where to put the seam is its own design decision, distinct from what goes behind it.
_Avoid_: boundary (overloaded with DDD's bounded context).
**Adapter**
A concrete thing that satisfies an interface at a seam. Describes *role* (what slot it fills), not substance (what's inside).
**Leverage**
What callers get from depth. More capability per unit of interface they have to learn. One implementation pays back across N call sites and M tests.
**Locality**
What maintainers get from depth. Change, bugs, knowledge, and verification concentrate at one place rather than spreading across callers. Fix once, fixed everywhere.
## Principles
- **Depth is a property of the interface, not the implementation.** A deep module can be internally composed of small, mockable, swappable parts — they just aren't part of the interface. A module can have **internal seams** (private to its implementation, used by its own tests) as well as the **external seam** at its interface.
- **The deletion test.** Imagine deleting the module. If complexity vanishes, the module wasn't hiding anything (it was a pass-through). If complexity reappears across N callers, the module was earning its keep.
- **The interface is the test surface.** Callers and tests cross the same seam. If you want to test *past* the interface, the module is probably the wrong shape.
- **One adapter means a hypothetical seam. Two adapters means a real one.** Don't introduce a seam unless something actually varies across it.
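The deletion test can be made concrete with a toy pair (both functions are invented examples, not from any codebase):

```typescript
// Shallow: the interface restates the implementation. Delete it and inline
// the call — no complexity reappears anywhere. It fails the deletion test.
function getUserById(
  db: Map<string, { name?: string; email: string }>,
  id: string
) {
  return db.get(id);
}

// Deep: one small entry point with real behaviour behind it. Delete it and
// the fallback and normalisation logic reappears at every call site.
function displayName(
  db: Map<string, { name?: string; email: string }>,
  id: string
): string {
  const user = db.get(id);
  if (!user) return "unknown user";
  const name = user.name?.trim();
  return name && name.length > 0 ? name : user.email.split("@")[0];
}
```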
## Relationships
- A **Module** has exactly one **Interface** (the surface it presents to callers and tests).
- **Depth** is a property of a **Module**, measured against its **Interface**.
- A **Seam** is where a **Module**'s **Interface** lives.
- An **Adapter** sits at a **Seam** and satisfies the **Interface**.
- **Depth** produces **Leverage** for callers and **Locality** for maintainers.
## Rejected framings
- **Depth as ratio of implementation-lines to interface-lines** (Ousterhout): rewards padding the implementation. We use depth-as-leverage instead.
- **"Interface" as the TypeScript `interface` keyword or a class's public methods**: too narrow — interface here includes every fact a caller must know.
- **"Boundary"**: overloaded with DDD's bounded context. Say **seam** or **interface**.
@@ -0,0 +1,71 @@
---
name: improve-codebase-architecture
description: Find deepening opportunities in a codebase, informed by the domain language in CONTEXT.md and the decisions in docs/adr/. Use when the user wants to improve architecture, find refactoring opportunities, consolidate tightly-coupled modules, or make a codebase more testable and AI-navigable.
---
# Improve Codebase Architecture
Surface architectural friction and propose **deepening opportunities** — refactors that turn shallow modules into deep ones. The aim is testability and AI-navigability.
## Glossary
Use these terms exactly in every suggestion. Consistent language is the point — don't drift into "component," "service," "API," or "boundary." Full definitions in [LANGUAGE.md](LANGUAGE.md).
- **Module** — anything with an interface and an implementation (function, class, package, slice).
- **Interface** — everything a caller must know to use the module: types, invariants, error modes, ordering, config. Not just the type signature.
- **Implementation** — the code inside.
- **Depth** — leverage at the interface: a lot of behaviour behind a small interface. **Deep** = high leverage. **Shallow** = interface nearly as complex as the implementation.
- **Seam** — where an interface lives; a place behaviour can be altered without editing in place. (Use this, not "boundary.")
- **Adapter** — a concrete thing satisfying an interface at a seam.
- **Leverage** — what callers get from depth.
- **Locality** — what maintainers get from depth: change, bugs, knowledge concentrated in one place.
Key principles (see [LANGUAGE.md](LANGUAGE.md) for the full list):
- **Deletion test**: imagine deleting the module. If complexity vanishes, it was a pass-through. If complexity reappears across N callers, it was earning its keep.
- **The interface is the test surface.**
- **One adapter = hypothetical seam. Two adapters = real seam.**
This skill is _informed_ by the project's domain model. The domain language gives names to good seams; ADRs record decisions the skill should not re-litigate.
## Process
### 1. Explore
Before exploring, follow [../grill-with-docs/DOMAIN-AWARENESS.md](../grill-with-docs/DOMAIN-AWARENESS.md) — read `CONTEXT.md` and relevant ADRs first.
Then use the Agent tool with `subagent_type=Explore` to walk the codebase. Don't follow rigid heuristics — explore organically and note where you experience friction:
- Where does understanding one concept require bouncing between many small modules?
- Where are modules **shallow** — interface nearly as complex as the implementation?
- Where have pure functions been extracted just for testability, but the real bugs hide in how they're called (no **locality**)?
- Where do tightly-coupled modules leak across their seams?
- Which parts of the codebase are untested, or hard to test through their current interface?
Apply the **deletion test** to anything you suspect is shallow: would deleting it concentrate complexity, or just move it? A "yes, concentrates" is the signal you want.
### 2. Present candidates
Present a numbered list of deepening opportunities. For each candidate:
- **Files** — which files/modules are involved
- **Problem** — why the current architecture is causing friction
- **Solution** — plain English description of what would change
- **Benefits** — explained in terms of locality and leverage, and also in how tests would improve
**Use CONTEXT.md vocabulary for the domain, and [LANGUAGE.md](LANGUAGE.md) vocabulary for the architecture.** If `CONTEXT.md` defines "Order," talk about "the Order intake module" — not "the FooBarHandler," and not "the Order service."
**ADR conflicts**: if a candidate contradicts an existing ADR, only surface it when the friction is real enough to warrant revisiting the ADR. Mark it clearly (e.g. _"contradicts ADR-0007 — but worth reopening because…"_). Don't list every theoretical refactor an ADR forbids.
Do NOT propose interfaces yet. Ask the user: "Which of these would you like to explore?"
### 3. Grilling loop
Once the user picks a candidate, drop into a grilling conversation. Walk the design tree with them — constraints, dependencies, the shape of the deepened module, what sits behind the seam, what tests survive.
Side effects happen inline as decisions crystallise:
- **Naming a deepened module after a concept not in `CONTEXT.md`?** Add the term to `CONTEXT.md` — same discipline as `/grill-with-docs` (see [CONTEXT-FORMAT.md](../grill-with-docs/CONTEXT-FORMAT.md)). Create the file lazily if it doesn't exist.
- **Sharpening a fuzzy term during the conversation?** Update `CONTEXT.md` right there.
- **User rejects the candidate with a load-bearing reason?** Offer an ADR, framed as: _"Want me to record this as an ADR so future architecture reviews don't re-suggest it?"_ Only offer when the reason would actually be needed by a future explorer to avoid re-suggesting the same thing — skip ephemeral reasons ("not worth it right now") and self-evident ones. See [ADR-FORMAT.md](../grill-with-docs/ADR-FORMAT.md).
- **Want to explore alternative interfaces for the deepened module?** See [INTERFACE-DESIGN.md](INTERFACE-DESIGN.md).
@@ -44,6 +44,8 @@ RIGHT (vertical):
### 1. Planning
Before exploring the codebase, follow [../grill-with-docs/DOMAIN-AWARENESS.md](../grill-with-docs/DOMAIN-AWARENESS.md). Test names and interface vocabulary should match the project's `CONTEXT.md`.
Before writing any code:
- [ ] Confirm with user what interface changes are needed
@@ -0,0 +1,98 @@
---
name: to-issues
description: Break a plan, spec, or PRD into independently-grabbable issues using tracer-bullet vertical slices. Backend-agnostic — pairs with `/github` (or another backlog skill) to publish. Use when user wants to convert a plan into issues, create implementation tickets, or break down work into issues.
---
# To Issues
Break a plan into independently-grabbable issues using vertical slices (tracer bullets). Produce canonical artifacts; hand off to a backend skill (`/github` by default) to publish.
## Process
### 1. Gather context
Work from whatever is already in the conversation context. If the user passes an issue identifier as an argument and a backend is reachable, ask the backend skill to fetch the parent issue (with comments).
### 2. Explore the codebase (optional)
If you have not already explored the codebase, do so to understand the current state of the code. Before exploring, follow [../grill-with-docs/DOMAIN-AWARENESS.md](../grill-with-docs/DOMAIN-AWARENESS.md). Issue titles and bodies should use the project's `CONTEXT.md` vocabulary.
### 3. Draft vertical slices
Break the plan into **tracer bullet** slices. Each is a thin vertical slice that cuts through ALL integration layers end-to-end, NOT a horizontal slice of one layer.
Slices may be **HITL** or **AFK**. HITL slices require human interaction (an architectural decision, a design review). AFK slices can be implemented and merged without human interaction. Prefer AFK over HITL where possible.
<vertical-slice-rules>
- Each slice delivers a narrow but COMPLETE path through every layer (schema, API, UI, tests)
- A completed slice is demoable or verifiable on its own
- Prefer many thin slices over few thick ones
</vertical-slice-rules>
### 4. Quiz the user
Present the proposed breakdown as a numbered list. For each slice, show:
- **Title**: short descriptive name
- **Type**: HITL / AFK
- **Blocked by**: which other slices (if any) must complete first
- **User stories covered**: which user stories this addresses (if the source material has them)
Ask the user:
- Does the granularity feel right? (too coarse / too fine)
- Are the dependency relationships correct?
- Should any slices be merged or split further?
- Are the correct slices marked as HITL and AFK?
Iterate until the user approves the breakdown.
### 5. Produce artifacts
For each approved slice, produce an artifact with:
- **title** — short descriptive name
- **type** — `AFK` or `HITL`. Maps to state: AFK → `ready-for-agent`, HITL → `ready-for-human` (the state vocabulary lives in `/triage`)
- **category** — `enhancement` by default, or `bug` if the parent was a bug
- **blocked-by** — references to other slices in this batch (by their position in the list)
- **body** — formatted per the templates below
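The artifact fields above can be sketched as a type. This is illustrative — the skill defines the artifact in prose, and these names are assumptions, not a schema any backend requires:

```typescript
// Hypothetical shape for a slice artifact, with the type-to-state mapping
// described above (AFK → ready-for-agent, HITL → ready-for-human).
type SliceType = "AFK" | "HITL";
type SliceState = "ready-for-agent" | "ready-for-human";

interface SliceArtifact {
  title: string;
  type: SliceType;
  category: "enhancement" | "bug";
  blockedBy: number[]; // positions of other slices in this batch
  body: string;        // formatted per the templates below
}

const stateFor = (type: SliceType): SliceState =>
  type === "AFK" ? "ready-for-agent" : "ready-for-human";
```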
**AFK slices** use the AGENT-BRIEF format from [../triage/AGENT-BRIEF.md](../triage/AGENT-BRIEF.md). The agent brief is the contract the AFK agent will work from.
**HITL slices** use the simpler template below — they require human judgment, so a procedural acceptance list is enough.
<hitl-template>
## Parent
#<parent-issue-id> (if the source was an existing issue, otherwise omit)
## What to build
A concise description of this vertical slice. Describe the end-to-end behavior, not layer-by-layer implementation.
## Why this needs a human
Brief reason this can't be delegated to an AFK agent (e.g. requires architectural decision, design review, judgment call).
## Acceptance criteria
- [ ] Criterion 1
- [ ] Criterion 2
- [ ] Criterion 3
## Blocked by
- Blocked by <slice-ref> (if any)
Or "None - can start immediately" if no blockers.
</hitl-template>
### 6. Hand off
Present the artifact list to the user and end with a handoff hint:
> Artifacts ready. Invoke `/github` (or your configured backend equivalent) to publish — it will create issues in dependency order, apply state and category labels, and substitute real identifiers into the "Blocked by" references.
Do not call `gh` directly. The backend skill owns publishing.
Do NOT close or modify any parent issue.
@@ -1,23 +1,25 @@
---
name: write-a-prd
description: Create a PRD through user interview, codebase exploration, and module design, then submit as a GitHub issue. Use when user wants to write a PRD, create a product requirements document, or plan a new feature.
name: to-prd
description: Turn the current conversation context into a PRD. Backend-agnostic — pairs with `/github` (or another backlog skill) to publish. Use when user wants to create a PRD from the current context.
---
This skill will be invoked when the user wants to create a PRD. You may skip steps if you don't consider them necessary.
This skill takes the current conversation context and codebase understanding and produces a PRD artifact. Do NOT interview the user — just synthesize what you already know. Hand off to a backend skill (`/github` by default) to publish.
1. Ask the user for a long, detailed description of the problem they want to solve and any potential ideas for solutions.
## Process
2. Explore the repo to verify their assertions and understand the current state of the codebase.
1. Explore the repo to understand the current state of the codebase, if you haven't already. Before exploring, follow [../grill-with-docs/DOMAIN-AWARENESS.md](../grill-with-docs/DOMAIN-AWARENESS.md). Use the project's `CONTEXT.md` vocabulary throughout the PRD.
3. Interview the user relentlessly about every aspect of this plan until you reach a shared understanding. Walk down each branch of the design tree, resolving dependencies between decisions one-by-one.
4. Sketch out the major modules you will need to build or modify to complete the implementation. Actively look for opportunities to extract deep modules that can be tested in isolation.
2. Sketch out the major modules you will need to build or modify to complete the implementation. Actively look for opportunities to extract deep modules that can be tested in isolation.
A deep module (as opposed to a shallow one) encapsulates a lot of functionality behind a simple, testable interface that rarely changes.
Check with the user that these modules match their expectations. Check with the user which modules they want tests written for.
5. Once you have a complete understanding of the problem and solution, use the template below to write the PRD. The PRD should be submitted as a GitHub issue.
3. Write the PRD artifact using the template below. Default state is `ready-for-agent` (state vocabulary lives in `/triage`); category is `enhancement`. Present the artifact to the user, then end with a handoff hint:
> PRD ready. Invoke `/github` (or your configured backend equivalent) to publish — it will create the issue with category `enhancement` and state `ready-for-agent`.
Do not call `gh` directly. The backend skill owns publishing.
<prd-template>
@@ -0,0 +1,168 @@
# Writing Agent Briefs
An agent brief is a structured comment posted on a GitHub issue when it moves to `ready-for-agent`. It is the authoritative specification that an AFK agent will work from. The original issue body and discussion are context — the agent brief is the contract.
## Principles
### Durability over precision
The issue may sit in `ready-for-agent` for days or weeks. The codebase will change in the meantime. Write the brief so it stays useful even as files are renamed, moved, or refactored.
- **Do** describe interfaces, types, and behavioral contracts
- **Do** name specific types, function signatures, or config shapes that the agent should look for or modify
- **Don't** reference file paths — they go stale
- **Don't** reference line numbers
- **Don't** assume the current implementation structure will remain the same
### Behavioral, not procedural
Describe **what** the system should do, not **how** to implement it. The agent will explore the codebase fresh and make its own implementation decisions.
- **Good:** "The `SkillConfig` type should accept an optional `schedule` field of type `CronExpression`"
- **Bad:** "Open src/types/skill.ts and add a schedule field on line 42"
- **Good:** "When a user runs `/triage` with no arguments, they should see a summary of issues needing attention"
- **Bad:** "Add a switch statement in the main handler function"
### Complete acceptance criteria
The agent needs to know when it's done. Every agent brief must have concrete, testable acceptance criteria. Each criterion should be independently verifiable.
- **Good:** "Running `gh issue list --label needs-triage` returns issues that have been through initial classification"
- **Bad:** "Triage should work correctly"
### Explicit scope boundaries
State what is out of scope. This prevents the agent from gold-plating or making assumptions about adjacent features.
## Template
```markdown
## Agent Brief
**Category:** bug / enhancement
**Summary:** one-line description of what needs to happen
**Current behavior:**
Describe what happens now. For bugs, this is the broken behavior.
For enhancements, this is the status quo the feature builds on.
**Desired behavior:**
Describe what should happen after the agent's work is complete.
Be specific about edge cases and error conditions.
**Key interfaces:**
- `TypeName` — what needs to change and why
- `functionName()` return type — what it currently returns vs what it should return
- Config shape — any new configuration options needed
**Acceptance criteria:**
- [ ] Specific, testable criterion 1
- [ ] Specific, testable criterion 2
- [ ] Specific, testable criterion 3
**Out of scope:**
- Thing that should NOT be changed or addressed in this issue
- Adjacent feature that might seem related but is separate
```
## Examples
### Good agent brief (bug)
```markdown
## Agent Brief
**Category:** bug
**Summary:** Skill description truncation drops mid-word, producing broken output
**Current behavior:**
When a skill description exceeds 1024 characters, it is truncated at exactly
1024 characters regardless of word boundaries. This produces descriptions
that end mid-word (e.g. "Use when the user wants to confi").
**Desired behavior:**
Truncation should break at the last word boundary before 1024 characters
and append "..." to indicate truncation.
**Key interfaces:**
- The `SkillMetadata` type's `description` field — no type change needed,
but the validation/processing logic that populates it needs to respect
word boundaries
- Any function that reads SKILL.md frontmatter and extracts the description
**Acceptance criteria:**
- [ ] Descriptions under 1024 chars are unchanged
- [ ] Descriptions over 1024 chars are truncated at the last word boundary
before 1024 chars
- [ ] Truncated descriptions end with "..."
- [ ] The total length including "..." does not exceed 1024 chars
**Out of scope:**
- Changing the 1024 char limit itself
- Multi-line description support
```
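Note how the desired behaviour in this brief pins down an algorithm precisely enough to implement without file paths. A minimal sketch of what an agent might write from it (the function name and location are the agent's own choices, not part of the brief):

```typescript
// Truncate at the last word boundary before the limit, appending "...",
// with the total (including "...") never exceeding the limit.
function truncateDescription(desc: string, limit = 1024): string {
  if (desc.length <= limit) return desc;
  const room = limit - 3; // reserve space for "..."
  const slice = desc.slice(0, room);
  const lastSpace = slice.lastIndexOf(" ");
  // Break at the last word boundary; hard-cut only if there is no space.
  const head = lastSpace > 0 ? slice.slice(0, lastSpace) : slice;
  return head + "...";
}
```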
### Good agent brief (enhancement)
```markdown
## Agent Brief
**Category:** enhancement
**Summary:** Add `.out-of-scope/` directory support for tracking rejected feature requests
**Current behavior:**
When a feature request is rejected, the issue is closed with a `wontfix` label
and a comment. There is no persistent record of the decision or reasoning.
Future similar requests require the maintainer to recall or search for the
prior discussion.
**Desired behavior:**
Rejected feature requests should be documented in `.out-of-scope/<concept>.md`
files that capture the decision, reasoning, and links to all issues that
requested the feature. When triaging new issues, these files should be
checked for matches.
**Key interfaces:**
- Markdown file format in `.out-of-scope/` — each file should have a
`# Concept Name` heading, a `**Decision:**` line, a `**Reason:**` line,
and a `**Prior requests:**` list with issue links
- The triage workflow should read all `.out-of-scope/*.md` files early
and match incoming issues against them by concept similarity
**Acceptance criteria:**
- [ ] Closing a feature as wontfix creates/updates a file in `.out-of-scope/`
- [ ] The file includes the decision, reasoning, and link to the closed issue
- [ ] If a matching `.out-of-scope/` file already exists, the new issue is
appended to its "Prior requests" list rather than creating a duplicate
- [ ] During triage, existing `.out-of-scope/` files are checked and surfaced
when a new issue matches a prior rejection
**Out of scope:**
- Automated matching (human confirms the match)
- Reopening previously rejected features
- Bug reports (only enhancement rejections go to `.out-of-scope/`)
```
### Bad agent brief
```markdown
## Agent Brief
**Summary:** Fix the triage bug
**What to do:**
The triage thing is broken. Look at the main file and fix it.
The function around line 150 has the issue.
**Files to change:**
- src/triage/handler.ts (line 150)
- src/types.ts (line 42)
```
This is bad because:
- No category
- Vague description ("the triage thing is broken")
- References file paths and line numbers that will go stale
- No acceptance criteria
- No scope boundaries
- No description of current vs desired behavior
# Out-of-Scope Knowledge Base
The `.out-of-scope/` directory in a repo stores persistent records of rejected feature requests. It serves two purposes:
1. **Institutional memory** — why a feature was rejected, so the reasoning isn't lost when the issue is closed
2. **Deduplication** — when a new issue comes in that matches a prior rejection, the skill can surface the previous decision instead of re-litigating it
## Directory structure
```
.out-of-scope/
├── dark-mode.md
├── plugin-system.md
└── graphql-api.md
```
One file per **concept**, not per issue. Multiple issues requesting the same thing are grouped under one file.
## File format
The file should be written in a relaxed, readable style — more like a short design document than a database entry. Use paragraphs, code samples, and examples to make the reasoning clear and useful to someone encountering it for the first time.
```markdown
# Dark Mode
This project does not support dark mode or user-facing theming.
## Why this is out of scope
The rendering pipeline assumes a single color palette defined in
`ThemeConfig`. Supporting multiple themes would require:
- A theme context provider wrapping the entire component tree
- Per-component theme-aware style resolution
- A persistence layer for user theme preferences
This is a significant architectural change that doesn't align with the
project's focus on content authoring. Theming is a concern for downstream
consumers who embed or redistribute the output.
```ts
// The current ThemeConfig interface is not designed for runtime switching:
interface ThemeConfig {
colors: ColorPalette; // single palette, resolved at build time
fonts: FontStack;
}
```
## Prior requests
- #42 — "Add dark mode support"
- #87 — "Night theme for accessibility"
- #134 — "Dark theme option"
```
### Naming the file
Use a short, descriptive kebab-case name for the concept: `dark-mode.md`, `plugin-system.md`, `graphql-api.md`. The name should be recognizable enough that someone browsing the directory understands what was rejected without opening the file.
### Writing the reason
The reason should be substantive — not "we don't want this" but why. Good reasons reference:
- Project scope or philosophy ("This project focuses on X; theming is a downstream concern")
- Technical constraints ("Supporting this would require Y, which conflicts with our Z architecture")
- Strategic decisions ("We chose to use A instead of B because...")
The reason should be durable. Avoid referencing temporary circumstances ("we're too busy right now") — those aren't real rejections, they're deferrals.
## When to check `.out-of-scope/`
During triage (Step 1: Gather context), read all files in `.out-of-scope/`. When evaluating a new issue:
- Check if the request matches an existing out-of-scope concept
- Matching is by concept similarity, not keyword — "night theme" matches `dark-mode.md`
- If there's a match, surface it to the maintainer: "This is similar to `.out-of-scope/dark-mode.md` — we rejected this before because [reason]. Do you still feel the same way?"
The maintainer may:
- **Confirm** — the new issue gets added to the existing file's "Prior requests" list, then closed
- **Reconsider** — the out-of-scope file gets deleted or updated, and the issue proceeds through normal triage
- **Disagree** — the issues are related but distinct, proceed with normal triage
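The "read all files" step above can be sketched as a small loader. This assumes a Node.js runtime; `loadOutOfScope` and `OutOfScopeEntry` are illustrative names, not part of the skill.

```typescript
import { readdirSync, readFileSync } from "node:fs";
import { join } from "node:path";

interface OutOfScopeEntry {
  concept: string; // derived from the file name, e.g. "dark-mode"
  body: string;    // full markdown, compared against new issues by concept
}

// Hypothetical loader: read every rejection record up front so incoming
// issues can be matched by concept similarity rather than keywords.
function loadOutOfScope(dir: string = ".out-of-scope"): OutOfScopeEntry[] {
  let files: string[];
  try {
    files = readdirSync(dir).filter((f) => f.endsWith(".md"));
  } catch {
    return []; // no knowledge base yet, nothing to match against
  }
  return files.map((f) => ({
    concept: f.replace(/\.md$/, ""),
    body: readFileSync(join(dir, f), "utf8"),
  }));
}
```

The matching itself stays with the model and the maintainer; the loader only makes every prior decision available in context.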
## When to write to `.out-of-scope/`
Only when an **enhancement** (not a bug) is rejected as `wontfix`. The flow:
1. Maintainer decides a feature request is out of scope
2. Check if a matching `.out-of-scope/` file already exists
3. If yes: append the new issue to the "Prior requests" list
4. If no: create a new file with the concept name, decision, reason, and first prior request
5. Post a comment on the issue explaining the decision and mentioning the `.out-of-scope/` file
6. Close the issue with the `wontfix` label
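Steps 2 through 4 amount to a create-or-append, which can be sketched as a pure function. The file layout follows the example format above; `upsertOutOfScope` is a hypothetical name.

```typescript
// Hypothetical upsert for steps 2-4: append to an existing record, or
// create a new one with the concept, reason, and first prior request.
function upsertOutOfScope(
  existing: string | null,
  concept: string,
  reason: string,
  issueRef: string,
): string {
  if (existing !== null) {
    // Step 3: append the new issue to the "Prior requests" list.
    return existing.trimEnd() + `\n- ${issueRef}\n`;
  }
  // Step 4: create a fresh record.
  return [
    `# ${concept}`,
    "",
    "## Why this is out of scope",
    "",
    reason,
    "",
    "## Prior requests",
    "",
    `- ${issueRef}`,
    "",
  ].join("\n");
}
```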
## Updating or removing out-of-scope files
If the maintainer changes their mind about a previously rejected concept:
- Delete the `.out-of-scope/` file
- The skill does not need to reopen old issues — they're historical records
- The new issue that triggered the reconsideration proceeds through normal triage
---
name: triage
description: Triage issues through a label-based state machine. Backend-agnostic — pairs with `/github` (or another backlog skill) to apply outcomes. Use when user wants to triage issues, review incoming bugs or feature requests, prepare issues for an AFK agent, or manage issue workflow.
---
# Triage
Triage issues using a label-based state machine. This skill owns the *conceptual* state vocabulary, the AGENT-BRIEF format, and the workflows. It does **not** know how to talk to any specific backlog system — hand off to a backend skill (e.g. `/github`) when an outcome needs to be applied.
## AI Disclaimer
Every comment posted to a backlog system during triage **must** include the following disclaimer at the top, before any other content:
```
> *This was generated by AI during triage.*
```
The backend skill is responsible for prepending it.
## Reference docs
- [AGENT-BRIEF.md](AGENT-BRIEF.md) — how to write durable agent briefs
- [OUT-OF-SCOPE.md](OUT-OF-SCOPE.md) — how the `.out-of-scope/` knowledge base works
## States
The conceptual state vocabulary. The backend skill maps these to its platform's primitives (e.g. `/github` maps each to a GitHub label of the same name).
| State | Type | Description |
| ----------------- | -------- | ---------------------------------------- |
| `bug` | Category | Something is broken |
| `enhancement` | Category | New feature or improvement |
| `needs-triage` | State | Maintainer needs to evaluate this issue |
| `needs-info` | State | Waiting on reporter for more information |
| `ready-for-agent` | State | Fully specified, ready for AFK agent |
| `ready-for-human` | State | Requires human implementation |
| `wontfix` | State | Will not be actioned |
Every issue should have exactly **one** state and **one** category. If an issue has conflicting states (e.g. both `needs-triage` and `ready-for-agent`), flag the conflict and ask the maintainer which is correct before doing anything else. Provide a recommendation.
## State Machine
| Current State | Can transition to | Who triggers it | What happens |
| -------------- | ----------------- | ---------------------- | ------------------------------------------------------------------------------------------------------- |
| `unlabeled` | `needs-triage` | Skill (on first look) | Issue needs maintainer evaluation. Apply state after presenting recommendation. |
| `unlabeled` | `ready-for-agent` | Maintainer (via skill) | Issue is already well-specified and agent-suitable. Write agent brief, apply state. |
| `unlabeled` | `ready-for-human` | Maintainer (via skill) | Issue requires human implementation. Write a brief comment summarizing the task, apply state. |
| `unlabeled` | `wontfix` | Maintainer (via skill) | Issue is spam, duplicate, or out of scope. Close with comment (and write `.out-of-scope/` for enhancements). |
| `needs-triage` | `needs-info` | Maintainer (via skill) | Issue is underspecified. Post triage notes capturing progress so far + questions for reporter. |
| `needs-triage` | `ready-for-agent` | Maintainer (via skill) | Grilling session complete, agent-suitable. Write agent brief, apply state. |
| `needs-triage` | `ready-for-human` | Maintainer (via skill) | Grilling session complete, needs human. Write a brief comment summarizing the task, apply state. |
| `needs-triage` | `wontfix` | Maintainer (via skill) | Maintainer decides not to action. Close with comment (and write `.out-of-scope/` for enhancements). |
| `needs-info` | `needs-triage` | Skill (detects reply) | Reporter has replied. Surface to maintainer for re-evaluation. |
The maintainer can override any state directly (see Quick State Override below), but the skill should flag if the transition is unusual.
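The vocabulary and transition table can be captured in a small sketch. The names mirror the tables above, but nothing here is prescribed by the skill; it exists only to make the state machine concrete.

```typescript
type Category = "bug" | "enhancement";
type State =
  | "needs-triage"
  | "needs-info"
  | "ready-for-agent"
  | "ready-for-human"
  | "wontfix";

// "unlabeled" is the implicit starting point, not a real state.
const transitions: Record<State | "unlabeled", State[]> = {
  unlabeled: ["needs-triage", "ready-for-agent", "ready-for-human", "wontfix"],
  "needs-triage": ["needs-info", "ready-for-agent", "ready-for-human", "wontfix"],
  "needs-info": ["needs-triage"],
  "ready-for-agent": [],
  "ready-for-human": [],
  wontfix: [],
};

// The maintainer can force any transition; this only flags the
// unusual ones so the skill can ask for confirmation first.
function isUnusualTransition(from: State | "unlabeled", to: State): boolean {
  return !transitions[from].includes(to);
}
```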
## Invocation
The maintainer invokes `/triage` and describes what they want in natural language. The skill interprets the request, decides the outcome, then hands off to the configured backend skill (`/github` by default) to apply it.
Example requests:
- "Show me anything that needs my attention"
- "Let's look at #42"
- "Move #42 to ready-for-agent"
- "What's ready for agents to pick up?"
- "Are there any unlabeled issues?"
## Workflow: Show What Needs Attention
When the maintainer asks for an overview, ask the backend skill to fetch issues grouped into three buckets:
1. **Unlabeled issues** — new, no state at all. These have never been triaged.
2. **`needs-triage` issues** — maintainer needs to evaluate or continue evaluating.
3. **`needs-info` issues with new activity** — the reporter has commented since the last triage notes comment.
Display counts per group. Within each group, show issues oldest first (longest-waiting gets attention first). For each issue, show: identifier, title, age, and a one-line summary of the issue body.
Let the maintainer pick which issue to dive into.
## Workflow: Triage a Specific Issue
### Step 1: Gather context
Before presenting anything to the maintainer:
- Read the full issue: body, all comments, all states, who reported it, when (ask the backend skill to fetch it)
- If there are prior triage notes from previous sessions, parse them to understand what has already been established
- Explore the codebase to build context — understand the domain, relevant interfaces, and existing behavior related to the issue. Before exploring, follow [../grill-with-docs/DOMAIN-AWARENESS.md](../grill-with-docs/DOMAIN-AWARENESS.md).
- Read `.out-of-scope/*.md` files and check if this issue matches or is similar to a previously rejected concept
### Step 2: Present a recommendation
Tell the maintainer:
- **Category recommendation:** bug or enhancement, with reasoning
- **State recommendation:** where this issue should go, with reasoning
- If it matches a prior out-of-scope rejection, surface that: "This is similar to `.out-of-scope/concept-name.md` — we rejected this before because X. Do you still feel the same way?"
- A brief summary of what you found in the codebase that's relevant
Then wait for the maintainer's direction. They may:
- Agree and ask you to apply the outcome → hand off to the backend skill
- Want to flesh it out → start a /grill-with-docs session
- Override with a different state → apply their choice
- Want to discuss → have a conversation
### Step 3: Bug reproduction (bugs only)
If the issue is categorized as a bug, attempt to reproduce it before starting a /grill-with-docs session.
- Read the reporter's reproduction steps (if provided)
- Explore the codebase to understand the relevant code paths
- Try to reproduce the bug: run tests, execute commands, or trace the logic to confirm the reported behavior
- If reproduction succeeds, report what you found — include the specific behavior you observed and where in the code it originates
- If reproduction fails, report that too — the bug may be environment-specific, already fixed, or the report may be inaccurate
- If the report lacks enough detail to attempt reproduction, note that — this is a strong signal the issue should move to `needs-info`
The reproduction attempt informs the /grill-with-docs session and the agent brief. A confirmed reproduction with a known code path makes for a much stronger brief.
### Step 4: /grill-with-docs session (if needed)
If the issue needs to be fleshed out before it's ready for an agent, interview the maintainer to build a complete specification. Use the /grill-with-docs skill.
### Step 5: Apply the outcome
Determine the outcome, then hand off to the configured backend skill to apply it. The triage outcome consists of:
- **Target state** (one of the states above)
- **Category** (bug or enhancement)
- **Comment body** to post — varies by outcome:
- **ready-for-agent** — an agent brief (see [AGENT-BRIEF.md](AGENT-BRIEF.md))
- **ready-for-human** — a comment summarizing the task, what was established during triage, and why it needs human implementation. Same structure as an agent brief but note the reason it can't be delegated (e.g. requires judgment calls, external system access, design decisions, or manual testing).
- **needs-info** — triage notes (see Needs Info Output below)
- **wontfix (bug)** — a polite comment explaining why; the backend will close the issue
- **wontfix (enhancement)** — write to `.out-of-scope/` (see [OUT-OF-SCOPE.md](OUT-OF-SCOPE.md)), then a comment linking to it; the backend will close the issue
- **needs-triage** — optional comment if there's partial progress to capture
End with a handoff hint:
> Outcome ready. Invoke `/github` (or your configured backend equivalent) to apply state `<state>` to issue `<id>` with the comment above.
The backend skill maps the state name to its platform primitive, posts the comment, and closes the issue if `wontfix`.
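The handoff payload can be pictured as a record like this. This is a sketch only: the real contract is defined by the `/github` skill, and all field names here are illustrative.

```typescript
type OutcomeState =
  | "needs-triage"
  | "needs-info"
  | "ready-for-agent"
  | "ready-for-human"
  | "wontfix";

interface TriageOutcome {
  issueId: string;
  state: OutcomeState;
  category: "bug" | "enhancement";
  commentBody: string;      // agent brief, triage notes, or wontfix rationale
  close: boolean;           // true only for wontfix
  outOfScopeFile?: string;  // set for wontfix enhancements
}

// Render the handoff hint shown to the maintainer.
function handoffHint(o: TriageOutcome): string {
  return (
    "Outcome ready. Invoke `/github` (or your configured backend " +
    `equivalent) to apply state \`${o.state}\` to issue \`${o.issueId}\` ` +
    "with the comment above."
  );
}
```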
## Workflow: Quick State Override
When the maintainer explicitly tells you to move an issue to a specific state (e.g. "move #42 to ready-for-agent"), trust their judgment.
Show a confirmation of what you're about to do: target state, category, whether a comment will be posted, whether the issue will be closed. Then hand off to the backend skill. Skip the /grill-with-docs session entirely.
If moving to `ready-for-agent` without a /grill-with-docs session, ask the maintainer whether they want to write a short agent brief comment or skip it.
## Needs Info Output
When moving an issue to `needs-info`, the comment body should be:
```markdown
## Triage Notes
**What we've established so far:**
- point 1
- point 2
**What we still need from you (@reporter):**
- question 1
- question 2
```
Include everything resolved during the /grill-with-docs session in "established so far" — this work should not be lost. The questions for the reporter should be specific and actionable, not vague ("please provide more info").
## Resuming Previous Sessions
When triaging an issue that already has triage notes from a previous session:
1. Read all comments to find prior triage notes
2. Parse what was already established
3. Check if the reporter has answered any outstanding questions
4. Present the maintainer with an updated picture: "Here's where we left off, and here's what the reporter has said since"
5. Continue the /grill-with-docs session from where it stopped — do not re-ask resolved questions
---
name: zoom-out
description: Tell the agent to zoom out and give broader context or a higher-level perspective. Use when you're unfamiliar with a section of code or need to understand how it fits into the bigger picture.
disable-model-invocation: true
---
I don't know this area of code well. Go up a layer of abstraction. Give me a map of all the relevant modules and callers, using the language in `CONTEXT.md`.
Use [../grill-with-docs/DOMAIN-AWARENESS.md](../grill-with-docs/DOMAIN-AWARENESS.md) as a reference for how to use `CONTEXT.md`.
# Misc
Tools I keep around but rarely use.
- **[git-guardrails-claude-code](./git-guardrails-claude-code/SKILL.md)** — Set up Claude Code hooks to block dangerous git commands (push, reset --hard, clean, etc.) before they execute.
- **[migrate-to-shoehorn](./migrate-to-shoehorn/SKILL.md)** — Migrate test files from `as` type assertions to @total-typescript/shoehorn.
- **[scaffold-exercises](./scaffold-exercises/SKILL.md)** — Create exercise directory structures with sections, problems, solutions, and explainers.
- **[setup-pre-commit](./setup-pre-commit/SKILL.md)** — Set up Husky pre-commit hooks with lint-staged, Prettier, type checking, and tests.
# Personal
Skills tied to my own setup, not promoted in the plugin.
- **[edit-article](./edit-article/SKILL.md)** — Edit and improve articles by restructuring sections, improving clarity, and tightening prose.
- **[obsidian-vault](./obsidian-vault/SKILL.md)** — Search, create, and manage notes in an Obsidian vault with wikilinks and index notes.
# Productivity
General workflow tools, not code-specific.
- **[caveman](./caveman/SKILL.md)** — Ultra-compressed communication mode. Cuts token usage ~75% by dropping filler while keeping full technical accuracy.
- **[grill-me](./grill-me/SKILL.md)** — Get relentlessly interviewed about a plan or design until every branch of the decision tree is resolved.
- **[write-a-skill](./write-a-skill/SKILL.md)** — Create new skills with proper structure, progressive disclosure, and bundled resources.
---
name: caveman
description: >
Ultra-compressed communication mode. Cuts token usage ~75% by dropping
filler, articles, and pleasantries while keeping full technical accuracy.
Use when user says "caveman mode", "talk like caveman", "use caveman",
"less tokens", "be brief", or invokes /caveman.
---
Respond terse like smart caveman. All technical substance stay. Only fluff die.
## Persistence
ACTIVE EVERY RESPONSE once triggered. No revert after many turns. No filler drift. Still active if unsure. Off only when user says "stop caveman" or "normal mode".
## Rules
Drop: articles (a/an/the), filler (just/really/basically/actually/simply), pleasantries (sure/certainly/of course/happy to), hedging. Fragments OK. Short synonyms (big not extensive, fix not "implement a solution for"). Abbreviate common terms (DB/auth/config/req/res/fn/impl). Strip conjunctions. Use arrows for causality (X -> Y). One word when one word enough.
Technical terms stay exact. Code blocks unchanged. Errors quoted exact.
Pattern: `[thing] [action] [reason]. [next step].`
Not: "Sure! I'd be happy to help you with that. The issue you're experiencing is likely caused by..."
Yes: "Bug in auth middleware. Token expiry check use `<` not `<=`. Fix:"
### Examples
**"Why React component re-render?"**
> Inline obj prop -> new ref -> re-render. `useMemo`.
**"Explain database connection pooling."**
> Pool = reuse DB conn. Skip handshake -> fast under load.
## Auto-Clarity Exception
Drop caveman temporarily for: security warnings, irreversible action confirmations, multi-step sequences where fragment order risks misread, user asks to clarify or repeats question. Resume caveman after clear part done.
Example -- destructive op:
> **Warning:** This will permanently delete all rows in the `users` table and cannot be undone.
>
> ```sql
> DROP TABLE users;
> ```
>
> Caveman resume. Verify backup exist first.
@@ -5,4 +5,6 @@ description: Interview the user relentlessly about a plan or design until reachi
Interview me relentlessly about every aspect of this plan until we reach a shared understanding. Walk down each branch of the design tree, resolving dependencies between decisions one-by-one. For each question, provide your recommended answer.
Ask the questions one at a time.
If a question can be answered by exploring the codebase, explore the codebase instead.
---
name: triage-issue
description: Triage a bug or issue by exploring the codebase to find root cause, then create a GitHub issue with a TDD-based fix plan. Use when user reports a bug, wants to file an issue, mentions "triage", or wants to investigate and plan a fix for a problem.
---
# Triage Issue
Investigate a reported problem, find its root cause, and create a GitHub issue with a TDD fix plan. This is a mostly hands-off workflow; minimize questions to the user.
## Process
### 1. Capture the problem
Get a brief description of the issue from the user. If they haven't provided one, ask ONE question: "What's the problem you're seeing?"
Do NOT ask follow-up questions yet. Start investigating immediately.
### 2. Explore and diagnose
Use the Agent tool with subagent_type=Explore to deeply investigate the codebase. Your goal is to find:
- **Where** the bug manifests (entry points, UI, API responses)
- **What** code path is involved (trace the flow)
- **Why** it fails (the root cause, not just the symptom)
- **What** related code exists (similar patterns, tests, adjacent modules)
Look at:
- Related source files and their dependencies
- Existing tests (what's tested, what's missing)
- Recent changes to affected files (`git log` on relevant files)
- Error handling in the code path
- Similar patterns elsewhere in the codebase that work correctly
### 3. Identify the fix approach
Based on your investigation, determine:
- The minimal change needed to fix the root cause
- Which modules/interfaces are affected
- What behaviors need to be verified via tests
- Whether this is a regression, missing feature, or design flaw
### 4. Design TDD fix plan
Create a concrete, ordered list of RED-GREEN cycles. Each cycle is one vertical slice:
- **RED**: Describe a specific test that captures the broken/missing behavior
- **GREEN**: Describe the minimal code change to make that test pass
Rules:
- Tests verify behavior through public interfaces, not implementation details
- One test at a time, vertical slices (NOT all tests first, then all code)
- Each test should survive internal refactors
- Include a final refactor step if needed
- **Durability**: Only suggest fixes that would survive radical codebase changes. Describe behaviors and contracts, not internal structure. Tests assert on observable outcomes (API responses, UI state, user-visible effects), not internal state. A good suggestion reads like a spec; a bad one reads like a diff.
### 5. Create the GitHub issue
Create a GitHub issue using `gh issue create` with the template below. Do NOT ask the user to review before creating - just create it and share the URL.
<issue-template>
## Problem
A clear description of the bug or issue, including:
- What happens (actual behavior)
- What should happen (expected behavior)
- How to reproduce (if applicable)
## Root Cause Analysis
Describe what you found during investigation:
- The code path involved
- Why the current code fails
- Any contributing factors
Do NOT include specific file paths, line numbers, or implementation details that couple to current code layout. Describe modules, behaviors, and contracts instead. The issue should remain useful even after major refactors.
## TDD Fix Plan
A numbered list of RED-GREEN cycles:
1. **RED**: Write a test that [describes expected behavior]
**GREEN**: [Minimal change to make it pass]
2. **RED**: Write a test that [describes next behavior]
**GREEN**: [Minimal change to make it pass]
...
**REFACTOR**: [Any cleanup needed after all tests pass]
## Acceptance Criteria
- [ ] Criterion 1
- [ ] Criterion 2
- [ ] All new tests pass
- [ ] Existing tests still pass
</issue-template>
After creating the issue, print the issue URL and a one-line summary of the root cause.