13 Commits

Author SHA1 Message Date
Matt Pocock 8a54bc33a3 Add research documentation for marketing skills and superpowers plugins 2026-04-27 09:48:22 +01:00
Matt Pocock 90ea8eec03 Revise README.md to clarify the purpose of agent skills and promote newsletter subscription 2026-04-26 20:49:06 +01:00
Matt Pocock 77b06d1022 Update definition of 'Order' in CONTEXT-FORMAT.md for clarity and consistency 2026-04-26 20:47:17 +01:00
Matt Pocock 60aa99c023 Enhance documentation in DEEPENING.md and INTERFACE-DESIGN.md for clarity and consistency; introduce LANGUAGE.md for shared vocabulary in architectural discussions. 2026-04-24 20:26:50 +01:00
Matt Pocock 949472a791 Added DDD-awareness to improve-codebase-architecture 2026-04-23 11:25:36 +01:00
Test 1186cf6d42 Update terminology in SKILL.md: replace 'grilling session' with '/domain-model session' for consistency 2026-04-22 16:58:03 +01:00
Test c2225f9865 Refine description in SKILL.md for clarity and conciseness 2026-04-22 12:10:22 +01:00
Test 3e251ea792 Clarify guidance on CONTEXT.md updates: emphasize meaningful terms for domain experts and avoid coupling to implementation details. 2026-04-22 12:10:11 +01:00
Test 8868f54212 Rename skills for clarity: update 'write-a-prd' to 'to-prd' and 'prd-to-issues' to 'to-issues' in README.md 2026-04-17 13:11:41 +01:00
Test aaf3050857 Updated write-a-prd to to-prd 2026-04-17 13:10:02 +01:00
Test a77fa6e7a6 Remove prd-to-plan skill (Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>) 2026-04-17 13:02:21 +01:00
Test ab45d5e6e1 Added domain-model, updates to github-triage, and caveman 2026-04-17 10:36:26 +01:00
Test 1f25956a56 Add AI disclaimer requirement for GitHub issue comments 2026-04-15 06:17:41 +01:00
19 changed files with 777 additions and 307 deletions
+11 -13
@@ -1,27 +1,25 @@
# Agent Skills
# Agent Skills For Real Engineers
A collection of agent skills that extend capabilities across planning, development, and tooling.
My agent skills that I use every day to do real engineering - not vibe coding.
If you want to keep up with changes to these skills, and any new ones I create, you can join ~60,000 other devs on my newsletter:
[Sign Up To The Newsletter](https://www.aihero.dev/s/skills-newsletter)
## Planning & Design
These skills help you think through problems before writing code.
- **write-a-prd** — Create a PRD through an interactive interview, codebase exploration, and module design. Filed as a GitHub issue.
- **to-prd** — Turn the current conversation context into a PRD and submit it as a GitHub issue. No interview — just synthesizes what you've already discussed.
```
npx skills@latest add mattpocock/skills/write-a-prd
npx skills@latest add mattpocock/skills/to-prd
```
- **prd-to-plan** — Turn a PRD into a multi-phase implementation plan using tracer-bullet vertical slices.
- **to-issues** — Break any plan, spec, or PRD into independently-grabbable GitHub issues using vertical slices.
```
npx skills@latest add mattpocock/skills/prd-to-plan
```
- **prd-to-issues** — Break a PRD into independently-grabbable GitHub issues using vertical slices.
```
npx skills@latest add mattpocock/skills/prd-to-issues
npx skills@latest add mattpocock/skills/to-issues
```
- **grill-me** — Get relentlessly interviewed about a plan or design until every branch of the decision tree is resolved.
@@ -58,7 +56,7 @@ These skills help you write, refactor, and fix code.
```
npx skills@latest add mattpocock/skills/triage-issue
```
- **improve-codebase-architecture** — Explore a codebase for architectural improvement opportunities, focusing on deepening shallow modules and improving testability.
- **improve-codebase-architecture** — Find deepening opportunities in a codebase, informed by the domain language in `CONTEXT.md` and the decisions in `docs/adr/`.
```
npx skills@latest add mattpocock/skills/improve-codebase-architecture
```
+49
@@ -0,0 +1,49 @@
---
name: caveman
description: >
Ultra-compressed communication mode. Cuts token usage ~75% by dropping
filler, articles, and pleasantries while keeping full technical accuracy.
Use when user says "caveman mode", "talk like caveman", "use caveman",
"less tokens", "be brief", or invokes /caveman.
---
Respond terse like smart caveman. All technical substance stay. Only fluff die.
## Persistence
ACTIVE EVERY RESPONSE once triggered. No revert after many turns. No filler drift. Still active if unsure. Off only when user says "stop caveman" or "normal mode".
## Rules
Drop: articles (a/an/the), filler (just/really/basically/actually/simply), pleasantries (sure/certainly/of course/happy to), hedging. Fragments OK. Short synonyms (big not extensive, fix not "implement a solution for"). Abbreviate common terms (DB/auth/config/req/res/fn/impl). Strip conjunctions. Use arrows for causality (X -> Y). One word when one word enough.
Technical terms stay exact. Code blocks unchanged. Errors quoted exact.
Pattern: `[thing] [action] [reason]. [next step].`
Not: "Sure! I'd be happy to help you with that. The issue you're experiencing is likely caused by..."
Yes: "Bug in auth middleware. Token expiry check use `<` not `<=`. Fix:"
### Examples
**"Why React component re-render?"**
> Inline obj prop -> new ref -> re-render. `useMemo`.
**"Explain database connection pooling."**
> Pool = reuse DB conn. Skip handshake -> fast under load.
## Auto-Clarity Exception
Drop caveman temporarily for: security warnings, irreversible action confirmations, multi-step sequences where fragment order risks misread, user asks to clarify or repeats question. Resume caveman after clear part done.
Example -- destructive op:
> **Warning:** This will permanently delete all rows in the `users` table and cannot be undone.
>
> ```sql
> DROP TABLE users;
> ```
>
> Caveman resume. Verify backup exist first.
+47
@@ -0,0 +1,47 @@
# ADR Format
ADRs live in `docs/adr/` and use sequential numbering: `0001-slug.md`, `0002-slug.md`, etc.
Create the `docs/adr/` directory lazily — only when the first ADR is needed.
## Template
```md
# {Short title of the decision}
{1-3 sentences: what's the context, what did we decide, and why.}
```
That's it. An ADR can be a single paragraph. The value is in recording *that* a decision was made and *why* — not in filling out sections.
## Optional sections
Only include these when they add genuine value. Most ADRs won't need them.
- **Status** frontmatter (`proposed | accepted | deprecated | superseded by ADR-NNNN`) — useful when decisions are revisited
- **Considered Options** — only when the rejected alternatives are worth remembering
- **Consequences** — only when non-obvious downstream effects need to be called out
## Numbering
Scan `docs/adr/` for the highest existing number and increment by one.
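A minimal sketch of that scan, assuming Node stdlib and the `NNNN-slug.md` convention above (illustrative, not part of the skill):

```ts
import { readdirSync } from "node:fs";

// Find the highest existing ADR number in docs/adr/ and increment by one.
const numbers = readdirSync("docs/adr")
  .map((file) => /^(\d{4})-/.exec(file)?.[1])
  .filter((n): n is string => n !== undefined)
  .map(Number);

// Yields 0001 when the directory is empty; otherwise highest + 1.
const next = String(Math.max(0, ...numbers) + 1).padStart(4, "0");
console.log(`docs/adr/${next}-your-slug.md`);
```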
## When to offer an ADR
All three of these must be true:
1. **Hard to reverse** — the cost of changing your mind later is meaningful
2. **Surprising without context** — a future reader will look at the code and wonder "why on earth did they do it this way?"
3. **The result of a real trade-off** — there were genuine alternatives and you picked one for specific reasons
If a decision is easy to reverse, skip it — you'll just reverse it. If it's not surprising, nobody will wonder why. If there was no real alternative, there's nothing to record beyond "we did the obvious thing."
### What qualifies
- **Architectural shape.** "We're using a monorepo." "The write model is event-sourced, the read model is projected into Postgres."
- **Integration patterns between contexts.** "Ordering and Billing communicate via domain events, not synchronous HTTP."
- **Technology choices that carry lock-in.** Database, message bus, auth provider, deployment target. Not every library — just the ones that would take a quarter to swap out.
- **Boundary and scope decisions.** "Customer data is owned by the Customer context; other contexts reference it by ID only." The explicit "no"s are as valuable as the "yes"es.
- **Deliberate deviations from the obvious path.** "We're using manual SQL instead of an ORM because X." Anything where a reasonable reader would assume the opposite. These stop the next engineer from "fixing" something that was deliberate.
- **Constraints not visible in the code.** "We can't use AWS because of compliance requirements." "Response times must be under 200ms because of the partner API contract."
- **Rejected alternatives when the rejection is non-obvious.** If you considered GraphQL and picked REST for subtle reasons, record it — otherwise someone will suggest GraphQL again in six months.
+77
@@ -0,0 +1,77 @@
# CONTEXT.md Format
## Structure
```md
# {Context Name}
{One or two sentence description of what this context is and why it exists.}
## Language
**Order**:
{A concise description of the term}
_Avoid_: Purchase, transaction
**Invoice**:
A request for payment sent to a customer after delivery.
_Avoid_: Bill, payment request
**Customer**:
A person or organization that places orders.
_Avoid_: Client, buyer, account
## Relationships
- An **Order** produces one or more **Invoices**
- An **Invoice** belongs to exactly one **Customer**
## Example dialogue
> **Dev:** "When a **Customer** places an **Order**, do we create the **Invoice** immediately?"
> **Domain expert:** "No — an **Invoice** is only generated once a **Fulfillment** is confirmed."
## Flagged ambiguities
- "account" was used to mean both **Customer** and **User** — resolved: these are distinct concepts.
```
## Rules
- **Be opinionated.** When multiple words exist for the same concept, pick the best one and list the others as aliases to avoid.
- **Flag conflicts explicitly.** If a term is used ambiguously, call it out in "Flagged ambiguities" with a clear resolution.
- **Keep definitions tight.** One sentence max. Define what it IS, not what it does.
- **Show relationships.** Use bold term names and express cardinality where obvious.
- **Only include terms specific to this project's context.** General programming concepts (timeouts, error types, utility patterns) don't belong even if the project uses them extensively. Before adding a term, ask: is this a concept unique to this context, or a general programming concept? Only the former belongs.
- **Group terms under subheadings** when natural clusters emerge. If all terms belong to a single cohesive area, a flat list is fine.
- **Write an example dialogue.** A conversation between a dev and a domain expert that demonstrates how the terms interact naturally and clarifies boundaries between related concepts.
## Single vs multi-context repos
**Single context (most repos):** One `CONTEXT.md` at the repo root.
**Multiple contexts:** A `CONTEXT-MAP.md` at the repo root lists the contexts, where they live, and how they relate to each other:
```md
# Context Map
## Contexts
- [Ordering](./src/ordering/CONTEXT.md) — receives and tracks customer orders
- [Billing](./src/billing/CONTEXT.md) — generates invoices and processes payments
- [Fulfillment](./src/fulfillment/CONTEXT.md) — manages warehouse picking and shipping
## Relationships
- **Ordering → Fulfillment**: Ordering emits `OrderPlaced` events; Fulfillment consumes them to start picking
- **Fulfillment → Billing**: Fulfillment emits `ShipmentDispatched` events; Billing consumes them to generate invoices
- **Ordering ↔ Billing**: Shared types for `CustomerId` and `Money`
```
The skill infers which structure applies:
- If `CONTEXT-MAP.md` exists, read it to find contexts
- If only a root `CONTEXT.md` exists, single context
- If neither exists, create a root `CONTEXT.md` lazily when the first term is resolved
When multiple contexts exist, infer which one the current topic relates to. If unclear, ask.
+81
@@ -0,0 +1,81 @@
---
name: domain-model
description: Grilling session that challenges your plan against the existing domain model, sharpens terminology, and updates documentation (CONTEXT.md, ADRs) inline as decisions crystallise. Use when user wants to stress-test a plan against their project's language and documented decisions.
disable-model-invocation: true
---
Interview me relentlessly about every aspect of this plan until we reach a shared understanding. Walk down each branch of the design tree, resolving dependencies between decisions one-by-one. For each question, provide your recommended answer.
Ask the questions one at a time, waiting for feedback on each question before continuing.
If a question can be answered by exploring the codebase, explore the codebase instead.
## Domain awareness
During codebase exploration, also look for existing documentation:
### File structure
Most repos have a single context:
```
/
├── CONTEXT.md
├── docs/
│ └── adr/
│ ├── 0001-event-sourced-orders.md
│ └── 0002-postgres-for-write-model.md
└── src/
```
If a `CONTEXT-MAP.md` exists at the root, the repo has multiple contexts. The map points to where each one lives:
```
/
├── CONTEXT-MAP.md
├── docs/
│ └── adr/ ← system-wide decisions
├── src/
│ ├── ordering/
│ │ ├── CONTEXT.md
│ │ └── docs/adr/ ← context-specific decisions
│ └── billing/
│ │ ├── CONTEXT.md
│ │ └── docs/adr/
```
Create files lazily — only when you have something to write. If no `CONTEXT.md` exists, create one when the first term is resolved. If no `docs/adr/` exists, create it when the first ADR is needed.
## During the session
### Challenge against the glossary
When the user uses a term that conflicts with the existing language in `CONTEXT.md`, call it out immediately. "Your glossary defines 'cancellation' as X, but you seem to mean Y — which is it?"
### Sharpen fuzzy language
When the user uses vague or overloaded terms, propose a precise canonical term. "You're saying 'account' — do you mean the Customer or the User? Those are different things."
### Discuss concrete scenarios
When domain relationships are being discussed, stress-test them with specific scenarios. Invent scenarios that probe edge cases and force the user to be precise about the boundaries between concepts.
### Cross-reference with code
When the user states how something works, check whether the code agrees. If you find a contradiction, surface it: "Your code cancels entire Orders, but you just said partial cancellation is possible — which is right?"
### Update CONTEXT.md inline
When a term is resolved, update `CONTEXT.md` right there. Don't batch these up — capture them as they happen. Use the format in [CONTEXT-FORMAT.md](./CONTEXT-FORMAT.md).
Don't couple `CONTEXT.md` to implementation details. Only include terms that are meaningful to domain experts.
### Offer ADRs sparingly
Only offer to create an ADR when all three are true:
1. **Hard to reverse** — the cost of changing your mind later is meaningful
2. **Surprising without context** — a future reader will wonder "why did they do it this way?"
3. **The result of a real trade-off** — there were genuine alternatives and you picked one for specific reasons
If any of the three is missing, skip the ADR. Use the format in [ADR-FORMAT.md](./ADR-FORMAT.md).
+19 -26
@@ -1,12 +1,20 @@
---
name: github-triage
description: Triage GitHub issues through a label-based state machine with interactive grilling sessions. Use when user wants to triage issues, review incoming bugs or feature requests, prepare issues for an AFK agent, or manage issue workflow.
description: Triage GitHub issues through a label-based state machine. Use when user wants to create an issue, triage issues, review incoming bugs or feature requests, prepare issues for an AFK agent, or manage issue workflow.
---
# GitHub Issue Triage
Triage issues in the current repo using a label-based state machine. Infer the repo from `git remote`. Use `gh` for all GitHub operations.
## AI Disclaimer
Every comment or issue posted to GitHub during triage **must** include the following disclaimer at the top of the comment body, before any other content:
```
> *This was generated by AI during triage.*
```
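For instance, posting a comment with the disclaimer prepended might look like this (a sketch using Node stdlib; the issue number and notes are hypothetical):

```ts
import { execSync } from "node:child_process";

// Hypothetical example: issue #42 and the notes are placeholders.
const disclaimer = "> *This was generated by AI during triage.*";
const notes = "Reproduced on main; root cause appears to be the expiry check.";

// `gh issue comment <n> --body-file -` reads the comment body from stdin.
execSync("gh issue comment 42 --body-file -", {
  input: `${disclaimer}\n\n${notes}`,
});
```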
## Reference docs
- [AGENT-BRIEF.md](AGENT-BRIEF.md) — how to write durable agent briefs
@@ -89,13 +97,13 @@ Tell the maintainer:
Then wait for the maintainer's direction. They may:
- Agree and ask you to apply labels → do it
- Want to flesh it out → start a grilling session
- Want to flesh it out → start a /domain-model session
- Override with a different state → apply their choice
- Want to discuss → have a conversation
### Step 3: Bug reproduction (bugs only)
If the issue is categorized as a bug, attempt to reproduce it before starting a grilling session. This will vary by codebase, but do your best:
If the issue is categorized as a bug, attempt to reproduce it before starting a /domain-model session. This will vary by codebase, but do your best:
- Read the reporter's reproduction steps (if provided)
- Explore the codebase to understand the relevant code paths
@@ -104,29 +112,14 @@ If the issue is categorized as a bug, attempt to reproduce it before starting a
- If reproduction fails, report that too — the bug may be environment-specific, already fixed, or the report may be inaccurate
- If the report lacks enough detail to attempt reproduction, note that — this is a strong signal the issue should move to `needs-info`
The reproduction attempt informs the grilling session and the agent brief. A confirmed reproduction with a known code path makes for a much stronger brief.
The reproduction attempt informs the /domain-model session and the agent brief. A confirmed reproduction with a known code path makes for a much stronger brief.
### Step 4: Grilling session (if needed)
### Step 4: /domain-model session (if needed)
If the issue needs to be fleshed out before it's ready for an agent, interview the maintainer to build a complete specification. Follow the /grill-me pattern:
- Ask questions one at a time
- Provide a recommended answer for each question
- If a question can be answered by exploring the codebase, explore the codebase instead
- If there are prior triage notes on this issue, resume from where you left off — never re-ask questions that were already resolved
- For bugs: use the reproduction findings to ask targeted questions ("I confirmed this happens because X — should the fix be Y or Z?")
The goal is to reach a point where you can write a complete agent brief. Keep going until you have:
- A clear summary of the desired behavior
- Concrete acceptance criteria
- Key interfaces that may need to change
- A clear boundary of what's out of scope
If the issue needs to be fleshed out before it's ready for an agent, interview the maintainer to build a complete specification. Use the /domain-model skill.
### Step 5: Apply the outcome
Before posting any comment or applying any label, show the maintainer a **preview** of exactly what will be posted and which labels will be applied/removed. Only proceed on confirmation.
Depending on the outcome:
- **ready-for-agent** — post an agent brief comment (see [AGENT-BRIEF.md](AGENT-BRIEF.md))
@@ -140,13 +133,13 @@ Depending on the outcome:
When the maintainer explicitly tells you to move an issue to a specific state (e.g. "move #42 to ready-for-agent"), trust their judgment and apply the label directly.
Still show a confirmation of what you're about to do: which labels will be added/removed, and whether you'll post a comment or close the issue. But skip the grilling session entirely.
Still show a confirmation of what you're about to do: which labels will be added/removed, and whether you'll post a comment or close the issue. But skip the /domain-model session entirely.
If moving to `ready-for-agent` without a /domain-model session, ask the maintainer if they want to write a short agent brief comment or skip it.
If moving to `ready-for-agent` without a /domain-model session, ask the maintainer if they want to write a brief agent brief comment or skip it.
## Needs Info Output
When moving an issue to `needs-info`, post a comment that captures the grilling progress and tells the reporter what's needed:
When moving an issue to `needs-info`, post a comment that captures the interview progress and tells the reporter what's needed:
```markdown
## Triage Notes
@@ -162,7 +155,7 @@ When moving an issue to `needs-info`, post a comment that captures the grilling
- question 2
```
Include everything resolved during the grilling session in "established so far" — this work should not be lost. The questions for the reporter should be specific and actionable, not vague ("please provide more info").
Include everything resolved during the /domain-model session in "established so far" — this work should not be lost. The questions for the reporter should be specific and actionable, not vague ("please provide more info").
## Resuming Previous Sessions
@@ -172,4 +165,4 @@ When triaging an issue that already has triage notes from a previous session:
2. Parse what was already established
3. Check if the reporter has answered any outstanding questions
4. Present the maintainer with an updated picture: "Here's where we left off, and here's what the reporter has said since"
5. Continue the grilling from where it stopped — do not re-ask resolved questions
5. Continue the /domain-model session from where it stopped — do not re-ask resolved questions
@@ -0,0 +1,37 @@
# Deepening
How to deepen a cluster of shallow modules safely, given its dependencies. Assumes the vocabulary in [LANGUAGE.md](LANGUAGE.md) — **module**, **interface**, **seam**, **adapter**.
## Dependency categories
When assessing a candidate for deepening, classify its dependencies. The category determines how the deepened module is tested across its seam.
### 1. In-process
Pure computation, in-memory state, no I/O. Always deepenable — merge the modules and test through the new interface directly. No adapter needed.
### 2. Local-substitutable
Dependencies that have local test stand-ins (PGLite for Postgres, in-memory filesystem). Deepenable if the stand-in exists. The deepened module is tested with the stand-in running in the test suite. The seam is internal; no port at the module's external interface.
### 3. Remote but owned (Ports & Adapters)
Your own services across a network boundary (microservices, internal APIs). Define a **port** (interface) at the seam. The deep module owns the logic; the transport is injected as an **adapter**. Tests use an in-memory adapter. Production uses an HTTP/gRPC/queue adapter.
Recommendation shape: *"Define a port at the seam, implement an HTTP adapter for production and an in-memory adapter for testing, so the logic sits in one deep module even though it's deployed across a network."*
### 4. True external (Mock)
Third-party services (Stripe, Twilio, etc.) you don't control. The deepened module takes the external dependency as an injected port; tests provide a mock adapter.
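A minimal TypeScript sketch of categories 3 and 4 (names such as `PaymentPort` are hypothetical, not from the skill): the deep module owns the logic, and the transport is injected as an adapter.

```ts
// The port: everything the deep module needs to know about the dependency.
interface PaymentPort {
  charge(customerId: string, amountCents: number): Promise<{ ok: boolean }>;
}

// Production adapter: crosses the network.
class HttpPaymentAdapter implements PaymentPort {
  constructor(private baseUrl: string) {}
  async charge(customerId: string, amountCents: number) {
    const res = await fetch(`${this.baseUrl}/charges`, {
      method: "POST",
      body: JSON.stringify({ customerId, amountCents }),
    });
    return { ok: res.ok };
  }
}

// Test adapter: in-memory, records calls for assertions.
class InMemoryPaymentAdapter implements PaymentPort {
  charges: Array<{ customerId: string; amountCents: number }> = [];
  async charge(customerId: string, amountCents: number) {
    this.charges.push({ customerId, amountCents });
    return { ok: true };
  }
}

// The deep module: logic lives here, transport is injected at the seam.
class BillingModule {
  constructor(private payments: PaymentPort) {}
  async settleInvoice(customerId: string, totalCents: number) {
    return this.payments.charge(customerId, totalCents);
  }
}
```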
## Seam discipline
- **One adapter means a hypothetical seam. Two adapters means a real one.** Don't introduce a port unless at least two adapters are justified (typically production + test). A single-adapter seam is just indirection.
- **Internal seams vs external seams.** A deep module can have internal seams (private to its implementation, used by its own tests) as well as the external seam at its interface. Don't expose internal seams through the interface just because tests use them.
## Testing strategy: replace, don't layer
- Old unit tests on shallow modules become waste once tests at the deepened module's interface exist — delete them.
- Write new tests at the deepened module's interface. The **interface is the test surface**.
- Tests assert on observable outcomes through the interface, not internal state.
- Tests should survive internal refactors — they describe behaviour, not implementation. If a test has to change when the implementation changes, it's testing past the interface.
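A sketch of what a test at the interface can look like, reusing the hypothetical `PaymentPort` example from the dependency categories above (vitest is an assumption, not a requirement of the skill):

```ts
import { test, expect } from "vitest";

// Asserts observable outcomes through the interface; no internal state.
test("settling an invoice charges the customer once", async () => {
  const payments = new InMemoryPaymentAdapter();
  const billing = new BillingModule(payments);

  const result = await billing.settleInvoice("cust_1", 4200);

  expect(result.ok).toBe(true);
  expect(payments.charges).toEqual([
    { customerId: "cust_1", amountCents: 4200 },
  ]);
});
```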
@@ -0,0 +1,44 @@
# Interface Design
When the user wants to explore alternative interfaces for a chosen deepening candidate, use this parallel sub-agent pattern. Based on "Design It Twice" (Ousterhout) — your first idea is unlikely to be the best.
Uses the vocabulary in [LANGUAGE.md](LANGUAGE.md) — **module**, **interface**, **seam**, **adapter**, **leverage**.
## Process
### 1. Frame the problem space
Before spawning sub-agents, write a user-facing explanation of the problem space for the chosen candidate:
- The constraints any new interface would need to satisfy
- The dependencies it would rely on, and which category they fall into (see [DEEPENING.md](DEEPENING.md))
- A rough illustrative code sketch to ground the constraints — not a proposal, just a way to make the constraints concrete
Show this to the user, then immediately proceed to Step 2. The user reads and thinks while the sub-agents work in parallel.
### 2. Spawn sub-agents
Spawn 3+ sub-agents in parallel using the Agent tool. Each must produce a **radically different** interface for the deepened module.
Prompt each sub-agent with a separate technical brief (file paths, coupling details, dependency category from [DEEPENING.md](DEEPENING.md), what sits behind the seam). The brief is independent of the user-facing problem-space explanation in Step 1. Give each agent a different design constraint:
- Agent 1: "Minimise the interface — aim for 1-3 entry points max. Maximise leverage per entry point."
- Agent 2: "Maximise flexibility — support many use cases and extension."
- Agent 3: "Optimise for the most common caller — make the default case trivial."
- Agent 4 (if applicable): "Design around ports & adapters for cross-seam dependencies."
Include both [LANGUAGE.md](LANGUAGE.md) vocabulary and CONTEXT.md vocabulary in the brief so each sub-agent names things consistently with the architecture language and the project's domain language.
Each sub-agent outputs:
1. Interface (types, methods, params — plus invariants, ordering, error modes)
2. Usage example showing how callers use it
3. What the implementation hides behind the seam
4. Dependency strategy and adapters (see [DEEPENING.md](DEEPENING.md))
5. Trade-offs — where leverage is high, where it's thin
### 3. Present and compare
Present designs sequentially so the user can absorb each one, then compare them in prose. Contrast by **depth** (leverage at the interface), **locality** (where change concentrates), and **seam placement**.
After comparing, give your own recommendation: which design you think is strongest and why. If elements from different designs would combine well, propose a hybrid. Be opinionated — the user wants a strong read, not a menu.
+53
@@ -0,0 +1,53 @@
# Language
Shared vocabulary for every suggestion this skill makes. Use these terms exactly — don't substitute "component," "service," "API," or "boundary." Consistent language is the whole point.
## Terms
**Module**
Anything with an interface and an implementation. Deliberately scale-agnostic — applies equally to a function, class, package, or tier-spanning slice.
_Avoid_: unit, component, service.
**Interface**
Everything a caller must know to use the module correctly. Includes the type signature, but also invariants, ordering constraints, error modes, required configuration, and performance characteristics.
_Avoid_: API, signature (too narrow — those refer only to the type-level surface).
**Implementation**
What's inside a module — its body of code. Distinct from **Adapter**: a thing can be a small adapter with a large implementation (a Postgres repo) or a large adapter with a small implementation (an in-memory fake). Reach for "adapter" when the seam is the topic; "implementation" otherwise.
**Depth**
Leverage at the interface — the amount of behaviour a caller (or test) can exercise per unit of interface they have to learn. A module is **deep** when a large amount of behaviour sits behind a small interface. A module is **shallow** when the interface is nearly as complex as the implementation.
**Seam** _(from Michael Feathers)_
A place where you can alter behaviour without editing in that place. The *location* at which a module's interface lives. Choosing where to put the seam is its own design decision, distinct from what goes behind it.
_Avoid_: boundary (overloaded with DDD's bounded context).
**Adapter**
A concrete thing that satisfies an interface at a seam. Describes *role* (what slot it fills), not substance (what's inside).
**Leverage**
What callers get from depth. More capability per unit of interface they have to learn. One implementation pays back across N call sites and M tests.
**Locality**
What maintainers get from depth. Change, bugs, knowledge, and verification concentrate at one place rather than spreading across callers. Fix once, fixed everywhere.
## Principles
- **Depth is a property of the interface, not the implementation.** A deep module can be internally composed of small, mockable, swappable parts — they just aren't part of the interface. A module can have **internal seams** (private to its implementation, used by its own tests) as well as the **external seam** at its interface.
- **The deletion test.** Imagine deleting the module. If complexity vanishes, the module wasn't hiding anything (it was a pass-through). If complexity reappears across N callers, the module was earning its keep.
- **The interface is the test surface.** Callers and tests cross the same seam. If you want to test *past* the interface, the module is probably the wrong shape.
- **One adapter means a hypothetical seam. Two adapters means a real one.** Don't introduce a seam unless something actually varies across it.
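A tiny sketch of the deletion test (hypothetical names; the pass-through fails it, the deep module passes):

```ts
type User = { id: string; name: string };
interface Db {
  query(sql: string, params: unknown[]): Promise<User[]>;
}

// Shallow: the interface restates the implementation. Delete it and the
// complexity vanishes; it was a pass-through hiding nothing.
async function getUserById(db: Db, id: string): Promise<User | undefined> {
  const rows = await db.query("SELECT * FROM users WHERE id = $1", [id]);
  return rows[0];
}

// Deep: a small interface behind which caching, retries, and row mapping
// can live. Delete it and that complexity reappears across every caller.
interface UserStore {
  find(id: string): Promise<User | undefined>;
}
```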
## Relationships
- A **Module** has exactly one **Interface** (the surface it presents to callers and tests).
- **Depth** is a property of a **Module**, measured against its **Interface**.
- A **Seam** is where a **Module**'s **Interface** lives.
- An **Adapter** sits at a **Seam** and satisfies the **Interface**.
- **Depth** produces **Leverage** for callers and **Locality** for maintainers.
## Rejected framings
- **Depth as ratio of implementation-lines to interface-lines** (Ousterhout): rewards padding the implementation. We use depth-as-leverage instead.
- **"Interface" as the TypeScript `interface` keyword or a class's public methods**: too narrow — interface here includes every fact a caller must know.
- **"Boundary"**: overloaded with DDD's bounded context. Say **seam** or **interface**.
@@ -1,78 +0,0 @@
# Reference
## Dependency Categories
When assessing a candidate for deepening, classify its dependencies:
### 1. In-process
Pure computation, in-memory state, no I/O. Always deepenable — just merge the modules and test directly.
### 2. Local-substitutable
Dependencies that have local test stand-ins (e.g., PGLite for Postgres, in-memory filesystem). Deepenable if the test substitute exists. The deepened module is tested with the local stand-in running in the test suite.
### 3. Remote but owned (Ports & Adapters)
Your own services across a network boundary (microservices, internal APIs). Define a port (interface) at the module boundary. The deep module owns the logic; the transport is injected. Tests use an in-memory adapter. Production uses the real HTTP/gRPC/queue adapter.
Recommendation shape: "Define a shared interface (port), implement an HTTP adapter for production and an in-memory adapter for testing, so the logic can be tested as one deep module even though it's deployed across a network boundary."
### 4. True external (Mock)
Third-party services (Stripe, Twilio, etc.) you don't control. Mock at the boundary. The deepened module takes the external dependency as an injected port, and tests provide a mock implementation.
## Testing Strategy
The core principle: **replace, don't layer.**
- Old unit tests on shallow modules are waste once boundary tests exist — delete them
- Write new tests at the deepened module's interface boundary
- Tests assert on observable outcomes through the public interface, not internal state
- Tests should survive internal refactors — they describe behavior, not implementation
## Issue Template
<issue-template>
## Problem
Describe the architectural friction:
- Which modules are shallow and tightly coupled
- What integration risk exists in the seams between them
- Why this makes the codebase harder to navigate and maintain
## Proposed Interface
The chosen interface design:
- Interface signature (types, methods, params)
- Usage example showing how callers use it
- What complexity it hides internally
## Dependency Strategy
Which category applies and how dependencies are handled:
- **In-process**: merged directly
- **Local-substitutable**: tested with [specific stand-in]
- **Ports & adapters**: port definition, production adapter, test adapter
- **Mock**: mock boundary for external services
## Testing Strategy
- **New boundary tests to write**: describe the behaviors to verify at the interface
- **Old tests to delete**: list the shallow module tests that become redundant
- **Test environment needs**: any local stand-ins or adapters required
## Implementation Recommendations
Durable architectural guidance that is NOT coupled to current file paths:
- What the module should own (responsibilities)
- What it should hide (implementation details)
- What it should expose (the interface contract)
- How callers should migrate to the new interface
</issue-template>
+53 -53
@@ -1,76 +1,76 @@
---
name: improve-codebase-architecture
description: Explore a codebase to find opportunities for architectural improvement, focusing on making the codebase more testable by deepening shallow modules. Use when user wants to improve architecture, find refactoring opportunities, consolidate tightly-coupled modules, or make a codebase more AI-navigable.
description: Find deepening opportunities in a codebase, informed by the domain language in CONTEXT.md and the decisions in docs/adr/. Use when the user wants to improve architecture, find refactoring opportunities, consolidate tightly-coupled modules, or make a codebase more testable and AI-navigable.
---
# Improve Codebase Architecture
Explore a codebase like an AI would, surface architectural friction, discover opportunities for improving testability, and propose module-deepening refactors as GitHub issue RFCs.
Surface architectural friction and propose **deepening opportunities** — refactors that turn shallow modules into deep ones. The aim is testability and AI-navigability.
A **deep module** (John Ousterhout, "A Philosophy of Software Design") has a small interface hiding a large implementation. Deep modules are more testable, more AI-navigable, and let you test at the boundary instead of inside.
## Glossary
Use these terms exactly in every suggestion. Consistent language is the point — don't drift into "component," "service," "API," or "boundary." Full definitions in [LANGUAGE.md](LANGUAGE.md).
- **Module** — anything with an interface and an implementation (function, class, package, slice).
- **Interface** — everything a caller must know to use the module: types, invariants, error modes, ordering, config. Not just the type signature.
- **Implementation** — the code inside.
- **Depth** — leverage at the interface: a lot of behaviour behind a small interface. **Deep** = high leverage. **Shallow** = interface nearly as complex as the implementation.
- **Seam** — where an interface lives; a place behaviour can be altered without editing in place. (Use this, not "boundary.")
- **Adapter** — a concrete thing satisfying an interface at a seam.
- **Leverage** — what callers get from depth.
- **Locality** — what maintainers get from depth: change, bugs, knowledge concentrated in one place.
Key principles (see [LANGUAGE.md](LANGUAGE.md) for the full list):
- **Deletion test**: imagine deleting the module. If complexity vanishes, it was a pass-through. If complexity reappears across N callers, it was earning its keep.
- **The interface is the test surface.**
- **One adapter = hypothetical seam. Two adapters = real seam.**
This skill is _informed_ by the project's domain model — `CONTEXT.md` and any `docs/adr/`. The domain language gives names to good seams; ADRs record decisions the skill should not re-litigate. See [CONTEXT-FORMAT.md](../domain-model/CONTEXT-FORMAT.md) and [ADR-FORMAT.md](../domain-model/ADR-FORMAT.md).
## Process
### 1. Explore the codebase
### 1. Explore
Use the Agent tool with subagent_type=Explore to navigate the codebase naturally. Do NOT follow rigid heuristics — explore organically and note where you experience friction:
Read existing documentation first:
- Where does understanding one concept require bouncing between many small files?
- Where are modules so shallow that the interface is nearly as complex as the implementation?
- Where have pure functions been extracted just for testability, but the real bugs hide in how they're called?
- Where do tightly-coupled modules create integration risk in the seams between them?
- Which parts of the codebase are untested, or hard to test?
- `CONTEXT.md` (or `CONTEXT-MAP.md` + each `CONTEXT.md` in a multi-context repo)
- Relevant ADRs in `docs/adr/` (and any context-scoped `docs/adr/` directories)
The friction you encounter IS the signal.
If any of these files don't exist, proceed silently — don't flag their absence or suggest creating them upfront.
Then use the Agent tool with `subagent_type=Explore` to walk the codebase. Don't follow rigid heuristics — explore organically and note where you experience friction:
- Where does understanding one concept require bouncing between many small modules?
- Where are modules **shallow** — interface nearly as complex as the implementation?
- Where have pure functions been extracted just for testability, but the real bugs hide in how they're called (no **locality**)?
- Where do tightly-coupled modules leak across their seams?
- Which parts of the codebase are untested, or hard to test through their current interface?
Apply the **deletion test** to anything you suspect is shallow: would deleting it concentrate complexity, or just move it? A "yes, concentrates" is the signal you want.
### 2. Present candidates
Present a numbered list of deepening opportunities. For each candidate, show:
Present a numbered list of deepening opportunities. For each candidate:
- **Cluster**: Which modules/concepts are involved
- **Why they're coupled**: Shared types, call patterns, co-ownership of a concept
- **Dependency category**: See [REFERENCE.md](REFERENCE.md) for the four categories
- **Test impact**: What existing tests would be replaced by boundary tests
- **Files** — which files/modules are involved
- **Problem** — why the current architecture is causing friction
- **Solution** — plain English description of what would change
- **Benefits** — explained in terms of locality and leverage, and also in how tests would improve
**Use CONTEXT.md vocabulary for the domain, and [LANGUAGE.md](LANGUAGE.md) vocabulary for the architecture.** If `CONTEXT.md` defines "Order," talk about "the Order intake module" — not "the FooBarHandler," and not "the Order service."
**ADR conflicts**: if a candidate contradicts an existing ADR, only surface it when the friction is real enough to warrant revisiting the ADR. Mark it clearly (e.g. _"contradicts ADR-0007 — but worth reopening because…"_). Don't list every theoretical refactor an ADR forbids.
Do NOT propose interfaces yet. Ask the user: "Which of these would you like to explore?"
### 3. User picks a candidate
### 3. Grilling loop
### 4. Frame the problem space
Once the user picks a candidate, drop into a grilling conversation. Walk the design tree with them — constraints, dependencies, the shape of the deepened module, what sits behind the seam, what tests survive.
Before spawning sub-agents, write a user-facing explanation of the problem space for the chosen candidate:
Side effects happen inline as decisions crystallize:
- The constraints any new interface would need to satisfy
- The dependencies it would need to rely on
- A rough illustrative code sketch to make the constraints concrete — this is not a proposal, just a way to ground the constraints
Show this to the user, then immediately proceed to Step 5. The user reads and thinks about the problem while the sub-agents work in parallel.
### 5. Design multiple interfaces
Spawn 3+ sub-agents in parallel using the Agent tool. Each must produce a **radically different** interface for the deepened module.
Prompt each sub-agent with a separate technical brief (file paths, coupling details, dependency category, what's being hidden). This brief is independent of the user-facing explanation in Step 4. Give each agent a different design constraint:
- Agent 1: "Minimize the interface — aim for 1-3 entry points max"
- Agent 2: "Maximize flexibility — support many use cases and extension"
- Agent 3: "Optimize for the most common caller — make the default case trivial"
- Agent 4 (if applicable): "Design around the ports & adapters pattern for cross-boundary dependencies"
Each sub-agent outputs:
1. Interface signature (types, methods, params)
2. Usage example showing how callers use it
3. What complexity it hides internally
4. Dependency strategy (how deps are handled — see [REFERENCE.md](REFERENCE.md))
5. Trade-offs
Present designs sequentially, then compare them in prose.
After comparing, give your own recommendation: which design you think is strongest and why. If elements from different designs would combine well, propose a hybrid. Be opinionated — the user wants a strong read, not just a menu.
### 6. User picks an interface (or accepts recommendation)
### 7. Create GitHub issue
Create a refactor RFC as a GitHub issue using `gh issue create`. Use the template in [REFERENCE.md](REFERENCE.md). Do NOT ask the user to review before creating — just create it and share the URL.
- **Naming a deepened module after a concept not in `CONTEXT.md`?** Add the term to `CONTEXT.md` — same discipline as `/domain-model` (see [CONTEXT-FORMAT.md](../domain-model/CONTEXT-FORMAT.md)). Create the file lazily if it doesn't exist.
- **Sharpening a fuzzy term during the conversation?** Update `CONTEXT.md` right there.
- **User rejects the candidate with a load-bearing reason?** Offer an ADR, framed as: _"Want me to record this as an ADR so future architecture reviews don't re-suggest it?"_ Only offer when the reason would actually be needed by a future explorer to avoid re-suggesting the same thing — skip ephemeral reasons ("not worth it right now") and self-evident ones. See [ADR-FORMAT.md](../domain-model/ADR-FORMAT.md).
- **Want to explore alternative interfaces for the deepened module?** See [INTERFACE-DESIGN.md](INTERFACE-DESIGN.md).
-107
@@ -1,107 +0,0 @@
---
name: prd-to-plan
description: Turn a PRD into a multi-phase implementation plan using tracer-bullet vertical slices, saved as a local Markdown file in ./plans/. Use when user wants to break down a PRD, create an implementation plan, plan phases from a PRD, or mentions "tracer bullets".
---
# PRD to Plan
Break a PRD into a phased implementation plan using vertical slices (tracer bullets). Output is a Markdown file in `./plans/`.
## Process
### 1. Confirm the PRD is in context
The PRD should already be in the conversation. If it isn't, ask the user to paste it or point you to the file.
### 2. Explore the codebase
If you have not already explored the codebase, do so to understand the current architecture, existing patterns, and integration layers.
### 3. Identify durable architectural decisions
Before slicing, identify high-level decisions that are unlikely to change throughout implementation:
- Route structures / URL patterns
- Database schema shape
- Key data models
- Authentication / authorization approach
- Third-party service boundaries
These go in the plan header so every phase can reference them.
### 4. Draft vertical slices
Break the PRD into **tracer bullet** phases. Each phase is a thin vertical slice that cuts through ALL integration layers end-to-end, NOT a horizontal slice of one layer.
<vertical-slice-rules>
- Each slice delivers a narrow but COMPLETE path through every layer (schema, API, UI, tests)
- A completed slice is demoable or verifiable on its own
- Prefer many thin slices over few thick ones
- Do NOT include specific file names, function names, or implementation details that are likely to change as later phases are built
- DO include durable decisions: route paths, schema shapes, data model names
</vertical-slice-rules>
### 5. Quiz the user
Present the proposed breakdown as a numbered list. For each phase show:
- **Title**: short descriptive name
- **User stories covered**: which user stories from the PRD this addresses
Ask the user:
- Does the granularity feel right? (too coarse / too fine)
- Should any phases be merged or split further?
Iterate until the user approves the breakdown.
### 6. Write the plan file
Create `./plans/` if it doesn't exist. Write the plan as a Markdown file named after the feature (e.g. `./plans/user-onboarding.md`). Use the template below.
<plan-template>
# Plan: <Feature Name>
> Source PRD: <brief identifier or link>
## Architectural decisions
Durable decisions that apply across all phases:
- **Routes**: ...
- **Schema**: ...
- **Key models**: ...
- (add/remove sections as appropriate)
---
## Phase 1: <Title>
**User stories**: <list from PRD>
### What to build
A concise description of this vertical slice. Describe the end-to-end behavior, not layer-by-layer implementation.
### Acceptance criteria
- [ ] Criterion 1
- [ ] Criterion 2
- [ ] Criterion 3
---
## Phase 2: <Title>
**User stories**: <list from PRD>
### What to build
...
### Acceptance criteria
- [ ] ...
<!-- Repeat for each phase -->
</plan-template>
+17
@@ -0,0 +1,17 @@
# Research
Research notes informing the infrastructure plan for this repo.
- [superpowers.md](./superpowers.md) — `obra/superpowers`: a single skill bundle that ships as Claude Code, Codex App, Cursor, OpenCode, and Gemini plugins simultaneously by colocating manifests at the repo root. Themed grouping happens in a _separate_ curation repo (`obra/superpowers-marketplace`), not as a monorepo.
- [marketingskills.md](./marketingskills.md) — `coreyhaines31/marketingskills`: minimalist single-plugin marketplace, zero npm deps, all codegen via Node stdlib + bash. Notable patterns: `evals/evals.json` per skill, `VERSIONS.md` auto-update protocol, sync-skills.js GHA that auto-rewrites the marketplace and README on push.
## Cross-cutting takeaways
| Goal | superpowers | marketingskills | Implication |
| -------------------- | ---------------------------------------------------------------------- | ----------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `npx skills add` | not used | first-class | "Just be SKILL.md-spec compliant under `skills/`" — no work |
| Claude Code plugins | `.claude-plugin/` at root, themed groups via separate marketplace repo | single-plugin marketplace | Neither models a multi-plugin monorepo. Matt's plan (multiple plugins under `plugins/<name>/`, one marketplace.json listing them with `source: "./plugins/<name>"`) is novel. |
| Codex plugins | `.codex-plugin/plugin.json` (Codex App) + `.codex/INSTALL.md` (CLI) | implicit via AGENTS.md only | If Matt wants Codex App marketplace presence, copy superpowers' `.codex-plugin/`. If just CLI, AGENTS.md suffices. |
| Docs site | none | none | No prior art — Matt would be the first. |
| Build tooling | bash + jq, no package.json scripts | Node stdlib + bash, no deps | Both prove you don't need a build pipeline. Multi-plugin assembly may change that. |
| SKILL.md frontmatter | `name`, `description` only | `name`, `description`, `metadata.version` | Keep it minimal. |
+152
@@ -0,0 +1,152 @@
# Research: `coreyhaines31/marketingskills`
A marketing-themed Agent Skills repo by Corey Haines. The infrastructure is genuinely interesting and notably *minimal*: zero npm dependencies, all codegen done with Node stdlib + bash.
## Repo structure
```
.claude-plugin/ marketplace.json, plugin.json
.github/ workflows + sync script + issue/PR templates
skills/ 40 skill dirs (flat, one level deep)
tools/ clis/, integrations/, composio/, REGISTRY.md
AGENTS.md CLAUDE.md CONTRIBUTING.md README.md VERSIONS.md
validate-skills.sh validate-skills-official.sh
```
`CLAUDE.md` is a one-line file whose entire content is `AGENTS.md` (i.e. it points Claude at the cross-agent file, treating AGENTS.md as the source of truth).
Each skill directory is uniform:
```
skills/<name>/
├── SKILL.md
├── evals/evals.json # eval prompts + assertions
└── references/*.md # progressive-disclosure docs
```
No `scripts/` or `assets/`. Skills are flat (no nesting / sub-skills). **No `plugins/` or `dist/` directories** — the repo *is* the plugin, served from root.
## Plugin packaging
Ships as a Claude Code plugin marketplace via two manifests:
`.claude-plugin/marketplace.json`:
```json
{
"name": "marketingskills",
"owner": { "name": "Corey Haines", "url": "https://corey.co" },
"metadata": { "version": "1.9.0" },
"plugins": [
{ "name": "marketing-skills",
"description": "40 marketing skills...",
"source": "./" }
]
}
```
`.claude-plugin/plugin.json`:
```json
{ "name": "marketing-skills", "version": "1.9.0",
"skills": "./skills", "license": "MIT" }
```
**One marketplace, one plugin, all 40 skills bundled.** No themed sub-grouping into multiple plugins — the README's "categories" exist only as headings, not separate installable units. (Relevant for Matt's multi-plugin goal: this repo doesn't actually demonstrate a multi-plugin marketplace.)
VERSIONS.md notes the `plugin.json` was added explicitly so Claude Code's loader recognizes the skills directory.
## Build tooling
**No package.json, no TypeScript, no bundler, no dist build.** The only Node code is `.github/scripts/sync-skills.js` — a zero-dependency script (`fs`, `path` only) that:
1. Walks `skills/*/SKILL.md`
2. Parses YAML frontmatter with a hand-rolled regex (`/^---\n([\s\S]*?)\n---/`) — naïve key:value splitter, no real YAML lib
3. Rewrites the `plugins[0].skills` array in `marketplace.json` to a list of `./skills/<name>` paths
4. Updates the skill count in the plugin description (`/\d+ marketing skills/`)
5. Replaces the `<!-- SKILLS:START -->...<!-- SKILLS:END -->` block in README.md with a regenerated table (description truncated to 120 chars at a word boundary)
That's the entire codegen story. `validate-skills.sh` and `validate-skills-official.sh` are bash-only frontmatter linters using `sed`/`grep`.
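A sketch of that zero-dependency pattern (illustrative only, not the actual `sync-skills.js`):

```ts
import { readFileSync, readdirSync } from "node:fs";
import { join } from "node:path";

// Hand-rolled frontmatter extraction, as described above; no YAML library.
const FRONTMATTER = /^---\n([\s\S]*?)\n---/;

for (const dir of readdirSync("skills")) {
  const md = readFileSync(join("skills", dir, "SKILL.md"), "utf8");
  const match = FRONTMATTER.exec(md);
  if (!match) continue;

  // Naive key/value split; fine for single-line `name:` and `description:`.
  const meta: Record<string, string> = {};
  for (const line of match[1].split("\n")) {
    const idx = line.indexOf(":");
    if (idx > 0) meta[line.slice(0, idx).trim()] = line.slice(idx + 1).trim();
  }
  console.log(`./skills/${dir}`, meta.name);
}
```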
## Installation story
Six options listed in the README, in priority order:
1. **vercel-labs/skills CLI** — `npx skills add coreyhaines31/marketingskills [--skill page-cro copywriting] [--list]`. README claims this installs to `.agents/skills/` and symlinks `.claude/skills/` for Claude Code.
2. **Claude Code plugins** — `/plugin marketplace add coreyhaines31/marketingskills` then `/plugin install marketing-skills`.
3. Clone-and-copy.
4. Git submodule.
5. Fork.
6. **SkillKit** (`npx skillkit install ...`) for cross-agent install (Cursor/Copilot/etc).
**Codex support** is implicit: the repo ships `AGENTS.md` (the cross-agent spec file Codex reads) and the README claims compatibility with "Claude Code, OpenAI Codex, Cursor, Windsurf, and any agent that supports the Agent Skills spec." There is **no Codex-specific plugin manifest** — Codex compatibility comes purely from being SKILL.md-spec-compliant + AGENTS.md at root.
## Docs site
**There is no embedded docs site.** No `docs/`, no `site/`, no `website/`, no Docusaurus/Nextra/Astro config, `has_pages: false`. The README claims `homepage: https://marketing-skills.com` but it is not built from this repo.
The only generated docs artifact is the README skills table (built by `sync-skills.js`).
## CI / release
Two workflows:
- **`.github/workflows/sync-skills.yml`** — on push to `main` touching `skills/**`, runs `sync-skills.js` and commits the result back via `stefanzweifel/git-auto-commit-action@v7` as a bot user. This is the "build step" — it auto-rewrites `marketplace.json` and `README.md` whenever skills change.
- **`.github/workflows/validate-skill.yml`** — on push/PR touching `**/SKILL.md`. A `detect-changes` job computes the changed skill dirs via git diff and a jq-built JSON array, then a matrix `validate` job runs `Flash-Brew-Digital/validate-skill@v1` on each. Skips drafts and dependabot.
No release/publish workflow — versioning is tracked manually in `VERSIONS.md` (per-skill semver + dates) and in `marketplace.json`/`plugin.json`'s top-level `version`.
## SKILL.md format
Frontmatter is minimal:
```yaml
---
name: page-cro
description: When the user wants to optimize... [trigger phrases and "For X, see other-skill" cross-refs]
metadata:
version: 1.1.0
---
```
Conventions documented in `AGENTS.md`:
- `name`: 1-64 chars, lowercase a-z + digits + hyphens, must match dir
- `description`: 1-1024 chars, *must* include trigger phrases AND scope-boundary cross-references to sibling skills
- `metadata.version` only (no author/license per-skill)
- SKILL.md kept under 500 lines; details pushed into `references/`
Distinctive description style: every skill enumerates trigger-phrase-laden quotes ("CRO," "this page isn't converting," "my landing page sucks") plus explicit `For X, see other-skill` boundaries — clearly tuned for the Skill-tool dispatcher's keyword matching.
## Distinctive / novel patterns
1. **Per-skill `evals/evals.json`** — every skill has structured eval prompts with `expected_output` summary + a list of `assertions` strings. Not wired into CI but provides a dataset for offline eval runs.
2. **Auto-update protocol baked into `AGENTS.md`** — instructs the agent to fetch `VERSIONS.md` from raw.githubusercontent once per session, compare local versions, and surface a non-blocking notification if 2+ skills are stale or any has a major bump. Includes a "say 'update skills'" trigger that runs `git pull`. This is a memory-less, network-fetched update channel.
3. **`product-marketing-context` as a hub skill** — every other skill is documented to read `.agents/product-marketing-context.md` (with `.claude/` fallback) before doing anything. README diagram shows it as the root of a star topology.
4. **Claude Code-only escape hatch in AGENTS.md** — explicitly calls out that `` !`command` `` shell-injection syntax is Claude Code-only and **must not** be in the cross-agent SKILL.md files. Suggests local override in `.claude/skills/` if you want it. Clean pattern: keep skills cross-agent, document Claude-Code-only enhancements separately.
5. **`tools/` registry** — orthogonal to skills, not part of the plugin. 60+ zero-dep Node CLIs (`tools/clis/<vendor>.js`) plus 80+ markdown integration guides plus a Composio mapping. AGENTS.md tells the agent: skills *reference* tools by name, agent reads `tools/REGISTRY.md` and `tools/integrations/<tool>.md` on demand. Effectively a second progressive-disclosure layer beyond `references/`.
6. **No package.json / no JS deps** — the entire codegen + validation pipeline is `node` (stdlib only) + `bash`. Maximally portable.
7. **No agents, no hooks, no slash-commands** — README hints at `/page-cro` invocations, but no `commands/` directory exists. These are skill-name invocations the Claude Code plugin loader produces automatically.
## Takeaways for Matt's infra goals
- **Multi-plugin marketplace (goal 2)**: this repo is *not* a model for that — single-plugin marketplace. The marketplace.json schema does support multiple `plugins[]` entries pointing to different `source` paths, so for themed groups Matt would want each plugin to live in its own subdir (e.g. `plugins/architecture/`, `plugins/typescript/`) each with their own `skills/`. coreyhaines31 doesn't demonstrate this (see the sketch after this list).
- **Codex (goal 3)**: cheap — drop AGENTS.md at root, keep SKILL.md spec-compliant, claim compatibility. No separate manifest needed.
- **vercel-labs CLI (goal 1)**: zero work — once `skills/<name>/SKILL.md` exists with valid frontmatter, the CLI picks it up. coreyhaines31 added nothing extra.
- **Docs site (goal 4)**: not modeled here; this repo opts out and links to a separately-hosted marketing site.
- **Worth copying**: the `sync-skills.js` + GitHub Action pattern (auto-rewrite README table and marketplace skill list on push), the `VERSIONS.md` + auto-update protocol, the `evals/evals.json` per skill, and the AGENTS.md note about keeping `` !`command` `` out of cross-agent SKILL.md.
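For the multi-plugin shape mentioned above, a hypothetical `marketplace.json` sketch (field set inferred from the single-plugin example earlier; unverified against Claude Code's schema):

```json
{
  "name": "skills",
  "owner": { "name": "Matt Pocock" },
  "plugins": [
    { "name": "architecture", "source": "./plugins/architecture" },
    { "name": "typescript", "source": "./plugins/typescript" }
  ]
}
```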
## Key file paths
(all on `main` branch of `coreyhaines31/marketingskills`)
- `/.claude-plugin/marketplace.json`, `/.claude-plugin/plugin.json`
- `/.github/scripts/sync-skills.js`
- `/.github/workflows/sync-skills.yml`, `/.github/workflows/validate-skill.yml`
- `/AGENTS.md` (the canonical agent guide; `CLAUDE.md` just points to it)
- `/VERSIONS.md`
- `/skills/<name>/SKILL.md` + `/skills/<name>/evals/evals.json` + `/skills/<name>/references/*.md`
- `/validate-skills.sh`, `/validate-skills-official.sh`
+110
View File
@@ -0,0 +1,110 @@
# Research: `obra/superpowers`
A single plugin/skills bundle that ships itself as a Claude Code plugin, a Codex plugin, a Cursor plugin, an OpenCode plugin, and a Gemini extension *simultaneously* by colocating every harness's manifest at the repo root.
## Repo structure
```
.claude-plugin/ marketplace.json + plugin.json (Claude Code)
.codex-plugin/ plugin.json (Codex App)
.codex/ INSTALL.md (Codex CLI bootstrap docs)
.cursor-plugin/ plugin.json (Cursor)
.opencode/ INSTALL.md + plugins/superpowers.js (OpenCode)
gemini-extension.json (Gemini CLI)
agents/ code-reviewer.md
commands/ brainstorm.md, execute-plan.md, write-plan.md
hooks/ hooks.json, hooks-cursor.json, run-hook.cmd, session-start
skills/ 14 top-level SKILL.md folders (FLAT, not nested)
docs/ handwritten markdown — README.codex.md, README.opencode.md, plans/, specs/
scripts/ bump-version.sh, sync-to-codex-plugin.sh
tests/ brainstorm-server, claude-code, codex-plugin-sync, opencode, skill-triggering, subagent-driven-dev
assets/ icons, logos
package.json, AGENTS.md, CLAUDE.md, GEMINI.md, RELEASE-NOTES.md
```
No `dist/`, no `plugins/` subdir, no nested skills. All skills live at `skills/<kebab-name>/SKILL.md` (14 of them). Skills can have sibling reference files or a `references/` subdir.
## Plugin packaging — multi-harness, not multi-plugin
The repo ships **one logical plugin packaged six ways**:
- **Claude Code**: `.claude-plugin/plugin.json` + `.claude-plugin/marketplace.json` (the marketplace declares a single plugin named `superpowers` with `"source": "./"`, so the repo *is* its own dev marketplace).
- **Codex App**: `.codex-plugin/plugin.json` — much richer than the Claude one, includes an `interface` block (`displayName`, `shortDescription`, `defaultPrompt`, `brandColor`, `composerIcon`, `logo`, `category: "Coding"`); a hedged reconstruction appears after this list.
- **Cursor**: `.cursor-plugin/plugin.json` — declares `skills`, `agents`, `commands`, `hooks: ./hooks/hooks-cursor.json`.
- **OpenCode**: `.opencode/plugins/superpowers.js` — a real JS module that injects bootstrap context via system-prompt transform and parses SKILL.md frontmatter at runtime.
- **Codex CLI**: install via `git clone` + `ln -s skills ~/.agents/skills/superpowers` (no plugin manifest, uses native skill discovery).
- **Gemini CLI**: `gemini-extension.json` with `contextFileName: "GEMINI.md"`.
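A reconstruction of the Codex App manifest from the field names above (all values are placeholders; only the field names come from the actual manifest, and any fields beyond those listed are omitted):

```jsonc
{
  "name": "superpowers",
  "version": "x.y.z",                    // kept in sync by bump-version.sh
  "interface": {
    "displayName": "Superpowers",
    "shortDescription": "Skills for agentic engineering",
    "defaultPrompt": "...",              // placeholder; actual prompt not copied here
    "brandColor": "#RRGGBB",             // placeholder hex value
    "composerIcon": "assets/icon.png",
    "logo": "assets/logo.png",
    "category": "Coding"
  }
}
```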
The **separate marketplace repo** `obra/superpowers-marketplace` is where themed grouping happens. Its `.claude-plugin/marketplace.json` lists 7 plugins by remote git URL: `superpowers`, `superpowers-chrome`, `elements-of-style`, `episodic-memory`, `superpowers-lab`, `superpowers-developing-for-claude-code`, `superpowers-dev`. Each entry is `{name, source: {source: "url", url}, description, version, strict: true}`. So the marketplace is a thin curation layer pointing at independent plugin repos — **no monorepo, no themed sub-plugins inside one repo**.
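Expanded into full JSON, two of those entries might look like this (URLs, descriptions, and versions are placeholders; the placement of `ref` for `superpowers-dev` is an assumption based on the CI/release section below):

```jsonc
{
  "plugins": [
    {
      "name": "superpowers",
      "source": { "source": "url", "url": "https://github.com/obra/superpowers" },
      "description": "Core skills bundle",   // placeholder description
      "version": "x.y.z",
      "strict": true
    },
    {
      "name": "superpowers-dev",
      // the dev entry pins ref: "dev"; its exact nesting inside source is assumed
      "source": { "source": "url", "url": "https://github.com/obra/superpowers", "ref": "dev" },
      "description": "Development channel",
      "version": "x.y.z",
      "strict": true
    }
  ]
}
```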
## Build tooling
`package.json` is minimal — just `name`, `version`, `type: "module"`, `main` pointing at the OpenCode plugin file. **No scripts, no dependencies, no build step, no TypeScript.** Manifests are handwritten and kept in sync by a custom version-bump tool.
Two interesting scripts:
1. **`scripts/bump-version.sh`** — driven by `.version-bump.json`, which lists every file containing a version field (`package.json`, `.claude-plugin/plugin.json`, `.cursor-plugin/plugin.json`, `.codex-plugin/plugin.json`, `.claude-plugin/marketplace.json` at `plugins.0.version`, `gemini-extension.json`). Has `--check` (drift detection) and `--audit` (greps the repo for stale version strings). Pure bash + jq. A plausible shape for the config is sketched after this list.
2. **`scripts/sync-to-codex-plugin.sh`** — ~300-line bash tool that rsyncs the upstream tree into a fork (`prime-radiant-inc/openai-codex-plugins/plugins/superpowers/`), opens a PR via `gh`. Has `--dry-run`, `--bootstrap`, `--local`. Deterministic: same upstream SHA → identical PR diff. This is how the Codex App marketplace gets updated.
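The `.version-bump.json` schema isn't reproduced in these notes; a plausible shape, given that it maps each file to a version-field path, would be:

```jsonc
{
  // field names here are guesses; only the file list and the jq-style
  // "plugins.0.version" path are documented above
  "files": [
    { "path": "package.json", "field": "version" },
    { "path": ".claude-plugin/plugin.json", "field": "version" },
    { "path": ".cursor-plugin/plugin.json", "field": "version" },
    { "path": ".codex-plugin/plugin.json", "field": "version" },
    { "path": ".claude-plugin/marketplace.json", "field": "plugins.0.version" },
    { "path": "gemini-extension.json", "field": "version" }
  ]
}
```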
## Installation story
**No `npx` installer of its own.** Installation is per-harness:
- Claude Code: `/plugin marketplace add obra/superpowers-marketplace` then `/plugin install superpowers@superpowers-marketplace`, OR Anthropic's official marketplace `/plugin install superpowers@claude-plugins-official`.
- Codex App: search & install via in-app plugin UI.
- Codex CLI: clone + symlink (manual).
- Cursor: `/add-plugin superpowers`.
- OpenCode: tell agent to fetch `INSTALL.md`.
- Gemini: `gemini extensions install https://github.com/obra/superpowers`.
- Copilot CLI: `copilot plugin marketplace add obra/superpowers-marketplace`.
Vercel-labs/skills is **not mentioned anywhere**.
## Docs site
**There is no docs site.** No Docusaurus/Nextra/Astro/Starlight, no GitHub Pages (`has_pages: false`). `docs/` is a flat folder of handwritten markdown plus design specs dated YYYY-MM-DD. All discovery happens via the README plus per-harness INSTALL.md files.
## CI / release
`.github/` contains only `FUNDING.yml`, `ISSUE_TEMPLATE/`, and `PULL_REQUEST_TEMPLATE.md` — **no GitHub Actions workflows**. Releases appear to be manual: bump versions with the script, push a tag, and the marketplace repo points at git URLs (the `superpowers-dev` entry pins `ref: "dev"`).
## SKILL.md format
Frontmatter is intentionally minimal — only `name` and `description`:
```yaml
---
name: test-driven-development
description: Use when implementing any feature or bugfix, before writing implementation code
---
```
Bodies use heavy XML-style emphasis tags (`<EXTREMELY-IMPORTANT>`, `<SUBAGENT-STOP>`, `<important-reminder>`) and prescriptive uppercase rules. Frontmatter has no `tags`, no `allowed-tools`, and no per-skill `version` field.
## Distinctive patterns
- **Bootstrap via SessionStart hook**: `hooks/hooks.json` registers a `SessionStart` hook (matchers `startup|clear|compact`) that runs `hooks/run-hook.cmd session-start`, which reads `skills/using-superpowers/SKILL.md` and injects it as additional system context — this is how skills "trigger automatically" without the user opting in (config sketch after this list).
- **`using-superpowers` as meta-skill**: forces the agent to invoke the Skill tool *before any reply, including clarifying questions*.
- **OpenCode runtime SKILL.md parser**: `.opencode/plugins/superpowers.js` reimplements frontmatter extraction inline ("avoid dependency on skills-core for bootstrap").
- **Tests for skill triggering**: `tests/skill-triggering/`, `tests/codex-plugin-sync/`, `tests/opencode/` — actual integration tests for whether agents invoke the right skill.
- **Multi-harness CLAUDE.md/AGENTS.md/GEMINI.md** at root, one per agent's convention.
- **Spec-driven development** visible in `docs/superpowers/specs/`.
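A sketch of the SessionStart registration in `hooks/hooks.json`, following Claude Code's hook-config shape (the matcher string and command come from the notes above; the exact nesting is recalled from the format, not copied from the repo):

```jsonc
{
  "hooks": {
    "SessionStart": [
      {
        // fires on new sessions, /clear, and post-compaction
        "matcher": "startup|clear|compact",
        "hooks": [
          {
            "type": "command",
            // path resolution relative to the plugin root is assumed
            "command": "hooks/run-hook.cmd session-start"
          }
        ]
      }
    ]
  }
}
```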
## Implications for Matt's repo
- **Multi-plugin themed marketplace**: superpowers' pattern is **separate plugin repos + one curation repo with `.claude-plugin/marketplace.json`** listing them by URL — *not* a monorepo. If Matt wants a monorepo, he'd be inventing a different pattern.
- **Codex support**: cheapest path is `.codex-plugin/plugin.json` at root (Codex App) plus `.codex/INSTALL.md` for CLI clone+symlink. No build step needed.
- **Docs site**: superpowers offers no precedent — Matt would be ahead of it.
- **Build tooling**: superpowers proves you can run a popular skills plugin with **zero npm scripts** and just two bash scripts (version bump + cross-repo sync). The `.version-bump.json` config-driven approach is worth copying.
- **SKILL.md conventions**: keep frontmatter to `name` + `description`; lean on body prose for behavior.
## Key URLs
- https://github.com/obra/superpowers/blob/main/.claude-plugin/marketplace.json
- https://github.com/obra/superpowers/blob/main/.codex-plugin/plugin.json
- https://github.com/obra/superpowers/blob/main/.opencode/plugins/superpowers.js
- https://github.com/obra/superpowers/blob/main/scripts/bump-version.sh
- https://github.com/obra/superpowers/blob/main/scripts/sync-to-codex-plugin.sh
- https://github.com/obra/superpowers/blob/main/hooks/hooks.json
- https://github.com/obra/superpowers-marketplace/blob/main/.claude-plugin/marketplace.json
+12 -21
View File
@@ -1,19 +1,17 @@
---
name: prd-to-issues
description: Break a PRD into independently-grabbable GitHub issues using tracer-bullet vertical slices. Use when user wants to convert a PRD to issues, create implementation tickets, or break down a PRD into work items.
name: to-issues
description: Break a plan, spec, or PRD into independently-grabbable GitHub issues using tracer-bullet vertical slices. Use when user wants to convert a plan into issues, create implementation tickets, or break down work into issues.
---
# PRD to Issues
# To Issues
Break a PRD into independently-grabbable GitHub issues using vertical slices (tracer bullets).
Break a plan into independently-grabbable GitHub issues using vertical slices (tracer bullets).
## Process
### 1. Locate the PRD
### 1. Gather context
Ask the user for the PRD GitHub issue number (or URL).
If the PRD is not already in your context window, fetch it with `gh issue view <number>` (with comments).
Work from whatever is already in the conversation context. If the user passes a GitHub issue number or URL as an argument, fetch it with `gh issue view <number>` (with comments).
### 2. Explore the codebase (optional)
@@ -21,7 +19,7 @@ If you have not already explored the codebase, do so to understand the current s
### 3. Draft vertical slices
Break the PRD into **tracer bullet** issues. Each issue is a thin vertical slice that cuts through ALL integration layers end-to-end, NOT a horizontal slice of one layer.
Break the plan into **tracer bullet** issues. Each issue is a thin vertical slice that cuts through ALL integration layers end-to-end, NOT a horizontal slice of one layer.
Slices may be 'HITL' or 'AFK'. HITL slices require human interaction, such as an architectural decision or a design review. AFK slices can be implemented and merged without human interaction. Prefer AFK over HITL where possible.
@@ -38,7 +36,7 @@ Present the proposed breakdown as a numbered list. For each slice, show:
- **Title**: short descriptive name
- **Type**: HITL / AFK
- **Blocked by**: which other slices (if any) must complete first
- **User stories covered**: which user stories from the PRD this addresses
- **User stories covered**: which user stories this addresses (if the source material has them)
Ask the user:
@@ -56,13 +54,13 @@ For each approved slice, create a GitHub issue using `gh issue create`. Use the
Create issues in dependency order (blockers first) so you can reference real issue numbers in the "Blocked by" field.
<issue-template>
## Parent PRD
## Parent
#<prd-issue-number>
#<parent-issue-number> (if the source was a GitHub issue, otherwise omit this section)
## What to build
A concise description of this vertical slice. Describe the end-to-end behavior, not layer-by-layer implementation. Reference specific sections of the parent PRD rather than duplicating content.
A concise description of this vertical slice. Describe the end-to-end behavior, not layer-by-layer implementation.
## Acceptance criteria
@@ -76,13 +74,6 @@ A concise description of this vertical slice. Describe the end-to-end behavior,
Or "None - can start immediately" if no blockers.
## User stories addressed
Reference by number from the parent PRD:
- User story 3
- User story 7
</issue-template>
Do NOT close or modify the parent PRD issue.
Do NOT close or modify any parent issue.
+7 -9
View File
@@ -1,23 +1,21 @@
---
name: write-a-prd
description: Create a PRD through user interview, codebase exploration, and module design, then submit as a GitHub issue. Use when user wants to write a PRD, create a product requirements document, or plan a new feature.
name: to-prd
description: Turn the current conversation context into a PRD and submit it as a GitHub issue. Use when user wants to create a PRD from the current context.
---
This skill will be invoked when the user wants to create a PRD. You may skip steps if you don't consider them necessary.
This skill takes the current conversation context and codebase understanding and produces a PRD. Do NOT interview the user — just synthesize what you already know.
1. Ask the user for a long, detailed description of the problem they want to solve and any potential ideas for solutions.
## Process
2. Explore the repo to verify their assertions and understand the current state of the codebase.
1. Explore the repo to understand the current state of the codebase, if you haven't already.
3. Interview the user relentlessly about every aspect of this plan until you reach a shared understanding. Walk down each branch of the design tree, resolving dependencies between decisions one-by-one.
4. Sketch out the major modules you will need to build or modify to complete the implementation. Actively look for opportunities to extract deep modules that can be tested in isolation.
2. Sketch out the major modules you will need to build or modify to complete the implementation. Actively look for opportunities to extract deep modules that can be tested in isolation.
A deep module (as opposed to a shallow module) is one which encapsulates a lot of functionality in a simple, testable interface which rarely changes.
Check with the user that these modules match their expectations. Check with the user which modules they want tests written for.
5. Once you have a complete understanding of the problem and solution, use the template below to write the PRD. The PRD should be submitted as a GitHub issue.
3. Write the PRD using the template below and submit it as a GitHub issue.
<prd-template>
+1
View File
@@ -1,6 +1,7 @@
---
name: ubiquitous-language
description: Extract a DDD-style ubiquitous language glossary from the current conversation, flagging ambiguities and proposing canonical terms. Saves to UBIQUITOUS_LANGUAGE.md. Use when user wants to define domain terms, build a glossary, harden terminology, create a ubiquitous language, or mentions "domain model" or "DDD".
disable-model-invocation: true
---
# Ubiquitous Language
+7
View File
@@ -0,0 +1,7 @@
---
name: zoom-out
description: Tell the agent to zoom out and give broader context or a higher-level perspective. Use when you're unfamiliar with a section of code or need to understand how it fits into the bigger picture.
disable-model-invocation: true
---
I don't know this area of code well. Go up a layer of abstraction. Give me a map of all the relevant modules and callers.