init commit

This commit is contained in:
medunes
2026-01-04 20:40:07 +01:00
commit 60a6e52449
27 changed files with 1061 additions and 0 deletions

.gitignore

@@ -0,0 +1 @@
.idea

01-concurrent-aggregator/.gitignore

@@ -0,0 +1 @@
.idea


@@ -0,0 +1,48 @@
# Kata 01: The Fail-Fast Data Aggregator
**Target Idioms:** Concurrency Control (`errgroup`), Context Propagation, Functional Options
**Difficulty:** 🟡 Intermediate
## 🧠 The "Why"
In other languages, you might use `Promise.all` or strict thread pools to fetch data in parallel. In Go, seasoned developers often start with `sync.WaitGroup`, but quickly realize it lacks two critical features for production: **Error Propagation** and **Context Cancellation**.
If you spawn 10 goroutines and the first one fails, `WaitGroup` blindly waits for the other 9 to finish. **Idiomatic Go fails fast.**
## 🎯 The Scenario
You are building a **User Dashboard Backend**. To render the dashboard, you must fetch data from two independent, mock microservices:
1. **Profile Service** (Returns "Name: Alice")
2. **Order Service** (Returns "Orders: 5")
You need to fetch these in parallel to reduce latency. However, if *either* fails, or if the global timeout is reached, the entire operation must abort immediately to save resources.
## 🛠 The Challenge
Create a `UserAggregator` struct and a method `Aggregate(id int)` that orchestrates this fetching.
### 1. Functional Requirements
* [ ] The aggregator must be configurable (timeout, logger) without a massive constructor.
* [ ] Both services must be queried concurrently.
* [ ] The result should combine both outputs: `"User: Alice | Orders: 5"`.
### 2. The "Idiomatic" Constraints (Pass/Fail Criteria)
To pass this kata, you **must** strictly adhere to these rules:
* [ ] **NO `sync.WaitGroup`:** You must use `golang.org/x/sync/errgroup`.
* [ ] **NO "Parameter Soup":** You must use the **Functional Options Pattern** for the constructor (e.g., `New(WithTimeout(2s))`).
* [ ] **Context is King:** You must pass `context.Context` as the first argument to your methods.
* [ ] **Cleanup:** If the Profile service fails, the Order service request must be cancelled (via Context) immediately.
* [ ] **Modern Logging:** Use `log/slog` for structured logging.
## 🧪 Self-Correction (Test Yourself)
Run your code against these edge cases:
1. **The "Slow Poke":**
* Set your aggregator timeout to `1s`.
* Mock one service to take `2s`.
* **Pass Condition:** Does your function return `context deadline exceeded` after exactly 1s?
2. **The "Domino Effect":**
* Mock the Profile Service to return an error immediately.
* Mock the Order Service to take 10 seconds.
* **Pass Condition:** Does your function return the error *immediately*? (If it waits 10s, you failed context cancellation).
## 📚 Resources
* [Go Concurrency: errgroup](https://pkg.go.dev/golang.org/x/sync/errgroup)
* [Functional Options for Friendly APIs](https://dave.cheney.net/2014/10/17/functional-options-for-friendly-apis)


@@ -0,0 +1,3 @@
module concurrent-aggregator
go 1.25.0


@@ -0,0 +1,53 @@
# Kata 02: The Concurrent Map with Sharded Locks
**Target Idioms:** Concurrency Safety, Map Sharding, `sync.RWMutex`, Avoiding `sync.Map` Pitfalls
**Difficulty:** 🟡 Intermediate
## 🧠 The "Why"
Seasoned developers coming from Java might reach for `ConcurrentHashMap`-style solutions, while Pythonistas might think of GIL-protected dictionaries. In Go, you have three main options:
1. **Naive sync.Mutex around a map** (bottlenecks under high concurrency)
2. **sync.Map** (optimized for specific "append-only, read-heavy" cases, but opaque and often misused)
3. **Sharded maps** (manual control, maximized throughput)
The Go way is explicit control: if you know your access patterns, build a solution that fits. This kata forces you to understand *when* and *why* to choose sharding over sync.Map.
## 🎯 The Scenario
You're building a real-time **API Rate Limiter** that tracks request counts per user ID. The system handles 50k+ RPS with 95% reads (checking limits) and 5% writes (incrementing counters). A single mutex would serialize all operations, which is unacceptable. `sync.Map` might work but obscures memory usage and lacks type safety.
## 🛠 The Challenge
Implement `ShardedMap[K comparable, V any]` with configurable shard count that provides safe concurrent access.
### 1. Functional Requirements
* [ ] Type-safe generic implementation (Go 1.18+)
* [ ] `Get(key K) (V, bool)` - returns value and existence flag
* [ ] `Set(key K, value V)` - inserts or updates
* [ ] `Delete(key K)` - removes key
* [ ] `Keys() []K` - returns all keys (order doesn't matter)
* [ ] Configurable number of shards at construction
### 2. The "Idiomatic" Constraints (Pass/Fail Criteria)
* [ ] **NO `sync.Map`**: Implement sharding manually with `[]map[K]V` and `[]sync.RWMutex`
* [ ] **Smart Sharding**: Use `fnv64` hashing for key distribution (don't rely on Go's random map iteration)
* [ ] **Read Optimization**: Use `RLock()` for `Get()` operations when safe
* [ ] **Zero Allocation Hot-Path**: `Get()` and `Set()` must not allocate memory in the critical section (no string conversion, no boxing)
* [ ] **Clean `Keys()`**: Implement without data races, even while concurrent writes occur
## 🧪 Self-Correction (Test Yourself)
1. **The Contention Test**:
- Run 8 goroutines doing only `Set()` operations with sequential keys
- With 1 shard: Should see heavy contention (use `go test -bench=. -cpuprofile` to verify)
- With 64 shards: Should see near-linear scaling
2. **The Memory Test**:
- Store 1 million `int` keys with `interface{}` values
- **Fail Condition**: If your solution uses more than 50MB extra memory vs baseline map
- **Hint**: Avoid `string(key)` conversions; use type-safe hashing
3. **The Race Test**:
- Run `go test -race` with concurrent read/write/delete operations
- Any race condition = automatic failure
## 📚 Resources
* [How the Go Runtime Implements Maps Efficiently (Without Generics)](https://dave.cheney.net/2018/05/29/how-the-go-runtime-implements-maps-efficiently-without-generics)
* [When to use sync.Map](https://dave.cheney.net/2017/07/30/should-i-use-sync-map)
* [Practical Sharded Maps](https://github.com/orcaman/concurrent-map)


@@ -0,0 +1,56 @@
# Kata 03: The Graceful Shutdown Server
**Target Idioms:** Context Propagation, Signal Handling, Channel Coordination, Resource Cleanup
**Difficulty:** 🔴 Advanced
## 🧠 The "Why"
In other ecosystems, graceful shutdown is often framework magic (Spring's `@PreDestroy`, Django's `close()`). Go forces explicit lifecycle management. The mismatch: developers used to automatic cleanup often leak goroutines, drop in-flight requests, or corrupt data during shutdown.
The Go way: **Own your lifecycle**. Every goroutine you spawn must have a controlled shutdown path.
## 🎯 The Scenario
Build an **HTTP Server with Background Worker** that must:
1. Accept HTTP requests (handled by a pool of worker goroutines)
2. Run a background cache warmer every 30 seconds
3. Maintain persistent database connections
4. Shutdown within 10 seconds when receiving SIGTERM, completing in-flight requests but rejecting new ones
## 🛠 The Challenge
Implement `Server` struct with `Start() error` and `Stop(ctx context.Context) error` methods.
### 1. Functional Requirements
* [ ] HTTP server on configurable port with request timeout
* [ ] Worker pool (configurable size) processes requests via channel
* [ ] Background cache warmer ticks every 30s (use `time.Ticker`)
* [ ] Database connection pool (mock with `net.Conn`)
* [ ] SIGTERM/SIGINT triggers graceful shutdown
* [ ] Shutdown completes within deadline or forces exit
### 2. The "Idiomatic" Constraints (Pass/Fail Criteria)
* [ ] **Single Context Tree**: Root `context.Context` passed to `Start()`, canceled on shutdown
* [ ] **Channel Coordination**: Use `chan struct{}` for worker pool shutdown, not boolean flags
* [ ] **Proper Ticker Cleanup**: `defer ticker.Stop()` with select in goroutine
* [ ] **Dependency Order**: Shutdown in reverse order (stop accepting → drain workers → stop warmer → close DB)
* [ ] **No `os.Exit()` in business logic**: Shutdown should be testable without process termination
## 🧪 Self-Correction (Test Yourself)
1. **The Sudden Death Test**:
- Send 100 requests, immediately send SIGTERM
- **Pass**: Server completes in-flight requests (not all 100), logs "shutting down", closes cleanly
- **Fail**: Server accepts new requests after signal, leaks goroutines, or crashes
2. **The Slow Leak Test**:
- Run server for 5 minutes with 1 request/second
- Send SIGTERM, wait 15 seconds
- **Pass**: `go test` shows no goroutine leaks (use `runtime.NumGoroutine()`)
- **Fail**: Any increase in goroutine count from start to finish
3. **The Timeout Test**:
- Start long-running request (sleep 20s)
- Send SIGTERM with 5s timeout context
- **Pass**: Forces shutdown after 5s, logs "shutdown timeout"
- **Fail**: Waits full 20s or deadlocks
## 📚 Resources
* [Go Blog: Context](https://go.dev/blog/context)
* [Graceful Shutdown in Go](https://medium.com/honestbee-tw-engineer/gracefully-shutdown-in-go-http-server-5f5e6b83da5a)
* [Signal Handling](https://medium.com/@marcus.olsson/writing-a-go-app-with-graceful-shutdown-5de1d2c6de96)


@@ -0,0 +1,58 @@
# Kata 04: The Zero-Allocation JSON Parser
**Target Idioms:** Performance Optimization, `json.RawMessage`, Streaming Parsers, Buffer Reuse
**Difficulty:** 🟡 Intermediate
## 🧠 The "Why"
Developers from dynamic languages often parse JSON by unmarshaling entire documents into `map[string]interface{}` or generic structs. In high-throughput Go services, this creates:
1. Massive memory churn (GC pressure)
2. Unnecessary allocations for unused fields
3. Lost type safety
The Go way: **Parse only what you need, reuse everything**. This kata teaches you to treat JSON as a stream, not a document.
## 🎯 The Scenario
You're processing **10MB/s of IoT sensor data** with JSON like:
```json
{"sensor_id": "temp-1", "timestamp": 1234567890, "readings": [22.1, 22.3, 22.0], "metadata": {...}}
```
You only need `sensor_id` and the first reading value. Traditional unmarshal would allocate for all fields and the entire readings array.
## 🛠 The Challenge
Implement `SensorParser` that extracts specific fields without full unmarshaling.
### 1. Functional Requirements
* [ ] Parse `sensor_id` (string) and first `readings` value (float64) from JSON stream
* [ ] Process `io.Reader` input (could be HTTP body, file, or network stream)
* [ ] Handle malformed JSON gracefully (skip bad records, continue parsing)
* [ ] Benchmark target: under 100ns and 0 allocations per parsed object
### 2. The "Idiomatic" Constraints (Pass/Fail Criteria)
* [ ] **NO `encoding/json.Unmarshal`**: Use `json.Decoder` with `Token()` streaming
* [ ] **Reuse Buffers**: Use `sync.Pool` for `bytes.Buffer` or `json.Decoder`
* [ ] **Early Exit**: Stop parsing once required fields are found
* [ ] **Type Safety**: Return concrete struct `SensorData{sensorID string, value float64}`, not `interface{}`
* [ ] **Memory Limit**: Process arbitrarily large streams in constant memory (<1MB heap)
## 🧪 Self-Correction (Test Yourself)
1. **The Allocation Test**:
```go
go test -bench=. -benchmem -count=5
```
**Pass**: `allocs/op` = 0 for parsing loop
**Fail**: Any allocations in hot path
2. **The Stream Test**:
- Pipe 1GB of JSON through your parser (mock with repeating data)
- **Pass**: Memory usage flatlines after warm-up
- **Fail**: Memory grows linearly with input size
3. **The Corruption Test**:
- Input: `{"sensor_id": "a"} {"bad json here` (malformed second object)
- **Pass**: Returns first object, logs/skips second, doesn't panic
- **Fail**: Parser crashes or stops processing entirely
## 📚 Resources
* [Go JSON Stream Parsing](https://ahmet.im/blog/golang-json-stream-parse/)
* [json.RawMessage Tutorial](https://www.sohamkamani.com/golang/json/#raw-messages)
* [Advanced JSON Techniques](https://eli.thegreenplace.net/2019/go-json-cookbook/)


@@ -0,0 +1,42 @@
# Kata 05: The Context-Aware Error Propagator
**Target Idioms:** Error Wrapping, Context-Aware Errors, Custom Error Types
**Difficulty:** 🟡 Intermediate
## 🧠 The "Why"
Developers from dynamic languages often treat errors as simple strings. Java developers wrap exceptions in layers of inheritance. **Go's error philosophy is different:** errors are values that should carry context and be inspectable without string parsing. The unidiomatic pattern is to `log.Printf("error: %v", err)` and return `nil`, which destroys debugging context. Idiomatic Go preserves the original error while adding layers of context.
## 🎯 The Scenario
You're building a **cloud storage gateway** that interacts with multiple services: authentication, metadata database, and blob storage. When a file upload fails, operators need to know exactly which layer failed and why: was it an auth timeout? A database deadlock? A storage quota exceeded? Your error handling must preserve this information while being safe for logging.
## 🛠 The Challenge
Create a service that uploads files to cloud storage with proper error handling.
### 1. Functional Requirements
* [ ] Implement three layers: `AuthService`, `MetadataService`, `StorageService`
* [ ] Each layer can fail with specific error types
* [ ] Return errors that expose the failure point and original cause
### 2. The "Idiomatic" Constraints (Pass/Fail Criteria)
To pass this kata, you **must** strictly adhere to these rules:
* [ ] **NO string-based error inspection:** You must use `%w` with `fmt.Errorf` for wrapping
* [ ] **Custom Error Types:** Create specific error types for each service layer (e.g., `AuthError`, `StorageQuotaError`)
* [ ] **Context-Aware Errors:** Errors must implement `Timeout()` and `Temporary()` methods where appropriate
* [ ] **Safe Logging:** Errors must redact sensitive information (API keys, credentials) when logged
* [ ] **Error Unwrapping:** Your errors must support `errors.Is()` and `errors.As()` for programmatic inspection
## 🧪 Self-Correction (Test Yourself)
Test your error handling with these scenarios:
1. **The "Sensitive Data Leak":**
* Force an auth error with a mock API key
* **Fail Condition:** If `fmt.Sprint(err)` contains the API key string
2. **The "Lost Context":**
* Wrap an `AuthError` three times through different layers
* **Fail Condition:** If `errors.As(err, &AuthError{})` returns false
3. **The "Timeout Confusion":**
* Create a timeout error in the storage layer
* **Fail Condition:** If `errors.Is(err, context.DeadlineExceeded)` returns false
## 📚 Resources
* [Go 1.13 Error Wrapping](https://go.dev/blog/go1.13-errors)
* [Error Handling in Upspin](https://commandcenter.blogspot.com/2017/12/error-handling-in-upspin.html)
* [Don't just check errors, handle them gracefully](https://dave.cheney.net/2016/04/27/dont-just-check-errors-handle-them-gracefully)


@@ -0,0 +1,50 @@
# Kata 06: The Interface-Based Middleware Chain
**Target Idioms:** Interface Design, Middleware Pattern, Composition over Inheritance
**Difficulty:** 🔴 Advanced
## 🧠 The "Why"
Object-oriented developers often reach for class hierarchies and inheritance when building pipelines. In Go, **interfaces enable composition over inheritance**. The unidiomatic approach is to create a `BaseHandler` class with virtual methods. The idiomatic Go way uses small interfaces composed together. This pattern powers `http.Handler`, `io.Reader`, and many standard library patterns - but developers from other ecosystems struggle to see when to split interfaces.
## 🎯 The Scenario
You're building a **real-time analytics pipeline** for user events. Each event must pass through multiple processing stages: validation, enrichment, filtering, and finally storage. New stages will be added frequently. The pipeline must be:
- Modular (add/remove stages without rewriting core logic)
- Observable (track metrics at each stage)
- Recoverable (continue processing after non-critical errors)
## 🛠 The Challenge
Create a middleware chain for processing user events.
### 1. Functional Requirements
* [ ] Process events through a configurable chain of middleware
* [ ] Each middleware can modify, filter, or reject events
* [ ] Provide metrics (counters, latencies) for each stage
* [ ] Support graceful shutdown with context cancellation
### 2. The "Idiomatic" Constraints (Pass/Fail Criteria)
To pass this kata, you **must** strictly adhere to these rules:
* [ ] **Small Interfaces:** Define a `Processor` interface with a single method: `Process(context.Context, Event) ([]Event, error)`
* [ ] **Middleware Composition:** Each middleware must implement the `Processor` interface and wrap another `Processor`
* [ ] **Functional Options:** Configure middleware using functional options (e.g., `WithMetricsCollector()`)
* [ ] **Context Propagation:** All middleware must respect context cancellation
* [ ] **Zero Global State:** No package-level variables for configuration or state
* [ ] **Testable by Design:** Each middleware must be unit-testable in isolation
## 🧪 Self-Correction (Test Yourself)
Test your implementation against these scenarios:
1. **The "Infinite Loop":**
* Create a middleware that generates 2 events from 1 input
* Chain it with a filtering middleware
* **Fail Condition:** If events multiply uncontrollably or memory usage grows exponentially
2. **The "Context Leak":**
* Add a middleware with a 10s timeout
* Cancel the context after 1s
* **Fail Condition:** If any middleware continues processing after context cancellation
3. **The "Interface Pollution":**
* Try to add a new middleware that needs access to database connections
* **Fail Condition:** If you had to modify the core `Processor` interface to add database methods
## 📚 Resources
* [Go Proverbs by Rob Pike](https://go-proverbs.github.io/)
* [The Go Blog: Lexical Scanning in Go](https://blog.golang.org/lexical-scanning)
* [Standard Library Inspiration: net/http.Handler](https://pkg.go.dev/net/http#Handler)
* [Small Interfaces in the Standard Library](https://medium.com/@cep21/small-interfaces-in-go-1e912a7a7883)


@@ -0,0 +1,48 @@
# Kata 07: The Rate-Limited Fan-Out Client
**Target Idioms:** Rate Limiting (`x/time/rate`), Bounded Concurrency (`x/sync/semaphore`), HTTP Client Hygiene, Context Cancellation
**Difficulty:** 🟡 Intermediate
## 🧠 The "Why"
In many ecosystems, you slap a “rate limit middleware” in front of a thread pool and call it a day. In Go, people often:
- spawn too many goroutines (no backpressure),
- forget per-request cancellation,
- misuse `http.DefaultClient` (timeouts/transport reuse),
- implement “sleep-based” rate limiting (jittery, wasteful).
This kata forces **explicit control** over *rate*, *in-flight concurrency*, and *cancellation*.
## 🎯 The Scenario
You're building an internal service that needs to fetch user widgets from a downstream API:
- API allows **10 requests/sec** with bursts up to **20**
- Your service must also cap concurrency at **max 8 in-flight** requests
- If any request fails, cancel everything immediately (fail-fast), and return the first error.
## 🛠 The Challenge
Implement `FanOutClient` with:
- `FetchAll(ctx context.Context, userIDs []int) (map[int][]byte, error)`
### 1. Functional Requirements
- [ ] Requests must respect a **QPS rate limit** + **burst**.
- [ ] Requests must run concurrently but never exceed **MaxInFlight**.
- [ ] Results returned as `map[userID]payload`.
- [ ] On first error, cancel remaining work and return immediately.
### 2. The "Idiomatic" Constraints (Pass/Fail Criteria)
- [ ] **Must** use `golang.org/x/time/rate.Limiter`.
- [ ] **Must** use `golang.org/x/sync/semaphore.Weighted` (or equivalent semaphore pattern) for MaxInFlight.
- [ ] **Must** use `http.NewRequestWithContext`.
- [ ] **Must NOT** use `time.Sleep` for rate limiting.
- [ ] **Must** reuse a single `http.Client` (with a configured `Transport` + `Timeout`).
- [ ] Logging via `log/slog` (structured fields: userID, attempt, latency).
## 🧪 Self-Correction (Test Yourself)
- **If you spawn `len(userIDs)` goroutines:** you failed backpressure.
- **If cancellation doesn't stop waiting callers:** you failed context propagation.
- **If QPS is enforced using `Sleep`:** you failed rate limiting.
- **If you use `http.DefaultClient`:** you failed HTTP hygiene.
## 📚 Resources
- https://pkg.go.dev/golang.org/x/time/rate
- https://pkg.go.dev/golang.org/x/sync/semaphore
- https://go.dev/src/net/http/client.go
- https://go.dev/src/net/http/transport.go


@@ -0,0 +1,47 @@
# Kata 08: The Retry Policy That Respects Context
**Target Idioms:** Retry Classification, Error Wrapping (`%w`), Timer Reuse, Context Deadlines
**Difficulty:** 🟡 Intermediate
## 🧠 The "Why"
In other languages, retries are often hidden in SDKs. In Go, it's easy to write:
- infinite retry loops,
- retry-on-any-error (bad),
- retry that ignores context cancellation (worse),
- retry implemented with repeated `time.Sleep` (hard to test, wasteful).
This kata makes you implement a **testable**, **context-aware** retry loop.
## 🎯 The Scenario
You call a flaky downstream service. You should retry only on **transient** failures:
- `net.Error` with `Timeout() == true`
- HTTP 429 / 503 (if you model HTTP)
- sentinel `ErrTransient`
Everything else must fail immediately.
## 🛠 The Challenge
Implement:
- `type Retryer struct { ... }`
- `func (r *Retryer) Do(ctx context.Context, fn func(context.Context) error) error`
### 1. Functional Requirements
- [ ] Retries up to `MaxAttempts`.
- [ ] Uses exponential backoff: `base * 2^attempt` with a max cap.
- [ ] Optional jitter (deterministic in tests).
- [ ] Stops immediately on `ctx.Done()`.
### 2. The "Idiomatic" Constraints (Pass/Fail Criteria)
- [ ] **Must NOT** call `time.Sleep` inside the retry loop.
- [ ] **Must** use a `time.Timer` and `Reset` it (timer reuse).
- [ ] **Must** wrap the final error with context (attempt count) using `%w`.
- [ ] **Must** classify errors using `errors.Is` / `errors.As`.
## 🧪 Self-Correction (Test Yourself)
- **If context cancellation only stops after the sleep:** you failed.
- **If you retry non-transient errors:** you failed classification.
- **If you can't test it without real time:** inject time/jitter sources.
## 📚 Resources
- https://go.dev/blog/go1.13-errors
- https://pkg.go.dev/errors
- https://pkg.go.dev/time


@@ -0,0 +1,42 @@
# Kata 09: The Cache Stampede Shield
**Target Idioms:** `singleflight`, TTL Cache, DoChan + Context Select, Lock Avoidance
**Difficulty:** 🔴 Advanced
## 🧠 The "Why"
In many stacks, caching is “just Redis”. In Go, an in-process cache is common, but people:
- hold locks while calling the loader (deadly),
- refresh the same key N times concurrently (stampede),
- can't cancel waiters cleanly.
This kata is about **deduplicating in-flight loads** and making waiters **context-cancellable**.
## 🎯 The Scenario
You have expensive per-key loads (e.g., DB or remote API). If 200 goroutines ask for the same key at once:
- loader must run **once**
- others must wait (or return on ctx cancel)
- TTL must be enforced
## 🛠 The Challenge
Implement:
- `type Cache[K comparable, V any] struct { ... }`
- `Get(ctx context.Context, key K, loader func(context.Context) (V, error)) (V, error)`
### 1. Functional Requirements
- [ ] Return cached value if not expired.
- [ ] If expired/missing: load once, share result to all callers.
- [ ] Callers must be able to stop waiting via `ctx.Done()`.
### 2. The "Idiomatic" Constraints (Pass/Fail Criteria)
- [ ] **Must** use `golang.org/x/sync/singleflight.Group`.
- [ ] **Must** use `DoChan` + `select` on `ctx.Done()` to cancel waiters.
- [ ] **Must NOT** hold a mutex while calling `loader`.
- [ ] Errors must be wrapped with key context using `%w`.
## 🧪 Self-Correction (Test Yourself)
- **If 200 goroutines trigger 200 loads:** you failed (no stampede protection).
- **If a canceled context still blocks waiting:** you failed.
- **If you lock around loader execution:** you failed (contention / deadlocks).
## 📚 Resources
- https://pkg.go.dev/golang.org/x/sync/singleflight
- https://go.dev/blog/go1.13-errors


@@ -0,0 +1,45 @@
# Kata 10: The Worker Pool With Backpressure and Joined Errors
**Target Idioms:** Worker Pools, Channel Ownership, `errors.Join`, Context Cancellation
**Difficulty:** 🔴 Advanced
## 🧠 The "Why"
Many devs bring “thread pool” instincts and end up with:
- goroutines that never exit,
- unbounded queues,
- “first error wins” even when you want a summary,
- ad-hoc error channels without cleanup.
This kata forces correctness: **bounded work**, **clean shutdown**, and **error aggregation**.
## 🎯 The Scenario
You process a stream of jobs (e.g., image resizing). You want:
- fixed number of workers
- bounded queue (backpressure)
- either fail-fast OR collect all errors (configurable)
## 🛠 The Challenge
Implement:
- `type Pool struct { ... }`
- `Run(ctx context.Context, jobs <-chan Job) error`
Where `Job` is `func(context.Context) error`.
### 1. Functional Requirements
- [ ] `N` workers process from `jobs`.
- [ ] Optional `StopOnFirstError`.
- [ ] If not fail-fast: return `errors.Join(errs...)` after draining.
### 2. The "Idiomatic" Constraints (Pass/Fail Criteria)
- [ ] **Must** use `errors.Join` for aggregation.
- [ ] **Must** respect `ctx.Done()` (workers exit).
- [ ] **Must** close internal channels from the sender side only.
- [ ] **Must** guarantee no goroutine leak when `jobs` closes early or ctx cancels.
## 🧪 Self-Correction (Test Yourself)
- **If workers keep running after ctx cancel:** failed.
- **If you can deadlock by closing channels from the wrong side:** failed.
- **If you return before draining in non-fail-fast mode:** failed.
## 📚 Resources
- https://go.dev/doc/go1.20
- https://go.dev/src/errors/join.go


@@ -0,0 +1,36 @@
# Kata 11: The NDJSON Reader That Survives Long Lines
**Target Idioms:** Streaming I/O (`io.Reader`), `bufio.Reader` vs `Scanner`, Handling `ErrBufferFull`, Low Allocation
**Difficulty:** 🟡 Intermediate
## 🧠 The "Why"
Seasoned devs reach for `bufio.Scanner` and it “works”… until production sends a line > 64K and you get:
`bufio.Scanner: token too long`.
This kata forces you to implement a streaming reader that can handle **arbitrarily large lines** without falling over.
## 🎯 The Scenario
You ingest NDJSON logs from stdin or a file. Lines can be huge (hundreds of KB). You must process line-by-line.
## 🛠 The Challenge
Implement:
- `func ReadNDJSON(ctx context.Context, r io.Reader, handle func([]byte) error) error`
### 1. Functional Requirements
- [ ] Call `handle(line)` for each line (without the trailing newline).
- [ ] Stop immediately on `handle` error.
- [ ] Stop immediately on `ctx.Done()`.
### 2. The "Idiomatic" Constraints (Pass/Fail Criteria)
- [ ] **Must NOT** rely on default `bufio.Scanner` behavior.
- [ ] **Must** use `bufio.Reader` and correctly handle `ReadSlice('\n')` returning `ErrBufferFull`.
- [ ] **Must** avoid per-line allocations where possible (reuse buffers).
- [ ] Wrap errors with line number context using `%w`.
## 🧪 Self-Correction (Test Yourself)
- **If a 200KB line crashes with “token too long”:** you failed.
- **If cancellation doesn't stop promptly:** you failed.
- **If you allocate a new buffer each line:** you failed the low-allocation goal.
## 📚 Resources
- https://pkg.go.dev/bufio
- https://pkg.go.dev/io


@@ -0,0 +1,42 @@
# Kata 12: The sync.Pool Buffer Middleware
**Target Idioms:** `sync.Pool`, Avoiding GC Pressure, `bytes.Buffer` Reset, Benchmarks (`-benchmem`)
**Difficulty:** 🔴 Advanced
## 🧠 The "Why"
In Go, performance regressions often come from allocation/GC churn, not “slow CPU”.
People use `sync.Pool` incorrectly:
- pooling long-lived objects (wrong),
- forgetting to reset buffers (data leak),
- storing huge buffers back into the pool (memory bloat).
This kata is about **safe pooling** for high-throughput handlers.
## 🎯 The Scenario
You're writing an HTTP middleware that:
- reads up to 16KB of request body for audit logging
- must not allocate per-request in the hot path
## 🛠 The Challenge
Implement a middleware:
- `func AuditBody(max int, next http.Handler) http.Handler`
### 1. Functional Requirements
- [ ] Read up to `max` bytes of request body (do not consume beyond `max`).
- [ ] Log the captured bytes with `slog` fields.
- [ ] Pass the request downstream intact (body still readable).
### 2. The "Idiomatic" Constraints (Pass/Fail Criteria)
- [ ] **Must** use `sync.Pool` to reuse buffers.
- [ ] **Must** `Reset()`/clear buffers before putting back.
- [ ] **Must** bound memory: never keep buffers larger than `max` in the pool.
- [ ] Provide a benchmark showing reduced allocations (`go test -bench . -benchmem`).
## 🧪 Self-Correction (Test Yourself)
- **If a request leaks previous request content:** you failed (no reset).
- **If allocations are ~O(requests):** you failed pooling.
- **If buffers grow unbounded and stay in pool:** you failed memory bounds.
## 📚 Resources
- https://pkg.go.dev/sync
- https://go.dev/doc/gc-guide
- https://go.dev/blog/pprof


@@ -0,0 +1,37 @@
# Kata 13: The Filesystem-Agnostic Config Loader
**Target Idioms:** `io/fs` abstraction, `fs.WalkDir`, Testability via `fstest.MapFS`, `embed` readiness
**Difficulty:** 🟡 Intermediate
## 🧠 The "Why"
In Go, passing `"/etc/app/config"` all over the place hard-couples your logic to the OS.
Idiomatic Go uses `fs.FS` so you can:
- load from disk,
- load from embedded files,
- load from a ZIP filesystem,
- unit test without touching the real filesystem.
## 🎯 The Scenario
Your CLI loads configuration fragments from a directory tree, merges them, and prints a final config report.
## 🛠 The Challenge
Implement:
- `func LoadConfigs(fsys fs.FS, root string) (map[string][]byte, error)`
### 1. Functional Requirements
- [ ] Walk `root` recursively and read all `*.conf` files.
- [ ] Return a map of `path -> content`.
- [ ] Reject invalid paths cleanly.
### 2. The "Idiomatic" Constraints (Pass/Fail Criteria)
- [ ] **Must** accept `fs.FS` (not `os` paths) in the core API.
- [ ] **Must** use `fs.WalkDir` and `fs.ReadFile`.
- [ ] **Must NOT** use `os.Open` / `filepath.Walk` inside the core loader.
- [ ] Unit tests must use `testing/fstest.MapFS`.
## 🧪 Self-Correction (Test Yourself)
- **If you can't test without real files:** you failed.
- **If your loader only works on disk:** you failed the abstraction goal.
## 📚 Resources
- https://pkg.go.dev/io/fs
- https://go.dev/src/embed/embed.go


@@ -0,0 +1,43 @@
# Kata 14: The Leak-Free Scheduler
**Target Idioms:** `time.Timer`/`time.Ticker`, Stop/Reset patterns, Jitter, Context Cancellation
**Difficulty:** 🟡 Intermediate
## 🧠 The "Why"
Scheduling in Go is deceptively easy until you ship:
- goroutines that never stop,
- overlapping executions,
- ticker drift and backlog,
- resource retention from careless timer usage.
This kata makes you build a scheduler that is **predictable** and **stoppable**.
## 🎯 The Scenario
You need to periodically refresh a local cache:
- every 5s, with ±10% jitter
- do not overlap refreshes
- stop immediately on shutdown
## 🛠 The Challenge
Implement:
- `type Scheduler struct { ... }`
- `func (s *Scheduler) Run(ctx context.Context, job func(context.Context) error) error`
### 1. Functional Requirements
- [ ] Run `job` periodically (interval + jitter).
- [ ] Never run `job` concurrently with itself.
- [ ] Exit on `ctx.Done()`.
### 2. The "Idiomatic" Constraints (Pass/Fail Criteria)
- [ ] **Must NOT** use `time.Tick` (no stop control).
- [ ] **Must** use a `time.Timer` or `time.Ticker` with correct stop/reset.
- [ ] **Must** propagate context into `job`.
- [ ] Log job duration and errors via `slog`.
## 🧪 Self-Correction (Test Yourself)
- **If `job` overlap occurs:** you failed.
- **If cancel doesn't stop quickly:** you failed.
- **If goroutines remain after exit:** you failed.
## 📚 Resources
- https://pkg.go.dev/time
- https://go.dev/wiki/Go123Timer


@@ -0,0 +1,51 @@
# Kata 15: The Go Test Harness (Subtests, Parallel, Fuzz)
**Target Idioms:** Table-Driven Tests, `t.Run`, `t.Parallel`, Fuzzing (`go test -fuzz`)
**Difficulty:** 🟡 Intermediate
## 🧠 The "Why"
Developers often write:
- one-off tests with repetition,
- unsafe parallel subtests (loop variable capture),
- no fuzz testing for parsers/sanitizers.
Idiomatic Go testing is:
- table-driven,
- readable failures,
- parallel where safe,
- fuzzed for edge cases.
## 🎯 The Scenario
You're implementing a sanitizer:
- `func NormalizeHeaderKey(s string) (string, error)`
Rules:
- only ASCII letters/digits/hyphen allowed
- normalize to canonical header form (e.g., `content-type` -> `Content-Type`)
- reject invalid input
## 🛠 The Challenge
Write:
1) The implementation, and
2) A test suite that proves it's solid.
### 1. Functional Requirements
- [ ] Canonicalize valid inputs.
- [ ] Reject invalid characters.
- [ ] Stable behavior (same input => same output).
### 2. The "Idiomatic" Constraints (Pass/Fail Criteria)
- [ ] Tests must be **table-driven** with `t.Run`.
- [ ] Use **parallel subtests** correctly (no loop var capture bugs).
- [ ] Include a **fuzz test** that:
- never panics,
- never returns a string containing invalid characters,
- roundtrips canonical form (calling Normalize twice is idempotent).
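One plausible implementation of the sanitizer, with a plain table-driven check in `main` (a real submission would wrap each row in `t.Run` with `t.Parallel()` inside the subtest; the exact canonicalization rules here are an assumption):

```go
package main

import (
	"errors"
	"fmt"
	"strings"
)

// NormalizeHeaderKey allows ASCII letters/digits/hyphen and canonicalizes
// to header form: the first letter and each letter after a hyphen upper-cased.
func NormalizeHeaderKey(s string) (string, error) {
	if s == "" {
		return "", errors.New("empty header key")
	}
	b := []byte(strings.ToLower(s))
	upper := true
	for i, c := range b {
		switch {
		case c >= 'a' && c <= 'z':
			if upper {
				b[i] = c - 'a' + 'A'
			}
			upper = false
		case c >= '0' && c <= '9':
			upper = false
		case c == '-':
			upper = true
		default:
			return "", fmt.Errorf("invalid character %q", c)
		}
	}
	return string(b), nil
}

func main() {
	// Table-driven cases; in a real test each row becomes
	// t.Run(tc.name, func(t *testing.T) { t.Parallel(); ... }).
	cases := []struct {
		in, want string
		wantErr  bool
	}{
		{"content-type", "Content-Type", false},
		{"X-REQUEST-ID", "X-Request-Id", false},
		{"bad header", "", true},
	}
	for _, tc := range cases {
		got, err := NormalizeHeaderKey(tc.in)
		fmt.Println(got, (err != nil) == tc.wantErr && got == tc.want)
	}
}
```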
## 🧪 Self-Correction (Test Yourself)
- **If parallel subtests flake:** you likely captured the loop variable.
- **If fuzzing finds panics:** you missed an edge case.
## 📚 Resources
- https://go.dev/blog/subtests
- https://go.dev/wiki/TableDrivenTests
- https://go.dev/doc/security/fuzz/
- https://go.dev/doc/tutorial/fuzz
# Kata 16: The HTTP Client Hygiene Wrapper
**Target Idioms:** `net/http` Transport Reuse, Timeouts, Context-First APIs, Response Body Draining
**Difficulty:** 🔴 Advanced
## 🧠 The "Why"
"Works locally" HTTP code in Go often fails in prod because people:
- use `http.DefaultClient` with no timeouts,
- create a new client/transport per request (connection churn),
- forget to close bodies (leaks + no keep-alive reuse),
- don't drain bodies (prevents connection reuse).
This kata is about building a small internal SDK the **Go way**.
## 🎯 The Scenario
Your service calls a downstream API that sometimes returns large error bodies and sometimes hangs.
You need:
- strict timeouts,
- proper cancellation,
- safe connection reuse,
- structured logs.
## 🛠 The Challenge
Implement:
- `type APIClient struct { ... }`
- `func (c *APIClient) GetJSON(ctx context.Context, url string, out any) error`
### 1. Functional Requirements
- [ ] Use `http.NewRequestWithContext`.
- [ ] Decode JSON on 2xx responses into `out`.
- [ ] On non-2xx: read up to N bytes of body and return an error including status code.
### 2. The "Idiomatic" Constraints (Pass/Fail Criteria)
- [ ] **Must NOT** use `http.DefaultClient`.
- [ ] **Must** configure timeouts (`Client.Timeout` and/or transport-level timeouts).
- [ ] **Must** reuse a single `Transport` (connection pooling).
- [ ] **Must** `defer resp.Body.Close()`.
- [ ] **Must** drain (at least partially) error bodies to allow connection reuse.
- [ ] Use `slog` with fields: method, url, status, latency.
## 🧪 Self-Correction (Test Yourself)
- **If connections spike under load:** you probably rebuild transports.
- **If keep-alives don't work:** you likely didn't drain/close the body.
- **If hangs occur:** you likely lack correct timeout configuration.
## 📚 Resources
- https://go.dev/src/net/http/client.go
- https://go.dev/src/net/http/transport.go
- https://blog.cloudflare.com/the-complete-guide-to-golang-net-http-timeouts/
# Kata 17: The Context-Aware Channel Sender (No Leaked Producers)
**Target Idioms:** Pipeline Cancellation, Select-on-Send, Channel Ownership, Goroutine Leak Prevention
**Difficulty:** 🟡 Intermediate
## 🧠 The "Why"
A goroutine sending on a channel blocks until a receiver is ready (unless buffered space is available). If the receiver exits early (timeout, HTTP cancel, upstream error), producers can block forever and leak.
Idiomatic Go fixes this by:
- threading `context.Context` through the pipeline
- **selecting on every send** (`case out <- v` vs `case <-ctx.Done()`), as recommended in Go's pipeline cancellation patterns and real leak write-ups.
## 🎯 The Scenario
You're building a data pipeline step that fetches N URLs concurrently and streams results downstream. If the request is canceled (client disconnect, global timeout), **all fetchers must stop immediately** and no goroutine may remain blocked on `out <- result`.
## 🛠 The Challenge
Implement:
- `type DataFetcher struct { ... }`
- `func (f *DataFetcher) Fetch(ctx context.Context, urls []string) <-chan Result`
Where:
- `Result` contains `URL`, `Body []byte`, `Err error` (or similar).
### 1. Functional Requirements
- [ ] Start concurrent fetchers for all URLs (or bounded concurrency if you choose).
- [ ] Send results as they complete (order doesn't matter).
- [ ] Stop promptly on `ctx.Done()`.
- [ ] Close the output channel exactly once after all producers exit.
- [ ] Return partial results that already completed before cancellation.
### 2. The "Idiomatic" Constraints (Pass/Fail Criteria)
- [ ] **Every send uses select:** no bare `out <- x`.
- [ ] **Channel ownership:** only the producer side closes `out`.
- [ ] **No goroutine leaks:** all goroutines exit when ctx is canceled.
- [ ] **No double close:** prove it structurally (single closer goroutine). Avoid `sync.Once` unless you can justify it.
- [ ] **Buffer choice is intentional:** if you buffer, document why and how you chose the size.
### 3. Hints (Allowed Tools)
- You may use `errgroup` or a simple worker pattern, but the key is: **send must be cancel-aware**.
- If you do bounded concurrency, prefer `x/sync/semaphore` or a worker pool (but don't turn this kata into a rate-limiter kata).
## 🧪 Self-Correction (Test Yourself)
1. **Forgotten Sender**
- Start 50 fetchers, consume only 1 result, then cancel.
- **Pass:** goroutine count returns near baseline quickly (use `runtime.NumGoroutine()` as a sanity check).
2. **Cancellation Before First Receive**
- Cancel ctx immediately after calling `Fetch`.
- **Pass:** no goroutine blocks trying to send.
3. **Close Discipline**
- Cancel ctx from multiple places.
- **Pass:** no `panic: close of closed channel`.
## 📚 Resources
- https://go.dev/blog/pipelines
- https://www.ardanlabs.com/blog/2018/11/goroutine-leaks-the-forgotten-sender.html
# Kata 18: embed.FS Dev/Prod Switch Without Handler Forks
**Target Idioms:** `embed`, `io/fs`, Build Tags, `fs.Sub`, Same Handler Code Path
**Difficulty:** 🟡 Intermediate
## 🧠 The "Why"
Embedding assets is great for production (single binary), but terrible for frontend iteration if every CSS tweak needs a rebuild.
Idiomatic Go solves this with:
- compile-time selection via build tags
- a shared `fs.FS` abstraction so handler code doesn't branch on "dev/prod".
## 🎯 The Scenario
You run a small internal dashboard:
- Prod: ship a single binary (assets embedded).
- Dev: designers update `static/` and `templates/` live without recompiling.
## 🛠 The Challenge
Create a server that serves:
- templates from `templates/`
- static assets from `static/`
### 1. Functional Requirements
- [ ] `GET /` renders an HTML template.
- [ ] `GET /static/...` serves static files.
- [ ] Dev mode serves from disk; prod mode serves embedded.
- [ ] Handler code is identical in both modes.
### 2. The "Idiomatic" Constraints (Pass/Fail Criteria)
- [ ] **Build tags:** two files:
- `assets_dev.go` with `//go:build dev`
- `assets_prod.go` with `//go:build !dev`
- [ ] **Return `fs.FS`:** `func Assets() (templates fs.FS, static fs.FS, err error)`
- [ ] **Use `fs.Sub`:** exported FS must have *clean roots* (no `static/static/...` path bugs).
- [ ] **No runtime env checks in handlers:** mode selection must be compile-time.
- [ ] **Single `http.FileServer` setup:** no duplicated handler logic for dev vs prod.
## 🧪 Self-Correction (Test Yourself)
1. **Live Reload**
- Build with `-tags dev`.
- Modify a CSS file and refresh.
- **Pass:** change shows without rebuild.
2. **Binary Portability**
- Build without tags.
- Delete `static/` and `templates/` from disk.
- **Pass:** server still serves assets/templates.
3. **Prefix Correctness**
- Request `/static/app.css`.
- **Pass:** works in both modes (no 404 due to prefix mismatch).
## 📚 Resources
- https://pkg.go.dev/embed
- https://pkg.go.dev/io/fs
- https://pkg.go.dev/io/fs#Sub
# Kata 19: The Cleanup Chain (defer + LIFO + Error Preservation)
**Target Idioms:** `defer` Discipline, Named Returns, Error Composition (`errors.Join`), Close/Rollback Ordering
**Difficulty:** 🟡 Intermediate
## 🧠 The "Why"
`defer` is easy to misuse:
- deferring in loops (resource spikes),
- ignoring `Close()` / `Rollback()` errors,
- losing the original failure when cleanup also fails,
- wrong cleanup ordering (commit then rollback nonsense).
Idiomatic Go keeps cleanup local, ordered, and preserves important errors.
## 🎯 The Scenario
You implement `BackupDatabase`:
- open output file
- connect DB
- begin transaction
- stream rows to file
- commit
If anything fails, you must close/rollback what was already acquired.
## 🛠 The Challenge
Implement:
- `func BackupDatabase(ctx context.Context, dbURL, filename string) (err error)`
Use mock interfaces for DB + Tx + Rows if you want (recommended).
### 1. Functional Requirements
- [ ] Open file for writing.
- [ ] Connect to DB.
- [ ] Begin Tx.
- [ ] Write data (simulate streaming).
- [ ] Commit on success.
- [ ] On failure: rollback + close resources in correct order.
### 2. The "Idiomatic" Constraints (Pass/Fail Criteria)
- [ ] **Defer cleanup immediately after acquisition.**
- [ ] **No manual cleanup paths** except by controlling flags (e.g., `committed bool`) used by deferred funcs.
- [ ] **Preserve both errors:** if main operation fails and cleanup fails too, return a combined error (`errors.Join`).
- [ ] **Named return `err`** so defers can amend it safely.
- [ ] **No defer-in-loop for per-row resources:** if your mock has per-row closers, show the correct pattern.
## 🧪 Self-Correction (Test Yourself)
1. **Tx Begin Fails**
- Make `Begin()` error.
- **Pass:** file + db connection still close.
2. **Commit Fails + Close Fails**
- Make `Commit()` return error and also make `file.Close()` return error.
- **Pass:** returned error clearly contains both (use `errors.Join`).
3. **No FD Leak**
- Run 1000 times.
- **Pass:** file descriptors don't grow.
## 📚 Resources
- https://go.dev/blog/defer-panic-and-recover
- https://go.dev/doc/go1.20 (errors.Join)
# Kata 20: The "nil != nil" Interface Trap (Typed nil Errors)
**Target Idioms:** Interface Semantics, Typed nil Pitfall, Safe Error Returns, `errors.As`
**Difficulty:** 🔴 Advanced
## 🧠 The "Why"
In Go, an interface value is only nil when **both** its dynamic type and value are nil.
If you return a **typed nil pointer** (e.g., `(*MyError)(nil)`) as an `error`, the interface has a non-nil type, so `err != nil` becomes true even though the pointer inside is nil.
This bites real code in production (especially custom error types and factories).
## 🎯 The Scenario
A function returns `error`. Sometimes it returns a typed nil pointer.
Your caller checks `if err != nil` and takes an error path, logs misleading failures, or even panics when accessing fields/methods.
## 🛠 The Challenge
Write a minimal package that:
1) demonstrates the bug, and
2) fixes it with an idiomatic pattern.
### 1. Functional Requirements
- [ ] Implement `type MyError struct { Op string }` (or similar).
- [ ] Implement a function `DoThing(...) error` that **sometimes returns** `(*MyError)(nil)` as `error`.
- [ ] Demonstrate:
- `err != nil` is true
- `fmt.Printf("%T %#v\n", err, err)` shows the typed nil behavior
- [ ] Provide a corrected version that returns a true nil interface when there is no error.
### 2. The "Idiomatic" Constraints (Pass/Fail Criteria)
- [ ] **Must show the failing behavior** in a test (`go test`).
- [ ] **Must show the fix** in a test.
- [ ] **Must not "fix" by panicking or by sentinel errors.**
- [ ] Use one of these idiomatic fixes:
- return `nil` explicitly when the pointer is nil
- or return `error(nil)` in the relevant branch
- [ ] Demonstrate safe extraction using:
- `var me *MyError; errors.As(err, &me)` and check `me != nil`
## 🧪 Self-Correction (Test Yourself)
1. **The Trap Repro**
- Make `DoThing()` return `var e *MyError = nil; return e`
- **Pass:** your test proves `err != nil` is true.
2. **The Fix**
- If internal pointer is nil, return literal `nil`.
- **Pass:** `err == nil` works, callers behave correctly.
3. **Extraction Safety**
- Wrap the error and still extract with `errors.As`.
- **Pass:** extraction works through wrapping layers.
## 📚 Resources
- https://go.dev/blog/laws-of-reflection (interface basics)
- https://go.dev/blog/go1.13-errors (errors.As)
- https://forum.golangbridge.org/t/logic-behind-failing-nil-check/16331
README.md
# 🥋 Go Katas 🥋
> "I fear not the man who has practiced 10,000 kicks once, but I fear the man who has practiced one kick 10,000 times."
> — Bruce Lee
## What should it be?
- Go is simple to learn, but nuanced to master. The difference between "working code" and "idiomatic code" often lies in details such as safety, memory efficiency, and concurrency control.
- This repository is a collection of **Daily Katas**: small, standalone coding challenges designed to drill specific Go patterns into your muscle memory.
## What should it NOT be?
- This is not intended to teach programming with Go as the medium, nor to teach you Go **in general**.
- The focus should be as much as possible challenging oneself to solve common software engineering problems **the Go way**.
- Seasoned developers spend years learning and applying best practices in production-grade contexts. Once they decide to switch to Go, they face two challenges:
  - Is there a path for knowledge transfer here, so that I don't have to throw years of my career out the window and start from zero?
  - If so, which parts should I focus on to recognize the mismatches and do things the expected way in Go land?
## How to Use This Repo
1. **Pick a Kata:** Navigate to any `XX-kata-yy` folder.
2. **Read the Challenge:** Open the `README.md` inside that folder. It defines the Goal, the Constraints, and the "Idiomatic Patterns" you must use.
3. **Solve It:** Initialize a module inside the folder and write your solution.
4. **Reflect:** Compare your solution with the provided "Reference Implementation" (if available) or the core patterns listed.
## Contribution Guidelines
### Have a favorite Go pattern?
1. Create a new folder `XX-your-topic`. (`XX` is an ordinal number)
2. Copy the [README_TEMPLATE.md](./README_TEMPLATE.md) to the new folder as `README.md`
3. Define the challenge: focus on **real-world scenarios** (e.g., handling timeouts, zero-allocation sets), and **idiomatic Go**, not just algorithmic puzzles.
4. **Optionally**, create a `main.go` or any other relevant files under the project containing a blueprint of the implementation, **as long as you think it reduces confusion and keeps the implementation focused**.
5. Submit a PR.
### Using the script
You can use the shorthand script to add a new challenge; it creates the new folder and a `README.md` inside it:
```bash
./add.sh my-very-creative-challenge
```
This creates a new folder `21-my-very-creative-challenge` (assuming the latest challenge folder was `20-latest-name-here`) and adds a `README.md` under it:
```bash
medunes@medunes:~/projects/go-kata$ ls 21-my-very-creative-challenge/
README.md
```
README_TEMPLATE.md
# Kata [ID]: [Title of the Challenge]
**Target Idioms:** [e.g. Concurrency Patterns, Interface Pollution, Error Wrapping]
**Difficulty:** [🟢 Beginner / 🟡 Intermediate / 🔴 Advanced]
## 🧠 The "Why"
*Briefly explain the mismatch seasoned devs would face here. Why would the "Java way" or "Python way" fail in Go for this specific problem?*
## 🎯 The Scenario
*[Describe a realistic production problem. Example: "You are building a CLI that parses 1GB logs..."]*
## 🛠 The Challenge
Implement a solution that satisfies the following requirements.
### 1. Functional Requirements
* [ ] [Requirement A]
* [ ] [Requirement B]
### 2. The "Idiomatic" Constraints (Pass/Fail Criteria)
*To pass this kata, you **must** use the following patterns:*
* [ ] **Constraint 1:** [e.g. Do not allocate new memory inside the loop.]
* [ ] **Constraint 2:** [e.g. Use `functional options` for configuration.]
* [ ] **Constraint 3:** [e.g. Return wrapped errors.]
## 🧪 Self-Correction (Test Yourself)
* **If you did X:** [Explain why this is "un-idiomatic" or dangerous in Go]
* **Instead, ensure Y:** [Explain the idiomatic fix]
## 📚 Resources
* [Link to Go Blog or Spec]
add.sh
#!/bin/bash
name="${1}"
if [[ -z "${name}" ]]; then
  echo "please provide a 'kebab-case' name of the new challenge."
  exit 1
fi
# Next ordinal = number of existing kata folders + 1, zero-padded.
number=$(ls -d -- */ 2>/dev/null | wc -l)
number=$((10#$number + 1))
number=$(printf "%02d" "$number")
folder="${number}-${name}"
mkdir -- "$folder"
touch "$folder/README.md"