mirror of
https://github.com/MedUnes/go-kata.git
synced 2026-03-12 21:55:53 +07:00
update links under the root README file
This commit is contained in:
1
01-context-cancellation-concurrency/01-concurrent-aggregator/.gitignore
vendored
Normal file
@@ -0,0 +1 @@
.idea
@@ -0,0 +1,48 @@
# Kata 01: The Fail-Fast Data Aggregator

**Target Idioms:** Concurrency Control (`errgroup`), Context Propagation, Functional Options

**Difficulty:** 🟡 Intermediate

## 🧠 The "Why"

In other languages, you might use `Promise.all` or strict thread pools to fetch data in parallel. In Go, seasoned developers often start with `sync.WaitGroup`, but quickly realize it lacks two critical features for production: **Error Propagation** and **Context Cancellation**.

If you spawn 10 goroutines and the first one fails, `WaitGroup` blindly waits for the other 9 to finish. **Idiomatic Go fails fast.**

## 🎯 The Scenario

You are building a **User Dashboard Backend**. To render the dashboard, you must fetch data from two independent, mock microservices:

1. **Profile Service** (Returns "Name: Alice")
2. **Order Service** (Returns "Orders: 5")

You need to fetch these in parallel to reduce latency. However, if *either* fails, or if the global timeout is reached, the entire operation must abort immediately to save resources.

## 🛠 The Challenge

Create a `UserAggregator` struct and a method `Aggregate(ctx context.Context, id int)` that orchestrates this fetching.

### 1. Functional Requirements

* [ ] The aggregator must be configurable (timeout, logger) without a massive constructor.
* [ ] Both services must be queried concurrently.
* [ ] The result should combine both outputs: `"User: Alice | Orders: 5"`.

### 2. The "Idiomatic" Constraints (Pass/Fail Criteria)

To pass this kata, you **must** strictly adhere to these rules:

* [ ] **NO `sync.WaitGroup`:** You must use `golang.org/x/sync/errgroup`.
* [ ] **NO "Parameter Soup":** You must use the **Functional Options Pattern** for the constructor (e.g., `New(WithTimeout(2*time.Second))`).
* [ ] **Context is King:** You must pass `context.Context` as the first argument to your methods.
* [ ] **Cleanup:** If the Profile service fails, the Order service request must be cancelled (via Context) immediately.
* [ ] **Modern Logging:** Use `log/slog` for structured logging.

## 🧪 Self-Correction (Test Yourself)

Run your code against these edge cases:

1. **The "Slow Poke":**
   * Set your aggregator timeout to `1s`.
   * Mock one service to take `2s`.
   * **Pass Condition:** Does your function return `context deadline exceeded` after roughly 1s?
2. **The "Domino Effect":**
   * Mock the Profile Service to return an error immediately.
   * Mock the Order Service to take 10 seconds.
   * **Pass Condition:** Does your function return the error *immediately*? (If it waits 10s, you failed context cancellation.)

## 📚 Resources

* [Go Concurrency: errgroup](https://pkg.go.dev/golang.org/x/sync/errgroup)
* [Functional Options for Friendly APIs](https://dave.cheney.net/2014/10/17/functional-options-for-friendly-apis)

@@ -0,0 +1,3 @@
module concurrent-aggregator

go 1.25.0

@@ -0,0 +1,56 @@
# Kata 03: The Graceful Shutdown Server

**Target Idioms:** Context Propagation, Signal Handling, Channel Coordination, Resource Cleanup

**Difficulty:** 🔴 Advanced

## 🧠 The "Why"

In other ecosystems, graceful shutdown is often framework magic (Spring's `@PreDestroy`, Django's `close()`). Go forces explicit lifecycle management. The mismatch: developers used to automatic cleanup often leak goroutines, drop in-flight requests, or corrupt data during shutdown.

The Go way: **Own your lifecycle**. Every goroutine you spawn must have a controlled shutdown path.

## 🎯 The Scenario

Build an **HTTP Server with Background Worker** that must:

1. Accept HTTP requests (handled by a pool of worker goroutines)
2. Run a background cache warmer every 30 seconds
3. Maintain persistent database connections
4. Shut down within 10 seconds of receiving SIGTERM, completing in-flight requests but rejecting new ones

## 🛠 The Challenge

Implement a `Server` struct with `Start(ctx context.Context) error` and `Stop(ctx context.Context) error` methods.

### 1. Functional Requirements

* [ ] HTTP server on configurable port with request timeout
* [ ] Worker pool (configurable size) processes requests via channel
* [ ] Background cache warmer ticks every 30s (use `time.Ticker`)
* [ ] Database connection pool (mock with `net.Conn`)
* [ ] SIGTERM/SIGINT triggers graceful shutdown
* [ ] Shutdown completes within deadline or forces exit

### 2. The "Idiomatic" Constraints (Pass/Fail Criteria)

* [ ] **Single Context Tree**: Root `context.Context` passed to `Start()`, canceled on shutdown
* [ ] **Channel Coordination**: Use `chan struct{}` for worker pool shutdown, not boolean flags
* [ ] **Proper Ticker Cleanup**: `defer ticker.Stop()` with a `select` loop in the goroutine
* [ ] **Dependency Order**: Shut down in reverse order (stop accepting → drain workers → stop warmer → close DB)
* [ ] **No `os.Exit()` in business logic**: Shutdown should be testable without process termination

## 🧪 Self-Correction (Test Yourself)

1. **The Sudden Death Test**:
   - Send 100 requests, then immediately send SIGTERM
   - **Pass**: Server completes in-flight requests (not all 100), logs "shutting down", closes cleanly
   - **Fail**: Server accepts new requests after the signal, leaks goroutines, or crashes

2. **The Slow Leak Test**:
   - Run the server for 5 minutes at 1 request/second
   - Send SIGTERM, wait 15 seconds
   - **Pass**: `go test` shows no goroutine leaks (use `runtime.NumGoroutine()`)
   - **Fail**: Any increase in goroutine count from start to finish

3. **The Timeout Test**:
   - Start a long-running request (sleep 20s)
   - Send SIGTERM with a 5s timeout context
   - **Pass**: Forces shutdown after 5s, logs "shutdown timeout"
   - **Fail**: Waits the full 20s or deadlocks

## 📚 Resources

* [Go Blog: Context](https://go.dev/blog/context)
* [Graceful Shutdown in Go](https://medium.com/honestbee-tw-engineer/gracefully-shutdown-in-go-http-server-5f5e6b83da5a)
* [Signal Handling](https://medium.com/@marcus.olsson/writing-a-go-app-with-graceful-shutdown-5de1d2c6de96)

@@ -0,0 +1,42 @@
# Kata 05: The Context-Aware Error Propagator

**Target Idioms:** Error Wrapping, Context-Aware Errors, Custom Error Types

**Difficulty:** 🟡 Intermediate

## 🧠 The "Why"

Developers from dynamic languages often treat errors as simple strings. Java developers wrap exceptions in layers of inheritance. **Go's error philosophy is different:** errors are values that should carry context and be inspectable without string parsing. The unidiomatic pattern is to `log.Printf("error: %v", err)` and return nil, which destroys debugging context. Idiomatic Go preserves the original error while adding layers of context.

## 🎯 The Scenario

You're building a **cloud storage gateway** that interacts with multiple services: authentication, metadata database, and blob storage. When a file upload fails, operators need to know exactly which layer failed and why: was it an auth timeout? A database deadlock? Storage quota exceeded? Your error handling must preserve this information while being safe for logging.

## 🛠 The Challenge

Create a service that uploads files to cloud storage with proper error handling.

### 1. Functional Requirements

* [ ] Implement three layers: `AuthService`, `MetadataService`, `StorageService`
* [ ] Each layer can fail with specific error types
* [ ] Return errors that expose the failure point and original cause

### 2. The "Idiomatic" Constraints (Pass/Fail Criteria)

To pass this kata, you **must** strictly adhere to these rules:

* [ ] **NO string-based error inspection:** You must use `%w` with `fmt.Errorf` for wrapping
* [ ] **Custom Error Types:** Create specific error types for each service layer (e.g., `AuthError`, `StorageQuotaError`)
* [ ] **Context-Aware Errors:** Errors must implement `Timeout()` and `Temporary()` methods where appropriate
* [ ] **Safe Logging:** Errors must redact sensitive information (API keys, credentials) when logged
* [ ] **Error Unwrapping:** Your errors must support `errors.Is()` and `errors.As()` for programmatic inspection

## 🧪 Self-Correction (Test Yourself)

Test your error handling with these scenarios:

1. **The "Sensitive Data Leak":**
   * Force an auth error with a mock API key
   * **Fail Condition:** If `fmt.Sprint(err)` contains the API key string
2. **The "Lost Context":**
   * Wrap an `AuthError` three times through different layers
   * **Fail Condition:** If `errors.As(err, &AuthError{})` returns false
3. **The "Timeout Confusion":**
   * Create a timeout error in the storage layer
   * **Fail Condition:** If `errors.Is(err, context.DeadlineExceeded)` returns false

## 📚 Resources

* [Go 1.13 Error Wrapping](https://go.dev/blog/go1.13-errors)
* [Error Handling in Upspin](https://commandcenter.blogspot.com/2017/12/error-handling-in-upspin.html)
* [Don't just check errors, handle them gracefully](https://dave.cheney.net/2016/04/27/dont-just-check-errors-handle-them-gracefully)

@@ -0,0 +1,48 @@
# Kata 07: The Rate-Limited Fan-Out Client

**Target Idioms:** Rate Limiting (`x/time/rate`), Bounded Concurrency (`x/sync/semaphore`), HTTP Client Hygiene, Context Cancellation

**Difficulty:** 🟡 Intermediate

## 🧠 The "Why"

In many ecosystems, you slap a “rate limit middleware” in front of a thread pool and call it a day. In Go, people often:

- spawn too many goroutines (no backpressure),
- forget per-request cancellation,
- misuse `http.DefaultClient` (no timeouts, no transport reuse),
- implement “sleep-based” rate limiting (jittery, wasteful).

This kata forces **explicit control** over *rate*, *in-flight concurrency*, and *cancellation*.

## 🎯 The Scenario

You’re building an internal service that needs to fetch user widgets from a downstream API:

- The API allows **10 requests/sec** with bursts up to **20**
- Your service must also cap concurrency at **max 8 in-flight** requests
- If any request fails, cancel everything immediately (fail-fast) and return the first error

## 🛠 The Challenge

Implement `FanOutClient` with:

- `FetchAll(ctx context.Context, userIDs []int) (map[int][]byte, error)`

### 1. Functional Requirements

- [ ] Requests must respect a **QPS rate limit** + **burst**.
- [ ] Requests must run concurrently but never exceed **MaxInFlight**.
- [ ] Results returned as `map[userID]payload`.
- [ ] On first error, cancel remaining work and return immediately.

### 2. The "Idiomatic" Constraints (Pass/Fail Criteria)

- [ ] **Must** use `golang.org/x/time/rate.Limiter`.
- [ ] **Must** use `golang.org/x/sync/semaphore.Weighted` (or an equivalent semaphore pattern) for MaxInFlight.
- [ ] **Must** use `http.NewRequestWithContext`.
- [ ] **Must NOT** use `time.Sleep` for rate limiting.
- [ ] **Must** reuse a single `http.Client` (with a configured `Transport` + `Timeout`).
- [ ] Logging via `log/slog` (structured fields: userID, attempt, latency).

## 🧪 Self-Correction (Test Yourself)

- **If you spawn `len(userIDs)` goroutines:** you failed backpressure.
- **If cancellation doesn’t stop waiting callers:** you failed context propagation.
- **If QPS is enforced using `Sleep`:** you failed rate limiting.
- **If you use `http.DefaultClient`:** you failed HTTP hygiene.

## 📚 Resources

- https://pkg.go.dev/golang.org/x/time/rate
- https://pkg.go.dev/golang.org/x/sync/semaphore
- https://go.dev/src/net/http/client.go
- https://go.dev/src/net/http/transport.go

@@ -0,0 +1,42 @@
# Kata 09: The Cache Stampede Shield

**Target Idioms:** `singleflight`, TTL Cache, DoChan + Context Select, Lock Avoidance

**Difficulty:** 🔴 Advanced

## 🧠 The "Why"

In many stacks, caching is “just Redis”. In Go, an in-process cache is common, but people:

- hold locks while calling the loader (deadly),
- refresh the same key N times concurrently (stampede),
- can’t cancel waiters cleanly.

This kata is about **deduplicating in-flight loads** and making waiters **context-cancellable**.

## 🎯 The Scenario

You have expensive per-key loads (e.g., DB or remote API). If 200 goroutines ask for the same key at once:

- the loader must run **once**
- the others must wait (or return on ctx cancel)
- the TTL must be enforced

## 🛠 The Challenge

Implement:

- `type Cache[K comparable, V any] struct { ... }`
- `Get(ctx context.Context, key K, loader func(context.Context) (V, error)) (V, error)`

### 1. Functional Requirements

- [ ] Return the cached value if not expired.
- [ ] If expired/missing: load once, share the result with all callers.
- [ ] Callers must be able to stop waiting via `ctx.Done()`.

### 2. The "Idiomatic" Constraints (Pass/Fail Criteria)

- [ ] **Must** use `golang.org/x/sync/singleflight.Group`.
- [ ] **Must** use `DoChan` + `select` on `ctx.Done()` to cancel waiters.
- [ ] **Must NOT** hold a mutex while calling `loader`.
- [ ] Errors must be wrapped with key context using `%w`.

## 🧪 Self-Correction (Test Yourself)

- **If 200 goroutines trigger 200 loads:** you failed (no stampede protection).
- **If a canceled context still blocks waiting:** you failed.
- **If you lock around loader execution:** you failed (contention / deadlocks).

## 📚 Resources

- https://pkg.go.dev/golang.org/x/sync/singleflight
- https://go.dev/blog/go1.13-errors

@@ -0,0 +1,45 @@
# Kata 10: The Worker Pool With Backpressure and Joined Errors

**Target Idioms:** Worker Pools, Channel Ownership, `errors.Join`, Context Cancellation

**Difficulty:** 🔴 Advanced

## 🧠 The "Why"

Many devs bring “thread pool” instincts and end up with:

- goroutines that never exit,
- unbounded queues,
- “first error wins” even when you want a summary,
- ad-hoc error channels without cleanup.

This kata forces correctness: **bounded work**, **clean shutdown**, and **error aggregation**.

## 🎯 The Scenario

You process a stream of jobs (e.g., image resizing). You want:

- a fixed number of workers
- a bounded queue (backpressure)
- either fail-fast OR collect all errors (configurable)

## 🛠 The Challenge

Implement:

- `type Pool struct { ... }`
- `Run(ctx context.Context, jobs <-chan Job) error`

Where `Job` is `func(context.Context) error`.

### 1. Functional Requirements

- [ ] `N` workers process from `jobs`.
- [ ] Optional `StopOnFirstError`.
- [ ] If not fail-fast: return `errors.Join(errs...)` after draining.

### 2. The "Idiomatic" Constraints (Pass/Fail Criteria)

- [ ] **Must** use `errors.Join` for aggregation.
- [ ] **Must** respect `ctx.Done()` (workers exit).
- [ ] **Must** close internal channels from the sender side only.
- [ ] **Must** guarantee no goroutine leak when `jobs` closes early or ctx cancels.

## 🧪 Self-Correction (Test Yourself)

- **If workers keep running after ctx cancel:** failed.
- **If you can deadlock by closing channels from the wrong side:** failed.
- **If you return before draining in non-fail-fast mode:** failed.

## 📚 Resources

- https://go.dev/doc/go1.20
- https://go.dev/src/errors/join.go

@@ -0,0 +1,43 @@
# Kata 14: The Leak-Free Scheduler

**Target Idioms:** `time.Timer`/`time.Ticker`, Stop/Reset patterns, Jitter, Context Cancellation

**Difficulty:** 🟡 Intermediate

## 🧠 The "Why"

Scheduling in Go is deceptively easy until you ship:

- goroutines that never stop,
- overlapping executions,
- ticker drift and backlog,
- resource retention from careless timer usage.

This kata makes you build a scheduler that is **predictable** and **stoppable**.

## 🎯 The Scenario

You need to periodically refresh a local cache:

- every 5s, with ±10% jitter
- do not overlap refreshes
- stop immediately on shutdown

## 🛠 The Challenge

Implement:

- `type Scheduler struct { ... }`
- `func (s *Scheduler) Run(ctx context.Context, job func(context.Context) error) error`

### 1. Functional Requirements

- [ ] Run `job` periodically (interval + jitter).
- [ ] Never run `job` concurrently with itself.
- [ ] Exit on `ctx.Done()`.

### 2. The "Idiomatic" Constraints (Pass/Fail Criteria)

- [ ] **Must NOT** use `time.Tick` (no stop control).
- [ ] **Must** use a `time.Timer` or `time.Ticker` with correct stop/reset.
- [ ] **Must** propagate context into `job`.
- [ ] Log job duration and errors via `slog`.

## 🧪 Self-Correction (Test Yourself)

- **If `job` overlap occurs:** you failed.
- **If cancel doesn’t stop quickly:** you failed.
- **If goroutines remain after exit:** you failed.

## 📚 Resources

- https://pkg.go.dev/time
- https://go.dev/wiki/Go123Timer

@@ -0,0 +1,56 @@
# Kata 17: The Context-Aware Channel Sender (No Leaked Producers)

**Target Idioms:** Pipeline Cancellation, Select-on-Send, Channel Ownership, Goroutine Leak Prevention

**Difficulty:** 🟡 Intermediate

## 🧠 The "Why"

A goroutine sending on a channel blocks until a receiver is ready (unless buffered space is available). If the receiver exits early (timeout, HTTP cancel, upstream error), producers can block forever and leak.

Idiomatic Go fixes this by:

- threading `context.Context` through the pipeline
- **selecting on every send** (`case out <- v` vs `case <-ctx.Done()`), as recommended in Go’s pipeline cancellation patterns and real leak writeups.

## 🎯 The Scenario

You’re building a data pipeline step that fetches N URLs concurrently and streams results downstream. If the request is canceled (client disconnect, global timeout), **all fetchers must stop immediately** and no goroutine may remain blocked on `out <- result`.

## 🛠 The Challenge

Implement:

- `type DataFetcher struct { ... }`
- `func (f *DataFetcher) Fetch(ctx context.Context, urls []string) <-chan Result`

Where:

- `Result` contains `URL`, `Body []byte`, `Err error` (or similar).

### 1. Functional Requirements

- [ ] Start concurrent fetchers for all URLs (or bounded concurrency if you choose).
- [ ] Send results as they complete (order doesn’t matter).
- [ ] Stop promptly on `ctx.Done()`.
- [ ] Close the output channel exactly once after all producers exit.
- [ ] Return partial results that already completed before cancellation.

### 2. The "Idiomatic" Constraints (Pass/Fail Criteria)

- [ ] **Every send uses select:** no bare `out <- x`.
- [ ] **Channel ownership:** only the producer side closes `out`.
- [ ] **No goroutine leaks:** all goroutines exit when ctx is canceled.
- [ ] **No double close:** prove it structurally (single closer goroutine). Avoid `sync.Once` unless you can justify it.
- [ ] **Buffer choice is intentional:** if you buffer, document why and how you chose the size.

### 3. Hints (Allowed Tools)

- You may use `errgroup` or a simple worker pattern, but the key is: **send must be cancel-aware**.
- If you do bounded concurrency, prefer `x/sync/semaphore` or a worker pool (but don’t turn this kata into a rate-limiter kata).

## 🧪 Self-Correction (Test Yourself)

1. **Forgotten Sender**
   - Start 50 fetchers, consume only 1 result, then cancel.
   - **Pass:** goroutine count returns near baseline quickly (use `runtime.NumGoroutine()` as a sanity check).

2. **Cancellation Before First Receive**
   - Cancel ctx immediately after calling `Fetch`.
   - **Pass:** no goroutine blocks trying to send.

3. **Close Discipline**
   - Cancel ctx from multiple places.
   - **Pass:** no `panic: close of closed channel`.

## 📚 Resources

- https://go.dev/blog/pipelines
- https://www.ardanlabs.com/blog/2018/11/goroutine-leaks-the-forgotten-sender.html