Kennedy's 8 Go Design Pillars: How They Changed My CloudMeta Code
Three months ago I reviewed a pull request that looked fine on the surface. Clean diffs, no obvious bugs, tests passing. I approved it. Two days later, a subtle state corruption showed up in production — a resource was being reported as "active" after it had been closed. When I traced it back, the problem was architectural: the code had three separate violations of principles I didn't have words for yet.
I didn't have those words because I hadn't read Kennedy yet.
This post is about Chapter 1 of William Kennedy's Ultimate Go Notebook — eight design pillars that are not personal opinions or style preferences. They are a shared decision framework used across the entire professional Go community. After studying them, I went back and found violations and correct applications of every single pillar in my own production codebase: CloudMeta, a multi-tenant cloud resource metadata collector with a channel-based extraction pipeline, PostgreSQL storage, and a REST API.
Why a Framework at All?
Before we get to the pillars, let's be clear about why they exist.
Junior engineers are judged on whether code works. Mid-level engineers are judged on whether code is maintainable. Senior and Staff engineers are judged on the quality of their decisions — specifically, whether they can explain why a piece of code is written the way it is, articulate the cost of that decision, and defend it against alternatives.
Kennedy's central insight in Chapter 1 is this: every decision has a cost. Not a moral cost — a technical cost. Memory allocations, CPU cache misses, coupling between packages, cognitive load on future readers. The question a senior engineer asks is never "does this work?" but "what is the cost of this design, and do I understand it well enough to defend it?"
The eight pillars give you a vocabulary for this. When you look at a function, you can now ask: which pillar does this follow? Which does it violate? Can I articulate the tradeoff?
When an interviewer at Databricks or Razorpay asks you to "walk me through a design decision you made in production," these eight pillars are your answer framework.
The Eight Pillars — Overview
| # | Pillar | One-Line Rule |
|---|---|---|
| 1 | Integrity | Code cannot be in a half-valid state |
| 2 | Readability | Intent must be visible in name + comment |
| 3 | Simplicity | Minimum cognitive load, not minimum lines |
| 4 | Performance | Use the hardware as the platform |
| 5 | Micro-Optimization | Profile first, optimise proven hotspots |
| 6 | Data-Orientation | Data layout drives design |
| 7 | Decoupling | Components must change independently |
| 8 | Concurrency | Right tool per responsibility |
Let's go through each one in depth, with both the concept-clear version and the CloudMeta production version.
Pillar 1 — Integrity
"Integrity means that every piece of code must be accurate, consistent, and efficient."
— Kennedy §1.8.1
What Kennedy means
Integrity is not about being morally correct. It's about state honesty. Kennedy defines it at two levels:
Micro-integrity: every individual operation — every allocation, read, and write — must start from an accurate representation of the current state. A function that is called must either succeed completely or fail cleanly. There must be no in-between.
Macro-integrity: every resource acquisition must have a guaranteed release path. If your code opens a database connection, that connection must be closeable from every exit path — not just the happy path.
The core test: can this code ever be in a state where it reports success for an operation that has already been invalidated?
Concept-clear example
// A simple connection type demonstrating micro-integrity.
// Once Close() is called, all subsequent operations return an honest error.
type Connection struct {
addr string
active bool
}
func NewConnection(addr string) (*Connection, error) {
if addr == "" {
return nil, fmt.Errorf("connection: addr cannot be empty")
}
return &Connection{addr: addr, active: true}, nil
}
func (c *Connection) Send(data string) error {
if !c.active {
// Micro-integrity: we check state BEFORE doing any work.
// We return an honest error rather than panicking or silently doing nothing.
return fmt.Errorf("connection closed")
}
fmt.Printf("send to %s: %s\n", c.addr, data)
return nil
}
func (c *Connection) Close() error {
c.active = false // Immediate state change — no half-closed window
return nil
}
Notice what this code guarantees: after Close() returns, every subsequent call to Send() will return an error. There is no window where active is false but Send() can still proceed. From a single caller's perspective, the state transition is immediate. (Concurrent callers would additionally need a mutex around the flag — that belongs to Pillar 8 — but single-goroutine use is enough to make the integrity point.)
CloudMeta production example — AWSExtractor
In CloudMeta, AWSExtractor connects to AWS APIs to extract cloud resource metadata. Here's the integrity pattern in internal/extractor/aws.go:
// AWSExtractor simulates extracting resources from AWS.
type AWSExtractor struct {
region string
accountID string
connected bool // ← honest state flag
}
// Close implements Closer.
// Immediately sets connected=false — no half-closed state possible.
func (a *AWSExtractor) Close() error {
a.connected = false
return nil
}
// ExtractResources implements Extractor.
// Checks state FIRST — before any work begins.
func (a *AWSExtractor) ExtractResources(ctx context.Context, tenantID string) ([]models.Resource, error) {
if !a.connected {
return nil, storage.NewStorageError("ExtractResources", "aws", tenantID, storage.ErrConnectionFailed)
}
// Only reaches here if connected == true.
// Context cancellation check next:
select {
case <-ctx.Done():
return nil, fmt.Errorf("ExtractResources cancelled: %w", ctx.Err())
default:
}
return generateFakeResources(tenantID, a.region, a.accountID), nil
}
The pattern: Close() sets the flag first, ExtractResources checks the flag first. There is no code path where ExtractResources returns a successful result after Close() has been called.
CloudMeta production example — runServe startup (Macro-Integrity)
Macro-integrity is about the lifecycle of resources. In cmd/cloudmeta/main.go:
func runServe(port int, debug bool) error {
cfg, err := configs.Load()
if err != nil {
return fmt.Errorf("load config: %w", err) // fail fast — never partially init
}
startupCtx, cancel := context.WithTimeout(context.Background(), 15*time.Second)
defer cancel()
pool, err := storage.NewPool(startupCtx, cfg.Database)
if err != nil {
return fmt.Errorf("create db pool: %w", err) // fail fast
}
defer pool.Close() // ← guaranteed cleanup from this point forward
	if err := storage.Ping(startupCtx, pool); err != nil {
		// pool.Close() will still run — defer guarantees it
		return fmt.Errorf("db connectivity check: %w", err)
	}
// ... server setup continues
}
defer pool.Close() is placed immediately after the pool is successfully created. This means that from that point onward, no matter what error occurs — connectivity check failure, migration failure, server crash — pool.Close() will run. This is Kennedy's macro-integrity: every resource acquisition has a guaranteed release path.
Why this matters
The PR I mentioned at the start of this post? The violation was micro-integrity. A struct's internal connected flag was being set after some cleanup logic ran, which meant there was a ~10ms window where the flag was true but the underlying connection was already torn down. The code was reporting success for operations on a dead connection. Kennedy's Pillar 1 would have caught this in code review.
Pillar 2 — Readability
"Code is written for humans, not computers. Computers only need it to be correct."
— Kennedy §1.8.2
What Kennedy means
Kennedy's readability test is specific and ruthless: can a competent Go engineer understand this function in under 60 seconds without running it?
Not 5 minutes. Not "with some effort." Sixty seconds. Cold. No context from the author.
A readable function tells you three things:
- WHAT it does — from its name
- HOW it does it — from its body
- WHY it makes the tradeoffs it does — from its comments
A function that fails on any one of the three fails the readability test.
Concept-clear example
// POOR READABILITY — what not to write:
func process(r *Resource, t time.Time) {
p := t
r.DA = &p // What is DA? Why a pointer? Why now?
r.IA = false // Why false? What is IA?
r.UA = t // Three fields updated — why these three together?
}
// EXCELLENT READABILITY — Kennedy's standard:
// SoftDelete marks the resource as deleted without removing it from the database.
// POINTER receiver: this method modifies the struct.
func (r *Resource) SoftDelete() {
now := time.Now()
r.DeletedAt = &now // pointer: nil means "not deleted", non-nil means "deleted at X"
r.IsActive = false
r.UpdatedAt = now
}
SoftDelete passes the test:
- WHAT: the name says "soft delete" — you know it won't remove the row
- HOW: 4 lines, nothing clever, no indirection
- WHY: the comment explains the "without removing from DB" distinction, and the inline comment on &now explains the pointer semantics
You can read SoftDelete in 8 seconds. You need 60+ to understand process.
CloudMeta production example — SoftDelete()
The actual SoftDelete in internal/models/resource.go:
// SoftDelete marks the resource as deleted without removing it from DB.
// POINTER receiver: modifies the struct.
func (r *Resource) SoftDelete() {
now := time.Now()
r.DeletedAt = &now // take the address of 'now' — creates a pointer to a time.Time
r.IsActive = false
r.UpdatedAt = now
}
The &now deserves explanation for Go learners: we cannot write r.DeletedAt = &time.Now() because time.Now() returns a value, not an addressable location. We must first assign it to a variable (now), then take its address. The inline comment explains this directly to the future reader.
CloudMeta readability gap — runExtract()
Here's a place where CloudMeta currently fails the readability test. In cmd/cloudmeta/main.go:
// CURRENT — passes readability test for WHAT and HOW, but fails on WHY:
func runExtract(tenantID, region, provider, account string) error {
ext, err := extractor.NewAWSExtractor(region, account)
if err != nil {
return fmt.Errorf("create extractor: %w", err)
}
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
defer cancel()
resources, err := ext.ExtractResources(ctx, tenantID)
// ...
}
A reader has three unanswered questions: Does this persist to the database? Is it a one-shot or streaming operation? Why exactly 5 minutes?
// IMPROVED — passes all three parts of Kennedy's readability test:
//
// runExtract performs a one-shot extraction for a single tenant.
// It is designed for CLI usage (cloudmeta extract --tenant=...) and
// exits after printing results — it does NOT persist to the database.
//
// Timeout: 5 minutes is generous for AWS API rate limiting (up to ~3000
// API calls for a large account). Production pipeline uses a different path.
func runExtract(tenantID, region, provider, account string) error {
The improvement costs 5 lines of comments. It saves every future reader from needing to trace through the entire function to answer three basic questions.
Pillar 3 — Simplicity
"Simplicity is the art of hiding complexity without losing correctness."
— Kennedy §1.8.3
What Kennedy means
This is the hardest pillar, and the most counterintuitive. Simplicity does not mean fewer lines of code. It means the cognitive complexity of understanding the code is minimised.
The right abstraction hides irrelevant complexity at the level where it doesn't need to be understood. It exposes relevant complexity at exactly the level where it does.
The question is never "can I make this shorter?" but "can I make the reader's cognitive load lower?"
The validatorAdapter — the most important pattern in Chapter 1
I want to spend more time on this than any other example because it appears in practically every real Go codebase and is tested in every Staff-level interview. Understand this deeply.
The problem:
ResourceHandler in internal/api/handlers needs to validate resources before inserting them. The validator package has a ValidateResource function. The first instinct — the Java/C# instinct — is:
// NAIVE APPROACH — what most engineers write first:
package handlers
import "github.com/harish/cloudmeta/internal/validator"
type ResourceHandler struct {
store storage.ResourceStore
validator *validator.Validator // ← direct coupling to the validator package
}
func NewResourceHandler(store storage.ResourceStore, v *validator.Validator) *ResourceHandler {
return &ResourceHandler{store: store, validator: v}
}
This seems reasonable. It works. But it has four concrete problems:
- The handler package now imports the validator package. The import graph has grown. Any change to the validator package interface forces you to check whether ResourceHandler is affected.
- You cannot test ResourceHandler in isolation. To construct a *validator.Validator, you need everything validator.Validator needs — configuration, possibly external dependencies.
- You cannot swap the validation logic. If you want to test with "always valid" or "always invalid" validation, you'd need to change the handler's type signature.
- The handler knows about validator internals. If validator.Validator gains new constructor parameters, the handler test setup must change too.
The Kennedy solution:
Instead, define a small interface in the handler package for exactly what the handler needs:
// internal/api/handlers/resource.go
package handlers
// Validator is the only thing ResourceHandler needs from validation logic.
// One method. Discovered from usage, not designed up-front.
type Validator interface {
ValidateResource(r *models.Resource) error
}
type ResourceHandler struct {
store storage.ResourceStore
validator Validator // depends on the interface, not the package
}
func NewResourceHandler(store storage.ResourceStore, v Validator) *ResourceHandler {
return &ResourceHandler{store: store, validator: v}
}
Now ResourceHandler has zero knowledge of the validator package. But we still need to wire them together somewhere. That's where the validatorAdapter comes in — and it lives in main.go, the wiring point of the entire application:
// cmd/cloudmeta/main.go
// validatorAdapter bridges the wiring concern in main.go to the handler's
// interface contract. It is intentionally defined here — not in the validator
// or handlers package.
//
// WHY it exists:
// handlers.NewResourceHandler requires a Validator interface:
// type Validator interface {
// ValidateResource(r *models.Resource) error
// }
// validator.ValidateResource is a package-level function, not a method.
// This adapter promotes the function to a method on a zero-size struct,
// satisfying the interface without coupling the handler to the validator package.
//
// Kennedy Ch1 Pillar 3 (Simplicity): the handler's complexity budget is zero.
// Kennedy Ch1 Pillar 7 (Decoupling): handler → interface, NOT handler → package.
//
// Zero-size struct: validatorAdapter{} occupies 0 bytes on the stack.
// No fields, no allocations, no overhead.
type validatorAdapter struct{}
func (validatorAdapter) ValidateResource(r *models.Resource) error {
return validator.ValidateResource(r)
}
// In runServe():
v := validatorAdapter{}
resourceHandler := handlers.NewResourceHandler(resourceStore, v)
What is a zero-size struct and why does it matter?
type validatorAdapter struct{} has no fields. In Go, a struct with no fields occupies zero bytes of memory. This is not a micro-optimization trick — it's a fundamental property of the language.
When you write validatorAdapter{}, no heap allocation occurs. The compiler may use a shared global address for all instances. You get the abstraction for free — there is no runtime cost whatsoever.
Compare to type validatorAdapter struct{ v *validator.Validator } — that version would need a field, a constructor, and memory allocation. The zero-size version is pure compile-time wiring.
The payoff
The 3-line adapter removes an entire import dependency from handlers/resource.go. Every engineer who reads ResourceHandler in the future sees only the clean interface. They never need to understand the validator package to work with the handler.
The one place that does need to understand both — main.go — is the wiring layer. main.go's job is to own these dependencies. This is Pillar 3 working with Pillar 7: complexity is hidden at the right level.
In tests, you replace validatorAdapter{} with any struct that implements the one-method interface:
// In handler_test.go — zero dependency on the validator package:
type alwaysValidValidator struct{}
func (alwaysValidValidator) ValidateResource(r *models.Resource) error { return nil }
type alwaysInvalidValidator struct{}
func (alwaysInvalidValidator) ValidateResource(r *models.Resource) error {
return fmt.Errorf("validation failed: test error")
}
handler := handlers.NewResourceHandler(store, alwaysValidValidator{})
Pillars 4 & 5 — Performance and Micro-Optimization
"Use the hardware as the platform. Measure before optimising."
— Kennedy §1.8.4–1.8.5
Kennedy's critical distinction
These two pillars are often conflated. They are not the same thing, and confusing them produces either premature optimization or missed architectural performance decisions.
Pillar 4 — Performance is about understanding the hardware platform: CPU caches, memory bandwidth, syscall costs, protocol overhead. It's about algorithm selection at the architecture level — choosing the right tool because you understand what the hardware is doing underneath. This happens at design time.
Pillar 5 — Micro-Optimization is the final 5%. After you have correct code, readable code, simple code, the right algorithm, and hardware awareness — then you profile. Then you find the actual bottleneck. Then you optimise the proven hotspot. This happens after profiling data exists.
The sequence is: Correctness → Readability → Simplicity → Right Algorithm → Profile → Micro-Optimize (if proven necessary).
CloudMeta Pillar 4 — BatchInsert with pgx CopyFrom
In internal/storage/resource_store.go, the bulk insert operation uses PostgreSQL's COPY protocol instead of regular INSERT statements:
// BatchInsert uses PostgreSQL's binary COPY protocol — not regular INSERT.
//
// WHY CopyFrom and not INSERT ... VALUES (...), (...):
//
// Regular INSERT path for 1000 resources:
// → 1000 individual SQL parse + plan cycles
// → 1000 network round trips (without batching)
// → 1000 transaction log entries processed individually
//
// pgx CopyFrom path for 1000 resources:
// → 1 network round trip (binary stream)
// → PostgreSQL bulk-loads directly into the heap file
// → ~50-200x faster for large batches
//
// This is NOT premature optimisation — it is algorithm selection.
// CloudMeta extraction returns 3-7 resources per small tenant,
// but 500-5000 per large AWS account. At that scale, INSERT is unusable.
//
// Kennedy Pillar 4: use the hardware (the DB engine's bulk-load path).
func (s *ResourceStore) BatchInsert(ctx context.Context, resources []models.Resource) error {
rows := make([][]interface{}, len(resources))
for i, r := range resources {
		tagsJSON, err := json.Marshal(r.Tags)
		if err != nil {
			return fmt.Errorf("marshal tags: %w", err)
		}
		metaJSON, err := json.Marshal(r.Metadata)
		if err != nil {
			return fmt.Errorf("marshal metadata: %w", err)
		}
rows[i] = []interface{}{r.TenantID, r.Name, string(r.Type), r.Region, r.IsActive, tagsJSON, metaJSON}
}
_, err := s.pool.CopyFrom(
ctx,
pgx.Identifier{"resources"},
[]string{"tenant_id", "name", "type", "region", "is_active", "tags", "metadata"},
pgx.CopyFromRows(rows),
)
return err
}
The CopyFrom choice is Pillar 4 because it's an architectural decision made by understanding what PostgreSQL is doing at the storage engine level — not a micro-optimization tuned after profiling.
CloudMeta Pillar 5 — What NOT to Optimize Yet
In the same Insert() function (single-resource inserts), there are two json.Marshal calls:
func (s *ResourceStore) Insert(ctx context.Context, r *models.Resource) error {
tagsJSON, err := json.Marshal(r.Tags) // ← heap allocation #1
if err != nil { /* ... */ }
metaJSON, err := json.Marshal(r.Metadata) // ← heap allocation #2
if err != nil { /* ... */ }
// ... INSERT query
}
At 100 requests per second, this is 200 heap allocations per second from these two calls alone. Should we optimise it?
Kennedy's answer: not yet.
// TODO(ch10-profiling): json.Marshal in Insert creates 2 heap allocs/call
// (tags + metadata). At 100 RPS this is 200 allocs/sec — GC pressure unknown.
// Profile under realistic load before deciding to optimise.
// Candidate approaches: pre-serialise JSONB columns, use pgtype.Text wrapper,
// or pool a bytes.Buffer for the marshal.
//
// DO NOT optimise this until Chapter 10 profiling confirms it is a hotspot.
The TODO comment is the correct action at Chapter 1. The wrong action is to reach for sync.Pool or pre-allocated buffers before we know this is actually causing a problem.
A correct, slightly inefficient function is infinitely preferable to an incorrect, "optimised" one.
Pillar 6 — Data-Orientation
"Design around the data. The data determines everything."
— Kennedy §1.8.6
What Kennedy means
This pillar bridges Chapter 1 (philosophy) and Chapter 3 (CPU caches and data structures). The insight is: how your data is laid out in memory determines how fast your code runs, independent of algorithm complexity. Two functions with identical Big-O complexity can differ by 5-10x in real performance based purely on memory layout.
The CPU's prefetcher works by predicting which memory addresses you'll need next. Sequential, contiguous memory (a value slice []T) allows perfect prediction. Scattered pointer memory ([]*T, where each pointer points to a random heap address) defeats the prefetcher entirely.
Before Chapter 3 gives us the benchmarks, Chapter 1's lesson is: understand your data's access pattern and choose your data structure accordingly.
Concept-clear example — value slice vs nil slice
// POOR: nil slice, grows by reallocation
func makeNamesNaive(tenantID string, count int) []string {
var names []string // nil slice — capacity 0
for i := 0; i < count; i++ {
// append must allocate new backing array each time capacity is exceeded
// Typical growth: 0 → 1 → 2 → 4 → 8 → 16 → 32...
// At count=100: ~7 allocations, ~6 copy operations
names = append(names, fmt.Sprintf("%s-resource-%03d", tenantID, i+1))
}
return names
}
// CORRECT: pre-allocated at known size, zero reallocations
func makeNames(tenantID string, count int) []string {
names := make([]string, 0, count) // capacity = count, zero reallocations
for i := 0; i < count; i++ {
names = append(names, fmt.Sprintf("%s-resource-%03d", tenantID, i+1))
}
return names
}
// EVEN BETTER when you know the exact size upfront:
func makeNamesExact(tenantID string, count int) []string {
names := make([]string, count) // length AND capacity = count
for i := range names {
names[i] = fmt.Sprintf("%s-resource-%03d", tenantID, i+1) // index assignment
}
return names
}
The third version uses make([]string, count) (not make([]string, 0, count)). When you set length upfront with make, you can use index assignment names[i] = ... instead of append. This skips append's length-and-growth bookkeeping (and the compiler can eliminate the bounds check inside a range loop) and communicates that the size is exactly known.
CloudMeta production example — generateFakeResources
In internal/extractor/aws.go:
func generateFakeResources(tenantID, region, accountID string) []models.Resource {
types := []models.ResourceType{
models.ResourceTypeVM,
models.ResourceTypePod,
models.ResourceTypeStorage,
}
count := 3 + rand.Intn(5) // 3-7 resources per tenant
// Pillar 6: pre-allocate at the exact known size.
// No reallocations. Contiguous memory. Cache-line friendly.
// The pipeline will read this sequentially — perfect for CPU prefetcher.
resources := make([]models.Resource, count)
for i := range resources {
rt := types[i%len(types)]
resources[i] = models.Resource{ // value assignment — not &Resource{}
ID: int64(i + 1),
TenantID: tenantID,
Name: fmt.Sprintf("%s-%s-%03d", rt, region, i+1),
Type: rt,
Region: region,
IsActive: true,
Tags: map[string]string{"account": accountID, "env": "prod"},
}
}
return resources // value slice — caller gets contiguous memory
}
Three data-orientation decisions working together:
- make([]models.Resource, count) — exact pre-allocation, zero reallocations
- for i := range resources with resources[i] = ... — index assignment, no append overhead
- Returns []models.Resource, not []*models.Resource — value slice, contiguous layout
The open question for Chapter 3
ListByTenant in the storage layer currently returns []*models.Resource — a pointer slice:
// Returns pointer slice — each *Resource is scattered in heap memory
func (s *ResourceStore) ListByTenant(ctx context.Context, tenantID string, ...) ([]*models.Resource, error) {
// pgx allocates one *Resource per database row
// Each pointer points to a different heap location
}
vs. CollectResources in internal/models/slices.go which uses a value slice:
// Returns value slice — all Resources contiguous in memory
func CollectResources(tenantIDs []string, perTenantCount int) []Resource {
result := make([]Resource, 0, len(tenantIDs)*perTenantCount)
// ... sequential fill
}
Which is faster when iterating over 100,000 resources? Chapter 3 will benchmark this. The answer, based on CPU cache mechanics, should be a 3-5x difference. Hold that question.
Pillar 7 — Decoupling
"Design components so they can change independently."
— Kennedy §1.8.7
What Kennedy means
Decoupling means that when component A needs to change, component B does not need to change with it. In Go, this is achieved through small, focused interfaces and composition — not type hierarchies.
Kennedy's most important Go proverb: "Don't design with interfaces, discover them."
This is the principle that trips up engineers migrating from Java or C# most badly. In Java, you write an interface before you write the implementation. In Go, you write the implementation first. You discover the interface later, when you have two concrete implementations doing the same job and you need to abstract over them.
An interface that is designed up-front is a prediction about the future. An interface that is discovered is an accurate description of the present.
CloudMeta example A — ExtractorWithClose (Interface Composition)
In internal/extractor/interface.go, the interface design tells a story:
// Package extractor — interface design shows the "discover, don't design" principle.
// Extractor: discovered when we needed to abstract over AWSExtractor and MockExtractor.
// Three methods — the minimum that makes the abstraction useful.
type Extractor interface {
ExtractResources(ctx context.Context, tenantID string) ([]models.Resource, error)
HealthCheck(ctx context.Context) error
ProviderName() string
}
// Closer: a single-method interface for lifecycle management.
// Note: identical to io.Closer from the standard library.
// This means any type with Close() error satisfies BOTH Closer AND io.Closer.
type Closer interface {
Close() error
}
// ExtractorWithClose: composed from two small interfaces.
// This is NOT "design with interfaces" — it's "discover through composition."
// We needed a type that does both extraction AND lifecycle management.
// The answer: compose the two interfaces, don't create a fat third one.
type ExtractorWithClose interface {
Extractor // all 3 Extractor methods
Closer // Close()
// Total: 4 methods required
}
The composition reveals intent: code that only needs to extract resources depends on Extractor (3 methods). Code that owns the lifecycle of an extractor — main.go — depends on ExtractorWithClose (4 methods). Each consumer depends on exactly the minimum interface it needs.
The compile-time interface check
Both AWSExtractor and MockExtractor include this line:
// This line does nothing at runtime — it is erased by the compiler.
// At compile time, it verifies that *AWSExtractor satisfies ExtractorWithClose.
// If any method is missing or has the wrong signature:
// → compile error: "cannot use (*AWSExtractor)(nil) (type *AWSExtractor) as type ExtractorWithClose"
// You fix it in 30 seconds. Without this line, you find out at runtime — or at 2am in CI.
var _ ExtractorWithClose = (*AWSExtractor)(nil)
This pattern is idiomatic Go. Every concrete type that claims to implement an interface should have this check. It costs nothing at runtime and prevents entire categories of subtle bugs.
CloudMeta example B — the init() self-registration plugin pattern
This is one of the most powerful decoupling patterns in the entire Go standard library ecosystem. In cmd/cloudmeta/main.go:
import (
_ "github.com/harish/cloudmeta/internal/plugin" // triggers init()
)
And in internal/plugin/gcp.go:
package plugin
func init() {
// init() runs automatically when this package is imported.
// main.go does not know GCPExtractor exists.
// GCPExtractor registers itself.
Register(&GCPExtractor{
regions: []string{"us-central1", "us-east1", "europe-west1", "asia-east1"},
})
}
main.go has zero knowledge of GCPExtractor. It doesn't import the gcp file. It doesn't name the type. The blank import _ "github.com/harish/cloudmeta/internal/plugin" triggers the init() function, which self-registers the GCP implementation into a plugin registry.
To add Azure support: create internal/plugin/azure.go with its own init(). Change nothing in main.go. This is Pillar 7 at its purest.
Why "discover, don't design" matters
Consider the alternative: an interface-first approach where you define ExtractorInterface with 10 methods before writing AWSExtractor. You've now:
- Predicted that every cloud provider needs exactly those 10 methods
- Constrained every future implementation to match your prediction
- Paid the cognitive cost of the interface before you know if it's the right shape
- Made it harder to add the 11th method later because it's a breaking change
Kennedy calls this "interface pollution" — the most common Go anti-pattern in codebases written by engineers migrating from Java or C#. The signal: an interface with more than 3-4 methods is usually a sign it was designed, not discovered.
Pillar 8 — Concurrency
"Concurrency means managing multiple things that happen at the same time."
— Kennedy §1.8.8
Kennedy's central distinction
Kennedy draws a sharp line between two types of concurrency that most engineers conflate:
Orchestration — coordinating when goroutines complete. You need to know when N workers are done before proceeding. The right tool: sync.WaitGroup.
Signaling — passing data or events between goroutines. You need to move values from one goroutine to another. The right tool: channel.
Using the wrong tool produces code that works most of the time and fails in subtle, hard-to-reproduce ways:
- WaitGroup where you need a channel: you lose the data — WaitGroup cannot carry values
- Channel where you need WaitGroup: you get no clean shutdown — you'd need to build a counter from scratch, which is what WaitGroup already is
- Neither with context cancellation: goroutine leaks — goroutines outlive the pipeline that started them
Concept-clear example — orchestration vs signaling
// ORCHESTRATION: I need to know when N workers are done.
// Use WaitGroup.
func runWorkers(n int, work func()) {
var wg sync.WaitGroup
for i := 0; i < n; i++ {
wg.Add(1) // ← ALWAYS Add BEFORE launching the goroutine
go func() {
defer wg.Done() // ← ALWAYS Done inside the goroutine (deferred)
work()
}()
}
wg.Wait() // blocks until all n workers call Done()
fmt.Println("all workers complete")
}
// SIGNALING: I need to pass values between goroutines.
// Use channel.
func produce(ctx context.Context) <-chan int {
out := make(chan int, 10) // buffered: producer doesn't block until buffer full
go func() {
defer close(out) // ALWAYS close: signals receivers that no more data is coming
for i := 0; i < 100; i++ {
select {
case <-ctx.Done():
return // respect cancellation
case out <- i:
// sent successfully
}
}
}()
return out // return the read-only end to the caller
}
The critical rules that never change:
- wg.Add(1) before the goroutine launches — never inside the goroutine
- defer wg.Done() inside the goroutine — always deferred
- close(ch) in the producer goroutine when done — never in the consumer
- ctx.Done() checked in every select — prevents goroutine leaks
CloudMeta production example — RunPipeline
The full extraction pipeline in internal/worker/pipeline.go uses both types correctly:
```go
// RunPipeline: Extract → FanOut Validate (4 parallel) → Store
// Three stages connected by channels (signaling).
// FanOut manages goroutine lifecycle with WaitGroup (orchestration).
func RunPipeline(
	ctx context.Context,
	tenantIDs []string,
	extractor interface {
		ExtractResources(context.Context, string) ([]models.Resource, error)
	},
	validator interface {
		ValidateResource(r *models.Resource) error
	},
	storeFunc func(context.Context, []models.Resource) error,
) (int, error) {
	// Stage 1 — SIGNALING: Extract produces resources into a channel.
	// The channel is the only coupling between stages.
	extracted := Extract(ctx, tenantIDs, extractor) // returns <-chan models.Resource

	// Stage 2 — SIGNALING + ORCHESTRATION: 4 parallel validation goroutines.
	// Channels carry the data (signaling).
	// WaitGroup coordinates shutdown (orchestration).
	validated := FanOut(ctx, extracted, 4, func(ctx context.Context, r models.Resource) (models.Resource, error) {
		rCopy := r
		if err := validator.ValidateResource(&rCopy); err != nil {
			return models.Resource{}, err
		}
		return rCopy, nil
	})

	// Stage 3 — Store batches of 100 (sink — no output channel).
	total, err := Store(ctx, validated, 100, storeFunc)
	return total, err
}
```
And FanOut showing the WaitGroup orchestration pattern:
```go
// FanOut creates n goroutines, all reading from the same input channel.
// ORCHESTRATION: WaitGroup tracks when all n goroutines complete.
// SIGNALING: output channel carries results downstream.
func FanOut[T any](ctx context.Context, in <-chan T, n int, transform func(context.Context, T) (T, error)) <-chan T {
	out := make(chan T, n*10)
	var wg sync.WaitGroup
	for i := 0; i < n; i++ {
		wg.Add(1) // ← Add BEFORE the goroutine starts
		go func() {
			defer wg.Done() // ← Done deferred — runs even on early return
			for {
				select {
				case <-ctx.Done():
					return // context cancelled — stop cleanly
				case v, ok := <-in:
					if !ok {
						return // upstream channel closed — stop cleanly
					}
					result, err := transform(ctx, v)
					if err != nil {
						fmt.Printf("fan-out transform error: %v\n", err)
						continue // skip this item, keep running
					}
					select {
					case out <- result:
					case <-ctx.Done():
						return
					}
				}
			}
		}()
	}

	// Separate goroutine: waits for ALL n workers to finish, then closes output.
	// This is the correct shutdown sequence: Wait → close.
	go func() {
		wg.Wait()
		close(out) // signals Store() that no more data is coming
	}()

	return out
}
```
The WaitGroup ensures close(out) only runs after all 4 goroutines have returned. Without WaitGroup, you'd need to implement that counting logic yourself — which is error-prone and exactly what WaitGroup already provides.
Bringing It All Together — the Pillar Audit
After reading Chapter 1, Kennedy's exercise is to do a pillar audit of your own codebase. For each pillar, find one location that correctly follows it, and one location that could be improved.
Here's what that looks like for CloudMeta:
| Pillar | Correct Application | Improvement Opportunity |
|---|---|---|
| Integrity | `AWSExtractor.Close()` → `ExtractResources` state check | `MockExtractor` missing compile-time interface check |
| Readability | `SoftDelete()` — 4 lines, self-documenting | `runExtract()` — missing intent comment and timeout explanation |
| Simplicity | `validatorAdapter` — 3 lines, zero import dependency in handler | — |
| Performance | `BatchInsert` using `pgx.CopyFrom` | — |
| Micro-Optimize | `Insert()` has TODO for `json.Marshal` allocs | — |
| Data-Orientation | `generateFakeResources` using `make([]Resource, count)` | `ListByTenant` returns `[]*Resource` (pointer slice) — benchmark in Ch3 |
| Decoupling | `init()` plugin self-registration | — |
| Concurrency | `FanOut` WaitGroup + channels | — |
The improvement opportunities become your refactor targets. Chapter 1's audit produced two concrete commits:
```go
// Refactor 1: add compile-time checks to MockExtractor (Pillar 1 — Integrity).
// Add to the bottom of internal/extractor/mock.go:
var _ Extractor = (*MockExtractor)(nil)
var _ ExtractorWithClose = (*MockExtractor)(nil)

// Refactor 2: document the validatorAdapter (Pillar 2 — Readability).
// Add a full explanatory comment to the validatorAdapter in cmd/cloudmeta/main.go.
```
The Go Patterns Chapter 1 Introduces
Before we close, here's a compact reference of the specific Go patterns that appear in Chapter 1:
Compile-time interface check:
```go
var _ InterfaceName = (*ConcreteType)(nil)
// Zero runtime cost. Fails at build time, not test time.
```
Zero-size struct adapter:
```go
type myAdapter struct{}

func (myAdapter) MethodName(args) returnType { return packageLevelFunc(args) }

// Occupies 0 bytes. Used to bridge a package-level function to an interface.
```
defer for guaranteed cleanup:
```go
pool, err := storage.NewPool(ctx, cfg)
// ... error check ...
defer pool.Close() // runs on every exit path, including panics
```
Pre-allocated slice:
```go
items := make([]T, count)    // length = count, assign by index
items := make([]T, 0, count) // length = 0, capacity = count, use append
// Use the first form when you know the exact count upfront.
```
Blank import for init() plugin:
```go
import _ "github.com/yourname/pkg/plugin"
// Triggers init() in the plugin package without exposing its name.
```
WaitGroup: the 3-line invariant:
```go
wg.Add(1) // before goroutine launch — never inside
go func() {
	defer wg.Done() // inside goroutine, deferred — runs even on panic
	// ... work
}()
wg.Wait() // after all Add() calls
```
What's Next — Chapter 2
Chapter 1 was the WHY. Every principle above is a design philosophy with real engineering justification.
Chapter 2 is the HOW at the mechanical level — the substrate that makes these principles achievable:
- Zero value — what the compiler initialises for you and why that matters for integrity
- `var` vs `:=` — when each declaration form is idiomatic and why
- Stack vs heap — why `SoftDelete`'s `&now` escapes to the heap, and what that costs
- Escape analysis — run `go build -gcflags='-m' ./...` on CloudMeta and read the output
- The GC tricolor algorithm — why pointer slices create more GC pressure than value slices
- Panic vs error — Kennedy's rule for when a panic is the correct choice
All of Chapter 2's concepts connect directly back to CloudMeta: models/resource.go, models/concepts_vars.go, and storage/panic_recover.go all contain concrete examples of every topic.
Closing Thought
I started this chapter looking for code patterns to memorise. I finished it with a decision framework.
The habit Kennedy instills is simple: before you write a line of code, ask which of the eight pillars it follows. Before you merge a PR, ask which pillars it violates. When an interviewer asks you to defend a design decision, answer in terms of which pillars it satisfies and what tradeoffs it makes.
That habit is the difference between an engineer who writes code that works and an engineer who writes code that can be explained, defended, and maintained. At companies like Databricks, Razorpay, and Sarvam AI, that distinction is the entire interview.
See you in Chapter 2.
All code examples are from the real CloudMeta codebase. Runnable pillar demonstrations are at go-notebook/ch01-intro.