Notes from the work.

Short, opinionated pieces on applied AI, agent workflows, and the things that actually make internal tools stick.

Start here

If you're new to the work, these three set the foundation.

Foundation · 1 of 3

The Context Audit: why your team's AI tools aren't landing

Your team has AI licenses, but nobody's using them. That's not a training problem. It's a context problem.

Read the post →
Foundation · 2 of 3

What I teach: a pattern from a dozen public builds

Underneath a dozen public builds sits one repeatable workflow. This is the map from those builds to what the course teaches.

Read the post →
Foundation · 3 of 3

How to write a lightweight CLAUDE.md (and AGENTS.md)

Anthropic's own guidance is to keep CLAUDE.md short; bloated files drag every conversation down. Here's the 2026 lightweight pattern.

Read the post →

Pillars

The recurring threads across the writing.

Pillar 01

Shipping with agents

Agentic coding is not magic. It's scope, specs, review, and verification.

Get the harness →
Pillar 02

Context as the compounding layer

AI adoption fails when tools lack durable context. Context is the only AI skill that compounds.

Start a consulting conversation →
Pillar 03

Human review, evals, and reliability

Don't trust outputs you cannot inspect. Lightweight evals beat clever demos.

Get the harness →
Pillar 04

Build logs and case studies

The site is a record of active shipping, not a static portfolio.

See the case studies →
Pillar 05

Operator-led AI transformation

The people closest to the workflow can now build the first version themselves.

Join the next cohort →

Get the field notes

Short updates when the agent workflow changes: what shipped, what broke, what I changed in the harness.

No spam. I send when something useful changes.

Working on something specific?

If a piece here mapped to something you're actually stuck on, I'd rather hear about it than guess at it from the outside.

Start a conversation