The Context Audit: why your team's AI tools aren't landing
Your team has AI licenses and nobody's using them. That's not a training problem. It's a context problem. Read the post →
Short, opinionated pieces on applied AI, agent workflows, and the things that actually make internal tools stick.
If you're new to the work, these three set the foundation.
Your team has AI licenses and nobody's using them. That's not a training problem. It's a context problem. Read the post →
A dozen public builds, one repeatable workflow underneath. The map from the builds to what the course teaches. Read the post →
Anthropic's own guidance is to keep CLAUDE.md short. Bloated files drag every conversation down. The 2026 lightweight pattern. Read the post →
The recurring threads across the writing.
Agentic coding is not magic. It's scope, specs, review, and verification. Get the harness →
AI adoption fails when tools lack durable context. Context is the only AI skill that compounds. Start a consulting conversation →
Don't trust outputs you cannot inspect. Lightweight evals beat clever demos (a minimal sketch below). Get the harness →
The site is a record of active shipping, not a static portfolio. See the case studies →
The people closest to the workflow can now build the first version themselves. Join the next cohort →
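To make the eval point concrete: a lightweight eval can be a dozen lines. The sketch below is a generic Python loop under assumed names, not the harness itself; `generate` and the two cases stand in for whatever model call and checks you actually run.

```python
# Minimal eval harness: named checks over model outputs, run as a batch.

def generate(prompt: str) -> str:
    # Stand-in for the real model call (API, local, whatever you use).
    return "42"

CASES = [
    # (prompt, check) pairs: each check is a plain predicate you can read.
    ("What is 6 * 7? Answer with the number only.", lambda out: out.strip() == "42"),
    ("Reply with the word OK and nothing else.", lambda out: out.strip() == "OK"),
]

def run_evals() -> None:
    failures = 0
    for prompt, check in CASES:
        out = generate(prompt)
        ok = check(out)
        failures += not ok
        print(f"{'PASS' if ok else 'FAIL'}: {prompt!r} -> {out!r}")
    print(f"{len(CASES) - failures}/{len(CASES)} passed")

if __name__ == "__main__":
    run_evals()
```

The payoff is inspectability: every case prints the prompt and the raw output, so a failure is something you can read, not a score you have to trust.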
RefereeOS uses six AI agents and a Daytona sandbox to help peer reviewers verify preprints before bad science gets amplified. Built in five hours. First place.
Anthropic's own guidance is to keep CLAUDE.md short. Bloated files drag every conversation down. The 2026 lightweight pattern for CLAUDE.md and AGENTS.md.
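For scale, the whole file should fit on one screen. A minimal sketch of the lightweight shape, with invented project details; the actual pattern is in the post:

```
# CLAUDE.md (deliberately short)

Internal tools site. TypeScript, pnpm. (hypothetical project)

## Commands
pnpm dev    # local server
pnpm test   # run before any commit

## Conventions
- Small, focused PRs.
- Deeper context lives in docs/ and gets linked per task, not pasted here.
```

Anything that changes per task stays out of the file; the agent pulls it in when the task needs it.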
Built Commons Copilot at the Betaworks multi-agent hackathon in two hours. Five agents share one Intent Space. No queue, no planner, no dispatcher. Second place.
Built a sparring partner skill that pressure-tests my thinking before any writing starts. Most people use AI to write faster. Better to think harder first.
Annoying weekly LinkedIn homework: export analytics, triage messages. So I taught Claude in Chrome how to do it. No scripts, no API, just patient teaching.
Your team has AI licenses and nobody's using them. That's not a training problem. It's a context problem. A 30-day Context Audit ships the fix in four weeks.
Most AI tools fail inside teams because the tool doesn't know who's using it or what good looks like. Context is the only AI skill that compounds.
Ran four coding agents in parallel, each owning a case study end-to-end. The skill morphed from prompting to scoping the right work for the right tool.
Late-round interview, ADHD brain on the scenic route, interviewer flagged my stories as too long. Built Stellar that weekend: paste a messy story, get a clean STAR.
A dozen public builds since November. One repeatable workflow underneath. That's what the course teaches. This is the map from the builds to the curriculum.
WalkRide combines walking, biking, and transit into smarter NYC routes. The 24-hour build log: why parallel APIs matter and what shipping fast taught me.
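The parallel-APIs point in one sketch: ask every mode at once, so the slowest provider costs you its own latency rather than the sum of all three. Names and numbers below are stand-ins, not WalkRide's actual code:

```python
import asyncio

async def fetch_route(mode: str, origin: str, dest: str) -> dict:
    # Placeholder for a real HTTP call to a routing provider.
    await asyncio.sleep(0.1)
    return {"mode": mode, "minutes": 42}

async def best_route(origin: str, dest: str) -> dict:
    # Fan out all three lookups concurrently instead of serially.
    routes = await asyncio.gather(
        fetch_route("walk", origin, dest),
        fetch_route("bike", origin, dest),
        fetch_route("transit", origin, dest),
    )
    return min(routes, key=lambda r: r["minutes"])

print(asyncio.run(best_route("Astoria", "SoHo")))
```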
An ElevenLabs recruiter visited my LinkedIn profile. Couldn't DM her back, so I built a voice agent for the next one. V1 was embarrassing. V2 wasn't.
Got laid off. Spent 45 minutes per job application doing work a prompt could do in 5. Built an 11-prompt pipeline in two weeks, with a tracker to close the loop.
My path to AI building went through fine-dining floors in NYC and four and a half years in B2B SaaS. Most of what I know about shipping I learned on a floor.
ADHDos was my first serious multi-agent build, shipped as a Google/Kaggle capstone. Three agents, a constraint-aware design, and an April retrospective update.
Short updates when the agent workflow changes: what shipped, what broke, what I changed in the harness.
No spam. I send when something useful changes.
If a piece here mapped to something you're actually stuck on, I'd rather hear about it than guess at it from the outside.