A Dozen Builds. One Pattern Underneath.
Since November I've shipped about a dozen things in public. A multimodal route planner. A voice AI portfolio that went from embarrassing to actually working over a weekend. Multiple iterations of a resume tailor. A job search system that runs on eleven prompts. A mind map of thirty-plus AI tools. A multi-agent task system built for ADHD brains. A couple of hackathon wins before all that.
Every one of those builds used the same underlying workflow. Same decision points. The recovery moves when things broke looked identical across all of them.
That's what the course teaches.
Not a tour of AI tools. Not a syntax crash course. The actual repeatable workflow I use every time I sit down to ship something. It's what I wish had existed when I started building with AI a year ago.
Why This Course Exists
The courses I could find in 2025 fell into two buckets. One bucket taught Python: here's how an LLM works, now let's write some prompts. The other bucket was content-farm video tours of AI tools. Neither put a real project in anyone's hands.
Most people who want to build with AI in 2026 don't want a computer science class. They want to ship one tool they'll actually use, with a workflow they can repeat for the next idea, and proof they can point a friend at.
The course page comes later in this post. The structure matters first.
Week 1: Pick a Project That Can Survive Contact With Reality
The first thing that kills beginner AI builds is scope. Someone opens Claude Code on Monday wanting to build a "platform" with users, payments, and a marketplace. By Thursday they're stuck on auth, spiraling, and convinced AI building is hype.
Week 1 teaches scope cutting and spec-first work. You pick a project in one of four lanes: a static personal tool, an input-output tool, a small saved-state app, or a lightweight AI wrapper. You have Claude (or whichever AI you're using) draft a lightweight CLAUDE.md or AGENTS.md for the repo: a short index that tells the agent what the project is and which skills it should know about. Keep it thin. Resist the urge to dump every constraint inline; skills live in their own files, and bloated spec files tend to make models perform worse, not better.
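A thin spec file in that spirit might look like this. The project name, stack, and skill names below are invented for illustration, not taken from any of the builds mentioned:

```markdown
# CLAUDE.md

## What this is
A single-city transit planner. One user (me). No accounts, no payments.

## Stack
Static frontend, one API route. Keep it boring.

## Skills
- argue-with-me: red-team new plans before building
- build-log: draft the public writeup after each ship

## Rules
- One goal per prompt. Commit after each win.
- Don't add features the spec doesn't name.
```

The point is the shape, not the contents: a few lines of orientation plus pointers to skills, with the detailed constraints living in their own files.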
Then plan mode (Shift+Tab twice in Claude Code) before anything gets built. This is where you red-team the idea. A pre-made "argue with me" skill does most of the generic pushback. You layer in nuanced pushback in the moment, for the specifics no reusable skill can capture.
Every public build I've shipped started like this. WalkRide shipped in 24 hours because the lane was narrow: a transit app for one city, not a global routing platform. The 11-prompt job search system worked because the scope was "one person's job hunt," not "a hiring marketplace."
Pick something smaller than you want to. Write the spec. Argue with the agent in plan mode before you ship anything. The whole WalkRide build log is a case study in scope discipline.
Week 2: Fix the Mess
Most courses show you the clean path. You type a prompt, beautiful code appears, the thing works. Nobody mentions what happens on Wednesday when the agent breaks the thing it built on Monday.
Week 2 is the ugly middle. Reading errors without spiraling. Debugging with screenshots instead of descriptions. Using /clear to reset the chat when it's stuck in a loop. The one-change rule: one goal per prompt, commit after each win, so rollback is cheap.
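The one-change rule is easiest to see as a commit rhythm. A minimal sketch, with a hypothetical repo and made-up commit messages:

```shell
set -e
# Hypothetical throwaway repo, for illustration only.
tmp=$(mktemp -d)
cd "$tmp"
git init -q demo-tool
cd demo-tool

# One goal per prompt, one commit per win: each commit is a cheap rollback point.
echo "scaffold" > app.txt
git add app.txt
git -c user.name=demo -c user.email=demo@example.com commit -qm "win: scaffold the input form"

echo "output view" > app.txt
git add app.txt
git -c user.name=demo -c user.email=demo@example.com commit -qm "win: wire up the output view"

# When the agent breaks Wednesday what it built Monday, you roll back to the
# last win instead of spiraling.
git log --oneline
```

Two commits means two safe states; a bad Wednesday costs you one `git reset`, not the whole build.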
This is the week that matters most. Everyone can vibe-code something on day one. The people who keep building are the ones with recovery moves.
Voice AI Portfolio v1 was embarrassing. The agent couldn't read emotion, kept interrupting, sounded robotic. V2 came out of exactly what Week 2 teaches: paste screenshots of the broken behavior, ask narrow questions, reset when context got polluted, commit every working state. (More in the voice AI build log.)
The course won't pretend things ship cleanly. It'll teach you to recover fast enough that broken doesn't mean stuck.
Week 3: Make It Look Real
Most things people build stop here. It works, kind of. The buttons say "Submit" and "Cancel." The empty state shows nothing. Nobody outside the builder has tried it.
Week 3 is product taste. Hierarchy, spacing, contrast. Rewriting confusing UI copy. Then watching one real person use the thing for ten minutes, silent, and writing down what they clicked and what confused them.
Watching a real user for ten minutes tells you more than a week of guessing. Every single build of mine improved most after I watched somebody else try to use it. That's also the week that teaches the handoff: if the build only makes sense to you, it's not done.
Week 4: Tell the Story and Build the Habit
Week 4 is where the repeat part kicks in. You write the build log (the short public writeup for LinkedIn, a newsletter, friends). The moves that saved time during the build become Skills (skill.md files that Claude helps you write) or slash commands. The prompt library is the starting point; the skill library is what keeps compounding across the next ten projects. You pick the next project using a simple filter so momentum carries.
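A saved skill can be as small as one file, stored somewhere like `.claude/skills/build-log/SKILL.md` (the path follows the Claude Code skills convention; the skill name and contents here are invented for illustration):

```markdown
---
name: build-log
description: Draft a short public build log from recent commits and the spec.
---

# Build log

1. Read the last session's commits and the spec file.
2. Draft 150-250 words: what shipped, what broke, the recovery move.
3. End with one line on what the next project inherits from this one.
```

That's the compounding loop in miniature: a move that saved time once becomes a file the agent can reuse on project eleven.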
The last twenty minutes of the course is the map of what's next. Skills. Subagents. MCP servers. Not a full tour, just the names and what they unlock. That way when you hit the "I want to go deeper" wall, you know where the next rungs are.
Everything I keep doing publicly came out of Week 4 habits. The 11-prompt job search system I built after getting laid off started as a prompt library and turned into a reusable skill folder I still pull from. The reason I can ship one thing every week or two isn't speed. It's that every project leaves me with one or two more skills that save time on the next one.
The Full Course
The course is called "From Chatbot to Builder" (working title, possibly changing by launch). Four weeks, one real project, one visible ship each week. Two tiers: a guided cohort with weekly live calls, a Discord, and async stuck-support, and a self-serve version for people who just want the recorded material.
The guided cohort is at the founding-cohort price right now, which means small group, heavy attention, and a floor for what the standard price becomes once the first students have shipped. The self-serve tier launches after the first cohort produces proof.
Who It's For
This works if you've been thinking about building something for months but haven't started. If you've opened Claude Code once and closed it because you didn't know where to begin. If you want to ship one tool you actually use, not a survey of everything AI can do.
It's not for you if you want to become a professional software engineer (this won't get you there), or if you want a passive watch-and-nod library (the course ships a real project or it fails).
Why the Workflow Holds Up
The specific tools will keep moving. New models every 60 days. New MCP servers, new skills patterns, new defaults in Claude Code. Staying on top of that is part of the job, which is why the course gets updated to reflect what works right now, not what worked in 2024.
What doesn't change is the higher-level theory. Tight scope. Outcome-focused ideation. Spec-first. Iterate against a plan. Save what worked into a skill. That's been true since the first hackathon I shipped, and it'll be true when Claude 5 lands. You're learning the pattern, not the flavor of the month.
The Pattern, One More Time
A dozen public builds since November. Every one used the same rhythm: pick a tight scope, have the tool write its own spec, red-team the plan before building, one change at a time, debug with screenshots, watch a real user, turn what worked into a reusable skill, pick the next thing.
That's the course. Everything else on this blog is just receipts.