Yesterday: Four Agents in Parallel
Yesterday I ran four coding agents in parallel for the first time. New personal record.
Each agent owned one case study from start to finish. Read the app's codebase. Pulled the existing case-study text. Rewrote it to reflect everything I'd shipped since the initial version. Grabbed screenshots from the right folder. Built a native page on the site following the brand guidelines. Replaced the old Gamma embed. Deployed.
Four case studies. Four agents. One coordinated session. The pages on teamvince.com are the result.
A month ago I was running one agent at a time and double-checking every pass. Today I'm orchestrating four in a session and trusting the work. The skill that changed had nothing to do with better prompting. It was learning to scope.
What "Scope" Actually Means With Agents
Most people who try to run multiple coding agents at once hit the same wall. Two agents step on each other. Three agents argue about the same files. Four agents corrupt your repo and you spend the afternoon recovering.
The fix isn't prompting. It's architecture. Each agent has to own a piece of work that doesn't overlap with what the others are touching.
For the case-study migration, that meant one agent per case study. Each agent worked on its own page in its own branch. They never touched each other's files. The shared code (brand utilities, layout components) was already stable, so nobody had reason to modify it. The work was naturally parallel because the boundaries were clear.
That's the architecture-side discipline. Without it, "four agents in parallel" becomes "four agents in conflict."
The Mechanics
Each agent ran in its own Claude Code session, each on its own git worktree pointing at a feature branch. Worktrees are the unlock here. They give each agent a separate working directory tied to the same repo. Agent 1 works on the WalkRide branch in one worktree. Agent 2 works on the ResumeTailor branch in another. They don't see each other's changes until I merge.
Setup time per agent: about ninety seconds. Fire up a new session, point it at a worktree, hand it a focused brief ("here's the codebase, here's the existing case study, rebuild it as a native page following the brand skill"). Then let it work and move to the next one.
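Concretely, the per-agent setup is just one worktree per case study. Here's a minimal sketch in Python of what that step looks like; WalkRide and ResumeTailor are the real case studies, but the other two names, the `case-study/` branch prefix, and the worktree location are illustrative, not the exact commands I ran:

```python
import subprocess
from pathlib import Path

# WalkRide and ResumeTailor are real; the other names, the branch prefix,
# and the worktree location are placeholders for illustration.
CASE_STUDIES = ["walkride", "resumetailor", "third-case-study", "fourth-case-study"]
WORKTREE_ROOT = Path("../agent-worktrees")

def create_worktree(name: str) -> Path:
    """Create a feature branch and a dedicated working directory for one agent."""
    path = WORKTREE_ROOT / name
    branch = f"case-study/{name}"
    # One command: make the branch and check it out in its own directory,
    # still tied to the same underlying repo.
    subprocess.run(["git", "worktree", "add", "-b", branch, str(path)], check=True)
    return path

for study in CASE_STUDIES:
    print(f"Agent workspace ready: {create_worktree(study)}")
```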
Once all four were running, my role became coordinator, not coder. Check the diffs as each agent finished. Run the build to make sure nothing crossed over. Merge in order. The actual coding happened in four parallel streams. The orchestration was the human work.
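I did that coordination by hand, but the loop is simple enough to sketch. Roughly, per branch: look at the diff, merge into main, run the build. A rough Python sketch, assuming the branch names from above and with `npm run build` standing in for whatever the site's actual build command is:

```python
import subprocess

# Merge order is a human decision; branch names follow the sketch above.
BRANCHES = [
    "case-study/walkride",
    "case-study/resumetailor",
    "case-study/third-case-study",
    "case-study/fourth-case-study",
]

def review_and_merge(branch: str) -> None:
    """Eyeball the diff, merge into main, and make sure the build still passes."""
    subprocess.run(["git", "diff", "--stat", f"main...{branch}"], check=True)
    if input(f"Merge {branch}? [y/N] ").strip().lower() != "y":
        return
    subprocess.run(["git", "checkout", "main"], check=True)
    subprocess.run(["git", "merge", "--no-ff", branch], check=True)
    # Placeholder build command; anything that fails loudly when work crossed over will do.
    subprocess.run(["npm", "run", "build"], check=True)

for branch in BRANCHES:
    review_and_merge(branch)
```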
What Made the Briefs Work
The agents didn't get general "go rebuild this case study" instructions. Each one got a focused brief that named the inputs (existing case-study markdown, screenshot folder), the constraints (follow the brand skill, no Gamma embeds, native HTML), and the success condition (page live at the right slug, passes the build).
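If I wrote that brief shape down as a structure, it would look something like this. The file paths and wording are hypothetical, but the three parts (inputs, constraints, success condition) are the ones each agent actually got:

```python
from dataclasses import dataclass

@dataclass
class AgentBrief:
    """One agent's assignment: named inputs, explicit constraints, one success condition."""
    case_study: str
    inputs: list[str]          # where the agent starts reading
    constraints: list[str]     # what it must and must not do
    success_condition: str     # how we both know it's done

walkride_brief = AgentBrief(
    case_study="WalkRide",
    inputs=[
        "content/case-studies/walkride.md",   # hypothetical path to the existing write-up
        "assets/screenshots/walkride/",       # hypothetical screenshot folder
    ],
    constraints=[
        "Follow the brand skill (SKILL.md in the repo)",
        "No Gamma embeds; build the page as native HTML",
        "Don't modify shared layout or brand components",
    ],
    success_condition="Page live at the right slug and the site build passes",
)
```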
Most "the agent broke things" stories I hear come from briefs that are vague about boundaries. Once you can write a brief that names what's in scope, what's out, and how to know when it's done, parallel agents become possible. Before that, even one agent will burn time on the wrong work.
The brand skill (a SKILL.md file in the repo) was load-bearing. Each agent could reference it without me copy-pasting style rules four times. Skills compose. That's the whole point of building them in the first place.
Where Parallel Breaks Down
Four worked. Five would have been a stretch. The bottleneck isn't the agents. It's me.
Each agent needs a focused brief at the start, a check-in when it surfaces a question, and a review when it finishes. Beyond four, my attention drops. I miss things. Diffs get merged that shouldn't be.
The other failure mode: shared dependencies. If two agents need to modify the same component, parallel doesn't help. One has to go first. Either you serialize, or you split the work differently. The case-study migration worked because the case studies barely shared code beyond the brand layer.
Both constraints are scoping problems. How much can the human handle in parallel? How separable is the work? Get those two right and you can run as many agents as the answers allow.
What This Teaches
The shift from prompting to scoping is the meta-lesson behind everything I've built since November. Same pattern as the 11 AI prompts that ran my job search. Same pattern as the sparring partner skill. Same pattern as the Stellar build's schema-enforced output.
In each case, the part that mattered wasn't the prompt. It was the structure around the prompt. Where the inputs come from. Where the outputs go. What the boundary is between this agent's job and the next agent's job.
Better prompts get you 1.5x. Better scoping gets you 4x, because it lets you run multiple agents in parallel without babysitting each one.
The course is the playbook for getting there. Scope cutting in week 1. Recovery moves in week 2. Polish in week 3. By week 4 the question stops being "how do I prompt this" and starts being "what's the right work for the right tool, and how do I keep the boundaries clean."
That's the difference between using AI to type faster and using AI to ship more.
The Bottleneck Moved
A month ago I was running one agent at a time. Now four. The bottleneck used to be the agent. Now it's me.
That's the right direction.