The Interviewer Said It Out Loud
Earlier this week I had a late-round interview with a company I was really excited about. Some of my answers landed. Others rambled. I could feel it happening in real time, ADHD brain taking the scenic route when the interviewer wanted the highway.
Most interviewers dodge that kind of thing. This one didn't. He told me straight: a couple of those stories would've hit harder in STAR format. Situation, Task, Action, Result.
Fair. I knew the format. I'd never used it under pressure.
So I built Stellar that weekend. A tool that takes your raw, long-winded story (the way you'd tell it to a friend) and reshapes it into a clean STAR narrative you can drop into interview prep, performance reviews, or anywhere you need to communicate impact clearly.
This post is the build log. The full case study is at /stellar.
What Stellar Does
You paste a messy story into a textarea. The model sorts it into Situation, Task, Action, Result, plus a clear title. You get back a structured version you can copy-paste anywhere. No login, no onboarding, no stored data. Open the page, paste, get the result, close the tab.
The job isn't to invent new content. It's to separate context from challenge from action from result, in someone else's words, without making them sound like a robot wrote it. Most STAR tools fail at the last part.
That's the whole product.
The 48-Hour Build
The build ran from the interview Tuesday night through Thursday afternoon. Stack: v0 for the initial Next.js shell and UI; Next.js 16 with React 19 and Tailwind 4 underneath; Vercel AI SDK v6 wrapping OpenAI; a Zod schema for structured output; deployed on Vercel.
The whole app is under 300 lines of custom code. Three components: a page, a story input, a STAR result card. One API route at POST /api/format-star. Nothing else. No database, no auth, no long-lived server state.
The trick was the schema. Vercel's AI SDK lets you pass a Zod schema and get typed output back from the model directly. No regex parsing, no broken JSON, no "the model returned a string with three sections that should have been four." The schema enforces the STAR structure at the response layer. The model produces valid STAR fields or the response fails.
Most of the engineering was the schema. Most of the thinking was the prompt. The first prompt produced grammatically clean STAR fields that sounded nothing like the input. Like a corporate ghostwriter had filed off all the personality. So I rewrote it with explicit instructions to keep the speaker's voice, vocabulary, and quirks. Same structure, same person. The difference between Stellar feeling like a tool and Stellar feeling like a robot was almost entirely in the prompt, not the schema.
Three Design Decisions
Structured output over streaming. Plenty of AI apps stream tokens for the visual flair. For this use case, streaming is wrong. The user wants a final clean result, not to watch the model think. So the API returns one complete object, validated against the schema. Either the object validates or the route asks the model again.
A 50-character minimum on the input. A light guardrail. If the story you're trying to format is shorter than fifty characters, the model has nothing to work with. Fifty is enough to hint at context. It's also low enough not to be annoying.
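A minimal sketch of that check (the 50-character threshold is from the post; the function name and error copy are mine):

```typescript
// The 50-character floor comes from the post; everything else here is illustrative.
const MIN_STORY_LENGTH = 50;

type Validation = { ok: true } | { ok: false; error: string };

function validateStory(input: string): Validation {
  const trimmed = input.trim();
  if (trimmed.length < MIN_STORY_LENGTH) {
    return {
      ok: false,
      error: `Tell us a little more: at least ${MIN_STORY_LENGTH} characters.`,
    };
  }
  return { ok: true };
}
```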
Copy-to-clipboard as the primary action. The main thing users do after Stellar formats their story is paste it somewhere else (a Notion doc, a resume, an interview prep file). The primary button on the result page does that in one click. Everything else fades.
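The copy action itself is a one-liner on the Clipboard API; the interesting part is what text gets assembled. A sketch, with field names and formatting as my assumptions:

```typescript
interface StarStory {
  title: string;
  situation: string;
  task: string;
  action: string;
  result: string;
}

// Joins the formatted story into the plain text the copy button writes.
function toClipboardText(s: StarStory): string {
  return [
    s.title,
    `Situation: ${s.situation}`,
    `Task: ${s.task}`,
    `Action: ${s.action}`,
    `Result: ${s.result}`,
  ].join("\n\n");
}

// In the component, the primary button would call something like:
// await navigator.clipboard.writeText(toClipboardText(story));
```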
Each of those was a fifteen-minute decision. Each shaved measurable friction off the experience.
The Meta-Proof
The thing I didn't expect: Stellar is the tool I used to tell the story of building Stellar.
When I wrote the LinkedIn post about this build, the first draft was four paragraphs of context, half a sentence about what Stellar does, and a long ramble about why I built it. Classic me. So I pasted my own draft into the tool I'd just built and got back a STAR-shaped version. Situation: late-round interview, feedback about long-winded stories. Task: ship something useful before the lesson got stale. Action: scaffolded with v0, structured the output with Zod, shipped to Vercel. Result: a tool I'm using right now to write this post.
That's the test that mattered more than any A/B. The thing has to work on the person who built it, on the exact problem that triggered the build, before it can work on anyone else.
What This Taught Me
The fastest builds are the ones that come straight out of friction you just felt. Not friction you've noticed in someone else. Not friction you've been thinking about for a quarter. Friction from this morning.
The interview ended Tuesday at 5pm. By Thursday afternoon there was a shipped tool. Less than 48 hours. The shortest distance between feedback and a build I've hit so far.
There's a pattern under that. Friction is the spec. The interviewer's two-sentence feedback gave me a tighter product brief than any kickoff meeting I've sat in. Specific input. Specific output. Specific failure mode. The build practically wrote itself once the spec was clear.
The skill is recognizing the moment something annoyed you enough that the spec just landed. Most people miss it because the annoyance is uncomfortable. The discomfort is the signal.
Where Stellar Lives Now
The app is live at stellar.teamvince.com. Paste in a long story, get back a STAR version. Copy what you need, close the tab.
If you've got an interview coming up, a self-review to write, or a resume bullet that's not landing, it's free. If you spot the pattern (annoyance → tight spec → fast build) in your own work, that's a prototype waiting to ship.
Getting from feedback to a shipped tool fast is the pattern that runs through everything I teach by building in public.