11 AI Prompts That Run a Complete Job Search

Two Weeks After the Layoff

Cision laid me off on January 20. By the end of that week, I was hammering through job applications and noticing a pattern. Forty-five minutes per application. Read the JD. Tailor the resume. Write a cover letter. Update LinkedIn. Research the company. Prep STAR stories. The work was real, but most of it was wrappable in a prompt.

Two weeks later, I had a system. Eleven prompts that run end to end, chained so each one's output feeds the next. Forty-five minutes per application became five.

This post is what I built and how it works. The full library is still public at vincentdipaola.notion.site/ai-powered-job-search. Copy whatever you want.

The Wrong Framing: Prompts as Single-Use Tools

Most "ChatGPT for job search" advice gives you a list of one-off prompts: one for the resume, one for the cover letter, one for follow-up. The list works for any single application. It doesn't compound.

What changed when I built this as a pipeline instead: each prompt's output becomes the next prompt's input. Run the JD Analyzer once on a posting. Its output (decoded requirements, culture signals, red flags, fit rating) feeds the resume tailoring prompt, the cover letter prompt, the company research prompts, and the STAR story prompt. The same context shows up six different places without me re-typing it.

Output schemas were the unlock. Early versions produced wildly different formats across models. Adding explicit output structure (named sections, tables, fixed fields) made each prompt's output something the next prompt could consume. A mediocre prompt fed great context outperforms a great prompt with no context.
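The schema idea is easy to sketch. Here's a minimal, hypothetical version of how a fixed output structure makes one prompt's reply machine-consumable by the next: the `JDAnalysis` fields and `parse_jd_output` helper are illustrative names, not part of the actual library, and the parser assumes the model was told to reply with `## Section` headers and bullet lists.

```python
from dataclasses import dataclass, field

# Hypothetical schema for the JD Analyzer's output. Fixing the field
# names is what lets downstream prompts consume this without re-typing.
@dataclass
class JDAnalysis:
    requirements: list[str] = field(default_factory=list)
    culture_signals: list[str] = field(default_factory=list)
    red_flags: list[str] = field(default_factory=list)
    fit_rating: str = ""

def parse_jd_output(text: str) -> JDAnalysis:
    """Parse a model reply that uses '## Section' headers into fixed fields."""
    sections: dict[str, list[str]] = {}
    current = None
    for line in text.splitlines():
        if line.startswith("## "):
            current = line[3:].strip().lower().replace(" ", "_")
            sections[current] = []
        elif current and line.strip().startswith("- "):
            sections[current].append(line.strip()[2:])
    return JDAnalysis(
        requirements=sections.get("requirements", []),
        culture_signals=sections.get("culture_signals", []),
        red_flags=sections.get("red_flags", []),
        fit_rating=" ".join(sections.get("fit_rating", [])),
    )
```

Once the output is a typed object instead of free text, "feed the JD analysis into the cover letter prompt" is a one-liner rather than a copy-paste job.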

The 11 Prompts, Grouped by Stage

The pipeline runs in five stages. Each stage's output flows into the next.

Stage 1: Research and analysis. One prompt: the Job Description Analyzer. Paste in any job posting and get back explicit requirements, implicit signals, culture clues, red flags, and a fit rating with gap identification. This is the prompt I run before deciding whether to apply at all. If the gap analysis lights up, I skip the application and save the time.

Stage 2: Application materials. Three prompts run from the JD Analyzer's output. Resume Tailoring Engine scores each bullet for relevance to this specific role, flags gaps, and recommends an order for the resume. Cover Letter Framework uses a three-paragraph structure that reframes background gaps as strengths instead of apologizing for them. LinkedIn Optimization generates headline variants and a keyword-rich about section, tuned for recruiter searches.

Stage 3: Company research. Two prompts. Company Deep Dive surfaces decision-makers, recent news, and strategic priorities. Competitive Positioning maps how the company differentiates from its peers and what that implies for the role you're interviewing for. By the time I get to a phone screen, I've internalized things the interviewer probably hasn't articulated themselves.

Stage 4: Interview prep. Three prompts. STAR Story Generator mines my experience for 5 to 7 stories that map to common interview competencies. Reverse Interview Questions generates questions tailored to the interviewer's specific role (different questions for the hiring manager vs. the IC vs. the recruiter). Salary Negotiation Prep pulls market data, frames the negotiation, and scripts the harder conversations.

Stage 5: Follow-up and outreach. Two prompts. Post-Interview Follow-up drafts a 24-hour thank-you that references specific conversation points instead of generic gratitude. Cold Outreach Templates for hitting hiring managers and internal advocates directly when there's a connection point worth using.

Eleven prompts. Five stages. One pipeline.
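The chaining itself is simple enough to sketch. This is a hypothetical skeleton, not the actual implementation: each stage reads a shared context dict and writes its output back in, so every later prompt sees everything produced upstream. `run_prompt` is a stand-in for whatever model call you use.

```python
# Hypothetical sketch of the stage chain. Stage names mirror the post;
# run_prompt is a placeholder for a real call to Claude or GPT.
def run_prompt(name: str, context: dict) -> str:
    # In practice: send the named prompt plus all prior outputs to the
    # model and return its structured reply.
    return f"[{name} output given {sorted(context)}]"

STAGES = [
    ("jd_analysis", "Job Description Analyzer"),
    ("resume", "Resume Tailoring Engine"),
    ("cover_letter", "Cover Letter Framework"),
    ("research", "Company Deep Dive"),
    ("star_stories", "STAR Story Generator"),
]

def run_pipeline(job_posting: str) -> dict:
    context = {"job_posting": job_posting}
    for key, prompt_name in STAGES:
        # Each stage's output lands in the context the next stage reads.
        context[key] = run_prompt(prompt_name, context)
    return context
```

The design choice worth copying is the accumulating context: by the time the STAR Story Generator runs, it sees the JD analysis, the tailored resume, and the company research for free.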

Why I Run It on Both Claude and GPT

Each prompt was tested primarily on Claude Opus and Sonnet, plus GPT-5.2. The "works with any model" claim that floats around prompt libraries is aspirational. Different models do different parts of the pipeline better. The JD Analyzer is sharp on Claude. The Cover Letter Framework lands more naturally on GPT. The STAR generator is roughly tied. Where the difference matters, the prompt notes it.

The point of testing across both wasn't to be model-agnostic. It was to find where each was stronger and use it there.

The Tracker That Closes the Loop

A pipeline without measurement is a content factory. The system also includes a Google Sheets tracker logging every application: source, role, resume version, response status, dates, auto-calculated metrics on response rates and pipeline health.

The tracker turned the job search into A/B testing. Two resume versions run against similar roles, response rates compared. One cold outreach template against another, same comparison. After thirty applications, the data was clean enough to stop guessing about what was working.
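The tracker's auto-calculated metric amounts to a grouped response rate. A minimal sketch, assuming tracker rows with `resume_version` and `response_status` columns (the field names here are illustrative, not the actual sheet's):

```python
from collections import defaultdict

# Group applications by resume version and compare response rates,
# mirroring the tracker's auto-calculated columns. Any status beyond
# a rejection or silence counts as a response.
def response_rates(rows: list[dict]) -> dict[str, float]:
    sent: dict[str, int] = defaultdict(int)
    replied: dict[str, int] = defaultdict(int)
    for row in rows:
        version = row["resume_version"]
        sent[version] += 1
        if row["response_status"] in {"screen", "interview", "offer"}:
            replied[version] += 1
    return {v: replied[v] / sent[v] for v in sent}
```

With thirty-odd rows, the output is a per-version rate you can eyeball, which is all the statistics a job search needs.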

What This Teaches

The whole thing is prompt engineering as systems design. That's the part that transferred to everything else I've built since.

Three lessons that applied immediately to the next builds:

  • Output schemas are non-negotiable. Every prompt in the pipeline returns structured fields the next prompt can consume. Same logic now lives in every CLAUDE.md or AGENTS.md I write: tell the model exactly what shape the output should take, or it'll improvise.
  • The mediocre prompt with context beats the brilliant prompt without. The JD Analyzer doesn't have to be the world's best prompt. It has to produce context the rest of the pipeline can use.
  • Save what worked into reusable form. Eleven prompts started as eleven separate ChatGPT chats. Within a week they'd become a saved library. A few weeks later, the strongest ones became Skills attached to my personal Claude setup. Prompts decay. Skills compound.

That last point is the meta-lesson behind everything I've built since the layoff. The first version of any workflow is a prompt. The second is a skill. The third is a system someone else can use.

The tracker template is linked from the same hub. Feel free to fork either piece. The whole pipeline is what got me back to building full-time, and what taught me the rhythms behind every build that came after.