Annoying Monday Morning LinkedIn Homework
Every Monday for months, I'd open LinkedIn and do the same things. Export the post analytics one by one. Open the messages tab. Triage the inbox. Mark conversations for follow-up. Track which posts performed and which fell flat.
Maybe forty minutes total. Fine for one Monday. Bad for fifty Mondays.
So I taught Claude in Chrome how to do it. And I mean taught. Not "wrote a script" or "built an integration." Walked it through each step the way you'd walk a new hire through their first dashboard. Click here. Wait for the export to load. Open the messages tab. Read the last three messages per conversation. Summarize what's pending. Move on.
Now it runs on a weekly schedule. No scripts, no API. Pure agent action. Forty minutes became zero.
How the Teaching Went
Claude in Chrome lets the agent control a browser tab the way a person does. Click coordinates, type into fields, scroll, switch tabs. The catch: the agent doesn't know your particular workflow until you teach it. Teaching is closer to onboarding a new hire than writing code.
The first run, I walked through every click while it watched. "OK, now go to the dashboard. Wait for the page to load. Sometimes there's a banner, dismiss it. Click the export icon. Pick the last 30 days. Confirm. Wait." When it got something wrong, I corrected the way you'd correct a junior on day three. "No, that one's the share button. The export is the icon two over."
After two passes the agent had the workflow. After three it caught its own mistakes. By the fourth, it was running unattended.
The thing I didn't expect: the teaching process was itself the documentation. The steps never got written down. I just walked through them with Claude listening, and the resulting trace was the spec. No README, no SOP doc. Just a recorded run that the agent now reuses.
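Written out, that recorded run amounts to a short step list. This is a reconstruction from the walkthrough above, not the literal text the agent stored:

```text
Weekly LinkedIn run (taught by demonstration)
1. Open LinkedIn and go to the analytics dashboard.
2. Wait for the page to load; if a banner appears, dismiss it.
3. Click the export icon (not the share button -- two icons over).
4. Pick the last 30 days, confirm, wait for the export.
5. Open the messages tab.
6. For each unread conversation, read the last three messages.
7. Summarize what's pending, then move on to the next thread.
```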
The Adjustment That Mattered
The first version pulled "the latest message" from each conversation. Wrong. People send follow-up DMs as separate messages, often within the same hour. Pulling just the latest meant losing the thread (and sometimes a question that had been hanging for a week).
Fix: read the last three messages per conversation, not just the latest. Trivial change. Big effect on what the summary showed me Monday morning.
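In code terms, the fix is a one-token change. This is a toy sketch, not how the agent actually works: it reads the rendered page, there is no LinkedIn API call here, and `context_for` and its message list are hypothetical names for illustration.

```python
def context_for(conversation: list[str]) -> list[str]:
    """Return the messages worth summarizing from one thread.

    `conversation` is a hypothetical list of message texts, oldest first.
    First version was `conversation[-1:]` -- only the newest message,
    which drops follow-up DMs sent as separate messages.
    """
    # Fixed version: keep the last three, so split follow-ups stay together.
    return conversation[-3:]


thread = ["Loved the post!", "Quick question about pricing:", "Any interest in a call?"]
print(context_for(thread))  # all three messages survive
```

Slicing also handles short threads gracefully: a one-message conversation just returns that one message.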
That kind of adjustment is the part most people skip when they automate something. The first version of any agent workflow handles 80% of cases. The interesting work is finding the 20% where the obvious move is wrong, then teaching the agent to handle that.
What It Does Now
Every Monday at 8am, the agent opens LinkedIn, exports last week's post analytics into a spreadsheet, opens messages, reads the last three exchanges in each unread thread, and writes me a summary by priority. By 8:15 there's a doc waiting that tells me which posts performed, which conversations need a reply today, and what can wait.
The summary doc has the same fields every time: top three posts by engagement, conversations with unread messages older than 48 hours, conversations where someone asked a specific question. Three sections. One paragraph each.
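Two of those three sections are really one triage rule over conversations. Here is a hypothetical model of that bucketing; the agent does this by reading the page, not by running code, and the `Thread` shape and thresholds are my assumptions, not anything LinkedIn exposes.

```python
from dataclasses import dataclass


@dataclass
class Thread:
    # Hypothetical shape of one conversation, for illustration only.
    name: str
    unread: bool
    hours_since_last: float
    has_question: bool


def triage(threads: list[Thread]) -> dict[str, list[str]]:
    """Bucket conversations the way the Monday summary does."""
    sections: dict[str, list[str]] = {
        "reply_today": [],  # unread and older than 48 hours
        "questions": [],    # someone asked a specific question
        "can_wait": [],     # everything else
    }
    for t in threads:
        if t.unread and t.hours_since_last > 48:
            # Staleness wins: an old unread question still lands here.
            sections["reply_today"].append(t.name)
        elif t.has_question:
            sections["questions"].append(t.name)
        else:
            sections["can_wait"].append(t.name)
    return sections
```

The ordering of the two checks is the judgment call: a stale unread thread outranks a fresh question, because the cost of a week-old silence is higher than the cost of a slow answer.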
I read the doc with coffee. The whole job is done by the time I'm at my desk.
Why This Belongs in a Course on Building With AI
This build taught me something that is also a central thread of the course: automating a workflow with an agent is more about articulating the workflow cleanly than about writing code.
If you can describe the steps to a careful person taking notes, you can teach an agent. The skill isn't programming. It's process clarity. Most workflows in your week are eligible for this kind of automation. Most people don't realize it because they assume automation requires APIs or scripts.
Claude in Chrome (and the other agent harnesses landing this year) collapses that gap. The bottleneck stops being "can I write the script." It becomes "can I describe what good looks like in clear enough terms that an agent can reproduce it."
The same principle drove the sparring partner skill and the job search prompt pipeline. Articulate the process. Put the rules where the agent can read them. Let the model do the part it's good at.
What's the Most Tedious Thing in Your Browser?
What's the most tedious thing in your browser you'd hand off if you could? That's the question I'd start with. Most people overestimate how hard the automation would be and underestimate how easy it is to teach.
The Monday homework is gone. The forty minutes I'd have spent on it go to whatever I'm building instead. That's the whole pitch.