
Good morning.
Over the past week or so, I watched two of the biggest companies in AI try to out-launch each other in real time.
Anthropic had their big reveal: Claude Opus 4.6 — their smartest model ever. Press embargoes lifted. Blog post ready. Then, fifteen minutes before go-time, they moved the whole thing up. Published early. Grabbed the spotlight.
OpenAI responded within minutes. GPT-5.3-Codex went live almost simultaneously.
Two frontier models. Same day. Same hour. Nearly the same minute.
Socials went nuts. Tech blogs scrambled to cover both at once. Most of the coverage focused on benchmarks and the drama of the release.
I’ve spent the past couple of weeks with them both, as I’m fortunate to sometimes get early access before the public does. And they've changed how I work.
Not incrementally. Not "oh this is a little better." I mean I briefed Claude on a project at 9 AM, went to lunch, and came back to a deliverable that would've taken me two full days to produce. I opened the Codex App, set three agents loose on three different builds, and checked in on their progress between meetings like I was managing a remote team.
I always put new models through a pretty rigorous testing phase. My team and I have some complex systems running, and we're always looking for ways to simplify. Why run ten agents or automations when one model will do?
These aren't chatbots anymore. They're contractors. And the way you use them is different from everything you've been doing for the past two years.
Let me show you exactly what I mean.
— Sam
IN TODAY’S ISSUE 🤖

The New Stack (what dropped, what it costs, and which tool does which job)
The Briefing Method (the system that changes how you use every AI tool from now on)
Three Builds In One Afternoon (what happened when I ran the Codex App for real)
Cowork and the Plugin Revolution (how Anthropic triggered a $285 billion stock selloff — and what happened when I ran the Sales plugin on a real prospect)
The 3,000-Page Play (what a 1M token context window actually unlocks — two use cases that blew my mind)
The Agency Multiplier (the $40/month stack that replaces a $15K/month team)
Your Monday Morning (exactly what to do first)
Let’s get into it.

The New Stack
Before we get into the how, you need to know what actually shipped.
I'm going to keep this tight because the tools themselves aren't the story — what you do with them is.
Claude Opus 4.6 is Anthropic's new flagship. It thinks deeper than any model I've used.
When you give it a complex brief — "audit my entire email funnel and tell me what's broken" — it doesn't rush to an answer. It plans. It considers multiple angles. It writes something, reconsiders, and revises internally before you ever see output. The result reads like it came from someone who slept on your problem overnight, not someone autocompleting sentences.
It also now handles 1 million tokens in a single conversation. That's roughly 3,000 pages. I'll come back to why that matters — it's a bigger deal than it sounds.
GPT-5.3-Codex is OpenAI's new coding agent, and it's a different kind of beast.
It's 25% faster than the last version, but speed isn't the upgrade that matters. The upgrade that matters is duration and steering.
Previous coding models were fire-and-forget — you'd submit a prompt, wait, and hope the output was usable. GPT-5.3-Codex works on projects for hours or days. And while it's working, you can talk to it. Redirect it. Change requirements. Ask questions about its approach. All without losing context. It's the difference between throwing a message in a bottle and having a Slack channel with a developer who's actively building your thing.
Oh, and it helped build itself. OpenAI used early versions to debug its own training run, manage its own deployment, and scale GPU clusters during launch. We're officially in the "AI improving AI" era.
The Codex App landed three days earlier as a macOS desktop app. It's a command center for running multiple coding agents in parallel. Separate threads. Separate projects. Brief one agent on your frontend, another on your API, another on your deployment config. Check in on all of them from one dashboard.
OpenAI made it temporarily free for everyone — including free-tier ChatGPT users. That window will close.
Claude Cowork is the one most people haven't noticed yet, and it might end up being the most important. It launched January 13, with plugins added January 30. Think of it as Claude Code for people who don't code. You point it at a folder on your computer, give it instructions in plain English, and it reads files, creates new ones, edits existing ones, and works through multi-step tasks autonomously.
The plugin system — 11 official plugins covering sales, marketing, finance, legal, support, data, and more — lets you configure Claude as a role-specific AI employee with one click. Install the Sales plugin, and you get slash commands like /sales:call-prep and /sales:prospect-research. Install Marketing, and you get /marketing:content-calendar and /marketing:draft-campaign.
Those plugins triggered a $285 billion selloff in SaaS stocks. Investors did the math on what happens when a $20/month AI can do the job of software that costs $50-500/month. We'll dig into this.
What It Costs
The pricing is straightforward enough that I can give it to you in a few sentences per platform.
Claude: Free tier is too limited for real work. Pro at $20/month gets you Opus 4.6 (roughly 45 messages per 5-hour window), Cowork, and all plugins. If you're a heavy user, Max at $100/month gives you 5x the usage, the full 1M context window, and agent teams.
ChatGPT Plus at $20/month gives you GPT-5.3-Codex across the Codex App, CLI, IDE, and web — roughly 30-150 messages per 5 hours depending on complexity. Pro at $200/month uncaps everything for full-time production work.
The move for most of you: $40/month total for Claude Pro + ChatGPT Plus. That's the smartest thinking model and the fastest coding agent, plus Cowork with plugins, plus the Codex App. Less than a single hour of freelancer time for an entire month of AI contractors.
That said, if these tools are going to carry real production work for you, I'd strongly consider stepping up to Claude Max ($100/month) or ChatGPT Pro ($200/month).
The Briefing Method
For the past two years, we've all been "prompting" AI. Write a prompt. Get a response. Copy it. Edit it. Paste it somewhere. Repeat.
That workflow is dead. Or at least, it should be.
The models that shipped this week don't want prompts. They want briefs. The same kind of brief you'd hand a freelancer or a new hire on their first day. And the difference in output quality is staggering.
I had a team member use it for one of our projects, and here’s what he wrote up for us:
I tested this on a Wednesday afternoon. I took a project I'd been putting off — a full AI audit of an email funnel. The funnel has a sales page, a 12-email welcome sequence, a 5-email cart abandonment flow, three upsell sequences, and a win-back series. I'd been meaning to audit it for weeks.
Before AI, a project like this would take me two to three full days. Reading every email. Mapping the flow. Comparing performance data. Identifying weak links. Writing recommendations. Drafting rewrites.
Instead, I uploaded everything to Claude Opus 4.6. The sales page. Every email. The performance data export — open rates, click rates, revenue per send, unsubscribes. I uploaded the client's brand voice guide and three of their best-performing emails as style references.
Then I didn't write a prompt. I wrote a brief:
I gave it the business context — what the product is, who it's for, what the price point is, what the sales cycle looks like.
I told it what I needed — a stage-by-stage audit from traffic through referral, with specific weaknesses called out by quoting exact lines, and rewrites I could implement that day.
I told it to analyze the performance data and find patterns — which hooks drive clicks, which CTAs drive revenue, which subject line structures get opens, and what the worst-performing emails have in common.
It didn't just "analyze" the funnel. It mapped the entire customer journey and identified three disconnects I'd never noticed. The sales page was making a specific promise about "results in 14 days" that the onboarding sequence never mentioned again — for nine emails.
The highest-converting email hook in the welcome series (a customer transformation story) appeared exactly once and was never replicated.
The cart abandonment flow was using urgency language that directly contradicted the trust-building tone of the rest of the funnel.
These are the insights you get when someone reads 40 pieces of content simultaneously and maps the connections between all of them.
It quoted specific lines. It rewrote the weakest emails. It drafted a new cart abandonment sequence that matched the voice of the best-performing content. It gave me a prioritized list of changes ranked by estimated conversion impact.
Two hours of my time. Ninety percent of that was reading and refining the output, not creating it.
Not bad, right?
The Anatomy of a Good Brief
The difference between a prompt and a brief is context. A prompt is a question. A brief is an assignment.
A good brief has five parts:
Context — What's the business? What's the product? Who's it for? What's the price point? What's working and what isn't? The more context you provide, the less generic the output.
Objective — What do you actually need? Not "help me with my emails." Something like "audit my email funnel stage by stage and identify the three highest-impact changes I can make this week."
Constraints — What are the rules? "Match this brand voice." "Don't exceed 500 words per email." "Focus on the welcome sequence only." Constraints make output usable. Without them, the AI guesses.
Reference material — Upload examples of what "good" looks like. Your best-performing content. A competitor's page you admire. A template you want to follow. This is the single biggest lever for quality.
Deliverable format — Tell it what you want back. A document? A spreadsheet? A list of rewrites? A comparison matrix? Be specific about the shape of the output, and you'll spend less time reformatting.
This isn't complicated. But it's different from what most people are doing. Most people type a sentence and hope for magic. The people getting 10x results are handing the model a folder of context and a clear assignment.
And with Opus 4.6 holding 1 million tokens, that "folder of context" can now be your entire business.
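To make the five parts concrete, here's a minimal sketch of the structure as a reusable template. The section names come straight from the list above; every specific in the example (the product, the file names, the word limit) is invented for illustration.

```python
def build_brief(context, objective, constraints, references, deliverable):
    """Assemble the five-part brief into one document you can paste or upload."""
    sections = [
        ("Context", context),
        ("Objective", objective),
        ("Constraints", "\n".join(f"- {c}" for c in constraints)),
        ("Reference material", "\n".join(f"- {r}" for r in references)),
        ("Deliverable format", deliverable),
    ]
    return "\n\n".join(f"## {title}\n{body}" for title, body in sections)

brief = build_brief(
    context="B2B SaaS, $49/month, sells to small marketing teams.",
    objective="Audit the welcome sequence and flag the 3 highest-impact fixes.",
    constraints=["Match the attached brand voice guide", "Max 500 words per email"],
    references=["top-3-emails.txt", "brand-voice-guide.pdf"],
    deliverable="A prioritized list of changes, plus rewrites for the weakest emails.",
)
```

Once the skeleton exists, writing a new brief is ten minutes of filling in blanks instead of an hour of staring at an empty chat box.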
Three Builds In One Afternoon
I opened the Codex App and decided to stress-test it.
I had three things on my to-do list that all involved building something. A pricing page redesign for a side project. A webhook integration that sends customer events to our analytics stack. And an automated weekly report that pulls metrics from a database and emails them to my team.
Individually, each of these would normally mean finding a freelancer, scoping the work, waiting for delivery, and going through revisions. Or, more realistically, each one would sit on my to-do list for another month.
I opened three threads in the Codex App. Three separate agents. Three separate projects.
For the pricing page, I gave the agent everything: the wireframe, the existing design system, the tier names and features, the monthly and annual pricing, and the instruction to integrate Stripe checkout. I told it I wanted it mobile-responsive and to use the visual style of Linear's pricing page as a reference.
For the webhook, I described the events I needed to capture — signup, upgrade, downgrade, churn — where I wanted them forwarded (Mixpanel), and that I needed retry logic and error handling. Basic stuff for a developer, but not something I wanted to spend my afternoon on.
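For flavor, here's roughly the shape of what that brief asks for, as a hypothetical sketch rather than the agent's actual output. The four event names come from my brief; the payload shape, function names, and retry scheme are my assumptions, not Mixpanel's real API.

```python
import time

TRACKED_EVENTS = {"signup", "upgrade", "downgrade", "churn"}

def build_event(event, user_id, props=None):
    """Validate an incoming event and shape it for forwarding to analytics."""
    if event not in TRACKED_EVENTS:
        raise ValueError(f"unknown event: {event}")
    return {"event": event, "properties": {"distinct_id": user_id, **(props or {})}}

def forward_with_retry(payload, send, retries=3, base_delay=1.0):
    """Deliver the payload, retrying with exponential backoff on failure."""
    for attempt in range(retries):
        try:
            return send(payload)  # e.g. an HTTP POST to the analytics endpoint
        except Exception:
            if attempt == retries - 1:
                raise  # retries exhausted: surface the error for logging
            time.sleep(base_delay * 2 ** attempt)
```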
For the automated report, I listed the metrics I wanted pulled (WAU, MRR, churn rate), the database they live in, the format I wanted (clean HTML email), and the schedule (Monday mornings at 8 AM).
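And the report brief might come back as something like this, again a sketch under assumptions: the metric names match my brief, but the numbers are placeholders, and the scheduling would live in cron or a job scheduler rather than the script itself.

```python
def render_report(metrics):
    """Format a dict of weekly metrics as a clean HTML email body."""
    rows = "".join(
        f"<tr><td>{name}</td><td>{value}</td></tr>" for name, value in metrics.items()
    )
    return f"<h2>Weekly Metrics</h2><table>{rows}</table>"

# In production this dict would be filled by queries against the real database;
# these values are invented placeholders.
metrics = {"WAU": 4210, "MRR": "$18,300", "Churn rate": "2.1%"}
html = render_report(metrics)
# A cron entry like `0 8 * * 1 python weekly_report.py` covers "Mondays at 8 AM";
# the actual delivery would go through SMTP or an email API.
```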
Three briefs. Maybe fifteen minutes total to write them. Then I closed the Codex App and went about my day.
About an hour in, I checked Thread 1. The pricing page was mostly built. It looked clean. But the annual discount wasn't displaying correctly — it was showing the monthly price with a strikethrough instead of calculating the discounted annual total.
In any other tool, I'd copy the broken code, open a new chat, explain the problem, get a fix, and paste it back.
With GPT-5.3-Codex, I just typed into the same thread: "The annual pricing display is wrong. It should show the calculated annual total, not the monthly price with a strikethrough. Also, add a 'Most Popular' badge to the Pro tier."
The agent read my message, understood the existing codebase it had already built, made both changes, and continued finishing the page. No restart. No lost context. No re-explaining what the project is.
This is the steering feature, and it changes everything about how you manage AI builds. You're not submitting prompts anymore. You're having an ongoing conversation with a builder who's actively working on your project. Just like you'd Slack a developer.
By the end of the afternoon, I had three working components. Not perfect, of course. They needed polish and testing. But they were 80-90% done. The kind of first draft a freelancer would deliver after a week.
Three builds. One afternoon. $20/month subscription.
The Parallel Advantage
The thing that's hard to appreciate until you experience it is the parallel part. It's not that each agent is faster than a human. It's that three of them work simultaneously.
While Thread 1 is building the pricing page, Thread 2 is writing the webhook integration, and Thread 3 is building the report script.
They're not waiting for each other. They're not blocked by your availability. You brief them, and they all start working at the same time.
For solo founders and small teams, this is the unlock. You're not three times faster. You're doing three things at once that you previously did sequentially — or, let's be honest, never got around to at all.
The Codex App also has a feature called Skills — bundled instructions that extend what agents can do. There's a "develop web game" skill, a deployment skill, and you can create your own.
A skill is basically an SOP for the agent — it tells the agent how to approach a specific type of task, which tools to use, and what standards to follow.
Sound familiar? It's the same concept as the brief, just packaged for reuse.
Cowork and the Plugin Revolution
Now let's talk about the release that spooked Wall Street.
Claude Cowork launched on January 13. It's Anthropic's answer to a question nobody realized they were asking: what if Claude Code, but for people who don't code?
Cowork runs inside the Claude Desktop app. You click the "Cowork" tab in the sidebar (it sits next to Chat and Code). You point it at a folder on your computer. You give it instructions. And then you walk away.
That last part is what matters. You're not having a conversation. You're not going back and forth. You're leaving a note for a coworker, going to lunch, and coming back to finished work in your folder.
Anthropic built the entire thing in ten days. Using Claude Code. AI building AI tools. That's the meta-story.
But the more interesting story (the one that caused $285 billion in SaaS stock losses) happened on January 30, when Anthropic released the plugin system.
What Plugins Actually Are
A plugin is a package that turns Claude from a generalist into a specialist.
Each one bundles together skills (domain knowledge Claude draws on automatically), connectors (integrations with your tools), slash commands (specific actions you can trigger), and sub-agents (smaller agents that handle specific subtasks).
Anthropic open-sourced 11 of them. You install them with one click inside the Cowork tab. No terminal. No code. No configuration files.
Here's what's available right now:
Sales — connects to your CRM and knowledge base. Teaches Claude your sales process. Commands for prospect research, call prep, and follow-ups.
Marketing — connects to your analytics and content library. Draft campaigns, plan content calendars, manage launches.
Finance — financial modeling, metric tracking, analysis. Point it at a folder of financial docs and ask for a quarterly report.
Legal — contract review with clause-by-clause analysis. Highlights OK clauses in green, risky ones in yellow, critical issues in red. Explains how clauses interact and suggests specific modifications.
Data — write queries, analyze datasets, build reports. Great for the non-technical founder who needs insights from their database but doesn't write SQL.
Customer Support — analyze tickets, draft responses, build knowledge bases from your support history.
Productivity, Project Management, Enterprise Search — exactly what they sound like.
And there's a meta-plugin called Plugin Create that lets you build your own plugins from scratch. Describe what you need in plain English. It generates the plugin. No coding required.
Why Wall Street Panicked
The math is simple. A CRM tool like HubSpot costs $45-800/month depending on tier. A legal review platform costs hundreds. Financial analysis software costs hundreds. Content calendar tools, project management platforms, customer support software — each one is a separate subscription, a separate login, a separate learning curve.
Cowork with plugins does all of it from one interface for $20/month.
Now, will it replace enterprise software overnight? No. These are research previews. They're not yet reliable enough for mission-critical operations at scale.
But for a solopreneur running a SaaS? An agency owner managing five clients? A course creator handling their own marketing? These plugins do 80% of what those specialized tools do, for 5% of the cost.
That's why the stocks dropped, in my opinion. Not because the tools are perfect today. Because the trajectory is obvious.
What Happened When I Actually Used It
Here's where I stop talking about plugins in the abstract and tell you what happened when I ran one on a real project.
I installed the Sales plugin on Thursday morning. Took about thirty seconds — click install, confirm. Then Cowork asked me to customize it. It wanted to know about my business: what I sell, who I sell to, what the typical sales cycle looks like, what CRM I use, what my main value propositions are, and what objections I hear most often.
I spent about ten minutes filling that in. Honestly, it felt like onboarding a new sales hire. Which, in hindsight, is exactly what it is.
Then I tested it. I had a discovery call scheduled for Friday with a prospect — a B2B SaaS company that had reached out about AI content strategy. The kind of call where you want to walk in knowing everything: what their product does, who their competitors are, what their current marketing looks like, what they've tried before, where the gaps are.
I typed /sales:call-prep and gave it the company name, the contact's name and title, and their website URL.
What came back was a five-page briefing document. And I don't mean five pages of fluff. I mean it pulled the company's recent blog posts and analyzed their content strategy.
It looked at their pricing page and mapped their positioning against competitors it identified on its own.
It found the prospect's LinkedIn activity and noted what topics they'd been posting about.
It pulled their job listings to infer what teams they're building (they were hiring three content writers — so content velocity was clearly a priority).
It cross-referenced all of this against my sales process (the one I'd fed it during customization) and gave me suggested talking points mapped to their specific situation.
It even drafted a follow-up email template tailored to three different call outcomes: they're ready to move forward, they need to loop in a decision-maker, or they want to think about it.
This is where I had to sit back for a minute. Because the call prep I normally do just happened in about ninety seconds. And the output was more thorough than what I'd have produced on my own, because it connected dots I wouldn't have thought to look for.
The call on Friday went exceptionally well. Not because the plugin closed the deal for me. But because I walked in with the kind of preparation that makes a prospect think, "This person really did their homework." I knew their pain points before they told me. I referenced a blog post their CEO had written the week before. I asked about their content hiring plans, which genuinely surprised them.
Preparation is a cheat code in sales. Everyone knows this. Almost nobody does it consistently, because it takes too long. When it takes ninety seconds, you do it for every single call.
The Plugin Stack
After the Sales plugin clicked, I spent Friday afternoon installing and customizing three more. Marketing, Finance, and Legal. Each one took about ten minutes to set up.
The Marketing plugin immediately became my content planning hub. I pointed it at a client's Google Analytics export and their existing content library — about sixty blog posts and a year of email newsletters.
I typed /marketing:content-calendar and told it to build a 30-day plan focused on driving trial signups, using only topic angles supported by their performance data. It didn't give me generic content ideas.
It gave me specific headlines modeled on the patterns that had already worked for this audience, scheduled across channels with suggested formats for each.
The Legal plugin earned its keep the same day. A client sent over a vendor contract — 14 pages, dense legalese. I used to forward these to our attorney and wait three to five business days.
Instead, I typed /legal:review-contract and dropped the PDF into the conversation. Three minutes later I had a clause-by-clause breakdown with a green-yellow-red risk assessment, plain-English explanations of what each section actually means, and flagged interactions between clauses that could create problems.
It caught an auto-renewal clause buried in section 11 that would've locked the client into a two-year extension with 90 days' notice required to cancel.
I still had the attorney review the final document — Claude isn't a lawyer and I'm not suggesting you skip yours — but the initial pass saved hours and gave me the right questions to ask.
The Finance plugin I'm still exploring, but even in its basic form, pointing it at a folder of monthly P&L statements and asking for a quarterly trends analysis produced a report that would've taken our bookkeeper a full day to prepare.
Plugins compound. The Sales plugin knows your process. The Marketing plugin knows your content performance. The Legal plugin knows your risk tolerance.
Each one gets “smarter” about your specific business every time you use it, because it's drawing on the context you gave it during setup plus everything it's learned from the work you've done since.
After a month of use, you won't have a set of AI tools. You'll have a team that knows your business.
The 3,000-Page Play
I saved this for late in the issue because it's the feature that sounds like a spec-sheet detail but is actually the most transformative thing that shipped this week.
Claude Opus 4.6 now holds 1 million tokens in a single conversation. That's about 750,000 words. Roughly 3,000 pages.
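The back-of-envelope conversion, using the common rules of thumb of roughly 0.75 English words per token and roughly 250 words per printed page:

```python
tokens = 1_000_000
words = int(tokens * 0.75)   # ~0.75 words per token (rule of thumb)
pages = words // 250         # ~250 words per printed page
print(words, pages)          # → 750000 3000
```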
To put that in perspective: you can upload your complete brand guidelines, your entire email archive from the past year, six months of customer support tickets, your full product catalog, all your SOPs, your sales page, your competitor's sales pages, and your financial reports — all into one conversation. With room to spare.
Why does this matter?
Because the number one reason AI output is generic is lack of context. When you give a model a 500-word prompt and ask it to write an email, it has no idea who you are, what your business sounds like, what's worked before, or what your audience responds to. So it gives you something that could be from anyone.
When you give it 3,000 pages of your actual business content, performance data, and operational history? The output is yours. It can't help but be yours. It's built from your patterns, your voice, your data.
Use Case #1: The Email Audit That Found What I Couldn't
I took the complete email history of a project of ours — 147 emails from the past 14 months — along with their performance data. Open rates, click rates, revenue per send, unsubscribes. All of it. One upload.
Then I asked Opus to do something no human reviewer could do in a reasonable timeframe:
Analyze every email simultaneously and find the hidden patterns.
It found five subject line structures that consistently outperformed everything else, and they weren't the ones I would've guessed.
The highest-performing format wasn't curiosity gaps or urgency. It was specificity with a number. "3 things I'd change about your checkout page" outperformed "How to fix your checkout page" by 40% on average across this client's audience.
It found that emails with personal stories in the first paragraph got 2x the click-through rate of emails that opened with a teaching point. Not a small difference. Not a marginal edge. Double.
It found that the client's worst-performing emails — the ones with the highest unsubscribe rates — all shared one thing in common: they made a promise in the subject line that took more than three paragraphs to deliver on. The audience wanted the payoff faster.
None of this was directly visible in any single email. It only becomes visible when you can see all 147 at once and compare them against performance data.
Then I asked Opus to write the next 30 days of emails based on these specific patterns. Not generic best practices from marketing blogs. Patterns extracted from actual data, with this actual audience, at this actual price point.
The emails it produced were eerily good. Why? Because it had 3,000 pages of context telling it exactly what "good" looks like for this specific business.
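You can sanity-check this kind of pattern analysis yourself, before or after handing the archive to the model. Here's a tiny sketch of the grouping logic; the field names and rates are invented for illustration, not this client's real data.

```python
from collections import defaultdict

def avg_rate_by_structure(emails):
    """Group emails by subject-line structure and average their open rates."""
    buckets = defaultdict(list)
    for e in emails:
        buckets[e["structure"]].append(e["open_rate"])
    return {s: sum(rates) / len(rates) for s, rates in buckets.items()}

emails = [
    {"structure": "number", "open_rate": 0.42},
    {"structure": "number", "open_rate": 0.38},
    {"structure": "curiosity", "open_rate": 0.27},
    {"structure": "urgency", "open_rate": 0.24},
]
ranked = sorted(avg_rate_by_structure(emails).items(), key=lambda kv: -kv[1])
# ranked[0] is the best-performing structure: "number" in this toy data
```

The model's advantage isn't that this math is hard. It's that it can tag all 147 emails by structure, tone, and opening style on its own before doing the comparison.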
Use Case #2: Competitive Positioning You Can't Get Any Other Way
The second experiment was even more eye-opening.
I work with a SaaS client in the project management space. Competitive market. Lots of noise. They've been struggling to articulate why someone should pick them over the five or six alternatives that look almost identical from the outside.
My team and I uploaded everything. Their full marketing site — every page, every feature description, every case study. Their sales deck. Their pricing page. Their onboarding emails. Their help documentation.
Then we uploaded the same materials for four competitors. Not just their homepages — their full feature pages, their pricing, their case studies, their blog posts about product updates, their help docs. It took about thirty minutes to scrape everything. Each competitor's material went in with a clear label so Opus could keep them straight.
The total upload was somewhere around 400,000 tokens. Not even close to the limit.
I briefed Opus the same way I'd brief a brand strategist:
You now have the complete public-facing content for five competing companies in the same market. Analyze all of it simultaneously. I need a positioning map that shows where each company is trying to own mindshare, the messaging patterns each one uses, the specific customer segments each one is targeting, the gaps in the market that nobody is addressing, and a recommended positioning strategy for [my client] that exploits those gaps.What came back was the single most useful competitive analysis I've ever seen. It was reading five companies' worth of material simultaneously and holding all of it in working memory at once.
A human strategist might spend two weeks doing this analysis. They'd take notes on each competitor, then try to synthesize across all of them from memory and notes. Things would get lost. Patterns across companies would be harder to spot.
Opus found that all four competitors were targeting the same customer profile — mid-market teams of 20-50 people — with nearly identical messaging about "streamlining workflows." Every single one. The language was so similar you could swap their brand names and barely notice.
But buried in the support tickets (which I'd also uploaded), there was a clear pattern:
Their happiest customers weren't 20-50 person teams. They were 5-10 person teams at early-stage startups who specifically loved the product's simplicity. That customer segment was completely unaddressed in the competitive landscape.
The positioning recommendation was specific, actionable, and backed by evidence from every source I'd uploaded.
It wasn't "differentiate yourself." Instead, it was:
"Here's the exact customer segment being ignored, here's the messaging angle supported by your own customer data, here are the three feature pages to rewrite first, and here are the specific claims your competitors are making that you should directly counter."
My client's head of marketing called it the best strategic brief they'd ever received. I charged them for a full competitive analysis engagement. It took me an afternoon.
We’re now turning this experiment into a full, semi-autonomous agent flow.
What Else You Can Do With 3,000 Pages
Those two examples only scratch the surface. Here's how I'm thinking about the context window going forward:
For a brand voice audit: Upload your 15 best pieces of writing. Ask Claude to analyze sentence structure, vocabulary patterns, opening techniques, transitions, humor style, and tone. Have it produce a "Brand Voice Bible" — a document you can hand to any AI model or freelancer to replicate your voice. Do this once, use it forever.
For a customer intelligence report: Upload six months of support tickets. Ask for the top recurring issues, most-requested features, common friction points, churn signals (patterns in tickets from customers who later cancelled), and upsell opportunities. This is the kind of analysis that a product manager spends a quarter doing manually.
For a full SOP library: Upload all the documents, templates, and reference files from a specific workflow. Ask Claude to map the complete process, identify every decision point, document every tool involved, estimate time per step, and format it so a new hire could follow it without asking a single question.
Each of these would take a human analyst days to weeks. Each of them takes Opus about twenty minutes, because it's reading everything simultaneously instead of sequentially.
The businesses that figure out how to systematically load their institutional knowledge into these sessions — brand assets, performance data, operational docs, customer feedback — will have a compounding advantage that gets wider every month.
The Service Business Multiplier
If you run an agency, consult for clients, or manage any kind of service business — this section is specifically for you. Because the stack I've been describing doesn't just make you more productive. It fundamentally changes the economics of your business.
Let me walk you through what a Monday morning could look like now if you run a service business.
Let’s pretend I have five active clients right now. A SaaS company that needs content strategy. An e-commerce brand that needs email marketing. A course creator who needs a launch sequence. An agency owner who needs better internal processes. And another SaaS company that needs a full funnel audit.
Before this week, Monday was triage day. I'd pick the most urgent client, spend most of the day on their deliverables, and hope I had enough time left for the other four. The other four usually got pushed to Tuesday. Or Wednesday. Or next week.
Here's how that Monday would go instead.
I'd spend the first hour of the morning writing briefs. Not deliverables. Just briefs. Five of them, one per client.
For the SaaS content client, I'd upload their last quarter of blog analytics, their product changelog, and their competitor's three most recent blog posts, then brief Opus on building a Q2 content calendar with topic angles mapped to their best-performing categories and the competitive gaps they could exploit.
For the e-commerce email client, I'd upload their full email archive and Klaviyo performance data, then brief Opus on the same pattern analysis I described earlier — find what's working, find what's not, and write the next two weeks of emails based on actual data instead of guesswork.
For the course creator, I'd upload their existing sales page, their customer testimonials, and three high-converting sales pages from course creators in adjacent niches, then brief Opus on writing a new five-email launch sequence that synthesizes the best elements of the reference material with the client's actual customer language from the testimonials.
For the agency owner, I'd point Cowork at their operations folder and brief it on documenting their client onboarding process as a step-by-step SOP. They'd been saying they needed this for six months. Nobody had time to write it.
For the second SaaS company, I'd run the full funnel audit — the same process I described in the Briefing Method section.
Five briefs. One hour. Then I'd go to a coffee shop, have a long breakfast, answer emails, and come back around noon.
Every single deliverable would be waiting for me. Not finished — I'd still need to review, refine, and add my own expertise. But the heavy lifting would be done. The research completed. The first drafts written. The analysis laid out.
I’d spend Monday afternoon reviewing and polishing. By end of day, I’d have five client deliverables ready to send. Work that used to take me an entire week could be done before dinner on Monday.
The Math
Let's talk numbers, because this is where it gets absurd.
Let's say you charge $3,000/month per client for strategy and content. Five clients is $15,000/month in revenue. Before this week, servicing five clients took essentially all of your available working hours. Adding a sixth client meant either working nights, hiring someone, or dropping the quality on the other five.
Your AI stack costs $40/month. Claude Pro and ChatGPT Plus.
That $40/month just replaced roughly 60-70% of the production hours across all five clients. You're not doing less work — you're doing different work. Less drafting, more directing. Less research, more strategy. Less execution, more judgment.
Which means you can take on client number six. And seven. Without hiring anyone. Without working longer hours. Without sacrificing quality — because the briefs and the review process actually produce better output than grinding through everything manually under time pressure.
Or — and this is the move I'd make — you keep five clients and use the reclaimed time to build products. A course. A template library. A paid community. Something with leverage. Something that compounds while you sleep.
The agency model has always had a ceiling: your time. That ceiling just got a lot higher.
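If you want to pressure-test that math with your own numbers, here's a quick back-of-envelope sketch in Python. The $3,000/client, $40/month, and 60-70% figures come from above; the hours-per-client and production-share numbers are illustrative guesses — swap in your own.

```python
# Back-of-envelope capacity math for a service business.
# Revenue and stack-cost figures are from the newsletter;
# hours_per_client and production_share are illustrative assumptions.

clients = 5
revenue_per_client = 3_000      # $/month per client (from the text)
stack_cost = 40                 # $/month: Claude Pro + ChatGPT Plus

hours_per_client = 32           # assumption: ~8 hrs/week per client
production_share = 0.80         # assumption: share of hours that are production work
ai_replacement = 0.65           # midpoint of the 60-70% claim

old_hours = clients * hours_per_client
saved = old_hours * production_share * ai_replacement
new_hours = old_hours - saved
new_capacity = old_hours / (new_hours / clients)  # clients servable in the same hours

print(f"Monthly revenue: ${clients * revenue_per_client:,}")
print(f"Stack cost: ${stack_cost}/month")
print(f"Hours per month: {old_hours} before, {new_hours:.0f} after")
print(f"Client capacity at the new rate: {new_capacity:.1f}")
```

Even with conservative inputs, the pattern holds: the constraint shifts from production hours to how many good briefs you can write and review.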
Your Monday Morning
If you've made it this far, you might be feeling a combination of excitement and overwhelm. That's normal. A lot just shipped. You don't need to learn all of it at once.
Here's what I'd do if I were starting from scratch on Monday morning. Not the whole stack. Not all the plugins. Just the thing that will make you understand, in your bones, why this week was different.
Before 9 AM: Sign up for Claude Pro ($20/month) if you haven't already. Download the Claude Desktop app. Open the Cowork tab and install whichever plugin is closest to your work — Sales if you sell, Marketing if you create content, Finance if you manage money, Legal if you deal with contracts.
9:00 AM: Spend ten minutes customizing the plugin. Answer its questions about your business. Be specific. The more context you give it during setup, the better everything works from here.
9:15 AM: Pick one real task from your actual to-do list. Not a test. Not "write me a poem about productivity." A real deliverable that a client or your business is waiting on. Something that would normally take you half a day or more.
9:30 AM: Write a brief. Not a prompt — a brief. Give it the context (what's the business, what's the situation), the objective (what do you need), the constraints (what are the rules), the reference material (upload examples of what "good" looks like), and the deliverable format (what should the output look like). Upload everything relevant. Don't be stingy with context.
9:45 AM: Send the brief. Close the laptop. Go for a walk. Get a coffee. Do whatever you want for an hour.
10:45 AM: Open the laptop. Read what came back.
That's the moment. That's when you'll feel the shift. The output won’t be perfect. You'll need to refine it, add your expertise, adjust the tone, fix details.
But the distance between what you sent and what came back will be unlike anything you've experienced from an AI model before.
It'll feel less like getting a chatbot response and more like checking in on work your team did while you were out.
If you want to go further, install the Codex App (currently free) and open a thread for whatever you've been putting off building. Brief it the same way. Let it cook. Check in when you feel like it. Steer when needed.
But honestly? Just the one brief is enough for Monday. Once you feel the difference between prompting and briefing, you'll reorganize everything else on your own. You won't need me to tell you.

I've been writing about AI tools for a while now. Most weeks, the releases are incremental. A little faster. A little cheaper. A little smarter.
These past couple of weeks were different.
If I could leave you with one thing, it's this: the bottleneck is not the tool. It's your ability to provide context and brief it.
The founders who document their processes, organize their files, and learn to write clear briefs will extract 10x more value from a $20/month subscription than someone paying $200/month who's still typing one-line prompts.
The Codex App is free right now for everyone. Claude Pro is $20/month. Both shipped this week with capabilities that would've been science fiction twelve months ago.
Go brief something Monday morning. See what comes back.
Until next time,
Sam Woods
The Editor

