
Good morning.
A SaaS founder I advise got on a Zoom with me a few weeks ago, one of our regular check-ins, and I could tell something was wrong before he even said anything.
His onboarding system had stopped working. New signups were coming in, trials were converting, but nobody was actually getting provisioned into the product.
The whole thing relied on a single integration between his payment processor and his provisioning workflow. One API update on the processor's side, pushed without any warning, broke the connection.
And this is a guy who had spent the last six months building agent workflows across his operation: sales intelligence agents, support triage agents, onboarding follow-up sequences driven by AI.
He was doing sophisticated work on top of infrastructure he hadn't touched since the day he duct-taped it together.
By the time he noticed, 73 new customers had paid and received nothing. His support inbox was a disaster. His ops lead was fielding angry emails and manually onboarding people one at a time, and he spent two full days cleaning up a mess that started because one integration he'd completely forgotten about just broke.
That conversation is what prompted this issue. I've been thinking about what it actually takes to run AI well inside a real business, and it comes down to three things:
The skills most operators are missing, the tools that serious founders are actually using, and the prompts that change how you see your own operation.
— Sam
IN TODAY’S ISSUE 🤖

The skills gap most established founders don't realize they have
What serious AI operators are running across seven categories
Five prompts that have changed how I advise businesses, with context engineering guidance and model recommendations for each
Let’s get into it.

The Skills Gap (Most Founders Don’t Realize They Have)
I keep meeting younger founders who are building things that would have taken a funded team of ten people five years ago. If you're a founder in your mid-thirties or forties running an established business, pay attention to what these people actually know how to do, because it's going to be your competition within the next few years.
The skills I'm watching young builders pick up that most operators with established teams haven't started learning:
Context engineering, which is the skill that actually matters now that the models are good enough. The difference between getting useless output and getting something you can actually ship comes down to how well you structure the context you give the tool: what information you include, what you leave out, what constraints you set, and how you sequence the work so each step has what it needs.
Building and managing a personal knowledge base that an AI tool can actually read and use. This means learning how to structure information in a way that's useful to a machine and not just to a human.
Thinking in context graphs, which means understanding how the different pieces of your business connect to each other and how an AI system needs to traverse those connections to do useful work. If you can map out how your customers relate to your products relate to your operations relate to your finances, you can give an agent a working model of your business instead of just handing it isolated tasks.
Scoping and managing AI agents that run multi-step processes, including how to set guardrails so the agent doesn't go off the rails when it hits an edge case.
Knowing when to use AI and when to do the work yourself. The people who try to automate everything end up with a bunch of systems producing average work across the board instead of great work where it counts.
Documenting processes and decisions explicitly as a habit, because the more of your thinking you can get out of your head and into a structured format, the more of your operation an AI system can actually help with. This connects directly to the tacit-to-explicit knowledge conversion that makes agent deployment possible.
A founder who's good at even half of these can operate at a level that looks like a team of five from the outside.
The gap closes when you learn these skills yourself, or at least learn them well enough to know what good looks like when someone on your team is doing them.
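To make the context-graph idea concrete, here's a minimal sketch of how you might represent those business connections so an agent can traverse them. This is an illustration, not any specific tool's API; all the entity names and relations are hypothetical placeholders.

```python
from collections import defaultdict

# A minimal context graph: nodes are business entities, edges are
# labeled relationships an agent can follow to gather context.
class ContextGraph:
    def __init__(self):
        self.edges = defaultdict(list)  # node -> [(relation, node), ...]

    def connect(self, src, relation, dst):
        self.edges[src].append((relation, dst))

    def traverse(self, start, max_hops=2):
        """Collect every entity reachable within max_hops of a starting
        node -- the working context you'd hand to an agent."""
        seen, frontier = {start}, [(start, 0)]
        while frontier:
            node, depth = frontier.pop(0)
            if depth == max_hops:
                continue
            for _, nxt in self.edges[node]:
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, depth + 1))
        return seen

# Hypothetical slice of a SaaS business:
g = ContextGraph()
g.connect("customer:acme", "subscribes_to", "product:pro_plan")
g.connect("product:pro_plan", "provisioned_by", "workflow:onboarding")
g.connect("workflow:onboarding", "depends_on", "integration:payment_webhook")

# Everything an agent should know before touching Acme's account:
print(g.traverse("customer:acme", max_hops=3))
```

The point of the structure is the traversal: instead of handing an agent an isolated task, you hand it the subgraph of everything connected to the entity it's working on.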
What Serious Operators Are Using
Every week I see another "here are the 30 AI tools you need" post with a grid of logos and zero context for when or why you'd actually use any of them.
Here's what I'm actually seeing operators use across the businesses I advise, organized by the job each one does.
Thinking and strategy. The operators getting the most out of AI are using it as a thinking partner before they use it as a production tool. They'll take a pricing decision, a hiring question, a product roadmap tradeoff, and work through it conversationally with a model that has real context on their business.
Claude with Projects for long-context strategic thinking, or Claude Code and Claude Cowork for a more agentic setup where the tool can actually execute on decisions, run tasks, and work across your files instead of just talking through ideas
ChatGPT with custom GPTs for structured decision workflows
Perplexity for real-time research questions mid-conversation
Research and intelligence. Two jobs here. Real-time research when a question comes up, and ongoing monitoring that surfaces intelligence before you ask for it.
Perplexity for competitive research and market questions, synthesized answers with sources instead of ten blue links
Crayon for ongoing competitive monitoring, watches competitor sites, pricing, and messaging changes automatically
Content production. What separates the operators producing good content from the ones producing generic output is that they've built a voice guide and examples into their system context so the tool writes in their voice.
Claude or ChatGPT for drafting long-form content, emails, and website copy
Jasper for brand-consistent content at scale with agent workflows that handle the full pipeline
Descript for turning podcasts and video into clips and written content
Sales and outreach. This category has gotten genuinely agentic in the last year. The pattern that works best is a research agent that preps before a call, a tool that drafts personalized outreach, and an automation that handles follow-up.
Apollo for prospecting data combined with outreach in one platform
Outreach revenue agent for autonomous prospecting at higher volume
Clay for enriching prospect data from multiple sources
Operations and workflow. This is the category most people underinvest in and it's where the biggest leverage actually lives.
Zapier Agents for multi-step autonomous actions across your whole stack
Make for complex workflows where you need finer control over the logic
Cursor and Claude Code for building custom internal tools and dashboards in hours
Data and analytics. The move I'm seeing more and more is operators asking questions in plain English instead of building traditional dashboards.
Cursor or Claude pointed at your database for conversational data access
ThoughtSpot for AI-powered dashboards that deliver personalized insights
Julius for connecting multiple data sources and discovering insights through conversation
Design and visual. Most operators are using AI for rapid variant testing: where they used to test two options, they now test ten.
Midjourney, Nano Banana, or ChatGPT image generation for visual assets and product mockups
Gamma for presentations and pitch decks from a brief
Sitekick or Base44 for conversion-optimized landing pages in minutes
A founder using five of these tools connected through Zapier or Make with shared context will outperform someone using fifteen that each live in their own silo.
Five Prompts That Changed How I Advise Businesses
Most founders I work with are sitting on Claude or ChatGPT subscriptions and using maybe 10% of what the tool can actually do because they're asking surface-level questions and getting surface-level answers back. The prompts below are ones I've either built for operators I advise or refined over dozens of sessions.
These aren't cute one-liners. Each one requires real context to work, and I'll explain what context to load and which model to use for each.
1. The Blind Spot Audit
Context to load: your current revenue model, team structure, key tools and automations, your biggest customers or revenue concentrations, and any recent changes you've made to the operation. If you just say "I run a SaaS company" you'll get generic risk management advice. If you say "I run a SaaS company doing $1.2M ARR with 60% of revenue from three enterprise accounts and one developer maintaining our entire codebase" you'll get something that actually keeps you up at night.
Best model: Claude Opus 4.7 with extended thinking or GPT-5.4 Thinking. You want a reasoning model here because the value comes from the tool working through second and third order consequences of your setup.
The prompt:
I'm going to give you a complete picture of how my business operates right now. I want you to identify the three biggest risks I'm probably not thinking about, the assumptions I'm making that I haven't tested, and the areas where my current setup has single points of failure. For each one, tell me what the failure scenario looks like and what I'd need to do in the next 30 days to reduce the risk. Push back on me. I'd rather hear something uncomfortable now than discover it during a crisis.

2. The Pricing Pressure Test
Context to load: your actual pricing page or rate card, your cost structure including what you pay for tools and labor, your gross margins per product or service, what your top three competitors charge, and if you have it, any data on where prospects drop off in your sales process. Load a few examples of recent proposals or invoices if you can.
Best model: Claude Sonnet 4.6 or GPT-5.4 for the initial analysis since this is more about pattern recognition across structured data than deep reasoning. If the initial output surfaces something interesting, switch to Claude Opus 4.7 or GPT-5.4 Thinking for the follow-up where you're working through scenarios.
The prompt:
Here's my current pricing structure, my cost basis, my margins by product or service line, and my customer segments. I want you to stress-test this pricing from three angles: where am I leaving money on the table because I'm underpricing relative to the value I deliver, where am I creating friction that's costing me conversions because the pricing structure is too complex or misaligned with how customers buy, and where am I vulnerable to a competitor undercutting me. For each angle, give me a specific recommendation I could test within two weeks.

3. The Customer Intelligence Debrief
Context to load: raw customer communications, as many as you can. Support tickets, sales call transcripts, churn survey responses, NPS comments, even Slack messages from a customer community if you have one. The value of this prompt scales directly with the volume and rawness of the input. Cleaned-up summaries don't work because the tool picks up on phrasing patterns and emotional signals in the original language that get lost when someone has already filtered them.
Best model: Claude Opus 4.7 or Claude Sonnet 4.6, because you're feeding in a lot of raw text and you need the model to hold all of it in memory while finding patterns across the full dataset. If your data is in a spreadsheet or database, point Cursor or Claude Code at it directly instead of pasting it into a chat window.
The prompt:
I'm going to paste in the last 30 days of customer support tickets [or sales call notes, or churn emails, or NPS survey responses]. I want you to identify the patterns I'm missing. What are customers actually frustrated about that they're expressing in different ways across these conversations? What feature or service gap keeps showing up indirectly even when people aren't requesting it explicitly? And what's the single highest-leverage change I could make to my product or service based on what these customers are telling me, even if they don't know they're telling me?

4. The Process Extraction
Context to load: walk the tool through the process step by step, out loud, the way you'd explain it to someone shadowing you. Include the "obvious" parts you'd normally skip because those are often where your tacit knowledge lives. If you have examples of past decisions where you went one direction versus another, include those with the reasoning you used at the time. The tool is essentially interviewing you to extract knowledge you don't realize you have.
Best model: Claude Opus 4.7 or Sonnet 4.6 in a Projects workspace, or even better, Claude Cowork if you want the tool to actually build the SOP document and iterate on it with file access. This is a process you'll want to run multiple times for different parts of your operation, and having the previous extractions as context makes each subsequent one better. This prompt is directly connected to the tacit-to-explicit knowledge conversion that makes agent deployment possible later.
The prompt:
I'm going to describe how I handle [specific process] from start to finish, including the parts I do on autopilot and the judgment calls I make along the way. I want you to document this as a complete standard operating procedure that someone else on my team could follow without asking me questions. But I also want you to flag every point in the process where I'm making a decision based on experience or gut feeling rather than a clear rule, because those are the parts I need to either codify into explicit criteria or keep in my own hands.

5. The Strategic Fork
Context to load: everything relevant to the decision. Your current financials, your team's capacity, your competitive position, any time constraints, and what you've already tried or considered. Also tell the tool what your risk tolerance is and what your priority is for the next 6-12 months, because a founder optimizing for growth makes a different choice than one optimizing for margin or sustainability. The "don't hedge" instruction matters because models default to giving you balanced pros-and-cons that feel helpful but don't actually force a decision.
Best model: Claude Opus 4.7 with extended thinking or GPT-5.4 Thinking. This is the prompt where reasoning depth matters the most because you're asking the tool to simulate multiple futures and compare them. The faster models will give you a structured comparison that looks thorough but won't actually reason through the second-order effects in a way that surfaces something you hadn't considered.
The prompt:
I'm facing a decision between [option A] and [option B] and I need to think through this properly before I commit. For each option, I want you to map out the most likely scenario if it works, the most likely scenario if it fails, the second-order effects I probably haven't considered, and the reversibility of the decision, meaning how hard it would be to undo or change course six months from now. Then tell me which one you'd choose if you were running this business and why, and don't hedge. I want a clear recommendation.

That does it for this issue. Let me know what you think.

That SaaS founder I mentioned at the top? We spent six hours mapping his infrastructure, adding failure alerts, and building redundant pathways for the automations that were existential to his business.
The hard part was watching him realize he'd been building increasingly sophisticated AI systems on top of a foundation he'd never pressure-tested.
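For a sense of what those failure alerts look like, here's a minimal sketch of one: a periodic check that compares paid signups against provisioned accounts and flags any gap. The function names, data shapes, and alert hookup are illustrative, not pulled from his actual stack.

```python
# Minimal sketch of a provisioning health check: compare who has paid
# against who has actually been provisioned, and alert on any gap.
# In production you'd fetch both ID lists from your payment processor
# and provisioning system, and run this every few minutes on a schedule.

def check_provisioning_gap(paid_customer_ids, provisioned_ids, alert):
    """Return the set of paying customers who were never provisioned,
    firing the alert callback if that set is non-empty."""
    gap = set(paid_customer_ids) - set(provisioned_ids)
    if gap:
        alert(f"{len(gap)} paid customers not provisioned: {sorted(gap)}")
    return gap

# Example run with fake data:
alerts = []
gap = check_provisioning_gap(
    paid_customer_ids=["c1", "c2", "c3"],
    provisioned_ids=["c1", "c3"],
    alert=alerts.append,  # swap for Slack, email, or a pager in real use
)
print(gap)  # {'c2'}
```

Twenty lines like this, wired to the two systems on either side of that broken integration, would have surfaced the problem within minutes instead of after 73 angry customers.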
The pattern across all of this is the same. The tools are available and the models are good enough.
What's missing for most operators is the context:
The skills to use them well, the infrastructure to connect them, and the prompts that matter.
Until next time,
Sam Woods
The Editor

