
Good morning.
If you’ve made use of previous Skills issues, you ran the CRO audit. You optimized the landing page. You launched the ad campaign. Conversion rate went up.
But was it the headline change? The new CTA? The audience targeting? Or something else entirely?
Without measurement, you're guessing. Educated guessing, maybe. But you’re still guessing.
Measurement is the infrastructure that makes everything else in this series work.
It’s not flashy, and it’s not the latest AI distraction. But it’s required for any serious business.
The previous skills issues help you make changes. This issue helps you know which changes mattered. It's the difference between "we tried a lot of things and revenue went up" and "the headline change drove a 23% lift, the CTA change had no effect, and the audience targeting is worth scaling."
Most businesses measure too little or too much, and understand too little. Dashboards with 47 metrics. Weekly reports nobody reads. Data everywhere, insight nowhere. The goal is the right data, structured so you can act on it.
This issue gives you five skills to build that infrastructure. Analytics setup that works across platforms. Event tracking that captures what matters. Dashboards that surface insights instead of burying them. Experiments designed to produce real answers. And attribution that shows what's actually driving conversions.
— Sam
IN TODAY’S ISSUE 🤖

Measurement as infrastructure
Analytics setup: platform-agnostic principles
Event tracking: capturing what matters
Dashboard design: signal over noise
Experiment design: tests that produce answers
Attribution: what's actually working
Download the skills (for Cortex subscribers)
Let’s get into it.

1. Measurement as Infrastructure
What you're getting today:
5 skills covering the full measurement stack
Platform-agnostic frameworks that work with any analytics tool
Event taxonomy templates you can adapt
Experiment design with statistical interpretation built in
A funnel instrumentation project you can complete in 1-2 business days
If you haven't set up your skill environment or you’re not using Claude yet, start with the previous issue. This issue assumes you have skills installed and understand the basics.
When you’re ready, let’s go:
Measurement enables everything else. Without it, you're optimizing blind.
The Measurement Hierarchy
Build in this order:
| Layer | What It Does | Depends On |
|-------|--------------|------------|
| 1. Analytics foundation | Tool configured, data collecting | Nothing |
| 2. Event tracking | Key actions captured with consistent naming | Layer 1 |
| 3. Dashboards | Metrics surfaced in a way you'll actually use | Layers 1-2 |
| 4. Experiments | Changes tested with proper controls | Layers 1-3 |
| 5. Attribution | Credit assigned to what's actually working | Layers 1-4 |
Each layer builds on the previous. Skip a layer and the ones above it don't work properly. You can't build useful dashboards without consistent event tracking. You can't run valid experiments without reliable analytics. You can't understand attribution without all of the above.
When to Use These Skills
Starting from scratch: Work through the layers in order. Analytics setup first, then event tracking, then dashboards. Experiments and attribution come once the foundation is solid.
Existing setup that's messy: Start with an analytics audit. Find out what's broken or missing. Fix the foundation before building on top of it.
Preparing to scale: Before increasing ad spend or expanding channels, get attribution working. Know what's actually driving results so you scale the right things.
After making changes: Run experiments to know if changes worked. Use dashboards to monitor impact over time.
The Five Skills
analytics-setup starts at the foundation. Configure your analytics correctly, regardless of which platform you use. Account structure, goals, data quality, verification. This is the plumbing that everything else depends on.
event-tracking builds the vocabulary. Events are the raw material for every report, dashboard, and experiment. This skill covers what to track, how to name it, and what properties to capture. Get this wrong and everything downstream is harder.
dashboard-design surfaces what matters. Most dashboards fail because they show too much. This skill is about choosing the right metrics, arranging them usefully, and creating reports you'll actually use.
experiment-design produces real answers. Most A/B tests fail to produce usable results because the design was wrong. This skill covers hypothesis formation, sample size, duration, and how to interpret results honestly.
conversion-attribution assigns credit correctly. When a customer touches multiple channels before converting, who gets credit? This skill covers attribution models and how to think about channel contribution when perfect measurement is impossible.
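To make "consistent naming" concrete: one common convention (an assumption for illustration, not prescribed by the skill files) is lowercase snake_case `object_action` event names with a fixed property schema per event. A minimal sketch:

```python
# Minimal stand-in for an analytics SDK's track() call. The naming rule
# (lowercase snake_case, object_action) is one common convention, shown
# here as an assumption, not the skill file's required scheme.

def track(event_name: str, properties: dict) -> dict:
    """Record an event with its properties; enforce the naming rule."""
    assert event_name == event_name.lower(), "use lowercase snake_case names"
    return {"event": event_name, "properties": properties}

checkout = track("checkout_started", {"cart_value": 89.00, "item_count": 3})
signup = track("signup_completed", {"plan": "free", "source": "organic"})
```

The payoff comes downstream: every report, dashboard, and experiment can query `checkout_started` and get the same event with the same properties every time.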
Connecting Your Data to Claude
These skills help you think through measurement strategy, design tracking, and interpret results. But to work with your actual data, Claude needs access to it.
Three approaches:
| Method | How It Works | Best For |
|--------|--------------|----------|
| Manual export | Export CSVs or screenshots, upload to Claude | Everyone, works now |
| API integration | Connect via custom scripts | Technical users |
| MCP connectors | Direct platform connection via Model Context Protocol | Cleanest option when available |
Manual export works for everyone. Export a CSV from your analytics platform, upload it to your Claude Project, and ask Claude to analyze it. For dashboards and visual reports, screenshots work. This approach requires no setup beyond what you already have. It’s NOT ideal, but it works.
MCP connectors are the emerging standard. Model Context Protocol lets Claude connect directly to platforms like Google Analytics, data warehouses, and BI tools. When available, this eliminates the export step entirely. Claude can query your data directly, pull fresh numbers, and work with live metrics instead of static exports.
Check what MCP connectors are available for your analytics stack. If your platform has one, set it up. If not, manual export works fine. The skills in this issue work either way.
The analytics-setup Skill
Before you can track events, build dashboards, or run experiments, you need analytics working correctly. This is the foundation everything else sits on.
The principles here are platform-agnostic. Whether you use GA4, Mixpanel, Amplitude, Plausible, or something else, the same fundamentals apply. Tools change. Principles don't.
Most analytics setups fail in predictable ways. Test data mixed with production data. Bot traffic inflating numbers. Internal visits counted as real traffic. Goals never configured, so you're tracking pageviews but not conversions.
The skill file covers all of these, but the core discipline is simple: verify that what your analytics reports matches what actually happened.
The most important setup decision is distinguishing macro conversions from micro conversions:
Macro conversions are the primary business outcomes: purchase, paid signup, qualified lead.
Micro conversions are the steps that lead there: add to cart, start checkout, form view.
Track both. Macro tells you the outcome. Micro tells you where the funnel breaks.
| Business Type | Macro Conversions | Micro Conversions |
|---------------|-------------------|-------------------|
| SaaS | Paid subscription, trial start | Signup, activation, feature use |
| Ecommerce | Purchase | Add to cart, checkout start |
| Lead gen | Qualified lead submitted | Form start, content download |
| Content | Subscription, membership | Newsletter signup, article completion |
Assign monetary values where possible. Even rough estimates help prioritize. A $500 demo request matters more than a newsletter signup. Your analytics should reflect that.
Example Prompt:
Audit my analytics setup for [type of business]. Platform: [GA4/Mixpanel/etc.]. Currently tracking: [what's configured]. Tell me what's missing, what's misconfigured, and what to prioritize fixing.

Here’s the full SKILL.md file for you to use:
---
name: analytics-setup
description: Configure analytics foundations. Account structure, goals, data quality, verification. Use when setting up new analytics, auditing existing setup, or migrating platforms.
---
# Analytics Setup
Configure analytics foundations using platform-agnostic principles. This skill covers account structure, goal and conversion setup, data quality controls, and verification processes that apply whether you use GA4, Mixpanel, Amplitude, Plausible, or any other analytics platform.
## When to Use This Skill
- Setting up analytics for a new site or product
- Auditing an existing analytics configuration
- Migrating between analytics platforms
- Verifying that tracking matches reality
- Troubleshooting data quality issues
- Preparing analytics foundation before adding event tracking
## Account Structure Principles
### Environment Separation
Never mix data from different environments:
| Environment | Purpose | Data Handling |
|-------------|---------|---------------|
| Production | Real user data | Primary analytics property |
| Staging | Pre-release testing | Separate property, exclude from reports |
| Development | Local development | Separate property or filtered out |
**Why it matters:** Test traffic pollutes real data. A developer refreshing a page 500 times during testing looks like a traffic spike. Internal testing of checkout flows creates fake conversions.
### Property/Account Naming
Use consistent, descriptive naming:
```
[Company] - [Product/Site] - [Environment]
```
**Examples:**
- Acme Corp - Main Site - Production
- Acme Corp - Main Site - Staging
- Acme Corp - Mobile App - Production
### Access Control
| Role | Permissions | Who |
|------|-------------|-----|
| Admin | Full configuration access | Analytics owner, engineering lead |
| Editor | Can modify goals, filters | Marketing leads |
| Analyst | Full read access | Team members who analyze |
| Viewer | Read access to reports | Stakeholders, executives |
**Principle:** Minimum necessary access. Most people need to read data, not modify configuration.
### Data Retention
| Consideration | Recommendation |
|---------------|----------------|
| Minimum retention | At least 14 months (year-over-year comparison) |
| Ideal retention | 26 months (2+ years of trends) |
| Legal constraints | GDPR and privacy laws may limit retention |
| Storage costs | Some platforms charge for longer retention |
Set retention before collecting data. Extending later doesn't recover deleted data.
## Goal and Conversion Setup
### Conversion Types
| Type | Definition | Examples |
|------|------------|----------|
| Macro conversions | Primary business outcomes | Purchase, paid signup, qualified lead |
| Micro conversions | Steps toward macro conversions | Add to cart, form start, demo request |
Track both. Macro conversions tell you the outcome. Micro conversions help diagnose where funnels break.
### Goals by Business Type
| Business | Macro Conversions | Micro Conversions |
|----------|-------------------|-------------------|
| SaaS | Paid subscription, trial start | Signup, activation, feature adoption |
| Ecommerce | Purchase completed | Add to cart, checkout start, account creation |
| Lead generation | Qualified lead submitted | Content download, form start, demo request |
| Content/Media | Subscription, membership | Newsletter signup, article completion, share |
| Marketplace | Transaction completed | Listing view, contact seller, account creation |
### Goal Value Assignment
Assign monetary values to conversions when possible:
| Conversion | Value Approach |
|------------|----------------|
| Purchase | Actual transaction value |
| Subscription | Lifetime value or first-year value |
| Lead | Average lead value (close rate × deal size) |
| Trial | Trial-to-paid conversion rate × subscription value |
| Content action | Estimated value or indexed value |
Even rough estimates help prioritize. A $500 lead submission matters more than a newsletter signup.
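As a quick sketch of per-conversion value estimation (all numbers below are hypothetical placeholders):

```python
# Rough per-conversion value estimates. Every number here is a hypothetical
# placeholder; substitute your own close rates and deal sizes.

def lead_value(close_rate: float, avg_deal_size: float) -> float:
    """Expected value of a single qualified lead."""
    return close_rate * avg_deal_size

def trial_value(trial_to_paid_rate: float, subscription_value: float) -> float:
    """Expected value of a single trial start."""
    return trial_to_paid_rate * subscription_value

print(lead_value(0.10, 5000))   # 10% close rate, $5,000 deals -> 500.0 per lead
print(trial_value(0.25, 1200))  # 25% trial conversion, $1,200 first year -> 300.0 per trial
```

Even this crude arithmetic is enough to rank conversions: at these numbers, one demo request is worth hundreds of newsletter signups.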
### Goal Configuration Checklist
- [ ] All macro conversions defined and tracking
- [ ] Key micro conversions defined and tracking
- [ ] Values assigned where possible
- [ ] Goal names are clear and consistent
- [ ] Goals tested and verified working
- [ ] Documentation of what each goal measures
## Data Quality Controls
### Bot Filtering
| Control | Implementation |
|---------|---------------|
| Known bots | Enable platform's bot filtering (most have this) |
| Suspicious patterns | Filter unusually high pageviews per session |
| Data center traffic | Some platforms filter known data center IPs |
| Custom filters | Filter based on known bot user agents |
**Verification:** Check traffic for suspicious spikes, unusually low bounce rates from certain sources, or unrealistic session durations.
### Internal Traffic Exclusion
| Method | Best For |
|--------|----------|
| IP filtering | Office networks with static IPs |
| Cookie/flag based | Remote teams, dynamic IPs |
| User ID exclusion | Logged-in internal users |
| UTM parameter | Testing traffic (utm_source=internal_test) |
**Verification:** Visit your site from internal network, verify traffic doesn't appear in reports.
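If your platform can't filter internal traffic at collection time, you can strip it from exports instead. A minimal sketch using the `utm_source=internal_test` convention from the table above (the column names are assumptions about your export format):

```python
# Strip internal test traffic from exported session data, using the
# utm_source=internal_test tagging convention from the table above.
# Column names here are assumptions about your export format.

def drop_internal(rows: list[dict]) -> list[dict]:
    """Keep only rows not tagged as internal test traffic."""
    return [r for r in rows if r.get("utm_source") != "internal_test"]

sessions = [
    {"session_id": "1", "utm_source": "google"},
    {"session_id": "2", "utm_source": "internal_test"},
    {"session_id": "3", "utm_source": "newsletter"},
]
print(len(drop_internal(sessions)))  # -> 2
```

Collection-time filtering is still preferable; post-hoc filtering only cleans the data you remember to clean.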
### Referral Exclusions
Exclude domains that shouldn't appear as referral sources:
| Exclude | Why |
|---------|-----|
| Your own domains | Cross-subdomain traffic isn't referral |
| Payment processors | Stripe, PayPal redirects aren't referral |
| Auth providers | OAuth redirects aren't referral |
| CDNs | If your CDN appears as referrer |
**Symptom of missing exclusions:** High bounce rate from payment processor "referrals" that are actually returning customers.
### Cross-Domain Tracking
If users cross domains (e.g., shop.example.com to checkout.example.com):
| Check | Requirement |
|-------|-------------|
| Linker configured | Client ID passes between domains |
| Referral exclusion | Domains excluded from referral |
| Testing | Verify session continues across domains |
**Verification:** Start session on domain A, cross to domain B, confirm same session in reports.
### Time and Currency
| Setting | Recommendation |
|---------|----------------|
| Timezone | Match your business timezone for reporting clarity |
| Currency | Single currency for all revenue reporting |
| Consistency | Same settings across all properties/views |
## Verification Process
### Real-Time Testing
| Step | What to Check |
|------|---------------|
| 1 | Visit site in new incognito window |
| 2 | Perform tracked actions (pageview, button click, form submit) |
| 3 | Check real-time reports immediately |
| 4 | Verify event appears with correct properties |
### Conversion Verification
| Comparison | What to Check |
|------------|---------------|
| Analytics conversions vs. system records | Do reported purchases match order system? |
| Analytics signups vs. database | Do reported signups match user database? |
| Analytics leads vs. CRM | Do reported leads match CRM entries? |
**Acceptable variance:** Within 5% is normal (timing, filtering differences). Greater than 10% indicates a problem.
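The variance thresholds above can be sketched as a simple check (the tier labels are illustrative, not a standard):

```python
# Compare analytics-reported conversions to source-of-truth records and
# classify the variance: <=5% normal, >10% a problem. Tier labels are
# illustrative choices, not a standard.

def variance_check(analytics_count: int, system_count: int) -> str:
    variance = abs(analytics_count - system_count) / system_count
    if variance <= 0.05:
        return f"OK ({variance:.1%} variance)"
    if variance <= 0.10:
        return f"WATCH ({variance:.1%} variance)"
    return f"PROBLEM ({variance:.1%} variance)"

print(variance_check(970, 1000))  # 3% off -> OK
print(variance_check(880, 1000))  # 12% off -> PROBLEM
```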
### Traffic Sanity Checks
| Check | What to Look For |
|-------|------------------|
| Traffic trends | Major unexplained spikes or drops |
| Bounce rates | Unusually high (>90%) or low (<20%) rates |
| Session duration | Average durations that seem impossible |
| Geographic data | Unexpected traffic from countries you don't serve |
### Regular Audit Schedule
| Cadence | What to Audit |
|---------|---------------|
| Weekly | Check for tracking anomalies, verify key goals firing |
| Monthly | Compare analytics to source systems, review data quality |
| Quarterly | Full configuration review, filter effectiveness |
| Annually | Major cleanup, deprecate unused goals, review access |
## Common Configuration Issues
| Issue | Symptom | Fix |
|-------|---------|-----|
| Test traffic in production | Spikes during development, fake conversions | Implement environment separation |
| Missing bot filtering | Inflated traffic, low conversion rates | Enable bot filtering |
| Broken goal tracking | Goals show zero or incorrect counts | Re-verify goal configuration |
| Cross-domain session breaks | High "referral" traffic from own domains | Configure cross-domain tracking |
| Internal traffic included | Higher traffic than expected, test conversions | Implement IP or flag filtering |
| Time zone mismatch | Reports don't match business day | Align time zone settings |
## Connecting to Claude
To use these skills with your actual analytics data, Claude needs access to it.
### Connection Methods
| Method | How It Works | Setup Required |
|--------|--------------|----------------|
| Manual export | Export data as CSV, upload to Claude Project | None |
| Screenshots | Capture dashboards/reports, upload as images | None |
| MCP connector | Direct platform connection via Model Context Protocol | MCP setup |
### Manual Export Approach
Works with any platform, no setup required:
1. **Export relevant data** as CSV (traffic, conversions, events)
2. **Upload to Claude Project** as context
3. **Reference in conversation** ("analyze the analytics export I uploaded")
**What to export:**
- Traffic summary by source/medium
- Goal/conversion data
- Event counts and properties
- User/session metrics
### MCP Connectors
Model Context Protocol enables direct platform connections. When available, Claude can query your analytics directly without manual exports.
**Available connectors vary.** Check for MCP connectors for:
- Google Analytics / GA4
- Your data warehouse (BigQuery, Snowflake, etc.)
- BI tools (Looker, Tableau, etc.)
**Benefits of MCP:**
- No manual export step
- Fresh data on every query
- Can pull specific metrics on demand
- Enables more interactive analysis
**If no MCP connector exists** for your platform, manual export works fine. The frameworks in this skill apply regardless of how data reaches Claude.
## Red Flags
- [ ] No separation between production and test data
- [ ] Goals/conversions not configured (only pageviews tracked)
- [ ] Bot traffic significantly inflating numbers
- [ ] Internal traffic mixed with real traffic
- [ ] Analytics numbers don't match source systems
- [ ] Data retention too short for year-over-year analysis
- [ ] Too many people with admin access
- [ ] No documentation of configuration
- [ ] Filters not tested after implementation
- [ ] Cross-domain tracking broken
## Output Format
When auditing or configuring analytics:
```
## Analytics Setup: [Property/Account Name]
### Current State
- Platform: [GA4 / Mixpanel / Amplitude / etc.]
- Property/account: [Name and ID]
- Data collection status: [Active / Issues]
### Environment Structure
| Environment | Property | Status |
|-------------|----------|--------|
| Production | [Name] | [OK / Issues] |
| Staging | [Name] | [OK / Missing] |
| Development | [Name] | [OK / Missing] |
### Goal Configuration
**Macro Conversions:**
| Goal | Status | Value | Notes |
|------|--------|-------|-------|
| [Goal] | [OK/Missing/Broken] | $[Value] | [Notes] |
**Micro Conversions:**
| Goal | Status | Value | Notes |
|------|--------|-------|-------|
| [Goal] | [OK/Missing/Broken] | $[Value] | [Notes] |
### Data Quality Controls
| Control | Status | Notes |
|---------|--------|-------|
| Bot filtering | [Enabled/Disabled] | |
| Internal traffic exclusion | [Configured/Missing] | Method: [IP/Cookie/etc.] |
| Referral exclusions | [Configured/Missing] | Domains: [list] |
| Cross-domain tracking | [Configured/N/A] | |
### Verification Results
| Check | Result | Notes |
|-------|--------|-------|
| Real-time testing | [Pass/Fail] | |
| Conversion accuracy | [X% variance] | Compared to [source] |
| Traffic sanity | [Pass/Issues] | |
### Issues Found
| Priority | Issue | Impact | Fix |
|----------|-------|--------|-----|
| High | [Issue] | [Impact] | [Action needed] |
| Medium | [Issue] | [Impact] | [Action needed] |
| Low | [Issue] | [Impact] | [Action needed] |
### Recommendations
1. [Most important fix]
2. [Second priority]
3. [Third priority]
### Access Review
| User/Team | Current Access | Appropriate? |
|-----------|---------------|--------------|
| [User] | [Level] | [Yes/Reduce] |
```
## Chaining to Other Skills
Analytics setup is the foundation for other measurement skills:
- **Foundation ready, need event tracking** → Chain to `event-tracking`
- **Events tracked, need dashboard** → Chain to `dashboard-design`
- **Need to run experiments** → Chain to `experiment-design`
- **Need attribution analysis** → Chain to `conversion-attribution`
When chaining, pass along: platform being used, goals configured, any known data quality issues.

The dashboard-design Skill
Most dashboards fail because they show too much. Someone asked "what should we track?" and the answer was "everything, just in case."
The result is usually a wall of charts nobody looks at, and when something breaks, nobody notices because it's buried among 47 other metrics.
The goal is surfacing the metrics that drive decisions. A good dashboard answers specific questions quickly. A bad dashboard requires 20 minutes of investigation to figure out if anything is wrong.
The key insight is the metric hierarchy. Not all metrics deserve dashboard space.
| Level | What It Is | Dashboard Presence |
|-------|------------|--------------------|
| North Star | One metric that captures value delivered | Always visible, top of dashboard |
| Health metrics | 3-5 indicators something is wrong | Always visible, alert when anomalous |
| Input metrics | Levers you can pull | Operational dashboards only |
| Diagnostic metrics | For investigating problems | Available but not on main view |
North Star and health metrics belong on the main dashboard. Everything else is noise.
The other failure mode is vanity metrics. Total pageviews, total registered users, email list size, social followers. They go up and to the right, which feels good, but they don't help you decide anything.
Actionable metrics tell you what to do differently: conversion rate by source, activated users this week, email click-through rate, social traffic that converts.
Every number needs comparison context.
"Conversion rate: 3.2%" means nothing.
"Conversion rate: 3.2% (↑12% vs. last week)" tells a story.
Show vs. previous period, vs. same period last year, or vs. goal. Never display a number alone.
Different audiences need different dashboards. Executives need health checks and business metrics. Operators need real-time anomaly detection. Analysts need drill-down capability. Don't put everything on one dashboard and call it done.
Example Prompt:
Design a weekly performance dashboard for my [type of business]. Here's what I care about: [goals]. Here's what I currently track: [metrics]. Give me a dashboard layout with the right metrics and how to organize them.

Here’s the full SKILL.md file for you to use:
---
name: dashboard-design
description: Build actionable dashboards. Metric selection, layout principles, avoiding vanity metrics. Use when creating dashboards, simplifying cluttered ones, or setting up reporting.
---
# Dashboard Design
Build dashboards that surface insights and drive decisions. This skill covers metric selection, layout principles, avoiding vanity metrics, and establishing reporting cadence. The goal is dashboards people actually use.
## When to Use This Skill
- Creating a new dashboard from scratch
- Redesigning a cluttered existing dashboard
- Deciding what metrics actually matter
- Setting up regular reporting cadence
- Simplifying reports that have grown unwieldy
- Designing dashboards for different audiences (exec vs. operator)
## The Dashboard Problem
Most dashboards fail because they show too much. Someone asked "what should we track?" and the answer was "everything, just in case." The result: 47 metrics, nobody looks at it, and when something breaks, nobody notices.
Good dashboards answer specific questions quickly. Bad dashboards require 20 minutes of investigation to figure out if anything is wrong.
## Metric Selection
### The Metric Hierarchy
Not all metrics deserve dashboard space. Prioritize:
| Level | Purpose | Dashboard Presence |
|-------|---------|-------------------|
| North Star | The one metric that best captures value | Always visible, top of dashboard |
| Health metrics | 3-5 indicators something is wrong | Always visible, alert if anomalous |
| Input metrics | Levers you can pull | Operational dashboards, not exec |
| Diagnostic metrics | Drill-down for investigation | Available but not on main view |
### Finding Your North Star
| Business Type | Possible North Stars |
|---------------|---------------------|
| SaaS | Weekly Active Users, Monthly Recurring Revenue |
| Ecommerce | Revenue per Visitor, Orders |
| Marketplace | Gross Merchandise Value, Transactions |
| Content | Engaged Time, Subscribers |
| Lead gen | Marketing Qualified Leads, Pipeline Value |
**Test:** If this one number goes up, is the business healthier? If it's not clear, that's not your North Star.
### Health Metrics Selection
Health metrics answer: "Is anything broken right now?"
| Type | Examples | Alert When |
|------|----------|------------|
| Conversion rates | Signup rate, checkout rate | Drop >20% from baseline |
| Error rates | 500 errors, payment failures | Spike above threshold |
| Engagement rates | Email open rate, feature adoption | Significant decline |
| Churn/cancellation | Daily cancellations, refund rate | Spike above normal |
**Rule:** 3-5 health metrics maximum. More than that and you'll ignore them.
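The "drop >20% from baseline" alert condition from the table above reduces to a one-line comparison. A sketch (the baseline here is a simple trailing average; real setups may use something smarter):

```python
# Flag a health metric when it drops more than 20% below its baseline.
# Baseline is assumed to be a trailing average you compute elsewhere.

def health_alert(current: float, baseline: float, drop_threshold: float = 0.20) -> bool:
    """Return True if the metric has dropped past the alert threshold."""
    return current < baseline * (1 - drop_threshold)

signup_rate_baseline = 0.040  # hypothetical 4-week average

print(health_alert(0.038, signup_rate_baseline))  # small dip -> False
print(health_alert(0.030, signup_rate_baseline))  # 25% drop -> True
```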
### Vanity Metrics vs. Actionable Metrics
| Vanity (Feels Good) | Actionable (Drives Decisions) |
|---------------------|-------------------------------|
| Total pageviews | Conversion rate by page |
| Total registered users | Activated users this period |
| Email list size | Email click-through rate |
| Social followers | Social traffic that converts |
| Time on site (average) | Completion rate of key flows |
| Total revenue (all-time) | Revenue this period vs. last |
| Downloads (all-time) | Active installations |
**The test:** Does this metric tell me what to do differently? If not, it's vanity.
### Metrics by Audience
| Audience | What They Need | Metric Focus |
|----------|---------------|--------------|
| Executive | Is the business healthy? | North Star, revenue, growth rate |
| Manager | Is the team effective? | Team KPIs, goal progress |
| Operator | What needs attention today? | Health metrics, anomalies, queues |
| Analyst | What's driving results? | Full metrics, segmentation, trends |
Don't put everything on one dashboard. Different audiences need different views.
## Layout Principles
### Visual Hierarchy
| Principle | Implementation |
|-----------|---------------|
| Most important top-left | Eye naturally starts there |
| Related metrics grouped | Conversion funnel stages together |
| Consistent sizing | Similar importance = similar size |
| White space matters | Don't pack everything together |
### Dashboard Anatomy
```
┌─────────────────────────────────────────────────┐
│ NORTH STAR METRIC │ Time Period │
│ [Big number with trend] │ [Selector] │
├─────────────────────────────────────────────────┤
│ HEALTH METRICS (3-5 cards) │
│ [Card] [Card] [Card] [Card] │
├─────────────────────────────────────────────────┤
│ PRIMARY CHART │ SECONDARY CHART │
│ [Main trend/funnel] │ [Breakdown] │
├─────────────────────────────────────────────────┤
│ DATA TABLE (if needed) │
│ [Detailed breakdown for drilling down] │
└─────────────────────────────────────────────────┘
```
### Context Is Required
Every metric needs comparison context:
| Context Type | When to Use |
|--------------|-------------|
| vs. Previous period | Daily/weekly performance |
| vs. Same period last year | Seasonal businesses |
| vs. Goal | When targets are set |
| vs. Benchmark | Industry comparisons |
**Never show a number alone.** "Conversion rate: 3.2%" means nothing without context. "Conversion rate: 3.2% (↑12% vs. last week)" tells a story.
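A small sketch of the difference, formatting a metric card with period-over-period context rather than a bare number:

```python
# Format a metric with period-over-period context instead of showing it alone.

def metric_with_context(name: str, current: float, previous: float) -> str:
    change = (current - previous) / previous
    arrow = "↑" if change >= 0 else "↓"
    return f"{name}: {current:.1%} ({arrow}{abs(change):.0%} vs. last week)"

print(metric_with_context("Conversion rate", 0.032, 0.0286))
# -> Conversion rate: 3.2% (↑12% vs. last week)
```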
### Chart Selection
| Data Type | Best Chart |
|-----------|------------|
| Trend over time | Line chart |
| Part of whole | Pie/donut (sparingly) or stacked bar |
| Comparison across categories | Bar chart |
| Exact values needed | Table |
| Single KPI | Big number card |
| Funnel progression | Funnel chart or horizontal bar |
| Geographic data | Map |
**Default to simplicity.** Tables are often more useful than charts when precision matters.
### What to Avoid
| Anti-pattern | Problem | Alternative |
|--------------|---------|-------------|
| 3D charts | Distorts proportions | Use 2D |
| Dual Y-axes | Confusing, manipulable | Separate charts |
| Pie charts with 10+ slices | Unreadable | Bar chart or table |
| Truncated Y-axes | Exaggerates changes | Full axis or note it |
| Too many colors | Visual noise | 5-7 colors max |
| Animation | Distracting | Static |
## Dashboard Types
### Health Dashboard
**Purpose:** Is anything broken right now?
**Audience:** Operators, on-call
**Update frequency:** Real-time or near-real-time
**Key elements:**
- Status indicators (green/yellow/red)
- Anomaly alerts
- Key conversion rates vs. thresholds
- Error rates
- System performance
### Performance Dashboard
**Purpose:** How are we doing vs. goals?
**Audience:** Team, managers
**Update frequency:** Daily
**Key elements:**
- North Star with trend
- Goal progress (actual vs. target)
- Period-over-period comparison
- Key drivers broken down
### Funnel Dashboard
**Purpose:** Where are we losing people?
**Audience:** Product, growth team
**Update frequency:** Daily or weekly
**Key elements:**
- Stage-by-stage conversion rates
- Drop-off between stages
- Segmentation (by source, device, cohort)
- Comparison vs. previous period
### Channel Dashboard
**Purpose:** What's driving results?
**Audience:** Marketing, growth
**Update frequency:** Weekly
**Key elements:**
- Traffic by channel
- Conversion by channel
- Cost per acquisition by channel
- Channel trends over time
### Executive Dashboard
**Purpose:** Is the business healthy?
**Audience:** Leadership, board
**Update frequency:** Weekly or monthly
**Key elements:**
- Revenue and growth
- Key ratios (LTV/CAC, margins)
- Progress vs. quarterly/annual goals
- Major leading indicators
## Reporting Cadence
### Cadence Framework
| Frequency | What to Include | Audience | Distribution |
|-----------|-----------------|----------|--------------|
| Real-time | Anomaly alerts | Operators | Slack/PagerDuty |
| Daily | Health check, key metrics | Team | Email/Slack summary |
| Weekly | Performance summary, trends | Team + managers | Email + meeting |
| Monthly | Strategic metrics, goals | Leadership | Presentation |
| Quarterly | Business review | Exec/board | Document + meeting |
### Automation Requirements
| Requirement | Why |
|-------------|-----|
| Auto-refresh | Manual updates get skipped |
| Scheduled delivery | Ensures regular review |
| Alert thresholds | Catches problems before check-in |
| Self-service drill-down | Reduces ad-hoc requests |
## Building a New Dashboard
### Step 1: Define the Audience
| Question | Answer for this dashboard |
|----------|--------------------------|
| Who will use this? | |
| What decisions will they make? | |
| How often will they check it? | |
| What questions should it answer? | |
### Step 2: Select Metrics
| Metric | Why It Matters | Target/Threshold |
|--------|---------------|------------------|
| [Metric 1] | | |
| [Metric 2] | | |
| [Metric 3] | | |
**Constraint:** Start with 5-7 metrics maximum. Add more only if necessary.
### Step 3: Design Layout
Sketch before building:
- What goes top-left (most important)?
- What groups together?
- What needs comparison context?
- What chart type for each metric?
### Step 4: Add Context and Alerts
- Set comparison periods
- Define alert thresholds
- Add goal lines where applicable
- Include last-updated timestamp
### Step 5: Test with Users
- Does it answer their questions?
- What's confusing?
- What's missing?
- What's unnecessary?
### Step 6: Document
- What does each metric measure?
- Where does the data come from?
- What do alert thresholds mean?
- Who owns this dashboard?
## Connecting Dashboards to Claude
To analyze dashboard data with Claude, you need to get that data into the conversation.
### Connection Methods
| Method | How It Works | Best For |
|--------|--------------|----------|
| Screenshot upload | Capture dashboard, upload as image | Quick questions, visual context |
| Data export | Export underlying data as CSV | Deeper analysis, calculations |
| MCP connector | Direct connection to BI tool or data source | Ongoing analysis, live data |
### Screenshot Approach
For quick analysis or design feedback:
1. **Capture the dashboard** as image
2. **Upload to Claude**
3. **Ask specific questions** ("What stands out?" "What's missing?" "How would you redesign this?")
Good for: Dashboard reviews, layout feedback, identifying issues
### Data Export Approach
For deeper analysis:
1. **Export underlying metrics** as CSV
2. **Upload to Claude Project**
3. **Reference in analysis** ("Analyze trends in the dashboard data I uploaded")
Good for: Trend analysis, anomaly detection, recommendations
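As a concrete sketch of the anomaly detection this enables, here is a trailing-window z-score check you (or Claude) could run over exported daily metrics. The metric name and thresholds are illustrative, not from any particular analytics tool:

```python
import statistics

def flag_anomalies(values, window=7, threshold=2.0):
    """Flag indexes where a value sits more than `threshold` standard
    deviations away from the mean of the preceding `window` points."""
    anomalies = []
    for i in range(window, len(values)):
        recent = values[i - window:i]
        mean = statistics.mean(recent)
        stdev = statistics.stdev(recent)
        if stdev > 0 and abs(values[i] - mean) / stdev > threshold:
            anomalies.append(i)
    return anomalies

# Hypothetical export: daily signups steady around 100, then a sudden drop
daily_signups = [102, 98, 101, 97, 103, 99, 100, 101, 98, 102, 100, 99, 55]
print(flag_anomalies(daily_signups))  # → [12] (the final day's drop)
```

The same pass works on any exported metric column; tune `window` to your seasonality (7 days catches weekly cycles) and `threshold` to how noisy the metric is.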
### MCP Connectors
Model Context Protocol enables direct connections to data sources. When available, Claude can pull fresh dashboard data without manual exports.
**Relevant MCP connectors:**
- Data warehouses (BigQuery, Snowflake, Redshift)
- BI tools (Looker, Tableau, Metabase)
- Analytics platforms (GA4, Mixpanel)
**Benefits:**
- No manual export step
- Always current data
- Can query specific metrics on demand
- Interactive analysis across multiple data points
**If no connector exists:** Screenshots and CSV exports work fine. The frameworks in this skill apply regardless of how data reaches Claude.
## Red Flags
- [ ] Dashboard has 20+ metrics
- [ ] No clear hierarchy (everything equally emphasized)
- [ ] Numbers shown without comparison context
- [ ] Nobody actually looks at it regularly
- [ ] Updated manually (not automated)
- [ ] Different dashboards show conflicting numbers
- [ ] Vanity metrics prominently featured
- [ ] No date range controls or filters
- [ ] No documentation of what metrics mean
- [ ] Metric definitions changed without notice
- [ ] Alerts that fire so often they're ignored
- [ ] Dashboards created and then abandoned
## Output Format
When designing a dashboard:
```
## Dashboard Design: [Dashboard Name]
### Audience and Purpose
- **Primary audience:** [Who]
- **Purpose:** [What questions it answers]
- **Check frequency:** [How often reviewed]
- **Update frequency:** [How often data refreshes]
### Metrics
**North Star:**
| Metric | Definition | Target | Comparison |
|--------|------------|--------|------------|
| [Metric] | [What it measures] | [Goal] | [vs. what] |
**Health Metrics:**
| Metric | Definition | Threshold | Alert When |
|--------|------------|-----------|------------|
| [Metric] | [Definition] | [Normal range] | [Condition] |
**Supporting Metrics:**
| Metric | Definition | Why Included |
|--------|------------|--------------|
| [Metric] | [Definition] | [Purpose] |
### Layout
[ASCII sketch or description of layout]
┌────────────────────────────────┐
│ [Section 1] │ [Section 2] │
├────────────────────────────────┤
│ [Section 3] │
└────────────────────────────────┘
### Visualization Specifications
| Metric/Section | Chart Type | Notes |
|----------------|------------|-------|
| [Metric] | [Chart type] | [Special requirements] |
### Filters and Controls
| Filter | Options | Default |
|--------|---------|---------|
| Time period | [Options] | [Default] |
| Segment | [Options] | [Default] |
### Alerts
| Alert | Condition | Notify | Channel |
|-------|-----------|--------|---------|
| [Name] | [Trigger] | [Who] | [How] |
### Data Sources
| Metric | Source | Refresh |
|--------|--------|---------|
| [Metric] | [Where data comes from] | [How often] |
### Documentation
[Link to detailed metric definitions]
### Owner
[Who maintains this dashboard]
```
## Chaining to Other Skills
Dashboard design connects to other measurement skills:
- **Need analytics foundation** → Chain to `analytics-setup`
- **Need event tracking for metrics** → Chain to `event-tracking`
- **Dashboard shows test results** → Chain to `experiment-design`
- **Need channel breakdown** → Chain to `conversion-attribution`
When chaining, pass along: metrics needed, audience requirements, and data sources available.
The event-tracking Skill
Events are the raw material for everything else. Every dashboard, funnel report, and experiment depends on events being tracked correctly and consistently.
A messy event taxonomy creates problems that compound over time. Inconsistent naming means you can't find events. Missing properties mean you can't segment. Duplicate events with different names mean conflicting numbers and nobody knowing which is right.
Six months from now, you need to know what `evt_signup_v2_final` was supposed to track. The skill file has documentation templates, but the core principle is: name events so they're understandable without documentation.
The naming convention matters less than consistency. Pick a format and use it everywhere:
| Format | Example |
|--------|---------|
| Object_Action | Button_Clicked, Form_Submitted |
| Action_Object | Clicked_Button, Submitted_Form |
| Noun_Verb | Checkout_Started, Purchase_Completed |
Every event needs properties that enable analysis.
Standard properties go on every event: timestamp, user_id, session_id, page_url, device_type.
Event-specific properties capture what matters for that action: a Form_Submitted event needs form_name and fields_completed; a Purchase_Completed event needs order_id, total_value, and items.
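As an illustration of standard plus event-specific properties (the payload shape and field names are hypothetical, not any particular vendor's SDK), here is how an event might be assembled with the naming convention enforced up front:

```python
import re
import time
import uuid

# Object_Action style: "Purchase_Completed", "Form_Submitted", etc.
NAME_PATTERN = re.compile(r"^[A-Z][a-z]+(_[A-Z][a-z]+)+$")

def build_event(name, user_id, session_id, page_url, device_type, **properties):
    """Assemble an event: standard properties on every event,
    event-specific properties passed as keyword arguments."""
    if not NAME_PATTERN.match(name):
        raise ValueError(f"Event name {name!r} doesn't follow the Object_Action convention")
    return {
        "event": name,
        "timestamp": time.time(),
        "user_id": user_id,
        "session_id": session_id,
        "page_url": page_url,
        "device_type": device_type,
        "properties": properties,
    }

event = build_event(
    "Purchase_Completed",
    user_id="u_123",
    session_id=str(uuid.uuid4()),
    page_url="/checkout/thanks",
    device_type="mobile",
    order_id="ord_789", total_value=49.00, items=3,
)
```

A name like `evt_signup_v2_final` fails the check immediately, which is exactly the point: the convention gets enforced at send time, not discovered six months later.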
The biggest mistake is tracking everything. More events means more noise, more maintenance, and more confusion. Track what you'll actually analyze. For most businesses, that's 15-30 core events, not 200.
| Business | Critical Events |
|----------|-----------------|
| SaaS | Signup_Completed, Activation_Reached, Feature_Used, Upgrade_Started |
| Ecommerce | Product_Viewed, Cart_Updated, Checkout_Started, Purchase_Completed |
| Lead gen | Form_Viewed, Form_Started, Form_Submitted, Lead_Qualified |
| Content | Article_Viewed, Article_Completed, Newsletter_Subscribed |
Example Prompt:
Design an event tracking plan for my [type of business]. Here's my main funnel: [steps]. What events should I track, what properties should each have, and how should I name them consistently?

Note: you can ask Claude to build you a Skill like this one (if you provide good inputs and examples). But if you want this Skill, and the other Skills, join Cortex (see details below).
The experiment-design Skill
Most A/B tests fail to produce usable results. Not because the change was wrong, but because the test design was wrong. Sample size is too small. Test stopped too early. No clear hypothesis. Results declared significant when they weren't.
The skill file has sample size tables and detailed interpretation frameworks, but the core discipline fits in one paragraph: Before testing, write down what you expect to happen and why. Calculate how much traffic you need and how long that will take. Run the test for the full duration without peeking.
Then interpret the results honestly, including the possibility that you don't have enough data to know.
The hypothesis prevents testing for testing's sake. Format: "If we [change], then [metric] will [improve] by at least [threshold], because [rationale]."
If you can't fill in that sentence, you're not ready to test.
Sample size is where most tests fail. Lower baseline conversion rates and smaller expected improvements require dramatically more traffic.
A 2% conversion rate testing for 15% relative lift needs roughly 30,000 visitors per variation.
At 500 visitors per week, that's over a year. The skill file has the full table, but the lesson is: do the math before you start, or you'll waste months on a test that never reaches significance.
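You can sanity-check those ballpark numbers yourself with the standard two-proportion sample-size formula. A minimal sketch, assuming a two-sided test at 5% significance and 80% power (different settings shift the result, which is why published tables vary):

```python
import math

def sample_size_per_variation(baseline, relative_lift, z_alpha=1.96, z_beta=0.84):
    """Visitors needed per variation to detect `relative_lift` over `baseline`,
    using the normal approximation for a two-proportion z-test.
    Defaults: two-sided alpha = 0.05 (z=1.96), power = 0.80 (z=0.84)."""
    p1 = baseline
    p2 = baseline * (1 + relative_lift)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = (z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2
    return math.ceil(n)

# 2% baseline, 15% relative lift -> roughly 37,000 per variation
print(sample_size_per_variation(0.02, 0.15))
```

Notice how the math punishes small effects: halving the detectable lift roughly quadruples the required sample, because the lift appears squared in the denominator.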
Duration matters beyond sample size. Never stop before one full week, even if you hit your sample size early. Tuesday traffic behaves differently than Sunday traffic. And don't peek and decide. Checking results daily and stopping when one variant looks ahead inflates your false positive rate from 5% to 25% or higher. Pre-commit to an end date.
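The peeking problem is easy to demonstrate with an A/A simulation: both arms convert at the same rate, so every "significant" result is a false positive by construction. Parameters here are illustrative:

```python
import math
import random

def z_stat(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-statistic with pooled standard error."""
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (conv_a / n_a - conv_b / n_b) / se if se else 0.0

def run_aa_test(rng, rate=0.05, batch=300, days=10, peek_daily=True):
    """Simulate an A/A test: both arms convert at `rate`, so any 'win' is false."""
    ca = cb = n = 0
    for _ in range(days):
        ca += sum(rng.random() < rate for _ in range(batch))
        cb += sum(rng.random() < rate for _ in range(batch))
        n += batch
        if peek_daily and abs(z_stat(ca, n, cb, n)) > 1.96:
            return True  # stopped early on a spurious 'significant' result
    return abs(z_stat(ca, n, cb, n)) > 1.96  # one pre-committed look at the end

sims = 500
rng = random.Random(42)
peeking = sum(run_aa_test(rng, peek_daily=True) for _ in range(sims)) / sims
rng = random.Random(42)
fixed = sum(run_aa_test(rng, peek_daily=False) for _ in range(sims)) / sims
print(f"False positives when peeking daily:  {peeking:.0%}")
print(f"False positives with fixed horizon:  {fixed:.0%}")
```

With ten daily looks, the peeking arm typically declares a false winner several times as often as the fixed-horizon arm, which stays near the nominal 5%.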
Interpreting results requires honesty. Statistical significance doesn't mean certainty: a test run at the 95% confidence level will still produce a false positive roughly 1 time in 20 when there's no real difference.
If the lift is real but tiny, weigh the effort of implementation against the payoff.
If you don't reach significance, that doesn't mean there's no difference. It means you don't have enough data to know.
Example Prompt:
Design an A/B test for [page/element]. Current conversion rate: [X%]. I want to detect at least a [Y%] improvement. Traffic: [Z] visitors per week. Give me the test design, expected duration, and how to interpret results.
The conversion-attribution Skill
A customer saw your blog post, clicked a retargeting ad, received an email, then converted. What gets credit?
Attribution is genuinely hard. There's no perfect answer. But ignoring it means misallocating budget: killing channels that introduce customers because they don't close them, and over-investing in channels that only convert people who had already decided to buy.
The core insight is that different channels play different roles. Introducers create first exposure: organic content, social, display ads, PR. Influencers build consideration over time: email nurture, retargeting, reviews.
Closers convert ready buyers: brand search, direct traffic, bottom-funnel retargeting. Judging introducers by last-click is unfair. They bring people in; other channels close them. Both matter.
| Model | How It Works | Bias |
|-------|--------------|------|
| Last click | 100% credit to final touchpoint | Overvalues closers |
| First click | 100% credit to first touchpoint | Overvalues introducers |
| Linear | Equal credit to all touchpoints | Fair but unrealistic |
| Position-based | 40% first, 40% last, 20% middle | Balanced |
| Data-driven | ML assigns based on patterns | Best with enough data |
No model is correct. All are simplifications. The skill file has detailed guidance on choosing a model, but the practical approach is to use multiple models to triangulate.
If a channel looks strong under last-click but weak under first-click, it's a closer. If the reverse, it's an introducer. Both are valuable, but different.
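The triangulation is easier to see in code. A minimal sketch that scores the same (hypothetical) touchpoint path under each of the simple models:

```python
def attribute(path, model="last_click"):
    """Split 1.0 of conversion credit across an ordered touchpoint path."""
    n = len(path)
    if n == 1:
        return {path[0]: 1.0}
    if model == "last_click":
        credit = [0.0] * n
        credit[-1] = 1.0
    elif model == "first_click":
        credit = [0.0] * n
        credit[0] = 1.0
    elif model == "linear":
        credit = [1.0 / n] * n
    elif model == "position_based":
        # 40% first touch, 40% last touch, 20% spread across the middle
        if n == 2:
            credit = [0.5, 0.5]
        else:
            credit = [0.4] + [0.2 / (n - 2)] * (n - 2) + [0.4]
    else:
        raise ValueError(f"Unknown model: {model}")
    result = {}
    for channel, c in zip(path, credit):
        result[channel] = result.get(channel, 0.0) + c
    return result

# The journey from the opening example: blog -> retargeting -> email -> brand search
path = ["blog", "retargeting_ad", "email", "brand_search"]
for model in ["last_click", "first_click", "linear", "position_based"]:
    print(model, attribute(path, model))
```

Running all four on the same paths makes a channel's role visible at a glance: a channel whose credit collapses when you switch from last-click to first-click is a closer, and the reverse is an introducer.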
Every platform claims more credit than it deserves. Google reports Google works. Meta reports Meta works. Each sees only its own touchpoints. Use your own analytics as source of truth, and expect platform dashboards to overstate their contribution.
The honest truth about attribution: it will never be perfectly accurate. Accept approximation. Look for directional patterns rather than precise percentages. Test budget shifts and measure total outcomes, not just model outputs.
And if you want to know if a channel really matters, the gold standard is a holdout test: turn it off in one region and compare.
Example Prompt:
Help me understand attribution for my [type of business]. Here are my channels: [list]. Here's my current attribution data: [summary]. Average time from first touch to conversion: [X days]. How should I think about what's actually driving conversions?

You can ask Claude to build you a Skill like this one (if you provide good inputs and examples). But if you want this Skill, and the other Skills, join Cortex (see details below).
Do you want ALL the Skills, ready to deploy?
You got some of the skills above. Plenty to get you started and going.
If you’re a Cortex subscriber, you get them ALL below.
You can either spend 30-60 minutes trying to create your own, or you can snag them all in ~30 seconds as a paying Cortex member—plus get the true value of Cortex, far beyond a bunch of skills.
What is Cortex?
Cortex is the premium monthly membership version of Bionic Business, helping entrepreneurs and business owners grow and scale with AI and Agents.
In addition to 3-4 full, unrestricted regular issues per month, you also get:
1x SIGNALS Strategic Briefing issue (strategic moves you must make now to secure your business revenue, market share, and profits).
1x CIRCUITS Tactical Guide issue (workflows, how-to’s, prompts, very practical implementation inside a self-driving business).
Access to all Skills issues (full details, download files, tutorials).
Claude Cowork as your marketing team special issue (full guide on how to run Cowork as an agent marketing team for your business).
50% OFF workshops on prompt engineering, agent operating systems, and more. Only active Cortex subscribers get any kind of discount, no one else.
Free subscribers get a redacted and summarized version of regular issues but only Cortex subscribers get the full emails loaded with the good stuff.
So, if you'd like to receive full issues and the monthly special Signals and Circuits issues, sign up for Cortex ($50/month). You'll also have full archives of previous regular issues for as long as you're a member.
One note on Signals and Circuits: you only get these special issues for the months you're subscribed. If you skip a month, you won't get that month's issues, and there are no back-issues for past months, so now is your chance to lock in the upcoming issues while you can.
After you’ve upgraded to Cortex, log in to the website and come back to this issue, and the Skills will be unlocked below.
Ready? Get the Skills, full regular issues, and the upcoming Signals and Circuits issues:
Cortex is a monthly premium version of Bionic Business, helping entrepreneurs grow and scale with AI and Agents. You can cancel at any time. But there are no refunds, for any reason. All sales are final.
