Free skill · For Claude Code · Cursor · any MCP agent

Analytics Interpretation

Numbers without product context are noise. This skill teaches your AI agent to read Analiz data like an experienced analyst — anchored on your homepage, cross-referenced with your codebase and recent commits, and aware of which features sit behind paid tiers.

Install in 10 seconds

Pick whichever editor you use. The skill file lives at a stable URL — re-run the command any time to update.

Claude Code (global, all projects)

```bash
mkdir -p ~/.claude/skills/analiz-analytics-interpretation
curl -o ~/.claude/skills/analiz-analytics-interpretation/SKILL.md \
  https://analiz.dev/skills/analiz-analytics-interpretation.md
```

Cursor (project-local)

```bash
mkdir -p .cursor/rules
curl -o .cursor/rules/analiz-analytics-interpretation.md \
  https://analiz.dev/skills/analiz-analytics-interpretation.md
```

Any project — drop into .claude/skills/

```bash
mkdir -p .claude/skills/analiz-analytics-interpretation
curl -o .claude/skills/analiz-analytics-interpretation/SKILL.md \
  https://analiz.dev/skills/analiz-analytics-interpretation.md
```

Pair it with the MCP server

The skill works best when your agent can actually query your analytics. Add the Analiz MCP server to .mcp.json (Claude Code) or your editor's MCP config:

.mcp.json

```json
{
  "mcpServers": {
    "analiz": {
      "command": "npx",
      "args": ["analiz-mcp"],
      "env": {
        "ANALIZ_API_KEY": "your_api_key",
        "ANALIZ_PROJECT_ID": "your_project_id",
        "ANALIZ_URL": "https://analiz.dev"
      }
    }
  }
}
```

Get your API key and project ID from the dashboard.

What triggers it

Once installed, the skill activates automatically when you ask your agent about analytics. Try any of these:

> How are we doing?
> What's going on with our analytics?
> Where are users dropping off?
> Is anyone converting?
> Add tracking for the signup flow
> Why did pageviews spike yesterday?

What it teaches your agent

Four core principles, plus a catalog of common mistakes to avoid.

1. Always anchor on /

The homepage is the only page every visitor hits. It's the denominator for nearly every conversion question. The skill ensures the agent always pulls top pages and references / first.

2. Filter dashboard self-traffic

Owner traffic on /dashboard/proj_* inflates pageview counts but says nothing about product reach. The skill teaches the agent to exclude these from acquisition math and call it out explicitly.

3. Cross-reference the codebase

Before drawing conclusions, the agent greps the project: low form_submit count → check if the form is a real <form>; revenue null → check if analiz.revenue() is wired; pageview spike → git log to find recent commits.

4. Recommend instrumentation, don't just report gaps

If a question can't be answered, the agent proposes the exact analiz.capture / identify / revenue call and where to add it, not just "we don't track that".
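For instance, if signups aren't tracked yet, the proposed instrumentation would look roughly like this. The helper name, event name, and properties are illustrative; only `capture` and `identify` come from the tracker API:

```javascript
// Sketch of what the agent might propose for a signup success handler.
// In the browser the tracker global is window.analiz; a stub stands in
// here so the sketch runs anywhere.
function trackSignup(analiz, user) {
  // Explicit signup event; $identify alone fires on every login too.
  analiz?.capture("signup", { plan: user.plan, source: user.source });
  // Identify so later sessions attach to the same user.
  analiz?.identify(user.id, { email: user.email, plan: user.plan });
}

// Tiny stub standing in for the real tracker, just to show the calls made:
const calls = [];
const stub = {
  capture: (name, props) => calls.push({ name, props }),
  identify: (id, traits) => calls.push({ name: "$identify", id, traits }),
};
trackSignup(stub, { id: "u_1", email: "a@b.co", plan: "trial", source: "homepage" });
console.log(calls.map((c) => c.name)); // [ 'signup', '$identify' ]
```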

Common mistakes the skill prevents

Things any analyst — human or AI — gets wrong without these guardrails.

| Mistake | What the skill does instead |
|---|---|
| Treating `$identify` count as signups | It's logins + signups. Recommend explicit `capture("signup")`. |
| Reporting 88 visitors without excluding `/dashboard/*` self-traffic | Subtract or call out — that's owner browsing, not product reach. |
| Concluding "no revenue" without checking instrumentation | Grep for `analiz.revenue(` first — often just not wired. |
| Recommending session stories without flagging the Business tier | Read `lib/plan-access.ts`. Flag the gate. |
| Drawing trends from low absolute numbers | Under ~50 conversions/wk = noise. Say so. |
| Reporting raw `avg_duration` in milliseconds | Convert to seconds/minutes for humans. |


Full SKILL.md source
---
name: analiz-analytics-interpretation
description: Use when the user asks to read, interpret, analyze, summarize, or act on Analiz analytics data via the mcp__analiz__* tools — covers "how are we doing", "what's going on", "is feature X being used", "where's the leak", "should we be worried", or any time analytics findings should be cross-referenced with the project's codebase, recent commits, or extended with custom events/funnels/revenue tracking.
---

# Analiz Analytics Interpretation

## Overview

Analiz analytics tools return raw numbers. **Numbers without product context are noise.** This skill turns Analiz data into product insight by anchoring every interpretation in:

1. **The homepage `/`** as the top-of-funnel reference (always pull it)
2. **The codebase** — routes, instrumentation, recent commits explain the numbers
3. **The full Analiz capability surface** — knowing when to recommend `analiz.capture()`, funnels, or `revenue()` instead of guessing
4. **Tier awareness** — some features (funnels, sessions, revenue) are paid-tier; flag this when relevant

## When to Use

Triggers:
- User asks to read, summarize, or interpret analytics ("how are we doing", "check the analytics", "what's going on")
- Question about a specific metric, page, source, or event
- "Is feature X being used?", "where are users dropping off?", "is anyone converting?"
- User asks to add tracking, set up a funnel, or understand why a number is what it is
- Any time `mcp__analiz__*` tools are present in the toolset

Do NOT use:
- For unrelated frontend/backend work in the Analiz codebase itself (use the project's own skills)
- For pure ClickHouse query writing (use `database` skill)

## Core Principles

### 1. Always anchor on `/` (homepage)

The homepage is the only page every visitor hits. It is the denominator for nearly every conversion question. **Always include `get_top_pages` and look at `/` first**, even if the user asks something narrow.

### 2. Filter dashboard self-traffic before reporting "user activity"

`/dashboard/proj_*` paths are owners viewing their own analytics. They inflate pageview/session counts but say nothing about *product reach*. When measuring acquisition or conversion, **exclude dashboard paths from the denominator** and call this out explicitly.

```
Top pages typically look like:
  /                      → real top-of-funnel (USE THIS)
  /dashboard/proj_xxx    → owner self-traffic (EXCLUDE)
  /dashboard/proj_xxx/X  → owner self-traffic (EXCLUDE)
```
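As a sketch, the exclusion is a one-line filter once the agent has top-pages data. The row shape below is an assumption for illustration, not the documented `get_top_pages` schema:

```javascript
// Hypothetical shape for a top-pages result row: { path, pageviews }.
const topPages = [
  { path: "/", pageviews: 88 },
  { path: "/dashboard/proj_abc", pageviews: 31 },
  { path: "/pricing", pageviews: 12 },
];

// Owner self-traffic: anything under /dashboard/ says nothing about reach.
const isSelfTraffic = (path) => path.startsWith("/dashboard/");

const productPages = topPages.filter((p) => !isSelfTraffic(p.path));
const reach = productPages.reduce((sum, p) => sum + p.pageviews, 0);

console.log(reach); // 100 (the 31 owner pageviews are excluded)
```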

### 3. Cross-reference the codebase before interpreting

Before drawing conclusions, **grep the project**. Examples:

| Observation | Check the codebase for |
|---|---|
| Low `$form_submit` count | The actual signup/contact form route. Is the form using `<form>`? Auto-capture only fires on real form submits. |
| `$identify` count looks like signups | `analiz.identify(...)` calls — it fires on **every login AND signup**. Not a signup signal alone. |
| Spike in pageviews | `git log --since="2 weeks ago"` — was a feature shipped? landing page changed? |
| Specific page in top results | Read that route file — what is it actually for? |
| Custom event `xyz` shows up | `grep -r 'capture("xyz"' .` to find where it fires |
| `$revenue` is null but Stripe is wired | Check that `analiz.revenue({amount, plan})` is called in the Stripe webhook / success page |

### 4. Recommend instrumentation, don't just report gaps

If a question can't be answered from existing data, **propose the exact `analiz.*` call** and where to add it. Don't just say "we don't track that."

## Reading the Analiz Tool Surface

### Tool → use case map

| Tool | When to call |
|---|---|
| `get_stats` | First call for any "how are we doing" — KPIs + WoW deltas |
| `get_realtime` | "Is anyone on the site right now?", live debugging |
| `get_top_pages` | **Almost always pull this** — anchor on `/` |
| `get_top_sources` | Acquisition / channel questions, attribution |
| `get_events` | What custom events are firing? Use before drawing event-based conclusions |
| `get_events(event_name=X)` | Drill into a single event's properties |
| `get_sessions(outcome="converted"\|"engaged"\|"bounced")` | Walk individual visitor journeys — *paid tier* |
| `get_revenue` | Revenue KPIs + by-source/page attribution — *paid tier* |
| `get_funnel()` | List funnels first, then drill in — *paid tier* |

### Default playbook for "what's going on" / "how are we doing"

Run in parallel:
1. `get_stats(period="week")` — anchor metrics
2. `get_top_pages(period="week")` — find `/` and identify dashboard self-traffic
3. `get_top_sources(period="week")` — channel mix
4. `get_events(period="week")` — what's instrumented
5. `get_realtime()` — live snapshot
6. `get_revenue(period="month")` — only if revenue is part of the question or instrumented

Then `git log --oneline -10` to see if recent commits explain any spike/drop.

## Built-in Auto-Captured Events

Read `packages/tracker/src/t.ts` if uncertain. Auto-captured (no code needed):

| Event | Fires on |
|---|---|
| `$pageview` | Page load + SPA navigation (pushState/replaceState/popstate) |
| `$pageleave` | Page hidden / nav away (with `time_on_page_ms`, `scroll_depth_percent`) |
| `$session_end` | Tab hidden after activity |
| `$click` | Clicks on `<a>`, `<button>`, or `[data-track]` — captures text/href/classes/track_id |
| `$scroll_depth` | 25/50/75/100% milestones |
| `$form_submit` | `<form>` submit event — captures field count + has_email/has_password (no values) |
| `$web_vitals` | LCP/CLS/INP/TTFB/FCP on page hide |
| `$identify` | Manual `analiz.identify(id, traits)` — **fires on login AND signup**, not signup-only |
| `$revenue` | Manual `analiz.revenue({amount, currency, plan})` |

**Critical:** `$identify` is NOT a signup signal. It fires every login. To track signups specifically, recommend an explicit event:

```js
// In the signup completion handler:
window.analiz?.capture("signup", { plan: "trial", source: "homepage" })
window.analiz?.identify(user.id, { email, plan })
```

## Adding Custom Events (Recommend This Often)

The tracker exposes a global API. Recommend explicit events for any business-meaningful action:

```js
// Generic event
window.analiz?.capture("event_name", { any: "properties" })

// Signup/login attribution
window.analiz?.identify(userId, { email, plan, signup_source })

// Revenue (powers $revenue, get_revenue, by-source attribution)
window.analiz?.revenue({ amount: 29, currency: "USD", plan: "pro_monthly" })
```

Place them where the action *succeeds*, not where it's attempted:
- After Stripe webhook confirms payment → `revenue()`
- In `/api/auth/signup` success handler or onSuccess callback → `capture("signup", ...)`
- In feature usage that matters (export, share, invite) → `capture("feature_used", { feature: "export" })`

Use `data-track="cta_pricing"` on important buttons to make `$click` filterable by `track_id`.

## Funnels (Business Tier)

Funnels live in PostgreSQL (`funnels` table, `apps/web/lib/schema.ts`). Each funnel has ordered `steps` (JSON) — typically a sequence of event names or paths. Built in the dashboard at `/dashboard/[projectId]/funnels`.

**Recommend a funnel when** the user asks about drop-off, conversion paths, or "where are we losing people." Standard SaaS funnel:

```
1. $pageview path=/                  (visited homepage)
2. $pageview path=/signup            (reached signup)
3. capture event=signup              (completed signup)
4. capture event=activated           (used core feature)
5. revenue                           (paid)
```

Call `get_funnel()` (no args) to list existing funnels first, then drill in by ID.

## Tier Awareness

From `apps/web/lib/plan-access.ts`:

| Tier | Has access to |
|---|---|
| **Starter** | No analytics API/MCP access |
| **Pro** | `stats`, `top-pages`, `top-sources`, `realtime` |
| **Business** | All of Pro + `sessions/stories`, `funnels`, `revenue`, `events` (drill-in) |
| **Trial** | Everything (3 days from signup — see commit `61913f8`) |

When recommending features, **flag the tier**:

> Sessions storytelling and funnels are Business-tier features. If you're on Pro you'll see a 403 from the MCP tool. The current trial period is 3 days.

## Reporting Format

Structure interpretations like this:

```
## Headline (1 sentence)
[The single most important finding]

## Key metrics
- Anchor on / first
- WoW deltas with direction
- Always note dashboard self-traffic exclusion when relevant

## What it means
[Connect numbers to the codebase / recent commits / product context]

## What to do
[Concrete next action — often: instrument event X, set up funnel Y, fix gap Z]
[If recommending a paid-tier feature, flag the tier]
```

Keep numbers in absolute + relative form. Always show the denominator.
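A tiny helper for that convention (the function name is ours; nothing in the tracker provides this):

```javascript
// Render a metric as absolute + relative, with the denominator visible.
function withDenominator(count, total) {
  const pct = total > 0 ? ((count / total) * 100).toFixed(1) : "0.0";
  return `${count} / ${total} (${pct}%)`;
}

console.log(withDenominator(12, 88)); // "12 / 88 (13.6%)"
```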

## Common Mistakes

| Mistake | Fix |
|---|---|
| Treating `$identify` count as signups | It's logins + signups. Recommend explicit `capture("signup")`. |
| Reporting "88 visitors" without excluding `/dashboard/*` self-traffic | Subtract or call out — that's owner browsing, not product reach. |
| Concluding "no revenue" without checking instrumentation | Grep for `analiz.revenue(` first — often just not wired. |
| Recommending funnels without flagging Business tier | Read `lib/plan-access.ts`. Flag the gate. |
| Drawing trends from low absolute numbers | Under ~50 conversions/wk = noise. Say so. |
| Not pulling homepage when answering narrow question | Always pull `/` — context matters even for narrow questions. |
| Skipping `git log` when explaining a spike | Recent commits often explain the data. Always check. |
| Reporting raw `avg_duration` in milliseconds | Convert to seconds/minutes for humans. |
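The last row's fix is trivial but worth pinning down; a sketch of the conversion:

```javascript
// Convert a raw avg_duration in milliseconds into a human-readable string.
function formatDuration(ms) {
  const totalSeconds = Math.round(ms / 1000);
  const minutes = Math.floor(totalSeconds / 60);
  const seconds = totalSeconds % 60;
  return minutes > 0 ? `${minutes}m ${seconds}s` : `${seconds}s`;
}

console.log(formatDuration(94_300)); // "1m 34s"
console.log(formatDuration(8_000)); // "8s"
```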

## Red Flags — Stop and Re-Check

- About to report conversion without excluding `/dashboard/*` from denominator
- About to say "we don't track X" without grepping for `capture("X"` first
- About to recommend a funnel/sessions/revenue feature without checking the user's tier
- About to interpret a spike without `git log`
- Skipped `get_top_pages` because the question seemed narrow