
The Measurement Plan: How to Stop Tracking Random Events

Most analytics implementations I audit have the same problem: the team tracks a hundred events nobody looks at, and misses the ten that would actually answer the questions leadership keeps asking. The fix isn’t more events — it’s a measurement plan. A document that ties every single tracked event back to a real decision someone needs to make.

I’ve built measurement plans for e-commerce shops with a single product line and for SaaS companies with seven product surfaces. The structure stays the same. The discipline is what differs. This guide walks through the framework I use, the questions I ask in every kickoff, and the mistakes that turn a measurement plan into a forgotten Confluence page.

What a Measurement Plan Actually Is

A measurement plan is a single document that answers three questions for every metric you collect:

  • Why do we measure this? What decision will it inform?
  • Who looks at it? Which role, which ritual, which dashboard?
  • How will we know it’s working? What threshold triggers an action?

If you can’t answer all three for a metric, that metric does not belong in your tracking. This is the single rule that separates teams who use their analytics from teams who drown in it.

The Five Layers of a Good Plan

I structure every measurement plan as five layers, top-down. Each layer constrains the one below it, which is what stops the document from sprawling into a thousand-row spreadsheet of “would be nice to know.”

Layer            | What it captures                                  | Example
1. Business goal | The outcome the company is chasing this quarter   | Increase activated trial accounts by 25%
2. KPI           | The single number that proves the goal is moving  | Weekly activated accounts
3. Driver metric | Inputs that influence the KPI                     | Trial-to-activated rate, time to first value
4. Event         | The user action you actually log                  | account_activated, first_workflow_created
5. Parameter     | The dimension you slice the event by              | plan_tier, signup_source

Notice that events appear in layer four, not layer one. Most teams start with the events (“let’s track all the clicks”) and try to retrofit meaning afterward. That’s how you end up with 200 events and zero answers.
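The top-down cascade can be sketched as plain nested data. This is only an illustration of the layer structure, using the example values from the table above, not a schema any particular tool expects:

```python
# A minimal sketch of the five-layer cascade: goal -> KPI -> driver -> event -> parameter.
# Every event is reachable only by walking down from a business goal.
plan = {
    "goal": "Increase activated trial accounts by 25%",
    "kpis": [
        {
            "name": "Weekly activated accounts",
            "drivers": [
                {
                    "name": "Trial-to-activated rate",
                    "events": [
                        {
                            "name": "account_activated",
                            "parameters": ["plan_tier", "signup_source"],
                        }
                    ],
                }
            ],
        }
    ],
}

def all_events(plan):
    """Yield every event name; anything not in this walk has no goal and gets cut."""
    for kpi in plan["kpis"]:
        for driver in kpi["drivers"]:
            for event in driver["events"]:
                yield event["name"]

print(list(all_events(plan)))  # ['account_activated']
```

The point of the shape is the constraint: there is no place in the structure to put an event that lacks a driver, a KPI, and a goal above it.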

How to Run the Kickoff

The measurement plan is not a deliverable you write alone. It comes out of a kickoff workshop with the people who will actually use the data. I block 90 minutes and bring exactly three roles into the room: a product owner, a marketer who runs campaigns, and the engineer who will implement the tracking.

The agenda I use:

  1. 10 min — business goals. What does leadership care about this quarter? Write it on a whiteboard. No metrics yet.
  2. 20 min — questions. For each goal, what questions would tell us we’re succeeding or failing? Capture every question, no filtering.
  3. 30 min — KPIs and drivers. Group questions into KPIs. Identify two or three driver metrics per KPI.
  4. 20 min — events. For each driver metric, write down the user action that produces it.
  5. 10 min — owners. Assign one human to each KPI. No KPI without an owner.

If a question can’t be tied to a goal, it doesn’t get a metric. If a metric doesn’t have an owner, it doesn’t get tracked. Brutal, but it’s the only way to keep the plan small enough to maintain.

The Document Itself

I keep measurement plans as a single Google Sheet, one row per event. The columns map directly to the five layers above, plus implementation details. Here’s the schema I use:

Column         | Purpose
Goal           | Quarterly business goal this event supports
KPI            | The metric this event rolls up to
Owner          | Single human accountable for the KPI
Event name     | snake_case, follows naming conventions
Trigger        | What user action fires it
Parameters     | Required and optional parameters with types
Implementation | Page URL, GTM trigger, server endpoint, etc.
Status         | Planned / Implemented / Validated / Deprecated

The Status column matters more than people think. Without it, you cannot tell which events are live, which are half-implemented, and which were retired six months ago and still show up in reports. Status is what makes the plan a living document instead of a snapshot.
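One row of that sheet maps naturally onto a typed record, which also gives you a place to enforce the two hard rules mechanically. A sketch under my own assumptions (the field names mirror the columns above; the validation rules are the naming convention and the no-owner-no-tracking rule from this guide):

```python
from dataclasses import dataclass
from enum import Enum
import re

class Status(Enum):
    PLANNED = "Planned"
    IMPLEMENTED = "Implemented"
    VALIDATED = "Validated"
    DEPRECATED = "Deprecated"

@dataclass
class PlanRow:
    goal: str
    kpi: str
    owner: str
    event_name: str
    trigger: str
    parameters: dict   # parameter name -> type, required and optional
    implementation: str
    status: Status

    def validate(self) -> list[str]:
        """Flag rows that break the two hard rules: snake_case names, named owner."""
        errors = []
        if not re.fullmatch(r"[a-z][a-z0-9_]*", self.event_name):
            errors.append(f"{self.event_name}: not snake_case")
        if not self.owner.strip():
            errors.append(f"{self.event_name}: no owner, must not be tracked")
        return errors

row = PlanRow(
    goal="Increase activated trial accounts by 25%",
    kpi="Weekly activated accounts",
    owner="",  # deliberately missing to show the rule firing
    event_name="account_activated",
    trigger="User completes first workflow",
    parameters={"plan_tier": "string", "signup_source": "string"},
    implementation="server endpoint",
    status=Status.PLANNED,
)
print(row.validate())  # ['account_activated: no owner, must not be tracked']
```

Running a validation pass like this before each quarterly review takes seconds and catches the rows that would otherwise rot silently.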

Common Mistakes I See

After running this exercise with maybe forty teams over the last decade, the same patterns show up:

  • Tracking everything “just in case.” Storage is cheap, but maintenance isn’t. Every event you add becomes a future migration burden, a future GDPR question, and a future bug.
  • Skipping the owner column. An unowned KPI is an unread KPI. Within a quarter, nobody can remember why it exists.
  • Treating the plan as a one-time deliverable. The plan needs a quarterly review on the calendar. Otherwise the deprecated rows pile up and the new rows get tracked outside the document.
  • Letting the engineer write it alone. Engineers naturally focus on what’s technically interesting. Without product and marketing in the room, you’ll measure the wrong things beautifully.
  • Mixing tactical and strategic events. Button clicks belong in heatmap tools, not in your event schema. Reserve the schema for actions tied to KPIs.

How to Maintain the Plan

I run a 30-minute review every quarter. Three things happen in that meeting:

  1. Deprecation pass. Any event nobody has queried in 90 days gets flagged. Either it gets a defender, or it moves to Status: Deprecated and stops being tracked next sprint.
  2. New goals. If quarterly OKRs changed, new rows go in. The owner is named at the same time.
  3. Implementation drift check. Compare the plan against what’s actually live in your analytics tool. Mismatches always exist. The point is to surface them before they corrupt a report.
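Steps 1 and 3 are both mechanical set comparisons, so they are easy to script. A sketch with made-up event names and dates; the inputs would come from your plan document and your analytics tool's export:

```python
from datetime import date

# Drift check: planned (from the sheet) vs. live (from the analytics tool).
planned = {"account_activated", "first_workflow_created", "trial_started"}
live = {"account_activated", "trial_started", "legacy_click"}

missing = planned - live      # in the plan but never implemented
untracked = live - planned    # live but absent from the document

print(sorted(missing))    # ['first_workflow_created']
print(sorted(untracked))  # ['legacy_click']

# Deprecation pass: flag anything not queried in the last 90 days.
def flag_stale(last_queried: dict, today: date, max_age_days: int = 90) -> list:
    return sorted(
        name for name, queried in last_queried.items()
        if (today - queried).days > max_age_days
    )

last_queried = {
    "account_activated": date(2024, 5, 1),
    "legacy_click": date(2024, 1, 2),
}
print(flag_stale(last_queried, today=date(2024, 6, 1)))  # ['legacy_click']
```

The flagged events go on the review agenda; the script only surfaces candidates, and a human decides whether each one gets a defender or a Deprecated status.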

The first time I ran this review with a SaaS client, we deprecated 47 of their 112 active events. Their dashboards immediately got faster, the analytics warehouse got cheaper, and nobody complained because nobody was using those events anyway.

FAQ

How long should a measurement plan be?

For most product teams I work with, the plan ends up between 30 and 80 events. If you’re past 100, you’re tracking interesting things instead of decision-driving things. If you’re under 20, you’re probably missing driver metrics for at least one KPI.

Should the measurement plan include privacy considerations?

Yes. Add a column for personal data flags. Any event collecting identifiers, location, or behavior under consent should be marked. This makes your DPA reviews and consent design much easier later, and it forces the team to think about privacy at design time, not after launch.

Who owns the measurement plan document?

One person, usually a product analyst or data PM. They don’t own the KPIs themselves — those have separate owners — but they own the document’s existence and quarterly review. Without a single document owner, the plan rots within two quarters.

Do I need a measurement plan if I only use Plausible or Matomo?

Yes, and arguably more so. Lightweight tools tempt teams to skip planning because there are fewer features to configure. But the document isn't about your tool. It's about agreeing on what counts as success and who watches it. The tool is the easy part.

Conclusion

A measurement plan is an act of restraint. The hardest part is not adding events — it’s saying no to events. Every row you reject is a future bug that won’t happen and a dashboard nobody has to maintain. Start with goals, end with parameters, and review it every quarter. That’s the entire framework.

If you’ve never written one, block 90 minutes this week. Bring product, marketing, and engineering. Walk through the five layers. You’ll leave the room with a smaller, sharper picture of what you’re actually trying to learn from your users.
