By the time a SaaS user cancels, you’ve already lost. The cancellation is the announcement, not the decision — that decision was made weeks earlier, after a series of behavioral signals that almost nobody bothers to track. The good news is that those signals are remarkably consistent, and you can usually spot them 30 days before the actual churn event if you’re watching the right things.
I’ve built churn prediction models for a couple of European SaaS clients, and the same handful of signals show up every time. This article walks through the seven I trust most, how to track each one, and how to act on the warning before the user clicks cancel.
Why Lagging Churn Metrics Are Useless
The traditional churn metric — accounts cancelled this month divided by accounts at start of month — is a lagging indicator. By the time the number moves, the users in question have been disengaging for weeks. You see the cliff after you’ve already fallen.
Leading indicators are the alternative. They don’t tell you who has churned. They tell you who is about to. And they let you intervene while there’s still a chance to change the outcome.
The Seven Signals
1. Login Frequency Decay
The simplest and most reliable signal. Take the user’s login frequency over the last 30 days and compare it to their previous 30 days. A drop of more than 50% is a strong signal that the user is disengaging. A drop of more than 75% means churn within the next month is likely.
Track login_count_30d (logins in the last 30 days) and login_count_60_30d (logins in days 31–60) as user properties. Compare them in a daily job. Flag any user where the ratio drops below 0.5.
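The comparison above can be sketched as a small classifier; the 0.5 and 0.25 cutoffs are the 50% and 75% drops mentioned, and the function name is my own:

```python
def login_decay_flag(login_count_30d: int, login_count_60_30d: int) -> str:
    """Classify login-frequency decay for one user.

    login_count_30d:    logins in the most recent 30 days
    login_count_60_30d: logins in the 30 days before that (days 31-60)
    """
    if login_count_60_30d == 0:
        return "no_baseline"  # new or already-silent user; handle separately
    ratio = login_count_30d / login_count_60_30d
    if ratio < 0.25:   # more than a 75% drop: churn within a month is likely
        return "critical"
    if ratio < 0.5:    # more than a 50% drop: strong disengagement signal
        return "at_risk"
    return "healthy"
```

The same function works unchanged for any per-user event count, so you can reuse it for the core-action signal below.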
2. Core Action Frequency Decay
Logins alone aren’t enough — a user who logs in every day to check a number but never does anything productive is also disengaging. Pick the one or two actions that represent real value delivery in your product, and track their frequency the same way as logins.
For a project management tool, this might be task_created or task_completed. For an analytics dashboard, report_generated. For a CRM, contact_updated. The exact choice matters less than picking something that maps to value, not vanity.
3. Feature Breadth Reduction
Healthy users tend to use multiple features over a month. Disengaging users narrow down to one or two — usually the simplest features they’re stuck with. Track distinct_features_used_30d as a property. A drop from, say, 5 to 2 is a leading indicator that’s hard to explain away.
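Computing distinct_features_used_30d from a raw event log is a one-pass aggregation; the `{"user_id", "feature"}` event schema here is a hypothetical stand-in for however your warehouse stores events:

```python
from collections import defaultdict

def distinct_features_used(events: list[dict]) -> dict[str, int]:
    """Count distinct features each user touched in the window.

    Expects events shaped like {"user_id": ..., "feature": ...}
    (hypothetical schema); returns {user_id: breadth}.
    """
    seen = defaultdict(set)
    for e in events:
        seen[e["user_id"]].add(e["feature"])
    return {user: len(features) for user, features in seen.items()}
```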
This signal is especially strong for B2B tools where the original use case has expired but the user is paying out of inertia. They’re winding down, and the breadth metric catches it before the cancellation.
4. Support Ticket Patterns
Two patterns matter here. First, a sudden cluster of support tickets from a previously quiet user often signals frustration that will crystallize into churn. Second, paradoxically, a user who used to file tickets and suddenly stops is also at risk — they’ve given up on the product responding to their needs.
Track support_tickets_30d and the change from the previous window. Both extreme ends of the distribution deserve a closer look.
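Both extremes can be caught with one check; the three-ticket threshold here is an illustrative assumption, not a number from my client work:

```python
def ticket_pattern_flag(tickets_30d: int, tickets_prev_30d: int) -> str:
    """Flag both ends of the support-ticket distribution.

    Thresholds (>= 3 tickets) are illustrative; tune to your volume.
    """
    if tickets_prev_30d == 0 and tickets_30d >= 3:
        return "sudden_cluster"  # quiet user suddenly filing tickets
    if tickets_prev_30d >= 3 and tickets_30d == 0:
        return "gone_quiet"      # frequent filer who has given up
    return "normal"
```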
5. Seat Reduction
For team-based tools, the strongest churn signal is removed seats. When an account goes from 12 active users to 8 to 5, the writing is on the wall. The full cancellation usually follows within 60 to 90 days. Track active_seats as a property and alert on any reduction of 25% or more.
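The 25% alert is a one-liner once you keep the previous seat count around; a minimal sketch:

```python
def seat_reduction_alert(active_seats_now: int, active_seats_prev: int) -> bool:
    """True when active seats dropped 25% or more since the last snapshot."""
    if active_seats_prev == 0:
        return False  # no baseline to compare against
    return (active_seats_prev - active_seats_now) / active_seats_prev >= 0.25
```

So the 12-to-8 drop in the example above trips the alert, while a 12-to-10 drop does not.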
Seat reduction is also a great leading indicator for downgrades, which are technically not churn but have similar revenue impact and similar root causes.
6. Last Engagement Recency
How many days since the user last did the core action? This is a single-number version of the frequency signals above, and it’s particularly useful for surfacing accounts that have already gone silent. Anything past 14 days for a daily-use tool, or 30 days for a weekly-use tool, deserves attention.
Track days_since_last_core_action as a property and let your CS team filter by it.
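A sketch of the recency property and the cadence-dependent thresholds from above (14 days for daily-use tools, 30 for weekly-use):

```python
from datetime import date

def days_since_last_core_action(last_action: date, today: date) -> int:
    """The single-number recency property."""
    return (today - last_action).days

def recency_flag(days_silent: int, cadence: str) -> bool:
    """Flag past 14 days for a 'daily' tool, past 30 for a 'weekly' tool."""
    threshold = 14 if cadence == "daily" else 30
    return days_silent > threshold
```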
7. Billing Page Visits
One of the most direct signals. A user who lands on the billing page but doesn’t change anything is often researching cancellation. The conversion from “visited billing without changing plan” to “cancelled” within 14 days is high enough that I always set up an alert for it.
Track billing_page_view as an event. If the user views billing more than once in a 7-day window without making a plan change, route them to your retention playbook immediately.
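The "more than once in a 7-day window, no plan change" rule reduces to checking whether any two consecutive views fall within 7 days of each other; a minimal sketch with an invented function name:

```python
from datetime import date, timedelta

def billing_intent_flag(view_dates: list[date], plan_changed: bool) -> bool:
    """True if the user viewed billing more than once within any 7-day
    window without making a plan change."""
    if plan_changed or len(view_dates) < 2:
        return False
    views = sorted(view_dates)
    # Two views within 7 days must be consecutive once sorted.
    for earlier, later in zip(views, views[1:]):
        if later - earlier <= timedelta(days=7):
            return True
    return False
```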
Combining the Signals
Any one signal in isolation is noisy. The power comes from combining them. I usually score each user on a simple 0-7 scale, one point per signal that’s currently triggered, and bucket the scores like this:
| Score | Risk level | Action |
|---|---|---|
| 0–1 | Healthy | None — these users are fine |
| 2–3 | Watch | Add to weekly CS review |
| 4–5 | At risk | Trigger re-engagement email; CS reach-out |
| 6–7 | Critical | Personal call from account manager within 48 hours |
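The scoring and bucketing can be sketched in a few lines, matching the table above:

```python
def risk_score(signals: dict[str, bool]) -> int:
    """One point per currently-triggered signal, 0-7."""
    return sum(signals.values())

def risk_bucket(score: int) -> str:
    """Map a 0-7 score to the action buckets in the table above."""
    if score <= 1:
        return "healthy"
    if score <= 3:
        return "watch"
    if score <= 5:
        return "at_risk"
    return "critical"
```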
For a recent client we ran this scoring weekly, and the at-risk segment converted to actual churn at 8x the rate of the healthy segment. That predictive lift was enough to justify a CS reach-out program targeted exclusively at scores 4 and above.
How to Set the Tracking Up
- Add the seven user properties to your data warehouse: login frequency 30d, core action frequency 30d, distinct features 30d, support tickets 30d, active seats, days since last core action, and billing page views 7d.
- Run a daily job that recalculates them for every active account. Don’t try to do this in real time — churn unfolds over weeks, so a daily cadence is plenty.
- Compute the 0-7 risk score and store it as a separate property.
- Build a single dashboard showing the score distribution over time. Flag any week where the at-risk segment grows by more than 10%.
- Connect the at-risk segment to your CS workflow. The signals only matter if someone acts on them.
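The daily job in the steps above can be sketched as one pass per account. Every field name here is an assumption about your warehouse schema, and the thresholds are the ones from the seven signal sections:

```python
def daily_risk_job(account: dict) -> dict:
    """Recompute the seven boolean signals and the 0-7 score for one account.

    Field names are hypothetical warehouse properties mirroring the list above.
    """
    prev_logins = account["login_count_60_30d"] or 1    # avoid divide-by-zero
    prev_actions = account["core_action_count_60_30d"] or 1
    signals = {
        "login_decay": account["login_count_30d"] / prev_logins < 0.5,
        "core_action_decay": account["core_action_count_30d"] / prev_actions < 0.5,
        "feature_breadth": account["distinct_features_used_30d"] <= 2,
        "ticket_anomaly": abs(account["support_tickets_30d"]
                              - account["support_tickets_prev_30d"]) >= 3,
        "seat_reduction": account["active_seats_prev"] > 0
            and (account["active_seats_prev"] - account["active_seats"])
                / account["active_seats_prev"] >= 0.25,
        "gone_silent": account["days_since_last_core_action"] > 14,
        "billing_intent": account["billing_page_views_7d"] > 1
            and not account["plan_changed_7d"],
    }
    return {**account, "signals": signals, "risk_score": sum(signals.values())}
```

Run this over every active account, store `risk_score` as a property, and the dashboard and CS routing steps fall out of a single table.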
Common Mistakes
- Tracking signals without acting on them. The dashboard is decorative if no human is responsible for the at-risk segment. Assign an owner.
- Using too many signals. Seven is enough. Adding more dilutes each one and makes the score harder to interpret.
- Treating new accounts the same as old ones. A 14-day-old account that’s quiet might just be onboarding slowly. Apply the signals only after the account has been live for at least 30 days.
- Ignoring reactivation signals. A user whose score drops from 5 to 2 in a single week is a great sign. Track the velocity of the score, not just its absolute value.
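Tracking the velocity from the last bullet is just a week-over-week delta on the stored scores; a minimal sketch:

```python
def score_velocity(score_history: list[int]) -> int:
    """Week-over-week change in risk score.

    Negative means the user is recovering; positive means accelerating risk.
    """
    if len(score_history) < 2:
        return 0  # not enough history to compute a delta
    return score_history[-1] - score_history[-2]
```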
FAQ
How accurate is a signal-based churn prediction?
Signal-based scoring isn’t a machine learning model and shouldn’t be confused with one. In my experience, it correctly flags around 70% of users who churn, with maybe 30% false positives. That’s good enough to drive a CS workflow without needing data science overhead.
Should I build this in-house or buy a churn prediction tool?
For most product teams, in-house is the better starting point. The signals are simple enough to compute in your data warehouse, you keep full control over the data, and you avoid yet another vendor in your stack. Move to a paid tool only when the in-house version is producing clear ROI and you need more sophistication.
Do these signals work for self-serve products?
Yes, but the action is different. Without a CS team, you’ll route at-risk users into automated re-engagement flows: an email sequence, an in-app prompt, a discount offer. The detection is the same. The intervention is what changes.
What’s the most predictive single signal?
Billing page visits without a plan change. It’s the most direct signal of intent. The other six are about disengagement; this one is about the user actively researching the exit. Both kinds matter, but the billing signal converts to actual churn at the highest rate.
Conclusion
The decision to churn happens long before the click. Login decay, feature reduction, billing visits — these are the breadcrumbs the user leaves before they leave. Track them weekly, score them simply, and put a human in front of the at-risk segment. That’s the entire framework.
You won’t save every account. But you’ll save the ones that were on the fence — and those are the ones worth fighting for.