Every product team I’ve worked with eventually starts looking at retention curves. And almost every team gets the same three things wrong on their first attempt: they pick the wrong cohort window, they confuse rolling retention with classic retention, and they read flat curves as good news when they’re often the opposite. This article unpacks Day-1, Day-7, and Day-30 retention — what each one actually tells you, and the trap each one hides.
I’ve built retention dashboards for consumer apps, B2B SaaS, and developer tools. The math is the same. The interpretation is wildly different depending on what your product does and how often users are supposed to come back. Let’s go through it.
What a Retention Curve Actually Shows
A retention curve plots the percentage of a user cohort that comes back on each day after their first session. You start at 100% on Day 0 (the day they signed up), and the line drops as users stop returning. The shape of the drop is the diagnostic. The number at any given day is the headline metric.
Two things to fix in your head before reading any retention chart:
- Cohort definition matters. Are you looking at users who signed up in one specific week, or all-time users? These produce very different curves.
- Return definition matters. Does “returning” mean opening the app, or doing a meaningful action? A curve based on “opened the app” looks much healthier than one based on “completed the core action,” and only the second one tells you about real engagement.
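The mechanics above fit in a few lines of Python. This is a minimal sketch of a classic Day-N retention curve built from raw events; the `(user_id, day)` event shape and integer day offsets are illustrative assumptions, not a prescribed schema.

```python
from collections import defaultdict

def retention_curve(events, horizon=7):
    """Classic Day-N retention from (user_id, day) events.

    Each user's earliest day is treated as their signup (Day 0).
    Returns {n: fraction of the cohort active on exactly day n}.
    """
    first_seen = {}
    active_days = defaultdict(set)
    for user, day in events:
        first_seen[user] = min(day, first_seen.get(user, day))
        active_days[user].add(day)

    cohort_size = len(first_seen)
    curve = {}
    for n in range(horizon + 1):
        returned = sum(
            1 for user, signup in first_seen.items()
            if signup + n in active_days[user]
        )
        curve[n] = returned / cohort_size
    return curve

# Three users: one returns on day 1, one on days 1 and 7, one never returns.
events = [("a", 0), ("a", 1), ("b", 0), ("b", 1), ("b", 7), ("c", 0)]
print(retention_curve(events))
```

Note that the "return" test here counts any event; in practice you would filter to the core action first, per the second bullet above.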
Day-1 Retention
Day-1 is the first reality check. It tells you whether the user came back at all the day after signing up. For consumer apps, this is the most important short-term retention metric — if a user doesn’t return on the day after signup, they’re statistically very unlikely to return at all. (Note that Day-1 conventionally means the next calendar day, not "within 24 hours"; mixing the two definitions will quietly skew your numbers.)
What good looks like, in my experience:
| Product type | Healthy Day-1 | Concerning Day-1 |
|---|---|---|
| Consumer mobile app | 30–45% | Below 20% |
| B2B SaaS (daily-use) | 40–55% | Below 25% |
| B2B SaaS (weekly-use) | 15–25% | Below 10% |
| Marketplace | 20–35% | Below 15% |
What Day-1 hides: a user who came back, did nothing, and bounced gets counted as retained. If your Day-1 is high but your Day-7 collapses, the issue is probably that your Day-1 events are too easy to trigger. Tighten the definition.
Day-7 Retention
Day-7 is where you start to see whether the product became part of someone’s week. It filters out the “I tried it once and forgot” users and reveals who actually integrated the tool into their routine. For most B2B tools, this is the metric I focus on first.
The shape of the curve between Day-1 and Day-7 is more informative than either number alone. Three patterns I see most often:
- Steep drop, then flat. The product loses casual users immediately but the rest stick. This is healthy. The flat tail represents your real users.
- Gradual decline through the week. Users are giving up over multiple sessions. This usually points to a missing aha moment — they keep returning to try, then quitting.
- High Day-1, collapse by Day-7. Almost always a measurement issue. Your Day-1 event is too generous and the Day-7 event is closer to real engagement.
What Day-7 hides: users who only use your product on weekdays. If your cohort signed up on a Friday, their “Day-7” lands on the next Friday and may look healthier than a Tuesday cohort. Always cohort by signup-day-of-week before drawing conclusions.
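One way to guard against the weekday effect is to bucket users by signup weekday before comparing cohorts. A minimal sketch, with made-up dates:

```python
from collections import defaultdict
from datetime import date

def cohorts_by_weekday(signups):
    """Group users by the weekday they signed up, so Friday cohorts
    aren't compared directly against Tuesday cohorts.

    `signups` maps user_id -> signup date.
    """
    buckets = defaultdict(list)
    for user, d in signups.items():
        buckets[d.strftime("%A")].append(user)
    return dict(buckets)

signups = {
    "a": date(2024, 6, 3),   # a Monday
    "b": date(2024, 6, 7),   # a Friday
    "c": date(2024, 6, 14),  # a Friday
}
print(cohorts_by_weekday(signups))
```

Compute your Day-7 numbers within each bucket; only then compare across buckets.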
Day-30 Retention
Day-30 tells you whether the product became a habit. By this point, the casual users are gone, the trial users have decided to convert or churn, and what’s left is your real engaged base. For subscription businesses, Day-30 retention is a leading indicator for revenue retention 90 days later.
What’s healthy varies enormously by product type:
| Product type | Healthy Day-30 | What it means |
|---|---|---|
| Consumer mobile app | 10–20% | Brutal but normal — the long tail is small and loyal |
| B2B SaaS (daily-use) | 30–45% | Strong sign of paid conversion potential |
| B2B SaaS (weekly-use) | 20–35% | Real users are integrating the tool into their week |
| Marketplace | 15–25% | Network effects starting to show |
What Day-30 hides: a flat retention curve at Day-30 looks great, but if your retained base is shrinking month over month, the curve is hiding the cohort decay. Always look at multiple cohorts side by side, not a single rolling average.
The Trap of Reading Flat Curves as Good
A flat retention curve at, say, 15% looks like a stable user base. It can also be a slow death. The flatness describes percentages within each cohort, not the absolute number of active users: if new signups shrink month over month, each cohort contributes fewer retained users than the last, and the total base declines even while every cohort’s curve flattens at the same 15%.
The fix is to plot multiple cohorts on the same chart. If each cohort flattens at roughly the same percentage, you have product-market fit at that retention level. If each cohort flattens lower than the last, the product is losing relevance and you need to act.
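A toy illustration with assumed numbers: every cohort flattens at 15%, but acquisition is falling 20% month over month, so the retained base still shrinks.

```python
# Assumed numbers for illustration only: each cohort flattens at 15%
# Day-30 retention, while new signups fall 20% month over month.
signups = [10_000, 8_000, 6_400]              # new users per month
retained = [round(s * 0.15) for s in signups]  # retained users per cohort
print(retained)  # the percentage is flat, the absolute base is not
```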
Classic vs Rolling Retention
The two definitions produce very different numbers, and people mix them up constantly:
- Classic retention — Day-N retention is the % of cohort users who returned specifically on day N. This is what most cohort tables show by default.
- Rolling retention — Day-N retention is the % of cohort users who returned on day N or any later day. This is what subscription businesses tend to use.
Rolling retention is more forgiving and produces higher numbers. Classic retention is harsher and produces a more diagnostic curve. Pick one and stick with it. The dangerous thing is comparing your “20% retention” to a competitor’s “20% retention” without knowing which definition each is using.
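The two definitions are easiest to see side by side on the same data. A sketch, using assumed per-user sets of active day offsets (Day 0 = signup):

```python
def classic_retention(active_days, n):
    """Fraction of users active on exactly day n."""
    return sum(1 for days in active_days if n in days) / len(active_days)

def rolling_retention(active_days, n):
    """Fraction of users active on day n or any later day."""
    return sum(1 for days in active_days if any(d >= n for d in days)) / len(active_days)

# Four users' active day sets; same data, very different Day-7 numbers.
cohort = [{0, 1, 7}, {0, 3}, {0, 9}, {0}]
print(classic_retention(cohort, 7))   # only the day-7 returner counts
print(rolling_retention(cohort, 7))   # the day-9 returner also counts
```

On this data, classic Day-7 is 25% and rolling Day-7 is 50%: the same cohort, and the rolling number is double the classic one.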
How to Set Up Retention Tracking Cleanly
- Decide what counts as a “retained” action. Pick something tied to the product’s core value, not just an app open or pageview.
- Tag every user with their signup date as a permanent property, not a session property.
- Run your retention query against weekly cohorts at first, not daily — the noise on daily cohorts is too high until you have real volume.
- Plot at least three cohorts on the same chart. One cohort is anecdote; three is a pattern.
- Re-check the numbers after every major product change. A retention shift of 3 points after a release is usually meaningful.
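The first three steps of that checklist can be sketched as a single query. Everything here is a hedged assumption about your schema: the event names (`completed_task` as the core action), the `(user_id, event_name, date)` tuple shape, and Monday-start weeks are all illustrative.

```python
from collections import defaultdict
from datetime import date, timedelta

def weekly_cohort_day7(events, core_action="completed_task"):
    """Day-7 retention per weekly signup cohort.

    A user's first event of any kind marks signup; only `core_action`
    events count as "retained" (step 1 of the checklist). Cohorts are
    keyed by the Monday of the signup week (step 3).
    """
    first_seen = {}
    core_dates = defaultdict(set)
    for user, name, d in sorted(events, key=lambda e: e[2]):
        first_seen.setdefault(user, d)          # permanent signup date (step 2)
        if name == core_action:
            core_dates[user].add(d)

    cohorts = defaultdict(lambda: [0, 0])  # week_start -> [size, retained]
    for user, signup in first_seen.items():
        week_start = signup - timedelta(days=signup.weekday())
        cohorts[week_start][0] += 1
        if signup + timedelta(days=7) in core_dates[user]:
            cohorts[week_start][1] += 1

    return {week: retained / size for week, (size, retained) in cohorts.items()}

events = [
    ("a", "signup", date(2024, 6, 3)),
    ("a", "completed_task", date(2024, 6, 10)),
    ("b", "signup", date(2024, 6, 4)),
]
print(weekly_cohort_day7(events))
```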
FAQ
Should I use Day-1 or Day-7 as my main retention metric?
It depends on use cadence. Daily-use products should focus on Day-1 — if users skip a day, that’s a real signal. Weekly-use products should focus on Day-7, because Day-1 is mostly noise. B2B tools where users come back twice a week should focus on Week-2 retention rather than any daily metric.
What’s a good Day-30 retention number?
There’s no universal answer. For consumer apps, 10–20% is normal and survivable. For daily-use B2B SaaS, you want 30%+. For weekly-use B2B SaaS, 20%+. Less important than the absolute number is whether the curve is flattening — that’s the signal of product-market fit.
How big should my cohorts be?
Aim for at least 200 users per cohort to get statistically meaningful numbers. Below 100, the noise will dominate any signal. If your volume is too low, switch from daily to weekly cohorts and accept slower feedback loops.
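The 200-user rule of thumb falls out of the binomial standard error. A rough sketch using the normal approximation and an assumed 15% measured retention:

```python
from math import sqrt

def retention_ci(p, n, z=1.96):
    """Approximate 95% confidence half-width for a retention estimate
    p measured on a cohort of n users (normal approximation)."""
    return z * sqrt(p * (1 - p) / n)

# At 15% measured retention, a 50-user cohort gives roughly +/-10 points
# of uncertainty; a 200-user cohort tightens that to roughly +/-5 points.
for n in (50, 200, 1000):
    print(n, round(retention_ci(0.15, n), 3))
```

A ±10-point band around a 15% estimate is useless; ±5 points is workable, which is why 200 is a sensible floor.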
Why does my retention curve sometimes go up?
Classic retention can legitimately tick up and down: a user can skip day 3 and return on day 5, and weekly-use products often show a bump every seventh day. Rolling retention, by contrast, mathematically cannot increase, since a user counted on day N is by definition also counted on every earlier day. So small rises in a classic curve are usually seasonality; a rising rolling curve means a bug in your cohort attribution.
Conclusion
Day-1, Day-7, and Day-30 each answer a different question: did the user come back, did they integrate the product into their week, and did they make it a habit? Look at all three. Compare cohorts, not averages. Define “returned” tightly enough that the metric reflects real engagement, not just sessions opened.
Once you have these three numbers cleanly tracked, the product’s health becomes legible in a way that no headline metric can match.