Cohort Retention Table Generator
Generate cohort retention matrices from simple inputs. Track how different user groups retain over time and identify your best-performing cohorts.
Define Cohorts
Add cohorts by signup month, campaign, or any grouping
Enter Retention Data
Input active user counts or retention percentages per period
Analyze Patterns
View retention curves and heatmaps, and identify your strongest cohorts
Where Your Retention Table Hides the Real Story
Your blended monthly retention rate says 88%. But when you break it down by cohort — grouping customers by signup month — a different picture emerges. January’s cohort retains 92% into month two; April’s retains only 74%. That 18-point gap is invisible in the aggregate number. The common mistake is reporting a single retention metric across all customers without asking which customers are retaining. A retention table forces the comparison: rows are cohorts, columns are months since signup, and each cell shows what fraction survived.
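A minimal pandas sketch of that construction, assuming hypothetical event-level data with one row per user per active month (the column names and numbers are illustrative):

```python
import pandas as pd

# Hypothetical event-level data: one row per user per month they were active.
events = pd.DataFrame({
    "user_id":      [1, 1, 1, 2, 2, 3, 3, 3, 3],
    "signup_month": ["2024-01"] * 5 + ["2024-02"] * 4,
    "active_month": ["2024-01", "2024-02", "2024-03",
                     "2024-01", "2024-02",
                     "2024-02", "2024-03", "2024-04", "2024-05"],
})

def month_number(ym):
    """Turn 'YYYY-MM' into a linear month count so differences are easy."""
    year, month = map(int, ym.split("-"))
    return year * 12 + month

# Months since signup for each activity record (Period 0 = the signup month).
events["period"] = (events["active_month"].map(month_number)
                    - events["signup_month"].map(month_number))

# Rows = cohorts, columns = periods since signup, cells = distinct active users.
counts = events.pivot_table(index="signup_month", columns="period",
                            values="user_id", aggfunc="nunique")

# Divide each row by its Period-0 size: every cell becomes the fraction
# of the original cohort still active in that period.
retention = counts.div(counts[0], axis=0)
print(retention.round(2))
```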
The heatmap overlay makes the pattern visual. A diagonal stripe of dark cells means every cohort drops off at the same stage — likely an onboarding failure. A single dark row means one cohort was acquired through a bad channel. You cannot diagnose either pattern from an aggregate retention number.
Reading the Heatmap Without Over-Interpreting Colour
Colour scale anchoring. If your heatmap runs from 0% (red) to 100% (green), most cells cluster in the 60–90% range and render as nearly indistinguishable shades of green. Re-anchor the scale to the actual data range — say 50% to 95% — so meaningful differences in the drop-off curve are visually distinct. A 5-point retention difference between cohorts might matter enormously at scale but is invisible on a 0–100 scale.
Small cohorts, noisy cells. If a cohort has only 30 users, month-6 retention jumping from 60% to 70% is a swing of 3 users. Do not re-prioritise product work based on a cell driven by single-digit user counts. Flag cohorts below a minimum sample size and grey them out or annotate them.
Triangular shape is normal. Recent cohorts have fewer columns filled because they have not existed long enough. This leaves an empty triangle in the bottom-right corner of the table. Do not mistake the absence of data for zero retention.
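A matplotlib sketch of all three adjustments on made-up numbers: the colour scale anchored to the observed range, cohorts below a minimum sample size flagged with an asterisk, and NaN cells left blank so the empty triangle is not read as zero retention.

```python
import numpy as np
import matplotlib.pyplot as plt

# Illustrative retention matrix (%); NaN marks periods that have not happened yet.
retention = np.array([
    [100, 92, 85, 80, 78],
    [100, 88, 81, 76, np.nan],
    [100, 74, 63, np.nan, np.nan],          # a weak cohort
    [100, 90, np.nan, np.nan, np.nan],
])
cohort_sizes = np.array([480, 510, 28, 450])  # the third cohort is tiny
MIN_SAMPLE = 50

fig, ax = plt.subplots()
# Anchor the colour scale to the observed range (excluding the 100% column),
# not 0-100, so small but real differences remain visible.
observed = retention[:, 1:]
im = ax.imshow(retention, cmap="RdYlGn",
               vmin=np.nanmin(observed), vmax=np.nanmax(observed))
# imshow leaves NaN cells blank, so the bottom-right triangle reads as
# "no data yet" rather than as zero retention.

# Annotate each cell, flagging cohorts below the minimum sample size.
for i in range(retention.shape[0]):
    for j in range(retention.shape[1]):
        if np.isnan(retention[i, j]):
            continue
        label = f"{retention[i, j]:.0f}"
        if cohort_sizes[i] < MIN_SAMPLE:
            label += "*"                     # asterisk = small, noisy cohort
        ax.text(j, i, label, ha="center", va="center", fontsize=8)

ax.set_xlabel("Months since signup")
ax.set_ylabel("Cohort")
fig.colorbar(im, ax=ax, label="Retention (%)")
plt.show()
```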
Turning Cohort Insights Into Product or Channel Decisions
Once you spot a weak cohort, the next question is why. Cross-reference the signup month against acquisition channel, pricing changes, or feature releases. If March’s cohort drops off faster and March was when you ran a heavily discounted campaign, the cohort quality problem is a marketing problem, not a product problem. Different root causes demand different fixes.
Quantify the gap in revenue terms. If the average cohort retains 85% at month 3 but the weak cohort retains 70%, and cohort size is 500 users at $40/month, the gap costs (0.85 − 0.70) × 500 × $40 = $3,000/month in lost MRR from that single cohort. Multiply across several underperforming cohorts and the annual revenue impact justifies dedicated investigation.
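The same calculation in code, using the figures above:

```python
# Lost MRR from one underperforming cohort, using the figures above.
baseline_retention, weak_retention = 0.85, 0.70
cohort_size, arpu = 500, 40              # users, $/month

lost_mrr = (baseline_retention - weak_retention) * cohort_size * arpu
print(f"${lost_mrr:,.0f}/month")         # $3,000/month
```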
Patterns That Distort Cohort Retention Analysis
Reactivated users inflating later columns. If a customer cancels in month 3 and re-subscribes in month 7, do you count them as retained in month 7? If yes, the drop-off curve bends upward, which flatters the picture. Define clearly whether your table tracks continuous retention or any-active retention and be consistent.
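A minimal sketch of the two definitions for a hypothetical user who cancels in month 3 and re-subscribes in month 7:

```python
# Periods in which one user was active: churned after month 2, back in month 7.
active_periods = {0, 1, 2, 7}

def any_active(periods, t):
    """Retained at period t if active in t, ignoring any gaps before it."""
    return t in periods

def continuous(periods, t):
    """Retained at period t only if active in every period from 0 through t."""
    return all(p in periods for p in range(t + 1))

for t in range(8):
    print(t, any_active(active_periods, t), continuous(active_periods, t))
# any-active counts this user as retained again in month 7 (the curve bends up);
# continuous counts them as lost from month 3 onward.
```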
Plan changes mid-cohort. If you raise prices in June, cohorts that signed up pre-increase may churn differently from post-increase cohorts — but both appear in the same table without context. Annotate the table with event markers so readers know a price change landed between rows 5 and 6.
Seasonal products. A tax-prep tool will always see massive month-5 drop-off because tax season ends. That is not a product failure; it is the natural usage cycle. Comparing that tool’s cohort table to a year-round SaaS product’s table is misleading.
B2B Onboarding Cohort: Mini Case Study
Scenario: A B2B SaaS product signs up 200 teams per month. The retention table shows month-1 retention averaging 78% across all cohorts — except Q3 cohorts, which retain 91%. Investigation reveals Q3 onboarding included a mandatory 30-minute setup call. Q1 and Q2 relied on self-serve onboarding.
Heatmap signal: The Q3 rows are visibly greener through months 1–4, then converge with other cohorts by month 6. The setup call improves early retention but does not affect long-term stickiness. The product team rolls out the call for all new signups, lifting blended month-1 retention from 78% to 87%. At $120 ARPU, retaining an extra 9 percentage points of a 200-team cohort (18 teams) adds $2,160/month in retained MRR.
Takeaway: The cohort view surfaced a natural experiment that aggregate metrics completely missed. Without the table, the team would still be guessing why early churn was high.
Cohort Retention Table Equations
The arithmetic behind each cell in the retention table:
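In standard form, and consistent with the FAQ below (original-cohort denominator, simple unweighted average):

$$\text{Retention}(c, t) = \frac{\text{ActiveUsers}(c, t)}{\text{CohortSize}(c)}, \qquad \text{Churn}(c, t) = 1 - \text{Retention}(c, t)$$

$$\text{AvgRetention}(t) = \frac{1}{|C_t|} \sum_{c \in C_t} \text{Retention}(c, t)$$

where $C_t$ is the set of cohorts with observed data at period $t$. Retention at Period 0 is 1 by construction, and cells in the empty triangle are undefined rather than zero.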
Paste-Ready Cohort Summary for Your Stakeholder Deck
Use this template to present cohort retention findings to non-technical stakeholders:
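A minimal sketch of such a summary (the tool's exact template may differ):

Cohort Retention Summary: [period analyzed]
- Best cohort: [cohort] at [X]% month-[N] retention
- Worst cohort: [cohort] at [Y]% month-[N] retention
- Likely driver of the gap: [channel / onboarding change / pricing event]
- Revenue at stake: ([X]% − [Y]%) × [cohort size] × [ARPU] = $[Z]/month
- Recommended action: [one sentence]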
This format forces the presenter to name the best and worst cohorts, explain the gap, and attach a dollar figure. It prevents the common failure mode of showing a pretty heatmap with no actionable conclusion.
Frequently Asked Questions
What's the difference between 'counts' and 'percents' input mode?
In 'counts' mode, you enter the actual number of active users at each period (e.g., 850 users active in Month 1), and the tool calculates retention percentages for you. In 'percents' mode, you enter retention percentages directly (e.g., 85% retained in Month 1). Use counts if you have raw data; use percents if you've already calculated retention rates.
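Converting one to the other is a single division by the Period-0 count (illustrative numbers):

```python
# 'Counts' input: active users at periods 0..3 (illustrative numbers).
counts = [1000, 850, 720, 610]

# The equivalent 'percents' input: each count divided by the Period-0 count.
percents = [100 * c / counts[0] for c in counts]
print(percents)   # [100.0, 85.0, 72.0, 61.0]
```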
What does Period 0 (M0, Q0, etc.) represent?
Period 0 represents the moment the cohort is formed—for example, when users sign up. By definition, retention at Period 0 is always 100% because every user in the cohort is 'active' at the start. Period 1 is the first measurement after the cohort started (e.g., 1 month after signup).
How is retention calculated relative to original cohort vs previous period?
This tool calculates retention relative to the original cohort size, not the previous period. For example, if your cohort started with 1,000 users and has 500 active in Month 3, that's 50% retention—regardless of how many were active in Month 2. This is the standard approach for cohort retention analysis.
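The two conventions side by side, extending the example above with illustrative Month 1 and Month 2 counts:

```python
# Active users at periods 0..3; the Month 1 and 2 values are illustrative.
counts = [1000, 800, 600, 500]

# This tool: each period relative to the ORIGINAL cohort size.
vs_original = [c / counts[0] for c in counts]
print(vs_original)       # [1.0, 0.8, 0.6, 0.5] -> Month 3 is 50%

# The alternative: each period relative to the PREVIOUS period's survivors.
vs_previous = [counts[i] / counts[i - 1] for i in range(1, len(counts))]
print(vs_previous)       # [0.8, 0.75, 0.833...] -> Month 3 keeps ~83% of Month 2
```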
Why might newer cohorts show better retention than older ones?
Several factors can explain improving cohort retention: product improvements (better onboarding, new features), better-targeted acquisition (higher-quality users), seasonal effects (certain months attract more engaged users), or market changes. Be careful not to over-interpret short-term differences, though—statistical noise and small sample sizes can create apparent patterns.
Should I weight the average by cohort size?
This tool uses a simple (unweighted) average across cohorts. A weighted average would give larger cohorts more influence. Both approaches are valid: simple averages treat each cohort equally as an 'experiment', while weighted averages better reflect your overall user base. For most purposes, the simple average is sufficient for comparing cohort performance.
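The difference on made-up numbers:

```python
# Month-1 retention for three cohorts of very different sizes (illustrative).
retention = [0.90, 0.70, 0.85]
sizes     = [1000, 50, 800]

simple   = sum(retention) / len(retention)
weighted = sum(r * s for r, s in zip(retention, sizes)) / sum(sizes)
print(f"simple:   {simple:.3f}")    # 0.817: every cohort counts equally
print(f"weighted: {weighted:.3f}")  # 0.873: the small weak cohort barely registers
```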
How do I know if my retention is 'good'?
Retention benchmarks vary dramatically by product type. Mobile apps might be happy with 25% Day 30 retention, while B2B SaaS expects 90%+ annual retention. Compare against: (1) your historical trends, (2) industry-specific benchmarks, and (3) your business model requirements (e.g., what retention do you need to be profitable?). This tool is for analysis, not for declaring retention 'good' or 'bad'.
Can I use this for daily, weekly, or yearly cohorts?
Yes! Use the granularity dropdown to select monthly, quarterly, yearly, or custom periods. Choose based on your business: daily retention makes sense for social apps, monthly for most SaaS, yearly for enterprise products with annual contracts. The key is consistency—pick a granularity and stick with it for meaningful comparisons.
What if my cohort sizes vary significantly?
Varying cohort sizes are normal (some months have more signups than others). However, very small cohorts (e.g., 10 users) will have noisy retention data—a single user churning moves the number by 10 percentage points. For small cohorts, focus on trends across multiple cohorts rather than on any individual cohort's performance.
Why isn't there a statistical significance test?
This tool is for descriptive cohort analysis, not statistical hypothesis testing. To determine whether one cohort is 'significantly' better than another, you'd need formal tests that account for sample sizes and multiple comparisons, which require additional assumptions. Use this tool to explore patterns, then validate important findings with proper statistical analysis.
Is this tool suitable for financial planning?
No. This is an educational tool for understanding cohort retention patterns. Financial planning requires more sophisticated models that account for revenue per user, expansion and contraction, discounting, and other factors. Past retention also does not guarantee future retention. Always consult financial professionals for business planning.
Related Tools
Basic Churn & Retention Calculator
Calculate period-level churn rate, retention rate, and net growth from your customer counts.
CAC, LTV & LTV/CAC Calculator
Estimate customer acquisition cost, lifetime value, and unit economics for your business.
Conversion Funnel Drop-Off Analyzer
Analyze where users drop off in your conversion funnel and identify optimization opportunities.
A/B Test Significance Calculator
Determine if your A/B test results are statistically significant and calculate lift with confidence intervals.
Subscription Cohort Revenue Decay
Visualize how subscription revenue decays across cohorts over time and project future revenue.
Franchise Location Profitability Model
Model profitability for franchise or branch locations with revenue and cost projections.