
How can financial services companies stop bleeding customers before they realize they're unhappy?

Building ML-powered early warning systems that predict churn before traditional signals appear

Customer acquisition costs 5-25x more than retention in financial services. Yet most companies only notice churn when it's too late—when the customer has already mentally checked out and is just waiting for a better offer.

By the time someone calls to close their account, you lost them weeks or months earlier. The question isn't "how do we save them now?" It's "how did we miss the signals before they even started looking?"

The Real Problem (What Most People Miss)

Everyone thinks churn prediction is about catching people who are about to leave. Wrong.

It's about identifying customers who are quietly unhappy but haven't made a decision yet. There's a critical window—usually 30-90 days—where dissatisfaction is building but they're still saveable.

Most churn models look at the wrong signals. They track things like "days since last login" or "customer service calls." But those are lagging indicators. You're looking at symptoms, not causes.

My Approach

Step 1: Map the "dissatisfaction journey"

  • Interview churned customers: When did they first feel unhappy? What triggered it?
  • Find the inflection point between "mildly annoyed" and "actively shopping"

Why? Because you can't predict churn if you don't understand what causes it. Most companies skip this step and jump straight to modeling. That's backwards.

Step 2: Build behavioral cohorts, not demographic segments

  • Group customers by behavior patterns, not age/income/location
  • Example cohorts: "Set-it-and-forget-it users," "Active optimizers," "Nervous checkers"

Why? A 65-year-old retiree and a 30-year-old professional might have identical behavioral patterns. Demographics tell you who they are. Behavior tells you what they need.
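
A minimal sketch of what this cohorting could look like, assuming a per-customer table of behavioral features already exists; the feature names and cohort labels are illustrative, not a prescription:

```python
# Sketch: cluster customers on behavior, not demographics.
# Feature names (login_freq, txn_count, support_calls) are illustrative.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

def build_cohorts(features: np.ndarray, k: int = 4, seed: int = 42) -> np.ndarray:
    """Return a cohort label per customer from behavioral features only."""
    scaled = StandardScaler().fit_transform(features)  # put features on one scale
    return KMeans(n_clusters=k, random_state=seed, n_init=10).fit_predict(scaled)

# Example: rows = customers, cols = [login_freq, txn_count, support_calls]
X = np.array([[30, 12, 0], [2, 1, 0], [25, 10, 1], [1, 0, 4], [28, 15, 0]])
print(build_cohorts(X, k=3))  # e.g. [0 1 0 2 0] -> "optimizers", "set-and-forget", "nervous"
```

Note that demographics never enter the feature matrix; the clusters fall out of usage alone.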

Step 3: Create a "happiness score" model

  • Train an ML model on leading indicators: feature usage changes, support sentiment, product adoption curves
  • Score every customer daily on a 0-100 scale
  • Flag when scores drop below cohort norms

Why? You need a single metric that captures "health" holistically. Raw behavioral data is too noisy. A happiness score makes it actionable.
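
Here's one way the scoring pipeline might look, using gradient boosting as the learner. The feature columns, the "stayed healthy" label, and the one-standard-deviation flag threshold are all assumptions for illustration:

```python
# Sketch: a 0-100 "happiness score" from leading indicators, flagged against
# cohort norms. Column names and the threshold are assumptions, not a spec.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

def fit_happiness_model(X_train: pd.DataFrame, stayed_healthy: pd.Series):
    """Train on leading indicators; label 1 = customer stayed engaged/healthy."""
    return GradientBoostingClassifier().fit(X_train, stayed_healthy)

def score_and_flag(model, X: pd.DataFrame, cohort: pd.Series, z: float = 1.0) -> pd.DataFrame:
    """Daily 0-100 score per customer, flagged when it drops below its cohort's norm."""
    out = pd.DataFrame({"cohort": cohort.values}, index=X.index)
    out["happiness"] = model.predict_proba(X)[:, 1] * 100  # P(healthy) rescaled to 0-100
    norm = out.groupby("cohort")["happiness"].transform(lambda s: s.mean() - z * s.std())
    out["at_risk"] = out["happiness"] < norm  # flag relative to cohort, not globally
    return out
```

Flagging against cohort norms matters: a score of 60 might be alarming for an "active optimizer" and perfectly normal for a "set-it-and-forget-it" user.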

Step 4: Build intervention playbooks by cohort

  • Different cohorts need different interventions
  • "Active optimizers" want better tools. "Nervous checkers" want reassurance.
  • Create automated + human touchpoints triggered by score drops

Why? Generic "we miss you" emails don't work. Personalized interventions based on actual behavior do.
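
A sketch of how the routing could work; the cohort names echo the examples above, and the trigger thresholds and actions are placeholders you'd tune per business:

```python
# Sketch: route score drops to cohort-specific playbooks. Thresholds and
# actions below are illustrative placeholders.
PLAYBOOKS = {
    "active_optimizer": {"trigger_drop": 10, "actions": ["email: new tools beta", "in-app: feature tour"]},
    "nervous_checker":  {"trigger_drop": 5,  "actions": ["email: reassurance", "call: human advisor"]},
    "set_and_forget":   {"trigger_drop": 15, "actions": ["email: annual review offer"]},
}

def next_touchpoints(cohort: str, score_drop: float) -> list[str]:
    """Return the automated + human actions a score drop should trigger."""
    playbook = PLAYBOOKS.get(cohort)
    if playbook and score_drop >= playbook["trigger_drop"]:
        return playbook["actions"]
    return []

print(next_touchpoints("nervous_checker", score_drop=7))
# ['email: reassurance', 'call: human advisor']
```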

Step 5: Measure what matters—prevented churn, not predicted churn

  • Track intervention success rates by cohort
  • Calculate ROI: Cost of intervention vs. customer lifetime value saved
  • Feed learnings back into the model continuously

Why? A 95% accurate churn prediction model is worthless if you can't save the customers. Success is measured in retention, not accuracy.
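
The ROI arithmetic itself is simple to pin down; the dollar figures below are placeholders:

```python
# Sketch: ROI per cohort = LTV saved vs. intervention spend. All numbers
# are illustrative.
def intervention_roi(saved_customers: int, avg_ltv: float,
                     contacted: int, cost_per_contact: float) -> float:
    """ROI = (LTV retained - spend) / spend."""
    spend = contacted * cost_per_contact
    return (saved_customers * avg_ltv - spend) / spend

# e.g. 40 saves at $3,000 LTV from 500 contacts at $25 each
print(f"{intervention_roi(40, 3_000, 500, 25):.1f}x")  # 8.6x
```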

The Contrarian Insight

Most churn prediction models are backwards. They predict who will leave—but by then it's too late.

Better approach: Predict who's quietly unhappy but hasn't decided yet. That's your window of opportunity.

Think of it like healthcare. You don't want to diagnose cancer after it's metastasized. You want to catch precancerous cells when they're still treatable. Same with customer churn—catch dissatisfaction early, before it becomes intent to leave.

Next Steps (First 90 Days)

Month 1: Discovery & Data Audit

  • Interview 20 churned customers to map the dissatisfaction journey
  • Audit existing data: What behavioral signals do we already capture?
  • Identify data gaps: What signals are we missing?
  • Quick win: Implement an NPS survey at key lifecycle moments to baseline satisfaction (standard calculation sketched below)
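
The NPS baseline itself is standard arithmetic:

```python
# Sketch: the standard NPS calculation for the lifecycle-survey baseline.
def nps(scores: list[int]) -> float:
    """NPS = % promoters (9-10) minus % detractors (0-6), on a -100..100 scale."""
    promoters = sum(s >= 9 for s in scores)
    detractors = sum(s <= 6 for s in scores)
    return 100 * (promoters - detractors) / len(scores)

print(nps([10, 9, 8, 7, 6, 3, 10]))  # 3 promoters, 2 detractors -> ~14.3
```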

Month 2: Build Baseline Model

  • Create behavioral cohorts using clustering algorithms (K-means or hierarchical)
  • Build initial happiness score model using random forest or gradient boosting
  • Test: Can the model identify churned customers 60 days before they left? Target: catch 70%+ of them (see the backtest sketch after this list)
  • Design 3 intervention playbooks for highest-risk cohorts
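
A sketch of that backtest, assuming you can snapshot each customer's features as they were 60 days before their churn or retention date; the column names are illustrative:

```python
# Sketch: backtest whether the model flags churners 60 days early.
# Assumes a labeled frame of 60-days-prior feature snapshots.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import recall_score, precision_score

def backtest_60_day(df: pd.DataFrame, feature_cols: list[str]) -> dict:
    X, y = df[feature_cols], df["churned_within_60d"]
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)
    preds = GradientBoostingClassifier().fit(X_tr, y_tr).predict(X_te)
    return {
        "recall": recall_score(y_te, preds),       # % of real churners caught 60 days out
        "precision": precision_score(y_te, preds), # % of flags that were real
    }
```

Recall is the headline number here; precision tells you how much intervention budget the false alarms will burn.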

Month 3: Pilot & Refine

  • Launch pilot with 1,000 at-risk customers across all cohorts
  • A/B test: Intervention group vs. control group
  • Measure: Retention rate, engagement lift, NPS improvement (significance check sketched below)
  • Iterate on playbooks based on what's working
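
One way to read the pilot result is a standard two-proportion z-test; the counts below are placeholders:

```python
# Sketch: compare retention between intervention and control arms.
# statsmodels' proportions_ztest is a standard choice; counts are placeholders.
from statsmodels.stats.proportion import proportions_ztest

retained = [462, 431]    # intervention group, control group
group_sizes = [500, 500]
stat, p_value = proportions_ztest(retained, group_sizes, alternative="larger")

lift = retained[0] / group_sizes[0] - retained[1] / group_sizes[1]
print(f"retention lift: {lift:.1%}, p={p_value:.3f}")  # e.g. lift 6.2%
```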

Key Metrics I'd Track:

  • Happiness score distribution by cohort (baseline vs. current)
  • % of customers flagged as "at risk" (should be 10-20%)
  • Intervention success rate (retained vs. churned after intervention)
  • ROI per cohort (cost of intervention vs. LTV saved)
  • Time-to-intervention (from score drop to first touchpoint—target: <48 hours)
  • False positive rate (customers flagged but not actually at risk)
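
A sketch of how a few of these could be computed from pilot outcomes; the boolean column names are assumptions:

```python
# Sketch: tracking metrics from per-customer pilot outcomes.
# Columns (flagged, intervened, retained, in_control) are assumed booleans.
import pandas as pd

def tracking_metrics(df: pd.DataFrame) -> dict:
    flagged = df["flagged"]
    control_flags = df[flagged & df["in_control"]]
    return {
        "flag_rate": flagged.mean(),  # target: 10-20% of the base
        "intervention_success_rate": df.loc[df["intervened"], "retained"].mean(),
        # control-arm customers who were flagged but stayed anyway approximate
        # the false positive rate (no intervention masked the outcome)
        "false_positive_rate": control_flags["retained"].mean(),
    }
```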

Why This Approach Works

At USAA, we built an ML churn model that saved $12M in the first year. But here's what most people don't know: The first version of the model had 92% accuracy but only saved $2M.

Why? Because we were predicting churn correctly, but our interventions weren't working. High accuracy, low impact.

The breakthrough came when we shifted focus from "who will leave" to "who's quietly unhappy." We started catching people earlier—60-90 days before they'd typically churn—when our interventions could actually make a difference.

That's the lesson: Prediction accuracy doesn't matter if you can't change the outcome. Build your model around the interventions you can actually deliver.

Want to Discuss This Approach?

Let's talk about how predictive retention could work for your business.
