AI Solution & Deterministic Intervention Layer
1. Solution Overview
The proposed solution introduces a two-layer architecture:
- A probabilistic churn prediction model
- A deterministic intervention engine
The model estimates churn risk.
The product layer controls how that risk translates into action.
This separation ensures predictive intelligence without uncontrolled incentive leakage.
1. Predictive Churn Modeling Layer
The system operates on weekly batch scoring at the driver level.
Objective: Predict the probability of churn within the next 30 days.
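The 30-day objective implies a concrete labelling rule. A minimal sketch, assuming churn is defined as no completed trip in the 30 days after the scoring date (the exact churn definition is an assumption, not specified here):

```python
from datetime import date, timedelta

def churned_within_30d(trip_dates: list[date], score_date: date) -> int:
    """Return 1 if the driver completes no trip in the 30 days after
    score_date, else 0. The inactivity-based definition is an assumption."""
    window_end = score_date + timedelta(days=30)
    active = any(score_date < t <= window_end for t in trip_dates)
    return 0 if active else 1
```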
Model Type: Gradient-boosted trees or an ensemble-based classifier, chosen for:
- Nonlinear interaction handling
- Strong tabular performance
- Feature interpretability
Core Feature Groups
Earnings Dynamics
- 4-week rolling earnings mean
- 4-week rolling earnings variance
- Week-over-week earnings delta
- Surge participation ratio
Behavioural Engagement
- Trip acceptance rate trend
- Idle time distribution
- Session frequency
- Online hours decline rate
Incentive Sensitivity
- Incentive redemption ratio
- Earnings dependency on bonuses
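As a sketch of how the earnings-dynamics features might be computed, assuming a weekly per-driver earnings table in pandas (column names and values are illustrative):

```python
import pandas as pd

# Illustrative weekly earnings for one driver; real data spans many drivers.
df = pd.DataFrame({
    "driver_id": [1] * 6,
    "week": list(range(1, 7)),
    "earnings": [500.0, 520.0, 480.0, 300.0, 250.0, 200.0],
})

g = df.groupby("driver_id")["earnings"]
df["earn_mean_4w"] = g.transform(lambda s: s.rolling(4, min_periods=1).mean())  # 4-week rolling mean
df["earn_var_4w"] = g.transform(lambda s: s.rolling(4, min_periods=1).var())    # 4-week rolling variance
df["earn_delta_wow"] = g.diff()                                                 # week-over-week delta
```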
The model outputs P(churn | features) ∈ [0, 1].
This is a probabilistic signal, not an action.
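A minimal scoring sketch using scikit-learn's `GradientBoostingClassifier` on synthetic data; the real feature matrix and label pipeline are assumptions:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))  # stand-in for earnings/engagement/incentive features
# Synthetic label: drivers with declining earnings (feature 0) churn more often.
y = (X[:, 0] + rng.normal(scale=0.5, size=200) < 0).astype(int)

model = GradientBoostingClassifier(random_state=0).fit(X, y)
p_churn = model.predict_proba(X)[:, 1]  # P(churn | features) in [0, 1]
```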
2. Deterministic Intervention Engine
Raw probabilities are not directly executed.
A deterministic policy layer translates model outputs into controlled actions.
Risk Tier Segmentation
- High Risk: probability > 0.75
- Medium Risk: 0.50 – 0.75
- Low Risk: < 0.50
Each tier maps to predefined interventions.
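The thresholds above can be encoded as a pure function, which keeps the policy deterministic and auditable:

```python
def risk_tier(p: float) -> str:
    """Map a churn probability to a risk tier using the thresholds above:
    high if p > 0.75, medium if 0.50 <= p <= 0.75, low otherwise."""
    if p > 0.75:
        return "high"
    if p >= 0.50:
        return "medium"
    return "low"
```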
Intervention Logic
High Risk
- Trigger targeted retention incentive
- Personalised earnings boost or guaranteed minimum
Medium Risk
- Optimise ride allocation priority
- Reduce idle exposure
- Monitor behavioural recovery
Low Risk
- No intervention
- Passive monitoring only
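A sketch of the tier-to-intervention mapping; the action names are illustrative placeholders, not a real API:

```python
# Deterministic tier -> intervention table (action names are illustrative).
INTERVENTIONS = {
    "high": ["targeted_retention_incentive", "guaranteed_minimum_earnings"],
    "medium": ["priority_ride_allocation", "reduce_idle_exposure", "monitor_recovery"],
    "low": [],  # passive monitoring only
}

def interventions_for(p: float) -> list[str]:
    """Translate a churn probability into its predefined actions,
    using the same thresholds as the tier segmentation."""
    if p > 0.75:
        tier = "high"
    elif p >= 0.50:
        tier = "medium"
    else:
        tier = "low"
    return INTERVENTIONS[tier]
```

Keeping the mapping in a static table (rather than branching logic scattered through the codebase) makes the policy easy to review and change without touching the model.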
3. Budget Guardrails
To prevent incentive overspend:
- Daily retention budget cap
- Cost-per-saved-driver threshold
- ROI validation check
If the daily budget is exhausted, interventions are ranked by churn probability × driver LTV, so the highest-risk, highest-value drivers are served first. This ensures economic efficiency.
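The budget-capped ranking can be sketched as follows; the field names and per-driver cost model are assumptions:

```python
def select_for_intervention(drivers: list[dict], daily_budget: float) -> list[str]:
    """Rank drivers by churn probability x LTV and greedily spend
    until the daily retention budget cap is reached."""
    ranked = sorted(drivers, key=lambda d: d["p_churn"] * d["ltv"], reverse=True)
    selected, spent = [], 0.0
    for d in ranked:
        if spent + d["incentive_cost"] > daily_budget:
            continue  # this driver's incentive would exceed the cap
        selected.append(d["driver_id"])
        spent += d["incentive_cost"]
    return selected

# Illustrative inputs: C ranks first (0.6 * 2000 = 1200), then A (900), then B (400).
drivers = [
    {"driver_id": "A", "p_churn": 0.9, "ltv": 1000, "incentive_cost": 50},
    {"driver_id": "B", "p_churn": 0.8, "ltv": 500, "incentive_cost": 40},
    {"driver_id": "C", "p_churn": 0.6, "ltv": 2000, "incentive_cost": 60},
]
```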
4. Monitoring & Feedback Loop
The system continuously tracks:
- Actual churn vs predicted churn
- Intervention uplift rate
- Incentive cost per retained driver
- Model drift indicators
Retraining cadence: monthly
Feature drift monitoring: weekly
This creates a closed-loop learning system.
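Two of the tracked metrics can be sketched directly, assuming a treated group and a holdout (control) group of drivers; the group sizes are illustrative:

```python
def uplift_rate(retained_treated: int, n_treated: int,
                retained_control: int, n_control: int) -> float:
    """Intervention uplift: retention rate of treated drivers
    minus retention rate of an untouched holdout group."""
    return retained_treated / n_treated - retained_control / n_control

def cost_per_retained(total_incentive_spend: float,
                      incremental_retained: float) -> float:
    """Incentive cost per incrementally retained driver."""
    return total_incentive_spend / incremental_retained
```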
Why This Matters
The model provides probabilistic foresight; the deterministic layer enforces economic discipline. This converts churn mitigation from broad, reactive incentives into precise, threshold-controlled retention intelligence.
The outcome is measurable:
- Reduced churn
- Improved driver LTV
- Stabilised supply
- Controlled incentive spend
