Why do I need to take care of churn?

All subscription models share the same goals: to improve both Customer Lifetime Value (CLTV) and Monthly Recurring Revenue (MRR). And while acquiring new customers leads to recurring revenue, the key challenges today’s OTT providers face are customer retention and churn rate management.

In OTT, acquiring a new customer is 7 times more expensive than retaining an existing one, making retention essential for any SVOD company that wants to be cost-efficient and scalable.

Loyal customers and long-lasting relations are vital to the ROI of a successful SVOD service and there are many variables to take into account to avoid churn.

For example, there’s a strong correlation between frequent use and low churn risk: churn risk is more than 90 percent higher for occasional viewers than for daily users. Market leader Netflix excels in high-frequency usage and enjoys significantly lower churn risk among its subscribers. Other factors to consider are the number of devices used, how long the user has been subscribed, service UX and even involuntary churn due to payment processing errors.

All of these factors can have a high impact on churn. The ability to track them accurately can substantially increase a video service’s ROI and, combined with advanced data-driven retention and acquisition solutions, help you stand out from a growing pack of competitors.
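The usage-frequency effect described above can be illustrated with a minimal sketch: compare churn rates between daily and occasional viewers. The data below is invented for the example; only the comparison logic is the point.

```python
# Illustrative check of the usage-frequency/churn correlation:
# each row is (viewer segment, did_churn). Data is made up.
viewers = [
    ("daily", False), ("daily", False), ("daily", True), ("daily", False),
    ("occasional", True), ("occasional", True), ("occasional", False),
]

def churn_rate(segment):
    """Fraction of viewers in a segment that churned."""
    outcomes = [churned for seg, churned in viewers if seg == segment]
    return sum(outcomes) / len(outcomes)

# Occasional viewers churn far more often than daily viewers.
print(churn_rate("occasional") > churn_rate("daily"))  # -> True
```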


What do I get with JUMP RETENTION?

Jump can help you understand your user base’s consumption habits, profile your users to predict whether they will use your service in the coming months, get a handle on the variables that drive usage of your SVOD service, create actionable insights to launch personalised marketing and promotional campaigns directed at each specific user segment, and track the performance of each activity.

What’s more, you’ll have all of the information you need in a user-friendly dashboard that unifies the most important KPIs to reduce SVOD churn by as much as 30%.

  • Profile your users: find the variables and consumption habits that make them more likely to use your service
  • Enjoy a user-friendly interface with KPIs that tell you:
    • How many customers have churned
    • How many customers are at risk of future churn
    • Why they are at risk and what you can do to retain them
  • Create your own actionable insights: group your users into clusters of similar tastes and consumption habits, to send them personalized promotions and marketing campaigns that match their specific needs.
  • Make an impact by using AI/Big Data to build Data-Driven retention campaigns (i.e. select users with >70% churn probability that use an iPhone as their main device and have watched less than 5 hours of content in the past month) and send them push notifications, emails or any other action directly from your marketing tools.
  • Track the profitability of your decisions: immediately access the results from all the campaigns you launched on a single interface.
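The segment selection described above (users with >70% churn probability, iPhone as main device, fewer than 5 hours watched in the past month) can be sketched as a simple filter. Field names and the sample data are illustrative placeholders, not Jump’s actual API.

```python
# Hypothetical sketch: selecting a high-risk segment for a retention
# campaign. Field names and thresholds are illustrative.
users = [
    {"id": 1, "churn_prob": 0.82, "main_device": "iPhone", "hours_last_month": 3.5},
    {"id": 2, "churn_prob": 0.40, "main_device": "SmartTV", "hours_last_month": 22.0},
    {"id": 3, "churn_prob": 0.75, "main_device": "iPhone", "hours_last_month": 8.0},
]

def at_risk_iphone_segment(users):
    """Users with >70% churn probability, iPhone as main device,
    and fewer than 5 hours watched in the past month."""
    return [
        u for u in users
        if u["churn_prob"] > 0.70
        and u["main_device"] == "iPhone"
        and u["hours_last_month"] < 5
    ]

segment = at_risk_iphone_segment(users)
print([u["id"] for u in segment])  # -> [1]
```

In practice the resulting segment would be pushed to a marketing tool to trigger notifications or emails, as the bullet above describes.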

How much money do I save?

In order to determine the real impact of our churn solution on ROI, we performed a simulation with three different scenarios: (1) an actual video service provider’s real situation with a 5% churn rate, (2) a hypothetical situation where we helped decrease churn to 4%, and (3) a third scenario where churn is reduced by a further point, to 3%. Based on 100,000 subscribers, a subscription package price of 7.99€ and a 10% subscriber growth rate, we calculated the first- and second-year revenues for these scenarios and found the following results:

[Figure: churn management example]


This simulation shows an increase in earnings of 771,304.12 € in the first year from simply reducing churn by one point, and of 1,960,514.41 € during the second year. Reducing churn by two points yields an increase in earnings of 1,046,373.57 € in the first year and 2,720,160.65 € in the second year.
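The mechanics of such a simulation can be sketched in a few lines. The assumptions below (monthly churn, flat monthly acquisition derived from the annual growth rate) are illustrative; the original simulation’s exact rules are not specified, so this sketch will not reproduce the figures above, only the direction of the effect.

```python
def yearly_revenue(subs0, price, monthly_churn, annual_growth, years=2):
    """Toy month-by-month simulation: each month some subscribers churn
    and new ones arrive. Assumptions are illustrative, not Jump's model."""
    subs = subs0
    monthly_new = subs0 * annual_growth / 12  # flat acquisition per month
    revenue = []
    for _ in range(years):
        year_rev = 0.0
        for _ in range(12):
            year_rev += subs * price
            subs = subs * (1 - monthly_churn) + monthly_new
        revenue.append(round(year_rev, 2))
    return revenue

base = yearly_revenue(100_000, 7.99, 0.05, 0.10)
better = yearly_revenue(100_000, 7.99, 0.04, 0.10)
# Lower churn yields higher revenue in both years, and the gap widens
# in year two as the retained base compounds.
```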

How it works

Churn modelization starts with the ingestion of the client’s data into Jump’s system. This process involves exhaustive data mapping from the client’s formats (multiple files and storages, multiple formats, custom logic).

Once this stage is finished, our client’s user behaviour data will all have been incorporated into our Data Lake, under our own Data Model. This Data Model is standard for all Jump clients.

Once the Data Model is ready, a streamlined workflow creates the features used to feed Jump’s Churn model and trains one or more models using state-of-the-art ML techniques.

This model can be used to generate predictions over equally modelized data, which can then be validated to generate performance metrics or used to draw prediction results on Jump’s online platform.
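The stages above (ingest raw events, build features under a standard data model, train, predict, validate) can be sketched end to end. All names and the trivial threshold "model" are hypothetical stand-ins; Jump’s internal APIs and ML techniques differ.

```python
# Hypothetical sketch of the pipeline: events -> features -> train ->
# predict. Names and the toy model are illustrative only.

def build_features(events):
    """Aggregate raw viewing events into per-user features."""
    feats = {}
    for user_id, hours, device in events:
        f = feats.setdefault(user_id, {"hours": 0.0, "devices": set()})
        f["hours"] += hours
        f["devices"].add(device)
    return feats

def train(feats, labels):
    """Trivial stand-in for model training: learn an hours threshold
    separating churned from retained users."""
    churned = [feats[u]["hours"] for u in labels if labels[u]]
    retained = [feats[u]["hours"] for u in labels if not labels[u]]
    return (max(churned) + min(retained)) / 2  # decision threshold

def predict(threshold, feats):
    """Flag users below the viewing-hours threshold as churn risks."""
    return {u: f["hours"] < threshold for u, f in feats.items()}

events = [("a", 1.0, "iPhone"), ("b", 30.0, "TV"), ("b", 12.0, "iPhone")]
feats = build_features(events)
model = train(feats, {"a": True, "b": False})
print(predict(model, feats))  # -> {'a': True, 'b': False}
```

The key design point, as the text notes, is that prediction runs over data shaped by the same standard data model used for training.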

[Figure: how Jump Retention works — overview of the workflow stages]


Build vs. Buy

Building a proof-of-concept churn model is easy. Building a robust product on a productified churn model is genuinely complex.

When starting a new ML project from the ground up, data tends to be static. PoCs are generally completed on a relatively short time scale, raising expectations quickly. But when the time comes to deploy the model into a real production environment, things tend to get grim just as quickly.

The complexity of a production model can actually be explained through a series of simple concepts:

New data creates edge cases that require very robust coding and data modeling to handle correctly. Static data modeling is satisfied by static decision making; dynamic production environments require the model and the ETL process to adapt intelligently to new realities.

PoC results are extensively analyzed by the development team, but a productified model requires outputs that are easy to understand for a non-technical audience. Dynamic production environments generate changing conclusions, so output explanations must be flexible enough to capture the complexity of a changing reality while remaining synthesized and straightforward.

Furthermore, it should be noted that the first iteration of an ML product delivers acceptable value with relatively low effort; it is when improving on that value that the value-effort curve gets steeper.

The only way to continue innovating after the early stages of the product is to build an agile, flexible and powerful framework that allows the development team to test ideas quickly, eliminate as many repetitive and maintenance tasks as possible, and keep track of results over time.

Building such a system is no small feat, but it marks the difference between more modest projects and state-of-the-art ones.

Working with Jump eliminates the need to build this kind of team and code infrastructure. Thanks to our experience working with clients worldwide and the use of their aggregated data to train our ML models, HAMLET, our ML framework, outperforms in-house initiatives in terms of both performance and time to market.

Build vs. Buy a churn model system

  • Time to market: it can take on average up to two years to build a churn model anywhere near as powerful as the one we offer. In those same two years the market, and consequently our algorithms, will have evolved significantly, making it even harder for a company building and training models in-house to catch up; we have been through this same process with many different clients from different regions and with much more data to test our models. Jump can implement its churn solution for your company in just one month and continuously optimize it along the way.
  • Accuracy: HAMLET, our data framework, gives us the capability to process much more data, filter the data we really need to feed our algorithms, and train the ML models with aggregated data from more than 25 clients all over the world. On average it takes a company 2 to 3 years to develop their own models and reach the results we provide:
      • Accuracy: minimum 75% (the more data, the better results the model provides)
      • Precision: minimum 50%
      • Recall: minimum 40%
  • Scalability: SVoD companies’ data teams are often limited to developing ML algorithms with the data they gather from their own customers, and running those models carries high costs. At Jump, we can prefilter all the information we gather from our clients with our data framework to reduce time and costs and obtain more accurate results. We can also compare results against data gathered from other clients in the same regions to test the efficiency of our models, and continuously run different models to improve them.
  • Experience: we have been developing and testing churn models for TV and OTT in the video industry for more than 4 years, and are able to adapt our models to the specific needs of any kind of company.
  • Aggregated data: we can greatly improve models with aggregated data because we know what’s going on in other countries and regions.
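The quality metrics quoted above (accuracy, precision, recall) follow their standard confusion-matrix definitions. The sketch below uses made-up counts, chosen only so the results land above the quoted minima; it is not Jump's evaluation code.

```python
# Standard confusion-matrix metrics: tp/fp/fn/tn counts are invented
# for illustration (500 users, 80 of whom actually churned).
def metrics(tp, fp, fn, tn):
    accuracy = (tp + tn) / (tp + fp + fn + tn)   # correct / total
    precision = tp / (tp + fp)                   # flagged churners that churned
    recall = tp / (tp + fn)                      # churners the model caught
    return accuracy, precision, recall

acc, prec, rec = metrics(tp=40, fp=30, fn=40, tn=390)
# acc = 430/500 = 0.86, prec = 40/70 ~ 0.571, rec = 40/80 = 0.5
# -> above the quoted minima of 75%, 50% and 40% respectively.
```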

Advantages of using a framework:

An ML workflow is much more complex than a traditional ETL process. Many decisions must be configured per client, and some of the correct values cannot be identified beforehand, only after laboratory tests are run. To work swiftly, this process must be as automated as possible.

  • Thanks to HAMLET, our Data Science teams are able to run these kinds of tests automatically and intelligently for every new version of the code, saving countless hours and greatly accelerating development.
  • Implementing a state-of-the-art ML optimisation engine, HAMLET automatically identifies segments of the data that can be modelled separately in order to provide a higher level of insight and greater performance.
  • Automatic feature selection plus multi-model competition, selection and tuning for each of those models reduces development time and provides more elaborate results than a simple model training and evaluation setup can offer.
  • Our proven metrics validation methodology allows us to present results with the confidence that the final production performance of our models will accurately match the measured performance.
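The multi-model competition mentioned above can be sketched as a simple select-the-best loop: train several candidate models, score each on validation data, and keep the winner. Model names and scores below are placeholders, not HAMLET internals.

```python
# Illustrative sketch of multi-model competition: score each candidate
# on held-out data and keep the best. Candidates are stubbed here by
# their (made-up) validation scores.
def compete(candidates, score):
    """candidates: dict name -> model; score: model -> validation metric.
    Returns the best model's name and its score."""
    scored = {name: score(model) for name, model in candidates.items()}
    best = max(scored, key=scored.get)
    return best, scored[best]

candidates = {"logistic": 0.74, "gbm": 0.81, "rules": 0.69}
best, best_score = compete(candidates, score=lambda m: m)
print(best, best_score)  # -> gbm 0.81
```

In a real framework the `score` callable would retrain and cross-validate each candidate; the competition logic itself stays this simple.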