A/B testing: Winback strategy
When creating an actionable A/B test, I set out to answer the following questions:
What do I want to test?
Why do I want to test it?
How much time is it going to take me to run this test?
What am I going to do with the results? (the most important question, in my opinion)
With the above answered, I can move forward confidently with a solid testing strategy, knowing that I'm not wasting time.
Introducing the winback strategy A/B test.
What do I want to test?
I want to test two new themes for one of our automations. Both would follow the same sending frequency as our control, which is currently a generic series aimed at winning subscribers back to a paid subscription.
The first variant I want to pit against our control is a transactional-focused reminder series: no fluff, just clear and direct copy laying out how much time the subscriber has left on their subscription and what happens when that time runs out. In the outline above, this is labeled "transactional".
The second variant I want to pit against the control and the first variant is a personalized automation from "Bob" (a name chosen at random for this write-up), taking the approach of offering assistance, answers to questions, and the option to downgrade: overall, a helping hand. In the example above, this is labeled "hand holding".
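To make the setup concrete, here is a minimal sketch of how the three branches could be laid out in code. The variant names mirror this write-up, and the send offsets follow the cadence described below (7 days, 48 hours, and day-of); this is illustrative only, not our actual automation config.

```python
# Illustrative only: a hypothetical layout of the winback test branches.
# Offsets are hours relative to the subscription end date (0 = day it ends).
# All three branches share the same cadence; only the messaging theme changes.
WINBACK_TEST = {
    "control": {
        "theme": "generic winback series",
        "send_offsets_hours": [-168, -48, 0],  # 7 days out, 48 hours out, day-of
    },
    "variant_1_transactional": {
        "theme": "direct reminders: time remaining and what happens at expiry",
        "send_offsets_hours": [-168, -48, 0],
    },
    "variant_2_hand_holding": {
        "theme": "personalized help from 'Bob': questions, downgrade options",
        "send_offsets_hours": [-168, -48, 0],
    },
}
```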
Why do I want to test it?
To move the needle toward a higher conversion rate of winning back subscribers who have canceled, before the final date they lose access to their account.
How much time is it going to take me to run this test?
The current control includes multiple emails, reminding subscribers at 7 days, 48 hours, and on the day their subscription ends.
To replicate the same frequency, I will need to create 3 emails for Variant 1 and 3 emails for Variant 2. I'd ballpark this at a couple of weeks' worth of content creation, building, and QA.
In terms of test duration, I want to run this test for at least a month in order to get a healthy sample size of users to measure results from, without drawing inferences that could turn out to be a fluke.
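That "healthy sample size" can be sanity-checked with the standard two-proportion formula. A minimal sketch, using hypothetical rates (a 5% control winback rate and a hoped-for lift to 7%); the real inputs would come from our own baseline:

```python
import math
from statistics import NormalDist

def min_sample_per_arm(p_control: float, p_variant: float,
                       alpha: float = 0.05, power: float = 0.80) -> int:
    """Classic two-proportion sample-size estimate, per test arm."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # two-sided significance threshold
    z_power = z.inv_cdf(power)          # desired statistical power
    variance = p_control * (1 - p_control) + p_variant * (1 - p_variant)
    n = ((z_alpha + z_power) ** 2 * variance) / (p_control - p_variant) ** 2
    return math.ceil(n)

# Hypothetical baseline and target, for illustration only
print(min_sample_per_arm(0.05, 0.07))  # -> 2210 canceling subscribers per arm
```

If the monthly volume of cancellations falls short of the per-arm number, the test simply needs to run longer, which is the thinking behind the "at least a month" floor.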
What am I going to do with the results?
The results will help shape how we run winback strategies in the future. They can help us determine how we speak to subscribers who have signaled they may want to leave the platform, and could therefore shape both marketing copy and in-product copy shown before users take the action of canceling.
My hypothesis
I am torn on this one, but I believe the hand-holding approach will result in the highest conversion because of its human aspect; people might feel a sense of guilt if they don't interact with the campaigns. My assumption is that the more people engage with the campaigns, the more will potentially reconvert to a paid subscription.
The results
After running this test for over a month, the conversion rates for both Variant 1 and Variant 2 were considerably higher than the control's. That said, the two variants were incredibly close to each other.
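"Incredibly close" is worth quantifying. A two-proportion z-test is one quick way to check whether a gap like that is real or just noise; this sketch uses made-up counts purely for illustration, not the actual campaign data:

```python
import math
from statistics import NormalDist

def two_prop_p_value(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided p-value for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under "no difference"
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Made-up counts: two variants converting at roughly 7.2% vs 6.9%
print(round(two_prop_p_value(165, 2300, 158, 2300), 2))  # -> 0.69, not significant
```

A p-value that large means the conversion gap alone can't pick a winner, which is exactly why a secondary engagement metric ended up breaking the tie.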
What differentiated the winner was engagement. With a session start rate 7% higher than the control and the other variant, our transactional variant (Variant 1) stole the show!
Being direct about what users would lose access to not only converted users back to a subscription, but also drove them into the app to close out any loose ends should they decide not to continue.