For episode 23 of the Subscription League Podcast, we had the pleasure of discussing tips and frameworks with Sviat Hnizdovskyi, a behavioral science expert and the founder of the product growth agency Applica. Sviat shared valuable insights on how companies can improve their app subscriptions, LTV (customer lifetime value), and monetization.
Listen to learn more about his tips for optimizing subscriptions, Applica's experiment history review framework, Sviat's passion for behavioral science, and other key takeaways from the episode.
For noteworthy quotes and key takeaways from the episode, read the article - Framework for subscription optimization: tips from behavioral science expert Sviat Hnizdovskyi (Applica)
Episode Topics at a Glance
- Sviat's passion for behavioural science
- Strategy to increase ARPU
- Tips for subscription optimization
- Experiment categories
- Applica's experiment history review framework
- Prerequisites to run Sviat's framework
- Types of companies Sviat works with to build this framework
More about Sviat
Sviat is the founder of Applica, a London-based product growth agency. He has years of experience in behavioral economics, social and cognitive psychology, and data-driven digital product analytics. During his time at Applica, Sviat has helped companies like Loona, FitMind, Fabulous, Drops, and Freeletics significantly increase their app ARR. Recently, he founded the Open Minds Institute, an international think & do tank that cultivates open-mindedness and behavioral change for peace and freedom.
Applica’s Experiment History Review Framework
00:37 - Sviat's passion for behavioural science
01:53 - Strategy to increase ARPU
03:57 - Making the transition from employee to entrepreneur
05:33 - Creating the steps of the framework
08:05 - Categories of experiments
11:50 - Recategorizing the experiments
12:31 - Next steps after categorizing
13:42 - Does this framework differ for different countries?
16:29 - How to start experimenting after getting the data
19:08 - Tracking multiple metrics with the same framework
20:33 - How do you deal with long-running tests?
22:33 - Minimum setup required to run Sviat's framework
24:58 - Types of companies Sviat works with to build this framework
26:36 - An experiment that surprised Sviat
[00:00:01.480] - Olivier Destrebecq
Welcome to the Subscription League, a podcast by Purchasely. Listen to what's working in subscription apps. In each episode, we invite leaders of the app industry who are mastering the subscription model for mobile apps. To learn more about subscriptions, head to subscriptionleague.com. Let's get started. Welcome to the show, everybody. Today I'm joined by my co-host Jeff, and Sviat Hnizdovskyi, Founder at Applica. Welcome to the show, Sviat.
[00:00:29.980] - Sviat Hnizdovskyi
Welcome. Thank you.
[00:00:32.350] - Olivier Destrebecq
One of the things that struck me when I was researching you for this interview is that you've been focused on behavioural science for a long time. I'm curious, where does that passion come from?
[00:00:44.320] - Sviat Hnizdovskyi
My academic background was mainly in the field of behavioural economics, and partially social and cognitive psychology. I've told that story many times. The biggest question was: how can I apply everything I learned, all the papers and effects and cognitive biases, to something practical? When I started working with digital products, and in particular with subscription apps, I proactively tried to implement whatever I had learned.
[00:01:17.920] - Sviat Hnizdovskyi
I started looking through the lens of behavioural science at funnel issues and growth processes, and that helps a lot, I would say. That doesn't mean it's something that can be applied easily. You have to actually think carefully. A lot of things are just too generic, and an explanation of social behaviour may not be applicable to some specific, narrow use case. That's basically it in a few words. At Applica, we are just a small team of growth enthusiasts, product managers, and product analysts.
[00:01:53.200] - Olivier Destrebecq
Right now you're at Applica as its founder, but before that you were at BetterMe, where you applied behavioural insights to user activation. The team you were on raised engagement by 15% and increased day-one retention by 10%. You also grew ARPU by 203% and ARPPU by 36% for the Chinese market. Can you share some highlights of what you did to get those impressive numbers?
[00:02:22.510] - Sviat Hnizdovskyi
Oh yeah. First, the team was, and the guys who work there still are, really outstanding. It happened that we had different kinds of experience that worked well at the intersection. There were people who were very well trained in economics and were great with calculations and numbers. When it came to the proper design of pricing for specific geos like China, we did a very sophisticated analysis of what should work and what shouldn't.
[00:02:56.260] - Sviat Hnizdovskyi
Overall, that was a very early stage of my career, and I tried to apply some of the behavioural frameworks there. When we approached user activation, we used some of the well-known frameworks like Psych, but besides that, we were also trying to understand what stops users from sharing more data about, let's say, their fitness life and their fitness goals. What intimidates them? Maybe they are a little bit afraid of sharing the ambitious goals they want to achieve, or they think someone could see that later.
[00:03:30.940] - Sviat Hnizdovskyi
It was a combination of qualitative research, asking all of these questions, and quantitative research, digging into the data and understanding the different behavioural patterns.
[00:03:44.350] - Sviat Hnizdovskyi
Long story short, that was a very productive time for me. I had access to lots of data, a very good experimentation framework, and a good speed of testing as well, and we managed to achieve some outstanding results with the team.
[00:03:56.620] - Olivier Destrebecq
Awesome. After that you started Applica, where you use behavioural science to help digital products grow their revenue, conversion, and retention. I'm curious how the transition has gone from being an employee to being a founder and entrepreneur.
[00:04:11.290] - Sviat Hnizdovskyi
Actually, between those two stages of my life, I was also involved in some smaller but well-known projects and apps as a consultant and advisor on monetisation and activation. That made for a smoother transition. In that strategic advisor role, I saw, across not just one app or ecosystem of apps as with BetterMe, but rather 10 to 15 apps, what common problems growth teams are facing, where they are struggling, and what the plateaus of optimisation are, let's call it.
[00:04:47.650] - Sviat Hnizdovskyi
Eventually, that revealed some of these issues and led me to talking with like-minded product managers and growth managers. That's how I created Applica with three colleagues of mine who had somewhat similar experience and backgrounds in subscription apps. It started slow. We are still a small team. We carefully choose the clients we work with, and sometimes we just have to keep a client waiting for two or three months before we finish our current tasks. But it's part of the philosophy and my vision for the company that we really require more time and a deep dive into the data to give meaningful suggestions, instead of just throwing generic advice here and there.
[00:05:32.110] - Olivier Destrebecq
Okay. You've mentioned that you've worked with companies that have reached a plateau in their experimentation. At Applica, if I understand correctly, you created a framework to help those companies continue optimizing their subscriptions. Can you give us the steps of that framework, and we can dive into each step afterwards?
[00:05:52.570] - Sviat Hnizdovskyi
Yes. First and foremost, this framework has proven to be very effective for apps that have already performed around 100 or more experiments. Why so? I'll now go through some of the major steps, and then we can explore them in more detail.
[00:06:21.310] - Sviat Hnizdovskyi
Before you reach that number of experiments, good advice would be just to follow best practices and listen to what your users are saying. Unless you have that many experiments, you can't really start categorizing them into specific buckets and then performing more sophisticated analysis. I'll tell you in just a few minutes why that's important and how we do it.
[00:06:42.460] - Sviat Hnizdovskyi
The first step, basically, is to understand whether you are still at that level of limited experimentation capacity. If so, I've already given some specific advice in previous articles and podcasts; let me briefly mention a few points.
[00:07:00.190] - Sviat Hnizdovskyi
The biggest benefits when it comes to increasing your LTV and monetisation might lie within your overall paywall strategy: show your paywall more frequently, and understand the optimal subscription duration for your specific app. If it's health and fitness or anything related to education, it's mostly the choice between offering monthly and yearly subscriptions or just a yearly subscription. I won't go into too much specific detail here.
[00:07:26.470] - Sviat Hnizdovskyi
The second important thing, a low-hanging fruit that's easy engineering-wise, is to optimise the first impression of your app. That matters a lot: 80% plus of purchases happen within just the first 10-15 minutes of exploring the app. So it's very important that what you communicate from the very first screen is understandable, and that you show the benefits of the product instead of just listing features, etc. But once you reach this threshold of around 100 experiments related to monetisation, the interesting things are revealed. I think we can spend some time going through these steps now.
[00:08:04.270] - Olivier Destrebecq
Yeah, yeah. I guess the first step is taking all the experimentation you've done, figuring out where you are, and deciding whether you can move on. You said you have to categorise all those experiments. Can you tell us more about what kind of categories you want to create?
[00:08:18.280] - Sviat Hnizdovskyi
Yeah. What I would suggest is to open, let's say, a spreadsheet where you have all of your experiments listed, then go through your entire history of experiments and assign each one a category. Some examples of categories are subscription price, discounts, frequency of your paywalls, placement of your paywall, your onboarding survey length, or particular onboarding survey questions. Those are just some examples; I can provide a more exhaustive list of these categories.
[00:08:51.550] - Sviat Hnizdovskyi
It depends. If you've really done more than maybe 300 or 400 experiments, you can even break them down into subcategories, like number of screens, screen order, screen content, etc. Why are we doing this? We basically want to understand later, and I'll explain how we do that, which particular category is probably the most optimal one to target with the subsequent experiments you're about to plan. Maybe some categories are already very optimised and you don't really have to spend more time looking at those.
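To make the logging step concrete, here is a minimal sketch in Python of the experiment log Sviat describes. The experiment names and category labels are hypothetical examples, not data from the episode.

```python
# Minimal sketch of an experiment log with one category per test.
# All names and categories below are made-up illustrations.
from collections import Counter
from dataclasses import dataclass

@dataclass
class Experiment:
    name: str
    category: str           # e.g. "pricing", "paywall_placement", "survey_length"
    subcategory: str = ""   # optional finer bucket, e.g. "screen_order"

log = [
    Experiment("yearly_price_v2", "pricing"),
    Experiment("paywall_after_onboarding", "paywall_placement"),
    Experiment("survey_10_questions", "survey_length"),
]

# Count how many tests land in each bucket; categories with only a
# couple of tests are too thin to draw conclusions from later.
print(Counter(e.category for e in log))
```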
[00:09:27.130] - Olivier Destrebecq
So when you have all those categories, and, as you said, more or fewer categories depending on how many experiments you have, what should you do with them?
[00:09:35.380] - Sviat Hnizdovskyi
Then imagine you have this spreadsheet. The first column is the name of your test. The second one is the category you assigned, which we just talked about. You will still need a few more columns to complete this exercise. The next one records, for each experiment, the LTV change of the best variant of that experiment.
[00:09:58.990] - Sviat Hnizdovskyi
Imagine you have, let's say, a paywall test with three different designs. Of course you had a default variant, which is a 0% change, because we take zero as the baseline. Then you have two other variants. Let's say the first variant was statistically significant and showed a 5% increase to your LTV, and the second variant showed a 10% decrease; these are relative numbers.
[00:10:24.780] - Sviat Hnizdovskyi
So you want one column showing your best LTV change across variants, which in our case would be 5%, and another showing the worst LTV change, which would be -10%. If you only had two variants, for example the default and variant A, the worst would be zero, because the worst was basically doing nothing, and the best LTV change was 5%.
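A minimal sketch of the best/worst columns just described, assuming each experiment stores its variants' relative LTV changes versus the control (the function name is a hypothetical helper, not anything from the episode):

```python
# Sketch: derive the "best LTV change" and "worst LTV change" columns
# for one experiment. The control variant counts as a 0% change, which
# is why a single losing variant still yields a worst value of 0%.

def best_and_worst(variant_changes):
    """variant_changes: relative LTV changes of the non-default variants."""
    changes = [0.0] + list(variant_changes)  # include the default at 0%
    return max(changes), min(changes)

# The three-variant example from the conversation: +5% and -10%.
print(best_and_worst([0.05, -0.10]))  # (0.05, -0.1)

# Two variants only (default + variant A at +5%): worst is the 0% default.
print(best_and_worst([0.05]))         # (0.05, 0.0)
```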
[00:10:50.190] - Sviat Hnizdovskyi
Why do you want to do this exercise? To evaluate how impactful and accurate your hypotheses are, and how good the variants you picked for a particular experiment were. It doesn't tell you much at the level of a specific test, but when you start combining and summarizing all the tests within a particular category, it starts showing you a lot. It shows to what extent a particular category, let's say pricing elasticity or paywall design, can be optimised, but also how well you are doing there.
[00:11:26.490] - Sviat Hnizdovskyi
It's hard to distinguish between those two, so it might be the case that the category is actually promising, but you are just not doing a good job of coming up with good variants to try there. But usually, if you follow the market and what other apps are doing, and you have a strong team internally, I think it rather boils down to the impact of the category, honestly.
[00:11:49.980] - Jeff Grang
Okay. That is why your framework needs to be initiated with a lot of past experiments. So the first step is really to recategorise all the experiments you've done in the past, right?
[00:12:01.530] - Sviat Hnizdovskyi
Yes, because eventually you will end up with, let's say, 8 or 10 or 15 categories, and you want a sufficient number of tests per category to draw any meaningful conclusion. If it was just two or three A/B tests, it's too early to say that a category is not promising enough, or that you don't want to experiment there anymore because you can't really improve LTV with experiments there. Yeah, that's exactly why it's important to have a lot of A/B tests.
[00:12:31.560] - Olivier Destrebecq
You've got all those categories, and you've got the change each category provided for ARPU or LTV. What do you do next once you have all those numbers in place?
[00:12:43.950] - Sviat Hnizdovskyi
Then you basically create a pivot table summarising each category. Instead of a list of, let's say, 100 experiments with a single experiment per row, you end up with a table of just 5 or 10 categories and the average LTV improvement per category. Let's say you launched 20 experiments on pricing elasticity with, on average, a 5% improvement, and another category was onboarding survey length, where you had just five experiments and on average something like a 10% improvement.
[00:13:26.280] - Sviat Hnizdovskyi
Then you sort this table in descending order, starting from the category with the biggest LTV change down to the smallest.
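A sketch of this pivot step with pandas; the figures are illustrative, chosen to match the 20-pricing-tests and 5-survey-tests example above:

```python
# Sketch: collapse the per-experiment list into a per-category summary,
# sorted by average LTV improvement (descending).
import pandas as pd

experiments = pd.DataFrame({
    "category":        ["pricing"] * 20 + ["survey_length"] * 5,
    "best_ltv_change": [0.05] * 20 + [0.10] * 5,  # per-experiment best variant
})

summary = (experiments
           .groupby("category")
           .agg(n_tests=("best_ltv_change", "size"),
                avg_ltv_change=("best_ltv_change", "mean"))
           .sort_values("avg_ltv_change", ascending=False))

# survey_length comes out on top (10% average), but with only 5 tests,
# so treat it with more caution than the 20-test pricing bucket.
print(summary)
```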
[00:13:40.290] - Olivier Destrebecq
Speaking of what we started the podcast with, all the behavioural analysis and so on: a lot of our customers at Purchasely run experiments on a specific country or set of countries. What's your feeling about this? How does it fit into your framework? Do you take it into consideration, or does it just not matter, in the sense that price elasticity works the same no matter the country, even if you've only experimented in a few countries that aren't the biggest tier-one markets?
[00:14:10.950] - Sviat Hnizdovskyi
There is convergent research evidence in the behavioural science and consumer behaviour space that human psychology is more or less universal. There are, of course, differences between particular world regions. Consumer behaviour in Japan might be very different from the United States, but that's mostly related to things like price perception and particular smaller variables: the importance of social approval, let's say, or the importance of authority. For example, in health and fitness products, a lot of apps refer to medical doctors promoting their product or reviewing their programme.
[00:14:53.610] - Sviat Hnizdovskyi
For some regions, that might be more beneficial. In particular, I know a few studies claiming that for people from the Western world this is very important, while for people from the Middle East, let's say, or even some of southern and eastern Asia, it's of less importance. But I don't want us to focus on these small details.
[00:15:16.650] - Sviat Hnizdovskyi
Usually, for the sake of simplicity, I would say the test can be universal, but when it comes to pricing, I would highlight that it might be worth tracking the specific effect separately for tier-one countries, the US, and all other countries. When it comes to pricing, especially for a subscription app or digital product, it's fine to charge 90 bucks per year in the United States, but even in European countries like Italy, Germany, or Spain, the perceived value of a digital product is slightly lower.
[00:15:54.920] - Sviat Hnizdovskyi
Usually you would probably be much better off charging a slightly lower price for the yearly subscription, which also decreases refund rates and improves conversion from trial to purchase. That's what we found empirically as well.
[00:16:07.670] - Sviat Hnizdovskyi
In some other countries, obviously those with lower GDP per capita, you should lower that even more, but that's obvious. The same applies to the difference between iOS and Android, of course.
[00:16:19.250] - Olivier Destrebecq
So I guess that can be one of the categories in your list of test categories: pricing tests, varying prices by country, and all that good stuff. We're at the point in your framework, I guess, where we have the list of areas sorted by order of importance for where you should start experimenting, because the ones with the best chance of a big return are at the top. Do you have any advice for how people should go through that list afterwards, over the experiments? Things to watch out for and so on?
[00:16:52.370] - Sviat Hnizdovskyi
Yes. We're at the stage where we more or less know that this category is probably really promising and that one is probably less promising. But there is one more important step I wasn't initially aware of. As we empirically applied this framework to some products, we found it to be very important, and it can be a game changer. What we finally suggest is to keep track of the diminishing returns of optimisation as you progress with the number of tests. Of course, your first experiment will likely be very impactful.
[00:17:32.750] - Sviat Hnizdovskyi
Let's say you never tested price, you had some hypothesis, you tried it out, and you got something like a 40% improvement; that can happen. But imagine you keep testing pricing for 15, 20, 30 experiments. Of course the returns will diminish as you progress. So the last step is to track whether you're starting to hit this plateau of diminishing returns in each category of your experimentation.
[00:18:03.170] - Sviat Hnizdovskyi
Then you basically have two options. If you do see diminishing returns, you can start revisiting some of your initial assumptions and maybe try something completely different, something more risky, in order to break through this plateau. The second option is just to skip this category for now, because while it was likely good in the past, that's exactly what we're trying to evaluate here.
[00:18:28.790] - Sviat Hnizdovskyi
Whether it's still good is the question. Probably it's less promising, and you can move on to numbers 2, 3, and 4 on your list and do the same exercise with them. That's important, because sometimes we saw, okay, we have on average a 10% improvement to LTV for push notifications over 10 experiments. But what we found was a 40% improvement for the first experiment, 25% for the second, 15% for the third, and then 5, 3, 2, 1% for the experiments up to the 10th, let's say. That's what we wanted to check as well.
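One simple way to operationalise this check is a heuristic that compares the average lift of a category's most recent tests to its earliest ones. This is a sketch under our own assumptions (window size and threshold are arbitrary), not a rule from the episode; the numbers mirror the push-notification series above.

```python
# Sketch: flag a category as plateauing when the average lift of the
# last few tests falls well below the average of the first few.

def is_plateauing(lifts, window=3, threshold=0.5):
    """lifts: per-test LTV changes in chronological order. Returns True
    when the mean of the last `window` tests is below `threshold` times
    the mean of the first `window` tests."""
    if len(lifts) < 2 * window:
        return False  # too few tests to judge
    early = sum(lifts[:window]) / window
    late = sum(lifts[-window:]) / window
    return early > 0 and late < threshold * early

push = [0.40, 0.25, 0.15, 0.05, 0.03, 0.02, 0.01]
print(is_plateauing(push))  # True: recent lifts are a fraction of early ones
```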
[00:19:06.860] - Olivier Destrebecq
Yeah. Okay. I'm curious, because we've used LTV as the metric in the examples for your framework. But what happens if you're tracking multiple metrics? How do you keep, for example, retention in mind, and other things that might be very important to your business? Can you use that same framework with multiple variables, I guess?
[00:19:27.140] - Sviat Hnizdovskyi
Yes. We've mostly applied this framework to ARPU-related or LTV-related experiments, because that's what most subscription apps experiment around the most. But the framework can also be applied to other metrics, like retention or particular product engagement metrics. You can even use it separately for your monetisation and activation team and for your retention team working on the core functionality.
[00:19:56.510] - Sviat Hnizdovskyi
When we perform these calculations, we want to make sure we control for retention, basically. We can of course increase LTV, but if there is a drop in retention, that's a more strategic question: whether we want to allow this drop, and to what extent we want to sacrifice some of the other metrics. That's what I always ask the product owners; it's usually decided at the C-level. Someone is ready to go more aggressive in terms of monetisation, sacrificing some retention. Other people sometimes want completely the opposite, hoping for… But that's rarely the case, actually.
[00:20:32.490] - Olivier Destrebecq
Okay. One other thing we talked about last time was how to deal with tests that take a really long time to produce results. Obviously here you look at all the tests you've done in the past, but you might have a test where it's six months or a year before you can actually see the results. I'm sure you want to try to move a little faster as you're doing all this categorisation. I'm curious how to best deal with that.
[00:20:56.790] - Sviat Hnizdovskyi
Well, ideally, the higher in the funnel your experiment is, the sooner you can collect data. If you experiment somewhere around the first session, the onboarding, the very first paywall, the very first activation aha moment, you can usually get results quite early.
[00:21:16.950] - Sviat Hnizdovskyi
The nature of subscription-driven apps is that in the majority of niches, like education, health and fitness, but also entertainment, you don't really have great long-term retention. That's the fundamental problem, I think. You've probably seen some recent industry reports; there are quite a few. There's also this recent App Store Connect feature where they show you how you compare to your closest competitors.
[00:21:43.770] - Sviat Hnizdovskyi
You see that long-term retention is actually a very, very big problem, which means that if you want to experiment with things happening later in your funnel, on day seven, maybe on day three, these experiments can of course take months.
[00:21:59.700] - Sviat Hnizdovskyi
Usually I would recommend focusing instead on things happening in the very first few sessions, where you have more users, more newly arriving users, and of course higher retention for the first few days. I think the majority of optimisations can still be done there before you move to more in-depth parts of the funnel, more sophisticated things that would require more engineering and better knowledge of your users, and where it takes much more time to reach a sufficient sample size for your experiment to have statistical significance.
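The sample-size point is worth making concrete. This is a sketch using the standard two-proportion sample-size formula from generic A/B-testing statistics, not a calculation from the episode; the baseline rate and lift are hypothetical. It shows why low-conversion, late-funnel events make tests drag on.

```python
# Sketch: users needed per variant to detect a relative lift in a
# conversion rate at alpha=0.05 (two-sided) and 80% power.
from math import ceil
from statistics import NormalDist

def sample_size_per_variant(p_base, rel_lift, alpha=0.05, power=0.8):
    p2 = p_base * (1 + rel_lift)
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # 1.96 for alpha=0.05
    z_b = NormalDist().inv_cdf(power)           # 0.84 for 80% power
    p_bar = (p_base + p2) / 2
    num = (z_a * (2 * p_bar * (1 - p_bar)) ** 0.5
           + z_b * (p_base * (1 - p_base) + p2 * (1 - p2)) ** 0.5) ** 2
    return ceil(num / (p2 - p_base) ** 2)

# Detecting a 10% relative lift on a 5% conversion rate:
print(sample_size_per_variant(0.05, 0.10))  # roughly 31k users per variant
```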
[00:22:33.210] - Jeff Grang
Amazing. You've mentioned a lot of KPIs, and I believe your framework assumes a certain level of maturity in a company: an organisation needs data tracking in place for these KPIs, and, I believe, some associated tools to get them out, or maybe you've developed those. What would you say is the minimum setup required to run your framework within an organisation?
[00:22:57.450] - Sviat Hnizdovskyi
That's a great question, because I've had the chance to see the tech stacks used by a lot of apps, and they differ. There is a more or less standard choice: you can sacrifice a fraction of your profits by using third-party tools, or you can spend more time, engineering time in particular, trying to build something like that in-house, or maybe partially sacrifice the analytics overall.
[00:23:26.910] - Sviat Hnizdovskyi
Ideally, to make our framework viable and be able to evaluate all of these things, we want all of the historical data on experiment performance. It doesn't matter how you did it, whether you launched the experiments through Firebase, which is basically a free experimentation tool, or through some more sophisticated and, of course, more expensive options. You must be able to collect this historical data.
[00:23:52.050] - Sviat Hnizdovskyi
Firebase is maybe not the worst tool when it comes to randomisation, but it's totally unreliable when it comes to calculating ARPU results, because it doesn't include a projection of ARPU or LTV. It just counts what you receive in the moment, within the first few days of running the experiment, and it doesn't account for potential subscription refunds or potential rebills.
[00:24:16.740] - Jeff Grang
Because it doesn't know the cohorts, right?
[00:24:18.990] - Sviat Hnizdovskyi
Of course, yeah. For this reason, you need something to calculate all of these things: either a third-party tool, an internal BI tool, or anything like that. Some of the data analytics platforms like Mixpanel and Amplitude now provide their own A/B testing measurement solutions, and some even provide randomisation solutions for an additional charge. It's probably up to the app owner or other stakeholders to decide on this tech. But if you want to save some money, you may be quite limited in what you can calculate later and what you can revisit when it comes to historical evaluation.
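To illustrate the gap Sviat points at, here is a naive projection that folds refunds and expected rebills into revenue per paying user. The function and all rates are hypothetical, a sketch of the idea rather than any tool's actual model.

```python
# Sketch: in-the-moment revenue vs a naive projected LTV per paying user.
# The refund rate and renewal expectation below are made-up assumptions.

def projected_ltv(price, refund_rate, expected_renewals):
    """Revenue per payer once refunds and expected future rebills count."""
    return price * (1 - refund_rate) * (1 + expected_renewals)

observed = 90.0   # what a day-one readout sees: one $90 yearly purchase
projected = projected_ltv(90.0, refund_rate=0.12, expected_renewals=0.4)
print(observed, round(projected, 2))  # 90.0 vs 110.88
```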
[00:24:58.680] - Jeff Grang
It's not something that every company will be able to start tomorrow, especially small ones. Which kinds of companies have you been working with on defining this framework? We know we are the very first ones you've presented this framework to, and thanks for sharing it on the Subscription League podcast. But which kinds of companies have you worked with in building such a framework, and with what outcomes?
[00:25:21.500] - Sviat Hnizdovskyi
Yes. This particular approach was applied to a few products; I can name them, that's not a problem. We tried to apply it to Fabulous when I was still working as an individual consultant with a small team helping me. We applied it to Drops, the language-learning app, and it was also partially applied to Loona, a sleep app.
[00:25:41.750] - Sviat Hnizdovskyi
In some of these cases, when we were doing this categorisation, what I learned is that the categorisation was actually quite different for all three products, even though the nature of the products is quite similar. You usually assign these categories based on the historical performance of experiments, and, let's say, Fabulous was very different from Loona in that way. They had completely different categories.
[00:26:06.620] - Sviat Hnizdovskyi
This framework helped in all three of those cases, and there were some other cases with smaller, less well-known products, where it was used to evaluate the bottlenecks and low-hanging fruit first. But you cannot just leave it there. You need someone with knowledge of the product and of the users to use this, draw conclusions, and suggest hypotheses based on what he or she has seen from the framework.
[00:26:36.080] - Olivier Destrebecq
I'm sure you've run lots of experiments using that framework while working with companies. Which one has been the most surprising to you? Which one would you have bet your house on, and lost the house?
[00:26:45.830] - Sviat Hnizdovskyi
There was one particular experiment. We were playing with different variants of an onboarding survey. The standard practice, mostly used in health and fitness and education products, is to ask a bunch of questions. Probably most of the market followed the example of Noom, with a very long onboarding of 20+ questions. Some players sat somewhere in between, with 5 to 10 questions.
[00:27:12.770] - Sviat Hnizdovskyi
What we tried to evaluate is the optimal length of the onboarding. I would usually bet on the longer onboardings: you'll have a smaller percentage of users reaching the end of the survey, but with a higher intent to purchase eventually.
[00:27:29.990] - Sviat Hnizdovskyi
What we did in that experiment was compare three variants. The first one was two steps, a very short survey. The second one was something like 10 steps, and the third was around 25 steps. I expected a linear increase in LTV: the very short one would probably be the worst, and the longest would be the best.
[00:27:51.080] - Sviat Hnizdovskyi
But what we eventually witnessed was slightly different: the very long and the very short onboardings were almost equally good, funnily enough, and the mid-length one was significantly worse than both. That's maybe a very interesting finding, and it's hard to explain. Further experimentation is needed to see whether these results replicate in other products. But basically, maybe everyone is just so tired of these long onboardings, because it's such a common practice already, like-
[00:28:24.080] - Jeff Grang
Let me use the damn thing.
[00:28:25.520] - Sviat Hnizdovskyi
Yeah, apps have already been using that for 3, 4, 5 years, since probably 2017, 2018. Some users just really want to experience the thing first; they don't need this long list of questions. I've seen some good decisions where app owners put a short survey first and then offer a longer survey as an option later within the product. That, I think, is a good thing to do, honestly, and you will definitely stand out from the majority of your competitors who are just following this kind of magic-wand 30, 40-question approach.
[00:29:01.490] - Olivier Destrebecq
That's good advice. You've given a lot of great advice today, and thank you very much for walking us through your framework. It was really nice. If people want to learn more about your framework, where can they go? I guess your framework and you, and Applica.
[00:29:14.660] - Sviat Hnizdovskyi
Yeah, thanks. I'm quite happy to share all of these learnings, and thanks for having me here today. I'm actually planning to publish this framework on our blog. There are some other articles there that can be useful for product managers, so you can just go to applica.agency and look through some of the case studies on how we apply this, as well as a step-by-step guide for the framework. I'm also thinking of creating a spreadsheet template you can readily use: you just import all of your A/B tests and it helps you perform all of these calculations in a more semi-automatic way.
[00:29:53.420] - Olivier Destrebecq
Awesome. Thank you very much for joining us today and sharing all that knowledge and all those learnings. That was really awesome. Thanks again.
[00:29:58.880] - Sviat Hnizdovskyi
Thank you. [foreign language 00:29:59]
[00:30:00.650] - Olivier Destrebecq
On behalf of the Purchasely team, thank you for listening to the Subscription League podcast. If you've enjoyed what you heard, leave us a five-star review on iTunes or another audio platform. To find out more about Purchasely and how we can improve your subscription business, visit purchasely.com. Please hit subscribe in your podcast player so you don't miss any future episodes. You can also listen to previous episodes at subscriptionleague.com. See you soon.