Uptime was built on a mission to help people learn quickly and absorb knowledge from trusted thought leaders without spending too much time, energy, or money.
Launched only in January 2021, the app already boasts near five-star customer reviews on the App Store and has been recognized as an Apple Editor's Choice and a Google Play App of the Year 2021.
In episode 8, Léa Samrani, lead product manager at Uptime, talks about how the company expedited the app’s growth by
- Conducting heavy market and user research
- Leveraging one-on-one user engagement to compensate for the lack of data as an early-stage startup
- Tracking everything with data tools once the user base has grown
- Leading with the product that speaks for itself
- Getting users on board with the app's benefits by giving full premium access
Listen to the episode to find out more about the brand’s growth strategy.
For noteworthy quotes and key takeaways from the episode, read the article:
Uptime - How to build a successful user-first app strategy with Léa Samrani.
Episode Topics at a Glance
- Uptime’s mission and history
- Initial research and consumer survey
- A/B testing
- Setting the right KPIs
- Paywall strategies
- Educating users about the app’s value
More about Léa Samrani
Léa is a London-based product management veteran with about a decade of experience managing teams, driving product strategy, and building products from the ground up in fast-paced environments. Currently, she's the lead product manager at Uptime, a freemium app featuring 5-minute knowledge hacks from the world's best books, courses, and documentaries. Prior to her current position, Léa worked as a senior product manager at Bumble and Badoo. She also has years of experience as head of product management in the charity sector. Her educational background includes a degree in public law and political science and a master's in marketing communications and advertising. Léa is also a lecturer at Product School.
Léa Samrani’s Links
- Léa Samrani’s LinkedIn profile
- Uptime website
- Lea’s Product School lecturer profile
Timestamps
00:20 Introducing Léa Samrani
00:43 Léa Samrani and Uptime backstory
02:17 What did the research you did for your app entail?
05:51 Knowing when Uptime was on the right path
07:58 Challenges of A/B testing
12:00 Importance of testing the paywall
15:03 Does your advice change based on the scale of the company (large vs small)?
16:31 What KPIs do you optimize for today and why?
18:45 Did you encounter any unexpected tests?
20:31 How do you educate users on the value of the app?
22:17 Where to learn more about Léa Samrani and Uptime
Transcript
[00:00:19.400] - Olivier Destrebecq
Hey, Jeff, are you ready for another great episode?
[00:00:22.640] - Jeff
Yes, I'm super excited, because today we're interviewing Léa Samrani. She's product lead at Uptime, where she has worked for the last three years. A big bet they made was to tirelessly put customers' needs first in order to grow, and it seems like the bet has paid off. So I want to learn more about their success.
[00:00:43.010] - Olivier Destrebecq
Awesome. Welcome, Léa, to the show. Before I get to ask you tons of questions, do you want to tell us more about you and Uptime?
[00:00:50.250] - Léa Samrani
Yeah, sure. Hi, guys. Thanks for having me, it's great to be here. My name is Léa. I've been in product for a decade now, which makes me feel old. I've mostly worked for mission-driven companies of all sizes, always with a strong mission. I've worked in the not-for-profit sector, I've worked within the dating industry, and most recently, a lot more in education.
[00:01:11.360] - Léa Samrani
For the last three years, I've been part of the Uptime team, which has been a fantastic experience. Uptime, maybe you haven't heard of us yet; that's because we're a startup. We're on a mission, actually, to inspire people everywhere to learn and to thrive in this very fast-changing world of ours. The way we go about that is we select thousands of lessons from the best books, the best courses, the best documentaries, and even the best podcasts, and we package them into 5-minute visual stories. We call that a Knowledge Hack. We actually launched the application in January 2021, so we haven't even celebrated our second birthday yet.
[00:01:48.390] - Léa Samrani
We've made a big impact already, and we've been recognized for our work, which is really, really amazing. The app was an Apple Editor's Choice. We were a Google Play App of the Year 2021.
[00:01:59.630] - Olivier Destrebecq
Nice.
[00:02:00.830] - Léa Samrani
We even made the Fast Company list of the best apps of 2021, so pretty cool.
[00:02:04.880] - Olivier Destrebecq
Great. You mentioned that you do it for podcasts too. How do we get a little thing done for our podcast? Who do we talk to?
[00:02:12.280] - Léa Samrani
Leave it to me.
[00:02:13.440] - Olivier Destrebecq
Great.
[00:02:14.640] - Léa Samrani
I can put you in touch with the right people.
[00:02:16.880] - Olivier Destrebecq
Awesome. You guys at Uptime did a lot of research to get the app right. Can you tell us more about what that research entailed and how it made a difference?
[00:02:26.320] - Léa Samrani
Very early on, we had a good idea of what our mission was and what problem we were trying to fix. But the big question was, how do we actually go about fixing it? That's where the research [inaudible 00:02:37]. We had this hypothesis, something we could relate to on a personal level, that spending too much time on social media, mindlessly scrolling, wasn't really good for you. It wasn't good for your mental health. It just wasn't good for a person in general. We thought that learning something new instead, actually gaining knowledge, could be the answer. We were just starting, so there was no data to look at, no customers to get insights from, no way to validate any kind of hypothesis that way. We still needed essential information to guide our decision-making process, so what we did was very thorough research. We did an extensive analysis of the market. I mean, extensive. I think I've personally tested over 100 applications-
[00:03:20.620] - Olivier Destrebecq
How nice.
[00:03:21.420] - Léa Samrani
-and other products out there as well, like, everything, pretty much. We also ran a survey with, I think, somewhere between 4,000 and 5,000 participants. We ran a diary study that went on over a few weeks, really looking at people's habits, how they were consuming any digital product, how they were learning, or not learning, actually. And then we also did in-depth interviews to classify people's needs into personas.
[00:03:52.060] - Léa Samrani
What we really wanted to know, actually, was how people were spending their time, how that was making them feel, and how we could help with that. Through this process, it became very clear that there was a need for a way to spend short amounts of time throughout your day gaining knowledge, but that it's actually very difficult to do. We met people who gave us examples of how they were hacking existing products that were not intended for learning and making them work for them in that way. The details of how they would do that, it's just crazy.
[00:04:31.080] - Léa Samrani
We basically realized that a lot of the education products out there were very formal, more on the long-form side of things. And that's great. It's really good to have that. But it's not necessarily something you can do in, you know, 5 minutes, in your downtime, when you're on a commute, when you have a break at work, or when you're cooking, anything like that. Some of those products have very poor usability. We found that the better the content, the worse the usability, in a way. It was really strange, because on the complete other end of that scale, you find a lot of products that are very engaging, very fun, and have great usability. That's usually entertainment or social media or even the gaming industry. In that sphere, though, there's information overload on those platforms, and it's hard to get a sense of what can and cannot be trusted. So there was really nothing in the middle, nothing that had great usability, ease of use, and ease of access, that was great to use but also had content that can be trusted and knowledge that is worth learning.
[00:05:41.840] - Léa Samrani
We found that out through talking to people extensively, and it became very clear where we should be focusing, what we should be doing, and how to start.
[00:05:51.360] - Jeff
Nice. How much time would you say it took to validate that hypothesis you made, of a product built on knowledge, yet very appealing for daily use?
[00:06:00.560] - Léa Samrani
It took a couple of months, first, to really dig into that properly.
[00:06:04.640] - Jeff
What event made you think that you were succeeding and you were on the right path? Is it like being-
[00:06:10.690] - Léa Samrani
[crosstalk 00:06:10]
[00:06:10.120] - Jeff
-featured by Apple, a certain amount of downloads? Or…
[00:06:15.910] - Léa Samrani
No, that comes later. You don't get featured on your first day; that comes later. At first, it was all through user feedback. Our early adopters, we had very close communication with them; we tried to build a mini community with them. With the first people, we were getting really positive feedback very early on. Then we took that feedback on board, and then, big iterations. In the end, I mean, if you look at where we started to where we are now, the product is completely different. It's very, very different, but the mission is the same, and the need we're fulfilling is the same as well as when we did the original research.
[00:06:53.500] - Olivier Destrebecq
I'm curious, kind of a follow-up question to Jeff's: how long do you think it took until you got to the point where you're like, "Okay, now we've got traction, we're on the right path"? Before that, you're kind of like, "Well, we have this great idea, but we're getting feedback and turning left and right and adjusting." But I'm sure at some point, you're like, "Okay, we're onto something, we're going in the right direction."
[00:07:11.420] - Léa Samrani
That's a really good question. I don't think it's a time thing. At first, it was based on the insight, because we were receiving incredibly positive feedback from users. It really got us thinking we were doing something right. We were also receiving requests for features or behaviors in the app that we were already planning to build. So we really felt aligned with our audience, what they needed, and our thinking as well. But then, you really know you're doing something well when Apple picks you up, when you get really great retention, when people review you out there on, like, blogs. It's not people we were paying; it's people that are using the product, just like it, and write about it. So it's a general-
[00:07:56.160] - Olivier Destrebecq
Awesome.
[00:07:56.160] - Léa Samrani
-multitude of things that come together.
[00:07:59.340] - Olivier Destrebecq
Last time we talked, you told us a key point in Uptime's culture was to test everything, to run a test every time a feature comes out. Can you tell us about the challenges this can bring to a project?
[00:08:11.980] - Léa Samrani
When you just start, you don't really have enough users to test everything. You don't get the numbers to validate anything in a statistically significant way. We didn't start testing from day one; what we did was introduce the basics needed for testing. We made sure that everything was tracked properly, for example. From the very beginning, we never actually launched a feature without adequate tracking in place. The second thing is we actually invested in tools that would democratize access to data. We made sure that data wasn't something that was siloed. That got us to a place where, once we had enough users, we could start experimentation. It was very easy to do, because all the infrastructure was in place from day zero.
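To make that "tracking from day one" idea concrete, here's a minimal sketch of a shared instrumentation helper. The event names and the print-to-stdout backend are illustrative assumptions, not Uptime's actual stack; the point is that every feature logs through one path the whole team can query.

```python
import json
import time

def track(user_id: str, event: str, properties: dict | None = None) -> None:
    """One shared helper so every feature ships with consistent event logging."""
    payload = {
        "user_id": user_id,
        "event": event,
        "properties": properties or {},
        "ts": time.time(),
    }
    # In production this would be sent to an analytics tool or warehouse
    # that anyone on the team can query, so the data isn't siloed.
    print(json.dumps(payload))

# Hypothetical event: a user finishes a 5-minute Knowledge Hack
track("user-42", "hack_completed", {"hack_id": "atomic-habits", "seconds": 291})
```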
[00:08:56.850] - Léa Samrani
We actually started with user testing before A/B testing. It's actually easier to start with user testing; there are a lot of tools you can use, [inaudible 00:09:05] equipment. You can also build functionality into your own product to talk to your users directly. The big advantage is that you don't need big numbers to validate things that way. You can do it with five users.
[00:09:16.990] - Léa Samrani
So that was the first step. Once [inaudible 00:09:19] was widely accepted, we had that running with the team basically as BAU, and our numbers started to ramp up. We just added experimentation to it. So first you user test, then you A/B test. What happens, though, when you run experimentation on a new product, is you have to think carefully about what it is you're testing. Where in the funnel you're testing your feature or your change determines how many people are going to see it and how easy it is to move the metric you're actually testing against. Because you're new, you don't yet have the level of users where you can test something very low down [inaudible 00:09:54] and have results fast. It's something that we had in mind very early on: we don't want a test to run for months, have no results, and just never go anywhere. So deciding if something was worth testing is something we weighed really carefully, in terms of how exposed to people it would be. It takes a little longer to build something to test as well. So-
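Her point about tests that run for months comes down to statistical power. Here's a textbook two-proportion sample-size calculation; the conversion numbers are invented for illustration, not Uptime's actual traffic.

```python
from math import ceil, sqrt
from scipy.stats import norm

def n_per_arm(p_base: float, p_new: float,
              alpha: float = 0.05, power: float = 0.80) -> int:
    """Users needed in each variant to detect a shift from p_base to p_new
    with a two-sided z-test on proportions."""
    z_a, z_b = norm.ppf(1 - alpha / 2), norm.ppf(power)
    p_bar = (p_base + p_new) / 2
    num = (z_a * sqrt(2 * p_bar * (1 - p_bar))
           + z_b * sqrt(p_base * (1 - p_base) + p_new * (1 - p_new))) ** 2
    return ceil(num / (p_base - p_new) ** 2)

print(n_per_arm(0.04, 0.05))  # ~6,700 users per arm for a 4% -> 5% lift
# If only 5% of users ever reach that deep-funnel screen, each arm needs
# ~135,000 app users, which a young startup may take months to accumulate.
```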
[00:10:20.140] - Léa Samrani
Actually, at the time, that wasn't really a problem, because we didn't have any legacy systems; we built the process right from the beginning. So that went very swiftly. But I've worked with a lot of companies where that's not the case. When you go to build a feature for A/B testing, sometimes it takes twice the time to build it. Usually, it's because of the way the tech is set up, or it's just not made for it. For those guys, experimentation has a significant cost for their organization, and it's not always worth it, actually.
[00:10:49.740] - Olivier Destrebecq
Yeah.
[00:10:50.540] - Léa Samrani
One of the ways to get around that is to maybe just test on one device. You always see, you often see, actually, that iOS is ahead of Android for quite a few products that are built separately. So maybe you just test on iOS, and if it works, you extrapolate the results and release it on all platforms. That saves you some time and gives you some visibility. Another case I've actually seen, working with some companies, is that sometimes you have a feature that's decided at the business level. It could be something that's essential for the company's positioning or to support marketing activity, for example. In those cases, the tests are almost pointless, because nobody cares about the results; the business will release that feature no matter what.
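A minimal sketch of that "test on one platform first" idea, assuming deterministic hash-based bucketing; the experiment name and the 50/50 split are hypothetical:

```python
import hashlib

def assign(user_id: str, platform: str, experiment: str = "new_paywall") -> str:
    """Run the test on iOS only; Android stays on control until the iOS
    results are read and, if positive, extrapolated to all platforms."""
    if platform != "ios":
        return "control"
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    # Stable 50/50 split: the same user always lands in the same arm.
    return "treatment" if int(digest, 16) % 100 < 50 else "control"

print(assign("user-42", "ios"))      # treatment or control
print(assign("user-42", "android"))  # always control
```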
[00:11:31.060] - Léa Samrani
In those cases, you have to think about the cost of getting data you may not use versus the possible cost of not having any visibility and not making informed decisions; those are tradeoffs you have to consider as well. But I really believe that a company that has decent data acquisition and a proper setup, like where we're at with Uptime now, should have testing as the default methodology and make exceptions when it's not needed, rather than the other way around.
[00:12:00.760] - Olivier Destrebecq
That's a great summary. In the subscription world, A/B testing the paywall is super important. Do you have any advice on that specific topic?
[00:12:08.080] - Léa Samrani
Yes, absolutely. You're right. Testing a paywall is very, very important. I've worked with startups that would actually say, "We don't have the resources to test this, so we're just going to copy what worked for our competitor." In a sense, those guys were right. You don't always have to reinvent the wheel; especially with paywalls, sometimes you can just get inspired by, get best practices from, other applications.
[00:12:29.910] - Léa Samrani
But there's a big caveat here, because what works for one product doesn't necessarily work for another one. There are a lot of other factors to take into consideration, like your brand awareness, your pricing strategy, your audience, your localization. I've actually seen it happen a few times that the same design, or a very similar design, has a completely different impact on different products, which actually makes sense, right? Because otherwise, every single application out there would have the exact same paywall, and that's not the case. By focusing so much on what the competitors are doing, they sort of stop being innovative; they stop focusing on their own users' behaviour. I don't think it really works. I think it's quite important that you actually invest the time in testing your paywalls and find what works for you, for your product, and your audience.
[00:13:19.310] - Léa Samrani
One of the main mistakes I've seen with paywall testing is people trying way too many things at once. It's very difficult to get an accurate answer if you're changing your price, your intro offer, your design, your copy, and your paywall entry point all at once. That's the kind of thing I really would not recommend. You should split that into different tests. If you have enough users, you can even run all those tests at once, and if you don't have enough, you just do them one after the other, but they're individual tests.
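One common way to run those single-variable tests concurrently is to give each test its own independent split. A sketch, again assuming hash-based assignment; the test names and arms are invented for illustration:

```python
import hashlib

# One variable per test, as Léa suggests. Salting the hash with the test
# name makes each split independent: a user's price arm says nothing about
# their copy arm, so concurrent tests don't contaminate each other.
PAYWALL_TESTS = {
    "price_point": ["9.99", "12.99"],
    "intro_offer": ["7_day_trial", "no_trial"],
    "hero_copy":   ["learn_faster", "read_less"],
}

def arm_for(user_id: str, test_name: str) -> str:
    arms = PAYWALL_TESTS[test_name]
    h = int(hashlib.sha256(f"{test_name}:{user_id}".encode()).hexdigest(), 16)
    return arms[h % len(arms)]

for name in PAYWALL_TESTS:
    print(name, arm_for("user-42", name))
```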
[00:13:47.830] - Léa Samrani
I've also seen people go to the other extreme with paywalls, where they actually lose trust in their own decision-making and start testing absolutely everything, to the point where a one-pixel design change would actually be a test. That's not great either, because if it's not visible to the untrained eye, you maybe shouldn't test it. It doesn't really justify the extra resources, the extra time. Also, it's kind of bad for your team, because those tests will probably be inconclusive, and then your team will lose trust in experimentation, in the value of experimentation. So you have to be careful as well.
[00:14:30.430] - Olivier Destrebecq
And there's the impact on the design team and on the people making the decisions: how much do you want to take those decisions away from the team, and what's the impact on the team's morale too?
[00:14:41.470] - Léa Samrani
Yeah.
[00:14:41.990] - Jeff
It drives developers crazy. I mean, they spend days building a paywall, developing a paywall, and it goes in the trash after three weeks. You can do that once, twice, three times. In the end, it's like, "Can't you think a little bit before running some tests, please?"
[00:14:58.070] - Olivier Destrebecq
Sounds like that happened to you.
[00:14:59.990] - Jeff
It never works. Yeah.
[00:15:03.680] - Olivier Destrebecq
All that advice you've given, does it change at all, or does it vary based on the company, the size of the company, like an early-stage startup versus a larger company?
[00:15:13.440] - Léa Samrani
Absolutely, it's completely different at different scales. When you're a large company, you have millions of users, it's so much easier to test. You can test things on a very small subset of your users, you can run a test in a defined market. You can actually test on just 10% of your audience. You have more control, you have more options, you can run a test for a much longer period of time and release very incrementally as well. In a startup, I think there's a lot more pressure to get results. And because you have fewer users and fewer resources, the impact is much bigger. So you have to be a lot smarter about how you test in a startup, but you can actually move faster too, which is great.
[00:15:54.820] - Léa Samrani
If you're testing a paywall as a startup, for example, maybe you can't take as much risk as a large company. One good approach I've seen is to identify your top conversion point and then not touch it. You run your tests on your secondary or lower conversion points, and only once you've validated a change there do you test it again on your main conversion point, because of the impact. In a big company, you don't really need to do that. You just take 10%, run it everywhere, and get results. So-
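A tiny sketch of that exposure logic; the surface names and percentages are hypothetical, not Uptime's actual configuration:

```python
# Startup mode: protect the top conversion point, experiment downstream.
EXPOSURE = {
    "onboarding_paywall": 0.00,  # main conversion point: hold steady for now
    "settings_upsell":    0.50,  # secondary point: full 50/50 test
}
# Large-company mode could instead expose a thin slice everywhere:
# EXPOSURE = {"onboarding_paywall": 0.10, "settings_upsell": 0.10}

def exposed(user_bucket: float, surface: str) -> bool:
    """user_bucket is a stable per-user value in [0, 1)."""
    return user_bucket < EXPOSURE.get(surface, 0.0)

print(exposed(0.42, "settings_upsell"))     # True: in the experiment
print(exposed(0.42, "onboarding_paywall"))  # False: main surface untouched
```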
[00:16:28.380] - Olivier Destrebecq
The benefit of being a large company.
[00:16:30.380] - Léa Samrani
Yeah.
[00:16:31.260] - Olivier Destrebecq
When you're doing A/B testing, you're making a choice between version A and version B, but there's always a goal behind it, a needle you're trying to move, whether it's getting more users or increasing revenue. What KPIs do you guys optimize for today and why?
[00:16:48.410] - Léa Samrani
It's very different for every specific test. Usually, you have a set of KPIs that are like your core metrics, and you just always monitor those, whether they relate to the test or not: something around your retention metrics, your monetization, your store ratings. Those are the ones you either want to see move positively or not be impacted at all. They don't normally move easily; they're quite high-level, so they would rarely be the direct objective of a test. They're more something you look at as a security, a safety metric.
[00:17:24.250] - Léa Samrani
If your test has any negative impact on those, well, you would probably fail it no matter how well it's doing on its actual target metric. For example, let's say you're testing a notification, and the notification has an amazing open rate, but then you realize that retention drops. Most probably, you have a great open rate because people are then going into your app to unsubscribe. If your metric is notification open rate, you'd be like, "This is successful. This is great. Let's release it." But if you also keep an eye on the higher-level metric, which is more about your app as a whole, you would fail that test, right? Because you do not want retention to be impacted.
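Léa's notification example maps neatly onto a guardrail check. A minimal sketch; the metric names, lifts, and tolerances are invented for illustration:

```python
def judge(primary_lift: float, guardrails: dict) -> str:
    """Ship only if the primary metric improves AND no guardrail metric
    (retention, ratings, ...) regresses beyond its tolerated amount."""
    if primary_lift <= 0:
        return "fail: primary metric did not improve"
    for metric, (lift, tolerance) in guardrails.items():
        if lift < -tolerance:
            return f"fail: guardrail {metric} regressed"
    return "ship"

# Notification open rate +12%, but day-7 retention -3% when only a 1% dip
# is tolerated, so the test fails overall despite the "winning" metric.
print(judge(0.12, {"d7_retention": (-0.03, 0.01)}))
# -> fail: guardrail d7_retention regressed
```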
[00:18:04.050] - Léa Samrani
But the best way to actually figure out which metrics to look at is to set up your hypothesis from the beginning: really decide what success will look like before you start the test, before the numbers start coming in and you start thinking, "Wait a second, what does that mean?" You should establish that before the test runs and make sure you're willing to make some tradeoffs. Because in my experience, it's very rare that a test is black or white. You always see some metrics doing well and some doing less well; there's always some sort of tradeoff to be made around what is most important for you right now, what is most important for your business, what your North Star is, that kind of thing.
[00:18:44.370] - Jeff
Speaking of A/B testing, and maybe A/B testing paywalls, do you have any fun stories, an A/B test or paywall that was unexpectedly successful or unsuccessful? Because we hear these stories of people having a bug in a button, like three lines instead of two, and it raises conversion when it was originally a bug. Did you encounter any of those kinds of situations, a really unexpected test?
[00:19:11.080] - Léa Samrani
So often, really. That's why I'm saying it's so important to go and test, to see what works for you, because it's really surprising. Like, I worked with one client that pretty much copy-pasted the paywall from one of the top-grossing apps in the market. Of course, the copy was slightly different, and the pricing strategy, but the design of it, the structure, the strategy was the same. We know the app they got it from is one of the top-grossing apps. We know they test their paywalls; we know it's doing really well for them. But actually, for those guys, it was terrible. It really didn't work at all; it actually failed. We weren't super clear on why it failed, but in the end, we sort of agreed that it was a question of brand awareness: the app they took it from has such big brand awareness that it can afford to do things a smaller brand cannot, because the smaller brand doesn't have the trust yet. They needed more education before that paywall. They needed more copy in that paywall. They needed things that the other brand didn't have to bother with, really.
[00:20:12.740] - Jeff
After seeing that, what happened? Did you start from the ground up?
[00:20:15.820] - Léa Samrani
No, it was reverted. And then it was kind of dropped, and the thinking moved to, maybe we shouldn't focus on the paywall right now but actually focus on what comes before it. I know they're going to test that again in a couple of months. So stay tuned.
[00:20:29.560] - Olivier Destrebecq
You mentioned that the larger brand maybe didn't have to educate the user as much as that client. Educating a user about the value of the app is always a tough proposition. How did you approach that at Uptime, and did you encounter any issues?
[00:20:46.040] - Léa Samrani
It's very difficult to educate a user on the value of your product, especially when you're brand new. There's no awareness. You're asking a lot of the person, right? It's a very competitive market. Why should someone choose to use your product and trust you rather than anyone else out there? There are a lot of ways to go about that, but we've decided, at Uptime, to actually lead with the product first, to let the product speak for itself. So the value comes from the full offering.
[00:21:16.650] - Léa Samrani
What we've done is actually launch with a model where, as a new user, when you join the product, we do two things: first, we put as few barriers to entry as possible between you and the core value of the product, so you can get straight to the content and get to the aha moment where you really see the benefit of it; and second, we lead with premium. Most applications on the market have a freemium model, right? You go in, there's some sort of limited feature set, a limited experience. If you pay, or if you take a trial, you can get the full experience, but you're asked for your card details before you actually know what you get.
[00:21:58.150] - Léa Samrani
We decided to go the other way around. We decided, "Here's the full product, full functionality, full content, try it out." Then you decide to commit, because you know this is the right product for you. So far, that's worked quite well for us, but it's obviously something we continuously optimize and look at evolving.
[00:22:17.630] - Olivier Destrebecq
Those were all the questions that we had for you today. If our listeners want to learn more about you and Uptime, where can they go?
[00:22:25.030] - Léa Samrani
uptime.app.
[00:22:26.270] - Olivier Destrebecq
And what about you? Where can they find you?
[00:22:29.320] - Léa Samrani
They can find me on LinkedIn, it's my only active social media. [inaudible 00:22:33]
[00:22:34.960] - Olivier Destrebecq
Good for you.
[00:22:36.640] - Jeff
Also speaking at conferences because this is how we met, remember?
[00:22:40.280] - Léa Samrani
Yes, yeah.
[00:22:41.520] - Olivier Destrebecq
What's the next conference that you'll be talking at?
[00:22:43.720] - Léa Samrani
I was actually in Berlin last week, at a conference, and I will be in London. No, [inaudible 00:22:48] in June, and then in London in September.
[00:22:51.360] - Olivier Destrebecq
Okay, well, maybe in September people can see you.
[00:22:54.840] - Léa Samrani
Yes.
[00:22:55.760] - Olivier Destrebecq
Thank you so much for all those great answers. It was great having you on the show.
[00:22:59.120] - Jeff
Thanks, Léa.
[00:22:59.480] - Léa Samrani
Thank you so much.