From launch to 3 million installations in one year: How to scale an app on Android

Launch of the Android version

Android is undoubtedly the most popular smartphone OS, yet companies in mobile development tend to invest more effort in the iOS platform. The decision is quite logical, since iOS users typically bring more profit. After the iOS 14.5 release, however, it became vital to use every channel where purchases can still be tracked with clear and timely metrics.

I joined OBRIO and started working in marketing in June 2020. I began my career with the iOS version of Nebula, the only product we had at the time. Our marketing team consisted of three media buyers, each responsible for one or several ad platforms. My first platforms were Snapchat and TikTok.

These platforms brought us good results but were quite unstable. They work well for one-off scaling when you have a top-performing ad creative, but they can fail when you need a well-organized strategy and stable volumes.

Later, new products appeared on the project, one of them being Nebula for Android. This forced us to reorganize the marketing team's workflow: each marketer's focus shifted from ad platforms to a particular product. We wanted the new version to grow as steeply as Nebula for iOS, but that was impossible without the corresponding resources.

We had also noticed that the team often faced communication difficulties. For instance, some team members could not understand why we made specific changes, or why we ran these tests and not others. This pushed us to create separate marketing mini-teams inside OBRIO, responsible for Nebula iOS, Nebula Android, and our other apps.

It may seem illogical to split versions of the same app for different operating systems into separate products. The thing is, the Android version was released much later than the iOS one. While the latter was already at the scaling stage, outpacing Tinder in the rankings and setting new records every day, the former was still at the MVP stage, and we knew neither its audience nor its economics.

Our mini-teams were relatively small, and we lacked a dedicated product manager and an analyst for every app. So the marketer was responsible for choosing the most profitable ad platforms to make sure the business goals were reached. At the mini-team level, we also identified product problems and brainstormed ideas to solve them.

We estimated the pool of available ideas according to the ICE framework. Or rather, we tried to: it turned out to be challenging to assess the ideas and their impact on the product objectively, even with a prepared grading scale.
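For context, ICE scores each idea by multiplying Impact, Confidence, and Ease on a fixed scale and sorting the backlog by the result. Below is a minimal sketch of how such scoring might look; the ideas, the 1..10 scale, and the numbers are purely illustrative, not our actual estimates:

```kotlin
// Illustrative sketch of ICE prioritization; the ideas and scores are made up.
data class Idea(
    val name: String,
    val impact: Int,      // expected effect on the target metric, 1..10
    val confidence: Int,  // how sure we are the effect will materialize, 1..10
    val ease: Int         // how cheap the idea is to build and test, 1..10
) {
    val iceScore: Int get() = impact * confidence * ease
}

fun main() {
    val backlog = listOf(
        Idea("Limit zodiac-compatibility checks", impact = 7, confidence = 4, ease = 8),
        Idea("Free basic compatibility report", impact = 6, confidence = 6, ease = 7),
        Idea("Extended onboarding questions", impact = 5, confidence = 5, ease = 6)
    )
    backlog.sortedByDescending { it.iceScore }
        .forEach { println("${it.name}: ${it.iceScore}") }
}
```

The arithmetic is trivial; the hard part, as we found, is putting honest numbers into the impact and confidence columns.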

One of our failures was limiting the most popular functionality. Nebula has a zodiac compatibility section where a user can check up to 10 zodiac pairs in one session. The idea appeared naturally: what would happen if we restricted the number of checks per day and offered extra checks as premium functionality? In hindsight, it seems obvious that motivating users through restrictions is not the best idea. But now we also have the test results in numbers to be completely sure.

We learned three lessons from this case:

If you want to make the product successful, you should communicate its value instead of forcing the user to believe it is valuable.

When you generate an idea, choose instruments that help you clearly estimate its impact and the resources it requires. It takes experience to make those instruments work properly.

Every genius idea has to be underpinned by MVP functionality that lets you test the hypothesis.

Nebula Android pipeline: the problems

To understand what changes are needed, you have to carefully analyze previous marketing results and product metrics. Pipeline analysis helps considerably in forming ideas and prioritizing tasks.

The Nebula Android pipeline differed significantly from the iOS version at every stage: install-to-trial conversion was 1.5 times lower, and trial-to-payment conversion was 2.4 times lower. These metrics strongly affected efficiency and purchases, because the trial LTV (the revenue we could expect from each user who started the trial) was too low to scale volumes with a positive ROMI (Return on Marketing Investment).
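To illustrate how these metrics interact, here is a rough back-of-the-envelope check; every figure below is invented for the example and does not reflect Nebula's actual economics:

```kotlin
// Illustrative ROMI check; every number here is made up for the example.
fun main() {
    val installs = 10_000
    val costPerInstall = 0.60     // USD spent to acquire one install
    val installToTrial = 0.10     // 10% of installs start a trial
    val trialLtv = 7.0            // expected revenue per user who starts a trial, USD

    val spend = installs * costPerInstall
    val trials = installs * installToTrial
    val revenue = trials * trialLtv

    val romi = (revenue - spend) / spend
    println("Spend: $spend, revenue: $revenue, ROMI: ${"%.0f%%".format(romi * 100)}")
    // If install-to-trial conversion or trial LTV drops by 1.5-2.4 times,
    // ROMI quickly turns negative, which is why scaling was unprofitable.
}
```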

Additionally, we discovered that over 70% of users who started the trial did so immediately after onboarding. So it was vital to communicate the product's value from the first seconds of use.

Apart from the product metrics, another potential obstacle was our Play Market score. In April 2021, it was 2.9. Users complained that the Android version was incomplete, the subscription was too expensive, and there were too many ads. A store score can either motivate users to download an app or push them to look for an alternative; in effect, it stands for reputation. A score of 2.9 is objectively not attractive to a user. Personally, I would neither trust nor want to try such a product.

We could have chosen simply to improve the app content and add new sections. Instead, we decided to focus on two metrics: conversion from install to trial and from trial to subscription. To boost them, we set out to improve the first two stages, onboarding and user activation. It gave remarkable results.

What we did

Conversion from trial to subscription

This was the first metric we started working on. We ran monetization tests, changed subscription options and prices, and experimented with variants of the sales screens. In practice, though, most test groups performed worse than the current variants, and even the ones that performed better showed only minor gains.

Back then, ordering compatibility reports was one of the most popular services among our users. A report was prepared by astrologers based on the partners' data and gave detailed information about the relationship between two people. Our idea was to offer a free basic compatibility report to our subscribers. It was also prepared by astrologers and provided accurate data, but it did not take all personal information into account and was more generalized.

The idea did not require significant development resources but boosted the conversion from trial to subscription by more than 2 percent.

Trial conversion

This metric is shaped by the app's onboarding: the user's experience during the first use of the product, and how useful and pleasant it feels.

At the time, Nebula Android had a standard onboarding flow that asked only for the user's name, place, and date of birth. On the one hand, the onboarding was clear and straightforward, and it required a minimum of personal data, which made it feel safe. On the other hand, such onboarding could suggest that we provided only basic information that could be found on other resources as well.

We decided to conduct a test. We added more questions to understand our users better and later introduce even more unique content and opportunities.

The new onboarding included extended questions about different areas of life and took more time to complete. Nevertheless, the test results were positive, and trial conversion grew by more than 1 percentage point.
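As a side note, before trusting an uplift of around one percentage point it is worth checking that it is not just noise. Below is a minimal sketch of such a check using a two-proportion z-test; the sample sizes and conversion counts are invented for illustration and are not our real test data:

```kotlin
import kotlin.math.sqrt

// Illustrative significance check for an onboarding A/B test (two-proportion z-test).
// The sample sizes and conversion counts are invented; the article only reports
// an uplift of more than one percentage point.
fun zScore(conversionsA: Int, usersA: Int, conversionsB: Int, usersB: Int): Double {
    val pA = conversionsA.toDouble() / usersA
    val pB = conversionsB.toDouble() / usersB
    val pPooled = (conversionsA + conversionsB).toDouble() / (usersA + usersB)
    val standardError = sqrt(pPooled * (1 - pPooled) * (1.0 / usersA + 1.0 / usersB))
    return (pB - pA) / standardError
}

fun main() {
    // Control: 10,000 users, 900 trials (9.0%); new onboarding: 10,000 users, 1,020 trials (10.2%)
    val z = zScore(900, 10_000, 1_020, 10_000)
    println("z = ${"%.2f".format(z)}; |z| above 1.96 means the uplift is significant at the 95% level")
}
```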

Collecting ratings in the app

Besides running tests, we also worked on other app features and content. The Play Market score remained low, but we hypothesized that it would rise as we improved the product. Still, we decided to implement a rating pop-up in the app. People are usually more willing to complain about problems than to express gratitude, so it was vital for us to collect more responses and understand whether the product was helpful or needed drastic changes. We did not expect rating collection to change our Play Market score, but when the new version with the pop-ups was released, the score jumped from 2.9 to 4.6 in less than a day.

All we needed to do was ask active users how much they liked the app. With such seemingly simple steps, we improved every pipeline stage, from the click on an ad to activation and user retention.
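For reference, one standard way to show such a rating pop-up on Android is Google's Play In-App Review API; it is shown here as an illustration, not necessarily the exact mechanism we used. A minimal sketch, assuming the review-ktx dependency is added:

```kotlin
import android.app.Activity
import com.google.android.play.core.review.ReviewManagerFactory

// Minimal sketch of an in-app rating prompt using the Play In-App Review API
// (requires the com.google.android.play:review-ktx dependency).
// Call it after the user completes a "happy" action, e.g. reads a report,
// rather than on a random screen.
fun askForReview(activity: Activity) {
    val reviewManager = ReviewManagerFactory.create(activity)
    reviewManager.requestReviewFlow().addOnCompleteListener { request ->
        if (request.isSuccessful) {
            val reviewInfo = request.result
            // Google decides whether the dialog is actually shown (quota limits),
            // so do not branch your UX on the outcome of this flow.
            reviewManager.launchReviewFlow(activity, reviewInfo)
        }
    }
}
```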

Marketing

Individual improvements on their own did not guarantee audience growth. The situation changed drastically only when we rolled out all the product changes to 100% of users. Within about a month, volumes increased fourfold.

Facebook is the primary traffic source for Nebula Android, but for quite some time the platform did not deliver the required results. Besides Facebook itself, the limiting factors included our approach to marketing and the efficiency of our ad campaigns.

We had one question in mind: how could we boost volumes from Facebook? Back then, almost all campaigns were launched with a "bid cap", which meant volumes were minimal: spend could amount to only ~5% of the daily budget over the whole life cycle of a campaign.

Meanwhile, purchases from "lowest cost" campaigns, which could boost our traffic, were expensive, and those campaigns burnt out within a few days. Still, these were exactly the campaigns we had to learn to optimize, because they could scale in the long run. Here are several rules we defined during this period:

If you have any doubts about working with lowest cost, move these campaigns into a separate account or revise the automated rules on your main one; this reduces the rules' impact on the lowest-cost campaigns. Facebook's automatic rules for switching ads on and off can interfere with optimization and prevent you from noticing positive dynamics.

Start with small budgets. I cannot give exact recommendations, but size the budget so you can expect roughly ten target actions.

Don't decide to shut a campaign down on the first day; watch what conversion rate you get over time. Our model is subscriptions, and we optimize for trials, so when starting these campaigns I watched how trials performed over the first 2-3 days. If the price was approaching the target for 2-3 days on small volumes, I raised the budget. Keep in mind that this period can be longer for you if you optimize for actions a user does not take immediately (you may be better off observing the campaign for ~7 days).

When I see positive dynamics, I do not stick to the rule that advises "do not increase the budget by more than 20%". I can double it to collect more data for the campaign. If your budgets are bigger, it is better to play it safe (see the sketch after these rules).

Duplicate the campaigns from the beginning:
your budget will be split into smaller portions across several campaigns. If some of them fail to optimize, you will lose much less than you would with a large investment in one big campaign.

There is a good chance the campaigns' indicators will differ, so you can pick a "winner" and scale it.

Besides, a single campaign with a small budget will hardly let you scale volumes quickly.
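To make the budget rules above more concrete, here is a rough sketch of how the decision logic could be codified. The thresholds, the trial-based target, and the numbers are assumptions for the example rather than a universal recipe:

```kotlin
// Sketch of the budget-scaling rule described above. The thresholds and the
// 2-3 day window are assumptions for illustration, not a universal recipe.
data class CampaignSnapshot(
    val daysRunning: Int,
    val trialsStarted: Int,
    val spend: Double,        // total spend so far, USD
    val dailyBudget: Double   // current daily budget, USD
)

// Decide the next daily budget for a lowest-cost campaign optimized for trials.
fun nextDailyBudget(c: CampaignSnapshot, targetCostPerTrial: Double): Double {
    // Don't judge the campaign on day one: let it collect data first.
    if (c.daysRunning < 2 || c.trialsStarted == 0) return c.dailyBudget

    val costPerTrial = c.spend / c.trialsStarted
    return when {
        // Cost is at or near target on small volumes: double the budget to
        // gather data faster instead of the cautious "+20% per day" rule.
        costPerTrial <= targetCostPerTrial * 1.1 -> c.dailyBudget * 2
        // Far above target after several days: cut back rather than shut down outright.
        c.daysRunning >= 3 && costPerTrial > targetCostPerTrial * 1.5 -> c.dailyBudget / 2
        // Otherwise keep observing at the current budget.
        else -> c.dailyBudget
    }
}

fun main() {
    val snapshot = CampaignSnapshot(daysRunning = 3, trialsStarted = 12, spend = 110.0, dailyBudget = 40.0)
    println(nextDailyBudget(snapshot, targetCostPerTrial = 10.0)) // ~9.2 USD per trial -> budget doubles to 80.0
}
```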

This experiment showed that such campaigns really can keep running for a whole month. Market seasonality also played a positive role. Still, we could not have increased volumes while preserving traffic payback without improving the product and rethinking the marketing strategy.

Automation

One more obstacle we faced was the constant lack of resources. We have an extremely high bar when choosing specialists; we always try to hire the best on the market, so hiring takes a long time. And making such people do routine work is clearly a waste. We came to see automation as vital, which is why we integrated AdBraze at an early stage. This habit brought us many outstanding results. It allowed us to:

  • save at least 20 hours per week
  • run more tests in a short period
  • find the best-performing creatives based on fast tests
  • scale a new top performer before competitors even noticed our growth
  • reach 100,000 downloads per day and set our team's absolute record in performance marketing at the time
  • free up marketers' time for research and self-education, which led to the launch of successful influencer and brand marketing

It took us time to realize the need for automation, but since then, we have never underestimated the results we can achieve and the resources we save with the help of modern technology.

Conclusions

Here I would like to share several conclusions:

  • Many effective ideas have already been implemented; do not be shy about using them. When you start working on something new, you want to generate a brilliant idea that will produce a breakthrough in your industry. The intention is commendable, but keep in mind that many working approaches already exist, and sometimes all you need is to apply them properly.
  • So many books and articles are devoted to soft skills for a clear reason: communication is vital. Every team runs into problems at times: developers do not understand how marketing works and why it is impossible to scale in a day; marketers do not understand why some crucial tasks cannot be done immediately; analysts launch tests, and someone does not see their purpose. It is essential to track every case where someone on the team "did not know" or "did not understand", because that is a marker of miscommunication. In our case, creating mini-teams helped us better understand the process and each other.
  • Concentration. A product cannot become successful when people devote only 10%, 20%, or even 50% of their time to it. Success requires 150% of your attention.
  • Automation is everything. In the 21st century, it is a shame to do routine work manually and waste the time of your talented employees. Use automation software wisely, and you will see growth and positive ROI.
