Android Blog Series – Accelerating the Android Build (Part 4)

Written by: Electric Bee

Edit (October 28, 2013): New data on Accelerating the Android Build here.

In previous posts in our Android Software Delivery blog series, we've discussed the high-level business and operational challenges of delivering and releasing Android products (Part 1 and Part 2). We've also discussed the challenges device and chipset makers face when building the software included in the Android-based stack and platform, here.

This post is about software techniques for improving Android platform build performance: given a fixed hardware configuration, what can be done to improve the overall performance and throughput of the Android platform build? The post is heavily based on data from the "Accelerating the Android Build" presentation to be delivered by CloudBees at the 2013 Android Builders Summit on February 18, 2013, and will be updated with references to the recording, if and when available.

Where are we today? What’s the baseline Android build time?

Before we move on, let's make one thing very clear: the Google engineers working on the Android build system have done a very good job optimizing the build for large parallelism on big multi-core machines. Below is the baseline graph of running the Android build with GNU Make on a 48-core, 128GB RAM server, with time on the Y-axis and the degree of parallelism on the X-axis. With ccache enabled you see improved build times, but also that the relative improvement fades as you scale up the parallelism, indicating possible bottlenecks in ccache itself.
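That curve shape - big gains from the first few cores, flattening after that - is what Amdahl's law predicts for a build with a fixed serial portion. Here is a minimal sketch of that model; the serial and parallel durations are illustrative numbers, not measurements from the graphs above:

```python
# Illustrative Amdahl's-law model of build wall time vs. core count.
# serial_min and parallel_min are invented figures, not measured data.

def build_time(cores, serial_min=5.0, parallel_min=55.0):
    """Wall time (minutes) for a build with a fixed serial phase
    plus perfectly parallelizable compile work."""
    return serial_min + parallel_min / cores

for cores in (1, 8, 16, 32, 48):
    print(f"{cores:2d} cores: {build_time(cores):5.1f} min")
```

However many cores you add, the total never drops below the serial phase, which is exactly why the serial stretches in the build become the next target for optimization.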

Isn’t that fast enough? Who cares about accelerating the Android build?

In a late November 2012 post, I summarized a visit from a well-known global mobile device maker with three takeaways: 1. Time-to-market is critical, 2. Quality cannot be neglected, and 3. Private development/build clouds are happening. At that meeting these corporate representatives also told us about an executive mandate to shorten lead times throughout their development lifecycle, with one KPI defined as "10 MLOC should build in < 5 minutes" (in November 2012 their baseline was ~1 hour…). Another company that CloudBees is working with has thousands of Android developers doing ~50,000 Android developer builds per week. Assuming only a single minute of reduced build time in such an environment, that works out to ~800 hours per week in time savings! Time that could instead be spent adding new features and functionality to products, rather than wasted waiting for builds to complete. It's also worth pointing out that most Android device makers add significant proprietary customization to the Android-based software stack, which significantly impacts the Android build environment. In many Android environments, we've seen that these patches tend to degrade build performance and severely affect how well the build is optimized for higher levels of parallelization and scale.
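The arithmetic behind that ~800-hour figure is straightforward, using the build count and the one-minute saving quoted above:

```python
builds_per_week = 50_000         # Android developer builds per week
saved_minutes_per_build = 1      # assume just one minute shaved off each build

saved_hours = builds_per_week * saved_minutes_per_build / 60
print(f"~{saved_hours:.0f} hours saved per week")  # → ~833 hours saved per week
```

At that scale, every additional minute saved per build recovers another twenty-plus engineer-weeks of waiting time each week.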

So let’s take a look at the Android platform build, do we spot any opportunities for improvements?

The below screenshot shows an ElectricInsight visualization from a run of the Android build through Electric Make, indicating where the time is spent and where the bottlenecks are, and hence where there are opportunities for improvement. Cores, or parallel threads, are listed vertically in this picture, and time in minutes runs along the horizontal axis. Each small box in the visualization represents an individual target: a compile, a link step, an I/O operation, or whatever else might be happening inside the build process. When studying the picture, it's fairly simple to identify a couple of areas for potential improvement:

1. There is a long serial phase at the beginning of the build, where the purple color indicates makefile parse time: overhead from the build system as it decides what work needs to be done, and in what order.
2. There are significant gaps at the end of the build, indicating (superfluous?) serializations where explicit dependencies in the build definition prohibit more aggressive parallelization.

What if we were to throw more cores at the build - what would be the effect? We can use the Longest Serial Chain and ElectricSimulator reports from ElectricInsight to help answer this question; they show that the best possible time is ~15m, no matter how many cores we throw at it.

Why doesn't ccache help more? While we're studying these reports, let's also try to answer why ccache isn't helping us more than we had perhaps hoped. In the above picture you see that compile time makes up a significant portion of the total build time, ~75% - but there is nothing ccache can do about the remaining 25% of the workload. You also see that the average compile time is ~1.3s across the 17,566 compiles, so compiles are already relatively fast.
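Those two observations put an upper bound on what ccache can deliver: even a perfect cache only removes the compile fraction of the build. A hedged back-of-the-envelope using the ~75%/25% split above (the 60% hit rate below is a made-up example, not a measurement):

```python
def ccache_bound(compile_fraction, hit_rate=1.0):
    """Best-case fraction of the original build time left if ccache
    eliminates the hit_rate share of compile work entirely. Optimistic:
    real cache hits still cost hashing and I/O."""
    return (1 - compile_fraction) + compile_fraction * (1 - hit_rate)

# With ~75% of the time spent compiling, a perfect cache still leaves 25%:
print(f"perfect cache: {ccache_bound(0.75):.0%} of original time")
# An assumed 60% hit rate leaves considerably more:
print(f"60% hit rate:  {ccache_bound(0.75, 0.60):.0%} of original time")
```

So a 4x speedup is the theoretical ceiling for ccache on this workload, and the non-compile 25% (parsing, linking, packaging) is untouched regardless of hit rate.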

Solving the problem of Long Parse Time

What if we could avoid parsing the makefiles every time, and instead reuse parse results from a previous build with identical input, i.e. unchanged makefiles, command line, and environment?

Parse Avoidance feature of CloudBees Accelerator 7.0

In the upcoming release of CloudBees Accelerator 7.0, the Parse Avoidance feature adds the capability to cache and store parse results for reuse in later identical builds. Let's take a look at the Android build through ElectricInsight with this feature enabled: as you can see, we have pretty much eliminated the initial long serial purple makefile parse job, shortening the build time by roughly 2m!
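The idea can be sketched as a content-addressed cache: hash everything that influences the parse (makefile contents, command line, environment) and reuse the stored result on a hit. This is only an illustration of the principle, not how Accelerator implements it; every name here is hypothetical:

```python
import hashlib
import json

_parse_cache = {}  # hypothetical in-memory cache of parse results

def cache_key(makefile_text, cmdline, env):
    """Key on every input that can change the parse result."""
    blob = json.dumps([makefile_text, cmdline, sorted(env.items())])
    return hashlib.sha256(blob.encode()).hexdigest()

def parse_makefile(makefile_text, cmdline, env, do_parse):
    key = cache_key(makefile_text, cmdline, env)
    if key in _parse_cache:
        return _parse_cache[key]       # identical inputs: skip the parse
    result = do_parse(makefile_text)   # the expensive part
    _parse_cache[key] = result
    return result
```

The point of keying on the command line and environment as well as the makefile text is correctness: change any input that could alter the parse, and the cache misses and a fresh parse runs.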

Solving the problem of Unnecessary Serializations

What if we could have a system that automatically removes all superfluous dependencies from a build, with no manual intervention, and as such allows much more aggressive parallelization?

Dependency Optimization feature of CloudBees Accelerator 7.0

In the upcoming release of CloudBees Accelerator 7.0, the Dependency Optimization feature automatically eliminates and prunes superfluous dependencies from any make-based build. Let's take a look at the Android build through ElectricInsight with this feature also enabled: as you can see, those serializations at the very end of the build turned out to be due to superfluous dependencies, carving another minute or so off the overall build time.
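In graph terms, a superfluous dependency is an edge that serializes targets that could actually run concurrently, and removing it shortens the longest serial chain, the lower bound on wall time with unlimited cores. A toy illustration of that effect (invented target names and durations, not Accelerator's actual algorithm):

```python
# Toy DAG of build targets with per-target durations in seconds.
# An edge in `deps` means "must finish before". All figures invented.
durations = {"parse": 120, "libA": 90, "libB": 80, "image": 30}

def critical_path(deps):
    """Length of the longest chain of dependent targets: the lower
    bound on wall time no matter how many cores are available."""
    memo = {}
    def finish(target):
        if target not in memo:
            memo[target] = durations[target] + max(
                (finish(d) for d in deps.get(target, [])), default=0)
        return memo[target]
    return max(finish(t) for t in durations)

# A superfluous edge libA -> libB serializes two independent libraries:
before = {"libA": ["parse"], "libB": ["parse", "libA"], "image": ["libA", "libB"]}
after  = {"libA": ["parse"], "libB": ["parse"], "image": ["libA", "libB"]}
print(critical_path(before))  # 120 + 90 + 80 + 30 = 320
print(critical_path(after))   # 120 + 90 + 30 = 240
```

Pruning one edge lets libA and libB build in parallel, cutting the critical path even though the total amount of work is unchanged.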

Summary and Conclusion

When using CloudBees Accelerator 7.0 with its Parse Avoidance and Dependency Optimization features, the vanilla Android build time in our environment has now been reduced to about 12m30s, with no manual intervention or build-definition modification: a total reduction of about 3m, or 20%! In an undisclosed Android device-maker customer environment with a customized Android software stack, CloudBees Accelerator 7.0 has been able to bring a ~23m build down to ~15m, a reduction of more than 8m, or close to 40%! Below are graphs comparing CloudBees Accelerator 7.0 vs. GNU Make when building the vanilla Android build; it's worth pointing out that CloudBees Accelerator 7.0 on 32 cores gives better performance than GNU Make on 48 cores. Below are graphs comparing CloudBees Accelerator 7.0 vs. GNU Make on the same build, with ccache enabled.
