Read Time: 13 mins

In this post, we’re going to take an introductory look at benchmarking Gradle build performance using the gradle-profiler tool.  

By the end, you should have a basic understanding of how to use gradle-profiler to gain better insights into the performance of your Gradle build and how you might use those insights to improve Gradle build performance for your project.

What is the Gradle-Profiler Tool?

The gradle-profiler project describes itself in the following way:

“A tool to automate the gathering of profiling and benchmarking information for Gradle builds.”

What does that mean to you and your project?

Imagine you want to get a sense of how fast your Gradle build is.

You might run your target Gradle task, wait for the build to complete, and take that build time as your result.

Now, because build times are often variable, you may want to run your Gradle task multiple times and average the execution times.  So, you kick off your build, wait for it to finish, write down the total execution time, and repeat the process.  

This process of manually benchmarking your Gradle build will likely take a while.  Depending on how long your build takes to complete, and how many iterations you want to run, you could find yourself repeatedly coming back to your computer to check whether it’s time to start the next build.  Even if you stay busy with other things, this is still a drawn-out, tedious process that relies on you, as the developer, manually recording build statistics for each iteration.

It’s this process that gradle-profiler aims to automate for you.

Rather than requiring you to manually kick off each build and record the resulting build statistics, gradle-profiler lets you run a single command that repeatedly runs a specified Gradle task and records all the build stats in an easy-to-examine output format.

This takes Gradle build benchmarking from a tedious, manual task to something that can be highly automated with little need for human interaction or monitoring.

I’ve been using gradle-profiler recently in an effort to keep my primary project’s build times low, and to examine the build impact of proposed changes.  In this post, we’re going to walk through how you can start using gradle-profiler to gather benchmark data for your Gradle build, and how you may start using that data to understand the impact of changes on your project. 

Installing Gradle-Profiler

Before you can start benchmarking Gradle build performance, you’ll need to install gradle-profiler to your development machine.

You can do this in one of several ways.

Install From Source

You could clone the git repository to your local machine, and build the project from source.

→ git clone git@github.com:gradle/gradle-profiler.git
→ cd gradle-profiler
→ ./gradlew installDist

You would likely then want to add gradle-profiler/build/install/gradle-profiler/bin to your PATH, or create some kind of alias, so that you can invoke the tool by executing the gradle-profiler command.

Install With Homebrew

If you are using Homebrew on your machine, installation is quite simple.

→ brew install gradle-profiler

Other Installation Options

If building from source or installing with Homebrew isn’t a good option for you, the gradle-profiler README on GitHub describes other installation methods, such as installing via SDKMAN!.

Examples On GitHub

You can find example commands and example benchmark scenarios in my sample repo on GitHub.

Benchmarking Your Gradle Build

Now that gradle-profiler is installed, and the gradle-profiler command is available to us, let’s start benchmarking Gradle build performance for your project.

Running the Benchmarking Tool

To generate benchmarking results for our project, we need two things:

  1. The path to the project directory containing the Gradle project
  2. A Gradle task to run

With these, we can start benchmarking our build like this:

→ cd <project directory>
→ gradle-profiler --benchmark --project-dir . assemble

Let’s break down this command into its individual parts.

  • gradle-profiler – invokes the gradle-profiler tool
  • --benchmark – indicates that we want to benchmark our build
  • --project-dir . – indicates that the Gradle project is located within the current working directory
  • assemble – this is the Gradle task to benchmark 

When this command is run, the benchmarking tool will begin.  You should see output in your console indicating that warm-up and measured builds are running.

When all the builds are completed, two benchmarking artifacts should be created for you:

  1. benchmark.csv – provides benchmarking data in a simple .csv format
  2. benchmark.html – provides an interactive webpage report based on the .csv data

The output paths should look something like this:

Results written to /Users/n8ebel/Projects/GradleProfilerSandbox/profile-out
/Users/n8ebel/Projects/GradleProfilerSandbox/profile-out/benchmark.csv
/Users/n8ebel/Projects/GradleProfilerSandbox/profile-out/benchmark.html

Understanding Benchmarking Results

Once your outputs are generated, you can use them to explore the benchmarking results.

Interpreting .csv results

Here’s a sample benchmark.csv output generated by benchmarking the assemble task for a new Android Studio project.

scenario            default
version             Gradle 6.7
tasks               assemble
value               execution
warm-up build #1    45574
warm-up build #2    2149
warm-up build #3    1778
warm-up build #4    1772
warm-up build #5    1436
warm-up build #6    1474
measured build #1   1247
measured build #2   1370
measured build #3   1267
measured build #4   1217
measured build #5   1305
measured build #6   1103
measured build #7   973
measured build #8   1007
measured build #9   999
measured build #10  1151

CSV benchmark results for assemble task
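If you want to post-process these numbers rather than read them by hand, the .csv output is easy to parse. Here’s a minimal Python sketch, assuming the label-plus-value row layout shown above (the sample string is a trimmed stand-in for a real benchmark.csv), that pulls out just the measured build times:

```python
import csv
import io

# A trimmed stand-in for benchmark.csv: a label column followed by
# one value column per scenario, matching the layout shown above.
sample = """scenario,default
version,Gradle 6.7
tasks,assemble
value,execution
warm-up build #1,45574
measured build #1,1247
measured build #2,1370
"""

# Keep only the measured-build rows; values are execution times in ms.
measured = [
    int(row[1])
    for row in csv.reader(io.StringIO(sample))
    if row and row[0].startswith("measured build")
]
print(measured)  # [1247, 1370]
```

For a real file, you would replace the `io.StringIO(sample)` stand-in with an open file handle for your benchmark.csv.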

This report is very streamlined and highlights only a few things.  The most interesting data points are likely:

  • Which version of Gradle was used
  • Which Gradle tasks were run
  • The task execution time, in milliseconds, for the warm-up and measured builds

Notice that the first warm-up build took significantly longer than every other build?  Is this a problem?  Is something wrong with your project’s configuration?

By default, gradle-profiler uses a warm Gradle daemon when measuring build times.  If you’re not familiar, the Gradle daemon runs in the background to avoid repeatedly paying JVM startup costs.  It can drastically improve the performance of your Gradle builds.

So, for this first warm-up build, task execution time is much longer as the daemon is started.  After that, you can see that subsequent builds are much faster.

If we ignore the warm-up builds, and look only at the set of measured builds, we see that the build times are consistently fast as one might expect for a new project.  
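To put a single number on “consistently fast”, we can summarize the measured builds ourselves. A quick sketch using Python’s statistics module and the measured-build times from the table above:

```python
import statistics

# Measured build times (ms) from the benchmark.csv output above.
measured = [1247, 1370, 1267, 1217, 1305, 1103, 973, 1007, 999, 1151]

mean = statistics.mean(measured)
median = statistics.median(measured)

print(f"mean={mean} ms, median={median} ms")  # mean=1163.9 ms, median=1184.0 ms
```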

Interpreting HTML results  

In addition to being an interactive webpage, the benchmark.html results provide more data than the .csv file.  By viewing the results this way, you’ll have access to:

  • Mean, Median, StdDev, and other statistics from the measured build results
  • Gradle argument details
  • JVM argument details
  • And more…
HTML benchmark results for a single assemble task

The HTML results provide a graph of each warm-up and measured build to help you visually understand performance over time.

If you aren’t interested in automating the analysis of your benchmarking results, then viewing the HTML results is often the most convenient way to quickly understand how long your build takes, and to quickly see if there are any outlier executions to examine.

This HTML view becomes even more useful when benchmarking multiple scenarios at the same time as we’ll see later on.

Detecting Gradle Performance Issues

Now that we have some understanding of the output generated by gradle-profiler benchmarking, let’s explore how we might begin to use that benchmark data to compare the performance of our Gradle builds.

The simplest approach is generally as follows:

  • Collect benchmarking data for your current/default Gradle configuration
  • Change your Gradle configuration
  • Collect benchmarking data for updated configuration
  • Compare the results

We could use this approach to compare the impact of caching on a build.  We might compare the performance of clean builds versus incremental builds. 

Any tweak to our build settings or project structure could be examined through this type of comparison.

Comparing clean and up-to-date build performance

Let’s walk through a quick example of this kind of analysis using gradle-profiler.  We’re going to compare the impact of an up-to-date build over a clean build.

First we’ll benchmark up-to-date builds of our assemble task and collect the results.

→ gradle-profiler --benchmark --project-dir . assemble

We’re referring to this as the up-to-date scenario because, after the first execution of the assemble task, each task should be up-to-date and subsequent builds should be extremely fast since there is nothing to rebuild.

Next, we’ll benchmark our clean build.

→ gradle-profiler --benchmark --project-dir . clean assemble

In the clean build scenario, we discard previous outputs before executing our assemble task, so we should expect the clean build to take longer than the up-to-date build.

Once we’ve generated both sets of output, we can compare the benchmarked performance. I’ve taken the .csv results from each of those benchmarking executions and combined them into the following table for comparison.

                    Up-To-Date    Clean
scenario            default       default
version             Gradle 6.7    Gradle 6.7
tasks               assemble      clean assemble
value               execution     execution
warm-up build #1    14330         26492
warm-up build #2    1887          10345
warm-up build #3    1546          9853
warm-up build #4    1440          8757
warm-up build #5    1383          8705
warm-up build #6    1301          7377
measured build #1   1187          7268
measured build #2   1230          7378
measured build #3   1118          7750
measured build #4   1104          6707
measured build #5   1105          6635
measured build #6   1082          7542
measured build #7   1044          7066
measured build #8   990           6398
measured build #9   993           6341
measured build #10  1037          7416

Comparing an up-to-date and clean build

With this, we can see that our up-to-date build takes ~1 second while our clean build is taking ~7 seconds. This seems in line with expectations about the relative performance of these two build types.
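We can make that comparison concrete with a quick calculation. A sketch using the measured-build times from the table above:

```python
import statistics

# Measured build times (ms) from the comparison table above.
up_to_date = [1187, 1230, 1118, 1104, 1105, 1082, 1044, 990, 993, 1037]
clean = [7268, 7378, 7750, 6707, 6635, 7542, 7066, 6398, 6341, 7416]

mean_up = statistics.mean(up_to_date)  # 1089.0 ms
mean_clean = statistics.mean(clean)    # 7050.1 ms

print(f"clean builds are {mean_clean / mean_up:.1f}x slower")  # ~6.5x
```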

Comparing caching impact

We’ve seen the performance of an up-to-date build.  We’ve seen the impact of doing a clean build. 

Let’s expand on our example by now benchmarking the impact of enabling Gradle’s local build cache.

First, we’ll once again benchmark a clean build without enabling the build cache.

→ gradle-profiler --benchmark --project-dir . clean assemble

Next, we’ll enable Gradle’s local build cache by adding the following to our gradle.properties file:

org.gradle.caching=true

Now, we can re-run our clean build benchmark scenario; this time with the cache enabled.

→ gradle-profiler --benchmark --project-dir . clean assemble

Once again, we can compare results to get a sense of how enabling the local Gradle build cache can benefit build performance.

                    No Caching      Caching
scenario            default         default
version             Gradle 6.7      Gradle 6.7
tasks               clean assemble  clean assemble
value               execution       execution
warm-up build #1    26492           24568
warm-up build #2    10345           6838
warm-up build #3    9853            4901
warm-up build #4    8757            5145
warm-up build #5    8705            4382
warm-up build #6    7377            5309
measured build #1   7268            4825
measured build #2   7378            4674
measured build #3   7750            4428
measured build #4   6707            4577
measured build #5   6635            4053
measured build #6   7542            4236
measured build #7   7066            4388
measured build #8   6398            4131
measured build #9   6341            5818
measured build #10  7416            4016

Merged CSV results comparing the build speed impact of enabling the Gradle build cache
Merged CSV results comparing the build speed impact of enabling the Gradle build cache

From these results, we see that enabling local caching seems to improve the performance of clean builds from ~7 seconds to ~4.5 seconds.
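Again, a quick calculation makes the improvement concrete. Using the measured-build times from the table above:

```python
import statistics

# Measured clean-build times (ms), without and with the local build cache.
no_cache = [7268, 7378, 7750, 6707, 6635, 7542, 7066, 6398, 6341, 7416]
cached = [4825, 4674, 4428, 4577, 4053, 4236, 4388, 4131, 5818, 4016]

improvement = 1 - statistics.mean(cached) / statistics.mean(no_cache)
print(f"caching reduced the mean clean build time by {improvement:.0%}")  # ~36%
```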

Now, these build times are artificially fast as it’s a simple project.

However, this approach is applicable to your real-world projects, and these general results are what one might expect from a well-configured project; up-to-date < clean with caching < clean w/out caching.

Configuring Benchmarking Behavior

When running these benchmarking tasks, you may find yourself wanting greater control over how benchmarking is carried out.  A few of the common configuration properties you may need to change include:

  • changing the project directory
  • modifying the output directory
  • updating the number of build iterations

We’re going to quickly examine how you can update these properties when executing your benchmarking task.

Changing Project Directory

Throughout these examples, we’ve been operating as if we are running gradle-profiler from within the project directory.  

Here, we see that after the --project-dir flag, we pass . to signify the current directory.

→ gradle-profiler --benchmark --project-dir . assemble

If we want to run gradle-profiler from any other directory, we are free to do that.  We just need to update the path to the project directory.

In this updated example, we change directories into our user directory, and pass Projects/GradleProfilerSandbox as the path to our project directory.

→ cd
→ gradle-profiler --benchmark --project-dir Projects/GradleProfilerSandbox/ clean assemble

Changing Output Directory

When gradle-profiler is run, by default, it will store output artifacts in the current directory.  If you would prefer to specify a different output directory, you can do so using --output-dir.

gradle-profiler --benchmark --project-dir Projects/GradleProfilerSandbox/ --output-dir Projects/benchmarking/ clean assemble

This may come in handy if you want to automate the running of benchmarking tasks and would like to keep all your outputs collected within a specific working directory.

Controlling Build Warm-Ups and Iterations

Another pair of useful configuration options are --warmups and --iterations.  These two flags allow you to control how many warm-up builds and measured builds to run.

This might be useful if you want to receive your results more quickly, or if you want more data points, and hopefully greater confidence, in your benchmarking results.

If we wanted to have 10 warm-up builds and 15 measured builds, we can start our benchmarking task like this.

→ gradle-profiler --benchmark --project-dir . --warmups 10 --iterations 15 assemble

Benchmarking Complex Gradle Builds

We’ve really only scratched the surface of what gradle-profiler is capable of.  Real world build scenarios are varied, and often quite complex.  Ideally, we’d be able to capture these complexities, and benchmark Gradle build performance for these real-world conditions.

Let’s take a look at how we can define benchmark scenarios that give greater control and flexibility into what is benchmarked.

Defining a Benchmark Scenario

As our command-line inputs become more complex, they may become difficult to manage.

Look at the following command.

gradle-profiler --benchmark --project-dir Projects/GradleProfilerSandbox/ --output-dir Projects/benchmarking/ --warmups 10 --iterations 15 clean assemble

This command is still executing a fairly simple benchmarking task, but has already become quite long and unwieldy to execute from the command line every time.

This is especially true if we want to start benchmarking multiple build types, or want to simulate complex incremental build changes.

To help define complex build scenarios, the gradle-profiler tool provides a mechanism for encapsulating all of the configuration for a build scenario into a .scenarios file.  This helps us organize our scenarios in a single place and makes it easier to benchmark multiple scenarios.

To define a simple .scenarios file, we’ll do the following.

  • First, we’ll create a new file named benchmarking.scenarios.
  • Next, we’ll define a scenario for our up-to-date assemble task.
assemble_no_op {
  tasks = ["assemble"]
}

In this small configuration block, we’ve defined a scenario named assemble_no_op that will run the assemble task when executed. We’ve used the “no op” suffix on the scenario name because this will test our up-to-date build in which tasks shouldn’t have to be re-run each time.

Benchmarking With a Scenarios File

With our scenario file defined, we can benchmark this scenario by using the --scenario-file flag.

gradle-profiler --benchmark --project-dir . --scenario-file benchmarking.scenarios

From this, we get an HTML output similar to this.

HTML output from running the assemble_no_op benchmark scenario

We can see that the assemble_no_op name used to define our scenario is automatically used as the scenario name in the output.

In the next sections, we’ll see how to change this to something more human-readable and why this can be important.

Configuring Build Scenarios

Within our .scenarios file, there are quite a few configuration options we can use to control our benchmarked scenarios.

We can provide a human-readable title for our scenario:

assemble_no_op {
  title = "Up-To-Date Assemble"
  tasks = ["assemble"]
}

We can provide multiple Gradle tasks to run:

assemble_clean {
  title = "Clean Assemble"
  tasks = ["clean", "assemble"]
}

We can pass in Gradle build flags, such as those for parallel execution or the build cache:

assemble_clean {
  title = "Clean Assemble"
  tasks = ["clean", "assemble"]
  gradle-args = ["--parallel"]
}

We can also explicitly define the number of warm-ups and iterations so they don’t have to be passed from the command line:

assemble_clean {
  title = "Clean Assemble"
  tasks = ["clean", "assemble"]
  gradle-args = ["--parallel"]
  warm-ups = 3
  iterations = 5
}

This is not an exhaustive list of scenario configurations.  You can find more examples in the gradle-profiler documentation.

Benchmarking Multiple Build Scenarios

One of the primary benefits of using a .scenarios file is that we can define multiple scenarios within a single file, and benchmark multiple scenarios at the same time.

For example, we could compare our up-to-date, clean, clean w/caching builds from a single benchmarking execution, and receive a single set of output reports comparing all of them.

To do this, we first define each of our unique build scenarios within our benchmarking.scenarios file.

assemble_clean {
  title = "Clean Assemble"
  tasks = ["clean", "assemble"]
}

assemble_clean_caching {
  title = "Clean Assemble w/ Caching"
  tasks = ["clean", "assemble"]
  gradle-args = ["--build-cache"]
}

assemble_no_op {
  title = "Up-To-Date Assemble"
  tasks = ["assemble"]
}

Then we continue to invoke gradle-profiler in the same way; by specifying the single .scenarios file.

gradle-profiler --benchmark --project-dir . --scenario-file benchmarking.scenarios

From this, we will receive a merged report that compares the performance of each scenario.

HTML output for multiple build scenarios

Scenarios will be run in alphabetical order based on the name used in the scenario definition; not the human-readable title.  

When viewing a report with multiple scenarios, you can select a scenario as the baseline.  This will update the report to display a +/-% on each build metric where the +/-% is the statistical difference between the current scenario and the baseline scenario.

What does that look like in practice?

In this example, we’ve selected the clean build w/out caching scenario as our baseline.

HTML output for multiple build scenarios with a selected baseline

With the baseline set, we see that the mean build time was reduced ~4% for the clean build with caching scenario and ~80% for the up-to-date build scenario.

By setting a baseline, you let the tool do all the statistical analysis for you leaving you free to interpret and share the results.
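Under the hood, that +/-% is essentially the relative difference between a scenario’s mean build time and the baseline’s mean. A rough sketch (using the mean build times from the earlier .csv tables, so the exact percentages differ from the screenshot, which came from a different benchmarking run):

```python
def pct_vs_baseline(scenario_mean_ms: float, baseline_mean_ms: float) -> float:
    """Percent change of a scenario's mean build time relative to the baseline."""
    return (scenario_mean_ms - baseline_mean_ms) / baseline_mean_ms * 100

# Baseline: clean build (~7050.1 ms mean); scenario: up-to-date build (~1089.0 ms mean).
print(f"{pct_vs_baseline(1089.0, 7050.1):+.0f}%")  # about -85%
```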

Benchmarking Incremental Gradle Builds

The last concept we’re going to touch on is that of benchmarking incremental builds.  These are builds in which we need to re-execute a subset of tasks because of file changes.  These files could change due to source file changes, resource updates, etc.

Incremental scenarios can be very important for real-world benchmarking of Gradle build performance, because in our day-to-day work, we’re often performing incremental builds as we make a small code change, and redeploy for testing.

When using gradle-profiler we have quite a few options available to us for defining and benchmarking incremental build scenarios.

If building with Kotlin or Java, we might be interested in:

  • apply-abi-change-to
  • apply-non-abi-change-to

If building an Android application, we might use:

  • apply-android-resource-change-to
  • apply-android-resource-value-change-to
  • apply-android-manifest-change-to
  • apply-android-layout-change-to

With these options, and others, we can simulate different types of changes to our projects to benchmark real world scenarios.

Here, we’ve defined a new incremental scenario in our benchmarking.scenarios file.

incremental_build {
  title = "Incremental Assemble w/ Caching"
  tasks = ["assemble"]

  apply-abi-change-to = "app/src/main/java/com/goobar/gradleprofilersandbox/MainActivity.kt"
  apply-android-resource-change-to = "app/src/main/res/values/strings.xml"
  apply-android-resource-value-change-to = "app/src/main/res/values/strings.xml"
}

This scenario will measure the impact of changing one Kotlin file, and of adding and modifying string resource values.

The resulting benchmark results look something like this.

HTML benchmark results including the incremental build scenario; the chart shows the up-to-date build as the fastest of the four build types

As expected, the incremental build was measured to be faster than a clean build, but slower than an up-to-date build.

This is a very simple example.  For your production project, you might define multiple different incremental build scenarios. 

If you’re working in a multi-module project, you might want to measure the incremental build impact of changing a single module versus multiple modules.  You might want to measure the impact of changing a public api versus changing implementation details of a module.  There are many things you may want to measure in order to improve the real world performance of your build.  


What’s Next?

At this point, hopefully you have a better understanding of benchmarking Gradle build performance using gradle-profiler, and how to start using gradle-profiler to improve the performance of your Gradle builds.

There’s still more that can be done with gradle-profiler to make it more effective for detecting build regressions and to make it easier to use for everyone on your team. 

I’ll be exploring those ideas in upcoming posts.

If you’d like to start learning more today, be sure to check out gradle-profiler on GitHub and Tony Robalik’s post on Benchmarking builds with Gradle-Profiler.

You can find more Gradle-related posts here on my blog.
