Measuring Java Code Coverage from a Running Jar
In Java projects, JaCoCo provides an effective way of measuring test coverage. Most IDEs support it out of the box, and any type of run can produce coverage data. In most projects, it is easy to set it up to run alongside standard JUnit or Spock tests. However, it is also possible to attach it to a running application, where you can record coverage of end-to-end tests, UI tests, and manual tests. This post explores how we capture code coverage from these kinds of test runs.
Using Code Coverage Effectively
Before talking about how to measure it, it is important to emphasize that code coverage as a goal is a mistake, for the following reasons:
- It punishes good error handling. Many error paths are defensive and not worth unit-testing in detail.
- It encourages superficial testing. Tests should focus on meaningful assertions and expected behavior, not just executing lines of code.
- It's a weak proxy for quality. High coverage doesn't necessarily mean reliable or maintainable code.
Instead, code coverage should be used as an indicator: a signal that helps you spot untested areas or unexpected trends, not a target to chase. For example, a sudden drop in coverage might indicate missing tests for new features, while a steady increase can show improving discipline, but neither tells you whether the tests are good.
In the contracting world, customers usually want a code coverage target, as this helps assure them that the code is of good quality. This is completely understandable, as many software vendors have left customers with bad, unmaintainable code. In these cases, the general advice is to focus the coverage goals on the domain model, as it can be effectively unit tested.
Measuring Coverage of a Running Jar
Running a normal jar file can be done with the following command:

```shell
java -jar app.jar
```
To attach the JaCoCo agent, you simply add:

```shell
java -javaagent:jacocoagent.jar=destfile=jacoco.exec,output=file,dumponexit=true -jar app.jar
```
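If the application is launched through Gradle rather than a raw `java -jar`, the same agent can be wired into the run task. This is a minimal sketch, assuming a standard `application` plugin setup; the task name, agent jar location, and destfile path are assumptions, not from our project:

```groovy
// Sketch: attach the JaCoCo runtime agent to Gradle's `run` task.
// Agent jar location and destfile path are assumptions.
tasks.named("run", JavaExec) {
    jvmArgs "-javaagent:${rootDir}/jacocoagent.jar=destfile=${buildDir}/jacoco/e2e.exec,output=file,dumponexit=true"
}
```

The same `jvmArgs` line works on any `JavaExec`-typed task, such as Spring Boot's `bootRun`.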
The arguments do the following:
- `javaagent:jacocoagent.jar` specifies that the JaCoCo agent should be used. It should be the runtime variant of the JaCoCo agent JAR (not the agent module used by Maven/Gradle). It is usually easiest to copy it into the project with `wget https://repo1.maven.org/maven2/org/jacoco/org.jacoco.agent/0.8.12/org.jacoco.agent-0.8.12-runtime.jar -O jacocoagent.jar`.
- `destfile=jacoco.exec,output=file` specifies that the run should output a file called `jacoco.exec`.
- `dumponexit=true` specifies that the file should be generated when the jar shuts down. Alternatively, you can start and end coverage recording over a TCP connection.
This way, the output file `jacoco.exec` can be used to generate a report of the run. You can stop here if you only want to report on the coverage of a specific run. Alternatively, you can use the JaCoCo Report Aggregation Plugin (`jacoco-report-aggregation`) to combine this report with other reports into a single report.
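To turn the `.exec` file into a readable report without the aggregation plugin, a standalone report task is enough. A minimal sketch, assuming the `java` and `jacoco` Gradle plugins are applied and the exec file sits in the project root; the task name and paths are assumptions:

```groovy
// Sketch: render jacoco.exec from an e2e run as an HTML report.
tasks.register("e2eCoverageReport", JacocoReport) {
    executionData(file("jacoco.exec"))
    classDirectories.from(sourceSets.main.output.classesDirs)
    sourceDirectories.from(sourceSets.main.allSource.srcDirs)
    reports {
        html.required = true
    }
}
```

Running `./gradlew e2eCoverageReport` then writes the HTML report under the build directory.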
Setting It Up with End-to-End Tests
We successfully used this in a project with three different test formats: JUnit, Cypress, and Bruno. Originally, only the JUnit tests reported code coverage. After the following change, we report on all test formats:
- JUnit tests run as normal.
- The application is built as a jar and started with the attached `jacocoagent.jar`.
- The Cypress and Bruno tests are run in sequence.
- The application is stopped to generate the report (`dumponexit=true` takes care of this).
- The output `jacoco.exec` is copied into the main project next to the `.exec` files from JUnit.
- The JaCoCo Report Aggregation Plugin is used to combine the `.exec` files into a final report.

Use the following to specify that all `.exec` files should be included:

```groovy
jacocoAggregation {
    executionData.from(fileTree(dir: ".", include: "**/*.exec"))
}
```
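The copy step above can be scripted rather than done by hand. A small sketch, assuming the e2e run writes its exec file to an `e2e/` directory; both paths and the task name are assumptions:

```groovy
// Sketch: copy the e2e exec file next to the JUnit ones before aggregating.
tasks.register("copyE2eExec", Copy) {
    from("e2e/jacoco.exec")
    into(layout.buildDirectory.dir("jacoco"))
}
```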
This setup measures all the different test types, and it also captures the code run during startup. We run this setup for each PR, and it performs well.
Conclusion
Measuring code coverage from a running jar opens up new possibilities for understanding your test suite's effectiveness. By attaching the JaCoCo agent to your application, you can capture coverage from end-to-end tests, API tests, and even manual testing sessions, giving you a more complete picture than unit tests alone.
The setup is straightforward: add the agent when starting your jar, run your tests, and aggregate the results. In our project, this approach revealed that our Cypress tests were exercising critical integration paths that unit tests couldn't reach, while also highlighting startup code that was previously invisible to our coverage metrics.
Remember: use coverage as a diagnostic tool, not a success metric. The real value isn't hitting a percentage target; it's discovering which areas of your codebase lack meaningful testing and making informed decisions about where to invest your testing efforts.