Setting Up Test Coverage Profiling for C or C++
There is a wealth of different profilers for C and C++, both on Linux and Windows. They each have their respective advantages and disadvantages. On this page, we will show you the most commonly used profilers and how to use them.
We suggest you always start with a profiler that is compatible with your compiler:
- gcc: lcov
- clang: llvm-cov
- Microsoft Visual C++: CodeCoverage.exe
- Various other compilers, including embedded: Bullseye
In some cases, these profilers may impact the performance of your system too much, e.g. when your code runs on highly resource-constrained embedded devices. For these cases, there are more specialized commercial profilers available. In general, we recommend you first try Bullseye in these cases, as its instrumentation technique allows you to compile with all compiler optimizations enabled.
For embedded software, specialized hardware profilers are also an option. These have no performance impact at all and do not require any data to be written to the file system.
If none of the mentioned options works for you, please contact us. We are happy to discuss further options with you.
gcc and lcov
You can instruct gcc to instrument your binaries during compilation. Simply specify the `--coverage` flag both during compilation and during linking. You must use a debug build, and most compiler optimizations must be disabled, to get useful coverage data. Use `-g -Og` to enable debug information and only those optimizations that do not interfere with debug information.
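For example, a minimal sketch of such a build (the file name `main.c` and the binary name `app` are placeholders) might look like this:
# Compile and link with coverage instrumentation, debug information,
# and only debug-friendly optimizations.
gcc --coverage -g -Og -c main.c -o main.o
gcc --coverage -g -Og main.o -o app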
Use clang for Better Performance
clang uses a different method to obtain coverage that does not require an unoptimized debug build. Instead you can build your instrumented binary with all compiler optimizations enabled. Should you notice performance problems after instrumenting your code with gcc, you can try clang's test coverage capabilities instead.
Additional Flags for Dynamically Loaded Code
If your application is dynamically loading code at runtime (e.g. via `dlopen`), you must supply this additional command-line flag to gcc during compilation and linking:
-Wl,--dynamic-list-data
Otherwise, no or only partial `.gcda` files will be written for dynamically loaded code.
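As an illustrative sketch, assuming a plugin `plugin.c` that is loaded via `dlopen` by a main program `main.c` (all names are placeholders), the build could look like this:
# Instrument the shared library that will be loaded at runtime.
gcc --coverage -g -Og -fPIC -shared plugin.c -o libplugin.so
# Instrument the loading application and export the data symbols
# needed for coverage of dynamically loaded code.
gcc --coverage -g -Og -Wl,--dynamic-list-data main.c -ldl -o app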
gcc will generate one `.gcno` file per object file. It contains the information needed to parse the test coverage data generated during your tests. Archive these files for later use; they are not needed to run the created binary.
Run your tests on the instrumented binary as usual. During normal program exit, multiple `.gcda` files will be written to disk (one per object file). These contain the coverage information in a binary format.
Terminate Your Process Gracefully
Since coverage is being written to disk when your process ends, a graceful shutdown is required. Otherwise, no or incomplete coverage information will be written.
In particular, this means that you must never use SIGKILL (kill -9) to kill the process abruptly, since the process cannot react to it. Send SIGTERM or SIGINT instead, and make sure your application handles the signal and shuts down via a normal exit.
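A minimal sketch of a graceful shutdown from a test script (the binary name `app` is a placeholder, and it is assumed that your application handles the signal and exits normally):
# Start the instrumented binary in the background and remember its PID.
./app &
APP_PID=$!
# ... exercise the application with your tests ...
# Request a graceful shutdown so the exit handlers can write the .gcda files,
# then wait for the process to finish.
kill -TERM "$APP_PID"
wait "$APP_PID"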
Change the Location of the .gcda Files
During the execution of the instrumented binary, the `.gcda` files are written to the same location where the `.gcno` files were created during the build. This is usually not desirable, as the build system will have a very different directory layout than the test system. You can use the environment variables `GCOV_PREFIX` and `GCOV_PREFIX_STRIP` to change the output directory of the `.gcda` files.
Example:
export GCOV_PREFIX="/coverage"
export GCOV_PREFIX_STRIP="1"
This will turn the build server path `/build/project/file.gcno` into the test server path `/coverage/project/file.gcda`.
The output directory for the `.gcda` files must be writable by the instrumented process!
See the gcc documentation for further information.
lcov is a program that converts the `.gcda` and `.gcno` binary files into a format that is readable by Teamscale. It builds on gcov, which ships with gcc, and is available as a package in most Linux distributions.
To convert your `.gcda` files to a format that Teamscale can understand, put all `.gcda` files next to their corresponding `.gcno` file. Then run
lcov --capture --directory /path/containing/gcda_and_gcno_files --output-file /tmp/lcov.info
This command searches for `.gcda` and `.gcno` files recursively and merges them all into the `.info` file. Upload the `.info` file to Teamscale. Specify the report format `LCOV`.
Always Use Matching .gcno Files
You must use the exact `.gcno` files that were generated during the build of the instrumented executable that produced the `.gcda` files. Using files from a different build will not work and will result in either incorrect or no coverage data.
As a best practice, always keep the `.gcno` and `.gcda` files together when transferring them between computers.
gcov and gcc Versions Must Be the Same
lcov internally calls gcov to parse the `.gcda` and `.gcno` files. Make sure that your gcov version is the same as your gcc version.
If you use a different computer to run lcov than the one you used to run `gcc --coverage`, make sure that both have the same version of gcc and gcov installed.
You can check the version of both tools by running
gcc --version
gcov --version
The versions must be identical. Otherwise, you will get an empty `.info` file and compatibility errors from lcov/gcov such as:
/home/user/coverage-files/system.gcno:version 'A74*', prefer '408*'
clang and llvm-cov
clang supports so-called source-based code coverage. This coverage mode also works if you enable compiler optimizations.
To enable the instrumentation, pass the following flags to clang during compilation and linking:
-fprofile-instr-generate -fcoverage-mapping
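For example, a minimal sketch of such a build (file and binary names are placeholders); note that optimizations can stay enabled:
# Compile and link with source-based coverage instrumentation.
clang -O2 -fprofile-instr-generate -fcoverage-mapping -c main.c -o main.o
clang -O2 -fprofile-instr-generate -fcoverage-mapping main.o -o app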
Before you run the instrumented binary in your test environment, set the environment variable `LLVM_PROFILE_FILE` to the path where the `.profraw` coverage output file should be written.
LLVM_PROFILE_FILE=/path/to/coverage.profraw
This can be a relative path as well. The clang documentation has additional details on the format of this variable. When the process exits normally, it writes its coverage information to that file.
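For example, a sketch using the `%p` pattern, which clang expands to the process ID, so that multiple processes do not overwrite each other's output (the directory and binary name are placeholders; the directory must be writable):
# One .profraw file per process; %p expands to the process ID.
export LLVM_PROFILE_FILE="/coverage/app-%p.profraw"
./app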
This binary `.profraw` file must first be converted to the intermediate `.profdata` format and then to a format that Teamscale can understand:
llvm-profdata merge --sparse /path/to/coverage.profraw -o ./coverage.profdata
llvm-cov export --format=lcov --instr-profile ./coverage.profdata /path/to/your/binary > ./coverage.lcov
Upload the `coverage.lcov` file to Teamscale. Specify the report format `LCOV`.
Use the Matching Binary File
You must use the exact instrumented binary file that produced the `.profraw` file. Using files from a different build will not work and will result in either incorrect or no coverage data.
As a best practice, always keep the binary and `.profraw` files together when transferring them between computers.
Prefer Source-Based Code Coverage
While clang also has a compatibility mode for gcc's gcov, we recommend using the source-based code coverage instead as it gives more accurate results and is easier to set up. Furthermore, it allows you to enable compiler optimizations while the gcov-compatible coverage mode requires an unoptimized debug build to function correctly.
Profiling without a File System
Clang also supports profiling binaries that run on hosts without a file system. In this mode, the coverage data is sent to any buffer under your control from where you can forward it out of the constrained environment any way you choose.
MSVC and CodeCoverage.exe
Microsoft provides CodeCoverage.exe as part of any current Visual Studio installation. It can also be downloaded as part of this NuGet package.
When building your binary, make sure to also generate `.pdb` files for all code for which you wish to receive test coverage data. You must use a debug build without any compiler optimizations to get useful coverage data.
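As a minimal sketch of such a build from the command line (the source and output names are placeholders): `/Zi` emits debug information into a `.pdb` file, `/Od` disables optimizations, and the linker's `/DEBUG` option produces the `.pdb` for the executable:
cl /Zi /Od main.cpp /link /DEBUG /OUT:app.exe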
Deploy your binary and the `.pdb` files to your test environment. Wrap the invocation of your binary during your tests with CodeCoverage.exe:
CodeCoverage.exe collect /output:"c:\path\to\output.coverage" c:\path\to\your\binary.exe
After your process is terminated, a `.coverage` file will be written to the output path you specified.
Convert this file to XML as described in our guide for Visual Studio Code Coverage. Then, upload the XML file to Teamscale. Specify the report format `VS_COVERAGE`.
Bullseye
Bullseye wraps your compiler to instrument your source code before it is compiled. This allows you to compile the instrumented code with all compiler optimizations enabled and still get valid test coverage data. Thus, it has a significantly lower performance impact than many other profilers which require a debug build with optimizations disabled.
Bullseye offers a free trial license on their website, so you can test if it fits your needs.
When to Choose Bullseye
- If the other software profilers are not performant enough for your tests
- or you have to compile your binaries with compiler optimizations enabled
- or you are using a compiler that has no profiler of its own (e.g. a specialized embedded compiler).
In order to create an instrumented executable:
Install Bullseye on the computer that builds your binaries. During the installation, select install for all users, build servers, services and kernel mode testing. Also select all relevant compiler integrations, e.g. Microsoft Visual C++ build tools and Microsoft Visual Studio 2019, depending on which versions you are using.
In your build script/build pipeline, execute the following steps:
- Set the `COVFILE` environment variable for the entire build pipeline. At this location, the compilation process will create a `.cov` file that contains information needed by the instrumented binary to record test coverage. The Bullseye documentation has detailed guidelines for how to set this correctly.
- Delete the old `.cov` file if present.
- Before invoking your compiler to build your binary, enable coverage collection by running
cov01 -1
- Build your binary as usual. Bullseye wraps your compiler command and automatically injects itself into the build process, i.e. it will instrument everything the compiler compiles, including subprojects etc. A consolidated sketch of these steps is shown after this list.
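A minimal sketch of these build steps in a shell-based pipeline (the `.cov` path and the `make` invocation are placeholders; adjust them to your project):
# Tell Bullseye where to create the .cov file for this build.
export COVFILE=/builds/myproject/test.cov
# Start from a clean state.
rm -f "$COVFILE"
# Enable Bullseye's compiler interception.
cov01 -1
# Build as usual; Bullseye instruments everything the compiler compiles.
make
# Disable interception again for subsequent uninstrumented builds.
cov01 -0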
Instrumenting Multiple Binaries and Shared Libraries
If you either have multiple binaries that run on the same test environment/target or your main binary uses shared libraries that have their own build, you can share the same `.cov` file between these builds. Bullseye will merge the coverage information from all binaries and libraries into this `.cov` file, and all processes can then write their coverage into this file at runtime. This makes deployment much easier than managing multiple `.cov` files on the same target.
Exclude Unnecessary Code
To configure which code should be instrumented, please refer to the Bullseye documentation on excludes. Excluding code for which you don't need test coverage will speed up the instrumented binary.
Your build process produced the instrumented binary and a `.cov` file. Make sure to save the `.cov` file together with your build output, so you always use the correct `.cov` file that corresponds to the instrumented binary.
Deploy your binary and the corresponding `.cov` file to your test environment. Set the environment variable `COVFILE` on your test environment to point to the copied `.cov` file. The Bullseye documentation has detailed guidelines for where to best place the `.cov` file on your test environment.
Run your tests with the instrumented binary like you normally do. Coverage data is written into the `.cov` file.
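For example, a sketch of the runtime side (the deployment path and the test script name are placeholders):
# Point the instrumented binary to the deployed .cov file.
export COVFILE=/opt/myapp/test.cov
# Run your tests as usual; coverage is recorded into the file behind COVFILE.
./run_tests.sh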
Convert the `.cov` file to XML by running
covxml -f /path/to/cov_file.cov -o /path/to/coverage.xml
Finally, upload the XML file to Teamscale. Specify the report format `BULLSEYE`.
For further configuration options, please refer to the Bullseye documentation.
Troubleshooting
- Check your compiler logs or stdout of the compilation process for errors.
- If anything goes wrong during the execution of the instrumented executable, Bullseye will log errors to stdout of the instrumented binary's process.
- Ensure that your process actually sees the `COVFILE` environment variable.
- Ensure the `.cov` file is valid and includes all relevant code by inspecting it with Bullseye's Coverage Browser.
Performance Impact
The following table gives an estimate of the runtime performance impact that each profiler might have on your program; however, your mileage may vary. The sample program was very compute-intensive; most programs with more user interaction will likely see a significantly lower impact. A release build was used for all baseline runs. For the instrumented runs, a release build was used when possible, otherwise a debug build.
| Profiler | Baseline | Instrumented | Avg. Impact |
|---|---|---|---|
| clang / llvm-cov | 43s | 48s | 12% |
| Bullseye | 43s | 55s | 28% |
| Testwell CTC++ | 44s | 61s | 39% |
| gcc / gcov | 140s | 240s | 71% |
| OpenCppCoverage | 44s | 110s | 150% |
| CodeCoverage.exe | 43s | 190s | 342% |
Hardware Profilers
A hardware profiler is a physical device that attaches to debug ports on your eval board. It interfaces directly with the CPU to receive debug information about the code that is being run on your board in real time and converts that into test coverage information. This method is advertised to have no performance impact at all.
Teamscale currently supports test coverage generated by the following hardware profilers: