Improving Test Execution Efficiency

Regression testing of changes is usually done by re-running all automated or manual tests of a system (retest all). As more and more features are added to a software system over its lifetime, the number of tests and their overall runtime grow as well. The longer the test suite takes, the fewer retest-all runs are possible, so the time between introducing a bug with a change and getting the results of a failed regression test increases as well. Re-executing all tests is not only expensive, it is also inefficient, since most tests in the suite do not exercise any given change to the codebase.

Test Impact Analysis provides an automated mechanism to select and prioritize the tests to execute, based on the coverage recorded in previous test runs. For this purpose, Testwise Coverage must be recorded and uploaded to Teamscale. In follow-up runs, your test runner asks Teamscale for the tests that need to be executed to test the changes of the commit under test. Teamscale uses the data from the initial run to determine which of the tests will actually execute the changed code. These impacted tests are then prioritized such that tests with a higher probability of failing run before the others. The test runner can then execute tests based on this list.
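The core selection idea can be sketched in a few lines. Note that this is an illustrative sketch only: Teamscale performs this selection server-side from the uploaded Testwise Coverage, and the data structures, test names, and method identifiers below are hypothetical.

```python
def select_impacted_tests(coverage, changed_methods):
    """Return the tests whose recorded coverage touches any changed method.

    coverage: dict mapping test name -> set of method identifiers it executed
    changed_methods: set of method identifiers modified by the commit
    """
    return {test for test, methods in coverage.items()
            if methods & changed_methods}

# Hypothetical coverage data from a previous test run:
coverage = {
    "LoginTest.valid_credentials": {"Auth.login", "Auth.hash"},
    "CartTest.add_item": {"Cart.add", "Cart.total"},
    "CheckoutTest.pay": {"Cart.total", "Payment.charge"},
}
changed = {"Cart.total"}  # methods touched by the current commit

print(sorted(select_impacted_tests(coverage, changed)))
# → ['CartTest.add_item', 'CheckoutTest.pay']
```

Only tests whose recorded coverage intersects the changed methods are selected; `LoginTest.valid_credentials` is skipped because it never executed `Cart.total`.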

Test Impact analysis big picture.

On Impacted Tests

Please note that the selection of impacted tests is not guaranteed to include all tests that may fail because of a code change. In particular, changes to resource or configuration files that influence a test's execution but are not tracked as coverage are not taken into consideration. It is therefore recommended to execute all tests at regular intervals to catch the impact of such changes as well.

Pareto Analysis uses the same data to generate an optimized smoke test suite that covers the largest amount of code within any given time budget. Use it when you want a small test suite to run before more expensive test runs, e.g. manual or HIL tests. This test suite has a good chance of catching most bugs before the expensive test runs happen, making them more effective.
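The idea behind Pareto selection can be sketched as a greedy approximation: repeatedly pick the test that adds the most not-yet-covered code per second of runtime, until the time budget is spent. This is a conceptual sketch, not Teamscale's actual server-side algorithm; all test names, coverage sets, and durations below are made up.

```python
def pareto_suite(tests, budget):
    """Greedily pick tests that add the most uncovered code per second,
    until the time budget (in seconds) is exhausted.

    tests: dict mapping test name -> (duration_seconds, set of covered methods)
    """
    covered, suite, remaining = set(), [], budget
    candidates = dict(tests)
    while candidates:
        # Among tests that still fit into the budget, pick the one with the
        # best new-coverage-per-second ratio.
        best = max(
            (t for t, (d, c) in candidates.items() if d <= remaining),
            key=lambda t: len(candidates[t][1] - covered) / candidates[t][0],
            default=None,
        )
        if best is None or not (candidates[best][1] - covered):
            break  # nothing affordable or nothing new to cover
        duration, cov = candidates.pop(best)
        suite.append(best)
        covered |= cov
        remaining -= duration
    return suite, covered

# Hypothetical tests: (duration in seconds, methods covered)
tests = {
    "fast_broad":  (2, {"A", "B", "C"}),
    "slow_broad":  (10, {"A", "B", "C", "D"}),
    "fast_narrow": (1, {"D"}),
}
suite, covered = pareto_suite(tests, budget=5)
print(suite, sorted(covered))
# → ['fast_broad', 'fast_narrow'] ['A', 'B', 'C', 'D']
```

In this toy example, the greedy pick reaches full coverage in 3 seconds instead of running `slow_broad`, which alone would exceed the 5-second budget. This mirrors the coverage-over-runtime curve shown at the bottom of this page.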

Both techniques are valid ways of speeding up test suites, with Pareto analysis being a little less effective but also easier to set up. So choose the technique with the best cost/benefit tradeoff for your situation.

Setting up the Test Impact and Pareto Analysis

Determining Tests Covering a Method

After the initial Testwise Coverage upload, the Code view shows an indicator next to each method that has been covered:

Testwise Coverage indicator

A click on the indicator opens a dialog that lists all tests that executed the selected method:

Tests executing method dialog

Viewing Detailed Test Information

A single test can be inspected either by following the links in the dialog or by selecting the test in the Test Gaps perspective under Test executions. The detail view depicted below shows the most recent test execution result at the top. The treemap below it shows the coverage of this single test across the whole codebase. With the Show only executed methods option, the treemap can be narrowed to show only the executed code regions. At the bottom, the history of the 10 previous test executions is shown with the corresponding durations and test results. A click on one of these previous executions opens the test details view for that execution, e.g., to inspect the error message of a previous failure.

Test details view.

Determining Impacted Tests over Large Time Intervals

In the Delta perspective the list of impacted tests can be inspected by either selecting a time range or a merge scenario. A sample result is shown here:

Impacted tests in the Delta perspective.

The list will show all tests that are impacted by the selected changes. A prerequisite for this is that a Testwise Coverage report has been uploaded for a timestamp that lies before the end of the inspected timeframe.

The impacted tests shown in the Delta perspective can be used to verify that everything is set up correctly on the server side. The list can also serve as a utility to support manual test selection.

Generating an Optimized Smoke Test Suite

In the Test Gaps > Test Selection view, Teamscale can optimize your test cases based on the test coverage they produce. This allows you to generate a smoke test suite that has optimal test coverage, even for short amounts of test runtime. On average, these smoke test suites find 90% of new bugs in only 11% of test runtime.

Use them, e.g., to catch most bugs with short test runs during the day, while catching the remaining bugs with longer nightly test runs. Or use the smoke test suite as a quality gate to ensure your software is fit for a larger, more expensive test stage.

Teamscale calculating an optimal smoke test suite with 99% of test coverage achieved after only 9% of test runtime.

Pareto Analysis achieves almost the same test coverage (y-axis) as the full test suite in only a fraction of your usual test runtime (x-axis) by intelligently selecting which tests to run.