Copyright notice: This text was originally written by John Paliotta, Vector Informatik GmbH, under the headline “Improving test efficiency” and can be found at: https://www.coderskitchen.com/improving-test-efficiency/
Successful software teams are continuously searching for ways to improve the efficiency of their members. They recognize that enhancing the development process is a solid way to increase quality while decreasing time to market. This may require adding new tools, but it may also require stepping back from day-to-day work to take a fresh look at what we're doing.
Think about testing. Most of you are working on projects with thousands of test cases, where a full run might take hours, if not days. Teams add new tests every week, and about twice a year they find that the overall test duration has grown too long, so they add hardware resources for automated testing or staff for manual and exploratory testing.
Today, I'd like to help you consider some alternatives to pouring additional money into the problem. Here are some basic steps that we have found useful when dealing with the "testing is taking too long" issue, and how the TestQuality test management tool helps you get better test results.
Step 1: Examine the test failure data

Any project should be able to easily automate the collection of this data. You probably already have it in some form, so just make sure it is consistent (e.g., tests have unique IDs) and can be accessed programmatically for analysis; nobody wants to dig through log files. The underlying data storage structure is unimportant as long as you offer an API that allows you to cycle through all test runs and retrieve the list of tests and their historical status. Also, ensure that this information is recorded every time a user performs a test in any context: the more data there is, the better. Once you have the raw data, divide the tests into three categories by how often they fail: often, sometimes, seldom.
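Once the results are accessible programmatically, the bucketing itself is a small script. Below is a minimal sketch in Python, assuming a hypothetical list of per-run results keyed by test ID (a real project would pull these from its results API); the 50% and 10% thresholds are illustrative assumptions, not fixed rules.

```python
from collections import Counter

# Hypothetical raw data: one dict per test run, mapping test id -> status.
# In a real project this would come from your results API, not a literal.
runs = [
    {"T1": "pass", "T2": "fail", "T3": "pass"},
    {"T1": "pass", "T2": "fail", "T3": "fail"},
    {"T1": "pass", "T2": "fail", "T3": "pass"},
    {"T1": "fail", "T2": "fail", "T3": "pass"},
]

def bucket_by_failure_rate(runs, often=0.5, sometimes=0.1):
    """Divide tests into 'often', 'sometimes', 'seldom' failure buckets."""
    failures, totals = Counter(), Counter()
    for run in runs:
        for test_id, status in run.items():
            totals[test_id] += 1
            if status == "fail":
                failures[test_id] += 1
    buckets = {"often": [], "sometimes": [], "seldom": []}
    for test_id in totals:
        rate = failures[test_id] / totals[test_id]
        if rate >= often:
            buckets["often"].append(test_id)
        elif rate >= sometimes:
            buckets["sometimes"].append(test_id)
        else:
            buckets["seldom"].append(test_id)
    return buckets

print(bucket_by_failure_rate(runs))
# {'often': ['T2'], 'sometimes': ['T1', 'T3'], 'seldom': []}
```

With this in place, the buckets update automatically every time new run data lands, so the categorization never goes stale.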
Step 2: Repair or remove flaky tests

The next step is to separate the tests that fail "often" into two categories: those that uncover real problems and those that are flaky.
Flaky tests are the single most effective way to undermine trust in your testing and reporting environment. If your first reaction to a failing test is to re-run it and hope it passes, your process is flawed. Flaky tests must be fixed or removed. If these tests are not easy to repair quickly and you believe they are valuable, at the very least disable them for everyday testing and run them periodically when your infrastructure has downtime.
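One simple way to tell the two apart from the history gathered in step 1: a flaky test flips between pass and fail across runs, while a genuine regression fails consistently once it starts failing. The sketch below counts those flips; the test names and the flip threshold are illustrative assumptions.

```python
def flip_count(history):
    """Count pass<->fail flips across consecutive runs of one test."""
    return sum(1 for a, b in zip(history, history[1:]) if a != b)

def flaky_tests(histories, min_flips=2):
    """A test that flips repeatedly is flaky; a test that starts failing
    and keeps failing is more likely exposing a real defect."""
    return [tid for tid, hist in histories.items() if flip_count(hist) >= min_flips]

# Hypothetical histories, oldest run first.
histories = {
    "login_test":    ["pass", "fail", "pass", "fail", "pass"],  # alternates: flaky
    "checkout_test": ["pass", "pass", "fail", "fail", "fail"],  # one flip: regression
}
print(flaky_tests(histories))  # ['login_test']
```

A heuristic like this won't catch every flaky test (some fail in bursts), but it cheaply produces a shortlist worth a human look.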
TestQuality's Analyze tab offers several test measurements, one of which focuses directly on showing how flaky your tests are. The Test Reliability tab within the Analyze menu helps you identify flaky tests; in its graph, each test's flakiness is displayed as an icon.
TestQuality's Test Reliability analysis to detect test flakiness
Step 3: Understand your tests

More important than running tens of thousands of tests is ensuring that the right tests are being run.
There are many ways we might organize tests, but I prefer simple, easy-to-maintain solutions, so let's make three additional buckets and categorize the tests by the importance of the feature being tested: high, medium, and low.
This will have to be done manually, but it doesn't have to be perfect; the groups can be refined over time. Keep in mind that importance should be judged from the end-user's perspective, and that most users only use a fraction of a product's overall capabilities. So, if 90% of your tests fall into the "high" priority category, take a second look. Also, bear in mind that the features most appreciated by your customers may not be the ones most valued by your engineers, so ask your customer support team for help and analyze user problem reports to gain insight into how customers actually use the product.
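The buckets can live in something as simple as a mapping from test ID to importance level, which also makes the "90% high" sanity check above trivial to automate. A minimal sketch, with made-up test IDs:

```python
# Illustrative assumption: importance assigned per test id by hand.
IMPORTANCE = {
    "test_checkout_total": "high",
    "test_login":          "high",
    "test_report_footer":  "low",
    "test_csv_export":     "medium",
}

def importance_distribution(importance):
    """Fraction of tests in each bucket, for the sanity check that
    not everything ends up marked 'high'."""
    counts = {"high": 0, "medium": 0, "low": 0}
    for level in importance.values():
        counts[level] += 1
    total = len(importance)
    return {level: n / total for level, n in counts.items()}

dist = importance_distribution(IMPORTANCE)
if dist["high"] > 0.9:
    print("Take a second look: over 90% of tests are marked high priority")
print(dist)  # {'high': 0.5, 'medium': 0.25, 'low': 0.25}
```

Keeping the mapping in version control alongside the tests makes the periodic refinement in step 7 a normal code-review activity.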
Step 4: Remove outdated tests

How frequently do we go through the tests that we're running? Do we get rid of the ones that no longer make sense or are obsolete? Or do we keep running old tests because we're not sure what they do and are reluctant to remove them? Reworking test cases, like refactoring a code base, is vital to maintaining efficiency, but where should we begin?
When you finish step 3, you'll probably have several tests in the "low" bucket since you're not sure what they do. This is the moment to examine them more closely and consider removing or refactoring them.
TestQuality's Test Quality analysis to detect tests highlighted for quality reasons
Based on the execution history of your tests, TestQuality's Test Quality tab analyzes which tests were useful and which were less so. You will also see tests that are highlighted for quality reasons, tests that have not been run yet, and so on.
Step 5: Arrange the tests in order of importance

After you've fixed (or eliminated) your flaky tests and removed the obsolete ones, prioritize the 9 attribute combinations given in the table below. As a result, there will be five test priority groups. Next, ensure that the percentage of tests in each category is appropriate for your project. There's nothing magical about the priority groups or percentages I've used in this example; you should adjust them for your project. But if 90% of your tests fail often and test high-priority features, you may have issues that go beyond the scope of this post!
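In code, the combination step is just a lookup table from (failure frequency, feature importance) to a priority group. The 3×3 mapping below is an illustrative assumption, not a reproduction of the article's table; adjust both the mapping and the group count for your project.

```python
# Illustrative mapping: (failure frequency, feature importance)
# -> priority group, where group 1 runs first and group 5 runs last.
PRIORITY = {
    ("often",     "high"):   1,
    ("often",     "medium"): 2,
    ("sometimes", "high"):   2,
    ("often",     "low"):    3,
    ("sometimes", "medium"): 3,
    ("seldom",    "high"):   3,
    ("sometimes", "low"):    4,
    ("seldom",    "medium"): 4,
    ("seldom",    "low"):    5,
}

def priority_of(frequency, importance):
    """Look up the priority group for one test's two attributes."""
    return PRIORITY[(frequency, importance)]

print(priority_of("often", "high"))   # 1: frequent failures on key features
print(priority_of("seldom", "low"))   # 5: run last
```

Nine combinations collapse into five groups, which keeps the scheduling logic in the next step simple.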
Step 6: Run tests in descending order of priority

Now that your tests are prioritized, you can devise an execution plan. I recommend running the tests in priority order and stopping after a certain number of failures, perhaps 5 or 10. Allowing several failures is beneficial because it lets engineers identify a pattern in the outcomes while analyzing an issue.
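A minimal sketch of such an execution plan, with a stand-in `run_test` callback (a real one would invoke your actual test runner; the test IDs here are made up):

```python
def run_in_priority_order(tests, run_test, max_failures=5):
    """tests: list of (priority_group, test_id); lower group runs first.
    Stops once the failure budget is exhausted and returns the failures."""
    failures = []
    for _, test_id in sorted(tests):
        if not run_test(test_id):
            failures.append(test_id)
            if len(failures) >= max_failures:
                break  # enough signal for engineers to spot a pattern
    return failures

# Toy runner: pretend every test whose id contains "bad" fails.
tests = [(2, "t_mid_bad"), (1, "t_top_ok"), (1, "t_top_bad"), (3, "t_low_bad")]
failed = run_in_priority_order(tests, lambda tid: "bad" not in tid, max_failures=2)
print(failed)  # ['t_top_bad', 't_mid_bad'] -- stopped before the group-3 test
```

Because high-priority tests run first, a capped run still surfaces the failures most likely to matter, even when it stops well short of the full suite.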
Step 7: Continue to fine-tune everything

The final step is to revisit the test-importance and failure-frequency attributes on a regular basis. As your product grows, older features may lose relevance while newer features gain importance, resulting in a shifting distribution of tests among the priority levels.
TestQuality simplifies test case creation and organization. It offers a very competitive price, and it is free when used with free GitHub repositories. Its rich, flexible reporting can help you visualize and understand where you and your dev or QA team are in your project's quality lifecycle. Also look for analytics that help identify the quality and effectiveness of your test cases and testing efforts, to ensure you're building and executing the most effective tests.
Sign Up for a Free Trial and add TestQuality to your workflow today!