I often hear people describe their automated test approach by naming the tool, framework, harness, technology, test runner, or structure/format. I’ve described mine the same way. It’s safe. It’s simple. It’s established. “We use Cucumber”.
Lately, I’ve seen things differently.
Instead of trying to pigeonhole each automated check into a tightly controlled format for an entire project, why not design automated checks for each Story, based on the best fit for that Story?
I think this notion comes from my context-driven test schooling. Here’s an example:
On my current project, we said “let’s write BDD-style automated checks”. We found it awkward to pigeonhole many of our checks into Given, When, Then. After eventually dropping the BDD-style mandate, I discovered the not-as-natural-language style to be easier to read, more flexible, and quicker to author…for some Stories. Some Stories are good candidates for data-driven checks authored via Excel. Some might require manual testing with a mocked product…computer-assisted exploratory testing…another use of automation. Other Stories might test better using non-deterministic automated diffs.
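For instance, a data-driven check can skip Gherkin entirely. Here’s a minimal sketch of what an Excel-driven check could look like using pytest and openpyxl; the spreadsheet name, its columns, and the calculate_discount function are hypothetical stand-ins, not from any real project:

```python
import openpyxl
import pytest

def calculate_discount(order_total):
    """Stand-in for the real product code under test."""
    return order_total * 0.1 if order_total >= 100 else 0

def load_cases(path="discount_cases.xlsx"):
    """Read (order_total, expected_discount) pairs, skipping the header row."""
    sheet = openpyxl.load_workbook(path).active
    return [(row[0].value, row[1].value) for row in sheet.iter_rows(min_row=2)]

# Each spreadsheet row becomes one check; testers can add cases in Excel
# without touching the code.
@pytest.mark.parametrize("order_total,expected", load_cases())
def test_discount(order_total, expected):
    assert calculate_discount(order_total) == expected
```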
Sandboxing all your automated checks into FitNesse might make test execution easier. But it might stifle test innovation.
Code coverage is a very interesting metric, covered by a large body of research that reaches somewhat contradictory conclusions. Some people think it is an extremely useful metric and that a certain percentage of coverage should be enforced on all code. Some think it is a useful tool for identifying areas that need more testing but don’t necessarily trust that covered code is truly well tested. Still others think that measuring coverage is actively harmful because it provides a false sense of security.
Our team’s mission was to collect coverage-related data, then develop and champion code coverage practices across Google. We designed an opt-in system where engineers could enable two different types of coverage measurements for their projects: daily and per-commit. With daily coverage, we run all tests for their project, whereas with per-commit coverage we run only the tests affected by the commit. The two measurements are independent, and many projects opted into both.
While we did experiment with branch, function, and statement coverage, we ended up focusing mostly on statement coverage because of its relative simplicity and ease of visualization.
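To see why statement coverage is the simpler (and weaker) of the two, consider this small illustration of my own, not from the post: a single test can execute every statement while still leaving a branch untested.

```python
# One test call, f(1), executes every statement below, so statement coverage
# reports 100%. Branch coverage would still flag the untaken path of the if,
# where x <= 0.
def f(x):
    result = 0
    if x > 0:
        result = x * 2
    return result

assert f(1) == 2  # all statements run; the x <= 0 branch is never exercised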
How we measured
Our job was made significantly easier by the wonderful Google build system, whose parallelism and flexibility allowed us to scale our measurements across the whole of Google. The build system had integrated various language-specific open-source coverage measurement tools like Gcov (C++), Emma / JaCoCo (Java), and Coverage.py (Python), and we provided a central system where teams could sign up for coverage measurement.
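As a rough illustration of the kind of per-project measurement these tools enable, here is a minimal sketch using Coverage.py’s Python API with unittest discovery; the tests/ directory is an assumption, and this is not Google’s actual harness:

```python
import coverage
import unittest

cov = coverage.Coverage()
cov.start()
# Discover and run every test under "tests/"; the path is an assumption.
suite = unittest.defaultTestLoader.discover("tests")
unittest.TextTestRunner().run(suite)
cov.stop()
cov.save()
cov.report()  # prints per-file statement coverage percentages
```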
For daily whole-project coverage measurements, each team was provided with a simple cronjob that would run all tests across the project’s codebase. The results of these runs were available to the teams in a centralized dashboard that displayed charts showing coverage over time and allowed daily / weekly / quarterly / yearly aggregations and per-language slicing. On this dashboard teams could also compare their project (or projects) with any other project, or with Google as a whole.
For per-commit measurement, we hook into the Google code review process (briefly explained in this article) and display the data visually to both the commit author and the reviewers. We display the data on two levels: color coded lines right next to the color coded diff and a total aggregate number for the entire commit.
Displayed above is a screenshot of the code review tool. The green line coloring is the standard diff coloring for added lines. The orange and lighter green coloring on the line numbers is the coverage information. We use light green for covered lines, orange for non-covered lines and white for non-instrumented lines.
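A toy sketch of that per-line classification, with assumed inputs (sets of line numbers for the commit’s added lines, the lines the affected tests executed, and the lines the tool instrumented at all):

```python
def classify_added_lines(added, executed, instrumented):
    """Color each added line the way the review tool does."""
    colors = {}
    for line in added:
        if line not in instrumented:
            colors[line] = "white"        # not instrumented (blank lines, comments)
        elif line in executed:
            colors[line] = "light green"  # covered by the affected tests
        else:
            colors[line] = "orange"       # instrumented but never executed
    return colors

# Example: lines 3 and 4 were added; only line 3 ran during the tests.
print(classify_added_lines({3, 4}, executed={3}, instrumented={3, 4}))
# {3: 'light green', 4: 'orange'}
```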
It’s important to note that we surface the coverage information before the commit is submitted to the codebase, because this is the time when engineers are most likely to be interested in improving it.
I am happy to say that we can share some preliminary results with you today:
The chart above is a histogram of average values of measured absolute coverage across Google. The median (50th percentile) code coverage is 78%, the 75th percentile is 85%, and the 90th percentile is 90%. We believe that these numbers represent a very healthy codebase.
We have also found it very interesting that there are significant differences between languages:
The table above shows the total coverage of all analyzed code for each language, averaged over the past quarter. We believe that the large difference is due to structural, paradigm, and best-practice differences between languages, as well as the ability to measure coverage more precisely in certain languages.
Note that these numbers should not be interpreted as guidelines for a particular language; the aggregation method used is too simple for that. Instead, this finding is simply a data point for any future research that analyzes samples from a single programming language.
The feedback from our fellow engineers was overwhelmingly positive. The most loved feature was surfacing the coverage information during code review time. This early surfacing of coverage had a statistically significant impact: our initial analysis suggests that it increased coverage by 10% (averaged across all commits).
We are aware that there are a few problems with the dataset we collected. In particular, the individual tools we use to measure coverage are not perfect. Large integration tests, end-to-end tests, and UI tests are difficult to instrument, so large parts of code exercised by such tests can be misreported as non-covered.
We are working on improving the tools, and also on analyzing the impact of unit tests, integration tests, and other types of tests individually.
In addition to languages, we will also investigate other factors that might influence coverage, such as platforms and frameworks, to allow all future research to account for their effect.
We will be publishing more of our findings in the future, so stay tuned.
And if this sounds like something you would like to work on, why not apply on our job site?
…may not be a good way to start testing.
I heard a programmer use this metaphor to describe the testing habits of a tester he had worked with.
As a tester, taking all test input variables to their extremes may be an effective way to find bugs. However, it may not be an effective way to report bugs. Skilled testers will repeat the same test until they isolate the minimum variable(s) that cause the bug. Or, using this metaphor, they may repeat the same test with all levels on the mixing board pulled down except the one they are interested in observing.
Once the culprit is identified, the skilled tester will repeat the test, changing only that isolated variable, and accurately predict a pass or fail result.
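In code form, the “one fader up at a time” idea might look like the sketch below; run_check and the variable names are hypothetical, not from the post:

```python
# Start from a baseline with every input at its minimum, then raise a single
# variable to its extreme and rerun, noting which one alone triggers the bug.
BASELINE = {"volume": 0, "bass": 0, "treble": 0}
EXTREME = {"volume": 11, "bass": 11, "treble": 11}

def isolate(run_check):
    """Return the variables that, alone at their extreme, make the check fail."""
    culprits = []
    for name in BASELINE:
        inputs = dict(BASELINE)
        inputs[name] = EXTREME[name]   # raise just this one fader
        if not run_check(**inputs):    # run_check returns False on failure
            culprits.append(name)
    return culprits
```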
Dear Test Automators,
The next time you discuss automation results, please consider qualifying the context of the word “bug”.
If automation fails, it means one of two things:
- There is a bug in the product-under-test.
- There is a bug in the automation.
The former is waaaaaay more important than the latter. Maybe not to you, but certainly to your audience.
Instead of saying,
“This automated check failed”,
say, “This automated check failed because of a bug in the product-under-test”.
Instead of saying,
“I’m working on a bug”,
say, “I’m working on a bug in the automation”.
Your world is arguably more complex than that of testers who don’t use automation. You must test twice as many programs (the automation and the product-under-test). Please consider being precise when you communicate.
Avenged Sevenfold, “This Means War”:
Are we at war with the others in the software industry who accept poor quality software? Are we at war with ourselves because we give it just slightly less than we’ve got and sometimes a lot less than it takes?