Software Testing

Will It Meet Our Needs? vs. Is There A Problem Here?

Eric Jacobson's Software Testing Blog - Fri, 07/25/2014 - 12:12

Which of the above questions is more important for testers to ask?

Let’s say you are an is-there-a-problem-here tester: 

  • This calculator app works flawlessly as far as I can tell.  We’ve tested everything we can think of that might not work and everything we can think of that might work.  There appear to be no bugs.  Is there a problem here?  No.
  • This mileage tracker app crashes under a load of 1000 users.  Is there a problem here?  Yes.

But might the is-there-a-problem-here question get us into trouble sometimes?

  • This calculator app works flawlessly…but we actually needed a contact list app.
  • This mileage tracker app crashes under a load of 1000 users but only 1 user will use it.

Or perhaps the is-there-a-problem-here question only fails us when we use too narrow an interpretation:

  • Not meeting our needs is a problem.  Is there a problem here?  Yes.  We developed the wrong product, which is a big problem.
  • A product that crashes under a load of 1000 users may actually not be a problem if we only need to support 1 user.  Is there a problem here?  No.

Both are excellent questions.  For me, the will-it-meet-our-needs question is easier to apply and I have a slight bias towards it.  I’ll use them both for balance.

Note: The “Will it meet our needs?” question came to me from a nice Pete Walen article.  The “Is there a problem here?” came to me via Michael Bolton.

Categories: Software Testing

The Deadline to Sign up for GTAC 2014 is Jul 28

Google Testing Blog - Tue, 07/22/2014 - 18:53
Posted by Anthony Vallone on behalf of the GTAC Committee

The deadline to sign up for GTAC 2014 is next Monday, July 28th, 2014. There is a great deal of interest in both attending and speaking, and we’ve received many outstanding proposals. However, it’s not too late to add yours for consideration. If you would like to speak or attend, be sure to complete the form by Monday.

We will be making regular updates to our site over the next several weeks, and you can find conference details there:
  developers.google.com/gtac

For those who have already signed up to attend or speak, we will contact you directly in mid-August.

Categories: Software Testing

The Sock Puppets of Formal Testing

DevelopSense - Michael Bolton - Mon, 07/21/2014 - 14:22
Formal testing is testing that must be done in a specific way, or to check specific facts. In the Rapid Software Testing methodology, we map the formality of testing on a continuum. Sometimes it’s important to do testing in a formal way, and sometimes it’s not so important. (From Rapid Software Testing; see http://www.satisfice.com/rst.pdf.) People […]
Categories: Software Testing

How Models Change

DevelopSense - Michael Bolton - Sat, 07/19/2014 - 13:38
Like software products, models change as we test them, gain experience with them, find bugs in them, realize that features are missing. We see opportunities for improving them, and revise them. A product coverage outline, in Rapid Testing parlance, is an artifact (a map, or list, or table…) that identifies the dimensions or elements of […]
Categories: Software Testing

Test Automation Can Be So Much More…

Eric Jacobson's Software Testing Blog - Fri, 07/18/2014 - 13:32

I often hear people describe their automated test approach by naming the tool, framework, harness, technology, test runner, or structure/format.  I’ve described mine the same way.  It’s safe.  It’s simple.  It’s established.  “We use Cucumber”.

Lately, I’ve seen things differently.

Instead of trying to pigeonhole each automated check into a tightly controlled format for an entire project, why not design automated checks for each Story based on the best fit for that Story?

I think this notion comes from my context-driven test schooling.  Here’s an example:

On my current project, we said “let’s write BDD-style automated checks”.  We found it awkward to pigeonhole many of our checks into Given, When, Then.  After eventually dropping the mandate for BDD-style, I discovered the not-as-natural-language style to be easier to read, more flexible, and quicker to author…for some Stories.  Some Stories are good candidates for data-driven checks authored via Excel.  Some might require manual testing with a mocked product…computer-assisted exploratory testing…another use of automation.  Other Stories might test better using non-deterministic automated diffs.
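
For instance, here is a minimal sketch of the contrast for a hypothetical tip-calculator Story (the function and step names are mine, not from any real project): the BDD form spreads one check across Given/When/Then steps bound to a feature file, while the plain form states the same check directly.

    # Given/When/Then form, as it might appear in a Gherkin feature file and
    # be wired to step functions by a tool such as behave or pytest-bdd:
    #   Given a bill of 40.00
    #   When a 25% tip is applied
    #   Then the total is 50.00

    # The same check in plain, "not-as-natural-language" style:
    def calculate_total(bill, tip_rate):
        """Stub standing in for the product code under test."""
        return round(bill * (1 + tip_rate), 2)

    def test_total_includes_tip():
        assert calculate_total(40.00, 0.25) == 50.00

Neither form is wrong; the ceremony of the first tends to pay off only for Stories whose behavior reads naturally as Given/When/Then.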

Sandboxing all your automated checks into FitNesse might make test execution easier.  But it might stifle test innovation.

Categories: Software Testing

Very Short Blog Posts (20): More About Testability

DevelopSense - Michael Bolton - Mon, 07/14/2014 - 15:30
A few weeks ago, I posted a Very Short Blog Post on the bare-bones basics of testability. Today, I saw a very good post from Adam Knight talking about telling the testability story. Adam focused, as I did, on intrinsic testability—things in the product itself that make it more testable. But testability isn’t just a product […]
Categories: Software Testing

Measuring Coverage at Google

Google Testing Blog - Mon, 07/14/2014 - 13:45
By Marko Ivanković, Google Zürich

Introduction
Code coverage is a very interesting metric, covered by a large body of research that reaches somewhat contradictory conclusions. Some people think it is an extremely useful metric and that a certain percentage of coverage should be enforced on all code. Some think it is a useful tool for identifying areas that need more testing but don’t necessarily trust that covered code is truly well tested. Still others think that measuring coverage is actively harmful because it provides a false sense of security.

Our team’s mission was to collect coverage-related data, then develop and champion code coverage practices across Google. We designed an opt-in system where engineers could enable two different types of coverage measurements for their projects: daily and per-commit. With daily coverage, we run all tests for their project, whereas with per-commit coverage we run only the tests affected by the commit. The two measurements are independent and many projects opted into both.

While we did experiment with branch, function and statement coverage, we ended up focusing mostly on statement coverage because of its relative simplicity and ease of visualization.
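
As a small, hypothetical illustration of the difference (not taken from the post): a single test can execute every statement of the function below, yet branch coverage still flags the untaken path.

    def describe(n):
        label = "number"
        if n < 0:
            label = "negative " + label
        return label

    def test_negative():
        # Executes every statement (100% statement coverage), but the
        # "n >= 0" side of the if is never taken, so branch coverage
        # would report the missed branch.
        assert describe(-1) == "negative number"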

How we measured
Our job was made significantly easier by the wonderful Google build system, whose parallelism and flexibility allowed us to run our measurements at Google scale. The build system had integrated various language-specific open source coverage measurement tools like Gcov (C++), Emma / JaCoCo (Java) and Coverage.py (Python), and we provided a central system where teams could sign up for coverage measurement.
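
As a rough sketch of what a single measurement with one of these tools looks like, here is Coverage.py’s public Python API (the daily and per-commit plumbing around it is Google-specific and not shown); the inline function is only a stand-in for a real test suite.

    import coverage

    cov = coverage.Coverage(branch=True)   # record branch data as well as statements
    cov.start()

    # In practice the project's test suite runs here; a trivial inline
    # example stands in for it.
    def classify(n):
        return "negative" if n < 0 else "non-negative"

    assert classify(-3) == "negative"

    cov.stop()
    cov.save()
    cov.report(show_missing=True)          # per-file statement coverage summary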

For daily whole-project coverage measurements, each team was provided with a simple cronjob that would run all tests across the project’s codebase. The results of these runs are available to the teams in a centralized dashboard that displays charts showing coverage over time and allows daily / weekly / quarterly / yearly aggregations and per-language slicing. On this dashboard teams can also compare their project (or projects) with any other project, or with Google as a whole.

For per-commit measurement, we hook into the Google code review process (briefly explained in this article) and display the data visually to both the commit author and the reviewers. We display the data on two levels: color-coded lines right next to the color-coded diff and a total aggregate number for the entire commit.


Displayed above is a screenshot of the code review tool. The green line coloring is the standard diff coloring for added lines. The orange and lighter green coloring on the line numbers is the coverage information. We use light green for covered lines, orange for non-covered lines and white for non-instrumented lines.
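
Conceptually, the per-commit view boils down to classifying each added line by whether the affected tests executed it. The actual review-tool integration is internal, so the following is only an illustrative sketch with made-up line numbers:

    def classify_added_lines(added, covered, instrumented):
        """Map each added line number to its coloring category."""
        labels = {}
        for line in sorted(added):
            if line not in instrumented:
                labels[line] = "non-instrumented"   # white
            elif line in covered:
                labels[line] = "covered"            # light green
            else:
                labels[line] = "non-covered"        # orange
        return labels

    added = {10, 11, 12, 15}
    instrumented = {10, 11, 12}          # line 15 is, say, a blank line or comment
    covered = {10, 12}
    print(classify_added_lines(added, covered, instrumented))
    # {10: 'covered', 11: 'non-covered', 12: 'covered', 15: 'non-instrumented'}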

It’s important to note that we surface the coverage information before the commit is submitted to the codebase, because this is the time when engineers are most likely to be interested in improving it.

Results
One of the main benefits of working at Google is the scale at which we operate. We have been running the coverage measurement system for some time now and we have collected data for more than 650 different projects, spanning 100,000+ commits. We have a significant amount of data for C++, Java, Python, Go and JavaScript code.

I am happy to say that we can share some preliminary results with you today:


The chart above is a histogram of the average values of measured absolute coverage across Google. The median (50th percentile) code coverage is 78%, the 75th percentile is 85%, and the 90th percentile is 90%. We believe that these numbers represent a very healthy codebase.

We have also found it very interesting that there are significant differences between languages:

  Language      Coverage
  C++           56.6%
  Java          61.2%
  Go            63.0%
  JavaScript    76.9%
  Python        84.2%

The table above shows the total coverage of all analyzed code for each language, averaged over the past quarter. We believe that the large differences are due to structural, paradigm, and best-practice differences between languages, as well as the ability to measure coverage more precisely in some languages.

Note that these numbers should not be interpreted as guidelines for a particular language; the aggregation method used is too simple for that. Instead, this finding is simply a data point for any future research that analyzes samples from a single programming language.

The feedback from our fellow engineers was overwhelmingly positive. The most loved feature was surfacing the coverage information during code review time. This early surfacing of coverage had a statistically significant impact: our initial analysis suggests that it increased coverage by 10% (averaged across all commits).

Future work
We are aware that there are a few problems with the dataset we collected. In particular, the individual tools we use to measure coverage are not perfect. Large integration tests, end-to-end tests, and UI tests are difficult to instrument, so large parts of code exercised by such tests can be misreported as non-covered.

We are working on improving the tools, but also analyzing the impact of unit tests, integration tests and other types of tests individually.

In addition to languages, we will also investigate other factors that might influence coverage, such as platforms and frameworks, to allow all future research to account for their effect.

We will be publishing more of our findings in the future, so stay tuned.

And if this sounds like something you would like to work on, why not apply on our job site?

Categories: Software Testing

Pushing Up All The Levels On The Mixing Board

Eric Jacobson's Software Testing Blog - Fri, 07/11/2014 - 09:50

…may not be a good way to start testing.

I heard a programmer use this metaphor to describe the testing habits of a tester he had worked with. 

As a tester, taking all test input variables to their extremes may be an effective way to find bugs.  However, it may not be an effective way to report bugs.  Skilled testers will repeat the same test until they isolate the minimum variable(s) that cause the bug.  Or, using this metaphor, they may repeat the same test with all levels on the mixing board pulled down except the one they are interested in observing.

Once the variable is identified, the skilled tester will repeat the test, changing only the isolated variable, and accurately predict a pass or fail result.
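
Here is a minimal sketch of that discipline, with an invented, deliberately buggy export() stub standing in for the real product and made-up parameter names as the “faders”:

    def export(rows=10, unicode_names=False, concurrent_users=1):
        """Stub for the feature under test; fails only on extreme row counts."""
        return rows <= 100_000

    BASELINE = dict(rows=10, unicode_names=False, concurrent_users=1)

    def run_export(**overrides):
        # All faders down (baseline values) except the ones explicitly pushed up.
        return export(**{**BASELINE, **overrides})

    def test_large_row_count_alone():
        assert not run_export(rows=1_000_000)   # the isolated variable reproduces the failure

    def test_unicode_names_alone():
        assert run_export(unicode_names=True)   # this fader alone is fine

Each check pushes one fader up at a time, so when a check fails, the culprit variable is already isolated and the result is predictable.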

Categories: Software Testing
