Software Testing

The Power To Declare Something Is NOT A Bug

Eric Jacobson's Software Testing Blog - Thu, 04/24/2014 - 11:20

Many think testers have the power to declare something as a bug.  This normally goes without saying.  How about the inverse? 

Should testers be given the power to declare something is NOT a bug? 

Well…no, IMO.  That sounds dangerous because what if the tester is wrong?  I think many will agree with me.  Michael Bolton asked the above question in response to a commenter on this post.  It really gave me pause. 

For me, it means maybe testers should not be given the power to run around declaring things as bugs either.  They should instead raise the possibility that something may be a problem.  Then I suppose they could raise the possibility something may not be a problem.

Categories: Software Testing

A Tale of Four Projects

DevelopSense - Michael Bolton - Wed, 04/23/2014 - 16:53
Once upon a time, in a high-tech business park far, far away, there were four companies, each working on a development project. In Project Blue, the testers created a suite of 250 test cases, based on 50 use cases, before development started. These cases remained static throughout the project. Each week saw incremental improvement in the […]
Categories: Software Testing

What He Said

QA Hates You - Tue, 04/22/2014 - 09:29

Wayne Ariola in SD Times:

Remember: The cost of quality isn’t the price of creating quality software; it’s the penalty or risk incurred by failing to deliver quality software.

Word to your mother, who doesn’t understand why the computer thing doesn’t work any more and is afraid to touch computers because some online provider used her as a guinea pig in some new-feature experiment with bugs built right in.

Categories: Software Testing

“In The Real World”

DevelopSense - Michael Bolton - Mon, 04/21/2014 - 04:17
In Rapid Software Testing, James Bach, our colleagues, and I advocate an approach that puts the skill set and the mindset of the individual tester—rather than some document or tool or test case or process model—at the centre of testing. We advocate an exploratory approach to testing so that we find not only the problems […]
Categories: Software Testing

Stress Testing Drupal Commerce

LoadStorm - Thu, 04/17/2014 - 07:53

I’ve had the pleasure of working with Andy Kucharski for several years on various performance testing projects. He’s recognized as one of the top Drupal performance experts in the world. He is the Founder of Promet Source and is a frequent speaker at conferences, as well as a great client of LoadStorm. As an example of his speaking prowess, he gave the following presentation at Drupal Mid Camp in Chicago 2014.

Promet Source is a Drupal web application and website development company that offers expert services and support. They specialize in building and performance tuning complex Drupal web applications. Andy’s team worked with our Web Performance Lab to conduct stress testing on Drupal Commerce in a controlled environment. He is skilled at using LoadStorm and New Relic to push Drupal implementations to the point of failure. His team tells me he is good at breaking things.

In this presentation at Drupal Mid Camp, Andy explained how his team ran several experiments in which they load tested a Drupal Commerce Kickstart site on an AWS instance and then compared how the site performed after several well-known performance tuning enhancements were applied: the Drupal cache, aggregation, Varnish, and an Nginx reverse proxy.

View the SlideShare below for a summary of why web performance matters, and to see how they used LoadStorm to show that they scaled Drupal Commerce from a point of failure (POF) of 100 users to 450 users, more than quadrupling its capacity.

Slides: "Drupal Commerce Performance Profiling and Tuning Using LoadStorm Experiments" (Drupal Mid Camp Chicago 2014), by Andrew Kucharski

The post Stress Testing Drupal Commerce appeared first on LoadStorm.

Very Short Blog Posts (16): Usability Problems Are Probably Testability Problems Too

DevelopSense - Michael Bolton - Wed, 04/16/2014 - 13:04
Want to add oomph to your reports of usability problems in your product? Consider that usability problems also tend to be testability problems. The design of the product may make it frustrating, inconsistent, slow, or difficult to learn. Poor affordances may conceal useful features and shortcuts. Missing help files could fail to address confusion; self-contradictory […]
Categories: Software Testing

Testing on the Toilet: Test Behaviors, Not Methods

Google Testing Blog - Mon, 04/14/2014 - 15:25
by Erik Kuefler

This article was adapted from a Google Testing on the Toilet (TotT) episode. You can download a printer-friendly version of this TotT episode and post it in your office.

After writing a method, it's easy to write just one test that verifies everything the method does. But it can be harmful to think that tests and public methods should have a 1:1 relationship. What we really want to test are behaviors, where a single method can exhibit many behaviors, and a single behavior sometimes spans across multiple methods.

Let's take a look at a bad test that verifies an entire method:

@Test public void testProcessTransaction() {
  User user = newUserWithBalance(LOW_BALANCE_THRESHOLD.plus(dollars(2)));
  transactionProcessor.processTransaction(
      user,
      new Transaction("Pile of Beanie Babies", dollars(3)));
  assertContains("You bought a Pile of Beanie Babies", ui.getText());
  assertEquals(1, user.getEmails().size());
  assertEquals("Your balance is low", user.getEmails().get(0).getSubject());
}

Displaying the name of the purchased item and sending an email about the balance being low are two separate behaviors, but this test looks at both of those behaviors together just because they happen to be triggered by the same method. Tests like this very often become massive and difficult to maintain over time as additional behaviors keep getting added in—eventually it will be very hard to tell which parts of the input are responsible for which assertions. The fact that the test's name is a direct mirror of the method's name is a bad sign.

It's a much better idea to use separate tests to verify separate behaviors:

@Test public void testProcessTransaction_displaysNotification() {
  transactionProcessor.processTransaction(
      new User(), new Transaction("Pile of Beanie Babies"));
  assertContains("You bought a Pile of Beanie Babies", ui.getText());
}

@Test public void testProcessTransaction_sendsEmailWhenBalanceIsLow() {
  User user = newUserWithBalance(LOW_BALANCE_THRESHOLD.plus(dollars(2)));
  transactionProcessor.processTransaction(
      user,
      new Transaction(dollars(3)));
  assertEquals(1, user.getEmails().size());
  assertEquals("Your balance is low", user.getEmails().get(0).getSubject());
}

Now, when someone adds a new behavior, they will write a new test for that behavior. Each test will remain focused and easy to understand, no matter how many behaviors are added. This will make your tests more resilient since adding new behaviors is unlikely to break the existing tests, and clearer since each test contains code to exercise only one behavior.
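For instance (a hypothetical continuation that reuses the helper names from the snippets above, so the exact API is only illustrative): if a later change adds the behavior "no low-balance email is sent while the balance stays well above the threshold", that behavior gets its own focused test rather than another assertion bolted onto an existing one:

@Test public void testProcessTransaction_doesNotSendEmailWhenBalanceStaysHigh() {
  // Balance stays comfortably above the threshold, so no low-balance email is expected.
  User user = newUserWithBalance(LOW_BALANCE_THRESHOLD.plus(dollars(100)));
  transactionProcessor.processTransaction(
      user,
      new Transaction(dollars(3)));
  assertEquals(0, user.getEmails().size());
}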

Categories: Software Testing

AB Testing – Episode 3

Alan Page - Mon, 04/14/2014 - 11:02

Yes – it’s more of Brent and Alan yelling at each other. But this time, Brent says that he “hates the Agile Manifesto”. I also talk about my trip to Florida, and we discuss estimation and planning and a bunch of other stuff.

Subscribe to the ABTesting Podcast!

Subscribe via RSS
Subscribe via iTunes

(potentially) related posts:
  1. Alan and Brent talk testing…
  2. More Test Talk with Brent
  3. Thoughts on Swiss Testing Day
Categories: Software Testing

Performance Testing Insights: Part I

LoadStorm - Sat, 04/12/2014 - 18:35

Performance Testing can be viewed as the systematic process of collecting and monitoring the results of system usage, then analyzing them to aid system improvement towards desired results. As part of the performance testing process, the tester needs to gather statistical information, examine server logs and system state histories, determine the system’s performance under natural and artificial conditions and alter system modes of operation.

Performance testing complements functional testing. Functional testing can validate proper functionality under correct usage and proper error handling under incorrect usage. It cannot, however, tell how much load an application can handle before it breaks or performs improperly. Finding the breaking points and performance bottlenecks, as well as identifying functional errors that only occur under stress, requires performance testing.
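What that measurement can look like in code: the sketch below is deliberately minimal and is not LoadStorm (or any particular tool); the target URL, user counts, class name, and the 5xx check are placeholder assumptions. It fires a fixed number of concurrent "virtual users" at one endpoint and summarizes the observed response times:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class MiniLoadProbe {
  public static void main(String[] args) throws Exception {
    URI target = URI.create("https://example.com/");  // placeholder endpoint
    int virtualUsers = 25;                             // placeholder load level
    int requestsPerUser = 10;

    HttpClient client = HttpClient.newBuilder()
        .connectTimeout(Duration.ofSeconds(10))
        .build();
    ExecutorService pool = Executors.newFixedThreadPool(virtualUsers);

    // Each "virtual user" issues a series of requests and records elapsed milliseconds.
    Callable<List<Long>> user = () -> {
      List<Long> timings = new ArrayList<>();
      for (int i = 0; i < requestsPerUser; i++) {
        HttpRequest request = HttpRequest.newBuilder(target).GET().build();
        long start = System.nanoTime();
        HttpResponse<Void> response =
            client.send(request, HttpResponse.BodyHandlers.discarding());
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        if (response.statusCode() < 500) {  // crude signal: 5xx responses suggest a breaking point
          timings.add(elapsedMs);
        }
      }
      return timings;
    };

    List<Future<List<Long>>> futures = new ArrayList<>();
    for (int u = 0; u < virtualUsers; u++) {
      futures.add(pool.submit(user));
    }
    List<Long> all = new ArrayList<>();
    for (Future<List<Long>> f : futures) {
      all.addAll(f.get());
    }
    pool.shutdown();

    if (all.isEmpty()) {
      System.out.println("No successful responses; the target may already be past its breaking point.");
      return;
    }
    Collections.sort(all);
    long median = all.get(all.size() / 2);
    long p95 = all.get(Math.max(0, (int) Math.ceil(all.size() * 0.95) - 1));
    System.out.printf("requests=%d  median=%dms  95th percentile=%dms%n", all.size(), median, p95);
  }
}

A real load test would ramp users gradually, run far longer, and watch server-side metrics as well, but even this much is enough to start plotting response time against user count and spotting where the curve bends.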

The purpose of Performance testing is to demonstrate that:

1. The application processes required business process and transaction volumes within specified response times in a real-time production database (Speed).

2. The application can handle various user load scenarios (stresses), ranging from a sudden load “spike” to a persistent load “soak” (Scalability).

3. The application is consistent in availability and functional integrity (Stability).

4. The minimum configuration that will allow the system to meet the formally stated performance expectations of stakeholders can be determined.

When should I start testing and when should I stop?

When to Start Performance Testing:

A common practice is to start performance testing only after functional, integration, and system testing are complete; that way, it is understood that the target application is “sufficiently sound and stable” to ensure valid performance test results. However, the problem with the above approach is that it delays performance testing until the latter part of the development lifecycle. Then, if the tests uncover performance-related problems, one has to resolve problems with potentially serious design implications at a time when the corrections made might invalidate earlier test results. In addition, the changes might destabilize the code just when one wants to freeze it, prior to beta testing or the final release.
A better approach is to begin performance testing as early as possible, just as soon as any of the application components can support the tests. This will enable users to establish some early benchmarks against which performance measurement can be conducted as the components are developed.

When to Stop Performance Testing:

The conventional approach is to stop testing once all planned tests are executed and there is a consistent and reliable pattern of performance improvement. This approach gives users accurate performance information at that instant. However, one can quickly fall behind by just standing still. The environment in which clients will run the application will always be changing, so it’s a good idea to run ongoing performance tests. Another alternative is to set up a continual performance test and periodically examine the results. One can “overload” these tests by making use of real-world conditions. Regardless of how well it is designed, one will never be able to reproduce all the conditions that the application will have to contend with in the real-world environment.

The post Performance Testing Insights: Part I appeared first on LoadStorm.

Users, Usage, Usability, and Data

Alan Page - Thu, 04/10/2014 - 10:33

The day job (and a new podcast) have been getting the bulk of my time lately, but I’m way overdue to talk about data and quadrants.

If you need a bit of context or a refresher on my stance, this post talks about my take on Brian Marick’s quadrants (used famously by Gregory and Crispin in their wonderful Agile Testing book); I assert that the left side of the quadrant is well suited for programmer ownership, and that the right side is suited for quality / test team ownership. I also assert that the right-side knowledge can be obtained through data, and that one could gather what they need in production – from actual customer usage.

And that’s where I’ll try to pick up.

Agile Testing labels Q3 as “Tools”, and Q4 as “Manual”. This is (or can be) generally true, but I claim that it doesn’t have to be true. Yes, there are some synthetic tests you want to run locally to ensure performance, reliability, and other Q3 activities, but you can get more actionable data by examining data from customer usage. Your top-notch performance suite doesn’t matter if your biggest slowdown occurs on a combination of graphics card and bus speed that you don’t have in your test lab. Customers use software in ways we can’t imagine – and on a variety of configurations that are practically impossible to duplicate in labs. Similarly, stress suites are great – but knowing what crashes your customers are seeing, as well as the error paths they are hitting is far more valuable. Most other “ilities” can be detected from customer usage as well.

Evaluating Q4 from data is …interesting. The list of items in the graphic above is from Agile Testing, but note that Q4 is the quadrant labeled (using my labels) Customer Facing / Quality Product. You do Alpha and Beta testing in order to get customer feedback (which *is* data, of course), but beyond that, I need to make somewhat larger leaps.

To avoid any immediate arguments, I’m not saying that exploratory testing can be replaced with data, or that exploratory testing is no longer needed. What I will say is that not even your very best exploratory tester can represent how a customer uses a product better than the actual customer.

So let’s move on to scenarios and, to some extent, usability testing. Let’s say that one of the features / scenarios of your product is “Users can use our client app to create a blog post, and post it to their blog”. The “traditional” way to validate this scenario is to make a bunch of test cases (either written down in advance (yuck) or discovered through exploration) that create blog entries with different formatting and options, and then make sure the app can post to whatever blog services are supported. We would also dissect the crap out of the scenario and ask a lot of questions about every word until all ambiguity is removed. There’s nothing inherently wrong with this approach, but I think we can do better.

Instead of the above, tweak your “testing” approach: rather than asking, “Does this work?” or “What would happen if…?”, ask “How will I know if the scenario was completed successfully?” For example (an instrumentation sketch follows the list below), if you knew:

  • How many people started creating a blog post in our client app?
  • Of the above set, how many post successfully to their blog?
  • What blog providers do they post to?
  • What error paths are being hit?
  • How long does posting to their blog take?
  • What sort of internet connection do they have?
  • How long does it take for the app to load?
  • After they post, do they edit the blog immediately (is it WYSIWYG)?
  • etc.
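To make those questions answerable, the scenario has to be instrumented. The sketch below is my own illustration, not any product’s real telemetry API; the event names, fields, and the println “sink” are invented placeholders:

import java.time.Instant;
import java.util.LinkedHashMap;
import java.util.Map;

/** Minimal scenario instrumentation: each step of "create a blog post and publish it"
 *  emits a structured event that can later be aggregated to answer the questions above. */
public class BlogPostTelemetry {

  // In a real product this would feed a telemetry pipeline; printing stands in for that here.
  private static void emit(String event, Map<String, ?> fields) {
    System.out.println(Instant.now() + " " + event + " " + fields);
  }

  public static void postStarted(String sessionId) {
    emit("blogpost.started", Map.of("session", sessionId));
  }

  public static void postPublished(String sessionId, String provider, long elapsedMs) {
    Map<String, Object> fields = new LinkedHashMap<>();
    fields.put("session", sessionId);
    fields.put("provider", provider);    // which blog service was posted to
    fields.put("elapsedMs", elapsedMs);  // how long publishing took
    emit("blogpost.published", fields);
  }

  public static void postFailed(String sessionId, String errorPath) {
    emit("blogpost.failed", Map.of("session", sessionId, "errorPath", errorPath));
  }
}

Counting blogpost.published events against blogpost.started events answers the completion question, grouping blogpost.failed events by errorPath shows which error paths are actually hit, and the elapsedMs field feeds the “how long does posting take?” question.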

With the above, you can begin to infer a lot about how people use your application, discover outliers, and answer questions; perhaps it will even help you discover new questions you want answered. And to get an idea of whether they may have liked the experience, perhaps you could track things like:

  • How often do people post to their blog from our client app?
  • When they encounter an error path, what do they do? Try again? Exit? Uninstall?
  • etc.

Of course, you can get subjective data as well via short surveys. These tend to annoy people, but used strategically and sparingly, they can help you gauge the true customer experience. I know of at least one example at Microsoft where customers were asked to provide a star rating and feedback after using an application – over time, the team could use the available data to accurately predict what star rating customers would give their experience. I believe that’s a model that can be reproduced frequently.

Does a data-prominent strategy work everywhere? Of course not. Does it replace the need for testing? Don’t even ask – of course not. Before taking too much of the above to heart, answer a few questions about your product. If your product is a web site or web service or anything else you can update (or roll back) as frequently as you want, of course you want to rely on data as much as possible for Q3 and Q4. But, even for “thick” apps that run on a device (computer, phone, toaster) that’s always connected, you should also consider how you can use data to answer questions typically asked by test cases.

But look – don’t go crazy. There are a number of products where long tests (what I call Q3 and Q4 tests) can be replaced entirely by data. But don’t blindly decide that you no longer need people to write stress suites or do exploratory testing. If you can’t answer your important questions from analyzing data, by all means, use people with brains and skills to help you out. And even if you think you can get all your answers with data, use people as a safety net while you make the transition. It’s quite possible (probable?) to gather a bunch of data that isn’t actually the data you need, and then mis-analyze it and ship crap people don’t want – that’s not a trap you want to fall into.

Data is a powerful ally. How many times, as a tester, have you found an issue and had to convince someone it was something that needed to be fixed or customers would rebel? With data, rather than rely on your own interpretation of what customers want, you can make decisions based on what customers are actually doing. For me, that’s powerful, and a strong statement towards the future of software quality.

(potentially) related posts:
  1. Finding Quality
  2. It’s all just testing
  3. Being Forward
Categories: Software Testing

Out-Of-Touch People Want Metrics

Eric Jacobson's Software Testing Blog - Wed, 04/09/2014 - 12:19

The second thing (here is the first) Scott Barber said that stayed with me is this:

The more removed people are from IT workers, the higher their desire for metrics.  To paraphrase Scott, “the managers on the floor, in the cube farms, agile spaces or otherwise with their teams most of the time, don’t use a lot of metrics because they just feel what’s going on.”

It seems to me, those higher-up people dealing with multiple projects don’t have (as much) time to visit the cube farms, and they know summarized information is the quickest way to learn something.  The problem is, too many of them think:

SUMMARIZED INFORMATION = ROLLED UP NUMBERS

It hadn’t occurred to me until Scott said it.  That, alone, does not make metrics bad.  But it helps me understand why I (as a test manager) don’t bother with them, yet spend a lot of time fending off requests for them from out-of-touch people (e.g., directors, other managers).  Note: by “out-of-touch” I mean out-of-touch with the details of the workers, not out-of-touch in general.

Scott reminds us the right way to find the right metric for your team is to start with the question:

What is it we’re trying to learn?

I love that.  Maybe a metric is not the best way of learning.  Maybe it is.  If it is, perhaps coupling it with a story will help explain the true picture.

Thanks Scott!

Categories: Software Testing

Infographic: Web Performance Impacts Conversion Rates

LoadStorm - Wed, 04/09/2014 - 09:46

There are many companies that design or re-design websites with the goal of increasing conversion rates. What do we mean by conversion rates? SiteTuners defines conversion rate as “the percentage of landing page visitors who take the desired conversion action.” Examples include completing a purchase, filling out an information form, or donating to a cause.

Almost all companies focus on the design, look, and layout of a website when attempting to increase conversion rates. However, we at LoadStorm want to draw more attention to the role that web performance plays in conversions. Many studies have conclusively shown that a delay in page load time will negatively affect conversions; check out our latest infographic for a summary of the statistics!


Sources:

1. http://www.aberdeen.com/Aberdeen-Library/5136/RA-performance-web-application.aspx

2. https://econsultancy.com/blog/10936-site-speed-case-studies-tips-and-tools-for-improving-your-conversion-rate

3. http://www.globaldots.com/how-website-speed-affects-conversion-rates/

4. http://www.crestechglobal.com/wp-content/uploads/2013/10/WhitePaper_AgileBottleneck_Performance.pdf

5. http://www.webperformancetoday.com/2013/12/11/slower-web-pages-user-frustration/

6. http://kylerush.net/blog/meet-the-obama-campaigns-250-million-fundraising-platform

7. http://www-01.ibm.com/software/th/collaboration/webexperience/index.html

The post Infographic: Web Performance Impacts Conversion Rates appeared first on LoadStorm.

Nondeterministic Testing Instead Of Pass/Fail

Eric Jacobson's Software Testing Blog - Tue, 04/08/2014 - 10:55

I heard a great interview with performance tester, Scott Barber.  Two things Scott said stayed with me.  Here is the first.

Automated checks that record a time span (e.g., existing automated checks hijacked to become performance tests) may not need to result in Pass/Fail with respect to performance.  Instead, they could just collect their time span results as data points; a minimal sketch of this follows the list below.  These data points can help identify patterns:

  • Maybe the time span increases by 2 seconds after each new build.
  • Maybe the time span increases by 2 seconds after each test run on the same build.
  • Maybe the time span unexpectedly decreases after a build.
  • etc.
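A minimal sketch of that idea (the class, file location, and column layout are my own invention, not anything Scott described): time the operation inside the check and append the measurement for later trend analysis, instead of comparing it to a threshold:

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.time.Instant;

public class TimingRecorder {

  private static final Path DATA_FILE = Path.of("timings.csv");  // placeholder location

  /** Times an action and appends the result as a data point instead of passing or failing on it. */
  public static void record(String checkName, String buildId, Runnable action) throws IOException {
    long start = System.nanoTime();
    action.run();
    long elapsedMs = (System.nanoTime() - start) / 1_000_000;

    // One row per run: when it ran, which build, which check, and how long it took.
    String row = String.format("%s,%s,%s,%d%n", Instant.now(), buildId, checkName, elapsedMs);
    Files.writeString(DATA_FILE, row, StandardCharsets.UTF_8,
        StandardOpenOption.CREATE, StandardOpenOption.APPEND);
  }
}

An existing automated check could wrap its timed step in record("someCheckName", currentBuildId, () -> { /* the timed step */ }); its functional assertions still pass or fail, but the duration becomes a point on a trend line rather than a verdict.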

My System 1 thinking tells me to add a performance threshold that resolves automated checks to a mere Pass/Fail.  Had I done that, I would have missed the full story, as Facebook did. 

Rumor has it that Facebook had a significant production performance bug that resulted from reliance on a performance test that didn’t report performance increases; it was only supposed to fail if performance dropped.

At any rate, I can certainly see the advantage of dropping Pass/Fail in some cases and forcing yourself to analyze collected data points instead.

Categories: Software Testing

Testers should learn to code?

Dorothy Graham - Tue, 04/08/2014 - 09:10
It seems to be the "perceived wisdom" these days that if testers want to have a job in the future, they should learn to write code. Organisations are recruiting "developers in test" rather than testers. Using test automation tools (directly) requires programming skills, so the testers should acquire them, right?

I don't agree, and I think this is a dangerous attitude for testing in general.

Here's a story of two testers:

  • Les has a degree in Computer Science, started out in a traditional test team, and now works in a multi-disciplinary agile team. Les is a person who likes to turn a hand to whatever needs doing, and enjoys a technical challenge. Les is very happy to write code, and has recently started coding for a recently acquired test automation tool, making sure that good programming practices are applied to the testware and test code. Les is very happy as a developer-tester.
  • Fran came into testing through the business. Started out being a user who was more interested in any new release from IT than the other users, so became the “first user”. Got drawn into the user acceptance test group and enjoyed testing – found things that the technical people missed, due to a good business background. With training in testing techniques, Fran became a really good tester, providing great value to the organization. Probably saved them hundreds of thousands of pounds a year by advising on new development and testing from a user perspective. Fran never wanted anything to do with code.


What will happen when the CEO hears: “Testers should learn to code”? Les’s job is secure, but what about Fran? I suspect that Fran is already feeling less valued by the organisation and is worried about job security, in spite of having provided a great service for years as an excellent software tester.

So what’s wrong with testers who write code?
  • absolutely nothing
  • for testers who want to code, who enjoy it, who are good at it
  • for testers in agile teams


Why is this a dangerous attitude for testing in general?
  • it reads as “all testers should write code” and is taken that way by managers who are looking to get rid of people
  • not all testers will be good at coding or want to become developers (maybe that's why they went into testing)
  • it implies that “the only good tester is one who can write code”
  • it devalues testing skills (now they want coders, not [good] testers; in fact, if coders can test, why do we need specialist testers anyway?)
  • tester-developers may "go native" and be pushed into development, so we lose more testing skills
  • it's not right to force good testers out of our industry
So I say, let's stand up for testing skills, and for non-developer testers!

Categories: Software Testing

More Test Talk with Brent

Alan Page - Mon, 04/07/2014 - 10:44

Brent and I gabbed about testing again last week – or sort of… We mostly talked about change management, and why people avoid or embrace change.

We’ll probably settle back to posting once every two weeks after this – or some sort of regular, sustainable cadence.

Subscribe to the ABTesting Podcast!

Subscribe via RSS
Subscribe via iTunes

(potentially) related posts:
  1. Alan and Brent talk testing…
  2. Careers in Test
  3. A Test Strategy – or whatever you want to call it
Categories: Software Testing

Human vs. Machine Test Evaluation Has A Double Standard

Eric Jacobson's Software Testing Blog - Fri, 04/04/2014 - 13:54

I often hear skeptics question the value of test automation.  Their questioning is healthy for the test industry and it might flush out bad test automation.  I hope it continues.

But shouldn’t these same questions be raised about human testing (AKA Manual testing)?  If these same skeptics judged human testing with the same level of scrutiny, might it improve human testing? 

First, the common criticisms of test automation:

  • Sure, you have a lot of automated checks in your automated regression check suite, but how many actually find bugs?
  • It would take hours to write an automated check for that.  A human could test it in a few seconds.
  • Automated checks can’t adapt to minor changes in the system under test.  Therefore, the automated checks break all the time.
  • We never get the ROI we expect with test automation.  Plus, it’s difficult to measure ROI for test automation.
  • We don’t need test automation.  Our manual testers appear to be doing just fine.

Now let’s turn them around to question manual testing:

  • Sure, you have a lot of manual tests in your manual regression test suite, but how many actually find bugs?
  • It would take hours for a human to test that.  A machine could test it in a few seconds.
  • Manual testers are good at adapting to minor changes in the system under test.  Sometimes, they aren’t even aware of their adaptations.  Therefore, manual testers often miss important problems.
  • We never get the ROI we expected with manual testing.  Plus, it’s difficult to measure ROI for manual testing.
  • We don’t need manual testers.  Our programmers appear to be doing just fine with testing.
Categories: Software Testing
