Software Testing

Book Review – Introduction to Agile Methods - Ashmore and Runyan

Randy Rice's Software Testing & Quality - Wed, 11/12/2014 - 09:07
First, as a disclaimer, I am a software testing consultant and also a project life cycle consultant. Although I do train teams in agile testing methods, I do not consider myself an agile evangelist. There are many things about agile methods I like, but there are also things that trouble me about some agile methods. I work with organizations that use a wide variety of project life cycle approaches and believe that one approach is not appropriate in all situations. In this review, I want to avoid getting into the weeds of sequential life cycles versus agile and the pros and cons of agile, and instead focus on the merits of the book itself.

Since the birth of agile, I have observed that agile methods keep evolving and it is hard to find in one place a good comparison of the various agile methodologies. I have been looking for a good book that I could use in a learning context and I think this book is the one.
One of the issues that people experience in learning agile is that the early works on the topic are from the early experiences in agile. We are now almost fifteen years into the agile movement and many lessons have been learned. While this book references many of the early works, the authors present the ideas in a more recent context.
This is a very easy-to-read book, and it is thorough in its coverage of agile methods. I also appreciate that the authors are up-front about some of the pitfalls of certain methods in some situations.
The book is organized as a learning tool, with learning objectives, case studies, review questions and exercises, so one could easily apply this book “as-is” as the basis for training agile teams. An individual could also use this book as a self-study course on agile methods. There is also a complete glossary and index which helps in getting the terms down and using the book as a reference.
The chapter on testing provided an excellent treatment of automating agile tests, and manual testing was also discussed. In my opinion, there could have been more detail about how to test functionality from an external perspective, such as the role of functional test case design in an agile context. Too many people only test the assertions and acceptance-level tests without testing the negative conditions. The authors do, however, emphasize that software attributes such as performance, usability, and others should be tested. The “hows” of those forms of testing are not really discussed in detail. You would need a book that dives deeper into those topics.
I also question how validation is defined and described, relying more on surveys than actual real-world tests. Surveys are fine, but they don’t validate specific use of the product and features. If you want true validation, you need real users performing tests based on how they intend to use a product.
There is significant discussion on the importance of having a quality-minded culture, which is foundational for any of the methods to deliver high-quality software. This includes the ability to be open about discussing defects.
To me, the interviews were interesting, but some of the comments were a little concerning. However, I realize they are the opinions of agile practitioners and I see quite a bit of disagreement between experts at the conferences, so I really didn’t put a lot of weight in the interviews.
One nitpick I have is that on the topic of retrospectives, the foundational book, “Project Retrospectives” by Norm Kerth was not referenced. That was the book that changed the term “post-mortem” to “retrospectives” and laid the basis for the retrospective practice as we know it today.
My final thought is that the magic is not in the method. Whether using agile methods or sequential life cycles like the waterfall, it takes more than simply following a method correctly. If there are issues such as team strife, stakeholder disengagement, and other organizational maladies, then agile or any other methodology will not fix them. A healthy software development culture is needed, along with a healthy enterprise culture and an understanding of what it takes to build and deliver great software.
I can highly recommend this book for people who want to know the distinctions between agile methods, those that want to improve their practices and for management that may feel a disconnect in their understanding of how agile is being performed in their organization. This book is also a great standalone learning resource!
Categories: Software Testing

Don’t Go Changing…

Alan Page - Wed, 11/05/2014 - 16:10

My last post dove into the world of engineering teams / combined engineering / fancy-name-of-the-moment where there are no separate disciplines for development and test. I think there are advantages to one-team engineering, but that doesn’t mean that your team needs to change.

First things First

I’ve mentioned this before, but it’s worth saying again. Don’t change for change’s sake. Make changes because you think the change will solve a problem. And even then, think about what problems a change may cause before making a change.

In my case, one-team engineering solves a huge inefficiency in the developer to tester communication flow. The back-and-forth iteration of They code it – I test it – They fix it – I verify it can be a big inefficiency on teams. I also like the idea of both individual and team commitments to quality that happen on discipline-free engineering teams.

Is it for you?

I see the developer-to-tester ping pong match take up a lot of time on a lot of teams. But I don’t know your team, and you may have a different problem. Before diving in on what-Alan-says, ask yourself, “What’s the biggest inefficiency on our team?” Now, brainstorm solutions for that inefficiency. Maybe combined engineering is one potential solution, maybe it’s not. That’s your call to make. And then remember that the change alone won’t solve the problem (I outlined some of the challenges in my last post as well).

Taking the Plunge

OK. So your team is going to go for it and have one engineering team. What else will help you be successful?

In the I-should-have-said-this-in-the-last-post category, I think running a successful engineering team requires a different slant on managing (and leading) the team. Some things to consider (and why you should consider them) include:

  • Flatten – It’s really tempting to organize a team around product functionality. Does this sound familiar to anyone?
    Create a graphics team, Create a graphics test team. Create a graphics performance team. Create a graphics analysis team. Create a graphics analysis test team. You get the point.
    Instead, create a graphics team. Or create a larger team that includes graphics and some related areas. Or create an engineering team that owns the whole product (please don’t take this last sentence literally on a team that includes hundreds of people).
  • Get out of the way – A book I can’t recommend enough for any manager or leader who wants to transition from the sausage-making / micro-managing methods of the previous century is The Leader’s Guide to Radical Management by Steve Denning (and note that there are several other great books on reinventing broken management; e.g. Birkinshaw, Hamel, or Collins for those looking for even more background). In TLGRM, Denning says (paraphrased), Give your organization a framework they understand, and then get out of their way. Give them some guidelines and expectations, but then let them work. Check in when you need to, but get out of the way. Your job in 21st century management is to coach, mentor, and orchestrate the team for maximum efficiency – not to monitor them continuously or create meaningless work. This is a tough change for a lot of managers – but it’s necessary – both for the success of the workers and for the sanity of managers. Engineering teams need the flexibility (and encouragement) to self-organize when needed, innovate as necessary, and be free from micro-management.
  • Generalize and Specialize – I’ve talked about Generalizing Specialists before (search my last post and my blog). For another take, I suggest reading what Jurgen Appelo has to say about T-shaped people, and what Adam Knight says about square-shaped teams for additional explanation on specialists who generalize and how they make up a good team.

This post started as a follow-up and clarification for my previous post – but has transformed into a message to leaders and managers on their role – and in that, I’m reminded how critically important good leadership and great managers are to making transitions like this. In fact, given a choice of working for great leaders and awesome managers on a team making crummy software with horrible methods, I’d probably take it every time…but I know that team doesn’t exist. Great software starts with great teams, and great teams come from leaders and managers who know what they’re doing and make changes for the right reasons.

If your team can be better – and healthier – by working as a single engineering team, I think it’s a great direction to go. But make your changes for the right reasons and with a plan for success.

(potentially) related posts:
  1. Stuff About Leadership
  2. Who Owns Quality?
  3. Leadership
Categories: Software Testing

Very Short Blog Posts (21): You Had It Last!

DevelopSense - Michael Bolton - Tue, 11/04/2014 - 06:51
Sometimes testers say to me “My development team (or the support people, or the managers) keep saying that any bugs in the product are the testers’ fault. ‘It’s obvious that any bug in the product is the tester’s responsibility,’ they say, ‘since the tester had the product last.’ How do I answer them?” Well, you […]
Categories: Software Testing

Throwing Kool-Aid on the Burning Books

DevelopSense - Michael Bolton - Sun, 11/02/2014 - 10:10
Another day, and another discovery of an accusation of Kool-Aid drinking or book burning from an ISO 29119 proponent (or an opponent of the opposition to it; a creature in the same genus, but not always the same species). Most of the increasingly vehement complaints come from folks who have not read [ISO 29119], perhaps […]
Categories: Software Testing

So You Didn’t Find Any Bugs, Huh?

Eric Jacobson's Software Testing Blog - Wed, 10/29/2014 - 14:39

You must really be a sucky tester.

I’m kidding, of course.  There may be several explanations as to why an excellent tester like yourself is not finding bugs.  Here are four:

  • There aren’t any bugs!  Duh.  If your programmers are good coders and testers, and perhaps writing a very simple Feature in a closely controlled environment, it’s possible there are no bugs.
  • There are bugs but the testing mission is not to find them.  If the mission is to do performance testing, survey the product, determine under which conditions the product might work, smoke test, etc., it is likely we will not find bugs.
  • A rose by any other name… Maybe you are finding bugs in parallel with the coding and they are fixed before ever becoming “bug reports”.  In that case, you did find bugs but are not giving yourself credit.
  • You are not as excellent as you think.  Sorry.  Finding bugs might require skills you don’t have.  Are you attempting to test data integrity without understanding the domain?  Are you testing network transmissions without reading pcaps?

As testers, we often feel expendable when we don’t find bugs.  We like to rally around battle cries like:

“If it ain’t broke, you’re not trying hard enough!”

“I don’t make software, I break it!”

“There’s always one more bug.”

But consider this: a skilled tester can do much more than merely find a bug.  A skilled tester can also tell us what appears to work, what hasn’t broken in the latest version, what unanticipated changes have occurred in our product, how it might work better, how it might solve additional problems, etc.

And that may be just as important as finding bugs.

Categories: Software Testing

Testing is…

DevelopSense - Michael Bolton - Tue, 10/28/2014 - 10:36
Every now and again, someone makes some statement about testing that I find highly questionable or indefensible, whereupon I might ask them what testing means to them. All too often, they’re at a loss to reply because they haven’t really thought deeply about the matter; or because they haven’t internalized what they’ve thought about; or […]
Categories: Software Testing

To Combine… or not?

Alan Page - Mon, 10/27/2014 - 15:47

I talk a lot (and write a bit) about software teams without separate disciplines for developers and testers (sometimes called “combined engineering” within the walls of MS – a term I find annoying and misleading – details below). For a lot of people, the concept of making software with a single team falls into the “duh – how else would you do it?” category, while many others see it as the end of software quality and the end of tester careers.

Done right, I think an engineering team based approach to software development is more efficient, and produces higher quality software than a “traditional” Dev & Test team. Unfortunately, it’s pretty easy to screw it up as well…

How to Fail

If you are determined to jump on the fad of software-engineering-without-testers, and don’t care about results, here is your solution!

First, take your entire test team and tell them that they are now all developers (including your leads and managers). If you have any testers that can’t code as well as your developers, now is a great time to fire them and hire more coders.

Now, inform your new “engineering team” that everyone on the team is now responsible for design, implementation, testing, deployment, and overall end-to-end quality of the system. While you’re at it, remind them that because there are now more developers on the team, you expect team velocity to increase accordingly.

Finally, no matter what obstacles you hit, or what the team says about the new culture, don’t tell them anything about why you’ve made the changes. Since you’re in charge of the team, it’s your right to make any decision you want, and it’s one step short of insubordination to question your decisions.

A better way?

It’s sad, but I’ve seen pieces (and sadder still, all) of the above section occur on teams before. Let’s look at another way to make (or reinforce) a change that removes a test team (but does not remove testing).

Start with Why?

Rather than say, “We’re moving to a discipline-free organization” (the action), start with the cause – e.g. “We want to increase the efficiency of our team and improve our ability to get new functionality into customers’ hands. In order to do this, we are going to move to a discipline-free organizational structure. …etc.” I would probably even preface this with some data or anecdotes, but the point is that starting with a compelling reason for “why” has a much better chance of working than a proclamation in isolation (and this approach works for almost any other organizational change as well).

Build your Team

The core output (and core activity) of an engineering team is working software. To this end, this is where most (but not necessarily all) of the team should focus. The core role of a software engineer is to create working software (at a minimum, this means design, implementation, unit, and functional tests). IMO, the ability to do this is the minimum bar for a software engineer.

But…if you read my blog, you’re probably someone who knows that there’s a lot more to quality software than writing and testing code. I think great software starts with core software engineering, but that alone isn’t enough. Good software teams have learned the value of using generalizing specialists to help fill the gap between core-software engineering, and quality products.

Core engineering (aka what-developers-do) covers a large part of Great Software. The sizes of the circles may be wrong (or wrong for some teams); this is just a model.

Engineering teams still have a huge need for people who are able to validate and explore the system as a whole (or in large chunks). These activities remain critically important to quality software, and they can’t be ignored. These activities (in the outer loop of my model) improve and enhance “core software engineering”. In fact, an engineering team could be structured with a separate team to focus on the types of activities in the outer loop.

Generalizing Specialists (and Specializing Generalists)

I see an advantage, however, in engineering teams that include both circles above. It’s important to note that some folks who generally work in the “Core Engineering” circle will frequently (or regularly) take on specialist roles that live in the outer loop. A lot of people seem to think that on discipline-free software teams, everyone can do everything – which is, of course, flat out wrong. Instead, it’s critical that a good software team has (generalizing) specialists who can look critically at quality areas that span the product.

There also will/must be folks who live entirely in the outer ring, and there will be people like me who typically live in the outer ring, but dive into product code as needed to address code problems or feature gaps related to the activities in the outer loop. Leaders need to support (and encourage – and celebrate) this behavior…but with this much interaction between the outer loop of testing and investigation, and the inner loop of creating quality features, it’s more efficient to have everyone on one team. I’ve seen walls between disciplines get in the way of efficient engineering (rough estimate) a million times in my career. Separating the team working on the outer loop from core engineering can create another opportunity for team walls to interfere with engineering. To be fair, if you’re on a team where team walls don’t get in the way, and you’re making great software with separate teams, then there’s no reason to change. For some (many?) of us, the efficiency gains we can get from this approach to software development are worth the effort (along with any short-term worries from the team).

Doing it Right

It’s really easy for a team to make the decision to move to one-engineering-team for the wrong reasons, and it’s even easier to make the move in a way that will hurt the team (and the product). But after working mostly the wrong way for nearly 20 years, I’m completely sold on making software with a single software team. But making this sort of change work effectively is a big challenge.

But it’s a challenge that many of us have faced already, and something everyone worried about making good software has to keep their eye on. I encourage anyone reading this to think about what this sort of change means for you.

(potentially) related posts:
  1. Some Principles
  2. In Search of Quality
  3. Testing Trends…or not?
Categories: Software Testing

GTAC 2014 is this Week!

Google Testing Blog - Mon, 10/27/2014 - 12:26
by Anthony Vallone on behalf of the GTAC Committee

The eighth GTAC commences on Tuesday at the Google Kirkland office. You can find the latest details on the conference at our site, including speaker profiles.

If you are watching remotely, we'll soon be updating the live stream page with the stream link and a Google Moderator link for remote Q&A.

If you have been selected to attend or speak, be sure to note the updated parking information. Google visitors will use off-site parking and shuttles.

We look forward to connecting with the greater testing community and sharing new advances and ideas.

Categories: Software Testing

Facts and Figures in Software Engineering Research (Part 2)

DevelopSense - Michael Bolton - Wed, 10/22/2014 - 20:13
On July 23, 2002, Capers Jones, Chief Scientist Emeritus of a company called Software Productivity Research gave a presentation called “SOFTWARE QUALITY IN 2002: A SURVEY OF THE STATE OF THE ART”. In this presentation, he shows data on a slide titled “U.S. Averages for Software Quality”. (Source:, accessed September 5, 2014) It is […]
Categories: Software Testing

Facts and Figures in Software Engineering Research

DevelopSense - Michael Bolton - Mon, 10/20/2014 - 19:44
On July 23, 2002, Capers Jones, Chief Scientist Emeritus of a company called Software Productivity Research, gave a presentation called “SOFTWARE QUALITY IN 2002: A SURVEY OF THE STATE OF THE ART”. In this presentation, he provided the sources for his data on the second slide: SPR clients from 1984 through 2002 • About 600 […]
Categories: Software Testing

Things A Tester Should Say At A Daily Standup

Eric Jacobson's Software Testing Blog - Mon, 10/20/2014 - 08:27

Hey testers, don’t say:

“yesterday I tested a story.  Today I’m going to test another story.  No impediments”

Per Scrum inventor, Jeff Sutherland, daily standups should not be “I did this…”, “I’ll do that…”.  Instead, share things that affect others with an emphasis on impediments.  The team should leave the meeting with a sense of energy and urgency to rally around the solutions of the day.  When the meeting ends, the team should be saying, “Let’s go do this!”.

Here are some helpful things a tester might say in a daily standup:

  • Let’s figure out the repro steps for production Bug40011 today, who can help me?
  • I found three bugs yesterday, please fix the product screen bug first because it is blocking further testing.
  • Sean, I know you’re waiting on my feedback on your new service, I’ll get that to you first thing today.
  • Yesterday I executed all the tests we discussed for Story102, unless someone can think of more, I am done with that testing.  Carl, please drop by to review the results.
  • I’m getting out of memory errors on some test automation, can someone stop by to help?
  • If I had a script to identify data corruption, it would save hours.
  • Paul, I understand data models, I’ll test that for you and let you know something by noon.
  • The QA data seems stale.  Don’t investigate any errors yet.  I’m going to refresh data and retest it today.  I’ll let you know when I’m done.
  • Jolie, if you can answer my question on expected behavior, I can finish testing that Story this afternoon.

Your role as a tester affects so many people. Think about what they might be interested in and where your service might be most valuable today.

Categories: Software Testing

The Beginner's Mind Applied to Software Testing

Randy Rice's Software Testing & Quality - Mon, 10/20/2014 - 08:10
Something has been bothering me for some time now as I conduct software testing training classes on a wide variety of topics - ISTQB certification, test management, test automation, and many others.

But it's not only in that venue I see the issue. I also see it in conferences where people attend, sit through presentations - some good and some not so good - and leave with comments like "I didn't learn anything new." Really?

I remember way back in the day when I chaired QAI's International Testing Conference (1995 - 2000) reading comments like those and asking "How could this be?" I knew we had people presenting techniques that were innovations at the time. One specific example was how to create test cases from use cases. At the time, it was a new and hot idea.

So this nagging feeling has been rolling around in my mind for weeks now. Then, this past week I had the need to learn more about something not related to testing at all. One article I found told of the author's similar quest. All of his previous attempts to solve the problem ended up looking similar, so he started over - again - except this time with a mindset in which he knew nothing about the subject. Then, he went on to describe the idea of the "Beginner's Mind." Then, it all clicked for me.

When I studied and practiced martial arts for about ten years, my fellow students and I learned that if we trusted in our belt color, or how many years we had been learning, we would get our butts kicked. In the martial arts, beginners and experts train in the same class and nobody complains. The experts mentor the beginners and they also perfect the minor flaws in their techniques. The real danger was when we thought we already knew what the teacher was teaching.

Now...back to testing training...

I respect that someone may have 30 years of experience in software testing. However, those 30 years may have been spent at one company or only a few companies. Also, the person may only have worked in a single industry. Even if the experience is as wide as possible, you still don't have 100% knowledge of anything.

The best innovators I know in software testing are those that can take very basic ideas and combine them, or find a new twist on them and innovate a new (and better) way of doing something. But you can't do that if you think you already know it all.

In the ISTQB courses, I always tell my students, "You are going to need to 'un-learn' some things and not rely solely on your experience, as great as that might be." That's because the ISTQB has some specific ways it defines things. If you miss those nuances because you are thinking "Been there, done that," you may very well miss questions on the exam. I've seen it happen too many times. I've seen people with 30+ years of experience fail the exam even after taking a class!

So the next time you are at a conference, reading a book, attending a class, etc. and you start to get bored, adopt the beginner's mind. Look at the material, listen to the speaker and ask beginner questions, like "Why is this technique better than another one?", "Why can't you do ..... instead?", or "What would happen if I combined technique X with technique Y?"

Adopt the beginner's mind and you might just find a whole new world of innovation and improvement waiting for you!

Categories: Software Testing

Testing on the Toilet: Writing Descriptive Test Names

Google Testing Blog - Thu, 10/16/2014 - 14:40
by Andrew Trenk

This article was adapted from a Google Testing on the Toilet (TotT) episode. You can download a printer-friendly version of this TotT episode and post it in your office.

How long does it take you to figure out what behavior is being tested in the following code?

@Test public void isUserLockedOut_invalidLogin() {
  authenticator.authenticate(username, invalidPassword);
  authenticator.authenticate(username, invalidPassword);
  authenticator.authenticate(username, invalidPassword);
  assertTrue(authenticator.isUserLockedOut(username));
}
You probably had to read through every line of code (maybe more than once) and understand what each line is doing. But how long would it take you to figure out what behavior is being tested if the test had this name?

isUserLockedOut_lockOutUserAfterThreeInvalidLoginAttempts
You should now be able to understand what behavior is being tested by reading just the test name, and you don’t even need to read through the test body. The test name in the above code sample hints at the scenario being tested (“invalidLogin”), but it doesn’t actually say what the expected outcome is supposed to be, so you had to read through the code to figure it out.
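To make the contrast concrete, here is a small, self-contained sketch of the lockout test with a name that states both the scenario and the expected outcome. The Authenticator implementation below is hypothetical, written only so the example runs on its own; the article does not show the real class.

```java
import java.util.HashMap;
import java.util.Map;

public class DescriptiveTestNameSketch {
    // Hypothetical stand-in for the article's Authenticator, for illustration only.
    static class Authenticator {
        private final Map<String, Integer> failedAttempts = new HashMap<>();

        void authenticate(String username, String password) {
            if (!"correct-password".equals(password)) {
                failedAttempts.merge(username, 1, Integer::sum);
            }
        }

        boolean isUserLockedOut(String username) {
            return failedAttempts.getOrDefault(username, 0) >= 3;
        }
    }

    // The name states both the scenario (three invalid logins)
    // and the expected outcome (the user is locked out).
    static void isUserLockedOut_lockOutUserAfterThreeInvalidLoginAttempts() {
        Authenticator authenticator = new Authenticator();
        authenticator.authenticate("alice", "wrong-password");
        authenticator.authenticate("alice", "wrong-password");
        authenticator.authenticate("alice", "wrong-password");
        if (!authenticator.isUserLockedOut("alice")) {
            throw new AssertionError("expected alice to be locked out");
        }
    }

    public static void main(String[] args) {
        isUserLockedOut_lockOutUserAfterThreeInvalidLoginAttempts();
        System.out.println("test passed");
    }
}
```

If this test ever failed, the failure report alone ("isUserLockedOut_lockOutUserAfterThreeInvalidLoginAttempts") would tell you which behavior broke, without reading the test body.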

Putting both the scenario and the expected outcome in the test name has several other benefits:

- If you want to know all the possible behaviors a class has, all you need to do is read through the test names in its test class, compared to spending minutes or hours digging through the test code or even the class itself trying to figure out its behavior. This can also be useful during code reviews since you can quickly tell if the tests cover all expected cases.

- By giving tests more explicit names, it forces you to split up testing different behaviors into separate tests. Otherwise you may be tempted to dump assertions for different behaviors into one test, which over time can lead to tests that keep growing and become difficult to understand and maintain.

- The exact behavior being tested might not always be clear from the test code. If the test name isn’t explicit about this, sometimes you might have to guess what the test is actually testing.

- You can easily tell if some functionality isn’t being tested. If you don’t see a test name that describes the behavior you’re looking for, then you know the test doesn’t exist.

- When a test fails, you can immediately see what functionality is broken without looking at the test’s source code.

There are several common patterns for structuring the name of a test (one example is to name tests like an English sentence with “should” in the name, e.g., shouldLockOutUserAfterThreeInvalidLoginAttempts). Whichever pattern you use, the same advice still applies: Make sure test names contain both the scenario being tested and the expected outcome.

Sometimes just specifying the name of the method under test may be enough, especially if the method is simple and has only a single behavior that is obvious from its name.

Categories: Software Testing

Fifty Quick Ideas To Improve Your User Stories – Now Available

The Quest for Software++ - Tue, 10/14/2014 - 09:18

The final version of my latest book, Fifty Quick Ideas To Improve Your User Stories, is now available on the Kindle store, and the print version will be available on Amazon soon.

This book will help you write better stories, spot and fix common issues, split stories so that they are smaller but still valuable, and deal with difficult stuff like crosscutting concerns, long-term effects and non-functional requirements. Above all, this book will help you achieve the promise of agile and iterative delivery: to ensure that the right stuff gets delivered through productive discussions between delivery team members and business stakeholders.

For the next seven days, the book will be sold on the Kindle store at half the normal price. To grab it at that huge discount, head over to the Kindle store (the price will double on 22nd October). Here are the quick links: UK DE FR ES IT JP CN BR CA MX AU IN

Tips and Tricks to Troubleshooting PRO Scripts

LoadStorm - Mon, 10/13/2014 - 17:13

When a recording is first uploaded, it behaves the same as when it was recorded; that is, it makes all the same requests, with no differentiating qualities. Until you parameterize it, our application will simply repeat all of the GETs, POSTs, and occasional PUT requests the same way they were recorded, but we handle a few things automatically, such as cookies and most form tokens.

Often a new script has a bundle of requests, only a few of which need parameterization to work as expected for every VUser. These problems are often found when a request’s recorded status code does not match its status code from the last script execution. A problem such as a failed login request can cause subsequent requests to fail, creating a domino effect. For this reason, it’s always a good idea to start from the earliest requests and work your way down, checking that requests behave as expected during a single script execution. Even after it’s all fixed, a small load test is a good way to check that the script also works as expected for concurrent users.

When parameterizing a script, here are the key recommendations:

  • Manage servers
  • Start with first requests and proceed sequentially
  • Examine POSTs to make sure you are getting the response you expect (e.g. login)
  • Check script Settings such as Think Times, Timeouts, etc.
  • Re-execute the script after changes
  • Run a small load test to confirm your script is working as expected (debug test)
Where to Begin

Switch to the Manage Server sub-tab and ignore all non-essential third-party servers. The goal is to test your web application, not someone else’s. If you feel the service they’re providing is critical, please contact us for help with server verification. After these have been ignored, execute the script again. This will make it easier to determine which requests you need to work on, because the unnecessary requests will now show a status of ignored instead of a response code.


You can make it easier to find problematic requests by using our filtering options. To quickly bring all of the apparent errors to your attention, click the All Errors radio button.

For the less obvious errors, you’ll need to look for mismatched status codes by clicking the Recorded vs Last drop-down and selecting the Status Code Mismatch option. As you compare the Recorded Status column to the Last Status column for each request, disregard requests whose last status shows cached or ignored. Typically the less obvious problems are requests that changed from a status 200 to a 302, or that were meant to be a 302 and now return a status 200.
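The mismatch-filtering logic just described can be sketched in a few lines of code. This is a hypothetical illustration; the request records below are stand-ins for whatever your load-testing tool exports, not an actual LoadStorm API.

```python
# Hypothetical sketch: flag requests whose status code changed between the
# original recording and the last script execution. "cached" and "ignored"
# statuses are skipped because they are not real errors.

def find_mismatches(requests):
    """Return requests needing attention, in recorded (earliest-first) order."""
    problems = []
    for req in requests:
        last = req["last_status"]
        if last in ("cached", "ignored"):
            continue
        if last != req["recorded_status"]:
            problems.append(req)
    return problems

recording = [
    {"url": "/login",    "recorded_status": 302, "last_status": 200},  # login failed
    {"url": "/cart",     "recorded_status": 200, "last_status": "cached"},
    {"url": "/checkout", "recorded_status": 200, "last_status": 500},  # domino effect
]

for req in find_mismatches(recording):
    print(req["url"], req["recorded_status"], "->", req["last_status"])
```

Working from the top of such a list downward mirrors the earliest-requests-first advice above, since one early failure often explains several later ones.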

If you’re unfamiliar with status codes, these are some of the most common:

  • 200 = OK (usually delivers some content)
  • 302 = Found (asks the browser to redirect to a response header called Location)
  • 400 = Bad Request (often this means the URL has invalid syntax)
  • 401 = Unauthorized (you need permission, such as a login, to proceed)
  • 403 = Forbidden (the server refuses to serve the target, with or without authentication)
  • 404 = Not Found (missing the file at this target location)
  • 500 = Internal Server Error (often means LoadStorm made a request that has something the server wasn’t expecting like a static token that should’ve been dynamic)

Another option to filter down to what could be considered your most crucial requests is changing the All mime-types drop down and selecting the text/html option. These requests usually represent each page of text content that your users would need to read and interact with.

Request Details Window

Once you’ve identified a request that you feel is causing problems, double-click it to open the details window. From here you can compare all of the details of the request as recorded with its values in the last execution. Any values that differ in the last execution will appear in red. Usually some values must appear in red because they need to be dynamic per user; often these are tokens of some kind, such as authentication tokens, userIDs, sessionIDs, viewstates, and incremental values. If something looks like a randomly generated token but is colored black for the last execution, that indicates it was repeated as a static copy of the recorded value. This is often the cause of problems with a request, and it can create a domino effect in subsequent requests that rely on passing that token around or on some other chain of events that must occur. To fix this, look at prior requests to see where the token is being set, then modify the problematic request to grab the token from the prior request that assigns it.

Response Content tab in the Details Window

You can often find important information regarding the behavior of a request in the response content. This information could let you know many things such as why your login attempt failed, if you received a stack trace from .NET, or if the page contains some dynamic tokens that are needed in later requests.

Previewing Response Content in a New Browser Tab

We also provide you with a hyperlink near the top-right above the last execution half of the response text. This link will let you preview the text of the last execution in a new browser tab. It is especially useful if you’d prefer to read what’s on the page instead of scrolling through HTML code.

Response Content Validation

If you’re concerned that a request will deliver a custom error page with a status code 200, which wouldn’t be flagged as an error during a load test, fear not: we have a feature that lets you put in place a validation that flags the request as an error under conditions you specify. Let’s say you expect a login to send you back to the homepage with a message at the top welcoming the user, such as “Welcome to our store, Michael!”. But in this case the login failed, and the server still returns a status 200, delivering the homepage without the welcome message. You can select the homepage request that comes after the login POST request and click the Validate Response Content button. This displays a modal window that lets you specify text that you expect to see, or text that you do not wish to see. Following my example, we would expect to see the words Welcome to our store in every response for this request, so that’s what you would enter in the expected string field. Now, even though this request doesn’t appear as an error under normal conditions, we can flag it as an error during a load test.
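At its core, a response-content validation is just a string check against the body that overrides the status code. Here is a minimal sketch of the idea; the function name and parameters are illustrative, not LoadStorm’s actual implementation:

```python
def validate_response(body, expected=None, forbidden=None):
    """Treat a response as failed when required text is missing or
    unwanted text is present, regardless of its HTTP status code."""
    if expected is not None and expected not in body:
        return False
    if forbidden is not None and forbidden in body:
        return False
    return True

# A 200 response can still be a failure if the welcome text is absent.
good = validate_response("<h1>Welcome to our store, Michael!</h1>",
                         expected="Welcome to our store")   # True
bad = validate_response("<h1>We could not log you in.</h1>",
                        expected="Welcome to our store")    # False
```

The same check works in reverse: passing a `forbidden` string such as an error-banner phrase flags any response that contains it.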

Utilizing Custom Data to find Dynamic Tokens, URLs, and Strings

Whenever you identify a request that needs a value changed to a dynamic one, you’ll need to parameterize it using our custom data selector. Depending on the need, select the request and click the appropriate modification button to get started. In the modal window that appears, select the parameter you wish to modify, change the modification type to custom, and click the Select Data button. You’ll now be in the custom data window, where we gather all kinds of data for you to parameterize with from one convenient location. I’ll leave out the User Data and Generated Data tabs for now, because this post is really meant to focus on dynamic data that we grab from the responses to requests in your script.

  • Form Input tab – Here we offer you all extracted names and values of hidden input fields that we find from the responses of requests that come prior to the selected request.
  • Response Headers tab – This provides you with all of the unique response headers from older requests and if there were any duplicate response headers we’ll only display the most recent ones prior to the selected request.
  • Response Text tab – While this is the most complicated tab, it also allows the most control over what you wish to use. A drop-down lets you select the response text from any request prior to the selected one, of course. Once selected, the preview box displays the appropriate content. Now comes the hard part: finding what you need. My trick to make this go quickly is to use the browser’s find feature, opened by pressing the Command and F keys together (CTRL and F for Windows users). A miniature search bar will appear within the browser window. Begin typing some identifying text that will lead you to the string, but remember not to enter the string itself, since it will be different each time. Examples would be something like ASP.NET_SessionId=, JSESSION=, __VIEWSTATE=, form_id=, etc. I’ve also seen URL paths that are built dynamically into the anchor tags of links based on your session with the server. That could look like href="/test/user/12JzntR93/profile", in which case you would want to look for the parts just before the string. Once you find it, you’ll need to define a unique start string delimiter and an ending string delimiter to let our application know what the string is between. A quick example would be an anchor tag like <a href="/some/path/uniquestring">link</a>, for which we would enter a Start String Delimiter of [<a href="/some/path/] and an Ending String Delimiter of [">link</a>], where I’m using the [ ] brackets to represent the input boxes themselves.
  • Cookies tab – The contents found here are rarely used, but on one occasion I needed to help someone parameterize a URL that took the JSESSION token set in the cookies and passed it along with the URL of a later request. In that case we selected the request and clicked the Modify URL button, selected the token portion of the URL as the substring we wished to replace, and copied it into the substring input field. Then, using the custom data, we selected the JSESSION token from the Cookies tab.
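The start/end delimiter mechanism described in the Response Text tab boils down to a substring search between two markers. This is an illustrative re-implementation of the idea, not LoadStorm’s code:

```python
def extract_between(text, start, end):
    """Return the first substring found between the start and end
    delimiters, or None if either delimiter is missing."""
    i = text.find(start)
    if i == -1:
        return None
    i += len(start)
    j = text.find(end, i)
    if j == -1:
        return None
    return text[i:j]

# Using the session-built URL example from the Response Text tab above:
html = '<a href="/test/user/12JzntR93/profile">link</a>'
token = extract_between(html, '<a href="/test/user/', '/profile">link</a>')
# token == "12JzntR93"
```

This is also why the delimiters must be unique within the response: a start delimiter that matches more than one location will silently grab the first occurrence.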

The post Tips and Tricks to Troubleshooting PRO Scripts appeared first on LoadStorm.

6 Facts About Cyber Monday Every E-Commerce Business Should Know

LoadStorm - Mon, 10/13/2014 - 09:41
1. Cyber Monday is Growing

Adobe reported that Cyber Monday e-commerce sales in 2013 reached $2.29 billion – a staggering 16% increase over 2012. comScore reported that desktop sales on Cyber Monday 2013 totaled over $1.73 billion, making it the heaviest US online spending day in history.

With these kinds of numbers, only time will tell how long Cyber Monday will continue to grow. But one thing is certain, Cyber Monday is the single most important day of the year for e-commerce businesses.

2. Mobile Shopping is Growing

Sites only designed for desktops are missing out on a huge chunk of the market. According to IBM’s Cyber Monday report, more than 18% of consumers used mobile devices to visit retailer sites. Even more impressive is the fact that mobile sales accounted for 13% of all online spending that day – an increase of 96% over 2012.

While making an e-commerce application mobile isn’t easy, it’s definitely not something you can skip anymore!

3. Internet Shoppers Are an Impatient Bunch

Even with the surge of traffic on Cyber Monday, web performance is absolutely critical to success. Did you know that studies have shown:

  • 74% of users will abandon a mobile site if it takes longer than 5 seconds to load
  • 46% of users will NOT return to a poorly performing website
  • You have 5 seconds to engage a customer before he or she leaves your site (How much of that are you wasting with load time?)

Customers don’t have any patience for slow sites and the fact is that if a site isn’t fast, they will spend their money elsewhere. The peak load time for conversions is 2 seconds and just a one second delay in load time causes:

  • A 7% decrease in conversions
  • 11% fewer page views
  • A 16% decrease in customer satisfaction

For sources and more statistics of the impact of web performance on conversions, check out our full infographic.

Fast and scalable wins the race!

4. Each Year Several Companies Have High Profile Website Crashes

In 2012, the standout crash was Finish Line; in 2013 it was Motorola. In both cases, the heavy load of traffic slowed the websites to a crawl, caused numerous errors, and ultimately crashed them completely.

According to Finish Line CEO Glen S. Lyon, Finish Line’s new website launched “November 19th and cost us approximately $3 million in lost sales . . . Following the launch, it became apparent that the customer experience was negatively impacted.” To read more about Motorola’s debacle in 2013, check out our recent blog post: Cyber Monday and The Impact of Web Performance.

5. Cyber Monday Shopping Has Gone Social

For better or worse, many shoppers share their experiences on social media. According to OfferPop, Cyber Monday accounted for 1.4% of all social media chatter that day last year. This could be excellent exposure, with people sharing the great deals you are offering, or it could be a PR nightmare.

Unprepared businesses will not only lose out on business, but unhappy shoppers will also share their bad experience with friends. Since we are using Finish Line as our example of high profile website crashes, it seems fitting to illustrate how social media played into their painful weekend. Angry tweets and Facebook messages popped up throughout the weekend; here is a small list of some of the best, courtesy of Retail Info Systems News:

“The site is slower than slow.”

“I have had the flashing Finish Line icon running on my page for over 30 minutes now trying to confirm my order. And I have tried to refresh and nothing.”

“I have been trying for 2 days to submit an order on your site – receive error message every time.”

“Y’all’s website is down. Are you maybe going to extend your sales because of it?”

“Wait, you schedule maintenance on Cyber Monday?”

“Extremely disappointed! Boo! You are my go to store, and the one day you have huge Internet sales your website doesn’t work.”

6. Preparing Early for the Surge of Traffic is Essential

Want to avoid being like Finish Line and Motorola? There is only one way to ensure that your website will absolutely, positively, without a doubt be fast and error-free under pressure on Cyber Monday: performance testing.

Performance testing is a critical part of development and leaving it to the last minute or testing as an afterthought is a recipe for disaster (see above examples). Performance testing is best used as a part of an iterative testing cycle: Run a test, tune the application, run a test, review changes in scalability, tune the application, run a test, etc. until the desired performance is reached at the necessary scale whether that is 300 concurrent users or 300,000. Without time to tune an application after testing, the application may very well be in hot water with no way to get out before the big day.

There are tons of performance testing tools to choose from (I personally recommend LoadStorm ; ) but whatever tool you use, the moral of this story is: Test early, test often.

The post 6 Facts About Cyber Monday Every E-Commerce Business Should Know appeared first on LoadStorm.

Design Test Automation Around Golden Masters

Eric Jacobson's Software Testing Blog - Fri, 10/10/2014 - 15:08

“Golden Master”, it sounds like the bad guy in a James Bond movie.  I first heard the term used by Doug Hoffman at STPCon Spring 2012 during his Exploratory Test Automation workshop.  Lately, I’ve been writing automated golden master tests that check hundreds of things with very little test code.

I think Golden-Master-Based testing is super powerful, especially when paired with automation.

A golden master is simply a known good version of something from your product-under-test.  It might be a:

  • web page
  • reference table
  • grid populated with values
  • report
  • or some other file output by your product

Production is an excellent place to find golden masters because if users are using it, it’s probably correct.  But golden masters can also be fabricated by a tester.

Let’s say your product outputs an invoice file.  Here’s a powerful regression test in three steps:

  1. Capture a known good invoice file from production (or a QA environment).  This file is your golden master. 
  2. Using the same parameters that were used to create the golden master, re-create the invoice file on the new code under test. 
  3. Programmatically compare the new invoice to the golden master using your favorite diff tool or code.
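The three steps above map naturally onto a diff-based test. Here is a hedged sketch using Python’s standard-library difflib; the function name, file layout, and ignore mechanism are illustrative, not a prescribed implementation:

```python
import difflib
from pathlib import Path

def assert_matches_golden_master(new_text, golden_path, ignore=()):
    """Fail with a unified diff when freshly generated output differs from
    the stored golden master. Lines containing any 'ignore' marker
    (e.g. a generated date in the footer) are excluded from the comparison."""
    keep = lambda line: not any(marker in line for marker in ignore)
    golden = [l for l in Path(golden_path).read_text().splitlines() if keep(l)]
    new = [l for l in new_text.splitlines() if keep(l)]
    diff = list(difflib.unified_diff(golden, new,
                                     fromfile="golden", tofile="new",
                                     lineterm=""))
    assert not diff, "\n".join(diff)
```

Because the assertion message carries the diff itself, a failure immediately shows which invoice lines changed rather than just reporting a mismatch.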

Tips and Ideas:

  • Make sure the risky business logic code you want to test is being exercised.
  • If you expand on this test, and fully automate it, account for differences you don’t care about (e.g., the invoice generated date in the footer, new features you are expecting to not yet be in production). 
  • Make it a data-driven test. Pass in a list of orders and customers, retrieve production golden masters and compare them to dynamically generated versions based on the new code.
  • Use interesting dates and customers.  Iterate through thousands of scenarios using that same automation code.
  • Use examples from the past that may not be subject to changes after capturing the golden master.
  • Structure your tests’ assertions to help interpret failures.  The first assertion on the invoice file might be: does the item line count match?  The second might be: do each line’s values match?
  • Get creative.  Golden masters can be nearly anything.

Who else uses this approach?  I would love to hear your examples.

Categories: Software Testing

do u webview?

Steve Souders - Thu, 10/09/2014 - 04:52

A “webview” is a browser bundled inside of a mobile application, producing what is called a hybrid app. Using a webview allows mobile apps to be built using Web technologies (HTML, JavaScript, CSS, etc.) while still being packaged as native apps and placed in the app store. In addition to letting devs work with familiar technologies, other advantages of building a hybrid app include greater code reuse across the app and the website, and easier support for multiple mobile platforms.

We all have webview traffic

Deciding whether to build a hybrid app versus a native app, or to have an app at all, is a lengthy debate and not the point of this post. Even if you don’t have a hybrid app, a significant amount of your mobile traffic comes from webviews. That’s because many sources of traffic are hybrid apps. Two examples on iOS are the Facebook app and Google Chrome. “Whoa, whoa, whoa,” you say; Facebook’s retreat from its hybrid app is well known. That’s true. The Facebook timeline, for example, is no longer rendered using a webview:

However, the Facebook timeline contains links, such as the one shown in the timeline above. When users click on links in the timeline, the Facebook app opens them in a webview:

Similarly, Chrome for iOS is implemented using a webview. Across all iOS traffic, 6% comes from Facebook’s webview and 5% comes from Google Chrome according to ScientiaMobile. And there are other examples: Twitter’s iOS app uses a webview to render clicked links, etc.

I encourage you to scan your server logs to gauge how much of your mobile traffic comes from webviews. There’s not much documentation on webview User-Agent strings. For iOS, the User-Agent is typically a base string with information appended by the app. Here’s the User-Agent string for Facebook’s webview:

Mozilla/5.0 (iPhone; CPU iPhone OS 7_1_1 like Mac OS X) AppleWebKit/537.51.2 (KHTML, like Gecko) Mobile/11D201 [FBAN/FBIOS;FBAV/; FBBV/3214247; FBDV/iPhone6,1;FBMD/iPhone; FBSN/iPhone OS;FBSV/7.1.1; FBSS/2; FBCR/AT&T;FBID/phone;FBLC/en_US;FBOP/5]

Here’s the User-Agent string from Chrome for iOS:

Mozilla/5.0 (iPhone; CPU iPhone OS 7_1_2 like Mac OS X) AppleWebKit/537.51.2 (KHTML, like Gecko) CriOS/37.0.2062.60 Mobile/11D257 Safari/9537.53
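Based on the two User-Agent strings above, a log-scanning script can classify iOS traffic with a few substring checks. This is a rough sketch; real logs contain many more webview variants than these patterns cover:

```python
def classify_ios_user_agent(ua):
    """Bucket an iOS User-Agent string by traffic source."""
    if "FBAN/FBIOS" in ua:           # Facebook's in-app webview
        return "facebook-webview"
    if "CriOS/" in ua:               # Chrome for iOS (webview-based)
        return "chrome-ios"
    if "Safari/" in ua:              # plain Mobile Safari
        return "mobile-safari"
    return "other"

fb_ua = ("Mozilla/5.0 (iPhone; CPU iPhone OS 7_1_1 like Mac OS X) "
         "AppleWebKit/537.51.2 (KHTML, like Gecko) Mobile/11D201 "
         "[FBAN/FBIOS;FBAV/; FBBV/3214247; FBDV/iPhone6,1]")
chrome_ua = ("Mozilla/5.0 (iPhone; CPU iPhone OS 7_1_2 like Mac OS X) "
             "AppleWebKit/537.51.2 (KHTML, like Gecko) CriOS/37.0.2062.60 "
             "Mobile/11D257 Safari/9537.53")

print(classify_ios_user_agent(fb_ua))      # facebook-webview
print(classify_ios_user_agent(chrome_ua))  # chrome-ios
```

Note the check order: Chrome for iOS also carries a Safari token, so the CriOS test must run before the Safari fallback.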

That’s a lot of detail. The bottom line is: we’re all getting more webview traffic than we expect. Therefore, it’s important that we understand how webviews perform and take that into consideration when building our mobile websites.

Webview performance

Since a webview is just a bundled browser, we might think that webviews and their mobile browser counterpart have similar performance profiles. It turns out that this is not the case. This was discovered as an unintentional side effect from the article iPhone vs. Android – 45,000 Tests Prove Who is Faster. This article from 2011, in the days of iOS 4.3, noted that the iPhone browser was 52% slower than Android’s. The results were so dramatic it triggered the following response from Apple:

[Blaze's] testing is flawed. They didn’t actually test the Safari browser on the iPhone. Instead they only tested their own proprietary app, which uses an embedded Web viewer that doesn’t actually take advantage of Safari’s Web performance optimizations.

Apple’s response is accurate. The study conducted by Blaze (now part of Akamai) was conducted using a webview, so it was not a true comparison of the mobile browser from each platform. But the more important revelation is that webviews were hobbled resulting in worse performance than mobile Safari. Specifically, the webview on iOS 4.3 did not have Nitro’s JIT compiler for JavaScript, application cache, nor asynchronous script loading.

This means it’s not enough to track the performance of mobile browsers alone; we also need to track the performance of webviews. This is especially true in light of the fact that more than 10% of iOS traffic comes from webviews. Luckily, the state of webviews is better than it was in 2011. Even better, the most recent webviews have significantly more features when it comes to performance. The following table compares the most recent iOS and Android webviews along a set of important performance features.

                         iOS 7       iOS 8       Android 4.3   Android 4.4
                         UIWebView   WKWebView   Webview       Webview
Nitro/V8                             ✔                         ✔
                         410         440         278           434
localStorage             ✔           ✔           ✔             ✔
app cache                ✔           ✔           ✔             ✔
indexedDB                            ✔                         ✔
SPDY                                 ✔                         ✔
WebP                                                           ✔
srcset                               ✔                         ?
WebGL                                ✔                         ?
requestAnimationFrame    ✔           ✔                         ✔
Nav Timing                           ✔           ✔             ✔
Resource Timing                                                ✔

As shown in this table, the newest webviews have dramatically better performance. The most important improvement is JIT compilation for JavaScript. While localStorage and app cache now have support across all webviews, the newer webviews add support for indexedDB. Support for SPDY in the newer webviews is important to help mitigate the impact of slow mobile networks. WebP, image srcset, and WebGL address the bloat of mobile images, but support for these features is mixed. (I wasn’t able to confirm the status of srcset and WebGL in Android 4.4’s webview. Please add comments and I’ll update the table.) The requestAnimationFrame API gives smoother animations. Finally, adoption of the Nav Timing and Resource Timing APIs gives website owners the ability to track performance for websites served inside webviews.

Not out of the woods yet

While the newest webviews have a better performance profile, we’re still on the hook for supporting older webviews. Hybrid apps will continue to use the older webviews until they’re rebuilt and updated. The Android webview is pinned at Chromium 30 and requires an OS upgrade to get feature updates. Similar to the issues with legacy browsers, traffic from legacy webviews will continue for at least a year. Given the significant amount of traffic from webviews and the dramatic differences in webview performance, it’s important that developers measure performance on old and new webviews, and apply mobile performance best practices to make their website as fast as possible even on old webviews.

(Many thanks to Maximiliano Firtman, Tim Kadlec, Brian LeRoux, and Guy Podjarny for providing information for this post.)

Categories: Software Testing

The Myths of Tech Interviews

Alan Page - Wed, 10/08/2014 - 11:16

I recently ran across this article from over a year ago. Here’s the punch line (or at least one of them):

“We looked at tens of thousands of interviews, and everyone who had done the interviews and what they scored the candidate, and how that person ultimately performed in their job. We found zero relationship. It’s a complete random mess…”

I expect (hope?) that the results aren’t at all surprising. After almost 20 years at Microsoft and hundreds of interviews, it’s exactly what I expected. Interviews, at best, are a guess at finding a good employee, but often serve the ego of the interviewer more than the needs of the company. The article makes note of that as well.

“On the hiring side, we found that brainteasers are a complete waste of time. How many golf balls can you fit into an airplane? How many gas stations in Manhattan? A complete waste of time. They don’t predict anything. They serve primarily to make the interviewer feel smart.”

I wish more interviewers paid attention to statistics or articles (or their peers) and stopped asking horrible interview questions, and really, really tried to see if they could come up with better approaches.

Why Bother?

So – why do we do interviews if they don’t work? Well, they do work – hopefully at least as a method of making sure you don’t hire a completely incapable person. While it’s hard to predict future performance based on an interview, I think they may be more effective at making sure you don’t hire a complete loser – but even this approach has flaws, as I frequently see managers pass on promising candidates for (perhaps) the wrong reasons out of fear of making a “bad hire”.

I mostly serve as the “as appropriate” interviewer at Microsoft (the person on the interview loop who makes the ultimate hire / no-hire decision on candidates based on previous interviews and their own questions). For college candidates or industry hires, one of the key questions I’m looking to answer is: “Is this person worth investing 12-18 months of salary and benefits to see if they can cut it?” A hire decision is really nothing more than an agreement for a long audition. If I say yes, I’m making a (big) bet that the candidate will figure out how to be valuable within a year or so, and I assume they will be “managed out” if not. I don’t know the stats on my hire decisions, but while my heart says I’m great, my head knows that I may be just throwing darts.

What Makes a Good Tech Employee?

If I had a secret formula for what made people successful in tech jobs, I’d share. But here’s what I look for anyway:

  1. Does the candidate like to learn? To me, knowing how to figure out how to do something is way more interesting than knowing how to do it in the first place. In fact, the skills you know today will probably be obsolete in 3-5 years anyway, so you better be able to give me examples about how you love to learn new things.
  2. Plays well with others – (good) software engineering is a collaborative process. I have no desire to hire people who want to sit in their office with the door closed all day while their worried team mates pass flat food under their door. Give me examples of solving problems with others.
  3. Is the candidate smart? By “smart”, I don’t mean you can solve puzzles or write some esoteric algorithm at my white board. I want to know if you can carry on an intelligent conversation and add value. I want to know your opinions and see how you back them up. Do you regurgitate crap from textbooks and twitter, or do you actually form your own ideas and thoughts?
  4. If possible, I’ll work with them on a real problem I’m facing and evaluate a lot of the above simultaneously. It’s a good method that I probably don’t use often enough (but will make a mental note now to do this more).

The above isn’t a perfect list (and leaves off the “can they do the job?” question), but I think someone who can do the above can at least stay employed.

Categories: Software Testing

Dallas Ebola Patient Sent Home Because of Defect in Software Used by Many Hospitals

Randy Rice's Software Testing & Quality - Mon, 10/06/2014 - 05:37
Here at Rice Consulting, we have been delivering this message to healthcare providers for over two years now: the greatest risk they face is in the integration and testing of workflows to make sure electronic health records are correctly made available to everyone involved in the healthcare delivery process - all the way from patients to nurses, doctors, and insurance companies. It is a complex domain, and too many healthcare organizations see EHR as just a technical issue that the software vendor will address and test. However, that is not the case, as demonstrated by this story.

From the Homeland Security News Wire today, October 6:

"Before Thomas Eric Duncan was placed in isolation for Ebola at Dallas’ Texas Health Presbyterian Hospital on 28 September, he sought care for fever and abdominal pain three days earlier, but was sent home. During his initial visit to the hospital, Duncan told a nurse that he had recently traveled to West Africa — a sign that should have led hospital staff to test Duncan for Ebola. Instead, Duncan’s travel record was not shared with doctors who examined him later that day. This was the result of a flaw in the way the physician and nursing portions of our electronic health records (EHR) interacted. EHR software, used by many hospitals, contains separate workflows for doctors and nurses."

"Before Thomas Eric Duncan was placed in isolation for Ebola at Dallas’ Texas Health Presbyterian Hospital on 28 September, he sought care for fever and abdominal pain three days earlier, but was sent home. During his initial visit to the hospital, Duncan told a nurse that he had recently traveled to West Africa — a sign that should have led hospital staff to test Duncan for Ebola. Instead, Duncan’s travel record was not shared with doctors who examined him later that day.

'Protocols were followed by both the physician and the nurses. However, we have identified a flaw in the way the physician and nursing portions of our electronic health records (EHR) interacted in this specific case,” the hospital wrote in a statement explaining how it managed to release Duncan following his initial visit.

According to NextGov, EHR software used by many hospitals contains separate workflows for doctors and nurses. Patients’ travel history is visible to nurses, but such information 'would not automatically appear in the physician’s standard workflow.' As a result, a doctor treating Duncan would have no reason to suspect Duncan’s illness was related to Ebola.

Roughly 50 percent of U.S. physicians now use EHRs since the Department of Health and Human Services (HHS) began offering incentives for the adoption of digital records. In 2012, former HHS chief Kathleen Sebelius said EHRs 'will lead to more coordination of patient care, reduced medical errors, elimination of duplicate screenings and tests and greater patient engagement in their own care.' Many healthcare security professionals, however, have pointed out that some EHR systems contain loopholes and security gaps that prevent data sharing among healthcare workers.

The New York Times recently reported that several major EHR systems are built to make data sharing between competing EHR systems difficult. Additionally, a 2013 RAND Corporation study for the American Medical Association found that doctors felt 'current EHR technology interferes with face-to-face discussions with patients; requires physicians to spend too much time performing clerical work; and degrades the accuracy of medical records by encouraging template-generated doctors’ notes.'

Today, Dallas’s Texas Health Presbyterian Hospital has made patients’ travel history available to both doctors and nurses. It has also modified its EHR system to highlight Ebola-endemic regions in Africa. 'We have made this change to increase the visibility and documentation of the travel question in order to alert all providers. We feel that this change will improve the early identification of patients who may be at risk for communicable diseases, including Ebola,' the hospital noted."

Categories: Software Testing