ISTQB Foundation Level Training - Miami/Ft. Lauderdale, FL, July 16 - 18, 2013

Randy Rice's Software Testing & Quality - Mon, 06/03/2013 - 09:30
We don't conduct many public courses, but are going to host an ISTQB Training event in July in the Miami/Ft. Lauderdale area (closer to Ft. Lauderdale). If you have been thinking about getting certified in software testing, here's your chance!

There is a 10% discount for groups of 3 or more. Plus, the price includes the cost of the exam ($250 value).

The instructor will be Dr. Tauhida Parveen, an authorized ISTQB trainer and author of two testing books. (And a great trainer!)

For more information and to register, click here:

We hope to see you there!
Categories: Software Testing

QA Music: Bang Your App

QA Hates You - Mon, 06/03/2013 - 03:22

Way, way back for Quiet Riot’s “Bang Your Head” this morning:

Categories: Software Testing

Paul is Definitely Alive!

Randy Rice's Software Testing & Quality - Fri, 05/31/2013 - 12:21
Last night I had the opportunity to check off a bucket list item - getting to see Paul McCartney perform live in concert. It was an amazing show flawlessly performed by 5 guys who can rock. Click here for the review.

At age 70, McCartney is an inspiration to me. Talk about energy!

I was contrasting this concert to the Eric Clapton concert I went to a couple of months ago. Don't get me wrong - I really like Eric Clapton, too. He's one of the greatest guitarists of all time. But...he mainly played, took a bow, did an encore, took another bow and left. Very little audience interaction.

At last night's concert, I felt like I was at a party jamming with old friends. Paul told funny and touching stories. He expressed his empathy at the losses recently in the tornadoes. I thought "Let it Be" was the perfect song for that. It was also a classy thing to do to come back on the 2nd Encore, waving the Oklahoma Flag, with someone else carrying the British flag.

I know that discussing the relative talent and appeal of artists is very subjective. All I can say is that McCartney entertained. Big time. And the audience wanted more, even after 2 encores.

To quote from the review, “You have been a fantastic crowd here tonight in Tulsa, Oklahoma, but there does come a time when we gotta go home … and there comes a time when you gotta go home, too,” the rocker cautioned the crowd after delivering the weirdly wonderful combo of “Yesterday” and “Helter Skelter.”

As I was just letting it all soak in on the 2-hour drive back to Oklahoma City, I was thinking that I have fans in what I do, and you have fans in what you do. If you are reading this as one of my fans, this will have meaning to you. I don't see things the way a lot of other consultants and trainers see them. I have a special "sound," you might say.

So, play to your audience and let the others go find the sound they like. No need to change your sound to please them. Not everyone will like your sound. In fact, some people will like some of what you do, but not all of what you do. That helps make you better; it helps make me better!

By the way, I have a recorded presentation on How to Be a Software Testing Rock Star at

Some takeaways:
  • Know what pleases your audience and give it to them.
  • Give a little extra
  • Leave them wanting more (that came from Will Rogers)
  • Be gracious and kind
  • Have fun
  • Rock it loud!

Until next time...

Categories: Software Testing

Five QA Things To Do In San Francisco

QA Hates You - Fri, 05/31/2013 - 04:37

As some of you might be aware, I spent last weekend in the San Francisco area. What did I do? Things worthy of QA, of course, and here’s what you, too, can do while visiting San Francisco and capturing a bit of the flavor of the area worthy of QA.

Eat at NaN

A restaurant called NaN? That’s just screaming for QA’s dining dollars.

Oh, but the actual QA world in San Francisco let us down. It’s NaR (not a restaurant). So no Korean fusion for you or me.

Visit Sonoma Valley for the Wonderful Mobile Testing

The Sonoma Valley, partly known for its wines, also has low to no mobile data connectivity. Did you know there’s an icon on your iPhone that indicates just cellular service? I do now. Now, what happens to my app under test when I am in a tastefully comported Faraday Cage, sampling Cabernet and adding records to the DB?

Also, there are some wineries around, which gives you the proper answer for any “Why would the user do that?” questions you might get: “It was the third winery on the tour.”

Take a Bay Cruise to Find the Null in Geolocation Testing

In at least one spot on San Francisco Bay, I discovered I was (null) miles away from Sausalito. Can you find that spot?

Granted, that was an error in geolocation in this particular app, but what happens in your mobile application when you’re on a boat?

Never pass up an opportunity to try the mobile app out in unexpected circumstances (What street address do I get when I’m flying over the building in a helicopter tour?).

Although strange and extreme, they might uncover conditions and application behaviors that would occur in less extreme circumstances.

Traverse the Golden Gate Bridge in the Fog and the Wind in an Open Car or Bus

Brothers and sisters who are not in San Francisco: Just because San Francisco is in California, that does not mean it is warm. Your stereotypes only apply to southern California.

Crossing the Golden Gate bridge in the morning in a convertible or one of those open-topped tour buses allows you to prove your toughness and endurance, and it provides a handy metaphor for the software development life cycle. You’re in a fog, you can’t see anything, suddenly something large looms over you, then it passes.

Additionally, crossing the same way on the southbound, windward side of the bridge is a particularly moving experience that will bring tears to your eyes and proves to be an endurance test not unlike a lessons learned meeting.

Walk the City. Then Take A Cab

Walking around San Francisco (or any other geographically bound and dense city) allows you to get up close to things and to view the emergence of the architecture and whatnot slowly.

Then take a cab and see how those familiar with the city and who have a particular goal in mind use the city streets and alleys to accomplish their goal.

It’s different, but they’re both San Francisco.

That’s a bit of a stretch and overt metaphor for testing, but I needed a fifth item because the title says five.

Categories: Software Testing

Mobile app quality takes more than just software testing automation

SearchSoftwareQuality - Thu, 05/30/2013 - 10:05
Mobile application testers may not find the same value in software testing automation that Web application testers enjoy.

Categories: Software Testing

Where Do You Keep Your Lessons Learned?

Randy Rice's Software Testing & Quality - Wed, 05/29/2013 - 07:37
I'll never forget a conversation I had with a client one day. It was just before we were both going to make a presentation of assessment findings and recommendations to senior management.

I asked her if she felt senior management would be willing to take action on the recommendations. "Oh yes," she said. "They don't want another Project X (fictitious name) to happen." I realized I had missed something in my interviews.

"Project X?" I asked.

"Yes, it went over budget by $1 million and was a big black eye on our company," she said.

"Why was that?"

"Lack of solid test practices, people not communicating well...all the things you have there in your PowerPoint deck," she said.

This got me thinking about something we all know and discuss, namely, "lessons learned." We mention them as if they are in a book like the President's Book of Secrets.

However, I think that is seldom the case.

My belief is that most of the lessons learned are in people's heads, which makes sense. So are requirements, test criteria and a lot of other things. Not a great storage method, but it is reality for many.

I'm not even proposing there should be a book of lessons learned. However, I think they should be reflected in practices, processes, systems, whatever you use to guide work.

One thing I have found inexpensive and effective is a team knowledgebase. Whether it is a wiki or some other method, at least it is accessible.

Of course, I must also mention the evil twin of lessons learned, "lessons not learned." This one is painful and we all are well-acquainted with it. Imagine the Homer Simpson facepalm and you have it.

It is frustrating because then we are left asking "Why don't we learn from the hard knocks?"

Here is my short list of reasons:

1. Lack of reflection - We don't regroup and reflect after a major event, so we never identify the key learnings, even as individuals.

2. The pain is not great enough - It's like "death by a thousand paper cuts" so we ignore the event and move on. I have often said that there is nothing like a good disaster to get management's attention.

3. Lack of leadership - A good leader and coach knows how to remind the team "Don't do that!" without being a jerk. Good leaders also know how to fix the process to prevent the mistakes from happening again.

4. Short memories - Time can heal a lot of wounds. Five years from now, Project X might not seem so bad. However, a new Project X could be devastating. Remember when we blew up the moon (just kidding!)?

By the way, the lessons learned don't have to be your own. In sifting through the rubble of the Moore tornado, I have found several pictures, which I have turned in to the picture lost-and-found effort. Hopefully, they will be reunited with their owners.

I am now on a mission to take digital pictures of all my prints, DVD copies of all videos and put them in a bank storage box, just in case. I am taking a video of all our personal property "just in case." Oh, and I'm also working on getting a storm shelter installed. Two F5 tornadoes close by in 14 years are too many for me!

I would like to hear how you and your team handle "lessons learned." Leave a comment and let me know.



Categories: Software Testing

Testing on the Toilet: Don’t Overuse Mocks

Google Testing Blog - Tue, 05/28/2013 - 17:22
By Andrew Trenk

This article was adapted from a Google Testing on the Toilet (TotT) episode. You can download a printer-friendly version of this TotT episode and post it in your office.

When writing tests for your code, it can seem easy to ignore your code's dependencies by mocking them out.

  public void testCreditCardIsCharged() {
    paymentProcessor = new PaymentProcessor(mockCreditCardServer);
    when(mockCreditCardServer.isServerAvailable()).thenReturn(true);
    when(mockCreditCardServer.beginTransaction()).thenReturn(mockTransactionManager);
    when(mockTransactionManager.getTransaction()).thenReturn(transaction);
    when(mockCreditCardServer.pay(transaction, creditCard, 500)).thenReturn(mockPayment);
    when(mockPayment.isOverMaxBalance()).thenReturn(false);
    paymentProcessor.processPayment(creditCard, Money.dollars(500));
    verify(mockCreditCardServer).pay(transaction, creditCard, 500);
  }
However, not using mocks can sometimes result in tests that are simpler and more useful.

  public void testCreditCardIsCharged() {
    paymentProcessor = new PaymentProcessor(creditCardServer);
    paymentProcessor.processPayment(creditCard, Money.dollars(500));
    assertEquals(500, creditCardServer.getMostRecentCharge(creditCard));
  }
Overusing mocks can cause several problems:

- Tests can be harder to understand. Instead of just a straightforward usage of your code (e.g. pass in some values to the method under test and check the return result), you need to include extra code to tell the mocks how to behave. Having this extra code detracts from the actual intent of what you’re trying to test, and very often this code is hard to understand if you're not familiar with the implementation of the production code.

- Tests can be harder to maintain. When you tell a mock how to behave, you're leaking implementation details of your code into your test. When implementation details in your production code change, you'll need to update your tests to reflect these changes. Tests should typically know little about the code's implementation, and should focus on testing the code's public interface.

- Tests can provide less assurance that your code is working properly. When you tell a mock how to behave, the only assurance you get with your tests is that your code will work if your mocks behave exactly like your real implementations. This can be very hard to guarantee, and the problem gets worse as your code changes over time, as the behavior of the real implementations is likely to get out of sync with your mocks.

Some signs that you're overusing mocks are if you're mocking out more than one or two classes, or if one of your mocks specifies how more than one or two methods should behave. If you're trying to read a test that uses mocks and find yourself mentally stepping through the code being tested in order to understand the test, then you're probably overusing mocks.

Sometimes you can't use a real dependency in a test (e.g. if it's too slow or talks over the network), but there may be better options than using mocks, such as a hermetic local server (e.g. a credit card server that you start up on your machine specifically for the test) or a fake implementation (e.g. an in-memory credit card server).
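To make the fake-implementation option concrete, here is a minimal sketch. This is illustrative only, not code from the episode: the CreditCardServer interface and every name in it are hypothetical stand-ins for whatever interface the real PaymentProcessor would depend on.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical interface that production code would depend on.
interface CreditCardServer {
    boolean isServerAvailable();
    void pay(String creditCard, int amountInCents);
}

// A fake: a real, working in-memory implementation used only in tests.
// Unlike a mock, it has actual behavior, so tests don't have to script
// how each method should respond.
class InMemoryCreditCardServer implements CreditCardServer {
    private final Map<String, Integer> mostRecentCharge = new HashMap<>();

    @Override
    public boolean isServerAvailable() {
        return true; // always "up" in tests
    }

    @Override
    public void pay(String creditCard, int amountInCents) {
        mostRecentCharge.put(creditCard, amountInCents);
    }

    // Test-only accessor, mirroring the getMostRecentCharge call
    // in the mock-free test.
    public int getMostRecentCharge(String creditCard) {
        return mostRecentCharge.getOrDefault(creditCard, 0);
    }
}

public class FakeDemo {
    public static void main(String[] args) {
        InMemoryCreditCardServer server = new InMemoryCreditCardServer();
        server.pay("4111-1111", 500);
        System.out.println(server.getMostRecentCharge("4111-1111")); // prints 500
    }
}
```

Because the fake actually stores state, the test can assert on observable results (what was charged) instead of on interactions with a scripted mock.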

For more information, see the earlier Testing on the Toilet episode on hermetic servers. Stay tuned for a future Testing on the Toilet episode about using fake implementations.

Categories: Software Testing

A Virtual Battlefield... with Bugs: Ship Wars @ Google Kirkland

Google Testing Blog - Tue, 05/28/2013 - 14:12
By Anthony F. Voellm (aka Tony the @p3rfguy / G+) and Emily Bedont

On Wednesday, October 24th, while sitting under the Solar System, 30 software engineers from the Greater Seattle area came together at Google Kirkland to partake in the first ever Test Edition of Ship Wars. Ship Wars was created by two Google Waterloo engineers, Garret Kelly and Aaron Kemp, as a 20% project. Yes, 20% time does exist at Google!  The object of the game is to code a spaceship that will outperform all others in a virtual universe - algorithm vs algorithm.

The Kirkland event marked the 7th iteration of the program which was also recently done in NYC. Kirkland however was the first time that the game had been customized to encourage exploratory testing. In the case of "Ship Wars the Test Edition," we planted 4 bugs that the engineering participants were awarded for finding. Well, we ran out of prizes and were quickly reminded that when you put a lot of testing minded people in a room, many bugs will be unveiled! One of the best unveiled bugs was not one of the four planted in the simulator. When turning your ship 90 degrees, the ship actually turned -90 degrees. Oops!

Participants were encouraged to test the spaceship they built on their own machine or a Google Chromebook. While the coding was done in the browser, the simulator and web server were run on Google Compute Engine. Throughout the 90 minutes, people challenged other participants to duels. Head-to-head battles took place on Chromebooks at the front of the room. There were many accolades called out but in the end, there could only be one champion who would walk away with a brand spankin’ new Nexus 7. Check out our video of the evening’s activities.

Sounds fun, huh? We sure hope our participants, including our first place winner shown receiving the Nexus 7 from Garret, enjoyed the evening! Beyond the battles, our guests were introduced to the revived Google Testing Blog, heard firsthand that GTAC will be back in 2013, learned about testing at Google, and interacted with Googlers in a "Googley" environment. Achievement unlocked.

Special thanks to all the Googlers that supported the event!

Categories: Software Testing

QA Music: An Ode to Independent Consultants

QA Hates You - Mon, 05/27/2013 - 03:40

Cher, "Gypsies, Tramps, and Thieves":

Who are, by the way, working today even though it’s a holiday in the United States. Paid time off is for suckers. So we tell ourselves.

Categories: Software Testing

Test Profession Survey Results Preview

Randy Rice's Software Testing & Quality - Fri, 05/24/2013 - 08:37
First, thanks for your thoughts, prayers and concerns after the tornado this week. I helped a friend go through the rubble of her former home on Monday. It was amazing to see her positive attitude. As we arrived at the debris, she said, "Welcome to my humble abode."

While there, she was interviewed by a Brazilian TV crew who also remarked to me how impressed they were with her attitude. Yes, that is inspiring, and many others here are also holding up well even in trying times.

Also, thanks to those of you who responded to the test professional survey last week. I ended up with 100 responses. I need a while longer to write the article (and perhaps a white paper) on this, but I thought you might want to see some early results.

I am working on the article and should have it done by next week. My early impression is that (at least among those who responded) the majority of testers see themselves as professionals and believe it is important to hold that view. Management may not see testers as professionals to the same degree, but a significant percentage of managers do find importance in the view of the test professional as opposed to some other view. I think this has some larger, and very good, implications for those of us in the field of software testing.

More to come soon. Of course, I would love to get your thoughts on these findings.

For those in the U.S., have a great Memorial Day. Remember those who have given their lives for our freedom. Remember those who are rebuilding in Oklahoma.

Thanks as always for being the best at what you do!

Categories: Software Testing

Test Is Dead is dead

Cartoon Tester - Fri, 05/24/2013 - 00:11
If you've not heard of the term 'Test is dead' then Google [did] it.

Test Is Dead is dead... Long live Test!!

Categories: Software Testing

Rockin’ STAR West

Alan Page - Thu, 05/23/2013 - 22:46

The cat’s out of the bag – I’m popping out of my no-conference bubble, and making an appearance at a testing conference (STAR West in October). The theme of the conference is “Be a Testing Rock star”, and while I think that theme begs for an appearance from Michael Larsen, I’ll do my best to live up to the hype.

I’ll be giving a keynote on testing the Xbox. While I’m certain I’ll have interesting stories and pragmatic examples I can share in a generic fashion, I’m hoping I can share much, much more. I don’t have clearance yet to share too much about the Xbox One, but assuming I can get my proposals in place, it should be a pretty exciting talk to share. By the time STAR rolls around, I’ll have passed my 20 year anniversary working on software, and working on this product has definitely been the highlight of my testing career.

I’m also signed up to give a half-day workshop on testing. Literally – “on testing”. The title of the workshop is Alan Page: On Testing, and I’ll take the time to share some good stories and examples of new testing ideas, but I expect the attendees to drive the content. More details on that as we get closer to the event, but I expect it will be a ton of fun, and especially interesting to those folks who just want to suck up some new and interesting ideas.

More as the conference gets closer.

(potentially) related posts:
  1. Twinkle Twinkle I’m back from STAR
  2. Happy Birthday HWTSAM
  3. Stop Guessing about my STAR presentation
Categories: Software Testing

A debate on the merits of mobile software test automation

SearchSoftwareQuality - Thu, 05/23/2013 - 12:50
One veteran recommends automating all mobile software tests. Another expert says to focus on planning and automate only where necessary.

Categories: Software Testing

Walls on the Soapbox

Alan Page - Wed, 05/22/2013 - 16:21

I chair an advisory council for a community of senior testers at Microsoft. We have a variety of events ranging from talking heads to open space events to panels to whatever type of event we think is the most different than the previous one.

Yesterday, we had our fifth annual “soapbox” event, a lightning talk-ish event where speakers are encouraged to share a rant or opinion with the rest of the community (five minute limit). Because I’ve been thinking so much about my Tear Down the Wall post, I decided to give the five minute version for my peers.

I began with the story of Lloyd Frink (from this post), and compared the world of technology in 1979 to the world today. I talked about how “walls” get in the way, and how boxing people into roles (especially roles of gatekeeper and fake-customer) are fruitless at best. I closed by sharing Trish Khoo’s quote from her take on all of this:

It would be somewhat revolutionary, if it weren’t already happening.

This was a point I really wanted to drive home. Many of the senior testers at Microsoft have only been at Microsoft, and few pay any attention to software development outside the Borg, and I think there’s a lot to learn from companies and teams that have successfully blurred and erased lines and boxes from their development teams.

As of today, I’m still employed, so I’ll keep ranting and see what happens.

For MS folks, you can find the talk on //resnet (search for soapbox). The whole event is good, but my talk starts about 19:30 in.

(potentially) related posts:
  1. Why?
  2. Fall Travel
  3. The Ballad of the Senior Tester
Categories: Software Testing

Should You Still Report Bugs If Nobody Listens?

Eric Jacobson's Software Testing Blog - Tue, 05/21/2013 - 10:08

Well…yes.  I would.

The most prolific bug finder on my team is struggling with this question.  The fewer of her bugs the team decides to fix, the less interested she grows in reporting them.  Can you relate?

There is little satisfaction reporting bugs that nobody wants to hear about or fix.  In fact, it can be quite frustrating.  Nevertheless, when our stakeholders choose not to fix certain classes of bugs, they are sending us a message about what is important to them right now.  And as my friend and mentor, Michael Bolton likes to say:

If they decide not to fix my bug, it means one of two things:

  • Either I’m not explaining the bug well enough for them to understand its impact,
  • or it’s not important enough for them to fix.

So as long as you’re practicing good bug advocacy, it must be the second bullet above.  And IMO, the customer is always right.

Nevertheless, we are testers.  It is our job to report bugs despite adversity.  If we report 10 for every 1 that gets fixed, so be it.  We should not take this personally.  However, we may want to:

  • Adjust our testing as we learn more about what our stakeholders really care about.
  • Determine a non-traditional method of informing our team/stakeholders of our bugs.
    • Individual bug reports are expensive because they slowly suck everyone’s time as they flow through or sit in the bug repository.  We wouldn’t want to knowingly start filling our bug report repository with bugs that won’t be fixed.
    • One approach would be a verbal debrief with the team/stakeholders after testing sessions.  Your testing notes should have enough information to explain the bugs.
    • Another approach could be a “super bug report”: one bug report that lists several bugs.  Any deemed important can get fixed or spun off into separate bug reports if you like.
Categories: Software Testing

Update on May 20 Tornado

Randy Rice's Software Testing & Quality - Mon, 05/20/2013 - 20:05
Hi Folks,

Thanks to everyone who has contacted me concerning the tornadoes here today. Thankfully, we were spared, barely.

I was at a client in OKC to attend a 2:00 meeting, which was canceled. It was looking stormy, so I decided to head home. About the time I arrived home, the tornado sirens started to sound.

As the tornado approached, I was on the back porch watching it come directly toward us. Just like May 3rd, 1999 all over again. So, I hustled Janet and our two pugs into the car and drove north. We don't have a storm shelter (I think we will soon). The tornado took a turn toward the east, so no damage at our place. But, the loss of life and damage is terrible, especially the children at the school.

(This is a short video I shot from our car. You can see the tornado moving from the right of the screen to the left behind the Homeland store.)

Both our sons and families are fine, although had Ryan (our oldest) and his family not moved 6 years ago, they would have been wiped out and our oldest grandson would have been at the school where the 7 children died.

I don't know what it is about Moore. It's like a tornado magnet. It's just really bad here right now. Your prayers are needed.

The people here are tough and resilient. We will rebuild and go on, but the loss of life is the worst part. 51 lives lost as of this writing.



Update: May 21

Here are some more pictures:

This is the path of the tornado:

Here is what I was looking at on radar on my iPad when I decided it was time to bug out:

Here is a pretty dramatic picture after the first tornado. This is a second lowering. This didn't form into a tornado:

Categories: Software Testing

QA Music: All That We Suffer Through Leads To Determination

QA Hates You - Mon, 05/20/2013 - 06:39

Killswitch Engage, "In Due Time":

Categories: Software Testing

New skills for the QA tester: Scripting, security

SearchSoftwareQuality - Fri, 05/17/2013 - 11:52
Software quality assurance is gaining respect as a profession -- but do QA testers have the scripting and security skills the role now requires?

Categories: Software Testing

Help With a Survey on Software Test Professionals

Randy Rice's Software Testing & Quality - Fri, 05/17/2013 - 07:21
I am working on an article about software testing as a profession. This is a controversial topic to some people and I would like to get your thoughts.

I have created a very short (10-question) survey that will help me write this article. It will only allow 50 respondents, but I would like to invite anyone to participate at:
I will share the results and also let you know when the article is out.

Thanks for your help!

Categories: Software Testing

Don’t Find Bugs, Prevent Bugs

Eric Jacobson's Software Testing Blog - Fri, 05/17/2013 - 06:09

It’s a cliché, I know.  But it really gave me pause when I heard Jeff “Cheezy” Morgan say it during his excellent STAReast track session, “Android Mobile Testing: Right Before Your Eyes”.  He said something like, “instead of looking for bugs, why not focus on preventing them?”.                 

Cheezy demonstrated Acceptance Test Driven Development (ATDD) by giving a live demo, writing Ruby tests via Cucumber, for product code that didn’t exist.  The tests failed until David Shah, Cheezy’s programmer, wrote the product code to make them pass. 

(Actually, the tests never passed, which they later blamed on incompatible Ruby versions…ouch.  But I’ll give these two guys the benefit of the doubt. )

Now back to my blog post title.  I find this mindshift appealing for several reasons, some of which Cheezy pointed out and some of which he did not:

  • Per Cheezy’s rough estimate 8/10 bugs involve the UI.  There is tremendous benefit to the programmer knowing about these UI bugs while the programmer is writing the UI initially.  Thus, why not have our testers begin performing exploratory testing before the Story is code complete?
  • Programmers are often incentivized to get something to code complete so the testers can have it (and so the programmers can work on the next thing).  What if we could convince programmers it’s not code complete until it’s tested?
  • Maybe the best time to review a Story is when the team is actually about to start working on it; not at the beginning of a Sprint.  And what do we mean when we say the team is actually about to start working on it?
    • First we (Tester, Programmer, Business Analyst) write a bunch of acceptance tests.
    • Then, we start writing code as we start executing those tests.
    • Yes, this is ATDD, but I don’t think automation is as important as the consultants say.  More on that in a future post.
  • Logging bugs is soooooo time consuming and can lead to dysfunction.  The bug reports have to be managed and routed appropriately.  People can’t help but count them and use them as measurements for something…success or failure.  If we are doing bug prevention, we never need to create bug reports.
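The write-tests-first flow in those bullets can be sketched in miniature. Hedging heavily here: Cheezy's actual demo used Ruby and Cucumber, and the story, class, and method names below are all invented for illustration.

```java
// Hypothetical story: "a cart totals its line items."
// Step 1 (red): CartAcceptanceTest was written first, before Cart
// existed, so it could not even compile, let alone pass.
// Step 2 (green): the programmer wrote just enough Cart to make it pass.
class Cart {
    private int totalInCents = 0;

    void addItem(int priceInCents) {
        totalInCents += priceInCents;
    }

    int totalInCents() {
        return totalInCents;
    }
}

public class CartAcceptanceTest {
    public static void main(String[] args) {
        // The acceptance criteria, agreed on by tester, programmer,
        // and business analyst before any production code was written.
        Cart cart = new Cart();
        cart.addItem(300);
        cart.addItem(200);
        if (cart.totalInCents() != 500) {
            throw new AssertionError("expected 500, got " + cart.totalInCents());
        }
        System.out.println("acceptance test passed");
    }
}
```

The point of the exercise is the ordering, not the tooling: the failing test is the bug report that never has to be logged, because the code isn't "done" until the test goes green.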

Okay, I’m starting to bore myself, so I’ll stop.  Next time I want to explore Manual ATDD.

Categories: Software Testing