Refinements on the art of software testing
URL: http://www.testthisblog.com/

My Daddy Tests Software

Mon, 11/19/2012 - 11:13

This clearly makes her the coolest kid at daycare.

If I’d had Josephine 1,000 years ago, she probably would have become a software tester like her dad.  Back then, trades often remained in the family.  But in this era, she can be whatever she wants to be when she grows up.

I doubt she will become a software tester.  However, I will teach her how important software testers are.  Josie will grow up with Generation Z, a generation that will use software for almost everything.  The first appliances she buys will have sophisticated software running them.  She will probably be able to see if she needs more eggs by logging in to a virtual version of her refrigerator from work. 

And why do you think that software will work?  Because of testers!

Josie will be able to process information herself at lightning speed.  So I figure, if I start early enough, she can start making suggestions to improve the way we test. 

But first she has to learn to talk.

Do your kids appreciate your testing job?

Categories: Software Testing

What Makes Software Cool?

Fri, 10/26/2012 - 09:59

For most of us, testing for “coolness” is not at the top of our quality list.  Our users don’t have to buy what we test.  Instead, they get forced to use it by their employer.  Nevertheless, coolness can’t hurt. 

As far as testing for it…good luck.  It does not appear to be as straightforward as some may think.

I attended a mini-UX Conference earlier this week and saw Karen Holtzblatt, CEO and founder of InContext, speak.  Her keynote was the highlight of the conference for me, mostly because she was fun to watch.  She described the findings of 90 interviews and 2000 survey results, where her company asked people to show them “cool” things and explain why they considered them cool.

Her conclusion was that software aesthetics are way less important than the following four aspects:

  1. Accomplishments – When using your software, people need to feel a sense of accomplishment without disrupting the momentum of their lives.  They need to feel like they are getting something done that was otherwise difficult.  They need to do this without giving up any part of their life.  Example: Can they accomplish something while waiting in line?
  2. Connection – When using your software, they should be motivated to connect with people they actually care about (e.g., not Facebook friends).  These connections should be enriched in some manner.  Example: Were they able to share it with Mom?  Did they talk about it over Thanksgiving dinner?
  3. Identity – When using your software, they should feel like they’re not alone.  They should be asking themselves, “Who am I?” and “Do I fit in with these other people?”  They should be able to share their identity with joy.
  4. Sensation – When using your software, they should experience a core sensory pleasure.  Examples: Can they interact with it in a fresh way via some new interface?  Can they see or hear something delightful?

Here are a few other notes I took:

  • Modern users have no tolerance for anything but the most amazing experience.
  • The app should help them get from thought to action, nothing in between.
  • Users expect software to gather all the data they need and think for them.

I guess maybe I’ll think twice the next time I feel like saying, “just publish the user procedures, they’ll get it eventually”.

Categories: Software Testing

Performance Testing Primer

Mon, 10/15/2012 - 10:19

Last week we had an awesome Tester Lightning Talk session here at my company.  Topics included:

  • Mind Maps
  • Cross-Browser Test Emulation
  • How to Bribe Your Developers
  • Performance Testing Defined
  • Managing Multiple Agile Projects
  • Integration Testing Sans Personal Pronouns
  • Turning VSTS Test Results Files Into Test Reports
  • Getting Back to Work After Leave
  • Black Swans And Why Testers Should Care

The “Performance Testing Defined” talk inspired me to put my own twist on it and blog.  Here goes…

[Graphic: performance testing terms – Baseline, Load, Stress, Stability, and Spike Testing]

The terms in the above graphic are often misused and interchanged.  I will paraphrase from my lightning talk notes:

Baseline Testing – Fewer users than we expect in prod.  This is like when manual testers perform a user scenario and use a stopwatch to time it.  It could also be an automated load test where we use fewer than the expected number of users to generate load.
Load Testing – The number of users we expect in prod.  A real-world, realistic scenario.
Stress Testing – More users than we expect in prod.  An obscene number of users, used to determine the breaking point.  After such a test, the tester will be able to say, “With more than 2000 users, the system starts to drag.  With 5000 users, the system crashes.”
Stability Testing – Run the test continuously over a period of time (e.g., 24 hours, 1 week) to see if anything happens.  For example, you may find a memory leak.
Spike Testing – Think TicketMaster.  What happens to your system when it suddenly jumps from 100 simultaneous users to 5000 simultaneous users for a short period of time?

There.  Now you can talk like a performance tester and help your team discuss their needs. 
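To make those distinctions concrete, here is a minimal sketch of how the five terms might map to configurations of a single load-generating check.  The numbers are made up for illustration and assume an expected production load of roughly 1000 concurrent users.

```python
# Hypothetical load profiles.  The numbers are illustrative only and assume
# an expected production load of ~1000 concurrent users.
LOAD_PROFILES = {
    "baseline":  {"users": 100,  "duration_s": 10 * 60},       # fewer users than prod
    "load":      {"users": 1000, "duration_s": 30 * 60},       # expected prod load, realistic
    "stress":    {"users": 5000, "duration_s": 30 * 60},       # find the breaking point
    "stability": {"users": 1000, "duration_s": 24 * 60 * 60},  # prod load, run for 24 hours
    "spike":     {"users": 100,  "duration_s": 60 * 60,        # normal load...
                  "spike_users": 5000, "spike_s": 5 * 60},     # ...with a sudden short burst
}
```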

As far as building these tests, at the most basic level, you really only need one check (AKA automated test).  Said check should simulate something user-like, if possible.  In the non-web-based world (where I live), this check may be one or more service calls, and you probably do not want to automate the check at the UI level; you would need an army of clients to generate load.  After all, your UI will only ever have a load of one user, right?  What you’re concerned with is how the servers handle the load, so your check need only be concerned with the performance before the payload gets handed back to the client.

The check is probably the most challenging part of Performance testing.  Once you have your check, the economies of scale begin.  You can use that same check as the guts for most of your performance testing.  The main variables in each are user load and duration.
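For what it’s worth, here is a rough sketch of what that one check might look like.  The service call is hypothetical (get_account_summary stands in for whatever user-like call your system exposes), and a real performance test would use a proper load tool rather than raw threads, but the shape is the same: one user-like action, timed, driven by a user count and a duration.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def get_account_summary():
    """Hypothetical service call standing in for one user-like request.
    In a real check this would hit your service layer, not the UI."""
    time.sleep(0.05)  # placeholder for the real call

def run_check(users, duration_s):
    """Fire the service call from `users` concurrent workers for `duration_s`
    seconds and summarize the per-call response times."""
    stop_at = time.monotonic() + duration_s
    timings = []

    def one_user():
        while time.monotonic() < stop_at:
            start = time.monotonic()
            get_account_summary()
            timings.append(time.monotonic() - start)

    with ThreadPoolExecutor(max_workers=users) as pool:
        for _ in range(users):
            pool.submit(one_user)

    timings.sort()
    return {
        "calls": len(timings),
        "median_s": timings[len(timings) // 2],
        "p95_s": timings[int(len(timings) * 0.95)],
    }

# A quick baseline-style run: 10 users for 60 seconds.
print(run_check(users=10, duration_s=60))
```

Run the same check with the numbers from each profile above and you have baseline, load, stress, stability, and spike variants of it.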

Warning: I’m certainly an amateur when it comes to performance testing.  Please chime in with your corrections and suggestions.

Categories: Software Testing

Testers Are Like Fact Checkers

Thu, 09/27/2012 - 10:21

Per one of my favorite podcasts, WNYC’s On the Media, journalists are finding it increasingly difficult to check facts at a pace that keeps up with modern news coverage.  To be successful, they need dedicated fact checkers.  Seem familiar yet?

Journalists depend on these fact checkers to keep them out of trouble.  And fact checkers need to have their own skill sets, allowing them to focus on fact checking.  Fact checkers have to be creative and use various tricks, like only following trustworthy people on Twitter and speaking different languages to understand the broader picture.  How about now, seem familiar?

Okay, try this:  Craig Silverman, founder of Regret the Error, a media error reporting blog, said “typically people only notice fact checkers if some terrible mistake has been made”.  Now it seems familiar, right?

The audience of fact checkers or software testers has no idea how many errors were found before the story or product was released.  They only know what wasn’t found. 

Sometimes I have a revenge fantasy that goes something like this:

If a user finds a bug and says, “that’s so obvious, why didn’t they catch this”, their software will immediately revert to the untested version. 

…Maybe some tester love will start to flow then.

Categories: Software Testing

What I Love About Kanban As A Tester #5

Mon, 09/10/2012 - 09:53

There’s no incentive to look the other way when we notice bugs at the last minute.

We are planning to release a HUGE feature to production tomorrow.  Oops!  Wouldn’t you know it…we found more bugs.

Back in the dark ages, with Scrum, we might have talked ourselves into justifying the release without the bug fixes: “these aren’t terrible…maybe users won’t notice…we can always patch production later”.

But with Kanban, it went something like this:

“…hey, let’s not release tomorrow.  Let’s give ourselves an extra day.”

  • Nobody has to work late.
  • No iteration planning needs to be rejiggered.
  • There’s no set, established maintenance window restricting our flexibility.
  • Quality did not fall victim to an iteration schedule.
  • We don’t need to publish any known bugs (i.e., there won’t be any).
Categories: Software Testing

Testing, Everybody’s An Expert in Hindsight

Wed, 09/05/2012 - 10:13

I just came from an Escape Review Meeting.  Or as some like to call it, a “Blame Review Meeting”.  I can’t help but feel empathy for one of the testers who felt a bit…blamed.

With each production bug, we ask, “Could we do something to catch bugs of this nature?”.  The System 1 response is “no, way too difficult to expect a test to have caught it”.  But after 5 minutes of discussion, the System 2 response emerges, “yes, I can imagine a suite of tests thorough enough to have caught it, we should have tests for all that”.  Ouch, this can really start to weigh on the poor tester.

So what’s a tester to do?

  1. First, consider meekness.  As counterintuitive as it seems, I believe defending your test approach is not going to win respect.  IMO, there is always room for improvement.  People respect those who are open to criticism and new ideas.
  2. Second, entertain the advice but don’t promise the world.  Tell them about the Orange Juice Test (see below).

The Orange Juice Test is from Jerry Weinberg’s book, The Secrets of Consulting.  I’ll paraphrase it:

A client asked three different hotels to supply 700 glasses of fresh-squeezed orange juice the next morning, all served at the same time.  Hotel #1 said “there’s no way”.  Hotel #2 said “no problem”.  Hotel #3 said “we can do that, but here’s what it’s going to cost you”.  The client didn’t really want orange juice.  They picked Hotel #3.

If the team wants you to take on new test responsibilities or coverage areas, there is probably a cost.  What are you going to give up?  Speed?  Other test coverage?  Your kids?  Make the costs clear, let the team decide, and there should be no additional pain on your part.

Remember, you’re a tester, relax.

Categories: Software Testing

Keep Your Failed Tests To Yourself

Tue, 08/28/2012 - 10:30

One of my tester colleagues and I had an engaging discussion the other day. 

If a test failure is not caused by a problem in the system-under-test, should the tester bother to say the test failed? 

My position is: No. 

If a test fails but there is no problem with the system-under-test, it seems to me it’s a bad test.  Fix the test or ignore the results.  Explaining that a test failure is nothing to be concerned about gives the project team a net gain of nothing.  (Note: If the failure has been published, my position changes; the failure should be explained.)

The context of our discussion was the test automation space. I think test automaters, for some reason, feel compelled to announce automated check failures in one breath, and in the next, explain why these failures should not matter.  “Two automated checks failed…but it’s because the data was not as expected, so I’m not concerned” or “ten automated checks are still failing but it’s because something in the system-under-test changed and the automated checks broke…so I’m not concerned”. 

My guess is, project teams and stakeholders don’t care if tests passed or failed.  They care about what those passes and failures reveal about the system-under-test.  See the difference?

Did the investigation of the failed test reveal anything interesting about the system-under-test?  If so, share what it revealed.  The fact that the investigation was triggered by a bad test is not interesting.

If we’re not careful, Test Automation can warp our behavior.  IMO, a good way of understanding how to behave in the test automation space is to pretend your automated checks are sapient (AKA “manual”) tests.  If a sapient tester gets different results than they expected, but later realizes their expectations were wrong, they don’t bother to explain their recent revelation to the project team.  A sapient tester would not say, “I thought I found a problem, but then I realized I didn’t.”  How would that help anyone?

Categories: Software Testing

Should Testers Write Source Code?

Tue, 08/21/2012 - 10:22

My System 1 thinking says “no”.  I’ve often heard that separation of duties is what makes testers valuable.

Let’s explore this.

A programmer and a tester are both working on a feature requiring a complex data pull.  The tester knows SQL and the business data better than the programmer.

If Testers Write Source Code:

The tester writes the query and hands it to the programmer.  Two weeks later, as part of the “testing phase”, the tester tests the query they wrote themselves and finds 0 bugs.  Is anything dysfunctional about that? 

If Testers do NOT Write Source Code:

The programmer struggles but manages to cobble some SQL together.  In parallel, the tester writes their own SQL and puts it in an automated check.  During the “testing phase”, the tester compares the results of their SQL with the programmer’s and finds 10 bugs.  Is anything dysfunctional about that? 
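For illustration only, the automated check in that second scenario might look something like this minimal sketch.  The table names, columns, and database file are hypothetical; the point is simply that the tester’s independently written SQL becomes the oracle for the programmer’s.

```python
import sqlite3

# Hypothetical queries: the programmer's data pull and the tester's
# independently written version of the same pull.
PROGRAMMER_SQL = "SELECT customer_id, balance FROM accounts WHERE status = 'active'"
TESTER_SQL = """
    SELECT c.customer_id, a.balance
    FROM customers c
    JOIN accounts a ON a.customer_id = c.customer_id
    WHERE a.status = 'active'
"""

def check_queries_agree(conn):
    """Fail the check if the two queries return different row sets."""
    prog_rows = set(conn.execute(PROGRAMMER_SQL).fetchall())
    test_rows = set(conn.execute(TESTER_SQL).fetchall())
    missing = test_rows - prog_rows  # rows the tester expected but the programmer's SQL missed
    extra = prog_rows - test_rows    # rows the programmer's SQL returned unexpectedly
    assert not missing and not extra, f"{len(missing)} missing row(s), {len(extra)} unexpected row(s)"

# e.g., run against a local copy of the business data
check_queries_agree(sqlite3.connect("business_data.db"))
```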

Categories: Software Testing

Critical Thinking For Testers with Michael Bolton

Wed, 08/15/2012 - 13:39

After RST class (see my Four Days With Michael Bolton post), Bolton did a short critical thinking for testers workshop.  If you get an opportunity to attend one of these at a conference or elsewhere, it’s time well spent.  The exercises were great, but I won’t blog about them because I don’t want to give them away.  Here is what I found in my notes…

  • There are two types of thinking:
    1. System 1 Thinking – You use it all the time to produce quick answers.  It works fine as long as things are not complex.
    2. System 2 Thinking – This thinking is lazy; you have to wake it up.
  • If you want to be excellent at testing, you need to use System 2 Thinking.  Testing is not a straightforward technical problem because we are creating stuff that is largely invisible.
  • Don’t plan or execute tests until you obtain context about the test mission.
  • Leaping to assumptions carries risk.  Don’t build a network of assumptions.
  • Avoid assumptions when:
    • critical things depend on them
    • the assumption is unlikely to be true
    • the assumption is dangerous when not declared
  • Huh?  Really?  So?   (James Bach’s critical thinking heuristic)
    • Huh? – Do I really understand?
    • Really? – How do I know what you say is true?
    • So? – Is that the only solution?
  • “Rule of Three” – If you haven't thought of at least three plausible explanations, you’re not thinking critically enough.
  • Verbal Heuristics: Words to help you think critically and/or dig up hidden assumptions.
  • Mary Had a Little Lamb Heuristic – emphasize each word in that phrase and see where it takes you.
  • Change “the” to “a” Heuristic:
    • “the killer bug” vs. “a killer bug”
    • “the deadline” vs. “a deadline”
  • “Unless” Heuristic:  I’m done testing unless…you have other ideas
  • “Except” Heuristic:  Every test must have expected results except those we have no idea what to expect from.
  • “So Far” Heuristic:  I’m not aware of any problems…so far
  • “Yet” Heuristic: Repeatable tests are fundamentally more valuable, yet they never seem to find bugs.
  • “Compared to what?” Heuristic: Repeatable tests are fundamentally more valuable…compared to what?
  • A tester’s job is to preserve uncertainty when everyone around us is certain.
  • “Safety Language” is a precise way of speaking which differentiates between observation and inference.  Safety Language is a strong trigger for critical thinking.
    • “You may be right” is a great way to end an argument.
    • “It seems to me” is a great way to begin an observation.
    • Instead of “you should do this” try “you may want to do this”.
    • Instead of “it works” try “it meets the requirements to some degree”
    • All the verbal heuristics above can help us speak precisely.
Categories: Software Testing

Bite-Sized Test Wisdom From RST Class – Part 3

Mon, 08/06/2012 - 10:10

See Part 1 for intro.

  • People don’t make decisions based on numbers, they do so based on feelings (about numbers).
  • Asking for ROI numbers for test automation or social media infrastructure does not make sense because those are not investments, those are expenses.  Value from an automation tool is not quantifiable.  It does not replace a test a human can perform.  It is not even a test.  It is a “check”.
  • Many people say they want a “metric” when what they really want is a “measurement”.  A “metric” allows you to stick a number on an observation.  A “measurement”, per Jerry Weinberg, is anything that allows us to make observations we can rely on.  A measurement is about evaluating the difference between what we have and what we think we have.
  • If someone asks for a metric, you may want to ask them what type of information they want to know (instead of providing them with a metric).
  • When something is presented as a “problem for testing”, try reframing it to “a problem testing can solve”.
  • Requirements are not a thing.  Requirements are not the same as a requirements document.  Requirements are an abstract construct.  It is okay to say the requirements document is in conflict with the requirements.  Don’t ever say “the requirements are incomplete”.  Requirements are not something that can be incomplete.  Requirements are complete before you even know they exist, before anyone attempts to write a requirements document.
  • Skilled testers can accelerate development by revealing requirements.  Who cares what the requirement document says.
  • When testing, don’t get hung up on “completeness”.  Settle for adequate.  The same goes for requirements documents.  Example: Does your employee manual say “wear pants to work”?  Do you know how to get to your kid’s school without knowing the address?
  • Session-Based Test Management (SBTM) emphasizes conversation over documentation.  It’s better to know where your kid’s school is than to know the address.
  • SBTM requires 4 things:
    • Charter
    • Time-boxed test session
    • Reviewable results
    • Debrief
  • The purpose of a program is to provide value to people.  Maybe testing is more than checking.
  • Quality is more than the absence of bugs.
  • Don’t tell testers to “make sure it works”.  Tell them to “find out where it won’t work.”  (Yikes, that does go against the grain of my We Test To Find Out If Software *Can* Work post, but I still believe both.)
  • Maybe when something goes wrong in production, it’s not the beginning of a crisis, it’s the end of an illusion.
Categories: Software Testing