Examining Agile fundamentals: Key practices for success

SearchSoftwareQuality - Mon, 09/17/2012 - 06:20
The term Agile can be slippery, with various interpretations and implementation methods. Learn more about the Agile fundamentals that enable success.


Categories: Software Testing

An Honor to Share Our Web Performance Tuning Article

LoadStorm - Fri, 09/14/2012 - 14:02

"My name is Jovana."

Usually when I receive an email that starts out like the sentence above, I figure that I'm about to be solicited for something to which my wife would strongly object. Upon further reading, though, I realized this email was legit and became intrigued by the request.

"I found your article extremely interesting and would like to spread the word for people from Ex Yugoslavia."

That was very cool. Everyone enjoys hearing that their writing is valuable to someone - even better if they want to pass it on to their friends!

Jovana found value in our post about web performance tuning. Yeah, one of my favorite topics too.

It is an honor to be asked if someone may translate your article into another language...in this case, Serbo-Croatian.

According to Jovana, "My purpose is to help people from Ex-Yugoslavia better understand some very useful information about computer science." That is quite a compliment to me (at least that's my perspective), and it does my heart good to know that other computer scientists around the world could benefit from something I contributed. Me? Scott Price, the balding geeky guy who lives in a remote mountain location, is helping computer science? Sounds great, albeit unlikely.

Some quick info about Jovana Milutinovich:

I was born in Yugoslavia, Europe. Former Yugoslavia consisted of now totally independent states like Serbia, Montenegro, Croatia, Bosnia & Herzegovina, Slovenia and Macedonia, which are all united by the Serbo-Croatian language. I'm currently studying Computer Science at the University of Belgrade, Serbia.

If you can read Serbo-Croatian, please go to her article on WebHostingGeeks about web performance tuning.

Thanks Jovana for considering our thoughts worthy of sharing with your colleagues. We appreciate your professionalism in asking for permission, and we consider it an honor that you read LoadStorm's blog. I sincerely hope you are successful in your computer science career. We look forward to hearing more about your contributions to our industry. Please stay in touch.




Don't Forget the Joy

ABAKAS - Catherine Powell - Fri, 09/14/2012 - 04:05
Work is, well, work. It's also an open secret that we're better workers when we have fun at work. We're more productive, we help the bottom line, and we're more likely to stay in a job longer if we enjoy our jobs. So let's roll up our sleeves and get to work, but don't forget the joy!

Some joy comes from the work itself. There is a thrill in solving a hard problem. Learning a new technique provides a quiet satisfaction. Finally understanding some concept - really understanding it! - puts a spring in my step. Looking at a demo or a customer report and knowing I built some feature they really love sends me home bragging. I know I'm not alone in quite simply loving what I do.

Other joy comes from the culture around the work. This is where your colleagues and your work environment come in. Joking around the coffee machine is just plain fun. Having your desk set up the way you like it - with your Star Wars figurines and footrest - makes the physical office more comfortable. We're people, after all, not automatons.

As an employer, there are things I can do to help make work more fun. Most of them aren't even very expensive! Consider these:

  • A training course - online or at a local university's extension school - for an employee who asks usually costs under $1000.
  • Letting someone go speak at a conference costs about 20 hours for prep and the cost of plane/hotel. (The conference itself is usually free for speakers.)
  • Letting the team choose a theme for sprint names and name their own sprints. Free and often hilarious.
  • Bringing in bagels or cookies or cake once or twice a month - either on a schedule or ad hoc, depending on your team's sense of routine - is surprisingly fun. Keeping snacks on hand accomplishes the same thing.
  • Don't do management by walking around. Be available and show up but don't hover. You don't want to be your employees' friend, but you don't want to be the heavy, either. Free, too.
Joy has a place at work. Encourage it.
Categories: Software Testing

Status Message Messaging

ABAKAS - Catherine Powell - Tue, 09/11/2012 - 07:25
For those of us who use hosted services, status pages are an essential communication point. They're a way to indicate service health ("it's not us, it's you") and, when there are problems, they provide a forum for disseminating updates quickly and loudly. The "everything is okay" update is pretty easy. The "stuff's broken" update is a much more delicate thing to write. It has to reflect your relationship with your users, but also reflect the gravity of the situation.

Here's part of a status update Beanstalk published this morning:

"Sorry for the continued problems and ruining your morning." 
Oh man. That's an update you don't want to have to publish. To provide some context, we'll just say that Beanstalk has had a bad few days. Beanstalk provides git and subversion hosting; that makes them a high-volume service. Pushing, pulling, committing, checking in/out, etc. happen very frequently and, well, software teams on deadline are not known for being nice when their tools get in the way. The last few days have been hard on Beanstalk: they got hit by the GoDaddy attack, then traced a load problem on their servers to an internal issue, and finally are again having difficulties with the file servers that host repos. And you can see it in that status update. "[R]uining your morning" is the phrasing of someone who is seriously exasperated. That update does some things well: it shows they understand the problem is severe, and it reflects the increasing frustration users are likely experiencing. It's escalating, just like users' tempers are probably escalating. However, it goes too far for my taste. It reeks of frustration on the part of whoever wrote the update, and that's clearly not a good head space for actually solving the problem. It also implies a sense of fatalism. That update was at 9:23am - my morning might still be salvaged, if they can get the system back up relatively quickly. Don't give up, guys!

There's an art to writing the status update when things are going poorly. When I'm working with a team fixing a major problem, I'll dedicate someone to doing just that. They sit in the middle of the war room (or chat session or conference call) and do nothing but manage status communications. Let the people fixing the problem focus on fixing the problem. Hand off status to someone who will maintain a level head and write a good status update, considering:

  • Frequency of updates. Post at least hourly for most issues, and again whenever something changes. Silence does not inspire confidence.
  • Location of updates. As many channels as possible are good. Use twitter, the status page, email or phone calls to really important customers, and any other tools at your disposal.
  • Tone of updates. This needs to match your general tone of communication (see the Twitter fail whale - still cute and fun, even in error!) but also show that you know your customers are getting frustrated.
  • Inspiring confidence. Providing enough information to look like you are getting a grip on the problem and will get it fixed is important. Providing a proper postmortem also helps inspire confidence.
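The considerations above can even be partly mechanized by the dedicated status scribe. Here is a minimal sketch; the channel hooks are hypothetical stand-ins, and a real setup would call your status-page, Twitter, and email APIs behind the same interface:

```python
from datetime import datetime, timezone

def format_status_update(state, detail, next_update_minutes=60):
    """Compose one update: current state, what we know, when we'll post next."""
    stamp = datetime.now(timezone.utc).strftime("%H:%M UTC")
    return (f"[{stamp}] Status: {state}. {detail} "
            f"Next update within {next_update_minutes} minutes.")

def broadcast(message, channels):
    """Send the same message to every channel, so no audience sits in silence."""
    for post in channels:
        post(message)

# Hypothetical channels: an in-memory status-page log and stdout stand in
# for real API calls.
status_page = []
broadcast(
    format_status_update("degraded",
                         "File servers are under heavy load; repos may be slow."),
    channels=[status_page.append, print],
)
```

Committing to a "next update within N minutes" window in the message itself is one way to make the frequency promise explicit to users.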



Categories: Software Testing

Exploring the shifting roles in test and QA management

SearchSoftwareQuality - Mon, 09/10/2012 - 13:18
If Agile software teams reorganize and report to one line manager, what happens to test and QA management? Matt Heusser shares his answer with you.


Categories: Software Testing

What I Love About Kanban As A Tester #5

Eric Jacobson's Software Testing Blog - Mon, 09/10/2012 - 09:53

There’s no incentive to look the other way when we notice bugs at the last minute.

We are planning to release a HUGE feature to production tomorrow.  Ooops!  Wouldn’t you know it…we found more bugs.

Back in the dark ages, with Scrum, we may have talked ourselves into justifying the release without the bug fixes: “these aren’t terrible…maybe users won’t notice…we can always patch production later”.

But with Kanban, it went something like this:

“…hey, let’s not release tomorrow.  Let’s give ourselves an extra day.”

  • Nobody has to work late.
  • No iteration planning needs to be rejiggered.
  • There’s no set, established maintenance window restricting our flexibility.
  • Quality did not fall victim to an iteration schedule.
  • We don’t need to publish any known bugs (i.e., there won’t be any).
Categories: Software Testing

How to Wow Customers Even When You are Closed

Randy Rice's Software Testing & Quality - Fri, 09/07/2012 - 15:00
Here I am in Cuchara, Colo, sitting on a boardwalk in 72 degrees and light rain (elevation 8,600 feet above sea level), outside a great little coffee shop, Brewed Awakenings.

Don't you love catchy names?

Unfortunately, almost every store in the village, including the coffee shop, is closed because the summer season is over. The owner of the shop (at least I think she is the owner  :-)  ) walked by a few minutes ago, unlocked the door to go in and I heard her say to someone, "I just need to get the cinnamon rolls."  (They have AWESOME ones.)

Me, being my vocal self, said, "Sure, I'll take one," and just laughed. A few minutes later, out she comes with one on a paper plate. Keep in mind, the shop is closed!

I said, "Please let me pay you." She said, "Oh, just come back in sometime."

Isn't that great?  I'll be back tomorrow morning for sure!

So, I'll just finish my awesome cinnamon roll and wish everyone a great weekend.
Categories: Software Testing

Testing, Everybody’s An Expert in Hindsight

Eric Jacobson's Software Testing Blog - Wed, 09/05/2012 - 10:13

I just came from an Escape Review Meeting.  Or as some like to call it, a “Blame Review Meeting”.  I can’t help but feel empathy for one of the testers who felt a bit…blamed.

With each production bug, we ask, “Could we do something to catch bugs of this nature?”.  The System 1 response is “no, way too difficult to expect a test to have caught it”.  But after 5 minutes of discussion, the System 2 response emerges, “yes, I can imagine a suite of tests thorough enough to have caught it, we should have tests for all that”.  Ouch, this can really start to weigh on the poor tester.

So what’s a tester to do?

  1. First, consider meekness.  As counterintuitive as it seems, I believe defending your test approach is not going to win respect.  IMO, there is always room for improvement.  People respect those who are open to criticism and new ideas.
  2. Second, entertain the advice but don’t promise the world.  Tell them about the Orange Juice Test (see below).

The Orange Juice Test is from Jerry Weinberg’s book, The Secrets of Consulting.  I’ll paraphrase it:

A client asked three different hotels to supply said client with 700 glasses of fresh squeezed orange juice tomorrow morning, served at the same time.  Hotel #1 said “there’s no way”.  Hotel #2 said “no problem”.  Hotel #3 said “we can do that, but here’s what it’s going to cost you”.  The client didn’t really want orange juice.  They picked Hotel #3.

If the team wants you to take on new test responsibilities or coverage areas, there is probably a cost.  What are you going to give up?  Speed?  Other test coverage?  Your kids?  Make the costs clear, let the team decide, and there should be no additional pain on your part.

Remember, you’re a tester, relax.

Categories: Software Testing

The Customer Wants To Speak With You. Why Cover Your Ears?

DevelopSense - Michael Bolton - Tue, 09/04/2012 - 11:59
Speaking of oracles—the ways in which we recognize problems… I’m on the mailing list for a company whose product I purchased a while ago. The other day, I received a mailing signed by the product marketing manager for that company. The topic of the mailing is a potential use for the product, but the product [...]
Categories: Software Testing

Drupal Load Testing Partnership

LoadStorm - Tue, 09/04/2012 - 10:42
Load Testing Drupal Implementations with LoadStorm

Promet Source, web application development specialists in Chicago, announced their official partnership with LoadStorm recently. Their press release called it, "Drupal systems finally able to undergo efficient, rigorous load-barring testing."

According to Promet's team, they like the "powerful, cost-effective web load test tool, and with LoadStorm’s ability to quickly create a load test to simulate a scenario of sudden high traffic to a given website, the partnership represents the continuing evolution of Promet Source’s test driven development."

“The complex online systems that we build require rigorous testing,” says Andrew Kucharski, President of Promet Source. “With LoadStorm, we have been able to offer our clients the assurance that their website can withstand the high level of traffic that they can expect their site will draw.”

Promet Source’s clients will now benefit from capacity planning that is backed by custom-built automated scaling systems that will be able to respond to traffic surges. Promet, in partnership with LoadStorm, has already been able to ensure that their client OptionIt, a popular online reservation tool, is able to withstand the high peaks in traffic it receives after crucial sporting championships or tour announcements.

“It takes skilled engineers to optimize Drupal implementations properly and achieve high performance,” noted Scott Price, Vice President of LoadStorm. “Andy's team is probably the best in the world. Making a Drupal site look pretty can be done by just about any dev shop but making it run efficiently takes real experts because of the complexity of Drupal's many modules. LoadStorm is proud to be a partner of Promet Source.”

About Promet Source

Promet Source is a full service technology firm, focusing on Drupal and iOS that delivers high-value consulting and software development solutions that clients need to grow their business. As a leading technology provider of web services, Promet Source focuses on complex web development, support, mobile applications, and strategic marketing. We are dedicated to open source software solutions by providing managed services for Drupal-based websites, products, and applications. For more information, visit Promet’s website at www.prometsource.com.

About LoadStorm

LoadStorm is a cloud load testing tool. It has a simple browser user interface that makes it easy to create many scenarios representing different user types and allocate the right volume of traffic to each. With thousands of servers around the world, it is able to simulate up to 100,000+ concurrent users. Try it free by creating a Breeze account.






Testing 2.0

Google Testing Blog - Fri, 08/31/2012 - 11:14

By Anthony F. Voellm (aka Tony the perfguy)

It’s amazing what has happened in the field of test in the last 20 years... a lot of “art” has turned into “science”. Computer scientists, engineers, and many other disciplines have worked on provable systems and calculus, pioneered model based testing, invented security fuzz testing, and even settled on a common pattern for unit tests called xunit. The xunit pattern shows up in open source software like JUnit as well as in Microsoft development test tools.
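The xUnit pattern mentioned above is easy to see in any of its descendants. As a minimal sketch, here it is in Python's stdlib unittest (itself an xUnit implementation); the ShoppingCart fixture and test names are invented for illustration:

```python
import unittest

class TestShoppingCart(unittest.TestCase):
    """Classic xUnit shape: a fixture, per-test setup, independent test methods."""

    def setUp(self):
        # xUnit "setup": runs before each test method, so every test
        # starts from a fresh fixture.
        self.cart = []

    def test_starts_empty(self):
        self.assertEqual(self.cart, [])

    def test_add_item(self):
        self.cart.append("widget")
        # xUnit "assertion": the framework reports pass/fail per method.
        self.assertEqual(len(self.cart), 1)
```

Run with `python -m unittest`; JUnit, NUnit, and the rest of the family share the same setup/test/assert shape.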

With all this innovation in test, it's no wonder test is dead. The situation is no different from the late 1800s, when patents were declared dead: everything had been invented. So now that everything in test has been invented, it's dead.

Well... if you believe everything in test has been invented then please stop reading now :)

As an aside:  “Test is dead” was a keynote at the Google Test Automation Conference (GTAC) in 2011.  You can watch that talk and many other GTAC test talks on YouTube, and I definitely recommend you check them out.  Talks span a wide range of topics, from GUI automation to cloud.

What really excites me these days is that we have closed a chapter on test. A lot of the foundation of writing and testing great software has been laid (examples at the beginning of the post, tools like Webdriver for UI, FIO for storage, and much more), which I think of as Testing 1.0. We all use Testing 1.0 day in and day out. In fact at Google, most of the developers (called Software Engineers or SWEs) do the basic Testing 1.0 work and we have a high bar on quality.  Knuth once said "Be careful about using the following code -- I've only proven that it works, I haven't tested it." 

This brings us to the current chapter in test, which I call Testing 1.5.  This chapter is being written by computer scientists, applied scientists, engineers, developers, statisticians, and many other disciplines.  These people come together in the Software Engineer in Test (SET) and Test Engineer (TE) roles at Google. SET/TEs focus on developing software faster, building it better the first time, testing it in depth, releasing it quicker, and making sure it works in all environments.  We often put deep test focus on Security, Reliability and Performance.  I sometimes think of SET/TEs as risk assessors whose role is to figure out the probability of finding a bug, and then to work to reduce that probability. These are super interesting computer science problems where we take a solid engineering approach, rather than a process-oriented, manual, people-intensive approach.  We always look to scale with machines wherever possible.

While Testing 1.0 is done and 1.5 is alive and well, it’s Testing 2.0 that gets me up early in the morning to start my day. Imagine if we could reinvent how we use and think about tests.  What if we could automate the complex decisions on good and bad quality that humans are still so good at today? What would it look like if we had a system collecting all the “quality signals” (think: tests, production information, developer behavior, …) and could predict how good the code is today, and what it most likely will be tomorrow? That would be so awesome...

Google is working on Testing 2.0 and we’ll continue to contribute to Testing 1.0 and 1.5. Nothing is static... keep up or miss an amazing ride.

Peace.... Tony

Special thanks to Chris, Simon, Anthony, Matt, Asim, Ari, Baran, Jim, Chaitali, Rob, Emily, Kristen, Annie, and many others for providing input and suggestions for this post.
Categories: Software Testing

Consider the Message

ABAKAS - Catherine Powell - Fri, 08/31/2012 - 10:47
A couple days ago I was at the butcher picking up some meat for supper (burgers!). My card got declined. And here's the thinking: "Oh wow embarrassing! I come here all the time! I'm sooo not that person. I'm fiscally responsible, darn it! Besides, I'm nowhere near the limit. How annoying!" It's about a 5 second panic, but let's be honest, it's not a good feeling. It's embarrassing and annoying for both parties - for the cashier and for me.

So I paid with cash, and as I was going out, the cashier handed me the receipt from when it got declined. Here it is:

[receipt image]

Seriously?! Seriously!?!? That 5 second panic and the annoyance for me and for the cashier - and check out that decline reason:

"Could not make Ssl c"

I'm going to assume that means "Could not make an SSL connection to ". Bonus points for the truncated message and for the interesting capitalization.

That's why error messages matter. The error shouldn't have been "Declined". It should have been "Error". That would have saved us all the embarrassment, at least! (Yeah, it still would have been annoying.)
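The fix is cheap: choose the message based on *why* the transaction failed, not merely that it failed. A minimal sketch, assuming a hypothetical result dictionary (a real terminal would inspect its processor's response codes):

```python
def terminal_message(result):
    """Choose the cashier-facing message for a card transaction.

    An SSL/network failure is the store's problem, not the customer's,
    so it must never surface as "Declined".
    """
    if result.get("network_error"):
        return "Error: could not reach payment processor. Please retry."
    if result.get("approved"):
        return "Approved"
    return "Declined"

# The receipt in this post: an SSL failure mislabeled as a decline.
print(terminal_message({"network_error": True, "approved": False}))
```

One branch, checked first, is all it takes to spare the customer the five-second panic.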

So please, consider your error messages. They matter.
Categories: Software Testing

Software quality and testing: Does the DevOps movement shortchange QA?

SearchSoftwareQuality - Fri, 08/31/2012 - 07:23
Teams must merge roles and adjust testing practices when they join the DevOps movement. Learn how to ensure quality with DevOps adoption.


Categories: Software Testing

Parsing management roles on a DevOps team

SearchSoftwareQuality - Thu, 08/30/2012 - 12:50
Project management expert Yvette Francino explains the leadership roles of project managers and QA managers on a DevOps team.


Categories: Software Testing

Agile metrics, tools and processes: Tenets for the project manager

SearchSoftwareQuality - Thu, 08/30/2012 - 12:44
The Agile project manager must understand the basics of collaboration, servant leadership, Agile metrics and the tools and processes the team uses.


Categories: Software Testing

Working Your Obliques

QA Hates You - Thu, 08/30/2012 - 03:40

Luxury Cars Struggle In New Crash Tests:

Last year the National Highway Traffic Safety Administration (NHTSA) instituted more rigorous crash-testing procedures that made perfect five-star ratings tougher to come by. This year the Insurance Institute for Highway Safety (IIHS) is doing the same, and the first round of results finds many popular luxury cars to be lacking. Only three out of the 11 model-year 2012 cars tested under the Institute’s new so-called “small overlap frontal crash test” earned what would be considered passing ratings.

. . . .

In the IIHS’s new frontal test, cars are smashed at 40 mph with only 25 percent of a car’s front end on the driver side striking a five-foot tall rigid barrier. The Institute says the test is designed to replicate what happens when the front corner of a car strikes another vehicle or an object like a tree or utility pole in more of a glancing blow, rather than a full-on or offset frontal collision.

This is another industry’s example of happy path testing. The full frontal crash test has thus far yielded results that made car makers design their cars to meet that test, head-on if you will, and to account for what happens in that test. That sounds an awful lot like what happens in happy path testing or the bare minimum “testing” that occurs in demos, internal and external, where someone shows someone else exactly how the developers expect the user will use the software. Maybe there’s even some validation testing to make sure that, if a collision occurs, it’s handled.

But the real damage, and the real value of QA, is in the oblique testing, where QA doesn’t do what the user is supposed to do, but also does something that the developer doesn’t expect a user to do even when the user isn’t doing what he’s supposed to do.

The trick, of course, is to think that way and to see the possibilities of just tangentially using the software incorrectly or working just a little bit out of order to elicit a response that you can film into a compelling PSA for the benefits of not commuting or enter into the defect tracker.

It’s also a keen insight into why QA is never happy nor confident in a product; sure, it might have passed all the tests (Ha! Just kidding! It just passed enough tests to satisfy the low standards of the project managers), but QA knows it did not think of all the tests, and out there, some user is going to find the right combination of events and the right angle and speed to hit the application to build his own catastrophe. If only I could think of that use case.

Categories: Software Testing

Autonomous and Dependent Teams

ABAKAS - Catherine Powell - Wed, 08/29/2012 - 12:02
Almost all of my clients are currently doing some variation on agile methodologies. Most of them are doing SCRUM or some version of it. And yet, they are very different teams. In particular, lately I've noticed that there are two kinds of teams: those that encourage autonomy and those that encourage dependence.

To be clear, I'm speaking entirely within the development team itself. Within a development team there are invariably leaders, usually the ones who have been around for a while and show the most ability to interact on a technical level with the rest of the team. The character of those leaders and how much autonomy non-leader team members have says a lot about the team.

Teams that encourage autonomy take the attitude that team members should help themselves. If a member of the team needs something - a new git branch, a test class, a method on an object in another area of the code - then that person is responsible for getting it.  How the team member accomplishes that is, in descending order of preference: (1) doing it; (2) asking for help to do it with someone (pairing and learning); (3) asking someone else to do it; (4) throwing up a request to the team at large.

Teams that encourage dependence have a very different attitude. These are teams where each person has a specialty and anything outside that specialty should be done by a leader. If a team member needs something - a new git branch, a test class, a method in another layer of the code - then that person should ask a team leader, who will provide it. Sometimes the leader passes the request off to another team member, and sometimes the leader simply does it.

Let's look a little deeper at what happens with these teams.

Autonomous Teams

  • Emphasize a leaderless culture. These are the teams that will say, "we're all equals" or "there's no leader." There are people who know more about a given area or technology, but the team considers them question answerers more than doers in that particular area.
  • Can better withstand the loss of one person. Whether it's vacation, maternity leave, or leaving the company, the absent person is less likely to have specialized knowledge no one else on the team has. It's a loss that's easier to recover from.
  • Tend to have more tooling. Because there's no dedicated "tools person", everyone introduces tooling as it's needed, from a continuous integration system to deployment scripts to test infrastructure to design diagramming. Over time this actually winds up with more tools in active use than a team with a dedicated tools engineer.
  • Produce more well-rounded engineers. "I don't do CSS" is not an excuse on this team. If the thing you're working on needs it, well, you do CSS now!
  • Work together more. Because each team member touches a larger area of the code base, there's more to learn and team members wind up working together frequently, either as a training exercise, or to avoid bumping into each other's features, or just because they enjoy it.
  • Tend toward spaghetti code. With everyone touching many parts of the code, there is some duplication. Coding standards, a strong refactoring policy and static code analysis can help keep this under control.
  • Have less idea of the current status. Because each team member is off doing the work, they don't always know the overall status of a project. This is what the daily standup and burndown charts are supposed to help with, and they can, if done carefully.
Dependent Teams
  • Have a command and control culture. These are the teams that say, "we'd be dead without so-and-so" or "Blah tells me what to do." They look to the leader (or leaders) and do what that person says, frequently waiting for his opinion.
  • Can quickly replace non-leaders but have a huge dependence on leaders. When a leader is missing - vacation, meeting, or leaves the company - then the team gets very little done, and uses the phrase, "I don't know. So-and-so would normally tell me, but he's not around."
  • Have a good sense of overall status. The leaders tend to know exactly where things stand. Individual team members often do not.
  • Do standup as an "update the manager" period. The leader usually leads standup, and members will speak directly to that person (watch the body language - they usually face the person). Standup often takes place in shorthand, and not all team members could describe each task being worked on.
  • Tend to work alone or with a leader. Because individual team members tend not to work on similar things or to know what everyone is doing, they'll often work alone or with a team leader.
  • Tend to wait. You'll hear phrases like, "well, I need a Git branch; has one been created yet?" Instead of attempting to solve the problem - for example, by looking for a branch and creating one - the team member will note the problem and wait for a leader to fix it or to direct the fix. 


Overall, I vastly prefer working with teams that encourage autonomy. There are moments of chaos, and times when you find yourself doing some rework, but overall the teams get a lot more done and they produce better engineers. I understand the appeal of dependent teams to those who want to be essential (they'd like to be the leaders) or to those who just want to do what they're told and go home, but it's not for me. Viva the autonomous team!

Categories: Software Testing

Remote Workers Not As Distant

QA Hates You - Wed, 08/29/2012 - 08:31

Report: Remote Workers More Engaged, Committed:

If your employees come into the office each day, it’s natural to think that they’re engaged and well-connected with one another. But that’s a misperception, according to a recent blog post from the Harvard Business Review.

Scott Edinger, founder of Tampa-based Edinger Consulting Group, wrote that the physical proximity of an office gives the illusion that co-workers are communicative and working together efficiently. The opposite is true, however. Remote workers are actually more engaged and committed to their team, Edinger wrote.

One reason for this, Edinger pointed out, could be that members of virtual teams feel the physical distance between them makes interactions more valuable.

When people do not sit at adjacent desks, they try hard to connect with one another and maximize what little time they do spend speaking with one another.

He wrote: “What’s more, because they have to make an effort to make contact, these leaders can be much more concentrated in their attention to each person and tend to be more conscious of the way they express their authority.”

Working remotely does focus one’s attention and effort on the task at hand and ensures that when you’re “on the clock,” you’re actually putting forth effort toward the company goals. You’re not as focused on making it to the end of the day as you are on getting the tasks done.

I worked at a position where we went from a remote office setup to an office space, and a lot of the communication remained through IM or email for tracking purposes or because we were too lazy to get up from our desks. As far as communication goes, the only real benefit of being in the office is being able to trap the developer who’s been avoiding you in a cubicle and make sure that he is, in fact, going to hear you out (as will the whole office by that time).

There are distinct benefits to allowing remote workers, especially in IT. I think it will continue, and this article helps bolster the arguments for it.

Categories: Software Testing

Keep Your Failed Tests To Yourself

Eric Jacobson's Software Testing Blog - Tue, 08/28/2012 - 10:30

One of my tester colleagues and I had an engaging discussion the other day. 

If a test failure is not caused by a problem in the system-under-test, should the tester bother to say the test failed? 

My position is: No. 

If a test fails but there is no problem with the system-under-test, it seems to me it’s a bad test.  Fix the test or ignore the results.  Explaining that a test failure is nothing to be concerned with gives the project team a net gain of nothing.  (Note: If the failure has been published, my position changes; the failure should be explained.)

The context of our discussion was the test automation space. I think test automaters, for some reason, feel compelled to announce automated check failures in one breath, and in the next, explain why these failures should not matter.  “Two automated checks failed…but it’s because the data was not as expected, so I’m not concerned” or “ten automated checks are still failing but it’s because something in the system-under-test changed and the automated checks broke…so I’m not concerned”. 

My guess is, project teams and stakeholders don’t care if tests passed or failed.  They care about what those passes and failures reveal about the system-under-test.  See the difference?

Did the investigation of the failed test reveal anything interesting about the system-under-test?  If so, share what it revealed.  The fact that the investigation was triggered by a bad test is not interesting.

If we’re not careful, test automation can warp our behavior. IMO, a good way of understanding how to behave in the test automation space is to pretend your automated checks are sapient (AKA “manual”) tests.  If a sapient tester gets different results than they expected, but later realizes their expectations were wrong, they don’t bother to explain their recent revelation to the project team.  A sapient tester would not say, “I thought I found a problem, but then I realized I didn’t.”  Does that help anyone?
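In code, this means quarantining a known-bad check instead of announcing its failure on every run. A sketch with Python's stdlib unittest (the check names are hypothetical); the skipped check stays out of the team's failure report until its fixture is repaired:

```python
import unittest

class CheckoutChecks(unittest.TestCase):
    @unittest.skip("Fixture data drifted from expectations -- a bad test, "
                   "not a product bug. Fix the fixture, then re-enable.")
    def test_discount_applied(self):
        # Would fail for reasons unrelated to the system-under-test.
        self.fail("stale fixture")

    def test_order_total(self):
        # A check whose result actually reveals something about the
        # system-under-test.
        self.assertEqual(100 + 8, 108)
```

The skip reason documents why the check is benched, so nobody has to explain the failure aloud, and nobody mistakes a bad test for a healthy system.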

Categories: Software Testing

Only the Elder Sign Offers More Protection

QA Hates You - Tue, 08/28/2012 - 07:04

This captcha will protect against bots particularly well.

Also, users who don’t have an e with an acute accent on their keyboards.

I guess it’s to test how bad you really, really want to create a new account.

Hey, let me digress here: Why doesn’t the Elder Sign actually have a Unicode symbol or character for it?

Aren’t geeks in charge of the official Unicode spec, or what?

Categories: Software Testing