Software Testing

Roles and Fluidity

Alan Page - Sun, 12/13/2015 - 10:02

I had a Twitter conversation about roles this week. I’ll recap it and expand on my views, but first I’ll tell a story.

Very early in the Windows 98 project, I was performing some exploratory testing on the Explorer shell and found an interesting (and slightly weird) bug. At the end of my session, I entered the bug (and several others) into our bug tracking system, but the one issue continued to intrigue me. So, I took some time to look at the bug again and reflect on what could cause it. I dug into the source code in the area where the bug occurred, but there was nothing obvious. I couldn’t shake my curiosity, so I looked at the check-in history and read through the code again, this time focusing on code checked in within the last few weeks. My assumption was that this was a newly introduced bug, and that seemed like a reasonable way to narrow my focus.

Less than an hour later, I discovered that a particular Windows API was called several times throughout the code base, but on one occasion was called with the parameters reversed. At this point, I could have hooked up a debugger (some could say I should have already hooked up a debugger), but after visually examining the code, the code of the API, and the documentation, I was positive I had found the cause of the error. I added the information to the bug report and started to pack my things to go home.

But I didn’t. I couldn’t.

I was bothered by how easy it was to make this particular error and wondered if others had made it too. I sat down and wrote a small script that would attempt to discover this error in source code. Another hour or so later, I had an imperfect but pretty good analyzer for this particular mistake. I ran it across the code base and found 19 more errors. I spot-checked each one manually, and after verifying they were all real errors, added each of them to the bug tracking system.
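A minimal sketch of that kind of detector, in Python rather than whatever the original script used, might look like the following. The SetWindowRange(min, max) API name and the argument-name heuristic are invented for illustration; they are not the actual API or tool from this story.

```python
# Hypothetical detector for calls to SetWindowRange(min, max) whose
# arguments appear to be swapped. API name and heuristic are illustrative.
import re
import sys

# Matches a call with exactly two simple (comma- and paren-free) arguments.
CALL = re.compile(r"\bSetWindowRange\s*\(\s*([^,()]+?)\s*,\s*([^,()]+?)\s*\)")


def looks_swapped(first_arg: str, second_arg: str) -> bool:
    # Crude heuristic: the first argument looks like a "max" value while
    # the second looks like a "min" value.
    return "max" in first_arg.lower() and "min" in second_arg.lower()


def scan(path: str) -> None:
    with open(path, errors="replace") as src:
        for lineno, line in enumerate(src, start=1):
            for match in CALL.finditer(line):
                if looks_swapped(match.group(1), match.group(2)):
                    print(f"{path}:{lineno}: possible swapped arguments: {line.strip()}")


if __name__ == "__main__":
    # Usage: python swap_check.py file1.c file2.c ...
    for source_file in sys.argv[1:]:
        scan(source_file)
```

A real check-in gate would want an actual parser rather than a regular expression, but a heuristic like this is enough to surface candidates for manual review, which is how the extra bugs above were confirmed.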

Finally, I was about to go home. But as I was leaving, one of the developers on the team stopped by to ask how I had found the bugs I’d just entered. I told him the story above, and he suggested I add the tool to the check-in suite (along with several other static analysis tools) so that developers could catch this error before checking in. I sat back down, we reviewed the code, made a few tweaks, and I added the tool to the check-in system.

Over the course of several hours, my role changed from testing and investigating the product, to analysis and debugging, to tool development, and finally to early detection and prevention. The changes were fluid.

On Twitter, a conversation started about detection vs. prevention. Some testers take the stance that those two activities are distinct, and that doing both makes you average (at best) at both. The conversation (although plagued by circular discussion, metaphors, and 140-character limits) centered on the point that you can’t do multiple roles simultaneously. While I agree completely that you cannot do multiple roles simultaneously, I believe (and have proven over 20+ years) that it is certainly possible to move fluidly through different roles. Furthermore, I can say anecdotally that people who can move fluidly through different roles tend to have the most impact on their teams.

To this day, I figure out what needs to be done, and I take on the role necessary to solve my team’s most important problems. Even though I have self-identified as a tester for most of my career, I don’t see a hard line between testing and developing (or detecting and preventing). In fact, that may be one of the roots of conversations like this. For years, I’ve considered the line between development and testing to be a very thin grey line. That’s reflected in my story above, and in many of my writings.

Today, however, I don’t see a line at all. Or – if it’s there, it’s nearly invisible. It’s been a freeing experience for me to consider software creation as an activity where I can make a significant contribution in whatever areas make sense – at any given moment.

Sure – there are places where develop-then-test still exists, and this sort of role fluidity is difficult there (but not impossible). But for those of us shipping frequently and making high-quality software for thousands (or millions) of customers, I think locking into roles is a bottleneck.

The key to building a great product is building a great team first. To me, great teams aren’t bound by roles; they’re driven by moving forward. Roles can help define how people contribute to the team, but people can – and should – flow between roles as needed.

(potentially) related posts:
  1. Exploring Test Roles
  2. Activities and Roles
  3. Peering into the white box
Categories: Software Testing

Mastering the Art of The Year-End Performance Review

QA Hates You - Thu, 12/10/2015 - 12:35

The end of the year is upon us, and with it comes the annual review. Before you go into your performance review, you should plan your strategy to make your case and to put your best foot forward to get the best possible result. The following video offers good tips and tricks on how to wow your boss(es) in those reviews.

Categories: Software Testing

Outgoing Email Communications Could Use Some Attention, Too

QA Hates You - Wed, 12/09/2015 - 09:33

A year or so ago, I volunteered to help the Missouri Department of Conservation test its new Web site. The testing is ongoing, and little did I know it was only user experience testing and not testing testing.

But I periodically receive emails like this:


This indicates they’re not double-checking the outgoing emails, either.

Categories: Software Testing

GTAC 2015 Wrap Up

Google Testing Blog - Tue, 12/08/2015 - 13:45
by Michael Klepikov and Lesley Katzen on behalf of the GTAC Committee

The ninth GTAC (Google Test Automation Conference) was held on November 10-11 at the Google Cambridge office, the “Hub” of innovation. The conference was completely packed with presenters and attendees from all over the world, from industry and academia, discussing advances in test automation and the computer science of test engineering, and bringing with them a huge diversity of experiences. Speakers from numerous companies and universities (Applitools, Automattic, Bitbar, Georgia Tech, Google, Indian Institute of Science, Intel, LinkedIn, Lockheed Martin, MIT, Nest, Netflix, OptoFidelity, Splunk, Supersonic, Twitter, Uber, University of Waterloo) spoke on a variety of interesting and cutting-edge test automation topics.


All presentation videos and slides are posted on the Video Recordings and Presentations pages. All videos have professionally transcribed closed captions, and the YouTube descriptions have the slides links. Enjoy and share!

We had over 1,300 applicants and over 200 of those for speaking. Over 250 people filled our venue to capacity, and the live stream had a peak of about 400 concurrent viewers, with about 3,300 total viewing hours.

Our goal in hosting GTAC is to make the conference highly relevant and useful for both attendees and the larger test engineering community as a whole. Our post-conference survey shows that we are close to achieving that goal; thanks to everyone who completed the feedback survey!

  • Our 82 survey respondents were mostly (81%) test-focused professionals with a wide range of 1 to 40 years of experience. 
  • 76% of respondents rated the conference as a whole as above average, with marked satisfaction for the venue, the food (those Diwali treats!), and the breadth and coverage of the talks themselves.


The top five most popular talks were:

  • The Uber Challenge of Cross-Application/Cross-Device Testing (Apple Chow and Bian Jiang) 
  • Your Tests Aren't Flaky (Alister Scott) 
  • Statistical Data Sampling (Celal Ziftci and Ben Greenberg) 
  • Coverage is Not Strongly Correlated with Test Suite Effectiveness (Laura Inozemtseva) 
  • Chrome OS Test Automation Lab (Simran Basi and Chris Sosa).


Our social events also proved to be crowd pleasers; they were a direct response to feedback from GTAC 2014 asking for organized opportunities for socialization among the GTAC attendees.


This isn’t to say there isn’t room for improvement. 11% of respondents expressed frustration with event communications and provided some long, thoughtful suggestions for what we could do to improve next year. Also, many of the long-form comments asked for a better mix of technologies, noting that mobile had a big presence in the talks this year.

If you have any suggestions on how we can improve, please comment on this post, or better yet – fill out the survey, which remains open. Based on feedback from last year urging more transparency in speaker selection, we included an individual outside of Google in the speaker evaluation. Feedback is precious, we take it very seriously, and we will use it to improve next time around.

Thank you to all the speakers, attendees, and online viewers who made this a special event once again. To receive announcements about the next GTAC, currently planned for early 2017, subscribe to the Google Testing Blog.

Categories: Software Testing

My Kind of Career Planning

QA Hates You - Tue, 12/08/2015 - 06:33

A cartoon in Barron’s answers the interview question, “Where do you see yourself in ten years?”

Categories: Software Testing

QA Music: The World We’re In

QA Hates You - Mon, 12/07/2015 - 05:53

“F*cked Up World” by the Pretty Reckless

Go out and improve it. Whether that’s to make it more or less I leave to you.

Categories: Software Testing

Someone Is Unclear On What “Manual” Means

QA Hates You - Fri, 12/04/2015 - 02:35

Job posting for a Manual Test Engineer:

Job duties:

As our Manual Test Engineer, you’ll ensure that our customers have a great experience when they use DataRobot. You will do this by developing and executing comprehensive and robust software validation tests (both automated and manual); including large datasets, advanced features, custom options, heavy usage, etc. You will also manage an external testing team and testing plans to align with our current customer use cases. This is a great opportunity for you if you’re detail oriented and driven to provide excellence within every customer interaction.

It’s a great opportunity to work as a senior QA engineer or manager with an entry-level title and, perhaps, pay.

Categories: Software Testing

I Voted For Willcox

QA Hates You - Thu, 12/03/2015 - 09:32

I visit the St. Louis Post-Dispatch Web site almost daily, and whenever I clear my cookies and cache, the site prompts me to take a survey before I can access the content of an article.

One day, the intra-office rivalry at the marketing department or agency got a little intense, as Willcox tried to prove he was the most popular person on the staff by holding a little popularity contest embedded in the polls.

By the end of the survey, even I was voting for Willcox.

Poor Masheika never stood a chance.

Categories: Software Testing

Oracles from the Inside Out, Part 8: Successful Stumbling

DevelopSense - Michael Bolton - Thu, 11/26/2015 - 17:18
When we’re building a product, despite everyone’s good intentions, we’re never really clear about what we’re building until we try to build some of it, and then study what we’ve built. Even after that, we’re never sure, so to reduce risk, we must keep studying. For economy, let’s group the processes associated with that study—review, […]
Categories: Software Testing

What Happened?

Alan Page - Mon, 11/23/2015 - 19:36

As I approach the half-century mark of existence, I’ve been reflecting on how I’ve ended up where I am…so excuse me while I ramble out loud.

Seriously, how did a music major end up as a high-level software engineer at a company like Microsoft? I have interviewed hundreds of people for Microsoft who, on paper, are technology rock stars, and I (yes, the music major) have had to say no-hire to most of them.

I won’t argue that a lot of it is luck – but sometimes being lucky is just saying yes at the right times and not being afraid of challenges. Yeah, but it’s mostly luck.

I think another part is my weird knack for learning quickly. When I was a musician (I like to consider that I’m still a musician, I just don’t play that often anymore), I was always the one volunteering to pick up doubles (second instruments) as needed, or to fill whatever hole needed filling in order to get into the top groups. Sometimes I would fudge my background if it would help – knowing that I could learn fast enough to not make myself look stupid.

In grad school (yeah, I have a master’s in music too), I flat out told the percussion instructor – who had a packed studio – that I was a percussionist. To be fair, I played snare drum and melodic percussion in drum corps for several summers, but I didn’t have the years of experience of many of my peers. So, as a grad student, I was accepted into the studio, and I kicked ass. In my final performance of the year for the music staff, one of my undergrad professors blew my cover and asked about my clarinet and saxophone playing. I fessed up to my professor that I wasn’t a percussion major as an undergrad, and that I had lied to get into his program. When he asked why, I told him that I thought if I told him the truth, he wouldn’t let me into the percussion program. He said, “You’re right.” And then I nailed a marimba piece I wrote and closed out another A on my master’s transcript.

I have recently discovered Kathy Kolbe’s research and assessments on the conative part of the brain (which works with the cognitive and affective parts of the brain). According to Kolbe, the cognitive brain drives our preferences (which tools like Myers-Briggs or Insights training measure), but conative assessments show how we prefer to actually do things. For grins, I took a Kolbe assessment, and sure enough, my “score” gives me some insight into how I’ve managed to be successful despite myself.

I’m not going to spout off a lot about it, because I take all of these assessments with a grain of salt – but so far, I give this one more credit than Myers-Briggs (which I think is ok) and Insights (which I find silly). I am curious if others have done this assessment before and what they think…

By the time my current product ships, I’ll be hovering around the 21-year mark at Microsoft. Then, like today, I’m sure I’ll still wonder how I got here. I can’t see myself ever stopping to think about it.

And then I’ll find something new to learn and see where the journey takes me…

(potentially) related posts:
  1. Twenty Years…and Change
  2. Twenty-Ten – my best year ever
  3. 2012 Recap
Categories: Software Testing

Let’s Do This!

Alan Page - Thu, 11/19/2015 - 15:47

A lot of people want to see changes happen. Some of those want to make change happen. Whether it’s introducing a new tool to a software team, changing a process, or shifting a culture, many people have tried to make changes happen.

And many of those have failed.

I’ve known “Jeff” for nearly 20 years. He’s super-smart, passionate, and has great ideas. But Jeff gets frustrated about his inability to make changes happen. In nearly every team he’s ever been on, his teammates don’t listen to him. He asks them to do things, and they don’t. He gives them a deadline, and they don’t listen. Jeff is frustrated and doesn’t think he gets enough respect.

I’ve also known “Jane” for many years. Jane is driven to say the least. Unlike Jeff, she doesn’t wait for answers from her peers, she just does (almost) everything herself and deals with any fallout as it happens (and as time allows). It doesn’t always go well, and sometimes she has to backtrack, but progress is progress. Jane enjoys creating chaos and has no problem letting others clean up whatever mess she makes. Jane is convinced that the people around her “just don’t know how to get things done.”

Over the years, I’ve had a chance to give advice to both Jeff and Jane – and I think it’s worked. Jeff has been able to get people to help him, and Jane leaves fewer bodies in the wake of progress.

Jeff – as you may be able to tell – can be a bit of a slow starter. Or sometimes, a non-starter. He once designed an elaborate spreadsheet and sent mail to a large team asking every person to fill in their relevant details. When nobody filled it out, Jeff couldn’t believe the disrespect. I talked with him about his “problem” and asked why he needed the data. His answer made sense, and I could see the value. Next I asked if he already knew any of the data he needed from the team. “Most of it, actually,” he started, “but I don’t want to guess.”

Thirty minutes later, we had filled out the spreadsheet, guessing where necessary, and re-sent the information to the team. In the email, Jeff said, “Here’s the data we’re using, please let me know if you need any corrections.” By the next morning, Jeff had several corrections in his inbox and was able to continue his project with a complete set of data. In my experience, people may stall when asked to do work from scratch, but will gladly fix “mistakes”. Sometimes you just need to kick-start folks a bit to lead them.

Jane needed different advice. Jane is never going to be someone who puts together an elaborate plan before starting. But I was able to talk Jane into taking just a bit of time to define a goal, create a list, an outline, or a set of tasks (or a combination), and share it with a few folks before driving forward. The time impact was either minimal or huge (depending on whether you asked me or Jane), but the impact on her ability to get projects done was massive no matter who you ask. These days, Jane not only gets projects done without leaving bodies in her wake, but she actually receives (and welcomes) help from others on her team.

There are a lot of other ways to screw up leadership opportunities, and countless bits of advice to share to avoid screw-ups. But – the next time you want to make change and it’s not working, take some time to think about whether the problem is really with the people around you…or if the “problem” is you.

(potentially) related posts:
  1. Making Time
  2. Give ‘em What They Need
  3. The Easy Part
Categories: Software Testing

The Value of Merely Imagining a Test – Part 2

Eric Jacobson's Software Testing Blog - Thu, 11/12/2015 - 14:40

I’m a written-test-case hater.  That is to say, in general, I think writing detailed test cases is not a good use of tester time.  A better use is interacting with the product-under-test.

But something occurred to me today:

The value of a detailed test case increases if you don’t perform it and decreases when you do perform it.

  • The increased value comes from mentally walking through the test, which forces you to consider as many details as you can without interacting with the product-under-test.  This is more valuable than doing nothing.
  • The decreased value comes from interacting with the product-under-test, which helps you learn more than the test case itself taught you.

What’s the takeaway?  If an important test is too complicated to perform, we should at least consider writing a detailed test case for it.  If you think you can perform the test, you should consider not writing a detailed test case and instead focusing on the performance and taking notes to capture your learning as it occurs.

Categories: Software Testing

The Value of Merely Imagining a Test – Part 1

Eric Jacobson's Software Testing Blog - Thu, 11/12/2015 - 14:09

An important bug escaped into production this week.  The root cause analysis took us to the usual place: “If we had more test time, we would have caught it.”

I’ve been down this road so many times, I’m beginning to see things differently.  No, even with more test time we probably would not have caught it.  Said bug would have only been caught via a rigorous end-to-end test that would have arguably been several times more expensive than this showstopper production bug will be to fix. 

Our reasonable end-to-end tests include so many fakes (to simulate production) that their net just isn’t big enough.

However, I suspect a mental end-to-end walkthrough, without fakes, may have caught the bug.  And possibly, attention to the “follow-through” may have been sufficient.  The “follow-through” is a term I first heard Microsoft’s famous tester, Michael Hunter, use.  The “follow-through” is what might happen next, per the end state of some test you just performed.

Let’s unpack that:  Pick any test. Let’s say you test a feature that allows a user to add a product to an online store.  You test the hell out of it until you reach a stopping point.  What’s the follow-on test?  The follow-on test is to see what can happen to that product once it has been added to the online store.  You can buy it, you can delete it, you can let it get stale, you can discount it, etc…  I’m thinking nearly every test has several follow-on tests.
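As a rough illustration (not from the original post), here is what a test and a couple of its follow-on tests might look like against a toy store; the Store class and its methods are invented so the sketch is self-contained.

```python
# Follow-on tests: after "add a product", exercise what can happen to it next.
# The Store class below is a hypothetical stand-in for the product under test.
import pytest


class Store:
    """Toy in-memory store standing in for the real online store."""
    def __init__(self):
        self.products = {}

    def add_product(self, sku, price):
        self.products[sku] = {"price": price}

    def discount(self, sku, percent):
        self.products[sku]["price"] *= 1 - percent / 100

    def delete(self, sku):
        del self.products[sku]


def test_add_product():
    # The original test: add a product and stop there.
    store = Store()
    store.add_product("widget-1", price=10.0)
    assert "widget-1" in store.products


def test_follow_on_discount_after_add():
    # Follow-on: what happens to the product once it exists? Discount it.
    store = Store()
    store.add_product("widget-1", price=10.0)
    store.discount("widget-1", percent=25)
    assert store.products["widget-1"]["price"] == pytest.approx(7.5)


def test_follow_on_delete_after_add():
    # Another follow-on: delete the product we just added.
    store = Store()
    store.add_product("widget-1", price=10.0)
    store.delete("widget-1")
    assert "widget-1" not in store.products
```

Each follow-on test starts from the end state of the original test and pushes one step further.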

Categories: Software Testing

Fifty Quick Ideas To Improve Your Retrospectives

The Quest for Software++ - Mon, 11/09/2015 - 01:07

Fifty Quick Ideas To Improve Your Retrospectives by Ben Williams and Tom Roden is now available. The Kindle version is on a special promo price of 99p on Amazon.co.uk, and 99c on Amazon.com, this week only.

This is the third book in the Fifty Quick Ideas series, and contains tips and techniques to help teams enhance and energise their continuous improvement efforts. The ideas are grouped into five sections: preparing for retrospectives, providing a clear focus, adapting the environment, facilitating sessions and re-energising retrospectives (for more info, see the table of contents). The book is mostly aimed at teams that already know the basics of retrospectives, and want to take the next step, but Ben and Tom made sure that there are plenty of tips for various levels of knowledge and experience.

If you don’t have a Kindle account, there are other versions of the book available from Leanpub (though not at the fabulous Kindle discount), and the paper version should appear in most online stores this week.

Eurostar mobile deep dive

Alan Page - Sun, 11/08/2015 - 21:35

I posted slides from my Eurostar Mobile Deep Dive presentation on slideshare.

I had a great time, and hope you find them useful. Let me know if you have questions.

(potentially) related posts:
  1. Eurostar 2012 – Slides
  2. Surface RT and EuroSTAR recap
  3. Mobile Application Quality
Categories: Software Testing

GTAC 2015 is Next Week!

Google Testing Blog - Fri, 11/06/2015 - 19:46
by Anthony Vallone on behalf of the GTAC Committee

The ninth GTAC (Google Test Automation Conference) commences on Tuesday, November 10th, at the Google Cambridge office. You can find the latest details on the conference site, including schedule, speaker profiles, and travel tips.

If you have not been invited to attend in person, you can watch the event live. And if you miss the livestream, we will post slides and videos later.

We have an outstanding speaker lineup this year, and we look forward to seeing you all there or online!

Categories: Software Testing

Worst Presentation Ever

Alan Page - Fri, 11/06/2015 - 06:08

Last night I dreamt about the worst presentation ever. Sometimes I was presenting, sometimes I was watching, but it was frighteningly bad. Fortunately, my keynote this morning went well – and now that it has, I’ll share what happened (including some conscious editing to make sure I cover all of the bases).

It begins…

Moderator: I’d like to introduce Mr. Baad Prezenter. (resume follows)

Speaker: (taps on microphone – “is this on?”) “Good Morning!” (When the speaker doesn’t get the proper volume of answer, he repeats, louder, “Good Morning!” The audience answers louder, and he thinks he’s engaged them now.)

Speaker then re-introduces himself repeating everything the moderator just told the room. After all, it’s important to establish credibility.

Speaker: “I’d like to thank Paul Blatt for telling me about this conference, Suzie Q for providing mentorship…” The list of thank-yous goes on for a few minutes before he thanks the audience for attending his talk (even though many of them wish they hadn’t). Finally, the speaker moves on from the title slide to the agenda slide. He reads each bullet out loud for the audience members who are unable to read. He notices that one of the bullet points is no longer in the presentation and chooses that moment to talk about it anyway.

Minutes pass…

The next slide shows the phonetic pronunciation of the presenter’s main topic along with a dictionary definition. The presenter reads this slide, making sure to emphasize the syllables in the topic. It’s important that the audience know what words mean.

15 minutes in, and finally, the content begins.

The speaker looks surprised by the content on the next slide.

Speaker: “I actually wasn’t going to talk about this, but since the slide is up, I’ll share some thoughts.” The speaker’s thoughts consist of him reading the bullet points on the slide to the audience. His next slide contains a picture of random art.

Speaker: “This is a picture I found on the internet. If you squint at it while I talk about my next topic, you may find that it relates to the topic, but probably not. But I read that presentations need pictures, so I chose this one!”

Speaker spends about 15 minutes rambling. It seems like he’s telling a story, but there’s no story. It’s just random thoughts and opinions. Some audience members wonder what language he’s speaking.

The moderator flashes a card telling him there are 10 minutes left in his presentation.

Speaker: “I’m a little behind, so let’s get going.” Finally, on the next slide are some graphics that look interesting to the audience and information that seems like it would support the topic. But the speaker skips this slide, and several more.

Speaker: “Ooh – that would have been a good one to cover – maybe next time.” Finally, the speaker stops on a slide that looks similar to one of the earlier slides.

Speaker (noticing that this slide is a duplicate): “I think I already talked about this, but it’s important, so I want to cover it.” Speaker reads the bullet points on the slide. At this point he hasn’t turned to face the audience in several minutes.

The next slide has a video of puppies chasing a raccoon that apparently has a lot to do with the topic. Unfortunately, the audio isn’t working, so the speaker stops the presentation and fiddles with the cable for a minute. Finally, he has audio, and restarts the presentation.

From the beginning.

He quickly advances to the video, stopping only once to talk about a slide he almost added that included an Invisible Gorilla, and plays it for the audience. The audience stares blankly at the screen and wonders what drew them to this presentation in the first place.

Finally, the speaker gets to his last slide. It’s the same as the agenda slide, but…the bullet points are in ITALICS. He reads each bullet point again so he can tell them what they learned…or could have learned, thanks them, and sends them to lunch.

Twenty minutes late.

The audience is too dazed and too hungry to fill out evaluation forms, so the speaker fills them out for them.

They loved him.

(potentially) related posts:
  1. Presentation Maturity
  2. Online Presentation: Career Tips for Testers
  3. One down, two to go…
Categories: Software Testing

Scope and Silos

Alan Page - Thu, 11/05/2015 - 08:14

I’ve watched a lot of teams try to be more agile or more adaptive, or just move to a faster shipping cadence. It has taken me a while, but I think I see a pattern, and the hard stuff boils down to two things.

Scope and Silos

Scope

Scope, in this context, is everything that goes into a feature / user story. For many developers in the previous century, this meant getting the thing to compile and sort-of work, and then letting test pound quality into it. That worked fine if you were going to spend months finding and fixing bugs, but if you want to ship every week, you need to understand scope, and figure out a way to deliver smaller pieces that don’t lower customer value.

Scope includes architecture, performance, tests, telemetry, review, analysis, interoperability, compatibility, and many, many other things beyond “sort-of works”. There may not be work to do for all of these items, but if you don’t consider all of them for all stories, you end up with an application where half of the features are incomplete in some way. If half of your application is incomplete, are you ready to ship?

Silos

The second “problem” I see – mostly in teams transitioning from predictive development models – is silos (or strict adherence to team ownership). You can find these teams by asking people what they do. They’ll say, “my team owns shimmery blue menus”, or “I own the front page”. When you have a team full of people who all own an isolated piece of the product, you very likely will have a whole lot of stories “in progress” at once, and you end up with the first problem above.

I’ve frequently told the story of a team I was on that scheduled (what I thought was) a disproportionate number of features for a release in a specific area. When I asked why we were investing so much in that area, I was told that, due to attrition and hiring, the team that owned the area was much larger than the other teams.

If you’re shipping frequently – let’s say once a week – you want every release to have more value to the customer than the previous release. Value can come from stability or performance improvements, or from new features or functionality in the product. On a team of twenty people, delivering twenty new stories is probably not a great idea. Failing to finish any of those stories and delivering no new functionality is worse.

So pick the top 3 (ish) stories, and let the team deliver them together. Forget about who reports to whom, and who owns what. Figure out what’s most important for the customer, and enable the team to deliver value. Everyone may not be involved in delivering a story each release (there’s bound to be fundamental, infrastructure, and other similar work that needs to be done). That’s ok – let the team self-organize and they’ll do the right thing.

In other words, I think there is a lot of improvement to be discovered by defining work better and limiting work in progress. Not rocket science, but often forgotten.

(potentially) related posts:
  1. Silos, Walls, Teams and TV
  2. Working on Lync
  3. In the Middle
Categories: Software Testing

The wrong question: What percentage of tests have you automated?

Dorothy Graham - Sun, 11/01/2015 - 10:24
At a couple of recent conferences, I became aware that people are asking the wrong question with regard to automation. There was an ISTQB survey that asked “How many (what percentage of) test cases do you automate?”. A delegate I talked to after my automation talk at another conference said that her manager wanted to know what percentage of tests were automated; she wasn’t sure how to answer, and she is not alone. It is quite common for managers to ask this question; the reason it is difficult to answer is that it is the wrong question.
Why do people ask this? Probably to get some information about the progress of an automation effort, usually when automation is getting started. This is not unreasonable, but this question is not the right one to ask, because it is based on a number of erroneous assumptions:

Wrong assumption 1) All manual tests should be automated. “What percentage of tests” implies that all existing tests are candidates for automation, and the percentage will measure progress towards the “ideal” goal of 100%.
It assumes that there is a single set of tests, and that some of them are manual and some are automated. Usually this question actually means “What percentage of our existing manual tests are automated?”
But your existing manual tests are not all good candidates for automation – certainly some manual tests can and should be automated, but not all of them!
Examples: if you could automate a “captcha”, the captcha isn’t working, as it’s supposed to tell the difference between a human and a computer. “Do these colours look nice?” and “Is this exactly what a real user would do?” are also poor candidates, as are tests that take too long to automate, such as tests that are not run very often or that are complex to automate.
Wrong assumption 2) Manual tests are the only candidates for automation. “What percentage of tests” also implies that the only tests worth automating are existing manual tests, but this is also incorrect. There are many things that can be done using tools that are impossible or infeasible to do when testing manually.
Examples: additional verification or validation of screen objects – are they in the correct state? When testing manually, you can see what is on the screen, but you may not know its state or whether the state is displaying correctly.
Tests using random inputs and heuristic oracles, which can be generated in large volume and checked automatically.
Wrong assumption 3) A manual test is the same as an automated test. “What percentage of tests” also assumes that a manual test and an automated test are the same - but they are not. A manual test consists of a set of directions for a human being to follow; it may be rather detailed (use customer R Jones), or it could be quite vague (use an existing customer). A manual test is optimised for a human tester. When tests are executed manually, they may vary slightly each time, and this can be both an advantage (may find new bugs) and a disadvantage (inconsistent tests, not exactly repeated each time).
An automated test should be optimized for a computer to run. It should be structured according to good programming principles, with modular scripts that call other scripts. It shouldn’t be one script per test, but each test should use many scripts (most of them shared) and most scripts should be used in many tests. An automated test is executed in exactly the same way each time, and this can be an advantage (repeatability, consistency) and a disadvantage  (won’t find new bugs).
One manual test may be converted into 3, 5, 10 or more automated scripts. Take for example a manual test that starts at the main menu, navigates to a particular screen and does some tests there, then returns to the main menu. And suppose you have a number of similar tests for the same screen, say 10. If you have one script per test, each will do 3 things: navigate to the target area, do tests, navigate back. If the location of the screen changes, all of those tests will need to be changed – a maintenance nightmare (especially if there are a lot more than 10 tests)! Rather, each test should consist of at least 3 scripts: one to navigate to the relevant screen, one (or perhaps many) scripts to perform specific tests, and one script to navigate back to the main menu. Note that the same “go to screen” and “return to main menu” script is used by all of these tests. Then if the screen is re-located, only 2 scripts need to be changed and all the automated tests will still work.
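As a rough sketch of that structure (invented for illustration, not taken from the article), the shared navigation scripts and the per-screen test scripts might look like this in Python; the FakeApp driver and the screen names are hypothetical.

```python
# Modular automated tests: shared navigation scripts plus small test scripts.
# FakeApp is a hypothetical stand-in UI driver so the sketch runs on its own.

class FakeApp:
    """Minimal fake application driver."""
    def __init__(self):
        self.screen = "Main"
        self._results = []

    def select(self, item):
        self.screen = "Main" if item == "Back" else item

    def enter(self, field, value):
        self._results = [value]  # pretend the search found the customer

    def results(self):
        return self._results


def go_to_customer_screen(app):
    # Shared navigation script, reused by every test for this screen.
    app.select("Customers")


def return_to_main_menu(app):
    # Shared navigation script for the return trip, also reused everywhere.
    app.select("Back")


def check_customer_search(app):
    # One of many small "perform the test" scripts for this screen.
    app.enter("search", "R Jones")
    assert "R Jones" in app.results()


def test_customer_search():
    # A test is composed from shared scripts, so if the screen is relocated,
    # only the two navigation scripts change, not every test that uses them.
    app = FakeApp()
    go_to_customer_screen(app)
    check_customer_search(app)
    return_to_main_menu(app)
    assert app.screen == "Main"
```

The point is the structure, not the implementation: one relocated screen means editing two navigation scripts instead of ten (or a hundred) tests.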
But now the question is: how many tests have you automated? Is it the 10 manual tests you started with? Or should you count automated scripts? Then we have at least 12 but maybe 20. Suppose you now find that you can very easily add another 5 tests to your original set, sharing the navigation scripts and 4 of the other scripts. Now you have 15 tests using 13 scripts – how many have you automated? Your new tests never were manual tests, so have you automated 10 tests (of the original set) or 15?
Wrong assumption 4) Progress in automation is linear (like testing). A “what percent completed” measure is fine for an activity that is stable and “monotonic”, for example running sets of tests manually. But when you automate a test, especially at first, you need to put a lot of effort in initially to get the structure right, and the early automated tests can’t reuse anything because nothing has been built yet. Later automated tests can be written / constructed much more quickly than the earlier ones, because there will (should) be a lot of reusable scripts that can just be incorporated into a new automated test. So if your goal is to have, say, 20 tests automated in 2 weeks, after one week you may have automated only 5 of those tests, but the other 15 can easily be automated in week 2. So after week 1 you have automated 25% of the tests, but you have done 50% of the work.
Eventually it should be easier and quicker to add a new automated test than to run that test manually, but it does take a lot of effort to get to that point.
Good progress measures. So if these are all reasons NOT to measure the percentage of manual tests automated, what would be a good automation progress measure instead? Here are three suggestions:
1) Percentage of automatable tests that have been automated. Decide first which tests are suitable for automation, and/or that you want to have as automated tests, and measure the percentage automated compared to that number, having taken out tests that should remain manual and tests that we don’t want to automate now. This can be done for a sprint, or for a longer time frame (or both).
2) EMTE: Equivalent Manual Test Effort. Keep track of how much time a set of automated tests would have taken if they had been run manually. Each time those tests are run (automatically), you “clock up” the equivalent of that manual effort. This shows that automation is running tests now that are no longer run manually, and this number should increase over time as more tests are automated. (A small worked sketch of the first two measures follows this list.)
3) Coverage. With automation, you can run more tests, and therefore test areas of the application that there was never time for when the testing was done manually. This is a partial measure of one aspect of the thoroughness of testing (and has its own pitfalls), but is a useful way to show that automation is now helping to test more of the system.
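Here is a tiny worked sketch of how the first two measures might be tracked; every number and name in it is invented for illustration.

```python
# Invented numbers for illustration only.
automatable_tests = 40          # tests we decided are worth automating
automated_so_far = 25
manual_minutes_per_run = {"smoke": 90, "regression": 240}   # if run by hand
automated_runs_this_week = {"smoke": 10, "regression": 3}

# Measure 1: percentage of automatable tests that have been automated.
percent_automated = 100 * automated_so_far / automatable_tests

# Measure 2: EMTE - manual effort the automated runs would have consumed.
emte_hours = sum(manual_minutes_per_run[suite] * runs
                 for suite, runs in automated_runs_this_week.items()) / 60

print(f"{percent_automated:.0f}% of automatable tests automated")  # 62%
print(f"EMTE this week: {emte_hours:.1f} hours")                   # 27.0 hours
```

Note that the denominator in the first measure is the set of tests you chose to automate, not the full manual suite, which is what distinguishes it from the “wrong question” above.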
Conclusion. So if your manager asks you “What percent of the tests have you automated?”, you need to ask something like: Percent of what? Out of existing tests that could be automated, or that we decide to automate? What about additional tests that would be good to automate that we aren’t doing now? Do you want to know about progress in time towards our automation goal, or literally only a count of tests? The answers will be different, because automated tests are structured differently to manual tests.
It might be a good idea to find out why he or she has asked that question – what is it that they are trying to see? They need to have some visibility for automation progress, and it is up to you to agree something that would be useful and helpful, honest and reasonably easy to measure. Good luck! And let me know how you measure your progress in automation!
Categories: Software Testing

Announcing the GTAC 2015 Agenda

Google Testing Blog - Tue, 10/27/2015 - 17:52
by Anthony Vallone on behalf of the GTAC Committee 

We have completed the selection and confirmation of all speakers and attendees for GTAC 2015. You can find the detailed agenda at: developers.google.com/gtac/2015/schedule.

Thank you to all who submitted proposals!

There is a lot of interest in GTAC once again this year with about 1400 applicants and about 200 of those for speaking. Unfortunately, our venue only seats 250. We will livestream the event as usual, so fret not if you were not selected to attend. Information about the livestream and other details will be posted on the GTAC site soon and announced here.

Categories: Software Testing
