Software Testing

Worst Presentation Ever

Alan Page - Fri, 11/06/2015 - 06:08

Last night I dreamt about the worst presentation ever. Sometimes I was presenting, sometimes I was watching, but it was frighteningly bad. Fortunately, my keynote this morning went well – and now that it has, I’ll share what happened (including some conscious editing to make sure I cover all of the bases).

It begins…

Moderator: I’d like to introduce Mr. Baad Prezenter. (resume follows)

Speaker: (taps on microphone – “is this on?”) “Good Morning!” When the speaker doesn’t get the proper volume of answer, he repeats louder, “Good Morning!” The audience answers louder, and he thinks he’s engaged them now.

Speaker then re-introduces himself repeating everything the moderator just told the room. After all, it’s important to establish credibility.

Speaker: “I’d like to thank Paul Blatt for telling me about this conference, Suzie Q for providing mentorship…” The list of thank-yous goes on for a few minutes before he thanks the audience for attending his talk (even though many of them wish they hadn’t). Finally, the speaker moves on from the title slide to the agenda slide. He reads each bullet out loud for the audience members who are unable to read. He notices that one of the bullet points is no longer in the presentation and chooses that moment to talk about it anyway.

minutes pass…

The next slide shows the phonetic pronunciation of the presenter’s main topic along with a dictionary definition. The presenter reads this slide, making sure to emphasize the syllables in the topic. It’s important that the audience know what words mean.

15 minutes in, and finally, the content begins.

The speaker looks surprised by the content on the next slide.

Speaker: “I actually wasn’t going to talk about this, but since the slide is up, I’ll share some thoughts.” The speaker’s thoughts consist of him reading the bullet points on the slide to the audience. His next slide contains a picture of random art.

Speaker: “This is a picture I found on the internet. If you squint at it while I talk about my next topic you may find that it relates to the topic, but probably not. But I read that presentations need pictures, so I chose this one!”

Speaker spends about 15 minutes rambling. It seems like he’s telling a story, but there’s no story. It’s just random thoughts and opinions. Some audience members wonder what language he’s speaking.

The moderator flashes a card telling him there are 10 minutes left in his presentation.

Speaker: “I’m a little behind, so let’s get going.” Finally, on the next slide are some graphics that look interesting to the audience and information that seems like it would support the topic. But the speaker skips this slide, and several more.

Speaker: “Ooh – that would have been a good one to cover – maybe next time.” Finally, the speaker stops on a slide that looks similar to one of the earlier slides.

Speaker (noticing that this slide is a duplicate): “I think I already talked about this, but it’s important, so I want to cover it.” Speaker reads the bullet points on the slide. At this point he hasn’t turned to face the audience in several minutes.

The next slide has a video of puppies chasing a raccoon that apparently has a lot to do with the topic. Unfortunately, the audio isn’t working, so the speaker stops the presentation and fiddles with the cable for a minute. Finally, he has audio, and restarts the presentation.

From the beginning.

He quickly advances to the video, stopping only once to talk about a slide he almost added (one that included an Invisible Gorilla), and plays it for the audience. The audience stares blankly at the screen and wonders what drew them to this presentation in the first place.

Finally, the speaker gets to his last slide. It’s the same as the agenda slide, but…the bullet points are in ITALICS. He reads each bullet point again so he can tell them what they learned…or could have learned, thanks them, and sends them to lunch.

Twenty minutes late.

The audience is too dazed, and too hungry, to fill out evaluation forms, so the speaker fills them out for them.

They loved him.

Categories: Software Testing

Scope and Silos

Alan Page - Thu, 11/05/2015 - 08:14

I’ve watched a lot of teams try to be more agile or more adaptive, or just move to faster shipping cadence. It has taken me a while, but I think I see a pattern, and the hard stuff boils down to two things.

Scope and Silos

Scope

Scope, in this context, is everything that goes into a feature / user story. For many developers in the previous century, this meant getting the thing to compile and sort-of work, and then letting test pound quality into it. That worked fine if you were going to spend months finding and fixing bugs, but if you want to ship every week, you need to understand scope, and figure out a way to deliver smaller pieces that don’t lower customer value.

Scope includes architecture, performance, tests, telemetry, review, analysis, interoperability, compatibility, and many, many other things beyond “sort-of works”. There may not be work to do for all of these items, but if you don’t consider all of them for all stories, you end up with an application where half of the features are incomplete in some way. If half of your application is incomplete, are you ready to ship?
Silos

The second “problem” I see – mostly in teams transitioning from predictive development models – is silos (or strict adherence to team ownership). You can find these teams by asking people what they do. They’ll say, “my team owns shimmery blue menus”, or “I own the front page”. When you have a team full of people who all own an isolated piece of the product, you will very likely have a whole lot of stories “in progress” at once, and you end up with the first problem above.

I’ve frequently told the story of a team I was on that scheduled (what I thought was) a disproportionate number of features for a release in a specific area. When I asked why we were investing so much in that area, I was told that, due to attrition and hiring, the team that owned the area was much larger than the other teams.

If you’re shipping frequently – let’s say once a week, you want every release to have more value to the customer than the previous release. Value can come from stability or performance improvements, or from new features or functionality in the product. On a team of twenty people, delivering twenty new stories is probably not a great idea. Failing to finish any of those stories and delivering no new functionality is worse.

So pick the top 3 (ish) stories, and let the team deliver them together. Forget about who reports to whom, and who owns what. Figure out what’s most important for the customer, and enable the team to deliver value. Everyone may not be involved in delivering a story each release (there’s bound to be fundamental, infrastructure, and other similar work that needs to be done). That’s ok – let the team self-organize and they’ll do the right thing.

In other words, I think there’s a lot of improvement to be discovered by defining work better and limiting work in progress. Not rocket science, but often forgotten.


The wrong question: What percentage of tests have you automated?

Dorothy Graham - Sun, 11/01/2015 - 10:24
At a couple of recent conferences, I became aware that people are asking the wrong question with regard to automation. There was an ISTQB survey that asked “How many (what percentage of) test cases do you automate?”. A delegate I talked with after my talk on automation at another conference said that her manager wanted to know what percentage of tests were automated; she wasn’t sure how to answer, and she is not alone. It is quite common for managers to ask this question; the reason it is difficult to answer is that it is the wrong question.
Why do people ask this? Probably to get some information about the progress of an automation effort, usually when automation is getting started. This is not unreasonable, but this question is not the right one to ask, because it is based on a number of erroneous assumptions:

Wrong assumption 1)  All manual tests should be automated. “What percentage of tests” implies that all existing tests are candidates for automation, and the percentage will measure progress towards the “ideal” goal of 100%.
It assumes that there is a single set of tests, and that some of them are manual and some are automated. Usually this question actually means “What percentage of our existing manual tests are automated?”
But your existing manual tests are not all good candidates for automation – certainly some manual tests can and should be automated, but not all of them!
Examples: if you could automate a “captcha”, then the “captcha” isn’t working, as it’s supposed to tell the difference between a human and a computer. Subjective tests such as “Do these colours look nice?” or “Is this exactly what a real user would do?” And tests that take too long to automate, such as tests that are not run very often, or those that are complex to automate.
Wrong assumption 2) Manual tests are the only candidates for automation. “What percentage of tests” also implies that the only tests worth automating are existing manual tests, but this is also incorrect. There are many things that can be done using tools that are impossible or infeasible to do when testing manually.
Examples: additional verification or validation of screen objects – are they in the correct state? When testing manually, you can see what is on the screen, but you may not know its state or whether the state is displaying correctly.
Tests using random inputs and heuristic oracles, which can be generated in large volume and checked automatically.
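This kind of test is easy to sketch. In the illustrative example below (the function under test and all names are invented for the sketch), a sorting routine is fed thousands of random inputs and checked against property-style heuristic oracles rather than hand-written expected results:

```python
import random
from collections import Counter

def sort_under_test(xs):
    """Stand-in for the real routine being tested; any function with
    checkable properties would do."""
    return sorted(xs)

def check_one_random_case(rng):
    xs = [rng.randint(-1000, 1000) for _ in range(rng.randint(0, 50))]
    result = sort_under_test(xs)
    # Heuristic oracle: we can't list expected outputs in advance, but we
    # can check properties that must hold for any input.
    assert all(a <= b for a, b in zip(result, result[1:])), "output must be ordered"
    assert Counter(result) == Counter(xs), "output must be a permutation of the input"

rng = random.Random(42)      # fixed seed keeps any failure reproducible
for _ in range(10_000):      # a volume no manual tester could check
    check_one_random_case(rng)
```

No manual test ever existed for these 10,000 cases, which is exactly why counting “percentage of manual tests automated” misses them.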
Wrong assumption 3) A manual test is the same as an automated test. “What percentage of tests” also assumes that a manual test and an automated test are the same - but they are not. A manual test consists of a set of directions for a human being to follow; it may be rather detailed (use customer R Jones), or it could be quite vague (use an existing customer). A manual test is optimised for a human tester. When tests are executed manually, they may vary slightly each time, and this can be both an advantage (may find new bugs) and a disadvantage (inconsistent tests, not exactly repeated each time).
An automated test should be optimized for a computer to run. It should be structured according to good programming principles, with modular scripts that call other scripts. It shouldn’t be one script per test, but each test should use many scripts (most of them shared) and most scripts should be used in many tests. An automated test is executed in exactly the same way each time, and this can be an advantage (repeatability, consistency) and a disadvantage  (won’t find new bugs).
One manual test may be converted into 3, 5, 10 or more automated scripts. Take for example a manual test that starts at the main menu, navigates to a particular screen and does some tests there, then returns to the main menu. And suppose you have a number of similar tests for the same screen, say 10. If you have one script per test, each will do 3 things: navigate to the target area, do tests, navigate back. If the location of the screen changes, all of those tests will need to be changed – a maintenance nightmare (especially if there are a lot more than 10 tests)! Rather, each test should consist of at least 3 scripts: one to navigate to the relevant screen, one (or perhaps many) scripts to perform specific tests, and one script to navigate back to the main menu. Note that the same “go to screen” and “return to main menu” script is used by all of these tests. Then if the screen is re-located, only 2 scripts need to be changed and all the automated tests will still work.
But now the question is: how many tests have you automated? Is it the 10 manual tests you started with? Or should you count automated scripts? Then we have at least 12 but maybe 20. Suppose you now find that you can very easily add another 5 tests to your original set, sharing the navigation scripts and 4 of the other scripts. Now you have 15 tests using 13 scripts – how many have you automated? Your new tests never were manual tests, so have you automated 10 tests (of the original set) or 15?
Wrong assumption 4) Progress in automation is linear (like testing). A “what percent completed” measure is fine for an activity that is stable and “monotonic”, for example running sets of tests manually. But when you automate a test, especially at first, you need to put in a lot of effort initially to get the structure right, and the early automated tests can’t reuse anything because nothing has been built yet. Later automated tests can be written / constructed much more quickly than the earlier ones, because there will (should) be a lot of reusable scripts that can just be incorporated into a new automated test. So if your goal is to have, say, 20 tests automated in 2 weeks, after one week you may have automated only 5 of those tests, but the other 15 can easily be automated in week 2. So after week 1 you have automated 25% of the tests, but you have done 50% of the work.
Eventually it should be easier and quicker to add a new automated test than to run that test manually, but it does take a lot of effort to get to that point.
Good progress measures. So if these are all reasons NOT to measure the percentage of manual tests automated, what would be a good automation progress measure instead? Here are three suggestions:
1) Percentage of automatable tests that have been automated. Decide first which tests are suitable for automation, and/or that you want to have as automated tests, and measure the percentage automated compared to that number, having taken out tests that should remain manual and tests that we don’t want to automate now. This can be done for a sprint, or for a longer time frame (or both).
2) EMTE: Equivalent Manual Test Effort. Keep track of how much time a set of automated tests would have taken if they had been run manually. Each time those tests are run (automatically), you “clock up” the equivalent of that manual effort. This shows that automation is running tests now that are no longer run manually, and this number should increase over time as more tests are automated.
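EMTE is simple bookkeeping. A minimal sketch, with made-up per-test timings and run counts (none of these figures come from the article):

```python
# Minutes each test would take if a person ran it by hand (illustrative).
manual_minutes = {"login": 5, "checkout": 12, "search": 3}

# How many times each automated test has actually been run so far.
automated_runs = {"login": 40, "checkout": 40, "search": 40}

# EMTE: every automated run "clocks up" the manual time it replaced.
emte_minutes = sum(manual_minutes[name] * runs
                   for name, runs in automated_runs.items())
print(f"EMTE: {emte_minutes} minutes ({emte_minutes / 60:.1f} hours) "
      "of equivalent manual effort clocked up")
```

The total only ever grows, which matches the article’s point: it measures accumulated value delivered by the automation, not a percentage of a moving target.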
3) Coverage. With automation, you can run more tests, and therefore test areas of the application that there was never time for when the testing was done manually. This is a partial measure of one aspect of the thoroughness of testing (and has its own pitfalls), but is a useful way to show that automation is now helping to test more of the system.
Conclusion. So if your manager asks you “What percentage of the tests have you automated?” you need to ask something like: Percentage of what? Of the existing tests that could be automated, or of those we decide to automate? What about additional tests that would be good to automate that we aren’t running now? Do you want to know about progress in time towards our automation goal, or literally only a count of tests? The two will differ, because automated tests are structured differently to manual tests.
It might be a good idea to find out why he or she has asked that question – what is it that they are trying to see? They need to have some visibility for automation progress, and it is up to you to agree something that would be useful and helpful, honest and reasonably easy to measure. Good luck! And let me know how you measure your progress in automation!

Announcing the GTAC 2015 Agenda

Google Testing Blog - Tue, 10/27/2015 - 17:52
by Anthony Vallone on behalf of the GTAC Committee 

We have completed the selection and confirmation of all speakers and attendees for GTAC 2015. You can find the detailed agenda at: developers.google.com/gtac/2015/schedule.

Thank you to all who submitted proposals!

There is a lot of interest in GTAC once again this year with about 1400 applicants and about 200 of those for speaking. Unfortunately, our venue only seats 250. We will livestream the event as usual, so fret not if you were not selected to attend. Information about the livestream and other details will be posted on the GTAC site soon and announced here.


QA Music: A Tale of Boundary Analysis

QA Hates You - Mon, 10/26/2015 - 04:49

Five Finger Death Punch, “Jekyll and Hyde”
