Software Testing

Someone Is Unclear On What “Manual” Means

QA Hates You - 3 hours 44 min ago

Job posting for a Manual Test Engineer:

Job duties:

As our Manual Test Engineer, you’ll ensure that our customers have a great experience when they use DataRobot. You will do this by developing and executing comprehensive and robust software validation tests (both automated and manual); including large datasets, advanced features, custom options, heavy usage, etc. You will also manage an external testing team and testing plans to align with our current customer use cases. This is a great opportunity for you if you’re detail oriented and driven to provide excellence within every customer interaction.

It’s a great opportunity to work as a senior QA engineer or manager with an entry-level title and, perhaps, pay.

Categories: Software Testing

I Voted For Willcox

QA Hates You - Thu, 12/03/2015 - 09:32

I visit the St. Louis Post-Dispatch Web site almost daily, and whenever I clear my cookies and cache, the site prompts me to take a survey before I can access the content of an article.

One day, the intra-office rivalry at the marketing department or agency got a little intense as Willcox tried to prove he was the most popular person on the staff by holding a little popularity contest embedded in the polls.

By the end of the survey, even I was voting for Willcox.

Poor Masheika never stood a chance.

Categories: Software Testing

Oracles from the Inside Out, Part 8: Successful Stumbling

DevelopSense - Michael Bolton - Thu, 11/26/2015 - 17:18
When we’re building a product, despite everyone’s good intentions, we’re never really clear about what we’re building until we try to build some of it, and then study what we’ve built. Even after that, we’re never sure, so to reduce risk, we must keep studying. For economy, let’s group the processes associated with that study—review, […]
Categories: Software Testing

What Happened?

Alan Page - Mon, 11/23/2015 - 19:36

As I approach the half-century mark of existence, I’ve been reflecting on how I’ve ended up where I am…so excuse me while I ramble out loud.

Seriously, how did a music major end up as a high-level software engineer at a company like Microsoft? I have interviewed hundreds of people for Microsoft who, on paper, are technology rock stars, and I (yes, the music major) have had to say no-hire to most of them.

I won’t argue that a lot of it is luck – but sometimes being lucky is just saying yes at the right times and not being afraid of challenges. Yeah, but it’s mostly luck.

I think another part is my weird knack for learning quickly. When I was a musician (I like to consider that I’m still a musician, but I just don’t play that often anymore), I was always the one volunteering to pick up doubles (second instruments) as needed, or to fill whatever hole needed filling in order for me to get into the top groups. Sometimes I would fudge my background if it would help – knowing that I could learn fast enough to not make myself look stupid.

In grad school (yeah, I have a masters in music too), I flat out told the percussion instructor – who had a packed studio – that I was a percussionist. To be fair, I played snare drum and melodic percussion in drum corps for several summers, but I didn’t have the years of experience of many of my peers. So, as a grad student, I was accepted into the studio, and I kicked ass. In my final performance of the year for the music staff, one of my undergrad professors blew my cover and asked about my clarinet and saxophone playing. I fessed up to my professor that I wasn’t a percussion major as an undergrad, and that I lied to get into his program. When he asked why, I told him that I thought if I told him the truth, he wouldn’t let me into the percussion program. He said, “You’re right.” And then I nailed a marimba piece I wrote and closed out another A on my masters transcript.

I have recently discovered Kathy Kolbe’s research and assessments on the conative part of the brain (which works with the cognitive and affective parts of the brain). According to Kolbe, the cognitive brain drives our preferences (which tools like Myers-Briggs or Insights training measure), but conative assessments show how we prefer to actually do things. For grins, I took a Kolbe assessment, and sure enough, my “score” gives me some insights into how I’ve managed to be successful despite myself.

I’m not going to spout off a lot about it, because I take all of these assessments with a grain of salt – but so far, I give this one more credit than Myers-Briggs (which I think is ok), and Insights (which I find silly). I am curious if others have done this assessment before and what they think…

By the time my current product ships, I’ll be hovering around the 21 year mark at Microsoft. Then, like today, I’m sure I’ll still wonder how I got here. I can’t see myself stopping to think about this.

And then I’ll find something new to learn and see where the journey takes me…

(potentially) related posts:
  1. Twenty Years…and Change
  2. Twenty-Ten – my best year ever
  3. 2012 Recap
Categories: Software Testing

Let’s Do This!

Alan Page - Thu, 11/19/2015 - 15:47

A lot of people want to see changes happen. Some of those want to make change happen. Whether it’s introducing a new tool to a software team, changing a process, or shifting a culture, many people have tried to make changes happen.

And many of those have failed.

I’ve known “Jeff” for nearly 20 years. He’s super-smart, passionate, and has great ideas. But Jeff gets frustrated about his inability to make changes happen. In nearly every team he’s ever been on, his teammates don’t listen to him. He asks them to do things, and they don’t. He gives them a deadline, and they don’t listen. Jeff is frustrated and doesn’t think he gets enough respect.

I’ve also known “Jane” for many years. Jane is driven, to say the least. Unlike Jeff, she doesn’t wait for answers from her peers; she just does (almost) everything herself and deals with any fallout as it happens (and as time allows). It doesn’t always go well, and sometimes she has to backtrack, but progress is progress. Jane enjoys creating chaos and has no problem letting others clean up whatever mess she makes. Jane is convinced that the people around her “just don’t know how to get things done.”

Over the years, I’ve had a chance to give advice to both Jeff and Jane – and I think it’s worked. Jeff has been able to get people to help him, and Jane leaves fewer bodies in the wake of progress.

Jeff – as you may be able to tell, can be a bit of a slow starter. Or sometimes, a non-starter. He once designed an elaborate spreadsheet and sent mail to a large team asking every person to fill in their relevant details. When nobody filled it out, Jeff couldn’t believe the disrespect. I talked with him about his “problem”, and asked why he needed the data. His answer made sense, and I could see the value. Next I asked if he knew any of the data he needed from the team. “Most of it, actually”, he started, “but I don’t want to guess”.

Thirty minutes later, we filled out the spreadsheet, guessing where necessary, and re-sent the information to the team. In the email, Jeff said, “Here’s the data we’re using, please let me know if you need any corrections.” By the next morning, Jeff had several corrections in his inbox and was able to continue his project with a complete set of data. In my experience, people may stall when asked to do work from scratch, but will gladly fix “mistakes”. Sometimes, you just need to kick-start folks a bit to lead them.

Jane needed different advice. Jane is never going to be someone who puts together an elaborate plan before starting. But, I was able to talk Jane into taking just a bit of time to define a goal, and then create a list, an outline, or a set of tasks (or a combination), and sharing it with a few folks before driving forward. The time impact was either minimal or huge (depending on whether you asked me, or Jane), but the impact on her ability to get projects done was massive no matter who you ask. These days, Jane not only gets projects done without leaving bodies in her wake, but she actually receives (and welcomes) help from others on her team.

There are a lot of other ways to screw up leadership opportunities, and countless bits of advice to share to avoid screw-ups. But – the next time you want to make change and it’s not working, take some time to think about whether the problem is really with the people around you…or if the “problem” is you.

(potentially) related posts:
  1. Making Time
  2. Give ‘em What They Need
  3. The Easy Part
Categories: Software Testing

The Value of Merely Imagining a Test – Part 2

Eric Jacobson's Software Testing Blog - Thu, 11/12/2015 - 14:40

I’m a written-test-case hater.  That is to say, in general, I think writing detailed test cases is not a good use of tester time.  A better use is interacting with the product-under-test.

But something occurred to me today:

The value of a detailed test case increases if you don’t perform it and decreases when you do perform it.

  • The increased value comes from mentally walking through the test, which forces you to consider as many details as you can without interacting with the product-under-test.  This is more valuable than doing nothing.
  • The decreased value comes from interacting with the product-under-test, which helps you learn more than the test case itself taught you.

What’s the takeaway?  If an important test is too complicated to perform, we should at least consider writing a detailed test case for it.  If you think you can perform the test, you should consider not writing a detailed test case and instead focusing on the performance and taking notes to capture your learning as it occurs.

Categories: Software Testing

The Value of Merely Imagining a Test – Part 1

Eric Jacobson's Software Testing Blog - Thu, 11/12/2015 - 14:09

An important bug escaped into production this week.  The root cause analysis took us to the usual place: “If we had more test time, we would have caught it.”

I’ve been down this road so many times, I’m beginning to see things differently.  No, even with more test time we probably would not have caught it.  Said bug would have only been caught via a rigorous end-to-end test that would have arguably been several times more expensive than this showstopper production bug will be to fix. 

Our reasonable end-to-end tests include so many fakes (to simulate production) that their net just isn’t big enough.

However, I suspect a mental end-to-end walkthrough, without fakes, may have caught the bug.  And possibly, attention to the “follow-through” may have been sufficient.  The “follow-through” is a term I first heard Microsoft’s famous tester, Michael Hunter, use.  The “follow-through” is what might happen next, per the end state of some test you just performed.

Let’s unpack that:  Pick any test, let’s say you test a feature to allow a user to add a product to an online store.  You test the hell out of it until you reach a stopping point.  What’s the follow-on test?  The follow-on test is to see what can happen to that product once it has been added to the online store.  You can buy it, you can delete it, you can let it get stale, you can discount it, etc…  I’m thinking nearly every test has several follow-on tests.
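The follow-through idea is easy to sketch in code. Here is a minimal pytest-style example; OnlineStore and its methods are hypothetical stand-ins for whatever product you are actually testing:

```python
# Hypothetical product API, invented purely for illustration.
class OnlineStore:
    def __init__(self):
        self.products = {}

    def add_product(self, name, price):
        self.products[name] = {"price": price}

    def buy(self, name):
        # Buying removes the product from the store and returns the order.
        return self.products.pop(name)

    def discount(self, name, percent):
        self.products[name]["price"] *= (1 - percent / 100)


def test_add_product():
    # The original test: add a product, verify it exists.
    store = OnlineStore()
    store.add_product("gimlet", 10.0)
    assert "gimlet" in store.products


def test_follow_on_buy():
    # Follow-on test: start from the end state of test_add_product
    # and exercise one thing that can happen to the product next.
    store = OnlineStore()
    store.add_product("gimlet", 10.0)
    order = store.buy("gimlet")
    assert order["price"] == 10.0
    assert "gimlet" not in store.products


def test_follow_on_discount():
    # Another follow-on: discount the product after adding it.
    store = OnlineStore()
    store.add_product("gimlet", 10.0)
    store.discount("gimlet", 20)
    assert store.products["gimlet"]["price"] == 8.0
```

Each follow-on test begins at the end state of the original test (a product exists in the store) and exercises one of the things that can happen to it next.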

Categories: Software Testing

Fifty Quick Ideas To Improve Your Retrospectives

The Quest for Software++ - Mon, 11/09/2015 - 01:07

Fifty Quick Ideas To Improve Your Retrospectives by Ben Williams and Tom Roden is now available. The Kindle version is on a special promo price of 99p (UK) and 99c (US), this week only.

This is the third book in the Fifty Quick Ideas series, and contains tips and techniques to help teams enhance and energise their continuous improvement efforts. The ideas are grouped into five sections: preparing for retrospectives, providing a clear focus, adapting the environment, facilitating sessions and re-energising retrospectives (for more info, see the table of contents). The book is mostly aimed at teams that already know the basics of retrospectives, and want to take the next step, but Ben and Tom made sure that there are plenty of tips for various levels of knowledge and experience.

If you don’t have a Kindle account, there are other versions of the book available from Leanpub (though not at the fabulous Kindle discount), and the paper version should appear in most online stores this week.

Eurostar mobile deep dive

Alan Page - Sun, 11/08/2015 - 21:35

I posted slides from my Eurostar Mobile Deep Dive presentation on slideshare.

I had a great time, and hope you find them useful. Let me know if you have questions.

(potentially) related posts:
  1. Eurostar 2012 – Slides
  2. Surface RT and EuroSTAR recap
  3. Mobile Application Quality
Categories: Software Testing

GTAC 2015 is Next Week!

Google Testing Blog - Fri, 11/06/2015 - 19:46
by Anthony Vallone on behalf of the GTAC Committee

The ninth GTAC (Google Test Automation Conference) commences on Tuesday, November 10th, at the Google Cambridge office. You can find the latest details on the conference site, including schedule, speaker profiles, and travel tips.

If you have not been invited to attend in person, you can watch the event live. And if you miss the livestream, we will post slides and videos later.

We have an outstanding speaker lineup this year, and we look forward to seeing you all there or online!

Categories: Software Testing

Worst Presentation Ever

Alan Page - Fri, 11/06/2015 - 06:08

Last night I dreamt about the worst presentation ever. Sometimes I was presenting, sometimes I was watching, but it was frighteningly bad. Fortunately, my keynote this morning went well – and now that it has, I’ll share what happened (including some conscious editing to make sure I cover all of the bases).

It begins…

Moderator: I’d like to introduce Mr. Baad Prezenter. (resume follows)

Speaker: (taps on microphone – “is this on?”) “Good Morning!” (When the speaker doesn’t get the proper volume of answer, he repeats, louder: “Good Morning!” The audience answers louder, and he thinks he’s engaged them now.)

Speaker then re-introduces himself repeating everything the moderator just told the room. After all, it’s important to establish credibility.

Speaker: “I’d like to thank Paul Blatt for telling me about this conference, Suzie Q for providing mentorship…” The list of thank-you’s goes on for a few minutes before he thanks the audience for attending his talk (even though many of them wish they hadn’t). Finally, the speaker moves on from the title slide to the agenda slide. He reads each bullet out loud for the audience members who are unable to read. He notices that one of the bullet points is no longer in the presentation and chooses that moment to talk about it anyway.

minutes pass…

The next slide shows the phonetic pronunciation of the presenter’s main topic along with a dictionary definition. The presenter reads this slide, making sure to emphasize the syllables in the topic. It’s important that the audience know what words mean.

15 minutes in, and finally, the content begins.

The speaker looks surprised by the content on the next slide.

Speaker: “I actually wasn’t going to talk about this, but since the slide is up, I’ll share some thoughts.” The speaker’s thoughts consist of him reading the bullet points on the slide to the audience. His next slide contains a picture of random art.

Speaker: “This is a picture I found on the internet. If you squint at it while I talk about my next topic you may find that it relates to the topic, but probably not. But I read that presentations need pictures, so I chose this one!”

Speaker spends about 15 minutes rambling. It seems like he’s telling a story, but there’s no story. It’s just random thoughts and opinions. Some audience members wonder what language he’s speaking.

The moderator flashes a card telling him there’s 10 minutes left in his presentation.

Speaker: “I’m a little behind, so let’s get going.” Finally, on the next slide are some graphics that look interesting to the audience and information that seems like it would support the topic. But the speaker skips this slide, and several more.

Speaker: “Ooh – that would have been a good one to cover – maybe next time.” Finally the speaker stops on a slide that looks similar to one of the earlier slides.

Speaker (noticing that this slide is a duplicate): “I think I already talked about this, but it’s important, so I want to cover it.” Speaker reads the bullet points on the slide. At this point he hasn’t turned to face the audience in several minutes.

The next slide has a video of puppies chasing a raccoon that apparently has a lot to do with the topic. Unfortunately, the audio isn’t working, so the speaker stops the presentation and fiddles with the cable for a minute. Finally, he has audio, and restarts the presentation.

From the beginning.

He quickly advances to the video – stopping only once to talk about a slide he almost added, one that included an Invisible Gorilla – and plays it for the audience. The audience stares blankly at the screen and wonders what drew them to this presentation in the first place.

Finally, the speaker gets to his last slide. It’s the same as the agenda slide, but…the bullet points are in ITALICS. He reads each bullet point again so he can tell them what they learned…or could have learned, thanks them, and sends them to lunch.

Twenty minutes late.

The audience is too dazed, and too hungry, to fill out evaluation forms, so the speaker fills them out for them.

They loved him.

(potentially) related posts:
  1. Presentation Maturity
  2. Online Presentation: Career Tips for Testers
  3. One down, two to go…
Categories: Software Testing

Scope and Silos

Alan Page - Thu, 11/05/2015 - 08:14

I’ve watched a lot of teams try to be more agile or more adaptive, or just move to a faster shipping cadence. It has taken me a while, but I think I see a pattern, and the hard stuff boils down to two things.

Scope and Silos


Scope, in this context, is everything that goes into a feature / user story. For many developers in the previous century, this meant getting the thing to compile and sort-of work, and then letting test pound quality into it. That worked fine if you were going to spend months finding and fixing bugs, but if you want to ship every week, you need to understand scope, and figure out a way to deliver smaller pieces that don’t lower customer value.

Scope includes architecture, performance, tests, telemetry, review, analysis, interoperability, compatibility, and many, many other things beyond “sort-of works”. There may not be work to do for all of these items, but if you don’t consider all of them for all stories, you end up with an application where half of the features are incomplete in some way. If half of your application is incomplete, are you ready to ship?

The second “problem” I see – mostly in teams transitioning from predictive development models – is silos (or strict adherence to team ownership). You can find these teams by asking people what they do. They’ll say, “my team owns shimmery blue menus”, or “I own the front page”. When you have a team full of people who all own an isolated piece of the product, you very likely will have a whole lot of stories “in progress” at once, and you end up with the first problem above.

I’ve frequently told the story of a team I was on that scheduled (what I thought was) a disproportionate number of features for a release in a specific area. When I asked why we were investing so much in that area, I was told that due to attrition and hiring, the team that owned the area was much larger than the other teams.

If you’re shipping frequently – let’s say once a week, you want every release to have more value to the customer than the previous release. Value can come from stability or performance improvements, or from new features or functionality in the product. On a team of twenty people, delivering twenty new stories is probably not a great idea. Failing to finish any of those stories and delivering no new functionality is worse.

So pick the top 3 (ish) stories, and let the team deliver them together. Forget about who reports to who, and who owns what. Figure out what’s most important for the customer, and enable the team to deliver value. Everyone may not be involved in delivering a story each release (there’s bound to be fundamental, infrastructure, and other similar work that needs to be done). That’s ok – let the team self-organize and they’ll do the right thing.

In other words, I think there’s a lot of improvement to be discovered by defining work better and limiting work in progress. Not rocket science, but often forgotten.

(potentially) related posts:
  1. Silos, Walls, Teams and TV
  2. Working on Lync
  3. In the Middle
Categories: Software Testing

The wrong question: What percentage of tests have you automated?

Dorothy Graham - Sun, 11/01/2015 - 10:24
At a couple of recent conferences, I became aware that people are asking the wrong question with regard to automation. There was an ISTQB survey that asked “How many (what percentage of) test cases do you automate?”. In talking to a delegate after my talk on automation at another conference, she said that her manager wanted to know what percentage of tests were automated; she wasn’t sure how to answer, and she is not alone. It is quite common for managers to ask this question; the reason it is difficult to answer is because it is the wrong question.
Why do people ask this? Probably to get some information about the progress of an automation effort, usually when automation is getting started. This is not unreasonable, but this question is not the right one to ask, because it is based on a number of erroneous assumptions:

Wrong assumption 1)  All manual tests should be automated. “What percentage of tests” implies that all existing tests are candidates for automation, and the percentage will measure progress towards the “ideal” goal of 100%.
It assumes that there is a single set of tests, and that some of them are manual and some are automated. Usually this question actually means “What percentage of our existing manual tests are automated?”
But your existing manual tests are not all good candidates for automation – certainly some manual tests can and should be automated, but not all of them!
Examples: if you could automate a “captcha”, then the captcha isn’t working, as it’s supposed to tell the difference between a human and a computer. Tests such as “Do these colours look nice?” or “Is this exactly what a real user would do?” need human judgment. And some tests take too long to automate to be worthwhile, such as tests that are not run very often, or those that are complex to automate.
Wrong assumption 2) Manual tests are the only candidates for automation. “What percentage of tests” also implies that the only tests worth automating are existing manual tests, but this is also incorrect. There are many things that can be done using tools that are impossible or infeasible to do when testing manually.
Examples: additional verification or validation of screen objects – are they in the correct state? When testing manually, you can see what is on the screen, but you may not know its state or whether the state is displaying correctly.
Tests using random inputs and heuristic oracles, which can be generated in large volume and checked automatically.
Wrong assumption 3) A manual test is the same as an automated test. “What percentage of tests” also assumes that a manual test and an automated test are the same - but they are not. A manual test consists of a set of directions for a human being to follow; it may be rather detailed (use customer R Jones), or it could be quite vague (use an existing customer). A manual test is optimised for a human tester. When tests are executed manually, they may vary slightly each time, and this can be both an advantage (may find new bugs) and a disadvantage (inconsistent tests, not exactly repeated each time).
An automated test should be optimized for a computer to run. It should be structured according to good programming principles, with modular scripts that call other scripts. It shouldn’t be one script per test, but each test should use many scripts (most of them shared) and most scripts should be used in many tests. An automated test is executed in exactly the same way each time, and this can be an advantage (repeatability, consistency) and a disadvantage  (won’t find new bugs).
One manual test may be converted into 3, 5, 10 or more automated scripts. Take for example a manual test that starts at the main menu, navigates to a particular screen and does some tests there, then returns to the main menu. And suppose you have a number of similar tests for the same screen, say 10. If you have one script per test, each will do 3 things: navigate to the target area, do tests, navigate back. If the location of the screen changes, all of those tests will need to be changed – a maintenance nightmare (especially if there are a lot more than 10 tests)! Rather, each test should consist of at least 3 scripts: one to navigate to the relevant screen, one (or perhaps many) scripts to perform specific tests, and one script to navigate back to the main menu. Note that the same “go to screen” and “return to main menu” script is used by all of these tests. Then if the screen is re-located, only 2 scripts need to be changed and all the automated tests will still work.
But now the question is: how many tests have you automated? Is it the 10 manual tests you started with? Or should you count automated scripts? Then we have at least 12 but maybe 20. Suppose you now find that you can very easily add another 5 tests to your original set, sharing the navigation scripts and 4 of the other scripts. Now you have 15 tests using 13 scripts – how many have you automated? Your new tests never were manual tests, so have you automated 10 tests (of the original set) or 15?
Wrong assumption 4) Progress in automation is linear (like testing). A “what percent completed” measure is fine for an activity that is stable and “monotonic”, for example running sets of tests manually. But when you automate a test, especially at first, you need to put a lot of effort in initially to get the structure right, and the early automated tests can’t reuse anything because nothing has been built yet. Later automated tests can be written / constructed much more quickly than the earlier ones, because there will (should) be a lot of reusable scripts that can just be incorporated into a new automated test. So if your goal is to have say 20 tests automated in 2 weeks, after one week you may have automated only 5 of those tests, but the other 15 can easily be automated in week 2. So after week 1 you have automated 25% of the tests, but you have done 50% of the work.
Eventually it should be easier and quicker to add a new automated test than to run that test manually, but it does take a lot of effort to get to that point.
Good progress measures. So if these are all reasons NOT to measure the percentage of manual tests automated, what would be a good automation progress measure instead? Here are three suggestions:
1) Percentage of automatable tests that have been automated. Decide first which tests are suitable for automation, and/or that you want to have as automated tests, and measure the percentage automated compared to that number, having taken out tests that should remain manual and tests that we don’t want to automate now. This can be done for a sprint, or for a longer time frame (or both).
2) EMTE (Equivalent Manual Test Effort): Keep track of how much time a set of automated tests would have taken if they had been run manually. Each time those tests are run (automatically), you “clock up” the equivalent of that manual effort. This shows that automation is running tests now that are no longer run manually, and this number should increase over time as more tests are automated.
3) Coverage: With automation, you can run more tests, and therefore test areas of the application that there was never time for when the testing was done manually. This is a partial measure of one aspect of the thoroughness of testing (and has its own pitfalls), but is a useful way to show that automation is now helping to test more of the system.
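As a rough sketch of how the first two measures might be computed (all names and numbers here are invented for illustration):

```python
def percent_automatable_automated(automated, automatable):
    # Measure 1: progress against the tests we chose to automate,
    # not against every existing manual test.
    return 100.0 * automated / automatable


def emte(runs):
    # Measure 2: Equivalent Manual Test Effort. Each automated run
    # "clocks up" the time the tests would have taken manually.
    # `runs` maps a suite name to (manual_minutes, times_run).
    return sum(minutes * times for minutes, times in runs.values())


# Suppose we decided 40 of 100 existing manual tests are worth
# automating, and 25 of those are done so far:
progress = percent_automatable_automated(25, 40)  # 62.5 percent

# A nightly suite worth 90 manual minutes has run 20 times, and a
# smoke suite worth 15 minutes has run 100 times:
effort = emte({"nightly": (90, 20), "smoke": (15, 100)})  # 3300 minutes
```

Note that measure 1 reports 62.5% even though 75 of the 100 original manual tests will never be automated; choosing the denominator deliberately is the whole point.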
Conclusion. So if your manager asks you “What percent of the tests have you automated?”, you need to ask something like: percent of what? Out of existing tests that could be automated, or that we decide to automate? What about additional tests that would be good to automate that we aren’t doing now? Do you want to know about progress in time towards our automation goal, or literally only the number of tests? The answers will differ, because automated tests are structured differently to manual tests.
It might be a good idea to find out why he or she has asked that question – what is it that they are trying to see? They need to have some visibility for automation progress, and it is up to you to agree something that would be useful and helpful, honest and reasonably easy to measure. Good luck! And let me know how you measure your progress in automation!
Categories: Software Testing

Announcing the GTAC 2015 Agenda

Google Testing Blog - Tue, 10/27/2015 - 17:52
by Anthony Vallone on behalf of the GTAC Committee 

We have completed the selection and confirmation of all speakers and attendees for GTAC 2015. You can find the detailed agenda at:

Thank you to all who submitted proposals!

There is a lot of interest in GTAC once again this year with about 1400 applicants and about 200 of those for speaking. Unfortunately, our venue only seats 250. We will livestream the event as usual, so fret not if you were not selected to attend. Information about the livestream and other details will be posted on the GTAC site soon and announced here.

Categories: Software Testing

QA Music: A Tale of Boundary Analysis

QA Hates You - Mon, 10/26/2015 - 04:49

Five Finger Death Punch, “Jekyll and Hyde”

Categories: Software Testing

QA Music: It’s Monday–Apocalypse Is On Its Way

QA Hates You - Mon, 10/19/2015 - 04:48

Theory of a Deadman, “Savages”

Categories: Software Testing

Prezo Prep

Alan Page - Wed, 10/14/2015 - 13:42

This post is completely inspired by Trish Khoo’s post on Preparing for Your Presentation. I was going to add a comment, but it got too long, so it’s becoming a blog post. Go ahead and read that first – it covers way more than I’m covering here, and it’s a well-written article.

Trish suggests starting early, and I can’t stress that enough – but there’s some flavor to the timeline that has worked consistently well for me. As soon as I know I’m doing the presentation, I make an outline. Sometimes I make the outline in powerpoint, but usually I start with Word (or notepad, or onenote). This helps me get my story together and gives me an idea of what I want to say. I’ll add notes on what I want to research. I’ll read through it a dozen times or so over a few days and add or edit as needed.

And then I’ll ignore it for at least a few weeks.

I don’t know if I can recommend this for everyone, but (assuming I’ve started early enough), during the few weeks away, my brain has subconsciously worked out a lot of the details. Whenever I come back to the outline, I immediately see obvious edits and areas to clean up. Usually this is the time I shove the outline into powerpoint and make a skeleton slide deck.

Trish also suggests nailing your intro and having one big message. For me, these are the same thing. At this point, I spend some time thinking about “the one thing” I want to get across. I not only figure out how I’ll work the message into my intro, but I’ll figure out how I repeat the message throughout the presentation. This also means that I usually find really “cool” material that I remove from the presentation because I can’t make a strong connection from the material to the message. It’s a tough decision, but it helps make the presentation clear.

The last thing I do is turn the text / bullet points from my slides into speaking notes and make the slides more about the concepts and ideas I’m talking about. I may use screen shots or stock photos – it all just depends. One word of warning though – I see a lot of people pull keywords from their slides into a search engine and grab whatever photo shows up. Beyond potential copyright issues, often the picture has nothing to do with the actual subject (e.g. if you’re talking about working with Red Hat Linux, by all means, don’t show a picture of a random red hat as your bullet-point-replacement).

From there, I tweak, tweak, and then tweak a little bit more. I know it drives conference organizers crazy, but the “draft” I deliver to them a month or two ahead of the conference is rarely what I present at the conference. Sometimes parts of the presentation just don’t “click” until late. Of course, it’s possible to over-tweak, but I’d much rather give the best presentation possible for the audience than match what I temporarily thought was complete a month or two ago.

One more thing

The only thing not on Trish’s list that I want to add is that it’s really important to check out the room first. Try to watch at least one talk in the room you’re going to present in to get an idea of size (if it’s a long narrow room, take time to increase font size), or noises (so you won’t be as surprised if the kitchen is next door). Figure out in advance if you can put your laptop where you want, how you’ll pull off interactions, etc. As a last resort, if you can’t see another talk in the room, get there early, get set up, and get as much of a feel for the room as you can.

One more more thing

To be fair: for keynote presentations, tutorials, and the like, I will always apply the above steps. It works for me, and I see no reason to change it. I think (hope?) it’s a reason I’m invited back to many conferences.

However, I give a lot of smaller talks (meetups, q&a sessions, etc.), and for those I prepare on a much lighter level – usually because I’m speaking on experiences or I’m confident I can wing it on the subject matter. It took a long time before I could pull this off, but I’m ok doing it now for some types of events.

(potentially) related posts:
  1. Presentation Maturity
  2. Assorted Stories, Part 1 – illness
  3. ET and Me
Categories: Software Testing

Dangerous, Some Interfaces Are

QA Hates You - Wed, 10/14/2015 - 09:18

I’ve seen some computer interfaces that suffer from this problem:

This is dangerous, you see, because the text that says “push here” is on the glass, which is not really where you want to push; the user should push on the bar.

So when you’re testing, make sure to evaluate the design of the interface to ensure that the instructions are clear and that the text appropriately indicates what the user should do.

Not just close enough.

Categories: Software Testing

WordPress Passes Judgment

QA Hates You - Tue, 10/13/2015 - 09:38

New reader Aron notes the error on this very site:

Perhaps WordPress is passing judgment on gimlet. More likely, though, the template has a bug in it. I’ll go looking for it eventually.

Categories: Software Testing

Oracles from the Inside Out, Part 7: References as Checks

DevelopSense - Michael Bolton - Mon, 10/12/2015 - 12:01
Over the last few blog posts, I’ve been focusing on oracles—means by which we could recognize a problem when we encounter it during testing. So far, I’ve talked about feelings and private mental models within internal, tacit experiences; consistency heuristics by which we can make inferences that help us to articulate why we think and […]
Categories: Software Testing