Alan Page

notes and rants about testing and quality from alan page

“Why the UI?”

Mon, 06/29/2015 - 16:17

Readers of my blog know my stance on UI automation. But since I’ve forgotten my StickyMinds password, and the answer is longer than 140 characters, I’m responding here.

This article from Justin Rohrman talks about the coolness of Selenium for UI testing. In a paragraph called “Why the UI,” Justin wrote:

The API and everything below that will give you a feel for code quality and some basic functionality. Testing the UI will help you know things from a different perspective: the user’s.

I like everything else in the article, but that second sentence kills me. Writing automated tests for the UI is as close to a user perspective as I am to the moon (I’m only on the 20th floor). I’m going to do Justin a favor and rewrite that paragraph for him here. Justin – if you read this, feel free to copy and paste the edit.

…some basic functionality. Testing the UI is difficult and prone to error, and automation can never, ever in a million years replace, replicate, or mimic a real user’s interaction with the software. However, sometimes it’s convenient – and often necessary – to write UI automation for web pages, and in cases where that happens, Selenium is the obvious choice.

Justin – your work is good – I just disagree (a LOT) with the trailing sentence of the paragraph in question.

Back to work for me…

(potentially) related posts:
  1. <sigh> Automation…again
  2. It’s all just testing
  3. Test Design for Automation
Categories: Software Testing

Star Canada

Wed, 06/24/2015 - 17:54

It’s been a long time since I have had to talk so much, but I had a great time, and met some great people.

As promised (to many people in my talks), here are the links to my presentations.

(potentially) related posts:
  1. One down, two to go…
  2. Twinkle Twinkle I’m back from STAR
  3. Upcoming stuff
Categories: Software Testing

Building Quality

Fri, 06/12/2015 - 11:51

I mentioned in my last post that I have a new job at Microsoft (and I discussed it a bit more on the last AB Testing). During the interviews for the job, I talked a lot about quality. I used the agile quadrants as one example of how a team builds quality software (including my roles in each of the quadrants), but I also talked about quality software coming from a pyramid of activities and processes. I’ve been dwelling on the model for the last week or so, and wanted to share it for comments and feedback…or to just brain-dump the idea.

Processes / Practice / Culture

The base of software quality (and my pyramid) is in the craftsmanship and approach of the team. Do they care about good unit testing and code reviews, or do they check in code willy-nilly? Do they take pride in having a working product every day, or does the build fall on the floor for days on end? The base of the pyramid is critical for making quality software – but on established teams it can be the most difficult thing to change.

Code Quality (correctness)

An extension of PP&C above is code correctness. This is a more granular look specifically at code quality and code correctness. This includes attention to architecture, code design, use of analysis tools, programming tools, and overall attention to detail in writing great code.

Functional Quality

Unit tests, functional tests, integration / acceptance tests, etc. are all part of *product* quality. I italicize because for some reason, some folks think that quality ends here – that if the tests pass, the product is ready for consumers. (un?)Fortunately, readers of this blog know better, so I’ll save the soapbox rant for another day. However, a robust and trustworthy set of tests that range from the unit level to integration and acceptance tests is a critical part of building software quality.

Automate Everything

There are some folks in software in the “Automate Everything” camp. A lot of testers don’t like this camp, because they think it will take away their job. Whatever.

As far as I can tell from my limited research on this camp, Automate Everything means automate all of the unit, functional, and integration tests…and maybe a chunk of the performance and reliability tests. For some definitions of “Everything”, I agree. Absolutely automate all of this stuff, and let (make) the developer of the code under test do it. The tester’s mind is much better put to use higher up the pyramid.
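To be concrete about the kind of developer-owned test I mean, here’s a minimal sketch (the function and its behavior are hypothetical – the point is that the person who wrote the code writes and runs checks like these on every change):

```python
import unittest


def apply_discount(price, percent):
    """Hypothetical function under test: apply a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)


class ApplyDiscountTests(unittest.TestCase):
    # Fast, deterministic checks the developer owns and runs on every change.
    def test_typical_discount(self):
        self.assertEqual(apply_discount(100.0, 25), 75.0)

    def test_zero_discount_is_identity(self):
        self.assertEqual(apply_discount(49.99, 0), 49.99)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(10.0, 150)


if __name__ == "__main__":
    unittest.main()
```

Tests like these are cheap to automate and catch regressions mechanically – exactly the layer of the pyramid where automation belongs.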

-Ilities

Performance, reliability, usability, I18N, and other non-functional requirements / ilities are what begin to take your product from something that is functionally correct to something that people may actually want to use. Often, the ilities are ignored or postponed until late in the product cycle, but good software teams will pay a lot of attention to this part of the pyramid throughout the product cycle.

Customer Quality

It doesn’t matter how much you kick ass everywhere else in the pyramid. If the customers don’t like your product, you made a shitty product. It may be a functionally correct masterpiece that passes every test you wrote, but it doesn’t matter if it doesn’t provide value for your customers. Team members can “act like the customer”, be an “advocate for the customer”, or flat out, “be the customer”, but I’ll tell you (for likely the twentieth time on this blog), as a member of the product team, you are not the customer! That said, this is the part of the pyramid where good testers can shine in finding the fit and finish bugs that cause a lot of software to die the death of a thousand paper cuts.

Now, if you do everything else in the pyramid well, you have a better shot at getting lucky at the top, but your best shot at creating a product that customers crave is to get quantitative and qualitative feedback directly from your users. Use data collection to discover how they’re using the product and what errors they’re seeing, ask them questions (in person, or via surveys), monitor Twitter, forums, UserVoice, etc. to see what’s working (and not working), and use the feedback to adapt your product. Get it in their hands, listen to them, and make it better.

More to come as I continue to ponder.

(potentially) related posts:
  1. In Search of Quality
  2. Activities and Roles
  3. Finding Quality
Categories: Software Testing

Twenty Years…and Change

Fri, 06/05/2015 - 09:36

In January of 1995, I began some contract work (testing networking) on the Windows 95 team at Microsoft. Apparently, my work was appreciated, because in late May, I was offered a full time position on the team.

My first official day as a full time Microsoft employee was June 5, 1995.

That was twenty years ago today!

I never (ever!) thought I would be at any company this long. I thought computer software would be a fun thing to do “for a while” – but I didn’t realize how much I’d enjoy creating software, and dealing with all of the technical and non-technical aspects that come with it. I learned a lot – and even though my fiftieth birthday is close enough to see, I’m still learning, and still having fun – and that’s a good thing to have in a job.

I’ve had fourteen managers, and seventeen separate offices. I’ve made stuff work (and screwed stuff up) across a whole bunch of products. I’ve done a ton of testing, entered thousands of bugs, and written code that’s shipped in Windows, Xbox, and more (not bad for a music major who stumbled into software).

In a nice bit of coincidence, my twenty-year mark also is a time of change for me. After two years working on Project Astoria (look it up – it’s really cool stuff), it’s time for me to do something new at Microsoft…something that aligns more with my passions, skills, and experiences – and something that shows what someone with over two decades of software testing experience can do for modern software engineering.

I’ve joined (yet another) v1 product team at Microsoft. Other than a few contract vendors, the team of a hundred or so has no software testers. They hired me to be “the quality guy”. This setup could be bad news in many worlds, but my job is definitely not to try to test everything. Instead, my job is to offer quality assistance, help build a quality culture, assist in exploratory testing, and look at quality holistically across the product. I don’t know if any jobs like this exist elsewhere (inside or outside of Microsoft), but I’m excited (and a bit scared) of the challenge.

More to come as I figure out what I do, and what it means for me as well as everyone else interested in software quality.

(potentially) related posts:
  1. Plus ça change..
  2. My last fifteen (or so) years
  3. Twenty-Ten – my best year ever
Categories: Software Testing

Fog Creek Fun

Thu, 05/28/2015 - 12:30

Whaaa…? Two posts on automation in one week?

Normally, I’d refrain, but for those who missed it on twitter, I recorded an interview with Fog Creek last week on the Abuse and Misuse of Test Automation. It’s short and sweet (and that includes my umms and awws).

(potentially) related posts:
  1. An Angry Weasel Hiatus
  2. Why I Write and Speak
  3. Testing with code
Categories: Software Testing

<sigh> Automation…again

Tue, 05/26/2015 - 10:53

I think this is the first time I’ve blogged about automation since writing…or, to be fair, compiling The A Word.

But yet again, I see questions among testers about the value of automation and whether it will replace testers, etc. For example, this post from Josh Grant asks whether there are similarities between automated trucking and automated testing. Of course, I think most testers will go on (and on) about how much brainpower and critical thinking software testing needs, and how test automation can never replace “real testing”. They’re right, of course, but there’s more to the story.

Software testing isn’t at all unique among professions requiring brain power, creativity, or critical thinking. I challenge you to bingoogle “Knowledge Work” or “Knowledge Worker”, and not see the parallels to software testing in other professions. You know what? Some legal practices can be replaced by automation or by low-cost outsourcing – yet I couldn’t find any articles, blogs, or anything else from lawyers complaining about automation or outsourcing taking away their jobs (disclaimer – I only looked at the first two pages of results on simple searches). Apparently, however, there are “managers” (thousands of them, if I’m extrapolating correctly) who claim that test automation is a process for replacing human testers. Apparently, these managers don’t spend any time on the internet, because I could only find secondhand confirmation of their existence.

At risk of repeating myself (or re-repeating myself…) you should automate the stuff that humans don’t want (or shouldn’t have) to do. Automate the process of adding and deleting 100,000 records; but use your brain to walk through a user workflow. Stop worrying about automation as a replacement for testing, but don’t ignore the value it gives you for accomplishing the complex and mundane.
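To make that division of labor concrete, here’s a trivial sketch (the schema and names are hypothetical, using an in-memory SQLite database as a stand-in): the machine churns through the 100,000 records in seconds, freeing the human brain for the workflow walkthrough.

```python
import sqlite3
import time


def bulk_add_and_delete(n=100_000):
    """Insert and then delete n records, timing the whole run -- the kind
    of repetitive drudgery that belongs to a machine, not a tester."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE records (id INTEGER PRIMARY KEY, payload TEXT)")

    start = time.perf_counter()

    # Bulk insert n rows via a generator, so we never build a big list.
    conn.executemany(
        "INSERT INTO records (payload) VALUES (?)",
        ((f"record-{i}",) for i in range(n)),
    )
    added = conn.execute("SELECT COUNT(*) FROM records").fetchone()[0]

    # Bulk delete, then confirm the table is empty.
    conn.execute("DELETE FROM records")
    remaining = conn.execute("SELECT COUNT(*) FROM records").fetchone()[0]

    elapsed = time.perf_counter() - start
    conn.close()
    return added, remaining, elapsed


if __name__ == "__main__":
    added, remaining, elapsed = bulk_add_and_delete()
    print(f"added {added}, deleted down to {remaining} in {elapsed:.2f}s")
```

The script verifies its own results (counts before and after), which is what makes it trustworthy enough to run unattended.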

(potentially) related posts:
  1. To Automate…?
  2. Test Design for Automation
  3. Last Word on the A Word
Categories: Software Testing

Forwards and Backwards

Tue, 05/05/2015 - 11:53

What a year it’s been so far. I’ve been away from blogging, and I’m not quite sure if I’m back yet, but I expect so…and here’s why.

The project I’m working on at Microsoft is no longer a secret. I’ve never blogged a lot about product specifics, but since a big chunk of my work for the last year has been way off my normal path (Linux, Java, etc.), I just didn’t find a lot to share (not necessarily because I wanted to help keep the project under wraps, but because pushing through n00b questions on tools and language constructs probably isn’t very exciting for my typical audience).

There’s also been a lot of change – but little net change in what I do day to day. For the last several years, my role (which many wouldn’t call testing) has been to figure out what’s not getting done, and make sure it gets done. With Xbox (and Lync before that), that meant helping the test team – either by coaching or mentoring, or developing strategy, or building tools, or asking questions about the product, or whatever needed to be done – by myself, or through others.

That (figuring out what needs to get done) is exactly what I do for my current team. But on this team, those holes and gaps have been mostly about improving our builds, CI, testing, and other parts of our systems that make our build->measure->learn loop more efficient and effective. I help with testing too, but in different ways than in the past.

I’m going to give a keynote at STAR Canada next month where I’ll talk a bit about what I do, and how it’s changed over the last few years. I’ll also try to dispel the silly myths / bad interpretations that MS doesn’t do testing anymore. As I develop that talk, I expect I’ll come up with some good blog fodder to share and to help develop ideas. Crap – now that I think about it, I should have that talk a lot further along than it is. I’d better put some time in on it right now.

If you haven’t already, please check out the AB Testing podcast.

(potentially) related posts:
  1. Let it happen
  2. Filling A Hole
  3. Some Stuff I’ve Learned
Categories: Software Testing