Alan Page

notes and rants about testing and quality from alan page

An Angry Weasel Hiatus

Sun, 06/15/2014 - 21:24

I mentioned this on twitter a few times, but should probably mention it here for completeness.

I’m taking a break.

I’m taking a break from blogging, twitter, and most of all, my day job. Beyond July 4, 2014, I’ll be absent from the Microsoft campus for a few months – and most likely from social media as well. While I can guarantee my absence from MS, I may share on twitter…we’ll just have to see.

The backstory is that Microsoft offers an award – a “sabbatical” – for employees of a certain career stage and career history. I received my award nine years ago but haven’t used it until now…so I’m finally taking ~9 weeks off, beginning just after the 4th of July and concluding in early September.

Now – hanging out at home for two months would be fun…but that’s not going to cut it for this long-awaited break. Instead, I’ll be in the south of France (Provence) for most of my break before spending a week in Japan on my way back to reality.

I probably won’t post again until September. I’m sure I have many posts waiting on the other side.

-AW


Alan and Brent are /still/ talking…

Mon, 05/19/2014 - 13:32

In case you missed it, Brent and I are still recording testing podcasts. I stopped posting an announcement here for every episode, but if you want to subscribe, you have a few choices.

  1. You can Subscribe via RSS
  2. You can Subscribe via iTunes
  3. You can bookmark the podcast blog
  4. You can follow me on twitter and look for the announcements (usually every other Monday)

And for those of you too busy to click through, here are the episodes so far:

  1. AB Testing – Episode 1
  2. AB Testing – Episode 2
  3. AB Testing – Episode 3
  4. AB Testing – Episode 4
  5. AB Testing – Episode 5

Stop, If You Want To…

Fri, 05/09/2014 - 19:45

Well – that was a fun post. The dust hasn’t quite settled, but a follow-up is definitely in order.

First, some context. I was committed to giving a lightning talk as part of STAR East’s “Lightning Strikes the Keynotes” hour. I purposely didn’t pick a topic before I left, and figured I would come up with something while I was there. On Wednesday morning (the day of the lightning talks), I was out for a run thinking about the conference when I had the idea to talk about testers writing fewer automated tests. Programmers should be writing more tests (ok, “checks” for those of you who insist), and just about every mention of automation I heard at the conference was about the challenges of automating end-to-end scenarios – and the equal challenges of maintaining that automation – so it seemed like a topic worth exploring. I could have called the post, Testers Should Stop Writing Some of Their Automation Because Programmers Should Do Some of it, and Some of Your Automation Isn’t Very Good Anyway and it is Getting in the Way of Testing You Should Be Doing, but I chose a more controversial (and shorter) title instead. I purposely left out the types of automation (and other coding activities) that testers still should do, because I was afraid that if I did, it would distract from the two main points (programmers need to write a lot more tests, and testers spend too much time writing and maintaining bad automation).

Also – I only had five minutes to talk about it, so I stuck to the main points:

  1. Developers need to own more testing.
  2. Testers need to stop wasting time writing ineffective automation.

But the fun really began with the comments. I’ve never had more fun reading blog comments, twitter, and my mailbox (for some reason a bunch of people prefer to email me directly with comments rather than comment publicly). I thought I’d respond to a few areas where I got a lot of questions.

Does this work for both services and “thick” clients?

Although I didn’t call it out in my post, this sort of approach works really well with services. But it can work with thick clients too – you just need to be a little more careful with your deployment, since rollback and monitoring won’t happen in real time like they do for a service. I think mobile apps are a great example of where you may run experiments with a limited number of users, but Windows (or Mac) apps could follow the model as well. For always (or often) connected devices, I see no reason not to push updates – of course, these updates should probably go through a bit more testing than services before being pushed, because if something is broken, getting back to a safe state will take a bit of work.
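As a sketch of what “a limited number of users” can look like in practice – the function name, bucketing scheme, and 5% threshold below are my own illustration, not any particular product’s implementation:

    import hashlib

    def in_experiment(user_id: str, experiment: str, percent: int) -> bool:
        # Hash user+experiment so each user lands in a stable 0-99 bucket;
        # the same user always gets the same answer for the same experiment.
        digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
        return int(digest[:8], 16) % 100 < percent

    # Expose the new code path to 5% of users first; widen the rollout only
    # if the monitoring data (errors, crashes, completion rates) looks healthy.
    use_new_install_flow = in_experiment("user-12345", "new-install-flow", percent=5)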

The important thing to note for those still in disbelief is that deploying test items to production is done all of the time. eBay does it, Amazon does it, Netflix does it (I could go on, but believe me that it’s done a lot). I don’t have a link, but in the comments, Noah Sussman tells me that NASA does it.

What Do Testers Do? Where did my cheese go? You are annoying!

The fear of cheese moving is strong (“If developers do functional testing, what will I do?”). There’s a lot of testing activity left to do (even when you take away writing developers’ tests for them and wasting time on unneeded automation). Stress and performance suites and monitoring tools (for example) should give the coding testers on the team plenty of work to do. Data analysis is also necessary if you’re gathering data from customers. And thanks to Roberto for pointing out that sanity-checking UI changes, or testing for accessibility, localization, or color changes, all could use an onsite tester (or sometimes some code) to help.

And honestly, now that I’ve made you think about it, there are a few places where testers writing automation is useful. But it’s about time that testers stopped trying to write automation for cases that shouldn’t be automated. Want to try logging in and out 500 times? Automate that (ideally NOT at the GUI level) – go for it. Want to automate the end-to-end scenario of setting up an account and logging in? Please don’t bother. Instead, just add some monitoring code that lets you know if login is failing (something like the sketch below) and save yourself some frustration.
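If you’re wondering what that monitoring code might look like, here’s a rough sketch – the log strings and the 5% alert threshold are made up for illustration:

    from collections import Counter

    def login_failure_rate(log_lines):
        # Tally login attempts and failures from product logs.
        counts = Counter()
        for line in log_lines:
            if "Login started" in line:
                counts["attempts"] += 1
            elif "ERROR" in line and "Login" in line:
                counts["failures"] += 1
        if not counts["attempts"]:
            return 0.0
        return counts["failures"] / counts["attempts"]

    def check_login_health(log_lines, threshold=0.05):
        # Page somebody when more than 5% of login attempts are failing.
        rate = login_failure_rate(log_lines)
        if rate > threshold:
            print(f"ALERT: login failing for {rate:.1%} of attempts")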

One other thing – the point about developers owning more testing – that one is equally true for services and clients. It doesn’t make sense to me at all to have a separate team verify functional correctness. It’s not “too hard” or “too much work” for developers to write tests. Developers need to own writing quality code – and doing this requires that they write tests. I was surprised that some people felt that it was bad for developers to test their own code – but I suppose that years of working in silos will make you believe that there’s some sort of taint in doing so (but there’s not!).

This is obviously a point (and a change that is really happening now at a lot of companies) that causes no small amount of fear in testers. I get that, but ignoring the change, burying your head in the sand, or justifying why testers need to own functional testing isn’t going to help you figure out how to function when these changes hit your team.


Stop Writing Automation

Wed, 05/07/2014 - 13:40

After releasing The A Word, I didn’t plan on writing any more posts about automation. But, after pondering transitions in test, and after reading this post from Noah Sussman, I have a thought in my head that I need to share.

I don’t think testers should write automation.

I suppose I better explain myself.

Not all automation is created equal. Automation works wonderfully for short confirmatory or validation tests. Unit, functional, acceptance, integration tests, and all other “short” tests lend themselves very well to automation. But I think it’s wasteful and inefficient to have testers write this automation – it should be written by the code owners. Testing your own code (IMO) improves design, prevents regression, and takes much, much less time than passing code off to another team to test.
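To be concrete about “short” tests, this is the kind of thing I mean – a handful of lines the code owner writes alongside the code (discount() is just a stand-in for whatever function was just written):

    def discount(price: float, percent: float) -> float:
        return round(price * (1 - percent / 100), 2)

    # Short, confirmatory tests - seconds to write, fast to run (pytest style).
    def test_discount_basic():
        assert discount(100.0, 10) == 90.0

    def test_discount_boundaries():
        assert discount(50.0, 0) == 50.0     # no discount
        assert discount(80.0, 100) == 0.0    # full discount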

That leaves the test team to write automation for end-to-end scenarios. There’s nothing wrong with that…except that writing end-to-end automated tests is hard (especially, as Noah points out, at the GUI level). The goal of automation (as touted by many of the vendors) is to enable running a bunch of tests automatically, so that testers will have more time for hands-on testing. In reality, I think that most test teams spend half of their time writing automation, half of their time maintaining and debugging automation, and half of their time doing everything else.

Let’s look at a typical scenario: search the app store for an app, download it, install it, and launch it.

Pretty easy – you only need to automate three actions, add a bit of validation, and you’re done!

If you’ve attempted something like this before, you know that’s not the whole story. A good automated test doesn’t just execute a set of user actions and then go on its way – you need to handle error conditions, and write the test in a way that keeps it from breaking every time the product changes.
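For a taste of what “not the whole story” means, here’s a sketch of what the three actions turn into once you guard against reality. The app object and every method on it are hypothetical – the shape of the problem is the point, not the API:

    # 'app' is a hypothetical UI-automation handle; the guards are the story.
    def test_find_download_install(app):
        app.search("FizzBuzz")
        if app.dialog_present("Store not available"):
            app.retry_search("FizzBuzz", attempts=3)    # transient outage?
        app.download("FizzBuzz")
        app.wait_for("download complete", timeout=120)  # slow networks happen
        if app.dialog_present("Connection Lost"):
            raise AssertionError("download failed - product bug, or lab network?")
        app.install("FizzBuzz")
        app.wait_for("install complete", timeout=300)   # ...and so on
        assert app.launch("FizzBuzz")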

It’s hard. It’s fragile. It’s a pain in the ass. So I say, stop doing it.

“But Alan – we have to test the scenario so that we know it works for our customers”. I believe that you want to know that the scenario works – but as much as you try to be the customer, nobody is a better representative of the customer than the customer. And – validating scenarios is quite a bit easier if we let the customers do it.

NOTE: I acknowledge that for some contexts, letting the customers generate usage data for an untested scenario is an inappropriate choice. But even when I’ve already tested the scenario, I still want (need!) to know what the customer experience is. As a tester, I can be the voice of the customer, but I am NOT the customer.

Now – let’s look at the same scenario:

But this time, instead of writing an automated test, I’ve added code (sometimes called instrumentation) to the product code that lets me know when I’ve taken an action related to the scenario. While we’re at it, let’s add more instrumentation at every error path in the code.
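Here’s a rough sketch of what that instrumentation might look like in the product code. The emit() helper and its transport are illustrative – real telemetry would go to a logging pipeline, not stdout:

    import time

    def emit(message: str) -> None:
        # One instrumentation event, stamped like the sample logs below.
        stamp = time.strftime("%m%d%Y-%H%M") + f"-{int(time.time() * 10000) % 10000:04d}"
        print(f"{stamp}: {message}")

    def acquire_app(name: str) -> None:
        emit(f"Search started for {name}")
        # ... real search code; on failure: emit("ERROR 119: Store not available.")
        emit(f"Download started for {name}")
        # ... real download code; on failure: emit("ERROR 115: Connection Lost")
        emit(f"Install started for {name}")
        # ... real install code ...
        emit(f"{name} launched successfully")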

Now, when a user attempts the scenario, you could get logs something like this:

    05072014-1601-3143: Search started for FizzBuzz
    05072014-1601-3655: Download started for FizzBuzz
    05072014-1602-2765: Install started for FizzBuzz
    05072014-1603-1103: FizzBuzz launched successfully

or:

    05072014-1723-2143: Search started for FizzBuzz
    05072014-1723-3655: Download started for FizzBuzz
    05072014-1723-2945: ERROR 115: Connection Lost

or:

    05072014-1819-3563: Search started for FizzBuzz
    05072014-1819-3635: ERROR 119: Store not available.

From this, you can begin to evaluate scenario success by looking at how many people get through the entire scenario, and how many fail for particular errors.

or generate summary data like this: [chart: how many users complete each step of the scenario, and which errors stop the rest]
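Here’s a sketch of that aggregation, assuming logs in the format above and one list of lines per user attempt (the parsing is deliberately naive):

    from collections import Counter

    STEPS = ["Search started", "Download started",
             "Install started", "launched successfully"]

    def funnel(attempts):
        # attempts: one list of log lines per user attempt at the scenario.
        reached = Counter()   # how far users get
        errors = Counter()    # why the rest fail
        for lines in attempts:
            for step in STEPS:
                if any(step in line for line in lines):
                    reached[step] += 1
            for line in lines:
                if "ERROR" in line:
                    errors[line.split(": ", 1)[1]] += 1
        return reached, errors

Run it over the three samples above and you get a funnel of 3/2/1/1 across the four steps, plus one count each for ERROR 115 and ERROR 119 – exactly the “how far do users get, and why do they fail” view.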

And now we have information about our product that’s directly related to real customer usage, and the code to enable it is substantially simpler and easier to maintain than the traditional scenario automation.

A more precise way to put “I don’t think testers should write automation” is: I think that many testers, in some contexts, don’t need to write automation anymore. Developers should write more automation, and testers should try to learn more from the product in use. Use this approach in financial applications at your own risk!


Testing Trends…or not?

Sun, 05/04/2014 - 20:40

I read this article over the weekend about five emerging trends in software testing – Test Automation; Rise of mobile and cloud; Emphasis on security; Context-driven testing; and More business involvement.

I fully acknowledge that I work in a software development environment that isn’t like many others, but while reading the article, I really didn’t feel like any of those areas are “emerging” – all have fully emerged already. Sure, the trends are interesting to testers, but emerging? I could waste some space rebutting or commenting on the areas above, but instead, let me offer some alternate trends that I see inside of MS and from some of my colleagues who work elsewhere.

Fuzzier Role Definitions. I don’t really like the terms “whole team approach” or “combined engineering”, but I do see software teams really figuring out how to work better together and leverage every team member’s strengths effectively. Great testers are working as test specialists, engaging much more broadly across the team. I expect the “lines” between software disciplines to fade even more in the future.

Developers Own More Testing. You can call them “checks” if you wish (I call them “short tests”), but software developers are beginning to own much bigger portions of traditional software testing. This is a good thing – it ensures that daily code quality is high, and gives test specialists a high-quality product to work with.

Testing Live Sites. Mock-test environments typically do a poor job representing production environments. Other than brief sanity checks for the most critical components, many web service teams just roll their new bits straight to production, and then run their tests against the live system. With a good monitoring system (including the ability to stage rollouts and automatically roll back if needed), this is a safe, efficient, and frankly practical method for testing services.
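The control loop for that kind of rollout can be surprisingly small. A sketch – the stage percentages, error budget, and the deploy/error_rate/rollback hooks are all illustrative stand-ins for real infrastructure:

    import time

    STAGES = [1, 5, 25, 100]    # percent of traffic per stage
    ERROR_BUDGET = 0.01         # roll back above 1% errors
    SOAK_SECONDS = 600          # watch each stage for ten minutes

    def staged_rollout(build, deploy, error_rate, rollback):
        for percent in STAGES:
            deploy(build, percent)
            time.sleep(SOAK_SECONDS)      # let live traffic exercise the build
            if error_rate() > ERROR_BUDGET:
                rollback(build)
                return False              # stop the rollout and investigate
        return True                       # fully deployed, verified live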

Data is HUGE. Many software teams have figured out that the best way to get an accurate representation of how customers use software is to collect and analyze data from those same customers. A whole lot of traditional test activities can be replaced by product instrumentation and an efficient method for getting that instrumentation data back to the team for analysis. On a lot of teams, last year’s testers are this year’s data analysts and data scientists. While not every tester is cut out for this role, the move to data analysis is a strong trend on a lot of software teams.

To critique myself for a moment, I think a lot of readers could say that none of these points are emerging either. That’s a fair point, since I know teams that have been doing everything above for years…but I’m just now seeing some of these trends “emerge” on multiple teams (and not just those testing web services or sites).

What trends do you see? Did I miss anything huge? Have the above four points already reached the tipping point of emergence?
