Alan Page

notes and rants about testing and quality from alan page

A Little Q & A

Thu, 12/18/2014 - 12:57

I took some time recently to answer some questions about Combined Engineering, Data-Driven Quality, and the Future of Testing for the folks at the A1QA blog.

Check it out here.

(potentially) related posts:
  1. HWTSAM – Six Years Later
  2. Testing Trends…or not?
  3. Finding Quality
Categories: Software Testing

HWTSAM – Six Years Later

Sun, 11/30/2014 - 11:37

We’re less than a week away from the sixth anniversary of How We Test Software at Microsoft (some chapters were completed nearly seven years ago).

Recently I was considering a new position at Microsoft, and one of my interviewers (a dev architect and also an author) said that he had been reading my book. I stated my concern that the book is horribly out of date, and that it doesn’t really reflect testing at Microsoft accurately anymore. In the ensuing discussion, he asked how I’d structure the book if I re-wrote it today. I gave him a quick brain dump – which in hindsight I believe is still accurate…and worth sharing here.

So – for those who may be curious, here’s the outline for the new (and likely never-to-be-written) edition of How We Test Software at Microsoft. For consistency with the first edition, I’m going to try to follow the original format as much as possible.

Part 1 – About Microsoft

This section initially included chapters on Engineering at Microsoft, the role of the Tester / SDET, and Engineering Life cycles. The new edition would go into some of the changes Satya Nadella is driving across Microsoft – including flattening of organizational structures, our successes (and shortcomings) in moving to Agile methodologies across the company, and what it means to be a “Cloud First, Mobile First” company. I think I’d cover a little more of the HR side of engineering as well, including moving between teams, interviews, and career growth.

Part 1 would also dive deeply into the changes in test engineering and quality ownership, including how [combined] engineering teams work, and an intro to what testers do. This is important because it’s a big change (and one still in progress), and it sets the tone for some of the bigger changes described later in the book.

Although I don’t think I could give Marick’s Testing Quadrants the justice that Lisa Crispin and Janet Gregory (and many others) give them, I think the quadrants (my variation presented on the left) give one view into how Microsoft teams can look at sharing quality when developers own the lion’s share of engineering quality (with developers generally owning the left side of the quadrant and testers generally owning the right side).

And finally, part 1 would talk about how community works within Microsoft (including quality communities, The Garage, and how we use email distribution lists and Yammer to share (or attempt to share) knowledge).

Part 2 – About Quality

Fans of the book may remember that the original names for the second and third sections of the book were “About Testing” and “About Test Tools and Systems” – but I think talking about quality – first from the developer perspective, and then from the quality engineer / tester perspective (along with how we use tools to make our jobs easier) – is a better way to present the 2015 version of Testing at Microsoft.

I’d spend a few chapters showing examples of the developer workflow and typical tools – especially where the tools we use are available to readers of the book (e.g. unit testing and coverage tools available in Visual Studio and use of tools like SpecFlow for acceptance test development).
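The book’s examples would presumably be C# (Visual Studio’s unit test and coverage tools, SpecFlow’s Gherkin features). As a rough, language-neutral sketch of the layering that paragraph describes – developer-owned unit tests alongside scenario-style acceptance tests – here’s a minimal Python version; the `apply_discount` function and the checkout scenario are hypothetical, invented purely for illustration:

```python
import unittest

# Hypothetical function under test -- not from the book, just an illustration.
def apply_discount(price, percent):
    """Return price reduced by the given percentage, rounded to cents."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountUnitTests(unittest.TestCase):
    """Developer-owned unit tests: small, fast, one behavior each."""

    def test_basic_discount(self):
        self.assertEqual(apply_discount(100.0, 10), 90.0)

    def test_rejects_invalid_percent(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

class CheckoutAcceptanceTest(unittest.TestCase):
    """Acceptance-style test: phrased as a customer scenario
    (Given/When/Then), the way a SpecFlow feature file would read."""

    def test_member_gets_ten_percent_off(self):
        # Given a cart totaling $50.00 and a member discount of 10%
        cart_total, member_discount = 50.0, 10
        # When the discount is applied at checkout
        final_total = apply_discount(cart_total, member_discount)
        # Then the customer pays $45.00
        self.assertEqual(final_total, 45.0)
```

Run with `python -m unittest`; the unit tests pin down the function’s contract, while the acceptance test reads as the customer-facing scenario that a Gherkin feature would express.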

This section would also include some digging into analysis tools, test selection, test systems, and some of the other big pieces we use across all products and disciplines to keep the big engines running.

I didn’t talk much about how Microsoft does Exploratory Testing in the original book – I’d fix that in the second edition.

I also thought about breaking this section into “developer owned” and “testing owned” sections, but that line isn’t clear across the company (or even across organizations), so I think I’d start as a progression from unit testing to exploratory testing and let readers draw their own ideas of where their team may want to draw the line.

Part 3 – About Data Driven Quality

There’s at least a book or two of information that could go in this section, and it represents the biggest shift over the last six years. The original chapters on Customer Feedback Systems and Testing Software Plus Services hinted at how we approach Data-Driven Quality (DDQ), but our systems, and the way we use them, have matured massively, and there’s definitely a lot of great information worth sharing – enough that it’s an easy decision to dedicate a full section and several chapters to the topic.

Part 4 – About the Future

Much of what I talked about in this section of the original has either happened…or become obsolete. As in the first version, I would reserve this section for talking about engineering ideas still in the experimental stage, or ideas currently under adoption. Also, as in the original, I’d use this section as a platform for whatever essay idea is chewing on me about where software engineering is going, and what it means to Microsoft.

Part 5 – Appendix

In my 20+ years of testing, I’ve read at least a hundred different books on software testing, software development, leadership, and organizational change. I’ll include a list of books that have had a significant influence on my approach to (and views on) software engineering, publications that influenced the content of the book, and publications that readers may find helpful for further information.

(potentially) related posts:
  1. HWTSAM–Five Years Later
  2. Happy Birthday HWTSAM
  3. HWTSAM – One Year Later
Categories: Software Testing

Don’t Go Changing…

Wed, 11/05/2014 - 16:10

My last post dove into the world of engineering teams / combined engineering / fancy-name-of-the-moment where there are no separate disciplines for development and test. I think there are advantages to one-team engineering, but that doesn’t mean that your team needs to change.

First things First

I’ve mentioned this before, but it’s worth saying again. Don’t change for change’s sake. Make changes because you think the change will solve a problem. And even then, think about what problems a change may cause before making a change.

In my case, one-team engineering solves a huge inefficiency in the developer to tester communication flow. The back-and-forth iteration of They code it – I test it – They fix it – I verify it can be a big inefficiency on teams. I also like the idea of both individual and team commitments to quality that happen on discipline-free engineering teams.

Is it for you?

I see the developer to tester ping pong match take up a lot of time on a lot of teams. But I don’t know your team, and you may have a different problem. Before diving in on what-Alan-says, ask yourself, “What’s the biggest inefficiency on our team?” Now, brainstorm solutions for solving that inefficiency. Maybe combined engineering is one potential solution, maybe it’s not. That’s your call to make. And then remember that the change alone won’t solve the problem (and I outlined some of the challenges in my last post as well).

Taking the Plunge

OK. So your team is going to go for it and have one engineering team. What else will help you be successful?

In the I-should-have-said-this-in-the-last-post category, I think running a successful engineering team requires a different slant on managing (and leading) the team. Some things to consider (and why you should consider them) include:

  • Flatten – It’s really tempting to organize a team around product functionality. Does this sound familiar to anyone?
    Create a graphics team. Create a graphics test team. Create a graphics performance team. Create a graphics analysis team. Create a graphics analysis test team. You get the point.
    Instead, create a graphics team. Or create a larger team that includes graphics and some related areas. Or create an engineering team that owns the whole product (please don’t take this last sentence literally on a team that includes hundreds of people).
  • Get out of the way – A book I can’t recommend enough for any manager or leader who wants to transition from the sausage-making / micro-managing methods of the previous century is The Leader’s Guide to Radical Management by Steve Denning (and note that there are several other great books on reinventing broken management; e.g. Birkinshaw, Hamel, or Collins for those looking for even more background). In TLGRM, Denning says (paraphrased), Give your organization a framework they understand, and then get out of their way. Give them some guidelines and expectations, but then let them work. Check in when you need to, but get out of the way. Your job in 21st century management is to coach, mentor, and orchestrate the team for maximum efficiency – not to monitor them continuously or create meaningless work. This is a tough change for a lot of managers – but it’s necessary – both for the success of the workers and for the sanity of managers. Engineering teams need the flexibility (and encouragement) to self-organize when needed, innovate as necessary, and be free from micro-management.
  • Generalize and Specialize – I’ve talked about Generalizing Specialists before (search my last post and my blog). For another take, I suggest reading what Jurgen Appelo has to say about T-shaped people, and what Adam Knight says about square-shaped teams for additional explanation on specialists who generalize and how they make up a good team.

This post started as a follow up and clarification for my previous post – but has transformed into a message to leaders and managers on their role – and in that, I’m reminded how critically important good leadership and great managers are to making transitions like this. In fact, given a choice of working for great leaders and awesome managers on a team making crummy software with horrible methods, I’d probably take it every time…but I know that team doesn’t exist. Great software starts with great teams, and great teams come from leaders and managers who know what they’re doing and make changes for the right reasons.

If your team can be better – and healthier –  by working as a single engineering team, I think it’s a great direction to go. But make your changes for the right reasons and with a plan for success.

(potentially) related posts:
  1. Stuff About Leadership
  2. Who Owns Quality?
  3. Leadership
Categories: Software Testing

To Combine… or not?

Mon, 10/27/2014 - 15:47

I talk a lot (and write a bit) about software teams without separate disciplines for developers and testers (sometimes called “combined engineering” within the walls of MS – a term I find annoying and misleading – details below). For a lot of people, the concept of making software with a single team falls into the “duh – how else would you do it?” category, while many others see it as the end of software quality and end for tester careers.

Done right, I think an engineering team based approach to software development is more efficient, and produces higher quality software than a “traditional” Dev & Test team. Unfortunately, it’s pretty easy to screw it up as well…

How to Fail

If you are determined to jump on the fad of software-engineering-without-testers, and don’t care about results, here is your solution!

First, take your entire test team and tell them that they are now all developers (including your leads and managers). If you have any testers that can’t code as well as your developers, now is a great time to fire them and hire more coders.

Now, inform your new “engineering team” that everyone on the team is now responsible for design, implementation, testing, deployment and overall end to end quality of the system. While you’re at it, remind them that because there are now more developers on the team that you expect team velocity to increase accordingly.

Finally, no matter what obstacles you hit, or what the team says about the new culture, don’t tell them anything about why you’ve made the changes. Since you’re in charge of the team, it’s your right to make any decision you want, and it’s one step short of insubordination to question your decisions.

A better way?

It’s sad, but I’ve seen pieces (and sadder still, all) of the above section occur on teams before. Let’s look at another way to make (or reinforce) a change that removes a test team (but does not remove testing).

Start with Why?

Rather than say, “We’re moving to a discipline-free organization” (the action), start with the cause – e.g. “We want to increase the efficiency of our team and improve our ability to get new functionality in customers’ hands. In order to do this, we are going to move to a discipline-free organizational structure. …etc.” I would probably even preface this with some data or anecdotes, but the point is that starting with a compelling reason for “why” has a much better chance of working than a proclamation in isolation (and this approach works for almost any other organizational change as well).

Build your Team

The core output (and core activity) of an engineering team is working software. To this end, this is where most (but not necessarily all) of the team should focus. The core role of a software engineer is to create working software (at a minimum, this means design, implementation, unit, and functional tests). IMO, the ability to do this is the minimum bar for a software engineer.

But…if you read my blog, you’re probably someone who knows that there’s a lot more to quality software than writing and testing code. I think great software starts with core software engineering, but that alone isn’t enough. Good software teams have learned the value of using generalizing specialists to help fill the gap between core-software engineering, and quality products.

Core engineering (aka what-developers-do) covers a large part of Great Software. The sizes of the circles may be wrong (or wrong for some teams); this is just a model.


Engineering teams still have a huge need for people who are able to validate and explore the system as a whole (or in large chunks). These activities remain critically important to quality software, and they can’t be ignored. These activities (in the outer loop of my model) improve and enhance “core software engineering”. In fact, an engineering team could be structured with a separate team to focus on the types of activities in the outer loop.

Generalizing Specialists (and Specializing Generalists)

I see an advantage, however, in engineering teams that include both circles above. It’s important to note that some folks who generally work in the “Core Engineering” circle will frequently (or regularly) take on specialist roles that live in the outer loop. A lot of people seem to think that on discipline-free software teams, everyone can do everything – which is, of course, flat out wrong. Instead, it’s critical that a good software team has (generalizing) specialists who can look critically at quality areas that span the product.

There also will/must be folks who live entirely in the outer ring, and there will be people like me who typically live in the outer ring, but dive into product code as needed to address code problems or feature gaps related to the activities in the outer loop. Leaders need to support (and encourage – and celebrate) this behavior…but with this much interaction between the outer loop of testing and investigation, and the inner loop of creating quality features, it’s more efficient to have everyone on one team. I’ve seen walls between disciplines get in the way of efficient engineering (rough estimate) a million times in my career. Separating the team working on the outer loop from core engineering can create another opportunity for team walls to interfere with engineering. To be fair, if you’re on a team where team walls don’t get in the way, and you’re making great software with separate teams, then there’s no reason to change. For some (many?) of us, the efficiency gains we can get from this approach to software development are worth the effort (along with any short-term worries from the team).

Doing it Right

It’s really easy for a team to make the decision to move to one-engineering-team for the wrong reasons, and it’s even easier to make the move in a way that will hurt the team (and the product). But after working mostly the wrong way for nearly 20 years, I’m completely sold on making software with a single software team. But making this sort of change work effectively is a big challenge.

But it’s a challenge that many of us have faced already, and something everyone worried about making good software has to keep their eye on. I encourage anyone reading this to think about what this sort of change means for you.

(potentially) related posts:
  1. Some Principles
  2. In Search of Quality
  3. Testing Trends…or not?
Categories: Software Testing

The Myths of Tech Interviews

Wed, 10/08/2014 - 11:16

I recently ran across this article on nytimes.com from over a year ago. Here’s the punch line (or at least one of them):

“We looked at tens of thousands of interviews, and everyone who had done the interviews and what they scored the candidate, and how that person ultimately performed in their job. We found zero relationship. It’s a complete random mess…”

I expect (hope?) that the results aren’t at all surprising. After almost 20 years at Microsoft and hundreds of interviews, it’s exactly what I expected. Interviews, at best, are a guess at finding a good employee, but often serve the ego of the interviewer more than the needs of the company. The article makes note of that as well.

“On the hiring side, we found that brainteasers are a complete waste of time. How many golf balls can you fit into an airplane? How many gas stations in Manhattan? A complete waste of time. They don’t predict anything. They serve primarily to make the interviewer feel smart.”

I wish more interviewers paid attention to statistics or articles (or their peers) and stopped asking horrible interview questions, and really, really tried to see if they could come up with better approaches.

Why Bother?

So – why do we do interviews if they don’t work? Well, they do work – hopefully at least as a method of making sure you don’t hire a completely incapable person. While it’s hard to predict future performance based on an interview, I think they may be more effective at making sure you don’t hire a complete loser – but even this approach has flaws, as I frequently see managers pass on promising candidates for (perhaps) the wrong reasons out of fear of making a “bad hire”.

I mostly serve as the “as appropriate” interviewer at Microsoft (this is the person on the interview loop who makes the ultimate hire / no hire decision on candidates based on previous interviews and their own questions). For college candidates or industry hires, one of the key questions I’m looking to answer is, “Is this person worth investing 12-18 months of salary and benefits to see if they can cut it?” A hire decision is really nothing more than an agreement for a long audition. If I say yes, I’m making a (big) bet that the candidate will figure out how to be valuable within a year or so, and assume they will be “managed out” if not. I don’t know the stats on my hire decisions, but while my heart says I’m great, my head knows that I may be just throwing darts.

What Makes a Good Tech Employee?

If I had a secret formula for what made people successful in tech jobs, I’d share. But here’s what I look for anyway:

  1. Does the candidate like to learn? To me, knowing how to figure out how to do something is way more interesting than knowing how to do it in the first place. In fact, the skills you know today will probably be obsolete in 3-5 years anyway, so you better be able to give me examples about how you love to learn new things.
  2. Plays well with others – (good) software engineering is a collaborative process. I have no desire to hire people who want to sit in their office with the door closed all day while their worried teammates pass flat food under their door. Give me examples of solving problems with others.
  3. Is the candidate smart? By “smart”, I don’t mean you can solve puzzles or write some esoteric algorithm at my white board. I want to know if you can carry on an intelligent conversation and add value. I want to know your opinions and see how you back them up. Do you regurgitate crap from textbooks and twitter, or do you actually form your own ideas and thoughts?
  4. If possible, I’ll work with them on a real problem I’m facing and evaluate a lot of the above simultaneously. It’s a good method that I probably don’t use often enough (but will make a mental note now to do this more).

The above isn’t a perfect list (and leaves off the “can they do the job?” question), but I think someone who can do the above can at least stay employed.

Categories: Software Testing

The Weasel Returns

Tue, 09/16/2014 - 12:05

I’m back home and back at work…and slowly getting used to both. It was by far the best (and longest) vacation I’ve had (or will probably ever have). While it’s good to be home, it’s a bit weird getting used to working again after so much time off (just over 8 weeks in total).

But – a lot happened while I was gone that’s probably worth at least a comment or two.

Borg News

Microsoft went a little crazy while I was out, but the moves (layoffs, strategy, etc.) make sense – as long as the execution is right. I feel bad for a lot of my friends who were part of the layoffs, and hope they’re all doing well by now (and if any are reading this, I almost always have job leads to share).

ISO 29119

A lot of testers I know are riled up (and rightfully so) about ISO 29119 – which, in a nutshell, is a “standard” for testing that says software should be tested exactly as described in a set of textbooks from the 1980s. On one hand, I have the flexibility to ignore 29119 – I would never work for a company that thought it was a good idea. But I know there are testers who find themselves in a situation where they have to follow a bunch of busywork from the “standard” rather than provide actual value to the software project.

As for me…

Honestly, I have to say that I don’t think of myself as a tester these days. I know a lot about testing and quality (and use that to help the team), but the more I work on software, the more I realize that thinking of testing as a separate and distinct activity from software development is a road to ruin. This thought is at least partially what makes me want to dismiss 29119 entirely – from what I’ve seen, 29119 is all about a test team taking a product that someone else developed, and doing a bunch of ass-covering while trying to test quality into the product. That approach to software development doesn’t interest me at all.

I talked with a recruiter recently (I always keep my options open) who was looking for someone to “architect and build their QA infrastructure”. I told them that I’d talk to them about it if they were interested, but that my goal in the interview would be to talk them out of doing that and give them some ideas on how to better spend that money.

I didn’t hear back.

Podcast?!?

It’s also been a long hiatus from AB Testing. Brent and I are planning to record on Friday, and expect to have a new episode published by Monday September 22, and get back on our every-two-week schedule.

Categories: Software Testing