Alan Page

notes and rants about testing and quality from alan page

To Combine… or not?

Mon, 10/27/2014 - 15:47

I talk a lot (and write a bit) about software teams without separate disciplines for developers and testers (sometimes called “combined engineering” within the walls of MS – a term I find annoying and misleading – details below). For a lot of people, the concept of making software with a single team falls into the “duh – how else would you do it?” category, while many others see it as the end of software quality and the end of tester careers.

Done right, I think an engineering-team-based approach to software development is more efficient, and produces higher quality software, than a “traditional” Dev & Test team. Unfortunately, it’s pretty easy to screw it up as well…

How to Fail

If you are determined to jump on the fad of software-engineering-without-testers, and don’t care about results, here is your solution!

First, take your entire test team and tell them that they are now all developers (including your leads and managers). If you have any testers that can’t code as well as your developers, now is a great time to fire them and hire more coders.

Now, inform your new “engineering team” that everyone on the team is now responsible for design, implementation, testing, deployment, and overall end-to-end quality of the system. While you’re at it, remind them that because there are now more developers on the team, you expect team velocity to increase accordingly.

Finally, no matter what obstacles you hit, or what the team says about the new culture, don’t tell them anything about why you’ve made the changes. Since you’re in charge of the team, it’s your right to make any decision you want, and it’s one step short of insubordination to question your decisions.

A better way?

It’s sad, but I’ve seen pieces (and sadder still, all) of the above section occur on teams before. Let’s look at another way to make (or reinforce) a change that removes a test team (but does not remove testing).

Start with Why?

Rather than say, “We’re moving to a discipline-free organization” (the action), start with the cause – e.g. “We want to increase the efficiency of our team and improve our ability to get new functionality in customers’ hands. In order to do this, we are going to move to a discipline-free organizational structure. …etc.” I would probably even preface this with some data or anecdotes, but the point is that starting with a compelling reason for “why” has a much better chance of working than a proclamation in isolation (and this approach works for almost any other organizational change as well).

Build your Team

The core output (and core activity) of an engineering team is working software. To that end, this is where most (but not necessarily all) of the team should focus. The core role of a software engineer is to create working software (at a minimum, this means design, implementation, unit, and functional tests). IMO, the ability to do this is the minimum bar for a software engineer.
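To make “implementation plus tests” concrete, here’s a toy sketch (the function and names are invented purely for illustration – not from any real product) of the kind of thing I mean: a small piece of functionality checked in alongside the unit tests that prove it works.

    # A toy example of "implementation plus unit tests" -- names here are
    # invented for illustration only.
    import unittest

    def parse_version(version_string):
        """Parse a dotted version string like '1.2.3' into a tuple of ints."""
        parts = version_string.strip().split(".")
        if not all(p.isdigit() for p in parts):
            raise ValueError("invalid version string: %r" % version_string)
        return tuple(int(p) for p in parts)

    class ParseVersionTests(unittest.TestCase):
        # The same engineer who wrote parse_version writes (and maintains) these.
        def test_simple_version(self):
            self.assertEqual(parse_version("1.2.3"), (1, 2, 3))

        def test_whitespace_is_tolerated(self):
            self.assertEqual(parse_version(" 2.0 "), (2, 0))

        def test_garbage_is_rejected(self):
            with self.assertRaises(ValueError):
                parse_version("not.a.version")

    if __name__ == "__main__":
        unittest.main()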

But…if you read my blog, you’re probably someone who knows that there’s a lot more to quality software than writing and testing code. I think great software starts with core software engineering, but that alone isn’t enough. Good software teams have learned the value of using generalizing specialists to help fill the gap between core software engineering and quality products.

Core engineering (aka what-developers-do) covers a large part of Great Software.

[Figure: two circles – core software engineering as the inner circle, surrounded by an outer loop of broader quality activities that together make up Great Software.]

Nitpicker note: sizes of circles may be wrong (or wrong for some teams) – this is a model.

Engineering teams still have a huge need for people who are able to validate and explore the system as a whole (or in large chunks). These activities remain critically important to quality software, and they can’t be ignored. These activities (in the outer loop of my model) improve and enhance “core software engineering”. In fact, an engineering team could be structured with a separate team to focus on the types of activities in the outer loop.

Generalizing Specialists (and Specializing Generalists)

I see an advantage, however, in engineering teams that include both circles above. It’s important to note that some folks who generally work in the “Core Engineering” circle will frequently (or regularly) take on specialist roles that live in the outer loop. A lot of people seem to think that on discipline-free software teams, everyone can do everything – which is, of course, flat out wrong. Instead, it’s critical that a good software team has (generalizing) specialists who can look critically at quality areas that span the product.

There also will/must be folks who live entirely in the outer ring, and there will be people like me who typically live in the outer ring, but dive into product code as needed to address code problems or feature gaps related to the activities in the outer loop. Leaders need to support (and encourage – and celebrate) this behavior…but with this much interaction between the outer loop of testing and investigation, and the inner loop of creating quality features, it’s more efficient to have everyone on one team.

I’ve seen walls between disciplines get in the way of efficient engineering (rough estimate) a million times in my career. Separating the team working on the outer loop from core engineering can create another opportunity for team walls to interfere with engineering. To be fair, if you’re on a team where team walls don’t get in the way, and you’re making great software with separate teams, then there’s no reason to change. For some (many?) of us, the efficiency gains we can get from this approach to software development are worth the effort (along with any short-term worries from the team).

Doing it Right

It’s really easy for a team to make the decision to move to one-engineering-team for the wrong reasons, and it’s even easier to make the move in a way that will hurt the team (and the product). But after working mostly the wrong way for nearly 20 years, I’m completely sold on making software with a single software team. But making this sort of change work effectively is a big challenge.

But it’s a challenge that many of us have faced already, and something everyone worried about making good software has to keep their eye on. I encourage anyone reading this to think about what this sort of change means for you.

(potentially) related posts:
  1. Some Principles
  2. In Search of Quality
  3. Testing Trends…or not?
Categories: Software Testing

The Myths of Tech Interviews

Wed, 10/08/2014 - 11:16

I recently ran across this article on nytimes.com from over a year ago. Here’s the punch line (or at least one of them):

“We looked at tens of thousands of interviews, and everyone who had done the interviews and what they scored the candidate, and how that person ultimately performed in their job. We found zero relationship. It’s a complete random mess…”

I expect (hope?) that the results aren’t at all surprising. After almost 20 years at Microsoft and hundreds of interviews, it’s exactly what I expected. Interviews, at best, are a guess at finding a good employee, but often serve the ego of the interviewer more than the needs of the company. The article makes note of that as well.

“On the hiring side, we found that brainteasers are a complete waste of time. How many golf balls can you fit into an airplane? How many gas stations in Manhattan? A complete waste of time. They don’t predict anything. They serve primarily to make the interviewer feel smart.”

I wish more interviewers paid attention to statistics or articles (or their peers) and stopped asking horrible interview questions, and really, really tried to see if they could come up with better approaches.

Why Bother?

So – why do we do interviews if they don’t work? Well, they do work – hopefully at least as a method of making sure you don’t hire a completely incapable person. While it’s hard to predict future performance based on an interview, I think they may be more effective at making sure you don’t hire a complete loser – but even this approach has flaws, as I frequently see managers pass on promising candidates for (perhaps) the wrong reasons out of fear of making a “bad hire”.

I’m mostly an “as appropriate” interviewer at Microsoft (this is the person on the interview loop who makes the ultimate hire / no hire decision on candidates based on previous interviews and their own questions). For college candidates or industry hires, one of the key questions I’m looking to answer is, “Is this person worth investing 12-18 months of salary and benefits to see if they can cut it?” A hire decision is really nothing more than an agreement for a long audition. If I say yes, I’m making a (big) bet that the candidate will figure out how to be valuable within a year or so, and assume they will be “managed out” if not. I don’t know the stats on my hire decisions, but while my heart says I’m great, my head knows that I may be just throwing darts.

What Makes a Good Tech Employee?

If I had a secret formula for what made people successful in tech jobs, I’d share. But here’s what I look for anyway:

  1. Does the candidate like to learn? To me, knowing how to figure out how to do something is way more interesting than knowing how to do it in the first place. In fact, the skills you know today will probably be obsolete in 3-5 years anyway, so you’d better be able to give me examples of how you love to learn new things.
  2. Plays well with others – (good) software engineering is a collaborative process. I have no desire to hire people who want to sit in their office with the door closed all day while their worried teammates pass flat food under their door. Give me examples of solving problems with others.
  3. Is the candidate smart? By “smart”, I don’t mean you can solve puzzles or write some esoteric algorithm at my white board. I want to know if you can carry on an intelligent conversation and add value. I want to know your opinions and see how you back them up. Do you regurgitate crap from textbooks and twitter, or do you actually form your own ideas and thoughts?
  4. If possible, I’ll work with them on a real problem I’m facing and evaluate a lot of the above simultaneously. It’s a good method that I probably don’t use often enough (but will make a mental note now to do this more).

The above isn’t a perfect list (and leaves off the “can they do the job?” question), but I think someone who can do the above can at least stay employed.

Categories: Software Testing

The Weasel Returns

Tue, 09/16/2014 - 12:05

I’m back home and back at work…and slowly getting used to both. It was by far the best (and longest) vacation I’ve had (or will probably ever have). While it’s good to be home, it’s a bit weird getting used to working again after so much time off (just over 8 weeks in total).

But – a lot happened while I was gone that’s probably worth at least a comment or two.

Borg News

Microsoft went a little crazy while I was out, but the moves (layoffs, strategy, etc.) make sense – as long as the execution is right. I feel bad for a lot of my friends who were part of the layoffs, and hope they’re all doing well by now (and if any are reading this, I almost always have job leads to share).

ISO 29119

A lot of testers I know are riled up (and rightfully so) about ISO 29119 – which, in a nutshell, is a “standard” for testing that says software should be tested exactly as described in a set of textbooks from the 1980s. On one hand, I have the flexibility to ignore 29119 – I would never work for a company that thought it was a good idea. But I know there are testers who find themselves in a situation where they have to follow a bunch of busywork from the “standard” rather than provide actual value to the software project.

As for me…

Honestly, I have to say that I don’t think of myself as a tester these days. I know a lot about testing and quality (and use that to help the team), but the more I work on software, the more I realize that thinking of testing as a separate and distinct activity from software development is a road to ruin. This thought is at least partially what makes me want to dismiss 29119 entirely – from what I’ve seen, 29119 is all about a test team taking a product that someone else developed, and doing a bunch of ass-covering while trying to test quality into the product. That approach to software development doesn’t interest me at all.

I talked with a recruiter recently (I always keep my options open) who was looking for someone to “architect and build their QA infrastructure”. I told them that I’d talk to them about it if they were interested, but that my goal in the interview would be to talk them out of doing that and give them some ideas on how to better spend that money.

I didn’t hear back.

Podcast?!?

It’s also been a long hiatus from AB Testing. Brent and I are planning to record on Friday, and expect to have a new episode published by Monday September 22, and get back on our every-two-week schedule.

Categories: Software Testing