Alan Page

notes and rants about testing and quality from alan page

The Myths of Tech Interviews

Wed, 10/08/2014 - 11:16

I recently ran across this article on nytimes.com from over a year ago. Here’s the punch line (or at least one of them):

“We looked at tens of thousands of interviews, and everyone who had done the interviews and what they scored the candidate, and how that person ultimately performed in their job. We found zero relationship. It’s a complete random mess…”

I expect (hope?) that the results aren’t at all surprising. After almost 20 years at Microsoft and hundreds of interviews, it’s exactly what I expected. Interviews, at best, are a guess at finding a good employee, but often serve the ego of the interviewer more than the needs of the company. The article makes note of that as well.

“On the hiring side, we found that brainteasers are a complete waste of time. How many golf balls can you fit into an airplane? How many gas stations in Manhattan? A complete waste of time. They don’t predict anything. They serve primarily to make the interviewer feel smart.”

I wish more interviewers paid attention to the statistics, to articles like this one, or to their peers, stopped asking horrible interview questions, and really, really tried to come up with better approaches.

Why Bother?

So – why do we do interviews if they don’t work? Well, they do work – hopefully at least as a way of making sure you don’t hire a completely incapable person. While it’s hard to predict future performance based on an interview, interviews may be better at screening out bad hires than at finding great ones – but even that approach has flaws, as I frequently see managers pass on promising candidates for (perhaps) the wrong reasons out of fear of making a “bad hire”.

I mostly do “as appropriate” interviews at Microsoft (the “as appropriate” interviewer is the person on the interview loop who makes the ultimate hire / no-hire decision based on the previous interviews and their own questions). For college candidates or industry hires, one of the key questions I’m looking to answer is, “Is this person worth investing 12-18 months of salary and benefits to see if they can cut it?” A hire decision is really nothing more than an agreement to a long audition. If I say yes, I’m making a (big) bet that the candidate will figure out how to be valuable within a year or so, and I assume they will be “managed out” if not. I don’t know the stats on my hire decisions, but while my heart says I’m great, my head knows I may just be throwing darts.

What Makes a Good Tech Employee?

If I had a secret formula for what makes people successful in tech jobs, I’d share it. But here’s what I look for anyway:

  1. Does the candidate like to learn? To me, knowing how to figure out how to do something is way more interesting than knowing how to do it in the first place. In fact, the skills you know today will probably be obsolete in 3-5 years anyway, so you’d better be able to give me examples of how you love to learn new things.
  2. Plays well with others – (good) software engineering is a collaborative process. I have no desire to hire people who want to sit in their office with the door closed all day while their worried teammates pass flat food under the door. Give me examples of solving problems with others.
  3. Is the candidate smart? By “smart”, I don’t mean you can solve puzzles or write some esoteric algorithm at my whiteboard. I want to know if you can carry on an intelligent conversation and add value. I want to know your opinions and see how you back them up. Do you regurgitate crap from textbooks and Twitter, or do you actually form your own ideas and thoughts?
  4. If possible, I’ll work with them on a real problem I’m facing and evaluate a lot of the above simultaneously. It’s a good method that I probably don’t use often enough (but will make a mental note now to do this more).

The above isn’t a perfect list (and it leaves off the “can they do the job?” question), but I think someone who can do the above can at least stay employed.


The Weasel Returns

Tue, 09/16/2014 - 12:05

I’m back home and back at work…and slowly getting used to both. It was by far the best (and longest) vacation I’ve had (or will probably ever have). While it’s good to be home, it’s a bit weird getting used to working again after so much time off (just over 8 weeks in total).

But – a lot happened while I was gone that’s probably worth at least a comment or two.

Borg News

Microsoft went a little crazy while I was out, but the moves (layoffs, strategy, etc.) make sense – as long as the execution is right. I feel bad for a lot of my friends who were part of the layoffs, and hope they’re all doing well by now (and if any are reading this, I almost always have job leads to share).

ISO 29119

A lot of testers I know are riled up (and rightfully so) about ISO 29119 – which, in a nutshell, is a “standard” for testing that says software should be tested exactly as described in a set of textbooks from the 1980s. On one hand, I have the flexibility to ignore 29119 – I would never work for a company that thought it was a good idea. But I know there are testers who find themselves in a situation where they have to follow a bunch of busywork from the “standard” rather than provide actual value to the software project.

As for me…

Honestly, I have to say that I don’t think of myself as a tester these days. I know a lot about testing and quality (and use that to help the team), but the more I work on software, the more I realize that thinking of testing as a separate and distinct activity from software development is a road to ruin. This thought is at least partially what makes me want to dismiss 29119 entirely – from what I’ve seen, 29119 is all about a test team taking a product that someone else developed, and doing a bunch of ass-covering while trying to test quality into the product. That approach to software development doesn’t interest me at all.

I talked with a recruiter recently (I always keep my options open) who was looking for someone to “architect and build their QA infrastructure”. I told them I’d be happy to talk, but that my goal in the interview would be to talk them out of doing that and to give them some ideas on how to better spend the money.

I didn’t hear back.

Podcast?!?

It’s also been a long hiatus from AB Testing. Brent and I are planning to record on Friday and expect to have a new episode published by Monday, September 22, getting us back on our every-two-week schedule.
