It’s interesting when I go back and look at the number of posts where I talk about what I do, what testing is to me, and how testing is changing. Ever since the Telerik Test Summit (telsum), I’ve been thinking even more about testing and how it fits into software development. When I wrote this post, and added on a bit to it here, I was pondering the same thing. When I (last) blogged about the future of testing, I (likely subconsciously) was thinking about the same thing.
A few things happened recently to make everything really click for me. One was this article, asking universities to offer software testing degrees – as a means to “train our testers”. Besides the fact that the author thinks that universities are vocational schools where we train people (rather than teach people), I think it’s difficult to have a degree in something that the industry can’t define consistently (yes, I know there are test degree programs out there now, and I could make the same argument about those).
Then, just over a week ago at Telerik’s testing summit, we led off the open-space session with a discussion on test design. I shared some of my ideas, including how I use mind-maps to communicate test design. Towards the end of the discussion, Jeff Morgan (@chzy) asked, “Isn’t it just ‘Design’?” What Jeff was implying was that many of the details of test design should be equally at home when thinking of feature design. Rather than have programmers determine how it should work, and then have testers determine how it should be validated and explored, those tasks could occur (for the most part) simultaneously – e.g. “To implement this feature, I need to read a value from the database, but I also need to make sure I account for and understand performance implications, whether or not the values will be localized, what the error cases are, etc.” Much (not all) of test design can be considered when designing code – so why don’t we consider test design and code design simultaneously?
After this discussion (and for many, many hours since telsum), I’ve been thinking about programming and testing and how the two roles can work together better – and began to wonder if the test role may be detrimental to software development.
Three things are important to note before you scroll down and leave me hate-comments.
- I didn’t say test is dead
- I said “role”, not activity
- I purposely used the weasel-word “may” – that statement won’t always be true
And now for some much-needed elaboration.
There’s a wall between testers and programmers. Programmers “build”, testers “break” – but passing features back and forth between one group and another is wasteful. It takes time and extra communication to traverse the wall. I spent at least the first few years of my career on one side of the wall catching the crappy code programmers threw to me. Then, I found unit-level bugs and threw those bugs back over the wall to the programmers. Then, we played this wasteful game of catch until it was time to ship. Recently, I saw a fairly prominent tester mention that their bread and butter was participating in a game of bad-code and happy-path-bug crap-catch. While I’m happy that there are easy ways to make money in testing, I’d rather tear my eyelids off than do work like this. I like what I do because it’s hard. Finding happy path bugs in crappy software isn’t hard. It’s boring and practically demeaning.
The Neotys blog latched onto one of my recent posts in their testing roundup. They said that my team at Microsoft has moved to the “whole-team” approach, but that’s not true. We still have tester and programmer roles – and while programmers frequently write test code and testers frequently write product code, there’s still a wall. Programmers are still responsible for writing product code, and testers are still responsible for testing – we just don’t have a problem crossing those lines.
We still have a wall. It’s a small wall, but it’s still there.
Tear Down the Wall
If you played with my recent thought experiments of programmers who can test and testers who can program, it’s not much of a reach to picture a team where there are no programmers or testers – just contributors (or developers or engineers, or whatever you want to call them). If you have a hard time imagining that, imagine your current team with no walls between role responsibilities. Many of you may do the same thing you’re doing today, but with the walls gone, I bet you could make better software – faster.
Figure out what needs to get done, and get it done. Leverage the diversity of the team. If someone is a specialist, let them specialize. If they’re not, let them optimize for their own skills.
Tear down the wall.
On a quick side note, testers worry (far too much) about their titles. This recent blog post reminds me of the idiocy of tester titles (and I’ve discussed it here). There will always be a place for people who know testing as valuable contributors to software development – but perhaps it’s time for all testing titles to go away?
I’ve written in the past about where test is going – toward data analysis, non-functional emphasis, etc., but I think I was at least partially wrong. What software teams need in the future is team members who can perform the activities of programming, testing, and analysis – together.
I saw a slide deck recently that stated, “Agile Testers frequently can’t keep up with Programmers”. Good software development can’t happen when you serialize tasks – the team needs to work on this stuff – together.
I’ll always be deeply involved in the activity of software testing, but I don’t know if the role or title exists in my future. After nearly 20 years of being a tester (and likely several more with that title), I’m admittedly going out on a bit of a limb with that statement. Despite my title, I want the walls to go away.
Testing can no longer be something that happens after programming is complete. Testers aren’t “breakers” anymore – but experts in the testing activity are (or need to be) as critical to making software as anyone else on the team.
Since it’s mostly testers who read this blog, I challenge all of us to shed any remnants of tester vs. programmer and figure out how to tear down the wall.
I’ll let you know how my own battle goes.
A few days ago, I began a verbal exploration of testers who code and coders who test. That post provides some context for today’s continuation, so if you have a moment, go ahead and check it out.
OK – I know you didn’t read it (or you read it already), so I’ll give you the summary. There are coders, testers, coders who test, testers who code, and many variations in between. If it’s good for a coder to have some test knowledge, it certainly can’t hurt to have some knowledge of code if you’re a tester. My contrived exercises asked you to consider what mix of testing and coding knowledge you’d pick when staffing a team. The spectrum of awareness and infusion of coding and testing skills looks a bit like this:
But that’s not (as some of you pointed out) the full picture. I know fantastic testers with tons of experience and the ability to find the most important issues quickly and efficiently – but they’re jerks (and that’s putting it nicely). It doesn’t matter how awesome you are, if you’re a horrible teammate, I don’t care.
It’s been 35 years or so since I played any paper-based role-playing games, but when I did, we had these “character sheets” where we kept track of our characters’ stats and experience. Something like that would give a much better view of the full picture of a good software developer.
It’s not perfect, but it helps to see the big picture (and I’ll be the first to admit, it’s difficult, if not impossible, to measure people on a scale like this) – but as a leader, you can certainly consider the type of people you want on your team. You can’t just hire the first code-monkey (or “breaker”) you find and expect to have all of your problems solved. When I make hiring decisions, those decisions are based an itty-bitty bit on whether the candidate can do the job, quite a bit more on how they will fit in with the team, and a whole lot on how well I think they’ll be able to help the team months and years down the line.
And with that, the intro to this blog post has turned into an interlude, so I’ll wrap this up later this week. Great discussion so far (in comments, twitter, and email) – please keep it up.
For the last day or so, I’ve been thinking a lot about programming and testing, about collaboration and walls, and about where this may all be going. This post is the start of my mental exploration of the subject.
In the beginning…
In How We Test Software at Microsoft, we told the story of Microsoft hiring our first tester, Lloyd Frink. Lloyd was a high school intern, and his job was to “test” the Basic compiler by running some programs through the compiler and making sure they worked. That was 1979, but full time testers didn’t show up at Microsoft until 1983. I have no recollection or knowledge of what was happening in the rest of the software industry at that time, but I imagine it was probably the same. Since then, Microsoft has hired thousands of testers (we currently employ over 9000), and the industry has hired a kazillion (yes, a made up number, because I really have no way of guessing) software testers (regardless of title).
Now, in the last several years, many teams (including some inside of Microsoft) have moved to the “whole team approach” – where the separation of testing and programming roles is blurred or gone (the activities will always exist). Even on teams where separate roles still exist, there is still a significant amount of blurring between which activities each role does as part of their job (for example, I, like many of my test colleagues, make changes to product code, just as many programmers on the team create and edit test code).
The metaphorical “wall” between programming and testing is often quite high (as in programmers write code, then throw it over the wall to testers, who, in turn, throw the bugs back over the wall to developers, who make fixes and start the wall-chucking process over again). Lloyd was hired to catch stuff thrown to him over the wall, and I think it’s a fair argument to say it would have been more efficient for developers to run those tests as part of a check-in suite (assuming, of course, they had such a thing).
To be completely transparent, a wall on my team still exists – it’s just a much shorter wall. There are still expectations of programmers and testers that fit into the typical roles, but collaboration and code throwing don’t occur often.
Elisabeth Hendrickson speaks often of being “test-infected” to describe programmers who “get” testing (I believe the term was coined by Erich Gamma). I, for one, can’t think of a good reason for a programmer to not have some test ideas in their head when they write code.
Similarly, I think that testers can benefit from being a bit “code-infected”. Before the self-preserving screams of oppression begin, let me note that I said testers “can benefit” from some code knowledge. I’m not talking about language expertise – a basic understanding of programming logic and some ideas on how computer programs may improve testing is usually plenty. It’s not necessary to know anything about programming to be a tester – I’m just saying it can help.
Beyond the disclaimer, it’s worth elaborating on the last two paragraphs here. I have had testers tell me that knowledge of code will adversely affect the “tester mindset” – that by knowing how to code, the tester will somehow utilize less of their critical thinking or problem solving skills. While I suspect this to be a false assumption, I have no proof of whether this is true or not. But I will hypothesize that if this is true (that code-infection adversely affects testing skill), then it’s just as likely that test-infection adversely affects programming ability. When I look at it this way, the answer clears up a bit. Take a moment to think about the best programmer you know. I’m fortunate enough to work with some really incredible (and some famous) programmers. Every single one of them is test-infected. Yes – I have no hard data, and zero empirical research, so I could be high as a kite on this, but I’d be curious to hear if anyone knows a great programmer who doesn’t have the ability to think at least a little bit like a tester.
So – let’s suppose for a few minutes that code-infection is beneficial for testers. Not even necessarily equally beneficial – just helpful. Is there such a thing as too infected? An old colleague of mine, John, once said,
“I’m not the best tester on the team, and I’m not the best programmer. But, of the programmers on the team, I’m the best tester, and of the testers, I’m the best programmer”.
Which role should John be in? On your team, would he be a programmer, or a tester? Is it a clear cut decision, does it depend, or does it even matter? Is John a test-infected developer, or a code-infected tester? Does it matter?
I think the answers to the above questions are either “It depends” or “It doesn’t matter”, so let me ask a different question. Do you want a team full of Johns? Maybe you do, or maybe you don’t (or maybe it depends), so let me rephrase yet again in the form of a quick thought experiment.
You are forming a nine-person team to make a super cool software application. You get to build a dream team to put it together. On a scale from 1-9, where a 1 is a tester with zero knowledge of programming, a 9 is a developer who can’t even pronounce “unit test”, and a programmer / tester like John is a 5 – which employees do you hire (note that all of these people get paid the same, work the same hours, get along with each other, etc.).
To be fair, I don’t think there’s a “right” answer to this question, but we can certainly explore some options of what would be more or less “right”. One option is to (conveniently) select one of each type. Then you get infection-diversity! However, I’m betting that most of you are thinking, “There’s no room on my team for developers who don’t write unit tests!” So, is “some” testing knowledge enough? Is it possible to have too much? If I hire 7s and 8s, do I eventually want them to have more testing knowledge? If that’s the case, should I just hire 5s and 6s instead?
And what about testers? Remember that code-infected doesn’t necessarily mean that the testers are code-monkeys cranking out tools and automation all day long – it just means that they have (varying levels of) programming skills as part of their available toolbox. The context of the product may have some influence on my choice, but if you agree that some programming knowledge can’t hurt, then maybe a 2 is better than a 1. But is a 3 better than a 2? I suppose that “better” depends, but to me, it’s a worthwhile thought process.
But that’s not right…
If you’ve read this far without jumping down to comments to yell at me, let me admit now that I’ve led you in a bad direction. The exercise above sort of assumes that as you gain testing ability, you lose programming ability (and vice-versa). That’s not true in the real world, so let’s try a bit of a twist and see if we can be slightly more realistic. For this exercise, you have $1,000,000 of salary to spend on a maximum of 10 employees. You have the following additional requirements:
- An employee can have a “programmer” rating of 0-10, where 0 means they can’t even write a batch file and 10 means they dream in algorithms.
- Similarly, an employee has a “tester” rating of 0-10, where 0 is no knowledge of testing, and 10 is the best tester you’ve ever dreamt of.
- Minimum salary is $50,000, and the maximum salary is (arbitrarily) $160,000.
Because context is important, let’s say the app is a social networking site with clients for phones and tablets, as well as a web interface. Your exercise (if you want to play) is to think about the type of people you would use to staff this team. One option, for example, is to have five developers, each with a 10 rating (5x$100,000), paired with five testers who also each have a 10 rating (5x$100,000). Another option would be to have five developers, each with a 15 rating (e.g. 9-dev and 6-test for 5x$150,000=$750,000) paired with five testers, each with a 5 rating (5x$50,000=$250,000). The choice is yours, and there’s no right answer – but I think the thought exercise is interesting.
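If you want to play along programmatically, the examples above imply a simple pricing model (my inference, not a rule stated in the post): an employee’s salary is (programmer rating + tester rating) x $10,000. Under that assumption, here’s a small sketch – all names are mine – that checks whether a proposed team fits the constraints:

```python
BUDGET = 1_000_000
MAX_HEADCOUNT = 10
MIN_SALARY, MAX_SALARY = 50_000, 160_000

def salary(dev_rating, test_rating):
    # Combined-skill pricing: a 9-dev/6-test "15" costs $150,000.
    return (dev_rating + test_rating) * 10_000

def team_is_legal(team):
    """team: list of (dev_rating, test_rating) pairs, each 0-10."""
    salaries = [salary(d, t) for d, t in team]
    return (len(team) <= MAX_HEADCOUNT
            and sum(salaries) <= BUDGET
            and all(MIN_SALARY <= s <= MAX_SALARY for s in salaries))

# Five pure-dev 10s and five pure-test 10s: ten people, exactly $1,000,000.
specialists = [(10, 0)] * 5 + [(0, 10)] * 5
# Five 9/6 "Johns" and five 0/5 testers: also exactly $1,000,000.
generalists = [(9, 6)] * 5 + [(0, 5)] * 5

print(team_is_legal(specialists), team_is_legal(generalists))
```

The interesting part of the exercise isn’t the arithmetic, of course – it’s arguing about which legal team you’d actually want.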
I’ll close this post by saying that I really, really hate sequels and series when it comes to blog posts (and I’ll add an extra “really” to that statement when referring to my own posts). But there’s more to come here – both in my thoughts on testing and programming, and in more thought exercises that I hope you will consider.
More to come.
I haven’t blogged much recently, and it’s mainly for three reasons.
- I’m busy – probably the hardest I’ve worked in all of my time in software. And although there have been a few late nights, the busy isn’t coming from 80-100 hour weeks, it’s coming from 50 hour weeks of working on really hard shit. As a result, I haven’t felt like writing much lately.
- I can’t tell you.
- I’m not really doing much testing.
It’s the third point that I want to elaborate on, because it’s sort of true, and sort of not true – but worth discussing / dumping.
First off, I’ve been writing a ton of code recently. But not really much code to test the product directly. Instead, I’ve been neck deep in writing code to help us test the product. This includes both infrastructure (stories coming someday), and tools that help other testers find bugs. I frequently say that a good set of analysis tools to run alongside your tests is like having personal testing assistants. Except you don’t have to pay them, and they don’t usually interrupt you while you’re thinking.
I’ve also been spending a lot of time thinking about reliability, and different ways to measure and report software reliability. There’s nothing really new there other than applying the context of my current project, but it’s interesting work. On top of thinking about reliability and the baggage that goes along with it, I spend a lot of time making sure the right activities to improve reliability happen across the org. I know that some testers identify themselves as “information providers”, but I’ve always found that too passive of a role for my context. My role (and the role of many of my peers) is to not only figure out what’s going on, but to figure out what changes are needed, and then make them happen.
And this last bit is really what I’ve done for years (with different flavors and variations). I find holes and I make sure they get filled. Sometimes I fill the holes. Often, I need to get others to fill the holes – and ideally, make them want to fill them for me. I work hard at this, and while I don’t always succeed, I often do, and I enjoy the work. Most often, I’m driving changes in testing – improving (or changing) test design, test tools, or test strategy. Lately, there’s been a mix of those along with a lot of multi-discipline work. The fuzzy blur between disciplines on our team (and on many teams at MS these days) contributes a lot to that, and just “doing what needs to be done” fills in the rest.
I’m still a tester, of course, and I’ll probably wear that hat until I retire. What I do while wearing that hat will, of course, change often – and that’s (still) ok.
Yesterday, I read a mail sent to an email alias I’m on, where the author was asking why tool X wasn’t enabled on his latest build. The mail looked something like this (genericized to protect the innocent).
foo.service doesn’t appear to be working
- I installed the build from <build_path>
- I verified the binaries existed <where they should exist>
- I queried to see if foo.service was running – it wasn’t
- I queried another way, and it didn’t show anything either
- When I run my tests, they fail and tell me that the service isn’t running
- I thought the service wasn’t started, so I tried starting it, but that also failed (error text: Failed to retrieve the fizzbaz from the bogatorium)
At first glance, it looks like there’s a real problem here. I try to avoid the “works on my machine” comments, but I thought it was strange that nobody else had seen this. I assumed the Problem was Between the Keyboard And the Chair (PBKAC).
At first, that seemed to be exactly the problem. You see, there is no foo.service. It’s actually called fizz.service, but it’s run as part of the foo toolset, so it’s an easy misunderstanding. If they had queried for fizz.service, they would have seen it happily running.
Their tests failed because they had an invalid command line for their test (or more specifically, they specified an invalid address for the machine where they wanted to run the tests).
And then they got that strange error message attempting to start the service because they were attempting to restart a service that was already running (interesting, however, was that they did manage to try and start the fizz service rather than the foo service at this point).
Three errors, all different, yet relatively easy to see as symptoms of a common problem. Except they weren’t.
Absolutely, definitely, beyond a shadow of a doubt, PBKAC.
But maybe not. Actually, definitely not. The princess is in another castle – or between a different keyboard and chair.
PBKAC was definitely in play when the user tried to query for the wrong service. My service names above are silly, but our service names are actually nearly as confusing. You only make this mistake once, but it’s not too difficult to make.
When the tests failed, the error said the service wasn’t running. The error was 100% accurate (the service wasn’t running because the user connected to the wrong machine). What if, as a courtesy, that error message said something like, “The service isn’t running. Are you connected to a valid target machine?” I bet that would have set off a few light bulbs rather than generated more confusion.
That message about the fizzbaz and the bogatorium isn’t far from what the user actually saw. What they saw was, again, a 100% accurate statement of what went wrong – it just gave no clue of what may have caused it.
Granted, these were internal tools, but that’s a horrible excuse to confuse someone. I bet I’ve seen hundreds (if not thousands) of error messages like this – but very few that offer even the tiniest bit of troubleshooting or diagnostic advice. It’s easy to blame the user when they do something “wrong” (or unexpected), but ultimately, it’s rarely their fault. And in the end, if they can’t do what they want to do with your software, they’ll take their business and money elsewhere.
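As a sketch of the difference (the service names and the helper below are invented for illustration – they aren’t from the internal tools in the story), compare a bare statement of fact with an error that carries a hint about the likely causes:

```python
class ServiceNotRunningError(Exception):
    pass

def require_service(host, service, running_services):
    """Raise a diagnostic error when `service` isn't running on `host`."""
    if service not in running_services:
        # State the fact, then offer the most likely causes.
        raise ServiceNotRunningError(
            f"'{service}' is not running on {host}. "
            f"Hint: verify that {host} is the target machine you intended, "
            f"and note that the service may be registered under a "
            f"different name (currently running: {sorted(running_services)})."
        )

try:
    # The confusing scenario from the story: querying for the wrong name.
    require_service("test-box-07", "foo.service", {"fizz.service"})
except ServiceNotRunningError as e:
    print(e)
```

The hint costs one extra sentence to write, and it’s exactly the sentence that would have ended the email thread above before it started.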
And then, it’s your problem alone.
Last summer, I posted a short rant on multitasking. If you don’t want to read it, it was my normal type of rant where I complain about people taking an already generalized statement and applying it even more widely.
This week, in response to that post (some responses take longer than others), I received a pointer to a little multitasking test that I thought would be fun to share. I’m not affiliated, no kickback, blah blah blah, just sharing because I think it’s interesting. I embedded the test below, but you can go straight to the source at http://open-site.org/blog/the-multitask-test/ as well. (Note – the embedded version doesn’t always work that well, so head on over to the open-site.org page if you’re having problems.)
Like most Microsoft employees, I have a whiteboard in my office, and mine (also like most) gets used a lot for notes, explanations, architecture, or whatever.
This is nothing new – it’s part of the software culture. A few years ago, I recorded a few handfuls of these talks (some slightly staged) with some all-star colleagues and we posted them on the (now-defunct) Microsoft tester center site. Given that the talks are getting pretty difficult to find, I thought I’d take a few of them and re-post to youtube. I have no schedule in mind for posting, but I expect I’ll re-post 10-20 or so in the coming months.
Here are two to get things started. The first is me talking with James Rodriguez about code reviews, and the second is Alan Myrvold getting me started with Security Testing.
Last week I happened to log on to twitter just as some test folks were marveling over the massive parallels between the video with the gorilla and the basketball players and software testing. I made some cracks about the absurdity of the hype, and more than a few testers freaked out.
Somewhere in the tweet insanity, I promised to blog about my opinions on the concept, so here goes.
First off – this isn’t a new gripe of mine. Here’s a tweet from August, 2011 (edited to fix my typo). To be clear, I think all three of the items below are valuable – yet all also seem to be frequently hyped beyond their value. The gorilla especially comes up among testers – many of whom overreact to the value of the video to software testing.
The point of the video is to show that it’s possible to miss something that’s right in front of your eyes.
Inattentional blindness (IB) makes you confront the illusion of attention.
Knowing about inattentional blindness doesn’t make you better at noticing things, and there’s zero correlation between the observational skills of those who see the gorilla vs. those who don’t. While I agree that it’s critical for testers to know that inattentional blindness exists, it’s a small nugget of information in a pretty big pool of stuff that actually helps testers. Any knowledge work requires knowledge of IB – furthermore, I’d argue that there are plenty of professions where IB is much more critical to know about than in software testing.
- TSA Agents
- Script Supervisor (those are the people in charge of ensuring continuity in movies / tv)
- NASCAR drivers
And I’m hard pressed to think of a profession where knowledge of IB isn’t at least equal to that of testing. My garbage man (sanitation engineer) needs to make sure he doesn’t miss any cans, and ensure I’m not throwing away anything illegal. Given that garbage collection is much more repetitive than anything I do from day to day, I’d expect viewing the gorilla video to be standard training material for the guys in the big stinky truck (and for all I know, it is). The gorilla is fascinating – for everyone. There’s no special appeal to testing that I can see.
Of course, beyond the video (which helped the people behind it earn many awards), Christopher Chabris and Daniel Simons also wrote a book called The Invisible Gorilla, which discussed the video and a ton of other really cool stuff (stuff I find much more interesting than the gorilla video – especially when considered as a whole). I’ve read the book twice, skimmed it several other times, and met the authors briefly after attending a talk they gave a year or so ago.
Some other cool topics covered include:
- The illusion of memory (what you think you remember clearly may not be accurate. At all).
- The illusion of confidence (most people overrate their abilities – includes a great story about chess players and chess rankings)
- The illusion of knowledge (you probably don’t know as much as you think you do)
It’s good stuff.
The Gorilla Stunt is good.
So stop hyping the damn gorilla video.
Last week, I spent my Wednesday evening talking with the folks at QASIG about test innovation. This is a variation on the talk I gave in November at Eurostar – changed enough so I didn’t get bored, and with some bonus chatter that can only come from a small friendly audience.
The recording is here if you want to take a look.
Anyone who reads this blog has probably also read about, or heard of, the recent policy from Marissa Mayer at Yahoo recalling home-office based employees back to the office (Bing search here in case you’ve been under a rock). Most of the reactions I’ve seen to this policy are correctly identifying this as a management problem (or a worker-milking-the-system problem), and I agree with that assessment.
Of course, remote workers all over the internet are completely up in arms whether they work at Yahoo or not. Perhaps they’re afraid that their employer will follow suit (not likely), or they don’t want to be discovered as yet-another-wfh-milker (slightly more likely), or they feel like their choice of work method is being threatened by the publicity of this choice (most likely).
To be clear, I am completely supportive of working from home. I’m fortunate enough to have an employer and a history of managers who let me work from home – or remotely as needed. In fact, the recent news reminded me that after I worked remotely for two solid weeks in the summer of 2011, I wrote up some thoughts for my (then) manager. Given that there’s nothing confidential in that write up, I’m sharing it unedited below as fodder for discussion.

From: Alan Page
Date: August 3, 2011
Subject: Working the Swing Shift in France
This summer, I’m spending two weeks working from Toulouse, France. My family came here for a vacation (and to house sit for some friends of friends). I had planned to only stay for about 10 days, but due to a variety of circumstances, I decided to extend my trip and stay with my family for an additional two weeks. I asked Ross if I could work from France for a few weeks and he graciously allowed me to do so.

The logistics
The house we’re staying in has a reasonably fast internet connection as well as an office where the door shuts, so a reasonable workspace wasn’t a problem. I decided that I’d work Redmond hours (I start work between 4:00 and 6:00 pm and work until 2:00 or 3:00am). There was no requirement that I align my work with Redmond time, but it allowed me to spend some time on daytrips with my family during the day before beginning the workday. I was also able to attend a fair number of meetings over Lync.

The experience
I’ve worked from home before and have never had a problem staying focused on work outside of the workplace (I probably learned to excel in this area while writing hwtsam on evenings and weekends). My family (fortunately) “gets” that I’m working even though I’m close by and leaves me alone to concentrate.
One highlight of the experience is that Lync has worked flawlessly. I’ve made several calls to Redmond, and attended several meetings. Audio and video have worked well, and it’s helped keep me connected much of the time.
I was reflecting on my first week of working remotely, and had a bit of an insight. I have been able to get a ton of work done – but to be fair and honest, it’s different work than I would have done had I been in Redmond. I don’t think this is necessarily a bad thing, since some of the things I’ve worked on (e.g. writing up thoughts and experimenting with fault injection, figuring out how Lync should approach model-based testing, writing up debugging tutorials, or polishing up a thinkweek paper) are the right work for me to do, but it seems that working remotely changes priorities slightly. By this, I mean that a big part of my typical role involves interacting with people on the team in a somewhat random pattern – e.g. answering questions in the hallway, discussing topics of the day over lunch, or following up with people 1:1 after meetings. Not all Microsoft roles involve this sort of interaction, but it seems difficult to interact in this manner remotely.
One example I thought of that reflects the above is this scenario: Say Josh and Bob are talking in Josh’s office. I overhear the conversation and have some relevant (and valuable!) thoughts, so I get up, poke my head in and join the conversation. That scenario doesn’t happen if I’m not there.
Here’s an interesting Lync feature based on this: say Bob and Josh are having an IM conversation. If Lync noticed that I was on both of their contact lists, and that they mentioned a keyword that shows up in my “interests”, Lync could ask them if they wanted to add me to the conversation.
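As a sketch only (nothing here is a real Lync API; the function name, the “interests” list, and the contact-list flags are all invented for illustration), the heuristic might look something like this:

```c
#include <stdbool.h>
#include <string.h>

/* Hypothetical sketch of the feature described above: suggest adding a
 * third person to an IM conversation when (a) they are on both
 * participants' contact lists and (b) the message mentions one of
 * their interest keywords. None of these names exist in Lync. */
static bool should_suggest_invite(const char *message,
                                  const char *const *interests,
                                  int num_interests,
                                  bool on_contact_list_a,
                                  bool on_contact_list_b)
{
    if (!on_contact_list_a || !on_contact_list_b)
        return false;

    for (int i = 0; i < num_interests; i++) {
        /* Naive substring match; a real feature would want case
         * folding and some notion of relevance, not just strstr. */
        if (strstr(message, interests[i]) != NULL)
            return true;
    }
    return false;
}
```

The interesting design question is the trigger: matching on a profile field the users curate themselves keeps the suggestion opt-in, rather than Lync mining conversations it has no business mining.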
I think the casual interaction limitation is more of a cultural problem than something inherent to working remotely (and something that may solve itself if I were away for a longer period). My thought is that, in general, people on our team don’t send IMs casually – e.g. “Hey – I was just talking to Dan about test automation, and wondered if you had thoughts on how to do data driven testing”, or “Do you have any quick thoughts on blah?”. The questions I normally hear in the hallway, or from someone sticking their head in my office, haven’t occurred over IM (or telephone, for that matter).
Another example is the value (in our culture) of the in-person follow up. For example, I sent an email to a few peers on the team last week – I was expecting it to turn into a discussion, but only received two short replies (I replied to both, and then the conversation ended). If I were in Redmond, I probably would have had an additional casual conversation with a few of the recipients and attempted to clear up any ambiguity or answer any questions that were blocking an engaged conversation on the topic. This is difficult to do remotely (although it’s certainly possible to do all of this over IM, it takes some getting used to).
Now – since I wrote that email, I’ve changed teams, and the world has warmed up more to social media. Lots of Microsofties use Yammer, and IM usage is on the rise. I (and others) still don’t think we (as a company, and as an industry) are doing enough to support and encourage remote work, and I’m a bit discouraged that Yahoo seems to have taken us a step or two backwards.
Although I really enjoy talking about testing, I’m (purposely) speaking a lot less these days. I have a day-job that I love, and I like hanging out in the rainy Pacific Northwest. As of now (and I think a plan change is a long shot), I’m travelling to exactly one conference in CY13 (StarWest in Fall of 2013 – more on that later….but it will be epic).
However – if you are also in the Pacific Northwest, and want to hang out and talk about test innovation, I’ll be hanging out with the folks at qasig on the evening of March 13. I’ll be delivering an updated version of my EuroStar keynote on test innovation, and I expect it will be a fun night.
Beyond that (and depending on interest), I may do another free webinar soon (it’s been at least a year since the last one), but that will be about it for me talking about testing in person.
More here later.
-AW
“Hey – can you set up a repro of that bug for me?”
As a tester, how many times have you heard this phrase? How many times have you walked through the steps you outlined in the bug report so someone could look at an error for you? Or – how many times have you seen a test error, and immediately re-run the test to see if you could reproduce the error yourself?
Is it a big number? If it is, you’re not going to like what I have to say. If you need to reproduce all of your bugs to figure out what’s going on, you screwed up. Your logging is bad. Your diagnostics don’t exist. You wrote crummy code. You’re wasting time!
I sometimes see code like this:
int status = CreateSomethingCool(object);
if (status != COOL_SUCCESS)
    return TEST_FAIL;

status = MakeItWayCool(object);
if (status != COOL_SUCCESS)
    return TEST_FAIL;
The next time the test is run, Joe Tester (assuming some sort of automation around the automation) gets an email saying his test failed. Of course, Joe doesn’t have a freakin’ clue why his test failed. It could have failed for either case above (assuming those are the only two failure points), so he needs to run it – perhaps under a debugger to see what happened.
Joe writes horrible test code.
But – Joe wants to improve, so he writes this:
int status = CreateSomethingCool(object);
if (status != COOL_SUCCESS)
    return TEST_FAIL_CREATE;

status = MakeItWayCool(object);
if (status != COOL_SUCCESS)
    return TEST_FAIL_MAKECOOL;
Now Joe has different return values, but he still writes crappy code. At least now he knows why his test failed; but he doesn’t know why the product failed.
So, I fired Joe, and hired Sally. Sally wrote this instead.
LOG("Calling CreateSomethingCool with object = %s", object.ToString());
int status = CreateSomethingCool(object);
VERIFY_COOLSUCCESS(status, "CreateSomethingCool Failed. status=%d, objectData=%s", status, DumpObject(object));

LOG("Calling MakeItWayCool with object = %s", object.ToString());
status = MakeItWayCool(object);
VERIFY_COOLSUCCESS(status, "MakeItWayCool Failed. status=%d, objectData=%s", status, DumpObject(object));
To be fair, Sally’s version was a little easier to read, and contained some comments (or it would, if Sally were a real person), but when her test failed, instead of needing to set up a repro, the automation system sent her this mail.
CoolTest failed with a verify failure. Log follows:
Calling CreateSomethingCool with object = LittleRedCorvette
CreateSomethingCool Failed. status=5, objectData=
Even here, there may not be enough information, but chances are that if Sally (or her teammates) know that “CoolTest is failing with object LittleRedCorvette, likely because the object was inactive and not running, and the error code was 5”, someone familiar with the code would know exactly where to look.
And – in the case where Sally (or a teammate) has to hook up a debugger anyway, they should add additional debug information to the log file for the next time a similar error happens. Setting up a repro wastes time. Doing it for every issue you find is irresponsible and wasteful. Be a professional, and stop setting up repros, and start writing tests (and code) that make your job easier and make your team better.
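For what it’s worth, the post never shows how LOG and VERIFY_COOLSUCCESS are defined. A minimal sketch of what macros like them might look like in C (the constants, the stderr destination, and the bail-out-of-the-test behavior are all assumptions):

```c
#include <stdio.h>

#define COOL_SUCCESS 0   /* assumed success code */
#define TEST_PASS    0   /* assumed test result values */
#define TEST_FAIL    1

/* Append one line to the test log (here, just stderr). */
#define LOG(...) do {                              \
        fprintf(stderr, __VA_ARGS__);              \
        fputc('\n', stderr);                       \
    } while (0)

/* On failure, log the caller's diagnostic message and bail out of the
 * enclosing test function, so the failure mail carries the diagnosis
 * instead of a bare pass/fail. */
#define VERIFY_COOLSUCCESS(status, ...) do {       \
        if ((status) != COOL_SUCCESS) {            \
            LOG(__VA_ARGS__);                      \
            return TEST_FAIL;                      \
        }                                          \
    } while (0)

/* Example: a test that hits a simulated failure logs and returns early. */
static int cool_test(int simulated_status)
{
    VERIFY_COOLSUCCESS(simulated_status,
                       "CreateSomethingCool Failed. status=%d",
                       simulated_status);
    return TEST_PASS;
}
```

The point of folding the log call into the verify macro is that the diagnostic can never be skipped: every failure path produces the message Sally’s mail contained, for free.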