Alan Page

notes and rants about software and software quality

Madness, Math and (half) Marathons – and Medium!

Mon, 01/25/2016 - 14:02

I decided that in the rare instances where I post non-software articles, I’d use a blog on Medium and post my attempts at storytelling along with the rest of the world.

My first post is here.

(potentially) related posts:
  1. Settling on Quality?
  2. Writing About Testing
  3. Walls on the Soapbox
Categories: Software Testing

Intelligence and Insight

Wed, 01/13/2016 - 16:32

I have an after-work event tonight, and rather than leave my car in the garage overnight, I ran to work. Since I’ve moved to downtown Bellevue, I’ve done this a few times – and given that I’m running another half-marathon in 10 days, it was a great opportunity for a long training run before I begin to taper my mileage down a bit leading up to the race.

App Issues

I’ve been a long-time user of a running companion app called Runtastic. It does the usual stuff of tracking mileage, route, and pace, and giving voice updates at user-specified intervals. I find it especially valuable when training or racing, because I usually have very specific pace goals. Having a voice tell me how long the last mile/half-mile/km took lets me know if I’m running too fast (and burning out) or too slow (and putting my goals in jeopardy). Granted, I have a pretty good internal clock, and usually run my paces pretty well without “help”, but the feedback is really useful to me.

Today, I took off from home, started spacing out, and before long, I was a mile or two from home…when I noticed that I had not received any voice feedback yet. I knew exactly what happened (because it’s happened before). When you start the app, it gives you a 15-second timer before the tracking actually starts, along with the ability to extend the delay up to two minutes or so. I LOVE this feature, because I can give myself time to put my phone into my running belt, walk twenty yards, and curse myself for having such a painful hobby before actually exerting any physical energy.

Unfortunately, at least 25% of the time I use the app the countdown fails. But I never know it failed until too late.

What happens is that the countdown stops at 1 second. I hear the voice prompt count down 5-4-3-2-1, and I think it’s tracking, but it’s stuck on one second.
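
Pure speculation on my part (I have no idea what Runtastic’s code actually looks like), but a hypothetical off-by-one in the countdown loop would produce exactly this symptom. Here’s a minimal sketch in Python with invented names:

```python
import threading

def start_tracking():
    print("tracking started")

remaining = 15  # seconds left on the start-delay countdown

def tick():
    global remaining
    remaining -= 1
    if remaining > 1:   # BUG: should be `remaining > 0`; the final tick never gets scheduled
        threading.Timer(1.0, tick).start()
    elif remaining == 0:
        start_tracking()  # unreachable: `remaining` stalls at 1 and no more ticks fire

threading.Timer(1.0, tick).start()
```

The voice prompt happily announces 5-4-3-2-1 as the counter decrements, but the tick that would reach zero and start tracking is never scheduled.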

Today was extra painful not only because I wanted to see how I was doing on “race pace”, but because after I discovered it failed, I restarted the app, set the countdown again…and it “hung” on one second again.

Grrrr…

Now, it’s a bug – that’s for sure. Some testers I know would automatically assume that every user in the world was hitting this bug and that public shaming of the company would be the next course of action. I, however, realize that context plays a role in many (every?) part of software engineering, and that given the value I get from the product (and my very amateur level of running), this is a painfully annoying bug, but it’s not the end of the world.

Intelligence?

Given that I wasn’t concerned at all with pace for the remaining 5+ miles of my commute, my mind began to wander. What follows is a completely made-up story of how a Runtastic engineer might discover this issue without me, or anyone else, reporting it.

Pointy-Haired Runtastic Manager: Hey, super-smart employee (SSE). Our default time for delay before a run is 15 seconds. Can you look at the data and see if our estimate for a default delay length is in the ballpark of what people actually use? Someone on the train told me that they thought 30 seconds would be a lot better. I don’t think they’re right, but I want to make a decision based on data.

Super-Smart Employee (SSE): Sure, boss. That data is pretty easy to pull. I’ll take a look!

What SSE is about to do at this point is gather Business Intelligence. They want to use data to make a business decision.

SSE looks at hundreds of thousands of activities from the past six months and sees that nearly 60% of the people just use the default 15 seconds. She quickly generates a scatter plot that shows the outliers and prepares it for her boss. Before sending, she realizes that she wants to exclude the instances where people cancel the activity completely before the countdown completes (phone calls, cold feet (literally, and metaphorically), and a variety of other reasons could cause this). She filters the data and starts to send the report…but – while she’s there, she notices something…interesting. First, the number of “cancelled” activities seems high to her (over 15%). She flips the filter to look only at cancelled activities and things get weirder. Of the 15% of cancelled activities, 90% are cancelled at exactly 1 second.

That’s too weird to be true.
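
For the curious, here’s a minimal sketch of the kind of query SSE might run, assuming a hypothetical pandas DataFrame loaded from an activities export. The column names (status, countdown_remaining_sec) are invented for illustration – they’re not anything from Runtastic:

```python
import pandas as pd

# Hypothetical activity export; every column name here is invented.
activities = pd.read_csv("activities.csv")  # columns: user_id, status, countdown_remaining_sec, ...

cancelled = activities[activities["status"] == "cancelled"]
print(f"cancelled activities: {len(cancelled) / len(activities):.1%}")

# Where in the countdown do cancellations happen? A healthy distribution
# should be spread out; a spike at exactly 1 second is the smoking gun.
at_one_second = cancelled[cancelled["countdown_remaining_sec"] == 1]
print(f"cancelled at exactly 1 second: {len(at_one_second) / len(cancelled):.1%}")
```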

Insight

SSE looks at every activity where the timer was “killed” at one second. Often, those users started another activity within 10-15 minutes.

Or, maybe they all had the same model of phone.

Or maybe they were all running Spotify at the same time.

Or something. Remember. This story is completely made up.

The point is that SSE quickly went from gathering BI to using discovery and insight to find a pretty cool bug. Using data!

Aside

I told a story at a conference recently about a team I worked on that used an offshore vendor team to run through a large number of applications for app compatibility testing. We asked them to take notes and send a report, but not to bother filing bug reports.

HuWhat?

Yeah – we had sufficient telemetry and monitoring that we knew about all the bugs and glitches (and had collected call stacks and other helpful information) already. Many of them, in fact, the test team didn’t (or couldn’t) notice. Entering the bugs would have been a waste of time. In the rare cases where something weird happened that we didn’t track, we immediately added the appropriate instrumentation to track that class of failure in the future.
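
As a minimal sketch of what “adding instrumentation for a class of failure” can look like – the event name, fields, and sink below are all invented for illustration:

```python
import json
import time

def emit_event(name: str, **props) -> None:
    """Emit one structured telemetry event; a real pipeline ships this to a collector."""
    record = {"event": name, "ts": time.time(), **props}
    print(json.dumps(record))  # stand-in for the real telemetry sink

# Once an event like this exists, nobody has to notice the failure by hand -
# a dashboard or an alert watches the counts instead.
emit_event("countdown_stalled", remaining_sec=1, app_version="1.2.3", platform="android")
```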

I expect that for most of you, my world isn’t your world. But in my world, data driven engineering is critical.

Epilogue?

Since I don’t know how made up my made up story really is, I’m going to report it to Runtastic anyway. I can’t predict the future (or anything else), but I hope the reply to my complaint is, “Yeah – we already knew about that. From the data”.

(potentially) related posts:
  1. PBKAC?
  2. The Goal and the Path
  3. Riffing on the Quadrants
Categories: Software Testing

Creative Work

Tue, 01/05/2016 - 10:52

It’s early January, but I think I’ve already read at least a half dozen web articles on how testers need to be creative and use their brains, etc. The articles are on point in some sense, but most give me the feeling that the authors think that software testing is (one of) the only profession(s) that requires thinking and creativity.

Which is, of course, complete crap.

In A Whole New Mind, Daniel Pink tells story after story about how creativity is the competitive advantage for any business and any knowledge worker. The jobs of today, and especially of the future, will all require creativity and thinking over “book smarts” and rote work. Software development (including testing) is just one example of a knowledge-worker role that requires those skills. Everyone who wants a successful career should look for ways to learn, opportunities to be creative, and new ways to think about hard problems.

Peter Drucker came up with the term Knowledge Worker in the 1950s, and most definitions I’ve read describe software testing quite well (Wikipedia article here if you’d like to form your own opinion) – but if you don’t want to click, try this excerpt:

Knowledge work can be differentiated from other forms of work by its emphasis on “non-routine” problem solving that requires a combination of convergent, divergent, and creative thinking.

I think the problems in software testing (and in software development in general) are some of the most interesting and challenging problems anywhere – but I do not believe that the approach to solving them is particularly unique – certainly not as unique as some of my industry peers seem to imply.

I encourage anyone curious about knowledge work to read Drucker’s writings on the subject, and especially Pink’s book mentioned above.

(potentially) related posts:
  1. <sigh> Automation…again
  2. Tester DNA
  3. Orchestrating Test Automation
Categories: Software Testing

Roles and Fluidity

Sun, 12/13/2015 - 10:02

I had a Twitter conversation about roles this week. I’ll recap it – and expand on my views; but first I’ll tell a story.

Very early on in the Windows 98 project, I was performing some exploratory testing on the Explorer shell and found an interesting (and slightly weird) bug. At the end of my session, I entered the bug (and several others) into our bug tracking system – but the one issue continued to intrigue me. So, I took some time to look at the bug again and reflect on what could cause it to happen. I dug into the source code in the area where the bug occurred, but there was nothing obvious. I couldn’t shake my curiosity, so I looked at the check-in history and read through the code again, this time focusing on code checked in within the last few weeks. My assumption was that this was a newly introduced bug, and that seemed like a reasonable way to narrow my focus.

Less than an hour later, I discovered that a particular Windows API was called several times throughout the code base, but on one occasion was called with the parameters reversed. At this point, I could have hooked up a debugger (or, some could say, should have already hooked up a debugger), but after visual examination of the code, the code of the API, and the documentation, I was positive I had found the cause of the error. I added the information to the bug report and started to pack my things to go home.

But I didn’t. I couldn’t.

I was bothered by how easy it was to make this particular error, and wondered if others had made the same error too. I sat down and wrote a small script which would attempt to discover this error in source code. Another hour or so later, I had a not-perfect, but pretty-good analyzer for this particular error. I ran it across the code base and found 19 more errors. I spot-checked each one manually, and after verifying they were all errors, added each of them to the bug tracking system.
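
That script is long gone, and I won’t name the API – but as a rough sketch of the technique, here’s what a heuristic scanner for a hypothetical SetRange(min, max) call (an API name invented for this example) might look like:

```python
import re
import sys

# Hypothetical: SetRange(min, max) is easy to miscall as SetRange(max, min).
# Flag any call whose *first* argument looks like a "max"-style value.
CALL = re.compile(r"SetRange\s*\(\s*(\w+)\s*,\s*(\w+)\s*\)")
MAX_LIKE = re.compile(r"max|upper|end", re.IGNORECASE)

def scan(path: str) -> None:
    with open(path, errors="replace") as f:
        for lineno, line in enumerate(f, start=1):
            match = CALL.search(line)
            if match and MAX_LIKE.search(match.group(1)):
                print(f"{path}:{lineno}: possible swapped arguments: {line.strip()}")

for path in sys.argv[1:]:
    scan(path)
```

Not perfect – identifiers that don’t follow the naming convention slip right through – but cheap enough to run on every check-in.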

Finally, I was about to go home. But as I was leaving, one of the developers on the team stopped by to ask how I found the bugs I had just entered. I told him the story above, and he suggested I add the tool to the check-in suite (along with several other static analysis tools) so that developers could catch this error before checking in. I sat back down, we reviewed the code, made a few tweaks, and I added the tool to the check-in system.

Over the course of several hours, my role changed from testing and investigation of the product, to analysis and debugging, to tool development, and finally to early detection / prevention. The changes were fluid.

On Twitter, a conversation started on detection vs. prevention. Some testers have a stance that those two activities are distinct, and that doing both makes you average (at best) at both. The conversation (although plagued by circular discussion, metaphors, and 140-character limits) centered around the point that you can’t do multiple roles simultaneously. While I agree completely that you cannot do multiple roles simultaneously, I believe (and have proven over 20+ years) that it is certainly possible to move fluidly through different roles. Furthermore, I can say anecdotally that people who can move fluidly through different roles tend to have the most impact on their teams.

To this day, I figure out what needs to be done, and I take on the role necessary to solve my team’s most important problems. Even though I have self-identified as a tester for most of my career, I don’t see a hard line between testing and developing (or detecting and preventing). In fact, that may be one of the roots of conversations like this. For years, I’ve considered the line between development and testing to be a very thin grey line. This is reflected in my story above, and in many of my writings.

Today, however, I don’t see a line at all. Or – if it’s there, it’s nearly invisible. It’s been a freeing experience for me to consider software creation as an activity where I can make a significant contribution in whatever areas make sense – at any given moment.

Sure – there are places where develop-then-test still exists, and this sort of role fluidity is difficult there (but not impossible). But for those of us shipping frequently and making high-quality software for thousands (or millions) of customers, I think locking into roles is a bottleneck.

The key to building a great product is building a great team first. To me, great teams aren’t bound by roles; they’re driven by moving forward. Roles can help define how people contribute to the team, but people can – and should – flow between roles as needed.

(potentially) related posts:
  1. Exploring Test Roles
  2. Activities and Roles
  3. Peering into the white box
Categories: Software Testing

What Happened?

Mon, 11/23/2015 - 19:36

As I approach the half-century mark of existence, I’ve been reflecting on how I’ve ended up where I am…so excuse me while I ramble out loud.

Seriously, how did a music major end up as a high-level software engineer at a company like Microsoft? I have interviewed hundreds of people for Microsoft who, on paper, are technology rock stars, and I (yes, the music major) have had to say no-hire to most of them.

I won’t argue that a lot of it is luck – but sometimes being lucky is just saying yes at the right times and not being afraid of challenges. Yeah, but it’s mostly luck.

I think another part is my weird knack for learning quickly. When I was a musician (I like to think I’m still a musician; I just don’t play that often anymore), I was always the one volunteering to pick up doubles (second instruments) as needed, or to fill whatever hole needed filling in order to get into the top groups. Sometimes I would fudge my background if it would help – knowing that I could learn fast enough to not make myself look stupid.

In grad school (yeah, I have a master’s in music too), I flat out told the percussion instructor – who had a packed studio – that I was a percussionist. To be fair, I played snare drum and melodic percussion in drum corps for several summers, but I didn’t have the years of experience of many of my peers. So, as a grad student, I was accepted into the studio, and I kicked ass. In my final performance of the year for the music staff, one of my undergrad professors blew my cover and asked about my clarinet and saxophone playing. I fessed up to my professor that I wasn’t a percussion major as an undergrad, and that I had lied to get into his program. When he asked why, I told him that I thought if I had told him the truth, he wouldn’t have let me into the percussion program. He said, “You’re right”. And then I nailed a marimba piece I wrote and closed out another A on my master’s transcript.

I have recently discovered Kathy Kolbe’s research and assessments on the conative part of the brain (which works with the cognitive and affective parts of the brain). According to Kolbe, the cognitive brain drives our preferences (like Myers-Briggs or Insights training measure), but conative assessments show how we prefer to actually do things. For grins, I took a Kolbe assessment, and sure enough, my “score” gives me some insights into how I’ve managed to be successful despite myself.

I’m not going to spout off a lot about it, because I take all of these assessments with a grain of salt – but so far, I give this one more credit than Myers-Briggs (which I think is ok), and Insights (which I find silly). I am curious if others have done this assessment before and what they think…

By the time my current product ships, I’ll be hovering around the 21-year mark at Microsoft. Then, like today, I’m sure I’ll still wonder how I got here. I can’t see myself ever stopping to think about this.

And then I’ll find something new to learn and see where the journey takes me…

(potentially) related posts:
  1. Twenty Years…and Change
  2. Twenty-Ten – my best year ever
  3. 2012 Recap
Categories: Software Testing

Let’s Do This!

Thu, 11/19/2015 - 15:47

A lot of people want to see changes happen. Some of those want to make change happen. Whether it’s introducing a new tool to a software team, changing a process, or shifting a culture, many people have tried to make changes happen.

And many of those have failed.

I’ve known “Jeff” for nearly 20 years. He’s super-smart, passionate, and has great ideas. But Jeff gets frustrated about his inability to make changes happen. On nearly every team he’s ever been on, his teammates don’t listen to him. He asks them to do things, and they don’t. He gives them a deadline, and they don’t listen. Jeff is frustrated and doesn’t think he gets enough respect.

I’ve also known “Jane” for many years. Jane is driven, to say the least. Unlike Jeff, she doesn’t wait for answers from her peers; she just does (almost) everything herself and deals with any fallout as it happens (and as time allows). It doesn’t always go well, and sometimes she has to backtrack, but progress is progress. Jane enjoys creating chaos and has no problem letting others clean up whatever mess she makes. Jane is convinced that the people around her “just don’t know how to get things done.”

Over the years, I’ve had a chance to give advice to both Jeff and Jane – and I think it’s worked. Jeff has been able to get people to help him, and Jane leaves fewer bodies in the wake of progress.

Jeff – as you may be able to tell – can be a bit of a slow starter. Or sometimes, a non-starter. He once designed an elaborate spreadsheet and sent mail to a large team asking every person to fill in their relevant details. When nobody filled it out, Jeff couldn’t believe the disrespect. I talked with him about his “problem” and asked why he needed the data. His answer made sense, and I could see the value. Next, I asked if he knew any of the data he needed from the team. “Most of it, actually”, he started, “but I don’t want to guess”.

Thirty minutes later, we filled out the spreadsheet, guessing where necessary, and re-sent the information to the team. In the email, Jeff said, “Here’s the data we’re using; please let me know if you need any corrections.” By the next morning, Jeff had several corrections in his inbox and was able to continue his project with a complete set of data. In my experience, people may stall when asked to produce work from scratch, but will gladly fix “mistakes”. Sometimes, you just need to kick-start folks a bit to lead them.

Jane needed different advice. Jane is never going to be someone who puts together an elaborate plan before starting. But, I was able to talk Jane into taking just a bit of time to define a goal, and then create a list, an outline, or a set of tasks (or a combination), and sharing it with a few folks before driving forward. The time impact was either minimal or huge (depending on whether you asked me, or Jane), but the impact on her ability to get projects done was massive no matter who you ask. These days, Jane not only gets projects done without leaving bodies in her wake, but she actually receives (and welcomes) help from others on her team.

There are a lot of other ways to screw up leadership opportunities, and countless bits of advice to share to avoid screw-ups. But – the next time you want to make change and it’s not working, take some time to think about whether the problem is really with the people around you…or if the “problem” is you.

(potentially) related posts:
  1. Making Time
  2. Give ‘em What They Need
  3. The Easy Part
Categories: Software Testing

EuroSTAR Mobile Deep Dive

Sun, 11/08/2015 - 21:35

I posted the slides from my EuroSTAR Mobile Deep Dive presentation on SlideShare.

I had a great time, and hope you find them useful. Let me know if you have questions.

(potentially) related posts:
  1. Eurostar 2012 – Slides
  2. Surface RT and EuroSTAR recap
  3. Mobile Application Quality
Categories: Software Testing

Worst Presentation Ever

Fri, 11/06/2015 - 06:08

Last night I dreamt about the worst presentation ever. Sometimes I was presenting, sometimes I was watching, but it was frighteningly bad. Fortunately, my keynote this morning went well – and now that it has, I’ll share what happened (including some conscious editing to make sure I cover all of the bases).

It begins…

Moderator: I’d like to introduce Mr. Baad Prezenter. (resume follows)

Speaker: (taps on microphone – “is this on?”) “Good Morning!” (When the speaker doesn’t get the proper volume of answer, he repeats, louder: “Good Morning!” The audience answers louder, and he thinks he’s engaged them now.)

Speaker then re-introduces himself repeating everything the moderator just told the room. After all, it’s important to establish credibility.

Speaker: “I’d like to thank Paul Blatt for telling me about this conference, Suzie Q for providing mentorship…” The list of thank-yous goes on for a few minutes before he thanks the audience for attending his talk (even though many of them wish they hadn’t). Finally, the speaker moves on from the title slide to the agenda slide. He reads each bullet out loud for the audience members who are unable to read. He notices that one of the bullet points is no longer in the presentation and chooses that moment to talk about it anyway.

minutes pass…

The next slide shows the phonetic pronunciation of the presenter’s main topic along with a dictionary definition. The presenter reads this slide, making sure to emphasize the syllables in the topic. It’s important that the audience know what words mean.

15 minutes in, and finally, the content begins.

The speaker looks surprised by the content on the next slide.

Speaker: “I actually wasn’t going to talk about this, but since the slide is up, I’ll share some thoughts.” The speaker’s thoughts consist of him reading the bullet points on the slide to the audience. His next slide contains a picture of random art.

Speaker: “This is a picture I found on the internet. If you squint at it while I talk about my next topic, you may find that it relates to the topic, but probably not. But I read that presentations need pictures, so I chose this one!”

Speaker spends about 15 minutes rambling. It seems like he’s telling a story, but there’s no story. It’s just random thoughts and opinions. Some audience members wonder what language he’s speaking.

The moderator flashes a card telling him there are 10 minutes left in his presentation.

Speaker: “I’m a little behind, so let’s get going”. Finally, on the next slide are some graphics that look interesting to the audience and information that seems like it would support the topic. But the speaker skips this slide, and several more.

Speaker: “Ooh – that would have been a good one to cover – maybe next time”. Finally, the speaker stops on a slide that looks similar to one of the earlier slides.

Speaker (noticing that this slide is a duplicate): “I think I already talked about this, but it’s important, so I want to cover it.” Speaker reads the bullet points on the slide. At this point he hasn’t turned to face the audience in several minutes.

The next slide has a video of puppies chasing a raccoon that apparently has a lot to do with the topic. Unfortunately, the audio isn’t working, so the speaker stops the presentation and fiddles with the cable for a minute. Finally, he has audio, and restarts the presentation.

From the beginning.

He quickly advances to the video – stopping only once to talk about a slide he almost added that included an Invisible Gorilla – and plays it for the audience. The audience stares blankly at the screen and wonders what drew them to this presentation in the first place.

Finally, the speaker gets to his last slide. It’s the same as the agenda slide, but…the bullet points are in ITALICS. He reads each bullet point again so he can tell them what they learned…or could have learned, thanks them, and sends them to lunch.

Twenty minutes late.

The audience is too dazed, and too hungry, to fill out evaluation forms, so the speaker fills them out for them.

They loved him.

(potentially) related posts:
  1. Presentation Maturity
  2. Online Presentation: Career Tips for Testers
  3. One down, two to go…
Categories: Software Testing

Scope and Silos

Thu, 11/05/2015 - 08:14

I’ve watched a lot of teams try to be more agile or more adaptive, or just move to a faster shipping cadence. It has taken me a while, but I think I see a pattern, and the hard stuff boils down to two things.

Scope and Silos

Scope

Scope, in this context, is everything that goes into a feature / user story. For many developers in the previous century, this meant getting the thing to compile and sort-of work, and then letting test pound quality into it. That worked fine if you were going to spend months finding and fixing bugs, but if you want to ship every week, you need to understand scope, and figure out a way to deliver smaller pieces that don’t lower customer value.

Scope includes architecture, performance, tests, telemetry, review, analysis, interoperability, compatibility, and many, many other things beyond “sort-of works”. There may not be work to do for all of these items, but if you don’t consider all of them for all stories, you end up with an application where half of the features are incomplete in some way. If half of your application is incomplete, are you ready to ship?

Silos

The second “problem” I see – mostly in teams transitioning from predictive development models – is silos (or strict adherence to team ownership). You can find these teams by asking people what they do. They’ll say, “my team owns shimmery blue menus”, or “I own the front page”. When you have a team full of people who all own an isolated piece of the product, you very likely will have a whole lot of stories “in progress” at once, and you end up with the first problem above.

I’ve frequently told the story of a team I was on that scheduled (what I thought was) a disproportionate number of features for a release in a specific area. When I asked why we were investing so much in that area, I was told that, due to attrition and hiring, the team that owned the area was much larger than the other teams.

If you’re shipping frequently – let’s say once a week, you want every release to have more value to the customer than the previous release. Value can come from stability or performance improvements, or from new features or functionality in the product. On a team of twenty people, delivering twenty new stories is probably not a great idea. Failing to finish any of those stories and delivering no new functionality is worse.

So pick the top 3 (ish) stories, and let the team deliver them together. Forget about who reports to whom, and who owns what. Figure out what’s most important for the customer, and enable the team to deliver value. Everyone may not be involved in delivering a story each release (there’s bound to be fundamental, infrastructure, and other similar work that needs to be done). That’s ok – let the team self-organize and they’ll do the right thing.

In other words, I think there’s a lot of improvement to be discovered by defining work better and limiting work in progress. Not rocket science, but often forgotten.

(potentially) related posts:
  1. Silos, Walls, Teams and TV
  2. Working on Lync
  3. In the Middle
Categories: Software Testing