And then I almost fainted when I saw the sample code they included for Hello, World programs.
As you know, gentle reader, I’ve often thumped tubs about how any teaching program that outputs Hello World is training the youngsters to program defects in from the outset: because “World” in this instance is a noun of direct address, it should be set off by a comma (Hello, World).
So when I saw the sample code included, where the programs for Java and C have that very comma in them, I was amazed. I had to immediately tweet about how it changed my life.
But then I looked closer: In addition to Java and C, the article includes samples of Python and Perl. And the commas are missing.
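For the record, the fix is a single character; here’s what the corrected Python sample would look like (the Perl sample needs the same comma):

```python
# "World" is a noun of direct address, so it earns its comma.
print("Hello, World")
```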
So I guess it’s more of a crash course in development than the writer intended.
It’s a collection kludged together, and nobody but QA is going to think to check for consistency, or to know whether there’s a problem lurking in different areas of the software. Because the milestone and the deadline were met.
In my last post, I wrote a bit about what it takes to build a great team, and how it takes a great team to build great software (which also covers about half of what I spoke about at STAR West in October). In this post, I’ll see if I can cover the other main points from that presentation.

Leadership and Management
Many people still get leadership and management confused. One does not have to be a manager to be a leader (and most certainly, not all managers are good leaders). As someone who has spent the majority of my career as a peer to test managers (yet in a non-manager role), I frequently see the manager role as one that runs the testing business, while my role is to improve the testing business (in fact, I would say that many of my non-management Principal band peers provide as much, or more, leadership to the team than the people-managers do).
Many people (and company cultures) assume that career growth for a software engineer always involves a “promotion” into management. Too often, this policy results in turning great software leaders into mediocre managers – or worse, enables mediocre engineers to prolong their career by becoming completely inadequate people-managers.
The formula to fix this isn’t difficult.
- Let the people on your team with demonstrated skills in people management manage the team (and there are dozens of ways to discover who has, and who does not have, these skills before making someone a manager).
- Encourage strong engineers to take on team leadership (but not necessarily management roles).
In my last post, I said there’s a direct relationship between team health and product quality. One surefire way to negatively impact team health (and subsequently product quality) is to put people in roles they’re not ready for – or roles where they cannot succeed.

Learning Leadership
Not many things irk me more than self-proclaimed leaders. Being a great leader involves study and hard work. Just as great engineers know patterns and heuristics that enable them to build great software, great leaders know patterns and heuristics that enable them to lead.
Too many leaders (or those who call themselves leaders) rely on only a few leadership tools – often thinking that if they merely tell people what to do, they’ll do it. Leadership is influence, and influence requires credibility, compassion, humility, strategy, and many other factors. You can’t lead with only a handful of leadership ideas – you need dozens or more.
It’s just as important for a great leader to have a toolbox full of leadership ideas as it is for a software engineer to have a toolbox full of design ideas. Not enough people understand this point.
If you think you’re a leader – or if you want to be one – experience alone is not enough to get you there. Work at it, practice, and learn. And then work at it, practice, and learn some more.
Here’s another failure story, per the post where I complained about people not telling enough test failure stories.
Years ago, after learning about Keyword-Driven Automation, I wrote an automation framework called OKRA (Object Keyword-Driven Repository for Automation). @Wiggly came up with the name. Each automated check was written as a separate Excel worksheet, using dynamic dropdowns to select from the available Action and Object keywords in Excel. The driver was written in VBScript via QTP (the general pattern is sketched after the list below). It worked for a little while. However:
- One Automator (me) could not keep up with 16 programmers. The checks quickly became too old to matter. FAIL!
- An Automator with little formal programming training, writing half-assed code in VBScript, could not get help from a team of C#-focused programmers. FAIL!
- The product under test was a .NET WinForms app full of important drag-and-drop functionality, sitting on top of constantly changing, time-sensitive data. Testability was never considered. FAIL!
- OKRA was completely UI-based automation. FAIL!
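For anyone who hasn’t seen the keyword-driven pattern, here’s a minimal sketch of what a driver like OKRA’s does. It’s in Python rather than OKRA’s actual Excel/VBScript/QTP stack, and every keyword, object name, and function in it is made up for illustration:

```python
# A check is a sequence of (Action, Object, Value) keyword rows; in
# OKRA, each check lived in its own Excel worksheet.
CHECK_LOGIN = [
    ("type",   "username_field", "testuser"),
    ("type",   "password_field", "secret"),
    ("click",  "login_button",   None),
    ("verify", "welcome_banner", "Welcome, testuser"),
]

# Action keyword implementations; a real driver would call UI
# automation APIs here instead of printing.
def type_into(obj, value):
    print(f"typing {value!r} into {obj}")

def click(obj, _value=None):
    print(f"clicking {obj}")

def verify(obj, expected):
    print(f"asserting that {obj} shows {expected!r}")

# The driver maps Action keywords to functions and replays each row.
ACTIONS = {"type": type_into, "click": click, "verify": verify}

def run_check(rows):
    for action, obj, value in rows:
        ACTIONS[action](obj, value)

run_check(CHECK_LOGIN)
```

The appeal is that non-programmers compose the rows; the trap, as the list above shows, is that somebody still has to program (and maintain) the driver and keyword library.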
Later, a product programmer took an interest in developing his own automation framework – one that would allow manual testers to write automated checks by building visual workflows, using a Microsoft technology called MS Workflow or something like that. The programmer worked on it in his spare time over the course of about a year. It eventually faded into oblivion and was never introduced to testers. FAIL!
Finally, I hired a real automator, with solid programming skills, and attempted to give it another try. This time we picked Microsoft’s recently launched CodedUI framework and wrote the tests in C# so the product programmers could collaborate. I stood in front of my SVP and project team and declared,
“This automation will shave 2 days off our regression test effort each iteration!”
- The automator was often responsible for writing automated checks for a product they barely understood. FAIL!
- Despite being marketed by Microsoft as the best automation framework for .NET WinForms apps, CodedUI failed to quickly identify most UI objects, especially third-party controls. FAIL!
- Although, at first, I pushed for significant amounts of automation below the presentation layer, the automator focused more energy on UI automation, and I eventually gave in too. The tests were slow at best, and human testers could not afford to wait. FAIL! Note: this was not the automator’s failure; it was my poor direction.
At this point, I’ve given up all efforts to automate this beast of an application.
Can you relate?
In North American football, the offense sometimes runs a draw play, which is a delayed handoff that gets the moving parts of the defense to act before the offense turns the ball over to its running back:
The idea behind a draw play is to attack aggressive, pass-rushing defenses by “drawing” them downfield. This creates larger gaps between defenders and thereby allows the offense to effectively run the ball. Draw plays are often run out of the shotgun formation, but can also be run when the quarterback is under center. These types of draw plays are sometimes referred to as “delayed handoffs”. The running back will most often run straight downfield through the “A-Gap” (the space between the center and the offensive guard), although there are more complicated variations.
Now, in our deadline-driven software development world, we often find ourselves typing and clicking as fast as we can to get as much test coverage as we can in our allotted time. In a way, we testers are the aggressive, pass-rushing linebackers, charging toward our software’s operations as fast as possible. Kinda like I started charging into this metaphor too quickly and find myself off-balance, unable to elegantly segue to my point.
Within applications, you can sometimes find defects by delaying actions. If your software integrates with other software or services asynchronously, the expectations of the code might not match how fast–or slow–a user might interact with the software.
For example, an e-commerce shopping cart dealing in distinct units might mark a unit as Hold, Wait For Payment when a user adds it to the cart and starts the payment process. The hold might have a timeout associated with it – an expectation that if the user does not complete the payment within five minutes (or two minutes, or twenty minutes), the payment has been abandoned and the system should remove the hold. In the case of paying with PayPal, however, the PayPal window itself can linger for that duration and more, ready to accept your login and confirmation of payment, at which time PayPal will send a payment received message to your system – for an item whose hold might have been relinquished and might not be available any more. How will you know unless you hold onto that ball for a couple of seconds and let all the integrated software make their plays?
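To make that race concrete, here’s a minimal sketch of the hold-timeout collision. Every name and number in it is hypothetical; your cart and payment integration will look different:

```python
import time

HOLD_TIMEOUT_SECONDS = 5 * 60  # five minutes (or two, or twenty)

holds = {}  # item_id -> the time the hold was placed

def place_hold(item_id):
    """Mark a distinct unit as Hold, Wait For Payment."""
    holds[item_id] = time.monotonic()

def hold_is_live(item_id):
    placed = holds.get(item_id)
    return placed is not None and (time.monotonic() - placed) < HOLD_TIMEOUT_SECONDS

def on_payment_received(item_id):
    """Handle the PayPal-style 'payment received' message."""
    if not hold_is_live(item_id):
        # The draw play found its gap: the user paid, but the hold
        # already expired and the unit may have been resold.
        raise RuntimeError(f"payment received for an expired hold on {item_id}")
    print(f"shipping {item_id}")
```

The draw test, then, is exactly what the football play suggests: place the hold, wait past the timeout, and only then deliver the payment message to see how the system reacts.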
Another example: Financial software that uses two stages of authentication. The first time you log in from a browser, it prompts you for your user name and then directs you to a security question; after you answer that correctly, it prompts you for your password. If you log in and don’t do anything for ten minutes, your session times out. But what happens if it takes you longer than ten minutes to answer your security question? What happens if you answer the security question but don’t enter a password in those ten minutes?
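A sketch of that scenario, again with hypothetical names and the ten-minute number taken from the example above:

```python
import time

SESSION_TIMEOUT_SECONDS = 10 * 60  # the ten-minute session timeout

class LoginSession:
    """Two-stage login: security question first, then password."""

    def __init__(self, username):
        self.username = username
        self.started = time.monotonic()
        self.stage = "security_question"

    def _check_timeout(self):
        if time.monotonic() - self.started > SESSION_TIMEOUT_SECONDS:
            raise TimeoutError(f"session expired at stage {self.stage!r}")

    def answer_security_question(self, answer):
        self._check_timeout()  # what if the user took eleven minutes to get here?
        self.stage = "password"

    def enter_password(self, password):
        self._check_timeout()  # or answered the question, then went to lunch?
        self.stage = "logged_in"
```

The draw test asks whether the real system fails this cleanly, or whether it leaves the user stranded somewhere between stages.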
The draw test is a valid line of testing because your users don’t live to fill out your forms. In their real lives, they’re called to meetings, go to lunch, minimize the browser window when a co-worker comes in, or otherwise temporarily delay their activity in a fashion that might put their next actions beyond your software’s expectations. If you take a little time, you can identify the places where your system might allow this and make sure your software handles delayed action on your user’s part elegantly.
I picked up my Xbox One this morning – a special white console with “I made this!” engraved on it. The reviews started appearing last night, and for the most part, they’re quite positive (and I’m not surprised by the drawbacks listed in the reviews I’ve read so far). Now that this project is officially behind me – and before I figure out what’s next – I have some backlog blog posts to share.

Lessons Learned
I gave a keynote at STAR West in October on “Lessons learned from Xbox One”. Given that it was an unreleased product at the time of the presentation, rather than talk about the details, I spoke mostly about the team, and how strong team principles lead to great software projects (I spoke of my love of the Xbox team in a previous post – http://angryweasel.com/blog/?p=723). I thought I’d share a few of those lessons.

Lesson 1: Build a Great Team
In order to build great software, build a great team first. I’ll say that differently in case it didn’t sink in (or make sense): worry about building a great team first, and then let them build a great product. I realize this isn’t always an option for every organization, but I see too many teams take an existing group of people, and then try to make a product that isn’t in the sweet spot of that organization’s strengths. A good team can usually make a few good things, but a great team can make anything great.
That leads to a phrase I heard once that rings too true – “Don’t ship your org chart”. Just over ten years ago, I was working on Windows CE (an embedded operating system loosely based on Windows). During product planning, I noticed what looked like an unbalanced number of networking features (there were a lot – including some relatively obscure functionality). I didn’t know if it was a product strategy decision, customer requests, or what, so I asked. The answer was, “Our networking team has a lot of people, and we need to keep them busy.” To this day, I still don’t know why there was no attempt to shift people around as needed for features customers wanted, but I do know it seemed silly at the time, and it seems silly now.
Which leads to…

Lesson 2: Diversity and strengths
I’m a believer in teams filled with (or at least largely populated by) specializing generalists. Everyone certainly doesn’t have to be able to do everything, but every team member should be able to do many things well, and a few things very well. It’s important, when building a team, to have diversity in both generalization and specialization – and to match those skills with what’s needed for your project.
In my opinion (and experience), you can’t have enough deep specialization. My official job title at Microsoft is Principal SDET – that means I’m in the Principal band, and that I don’t manage people. I think far too many testers (and organizations) see management as the only career growth path, when there is huge value in developing, growing, and nurturing experienced individual test contributors (which reminds me, I wrote a paper about the value of highly experienced non-manager testers a few years ago).
There are 100 or so Principal SDETs at Microsoft (plus more in the management ranks that I haven’t counted lately). When I joined the team two years ago, there were five of us (Principal testers) – two on the Xbox Live team, and three on console. For various reasons, we wanted a name for our virtual team on the console software team. The mythical Hydra name was already in use to describe our three-OS model, so after a bit of deliberation, we named ourselves after Hagrid’s three-headed dog in Harry Potter, and Team Fluffy was born. Over time, we expanded (and promoted) our way to eight Principal testers in console, and three in Live, for what I believe is the highest team concentration of Principal testers at the company. Someone asked once if we needed that many Principal ICs – but honestly, I don’t know how we would have made this product without the diversity and deep skills of this group. I think the eight of us all contributed significantly to getting this product shipped on time with quality.

Teams and Products
Yes – products are important, but I’m a huge believer in putting the team first. I think it’s just as challenging to build a great team as it is to build a great product, and we should all put the same care and passion into hiring and developing our teams as we do into building great software.
I’ll share a few other lessons later this week.
Have you ever been to a restaurant with a kitchen window? Well, sometimes it may be best not to show the customers what the chicken looks like until it is served.
A tester on my team has something similar to a kitchen window for his automated checks; the results are available to the project team.
Here’s the rub:
His new batches of automated check scenarios are likely to result in, say, a 10% failure rate (e.g., 17 failed checks). These failures are typically bugs in the automated checks, not in the product under test. Note: this project only has one environment at this point.
When a good, curious product owner looks through the kitchen window and sees 17 failures, it can be scary! Are these product bugs? Are these temporary failures?
Here’s how we solved this little problem:
- Most of the time, we close the curtains. The tester writes new automated checks in a sandbox, debugs them, then merges them to a public list.
- When the curtains are open, we are careful to explain, “this chicken is not yet ready to eat”. We added an “Ignore” attribute to the checks so they can be filtered from sight (a rough sketch of the idea follows).
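The post doesn’t say which test framework is involved; purely as a sketch of the idea, in pytest terms with a made-up marker name, the curtain might look like this:

```python
import pytest

# Register the custom marker (e.g., in pytest.ini under "markers =")
# so pytest doesn't warn about an unknown mark.

@pytest.mark.ignore  # chicken not yet ready to eat: hide from the public run
def test_new_checkout_flow():
    ...  # still being debugged in the sandbox

def test_existing_login():
    assert True  # ready to serve; fine to show through the kitchen window
```

The public, product-owner-facing run then filters with pytest -m "not ignore", while the tester’s sandbox run drops the filter and sees everything.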
Developers, listen to the kind man Frank Hayes as he explains why you should never set the cat on fire:
QA: Do just the opposite of what he says, except for the first verse and the cat thing.