Apple is making Swift open source
This week Apple announced that Swift will soon be open source. In the keynote address at the Worldwide Developers Conference, Craig Federighi was met with applause when he said that Apple thinks “Swift should be everywhere and used by everyone,” adding, “Today we’re announcing that Swift will be open source.” Swift joins the growing array of technology that Apple has decided to open source as a key part of its software strategy. Federighi also outlined the changes we can expect in Swift 2.0, including “all new optimization tech” geared toward complex applications. New language features in Swift 2.0 include enhanced error handling, protocol extensions, and interfaces as synthesized headers in Xcode. The compiler and the standard libraries for iOS, OS X, and Linux are expected to be released by the end of the year.
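For a flavor of what two of those features look like, here is a minimal Swift 2-style sketch of the new do/try/catch error handling and a protocol extension; the names ParseError, parseAnswer, and describeCount are invented for illustration.

```swift
// Swift 2-era sketch: error handling plus a protocol extension.
// ParseError, parseAnswer, and describeCount are invented examples.
enum ParseError: ErrorType {
    case NotANumber
}

func parseAnswer(input: String) throws -> Int {
    guard let value = Int(input) else { throw ParseError.NotANumber }
    return value
}

do {
    let answer = try parseAnswer("42")   // new do/try/catch error handling
    print("Parsed \(answer)")
} catch {
    print("Could not parse input: \(error)")
}

// Protocol extensions add default behavior to every conforming type:
extension CollectionType {
    func describeCount() -> String {
        return "This collection has \(self.count) element(s)."
    }
}

print([1, 2, 3].describeCount())
```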
France gives Google an ultimatum regarding the Right to be Forgotten
The “Right to be Forgotten” is a ruling the European Court of Justice issued a year ago, determining that citizens should be able to request that search engines remove links to pages from search results if the content is considered private. Although major search engines established request forms in June of last year, Google has only de-indexed content on its European domains. Still, Google has granted roughly 41% of all requests submitted in the year since the ruling. While Google contends that the Right to be Forgotten is a European ruling and should therefore apply only to European content, the head of France’s regulator CNIL insists that “for delisting to be effective, it must be world-wide.” The EU has no legal jurisdiction over Google outside of its own domains, but it is on a quest to protect citizen privacy. Due to the disagreement, this week CNIL mandated that Google comply with de-indexing demands within 15 days or face a potential fine just short of $170,000.
Net Neutrality rules go into effect today as scheduled
The FCC’s regulations treating the internet as a public utility go into effect this week, after a three-judge panel of the U.S. Court of Appeals rejected a petition for a stay on the matter. Several broadband providers, including AT&T and Verizon, petitioned for a stay of the FCC’s decision, arguing that the new rules were unfair and that the FCC did not follow proper procedure when creating them. The FCC voted 3-2 to approve the new rules last February; they ban internet service providers from throttling or blocking connections for specific content, services, or applications. Although the regulations are going into effect today as scheduled, litigation is expected to continue, as 10 separate lawsuits have been filed.
Security fix issued for all versions of IE and Windows
This week’s Patch Tuesday release from Microsoft addresses 45 unique vulnerabilities, 24 of which are said to be critical Internet Explorer risks. Of those, four expose users to the risk of remote code execution when using IE, which would give attackers the ability to access and alter devices from anywhere. While this may sound dangerous, this month’s Patch Tuesday is uncharacteristically light compared to recent months for Microsoft. The vulnerabilities come to light just as the browser’s successor makes its debut in the Windows 10 preview.
I mentioned in my last post that I have a new job at Microsoft (and I discussed it a bit more on the last AB Testing). During the interviews for the job, I talked a lot about quality. I used the agile quadrants as one example of how a team builds quality software (including my roles in each of the quadrants), but I also talked about quality software coming from a pyramid of activities and processes. I’ve been dwelling on the model for the last week or so, and wanted to share it for comments and feedback…or to just brain-dump the idea.
Processes / Practice / Culture
The base of software quality (and my pyramid) is in the craftsmanship and approach of the team. Do they care about good unit testing and code reviews, or do they check in code willy-nilly? Do they take pride in having a working product every day, or does the build fall on the floor for days on end? The base of the pyramid is critical for making quality software – but on established teams it can be the most difficult thing to change.
Code Quality (correctness)
An extension of PP&C above is code correctness. This is a more granular look specifically at code quality and code correctness. It includes attention to architecture, code design, use of analysis tools, programming tools, and overall attention to detail in writing great code.
Functional Quality
Unit tests, functional tests, integration / acceptance tests, etc. are all part of product quality. I italicize, because for some reason, some folks think that quality ends here – that if the tests pass, the product is ready for consumers. (un?)Fortunately, readers of this blog know better, so I’ll save the soapbox rant for another day. However, a robust and trustworthy set of tests that range from the unit level to integration and acceptance tests is a critical part of building software quality.
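To make the bottom rung of that range concrete, here is a minimal unit test sketch in Swift with XCTest; PriceCalculator is a hypothetical stand-in for real product code, not anything from an actual product.

```swift
import XCTest

// Hypothetical production code under test.
class PriceCalculator {
    func totalFor(quantity: Int, unitPrice: Double) -> Double {
        return Double(quantity) * unitPrice
    }
}

// A unit test: the fastest, most granular rung of the test ladder.
class PriceCalculatorTests: XCTestCase {
    func testTotalMultipliesQuantityByUnitPrice() {
        let calculator = PriceCalculator()
        XCTAssertEqual(calculator.totalFor(3, unitPrice: 2.5), 7.5)
    }
}
```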
Automate Everything
There are some folks in software in the “Automate Everything” camp. A lot of testers don’t like this camp, because they think it will take away their job. Whatever.
As far as I can tell from my limited research on this camp, Automate Everything means automating all of the unit, functional, and integration tests…and maybe a chunk of the performance and reliability tests. For some definitions of “Everything”, I agree. Absolutely automate all of this stuff, and let (make) the developer of the code under test do it. The tester’s mind is much better put to use higher up the pyramid.
-Ilities
Performance, reliability, usability, I18N, and other non-functional requirements / ilities are what begin to take your product from something that is functionally correct to something that people may actually want to use. Often, the ilities are ignored or postponed until late in the product cycle, but good software teams pay a lot of attention to this part of the pyramid throughout the product cycle.
Customer Quality
It doesn’t matter how much you kick ass everywhere else in the pyramid. If the customers don’t like your product, you made a shitty product. It may be a functionally correct masterpiece that passes every test you wrote, but it doesn’t matter if it doesn’t provide value for your customers. Team members can “act like the customer”, be an “advocate for the customer”, or flat out, “be the customer”, but I’ll tell you (for likely the twentieth time on this blog), as a member of the product team, you are not the customer! That said, this is the part of the pyramid where good testers can shine in finding the fit and finish bugs that cause a lot of software to die the death of a thousand paper cuts.
Now, if you do everything else in the pyramid well, you have a better shot at getting lucky at the top, but your best shot at creating a product that customers crave is to get quantitative and qualitative feedback directly from your users. Use data collection to discover how they’re using the product and what errors they’re seeing; ask them questions (in person, or via surveys); monitor twitter, forums, uservoice, etc. to see what’s working (and not working); and use the feedback to adapt your product. Get it in their hands, listen to them, and make it better.
More to come as I continue to ponder.
Google launches a new hub to streamline security, privacy, and account management
This week at Google I/O, Google announced My Account, its new center of control for privacy and security. Google noted that security and privacy are “two sides of the same coin,” and it is therefore giving users the ability to quickly access and control both from one place. My Account brings two new options, Privacy Checkup and Security Checkup, to simplify and guide users through the array of settings available to them. You can also manage the settings that shape your search experience, such as Web & App Activity and Location History in Google Maps. From My Account, you can also use the Ads Settings tool to adjust the ads shown based on your prior searches. And, last but not least, you can control which apps are connected to your account. In addition to My Account, Google set up a new site at privacy.google.com to answer additional questions.
Bitcoin trading rules are finalized in New York
After a two-year investigation into cryptocurrency, New York has become the first state in the US to finalize trading regulations for virtual currency and Bitcoin. Announced during a speech at the BITS Emerging Payments Forum in Washington, the new rules will affect all traders who sell, buy, or accept the virtual currency. The new regulations require anyone engaged in Virtual Currency Business Activity to apply for a license within 45 days of the regulation taking effect. The BitLicense regulations also outline various specific conditions that must be met to keep the license updated, “with regards to protections of consumers and anti-money-laundering compliance, capital adequacy, changes of ownership, and cybersecurity.” Unsurprisingly, the regulations have been met with a divided reaction.
Airbnb is making its machine learning technology open source
This week at Airbnb’s 2015 OpenAir developer conference, Airbnb announced two new open source technologies well worth checking out. Aerosolve, a tool written mostly in Java and Scala, uses “machine learning for humans” to assist with data discoveries within Airbnb. For example, Aerosolve can be used to easily depict the relationship between the price of a listing and the demand in the market. Airbnb also provided demos of the tool, including using Aerosolve to teach an algorithm how to paint in pointillism style, or to predict income based on US census data. Airflow, a workflow management platform, was built in-house to streamline processes for Airbnb’s engineering team. The tool is designed for authoring, scheduling, and monitoring data pipelines efficiently and scalably.
Both are readily available on Airbnb’s new site, airbnb.io, which now hosts all of the company’s open-source projects.
In January of 1995, I began some contract work (testing networking) on the Windows 95 team at Microsoft. Apparently, my work was appreciated, because in late May, I was offered a full time position on the team.
My first official day as a full time Microsoft employee was June 5, 1995.
That was twenty years ago today!
I never (ever!) thought I would be at any company this long. I thought computer software would be a fun thing to do “for a while” – but I didn’t realize how much I’d enjoy creating software, and dealing with all of the technical and non-technical things that come with it. I learned a lot – and even though my fiftieth birthday is close enough to see, I’m still learning, and still having fun – and that’s a good thing to have in a job.
I’ve had fourteen managers, and seventeen separate offices. I’ve made stuff work (and screwed stuff up) across a whole bunch of products. I’ve done a ton of testing, entered thousands of bugs, and written code that’s shipped in Windows, Xbox, and more (not bad for a music major who stumbled into software).
In a nice bit of coincidence, my twenty-year mark is also a time of change for me. After two years working on Project Astoria (look it up – it’s really cool stuff), it’s time for me to do something new at Microsoft…something that aligns more with my passions, skills, and experiences – and something that shows what someone with over two decades of software testing experience can do for modern software engineering.
I’ve joined (yet another) v1 product team at Microsoft. Other than a few contract vendors, the team of a hundred or so has no software testers. They hired me to be “the quality guy”. This setup could be bad news in many worlds, but my job is definitely not to try to test everything. Instead, my job is to offer quality assistance, help build a quality culture, assist in exploratory testing, and look at quality holistically across the product. I don’t know if any jobs like this exist elsewhere (inside or outside of Microsoft), but I’m excited by (and a bit scared of) the challenge.
More to come as I figure out what I do, and what it means for me as well as everyone else interested in software quality.
The Struts, “It Could Have Been Me”
This week in web perf news, the IRS website was used to steal over 100,000 tax returns, Google rolled out algorithm adjustments to remove offensive Google Maps results, Bing added how-old.net to its image search, and Google and Bing announced their searches will be indexing apps across multiple devices.
Criminals used IRS website to steal massive amount of tax information
A sophisticated breach of the IRS website helped criminals gain access to the tax returns of up to 100,000 people this week. The IRS disclosed that it believes the breach originated in Russia. The criminals used the personal information of real taxpayers to gain access to an IRS service called “Get Transcript”. They were able to download half of the forms they attempted and claimed tax refunds in the names of 15,000 people. The IRS announced it would notify by mail the 200,000 people potentially affected.
Google apologizes for racist search queries returning Google Maps results
Google gave users the power to edit and improve Google Maps, and it backfired. This week, Google officially apologized after “certain offensive search terms were triggering unexpected map results”. The Washington Post first shed light on the issue by showing that several racially offensive search queries returned the White House in Google Maps. Google has suspended users’ ability to edit Google Maps, stating it would work on “making the moderation system more robust.” The team has been working hard to extend the existing algorithmic change used to minimize the impact of Googlebombs in regular Google search. “Simply put, you shouldn’t see these kinds of results in Google Maps, and we’re taking steps to make sure you don’t,” said Jen Fitzpatrick, VP of Engineering and Product Management. Google said the change will gradually roll out globally to address the majority of the searches and will be refined over time.
Microsoft integrates how-old.net tool into Bing
Microsoft’s how-old.net tool has now been integrated into Bing’s image search. The popular tool debuted last month at Build 2015, allowing users to upload images of people and have their ages estimated using facial recognition APIs. Now users can apply the same tool to images found in search results. To test out the new addition, search for an image of a person on Bing and click the Images link on the search results page. Then, click on an image to select it, and hover over it. Click on the how-old.net tool when it appears on the right side of the image to try it out. The tool has proven to be very inaccurate at times, but Microsoft has been continuously working on improving the feature.
Bing and Google begin indexing apps for several devices
This week Bing announced that it is creating a “massive index of apps and app actions” to improve the model of how users interact with apps. The search engine revealed that it is analyzing the web for app links and actions markup. Bing suggested developers get an edge by adopting app linking and schema.org actions now, and provided detailed descriptions of how to do so. This means users will soon be able to find relevant content within iOS, Android, and Windows 10 apps from Bing search results (including Cortana).
Days later, Google announced that it will be extending app indexing to include iOS devices. Now a Google search will be able to surface relevant app content from both Android and iOS apps. App indexing on iOS will initially launch with a small group of apps as a test, but the technology will be made available to all developers as soon as possible. Developers can get to work on making sure their apps are ready now.
First, let me say that all of May has been a very difficult month weather-wise for those of us in Oklahoma, then later in May, for folks in Texas. Thankfully, all the tornadoes and flooding did not affect us personally, but we have friends and neighbors who were impacted and some of the stories are just tragic. So, I ask that if you are able to send a relief gift to the Red Cross designated for these disasters, please do so. It would really help those in need.
Here in Oklahoma we have had two years of extreme drought. One of the major lakes was over 31 feet below normal levels. Now, it has risen to 99% capacity. We have some lakes that are 33 feet above normal. Just in the month of May we have had 27.5 inches of rain, which shatters the record for the wettest month in history (the previous May record was 14.52 inches in 2013, and the all-time monthly record was 14.66 inches in June 1989). Texas has also seen similar records broken. In short, we’ve had all the rain we need, thank you. California, we would be happy for you to get some of this for your drought.
The image above is of the main street of my hometown, Chickasha, OK.
Then, there are the tornadoes that make everything even more exciting. One night this month, we had to take shelter twice but no damage, thankfully. Then yesterday morning I was awakened at 5:30 a.m. to the sounds of tornado sirens. That is freaky because you have to act fast to see what is really happening. In this case, the tornado was 40 miles away, heading the opposite direction. I question the decision to sound the alarm in that situation.
Anyway…with that context…
About a week ago, I started noticing ants everywhere in and around our house. I mean parades of them everywhere. Ironically, I even found one crawling on my MacBook Pro!
Then came the spiders, a plethora of other bugs, snakes, and even fish in some people’s yards. A friend reported seeing a solid white opossum near his house, which is very unusual.
And you perhaps heard that in one tornado event nearby on May 6, a wild animal preserve was hit, and it was reported for a while that lions, bears, tigers, etc. were loose. Turns out that was a false report, too. But it did make for some juicy Facebook pictures for “Tigernado” movies.
Other weird things have happened as well, such as storm shelters and entire swimming pools popping out of the ground due to the high water table (and poor installation in some cases)!
But back to the ants and bugs and why they are everywhere. Turns out that we have had so much rain, their nests and colonies were destroyed and they are now looking for other habitats. The same has occurred with spiders, snakes, mice and rats.
In fact, my wife and I are finding bugs we have never seen before. I had to look some of them up on the Internet just to know what kind of bug I was killing.
That caused me to think about a new testing analogy to reinforce a really great testing technique. To flush out the bugs in something, change the environment.
Of course, the difference here in this analogy is that software bugs are not like actual bugs in many regards. However, there are some similarities:
· Both have taxonomies
· Both can be studied
· Both can mutate
· Both can travel
· Both can destroy the structure of something
· Both can be identified and removed
· Both can be prevented
· Both can be hidden from plain view
The main differences are:
· Bugs have somewhat predictable behavior – not all software defects do
· Bugs can inhabit a place on their own initiative – software defects are created by people due to errors
(Although I have wondered how squash bugs know how just to infest squash plants and nothing else…)
In the recent onslaught of ants, it is the flooding that has caused them to appear in masses. In software, perhaps if you flooded the application with excessive amounts of data such as long data strings in fields, you might see some new bugs. Or, you could flood a website with concurrent transaction load to see new and odd behavior.
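As a rough sketch of the flooding idea (in Swift 2-era syntax; validateName is a hypothetical stand-in for whatever field validation your application performs, not a real API):

```swift
// Flush out bugs by flooding an input with oversized data.
// validateName is a hypothetical stand-in for real application logic.
func validateName(name: String) -> Bool {
    // Pretend this is production code with an undocumented length limit.
    return !name.isEmpty && name.characters.count <= 50_000
}

for size in [1_000, 100_000, 1_000_000] {
    let longInput = String(count: size, repeatedValue: Character("A"))
    print("Input of \(size) chars accepted: \(validateName(longInput))")
}
```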
Perhaps you could do the opposite and starve the environment of memory, CPU availability, disk space, etc. to also cause bugs to manifest as failures.
This is not a new idea by any means. Fault injection has been used for many years to force environmental conditions that might reveal failures and defects. Other forms of fault injection directly manipulate code.
Another technique is to test in a variety of valid operational environments that have different operating systems, hardware capacities and so forth. This is a great technique for testing mobile devices and applications. It’s also a great technique for web-based testing and security testing.
The main principle here is that if you can get the application to fail in a way that causes it to change state (such as from “normal state” to “failure state”), then it is possible to use that failure as a point of vulnerability. Once the defect has been isolated and fixed, not only has a defect been found and fixed, but another security vulnerability has also been eliminated.
Remember, as testers we are actually trying to cause failures that might reveal the presence of defects. Failure is not an option – it is an objective!
Although we commonly say that testers are looking for defects (bugs), the bugs are actually out of our view much of the time. They are in the code, the integration, APIs, requirements, and so forth. Yes, sometimes we see the obvious external bug, like an error message with confusing wording, or no message at all.
However, in the external functional view of an application or system, testers mainly see the indicators of defects. These can then be investigated for a final determination of what is really going on.
As testers, we can dig for the bugs (which can also be productive), or we can force the bugs to manifest themselves by flushing them out with environmental changes.
And let’s be real here. In some software, the bugs are not at all hard to find!
Me? I’ll continue to both dig and flush to find those defects. Even better, I’ll go upstream where the bugs often originate (in requirements, user stories, etc.) and try to find them there!
Whaaa…? Two posts on automation in one week?
Normally, I’d refrain, but for those who missed it on twitter, I recorded an interview with Fog Creek last week on the Abuse and Misuse of Test Automation. It’s short and sweet (and that includes my umms and awws).
We are pleased to announce that the ninth GTAC (Google Test Automation Conference) will be held in Cambridge (Greatah Boston, USA) on November 10th and 11th (Toozdee and Wenzdee), 2015. So, tell everyone to save the date for this wicked good event.
GTAC is an annual conference hosted by Google, bringing together engineers from industry and academia to discuss advances in test automation and the test engineering computer science field. It’s a great opportunity to present, learn, and challenge modern testing technologies and strategies.
You can browse presentation abstracts, slides, and videos from previous years on the GTAC site.
Stay tuned to this blog and the GTAC website for application information and opportunities to present at GTAC. Subscribing to this blog is the best way to get notified. We're looking forward to seeing you there!
I want to let you know about two events that will add value to your testing efforts. I'm very excited to be a speaker at the upcoming ASTQB Conference 2015 in Washington, D.C. on September 14 - 16. This is a special conference because we are giving focus to critical topics such as Cybersecurity Testing, Testing of Critical Applications, The Business Value of Testing, Agile Testing, and Test Leadership. In this conference, our goal is to provide valuable take-away ideas for increasing the reliability, security and quality of software projects.
I often tell my clients there are two places you don't want to be during or after a project - the newspaper and the courtroom. Your organization's reputation is at stake with every project. I'm sure, like me, you hear stories every week of another cyber attack, system failure, or failed project. The costs of these failures are enormous.
At this conference, we are bringing together some of the country's leading experts on cybersecurity, software quality, and test management to provide real-world solutions to some of the most challenging issues we face as software testers.
Early-bird pricing is still available until June 15. But, if you use the code "astqb2015a2u" during registration, you get an extra 10% discount!
To see more information and to register, just go to http://www.astqb.org/certified-tester-resources/astqb-software-testing-conference/
Free Webinar - Thursday, June 4 from 1 p.m. to 2 p.m. EDT
As part of the lead-up to the ASTQB Conference 2015, Taz Daughtrey and I will be presenting a free one-hour webinar on how to "Protect Your Company - and Your Software Testing Career." This free sneak peek of the ASTQB Conference will preview the keynote topics, tutorials and breakout sessions that are designed to keep you out of trouble.
In addition to the preview of the conference topics, Taz and I will share some practical information you can use immediately in testing cyber security, mobile applications and agile projects. I'll discuss a few of the new "Free and Cheap Tools" I have found recently as well.
I promise it will be an entertaining and informative time!
To register, visit https://attendee.gotowebinar.com/register/1292443299731168257
Very important note: This webinar will be capped at 100 people. We will probably get 300 or more registrants, so if you want to actually get into the session, you will want to log in early - at least 15 minutes prior.
I really hope you can join me at one or both of these events!
Thanks for reading,
I think this is the first time I’ve blogged about automation since writing…or, to be fair, compiling The A Word.
But yet again, I see questions among testers about the value of automation, whether it will replace testers, etc. For example, this post from Josh Grant asks whether there are similarities between automated trucking and automated testing. Of course, I think most testers will go on (and on) about how much brainpower and critical thinking software testing needs, and how test automation can never replace “real testing”. They’re right, of course, but there’s more to the story.
Software testing isn’t at all unique among professions requiring brain power, creativity, or critical thinking. I challenge you to bingoogle “Knowledge Work” or “Knowledge Worker” and not see the parallels to software testing in other professions. You know what? Some legal practices can be replaced by automation or by low-cost outsourcing – yet I couldn’t find any articles, blogs, or anything else from lawyers complaining about automation or outsourcing taking away their jobs (disclaimer – I only looked at the first two pages of results on simple searches). Apparently, however, there are “managers” (1000’s of them if I’m extrapolating correctly) who claim that test automation is a process for replacing human testers. Apparently, these managers don’t spend any time on the internet, because I could only find second-hand confirmation of their existence.
At risk of repeating myself (or re-repeating myself…) you should automate the stuff that humans don’t want (or shouldn’t have) to do. Automate the process of adding and deleting 100,000 records; but use your brain to walk through a user workflow. Stop worrying about automation as a replacement for testing, but don’t ignore the value it gives you for accomplishing the complex and mundane.
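To make that concrete, here is a toy Swift sketch of the kind of grunt work worth automating; RecordStore is an invented stand-in for a real product’s data layer.

```swift
// Churn through 100,000 record adds and deletes: dull for a human,
// trivial for a machine. RecordStore is a hypothetical example class.
class RecordStore {
    private var records = Set<Int>()
    func add(id: Int) { records.insert(id) }
    func delete(id: Int) { records.remove(id) }
    var count: Int { return records.count }
}

let store = RecordStore()
for id in 0..<100_000 { store.add(id) }
assert(store.count == 100_000, "bulk add failed")
for id in 0..<100_000 { store.delete(id) }
assert(store.count == 0, "bulk delete failed")
print("Bulk add/delete churn completed.")
```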
BugMagnet 0.8, pushed out to the Chrome Extension store today, allows users to define custom edge cases, boundaries and interesting examples. This was by far the most requested feature since BugMagnet came out, so I certainly hope that the new version helps people be more productive while testing.
Previously, users had to change the main config file and re-build the extension from the local source files. This was a hassle because it required a development environment setup, and it effectively required users to maintain their own version of the extension and follow source code updates.
The new version makes configuration changes trivial: Just click on the new “Configure BugMagnet” option in the menu, and you’ll see a pop-up window with the option to add local files. For a description of the configuration file format, see the Github repo main page.
This also means that we can distribute more usage-specific configuration files in the main repository. When users previously asked for configuration file changes to be merged into the main repository, I had a really difficult decision to make, balancing things that are useful to the majority against adding interesting boundary conditions. No more! Because people can now load whatever they want, and we can avoid overcomplicating menus for users who don’t need all the additional use cases, I’m happy to take in pull requests for additional libraries of examples. I’ll distribute them through the extras folder on Github, and later make a nice web page that allows people to add such config files with one click.
To get started with BugMagnet, grab it from the Chrome Web store.
After experimenting with a Test Case Management application’s Session-Test tool, a colleague of mine noted the tool’s overhead (i.e., the non-test-related waiting and admin effort forced by the tool). She said, “I would rather just use Notepad to document my testing.” Exactly!
Notepad has very little overhead. It requires no setup, no license, no logging in, few machine resources, it always works, and we don’t waste time on trivial things like making test documentation pretty (e.g., let’s make passing tests green!).
Testing is an intellectual activity, especially if you’re using automation. The test idea is the start. Whether it comes to us in the midst of a discussion, requirements review, or while performing a different test, we want to document it. Otherwise we risk losing it.
Don’t overlook the power of Notepad.
The software industry venerates the young. If you have a family, you’re too old to code. If you’re pushing 30 or even 25, you’re already over the hill.
Alas, the whippersnappers aren’t always the best solution. While their brains are full of details about the latest, trendiest architectures, frameworks, and stacks, they lack fundamental experience with how software really works and doesn’t. These experiences come only after many lost weeks of frustration borne of weird and inexplicable bugs.
Like the viewers of “Silicon Valley,” who by the end of episode 1.06 get the satisfaction of watching the boy genius crash and burn, many of us programming graybeards enjoy a wee bit of schadenfreude when those who have ignored us for being “past our prime” end up with a flaming pile of code simply because they didn’t listen to their programming elders.
(Link via tweet.)
My new book, Fifty Quick Ideas to Improve Your Tests, is now available on Amazon. Grab it at a 50% discount before Friday.
This book is for cross-functional teams working in an iterative delivery environment, planning with user stories and testing frequently changing software under tough time pressure. This book will help you test your software better, easier and faster. Many of these ideas also help teams engage their business stakeholders better in defining key expectations and improve the quality of their software products.
For more info, check out FiftyQuickIdeas.com
AWOLNATION, “Hollow Moon (Bad Wolf)”
A couple weeks ago I tweeted:
I prefer:
- Recovery over Perfection
- Predictability over Commitment
- Safety Nets over Change Control
- Collaboration over Handoffs
— Elisabeth Hendrickson (@testobsessed) May 6, 2015
Apparently it resonated. I think that’s more retweets than anything else original I’ve said on Twitter in my seven years on the platform. (SEVEN years? Holy snack-sized sound bytes! But I digress.)
@jonathandart said, “I would love to read a fleshed out version of that tweet.”
OK, here you go.
First, a little background. Since I worked on Cloud Foundry at Pivotal for a couple years, I’ve been living the DevOps life. My days were filled with zero-downtime deployments, monitoring, configuration as code, and a deep antipathy for snowflakes. We honed our practices around deployment checklists, incident response, and no-blame post mortems.
It is within that context that I came to appreciate these four simple statements.
Recovery over Perfection
Something will go wrong. Software might behave differently with real production data or traffic than you could possibly have imagined. AWS could have an outage. Humans, being fallible, might publish secret credentials in public places. A new security vulnerability may come to light (oh hai, Heartbleed).
If we aim for perfection, we’ll be too afraid to deploy. We’ll delay deploying while we attempt to test all the things (and fail anyway because ‘all the things’ is an infinite set). Lowering the frequency with which we deploy in order to attempt perfection will ironically increase the odds of failure: we’ll have fewer turns of the crank and thus fewer opportunities to learn, so we’ll be even farther from perfect.
Perfect is indeed the enemy of good. Striving for perfection creates brittle systems.
So rather than strive for perfection, I prefer to have a Plan B. What happens if the deployment fails? Make sure we can roll back. What happens if the software exhibits bad behavior? Make sure we can update it quickly.
Predictability over Commitment
Surely you have seen at least one case where estimates were interpreted as a commitment, and a team was then pressured to deliver a fixed scope in fixed time.
Some even think such commitments light a fire under the team. They give everyone something to strive for.
It’s a trap.
Any interesting, innovative, and even slightly complex development effort will encounter unforeseen obstacles. Surprises will crop up that affect our ability to deliver. If those surprises threaten our ability to meet our commitments, we have to make painful tradeoffs: Do we live up to our commitment and sacrifice something else, like quality? Or do we break our commitment? The very notion of commitment means we probably take the tradeoff. We made a commitment, after all. Broken commitments are a sign of failure.
Commitment thus trumps sustainability. It leads to mounting technical debt. Some number of years later, the team finds itself constantly firefighting, unable to make any progress.
The real problem with commitments is that they suggest that achieving a given goal is more important than positioning ourselves for ongoing success. It is not enough to deliver on this one thing. With each delivery, we need to improve our position to deliver in the future.
So rather than committing in the face of the unknown, I prefer to use historical information and systems that create visibility to predict outcomes. That means having a backlog that represents a single stream of work, and using velocity to enable us to predict when a given story will land. When we’re surprised by the need for additional work, we put that work in the backlog and see the implications. If we don’t like the result, we make an explicit decision to tradeoff scope and time instead of cutting corners to make a commitment.
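As a toy sketch of that arithmetic (all numbers are invented for illustration; this is the idea, not a real planning tool):

```swift
import Foundation

// Predict when a story will land from historical velocity alone.
// All numbers here are invented for illustration.
let recentVelocities = [21.0, 18.0, 24.0, 19.0]    // points per iteration
let averageVelocity = recentVelocities.reduce(0, combine: +)
                      / Double(recentVelocities.count)
let pointsAheadOfStory = 120.0                     // backlog points queued before it
let iterations = ceil(pointsAheadOfStory / averageVelocity)
print("Expect the story to land in about \(Int(iterations)) iterations.")
```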
Aiming for predictability instead of commitment allows us to adapt when we discover that our assumptions were not realistic. There is no failure, there is only learning.
Safety Nets over Change Control
If you want to prevent a given set of changes from breaking your system, you can either put in place practices to tightly control the nature of the changes, or you can make it safer to change things.
Controlling the changes typically means having mechanisms to accept or reject proposed changes: change control boards, review cycles, quality gates.
Such systems may be intended to mitigate risk, but they do so by making change more expensive. The people making changes have to navigate through the labyrinth of these control systems to deliver their work. More expensive change means less change means less risk. Unless the real risk to your business is a slogging pace of innovation in a rapidly changing market.
Thus rather than building up control systems that prevent change, I’d rather find ways to make change safe. One way is to ensure recoverability. Recovery over perfection, after all.
Fast feedback cycles make change safe too. So instead of a review board, I’d rather have CI to tell us when the system is violating expectations. And instead of a laborious code review process, I’d rather have a pair work with me in real time.
If you want to keep the status quo, change control is fine. But if you want to go fast, find ways to make change cheap and safe.
Collaboration over Handoffs
In traditional processes there are typically a variety of points where one group hands off work to another. Developers hand off to other developers, to QA for test, to Release Engineering to deliver, or to Ops to deploy. Such handoffs typically involve checklists and documentation.
But the written word cannot convey the richness of a conversation. Things will be missed. And then there will be a back and forth.
“You didn’t document foo.”
“Yes, we did. See section 3.5.1.”
“I read that. It doesn’t give me the information I need.”
The next thing you know it’s been 3 weeks and the project is stalled.
We imagine a proper handoff to be an efficient use of everyone’s time, but they’re risky. Too much can go wrong, and when it does progress stops.
Instead of throwing a set of deliverables at the next team down the line, bring people together. Embed testers in the development team. Have members of the development team rotate through Ops to help with deployment and operation for a period of time. It actually takes less time to work together than it does to create sufficient documentation to achieve a perfect handoff.
True Responsiveness over the Illusion of Control
Ultimately all these statements are about creating responsive systems.
When we design processes that attempt to corral reality into a neat little box, we set ourselves up for failure. Such systems are brittle. We may feel in control, but it’s an illusion. The real world is not constrained by our imagined boundaries. There are surprises just around the corner.
We can’t control the surprises. But we can be ready for them.