Introducing DOM Snitch, our passive in-the-browser reconnaissance tool

Google Testing Blog - Fri, 08/03/2012 - 10:46
By Radoslav Vasilev from Google Zurich

Every day modern web applications are becoming increasingly sophisticated, and as their complexity grows so does their attack surface. Previously we introduced open source tools such as Skipfish and Ratproxy to assist developers in understanding and securing these applications.

As existing tools focus mostly on testing server-side code, today we are happy to introduce DOM Snitch — an experimental* Chrome extension that enables developers and testers to identify insecure practices commonly found in client-side code. To do this, we have adopted several approaches to intercepting JavaScript calls to key and potentially dangerous browser infrastructure such as document.write or HTMLElement.innerHTML (among others). Once a JavaScript call has been intercepted, DOM Snitch records the document URL and a complete stack trace that will help assess if the intercepted call can lead to cross-site scripting, mixed content, insecure modifications to the same-origin policy for DOM access, or other client-side issues.
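
To illustrate the general idea (a simplified sketch only, not DOM Snitch's actual implementation; the report helper and log format are invented for this example), wrapping these APIs from an extension's injected script might look roughly like this:

  // Sketch: wrap document.write and the innerHTML setter to record a stack trace.
  (function () {
    function report(api, args) {
      // Record the document URL and a stack trace for later review.
      console.log('[snitch]', api, document.URL, new Error().stack, args);
    }

    var originalWrite = document.write;
    document.write = function () {
      report('document.write', arguments);
      return originalWrite.apply(document, arguments);
    };

    var proto = HTMLElement.prototype;
    var descriptor = Object.getOwnPropertyDescriptor(proto, 'innerHTML') ||
        Object.getOwnPropertyDescriptor(Element.prototype, 'innerHTML');
    if (descriptor && descriptor.set) {
      Object.defineProperty(proto, 'innerHTML', {
        get: descriptor.get,
        set: function (value) {
          report('HTMLElement.innerHTML', [value]);
          descriptor.set.call(this, value);
        }
      });
    }
  })();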



Here are the benefits of DOM Snitch:
  • Real-time: Developers can observe DOM modifications as they happen inside the browser without the need to step through JavaScript code with a debugger or pause the execution of their application.

  • Easy to use: With built-in security heuristics and nested views, both advanced and less experienced developers and testers can quickly spot areas of the application being tested that need more attention.

  • Easier collaboration: Enables developers to easily export and share captured DOM modifications while troubleshooting an issue with their peers.


DOM Snitch is intended for use by developers, testers, and security researchers alike. Click here to download DOM Snitch. To read the documentation, please visit this page.

*Developers and testers should be aware that DOM Snitch is currently experimental. We do not guarantee that it will work flawlessly for all web applications. More details on known issues can be found here or in the project’s issues tracker.
Categories: Software Testing

GTAC 2011 Keynotes

Google Testing Blog - Fri, 08/03/2012 - 10:45
By James Whittaker

I am pleased to confirm three of our keynote speakers for GTAC 2011 at the Computer History Museum in Mountain View, CA.

Google's own Alberto Savoia, aka Testivus.

Steve McConnell the best selling author of Code Complete and CEO of Construx Software.

Award winning speaker ("the Jon Stewart of Software Security") Hugh Thompson.

This is the start of an incredible lineup. Stay tuned for updates concerning their talks and continue to nominate additional speakers and keynotes. We're not done yet and we're taking nominations through mid July.

In addition to the keynotes, we're going to be giving updates on How Google Tests Software from teams across the company, including Android, Chrome, Gmail, YouTube, and many more.
Categories: Software Testing

How We Tested Google Instant Pages

Google Testing Blog - Fri, 08/03/2012 - 10:44
By Jason Arbon and Tejas Shah

Google Instant Pages are a cool new way that Google speeds up your search experience. When Google thinks it knows which result you are likely to click, it preloads that page in the background, so when you click the page it renders instantly, saving the user about five seconds. Five seconds is significant when you think of how many searches are performed each day--and especially when you consider that the rest of the search experience is optimized for sub-second performance.
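
As an aside, here is a general illustration of Chrome's prerendering mechanism, which is what makes this kind of background preloading possible; the URL below is made up, and this is not necessarily how Google search wires it up internally. Any page can hint that another URL should be prerendered in a hidden background page:

  // Hypothetical example: ask Chrome to prerender a likely next page.
  var hint = document.createElement('link');
  hint.rel = 'prerender';
  hint.href = 'https://www.example.com/likely-next-result';  // invented URL
  document.head.appendChild(hint);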

The testing problem here is interesting. This feature requires client and server coordination, and since we are pre-loading and rendering the pages in an invisible background page, we wanted to make sure that nothing major was broken with the page rendering.

The original idea was for developers to test out a few pages as they went. But this doesn't scale to a large number of sites and is very expensive to repeat. Also, how do you know what the pages should look like? Writing Selenium tests to functionally validate thousands of sites would take forever--the product would ship first. The solution was to perform automated test runs that load these pages from search results with Instant Pages turned on, and another run with Instant Pages turned off. The page renderings from each run were then compared.

How did we compare the two runs? How do you compare pages when content and ads are constantly changing and you don't know what the expected behavior is? We could have used cached versions of these pages, but that wouldn't be the real-world experience we were testing, it would take time to set up, and the timing would have been different. We opted to leverage some other work that compares pages using the Document Object Model (DOM). We automatically scan each page, pixel by pixel, but look at which element is visible at that point on the page, not the color/RGB values. We then do a simple measure of how closely these pixel measurements match. These so-called "quality bots" generate a score of 0-100%, where 100% means all measurements were identical.
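
Here is a minimal sketch of that comparison idea (illustrative JavaScript only, not the actual quality bot code; the sampling step and the tag/id/class descriptor are assumptions made for this example):

  // Sample points across the page, recording which element is visible at each
  // point, then score two runs by the fraction of points whose element matches.
  function samplePage(step) {
    var samples = [];
    for (var y = 0; y < window.innerHeight; y += step) {
      for (var x = 0; x < window.innerWidth; x += step) {
        var el = document.elementFromPoint(x, y);
        // Describe the element structurally rather than by its color/RGB values.
        samples.push(el ? el.tagName + '#' + el.id + '.' + el.className : '');
      }
    }
    return samples;
  }

  function similarityScore(runA, runB) {
    var matches = 0;
    var total = Math.min(runA.length, runB.length);
    for (var i = 0; i < total; i++) {
      if (runA[i] === runB[i]) matches++;
    }
    return total ? (100 * matches / total) : 100;  // 100% means identical
  }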

When we performed the runs, the vast majority (~95%) of all comparisons were almost identical, like we hoped. Where the pages were different, we built a web page that showed the differences between the two runs by rendering both images and highlighting the difference. It was quick and easy for the developers to visually verify that the differences were only due to content or other non-structural differences in the rendering. Any time test automation scales, is repeatable and quantified, and lets developers validate the results without us, it's a good thing!

How did this testing get organized? As with many things in testing at Google, it came down to people chatting and realizing their work can be helpful for other engineers. This was bottom up, not top down. Tejas Shah was working on a general quality bot solution for compatibility (more on that in later posts) between Chrome and other browsers. He chatted with the Instant Pages developers when he was visiting their building and they agreed his bot might be able to help. He then spent the next couple of weeks pulling it all together and sharing the results with the team.

And now more applications of the quality bot are surfacing. What if we kept the browser version fixed, and only varied the version of the application? Could this help validate web applications independent of a functional spec and without custom validation script development and maintenance? Stay tuned...
Categories: Software Testing

Keynote Lineup for GTAC 2011

Google Testing Blog - Fri, 08/03/2012 - 10:43
By James Whittaker

The call for proposals and participation is now closed. Over the next few weeks we will be announcing the full agenda and notifying accepted participants. In the meantime, the keynote lineup is now locked. It consists of two famous Googlers and two famous external speakers that I am very pleased to have join us.

Opening Keynote: Test is Dead by Alberto Savoia

The way most software is designed, developed and launched has changed dramatically over the last decade – but what about testing? Alberto Savoia believes that software testing as we knew it is dead – or at least moribund – in which case we should stick a fork in it and proactively take it out of its misery for good. In this opening keynote of biblical scope, Alberto will cast stones at the old test-mentality and will try his darnedest to agitate you and convince you that these days most testers should follow a new test-mentality, one which includes shifting their focus and priority from “Are we building it right?” to “Are we building the right it?” The subtitle of this year’s GTAC is “cloudy with a chance of tests,” and if anyone can gather the clouds into a hurricane, it's Alberto – it might be wise to bring your umbrella.

Alberto Savoia is Director of Engineering and Innovation Agitator at Google. In addition to leading several major product development efforts (including the launch of Google AdWords), Alberto has been a lifelong believer, champion, innovator and entrepreneur in the area of developer testing and test automation tools. He is a frequent keynote speaker and the author of many articles on testing, including the classic booklet “The Way of Testivus” and “Beautiful Tests” in O’Reilly’s Beautiful Code. His work in software development tools has won him several awards including the 2005 Wall Street Journal Technical Innovator Award, InfoWorld’s Technology of the Year award, and no less than four Software Development Magazine Jolt Awards.

Day 1 Closer: Redefining Security Vulnerabilities: How Attackers See Bugs by Herbert H. Thompson

Developers see features, testers see bugs, and attackers see “opportunities.” Those opportunities are expanding beyond buffer overflows, cross-site scripting, etc., into logical bugs (and features) that allow attackers to use the information they find to exploit trusting users. For example, attackers can leverage a small information-disclosure issue in an elaborate phishing attempt. When you add people into the mix, we need to reevaluate which “bugs” are actual security vulnerabilities. This talk is loaded with real-world examples of how attackers are using software “features” and information tidbits (many of which come from bugs) to exploit the biggest weakness of all: trusting users.

Dr. Herbert H. Thompson is Chief Security Strategist at People Security and a world-renowned expert in application security. He has co-authored four books on the topic, including How to Break Software Security: Effective Techniques for Security Testing (with Dr. James Whittaker) and The Software Vulnerability Guide (with Scott Chase). In 2006 he was named one of the “Top 5 Most Influential Thinkers in IT Security” by SC Magazine. Thompson continually lends his perspective and expertise on secure software development and has been interviewed by top news organizations including CNN, MSNBC, BusinessWeek, Forbes, Associated Press, and the Washington Post. He is also Program Committee Chair for RSA Conference, the world’s leading information security gathering. He holds a Ph.D. in Applied Mathematics from Florida Institute of Technology, and is an adjunct professor in the Computer Science department at Columbia University in New York.

Day 2 Opener: Engineering Productivity: Accelerating Google Since 2006 by Patrick Copeland

Patrick Copeland is the founder and architect of Google's testing and productivity strategy, and in this "mini keynote" he tells the story and relates the pain of taking a company from ad hoc testing practices to the pinnacle of what can be accomplished with a well-oiled test engineering discipline.

Conference Closer: Secrets of World-Class Software Organizations by Steve McConnell

Construx consultants work with literally hundreds of software organizations each year. Among these organizations a few stand out as being truly world class. They are exceptional in their ability to meet their software development goals and exceptional in the contribution they make to their companies' overall business success. Do world class software organizations operate differently than average organizations? In Construx's experience, the answer is a resounding "YES." In this talk, award-winning author Steve McConnell reveals the technical, management, business, and cultural secrets that make a software organization world class.

Steve McConnell is CEO and Chief Software Engineer at Construx Software where he consults to a broad range of industries, teaches seminars, and oversees Construx’s software engineering practices. Steve is the author of Software Estimation: Demystifying the Black Art (2006), Code Complete (1993, 2004), Rapid Development (1996), Software Project Survival Guide (1998), and Professional Software Development (2004), as well as numerous technical articles. His books have won numerous awards for "Best Book of the Year," and readers of Software Development magazine named him one of the three most influential people in the software industry along with Bill Gates and Linus Torvalds.
Categories: Software Testing

Pretotyping: A Different Type of Testing

Google Testing Blog - Fri, 08/03/2012 - 10:43
Have you ever poured your heart and soul and blood, sweat and tears to help test and perfect a product that, after launch, flopped miserably? Not because it was not working right (you tested the snot out of it), but because it was not the right product.

Are you currently wasting your time testing a new product or feature that, in the end, nobody will use?

Testing typically revolves around making sure that we have built something right. Testing activities can be roughly described as “verifying that something works as intended, or as specified.” This is critical. However, before we take steps and invest time and effort to make sure that something is built right, we should make sure that the thing we are testing, whether it's a new feature or a whole new product, is the right thing to build in the first place.

Spending time, money and effort to test something that nobody ends up using is a waste of time.

For the past couple of years, I’ve been thinking about, and working on, a concept called pretotyping.

What is pretotyping? Here’s a somewhat formal definition – the dry and boring kind you’d find in a dictionary:

Pretotyping [pree-tuh-tahy-ping], verb: Testing the initial appeal and actual usage of a potential new product by simulating its core experience with the smallest possible investment of time and money.

Here’s a less formal definition:

Pretotyping is a way to test an idea quickly and inexpensively by creating extremely simplified, mocked or virtual versions of that product to help validate the premise that "If we build it, they will use it."

My favorite definition of pretotyping, however, is this:

Make sure – as quickly and as cheaply as you can – that you are building the right it before you build it right.

My thinking on pretotyping evolved from my positive experiences with Agile and Test Driven Development. Pretotyping takes some of the core ideas from these two models and applies them further upstream in the development cycle.

I’ve just finished writing the first draft of a booklet on pretotyping called “Pretotype It”.


You can download a PDF of the booklet from Google Docs or Scribd.

The "Pretotype It" booklet is itself a pretotype and test. I wrote this first-draft to test my (possibly optimistic) assumption that people would be interested in it, so please let me know what you think of it.

You can follow my pretotyping work on my pretotyping blog.

Posted by Alberto Savoia
Categories: Software Testing

Unleash the QualityBots

Google Testing Blog - Fri, 08/03/2012 - 10:41
By Richard Bustamante

Are you a website developer who wants to know if Chrome updates will break your website before they reach the stable release channel? Have you ever wished there was an easy way to compare how your website appears across all channels of Chrome? Now you can!

QualityBots is a new open source tool for web developers created by the Web Testing team at Google. It’s a comparison tool that examines web pages across different Chrome channels using pixel-based DOM analysis. As new versions of Chrome are pushed, QualityBots serves as an early warning system for breakages. Additionally, it helps developers quickly and easily understand how their pages appear across Chrome channels.


QualityBots is built on top of Google AppEngine for the frontend and Amazon EC2 for the backend workers that crawl the web pages. Using QualityBots requires an Amazon EC2 account to run the virtual machines that will crawl public web pages with different versions of Chrome. The tool provides a web frontend where users can log on and request URLs that they want to crawl, see the results from the latest run on a dashboard, and drill down to get detailed information about what elements on the page are causing the trouble.

Developers and testers can use these results to identify sites that need attention due to a high amount of change and to highlight the pages that can be safely ignored when they render identically across Chrome channels. This saves time and the need for tedious compatibility testing of sites when nothing has changed.



We hope that interested website developers will take a deeper look and even join the project at the QualityBots project page. Feedback is more than welcome at qualitybots-discuss@googlegroups.com. Posted by Ibrahim El Far, Web Testing Technologies Team (Eriel Thomas, Jason Stredwick, Richard Bustamante, and Tejas Shah are the members of the team that delivered this product)
Categories: Software Testing

Google Test Analytics - Now in Open Source

Google Testing Blog - Fri, 08/03/2012 - 10:41

By Jim Reardon

The test plan is dead!

Well, hopefully.  At a STAR West session this past week, James Whittaker asked a group of test professionals about test plans.  His first question: “How many people here write test plans?”  About 80 hands shot up instantly, a vast majority of the room.  “How many of you get value or refer to them again after a week?”  Exactly three people raised their hands.

That’s a lot of time spent writing documents that are often long-winded and full of paragraphs of detail about a project everyone already knows, only to be abandoned so quickly.

A group of us at Google set about creating a methodology that can replace a test plan -- it needed to be comprehensive, quick, actionable, and have sustained value to a project.  In the past few weeks, James has posted a few blogs about this methodology, which we’ve called ACC.  It's a tool to break down a software product into its constituent parts, and the method by which we created "10 Minute Test Plans" (that only take 30 minutes!)
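
As a purely illustrative sketch (the product and entries below are invented), an ACC breakdown pairs a product's Attributes with its Components and fills each intersection with the Capabilities worth testing:

  // Hypothetical ACC breakdown for an imaginary notes application.
  // Attributes are adjectives (why users care), Components are nouns (what the
  // system is made of), and Capabilities say what to test at each intersection.
  var acc = {
    attributes: ['Secure', 'Fast', 'Collaborative'],
    components: ['Editor', 'Sync', 'Sharing'],
    capabilities: {
      'Editor x Fast': ['Typing latency stays low on large notes'],
      'Sync x Secure': ['Notes are encrypted in transit'],
      'Sharing x Collaborative': ['Two users can edit the same note without losing data']
    }
  };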

Comprehensive
The ACC methodology creates a matrix that describes your project completely; several projects that have used it internally at Google have found coverage areas that were missing in their conventional test plans.

Quick
The ACC methodology is fast; we’ve created ACC breakdowns for complex projects in under half an hour.  Far faster than writing a conventional test plan.

Actionable
As part of your ACC breakdown, risk is assessed for the capabilities of your application.  Using these values, you get a heat map of your project, showing the areas with the highest risk -- great places to spend some quality time testing.

Sustained Value
We’ve built in some experimental features that bring your ACC test plan to life by importing data signals like bugs and test coverage that quantify the risk across your project.

Today, I'm happy to announce we're open sourcing Test Analytics, a tool built at Google to make generating an ACC simple -- and which brings some experimental ideas we had around the field of risk-based testing that work hand-in-hand with the ACC breakdown.




Defining a project’s ACC model.
Test Analytics has two main parts: first and foremost, it's a step-by-step tool to create an ACC matrix that's faster and much simpler than the Google Spreadsheets we used before the tool existed.  It also provides visualizations of the matrix and risks associated with your ACC Capabilities that were difficult or impossible to do in a simple spreadsheet.




A project’s Capabilities grid.
The second part is taking the ACC plan and making it a living, automatic-updating risk matrix.  Test Analytics does this by importing quality signals from your project: Bugs, Test Cases, Test Results, and Code Changes.  By importing these data, Test Analytics lets you visualize risk that isn't just estimated or guessed, but based on quantitative values.  If a Component or Capability in your project has had a lot of code change or many bugs are still open or not verified as working, the risk in that area is higher.  Test Results can provide a mitigation to those risks -- if you run tests and import passing results, the risk in an area gets lower as you test.
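
To make that concrete, here is a purely illustrative sketch of how such signals could feed a risk number; the field names and weights are invented for this example and are not Test Analytics' actual formula (which, as noted below, we're still tuning):

  // Adjust a capability's inherent risk using imported quality signals.
  function capabilityRisk(inherentRisk, signals) {
    var risk = inherentRisk;                    // e.g. 0..1 from the ACC grid
    risk += 0.05 * signals.openBugs;            // unresolved bugs raise risk
    risk += 0.02 * signals.recentCodeChanges;   // code churn raises risk
    risk += 0.10 * signals.failingTests;        // failing tests raise risk
    risk -= 0.03 * signals.passingTests;        // passing results mitigate risk
    return Math.max(0, Math.min(1, risk));      // clamp to [0, 1]
  }

  // Example: moderate inherent risk, some churn and open bugs.
  var risk = capabilityRisk(0.4, {
    openBugs: 3, recentCodeChanges: 5, passingTests: 8, failingTests: 1
  });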




A project’s risk, calculated as a factor of inherent risk as well as imported quality signals.
This part's still experimental; we're playing around with how best to calculate risk based on these signals.  However, we wanted to release this functionality early so we can get feedback from the testing community on how well it works for teams, and iterate to make the tool even more useful.  It'd also be great to import even more quality signals: code complexity, static code analysis, code coverage, external user feedback, and more are all ideas we've had that could add an even higher level of dynamic data to your test plan.




An overview of test results, bugs, and code changes attributed to a project’s capability.  The Capability’s total risk is affected by these factors.
You can check out a live hosted version, browse or check out the code along with documentation, and of course if you have any feedback let us know - there's a Google Group set up for discussion, where we'll be active in responding to questions and sharing our experiences with Test Analytics so far.

Long live the test plan!
Categories: Software Testing

GTAC Videos Now Available

Google Testing Blog - Fri, 08/03/2012 - 10:40
By James Whittaker

All the GTAC 2011 talks are now available at http://www.gtac.biz/talks and also up on YouTube. A hearty thanks to all the speakers who helped make this the best GTAC ever.


Enjoy!
Categories: Software Testing

Google JS Test, now in Open Source

Google Testing Blog - Fri, 08/03/2012 - 10:26

By Aaron Jacobs
Google JS Test is a JavaScript unit testing framework that runs on the V8 JavaScript Engine, the same open source project that is responsible for Google Chrome’s super-fast JS execution speed. Google JS Test is used internally by several Google projects, and we’re pleased to announce that it has been released as an open source project.

Features of Google JS Test include:
  • Extremely fast startup and execution time, without needing to run a browser.
  • Clean, readable output in the case of both passing and failing tests.
  • An optional browser-based test runner that can simply be refreshed whenever JS is changed.
  • Style and semantics that resemble Google Test for C++.
  • A built-in mocking framework that requires minimal boilerplate code (e.g. no $tearDown or $verifyAll calls), with style and semantics based on the Google C++ Mocking Framework.
  • A system of matchers allowing for expressive tests and easy to read failure output, with many built-in matchers and the ability for the user to add their own.

See the Google JS Test project home page for a quick introduction, and the getting started page for a tutorial that will teach you the basics in just a few minutes.
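
For a feel of the style, here is a rough sketch of a minimal test. The helper names (registerTestSuite, expectEq) are written from memory of the project's Google Test-like conventions, so treat them as assumptions and defer to the documentation linked above for the real API:

  // Sketch only: a tiny test suite in the Google Test-like style described above.
  function MathTest() {}
  registerTestSuite(MathTest);

  MathTest.prototype.AdditionWorks = function() {
    expectEq(7, 3 + 4);  // compare an expected value against an actual value
  };
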
Categories: Software Testing

Take a BITE out of Bugs and Redundant Labor

Google Testing Blog - Fri, 08/03/2012 - 10:25
In a time when more and more of the web is becoming streamlined, the process of filing bugs for websites remains tedious and manual. Find an issue. Switch to your bug system window. Fill out boilerplate descriptions of the problem. Switch back to the browser, take a screenshot, attach it to the issue. Type some more descriptions. The whole process is one of context switching: as you jump between the tools used to file the bug, to gather information about it, and to highlight problematic areas, most of your focus as the tester is pulled away from the very application you’re trying to test.

The Browser Integrated Testing Environment, or BITE, is an open source Chrome Extension which aims to fix the manual web testing experience. To use the extension, it must be linked to a server providing information about bugs and tests in your system. BITE then provides the ability to file bugs from the context of a website, using relevant templates.

When filing a bug, BITE automatically grabs screenshots, links, and problematic UI elements and attaches them to the bug. This gives developers charged with investigating and/or fixing the bug a wealth of information to help them determine root causes and factors in the behavior.


When it comes to reproducing a bug, testers will often labor to remember and accurately record the exact steps taken. With BITE, however, every action the tester takes on the page is recorded in JavaScript, and can be played back later. This enables engineers to quickly determine whether the steps of a bug reproduce in a specific environment, or whether a code change has resolved the issue.

Also included in BITE is a Record/Playback console to automate user actions in a manual test. Like the BITE recording experience, the RPF console will automatically author JavaScript that can be used to replay your actions at a later date. And BITE’s record and playback mechanism is fault tolerant; UI automation tests will fail from time to time, and when they do, it tends to be for test issues, rather than product issues. To that end, when a BITE playback fails, the tester can fix their recording in real-time, just by repeating the action on the page. There’s no need to touch code, or report a failing test; if your script can’t find a button to click on, just click on it again, and the script will be fixed! For those times when you do have to touch the code, we’ve used Ace (http://ace.ajax.org/) as an inline editor, so you can make changes to your JavaScript in real-time.
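
As a very rough sketch of the general record-and-playback idea (illustrative JavaScript only, not BITE's or RPF's actual code; the selector logic in particular is deliberately naive):

  // Capture clicks and typing as simple steps, then replay them later.
  var recordedSteps = [];

  function selectorFor(el) {
    // Naive element descriptor; the real tools record much richer information.
    return el.id ? '#' + el.id : el.tagName.toLowerCase();
  }

  document.addEventListener('click', function (e) {
    recordedSteps.push({ action: 'click', selector: selectorFor(e.target) });
  }, true);

  document.addEventListener('change', function (e) {
    recordedSteps.push({
      action: 'type', selector: selectorFor(e.target), value: e.target.value
    });
  }, true);

  function replay(steps) {
    steps.forEach(function (step) {
      var el = document.querySelector(step.selector);
      if (!el) throw new Error('Cannot find ' + step.selector);
      if (step.action === 'click') el.click();
      if (step.action === 'type') el.value = step.value;
    });
  }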

Check out the BITE project page at http://code.google.com/p/bite-project. Feedback is welcome at bite-feedback@google.com. Posted by Joe Allan Muharsky from the Web Testing Technologies Team (Jason Stredwick, Julie Ralph, Po Hu and Richard Bustamante are the members of the team that delivered the product).
Categories: Software Testing

RPF: Google's Record Playback Framework

Google Testing Blog - Fri, 08/03/2012 - 10:09
By Jason Arbon


At GTAC, folks asked how well the Record/Playback (RPF) works in the Browser Integrated Test Environment (BITE). We were originally skeptical ourselves, but figured somebody should try. Here is some anecdotal data and some background on how we started measuring the quality of RPF.
The idea is to just let users use the application in the browser, record their actions, and save them as JavaScript to play back as a regression test or repro later. Like most test tools, especially code-generating ones, it works most of the time but it's not perfect. Po Hu had an early version working, and decided to test it out on a real-world product. Po, the developer of RPF, worked with the Chrome Web Store team to see how an early version would work for them. Why the Chrome Web Store? It is a website with lots of data-driven UX, authentication, and file upload; it was changing all the time and breaking existing Selenium scripts (a pretty hard web testing problem); it only targeted the Chrome browser; and, most importantly, the team was sitting 20 feet from us.

Before sharing with the Chrome Web Store test developer Wensi Liu, we invested a bit of time in doing something we thought was clever: fuzzy matching and inline updating of the test scripts. Selenium rocks, but after an initial regression suite is created, many teams end up spending a lot of time simply maintaining their Selenium tests as the products constantly change. When a certain element isn’t found, the existing Selenium automation simply fails, requiring some manual DOM inspection, an update to the Java code, and then re-deploying, re-running, and re-reviewing the test code. What if instead the test script just kept running, and updates to the code could be as simple as point and click? We would keep track of all the attributes in the recorded element, and when executing we would calculate the percent match between the recorded attributes and values and those found while running. If the match isn’t exact, but within tolerances (say only its parent node or class attribute had changed), we would log a warning and keep executing the test case. If the next test steps appeared to be working as well, the tests would keep executing during test passes and only log warnings; or, in debug mode, they would pause and allow for a quick update of the matching rule with point and click via the BITE UI. We figured this might reduce the number of false-positive test failures and make updating the tests much quicker.
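
Here is a minimal sketch of that fuzzy-matching idea (illustrative only; the tolerance scheme and function names are invented and this is not RPF's actual matching code):

  // Score a candidate element against the recorded attributes as a percent match.
  function attributeMatch(recordedAttrs, element) {
    var names = Object.keys(recordedAttrs);
    if (names.length === 0) return 100;
    var matches = 0;
    names.forEach(function (name) {
      if (element.getAttribute(name) === recordedAttrs[name]) matches++;
    });
    return 100 * matches / names.length;
  }

  // Return an exact match if possible; otherwise accept a close match with a
  // warning, or give up (pause for a point-and-click update) if below tolerance.
  function findWithTolerance(recordedAttrs, tolerance) {
    var candidates = document.getElementsByTagName('*');
    var best = null, bestScore = 0;
    for (var i = 0; i < candidates.length; i++) {
      var score = attributeMatch(recordedAttrs, candidates[i]);
      if (score === 100) return candidates[i];
      if (score > bestScore) { best = candidates[i]; bestScore = score; }
    }
    if (bestScore >= tolerance) {
      console.warn('Fuzzy match (' + bestScore.toFixed(0) + '%), continuing.');
      return best;
    }
    return null;
  }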

We were wrong, but in a good way!

We talked to the tester after a few days of leaving him alone with RPF. He’d already re-created most of his Selenium suite of tests in RPF, and the tests were already breaking because of product changes (it's a tough life for a tester at Google to keep up with the developers' rate of change). He seemed happy, so we asked him how this new fuzzy matching fanciness was working, or not. Wensi was like “oh yeah, that? Don’t know. Didn’t really use it...”. We started to wonder whether our update UX was confusing, not discoverable, or broken. Instead, Wensi said that when a test broke, it was just far easier to re-record the script. He had to re-test the product anyway, so why not turn recording on when he manually verified things were still working, remove the old test, and save this newly recorded script for replay later?

During that first week of trying out RPF, Wensi found:
  • 77% of the features in Webstore were testable by RPF
  • Generating regression test scripts via this early version of RPF was about 8X faster than building them via Selenium/WebDriver
  • The RPF scripts caught 6 functional regressions and many more intermittent server failures.
  • Common setup routines like login should be saved as modules for reuse (a crude version of this was working soon after)
  • RPF worked on Chrome OS, where Selenium by definition could never run as it required client-side binaries. RPF worked because it was a pure cloud solution, running entirely within the browser, communicating with a backend on the web.
  • Bugs filed via BITE provided a simple link that would install BITE on the developer's machine and re-execute the repro on their side. No need for manually crafted repro steps. This was cool.
  • Wensi wished RPF was cross browser. It only worked in Chrome, but people did occasionally visit the site with a non-Chrome browser.
So, we knew we were onto something interesting and continued development. In the near term though, chrome web store testing went back to using Selenium because that final 23% of features required some local Java code to handle file upload and secure checkout scenarios. In hindsight, a little testability work on the server could have solved this with some AJAX calls from the client.

We performed a check of how RPF fared on some of the top sites on the web. This is shared on the BITE project wiki. It is now a little bit out of date, with lots more fixes since, but it gives you a feel for what doesn’t work. Consider it alpha quality at this point. It works for most scenarios, but there are still some serious corner cases.

Joe Muharsky drove a lot of the UX (user experience) design for BITE to turn our original, clunky, developer- and functionality-centric UX into something intuitive. Joe’s key focus was to keep the UX out of the way until it is needed, and to make things as self-discoverable and findable as possible. We haven't done formal usability studies yet, but we have run several experiments with external crowd testers using these tools with minimal instructions, as well as internal dogfooders filing bugs against Google Maps with little confusion. Some of the fancier parts of RPF have some hidden easter eggs of awkwardness, but the basic record and playback scenarios seem to be obvious to folks.

RPF has graduated from the experimental centralized test team to become a formal part of the Chrome team, and it is used regularly for regression test passes. The team also has an eye on enabling non-coding, crowd-sourced testers to generate regression scripts via BITE/RPF.

Please join us in maintaining BITE/RPF, and be nice to Po Hu and Joel Hynoski who are driving this work forward within Google.
Categories: Software Testing

Test Is Dead

Google Testing Blog - Thu, 08/02/2012 - 13:12


My earthly body casts no shadows
'Tis my thoughts and words that bring umbrage
that is shade to some
and darkness to others

Testivus was but a bit of child play
to appetize

At GTAC 2011
the greater truth
shall be revealed

Test is dead

And I the executioner




Alberto Savoia
VI.XVII.MMXI


The Way of Testivus

"Floating Alberto" photograph courtesy of Patrick Copeland
Categories: Software Testing

The Easy Part

Alan Page - Tue, 07/31/2012 - 17:13

Recently, I was helping another part of the team with a project. Or at least it ended up that way. There’s a particular bit of what they’re doing where I have some expertise, so I volunteered to take care of a chunk of the work where I thought I could help out.

One thing I’ve learned through experience (and through testers' eyes) is to take a look at the big picture of the problem I’m solving before diving in. For example, let’s say someone asked me to change the text on a button from “Query DB” to “Query Database”. What could be a 30-second task is far from it. First, I need to make sure the button is big enough, and I need to check to see if there are other buttons in the application that need similar renaming. I need to make sure the documentation (including screenshots) is updated. I probably need to make sure the localization team knows what to do with the update. Of course, I’ll see if there are any tests that look for this particular string on a button, and after I punch them in the face for testing against a hard-coded UI string, I’ll make sure they fix it.

In this case, I needed to add functionality ‘A’ to a system. I know functionality ‘A’ pretty well, but in this case, in order to add ‘A’ correctly, I needed to update ‘B’ – and for ‘B’ to work, I needed to refactor huge chunks of ‘C’. I went to the team and told them that I knew what needed to be done, but it was complex (due to A, B, and C), and that while I was willing to do the work, it would take me a few days to a week to implement and test.

Then they asked me my new favorite estimation question: “How long will it take you to do the easy part?” My answer, of course**, was, “It depends. Which part is the easy part?” To be fair, they meant how long ‘A’ would take (because they had some insight into B & C), but it was still a fun quote.

 

** Alan-ism #17: The answer to any sufficiently complex question is, “It depends”.

Categories: Software Testing

Me, Ranting About Thinking

Alan Page - Mon, 07/23/2012 - 21:38

Joey McAllister (favorite hashtag: #expectpants) recently talked with me (electronically) about critical thinking and learning.

The interview is here in case you’re curious.

Categories: Software Testing

Is it dangerous to measure ROI for test automation?

Dorothy Graham - Thu, 06/21/2012 - 10:57

I have been a fan of trying to show ROI for automation in a way that is simple enough to understand easily and that shows the benefit of automation compared to its cost.
I have been developing a spreadsheet with sample calculations (sparked off initially by Molly Mahai and including an example from Mohacsi & Beer's chapter in the new book; some other people have also been influential - thanks). I have sent this spreadsheet out to around 300 people, including most of those who have attended my automation tutorials.
The problem with showing ROI is that it's hard to quantify some of the important factors, so I have focused on showing ROI using what is most straightforward to quantify - people's effort and/or time. This can be converted into money, using some kind of salary cost, if desired, and either effort or money can then be plugged into the ROI calculation: ROI = (benefit - cost) / cost.
So basically, this is showing how a set of tests requires less human effort when those tests are automated than would be required if those same tests were run manually.
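As a purely illustrative example (the numbers below are invented, not taken from my spreadsheet), plugging effort figures into that calculation might look like this:

  // Effort measured in hours; the benefit is the manual effort the automated runs replace.
  var manualEffortPerCycle = 40;    // hours to run the tests manually
  var automatedEffortPerCycle = 2;  // hours to kick off and review automated runs
  var cycles = 10;                  // test cycles over the period measured
  var buildCost = 150;              // hours to develop the automation

  var cost = buildCost + automatedEffortPerCycle * cycles;  // 170 hours
  var benefit = manualEffortPerCycle * cycles;              // 400 hours of manual effort avoided
  var roi = (benefit - cost) / cost;                        // (400 - 170) / 170, about 1.35

  console.log('ROI: ' + (roi * 100).toFixed(0) + '%');      // about 135%, a net saving
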
This is great, right? We have a clear business case showing savings from automation that are greater than the cost of developing the automation, so our managers should be happy.
Recently, however, I have been wondering whether this approach can be dangerous.
If we justify automation ONLY in terms of reduced human effort, we run the risk of implying that the tools can replace the people, and this is definitely not true! Automation supports testing, it does not replace testers. Automation should free the testers to be able to do better testing, designing better tests, having time to do exploratory testing, etc.
So should we abandon ROI for automation? I don’t think that’s a good idea – we should be gaining business benefit from automation, and we should be able to show this.
Scott Barber told me about Social ROI – a way of quantifying some of the intangible benefits – I like this but haven’t yet seen how to incorporate it into my spreadsheet.
In our book, there are many success stories of automation where ROI was not specifically calculated, so maybe ROI isn’t as critical as it may have seemed.
I don’t know the answer here – these are just my current thoughts!
Categories: Software Testing

The Secrets of Faking a Test Project

Jonathon Kohl - Tue, 03/13/2012 - 15:10

Here is the slide deck: Secrets of Faking a Test Project. This is satire, but it is intended to get your attention and help you think about whether you are creating value on your projects, or merely faking it by following convention.

Back story:
Back in 2007 I got a request to fill in for James Bach at a conference. He had an emergency to attend to and the organizers wondered if I could take his place. One of the requests was that I present James' "Guide to Faking a Test Project" presentation because it was thought-provoking and entertaining. They sent me the slide deck and I found myself chuckling, but also feeling a bit sad about how common many of the practices are.

I couldn't just use James' slides because he has an inimitable style, and I had plenty of my own ideas and experiences to draw on that I wanted to share, so I used James' slide deck as a guide and created and presented my own version.

This presentation is satirical - we challenge people to think about how they would approach a project where the goal is to release bad software while making it look as if you really tried to test it well. It didn't take much effort on our part; we just looked at typical, all-too-common practices that are often the staple of how people approach testing projects, and presented them from a different angle.

I decided to release these slides publicly today because, almost 5 years after I first gave that presentation, this type of thing still goes on. Testers are forced into a wasteful, strict process of testing that rarely creates value. One of my colleagues contacted me - she is on her first software testing project. She kept asking me about different practices that to her seemed completely counter-productive to effective testing, and asked if this was normal. Much of what she has described about her experiences is straight out of that slide deck. In this case, I think it is a naive approach: no doubt it is driven by managers who are much more worried about meeting impossible deadlines than about finding problems that might take more time than is allocated, rather than by blatant charlatans who are deliberately faking it, but sadly, the outcome is the same.

If you haven't thought about how accepted testing approaches and "best practices" can be viewed from another perspective, I hope this is an eye opener for you. While you might not think it is particularly harmful to a project to merely follow convention, you might be faking testing to the most important stakeholder: you.

Categories: Software Testing

I'm on Twitter

Jonathon Kohl - Mon, 02/13/2012 - 14:54

I've been resisting it, but I've been asked by enough of you to finally join in. Follow me: @jonathan_kohl

Categories: Software Testing

Experiences of Test Automation

Jonathon Kohl - Fri, 02/10/2012 - 14:58

I recently received my copy of Experiences of Test Automation: Case Studies of Software Test Automation. I contributed chapter 19: There's More to Automation than Regression Testing: Thinking Outside the Box. I'm recommending this book not only because I am a contributor, but I have enjoyed the raw honesty about test automation by experienced practitioners. Finally, we have a book that provides balanced, realistic, experienced-based content.

For years, it seems that writing on test automation has been dominated by cheerleading, tool flogging, hype, and hyperbole. (There are some exceptions, but I still run into exaggerations and claims that automation is an unquestioned good far too often.) The division between the promoters of the practice (i.e. those who make a lot of money from it), the decision makers they convince, and the technical practitioners is often deep. It can be galling to constantly see outright claptrap about how automation is a cure for all ills, or views that only talk about benefits without also pointing out drawbacks and limitations. It's really difficult to implement something worthwhile in a world of hype and misinformation that skews implementation ideas and the expected results. This book is refreshingly different.

I agreed to contribute after talking to Dot Graham - she wanted content that was relevant, real, and honest. She said their goal for the book was a balanced view from real practitioners on the ground who would talk about good points, but we also needed to be honest about bad points and challenges we had to overcome. Dot liked my Man and Machine work and asked me to expand on that concept.

Now that I have a copy of the book, I find myself smiling at the honesty and reality described within. Did I really just read that? Where else will you find an admission of: "We tried tool____, and it was a complete disaster"?

If you're serious about automation, consider buying this book. It is chock full of real-world experience, and you are bound to find at least one lesson that you can apply directly to your automation effort. That is worth the cost alone, especially when we are constantly bombarded with distorted ideals and hype. You won't agree with everything, and we all have preferences and biases, but the real-world honesty is a constant theme, and a breath of fresh air.

Categories: Software Testing

Content Pointer: The Future Is Mobile Technology

Jonathon Kohl - Mon, 12/05/2011 - 11:59

Heather Shanholtzer of SQE interviewed me for TechWell a few days ago. You can read the interview here: The Future Is Mobile Technology.

Categories: Software Testing

Ranorex 3 – It is just better

Test & Try - Mon, 10/10/2011 - 02:37
Nearly two years have passed; we are all older and more mature. It's great that test tools are growing along with us, giving us a bit more time for more important things. Recalling the story from two years ago, I mean the article “Covers Everything? – Ranorex Automation Tool Review”. [...] Related posts:
  1. Covers Everything? – Ranorex Automation Tool Review
  2. 4 Ways to Automate Flex GUI Testing
  3. FlexMonkey – Flex Test Automation Tool Review
Categories: Software Testing