Agile leadership: Start supporting, stop ‘Dilbert-izing’

SearchSoftwareQuality - Thu, 09/27/2012 - 11:05
Agile software development projects are not living up to expectations because of inadequate Agile leadership training, said STARWEST 2012 Conference speaker Bob Galen.


Categories: Software Testing

Testers Are Like Fact Checkers

Eric Jacobson's Software Testing Blog - Thu, 09/27/2012 - 10:21

Per one of my favorite podcasts, WNYC’s On the Media, journalists are finding it increasingly difficult to check facts at a pace that keeps up with modern news coverage. To be successful, they need dedicated fact checkers. Seem familiar yet?

Journalists depend on these fact checkers to keep them out of trouble.  And fact checkers need to have their own skill sets, allowing them to focus on fact checking.  Fact checkers have to be creative and use various tricks, like only following trustworthy people on Twitter and speaking different languages to understand the broader picture.  How about now, seem familiar?

Okay, try this:  Craig Silverman, founder of Regret the Error, a media error reporting blog, said “typically people only notice fact checkers if some terrible mistake has been made”.  Now it seems familiar, right?

The audience of a fact checker or a software tester has no idea how many errors were found before the work was released. They only know what wasn’t found.

Sometimes I have a revenge fantasy that goes something like this:

If a user finds a bug and says, “that’s so obvious, why didn’t they catch this?”, their software will immediately revert to the untested version.

…Maybe some tester love will start to flow then.

Categories: Software Testing

Premises of Rapid Software Testing, Part 3

DevelopSense - Michael Bolton - Thu, 09/27/2012 - 07:00
Over the last two days, I’ve published the premises of the Rapid Software Testing classes and methodology, as developed by James Bach and me. The first set addresses the nature of Rapid Testing’s engagement with software development—an ambitious activity, performed by fallible humans for other fallible humans, under conditions of uncertainty and time pressure. The [...]
Categories: Software Testing

Tips on Working With QA

QA Hates You - Thu, 09/27/2012 - 04:02

Working with Someone You Don’t Like?

Summary: It’s all in your head as you resent your own shortcomings and project them onto QA. Which is not perfect, but is better than you.

Categories: Software Testing

The whole team approach to QA/test time

SearchSoftwareQuality - Wed, 09/26/2012 - 11:31
The QA/test role does not belong just to the test manager. In the whole-team approach, the responsibility is spread throughout the team.


Categories: Software Testing

How does the role of project manager change in the cloud?

SearchSoftwareQuality - Wed, 09/26/2012 - 10:36
Agile and ALM expert Yvette Francino discusses how the role of project manager may change when applications are developed and tested in the cloud.


Categories: Software Testing

Premises of Rapid Software Testing, Part 2

DevelopSense - Michael Bolton - Wed, 09/26/2012 - 07:00
Yesterday I published the first three premises that underlie the Rapid Software Testing methodology developed and taught by James Bach and me. Today’s two are on the nature of “test” as an activity—a verb, rather than a noun—and the purpose of testing as we see it: understanding the product and imparting that understanding to our [...]
Categories: Software Testing

Strike That. No, That.

QA Hates You - Wed, 09/26/2012 - 02:53

So I wore a hole in my favorite fedora, and I now live in a city without a hat shop in it. Not so much because it’s a small city, but because not many cities have hat shops any more. Well, the malls have baseball cap shops, but do you think I’m the sort of man who wears a baseball cap?

So I go to Zappos because I know they have a very liberal return policy, and I fully expect that I’ll have to try on and return a dozen or more hats before I settle on one (to illustrate: On my last visit to the venerable Donges in the venerable Milwaukee, Wisconsin, I tried on so many hats that the discouraged salesman muttered to a coworker that I wasn’t going to buy anything. I bought my third fedora that day).

And I browsed and I searched Zappos, and as I was looking the site over, I noticed something.

When you filter by brand, size, variety, color, and so on, it adds the filters to a breadcrumb-trail-like list above the individual selections. You can click a filter to remove it. Me, I was looking for the filter to eliminate hipsters wearing their fedora brims turned up. Come on, guys, what’s the deal? The brim is for keeping the sun off, not for catching the rain, you twee tweethings.

And I noticed something:

When you mouse over one of the terms, the tooltip explains you can click it to remove it. But the filter immediately to the right displays in <strike> form, with a line through the text.

Looks like someone got his or her index values mixed up in an array.

You know how you find these things? You mouse over the damn things. Or you don’t, and you don’t find them.

Me? I mouse over them.

(I know, you’re wondering: how does one wear a hole in a fedora? Well, my fourth fedora here, which I bought at a hat shop in Memphis just off the train tracks in 1997 or 1998, I wore almost daily for many years, gave it a breather, and have worn it again daily for some time. The hole is at the fold in the crown at the front where you grab it to take the hat off or to put it on. It’s also the spot that touches the pavement when you’ve got the fedora upside down on the sidewalk while you’re Street QAin’ for tips. So it’s natural that it would wear unevenly here. Strangely enough, my fourth fedora lasted longer than the first three.)

Categories: Software Testing

Cloud-based tools, Agile and DevOps accelerate application deployment

SearchSoftwareQuality - Tue, 09/25/2012 - 13:39
Developers and QA testers are increasing application deployment speeds as a result of cloud-based tools and shifting roles in DevOps environments.


Categories: Software Testing

Clean Project Checklist

ABAKAS - Catherine Powell - Tue, 09/25/2012 - 10:08
Some projects are harder than others. On a few of my projects, everything just feels harder than it should be. I can't find the classes I need to change; the tests on a clean checkout fail; configuration is manual and fiddly; getting set up to try your code is slow and unwieldy. They're just painful all around. Other projects are smooth and easy. There's a solid README for setting it up - and the README is short. Tests run well on clean checkout. Classes and methods are in the first place you look. The only hard part of these projects is any difficulty in actually solving the technical problem.

The main difference, I've discovered, is in the tooling around the projects. Some of the smooth projects are huge and involve different services and technologies. Some of the difficult projects are actually quite small and simple. When the tooling is clean and works, then the project is much more likely to be smooth. Over time I've come to look for a set of things that tend to indicate a clean project.

This is my clean project checklist:

  • Deployment and environment setup is automated using an actual automation tool. I don't care if it's Capistrano, Chef, Puppet, RightScale scripts, or whatever. It just has to be something beyond a human and/or an SSH script. This includes creating a new box, whether it's a cloud box or a local machine in a data center somewhere. If it's scripted then I don't have to fiddle a lot manually, and it also tends to mean there's good isolation of configuration. (A minimal sketch of the idea follows this list.)
  • Uses Git. This makes the next item possible. It's also an indicator that the developers are using a toolchain that I happen to enjoy using and to understand.
  • Developers are branch-happy. Feature branches, "I might mess this up" branches, "just in case" branches - developers who are branch-happy tend to think smaller. With that many branches, most don't survive for long! Smaller changes mean smaller merges, and that makes me happy. It fits nicely with the way I work. I should note that I don't care how merging happens, either by merge or by pull request.
  • Has a CI system that runs all the automated tests. It might not run them all in one batch, and it might not run them all before it spits out a build, but it runs them. The key here is that automated tests run regularly on a system that is created cleanly regularly. This cuts down on tests that fail because they weren't updated, or issues that relate to data not getting properly put in the database (or other data stores).
  • Cares about the red bar. The build shouldn't be broken for long, and failing tests should be diagnosed quickly. In policing, it's called the broken windows problem: if you get used to small bad things then bigger bad things will happen. Don't let your project windows break.
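
To make the deployment item concrete, here's a minimal sketch of the kind of thing I mean. This is only an illustration, not from the original post: the host name, paths, and commands are made up, and it assumes the Fabric 2.x Python library (a Capistrano-style tool) is available.

```python
# Hypothetical deploy/setup script using Fabric 2.x.
# Host, paths, and commands below are illustrative assumptions.
from fabric import Connection

APP_DIR = "/srv/myapp"
HOST = "staging.example.com"   # could be a cloud box or a machine in a data center

def deploy(ref="main"):
    """One command: fetch a known revision, install deps, migrate, restart."""
    c = Connection(HOST)
    c.run(f"git -C {APP_DIR} fetch --all && git -C {APP_DIR} checkout {ref}")
    c.run(f"cd {APP_DIR} && pip install -r requirements.txt")
    c.run(f"cd {APP_DIR} && ./manage.py migrate")   # idempotent environment setup
    c.sudo("systemctl restart myapp")               # no manual fiddling afterwards

if __name__ == "__main__":
    deploy()
```

The specific tool matters far less than the property the checklist item is after: standing up or updating a box is one command, not a page of remembered steps.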

I'm sure there's more, but what else am I missing?
Categories: Software Testing

QA, Educator

QA Hates You - Tue, 09/25/2012 - 09:05

Over at Dice TV, Cat Miller identifies some of the things software engineers don’t learn in school:

I agree with many of the things she presents, but I’d also like to point out another thing computer science students don’t learn in school that’s important but often overlooked: domain knowledge.

It’s not the same as listening to your customer, because your customer and/or user has one perspective on domain knowledge, and it might not be deep or broad. The customer might have started the job just last week and be trying to tell you how to write software to assist him with a job that he doesn’t know how to do himself.

Can you write computer software without domain knowledge? Yes. Heck, my whole schtick is that I don’t need a lot of domain knowledge when testing an application because software fails in predictable places because it’s software, irrespective of industry.

But I do pick up some domain knowledge from a little research to help supplement my understanding of how the user will use the software and to try to predict some patterns of behavior that real users will attempt when confronted with this strange piece of programming during the workday.

I’m not keeping it very secret, am I? Because I want the developers to pay attention to it, too. And project managers. And client engagement vice presidents. Try to glean a little about the problems the software is trying to solve instead of just solutions presented to you to code. Because where your assumptions and their assumptions differ, danger lies.

Categories: Software Testing

Premises of Rapid Software Testing, Part 1

DevelopSense - Michael Bolton - Tue, 09/25/2012 - 07:22
In February of 2012, James Bach and I got together for a week of work one-on-one, face-to-face—something that happens all too rarely. We worked on a number of things, but the principal outcome was a statement of the premises on which Rapid Software Testing—our classes and our methodology—are based. In deference to Twitter-sized attention spans [...]
Categories: Software Testing

QA Music: Metrics

QA Hates You - Mon, 09/24/2012 - 03:49

Well, the song is called “Still Counting” by Volbeat.

If I ever present at a conference, you can bet this will be the song that plays as I ascend the stage.

Uh, language warning.

Categories: Software Testing

New Zealand & Australia, 2012

Alan Page - Sat, 09/22/2012 - 13:03

Note: no direct information on software testing or software engineering is in this post. This is purely a trip report (that happens to be related to a trip I took to talk about testing).

As I begin writing this, I’m in the middle of a trip to New Zealand and Australia. The main purpose of the trip was a bit of a speaking tour for SoftEd (for the STANZ and Fusion conferences). I gave a keynote and a workshop in Christchurch, NZ on Monday – repeated the same thing on Tuesday in Auckland, and then hopped across the ditch to Sydney to give a talk on Thursday and a workshop on Friday.

Christchurch

Although this trip marks my first visit to Australia, I spent three weeks in New Zealand almost exactly ten years ago. I arrived Saturday morning so I’d have a bit of extra time to acclimate to the new time zone, but I didn’t really need it. Through a combination of horrible movie selections and three seats to myself, I managed to sleep quite a bit – and apparently just the right amount, so that morning in Christchurch felt a lot like morning in Seattle. The hotel was nice – as far as hotels based on medieval themes go (this being my first medieval-themed hotel, I was quite happy). I went for a run, showered, explored a bit, and then went in search of dinner. Although I spent a few days in Christchurch ten years ago, I wasn’t sure how much I’d remember. I also wasn’t prepared for just how devastating the earthquake in February 2011 was. Nearly two years later, the bulk of the downtown area is still fenced off and unsafe. I found the hotel where I stayed in 2002 (closed), and a restaurant I remember (also closed). The weird part was how dead the downtown area was on a Saturday night. While I wandered around pondering the destruction, I suddenly realized that it was silent. Other than the flap of a few flags and banners in the strong Christchurch wind, there was nothing – complete stillness in an area that was bustling nearly 24 hours a day during my last trip. I eventually found my way back to the area near the conference hotel – apparently where at least some of the liveliness had moved – and found some food.

I was up early Sunday morning and went for a long run in the park. I met up with fellow conference speaker Elisabeth Hendrickson for a coffee run that evolved into a wonderful walking tour of the area surrounding the hotel. We managed to invite ourselves to a tour of a nature walk and had some great conversations about software engineering, testing, and our shared love of walking miles and miles in foreign cities. Although I’ve met Elisabeth before, one of the big benefits of this trip was getting a chance to get to know her better. She has a sensible holistic approach to software engineering that I really like (and unfortunately, don’t see nearly enough of among people in the testing community).

Monday was a blur – I think I went for a run in the morning, gave a keynote later in the morning, and delivered a workshop in the afternoon. Then we all flew to Auckland, where we checked in, and I decided I preferred sleep much more than dinner.

Auckland

Tuesday was more of the same. I changed my talk a bit – I changed the workshop a bit. Not only do I not like giving the same talk twice – I think I’m incapable of giving the same talk twice. I suppose this keeps things fresh (for me, at least). After the conference a group of us had a wonderful fish dinner at a nearby restaurant.

My flight on Wednesday wasn’t until 6:00pm, so I used the morning and afternoon to explore Auckland. I spent quite a bit of time in Auckland (3-4 days) in 2002, and was eager to re-explore. It’s funny how we remember things. There were restaurants and points of interest I remembered quite well (and went in search of a few of them), but it was a bit of a mind trip to notice places and things that I didn’t remember until I saw them (“Oh wow – I ate at this exact restaurant ten years ago…”)

I made a brief stop at the Microsoft Auckland office during my wandering. I knew I wanted to go to Melbourne while in Australia, but I hadn’t bothered to book a flight or lodging yet. Nor did I have any lodging in Sydney after the conference. I took advantage of some peace and quiet and (apparently rare) high-speed internet and got all of my bookings squared away.

Sydney

The Fusion conference was Thursday and Friday. The opening keynote was Emma Langman, and I really enjoyed her talks (she also gave the closing keynote on Friday). Emma is a Change Magician (cool title, huh?), and knows a ton about organizational change and systems thinking. I applaud SoftEd for inviting her, and think she did a great job capturing the essence of the conference and coming up with relevant topics for the audience.

On top of that, I was a bit intimidated that she attended my Thursday session (on customer-focused test design), but managed to keep it together as I babbled for an hour about test design.

Thursday night, we had a dinner keynote from Dr. Peter Ellyard (Australia’s most prominent futurist). I spoke to him for a few minutes before dinner, where he shared this quote that stuck with me:

Vision without strategy is a daydream. Strategy without vision is a nightmare.

(note that this is a variation of a Japanese proverb that replaces the word ‘strategy’ with ‘action’)

After dinner a few of us hung out and had a few more drinks. I have a vague memory of late-night Karaoke, but the details of the night may be forever lost.

Friday morning came quickly, and I gave my last talk of the conference. All-in-all, it was a fantastic experience, and I am grateful to SoftEd for the opportunity.

Saturday morning, Elisabeth Hendrickson and I met up for a sequel walk and talk event around Sydney, culminating with one of the best cups of coffee I ever had (along with a delicious salmon sandwich). I spent the rest of the weekend doing more wandering around town, getting some laundry done, and catching up on work and reading. I did get in a good run through the botanical gardens, around the opera house, and then up and over the big bridge and back.

Melbourne

On Monday, I left for a quick trip to Melbourne. After a few hours there, I was already bummed that I wasn’t spending more time. I have to admit that I’m a bit of a foodie, and good food is everywhere in Melbourne. It’s also a beautiful city with a great mix of new and old architecture.

I went for a nice run along the river Tuesday morning, walked around the city some more, and then met with some Melbourne-area testers for a few beers and dinner in the evening (as well as some fun conversations).

After breakfast – and a bit more exploring – I headed back to Sydney in the afternoon. My return trip to Sydney had a few obstacles. The first was that for some reason, my domestic flight to Sydney left from an international terminal. This meant I had to go through some extremely long security and passport control lines. I was particularly ‘lucky’ on this trip and was pulled out of line twice for extra ‘checking’. And then it got interesting. For reasons too long to share here, I’ve always gone by ‘Alan’. My credit cards say ‘Alan’, my Microsoft id says ‘Alan’, my email address is alanpa…I bet most people don’t even know that my first name is ‘Donald’. Because I book my flights with my credit card, my boarding passes often (or nearly always) say ‘Alan Page’. I make sure signatures on my license and passport include my middle name, and this has never, in dozens of international trips, ever been a problem – until this flight. The agent questioned me for ten minutes, trying to get me to prove who I was. I think if I was actually travelling to another country, instead of a few hundred miles north, she wouldn’t have let me leave. But eventually, I made it onto the plane and back to Sydney.

Sydney (part II) & Home Again

My final trip to Sydney was uneventful. I spent most of the afternoon reading in a café, and then had dinner, and got some sleep before heading home.

The flight home was long (SYD->AKL->SFO->SEA), but uneventful. I’m home now, and although I managed to catch a bit of a cold, it’s good to be here and get back to my regular routine. My arrival at home was also nearly perfectly timed with a small fire drill at work, and that is accelerating my return to reality as well.

Overall, I had a fantastic time. I expect to return someday (whether it’s for a conference or for a vacation), and hope to meet many more great testers either way.

Categories: Software Testing

Concurrent Virtual Users or Increasing Transaction Rate?

LoadStorm - Fri, 09/21/2012 - 15:56

I’m going to write about increasing the number of transactions to simulate a higher number of Virtual Users. This is a common technique used by performance testers to ‘cheat’, or rather to avoid the high costs charged by vendors. I recently answered a question on LinkedIn and suggested increasing the transaction rate.

I was sent the following message by Jim:

“Saw a reply on here where you suggested increasing no of iterations to increase load… I’m not a big fan of this at all, as it really doesn’t do that, it compresses time instead so for example say I have 200 vusers and they are running 4 iterations per hour each to increase the number of concurrent users I need to double the number of vusers if I double the number of iterations I reduce an 8 hour soak test to 4 hours I know it’s a commonly used technique, but it’s total rubbish, usually when I hear people saying this I just mention the word “sessions” and their head explodes with the thought: huge amounts of what I’ve done for years is wrong!!!”

Being called out and made to think is a good thing.

Now I’m not going to go on the defensive and pick holes – in essence, what Jim is saying is correct*, and Jim’s response got me thinking. What on earth is the impact of ‘sessions’, and what exactly are they? What is the overhead? ‘Session’ is an umbrella term people use regularly, but I suspect many don’t actually have a firm understanding of it. So I did a little digging.


It’s all about the Session Data

Within the context of HTTP, what exactly is a session? I’m not going to regurgitate formal definitions here, but here’s my attempt at a translation: HTTP is stateless, so information about the connecting client is retained within the server architecture using a variety of techniques. The client is given a unique ID when the initial connection is made to the server (the session ID). All associated information for the client (the session data) is linked to this identifier. Now this is where things can get a little murky…

Because ‘session data’ depends on the implementation within the architecture. So what does this mean in terms of impact? It means that the actual overhead of session IDs is small; what matters is how the data associated with those session IDs is implemented. If a user logs in and has a large amount of associated information, this may put strain on the resources handling that information – which in most cases is likely to be either caches or DBs. This all leads me to the following observations/conjectures.
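
As a rough way to see why that matters, here's a back-of-the-envelope sketch. It's mine, not the article's, and every figure in it is an assumption chosen purely for illustration:

```python
# Back-of-the-envelope: session IDs are tiny; the *session data* is what hurts.
# All figures below are illustrative assumptions, not measurements.
SESSION_ID_BYTES = 32              # roughly a UUID-sized token per client
USERS = 50_000

LIGHT_SESSION = 2 * 1024           # ~2 KB: a user id and a few preferences
HEAVY_SESSION = 512 * 1024         # ~512 KB: cached results, large view state

def footprint_mb(per_user_bytes):
    """Total server-side memory for USERS concurrent sessions, in MB."""
    return USERS * (SESSION_ID_BYTES + per_user_bytes) / (1024 ** 2)

print(f"Session IDs alone: {USERS * SESSION_ID_BYTES / (1024 ** 2):8.1f} MB")
print(f"Light sessions:    {footprint_mb(LIGHT_SESSION):8.1f} MB")
print(f"Heavy sessions:    {footprint_mb(HEAVY_SESSION):8.1f} MB")
```

Two hundred virtual users driving a high transaction rate only ever populate 200 of those sessions, which is exactly the cache-sizing caveat discussed further down.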


Advantage of Increasing Transaction Rate, not Virtual Users

Cost: It’s cheap, and this is the main one. In an ideal world we would flood a system with the actual real-world number of users – but quite often our wallets don’t allow us to.

Transactions: I’ve often been on site and heard “we have 50,000 people connected, so we need 50,000 VUs.” When the question “What’s the transaction rate for key business process X?” is posed, the response is “Oh … about 60/min.” I can immediately see that we do not need 50k users to achieve the goals of the customer. In fact, that rate can be hit and exceeded with just 200 virtual users. Sometimes a customer thinks they need concurrency when they actually only need a transaction rate in order to achieve their goals.
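
The arithmetic behind that claim is worth spelling out. This is my sketch, not the original post's, and the time per business-process iteration is an assumption:

```python
# From a target transaction rate to a virtual-user count (Little's law:
# concurrency = throughput x time per iteration). Iteration time is assumed.
TARGET_RATE_PER_MIN = 60        # key business process rate from the customer
ITERATION_SECONDS = 180         # one full pass of the process, incl. think time

throughput_per_sec = TARGET_RATE_PER_MIN / 60.0
vusers_needed = throughput_per_sec * ITERATION_SECONDS
print(f"Virtual users needed: {vusers_needed:.0f}")                   # -> 180

# And conversely, what 200 VUs running the same script can drive:
achievable_per_min = 200 / ITERATION_SECONDS * 60
print(f"200 VUs deliver ~{achievable_per_min:.0f} transactions/min")  # -> ~67
```

Shorten the think time (run the iterations faster) and the same 200 users deliver a far higher rate, which is the whole point of the technique.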

Coverage: Using this method, you will still eke out and find 90%** of the issues you would have found by increasing the number of users.

Hardware: I need a LOT less hardware to generate the load. Fewer hardware components in a performance test also mean a lot less risk during the test.

Soak Testing: Not greatly relevant to this argument – if you have a long-running soak test, upping the transaction rate can decrease the length of the test. I’ve used this to effectively reduce a 3-day soak performance test to 8 hours (Link Here).


When Increasing Transaction Rate will/may not work

Sizing: If you are using caches for session data, then a lower number of users with an increased transaction rate will not fill the allocated space of the associated caches as effectively (this is implementation dependent). This means caches could actually be sized too small when you go live.

Thrashing and Time To Live: A large number of users will naturally cause a large breadth of caching to be exercised. I have tested systems where we ‘came a cropper’ because, when the system went live, the breadth of data searched by a large number of users caused the cache to thrash the DB. Luckily this was raised as a risk beforehand.

Non-Issues: By increasing transaction rates (and not users) you may cause issues you would not experience in live (contention, deadlocking, etc.). The upside of this disadvantage is that you build a more robust system by solving these non-issues, or yet-to-be-experienced issues. PMs and the business tend not to be fans of this.

Stateful: If you have a stateful system where the memory required by a connecting client is created and retained for the lifetime of that client’s connection, then a smaller number of clients will not reproduce the real memory load.

Large ViewStates: Large viewstate sizes indicate heavy use of session data, generally considered a warning sign of poor implementation.


So when would it be safe to use a High Transaction Approach?

This depends on the implementation of the system you are testing against and your understanding of the underlying architecture. Here is my initial stab at when it would be safe:

  • When a site is generally static, i.e. it doesn’t acquire or retain a large amount of session data for connected users
  • When caching mechanisms are not heavily utilised by middle tiers
  • When a client connecting to a server doesn’t trigger a large retrieval of data in order to process the associated business flows
  • When writing calls to atomic web services


Summary

That’s my initial stab at putting a little more meat on the bones of the transactions-vs-users argument. I think it’s a little strong to come down firmly in one camp or the other. If you have the ability to scale to the actual number of users, then great; if not, then you should evaluate, assess, and report the risk. If anyone has other concise examples I can use, let me know and I’ll update the article and give credit.

How to ensure product performance and quality in Agile enterprises

SearchSoftwareQuality - Fri, 09/21/2012 - 12:33
Agile teams must work together in new ways to ensure product performance, quality and value for customers. Agile expert Lisa Crispin explains how.


Categories: Software Testing

Conversation with a Test Engineer

Google Testing Blog - Wed, 09/19/2012 - 09:31
By Alan Myrvold

Alan Faulkner is a Google Test Engineer working on DoubleClick Bid Manager, which enables advertising agencies and advertisers to bid on multiple ad exchanges. Bid Manager is the next generation of the Invite Media product, acquired by Google in 2010. Alan Faulkner has been focused on the migration component of Bid Manager, which transitions advertiser information from Invite Media to Bid Manager. He joined Google in August 2011, and works in the Kirkland, WA office.
Are you a Test Engineer, or a Software Engineer in Test, and what’s the difference?
Right now, I’m a Test Engineer, but the two roles can be very similar. As a Test Engineer, you’re more focused on the overall quality of the product and speed of releases, while a Software Engineer in Test might focus more on test frameworks, automation, and refactoring code for testability. I think of the difference as more of a shift in focus and not capabilities, since both roles at Google need to be able to write production quality code. Example test engineering tasks I worked on are introducing an automated release process, identifying areas for the team to improve code coverage, and reducing the manual steps needed to validate data correctness.

What is a typical day for you?
When I get in, I look at any code reviews I need to respond to, look for any production bugs from technical account managers that are high priority, and then start writing code. In my current role, I focus my development effort on improving the efficiency and coverage of our large scale integration tests and frameworks. I also work on adding additional features to our product that improve our testability. I typically spend anywhere from 50% to 75% of my time either writing code or participating in code reviews.

Do you write only test code?
No, I write a lot of code that is included in the product as well. One of the great things about being an SET or TE at Google is that you can write product code as easily as test code. I write both. My test code focuses on improving test frameworks and enabling developers to write integration tests. The production code that I write focuses on increasing the verification of external inputs. I also focus on adding features that improve testability. This pushes more quality features into the product itself rather than relying on test code for correctness.

What programming languages do you use?
Both the test and product code are mostly Java. Occasionally I use Python or C++ too.

How much time do you spend doing manual testing?
Right now, with the role I am in, I spend less than 5% of my time doing manual testing. Although some exploratory testing helps develop product knowledge and find risky areas, it doesn’t scale as a repeated process. There are a small number of manual steps, and I focus on ways to reduce them so our team does not spend its time doing repeated manual work as part of our data migration.

Do you write unit tests for code that isn’t yours?
At Google, the responsibility of testing is shared across all product engineers, not just Test Engineers. Everyone is responsible for writing unit tests as well as integration tests for their components. That being said, I have written unit tests for components outside of what I developed, but that has been to illustrate how to write a unit test for said component. Usually that component involved an abnormally complex set of code, or I was illustrating the use of a new mocking framework, such as Mockito.

What do you like about working on the Google advertising products?
I like the challenges of the scalability problems we need to solve, from handling massive amounts of data to dealing with lots of real time ad requests that need to be responded to in milliseconds. I also like the impact, since the products affect a lot of people. It’s rewarding to work on stuff like that.

How is testing at Google different from your experience at other companies?
I feel the role is more flexible at Google. There are fewer SETs and TEs per developer in my group at Google, and you have the flexibility to pick what is most important. For example, I get to write a lot of production code to fix bugs, make the code more testable, and increase the visibility into errors encountered during our data migrations. Plus, developers at Google spend a lot of time writing tests, so testing isn’t just my responsibility.

How does the Google Kirkland office differ from the headquarters in Mountain View?
What I really like about the offices at Google is that each of them has their own local feel and personality. Google encourages this! For instance, the office here in Kirkland has a climbing wall, boats and all the conference rooms in our building are named after local bands in the area. The office in Seattle has kayaks and the New York office has an actual food truck in its cafeteria.

What’s the future of testing at Google?
I think the future is really bright. We have a lot of flexibility to make a big impact on quality, testability and improving our release velocity. We need to release new features faster and with good quality. The problems that we face are complex and at an extreme scale. We need engineering teams focused on ensuring that we have efficient ways to simulate and test. There will always be a need for testers and developers that focus on these areas here at Google.

Interested in test jobs at Google?
Categories: Software Testing

Automate What?

ABAKAS - Catherine Powell - Tue, 09/18/2012 - 09:00
I was in a meeting the other day, talking about an upcoming project. I made some offhand comment about needing to make sure we allowed for a continuous integration and test automation environment when we were sizing our hardware needs. The response was immediate: "but automation never works!" After a bit of discussion, it emerged that this team had never attempted automation at any level under the GUI, and their GUI automation was, well, not good.

Automation at the GUI level is a legitimate thing to do. It's far from the only automation you can do. Consider all the other options:

  • Automated deployment. Whether with hand-rolled scripts or with tools, this is an entire discipline in itself. And it doesn't have to deploy to production; automated deployment to test or staging environments is also useful.
  • Unit tests. Yes, this is test automation, too.
  • API tests. Automation of a user interface, just not a graphical user interface.
  • Integration tests. You can test pieces together without testing the whole thing or without using the GUI.

So here's my automation heuristic:
Automate as much as you can as low as possible.

Let's break that down a bit. "Automate as much as you can" means just that. Not everything is automate-able, and not everything should be automated. If you have a GUI, testing that the colors go nicely together is probably not automate-able - so don't do it. Do that manually. If you have a method that takes inputs and produces outputs, you can probably automate the test that confirms it works with different kinds of inputs and outputs. If automating something involves massive architectural changes or randomly fails one time in three, then you're not going to maintain it and it's either not automate-able or simply broken.

(On a somewhat related note, the ability to automate something frequently correlates with how modular and maintainable its architecture is. Hard-coded configuration parameters, nonexistent interfaces, etc. make code less testable and also a lot harder to maintain!)

"As low as possible" means that you should target your automation as close to the code you're trying to affect (deploy or test, for example) as you can. If you're testing that a method does the right thing, use a unit test. Sure, it might be possible to test that same method through a GUI test but the overhead will make it a lot slower and a lot more brittle. The team I was talking with was right about one thing: GUI tests tend to be brittle. If you can go below the GUI - either through an API or using an injected test framework (think xUnit tests) - then you can avoid the brittleness.

Yay automation! But even more yay, appropriate automation!
Categories: Software Testing

How can software development teams best manage large projects?

SearchSoftwareQuality - Mon, 09/17/2012 - 11:15
Agile expert Lisa Crispin explains how software development teams can manage large projects by breaking them down into smaller chunks.


Categories: Software Testing

ThreadFix: Open source defect management tool speeds security vulnerability fixes

SearchSoftwareQuality - Mon, 09/17/2012 - 08:27
Security and development teams can share a common defect management tool with ThreadFix, Denim Group's new open source security tool.


Categories: Software Testing