Software Testing

Alan and Brent talk testing…

Alan Page - Mon, 03/31/2014 - 18:29

Brent Jensen and I found some time together recently to talk about testing. For some reason, we decided to record it. Worse yet, we decided to share it! I suppose we’ll keep this up until we either run out of things to say (unlikely), or until the internet tells us to shut up (much more likely).

But for now, here’s Alan and Brent talk about Testing.

(potentially) related posts:
  1. Talk Notes: testing at Microsoft
  2. New Testing Ideas
  3. Quick Testing Challenge
Categories: Software Testing

Very Short Blog Posts (14): “It works!”

DevelopSense - Michael Bolton - Mon, 03/31/2014 - 13:47
“It works” is one of Jerry Weinberg‘s nominees for the most ambiguous sentence in the English language. To me, when people say “it works”, they really mean Some aspect of some feature or some function appeared to meet some requirement to some degree based on some theory and based on some observation that some agent […]
Categories: Software Testing

Web Performance News for the Week of March 24, 2014

LoadStorm - Fri, 03/28/2014 - 15:23
A Look Back at Healthcare.gov

The open enrollment period, during which individuals can sign up for coverage, ends March 31, 2014. The publicity Healthcare.gov received at its launch last October sparked numerous discussions. The situation has since settled down, and now we can look back with a clearer picture.

Complex System

Healthcare.gov was more than just a website. It is the front end for an entire set of interconnected systems that exchange data. The system has to make requests to a handful of other agencies, including the IRS, Homeland Security, and the SSA, to check citizenship and financial eligibility. These multiple layers meant Healthcare.gov needed to handle tasks such as:

  • Bank-level authentication and encryption.
  • Verification of identity.
  • Granting insurance companies access to advertise plans.
  • Calculation of federal subsidies.
  • Managing and storing all the application data.

These were just some of the requirements that made the back end of Healthcare.gov struggle. In addition, political interest required the site to handle enrollment up front in order to show the subsidized price and reduce sticker shock for the American public.

Lack of Proper Testing

Initially, the contractors built their system to handle 50,000 people using it at the same time. That number was based on the rollout of the Massachusetts version of the site several years earlier. When a new online system launches, peak load often arrives on the first day. Without proper stress testing, many systems drop dead under the flood of traffic; Intechnica wrote a blog post listing 15 examples of high-profile companies whose websites crashed due to lack of testing. The lack of QA and proper testing before the live launch of Healthcare.gov was a nightmare for user experience: multi-hour wait times, blank menus, and so-called "prison glitches" that stopped users from moving forward until they specified how long they had been incarcerated, despite never having been incarcerated. The administration expected more states to build their own exchanges to reduce the workload on the federal system. The exact opposite happened: over 60% of the country is using the federal exchange instead of a state-run one.
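As a rough illustration of the kind of check that was missing, here is a minimal load-test sketch in Java. The URL, user count, and class name are placeholders; a real load test (with LoadStorm, JMeter, or a similar tool) would ramp traffic gradually and measure response times rather than just counting failures.

import java.net.HttpURLConnection;
import java.net.URL;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Bare-bones concurrency smoke test: fire many simultaneous requests and
// count how many fail. Even a crude check like this reveals a server that
// falls over under its expected day-one load.
public class MiniLoadTest {
    public static void main(String[] args) throws Exception {
        int concurrentUsers = 500;                       // placeholder load level
        AtomicInteger failures = new AtomicInteger();
        ExecutorService pool = Executors.newFixedThreadPool(concurrentUsers);
        for (int i = 0; i < concurrentUsers; i++) {
            pool.submit(() -> {
                try {
                    HttpURLConnection conn = (HttpURLConnection)
                            new URL("https://example.com/enroll").openConnection();
                    conn.setConnectTimeout(10_000);
                    conn.setReadTimeout(10_000);
                    if (conn.getResponseCode() >= 500) {
                        failures.incrementAndGet(); // server error under load
                    }
                } catch (Exception e) {
                    failures.incrementAndGet();     // timeout or connection failure
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.MINUTES);
        System.out.println("Failed requests: " + failures.get() + " of " + concurrentUsers);
    }
}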

Project Management & Communication Problems

Another failure was lack of communication. The front end of Healthcare.gov was built by one company, and the back end by another. They did not communicate adequately during development, so issues arose where the two parts met.
The project schedule was beyond optimistic. A web application of this scale, rolled out for the whole nation, is never really going to be finished; the term "finished" doesn't fit well with software development these days. The publicity urging millions of Americans to sign up on the first day set an unrealistic standard, especially when the contract expected Healthcare.gov to be flawless on October 1.

Patching and Fixing

Since the launch of Healthcare.gov, the pressure to improve the web application has been significant. Work is still being done to improve the system and enhance the user experience. The new management has improved site stability, lowering the error rate below 1% and supporting a capacity of 50,000 concurrent users. Below are charts and graphs showing the progress of Healthcare.gov after the first month since launch.

The post Web Performance News for the Week of March 24, 2014 appeared first on LoadStorm.

Aren’t You Glad You’re a Software Tester?

Eric Jacobson's Software Testing Blog - Fri, 03/28/2014 - 14:27

It’s true.  Our job rocks.  Huff Post called it the 2nd happiest job in America this year.  Second only to being a DBA…yawn.  Two years ago, Forbes said testing was #1.

But why?  Neither article goes in depth.  Maybe it’s because all news is good news, for a tester:

  • The System Under Test (SUT) is crashing in QA, it doesn’t work, it’s a steaming pile of…YES!  My testing was valuable!  My life has meaning!  My testing just saved users from this nightmare!
  • The SUT is finally working.  Awesome!  It’s so nice being part of a development team that can deliver quality software.  I can finally stop testing it and move on.  Our users are going to love this.

See?  Either way it’s good news.

Or maybe I just spin it that way to love my job more.  So be it.  If you think your testing job is stressful, you may want to make a few adjustments in how you work.  Read my You’re a Tester, Relax post.

Categories: Software Testing

Two New Videos About Testing at Google

Google Testing Blog - Thu, 03/27/2014 - 15:41
by Anthony Vallone

We have two excellent, new videos to share about testing at Google. If you are curious about the work that our Test Engineers (TEs) and Software Engineers in Test (SETs) do, you’ll find both of these videos very interesting.

The Life at Google team produced a video series called Do Cool Things That Matter. This series includes a video of an SET and a TE from the Google Maps team (Sean Jordan and Yvette Nameth) discussing their work.

Meet Yvette and Sean from the Google Maps Test Team



The Google Students team hosted a Hangouts On Air event with several Google SETs (Diego Salas, Karin Lundberg, Jonathan Velasquez, Chaitali Narla, and Dave Chen) discussing the SET role.

Software Engineers in Test at Google - Covering your (Code)Bases



Interested in joining the ranks of TEs or SETs at Google? Search for Google test jobs.

Categories: Software Testing

Optimal Logging

Google Testing Blog - Thu, 03/27/2014 - 15:41
by Anthony Vallone

How long does it take to find the root cause of a failure in your system? Five minutes? Five days? If you answered close to five minutes, it’s very likely that your production system and tests have great logging. All too often, seemingly unessential features like logging, exception handling, and (dare I say it) testing are an implementation afterthought. Like exception handling and testing, you really need to have a strategy for logging in both your systems and your tests. Never underestimate the power of logging. With optimal logging, you can even eliminate the necessity for debuggers. Below are some guidelines that have been useful to me over the years.


Channeling Goldilocks

Never log too much. Massive, disk-quota burning logs are a clear indicator that little thought was put into logging. If you log too much, you’ll need to devise complex approaches to minimize disk access, maintain log history, archive large quantities of data, and query these large sets of data. More importantly, you’ll make it very difficult to find valuable information in all the chatter.

The only thing worse than logging too much is logging too little. There are normally two main goals of logging: help with bug investigation and event confirmation. If your log can’t explain the cause of a bug or whether a certain transaction took place, you are logging too little.

Good things to log:
  • Important startup configuration
  • Errors
  • Warnings
  • Changes to persistent data
  • Requests and responses between major system components
  • Significant state changes
  • User interactions
  • Calls with a known risk of failure
  • Waits on conditions that could take measurable time to satisfy
  • Periodic progress during long-running tasks
  • Significant branch points of logic and conditions that led to the branch
  • Summaries of processing steps or events from high level functions - Avoid logging every step of a complex process in low-level functions.

Bad things to log:
  • Function entry - Don’t log a function entry unless it is significant or logged at the debug level.
  • Data within a loop - Avoid logging from many iterations of a loop. It is OK to log from iterations of small loops or to log periodically from large loops (see the sketch after this list).
  • Content of large messages or files - Truncate or summarize the data in some way that will be useful to debugging.
  • Benign errors - Errors that are not really errors can confuse the log reader. This sometimes happens when exception handling is part of successful execution flow.
  • Repetitive errors - Do not repetitively log the same or similar error. This can quickly fill a log and hide the actual cause. Frequency of error types is best handled by monitoring. Logs only need to capture detail for some of those errors.
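
For the loop guidance above, here is a minimal sketch of logging periodic progress from a large loop rather than every iteration; the class name and batch size are invented for illustration.

import java.util.List;
import java.util.logging.Logger;

public class BatchImporter {
    private static final Logger log = Logger.getLogger(BatchImporter.class.getName());

    // Log progress every 10,000 records instead of once per iteration.
    void importRecords(List<String> records) {
        for (int i = 0; i < records.size(); i++) {
            process(records.get(i));
            if ((i + 1) % 10_000 == 0) {
                log.info("Imported " + (i + 1) + " of " + records.size() + " records");
            }
        }
        log.info("Import complete: " + records.size() + " records");
    }

    private void process(String record) {
        // Per-record work; deliberately not logged.
    }
}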


There is More Than One Level

Don't log everything at the same log level. Most logging libraries offer several log levels, and you can enable certain levels at system startup. This provides a convenient control for log verbosity.

The classic levels are:
  • Debug - verbose and only useful while developing and/or debugging.
  • Info - the most popular level.
  • Warning - strange or unexpected states that are acceptable.
  • Error - something went wrong, but the process can recover.
  • Critical - the process cannot recover, and it will shut down or restart.

Practically speaking, only two log configurations are needed:
  • Production - Every level is enabled except debug. If something goes wrong in production, the logs should reveal the cause.
  • Development & Debug - While developing new code or trying to reproduce a production issue, enable all levels.


Test Logs Are Important Too

Log quality is equally important in test and production code. When a test fails, the log should clearly show whether the failure was a problem with the test or production system. If it doesn't, then test logging is broken.

Test logs should always contain:
  • Test execution environment
  • Initial state
  • Setup steps
  • Test case steps
  • Interactions with the system
  • Expected results
  • Actual results
  • Teardown steps


Conditional Verbosity With Temporary Log Queues

When errors occur, the log should contain a lot of detail. Unfortunately, detail that led to an error is often unavailable once the error is encountered. Also, if you’ve followed advice about not logging too much, your log records prior to the error record may not provide adequate detail. A good way to solve this problem is to create temporary, in-memory log queues. Throughout processing of a transaction, append verbose details about each step to the queue. If the transaction completes successfully, discard the queue and log a summary. If an error is encountered, log the content of the entire queue and the error. This technique is especially useful for test logging of system interactions.
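
A minimal sketch of such a queue might look like the following; the class and method names are invented for illustration, and a real implementation would also cap the queue size.

import java.util.ArrayList;
import java.util.List;
import java.util.logging.Level;
import java.util.logging.Logger;

// Verbose detail is buffered in memory and only written to the real log if
// the transaction fails; on success, only a short summary is logged.
public class TransactionLogQueue {
    private static final Logger log = Logger.getLogger(TransactionLogQueue.class.getName());
    private final List<String> buffer = new ArrayList<>();

    public void add(String detail) {
        buffer.add(detail);                 // kept in memory, not logged yet
    }

    public void success(String summary) {
        buffer.clear();                     // discard the detail
        log.info(summary);                  // log just a summary
    }

    public void failure(Throwable error) {
        for (String detail : buffer) {      // dump everything that led to the error
            log.warning(detail);
        }
        log.log(Level.SEVERE, "Transaction failed", error);
        buffer.clear();
    }
}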


Failures and Flakiness Are Opportunities

When production problems occur, you’ll obviously be focused on finding and correcting the problem, but you should also think about the logs. If you have a hard time determining the cause of an error, it's a great opportunity to improve your logging. Before fixing the problem, fix your logging so that the logs clearly show the cause. If this problem ever happens again, it’ll be much easier to identify.

If you cannot reproduce the problem, or you have a flaky test, enhance the logs so that the problem can be tracked down when it happens again.

Use failures to improve logging throughout the development process. While writing new code, try to refrain from using debuggers and rely only on the logs. Do the logs describe what is going on? If not, the logging is insufficient.


Might As Well Log Performance Data

Logged timing data can help debug performance issues. For example, it can be very difficult to determine the cause of a timeout in a large system, unless you can trace the time spent on every significant processing step. This can be easily accomplished by logging the start and finish times of calls that can take measurable time:
  • Significant system calls
  • Network requests
  • CPU intensive operations
  • Connected device interactions
  • Transactions
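
Wrapping such calls in a small timing helper is one way to capture this data; the following rough sketch uses invented names.

import java.util.function.Supplier;
import java.util.logging.Logger;

public class TimedCall {
    private static final Logger log = Logger.getLogger(TimedCall.class.getName());

    // Log the start, finish, and elapsed time of a call that may take a while.
    public static <T> T timed(String label, Supplier<T> call) {
        long start = System.nanoTime();
        log.info(label + " started");
        try {
            return call.get();
        } finally {
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            log.info(label + " finished in " + elapsedMs + " ms");
        }
    }

    public static void main(String[] args) {
        // Example usage: time a (pretend) network request.
        String body = timed("fetchUserProfile", () -> "...response body...");
        System.out.println(body);
    }
}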


Following the Trail Through Many Threads and Processes

You should create unique identifiers for transactions that involve processing across many threads and/or processes. The initiator of the transaction should create the ID, and it should be passed to every component that performs work for the transaction. This ID should be logged by each component when logging information about the transaction. This makes it much easier to trace a specific transaction when many transactions are being processed concurrently.
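
As a rough sketch of the technique (the service and client classes here are invented; many logging libraries also provide a mapped diagnostic context that attaches the ID to every log line automatically):

import java.util.UUID;
import java.util.logging.Logger;

// The initiator creates a transaction ID once; every component that works on
// the transaction includes that ID when it logs.
public class CheckoutService {
    private static final Logger log = Logger.getLogger(CheckoutService.class.getName());

    public void startCheckout(String cartId) {
        String txId = UUID.randomUUID().toString();   // created by the initiator
        log.info("[tx=" + txId + "] checkout started for cart " + cartId);
        new PaymentClient().charge(txId, cartId);     // ID passed to the next component
    }
}

class PaymentClient {
    private static final Logger log = Logger.getLogger(PaymentClient.class.getName());

    // In a real system the ID might travel in a request header or message
    // attribute rather than a method parameter.
    void charge(String txId, String cartId) {
        log.info("[tx=" + txId + "] sending payment request for cart " + cartId);
    }
}

Grepping the logs for one tx value then reconstructs the whole transaction, even when many transactions interleave.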


Monitoring and Logging Complement Each Other

A production service should have both logging and monitoring. Monitoring provides a real-time statistical summary of the system state. It can alert you when a percentage of certain request types is failing, the system is experiencing unusual traffic patterns, performance is degrading, or other anomalies occur. In some cases, this information alone will clue you in to the cause of a problem. However, in most cases, a monitoring alert is simply a trigger for you to start an investigation. Monitoring shows the symptoms of problems. Logs provide details and state on individual transactions, so you can fully understand the cause of problems.

Categories: Software Testing

Slides from Defect Sampling Presentation at ASTQB Conference.

Randy Rice's Software Testing & Quality - Tue, 03/25/2014 - 16:46
Hi all,

Thanks to everyone at my presentation at the ASTQB Conference on defect sampling.

Here are my slides from today's presentation:

http://www.softwaretestingtrainingonline.com/cc/Slide%20Shows/Defect%20Sampling%20-%20ASTQB.pdf

Here is the video from Gold Rush:

http://youtu.be/0XXzdNma7O4

Enjoy!




Categories: Software Testing

Throw user stories away after they are delivered

The Quest for Software++ - Tue, 03/25/2014 - 05:58

This is an excerpt from my upcoming book 50 Quick Ideas to Improve your User Stories. If you want to try this idea in practice, I’ll be running a workshop on improving user stories at the Product Owner Survival Camp

Many teams get stuck by using previous stories as documentation. They assign test cases, design diagrams or specifications to stories in digital task tracking tools. Such tasks become a reference for future regression testing plans or impact analysis. The problem with this approach is that it is unsustainable for even moderately complex products.

A user story will often change several functional aspects of a software system, and the same functionality will be impacted by many stories over a longer period of time. One story might put some feature in, another story might modify it later or take it out completely. In order to understand the current situation, someone has to discover all relevant stories, put them in reverse chronological order, find out about any potential conflicts and changes, and then come up with a complete picture.

Designs, specifications and tests explain how a system works currently – not how it changes over time. Using previous stories as a reference is similar to looking at a history of credit card purchases instead of the current balance to find out how much money is available. It is an error-prone, time consuming and labour intensive way of getting important information.

The reason why so many teams fall into this trap is that it isn’t immediately visible. Organising tests or specifications by stories makes perfect sense for work in progress, but not so much to explain things done in the past. It takes a few months of work until it really starts to hurt.

It is good to have clear confirmation criteria for each story, but that does not mean test cases or specifications have to be organised by stories for eternity.

Divide work in progress and work already done, and manage specifications, tests and design documents differently for those two groups. Throw user stories away after they are done, tear up the cards, close the tickets, delete the related wiki pages. This way you won’t fall into the trap of having to manage documentation as a history of changes. Move the related tests and specifications over to a structure that captures the current behaviour organised from a functional perspective.

Key Benefits

Specifications and tests organised by functional areas describe the current behaviour without the need for a reader to understand the entire history. This will save a lot of time in future analysis and testing, because it will be faster and less error prone to discover the right information.

If your team is doing any kind of automated testing, those tests are likely already structured according to the current system behaviour and not a history of changes. Managing the remaining specifications and tests according to a similar structure can help avoid a split-brain syndrome where different people work from different sources of truth.

How to make this work

Some teams explicitly divide tests and specifications for work in progress and existing functionality. This allows them to organise information differently for different purposes. I often group tests for work in progress first by the relevant story, and then by functionality. I group tests for existing functionality by feature areas, then functionality. For example, if an enhanced registration story involves users logging in with their Google accounts and making a quick payment through Paypal, those two aspects of a story would be captured by two different tests, grouped under the story in a hierarchy. This allows us to divide work and assign different parts of a story to different people, but also ensure that we have an overall way of deciding when a story is done. After delivery, I would move the Paypal payment test to the Payments functional area, and merge with any previous Paypal-related tests. The Google mail integration tests would go to the User Management functional area, under the registration sub-hierarchy. This allows me to quickly discover how any payment mechanism works, regardless of how many user stories were involved in delivery.

Other teams keep tests and specifications only organised by functional areas, and use tags to mark items in progress or related to a particular story. They would search by tag to identify all tests related to a story, and configure automated testing tools to execute only tests with a particular story tag, or only tests without the work-in-progress tag. This approach needs less restructuring after a story is done, but requires better tooling support.

From a testing perspective, the existing feature sets hierarchy captures regression tests, and the work in progress hierarchy captures acceptance tests. From a documentation perspective, the existing feature sets hierarchy is documentation ‘as-is’, and the work in progress hierarchy is documentation ‘to-be’. Organising tests and specifications in this way allows teams to define different testing policies. For example, if a test in the feature sets area fails, we sound the alarm and break the current build. On the other hand, if a test in the current iteration area fails, that’s expected – we’re still building that feature. We are only really interested in the whole group of tests under a story passing for the first time.

Some test management tools are great for automation but not so good in publishing information so it can be easily discovered. If you use such a tool, it might be useful to create a visual navigation tool, such as a feature map. Feature maps are hierarchical mind maps of functionality with hyperlinks to relevant documents at each map node. They can help people quickly decide where to put related tests and specifications after a story is done and produce a consistent structure.

Some teams need to keep a history of changes, for regulatory or auditing purposes. In such cases, adding test plans and documentation to the same version control system as the underlying code is a far more powerful approach than keeping that information in a task tracking tool. Version control systems will automatically track who changed what and when. They will also enable you to ensure that the specifications and tests follow the code whenever you create a separate branch or a version for a specific client.

The Cartoon Tester Book

Cartoon Tester - Mon, 03/24/2014 - 06:52

The Cartoon Tester Book is now available for purchase at LeanPub and Amazon!

If I could recommend a supplier, I would suggest ordering it via LeanPub, as they are more flexible with the price and the file format.

The book contains nearly 200 cartoons about software development, testing, bugs and everything in between.





If you like the cartoons, please write up a review on Amazon! Love you x

Categories: Software Testing

Adventures in Commuting

Test Obsessed - Sat, 03/22/2014 - 11:14

I rested my head against the window, drowsy from a long day, full tummy, and steady hum of the train. My husband sat next to me absorbed in a game. It’s a long ride from San Francisco, where we work, to Pleasanton, where we live. I thought I might take a nap.

A disembodied voice crackled, announcing the next station: “West Oakland.”

The train stopped, the doors opened. I watched the passengers. The train car was about half full. A few people got off. Three black youths, a girl and two boys, got on. They were talking animatedly, happy. All three were well-dressed: fashionable clothes that looked new and fit well. Tight black pants. Caps. A little bling. They’d taken the time to look nice, and they were out for a good time.

One of the boys sat while the girl and the other boy stood, balancing by holding onto the straps suspended from the ceiling of the car. They talked and joked together. I felt a little voyeuristic watching them share this private moment of fun, but I couldn’t help myself. I was too taken with their delight and energy. Their faces lit with joy as they laughed. The girl and the boy hung from the straps and swung their weight around, enjoying the interplay of gravity and momentum of the train and their own upper body strength.

The girl clung to the strap then braced her other hand on the side of the train. Walking her feet up the train side, she executed an impressive flip. She did it again, obviously pleased with herself. “You try doing that in heels!” she crowed. She did it again. I noted with admiration that she was wearing boots with 5″ heels.

I guessed the kids to be somewhere between 15 and 20. It’s an in-between age. Young enough to treat a train as a jungle gym but old enough for a night out on their own. I thought about my own kids, about the same age.

The disembodied crackling voice announced the next station: Lake Merritt. The train stopped. The kids stayed on. The train started again. The train abruptly came to a screeching halt. “That’s not good,” the passenger in front of me muttered. I looked around but couldn’t tell why we were stopped. I leaned my head back against the window; my husband stayed immersed in his game. The trains don’t run on time here. Stopping wasn’t unusual.

Two BART police officers stepped onto the car. Another guy, a passenger, trailed behind them. The passenger with the cops said something I couldn’t hear and gestured toward the three youths. He was a mousy white guy in a button-down oxford shirt, corduroy pants, and a fleece vest. He looked like the kind of guy who drives an SUV with Save the Planet bumper stickers.

The first police officer addressed the youths. “There’s been a complaint. I need you to step off the train.”

“What’s the complaint?” the girl asked.

“We didn’t do anything,” the boy said.

“There’s been a complaint,” the police officer insisted. “Please come with me.”

“What’s the complaint?” the girl repeated.

The mousy passenger who had made the accusation faded back. I didn’t see where he went. Now it was just the cops and the kids.

“We got a complaint that you were asking for money.” The cop’s voice carried.

“We didn’t do anything!” the boy repeated, louder.

“Please come with me,” the cop insisted. “This train can’t move until you get off.”

The crowd on the car grew restless, impatient. The two cops flanked the three youths and attempted to escort them off the train. For a minute, it looked like the kids were going to comply. The boy who was seated stood, headed toward the open door, but then changed his mind. “I didn’t do anything. I was just sitting there,” he said. He got back on the train.

The girl’s face had gone from joy to fury. “It’s that racist white guy,” she yelled, loud enough for the whole train car to hear. “We didn’t do anything, but he thinks we did because we’re black.”

A passenger stood up, Asian, “They really didn’t do anything,” he said to the cops. Then to the kids: “Let’s just step off the train so they can make their report.”

Another passenger stood up. A black woman. “Really, officers, these kids didn’t do anything.”

By this time I was on the edge of my seat. It was so unfair. These kids were just out for a good time. They had done nothing besides being full of energy and life. They hadn’t asked for money. They were being harassed by the cops for something they had not done.

I’m still not sure what made me decide to act on my sense of injustice. Maybe I arrogantly thought the police would listen to me, a middle-class, middle-aged, white woman. Maybe I saw the faces of my own kids in the black youths and imagined them being detained on hearsay. Whatever the trigger, I stood up, moving past my husband who looked at me quizzically. I walked up the train aisle to the officer who had been barking orders.

“Sir, they really didn’t do anything. They just got on at the last stop.” My voice was steady. Calm. Quiet. Pitched for the circle of people involved, not for the rest of the train. Internally I was amazed that my adrenaline wasn’t amped up, that my voice was not shaking. I don’t normally confront cops. It is not at all like me to tell a cop how to do his job. I couldn’t believe I was confronting a cop and not even nervous about it.

By this time the crowd on the train was growing antsy. A few were calling for the kids to get off the train so they could get where they were going, but far more were calling for the cops to leave the kids alone. Several more passengers stood up and started shouting at the cops:

“These kids didn’t do anything!”

“Leave them alone!”

“Racist white guy can’t tell blacks apart!”

“It must have been someone else!”

“Let the kids go!”

I felt self-conscious standing in the middle of everything. I’d played my bit part in the unfolding drama. I returned to my seat.

A woman on the other side of the car leaned over toward me, “I could have done that,” she said. She was white like me, graying hair cut shoulder length in a bob. She looked like she could live in the house next door in my white-picket-fence neighborhood.

“I’m sorry?” I was confused by her words. What does that even mean, I could have done that? If you could have why didn’t you? Did she mean that she could not have done what I did?

“I could have done what you did,” she repeated.

I smiled and nodded. I didn’t know what else to say.

My attention turned back to the growing situation. More passengers had stood up to defend the kids. Another BART police officer entered the car. The increasing tension and buzz from the crowd made it hard to hear. I realized that there were several more police officers waiting on the platform. The situation had escalated rapidly. This could become ugly.

“CHECK THE BUCKET,” the girl shouted.

For the first time I noticed that the kids had an amplifier and a bucket with them, and suddenly I saw a missing piece of the puzzle. The kids were street performers. They weren’t out for a night of clubbing. They were dancers.

Sometimes street performers come onto the BART trains. They strut up and down the aisle, put on a show, and ask for money. I personally find being a captive audience to these shows annoying. I never give the performers money because I don’t want to encourage it. But I did not realize that such behavior warranted police intervention on this scale.

Nor did I think that these kids had been dancing for money on this train. I looked around for the accuser but couldn’t see him.

I was incensed by the entire situation. The accuser had disappeared, the situation was escalating, and these innocent kids were now targets.

“This isn’t right,” I told my husband. I stood up, shouldered my backpack in case I decided to leave the train, and pushed past my husband’s knees for the second time. Without even thinking about whether or not my husband would follow me or would be upset with me for getting involved, I stepped into the situation again.

I sidled up to the cop who had originally been barking orders and who was now standing between the kids and the rest of the passengers.

“They really didn’t do anything,” I said into his ear. “They didn’t. This isn’t right. I will be happy to make a statement. What can I do to help?”

The cop turned to me. “They might not have done anything originally,” he said, “but now they are resisting. If they don’t get off the train we will have to arrest them.”

Another police officer entered the train. The mood of the passengers was growing dark, roiling. It became a palpable thing, shifting energies threatening to spill over into violence. My stomach fluttered.

I turned around, realized that half a dozen people had their cell phones out, recording the incident. I idly wondered if I would show up in YouTube videos. I decided I didn’t care. This wasn’t right and I wouldn’t stand for it. These kids were being detained, harassed, and possibly arrested all because the mousy guy in the fleece vest didn’t like being bothered by dancers panhandling on BART. These kids might have danced on BART at some point, but not tonight, not on this train. They didn’t deserve this.

Another officer entered the train. It was now five officers to the three kids. The Asian passenger who had originally stepped up to intervene turned to the kids, “Why don’t we get off the train and get this handled. Then we can all get onto the next train.”

The police closed their circle, puffing themselves up, herding the kids to the exit. I followed them to the door. I was about to step off the train. I turned back to see if my husband was paying attention. He was staring at me, his face open, questioning. “This isn’t right,” I mouthed at him. He stood up, following me.

I turned my attention back to the kids. The cops had one of the boys in an arm lock and pressed him up against the wall of the station. It looked like it hurt. They had the girl surrounded. I couldn’t see her, but I could hear her screaming. I heard another voice — the other boy? the Asian passenger who had intervened? a cop? — telling her “This isn’t worth it. Stop struggling.”

I stepped off the train onto the platform.

The cops who had the girl surrounded had her hands behind her back. She was struggling and screaming. They began half pulling, half dragging her down the platform toward the station exit. “LET GO OF ME,” she shouted. “LET GO OF ME LET GO OF ME LET ME GO!”

An officer stood in the middle of the chaos just looking, sizing it all up. He was new on the scene. He was also the only officer not in the middle of the chaos. I walked up to him. “Sir, they didn’t do anything,” I said. I watched his face. Dark skin. Brown eyes. A guy who’d been having a quiet night. He turned to me.

“You’re a witness?” he asked.

“Yes sir. I was on the car. These kids didn’t do anything. This isn’t right.” The girl was still screaming. Each panicked shriek felt like shrapnel in my soul. That could be my daughter, I thought. So very not right. “I will be happy to make a statement,” I said.

“OK,” the cop replied. He took out a notebook, asked me my name. He had to ask me to spell my last name a second time. I handed him my driver’s license to make things easier for him. He started taking notes, then looked up at my husband and me. “I left my recorder in my office,” he said. “Would you be willing to go there to make a statement?”

I agreed immediately. To my surprise, my husband agreed as well. “Of course,” he said. We would probably be an hour late, or more, getting home. This was worth it.

We followed the officer off the platform, past the turnstiles, and through a door marked “authorized personnel only.” He led us down a maze of corridors, past a bank of cubicles, and into his office. As promised, he took out a recorder and interviewed us for a verbal statement. My husband and I each told our version of the events. They lined up. Each of us emphasized that the kids had done nothing wrong.

“Let me just clarify a few points,” the officer said after we finished our statements. “In your opinion, did the officers use excessive force at any time?” I said they had, citing the painful hands-behind-the-back restraining technique and the panicked girl screaming. My husband said they had not, but pointed out that the officers should have just let the kids go when so many passengers testified that the kids were innocent.

The officer asked a few more questions: had the officers used mace, batons, or physical violence on the kids? had they acted unprofessionally?

We finished the interview, the officer escorted us back into the station, said that internal affairs would probably be contacting us in connection with the investigation, then said goodbye.

As we waited for the next train, I played over the evening in my mind.

I realized that the statements we made existed only on a recorder in the cop’s office. The kids wouldn’t know we made the recorded statement. If they got lawyers, the lawyers wouldn’t know. Our testimony might never be heard by anyone involved in the case against the kids.

The more I played over the questions that the officer had asked us, the more I realized that the questions were all about the conduct of the other officers, not about the kids and their innocence.

The internal affairs investigation would, no doubt, turn into a huge deal. The situation had escalated so rapidly, fueled by racial tension. So many people had recorded videos. The train had been halted for 15 minutes or more. Passengers were irate. There would be much attention on the conduct of the cops.

I didn’t care about the cops. I wanted to help the kids.

Somehow, I doubted that I had.

Next time I see dancers on BART, I’m giving them whatever is in my wallet. I’ll think of it as their defense fund.

Categories: Software Testing

Test Automation As An Investigation Trigger

Eric Jacobson's Software Testing Blog - Fri, 03/21/2014 - 13:40

During a recent exchange about the value of automated checks, someone rhetorically asked:

“Is automation about finding lots of bugs or triggering investigation?”

Well…the latter, right?

  • When an automated check passes consistently for months then suddenly fails, it’s an indication the system-under-test (SUT) probably unexpectedly changed.  Investigate!  The SUT change may not be directly related to the check but who cares, you can still pat the check on the back and say, “thank you automated check, for warning me about the SUT change”.
  • When you design/code an automated check, you are learning how to interact with your SUT and investigating it.  If there are bugs uncovered during the automated check design/coding, you report them now and assume the automated checks should happily PASS for the rest of their existence.
  • If someone is organized enough to tell you the SUT is about to change, you should test the change and assess the impact on your automated checks and make necessary updates.  Doing so requires investigating said SUT changes.

In conclusion, one can argue, even the lamest of automated checks can still provide value.  Then again, one can argue most anything.

Categories: Software Testing

Very Short Blog Posts (13): When Will Testing Be Done?

DevelopSense - Michael Bolton - Fri, 03/21/2014 - 12:54
When a decision maker asks “When will testing be done?”, in my experience, she really means “When will I have enough information about the state of the product and the project, such that I can decide to release or deploy the product?” There are a couple of problems with the latter question. First, as […]
Categories: Software Testing

Web Performance News for the Week of March 17, 2014

LoadStorm - Fri, 03/21/2014 - 11:14


Does Adblock improve your load time?

An essential rule is to display ad banners after the main content has loaded. In terms of perception, users won’t notice the banner until they find the content they’re looking for on the page. While your web page will most likely download less data, your perception of speed isn’t likely to fluctuate drastically because of the Adblock extension. Perhaps testing this would be an interesting experiment!

How does adblock work?

Adblock takes the div of the ad and gives it a height of 0. You can test this by putting a mini ad on your site and looking at the source code while switching Adblock on and off. When Adblock is on, you can check whether that element’s height is 0 and, if it is, display a prompt or an alert telling the user that they have Adblock on.

Even if loaded correctly, certain HTML elements are still hidden from the page, especially if the element has class="Ad".


For example, if you had a whole bunch of images for a page called “apparel design” and abbreviated them to “AD”, very few of your images would display if you have Adblock. AdBlock uses pretty simple logic to determine what content is an advertisement. One of the methods seems to be looking for anything named or prefixed with “%ad%” or “%banner%”.
Here are some elements that Adblock Plus blocks:
  • id=”adBanner”: blocked
  • id=”adbanner”: blocked
  • id=”adAd”: blocked
  • id=”bannerad”: blocked
  • id=”adHello”: not blocked
  • class=”adAd”: not blocked
  • id=”adVert”: not blocked
  • id=”adFixed”: not blocked
  • id=”adFull”: not blocked

Another method for blocking ads is communication blocking. Certain servers host ads; AdBlock just needs to keep a list of these server addresses, and when the browser is ready to retrieve the ads, Adblock steps in to stop the connection. In this case communication to ad servers is blocked altogether (the client request never occurs). Adblock will block all requests to certain URLs, such as Google AdSense. Adblock Plus gets help from various lists such as EasyList to define, customize and distribute rules that maximize blocking efficiency.

Adblock Users

User experience is important for retaining visitors to your website, but what if your website generates revenue entirely from web ads? Some users complain that ads put their computers at risk, and that ads are ugly, irrelevant, and interrupting. Ads also include tracking technology, which feels like a major violation of privacy for web users. Usually the best way to stop users from complaining is to make sure your ads are non-intrusive and topical.

Although we may hate ads, they are important to keep the internet free. To help facilitate a middle ground between users and advertisers, Adblock Plus has set up a program called Acceptable Ads. Users can allow some advertising that isn’t considered annoying to be viewed. By doing this they support websites that rely on advertising but choose to do it in a non-intrusive way. Adblock Plus also answers the question “Which ads are acceptable?” To name a few criteria, they list static advertisements only (no animations, sounds or similar), preferably text only, and no attention-grabbing images.

The post Web Performance News for the Week of March 17, 2014 appeared first on LoadStorm.

Swiss Testing Day 2014

Alan Page - Fri, 03/21/2014 - 08:38

As you may have noticed, my blogging has slowed. I’ve been navigating and ramping up on a new team and helping the team shift roles (all fun stuff). I’ve also been figuring out how to work in a big org (our team is “small”, but it’s part of one of Microsoft’s “huge” orgs – and there are some learning pains that go with that).

But – I did take a few days off this week to head to Zurich and give a talk at Swiss Testing Day. I attended the conference three years ago (where I was impressed with the turnout, passion and conference organization), and I’m happy to say that it’s still a great conference. I met some great people (some who I had virtually met on twitter), and had a great time catching up with Adrian and the rest of the Swiss-Q gang (the company that puts on Swiss Testing Day).

I talked (sort of) about testing Xbox. I talked a bit about Xbox, and mostly about what it takes to build and test the Xbox. I stole a third or so from my Star West keynote, and was able to use a few more examples now that the Xbox One has shipped (ironically, the Xbox One is available in every country surrounding Switzerland, but not in Switzerland). Of note: now that I’ve retired from the Xbox team (or moved on, at least), I expect that I’m done giving talks about Xbox (although I expect I’ll use examples from the Xbox team in at least one of my sessions at Star East).

I owe some follow ups on my Principles post, and I promise to get to those soon. I’ll define “soon” later.

(potentially) related posts:
  1. Thoughts on Swiss Testing Day
  2. Home Again
  3. Rockin’ STAR West
Categories: Software Testing

Kanban in Action – book review

The Quest for Software++ - Thu, 03/20/2014 - 01:28

This wonderful little book is a gentle introduction to Kanban by Marcus Hammarberg and Joakim Sunden. It explains the theory behind flow-based processes and provides a ton of practical implementation tips on everything from visualising work to how to properly take out a sticky note.

The first part deals with the basic principles of Kanban, using visual boards to show and manage work in progress, managing queues and bottlenecks and distributing and limiting work across team members. The second part explains how to manage continuous process improvement, how to deal with estimation and planning and how to define and implement different classes of service.

My impression is that this book will be most useful to people completely new to Kanban, who are investigating the concepts or starting to adopt this process. If you already use Kanban, you might find the chapters on managing bottlenecks and process metrics interesting.

Compared to David Anderson’s book, Kanban in Action is more approachable for beginners. Each important concept is described with lots of small, concrete examples, which will help readers new to Kanban put things into perspective, but also reinforce the message that there is no one-size-fits-all solution. Anderson’s book goes into more depth to explain the theory behind the practice, while this book has more practical information and concrete advice on topics such as setting work-in-progress limits, managing different types of items on a visualisation board and choosing workflow metrics. If you’re researching this topic or starting to implement Kanban, it’s worth reading both books.

Webinar Video and Slides - What to Automate First?

Randy Rice's Software Testing & Quality - Wed, 03/19/2014 - 21:42
Hi Folks,

Thanks for attending the webinar today on What to Automate First - The Low-Hanging Fruit of Test Automation. If you missed it, here is the video:

http://youtu.be/eo66ouKGyVk

Here are the slides:

http://www.slideshare.net/rrice2000/what-do-we-automate-first

I hope you enjoy the content!
Categories: Software Testing

I Said Something Clever, Once

QA Hates You - Wed, 03/19/2014 - 04:55

Talking about automated testing once, I said:

Automated testing, really, is a misnomer. The testing is not automated. It requires someone to build it and to make sure that the scripts remain synched to a GUI.

Quite so, still.

Categories: Software Testing

Testing on the Toilet: What Makes a Good Test?

Google Testing Blog - Tue, 03/18/2014 - 16:10
by Erik Kuefler

This article was adapted from a Google Testing on the Toilet (TotT) episode. You can download a printer-friendly version of this TotT episode and post it in your office.

Unit tests are important tools for verifying that our code is correct. But writing good tests is about much more than just verifying correctness — a good unit test should exhibit several other properties in order to be readable and maintainable.

One property of a good test is clarity. Clarity means that a test should serve as readable documentation for humans, describing the code being tested in terms of its public APIs. Tests shouldn't refer directly to implementation details. The names of a class's tests should say everything the class does, and the tests themselves should serve as examples for how to use the class.

Two more important properties are completeness and conciseness. A test is complete when its body contains all of the information you need to understand it, and concise when it doesn't contain any other distracting information. This test fails on both counts:

@Test public void shouldPerformAddition() {
  Calculator calculator = new Calculator(new RoundingStrategy(),
      "unused", ENABLE_COSIN_FEATURE, 0.01, calculusEngine, false);
  int result = calculator.doComputation(makeTestComputation());
  assertEquals(5, result); // Where did this number come from?
}
Lots of distracting information is being passed to the constructor, and the important parts are hidden off in a helper method. The test can be made more complete by clarifying the purpose of the helper method, and more concise by using another helper to hide the irrelevant details of constructing the calculator:

@Test public void shouldPerformAddition() {
  Calculator calculator = newCalculator();
  int result = calculator.doComputation(makeAdditionComputation(2, 3));
  assertEquals(5, result);
}
One final property of a good test is resilience. Once written, a resilient test doesn't have to change unless the purpose or behavior of the class being tested changes. Adding new behavior should only require adding new tests, not changing old ones. The original test above isn't resilient since you'll have to update it (and probably dozens of other tests!) whenever you add a new irrelevant constructor parameter. Moving these details into the helper method solved this problem.
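
The helper methods themselves are not shown in the episode. A plausible sketch follows; the Computation type, the Operation enum, and the constructor details are assumptions inferred from the snippet above, not part of the original article.

// Hypothetical helpers for the test above (inferred, not from the article).
private Calculator newCalculator() {
  // Hide the irrelevant construction details in one place, so adding a new
  // constructor parameter changes only this helper, not every test.
  return new Calculator(new RoundingStrategy(),
      "unused", ENABLE_COSIN_FEATURE, 0.01, calculusEngine, false);
}

private Computation makeAdditionComputation(int a, int b) {
  // Build a computation whose expected result (a + b) is obvious to the reader.
  return new Computation(Operation.ADD, a, b);
}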

Categories: Software Testing

How Low Can You Go?

QA Hates You - Tue, 03/18/2014 - 10:35

I think I have found the source of the problem:

I expected some sort of enlightenment when I got all the way back to error 000, but that’s not the case, actually.

Or is Google denying there’s an error here?

Fun fact: 000 is 000 in binary. It is one of 10 numbers that are the same.

Categories: Software Testing
