Software Testing

Optimal Logging

Google Testing Blog - Thu, 03/27/2014 - 15:41
by Anthony Vallone

How long does it take to find the root cause of a failure in your system? Five minutes? Five days? If you answered close to five minutes, it’s very likely that your production system and tests have great logging. All too often, seemingly unessential features like logging, exception handling, and (dare I say it) testing are an implementation afterthought. Like exception handling and testing, you really need to have a strategy for logging in both your systems and your tests. Never underestimate the power of logging. With optimal logging, you can even eliminate the necessity for debuggers. Below are some guidelines that have been useful to me over the years.


Channeling Goldilocks

Never log too much. Massive, disk-quota-burning logs are a clear indicator that little thought was put into logging. If you log too much, you’ll need to devise complex approaches to minimize disk access, maintain log history, archive large quantities of data, and query these large sets of data. More importantly, you’ll make it very difficult to find valuable information in all the chatter.

The only thing worse than logging too much is logging too little. There are normally two main goals of logging: help with bug investigation and event confirmation. If your log can’t explain the cause of a bug or whether a certain transaction took place, you are logging too little.

Good things to log:
  • Important startup configuration
  • Errors
  • Warnings
  • Changes to persistent data
  • Requests and responses between major system components
  • Significant state changes
  • User interactions
  • Calls with a known risk of failure
  • Waits on conditions that could take measurable time to satisfy
  • Periodic progress during long-running tasks
  • Significant branch points of logic and conditions that led to the branch
  • Summaries of processing steps or events from high-level functions - Avoid logging every step of a complex process in low-level functions.

Bad things to log:
  • Function entry - Don’t log a function entry unless it is significant or logged at the debug level.
  • Data within a loop - Avoid logging from many iterations of a loop. It is OK to log from iterations of small loops or to log periodically from large loops.
  • Content of large messages or files - Truncate or summarize the data in some way that will be useful to debugging.
  • Benign errors - Errors that are not really errors can confuse the log reader. This sometimes happens when exception handling is part of successful execution flow.
  • Repetitive errors - Do not repetitively log the same or similar error. This can quickly fill a log and hide the actual cause. Frequency of error types is best handled by monitoring. Logs only need to capture detail for some of those errors.
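One way to act on the advice that logs only need detail for some repeated errors is a filter that suppresses repeats of an identical message inside a time window. A minimal sketch in Python's standard logging module (the class name and 60-second window are illustrative choices, not from the original post):

```python
import logging
import time

class RepeatSuppressFilter(logging.Filter):
    """Drop repeats of an identical message within a time window."""

    def __init__(self, window_seconds: float = 60.0):
        super().__init__()
        self.window = window_seconds
        self.last_seen = {}  # rendered message -> last emit time

    def filter(self, record: logging.LogRecord) -> bool:
        now = time.monotonic()
        key = record.getMessage()
        last = self.last_seen.get(key)
        if last is not None and now - last < self.window:
            return False  # suppress the repeat
        self.last_seen[key] = now
        return True

# Attach to a handler so suppression applies to every logger it serves.
handler = logging.StreamHandler()
handler.addFilter(RepeatSuppressFilter(window_seconds=60))
```

A real deployment would probably also emit a periodic count of suppressed messages, leaving frequency-of-error tracking to monitoring as suggested above.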


There is More Than One Level

Don't log everything at the same log level. Most logging libraries offer several log levels, and you can enable certain levels at system startup. This provides a convenient control for log verbosity.

The classic levels are:
  • Debug - verbose and only useful while developing and/or debugging.
  • Info - the most popular level.
  • Warning - strange or unexpected states that are acceptable.
  • Error - something went wrong, but the process can recover.
  • Critical - the process cannot recover, and it will shut down or restart.

Practically speaking, only two log configurations are needed:
  • Production - Every level is enabled except debug. If something goes wrong in production, the logs should reveal the cause.
  • Development & Debug - While developing new code or trying to reproduce a production issue, enable all levels.


Test Logs Are Important Too

Log quality is equally important in test and production code. When a test fails, the log should clearly show whether the failure was a problem with the test or production system. If it doesn't, then test logging is broken.

Test logs should always contain:
  • Test execution environment
  • Initial state
  • Setup steps
  • Test case steps
  • Interactions with the system
  • Expected results
  • Actual results
  • Teardown steps


Conditional Verbosity With Temporary Log Queues

When errors occur, the log should contain a lot of detail. Unfortunately, detail that led to an error is often unavailable once the error is encountered. Also, if you’ve followed advice about not logging too much, your log records prior to the error record may not provide adequate detail. A good way to solve this problem is to create temporary, in-memory log queues. Throughout processing of a transaction, append verbose details about each step to the queue. If the transaction completes successfully, discard the queue and log a summary. If an error is encountered, log the content of the entire queue and the error. This technique is especially useful for test logging of system interactions.
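The queue technique above can be sketched in Python as follows (the step functions and message wording are invented for illustration):

```python
import logging

log = logging.getLogger("transactions")

def process_transaction(steps):
    """Run steps, keeping verbose detail in a temporary in-memory queue.

    On success the queue is discarded and a one-line summary is logged;
    on error the whole queue is flushed so the log captures the detail
    that led up to the failure.
    """
    queue = []
    try:
        for step in steps:
            queue.append(f"starting {step.__name__}")
            step()
            queue.append(f"finished {step.__name__}")
    except Exception:
        for entry in queue:
            log.error("detail: %s", entry)
        log.exception("transaction failed")
        raise
    else:
        log.info("transaction completed in %d steps", len(steps))
```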


Failures and Flakiness Are Opportunities

When production problems occur, you’ll obviously be focused on finding and correcting the problem, but you should also think about the logs. If you have a hard time determining the cause of an error, it's a great opportunity to improve your logging. Before fixing the problem, fix your logging so that the logs clearly show the cause. If this problem ever happens again, it’ll be much easier to identify.

If you cannot reproduce the problem, or you have a flaky test, enhance the logs so that the problem can be tracked down when it happens again.

Using failures to improve logging should be used throughout the development process. While writing new code, try to refrain from using debuggers and only use the logs. Do the logs describe what is going on? If not, the logging is insufficient.


Might As Well Log Performance Data

Logged timing data can help debug performance issues. For example, it can be very difficult to determine the cause of a timeout in a large system, unless you can trace the time spent on every significant processing step. This can be easily accomplished by logging the start and finish times of calls that can take measurable time:
  • Significant system calls
  • Network requests
  • CPU intensive operations
  • Connected device interactions
  • Transactions
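A small sketch of this in Python: a context manager that logs start and finish times (plus elapsed seconds) around any measurable step. The label text and the sleep stand-in are illustrative:

```python
import logging
import time
from contextlib import contextmanager

log = logging.getLogger("timing")

@contextmanager
def timed(label: str):
    """Log the start and finish (with elapsed seconds) of a step."""
    start = time.monotonic()
    log.info("%s: started", label)
    try:
        yield
    finally:
        log.info("%s: finished in %.3fs", label, time.monotonic() - start)

# Wrap any call that can take measurable time.
with timed("fetch user profile"):
    time.sleep(0.01)  # stand-in for a network request
```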


Following the Trail Through Many Threads and Processes

You should create unique identifiers for transactions that involve processing across many threads and/or processes. The initiator of the transaction should create the ID, and it should be passed to every component that performs work for the transaction. This ID should be logged by each component when logging information about the transaction. This makes it much easier to trace a specific transaction when many transactions are being processed concurrently.
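One way to sketch this in Python is a context variable plus a logging filter that stamps each record with the current transaction ID (the names are illustrative; note that ContextVar values propagate automatically to asyncio tasks but must be copied explicitly into worker threads):

```python
import logging
import uuid
from contextvars import ContextVar

# The transaction initiator creates the ID; every component that logs
# within the transaction's context then includes it automatically.
transaction_id: ContextVar[str] = ContextVar("transaction_id", default="-")

class TransactionIdFilter(logging.Filter):
    """Stamp every log record with the current transaction ID."""
    def filter(self, record: logging.LogRecord) -> bool:
        record.txid = transaction_id.get()
        return True

def start_transaction() -> str:
    """Called once by the initiator of the transaction."""
    txid = uuid.uuid4().hex[:12]
    transaction_id.set(txid)
    return txid

logging.basicConfig(format="%(levelname)s [%(txid)s] %(message)s")
for handler in logging.getLogger().handlers:
    handler.addFilter(TransactionIdFilter())
```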


Monitoring and Logging Complement Each Other

A production service should have both logging and monitoring. Monitoring provides a real-time statistical summary of the system state. It can alert you if a percentage of certain request types is failing, if the system is experiencing unusual traffic patterns, if performance is degrading, or if other anomalies occur. In some cases, this information alone will clue you in to the cause of a problem. However, in most cases, a monitoring alert is simply a trigger for you to start an investigation. Monitoring shows the symptoms of problems. Logs provide details and state on individual transactions, so you can fully understand the cause of problems.

Categories: Software Testing

Slides from Defect Sampling Presentation at ASTQB Conference.

Randy Rice's Software Testing & Quality - Tue, 03/25/2014 - 16:46
Hi all,

Thanks to everyone at my presentation at the ASTQB Conference on defect sampling.

Here are my slides from today's presentation:

http://www.softwaretestingtrainingonline.com/cc/Slide%20Shows/Defect%20Sampling%20-%20ASTQB.pdf

Here is the video from Gold Rush:

http://youtu.be/0XXzdNma7O4

Enjoy!




Categories: Software Testing

Throw user stories away after they are delivered

The Quest for Software++ - Tue, 03/25/2014 - 05:58

This is an excerpt from my upcoming book 50 Quick Ideas to Improve your User Stories. If you want to try this idea in practice, I’ll be running a workshop on improving user stories at the Product Owner Survival Camp.

Many teams get stuck by using previous stories as documentation. They assign test cases, design diagrams or specifications to stories in digital task tracking tools. Such tasks become a reference for future regression testing plans or impact analysis. The problem with this approach is that it is unsustainable for even moderately complex products.

A user story will often change several functional aspects of a software system, and the same functionality will be impacted by many stories over a longer period of time. One story might put some feature in, another story might modify it later or take it out completely. In order to understand the current situation, someone has to discover all relevant stories, put them in reverse chronological order, find out about any potential conflicts and changes, and then come up with a complete picture.

Designs, specifications and tests explain how a system works currently – not how it changes over time. Using previous stories as a reference is similar to looking at a history of credit card purchases instead of the current balance to find out how much money is available. It is an error-prone, time consuming and labour intensive way of getting important information.

The reason why so many teams fall into this trap is that it isn’t immediately visible. Organising tests or specifications by stories makes perfect sense for work in progress, but not so much to explain things done in the past. It takes a few months of work until it really starts to hurt.

It is good to have clear confirmation criteria for each story, but that does not mean test cases or specifications have to be organised by stories for eternity.

Divide work in progress and work already done, and manage specifications, tests and design documents differently for those two groups. Throw user stories away after they are done, tear up the cards, close the tickets, delete the related wiki pages. This way you won’t fall into the trap of having to manage documentation as a history of changes. Move the related tests and specifications over to a structure that captures the current behaviour organised from a functional perspective.

Key Benefits

Specifications and tests organised by functional areas describe the current behaviour without the need for a reader to understand the entire history. This will save a lot of time in future analysis and testing, because it will be faster and less error prone to discover the right information.

If your team is doing any kind of automated testing, those tests are likely already structured according to the current system behaviour and not a history of changes. Managing the remaining specifications and tests according to a similar structure can help avoid a split-brain syndrome where different people work from different sources of truth.

How to make this work

Some teams explicitly divide tests and specifications for work in progress and existing functionality. This allows them to organise information differently for different purposes. I often group tests for work in progress first by the relevant story, and then by functionality. I group tests for existing functionality by feature areas, then functionality. For example, if an enhanced registration story involves users logging in with their Google accounts and making a quick payment through Paypal, those two aspects of a story would be captured by two different tests, grouped under the story in a hierarchy. This allows us to divide work and assign different parts of a story to different people, but also ensure that we have an overall way of deciding when a story is done. After delivery, I would move the Paypal payment test to the Payments functional area, and merge with any previous Paypal-related tests. The Google mail integration tests would go to the User Management functional area, under the registration sub-hierarchy. This allows me to quickly discover how any payment mechanism works, regardless of how many user stories were involved in delivery.

Other teams keep tests and specifications only organised by functional areas, and use tags to mark items in progress or related to a particular story. They would search by tag to identify all tests related to a story, and configure automated testing tools to execute only tests with a particular story tag, or only tests without the work-in-progress tag. This approach needs less restructuring after a story is done, but requires better tooling support.

From a testing perspective, the existing feature sets hierarchy captures regression tests, the work in progress hierarchy captures acceptance tests. From a documentation perspective, the existing feature sets is documentation ‘as-is’, and the work in progress hierarchy is documentation ‘to-be’. Organising tests and specifications in this way allows teams to define different testing policies. For example, if a test in the feature sets area fails, we sound the alarm and break the current build. On the other hand, if a test in the current iteration area fails, that’s expected – we’re still building that feature. We are only really interested in the whole group of tests under a story passing for the first time.

Some test management tools are great for automation but not so good in publishing information so it can be easily discovered. If you use such a tool, it might be useful to create a visual navigation tool, such as a feature map. Feature maps are hierarchical mind maps of functionality with hyperlinks to relevant documents at each map node. They can help people quickly decide where to put related tests and specifications after a story is done and produce a consistent structure.

Some teams need to keep a history of changes, for regulatory or auditing purposes. In such cases, adding test plans and documentation to the same version control system as the underlying code is a far more powerful approach than keeping that information in a task tracking tool. Version control systems will automatically track who changed what and when. They will also enable you to ensure that the specifications and tests follow the code whenever you create a separate branch or a version for a specific client.

The Cartoon Tester Book

Cartoon Tester - Mon, 03/24/2014 - 06:52

The Cartoon Tester Book is now available for purchase at LeanPub and Amazon!

If I may recommend a supplier, I suggest you order it via LeanPub, as they are more flexible with the price and the file format.

The book contains nearly 200 cartoons about software development, testing, bugs and everything in between.





If you like the cartoons, please write up a review on Amazon! Love you x

Categories: Software Testing

Adventures in Commuting

Test Obsessed - Sat, 03/22/2014 - 11:14

I rested my head against the window, drowsy from a long day, full tummy, and steady hum of the train. My husband sat next to me absorbed in a game. It’s a long ride from San Francisco, where we work, to Pleasanton, where we live. I thought I might take a nap.

A disembodied voice crackled, announcing the next station: “West Oakland.”

The train stopped, the doors opened. I watched the passengers. The train car was about half full. A few people got off. Three black youths, a girl and two boys, got on. They were talking animatedly, happy. All three were well-dressed: fashionable clothes that looked new and fit well. Tight black pants. Caps. A little bling. They’d taken the time to look nice, and they were out for a good time.

One of the boys sat while the girl and the other boy stood, balancing by holding onto the straps suspended from the ceiling of the car. They talked and joked together. I felt a little voyeuristic watching them share this private moment of fun, but I couldn’t help myself. I was too taken with their delight and energy. Their faces lit with joy as they laughed. The girl and the boy hung from the straps and swung their weight around, enjoying the interplay of gravity and momentum of the train and their own upper body strength.

The girl clung to the strap then braced her other hand on the side of the train. Walking her feet up the train side, she executed an impressive flip. She did it again, obviously pleased with herself. “You try doing that in heels!” she crowed. She did it again. I noted with admiration that she was wearing boots with 5″ heels.

I guessed the kids to be somewhere between 15 and 20. It’s an in-between age. Young enough to treat a train as a jungle gym but old enough for a night out on their own. I thought about my own kids, about the same age.

The disembodied crackling voice announced the next station: Lake Merritt. The train stopped. The kids stayed on. The train started again. The train abruptly came to a screeching halt. “That’s not good,” the passenger in front of me muttered. I looked around but couldn’t tell why we were stopped. I leaned my head back against the window; my husband stayed immersed in his game. The trains don’t run on time here. Stopping wasn’t unusual.

Two BART police officers stepped onto the car. Another guy, a passenger, trailed behind them. The passenger with the cops said something I couldn’t hear and gestured toward the three youths. He was a mousy white guy in a button-down oxford shirt, corduroy pants, and a fleece vest. He looked like the kind of guy who drives an SUV with Save the Planet bumper stickers.

The first police officer addressed the youths. “There’s been a complaint. I need you to step off the train.”

“What’s the complaint?” the girl asked.

“We didn’t do anything,” the boy said.

“There’s been a complaint,” the police officer insisted. “Please come with me.”

“What’s the complaint?” the girl repeated.

The mousy passenger who had made the accusation faded back. I didn’t see where he went. Now it was just the cops and the kids.

“We got a complaint that you were asking for money.” The cop’s voice carried.

“We didn’t do anything!” the boy repeated, louder.

“Please come with me,” the cop insisted. “This train can’t move until you get off.”

The crowd on the car grew restless, impatient. The two cops flanked the three youths and attempted to escort them off the train. For a minute, it looked like the kids were going to comply. The boy who was seated stood, headed toward the open door, but then changed his mind. “I didn’t do anything. I was just sitting there,” he said. He got back on the train.

The girl’s face had gone from joy to furor. “It’s that racist white guy,” she yelled, loud enough for the whole train car to hear. “We didn’t do anything, but he thinks we did because we’re black.”

A passenger stood up, Asian, “They really didn’t do anything,” he said to the cops. Then to the kids: “Let’s just step off the train so they can make their report.”

Another passenger stood up. A black woman. “Really, officers, these kids didn’t do anything.”

By this time I was on the edge of my seat. It was so unfair. These kids were just out for a good time. They had done nothing besides being full of energy and life. They hadn’t asked for money. They were being harassed by the cops for something they had not done.

I’m still not sure what made me decide to act on my sense of injustice. Maybe I arrogantly thought the police would listen to me, a middle-class, middle-aged, white woman. Maybe I saw the faces of my own kids in the black youths and imagined them being detained on hearsay. Whatever the trigger, I stood up, moving past my husband who looked at me quizzically. I walked up the train aisle to the officer who had been barking orders.

“Sir, they really didn’t do anything. They just got on at the last stop.” My voice was steady. Calm. Quiet. Pitched for the circle of people involved, not for the rest of the train. Internally I was amazed that my adrenaline wasn’t amped up, that my voice was not shaking. I don’t normally confront cops. It is not at all like me to tell a cop how to do his job. I couldn’t believe I was confronting a cop and not even nervous about it.

By this time the crowd on the train was growing antsy. A few were calling for the kids to get off the train so they could get where they were going, but far more were calling for the cops to leave the kids alone. Several more passengers stood up and started shouting at the cops:

“These kids didn’t do anything!”

“Leave them alone!”

“Racist white guy can’t tell blacks apart!”

“It must have been someone else!”

“Let the kids go!”

I felt self-conscious standing in the middle of everything. I’d played my bit part in the unfolding drama. I returned to my seat.

A woman on the other side of the car leaned over toward me, “I could have done that,” she said. She was white like me, graying hair cut shoulder length in a bob. She looked like she could live in the house next door in my white-picket-fence neighborhood.

“I’m sorry?” I was confused by her words. What does that even mean, I could have done that? If you could have why didn’t you? Did she mean that she could not have done what I did?

“I could have done what you did,” she repeated.

I smiled and nodded. I didn’t know what else to say.

My attention turned back to the growing situation. More passengers had stood up to defend the kids. Another BART police officer entered the car. The increasing tension and buzz from the crowd made it hard to hear. I realized that there were several more police officers waiting on the platform. The situation had escalated rapidly. This could become ugly.

“CHECK THE BUCKET,” the girl shouted.

For the first time I noticed that the kids had an amplifier and a bucket with them, and suddenly I saw a missing piece of the puzzle. The kids were street performers. They weren’t out for a night of clubbing. They were dancers.

Sometimes street performers come onto the BART trains. They strut up and down the aisle, put on a show, and ask for money. I personally find being a captive audience to these shows annoying. I never give the performers money because I don’t want to encourage it. But I did not realize that such behavior warranted police intervention on this scale.

Nor did I think that these kids had been dancing for money on this train. I looked around for the accuser but couldn’t see him.

I was incensed by the entire situation. The accuser had disappeared, the situation was escalating, and these innocent kids were now targets.

“This isn’t right,” I told my husband. I stood up, shouldered my backpack in case I decided to leave the train, and pushed past my husband’s knees for the second time. Without even thinking about whether or not my husband would follow me or would be upset with me for getting involved, I stepped into the situation again.

I sidled up to the cop who had originally been barking orders and who was now standing between the kids and the rest of the passengers.

“They really didn’t do anything,” I said into his ear. “They didn’t. This isn’t right. I will be happy to make a statement. What can I do to help?”

The cop turned to me. “They might not have done anything originally,” he said, “but now they are resisting. If they don’t get off the train we will have to arrest them.”

Another police officer entered the train. The mood of the passengers was growing dark, roiling. It became a palpable thing, shifting energies threatening to spill over into violence. My stomach fluttered.

I turned around, realized that half a dozen people had their cell phones out, recording the incident. I idly wondered if I would show up on youtube videos. I decided I didn’t care. This wasn’t right and I wouldn’t stand for it. These kids were being detained, harassed, and possibly arrested all because the mousy guy in the fleece vest didn’t like being bothered by dancers panhandling on BART. These kids might have danced on BART at some point, but not tonight, not on this train. They didn’t deserve this.

Another officer entered the train. It was now five officers to the three kids. The Asian passenger who had originally stepped up to intervene turned to the kids, “Why don’t we get off the train and get this handled. Then we can all get onto the next train.”

The police closed their circle, puffing themselves up, herding the kids to the exit. I followed them to the door. I was about to step off the train. I turned back to see if my husband was paying attention. He was staring at me, his face open, questioning. “This isn’t right,” I mouthed at him. He stood up, following me.

I turned my attention back to the kids. The cops had one of the boys in an arm lock and pressed him up against the wall of the station. It looked like it hurt. They had the girl surrounded. I couldn’t see her, but I could hear her screaming. I heard another voice — the other boy? the Asian passenger who had intervened? a cop? — telling her “This isn’t worth it. Stop struggling.”

I stepped off the train onto the platform.

The cops who had the girl surrounded had her hands behind her back. She was struggling and screaming. They began half pulling, half dragging her down the platform toward the station exit. “LET GO OF ME,” she shouted. “LET GO OF ME LET GO OF ME LET ME GO!”

An officer stood in the middle of the chaos just looking, sizing it all up. He was new on the scene. He was also the only officer not in the middle of the chaos. I walked up to him. “Sir, they didn’t do anything,” I said. I watched his face. Dark skin. Brown eyes. A guy who’d been having a quiet night. He turned to me.

“You’re a witness?” he asked.

“Yes sir. I was on the car. These kids didn’t do anything. This isn’t right.” The girl was still screaming. Each panicked shriek felt like shrapnel in my soul. That could be my daughter, I thought. So very not right. “I will be happy to make a statement,” I said.

“OK,” the cop replied. He took out a notebook, asked me my name. He had to ask me to spell my last name a second time. I handed him my driver’s license to make things easier for him. He started taking notes, then looked up at my husband and me. “I left my recorder in my office,” he said. “Would you be willing to go there to make a statement?”

I agreed immediately. To my surprise, my husband agreed as well. “Of course,” he said. We would probably be an hour late, or more, getting home. This was worth it.

We followed the officer off the platform, past the turnstiles, and through a door marked “authorized personnel only.” He led us down a maze of corridors, past a bank of cubicles, and into his office. As promised, he took out a recorder and interviewed us for a verbal statement. My husband and I each told our version of the events. They lined up. Each of us emphasized that the kids had done nothing wrong.

“Let me just clarify a few points,” the officer said after we finished our statements. “In your opinion, did the officers use excessive force at any time?” I said they had, citing the painful hands-behind-the-back restraining technique and the panicked girl screaming. My husband said they had not, but pointed out that the officers should have just let the kids go when so many passengers testified that the kids were innocent.

The officer asked a few more questions: had the officers used mace, batons, or physical violence on the kids? had they acted unprofessionally?

We finished the interview, the officer escorted us back into the station, said that internal affairs would probably be contacting us in connection with the investigation, then said goodbye.

As we waited for the next train, I played over the evening in my mind.

I realized that the statements we made existed only on a recorder in the cop’s office. The kids wouldn’t know we made the recorded statement. If they got lawyers, the lawyers wouldn’t know. Our testimony might never be heard by anyone involved in the case against the kids.

The more I played over the questions that the officer had asked us, the more I realized that the questions were all about the conduct of the other officers, not about the kids and their innocence.

The internal affairs investigation would, no doubt, turn into a huge deal. The situation had escalated so rapidly, fueled by racial tension. So many people had recorded videos. The train had been halted for 15 minutes or more. Passengers were irate. There would be much attention on the conduct of the cops.

I didn’t care about the cops. I wanted to help the kids.

Somehow, I doubted that I had.

Next time I see dancers on BART, I’m giving them whatever is in my wallet. I’ll think of it as their defense fund.

Categories: Software Testing

Test Automation As An Investigation Trigger

Eric Jacobson's Software Testing Blog - Fri, 03/21/2014 - 13:40

During a recent exchange about the value of automated checks, someone rhetorically asked:

“Is automation about finding lots of bugs or triggering investigation?”

Well…the latter, right?

  • When an automated check passes consistently for months then suddenly fails, it’s an indication the system-under-test (SUT) probably unexpectedly changed.  Investigate!  The SUT change may not be directly related to the check but who cares, you can still pat the check on the back and say, “thank you automated check, for warning me about the SUT change”.
  • When you design/code an automated check, you are learning how to interact with your SUT and investigating it.  If there are bugs uncovered during the automated check design/coding, you report them now and assume the automated checks should happily PASS for the rest of their existence.
  • If someone is organized enough to tell you the SUT is about to change, you should test the change and assess the impact on your automated checks and make necessary updates.  Doing so requires investigating said SUT changes.

In conclusion, one can argue, even the lamest of automated checks can still provide value.  Then again, one can argue most anything.

Categories: Software Testing

Very Short Blog Posts (13): When Will Testing Be Done?

DevelopSense - Michael Bolton - Fri, 03/21/2014 - 12:54
When a decision maker asks “When will testing be done?”, in my experience, what she really means is “When will I have enough information about the state of the product and the project, such that I can decide to release or deploy the product?” There are a couple of problems with the latter question. First, as […]
Categories: Software Testing

Web Performance News for the Week of March 17, 2014

LoadStorm - Fri, 03/21/2014 - 11:14


Does Adblock improve your load time?

An essential rule is to display ad banners only after the main content is displayed. In terms of perception, users won’t notice the banner until they find the content they’re looking for on the page. So while your web page will most likely download less data, your perceived speed isn’t likely to change drastically due to the adblock extension. Perhaps testing this would be an interesting experiment!

How does adblock work?

Adblock takes the div of the ad and gives it a height of 0. You can test this by putting a mini ad on your site and comparing the source code with adblock switched on and off. This also gives you a detection hook: if the ad element’s height is 0, you can display a prompt or an alert telling the user that they have adblock enabled.

Even if loaded correctly, certain HTML elements are still hidden from the page, especially if the element has class=Ad.


For example, if you had a whole bunch of images for a page called “apparel design” and abbreviated them to AD, very few of your images will display if you have adblock. AdBlock uses fairly simple logic to determine what content is an advertisement. One of the methods appears to be looking for anything named or prefixed “%ad%” or “%banner%”.
Here are some elements that Adblock Plus blocks:
  • id=”adBanner”: blocked
  • id=”adbanner”: blocked
  • id=”adAd”: blocked
  • id=”bannerad”: blocked
  • id=”adHello”: not blocked
  • class=”adAd”: not blocked
  • id=”adVert”: not blocked
  • id=”adFixed”: not blocked
  • id=”adFull”: not blocked

Another method for blocking ads is communication blocking. Certain servers are known to host ads; AdBlock just needs to keep a list of their addresses, and when the browser is about to retrieve an ad, AdBlock steps in to stop the connection. Communication to ad servers is blocked altogether (the client request never occurs). Adblock will block all requests to certain URLs, such as Google AdSense. Adblock Plus gets help from various lists such as EasyList to define, customize, and distribute rules that maximize blocking efficiency.
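The two blocking methods described above, name-based element hiding and server blocklists, can be sketched roughly like this. This is an illustrative assumption of how such filtering might work, not Adblock Plus’s actual code: the naming rule is tuned only to reproduce the id table above (real filter lists like EasyList are far longer and more precise), and the server host name is a hypothetical example.

```java
import java.util.Set;

public class AdBlockSketch {
    // Rough name-based element-hiding rule, tuned only to match the
    // blocked/not-blocked id table above: hide ids containing "banner",
    // plus the special case "adAd". Real filter lists are much richer.
    static boolean hidesElementId(String id) {
        String lower = id.toLowerCase();
        if (lower.contains("banner")) return true;
        return lower.equals("adad");
    }

    // Communication blocking: requests to known ad-serving hosts are
    // dropped before the browser ever connects. Hypothetical host name.
    static final Set<String> AD_SERVERS = Set.of("ads.example-adserver.com");

    static boolean blocksRequest(String host) {
        return AD_SERVERS.contains(host);
    }

    public static void main(String[] args) {
        System.out.println(hidesElementId("adBanner"));  // true (blocked)
        System.out.println(hidesElementId("adHello"));   // false (not blocked)
        System.out.println(blocksRequest("ads.example-adserver.com")); // true
    }
}
```

Note that the two mechanisms are independent: element hiding only cosmetically removes content the browser already downloaded, while communication blocking prevents the download entirely, which is what actually improves load time.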

Adblock Users

User experience is important to retain visitors to your website, but what if your website generates revenue entirely from web ads? Some users complain that ads put their computer at risk, and that ads are ugly, irrelevant, and interrupting. Ads also include tracking technology, which feels like a major violation of privacy for web users. Usually the best way to stop users from complaining is to make sure your ads are non-intrusive and topical.

Although we may hate ads, they are important to keep the internet free. To help find a middle ground between users and advertisers, Adblock Plus has set up a program called Acceptable Ads. Users can allow ads that are not considered annoying to be displayed; by doing this, they support websites that rely on advertising but choose to do it in a non-intrusive way. Adblock Plus also answers the question “Which ads are acceptable?” To name a few criteria: static advertisements only (no animations, sounds, or similar), preferably text only, and no attention-grabbing images.

The post Web Performance News for the Week of March 17, 2014 appeared first on LoadStorm.

Swiss Testing Day 2014

Alan Page - Fri, 03/21/2014 - 08:38

As you may have noticed, my blogging has slowed. I’ve been navigating and ramping up on a new team and helping the team shift roles (all fun stuff). I’ve also been figuring out how to work in a big org (our team is “small”, but it’s part of one of Microsoft’s “huge” orgs – and there are some learning pains that go with that).

But – I did take a few days off this week to head to Zurich and give a talk at Swiss Testing Day. I attended the conference three years ago (where I was impressed with the turnout, passion and conference organization), and I’m happy to say that it’s still a great conference. I met some great people (some who I had virtually met on twitter), and had a great time catching up with Adrian and the rest of the Swiss-Q gang (the company that puts on Swiss Testing Day).

I talked (sort of) about testing Xbox. I talked a bit about Xbox, and mostly about what it takes to build and test the Xbox. I stole a third or so from my STAR West keynote, and was able to use a few more examples now that the Xbox One has shipped (ironically, the Xbox One is available in every country surrounding Switzerland, but not in Switzerland). Of note: now that I’ve retired from the Xbox team (or moved on, at least), I expect that I’m done giving talks about Xbox (although I expect I’ll use examples from the Xbox team in at least one of my sessions at STAR East).

I owe some follow ups on my Principles post, and I promise to get to those soon. I’ll define “soon” later.

(potentially) related posts:
  1. Thoughts on Swiss Testing Day
  2. Home Again
  3. Rockin’ STAR West
Categories: Software Testing

Kanban in Action – book review

The Quest for Software++ - Thu, 03/20/2014 - 01:28

This wonderful little book is a gentle introduction to Kanban by Marcus Hammarberg and Joakim Sunden. It explains the theory behind flow-based processes and provides a ton of practical implementation tips on everything from visualising work to how to properly take out a sticky note.

The first part deals with the basic principles of Kanban, using visual boards to show and manage work in progress, managing queues and bottlenecks and distributing and limiting work across team members. The second part explains how to manage continuous process improvement, how to deal with estimation and planning and how to define and implement different classes of service.

My impression is that this book will be most useful to people completely new to Kanban, who are investigating the concepts or starting to adopt this process. If you already use Kanban, you might find the chapters on managing bottlenecks and process metrics interesting.

Compared to David Anderson’s book, Kanban in Action is more approachable for beginners. Each important concept is described with lots of small, concrete examples, which will help readers new to Kanban put things into perspective, but also reinforce the message that there is no one-size-fits-all solution. Anderson’s book goes in more depth to explain the theory behind the practice, and this book has more practical information and concrete advice on topics such as setting work in progress limits, managing different types of items on a visualisation board and choosing workflow metrics. If you’re researching this topic or starting to implement Kanban, it’s worth reading both books.

Webinar Video and Slides - What to Automate First?

Randy Rice's Software Testing & Quality - Wed, 03/19/2014 - 21:42
Hi Folks,

Thanks for attending the webinar today on What to Automate First - The Low-Hanging Fruit of Test Automation. If you missed it, here is the video:

http://youtu.be/eo66ouKGyVk

Here are the slides:

http://www.slideshare.net/rrice2000/what-do-we-automate-first

I hope you enjoy the content!
Categories: Software Testing

I Said Something Clever, Once

QA Hates You - Wed, 03/19/2014 - 04:55

Talking about automated testing once, I said:

Automated testing, really, is a misnomer. The testing is not automated. It requires someone to build it and to make sure that the scripts remain synched to a GUI.

Quite so, still

Categories: Software Testing

Testing on the Toilet: What Makes a Good Test?

Google Testing Blog - Tue, 03/18/2014 - 16:10
by Erik Kuefler

This article was adapted from a Google Testing on the Toilet (TotT) episode. You can download a printer-friendly version of this TotT episode and post it in your office.

Unit tests are important tools for verifying that our code is correct. But writing good tests is about much more than just verifying correctness — a good unit test should exhibit several other properties in order to be readable and maintainable.

One property of a good test is clarity. Clarity means that a test should serve as readable documentation for humans, describing the code being tested in terms of its public APIs. Tests shouldn't refer directly to implementation details. The names of a class's tests should say everything the class does, and the tests themselves should serve as examples for how to use the class.

Two more important properties are completeness and conciseness. A test is complete when its body contains all of the information you need to understand it, and concise when it doesn't contain any other distracting information. This test fails on both counts:

@Test public void shouldPerformAddition() {
  Calculator calculator = new Calculator(new RoundingStrategy(),
      "unused", ENABLE_COSIN_FEATURE, 0.01, calculusEngine, false);
  int result = calculator.doComputation(makeTestComputation());
  assertEquals(5, result); // Where did this number come from?
}
Lots of distracting information is being passed to the constructor, and the important parts are hidden away in a helper method. The test can be made more complete by clarifying the purpose of the helper method, and more concise by using another helper to hide the irrelevant details of constructing the calculator:

@Test public void shouldPerformAddition() {
  Calculator calculator = newCalculator();
  int result = calculator.doComputation(makeAdditionComputation(2, 3));
  assertEquals(5, result);
}
One final property of a good test is resilience. Once written, a resilient test doesn't have to change unless the purpose or behavior of the class being tested changes. Adding new behavior should only require adding new tests, not changing old ones. The original test above isn't resilient since you'll have to update it (and probably dozens of other tests!) whenever you add a new irrelevant constructor parameter. Moving these details into the helper method solved this problem.
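The post does not show the helpers it references, so here is a self-contained sketch of the pattern. The stub Computation and Calculator types are assumptions standing in for the real API (which the post omits) purely so the helpers can run:

```java
// Stub types standing in for the real Calculator API, which the post does
// not show; they exist only so the helper pattern below can be executed.
class Computation {
    final int left, right;
    Computation(int left, int right) { this.left = left; this.right = right; }
}

class Calculator {
    int doComputation(Computation c) { return c.left + c.right; }
}

public class CalculatorHelpers {
    // Resilience: every irrelevant construction detail lives in one place,
    // so a new constructor parameter means changing this helper once,
    // not dozens of tests.
    static Calculator newCalculator() {
        return new Calculator();
    }

    // Completeness + conciseness: naming the operation and its operands
    // makes assertEquals(5, result) self-explanatory in the test body.
    static Computation makeAdditionComputation(int left, int right) {
        return new Computation(left, right);
    }

    public static void main(String[] args) {
        Calculator calculator = newCalculator();
        int result = calculator.doComputation(makeAdditionComputation(2, 3));
        System.out.println("result = " + result);
    }
}
```

The key design choice is the split in responsibilities: one helper hides what the test does not care about, the other names exactly what it does care about.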

Categories: Software Testing

How Low Can You Go?

QA Hates You - Tue, 03/18/2014 - 10:35

I think I have found the source of the problem:

I expected some sort of enlightenment when I got all the way back to error 000, but that’s not the case, actually.

Or is Google denying there’s an error here?

Fun fact: 000 is 000 in binary. It is one of 10 numbers that are the same.

Categories: Software Testing

E-Commerce Benchmark – Drupal Commerce

LoadStorm - Tue, 03/18/2014 - 09:00


This post is part of a series.

Introduction

Drupal Commerce is the last stop on our tour of e-commerce platforms. Drupal is a free content management system, and Drupal Commerce is an extension that allows you to build a web store with Drupal. This setup is similar to WooCommerce and Virtuemart, which both rely on their own content management systems.

If you missed our previous posts, we are load testing six different e-commerce platforms and scoring them based on web performance. We continue our e-commerce benchmarking series by giving you a glimpse of out-of-the-box Drupal Commerce and scoring it against the same load test standards as our out-of-the-box Magento store in the first blog post.

Setup

To streamline setup, I decided to use a bundle called Commerce Kickstart, which contains a single package for both Drupal and Drupal Commerce, so you don’t have to worry about the in-between steps of installing the content management system and the extension separately. How easy is that!

Commerce Kickstart is one of the easiest e-commerce packages to install, much like the Magento and osCommerce packages. There was one glaring difference about Drupal Commerce: there were no example products! This was unexpected because the front page (shown below) clearly shows a fake product slideshow.

Every other store had some sort of example product, even if it was just clipart garden tools. I ended up having to spend time adding sample items. I used an array of image sizes and qualities to mimic the variety in a typical web store.

Strangely enough, the example store came with categories, so I was sure to “stock” the items into those categories. I was ready to load test after an acceptable number of products were added.

Testing Phase



At 3 minutes into each of the three tests performed, we consistently see a spike in performance error rates. A majority of those performance errors are request connection timeout errors, which means the load engines are not even able to establish a connection with the target Drupal server. Amazon Web Services could be partly to blame. The server then crashes early on, at around 350 VUsers. We expect higher scalability from a large Amazon EC2 instance, so it’s as if we’re missing a piece of a larger puzzle that needs investigating.

Score


Drupal Commerce earned a score of 55.7 out of 100, which puts it at the lower end of the ranks. The score would be better for Drupal Commerce if the WebPageTest repeat view metrics were not missing. We could not re-test due to time constraints, so we are assuming a value of 60 seconds for all the repeat views. This prevents Drupal Commerce from being unfairly ranked as the worst in performance.

At this point, we’ve tested six different open source e-commerce platforms.

  • Magento Community Edition
  • WooCommerce
  • osCommerce
  • VirtueMart
  • OpenCart
  • Drupal Commerce

Since they are all free and open source, you can give them each a try on your own as well. A deeper analysis of the experiments will be done to make a final ranking of all tested e-commerce platforms. Come back next time and see how each platform ranks!


The post E-Commerce Benchmark – Drupal Commerce appeared first on LoadStorm.

Performance Tuning : 7 Ways to Spring Clean your Website

LoadStorm - Mon, 03/17/2014 - 11:10


Spring is here! Historically, this time of the year is representative of growth and renewal. It’s the perfect time for cleaning up your house, your yard, and your website.

Keeping a modern website running smoothly can be time consuming and resource intensive. Modernizing your website may never become effortless, but the process can definitely become more manageable, maybe even enjoyable. This list provides an overview of quick and easy tricks – including information on image optimization, general code cleanup, and using Gzip – to zip through clean up and arrive at a happy site.

Before you start, choose a tool to performance test your website

To be effective in performance tuning, it’s essential to establish a baseline and measure the effect of the changes you make. Otherwise, how will you know if the changes were worthwhile? And, in the future, how will you know what to do again?

We like WebPageTest!
There are a ton of free tools out there. Our personal favorite? WebPageTest! This free, open-source tool makes it easy to analyze the most important metrics, view a waterfall of all requests and resources, and even capture video of the page rendering. Check out our previous post on performance analysis using WebPageTest.

Wherever you decide to proceed from, focus on one task at a time.

1. Update Your Platform

Every once in a while, I get these annoying pop ups in the center of my screen while I’m in the middle of watching YouTube videos and scrolling endlessly through Facebook. Ahem, I mean, studying. I usually ignore them. However, updating your web app to the latest version usually yields better performance, speed, and bug fixes. For example, if you use a content management system, make sure you update it regularly.

2. Get Rid of Bad Links

There’s some debate on whether broken links on a site can harm SEO health. Either way, nobody likes clicking on something that doesn’t work. It gives the impression that the site owner, and even the company, is unprepared. There are several free tools available to help you find these broken links and fix them, including Screaming Frog and Google Webmaster Tools.

By the same token, minimize redirects. Redirects may not be as obvious to users (or even web developers, for that matter), but they cause extra delay that can be avoided. Screaming Frog can help to diagnose these occurrences as well.

3. Minimize HTTP Requests

By now we’ve all heard the popular mantra: The fastest request is one not made. A simple way to minimize requests is to combine code and consolidate multiple files into one. This can also be done by implementing CSS Sprites for multiple images. More information on implementing this strategy can be found here.

4. Remove Unused Code, Plug-ins, and Settings

Sometimes it’s easier to just comment out the code we don’t need, but after a while, this stuff can just become unnecessary clutter. This is applicable to code, images in a database, old plug-ins, themes, and even the settings/tables left over from an older theme or plug-in that has since been replaced. If you’re not currently using a theme or plugin, get rid of it. You can always download it again later. Chances are you’ve moved on to something sexier anyways.

5. Clean up Images

There are countless image optimization techniques that can be utilized to boost performance, some more complicated than others. Some simple image tuning techniques to start with include cropping and resizing. It’s also important to serve the image at its real size, rather than adjusting it on the fly. For example, if your image is 1000 x 1000, you don’t want to use markup to display it at 100 x 100, because that means you’re just sending extra bytes. Instead, resize the image in an editor to 100 x 100 and use that smaller file. Additional image tuning techniques include reducing color depth, as well as removing any unnecessary image comments.

6. Clean Out Your Database

This is probably the trickiest thing on the list. Have you removed old revision posts lately? What about unused images?

7. Use GZIP

The average website has grown 151% in the past three years, with an increase in both the number and size of requests. GZip is a tool that can be used to combat this bloat and reduce the weight of responses. The easiest way to implement it is to stick the script into the PHP of your site’s header. An in-depth explanation of using GZip can be found here.
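The PHP tip aside, the effect of gzip is easy to demonstrate in any language. Here is a small sketch (in Java, using the standard library’s GZIPOutputStream; the sample markup is made up) showing how compressible typical repetitive HTML is:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPOutputStream;

public class GzipDemo {
    // Gzip-compresses a byte array in memory and returns the compressed bytes.
    static byte[] gzip(byte[] input) {
        try {
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            try (GZIPOutputStream gz = new GZIPOutputStream(out)) {
                gz.write(input);
            }
            return out.toByteArray();
        } catch (IOException e) {
            // In-memory streams shouldn't fail; rethrow unchecked for the demo.
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        // Repetitive markup is highly compressible; real-world HTML/CSS/JS
        // commonly shrinks by well over half when gzipped.
        String html = "<div class=\"product\">sample item</div>\n".repeat(1000);
        byte[] raw = html.getBytes(StandardCharsets.UTF_8);
        byte[] zipped = gzip(raw);
        System.out.println(raw.length + " bytes -> " + zipped.length + " bytes");
    }
}
```

In practice you don’t compress by hand like this; you enable gzip in the web server or application layer, and the server negotiates it with browsers via the Accept-Encoding header.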

Now that you’re done..

Ahh.. Don’t you feel better? Now you can measure your results and compare them to your baseline. Even though some of these suggestions aren’t meant to drastically improve the speed of your site, making incremental improvements and keeping production organized will reap huge gains in the long run. The best thing you can gain is experience. Post the results of your optimization below! How did these tips work out for your website? Do you have any useful tips for performance tuning your site?

The post Performance Tuning : 7 Ways to Spring Clean your Website appeared first on LoadStorm.

QA Music: About Your User Stories

QA Hates You - Mon, 03/17/2014 - 04:16

Do you really know what it’s like to walk a mile in their shoes?

Or are your organization’s concepts of what the user wants or needs based on the speculations of twenty-somethings with computer science degrees who’ve known nothing but working with computers their whole lives?

Categories: Software Testing

Web Performance News for the Week of March 10, 2014

LoadStorm - Fri, 03/14/2014 - 16:04



Brief History: How IE 6 Ruined Internet Explorer’s Reputation

In the early 2000s, IE 6 was the dominant web browser, reigning supreme due to a lack of competition. This gave Microsoft the power to set the standard. Unfortunately, the standard was set all the way down to Satan’s feet. Because there wasn’t any competition at the time, the standard didn’t matter. Web developers had to make their websites compatible with IE; it was never the other way around. Debugging tools, performance optimizations, and upgrades were little to none. IE 6 was released in 2001, and for five staggering years there was hardly an update. We didn’t see IE 7 until 2006. And here we are in 2014 with 22.2% of China still using Internet Explorer 6. Why?

Enforcing piracy laws in China wasn’t (and still isn’t) a priority, and XP was easy to pirate. The catch was that people had difficulty upgrading IE on a pirated copy, because Windows checked whether the copy was genuine. To get anything newer, pirates would have had to register their copy of Windows and pay for it.

XP for the Win

The release of IE 6 was supposed to be the be-all and end-all. When Windows XP was released in 2001, Microsoft required third-party hardware manufacturers like Dell, HP, and Gateway to include IE 6 in all copies of XP. If these companies didn’t want IE 6 on their PCs and laptops, Microsoft would not agree to sell them copies of Windows. This move is also what killed Netscape. By wiping Netscape off the map, Microsoft increased their browser market share. So, what did the number one web browser do when it reached the top spot? Nothing. Most members of the IE team were reorganized into the MSN division to build MSN Explorer. Since IE 6 had the biggest slice of the pie, Microsoft believed that the browser war was over, innovation had hit its peak, and the new nemesis to focus on was AOL.

Because Netscape was crushed and IE 6 achieved dominant market share, Microsoft disassembled the IE team and placed everyone on different projects. The belief was that the future of applications would be desktop based. That was one reason there was a five-year gap between IE 6 and IE 7. The lack of continuous improvement was frustrating for developers, and the absence of updates led to bugs and flaws in the browser’s security, which turned IE 6 into an attack vector for hackers.

Embrace, Extend, and Extinguish

Microsoft has been successful because of their closed solutions, which gave them a competitive advantage in the marketplace. They took the word processor and spreadsheet markets away from WordPerfect and Lotus by making their products incompatible with the competition: WordPerfect could not read Word files, and the architecture of Lotus was incompatible with Excel. This Microsoft strategy was called EEE (embrace, extend, extinguish): implement a product, eventually introduce incompatibilities into it, and finally gain enough power to push any potential competitor out of the market. The business culture that Microsoft was built on would not work for IE.

Originally Titled Phoenix

Firefox 1.0 was released in late 2004, two years before IE 7 was introduced. Firefox’s layout engine (Gecko) was heavily dependent on open source libraries. This was an advantage, because developers using open source leveraged a passionate community that made a difference in development. Microsoft’s layout engine (Trident) did not fit well with W3C standards, JavaScript implementation, or Java implementation. As a matter of fact, Microsoft originally intended to drop Java support entirely, which resulted in a lawsuit.

Firefox first came along by reintroducing tabbed browsing from Opera. It did what IE was failing to do correctly: following W3C standards and consistently updating the browser for its users. Once web developers were able to make pages that followed web standards, IE’s reputation became worse. Microsoft eventually rolled out IE 8, which seemed to compete well against Firefox at the time, but didn’t introduce IE 9 until two years later. This gave Firefox time to continue gaining users while consistently patching and sending updates. IE was simply falling behind the times. With each new update from its competitors, it seemed like IE failed to keep up with Firefox’s, and eventually Chrome’s, support for web standards.

 

Too late for IE?

Fast-forwarding to the present, web developers are still frustrated at having to maintain support for outdated versions of IE. These older versions are holding them, and web development as a whole, back. Because standards were set so badly, front-end developers had to do twice the necessary work, relying on proprietary code hacks to make things work properly. Some sites are no longer putting up with older versions of IE. Several years ago, Australian retailer Kogan.com enacted a tax on IE 7. “The way we’ve been able to keep our prices so low is by using technology to make our business efficient and streamlined. One of the things stopping that is our Web team having to spend a lot of time making our new website look normal on IE7,” Kogan wrote.

The release of IE 9 was a turnaround for IE. Microsoft paid attention to their users and worked better with the W3C on following web standards. Their latest release, IE 11, has proven to meet the standards of the web. Is it the best? That’s debatable. Reputation-wise, IE still has a long way to go before it can be considered a serious browser.

 

The post Web Performance News for the Week of March 10, 2014 appeared first on LoadStorm.

Your Site’s Next Registered User

QA Hates You - Wed, 03/12/2014 - 11:54

Nobody would ever do that: Dunedin man changes name to 99-character monster:

A Dunedin man has changed his name to the longest legally allowed, after apparently losing a bet five years ago.

The 22-year-old man from Normanby is now legally known as ‘Full Metal Havok More Sexy N Intelligent Than Spock And All The Superheroes Combined With Frostnova’ – just one character shy of Department of Internal Affairs’ (DIA) 100 character limit.

Which proves the man is not in QA; otherwise, he would have renamed himself with a 101-character name.

Categories: Software Testing
