Software Testing

Failed Data Integration Costs Customers Hundreds of Dollars

QA Hates You - Wed, 03/25/2015 - 08:04

A failed data integration is going to cost St. Louis area residents up to hundreds of dollars. Or require a refund.

No, that extra few hundred dollars on your monthly sewer bill isn’t a typo.

A bill miscalculation that began nearly two years ago has the Metropolitan St. Louis Sewer District asking thousands of customers in St. Louis County to pay for services that never showed up on bills. The average undercharge for an affected household: $450.

Nearly 1,900 residential customers and 3,700 commercial customers are being asked to pay more on their current sewer bill, which should be in mailboxes by the week of April 6 at the latest.

The reason?

The discrepancy began in May 2013, when Missouri American Water, which provides water service in much of the St. Louis County portion of MSD’s territory, changed its billing system.

MSD buys water meter data from Missouri American in order to calculate many customer rates, but the new system’s data wasn’t properly computing with MSD’s billing programs, district spokesman Lance LeComb said.

“Ultimately MSD is responsible for getting the data right and making sure we have correct account information and send out correct bills,” he said. “Missouri American Water was very supportive of our efforts and provided additional staff” to fix the problem.

Dependency upon an integration, and something at the data provider changed. Expensively.

On the MSD bills, they encourage you to “go green” and pay your bills online.

Given their track record, I can understand anyone’s reluctance to allow MSD’s computer systems access to a customer bank account or credit card.

Categories: Software Testing

Hello Again. I’m Back.

Eric Jacobson's Software Testing Blog - Tue, 03/24/2015 - 11:06

Egads!  It’s been several months since my last post.  Where have I been? 

I’ve transitioned to a new company and an exciting new role as Principal Test Architect.  After spending months trying to understand how my new company operates, I am beginning to get a handle on how we might improve testing.

In addition to my work transition, my entire family and I have just suffered through this year’s nasty flu at the same time, and then another round of stomach flu shortly thereafter.  The joys of daycare…

And finally, now that my son, Haakon, has arrived, I’ve been adjusting to my new life with two young children.  1 + 1 <> 2.  

It has been a rough winter.

But at last, my brain is once again telling me, “Oh, that would make a nice blog post”.  So let’s get this thing going again!

Categories: Software Testing

Joining SpeedCurve

Steve Souders - Mon, 03/23/2015 - 12:46

I’m excited to announce that I’m joining SpeedCurve!

SpeedCurve provides insight into the interaction between performance and design to help companies deliver fast and engaging user experiences. I’ll be joining Mark Zeman, who launched the company in 2013. We make a great team. I’ve been working on web performance since 2002. Mark has a strong design background including running a design agency and lecturing at New Zealand’s best design school. At SpeedCurve, Mark has pioneered the intersection of performance and design. I’m looking forward to working together to increase the visibility developers and designers have into how their pages perform.

During the past decade+ of evangelizing performance, I’ve been fortunate to work at two of the world’s largest web companies as Chief Performance Yahoo! and Google’s Head Performance Engineer. This work provided me with the opportunity to conduct research, evaluate new technologies, and help the performance community grow.

One aspect that I missed was having deeper engagements with individual companies. That was a big reason why I joined Fastly as Chief Performance Officer. Over the past year I’ve been able to sit down with Fastly customers and help them understand how to speed up their websites and applications. These companies were able to succeed not only because Fastly has built a powerful CDN, but also because they have an inspiring team. I will continue to be a close friend and advisor to Fastly.

During these engagements, I’ve seen that many of these companies don’t have the necessary tools to help them identify how performance is impacting (hurting) the user experience on their websites. There is even less information about ways to improve performance. The standard performance metric is page load time, but there’s often no correlation between page load time and the user’s experience.

We need to shift from network-based metrics to user experience metrics that focus on rendering and when content becomes available. That’s exactly what Mark is doing at SpeedCurve, and why I’m excited to join him. The shift toward focusing on both performance and design is an emerging trend highlighted by Lara Hogan’s book Designing for Performance and Mark’s Velocity speaking appearances. SpeedCurve is at the forefront of this movement.

We (mostly Mark) just launched a redesign of SpeedCurve that includes many new, powerful features such as responsive design analysis, performance budgets, and APIs for continuous deployment. Check it out and let us know what you think.

Categories: Software Testing

Android UI Automated Testing

Google Testing Blog - Fri, 03/20/2015 - 14:51
by Mona El Mahdy

Overview

This post reviews four strategies for Android UI testing with the goal of creating UI tests that are fast, reliable, and easy to debug.

Before we begin, let’s not forget an important rule: whatever can be unit tested should be unit tested. Robolectric and Gradle’s unit test support are great examples of unit testing tools for Android. UI tests, on the other hand, are used to verify that your application returns the correct UI output in response to a sequence of user actions on a device. Espresso is a great framework for running UI actions and verifications in the same process. For more details on the Espresso and UI Automator tools, please see: test support libraries.
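
To make the distinction concrete, here is a minimal sketch of a JVM-only unit test using Robolectric. The activity, view id, and expected text are hypothetical, and the exact runner configuration depends on your Robolectric version:

    import android.widget.TextView;

    import org.junit.Test;
    import org.junit.runner.RunWith;
    import org.robolectric.Robolectric;
    import org.robolectric.RobolectricTestRunner;

    import static org.junit.Assert.assertEquals;

    // Runs on the local JVM with no device or emulator, which keeps it fast.
    @RunWith(RobolectricTestRunner.class)
    public class GreetingActivityTest {

        @Test
        public void showsGreetingOnCreate() {
            // GreetingActivity and R.id.greeting are hypothetical names for this sketch.
            GreetingActivity activity = Robolectric.setupActivity(GreetingActivity.class);
            TextView greeting = (TextView) activity.findViewById(R.id.greeting);
            assertEquals("Hello!", greeting.getText().toString());
        }
    }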

The Google+ team has gone through many iterations of UI testing. Below we discuss the lessons learned from each UI testing strategy. Stay tuned for more posts with more details and code samples.

Strategy 1: Using an End-To-End Test as a UI Test

Let’s start with some definitions. A UI test ensures that your application returns the correct UI output in response to a sequence of user actions on a device. An end-to-end (E2E) test brings up the full system of your app, including all backend servers and the client app. E2E tests will guarantee that data is sent to the client app and that the entire system functions correctly.

Usually, in order to make the application UI functional, you need data from backend servers, so UI tests need to simulate that data, but not necessarily the backend servers themselves. UI tests are often confused with E2E tests because an E2E test closely resembles a manual test scenario. However, debugging and stabilizing E2E tests is very difficult due to many variables, such as network flakiness, authentication against real servers, the size of your system, and so on.


When you use UI tests as E2E tests, you face the following problems:
  • Very large and slow tests. 
  • High flakiness rate due to timeouts and memory issues. 
  • Hard to debug/investigate failures. 
  • Authentication issues (e.g., authentication from automated tests is very tricky).

Let’s see how these problems can be fixed using the following strategies.

Strategy 2: Hermetic UI Testing using Fake Servers

In this strategy, you avoid network calls and external dependencies, but you need to provide your application with data that drives the UI. Update your application to communicate with a local server rather than an external one, and create a fake local server that provides data to your application. You then need a mechanism to generate the data needed by your application. This can be done using various approaches depending on your system design. One approach is to record server responses and replay them from your fake server.
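
As a rough sketch of the record-and-replay idea, assuming OkHttp’s MockWebServer as the local fake and a hypothetical recorded JSON payload, the test enqueues canned responses and hands the fake server’s URL to the app:

    import okhttp3.mockwebserver.MockResponse;
    import okhttp3.mockwebserver.MockWebServer;

    // A tiny wrapper around MockWebServer that replays previously recorded responses.
    public class FakeBackend {
        private final MockWebServer server = new MockWebServer();

        public void start() throws Exception {
            // Replay a recorded response for the next request the app makes (payload is illustrative).
            server.enqueue(new MockResponse()
                    .setResponseCode(200)
                    .setBody("{\"userName\":\"testuser\",\"unreadCount\":3}"));
            server.start();
        }

        // The test points the app at this URL instead of the production endpoint.
        public String baseUrl() {
            return server.url("/").toString();
        }

        public void shutdown() throws Exception {
            server.shutdown();
        }
    }

How the app picks up the injected base URL is exactly the dependency injection question addressed in strategy 3 below.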

Once you have hermetic UI tests talking to a local fake server, you should also have hermetic server tests. This way you split your E2E test into a server-side test, a client-side test, and an integration test to verify that the server and client are in sync (for more details on integration tests, see the backend testing section of this blog).

Now, the client test flow involves only the test, the client app, and the local fake server.


While this approach drastically reduces test size and the flakiness rate, you still need to maintain a separate fake server in addition to your tests. Debugging is still not easy, as you have two moving parts: the test and the local server. And although test stability is greatly improved by this approach, the local server will still cause some flakes.

Let’s see how this could be improved...

Strategy 3: Dependency Injection Design for Apps

To remove the additional dependency of a fake server running on Android, you should use dependency injection in your application for swapping real module implementations with fake ones. One example is Dagger, or you can create your own dependency injection mechanism if needed.

This will improve the testability of your app for both unit testing and UI testing, giving your tests the ability to mock dependencies. In instrumentation testing, the test APK and the app under test are loaded in the same process, so the test code has runtime access to the app code. Not only that, but you can also use classpath override (the fact that the test classpath takes priority over the app under test’s) to override a certain class and inject test fakes there. For example, to make your test hermetic, your app should support injection of the networking implementation. During testing, the test injects a fake networking implementation into your app, and this fake implementation provides seeded data instead of communicating with backend servers.
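
As a sketch of that idea, independent of any particular DI library and with all names hypothetical, the app depends on a small networking interface and the test injects a fake that returns seeded data:

    // The seam the app code depends on instead of calling the network directly.
    public interface NetworkClient {
        String fetchProfileJson(String userId);
    }

    // Fake implementation injected by hermetic UI tests: no network, just seeded data.
    public class FakeNetworkClient implements NetworkClient {
        @Override
        public String fetchProfileJson(String userId) {
            return "{\"userName\":\"testuser\",\"unreadCount\":3}";
        }
    }

During test setup the fake is bound in place of the real HTTP implementation, for example through a test Application subclass or a Dagger test module, so every screen under test renders the seeded data without touching a backend.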


Strategy 4: Building Apps into Smaller Libraries

If you want to scale your app into many modules and views, and plan to add more features while maintaining stable and fast builds/tests, then you should build your app out of small components/libraries. Each library should have its own UI resources and its own dependency management. This strategy not only enables mocking the dependencies of your libraries for hermetic testing, but also serves as an experimentation platform for various components of your application.

Once you have small components with dependency injection support, you can build a test app for each component.

The test apps bring up the actual UI of your libraries, fake the data needed, and mock dependencies. Espresso tests will run against these test apps. This enables testing of smaller libraries in isolation.

For example, let’s consider building smaller libraries for login and settings of your app.
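
As a sketch, with all class names hypothetical, the settings library could ship a small test app whose only job is to launch the settings UI with fake dependencies:

    import android.app.Activity;
    import android.os.Bundle;

    // Hypothetical test-app Activity: it hosts only the settings library's UI,
    // wired with a fake data source, so tests can drive it in isolation.
    public class SettingsTestActivity extends Activity {
        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            // Swap in the fake before the UI is built (names are illustrative).
            SettingsComponent.setDataSource(new FakeSettingsDataSource());
            setContentView(SettingsComponent.createSettingsView(this));
        }
    }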


The settings component test now looks like:
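
Roughly, and reusing the hypothetical SettingsTestActivity sketched above together with the Espresso 2.x APIs, such a test might be:

    import android.test.ActivityInstrumentationTestCase2;

    import static android.support.test.espresso.Espresso.onView;
    import static android.support.test.espresso.action.ViewActions.click;
    import static android.support.test.espresso.assertion.ViewAssertions.matches;
    import static android.support.test.espresso.matcher.ViewMatchers.isDisplayed;
    import static android.support.test.espresso.matcher.ViewMatchers.withText;

    // Runs against the test app only: the settings library, its fake data source,
    // and the UI under test. No backend servers and no other app modules involved.
    public class SettingsComponentTest extends ActivityInstrumentationTestCase2<SettingsTestActivity> {

        public SettingsComponentTest() {
            super(SettingsTestActivity.class);
        }

        @Override
        protected void setUp() throws Exception {
            super.setUp();
            getActivity(); // launch the test-app Activity before each test
        }

        // The "Notifications" label and confirmation text are illustrative values
        // served by the fake data source.
        public void testTogglingNotificationsShowsConfirmation() {
            onView(withText("Notifications")).perform(click());
            onView(withText("Notifications enabled")).check(matches(isDisplayed()));
        }
    }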


Conclusion

UI testing can be very challenging for rich apps on Android. Here are some UI testing lessons learned on the Google+ team:
  1. Don’t write E2E tests instead of UI tests. Instead, write unit tests and integration tests alongside the UI tests.
  2. Hermetic tests are the way to go. 
  3. Use dependency injection while designing your app. 
  4. Build your application into small libraries/modules, and test each one in isolation. You can then have a few integration tests to verify that the integration between components is correct.
  5. Componentized UI tests have proven to be much faster than E2E and 99%+ stable. Fast and stable tests have proven to drastically improve developer productivity.

Categories: Software Testing

Live Virtual Training in User Acceptance Testing, Testing Mobile Devices, and ISTQB Foundation Level Agile Extension Certification

Randy Rice's Software Testing & Quality - Fri, 03/20/2015 - 14:24

I am offering three courses in live virtual format. I will be the instructor on each of these. You can ask me questions along the way in each course and we will have some interactive exercises. Here is the line-up:
  • April 1 - 2 (Wednesday - Thursday) - User Acceptance Testing 1:00 p.m. - 4:00 p.m. EDT.  This is a great course to learn a structured way to approach user acceptance testing that validates systems from a real-world perspective. Register at https://www.mysoftwaretesting.com/Structured_User_Acceptance_Testing_Live_Virtual_p/uat-lv.htm
  • April 7 - 8 (Tuesday - Wednesday) - Testing Mobile Devices 1:00 p.m. - 4:00 p.m. EDT. Learn how to define a strategy and approach for mobile testing that fits the unique needs of your customers and technology. We will cover a wide variety of tests you can perform and you are welcome to try some of the tests on your own mobile devices during the class. Register at https://www.mysoftwaretesting.com/Testing_Mobile_Applications_Live_Virtual_Course_p/mobilelv.htm
  • April 14 - 16 (Tuesday - Thursday) - ISTQB Foundation Level Agile Extension Course 1:00 p.m. - 4:30 p.m. EDT. This is a follow-on to the ISTQB Foundation Level Certification. The focus is on understanding agile development and testing principles, as well as agile testing methods and tools. The course follows the ISTQB Foundation Level Extension Agile Tester Syllabus 2014. Register at https://www.mysoftwaretesting.com/ISTQB_Agile_Extension_Course_Public_Course_p/agilelv.htm

I really hope you can join me in one of these courses. If you want to have your team trained, just let me know and I'll create a custom quote for you.
Categories: Software Testing

Exploratory Testing 3.0

DevelopSense - Michael Bolton - Mon, 03/16/2015 - 22:09
This blog post was co-authored by James Bach and me. In the unlikely event that you don’t already read James’ blog, I recommend you go there now. The summary is that we are beginning the process of deprecating the term “exploratory testing”, and replacing it with, simply, “testing”. We’re happy to receive replies either here […]
Categories: Software Testing

Web Performance News of the Week

LoadStorm - Fri, 03/13/2015 - 17:00

This week in web performance, we saw announcements from Apple, Chinese tech giants joining the auto industry, and Bitcoin mining pools struggling to handle denial of service attacks.

Spring Forward developer announcements from Apple

Announcements from Apple always bring exciting discussion to the tech world, and this time was no different. Apple made several announcements on Monday, including ResearchKit, an open-source software framework that aims to integrate medical and health research with iPhone apps. Other exciting developer announcements included the OS X Server 4.1 Developer Preview, as well as the new Xcode 6.3 beta 3, which includes the iOS 8.3 SDK and Swift 1.2.

China’s tech companies announce plans to invade the auto industry

Two more tech giants are jumping into the self-driving car industry. Alibaba, China’s biggest tech company, announced a joint venture with SAIC, China’s largest auto company. The plan is to build a connected car that would communicate with other vehicles through the cloud. Just a few days earlier, Baidu, the Chinese search engine, declared that it has been working with auto manufacturers. Both suggested a self-driving car could hit the road within the next year. Over the past month, rumors of Apple joining the auto industry and poaching engineers for an electric car project have caused quite a stir, although Mercedes-Benz is not concerned about the competition.

Find out how much your site costs through new tool from webpagetest.org

For those of us in the web performance world, webpagetest.org has been a very useful tool. WebPageTest is an open-source project, primarily supported by Google, that can be used to analyze website performance. This week saw a new addition to this tool that can be used to estimate how much your website would cost users around the world to load. Whatdoesmysitecost.com takes either your website URL or a WebPageTest ID and then tests your site. Try it out!

Bitcoin mining pools targeted in multiple DDOS attacks

Cryptocurrency mining pools are being held for ransom in recent distributed denial of service attacks that began this month. AntPool, BW.com, NiceHash, CKPool and GHash.io are among a number of Bitcoin mining pools and operations that have been hit, and attackers are demanding a ransom of 5 to 10 Bitcoin in order to stop the attacks. Many suspect attacks like this are likely to continue. Check out our recent discussion, where we ask if load testing can stop hackers. In other Bitcoin news, T-Mobile announced this week that it will begin to accept Bitcoin as payment in Poland. In fact, they’re even offering a 20% discount for Bitcoin users.


Oracles Are About Problems, Not Correctness

DevelopSense - Michael Bolton - Thu, 03/12/2015 - 20:49
As James Bach and I have been refining our ideas of testing, we’ve been refining our ideas about oracles. In a recent post, I referred to this passage: Program testing involves the execution of a program over sample test data followed by analysis of the output. Different kinds of test output can be generated. […]
Categories: Software Testing

Explore capabilities, not features

The Quest for Software++ - Thu, 03/12/2015 - 07:30

This is an excerpt from my upcoming book, Fifty Quick Ideas To Improve Your Tests

Exploratory testing requires a clear mission. The mission statement provides focus and enables teams to triage what is important and what is out of scope. A clear mission prevents exploratory testing sessions from turning into unstructured playing with the system. As software features are implemented and user stories get ready for exploratory testing, it’s only logical to set the mission for exploratory testing sessions around new stories or changed features. Although it might sound counter-intuitive, story-oriented missions lead to tunnel vision and prevent teams from getting the most out of their testing sessions.

Stories and features are a solid starting point for coming up with good deterministic checks. However, they aren’t so good for exploratory testing missions. When exploratory testing is focused on a feature, or on a set of changes delivered by a user story, people end up evaluating whether the feature works, and rarely stray off the path. In a sense, teams end up proving what they expect to see. However, exploratory testing is most powerful when it deals with the unexpected and the unknown. For that, we need to allow tangential observations and insights, and design new tests around unexpected discoveries. To achieve that, the mission teams set for exploratory testing can’t be focused purely on features.

Good exploratory testing deals with unexpected risks, and for that we need to look beyond the current piece of work. On the other hand, we can’t cast the net too widely, because testing will lack focus. A good perspective to investigate, one that balances wider scope with focus, is user capabilities. Features give users the capability to do something useful, or take away their capability to do something dangerous or damaging. A good way to look for unexpected risks is not to explore features, but to explore related capabilities instead.

Key benefits

Focusing exploratory testing on capabilities instead of features leads to better insights and prevents tunnel vision.

A nice example of that is the contact form we built for MindMup last year. The related software feature was sending support requests when users fill in the form. We could have explored that feature using multiple vectors, such as field content length, e-mail formats, international character sets in the name or the message, but ultimately this would only focus on proving that the form works. Casting the net a bit wider, we identified two capabilities related to the contact form. People should be able to contact us for support easily in case of trouble. We should be able to support them easily, and solve their problems. Likewise, there is a capability we wanted to prevent. Nobody should be able to block or break the contact channels for other users through intentional or unintentional misuse. We set those capabilities as the mission of our exploratory testing session, and that led us to look at the accessibility of the contact form in case of trouble, and the ease of reporting typical problem scenarios. We discovered two critically important insights.

The first one is that a major cause of trouble would not be covered by the initial solution. Flaky and unreliable network access was responsible for a lot of incoming support requests. But when the internet connection for users goes down randomly, even though the form is filled correctly, the browser might fail to connect to our servers. If someone suddenly goes completely offline, the contact form won’t actually help at all. People might fill in the form, but lack of reliable network access will still disrupt their capability to contact us. The same goes for our servers suddenly dropping offline. None of those situations should happen in an ideal world, but when they do, that’s when users actually need support. So the feature was implemented correctly, but there was still a big capability risk. This led us to offer an alternative contact channel when the network is not accessible. We displayed the contact e-mail prominently on the form, and also repeated it in the error message if the form submission failed.

The second big insight was that people might be able to contact us, but without knowing the internals of the application, they wouldn’t be able to provide information for troubleshooting in case of data corruption or software bugs. That would pretty much leave us in the dark, and disrupt our capability to provide support. As a result, we decided to not even ask for common troubleshooting info (such as browser and operating system version), but instead read it and send automatically in the background. We also pulled out the last 1000 events that happened in the user interface, and sent them automatically with the support request, so that we could replay and investigate what exactly happened.

How to make it work

To get to good capabilities for exploring, brainstorm what a feature allows users to do, or what it prevents them from doing. When exploring user stories, try to focus on the user value part (‘In order to…’) rather than the feature description (‘I want …’).

If you use impact maps for planning work, the third level of the map (actor impacts) are a good starting point for discussing capabilities. Impacts will typically be changes to a capability. If you use user story maps, then the top-level item in the user story map spine related to the current user story is a nice starting point for discussion.
