Feed aggregator

Web Performance News of the Week

LoadStorm - Fri, 03/27/2015 - 15:06

This week in web performance, Google clarified details of the new mobile-friendly ranking algorithm, Facebook open-sourced its Augmented Traffic Control tool, and internet service providers filed lawsuits against the FCC to stop the net neutrality rules.

Google clarifies the Mobile-friendly algorithm details

The April 21st launch date for the new mobile-friendly ranking algorithm is approaching quickly. This week, Google held a live Q&A hangout to provide more details about what we can expect.

Here’s what you need to know:

  • There are no degrees of mobile-friendliness; your page is either mobile-friendly or it isn’t. Google added that the same applies to desktop search; the change is not isolated to mobile searches.
  • The mobile-friendly ranking changes will affect your site on a page-by-page basis. So if half of your site’s pages are mobile-friendly, that half will benefit from the mobile-friendly ranking changes. This is good news for anyone concerned that their site doesn’t make the cut, as they can focus on getting their main pages up to date first.
  • The new ranking algorithm will officially take effect on April 21st. However, the algorithm runs in real time, meaning you can start preparing your site for analysis now. Google said in the hangout that it may take a week or so to completely roll out, so it’s not entirely clear how quickly we can expect to see changes in site rankings.
  • Google News may not be ranked by the new mobile-friendly algorithm yet. Interestingly, the Google News ranking team has no plans to implement the mobile-friendly algorithm in the Google News results.

So how do you know if your page is friendly? The easiest way to check whether your page passes is to enter your site’s URL into Google’s Mobile-Friendly Test tool.

The most important criteria Googlebot checks for are:

  • pages automatically sizing content to the screen
  • avoiding software that is uncommon on mobile devices
  • containing readable text without zooming
  • and placing links far enough apart to be selected easily

It’s important to note that a page may look perfect on your device but completely different on your coworker’s phone. A few easy ways to start making your site mobile-friendly can be found in our responsive web design blog post. Also keep in mind that Google uses over 200 different criteria to rank your site, so it’s possible these changes may not affect your site very much.

Internet providers file lawsuits to reverse net neutrality rules

USTelecom, a trade group whose members include AT&T and Verizon, along with Alamo Broadband and other companies, filed two separate lawsuits against the Federal Communications Commission in the US Court of Appeals this week. The companies want the FCC rules overturned and say the FCC acted outside of its authority, violating the companies’ constitutional rights. Both suits acknowledged that their challenges may be premature, but were filed nonetheless “out of an abundance of caution.” The FCC acknowledged the suits and said it believed the petitions for review were “premature and subject to dismissal.”


Facebook to bring fast internet to the entire world by open-sourcing its Augmented Traffic Control tool

This week, Facebook announced that it is making its custom-built Augmented Traffic Control (ATC) tool open-source. Facebook developers have used the tool to simulate different types of network connections, including older 2G and EDGE mobile data networks and networks that frequently drop their connections. By testing how a site or app performs under these conditions, they are able to optimize app performance for low-connectivity areas around the world.
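
Under the hood, ATC manages standard Linux traffic control rules (tc with the netem qdisc) on a gateway box. As a rough illustration only, here is a minimal Java sketch that shells out to tc to emulate a slow, lossy 2G-like link; the device name and the numbers are illustrative assumptions rather than ATC’s actual connection profiles, and it requires root on a reasonably recent Linux kernel:

    import java.util.Arrays;
    import java.util.List;

    public class NetworkShaper {
        public static void main(String[] args) throws Exception {
            // Emulate rough 2G conditions: ~50 kbit/s, 500 ms latency, 1% packet loss.
            List<String> cmd = Arrays.asList(
                    "tc", "qdisc", "add", "dev", "eth0", "root", "netem",
                    "delay", "500ms", "loss", "1%", "rate", "50kbit");
            Process p = new ProcessBuilder(cmd).inheritIO().start();
            p.waitFor();
            // Restore the link afterwards with: tc qdisc del dev eth0 root
        }
    }

ATC’s value is in managing rules like this per device, behind a web UI, so developers don’t have to craft them by hand.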

The tool started out as an open-source project itself, and it can now continue to evolve as developers around the world use and improve it. Mark Zuckerberg recently spoke about Internet.org at the 2015 Mobile World Congress in Spain. The not-for-profit initiative is Facebook’s commitment to making the internet accessible to people in the developing world; as Zuckerberg wrote in his letter to potential investors, “we don’t build services to make money; we make money to build better services.”

The post Web Performance News of the Week appeared first on LoadStorm.

Getting Manual Testers Involved in Automation

Eric Jacobson's Software Testing Blog - Fri, 03/27/2015 - 08:23

Most of the testers at my new company do not have programming skills (or at least are not putting them to use).  This is not necessarily a bad thing.  But in our case, many of the products-under-test are perfect candidates for automation (e.g., they are API-rich).

We are going through an Agile transformation.  Discussions about tying programmatic checks to “Done” criteria are occurring and most testers are now interested in getting involved with automation.  But how?

I think this is a common challenge.

Here are some ways I have had success getting manual testers involved in automation.  I’ll start with the easiest and work my way down to those requiring more ambition.  A tester wanting to get involved in automation can:

  1. Do unit test reviews with their programmers.  Ask the programmers to walk you through the unit tests.  If you get lost, ask questions like, “what would cause this unit test to fail?” or “can you explain the purpose of this test at a domain level?”
  2. Work with automators to inform the checks they automate.  If you have people focused on writing automated checks, help them determine what automation might help you.  Which checks do you often repeat?  Which are boring?
  3. Design/request a test utility that mocks some crucial interface or makes the invisible visible.  Bounce ideas off your programmers and see if you can design test tools to speed things up.  This is not traditional automation.  But it is automation by some definitions.
  4. Use data-driven automation to author/maintain important checks via a spreadsheet.  This is a brilliant approach because it lets the test automator focus on what they love, designing clever automation.  It lets the tester focus on what they love, designing clever inputs.  Show the tester where the spreadsheet is and how to kick off the automation (a minimal sketch of this pattern follows this list).
  5. Copy and paste an automated check pattern from an IDE, rename the check, and change the inputs and expected results to create new checks.  This takes little to no coding skill.  This is a potential end goal.  If a manual tester gets to this point, buy them a beer and don’t push them further.  This leads to a great deal of value, and going further can get awkward.
  6. Follow an automated check pattern but extend the framework.  Spend some time outside of work learning to code. 
  7. Stand up an automation framework, design automated checks.  Support an Agile team by programming all necessary automated checks.  Spend extensive personal time learning to code.  Read books, write personal programs, take online courses, find a mentor. 
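
As a concrete illustration of the spreadsheet-driven approach in item 4, here is a minimal Java sketch using JUnit 4’s Parameterized runner. The checks.csv file, its input,expected column layout, and the ShippingCalculator under test are all hypothetical; the point is that testers add rows while the automator owns the harness.

    import static org.junit.Assert.assertEquals;

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.util.ArrayList;
    import java.util.Collection;
    import java.util.List;

    import org.junit.Test;
    import org.junit.runner.RunWith;
    import org.junit.runners.Parameterized;
    import org.junit.runners.Parameterized.Parameters;

    @RunWith(Parameterized.class)
    public class SpreadsheetDrivenCheck {

        // Hypothetical system under test: a flat fee plus a per-kilogram rate.
        static class ShippingCalculator {
            static double feeFor(int weightKg) { return 5.0 + 1.5 * weightKg; }
        }

        private final int weightKg;
        private final double expectedFee;

        public SpreadsheetDrivenCheck(int weightKg, double expectedFee) {
            this.weightKg = weightKg;
            this.expectedFee = expectedFee;
        }

        // One check per row of checks.csv, e.g. "10,20.0".
        @Parameters
        public static Collection<Object[]> rows() throws Exception {
            List<Object[]> rows = new ArrayList<Object[]>();
            BufferedReader r = new BufferedReader(new FileReader("checks.csv"));
            String line;
            while ((line = r.readLine()) != null) {
                String[] cells = line.split(",");
                rows.add(new Object[] { Integer.valueOf(cells[0].trim()),
                                        Double.valueOf(cells[1].trim()) });
            }
            r.close();
            return rows;
        }

        @Test
        public void feeMatchesSpreadsheet() {
            assertEquals(expectedFee, ShippingCalculator.feeFor(weightKg), 0.001);
        }
    }
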
Categories: Software Testing

The Parable of the Deer

QA Hates You - Fri, 03/27/2015 - 04:43

So I’m coming back from an errand early one morning, and I turn onto the farm road where I live. I live out in the country, you see, and the farm road is a long, straight road that rolls over hills and creeks between fences, woods, and pastures. As I’m driving down the farm road, I think to myself, Although it’s technically daytime, on cloudy days, deer are often active later than normal. I remember a particular stretch of the road just before the creek, where a cow pasture faces woods and where I’ve seen deer before. Then, as the relative elevation of the road changes in relation to the pasture, I see a single deer in the middle of the pasture against the backdrop of the hills beyond, and I slow my truck to ten miles an hour. Where are your buddies? I ask him, because does and fawns travel together in family groups.

Two deer dart across the road, and two others turn from the fence they were about to hop and retreat into the pasture. If I’d been traveling at normal speed, I very well might have hit one of them.

I know the general behavior of deer; I know the lay of the land and the geography of deer crossings; and I just might have seen the deer in my peripheral vision, outside my focus but enough to trigger additional caution until I did see it consciously. That’s how I knew the deer were there.

And that’s how I found that bug. I know the general behavior of computers, applications, and interfaces; I know something of the domain or problem this program is trying to solve; and I have wide peripheral vision when testing, the ability to see things wrong in the corner of my eye and retry my actions with focus on the problem area.

Categories: Software Testing

How Much Of It Would QA Chew If QA Would Chew It?

QA Hates You - Thu, 03/26/2015 - 12:09

Nihilist chewing gum.

Never mind searching for it on Amazon.com. It’s futile. Also, it’s not available.

(Link source.)

Categories: Software Testing

Guiding Principles for Building a Performance Engineering-Driven Delivery Model

While recently attending a Dynatrace User Group in Hartford, I had the opportunity to sit in on a great presentation from a leading US insurance company as they explained their 3 year APM journey. I see a lot of these success stories, but this one was especially impressive. To see how they have refined their […]

The post Guiding Principles for Building a Performance Engineering-Driven Delivery Model appeared first on Dynatrace APM Blog.

Categories: Load & Perf Testing

Failed Data Integration Costs Customers Hundreds of Dollars

QA Hates You - Wed, 03/25/2015 - 08:04

A failed data integration is going to cost St. Louis area residents up to hundreds of dollars. Or require a refund.

No, that extra few hundred dollars on your monthly sewer bill isn’t a typo.

A bill miscalculation that began nearly two years ago has the Metropolitan St. Louis Sewer District asking thousands of customers in St. Louis County to pay for services that never showed up on bills. The average undercharge for an affected household: $450.

Nearly 1,900 residential customers and 3,700 commercial customers are being asked to pay more on their current sewer bill, which should be in mailboxes by the week of April 6 at the latest.

The reason?

The discrepancy began in May 2013, when Missouri American Water, which provides water service in much of the St. Louis County portion of MSD’s territory, changed its billing system.

MSD buys water meter data from Missouri American in order to calculate many customer rates, but the new system’s data wasn’t properly computing with MSD’s billing programs, district spokesman Lance LeComb said.

“Ultimately MSD is responsible for getting the data right and making sure we have correct account information and send out correct bills,” he said. “Missouri American Water was very supportive of our efforts and provided additional staff” to fix the problem.

Dependency upon an integration, and something at the data provider changed. Expensively.

On the MSD bills, they encourage you to “go green” and pay your bills online.

Given their track record, I can understand anyone’s reluctance to allow MSD’s computer systems access to a customer bank account or credit card.

Categories: Software Testing

Hello Again. I’m Back.

Eric Jacobson's Software Testing Blog - Tue, 03/24/2015 - 11:06

Egads!  It’s been several months since my last post.  Where have I been? 

I’ve transitioned to a new company and an exciting new role as Principal Test Architect.  After spending months trying to understand how my new company operates, I am beginning to get a handle on how we might improve testing.

In addition to my work transition, every member of my family and I have just synchronously suffered through this year’s nasty flu, followed by a round of stomach flu shortly thereafter.  The joys of daycare…

And finally, now that my son, Haakon, has arrived, I’ve been adjusting to my new life with two young children.  1 + 1 <> 2.  

It has been a rough winter.

But alas, my brain is once again telling me, “Oh, that would make a nice blog post”.  So let’s get this thing going again!

Categories: Software Testing

5 Tips to Improve SharePoint Web Part Performance

In a recent SharePoint Performance PerfBytes Episode Mark Tomlinson, Howard Chorney and I discussed SharePoint Performance based on my blog posts System Performance Checks and SharePoint Deployment Checks. We soon concluded that Web Parts – being one of the key concepts in SharePoint – ultimately decides whether your SharePoint sites scale, perform fast and will […]

The post 5 Tips to Improve SharePoint Web Part Performance appeared first on Dynatrace APM Blog.

Categories: Load & Perf Testing

Joining SpeedCurve

Steve Souders - Mon, 03/23/2015 - 12:46

I’m excited to announce that I’m joining SpeedCurve!

SpeedCurve provides insight into the interaction between performance and design to help companies deliver fast and engaging user experiences. I’ll be joining Mark Zeman, who launched the company in 2013. We make a great team. I’ve been working on web performance since 2002. Mark has a strong design background including running a design agency and lecturing at New Zealand’s best design school. At SpeedCurve, Mark has pioneered the intersection of performance and design. I’m looking forward to working together to increase the visibility developers and designers have into how their pages perform.

During the past decade+ of evangelizing performance, I’ve been fortunate to work at two of the world’s largest web companies as Chief Performance Yahoo! and Google’s Head Performance Engineer. This work provided me with the opportunity to conduct research, evaluate new technologies, and help the performance community grow.

One aspect that I missed was having deeper engagements with individual companies. That was a big reason why I joined Fastly as Chief Performance Officer. Over the past year I’ve been able to sit down with Fastly customers and help them understand how to speed up their websites and applications. These companies were able to succeed not only because Fastly has built a powerful CDN, but also because they have an inspiring team. I will continue to be a close friend and advisor to Fastly.

During these engagements, I’ve seen that many of these companies don’t have the necessary tools to help them identify how performance is impacting (hurting) the user experience on their websites. There is even less information about ways to improve performance. The standard performance metric is page load time, but there’s often no correlation between page load time and the user’s experience.

We need to shift from network-based metrics to user experience metrics that focus on rendering and when content becomes available. That’s exactly what Mark is doing at SpeedCurve, and why I’m excited to join him. The shift toward focusing on both performance and design is an emerging trend highlighted by Lara Hogan’s book Designing for Performance and Mark’s Velocity speaking appearances. SpeedCurve is at the forefront of this movement.

We (mostly Mark) just launched a redesign of SpeedCurve that includes many new, powerful features such as responsive design analysis, performance budgets, and APIs for continuous deployment. Check it out and let us know what you think.

Categories: Software Testing

Android UI Automated Testing

Google Testing Blog - Fri, 03/20/2015 - 14:51
by Mona El Mahdy

Overview

This post reviews four strategies for Android UI testing with the goal of creating UI tests that are fast, reliable, and easy to debug.

Before we begin, let’s not forget an important rule: whatever can be unit tested should be unit tested. Robolectric and Gradle unit test support are great examples of unit test frameworks for Android. UI tests, on the other hand, are used to verify that your application returns the correct UI output in response to a sequence of user actions on a device. Espresso is a great framework for running UI actions and verifications in the same process. For more details on the Espresso and UI Automator tools, please see: test support libraries.
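
To give a flavor of what an Espresso check looks like before we dig into the strategies, here is a minimal sketch. The activity, view IDs, and strings are hypothetical, and it assumes the Android testing support library (Espresso plus the AndroidJUnit4 runner and ActivityTestRule) is on the test classpath; treat it as an illustration, not a drop-in test.

    import static android.support.test.espresso.Espresso.onView;
    import static android.support.test.espresso.action.ViewActions.click;
    import static android.support.test.espresso.action.ViewActions.typeText;
    import static android.support.test.espresso.assertion.ViewAssertions.matches;
    import static android.support.test.espresso.matcher.ViewMatchers.isDisplayed;
    import static android.support.test.espresso.matcher.ViewMatchers.withId;
    import static android.support.test.espresso.matcher.ViewMatchers.withText;

    import android.support.test.rule.ActivityTestRule;
    import android.support.test.runner.AndroidJUnit4;

    import org.junit.Rule;
    import org.junit.Test;
    import org.junit.runner.RunWith;

    @RunWith(AndroidJUnit4.class)
    public class GreetingActivityTest {

        @Rule
        public ActivityTestRule<GreetingActivity> activityRule =
                new ActivityTestRule<GreetingActivity>(GreetingActivity.class);

        @Test
        public void typedNameAppearsInGreeting() {
            // Perform user actions in the UI...
            onView(withId(R.id.name_field)).perform(typeText("Mona"));
            onView(withId(R.id.greet_button)).perform(click());
            // ...then verify the rendered UI output.
            onView(withText("Hello, Mona!")).check(matches(isDisplayed()));
        }
    }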

The Google+ team has performed many iterations of UI testing. Below we discuss the lessons learned from each UI testing strategy. Stay tuned for more posts with more details and code samples.

Strategy 1: Using an End-To-End Test as a UI Test

Let’s start with some definitions. A UI test ensures that your application returns the correct UI output in response to a sequence of user actions on a device. An end-to-end (E2E) test brings up the full system of your app, including all backend servers and the client app. E2E tests will guarantee that data is sent to the client app and that the entire system functions correctly.

Usually, in order to make the application UI functional, you need data from backend servers, so UI tests need to simulate the data but not necessarily the backend servers. In many cases UI tests are confused with E2E tests because E2E tests are very similar to manual test scenarios. However, debugging and stabilizing E2E tests is very difficult due to many variables like network flakiness, authentication against real servers, the size of your system, etc.


When you use UI tests as E2E tests, you face the following problems:
  • Very large and slow tests. 
  • High flakiness rate due to timeouts and memory issues. 
  • Hard to debug/investigate failures. 
  • Authentication issues (e.g., authentication from automated tests is very tricky).

Let’s see how these problems can be fixed using the following strategies.

Strategy 2: Hermetic UI Testing using Fake Servers

In this strategy, you avoid network calls and external dependencies, but you need to provide your application with data that drives the UI. Update your application to communicate with a local server rather than an external one, and create a fake local server that provides data to your application. You then need a mechanism to generate the data needed by your application. This can be done using various approaches depending on your system design. One approach is to record server responses and replay them in your fake server.
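
One convenient way to stand up such a fake is Square’s open-source MockWebServer, which replays canned responses over real HTTP on localhost. A minimal sketch follows; the endpoint path and the canned JSON are illustrative, and in a real test you would inject the server’s URL as the app’s base URL rather than fetch it directly.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;

    import com.squareup.okhttp.mockwebserver.MockResponse;
    import com.squareup.okhttp.mockwebserver.MockWebServer;

    public class FakeServerDemo {
        public static void main(String[] args) throws Exception {
            // Queue up a recorded/canned response and start the local server.
            MockWebServer server = new MockWebServer();
            server.enqueue(new MockResponse()
                    .setResponseCode(200)
                    .setBody("{\"posts\":[{\"id\":1,\"text\":\"hello\"}]}"));
            server.start();

            // Point the client at the fake instead of production.
            URL url = server.getUrl("/feed/v1/posts");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(conn.getInputStream()));
            System.out.println(in.readLine()); // prints the canned JSON
            in.close();
            server.shutdown();
        }
    }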

Once you have hermetic UI tests talking to a local fake server, you should also have server hermetic tests. This way you split your E2E test into a server-side test, a client-side test, and an integration test to verify that the server and client are in sync (for more details on integration tests, see the backend testing section of this blog).

Now, the client test flow looks like:


While this approach drastically reduces the test size and flakiness rate, you still need to maintain a separate fake server as well as your test. Debugging is still not easy as you have two moving parts: the test and the local server. While test stability will be largely improved by this approach, the local server will cause some flakes.

Let’s see how this could be improved...

Strategy 3: Dependency Injection Design for Apps

To remove the additional dependency of a fake server running on Android, you should use dependency injection in your application for swapping real module implementations with fake ones. One example is Dagger, or you can create your own dependency injection mechanism if needed.

This will improve the testability of your app for both unit testing and UI testing, providing your tests with the ability to mock dependencies. In instrumentation testing, the test apk and the app under test are loaded in the same process, so the test code has runtime access to the app code. Not only that, but you can also use classpath override (the fact that the test classpath takes priority over the app under test) to override a certain class and inject test fakes there. For example, to make your test hermetic, your app should support injection of the networking implementation. During testing, the test injects a fake networking implementation into your app, and this fake implementation will provide seeded data instead of communicating with backend servers.
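
A minimal sketch of that swap in plain Java (all names here are hypothetical, and Dagger would normally wire the bindings via modules; plain constructor injection is enough to show the idea):

    // The app depends on an abstraction, not on a concrete HTTP stack.
    interface NetworkClient {
        String fetchPosts();
    }

    // Production implementation talks to real backend servers.
    class HttpNetworkClient implements NetworkClient {
        @Override
        public String fetchPosts() {
            return httpGet("https://api.example.com/posts"); // hypothetical endpoint
        }
        private String httpGet(String url) { /* real HTTP call elided */ return ""; }
    }

    // Test implementation returns seeded data without touching the network.
    class FakeNetworkClient implements NetworkClient {
        @Override
        public String fetchPosts() {
            return "{\"posts\":[{\"id\":1,\"text\":\"seeded\"}]}";
        }
    }

    // The UI layer receives whichever implementation is injected.
    class FeedPresenter {
        private final NetworkClient client;
        FeedPresenter(NetworkClient client) { this.client = client; }
        String loadFeed() { return client.fetchPosts(); }
    }

The hermetic test then constructs (or classpath-overrides in) new FeedPresenter(new FakeNetworkClient()), so the UI renders seeded data with no servers involved.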


Strategy 4: Building Apps into Smaller Libraries

If you want to scale your app into many modules and views, and plan to add more features while maintaining stable and fast builds/tests, then you should build your app into small components/libraries. Each library should have its own UI resources and use dependency management. This strategy not only enables mocking dependencies of your libraries for hermetic testing, but also serves as an experimentation platform for various components of your application.

Once you have small components with dependency injection support, you can build a test app for each component.

The test apps bring up the actual UI of your libraries, fake data needed, and mock dependencies. Espresso tests will run against these test apps. This enables testing of smaller libraries in isolation.

For example, let’s consider building smaller libraries for login and settings of your app.


The settings component test now looks like:


Conclusion

UI testing can be very challenging for rich apps on Android. Here are some UI testing lessons learned on the Google+ team:
  1. Don’t write E2E tests instead of UI tests. Instead, write unit tests and integration tests alongside the UI tests. 
  2. Hermetic tests are the way to go. 
  3. Use dependency injection while designing your app. 
  4. Build your application into small libraries/modules, and test each one in isolation. You can then have a few integration tests to verify that integration between components is correct. 
  5. Componentized UI tests have proven to be much faster than E2E tests and 99%+ stable. Fast and stable tests have proven to drastically improve developer productivity.

Categories: Software Testing

Live Virtual Training in User Acceptance Testing, Testing Mobile Devices, and ISTQB Foundation Level Agile Extension Certification

Randy Rice's Software Testing & Quality - Fri, 03/20/2015 - 14:24

I am offering three courses in live virtual format. I will be the instructor on each of these. You can ask me questions along the way in each course and we will have some interactive exercises. Here is the line-up:
April 1 - 2 (Wednesday - Thursday) - User Acceptance Testing 1:00 p.m. - 4:00 p.m. EDT.  This is a great course to learn a structured way to approach user acceptance testing that validates systems from a real-world perspective. Register at https://www.mysoftwaretesting.com/Structured_User_Acceptance_Testing_Live_Virtual_p/uat-lv.htm
April 7 - 8 (Tuesday - Wednesday) - Testing Mobile Devices 1:00 p.m. - 4:00 p.m. EDT. Learn how to define a strategy and approach for mobile testing that fits the unique needs of your customers and technology. We will cover a wide variety of tests you can perform and you are welcome to try some of the tests on your own mobile devices during the class. Register at https://www.mysoftwaretesting.com/Testing_Mobile_Applications_Live_Virtual_Course_p/mobilelv.htm
April 14 - 16 (Tuesday - Thursday) - ISTQB Foundation Level Agile Extension Course 1:00 p.m. - 4:30 p.m. EDT. This is a follow-on to the ISTQB Foundation Level Certification. The focus is on understanding agile development and testing principles, as well as agile testing methods and tools. The course follows the ISTQB Foundation Level Extension Agile Tester Syllabus 2014. Register at https://www.mysoftwaretesting.com/ISTQB_Agile_Extension_Course_Public_Course_p/agilelv.htm
I really hope you can join me in one of these courses. If you want to have your team trained, just let me know and I'll create a custom quote for you.
Categories: Software Testing

How to Create Performance Models using Application Monitoring Data

Dynatrace collects a wealth of monitoring data on applications and one of the great aspects is that it also provides interfaces allowing external applications to use this information. An example we’ve just recently seen in a blog post showed how you can use Dynatrace data to monitor your entire application landscape across a server farm. […]

The post How to Create Performance Models using Application Monitoring Data appeared first on Dynatrace APM Blog.

Categories: Load & Perf Testing

The Essential Omni Channel User Experience Measurement Index

In the past we looked at the page load times of our desktop browser applications, and we used concepts like APDEX to find out what the user experience looks like, but a lot has changed since APDEX was defined. At Velocity last November I presented that APDEX is dead, and W3C […]

The post The Essential Omni Channel User Experience Measurement Index appeared first on Dynatrace APM Blog.

Categories: Load & Perf Testing

Exploratory Testing 3.0

DevelopSense - Michael Bolton - Mon, 03/16/2015 - 22:09
This blog post was co-authored by James Bach and me. In the unlikely event that you don’t already read James’ blog, I recommend you go there now. The summary is that we are beginning the process of deprecating the term “exploratory testing”, and replacing it with, simply, “testing”. We’re happy to receive replies either here […]
Categories: Software Testing

Web Performance News of the Week

LoadStorm - Fri, 03/13/2015 - 17:00

This week in web performance, Apple made its Spring Forward developer announcements, Chinese tech giants announced plans to join the auto industry, and Bitcoin mining pools struggled with denial of service attacks.

Spring Forward developer announcements from Apple

Announcements from Apple always bring exciting discussion to the tech world, and this time was no different. Apple made several announcements on Monday, including ResearchKit, an open-source software framework that attempts to integrate medical and health research with iPhone apps. Other exciting developer announcements included the OS X Server 4.1 Developer Preview, as well as the new Xcode 6.3 beta 3, which includes the iOS 8.3 SDK and Swift 1.2.

China’s tech companies announce plans to invade the auto industry

Two more tech giants are jumping into the self-driving car industry. This week, Alibaba, China’s biggest tech company, announced a joint venture with SAIC, China’s largest auto company. The plan is to build a connected car that would communicate with other vehicles through the cloud. Just a few days earlier, Baidu, a Chinese search engine, announced that it has been working with auto manufacturers. Both suggested a self-driving car could hit the road within the next year. Over the past month, rumors of Apple joining the auto industry and poaching engineers for an electric car project have caused quite a stir, although Mercedes-Benz is not concerned about the competition.

Find out how much your site costs through new tool from webpagetest.org

For those of us in the web performance world, webpagetest.org has been a very useful tool. WebPageTest is an open-source project, primarily supported by Google, that can be used to analyze website performance. This week brought a new addition to the tool that estimates how much your website costs users around the world to load. Whatdoesmysitecost.com takes either your website URL or a WebPageTest ID and then tests your site. Try it out!

Bitcoin mining pools targeted in multiple DDoS attacks

Digital currency mining pools are being held for ransom in distributed denial of service attacks that began this month. AntPool, BW.com, NiceHash, CKPool and GHash.io are among a number of Bitcoin mining pools and operations that have been hit, and attackers are demanding a ransom of 5 to 10 Bitcoin to stop the attacks. Many suspect attacks like this are likely to continue. Check out our recent discussion, where we ask if load testing can stop hackers. In other Bitcoin news, T-Mobile announced this week that it will begin to accept Bitcoin as payment in Poland. In fact, it’s even offering a 20% discount for Bitcoin users.

The post Web Performance News of the Week appeared first on LoadStorm.

Oracles Are About Problems, Not Correctness

DevelopSense - Michael Bolton - Thu, 03/12/2015 - 20:49
As James Bach and I have been refining our ideas of testing, we’ve been refining our ideas about oracles. In a recent post, I referred to this passage: Program testing involves the execution of a program over sample test data followed by analysis of the output. Different kinds of test output can be generated. […]
Categories: Software Testing

The Art of DevOps Part II – Islands of Development

Welcome back to part two of my four part series on The Art of DevOps. I have previously set the stage, so in this article I will focus on the primary objectives in executing a solid DevOps operation specifically within the Islands of Development. The intel herein revolves around clear, concise communications and sharpening defenses […]

The post The Art of DevOps Part II – Islands of Development appeared first on Dynatrace APM Blog.

Categories: Load & Perf Testing

Explore capabilities, not features

The Quest for Software++ - Thu, 03/12/2015 - 07:30

This is an excerpt from my upcoming book, Fifty Quick Ideas To Improve Your Tests

Exploratory testing requires a clear mission. The mission statement provides focus and enables teams to triage what is important and what is out of scope. A clear mission prevents exploratory testing sessions from turning into unstructured playing with the system. As software features are implemented and user stories get ready for exploratory testing, it’s only logical to set the mission for exploratory testing sessions around new stories or changed features. Although it might sound counter-intuitive, story-oriented missions lead to tunnel vision and prevent teams from getting the most out of their testing sessions.

Stories and features are a solid starting point for coming up with good deterministic checks. However, they aren’t so good for exploratory testing missions. When exploratory testing is focused on a feature, or a set of changes delivered by a user story, people end up evaluating whether the feature works, and rarely stray off the path. In a sense, teams end up proving what they expect to see. However, exploratory testing is most powerful when it deals with the unexpected and the unknown. For that, we need to allow tangential observations and insights, and design new tests around unexpected discoveries. To achieve that, the missions teams set for exploratory testing can’t be focused purely on features.

Good exploratory testing deals with unexpected risks, and for that we need to look beyond the current piece of work. On the other hand, we can’t cast the net too widely, because testing will lack focus. A good perspective to investigate, one that balances wider scope with focus, is user capabilities. Features give users the capability to do something useful, or take away users’ capability to do something dangerous or damaging. A good way to look for unexpected risks is to avoid exploring features and explore the related capabilities instead.

Key benefits

Focusing exploratory testing on capabilities instead of features leads to better insights and prevents tunnel vision.

A nice example of that is the contact form we built for MindMup last year. The related software feature was sending support requests when users fill in the form. We could have explored that feature using multiple vectors, such as field content length, e-mail formats, or international character sets in the name or the message, but ultimately this would only focus on proving that the form works. Casting the net a bit wider, we identified two capabilities related to the contact form: people should be able to contact us easily for support in case of trouble, and we should be able to support them easily and solve their problems. Likewise, there was a capability we wanted to prevent: nobody should be able to block or break the contact channels for other users through intentional or unintentional misuse. We set those capabilities as the mission of our exploratory testing session, and that led us to look at the accessibility of the contact form in case of trouble, and the ease of reporting typical problem scenarios. We discovered two critically important insights.

The first one is that a major cause of trouble would not be covered by the initial solution. Flaky and unreliable network access was responsible for a lot of incoming support requests. But when the internet connection for users goes down randomly, even though the form is filled correctly, the browser might fail to connect to our servers. If someone suddenly goes completely offline, the contact form won’t actually help at all. People might fill in the form, but lack of reliable network access will still disrupt their capability to contact us. The same goes for our servers suddenly dropping offline. None of those situations should happen in an ideal world, but when they do, that’s when users actually need support. So the feature was implemented correctly, but there was still a big capability risk. This led us to offer an alternative contact channel when the network is not accessible. We displayed the contact e-mail prominently on the form, and also repeated it in the error message if the form submission failed.

The second big insight was that people might be able to contact us, but without knowing the internals of the application, they wouldn’t be able to provide information for troubleshooting in case of data corruption or software bugs. That would pretty much leave us in the dark, and disrupt our capability to provide support. As a result, we decided not to even ask for common troubleshooting info (such as browser and operating system version), but instead to read it and send it automatically in the background. We also pulled out the last 1000 events that happened in the user interface and sent them automatically with the support request, so that we could replay and investigate what exactly happened.
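
That last idea amounts to keeping a bounded in-memory log of UI events and attaching it to the support payload. Here is a hedged Java sketch of the concept (MindMup itself is a JavaScript application, and the names here are hypothetical):

    import java.util.ArrayDeque;
    import java.util.Deque;

    // Keeps the last N user-interface events so they can be attached
    // automatically to an outgoing support request.
    public class UiEventLog {
        private static final int CAPACITY = 1000;
        private final Deque<String> events = new ArrayDeque<String>(CAPACITY);

        public synchronized void record(String event) {
            if (events.size() == CAPACITY) {
                events.removeFirst(); // drop the oldest event
            }
            events.addLast(event);
        }

        public synchronized String asSupportAttachment() {
            StringBuilder sb = new StringBuilder();
            for (String e : events) {
                sb.append(e).append('\n');
            }
            return sb.toString();
        }
    }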

How to make it work

To get to good capabilities for exploring, brainstorm what a feature allows users to do, or what it prevents them from doing. When exploring user stories, try to focus on the user value part (‘In order to…’) rather than the feature description (‘I want …’).

If you use impact maps for planning work, the third level of the map (actor impacts) is a good starting point for discussing capabilities. Impacts will typically be changes to a capability. If you use user story maps, then the top-level item in the user story map spine related to the current user story is a nice starting point for discussion.

How to Analyze Problems in Multi-Threaded Applications

As part of my Share Your PurePath and Performance Clinic initiatives I get to see lots of interesting problems out there. This time I picked two examples that just came in this week from Balasz and Daniel. Both wanted my opinion on why their apps show high response time contribution to their web requests coming from […]

The post How to Analyze Problems in Multi-Threaded Applications appeared first on Dynatrace APM Blog.

Categories: Load & Perf Testing
