Feed aggregator

Testing on the Toilet: Writing Descriptive Test Names

Google Testing Blog - Thu, 10/16/2014 - 14:40
by Andrew Trenk

This article was adapted from a Google Testing on the Toilet (TotT) episode. You can download a printer-friendly version of this TotT episode and post it in your office.

How long does it take you to figure out what behavior is being tested in the following code?

@Test public void isUserLockedOut_invalidLogin() {
  authenticator.authenticate(username, invalidPassword);
  assertFalse(authenticator.isUserLockedOut(username));
  authenticator.authenticate(username, invalidPassword);
  assertFalse(authenticator.isUserLockedOut(username));
  authenticator.authenticate(username, invalidPassword);
  assertTrue(authenticator.isUserLockedOut(username));
}
You probably had to read through every line of code (maybe more than once) and understand what each line is doing. But how long would it take you to figure out what behavior is being tested if the test had this name?

isUserLockedOut_lockOutUserAfterThreeInvalidLoginAttempts

You should now be able to understand what behavior is being tested by reading just the test name, and you don’t even need to read through the test body. The test name in the above code sample hints at the scenario being tested (“invalidLogin”), but it doesn’t actually say what the expected outcome is supposed to be, so you had to read through the code to figure it out.
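
To make this concrete, here is a minimal sketch (not from the original article) of the same test under the more descriptive name, reusing the authenticator fixture from the sample above; the three-attempt threshold comes from the test itself:

@Test public void isUserLockedOut_lockOutUserAfterThreeInvalidLoginAttempts() {
  // Two invalid attempts: the user should not be locked out yet.
  authenticator.authenticate(username, invalidPassword);
  authenticator.authenticate(username, invalidPassword);
  assertFalse(authenticator.isUserLockedOut(username));
  // The third invalid attempt should trigger the lockout.
  authenticator.authenticate(username, invalidPassword);
  assertTrue(authenticator.isUserLockedOut(username));
}

The body is essentially unchanged; the name alone now documents both the scenario and the expected outcome.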

Putting both the scenario and the expected outcome in the test name has several other benefits:

- If you want to know all the possible behaviors a class has, all you need to do is read through the test names in its test class, instead of spending minutes or hours digging through the test code (or even the class itself) trying to figure out its behavior. This can also be useful during code reviews, since you can quickly tell whether the tests cover all expected cases.

- Giving tests more explicit names forces you to split the testing of different behaviors into separate tests. Otherwise you may be tempted to dump assertions for different behaviors into one test, which over time can lead to tests that keep growing and become difficult to understand and maintain.

- The exact behavior being tested might not always be clear from the test code. If the test name isn’t explicit about this, sometimes you might have to guess what the test is actually testing.

- You can easily tell if some functionality isn’t being tested. If you don’t see a test name that describes the behavior you’re looking for, then you know the test doesn’t exist.

- When a test fails, you can immediately see what functionality is broken without looking at the test’s source code.

There are several common patterns for structuring the name of a test (one example is to name tests like an English sentence with “should” in the name, e.g., shouldLockOutUserAfterThreeInvalidLoginAttempts). Whichever pattern you use, the same advice still applies: Make sure test names contain both the scenario being tested and the expected outcome.

Sometimes just specifying the name of the method under test may be enough, especially if the method is simple and has only a single behavior that is obvious from its name.

Categories: Software Testing

How 15 Minutes Spent on Optimizing Performance Could Save you Millions in Lost Revenue

Whether you are a well-established e-commerce brand such as Nordstrom, JCPenney, or Costco, or a small business with a web shop, it’s never too late to do your end-user performance homework. Robert, who developed and runs a “smaller” ASP.NET MVC-based web shop, started getting individual user complaints about performance and usability problems. A […]

The post How 15 Minutes Spent on Optimizing Performance Could Save you Millions in Lost Revenue appeared first on Compuware APM Blog.

Categories: Load & Perf Testing

Why does a typical ecommerce page take 6.5 seconds to load primary content?

Web Performance Today - Wed, 10/15/2014 - 18:20

Every quarter at Radware, we release a new “state of the union” report, with key findings about the web performance of the world’s most popular ecommerce sites.

Every quarter, we find that the median top 100 ecommerce site takes longer to render feature content than it took the previous quarter.

Every quarter, we field the question: But how could this possibly be happening? Networks, browsers, hardware… they’re all getting better, aren’t they?

The answer to this question is: pages are slower because they’re bigger, fatter, and more complex than ever. Size and complexity come with a performance price tag, and that price tag gets steeper every year.

In this post, I’m going to walk through a few of the key findings from our latest report. Then I’m going to share a few examples of practices that are responsible for this downward performance trend.

First, some background

Since 2010, we’ve been measuring and analyzing the performance of the top 500 online retailers (as ranked by Alexa.com). We look at web page metrics such as load time, time to interact (the amount of time it takes for a page to render its feature “above the fold” content), page size, page composition, and adoption of performance best practices. Our goal is to obtain a real-world “shopper’s eye view” of the performance of leading sites, and to track how this performance changes over time. We release the results four times a year in quarterly “state of the union” reports. (You can see a partial archive of past reports here.)

Finding 1: The median page takes 6.5 seconds to render feature content, and 11.4 seconds to fully load.

If you care about the user experience, then Time to Interact (TTI) is a metric you should care about. TTI is the amount of time it takes for a page to download and display its primary content. (On ecommerce sites, primary content is usually some kind of feature banner or hero image.) Most consumers say they expect pages to load in 3 seconds or less, so this sets the bar for TTI.

We found the median top 100 ecommerce page has a Time to Interact of 6.5 seconds — more than twice the ideal TTI of 3 seconds. In the original post, a timed filmstrip view (one of my favourite WebPagetest features) shows what a visitor to that median page sees during those seconds.

Finding 2: The median page has slowed down by 23% in just one year.

Not only is this much slower than users expect, it’s gotten slower over time. In our Fall 2013 report, we found that the median page had a TTI of 5.3 seconds. In other words, the median page has slowed down by 23%. That’s a huge increase.

Case study after case study has proven to us that, when it comes to page speed, every second counts. Walmart found that, for every 1 second of performance improvement, they experienced up to a 2% conversion increase. Even more dramatically, at the recent Velocity NY conference, Staples shared that shaving just 1 second off their median load time yielded a 10% conversion increase.

Finding 3: “Page bloat” continues to be rampant.

This finding is consistent quarter over quarter: pages keep getting bigger, both in terms of payload and number of resources. With a payload of 1492 KB, the median page is 19% larger than it was one year ago.

Looking at page size focuses on just one part of the problem. In my opinion, page complexity is arguably a much bigger problem than page size. To understand why, you need to know that while 1492 KB is a pretty hefty payload, this number is actually down considerably from the peak payload of 1677 KB we reported three months ago, in our Summer 2014 state of the union. At that time, the median page contained 100 resource requests, fewer than the current median of 106 resources. So the median page today is significantly smaller but has 6% more resources than it did three months ago. Here’s what that means…

Some of the most common performance culprits

Every resource request introduces an incremental performance slowdown, as well as the risk of page failure. Let me illustrate with waterfall snippets showing a few real-world examples. (If you’re new to waterfall charts, here’s a tutorial on how to read them. TL;DR version: long blue bars are bad.)

Before you look at these, I want to point out that the goal here is not to publicly shame any of these site owners. Not only are these examples typical of what you might find on many ecommerce sites, they’re most definitely not the most egregious examples I’ve seen. I chose them specifically because of how typical they are.

1. Hero images that take too long to download.

2. Stylesheets that take too long to download and parse, or that are improperly placed and block the page from rendering.

3. Custom fonts that require huge amounts of CSS, or heavy Javascript, or are hosted externally.

4. Poorly implemented responsive design. (Related to 2, but merits its own callout given the popularity of RWD.)

Takeaway

Back when I started using the web, you could make pages any colour you wanted, as long as that colour was grey. I still remember the excitement I felt the very first time I was able to use colour on the background of a page. (It was yellow, if you’re wondering.) I remember when the <center> tag came along. And good golly, I remember when we were first able to add images to pages. That was a heady (and animated gif-filled) time.

Sure, pages used to be leaner and faster. They also used to look really, really boring. I don’t long for the return of a grey, graphic-less web.

This is all to say that I love images, stylesheets, custom fonts, and responsive design. They give designers and developers unprecedented control over the look and feel of web pages. They make pages more beautiful across an ever-increasing number of platforms and devices. But they can also inflict massive performance penalties — penalties that cannot be completely mitigated by faster browsers, networks, and gadgets. And this impact is increasing, rather than decreasing, over time.

As site owners, designers, developers, UX professionals, or whatever your role is, we need to be mindful of this. Performance is the responsibility of every single person who touches a web page, from conception through to launch.

Get the report: State of the Union: Ecommerce Page Speed & Web Performance [Fall 2014]

The post Why does a typical ecommerce page take 6.5 seconds to load primary content? appeared first on Web Performance Today.

Fifty Quick Ideas To Improve Your User Stories – Now Available

The Quest for Software++ - Tue, 10/14/2014 - 09:18

The final version of my latest book, Fifty Quick Ideas To Improve Your User Stories, is now available on the Kindle store, and the print version will be available on Amazon soon.

This book will help you write better stories, spot and fix common issues, split stories so that they are smaller but still valuable, and deal with difficult stuff like crosscutting concerns, long-term effects and non-functional requirements. Above all, this book will help you achieve the promise of agile and iterative delivery: to ensure that the right stuff gets delivered through productive discussions between delivery team members and business stakeholders.

For the next seven days, the book will be sold on the Kindle store at half the normal price. To grab it at that huge discount, head over to the Kindle store (the price will double on 22nd October). Here are the quick links: Amazon.com UK DE FR ES IT JP CN BR CA MX AU IN

Reflecting on the Perform Global User Conference

Perf Planet - Tue, 10/14/2014 - 07:00

Another wonderful Perform Conference! Great American purchased the Dynatrace Software in December of 2012, and we made our first appearance at the Perform Conference 9 months later (2013).  We had no idea what to expect last October, but walked away impressed at the work that had been put into the Conference and with the value […]

The post Reflecting on the Perform Global User Conference appeared first on Compuware APM Blog.

Categories: Load & Perf Testing

Performance Monitoring in Healthcare IT

LoadImpact - Tue, 10/14/2014 - 04:20

Healthcare providers and patients are more reliant than ever on IT departments to deliver highly accessible, functional, and high-performing applications. No longer limited to small test cases or non-critical systems, today’s modern IT departments have to deliver access to electronic medical records, messaging, imaging, collaboration, prescription management and even virtual office visits over the ... Read more

The post Performance Monitoring in Healthcare IT appeared first on Load Impact Blog.

Categories: Load & Perf Testing

6 Facts About Cyber Monday Every E-Commerce Business Should Know

LoadStorm - Mon, 10/13/2014 - 09:41
1. Cyber Monday is Growing

Adobe reported that Cyber Monday e-commerce sales in 2013 reached $2.29 billion – a staggering 16% increase over 2012. comScore reported that desktop sales on Cyber Monday 2013 totaled over $1.73 billion, making it the heaviest US online spending day in history.

With these kinds of numbers, only time will tell how long Cyber Monday will continue to grow. But one thing is certain: Cyber Monday is the single most important day of the year for e-commerce businesses.

2. Mobile Shopping is Growing

Sites only designed for desktops are missing out on a huge chunk of the market. According to IBM’s Cyber Monday report, more than 18% of consumers used mobile devices to visit retailer sites. Even more impressive is the fact that mobile sales accounted for 13% of all online spending that day – an increase of 96% over 2012.

While making an e-commerce application mobile isn’t easy, it’s definitely not something you can skip anymore!

3. Internet Shoppers Are an Impatient Bunch

Even with the surge of traffic on Cyber Monday, web performance is absolutely critical to success. Did you know that studies have shown:

  • 74% of users will abandon a mobile site if it takes longer than 5 seconds to load
  • 46% of users will NOT return to a poorly performing website
  • You have 5 seconds to engage a customer before he or she leaves your site (How much of that are you wasting with load time?)

Customers don’t have any patience for slow sites, and the fact is that if a site isn’t fast, they will spend their money elsewhere. The peak load time for conversions is 2 seconds, and just a one-second delay in load time causes:

  • A 7% decrease in conversions
  • 11% fewer page views
  • A 16% decrease in customer satisfaction

For sources and more statistics on the impact of web performance on conversions, check out our full infographic.

Fast and scalable wins the race!

4. Each Year Several Companies Have High Profile Website Crashes

In 2012, the standout crash was finishline.com; in 2013 it was Motorola. In both cases, the heavy load of traffic slowed the websites to a crawl, triggered lots of errors, and eventually crashed them completely.

According to Finish Line CEO Glen S. Lyon, Finish Line’s new website launched “November 19th and cost us approximately $3 million in lost sales . . . Following the launch, it became apparent that the customer experience was negatively impacted.” To read more about Motorola’s debacle in 2013, check out our recent blog post: Cyber Monday and The Impact of Web Performance.

5. Cyber Monday Shopping Has Gone Social

For better or worse, many shoppers share their experiences on social media. According to OfferPop, Cyber Monday accounted for 1.4% of all social media chatter that day last year. This can be excellent exposure, with shoppers spreading the great deals you are offering, or a PR nightmare.

Unprepared businesses will not only lose out on sales; unhappy shoppers will also share their bad experiences with friends. Since we are using Finish Line as our example of high-profile website crashes, it seems fitting to illustrate how social media played into their painful weekend. Angry tweets and Facebook messages popped up throughout the weekend; here is a small selection of the best, courtesy of Retail Info Systems News:

“The site is slower than slow.”

“I have had the flashing Finish Line icon running on my page for over 30 minutes now trying to confirm my order. And I have tried to refresh and nothing.”

“I have been trying for 2 days to submit an order on your site – receive error message every time.”

“Y’all’s website is down. Are you maybe going to extend your sales because of it?”

“Wait, you schedule maintenance on Cyber Monday?”

“Extremely disappointed! Boo! You are my go to store, and the one day you have huge Internet sales your website doesn’t work.”

6. Preparing Early for the Surge of Traffic is Essential

Want to avoid ending up like Finish Line and Motorola? There is only one way to ensure that your website will absolutely, positively, without a doubt be fast and error-free under pressure on Cyber Monday: performance testing.

Performance testing is a critical part of development, and leaving it to the last minute or testing as an afterthought is a recipe for disaster (see the examples above). Performance testing is best used as part of an iterative cycle: run a test, tune the application, run a test, review the changes in scalability, tune the application, run a test, and so on, until the desired performance is reached at the necessary scale, whether that is 300 concurrent users or 300,000. Without time to tune the application after testing, it may very well be in hot water with no way to get out before the big day.

There are tons of performance testing tools to choose from (I personally recommend LoadStorm ; ) but whatever tool you use, the moral of this story is: Test early, test often.

The post 6 Facts About Cyber Monday Every E-Commerce Business Should Know appeared first on LoadStorm.

Win a free Velocity Barcelona 2014 conference pass!

Perf Planet - Mon, 10/13/2014 - 07:00
A simple raffle and perhaps your last chance to win a 2-day pass for the great Web Performance and Operations conference (Barcelona, Oct 17-19).

Design Test Automation Around Golden Masters

Eric Jacobson's Software Testing Blog - Fri, 10/10/2014 - 15:08

“Golden Master”: it sounds like the bad guy in a James Bond movie. I first heard the term used by Doug Hoffman at STPCon Spring 2012 during his Exploratory Test Automation workshop. Lately, I’ve been writing automated golden master tests that check hundreds of things with very little test code.

I think golden-master-based testing is super powerful, especially when paired with automation.

A golden master is simply a known good version of something from your product-under-test.  It might be a:

  • web page
  • reference table
  • grid populated with values
  • report
  • or some other file output by your product

Production is an excellent place to find golden masters because if users are using it, it’s probably correct.  But golden masters can also be fabricated by a tester.

Let’s say your product outputs an invoice file.  Here’s a powerful regression test in three steps:

  1. Capture a known good invoice file from production (or a QA environment).  This file is your golden master. 
  2. Using the same parameters that were used to create the golden master, re-create the invoice file on the new code under test. 
  3. Programmatically compare the new invoice to the golden master using your favorite diff tool or code.
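
As a rough illustration, here is what those three steps might look like as a JUnit test; the golden master path and the InvoiceGenerator helper are hypothetical stand-ins for however your product actually produces invoices:

import static org.junit.Assert.assertEquals;

import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.List;
import org.junit.Test;

public class InvoiceGoldenMasterTest {
  // Hypothetical: a known-good invoice captured from production (step 1).
  private static final Path GOLDEN_MASTER = Paths.get("golden/invoice-12345.txt");

  @Test
  public void invoice_matchesGoldenMaster() throws Exception {
    List<String> expected = Files.readAllLines(GOLDEN_MASTER);

    // Step 2: re-create the invoice on the new code under test, using the
    // same parameters (order 12345 here) that produced the golden master.
    List<String> actual = InvoiceGenerator.generate(12345);

    // Step 3: compare. Checking the line count first means a failure
    // immediately says whether lines were added or dropped.
    assertEquals("line count", expected.size(), actual.size());
    for (int i = 0; i < expected.size(); i++) {
      assertEquals("line " + i, expected.get(i), actual.get(i));
    }
  }
}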

Tips and Ideas:

  • Make sure the risky business logic code you want to test is being exercised.
  • If you expand on this test, and fully automate it, account for differences you don’t care about (e.g., the invoice’s generated date in the footer, or new features you expect not to be in production yet). 
  • Make it a data-driven test: pass in a list of orders and customers, retrieve the production golden masters, and compare them to dynamically generated versions based on the new code (see the sketch after this list).
  • Use interesting dates and customers.  Iterate through thousands of scenarios using that same automation code.
  • Use examples from the past that may not be subject to changes after capturing the golden master.
  • Structure your tests’ assertions to help interpret failures. The first assertion on the invoice file might be: does the item line count match? The second: do each line’s values match?
  • Get creative.  Golden masters can be nearly anything.
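
The data-driven idea above might look something like the following with JUnit 4’s Parameterized runner; the order IDs and the GoldenMasters and InvoiceGenerator helpers are, again, hypothetical:

import static org.junit.Assert.assertEquals;

import java.util.Arrays;
import java.util.List;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;
import org.junit.runners.Parameterized.Parameters;

@RunWith(Parameterized.class)
public class DataDrivenInvoiceTest {
  @Parameters(name = "order {0}")
  public static Iterable<Object[]> orders() {
    // Hypothetical order IDs: pick interesting dates, customers, and edge cases.
    return Arrays.asList(new Object[][] {{12345}, {12346}, {99999}});
  }

  private final int orderId;

  public DataDrivenInvoiceTest(int orderId) {
    this.orderId = orderId;
  }

  @Test
  public void invoiceMatchesGoldenMaster() {
    List<String> expected = GoldenMasters.fetch(orderId);      // captured from production
    List<String> actual = InvoiceGenerator.generate(orderId);  // produced by the new code
    assertEquals(expected, actual);
  }
}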

Who else uses this approach?  I would love to hear your examples.

Categories: Software Testing

Behind the .NET 4.5 Async Scene: The performance impact of Asynchronous programming in C#

Perf Planet - Thu, 10/09/2014 - 07:02

Since .NET version 4.5, the C# language has two new keywords: async and await. The purpose of these new keywords is to support asynchronous programming. This post explains what these two keywords do, what they don’t do, and what the impact of these keywords are on application performance. A little warning: don’t get scared from […]

The post Behind the .NET 4.5 Async Scene: The performance impact of Asynchronous programming in C# appeared first on Compuware APM Blog.

Categories: Load & Perf Testing

Discovering Why Ecommerce Online App Customers Are Not Buying Products

Perf Planet - Thu, 10/09/2014 - 05:30

If a customer walks into a physical retail location and chooses not to buy something, the store manager is at most one or two questions away from valuable information about why. If the customer came in looking for a stereo, found the stereo they wanted, and was still prepared to walk out empty-handed, you can simply ask why they made that decision and what you could do to change their mind. When you’re talking about ecommerce online application customers who are not buying products, however, you don’t necessarily have that luxury. You will instead have to provide the necessary context for those decisions yourself through extensive research.

do u webview?

Perf Planet - Thu, 10/09/2014 - 04:52

A “webview” is a browser bundled inside of a mobile application producing what is called a hybrid app. Using a webview allows mobile apps to be built using Web technologies (HTML, JavaScript, CSS, etc.) but still package it as a native app and put it in the app store. In addition to allowing devs to work with familiar technologies, other advantages of building a hybrid app include greater code reuse across the app and the website, and easier support for multiple mobile platforms.

We all have webview traffic

Deciding whether to build a hybrid app versus a native app, or to have an app at all, is a lengthy debate and not the point of this post. Even if you don’t have a hybrid app, a significant amount of your mobile traffic comes from webviews. That’s because many sources of traffic are themselves hybrid apps. Two examples on iOS are the Facebook app and Google Chrome. “Whoa, whoa, whoa,” you say, “Facebook’s retreat from its hybrid app is well known.” That’s true. The Facebook timeline, for example, is no longer rendered using a webview.

However, the Facebook timeline contains links, such as a link to http://www.guggenheim.org/. When users click on links in the timeline, the Facebook app opens them in a webview.

Similarly, Chrome for iOS is implemented using a webview. Across all iOS traffic, 6% comes from Facebook’s webview and 5% comes from Google Chrome, according to ScientiaMobile. And there are other examples: Twitter’s iOS app uses a webview to render clicked links, and so on.

I encourage you to scan your server logs to gauge how much of your mobile traffic comes from webviews. There’s not much documentation on webview User-Agent strings. For iOS, the User-Agent is typically a base string with information appended by the app. Here’s the User-Agent string for Facebook’s webview:

Mozilla/5.0 (iPhone; CPU iPhone OS 7_1_1 like Mac OS X) AppleWebKit/537.51.2 (KHTML, like Gecko) Mobile/11D201 [FBAN/FBIOS;FBAV/12.1.0.24.20; FBBV/3214247; FBDV/iPhone6,1;FBMD/iPhone; FBSN/iPhone OS;FBSV/7.1.1; FBSS/2; FBCR/AT&T;FBID/phone;FBLC/en_US;FBOP/5]

Here’s the User-Agent string from Chrome for iOS:

Mozilla/5.0 (iPhone; CPU iPhone OS 7_1_2 like Mac OS X) AppleWebKit/537.51.2 (KHTML, like Gecko) CriOS/37.0.2062.60 Mobile/11D257 Safari/9537.53

That’s a lot of detail. The bottom line is: we’re all getting more webview traffic than we expect. Therefore, it’s important that we understand how webviews perform and take that into consideration when building our mobile websites.
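
As a starting point for that log scan, here is a hedged sketch that counts hits from a couple of webview User-Agent markers; the token list and the one-User-Agent-per-line input format are assumptions, not an authoritative detection scheme:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;

public class WebviewTrafficCounter {
  public static void main(String[] args) throws IOException {
    // Assumes one User-Agent string per line, pre-extracted from your access logs.
    List<String> userAgents = Files.readAllLines(Paths.get(args[0]));

    // Illustrative markers only: FBAN appears in Facebook's in-app webview,
    // and CriOS marks Chrome for iOS, which is built on a webview.
    String[] webviewTokens = {"FBAN", "CriOS"};

    for (String token : webviewTokens) {
      long hits = userAgents.stream().filter(ua -> ua.contains(token)).count();
      System.out.printf("%s: %d of %d requests (%.1f%%)%n",
          token, hits, userAgents.size(), 100.0 * hits / userAgents.size());
    }
  }
}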

Webview performance

Since a webview is just a bundled browser, we might think that webviews and their mobile browser counterparts have similar performance profiles. It turns out that this is not the case. This was discovered as an unintentional side effect of the article iPhone vs. Android – 45,000 Tests Prove Who is Faster. That article from 2011, in the days of iOS 4.3, noted that the iPhone browser was 52% slower than Android’s. The results were so dramatic they triggered the following response from Apple:

[Blaze's] testing is flawed. They didn’t actually test the Safari browser on the iPhone. Instead they only tested their own proprietary app, which uses an embedded Web viewer that doesn’t actually take advantage of Safari’s Web performance optimizations.

Apple’s response is accurate. The study by Blaze (now part of Akamai) was conducted using a webview, so it was not a true comparison of each platform’s mobile browser. But the more important revelation is that webviews were hobbled, resulting in worse performance than mobile Safari. Specifically, the webview on iOS 4.3 had neither Nitro’s JIT compiler for JavaScript, nor the application cache, nor asynchronous script loading.

This means it’s not enough to track the performance of mobile browsers alone; we also need to track the performance of webviews. This is especially true in light of the fact that more than 10% of iOS traffic comes from webviews. Luckily, the state of webviews is better than it was in 2011. Even better, the most recent webviews have significantly more features when it comes to performance. The following table compares the most recent iOS and Android webviews along a set of important performance features.

                          iOS 7      iOS 8      Android 4.3   Android 4.4
                          UIWebView  WKWebView  WebKit        Chromium
                                                Webview       Webview
  Nitro/V8                           ✔                        ✔
  html5test.com           410        440        278           434
  localStorage            ✔          ✔          ✔             ✔
  app cache               ✔          ✔          ✔             ✔
  indexedDB                          ✔                        ✔
  SPDY                               ✔                        ✔
  WebP                                                        ✔
  srcset                             ✔                        ?
  WebGL                              ✔                        ?
  requestAnimationFrame   ✔          ✔                        ✔
  Nav Timing                         ✔          ✔             ✔
  Resource Timing                                             ✔

As shown in this table, the newest webviews have dramatically better performance. The most important improvement is JIT compilation for JavaScript. While localStorage and app cache now have support across all webviews, the newer webviews add support for indexedDB. Support for SPDY in the newer webviews helps mitigate the impact of slow mobile networks. WebP, image srcset, and WebGL address the bloat of mobile images, but support for these features is mixed. (I wasn’t able to confirm the status of srcset and WebGL in Android 4.4’s webview. Please add comments and I’ll update the table.) The requestAnimationFrame API gives smoother animations. Finally, adoption of the Nav Timing and Resource Timing APIs gives website owners the ability to track performance for websites served inside webviews.

Not out of the woods yet

While the newest webviews have a better performance profile, we’re still on the hook for supporting older webviews. Hybrid apps will continue to use the older webviews until they’re rebuilt and updated. The Android webview is pinned at Chromium 30 and requires an OS upgrade to get feature updates. Similar to the issues with legacy browsers, traffic from legacy webviews will continue for at least a year. Given the significant amount of traffic from webviews and the dramatic differences in webview performance, it’s important that developers measure performance on old and new webviews, and apply mobile performance best practices to make their website as fast as possible even on old webviews.

(Many thanks to Maximiliano Firtman, Tim Kadlec, Brian LeRoux, and Guy Podjarny for providing information for this post.)

Customer Success: How Distinctive Apparel Was Able to Improve Online Customer Experience

Perf Planet - Wed, 10/08/2014 - 14:12

According to Forrester, U.S. consumer spending on e-retail will increase 62% by 2016. That’s good news for retailers, but it also signals growing competition to improve the online customer experience in an already crowded space.

The Myths of Tech Interviews

Alan Page - Wed, 10/08/2014 - 11:16

I recently ran across this article on nytimes.com from over a year ago. Here’s the punch line (or at least one of them):

“We looked at tens of thousands of interviews, and everyone who had done the interviews and what they scored the candidate, and how that person ultimately performed in their job. We found zero relationship. It’s a complete random mess…”

I expect (hope?) that the results aren’t at all surprising. After almost 20 years at Microsoft and hundreds of interviews, it’s exactly what I expected. Interviews, at best, are a guess at finding a good employee, but often serve the ego of the interviewer more than the needs of the company. The article makes note of that as well.

“On the hiring side, we found that brainteasers are a complete waste of time. How many golf balls can you fit into an airplane? How many gas stations in Manhattan? A complete waste of time. They don’t predict anything. They serve primarily to make the interviewer feel smart.”

I wish more interviewers paid attention to statistics or articles (or their peers) and stopped asking horrible interview questions, and really, really tried to see if they could come up with better approaches.

Why Bother?

So – why do we do interviews if they don’t work? Well, they do work – hopefully at least as a method of making sure you don’t hire a completely incapable person. While it’s hard to predict future performance based on an interview, I think they may be more effective at making sure you don’t hire a complete loser – but even this approach has flaws, as I frequently see managers pass on promising candidates for (perhaps) the wrong reasons out of fear of making a “bad hire”.

I mostly do “as appropriate” interviews at Microsoft (this is the person on the interview loop who makes the ultimate hire / no-hire decision on candidates based on previous interviews and their own questions). For college candidates or industry hires, one of the key questions I’m looking to answer is: “Is this person worth investing 12-18 months of salary and benefits to see if they can cut it?” A hire decision is really nothing more than an agreement for a long audition. If I say yes, I’m making a (big) bet that the candidate will figure out how to be valuable within a year or so, and I assume they will be “managed out” if not. I don’t know the stats on my hire decisions, but while my heart says I’m great, my head knows that I may be just throwing darts.

What Makes a Good Tech Employee?

If I had a secret formula for what made people successful in tech jobs, I’d share. But here’s what I look for anyway:

  1. Does the candidate like to learn? To me, knowing how to figure out how to do something is way more interesting than knowing how to do it in the first place. In fact, the skills you know today will probably be obsolete in 3-5 years anyway, so you better be able to give me examples about how you love to learn new things.
  2. Plays well with others – (good) software engineering is a collaborative process. I have no desire to hire people who want to sit in their office with the door closed all day while their worried team mates pass flat food under their door. Give me examples of solving problems with others.
  3. Is the candidate smart? By “smart”, I don’t mean you can solve puzzles or write some esoteric algorithm at my white board. I want to know if you can carry on an intelligent conversation and add value. I want to know your opinions and see how you back them up. Do you regurgitate crap from textbooks and twitter, or do you actually form your own ideas and thoughts?
  4. If possible, I’ll work with them on a real problem I’m facing and evaluate a lot of the above simultaneously. It’s a good method that I probably don’t use often enough (but will make a mental note now to do this more).

The above isn’t a perfect list (and leaves off the “can they do the job?” question), but I think someone who can do the above can at least stay employed.

Categories: Software Testing

How to Test Your Site in October for Holiday Shopping Traffic

Perf Planet - Wed, 10/08/2014 - 06:00

October is here, and most eCommerce sites have completed the code changes in preparation for the holiday shopping season, so now is the time to test their websites. Here is what some great companies are doing to ensure that their apps and sites are ready to handle the increased traffic of the upcoming eCommerce season.

Planning

One of the most important aspects of testing is planning to ensure that all bases are covered and ample time is allotted for the most critical components.

Knowing what to test is the key here, and that can be determined by identifying what has changed, where, and how. RUM data and web analytics are helpful for figuring out which pages users frequent most, which paths they generally follow, and what devices and networks they typically use. It is important to make sure that the infrastructure and code behind these revenue-driving processes are both reliable and fast. Also remember to check with the marketing team to see what other pages may see increased traffic (such as promotional and landing pages), since these are often the customer’s first impression.

Once you’ve determined what to test, it is important to follow a strict methodology. This is where you put the RUM data and planning to work.

Testing Functionality

We won’t go too in depth here, but new pages/features require the following:

  • Unit Testing: The developer tests the individual unit of code.
  • Integration Testing: The code is tested with the components it is integrated with.
  • System Testing: The entire application is tested.
  • Regression Testing: This ensures that the changes have not introduced any new bugs.
  • Acceptance Testing: This validates that the requested or required changes have actually been implemented, and is done by the requesting product team as well as QA.

A similar process can be followed for infrastructure changes. Depending on the nature of the change, it is worthwhile to measure performance from within your infrastructure using APM, and externally with Synthetic Monitoring, both before and after. This will help ensure that any degradation can be identified and fixed before it becomes amplified at scale.

Non-Functional Testing

Congrats, you’ve made it past functional testing without anything exploding! Now you can move on to non-functional testing, which includes analyzing performance (note the optimizations mentioned in our recent blog post) and security. Synthetic monitors can help in this process to detect any issues, and will also be useful for performance comparisons before and after.

Review recent RUM data, log files in systems like Splunk, and last year’s traffic numbers to see where and when you experienced peak activity; look at critical aspects of the site experience, such as login and checkout flows. This will pinpoint which areas to focus on during load testing.

Assuming your servers can handle that level of traffic, you can then move on to stress testing. By pushing past the expected load to the upper limit of the system, stress testing can help identify links in the chain that didn’t seem important but can still take down the whole system when they fail. Examples might include a database going offline or a load balancer failing. Stress testing will also reveal how well the system recovers after failure.

Then we come to security testing. It would be terribly sad to go through this whole process of testing and ensuring that you are ready for an increase in traffic, only to have it all thwarted by a security flaw. This is one of the most important pieces of the puzzle: your infrastructure performance could be severely affected, and, most importantly, your customers’ information could be at risk.

Finally, you have to prepare for disaster. No matter how ready you think your systems are, failure can come from unexpected places. When that happens, you need to know that your failover or backup systems will work as expected. For example, you can test your failover by taking nodes or services offline to simulate power or connectivity issues.

Preparing for disaster also includes checking on your support channels and customer service personnel to make sure they’re ready, as well as any relevant vendors to make sure that they’re readily available throughout the holiday season and aren’t planning any major changes.

Good luck!

The post How to Test Your Site in October for Holiday Shopping Traffic appeared first on Catchpoint's Blog.

Understanding Device Targeting & Its Effect on Online App User Engagement

Perf Planet - Tue, 10/07/2014 - 08:00

Device targeting is a method of content delivery that tailors content to the device from which the Internet is being accessed. What appears on a laptop, mobile phone, or tablet for the same user may be quite different according to which device he or she is using or which apps are being accessed. The goal is to improve online app user engagement, and all signs so far indicate that device targeting is a highly effective means of doing so.

Partners Sharing Best Practices at PERFORM 2014 This Week

Perf Planet - Tue, 10/07/2014 - 07:02

“The rapidly changing IT landscape, especially related to how customers interact with businesses, means companies must now tackle a new class of problems to be competitive.”  — Jonathan Morgan, Director of User Experience for Rosetta Tackling those new problems is exactly what the annual PERFORM Global User Conference is all about.  This week the event begins in Orlando […]

The post Partners Sharing Best Practices at PERFORM 2014 This Week appeared first on Compuware APM Blog.
