Feed aggregator

Win a free Velocity NYC 2014 conference pass!

Perf Planet - 0 sec ago
A simple raffle and perhaps your last chance to win a 2-day pass for the great Web Performance and Operations conference (NYC, Oct 15-17).

QA Music: Dangerous

QA Hates You - 2 hours 33 min ago

Not Roxette. Shaman’s Harvest:

Categories: Software Testing

The Value of Checklists

Randy Rice's Software Testing & Quality - Thu, 08/28/2014 - 20:15
For many years, I have included checklists on my Top 10 list of test tools (I also include "your brain"). Some people think this is ridiculous and inappropriate, but I have my reasons. I'm also not the only one who values checklists.

Atul Gawande makes a compelling case for checklists, especially in critical life-or-death situations, in his book "The Checklist Manifesto." In reviewing the book on Amazon.com, Malcolm Gladwell writes, "Gawande begins by making a distinction between errors of ignorance (mistakes we make because we don't know enough), and errors of ineptitude (mistakes we make because we don't make proper use of what we know). Failure in the modern world, he writes, is really about the second of these errors, and he walks us through a series of examples from medicine showing how the routine tasks of surgeons have now become so incredibly complicated that mistakes of one kind or another are virtually inevitable: it's just too easy for an otherwise competent doctor to miss a step, or forget to ask a key question or, in the stress and pressure of the moment, to fail to plan properly for every eventuality."

Gladwell makes another good point: "Experts need checklists--literally--written guides that walk them through the key steps in any complex procedure. In the last section of the book, Gawande shows how his research team has taken this idea, developed a safe surgery checklist, and applied it around the world, with staggering success."

In testing, we face similar challenges across all types of applications - from basic web sites to safety-critical systems. It is very easy to miss a critical detail in many of the things we do - from setting up a test environment to performing and evaluating a test.

I have a tried and true set of checklists that also help me think of good tests to document and perform. It is important to note that a checklist leads to tests, but it is not the same as the test cases or the tests they represent.

I have been in some organizations where just a simple set of checklists would transform their test effectiveness from zero to over 80%! I have even offered them my checklists, but there has to be the motivation (and humility) to use them correctly.

Humility? Yes, that's right. We miss things because we get too sure of ourselves and think we don't need something as lowly, simple and repetitive as a checklist.

Checklists cost little to produce, but they yield high value. By preventing just one production software defect, you can save thousands of dollars in rework.

And...your checklists can grow as you learn new things to include. (This is especially true for my travel checklist!) So they are a great vehicle for process improvement.

Checklists can be great drivers for reviews as well. However, many people also skip the reviews. This is also unfortunate because reviews have been proven to be more effective than dynamic testing. Even lightweight peer reviews are very effective as pointed out in the e-book from Smartbear, Best Kept Secrets of Peer Code Reviews.

Now, there is a downside to checklists. That is, the tendency just to "check the box" without actually performing the action. So, from the QA perspective, I always spot check to get some sense of whether or not this is happening.

Just as my way of saying "thanks" for reading this, here is a link to one of my most popular checklists for common error conditions in software.

I would love to hear your comments about your experiences with checklists.
Categories: Software Testing

What if it is the Network? Dive Deep and Back in Time to Find the Root Cause

Perf Planet - Thu, 08/28/2014 - 07:00

Modern Application Performance Management (APM) solutions can be tremendously helpful in delivering end-to-end visibility into the application delivery chain: across all tiers and network sections, all the way to the end user. In previous blog posts we showed how to narrow down to various root causes of the problems that the end-users might experience. Those issues ranged […]

The post What if it is the Network? Dive Deep and Back in Time to Find the Root Cause appeared first on Compuware APM Blog.


Categories: Load & Perf Testing

Initcwnd settings of major CDN providers

Perf Planet - Wed, 08/27/2014 - 13:47

In November 2011, we first published the initcwnd values of CDNs, following our blog post Tuning initcwnd for optimum performance that showed how tuning the initial congestion window parameter (initcwnd) on the server can have a significant improvement in TCP performance.
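For readers who want to inspect or tune this on their own Linux servers, the setting is a per-route attribute in iproute2; a minimal sketch (the gateway and interface names are placeholders):

# show the current default route; initcwnd is only printed when it has been set explicitly
ip route show default

# raise the initial congestion window to 10 segments on the default route
ip route change default via 192.168.1.1 dev eth0 initcwnd 10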
We have been wanting to do an update for a long time and finally found the time to do it. Today, August 27 2014, we publish our new data, based on tests we ran yesterday. Some CDNs are no longer in the market (Cotendo, Voxel) and new CDNs emerged, so our list of CDNs looks a bit different from 2+ years ago.
First we show you the data, then some conclusions and finally we describe our test methodology.

Update (Nov 16, 2011): Fastly now has initcwnd set to 10 on all edge servers.
Update (May 12, 2012): Amazon Cloudfront set initcwnd to 10 since Feb 2012, we verified this today.

CDN initcwnd values

Below is a chart showing the value of the initcwnd setting on the edge servers of several CDNs.

Note: Google and Microsoft here refer to the services they provide for loading JS libraries from their global server networks.

Conclusions

Most CDNs have an initcwnd of 10. This is the default value in the Linux kernel and apparently many CDNs have found it to work well. CacheFly, Highwinds, MaxCDN, ChinaCache, Akamai, Limelight and Level3 have a higher initcwnd value. For some reason(s) they changed the setting, and we are confident they did this for performance reasons. Internap has a slightly lower value of 9. And then there is Microsoft's libraries CDN (ajax.aspnetcdn.com): it sends two packets in the first round trip. That is ridiculously low and bad for performance.
CacheFly's behavior is remarkable. The edge server sends out a very large burst of packets - 70. Our test machine advertised a receive window of 262144, so CacheFly clearly takes that into account. The test file was 100 KB and the 70 packets add up to that 100 KB.
Note: the value for initcwnd is (obviously) not the only parameter that determines CDN performance. Don't think that CacheFly is now the fastest CDN globally, or that Internap is the slowest. The initcwnd value is just *one* of the performance parameters, and we believe it is good to know its value.

Test Methodology

All tests were conducted on a Macbook Air in The Netherlands with a high initrwnd to make sure we have large window sizes at the receiving end.

For each CDN, we made requests to some far-away POP (US-West or APAC) to ensure a high RTT, which makes it easier to read the tcpdumps. For each test, we made a few hits to prime the cache at the edge servers. We then studied the tcpdumps and ran the entire process several times as a sanity check.
Some CDNs (Highwinds, CacheFly, Cloudflare and Bitgravity) serve HTTP over Anycast globally and use a single IP everywhere, which means we can't target the IP address of a far-away POP in our tests. So, we added an extra 500 ms of latency using Dummynet. No packet loss was added.
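A typical Dummynet setup for that looks something like the following (a sketch of the usual ipfw commands on OS X, not necessarily the exact ones we used):

# match all traffic and push it through pipe 1
ipfw add pipe 1 ip from any to any
# configure pipe 1 to add 500 ms of delay (and no packet loss)
ipfw pipe 1 config delay 500ms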

We used this python script to run tests and capture results using tcpdump. The dumps were manually analyzed in Wireshark as described here.

I’m Way Past Inbox 0

QA Hates You - Wed, 08/27/2014 - 10:01

Today on Gmail, I got my inbox down to inbox -50:

How do you do that?

Well, in my case, I deleted a large number of emails from an unused email box and then, when it hung up, I deleted them again.

How do you test for that?

Well, if you’re me, you use an automated testing tool like Selenium or WATIR not only for interface checking, but also to create large record sets to then use in manual testing. For example, you set up a script that adds 10,000 comments and then manually test to see what happens when you go to an item with a large number of comments. You can inspect how it looks (is the number too big for the space allocated to it on the page?) but also what happens when you add another comment, when you delete the item, when you recommend the item to a friend.
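A rough sketch of such a record-seeding script, here in Python with Selenium (the URL, field name, and selector are hypothetical placeholders):

from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    driver.get("http://test.example.com/items/42")
    for i in range(10000):
        # fill in and submit the (hypothetical) comment form over and over
        driver.find_element(By.NAME, "comment").send_keys("Load comment #%d" % i)
        driver.find_element(By.CSS_SELECTOR, "button.submit-comment").click()
finally:
    driver.quit()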

Categories: Software Testing

Too Many Performance Defects? 4 Ways to Cut Down the Noise

Perf Planet - Wed, 08/27/2014 - 09:22


Time is always a challenge for the performance-minded web developer. You’d love to spend all day tuning your site, but competing priorities are an unavoidable fact of life.

When we announced our new Zoompf Alerts beta last week, we mentioned that one of the primary goals of Zoompf Alerts was to find only the performance problems you care the most about. (And if you’re not on the free beta yet, you’re missing out…)

Managing the signal to noise ratio was a top priority for Zoompf Alerts from the start. When you get an alert, it should be meaningful – information of real value that you should act on now. The rest…can wait.

If you consider yourself overwhelmed with performance defects, this article is for you. In the following sections I’ll go over 4 tips to reduce the noise to a manageable level.

Tip 1: Increase Your Alert Threshold

Alert Thresholds control how severe a performance defect must be before you get alerted. The higher the threshold, the worse the defect must be. For example, if your threshold is set to High, the Content Not Compressed performance rule will only flag alerts on “Any item at least 200 KB in size which can be optimized by at least 50%, or any item with a savings of at least 200 KB.” Conversely, if you set your threshold to Low, any item with at least 15 KB of savings will alert. To see a full list of our threshold values, try changing the threshold slider in one of our Demo Accounts.

High thresholds are designed to alert you only when the “really bad” stuff happens – ideal for developers who have only limited time for performance tuning. As you lower your thresholds, more detail will get exposed – ideal when you’re trying to make your site as fast as possible.

Adjust the thresholds to a level that makes the most sense for your circumstances. And don’t worry, you can always change this setting later.

To access your Alert Thresholds, from your dashboard click ‘Settings’ and then the ‘Alert Thresholds’ tab.

Tip 2: Ignore Defects You Can’t Fix

Sometimes there are problems with content you just can’t fix. For example, maybe that CSS include is hosted by an external provider that will not turn on content compression. Or maybe those images are supplied by a different department and you have no say in what they publish.

While we always advise you fix as many defects as possible (your users will be the final judge), we also understand your time is limited and you need to prioritize.

Towards this end, Ignore Rules are here to help. Ignore Rules allow you to hide a specific piece of content, or even an entire domain, from all future alerts. Setting up an ignore rule is very easy: in your performance snapshot view, just click the “Ignore” icon on the far right for any result you wish to ignore.

For example:

Your alert emails also contain a similar link. Once you click ignore, you can then choose from several options controlling how much content you wish to ignore, up to and including all defects from a specific domain:

Finally, if you’re worried about “out of site, out of mind”, don’t. You get a subtle reminder about ignored defects in your performance snapshots, and always have the ability to restore ignored defects to your alerts later. For example:

Tip 3: Disable Unused Performance Rules

Beyond ignoring specific content, you can also ignore specific “types” of performance defects based on the performance rule that was violated. For example, Content Not Compressed, Content Not Minified, etc.

Say for example you have a hosting provider that gives you no control over server configuration settings, but you can still control your content and images uploaded to the host. In this scenario, you may not want to be alerted about compression or caching defects since you are powerless to fix them.

Disabling performance rules is the tool for the job here. To access, go to your Settings page and click the Alert Thresholds tab. From there, scroll down to the Performance Rules section. You’ll see a list of all the performance rules Zoompf Alerts will test your site against, with a short description and an on/off slider for each. For example,

Simply turn off the rules for which you do not want to receive alerts. Again, don’t worry as you can always change this setting again later.

Tip 4: Hide Third Party Content (3PC)

Okay this one is easy, we already do it for you. If you are including ad trackers, analytics beacons and other third party controls for which you have no ability to fix, you do not want to get alerted when their performance is sub-optimal (and it usually is).

At Zoompf we maintain an extensive database of Third Party Content domains, and even have an open source project to maintain this at ThirdPartyContent.Org.

We automatically filter out any performance defects identified for 3PC components from your results, so you don’t need to worry about them. Still, there is always new 3PC arriving so it’s a constant arms race to keep up with the latest. If you have 3PC in your results, hit the Contact link on your dashboard and tell us about it! We’ll strive to keep that database up to date as new 3PC arrives.

And if a defect within your site depends on external 3PC, we’ll also flag that dependency with a 3PC badge in your results so you can clearly see the dependency. For example:

Now of course we’re not saying you should always ignore 3PC. Your users still have to wait for it to load, so you should at least be aware of its impact. To view how much 3PC you have on your site, we highly recommend running our Free Report and you’ll get a nice breakdown of just how big an impact 3PC has on your page load time.

In Closing

With Zoompf Alerts you have the tools to balance the information density to the level that fits your working style. If you are trying to squeeze every ounce of performance out of your site, you should choose a low alert threshold and not ignore any content or performance rules. Conversely, if you have very little time to focus on performance, setting a high alert threshold with judicious use of ignore rules can help you manage your time while still providing a valuable performance safety net.

We hope you find this level of flexibility useful. And of course, if there’s more we can do always feel free to contact us!

If you’re not on the Zoompf Alerts beta, learn more and sign up for free. And if you want to run a deeper one-time performance analysis of your site, check out our free report.

The post Too Many Performance Defects? 4 Ways to Cut Down the Noise appeared first on Zoompf Web Performance.

Chrome - Firefox WebRTC Interop Test - Pt 1

Google Testing Blog - Tue, 08/26/2014 - 15:09
by Patrik Höglund

WebRTC enables real time peer-to-peer video and voice transfer in the browser, making it possible to build, among other things, a working video chat with a small amount of Python and JavaScript. As a web standard, it has several unusual properties which makes it hard to test. A regular web standard generally accepts HTML text and yields a bitmap as output (what you see in the browser). For WebRTC, we have real-time RTP media streams on one side being sent to another WebRTC-enabled endpoint. These RTP packets have been jumping across NAT, through firewalls and perhaps through TURN servers to deliver hopefully stutter-free and low latency media.

WebRTC is probably the only web standard in which we need to test direct communication between Chrome and other browsers. Remember, WebRTC builds on peer-to-peer technology, which means we talk directly between browsers rather than through a server. Chrome, Firefox and Opera have announced support for WebRTC so far. To test interoperability, we set out to build an automated test to ensure that Chrome and Firefox can get a call up. This article describes how we implemented such a test and the tradeoffs we made along the way.

Calling in WebRTC

Setting up a WebRTC call requires passing SDP blobs over a signaling connection. These blobs contain information on the capabilities of the endpoint, such as what media formats it supports and what preferences it has (for instance, perhaps the endpoint has VP8 decoding hardware, which means the endpoint will handle VP8 more efficiently than, say, H.264). By sending these blobs the endpoints can agree on what media format they will be sending between themselves and how to traverse the network between them. Once that is done, the browsers will talk directly to each other, and nothing gets sent over the signaling connection.

Figure 1. Signaling and media connections.
How these blobs are sent is up to the application. Usually the browsers connect to some server which mediates the connection between the browsers, for instance by using a contact list or a room number. The AppRTC reference application uses room numbers to pair up browsers and sends the SDP blobs from the browsers through the AppRTC server.
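To make the SDP exchange concrete, here is a minimal sketch of the offer side in JavaScript; sendToPeer stands in for whatever signaling channel the application uses (it is not part of the WebRTC API), and the answer side mirrors these steps with createAnswer:

var pc = new RTCPeerConnection();

// ICE candidates also travel over the signaling channel as they are gathered.
pc.onicecandidate = function(event) {
  if (event.candidate) {
    sendToPeer({ candidate: event.candidate });
  }
};

function startCall(localStream) {
  localStream.getTracks().forEach(function(track) {
    pc.addTrack(track, localStream);
  });
  pc.createOffer()
    .then(function(offer) {
      return pc.setLocalDescription(offer);    // the SDP blob describing our capabilities
    })
    .then(function() {
      sendToPeer({ sdp: pc.localDescription }); // the callee replies with its own SDP answer
    });
}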

Test Design

Instead of designing a new signaling solution from scratch, we chose to use the AppRTC application we already had. This has the additional benefit of testing the AppRTC code, which we are also maintaining. We could also have used the small peerconnection_server binary and some JavaScript, which would give us additional flexibility in what to test. We chose to go with AppRTC since it effectively implements the signaling for us, leading to much less test code.

We assumed we would be able to get hold of the latest nightly Firefox and be able to launch that with a given URL. For the Chrome side, we assumed we would be running in a browser test, i.e. on a complete Chrome with some test scaffolding around it. For the first sketch of the test, we imagined just connecting the browsers to the live apprtc.appspot.com with some random room number. If the call got established, we would be able to look at the remote video feed on the Chrome side and verify that video was playing (for instance using the video+canvas grab trick). Furthermore, we could verify that audio was playing, for instance by using WebRTC getStats to measure the audio track energy level.

Figure 2. Basic test design.
However, since we like tests to be hermetic, this isn’t a good design. I can see several problems. For example, the network between us and AppRTC could be unreliable. Also, what if someone had already occupied myroomid? In either case the test would fail and we would be none the wiser. So to make this work, we would have to find some way to bring up the AppRTC instance on localhost to make our test hermetic.

Bringing up AppRTC on localhost

AppRTC is a Google App Engine application. As this hello world example demonstrates, one can test applications locally with
google_appengine/dev_appserver.py apprtc_code/
So why not just call this from our test? It turns out we need to solve some complicated problems first, like how to ensure the AppEngine SDK and the AppRTC code are actually available on the executing machine, but we’ll get to that later. Let’s assume for now that stuff is just available. We can now write the browser test code to launch the local instance:
bool LaunchApprtcInstanceOnLocalhost() {
  // ... Figure out locations of SDK and apprtc code ...
  CommandLine command_line(CommandLine::NO_PROGRAM);
  EXPECT_TRUE(GetPythonCommand(&command_line));

  command_line.AppendArgPath(appengine_dev_appserver);
  command_line.AppendArgPath(apprtc_dir);
  command_line.AppendArg("--port=9999");
  command_line.AppendArg("--admin_port=9998");
  command_line.AppendArg("--skip_sdk_update_check");

  VLOG(1) << "Running " << command_line.GetCommandLineString();
  return base::LaunchProcess(command_line, base::LaunchOptions(),
                             &dev_appserver_);
}

That’s pretty straightforward [1].

Figuring out Whether the Local Server is Up

Then we ran into a very typical test problem. So we have the code to get the server up, and launching the two browsers to connect to http://localhost:9999?r=some_room is easy. But how do we know when to connect? When I first ran the test, it would work sometimes and sometimes not, depending on whether the server had time to get up.

It’s tempting in these situations to just add a sleep to give the server time to get up. Don’t do that. That will result in a test that is flaky and/or slow. In these situations we need to identify what we’re really waiting for. We could probably monitor the stdout of the dev_appserver.py and look for some message that says “Server is up!” or equivalent. However, we’re really waiting for the server to be able to serve web pages, and since we have two browsers that are really good at connecting to servers, why not use them? Consider this code.
bool LocalApprtcInstanceIsUp() {
  // Load the admin page and see if we manage to load it right.
  ui_test_utils::NavigateToURL(browser(), GURL("localhost:9998"));
  content::WebContents* tab_contents =
      browser()->tab_strip_model()->GetActiveWebContents();
  std::string javascript =
      "window.domAutomationController.send(document.title)";
  std::string result;
  if (!content::ExecuteScriptAndExtractString(tab_contents,
                                              javascript,
                                              &result))
    return false;

  return result == kTitlePageOfAppEngineAdminPage;
}

Here we ask Chrome to load the AppEngine admin page for the local server (we set the admin port to 9998 earlier, remember?) and ask it what its title is. If that title is “Instances”, the admin page has been displayed, and the server must be up. If the server isn’t up, Chrome will fail to load the page and the title will be something like “localhost:9999 is not available”.

Then, we can just do this from the test:
while (!LocalApprtcInstanceIsUp())
  VLOG(1) << "Waiting for AppRTC to come up...";

If the server never comes up, for whatever reason, the test will just time out in that loop. If it comes up, we can safely proceed with the rest of the test.

Launching the Browsers

A browser window launches itself as a part of every Chromium browser test. It’s also easy for the test to control the command line switches the browser will run under.

We have less control over the Firefox browser since it is the “foreign” browser in this test, but we can still pass command-line options to it when we invoke the Firefox process. To make this easier, Mozilla provides a Python library called mozrunner. Using that we can set up a launcher python script we can invoke from the test:
from mozprofile import profile
from mozrunner import runner

WEBRTC_PREFERENCES = {
    'media.navigator.permission.disabled': True,
}

def main():
    # Set up flags, handle SIGTERM, etc.
    # ...
    firefox_profile = profile.FirefoxProfile(preferences=WEBRTC_PREFERENCES)
    firefox_runner = runner.FirefoxRunner(
        profile=firefox_profile, binary=options.binary,
        cmdargs=[options.webpage])

    firefox_runner.start()
Notice that we need to pass special preferences to make Firefox accept the getUserMedia prompt. Otherwise, the test would get stuck on the prompt and we would be unable to set up a call. Alternatively, we could employ some kind of clickbot to click “Allow” on the prompt when it pops up, but that is way harder to set up.

Without going into too much detail, the code for launching the browsers becomes
GURL room_url =
    GURL(base::StringPrintf("http://localhost:9999?r=room_%d",
                            base::RandInt(0, 65536)));
content::WebContents* chrome_tab =
    OpenPageAndAcceptUserMedia(room_url);
ASSERT_TRUE(LaunchFirefoxWithUrl(room_url));
Where LaunchFirefoxWithUrl essentially runs this:
run_firefox_webrtc.py --binary /path/to/firefox --webpage http://localhost:9999?r=my_room
Now we can launch the two browsers. Next time we will look at how we actually verify that the call worked, and how we actually download all resources needed by the test in a maintainable and automated manner. Stay tuned!

[1] The explicit ports are there because the default ports collided on the bots we were running on, and --skip_sdk_update_check was added because the SDK would stop and ask us something if there was an update.

Categories: Software Testing

Passing Around the DevOps Hat – Help from The Terminator

Perf Planet - Tue, 08/26/2014 - 09:13


Last year I posted an article on the growing DevOps movement, the focus on bridging the gap between development and technical operations teams to improve stability, performance and scalability of your website. It certainly takes some time to change the course of a big ship, but in our discussions at Zoompf we’ve been pleasantly surprised by the increasing number of organizations that are starting to buy-in to the benefits of DevOps. Like the adoption of Agile 10 years ago, these movements take time.

Still, it’s rare for us to meet engineers who are 100% focused on DevOps, and rarer still on performance tuning alone. More often than not, it’s a role that is shared across other responsibilities. Performance optimization can often be what you get to in your “spare time”, whatever that means.

If you find yourself passing around the hat like this, you’re not alone. We feel your pain…come lay down on the couch and share your troubles.

Still, we’re here to help. Recently we announced a fantastic new tool that can help you reclaim some of those lost hours of the day: our new Zoompf Performance Alerts beta.

Okay, okay, let’s stop right here – I’m not trying to sell you anything…well not yet at least. It’s a free beta – free as in no credit card, no paypal, nothing. We won’t even try to get your immortal soul.

Here’s the deal – it’s a new product and we want feedback – lots and lots of feedback. Will we charge someday? Maybe, probably, we’ll see – we do have servers to feed after all, but by then we hope these alerts are so fracking useful for you that you won’t care. And if they’re not, well then you opt out and it didn’t cost you anything, right?

Anyway, back to the alerts. Why would you care? Quite simply, our new alerts are the culmination of 4 years of lessons learned talking with folks like yourself in the performance trenches. Again and again we’ve heard “I don’t have a lot of time to work on performance, just tell me the really bad stuff and I’ll fix that.” Zoompf Alerts was designed to solve that very problem. While our commercial product helps you analyze a “foot wide and mile deep” (checking over 400+ performance rules), Zoompf Alerts is more focused on notifying you of the 3-5 “really bad” problems you need to drop everything to fix right now.

How does this work? Simply put, Zoompf Alerts runs the same performance scanner we’ve spent 4 years building and improving, but filters out all but the most egregious problems. And then it runs again…and again…and again. Several times a day in fact. And if anything changes, you get an email.

What kinds of changes? Well, the really bad stuff, for example:

  • Someone patched your web server and content compression suddenly turned off. Ouch.
  • Or maybe the marketing team is running a new campaign and deployed a gorgeous, striking, attention grabbing hero image – that happened to also be a 5 MB bitmap.
  • Or a new developer added a new javascript library, that happened to be a 2 MB un-minified mess

Zoompf Alerts is that faithful watchdog, keeping an eye on your code, images, and server configuration 24 hours a day, 7 days a week. Like the Terminator, it feels no pain and it will never, ever stop. (Speaking of course of the “good” Ah-nold in Terminators 2 and 3, and not the decidedly more evil version in Terminator 1…).

So give it a try, it’s free and it might just save your bacon when that new intern decides to convert all your images into bitmaps. And besides, you wouldn’t want to make Ah-nold angry, would you?

Learn more about Zoompf Alerts, and if you like what you see Join the Beta.

Join Alerts Beta

The post Passing Around the DevOps Hat – Help from The Terminator appeared first on Zoompf Web Performance.

Data Driven Performance Problems are Not Always Related to Hibernate

Perf Planet - Tue, 08/26/2014 - 07:00

Data-driven performance problems are not new. But most of the time it’s related to too much data queried from the database. O/R mappers like Hibernate have been a very good source for problem pattern blog posts in the past. Last week I got to analyze a new type of data-driven performance problem. It was on […]

The post Data Driven Performance Problems are Not Always Related to Hibernate appeared first on Compuware APM Blog.


Categories: Load & Perf Testing

SSL Performance Diary #2: HTTP Strict Transport Security

Perf Planet - Fri, 08/22/2014 - 11:06


This post is part of a series where I discuss the steps I am taking to implement SSL on a website while simultaneously improving its performance. I previously discussed optimized SSL certificates. Today I’ll discuss a super easy SSL performance optimization that everyone should be doing: HTTP Strict Transport Security.

When we were designing the web interface to our new Zoompf Alerts beta website, SSL was a requirement from day 1. Not only did I want to protect our customers’ information, but SSL also provides a gateway to using SPDY, which can improve our site’s performance.

I wanted to make it as easy as possible for people to access the Zoompf Alerts beta website. Since I am using SSL, I wanted a user to be able to type app.zoompf.com into their browser’s address bar and get properly redirected to https://app.zoompf.com. This meant configuring a website at port 80 whose sole purpose is to send HTTP redirects to the SSL version of the site on 443. But how does adding this HTTP to HTTPS redirect impact page load times? I ran a WebPageTest assessment to find out:

We see that the browser spends 179 ms, nearly 2 tenths of a second, connecting and waiting for the redirect to the SSL site. (We have to keep the time the DNS lookup took, since that would have to happen regardless of whether we were connecting to HTTP or HTTPS.) While 179 ms doesn’t sound like a lot, consider that the load time is only 1.391 seconds. This means our HTTP to HTTPS redirect accounts for 13% of our page load time!

Is there something we can do to eliminate this, while still making it easy for our users to reach our website? The answer is yes, by using HTTP Strict Transport Security.

Enter HSTS

HTTP Strict Transport Security (HSTS) is a web security policy mechanism that informs complying web browsers they are to interact with a specific website using only secure HTTPS connections. You can think of HSTS as kind of a caching header, but for the host/port/scheme used to access a website. You are basically telling the web browser “when you try to visit this domain within the next X amount of time, you must do so over SSL.” This means that the browser won’t make an HTTP request to “http://app.zoompf.com”, only to get redirected to “https://app.zoompf.com.” The browser will make a direct connection to https://app.zoompf.com. This has 2 performance benefits. First, you avoid needing a redirect from the HTTP to the HTTPS site. Second, you are ensuring that as much traffic as possible is using HTTPS, which means you can use SPDY.
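As a rough sketch of the mechanics (Node.js here purely for illustration; app.example.com is a placeholder, and our actual site is ASP.NET as described later in this post):

var http = require('http');

// Port-80 listener whose only job is to bounce visitors to the HTTPS site.
http.createServer(function(req, res) {
  res.writeHead(301, { 'Location': 'https://app.example.com' + req.url });
  res.end();
}).listen(80);

// On every HTTPS response, add HSTS so the redirect above is needed at most once
// per max-age window (1209600 seconds = 2 weeks).
function addHsts(res) {
  res.setHeader('Strict-Transport-Security', 'max-age=1209600');
}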

Below is an abbreviated list of the HTTP headers that https://twitter.com returns.

Cache-Control: no-cache, no-store, must-revalidate, pre-check=0, post-check=0
Content-Encoding: gzip
Content-Length: 12264
Content-Type: text/html;charset=utf-8
Date: Fri, 22 Aug 2014 16:33:40 GMT
Expires: Tue, 31 Mar 1981 05:00:00 GMT
Last-Modified: Fri, 22 Aug 2014 16:33:40 GMT
Pragma: no-cache
Server: tfe_b
Strict-Transport-Security: max-age=631138519
X-Content-Type-Options: nosniff
X-Frame-Options: SAMEORIGIN

Twitter uses the Strict-Transport-Security header to tell your browser to always connect to https://twitter.com, and never to http://twitter.com, for the duration of the max-age value (here 631,138,519 seconds, roughly 20 years).

Right about now you are probably asking yourself, “How does the browser know to communicate with a host using HTTPS without first hitting the HTTP site? Doesn’t the server still need to service an HTTP request originally to tell the browser to only use HTTPS in the future?” The answer is yes, but ideally only that one time. Consider trying to go to http://www.bank.com. You have never been there before. You get a redirect to https://www.bank.com. Once you access it over HTTPS, that is when you get the Strict-Transport-Security header that says “BTW, use HTTPS for all communication in the future, and cache this directive for X seconds”. For all follow-up visits, your browser won’t even let you go to http://www.bank.com. Even if you manually typed that into the browser’s address bar, the browser automatically changes it to HTTPS because of the HSTS header. In many ways, you can think of this like HTTP caching, using the Expires or Cache-Control headers. However, with HSTS, instead of caching content, you are caching information about the communications channel to use.

There is an added security benefit to HSTS as well. The request to the HTTP site, and then the redirect to the HTTPS site, happens over an insecure channel, where a user is not guaranteed to be communicating with my server. This creates a window of opportunity for an attacker with network access to redirect the user to a different site. The attacker could then trick the user into entering their credentials into an attacker-controlled website which is impersonating the real site. Automated tools such as SSLStrip exist to do this. HSTS prevents this from occurring by avoiding the HTTP request in the first place. In fact, that’s why HSTS was designed! The performance benefits are really just a side effect of the security feature!

While you can get some of this performance advantage by using a 301 redirect with a caching header, that approach is not as good as using HSTS. First of all, browsers don’t have to respect caching headers, and redirect-caching behavior varies from browser to browser. Secondly, caches can be cleared, by the browser if it runs out of space, or by the user. HSTS is stored separately by the browser and does not face these issues.

Currently, 58% of web browsers in use support HSTS. Implementing HSTS is as simple as including a special HTTP response header, Strict-Transport-Security, in each response from the SSL site. This can easily be implemented via application logic, or with a web-server-level configuration rule. In fact, I implemented it in ASP.NET with a single line of code in our master template: Response.Headers["Strict-Transport-Security"] = "max-age=1209600". The max-age number, much like in the Cache-Control header, is the number of seconds the web browser should remember to only access a site over SSL. Every time the user visits the site, the browser “resets” its internal value. I chose 2 weeks, since the usage of our app means a user will probably be looking at least 1 page once every 2 weeks.

If you are deploying a website that is SSL only, I highly suggest using HSTS to improve both performance and security. Since HSTS works at the hostname level, websites that have secured only part of their content with SSL can still benefit, assuming that separate content is served from a different hostname. This scenario is quite common with online shopping sites.

If you are interested in optimizing SSL for performance, then you obviously care about keeping your site fast and ensuring your visitors have a good user experience. Try out Zoompf’s Free Performance Report to get an idea of other areas you can improve, and to keep your site fast over time, check out our newly announced Free Alerts Beta!

The post SSL Performance Diary #2: HTTP Strict Transport Security appeared first on Zoompf Web Performance.

You’ve Gotten Your Junk Data in My Junk Tests

QA Hates You - Fri, 08/22/2014 - 05:43

One of the recurring pratfalls in testing your integration with third party widgets (shared by, and updateable by, others who use them) is that their test data sometimes becomes visible to you.

Take, for instance, testing integration with Google maps. It’s becoming harder and harder to submit a string that returns no results. Search for asdf, for example, an old tester favorite.

Someone testing the addition of Google Places has added that as test data, and it’s there for all of us to see.

Categories: Software Testing

jslint's suggestion will break your site: Unexpected 'in'...

Perf Planet - Thu, 08/21/2014 - 22:45

I use jslint to validate my JavaScript before it goes out to production. The tool is somewhat useful, but you really have to spend some time ignoring all the false errors it flags. In some cases you can take its suggestions, while in others you can ignore them with no ill effects.

In this particular case, I came across an error, where, if you follow the suggestions, your site will break.

My code looks like this:


if (!("performance" in window) || !window.performance) {
return null;
}

jslint complains saying:

Unexpected 'in'. Compare with undefined, or use the hasOwnProperty method instead.

This is very bad advice for the following reasons:

  • Comparing with undefined will throw an exception on Firefox 31 if used inside an anonymous iframe.
  • Using hasOwnProperty will cause a false negative on IE 10 because window.hasOwnProperty("performance") is false even though IE supports the performance timing object.

So, the only course of action is to use in for this case.

Test This #8 - The Follow-On Journey

Eric Jacobson's Software Testing Blog - Thu, 08/21/2014 - 14:39

While reading Duncan Nisbet’s TDD For Testers article, I stumbled on a neat term he used, “follow-on journey”.

For me, the follow-on journey is a test idea trigger for something I otherwise would have just called regression testing.  I guess “Follow-on journey” would fall under the umbrella of regression testing but it’s more specific and helps me quickly consider the next best tests I might execute.

Here is a generic example:

Your e-commerce product-under-test has a new interface that allows users to enter sales items into inventory by scanning their barcode.  Detailed specs provide us with lots of business logic that must take place to populate each sales item upon scanning its barcode.  After testing the new sales item input process, we should consider testing the follow-on journey; what happens if we order sales items ingested via the new barcode scanner?

I used said term to communicate test planning with another tester earlier today.  The mental image of an affected object’s potential journeys helped us leap to some cool tests.

Categories: Software Testing

Speed Up Your Bootstrap and Font-Awesome Sites Using Font Compression

Perf Planet - Thu, 08/21/2014 - 10:00

Share

Twitter Bootstrap and its frequent add-on Font-Awesome have gained tremendous popularity in recent years due to the relative ease these libraries provide for building good looking, responsive websites. So much so that here at Zoompf we’ve seen the same performance problem appear again and again: uncompressed font files.

Of course, uncompressed font files are not specific to bootstrap or font-awesome – in fact this problem is quite common! Still, I call out these 2 particular libraries due to their popularity and the pervasiveness of this problem. Font files come in many shapes and sizes, but the easiest way to spot them is to look in your CSS files for includes to files with these extensions: SVG, EOT, WOFF, TTF and OTF. (And there are many more.)

Here’s the dirty little secret about fonts: almost all of them can benefit from HTTP compression, and most webservers do NOT compress font files by default. SVG, for example, is entirely defined in XML, which is highly compressible! In the example I’ll show below, just the one font file fontawesome-webfont.svg can be reduced by 179 kb.

So even if you have compression turned on for your HTML, CSS and Javascript, you’re still missing out on large possible savings by excluding your font files. In the demonstration below, we’ll dive into this in more detail, then show what you can do about it!


Demonstration

So let’s take a hands-on look. To begin, I launched a test Amazon 64 bit linux instance running Apache2 in a manner similar to my earlier blog post about mod_pagespeed.

From there, I constructed a very basic HTML file that includes the latest Bootstrap (3.2.0) and Font-Awesome (4.1.0) libraries.



If you’re playing along at home, you can download this full sample here: FontCompressBlog.tar or FontCompressBlog.zip

After installing the base Apache 2.2 image provided by Amazon:


yum install httpd
service httpd start

I then ran the Zoompf scanner against this page; you can see the full report here. The results showed no compression enabled for ANY of the files, the default for the prepackaged Apache on Amazon.



So basically we have a very small page (1 kb) that already has over 333 kb of bloat just by including those 2 libraries! Of course bootstrap and font-awesome are, well, awesome, so let’s see what we can do to help here.

Turns out that while mod_deflate is installed as part of the Apache installation on Amazon, it is not pre-configured to compress any specific file types. You can fix this a number of ways; I chose to edit the httpd.conf file in /etc/httpd/conf and add this:
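(The lines below are representative; the full recommended configuration appears later in this post.)

AddOutputFilterByType DEFLATE text/html
AddOutputFilterByType DEFLATE text/css
AddOutputFilterByType DEFLATE application/javascript
AddOutputFilterByType DEFLATE application/x-javascript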



This tells Apache to compress HTML, CSS and Javascript files, which is the common compression use case for the majority of the web. Many Apache installations will now set this for you automatically, as well as performance packages like mod_pagespeed, but you can’t assume this is a given.

Rerunning our performance scan sees some nice improvement, but we still see those pesky uncompressed font files!



Don’t be ashamed, “this happens to everyone”. Those poor old font files are always left out…

Well, let’s fix it. For Apache, you need to make sure your configuration also compresses the proper font extensions. In this example, we do this by adding these 2 lines to the httpd.conf:
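(Again representative, matching the SVG and EOT entries from the recommended configuration below.)

AddOutputFilterByType DEFLATE image/svg+xml
AddOutputFilterByType DEFLATE application/vnd.ms-fontobject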



This instructs Apache to also compress SVG and EOT files, both of which were included by the Bootstrap/font-awesome libraries. A quick restart of Apache (“service httpd restart”) and rescan with Zoompf and we see this very satisfying result:



Also refer to the full report.

So by turning on compression for the correct file formats, we saved over 333 kb of unneeded bloat from just these 2 libraries alone – 225 kb (68%) of which was from the font libraries! Fonts can chew up a lot of space, and are rarely included in the compression filters for most websites.


What Else Can I Do?

So you may ask why webservers don’t just compress everything by default? The problem is that some content, like PNG and JPG images, is already compressed. If the webserver tried to “re-compress” files that were already compressed, performance would actually get worse! We have a great blog post about this in our archive called Lose the Wait: HTTP Compression. I highly recommend checking it out.

So with this said, we need to explicitly tell the webserver what file types to compress. In the case of Apache, this can be done by adding AddOutputFilterByType lines to your configuration like shown above. While individual needs may vary, here’s a recommended configuration that covers most common scenarios:


AddOutputFilterByType DEFLATE text/plain
AddOutputFilterByType DEFLATE text/html
AddOutputFilterByType DEFLATE text/xml
AddOutputFilterByType DEFLATE text/css
AddOutputFilterByType DEFLATE application/xml
AddOutputFilterByType DEFLATE application/xhtml+xml
AddOutputFilterByType DEFLATE application/rss+xml
AddOutputFilterByType DEFLATE application/javascript
AddOutputFilterByType DEFLATE application/x-javascript

# Common Fonts
AddOutputFilterByType DEFLATE image/svg+xml
AddOutputFilterByType DEFLATE application/x-font-ttf
AddOutputFilterByType DEFLATE application/font-woff
AddOutputFilterByType DEFLATE application/vnd.ms-fontobject
AddOutputFilterByType DEFLATE application/x-font-otf

If you need to add other file types, you can look up the appropriate entries from this list of default MIME types for Apache.

If you want to take it a step further, we highly recommend checking out the HTML5 Boiler Plate project. This resource provides a great default Apache configuration that is well optimized for speed. Of particular interest is their sample .htaccess file.

While this article does not cover the configuration for other servers such as IIS and NGINX, the problems remain the same and a similar methodology applies.


What about NGINX and IIS?

NGINX is a little easier than Apache, but still requires you to supply the appropriate file types for compression in your nginx.conf file under the gzip_types directive. You can read more about this in this helpful article.

For Microsoft IIS 7, it’s a little trickier. While IIS 7 turns on compression by default, it does not compress most font files, including SVG. To enable this, you need to use the “appcmd.exe” tool (located for me in the C:\Windows\System32\inetsrv directory) to add the missing MIME types to the “static types” list of what gets compressed. Microsoft has a reference on this here. And here’s the syntax for adding SVG:
appcmd.exe set config -section:system.webServer/httpCompression /+"staticTypes.[mimeType='image/svg+xml',enabled='True']" /commit:apphost
And similar commands can be issued for the other types. After you’re done, restart IIS to be safe and those types should now be compressed. If you have further problems, a great place to look to debug is the “httpCompression” section of your “applicationHost.config” file, located in the c:\Windows\System32\inetsrv\config directory (or equivalent). For example, on my test instance I had to move the image/svg+xml entry above the “*/*” deny rule like such:
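Roughly, the relevant section of applicationHost.config ends up looking like this (abridged; other entries omitted):

<httpCompression directory="%SystemDrive%\inetpub\temp\IIS Temporary Compressed Files">
  <staticTypes>
    <add mimeType="image/svg+xml" enabled="true" />
    <add mimeType="text/*" enabled="true" />
    <add mimeType="application/javascript" enabled="true" />
    <add mimeType="*/*" enabled="false" />
  </staticTypes>
</httpCompression>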



No doubt an order of precedence applied there, so you may get hit with a similar problem by using the appcmd tool alone.

In Closing

As shown above, even the default “out of the box” installation of useful libraries like Bootstrap and font-awesome can introduce a good amount of bloat to your webpage. While the capabilities of these libraries more than justify the cost, there are still steps you can take to get the best of both worlds here.

While turning on HTTP Compression is always a must-do for any website administrator, very commonly (LARGE) font files are left out of the compression equation. As shown above, this is often resolved through a simple configuration update.

To see if your website is compressing your font files properly, try out Zoompf’s Free Report, and to keep your site fast over time, check out our newly announced Free Alerts Beta!

The post Speed Up Your Bootstrap and Font-Awesome Sites Using Font Compression appeared first on Zoompf Web Performance.

Understanding Application Performance on the Network – Part VIII: Chattiness and Application Windowing

Perf Planet - Thu, 08/21/2014 - 07:03

In Part VII, we looked at the interaction between the TCP receive window and network latency. In Part VIII we examine the performance implications of two application characteristics – chattiness and application windowing as they relate to network latency. Chattiness A “chatty” application has, as an important performance characteristic, a large number of remote requests […]

The post Understanding Application Performance on the Network – Part VIII: Chattiness and Application Windowing appeared first on Compuware APM Blog.


Categories: Load & Perf Testing

Resource Timing practical tips

Perf Planet - Thu, 08/21/2014 - 01:45

The W3C Web Performance Working Group brought us Navigation Timing in 2012 and it’s now available in nearly every major browser. Navigation Timing defines a JavaScript API for measuring the performance of the main page. For example:

// Navigation Timing
var t = performance.timing,
    pageloadtime = t.loadEventStart - t.navigationStart,
    dns = t.domainLookupEnd - t.domainLookupStart,
    tcp = t.connectEnd - t.connectStart,
    ttfb = t.responseStart - t.navigationStart;

Having timing metrics for the main page is great, but to diagnose real world performance issues it’s often necessary to drill down into individual resources. Thus, we have the more recent Resource Timing spec. This JavaScript API provides similar timing information as Navigation Timing but it’s provided for each individual resource. An example is:

// Resource Timing
var r0 = performance.getEntriesByType("resource")[0],
    loadtime = r0.duration,
    dns = r0.domainLookupEnd - r0.domainLookupStart,
    tcp = r0.connectEnd - r0.connectStart,
    ttfb = r0.responseStart - r0.startTime;

As of today, Resource Timing is supported in Chrome, Chrome for Android, Opera, IE10, and IE11. This is likely more than 50% of your traffic so should provide enough data to uncover the slow performing resources in your website.

Using Resource Timing seems straightforward, but when I wrote my first production-quality Resource Timing code I ran into several issues that I want to share. Here are my practical tips for tracking Resource Timing metrics in the real world.

1. Use getEntriesByType(“resource”) instead of getEntries().

You begin using Resource Timing by getting the set of resource timing performance objects for the current page. Many Resource Timing examples use performance.getEntries() to do this, which implies that only resource timing objects are returned by this call. But getEntries() can potentially return four types of timing objects: “resource”, “navigation”, “mark”, and “measure”.

This hasn’t caused problems for developers so far because “resource” is the only entry type in most pages. The “navigation” entry type is part of Navigation Timing 2, which isn’t implemented in any browser AFAIK. The “mark” and “measure” entry types are from the User Timing specification, which is available in some browsers but not widely used.

In other words, getEntriesByType("resource") and getEntries() will likely return the same results today. But it’s possible that getEntries()  will soon return a mix of performance object types. It’s best to use performance.getEntriesByType("resource") so you can be sure to only retrieve resource timing objects. (Thanks to Andy Davies for explaining this to me.)

2. Use Nav Timing to measure the main page’s request.

When fetching a web page there is typically a request for the main HTML document. However, that resource is not returned by performance.getEntriesByType("resource"). To get timing information about the main HTML document you need to use the Navigation Timing object (performance.timing).

Although unlikely, this could cause bugs when there are no entries. For example, my earlier Resource Timing example used this code:

performance.getEntriesByType("resource")[0]

If the only resource for a page is the main HTML document, then getEntriesByType("resource") returns an empty array and referencing element [0] results in a JavaScript error. If you don’t have a page with zero subresources then you can test this on http://fast.stevesouders.com/.
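A simple length check avoids the problem:

var entries = performance.getEntriesByType("resource");
if ( entries.length ) {
    var r0 = entries[0];
    // ... safe to read r0.duration, r0.startTime, etc. ...
}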

3. Beware of issues with secureConnectionStart.

The secureConnectionStart property allows us to measure how long it takes for SSL negotiation. This is important – I often see SSL negotiation times of 500ms or more. There are three possible values for secureConnectionStart:

  • If this attribute is not available then it must be set to undefined.
  • If HTTPS is not used then it must be set to zero.
  • If this attribute is available and HTTPS is used then it must be set to a timestamp.

There are three things to know about secureConnectionStart. First, in Internet Explorer the value of secureConnectionStart is always “undefined” because it’s not available (the value is buried down inside WinINet).

Second, there’s a bug in Chrome that causes secureConnectionStart to be incorrectly set to zero. If a resource is fetched using a pre-existing HTTPS connection then secureConnectionStart is set to zero when it should really be a timestamp. (See bug 404501 for the full details.) To avoid skewed data make sure to check that secureConnectionStart is neither undefined nor zero before measuring the time for SSL negotiation:

var r0 = performance.getEntriesByType("resource")[0];
if ( r0.secureConnectionStart ) {
    var ssl = r0.connectEnd - r0.secureConnectionStart;
}

Third, the spec is misleading with regard to this line: “…if the scheme of the current page is HTTPS, this attribute must return the time immediately before the user agent starts the handshake process…” (emphasis mine). It’s possible for the current page to be HTTP and still contain HTTPS resources for which we want to measure the SSL negotiation. The spec should be changed to “…if the scheme of the resource is HTTPS, this attribute must return the time immediately before the user agent starts the handshake process…”. Fortunately, browsers behave according to the corrected language, in other words, secureConnectionStart is available for HTTPS resources even on HTTP pages.

4. Add “Timing-Allow-Origin” to cross-domain resources.

For privacy reasons there are cross-domain restrictions on getting Resource Timing details. By default, a resource from a domain that differs from the main page’s domain has these properties set to zero:

  • redirectStart
  • redirectEnd
  • domainLookupStart
  • domainLookupEnd
  • connectStart
  • connectEnd
  • secureConnectionStart
  • requestStart
  • responseStart

There are some cases when it’s desirable to measure the performance of cross-domain resources, for example, when a website uses a different domain for its CDN (such as “youtube.com” using “s.ytimg.com”) and for certain 3rd party content (such as “ajax.googleapis.com”). Access to cross-domain timing details is granted if the resource returns the Timing-Allow-Origin response header. This header specifies the list of (main page) origins that are allowed to see the timing details, although in most cases a wildcard (“*”) is used to grant access to all origins. As an example, the Timing-Allow-Origin response header for http://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js is:

Timing-Allow-Origin: *
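If you control the server for a CDN or static-content domain of your own, adding the header is typically a one-line change; for example, with Apache’s mod_headers (assuming the module is enabled):

Header set Timing-Allow-Origin "*"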

It’s great when 3rd parties add this response header; it allows website owners to measure and understand the performance of that 3rd party content on their pages. As reported by (and thanks to) Ilya Grigorik, several 3rd parties have added this response header. Here are some example resources that specify “Timing-Allow-Origin: *”:

When measuring Resource Timing it’s important to determine if you have access to the restricted timing properties. You can do that by testing whether any of the restricted properties listed above (except for secureConnectionStart) is zero. I always use requestStart. Here’s a better version of the earlier code snippet that checks that the restricted properties are available before calculating the more detailed performance metrics:

// Resource Timing
var r0 = performance.getEntriesByType("resource")[0],
    loadtime = r0.duration;
if ( r0.requestStart ) {
    var dns = r0.domainLookupEnd - r0.domainLookupStart,
        tcp = r0.connectEnd - r0.connectStart,
        ttfb = r0.responseStart - r0.startTime;
}
if ( r0.secureConnectionStart ) {
    var ssl = r0.connectEnd - r0.secureConnectionStart;
}

It’s really important that you do these checks. Otherwise, if it’s assumed you have access to these restricted properties you won’t actually get any errors; you’ll just get bogus data. Since the values are set to zero when access is restricted, things like "domainLookupEnd - domainLookupStart" translate to "0 - 0", which returns a plausible result ("0") but is not necessarily the true value of the (in this case) DNS lookup. This results in your metrics having an overabundance of "0" values which incorrectly skews the aggregate stats to look rosier than they actually are.

5. Understand what zero means.

As mentioned in #4, some Resource Timing properties are set to zero for cross-domain resources that have restricted access. And again, it’s important to check for that state before looking at the detailed properties. But even when the restricted properties are accessible it’s possible that the metrics you calculate will result in a value of zero, and it’s important to understand what that means.

For example, (assuming there are no access restrictions) the values for domainLookupStart and domainLookupEnd are timestamps. The difference of the two values is the time spent doing a DNS resolution for this resource. Typically, there will only be one resource in the page that has a non-zero value for the DNS resolution of a given hostname. That’s because the DNS resolution is cached by the browser; all subsequent requests use that cached DNS resolution. And since DNS resolutions are cached across web pages, it’s possible to have all DNS resolution calculations be zero for an entire page. Bottomline: a zero value for DNS resolution means it was read from cache.

Similarly, the time to establish a TCP connection (“connectEnd – connectStart”) for a given hostname will be zero if it’s a pre-existing TCP connection that is being re-used. Each hostname should have ~6 unique TCP connections that should show up as 6 non-zero TCP connection time measurements, but all subsequent requests on that hostname will use a pre-existing connection and thus have a TCP connection time that is zero. Bottomline: a zero value for TCP connection means a pre-existing TCP connection was re-used.

The same applies to calculating the time for SSL negotiation (“connectEnd – secureConnectionStart”). This might be non-zero for up to 6 resources, but all subsequent resources from the same hostname will likely have a SSL negotiation time of zero because they use a pre-existing HTTPS connection.

Finally, if the duration property is zero this likely means the resource was read from cache.
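Putting tips 4 and 5 together, a small sketch that walks all resources and interprets the zero values might look like this:

performance.getEntriesByType("resource").forEach(function(r) {
    var details = {};
    if ( r.requestStart ) {  // restricted properties are available (see tip 4)
        details.dns = r.domainLookupEnd - r.domainLookupStart; // 0 => DNS read from cache
        details.tcp = r.connectEnd - r.connectStart;           // 0 => pre-existing connection re-used
    }
    details.fromCache = ( r.duration === 0 );                  // 0 duration => likely read from cache
    console.log(r.name, details);
});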

6. Determine if 304s are being measured.

There’s another bug in Chrome stable (version 36) that has been fixed in version 37. This issue is going away, but since most users are on Chrome stable your current metrics are likely different than they are in reality. Here’s the bug: a cross-origin resource that does have Timing-Allow-Origin on a 200 response will not have that taken into consideration on a 304 response. Thus, 304 responses will have zeroes for all the restricted properties as shown by this test page.

This shouldn’t happen because the Timing-Allow-Origin header from the cached 200 response should be applied (by the browser) to the 304 response. This is what happens in Internet Explorer. (Try the test page in IE 10 or 11 to confirm.) (Thanks to Eric Lawrence for pointing this out.)

This impacts your Chrome Resource Timing results as follows:

  • If you’re checking that the restricted fields are zero (as described in #4), then you’ll skip measuring these 304 responses. That means you’re only measuring 200 responses. But 200 responses are slower than 304 responses, so your Resource Timing aggregate measurements are going to be larger than they are in reality.
  • If you’re not checking that the restricted fields are zero, then you’ll record many zero values which are faster than the 304 response was in reality, and your Resource Timing stats will be too optimistic.

There’s no easy way to avoid these biases, so it’s good news that the bug has been fixed. One thing you might try is to send a Timing-Allow-Origin on your 304 responses. Unfortunately, the popular Apache web server is unable to send this header in 304 responses (see bug 51223). Further evidence of the lack of Timing-Allow-Origin in 304 responses can be found by looking at the 3rd party resources listed in #4. As stated, it’s great that these 3rd parties return that header on 200 responses, but 4 out of 5 of them do not return Timing-Allow-Origin on 304 responses. Until Chrome 37 becomes stable it’s likely that Resource Timing metrics are skewed either too high or too low because of the missing details for the restricted properties. Fortunately, the value for duration is accurate regardless.

7. Look at Boomerang.

If you’re thinking of writing your own Resource Timing code you should first look at the Resource Timing plugin for Boomerang. (The code is on GitHub.) Boomerang is the popular open source RUM package maintained by Philip Tellis. He originally open sourced it when he was at Yahoo! but is now providing ongoing maintenance and enhancements as part of his work at SOASTA for the commercial version (mPulse). The code is clear, pithy, and robust and addresses many of the issues mentioned above.

In conclusion, Navigation Timing and Resource Timing are outstanding new specifications that give website owners much more visibility into the performance of their web pages. Resource Timing is the newer of the two specs, and as such there are still some wrinkles to be ironed out. These tips will help you get the most from your Resource Timing metrics. I encourage you to start tracking these metrics today to understand how your website is performing for the people who matter the most: real users.

Update: Here are some other 3rd party resources that include the Timing-Allow-Origin response header thus allowing website owners to measure their performance of this 3rd party content:
