Feed aggregator

How to Pick the Right CDN Provider

Perf Planet - 4 hours 34 min ago

In order to deliver a complete web experience to your users, it’s important to utilize the services of a reliable Content Delivery Network. The basic promise of the traditional CDN product is to provide better and faster web performance for end users by serving static assets from the edge (it’s worth noting that CDN companies also offer additional products like site acceleration, which goes beyond serving static files).

The reason that they can deliver on this promise is not only because they have bigger network pipes, more servers around the world, and better network peering capabilities than you do, but also because they have teams dedicated to optimizing their stacks and monitoring their infrastructure in order to better manage increased traffic spikes.

In other words, a CDN is able to leverage economies of scale to provide you with a service that is faster, better, and more cost-effective than what it would be if you relied on your own infrastructure. Of course some CDN companies deliver on these promises better and more efficiently than others, but how is a prospective consumer to know what factors to look at when deciding which one to choose?

First of all, there are few basic qualifications to look at. If a prospective vendor is unable to deliver on these, then it should go without saying that it should not be considered any further:

  • Make sure that the CDN is faster than your Origin.
  • You want the CDN to be good at delivering both small files (~5 KB) and larger payloads.
  • The CDN should minimize the geographical impact by reducing the latency to fetch all of the items on the page.

However, given the increasingly complex infrastructure that makes up many modern websites, there are many other more intricate metrics and data points that should be looked at and factored into the decision to go with one CDN provider over another.

Metrics & Data to look for

DNS Time

Some CDNs have more complex DNS setups than others, and that complexity can slow things down. What often happens is that a faster Wait time gets offset by slower DNS response times.

Keep in mind that DNS performance from the last mile (the end user) is quite different from tests run on the backbone. End users rely on the DNS resolvers of their ISPs or on public resolvers, whereas backbone monitoring relies on resolvers that are very close to the machine running the tests.

Connect Time

Review the differing connect times to make sure that your CDN has great network connectivity, low latency, and no packet loss. Additionally, you want to make sure it does not get slower during peak hours and that you are being routed to the right network peering. For example, if an end user is on Verizon FIOS, there is no reason to go through five different backbone networks because that CDN does not have a direct peering with Verizon.

Wait Time

This metric is important when comparing CDNs; it helps you see whether your content is hot on the edge or whether the edge needs to fetch it from the Origin servers. The Wait time is also an indicator of potential capacity issues or bad configuration at the CDN level or the Origin server (e.g. setting cache expiration in the past).

A CDN will deliver different performance if an asset is hot, i.e. requested 100,000+ times in the past hour vs. a few times an hour. A CDN is a shared environment where more popular items are faster to deliver than others; if something is in memory it’s fast, but if it has to hit a spinning disk it’s a different story. Thus I would personally consider Solid State Drives as a criterion in my CDN selection.

Throughput

Make sure that the throughput of the CDN is higher than that of the origin, no matter what the file size is!
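
To sample DNS, connect, wait, and throughput yourself, the sketch below uses pycurl (an assumption: the libcurl bindings are installed, and the URLs are placeholders for the same asset served from your CDN and directly from your origin):

```python
# Rough comparison of CDN vs. origin timings using pycurl (libcurl bindings).
# Assumption: pycurl is installed; the URLs below are placeholders.
import io
import pycurl

def fetch_timings(url):
    buf = io.BytesIO()
    c = pycurl.Curl()
    c.setopt(pycurl.URL, url)
    c.setopt(pycurl.WRITEDATA, buf)
    c.setopt(pycurl.FOLLOWLOCATION, True)
    c.perform()
    timings = {
        "dns_s": c.getinfo(pycurl.NAMELOOKUP_TIME),         # DNS time
        "connect_s": c.getinfo(pycurl.CONNECT_TIME),         # TCP connect (cumulative)
        "ttfb_s": c.getinfo(pycurl.STARTTRANSFER_TIME),      # wait / time to first byte
        "total_s": c.getinfo(pycurl.TOTAL_TIME),
        "throughput_Bps": c.getinfo(pycurl.SPEED_DOWNLOAD),  # average download speed
    }
    c.close()
    return timings

for label, url in [("cdn-small", "https://cdn.example.com/pixel-5kb.png"),
                   ("origin-small", "https://origin.example.com/pixel-5kb.png"),
                   ("cdn-large", "https://cdn.example.com/bundle-2mb.js"),
                   ("origin-large", "https://origin.example.com/bundle-2mb.js")]:
    print(label, fetch_timings(url))
```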

Traceroutes

You need to run traceroutes from where you are monitoring to make sure you are not mapped to the wrong place. Many CDNs use commercial geo-mapping databases and the data for the IP could be wrong. So if your CDN sends requests to the United Kingdom from your home connection in Los Angeles, something is not right.

Other things to keep an eye on

Most CDNs will give you access to a control panel, so make sure you monitor your Cache Hit / Miss ratio. How often does the CDN have to go back to the origin? A good CDN architecture should not hit the origin very often.
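
The control panel numbers are authoritative, but you can also spot-check the ratio from the outside. A minimal sketch with the requests library (an assumption: your CDN exposes a cache-status header such as X-Cache; the exact header name varies by provider, and the sample URL is a placeholder):

```python
# Rough hit/miss sampling via response headers.
# Assumption: the CDN returns a cache-status header like "X-Cache: HIT from edge01".
from collections import Counter
import requests

def sample_cache_status(url, attempts=20, header="X-Cache"):
    counts = Counter()
    for _ in range(attempts):
        resp = requests.get(url, timeout=10)
        status = (resp.headers.get(header) or "unknown").split()[0].upper()
        counts[status] += 1
    return counts

print(sample_cache_status("https://cdn.example.com/css/site.css"))
```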

You also have to ask what happens when an edge server does not have the content. How long does it take to purge a file? How long does it take to load a file to the edge? How long before a CNAME is active?

How well does the CDN handle public name resolvers such as OpenDNS, Dyn, or Google? These companies are carrying more and more of the DNS traffic, and this could impact certain CDNs’ geo-load balancing algorithms.
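
One way to see this in practice is to compare the answers your CDN hostname gets from different public resolvers. A minimal sketch with dnspython 2.x (the hostname is a placeholder, and the resolver IPs are the commonly published ones; verify them yourself before relying on them):

```python
# Compare how public resolvers map a CDN hostname, using dnspython.
# Assumption: dnspython 2.x is installed; "cdn.example.com" is a placeholder.
import dns.resolver

RESOLVERS = {
    "Google": "8.8.8.8",
    "OpenDNS": "208.67.222.222",
    "Dyn": "216.146.35.35",
}

def answers_from(resolver_ip, hostname):
    r = dns.resolver.Resolver(configure=False)
    r.nameservers = [resolver_ip]
    return sorted(rr.to_text() for rr in r.resolve(hostname, "A"))

for name, ip in RESOLVERS.items():
    print(name, answers_from(ip, "cdn.example.com"))
```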

Are the metrics from the CDN such as DNS, Connect, Wait and Response consistent? (Note: do not just look at averages!) Remember, great performance is about more than just speed; the reliability to deliver a consistent experience is just as important, if not more so.

Conclusion

After considering all of these factors, the simple questions that you and your IT Ops team must answer are:

  • Can the CDN improve my performance in key markets where we lack physical server presence?
  • By how much? Is it a 5% improvement, 20% or 200%?
  • Will this translate into a revenue increase? Institute a way to monitor the long-term impact of a CDN on your revenue.

Now besides speed, CDNs do bring other benefits that are not measured in seconds:

  • By offloading your static files, you free up servers, bandwidth, and personnel to deal with more important things than serving static files.
  • They can help you handle DDoS attacks.
  • They are better prepared to handle seasonal traffic.

Once you have selected a CDN and are up and running on their platform, the work is not over. You should have regular communication with your CDN provider and monitor their performance versus the origin at all times. Doing so will create a symbiotic relationship between you and your provider, and benefit all parties (including your end users) in the long run.


The post How to Pick the Right CDN Provider appeared first on Catchpoint's Blog.

Web Performance News of the Week

LoadStorm - Fri, 07/31/2015 - 16:09

The new Microsoft Edge browser is here

This week Windows 10 was released for free downloads. Along with it, the new Microsoft Edge browser, previously codenamed Project Spartan, was finally delivered. Edge features a redesigned interface, the ability to mark up pages, and the Cortana search assistant. Microsoft also announced that Edge supports automated testing through the W3C WebDriver standard, and has a recording option to create HAR files. Preliminary speed tests performed by Microsoft claimed it beat Chrome and Safari at their own JavaScript benchmark tests. The “browser built for getting things done” will be the only browser supported on Windows 10 mobile devices, but both Internet Explorer and Microsoft Edge will be available on desktop. Microsoft says they have no plans to extend the browser to Linux, Mac OS X, Android or iOS devices.
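
Since Edge speaks the W3C WebDriver protocol, existing browser-automation stacks can target it. A minimal sketch with Selenium for Python (assumptions: a Windows 10 machine with Microsoft’s Edge WebDriver server installed and on the PATH):

```python
# Drive the new Edge browser through WebDriver using Selenium.
# Assumption: Microsoft's Edge WebDriver server is installed on a Windows 10 box.
from selenium import webdriver

driver = webdriver.Edge()
driver.get("https://www.example.com/")
print(driver.title)
driver.quit()
```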

New Shaka player release from Google devs

During the 100 Days of Google Dev this week, Google announced Shaka Player v1.2.0, a new mechanism for delivering high quality video on the web. The JavaScript library implements a DASH client, which enables publishers to deliver video in segments over a normal HTTP connection at a variety of different bit rates, resolutions, or formats to suit client needs.
DASH, which stands for Dynamic Adaptive Streaming over HTTP, removes the disadvantages of typical streaming services, such as proprietary protocols or required hardware. In addition, Shaka supports delivery of protected content via the EME APIs to get licenses and do decryption. Because the high performance video player is plugin free and built on web standards, it can be used on a large array of devices. The Shaka project can be downloaded and tested now.

Facebook releases new Security Checkup tool worldwide

Facebook’s Security Checkup tool is now being released to all users worldwide. The tool was built into their platform to guide users through each of the available security options, one at a time. The tool allows users to change their passwords, enable login alerts, and clean up login sessions simply by clicking through the screen prompts. Facebook’s Security Checkup was introduced earlier this year in a limited test release, allowing users to test and give feedback on the tool. The checkup should be positioned at the top of your Facebook newsfeed, ready to try out.

#NoHacked campaign is back from Google

In a continuation of their #NoHacked campaign, Google focused attention on protecting sites from hacking. Google is engaging webmasters on Twitter, Google+, and will hold a live Q&A hangout on hacking prevention and recovery in the next few weeks. In their first installment, titled “How to avoid being the target of hackers”, Google offered basic safety tips. To keep your site safe they suggested strengthening your account security, keeping your site’s software updated, researching how your web host handles security issues, and using Google tools to stay informed of potentially compromised content on your site.

The post Web Performance News of the Week appeared first on LoadStorm.

Aligning Web Performance Goals with Business Initiatives

Perf Planet - Wed, 07/29/2015 - 09:51

The following post originally appeared as a guest byline on Multichannel Merchant.

Ever since the internet and eCommerce websites were created, the needs of the end users have been at the forefront of the mind of every IT operations professional. It’s a perfectly logical prioritization to make; the customer is always right, so it only makes sense to put the end users’ needs above all others. And hey, at the end of the day we’re end users as well, so we know how our own users want and deserve to be treated.

However, that only tells part of the story. In addition to the needs of the end users, the needs of the business and IT teams within the company must be taken into account as well. After all, these are the people who are ultimately setting the goals for the site, and making sure that those goals are reached. With responsibilities like that, the need for efficiency and prioritization becomes clear. It’s not that the customers’ needs should take a backseat; rather, their needs will be better served if the business and IT teams are able to meet their goals in the fastest and most efficient manner possible.

Here are four steps that any company can take to accomplish this:

Filter out the ambient noise

You can’t fix what you don’t measure, but neither can you fix what you can’t identify. Many web performance initiatives result in a firehose effect when the data starts coming in, but too much information is often just as bad as no information. Sometimes a problem will even be detected, but the sheer volume of data means that hours of sifting through it are required in order to pinpoint the root cause of that problem and solve it.

The bottom line is that data is not enough; it must be accompanied by extensive analytics which isolate the various systems that make up the complex infrastructure of a page. This also helps you establish baselines, identify historical trends, and see the impact of different optimization techniques.

It’s also necessary to save that data and export it to the company’s own internal systems. The ability to analyze data going back as far as 2-3 years will help you to compare trends month-over-month and year-over-year in order to understand seasonal impact, and help to identify problems that may only arise once or twice in any given calendar year.

Proactive testing around the clock

Your monitoring strategy needs to deliver data on performance as it’s experienced by end users across major U.S. markets. The key, of course, is to catch problems before those users encounter them through round-the-clock testing.

Synthetic (or active) monitoring is extremely useful for achieving this goal, alerting businesses to problems within their infrastructure by testing from a ‘clean lab’ environment. By simulating web traffic to show how quickly (or slowly) end users are able to enter a site or a particular page, synthetic monitoring offers a proactive approach.

However, synthetic monitoring gives little insight into what users are actually doing once they enter your site, or into the impact of any outages that do occur. By collecting data from actual end users (aka Real User Measurement, or RUM) in conjunction with your synthetic strategy, site managers are able to identify areas in which they can optimize the user experience once end users actually enter the site. After all, there’s no use in having a highly reliable, fast homepage if your critical conversion paths deeper within the site are slow. Nor is there any point in only measuring user activity if you can’t catch an outage before it impacts them.

This approach can also be applied in any international markets that represent growth potential for your company – China and India being two major markets where eCommerce companies are trying to establish a foothold – so that you can monitor your performance in the face of the obstacles that are unique to different regions around the globe.

Be wary of who you let into your house

A modern website – particularly those of eCommerce or media companies – is often rife with third party tags and elements that can ultimately have a marked effect on the performance of the page. Hosting all of these third parties means that you also have to monitor their performance as well as your own systems, because the two are interconnected; poor performance on the part of just one third party service is sometimes all it takes to drag down performance for a whole site. And as the number of these services grows, the risks grow higher, and websites become harder and more complex to manage.

With that in mind, clear performance baselines are needed for every third party element, with SLAs in place for when they fail to meet the minimum requirements. By maintaining a rigorous monitoring initiative of these elements on top of your own, you can quickly and easily demonstrate the performance of the various hosts on your pages. This allows you to understand overall web performance both before and after a third party service is added, and work with third party service providers to continually optimize performance. And, in the event of a third party failure, companies can implement contingency plans quickly, to suspend the service temporarily from the site.

Stay in bed when you can

Ask any IT professional what their biggest job-related pet peeve is, and the most likely answer you’ll get is being woken up in the middle of the night by what turns out to be a false alert. Often these false positives are the result of network problems as opposed to website problems, or problems with a third party as opposed to a first party. Having an alerting system in place that can tell the difference between these and verify failed tests before issuing an alert can be the difference between a night of restful sleep and one of scrambling for no good reason.
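
One pragmatic pattern is to confirm a failure before paging anyone, for example by re-testing after a short pause (or from a second vantage point) and alerting only if the failure repeats. A minimal, vendor-neutral sketch; page_oncall() is a hypothetical stand-in for your real paging hook:

```python
# Verify a failed check before alerting: re-test after a short pause and only
# page if the failure repeats.
import time
import requests

def page_oncall(message):
    # Hypothetical stand-in for your real paging/alerting hook.
    print("PAGE:", message)

def check(url):
    try:
        return requests.get(url, timeout=10).ok
    except requests.RequestException:
        return False

def verified_alert(url, retries=2, pause_s=30):
    for _ in range(retries + 1):
        if check(url):
            return False          # healthy, or a transient blip: no alert
        time.sleep(pause_s)
    page_oncall(f"{url} failed {retries + 1} consecutive checks")
    return True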

The ultimate goal is to empower IT teams to find and fix the widest range of problems possible in the least amount of time. Having those teams overwhelmed with excess data in order to fix a problem is a colossal waste of time and resources, as is having them spending time on problems that are ultimately outside of their control. By freeing that time up to work on initiatives to improve the existing platforms, all three groups – the IT department, the business team, and the end users – will be able to enjoy a richer, more rewarding web experience.


The post Aligning Web Performance Goals with Business Initiatives appeared first on Catchpoint's Blog.

WordPress Analytics and Performance Monitoring made easy

Perf Planet - Wed, 07/29/2015 - 05:55

I was recently asked to provide some analytics metrics around popular blog posts and popular authors. If you are a WordPress admin you might have been asked similar questions. As we use Dynatrace UEM (User Experience Management) for analytics and performance monitoring of our WordPress instance anyway I just went forward using it. What follows in […]

The post WordPress Analytics and Performance Monitoring made easy appeared first on Dynatrace APM Blog.

Performance Test Your Password Protected Pages, New Export Options and More!

Perf Planet - Tue, 07/28/2015 - 07:34

Just 2 weeks after announcing our new Recurring Performance Tests, we have two more great features to announce for our Zoompf WPO Beta!

Test Your Password Protected Pages

If you’re hosting a full blown web based application (like Zoompf), or even a smaller password protected area of your content/ecommerce site, you’ll like this feature.

The Login Script feature allows you to specify user/password login credentials to be used prior to running a performance test on a specific password protected page in your site. For example, say you want to test a heavy user dashboard page that requires a login to access.

This feature works as follows:

  1. When creating a New Performance Test, select the Advanced option and expand the Authorization section.
  2. Check Enable Login Script and you’ll see something like this:

3. For the Login Page URL field, enter the URL of the page you typically use to log into your site. If this is not readily apparent, try logging out of your site first, then visiting the password protected page you want to test. Most likely the web application will redirect you to a login page. That’s the URL you want to use here.

4. Login Form is an optional field to be used only if your login page contains multiple HTML form tags. For most login pages, you can leave this blank.

If your login page does contain more than one form tag, use this field to specify the name or id attribute of the form tag surrounding your user and password input fields. The scanner will match on name first, id second. If neither exists, you’ll need to add one of these attributes to your source code.

5. Plug in the name and password form field names and values. Typically these are in HTML input tags with names like “email” and “password”, but might be different based on your site. If your login has additional parameters (say a userid as well), then click Add Field to supply additional form values to post for the login.

6. You can now close out the authorization section and continue to configure your performance test as before. For the Start URL, specify the direct URL to the password protected page you want to test.

7. Launch the test. The Zoompf Performance scanner will first visit the login page URL you provided, using the field names and values you supplied. Assuming the login succeeds, the web server will return an authorization cookie that is then passed with the next request to the start URL(s) for the performance test, thus granting authorized access to that protected page.

Once the test is complete, you’ll see a nice screenshot of the password protected page, and all cookies are cleared from the scanner server to preserve your security. Still, to be ultra-safe, we’d recommend only using a userid with limited permissions for your testing.
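
Under the hood this is the familiar form-post-plus-cookie dance, and you can reproduce it by hand for a quick sanity check. A minimal sketch with the requests library (the URLs and form field names are placeholders; use whatever your login form actually posts):

```python
# Reproduce the login-script flow by hand: post credentials, keep the session
# cookie, then time the protected page. URLs and field names are placeholders.
import time
import requests

LOGIN_URL = "https://www.example.com/login"
PROTECTED_URL = "https://www.example.com/dashboard"

with requests.Session() as s:
    s.post(LOGIN_URL, data={"email": "test-user@example.com",
                            "password": "limited-permissions-password"})
    start = time.perf_counter()
    resp = s.get(PROTECTED_URL)
    elapsed = time.perf_counter() - start
    print(resp.status_code, f"{elapsed:.2f}s", len(resp.content), "bytes")
```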

HTTP Authentication

Useful in more limited access situations like corporate intranets, the Zoompf scanner can also utilize HTTP Authentication to access protected pages. To use, expand the Authentication section mentioned above and scroll down to the HTTP Authentication section. You can now supply domain specific user/password pairs for the various resources loaded by your page (and the start page itself). Simply supply user:password@domain in each line like below:

 

The user/password pairs will be sent in the HTTP header requests for only those domains specified.

Again, we recommend using user/passwords with limited security access, and ideally only for https resources. Still, this may be useful for testing new development projects or limited access corporate intranets.
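
For comparison, the same check from a script: per-domain credentials sent as standard HTTP authentication. A short sketch with the requests library (the host and credentials are placeholders):

```python
# Fetch an intranet-style page behind HTTP authentication.
# Host and credentials are placeholders; prefer https and a low-privilege account.
import requests
from requests.auth import HTTPBasicAuth

resp = requests.get("https://intranet.example.com/reports/",
                    auth=HTTPBasicAuth("readonly-user", "password"),
                    timeout=10)
print(resp.status_code, len(resp.content), "bytes")
```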

Export That Data!

For our second act, I wanted to highlight the new Export button available in the upper-right corner of almost every view.

Depending on the page (or the tab) selected, you’ll get different options for what can be exported:

  1. Copy to Clipboard: a plain text summary of the content on the page that can easily be embedded in a report or your defect tracker.
  2. Email Summary: that same summary, but embeddable in a plain text email that can be sent to an engineer or your defect tracker.
  3. Download CSV: exports a full comma separated value list of all data shown on the active table in the view, preserving any sorting or filtering options used on that page.

A few special notes about the CSV download:

  • If you’re in the snapshot view, the CSV is context specific to the currently selected tab…so try it on different tabs!
  • While the download preserves any sorting or filtering you’ve done on the page, it will export ALL data matching that sort/filter, not just the one page of data you see on the page. In other words, it’s a full export.
  • Where possible, all links exported are share links, meaning they are accessible to users without requiring a login. A share link is a “safe,” read-only view into a specific piece of data, designed to be readily shared with other teams or groups.
  • All data is in raw format to provide the largest degree of granularity. So, for example, file sizes are always in bytes (and remember 1 KB = 1024 bytes, not 1,000 bytes, so there may be a slight adjustment from what you see in the views). See the sketch after this list for a quick way to convert.
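
Because the export is in raw bytes, a tiny post-processing script goes a long way. A sketch (assuming a column literally named “Size (bytes)” and a file named zoompf-export.csv; adjust both to match your actual export):

```python
# Convert a raw-bytes column from an exported CSV into KB for reporting.
# Assumption: the export has a column named "Size (bytes)"; rename to match yours.
import csv

with open("zoompf-export.csv", newline="") as f:
    for row in csv.DictReader(f):
        size_kb = int(row["Size (bytes)"]) / 1024   # 1 KB = 1024 bytes, as in the views
        print(f"{row.get('URL', '?')}: {size_kb:.1f} KB")
```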

There’s some rich data to mine in there, and of course we’re always open to feedback on additional information that will help you optimize your site.

Anyway, that’s it for now, stay tuned for more exciting announcements coming soon!

Not using Zoompf to improve your web performance? Test your site with our free performance report! Want even more detail? Contact us to get a demo of the Zoompf WPO Beta!

The post Performance Test Your Password Protected Pages, New Export Options and More! appeared first on Zoompf Web Performance.

Unlocking Critical SAP Performance Insight

Perf Planet - Fri, 07/24/2015 - 05:35

SAP performance issues can be extremely complex and painful – for users, for administrators, and for IT teams alike. My colleagues and I know this first-hand as our experience as Dynatrace Guardian Consultants gives us unique insight into some rather difficult performance challenges. The seemingly simple question – “Why are users experiencing poor performance?” – […]

The post Unlocking Critical SAP Performance Insight appeared first on Dynatrace APM Blog.

Performance Analysis Week 4 – Performance Defects

Perf Planet - Fri, 07/24/2015 - 02:54
In the fourth and final week of the Performance Analysis series, Ian Molyneaux talks about identifying and managing Performance Defects.

TCP Flags and What They Mean

Perf Planet - Thu, 07/23/2015 - 12:39

So you’re reviewing a packet capture and you come across the TCP headers with the URG or PSH flags set.  Have you ever wondered what they mean? Or what they do? Or what they’re supposed to do? Let’s dive in and check them out.

Pushing the data forward

Consider this scenario: you’re using one of your favorite browsers to download some HTML from a website. The actual request is pretty small and will likely not fill the entire segment, let alone two segments, in order to be queued for dispatch. Instead, the request is packaged and marked with a PSH flag, thereby telling the client’s operating system to move this request along to the server rather than wait for the buffers to fill.

 

We also learn from section 20.5 in TCP/IP Illustrated that the PSH flag is not something that is usually set by an API, but rather determined by the TCP implementation of the particular distribution. Accordingly, Berkeley-derived operating systems use the PSH flag to signal that the send buffers are empty on the client side. This is evident in the response to the request above and captured below.

 

It’s a small reply, which also does not fill a segment. Accordingly, it’s marked with a PSH before the transaction is concluded with the FIN.

My request is URGent!

The intended purpose of the URG flag is to let the stream and receiving application know that there is some data that needs to be prioritized. For example, you have a file transfer that needs to be aborted because you inadvertently sent the wrong file.

The idea was that the URG flag would be set, with a pointer to the last byte of urgent data placed in the urgent pointer field of the TCP header, and that data would be prioritized; the rest of the data in the segment is treated as normal priority data. Typically, the PSH flag is also set, because the data being sent is urgent and shouldn’t wait around for the segment to be full before entering a queue for dispatch.

Per Stevens (Section 20.8), there are differing implementations of the URG flag, and a split over whether it marks urgent data or out-of-band data. To that end, finding a compliant packet trace has been elusive.
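
If you want to go hunting for these flags on your own wire, a minimal sketch with scapy (assumptions: scapy is installed, the script runs with packet-capture privileges, and recent scapy versions render flags as letters such as “PA”):

```python
# Watch live traffic for segments with PSH or URG set, using scapy.
# Assumption: scapy is installed and we have packet-capture privileges.
from scapy.all import sniff, TCP

def show_flags(pkt):
    flags = str(pkt[TCP].flags)          # e.g. "PA" for PSH+ACK
    if "P" in flags or "U" in flags:
        print(pkt.summary(), "flags=", flags, "urgptr=", pkt[TCP].urgptr)

sniff(filter="tcp port 80", lfilter=lambda p: TCP in p, prn=show_flags, count=50)
```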

 

The post TCP Flags and What They Mean appeared first on Catchpoint's Blog.

The SEO Expert’s Guide to Web Performance Using WebPageTest

Perf Planet - Thu, 07/23/2015 - 07:19

Note: we guest posted this guide earlier this week on Moz.com and wanted to share with our Zoompf readers as well. We hope you find it useful!

Any SEO professional knows that both site performance and user experience play an important role in search engine rankings and conversion rates. And just like there are great tools to help you find your search rank, research keywords, and track links, there are also excellent tools to help you improve your site performance as well. In this post, we will dive into one of the best free tools you can use to measure and improve your site performance: WebPageTest.

Do you know these questions?

There are several key questions an SEO professional should answer when it comes to improving the performance and user experience (UX) of your website:

  • What is my Time To First Byte? Time to first byte (or TTFB) is a measure of how fast the network and webserver returned that first byte of data in the HTML file you requested. The lower this number the better, since it means the site responded quickly. TTFB is an important metric since it is the performance measure most strongly correlated with a page’s search ranking. A high TTFB can also indicate an underpowered web server.
  • How quickly does my site render? Have you ever visited a page and then stared at a white screen waiting? Even if your site fully loads in only a few seconds, if the user isn’t seeing any progress they have a bad experience. Getting your site to render quickly can make your site “feel” faster than the competition. And speaking of competition…
  • How does my site compare to my competitors? Having a fast site is always a plus, but knowing how fast your site loads relative to your competitors can give you an idea about where you spend your resources and attention.
  • How does your site respond for mobile users? Sites are increasingly receiving the majority of their traffic from mobile users. When measuring your site performance, it’s critical that you evaluate how well your site performs on mobile devices as well.
  • How do I make my site faster and provide a better UX? Collecting data and metrics is great, but creating a plan of action is even better. In this and a follow up post, we will show you clear steps you can take to improve your site’s performance and UX.

Understanding the answers to these questions will help you speed up your site to improve your conversion rates and UX. Luckily the free tool WebPageTest can help you get there!

What does WebPageTest Do?

Created originally at AOL by Patrick Meenan in 2008 and now enjoying backing by prominent technology companies like Google, WebPageTest (WPT) is the swiss army knife for measuring your site’s performance. While WPT’s capabilities are vast (and sometimes overwhelming), with some guidance you will find that it can be indispensable to improving your site performance. And best of all, WPT is FREE and open sourced under the free BSD license!

At its most basic level, WPT measures how a particular web page loads. As the page loads, a number of useful metrics are captured, cataloged and then displayed in various charts and tables useful for spotting performance delays. These metrics and visuals can help us answer the important questions we listed above. You can also control many aspects of WPT’s analysis such as the platform to use (desktop vs mobile), browser of interest (Chrome, Firefox, IE, etc), and even the geographic location.

Actually, this is a vast simplification of all the available options and abilities: WPT can do much, much more…so much so that a book is already in the works. Still, you can get great value with just some high-level basics, so let’s dive in!

Testing your site with WebPageTest

Let’s walk through an example. Even if you are familiar with WebPageTest, you might want to skim. We are going to select some specific options to make sure that WebPageTest collects and measures all the data we need to answer our important performance and UX questions above.

To start, go ahead and visit www.WebPageTest.org. It’s okay to load this URL on your browser of choice (the browser you visit the URL with is NOT the browser used to run the test; that all happens on the remote server in a controlled environment). If possible, try to use Chrome since some of the more advanced visual tools for displaying the result data work best on Chrome, but it’s not a big deal if you’d rather not.

You should now see a page like this:

Right away you see 2 interesting options:

  • Test Location has a list of over 40 different regions around the world. Through its partnership program, WPT is physically running (for free) on servers located in all of those locations. When you enter a page URL for testing, the server at the location will load that URL locally with the browser you select. Truly wonderful, but of course, you get what you pay for: the speed and reliability of your tests may vary greatly from location to location. In addition, not all regions support the same testing options.
  • Browser contains a number of different desktop and mobile browser configurations available for testing at that location. Note that unless otherwise specified, the mobile-specific browser tests run on actual mobile devices, not emulators. Patrick Meenan has a neat picture of one of these configurations on his blog (below).

Let’s go ahead and stick with the defaults (Dulles, VA and Chrome).

Go ahead and expand the Advanced Settings section and you’ll see something like this:

Some comments here:

  • Connection is the simulated connection speed. Doesn’t matter too much unless you’re doing advanced testing. Just keep the default of Cable 5/1 for now.
  • Number of tests: as the name implies, this controls the number of repeat tests to run. This is very important as the Internet can suffer frequent spikes and jitter in response times based on network congestion (see our earlier post on HTTP/2). To get a reliable result, it’s best to run multiple samples and have WPT automatically choose the median result. We recommend at least 3 tests, more if you have the time to wait.
  • Repeat View specifies if each test should load the page just once (with the browser cache cleared), or once with the cache cleared and again with the cache primed. Why does this matter? Whenever your browser visits a URL for the first time, there is almost always a large number of images, JavaScript files and other resources that must first be downloaded by the browser to be used by that page. Once the browser has all these files, it will (depending on the server settings) cache them locally on your device for each subsequent page loaded from that website. This explains why the first page you visit almost always loads slower than subsequent pages on the same website. We recommend setting First View and Repeat View here so you can measure both the first experience for your new users and the ongoing experience for your repeat visitors.
  • Capture Video: One of the coolest features of WPT – allows you to visually see what your users would see on that device in that location. I’ll dive into this more below, but for now just make sure it’s checked.
  • For now leave the rest of the options blank, and you can ignore the other tabs.

Go ahead and enter your site URL and hit Start Test. It’ll usually take 30-60 seconds to get a result, depending on the options you selected and how deep the work queue is. If you find it taking an inordinately long time, try repeating the test from a different location.

Let’s now look at the results.

WebPageTest Results

Upon completion you’ll see a lot of data returned, much more than I can cover in this post. Let’s stick to the highlights for now.

First, at the top you’ll see some metrics on the overall page load time itself, for example:

As I mentioned above, this is the median result after 3 runs. You’ll also see a breakout of the first page load (no caching by the browser) vs. the repeat page load (browser is now caching some resources). You should almost always expect the repeat view to be faster than the first view. If not, you have some caching problems and should try free tools like PageSpeed Insights and Zoompf to diagnose why your caching is not properly configured.

There’s a lot to digest in these numbers, so let’s stick to the highlights:

  1. First Byte: This is the Time-to-first-byte metric we are looking for! As you’ll see below, the HTML document is usually a small sliver of the overall time – it’s all the images, JavaScript files, etc. that take most of the time to load. You want a number no more than about 200-400 ms. See our earlier article about optimizing Time to First Byte for recommendations on how to improve this value.
  2. Start Render: This is the point visually where you start seeing something other than a blank white page staring back at you. This also directly maps to a number that we want for our questions above.
  3. Document Complete: This is the point where the webpage has loaded up all the initial components of the HTML DOM and you can start interacting with the page (scrolling and such). You may still see images and other background parts of the page continue to “pop in” on the page after this, though. This number is helpful for developers, but less important from an SEO/UX perspective.
  4. Fully Loaded: This is the point at which everything is done loading. All images, all tracking beacons, everything. Many websites intentionally design for a faster document complete time (time you can interact with the page) at the expense of a slower fully loaded time (e.g. load the “extra stuff” in the background while the user is interacting with the page). There are raging debates over whether this is a good practice or not; I’ll steer clear of those and simply say “do what’s best for your users”. Again, this is a number that is helpful for developers, but less important from an SEO/UX perspective.
  5. Speed Index: This is a WPT-specific metric that averages when the visual elements of the page load. It attempts to solve a growing discrepancy between the values above and what the user perceives as “fast”. The math behind how it is calculated is kind of cool, and you can learn more about it here. The smaller the number, the faster and more completely your page loads.

These metrics help us answer some of our questions above. We will also see how to easily compare your metrics to your competitors’.
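
Once a test finishes, the same numbers are also available as JSON, which makes tracking them over time straightforward. A minimal sketch (the test ID is a placeholder, and the field names TTFB, render, docTime, fullyLoaded, and SpeedIndex reflect WPT’s JSON output but should be verified against an actual result):

```python
# Pull the headline metrics for a finished WebPageTest run from its JSON result.
# The test ID is a placeholder; verify field names against an actual result.
import requests

TEST_ID = "150723_ABCDE_123"   # placeholder
url = f"https://www.webpagetest.org/jsonResult.php?test={TEST_ID}"
median_fv = requests.get(url, timeout=30).json()["data"]["median"]["firstView"]

for key in ("TTFB", "render", "docTime", "fullyLoaded", "SpeedIndex"):
    print(key, median_fv.get(key))
```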

Waterfall Charts

One place WPT really shines is its waterfall charts. Put simply, a waterfall chart is a graph of what resources were loaded by your browser to render a webpage, with the horizontal axis charting increasing time and the vertical axis representing the in-order sequence of loaded resources from top to bottom. In addition, each line in the chart is color coded to capture the various loading and rendering activities performed by your browser to load that resource.

For example:

Waterfall charts are valuable for identifying bottlenecks causing your page to load slow. A simple frame of reference: the wider the chart, the slower your page loads, and the taller the chart, the more resources it loads. There is a ton of information packed into a waterfall chart, and interpreting a waterfall is a big topic with a lot of nuances. So much so that we’re going to dive into this topic in much more detail in our next post. Stay tuned.

Seeing how your site loads

If waterfall charts are the “killer app” of WPT, its performance videos are the killer upgrade. By selecting that Capture Video checkbox when you started your test earlier, WebPageTest captured a filmstrip showing exactly what your user would see if loading your website using the test parameters you provided. This is extremely valuable if, for example, you don’t happen to be working in Singapore on a Nexus 7, but would still like to see what your users there experience.

To access your video, click the Summary tab on your test result, then scroll down and click the Watch Video link on the far right column next to the Test Result you want to view.

You’ll then see something similar to this:

Remember those metrics that are important? If your site has a slow TTFB, you’ll see a big delay before anything happens. The video also helps show you your start render time. This really helps provide some context: 750ms might sound fast, but being able to visualize it really drives home what your users are experiencing.

WPT’s video of your page load in itself is a great way to share with others exactly what your users are seeing. It is also a phenomenal tool to help build the case internally for performance optimization if you aren’t happy with the results. But can we do more?

Comparing against your competitors (and yourself)

Yes you can! WPT’s video capabilities go further, and that’s where it gets really interesting: you can also generate side by side videos of your site versus your competition!

To do so, repeat the steps above to generate a new test, but now using the URL of your competitor. Run your test and then click Test History. You’ll see something like this:

Click compare on the 2 tests of interest and you’ll see a cool side by side filmstrip like this:

Scrolling left and right will show a visual comparison of how the 2 pages loaded relative to each other. The gold boxes indicate when visual change occurred on the site getting loaded. Scroll down and you’ll see an overlay showing where in the waterfall chart the visual images loaded. Click the Create Video button and you’ll see a cool side by side animation like this.

This is a fantastic way of visualizing how your users see you versus your competition. In fact, you can compare up to 9 simultaneous videos, as we whimsically did some time back in this video:

But what about testing for mobile? While you can run 2 separate WPT analyses for your site using a desktop and a mobile device, this is rather clunky; you have to switch back and forth comparing results. Instead, I am a big fan of using the comparison options to test my site across multiple different devices. This allows you to leverage all the great features above, like side-by-side video loading, and quickly see problems. Is your mobile site loading faster than your desktop site? It should be, and if not, you should investigate why.

Getting even more from WebPageTest

I could spend hours going over all the advanced features of WebPageTest, and in fact Patrick Meenan has done just that in several of his great presentations and videos, but I wanted to wrap this up with a few of the more particularly noteworthy features for the SEO focused performance optimizer:

  • Private Instances: If WPT is loading too slowly for you, or your geographic needs are very specific, consider hosting a private instance on your own servers. WPT is open source and free to use under the free BSD license. There are many great resources to help you here, including Google’s documentation and Patrick’s presentation.
  • API: Most if not all of the data exposed in the WPT web interface is also accessible via a RESTful API. If you’d like to show this data internally in your own format, this is the way to go (see the sketch after this list).
  • Single Point of Failure (SPOF) Testing: What happens to your site if a key partner is down? Find out with the SPOF testing option. Simply list the host name(s) you want to simulate downtime with via the SPOF tab when launching a test, and see how poorly your site performs when a key resource fails to load. You may be horribly surprised. Even “fast” sites can load in 20+ seconds if a key advertising partner is offline. In fact, this feature is so useful we will explore using SPOF testing in our next post.
  • TCP Dumps: If your network engineers are debugging a truly thorny problem, additional logging is available via TCP Dumps. Especially useful for debugging server side problems. Skip to timecode 15:50 here to learn more.
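
As promised above, here is a minimal sketch of driving WPT through its REST API: submit a test with the options we chose earlier (3 runs, first and repeat view, video), poll until it completes, and read a metric back. The endpoint names and parameters reflect the public WPT API but should be double-checked against the current documentation, and the API key is a placeholder:

```python
# Submit a WebPageTest run through the REST API and poll for the result.
# Assumptions: you have an API key; parameter and field names below should be
# verified against the current WPT documentation.
import time
import requests

API_KEY = "YOUR_WPT_API_KEY"           # placeholder
params = {
    "url": "https://www.example.com/",
    "k": API_KEY,
    "f": "json",       # JSON response instead of HTML
    "runs": 3,         # median of 3 runs, as recommended above
    "video": 1,        # capture the filmstrip/video
    "fvonly": 0,       # first view and repeat view
}
submit = requests.get("https://www.webpagetest.org/runtest.php", params=params).json()
result_url = submit["data"]["jsonUrl"]

while True:
    result = requests.get(result_url, timeout=30).json()
    if result.get("statusCode") == 200:   # 1xx codes mean still queued or testing
        break
    time.sleep(15)

print(result["data"]["median"]["firstView"].get("SpeedIndex"))
```
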
Next Time: Diagnosing Performance Problems with WebPageTest

WebPageTest is an indispensable tool for finding and debugging front end performance problems, and a faster site leads to better user engagement and improved search rank. By default, WPT exposes a number of key metrics that are critical to SEO professionals and their understanding of their site’s performance and UX. I hope this overview provided a basic foundation for you to start diving in and using WebPageTest to optimize your own website speed.

While we have answered nearly all the important questions listed at the start of this post, we left one largely unanswered: What do I do to make my site faster and improve the UX? To answer this question, we need to go beyond just looking at the data WPT presents us, and instead go deeper and review the data to diagnose your performance bottlenecks. This includes not only using some of the more advanced features of WPT like the SPOF testing, but also reviewing the waterfall charts and using tools like Zoompf’s free performance report tool to analyze what is slowing down your website and learn what you can do to improve your performance. We will do all that and more in our next post.

The post The SEO Expert’s Guide to Web Performance Using WebPageTest appeared first on Zoompf Web Performance.

How to use performance intelligence to deliver better retail banking experiences

Perf Planet - Thu, 07/23/2015 - 00:00

Banks today are investing heavily in transformation projects to make it convenient for their customers to do business with them, anytime, anywhere, anyhow. While making the transition from the early days of retail IT banking systems to the current multi-channel digital environment, banks are also making vehement efforts to address the challenges brought about by […]

The post How to use performance intelligence to deliver better retail banking experiences appeared first on Dynatrace APM Blog.

The drastic effects of omitting NODE_ENV in your Express.js applications

Perf Planet - Wed, 07/22/2015 - 05:55

Most developers learn best by examples, which naturally tend to simplify matters and omit things that aren’t essential for understanding. This means that the “Hello World” example, when used as starting point for an application, may be not suitable for production scenarios at all. I started using Node.js like that and I have to confess that […]

The post The drastic effects of omitting NODE_ENV in your Express.js applications appeared first on Dynatrace APM Blog.

From Alert Email to Backup Fail to Performance Problem

Perf Planet - Tue, 07/21/2015 - 14:10

If you got an alert email from your performance monitoring solution telling you your site was now loading 600 ms slower, what would you do? What is your process to figuring out what happened and what you need to do to fix it?

This is the story of how one of our customers came to use Zoompf. And it all started with an alert email from Pingdom to their development team. The alert email notified them that the page load time for their website had increased by around 600 ms. The lead engineer (name changed to protect the innocent, let’s call him “Han”) was on duty that weekend, so it was his job to figure it out.

The first thing Han checked was the backend services. Perhaps there was a database query or a code path that was taking longer than normal. Han’s company used an agile development process and published new versions of the site several times a week. Maybe something bad slipped past QA. However, the application tier and database tier were operating normally and there was nothing in the logs. To be sure, Han looked through the commit log in source control. None of the changes stood out as something that would affect page load time significantly.

Next, Han checked the web servers. CPU load was normal. Traffic levels were normal. However, the network I/O measurements were higher than normal. In fact, the web server cluster was transmitting about 35% more data than normal. That’s strange. Why were traffic levels the same, but bytes out had increased significantly? It was this key piece of data that led Han to the problem.

It turns out, earlier that day, there had been a hard drive failure on one of the nginx servers that handled static content. The IT/Ops team provisioned a new drive and restored from backup. After some digging and research, Han was able to determine that the restored web server configuration file was actually out-of-date. It was an older version whose HTTP compression settings were not optimized, and so the web site was serving all its CSS, JavaScript, and JSON responses without HTTP compression. This is why the bandwidth usage was up, way up. Since more content had to be transmitted, it took longer to download and render the page, which is what was slowing down the page load, and what ultimately triggered the alert email Han received.
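
A lightweight check that would have caught this class of regression: request a few representative text assets with an Accept-Encoding header and verify the responses actually come back compressed. A sketch with the requests library (the asset URLs are placeholders):

```python
# Verify that text assets are served with HTTP compression.
# The asset URLs are placeholders; point them at your own CSS/JS/JSON endpoints.
import requests

ASSETS = [
    "https://www.example.com/static/app.css",
    "https://www.example.com/static/app.js",
    "https://www.example.com/api/config.json",
]

for url in ASSETS:
    resp = requests.get(url, headers={"Accept-Encoding": "gzip, deflate"}, timeout=10)
    encoding = resp.headers.get("Content-Encoding", "none")
    status = "OK" if encoding in ("gzip", "deflate", "br") else "NOT COMPRESSED"
    print(f"{status}: {url} (Content-Encoding: {encoding})")
```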

Within a few minutes, Han updated the config file, and the site began serving static assets with HTTP compression again. Page load time dropped back to normal level, and the site was fast again.

This was a perfect example of how something beyond your control, like IT restoring from a backup, can impact your website’s performance. How would IT know if their change impacted performance? If you get an alert, how do you figure out the source of the problem? Han did an admirable job in a tough situation, but it still took several hours to find and fix the problem. You and your team may be performance experts, but you have better things to do with your time. And it’s hard to do manual work efficiently when you are under the pressure of the moment with a site that has become slow.

This is where Zoompf can help. Zoompf automates the analysis of your website to tell you the specific problems that are affecting a page. Even better, our free Zoompf Alerts product continuously scans your website throughout the day, looking for specific front-end performance issues, and alerts you when new problems are introduced. If Han had been using Zoompf Alerts, he would have received an email as soon as the old server configuration had been deployed. Sound like something that could help you? You can join Zoompf Alerts now for free!

The post From Alert Email to Backup Fail to Performance Problem appeared first on Zoompf Web Performance.

How to get Visibility into Docker Clusters running Kubernetes

Perf Planet - Tue, 07/21/2015 - 10:56

Google officially launched today the final version of their cluster manager project Kubernetes, which treats Docker containers as first-class citizens. In this blog post, I will show you how you can monitor the performance of your Kubernetes cluster using the Dynatrace Docker Monitor Plugin, which is available here and on GitHub. Why Docker? Docker has been […]

The post How to get Visibility into Docker Clusters running Kubernetes appeared first on Dynatrace APM Blog.

Enhancing Catchpoint’s Alerting System with VictorOps

Perf Planet - Mon, 07/20/2015 - 13:35

When systems fail, it’s imperative to not only have the right data as it pertains to the problem, but it is also important to have the proper tools to investigate the problem, and to communicate the specifics of the failure to your team. In today’s data driven world, getting an alert about a problem is relatively easy; solving these problems collaboratively with teammates as quickly as possible, however, can be a real challenge.

It’s with that in mind that we’ve integrated VictorOps into the Catchpoint solution. By combining our existing innovative alert system with their collaborative platform, it means that it’s now easier than ever for DevOps professionals and their teams to solve problems quickly.

What VictorOps Does

VictorOps keeps your entire team in the loop

Their unique platform for resolving issues allows for real-time problem solving by leveraging the entire team’s collective knowledge. VictorOps allows teams to easily share situational data, quickly inject thoughts and comments, and leverage all team members equally around data supplied by platforms such as Catchpoint.

How Catchpoint and VictorOps Work Together

Catchpoint already offers a large array of alerting options on every metric measured, and includes trend detection and dynamic thresholds based on historical data. Additionally, the customized Catchpoint nodes, whether they’re part of the most extensive node infrastructure in the industry or OnPrem agents located to suit your needs, are already built to reduce false alerts. Now, integrating with VictorOps means that you can collaboratively solve any issues brought to your attention by Catchpoint’s alerting system.

Our partnership with VictorOps provides you with the capability to streamline your TTR with correct, actionable data, so you spend your time managing only the alerts that matter, with the right people to address them.

How to Set Up the VictorOps Integration

1. Log in to your VictorOps account.

2. If you are a new VictorOps user, click Complete Setup on the top left corner of the page next to your company name.

3. Click Integrations at the top of the page.

4. If you are an existing VictorOps user, go to Settings and click Integrations.

5. Select REST Endpoint from the integrations list.

6. Click Enable Integration.

7. Under Post URL, select the portion of the URL up until the ‘/$routing_key’ as shown. This will be the VictorOps REST Endpoint that you will point Catchpoint to.

Catchpoint Setup

1. Log in to the Catchpoint portal and go to Settings –> API

2. In the Alerts API Section, paste in the VictorOps REST Endpoint POST URL that was provided when you enabled the REST Endpoint within VictorOps.

3. The Alert format you need to choose is Template. Click Add New and you will be taken to the Edit Template window.

4. Give the Template a name and set the Format to Text.

5. The Template format must follow the VictorOps REST Endpoint guidelines. At a minimum you must include a ‘message_type’ field.

6. Other fields such as the ‘entity_type,’ ‘state_message,’ or ‘monitoring_tool’ can include Catchpoint Alert API Macros that will allow you to customize the Alert content. A full list of the Alert API Macros can be found here.


7. Save your template, then save the API configuration.

8. If you have any alerts configured on your tests, you will now see Catchpoint Alerts appear within your VictorOps timeline.
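
If you want to sanity-check the endpoint before wiring up real alerts, you can post a test payload to it directly. A minimal sketch (the endpoint URL and routing key are placeholders from your own integration; message_type is the one field the endpoint guidelines require):

```python
# Send a test alert straight to the VictorOps REST endpoint.
# Endpoint URL and routing key are placeholders from your own integration.
import requests

ENDPOINT = "https://alert.victorops.com/integrations/generic/20131114/alert/YOUR-API-KEY"
ROUTING_KEY = "catchpoint-alerts"      # placeholder

payload = {
    "message_type": "INFO",            # e.g. CRITICAL / WARNING / INFO / RECOVERY
    "entity_id": "catchpoint/test-alert",
    "state_message": "Test alert from the Catchpoint integration walkthrough",
    "monitoring_tool": "Catchpoint",
}

resp = requests.post(f"{ENDPOINT}/{ROUTING_KEY}", json=payload, timeout=10)
print(resp.status_code, resp.text)
```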

The post Enhancing Catchpoint’s Alerting System with VictorOps appeared first on Catchpoint's Blog.
