Web Performance Today


The (Context) Sensitivity of Speed

Tue, 04/28/2015 - 10:31

Traditionally, when we look at web performance, we create a time-series graph that focuses on dimensions like browser or page template. These are all technically oriented measurements that are collected automatically based on what is readily available. Last year at the Velocity conference, I met Anh-Tuan Gai from WebPerf IO, who showed me a more business-oriented approach to visualizing performance data. I found his approach very interesting and asked him to collaborate on sharing it here on Web Performance Today.

At WebPerf IO, they start the performance measurement activity by segmenting users into groups based on business value. For example, if the site is particularly interested in generating traffic from sources like Facebook or search engines, then they segment users into these groups prior to looking at the performance data. To help illustrate this, Anh-Tuan provided anonymized data.

Here is a look at the data from a traditional time series perspective:

This graph shows a time series of high-traffic pages from a sample site. There are three different page templates (home, section, and article), and the graph shows the performance of each over a nine-week period. While informative, this graph alone makes it difficult to understand how page performance impacts the business.

In an attempt to uncover how a business is impacted by speed, it is common to introduce bounce rate into a visualization. A low bounce rate normally indicates higher user engagement: users stay on the site longer and view more pages. A high bounce rate indicates that users are abandoning the site quickly and are not as engaged.

The above graph shows a traditional view of bounce rate (%) by load time (in seconds). We have also included the traffic volume (page view count) to illustrate that most users experience a page that loads in ~2.5 seconds with a bounce rate of ~24%.

It is becoming more evident that bounce rate climbs as pages take longer to load. However, this is still a relatively generalized graph, and it is hard to see how specific groups of users are reacting to performance.

Now for some excitement: What if we segment the users into the business groups that our marketing efforts are targeting? In this case, we’re looking at users who are coming from Facebook and search results.

The first group – ‘direct’ (light blue line) – are users who navigate directly from a bookmark or by entering the URL in the address bar. The second group – ‘search’ (darker blue line) – are those navigating to the target site from a search engine like Google. The third group (orange line) is the most interesting, made up of users who navigate to the target site from Facebook.

In this case, we can see that users from Facebook, a segment our business is particularly interested in, are much more sensitive to performance. For the first two groups, the bounce rate for pages that load in 4 seconds is in the 20% – 30% range, but more than 60% of users who navigated from Facebook have bounced at this point. Interesting, and not at all apparent from the traditional views.
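
To make the segmentation concrete, here is a minimal sketch of how real-user beacons could be grouped by traffic source and load-time bucket before computing bounce rate. The field names and segment rules are illustrative assumptions, not WebPerf IO's actual implementation.

```typescript
// Sketch: bounce rate by traffic segment and one-second load-time bucket.
// The Beacon shape and the segment rules below are illustrative only.
interface Beacon {
  referrer: string;   // document.referrer captured on the landing page
  loadTimeMs: number; // e.g. loadEventEnd from the Navigation Timing API
  bounced: boolean;   // true if the session ended after a single page view
}

type Segment = "direct" | "search" | "facebook" | "other";

function segmentOf(referrer: string): Segment {
  if (referrer === "") return "direct";
  if (/google\.|bing\.|yahoo\./.test(referrer)) return "search";
  if (/facebook\.com|fb\.com/.test(referrer)) return "facebook";
  return "other";
}

// Group beacons per segment and load-time bucket, then report the bounce rate
// (% of bounced sessions) for each bucket, ready to plot as separate lines.
function bounceRateBySegment(beacons: Beacon[]): Map<string, number> {
  const totals = new Map<string, { bounced: number; all: number }>();
  for (const b of beacons) {
    const key = `${segmentOf(b.referrer)}@${Math.floor(b.loadTimeMs / 1000)}s`;
    const t = totals.get(key) ?? { bounced: 0, all: 0 };
    t.all += 1;
    if (b.bounced) t.bounced += 1;
    totals.set(key, t);
  }
  const rates = new Map<string, number>();
  for (const [key, t] of totals) rates.set(key, (100 * t.bounced) / t.all);
  return rates;
}
```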

In further conversations with Anh-Tuan, he told me that, in his experience, the segments we care about most are often the most sensitive to performance. Considering this, many businesses looking at aggregate performance metrics may be underestimating the impact of performance on their most important users.

For more on the importance of speed and how the context of speed can affect very real business goals, especially in ecommerce, I invite you to download our most recent State of the Union: Ecommerce Page Speed and Web Performance Report. The report offers data on how the most popular online retail sites stack up against each other. There are also some great tips you can implement to help take charge of your site’s performance.


New Findings: State of the Union for Ecommerce Page Speed and Web Performance [Spring 2015]

Wed, 04/15/2015 - 08:40

There are compelling arguments why companies – particularly online retailers – should care about serving faster pages to their users. Countless studies have found an irrefutable connection between load times and key performance indicators ranging from page views to revenue.

For every 1 second of improvement, Walmart.com experienced up to a 2% conversion increase. Firefox reduced average page load time by 2.2 seconds, which increased downloads by 15.4% — resulting in an estimated 10 million additional downloads per year. And when auto parts retailer AutoAnything.com cut load times in half, it experienced a 13% increase in sales.

Recently at Radware, we released our latest research into the performance and page speed of the world’s top online retailers. This research aims to answer the question: in a world where every second counts, are retailers helping or hurting their users’ experience – and ultimately their own bottom line?

Since 2010, we’ve measured and analyzed the performance of the top 100 ecommerce sites (as ranked by Alexa.com). We look at web page metrics such as load time, time to interact (the amount of time it takes for a page to render its feature “above the fold” content), page size, page composition, and adoption of performance best practices. Our goal is to obtain a real-world “shopper’s eye view” of the performance of leading sites, and to track how this performance changes over time.

Here’s a sample of just a few of the findings from our Spring 2015 State of the Union for Ecommerce Page Speed & Web Performance:

Time to interact (TTI) is a crucial indicator of a page’s ability both to deliver a satisfactory user experience (by delivering content that the user is most likely to care about) and to fulfill the site owner’s objectives (by allowing the user to engage with the page and perform whatever call to action the site owner has deemed the key action for that page). The median TTI in our research was 5.2 seconds.

Ideally, pages should be interactive in 3 seconds or less. Separate studies have found that 57% of consumers will abandon a page that takes longer than 3 seconds to load. A site that loads in 3 seconds experiences 22% fewer page views, a 50% higher bounce rate, and 22% fewer conversions than a site that loads in 1 second, while a site that loads in 5 seconds experiences 35% fewer page views, a 105% higher bounce rate, and 38% fewer conversions. Only 14% of the top 100 retail sites rendered feature content in 3 seconds or less.

Page size and complexity typically correlate to slower load times. The median top 100 page is 1354 KB in size and contains 108 resource requests (e.g. images, HTML, third-party scripts). In the past two years, the median number of resources for a top 100 ecommerce page has grown by 26% — from 86 resources in Spring 2013 to 108 resources in Spring 2015. Each of these page resources represents an individual server call.
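
If you want a quick sense of how your own pages compare, a rough sketch like the one below, run from the browser console on a loaded page, counts requests and approximate transferred bytes using the Resource Timing API. Note that cached responses and cross-origin resources without a Timing-Allow-Origin header report a size of zero.

```typescript
// Rough page-composition check using the Resource Timing API.
const resources = performance.getEntriesByType("resource") as PerformanceResourceTiming[];
const requestCount = resources.length + 1; // +1 for the HTML document itself
const transferredKB = resources.reduce((sum, r) => sum + r.transferSize, 0) / 1024;
console.log(`${requestCount} requests, ~${Math.round(transferredKB)} KB transferred`);
```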

Images typically comprise between 50% and 60% of a page’s total weight, making them fertile territory for optimization. Yet 43% of the top 100 retail sites fail to compress images. Only 10% received an ‘A’ grade for image compression.

Despite eBay and Walmart’s status as retail giants, both sites have had less than stellar performance rankings in previous ecommerce performance studies. In our Fall 2014 report, these sites ranked 36th and 57th, respectively, out of 100. Our latest research, however, finds that both companies have made an impressive performance comeback – with each site’s home page rendering primary content in less than 3 seconds. The report provides insight into what each site did to improve its performance.

The good news is that there are opportunities for every site – even those that are relatively fast already – to fine-tune performance by taking a more aggressive approach to front-end optimization.

Get the report: Spring 2015 State of the Union for Ecommerce Page Speed & Web Performance


The Case for Auto-Preloading: The Anatomy of a Battle-Tested WPO Treatment

Mon, 03/09/2015 - 16:28

The only constant is change.
My career has featured many changes – my positions change, the seasons change, and technologies change (almost as often as the seasons). Despite these changes, I like to keep learning and sharing what I’ve learned as an inventor, developer, speaker, and performance-obsessed researcher. That is why I am especially pleased to share that I’m being trusted to take the helm here at Web Performance Today. Radware and Tammy Everts have given me this opportunity to share some of what I know with the community they’ve built here. I’ll do my best not to disappoint.

Looking back on more than six years of implementing web performance optimization (WPO) in the field, auto-preloading is by far the single most effective performance optimization technique I have seen. It provides the most benefit — we often achieve more than 70% acceleration from this technique alone.

I’d like to tell you the story of how we came to recognize the incredible value of auto-preloading, and how this single technique doesn’t just make individual pages faster — it accelerates a user’s entire flow through a site and ultimately delivers the best possible user experience.

Overview: How Does Preloading Work?

Preloading, also known as “predictive browser caching”, is a WPO technique that uses various methods to load resources (e.g. image files) needed by the “next” page in a user’s path through the site while the user is still consuming content on the current page. The preloaded resources are sent with expires headers so they are pulled from cache when the next page is viewed.
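
As a simplified sketch of the core idea (not Radware's actual implementation), image preloading can be as basic as quietly fetching likely next-page images after the current page has finished loading. The URLs below are purely illustrative, and scripts and stylesheets need the browser-specific techniques discussed later in this post.

```typescript
// Simplified preloading sketch: after the current page finishes loading, fetch
// images the "next" page is likely to need. Because the server returns them
// with far-future Expires/Cache-Control headers, the next navigation pulls
// them straight from the browser cache instead of making new round trips.
const likelyNextPageImages = [
  "/img/article-hero.jpg",
  "/img/section-banner.jpg",
]; // illustrative URLs only

window.addEventListener("load", () => {
  // Delay slightly so preloading never competes with the current page's own work.
  setTimeout(() => {
    for (const url of likelyNextPageImages) {
      const img = new Image(); // never attached to the page, so the visible DOM is untouched
      img.src = url;           // the response still lands in the browser cache
    }
  }, 1000);
});
```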

Although auto-preloading is based on some of the most basic of the WPO principles (e.g. make fewer requests, leverage the browser cache), it is not simple to implement without significant infrastructure and development cycles.

Our “Aha” Moment

Consolidation is a widely utilized performance best practice that, put in its simplest terms, bundles similar page resources (e.g. images) so that fewer round trips are required to send resources from the server to the user’s browser. However, simple consolidation has a performance drawback: it doesn’t play well with another performance technique, browser caching.

With browser caching, resources are stored in the browser’s cache to be re-used on subsequent pages in a user’s flow through the site, again eliminating the need for server round-trips. So while the browser might cache a consolidated bundle of resources, that bundle might contain only some, but not all, of the resources needed for the next page in the flow. If we then create a bundle targeted at the “next” page in order to reduce round-trips, we must include all the common resources previously loaded on the first page, plus any resources unique to the “next” page.
Effectively, this is a double download of the resources that are common across the pages, which is why some performance experts consider consolidation an anti-pattern (especially on warm-cache page views).
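
A hypothetical example makes the penalty concrete:

```typescript
// Hypothetical illustration of the consolidation penalty described above.
const homePageBundle = ["logo.png", "nav.css", "home-hero.jpg"];
const articlePageBundle = ["logo.png", "nav.css", "article-art.jpg"];

// Resources shipped a second time because they live inside two different
// bundles, even though the browser already had them after the first page view.
const downloadedTwice = articlePageBundle.filter((r) => homePageBundle.includes(r));
console.log(downloadedTwice); // ["logo.png", "nav.css"]
```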

When we were developing advanced treatments for our FastView technology, we saw this problem as a golden opportunity to take advantage of the time that a user is spending looking at the current page to “preload” individual resources into their browser cache. In other words, while a user is scanning the page and deciding where to visit next, resources for every likely navigation choice are quietly downloading behind the scenes.

Sounds simple, doesn’t it? It’s not.

Roadblock #1: The Preloading Mechanism Itself Is Pretty Tricky

There were two critical roadblocks to making preloading work.

The goal of the preloading mechanism is to load resources into the local browser cache after the current page rendering is complete. A key requirement of this process is that the current page DOM must not be affected by any of the preloading activity.

We found that, when it comes to preloading, one size does not fit all. There is no common preloading technique that works across browsers, and sometimes the same technique does not even work across different versions of the same browser. For example (a simplified per-browser sketch follows the list below):

  • Firefox supports the prefetch syntax, which does everything you need, but does not raise an event to tell you when it is done. (This makes tracking your progress difficult but not impossible.)
  • Chrome supports the prefetch directive, but not in the Google Analytics build, so instead you must load resources as objects.
  • Most modern browsers support embedding images as base64-encoded data URIs, except Internet Explorer 8, which imposes a 32K limit (that’s ~24K before encoding: don’t get that wrong or it won’t work).
  • Internet Explorer 7 does not support data URIs, so you must use an MHTML cabinet (an old holdover from Microsoft Office) or, alternatively, good old image spriting, but that means converting all your image references to that pesky CSS syntax.
  • For Internet Explorer 6, you must use spriting for images, and neither data URIs nor MHTML will save you.
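
Here is the simplified per-browser sketch promised above. The feature detection and fallback choices are assumptions for illustration, not FastView's actual logic.

```typescript
// Per-browser preload selection, sketched: browsers that understand
// <link rel="prefetch"> get the low-priority hint; everything else falls back
// to off-DOM Image requests, which reliably warm the cache for image URLs.
function supportsPrefetch(): boolean {
  const link = document.createElement("link");
  return !!link.relList && link.relList.supports("prefetch");
}

function preload(urls: string[]): void {
  const usePrefetch = supportsPrefetch();
  for (const url of urls) {
    if (usePrefetch) {
      const link = document.createElement("link");
      link.rel = "prefetch";
      link.href = url;
      document.head.appendChild(link); // a hint only; it never changes the rendered page
    } else {
      new Image().src = url; // fallback: fetches into the cache without touching the visible DOM
    }
  }
}
```

(As an aside on the Internet Explorer 8 limit above: base64 inflates payloads by roughly 4/3, which is why a 32K cap on the encoded data URI works out to roughly 24K of raw image data.)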

With so many browser-specific details, we took our techniques into the field and added detailed analytics so we could track how well each technique really worked and when it didn’t. We refined and debugged until we had repeatable results.

Roadblock #2: Deciding Which Resources to Preload

On any given web page, there are potentially dozens of different navigation choices a user can make. Preloading resources for every conceivable choice is impossible to do without creating new problems related to over-preloading and bandwidth consumption, not to mention the fact that it would fill up the browser cache much too quickly, thereby negating the purpose of the browser cache.

In the early stages of developing this feature, we would study the page flows (by looking at our own analytics engine) and create preload lists manually. This led to the much more sophisticated approach that FastView now employs: a heuristic preload list, based on observed page transitions, which FastView collects and updates automatically in real time.

This is a good example of where WPO can really help developers who would otherwise have to create a complex subsystem dedicated to WPO and preloading. Since FastView directs site traffic, it is in a perfect location to collect and analyze the data required to create the best possible preload list based on real user behavior. As user behavior changes and new flows become more important, the lists change to reflect the new usage pattern. For the same reason developers don’t build compilers or linkers in house, automated WPO tools like FastView provide key mechanisms that would be very difficult and not cost-effective for site owners to build themselves.
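
As a rough sketch of the idea (again, not the actual FastView implementation), a preload list can be derived from observed page-to-page transitions, keeping only each page's most common next destinations:

```typescript
// Sketch of a heuristic preload list built from observed page transitions,
// a simplified stand-in for what an automated tool collects in real time.
const transitions = new Map<string, Map<string, number>>();

// Call this whenever analytics records a navigation from one page to another.
function recordTransition(fromUrl: string, toUrl: string): void {
  const next = transitions.get(fromUrl) ?? new Map<string, number>();
  next.set(toUrl, (next.get(toUrl) ?? 0) + 1);
  transitions.set(fromUrl, next);
}

// The preload list for a page is its top-N most likely next pages, a cap that
// keeps over-preloading and browser-cache churn in check.
function preloadListFor(fromUrl: string, topN = 3): string[] {
  const next = transitions.get(fromUrl);
  if (!next) return [];
  return [...next.entries()]
    .sort((a, b) => b[1] - a[1])
    .slice(0, topN)
    .map(([url]) => url);
}
```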

Bonus: Preloading Also Solves Refactoring Issues

One of the strengths of preloading is that it retains the original resource names and granularity of the origin site. This means that when common resources are created by site developers they are cached, as is, without repackaging. This makes for easier debugging and less complication in production environments.

Takeaway

Web performance optimization isn’t a per-page challenge. The only meaningful way to look at WPO is in terms of contextual acceleration (i.e. multi-page flows), and auto-preloading is the most effective technique for ensuring the performance of the entire user experience on a site.

LEARN MORE: Preloading is a feature in our FastView WPO solution, as well as the latest release of our Alteon application delivery controller.


Google’s experimental new “slow” label could revolutionize how we tackle web performance

Wed, 02/25/2015 - 12:11

Earlier today, Search Engine Land posted about a new label that Google appears to be testing in its search results pages. The red “slow” label, pictured here, warns people that your site is unacceptably slow.

Screen capture via K Neeraj Kayastha

There’s little explanation from Google on how this label is being tested and what the plans for full rollout might be. Based on the small amount of current information, all we know is that it appears to be visible to some, but not all, mobile Android users. It may also only currently be applied to Google properties.

Given the fact that mobile performance is a huge priority for Google, this label isn’t a trivial feature. If you care about performance, user experience, and SEO, then you should care about this potential game-changer.

Who defines “slow”?

If you’re like me, the first question you asked when you heard about this label is “How does Google define ‘slow’?” To me, that’s the most critical missing piece of information. We know that Google uses the Google toolbar to crowdsource performance data from real users, so my assumption is that this crowdsourced information is what determines whether or not a site earns a “slow” label.

But the actual metrics are a mystery. Is a “slow” page any page that takes more than 4 seconds to load? Or is it ranked according to its speed relative to competing sites? No one knows but Google.

Everyone hates slow pages.

“Slow” is a powerfully repellant word. People hate to wait. We’ll visit a site less often if it’s slower than a competitor by just 250 milliseconds. This behaviour is hardwired, and it’s unlikely to change. And as one survey from Tealeaf/Harris Interactive shows, when pages are slow, especially on mobile, we don’t react well.

Convincing site owners to care about performance is an uphill struggle.

“But my site isn’t slow.”

This is one of the most frequent things I hear when I talk to site owners. And from their perspective, they’re right. That’s because most people’s experience of using their own site happens close to the source, inside a speedy corporate LAN. It’s also because they may be misinterpreting the performance data they’re receiving from their measurement vendors. (This is incredibly common.) For many site owners who do see the light about their site’s real-world performance, this painful — and often embarrassing — revelation happens when they’re on the road and try to demo something outside of their site’s speedy comfort zone.

Convincing site owners to care about mobile performance is an even steeper uphill struggle.

Everything I said above, times ten. When it comes to visibility about mobile performance, there are a lot of heads stuck in the sand. My sense is that this is because measurement has, historically, been tricky for mobile. I think it’s also because tackling mobile performance is a huge scary beast that a lot of people are, understandably, afraid to look at head on.

Making speed a priority for SEO reasons has been only somewhat successful in the past.

This isn’t the first time Google has waded into the topic of making speed an SEO issue. But while this has attracted a fair bit of tech media attention and discussion in the past, I’ve never met a site owner who suddenly decided to prioritize performance because of it. This could be due to the fact that the details have always been murky, with most people agreeing that speed is probably just one relatively minor part of Google’s search ranking algorithm.

The “slow” tag, however, is too in-your-face to dismiss.

It signals that Google is taking performance — and particularly mobile performance — very seriously. Site owners would be wise to do the same now, before this tag is fully rolled out.

Next steps: You can’t fix what you can’t measure. First, get real visibility into how actual users see your pages. Identify your performance problem areas. Then fix them.
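
If you're wondering where to begin with real-user visibility, a minimal sketch looks something like the following. The /rum-beacon endpoint is hypothetical and would live on your own collection server.

```typescript
// Minimal real user monitoring sketch: beacon the page's actual timings to a
// hypothetical collection endpoint (/rum-beacon) once loading has finished.
window.addEventListener("load", () => {
  // Read the entry on the next tick so loadEventEnd is populated.
  setTimeout(() => {
    const [nav] = performance.getEntriesByType("navigation") as PerformanceNavigationTiming[];
    if (!nav) return; // very old browsers: fall back to performance.timing instead
    const payload = JSON.stringify({
      url: location.pathname,
      loadTimeMs: Math.round(nav.loadEventEnd), // relative to navigation start
      ttfbMs: Math.round(nav.responseStart),    // time to first byte
      referrer: document.referrer,
    });
    navigator.sendBeacon("/rum-beacon", payload); // survives tab close and navigation
  }, 0);
});
```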


11 Reasons to Care About Mobile Performance in 2015 [INFOGRAPHIC]

Wed, 02/04/2015 - 11:27

Yesterday on the Radware blog, I shared some compelling stats around mobile web performance. Today I thought it would be fun (and hopefully helpful) to round up my favourite stats into a poster. I hope you enjoy!



1 out of 6 internet users in the US don’t have broadband access

Thu, 01/29/2015 - 12:55

As of today, the Federal Communications Commission has updated its definition of “broadband” from 4 Mbps to 25 Mbps. In effect, this means that 17% of internet users in the United States now don’t have broadband access.

This is huge news. Here’s why.

From the FCC’s press release:

Broadband deployment in the United States — especially in rural areas — is failing to keep pace with today’s advanced, high-quality voice, data, graphics and video offerings, according to the 2015 Broadband Progress Report adopted today by the Federal Communications Commission.

…The 4 Mbps/1 Mbps standard set in 2010 is dated and inadequate for evaluating whether advanced broadband is being deployed to all Americans in a timely way, the FCC found.

This is no longer considered “broadband”.

In a post I wrote just last week, I talked at length about the widely held misconception — even within the tech community — that non-broadband users are a tiny, negligible minority.

Most city dwellers have a hard time swallowing the idea that a sizable chunk of the population experiences download speeds of less than 20 Mbps. (This is no doubt due to the fact that only 8% of urban Americans don’t have broadband access.)

What does this mean for site owners?

This misconception that non-broadband users are freakish outliers is a major issue when it prevents us from even acknowledging that there is a performance problem. All too often, site owners, developers, designers, and anyone else who interacts with their company website — all of whom generally fall into the category of urban broadband user — assume that their own speedy user experience is typical of all users.

The FCC’s definition puts non-broadband users centre stage. Seventeen percent of the population — 55 million people — is hard to ignore.

Some of the FCC’s other findings include:

  • More than half of all rural Americans lack broadband access, and 20% don’t even have access to 4 Mbps service.
  • 63% of residents on Tribal lands and in US territories lack broadband access.
  • 35% of US schools lack access to “fiber networks capable of delivering the advanced broadband required to support today’s digital-learning tools”.
  • Overall, the broadband availability gap closed by only 3% last year.
  • When broadband is available, Americans living in urban and rural areas adopt it at very similar rates.

100 Mbps could be just around the corner

The FCC has also indicated that this update is just the beginning. According to FCC Commissioner Jessica Rosenworcel:

“We invented the internet. We can do audacious things if we set big goals, and I think our new threshold, frankly, should be 100Mbps. I think anything short of that shortchanges our children, our future, and our new digital economy.”

It’s going to be very interesting to see how the major service providers respond to this announcement. I’m looking forward to seeing what difference this update makes — both to the level of service and to the way services are marketed to subscribers.


These 7 things are slowing down your site’s user experience (and you have no control over them)

Wed, 01/21/2015 - 13:13

As a website owner, you have 100% control over your site, plus a hefty amount of control over the first and middle mile of the network your pages travel over. You can (and you should) optimize the heck out of your pages, invest in a killer back end, and deploy the best content delivery network that money can buy. These tactics put you in charge of several performance areas, which is great.

But when it comes to the last mile — or more specifically, the last few feet — matters are no longer in your hands.

Today, let’s review a handful of performance-leaching culprits that are outside your control — and which can add precious seconds to your load times.

1. End user connection speed

If you live in an urban center, then congratulations — you probably enjoy connection speeds of 20-30 Mbps or more. If you don’t live in an urban center, then you have my sympathy.

As someone who lives some distance from a major city, I can tell you that 1-3 Mbps is a (sadly) common occurrence in my house, especially during the internet rush hour when everyone in my neighborhood gets home and starts streaming movies and doing whatever else it is they do online. There’ve been many, many times when I’ve gotten incredibly frustrated waiting for a file to download or stream, and I’ve performed many, many speed tests with sub-3 Mbps results.

Oh look! Here’s one right now:

I’ve gotten into long, heated debates with people (always city dwellers) who flat-out refuse to believe that THAT many people are really affected by poor connection speeds. Their arguments seem to be largely fueled by the fact that they’re operating with a limited view of data, which corroborates their own personal experience.

In all my research into US connection speeds, I’ve found that the numbers vary hugely depending on the source. At the highest end is OOKLA’s “household download index” that states a typical US household connection is 30 Mbps. My issue with OOKLA’s data is I can’t find specifics on how they gather it, or whether it’s based on average speeds or peak speeds. I suspect the latter, based on comparing their data to Akamai’s “state of the internet” reports, which give numbers for both average and peak Mbps. OOKLA’s index loosely correlates to Akamai’s peak broadband numbers. According to Akamai, average broadband connection speeds run approximately 12-17 Mbps for the ten fastest states. The slowest states perform significantly worse, with Alaska trailing at 7.2 Mbps.

Another issue: most reporting on connection speed tends to focus on broadband, which ignores a massive swath of the population. (Broadband is, by definition, an internet connection above 4 Mbps, regardless of the type of connection being used.) According to Akamai, in many states broadband adoption is only around 50%. And in roughly one out of ten states, broadband adoption rates are actually declining.

Takeaway: Non-broadband users aren’t freakish outliers whom you should feel comfortable ignoring.

And that’s just desktop performance. Don’t even get me started on mobile.

2. What the neighbours are downloading

If, like me, you’re already at the low end of the connection speed spectrum, when your neighbours start streaming a movie or downloading a massive file, you REALLY feel it. I’ve been known to schedule my download/upload activities around the release of new episodes of Game of Thrones and Breaking Bad. Sad or savvy? Maybe a little bit of both.

3. How old their modem is

I’ve yet to encounter an ISP that proactively reminds customers to upgrade their hardware. Most people use the same modem for five to ten years, which guarantees they’re not experiencing optimal load times — even if they’re paying for high-speed internet.

Here’s why:

If your modem is five or more years old, then it pre-dates DOCSIS 3.0. (DOCSIS stands for Data Over Cable Service Interface Specification. It’s the international data transmission standard used by cable networks.) Back around 2010-2011, most cable companies made the switch from DOCSIS 2.0 to DOCSIS 3.0. If you care about performance, you should be using a DOCSIS 3.0 modem. Otherwise, you’ll never be able to fully leverage your high-speed plan (if you have one).

The funny (but not ha-ha funny) thing here is the fact that so many people have jumped on the high-speed bandwagon and are currently paying their ISPs for so-called high-speed internet. But without the right modem they’re not getting the full service they’ve paid for.

4. How old their device is

The average person replaces their machine every 4.5 years. In those 4.5 years, performance can seriously degrade — due to viruses or, most commonly, simply running low on memory. And while many people suspect that upgrading their memory would help, most folks consider themselves too unsavvy to actually do it. Instead, they suffer with poor performance until it gets so bad they finally replace their machine.

5. How old their browser is

You upgrade your browser religiously. Many people do not or cannot. (To illustrate: When Wine.com first became a Radware customer, one of their performance issues was that a significant portion of their customer base worked in the financial sector, which at the time was obligated to use IE 6 because it was the only browser compatible with specific legacy apps. These customers tended to visit the site from work, which inevitably led to a poor user experience until Wine.com was able to implement FastView.)

Each generation of browser offers performance gains over the previous generation. For example, according to Microsoft, IE 10 is 20% faster than IE 9. But according to StatCounter, roughly half of Internet Explorer users use IE 8 or 9, missing out on IE 10’s performance boost.

6. How they’re using (and abusing) their browser

Browser age is just one issue. There are a number of other scenarios that can affect browser performance:

  • Stress from having multiple tabs or windows open simultaneously.
  • Performance impact from toolbar add-ons. (Not all widgets affect performance, but some definitely do, particularly security plug-ins.)
  • Degradation over time. (The longer the browser remains open, the greater its likelihood of slowing down and crashing.)
  • Variable performance when visiting sites that use HTML5 or Flash, or when watching videos. (Some browsers handle some content types better than others.)

Most people are guilty of keeping the same browser window open, racking up a dozen or so tabs, and living with the slow performance degradation that ensues — until the browser finally crashes. According to this Lifehacker post, you should never have more than nine browser tabs open at the same time. Raise your hand if you have more than nine tabs open right now. *raises hand*

7. Running other applications

Running too many applications at the same time affects performance. But most non-techie internet users don’t know this.

Security solutions are a good example of this. Antivirus software scans incoming files to identify and eliminate viruses and other malware such as adware, spyware, trojan horses, etc. It does this by analyzing all files coming through your browser. Like firewalls and other security products, antivirus software operates in real time, meaning that files are paused for inspection before being permitted to download.

Because of this inspection, a performance penalty is inevitable and unavoidable. The extent of the penalty depends on the software and on the composition of the page/resource being rendered in the browser.

How do all these factors add up, performance-wise?

Consider this scenario:

Someone visits your site, using a four-year-old PC running IE9 with a handful of nifty toolbar add-ons, including a parental control plug-in (this is a shared computer) and a couple of security plug-ins, all of which are significantly hurting performance. They think they have high-speed internet because they’re paying their service provider for it, but they’re using a six-year-old modem. They tend to leave all their applications running, for easy access. They only restart their machine when it’s running so slowly that it becomes intolerable — or when it crashes. They keep the same browser window open for days or weeks on end, and they currently have eleven tabs open. They’re concerned about internet security, so they’re also running antivirus software.

All of these factors can add up — not just to milliseconds, but to seconds of extra user time. Unfortunately, when people are unhappy with how fast a page is loading, the number one target for their blame is the site. Fair? No. It’s also not fair that these lost seconds result in lost revenue for your company.

Takeaway

As I said at the top of this post, you have no control over potential problems caused by any of these factors. But that doesn’t mean you shouldn’t arm yourself with the knowledge that problems are occurring.

This is also why it’s crucial to not rely on your own experience with your site. It’s also why you can’t rely on synthetic tests to give you a true sense of how your pages perform. (Don’t get me wrong: synthetic tests definitely have value, but they tend to skew toward the optimistic end of the spectrum.) You need real user monitoring that gives you full visibility into how actual people are experiencing your site.
