Perf Planet

News and views from the web performance blogosphere

How to Approach Application Failures in Production

Tue, 07/22/2014 - 05:55

In my recent article, “Software Quality Metrics for your Continuous Delivery Pipeline – Part III – Logging”, I wrote about the good parts and the not-so-good parts of logging and concluded that logging usually fails to deliver what it is so often mistakenly used for: a mechanism for analyzing application failures in production. In […]

The post How to Approach Application Failures in Production appeared first on Compuware APM Blog.

Only 22 Percent of Top Mobile Sites are Ready for the Amazon Fire Phone

Mon, 07/21/2014 - 16:15

The first of Amazon's Fire phones will ship later this week. It's yet to be seen whether the phone will be the disruptive technology success Amazon hopes for or a commercial flop, but we do know that many online retailers are not prepared for it. Yottaa recently sampled 150 top mobile retail websites ("m.dot" sites), and only 22 percent were able to handle the new phone properly. The remaining 78 percent will present Fire phone users with the unoptimized "desktop" version of the site -- a surefire poor experience on a phone.


Unlocking Insight – How to Extract User Experience by Complementing Splunk

Thu, 07/17/2014 - 07:00

Splunk is a great Operational Intelligence solution capable of processing, searching and analyzing masses of machine-generated data from a multitude of disparate sources. By complementing it with an APM solution you can deliver insights that provide value beyond the traditional log analytics Splunk is built upon: Operational Intelligence: Let Your Data Drive Your Business In a […]

The post Unlocking Insight – How to Extract User Experience by Complementing Splunk appeared first on Compuware APM Blog.

Adaptive vs. Responsive Web Design: Quantifying the Difference on Mobile

Wed, 07/16/2014 - 10:14

Much has been written recently about Adaptive Web Design (AWD) vs. Responsive Web Design (RWD), with pros and cons of both being highlighted.

For those new to these methodologies, here is a short summary of both approaches:

Adaptive Web Design:
  • Detects the device on the server or client side
  • Serves separate HTML, using CSS to modify the page based on screen size
  • Site content is pre-selected and mobile-optimized assets are downloaded
  • Multiple templates, one for each device
  • Images are optimized for device resolutions

Responsive Web Design:
  • Detects the device with CSS ‘media queries’
  • ‘Fluid CSS grids’ are resized to fit the device screen
  • All assets are downloaded whether they are required or not
  • Single template
  • Images are resized to fit the device

With all of the above in mind, we decided to undertake a little experiment to see how well both performed when monitored from a mobile device profile. The sites were monitored from four backbone locations in New York and another four in San Francisco.

Testing, testing, one, two…

In order to give both approaches a level playing field, we built two identical sites using the popular CMS, WordPress. Both sites had the same dummy content loaded, including images and text. At this initial point of the experiment, no additional optimization was done to either site and all the default settings were used.

WordPress comes with a RWD based template called ‘TwentyFourteen’ as standard. This template gives a magazine-style layout to your site and is highly configurable. An online demonstration of this template can be found here.

When visiting this demonstration site, resize your browser window and see how the site adapts to the resolution changes using the CSS ‘media queries.’

If you are using Chrome or Safari you can switch to mobile device emulation. In Chrome, go to Tools -> Developer Tools -> Emulation and select a device.

In Safari, enable the Develop menu in Preferences and then go to Develop -> User Agent and select a device.

In order to implement the AWD approach, we used a WordPress Plugin called WiziApp. The plugin serves up a mobile theme to users that access the site from mobile devices, and also offers advanced configuration options to further streamline the mobile experience.

By using the AWD approach, you have more control over how the page is loaded. The WiziApp Plugin does not load all the posts on the homepage we are monitoring; instead, it adds a ‘Show more’ option at the bottom of the device screen, which prevents all of the homepage content from loading upfront. Implementing a ‘load on scroll’ would also have worked nicely.

Round One: Default, Unoptimized Sites

It will come as no surprise that both sites performed identically from a desktop perspective, as both used the default WordPress template. But as soon as you switch to a mobile device or tablet, the difference is crystal clear. Again, please note that at this stage no additional optimization has been done and defaults are being used.

Metric (Defaults), Adaptive vs. Responsive:
  • Response: 568 ms vs. 1,202 ms
  • Document Complete: 1,536 ms vs. 4,086 ms
  • Webpage Response: 2,889 ms vs. 4,860 ms
  • Bytes Downloaded: 2,474,326 vs. 4,229,362
  • Objects Downloaded: 20 vs. 61

The clear winner here on all counts is AWD. The AWD template is quicker, downloading fewer bytes and fewer objects, while the RWD template downloads everything that a desktop user would receive. If you were out and about on a 3G/4G connection, that is a lot of megabytes and would lead to a very painful, slow online experience.

Put Your Images on a Diet

The initial dummy data that was used to populate the site was very image heavy and the images had not been optimized for either desktop or mobile use. A quick install of the WP Smush.it Plugin allowed for a Bulk Smush.it of the images used. Also, the Imsanity Plugin was used to bulk resize the images from their original imported sizes. These steps were replicated across both of the sites. So what is the experience for our mobile users now?

Metric (Image Optimization), Adaptive vs. Responsive:
  • Response: 554 ms (-2.46%) vs. 1,151 ms (-4.24%)
  • Document Complete: 1,462 ms (-4.82%) vs. 2,853 ms (-30.18%)
  • Webpage Response: 1,801 ms (-37.66%) vs. 4,251 ms (-12.53%)
  • Bytes Downloaded: 514,144 (-79.22%) vs. 1,504,168 (-64.44%)
  • Objects Downloaded: 20 (0%) vs. 61 (0%)

The optimized images clearly had a significant effect on both formats, particularly in the amount of data that was downloaded. But once again, AWD comes out as the clear winner, with a document complete that is nearly twice as fast and an even greater disparity in the amount of bytes downloaded than we saw with the un-optimized images.

Squeezing the Juice

One final optimization is the concatenation of CSS and JavaScript to reduce the number of objects being downloaded. As the AWD template already has a low number of objects, the impact of this exercise won’t be as significant. Using another WordPress Plugin called Autoptimize, we optimized the HTML, JavaScript and CSS of both sites in a few mouse clicks.

Metric (CSS & JavaScript optimization), Adaptive vs. Responsive:
  • Response: 463 ms (-18.49%) vs. 1,136 ms (-5.49%)
  • Document Complete: 1,041 ms (-32.23%) vs. 2,004 ms (-50.95%)
  • Webpage Response: 1,098 ms (-61.99%) vs. 3,562 ms (-26.71%)
  • Bytes Downloaded: 460,950 (-81.37%) vs. 1,498,634 (-64.57%)
  • Objects Downloaded: 15 (-25%) vs. 52 (-14.75%)

Once again, the Adaptive template remains significantly faster and more efficient than its Responsive counterpart. It’s still nearly twice as fast with regard to the document complete, and the gaps in the amount of data and number of objects being downloaded are even greater with the additional optimization.

Sure, there are many more optimization techniques we could have employed, but even with both sites optimized identically, Responsive never came close to catching up with Adaptive, although it saw large gains of its own in document complete and page weight.

At the end of the day, it’s all about customer satisfaction: the better you tailor the site to your customers’ devices, the happier they will be. If you go to a restaurant and you are a vegetarian, you don’t want to be offered filet mignon. It’s the same with your mobile users; give them what they asked for – an optimized mobile experience.

While Adaptive is more difficult to implement, it is better for your end users. You will improve stickiness, retain their loyalty, and help them avoid walking into lamp posts while they stare at their screens waiting for your content to load.


The post Adaptive vs. Responsive Web Design: Quantifying the Difference on Mobile appeared first on Catchpoint's Blog.

Are we getting attacked? No, it’s just Google indexing our site

Wed, 07/16/2014 - 05:45

Friday morning at 7:40AM we received the first error from our APMaaS Monitors informing us about our Community Portal being unavailable. It “magically recovered itself” within 20 minutes but just about an hour later was down again. The Potential Root Cause was reported by dynaTrace which captured an Out-of-Memory (OOM) Exception in Confluence’s JVM that […]

The post Are we getting attacked? No, it’s just Google indexing our site appeared first on Compuware APM Blog.

In the World of DNS, Cache is King

Tue, 07/15/2014 - 07:03

The first time I learned how DNS gets resolved, I was quite surprised by how long and complicated the process was. Think about how many websites you visit in a given day, then consider how many of those you go to multiple times. Now imagine that every time you did, the ISP’s DNS server at the other end had to repeat the entire recursion process from scratch and query all the name servers in the recursion chain.

To put that into context, think about your cell phone. When you want to make a call to a friend with whom you speak regularly, you simply go to recent calls and tap on their name. But what if, instead of having that information readily available, you had to call 411 to get their phone number, then type it in manually. Seems pretty tedious, right?

The fact is that there are a lot of steps — and therefore a lot of time — required to change a domain name into an IP address. Fortunately, the DNS designers had thought about how to speed up DNS and implemented caching. DNS caching allows any DNS server or client to locally store the DNS records and re-use them in the future – eliminating the need for new DNS queries.

The Domain Name System implements a time-to-live (TTL) on every DNS record. TTL specifies the number of seconds the record can be cached by a DNS client or server. When the record is stored in cache, whatever TTL value came with it gets stored as well. The server continues to update the TTL of the record stored in the cache, counting down every second. When it hits zero, the record is deleted or purged from the cache. At that point, if a query for that record is received, the DNS server has to start the resolution process.

To understand caching, let’s look at the same example as in previous articles: resolving www.google.com. When you type www.google.com into the browser, the browser asks the Operating System for the IP address. The OS has what is known as a “stub resolver” or “DNS client,” a simple resolver that handles all the DNS lookups for the OS. The resolver sends DNS queries (with the recursive flag on) to a specified recursive resolver (name server) and stores the records in its cache based on their TTL.

When the “Stub Resolver” gets the request from the application, it first looks in its cache; if it has the record, it gives the information to the application. If it does not have it, it sends a DNS query (with the recursive flag) to the recursive resolver (the DNS server of your ISP).

When the “Recursive Resolver” gets the query, it first looks in its cache to see what information it has for www.google.com. If it has the A records, it sends the records to the “Stub Resolver.” If it does not have the A records, but has the NS records for the authoritative name servers, it will then query those name servers (bypassing the root and .com gTLD servers).

If it does not have the authoritative name servers, it will query the .com gTLD servers (which are most likely in cache since their TTL is very high, and they are used for any .com domain). The “Recursive Resolver” will query the root servers for the gTLDs only if they are not in the cache, which is quite rare (usually only after a full purge).

To prevent the propagation of expired DNS records, DNS servers pass along the adjusted TTL in query responses, not the original TTL value of the record. For example, let’s assume that the TTL for an A record of www.google.com is four hours and it is stored in cache by the “Recursive Resolver” at 8 a.m. When a new user, on the same resolver, queries for the same domain at 9 a.m., the resolver will send an A record with a TTL of three hours.
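To make the cache behavior above concrete, here is a minimal sketch (not how any particular resolver is implemented) of a TTL-honoring DNS cache. The forward_query function and the record values are hypothetical placeholders for the upstream lookup.

```python
import time

# Minimal sketch of a TTL-honoring DNS cache, as described above.
_cache = {}  # (name, rtype) -> (records, expires_at)

def forward_query(name, rtype):
    # Placeholder: pretend the upstream answer is one A record with a 4-hour TTL.
    return ["173.194.33.174"], 4 * 3600

def resolve(name, rtype="A"):
    now = time.time()
    entry = _cache.get((name, rtype))
    if entry:
        records, expires_at = entry
        if expires_at > now:
            # Cache hit: return the *adjusted* TTL (time remaining), not the original.
            return records, int(expires_at - now)
        del _cache[(name, rtype)]                 # TTL reached zero: purge the record
    records, ttl = forward_query(name, rtype)     # cache miss: ask upstream
    _cache[(name, rtype)] = (records, now + ttl)
    return records, ttl

print(resolve("www.google.com"))   # first call is forwarded upstream
print(resolve("www.google.com"))   # later calls come from cache, TTL counting down
```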

So far we have covered how DNS caches on the OS and on DNS servers; however, there is one last layer of cache: the application. Any application can choose to cache DNS data, but it cannot follow the DNS specification. Applications rely on an OS function called “getaddrinfo()” to resolve a domain (all operating systems use the same function name). This function returns the list of IP addresses for the domain – but it does not return DNS records, hence there is no TTL information that the application can use.
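You can see this limitation for yourself: Python exposes the same OS call as socket.getaddrinfo, and, as described above, the result contains addresses but no TTL.

```python
import socket

# getaddrinfo() returns address tuples only -- there is no TTL field anywhere,
# which is why applications have to invent their own cache lifetimes.
for family, socktype, proto, canonname, sockaddr in socket.getaddrinfo(
        "www.google.com", 443, proto=socket.IPPROTO_TCP):
    print(sockaddr[0])
```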

As a result, different applications cache the data for arbitrary periods of time. IE10+ will store up to 256 domains in its cache for a fixed time of 30 minutes. While 256 domains might seem like a lot, it is not – many pages on the internet reference more than 50 domains thanks to third-party tags and retargeting. Chrome, on the other hand, caches DNS information for one minute and stores up to 1,000 records. You can view and clear Chrome’s DNS cache by visiting chrome://net-internals/#dns.

NS Cache Trap

One major trap that people fall into with DNS caching is the authoritative name server records. As we mentioned before, the authoritative name servers are specified in the query response as NS records. NS records have a TTL, but do not provide the IP addresses of the name servers. The IP information is carried in the additional records of the response, as A or AAAA records.

Therefore, a Recursive Resolver relies on both NS and A records to reach the name server. Ideally, the TTL on both types of records should be the same, but every once in a while someone will misconfigure their DNS zones so that query responses for the domain include new A or AAAA records for the name servers with a lower or higher TTL than what was specified at the TLDs. These new records override the old ones, causing a discrepancy.

When both records are in cache, the “Recursive Resolver” will query one of the IPs of the name servers. If the “Recursive Resolver” has only the NS records in the cache, and no A or AAAA records, it will have to resolve the name server domain (ns1.google.com, for example) to get an IP address for it. This is not good, as it adds time to resolving the domain in question. And if it has the A or AAAA records of the name server but not the NS records, it will be forced to do a full DNS lookup for www.google.com.
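If you want to check a zone for this kind of mismatch, a quick sketch with the third-party dnspython package (assumed installed; dnspython 2.x shown, older releases use dns.resolver.query) compares the TTL on the NS records with the TTLs on the name servers' own A records. Note that the answers arrive via your local recursive resolver, so the TTLs you see are the adjusted values discussed earlier.

```python
import dns.resolver  # third-party: pip install dnspython

ns_answer = dns.resolver.resolve("google.com", "NS")
print("NS rrset TTL:", ns_answer.rrset.ttl)

for ns in ns_answer:
    a_answer = dns.resolver.resolve(str(ns.target), "A")
    print(ns.target, "A rrset TTL:", a_answer.rrset.ttl)
```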

Setting TTL: A Balancing Act

So what’s better, a longer or a shorter TTL? When appropriate, use a longer TTL, as it leads to longer caching on resolvers and OSs – meaning better performance for end users – and it also reduces traffic to your name servers, as they will be queried less often. However, it also reduces your ability to make quick DNS changes, leaving you more vulnerable to DNS poisoning attacks and unable to set up offsite error pages when your datacenter is not accessible.

On the other hand, a shorter TTL limits caching and will add time to downloading the page and/or its resources, while raising the load on your name servers. Yet it lets you make quicker changes to your DNS configuration.

————————

DNS resolution is a multi-step process that involves many servers across the internet. The caching mechanism built into the protocol speeds up the process by storing information for periods of time and re-using it for future DNS queries. While DNS servers and clients do follow the DNS specs on TTL, applications like browsers do not follow the spec – hence their caches are stored for an arbitrary amount of time.


The post In the World of DNS, Cache is King appeared first on Catchpoint's Blog.

How to Roll Out Citrix XenDesktop Globally, Without a Hitch

Tue, 07/08/2014 - 07:00

In addition to Pieter Van Heck, Alasdair Patton (Delivery Architect for Compuware APM) and Mark Revill (Field Technical Services, Compuware APM) also contributed to the authoring of this post.  Windows XP is living on borrowed time: Microsoft officially ended extended support for it on 8 April 2014. It was inevitable, but now many companies and organisations […]

The post How to Roll Out Citrix XenDesktop Globally, Without a Hitch appeared first on Compuware APM Blog.

Google Has Given HTTPS A Huge Boost

Mon, 07/07/2014 - 06:08

For a while now there’s been talk of Google favoring secure HTTPS pages in its results. We just noticed this week that any Google searches for content on our web site now return secure HTTPS URLs instead of HTTP:

It’s not clear when this happened but a quick check on our web server shows that nearly 75% of all connections were HTTPS:

Only a year or so ago, HTTPS made up about 10% of all connections. The percentage of HTTPS URLs being used is only going to increase as more people find HTTPS-based results on Google and then share them in web pages, emails and social media.

So if your site supports both HTTP and HTTPS, then HTTPS is now the more important of the two in terms of optimising performance. The good news is that HTTPS isn’t necessarily much slower than HTTP and may even be faster if you support SPDY.

Understanding Application Performance on the Network – Part IV: Packet Loss

Thu, 07/03/2014 - 05:55

We know that losing packets is not a good thing; retransmissions cause delays. We also know that TCP ensures reliable data delivery, masking the impact of packet loss. So why are some applications seemingly unaffected by the same packet loss rate that seems to cripple others? From a performance analysis perspective, how do you understand […]

The post Understanding Application Performance on the Network – Part IV: Packet Loss appeared first on Compuware APM Blog.

Recommended iOS Apps For Developers

Tue, 07/01/2014 - 10:17

The small screens of the iPad and iPhone don’t lend themselves to in-depth development tasks, but their mobility and convenience can be useful when tracking down problems in the field or providing support when you are out of the office.

Here’s a list of apps that you may find useful if you are involved in development or tech support (please let us know in the comments if there are any we’ve missed):

Networking & Web

  • HttpWatch – of course this has to be on the list! Our app is the ultimate browser-based HTTP sniffer for iOS. (Paid & Free)
  • iNetwork Utility – scans your network showing device types and network addresses. It also supports Wake-On-Lan and can browse services advertised with Bonjour. (Paid)
  • System Status – shows in-depth information about the state of your iOS device including active network connections, memory and CPU usage. (Paid)
  • SpeedTest – tests the speed of your internet connection using servers close to you or at chosen locations. (Free)
  • Pingdom – view the status of sites and services that you are monitoring and receive alerts if downtime occurs. (Free)
  • IPMI Touch – view the hardware status of IPMI-supporting servers on your network including temperature, fan speed, etc. Servers can also be remotely powered on/off or restarted. (Paid)
  • Prompt – a Telnet and SSH client for your iPhone or iPad. (Paid)
  • Jump Desktop – a remote desktop (RDP/VNC) client for Windows PCs and servers. (Paid)

Editing and File Handling

  • Textastic Code Editor – a fully featured code editor with syntax highlighting. (iPhone Paid & iPad Paid)
  • Dropbox – iOS 7 doesn’t have a public file system for sharing files between apps. Dropbox is the best alternative until improved sharing features arrive in iOS 8. You can save files from Mobile Safari or Mail and then open them in other apps or on a PC/Mac. (Free)

iOS Development & App Management

  • AppCooker – prototype iPhone and iPad apps on your iPad. (Paid)
  • iTunes Connect – view the sales and download data for your app on iOS. (Free)
  • Crash Manager – view and manage crash reports from your Crashlytics account. (Free)

Learning

  • Stack Exchange – access Stack Overflow, Super User and Server Fault from your iPhone or iPad. (Free)
  • NewsBlur – a blog reader app that synchronizes with your NewsBlur account. (Free)
  • WWDC – download and view videos from Apple Developer conferences. (Free)

DNS Lookup: How a Domain Name is Translated to an IP Address

Tue, 07/01/2014 - 09:31

At Catchpoint, we believe that fast DNS (Domain Name System) is just as important as fast content. DNS is what translates your familiar domain name (www.google.com) into an IP address your browser can use (173.194.33.174). This system is fundamental to the performance of your webpage, yet most people don’t fully understand how it works. Therefore, in order to help you better understand the availability and performance of your site, we will be publishing a series of blog articles to shed light on the sometimes complex world of DNS, starting with the basics.

For the sake of simplicity, this article assumes that no DNS records are cached anywhere, making this a worst-case scenario. We will tackle DNS caching in future articles.

Before the page and any resource on it can load, the DNS must be resolved so the browser can establish a TCP connection and make the HTTP request. In addition, for every external resource referenced by a URL, the same DNS resolution steps must complete (once per unique domain) before the request is made over HTTP. The DNS resolution process starts when the user types a URL into the browser and hits Enter. At this point, the browser asks the operating system for the IP address of a specific domain, in this case google.com.

Step 1: OS Recursive Query to DNS Resolver

Since the operating system doesn’t know where “www.google.com” is, it queries a DNS resolver. The query the OS sends to the DNS resolver has a special flag marking it as a “recursive query.” This means that the resolver must complete the recursion and the response must be either an IP address or an error.

For most users, their DNS resolver is provided by their Internet Service Provider (ISP), or they are using a public alternative such as Google Public DNS (8.8.8.8) or OpenDNS (208.67.222.222). This can be viewed or changed in your network or router settings. At this point, the resolver goes through a process called recursion to convert the domain name into an IP address.

DNS Settings on a Mac (left) and Windows Settings for IPv4 Protocol of the network connection (right).

Step 2: DNS Resolver Iterative Query to the Root Server

The resolver starts by querying one of the root DNS servers for the IP of “www.google.com.” This query does not have the recursive flag and therefore is an “iterative query,” meaning its response must be an address, the location of an authoritative name server, or an error. The root is represented in the hidden trailing “.” at the end of the domain name. Typing this extra “.” is not necessary as your browser automatically adds it.

There are 13 root server clusters named A through M, with servers in over 380 locations. They are managed by 12 different organizations that report to the Internet Assigned Numbers Authority (IANA), such as Verisign, which controls the A and J clusters. All of the servers serve copies of the same root zone, which is maintained by IANA.

Step 3: Root Server Response

These root servers hold the locations of all of the top level domains (TLDs) such as .com, .de, .io, and newer generic TLDs such as .camera.

The root doesn’t have the IP info for “www.google.com,” but it knows that .com might, so it returns the location of the .com servers. The root responds with a list of the 13 locations of the .com gTLD servers, listed as NS or “name server” records.

Step 4:  DNS Resolver Iterative Query to the TLD Server

Next the resolver queries one of the .com name servers for the location of google.com. Like the root servers, each of the TLDs has 4-13 clustered name servers existing in many locations. There are two types of TLDs: country codes (ccTLDs) run by government organizations, and generics (gTLDs). Every gTLD has a different commercial entity responsible for running its servers. In this case, we will be using the gTLD servers controlled by Verisign, which runs the .com, .net, .edu, and .gov gTLDs, among others.

Step 5: TLD Server Response

Each TLD server holds a list of all of the authoritative name servers for each domain in the TLD. For example, each of the 13 .com gTLD servers has a list with all of the name servers for every single .com domain. The .com gTLD server does not have the IP addresses for google.com, but it knows the location of google.com’s name servers. The .com gTLD server responds with a list of all of google.com’s NS records. In this case Google has four name servers, “ns1.google.com” to “ns4.google.com.”

Step 6: DNS Resolver Iterative Query to the Google.com NS

Finally, the DNS resolver queries one of Google’s name servers for the IP of “www.google.com.”

Step 7: Google.com NS Response

This time the queried Name Server knows the IPs and responds with an A or AAAA address record (depending on the query type) for IPv4 and IPv6, respectively.

Step 8: DNS Resolver Response to OS

At this point the resolver has finished the recursion process and is able to respond to the end user’s operating system with an IP address.

Step 9: Browser Starts TCP Handshake

At this point the operating system, now in possession of www.google.com’s IP address, provides the IP to the application (browser), which initiates the TCP connection to start loading the page. For more information on this process, we wrote a blog post on the anatomy of HTTP.
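To watch the referral chain above (root, then TLD, then authoritative name server) from code, here is a rough sketch using the third-party dnspython package. The hard-coded address is a.root-servers.net, and a real resolver does much more than this: retries, TCP fallback on truncation, and resolving NS names when no glue records are returned.

```python
import dns.flags
import dns.message
import dns.query
import dns.rdatatype  # third-party: pip install dnspython

def resolve_iteratively(qname="www.google.com.", server="198.41.0.4"):
    """Follow referrals from a root server down to an authoritative answer."""
    while True:
        query = dns.message.make_query(qname, dns.rdatatype.A, use_edns=0)
        query.flags &= ~dns.flags.RD          # iterative: no recursion-desired flag
        response = dns.query.udp(query, server, timeout=5)
        if response.answer:                   # Step 7: an address record answer
            return [str(rdata) for rdata in response.answer[0]]
        # Steps 3 and 5: a referral -- take an IP from the glue records in the
        # additional section and query that name server next.
        glue = [rdata for rrset in response.additional
                for rdata in rrset if rdata.rdtype == dns.rdatatype.A]
        if not glue:
            raise RuntimeError("referral without glue; a full resolver would "
                               "now resolve the NS name separately")
        server = str(glue[0])

print(resolve_iteratively())
```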

As mentioned earlier, this is the worst-case scenario in terms of the length of time to complete the resolution. In most cases, if the user has recently accessed URLs on the same domain, or other users relying on the same DNS resolver have made similar requests, no DNS resolution will be required, or it will be limited to the query to the local DNS resolver. We will cover this in later articles.

In this non-cached DNS case, four sets of DNS servers were involved, hence a lot could have gone wrong. The end user has no idea what is happening behind the scenes; they are simply waiting for the page to load, and all of these DNS queries have to happen before the browser can request the webpage.

This is why we stress the importance of fast DNS. You can have a fast and well-built site, but if your DNS is slow, your webpage will still have poor response time.


The post DNS Lookup: How a Domain Name is Translated to an IP Address appeared first on Catchpoint's Blog.

How You Can Monitor Your Web Performance for Free

Tue, 07/01/2014 - 07:00

I recently analyzed FIFA’s World Cup website for web performance best practices and highlighted the top problems FIFA had on their website (too many flag images, very large favicon, etc.). After completing the post, I realized we could use a freely available trial service to run some tests to see if they have improved […]

The post How You Can Monitor Your Web Performance for Free appeared first on Compuware APM Blog.

SSL Performance Diary #1: The Certificate Chain

Mon, 06/30/2014 - 11:14


Adding support for encrypted SSL connections to a website doesn’t have to reduce the performance of your site. SSL can be properly configured so that it provides security and confidentiality, with minimal impact on the website’s page load time. In fact, SSL is a necessary step to implementing SPDY, which can actually improve the performance of your site.

This is the first in a series of blog posts, where I discuss the steps I am taking to implement SSL on a website while simultaneously improving its performance. Today I will be discussing certificates.

How Certificates Impact Performance

The very first thing you need when setting up SSL for a website is a certificate. While most people call these SSL Certificates, technically these are called X.509 Certificates because they are part of a larger suite of cryptographic standards and protocols that are not SSL specific. I will use both terms interchangeably. Our certificate will be used to prove we are who we say we are when a user accesses our website.

A site’s SSL certificate is downloaded by every visitor to the website. In fact, the SSL certificate is downloaded while the browser is negotiating the SSL connection. After all, the connection can’t be secure if you cannot confirm the identity of who you are talking to! Some form of this SSL negotiation occurs every time a new TCP connection is made to your website. (I’ve discussed SSL negotiation before.) Since browsers download resources in parallel, this means that a visitor can download your certificate multiple times, even when visiting a single page! Since SSL certificates are used so much, it is well worth the time to ensure your certificate is as streamlined and optimized as possible.

One important aspect of a certificate that impacts performance is the certificate chain, or trust chain.

The Certificate Chain

When you visit https://app.zoompf.com, how do you know you are really talking to app.zoompf.com? SSL certificates solve this problem. You receive a certificate telling your browser “yes, this is app.zoompf.com. trust me.” But how do you know you can trust the certificate the server sent, telling you the site is app.zoompf.com?

This is where the certificate chain comes in. My certificate will be digitally signed by some other entity’s certificate, which essentially says “Billy is cool, I vouch for him, here is my certificate.” If the browser trusts the certificate of whoever signed my certificate, we are done. However, it is possible the browser doesn’t trust my certificate, or the certificate of the entity that signed my certificate. What happens then?

Simple! If the browser doesn’t trust the entity that issued my certificate, it will look to see who signed that entity’s certificate. And so on and so on. The browser will “walk” up this chain of certificates, seeing who is vouching for who, until it finds a certificate of someone it trusts. The certificate chain looks something like this:

Here we see my certificate for app.zoompf.com. My certificate was signed by the “DigiCert Secure Server CA” certificate. The browser does not trust this certificate. However, that certificate was signed by the “DigiCert Global Root CA” certificate, which is trusted. So my certificate chain length is 3.

You want to keep this certificate chain as short as possible, since validating each certificate in the chain takes time. Additional certificates also means more data that has to be exchanged while establishing the secured connection. The browser might even need to make additional requests to download other certificates, or to check that a certificate is still valid and hasn’t been revoked. Remember, the browser has to perform this validation for every new TCP connection, so we want this to be as fast as possible. A certificate chain length of 3 is very good. Twitter and Etsy are very popular websites which both have a certificate chain length of 3.

Ensuring Short Certificate Chains

A good strategy to get a certificate with a short chain is to purchase your certificates from a large, well known Certificate Authority. Browsers come pre-loaded with a list of several hundred certificates that they trust, including those from well known CAs. Vendors that sell cheaper certificates often have a larger number of intermediaries. Each of these intermediaries is another link in the certificate chain of who is vouching for whom. This increases the length of the certificate chain and can lead to performance problems.

A good way to estimate how long a certificate chain may be before purchasing a certificate is to look at the certificate chain of the site you are buying from! Consider Namecheap’s SSLs.com marketplace, which sells low-cost domain names and SSL certificates. Let’s look at their certificate chain.

Wow, we are already 4 certificates deep! If I purchase a certificate from this company, it could have a certificate chain that is 5 certificates long! Now, this is just an estimate. It certainly is possible that a budget-focused CA could sign your certificate with a certificate that is “farther up” the chain. However, if a CA isn’t willing to do that (or can’t do that) for their own website’s certificate, how likely are they to do it for certificates you purchase from them?

To ensure your SSL certificate has a short certificate chain, my advice is to purchase your certificate from a large, well known CA. For our new Zoompf project, I purchased our certificate from Digicert for a little over $100, and the experience was great.

Checking your Certificate Chain Length

Checking the certificate chain of the online store for a CA is a good way to estimate certificate length. But what if you already have an SSL certificate for your website? Modern desktop web browsers allow you to view a site’s certificate, including certificate chain length. For example, in Chrome, simply click on the padlock/HTTPS area of the address bar, click the “Connection” tab, and click “Certificate Information” as shown below:

Other desktop web browsers have a similar workflow. Unfortunately mobile web browsers rarely expose this functionality.
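If you prefer to check from a script instead of clicking through browser dialogs, the following sketch prints the chain a server presents during the TLS handshake. It assumes the third-party pyOpenSSL package is installed, and the hostname is just the example used in this post; note that the chain is whatever the server chooses to send, which may or may not include the root.

```python
import socket
from OpenSSL import SSL  # third-party: pip install pyopenssl

def print_certificate_chain(host, port=443):
    """Print the certificate chain the server presents during the TLS handshake."""
    ctx = SSL.Context(SSL.TLS_CLIENT_METHOD)   # needs a recent pyOpenSSL release
    sock = socket.create_connection((host, port))
    conn = SSL.Connection(ctx, sock)
    conn.set_tlsext_host_name(host.encode())   # SNI, so we get the right certificate
    conn.set_connect_state()
    conn.do_handshake()
    chain = conn.get_peer_cert_chain()         # the chain as sent by the server
    print(host, "chain length:", len(chain))
    for cert in chain:
        print("  subject:", cert.get_subject().CN, "| issuer:", cert.get_issuer().CN)
    sock.close()

print_certificate_chain("app.zoompf.com")
```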

Give Me More!

That’s it for this post in our series, The SSL Performance Diary. Want to follow along as I document my steps implementing SSL on a new Zoompf project running on IIS 7.5? Subscribe to our RSS feed or sign up for our newsletter to get updates via email. (Roughly one email every other week, and only when we have new blog content. No spam, ever.)

The post SSL Performance Diary #1: The Certificate Chain appeared first on Zoompf Web Performance.

Webinar Recording: Monetizing Mobile

Fri, 06/27/2014 - 14:11

This week Yottaa welcomed Adrian Mendoza, co-founder of Marlin Mobile, to present a webinar called "Monetizing Mobile: How to Optimize Mobile Engagement and Conversions".  Adrian and Ari Weil, VP of Products at Yottaa, discuss the current state of mobile, the challenges site owners face in presenting an engaging experience, and tactics to overcome those challenges.  A video recording of the webinar is available below.  


Understanding Application Performance on the Network – Part III: TCP Slow-Start

Thu, 06/26/2014 - 06:00

In Part II, we discussed performance constraints caused by both bandwidth and congestion. Purposely omitted was a discussion about packet loss – which is often an inevitable result of heavy network congestion. I’ll use this blog entry on TCP slow-start to introduce the Congestion Window (CWD), which is fundamental for Part IV’s in-depth review of […]

The post Understanding Application Performance on the Network – Part III: TCP Slow-Start appeared first on Compuware APM Blog.

“Using WebPagetest” Early Release TODAY

Wed, 06/25/2014 - 18:25

Just in time for Velocity Santa Clara! I’m really excited to share that the book Using WebPagetest, which I’ve coauthored with Andy Davies and Marcel Duran, is now available for early release online at O’Reilly. Here’s a snippet from the preface that helps to set the stage for the rest of the book:

We all know bad web performance when we see it. When something takes too long to load or become interactive we start to get bored, impatient, or even angry. The speed of a web page has the ability to evoke negative feelings and actions from us. When we lose interest, wait too long, or get mad, we may not behave as expected: to consume more content, see more advertisements, or purchase more products.

The web as a whole is getting measurably slower. Rich media like photos and videos are cheaper to download thanks to faster Internet connections, but they are more prevalent than ever. Expectations of performance are high and the bar is being raised ever higher.

By reading this book, chances are you’re not only a user but most importantly someone who can do something about this problem. There are many tools at your disposal that specialize in web performance optimizations. However, none are more venerable than WebPagetest.org. WebPagetest is a free, open source web application that audits the speed of web pages. In this book, we will walk you through using this tool to test the performance of web pages so that you can diagnose the signs of slowness and get your users back on track.

I really hope that this book will help the web performance community come to a greater understanding of how to use WebPagetest and more effectively lift the quality of performance on the web.

If you’d like to access the early release or preorder the physical book, check out http://shop.oreilly.com/product/0636920033592.do.

Understanding the Anatomy of a Wireless Connection

Wed, 06/25/2014 - 10:34

Consider your smartphone. This increasingly complicated mobile device drives internet usage around the globe. According to Statista, mobile phones (and this excludes tablets) accounted for 17.4% of all web traffic in 2013, up from 11.1% in 2012. Some of this traffic is derived from new web users, whose smartphones provided their first access to the internet, but another segment of the increased traffic represents a change in preference: namely, the preference to surf the web from mobile phones instead of desktops.

What is it that makes mobile phones so attractive? Well, the fact that they’re mobile. A cellphone is untethered, unplugged, unwired; it’s internet on the go. Using your smartphone to access the internet is no different than using your desktop, but you can put it in your pocket and walk down the street, right? Well, that’s the hope, at least from a web developer’s perspective. Web developers strive for a seamless transition where mobile end users perceive no change in performance, but many things change behind the scenes.

Let’s take a look…

Any device not physically connected to a network or router, but still capable of connecting with other network nodes (think other cellphones or Apple TV) falls under the category of wireless. A very incomplete list of wireless networks includes WiFi, 2G, 3G, and 4G LTE. While each wireless network functions in its own specific manner, focusing in on one may be helpful to understand current advantages and limitations of wireless internet access. For instance, what happens on the 4G LTE mobile network after you type a URL into your smartphone browser and click “Go”?

Here in the U.S., smartphones communicate through radio waves and frequencies determined by the Federal Communications Commission (FCC) and the National Telecommunications and Information Administration (NTIA). The FCC and NTIA allocate the radio spectrum for different purposes: lower frequencies (larger, farther traveling radio waves) better suit AM radio, while two-way cellphone communication uses higher frequencies to transfer more data, but at the sacrifice of covering a smaller geographical area or cell.

Cells are provided by cell sites, also called base stations, which are facilities that contain cellular telephone transmitters. The size of a cell can vary depending on a cell site’s transmission power and the surrounding terrain. Typically, someone using their smartphone on the go will move through a honeycomb of cells and switch from cell site to cell site as they click around a website.

But making these radio connections comes at great expense to battery life, a valuable resource for mobile phones. For this reason, different states of ‘awakeness’ exist to mitigate power drain as much as possible. Depending on recent activity, a smartphone’s state can range from disconnected to active or somewhere in between (idle); in milliwatts, that is a range from less than 15 mW to upwards of 3,500 mW!

However, this, to a great extent, falls outside of the user’s control. Instead, we place our trust in the Radio Resource Controller (RRC), a protocol that masterminds all connections between your smartphone and a cell site, and dictates your device’s ‘awakeness’ state. So unlike a desktop computer, which enjoys a continuous network connection through an Ethernet cable, wireless devices face a constant struggle to balance connection against battery life. In other words, a lower power state may prolong battery life, but it introduces unavoidable latency to reestablish a connection to the radio link. In 4G, the RRC resides directly in the cell tower in an effort to further reduce this latency.

[Before we move on, a quick note to web developers: every radio transmission, regardless of size, transitions a smartphone into an active state. No amount of data is negligible because any transmission forces the device into the same battery-draining pace. Therefore, best practices prescribe delivering as much data in one shot as possible, and keeping total transmissions to a minimum.]

Following the initial negotiations of the RRC, a component named Radio Access Network (RAN) allocates radio spectrum and connects the hand-held device with the Core Network (CN), which in LTE is called the Evolved Packet Core (EPC). The RAN also speaks to the CN about the user’s location, a rather difficult and complicated duty since connections jump from tower to tower as a user moves or a cell saturates.

The CN carries the data packets from the RAN to the worldwide web and back with the help of its own components: Serving Gateway (SGW) and Packet Data Network Gateway (PGW). SGW receives the packets from the cell tower and routes them to the PGW which assigns the device an IP address. The device can now communicate with the public internet and ultimately reach the servers housing the URL’s content.

The desired data then begins its return journey. It first returns to the PGW, where it is encapsulated and scrubbed for quality of service. The packets continue back to the SGW, which in turn works with (this is the last acronym, I promise!) the Mobility Management Entity (MME). The MME provides the Serving Gateway with location data, allowing the SGW to route the packets to the RAN and, finally, to your phone.

One note on the return path: another battle between battery life and latency can occur if your smartphone settles into an idle state. As mentioned above, the SGW does not know the exact location of the device it must route the packets back to; instead, it uses a logical grouping of cell towers to maintain only a rough idea of the user’s location, asking the MME to help it pinpoint the device. This is done to avoid draining the device’s power.

In order to discern a device’s exact location, the radio base station must speak with the device, and as we know, that means an active state and many mW on behalf of the device. The SGW’s logical grouping process permits the smartphone to nap and save power, only waking periodically to listen for a broadcast from nearby towers. If the MME does not know the smartphone’s location, it pages the towers, asks them to broadcast, and once the device awakes, receives its exact location. The MME then informs the SGW and the packets continue on their way.

I point this out to emphasize the simple fact that tradeoffs exist no matter how you choose to access the internet, be it via wire or via waves. Although the mobile path progression laid out above is in no way comprehensive, its purpose is to highlight several of the costs and benefits of choosing mobility.

So consider your smartphone, the wireless infrastructure it relies on, and the websites it surfs. These parts must work together; and if we understand the reasoning for their designs and functions, and the resulting latencies, it may help us improve mobile experience and better address the growing trend of mobile internet usage.


The post Understanding the Anatomy of a Wireless Connection appeared first on Catchpoint's Blog.

5 New WPO Features

Wed, 06/25/2014 - 09:42


Today I am happy to announce that we have launched a new version of Zoompf WPO! We’ve added some great new features and other improvements based largely on feedback we received from our users. Here is what is new:

Optimized Image Comparison

As I’ve written about many times before, images are a huge part of the web. Zoompf tests not only for lossless image optimizations, but also lossy image optimizations. Lossy optimizations are especially important because they can reduce images by 50% or more with only a slight loss in quality. As I presented at the Velocity Conference, when done properly these lossy optimizations are largely invisible to the user. However, designers are often sensitive to any change to the source image. They want to compare any optimization with the original image to see how the image has changed.

To help our users see the differences between a lossy optimized image and the original, we added an interactive image comparison control to Zoompf WPO, as shown below:

This control allows you to slide a divider back and forth, exposing more or less of the original or optimized image. This allows you to quickly compare how a lossy optimization alters the source image. It is our hope that this gives designers the reassurance of knowing that an optimization will not negatively impact their users. Zoompf WPO shows this control for all image optimizations.

Lossy Optimization Bundle

One of the most loved features of WPO is our Optimized Resource Bundle. While Zoompf is scanning your site, it is performing lossless and lossy optimizations on your content to determine if a resource can be optimized and by how much. Since we have done all the optimizations for you already, we provide these optimized resources as a ZIP file you can download. In the past, the optimized resource bundle has only contained the losslessly optimized versions of your content. This was to ensure that designers could copy over the optimized content without fear of reducing quality.

Now that you can easily see the impact of lossy image optimizations with the Optimized Image Comparison control mentioned above, we wanted to provide an easy way to get these images in bulk. So we have added the Lossy Optimization Bundle. This bundle includes a mix of lossless and lossy optimized content. If an image can be optimized using both lossless and lossy methods, the smaller result is included. Using the lossy optimization bundle, designers can be sure they are using the best optimizations possible for their content.

You can download the lossy optimization bundle under the “Reports” section while looking at a scan.

URL Shading and 3PC Labels

Knowing which pages are affected by an issue is the first step in resolving the problem. In the past, Zoompf displayed the URL of an affected page as a fully qualified URL in a single color. If the URL was inside of a table, we might truncate the URL after a few hundred characters.

In retrospect, this was a terrible approach from a readability perspective. It’s hard for the eye to mentally parse a long, single line of text without any spaces and in a single color. We also found that simply truncating a URL was a poor way to reduce its size.

Our insight came when we realized that a URL is composed of different parts, and some parts are more important than others when trying to figure out what a URL resolves to. So we used a new approach, which looks like this:

We use color to emphasize the important parts of a URL (the hostname, the file name if present) and deemphasize the less important parts (the path, the query string). We also are smart about how we truncate and where we truncate. In the image above, we reduce the size of the path, and add an ellipsis. The result is a far more readable URL.

We also now label a URL as being third party content, as shown below:

This label lets you know the content is from a third party, so you can quickly see which problems affect your content and which affect third-party content. As always, you can control whether Zoompf tests third-party content for performance issues using the “Audit Third Party Content” option under the “Options” section when launching a new scan. Our third-party content detection is powered by our database on ThirdPartyContent.org.

Targeted Response Bodies

I started my career in software the way many do: working in the QA department testing code. A big hassle was often convincing the developer that a problem was real. I quickly found that if I could include some evidence of the problem, like a screen shot, there was less back-and-forth with the developer as they fixed the issue.

To help QA and developers resolve issues faster, we have always shown the response body and highlighted where the problem is in the response. For example, if we detect a page is loading a large number of external JavaScript files, we will highlight all the <script src> tags in the response. However, as HTML pages continue to grow in size, it can be tedious to look through 100 KB of text. While we have always included a “jump to cause of issue” link, there is a better way.

Zoompf WPO now has targeted response bodies. We automatically clip the response to only show the problem text and surrounding areas, as shown in the screen shot below.

You always have the option to see the entire body by clicking on the link. Targeted response bodies help remove the clutter so WPO users can see exactly what the problem is.

Improved Scan Summary Page

Zoompf has long shown a roll-up of how much bandwidth savings is possible for a scan via both lossless and lossy optimizations. Now, in addition to the total percentage savings, we also show you the total byte savings.

One of Zoompf WPO’s key features is our “Summary of Issues”. This provides a list of all the performance issues we discovered on your site, sorted by highest impact and ease of fix. This table answers the question, “If I can only fix 3 performance issues this dev cycle, which should they be?” We made a few requested changes to make it even better. First, we numbered the defects to make the table easier to read. Next, we made the “Severity” and “Affected URLs” columns sortable, so you can see not only the most severe issues, but also which issues affect the most pages.

Additional Improvements

We have made literally hundreds of other improvements to both the WPO web interface and the underlying performance scanner. A few of the less nerdy highlights are:

  • PDF reports now have a footer with page numbers.
  • All our byte sizes are now formatted to use KB or MB. So instead of “376,810” you will see “376 KB”.
  • Our savings tables, which show the savings for optimizing individual items, now includes a new column showing both the percentage savings and the savings in bytes.
  • Our savings tables also now includes the total savings for all items, in bytes, at the bottom of the table.
  • The navigation UI has been simplified, so that reports and scan tools are only available when looking at a scan. Breadcrumb UI now exists on all pages.
  • The “Audit Third Party Content” option now uses the ThirdPartyContent.org database.
Moving Forward

We are very proud to release this new version of WPO to our users. The ideas for the lossy optimization bundle, targeted response bodies, and our improvements to the scan summary page came directly from our customers. If you have any improvements or suggestions, please let us know and we will work hard to incorporate them into the product.

Want to see what WPO is all about? You can see videos on the Zoompf Youtube Channel where I use WPO to check real websites in our “How Fast Is” series. If you’d like a preview of what Zoompf WPO would find on your website, you can use our free performance scan.

The post 5 New WPO Features appeared first on Zoompf Web Performance.

How to Monitor Swift/iOS8 Applications for Crashes and Performance Issues

Wed, 06/25/2014 - 06:00

Apple just came out with the new programming language Swift. According to Apple, Swift will make it a lot easier and more fun to develop apps for both iOS and OS X. That’s in contrast to the current language, Objective-C, which is somewhat antiquated and considered by many to be difficult to use. Although […]

The post How to Monitor Swift/iOS8 Applications for Crashes and Performance Issues appeared first on Compuware APM Blog.

REST tips

Tue, 06/24/2014 - 12:50

At SitePen, we have long been advocates for building web applications on a RESTful architecture. Over the last several years, it has been exciting to see organizations increasingly provide RESTful endpoints for the Dojo-based front-ends that we support and develop. A well-designed REST backend can be an excellent foundation for manageable, scalable applications that will be ready to evolve into the future. I wanted to share a few tips for designing a set of RESTful services.

The central descriptor in any REST request is the URL, and consequently, defining the URLs and their meanings is central to creating a REST API. The most important principle to remember in this process is that URLs represent resources, the conceptual entities that make up our application (whether they be contacts, friends, products, etc.). Often this is described as the “noun” or the object of a request. We can perform various types of actions (“verbs”) on this resource, but the URL declares what the target of our action is, while the method (along with the request body for some methods) describes what the action will be. With this foundation, let’s first consider a few things to avoid in URLs:

  • It is considered verboten to use URLs that describe the expected format of the response. URLs that end with .json or .xml are improperly communicating the format in the URL. While this is expedient for simple files, for more developed resources we can do better. The format of a response is part of the representation of a resource, and REST encourages multiple representations of each resource, giving a clear indication that we are still talking about the same resource, even if there may be requests for the same data in JSON, XML, or some other format. HTTP makes this easy: we can include an Accept header to clearly indicate the expected format, and even negotiate if the first choice is not available (see the sketch after this list).
  • Don’t indicate your server technology in your URLs. URLs that are suffixed with .jsp or .php extensions clearly indicate the technology being used, and are irrelevant to what data is actually being retrieved or how the client will use it. If you decide to switch from PHP to Node.js in a few years, while trying to maintain backwards-compatibility for your clients, all those URLs with .php extensions will be either painful to change, or ugly to preserve.
  • Try to avoid verbs, that is, an action to be taken, within a URL. Again, a URL indicates a resource, an entity or noun. The primary action to be taken should be described by the HTTP method. Now, of course, we may actually have more actions for a given URL/resource than corresponding HTTP methods. That’s fine. We can have multiple actions going through the POST method, with a parameter (whether it be a JSON property or a URL-encoded query parameter) embedded within the request body to distinguish different actions, without needing separate URLs.
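As an illustration of the first point above, content negotiation keeps the format out of the URL and in the Accept header. Here is a minimal sketch using Python's requests library rather than Dojo, against a hypothetical endpoint:

```python
import requests  # third-party: pip install requests

# Same resource URL, two representations: the format lives in the Accept header,
# not in a .json or .xml suffix. The endpoint is hypothetical.
url = "https://api.example.com/contacts/42"

as_json = requests.get(url, headers={"Accept": "application/json"})
as_xml = requests.get(url, headers={"Accept": "application/xml"})

print(as_json.headers.get("Content-Type"))  # e.g. application/json
print(as_xml.headers.get("Content-Type"))   # e.g. application/xml
```

The URL stays the same in both requests; only the requested representation changes.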
Generating Identifiers

In many data-driven applications, you will often need to create new objects which will need corresponding identifiers. In most applications, the server is responsible for generating unique ids for new objects. When this happens, we typically need to communicate the newly created object’s identifier to the client in case it needs to reference the object for future updates. Generally, this is done by making a POST request to create the new object (or row in the database table), and then the server can simply return the new object, with its identifier, in the HTTP response. Dojo’s JsonRest and Cache stores, as well as the new dstore’s Rest store, are designed to work with this convention, examining the response for any updates to the new object, including the identifier.

Another possibility is to generate identifiers on the client side. This can be particularly beneficial in applications that need to support offline use. When a server generates an identifier, a new object remains in an unidentified state until the server provides the identifier. This can be problematic if significant time may elapse before receiving the authoritative identifier, and additional modifications may take place in the meantime. When the client generates the identifier, the new object can be identified immediately; there is no need to wait for server assignment. Remember that when a client generates identifiers, they usually must be randomly generated: the client can’t easily coordinate incrementing ids with other clients to avoid conflicts.
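Here is a small sketch of both identifier strategies, again in Python with a hypothetical endpoint and response shape: reading a server-assigned id back from the POST response, versus generating a random UUID on the client so the object is identifiable immediately.

```python
import uuid
import requests  # third-party: pip install requests

url = "https://api.example.com/contacts"  # hypothetical endpoint

# Server-generated id: POST the new object and read the id back from the response
# (assumes the server echoes the created object, including its id).
created = requests.post(url, json={"name": "Ada"}).json()
server_id = created["id"]

# Client-generated id: a random UUID avoids coordinating counters across clients,
# so the object is identifiable immediately, even before the request completes.
client_id = str(uuid.uuid4())
requests.put(url + "/" + client_id, json={"id": client_id, "name": "Grace"})
```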

Exceptions

The purpose of REST is to provide infrastructure and patterns for better scalability, performance, and manageability of applications. However, this is a tool; application-specific practical concerns can always trump strict adherence to any specific pattern. There are a few frequent situations that often call for exceptions from strict interpretations of REST:

  • JSONP – Many applications need to access data through cross-domain communication. Ideally, we would use cross-origin XHR (CORS) for this, but legacy browsers, as well as servers, often constrain us to use JSONP, which limits our ability to use headers to adhere to strict HTTP semantics.
  • Multiple-Action/Transactions – Client applications may have multiple operation requests that they wish to send to the server. Sending these through separate HTTP requests may be unacceptably slow. In addition, we may even need multiple operations to be performed atomically in a single transaction. These concerns often require defining an endpoint that can handle a collection of operations.
  • Session-Based Authentication – While HTTP has its own authentication mechanism, session-based authentication has proven much more practical for maintaining an application-specific branded login interface, and other aspects of maintaining authentication.

dojo/request, a collection of tools for managing I/O communication, includes a registry that makes it easy to create your own custom derivative of the XHR provider (or other providers) that includes authentication tokens or other validation to facilitate authentication while protecting against cross-site request forgery attacks. This makes it easy to maintain fairly simple REST-style URLs in application code, while allowing the registry to negotiate transport-specific details.

REST Benefits

A well-designed REST interface can provide a solid foundation for your application. Your application can take advantage of caching for better performance, discoverability for easier consumption of server services, and a consistent interface for managing and independently evolving client and server components. Dojo helps you to easily build on top of REST services and use consistent resource-oriented APIs and strategies throughout your application.

We cover dojo/request, dojo/store, and RESTful application architecture principles in depth in our Dojo workshops offered throughout the US, Canada, and Europe, or at your location. We also provide expert JavaScript and Dojo support and development services, to help you get the most from JavaScript, Dojo, and RESTful application architecture. Contact us to discuss how we can help you best follow RESTful principles within your application.

SitePen offers beginner, intermediate, and advanced Dojo Toolkit workshops to make your development team as skilled and efficient as possible when creating dynamic, responsive web applications. Sign up today!
