Perf Planet

News and views from the web performance blogosphere

Catchpoint Quiz: Who Do You Become When a Website Stalls?

Wed, 08/20/2014 - 12:14

We’ve all experienced the maddening sensation of clicking on a website link, only to sit there and stare at a painfully slow loading screen while the spinning hourglass or pinwheel taunts us.

How we react when this happens, however, is a different story. Take the quiz below to find out who you become when you encounter internet latency.

Afterwards, be sure to enter to win a free four-day pass to Velocity 2014 in New York City!


The post Catchpoint Quiz: Who Do You Become When a Website Stalls? appeared first on Catchpoint's Blog.

Announcing the Zoompf Alerts Beta!

Wed, 08/20/2014 - 11:04


Today we are ecstatic to announce the public beta launch of Zoompf Alerts!

Zoompf Alerts is a new website performance solution that continuously scans your web pages for changes that introduce new performance defects. Unlike traditional performance monitoring solutions, Zoompf Alerts does not focus on measuring page load times. Instead, Zoompf Alerts detects when actual performance defects are introduced on your site – for example that new 1 MB unoptimized hero image, or that large CSS file no longer served with caching.

By focusing on specific front-end performance defects rather than nebulous and inconsistent page load times, Zoompf helps you answer the question “why did I become slow?” If you ever felt you’ve wasted hours staring at waterfall charts to no avail, then the Zoompf Alerts beta is for you!

The Zoompf Alerts beta is completely free and you can opt-out at any time. To learn more, check out our live demo accounts, and then sign up here:

Join Alerts Beta

About Zoompf Alerts

Over the last 6 months, the team at Zoompf has been hard at work building and refining the Zoompf Alerts product, using the direct feedback from over 25 different performance-minded companies. We’ve gone through 3 major releases, streamlining and refining the product to accomplish these two goals: Find only the performance problems you care the most about, and resolve them quickly.

We set out to accomplish this with 3 main features, which we will now retroactively give impressive and marketing-y names: Continuous Coverage, Targeted Defect Detection, and Superior Defect Intelligence.

Continuous Coverage

Performance defects are just like functional defects. The earlier you find them, the cheaper the impact to your company. This is why Zoompf Alerts continuously scans your website throughout the day, alerting you immediately when new defects are detected.

In addition to timely information, this approach also allows you to track how your performance changes over time. How does your site performance compare to last week? What problems have you introduced, and which are still outstanding? Which areas of your website need the most attention? Zoompf Alerts provides the visibility to answer these questions.

Below you can see how the new Performance Dashboard charts how your defects are changing over time.

Better still, you can view detailed performance “snapshots” that capture your performance at specific points in time, allowing you to view your website as it was when the defects were first introduced.

You can also receive daily or weekly summary emails. This allows you to easily track your performance over time in a mobile-friendly, easily digestible format.

Targeted Defect Detection

While website performance can be critically important to your business, we understand this can be one of many priorities. To help, Zoompf Alerts provides Targeted Defect Detection, allowing you to customize how much detail you receive. This customization is accomplished with 3 configurable features: Alert Thresholds, Third Party Content (3PC) Filters, and content Ignore Rules.

Alert Thresholds

Alert Thresholds control the minimum defect severity before you get notified. For example, if time is tight you may choose to set a high alert threshold to alert only if images can be losslessly optimized beyond 200 KB. Conversely, if you are squeezing every last ounce of speed out of your site, you may choose a low alert threshold to notify you of virtually all possible image optimizations. The choice is yours!

You can even choose to disable specific performance rules if they are not relevant to your website.

Third Party Content (3PC) Filters

While Third Party Content (externally linked content that you do not directly control) can often have a significant impact on your site performance, you are often limited in your options to “fix” 3PC defects. Zoompf recognizes 3PC problems can add to the “noise” your team has to process, so by default we utilize our extensive database of third party content to filter out 3PC results so you get alerted on only those defects you can actually fix.

If you do decide you want to know about issues with 3PC, you can still view clearly tagged 3PC content in your results as is shown below.

Ignore Rules

Ignore Rules provide you a powerful mechanism to filter out unwanted content from your defect results. Say, for example, you have a certain CSS file with a known defect that you just don’t have time to fix right now. Rather than continuously receiving alerts on that file, you can choose instead to “ignore” that file in future alerts. Ignore Rules allow you to ignore specific URLs, domains, performance rules, and more!

And don’t worry, ignored defects don’t disappear completely. We still track the problems, and a subtle message keeps you informed how many defects are being suppressed. You can always restore ignored defects later.

With alert thresholds, 3PC filters, and ignore rules, Zoompf lets you control what defects should be detected, and how severe the problem must be before we flag the issue. This is another way to automate testing for performance defects, without getting overwhelmed.

Superior Defect Intelligence

Detecting a performance defect is just half the battle. Zoompf Alerts also helps you see where the problem is, and gives detailed information on how to fix it. And in some cases, we can even fix the problem for you!

First, Zoompf Alerts stores the full HTTP response for any performance defect, captured at the point in time the defect was detected. We then highlight the specific code in the HTTP response so you can quickly spot any problems!

Zoompf Alerts also provides step by step instructions on how to fix most problems, often highlighting solutions specific to the technologies you are using!

And for certain performance defects, like unoptimized images or un-minified files, you can actually download the optimized versions as well!

In Closing

By continuously scanning your website for performance defects with Zoompf Alerts, you can:

  • Automatically find specific performance defects as they are introduced.
  • See only the performance defects you care about.
  • Get detailed information on the cause of the defect, and steps to fix it.

We are so excited to make the beta public. Join the (free!) beta and help keep your site fast!

Join Alerts Beta

The post Announcing the Zoompf Alerts Beta! appeared first on Zoompf Web Performance.

Easily Boost your Web Application by Using nginx

Tue, 08/19/2014 - 08:52

More and more Web sites and applications are being moved from Apache to nginx. While Apache is still the number 1 HTTP server, with more than a 60% share of active Web sites, nginx has now taken over 2nd place in the ranking and relegated Microsoft’s IIS to 3rd. Among the top 10,000 Web sites […]

The post Easily Boost your Web Application by Using nginx appeared first on Compuware APM Blog.

Australia’s Attitude toward Website and Application Monitoring – she’ll be right mate

Wed, 08/13/2014 - 16:00

I recently attended the Online Retailer Conference in Sydney, and I couldn’t resist the temptation to survey the audience.  I recalled a conversation I had with a journalist a few months ago, who said ‘I don’t think anyone monitors apps in Australia.  The developers build them; then they just throw them over the fence’. He […]

The post Australia’s Attitude toward Website and Application Monitoring – she’ll be right mate appeared first on Compuware APM Blog.

The Irony of Google’s HTTPS Mandate

Tue, 08/12/2014 - 10:41

With last week’s announcement that Google will start rewarding those sites with HTTPS security configurations with better search result rankings, many companies will now be faced with a difficult decision.

HTTPS adds a layer of security on top of standard HTTP, and has become necessary for companies that deal in eCommerce, financial tracking, or any other sort of site on which users have to log in or are expected to enter sensitive information. Therefore, Google’s rationale for taking this measure is to ensure that the pages it directs its users to are operating as securely as possible (even if HTTPS is hardly a cure-all for security issues online).

It would stand to reason, therefore, that sites who are still operating under HTTP should switch to the more secure connection. However, it’s not that simple.

For one, the language that the company used to make this announcement is somewhat ambiguous. They have not specified exactly how much it would impact a site’s rankings, yet companies are compelled to do it anyway because they’d rather be safe than sorry. And as a drawback to the more secure connection, implementing an HTTPS system can be costly for any business, especially a start-up that is just getting off the ground and has to monitor its finances closely.

The irony of all of this is that in the past, Google has urged web developers to optimize their sites’ performance as much as possible in order to boost their search rankings. Now, however, it’s telling them to use a slower connection protocol whether they need the added security or not. Additional complications will arise in the use of third party tags. If a site switches to SSL but uses third parties which have not, it’s going to be dragged down in Google’s search results through no fault of its own.

Don’t get us wrong – Google’s added emphasis on internet security for its users is grounded in practical and justifiable motives. But in doing so, it’s creating additional headaches for sites that don’t have security issues in the first place. As the company with the greatest amount of influence over the web as we know it, perhaps they should look more closely at the wide-ranging effects that their actions have on the little guys.


The post The Irony of Google’s HTTPS Mandate appeared first on Catchpoint's Blog.

Understanding Application Performance on the Network – Part VII: TCP Window Size

Tue, 08/12/2014 - 07:25

In Part VI, we dove into the Nagle algorithm – perhaps (or hopefully) something you’ll never see. In Part VII, we get back to “pure” network and TCP roots as we examine how the TCP receive window interacts with WAN links. TCP Window Size Each node participating in a TCP connection advertises its available buffer […]

The post Understanding Application Performance on the Network – Part VII: TCP Window Size appeared first on Compuware APM Blog.

Why Internal Application Monitoring Is Not What Your Customer Needs

Thu, 08/07/2014 - 13:27

When faced with choosing between internal application monitoring and external synthetic monitoring, many business professionals are under the impression that as long as their website is up and running, there must not be any problems with it. This, however, couldn’t be further from the truth.

An external synthetic monitoring tool is actually more important than anything else, because in this world of high-speed, instant gratification (eCommerce site administrators are nodding their heads), if your customers are affected by poor performance, you will not have customers for long.

For a simple explanation of this concept, imagine a fast food restaurant. First, out of all the fierce competition, a customer chooses your specific restaurant. Once they have, you don’t want to turn that business away. If the door is unlocked, the store is open, or in the case of a website, “available.”

Once at the counter, the customer puts in the order, which would be the same as someone going to a website to perform any interaction. These interactions would be the clicks or “requests” within the site. Once the order goes in and as they wait at the counter, the cheeseburger, fries, and apple pie they requested are all delivered in a reasonable and expected amount of time.

However, there appears to be a problem with the drink, because it’s taking longer than usual to get it. Behind the scenes, the drink guy has to change the cylinder containing the mix, and subsequently the drink order takes 10 times longer to be filled. The customer does not know this, and as time passes, frustration grows.

When the drink is finally delivered, the unhappy customer decides to complain to the manager about the speed (or lack thereof) of getting his order. When questioned, each person explains “I did MY job.” This includes the drink guy, whose only responsibility was to deliver the drink as ordered, which in this case, required changing the cylinder. Speed was of no consequence to him, and in his mind, he fulfilled his task to expectations. So from an internal perspective, this is absolutely true. Everyone did their job 100% correctly.

The customer’s (external) perception, however, is a completely different story. To the customer, this was a complete failure and frustrating experience which is sure to tarnish his perspective of the entire brand for the foreseeable future, thus preventing any return for a long time. Even worse, it increases the likelihood of this displeasure being shared with friends on social media, which as we all know can spread like wildfire and irreparably damage a brand.

Hence the importance of monitoring from the outside from the customer’s perspective. This is an absolute necessity in order to avoid customer frustration due to behind-the-scenes problems that slow down web performance and drive away new business.

Whereas internal monitoring provides a valuable – yet incomplete – look at how your site is performing by your own standards, external monitoring can act as a buffer to bad experiences and catch problems before they become an issue for customers. This will keep them happy and satisfied, and more importantly, encourage them to keep coming back.

The post Why Internal Application Monitoring Is Not What Your Customer Needs appeared first on Catchpoint's Blog.

How to Spruce up your Evolved PHP Application – Part 2

Wed, 08/06/2014 - 07:00

In the first part of my blog I covered the data side of the tuning process on my homegrown PHP application Spelix: database issues, caching on both the server and the client. By just applying these insights I could bring Spelix to a stage where the number of users could be increased by more than […]

The post How to Spruce up your Evolved PHP Application – Part 2 appeared first on Compuware APM Blog.

Performance Testing Mobile Apps: How to capture the complete end-user experience

Wed, 08/06/2014 - 05:06
On 26th August, Ian Molyneaux will present a webinar on the challenges around Performance Testing for the mobile end-user.

Bar Exam Failures Highlight Importance of Performance Testing

Thu, 07/31/2014 - 08:51

It goes without saying that old-fashioned methods of doing business and keeping records are increasingly shifting into digital and/or online formats. Usually that means making life easier for everyone involved. Of course, when the systems behind such technological advancements fail, it can mean a nightmare instead.

This week, law students around the country have been going through the painstaking process of taking the bar exam, the culmination of weeks – sometimes months – of laborious studying that will determine whether they can actually practice law after pouring years of their life and tens of thousands of dollars into their education.

So you can imagine their frustration when, after completing the essay part of the exam, those students were unable to send their answers in.

ExamSoft, whose software is used by many states to conduct their bar exams (as well as by individual law schools for regular exams throughout the year), experienced server overload when it came time for those thousands of exam takers to submit their essays, as those trying to do so came face to face with ExamSoft’s version of the pinwheel of death:

The failures brought to many people’s minds the disaster that was the introduction of so many months ago, while simultaneously putting to rest the idea that placing government services in the hands of a private company guarantees success.

As the legions of test takers took to social media to vent their frustrations at the company, the team at ExamSoft went into panic mode as their IT staff desperately tried to correct the problem while the rest supposedly began contacting the state bars in order to explain why so many people seemed to miss the deadline to submit their exams.

The process for taking the exam differs from state to state, and seeing as ExamSoft is used by several state bars, the terrible experiences on Tuesday varied. However, there were reports of people being told to try submitting from multiple locations – none of which worked – and then waiting for as much as three hours while ExamSoft tried to solve the problem.

Adding to the students’ fury is the fact that they have to pay a significant chunk of money simply for the “privilege” of using the software. To download it can cost as much as $150 (again, depending on the state), and several state bars have an additional triple-digit fee on top of that. After being forced to pay that much, one should be able to expect a certain degree of functionality, but obviously that didn’t happen on Tuesday.

The problem was later blamed on a processing issue on ExamSoft’s end, which they apologized for on their Facebook page, and those users who experienced the errors were given deadline extensions so that they won’t be penalized. But the fallout from this fiasco is far from over, as the company has also left itself open to potential legal ramifications (after all, these are future lawyers we’re talking about), seeing as those end users who had to spend hours dealing with these failures were forced to take valuable time away from the preparations for the second part of the exam the following day.

In the meantime, we once again see the importance of performance and capacity testing BEFORE undertaking a big online task. Failure to do so can have wide-ranging ramifications for a long time.


The post Bar Exam Failures Highlight Importance of Performance Testing appeared first on Catchpoint's Blog.

Sanity Check: Object Creation Performance

Tue, 07/29/2014 - 19:59

[Update: Seriously, go read this post by Vyacheslav Egorov @mraleph. It's the most fantastic thing I've read in a long, long time!]

I hear all kinds of myths and misconceptions spouted as facts regarding the performance characteristics of various object creation patterns in JavaScript. The problem is that there are shreds of fact wrapped up in all kinds of conjecture and, frankly, FUD (fear-uncertainty-doubt), so separating them out is very difficult.

So, I need to assert() a couple of things here at the beginning of this post, so we’re on the same page about context.

  1. assert( "Your app creates more than just a few dozen objects" ) — If your application only ever creates a handful of objects, then the performance of these object creations is pretty much moot, and you should ignore everything I talk about in this post. Move along.
  2. assert( "Your app creates objects not just on load but during critical path operations." ) — If the only time you create your objects is during the initial load, and (like above) you’re only creating a handful of these, the performance differences discussed here are not terribly relevant. If you don’t know what the critical path is for your application, stop reading and go figure that out first. If you use an “object pool” where you create objects all at once upfront, and use these objects later during run-time, then consult the first assertion.

Let me make the conclusions of those assertions clear: if you’re only creating a few objects (and by that, I mean 100 or less), the performance of creating them is basically irrelevant.

If you’ve been led to believe you need to use prototype, new, and class style coding in JS to get maximum object creation performance for just a couple of objects, you need to set such silliness aside. Come back to this article if you ever need to create lots of objects.

I suspect the vast majority of you readers don’t need to create hundreds or thousands of objects in your application. In my 15-year JS dev career, I’ve seen a lot of apps in a lot of different problem domains, and it’s been remarkably rare for a JavaScript app to legitimately need to create so many objects that the performance of such creation was something to get worked up about.

But, to read other blogs and books on the topic, just about every JavaScript application needs to avail itself of complicated object “inheritance” hierarchies to have any hope of reasonable performance.


Object Creations Where Performance Isn’t Relevant

Let’s list some examples of object creation needs where the performance differences will never be a relevant concern of yours, beyond “premature micro optimizations”:

  • You have some UI widgets in your page, like a calendar widget, a few smart select drop-downs, a navigation toolbar, etc. In total, you have about a dozen objects (widgets) you create to build the pieces of your UI. Most of them only ever get created once at load time of your app. A few of them (like smart form elements) might get recreated from time to time, but all in all, it’s pretty lightweight on object creations in your app.
  • You create objects to represent your finite data model in your application. You probably have less than 10 different data-domains in your whole application, which probably means you have at most one object per domain. Even if we were really liberal and counted sub-objects, you have way less than 50 total objects ever created, and few if any of them get recreated.
  • You’re making a game with a single object for each character (good guy, enemy, etc). At any given time, you need 30 or fewer objects to represent all these characters. You can (and should) be creating a bunch of objects in an object pool at game load, and reusing objects (to avoid memory churn) as much as possible. So, if you’re doing things properly, you actually won’t be doing an awful lot of legitimate object creation.

Are you seeing the pattern? There’s a whole lot of scenarios where JS developers have commonly said they need to absolutely maximize object creation performance, and there’s been plenty of obsession over micro-optimizations fueled by (irresponsible and misleading) micro-benchmarks on jsPerf to over-analyze various quirks and niches.

But in truth, the few microseconds difference between the major object creation patterns when you only have a dozen objects to ever worry about is so ridiculously irrelevant, it’s seriously not even worth you reading the rest of this post if that’s the myth you stubbornly insist on holding strong to.

Object Creation On The Critical Path

Let me give a practical example I ran into recently where quite clearly, my code was going to create objects in the critical path, where the performance definitely matters.

Native Promise Only is a polyfill I’ve written for ES6 Promises. Inside this library, every time a new promise needs to be created (nearly everything inside of promises creates more promises!), I need to create a new object. Comparatively speaking, that’s a lot of objects.

These promise objects tend to have a common set of properties being created on them, and in general would also have a shared set of methods that each object instance would need to have access to.

Moreover, a promise library is designed to be used extensively across an entire application, which means it’s quite likely that some or all of my code will be running in the critical path of someone’s application.

As such, this is a perfect example where paying attention to object creation performance makes sense!

Simple Objects

Let’s draw up an app scenario and use it to evaluate various object creation options: a drawing application, where each line segment or shape someone draws on the canvas will be individually represented by a JS object with all the meta data in them.

On complex drawing projects, it wouldn’t be at all surprising to see several thousand drawing elements placed on the canvas. Responsiveness while freehand drawing is most definitely a “critical path” for UX, so creating the objects performantly is a very reasonable thing to explore.

OK, let’s examine the first object creation pattern — the most simple of them: just simple object literals. You could easily do something like:

```javascript
function makeDrawingElement(shapeType,coords,color) {
    var obj = {
        id: generateUniqueId(),
        type: shapeType,
        fill: color,
        x1: coords.x1, y1: coords.y1,
        x2: coords.x2, y2: coords.y2,

        // add some references to shared shape utilities
        deleteObj: drawing.deleteObj,
        moveForward: drawing.moveForward,
        moveBackward: drawing.moveBackward
    };
    return obj;
}

var el = makeDrawingElement( "line", { x1:10, y1:10, x2:50, y2:100 }, "red" );
```

OK, so all we’re doing here is creating a new object literal each time we create a new shape on the drawing surface. Easy enough.

But it’s perhaps the least performant of the various options. Why?

As an internal implementation detail, it’s hard for JS engines to find and optimize repeated object literals as shown here. Moreover, as you can see, we’re copying function references (not functions themselves!) onto each object.

Let’s see some other patterns which can address those concerns.

Hidden Classes

It’s a famous fact that JavaScript engines like v8 track what’s called a “hidden class” for an object, and as long as you’re repeatedly creating new objects of this same “shape” (aka “hidden class”), it can optimize the creation quite a bit.

So, it’s common for people to suggest creating objects like this, to take advantage of “hidden class” implementation optimizations:

```javascript
function DrawingElement(shapeType,coords,color) { = generateUniqueId();
    this.type = shapeType;
    this.fill = color;
    this.x1 = coords.x1; this.y1 = coords.y1;
    this.x2 = coords.x2; this.y2 = coords.y2;

    // add some references to shared shape utilities
    this.deleteObj = drawing.deleteObj;
    this.moveForward = drawing.moveForward;
    this.moveBackward = drawing.moveBackward;
}

var el = new DrawingElement( "line", { x1:10, y1:10, x2:50, y2:100 }, "red" );
```

Simple enough of a change, right? Well, depends on your perspective. It seems like an interesting and strange “hack” that we have to add our properties to a this object created by a new Fn(..) call just to opt into such magical optimizations.

In fact, this “hidden class” optimization is often mis-construed, which contributes to its “magical” sense.

It’s true that all those new DrawingElement(..) created object instances should share the same “hidden class”, but the more important optimization is that because DrawingElement(..) is used as a somewhat declarative constructor, the engine can estimate before even the first constructor call how many this.prop = .. assignments will happen, so it knows generally how “big” to pre-size the objects that will be made by the “hidden class”.

As long as this DrawingElement(..) constructor adds all the properties to the this object instance that are ever going to be added, the size of the object instance won’t have to grow later. This leads to the best-case performance in that respect.

The somewhat declarative nature of new Fn(..) and this aids in the optimization estimates, but it also invokes the implication that we’re actually doing class-oriented coding in JS, and it thus invites that sort of design abstraction on top of our code. Unfortunately, JS will fight you from all directions in that effort. Building class-like abstractions on top of your code will often hurt your performance, not help it.
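To make the “pre-sizing” point concrete, here’s a minimal sketch of my own (shape names are illustrative, not from the post) contrasting shape-stable and shape-changing construction:

```javascript
// Keep every instance the same "shape": assign all properties in the
// constructor, in the same order, even ones that start out empty.
function Point(x, y) {
    this.x = x;
    this.y = y;
    this.label = null; // declared up front so the shape never grows later
}

var p1 = new Point(1, 2);
var p2 = new Point(3, 4);
// p1 and p2 can share one hidden class: same properties, same order.

// By contrast, bolting properties on after construction forces the
// engine to transition the object to a new hidden class:
var p3 = new Point(5, 6);
p3.extra = true; // p3's shape now differs from p1/p2
```

Engines differ in the details, but the general guidance (assign everything in the constructor, avoid late property additions) holds across the optimizing JITs.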

Shared .prototype Methods

Many developers will cite that a further problem performance-wise (that is, memory usage, specifically) with this code is the copying of function references, when we could instead use the [[Prototype]] “inheritance” (more accurately, delegation link) to share methods among many object instances without duplication of function references.

So, we keep building on top of the “class abstraction” pattern with code like this:

```javascript
function DrawingElement(shapeType,coords,color) { = generateUniqueId();
    this.type = shapeType;
    this.fill = color;
    this.x1 = coords.x1; this.y1 = coords.y1;
    this.x2 = coords.x2; this.y2 = coords.y2;
}

// add some references to shared shape utilities
// aka, defining the `DrawingElement` "class" methods
DrawingElement.prototype.deleteObj = drawing.deleteObj;
DrawingElement.prototype.moveForward = drawing.moveForward;
DrawingElement.prototype.moveBackward = drawing.moveBackward;

var el = new DrawingElement( "line", { x1:10, y1:10, x2:50, y2:100 }, "red" );
```

Now, we’ve placed the function references on DrawingElement.prototype object, which el will automatically be [[Prototype]] linked to, so that calls such as el.moveForward() will continue to work as before.
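You can verify the sharing directly. A self-contained sketch (with the shape utilities stubbed out just for this demo):

```javascript
// Stub of the shared shape utilities, for demonstration only.
var drawing = {
    moveForward: function () { return "forward"; }
};

function DrawingElement(type) {
    this.type = type;
}
DrawingElement.prototype.moveForward = drawing.moveForward;

var a = new DrawingElement("line");
var b = new DrawingElement("shape");

// Both instances delegate to the SAME function object, rather than
// each carrying its own copied reference:
console.log(a.moveForward === b.moveForward);                       // true
console.log(a.hasOwnProperty("moveForward"));                       // false
console.log(Object.getPrototypeOf(a) === DrawingElement.prototype); // true
```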

So… now it certainly looks like we’ve fully embraced a DrawingElement “class”, and we’re “instantiating” elements of this class with each new call. This is the gateway drug of complexity of design/abstraction that will lead you perhaps to later try to do things like this:

```javascript
function DrawingElement(..) { .. }

function Line(..) { this );
Line.prototype = Object.create( DrawingElement.prototype );

function Shape(..) { this );
Shape.prototype = Object.create( DrawingElement.prototype );

..
```

Be careful! This is a slippery slope. You will very quickly create enough abstraction here to completely erase any potential performance micro-gains you may be getting over the normal object literal.

Objects Only

I’ve written extensively in the past about an alternate simpler pattern I call “OLOO” (objects-linked-to-other-objects), including the last chapter of my most recent book: “You Don’t Know JS: this & Object Prototypes”.

OLOO-style coding to approach the above scenario would look like this:

```javascript
var DrawingElement = {
    init: function(shapeType,coords,color) { = generateUniqueId();
        this.type = shapeType;
        this.fill = color;
        this.x1 = coords.x1; this.y1 = coords.y1;
        this.x2 = coords.x2; this.y2 = coords.y2;
    },

    // shared shape utilities
    deleteObj: drawing.deleteObj,
    moveForward: drawing.moveForward,
    moveBackward: drawing.moveBackward
};

var el = Object.create( DrawingElement ); // notice: no `new` here
el.init( "line", { x1:10, y1:10, x2:50, y2:100 }, "red" );
```

This OLOO-style approach accomplishes the exact same functionality/capability as the previous snippets. Same number and shape of objects as before.

Instead of the DrawingElement.prototype object belonging to the DrawingElement(..) constructor function, with OLOO, an init(..) method belongs to the DrawingElement object. Now, we don’t need any references to .prototype, nor any usage of the new keyword.

This code is simpler to express.

But is it faster, slower, or the same as the previous “class” approach?

TL;DR: It’s much slower. <frowny-face>

Unfortunately, the OLOO-style implicit object initialization (using init(..)) is not declarative enough for the engine to make the same sort of pre-”guesses” that it can with the constructor form discussed earlier. So, the this.prop = .. assignments are likely expanding the object storage several times, leading to extra churn of those “hidden classes”.

Also, it’s undeniable that with OLOO-style coding, you make an “extra” call, by separately doing Object.create(..) plus el.init(..). But is that extra call a relevant performance penalty? It seems Object.create(..) is in fact a bit slower.

On the good side, in both cases, there are no copied function references on each instance, but instead just shared methods we [[Prototype]]-delegate to, so at the very least, that optimization is in play with both styles of coding.

Performance Benchmark: OO vs OLOO

The problem with definitively quantifying the performance hit of OLOO-style object creation is that micro-benchmarking is famously flawed on such issues. The engines do all kinds of special optimizations that are hard to “defeat” to isolate and test accurately.

Nevertheless, I’ve put together this test to try to get some kind of reasonable approximation. Just take the results with a big grain of salt:
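The embedded benchmark itself isn’t reproduced here, but a minimal sketch of this kind of micro-benchmark might look like the following. Point, PointProto, bench, and the iteration count are illustrative stand-ins, not the article’s actual test case:

```javascript
// Constructor-style creation.
function Point(x, y) {
  this.x = x;
  this.y = y;
}

// OLOO-style creation: Object.create() plus an explicit init() call.
var PointProto = {
  init: function (x, y) {
    this.x = x;
    this.y = y;
    return this;
  }
};

// Time `iterations` calls of `fn` and report a rough ops/sec figure.
function bench(label, fn, iterations) {
  var start = Date.now();
  for (var i = 0; i < iterations; i++) {
    fn(i);
  }
  var elapsed = Date.now() - start;
  console.log(label + ": ~" +
    Math.round(iterations / ((elapsed || 1) / 1000)) + " ops/sec");
  return elapsed;
}

var N = 1000000;
bench("constructor", function (i) { return new Point(i, i); }, N);
bench("OLOO", function (i) { return Object.create(PointProto).init(i, i); }, N);
```

As noted above, numbers from a loop like this are only a rough approximation: engines may optimize away creations whose results are never used, so treat any figure it prints with a big grain of salt.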

So, clearly, OLOO is a lot slower according to this test, right?

Not so fast. Let’s sanity check.

Importantly, context is king. Look at how fast both of them are running.

In recent Chrome, OLOO is running at up to ~2.3 million ops/sec. If you do the math, that’s a bit over 2 per microsecond. IOW, each creation operation takes ~430 nanoseconds. That’s still insanely damn fast, if you ask me.

What about classical OO-style? Again, in recent Chrome, ~30 million objects created per second. 30 per microsecond. IOW, each creation operation takes ~33 nanoseconds. So… we’re talking about the difference between an object creation taking 33 nanoseconds and one taking 430 nanoseconds.

Let’s look at it more practically: say your drawing app needs to create 10,000 objects (a pretty significantly complex drawing, TBH!). How long will each approach take? Classical OO will take ~330 microseconds, and OLOO will take ~4.3 milliseconds.

That’s a difference of 4 milliseconds in an absolutely worst-case sort of scenario. That’s ~1/4 of the average screen refresh cycle (16.7ms for 60fps). You could re-create all ten thousand objects three or four times, all at once in a row, and still only drop at most a single animation frame.

In short, while “OLOO is 90% slower” seems pretty significant, I actually think it’s probably not terribly relevant for most apps. Only the rarest of apps would ever be creating several tens of thousands of objects per second and needing to squeeze out a couple of milliseconds per cycle.


First and foremost, if you’re not creating thousands of objects, stop obsessing about object creation performance.

Secondly, even if you are creating ten thousand objects at once, 4 extra milliseconds in the worst case on the critical path is not likely to bring your app to its knees.

Knowing how to identify the cases where you need (and don’t need!) to optimize your object creations, and how to do so, is very important to the mature JavaScript developer. Don’t just listen to the hype. Give it some sane, critical thought.

Velocity highlights (video bonus!)

Tue, 07/29/2014 - 00:08

We’re in the quiet period between Velocity Santa Clara and Velocity New York. It’s a good time to look back at what we saw and look forward to what we’ll see this September 15-17 in NYC.

Velocity Santa Clara was our biggest show to date. There was more activity across the attendees, exhibitors, and sponsors than I’d experienced at any previous Velocity. A primary measure of Velocity is the quality of the speakers. As always, the keynotes were livestreamed. The people who tuned in were not disappointed. I recommend reviewing all of the keynotes from the Velocity YouTube Playlist. All of them were great, but here were some of my favorites:

Virtual Machines, JavaScript and Assembler – Start. Here. Scott Hanselman’s walk through the evolution of the Web and cloud computing is informative and hilarious.

Lowering the Barrier to Programming – Pamela Fox works on the computer programming curriculum at Khan Academy. She also devotes time to Girl Develop It. This puts her in a good position to speak about the growing gap between the number of programmers and the number of programmer jobs, and how bringing more diversity into programming is necessary to close this gap.

Achieving Rapid Response Times in Large Online Services – Jeff Dean, Senior Fellow at Google, shares amazing techniques developed at Google for fast, scalable web services.

Mobile Web at Etsy – People who know Lara Swanson know the incredible work she’s done at Etsy building out their mobile platform. But it’s not all about technology. For a company to be successful it’s important to get cultural buy-in. Lara explains how Etsy achieved both the cultural and technical advances to tackle the challenges of mobile.

Build on a Bedrock of Failure – I want to end with this motivational cross-disciplinary talk from skateboarding icon Rodney Mullen. When you’re on the bleeding edge (such as skateboarding or devops), dealing with failure is a critical skill. Rodney talks about why people put themselves in this position, how they recover, and what they go on to achieve.

Now for the bonus! Some speakers have posted the videos of their afternoon sessions. These are longer, deeper talks on various topics. Luckily, some of the best sessions are available on YouTube:

Is TLS Fast Yet? – If you know performance then you know Ilya Grigorik. And if you know SPDY, HTTP/2, privacy, and security you know TLS is important. Here, the author of High Performance Browser Networking talks about how fast TLS is and what we can do to make it faster.

GPU and Web UI Performance: Building an Endless 60fps Scroller – Whoa! Whoa whoa whoa! Math?! You might not have signed up for it, but Diego Ferreiro takes us through the math and physics for smooth scrolling at 60 frames-per-second and his launch of ScrollerJS.

WebPagetest Power Users Part 1 and Part 2 – WebPagetest is one of the best performance tools out there. Pat Meenan, creator of WebPagetest, guides us through the new and advanced features.

Smooth Animation on Mobile Web, From Kinetic Scrolling to Cover Flow Effect – Ariya Hidayat does a deep dive into the best practices for smooth scrolling on mobile.

Encouraging Girls in IT: A How To Guide – Doug Ireton and his 7-year-old daughter, Jane Ireton, lament the lack of women represented in computer science and Jane’s adventure learning programming.

If you enjoy catching up using video, I recommend you watch these and other videos from the playlist. If you’re more of the “in-person” type, then I recommend you register for Velocity New York now. While you’re there, use my STEVE25 discount code for 25% off. I hope to see you in New York!

Forrester's top tech trends - our take

Mon, 07/28/2014 - 10:00

Forbes recently published an article that outlines the "Top Technology Trends for 2014 and Beyond", as told by experts at Forrester Research.  It's an excellent distillation of the trends we at Yottaa have observed of late.  Some highlights:

Navigation Timing Now Supported in Safari

Fri, 07/25/2014 - 11:24

As loyal followers of the Catchpoint blog are aware, we have been leading the charge for quite some time (since a year ago to the day, to be exact) for Apple to release a version of Safari that supports the Navigation Timing API, even taking the time to create our own petition.

Today, our quest has finally been achieved.

Upon the release of the fourth beta of Safari 8 for OS X and iOS 8, we were thrilled to discover that this version, while still in beta, includes the all-important Navigation Timing API, which means that Safari now joins the rest of the browser giants like Chrome, IE, Firefox, and Opera (although it still does not support Resource Timing).

The importance of this development to the IT industry is massive. With the consistent growth of Real User Measurement as a tool to optimize user experience on the internet, and navigation timing being the primary method for achieving that goal, the ability for developers to understand the experience of Safari users (who account for over a quarter of all internet traffic) means that DevOps professionals now have far more complete, granular insight into user experience than ever before.

Prior to this, a significant blind spot existed when trying to understand web performance on Safari, as all we could do was measure the difference between the time that a page started rendering and when it was completed. Now with Safari’s navigation timing, we can monitor metrics such as DNS, Connect, Response, Wait, and Load times, along with a host of other data points (seen in the left-hand column below) that were unavailable under the old heuristic model (right-hand column).

Metrics with navigation timing (left) vs. heuristic (right)
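As a sketch of how such metrics are derived, each one is simply the difference between two Navigation Timing timestamps. The helper below is a hypothetical example, not Catchpoint’s implementation; in a browser you would pass it window.performance.timing, whose attribute names are the standard PerformanceTiming fields:

```javascript
// Derive common page-load metrics (in milliseconds) from a
// PerformanceTiming-shaped object.
function navigationMetrics(t) {
  return {
    dns: t.domainLookupEnd - t.domainLookupStart, // DNS resolution
    connect: t.connectEnd - t.connectStart,       // TCP connection setup
    wait: t.responseStart - t.requestStart,       // time to first byte
    response: t.responseEnd - t.responseStart,    // content download
    load: t.loadEventEnd - t.navigationStart      // total page load
  };
}
```

In a browser console, calling navigationMetrics(window.performance.timing) after the load event fires would return these durations for the current page.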

The effect will be especially pronounced when looking at mobile users, as Apple’s combined presence across smartphones and tablets means that Safari comprised a whopping 59.1% of mobile browser traffic as of April of this year. And as we all know, with mobile browsing overtaking desktop in the past year and continuing to grow, the ability to glean insight into the mobile users’ experience is invaluable to those of us who are committed to optimizing everyone’s web performance.


The post Navigation Timing Now Supported in Safari appeared first on Catchpoint's Blog.

Our next event: Discuss 5 conversion tactics for retailers with Peter Sheldon of Forrester Research

Fri, 07/25/2014 - 11:22

Increasing conversions is a complex puzzle. At Yottaa we focus on a few methods that accomplish that feat: improving performance, sequencing content, and applying contextual rendering to name a few. But there are many, many more business decisions involved in conversion.  Website design, promotions and pricing, loyalty programs, customer support, and shipping policies all roll into the ultimate customer decision to hit "complete purchase".  

Understanding Application Performance on the Network – Part VI: The Nagle Algorithm

Thu, 07/24/2014 - 06:02

In Part V, we discussed processing delays caused by “slow” client and server nodes. In Part VI, we’ll discuss the Nagle algorithm, a behavior that can have a devastating impact on performance and, in many ways, appear to be a processing delay. Common TCP ACK Timing Beyond being important for (reasonably) accurate packet flow diagrams, […]

The post Understanding Application Performance on the Network – Part VI: The Nagle Algorithm appeared first on Compuware APM Blog.

Dojo FAQ: When should I use DocumentFragments?

Wed, 07/23/2014 - 12:36

A common practice when adding new elements to a document is to add them to a container DIV and then insert that node into a document. A lesser used alternative is the DocumentFragment class. So, what is a document fragment, and when should you use one instead of a container DIV?

The main reason to use a container DIV is performance efficiency. Adding elements to a DOM document one at a time causes a relatively expensive document reflow for each addition, while adding elements to an external node and then inserting that node into the document causes only a single reflow. There are times, however, when using a container DIV is problematic. For example, assume you need to add 1000 table rows to an existing TBODY element. Using a container node would result in an invalid document, since a DIV cannot be a child of a TBODY, and there is no element that can validly wrap TR elements inside one. However, you can use a document fragment to handle this use case efficiently.

A document fragment is a lightweight container that looks and acts like a standard DOM node, with the key difference that when a document fragment is added to a document, the children of the fragment are added in its place. This operation is still treated as a single node insertion, in that the document is only reflowed after all the children have been added, so inserting a fragment is generally as efficient as inserting a single node. In the case of our table example, the new TR elements can be added to a DocumentFragment, and then the fragment can be appended to the TBODY. The result will be that all the TR elements in the fragment are appended directly to the TBODY.

DocumentFragments are nothing new; they are supported as far back as Internet Explorer 6. Dojo’s dom-construct.toDom function (and the dojo._toDom function before that), which converts HTML strings into DOM nodes, has always returned the new nodes in a DocumentFragment. So, to answer the original question, you can use a fragment anywhere you don’t need the structure provided by a container node, and you should use a document fragment in situations where a container node is unsuitable.
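A minimal sketch of the table example above might look like the following. The appendRows helper is hypothetical, not part of Dojo; in a browser, the doc parameter would simply be document:

```javascript
// Append `rowCount` rows to an existing <tbody> using a DocumentFragment.
// A container <div> would produce an invalid document here, since a <div>
// cannot hold <tr> elements inside a <tbody>; a fragment's children are
// instead spliced directly into the <tbody>, with a single reflow.
function appendRows(tbody, rowCount, doc) {
  doc = doc || document; // default to the page's document in a browser
  var fragment = doc.createDocumentFragment();
  for (var i = 0; i < rowCount; i++) {
    var tr = doc.createElement("tr");
    var td = doc.createElement("td");
    td.appendChild(doc.createTextNode("Row " + i));
    tr.appendChild(td);
    fragment.appendChild(tr);
  }
  // One insertion: the fragment's children replace the fragment itself.
  tbody.appendChild(fragment);
}
```

After the call, the fragment is empty and the TBODY contains all of the new TR elements as direct children, exactly as if they had been appended one by one, but at the cost of a single reflow.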

Learning more

SitePen covers DocumentFragments and other DOM manipulation topics in our Dojo workshops offered throughout the US, Canada, and Europe, or at your location. We also provide expert JavaScript and Dojo support and development services. Contact us for a free 30 minute consultation to discuss how we can help.

SitePen offers beginner, intermediate, and advanced Dojo Toolkit workshops to make your development team as skilled and efficient as possible when creating dynamic, responsive web applications. Sign up today!

Tales of Latency: Why Internet Speed Matters

Wed, 07/23/2014 - 12:25

The following is based on a true story. The names have been omitted, and the places and times changed to protect those who suffered from slow internet.

The line at the TSA security screening had already begun to extend beyond the “Line Begins Here” sign, and was only going to get worse. But for this traveler, that was only the latest of his problems. He was now in danger of missing his flight from JFK to Dulles, and if that happened, the odds of getting to the hospital in time were slim.

He hadn’t even wanted to go on this trip; in fact, he had begged his boss to send someone else in his stead. True, his wife’s due date was still three weeks away and this was only going to be a 48-hour commitment, but he’d had a bad feeling – after all, he and his sister had each come nearly a month early.

Sure enough, he had received the frantic call from his wife that she had gone into labor as he was wrapping up his last meeting – which had run interminably long – and dashed out of the building while pulling out his phone to arrange a car to the airport (knowing full well that getting a cab at rush hour in midtown Manhattan was a hopeless cause). But thanks to a maddeningly slow connection on his taxi app, he was now in danger of missing out on the birth of his first child.

Now that he saw the line, though, his fears doubled. Frantically checking the departure board, he noticed that there was another flight on a different airline that was due to leave just half an hour later. If he could get on that one, it should give him enough time to get past security and to the gate before the doors closed.

But he knew that if he got out of line to go back to the ticket counter, it would set him back another 10-15 minutes at least, and he couldn’t afford that. No matter, he thought. The airline had to have a mobile site where you could purchase tickets, and even if he was paying for this one out of his own pocket, it was worth it for such a momentous life event.

So balancing his garment bag on his carry-on, he pulled up the site on his iPhone and frantically started searching for their online ticket ordering section. It took a few clicks and some scrolling around banner ads (would it have killed them to build an Adaptive site rather than a Responsive one?), but finally he found it and saw to his delight that there were three seats left.

After entering his personal info and his credit card number, he tapped the “Purchase Ticket” button with a smile on his face, sure that he was going to be able to make the new flight and get home in time.

Then he waited….

And waited….

And waited….

By the time he refreshed the page, the flight was fully booked. His son was born four and a half hours later, as the plane he finally got on was just touching down in D.C.

Have a tale of your own about how slow internet has affected your life? Share it with us at


The post Tales of Latency: Why Internet Speed Matters appeared first on Catchpoint's Blog.

Velocity Europe 2014 registrations open, Intechnica speakers announced

Wed, 07/23/2014 - 07:51
Registration for Velocity Europe 2014 is now open, and Intechnica are delighted to be represented by Andy Still and Ian Molyneaux on the official schedule.

How to Spruce up your Evolved PHP Application

Wed, 07/23/2014 - 07:00

Do you have a PHP application running and have to deal with inconveniences like lack of scalability, complexity of debugging, and low performance? That’s bad enough! But trust me: you are not alone! I’ve been developing Spelix, a system for cave management, for more than 20 years. It originated from a single user DOS application, […]

The post How to Spruce up your Evolved PHP Application appeared first on Compuware APM Blog.