Feed aggregator

Summer 2014 State of the Union: Ecommerce Page Speed & Web Performance [INFOGRAPHIC]

Web Performance Today - Thu, 07/31/2014 - 11:21

Last week, we released our quarterly State of the Union for ecommerce web performance, which, among other things, found that:

  • The median top 100 retail site takes 6.2 seconds to render primary content and 10.7 seconds to fully load.
  • 17% of the top 100 pages took 10 seconds or longer to load their feature content. Only 14% delivered an optimal sub-3-second user experience.
  • The median top 100 page is 1677 KB in size — 67% larger than it was just one year ago, when the median page was 1007 KB.

These findings and more — including Time to Interact and Load Time for the ten fastest sites — are illustrated in this set of infographics. Please feel free to download and share them. And if you have any questions about our research or findings, don’t hesitate to ask me.


Get the report: State of the Union: Ecommerce Page Speed & Web Performance [Summer 2014]

The post Summer 2014 State of the Union: Ecommerce Page Speed & Web Performance [INFOGRAPHIC] appeared first on Web Performance Today.

Testing on the Toilet: Don't Put Logic in Tests

Google Testing Blog - Thu, 07/31/2014 - 10:59
by Erik Kuefler

This article was adapted from a Google Testing on the Toilet (TotT) episode. You can download a printer-friendly version of this TotT episode and post it in your office.

Programming languages give us a lot of expressive power. Concepts like operators and conditionals are important tools that allow us to write programs that handle a wide range of inputs. But this flexibility comes at the cost of increased complexity, which makes our programs harder to understand.

In tests, unlike in production code, simplicity is more important than flexibility. Most unit tests verify that a single, known input produces a single, known output. Tests can avoid complexity by stating their inputs and outputs directly rather than computing them; otherwise it's easy for tests to develop their own bugs.

Let's take a look at a simple example. Does this test look correct to you?

@Test public void shouldNavigateToPhotosPage() {
  String baseUrl = "http://plus.google.com/";
  Navigator nav = new Navigator(baseUrl);
  assertEquals(baseUrl + "/u/0/photos", nav.getCurrentUrl());
}
The author is trying to avoid duplication by storing a shared prefix in a variable. Performing a single string concatenation doesn't seem too bad, but what happens if we simplify the test by inlining the variable?

@Test public void shouldNavigateToPhotosPage() {
  Navigator nav = new Navigator("http://plus.google.com/");
  assertEquals("http://plus.google.com//u/0/photos", nav.getCurrentUrl()); // Oops!
}
After eliminating the unnecessary computation from the test, the bug is obvious—we're expecting two slashes in the URL! This test will either fail or (even worse) incorrectly pass if the production code has the same bug. We never would have written this if we stated our inputs and outputs directly instead of trying to compute them. And this is a very simple example—when a test adds more operators or includes loops and conditionals, it becomes increasingly difficult to be confident that it is correct.

Another way of saying this is that, whereas production code describes a general strategy for computing outputs given inputs, tests are concrete examples of input/output pairs (where output might include side effects like verifying interactions with other classes). It's usually easy to tell whether an input/output pair is correct or not, even if the logic required to compute it is very complex. For instance, it's hard to picture the exact DOM that would be created by a JavaScript function for a given server response. So the ideal test for such a function would just compare against a string containing the expected output HTML.

When tests do need their own logic, such logic should often be moved out of the test bodies and into utilities and helper functions. Since such helpers can get quite complex, it's usually a good idea for any nontrivial test utility to have its own tests.

Categories: Software Testing

When Design Best Practices Become Performance Worst Practices [SLIDES]

Web Performance Today - Tue, 07/29/2014 - 11:39

Last week, I had the great privilege of speaking at the annual Shop.org Online Merchandising Workshop. In the performance community, we so often find ourselves preaching to the converted: to each other, to developers, and to others who focus on the under-the-hood aspect of web performance. Attending this Shop.org event was a fantastic chance to talk with a completely different group of professionals — people in marketing and ecommerce — in other words, people who govern much of the high-level strategy and day-to-day decision-making that happens at retail sites.

When attending other speakers’ sessions, it was gratifying to see performance bubble up as a recurring theme. It was obvious to me that there’s an emerging sense of interest and urgency around performance. The tricky part is ensuring that performance gets its share of mental real estate among a group of professionals who are clearly already burdened with a massive set of challenges in the increasingly complex ecommerce space.

The focus of my talk was on what happens when long-held design conventions turn out to be performance liabilities. Using examples from Radware’s quarterly State of the Union ecommerce performance research, I shared four best practices that, when poorly implemented, often end up being worst practices. I invite you to check out my slides (or watch the video of my talk here), and please let me know if you have any questions about our research and findings.

Get the report: State of the Union: Ecommerce Page Speed & Web Performance [Summer 2014]

The post When Design Best Practices Become Performance Worst Practices [SLIDES] appeared first on Web Performance Today.

Headless Drupal 8 – Retrieving Content Using Backbone.js

LoadImpact - Tue, 07/29/2014 - 03:50

In this post I’ll explain how to decouple the Drupal front end – building your own front-end implementation with Backbone.js and using Drupal 8 as a RESTful API to retrieve the content.

This is what we call headless Drupal.

Drupal 8’s front end is going to be really attractive to front-end developers, because it uses the Twig template engine and because its templates have been cleaned of divitis (an interminable bunch of nested divs).

Despite this, there will be some cases where you need to build your own front-end implementation in another framework and request the content stored in your back-end application.

So let’s start by installing Drupal 8

The fastest and easiest way to do it is to use Bitnami installers. You only need to select your platform and be sure that you download the Drupal 8 version of the installer.

Once you have downloaded the installer, just run it and follow the installation instructions. You only have to remember the username and password for logging in to Drupal. In my case I used user as the username and password as the password :P

The installer will take a few minutes to finish (a little bit longer on Windows operating systems). Once it’s finished, launch the Bitnami Drupal stack and you will see the following web page:

Now you can click the Access Bitnami Drupal Stack link to access your Drupal website. Just fill in the login form using the username and password you entered when installing the Bitnami stack.

Now we need to add some content and try accessing it using a REST callback. So, let’s start creating new content by clicking the Add content link.

Select Basic page to create a basic content type with title and body. There is currently no way to request images or other fields through REST in Drupal 8, though support is in the works.

Fill in the title and body fields and press the Save and publish button when done.

Repeat the same process to create a few content pages, so that we have some data to request later. Once you have four or five basic pages, click Structure in the top menu and then the Views option.

A view in Drupal is a list of content to which you can add and remove fields and filters, and for which you can select your desired sort order. Views are really handy for creating complex pages of content.

In this case we are going to create a simple page that lists all the basic pages we have already created. Start by pressing the Add new view button.

Fill the view create form with the settings shown in the following screenshot:

Now you have a default listing of basic pages that shows only the title field. We also want to show the body field, so let’s add it. Press the Add button in the Fields section.

Now select Body field from the popup window. You can find it easily using the search box, as you can see in this screenshot:

Be careful not to select the Body (body language) field. Once you have found the Body field, check its checkbox and press the Apply (all displays) button. Continue with the default values by pressing the Apply button. Now you should have a view like this one:

You can take a look at the results by pressing the Save button and browsing to the selected URL – in this case articles. So if you visit http://localhost/drupal/articles you will see a page that lists all the basic pages you already created, with title and body fields:

Now it’s time to enable the Web Services modules in Drupal, to give us access to this content through REST requests. Just click the Extend button in the administration bar at the top, find and enable the Web Services modules at the end of the page, and press the Save configuration button.

You will also need to configure Drupal permissions to allow REST requests. This can be done under the People option in the top administration bar, by selecting the Permissions tab.

Configure the permissions to allow read requests for every user and write requests to only authenticated users, as you can see in the following screenshot:

You should now have access to the Drupal content you have created, in JSON format, using a GET request with the curl command, for example:

curl -i -H "Accept: application/json" http://path-to-your-localhost/drupal/node/1

The result, your Drupal node in JSON format, should look something like this:
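As a rough illustration (field names and nesting vary between Drupal 8 releases, and the values here are placeholders, so treat this as a sketch rather than exact output), a node serialized by the REST module resembles:

```json
{
  "nid": [ { "value": "1" } ],
  "type": [ { "target_id": "page" } ],
  "langcode": [ { "value": "en" } ],
  "title": [ { "value": "My first basic page" } ],
  "status": [ { "value": "1" } ],
  "body": [ { "value": "<p>Some body text.</p>", "format": "basic_html" } ]
}
```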

You can access the rest of your content the same way, using the curl command or a tool like Dev HTTP Client.

If we want to access the page with the list of contents we previously created using Views, we will have to clone it. Just visit your view configuration page again and press Duplicate as REST export button in the dropdown menu on the right.

Now we need to change the access Path to avoid conflicts with the other page. You can do it by pressing the path link in Path Settings section, as you can see in the following screenshot:

If you save the view and try to access the path you have configured, using the curl command, you will get the page content in JSON format:

curl -i -H "Accept: application/json" http://path-to-your-localhost/drupal/articles/rest

You will see a lot of information related to Drupal nodes, like changed and created dates, node status and node id. This is useful information if you really need it, but it can considerably increase the amount of data transferred – an aspect that’s really important when you’re accessing the content from a mobile device.

Let’s configure the view to provide only the data we really need – content title and body, for example. You can start by simply changing the format from Entity to Fields and selecting only the fields you want. Just press the Entity link in the Format section of your view settings page.

Now change the format to Fields, press Save, and select the Raw format for all fields to avoid Drupal adding unnecessary field formatting.

If you save these changes, you will see that the JSON response contains only the data we want, which saves a lot of time and data transfer.

It’s time to access and display the data properly using an external application, in our case a simple Backbone.js application that makes the REST requests to Drupal, gets the data in JSON format and displays it using a template.

Let’s start by creating a folder for our application, served by the same web server provided by Bitnami.

Locate the folder where you have installed the Bitnami Drupal stack and create a folder named app using the following command:

mkdir /path-to-bitnami-drupal-stack/drupal-8.0.alpha13-0/apps/drupal/htdocs/app

We are going to place all the Backbone.js application files inside this folder, so keep that in mind for the following code examples. You can find the whole application code in the following GitHub repository:


First we need to create an index.html file, where we are going to load the latest Backbone.js, Underscore.js and jQuery libraries using the following code:

<script src="http://code.jquery.com/jquery-2.1.1.min.js" type="text/javascript"></script>
<script src="http://underscorejs.org/underscore-min.js" type="text/javascript"></script>
<script src="http://backbonejs.org/backbone-min.js" type="text/javascript"></script>

The order in which you load the libraries is really important, so take it into consideration.

Now you need to create your Backbone.js application file. We are going to call it app.js, and you have to load it the same way you loaded the previous libraries:

<script src="app.js" type="text/javascript"></script>

The Backbone.js application structure consists of the following classes:

var Article = Backbone.Model.extend({
  title: '',
  body: ''
});

The model describes the data structure – in this case the Drupal article structure, which consists of title and body fields.

var Articles = Backbone.Collection.extend({
  model: Article,
  url: 'http://path-to-your-localhost/drupal/articles/rest'
});

The collection is a group of model items fetched from the given URL, which points to your previously configured REST view.

var ArticleView = Backbone.View.extend({
  tagName: 'li',
  template: _.template($('#article-view').html()),
  render: function() {
    this.$el.html(this.template(this.model.toJSON()));
    return this;
  }
});

We define a view to display single articles using its own template, which should be placed in the index.html file, as you can see in the following snippet:

<script type="text/template" id="article-view">
  <h2><%= title %></h2>
  <%= body %>
</script>

We also define a view to display the list of articles, using the previous view as a subview. This view should fetch the data through a GET request, explicitly triggered by calling the fetch function.
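A rough sketch of what that list view might look like (my own illustration, not the author’s code – the name ArticlesView is hypothetical, and the Article, Articles and ArticleView definitions above are assumed):

```javascript
// Sketch of a list view that renders each Article through ArticleView.
var ArticlesView = Backbone.View.extend({
  tagName: 'ul',
  initialize: function() {
    // Re-render whenever the collection finishes fetching.
    this.listenTo(this.collection, 'sync', this.render);
  },
  render: function() {
    this.$el.empty();
    this.collection.each(function(article) {
      this.$el.append(new ArticleView({ model: article }).render().el);
    }, this);
    return this;
  }
});

// Usage: the GET request happens only when fetch() is called explicitly.
var articles = new Articles();
var listView = new ArticlesView({ collection: articles });
$('body').append(listView.el);
articles.fetch();
```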

This is not the best way to do it; in my next blog post I will describe how to use routers instead, and also demonstrate how to create and delete articles using POST requests.

In the meantime, I suggest you test the code samples and try different template implementations, adding more fields to the view.

Questions or comments? Post them below or Tweet them to me @rteijeiro

See you in the next post ;)

Categories: Load & Perf Testing

Velocity highlights (video bonus!)

Steve Souders - Tue, 07/29/2014 - 00:08

We’re in the quiet period between Velocity Santa Clara and Velocity New York. It’s a good time to look back at what we saw and look forward to what we’ll see this September 15-17 in NYC.

Velocity Santa Clara was our biggest show to date. There was more activity across the attendees, exhibitors, and sponsors than I’d experienced at any previous Velocity. A primary measure of Velocity is the quality of the speakers. As always, the keynotes were livestreamed. The people who tuned in were not disappointed. I recommend reviewing all of the keynotes from the Velocity YouTube Playlist. All of them were great, but here were some of my favorites:

  • Virtual Machines, JavaScript and Assembler – Start. Here. Scott Hanselman’s walk through the evolution of the Web and cloud computing is informative and hilarious.
  • Lowering the Barrier to Programming – Pamela Fox works on the computer programming curriculum at Khan Academy. She also devotes time to Girl Develop It. This puts her in a good position to speak about the growing gap between the number of programmers and the number of programmer jobs, and how bringing more diversity into programming is necessary to close this gap.
  • Achieving Rapid Response Times in Large Online Services – Jeff Dean, Senior Fellow at Google, shares amazing techniques developed at Google for fast, scalable web services.
  • Mobile Web at Etsy – People who know Lara Swanson know the incredible work she’s done at Etsy building out their mobile platform. But it’s not all about technology. For a company to be successful it’s important to get cultural buy-in. Lara explains how Etsy achieved both the cultural and technical advances to tackle the challenges of mobile.
  • Build on a Bedrock of Failure – I want to end with this motivational cross-disciplinary talk from skateboarding icon Rodney Mullen. When you’re on the bleeding edge (such as skateboarding or devops), dealing with failure is a critical skill. Rodney talks about why people put themselves in this position, how they recover, and what they go on to achieve.

Now for the bonus! Some speakers have posted the videos of their afternoon sessions. These are longer, deeper talks on various topics. Luckily, some of the best sessions are available on YouTube:

  • Is TLS Fast Yet? – If you know performance then you know Ilya Grigorik. And if you know SPDY, HTTP/2, privacy, and security, you know TLS is important. Here, the author of High Performance Browser Networking talks about how fast TLS is and what we can do to make it faster.
  • GPU and Web UI Performance: Building an Endless 60fps Scroller – Whoa! Whoa whoa whoa! Math?! You might not have signed up for it, but Diego Ferreiro takes us through the math and physics for smooth scrolling at 60 frames per second and his launch of ScrollerJS.
  • WebPagetest Power Users Part 1 and Part 2 – WebPagetest is one of the best performance tools out there. Pat Meenan, creator of WebPagetest, guides us through its new and advanced features.
  • Smooth Animation on Mobile Web, From Kinetic Scrolling to Cover Flow Effect – Ariya Hidayat does a deep dive into the best practices for smooth scrolling on mobile.
  • Encouraging Girls in IT: A How To Guide – Doug Ireton and his 7-year-old daughter, Jane Ireton, lament the lack of women represented in computer science and recount Jane’s adventure learning programming.

If you enjoy catching up using video, I recommend you watch these and other videos from the playlist. If you’re more of the “in-person” type, then I recommend you register for Velocity New York now. While you’re there, use my STEVE25 discount code for 25% off. I hope to see you in New York!

Categories: Software Testing

QA Music – Wayback

QA Hates You - Mon, 07/28/2014 - 04:34

And by way back, I mean a couple of years. Remember this?

Of course you do. QA never forgets.

Whereas some “successful” people can forget their failures when they’ve moved on, QA cannot. Because QA has to test for all failures it has experienced in person or vicariously from now and forever more, amen.

Categories: Software Testing

Will It Meet Our Needs? vs. Is There A Problem Here?

Eric Jacobson's Software Testing Blog - Fri, 07/25/2014 - 12:12

Which of the above questions is more important for testers to ask?

Let’s say you are an is-there-a-problem-here tester: 

  • This calculator app works flawlessly as far as I can tell.  We’ve tested everything we can think of that might not work and everything we can think of that might work.  There appear to be no bugs.  Is there a problem here?  No.
  • This mileage tracker app crashes under a load of 1000 users.  Is there a problem here?  Yes.

But might the is-there-a-problem-here question get us into trouble sometimes?

  • This calculator app works flawlessly…but we actually needed a contact list app.
  • This mileage tracker app crashes under a load of 1000 users but only 1 user will use it.

Or perhaps the is-there-a-problem-here question only fails us when we use too narrow an interpretation:

  • Not meeting our needs is a problem.  Is there a problem here?  Yes.  We developed the wrong product, a big problem.
  • A product that crashes under a load of 1000 users may actually not be a problem if we only need to support 1 user.  Is there a problem here?  No.

Both are excellent questions.  For me, the will-it-meet-our-needs question is easier to apply and I have a slight bias towards it.  I’ll use them both for balance.

Note: The “Will it meet our needs?” question came to me from a nice Pete Walen article.  The “Is there a problem here?” came to me via Michael Bolton.

Categories: Software Testing

Understanding Application Performance on the Network – Part VI: The Nagle Algorithm

In Part V, we discussed processing delays caused by “slow” client and server nodes. In Part VI, we’ll discuss the Nagle algorithm, a behavior that can have a devastating impact on performance and, in many ways, appear to be a processing delay. Common TCP ACK Timing Beyond being important for (reasonably) accurate packet flow diagrams, […]

The post Understanding Application Performance on the Network – Part VI: The Nagle Algorithm appeared first on Compuware APM Blog.

Categories: Load & Perf Testing

How to Spruce up your Evolved PHP Application

Do you have a PHP application running and have to deal with inconveniences like lack of scalability, complexity of debugging, and low performance? That’s bad enough! But trust me: you are not alone! I’ve been developing Spelix, a system for cave management, for more than 20 years. It originated from a single user DOS application, […]

The post How to Spruce up your Evolved PHP Application appeared first on Compuware APM Blog.

Categories: Load & Perf Testing

New findings: The median top 100 ecommerce page takes 6.2 seconds to render primary content

Web Performance Today - Wed, 07/23/2014 - 01:15

Every quarter at Radware, we measure and analyze the performance of the top 500 retail websites.* And every quarter, I’ve grown accustomed to the persistence of two trends: pages are growing bigger and, not coincidentally, slower.

But while I expected to see some growth and slowdown in our latest research — released this week in our State of the Union: Ecommerce Page Speed & Web Performance [Summer 2014] — I have to admit that I wasn’t expecting to see this much:

  • The median top 100 ecommerce page now takes 6.2 seconds to render primary content (AKA Time to Interact or TTI). In our Summer 2013 report, the median TTI was 4.9 seconds. In other words, TTI has slowed down by 27% in just one year.
  • The median page has grown by 67% in one year — from 1007 KB in June 2013 to 1677 KB in June 2014.

Our other findings included:

  • Only 14% of the top 100 pages we tested were able to meet user expectations by rendering primary content in 3 seconds or less.
  • 17% of the pages we tested had a TTI of 10 seconds or longer.
  • 43% of sites failed to implement image compression.
  • 66% did not use progressive JPEGs.

Why the dramatic slowdown?

The long answer to this question is as complex as modern web pages themselves. But the short answer is simple: web pages have never been as large and complex as they are today.

The performance challenges that plague modern web pages have been born out of all the great things we can now make pages do: dynamic content, high-resolution images, carousels, custom fonts, responsive design, and third-party scripts that gather sophisticated data about visitors.

But all of this amazing functionality comes with a high performance price tag if it’s not implemented with performance in mind.

Take images, for example. We’re so accustomed to expecting to see high-quality images everywhere on the web that we take them for granted and don’t think about their heavy performance impact. Page size has a direct impact on page speed, and images comprise at least half a typical page’s weight. As such, they represent an extremely fertile ground for optimization.

Yet we found that many leading retailers are not taking advantage of techniques such as image compression and progressive image rendering, which can improve both real and perceived load time. More on this later in this post.

Page complexity and how it affects Time to Interact

To better understand the impact of page size and complexity, we can look to two additional metrics for more insight:

  • Time to First Byte (TTFB) – This is the window of time between when the browser asks the server for content and when it starts to get the first bit back.
  • Start Render Time – This metric indicates when content begins to display in the user’s browser. Start Render Time can be delayed by slow TTFB, as well as a number of other factors, which we’ll discuss after we’ve taken a moment to enjoy this nifty graph:

Slow Time to First Byte is usually a sign that the site has server issues or a latency problem. These problems can be addressed by updating your server environment and by using a content delivery network (CDN) to cache page resources closer to end users. Most (78%) of the sites we tested currently use a CDN, and this number has not changed appreciably in recent years. And while our research doesn’t give us visibility into each site’s server environment, it’s safe to assume that the sites we studied are already on top of that. Therefore it’s not a surprise that TTFB has not changed significantly. Site owners are already doing about as much as they can to address TTFB. Throwing more servers at the problem won’t do anything.

But while TTFB has plateaued, Start Render Time has suffered – almost doubling in the past year, from 2.1 seconds in Summer 2013 to 4 seconds today. Increased page complexity is the likely culprit. As we’ve already covered in this post, modern web pages are more complex than ever – in the form of poorly placed style sheets, badly executed JavaScript, and third-party scripts (such as tracking beacons and page analytics) that block the rest of the page from rendering – and this complexity can incur a hefty performance penalty.

Images: The low-hanging fruit of front-end optimization

Images account for at least half of a typical page’s weight, yet most of the pages we tested failed to properly implement image optimization techniques that could make their pages significantly faster.

Almost half (43%) of the pages we tested failed to implement image compression, and only 8% scored an A. Image compression is a performance technique that minimizes the size (in bytes) of a graphics file without degrading the quality of the image to an unacceptable level. Reducing an image’s file size has two benefits:

  • reducing the amount of time required for images to be sent over the internet or downloaded, and
  • increasing the number of images that can be stored in the browser cache, thereby improving page render time on repeat visits to the same page.

In last week’s post, I wrote about how progressive image rendering can improve both real and perceived performance. Yet despite the promise of progressive images, we found that two out of three of the pages we tested didn’t use progressive JPEGs, and only 5% scored an A.

These are just two image optimization techniques among many others, such as spriting, lazy loading (when you defer “below the fold” images to load after “above the fold” content), and auto-preloading (which predicts which pages users are likely to visit next based on analyzing overall traffic patterns on your site, and then “preloads” page resources in the visitor’s browser cache). As I mentioned during a talk I gave yesterday at a Shop.org event, unless you’re already implementing all of these best practices, these techniques represent a huge untapped opportunity to optimize your pages.
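To make the lazy-loading idea concrete, here is a small sketch (my own, not from the report): the decision of when to fetch a deferred image boils down to a viewport check, kept here as a pure function, with the browser wiring described in comments.

```javascript
// Load an image once its top edge comes within `margin` pixels of the
// bottom of the visible viewport.
function shouldLoadImage(imageTop, scrollY, viewportHeight, margin) {
  return imageTop <= scrollY + viewportHeight + margin;
}

// Browser wiring (sketch): store the real URL in a data-src attribute,
// then on each scroll event walk document.querySelectorAll('img[data-src]')
// and, for every img where shouldLoadImage(img.offsetTop,
// window.pageYOffset, window.innerHeight, 200) is true, copy data-src
// into src and remove data-src so the image is requested only once.
```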


Even for leading retailers, tackling performance is like trying to take aim at a constantly moving target. As soon as you’ve gotten a bead on one performance issue, a brand-new challenge pops up. Images grow ever richer, custom fonts proliferate, new third-party scripts are introduced, and stylesheets perform increasingly complex tasks. The silver lining here is that the impact of all this complexity can be mitigated with a thoughtful optimization strategy and a commitment to evolving this strategy to continue to meet future demands.

Get the report: State of the Union: Ecommerce Page Speed & Web Performance [Summer 2014]

*Using WebPagetest, we test the home pages of the top 500 retail sites as they would perform in Chrome over a DSL connection. Each URL is tested nine times, and the median result is used in our analysis.

The post New findings: The median top 100 ecommerce page takes 6.2 seconds to render primary content appeared first on Web Performance Today.

The Deadline to Sign up for GTAC 2014 is Jul 28

Google Testing Blog - Tue, 07/22/2014 - 18:53
Posted by Anthony Vallone on behalf of the GTAC Committee

The deadline to sign up for GTAC 2014 is next Monday, July 28th, 2014. There is a great deal of interest in both attending and speaking, and we’ve received many outstanding proposals. However, it’s not too late to add yours for consideration. If you would like to speak or attend, be sure to complete the form by Monday.

We will be making regular updates to our site over the next several weeks, and you can find conference details there:

For those that have already signed up to attend or speak, we will contact you directly in mid August.

Categories: Software Testing

How to Approach Application Failures in Production

In my recent article, “Software Quality Metrics for your Continuous Delivery Pipeline – Part III – Logging”, I wrote about the good parts and the not-so-good parts of logging and concluded that logging usually fails to deliver what it is so often mistakenly used for: as a mechanism for analyzing application failures in production. In […]

The post How to Approach Application Failures in Production appeared first on Compuware APM Blog.

Categories: Load & Perf Testing

When to Leverage Commercial Load Testing Services, and When to Go it Alone

LoadImpact - Tue, 07/22/2014 - 03:00

How and where you execute load and performance testing is a decision that depends on a number of factors in your organization and even within the application development team.

It is not a clear cut decision that can be made based on the type of application or the number of users, but must be made in light of organizational preferences, cadence of development, timeline and of course the nature of the application itself and technical expertise currently on staff.

In this post we will provide some context around the key decision points that companies of all sizes should consider when putting together load and performance testing plans.

This discussion really amounts to two intertwined debates: on-premise versus SaaS, and open source versus commercial services.

In the load testing space there are commercial offerings that offer both SaaS and on-premise solutions as well as many SaaS only solutions for generating user load.

From an open source perspective, JMeter is the obvious choice (there are other, less popular options such as FunkLoad, Gatling, Grinder, SOAPUI, etc.). Having said that, let’s look at the advantages and challenges of the open source solution, JMeter, and contrast it with a cloud-based commercial offering.

Key JMeter Advantages:

  1. It’s a 100% Java application, so it can be run on any platform (Windows, OS X, Linux) that can run Java.
  2. Ability to test a variety of types of servers – not just front end HTTP servers.  LDAP, JMS, JDBC, SOAP, FTP are some of the more popular services that JMeter can load test out of the box.
  3. Extensible, plug-in architecture. The open source community is very active in development around JMeter plugins and many additional capabilities exist to extend reporting, graphing, server resource monitoring and other feature sets.  Users can write their own plugins if desired as well.  Depending on how much time and effort is spent there is little that JMeter can’t be made to do.
  4. Other than the time to learn the platform, there is no software cost, since it is open source.  This may be of particular value to development teams with limited budgets, or whose management teams prefer to spend on in-house expertise rather than commercial tools.
  5. It can be easy to point the testing platform at a development server and not have to engage the network or server team to provide external access for test traffic.  It’s worth noting that while this is easier, it is also less realistic in terms of real-world results.

Key JMeter Disadvantages:

  1. Because it is open source, you do not have an industry vendor to rely upon for support, development or expertise.  This doesn’t mean that JMeter isn’t developed well or that the community isn’t robust – quite the opposite. But depending on the scope of the project and the visibility of the application, it can be very helpful to have industry expertise available and obligated to assist.  Putting myself in a project manager’s shoes: if a major scale issue were discovered in production, would I be comfortable telling upper management, “we thoroughly tested the application with an open source tool, with assistance from forums and mailing lists”?
  2. It’s very easy to end up with test results that aren’t valid.  The results may be highly repeatable – but reliably measuring bottlenecks that have nothing to do with the application infrastructure isn’t terribly useful.  Since JMeter can be run right from a desktop workstation, you can quickly run into network and CPU bottlenecks on the testing platform itself – ultimately giving you unrealistic results.
  3. Large-scale tests are not in JMeter’s wheelhouse.  Right in the documentation (section 16.2, best practices) is a warning about limiting the number of threads.  If a truly large-scale test is required, you can build a farm of test servers orchestrated by a central controller, but this gets complicated, requires dedicated hardware and network resources, and still isn’t a realistic real-world scenario.
  4. The biggest disadvantage is inherent in all on-premise tools in this category: it is not cloud based.  Unless you are developing an in-house application and all users are on the LAN, it does not make a ton of sense to rely entirely on test results from inside your network.  I’m not suggesting they aren’t useful, but if users are geographically distributed, then testing in that mode should be considered.
  5. Your time: doing everything yourself is a trap many smart folks fall into, often at the expense of project deadlines and focus. Your time is valuable, and in most cases it could be better spent elsewhere.
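To make the workstation caveat above concrete, here is a minimal sketch (in Python, with all names invented) of what a do-it-yourself concurrent load test looks like. It stands up a throwaway local HTTP server, fires concurrent GETs at it, and reports latency. It illustrates the mechanics, not a JMeter replacement, and a test run from one machine like this measures your own network and CPU as much as the server’s.

```python
# Toy concurrent load-test sketch: local server + threaded clients.
import statistics
import threading
import time
from concurrent.futures import ThreadPoolExecutor
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")
    def log_message(self, *args):  # silence per-request logging
        pass

def run_load_test(url, requests=50, concurrency=10):
    """Issue `requests` GETs with `concurrency` workers; return latencies in ms."""
    def hit(_):
        start = time.perf_counter()
        with urlopen(url) as resp:
            resp.read()
        return (time.perf_counter() - start) * 1000
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        return list(pool.map(hit, range(requests)))

if __name__ == "__main__":
    server = HTTPServer(("127.0.0.1", 0), Handler)  # port 0 = pick a free port
    threading.Thread(target=server.serve_forever, daemon=True).start()
    latencies = run_load_test(f"http://127.0.0.1:{server.server_port}/")
    print(f"median latency: {statistics.median(latencies):.1f} ms")
    server.shutdown()
```

Everything here runs on one box, which is exactly the trap described in disadvantage 2: the numbers are repeatable but say little about real-world, geographically distributed load.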

This discussion really boils down to whether you like to do things yourself, or whether the project scope and criticality dictate using commercial tools and expertise.

For the purposes of general testing, getting familiar with how load testing works and rough order-of-magnitude sizing, you can certainly use open source tools on your own – with the caveats mentioned.  If the application is likely to scale significantly or to have geographically distributed users, then I do think using a cloud-based service is a much more realistic way to test.

In addition to the decision of open source versus commercial tools is the question of whether professional consulting services should be engaged.  Testing should be an integral part of the development process, and many teams do not have the expertise (or time) to develop a comprehensive test plan, script and configure the test, analyse the data, and finally sort out remediation strategies on their own.

This is where engaging experts who are 100% focused on testing can provide real tangible value and ensure that your application scales and performs exactly as planned.

A strategy I have personally seen work quite well with a variety of complex technologies is to engage professional services and training at the onset of a project to develop internal capabilities and expertise, allowing the organization to extract maximum value from the commercial product of choice.

I have always recommended that my customers budget for training and services up front with any product purchase, instead of trying to shoe-horn them in later, ensuring that the new capabilities promised by the commercial product are realized and management is satisfied with the product value and vendor relationship.


This post was written by Peter Cannell. Peter has been a sales and engineering professional in the IT industry for over 15 years. His experience spans multiple disciplines including Networking, Security, Virtualization and Applications. He enjoys writing about technology and offering a practical perspective to new technologies and how they can be deployed. Follow Peter on his blog or connect with him on Linkedin.

Don’t miss Peter’s next post: subscribe to the Load Impact blog by clicking the “follow” button below.

Categories: Load & Perf Testing

How Models Change

DevelopSense - Michael Bolton - Sat, 07/19/2014 - 13:38
Like software products, models change as we test them, gain experience with them, find bugs in them, realize that features are missing. We see opportunities for improving them, and revise them. A product coverage outline, in Rapid Testing parlance, is an artifact (a map, or list, or table…) that identifies the dimensions or elements of […]
Categories: Software Testing

Test Automation Can Be So Much More…

Eric Jacobson's Software Testing Blog - Fri, 07/18/2014 - 13:32

I often hear people describe their automated test approach by naming the tool, framework, harness, technology, test runner, or structure/format.  I’ve described mine the same way.  It’s safe.  It’s simple.  It’s established.  “We use Cucumber”.

Lately, I’ve seen things differently.

Instead of trying to pigeonhole each automated check into a tightly controlled format for an entire project, why not design automated checks for each Story, based on the best fit for that story?

I think this notion comes from my context-driven test schooling.  Here’s an example:

On my current project, we said “let’s write BDD-style automated checks”.  We found it awkward to pigeonhole many of our checks into Given, When, Then.  After eventually dropping the mandate for BDD style, I discovered the not-as-natural-language style to be easier to read, more flexible, and quicker to author…for some Stories.  Some Stories are good candidates for data-driven checks authored via Excel.  Some might require manual testing with a mocked product...computer-assisted-exploratory-testing…another use of automation.  Other Stories might test better using non-deterministic automated diffs.
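As a hypothetical illustration of the data-driven style, here is a check whose inputs and expected outputs live in a plain table rather than being forced into Given/When/Then prose. The function and cases are invented for the example; in practice the table might come from the Excel sheet the Story owner maintains.

```python
# Data-driven check: the table of (input, expected) pairs IS the test.
def shipping_cost(order_total):
    """Toy function under test: free shipping at $50 and above."""
    return 0.0 if order_total >= 50 else 4.95

# Each row is (input, expected) -- the "Excel sheet" of the Story, inlined here.
CASES = [
    (0, 4.95),
    (49.99, 4.95),
    (50, 0.0),
    (120, 0.0),
]

def run_checks():
    """Return the rows that fail, as (input, expected, actual) tuples."""
    return [(i, e, shipping_cost(i)) for i, e in CASES
            if shipping_cost(i) != e]

if __name__ == "__main__":
    print("failures:", run_checks())
```

Adding a boundary case is one new row, with no scenario prose to write, which is the flexibility argument in a nutshell.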

Sandboxing all your automated checks into FitNesse might make test execution easier.  But it might stifle test innovation.

Categories: Software Testing

#UnsexyTech and trying to make it a little sexier

LoadImpact - Thu, 07/17/2014 - 08:54

We think what we do is pretty cool. I mean come on! Performance and load testing, who doesn’t get excited at the idea?!

Well, apparently not everyone. Some have even said performance testing is a bit like selling health insurance: most people know it’s important to have, but you don’t reap the benefits of having it until something unexpected happens.

In any event, we wanted to find a way to explain what we do in a more relatable and humorous way: framing our somewhat “unsexy tech” in a way that connects back to everyone’s everyday lives.

Well, here it is. With the help of our video producers Adme (props for a job well done), we made this nifty short video to explain what we do and, if possible, make you chuckle a little. Enjoy!



Are you working with unsexy tech? Let us know why you think your tech is super sexy in the comments below.

Categories: Load & Perf Testing

How to extract User Experience by complementing Splunk

Splunk is a great Operational Intelligence solution capable of processing, searching and analyzing masses of machine-generated data from a multitude of disparate sources. By complementing it with an APM solution you can deliver insights that provide value beyond the traditional log analytics Splunk is built upon: True Operational Intelligence with dynaTrace and Splunk for Application Monitoring […]

The post How to extract User Experience by complementing Splunk appeared first on Compuware APM Blog.

Categories: Load & Perf Testing

How to create the illusion of faster web pages (while creating actual happier users)

Web Performance Today - Wed, 07/16/2014 - 17:17

The internet may change, and web pages may grow and evolve, but user expectations are constant. Across several studies, these numbers about response times and human perception have remained consistent for more than 45 years:

These numbers are hard-wired. We have zero control over them. They’re consistent regardless of the type of device, application, or connection we’re using at any given moment.

These numbers are why our industry has attached itself to the goal of making pages render in 1 second or less — and to the even more audacious goal that pages render in 100 milliseconds or less.

But when you consider how many things have to happen before anything begins to appear in the browser — from DNS lookup and TCP connection to parsing HTML, downloading stylesheets, and executing JavaScript — 1 second can seem impossible. And forget about 100 milliseconds. In our most recent state of the union for ecommerce performance, we found that start render time for the top 500 retailers was 2.9 seconds. In other words, a typical visitor sits and stares at a blank screen for almost 3 seconds before he or she even begins to see something. Not good.

We can do better.

I talk a lot about page bloat, insidious third-party scripts, the challenges of mobile performance, and all the other things that make hitting these goals seem like an impossible feat. But rather than get discouraged, let me point you toward this great quote from Ilya Grigorik in his book High Performance Browser Networking:

“Time is measured objectively but perceived subjectively, and experiences can be engineered to improve perceived performance.”

Today I’m going to talk about some tricks and techniques you can use to manipulate subjective time to your advantage.

1. Progress bars

Never underestimate the power of a progress bar to help improve the user experience on pages that are necessarily slower (e.g., a search results page). Progress bars reassure visitors that the page is actually functional and give a sense of how much longer the visitor will need to wait.

A few tips:

  • A progress bar on a page that loads in 5 seconds or less will make that page feel slower than it really is. Jakob Nielsen suggests that percent-done progress bars be reserved for pages/applications that take 10 seconds or longer to load.
  • Progress bars that offer the illusion of a left-moving ripple can improve perceived wait time by up to 10%.
  • Progress bars that pulse, and that increase pulsation frequency as the bar progresses, are perceived as being faster.
  • Similarly, progress bars that speed up are considered more satisfying by users. (Not surprising. This harkens back to the research on colonoscopy pain perception we covered a while back.)

Here’s a video that shows some of these principles in action:

More good progress bar best practices here.
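The “bars that speed up feel faster” finding above is easy to sketch. The following toy example (Python, names invented) generates progress values spaced by a power curve, so each increment is larger than the last, and renders them as a text bar:

```python
# Accelerating progress bar: successive increments grow, so the bar
# appears to speed up as it fills -- the pattern users rate as faster.
def accelerating_progress(n, power=2.0):
    """Return n cumulative progress values ending at 100, spaced so
    that each step's increment is larger than the previous one."""
    return [100 * ((i + 1) / n) ** power for i in range(n)]

def render(p, width=20):
    """Render one progress value as a fixed-width text bar."""
    filled = int(width * p / 100)
    return "[" + "#" * filled + "-" * (width - filled) + f"] {p:3.0f}%"

if __name__ == "__main__":
    for p in accelerating_progress(8):
        print(render(p))
```

With `power=2.0` the increments for 5 steps are 4, 12, 20, 28, 36 percentage points: small at first, then visibly accelerating toward completion.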

2. Load something useful first

This principle seems like a no-brainer, but it’s amazing how many times I analyze pages that load primary content last. In our spring performance SOTU, we found that the median Time to Interact (TTI) for the top 100 retailers was 5.4 seconds. This means that it takes 5.4 seconds for a page’s most important content to render. This represents a failure for both the site owner and the site visitor.

A Nielsen-Norman Group eyetracking study found that a user who is served feature content within the first second of page load spent 20% of his or her time within the feature area, whereas a user who is subjected to an eight-second delay of a page’s feature content spent only 1% of his or her time visually engaging with that area of the page.

In other words, rendering your most important content last is a great way to ensure that your visitors get a sub-par user experience.

3. Defer non-essential content

Further to the point above, deferral is an excellent technique for clearing the decks to ensure that your primary content quickly takes center stage. Help your visitors see the page faster by delaying the loading and rendering of any content that is below the initially visible area (sometimes called “below the fold”). Deferring non-essential content won’t change your bottom-line load time, but it can dramatically improve your perceived load time.

Many script libraries aren’t needed until after a page has finished rendering. Downloading and parsing these scripts can safely be deferred until after the onLoad event. For example, scripts that support interactive user behavior, such as “drag and drop,” can’t possibly be called before the user has even seen the page. The same logic applies to script execution. Defer as much as possible until after onload instead of needlessly holding up the initial rendering of the important visible content on the page.

The script to defer could be your own or, often more importantly, scripts from third parties. Poorly optimized scripts for advertisements, social media widgets, or analytics support can block a page from rendering, sometimes adding precious seconds to load times.
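The deferral pattern itself is language-agnostic. As a rough sketch (in Python, with all names hypothetical), the idea is to split resources into critical and non-critical, fetch the critical ones immediately, and only release the rest once an onload-style event fires:

```python
# Sketch of deferred loading: critical resources load immediately,
# everything else waits until the onload-style event fires.
class DeferredLoader:
    def __init__(self):
        self.loaded = []    # resources already fetched
        self.deferred = []  # resources queued until after onload

    def load(self, resource, critical=False):
        if critical:
            self.loaded.append(resource)    # fetch right away
        else:
            self.deferred.append(resource)  # postpone

    def on_load(self):
        # Primary content has rendered; now fetch everything we postponed.
        self.loaded.extend(self.deferred)
        self.deferred.clear()

if __name__ == "__main__":
    loader = DeferredLoader()
    loader.load("hero-image.jpg", critical=True)
    loader.load("social-widget.js")   # third-party, below the fold
    loader.load("drag-and-drop.js")   # can't be used before render anyway
    print("before onload:", loader.loaded)
    loader.on_load()
    print("after onload:", loader.loaded)
```

The bottom-line load time is unchanged (everything still loads), but the critical content no longer waits behind the widgets, which is exactly the perceived-performance win described above.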

4. Progressive image rendering

Progressive image rendering is predicated on the idea that the faster you can get a complete image in the browser, no matter what the resolution, the better the user experience — as opposed to baseline images, which load line by line and take much longer to render a complete image.


Progressive JPEGs aren’t new. They were widely used in the 1990s but fell out of favor due to performance issues caused by slow connection speeds and crudely rendered JPEGs. Back in the day, watching a progressive image load pixel by pixel was a painful experience. Now that connection speeds have improved and progressive JPEGs are more sophisticated, this technique is feasible again and is returning as a performance best practice. Last year, Google engineer Pat Meenan studied the top 2,000 retail sites and found that progressive JPEGs improved median load time by up to 15%.

Despite Pat’s findings, and despite the argument that rendering images progressively should also yield a better perceived UX, progressive JPEGs have an incredibly low adoption rate: around 7-8%. The reason for this could be the dated prejudice I just mentioned, or it could be the fact that Photoshop’s default save option is for baseline images. (If you’re anti-progressive JPEG, I’d love to know why. Please let me know in the comments.)
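If you want to audit your own images, the distinction is visible in the file bytes: a baseline JPEG declares its frame with an SOF0 marker (0xFFC0), while a progressive JPEG uses SOF2 (0xFFC2). Here is a simplified Python checker that walks the marker segments; a production tool would handle every marker type, but this covers typical files:

```python
# Classify a JPEG byte stream as progressive (SOF2) or baseline (SOF0)
# by walking its marker segments. Simplified: assumes a well-formed file
# and stops at the first frame marker, which precedes the scan data.
def is_progressive_jpeg(data: bytes) -> bool:
    i = 2  # skip the 0xFFD8 start-of-image marker
    while i + 3 < len(data):
        if data[i] != 0xFF:
            break
        marker = data[i + 1]
        if marker == 0xC2:               # SOF2: progressive DCT
            return True
        if marker == 0xC0:               # SOF0: baseline DCT
            return False
        if marker == 0xD8 or 0xD0 <= marker <= 0xD7:
            i += 2                       # standalone markers: no length field
        else:
            # Segment length is big-endian and includes its own two bytes.
            i += 2 + int.from_bytes(data[i + 2:i + 4], "big")
    return False

if __name__ == "__main__":
    with open("photo.jpg", "rb") as f:   # hypothetical file path
        print("progressive:", is_progressive_jpeg(f.read()))
```

Running this across an image directory is a quick way to see whether Photoshop’s baseline default has quietly taken over your assets.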

Note: Kent Alstad and I will be presenting some cool new research about the impact of progressive image rendering on the user experience at Velocity in New York and Barcelona this fall. I hope to see you there!

5. Auto-preloading (AKA predictive browser caching)

Not to be confused with browser preloading, auto-preloading is a powerful performance technique in which all user paths through a website are observed and recorded. Based on this massive amount of aggregated data, the auto-preloading engine can predict where a user is likely to go based on the page they are currently on and the previous pages in their path. The engine loads the resources for those “next” pages in the user’s browser cache, enabling the page to render up to 70% faster.

(Note that this is necessarily a data-intensive, highly dynamic technique that can only be performed by an automated solution. If you’re interested in learning more, read Kent’s excellent post here.)
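The prediction side of this idea can be sketched in a few lines. The real engine described above conditions on the whole path and enormous aggregated data; this toy version (Python, invented page names) conditions only on the current page, counting observed transitions and preloading for the most frequent next page:

```python
# Toy next-page predictor for auto-preloading: count page-to-page
# transitions across observed visitor paths, then predict the most
# likely next page from the current one.
from collections import Counter, defaultdict

class NextPagePredictor:
    def __init__(self):
        self.transitions = defaultdict(Counter)

    def observe(self, path):
        """Record one visitor's navigation path, e.g. ["/", "/shoes", "/cart"]."""
        for current, nxt in zip(path, path[1:]):
            self.transitions[current][nxt] += 1

    def predict(self, current):
        """Most frequently observed next page, or None if the page is unseen."""
        counts = self.transitions.get(current)
        return counts.most_common(1)[0][0] if counts else None

if __name__ == "__main__":
    predictor = NextPagePredictor()
    predictor.observe(["/", "/shoes", "/cart"])
    predictor.observe(["/", "/shoes"])
    predictor.observe(["/", "/sale"])
    print("preload for /:", predictor.predict("/"))
```

The resources for the predicted page would then be pushed into the visitor’s browser cache ahead of the click, which is where the render-time savings come from.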

6. Pretend to work, even when you don’t

Mark Wilson wrote a great post about a handful of tricks Instagram uses to make their pages appear more responsive. My favourite is the one they call “optimistic actions”. To illustrate: if you click the Instagram “like” button while your connection is slow or down, the UI registers the click and highlights the button immediately, then sends the actual data back to the server whenever the connection is restored. In this way, the site always appears to be working, even when it’s not.

I’m curious to see other applications of this technique in the wild. If anyone can point me toward more examples, please do.
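The pattern behind optimistic actions is simple to sketch. In this toy version (Python, all names hypothetical, not Instagram’s actual implementation), the UI state updates instantly while the real network call is queued until connectivity returns:

```python
# Sketch of an "optimistic action": update visible state immediately,
# queue the real network call, flush the queue when the connection returns.
class OptimisticLikes:
    def __init__(self, send):
        self.send = send      # callable that performs the real network call
        self.liked = set()    # UI state, updated instantly
        self.pending = []     # actions waiting for connectivity
        self.online = False

    def like(self, post_id):
        self.liked.add(post_id)        # button highlights right away
        if self.online:
            self.send(post_id)
        else:
            self.pending.append(post_id)  # quietly retry later

    def connection_restored(self):
        self.online = True
        while self.pending:
            self.send(self.pending.pop(0))

if __name__ == "__main__":
    sent = []
    likes = OptimisticLikes(sent.append)
    likes.like("post-42")          # offline: UI updates, nothing sent yet
    print("UI shows liked:", "post-42" in likes.liked, "| sent:", sent)
    likes.connection_restored()
    print("after reconnect, sent:", sent)
```

A fuller implementation would also handle server rejections (rolling back the optimistic state), but the core trick is exactly this: never make the user wait on the network for feedback.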

7. Touch event conversion

This tip applies to mobile. If you’re a mobile user, then you’re probably already aware of the slight delay that occurs between the time you touch your screen and the time it takes for the page to respond. That’s because every time you touch your screen, the device has to translate that touch event into a click event. This translation takes between 300 and 500 milliseconds per click.

At Radware, we’ve developed a feature called Touch Event Conversion in our FastView solution. It automatically converts click events to touch events, which relieves the mobile browser of the realtime overhead of the click-to-touch translation.

If you want to explore how to do this manually, check out Kyle Peatt’s post on the Mobify blog. He covers some other mobile performance-enhancing tips there as well.

8. Make perceived value match the wait

If long wait times are necessary, ensure that you’re delivering something with a value that’s commensurate with the wait. A good example of this is travel websites. When searching for the best hotel rates, most of us don’t mind waiting several seconds. We rationalize the wait because we assume that the engine is searching a massive repository of awesome travel deals in order to give us the very best results.

If you have experience with other perception-enhancing performance practices, I’d love to hear about them. Leave me a note in the comments.

The post How to create the illusion of faster web pages (while creating actual happier users) appeared first on Web Performance Today.

Are we getting attacked? No, it’s just Google indexing our site

Friday morning at 7:40AM we received the first error from our APMaaS Monitors informing us about our Community Portal being unavailable. It “magically recovered itself” within 20 minutes but just about an hour later was down again. The Potential Root Cause was reported by dynaTrace which captured an Out-of-Memory (OOM) Exception in Confluence’s JVM that […]

The post Are we getting attacked? No, it’s just Google indexing our site appeared first on Compuware APM Blog.

Categories: Load & Perf Testing

You are lying

A few years back I was the manager of an internal service provider. My job was to deliver services to pharmaceutical business units within my company. We bought service externally as well as created our own and sold standards based functionality internally on a no profit basis. My main responsibility was Intel based servers and […]

The post You are lying appeared first on Compuware APM Blog.

Categories: Load & Perf Testing