Where do I get the Local Script Validator?

BrowserMob - Wed, 06/27/2012 - 10:31

There has been some confusion about where to get the latest version of the local script validator.

 

We are adding a new section to the web app to link to tools like this one, but for now you can always find the latest version, matching the validator used by the Scripting application, here:

http://static.wpm.neustar.biz/tools/local-validator.tar.gz

 

Thanks!

 

Simon

Categories: Load & Perf Testing

Cloud Connect Santa Clara 2012

Cloud Connect in Santa Clara starts on Monday, February 13th. I will be at the dynaTrace booth on Tuesday and Wednesday, and I will also give a talk about Performance SLAs in the Cloud. If you are there, make sure to drop by our booth for a chat and a personal dynaTrace demo. See you there, Michael [...]
Categories: Load & Perf Testing

The State of Web Readiness 2012

LoadImpact - Tue, 06/26/2012 - 02:32

At the moment we are attending the O’Reilly Velocity Conference in Santa Clara, where we have launched our brand new report, “The State of Web Readiness 2012”. In short, the report examines how robust websites are, based on 8,522 load tests executed in 132 countries. We found that the average site was load tested at up to 3.4 times its actual capacity. What does that mean? The short summary is that a large part of the world's websites might not stand up to what their owners expect of them.

 

This is actual data from actual load tests conducted with our own cloud-based online load testing tool, and frankly, we were a bit concerned by the findings of our study. Not that we are surprised that websites go down when we need them the most. Even though websites have been mainstream for over 15 years, we don't raise an eyebrow when the Apple Store crashes as a new iPhone model is released. And if even the largest company in the world can't provide a premium sales channel that performs reliably, then who can, right? It almost seems unavoidable that websites go down, like a natural disaster you can't prepare for.

 

Our analysis indicates something else. After going through 8,522 actual tests, we believe that you can be prepared with the right knowledge. The analysis shows that an important factor in the unreliable web is simply overconfidence about how many visitors websites can really handle. If you haven't done the tests and you still think your website will keep working unaffected during a hot product launch, a seasonal peak in interest, or when you are lucky enough to be "slashdotted", think again!

 

Have a look at our report here. We're looking forward to hearing what you think about the state of web readiness.

 

Categories: Load & Perf Testing

What’s new in LoadRunner 11.50?

Performance Testing with LoadRunner Focus - Fri, 06/22/2012 - 19:36
LoadRunner 11.50 was released on June 5th, during HP Discover 2012 in Las Vegas. This release was originally going to be called LoadRunner 12.00, but this would have put it out of step with the version numbers for ALM/Quality Center and QTP, which might have created some marketing confusion. Significant R&D effort has been spent [...]
Categories: Load & Perf Testing

Advanced Alerting AWS APIs

BrowserMob - Fri, 06/22/2012 - 15:29

Advanced Alerting


We are pleased to announce two new Advanced Alerting APIs that let you manage EC2 and S3 resources. Advanced Alerting lets you script alert policies in JavaScript, so you can create complex policies that alert the right people when there is a problem with your website. Since an alert policy is a simple JavaScript program, you can write a policy that decides who to notify based on the nature of the monitoring error and the time of day. The APIs provide an HttpClient which you can use to make REST calls to different endpoints, and a Leftronic API which lets you push your monitoring data to a dashboard, where it can be visualized in real time. And since these are Alerting APIs, we also provide APIs to send email, send SMS, or make voice calls.
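
To make that concrete, here is a minimal sketch of such a policy. The names below (onAlert, alert.errorType, sendSms, sendEmail, makeVoiceCall) are illustrative assumptions, not the documented WPM API; they only show the shape of the idea.

// Hypothetical alert policy: route notifications by error type and time of day.
// None of these helper names are the documented WPM Alerting API.
function onAlert(alert) {
    var hour = new Date().getHours();
    if (alert.errorType === 'NETWORK') {
        // Network failures page the on-call engineer immediately.
        sendSms('+15550100', 'Network error on ' + alert.monitorName);
    } else if (hour >= 9 && hour < 17) {
        // During business hours, email is enough.
        sendEmail('ops@example.com', 'Monitoring alert', alert.message);
    } else {
        // Off hours: escalate with a voice call.
        makeVoiceCall('+15550100', 'Alert on ' + alert.monitorName);
    }
}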

 

EC2 API

 

The EC2 API provides the ability to start, stop, and get the status of EC2 instances. If you are hosting your website in the Amazon cloud and page load time is increasing, you can use the EC2 API to launch more EC2 instances to handle the load on your servers. The API lets you spin up new instances in different Availability Zones, so if you see that page load times are increasing from one of your monitoring locations, you can spin up instances in a nearby AWS Availability Zone. Later, when your load time decreases, you can stop those instances to scale down unnecessary resources and save money on infrastructure costs. In short, the EC2 API lets you use Neustar Monitoring and Alerting to automatically scale your infrastructure up and down.
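
As an illustration (a sketch only: the alert fields, thresholds, and ec2.* helper calls below are assumptions, not the documented API), a policy along these lines could scale out when load times degrade and back in when they recover:

// Hypothetical auto-scaling policy; all names here are illustrative.
function onAlert(alert) {
    if (alert.metric === 'pageLoadTime' && alert.value > 8000) {
        // Page loads are slow: launch two more instances in a zone
        // near the monitoring location that reported the problem.
        ec2.startInstances({
            imageId: 'ami-12345678',        // hypothetical AMI id
            count: 2,
            availabilityZone: 'us-east-1b'  // zone near the slow location
        });
    } else if (alert.metric === 'pageLoadTime' && alert.value < 2000) {
        // Load times have recovered: stop the extra instances to save money.
        ec2.stopInstances({ tag: 'autoscaled' });
    }
}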

 

S3 API

 

The S3 API lets you put, get, delete, and copy objects to and from S3. You can combine it with the Alerting API call that retrieves the HAR file for a monitoring job. The HAR file contains detailed object-level information about your web pages: the response time for each object, its size, and its HTTP headers. You can retrieve the HAR data and put it into S3, then later run analytics on the HAR files to see which objects are the bottleneck in your pages' load times.
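
For instance (again a sketch: alerting.getHarFile and the s3.putObject signature are assumptions, not the documented calls), archiving each sample's HAR file might look like:

// Hypothetical sketch: pull the HAR for a monitoring sample and archive it in S3.
function onAlert(alert) {
    var har = alerting.getHarFile(alert.jobId);  // detailed object-level data
    s3.putObject({
        bucket: 'my-har-archive',                // hypothetical bucket name
        key: 'har/' + alert.monitorName + '/' + alert.timestamp + '.har',
        body: JSON.stringify(har)
    });
}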

 

The Alerting APIs are documented here.

The EC2 API is documented here.

The S3 API is documented here.

The HAR Specification is here.


Categories: Load & Perf Testing

RUM Graphs

BrowserMob - Fri, 06/22/2012 - 12:54

Introduction

Under the Real User Monitoring (RUM) Graphs tab there are four main visualizations that together provide an overall summary of user experience and give you the ability to further investigate points of interest.

  • Load time graph
  • Page hits distributed by load time graph
  • Page hits graph
  • Apdex graph

Those graphs get refreshed every minute.

 

Hour level > Minute level > Request level

The default for all zoom levels greater than 1 hour is to display one data point for each hour of collected data. This means that when you hover over a point in the hour-level graph, you are seeing a summary of all data collected for that hour. To view more detailed information for a specific hour, you can click on any point in the left panel graphs (Load time, Page hits, Apdex).


By clicking on a point in the hour-level graph, you will be brought to a zoomed-in view of that hour's data. Now each point shown on the graph represents one minute. To see detailed information about a specific minute, you can again click any point in the graph, and a table will be displayed showing each individual request that happened during that minute. You can click on any row to show details about each request. Alternatively, you can click the 'back' button to return to the hour-level view.

 

Load time graph

This graph shows the average page load time, average DOM load time, and average time to first byte (TTFB) for all pages that include the beacon.

  • TTFB is the time elapsed from when the user requested the page until the first byte of data was received by the browser.

         A large spike in TTFB could indicate slow server response, network latency, etc.

  • DOM load time is the time required for the browser to receive all document data and render the page.

         A spike in DOM load time could indicate slow loading resources, images, objects, etc.

  • Total page load time is the sum of TTFB and DOM load time. At this point, the page is rendered and available for the user to interact with.
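
In Navigation Timing API terms, these three metrics can be derived in the browser roughly like this (a sketch of the standard calculation, not the product's internal code):

// Deriving the three metrics from window.performance.timing.
// Run after the load event has completed.
var t = window.performance.timing;
var ttfb = t.responseStart - t.navigationStart;   // time to first byte
var total = t.loadEventEnd - t.navigationStart;   // total page load time
var domLoad = total - ttfb;                       // DOM load time
console.log('TTFB ' + ttfb + ' ms, DOM ' + domLoad + ' ms, total ' + total + ' ms');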

 

You can view more detailed information about each point in the graph by hovering over a point. When you do so, the average load time, minimum load time, maximum load time, number of hits, and Apdex score for that hour will be displayed in the right column table. For hour level data, you will also see the 'Page hits distributed by load time' graph updated on hover.

 

Page hits distributed by load time graph

This panel displays summarized details for the graph's current time range or, as mentioned above, specific details once a point is hovered over. The graph shows six buckets which all page hits are divided into. The number beneath each bar is the total load time range and the number above each bar is the number of page hits which fell within that time range.

 

By hovering over a peak in the Load time graph and viewing the distribution graph, you can quickly determine whether the high average load time was caused by a large number of users experiencing elevated load times or by a smaller number of users seeing exceptionally high load times. Distribution data is only available at the hour level.

 

Page hits graph

This graph shows the total number of page requests for a specific point in time. By hovering over a poor load time or Apdex value, you can quickly determine the number of users affected.

 

Apdex graph

This graph shows a measurement of user frustration based on page load time. A brighter red color indicates a lower Apdex score, which means a greater level and concentration of user frustration. The lighter the color, the lower the level of user frustration for the time period.

Categories: Load & Perf Testing

Real User Monitoring (RUM) - FAQ

BrowserMob - Fri, 06/22/2012 - 11:22

What is Real User Monitoring (RUM)?

  • Similar to website analytics products, RUM sends tracking beacons from your users' browsers to our servers, and allows you to monitor performance of your site from your users' perspective.

 

When should I use Real User Monitoring vs. Synthetic Monitoring?

  • Real User and Synthetic monitoring serve different purposes and complement each other. Synthetic monitoring is great for detecting whether your site is down, establishing a baseline performance level, or investigating issues at the page element or network level. Real User Monitoring gives you insight into your users' actual experience with your site. It addresses several questions, including:
    •     What is my end user experience like? Do most end users have an acceptable experience or a frustrating one?
    •     I want to know my website performance with browsers my users are using.
    •     I want to know my website performance on the pages my users are going to. Synthetic monitoring cannot cover all paths of my website.
    •     I want to know my website performance from the locations my users are coming from. Synthetic monitoring does not have agents everywhere.
    •     I want to know my website performance from ISPs my users are connected to. Synthetic monitoring checks from data centers on fast Internet nodes do not give a realistic picture of my website performance as experienced by my users.

 

How do I get set up?

  • Installing is a matter of adding a small piece of JavaScript code into your web page templates. You can find the code to be added inside the Web Beacon tab of the Real User console at https://rum.wpm.neustar.biz.

 

What does the JavaScript code do?

  • After the page is loaded, the code you add to your web pages fetches and executes a small JavaScript file stored on a CDN. This code retrieves web performance data from your user's browser using the Navigation Timing API. The collected data is then sent to our servers through an HTTP GET request to https://rum-collector.wpm.neustar.biz/beacon... The response has no content (HTTP status code 204).
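
As a rough sketch of how a beacon of this kind works (illustrative only: this is not the actual tag, and the query parameter names below are assumptions), the browser-side logic amounts to:

// Illustrative RUM beacon: gather Navigation Timing data after load
// and send it to the collector via an image GET. Parameter names are
// assumptions, not the actual beacon format.
window.addEventListener('load', function () {
    setTimeout(function () {
        var t = window.performance && window.performance.timing;
        if (!t) return;  // browser lacks the Navigation Timing API
        var params = 'load=' + (t.loadEventEnd - t.navigationStart) +
                     '&ttfb=' + (t.responseStart - t.navigationStart) +
                     '&url=' + encodeURIComponent(location.href) +
                     '&ref=' + encodeURIComponent(document.referrer);
        // The GET returns 204 No Content, so nothing is rendered.
        new Image().src = 'https://rum-collector.wpm.neustar.biz/beacon?' + params;
    }, 0);
});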

 

Will the JavaScript tag add any latency to my page load time?

  • The latency is minimal, and the work happens after the page has loaded, so users see no effect.

 

What browsers support the Navigation Timing API?

  • Chrome 6+, Firefox 7+, Internet Explorer 9+, Android Browser 4.0+. You can find a full list of compatible browsers here.

 

How do I find out if the tag is properly installed on my website?

  • A few minutes after the tag is installed, you should start seeing data in the Time Series tab of the Real User console.

 

What data is being collected?

  • We collect performance-related information, the current URL, and the referrer URL. We do not collect users' personal data or session information.

 

What does the page load time over time graph show? How is the data aggregated?

  • The data we collect is aggregated every minute. At this point we aggregate only the page load time and the time to first byte. For every minute and every hour, we calculate several metrics that are displayed in our main load time over time graph: average, min/max, Apdex score, and load time distribution. All of those are based on the page load time. In addition, we display the average time to first byte.

 

Why are the Real User page load times different from synthetic page load times?

  • There are a number of factors that could explain that difference.
    • First is the agents' location. Synthetic monitoring agents tend to be on high-throughput networks, whereas real users mostly use slower connections (DSL, cable). The geographical distribution of those agents can also affect the results.
    • Second, the browsers used vary widely in the real-user world, whereas they are consistent in synthetic monitoring.

 

Why are my hit numbers so low?

  • Since we collect only measurements from browsers that support the Navigation Timing API, a number of samples are never saved on our servers. Our goal is to collect accurate measurements rather than all measurements. We also discard requests coming from our own synthetic monitoring agents.

 

Why do I see gaps in the page hits and Apdex graphs that do not show in the page load time graph?

  • The page load time graph is designed to show trends in page load time over time. For periods with no page hits, and therefore no page load time data, an interpolated line is shown spanning the time period.

 

Why are there no graphs displayed in Internet Explorer?

  • Internet Explorer version 8 and earlier do not have native support for Scalable Vector Graphics (SVG), which is required for Real User graphing. You can enable graphing in Internet Explorer by clicking the 'Install Google Chrome Frame' button under the Graphs tab of the Real User console and following the installation instructions. Google Chrome Frame is a free plugin that adds SVG support to Internet Explorer.

 

How long is my RUM data stored?

  • The individual requests are stored for seven days.
  • The minute-level aggregate data is stored for seven days.
  • The day-level aggregate data is stored indefinitely for now.

 

Is there an API I can use to retrieve my RUM data?

  • Yes. Information about our APIs can be found here.

 

Is this a free product?

  • Yes, while we are in Beta, RUM will be free of charge.

 

We would love to hear your feedback! Feel free to send your comments to .

Categories: Load & Perf Testing

Improve Web Performance by 500% with this Architecture Shift

LoadStorm - Thu, 06/21/2012 - 10:20

If you could drop the page rendering time to 1/5th of its current time, would you do it?

Wouldn't that be a 5x improvement? As a web performance geek, are you interested in speeding up your user experience by 500%?

Sounds good, but how? Twitter's engineers have offered us a tremendous case study to consider. It centers on the cost of running JavaScript in the browser.




Distributed Processing - Put More on the Client Side

Web applications have been including more and more JavaScript to improve the user experience. The terms "Web 2.0" and "Rich Internet Application" are now old news, but the underlying technologies continue to grow quickly in both deployment and available tooling. Browsers are becoming much more complex as they gain the capability to handle sophisticated client-side processing.

It sure seems to me that the functionality of browsers and operating systems is converging. So what? That all sounds good... warm and fuzzy for web developers. New toys! Better interactivity! Cool features on my app!

Yeah, but aren't you a bit skeptical? Have you been coding long enough to recognize the downside of complexity? There is always a cost associated with the new technologies being deployed, but most of us web geeks have overlooked the performance hit of RIA/JavaScript because it fits the big umbrella architecture improvement best described as "distributed processing".

Back in 1999, I was teaching XML courses where it was common to hear me say something like:

"If we push as much of the processing overhead to the individual PC or handheld device, then we can improve overall system performance. Rather than make our server farm 100x more powerful, let's shift the running code to a 10,000 machines on the client side."

That was a good theory, and it worked for a decade or so. But now we are seeing the limitations of this architectural philosophy. The client side can't keep up. We have overloaded its capabilities. We are assuming too much about browsers' ability to process efficiently. Smartphones and tablets aren't as fast as we think they are.

If we analyze the components that make up rendering speed, we find that today's Rich Internet Applications often take a very significant performance hit from JavaScript execution. In Twitter's fully client-side JavaScript design, users wouldn't see anything until the JavaScript was downloaded and processed. Result? Slower user experience. Lower user satisfaction.

The Twitter engineers concluded: "The bottom line is that a client-side architecture leads to slower performance because most of the code is being executed on our users' machines rather than our own."

Let me say that another way: Too much JavaScript in web pages is killing performance!

Heresy? Is Scott out of his mind? Can empirical evidence be shown to prove this conclusion? Probably a little of each.



Twitter Innovates by Returning to Server Side

Consider what Twitter is doing now. They have proven that web performance can be greatly enhanced by moving more of the processing to the server-side.

According to Dan Webb, Engineering Manager of Twitter's Web Core team:

"We took the execution of JavaScript completely out of our render path. By rendering our page content on the server and deferring all JavaScript execution until well after that content has been rendered, we've dropped the time to first Tweet to one-fifth of what it was."

The result was a 500% improvement in performance! The metric used is a direct reflection of what their users need to see in their browser - Twitter's key measurement of usability relating to performance.

So, speed can be greatly improved by going "old school" with more of the processing on the server. In Twitter's case, the old school was really only a couple of years ago. Webb shares some insight into the history of their decisions prior to this innovative performance optimization:

"When we shipped #NewTwitter in September 2010, we built it around a web application architecture that pushed all of the UI rendering and logic to JavaScript running on our users’ browsers and consumed the Twitter REST API directly, in a similar way to our mobile clients. That architecture broke new ground by offering a number of advantages over a more traditional approach, but it lacked support for various optimizations available only on the server."

OK, they bought in to the heavier-client concept two years ago. What Webb is saying fits perfectly with what I was evangelizing back in 1999. It fits with what my computer science professors preached back in the 1980s about distributed processing.

Brilliant! Shift the processing to the consumer! Performance problems solved!

Perhaps not.


Take Control of your Front-end Performance

When I click a link, my expectation is to see content as soon as possible. Recent studies say that we want to see something in less than 2 seconds or we start to have an emotional letdown that affects our "state of satisfaction" with the site.

Twitter engineers were focused on that speed. They went searching for ways to tune their architecture to produce performance gains. They experimented and found the answer in what would seem a step backward in technology. Move rendering to the server?! Huh? You've got to be kidding!

Webb says:

"To improve the twitter.com experience for everyone, we've been working to take back control of our front-end performance by moving the rendering to the server. This has allowed us to drop our initial page load times to 1/5th of what they were previously and reduce differences in performance across browsers."

Not only does this approach get content to the user in 20% of the original time, it greatly helps with browser inequality. Very cool side effect, don't you think?
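
As a sketch of the general technique (not Twitter's actual code), the idea is to let users read the server-rendered HTML immediately and attach the JavaScript only afterwards:

// Sketch: the server delivers fully rendered HTML; script work is
// deferred until after that content has painted. The bundle path
// '/js/app.js' is a hypothetical placeholder.
window.addEventListener('load', function () {
    // The user is already reading server-rendered content at this point.
    var s = document.createElement('script');
    s.src = '/js/app.js';
    s.async = true;
    document.body.appendChild(s);  // enhance the page only now
});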

It makes sense when I think about it, because older versions of IE are notoriously poor performers, and they make us web developers look bad. Users don't really know or care that the reason the page takes 15 seconds to load isn't my fault. Mostly, they assume I'm a bad coder. They subconsciously conclude that the owner of the site is stupid because it moves so slowly. Slow is a synonym for "dumb" or "poor quality" or "low value" in many contexts. That's why I get mad at users who hang on to IE 6 or 7: it hurts my reputation.

What a wonderful way to overcome the "IE penalty"! Take away much of the thinking that is difficult (slow) for the browser.


Conclusion: Re-think Your Architecture, Tune, Test, Iterate

All web applications are different in the way they perform, and their requirements aren't the same. Will your users accept a 5-second render time? Does your app need sub-second appearance of some content?

I recommend you explore the impact of reducing JavaScript control over your UI. Experiment with moving more pre-processing of the page to your server-side. Will Twitter's model get you significant performance gains?

There is only one way to properly attack your web application performance optimization - scientifically. You need to analyze your needs and plan a project well. Iterate! Test, tune, test, tune, test...

Limit your tuning to one change, then run a performance test.

Why should you give energy to load and performance testing? Why is an iterative process of testing and tuning a better approach? Because you can't control what you don't measure. If you don't test, you won't know the speed. If you don't tune one item at a time between tests, you won't know what tweak produced what result.

Take note of how Twitter engineers identified their most important goals: "Before starting any of this work we added instrumentation to find the performance pain points and identify which categories of users we could serve better."

Perhaps you can get a 500% gain in speed, and that will result in more success for your web app.

"Faster sites make more money. Period."

Bottom line, isn't more money from your site what you really want?!





Public Beta of Real User Monitoring - Live on June 21, 2012

BrowserMob - Wed, 06/20/2012 - 19:10

We are pleased to announce the public beta of Real User Monitoring for all of our WPM customers!  As of Thursday, June 21, 2012, you will see a new tab in your console: "Real User (Beta)".  The feature is FREE for you to try out, so take advantage of it!

 

Here are some important details and answers to questions you may have about WPM's Real User Monitoring:

 

What is Real User Monitoring (RUM)?

  • Similar to website analytics products, RUM sends tracking beacons from your users' browsers to our servers, and allows you to monitor performance of your site from your users' perspective.

 

When should I use Real User Monitoring vs. Synthetic Monitoring?

  • Real User and Synthetic monitoring serve different purposes and complement each other. Synthetic monitoring is great for detecting whether your site is down, establishing a baseline performance level, or investigating issues at the page element or network level. Real User Monitoring gives you insight into your users' actual experience with your site. It addresses several questions, including:
    •     What is my end user experience like? Do most end users have an acceptable experience or a frustrating one?
    •     I want to know my website performance with browsers my users are using.
    •     I want to know my website performance on the pages my users are going to. Synthetic monitoring cannot cover all paths of my website.
    •     I want to know my website performance from the locations my users are coming from. Synthetic monitoring does not have agents everywhere.
    •     I want to know my website performance from ISPs my users are connected to. Synthetic monitoring checks from data centers on fast Internet nodes do not give a realistic picture of my website performance as experienced by my users.

 

How do I get set up?

  • Installing is a matter of adding a small piece of JavaScript code into your web page templates. You can find the code to be added inside the Web Beacon tab of the Real User console at https://rum.wpm.neustar.biz.

 

What does the JavaScript code do?

  • After the page is loaded, the code you add to your web pages fetches and executes a small JavaScript file stored on a CDN. This code retrieves web performance data from your user's browser using the Navigation Timing API. The collected data is then sent to our servers through an HTTP GET request to https://rum-collector.wpm.neustar.biz/beacon... The response has no content (HTTP status code 204).

 

Will the JavaScript tag add any latency to my page load time?

  • The latency is minimal, and the work happens after the page has loaded, so users see no effect.

 

What browsers support the Navigation Timing API?

  • Chrome 6+, Firefox 7+, Internet Explorer 9+, Android Browser 4.0+. You can find a full list of compatible browsers here.

 

How do I find out if the tag is properly installed on my website?

  • A few minutes after the tag is installed, you should start seeing data in the Graphs tab of the Real User console.

 

What data is being collected?

  • We collect performance-related information, the current URL, and the referrer URL. We do not collect users' personal data or session information.

 

What does the page load time over time graph show? How is the data aggregated?

  • The data we collect is aggregated every minute. At this point we aggregate only the page load time and the time to first byte. For every minute and every hour, we calculate several metrics that are displayed in our main load time over time graph: average, min/max, Apdex score, and load time distribution. All of those are based on the page load time. In addition, we display the average time to first byte.

 

What is the Time to First Byte?

  • When your user's browser connects to your web server, the browser must execute a number of steps.
  • These can be broken down into: redirects, DNS lookup, initial connection, waiting, receiving data, and closing the connection. The Time to First Byte (TTFB) is the time your browser spends waiting for the web server to start sending data back. See this post for more information.

 

What is the DOM load time?

  • The DOM load time is the time it takes the browser to download, process and render all the page elements. It is the total page load time minus the Time to First Byte.

 

What does the Apdex score mean?

  • The Apdex score gives you a quick idea of your site's performance by using a standard method of evaluating user satisfaction. Every sample we collect is tagged as either "satisfied" (page load time < 2 seconds), "tolerating" (page load time between 2 and 8 seconds), or "frustrated" (page load time > 8 seconds). The Apdex formula is the number of satisfied samples plus half of the tolerating samples (frustrated samples count for nothing), divided by the total number of samples: Apdex_t = (Satisfied Count + Tolerating Count / 2) / Total Samples. An Apdex score of 1 for a given time range means that all samples for that time range had a page load time below 2 seconds.
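
For instance, here is an illustrative calculation using those thresholds (an example only, not the product's internal code):

// Apdex from a list of page load times in milliseconds,
// using the 2-second and 8-second thresholds described above.
function apdex(loadTimesMs) {
    var satisfied = 0, tolerating = 0;
    loadTimesMs.forEach(function (ms) {
        if (ms < 2000) satisfied++;
        else if (ms <= 8000) tolerating++;
        // > 8000 ms: frustrated, contributes nothing
    });
    return (satisfied + tolerating / 2) / loadTimesMs.length;
}

// 2 satisfied + 1 tolerating + 1 frustrated out of 4 samples:
apdex([900, 1500, 4000, 12000]);  // (2 + 0.5) / 4 = 0.625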

 

Why are the Real User page load times different from synthetic page load times?

  • There are a number of factors that could explain that difference.
    • First is the agents' location. Synthetic monitoring agents tend to be on high-throughput networks, whereas real users mostly use slower connections (DSL, cable). The geographical distribution of those agents can also affect the results.
    • Second, the browsers used vary widely in the real-user world, whereas they are consistent in synthetic monitoring.

 

Why are my hit numbers so low?

  • Since we collect only measurements from browsers that support the Navigation Timing API, a number of samples are never saved on our servers. Our goal is to collect accurate measurements rather than all measurements. We also discard requests coming from our own synthetic monitoring agents.

 

Why do I see gaps in the page hits and Apdex graphs that do not show in the page load time graph?

  • The page load time graph is designed to show trends in page load time over time. For periods with no page hits, and therefore no page load time data, an interpolated line is shown spanning the time period.

 

Why are there no graphs displayed in Internet Explorer?

  • Internet Explorer version 8 and earlier do not have native support for Scalable Vector Graphics (SVG), which is required for Real User graphing. You can enable graphing in Internet Explorer by clicking the 'Install Google Chrome Frame' button under the Graphs tab of the Real User console and following the installation instructions. Google Chrome Frame is a free plugin that adds SVG support to Internet Explorer.

 

Is there an API I can use to retrieve my RUM data?

  • Yes. Information about our APIs can be found here.

 

Is this a free product?

  • It is for now. We are gathering as much feedback as we can from our customers to build a great product, and while we are in Beta, RUM will be free of charge.

 

What other new features can I expect from using this product?

  • We are working on making more data available, such as performance per page, browser, and location. We would love to hear your feedback! Feel free to send your comments to .
Categories: Load & Perf Testing

Python Timer Class - Context Manager for Timing Code Blocks

Corey Goldberg - Sun, 06/17/2012 - 07:39

Here is a handy Python Timer class. It creates a context manager object, used for timing a block of code.

from timeit import default_timer

class Timer(object):
    def __init__(self, verbose=False):
        self.verbose = verbose
        self.timer = default_timer

    def __enter__(self):
        self.start = self.timer()
        return self

    def __exit__(self, *args):
        end = self.timer()
        self.elapsed_secs = end - self.start
        self.elapsed = self.elapsed_secs * 1000  # millisecs
        if self.verbose:
            print 'elapsed time: %f ms' % self.elapsed

To use the Timer (context manager object), invoke it using Python's `with` statement. The duration of the context (code inside your `with` block) will be timed. It uses the appropriate timer for your platform, via the `timeit` module.

Timer is used like this:

with Timer() as target:
    # block of code goes here.
    # result (elapsed time) is stored in `target` properties.
    pass

Example script: timing a web request (HTTP GET) using the `requests` module.

#!/usr/bin/env python
import requests

from timer import Timer

url = 'https://github.com/timeline.json'

with Timer() as t:
    r = requests.get(url)

print 'fetched %r in %.2f millisecs' % (url, t.elapsed)

Output:

fetched 'https://github.com/timeline.json' in 458.76 millisecs

`timer.py` in GitHub Gist form, with more examples:

Categories: Load & Perf Testing

My Evolving Droids

Corey Goldberg - Sat, 06/16/2012 - 06:25

(Droid -> Droid X -> Droid Razr) [stock pics]

In the past few years, mobile devices have turned into amazing little pocket computers. I'm blown away by the hardware (ARM) and software (Android) evolution.

For smartphone wireless service, I was a Verizon customer (USA) for the past decade. I've owned the flagship Droid series devices by Motorola since their first launch (currently running on the 4G LTE network). It started with a clunky original Droid, followed by a Droid X, and then the sleek Droid Razr... every year another generation of better, faster devices.

Device Specs:

Motorola Droid - Nov 2009

  • OS: Android v2.0
  • CPU: 600 MHz ARM Cortex-A8 (TI OMAP3430, 65nm)
  • RAM: 256 MB
  • Internal Storage: 512 MB
  • Screen Size: 3.7 inch
  • Resolution: 480 x 854

Motorola Droid X - July 2010

  • OS: Android v2.1
  • CPU: 1 GHz ARM Cortex-A8 (TI OMAP3630, 45nm)
  • RAM: 512 MB
  • Internal Storage: 8 GB
  • Screen Size: 4.3 inch
  • Resolution: 480 x 854

Motorola Droid Razr - Nov 2011

  • OS: Android v2.3
  • CPU: 1.2 GHz dual-core ARM Cortex-A9 (TI OMAP4430, 45nm)
  • RAM: 1 GB
  • Internal Storage: 16 GB
  • Screen Size: 4.3 inch
  • Resolution: 540 x 960

However, my life with the Moto Droids is coming to an end. I just signed up with a new wireless carrier and pre-ordered a new Sammy GS3!:


(Samsung Galaxy S3) [stock pic]

Samsung Galaxy S3 - June 2012

  • OS: Android v4.0
  • CPU: 1.5 GHz dual-core Krait (Qualcomm Snapdragon S4 MSM8960, 28nm)
  • RAM: 2 GB
  • Internal Storage: 16 GB
  • Screen Size: 4.8 inch
  • Resolution: 720 x 1280

Categories: Load & Perf Testing

Monitoring - Create a Websocket server and WPM script

BrowserMob - Fri, 06/15/2012 - 11:42

As you may know, our WPM agents now support WebSockets, an implementation of the W3C WebSocket API. Basically, it allows clients to carry on two-way (full-duplex) communication with a remote host that supports WebSocket.

 

The following WPM script opens two WebSocket connections:

  • random-wsserver.nodejitsu.com: returns random strings
  • statserver.nodejitsu.com: returns server statistics, for example hostname, platform, free memory status, etc.

 

Some highlights:

  • The WPM WebSocket script type is Basic. The WebSocket is established over a "special" HTTP GET and then kept open so that data can be transmitted in full-duplex. Mixing the script with commands from the Browser type, for example test.openBrowser(), will cause the script to fail.
  • Our WebSocket driver supports WebSocket server URLs with ASCII letters 'a'-'z', digits '0'-'9', and the hyphen '-'.
  • WPM allows only one open WebSocket at a time. The first one must be closed before the second one is opened.

 

In this example, two WebSocket servers are implemented and hosted on Nodejitsu. The source code for these two Node servers, along with HTML pages for testing, is attached.

 

//=============================================================================
// Helpers
function WebSocket(url, protocols) {
    return test.openWebSocket(url);
}

function randomInt(min, max) {
    return Math.floor(Math.random() * (max - min + 1)) + min;
}

function randomString(length) {
    var dict = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz",
        result = '', i;
    for (i = 0; i < length; ++i) {
        result += dict[randomInt(0, dict.length - 1)];
    }
    return result;
}

//=============================================================================
// Setup the first WebSocket to the "random string" server
var ws = new WebSocket("ws://random-wsserver.nodejitsu.com/"),
    counter = 0;

ws.onopen = function() {
    test.log("WebSocket '" + ws.url + "' Open (bufferedAmount '" + ws.bufferedAmount + "')");
};
ws.onmessage = function(msg) {
    var response = randomString(randomInt(1, 100));
    test.log("WebSocket Message Received: '" + msg + "' (bufferedAmount '" + ws.bufferedAmount + "')");
    test.log("WebSocket Message Sending: '" + response + "' (bufferedAmount '" + ws.bufferedAmount + "')");
    ws.send(response);
    ++counter;
};
ws.onerror = function(e) {
    test.log("WebSocket Error: '" + e.toString() + "' (bufferedAmount '" + ws.bufferedAmount + "')");
};
ws.onclose = function(code, reason, remote) {
    test.log("WebSocket Close. code: '" + code + "' reason: '" + reason + "' remote: '" + remote + "' (bufferedAmount '" + ws.bufferedAmount + "')");
};

//=============================================================================
// Setup script
test.beginTransaction();

// step 1
test.beginStep("Wait (max 1 min) to receive the first 5 messages of the first Websocket");
test.waitFor(function() {
    return counter >= 5;
}, 60000 * 1); // max 1 minute
test.endStep();

// step 2
test.beginStep("Wait (max 1 min) to receive another 5 messages of the first Websocket");
test.waitFor(function() {
    return counter >= 10;
}, 60000 * 1); // max 1 minute
test.endStep();

// step 3
test.beginStep("Wait (max 1 min) to receive last 5 messages of the first Websocket");
test.waitFor(function() {
    return counter >= 15;
}, 60000 * 1); // max 1 minute
test.endStep();

// step 4: close the first socket before opening the second
// (WPM allows only one open WebSocket at a time)
test.beginStep("Close the first websocket");
ws.close();
test.log("Counter: " + counter);
test.assertTrue(counter >= 15, "Test not passed: received less than 15 messages for first Websocket");
test.endStep();

// Setup the second WebSocket to statserver
var ws2 = new WebSocket("ws://statserver.nodejitsu.com/"),
    counter2 = 0;

ws2.onopen = function() {
    test.log("WebSocket 2 '" + ws2.url + "' Open (bufferedAmount '" + ws2.bufferedAmount + "')");
};
ws2.onmessage = function(msg) {
    var response = randomString(randomInt(1, 100));
    test.log("WebSocket 2 Message Received: '" + msg + "' (bufferedAmount '" + ws2.bufferedAmount + "')");
    test.log("WebSocket 2 Message Sending: '" + response + "' (bufferedAmount '" + ws2.bufferedAmount + "')");
    ws2.send(response);
    ++counter2;
};
ws2.onerror = function(e) {
    test.log("WebSocket 2 Error: '" + e.toString() + "' (bufferedAmount '" + ws2.bufferedAmount + "')");
};
ws2.onclose = function(code, reason, remote) {
    test.log("WebSocket 2 Close. code: '" + code + "' reason: '" + reason + "' remote: '" + remote + "' (bufferedAmount '" + ws2.bufferedAmount + "')");
};

// step 5
test.beginStep("Wait (max 1 min) to receive 5 messages of the second Websocket");
test.waitFor(function() {
    return counter2 >= 5;
}, 60000 * 1); // max 1 minute
test.endStep();

// step 6
test.beginStep("Close the second websocket");
ws2.close();
test.log("Counter2: " + counter2);
test.assertTrue(counter2 >= 5, "Test not passed: received less than 5 messages for second Websocket");
test.endStep();

test.endTransaction();
Categories: Load & Perf Testing

Home Audio Setup

Corey Goldberg - Thu, 06/14/2012 - 09:40

I work from my apartment and listen to music or talk radio nearly 24 hours a day (even at soft volume while I sleep). My home audio setup is pretty important to me.

Here is a description of the current rig for home listening:

The Gear:

  • Media Server (Ubuntu, Logitech Media Server)
  • Touchscreen UI (Android, Squeezebox Controller)
  • Wifi Music Player (Squeezebox Touch)
  • Amplifier (Harman Kardon)
  • 5 Speakers (Polk Audio)

The Content:

Streaming Radio Networks:

  • Slacker Radio Plus ($3.99/month) - slacker.com
  • SiriusXM Internet Radio ($14.49/month) - siriusxm.com
  • Pandora (Free account) - pandora.com
  • SomaFM (Free) - somafm.com

MP3 Collection:

  • 7010 Files
  • 60 GB
  • Track List: Corey's MP3s

Tablet interface (Logitech Squeezebox Controller):

Web interface (Logitech Media Server 7.7.2):

Rock on!
Categories: Load & Perf Testing

New partner in Benelux

LoadImpact - Thu, 06/14/2012 - 01:57
Today we're proud to announce that we have signed a premium partnership agreement with Systemation in Benelux, one of the leading distributors in integration testing and quality assurance in northern Europe. As you might already know, we provide a full-scale partner program in priority markets in the U.S., Europe and Asia. Together with partners like Systemation we can reach out much faster to clients in need of efficient load testing. As a Premium Partner, Systemation will actively represent, sell and implement customer projects for the Benelux market.

Systemation acts as a distributor in the Benelux countries for several leading international software companies like Load Impact. Alongside the software solutions, Systemation provides its clients with a comprehensive suite of professional services, including implementation, local technical support and software maintenance.

Jaap Franse, Managing Director of Systemation, says the following about our partnership: "We are proud to add the world class Load Impact solution to our portfolio of services. This marks a significant step in supporting our customers when executing performance tests for their business critical web applications."

So if you're active in the Benelux market and in need of performance testing, contact Systemation directly.

Full press release.
Categories: Load & Perf Testing

Why Performance Testing Is Important - Perspective of a Guest Blogger

LoadStorm - Wed, 06/13/2012 - 11:38

When it comes to understanding and improving a website, whether it is a personal project, a business venture or otherwise, it's vital that the web application is tested for responsiveness and stability, i.e. how well it can handle a particular workload. Businesses can also learn a lot about their website through software performance testing, which can build performance into the design and structure of the system before any coding takes place.

Performance testing encompasses a range of different tests which enable analysis of various aspects of the system. One of the simplest ways to test the performance of a website is load testing, which provides information about how the system behaves when handling specific loads of users who might be performing a number of transactions simultaneously on the same application.


The role of load testing in business

Load testing can monitor the system's response times for each transaction during a set period of time. This kind of monitoring can provide a lot of useful information, especially for business managers and stakeholders, who look for conclusions based on these results, along with data to support the findings. Load testing can also draw attention to problems in the application software, so these bottlenecks can be fixed before they become more problematic.

As with any test in performance testing, the results depend on the application under test. The main reason for carrying out such testing is to measure and report the behaviour of the website under an anticipated live load. As a result of testing, end user response times can be reported, as can key business processes, CPU and memory statistics. Importantly, it allows site owners to see how a planned release performs compared to a website system which is currently live.

These results can of course be affected by factors outside of the system's control, such as users' broadband speeds. And as with any application under test, there will always be some difference between simulated users and live users when it comes to performance.


Different kinds of performance testing

Performance testing can also cover stress testing, which deals with the upper limits of capacity within the system and determines how well the web application will perform if there is a sudden surge in demand and the number of users goes above and beyond the expected maximum levels.

Endurance testing monitors the system under continuous load; it can detect potential leaks in memory utilization, for example, along with performance degradation and how the system copes under sustained use.

Other types of performance testing include spike testing, which involves a sudden increase in the load of users, to see how the system behaves and responds to a dramatic ‘spike’ in users. Meanwhile, configuration testing can look at configuration changes to any components of the system in terms of performance and behaviour. In some cases, isolation testing may be required to repeat a test execution and to reconstruct where a system problem lies.

All of these are methods of performance testing which can demonstrate that a system is meeting performance criteria. They can also analyse and compare the results of two systems and even measure which aspects of the system can be improved by discovering what causes it to overload.


Testing a future performance

As technology has advanced, performance has become increasingly difficult to test. This is partly due to the complexity of modern web browsers and advances such as HTML5. These new browsers have complex features, for example WebSockets, which mean that application traffic is becoming more event-driven by user actions.

In the future, HTML clients are undoubtedly going to become more sophisticated, almost like full apps themselves. This will make their performance behaviour harder to simulate and mimic, and more complicated to understand than the actual system.

The advantage that a performance testing system such as LoadStorm has, both now and in the future, is a sustainable scripting solution that can overcome a number of these issues. While performance testing can sound daunting, its results make it a process well worth undertaking.

Performance testing gives its investors a high level of confidence in a web application's ability to handle large volumes and patterns of traffic before going live. Testing a site's web performance is an important part of the web design and optimisation process, and an integral cog in uniting the business and the consumer.




6 Best Practices For Excellence in Operations Management

BrowserMob - Tue, 06/12/2012 - 11:50

6 Best Practices

For Excellence in Operations Management

 

Compiled by Todd Minnella, Neustar

 

The IT operations group is the heart of today’s enterprise.  After all, they maintain the health of the complex, ever-changing application infrastructure that provides services to the business and its customers.  The ops team’s goal is the same each day: to ensure that all applications are stable and always available.  If there are issues, they must be addressed quickly to prevent or minimize any negative customer impact. 

 

Having been in the IT operations/support industry for the past 25 years, I’ve had the opportunity to deal with many issues affecting IT infrastructure.  Drawing on my experience, I’ve put together a list of best practices to help you achieve operational excellence.

 

1. Stability is Job #1

Stability is your most important concern!  It is the number one metric for measuring the health of an application, outranking speed and performance.  Speed and performance are nice to have, but if the website or service is not available and stable, your customers won’t be able to use it. 

 

2. IT problems cannot be analyzed in isolation

When analyzing IT infrastructure problems, you need to be realistic.  There are different degrees of importance.  Of course, if the problem is affecting your customers, it’s automatically a higher priority.  However, this is not to say operational defects should be a second priority.  If ignored, some can have serious negative effects on your organization.   When analyzing problems and determining their priority, you need to work with your product team to gauge the business impact. 

 

3. Transparency

Transparency is key. When analyzing infrastructure problems, share them with your team and work together to determine priorities and solutions.   It’s also important for everyone to celebrate your successes.  Have a scoreboard tracking how many days of stability the team has achieved.  This way everybody can visualize and share in the successes.

 

4. Get to know what “normal” looks like

To be successful in an operational role, you need to identify and understand problems.   To do that, you first need to know what “normal” is for your network.   Tracking your application’s performance over an extended period of time will give you insights into your average load time, how high loads affect your network, latency, dependencies and limits of your network.   So when there is a problem, you’ll identify it easily by comparing normal and “not normal” for your systems.

 

5. Building a high performance application

Once your application is stable, the next step is to enhance it for high performance.  Remember, when designing your application's infrastructure network, strive for simplicity.  The fewer hops and elements you have, the easier it will be to support and maintain. As always, don't optimize too early: the system needs to be stable, functional and thoroughly tested before optimization.  When testing, it is essential to test at production scale, because applications perform differently under high load. Load testing is a must for high performance!

 

6. High quality operations

Lastly, you should strive to build a healthy operational culture for your group.  Be an enabler for your business and work with your peers to manage change.  Operations teams are well-positioned to help an organization be successful by guiding and overseeing change activity. Thus, it’s important not to become a barrier to change. Where possible, make decisions for your business based on data – systems for collecting system and application metrics are plentiful, so decisions around changes and improvements should be based on verifiable, repeatable testing, not by “feel.”  Take ownership of issues and act as an advocate for your customers. The voice of operations is valuable, so smart businesses listen to their ops teams’ feedback.  If you can, be a customer of your application or service. Use it regularly and provide regular suggestions and detailed problem reports to your product team partners.

 

Providing high quality IT operations for a high-performing application is a challenging (and rewarding) profession. I hope these insights will prove helpful to you.  Good luck!

 

About Todd Minnella

Todd is a Manager of Systems Administration at Neustar, maintaining WPM, Webmetrics and BrowserMob services.  He lives and breathes performance daily!

Categories: Load & Perf Testing

Monitoring APIs for Web Performance Management

BrowserMob - Tue, 06/12/2012 - 11:43

Monitoring APIs are now available for Web Performance Management.  Built as REST services, these APIs allow for deep integration with other applications and provide the versatility to configure your monitoring in real time.

 

Monitors can be configured from other applications using the Monitor Administration APIs: the configuration options currently available in the user interface are now accessible through web services.  Summary information, as well as raw and aggregate sample data, is available through the Summary and Sample Information APIs.  The raw data can be analyzed further and integrated into your reports to keep stakeholders informed of what is happening on your website.  The Maintenance Window APIs can retrieve, update and delete maintenance windows in your monitoring configurations.

 

To help with the development and integration of the APIs, our documentation includes an interactive API portal where calls can be tested with your specific data sets.  These methods are all available using your account's API key, which is found in the 'My Account' tab within Web Performance Management.
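
As a quick illustration (a sketch only: the host, path, and parameter names below are placeholders, not the real endpoints; see the API Documentation Portal for those), a call generally amounts to an authenticated HTTP GET:

// Node.js sketch of calling a REST monitoring API with an API key.
// The URL below is a placeholder; consult the documentation portal
// for the actual endpoints, paths, and parameters.
var https = require('https');

var apiKey = process.env.WPM_API_KEY;  // the key from your 'My Account' tab

https.get('https://api.example.com/monitors?apikey=' + apiKey, function (res) {
    var body = '';
    res.on('data', function (chunk) { body += chunk; });
    res.on('end', function () { console.log(body); });  // monitor configurations
});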

 

For more information on how to use the APIs along with different language coding examples, visit Neustar’s API Documentation Portal.

Categories: Load & Perf Testing

Extending VuGen 11.5 with custom add-ins

Performance Testing with LoadRunner Focus - Fri, 06/08/2012 - 11:29
This is my presentation on the exciting potential of VuGen Add-ins that I delivered at HP Discover 2012 (Las Vegas) on June 7th. The original PowerPoint file can be downloaded from SlideShare. Presentation abstract: HP has redesigned the latest version of VuGen from the ground up with a new, flexible architecture. For the first time, [...]
Categories: Load & Perf Testing

dynaTrace AJAX Edition 3.5 with Beta support for Internet Explorer 10 and Firefox 10

With the new release of the dynaTrace AJAX Edition you get Beta support for Internet Explorer 10 and Firefox 10, so you can test the performance of your web sites with the latest available browsers from Mozilla and Microsoft. The new dynaTrace AJAX Edition also helps you prepare yourself for the future. With the support [...]
Categories: Load & Perf Testing

History of Python - Development Visualization - Gource

Corey Goldberg - Mon, 06/04/2012 - 17:07

I made a new visualization. Have a look!

History of Python - Gource - development visualization (august 1990 - june 2012)
[HD video, encoded at 1080p. Watch on YouTube at the highest resolution possible.]

What is it?

This is a visualization of Python core development. It shows the growth of the Python project's source code over time (August 1990 - June 2012). Nearly 22 years! The source code history and relations are displayed by Gource as an animated tree, tracking commits over time. Directories appear as branches, with files as leaves. Developers can be seen working on the tree at the times they contributed to the Python project.

Video:
Rendered with Gource v0.37 on Ubuntu 12.04

Music:
Chris Zabriskie - The Life and Death of a Certain K Zabriskie Patriarch

Repository:
cpython 3.3.0 alpha, retrieved from Mercurial on June 2, 2012

For more visualizations and other videos, check out my YouTube channel.

Categories: Load & Perf Testing