Latest Blog Posts in Web Performance Management
URL: http://community.neustar.biz/community/wpm?view=blog

Scheduled Maintenance for Jive Community (January 26th, 2013)

Wed, 01/23/2013 - 14:40

Please note that Neustar's Community website will be undergoing some routine maintenance on Saturday, January 26th, 2013 between 10 PM and 3 AM.  This may result in the WPM Community becoming slow or unavailable during that time period.  The WPM platform will not be impacted by this maintenance. 

 

And remember, if you're using WPM to monitor a site or application that is undergoing maintenance, be sure to set up maintenance windows to prevent planned downtime events from impacting your performance metrics.

Categories: Load & Perf Testing

Creating a Maintenance Window

Wed, 01/23/2013 - 14:08

In most cases you cannot predict when an outage will occur or when a site or application will slow down; that is why the value of external monitoring lies in its constant execution.  But sometimes you know there will be an outage, like that time you had to replace a faulty hard drive.  And sometimes you know there will be performance degradation, like on Saturday nights at 2 a.m. when the backup process kicks in, syncing gigabytes of data between production and backup.  As a consumer of monitoring data you likely do not want to be alerted every time these planned events occur, and when reporting data up to your manager or out to your customers you probably don't want to include planned downtime or have it impact your SLAs.  This is where maintenance windows come in handy.

 

Maintenance windows allow you to turn off alerting, or both alerting and monitoring, for a subset of monitoring services.  Setup is simple and consists of the following four steps:

 

1. Select Maintenance Windows from the Monitoring menu 

Under the Monitoring tab, select the Maintenance Window option.  Maintenance windows apply only to monitoring; they cannot be applied to load testing or real user measurements.  Think of a maintenance window as a distinct object on the WPM platform (like a script, a monitoring service, or an alert policy):

 

 

2. Click Create Maintenance Window

The resulting screen shows all existing maintenance windows that have been configured (the screenshot below shows a WPM account with no maintenance windows configured).  Everything related to maintenance windows, including creating, editing, and deleting them, can be done from this screen.  We want to create a new one, so we click Create Maintenance Window:

 

 

3. Configure the new maintenance window

Here's the heart of it, configuring the maintenance window.  The fields are:

 

  • Title - It's always good to give a descriptive title so that when you're looking at a list of maintenance windows you know what each one was for.  In the screenshot below I put Backup Running so that I know this maintenance window is meant to stop alerts from being generated when my backup routine is running.
  • Monitors - This is why I suggested thinking of maintenance windows as distinct objects on the WPM platform.  A maintenance window is not a configuration of any individual service.  Instead, it is a unique object on the WPM platform that takes one or more monitoring services as a setting.  Any services we select here will have their monitoring and/or alerting turned off during the maintenance window.
  • Start - This is the start date and time for the maintenance window.  The 17th was a Thursday, so if we select Weekly later on, our maintenance window will repeat every Thursday.
  • Duration (hours and minutes) - Use these two fields to set the length of the maintenance window.  In the example below I've set the duration to 1 hour, so my maintenance window will end at 11 pm.
  • Repeats - How often the maintenance window will repeat.  The most commonly used options are One time only and Weekly, but Monthly and Yearly are available as well.
  • Monitoring - You can choose to turn off alerting only, or monitoring & alerting.  I usually turn off alerting only since I still want to gather metrics during the maintenance window.  If you're looking to preserve SLA metrics I'd recommend turning off monitoring and alerting.
  • Description - Describe the maintenance window in a little more detail.  It's like commenting your code: it's a pain, it's optional, but two months later you'll wish it was there!
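
Since the post stresses thinking of a maintenance window as its own object, it can help to picture the example above as a simple settings object. The sketch below is purely illustrative: the field names mirror the UI form (not the Maintenance Window API schema), and the monitor names are hypothetical.

// Illustrative only - mirrors the UI fields, not an actual API payload.
var backupWindow = {
    title: 'Backup Running',
    monitors: ['homepage-monitor', 'checkout-monitor'],  // hypothetical monitoring services
    start: '2013-01-17 22:00',                           // Thursday at 10 pm
    durationHours: 1,
    durationMinutes: 0,
    repeats: 'Weekly',                                    // One time only, Weekly, Monthly, or Yearly
    monitoring: 'alerting only',                          // or 'monitoring & alerting'
    description: 'Suppress alerts while the nightly backup syncs data between production and backup.'
};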

 

Fill all those fields out and click on Save:

 

 

4.  Confirm that the maintenance window is configured

That's it, you're done!  Always make sure your maintenance windows are configured and are in a pending status.  The maintenance windows will be displayed per service:

 

 


Ongoing monitoring is critical to capturing data that supports SLAs, alerts you to outages, and reveals performance trends.  Sometimes, though, you need to exclude data or alerts associated with planned downtime.  Maintenance windows let you define periods of time during which monitoring and/or alerting is turned off for these planned downtime events.

Categories: Load & Perf Testing

General FAQ for Getting Started on Web Performance Management

Tue, 01/22/2013 - 11:54

To get you up and ready to use Web Performance Management, we gathered some commonly asked questions about website performance and the technical features/benefits.

Categories: Load & Perf Testing

Scheduled Maintenance for Tuesday, January 22, 2013

Mon, 01/21/2013 - 18:32

We will be performing maintenance on Tuesday, January 22, 2013 at 10:00 PM PST.  The expected downtime will be 10 to 20 minutes.

 

During this maintenance window you will not be able to create new load tests, check your monitoring balance, or sign up for new services.

 

Monitoring and alerting will not be affected.

 

If you have any concerns, please contact us at .

 

Thank you!

Categories: Load & Perf Testing

What's New | 14 January 2013

Mon, 01/14/2013 - 09:53

We are excited to release a new Monitoring feature - Grouping. You can now group monitors by type and location, or create customized groupings to suit your specific business needs. Grouping is especially useful if you have many monitors in your account, as it allows you to quickly identify and view the status of your most important monitors.

 

This release also included some enhancements to Real User Monitoring, so be sure to visit the Real User tab in your account.

 

Also, as a reminder, check out our API library for:

  • Instant Test API
  • Monitoring API
  • Maintenance Window API
  • Real User Measurements API
  • Load Testing API

 

We continually add and improve the API library with every release.

Categories: Load & Perf Testing

Scheduled Maintenance for 01/12/13 from 10PM PT to 3AM PT

Thu, 01/10/2013 - 15:07

Hello!

 

We will be performing maintenance on our Community site on 01/12/13 from 10 PM to 3 AM Pacific.

 

During this maintenance window, the Community site may experience brief intermittent loss of connectivity, but the WPM application will be fully available.

 

For more information or questions, please contact us at .

Categories: Load & Perf Testing

Alerting Policy: Alert Escalation

Thu, 01/10/2013 - 15:01
Introduction

Performance degradation and outages are bound to occur with web applications and web sites; the best we can hope for is quick resolution, ideally before our customers even notice.  We give lots of thought to minimizing the number of such events by improving our application's code base, investing in hardware, preemptive monitoring, and so on, but we can also give some thought to minimizing how long these events last when they do happen.  The Intelligent Alerting (IA) platform is well suited for this since it allows us to develop scripts that define how a monitor will react to the performance data generated.  The most powerful way to minimize this time would be to develop IA scripts that fire off self-healing events (as an example, if you're using Amazon for computing power you could start up more EC2 instances to handle temporary load increases).  That is a more involved and complex approach and will be the topic of future blog posts.  Another approach, one I consider much easier to implement, is to escalate alerts to additional contacts as an error persists.  To do this we need to make two modifications to the default IA script:

 

  • Track incident start time
  • Send alert to contact(s) based on duration of incident

 

Typically I send an initial alert to my personal email account, but since my email is not pushed to my cell phone I'll usually follow it up with a text message.  My phone is pretty quiet, though, and usually will not wake me up.  So if the error persists I send the alert to a pager, which has a very annoying (and much louder) ring.  If I hear that pager going off I know that an issue has been validated by WPM and has persisted over several monitoring intervals.  Alerts don't have to be sent to just me; they can (and should) be sent to others within the organization who have the ability to troubleshoot and address issues.  Let's look at the modifications that were made to the default IA script.

 

The concept and attached script was developed by Pauljames Dimitriu, an Engineer at Neustar.

 

Track incident start time

When a sample is returned by the WPM platform and processed by the IA system we start by tracking the time the error occurred:

 

if (error) {
    // Set the time of the initial incident (or initialize)
    if (!initialIncidentTime)
        initialIncidentTime = new Date().getTime();
} else {
    initialIncidentTime = 0;
    monitorInfo.incident = {};
}

monitorInfo.failures = failures;
// Save the incident time as part of the monitoring info set
monitorInfo.initialIncidentTime = initialIncidentTime;

 

We're saving the current time when an error occurs and then adding that information to our monitorInfo data structure.  If the sample is not an error then we reset the initial incident time to 0.  There's also a helper function for converting the time into an interval.  Since monitoring works on an interval basis, it's more consistent to look at the time period as an interval (the error has persisted for 5 minutes) instead of as a timestamp (the error has persisted since 12:30:00 PST).  The helper function takes the initialIncidentTime (a timestamp) and converts it to an interval:

 

function getIncidentTimeInMinutes(incidentTimestamp) {
    // calculate how long the incident has been open, in whole minutes
    MINUTE_IN_MILLISECONDS = 60000;
    currentTime = new Date().getTime();
    return parseInt((currentTime - incidentTimestamp) / MINUTE_IN_MILLISECONDS);
}
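
As a quick illustration with made-up values, an incident that started seven and a half minutes ago resolves to an interval of 7, because parseInt truncates the fractional minute:

// Hypothetical example: pretend the incident started 7.5 minutes ago.
var sampleIncidentStart = new Date().getTime() - (7.5 * 60000);
getIncidentTimeInMinutes(sampleIncidentStart);   // returns 7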

 

Send alert based on incident duration

 

Now all we need to do is decide who the alerts should be sent to.  We do this by establishing a JSON object containing arrays of alert contacts (our alert hierarchy) and then creating a series of if statements to determine when each group should be alerted:

 

    var emailAddresses = ['contact@email.com'];
    var emailAddressArray = {
        'regular': ['contact_regular@email.com'],
        'noc': ['noc_esc@email.com'],
        'mgr_it': ['mgr_it@email.com']
    };

    // Send alert based on how long (minutes) the error has persisted.
    incidentTime = getIncidentTimeInMinutes(initialIncidentTime);
    if (incidentTime >= 5)
        emailAddresses = emailAddresses.concat(emailAddressArray['regular']);
    if (incidentTime >= 10)
        emailAddresses = emailAddresses.concat(emailAddressArray['noc']);
    if (incidentTime >= 15)
        emailAddresses = emailAddresses.concat(emailAddressArray['mgr_it']);

    neustar.email.sendHtmlMessage({'to': emailAddresses,
                                   'subject': subject + " been alerting for " + incidentTime + " minutes",
                                   'body': htmlMessage });

 

As an error persists, alerts are sent to all previously alerted contacts as well as any new contacts whose thresholds have been met.  For example, if our error persists for 10 minutes we will continue to alert the group of regular contacts and we'll also start alerting the group of noc contacts.
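
If you expect the hierarchy to change often, one possible refactoring (a sketch only, not part of the default policy) is to describe the tiers as data and loop over them, reusing the getIncidentTimeInMinutes helper and contacts from above:

// Sketch: escalation tiers expressed as data instead of a chain of if statements.
var escalationTiers = [
    { afterMinutes: 5,  contacts: ['contact_regular@email.com'] },
    { afterMinutes: 10, contacts: ['noc_esc@email.com'] },
    { afterMinutes: 15, contacts: ['mgr_it@email.com'] }
];

var emailAddresses = ['contact@email.com'];
var incidentTime = getIncidentTimeInMinutes(initialIncidentTime);
for (var i = 0; i < escalationTiers.length; i++) {
    // Every tier whose threshold has passed stays on the distribution list.
    if (incidentTime >= escalationTiers[i].afterMinutes)
        emailAddresses = emailAddresses.concat(escalationTiers[i].contacts);
}

Adding a new escalation level then only requires a new entry in escalationTiers rather than another if statement.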

 

Conclusion

Alert escalation helps reduce the amount of time our web applications are down or performing poorly by ensuring that the people who can troubleshoot the issues are engaged at the right time.  Implementing an alert escalation policy requires that we track the time a series of consecutive errors started, create an escalation hierarchy, and add logic to send alerts to the contacts in the hierarchy based on how long an error has persisted.

Categories: Load & Perf Testing

Web Performance Meetup in San Diego - February 20, 2013, presents "Best Practices for Load Test planning, scripting and execution"

Wed, 01/09/2013 - 16:50

Join us for a fun and casual evening to discuss all things performance!

 

Meet with other web site system administrators, developers, designers and business people who are interested in making their sites work fast to get a better user experience, lower abandonment rates, and make more money.

 

Description:

Load testing today's never-ending variety of web sites and applications can be confusing, but testing is imperative to ensure optimal performance.  If you know you want to test but aren't sure where to get started, or you're looking to improve your existing testing plans, this is the meetup session for you.  Our speaker, Steve Thurner, is a Professional Services Engineer with expertise in JavaScript and Selenium, who executes load tests for a variety of customers every day. He will be sharing his experiences with best practices for planning, scripting and executing load tests.  Plus, we'll be selecting a meetup participant from the audience, and testing and recommending site optimizations for them, which will be shared in a future hands-on session.

 

About the speaker:

Steve has been in the IT industry since 1995. He has a broad and diverse background beginning with coding in COBOL, building web sites, building a handheld application, to helping companies plan for new software installation and most recently conducting load testing of web sites. In his free time Steve is just as passionate working to raise money for the American Diabetes Association and every April takes on the challenge of riding in San Diego’s Tour de Cure as a Red Rider.

 

When:

Wednesday Feb 20

5:30-6:30 – Networking plus come join us for Happy Hour (food and drinks served!)

6:30-7:30 – Presentation, Q&A

 

Where:

Neustar

3570 Carmel Mountain Road, Suite 400

San Diego, CA 92130

 

Please RSVP at our Meetup Page.

 

Invite your friends and colleagues for a night of learning and networking... plus an opportunity to win lots of cool raffle prizes. Reminder: Bring your business cards for the raffle drawing!

 

Get Involved! Passionate about web performance and interested in speaking?  Have a customer that's done something amazing with their site speed? Something you're dying to learn? Let us know! Questions? Email Connie Quach

 

San Diego Meetup Sponsors (Thank you!)

O'Reilly Velocity

Neustar

 

Not in San Diego? Attend a web performance meetup in one of these locations:

Los Angeles

Bay Area

Austin

Indianapolis

Atlanta

Boston

New York

 

We hope to see you soon!

Categories: Load & Perf Testing

Scheduled Maintenance for Thursday, January 10, 2013

Mon, 01/07/2013 - 14:50

We will be performing maintenance to our back-end functions in our application on Thursday, January 10, 2013 from 10pm PST to 11pm PST.

 

During this maintenance window, you will not be able to create new load tests, check your monitoring balance, or sign up for new services.  

 

Monitoring and alerting will not be affected.

 

If you have any concerns, please contact us at .

 

Thank you!

Categories: Load & Perf Testing

Investigating your website's client-side JavaScript errors with RUM

Wed, 01/02/2013 - 11:22

Real User Measurement (RUM) is a great tool for reporting on the performance of your website from your user's perspective. An interesting feature is the JavaScript error reporting: RUM captures the JavaScript errors triggered in the browser when the page is loaded (see the announcement Client Error Capture with RUM). Since releasing that feature, we have added new ways to analyze that information. In this post I will go over a couple of use cases that highlight those additions to our RUM UI.

 

Let's start from the Time Series graph. You quickly notice an unusual number of JavaScript (JS) errors and decide to drill down. The UI shows you exactly the individual measurements that reported the JS errors.

 

Copy the JS error message and paste it into the Analysis tab's "JS error" filter field. Then group by URL and hit the Apply button.

The result tells you that this JS error only happened for that one URL.

 

Then you might wonder if it happened for all browsers. Very easy to find out with Group By Browser.

The result tells you that this JS error only happened with Internet Explorer 9.

 

Then you might suspect the problem comes from a single user's browser that, for some reason, triggers the error. Quickly find out whether this is an isolated error by grouping by ASN.

The result tells you that this JS error happened for more than one IE9 user. At this point you should probably try to reproduce the error using your own copy of IE9 and pointing to that URL. Chances are you would see the same JS error and can start working on a fix.

 

Note that since we do not collect personal information like cookies, we cannot confirm user uniqueness with certainty. However, using the ASN should be sufficient for this use case. You could also group by country or region to confirm this information.

 

Grouping By JavaScript Error

 

The Analysis tab allows you to group by JS error. This means that for a given date range you can quickly find out how many JS errors your users saw, and which ones were the most frequent. Remove the content of the JS error filter field and hit the Apply button.

 

Investigating the most frequent one, you enter either the JS error message or the resource name it occurred in into the filter field, and group by browser.

Looks like a Firefox 17 issue...

 

Let's see if this is a user specific issue using grouping by ASN like we did above.

Yes! We can be pretty confident that this error only happened for a single user. It's up to you to decide if this issue is worth investigating further considering that its impact is minimal.

 

Conclusion

The Analysis tab might feel a little overwhelming at first, but after playing with it for a few minutes you can see the reporting capabilities this tool offers. Investigating JS errors is just one example of what can be accomplished with it.

Categories: Load & Perf Testing

What's New | January 2013

Fri, 12/28/2012 - 16:53

Happy New Year!

 

Hope you had a great holiday season. We look forward to kicking off the New Year with lots more product enhancements and updates to help make your product experience a great one. As you know, it's our priority to listen to your feedback, gather and interpret survey results, and keep a close eye on our Community site to see what you all are saying.

 

In this latest release, we focused on adding major enhancements to our Monitoring and Real User Measurements products along with some UX enhancements on our Support pages. We also made improvements to our API library and reporting capabilities.

 

For Real User Measurement, we added the ability to have multiple beacons within a single account

 

By adding multiple RUM beacons in your account, you now have the opportunity to view RUM results from multiple domains and environments with your data organized and separated by each web property.

Incorporated YSlow to analyze performance data in Monitoring


To help you get closer to identifying the root cause, we incorporated a new tool within the Monitoring app: YSlow. Previously, identifying issues involved some guesswork; you had an HTTP Archive (HAR) file with lots of data to sift through. Now, with YSlow, you can go a step further in isolating problems with individual monitoring samples, which can save you time.

 

What is YSlow? YSlow analyzes web pages and why they're slow based on Yahoo!'s rules for high performance web sites.

 

Included a Date Picker for custom date ranges in Monitoring Graphs


A minor but important feature, we included a date picker to your monitoring reports and graphs. The Date Picker allows you to click on Start and End Dates on the control bar for your monitoring graphs.

 


Added more field selections for your Monitoring List View - "Interval" and "Number of locations"


For more details about why you should enable the "Interval" and "Number of locations" fields, see our post, Defining Monitoring Interval, Sample Rate, Locations, and Units.

 

Enhanced Support section for easy access to more “Help” information


When we launched, our Support section was a bit light, but with this release we added more detail to help you navigate and find answers to your "Help" questions. Check out our Overview page, highlighting the Top Articles trending on our Community site. We also list the Support Packages available to you, and a What's New section notifies you of our latest updates to the product.

 

 

Just a reminder, check out our API library for:

  • Instant Test API
  • Monitoring API
  • Maintenance Window API
  • Real User Measurements API
  • Load Testing API

We continually add and improve this library with every release.

Categories: Load & Perf Testing

Defining Monitoring Interval, Sample Rate, Locations, and Units

Fri, 12/21/2012 - 16:48

We recently updated our monitoring dashboard to include three new columns that show how frequently we collect monitoring samples and how many units you consume each day.

 

 

  • Daily Unit Usage - Displays how many units a specific monitor consumes each day
  • Sample Rate - Displays how frequently the monitor will sample
  • # of Locations - Displays how many locations the monitor is sampling from

 

 

Below is a screenshot of the new columns enabled and sorted by Daily Unit Usage descending:

 

Understanding the difference between Interval, Sample Rate, Locations, and Units:

 

When you create or edit a monitor, you are given the opportunity to select the monitoring interval and the number of locations from which to take samples. Together, the monitoring interval and number of locations dictate how frequently a sample is taken.  For example, in the screenshot above you can see that for the first monitor we have selected 101 locations and a monitoring interval of 1 minute.  This means that we will be taking 101 samples each minute (one from each monitoring location), which translates into roughly two samples a second. The daily unit consumption is very high for this monitor because we are sampling so frequently, consuming 581,760 units a day for this one monitor.  Monitoring unit consumption is calculated by multiplying the number of samples by the number of script units.

 

101 (# of locations) x 60 (minutes per hour) x 24 (hours per day) x 4 (units per sample) = 581,760

 

In most cases, monitoring from 101 locations once a minute is excessive.  A more realistic use case would be to monitor from 3 locations with a 10-minute interval, which results in a sample taken roughly once every 3.3 minutes. To further illustrate, a script that consumes 16 units per sample, monitored this way, will consume 6,912 monitoring units per day.
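
If you want to experiment with these trade-offs, the small sketch below (not part of WPM; the numbers simply mirror the examples above) computes daily unit consumption from the number of locations, the interval, and the script's units per sample:

// Daily units = locations x samples per location per day x units per sample.
function dailyUnitUsage(numLocations, intervalMinutes, unitsPerSample) {
    var samplesPerLocationPerDay = (60 / intervalMinutes) * 24;
    return numLocations * samplesPerLocationPerDay * unitsPerSample;
}

dailyUnitUsage(101, 1, 4);   // 581,760 units/day - the first monitor above
dailyUnitUsage(3, 10, 16);   // 6,912 units/day - the more realistic configuration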

Categories: Load & Perf Testing

Alerting Policy: Send Alerts As Text Messages

Mon, 12/17/2012 - 10:33
Introduction

Intelligent Alerting policies will send alerts via email by default.  If you'd like to send alerts as text messages to your cell phone, you can always use your cell provider's SMS gateway to deliver the text message to a phone.  As an example, here are instructions on how to email a text message to an AT&T wireless device.  This will help you respond quickly to alerts if you can't (or refuse to) set up email push notifications on your phone.  Sending alerts as text messages is pretty simple; we need to do the following:

 

  • Modify our contact list.
  • Format the alert so it can be read.

 

NOTE: This blog post will cover only the code necessary to add alerts as text messages.  The attached script causes text messages as well as email messages to be sent.

 

Modify the contact list

A list of contacts to send alerts to is defined in all default alerting policies.  This is done by creating an array of contacts (even if you only have one contact):

 

var myContacts = ['tyler.fullerton@neustar.biz', 'support@neustar.biz'];

 

Later in my alert policy I would pass that array as a parameter to the sendPlainMessage method:

 

neustar.email.sendPlainMessage({'to': myContacts,
                                'subject': 'WPM Alert!',
                                'body': 'Site is down - ' + monitorName});

 

All I have to do then is add the address of my phone.  Using the AT&T example I would do something like this:

 

var myContacts = ['tyler.fullerton@neustar.biz', 'support@neustar.biz', '8585551234@txt.att.net'];

 

I can also set up separate contact groups, which would allow me to continue sending a well-formatted email to a list of email addresses while also sending minimalist versions of the alert as text messages.
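
Here's a minimal sketch of that split, reusing the sendHtmlMessage and sendPlainMessage calls shown in these posts (the addresses and message variables are placeholders):

// Separate groups: rich HTML alert to email contacts, terse plain text to SMS gateways.
var emailContacts = ['tyler.fullerton@neustar.biz', 'support@neustar.biz'];
var smsContacts = ['8585551234@txt.att.net'];

neustar.email.sendHtmlMessage({'to': emailContacts,
                               'subject': 'WPM Alert!',
                               'body': htmlMessage});      // full formatted alert

neustar.email.sendPlainMessage({'to': smsContacts,
                                'subject': 'WPM Alert!',
                                'body': 'Site is down - ' + monitorName});   // short and SMS-friendly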

 

Format the alert

If we're going to send alert messages as texts, then we need to be aware that the default alert message format is going to look awful as a text.  The default message template has more formatting content than actual content displayed!  It would likely be delivered to us in dozens of separate messages, wouldn't be formatted, and wouldn't be interpreted (so we'd have to read through a bunch of HTML to find our alert).  It's very important that we format the message to be simple and to the point.  In this example I used the sendPlainMessage method instead of the sendHtmlMessage method to send plain text without formatting.

 

Conclusion

Sending alerts as texts to a cell phone can be helpful in cases where you don't have your email set up to push to your device.  Configuring an alert policy to do this is simple.  Make sure your cell provider offers an SMS gateway and then start with a default alert policy.  Modify the list of contacts that the policy will send to and use sendPlainMessage instead of sendHtmlMessage for sending the message.

Categories: Load & Perf Testing

Scheduled Maintenance for 12/12/12 from 6:00PM PT to 6:30PM PT

Mon, 12/10/2012 - 13:50

Hello!

 

We will be performing maintenance to our Community site on Wednesday, 12/12/12 between 6:00PM PT to 6:30PM PT.

 

During this maintenance window, the Community site will be unavailable, but the WPM application will be fully available.

 

For more information or questions, please contact us at .

Categories: Load & Perf Testing

What's New | December 2012

Fri, 11/30/2012 - 17:08

Over the past several months, we have been gathering your feedback through survey results, reading your discussion posts on Community, and listening to our support teams.  We take your comments to heart and do our best to help make your product experience a great one.

 

With our most recent product release, we covered a full range of UI/UX enhancements, new features, and bug fixes.  See a full listing of these features below, and again, please complete our survey or chat with us on Community if you have any additional product enhancement ideas you’d like to share!

 

Overall User Experience Design Enhancements


Our new look offers a fresh and simplified perspective on how you view all of your accounts and services, including monitoring, load testing, and/or real user measurements activity. 

 

Heading over to Scripting, you will notice overall User Interface enhancements  with a new look to the Overview page and the addition of your list of Scripts. You can now easily search your scripts, upload or create a new script, and re-validate a script all from the same page.

 

 

 

 

Share your monitoring and real user measurement results - without the requirement for your recipients to log in!

 

To share graphs, first select the monitors you want to graph, then hit the Graph button. From this window you can share the graph with your stakeholders without having them log in to the account; simply copy the URL from the pop-up window and send it to them.

 

Here's the view in Monitoring:

 

 

Here's the view in Real User Measurements:

 

 

 

More Features for Real User Measurements and Monitoring


You can now visually see Error and Apdex alongside your monitoring graphs, allowing for quick root cause analysis. RUM Average Load Time, Number of Hits, Apdex score, and JavaScript Error Low Profile are all displayed within the Monitoring Graph-Log page.

 

Time Series

Not only can you share the time series graph with your stakeholders, but the time range has also been enhanced to include 90 days and 1 year.

 

Additional data to help make quick decisions

Group by and filter JS errors:  Use the new filtering functionality to quickly find out which are the most common JS errors on your website, and when each error occurs (either per URL - see screenshot, or per browser, country, etc.). Note that we only capture JS errors happening up to 1 second after the browser's onload event. Those errors are the most likely to affect your users' experience when the page is loaded.

 

Web Beacon:  You may now select Maximum Load Time and Collection Sampling Preferences to improve your results

 

For additional information, check out the latest post from the lead developer of this product.

 

 

Invoice Billing versus Credit Card Purchases

For larger purchases, we have streamlined the process to allow you to request invoice billing during checkout rather than paying by credit card.

Categories: Load & Perf Testing

Real User Measurements - End of November release

Fri, 11/30/2012 - 10:06

We are  excited to announce the latest features of our Real User Measurements product.

 

UI - Time Series

 

You can now share the Time Series graph with anyone. Also, the time range has been enhanced to 90 days and 1 year.

 

 

UI - Analysis

 

You can now group by and filter JS errors. This allows you to quickly find out which are the most common JS errors on your website, and when each error occurs (either per URL - see screenshot, or per browser, country, etc.). Note that we only capture JS errors happening up to 1 second after the browser's onload event. Those errors are the most likely to affect your users' experience when the page is loaded.

 

 

 

UI - Web Beacon

 

Two preferences have been added:

- Define the maximum load time allowed in a measurement (default is 300 seconds). Depending on how long your page load time tends to be, this will help exclude outliers that might skew your average load time. But do not set it too low or you may exclude measurements that are worth keeping.

- Set the collection sampling rate (default is 100%). This is completely optional. At this point we recommend that you keep this setting at 100%.

 

Web Beacon

 

RUM web beacons now call our servers using the same protocol (HTTP or HTTPS) as your web site's URL; previously it was HTTPS only.

 

 

 

Enjoy!

Categories: Load & Perf Testing

Scheduled Maintenance for 12/01/12 - 10 PM PT - 3 AM PT

Wed, 11/28/2012 - 13:27

We will be performing maintenance on our Community site from Saturday, 12/1 at 10 PM PT to Sunday, 12/2 at 3 AM PT.

 

During this maintenance window, there should be no impact to you but if there is, please contact us at .

Categories: Load & Perf Testing

Intermittent Authentication & Registration Failures, Thursday November 15th

Fri, 11/16/2012 - 01:54

WPM Customers,

 

We experienced intermittent authentication and registration failures on the Web Performance Management platform due to issues in backend services.  Users experienced problems logging in; those issues resolved within 10 to 20 minutes.  A similar problem affected account registrations.  For affected customers, we will provision the accounts and follow up with you directly via email about your new services and platform trial.

 

We apologize for any inconvenience.

Categories: Load & Perf Testing

Scheduled Maintenance for Thursday, November 15

Thu, 11/08/2012 - 14:23

We will be performing maintenance to our back-end functions in our application on Thursday, November 15, 2012 from 10pm PST to 11pm PST. 

 

During this maintenance window, you will not be able to create new load tests, check your monitoring balance, or sign up for new services.   

 

Monitoring and alerting will not be affected.

 

If you have any concerns, please contact us at .

 


Categories: Load & Perf Testing

What's New | November 2012

Wed, 11/07/2012 - 14:03

Improved UX design for easy monitoring job creation


Since the launch of our WPM platform, we have listened to your feedback about how you can create a monitor simply and quickly, and we are excited to announce the latest design and workflow.

 

  • The new monitoring overview page allows you to quickly identify what is happening with your monitors and your activity summary

  • Easily create a new monitoring job with inline edits to your scripts and alerts

 

Receive alerts based on JavaScript errors and load-times

 

If you have not tried out RUM, this might be a great time to check it out.  And if you are using it, you can now head over to the analysis tab within the product to view detailed reporting for JavaScript errors and load times. With this information, you can set up alerts so you know what is happening on your site at all times.

 

Not only can you get alerts based on JavaScript errors or high load times, but you can also view the information in a graph that displays JavaScript error counts beneath the time series and identifies statistical movements over time.

 

 

If you have more questions about Real User Measurements, check out our Community page to get your answers.

Categories: Load & Perf Testing