Feed aggregator

How Home Energy Efficiency Relates to Web Performance Responsibility

Perf Planet - 12 hours 48 min ago

Remember NetZero? I was a customer way back in the day. I also remember some Wall Street analyst commenting on the company when it went public during the dot com bomb: “Why would you name a company after its bottom line?” No idea who said that, but can you believe the company still exists?!? Wow! They’ve pivoted. A few times.

But today, the term “net zero” has a totally different meaning, and applies mainly to buildings and more specifically homes that will, over the course of time, use net zero energy. These houses are incredibly efficient and utilize the latest technology in geothermal, hydroelectric, and solar energy generation, capture, and storage. Yes, we’re getting around to web performance, I promise.

Image courtesy of Wikipedia

Here goes: when does the builder and/or buyer ask the question, “how efficient will this home be?” If they’re building a net zero home, by definition, they don’t ask that question. If they do ask it, they are new to the concept, and they ask the question before the first fistful of dirt is ever moved. Now, riddle me this, NetZero Man: what if the builder and/or buyer did ask that question halfway through the building process? Ridiculous, right?

But that would be an improvement on when the question of web performance gets asked, because it usually comes out when someone says or hears this:

“Wow, our web site is so slow!”

So, whose job is web performance anyway? Who do you look at or point to when someone says, “Wow, our web site is so slow!”? In most cases, there are four possibilities: developer, designer, QA, IT/Ops. Whose job is it? Well, it’s everyone’s job, but the more important question is not “who,” but “when is web performance?”

No, that doesn’t make any grammatical sense at all: “when is web performance?” But that’s the appropriate question, in the same manner as it’s appropriate to ask “how efficient will it be?” when you are building a house. We should be building the entire website experience to be as fast as possible. Unfortunately, for most startups and small businesses, web performance plays second, if not third, fiddle to functionality and design.

While that approach may be the most pragmatic for getting an MVP (minimum viable product) off the ground, ultimately the startup has to be able to compete against the big competitors who have multi-million dollar budgets. And the big companies with multi-million dollar budgets have to be on guard against faster, more agile startups that have placed big crosshairs on the logo of the entrenched leader in the market.

All this is to say that web performance must be ranked just as high as design and functionality when it comes to the priorities of the web team. When it isn’t, someone is going to have to pull double duty doing lots of troubleshooting to figure out what’s degrading the performance of the website or app, all while wondering, “whose job is web performance anyway?”

If you are interested in things like performance responsibility and designing fast sites from day one, then you will love Zoompf. We have a number of free tools that help you detect and correct front-end performance issues on your website: check out our free report to analyze your website for common performance problems, and if you like the results consider signing up for our free alerts to get notified when changes to your site slow down your performance.

The post How Home Energy Efficiency Relates to Web Performance Responsibility appeared first on Zoompf Web Performance.

A Ping Pong Defect

QA Hates You - 15 hours 34 sec ago

While sitting in a restaurant, I saw that the closed captioning on the sports program was frequently emitting a string of random characters in the speech:

Forensically speaking, we could assume that this bug occurs in one of the following places:

  • The software transcribing the speech to caption text. That is, when the software encounters a certain condition, it puts a cartoon curse word into the data.
  • The network transmitting the information. That is, the transmission of the data introduces garbage.
  • The device displaying the transmitted information. That is, the television or satellite box that introduces the captions into the picture inserts the junk every two lines or so.

Okay, I’ll grant you the fourth option: That the broadcasters were actually cursing that much. However, given that the FCC has not announced fines daily, I’m willing to say that it’s nonzero, but unlikely.

The beauty of a defect that could occur almost anywhere, between disparate parts of the product and across different teams and technologies, is that it could ultimately be nobody’s fault. Well, if you ask one of the teams, it’s one of the other team’s fault.

You know, a little something squirrelly happens, you log a defect, and the server, interface, and design teams spend megabytes reassigning the defect to each other and disclaiming responsibility. It drives me nuts.

So what do you do? You find a product owner or someone who’ll take charge of it and pursue it across fiefdoms or who’ll put the screws to the varied factions until it gets fixed.

Because everybody’s got something they’d rather be working on than somebody else’s problem. Even if it’s everybody’s problem.

Categories: Software Testing

Monolith to MicroServices: Key Architectural Metrics to Watch

Through my Share Your PurePath program I can confirm that many software companies are moving towards a more service-oriented approach. Whether you call them services or micro-services doesn’t really matter. If you want to get a quick overview of the concepts, I really encourage you to read Benjamin Wootton’s blog and the comments […]

The post Monolith to MicroServices: Key Architectural Metrics to Watch appeared first on Dynatrace APM Blog.

Categories: Load & Perf Testing

WebPageTest Found a Performance Problem? Fix It Using Zoompf HAR Import!

Perf Planet - Tue, 08/25/2015 - 08:44

The HAR (aka “HTTP Archive”) file format is a JSON representation of all the content loaded by your browser to render a particular URL. HAR files contain lists of resources requested (CSS, JavaScript, images, etc.), the order they were requested, how long it took to download them, and more. The HAR file format is evolving to become the universal communication language between web performance tools. (MaxCDN has a good overview article here).
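To make that structure concrete, here is a rough Node.js sketch (illustrative only; the field names follow the published HAR spec, and the file name is a placeholder) that lists every resource recorded in a HAR file along with its load time:

// list-har.js - summarize the resources recorded in a HAR export
const fs = require('fs');

const har = JSON.parse(fs.readFileSync(process.argv[2] || 'example.har', 'utf8'));

// Every HAR file carries a log.entries array: one entry per request the browser made.
for (const entry of har.log.entries) {
  const url = entry.request.url;
  const type = entry.response.content.mimeType; // e.g. text/css, image/png
  const ms = Math.round(entry.time);            // total time spent on this request, in ms
  console.log(ms + 'ms\t' + type + '\t' + url);
}

Run it as node list-har.js mysite.har against an export from any of the tools described below.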

What you may not know, though, is many of the performance tools you use today can export HAR files as outputs.

For example in Chrome: load any URL, right click anywhere on the page to Inspect Element, click the Network tab, and then right click again to Save as HAR with Content.

Or in WebPageTest, simply click the Export HTTP Archive option in the upper-right corner after you run a new test.

And of course many other tools like Firebug, Zoompf and Rigor also export HAR files.

Why is this important?

While .har files (and the correspondingly rendered waterfall charts) are quite valuable in identifying performance bottlenecks with your page load resources, they suffer from one key drawback: they don’t help you actually fix the problems. This is where Zoompf comes in.

HAR File Import

Today I’m pleased to announce another great feature for our Web Performance Optimizer beta: HAR file imports!

As users of Zoompf already know, the Zoompf WPO tool analyzes the content of your site for over 400 causes of slow performance, providing detailed step-by-step instructions on how to resolve each problem. This level of analysis answers the “what’s next?” question. After you find performance slowdowns on your site using WebPageTest or Chrome Dev Tools, you still need to figure out how to fix the problems. The Zoompf HAR import can give you that insight.

Here’s how it works.

First, using your favorite waterfall tool of choice, export your initial results to a HAR file. Now, fire up Zoompf WPO and select the New Performance Test option.

From there, select Create New Test, click Advanced, and then HAR File.

You’ll now be prompted to upload that HAR file, or import directly from a hosted URL. Optionally expand the Scanner Options section to further customize your analysis (for example, applying one of our new Defect Check Policies) and then click Start Test to begin the analysis.

The Zoompf performance analyzer will then analyze the content in that HAR file and run its suite of performance defect checks against the specific content in that file, as it was originally captured. This is a great way to analyze “point in time” results that you may have only intermittently seen when debugging with other tools like WebPageTest.

When the analysis is complete, you’ll see a wealth of new information about those HAR file results, including a new interactive waterfall chart that shows all the Performance Defects causing each of the bottlenecks in your results:

 

Using the HAR Import feature is a great way to chase down those hard to fix performance defects. After you identify a performance bottleneck using WebPageTest, Chrome or Rigor, then just import those HAR results into Zoompf to find out what caused that bottleneck and how you can fix the problem!

If you’d like to learn more about Zoompf Web Performance Optimizer, contact us to schedule a demo. Want to see how your site is performing right now? Try our free performance report.

 

The post WebPageTest Found a Performance Problem? Fix It Using Zoompf HAR Import! appeared first on Zoompf Web Performance.

Apparently, The Screen Size In Production Is Different

QA Hates You - Tue, 08/25/2015 - 07:28

At least, I hope this is the result of the screen size being different in production than it was in the spec.

Otherwise, the implication would be that the interface was not tested.

Remember when you’re testing that the spec or requirements are merely suggestions, and you should go afield of your testing matrix as often as you can.

Categories: Software Testing

Windows 95 Nostalgia

Alan Page - Mon, 08/24/2015 - 12:48

I feel like today’s a good day to share a few stories about my first few months at Microsoft, and the (very) small part I played in shipping Windows 95.

My start at Microsoft is a story on its own, and probably worth recapping here in an abbreviated form. I started at Microsoft in January 1995 as a contractor testing networking components for the Japanese, Chinese, and Korean versions of Windows 95. I knew some programming and even a bit of Japanese (I later became almost proficient, but have forgotten a lot of it now). I also knew, for better or for worse, a lot about NetWare and about hardware troubleshooting, and that got me in the door (and got me hired full time five months later).

Other than confirming that mainline functionality (including upgrade paths) was correct, there were two big parts of my job that were unique to testing CKJ (Chinese, Korean, Japanese) versions of Windows. The first was that at the time, there were a dozen or so LAN cards (this was long before networking was integrated onto a motherboard) that were unique to Japan, and I was (solely) responsible for ensuring these cards worked across a variety of scenarios (upgrades from Windows, upgrades from LanMan, clean installs, NetWare support, protocol support, etc.). One interesting anecdote from this work was that I found that one of the cards had a bug in its configuration file causing it to not work in one of the upgrade scenarios. Given the time it typically took to go to the manufacturer to make a fix and get it back, we decided to make the fix on our end. Because I knew the fix (a one-liner), I made the change, checked it in, and that one-liner became the first line of “code” I wrote for a shipping product at Microsoft.

The other interesting part of testing CKJ Windows was that Windows 95 was not Unicode; it was a mixed-byte system where some (most) characters were made up of two bytes. Each language had a reserved set of bytes, specified as Lead Bytes, indicating that that byte and the subsequent byte were part of a single double-byte character. Programs that parsed strings had to do so using functions aware of this mechanism, or they would fail. Often, we found UI where we could put the cursor in the middle of a character. The interesting twist for networking was that the second byte could be 0x7c (‘|’) or 0x5c (‘\’). As you can imagine, these characters caused a lot of havoc when used in computer names, network shares, paths, and files, and I found many bugs testing with these characters (more explanation on double-byte characters, along with one of my favorite related bugs, is described here).
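To make the failure mode concrete, here is a small Node.js sketch of lead-byte-aware path splitting (the lead-byte ranges are approximate Shift-JIS values and the example bytes are purely illustrative, not taken from the original bugs):

// Approximate Shift-JIS lead-byte ranges; real tables vary by code page.
function isLeadByte(b) {
  return (b >= 0x81 && b <= 0x9f) || (b >= 0xe0 && b <= 0xfc);
}

// Split a byte buffer on the path separator 0x5C ('\'), skipping trail bytes.
function splitPathDbcsAware(bytes) {
  const parts = [];
  let start = 0;
  for (let i = 0; i < bytes.length; i++) {
    if (isLeadByte(bytes[i])) { i++; continue; }  // next byte is a trail byte, even if it is 0x5C
    if (bytes[i] === 0x5c) {                      // a real separator
      parts.push(bytes.slice(start, i));
      start = i + 1;
    }
  }
  parts.push(bytes.slice(start));
  return parts;
}

// In Shift-JIS the katakana "so" encodes as 0x83 0x5C - its trail byte looks exactly like '\'.
// "share\<so>file" should split into two parts; a naive byte-level split on 0x5C yields three.
const path = Buffer.from([0x73, 0x68, 0x61, 0x72, 0x65, 0x5c, 0x83, 0x5c, 0x66, 0x69, 0x6c, 0x65]);
console.log(splitPathDbcsAware(path).length); // 2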

While I didn’t do nearly as much for the product as many people on the team who had worked on the product for years, I think I made an impact, and I learned so many things and learned from so many different people.

(potentially) related posts:
  1. One of my Favorite Bugs
  2. My last fifteen years (part 2)
  3. Twenty Years…and Change
Categories: Software Testing

Apple Leading the Way in the Adoption of IPv6

Perf Planet - Mon, 08/24/2015 - 08:07

This article originally appeared on Mattias Geniar’s personal blog.

This is pretty exciting news for the adoption of IPv6. In June, Apple announced that all iOS 9 apps need to be “IPv6 compatible.”

Because IPv6 support is so critical to ensuring your applications work across the world for every customer, we are making it an AppStore submission requirement, starting with iOS 9.

In this case, compatible just means the applications should use the NSURLSession class (or a comparable alternative), which will resolve DNS names to either AAAA (IPv6) or A (IPv4) records, depending on which one is available.

It doesn’t mean the actual hostnames need to be IPv6. You can still have an IPv4-only application, as long as all your DNS-related system calls are done in a manner that would allow IPv6, if available.

But to further encourage IPv6 adoption, Apple has just motivated all its app developers to use IPv6 where applicable: IPv4 networks will get a 25ms penalty compared to IPv6 connections.

Apple implemented “Happy Eyeballs,” an algorithm published by the IETF which can make dual-stack applications (those that understand both IPv4 and IPv6) more responsive to users, avoiding the usual problems faced by users with imperfect IPv6 connections or setups.

Since its introduction into the OS X line four years ago, Apple has pushed a substantial change to the implementation for the next OS X release, “El Capitan”.

1. Query the DNS resolver for A and AAAA.
If the DNS records are not in the cache, the requests are sent back to back on the wire, AAAA first.

2. If the first reply we get is AAAA, we send out the v6 SYN immediately

3. If the first reply we get is A and we’re expecting a AAAA, we start a 25ms timer
— If the timer fires, we send out the v4 SYN
— If we get the AAAA during that 25ms window, we move on to address selection
[v6ops] Apple and IPv6 — Happy Eyeballs

In other words: DNS queries are sent in parallel, for both an AAAA record and an A record. Before the answer to the A record is accepted, at least 25ms must pass, to allow a potential response to the AAAA query to arrive.

RFC6555, which describes the Happy Eyeballs algorithm, notes that Firefox and Chrome use a 300ms penalty timer.

1. Call getaddrinfo(), which returns a list of IP addresses sorted by the host’s address preference policy.

2. Initiate a connection attempt with the first address in that list (e.g., IPv6).

3. If that connection does not complete within a short period of time (Firefox and Chrome use 300 ms), initiate a connection attempt with the first address belonging to the other address family (e.g., IPv4).

4. The first connection that is established is used. The other connection is discarded.
RFC6555

Apple’s implementation isn’t as strict as Chrome’s or Firefox’s, but it is making a very conscious move to push for IPv6 adoption. I reckon it’s good news for Belgian app developers, as we’re leading the IPv6 adoption charts.
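As a rough JavaScript sketch of the preference-timer idea described above (resolveAAAA, resolveA, and connect are hypothetical helpers, not a real API, and error handling is omitted):

// Prefer IPv6, but only hold the IPv4 answer back for a short preference window.
async function happyEyeballsConnect(host, preferenceDelayMs = 25) {
  // Fire both DNS queries in parallel, AAAA first.
  const aaaa = resolveAAAA(host); // hypothetical: promise of an IPv6 address
  const a = resolveA(host);       // hypothetical: promise of an IPv4 address

  const first = await Promise.race([
    aaaa.then(addr => ({ family: 6, addr })),
    a.then(addr => ({ family: 4, addr })),
  ]);

  if (first.family === 6) {
    return connect(first.addr);   // AAAA answered first: connect over IPv6 immediately
  }

  // The A record answered first: give the AAAA answer up to 25ms to arrive.
  const timer = new Promise(resolve => setTimeout(() => resolve(null), preferenceDelayMs));
  const v6 = await Promise.race([aaaa.catch(() => null), timer]);
  return connect(v6 !== null ? v6 : first.addr);
}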

If you’re interested, here are some IPv6-related blog posts I published a few years ago:

 

The post Apple Leading the Way in the Adoption of IPv6 appeared first on Catchpoint’s Blog.

Suggestions For Your QA Mission Statement

QA Hates You - Mon, 08/24/2015 - 05:27

Victor Frankenstein’s creation speaking in Mary Shelley’s Frankenstein pretty much sums up my testing approach:

I will revenge my injuries: if I cannot inspire love, I will cause fear; and chiefly towards you, my arch-enemy, because my creator, do I swear inextinguishable hatred. Have a care: I will work at your destruction, nor finish until I desolate your heart so that you shall curse the hour of your birth.

Nineteenth century curses are the best.

Here’s a statement of work from Frankenstein himself later in the book:

My present situation was one in which all voluntary thought was swallowed up and lost. I was hurried away by fury; revenge alone endowed me with strength and composure; it moulded my feelings and allowed me to be calculating and calm, at periods when otherwise delirium or death would have been my portion.

Categories: Software Testing

Announcing DaSpec — awesome executable specifications in Markdown

The Quest for Software++ - Sat, 08/22/2015 - 19:47

It’s my great pleasure to announce the immediate availability of DaSpec v1.0, the first stable version ready for production use. DaSpec is an automation framework for Executable Specifications in Markdown. It can help you:

  • Share information about planned features with non-technical stakeholders easily, and get actionable unambiguous feedback from them
  • Ensure and document shared understanding of the planned software, making the definition of done stronger and more objective
  • Document software features and APIs in a way that is easy to understand and maintain, so you can reduce the bus factor of your team and onboard new team members easily
  • Make any kind of automated tests readable to non-technical team members and stakeholders

DaSpec helps teams achieve those benefits by validating human-readable documents against a piece of software, similar to tools such as FitNesse, Cucumber or Concordion. The major difference is that DaSpec works with Markdown, a great, intuitive format that is well supported by a large ecosystem of conversion, editing and processing tools. Run and play with the key examples in your browser now, without installing any software, to see what DaSpec could do for you.

DaSpec’s primary targets are teams practising Behaviour Driven Development, Specification by Example, ATDD and generally running short, frequent delivery cycles with a heavy dependency on test automation. It can, however, be useful to anyone looking to reduce the cost of discovering outdated information in documentation and tests.

For more information on what’s new in version 1.0, check out the release notes.

The first version of DaSpec supports automation using JavaScript only. We plan to port it to other platforms depending on the community feedback. Get in touch and let us know what you’d like to see next.

CI Test Automation Part 1 – Feature Branches

Eric Jacobson's Software Testing Blog - Fri, 08/21/2015 - 15:16

My dev teams are using Git Flow. A suite of check-in tests runs in our Continuous Integration upon merge-to-dev. The following is the model I think we should be using (and in some cases we are):

  1. A product programmer creates a feature branch and starts writing FeatureA.
  2. A test programmer creates a feature branch and starts writing the automated tests for FeatureA.
  3. The product programmer merges their feature branch to the dev branch.  The CI tests execute but there are no FeatureA tests yet.
  4. The test programmer gets the latest code from the dev branch and completes the test code for FeatureA.
  5. The test programmer merges their test code to the dev branch, which kicks off the tests that were just checked in.  They should pass because the test programmer ran them locally prior to merge to dev.
Categories: Software Testing

Improving ViewState Tokens

LoadStorm - Fri, 08/21/2015 - 10:13
Overview

ViewState tokens are a common performance issue I come across while helping customers. I’ve found some information on how to get performance improvements by modifying ViewState tokens.

What is a ViewState token? It is ASP.NET’s way to ensure the state of the page elements sent and received match. It is an encoded string sent out to the user which gets sent back to your server in a POST request. The main problem with this kind of token is that it needs to represent every form element on the page. This gets large when the form contains complex elements such as drop-down menus. Countries, companies, customers, or other database tables with many entries are good drop-down examples. In some cases it can double the size of your page. The largest ViewState I’ve seen was 300,000 characters long. The generated HTML for that page was at least 600,000 characters. That’s a lot of response text.
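If you want to quantify this on your own pages, a rough Node.js sketch like the following can pull the token out of a response and report its share of the page (the URL is a placeholder, and the regex assumes the usual id="__VIEWSTATE" attribute; your markup may differ):

// measure-viewstate.js - report how much of a page the __VIEWSTATE token occupies
const https = require('https');

https.get('https://www.example.com/SomeWebFormsPage.aspx', res => {  // placeholder URL
  let html = '';
  res.on('data', chunk => { html += chunk; });
  res.on('end', () => {
    const match = html.match(/id="__VIEWSTATE"[^>]*value="([^"]*)"/);
    const size = match ? match[1].length : 0;
    const pct = html.length ? ((100 * size) / html.length).toFixed(1) : '0.0';
    console.log('ViewState: ' + size + ' characters (' + pct + '% of the response)');
  });
});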

Performance Impacts

As the ViewState grows larger, it affects performance in the following ways:

  • Increased CPU cycles to serialize and to deserialize the ViewState.
  • Pages take longer to download because they are larger.
  • Large ViewStates can impact the efficiency of garbage collection.

According to one source I found:

ViewStates work best when serializing basic types such as strings, integers, and Booleans. Objects such as arrays, ArrayLists, and Hashtables are also good with those basic types. When you want to store a non-basic type, ASP.NET tries to use the associated type converter. If it cannot find one, it uses the expensive binary serializer. The size of the object is proportional to the size of the ViewState. Avoid storing large objects.

Improvement Options

From much of what I read there are only three options:

  1. Remove ViewStates when they’re not needed.
  2. Remove any unnecessary elements in the form.
  3. Compress the ViewState to reduce the data transferred.

I rarely see ViewStates used that are not needed, and in some cases a few elements can be removed from the form. Compression is often the best improvement for the end-user experience. Keep in mind that compressing and decompressing the ViewState causes more CPU work, which could hurt performance at scale.

Compress a ViewState whenever it is above a particular size:
http://sebnilsson.com/3ec8d9da/asp-net-webforms-seo-compressing-view-state/

More suggestions on compressing the ViewState:
http://stackoverflow.com/questions/2380106/asp-net-compress-viewstate
http://www.codeproject.com/Articles/14733/ViewState-Compression

Sources:

https://msdn.microsoft.com/en-us/library/ff647787.aspx#scalenetchapt06_topic19
http://www.monitis.com/blog/2012/07/26/improving-asp-net-performance-part-12-view-state-management/
http://jagbarcelo.blogspot.com/2009/03/viewstate-size-minimization.html

The post Improving ViewState Tokens appeared first on LoadStorm.

The Importance of Monitoring Consumer ISPs

Perf Planet - Thu, 08/20/2015 - 10:27

At their heart, web performance monitoring companies have a basic goal: to provide their customers with the most comprehensive and accurate look possible into the health of their online systems. This goal is accomplished in multiple ways, but one of the most important is through synthetic testing from servers plugged into the internet backbone.

Using these nodes as ‘dummy computers’ on which to run tests of their websites and applications, businesses can glean valuable data pertaining to how their sites and applications are performing, and also how peering issues, connectivity bottlenecks, etc. are affecting their overall performance. Remember, the internet is a series of tubes, and congestion in one location often has ripple effects that spread far and wide.

There are two key types of network providers: Transit Providers (e.g. Cogent, Level3, NTT) and the Last Mile (e.g. Comcast, Verizon, AT&T, etc.). As an end user, you do not buy your internet from Cogent, but from Comcast. Yet in your datacenter, you buy connectivity from the backbone providers like Cogent or Level3, or you buy into peering exchanges.

To present the most accurate view into the end user experience, a monitoring tool requires having nodes on both of these types of networks, with more of them on the “Eyeball” or Last Mile networks like Comcast, AT&T, etc. – i.e. the ISPs that end users are actually using to access the internet.

This principle has been made even clearer over the last few years with the fight over net neutrality, during which it was made widely known that some of these ISPs practice throttling techniques on their networks, showing favor to certain websites by slowing service down for others (the whole “fast lane/slow lane” battle that we saw take place in the halls of Congress). But without nodes that are connected to these networks, how were businesses supposed to know if poor performance was due to problems on their own end, or with the consumer ISPs themselves?

This is why Catchpoint has been investing heavily in expanding our node infrastructure on these prominent eyeball networks. When looking at broadband providers in the U.S., you’ll see that over 60 million internet users in the U.S. (or 71.4%) are subscribers of the Big Four: Comcast (26.5%), AT&T (19.1%), Time Warner (14.9%), or Verizon (10.9%).

So not only does Catchpoint have over 200 nodes in 39 different cities across the United States, but we are connected to all the Big Four carriers, as well as the next five biggest ISPs (CenturyLink, Charter, Cox, Cablevision, and Frontier), which together handle another 21% of all U.S. internet traffic.

(NOTE: Despite some statements to the contrary, we ARE in fact monitoring in the cloud, both from transit networks – where companies buy bandwidth – and from eyeball networks, where actual consumers are. But we’re not limited to monitoring from public clouds like Amazon Web Services, where your consumers are not.)

On top of that, Catchpoint has also stayed ahead of the curve by building nodes that are specifically tailored to IPv6 (we’re at 50 and counting), which has already begun to replace the exhausted IPv4 addresses. In doing so, we’re ensuring that our customers have access to the most current information possible.

Of course, the U.S. is hardly the only place where we’re expanding our network; our global node coverage encompasses:

  • Total Number of Agents/Nodes: 459
    • Backbone: 332
    • Last Mile: 90
    • Wireless: 37
  • 69 countries
  • 164 cities
  • 180 Autonomous Systems

When you combine all that with the amount of data that every test run provides, plus the ability to isolate and analyze every component of your web transactions, the result is a level of depth into your online systems that’s simply unavailable from any other monitoring system. And Catchpoint is not just a web monitoring tool: you can keep an eye on 13 different kinds of endpoints, from Web, DNS, and Traceroutes to our latest one, WebSockets.

As we like to say, performance is a journey rather than a destination; it’s a never-ending process that can constantly be improved upon. Therefore, we’re going to continue to invest in our testing capabilities and our node infrastructure to give the most comprehensive view to our customers.

 

The post The Importance of Monitoring Consumer ISPs appeared first on Catchpoint’s Blog.

Citrix Session Reliability Part 2: When “Network Errors” are neither “Network” nor “Errors”

In last week’s post I looked at Citrix Session Reliability and its relationship to key network performance indicators. I concluded by saying that measuring TCP connectivity issues is essential to an understanding of the user’s overall experience, and also that connectivity (or availability) from the user’s perspective likely has little to do with the underlying […]

The post Citrix Session Reliability Part 2: When “Network Errors” are neither “Network” nor “Errors” appeared first on Dynatrace APM Blog.

Categories: Load & Perf Testing

New Features: Ignore Defects and Performance Knowledge Base!

Perf Planet - Wed, 08/19/2015 - 10:02

Another batch of new features for the Zoompf Web Performance Optimizer beta to announce today!

Ignore Specific Defects with Defect Check Policies

Whenever you run a Performance Test with Zoompf WPO, you are testing your site for over 400 common causes of slow performance. These causes include defects such as Content Served Without Compression, Lossless PNG Image Optimization, Unminified CSS, and more. We call each of these tests a Defect Check.

In some cases, though, there may be a set of Defect Checks that are outside of your ability to fix. For example, say your marketing department won’t let you lossless optimize any images, so you’d like Zoompf to stop telling you about them. When this happens, you may prefer to ignore these results in your performance test findings. Defect Check Policies will now allow you to do this.

To configure a Defect Check Policy, visit your Settings page and select the Defect Check Policies tab. You’ll see a view similar to this:

Right away you’ll see a number of default system policies already included in your account (with more to come soon!).

Click the title of any of those policies to browse a read-only view of all the Defect Checks included in that policy. Alternatively, you can create your own new policy by copying an existing system policy or starting from scratch with Create New Policy, for example:

 

Click around the tree a bit and you’ll notice a nifty preview window to the right giving you a full description of the Defect Check. This is a great new way to browse all the rich performance content Zoompf has provided in its product for years, but more on that later.

Check the boxes on only the defects you want to test, ignoring the known defects you want to ignore, then click Create to create that policy. Once created, you can then modify any of your existing Performance Tests to use that new Defect Check Policy for all future snapshots. To do this, visit the View Results link in the navigation header,  click Edit on the test you wish to modify, and expand the Scanner Options section.

You’ll now see a new dropdown for Defect Check Policy like this:

Select the new policy you just created, and hit update. All future snapshots for this test will now test only those Defect Checks you selected, ignoring those defects you chose to skip. This is a great way to cut down the noise!

One other tip: If you find yourself using the same Defect Check Policy for most if not all of your Performance Tests, consider setting your new policy as the “Default” on the Defect Check Policies settings page. A default policy will be used automatically for all new Performance Tests created (unless you specify otherwise in the test). Note, you will still need to modify existing tests to use any new policies you create.

To get a better understanding of which tests are using which policies, use the new Defect Check Policy filter in the left hand filter bar on the View Results page.

New Performance Knowledge Base!

You may have noticed while you were browsing the Defect Check Policies all the great performance content visible on the right hand preview pane. While this content has been in the Zoompf product for quite some time, it was always a little buried and harder to access: you’d have to first run a performance test to view this content, and even then you’d only see content for those defects that were found.

With the addition of Defect Check Policies, we felt now was a good time to make all this great content much easier for you to access. Therefore, we’re happy to announce our new Performance Knowledge Base!

Accessing the Knowledge Base is a snap: just log into your WPO account and click the Knowledge Base button on the top nav.

From there, you’ll see a read-only view of the Defect Check data we mentioned above, but easily browseable so you can view any topic of interest.

There are over 400 articles in there, so you’ll have a lot of late night reading to keep you busy :-). We’ll also be continuing to add more content over time, so keep coming back for new updates.

We’re really excited about these new features and hope you’ll find them useful. And of course, if you have any feedback on how we can improve this (or any other feature) please feel free to contact us with your feedback!

Enjoy!

 

The post New Features: Ignore Defects and Performance Knowledge Base! appeared first on Zoompf Web Performance.

Automated Performance Engineering Framework for CI/CD

Author: Aftab Alam, Senior Project Manager, Infosys Independent Validation and Testing Services with contribution from Shweta Dubey Continuous Integration is an important part of agile-based development processes. It’s getting huge attention in every phase of the Software Development Cycle (SDLC) to deliver business features faster and more confidently. Most of the time it’s easy to catch functional bugs […]

The post Automated Performance Engineering Framework for CI/CD appeared first on Dynatrace APM Blog.

Categories: Load & Perf Testing

Clear Skies Ahead: Explaining and Contextualizing Apdex Values

Perf Planet - Tue, 08/18/2015 - 11:40

Partly cloudy with scattered showers.

Apdex is like a basic weather report; it provides a general overview of current conditions, but it doesn’t tell you how many people are standing out in the rain. Typically, Apdex is used as an index to represent response time performance against user expectations. For web pages, this is a common barometer for indicating levels of user satisfaction on how quickly a page loads.

Like the weather report, it’s popular to criticize Apdex for being too imprecise, or not representing a complete view of the experience population. Rather than take that well-trodden approach, let’s instead look at how Apdex can be used productively without sacrificing its simplicity.

The basic Apdex calculation is straightforward. Threshold values are used to separate response time samples into three zones: Satisfied, Tolerating, and Frustrated. The index is then calculated using a basic formula.

This is best explained by example. Consider the sample web response times shown in the figure below. Users who experience response times less than the Tolerating threshold (T) are considered to be Satisfied, and their samples are counted in the Satisfied_count. Samples falling between T and the Frustrated threshold (F) are in the Tolerating_count. These are given half the weight of Satisfied samples (they’re divided by two). By default, the Frustrated threshold is set at F = 4T. Users experiencing response times greater than F are considered Frustrated, and aren’t counted in the Apdex numerator. When divided by the total number of samples, the result is a normalized index from zero to one.
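Written out, the calculation just described is:

Apdex_T = (Satisfied_count + Tolerating_count / 2) / Total_samples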

The success of this index lies in its simplicity. Once you define T, the Tolerating threshold, F is automatically set at four times higher. The obvious problem here is that while most users may be satisfied waiting three or four seconds for a web page to load, few will wait four times longer to ever reach the Frustrated threshold. Depending on the type of website being measured, a more representative threshold for frustration could be much lower than 4T.

In this case, a more generic implementation of Apdex should be used. Instead of automatically setting the Frustrated threshold to 4T, it can be manually set to a more appropriate value to allow for a more fine-tuned representation of user experience. In the example above, if a web page’s Tolerating threshold is set to three seconds, but experience shows that users abandon the site after seven seconds, F could be set to seven instead of the default 12 seconds.

A similar issue occurs when Apdex is applied more broadly, such as to server, database or other transaction metrics. In these cases, the Frustrated threshold cannot be assumed to be four times higher than the Tolerating threshold. The level of user satisfaction could be smaller or larger than 4T, and could vary by the application needs and goals of the organization itself. Employing a variable Tolerating zone (where F is set independently of T) will help support these needs, and is easy to implement.
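A minimal JavaScript sketch of that generalized calculation, with T and F set independently (thresholds and samples in the same units, e.g. milliseconds), might look like this; it also returns the per-zone percentages discussed below:

// Apdex with an independently configurable Frustrated threshold (F defaults to 4T).
function apdex(samples, T, F = 4 * T) {
  let satisfied = 0, tolerating = 0, frustrated = 0;
  for (const t of samples) {
    if (t <= T) satisfied++;
    else if (t <= F) tolerating++;
    else frustrated++;
  }
  const total = samples.length;
  return {
    index: total ? (satisfied + tolerating / 2) / total : 0,
    satisfiedPct: total ? (100 * satisfied) / total : 0,
    toleratingPct: total ? (100 * tolerating) / total : 0,
    frustratedPct: total ? (100 * frustrated) / total : 0,
  };
}

// Example: T = 3s, F = 7s (rather than the default 12s)
console.log(apdex([1200, 2800, 4500, 6500, 9000], 3000, 7000));
// -> index 0.6: two Satisfied, two Tolerating, one Frustrated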

Another issue for using Apdex is how to interpret the index itself. A decimal number between zero and one is not meaningful to many. Apdex does specify a set of ranges, shown below, but like the weather report, these are rather subjective.

Translating an index value to a simple verbal rating is a good idea, but certainly isn’t new. For instance, the NOAA National Weather Service uses “Partly Cloudy” when the sky is 3/8 to 5/8 covered by clouds. But just how helpful is it to say your webpage performance is Fair? It might be more productive to ask what percent of webpage loads were in the Frustrated zone, versus those in the Satisfied or Tolerating zones.

This can also be useful for those concerned with SLAs or reporting user experience to a less technical audience. For many, saying “95% of our test runs this month were satisfactory” is more meaningful than reporting how many were completed within three seconds, or that the Apdex rating was “Good.”

The Apdex rating can thus be used as an initial, high-level indicator of current conditions – and when more information is needed, Apdex zones can be further explored by charting each as individual metrics. This leverages the Apdex model while maintaining simplicity and providing more than just a one-word indication.

The figure below illustrates this point by showing webpage response times with corresponding Apdex and zone percentages for a major online retailer using synthetic testing. As webpage response times worsen, the Apdex rating drops from Excellent to Unacceptable. Looking further at the Apdex zones, the bottom chart shows the percent of Satisfied page loads drop to zero, while those in the Frustrated zone peak at 80%. This provides a better view into how the incident may affect actual user experience.

Overall, if we want to use Apdex like a summary weather report, it has to be meaningful, easy to communicate, and serve as a starting point for further examination. Combined, these recommendations should help in achieving those goals, and without sacrificing the overall simplicity of the Apdex model.

 

The post Clear Skies Ahead: Explaining and Contextualizing Apdex Values appeared first on Catchpoint’s Blog.

Eliminating Roundtrips with Preconnect

Ilya Grigorik - Mon, 08/17/2015 - 01:00

The "simple" act of initiating an HTTP request can incur many roundtrips before the actual request bytes are routed to the server: the browser may have to resolve the DNS name, perform the TCP handshake, and negotiate the TLS tunnel if a secure socket is required. All accounted for, that's anywhere from one to three — and more in unoptimized cases — roundtrips of latency to set up the socket before the actual request bytes are routed to the server.

Modern browsers try their best to anticipate what connections the site will need before the actual request is made. By initiating early "preconnects", the browser can set up the necessary sockets ahead of time and eliminate the costly DNS, TCP, and TLS roundtrips from the critical path of the actual request. That said, as smart as modern browsers are, they cannot reliably predict all the preconnect targets for each and every website.

The good news is that we can — finally — help the browser; we can tell the browser which sockets we will need ahead of initiating the actual requests via the new preconnect hint shipping in Firefox 39 and Chrome 46! Let's take a look at some hands-on examples of how and where you might want to use it.

Preconnect for dynamic request URLs

Your application may not know the full resource URL ahead of time due to conditional loading logic, UA adaptation, or other reasons. However, if the origin from which the resources are going to be fetched is known, then a preconnect hint is a perfect fit. Consider the following example with Google Fonts, both with and without the preconnect hint:

In the first trace, the browser fetches the HTML and discovers that it needs a CSS resource residing on fonts.googleapis.com. With that downloaded it builds the CSSOM, determines that the page will need two fonts, and initiates requests for each from fonts.gstatic.com — first though, it needs to perform the DNS, TCP, and TLS handshakes with that origin, and once the socket is ready both requests are multiplexed over the HTTP/2 connection.

<link href='https://fonts.gstatic.com' rel='preconnect' crossorigin>
<link href='https://fonts.googleapis.com/css?family=Roboto+Slab:700|Open+Sans' rel='stylesheet'>

In the second trace, we add the preconnect hint in our markup indicating that the application will fetch resources from fonts.gstatic.com. As a result, the browser begins the socket setup in parallel with the CSS request, completes it ahead of time, and allows the font requests to be sent immediately! In this particular scenario, preconnect removes three RTTs from the critical path and eliminates over half of second of latency.

The font-face specification requires that fonts are loaded in "anonymous mode", which is why we must provide the crossorigin attribute on the preconnect hint: the browser maintains a separate pool of sockets for this mode.

Initiating preconnect via Link HTTP header

In addition to declaring the preconnect hints via HTML markup, we can also deliver them via an HTTP Link header. For example, to achieve the same preconnect benefits as above, the server could have delivered the preconnect hint without modifying the page markup - see below. The Link header mechanism allows each response to indicate to the browser which other origins it should connect to ahead of time. For example, included widgets and dependencies can help optimize performance by indicating which other origins they will need, and so on.
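As an illustration (a sketch using Node's standard http module; the header value follows the same preconnect hint syntax as the markup above):

const http = require('http');

http.createServer((req, res) => {
  // Same preconnect hint as the <link> tag, delivered as an HTTP response header.
  res.setHeader('Link', '<https://fonts.gstatic.com>; rel=preconnect; crossorigin');
  res.end('<!doctype html> ...page markup unchanged... ');
}).listen(8080);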

Preconnect with JavaScript

We don't have to declare all preconnect origins upfront. The application can invoke preconnects in response to user input, anticipated activity, or other user signals with the help of JavaScript. For example, consider the case where an application anticipates the likely navigation target and issues an early preconnect:

function preconnectTo(url) {
  var hint = document.createElement("link");
  hint.rel = "preconnect";
  hint.href = url;
  document.head.appendChild(hint);
}

The user starts on jsbin.com; at the ~3.0 second mark the page determines that the user might be navigating to engineering.linkedin.com and initiates a preconnect for that origin; at the ~5.0 second mark the user initiates the navigation, and the request is dispatched without blocking on DNS, TCP, or TLS handshakes — nearly a second saved for the navigation!

Preconnect often, Preconnect wisely

Preconnect is an important tool in your optimization toolbox. As the examples above illustrate, it can eliminate many costly roundtrips from your request path — in some cases reducing the request latency by hundreds and even thousands of milliseconds. That said, use it wisely: each open socket incurs costs both on the client and server, and you want to avoid opening sockets that might go unused. As always, apply, measure real-world impact, and iterate to get the best performance mileage from this feature.

Finally, for debugging purposes, do note that preconnect directives are treated as optimization hints: the browser might not act on each directive each and every time, and the browser is allowed to adjust its logic to perform a partial handshake - e.g. fall back to DNS lookup only, or DNS+TCP for TLS connections.
