Feed aggregator

Easily Boost your Web Application by Using nginx

More and more Web sites and applications are being moved from Apache to nginx. While Apache is still the number 1 HTTP server, with more than a 60% share of active Web sites, nginx has now taken over 2nd place in the ranking and relegated Microsoft’s IIS to 3rd place. Among the top 10,000 Web sites […]

The post Easily Boost your Web Application by Using nginx appeared first on Compuware APM Blog.

Categories: Load & Perf Testing

The 10 most popular posts of 2014 (so far)

Web Performance Today - Tue, 08/19/2014 - 07:13

Autumn is shaping up to be a very full season, so I’m taking advantage of the relative quiet to take a little R&R. I’ll see you back here in September. In the meantime, here’s a roundup of posts that Google Analytics tells me people liked. I hope you like them, too.

1. Stop the presses: Has the average web page actually gotten SMALLER?

According to the HTTP Archive, in May of this year the average top 1,000 web page was 1491 KB in size, 5% smaller than it was in November 2013, when the average page reached a record size of 1575 KB. Does this finding represent the start of a new trend toward smaller pages, or is it just an isolated incident?

2. How to create the illusion of faster web pages (while also creating actual happier users)

In our most recent State of the Union for ecommerce performance, we found that start render time for the top 500 retailers was 2.9 seconds. In other words, a typical visitor sits and stares at a blank screen for almost 3 seconds before he or she even begins to see something. Not good. But there’s hope. Here are eight tips to help improve perceived web performance.

3. Waterfalls 101: How to use a waterfall chart to diagnose your website’s performance pains

If you already live and breathe waterfall charts, this post isn’t for you. (But this one might be.) This post is for people who are interested in performance but don’t necessarily have a lot of technical know-how. It’s for people who want to crack the hood and learn:

  • why pages behave the way they do,
  • how to begin to diagnose performance issues before sending out red-flag emails to everyone on your team, and
  • how to talk performance with the experts within your organization who do live and breathe waterfalls.
4. New findings: The median top 100 ecommerce page takes 6.2 seconds to render primary content

Every quarter at Radware, we measure and analyze the performance of the top 500 retail websites. And every quarter, I’ve grown accustomed to the persistence of two trends: pages are growing bigger and, not coincidentally, slower. But while I expected to see some growth and slowdown in our latest research — released last month in our State of the Union: Ecommerce Page Speed & Web Performance [Summer 2014] — I have to admit that I wasn’t expecting to see this much.

5. How does web page speed affect conversions? [INFOGRAPHIC]

Performance has only recently started to make headway into the conversion rate optimization (CRO) space. These inroads are long overdue, but still, it’s good to see movement. In the spirit of doing my part to hustle things along, here’s a collection of infographics representing real-world examples of the huge impact of page speed on conversions.

6. 12 websites that prove RWD and performance CAN play well together

There’s a lot of debate about the performance pros and cons of responsive design. While RWD does present performance challenges, these challenges aren’t insurmountable and shouldn’t be used as an excuse for poor performance. In this post, I tested 60 sites that have been lauded for their awesome RWD implementation. While it’s true that 80% of these sites failed to deliver a relatively fast user experience, 20% succeeded, proving it can be done.

7. There are more mobile-optimized sites than ever. So why are mobile pages getting bigger?

In 2011, the average page served to a mobile device was 475 KB. Today, the average page is 740 KB. Put that number in the context of your data plan and think about it. This is just one of the eye-opening findings from this dive into the Mobile HTTP Archive.

8. The Great Web Slowdown [INFOGRAPHIC]

This is a poster version of the infographics we created to accompany our Winter 2013/14 State of the Union for Ecommerce Performance. At the time, the median Time To Interact for the top 100 retailers was 5 seconds and the median load time was 10 seconds. Compare that to our most recent SOTU, which found that the median TTI was 6.2 seconds and median load time was 10.7 seconds.

9. New findings: Retail sites that use a CDN are slower than sites that do not

This post generated a bit of controversy when it went live. Our quarterly ecommerce performance research at Radware found that using a content delivery network (CDN) correlates to slower Time to Interact, not faster.

But correlation isn’t causation, and this finding shouldn’t be interpreted as a criticism of CDNs. Sites that use a CDN are more likely to incorporate large high-resolution images and other rich content, as well as being more likely to implement third-party marketing scripts, such as trackers and analytic tools. All of these resources can have a significant impact on performance, especially if they’re not implemented with a “performance first” approach. Takeaway: keep your CDN, but look at how you can leverage other performance optimization techniques as well.

10. Nine web performance predictions for 2014

The year’s more than half over. How many of these have come true?

The post The 10 most popular posts of 2014 (so far) appeared first on Web Performance Today.

Fingered by an Error Message

QA Hates You - Tue, 08/19/2014 - 04:55

Why would a user do that?

None of that stopped 26-year-old Diondre J— of Slidell, who checked into Slidell Memorial Hospital on Aug. 5 under the name of her deceased sister, Delores, Slidell Police Department spokesman Daniel Seuzeneau said Wednesday.

When hospital staff attempted to put the information into the hospital’s database, an error message informed them they might have been treating a dead person. The police were contacted, and Diondre J— was stopped in the hospital parking lot.

It’s good to see someone was on the job testing to see what would happen if you tried to enter a patient’s date of treatment after the patient’s date of death.

Because sometimes a user might do that.

Categories: Software Testing

ISTQB Advanced Test Manager Training - Live Online Sept. 15 - 19

Randy Rice's Software Testing & Quality - Sun, 08/17/2014 - 22:24
I am offering a special live, online version of the ISTQB Advanced Test Manager Training the week of September 15 - 19, 2014. I'm also offering a special "buy 2 get 1 free" offer. Just use coupon code "ISTQB915" at https://www.mysoftwaretesting.com/ISTQB_Advanced_Test_Manager_Course_Public_Course_p/ctaltmpub.htm

Be sure to register as a remote attendee.

I will be the instructor, and the class schedule will be from 8:30 a.m. to 4:00 p.m. CDT each day. You will get a complete set of printed course notes if you register at least one week in advance of the class. The sessions will be recorded and posted daily, so if you miss a portion you can catch up.

I hope to see you there!
Categories: Software Testing

Australia’s Attitude toward Website and Application Monitoring – she’ll be right mate

I recently attended the Online Retailer Conference in Sydney, and I couldn’t resist the temptation to survey the audience.  I recalled a conversation I had with a journalist a few months ago, who said ‘I don’t think anyone monitors apps in Australia.  The developers build them; then they just throw them over the fence’. He […]

The post Australia’s Attitude toward Website and Application Monitoring – she’ll be right mate appeared first on Compuware APM Blog.

Categories: Load & Perf Testing

Testing on the Toilet: Web Testing Made Easier: Debug IDs

Google Testing Blog - Tue, 08/12/2014 - 13:01
by Ruslan Khamitov 

This article was adapted from a Google Testing on the Toilet (TotT) episode. You can download a printer-friendly version of this TotT episode and post it in your office.

Adding ID attributes to elements can make it much easier to write tests that interact with the DOM (e.g., WebDriver tests). Consider the following DOM with two buttons that differ only by inner text:
<div class="jfk-button">Save</div>
<div class="jfk-button">Edit</div>
How would you tell WebDriver to interact with the “Save” button in this case? You have several options. One option is to interact with the button using a CSS selector:
div.jfk-button
However, this approach is not sufficient to identify a particular button, since both buttons share the same class and there is no mechanism to filter by text in CSS. Another option would be to write an XPath, which is generally fragile and discouraged:
//div[@class='jfk-button' and text()='Save']
Your best option is to add unique hierarchical IDs where each widget is passed a base ID that it prepends to the ID of each of its children. With a base ID of, say, base-id, the IDs for the two buttons will be:

base-id.save-button
base-id.edit-button
In GWT you can accomplish this by overriding onEnsureDebugId() on your widgets. Doing so allows you to create custom logic for applying debug IDs to the sub-elements that make up a custom widget:
@Override
protected void onEnsureDebugId(String baseId) {
  saveButton.ensureDebugId(baseId + ".save-button");
  editButton.ensureDebugId(baseId + ".edit-button");
}
Consider another example. Let’s set IDs for repeated UI elements in Angular using ng-repeat. Setting an index can help differentiate between repeated instances of each element:
<tr id="feedback-{{$index}}" class="feedback" ng-repeat="feedback in ctrl.feedbacks">
In GWT you can do this with ensureDebugId(). Let’s set an ID for each of the table cells:
@UiField FlexTable table;
UIObject.ensureDebugId(table.getCellFormatter().getElement(rowIndex, columnIndex),
    baseId + columnIndex + "-" + rowIndex);
Take-away: Debug IDs are easy to set and make a huge difference for testing. Please add them early.
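The hierarchical scheme generalizes beyond GWT. As a toy sketch (the helper below is hypothetical, not part of any framework), a widget can derive its children’s debug IDs by prepending its own base ID:

```python
def ensure_debug_id(base_id, child_names):
    """Build hierarchical debug IDs by prepending a widget's base ID
    to each child element name, mirroring the GWT pattern above."""
    return {name: "%s.%s" % (base_id, name) for name in child_names}

# A dialog widget with two buttons gets stable, unique, readable IDs
# that a WebDriver test can then target directly.
ids = ensure_debug_id("cart-dialog", ["save-button", "edit-button"])
```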

Categories: Software Testing

Understanding Application Performance on the Network – Part VII: TCP Window Size

In Part VI, we dove into the Nagle algorithm – perhaps (or hopefully) something you’ll never see. In Part VII, we get back to “pure” network and TCP roots as we examine how the TCP receive window interacts with WAN links. TCP Window Size Each node participating in a TCP connection advertises its available buffer […]
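As a taste of why the advertised receive window matters on WAN links: a TCP sender can have at most one receive window of unacknowledged data in flight per round trip, which caps throughput regardless of raw link bandwidth. A back-of-the-envelope sketch (my illustration, not from the post itself):

```python
def max_tcp_throughput_bps(window_bytes, rtt_seconds):
    """Receive-window cap on single-connection TCP throughput:
    at most one window of data can be in flight per round-trip time."""
    return window_bytes * 8 / rtt_seconds

# The classic un-scaled 64 KB window over a 100 ms WAN round trip caps a
# connection at roughly 5.2 Mbps, no matter how fat the pipe is.
cap = max_tcp_throughput_bps(65535, 0.100)
```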

The post Understanding Application Performance on the Network – Part VII: TCP Window Size appeared first on Compuware APM Blog.

Categories: Load & Perf Testing

What the 10 fastest ecommerce sites can teach us about web performance

Web Performance Today - Tue, 08/12/2014 - 07:12

We recently released our latest quarterly research into the performance and page composition of the top 500 online retailers. (The full report is available for download here.) Today, I thought it would be revealing to take a look at the ten fastest sites and the ten slowest sites and see what they have in common, where they differ, and what insights we can derive from this.


Every quarter, we use WebPagetest — an online tool supported by Google — to measure and analyze the performance and page composition of home pages for the top 500 ecommerce sites, as ranked by Alexa. WebPagetest is a synthetic tool that lets us see how pages perform across a variety of browsers and connection types. For the purposes of our study, we focused on page performance in Chrome over a DSL connection. This gives us a real-world look (as much as any synthetic tool can provide) at how pages perform for real users under realistic browsing conditions.

NOTE: When I talk about “fast” and “slow” in this post, I’m talking about a page’s Time to Interact (TTI) — the moment that the page’s feature content has rendered and become usable. If your focus is on measuring the user experience, TTI is the best metric we currently have. As the graph above demonstrates, TTI (represented by the green bar) can be significantly faster than load time (represented by the red bar).

1. Faster pages are smaller.

Among the ten fastest pages, the median page contained 50 resource requests and was 556 KB in size. Among the ten slowest pages, the median page contained 141 resource requests and was 3289 KB in size.

(Note that these numbers are for the page at the moment the onLoad event fires in the browser, AKA “document complete” time, AKA the amount of time it takes for most resources to render in the browser. Doc complete time shouldn’t be confused with fully loaded time, AKA the amount of time it takes for every resource to render. More on this distinction later in this post.)

In other words, the median slow page was almost three times larger than the median fast page in terms of number of resources, and about six times larger in terms of size.

Looking at the range of page sizes offers a bit more perspective. For the ten fastest pages, the total number of resources lived within a pretty tight range: from 15 to 72 resources. The smallest page was just 251 KB, and the largest was 2003 KB. With the ten slowest pages, we saw a much wider range: from 89 to 373 resources. The smallest page was 2073 KB, and the largest was more than 10 MB.
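The multiples quoted above follow directly from the medians; a quick arithmetic check using the numbers reported in this post:

```python
# Median values for the ten fastest vs. the ten slowest pages, as quoted above.
fast_requests, fast_kb = 50, 556
slow_requests, slow_kb = 141, 3289

request_ratio = slow_requests / fast_requests  # ~2.8x: "almost three times"
size_ratio = slow_kb / fast_kb                 # ~5.9x: "about six times"
```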

If you’ve been reading this blog for a while, then the issue of page bloat and its impact on performance isn’t new to you. But it bears repeating, so I’m repeating it.

2. Faster pages have a faster Time to First Byte (TTFB).

Time to First Byte is the window of time between when the browser asks the server for content and when it starts to get the first bit back. The user’s internet connection is a factor here, but there are other factors that can slow down TTFB, such as the amount of time it takes your servers to think of what content to send, and the distance between your servers and the user. In other words, slow TTFB can be an indicator of a server problem or a CDN (or lack thereof) problem — or both.

Among the ten fastest sites, the median TTFB was 0.254 seconds, compared to 0.344 seconds for the ten slowest sites. This difference — less than 100 milliseconds — might not sound like much to be concerned about, but bear in mind that TTFB isn’t a one-time metric. It affects every resource on the page, meaning its effects are cumulative.
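To see how a sub-100-millisecond difference adds up, here is a deliberately crude model (my own illustration, not from the report): assume the browser fetches a page’s resources over six parallel connections and pays the median TTFB on every request.

```python
import math

def total_first_byte_wait(ttfb_s, num_requests, parallel_connections=6):
    """Crude aggregate first-byte wait: each connection serializes its
    share of the page's requests, paying one TTFB per request."""
    requests_per_connection = math.ceil(num_requests / parallel_connections)
    return ttfb_s * requests_per_connection

# For a 100-resource page, a 90 ms TTFB gap compounds into ~1.5 s of extra waiting.
slow_wait = total_first_byte_wait(0.344, 100)  # ~5.8 s accumulated
fast_wait = total_first_byte_wait(0.254, 100)  # ~4.3 s accumulated
```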

3. Faster pages understand their critical rendering path and know what to defer.

Deferral is a fundamental performance technique. As its name suggests, deferral is the practice of deferring any page resources that are not part of a page’s critical rendering path, so that these non-essential resources load last. (The optimal critical rendering path has been excellently defined by Patrick Sexton as “a webpage that has only the absolutely necessary events occur to render the things required for just the initial view of that webpage”.)
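A minimal illustration of the technique: non-critical scripts can be kept out of the critical rendering path with the defer and async attributes, so they load and execute last. The file names below are placeholders:

```html
<!-- Critical CSS loads first: it is part of the critical rendering path. -->
<link rel="stylesheet" href="critical.css">

<!-- Deferred scripts download in parallel but execute only after the
     document has been parsed, so they never block initial rendering. -->
<script defer src="analytics.js"></script>

<!-- async scripts execute as soon as they arrive, without blocking parsing;
     suitable for fully independent third-party widgets. -->
<script async src="social-widget.js"></script>
```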

Faster pages seem to have a better handle on deferral, which we can infer from looking at the difference between their page size metrics at doc complete versus fully loaded. As already mentioned, among the ten fastest pages, the median page contained 50 resources and was 556 KB in size at doc complete. But when fully loaded, the median page doubled in size to 1116 KB, and contained almost 50% more resources.

Compare this to the ten slowest pages. The median page grew by about 30%, from 3289 KB to 4156 KB, and from 141 resources to 186 resources. And in several cases, the difference between the doc complete and fully loaded metrics was either unchanged or only negligibly different, indicating that these site owners have not put any effort into optimizing the critical rendering path.

4. CDN adoption is the same among the fastest and slowest sites.

Seven out of ten of the fastest pages used a CDN — as did seven out of ten of the slowest pages. This finding isn’t terribly surprising, as it goes hand in hand with the finding in our spring report that using a CDN doesn’t always correlate to faster pages.

This isn’t to say that site owners shouldn’t use a content delivery network. If you serve pages to a highly distributed user base, then a CDN should be part of your performance toolkit. (I encourage you to read this post for further discussion of this issue.) But this finding is a good reminder that a CDN isn’t a standalone solution.

5. Adoption of other performance best practices is consistent (in its inconsistency) among the fastest and slowest sites.

Looking at the ten fastest and ten slowest sites, we see that they all enable keep-alives, while none use progressive JPEGs. Image compression was hit-and-miss equally among both groups. None of this is terribly surprising. Keep-alives are pretty much a default best practice at this point, and since they can be controlled relatively easily just by configuring your server to enable them, there’s no excuse for not doing this. Using progressive JPEGs (as opposed to baseline images), on the other hand, is still an uphill battle, despite the fact that some studies have shown that they improve the user experience by up to 15%. It’s surprising, though, to still see so many sites not fully leveraging image compression, as this best practice has been around for years.


If you care about delivering a faster user experience to your customers, then look to the fastest online retailers for insight. The highest-performing sites:

  • contain smaller, leaner pages,
  • understand the critical rendering path, and
  • know what resources to defer.

The good news is that there are opportunities for every site — even the ones that are relatively fast already — to fine-tune performance by taking a more aggressive approach to front-end optimization.

Get the report: State of the Union: Ecommerce Page Speed & Web Performance [Summer 2014]

The post What the 10 fastest ecommerce sites can teach us about web performance appeared first on Web Performance Today.

The Role of a Test Architect

Randy Rice's Software Testing & Quality - Fri, 08/08/2014 - 13:14
I get some really good questions by e-mail and this is one of them.

Q: What does a QA Architect do in a team, and what skills are needed for this job?

A: To me (although different organizations will have different definitions), the test architect is a person who understands testing at a very in-depth level and is responsible for things such as:
  • Designing test frameworks for test automation and testing in general
  • Directing and coordinating the implementation of test automation and other test tools
  • Designing test environments
  • Providing guidance on the selection of the most effective test design techniques in a given situation
  • Providing guidance on technical types of testing such as performance testing and security testing
  • Designing methods for the creation of test data
  • Coordinating testing with release processes
This is a "big picture" person who also understands software development and can work with developers to ensure that the test approaches align with development approaches. So, the test architect should be able to understand various project life cycles. These days, a test architect needs to understand the cloud, SOA and mobile computing.

The term "architect" in many organizations implies a very deep level of understanding, and it is a highly respected position. The expectations are pretty high. The architect can provide guidance on things that the test manager may not have technical knowledge about. Yet the test architect can focus on testing and not have to deal with the administrative things that managers deal with.

Follow-up question: "What kind of company is more likely to have such a role? A large company or a smaller company? When you say "directing and coordinating" sounds like communicating across teams, like QA -> dev -> dev ops -> db to get things done."

A: I would say that you would likely find the role in companies that truly understand the role and value of testing. For example, they know that QA does not equal testing. It would be an interesting project to research how many companies in a sample group would have test architect as a defined role. I would tend to think of larger companies being more likely to have a test architect, but I've seen smaller software companies with test architects and larger companies with no one in any type of test leadership or designer role.

Another indication might be those companies that have a more centralized testing function, such as a testing center of excellence (COE). I have some misgivings about the COE approach in that COEs often fail because people see them as little bureaucracies instead of support. Also, they tend to try to "take over the world" in a company in terms of test practice. The lifespan of a testing COE is often less than two years from what I have seen. It's good money for testing consultants to come in and establish the COE, but then they leave (or get asked to leave) and the energy/interest goes away.

And...the company would need to see the need for both functional and technical testing. You need a test architect to put those pieces together.

This is not "command and control" but rather design and facilitation. And you are right, the test architect role touches many other areas, tasks and people.

Follow-up Question: What kind of tools? I'm assuming you're talking about more than handy shell scripts. Simulators? REST clients like Postman customizations?
A: Right, the tools are typically high-end commercial tools, but there is a trend toward open source tools and frameworks. One of my friends calls the big commercial tool approach "Big Pharma". The key is that the test architect knows how to define a framework where the tools can work together. This can include the customized homegrown tools as well. Those can be handy.

By the way, the term "framework" can have many meanings. The way I'm using the term here is a structure for test design and test automation that allows functional testers to focus on understanding the application under test and design (and implement) good tests, with the technical testers building and maintaining the technical side of automation.

We also have to expand the view to include not only test automation, but other support tools as well, such as incident management, configuration management and others. For example, there is a great opportunity to use tools to greatly reduce the time needed for test design. Tools such as ACTS (from nist.gov) and Hexawise can be helpful for combinatorial testing, test data generators are needed for test automation and performance testing, and I'm especially keen on model-based testing when the model is used to generate tests (not the manual approach). I think BenderRBT does a great job in designing tests from requirements. Grid Tools has recently introduced a tool called Agile Designer to design tests based on workflow. I'll have more information on that in an upcoming post.
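To make the combinatorial-testing point concrete: exhaustive configuration testing explodes quickly, which is why covering-array tools like ACTS and Hexawise exist. A toy illustration (the parameters are invented for the example):

```python
from itertools import product

# Four small configuration parameters a test architect might have to cover.
browsers = ["Chrome", "Firefox", "IE"]
operating_systems = ["Windows", "OS X", "Linux"]
locales = ["en", "de", "ja"]
payment_methods = ["card", "paypal", "invoice"]

# Exhaustive testing means every combination of every value: 3^4 = 81 cases
# for just four three-valued parameters. Pairwise (2-way) tools can cover
# every pair of values in roughly a dozen cases instead.
all_combinations = list(product(browsers, operating_systems, locales, payment_methods))
```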

What Does it Take to Become a Test Architect?

I suppose one could take many paths. However, I would not automatically assume that someone in an SDET (Software Developer in Test) role would be qualified. That's because more than just the technical perspective is needed. The test architect also needs to understand the business processes used in the organization. Personally, I think people at this level, or who aspire to it, would profit by studying Enterprise Architecture. My favorite is the Zachman Enterprise Architecture Framework.

I would look for:

1. Knowledge and understanding at a deep level - Not the superficial knowledge that most people never get past. This means that they know about metrics, code complexity, reliability modeling, performance modeling, security vulnerabilities - all at an advanced to expert level. This is why I encourage people who are on the certification path to go beyond foundation and on to advanced and expert levels of training. This also includes being a continuous learner and reading good books in the testing field and related disciplines. I would start with Boris Beizer's "Software Testing Techniques, 2nd Ed."

2. Meaningful experience - At the risk of just putting an arbitrary number of years as the baseline, I would think at least eight to ten years of solid test design, tool application and perhaps software design and development would be needed. You need a decent portfolio of work to show you have what it takes to work in the role.

3. Great interpersonal skills - The test architect has to negotiate at times and exert influence to get their ideas across. They have to get along with others to function as part of the larger organizational team. Of course, this also includes developers, test managers and development architects. Just because you are a guru doesn't mean you have to be a stubborn and contentious jerk.

4. Objectivity - When choosing between alternative approaches and tools, objectivity is needed.

5. Problem Solving - This requires a creative approach to solving problems and seeing challenges from different angles. It's not uncommon for solutions to be devised where no one has gone before.

I hope this helps raise the awareness of this important role.

Questions or comments? I'm happy to address them!


Categories: Software Testing

Why Internal Application Monitoring Is Not What Your Customer Needs

Perf Planet - Thu, 08/07/2014 - 13:27

When faced with choosing between internal application monitoring and external synthetic monitoring, many business professionals are under the impression that as long as their website is up and running, there must not be any problems with it. This, however, couldn’t be further from the truth.

External synthetic monitoring is actually more important than anything else, because in this world of high-speed, instant gratification (eCommerce site administrators are nodding their heads), if your customers are affected by poor performance, you will not have customers for long.

For a simple explanation of this concept, imagine a fast food restaurant. First, out of all the fierce competition, a customer has to choose a specific restaurant. Once chosen, the restaurant doesn’t want to turn that business away. So if the door is unlocked, it means the store is open, or in the case of a website, “available.”

Once at the counter, the customer puts in the order, which would be the same as someone going to a website to perform any interaction. These interactions would be the clicks or “requests” within the site. Once the order goes in and as they wait at the counter, the cheeseburger, fries, and apple pie they requested are all delivered in a reasonable and expected amount of time.

However, there appears to be a problem with the drink, because it’s taking longer than usual to get it. Behind the scenes, the drink guy has to change the cylinder containing the mix, and subsequently the drink order takes 10 times longer to be filled. The customer does not know this, and as time passes, frustration grows.

When the drink is finally delivered, the unhappy customer decides to complain to the manager about the speed (or lack thereof) of getting his order. When questioned, each person explains “I did MY job.” This includes the drink guy, whose only responsibility was to deliver the drink as ordered, which in this case, required changing the cylinder. Speed was of no consequence to him, and in his mind, he fulfilled his task to expectations. So from an internal perspective, this is absolutely true. Everyone did their job 100% correctly.

The customer’s (external) perception, however, is a completely different story. To the customer, this was a complete failure and frustrating experience which is sure to tarnish his perspective of the entire brand for the foreseeable future, thus preventing any return for a long time. Even worse, it increases the likelihood of this displeasure being shared with friends on social media, which as we all know can spread like wildfire and irreparably damage a brand.

Hence the importance of monitoring from the outside from the customer’s perspective. This is an absolute necessity in order to avoid customer frustration due to behind-the-scenes problems that slow down web performance and drive away new business.

Whereas internal monitoring provides a valuable – yet incomplete – look at how your site is performing by your own standards, external monitoring can act as a buffer to bad experiences and catch problems before they become an issue for customers. This will keep them happy and satisfied, and more importantly, encourage them to keep coming back.

The post Why Internal Application Monitoring Is Not What Your Customer Needs appeared first on Catchpoint's Blog.

How to Spruce up your Evolved PHP Application – Part 2

Perf Planet - Wed, 08/06/2014 - 07:00

In the first part of my blog I covered the data side of the tuning process on my homegrown PHP application Spelix: database issues, caching on both the server and the client. By just applying these insights I could bring Spelix to a stage where the number of users could be increased by more than […]

The post How to Spruce up your Evolved PHP Application – Part 2 appeared first on Compuware APM Blog.

Categories: Load & Perf Testing

Performance Testing Mobile Apps: How to capture the complete end-user experience

Perf Planet - Wed, 08/06/2014 - 05:06
On 26th August, Ian Molyneaux will present a webinar on the challenges around Performance Testing for the mobile end-user.

Don’t Bother Indicating “Pass” or “Fail”

Eric Jacobson's Software Testing Blog - Tue, 08/05/2014 - 10:42

This efficiency didn’t occur to me until recently.  I was doing an exploratory test session and documenting my tests via Rapid Reporter.  My normal process had always been to document the test I was about to execute…

TEST: Edit element with unlinked parent

…execute the test.  Then write “PASS” or “FAIL” after it like this…

TEST: Edit element with unlinked parent – PASS

But it occurred to me that if a test appears to fail, I tag said failure as a “Bug”, “Issue”, “Question”, or “Next Time”.  As long as I do that consistently, there is no need to add “PASS” or “FAIL” to the documented tests.  While debriefing about my tests post session, the assumption will be that the test passed unless indicated otherwise.

Even though it felt like going to work without pants, after a few more sessions, it turns out, not resolving to “PASS” or “FAIL” reduces administrative time and causes no ambiguity during test reviews.  Cool!

Wait. It gets better.

On further analysis, resolving all my tests to “PASS” or “FAIL” may have prevented me from actual testing.  It was influencing me to frame everything as a check.  Real testing does not have to result in “PASS” or “FAIL”.  If I didn’t know what was supposed to happen after editing an element with an unlinked parent (as in the above example), well then it didn’t really “PASS” or “FAIL”, right?  However, I may have learned something important nevertheless, which made the test worth doing…I’m rambling.

The bottom line is, maybe you don’t need to indicate “PASS” or “FAIL”.  Try it.

Categories: Software Testing

9 awesome posts about web performance (that I wish I’d written)

Web Performance Today - Tue, 08/05/2014 - 07:10

For every post I write about performance, there are dozens that I read. Every so often, I read one that makes me clutch my (metaphorical) pearls and wish I’d written it myself. Here’s a batch of recent I-wish-I’d-written-that posts by people you should be following, if you aren’t already.

The Five Pillars of Web Performance by Jason Thibeault

“Striving for the fastest possible delivery requires that performance considerations be taken into account from the very first design meeting of your website, web application, or even mobile offering. And to do that means you need an understanding of the five pillars of website performance: static objects, dynamic content, interface, page size, and storage.” Keep reading >

The Principles of Performance by Design by Eric Mobley

“When you put performance as a priority in the beginning of a project, you will use it to guide your decision making. Performance should not be left to the developer right before a site is launched, that is far too late—it is really the result of decisions made by the entire team starting at the beginning of the project.” Keep reading >

Critical Rendering Path by Patrick Sexton

“The most important concept in pagespeed is the critical rendering path. This is true because understanding this concept can help you do a very wonderful thing. It gives you the power to make a large webpage with many resources load faster than a small webpage with few resources. I like that.” Keep reading >

What Every Frontend Developer Should Know About Webpage Rendering by Alexander Skutin

“Today I’d like to focus on the subject of web page rendering and why it is important in web development. A lot of articles are available covering the subject, but the information is scattered and somehow fragmented. To wrap my head around the subject, for example, I had to study a lot of sources. That’s why I decided I should write this article. I believe the article will be useful for beginners as well as for more advanced developers who want to refresh and structure what they already know.” Keep reading >

Should ‘Mobile First’ be ‘Performance First’? by Ben Cooper

“Your users will thank you if you take a Performance First approach. And you will thank yourself, because learning about performance will allow you to gain so much awesome knowledge, which will become helpful in so many areas. It’s a win-win for everyone.” Keep reading >

Responsive Web Design (RWD) and User Experience by Amy Schade

“Responsive design is a tool, not a cure-all. While using responsive design has many perks when designing across devices, using the technique does not ensure a usable experience (just as using a gourmet recipe does not ensure the creation of a magnificent meal.) Teams must focus on the details of content, design, and performance in order to support users across all devices.” Keep reading >

“RWD Is Bad for Performance” Is Good for Performance by Tim Kadlec

“Saying responsive design is bad for performance is the same as saying optimizing for touch is bad for performance. It doesn’t make much sense because you can’t expect to do either in isolation and create a good overall experience. All that being said, I’ve learned to embrace the “responsive design is bad for performance” statement whenever it comes up.” Keep reading >

It’s Time for a Web Page Diet by Terrence Dorsey

“Slimming down web sites is not necessarily a matter of learning new and sophisticated programming techniques. Rather, getting back to basics and focusing on correct implementation of web development essentials — HTML, CSS and JavaScript — may be all it takes to make sure your own web sites are slim, speedy and responsive.” Keep reading >

The Ultimate Image Optimization Cheat Sheet by Dean Hume

“Images play a massive role in modern web development today. They comprise of around 62% of the average page’s total payload, which is an astonishing amount! In terms of web page performance, images can be a huge performance roadblock, but they don’t have to be. Simple image optimization techniques can make a massive difference to your page load times and significantly reduce the overall weight of your page.” Keep reading >

I’m sure there are some great articles I’ve missed. If you know of any, please let me know!

The post 9 awesome posts about web performance (that I wish I’d written) appeared first on Web Performance Today.

Live Online Classes for Software Quality

Randy Rice's Software Testing & Quality - Mon, 08/04/2014 - 10:19
Rice Consulting has teamed with Wind Ridge International Consulting to offer the following courses live, online.

These are all short daily online events, spread over a week. Five sessions, 1.5 hours each. 

To learn more about a class, just click on the title or the "More Info" link. To see the specific dates and times, prices, and to register, click on the "Register" link below each description. 

Building a High Performance Team
A key role of every successful manager is building and leading a high performance team (HPT) that provides business value to the organization.  The HPT plays a central role in any software organization. Therefore, its members need a unique combination of well-developed technical, communication, leadership, and people skills in order to be successful. This class focuses on the unique challenges software organizations face in building, leading, and retaining a HPT. More Info...


Establishing the Software Quality Organization

This class describes the activities required to establish a successful software quality organization (SQO).  The SQO comprises three distinct quality functions: quality assurance, quality control (testing), and configuration management. These work in harmony and complement each other. More Info...


Fundamentals of Process Improvement

There are two ways to make process improvement changes: random or planned.  You can’t start making improvements until you know your exact current state and determine where you want to go.  This class takes the students through a step-by-step proven process for making planned improvements. More Info... 


Introduction to Configuration Management
This class introduces the attendees to the basics of CM as well as a practical process for establishing and maintaining a CM program that meets the corporate goals and information needs. More Info...  


The instructor for these classes is Thomas Staab. Tom is a frequent advisor to numerous companies and government agencies on the topic of technology management, business/technology interface, resource and process optimization, project management, working with the multi-generational/ multi-cultural workforce, successful outsourcing, and maximizing business value and ROI.
Tom specializes in improving processes, managing projects, and maximizing results with the goal of enabling senior management, business and technology leaders to form a cohesive unit and become even more successful corporate and government leaders.
He quickly gains the full and active support of everyone from senior leaders down to front-line managers. His mission is to increase business value and produce a significant return on investment (ROI) for clients.
Categories: Software Testing

Summer 2014 State of the Union: Ecommerce Page Speed & Web Performance [INFOGRAPHIC]

Web Performance Today - Thu, 07/31/2014 - 11:21

Last week, we released our quarterly State of the Union for ecommerce web performance, which, among other things, found that:

  • The median top 100 retail site takes 6.2 seconds to render primary content and 10.7 seconds to fully load.
  • 17% of the top 100 pages took 10 seconds or longer to load their feature content. Only 14% delivered an optimal sub-3-second user experience.
  • The median top 100 page is 1677 KB in size — 67% larger than it was just one year ago, when the median page was 1007 KB.

These findings and more — including Time to Interact and Load Time for the ten fastest sites — are illustrated in this set of infographics. Please feel free to download and share them. And if you have any questions about our research or findings, don’t hesitate to ask me.


Get the report: State of the Union: Ecommerce Page Speed & Web Performance [Summer 2014]

The post Summer 2014 State of the Union: Ecommerce Page Speed & Web Performance [INFOGRAPHIC] appeared first on Web Performance Today.

Testing on the Toilet: Don't Put Logic in Tests

Google Testing Blog - Thu, 07/31/2014 - 10:59
by Erik Kuefler

This article was adapted from a Google Testing on the Toilet (TotT) episode. You can download a printer-friendly version of this TotT episode and post it in your office.

Programming languages give us a lot of expressive power. Concepts like operators and conditionals are important tools that allow us to write programs that handle a wide range of inputs. But this flexibility comes at the cost of increased complexity, which makes our programs harder to understand.

In tests, unlike in production code, simplicity is more important than flexibility. Most unit tests verify that a single, known input produces a single, known output. Tests can avoid complexity by stating their inputs and outputs directly rather than computing them; otherwise, it's easy for tests to develop their own bugs.

Let's take a look at a simple example. Does this test look correct to you?

@Test public void shouldNavigateToPhotosPage() {
  String baseUrl = "http://plus.google.com/";
  Navigator nav = new Navigator(baseUrl);
  assertEquals(baseUrl + "/u/0/photos", nav.getCurrentUrl());
}
The author is trying to avoid duplication by storing a shared prefix in a variable. Performing a single string concatenation doesn't seem too bad, but what happens if we simplify the test by inlining the variable?

@Test public void shouldNavigateToPhotosPage() {
  Navigator nav = new Navigator("http://plus.google.com/");
  assertEquals("http://plus.google.com//u/0/photos", nav.getCurrentUrl()); // Oops!
}
After eliminating the unnecessary computation from the test, the bug is obvious—we're expecting two slashes in the URL! This test will either fail or (even worse) incorrectly pass if the production code has the same bug. We never would have written this if we stated our inputs and outputs directly instead of trying to compute them. And this is a very simple example—when a test adds more operators or includes loops and conditionals, it becomes increasingly difficult to be confident that it is correct.

Another way of saying this is that, whereas production code describes a general strategy for computing outputs given inputs, tests are concrete examples of input/output pairs (where output might include side effects like verifying interactions with other classes). It's usually easy to tell whether an input/output pair is correct or not, even if the logic required to compute it is very complex. For instance, it's hard to picture the exact DOM that would be created by a JavaScript function for a given server response. So the ideal test for such a function would just compare against a string containing the expected output HTML.

When tests do need their own logic, such logic should often be moved out of the test bodies and into utilities and helper functions. Since such helpers can get quite complex, it's usually a good idea for any nontrivial test utility to have its own tests.
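One way that could look, sketched here in JavaScript (the `joinUrl` helper, its name, and its slash-normalizing behavior are illustrative assumptions, not from the original article):

```javascript
// Hypothetical helper: joins a base URL and a path, normalizing the
// slash between them so the test bodies can stay literal.
function joinUrl(base, path) {
  // Strip trailing slashes from the base and leading ones from the path.
  return base.replace(/\/+$/, "") + "/" + path.replace(/^\/+/, "");
}

// The helper gets its own direct, literal tests:
console.log(joinUrl("http://plus.google.com/", "/u/0/photos"));
// -> http://plus.google.com/u/0/photos

// ...and the navigation test can then state its expectation as a plain string:
// assertEquals("http://plus.google.com/u/0/photos", nav.getCurrentUrl());
```

Because the joining logic lives in one small, separately verified function, the double-slash bug from the example above has nowhere to hide.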

Categories: Software Testing

Bar Exam Failures Highlight Importance of Performance Testing

Perf Planet - Thu, 07/31/2014 - 08:51

It goes without saying that old-fashioned methods of doing business and keeping records are increasingly shifting into digital and/or online formats. Usually that means making life easier for everyone involved. Of course, when the systems behind such technological advancements fail, it can mean a nightmare instead.

This week, law students around the country have been going through the painstaking process of taking the bar exam, the culmination of weeks – sometimes months – of laborious studying that will determine whether they can actually practice law after pouring years of their life and tens of thousands of dollars into their education.

So you can imagine their frustration when, after completing the essay part of the exam, those students were unable to send their answers in.

ExamSoft, whose software is used by many states to conduct their bar exams (as well as by individual law schools for regular exams throughout the year), experienced server overload when it came time for those thousands of exam takers to submit their essays, as those trying to do so came face to face with ExamSoft’s version of the pinwheel of death:

The failures brought to many people’s minds the disaster that was the introduction of HealthCare.gov so many months ago, while simultaneously putting to rest the idea that placing government services in the hands of a private company guarantees success.

As the legions of test takers took to social media to vent their frustrations at the company, the team at ExamSoft went into panic mode as their IT staff desperately tried to correct the problem while the rest supposedly began contacting the state bars in order to explain why so many people seemed to miss the deadline to submit their exams.

The process for taking the exam differs from state to state, and seeing as ExamSoft is used by several state bars, the terrible experiences on Tuesday varied. However, there were reports of people being told to try submitting from multiple locations – none of which worked – and then waiting for as much as three hours while ExamSoft tried to solve the problem.

Adding to the students’ fury is the fact that they have to pay a significant chunk of money simply for the “privilege” of using the software. To download it can cost as much as $150 (again, depending on the state), and several state bars have an additional triple-digit fee on top of that. After being forced to pay that much, one should be able to expect a certain degree of functionality, but obviously that didn’t happen on Tuesday.

The problem was later blamed on a processing issue on ExamSoft’s end, for which the company apologized on its Facebook page, and users who experienced the errors were given deadline extensions so that they won’t be penalized. But the fallout from this fiasco is far from over. The company has also left itself open to potential legal ramifications (after all, these are future lawyers we’re talking about), since the end users who spent hours dealing with these failures were forced to take valuable time away from preparing for the second part of the exam the following day.

In the meantime, we once again see the importance of performance and capacity testing BEFORE undertaking a big online task. Failure to do so can have wide-ranging ramifications for a long time.


The post Bar Exam Failures Highlight Importance of Performance Testing appeared first on Catchpoint's Blog.

Sanity Check: Object Creation Performance

Perf Planet - Tue, 07/29/2014 - 19:59

[Update: Seriously, go read this post by Vyacheslav Egorov @mraleph. It's the most fantastic thing I've read in a long, long time!]

I hear all kinds of myths and misconceptions spouted as facts regarding the performance characteristics of various object creation patterns in JavaScript. The problem is that there are shreds of fact wrapped up in all kinds of conjecture and, frankly, FUD (fear-uncertainty-doubt), so separating them out is very difficult.

So, I need to assert() a couple of things here at the beginning of this post, so we’re on the same page about context.

  1. assert( "Your app creates more than just a few dozen objects" ) — If your application only ever creates a handful of objects, then the performance of these object creations is pretty much moot, and you should ignore everything I talk about in this post. Move along.
  2. assert( "Your app creates objects not just on load but during critical path operations." ) — If the only time you create your objects is during the initial load, and (like above) you’re only creating a handful of these, the performance differences discussed here are not terribly relevant. If you don’t know what the critical path is for your application, stop reading and go figure that out first. If you use an “object pool” where you create objects all at once upfront, and use these objects later during run-time, then consult the first assertion.

Let me make the conclusions of those assertions clear: if you’re only creating a few objects (and by that, I mean 100 or fewer), the performance of creating them is basically irrelevant.

If you’ve been led to believe you need to use prototype, new, and class style coding in JS to get maximum object creation performance for just a couple of objects, you need to set such silliness aside. Come back to this article if you ever need to create lots of objects.

I suspect the vast majority of you readers don’t need to create hundreds or thousands of objects in your application. In my 15 year JS dev career, I’ve seen a lot of apps in a lot of different problem domains, but it’s been remarkably rare how often JavaScript apps legitimately need to create enough objects where the performance of such creation was something to get worked up about.

But, to read other blogs and books on the topic, you’d think just about every JavaScript application needs to avail itself of complicated object “inheritance” hierarchies to have any hope of reasonable performance.


Object Creations Where Performance Isn’t Relevant

Let’s list some examples of object creation needs where the performance differences will never be a relevant concern of yours, beyond “premature micro optimizations”:

  • You have some UI widgets in your page, like a calendar widget, a few smart select drop-downs, a navigation toolbar, etc. In total, you have about a dozen objects (widgets) you create to build the pieces of your UI. Most of them only ever get created once at load time of your app. A few of them (like smart form elements) might get recreated from time to time, but all in all, it’s pretty lightweight on object creations in your app.
  • You create objects to represent your finite data model in your application. You probably have less than 10 different data-domains in your whole application, which probably means you have at most one object per domain. Even if we were really liberal and counted sub-objects, you have way less than 50 total objects ever created, and few if any of them get recreated.
  • You’re making a game with a single object for each character (good guy, enemy, etc). At any given time, you need 30 or fewer objects to represent all these characters. You can (and should) be creating a bunch of objects in an object pool at game load, and reusing objects (to avoid memory churn) as much as possible. So, if you’re doing things properly, you actually won’t be doing an awful lot of legitimate object creation.
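The object-pool idea from that game example can be sketched roughly like this (the `makePool` API is an illustrative assumption, not a specific library):

```javascript
// Minimal object pool: allocate everything up front, then reuse
// objects by checking them out and back in, avoiding memory churn
// during the game loop.
function makePool(size, factory) {
  var free = [];
  for (var i = 0; i < size; i++) free.push(factory());
  return {
    acquire: function () { return free.pop() || factory(); },
    release: function (obj) { free.push(obj); }
  };
}

// 30 enemy objects created once at load time.
var enemies = makePool(30, function () {
  return { x: 0, y: 0, alive: false };
});

var e = enemies.acquire(); // reuse, don't allocate, in the hot path
e.alive = true;
// ...later, when the enemy dies, hand the object back:
e.alive = false;
enemies.release(e);
```

With this shape, the only object creations that happen per-frame are zero; everything hot reuses pre-built objects.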

Are you seeing the pattern? There’s a whole lot of scenarios where JS developers have commonly said they need to absolutely maximize object creation performance, and there’s been plenty of obsession over micro-optimizations fueled by (irresponsible and misleading) micro-benchmarks on jsPerf that over-analyze various quirks and niches.

But in truth, the few microseconds difference between the major object creation patterns when you only have a dozen objects to ever worry about is so ridiculously irrelevant, it’s seriously not even worth you reading the rest of this post if that’s the myth you stubbornly insist on holding strong to.

Object Creation On The Critical Path

Let me give a practical example I ran into recently where quite clearly, my code was going to create objects in the critical path, where the performance definitely matters.

Native Promise Only is a polyfill I’ve written for ES6 Promises. Inside this library, every time a new promise needs to be created (nearly everything inside of promises creates more promises!), I need to create a new object. Comparatively speaking, that’s a lot of objects.

These promise objects tend to have a common set of properties being created on them, and in general would also have a shared set of methods that each object instance would need to have access to.

Moreover, a promise library is designed to be used extensively across an entire application, which means it’s quite likely that some or all of my code will be running in the critical path of someone’s application.

As such, this is a perfect example where paying attention to object creation performance makes sense!

Simple Objects

Let’s draw up an app scenario and use it to evaluate various object creation options: a drawing application, where each line segment or shape someone draws on the canvas will be individually represented by a JS object with all the meta data in them.

On complex drawing projects, it wouldn’t be at all surprising to see several thousand drawing elements placed on the canvas. Responsiveness while freehand drawing is most definitely a “critical path” for UX, so creating the objects performantly is a very reasonable thing to explore.

OK, let’s examine the first object creation pattern — the most simple of them: just simple object literals. You could easily do something like:

function makeDrawingElement(shapeType,coords,color) {
    var obj = {
        id: generateUniqueId(),
        type: shapeType,
        fill: color,
        x1: coords.x1,
        y1: coords.y1,
        x2: coords.x2,
        y2: coords.y2,

        // add some references to shared shape utilities
        deleteObj: drawing.deleteObj,
        moveForward: drawing.moveForward,
        moveBackward: drawing.moveBackward
    };

    return obj;
}

var el = makeDrawingElement(
    "line",
    { x1:10, y1:10, x2:50, y2:100 },
    "red"
);

OK, so all we’re doing here is creating a new object literal each time we create a new shape on the drawing surface. Easy enough.

But it’s perhaps the least performant of the various options. Why?

As an internal implementation detail, it’s hard for JS engines to find and optimize repeated object literals as shown here. Moreover, as you can see, we’re copying function references (not functions themselves!) onto each object.
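That reference-copying point can be seen directly in a stripped-down version of the snippet (the tiny `drawing` and `generateUniqueId` stand-ins are hypothetical, just to make it self-contained):

```javascript
// Hypothetical stand-ins for the shared utilities used above.
var drawing = {
  deleteObj: function () { return "deleted " + this.id; }
};
var nextId = 0;
function generateUniqueId() { return ++nextId; }

function makeDrawingElement(color) {
  return {
    id: generateUniqueId(),
    fill: color,
    deleteObj: drawing.deleteObj // copies a reference, not the function itself
  };
}

var a = makeDrawingElement("red");
var b = makeDrawingElement("blue");

// Both literals point at the *same* function object...
console.log(a.deleteObj === b.deleteObj);   // true
// ...but each object still pays for its own property slot holding it.
console.log(a.hasOwnProperty("deleteObj")); // true
console.log(a.deleteObj());                 // "deleted 1"
```

So the function bodies aren’t duplicated, but every instance carries its own slot for each reference, which is the memory cost the later patterns avoid.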

Let’s see some other patterns which can address those concerns.

Hidden Classes

It’s a famous fact that JavaScript engines like v8 track what’s called a “hidden class” for an object, and as long as you’re repeatedly creating new objects of this same “shape” (aka “hidden class”), it can optimize the creation quite a bit.

So, it’s common for people to suggest creating objects like this, to take advantage of “hidden class” implementation optimizations:

function DrawingElement(shapeType,coords,color) {
    this.id = generateUniqueId();
    this.type = shapeType;
    this.fill = color;
    this.x1 = coords.x1;
    this.y1 = coords.y1;
    this.x2 = coords.x2;
    this.y2 = coords.y2;

    // add some references to shared shape utilities
    this.deleteObj = drawing.deleteObj;
    this.moveForward = drawing.moveForward;
    this.moveBackward = drawing.moveBackward;
}

var el = new DrawingElement(
    "line",
    { x1:10, y1:10, x2:50, y2:100 },
    "red"
);

Simple enough of a change, right? Well, depends on your perspective. It seems like an interesting and strange “hack” that we have to add our properties to a this object created by a new Fn(..) call just to opt into such magical optimizations.

In fact, this “hidden class” optimization is often misconstrued, which contributes to its “magical” sense.

It’s true that all those new DrawingElement(..) created object instances should share the same “hidden class”, but the more important optimization is that because DrawingElement(..) is used as a somewhat declarative constructor, the engine can estimate before even the first constructor call how many this.prop = .. assignments will happen, so it knows generally how “big” to pre-size the objects that will be made by the “hidden class”.

As long as this DrawingElement(..) constructor adds all the properties to the this object instance that are ever going to be added, the size of the object instance won’t have to grow later. This leads to the best-case performance in that respect.

The somewhat declarative nature of new Fn(..) and this aids in the optimization estimates, but it also invokes the implication that we’re actually doing class-oriented coding in JS, and it thus invites that sort of design abstraction on top of our code. Unfortunately, JS will fight you from all directions in that effort. Building class-like abstractions on top of your code will often hurt your performance, not help it.
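The pre-sizing idea reduces to a simple rule, sketched here with a minimal `Point` (an illustrative example, not the article’s `DrawingElement`):

```javascript
// Every instance gets the same properties, assigned in the same order,
// so engines like V8 can give all instances one shared hidden class
// and pre-size their backing storage.
function Point(x, y) {
  this.x = x;
  this.y = y; // all properties the instance will ever need, up front
}

var p1 = new Point(1, 2);
var p2 = new Point(3, 4);

// Adding a property to only one instance later forces a hidden-class
// transition and may grow that object's storage -- avoid in hot code.
p2.z = 5;

console.log(p1.x + p2.y + p2.z); // 10
```

The rule of thumb: assign every property in the constructor, in a fixed order, and don’t bolt new ones on afterward.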

Shared .prototype Methods

Many developers will cite a further performance problem with this code (specifically, its memory usage): the copying of function references, when we could instead use the [[Prototype]] “inheritance” (more accurately, delegation link) to share methods among many object instances without duplicating function references.

So, we keep building on top of the “class abstraction” pattern with code like this:

function DrawingElement(shapeType,coords,color) {
    this.id = generateUniqueId();
    this.type = shapeType;
    this.fill = color;
    this.x1 = coords.x1;
    this.y1 = coords.y1;
    this.x2 = coords.x2;
    this.y2 = coords.y2;
}

// add some references to shared shape utilities
// aka, defining the `DrawingElement` "class" methods
DrawingElement.prototype.deleteObj = drawing.deleteObj;
DrawingElement.prototype.moveForward = drawing.moveForward;
DrawingElement.prototype.moveBackward = drawing.moveBackward;

var el = new DrawingElement(
    "line",
    { x1:10, y1:10, x2:50, y2:100 },
    "red"
);

Now, we’ve placed the function references on DrawingElement.prototype object, which el will automatically be [[Prototype]] linked to, so that calls such as el.moveForward() will continue to work as before.
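What that sharing buys can be verified in miniature (an illustrative `Shape`, not the article’s code):

```javascript
function Shape(color) {
  this.fill = color;
}
Shape.prototype.describe = function () {
  return this.fill + " shape";
};

var s1 = new Shape("red");
var s2 = new Shape("blue");

// One function object, reached by delegation -- not copied per instance:
console.log(s1.describe === s2.describe);   // true
console.log(s1.hasOwnProperty("describe")); // false
console.log(s1.describe());                 // "red shape"
```

Each instance holds only its own data properties; the method lives once, on the prototype, and `this` binds to whichever instance made the call.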

So… now it certainly looks like we’ve fully embraced a DrawingElement “class”, and we’re “instantiating” elements of this class with each new call. This is the gateway drug of complexity of design/abstraction that will lead you perhaps to later try to do things like this:

function DrawingElement(..) { .. }

function Line(..) {
    DrawingElement.call( this );
}
Line.prototype = Object.create( DrawingElement.prototype );

function Shape(..) {
    DrawingElement.call( this );
}
Shape.prototype = Object.create( DrawingElement.prototype );

..

Be careful! This is a slippery slope. You will very quickly create enough abstraction here to completely erase any potential performance micro-gains you may be getting over the normal object literal.

Objects Only

I’ve written extensively in the past about an alternate simpler pattern I call “OLOO” (objects-linked-to-other-objects), including the last chapter of my most recent book: “You Don’t Know JS: this & Object Prototypes”.

OLOO-style coding to approach the above scenario would look like this:

var DrawingElement = {
    init: function(shapeType,coords,color) {
        this.id = generateUniqueId();
        this.type = shapeType;
        this.fill = color;
        this.x1 = coords.x1;
        this.y1 = coords.y1;
        this.x2 = coords.x2;
        this.y2 = coords.y2;
    },

    deleteObj: drawing.deleteObj,
    moveForward: drawing.moveForward,
    moveBackward: drawing.moveBackward
};

var el = Object.create( DrawingElement ); // notice: no `new` here

el.init( "line", { x1:10, y1:10, x2:50, y2:100 }, "red" );

This OLOO-style approach accomplishes the exact same functionality/capability as the previous snippets. Same number and shape of objects as before.

Instead of the DrawingElement.prototype object belonging to the DrawingElement(..) constructor function, with OLOO, an init(..) function method belongs to the DrawingElement object. Now, we don’t need to have any references to .prototype, nor any usage of the new keyword.

This code is simpler to express.
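The link-then-initialize shape of OLOO can be seen in miniature (a hypothetical `Widget`, not from the article):

```javascript
var Widget = {
  init: function (name) {
    this.name = name;
    return this; // returning `this` lets us chain link + init
  },
  label: function () {
    return "widget:" + this.name;
  }
};

// Two explicit steps instead of one `new` call: link, then initialize.
var w = Object.create(Widget).init("nav");

console.log(Object.getPrototypeOf(w) === Widget); // true
console.log(w.label());                           // "widget:nav"
```

The created object delegates directly to the plain `Widget` object; there is no constructor function and no `.prototype` indirection involved.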

But is it faster, slower, or the same as the previous “class” approach?

TL;DR: It’s much slower. <frowny-face>

Unfortunately, the OLOO-style implicit object initialization (using init(..)) is not declarative enough for the engine to make the same sort of pre-”guesses” that it can with the constructor form discussed earlier. So, the this.prop = .. assignments are likely expanding the object storage several times, leading to extra churn of those “hidden classes”.

Also, it’s undeniable that with OLOO-style coding, you make an “extra” call, by separately doing Object.create(..) plus el.init(..). But is that extra call a relevant performance penalty? It seems Object.create(..) is in fact a bit slower.

On the good side, in both cases, there’s no copied function references to each instance, but instead just shared methods we [[Prototype]] delegate to, so at the very least, THAT OPTIMIZATION is also in play with both styles of coding.

Performance Benchmark: OO vs OLOO

The problem with definitively quantifying the performance hit of OLOO-style object creation is that micro-benchmarking is famously flawed on such issues. The engines do all kinds of special optimizations that are hard to “defeat” to isolate and test accurately.

Nevertheless, I’ve put together this test to try to get some kind of reasonable approximation. Just take the results with a big grain of salt:

So, clearly, OLOO is a lot slower according to this test, right?

Not so fast. Let’s sanity check.

Importantly, context is king. Look at how fast both of them are running.

In recent Chrome, OLOO is running up to ~2.3 million ops/sec. If you do the math, that’s just over 2 per microsecond. IOW, each creation operation takes ~430 nanoseconds. That’s still insanely damn fast, if you ask me.

What about classical OO-style? Again, in recent Chrome: ~30 million objects created per second, or 30 per microsecond. IOW, each creation operation takes ~33 nanoseconds. So… we’re talking about the difference between an object creation taking 33 nanoseconds and 430 nanoseconds.

Let’s look at it more practically: say your drawing app needs to create 10,000 objects (a pretty significantly complex drawing, TBH!). How long will each approach take? Classical OO will take ~330 microseconds, and OLOO will take ~4.3 milliseconds.

That’s a difference of 4 milliseconds in an absolutely worst-case sort of scenario. That’s ~1/4 of the average screen refresh cycle (16.7ms for 60fps). You could re-create all ten thousand objects three or four times, all at once in a row, and still only drop at most a single animation frame.
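That arithmetic can be checked directly (the per-operation costs are the approximate benchmark figures quoted above, not exact measurements):

```javascript
// Rough sanity check of the numbers in this section.
var N = 10000;     // objects in a complex drawing
var ooNs = 33;     // ~nanoseconds per classical-OO creation
var olooNs = 430;  // ~nanoseconds per OLOO creation

var ooMs = (N * ooNs) / 1e6;     // ~0.33 ms total
var olooMs = (N * olooNs) / 1e6; // ~4.3 ms total

var frameMs = 1000 / 60;         // ~16.7 ms per frame at 60fps
console.log((olooMs - ooMs) / frameMs); // ~0.24 -- about 1/4 of a frame
```

So even a full re-creation of all ten thousand objects costs roughly a quarter of one animation frame in the slower style.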

In short, while “OLOO is 90% slower” seems pretty significant, I actually think it’s probably not terribly relevant for most apps. Only the rarest of apps would ever be creating several tens of thousands of objects per second and needing to squeeze out a couple of milliseconds per cycle.


First, and foremost, if you’re not creating thousands of objects, stop obsessing about object creation performance.

Secondly, even if you are creating ten thousand objects at once, 4 extra milliseconds in the worst case on the critical path is not likely to bring your app to its knees.

Knowing how to identify the cases where you need (and don’t need!) to optimize your object creations, and how to do so, is very important to the mature JavaScript developer. Don’t just listen to the hype. Give it some sane, critical thought.