Feed aggregator

Web Performance News of the Week

LoadStorm - Fri, 04/24/2015 - 17:04

Mobile friendliness algorithm update went global this week

Google began globally rolling out their mobile search algorithm update, which adds mobile-friendliness as a ranking signal. The updated search algorithm improves result rankings for web pages that are optimized for mobile traffic, and should impact more sites than Google’s Panda and Penguin updates.

  • Only page rankings on mobile searches will be affected by the new search algorithm. Contrary to what was said in a Google hangout Q&A session earlier this year, the new algorithm is isolated to mobile searches.
  • There are no degrees of mobile-friendliness; your page is either mobile-friendly or it isn’t. To check if your site is up to par, you can test your page URL using their Mobile-Friendly Test.
  • The mobile-friendly ranking changes will affect results on a page-by-page basis, not by entire websites.
  • Pages will automatically be re-processed when changes are made to make them mobile-friendly, because the algorithm runs in real time. To expedite re-processing, you can ask Google to re-crawl your URLs by using Fetch as Google with Submit to Index.

Google has reported that the share of mobile-friendly sites has grown by 4.7% over the past two months. This is great news, as by 2020, 90 percent of the world’s population over 6 years old is projected to have a mobile phone, and smartphone subscriptions are expected to top 6.1 billion.

Target website overwhelmed during Lilly Pulitzer frenzy

This week, the limited-edition Lilly Pulitzer collection went live on Target’s website, giving Black Friday a run for its money. The resulting spike in traffic was too much for the site, causing it to crash several times and leaving customers unable to view their carts. Eventually, the entire Target website was overwhelmed and went down. Target responded to frustrated shoppers via Twitter, announcing that they were adjusting the site. Many customers noted that by that point it didn’t matter, because most of the stock had sold out within minutes.

This isn’t the first time Target has experienced a crash after launching a fashion line. The 2011 limited-edition launch of a high-end Italian line, Missoni, caused the site to crash and remain down for most of the day. Many customers turned to the official Lilly Pulitzer website after the crash.

Security advisory goes out regarding WordPress Plugins and themes

A vulnerability re-discovered last week prompted a coordinated plugin update between multiple developers and the WordPress core security team to address a cross-site scripting (XSS) security vulnerability. The vulnerability arose after developers, misled by the official WordPress documentation, misused two common functions. The add_query_arg() and remove_query_arg() functions are often used by developers to modify and add query strings to URLs. Neither function escapes user input automatically, so their output must be escaped with functions such as esc_url() or esc_url_raw() before it is printed.
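To make the pattern concrete, here is a minimal, hypothetical sketch in Python (not the WordPress code itself) of the same class of bug: a helper that, like add_query_arg(), returns a URL built from attacker-influenced input without escaping it, and the difference escaping makes on output. The add_query_args() helper and URLs below are invented for illustration; html.escape() stands in for the role esc_url() plays in WordPress.

    # Hypothetical sketch (not WordPress code): why echoing an unescaped,
    # user-influenced URL enables reflected XSS, and how escaping closes the hole.
    import html
    from urllib.parse import urlencode

    def add_query_args(url, params):
        # Mimics the behavior described above: appends query parameters
        # without sanitizing the base URL, which may carry attacker input.
        sep = '&' if '?' in url else '?'
        return url + sep + urlencode(params)

    # The request URI is attacker-influenced: a crafted link smuggles markup in.
    request_uri = '/shop/?s="><script>alert(1)</script>'
    link = add_query_args(request_uri, {'page': 2})

    unsafe_html = '<a href="%s">Next</a>' % link             # vulnerable: raw output
    safe_html = '<a href="%s">Next</a>' % html.escape(link)  # escaped before printing

    print(unsafe_html)  # the <script> payload lands inside the page markup
    print(safe_html)    # quotes and angle brackets are entity-encoded, payload inert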

The Sucuri team analyzed the top 300-400 plugins, and found at least 15 different plugins containing the vulnerability:

Jetpack
WordPress SEO
Google Analytics by Yoast
All In one SEO
Gravity Forms
Multiple Plugins from Easy Digital Downloads
UpdraftPlus
WP-E-Commerce
WPTouch
Download Monitor
Related Posts for WordPress
My Calendar
P3 Profiler
Give
Multiple iThemes products including Builder and Exchange
Broken-Link-Checker
Ninja Forms

Whether or not your site uses any of the plugins listed above, it’s a good idea to update your plugins immediately to eliminate any risk.

IBM brings security analytics to the cloud

This Tuesday, IBM announced it will be launching two new threat analytics services in the cloud as Software-as-a-Service tools, giving companies the ability to quickly prioritize security threats.

The two new cloud services are IBM Security Intelligence and Intelligent Log Management. By bringing its Security Intelligence technology to the cloud, IBM will let customers analyze security threat information from over 500 different data sources for devices, systems, and applications to determine whether a real security threat exists or the security-related events are simply anomalies. “The option of doing predictive analytics via the cloud gives security teams the flexibility to bring in skills, innovation and information on demand across all of their security environments,” said Jason Corbin, vice president of product management and strategy at IBM.

The post Web Performance News of the Week appeared first on LoadStorm.

How to Automate Enterprise Application Monitoring with Ansible – Part II

In the first part of this series, I shared practical advice on how to automatically deploy Dynatrace Agents into distributed enterprise applications using Ansible in less than 60 seconds. While we had conveniently assumed its presence back then, we will today address the automated installation of our Dynatrace Application Monitoring solution comprising Clients, Collectors and […]

The post How to Automate Enterprise Application Monitoring with Ansible – Part II appeared first on Dynatrace APM Blog.

Categories: Load & Perf Testing

To improve testing, snoop on the competition

The Quest for Software++ - Thu, 04/23/2015 - 01:47

This is an excerpt from my upcoming book, Fifty Quick Ideas To Improve Your Tests

As a general rule, teams focus the majority of testing activities on their zone of control, on the modules they develop, or the software that they are directly delivering. But it’s just as irresponsible not to consider competition when planning testing as it is in the management of product development in general, whether the field is software or consumer electronics.

Software products that are unique are very rare, and it’s likely that someone else is working on something similar to the product or project that you are involved with at the moment. Although the products might be built using different technical platforms and address different segments, key usage scenarios probably translate well across teams and products, as do the key risks and major things that can go wrong.

When planning your testing activities, look at the competition for inspiration — the cheapest mistakes to fix are the ones already made by other people. Although it might seem logical that people won’t openly disclose information about their mistakes, it’s actually quite easy to get this data if you know where to look.

Teams working in regulated industries typically have to submit detailed reports on problems caught by users in the field. Such reports are kept by the regulators and can typically be accessed in their archives. Past regulatory reports are a priceless treasure trove of information on what typically goes wrong, especially because of the huge financial and reputation impact of incidents that are escalated to such a level.

For teams that do not work in regulated environments, similar sources of data could be news websites or even social media networks. Users today are quite vocal when they encounter problems, and a quick search for competing products on Facebook or Twitter might uncover quite a few interesting testing ideas.

Lastly, most companies today operate free online support forums for their customers. If your competitors have a publicly available bug tracking system or a discussion forum for customers, sign up and monitor it. Look for categories of problems that people typically inquire about and try to translate them to your product, to get more testing ideas.

For high-profile incidents that have happened to your competitors, especially ones in regulated industries, it’s often useful to conduct a fake post-mortem. Imagine that a similar problem was caught by users of your product in the field and reported to the news. Try to come up with a plausible excuse for how it might have happened, and hold a fake retrospective about what went wrong and why such a problem would be allowed to escape undetected. This can help to significantly tighten up testing activities.

Key benefits

Investigating competing products and their problems is a cheap way of getting additional testing ideas, not about theoretical risks that might happen, but about things that actually happened to someone else in the same market segment. This is incredibly useful for teams working on a new piece of software or an unfamiliar part of the business domain, when they can’t rely on their own historical data for inspiration.

Running a fake post-mortem can help to discover blind spots and potential process improvements, both in software testing and in support activities. High-profile problems often surface because information falls through the cracks in an organisation, or people do not have sufficiently powerful tools to inspect and observe the software in use. Thinking about a problem that happened to someone else and translating it to your situation can help establish checks and make the system more supportable, so that problems do not escalate to that level. Such activities also communicate potential risks to a larger group of people, so developers can be more aware of similar risks when they design the system, and testers can get additional testing ideas to check.

The post-mortem suggestions, especially around improving the support procedures or observability, help the organisation to handle ‘black swans’ — unexpected and unknown incidents that won’t be prevented by any kind of regression testing. We can’t know upfront what those risks are (otherwise they wouldn’t be unexpected), but we can train the organisation to react faster and better to such incidents. This is akin to government disaster relief organisations holding simulations of floods and earthquakes to discover facilitation and coordination problems. It’s much cheaper and less risky to discover things like this in a safe simulated environment than learn about organisational cracks when the disaster actually happens.

How to make it work

When investigating support forums, look for patterns and categories rather than individual problems. Due to different implementations and technology choices, it’s unlikely that third-party product issues will directly translate to your situation, but problem trends or areas of influence will probably be similar.

One particularly useful trick is to look at the root cause analyses in the reports, and try to identify similar categories of problems in your software that could be caused by the same root causes.

Webcast: Bootstrapping a Hardware Company in the US

O'Reilly Media - Wed, 04/22/2015 - 20:00
This webcast will discuss how it's possible to start an open hardware business in the US entirely through bootstrapping and crowdfunding, postponing (or avoiding the need for) outside investment.

Just Say No to More End-to-End Tests

Google Testing Blog - Wed, 04/22/2015 - 17:10
by Mike Wacker

At some point in your life, you can probably recall a movie that you and your friends all wanted to see, and that you and your friends all regretted watching afterwards. Or maybe you remember that time your team thought they’d found the next "killer feature" for their product, only to see that feature bomb after it was released.

Good ideas often fail in practice, and in the world of testing, one pervasive good idea that often fails in practice is a testing strategy built around end-to-end tests.

Testers can invest their time in writing many types of automated tests, including unit tests, integration tests, and end-to-end tests, but this strategy invests mostly in end-to-end tests that verify the product or service as a whole. Typically, these tests simulate real user scenarios.
End-to-End Tests in Theory

While relying primarily on end-to-end tests is a bad idea, one could certainly convince a reasonable person that the idea makes sense in theory.

To start, number one on Google's list of ten things we know to be true is: "Focus on the user and all else will follow." Thus, end-to-end tests that focus on real user scenarios sound like a great idea. Additionally, this strategy broadly appeals to many constituencies:
  • Developers like it because it offloads most, if not all, of the testing to others. 
  • Managers and decision-makers like it because tests that simulate real user scenarios can help them easily determine how a failing test would impact the user. 
  • Testers like it because they often worry about missing a bug or writing a test that does not verify real-world behavior; writing tests from the user's perspective often avoids both problems and gives the tester a greater sense of accomplishment. 
End-to-End Tests in Practice

So if this testing strategy sounds so good in theory, then where does it go wrong in practice? To demonstrate, I present the following composite sketch based on a collection of real experiences familiar to both myself and other testers. In this sketch, a team is building a service for editing documents online (e.g., Google Docs).

Let's assume the team already has some fantastic test infrastructure in place. Every night:
  1. The latest version of the service is built. 
  2. This version is then deployed to the team's testing environment. 
  3. All end-to-end tests then run against this testing environment. 
  4. An email report summarizing the test results is sent to the team.

The deadline is approaching fast as our team codes new features for their next release. To maintain a high bar for product quality, they also require that at least 90% of their end-to-end tests pass before features are considered complete. Currently, that deadline is one day away:

Days Left | Pass % | Notes
1 | 5% | Everything is broken! Signing in to the service is broken. Almost all tests sign in a user, so almost all tests failed.
0 | 4% | A partner team we rely on deployed a bad build to their testing environment yesterday.
-1 | 54% | A dev broke the save scenario yesterday (or the day before?). Half the tests save a document at some point in time. Devs spent most of the day determining if it's a frontend bug or a backend bug.
-2 | 54% | It's a frontend bug, devs spent half of today figuring out where.
-3 | 54% | A bad fix was checked in yesterday. The mistake was pretty easy to spot, though, and a correct fix was checked in today.
-4 | 1% | Hardware failures occurred in the lab for our testing environment.
-5 | 84% | Many small bugs hiding behind the big bugs (e.g., sign-in broken, save broken). Still working on the small bugs.
-6 | 87% | We should be above 90%, but are not for some reason.
-7 | 89.54% | (Rounds up to 90%, close enough.) No fixes were checked in yesterday, so the tests must have been flaky yesterday.
Analysis

Despite numerous problems, the tests ultimately did catch real bugs.

What Went Well 
  • Customer-impacting bugs were identified and fixed before they reached the customer.

What Went Wrong 
  • The team completed their coding milestone a week late (and worked a lot of overtime). 
  • Finding the root cause for a failing end-to-end test is painful and can take a long time. 
  • Partner failures and lab failures ruined the test results on multiple days. 
  • Many smaller bugs were hidden behind bigger bugs. 
  • End-to-end tests were flaky at times. 
  • Developers had to wait until the following day to know if a fix worked or not. 

So now that we know what went wrong with the end-to-end strategy, we need to change our approach to testing to avoid many of these problems. But what is the right approach?
The True Value of Tests

Typically, a tester's job ends once they have a failing test. A bug is filed, and then it's the developer's job to fix the bug. To identify where the end-to-end strategy breaks down, however, we need to think outside this box and approach the problem from first principles. If we "focus on the user (and all else will follow)," we have to ask ourselves how a failing test benefits the user. Here is the answer:

A failing test does not directly benefit the user. 

While this statement seems shocking at first, it is true. If a product works, it works, whether a test says it works or not. If a product is broken, it is broken, whether a test says it is broken or not. So, if failing tests do not benefit the user, then what does benefit the user?

A bug fix directly benefits the user.

The user will only be happy when that unintended behavior - the bug - goes away. Obviously, to fix a bug, you must know the bug exists. To know the bug exists, ideally you have a test that catches the bug (because the user will find the bug if the test does not). But in that entire process, from failing test to bug fix, value is only added at the very last step.

Stage | Failing Test | Bug Opened | Bug Fixed
Value Added | No | No | Yes
Thus, to evaluate any testing strategy, you cannot just evaluate how it finds bugs. You also must evaluate how it enables developers to fix (and even prevent) bugs.
Building the Right Feedback Loop

Tests create a feedback loop that informs the developer whether the product is working or not. The ideal feedback loop has several properties:
  • It's fast. No developer wants to wait hours or days to find out if their change works. Sometimes the change does not work - nobody is perfect - and the feedback loop needs to run multiple times. A faster feedback loop leads to faster fixes. If the loop is fast enough, developers may even run tests before checking in a change. 
  • It's reliable. No developer wants to spend hours debugging a test, only to find out it was a flaky test. Flaky tests reduce the developer's trust in the test, and as a result flaky tests are often ignored, even when they find real product issues. 
  • It isolates failures. To fix a bug, developers need to find the specific lines of code causing the bug. When a product contains millions of lines of code, and the bug could be anywhere, it's like trying to find a needle in a haystack.
Think Smaller, Not Larger

So how do we create that ideal feedback loop? By thinking smaller, not larger.

Unit Tests

Unit tests take a small piece of the product and test that piece in isolation. They tend to create that ideal feedback loop:

  • Unit tests are fast. We only need to build a small unit to test it, and the tests also tend to be rather small. In fact, one tenth of a second is considered slow for unit tests. 
  • Unit tests are reliable. Simple systems and small units in general tend to suffer much less from flakiness. Furthermore, best practices for unit testing - in particular practices related to hermetic tests - will remove flakiness entirely. 
  • Unit tests isolate failures. Even if a product contains millions of lines of code, if a unit test fails, you only need to search that small unit under test to find the bug. 

Writing effective unit tests requires skills in areas such as dependency management, mocking, and hermetic testing. I won't cover these skills here, but as a start, the typical example offered to new Googlers (or Nooglers) is how Google builds and tests a stopwatch.
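As a rough illustration of those skills, here is a sketch in the spirit of the stopwatch example (assumed, not Google's actual code): the test injects a fake clock, so it never sleeps and never depends on the machine's real time, which keeps it fast, hermetic, and deterministic.

    # A sketch in the spirit of the stopwatch example (assumed, not Google's code).
    import unittest

    class Stopwatch:
        def __init__(self, clock):
            self.clock = clock  # the clock is injected so tests can control time
            self.started_at = None

        def start(self):
            self.started_at = self.clock()

        def elapsed(self):
            return self.clock() - self.started_at

    class FakeClock:
        """A controllable stand-in for time.time()."""
        def __init__(self):
            self.now = 0.0

        def advance(self, seconds):
            self.now += seconds

        def __call__(self):
            return self.now

    class StopwatchTest(unittest.TestCase):
        def test_elapsed_reports_time_since_start(self):
            clock = FakeClock()
            watch = Stopwatch(clock)
            watch.start()
            clock.advance(1.5)  # no real sleeping: the test stays fast and reliable
            self.assertEqual(watch.elapsed(), 1.5)

    if __name__ == '__main__':
        unittest.main()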

Unit Tests vs. End-to-End Tests

With end-to-end tests, you have to wait: first for the entire product to be built, then for it to be deployed, and finally for all end-to-end tests to run. When the tests do run, flaky tests tend to be a fact of life. And even if a test finds a bug, that bug could be anywhere in the product.

Although end-to-end tests do a better job of simulating real user scenarios, this advantage quickly becomes outweighed by all the disadvantages of the end-to-end feedback loop:

Property | Unit | End-to-End
Fast | Yes | No
Reliable | Yes | No
Isolates Failures | Yes | No
Simulates a Real User | No | Yes

Integration Tests

Unit tests do have one major disadvantage: even if the units work well in isolation, you do not know if they work well together. But even then, you do not necessarily need end-to-end tests. For that, you can use an integration test. An integration test takes a small group of units, often two units, and tests their behavior as a whole, verifying that they coherently work together.

If two units do not integrate properly, why write an end-to-end test when you can write a much smaller, more focused integration test that will detect the same bug? While you do need to think larger, you only need to think a little larger to verify that units work together.
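As a hypothetical sketch of that idea (the classes below are invented for illustration, loosely echoing the document-editing service from the composite sketch above), an integration test wires two real units together, with no mocks between them, and verifies one collaboration:

    # Hypothetical integration test: two real units exercised together,
    # without building or deploying the whole product.
    import unittest

    class DocumentStore:
        def __init__(self):
            self._docs = {}

        def save(self, doc_id, content):
            self._docs[doc_id] = content

        def load(self, doc_id):
            return self._docs[doc_id]

    class Editor:
        def __init__(self, store):
            self.store = store
            self.buffer = ''

        def type_text(self, doc_id, text):
            self.buffer += text
            self.store.save(doc_id, self.buffer)  # autosave on every edit

    class EditorStoreIntegrationTest(unittest.TestCase):
        def test_edits_round_trip_through_the_store(self):
            store = DocumentStore()
            editor = Editor(store)
            editor.type_text('doc1', 'hello')
            editor.type_text('doc1', ' world')
            # If the two units disagree on keys or format, this catches it
            # far more cheaply than an end-to-end test would.
            self.assertEqual(store.load('doc1'), 'hello world')

    if __name__ == '__main__':
        unittest.main()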
Testing Pyramid

Even with both unit tests and integration tests, you probably still will want a small number of end-to-end tests to verify the system as a whole. To find the right balance between all three test types, the best visual aid to use is the testing pyramid. Here is a simplified version of the testing pyramid from the opening keynote of the 2014 Google Test Automation Conference:



The bulk of your tests are unit tests at the bottom of the pyramid. As you move up the pyramid, your tests get larger, but at the same time the number of tests (the width of your pyramid) gets smaller.

As a good first guess, Google often suggests a 70/20/10 split: 70% unit tests, 20% integration tests, and 10% end-to-end tests. The exact mix will be different for each team, but in general, it should retain that pyramid shape. Try to avoid these anti-patterns:
  • Inverted pyramid/ice cream cone. The team relies primarily on end-to-end tests, using few integration tests and even fewer unit tests. 
  • Hourglass. The team starts with a lot of unit tests, then uses end-to-end tests where integration tests should be used. The hourglass has many unit tests at the bottom and many end-to-end tests at the top, but few integration tests in the middle. 
Just like a regular pyramid tends to be the most stable structure in real life, the testing pyramid also tends to be the most stable testing strategy.


Categories: Software Testing

Webcast: Using Ionic with Cordova/PhoneGap

O'Reilly Media - Wed, 04/22/2015 - 13:59
In this webcast, Raymond Camden will introduce you to the Ionic framework. Ionic is a framework for hybrid mobile development that adds UI, UX, and other cool improvements to your development process.

If You Interrupt Testing, It Will Cost Us 2 Lost Bugs

Eric Jacobson's Software Testing Blog - Wed, 04/22/2015 - 10:42

Look at your calendar (or that of another tester).  How many meetings exist?

My new company is crazy about meetings.  Perhaps it’s the vast numbers of project managers, product owners, and separate teams along the deployment path.  It’s a wonder programmers/testers have time to finish anything.

Skipping meetings works, but is an awkward way to increase test time.  What if you could reduce meetings or at least meeting invites?  Try this.  Express the cost of attending the meeting in units of lost bugs.  If you find, on average, about 1 bug per hour of testing, you might say:

“Sure, I can attend your meeting, but it will cost us 1 lost bug.”

“This week’s meetings cost us 9 lost bugs.”

Obviously some meetings (e.g., design, user story review, bug triage) improve your bug finding, so be selective when choosing which meetings to count as a lost-bug cost.
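For what it's worth, the arithmetic behind those quotes is trivial to script. A throwaway sketch, assuming the roughly-one-bug-per-hour rate above (the rate and the lost_bugs() helper are illustrative assumptions, not a standard metric):

    # Back-of-the-envelope math behind the quotes above, assuming the post's
    # example rate of about 1 bug found per hour of testing.
    BUGS_PER_TEST_HOUR = 1.0

    def lost_bugs(meeting_hours, testers_attending=1):
        """Estimated bugs not found because testers sat in meetings instead."""
        return meeting_hours * testers_attending * BUGS_PER_TEST_HOUR

    print(lost_bugs(1))                         # "...it will cost us 1 lost bug."
    print(lost_bugs(9))                         # "This week's meetings cost us 9 lost bugs."
    print(lost_bugs(1.5, testers_attending=4))  # a 90-minute meeting with 4 testers: 6 lost bugs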

Categories: Software Testing

The Art of DevOps Part III – Staging Grounds

In this 4-part blog series, I am exposing DevOps best practices using a metaphor inspired by the famous 6th century Chinese manuscript “The Art of War”. It is worth remembering that Sun Tzu, just like me, considered war a necessary evil that must be avoided whenever possible. What are we fighting for here? […]

The post The Art of DevOps Part III – Staging Grounds appeared first on Dynatrace APM Blog.

Categories: Load & Perf Testing

The Day I Quit

QA Hates You - Wed, 04/22/2015 - 08:37

When I was a lad, fresh out of the university with a degree in English and Philosophy and no actual career prospects, I worked as a produce clerk for a small off-chain produce and cheese shop. They had daily garbage pickup on weekdays, but nothing on the weekends, which were some of the busiest days of the week. As a result, on Sunday afternoons, the dumpster started to fail boundary analysis, at which time the store manager would order a clerk or two to climb up onto the pile and jump up and down to compact it so we could dump the last few cans of refuse into it. Come to think of it, I’ve seen the same philosophy applied to hardware resource management.

So as I stood and watched the younger kids jumping in the dumpster, I decided that if I was ever ordered to climb into the dumpster, I would drop my apron in the alley and never come back.

Want to know what would make me leave QA? Needing an implant of some sort to do my job:

PayPal is working on a new generation of embeddable, injectable and ingestible devices that could replace passwords as a means of identification.

Jonathan LeBlanc, PayPal’s global head of developer evangelism, claims that these devices could include brain implants, wafer-thin silicon chips that can be embedded into the skin, and ingestible devices with batteries that are powered by stomach acid.

These devices would allow “natural body identification,” by monitoring internal body functions like heartbeat, glucose levels and vein recognition, Mr LeBlanc told the Wall Street Journal.

Over time they would come to replace passwords and even more advanced methods of identification, like fingerprint scanning and location verification, which he says are not always reliable.

I’d rather not be personally, bodily on the Internet of Things unless there’s a compelling medical reason for it, and even then I’m going to ask my doctor to examine all the steampunk options first.

Categories: Software Testing

Webcast: Introducing D3.js

O'Reilly Media - Mon, 04/20/2015 - 19:49
This session introduces D3 starting with its underlying philosophy. We'll also walk through example code and see some of the nifty visualizations that no other JavaScript library can support.

Web Performance News of the Week

LoadStorm - Mon, 04/20/2015 - 16:46

Google modifies the structure of URLs on mobile

This week Google made a change to the URL structure normally displayed in mobile search results, explaining that “well-structured URLs offer users a quick hint about the page topic and how the page fits within the website”. The algorithms were updated starting in the US, in an effort to display names that better reflect each page. The results are structured in a breadcrumbs-like format, using the “real-world name of the site” instead of the domain name. I checked search results on my mobile, but the change did not appear to have rolled out yet. Here is an example search Google used:

To complement the launch, Google also announced support for schema.org, a site that provides a collection of schemas webmasters can use to mark up HTML pages in ways recognized by major search engines. This is intended to help webmasters signal both the website name that should be used instead of the domain name and the URL structure as breadcrumbs.

Here are the examples Google pointed to in their documentation on site names and breadcrumbs.

The announcement garnered mixed reactions. While some people are calling the change helpful, arguing that it encourages better site organization, others argue that by not displaying the actual URL, users can no longer easily verify the true identity of a site. What do you think about this interesting development?


Google adds mobile-friendliness as a part of their ranking algorithm

Tomorrow marks the official launch of Google’s new ranking algorithm, which will add mobile-friendliness as a ranking signal. Although Google held a live Q&A hangout a month ago, it was somewhat unclear whether or not the changes could be seen before the official launch date on April 21st.

So how do you know if your page is mobile friendly? The easiest way to check if a page on your site passes is by typing in the URL here.

This will test the way your site looks to a Googlebot. It checks whether the page:

  • automatically sizes content to the screen
  • avoids software that is uncommon on mobile devices
  • contains readable text without zooming
  • places links far enough apart to be selected easily


A few easy ways to see what your site looks like on an array of different devices, and to start making your site mobile-friendly, can be found in our responsive web design blog post.


Additional algorithm details to note include:

  • There are no degrees of mobile-friendliness; your page is either mobile-friendly or it isn’t. Google added that it will be the same with desktop search, and is not isolated to mobile searches.
  • The mobile-friendly ranking changes will affect your site on a page-by-page basis. So if half of your site’s pages are mobile-friendly, that half will benefit from the mobile-friendly ranking changes. This is good news for anyone concerned that their site doesn’t make the cut, as they can focus on getting their main pages up to date first.
  • The new ranking algorithm will officially launch on April 21st. However, the algorithm runs in real time, meaning you can start preparing your site for analysis now. It was said in the hangout that it may take a week or so to completely roll out, so it’s not entirely clear how quickly we can expect to see changes in site rankings.
  • Google News may not be ranked by the new mobile-friendly algorithm yet. Interestingly, the Google News ranking team has no plans to implement the mobile-friendly algorithm in Google News results.

Zuckerberg’s internet.org loses support, with companies citing net neutrality concerns

A Facebook-backed project intended to make the internet accessible to people in the developing world lost the support of several prominent companies this week. In the midst of a fierce national debate regarding net neutrality, New Delhi Television Limited (NDTV), the Times Group (a media company that owns the Times of India), and Cleartrip (a travel website) all removed content from Internet.org in India, which they had made available for free. Furthermore, the companies urged their competitors to withdraw as well.

“We support net neutrality because it creates a fair, level playing field for all companies – big and small – to produce the best service and offer it to consumers. We will lead the drive towards a neutral internet, but we need our fellow publishers and content providers to do so as well, so that the playing field continues to be level.” – statement from a Times official.

The companies cited concerns that the Internet.org initiative did not align with the net neutrality mission, arguing that Internet.org is set up to prioritize content from partner businesses that pay telecom companies for data charges.

Zuckerberg responded to critics, affirming that net neutrality and Internet.org “can and must coexist.” He detailed his stance on the controversy, proclaiming that “Internet.org doesn’t block or throttle any other services or create fast lanes — and it never will. We’re open for all mobile operators and we’re not stopping anyone from joining. We want as many internet providers to join so as many people as possible can be connected.”

The Internet.org initiative has reportedly expanded internet access to six countries so far: Colombia, India, Zambia, Tanzania, Kenya and Ghana. Still, some worry that the long-term net neutrality implications outweigh the immediate access it provides to the developing world.

The post Web Performance News of the Week appeared first on LoadStorm.

Mobilegeddon is here.

Google’s new algorithms for mobile search have arrived. If you’re calling it #Mobilegeddon then it caught you by surprise. And in fact, it’s catching a lot of businesses off guard if they don’t have a solid digital performance strategy in place. A bit of background: Google is in the process of changing how they rank […]

The post Mobilegeddon is here. appeared first on Dynatrace APM Blog.

Categories: Load & Perf Testing

Webcast: 4 Phases of Enterprise DevOps Maturity

O'Reilly Media - Fri, 04/17/2015 - 16:24
Heather Mickman and Ross Clanton will share their perspective on those questions based on their experiences at Target as well as their discussions with other enterprise DevOps practitioners.

Webcast: Introducing Reactive Programming

O'Reilly Media - Fri, 04/17/2015 - 16:24
In this webcast, we will explore the origins behind reactive programming, what it means to business, architects, and programmers alike, as well as how to leverage this new way of reasoning about system design through code.

How To Start A Challenging Test

Eric Jacobson's Software Testing Blog - Thu, 04/16/2015 - 15:58

Last week I started testing an update to a complex legacy process.  At first, my head was spinning (it still kind of is).    There are so many inputs and test scenarios...so much I don’t understand.  Where to begin?

I think doing something half-baked now is better than doing something fully-baked later.  If we start planning a rigorous test based on too many assumptions we may not understand what we’re observing. 

In my case, I started with the easiest tests I could think of:

  • Can I trigger the process-under-test? 
  • Can I tell when the process-under-test completes?
  • Can I access any internal error/success logging for said process?
  • If I repeat the process-under-test multiple times, are the results consistent?

If there were a spectrum showing the focus between learning without manipulating and learning by manipulating the something-under-test, it might look like this:

My tests started on the left side of the spectrum and worked toward the right.  Now that I can get consistent results, let me see if I can manipulate the process and predict its results:

  • If I pass ValueA to InputA, do the results match my expectations?
  • If I remove ValueA from InputA, do the results return as before?
  • If I pass ValueB to InputA, do the results match my expectations?

As long as my model of the process-under-test matches my observations above, I can start expanding complexity (a quick sketch of automating these checks follows the list):

  • If I pass ValueA and ValueB to InputA and ValueC and ValueD to InputB, do the results match my expectations?
  • etc.
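Once the results are consistent, checks like these are easy to capture as automated tests as you go. A minimal sketch, assuming a hypothetical run_process() stand-in for triggering the process-under-test and reading its results (the inputs and expectations below are placeholders, not the author's actual system):

    # Hypothetical sketch: the manipulation checks above, captured as tests.
    import unittest

    def run_process(**inputs):
        # Stand-in for triggering the legacy process-under-test and collecting
        # its results (e.g., from its internal error/success logging).
        return {'status': 'ok', 'echo': inputs}

    CASES = [
        ({'input_a': 'ValueA'}, 'ok'),
        ({'input_a': 'ValueB'}, 'ok'),
        ({'input_a': ['ValueA', 'ValueB'],
          'input_b': ['ValueC', 'ValueD']}, 'ok'),
    ]

    class ProcessUnderTestChecks(unittest.TestCase):
        def test_results_match_expectations(self):
            for inputs, expected_status in CASES:
                with self.subTest(inputs=inputs):
                    result = run_process(**inputs)
                    self.assertEqual(result['status'], expected_status)

        def test_repeated_runs_are_consistent(self):
            # The earliest, least-manipulative check: same inputs, same results.
            first = run_process(input_a='ValueA')
            second = run_process(input_a='ValueA')
            self.assertEqual(first, second)

    if __name__ == '__main__':
        unittest.main()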

Now I have something valuable to discuss with the programmer or product owner: “I’ve done the above tests.  What else can you think of?”  It’s much easier to have this conversation when you’re not completely green, when you can show some effort.  It’s easier for the programmer or product owner to help when you lead them into the zone.

The worst is over.  The rest is easy.  Now you can really start testing!

Sometimes you just have to do something to get going.  Even if it’s half-baked.

Categories: Software Testing

Breaking Down the Website Performance of Tax Day 2015

So yesterday was April 15th, 2015… Tax Day.  Across the US, taxpayers faced the deadline to file their federal and state taxes.  As with every aspect of our day-to-day lives, we are seeing the digitization of how we prepare and file our taxes.  Just a quick look at the traffic going to the […]

The post Breaking Down the Website Performance of Tax Day 2015 appeared first on Dynatrace APM Blog.

Categories: Load & Perf Testing

Webcast: Anomaly Detection and Self-Learning Monitoring Systems

O'Reilly Media - Wed, 04/15/2015 - 16:06
This webcast will discuss the latest developments in building intelligent monitoring systems.

New Findings: State of the Union for Ecommerce Page Speed and Web Performance [Spring 2015]

Web Performance Today - Wed, 04/15/2015 - 08:40

There are compelling arguments why companies – particularly online retailers – should care about serving faster pages to their users. Countless studies have found an irrefutable connection between load times and key performance indicators ranging from page views to revenue.

For every 1 second of improvement, Walmart.com experienced up to a 2% conversion increase. Firefox reduced average page load time by 2.2 seconds, which increased downloads by 15.4% — resulting in an estimated 10 million additional downloads per year. And when auto parts retailer AutoAnything.com cut load times in half, it experienced a 13% increase in sales.

Recently at Radware, we released our latest research into the performance and page speed of the world’s top online retailers. This research aims to answer the question: in a world where every second counts, are retailers helping or hurting their users’ experience – and ultimately their own bottom line?

Since 2010, we’ve measured and analyzed the performance of the top 100 ecommerce sites (as ranked by Alexa.com). We look at web page metrics such as load time, time to interact (the amount of time it takes for a page to render its feature “above the fold” content), page size, page composition, and adoption of performance best practices. Our goal is to obtain a real-world “shopper’s eye view” of the performance of leading sites, and to track how this performance changes over time.

Here’s a sample of just a few of the findings from our Spring 2015 State of the Union for Ecommerce Page Speed & Web Performance:

Time to interact (TTI) is a crucial indicator of a page’s ability both to deliver a satisfactory user experience (by delivering content that the user is most likely to care about) and fulfill the site owner’s objectives (by allowing the user to engage with the page and perform whatever call to action the site owner has deemed the key action for that page). The median TTI in our research was 5.2 seconds.

Ideally, pages should be interactive in 3 seconds or less. Separate studies have found that 57% of consumers will abandon a page that takes longer than 3 seconds to load. A site that loads in 3 seconds experiences 22% fewer page views, a 50% higher bounce rate, and 22% fewer conversions than a site that loads in 1 second, while a site that loads in 5 seconds experiences 35% fewer page views, a 105% higher bounce rate, and 38% fewer conversions. Only 14% of the top 100 retail sites rendered feature content in 3 seconds or less.

Page size and complexity typically correlate to slower load times. The median top 100 page is 1354 KB in size and contains 108 resource requests (e.g. images, HTML, third-party scripts). In the past two years, the median number of resources for a top 100 ecommerce page has grown by 26% — from 86 resources in Spring 2013 to 108 resources in Spring 2015. Each of these page resources represents an individual server call.

Images typically comprise between 50 and 60% of a page’s total weight, making them fertile territory for optimization. Yet 43% of the top 100 retail sites fail to compress images. Only 10% received an ‘A’ grade for image compression.

Despite eBay and Walmart’s status as retail giants, both sites have had less-than-stellar performance rankings in previous ecommerce performance studies. In our Fall 2014 report, these sites ranked 36th and 57th, respectively, out of 100. Our latest research, however, finds that both companies have made an impressive performance comeback – with each site’s home page rendering primary content in less than 3 seconds. The report provides insight into what each site did to improve its performance.

The good news is that there are opportunities for every site – even those that are relatively fast already – to fine-tune performance by taking a more aggressive approach to front-end optimization.

Get the report: Spring 2015 State of the Union for Ecommerce Page Speed & Web Performance

The post New Findings: State of the Union for Ecommerce Page Speed and Web Performance [Spring 2015] appeared first on Web Performance Today.

5 Can’t Miss Web Performance Optimization Basics

At this year’s STPCon I offered a Performance Clinic for attendees that came to the conference early. As part of this, we did 2 live performance diagnostic sessions: one on a pretty cool 3D visualization app based on Java and JavaScript, another one on a live eCommerce website. On the eCommerce landing page we found […]

The post 5 Can’t Miss Web Performance Optimization Basics appeared first on Dynatrace APM Blog.

Categories: Load & Perf Testing

Webcast: Understanding end-to-end of application performance testing

O'Reilly Media - Fri, 04/10/2015 - 15:30
In this webinar, Mark Tomlinson will present how you can start learning the performance engineering process, and how to set up and configure tools to conduct different types of tests.
