Feed aggregator

The Best Software Testing Tool? That’s Easy…

Eric Jacobson's Software Testing Blog - Thu, 05/21/2015 - 15:55


After experimenting with a Test Case Management application’s Session-Test tool, a colleague of mine noted the tool’s overhead (i.e., the non-test-related waiting and admin effort forced by the tool). She said, “I would rather just use Notepad to document my testing.” Exactly!

Notepad has very little overhead.  It requires no setup, no license, no logging in, few machine resources, it always works, and we don’t waste time on trivial things like making test documentation pretty (e.g., let’s make passing tests green!).

Testing is an intellectual activity, especially if you’re using automation.  The test idea is the start.  Whether it comes to us in the midst of a discussion, requirements review, or while performing a different test, we want to document it.  Otherwise we risk losing it.

Don’t overlook the power of Notepad.

Categories: Software Testing

What Fax Machines can teach us about Performance Testing

Perf Planet - Thu, 05/21/2015 - 09:55


A long time ago, a friend was giving me a quick tour of his small business. At the time, the internet wasn’t yet what we know it as today, so fax was still a very popular medium of B2B communication. His company sent out millions of faxes every week for things like mortgage rates, product information, travel reservations and many other things we now take for granted as a push notification or email. But at the time, fax was cool.

Image courtesy of Wikipedia, licensed under Creative Commons.

As we were walking around his offices and just talking about the business, he showed me their testing area, where they would send test faxes much like digital marketers will send out test email newsletters today. What was weird about this testing area was the fax machines. This company sent out faxes for a living. This guy – a former Bell Labs guy – knew everything there was to know about the transaction of a fax. But the fax machines they used for testing were ancient! The fax machines were old, faded beige plastic and used that hideous curly thermal paper.

He saw the look on my face, and he knew what I was thinking, but he remained silent, allowing me to form the question in my head. He had done this before. “Why do you test on these old piece-of-crap fax machines?!?” He had set me up on purpose, and he answered the question with three words that I’ll never forget. Three words that have shaped my perspective on testing just about anything: “Least common denominator.”

Huh? I didn’t get it yet. So he continued, “if the fax looks good on this piece of crap, it’ll look fantastic when it comes off any relatively new fax machine.” There was that “Ah ha!” moment, and I got it.

Jump ahead just a few years, and we’re at that same place when it comes to testing web apps and web pages. You don’t go out and buy the very latest, fastest, baddest, most awesome machines to test on. No, you get the oldest, slowest viable piece of junk on which your app or web site is supposed to function, and test everything on those devices. When we say “function” here, we mean its performance. Its speed.

If your web app runs fast on an iPhone 4, chances are it’ll be very slick on an iPhone 6 Plus. Test on the least common denominator to make sure your web apps work really well on the very latest devices.

Are you someone who tests performance and works to keep your website super fast? You will love our free Zoompf Alerts beta. Zoompf Alerts monitors your site throughout the day, notifying you when performance problems are introduced by your CSS, JavaScript, HTML, images, and more. Make sure nothing changes in a manner that hurts your website performance. It’s free and you can opt out at any time, so sign up for the Zoompf Alerts beta now.

The post What Fax Machines can teach us about Performance Testing appeared first on Zoompf Web Performance.

Webcast: How the Internet affects Cloud performance and how managed Internet performance can make your job easier

O'Reilly Media - Wed, 05/20/2015 - 19:55
Join us for this webinar where we will dive into the end-to-end intricacies of the Internet and where you should be focusing your time, money, and resources.

Webcast: Simplifying Your Management of Big Data Applications

O'Reilly Media - Wed, 05/20/2015 - 16:55
In this presentation, Tyler Hannan discusses how you can simplify the management of the technologies required to support your Big Data applications and offers practical considerations for choosing the right tools for the job.

Using Waifu2x to Upscale Japanese Prints

Perf Planet - Wed, 05/20/2015 - 16:42

In my spare time I’ve been working on a database of Japanese prints for a little over 3.5 years now. I’m fully aware that I’ve never actually written about this personally very important project on my blog — until now. Unfortunately this isn’t a post explaining that project. I do still hope to write more about the intricacies of it, some day, but until that time you can watch this talk I gave at OpenVisConf 2014 about the site and the tech behind it.

I’ve been doing a lot of work exploring different computer vision and machine learning algorithms to see how they might apply to the world of art history study. I’m especially interested in finding novel uses of technology that could greatly benefit art historians in their work and also help individuals better appreciate art.

One tool that I came across yesterday is called Waifu2x. It’s a convolutional neural network (CNN) that is designed to optimally “upscale” images (taking small images and generating a larger image). The creator of this tool built it to better upscale poorly-sized Anime images and video. This is an effort that I can massively cheer on – while I’m not a consumer of Anime, I love applying algorithmic overkill to non-tech hobbies.
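Conceptually, “upscaling” just means producing more pixels from fewer. The crudest baseline repeats existing pixels and adds no new detail, which is exactly the gap a CNN like Waifu2x tries to fill by predicting plausible lines. Here is a toy, stdlib-only sketch of that naive baseline (illustrative only, not Waifu2x or any real image pipeline):

```python
def upscale_2x_nearest(image):
    """Double a grayscale image (a list of pixel rows) by repeating each pixel.

    This adds no detail: every source pixel becomes a 2x2 block of the same
    value, which is why naive upscales look blocky or soft.
    """
    out = []
    for row in image:
        doubled = [p for p in row for _ in (0, 1)]  # repeat each pixel horizontally
        out.append(doubled)
        out.append(list(doubled))                   # repeat the whole row vertically
    return out

# A 2x2 "image" becomes 4x4 with no new information.
tiny = [[0, 255],
        [255, 0]]
big = upscale_2x_nearest(tiny)
```

A learned upscaler, by contrast, is trained to output edges and textures that were never in the input, which is why its results can look crisper than any pixel-repeating scheme.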

Waifu2x also provides a live demo site that you can use to test it on other images. When I saw this I became immediately intrigued. Anime has direct stylistic influences drawn from the “old” world of Japanese woodblock printing, which was popular from the late 1600s to the late 1800s. Maybe the pre-trained upscaler could also work well for photos of Japanese prints? (Naturally I could train a new CNN to do this, but it may not even be necessary!)

Now the first questions that should come up, before even attempting to upscale Japanese prints, are simply: Are there enough tiny images of prints that need to be made bigger? Who will benefit from this?

To answer those questions: Unfortunately, there are tons of tiny pictures of Japanese prints in the world. To provide one very real example: The Tokyo National Museum has one of the greatest collections of Japanese prints in the world… none of which are (publicly) digitized. If a researcher wants to see if the TNM has a particular copy of a print, they’ll need to use the following 3-volume set of books (which I own):

Inside the books are 3,926 small, black-and-white scans of every print in their collection:

I plan on digitizing these books and bringing these (not-ideal) prints online, as they will be of the utmost use to scholars. However, given their tiny size, it will be very hard for most researchers to make out what exactly each print is depicting. Thus any technology that can upscale the images to make them a bit easier to view would be greatly appreciated.

I began by experimenting with a few different existing print images and was really intrigued by the results.

I started with a primitive Ukiyo-e print by Hishikawa Moronobu:

(Source Image)

And here is the image scaled 2x (normally) and then using Waifu2x (be sure to click the images to see them full size):

And here is another early actor print by Utagawa Kunisada:

(Source Image)

And here is the image scaled 2x (normally) and then using Waifu2x (be sure to click the images to see them full size):

I also have a few blown-up details comparing the results between these two:

It’s immediately apparent that the lines are still quite “crisp” in the Waifu2x version. This is very compelling, as being able to see those details can be quite important. In fact, seeing these upscaled images like this is seriously impressive. It definitely reinforces the fact that the algorithm’s Anime training data has suited this subject matter well!

To my eye it almost looks like the entire image has also become more “smooth”: it seems much more mottled, almost as if someone spilled water on it (Japanese prints are printed with watercolors and thus are quite susceptible to water damage, which creates results similar to what’s seen here). It’s not clear that this would improve with better training data, as the original source image only has so much data to begin with.

Helping researchers better see the lines of a print from a tiny image can be a double-edged sword. They will certainly appreciate having a larger and semi-crisp image. However, the lack of precision is a massive problem (not that the original image at 2x would’ve been any better). Researchers rely upon being able to spot minute differences between print impressions in order to understand when a print could’ve been created. I suspect that this technique will need to be reserved for only extremely small images and come with a massive caveat warning the viewer as to the nature of the image and its appearance.

Regardless, this technology is quite exciting and it’s extremely serendipitous that the subject matter that was used just so happens to correlate nicely with one of my areas of study. I’ll be very curious to see where else people have success with this particular utility and in what context.

An Example Client for the REST API

BrowserMob - Wed, 05/20/2015 - 12:30

The REST API is an often underutilized feature that provides a powerful and versatile medium for managing scripts, monitors, load tests and various aspects of the WPM application. In the following repository I've assembled a simple Python client for creating REST calls and loading the response into an easily-parsed JSON object.




Installing the Client as a Package


The current packaged version (at the time of writing this post, 2.1) is registered with PyPI. Use the following to install and maintain your package with pip.


pip install wpm_api_client


The client is compatible with both Python 2 and 3.


Setup an API Connection


First we create a connection object using our API key and secret. These can be found by logging in to the WPM Home interface and navigating to My Account > Users > Manage User.


import wpm_api
c = wpm_api.Client('my_api_key', 'my_secret')


Test the Connection


You can test your connection using the Who Am I? or Echo methods in the Load Testing API. Try the following.


account_info = c.load().who_am_i()
print account_info

Note: print(account_info) in Python 3.


The output will be the JSON response deserialized into a Python dictionary, like so.


{u'data': {u'username': u'your_username'}, u'result': u'OK'}

Note: Python 3's print output looks a bit nicer, without the u prefixes. It is the same deserialized response, nonetheless.


With this we could also do.


print account_info['data']['username']


Which will return, simply, 'your_username' as a string. Or...


import json
print json.dumps(account_info)


For a string containing the response's raw JSON output.
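To make the difference between the deserialized dictionary and the raw JSON string concrete, here is a self-contained sketch using a response shaped like the Who Am I? output above (no live API call is made; 'your_username' is the same placeholder used in the post):

```python
import json

# A stand-in for the deserialized Who Am I? response shown earlier.
account_info = {'data': {'username': 'your_username'}, 'result': 'OK'}

# Drill into the deserialized response as an ordinary dictionary...
username = account_info['data']['username']

# ...or serialize the whole thing back to a raw JSON string.
raw = json.dumps(account_info)

print(username)  # your_username
print(raw)
```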


Some Additional Examples


Creating a New Basic Test Script and Adding It to a Monitor


The script endpoint method will generate a basic test script which monitors a given URL. It requires one argument.


new_script = c.script().endpoint('https://home.wpm.neustar.biz/')
print new_script


Your response (new_script) will be an object containing the result ('OK') as well as a script 'data' object. To create a monitor, the create method takes several required positional arguments — name, interval, and locations — in addition to several optional keyword arguments. The test_script argument is required for non-network-type monitors. Let's create a monitor named 'Example Monitor' using our new test script. We want it to run every 5 minutes from Washington, DC.


new_monitor = c.monitor().create('Example Monitor', 5, 'washingtondc', test_script=new_script['data']['script']['id'])


We now have an object with the id of our new service. If we wanted to edit that existing service, we would need to supply the ID to the monitoring API. For example, let's add a description.


print c.monitor(new_monitor['data']['items']['id']).update(description='An example monitor for home.wpm.neustar.biz')


Your output should look as follows.


{'data': {'items': {'updated': 'your_monitor_id'}}}


For a full list of monitoring methods check the Monitoring API docs or refer to the comments in monitor.py.

Categories: Load & Perf Testing

Performance Monitoring Is About More than Online Systems

Perf Planet - Tue, 05/19/2015 - 07:21

Imagine you had my day this past weekend. Imagine that your wife likes to shop for clothes online, and really loves both Nordstrom and HauteLook, but since it’s difficult to gauge how something will fit, she likes to order multiple sizes and have her husband return the extras to the store.

So you get up in the morning, gather up the goods, load them into the car, and head off to the mall. You wait in line at the customer service counter where returns are handled, and feel that brief moment of triumph when you finally reach the front of the line and hear the cashier say that wonderful phrase, “Next customer, please.”

But after stepping up and explaining that you’re there to return 11 items, she gives you a disconcerting look. Their in-store app can only return five items at a time, she explains. So at the very least, you’re going to have to go through this process three times from start to finish.

And so the process begins. Items are being scanned and registered on a smartphone via a mobile app, but then she stops scanning and is just staring at the screen. It seems that the HauteLook app has started to slow down. This tends to happen on Saturdays, she explains, because that’s their busiest day not just at this location, but all of the store branches around the country.

It continues like this for half an hour. Despite there being six cashiers to handle the customers, your returns and the app’s shortcomings have slowed the process to a grind, and you can feel the eyes of the people in line behind you boring a hole in the back of your head.

Oh, and did we mention that your crying infant is with you as well?

We obviously talk a lot about web performance and slow websites being a hindrance to a company’s bottom line, but online systems and performance monitoring go well beyond just consumer-facing websites.

My wife adores both Nordstrom and HauteLook, and particularly loves the customer-friendly policy of accepting online returns at the store. But as Nordstrom learned, a policy is only as good as the tools that are used to carry it out. Internal applications used by employees require vigilant monitoring strategies, particularly when there are multiple branch locations that all operate on the same servers. Otherwise, you might as well tell your customers to just do all their shopping online and rely on the Postal Service to handle their returns for them.

Ensuring optimal performance of internal applications and systems can be just as important to customer service as external webpages. And with employee productivity often relying on those systems, they can have a marked effect on your bottom line regardless of whether or not customers are aware of their performance.


The post Performance Monitoring Is About More than Online Systems appeared first on Catchpoint's Blog.

Today’s Required Reading

QA Hates You - Tue, 05/19/2015 - 04:15

7 timeless lessons of programming ‘graybeards’:

The software industry venerates the young. If you have a family, you’re too old to code. If you’re pushing 30 or even 25, you’re already over the hill.

Alas, the whippersnappers aren’t always the best solution. While their brains are full of details about the latest, trendiest architectures, frameworks, and stacks, they lack fundamental experience with how software really works and doesn’t. These experiences come only after many lost weeks of frustration borne of weird and inexplicable bugs.

Like the viewers of “Silicon Valley,” who by the end of episode 1.06 get the satisfaction of watching the boy genius crash and burn, many of us programming graybeards enjoy a wee bit of schadenfreude when those who have ignored us for being “past our prime” end up with a flaming pile of code simply because they didn’t listen to their programming elders.

(Link via tweet.)

Categories: Software Testing

Fifty Quick Ideas To Improve Your Tests now available

The Quest for Software++ - Tue, 05/19/2015 - 02:00

My new book, Fifty Quick Ideas to Improve Your Tests, is now available on Amazon. Grab it at 50% discount before Friday:

This book is for cross-functional teams working in an iterative delivery environment, planning with user stories and testing frequently changing software under tough time pressure. This book will help you test your software better, easier and faster. Many of these ideas also help teams engage their business stakeholders better in defining key expectations and improve the quality of their software products.

For more info, check out FiftyQuickIdeas.com

How well does APM scale for Enterprise businesses?

Perf Planet - Mon, 05/18/2015 - 09:39
How well do APM tools scale for Enterprise businesses? Head over to APMDigest to hear our take.

QA Music: Making A Deal With The Bad Wolf

QA Hates You - Mon, 05/18/2015 - 03:46

AWOLNATION, “Hollow Moon (Bad Wolf)”

Categories: Software Testing

I Prefer This Over That

Test Obsessed - Sun, 05/17/2015 - 11:22

A couple weeks ago I tweeted:

I prefer:
- Recovery over Perfection
- Predictability over Commitment
- Safety Nets over Change Control
- Collaboration over Handoffs

— ElisabethHendrickson (@testobsessed) May 6, 2015

Apparently it resonated. I think that’s more retweets than anything else original I’ve said on Twitter in my seven years on the platform. (SEVEN years? Holy snack-sized sound bytes! But I digress.)

@jonathandart said, “I would love to read a fleshed out version of that tweet.”

OK, here you go.

First, a little background. Since I worked on Cloud Foundry at Pivotal for a couple years, I’ve been living the DevOps life. My days were filled with zero-downtime deployments, monitoring, configuration as code, and a deep antipathy for snowflakes. We honed our practices around deployment checklists, incident response, and no-blame post mortems.

It is within that context that I came to appreciate these four simple statements.

Recovery over Perfection

Something will go wrong. Software might behave differently with real production data or traffic than you could possibly have imagined. AWS could have an outage. Humans, being fallible, might publish secret credentials in public places. A new security vulnerability may come to light (oh hai, Heartbleed).

If we aim for perfection, we’ll be too afraid to deploy. We’ll delay deploying while we attempt to test all the things (and fail anyway because ‘all the things’ is an infinite set). Lowering the frequency with which we deploy in order to attempt perfection will ironically increase the odds of failure: we’ll have fewer turns of the crank and thus fewer opportunities to learn, so we’ll be even farther from perfect.

Perfect is indeed the enemy of good. Striving for perfection creates brittle systems.

So rather than strive for perfection, I prefer to have a Plan B. What happens if the deployment fails? Make sure we can roll back. What happens if the software exhibits bad behavior? Make sure we can update it quickly.

Predictability over Commitment

Surely you have seen at least one case where estimates were interpreted as a commitment, and a team was then pressured to deliver a fixed scope in fixed time.

Some even think such commitments light a fire under the team. They give everyone something to strive for.

It’s a trap.

Any interesting, innovative, and even slightly complex development effort will encounter unforeseen obstacles. Surprises will crop up that affect our ability to deliver. If those surprises threaten our ability to meet our commitments, we have to make painful tradeoffs: Do we live up to our commitment and sacrifice something else, like quality? Or do we break our commitment? The very notion of commitment means we probably take the tradeoff. We made a commitment, after all. Broken commitments are a sign of failure.

Commitment thus trumps sustainability. It leads to mounting technical debt. Some number of years later, teams find themselves constantly firefighting and unable to make any progress.

The real problem with commitments is that they suggest that achieving a given goal is more important than positioning ourselves for ongoing success. It is not enough to deliver on this one thing. With each delivery, we need to improve our position to deliver in the future.

So rather than committing in the face of the unknown, I prefer to use historical information and systems that create visibility to predict outcomes. That means having a backlog that represents a single stream of work, and using velocity to enable us to predict when a given story will land. When we’re surprised by the need for additional work, we put that work in the backlog and see the implications. If we don’t like the result, we make an explicit decision to tradeoff scope and time instead of cutting corners to make a commitment.

Aiming for predictability instead of commitment allows us to adapt when we discover that our assumptions were not realistic. There is no failure, there is only learning.

Safety Nets over Change Control

If you want to prevent a given set of changes from breaking your system, you can either put in place practices to tightly control the nature of the changes, or you can make it safer to change things.

Controlling the changes typically means having mechanisms to accept or reject proposed changes: change control boards, review cycles, quality gates.

Such systems may be intended to mitigate risk, but they do so by making change more expensive. The people making changes have to navigate through the labyrinth of these control systems to deliver their work. More expensive change means less change means less risk. Unless the real risk to your business is a slogging pace of innovation in a rapidly changing market.

Thus rather than building up control systems that prevent change, I’d rather find ways to make change safe. One way is to ensure recoverability. Recovery over perfection, after all.

Fast feedback cycles make change safe too. So instead of a review board, I’d rather have CI to tell us when the system is violating expectations. And instead of a laborious code review process, I’d rather have a pair work with me in real time.

If you want to keep the status quo, change control is fine. But if you want to go fast, find ways to make change cheap and safe.

Collaboration over Handoffs

In traditional processes there are typically a variety of points where one group hands off work to another. Developers hand off to other developers, to QA for test, to Release Engineering to deliver, or to Ops to deploy. Such handoffs typically involve checklists and documentation.

But the written word cannot convey the richness of a conversation. Things will be missed. And then there will be a back and forth.

“You didn’t document foo.”
“Yes, we did. See section 3.5.1.”
“I read that. It doesn’t give me the information I need.”

The next thing you know it’s been 3 weeks and the project is stalled.

We imagine a proper handoff to be an efficient use of everyone’s time, but handoffs are risky. Too much can go wrong, and when it does, progress stops.

Instead of throwing a set of deliverables at the next team down the line, bring people together. Embed testers in the development team. Have members of the development team rotate through Ops to help with deployment and operation for a period of time. It actually takes less time to work together than it does to create sufficient documentation to achieve a perfect handoff.

True Responsiveness over the Illusion of Control

Ultimately all these statements are about creating responsive systems.

When we design processes that attempt to corral reality into a neat little box, we set ourselves up for failure. Such systems are brittle. We may feel in control, but it’s an illusion. The real world is not constrained by our imagined boundaries. There are surprises just around the corner.

We can’t control the surprises. But we can be ready for them.

Categories: Software Testing

Webcast: Building a Fast Data Front End for Hadoop

O'Reilly Media - Fri, 05/15/2015 - 19:23
During this webcast you will learn the pros and cons of the various approaches used to create fast data applications.

Webcast: High Performance Images - Beautiful Shouldn't Mean Slow

O'Reilly Media - Fri, 05/15/2015 - 16:21
In this prelude webinar to the upcoming "High Performance Images" book, we'll discuss the ways you can provide the eye-pleasing experience you want without sacrificing your site's performance.

Multi-Repository Development

Google Testing Blog - Fri, 05/15/2015 - 15:00
Author: Patrik Höglund

As we all know, software development is a complicated activity where we develop features and applications to provide value to our users. Furthermore, any nontrivial modern software is composed of other software. For instance, the Chrome web browser pulls roughly a hundred libraries into its third_party folder when you build the browser. The most significant of these libraries is Blink, the rendering engine, but there’s also ffmpeg for image processing, skia for low-level 2D graphics, and WebRTC for real-time communication (to name a few).

Figure 1. Holy dependencies, Batman!
There are many reasons to use software libraries. Why write your own phone number parser when you can use libphonenumber, which is battle-tested by real use in Android and Chrome and available under a permissive license? Using such software frees you up to focus on the core of your software so you can deliver a unique experience to your users. On the other hand, you need to keep your application up to date with changes in the library (you want that latest bug fix, right?), and you also run a risk of such a change breaking your application. This article will examine that integration problem and how you can reduce the risks associated with it.
Updating Dependencies is Hard

The simplest solution is to check in a copy of the library, build with it, and avoid touching it as much as possible. This solution, however, can be problematic because you miss out on bug fixes and new features in the library. What if you need a new feature or bug fix that just made it in? You have a few options:
  • Update the library to its latest release. If it’s been a long time since you did this, it can be quite risky and you may have to spend significant testing resources to ensure all the accumulated changes don’t break your application. You may have to catch up to interface changes in the library as well. 
  • Cherry-pick the feature/bug fix you want into your copy of the library. This is even riskier because your cherry-picked patches may depend on other changes in the library in subtle ways. Also, you still are not up to date with the latest version. 
  • Find some way to make do without the feature or bug fix.
None of the above options are very good. Using this ad-hoc updating model can work if there’s a low volume of changes in the library and your requirements on the library don’t change very often. Even if that is the case, what will you do if a critical zero-day exploit is discovered in your socket library?

One way to mitigate the update risk is to integrate more often with your dependencies. As an extreme example, let’s look at Chrome.

In Chrome development, there’s a massive amount of change going into its dependencies. The Blink rendering engine lives in a separate code repository from the browser. Blink sees hundreds of code changes per day, and Chrome must integrate with Blink often since it’s an important part of the browser. Another example is the WebRTC implementation, where a large part of Chrome’s implementation resides in the webrtc.org repository. This article will focus on the latter because it’s the team I happen to work on.
How “Rolling” Works

The open-sourced WebRTC codebase is used by Chrome but also by a number of other companies working on WebRTC. Chrome uses a toolchain called depot_tools to manage dependencies, and there’s a checked-in text file called DEPS where dependencies are managed. It looks roughly like this:
# ...
'https://chromium.googlesource.com/' +
'external/webrtc/trunk/webrtc.git' +
'@' + '5727038f572c517204e1642b8bc69b25381c4e9f',

The above means we should pull WebRTC from the specified git repository at the 572703... hash, similar to other dependency-provisioning frameworks. To build Chrome with a new version, we change the hash and check in a new version of the DEPS file. If the library’s API has changed, we must update Chrome to use the new API in the same patch. This process is known as rolling WebRTC to a new version.
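Mechanically, a roll boils down to replacing the pinned 40-character git hash in the DEPS file with the new revision. A hypothetical sketch of just that text substitution (real rolls go through depot_tools and the normal review process, not a script like this):

```python
import re

# A DEPS-style fragment pinning WebRTC to a specific revision, as shown above.
deps = ("'https://chromium.googlesource.com/' +\n"
        "'external/webrtc/trunk/webrtc.git' +\n"
        "'@' + '5727038f572c517204e1642b8bc69b25381c4e9f',")

def roll(deps_text, new_hash):
    # Swap the quoted 40-character hex hash for the new pinned revision.
    return re.sub(r"'[0-9a-f]{40}'", "'" + new_hash + "'", deps_text)

rolled = roll(deps, 'f' * 40)  # pretend 'ffff...' is the new WebRTC revision
```

The patch that changes this hash is what then has to pass all of Chrome's presubmit checks, which is where the real cost of a roll lies.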

Now the problem is that we have changed the code going into Chrome. Maybe getUserMedia has started crashing on Android, or maybe the browser no longer boots on Windows. We don’t know until we have built and run all the tests. Therefore a roll patch is subject to the same presubmit checks as any Chrome patch (i.e. many tests, on all platforms we ship on). However, roll patches can be considerably more painful and risky than other patches.

Figure 2. Life of a Roll Patch.
On the WebRTC team we found ourselves in an uncomfortable position a couple years back. Developers would make changes to the webrtc.org code and there was a fair amount of churn in the interface, which meant we would have to update Chrome to adapt to those changes. Also we frequently broke tests and WebRTC functionality in Chrome because semantic changes had unexpected consequences in Chrome. Since rolls were so risky and painful to make, they started to happen less often, which made things even worse. There could be two weeks between rolls, which meant Chrome was hit by a large number of changes in one patch.
Bots That Can See the Future: “FYI Bots”

We found a way to mitigate this which we called FYI (for your information) bots. A bot is Chrome lingo for a continuous build machine which builds Chrome and runs tests.

All the existing Chrome bots at that point would build Chrome as specified in the DEPS file, which meant they would build the WebRTC version we had rolled to up to that point. FYI bots replace that pinned version with WebRTC HEAD, but otherwise build and run Chrome-level tests as usual. Therefore:

  • If all the FYI bots were green, we knew a roll most likely would go smoothly. 
  • If the bots didn’t compile, we knew we would have to adapt Chrome to an interface change in the next roll patch. 
  • If the bots were red, we knew we either had a bug in WebRTC or that Chrome would have to be adapted to some semantic change in WebRTC.
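The three signals above can be summarized as a small decision table; this toy sketch uses hypothetical names and messages (it is not Chrome infrastructure code):

```python
def triage_roll(compiles, tests_green):
    """Interpret FYI bot results ahead of a WebRTC roll into Chrome."""
    if not compiles:
        # An interface change landed; Chrome must be adapted in the roll patch.
        return "adapt Chrome to an interface change in the roll patch"
    if tests_green:
        # Green FYI bots: the next roll will most likely go smoothly.
        return "roll will most likely go smoothly"
    # Compiles but red: either a WebRTC bug or a semantic change Chrome
    # needs to be adapted to.
    return "bug in WebRTC, or a semantic change Chrome must adapt to"

status = triage_roll(compiles=True, tests_green=True)
```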
The FYI “waterfall” (a set of bots that builds and runs tests) is a straight copy of the main waterfall, which is expensive in resources. We could have cheated and just set up FYI bots for one platform (say, Linux), but the most expensive regressions are platform-specific, so we reckoned the extra machines and maintenance were worth it.
Making Gradual Interface Changes

This solution helped but wasn’t quite satisfactory. We initially had the policy that it was fine to break the FYI bots since we could not update Chrome to use a new interface until the new interface had actually been rolled into Chrome. This, however, often caused the FYI bots to be compile-broken for days. We quickly started to suffer from red blindness [1] and had no idea if we would break tests on the roll, especially if an interface change was made early in the roll cycle.

The solution was to move to a more careful update policy for the WebRTC API. For the more technically inclined, “careful” here means “following the API prime directive” [2]. Consider this example:
class WebRtcAmplifier {
  int SetOutputVolume(float volume);
};

Normally we would just change the method’s signature when we needed to:
class WebRtcAmplifier {
  int SetOutputVolume(float volume, bool allow_eleven);
};

… but this would compile-break Chrome until it could be updated. So we started doing it like this instead:
class WebRtcAmplifier {
  int SetOutputVolume(float volume);
  int SetOutputVolume2(float volume, bool allow_eleven);
};
Then we could:
  1. Roll into Chrome 
  2. Make Chrome use SetOutputVolume2 
  3. Update SetOutputVolume’s signature 
  4. Roll again and make Chrome use SetOutputVolume 
  5. Delete SetOutputVolume2
This approach requires several steps but we end up with the right interface and at no point do we break Chrome.
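The transition can be sketched as a single compilable class. This is a hypothetical illustration: the class and method names come from the post, but the bodies are invented here (the real WebRtcAmplifier is internal to WebRTC), including the 0.0–1.0 range check the footnoted volume bug was about.

```cpp
// Hypothetical sketch of the parallel-method transition described above.
// Method bodies are invented for illustration only.
class WebRtcAmplifier {
 public:
  // Old method: left untouched until Chrome has migrated off it (steps 1-4).
  int SetOutputVolume(float volume) {
    return SetOutputVolume2(volume, /*allow_eleven=*/false);
  }

  // New method: added alongside the old one, so a roll at any point in the
  // five steps still compiles against Chrome.
  int SetOutputVolume2(float volume, bool allow_eleven) {
    const float max_volume = allow_eleven ? 1.1f : 1.0f;
    if (volume < 0.0f || volume > max_volume) {
      return -1;  // reject out-of-range volumes
    }
    volume_ = volume;
    return 0;  // success
  }

 private:
  float volume_ = 0.0f;
};
```

Only after Chrome calls SetOutputVolume2 everywhere can the old signature be changed and the temporary method deleted.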
Results

When we implemented the above, we could fix problems as they came up rather than in big batches on each roll. We could institute the policy that the FYI bots should always be green, and that changes breaking them should be immediately rolled back. This made a huge difference. The team could work more smoothly and roll more often. This reduced our risk quite a bit, particularly when Chrome was about to cut a new version branch. Instead of doing panicked and risky rolls around a release, we could work out issues in good time and stay in control.

Another benefit of FYI bots is more granular performance tests. Before the FYI bots, a roll would frequently regress a bunch of metrics at once, and it’s no fun to find which of the 100 patches in the roll caused the regression! With the FYI bots, we can see precisely which WebRTC revision caused the problem.
Future Work: Optimistic Auto-rolling

The final step on this ladder (short of actually merging the repositories) is auto-rolling. The Blink team implemented this with their ARB (AutoRollBot). The bot wakes up periodically and tries to do a roll patch. If it fails on the trybots, it waits and tries again later (perhaps the trybots failed because of a flake or other temporary error, or perhaps the error was real but has been fixed).

To pull auto-rolling off, you are going to need very good tests. That goes for any roll patch (or any patch, really), but if you’re edging closer to a release and an unstoppable flood of code changes keeps breaking you, you’re not in a good place.

References

[1] Martin Fowler (May 2006) “Continuous Integration”
[2] Dani Megert, Remy Chi Jian Suen, et al. (Oct 2014) “Evolving Java-based APIs”
  1. We actually did have a hilarious bug in WebRTC where it was possible to set the volume to 1.1, but only 0.0-1.0 was supposed to be allowed. No, really. Thus, our WebRTC implementation must be louder than the others since everybody knows 1.1 must be louder than 1.0.

Categories: Software Testing

Web Performance News of the Week

LoadStorm - Fri, 05/15/2015 - 13:52

This week Bing announced it will add its own mobile-friendliness algorithm to its search results, WordPress released a security update, Google added a new Search Analytics report for web developers, and the FCC denied delay of net neutrality rules.

Bing will roll out its own mobile-friendly algorithm in the upcoming months

This week Bing announced they would be following Google’s lead, but are taking a slightly different approach to mobile-friendly search rankings. Bing announced in November that they were investing in mobile-friendly pages, and have since added “Mobile-friendly” tags to relevant sites, resulting in positive user feedback. However, Bing will not be rolling out Mobilegeddon. Instead, the mobile-friendliness signal will focus on balancing mobile-friendly pages while continuing “to focus on delivering the most relevant results for a given query.” So pages that are not mobile-friendly will not be penalized, and users can expect sites that contain more relevant results to be shown before mobile-friendly ones with less relevant content.

Shyam Jayasankar, a spokesman for the Bing Mobile Relevance team said, “This is a fine balance and getting it right took a few iterations, but we believe we are now close.”

Bing’s mobile-friendliness detection will weigh several factors; the announcement highlighted some of the more important ones:

  • Easy navigation – links should be far enough apart to easily navigate and click the right one.
  • Readability – text should be readable without requiring zooming or lateral scrolling.
  • Scrolling – sites should typically fit within the device width.
  • Compatibility – the site must only use content that is compatible with the device; i.e., no plugin issues, Flash, copyright issues on content, etc.

Bing also mentioned considering pop-ups that make it difficult to view the core of the page as a ranking signal (oh please, oh please!). They also stressed that Bingbot mobile user agents must be able to access all the necessary CSS and script files required to determine mobile-friendliness, and that they were very interested in listening to feedback on the mobile ranking.

WordPress security update addresses additional security issues

WordPress rolled out its second security update this month to address a security flaw which affected millions of websites. The vulnerability involved the vector-based icons, called Genericons, that are often included by default in WordPress sites and plugins (including the Twenty Fifteen theme). The flaw was pointed out by security researchers from Sucuri, a cloud-based security company, who noted that it may be a “bit harder to exploit” than other vulnerabilities, but could allow attackers to take control of affected sites.
The flaw leaves websites open to a cross-site scripting (XSS) vulnerability, similar to security risks we’ve seen WordPress address in the past month.
Make sure your site is up to date to keep it secure!

Google adds more precise data in their new Search Analytics report

Google has added a new feature to Webmaster Tools to help website managers understand how users find their sites and how their content will appear in Google search results. The new Search Analytics report contains data that is more recent and calculated differently from Google’s Search Queries results. The report was added to give users additional options for traffic analysis, allowing them more granularity with the ability to filter content and decompose search data for analysis. A fun example Google used to show off the new tool was a comparison of mobile traffic before and after the April 21st mobile update. The Search Queries report will remain available in Google Webmaster Tools for three more months to give webmasters time to adjust.

FCC refuses to delay net neutrality rules

USTelecom, AT&T, and CenturyLink jointly filed a petition asking the U.S. Court of Appeals for the D.C. Circuit to stay the FCC’s Open Internet order. USTelecom president Walter McCormick explained that they are “seeking to stay this ill-conceived order’s reclassification of broadband service as a public utility service.” The FCC denied the petition, refusing to delay the net neutrality rules. The digital rights group Public Knowledge commended the decision, arguing that the reclassification would enable the FCC to enforce consumer protections in the future. Several groups have filed separate lawsuits, bringing the total number of lawsuits challenging the net neutrality regulations to 10. This week, Free Press and New America’s Open Technology Institute (OTI) filed a motion to intervene in the legal challenges against the FCC’s Net Neutrality rules. In the motion, Free Press stated that it “rel[ies] on an open Internet to communicate with its members, activists, allies and the public in furtherance of its mission,” and was therefore a “party in interest in the proceeding.”

The post Web Performance News of the Week appeared first on LoadStorm.

Delivering Omnichannel Success with IBM WebSphere Commerce: A Digital Agency Perspective

I had the opportunity to talk to Jon Anhold, an Associate Technology Partner at Rosetta, a leading Digital Agency that specializes in IBM WebSphere Commerce about how they consistently deliver Omnichannel success to their clients. His feedback and responses were really insightful and I just couldn’t help but share them in their entirety! Matt: Rosetta describes its mission […]

The post Delivering Omnichannel Success with IBM WebSphere Commerce: A Digital Agency Perspective appeared first on Dynatrace APM Blog.

Categories: Load & Perf Testing


Webcast: All-vs-All: Correlation Using Spark/Hadoop

O'Reilly Media - Thu, 05/14/2015 - 13:12
The webcast covers Pearson and Spearman correlations implemented in Spark/Hadoop.

Webcast: Organizing the Internet of Things

O'Reilly Media - Thu, 05/14/2015 - 13:12
In this webcast I aim to introduce the three main branches (localization, function, and process) that we use in GO and demonstrate how they're immediately applicable in the IoT — after all, a cell is just a large, interconnected system.