Feed aggregator

Webcast: Debugging Mobile Web Apps: Tips, Tricks, Tools, and Techniques - Jul 17 2014

O'Reilly Media - Fri, 04/18/2014 - 04:21

Debugging web apps across multiple platforms and devices can be extremely difficult. Fortunately there are a few cutting edge tools that can ease the pain. Follow along as Jonathan demonstrates how - and when - to use the latest and greatest tools and techniques to debug the mobile web.

About Jonathan Stark

Jonathan Stark is a mobile strategy consultant who helps CEOs transition their business to mobile.

Jonathan is the author of three books on mobile and web development, most notably O'Reilly's Building iPhone Apps with HTML, CSS, and JavaScript, which is available in seven languages.

His Jonathan's Card experiment made international headlines by combining mobile payments with social giving to create a "pay it forward" coffee movement at Starbucks locations all over the U.S.

Hear Jonathan speak, watch his talk show, listen to his podcast (co-hosted with the incomparable @kellishaver), join the mailing list, or connect online.

Stress Testing Drupal Commerce

LoadStorm - Thu, 04/17/2014 - 07:53

I’ve had the pleasure of working with Andy Kucharski for several years on various performance testing projects. He’s recognized as one of the top Drupal performance experts in the world. He is the Founder of Promet Source and is a frequent speaker at conferences, as well as a great client of LoadStorm. As an example of his speaking prowess, he gave the following presentation at Drupal Mid Camp in Chicago 2014.

Promet Source is a Drupal web application and website development company that offers expert services and support. They specialize in building and performance tuning complex Drupal web applications. Andy’s team worked with our Web Performance Lab to conduct stress testing on Drupal Commerce in a controlled environment. He is skilled at using LoadStorm and New Relic to push Drupal implementations to the point of failure. His team tells me he is good at breaking things.

In this presentation at Drupal Mid Camp, Andy explained how his team ran several experiments in which they load tested a Drupal Commerce Kickstart site on an AWS instance and then compared how the site performed after several well-known performance tuning enhancements were applied. They compared performance improvements after enabling Drupal caching, aggregation, Varnish, and an Nginx reverse proxy.

View the slideshare below for a summary of the importance of web performance, and to see how they used LoadStorm to prove that they were able to scale Drupal Commerce from a point of failure (POF) of 100 users to 450 users. That's a tremendous 4.5x improvement in scalability.

Drupal commerce performance profiling and tunning using loadstorm experiments drupal mid camp chicago 2014 from Andrew Kucharski

The post Stress Testing Drupal Commerce appeared first on LoadStorm.

Very Short Blog Posts (16): Usability Problems Are Probably Testability Problems Too

DevelopSense - Michael Bolton - Wed, 04/16/2014 - 13:04
Want to add ooomph to your reports of usability problems in your product? Consider that usability problems also tend to be testability problems. The design of the product may make it frustrating, inconsistent, slow, or difficult to learn. Poor affordances may conceal useful features and shortcuts. Missing help files could fail to address confusion; self-contradictory […]
Categories: Software Testing

Top 3 PHP Performance Tips for Continuous Delivery

Are you developing or hosting PHP applications? Are you doing performance sanity checks along your delivery pipeline? No? Not Yet? Then start with a quick check. It only takes 15 minutes and it really pays off. As developer you can improve your code, and as somebody responsible for your build pipeline you can automate these […]
Categories: Load & Perf Testing


Seven Reasons Why Our iOS App Costs $99.99

Perf Planet - Tue, 04/15/2014 - 09:26

That’s not a typo. The paid version of the HttpWatch app really costs $99.99. We’ve had some great feedback about the app, but there’s been a fair amount of surprise and disbelief that we would attempt to sell an app for 100x more than Angry Birds:

“I like this app and what it offers; providing waterfalls charts and webpage testing tools on iOS but £70 for the professional version is way out most people’s budget. Even my company won’t pay for it at that price even though we’re Sales Engineers all about CDN & website acceleration”

“Its a perfect app, BUTTTT the profissional version is outrageously expensive! I always buy the apps that I like, but this value is prohibitively expensive! Unfeasible!”

“$100+ for pro is stupidly expensive though. I would have dropped maybe $10 on it but not $100…”

So why don’t we just drop the price to a few dollars and make everyone happy? Here are some of the reasons why.

1. We Need To Make a Profit

We stay in business by developing software and selling it at a profit. We’re not looking to sell advertising, get venture capital, build market share or sell the company. Revenue from the software we sell has to pay for salaries, equipment, software, web site hosting and all the other expenses that are incurred when developing and selling software.

Ultimately, everything we do has to pay for itself one way or another. Our app is priced to allow us to recoup our original app development costs and cover future upgrades and bug fixes.

2. After Apple’s Cut It’s Actually a $70 App

Apple takes a hefty 30% margin on every app store transaction. It doesn’t matter who you are - every app developer has the 30% deducted from their revenue.

Each sale of our app therefore yields about $70 or £40, depending on slight pricing variations in each country’s app store.

3. This Isn’t a Mass Market App

Unfortunately, the market for paid apps that are aimed at a technical audience like HttpWatch is very small compared to gaming apps like Angry Birds or apps with general appeal like Paper.

It’s unreasonable to expect narrow market apps to receive the development attention they deserve if they sell for just a few dollars. The app store is littered with great apps that have never been updated for the iPhone 5 screen size or iOS 7 because they don’t generate enough revenue to make it worthwhile.

4. Dropping the Price Doesn’t Always Increase App Revenue

In the past few years there’s been a race to the bottom, with even low-price paid apps pushed out by free apps that have in-app purchases. There’s a general expectation that apps should always be just a few dollars or free. We often hear that if we dropped the price of the app to under $10 there would be a massive increase in sales, leading to an increase in revenue.

To test this out we ran some pricing experiments, first dropping the price to $9.99 and then to $19.99, to compare the sales volume and revenue against pricing it at $99.99. There was a significant increase in sales volume, of nearly 500%, at the $9.99 price:

Interestingly, dropping the price to $19.99 seemed to make no difference to the number of sales.

However, the 90% price drop led to revenues at $9.99 dropping more than 50% compared to the $99.99 pricing.
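The arithmetic behind that result can be sketched directly. This is an illustrative model only: the unit volumes below are assumptions (the post does not publish sales figures); only the 30% App Store commission and the price points come from the text.

```python
# Illustrative model of the pricing experiment. Only the 30% App Store
# commission and the price points come from the post; the unit volumes
# are invented to roughly mirror the reported jump in sales.
APPLE_CUT = 0.30

def net_revenue(price, units):
    """Developer revenue after Apple's 30% commission."""
    return price * units * (1 - APPLE_CUT)

baseline = net_revenue(99.99, 100)   # assumed: 100 sales at $99.99
discounted = net_revenue(9.99, 600)  # assumed: ~6x the sales at $9.99

print(f"baseline:   ${baseline:,.2f}")
print(f"discounted: ${discounted:,.2f}")
print(f"change:     {discounted / baseline - 1:+.0%}")
```

Even with these generous assumed volumes, the tenfold price cut leaves revenue well below the baseline, which is the shape of the result the post reports.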

5. There’s no Upgrade or Maintenance Pricing on the App Store

For software developers the major downside of the app store is that there is no mechanism for offering upgrades or maintenance to existing app users at a reduced rate compared to the price of a new app.

For paid apps you really only get one chance to earn revenue from a customer unless you create a whole new app (e.g. as Rovio did with Angry Birds Star Wars and Angry Birds Seasons). Creating a whole new technical app at regular intervals doesn’t make sense, as so much of the functionality would be similar, and it would alienate existing users, who would have to pay the full app price to ‘upgrade’.

6. We Plan to Keep Updating the App

Charging a relatively high initial price for the app means that we can justify its continued maintenance in the face of OS and device changes as well as adding new features.

7. We Want to Interact with Customers

Talking to customers to get feedback, provide support and discuss ideas for new features has a cost.

A good programmer in the UK costs about $40 an hour. After Apple’s cut, a $10 app sale nets about $7, which pays for only about ten minutes of a programmer’s time. Therefore, spending more than ten minutes interacting with a customer who bought a $10 app effectively results in a loss on that app sale.
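That back-of-the-envelope calculation can be checked in a few lines. A minimal sketch, assuming the $40/hour rate quoted above and applying the 30% commission from point 2:

```python
# Support cost per sale: how many minutes of programmer time one app
# sale pays for, net of Apple's 30% cut. The $40/hour rate is the UK
# figure quoted in the post.
APPLE_CUT = 0.30
HOURLY_RATE = 40.0  # USD per programmer-hour

def support_minutes_covered(app_price):
    """Minutes of programmer time one sale pays for, after Apple's cut."""
    net = app_price * (1 - APPLE_CUT)
    return net / HOURLY_RATE * 60

print(f"{support_minutes_covered(10.0):.1f}")   # 10.5 minutes for a $10 app
print(f"{support_minutes_covered(99.99):.1f}")  # 105.0 minutes at $99.99
```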

Uncover Hidden Performance Issues Through Continuous Testing

LoadImpact - Tue, 04/15/2014 - 03:24

On-premise test tools, APMs, CEMs and server/network based monitoring solutions may not be giving you a holistic picture of your system’s performance; cloud-based continuous testing can.  

When it comes to application performance, a wide array of potential causes of performance issues and end-user dissatisfaction exists. It is helpful to view the entire environment, from the end user’s browser or mobile device all the way through to the web and application servers, as the complex system that it is.

Everything between the user’s browser or mobile and your code can affect performance

The state of the art in application performance monitoring has evolved to include on-premise test tools, Application Performance Management (APM) solutions, customer experience monitoring (CEM) solutions, and server- and network-based monitoring. All of these technologies seek to determine the root causes of performance problems, whether real or perceived by end users. Each of these technologies has its own merits and costs and seeks to tackle the problem from a different angle. Often a multifaceted approach is required when high-value, mission-critical applications are being developed and deployed.

On-premise solutions can blast the environment with 10+Gbit/sec of traffic in order to stress routers, switches and servers. These solutions can be quite complex and costly, and are typically used to validate new technology before it can be deployed in the enterprise.

APM solutions can be very effective in determining whether network issues are causing performance problems or the root cause is elsewhere. They will typically take packet data from a switch SPAN port or TAP (test access point), or possibly a tap-aggregation solution. APM solutions are typically “always-on” and can serve as an early warning system, detecting application problems before the help desk knows about an issue. These systems can also be very complex and will require training and professional services to get the maximum value.

What all of these solutions lack is a holistic view of the system which has to take into account edge devices (Firewalls, Anti-Malware, IPS, etc), network connectivity and even endpoint challenges such as packet loss and latency of mobile connections. Cloud-based testing platforms such as Load Impact allow both developers and application owners to implement a continuous testing methodology that can shed light on issues that can impact application performance that might be missed by other solutions.

A simple way to accomplish this is to perform a long-term (1 to 24+ hr) application response test to look for anomalies that can crop up at certain times of day. In this example I compressed the timescale and introduced my own anomalies to illustrate the effects of common infrastructure changes.
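A minimal version of such a long-running response probe might look like the following sketch. The endpoint URL, sampling interval, and anomaly threshold are placeholder assumptions, not details from the article:

```python
import time
import urllib.request

URL = "http://example.com/"  # placeholder: your application endpoint
INTERVAL_S = 60              # one sample per minute
THRESHOLD_FACTOR = 3.0       # flag samples 3x above the running average

def is_anomaly(elapsed, history, factor=THRESHOLD_FACTOR):
    """True if this sample is far above the running average so far."""
    if not history:
        return False
    return elapsed > factor * (sum(history) / len(history))

def sample_response_time(url):
    """Wall-clock seconds for one full GET of the URL."""
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=30) as resp:
        resp.read()
    return time.perf_counter() - start

def monitor(samples):
    """Collect response-time readings at a fixed interval, reporting anomalies."""
    history = []
    for _ in range(samples):
        elapsed = sample_response_time(URL)
        status = "ANOMALY" if is_anomaly(elapsed, history) else "ok"
        print(f"{status}: {elapsed:.3f}s")
        history.append(elapsed)
        time.sleep(INTERVAL_S)
```

Run over many hours and correlated with server metrics, the timestamps of flagged samples point at the kind of infrastructure events demonstrated below.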

The test environment is built on an ESXi platform and includes a 10 Gbit virtual network, a 1 Gbit physical LAN, an Untangle NG Firewall, and a 50/5 Mbit/sec internet link. For the purposes of this test the production configuration of the Untangle NG Firewall was left intact, including firewall rules and IPS protections, though QoS was disabled. Turnkey Linux was used for the Ubuntu-based Apache webserver, with 8 CPU cores and 2 GB of RAM.

It was surprising to me what did impact response times and what had no effect whatsoever.  Here are a few examples:

First up is the impact of bandwidth consumption on the link serving the webserver farm.  This was accomplished by saturating the download link with traffic, and as expected it had a dramatic impact on application response time:

At approx 14:13 link saturation occurred (50Mbit) and application response times nearly tripled as a result

Snapshot of the Untangle Firewall throughput during link saturation testing

Next up is executing a VMware snapshot of the webserver. I fully expected this to impact response times significantly, but the impact is brief. Had this been a larger VM, the impact could have been longer in duration:

This almost 4x spike in response time only lasts a few seconds and is the result of a VM snapshot

Lastly was a test to simulate network congestion on the LAN segment where the webserver is running.  

This test was accomplished using Iperf to generate 6+ Gbit/sec of network traffic to the webserver VM. While I fully expected this to impact server response times, the fact that it did not is a testament to how good the 10 Gbit vmxnet3 network driver is:

Using Iperf to generate a link-saturating 15+Gbit/sec of traffic to Apache (Ubuntu on VM)


In this test approx 5.5 Gbit/sec was generated to the webserver, with no impact whatsoever on response times

Taking a continuous monitoring approach for application performance has benefits to not only application developers and owners, but those responsible for network, security and server infrastructure.  The ability to pinpoint the moment when performance degrades and correlate that with server resources (using the Load Impact Server Metrics Agent) and other external events is very powerful.  

Oftentimes application owners do not have control over or visibility into the entire infrastructure, and having concrete “when and where” evidence makes conversations with other teams in the organization more productive.


This post was written by Peter Cannell. Peter has been a sales and engineering professional in the IT industry for over 15 years. His experience spans multiple disciplines including Networking, Security, Virtualization and Applications. He enjoys writing about technology and offering a practical perspective to new technologies and how they can be deployed. Follow Peter on his blog or connect with him on Linkedin.

Categories: Load & Perf Testing

Testing on the Toilet: Test Behaviors, Not Methods

Google Testing Blog - Mon, 04/14/2014 - 15:25
by Erik Kuefler

This article was adapted from a Google Testing on the Toilet (TotT) episode. You can download a printer-friendly version of this TotT episode and post it in your office.

After writing a method, it's easy to write just one test that verifies everything the method does. But it can be harmful to think that tests and public methods should have a 1:1 relationship. What we really want to test are behaviors, where a single method can exhibit many behaviors, and a single behavior sometimes spans across multiple methods.

Let's take a look at a bad test that verifies an entire method:

@Test public void testProcessTransaction() {
  User user = newUserWithBalance(LOW_BALANCE_THRESHOLD.plus(dollars(2)));
  transactionProcessor.processTransaction(user,
      new Transaction("Pile of Beanie Babies", dollars(3)));
  assertContains("You bought a Pile of Beanie Babies", ui.getText());
  assertEquals(1, user.getEmails().size());
  assertEquals("Your balance is low", user.getEmails().get(0).getSubject());
}

Displaying the name of the purchased item and sending an email about the balance being low are two separate behaviors, but this test looks at both of those behaviors together just because they happen to be triggered by the same method. Tests like this very often become massive and difficult to maintain over time as additional behaviors keep getting added in—eventually it will be very hard to tell which parts of the input are responsible for which assertions. The fact that the test's name is a direct mirror of the method's name is a bad sign.

It's a much better idea to use separate tests to verify separate behaviors:

@Test public void testProcessTransaction_displaysNotification() {
  transactionProcessor.processTransaction(
      new User(), new Transaction("Pile of Beanie Babies"));
  assertContains("You bought a Pile of Beanie Babies", ui.getText());
}

@Test public void testProcessTransaction_sendsEmailWhenBalanceIsLow() {
  User user = newUserWithBalance(LOW_BALANCE_THRESHOLD.plus(dollars(2)));
  transactionProcessor.processTransaction(user, new Transaction(dollars(3)));
  assertEquals(1, user.getEmails().size());
  assertEquals("Your balance is low", user.getEmails().get(0).getSubject());
}

Now, when someone adds a new behavior, they will write a new test for that behavior. Each test will remain focused and easy to understand, no matter how many behaviors are added. This will make your tests more resilient since adding new behaviors is unlikely to break the existing tests, and clearer since each test contains code to exercise only one behavior.

Categories: Software Testing

AB Testing – Episode 3

Alan Page - Mon, 04/14/2014 - 11:02

Yes – it’s more of Brent and Alan yelling at each other. But this time, Brent says that he, “hates the Agile Manifesto”. I also talk about my trip to Florida, and we discuss estimation and planning and a bunch of other stuff.

Subscribe to the ABTesting Podcast!

Subscribe via RSS
Subscribe via iTunes

(potentially) related posts:
  1. Alan and Brent talk testing…
  2. More Test Talk with Brent
  3. Thoughts on Swiss Testing Day
Categories: Software Testing

Performance Testing Insights: Part I

LoadStorm - Sat, 04/12/2014 - 18:35

Performance Testing can be viewed as the systematic process of collecting and monitoring the results of system usage, then analyzing them to aid system improvement towards desired results. As part of the performance testing process, the tester needs to gather statistical information, examine server logs and system state histories, determine the system’s performance under natural and artificial conditions and alter system modes of operation.

Performance testing complements functional testing. Functional testing can validate proper functionality under correct usage and proper error handling under incorrect usage. It cannot, however, tell how much load an application can handle before it breaks or performs improperly. Finding the breaking points and performance bottlenecks, as well as identifying functional errors that only occur under stress, requires performance testing.

The purpose of Performance testing is to demonstrate that:

1. The application processes the required business processes and transaction volumes within specified response times against a real-time production database (Speed).

2. The application can handle various user load scenarios (stresses), ranging from a sudden load “spike” to a persistent load “soak” (Scalability).

3. The application is consistent in availability and functional integrity (Stability).

4. The minimum configuration that will allow the system to meet the formally stated performance expectations of stakeholders can be determined.

When should I start testing and when should I stop?

When to Start Performance Testing:

A common practice is to start performance testing only after functional, integration, and system testing are complete; that way, it is understood that the target application is “sufficiently sound and stable” to ensure valid performance test results. However, the problem with the above approach is that it delays performance testing until the latter part of the development lifecycle. Then, if the tests uncover performance-related problems, one has to resolve problems with potentially serious design implications at a time when the corrections made might invalidate earlier test results. In addition, the changes might destabilize the code just when one wants to freeze it, prior to beta testing or the final release.

A better approach is to begin performance testing as early as possible, just as soon as any of the application components can support the tests. This will enable users to establish some early benchmarks against which performance measurement can be conducted as the components are developed.
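One lightweight way to establish such early benchmarks is a timing check that runs alongside a component's unit tests. A sketch, where the component and the response-time budget are hypothetical stand-ins:

```python
import time

RESPONSE_BUDGET_S = 0.5  # hypothetical per-call budget agreed with stakeholders

def time_call(fn, *args, repeats=5):
    """Best-of-N wall-clock time for fn(*args), in seconds."""
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        fn(*args)
        best = min(best, time.perf_counter() - start)
    return best

def component_under_test(n):
    # Stand-in for an early application component.
    return sum(i * i for i in range(n))

def test_component_meets_budget():
    # Fails the build as soon as the component regresses past its budget.
    assert time_call(component_under_test, 10_000) < RESPONSE_BUDGET_S
```

Checks like this are crude compared to a full load test, but they run on every build and catch regressions while the design is still cheap to change.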

When to Stop Performance Testing:

The conventional approach is to stop testing once all planned tests have been executed and there is a consistent and reliable pattern of performance improvement. This approach gives users accurate performance information at that instant. However, one can quickly fall behind by just standing still. The environment in which clients will run the application will always be changing, so it’s a good idea to run ongoing performance tests. Another alternative is to set up a continual performance test and periodically examine the results. One can “overload” these tests by making use of real-world conditions. Regardless of how well it is designed, one will never be able to reproduce all the conditions that the application will have to contend with in the real-world environment.

The post Performance Testing Insights: Part I appeared first on LoadStorm.


Speaking at STP Con 2014 – Top Performance Mistakes

I am pleased to announce that the team from STPCon has invited me back to speak at their conference next week, April 14-17 in New Orleans. I will be presenting highlights from the top performance landmines we have seen over the last year to educate testers on what to look out for during testing before […]
Categories: Load & Perf Testing

Speaking at STP Con 2014 – Top Performance Mistakes

Perf Planet - Fri, 04/11/2014 - 08:12
I am pleased that the team from STP Con has invited me back to speak at their upcoming conference next week, April 14-17 in New Orleans. I will be presenting the highlights of the top performance landmines we have seen over the last year to educate testers on what to look out for during testing before […]

Infographic: How Engaged Are Your Users?

Perf Planet - Thu, 04/10/2014 - 14:00

Engagement means many things to many people. Quantifying engagement online is up to you, your team, and the way your website operates. (Teams might use time on site, pages per visit, conversions, or something else entirely.) But whatever metrics you use to define success online, there are some truths we hold self-evident in online marketing -- for instance, if your website is down, nobody is being engaged. This list is based on those fundamental truths. Use this checklist to find out where your site stands among techniques and stats that are guaranteed to improve engagement, regardless of what metrics you use.

Write Code Every Day

Perf Planet - Thu, 04/10/2014 - 12:28

Last fall work on my coding side projects came to a head: I wasn’t making adequate progress and I couldn’t find a way to get more done without sacrificing my ability to do effective work at Khan Academy.

There were a few major problems with how I was working on my side projects. I was primarily working on them during the weekends and sometimes in the evenings during the week. This is a strategy that does not work well for me, as it turns out. I was burdened with an incredible amount of stress to try and complete as much high quality work as possible during the weekend (and if I was unable to it felt like a failure). This was a problem as there’s no guarantee that every weekend will be free – nor that I’ll want to program all day for two days (removing any chance of relaxation or doing anything fun).

There’s also the issue that a week between working on some code is a long time; it’s very easy to forget what you were working on or where you left off (even if you keep notes). Not to mention that if you miss a weekend you end up with a two-week gap as a result. That massive multi-week context switch can be deadly (I’ve had many side projects die due to attention starvation like that).

Inspired by the incredible work that Jennifer Dewalt completed last year, in which she taught herself programming by building 180 web sites in 180 days, I felt compelled to try a similar tactic: working on my side projects every single day.

Illustration by Steven Resig

I decided to set a couple rules for myself:

  1. I must write code every day. I can write docs, or blog posts, or other things but it must be in addition to the code that I write.
  2. It must be useful code. No tweaking indentation, no code re-formatting, and if at all possible no refactoring. (All these things are permitted, but not as the exclusive work of the day.)
  3. All code must be written before midnight.
  4. The code must be Open Source and up on Github.

Some of these rules were arbitrary. The code doesn’t technically need to be written before midnight of the day in question, but I wanted to avoid staying up too late writing sloppy code. Neither does the code have to be Open Source or up on Github. That rule just forced me to be more mindful of the code that I was writing (thinking about reusability and deciding to create modules earlier in the process).

Thus far I’ve been very successful: I’m nearing 20 weeks of consecutive work. I wanted to write about it as it’s completely changed how I code and has had a substantial impact upon my life and psyche.

With this in mind a number of interesting things happened as a result of this change in habit:

Minimum viable code. I was forced to write code for no less than 30 minutes a day. (It’s really hard to write meaningful code in less time, especially after remembering where you left off the day before.) Some week days I work a little bit more (usually no more than an hour) and on weekends I’m sometimes able to work a full day.

Code as habit. It’s important to note that I don’t particularly care about the outward perception of the above Github chart. I think that’s the most important takeaway from this experiment: this is about a change that you’re making in your life for yourself, not a change that you’re making to satisfy someone else’s perception of your work. The same goes for any form of dieting or exercise: if you don’t care about improving yourself then you’ll never actually succeed.

Battling anxiety. Prior to starting this experiment I would frequently feel a high level of anxiety over not having completed “enough” work or made “enough” progress (both of which are relatively unquantifiable, as my side projects had no specific deadlines). I realized that the feeling of making progress is just as important as making actual progress. This was an eye-opener. Once I started to make consistent progress every day, the anxiety started to melt away. I felt at peace with the amount of work that I was getting done and I no longer had the overbearing desire to frantically get work done.

Weekends. Getting work done on weekends used to be absolutely critical to making forward momentum (as weekends were, typically, the only time in which I got significant side project coding done). That’s not so much the case now, and that’s a good thing. Building up a week’s worth of expectations about what I should accomplish during the weekend only ended up leaving me disappointed. I was rarely able to complete all the work that I wanted, and it forced me to reject other weekend activities that I enjoyed (eating dim sum, visiting museums, going to the park, spending time with my partner, etc.) in favor of getting more work done. I strongly feel that while side projects are really important, they should not be pursued to the exclusion of life in general.

Background processing. An interesting side effect of writing side project code every day is that your current task is frequently running in the back of your mind. Thus when I go for a walk, or take a shower, or any of the other non-brain-using activities I participate in, I’m thinking about what I’m going to be coding later and finding a good way to solve that problem. This did not happen when I was working on the code once a week, or every other week. Instead that time was consumed thinking about some other task or, usually, replaced with anxiety over not getting any side project work done.

Context switch. There’s always going to be a context switch cost when resuming work on a side project. Unfortunately it’s extremely hard to resume thinking about a project after an entire week of working on another task. Daily work has been quite helpful in this regard as the time period between work is much shorter, making it easier to remember what I was working on.

Work balance. One of the most important aspects of this change was in simply learning how to better balance work/life/side project. Knowing that I was going to have to work on the project every single day I had to get better at balancing my time. If I was scheduled to go out in the evening, and not get back until late, then I would need to work on my side project early in the day, before starting my main Khan Academy work. Additionally if I hadn’t finished my work yet, and I was out late, then I’d hurry back home to finish it up (instead of missing a day). I should note that I’ve been finding that I have less time to spend on hobbies (such as woodblock printing) but that’s a reasonable tradeoff that I’ll need to live with.

Outward perception. This has all had the added benefit of communicating this new habit externally. My partner understands that I have to finish this work every day, and thus activities sometimes have to be scheduled around it. It’s of considerable comfort to be able to say “Yes, we can go out/watch a movie/etc. but I have to get my coding in later” and have that be understood and taken into consideration.

How much code was written? I have a hard time believing how much code I’ve written over the past few months. I created a couple of new web sites, re-wrote some frameworks, and created a ton of new node modules. I’ve written so much that I sometimes forget the things I’ve made; work from even a few weeks prior seems like a distant memory. I’m extremely pleased with the amount of work that I’ve gotten done.

I consider this change in habit to be a massive success and hope to continue it for as long as I can. In the meantime I’ll do all that I can to recommend this tactic to others who wish to get substantial side project work done. Let me know if this technique does, or doesn’t, work for you – I’m very interested in hearing additional anecdotes!

Discuss this post on Hacker News.

Users, Usage, Usability, and Data

Alan Page - Thu, 04/10/2014 - 10:33

The day job (and a new podcast) have been getting the bulk of my time lately, but I’m way overdue to talk about data and quadrants.

If you need a bit of context or a refresher on my stance, this post talks about my take on Brian Marick’s quadrants (used famously by Gregory and Crispin in their wonderful Agile Testing book). I assert that the left side of the quadrant is well suited for programmer ownership, and that the right side is suited for quality / test team ownership. I also assert that the right-side knowledge can be obtained through data, and that one could gather what they need in production, from actual customer usage.

And that’s where I’ll try to pick up.

Agile Testing labels Q3 as “Tools”, and Q4 as “Manual”. This is (or can be) generally true, but I claim that it doesn’t have to be true. Yes, there are some synthetic tests you want to run locally to ensure performance, reliability, and other Q3 activities, but you can get more actionable data by examining data from customer usage. Your top-notch performance suite doesn’t matter if your biggest slowdown occurs on a combination of graphics card and bus speed that you don’t have in your test lab. Customers use software in ways we can’t imagine – and on a variety of configurations that are practically impossible to duplicate in labs. Similarly, stress suites are great – but knowing what crashes your customers are seeing, as well as the error paths they are hitting is far more valuable. Most other “ilities” can be detected from customer usage as well.

Evaluating Q4 from data is …interesting. The list of items in the graphic above is from Agile Testing, but note that Q4 is the quadrant labeled (using my labels) Customer Facing / Quality Product. You do Alpha and Beta testing in order to get customer feedback (which *is* data, of course), but beyond that, I need to make somewhat larger leaps.

To avoid any immediate arguments, I’m not saying that exploratory testing can be replaced with data, or that exploratory testing is no longer needed. What I will say is that not even your very best exploratory tester can represent how a customer uses a product better than the actual customer can.

So let’s move on to scenarios and, to some extent, usability testing. Let’s say that one of the features / scenarios of your product is “Users can use our client app to create a blog post, and post it to their blog”. The “traditional” way to validate this scenario is to make a bunch of test cases (either written down in advance (yuck) or discovered through exploration) that create blog entries with different formatting and options, and then make sure they can be posted to whatever blog services are supported. We would also dissect the crap out of the scenario and ask a lot of questions about every word until all ambiguity is removed. There’s nothing inherently wrong with this approach, but I think we can do better.

Instead of the above, tweak your “testing” approach. Instead of asking “Does this work?” or “What would happen if…?”, ask “How will I know if the scenario was completed successfully?” For example, if you knew:

  • How many people started creating a blog post in our client app?
  • Of the above set, how many post successfully to their blog?
  • What blog providers do they post to?
  • What error paths are being hit?
  • How long does posting to their blog take?
  • What sort of internet connection do they have?
  • How long does it take for the app to load?
  • After they post, do they edit the blog immediately (is it WYSIWYG)?
  • etc.

With the above, you can begin to infer a lot about how people use your application, discover outliers, and answer questions – and perhaps discover new questions you want answered. And to get an idea of whether they liked the experience, perhaps you could track things like:

  • How often do people post to their blog from our client app?
  • When they encounter an error path, what do they do? Try again? Exit? Uninstall?
  • etc.
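The questions in the two lists above boil down to lightweight event telemetry. A minimal sketch of what such instrumentation might look like – all event names, fields, and the `track` helper are hypothetical, not from the post:

```python
import json
import time

def track(event: str, **props) -> str:
    """Serialize one usage event; a real client would ship this to an analytics backend."""
    record = {"event": event, "ts": time.time(), **props}
    return json.dumps(record)

# Instrument the blog-posting scenario's key checkpoints:
print(track("post_started"))
print(track("post_succeeded", provider="wordpress", duration_ms=840))
print(track("post_failed", error="auth_expired"))
```

Aggregated over real users, records like these answer “how many start vs. succeed”, “which providers”, and “which error paths” directly from production usage, with no test case asserting anything.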

Of course, you can get subjective data as well via short surveys. These tend to annoy people but, used strategically and sparingly, can help you gauge the true customer experience. I know of at least one example at Microsoft where customers were asked to provide a star rating and feedback after using an application – over time, the team could use available data to accurately predict what star rating customers would give their experience. I believe that’s a model that can be reproduced frequently.
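One of the questions listed earlier – when users hit an error path, do they retry, exit, or uninstall? – is answerable with a few lines of analysis over collected events. A sketch using a made-up event stream (the event names are illustrative only):

```python
from collections import Counter

# Hypothetical event stream: each user's first action after hitting the error path.
events = [
    {"user": 1, "event": "post_failed"}, {"user": 1, "event": "post_retried"},
    {"user": 2, "event": "post_failed"}, {"user": 2, "event": "app_exited"},
    {"user": 3, "event": "post_failed"}, {"user": 3, "event": "post_retried"},
]

# Count the follow-up actions, ignoring the failures themselves.
followups = Counter(e["event"] for e in events if e["event"] != "post_failed")
print(followups.most_common())  # [('post_retried', 2), ('app_exited', 1)]
```

If most users retry, the error is an annoyance; if most exit or uninstall, it is a fire – a distinction no synthetic test suite will surface.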

Does a data-prominent strategy work everywhere? Of course not. Does it replace the need for testing? Don’t even ask – of course not. Before taking too much of the above to heart, answer a few questions about your product. If your product is a web site or web service or anything else you can update (or roll back) as frequently as you want, of course you want to rely on data as much as possible for Q3 and Q4. But, even for “thick” apps that run on a device (computer, phone, toaster) that’s always connected, you should also consider how you can use data to answer questions typically asked by test cases.

But look – don’t go crazy. There are a number of products where long tests (what I call Q3 and Q4 tests) can be replaced entirely by data. But don’t blindly decide that you no longer need people to write stress suites or do exploratory testing. If you can’t answer your important questions from analyzing data, by all means use people with brains and skills to help you out. And even if you think you can get all your answers with data, use people as a safety net while you make the transition. It’s quite possible (probable?) to gather a bunch of data that isn’t actually the data you need, then mis-analyze it and ship crap people don’t want – that’s not a trap you want to fall into.

Data is a powerful ally. How many times, as a tester, have you found an issue and had to convince someone it was something that needed to be fixed or customers would rebel? With data, rather than rely on your own interpretation of what customers want, you can make decisions based on what customers are actually doing. For me, that’s powerful, and a strong statement towards the future of software quality.

(potentially) related posts:
  1. Finding Quality
  2. It’s all just testing
  3. Being Forward
Categories: Software Testing

Load Impact: Closed Vulnerability to Heartbleed Bug

LoadImpact - Thu, 04/10/2014 - 03:57

As you may have heard, a serious bug in the OpenSSL library was recently found. The bug, known colloquially as “Heartbleed” (CVE-2014-0160), impacted an estimated two-thirds of sites on the internet – including Load Impact.

While Load Impact has no evidence of anyone exploiting this vulnerability, we have taken action to mitigate all risks and are no longer vulnerable. 

The vulnerability has existed in OpenSSL for the past two years and, during this time, could have been used by malicious hackers to target a specific online service or site and covertly read random traffic between the site and its users. Over time, this means an attacker could gather sensitive information such as account details, passwords, encryption keys, etc. used by the site or its users.

Many sites have unknowingly been vulnerable to this bug for the past two years, and most probably have little or no information about whether they have been targeted by hackers, as the attack would appear to be an entirely legitimate request and is unlikely even to be logged by most systems.

We advise you to be aware of this issue and ask your various online service providers for information if they haven’t provided you an update already. You should also consider changing your passwords on most systems you have been using for the past two years.
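As a rough self-check (a sketch, not part of Load Impact’s advisory): the builds affected by CVE-2014-0160 are OpenSSL 1.0.1 through 1.0.1f, so the version string reported by `openssl version` can be classified like this (pre-release suffixes and other edge cases are ignored for simplicity):

```python
def is_heartbleed_vulnerable(version: str) -> bool:
    """Return True for OpenSSL 1.0.1 through 1.0.1f (CVE-2014-0160)."""
    if version == "1.0.1":
        return True
    # Releases in the branch carry a single trailing letter, e.g. "1.0.1e".
    return (version.startswith("1.0.1")
            and len(version) == 6
            and "a" <= version[5] <= "f")

print(is_heartbleed_vulnerable("1.0.1e"))  # True: in the affected range
print(is_heartbleed_vulnerable("1.0.1g"))  # False: the patched release
print(is_heartbleed_vulnerable("0.9.8"))   # False: branch never affected
```

Note that a patched library alone is not enough – certificates, keys, and passwords exposed while the bug was live should still be rotated, as described above.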

Load Impact has only been vulnerable to this bug since October 2013 – when we started using Amazon’s SSL service (through Amazon’s ELBs) – so our exposure is limited. However, since there is still a risk that someone may have stolen information from us in the past six months, we have now replaced our SSL certificates and keys. 

As an extra precaution, we advise our users to:

  • Create a new password
  • Generate new API keys

Feel free to contact us if you have any questions.

More info on the OpenSSL “Heartbleed bug” can be found here: http://heartbleed.com/

Categories: Load & Perf Testing

How Does Web Page Speed Affect Conversions [INFOGRAPHIC]

Web Performance Today - Wed, 04/09/2014 - 18:23

Performance has only recently started to make headway into the conversion rate optimization (CRO) space. These inroads are long overdue, but still, it’s good to see movement. In the spirit of doing my part to hustle things along, here’s a collection of infographics representing real-world examples of the huge impact of page speed on conversions.


The post How Does Web Page Speed Affect Conversions [INFOGRAPHIC] appeared first on Web Performance Today.

Out-Of-Touch People Want Metrics

Eric Jacobson's Software Testing Blog - Wed, 04/09/2014 - 12:19

The second thing (here is the first) Scott Barber said that stayed with me is this:

The more removed people are from IT workers, the higher their desire for metrics.  To paraphrase Scott, “the managers on the floor, in the cube farms, agile spaces or otherwise with their teams most of the time, don’t use a lot of metrics because they just feel what’s going on.”

It seems to me that those higher-up people dealing with multiple projects don’t have (as much) time to visit the cube farms, and they know summarized information is the quickest way to learn something.  The problem is, too many of them think:


It hadn’t occurred to me until Scott said it.  That, alone, does not make metrics bad.  But it helps me to understand why I (as a test manager) don’t bother with them but I spend a lot of time fending off requests for them from out-of-touch people (e.g., directors, other managers).  Note: by “out-of-touch” I mean out-of-touch with the details of the workers.  Not out-of-touch in general.

Scott reminds us the right way to find the right metric for your team is to start with the question:

What is it we’re trying to learn?

I love that.  Maybe a metric is not the best way of learning.  Maybe it is.  If it is, perhaps coupling it with a story will help explain the true picture.

Thanks Scott!

Categories: Software Testing

Infographic: Web Performance Impacts Conversion Rates

LoadStorm - Wed, 04/09/2014 - 09:46

There are many companies that design or redesign websites with the goal of increasing conversion rates. What do we mean by conversion rate? SiteTuners defines it as “the percentage of landing page visitors who take the desired conversion action.” Examples include completing a purchase, filling out an information form, or donating to a cause.
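That definition is simple arithmetic; a minimal sketch, with hypothetical traffic numbers:

```python
def conversion_rate(conversions: int, visitors: int) -> float:
    """Percentage of landing-page visitors who take the desired action."""
    if visitors == 0:
        return 0.0
    return 100.0 * conversions / visitors

# Hypothetical example: 150 completed purchases out of 5,000 visitors
print(conversion_rate(150, 5000))  # 3.0
```

Even a small shift in this percentage compounds across total traffic, which is why the load-time statistics below matter so much.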

Almost all companies focus on the design, look, and layout of a website when attempting to increase conversion rates. However, we at LoadStorm want to draw more attention to the role that web performance plays in conversions. Many studies have conclusively shown that a delay in page load time negatively affects conversions. Check out our latest infographic for a summary of the statistics!



1. http://www.aberdeen.com/Aberdeen-Library/5136/RA-performance-web-application.aspx

2. https://econsultancy.com/blog/10936-site-speed-case-studies-tips-and-tools-for-improving-your-conversion-rate

3. http://www.globaldots.com/how-website-speed-affects-conversion-rates/

4. http://www.crestechglobal.com/wp-content/uploads/2013/10/WhitePaper_AgileBottleneck_Performance.pdf

5. http://www.webperformancetoday.com/2013/12/11/slower-web-pages-user-frustration/

6. http://kylerush.net/blog/meet-the-obama-campaigns-250-million-fundraising-platform

7. http://www-01.ibm.com/software/th/collaboration/webexperience/index.html

The post Infographic: Web Performance Impacts Conversion Rates appeared first on LoadStorm.