Software Testing

Get Your Automated Checks In Their Face

Eric Jacobson's Software Testing Blog - Thu, 03/03/2016 - 13:03
If a tree falls in the forest and nobody hears it, does it make a sound?  If you have automated checks and nobody knows it, does it make an impact?

To me, the value of any given suite of automated checks depends on its usage...

Categories: Software Testing

Category: Bug

QA Hates You - Thu, 03/03/2016 - 06:04

You know I log every instance of controls/edit boxes/drop-down lists where the lower-cased g gets chopped at the bottom.

Well, except this one in the FogBugz defect tracker itself:

Internet Explorer is the worst offender in this regard, but the screenshot above is from Firefox.

Now you know why Roger Dougherty, single, born in August and living at 1021 Brighton Way, Harrisburg, Oregon always signs up for applications I test.

Categories: Software Testing

Test Planning Is Throwaway, Testing Is Forever

Eric Jacobson's Software Testing Blog - Mon, 02/29/2016 - 16:10

FeatureA will be ready to test soon.  You may want to think about how you will test FeatureA.  Let’s call this activity “Test Planning”.  In Test Planning, you are not actually interacting with the product-under-test.  You are thinking about how you might do it.  Your Test Planning might include, but is not limited to, the following:

  • Make a list of test ideas you can think of.  A Test Idea is the smallest amount of information that can capture the essence of a test.
  • Grok FeatureA:  Analyze the requirements document.  Talk to available people.
  • Interact with the product-under-test before it includes FeatureA.
  • Prepare the test environment data and configurations you will use to test.
  • Note any specific test data you will use.
  • Determine what testing you will need help with (e.g., testing someone else should do).
  • Determine what not to test.
  • Share your test plan with anyone who might care.  At least share the test ideas (first bullet) with the product programmers while they code.
  • If using automation, design the check(s).  Stub them out.
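That last bullet can be done before FeatureA even exists. A minimal sketch of stubbed-out checks (the names and structure here are hypothetical, not from the post):

```javascript
// Stubbed-out checks for FeatureA, written during Test Planning,
// before FeatureA is testable. run stays null until the feature ships.
const checks = [
  { idea: 'FeatureA rejects empty input', run: null },
  { idea: 'FeatureA persists a valid record', run: null },
];

// Run whatever is implemented; report everything else as pending.
function runSuite(suite) {
  return suite.map(c => ({
    idea: c.idea,
    status: typeof c.run === 'function' ? (c.run() ? 'pass' : 'fail') : 'pending',
  }));
}

console.log(runSuite(checks)); // both checks report 'pending' for now
```

The stubs double as a shareable list of test ideas: the `idea` strings are exactly what you would hand to the programmers while they code.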

All the above are Test Planning activities.  About four of the above resulted in something you wrote down.  If you wrote them in one place, you have an artifact.  The artifact can be thought of as a Test Plan.  As you begin testing (interacting with the product-under-test), I think you can use the Test Plan one of two ways:

  1. Morph it into “Test Notes” (or “Test Results”).
  2. Refer to it then throw it away.

Either way, we don’t need the Test Plan after the testing.  Just like we don’t need those other above Test Planning activities after the testing.  Plans are more useful before the thing they plan.

Execution is more valuable than a plan.  A goal of a skilled tester is to report on what was learned during testing.  The Test Notes are an excellent way to do this.  Attach the Test Notes to your User Story.  Test Planning is throwaway.

Categories: Software Testing

Is Automated Checking Valuable For Data Warehouses?

Eric Jacobson's Software Testing Blog - Fri, 02/26/2016 - 15:18

My data warehouse team is adopting automated checking.  Along the way, we are discovering some doubters.  Doubters are a good problem.  They challenge us to make sure automation is appropriate.  In an upcoming meeting, we will try to answer the question in this blog post title.

My short answer:  Yes.

My long answer:  See below.

The following are data warehouse (or database) specific:

  • More suited to machines – Machines are better than humans at examining lots of data quickly. 
  • Not mentally stimulating for humans (the other side of the above reason) – Manual DB testers are hard to find.  Testers tend to like front-ends, so they gravitate toward app dev teams.  DB testers need technical skills (e.g., DB dev skills), and people who have those skills prefer to do DB dev work.
  • Straightforward repeatable automation patterns – For each new dimension table, we normally want the same types of automated checks.  This makes automated check design easier and faster to code.  The entire DW automation suite contains fewer design patterns than the average application.
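That repeatable pattern for dimension tables might look like this sketch. The helper and column names are hypothetical; in practice `rows` would come from a warehouse query rather than an inline array:

```javascript
// The same checks apply to every new dimension table:
// it has rows, its surrogate keys are non-null, and they are unique.
function dimensionChecks(rows, keyColumn) {
  const keys = rows.map(r => r[keyColumn]);
  return {
    hasRows: rows.length > 0,
    noNullKeys: keys.every(k => k !== null && k !== undefined),
    uniqueKeys: new Set(keys).size === keys.length,
  };
}

// Illustrative stand-in for a query result against dim_customer.
const dimCustomer = [
  { customer_key: 1, name: 'Acme' },
  { customer_key: 2, name: 'Globex' },
];
console.log(dimensionChecks(dimCustomer, 'customer_key'));
// → { hasRows: true, noNullKeys: true, uniqueKeys: true }
```

Because the pattern is the same for each table, adding checks for a new dimension is mostly a matter of supplying the table and key-column names.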

The following are not limited to data warehouse (or database):

  • Time to market – (Automated checks) help you go faster.  Randy Shoup says it well at 9:55 in this talk.  Writing quick and dirty software leads to technical debt, which leads to no time to do it right (“technical debt vicious cycle”).  Writing automated checks as you write software leads to a solid foundation, which leads to confidence, which leads to faster and better (“virtuous cycle of quality”)…Randy’s words.
  • Regression checking - In general, machines are better than humans at indicating something changed. 
  • Get the most from your human testing - Free the humans to focus on deep testing of new features, not shallow testing of old features.
  • In case the business ever changes their mind - If you ever have to revisit code to make changes or refactor, automated checks will help you do it more quickly.  If you think the business will never change their mind, then maybe automation is not as important.
  • Automated checks help document current functionality.
  • Easier to fix problems - Automated checks triggered in a Continuous Integration find problems right after code is checked in.  These problems are usually easier to fix when fresh in a developer’s mind.

Categories: Software Testing

Coming to Dallas/Ft. Worth - Testing Mobile Applications - ASTQB Certification Course - April 19 - 20, 2016

Randy Rice's Software Testing & Quality - Thu, 02/25/2016 - 09:58
I hope you can join me for this special course presentation for the new Certified Mobile Tester (CMT) designation from the American Software Testing Qualifications Board.

We will be in the DFW area (Irving, TX) on Tuesday, April 19 and Wednesday, April 20 at the Holiday Inn Express, 4235 West Airport Freeway, Irving, TX 75062.

Seating is limited, so I recommend registering as soon as possible to get your place. (This class size is limited to 15 people.)

To register:

About the Course and Certification

In the fall of 2015, the ASTQB (of which I am on the board of directors) felt there was a compelling need for testers to have a robust and meaningful certification focused on testing mobile applications. So, we set about writing a syllabus and exam for that certification. I was honored to contribute as a co-author of the syllabus. At the present, this is a certification only offered by the ASTQB.

To learn more about the certification, see the syllabus and sample exam, just go to:

To see the course outline, go to:

There are no prerequisites for this certification! While we reference concepts from the ISTQB Foundation Level, everything you need to know is taught in this course.

We have exercises to reinforce key concepts and sample exams after each module to give you a taste of what to expect on the actual exam. You can also bring your own mobile device as a way to perform the exercises, although this is not required.

Costs and Logistics

Cost: $1,500 USD per person, plus exam ($150).

You can attend the course without taking the exam. However, we will be offering a live exam at the end of the 2nd day.

There will also be the option to take the exam later electronically, if you desire. However, we need to know your preference 2 weeks in advance.

This class will be streamed live, so if you want to attend virtually, that is possible. There is a $100 discount for virtual attendees. Virtual attendees in the USA will receive a course notebook in advance of the training and will also have access to the e-learning course at no extra cost.

For teams of 3 or more, there is a 10% discount of the course registration fee. The exams are not discounted.

We will not have breakfast items, however, we will have a light lunch (pizza, sandwiches, etc.) brought in each day. Please let us know if you have any dietary needs or requests.

Important Notice for Those Who Plan to Travel to DFW to Attend

Please do not book any non-refundable travel (air fare, hotel, etc.) until we confirm the class. We make every attempt to not cancel a class, but sometimes this is unavoidable. We make the call about 2 - 3 weeks in advance of the class, or earlier, if possible.

We located this class to be close to the DFW airport for the convenience of those who may be traveling in for the class. The hotel runs a free airport shuttle.

You are responsible for making your own hotel reservations.

If you plan to take the exam on Day 2, the exam will take place from 3:30 p.m. to 4:30 p.m.  Please allow adequate time to catch your flight. The hotel is 6.4 miles from the DFW airport.

Other Questions?

Feel free to call our office with any questions or special needs - 405-691-8075.

Categories: Software Testing

QA Music – Better Relationships with Co-Workers

QA Hates You - Mon, 02/22/2016 - 03:10

“The Monster” by Eminem

I’m not friends with the monsters under my bed. I’ve frightened them all away.

Categories: Software Testing

Introducing Claudia.js – deploy Node.js microservices to AWS easily

The Quest for Software++ - Sun, 02/21/2016 - 16:00

I’m proud to announce the 1.0 release of Claudia.js, a new open-source deployment tool for JavaScript developers interested in running microservices in AWS. AWS Lambda and API Gateway offer scalability on demand, zero operations overhead and almost free execution, priced per use, so they are a very compelling way to run server-side code. However, they can be tedious to set up, especially for simple scenarios. The runtime is oriented towards executing Java code, so running Node.js functions requires you to iron out quite a few issues that aren’t exactly well documented. Claudia.js automates and simplifies deployment workflows and error-prone tasks, so you can focus on important problems and not have to worry about AWS service quirks. Even better, it sets everything up the way JavaScript developers expect, so you’ll feel right at home.

Check out the video below for an example of how you can set up and deploy a new API in less than five minutes!

Claudia.js is available from NPM, and the source code is on GitHub!

Not Only Wireframes, But Yes, Wireframes

QA Hates You - Thu, 02/18/2016 - 10:23

You know, I like to get a look at any and all artifacts as soon as possible to see if I can spot any flaws as early as I can. This includes comps, prototypes, copy, and wireframes, where I hope to catch oversights before they get into the code.

But in addition to looking for oversights, I always wanted to review the documents qua documents, especially if your company is providing wireframes, comps, prototypes, copy, and so on to the client for review. It gives you a chance to catch mistakes, misspellings, improper branding, and inconsistencies before your client can look at them and think, “Ew, these guys can’t spell our name right on the wireframes. What would they do to our Web site?”

Yes, I did review RFP responses and proposals as well.

The Purple One links to this article entitled Wireframes – Should They Really Be Tested? And If So, How?

New trainees came on board and we had a training class to learn software testing concepts. After seeing those enthusiastic faces with their almost blank-slate minds (professionally), I decided to take a detour to my routine training.

After a brief introduction, instead of talking about software testing like I normally do, I threw a question at the fresh minds: ‘Can anyone explain to me what a wireframe is?’

The answer was a pause and thus, we decided to discuss it. And that is how it started – Wireframe/Prototype Testing

This should provide a good argument and overview if you need one.

Categories: Software Testing

Automated Checking Is Very Human

Eric Jacobson's Software Testing Blog - Wed, 02/17/2016 - 15:20

I read A Context-Driven Approach to Automation in Testing at the gym this morning.  I expected the authors to hate on automation but they didn’t.  Bravo.

They contrasted the (popular) perception that automation is cheap and easy because you don’t have to pay the computer, with the (not so popular) perception that automation requires a skilled human to design, code, maintain, and interpret the results of the automation.  That human also wants a paycheck.

Categories: Software Testing

New Software Development Employee Orientation Guide

QA Hates You - Wed, 02/17/2016 - 03:31

You owe it to yourself to make your new co-workers read this: Living in the Age of Software Fuckery: Ten Anti-patterns and Malpractices in Modern Software Development

Well, all except the new managers. They already teach this stuff in MBA and MIS programs, but as a good idea.

Link via iDisposable.

Categories: Software Testing

EarlGrey - iOS Functional UI Testing Framework

Google Testing Blog - Tue, 02/16/2016 - 16:14
By Siddartha Janga on behalf of Google iOS Developers 

Brewing for quite some time, we are excited to announce EarlGrey, a functional UI testing framework for iOS. Several Google apps like YouTube, Google Calendar, Google Photos, Google Translate, Google Play Music and many more have successfully adopted the framework for their functional testing needs.

The key features offered by EarlGrey include:

  • Powerful built-in synchronization: Tests will automatically wait for events such as animations, network requests, etc. before interacting with the UI. This will result in tests that are easier to write (no sleeps or waits) and simple to maintain (a straight-up procedural description of test steps). 
  • Visibility checking: All interactions occur on elements that users can see. For example, attempting to tap a button that is behind an image will lead to immediate test failure. 
  • Flexible design: The components that determine element selection, interaction, assertion and synchronization have been designed to be extensible. 

In need of a cup of refreshing EarlGrey? EarlGrey has been open-sourced under the Apache license. Check out the getting started guide and add EarlGrey to your project using CocoaPods, or manually add it to your Xcode project file.

Categories: Software Testing

QA Career Advice from Barron’s

QA Hates You - Tue, 02/16/2016 - 03:58

Last week’s Barron’s had an article that pretty much covers the best way to enjoy a long career in QA.

Categories: Software Testing

QA Music: One for the Introverts at the Conference

QA Hates You - Mon, 02/15/2016 - 03:21

QA or the Highway is coming up this week, so now it’s time for our long distance dedication to the introverts at the conference. It’s Alessia Cara with “Here”:

To be honest, I’ve held entire jobs where I felt this way.

Categories: Software Testing

Fortunately, QA Can Hit That Hotkey

QA Hates You - Fri, 02/12/2016 - 10:55

My first ticket logged in Jira, and my first bug found in Jira.

Fortunately, I have an undefined key on my special QA-language keyboard.

Categories: Software Testing

Critical Metric: Critical Resources

Steve Souders - Wed, 02/10/2016 - 20:37

A big change in the World of Performance for 2015 [this post is being cross-posted from the 2015 Performance Calendar] is the shift to metrics that do a better job of measuring the user experience. The performance industry grew up focusing on page load time, but teams with more advanced websites have started replacing PLT with metrics that have more to do with rendering and interactivity. The best examples of these new UX-focused metrics are Start Render and Speed Index.

Start Render and Speed Index

A fast start render time is important for a good user experience because once users request a new page, they’re left staring at the old page or, even worse, a blank screen. This is frustrating for users because nothing is happening and they don’t know if the site is down, if they should reload the page, or if they should simply wait longer. A fast start render time means the user doesn’t have to experience this frustration because she is reassured that the site is working and delivering upon her request.

Speed Index, a metric developed by Pat Meenan as part of WebPageTest, is the average time at which visible parts of the page are displayed. Whereas start render time captures when the rendering experience starts, Speed Index reflects how quickly the entire viewport renders. These metrics measure different things, but both focus on how quickly pages render which is critical for a good user experience.

Critical Resources

The main blockers to fast rendering are stylesheets and synchronous scripts. Stylesheets block all rendering in the page until they finish loading. Synchronous scripts (e.g., <script src="main.js">) block rendering for all following DOM elements. Therefore, synchronous scripts in the HEAD of the page block the entire page from rendering until they finish loading.

I call stylesheets and synchronous scripts “critical blocking resources” because of their big impact on rendering. A few months back I decided to start tracking this as a new performance metric as part of SpeedCurve and the HTTP Archive. Most performance services already have metrics for scripts and stylesheets, but a separate metric for critical resources is different in a few ways:

  • It combines stylesheets and synchronous scripts into a single metric, making it easier to track their impact.
  • It only counts synchronous scripts. Asynchronous scripts don’t block rendering so they’re not included. The HTTP Archive data for the world’s top 500K URLs shows that the median website has 10 synchronous scripts and 2 async scripts, so ignoring those async scripts gives a more accurate measurement of the impact on rendering. (I do this as a WebPageTest custom metric. The code is here.)
  • Synchronous scripts loaded in iframes are not included because they don’t block rendering of the main page. (I’m still working on code to ignore stylesheets in iframes.)
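The counting rules above can be sketched as a small function. In a browser (or a WebPageTest custom metric) the `resources` list would be derived from `document.scripts` and the document's stylesheets; here it is a plain array, and the field names are made up for illustration:

```javascript
// Count "critical blocking resources": stylesheets plus synchronous
// scripts, ignoring async/defer scripts and anything inside an iframe.
function countCriticalResources(resources) {
  return resources.filter(r =>
    !r.inIframe &&
    (r.type === 'stylesheet' ||
     (r.type === 'script' && !r.async && !r.defer))
  ).length;
}

const page = [
  { type: 'stylesheet', inIframe: false },                          // counted
  { type: 'script', async: false, defer: false, inIframe: false },  // counted (blocks rendering)
  { type: 'script', async: true,  defer: false, inIframe: false },  // ignored (async)
  { type: 'script', async: false, defer: false, inIframe: true },   // ignored (iframe)
];
console.log(countCriticalResources(page)); // → 2
```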
Critical Metric

I’m confident this new “critical resources” metric will prove to be key for tracking a good user experience in terms of performance. Whether that’s true will be borne out as adoption grows and we gain more experience correlating this to other metrics that reflect a good user experience.

In the meantime, I added this metric to the HTTP Archive and measured the correlation to start render time, Speed Index, and page load time. Here are the results for the Dec 1 2015 crawl:

The critical resources metric described in this article is called “CSS & Sync JS” in the charts above. It has the highest correlation to Speed Index and the second highest correlation to start render time. This shows that “critical resources” is a good indicator of rendering performance. It doesn’t show up in the top five variables correlated to load time, which is fine. Most people agree that page load time is no longer a good metric because it doesn’t reflect the user experience.

We all want to create great, enjoyable user experiences. With the complexity of today’s web apps – preloading, lazy-loading, sync & async scripts, dynamic images, etc. – it’s important to have metrics that help us know when our user experience performance is slipping. Tracking critical resources provides an early indicator of how our code might affect the user experience, so we can keep our websites fast and our users happy.


Categories: Software Testing

Ways To Boost The Value of Testers Who Don’t Code

Eric Jacobson's Software Testing Blog - Wed, 02/10/2016 - 16:00
Despite the fact that most Automation Engineers are writing superficial automation, the industry still worships automation skills, and for good reasons.  This is intimidating for testers who don’t code, especially when finding themselves working alongside automation engineers. 
Here are some things I can think of that testers-who-don’t-code can do to help boost their value:
  • Find more bugs - This is one of the most valued services a tester can provide.  Scour a software quality characteristics list like this to expand your test coverage and be more aggressive with your testing.  You can probably cover way more than automation engineers in a shorter amount of time.  Humans are much better at finding bugs than machines.  Finding bugs is not a realistic goal of automation.
  • Faster Feedback – Everybody wants faster feedback.  Humans can deliver faster feedback than automation engineers on new testing.  Machines are faster on old testing (e.g., regression testing).  Report back on what works and doesn’t while the automation engineer is still writing new test code. 
  • Give better test reports – Nobody cares about test results.  Find ways to sneak them in and make them easier to digest.  Shove them into your daily stand-up report (e.g., “based on what I tested yesterday, I learned that these things appear to be working, great job team!”).  Give verbal test summaries to your programmers after each and every test session with their code.  Give impromptu test summaries to your Product Owner.
  • Sit with your users – See how they use your product.  Learn what is important to them.
  • Volunteer for unwanted tasks – “I’ll stay late tonight to test the patch”, “I’ll do it this weekend”.  You have a personal life though.  Take back the time.  Take Monday off.
  • Work for your programmers -  Ask what they are concerned about. Ask what they would like you to test.
  • What if? – Show up at design meetings and have a louder presence at Sprint Planning meeting.  Blast the team with relentless “what if” scenarios.  Use your domain expertise and user knowledge to conceive of conflicts.  Remove the explicit assumptions one at a time and challenge the team, even at the risk of being ridiculous (e.g., what if the web server goes down?  what if their phone battery dies?).
  • Do more security testing – Security testing, for the most part, cannot be automated.  Develop expertise in this area.
  • Bring new ideas – Read testing blogs and books. Attend conferences. Tweak your processes.  Pilot new ideas. Don’t be status quo.
  • Consider Integration – Talk to the people who build the products that integrate with your product.  Learn how to operate their product and perform integration tests that are otherwise being automated via mocks. You just can’t beat the real thing.
  • Help your automation engineer – Tell them what you think needs to be automated.  Don’t be narrow-minded in determining what to automate.  Ask them which automation they are struggling to write or maintain, then offer to maintain it yourself, with manual testing.
  • Get visible – Ring a bell when you find a bug.  Give out candy when you don’t find a bug.  Wear shirts with testing slogans, etc.
  • Help code automation – You’re not a coder, so don’t go building frameworks, designing automation patterns, or even independently designing new automated checks.  Ask if there are straightforward automation patterns you can reuse with new scenarios.  Ask for levels of abstraction that hide the complicated methods and let you focus on business inputs and observations.  Here are other ways to get involved.
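That "levels of abstraction" idea from the last bullet might look like this sketch: the automation engineer hides the complicated driving code behind a small vocabulary, and the tester-who-doesn't-code adds new scenarios as plain data. Every name here is hypothetical:

```javascript
// The tester edits only this table of business inputs and expectations.
const scenarios = [
  { login: 'roger', action: 'createOrder', expect: 'orderConfirmed' },
  { login: 'guest', action: 'createOrder', expect: 'loginRequired' },
];

// The engineer maintains the step implementations; this fake stands in
// for the real UI- or API-driving code.
const steps = {
  createOrder: login => (login === 'guest' ? 'loginRequired' : 'orderConfirmed'),
};

const results = scenarios.map(s => ({
  ...s,
  pass: steps[s.action](s.login) === s.expect,
}));
console.log(results.every(r => r.pass)); // → true
```

Adding a new scenario is one line of data, no framework knowledge required.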
What am I missing?
Categories: Software Testing

QA Music: Poisonous Monday

QA Hates You - Mon, 02/08/2016 - 05:20

Because I just loaded it onto a cheap MP3 player for my gym workouts, have Poison, “Come Hell or High Water”:

Categories: Software Testing