The Stuff QA Nightmares Are Made Of
There’s a White House petition to make the United States change from imperial units to metric units of measurement.
I got two words for you: Gimli Glider, a plane that almost fell out of the sky during the Canadian switchover to the metric system:
At the time of the incident, Canada was converting to the metric system. As part of this process, the new 767s being acquired by Air Canada were the first to be calibrated for metric units (litres and kilograms) instead of customary units (gallons and pounds). All other aircraft were still operating with Imperial units (gallons and pounds). For the trip to Edmonton, the pilot calculated a fuel requirement of 22,300 kilograms (49,000 lb). A dripstick check indicated that there were 7,682 litres (1,690 imp gal; 2,029 US gal) already in the tanks. To calculate how much more fuel had to be added, the crew needed to convert the quantity in the tanks to a weight, subtract that figure from 22,300 kg and convert the result back into a volume. In previous times, this task would have been completed by a flight engineer, but the 767 was the first of a new generation of airliners that flew without a flight engineer and flew only with a pilot and co-pilot.
A litre of jet fuel weighs 0.803 kg, so the correct calculation was:
7682 L × 0.803 kg/L = 6169 kg
22300 kg − 6169 kg = 16131 kg
16131 kg ÷ (0.803 kg/L) = 20088 L of fuel to be transferred
Between the ground crew and pilots, however, they arrived at an incorrect conversion factor of 1.77, the weight of a litre of fuel in pounds. This was the conversion factor provided on the refueller’s paperwork and which had always been used for the airline’s imperial-calibrated fleet. Their calculation produced:
7682 L × 1.77 kg/L = 13597 kg
22300 kg − 13597 kg = 8703 kg
8703 kg ÷ (1.77 kg/L) = 4916 L of fuel to be transferred
Instead of 22,300 kg of fuel, they had 22,300 pounds on board — a little over 10,000 kg, or less than half the amount required to reach their destination. Knowing the problems with the FQIS (Fuel Quantity Indication System), Captain Pearson double-checked their calculations but was given the same incorrect conversion factor and inevitably came up with the same erroneous figures.
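The two calculations can be replayed in a few lines of Python, using the 0.803 kg/L and 1.77 lb/L figures from the quoted account:

```python
KG_PER_L = 0.803  # density of jet fuel in kilograms per litre
LB_PER_L = 1.77   # the same density expressed in pounds per litre

required_kg = 22300  # fuel needed for the trip, in kilograms
in_tanks_l = 7682    # fuel already in the tanks, in litres

# Correct: convert litres on board to kilograms, then work out the top-up.
on_board_kg = in_tanks_l * KG_PER_L                # ≈ 6,169 kg
to_add_l = (required_kg - on_board_kg) / KG_PER_L  # ≈ 20,089 L (the article's rounded figures give 20,088)

# What actually happened: 1.77 (pounds per litre) was used as if it were kg/L.
wrong_on_board = in_tanks_l * LB_PER_L                    # a figure in pounds, treated as kg
wrong_to_add = (required_kg - wrong_on_board) / LB_PER_L  # ≈ 4,917 L (the article's rounding gives 4,916)

print(round(to_add_l), round(wrong_to_add))
```

Less than a quarter of the needed fuel was loaded, yet every step of the arithmetic was internally consistent — the only flaw was the units attached to one constant.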
You can read the whole story of the Gimli Glider in the excellent book Freefall: 41,000 feet & Out of Fuel, which I read on a plane trip to Florida a couple of years ago.
No, I have five words for you. The other three are Mars Climate Orbiter:
On November 10, 1999, the Mars Climate Orbiter Mishap Investigation Board released a Phase I report, detailing the suspected issues encountered with the loss of the spacecraft. Previously, on September 8, 1999, Trajectory Correction Maneuver-4 was computed and then executed on September 15, 1999. It was intended to place the spacecraft at an optimal position for an orbital insertion maneuver that would bring the spacecraft around Mars at an altitude of 226 kilometers on September 23, 1999. However, during the week between TCM-4 and the orbital insertion maneuver, the navigation team indicated the altitude may be much lower than intended at 150 to 170 kilometers. Twenty-four hours prior to orbital insertion, calculations placed the orbiter at an altitude of 110 kilometers; 80 kilometers is the minimum altitude that Mars Climate Orbiter was thought to be capable of surviving during this maneuver. Final calculations placed the spacecraft in a trajectory that would have taken the orbiter within 57 kilometers of the surface where the spacecraft likely disintegrated because of atmospheric stresses. The primary cause of this discrepancy was engineering error. Specifically, the flight system software on the Mars Climate Orbiter was written to take thrust instructions using the metric unit newtons (N), while the software on the ground that generated those instructions used the Imperial measure pound-force (lbf). This error has since been known as the metric mixup and has been carefully avoided in all missions since by NASA.
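The scale of that mismatch is easy to see in code. This sketch uses the standard conversion 1 lbf = 4.44822 N; the impulse value itself is a made-up number for illustration, not a figure from the mission:

```python
N_PER_LBF = 4.44822  # standard conversion: 1 pound-force = 4.44822 newtons

# The ground software computed thruster impulse in lbf·s but transmitted
# the bare number; the flight software interpreted that number as N·s.
impulse_lbf_s = 10.0                    # hypothetical ground-computed value
as_received_n_s = impulse_lbf_s         # the raw number, reused without conversion
correct_n_s = impulse_lbf_s * N_PER_LBF # what should have been transmitted

# Every modelled burn was understated by a constant factor of ~4.45.
error_factor = correct_n_s / as_received_n_s
print(error_factor)
```

A systematic factor-of-4.45 error in every small trajectory correction is exactly the kind of bias that accumulates quietly until the final approach, which matches the steadily shrinking altitude estimates described above.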
Now, imagine every computer system you know being rewritten to use or convert from imperial measurements to metric measurements. All the Y2K-style bugs in critical embedded systems. Forget flying. Forget driving. I’ll be walking.
You might be thinking, Gee, Director, aren’t you hitting the Luddite thing a little hard these days? Well, I just got a smartphone and a Roku box. It’s an instinctive pushback against my entrance to the 21st century.
(Petition link via tweet.)
Big Things Worth Doing – How We Began Homeschooling
I awoke early on a Tuesday last September to a long-lost, familiar feeling… anxiety. I couldn’t figure out what was causing it. Was it the overcast, stormy clouds? Was it taking too much downtime over the long weekend?
Then it hit me.
Today is the first day of school. Er, rather, homeschool. Today is the first official day we are not sending our son back to school.
What were we thinking?!
We had discussed homeschooling as an option for our son for over three years. Towards the end of the last school year, grade 3, we decided to try it out over the summer. We were tired of training new teachers to help him in the classroom, battling over homework assignments, consoling him over social challenges, dealing with meltdowns, and tempering other behaviours that were disrupting the classroom. It was heartbreaking to see him grow up this way.
We want to see our son thrive (don’t all parents?), and we didn’t see that happening at school. Neither he nor his teachers were getting the support they needed, and we had no success getting him ‘labelled’ so that they would. That’s a whole other story!
We spent six weeks over the summer focusing on learning experiences and finding a rhythm that allowed both him and me to flourish (as I continue to run my coaching/consulting/training business, albeit on a smaller scale for a while). We’re still working it out, but have found a good fit. Despite some really hard days, it was a positive experience that allowed him to become more curious and creative, and to learn at a pace that limited his frustration.
Homeschooling felt like the right thing to do.
We sent a letter to the school board to notify them that we are going to homeschool this year (required by law every year once your child has been in the school system, in Ontario, Canada). I felt calm and positive about our decision.
Then I awoke full of anxiety on the official day one.
As the day progressed, I realized everything was okay. My son learned things. I learned things. We had a great day.
I realized I was anxious because we were doing something important. We were following through on our decision to make a big positive difference for our son, and our family.
Big things worth doing are the things that we struggle with internally. Big changes are hard and scary, but it’s because it’s scary that I know it is the right thing to do. Facing fear and anxiety and doing the scary thing anyway is how we evolve and grow and become more of who we are meant to be in this world. Life will be better, because we dig down and find the courage to make it so.
This is an ongoing lesson that I’m constantly reminding myself of.
Yes, we’ll make mistakes along the way, just as all parents do… and it will be an awesome journey learning with him and experiencing the world together.
You can follow our adventure in homeschooling in-depth at OneSpiritedChild.com.
New Year, New Look
I’ve been playing with the Thematic Framework for WordPress a bit over the last few months. I like it because it’s simple, easily extendable and massively tweakable. I’m not much of a designer, but I was having fun making my own child theme for Thematic when I came across the Child’s Play theme from scottnix.com. I liked it so much that I threw away my work and applied Child’s Play to angryweasel.com. I tweaked the hell out of my last theme (and I’m sure I’ll tweak this one…eventually) – but what you see now (unless you’re reading my RSS feed) is completely stock. Not much to it, but it’s exactly what I was looking for.
Posts related to software forthcoming…HNY.
Poster - Software Testing Creativity
Check the STC's site for more details on ordering it: http://www.thetestingplanet.com/2013/01/the-poster-software-testing-creativity/
I'll be personally packing and posting out the posters to all who order!
Well... not all, but hopefully you get the picture :)
HAPPY NEW YEAR EVERYONE!!
Copy the JS error message and paste it into the Analysis tab's "JS error" filter field. Then group by URL and hit the Apply button.
The result tells you that this JS error only happened for that one URL.
Then you might wonder if it happened for all browsers. Very easy to find out with Group By Browser.
The result tells you that this JS error only happened with Internet Explorer 9.
Then you might think the problem might come from a single user's browser that for some reason would trigger the error. Quickly find out if this is an isolated error by grouping by ASN.
The result tells you that this JS error happened for more than one IE9 user. At this point you should probably try to reproduce the error using your own copy of IE9 and pointing to that URL. Chances are you would see the same JS error and can start working on a fix.
Note that since we do not collect personal information like cookies, we cannot confirm user uniqueness with certainty. However, using the ASN should be sufficient for this use case. You could also group by country or region to confirm this information.
The Analysis tab allows you to group by JS error. This means that for a given date range you can quickly find out how many JS errors your users saw, and which ones were the most frequent. Remove the content of the JS error filter field and hit the Apply button.
Investigating the most frequent one, you enter either the JS error message or the resource name it occurred in into the filter field, and group by browser.
Looks like a Firefox 17 issue...
Let's see if this is a user specific issue using grouping by ASN like we did above.
Yes! We can be pretty confident that this error only happened for a single user. It's up to you to decide if this issue is worth investigating further considering that its impact is minimal.
The Analysis tab might feel a little bit overwhelming at first but after playing with it for a few minutes one can see the reporting capabilities this tool offers. Investigating JS errors is just one example of what can be accomplished with this tool.
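Under the hood, these reports are essentially filter-then-group-by-count operations. Here is a minimal stdlib Python sketch of the same workflow; the record fields and values are invented for illustration, and the real beacon schema may differ:

```python
from collections import Counter

# Hypothetical RUM beacon records, one per page view that reported a JS error.
beacons = [
    {"error": "TypeError: x is undefined", "browser": "Firefox 17", "asn": "AS1111"},
    {"error": "TypeError: x is undefined", "browser": "Firefox 17", "asn": "AS1111"},
    {"error": "ReferenceError: y", "browser": "IE 9", "asn": "AS2222"},
    {"error": "ReferenceError: y", "browser": "IE 9", "asn": "AS3333"},
]

def group_by(records, field):
    """Count records per distinct value of `field` (the tool's 'Group By')."""
    return Counter(r[field] for r in records)

# Filter to one error message, then group by ASN to gauge how many users saw it.
type_errors = [b for b in beacons if "TypeError" in b["error"]]
print(group_by(type_errors, "asn"))  # a single ASN suggests an isolated user

ref_errors = [b for b in beacons if "ReferenceError" in b["error"]]
print(group_by(ref_errors, "asn"))   # multiple ASNs suggest a widespread issue
```

The same `group_by` call with `"browser"` or `"url"` reproduces the other pivots described above — the power of the Analysis tab is that it runs this kind of query over your full beacon history interactively.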
What's New | January 2013
Happy New Year!
Hope you had a great holiday season. We look forward to kicking off the New Year with lots more product enhancements and updates to help make your product experience a great one. As you know, it’s our priority to listen to your feedback, gather and interpret survey results, and keep a close eye on our Community site to see what you all are saying.
In this latest release, we focused on adding major enhancements to our Monitoring and Real User Measurements products along with some UX enhancements on our Support pages. We also made improvements to our API library and reporting capabilities.
For Real User Measurement, we added the ability to have multiple beacons within a single account.
By adding multiple RUM beacons in your account, you now have the opportunity to view RUM results from multiple domains and environments with your data organized and separated by each web property.
Incorporated YSlow to analyze performance data in Monitoring
To help you get closer to identifying the root cause, we incorporated a new tool within the Monitoring app – YSlow. Previously, identifying issues involved a lot of guesswork: we gave you an HTTP Archive (HAR) file, but that meant sifting through lots of data. Now, with YSlow, you can go a step further and isolate problems in individual monitoring samples, which can save you time.
What is YSlow? YSlow analyzes web pages and explains why they’re slow, based on Yahoo!’s rules for high-performance web sites.
Included a Date Picker for custom date ranges in Monitoring Graphs
A minor but important feature: we included a date picker for your monitoring reports and graphs. The date picker allows you to set the start and end dates on the control bar for your monitoring graphs.
Added more field selections for your Monitoring List View - "Interval" and "Number of locations"
For more details about why you should enable the “Interval” and “Number of locations” fields, see our post, Defining Monitoring Interval, Sample Rate, Locations, and Units.
Enhanced Support section for easy access to more “Help” information
When we launched, our Support section was a bit light but with this release, we added more details to help you navigate and find your answers to your “Help” questions. Check out our Overview page, highlighting Top Articles trending on our Community site. We have a list of Support Packages available to you and a What’s New section to notify you of our latest updates to the product.
Just a reminder, check out our API library for:
- Instant Test API
- Monitoring API
- Maintenance Window API
- Real User Measurements API
- Load Testing API
We continually add and improve this library at every release.
Infographic: Real Performance Metrics from 14,000 Sites
Yottaa is a customer of LoadStorm's load testing tool, and they are focused on helping companies speed up their website. They test and monitor thousands of web apps, and they recently compiled stats regarding web performance that I found interesting. Thus, I share it with you.
The data sample comprised 14,000 different sites. They measured several aspects of web performance in metrics for front-end and back-end. The infographic below is well-designed and hopefully will be of value to you.
Rolling Up The Year
Something I learned from one of my mentors, Jim Rohn, is to reflect at the end of a period of time and think about:
What went well?
What could have been better?
What can we build on?
What can we be thankful for?
What have we learned?
Mr. Rohn talked about the impact this would have over years and even a lifetime. I am convinced this is a big part of gaining true wisdom.
Prolific leadership author and coach, John Maxwell, says he does this on a weekly basis sitting in his hot tub. For me, I tend to reflect more on an annual basis (not in a hot tub).
In my "Becoming an Influential Test Team Leader" tutorial, we discuss this as one of the 15 ways you can add value to your team without spending a lot of money. This really is "low hanging fruit," but we tend to miss it. Even individually, as a leader, this is a good time to think back over the year and roll the lessons into major themes to remember and value.
I would caution you about being too hard on yourself. Sometimes hard introspection is needed, but we have enough negativity coming our way from the outside. Imagine what a coach might be telling you. Not the coach that was always berating you, but the one you may have found encouraging, yet holding a hard line of accountability.
I don't know how many things you will have on your list. By the way, this is a good time to journal them. (You don't keep a journal? Fix that in 2013!) I typically have about ten to twelve things that stick out over the year. It is interesting to go back several years to see if I really am learning from my personal retrospectives.
In many ways, life and work is a test. So think of this as your annual "test summary report." I hope things went well for you this year. I hope in the areas they didn't go so well, that you find 2013 to be a better year. Inasmuch as things depend on you, I hope you gain the skills and knowledge to excel. In the areas that are circumstance-driven, I hope you will find peace and endurance. I'm pulling for you!
Defining Monitoring Interval, Sample Rate, Locations, and Units
Most recently, we updated our monitoring dashboard to include three new columns that show how frequently we collect monitoring samples and how many units you consume on a daily basis.
Column             Description
Daily Unit Usage   Displays how many units a specific monitor consumes each day
Sample Rate        Displays how frequently the monitor will sample
# of Locations     Displays how many locations the monitor is sampling from
Below is a screenshot of the new columns enabled and sorted by Daily Unit Usage descending:
Understanding the difference between Interval, Sample Rate, Locations, and Units:
When you create/edit a monitor, you are given the opportunity to select the monitoring interval and the number of locations from which to take samples. Together, the monitoring interval and the number of locations dictate how frequently a sample gets taken. For example, in the screenshot above, you can see that for the first monitor we have selected 101 locations and a monitoring interval of 1 minute. This means that we will be taking 101 samples each minute (one from each monitoring location), which translates into almost 2 samples a second. The daily unit consumption is very high on this monitor because we are sampling so frequently, consuming 581,760 units a day for this one monitor. Monitoring unit consumption is calculated by multiplying the number of samples by the number of script units.
101 (# of locations) x 60 (minutes per hour) x 24 (hours per day) x 4 (units per sample) = 581,760
In most cases, monitoring from 101 locations once a minute is excessive. A more realistic use case would be to monitor from 3 locations with a 10-minute interval, which results in a sample taken approximately once every 3.3 minutes. In that configuration, a script that consumes 16 units per sample will consume 6,912 monitoring units per day.
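Both figures fall out of the same formula — samples per day multiplied by units per sample. A small Python sketch (assuming, as in these examples, intervals that divide 60 evenly):

```python
def daily_units(locations, interval_minutes, units_per_sample):
    """Daily monitoring unit consumption: each location takes one sample
    per interval, around the clock, and each sample costs the script's units."""
    samples_per_day = locations * (60 // interval_minutes) * 24
    return samples_per_day * units_per_sample

print(daily_units(101, 1, 4))   # 581760 — the heavy monitor above
print(daily_units(3, 10, 16))   # 6912 — the more realistic configuration
```

Running the numbers before saving a monitor makes it easy to see how a casual choice of 1-minute intervals across many locations multiplies daily unit consumption by two orders of magnitude.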