Software Testing

The Right To Scowl

QA Hates You - Tue, 05/17/2016 - 04:51

Employers can’t stop the QA mindset:

The NLRB’s ruling last week said that requiring employees to maintain a “positive work environment” is too restrictive, as the workplace can sometimes get contentious. You can’t keep your employees from arguing.

To celebrate, I’m going to turn this smile upside down. Which is just as well, as co-workers fear my smile more than my frown.

(Link via.)

Categories: Software Testing

The JavaScript Warning By Which All Others Are Measured

QA Hates You - Fri, 05/13/2016 - 07:12

If you have JavaScript blocked and go to DocuSign, instead of a little bit of red text above the form, you get a page with a message that tells you how to enable JavaScript in the browser you’re using:

Your Web site probably falls far, far short of this.

However, the page still has a common bug. Anyone care to tell me what?

Categories: Software Testing

Not Tested In Alternate Configurations, I See

QA Hates You - Tue, 05/10/2016 - 05:32

Facebook logs a helpful message to the console to help prevent XSS exploits:

However, if the user displays the console on the right instead of the bottom, this message does not lay out properly in Firefox:

Obviously, Facebook did not test this in all possible configurations. If Facebook tested it at all.

Categories: Software Testing

QA Music: Happy Monday!

QA Hates You - Mon, 05/09/2016 - 04:55

“Happy Song” by Bring Me The Horizon.

Categories: Software Testing

Impact Mapping Workshop – now open source and free to use

The Quest for Software++ - Sun, 05/08/2016 - 16:00
If you’re looking for a good way to introduce impact mapping to your company/clients/at a public training workshop, you can now use the materials from my battle-tested workshop - I’ve just opensourced it. All the materials (facilitation guide, slides, exercise materials and handouts) are now available on GitHub. The materials are completely free to use any way you like, under the Creative Commons 4.0 Attribution license. Get them from: https://github.com/impactmapping My goal with this is to help others spread the word easily. The exercises in this workshop lead the participants through the most important concepts and have quite a few...

ISTQB Advanced Level Security Tester Certification Officially Available!

Randy Rice's Software Testing & Quality - Fri, 05/06/2016 - 08:46
I am very excited to announce that the ISTQB Advanced Security Tester certification is officially available. I am the chair of the effort to write the syllabus for this certification. It took us over 5 years to complete this project. We had input from security testers worldwide.
The syllabus is not yet posted on the ASTQB web site, but it will be available there very soon. It will take training providers such as myself a while to create courses and get them accredited, but they will also be out in the marketplace in the coming weeks and months.
Cybersecurity is a very important concern for every person and organization. However, only a small percentage of companies perform continuous security testing to make sure security measures are working as designed. This certification prepares people to work at an advanced level in cybersecurity as security testers. This is a great specialty area for testers looking to branch out into a new field - or to show their knowledge as security testers.
Below is a diagram showing the topics in the certification.
Categories: Software Testing

GTAC Diversity Scholarship

Google Testing Blog - Wed, 05/04/2016 - 08:32
by Lesley Katzen on behalf of the GTAC Diversity Committee

We are committed to increasing diversity at GTAC, and we believe the best way to do that is by making sure we have a diverse set of applicants to speak and attend. As part of that commitment, we are excited to announce that we will be offering travel scholarships this year.
Travel scholarships will be available for selected applicants from traditionally underrepresented groups in technology.

To be eligible for a grant to attend GTAC, applicants must:

  • Be 18 years of age or older.
  • Be from a traditionally underrepresented group in technology.
  • Work or study in Computer Science, Computer Engineering, Information Technology, or a technical field related to software testing.
  • Be able to attend the core dates of GTAC, November 15th - 16th, 2016, in Sunnyvale, CA.


To apply:
Please fill out the following form to be considered for a travel scholarship.
The deadline for submission is June 1st.  Scholarship recipients will be announced on June 30th. If you are selected, we will contact you with information on how to proceed with booking travel.


What the scholarship covers:
Google will pay for standard coach class airfare for selected scholarship recipients to San Francisco or San Jose, and 3 nights of accommodations in a hotel near the Sunnyvale campus. Breakfast and lunch will be provided for GTAC attendees and speakers on both days of the conference. We will also provide a $50.00 gift card for other incidentals such as airport transportation or meals. You will need to provide your own credit card to cover any hotel incidentals.


Google is dedicated to providing a harassment-free and inclusive conference experience for everyone. Our anti-harassment policy can be found at:
https://www.google.com/events/policy/anti-harassmentpolicy.html
Categories: Software Testing

GTAC 2016 Registration is now open!

Google Testing Blog - Wed, 05/04/2016 - 08:27
by Sonal Shah on behalf of the GTAC Committee

The GTAC (Google Test Automation Conference) 2016 application process is now open for presentation proposals and attendance. GTAC will be held at the Google Sunnyvale office on November 15th - 16th, 2016.

GTAC will be streamed live on YouTube again this year, so even if you cannot attend in person, you will be able to watch the conference remotely. We will post the livestream information as we get closer to the event, and recordings will be posted afterwards.

Speakers
Presentations are targeted at students, academics, and experienced engineers working on test automation. Full presentations are 30 minutes and lightning talks are 10 minutes. Speakers should be prepared for a question and answer session following their presentation.

Application
For presentation proposals and/or attendance, complete this form. We will be selecting about 25 talks and 300 attendees for the event. The selection process is not first come, first served (no need to rush your application), and we select a diverse group of engineers from various locations, company sizes, and technical backgrounds.

Deadline
The due date for both presentation and attendance applications is June 1st, 2016.

Cost
There are no registration fees, but speakers and attendees must arrange and pay for their own travel and accommodations.

More information
Please read our FAQ for answers to the most common questions:
https://developers.google.com/google-test-automation-conference/2016/faq.
Categories: Software Testing

Testers Don’t Prevent Problems

DevelopSense - Michael Bolton - Wed, 05/04/2016 - 05:55
Testers don’t prevent errors, and errors aren’t necessarily waste. Testing, in and of itself, does not prevent bugs. Platform testing that reveals a compatibility bug provides a developer with information. That information prompts him to correct an error in the product, which prevents that already-existing error from reaching and bugging a customer. Stress testing that […]
Categories: Software Testing

The V.5H Bug

QA Hates You - Tue, 05/03/2016 - 08:00

How prepared is your software for this sudden shift?

Venezuelans lost half an hour of sleep on Sunday when their clocks moved forward to save power, as the country grapples with a deep economic crisis.

The time change was ordered by President Nicolas Maduro as part of a package of measures to cope with a severe electricity shortage.

I’m calling this the V.5H bug.

Categories: Software Testing

Is There a Simple Coverage Metric?

DevelopSense - Michael Bolton - Tue, 04/26/2016 - 16:35
In response to my recent blog post, 100% Coverage is Possible, reader Hema Khurana asked: “Also some measure is required otherwise we wouldn’t know about the depth of coverage. Any straight measures available?” I replied, “I don’t know what you mean by a ‘straight’ measure. Can you explain what you mean by that?” Hema responded: […]
Categories: Software Testing

Experience Matters

QA Hates You - Tue, 04/26/2016 - 08:33

I came across this today: Being A Developer After 40

It also applies to testing and software QA. Most of the good testers I know or have known were older than the stereotypical 23-year-old wunderkind. Because they’d seen things.

Categories: Software Testing

Filling a gap in Istanbul coverage

Alan Page - Tue, 04/19/2016 - 17:38

I’m at no loss for blog material, but have been short on time (that’s not going to change, so I’ll need to tweak priorities). But…I wanted to write something a bit different from normal in case anyone else ever needs to solve this specific problem (or if anyone else knows that this problem already has an even better solution).

Our team uses a tool called Istanbul to measure code coverage. It generates a report that looks sort of like this (minus the privacy scribbling).

For those who don’t know me, I feel compelled to once again share that I think Code Coverage is a wonderful tool, but a horrible metric. Driving coverage numbers up purely for the sake of getting a higher number is idiotic and irresponsible. However, discovering untested and unreachable code is invaluable, and dismissing the tool entirely can be worse than using the measurements incorrectly.

The Missing Piece

Istanbul shows all-up coverage for our web app (about 600 files in 300 or so directories). What I wanted to do was break down coverage by feature team as well. The “elegant” solution would be to create a map of files to features, then add code to the Istanbul reporter to add the feature team to each file / directory, and then modify the table output to include the ability to filter by team (or create separate reports by team).

I don’t have time for the elegant solution (but here’s where someone can tell me if it already exists).

The (or “My”) Solution

This seems like a job for Excel, so first, I looked to see if Istanbul had CSV as a reporter format (it doesn’t). It does, however, output JSON and XML, so I figured a quick and dirty solution was possible.
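
To get the XML in the first place, the cobertura reporter is one option (an assumption on my part - any of Istanbul’s XML reporters would do), assuming coverage data has already been collected with istanbul cover:

istanbul report cobertura

That should leave a cobertura-coverage.xml under the coverage output directory, which is what the sketch further down reads.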

The first thing I did was assign a team owner to each code directory. I pulled the list of directories from the Istanbul report (I copied from the HTML, but I could have pulled from the XML as well), and then used Excel to create a CSV file with directory and owner. I could figure out a team owner for over 90% of the files from the name (thanks to reasonable naming conventions!), and I used git log to discover the rest (there’s a sketch of that just after the listing below). I ended up with a format that looked like this:

app.command-foo,SomeTeam
app.components.a,SomeTeam
app.components.b,AnotherTeam
app.components.c,SomeTeam
app.components.d,SomeOtherTeam
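
For the handful of directories where the name didn’t give the owner away, a PowerShell one-liner over git log surfaces the most frequent committers (an illustrative sketch rather than what I actually ran - the path is a placeholder, and you still have to map authors to teams yourself):

git log --pretty=format:"%an" -- app/components/d | Group-Object | Sort-Object Count -Descending | Select-Object -First 3 Name, Count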

Then it was a matter of parsing the coverage XML created by Istanbul and making a new CSV with the data I cared about (directory, coverage percentage, statements, and statements hit). The latter two are critical, because I would need to recalculate coverage per team.

There was a time (like my first 20+ years in software) when a batch file was my answer for almost anything, but lately – and especially in this case – a bit of PowerShell was the right tool for the job.

The pseudo code was pretty much:

  • Load the XML file into a PS object
  • Walk the XML nodes to get the coverage data for a node
  • Load a map file from a CSV
  • Use the map and node information to create a new CSV

Hacky, yet effective.
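
For anyone who wants a concrete starting point, here’s a minimal sketch of that pseudo code (an illustrative reconstruction, not the script I posted - it assumes the cobertura XML layout and the map CSV shown above, the file names are placeholders, and it assumes the package names in the XML line up with the directory names in the map):

# Load the directory -> team map (the CSV has no header row, so supply one)
$map = @{}
Import-Csv -Path 'team-map.csv' -Header 'Directory','Team' | ForEach-Object {
    $map[$_.Directory] = $_.Team
}

# Load the coverage XML into a PowerShell XML object
[xml]$coverage = Get-Content 'coverage/cobertura-coverage.xml'

# Walk the package nodes and pull statement counts and hits per directory
# (cobertura reports lines, which is close enough to statements for this purpose)
$rows = foreach ($pkg in $coverage.SelectNodes('//package')) {
    $dir        = $pkg.GetAttribute('name')
    $team       = if ($map.ContainsKey($dir)) { $map[$dir] } else { 'Unknown' }
    $statements = @($pkg.SelectNodes('classes/class/lines/line'))
    $hit        = @($statements | Where-Object { [int]$_.hits -gt 0 })
    $pct        = if ($statements.Count) { [math]::Round(100 * $hit.Count / $statements.Count, 2) } else { 0 }
    [pscustomobject]@{
        Directory  = $dir
        Team       = $team
        Statements = $statements.Count
        Hit        = $hit.Count
        Coverage   = $pct
    }
}

# Write the new CSV; per-team numbers can be recalculated from Statements and Hit
$rows | Export-Csv -Path 'coverage-by-team.csv' -NoTypeInformation

The per-team rollup then falls out of a pivot table in Excel (or a couple more lines of Group-Object), summing Statements and Hit before recomputing the percentage.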

I posted the whole script on GitHub here.

(potentially) related posts:
  1. Scaling Code Coverage
  2. An Approach to Code Coverage
  3. Filling A Hole
Categories: Software Testing

It’s Not A Factory

DevelopSense - Michael Bolton - Mon, 04/18/2016 - 22:38
One model for a software development project is the assembly line on the factory floor, where we’re making a buhzillion copies of the same thing. And it’s a lousy model. Software is developed in an architectural studio with people in it. There are drafting tables, drawing instruments, good lighting, pens and pencils and paper. And […]
Categories: Software Testing

The key first step for successful organisational change

The Quest for Software++ - Mon, 04/18/2016 - 16:00

Last week, while helping a group of product managers learn how to get more out of user stories, I asked the participants to list the key challenges they’ll face trying to bring all the new techniques back to their organisations. “We’ve always done it this way”, “We already know how to do it” and “If it ain’t broken, don’t fix it” kept popping up. People who suffer from those problems then often explain how they’ve tried to propose good ideas, but their colleagues seemed uninterested, defiant, obstructive, resistant, ignorant, or any of the more colourful NSFW attributes that I’ll leave out of this post. And even when their colleagues commit to try a new idea, changes in internal processes rarely happen as quickly and as successfully as people expect.

Whether you subscribe to Adam Smith’s invisible hand or Herman Daly’s invisible foot, something magical directs the behaviour of a whole group of people, even if each individual’s contribution can be rationally explained. In many cases, the key issue is that various participants in the group don’t actually agree on a single problem. That’s why offering a solution in the form of good practices rarely creates big results. Sure, any process change in a larger organisation suffers from complacency, fear and petty kingdom politics, but it also competes with people’s individual priorities, short-term goals and division targets. While someone is arguing for iterative delivery, other people care more about reducing operational cost or meeting sales quotas. And so, instead of an overnight success, change initiatives end up being much more like Groundhog Day.

The FutureSmart Firmware story is one of the best-documented successful large-scale organisational agile adoptions. Gary Gruver, Mike Young and Pat Fulghum wrote about it in A Practical Approach to Large-Scale Agile Development. Unlike unknown thousands of mediocre attempts that resulted in water-scrum-fall and desperation, or installing pre-packaged four-letter acronyms that don’t really change anything, the LaserJet firmware division didn’t start by trying to ‘become agile’. They visualised the problem, in a compelling way that sparked action. By adding up all the different categories where the delivery teams spent time, the authors came to a chilling conclusion: the firmware development group spent 95% of their time just on getting the basic product out, and invested only 5% in ‘adding innovation’. The numbers shocked the stakeholders, and it was pretty clear to everyone that continuing on the same path wouldn’t allow the organisation to stay competitive. Visualising the problem also provided something that everyone could rally around. People didn’t waste time on stupid questions such as whether the daily scrum should be at 10AM or 4PM, or whether pair programming is required for everything. Instead, people started thinking about how to change the process so that they could get the basic product out cheaper and faster, and leave more time for new product development. All the tactical decisions could be measured against that. In the end, the FutureSmart group adopted a process that doesn’t sound like any other pre-packaged scaled version of Scrum. If you’ve been doing agile delivery for a while, you’ll probably disagree with a lot of the terminology in the book, and occasionally think that their mini-milestones stink of mini-waterfall, but all that is irrelevant. What matters is that, in the end, they got the results. Gruver, Young and Fulghum describe the outcome: development costs per program are down 78%, the number of programs under development more than doubled, and the maintenance/innovation balance moved to 60-40, increasing the capacity for innovation by a factor of eight.

Visualising the problem is a necessary first step to get buy-in from people, but not all visualisations are the same. Raw data, budgets and plans easily get ignored. Too much data confuses people. To move people to action, choose something that can’t easily be ignored. Based on research into about eighty highly successful large-scale organisational changes, John Kotter writes in The Heart of Change that ‘people change when they are shown a truth that influences their feelings’. Kotter advocates creating a sense of urgency using a pattern he called ‘See-Feel-Change’. The best way to do that, according to Kotter, is to create a ‘dramatic, look-at-this presentation’. One of the most inspirational stories from Kotter’s book is ‘Gloves on the boardroom table’, about Jon Stegner, a procurement manager at a large US manufacturing company. Stegner calculated that his employer could save over a billion dollars by consolidating procurement. He put together a business case, which was promptly ignored and forgotten by the company leadership. Then he tried something radical – instead of selling the solution, he found a typical example of poor purchasing decisions. One of his assistants identified that the company bought 424 types of work gloves, at vastly different prices. The same pair that cost one factory $5 could cost another factory three times as much. Stegner then bought 424 different pairs of gloves, put all the different price tags on them, dumped the whole collection on the main boardroom table, and invited all the division presidents to visit the exhibition. The executives first stared at it silently, then started discovering pairs that looked alike but had huge differences in price tags, and then agreed that they needed to stop this from ever happening again. The gloves were sent on a road show to the major factories, and Stegner soon had the commitment to consolidate purchasing decisions.

One of the most effective ways to shake up a system is to create new feedback loops. Kotter’s ‘See-Feel-Change’ is great for providing an initial spark. Fast and relevant feedback motivates people to continue to change their behaviour. In Thinking in Systems, Donella H. Meadows tells a story about a curious difference in electricity consumption during the 1970s OPEC oil embargo in the Netherlands. In a suburb near Amsterdam, built out of almost identical houses, some households were consistently using roughly 30% less energy than the others, and nobody could explain the difference. Similar families lived in all the houses, and they were all paying the same prices for electricity. The big difference, discovered later, was the position of the electricity meters. In some houses the meters were in the basements, difficult to see. In the other houses, the meters were in the main doorway, easily observable as people passed during the day. The simple feedback loop, immediately visible, stimulated energy savings without any coercion or enforcement.

The next time you feel the organisation around you is ignoring a brilliant idea, instead of selling a solution, try selling the problem first. More practically, visualise the problem so it can sell itself, allowing people to see and feel the issue. For the best results, try visualising the problem in a way that closes a feedback loop. Create a way for people to influence the results and see how your visualisation changes.

QA Music: It’s Sixx:A.M. Somewhere

QA Hates You - Mon, 04/18/2016 - 05:38

“Rise”

Categories: Software Testing

100% Coverage is Possible

DevelopSense - Michael Bolton - Fri, 04/15/2016 - 22:39
In testing, what does “100% coverage” mean? 100% of what, specifically? Some people might say that “100% coverage” could refer to lines of code, or branches within the code, or the conditions associated with the branches. That’s fine, but saying “100% of the lines (or branches, or conditions) in the program were executed” doesn’t tell […]
Categories: Software Testing

Do it *my* way, or do it *our* way

Alan Page - Thu, 04/14/2016 - 13:27

I was thinking about this on the way to work today, and thought I’d try to spit out a quick blog post before I got side-tracked again.

I’ve been very fortunate to have had success with organizational change with teams at Microsoft. Whether it’s getting programmers to run integration tests before check-in, or helping a team get to a daily zero-bug bar, my leadership style is the same. I believe that people will do things that they think are valuable. In fact, this quote from Eisenhower (which is, admittedly, overused) aligns tightly with my style.

Leadership is the art of getting someone else to do something you want done because [s]he wants to do it.

I talk with people to understand what their concerns and motivations are. I communicate plans and strategies to the team. Often, I “plant seeds” – for example, I may mention to a manager a few of the benefits of keeping engineering debt low and give a few examples. No judgement or decree – just an idea to put in their head. Later, I may mention that it might be a good idea to keep pri 1 bug counts at zero, and maybe overall bugs below some arbitrary number. Often, a few weeks later, I’ll see that manager’s team with zero pri 1 bugs. Or, I’ll mention in a meeting that I’d like to get the whole team down to zero bugs, and I generally have support from everywhere I planted a seed.

The big advantage of this style of change management (in my experience) is that the team owns the change and accepts it as part of the way they work. The disadvantage is that it takes time. To me, that time investment is worth it.

There’s a faster approach, but I don’t like it – yet I see it used often. It probably has a better name, but I’ll call it the do-it-because-I-said-so style of leadership. Eisenhower also said that leadership doesn’t come from barking orders or insisting on action (paraphrase because I’m too lazy to look it up). To me, leadership isn’t about your ideas, it’s about working with others and building your tribe. Too many so-called leaders think that leadership is being the loudest voice, or being the one that makes mandates to an organization. That’s not leadership to me. That’s being a dick.

That said, there’s a middle ground that I see often enough to respect, but not often enough to completely understand. I know some leaders who are able to make explicit mandates and have their team rally around them immediately. They don’t do this often, and I think it helps. They are humble, and I think this helps. They have a relationship with their followers – and this helps too. Maybe the answer is that they’ve waited until they’re a real leader (rather than a self-proclaimed chest-thumper), and waited until circumstances actually required a mandate.

What kind of leader do you want to be?

(potentially) related posts:
  1. Stuff About Leadership
  2. The Ballad of the Senior Tester
  3. What’s Your Super Power?
Categories: Software Testing

As Expected

DevelopSense - Michael Bolton - Tue, 04/12/2016 - 11:38
This morning, I started a local backup. Moments later, I started an online backup. I was greeted with this dialog: Looks a little sparse. Unhelpful. But there is that “More details” drop-down to click on. Let’s do that. Ah. Well, that’s more information. But it’s confusing and unhelpful, but I suppose it holds the promise […]
Categories: Software Testing

You Are Not Checking

DevelopSense - Michael Bolton - Sun, 04/10/2016 - 20:10
Note: This post refers to testing and checking in the Rapid Software Testing namespace. For those disinclined to read Testing and Checking Refined, here are the definitions of testing and checking as defined by me and James Bach within the Rapid Testing namespace. Testing is the process of evaluating a product by learning about it […]
Categories: Software Testing
