The Safest Development Job in the World

QA Hates You - Mon, 08/27/2012 - 09:01

I don’t normally do job postings here at QAHY, but my wife is looking for a new flunky, so here are the details.

Senior Programmer Analyst

Exits, Inc. is seeking a software developer/architect/requirements analyst to assist with ongoing and expanding business needs for its export compliance software and services, sold to major and small customers alike. The work is varied – meetings, coding, design, requirements, and legacy application translation to a new system. Early work will be code-centric.

Qualifications
Five to 10 years’ experience coding on systems projects in a combination of .NET and ASP Classic. Experience with large, complex web applications. Business rule/requirements definition experience.

Technical Skillset
C# (preferably 4.0), .NET services, VBScript, XML, XML Schema, IIS, SQL Server, HTML, the MVC design pattern, JavaScript, cookies, I/O, and various data transport protocols.

Preferred Qualifications
Experience defining business rules with customers and working as a vendor with large project teams. Experience translating customer requirements into system design and identifying and communicating requirements contradictions to the team. Data integration experience. SharePoint knowledge and/or experience. Regular expressions experience. AS2/SFTP protocol experience. ASP Classic experience. Extensive business writing experience.

Conditions
This position is a six-month contract for a telecommuting worker. A long-term to open-ended extension is likely. Some travel may be necessary, so applicants should have easy access to an airport. Please reply with preferred hourly rate, contact information, and a list of qualifications addressing these requirements to: . Applicants must be eligible to work in the United States of America.

Why is it the safest developer job in the world? Because I’m not allowed to test my wife’s code any more.

Hey, wait a minute, you say. If I write code for your wife, that won’t be your wife’s code. Could you test my code? You’re right. IT’S A TRAP! But it’s one where you can work from home and make some money for that abuse.

Categories: Software Testing

Book Report: Winning through Intimidation by Robert J. Ringer (1974)

QA Hates You - Mon, 08/27/2012 - 03:31

I’ll admit, I picked up this book because it had the word intimidation right in the title. Hey, who in QA wouldn’t want to be more intimidating? And maybe to win for once?

The book was a best seller for the self-published author in the 1970s and has been re-released this century with a different title (To Be or Not To Be Intimidated). So somebody found the lessons within it to be worthwhile. A lot of somebodies.

The book explains the author’s approach to real estate sales and a bit of philosophy extrapolated from the behaviors that made him a successful salesman of large apartment complexes across the country. The author invokes Ayn Rand, author of Atlas Shrugged and The Fountainhead, in the first sentence, saying that he titled the book using the off-putting word Intimidation because its meaning was slanted to the pejorative, much as Rand did when she titled a book (and the essay it was based on) The Virtue of Selfishness. So you get an idea from the outset what sort of take the author is going to have on life.

As it stands, Ringer’s freewheeling view of business is pretty cut-throat and zero-sum. But by intimidation he means, ultimately, setting yourself up in a strong posture to deal with adversity. So you don’t have to be looking to become some sort of cut-throat jerk who takes advantage of easy marks to get something out of this book. If you’re a contractor out there on your own, hustling for deals and contracts, you’ll get something out of how he implements his philosophy.

One of his basic theories of success is the Theory of Sustenance of a Positive Attitude through the Assumption of a Negative Result. That is, to keep a positive attitude in a line of work where failure abounds (in his case, property sales), you need to recognize and even plan for the fact that most of your efforts will fail. That way, you’re neither surprised nor discouraged when they do. It’s an interesting twist on the mindset advice given for startups and the like; when you read about the failures of successful people, they always brush them off. Perhaps that’s a native mindset in those particular people, or perhaps it’s one groomed through advice much like Ringer’s here.

Additionally, Ringer presents three bases for the successful execution of intimidation: a good, professional image; a good legal framework for the work done; and successful execution of the work. Although we’re not dealing with signing up properties for sale, you could apply the tenets to looking for, securing, and finishing consulting work of any stripe, including IT contracting.

One important lesson, also encapsulated in the book, is that the goal of any contract or project is not the successful completion of the project (in Ringer’s world, the closing of a sale). The goal of the project is getting paid for doing the work. It’s a lesson he reiterates throughout the book, and it’s a lesson that many people in the IT world should remember. In a world where a lot of companies still start up with the goal of getting users and eyeballs without actual thought to “monetization,” this lesson bears repeating, again and again.

I enjoyed the book, and it gave me a different perspective, drawn from another’s experience, that I can apply to the industry. It’s key, though, to remember that the business world is not quite as cynical or zero-sum as the author presents it, where other business people are out to take your chips and cut your hands off at the wrists. But it’s probably not a bad idea to plan as though they are. Because some are.

Categories: Software Testing

Testing Google's New API Infrastructure

Google Testing Blog - Fri, 08/24/2012 - 16:15

By Anthony Vallone

If you haven’t noticed, Google has been launching many public APIs recently. These APIs are empowering mobile app, desktop app, web service, and website developers by providing easy access to Google tools, services, and data. In the past couple of years, we have invested heavily in building a new infrastructure for our APIs. Before this infrastructure, our teams had numerous technical challenges to solve when releasing an API: scalability, authorization, quota, caching, billing, client libraries, translation from external REST requests to internal RPCs, etc. The new infrastructure solves these problems generically and allows our teams to focus on their service. Automating the testing of these new APIs turned out to be quite a large problem. Our solution to this problem is somewhat unique within Google, and we hope you find it interesting.


System Under Test (SUT)

Let’s start with a simplified view of the SUT design:



A developer’s application uses a Google-supplied API Client Library to call Google API methods. The library connects to the API Infrastructure Service and sends the request. Part of the request identifies the particular API and version being used by the client. This service knows about all Google APIs, because they are defined by API Configuration files, created by each API-providing team. Configuration files declare API versions, methods, method parameters, and other API settings. Given an API request and information about the API, the API Infrastructure Service can translate the request to Google’s internal RPC format and pass it to the correct API Provider Service. That service satisfies the request and passes the response back to the developer’s app via the API Infrastructure Service and API Client Library.
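
To make the configuration idea concrete, here is a hypothetical sketch of the kind of thing such a file might declare. Google’s actual format is not shown in the post, so every field name below is invented for illustration:

```python
# Hypothetical API configuration, expressed as a Python dict for brevity.
# Field names are illustrative only; the real format is internal to Google.
TEST_API_CONFIG = {
    "name": "testapi",
    "version": "v1",
    "methods": {
        "echo": {
            "http_method": "POST",
            "path": "echo",
            "parameters": {
                "message": {"type": "string", "required": True},
            },
        },
    },
    # Other settings the post mentions: quota, auth, caching, and so on.
    "quota": {"requests_per_day": 10000},
}
```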

Now, the Fun Part

As of this writing, we have released 10 language-specific client libraries and 35 public APIs built on this infrastructure. Also, each of the libraries needs to work on multiple platforms. Our test space has three dimensions: API (35), language (10), and platform (varies by library). How are we going to test all the libraries on all the platforms against all the APIs when we have only two engineers on the team dedicated to test automation?

Step 1: Create a Comprehensive API

Each API uses different features of the infrastructure, and we want to ensure that every feature works. Rather than use the APIs to test our infrastructure, we create a Test API that uses every feature. In some cases where API configuration options are mutually exclusive, we have to create API versions that are feature-specific. Of course, each API team still needs to do basic integration testing with the infrastructure, but they can assume that the infrastructure features that their API depends on are well tested by the infrastructure team.

Step 2: Client Abstraction Layer in the Tests

We want to avoid creating library-specific tests, because this would lead to mass duplication of test logic. The obvious solution is to create a test library to be used by all tests as an abstraction layer hiding the various libraries and platforms. This allows us to define tests that don’t care about library or platform.
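
As a rough illustration of that abstraction layer, tests can be written against a single client interface and run with any concrete library behind it. The interface and names below are invented for the sketch; the post does not show the team’s actual code:

```python
# A sketch of a test-side abstraction layer hiding library and platform.
from abc import ABC, abstractmethod


class ApiClient(ABC):
    """Hides which client library, language, and platform make the call."""

    @abstractmethod
    def call(self, api: str, version: str, method: str, params: dict) -> dict:
        """Invoke one API method and return the decoded response."""


def test_echo_roundtrip(client: ApiClient):
    # The same test body runs unchanged against any concrete ApiClient,
    # whether it is backed by the Java, Python, or PHP library.
    response = client.call("testapi", "v1", "echo", {"message": "hello"})
    assert response["message"] == "hello"
```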

Step 3: Adapter Servers

When a test library makes an API call, it should be able to use any language and platform. We can solve this by setting up servers on each of our target platforms. For each target language, create a language-specific server. These servers receive requests from test clients. The servers need only translate test client requests into actual library calls and return the response to the caller. The code for these servers is quite simple to create and maintain.
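
A toy version of such an adapter server might look like the following sketch, assuming a JSON-over-HTTP wire format (the post does not specify one) and a stand-in for the real client library call:

```python
# Sketch of an adapter server: it accepts a generic request from the test
# client and forwards it to the language-specific client library.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


def make_library_call(api, version, method, params):
    # Placeholder: a real adapter would invoke the actual client library here.
    return {"echoed": params}


class AdapterHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers["Content-Length"])
        request = json.loads(self.rfile.read(length))
        result = make_library_call(
            request["api"], request["version"],
            request["method"], request["params"],
        )
        body = json.dumps(result).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)


if __name__ == "__main__":
    HTTPServer(("localhost", 8080), AdapterHandler).serve_forever()
```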

Step 4: Iterate

Now, we have all the pieces in place. When we run our tests, they are configured to run over all supported languages and platforms against the Test API.
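
In spirit, that iteration amounts to something like this sketch, with invented language and platform lists standing in for the real matrix of 10 languages:

```python
# Run the whole infrastructure suite once per (language, platform) pair.
# The mapping below is an invented stand-in for the real matrix.
LANGUAGES_AND_PLATFORMS = {
    "java": ["linux", "windows", "android"],
    "python": ["linux", "mac"],
    "objective-c": ["ios"],
}

def run_suite_against(language: str, platform: str) -> None:
    # In the real system this would point the test clients at the adapter
    # server for this language, running on this platform.
    print(f"Running infrastructure suite: {language} adapter on {platform}")

for language, platforms in LANGUAGES_AND_PLATFORMS.items():
    for platform in platforms:
        run_suite_against(language, platform)
```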



Test Nirvana Achieved

We have a suite of straightforward tests that focus on infrastructure features. When the tests run, they are quick, reliable, and test all of our supported features, platforms, and libraries. When a feature is added to the API infrastructure, we only need to create one new test, update each adapter server to handle a new call type, and add the feature to the Test API.




Categories: Software Testing

"Who Was That Masked Man?"

ABAKAS - Catherine Powell - Fri, 08/24/2012 - 08:27
When we were kids, we'd occasionally get to watch the Lone Ranger. We watched the fifties version, with the Lone Ranger and Tonto riding into town in black and white and saving the day. Inevitably, it would end with someone staring into the camera and asking, "Who was that masked man?" as the Lone Ranger rode off to his next town.
"Here I come to save the day!" (Wrong TV show, right sentiment)
I watched the show again not too long ago with a friend's son, and got to thinking that there really was something to this idea of someone riding into town, clearing out the bad guys, and riding off into the distance. It's really rather like being a consultant in some ways. You ride in, find the bad parts, fix them, and ride out. Fortunately for me, there is a lot less horseback riding and gunplay in software than there ever was in the Lone Ranger!

But let's look at each of those steps:

  1. You come in
  2. You find the bad part(s)
  3. You fix them
  4. You leave
All of those steps are important. If you leave without finishing every single step, well, you're no Lone Ranger.
Come In Coming in doesn't have to mean going to the client's offices. It just means that you need to show up. You have to interact with the client  - and not just the project sponsor. This is how you'll know the full extent of the problem, and start to build trust to fix it. This means you have to be around and be available for casual conversations. This might be in person, by phone, in chat rooms, or over IM.
Find the Bad Part(s) You're here because there's a problem the client can't or won't solve internally. Understanding that problem gives you the bounds of your engagement. Sure, there are probably other things you could optimize or improve, but don't lose sight of the thing you're actually here to fix!
Fix Them You have to actually fix the bad part(s). Don't offer a proposal; don't outline the problem and note how it could be fixed. Do the work and fix it. This is what differentiates the Lone Ranger from the stereotypical management consultant.
Leave This part is surprisingly hard sometimes. Sometimes the problem will be fixed and you just keep coming around, or start working on a new problem, or maintaining the fixes. This is all well and good until you're sure the fix is stable, or if there is another problem to be solved. When it's just maintenance, though, then it's time to leave. Don't forget to actually do that part.
And that is how 24 minutes with the Lone Ranger turned into a rant on consulting software engineers. Now, back to the fake Wild West for me!
Categories: Software Testing

Musings on Test Design

Alan Page - Thu, 08/23/2012 - 11:45

The wikipedia entry on test automation troubles me. It says:

Commonly, test automation involves automating a manual process already in place that uses a formalized testing process.

I can’t decide whether it bothers me because it’s wrong, or because so many people believe the statement is true. In fact, I know the approach in some organizations is to “design” a huge list of manual tests based on written and non-written requirements, and then either try to automate those tests, or pass the tests off to another team (or company) to automate.

This is an awful approach to test design. It’s not holistic. It’s short-sighted. It’s immature, and it’s unprofessional. It’s flat-out crummy testing. I frequently say, “You should automate 100% of the tests that should be automated”. Let me put it another way to be clear:

Some tests can only be run via some sort of test automation.
Some tests can only be done via human interaction.

That part (should be) obvious. Here’s the part I don’t think many people get:

You can’t effectively think about automated testing separately from human testing.

Test Design answers the question, “How are we going to test this?” The answer to that question will help you decide where automation can help (and where it won’t).

Here’s a screen shot of part of the registration form for outlook.com (disclaimer – I have no idea how this was tested).

Let’s look at two different ways of answering the “How will we test this?” question.

The “automator” may look at this and think the following.

  • I’ll build a table of first and last names and use those for test accounts
  • I’ll try every combination of Days, Months, and Years for Birthdate
  • I’ll generate a bunch of different test account names
  • I’ll create a password for each account
  • Once I try all of the combinations, I can sign off on this form
  • (or they may think, “I wonder what sort of test script the test team will ask me to automate”)

The “human” may look at the same form and think this:

  • I’ll want to try short names, long names, and blank names
  • I’ll see if I can find invalid dates (e.g. Feb 29 in a non-leap year)
  • Some characters are invalid for email names – I’ll try to find some of those
  • I’ll make sure the 8-character minimum and case sensitivity are enforced
  • Oh – I’ll try passwords with foreign characters too.
  • Once I go through all of that, and anything else I discover, I can sign off on this form.

I’ll fire off a disclaimer now, because I’ve probably pissed off both “automators” and “humans” with the generalizations above. I know there’s overlap. My argument is that there should be more.

In my contrived examples above, the “automator” is answering the question, “What can I automate?”, and the “human” is answering the question, “What can I explore or discover?”. Neither is directly answering the question, “How are we going to test this?”.

I could just merge the lists, but that’s not exactly right. Let’s throw away humans and coders for a minute and see if we can use a mind map to get the whole picture together. Here’s what I came up with, giving myself a limit of 10 minutes. There’s likely something missing, but it’s a starting point.

Now (and at no time before now), I can begin to think about where automation may help me. My goal isn’t to automate everything; it’s to automate what makes the most sense to automate. Things like submitting the form from multiple machines at once, or cycling through tables of data, make sense to automate early. Other items, like checking for non-existent days or checking the max length of a name, are nice exploratory tasks. And then there are ideas, like foreign characters in the name or trying RTL languages, that may start as an exploratory test but lead to ideas for automation.
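
For instance, the “cycling through tables of data” idea could start as a table-driven check like this sketch; form_accepts is a placeholder for whatever driver you would actually wire up:

```python
# Table-driven birthdate check, including impossible dates such as
# Feb 29 in a non-leap year.
from datetime import date

CANDIDATE_BIRTHDATES = [
    (1999, 2, 28),   # valid
    (1999, 2, 29),   # invalid: 1999 is not a leap year
    (2000, 2, 29),   # valid: 2000 is a leap year
    (1985, 4, 31),   # invalid: April has 30 days
]

def is_real_date(year, month, day):
    try:
        date(year, month, day)
        return True
    except ValueError:
        return False

def form_accepts(year, month, day):
    # Placeholder for driving the real registration form; simulated with
    # a correct implementation so the sketch runs end to end.
    return is_real_date(year, month, day)

def check_birthdate_validation():
    for year, month, day in CANDIDATE_BIRTHDATES:
        expected = is_real_date(year, month, day)
        assert form_accepts(year, month, day) == expected, (year, month, day)
```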

The point, and it is worth repeating often, is that thinking of test automation and “human” testing as separate activities is one of the biggest sins I see in software testing today. It’s not only ineffective, but I think this sort of thinking is a detriment to the advancement of software testing.

In my world, there are no such things as automated testing, exploratory testing, manual testing, etc.

There is only testing.

More musings and rants to follow.

Categories: Software Testing

Taking the lead in an Agile enterprise

SearchSoftwareQuality - Wed, 08/22/2012 - 07:12
Executive managers are now taking the lead in adopting Agile and Lean processes to create Agile enterprises.


Categories: Software Testing

Defining the role of the Agile project manager

SearchSoftwareQuality - Tue, 08/21/2012 - 13:35
The role of the Agile project manager is defocused, often eliminated, in Scrum teams. Matt Heusser describes how they can survive -- and thrive.


Categories: Software Testing

Why Am I Here?

Selena Delesie - Tue, 08/21/2012 - 12:47

Questioned Proposal, by Eleaf

In the day-to-day grind at work, it is easy to lose ourselves in details. Sometimes the details are important, often they are not.

We attend meeting after meeting, produce report after report, fill out template after template, complain, discuss things in circles, and maybe do some actual work. Admittedly, maybe it isn’t that bad for you in your job. I’ve seen people stuck in these cycles though… I’ve even been in them myself.

We forget to take a step back and ask ourselves, our managers, our colleagues, and our customers….

Why Am I Here?

Think about this for a moment.

Am I here (in my job, at this business) to:

  • Make money for myself?
  • Make money for the business?
  • Create great products?
  • Provide great services?
  • Serve the customers really well so they leave happy and rave about me/us?
  • Make friends?
  • Plan social events?
  • Surf the web?
  • Take 10 minute smoke breaks every hour?
  • Take 2 hour lunches?
  • Socialize and have fun?
  • Do my best work every day?
  • Do the bare minimum?
  • Do great work with the least amount of time/money/effort?
  • Complain about the crummy work environment and the people I work with?
  • Enjoy the people and environment I work in?
  • Get paid for 8 hours of work and actually do only 2 hours?
  • Provide awesome value for the job I’m paid to do?
  • Work as many hours as possible because it makes me look good?
  • Create more work for other people to do?
  • Minimize the work other people will need to do (because my work is that awesome)?
  • Help others succeed in their roles?
  • Be powerful and important?
  • Tell other people what to do?
  • Be right?
  • Learn from others?
  • Be nice to others?
  • Be arrogant and grumpy with others?
  • Say ‘no’ to everything that’s requested of me?
  • Do what I say I will do?
  • Say yes, but mean no, and not do what I agreed to do?
  • Sulk in a corner because of all the stupid stuff that happens to me?
  • Take responsibility for my actions and words?
  • Take control of my destiny?
  • Follow processes because that’s the way it’s done?
  • Improve on processes to make them effective, efficient, and high-value (low cost for big value output)?
  • Think negative?
  • Think positive?
  • Be realistic?
  • Do something because it’s the right thing to do?
  • Do something because I was told to do it?
  • Follow the crowd?
  • Embellish or hide the truth?
  • Speak up and speak truth?
  • Be a manager?
  • Be a leader?
  • Compromise my values?
  • Be present and attentive in my work and interactions?

How About It?

Take some time to consider ‘Why Am I Here (in my job/business)?’

Connect with your inner self, your colleagues, your customers, and anyone else and figure this out. The truth might astound you. You might discover that your purpose for being there isn’t what you want to do, or that you aren’t doing what you should be doing, or something else entirely.

Now share with us here on this page!

With sharing comes more connections, more learning, and more awesomeness!

What questions did you ask yourself? Your colleagues? Your managers? Your customers?

What questions would you add to this list?

What did you discover?

Categories: Software Testing

Should Testers Write Source Code?

Eric Jacobson's Software Testing Blog - Tue, 08/21/2012 - 10:22

My system 1 thinking says “no”.  I’ve often heard separation of duties makes testers valuable.

Let’s explore this.

A programmer and a tester are both working on a feature requiring a complex data pull.  The tester knows SQL and the business data better than the programmer.

If Testers Write Source Code:

The tester writes the query and hands it to the programmer.  Two weeks later, as part of the “testing phase”, the tester tests the query they wrote themselves and finds 0 bugs.  Is anything dysfunctional about that?

If Testers do NOT Write Source Code:

The programmer struggles but manages to cobble some SQL together.  In parallel, the tester writes their own SQL and puts it in an automated check.  During the “testing phase”, the tester compares the results of their SQL with the programmer’s and finds 10 bugs.  Is anything dysfunctional about that?
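
A minimal sketch of that differential check, with placeholder queries and a SQLite connection standing in for the real database:

```python
# Differential check: run the tester's SQL and the programmer's SQL
# against the same database and diff the result sets.
import sqlite3

# Placeholder queries - in practice the programmer's SQL is the code under test.
TESTER_SQL = "SELECT id, total FROM orders WHERE status = 'shipped'"
PROGRAMMER_SQL = "SELECT id, total FROM orders WHERE status = 'shipped'"

def fetch_rows(connection: sqlite3.Connection, sql: str) -> set:
    return set(connection.execute(sql).fetchall())

def check_queries_agree(connection: sqlite3.Connection) -> None:
    tester_rows = fetch_rows(connection, TESTER_SQL)
    programmer_rows = fetch_rows(connection, PROGRAMMER_SQL)
    missing = tester_rows - programmer_rows   # rows the programmer's pull loses
    extra = programmer_rows - tester_rows     # rows it should not return
    assert not missing and not extra, f"missing={missing}, extra={extra}"
```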

Categories: Software Testing

FUBARsend

QA Hates You - Tue, 08/21/2012 - 04:14

Fabusend is an email provider that handles not large, mass emails, but small, individual ones. It (apparently) adds letterhead images and whatnot for senders and allows the recipient to click through if the images are blocked in the recipient’s email client.

But if you click through a little late, you get all kinds of FUBAR.

There’s the JavaScript error:

If you’re looking at it in Firefox, you’ll see the tables are misaligned:

Now, you’re saying, Who’s going to see that?

Someone who stores important emails and then tries to click through them. Someone like me. And not “like me” in the QA sense, “like me” in the user sense.

It’s such a little thing!

You know, you’re right, it is going to be seen by only a few people. But it’s also not something that requires refactoring the whole application, ainna? Take a couple minutes, fix the little error page up, would you?

Your goal should be a quality application across the board, not just quality for most of the people, most of the time.

Categories: Software Testing

Orchestrating Test Automation

Alan Page - Mon, 08/20/2012 - 22:18

I’ve been gathering some information on test automation systems recently and will probably have a flurry of posts upcoming on automation and related subjects.

Some observations as I browse what Bing has to tell me about the subject of Test Automation:

  • Wikipedia tells me that test automation “commonly involves automating a manual process already in place”
  • The bulk of the test automation articles are about UI automation
  • There are some reasonably good articles warning about the problems one may run into in test automation projects. None of these articles provide solutions or alternatives.

There’s more in my search results, but those three results alone say a lot about what’s wrong with the way (most?) teams approach test automation.

I expect I’ll write more eventually on all of these points, but my anecdotal experience, along with a few dozen web pages and articles, tells me that there’s a lot of confusion regarding test automation.

You see, test design (including the design of how test automation will execute) is really, really hard. It’s hard to write sustainable and reliable tests, and it’s hard to write trustworthy tests. Double the effort required if those tests involve UI automation. Designing good tests is one of the hardest tasks in software development. That’s worth saying again. Designing good tests is one of the hardest tasks in software development.

Compared to everything else in the equation, the “writing code” part of test automation is easy. Massively easy. Almost brain-dead easy. When I compare writing code to composing music, I talk about the creative and challenging aspect of melody, creating a texture and mood, and figuring out where notes and space between notes help with both. There’s also the rote part of the job – chord voicings, doubling, and other parts of filling in the score. The melody and texture choices are the test design of music composition – filling in the rest of the parts is the coding part. Sure, there’s creativity in a countermelody, as much as there’s creativity in a cool algorithm, but it’s still the rote part of the activity.

I fear that testers are worrying too much about the effort and skills required to automate tests – and not enough about what it takes to design reliable, portable, accurate, and trustworthy tests that actually matter.

Tell me I’m wrong.

Categories: Software Testing

The Dreaded Feature List

ABAKAS - Catherine Powell - Mon, 08/20/2012 - 09:34
We've all seen them: the lists of features. Whether they show up in a backlog, or a test plan, or a competitive matrix, or an RFP, feature lists are everywhere. It's the driving force of software development, in many ways. "How many features have you built?" "When will that feature be ready?" "Does it have X feature?"

There's one big problem with that feature focus, though: it's not what customers actually want.

There are only two times I can think of where a customer actually explicitly cares about features:

  1. When comparing different products (RFP, evaluation, etc)
  2. When they're using the presence or absence of a feature as a proof point in an argument about your product
The rest of the time, they care only that they can solve their problem with your product. Having a "print current inventory" feature is useless. Being able to take a hard copy of the inventory report to the warehouse and scribble all over it while doing a count - that's what the customer actually wants. "Print current inventory" is just a way to get to the actual desire. These stories - tales of things the customer does that involve your software - are the heart and soul of the solution.
So - with the exception of RFPs and bakeoffs - ignore features. Start focusing on the customer and their stories.
Categories: Software Testing

It Pays To Build Your QA Vocabulary Power

QA Hates You - Mon, 08/20/2012 - 07:07

Use this one in conversation:

glich – (n.) A software bug thought to have been killed, but due to some eldritch dark developer magic or incompetence, rises again more powerful and destructive in an attempt to achieve some form of immortality.

Allowing a space in the password was a glich that caused many users to fail at resetting their passwords after the security breach.

Categories: Software Testing

Implementing Agile in very large enterprises

SearchSoftwareQuality - Fri, 08/17/2012 - 10:48
Many Fortune 2000 companies are implementing an Agile methodology. Learn what experts and consultants are saying about how successful it is.


Categories: Software Testing

How enterprises can best select from the various ALM tool options

SearchSoftwareQuality - Fri, 08/17/2012 - 10:13
Enterprises have several ALM tool options. They can best address their specific needs by identifying what is most important for their organization.


Categories: Software Testing

Metrics

Cartoon Tester - Fri, 08/17/2012 - 00:40

Categories: Software Testing

Good Software Makes Decisions

ABAKAS - Catherine Powell - Thu, 08/16/2012 - 08:17
A lot of software is complex. Sometimes it's not even the software that's complex; it's the landscape of the problem that the software is solving. There are data elements and workflows and variations and options. Requirements frequently start with, "Normally, we foo the bar in the bat, with baz. On alternate Tuesdays in the summer, though, we pluto the mars in the saturn. Our biggest customer does something a little different with that bit, so we'll need an option to jupiter the mars in the saturn, too." Add in a product manager who answers every question with, "well, let's make that an option", and you end up with a user interface that looks like this:

No one wants to use that.

It's software that has taken all of the complexity of the problem space and barfed it out onto the user. That's not what they wanted. I promise. Even if it's what they asked for, it's not what they wanted.

Building software is about making decisions. Good software limits options; it doesn't add them. You start with a massive problem space: "Software can do anything!" and limit from there. First, you decide that you're going to solve an accounting problem. Then you decide what kind of inputs and outputs go into this accounting problem, and how the workflows go. You do this by consulting with people who know the problem space really well. Define what you will do and - just as important - what you won't do. Pushing a decision onto the user says, "I don't understand this problem; you decide." Making a decision builds customer confidence; it says, "I know what I'm doing." All the while, you're creating power by limiting choices.

It's okay to make decisions in software. Take a stand; your software will be better for it.
Categories: Software Testing

Self-organizing teams bolster large-scale Agile development

SearchSoftwareQuality - Thu, 08/16/2012 - 05:24
Learn how self-organizing teams can accomplish Agile development on a large scale without chaos and excessive documentation.


Categories: Software Testing

Critical Thinking For Testers with Michael Bolton

Eric Jacobson's Software Testing Blog - Wed, 08/15/2012 - 13:39

After RST class (see my Four Days With Michael Bolton post), Bolton did a short critical thinking for testers workshop.  If you get an opportunity to attend one of these at a conference or other venue, it’s time well spent.  The exercises were great, but I won’t blog about them because I don’t want to give them away.  Here is what I found in my notes…

  • There are two types of thinking:
    1. System 1 Thinking – You use it all the time to produce quick answers.  It works fine as long as things are not complex.
    2. System 2 Thinking – This thinking is lazy, you have to wake it up.
  • If you want to be excellent at testing, you need to use System 2 Thinking.  Testing is not a straightforward technical problem because we are creating stuff that is largely invisible.
  • Don’t plan or execute tests until you obtain context about the test mission.
  • Leaping to assumptions carries risk.  Don’t build a network of assumptions.
  • Avoid assumptions when:
    • critical things depend on them
    • the assumption is unlikely to be true
    • the assumption is dangerous when not declared
  • Huh?  Really?  So?   (James Bach’s critical thinking heuristic)
    • Huh? – Do I really understand?
    • Really? – How do I know what you say is true?
    • So? – Is that the only solution?
  • “Rule of Three” – If you haven't thought of at least three plausible explanations, you’re not thinking critically enough.
  • Verbal Heuristics: Words to help you think critically and/or dig up hidden assumptions.
  • Mary Had a Little Lamb Heuristic – emphasize each word in that phrase and see where it takes you.
  • Change “the” to “a” Heuristic:
    • “the killer bug” vs. “a killer bug”
    • “the deadline” vs. “a deadline”
  • “Unless” Heuristic:  I’m done testing unless…you have other ideas
  • “Except” Heuristic:  Every test must have expected results except those we have no idea what to expect from.
  • “So Far” Heuristic:  I’m not aware of any problems…so far
  • “Yet” Heuristic: Repeatable tests are fundamentally more valuable, yet they never seem to find bugs.
  • “Compared to what?” Heuristic: Repeatable tests are fundamentally more valuable…compared to what?
  • A tester’s job is to preserve uncertainty when everyone around us is certain.
  • “Safety Language” is a precise way of speaking that differentiates between observation and inference.  Safety Language is a strong trigger for critical thinking.
    • “You may be right” is a great way to end an argument.
    • “It seems to me” is a great way to begin an observation.
    • Instead of “you should do this” try “you may want to do this”.
    • Instead of “it works” try “it meets the requirements to some degree”
    • All the verbal heuristics above can help us speak precisely.
Categories: Software Testing

How continuous development can reduce cost challenges in ALM

SearchSoftwareQuality - Wed, 08/15/2012 - 11:23
Software consultant Nari Kannan writes about the cost challenges in application lifecycle management and how continuous development can reduce them.


Categories: Software Testing