Second, I really believe that, as testers, we have the right and the responsibility to ask questions. So my issue is not with the asking of questions. We can still be friends.
I tweeted yesterday that "The open letter to the ISTQB is meaningless." Here's why.
1. The letter fails to distinguish between the ISTQB and the country examination boards (such as the ASTQB, Canadian Testing Board, UKTB, etc.) that write and administer the exams. The ISTQB itself does not write exam questions. To get answers about the validity of the questions and their coefficients, you would have to send such a request to each individual country exam board. The ISTQB may provide a high-level response, but it can't answer the detailed questions posed because it simply doesn't have the information.
2. There are non-disclosure agreements in place to protect the intellectual property of each country board and the ISTQB in general. One of the challenges with any exam (ITIL, PMP, etc.) is to keep its contents confidential to prevent questions from being passed around. There is a sample exam available on the ASTQB website that gives a flavor of the questions being asked. Kryterion would not release results because they also have confidentiality restrictions.
3. "Have there ever been any problems with the validity of the exams?" This is like asking, "Are you still beating your wife?" Everything has shortcomings. The reason the ASTQB invested in independent exam reviews and measurement was to make the exam as valid as possible.
I agree that the questions on the exam must be a valid reflection of the learning objectives in the syllabus. As a training provider, I don't know what is on the exam. There is a firm line of separation. I focus on the methods and application as opposed to the brain cram approach.
That's it for now. Gotta get back to the test lab.
- Measuring your Automation might be easy. Using those measurements is not. Examples:
- # of times a test ran
- how long tests take to run
- how much human effort was involved to execute and analyze results
- how much human effort was involved to automate the test
- number of automated tests
- EMTE (Equivalent Manual Test Effort) – What effort it would have taken humans to manually execute the same test being executed by a machine. Example: If it would take a human 2 hours, the EMTE is 2 hours.
- How can this measure be useful? It is an easy way to show management the benefits of automation (in a way managers can easily understand).
- How can this measure be abused? If we inflate EMTE by re-running automated tests just for the sake of increasing EMTE, we are misleading people. Sure, we can run our automated tests every day, but unless the build is changing every day, we are not adding much value.
- How else can this measure be abused? If you hide the fact that humans are capable of noticing and capturing much more than machines.
- How else can this measure be abused? If your automated tests could not actually be executed by humans (or your human tests by a machine), the comparison behind EMTE breaks down.
- ROI (Return On Investment) – Dorothy asked the students what ROI they had achieved with the automation they created. All 6 students who answered got it wrong; they explained various benefits of their automation, but none were expressed as ROI. ROI should be a number, hopefully a positive number.
- The trick is to convert tester time (effort) into money (see the ROI sketch after these notes).
- ROI does not measure things like “faster execution”, “quicker time to market”, or “test coverage”.
- How can this measure be useful? Managers may think there is no benefit to automation until you tell them there is. ROI may be the only measure they want to hear.
- How is this measure not useful? ROI may not be important. It may not measure your success. “Automation is an enabler for success, not a cost reduction tool” – Yoram Mizrachi. Your company probably hires lawyers without calculating their ROI.
- She did the usual tour of poor-to-better automation approaches (e.g., capture/playback up to an advanced keyword-driven framework). I’m bored by this, so I have a gap in my notes.
- Testware architecture – consider separating your automation code from your tool, so you are not tied to the tool (see the adapter sketch after these notes).
- Use pre and post processing to automate test setup, not just the tests. Everything should be automated except selecting which tests to run and analyzing the results.
- If you expect a test to fail, use the execution status “Expected Fail”, not “Fail”.
- Comparisons (i.e., asserts, verifications) can be “specific” or “sensitive”.
- Specific Comparison – an automated test only checks one thing.
- Sensitive Comparison – an automated test checks several things.
- I wrote “awesome” in my notes next to this: if your sensitive comparisons overlap, 4 tests might fail instead of 3 passing and 1 failing (see the comparison sketch after these notes). IMO, this is one of the most interesting decisions an automator must make. I think it really separates the amateurs from the experts. Nicely explained, Dorothy!
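Since ROI only means something once tester effort becomes money, here is a minimal sketch of the arithmetic. The hourly rate, run counts, and hours below are all made-up assumptions for illustration, not figures from the tutorial:

```python
# Toy EMTE/ROI calculation. All numbers are invented assumptions;
# substitute your own project's figures.

HOURLY_RATE = 75.0  # assumed cost of one tester-hour, in dollars

def emte_hours(manual_hours_per_run: float, useful_runs: int) -> float:
    """Equivalent Manual Test Effort: the hours humans would have spent
    on the runs the machine did. Count only runs that added value
    (e.g., new builds), not reruns done just to inflate the number."""
    return manual_hours_per_run * useful_runs

def roi(benefit_dollars: float, cost_dollars: float) -> float:
    """Classic ROI: (benefit - cost) / cost. A number, hopefully positive."""
    return (benefit_dollars - cost_dollars) / cost_dollars

# Example: a suite a human would run in 2 hours, executed against 30 new
# builds; automating it cost 40 hours plus 10 hours of maintenance/analysis.
benefit = emte_hours(2.0, 30) * HOURLY_RATE  # 60 tester-hours -> $4500
cost = (40 + 10) * HOURLY_RATE               # 50 tester-hours -> $3750
print(f"ROI: {roi(benefit, cost):.0%}")      # prints "ROI: 20%"
```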
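On the testware-architecture point, here is one way to sketch the separation in Python. The `UiDriver` interface, the `AcmeDriver` adapter, and the vendor `session` calls are all hypothetical names of mine; the idea is simply that only one small adapter knows the tool's API:

```python
from abc import ABC, abstractmethod

class UiDriver(ABC):
    """The interface our tests depend on; we own it, not the tool vendor."""
    @abstractmethod
    def click(self, locator: str) -> None: ...
    @abstractmethod
    def read_text(self, locator: str) -> str: ...

class AcmeDriver(UiDriver):
    """Adapter: the only module that touches the vendor tool's API."""
    def __init__(self, session):
        self._session = session  # hypothetical vendor session object
    def click(self, locator: str) -> None:
        self._session.press(locator)       # vendor-specific call, isolated here
    def read_text(self, locator: str) -> str:
        return self._session.get(locator)  # ditto

def test_login_shows_welcome(driver: UiDriver) -> None:
    # The test never mentions the tool. Swapping tools means writing a
    # new adapter, not rewriting every test.
    driver.click("login_button")
    assert "Welcome" in driver.read_text("banner")
```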
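And on specific vs. sensitive comparisons, a small illustration of the trade-off, with invented data:

```python
# Specific vs. sensitive comparisons, sketched with plain asserts.
# The record fields and values are made up for illustration.

record = {"name": "Ada", "role": "tester", "active": True}

# Specific: each automated test checks exactly one thing, so one defect
# fails one test and the others still report useful results.
def test_name():
    assert record["name"] == "Ada"

def test_role():
    assert record["role"] == "tester"

def test_active():
    assert record["active"] is True

# Sensitive: one test checks several things at once. It catches
# unexpected changes cheaply, but if sensitive tests overlap (several
# tests all re-checking "name"), a single defect in "name" fails all of
# them: 4 failures instead of 3 passes and 1 failure, and noisier analysis.
def test_whole_record():
    assert record == {"name": "Ada", "role": "tester", "active": True}
```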
Tim McGraw, “The Cowboy in Me”
The urge to run, the restlessness
The heart of stone I sometimes get
The things I’ve done for foolish pride
The me that’s never satisfied
The face that’s in the mirror when I don’t like what I see
I guess that’s just the cowboy in me
It sounds a lot like the typical workday in software testing.
If you want to have test automation
And don't care about trials and tribulation
Just believe all the hype
Get a tool of each type
But be warned, you'll have serious frustration!
(a limerick by Dorothy Graham)
I attended Dorothy Graham’s STARCanada tutorial, “Managing Successful Test Automation”. Here are some highlights from my notes:
- “Test execution automation” was the tutorial’s concern. I like this clarification; it sets the topic apart from “exploratory test automation” or “computer-assisted exploratory testing”.
- Only 19% of people using automation tools (in Australia) are getting “good benefits”…yikes.
- Testing and Automating should be two different tasks, performed by different people.
- A common problem for testers who try to be automators: should I automate or just test manually? Deadline pressure pushes automation into the future.
- Automators – People with programming skills responsible for automating tests. The automated tests should be executable by non-technical people.
- Testers – People responsible for writing tests, deciding which tests to automate, and executing automated tests. “Some testers would rather break things than make things”.
- Dorothy mentioned “checking” but did not use the term herself during the tutorial.
- Automation should be like a butler for the testers. It should take care of the tedious and monotonous, so the testers can do what they do best.
- A “pilot” is a great way to get started with automation.
- Calling something a “pilot” forces reflection.
- Set easily achievable automation goals and reflect after 3 months. If goals were not met, try again with easier goals.
- Bad Test Automation Objectives – And Why:
- Reduce the number of bugs found by users – Exploratory testing is much more effective at finding bugs.
- Run tests faster – Automation will probably run tests slower if you include the time it takes to write, maintain, and interpret the results. The only testing activity automation might speed up is “test execution”.
- Improve our testing – The testing needs to be improved before automation even begins. If not, you will have poor automation. If you want to improve your testing, try just looking at your testing.
- Reduce the cost and time for test design – Automation will increase it.
- Run regression tests overnight and on weekends – If your automated tests suck, this goal will do you no good. You will learn very little about your product overnight and on weekends.
- Automate all tests – Why not just automate the ones you want to automate?
- Find bugs quicker – It’s not the automation that finds the bugs; it’s the tests. Tests do not have to be automated; they can also be run manually.
- The thing I really like about Dorothy’s examples above is that she helps us separate the testing activity from the automation activity. It helps us avoid common mistakes, such as forgetting to focus on the tests first.
- Good Test Automation Objectives:
- Free testers from repetitive test execution to spend more time on test design and exploratory testing – Yes! Say no more!
- Provide better repeatability of regression tests – Machines are good checkers. These checks may tell you if something unexpected has changed.
- Provide test coverage for tests not feasible for humans to execute – Without automation, we couldn’t get this information.
- Build an automation framework that is easy to maintain and easy to add new tests to.
- Run the most useful tests, using under-used computer resources, when possible – This is a better objective than running tests on weekends.
- Automate the most useful and valuable tests, as identified by the testers – much better than “automate all tests”.
I haven’t blogged much recently, and it’s mainly for three reasons.
- I’m busy – probably the hardest I’ve worked in all of my time in software. And although there have been a few late nights, the busy isn’t coming from 80-100 hour weeks, it’s coming from 50 hour weeks of working on really hard shit. As a result, I haven’t felt like writing much lately.
- I can’t tell you.
- I’m not really doing much testing.
It’s the third point that I want to elaborate on, because it’s sort of true, and sort of not true – but worth discussing / dumping.
First off, I’ve been writing a ton of code recently. But not really much code to test the product directly. Instead, I’ve been neck deep in writing code to help us test the product. This includes both infrastructure (stories coming someday), and tools that help other testers find bugs. I frequently say that a good set of analysis tools to run alongside your tests is like having personal testing assistants. Except you don’t have to pay them, and they don’t usually interrupt you while you’re thinking.
I’ve also been spending a lot of time thinking about reliability, and different ways to measure and report software reliability. There’s nothing really new there other than applying the context of my current project, but it’s interesting work. On top of thinking about reliability and the baggage that goes along with it, I spend a lot of time making sure the right activities to improve reliability happen across the org. I know that some testers identify themselves as “information providers”, but I’ve always found that too passive of a role for my context. My role (and the role of many of my peers) is to not only figure out what’s going on, but to figure out what changes are needed, and then make them happen.
And this last bit is really what I’ve done for years (with different flavors and variations). I find holes and I make sure they get filled. Sometimes I fill the holes. Often, I need to get others to fill the holes – and ideally, make them want to fill them for me. I work hard at this, and while I don’t always succeed, I often do, and I enjoy the work. Most often, I’m driving changes in testing – improving (or changing) test design, test tools, or test strategy. Lately, there’s been a mix of those along with a lot of multi-discipline work. The fuzzy blur between disciplines on our team (and on many teams at MS these days) contributes a lot to that, and just “doing what needs to be done” fills in the rest.
I’m still a tester, of course, and I’ll probably wear that hat until I retire. What I do while wearing that hat will, of course, change often – and that’s (still) ok.