Software developers: Dealing with untrusted Wi-Fi connections

SearchSoftwareQuality - Tue, 04/30/2013 - 06:24
Expert Dan Cornell explains how to ensure mobile apps behave securely -- even when they encounter untrusted Wi-Fi or Bluetooth connections.


Categories: Software Testing

The Death Star Conspiracy as software testing ethics training

SearchSoftwareQuality - Mon, 04/29/2013 - 21:00
Take a satirical look at the facts behind the destruction of the Death Star and learn about the need for ethics training in software QA management.


Categories: Software Testing

Software development cycle best practice: Threat modeling

SearchSoftwareQuality - Mon, 04/29/2013 - 11:54
Early in the software development cycle, ask: Who might attack the application? How would they do it? What are they after? This is threat modeling.


Categories: Software Testing

Managing Successful Test Automation – Part 2

Eric Jacobson's Software Testing Blog - Mon, 04/29/2013 - 10:14
  • Measuring your Automation might be easy.  Using those measurements is not.  Examples:
    • # of times a test ran
    • how long tests take to run
    • how much human effort was involved to execute and analyze results
    • how much human effort was involved to automate the test
    • number of automated tests
  • EMTE (Equivalent Manual Test Effort) – What effort it would have taken humans to manually execute the same test being executed by a machine.  Example: If it would take a human 2 hours, the EMTE is 2 hours.
    • How can this measure be useful? It is an easy way to show management the benefits of automation (in a way managers can easily understand).
    • How can this measure be abused?  If we inflate EMTE by re-running automated tests just for the sake of increasing EMTE, we are being misleading.  Sure, we can run our automated tests every day, but unless the build is changing every day, we are not adding much value.
    • How else can this measure be abused?  If you hide the fact that humans are capable of noticing and capturing much more than machines.
    • How else can this measure be abused?  If your automated tests cannot be executed by humans, or your manual tests cannot be executed by a machine, the effort comparison is not meaningful.
  • ROI (Return On Investment) – Dorothy asked the students what ROI they had achieved with the automation they created.  All six students who answered got it wrong: they explained various benefits of their automation, but none expressed them as ROI.  ROI should be a number, hopefully a positive one.
    • ROI = (benefit - cost) / cost
    • The trick is to convert tester time and effort into money (a small worked example follows this list).
    • ROI does not measure things like “faster execution”, “quicker time to market”, “test coverage”
    • How can this measure be useful?  Managers may think there is no benefit to automation until you tell them there is.  ROI may be the only measure they want to hear.
    • How is this measure not useful?  ROI may not be important.  It may not measure your success.  “Automation is an enabler for success, not a cost reduction tool” – Yoram Mizrachi.  Your company probably hires lawyers without calculating their ROI.
  • She did the usual tour of poor-to-better automation approaches (e.g., capture/playback to an advanced keyword-driven framework).  I’m bored by this, so I have a gap in my notes.
  • Testware architecture – consider separating your automation code from your tool, so you are not tied to the tool (one way to structure this is sketched after this list).
  • Use pre and post processing to automate test setup, not just the tests.  Everything should be automated except selecting which tests to run and analyzing the results.
  • If you expect a test to fail, use the execution status “Expected Fail”, not “Fail”.
  • Comparisons (i.e., asserts, verifications) can be “specific” or “sensitive”.
    • Specific Comparison – an automated test only checks one thing.
    • Sensitive Comparison – an automated test checks several things.
    • I wrote “awesome” in my notes next to this: If your sensitive comparisons overlap, 4 tests might fail instead of 3 passing and 1 failing.  IMO, this is one of the most interesting decisions an automator must make.  I think it really separates the amateurs from the experts.  Nicely explained, Dorothy!  (A minimal sketch of both comparison styles follows this list.)
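As a rough illustration of the ROI arithmetic above, here is a minimal sketch in Python. Every figure in it (hourly rate, hours, run counts) is invented for the example, not taken from the tutorial.

```python
# Toy ROI calculation, following ROI = (benefit - cost) / cost.
# All numbers are made-up examples.

HOURLY_RATE = 50.0          # assumed cost of one tester-hour, in dollars

# Cost: effort spent writing and maintaining the automation, converted to money.
automation_hours = 120
cost = automation_hours * HOURLY_RATE

# Benefit: the manual execution effort the automation replaced (its EMTE),
# converted to money -- counting only runs that were genuinely needed,
# to avoid the inflation problem described above.
manual_hours_per_run = 16   # EMTE for one full regression run
useful_runs = 25
benefit = manual_hours_per_run * useful_runs * HOURLY_RATE

roi = (benefit - cost) / cost
print(f"ROI = {roi:.2f}")   # prints "ROI = 2.33" -- a positive number
```

The hard part, as the notes say, is deciding what honestly counts as benefit and converting tester time into money.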
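On the testware architecture point, one common way to keep automation code separate from the tool is a thin adapter layer, so tests depend on your own interface rather than the tool's API. The sketch below uses Selenium purely as an example; the class and method names are invented for illustration.

```python
# Tests are written against this small, tool-agnostic interface...
class Browser:
    """What tests are allowed to call; nothing tool-specific leaks in."""
    def open(self, url): raise NotImplementedError
    def click(self, css): raise NotImplementedError
    def text_of(self, css): raise NotImplementedError


# ...and only this adapter knows about the tool (Selenium, in this example).
class SeleniumBrowser(Browser):
    def __init__(self, webdriver):
        self._wd = webdriver

    def open(self, url):
        self._wd.get(url)

    def click(self, css):
        self._wd.find_element("css selector", css).click()

    def text_of(self, css):
        return self._wd.find_element("css selector", css).text


# Swapping tools later means writing another adapter, not rewriting the tests:
#   browser = SeleniumBrowser(webdriver.Chrome())
#   browser.open("https://example.com")
```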
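Finally, a minimal pytest-style sketch of the specific-versus-sensitive distinction. The place_order helper and its fields are hypothetical stand-ins for a real system under test.

```python
from types import SimpleNamespace


def place_order(items):
    """Hypothetical stand-in for the real system under test."""
    return SimpleNamespace(total=12.99, status="confirmed",
                           confirmation_email_sent=True)


# Specific comparisons: each test checks exactly one thing, so one defect
# tends to fail exactly one test.
def test_order_total():
    assert place_order(["book"]).total == 12.99


def test_order_status():
    assert place_order(["book"]).status == "confirmed"


# Sensitive comparison: one test checks several things. It catches more per
# test, but when sensitive tests overlap, a single defect can fail several of
# them at once -- the "4 fail instead of 3 pass and 1 fail" effect above.
def test_order_summary():
    order = place_order(["book"])
    assert order.total == 12.99
    assert order.status == "confirmed"
    assert order.confirmation_email_sent
```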
Categories: Software Testing

A guide to platform-specific security for the mobile developer

SearchSoftwareQuality - Mon, 04/29/2013 - 07:17
It's essential for the mobile developer to understand the security features of the different mobile operating systems. Dan Cornell explains the basics.


Categories: Software Testing

Software product success: Maintain customer focus

SearchSoftwareQuality - Mon, 04/29/2013 - 07:08
SearchSoftwareQuality expert Scott Sehlhorst explains why maintaining customer focus is crucial to delivering high quality software products.


Categories: Software Testing

on load testing and performance April-13

LoadImpact - Mon, 04/29/2013 - 07:06

What are others saying about load testing and web performance right now? Well, apparently more people have things to say about how to make things scale than about how to measure it, but anyway, this is what caught our attention recently:

  • Boundary.com has a two-part interview with Todd Hoff, founder of High Scalability and advisor to several start-ups. Read the first part here: Facebook Secrets of web performance
  • Another insight into how the big ones are doing it is this 30-minute video from this year's PyCon US. Rick Branson of Instagram talks about how they handle their load, as well as the Justin Bieber effect.
  • The marketing part of the world has really started to understand how page load times affect sales as well as Google rankings. In this article, online marketing experts Portent.com explain all the hoops they jumped through to get to sub-second page load times. An interesting read indeed.
  • One of my favorite sources for LAMP-related performance insights is the MySQL Performance Blog. In a post from last week, they explain a bit about how to use their tools to analyze high-load problems.
What big load testing or web performance news did we miss? Have your say in the comments below.


Categories: Load & Perf Testing

QA Music: The QA Boy in Me

QA Hates You - Mon, 04/29/2013 - 03:20

Tim McGraw, “The Cowboy in Me”

The urge to run, the restlessness
The heart of stone I sometimes get
The things I’ve done for foolish pride
The me that’s never satisfied
The face that’s in the mirror when I don’t like what I see
I guess that’s just the cowboy in me

It sounds a lot like the typical workday in software testing.

Categories: Software Testing

Software test plan 2013: Keynote speakers cite top trends

SearchSoftwareQuality - Fri, 04/26/2013 - 05:58
At STP 2013, keynote speakers weighed in on software test plans to work towards in the coming year.


Categories: Software Testing

Managing Successful Test Automation – Part 1

Eric Jacobson's Software Testing Blog - Thu, 04/25/2013 - 14:00

If you want to have test automation
And don't care about trials and tribulation
Just believe all the hype
Get a tool of each type
But be warned, you'll have serious frustration!

(a limerick by Dorothy Graham)

I attended Dorothy Graham’s STARCanada tutorial, “Managing Successful Test Automation”.  Here are some highlights from my notes:

  • “Test execution automation” was the tutorial’s concern. I like this clarification; it sets it apart from “exploratory test automation” or “computer-assisted exploratory testing”.
  • Only 19% of people using automation tools (in Australia) are getting “good benefits”…yikes.
  • Testing and Automating should be two different tasks, performed by different people.
    • A common problem with testers who try to be automators:  Should I automate or just manually test?  Deadline pressures make people push automation into the future.
    • Automators – People with programming skills responsible for automating tests.  The automated tests should be able to be executed by non-technical people.
    • Testers – People responsible for writing tests, deciding which tests to automate, and executing automated tests.  “Some testers would rather break things than make things”.
    • Dorothy acknowledged the term “checking” but did not use it herself during the tutorial.
    • Automation should be like a butler for the testers.  It should take care of the tedious and monotonous, so the testers can do what they do best.
  • A “pilot” is a great way to get started with automation.
    • Calling something a “pilot” forces reflection.
    • Set easily achievable automation goals and reflect after 3 months.  If goals were not met, try again with easier goals.
  • Bad Test Automation Objectives – And Why:
    • Reduce the number of bugs found by users – Exploratory testing is much more effective at finding bugs.
    • Run tests faster – Automation will probably run tests slower if you include the time it takes to write, maintain, and interpret the results.  The only testing activity automation might speed up is “test execution”.
    • Improve our testing – The testing needs to be improved before automation even begins.  If not, you will have poor automation.  If you want to improve your testing, try just looking at your testing.
    • Reduce the cost and time for test design – Automation will increase it.
    • Run regression tests overnight and on weekends – If your automated tests suck, this goal will do you no good.  You will learn very little about your product overnight and on weekends.
    • Automate all tests – Why not just automate the ones you want to automate?
    • Find bugs quicker – It’s not the automation that finds the bugs, it’s the tests.  Tests do not have to be automated, they can also be run manually.
  • The thing I really like about Dorothy’s examples above is that she helps us separate the testing activity from the automation activity.  It helps us avoid common mistakes, such as forgetting to focus on the tests first.
  • Good Test Automation Objectives:
    • Free testers from repetitive test execution to spend more time on test design and exploratory testing – Yes!  Say no more!
    • Provide better repeatability of regression tests – Machines are good checkers.  These checks may tell you if something unexpected has changed.
    • Provide test coverage for tests not feasible for humans to execute – Without automation, we couldn’t get this information.
    • Build an automation framework that is easy to maintain and easy to add new tests to.
    • Run the most useful tests, using under-used computer resources, when possible – This is a better objective than running tests on weekends.
    • Automate the most useful and valuable tests, as identified by the testers – much better than “automate all tests”.
Categories: Software Testing

Code signing: Why it matters for mobile developers

SearchSoftwareQuality - Thu, 04/25/2013 - 05:27
Code signing creates a system of trust among mobile users, but it doesn't bolster the security of the app itself, says expert Dan Cornell.


Categories: Software Testing

Balancing the Load

A question that every online application provider will face eventually is: does my application scale? Can I add an extra 100 users and still ensure the same user experience? If the application architecture is properly designed, the easiest way is to put an additional server behind a load balancer to handle more traffic. In this article we [...]
Categories: Load & Perf Testing

Filling A Hole

Alan Page - Wed, 04/24/2013 - 20:35

I haven’t blogged much recently, and it’s mainly for three reasons.

  1. I’m busy – probably the hardest I’ve worked in all of my time in software. And although there have been a few late nights, the busy isn’t coming from 80-100 hour weeks, it’s coming from 50 hour weeks of working on really hard shit. As a result, I haven’t felt like writing much lately.
  2. I can’t tell you.
  3. I’m not really doing much testing.

It’s the third point that I want to elaborate on, because it’s sort of true, and sort of not true – but worth discussing / dumping.

First off, I’ve been writing a ton of code recently. But not really much code to test the product directly. Instead, I’ve been neck deep in writing code to help us test the product. This includes both infrastructure (stories coming someday), and tools that help other testers find bugs. I frequently say that a good set of analysis tools to run alongside your tests is like having personal testing assistants. Except you don’t have to pay them, and they don’t usually interrupt you while you’re thinking.

I’ve also been spending a lot of time thinking about reliability, and different ways to measure and report software reliability. There’s nothing really new there other than applying the context of my current project, but it’s interesting work. On top of thinking about reliability and the baggage that goes along with it, I spend a lot of time making sure the right activities to improve reliability happen across the org. I know that some testers identify themselves as “information providers”, but I’ve always found that too passive of a role for my context. My role (and the role of many of my peers) is to not only figure out what’s going on, but to figure out what changes are needed, and then make them happen.

And this last bit is really what I’ve done for years (with different flavors and variations). I find holes and I make sure they get filled. Sometimes I fill the holes. Often, I need to get others to fill the holes – and ideally, make them want to fill them for me. I work hard at this, and while I don’t always succeed, I often do, and I enjoy the work. Most often, I’m driving changes in testing – improving (or changing) test design, test tools, or test strategy. Lately, there’s been a mix of those along with a lot of multi-discipline work. The fuzzy blur between disciplines on our team (and on many teams at MS these days) contributes a lot to that, and just “doing what needs to be done” fills in the rest.

I’m still a tester, of course, and I’ll probably wear that hat until I retire. What I do while wearing that hat will, of course, change often – and that’s (still) ok.

(potentially) related posts:
  1. Yet another future of testing post (YAFOTP)
  2. The Skeptics Dilemma
  3. Activities and Roles
Categories: Software Testing

Software testing techniques: Overcoming biases

SearchSoftwareQuality - Wed, 04/24/2013 - 17:15
Gerie Owen offers software testing techniques to overcome biases and boost code quality and answers the pressing "how did I miss that bug?" question.


Categories: Software Testing

Counting bugs

Cartoon Tester - Wed, 04/24/2013 - 11:52

Categories: Software Testing

Mobile apps development: Managing updates

SearchSoftwareQuality - Wed, 04/24/2013 - 11:41
ALM expert Kevin Parker explains a key challenge of mobile apps development: managing the constant updates these business-critical apps demand.


Categories: Software Testing

Hybrid security: Beyond pen testing and static analysis

SearchSoftwareQuality - Wed, 04/24/2013 - 07:38
Securing an application's attack surface takes more than pen testing and code analysis. Kevin Beaver explains the hybrid security analysis approach.


Categories: Software Testing

Agile management: Avoiding pitfalls of multi-team projects

SearchSoftwareQuality - Wed, 04/24/2013 - 06:32
Amy Reichert explains why eliminating external distractions is an essential Agile management practice for leaders heading multi-team projects.


Categories: Software Testing

Mobile apps development: Dealing with tight release cycles

SearchSoftwareQuality - Tue, 04/23/2013 - 08:21
Mobile apps development projects require shorter release cycles. Kevin Parker explains how automation can help teams cope with the tight time frame.


Categories: Software Testing

Why Would a Baseball Player Do That?

QA Hates You - Tue, 04/23/2013 - 04:16

Ever get asked why a user would do that? Of course you do. You’ve already been asked that today.

Here’s a little story for you about why the ball player trying to steal third base ended up on first base:

This guy stole second. Then he tried to steal third but somehow wound up on first. Then he got thrown out trying to steal second again. All in a span of five pitches.

The result, as far as we’re concerned:

The part where a runner on second base finishes the next play on first base? It’s not possible to score that without crashing every computer in America.

“There’s no way to do that,” longtime official scorer and SABR historian David Vincent said Saturday. “Not covered in the rules. A runner on second base going to first base? That’s impossible.”

Now obviously it’s not “impossible,” because it really happened. But tell that to the computer programmers of America.

“All the computer software — none of it will handle that,” Vincent said. “You don’t run the bases [from] second to first. Any software that processes play-by-play won’t accept that.”

So because it’s theoretically impossible, the official box score of this game listed Segura as having been thrown out stealing third — even though he slid into second. Huh?

“That’s because the play-by-play listed him as staying at second base [because it couldn't compute that he was actually on first],” Vincent said. “So then he had to be caught stealing third. But that never happened. So that has to get changed.”

Right. But that’s not all. The official box score and play-by-play also said that Braun got caught stealing second.

So why would a user do that? Because the user could do that. And just because someone has not done that does not mean someone will not do that in a strange set of circumstances you cannot anticipate now.

(Link via tweet linking to this article.)

Categories: Software Testing