Surface Update

Alan Page - Mon, 11/05/2012 - 02:31

If you read my last post (sorry, no link – I’m not ready to try that yet), you’ll know that I’m attempting to use my Microsoft Surface as my only electronic device on my trip to Amsterdam for EuroSTAR.

So far, it’s working OK. The cover/keyboard works better than I thought it would – although it seems to have a problem registering the space bar a lot of the time. You really do need to be a good touch typist to have confidence with this thing.

My presentation tweaks aren’t quite done, so I’ll have to put this thing through a few more paces to see what I really think. I’m also curious to see how limited the Surface is with no internet connection (I assume I’ll find myself in that situation on this trip, but we’ll see).

Boarding in a few minutes – more updates from Amsterdam.

ugh – first problem is that the WordPress app won’t publish my post…

Continued: Because I can’t post, I’ll continue with my update. My first bad experience has little to do with the Surface – it’s that the WordPress app is lame. I won’t describe the depths of the lameness, but I will say that I’ll likely post this directly from my blog rather than from the WP app.

I just spent 30 minutes or so tweaking my slides – that was a good experience. Typing is still a bit weird (and I still miss the space bar about every ten words or so), but for formatting, moving slides around, etc., the Surface works remarkably well. I find the screen size and resolution just about perfect for working on an airplane (I’ll find out if I say the same for hotel rooms and coffee shops in an upcoming post).

More later (when I find Wi-Fi).

Categories: Software Testing

Does test-driven development fall into project manager duties?

SearchSoftwareQuality - Fri, 11/02/2012 - 12:29
A project manager must understand and facilitate testing, but how closely do project manager duties align with test-driven development efforts?


Categories: Software Testing

Join Us in Supporting Relief Efforts for the Northeast

Randy Rice's Software Testing & Quality - Thu, 11/01/2012 - 18:30
In Oklahoma, we have experienced the devastating impact of natural disasters, especially tornadoes. Each time we have had a major disaster, people from the Northeast were here to help us. Members of the NYPD and NYFD were here to help after the Murrah Building bombing. Also, the Red Cross and Salvation Army were here for months after these disasters.

Now it's our turn to return the help.

We are making an initial donation, but we ask you to join us, either directly at:

http://blog.salvationarmyusa.org/category/eds/

or by enrolling in an online e-learning course. We are donating 10% of all e-learning enrollments each week. Please understand this is not a sales ploy. It's just our way of helping by sharing what we earn.

http://www.mysoftwaretesting.com/

Thanks!

Randy
Categories: Software Testing

Bug Magic

Cartoon Tester - Thu, 11/01/2012 - 01:29
The main idea for this cartoon came from a developer I work with. We were both working on a difficult project where there was lots of scope creep, unreliable technology, and a few other project smells. Bugs seemed to appear and disappear at will. The bugs seemed to be working as a team: one bug would sacrifice itself and gather all of our attention so the other bugs could enjoy a life of freedom.


Categories: Software Testing

How I Do Pull Requests

ABAKAS - Catherine Powell - Wed, 10/31/2012 - 10:42
I've been working with a couple of new developers lately, and we've been talking about how we actually write, review, and check in code. Early on, we all agreed that we would use a shared repository and pull requests. That way we get good peer code reviews, and we can look at each other's work-in-progress code.

Great!

One of the developers approached me after that conversation and said, "umm, so how exactly do I do pull requests?" On GitHub, here's how I do it:


# CD into the project
$ cd Documents/project/project_repo/

# Check what branch I'm on (almost always should be master when I'm starting a new feature)
$ git branch
* master

# Pull to make sure I have the latest
$ git pull
Already up-to-date.

# Make a new branch off master. This will be my feature branch and can be named whatever I want. I prefix all my feature branches with cmp_ so people know who did most of the coding on it.
$ git branch cmp_NEWFEATURE

# Switch to my new branch
$ git checkout cmp_NEWFEATURE
Switched to branch 'cmp_NEWFEATURE'

#####
# DEVELOP.
# Commit my changes because they're awesome
# Develop more stuff. Make changes, etc.
# Commit my changes, because they're awesome.
# Pull from master and merge to make sure my stuff still works.
# NOW I'm ready to make a pull request.
#####

# push my branch to origin
$ git push origin cmp_NEWFEATURE

# Create the pull request
# I do this on GitHub. See https://help.github.com/articles/using-pull-requests
# Set the base branch to master and the head branch to cmp_NEWFEATURE

# Twiddle my thumbs waiting for the pull request to be approved.
# When it's approved, huzzah!

# If the reviewer said, "go ahead and merge", then the merge is on me to do.

# Check out the master branch. This is what I'll be merging into
$ git checkout master

# Merge from my pull request branch into master. I use --squash so the whole merge is one commit, no matter how many commits it took on the branch to get there.
$ git merge --squash cmp_NEWFEATURE

# If the merge isn't clean, go ahead and fix and commit.

# Push my newly-merged master up to origin. The pull request is done!
$ git push

# Go into GitHub and close the pull request. I do this in a browser.

# Delete the feature branch from origin. Tidiness is awesome.
$ git push origin :cmp_NEWFEATURE

# Delete my local copy of the feature branch.
$ git branch -D cmp_NEWFEATURE


More information can be found here: https://help.github.com/articles/using-pull-requests.

Happy pulling!
Categories: Software Testing

PARTY TIME!

Cartoon Tester - Wed, 10/31/2012 - 01:38
I'm not a fan of Halloween (it's too scary!), but it seems to be an ever-growing trend in the UK, so here's my quick attempt to be 'trendy':



... I do wonder... if bugs did have Halloween parties, which costume (i.e. tester) would be the most popular? Or would it be a Test Driven Develop'er? My bet is on the evil tester.
Categories: Software Testing

My Surface Challenge

Alan Page - Tue, 10/30/2012 - 23:33

I had a not-so-good week last week. Lots of stuff too private for blog sharing, but let’s just say it was a cascade of dark and bad times that nobody should have to go through. One of the bits of bad luck was that the LCD on my laptop broke. Normally, this wouldn’t be so bad. Most of the time (i.e., unless I’m traveling), I log into my laptop via terminal server, so a broken screen has no impact. This is my “work” laptop, and it’s overdue for an upgrade, so normally everything would line up well.

This is not a normal week. I leave for Amsterdam to speak at EuroSTAR on Sunday, and there’s no way to get a replacement laptop through work in time. I looked into getting the screen replaced, but at $500, I didn’t feel it was cost effective – I thought the same of renting a computer for $120 for the week. I have a strong network at MS, and pinged some of my colleagues on the Windows team for spare hardware, but that well was dry too. I begged admins, IT, and anyone I could find to let me use a laptop for the week, but everything fell through.

So, I figured it was time to buck up and buy a personal laptop. Microsoft is giving me (and every employee) a Surface RT, but we won’t get those until January. I looked around a bit, and felt a little strange paying (at least) $700 for a laptop that I probably wouldn’t use much after this trip (have you figured out yet that I hate spending money?).

In the end, I decided to go all in – to jump on the Windows Surface bandwagon and buy a Surface RT (yes – the same device I’ll get from MS in a few months). If I like it, it will be nice to have two (if I love it, I’ll give one away!). The challenge I’ve given myself is to see how productive I can be using the Surface on the plane and in my hotel room – and using the Surface as my PowerPoint source for my keynote at the conference. It’s going to be a fun few days, and I expect I’ll blog a bit about how it’s going along the way. I only hope that if my experience goes south, the Windows bigwigs won’t get mad.

We’ll see what happens.

Categories: Software Testing

GTAC Coming to New York in the Spring

Google Testing Blog - Tue, 10/30/2012 - 11:18

By The GTAC Committee

The next and seventh GTAC (Google Test Automation Conference) will be held on April 23-24, 2013 at the beautiful Google New York office! We had a late start preparing for GTAC, but things are now falling into place. This will be the second time it is hosted at our second-largest engineering office. We are also sticking with our tradition of changing the region each year, as the last GTAC was held in California.

The GTAC event brings together engineers from many organizations to discuss test automation. It is a great opportunity to present, learn, and challenge modern testing technologies and strategies. We will soon be recruiting speakers to discuss their innovations.

This year’s theme will be “Testing Media and Mobile”. In the past few years, substantial changes have taken place in both the media and mobile areas. Television is no longer the king of media. Over 27 billion videos are streamed in the U.S. per month. Over 1 billion people now own smartphones. HTML5 includes support for audio, video, and scalable vector graphics, which will liberate many web developers from their dependence on third-party media software. These are incredibly complex technologies to test. We are thrilled to be hosting this event in which many in the industry will share their innovations.

Registration information for speakers and attendees will soon be posted here and on the GTAC site (http://www.gtac.biz). Even though we will be focusing on “Testing Media and Mobile”, we will be accepting proposals for talks on other topics.


Categories: Software Testing

Where Does All That Time Go?

DevelopSense - Michael Bolton - Mon, 10/29/2012 - 21:04
It had been a long day, so a few of the fellows from the class agreed to meet at a restaurant downtown. The main courses had been cleared off the table, some beer had been delivered, and we were waiting for dessert. Pedro (not his real name) was complaining, again, about how much time he had [...]
Categories: Software Testing

QA Music: Our Sweet Dreams Are, In Fact, Nightmares

QA Hates You - Mon, 10/29/2012 - 02:47

Marilyn Manson is old enough these days that I can dig it. Here he is with a remake of the Eurythmics’ “Sweet Dreams (Are Made Of This)”:

My goodness, that should make the week look better by comparison.

Categories: Software Testing

Why Are There So Many C++ Testing Frameworks?

Google Testing Blog - Fri, 10/26/2012 - 17:22
By Zhanyong Wan - Software Engineer

These days, it seems that everyone is rolling their own C++ testing framework, if they haven't done so already. Wikipedia has a partial list of such frameworks. This is interesting because many OOP languages have only one or two major frameworks. For example, most Java people seem happy with either JUnit or TestNG. Are C++ programmers the do-it-yourself kind?

When we started working on Google Test (Google’s C++ testing framework), and especially after we open-sourced it, people began asking us why we were doing it. The short answer is that we couldn’t find an existing C++ testing framework that satisfied all our needs. This doesn't mean that these frameworks were all poorly designed or implemented. Rather, many of them had great ideas and tricks that we learned from. However, Google had a huge number of C++ projects that got compiled on various operating systems (Linux, Windows, Mac OS X, and later Android, among others) with different compilers and all kinds of compiler flags, and we needed a framework that worked well in all these environments and could handle many different types and sizes of projects.

Unlike Java, which has the famous slogan "Write once, run anywhere," C++ code is being written in a much more diverse environment. Due to the complexity of the language and the need to do low-level tasks, compatibility between different C++ compilers and even different versions of the same compiler is poor. There is a C++ standard, but it's not well supported by compiler vendors. For many tasks you have to rely on unportable extensions or platform-specific functionality. This makes it hard to write a reasonably complex system that can be built using many different compilers and works on many platforms.

To make things more complicated, most C++ compilers allow you to turn off some standard language features in return for better performance. Don't like using exceptions? You can turn them off. Think dynamic_cast is bad? You can disable Run-Time Type Identification, the feature behind dynamic_cast and run-time access to type information. If you do any of these, however, code using these features will fail to compile. Many testing frameworks rely on exceptions, so they were automatically out of the question for us, since we turn off exceptions in many projects. (In case you are curious, Google Test doesn’t require exceptions or run-time type identification; when these language features are turned on, Google Test will try to take advantage of them and provide you with more utilities, like the exception assertions.)
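
To give a sense of what those exception assertions look like when exceptions are enabled, here is a minimal sketch of my own (using std::vector::at(), which throws std::out_of_range; the test names are just for illustration):

#include <stdexcept>
#include <vector>
#include "gtest/gtest.h"

TEST(ExceptionAssertions, VectorAtDoesBoundsChecking) {
  std::vector<int> v(3);                      // three elements, indices 0..2
  EXPECT_THROW(v.at(10), std::out_of_range);  // must throw exactly this type
  EXPECT_ANY_THROW(v.at(10));                 // must throw something
  EXPECT_NO_THROW(v.at(2));                   // must not throw at all
}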

Why not just write a portable framework, then? Indeed, that's a top design goal for Google Test. And authors of some other frameworks have tried this too. However, this comes with a cost. Cross-platform C++ development requires much more effort: you need to test your code with different operating systems, different compilers, different versions of them, and different compiler flags (combine these factors and the task soon gets daunting); some platforms may not let you do certain things, and you have to find a workaround and guard the code with conditional compilation; different versions of compilers have different bugs, and you may have to revise your code to bypass them all; and so on. In the end, it's hard unless you are happy with a bare-bones system.

So, I think a major reason that we have many C++ testing frameworks is that C++ is different in different environments, making it hard to write portable C++ code. John's framework may not suit Bill's environment, even if it solves John's problems perfectly.

Another reason is that some limitations of C++ make it impossible to implement certain features really well, and different people have chosen different ways to work around those limitations. One notable example is that C++ is a statically typed language and doesn't support reflection. Most Java testing frameworks use reflection to automatically discover the tests you've written, so that you don't have to register them one by one. This is a good thing, as manually registering tests is tedious and you can easily write a test and forget to register it. Since C++ has no reflection, we have to do it differently. Unfortunately, there is no single best option. Some frameworks require you to register tests by hand, some use scripts to parse your source code to discover tests, and some use macros to automate the registration. We prefer the last approach and think it works for most people, but some disagree. Also, there are different ways to devise the macros, and they involve different trade-offs, so the result is not clear cut.

Let’s see some actual code to understand how Google Test solves the test registration problem. The simplest way to add a test is to use the TEST macro (what else would we name it?):

TEST(Subject, HasCertainProperty) {
  … testing code goes here …
}

This defines a test method whose purpose is to verify that the given subject has the given property. The macro automatically registers the test with Google Test such that it will be run when the test program (which may contain many such TEST definitions) is executed.

Here’s a more concrete example that verifies a Factorial() function works as expected for positive arguments:

TEST(FactorialTest, HandlesPositiveInput) {
  EXPECT_EQ(1, Factorial(1));
  EXPECT_EQ(2, Factorial(2));
  EXPECT_EQ(6, Factorial(3));
  EXPECT_EQ(40320, Factorial(8));
}

Finally, many C++ testing framework authors neglected extensibility and were happy just providing canned solutions, so we ended up with many frameworks, each satisfying a different niche but none general enough. A versatile framework must have a good extension story. Let's face it: you cannot be all things to all people, no matter what. Instead of bloating the framework with rarely used features, we should provide good out-of-the-box solutions for maybe 95% of the use cases, and leave the rest to extensions. If I can easily extend a framework to solve a particular problem of mine, I will feel less motivated to write my own thing. Unfortunately, many framework authors don't seem to see the importance of extensibility. I think that mindset contributed to the plethora of frameworks we see today.

In Google Test, we try to make it easy to expand your testing vocabulary by defining custom assertions that generate informative error messages. For instance, here’s a naive way to verify that an int value is in a given range:

bool IsInRange(int value, int low, int high) {
  return low <= value && value <= high;
}
  ...
  EXPECT_TRUE(IsInRange(SomeFunction(), low, high));

The problem is that when the assertion fails, you only know that the value returned by SomeFunction() is not in the range [low, high], but you have no idea what that return value and the range actually are – this makes debugging the test failure harder.

You could provide a custom message to make the failure more descriptive:

  EXPECT_TRUE(IsInRange(SomeFunction(), low, high))
      << "SomeFunction() = " << SomeFunction() 
      << ", not in range ["
      << low << ", " << high << "]";

Except that this is incorrect as SomeFunction() may return a different answer each time.  You can fix that by introducing an intermediate variable to hold the function’s result:

  int result = SomeFunction();
  EXPECT_TRUE(IsInRange(result, low, high))
      << "result (return value of SomeFunction()) = " << result
      << ", not in range [" << low << ", " << high << "]";

However, this is tedious and obscures what you are really trying to do.  It’s not a good pattern when you need to do the “is in range” check repeatedly. What we need here is a way to abstract this pattern into a reusable construct.

Google Test lets you define a test predicate like this:

AssertionResult IsInRange(int value, int low, int high) {
  if (value < low)
    return AssertionFailure()
        << value << " < lower bound " << low;
  else if (value > high)
    return AssertionFailure()
        << value << " > upper bound " << high;
  else
    return AssertionSuccess()
        << value << " is in range [" 
        << low << ", " << high << "]";
}

Then the statement EXPECT_TRUE(IsInRange(SomeFunction(), low, high)) may print (assuming that SomeFunction() returns 13):

   Value of: IsInRange(SomeFunction(), low, high)
     Actual: false (13 < lower bound 20)
   Expected: true

The same IsInRange() definition also lets you use it in an EXPECT_FALSE context, e.g. EXPECT_FALSE(IsInRange(AnotherFunction(), low, high)) could print:

   Value of: IsInRange(AnotherFunction(), low, high)
     Actual: true (25 is in range [20, 60])
   Expected: false

This way, you can build a library of test predicates for your problem domain, and benefit from clear, declarative test code and descriptive failure messages.

In the same vein, Google Mock (our C++ mocking framework) allows you to easily define matchers that can be used exactly the same way as built-in matchers.  Also, we have included an event listener API in Google Test for people to write plug-ins. We hope that people will use these features to extend Google Test/Mock for their own needs and contribute back extensions that might be generally useful.
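
For instance, the range check from earlier could be written as a Google Mock matcher. Here is a minimal sketch using the MATCHER_P2 macro (the IsBetween name and the test around it are mine, just for illustration):

#include "gmock/gmock.h"
#include "gtest/gtest.h"

using ::testing::Not;

// Defines a two-parameter IsBetween(low, high) matcher;
// 'arg' is the value being matched.
MATCHER_P2(IsBetween, low, high, "") {
  return low <= arg && arg <= high;
}

// Build note: link against gmock (and gtest_main for main()).
TEST(MatcherExample, ChecksRanges) {
  EXPECT_THAT(13, IsBetween(10, 20));       // passes
  EXPECT_THAT(25, Not(IsBetween(10, 20)));  // negation comes for free
}

When an EXPECT_THAT like this fails, it reports both the actual value and the matcher’s description, much like the AssertionResult example above.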

Perhaps one day we will solve the C++ testing framework fragmentation problem, after all. :-)

Categories: Software Testing

Use Your Own API

ABAKAS - Catherine Powell - Fri, 10/26/2012 - 12:17
One of the early milestones for an engineering team is the development of the first API. Someone wants to use our stuff! Maybe it's another team in the company, or another company altogether. Either way, someone will be consuming your API.

That's truly awesome news. It's always fun to see the great things teams can accomplish with your work.

So how do we create an API? Most of this is fairly straightforward:

  • Figure out what people want to do
  • Create an appropriate security model
  • Settle on an API model (RESTful JSON API? XML-RPC? Choose your technologies.)
  • Define the granularity and the API itself.
  • Document it: training materials, PyDoc/rdoc/javadoc, implementation guide, etc.
The trouble is that most people stop there. There's actually one more step to building a good API:

Use your own API.

All the theory in the world doesn't replace building an application with an API. Things that seem like great ideas on paper turn out to be very painful in the real world – like a too-granular API resulting in hundreds of calls, or hidden workflows enforced by convention only. Things that seemed like they were going to be hard are easy. Data elements show up in unexpected places. This is all stuff you figure out not through examination, but through trying it.

So try it. Create an API, then create a simple app that uses your API. It's amazing what you'll learn.
Categories: Software Testing

What Makes Software Cool?

Eric Jacobson's Software Testing Blog - Fri, 10/26/2012 - 09:59

For most of us, testing for “coolness” is not at the top of our quality list.  Our users don’t have to buy what we test.  Instead, they get forced to use it by their employer.  Nevertheless, coolness can’t hurt. 

As far as testing for it…good luck.  It does not appear to be as straightforward as some may think.

I attended a mini-UX Conference earlier this week and saw Karen Holtzblatt, CEO and founder of InContext, speak.  Her keynote was the highlight of the conference for me, mostly because she was fun to watch.  She described the findings of 90 interviews and 2000 survey results, where her company asked people to show them “cool” things and explain why they considered them cool.

Her conclusion was that software aesthetics are way less important than the following four aspects:

  1. Accomplishments – When using your software, people need to feel a sense of accomplishment without disrupting the momentum of their lives.  They need to feel like they are getting something done that was otherwise difficult.  They need to do this without giving up any part of their life.  Example: Can they accomplish something while waiting in line?
  2. Connection – When using your software, they should be motivated to connect with people they actually care about (e.g., not Facebook friends).  These connections should be enriched in some manner.  Example: Were they able to share it with Mom?  Did they talk about it over Thanksgiving dinner?
  3. Identity - When using your software, they should feel like they’re not alone.  They should be asking themselves, “Who am I?”, “Do I fit in with these other people?”.  They should be able to share their identity with joy.
  4. Sensation – When using your software, they should experience a core sensory pleasure.  Examples: Can they interact with it in a fresh way via some new interface?  Can they see or hear something delightful?

Here are a few other notes I took:

  • Modern users have no tolerance for anything but the most amazing experience.
  • The app should help them get from thought to action, nothing in between.
  • Users expect software to gather all the data they need and think for them.

I guess maybe I’ll think twice the next time I feel like saying, “just publish the user procedures, they’ll get it eventually”.

Categories: Software Testing

Mobile app security advice: Err on the side of protection

SearchSoftwareQuality - Fri, 10/26/2012 - 07:21
Apps running on mobile devices demand a high level of security. Experts offer advice and techniques on implementing mobile app security effectively.


Categories: Software Testing

I’ve Heard of Bugs Closing Windows Before, But This Is Ridiculous

QA Hates You - Thu, 10/25/2012 - 09:17

2005-’07 BMW 7 Series Recalled Because Doors May Inadvertently Open:

BMW is recalling 7,485 2005-’07 BMW 7 Series cars because a software problem may cause the doors to inadvertently open, according to the National Highway Traffic Safety Administration.

“Due to a software problem, the doors may appear to be closed and latched, but, in fact, may inadvertently open,” said NHTSA in its summary of the problem. “The door may unexpectedly open due to road or driving conditions or occupant contact with the door. The sudden opening may result in occupant ejection or increase the risk of injury in the event of a crash.”

You know, I still have a vehicle whose key makes moving parts move and whose (albeit plastic) door handle makes other physical things move.

Know why?

Because I work in QA.

Also, I’m too cheap to buy a new vehicle every decade.

(Link courtesy of Gimlet again, whom I should consider giving the keys to the blog, except he works in ops and therefore only disdains you.)

Categories: Software Testing

Acceptance test-driven development explained

SearchSoftwareQuality - Wed, 10/24/2012 - 11:52
Acceptance test driven development brings developers, testers and business together to sort out bugs before coding starts, according to a new book.


Categories: Software Testing

Failure Is Inevitable; We Just Try To Make It Really, Really Hard

QA Hates You - Wed, 10/24/2012 - 11:47

Wired has an interesting story from the world of manufacturing: Why Products Fail.

It deals with the fact that entropy will lead to the eventual demise of even the most finely crafted and engineered items. The laws of mother nature, the variability in the repeated processing of materials, and other things work against absolute perfection.

Of course, in the IT world, failure emerges a lot more quickly. The nature of the “engineering” (that is, the cobbling together of Internet examples to barely solve poorly understood problems), coupled with “natural laws” – the physical and technological environment, from Web browser versions to commonplace architectures, that changes every six months or so – leads to far less success, and little fection, much less perfection.

To maintain our sanity, though, we really do have to recognize that things will break down. We just have to keep agitating and pushing to make sure that the eventual failure is more isolated and harder to get to than a couple of keystrokes combined with a mouse click.

Categories: Software Testing

Predictive analytics programs need open organizational minds

SearchSoftwareQuality - Wed, 10/24/2012 - 09:42
Using predictive analytics tools can help companies move forward more intelligently, particularly when they welcome new ideas on business strategies and processes.


Categories: Software Testing

More Obscure References Than A Second-Tier Gaming Convention

QA Hates You - Tue, 10/23/2012 - 13:54

Ladies and gentlemen, the IBM Jargon and General Computing Dictionary circa 1990.

You can bet your bottom dollar that I’m going to bring some of that slang back. By myself if I have to, but I bet some of you will join me.

I wish I was reading stuff like that on the computer networks in 1990. Instead, I was busy reading the MJ-12 cover-up in raw text files. Just think, if I had wasted my time on the former instead of the latter, I coulda made something of my life.

Thanks be to Gimlet for the pointer.

Categories: Software Testing

Mobile business intelligence brings benefits -- and barriers

SearchSoftwareQuality - Tue, 10/23/2012 - 11:32
Developing and delivering mobile business intelligence applications poses various challenges that can derail a mobile BI strategy if they aren't addressed.


Categories: Software Testing