Software Testing

Test Automation Can Be So Much More…

Eric Jacobson's Software Testing Blog - Fri, 07/18/2014 - 13:32

I often hear people describe their automated test approach by naming the tool, framework, harness, technology, test runner, or structure/format.  I’ve described mine the same way.  It’s safe.  It’s simple.  It’s established.  “We use Cucumber”.

Lately, I’ve seen things differently.

Instead of trying to pigeonhole each automated check into a tightly controlled format for an entire project, why not design the automated checks for each Story based on their best fit for that Story?

I think this notion comes from my context-driven test schooling.  Here’s an example:

On my current project, we said “let’s write BDD-style automated checks”.  We found it awkward to pigeonhole many of our checks into Given, When, Then.  After eventually dropping the mandate for BDD-style, I discovered the not-as-natural-language style to be easier to read, more flexible, and quicker to author…for some Stories.  Some Stories are good candidates for data-driven checks authored via Excel.  Some might require manual testing with a mocked product…computer-assisted-exploratory-testing…another use of automation.  Other Stories might test better using non-deterministic automated diffs.
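
For what it’s worth, here is a minimal sketch, in Go, of what a data-driven check can look like without Given/When/Then.  This is an illustration only, not this project’s code: the discount rule, thresholds, and package/file names are invented, and the table rows could just as easily be loaded from a CSV/Excel export maintained per Story.

// file: discount_check_test.go — run with `go test`
package checks

import "testing"

// discount is a hypothetical stand-in for behavior a Story might specify:
// orders of $100 or more earn a 10% discount.
func discount(orderTotal float64) float64 {
    if orderTotal >= 100 {
        return 0.10
    }
    return 0
}

// TestDiscountByOrderTotal expresses the check as a table of cases rather than
// Given/When/Then prose.
func TestDiscountByOrderTotal(t *testing.T) {
    cases := []struct {
        name  string
        total float64
        want  float64
    }{
        {"below threshold", 99.99, 0},
        {"at threshold", 100, 0.10},
        {"above threshold", 250, 0.10},
    }
    for _, c := range cases {
        if got := discount(c.total); got != c.want {
            t.Errorf("%s: discount(%v) = %v, want %v", c.name, c.total, got, c.want)
        }
    }
}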

Sandboxing all your automated checks into FitNesse might make test execution easier.  But it might stifle test innovation.

Categories: Software Testing

Very Short Blog Posts (20): More About Testability

DevelopSense - Michael Bolton - Mon, 07/14/2014 - 15:30
A few weeks ago, I posted a Very Short Blog Post on the bare-bones basics of testability. Today, I saw a very good post from Adam Knight talking about telling the testability story. Adam focused, as I did, on intrinsic testability—things in the product itself that make it more testable. But testability isn’t just a product […]
Categories: Software Testing

Measuring Coverage at Google

Google Testing Blog - Mon, 07/14/2014 - 13:45
By Marko Ivanković, Google Zürich

Introduction
Code coverage is a very interesting metric, covered by a large body of research that reaches somewhat contradictory results. Some people think it is an extremely useful metric and that a certain percentage of coverage should be enforced on all code. Some think it is a useful tool to identify areas that need more testing but don’t necessarily trust that covered code is truly well tested. Others yet think that measuring coverage is actively harmful because it provides a false sense of security.

Our team’s mission was to collect coverage-related data, then develop and champion code coverage practices across Google. We designed an opt-in system where engineers could enable two different types of coverage measurements for their projects: daily and per-commit. With daily coverage, we run all tests for their project, whereas with per-commit coverage we run only the tests affected by the commit. The two measurements are independent and many projects opted into both.

While we did experiment with branch, function and statement coverage, we ended up focusing mostly on statement coverage because of its relative simplicity and ease of visualization.

How we measured
Our job was made significantly easier by the wonderful Google build system whose parallelism and flexibility allowed us to simply scale our measurements to Google scale. The build system had integrated various language-specific open source coverage measurement tools like Gcov (C++), Emma / JaCoCo (Java) and Coverage.py (Python), and we provided a central system where teams could sign up for coverage measurement.
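
The post lists the language-specific tools the build system integrates; for Go (which also appears in the results below), the built-in cover tooling invoked via `go test -cover` plays the same role, though the post does not name it. As a minimal illustration of what statement coverage measures, consider this hypothetical two-file Go package, where the package, file, and function names are invented for the example:

// file: clamp.go
package clamp

// Clamp limits v to the range [lo, hi].
func Clamp(v, lo, hi int) int {
    if v < lo {
        return lo
    }
    if v > hi {
        return hi
    }
    return v
}

// file: clamp_test.go
package clamp

import "testing"

// This test never drives v above hi, so `go test -cover` reports less than
// 100% statement coverage, and the cover profile marks the `return hi`
// statement as not covered.
func TestClampBelowRange(t *testing.T) {
    if got := Clamp(-5, 0, 10); got != 0 {
        t.Errorf("Clamp(-5, 0, 10) = %d, want 0", got)
    }
}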

For daily whole project coverage measurements, each team was provided with a simple cronjob that would run all tests across the project’s codebase. The results of these runs were available to the teams in a centralized dashboard that displays charts showing coverage over time and allows daily / weekly / quarterly / yearly aggregations and per-language slicing. On this dashboard teams can also compare their project (or projects) with any other project, or Google as a whole.

For per-commit measurement, we hook into the Google code review process (briefly explained in this article) and display the data visually to both the commit author and the reviewers. We display the data on two levels: color coded lines right next to the color coded diff and a total aggregate number for the entire commit.


Displayed above is a screenshot of the code review tool. The green line coloring is the standard diff coloring for added lines. The orange and lighter green coloring on the line numbers is the coverage information. We use light green for covered lines, orange for non-covered lines and white for non-instrumented lines.

It’s important to note that we surface the coverage information before the commit is submitted to the codebase, because this is the time when engineers are most likely to be interested in improving it.

Results
One of the main benefits of working at Google is the scale at which we operate. We have been running the coverage measurement system for some time now and we have collected data for more than 650 different projects, spanning 100,000+ commits. We have a significant amount of data for C++, Java, Python, Go and JavaScript code.

I am happy to say that we can share some preliminary results with you today:


The chart above is the histogram of average values of measured absolute coverage across Google. The median (50th percentile) code coverage is 78%, the 75th percentile is 85%, and the 90th percentile is 90%. We believe that these numbers represent a very healthy codebase.

We have also found it very interesting that there are significant differences between languages:

C++         56.6%
Java        61.2%
Go          63.0%
JavaScript  76.9%
Python      84.2%

The table above shows the total coverage of all analyzed code for each language, averaged over the past quarter. We believe that the large difference is due to structural, paradigm and best practice differences between languages and the more precise ability to measure coverage in certain languages.

Note that these numbers should not be interpreted as guidelines for a particular language; the aggregation method used is too simple for that. Instead, this finding is simply a data point for any future research that analyzes samples from a single programming language.

The feedback from our fellow engineers was overwhelmingly positive. The most loved feature was surfacing the coverage information during code review time. This early surfacing of coverage had a statistically significant impact: our initial analysis suggests that it increased coverage by 10% (averaged across all commits).

Future work
We are aware that there are a few problems with the dataset we collected. In particular, the individual tools we use to measure coverage are not perfect. Large integration tests, end-to-end tests, and UI tests are difficult to instrument, so large parts of code exercised by such tests can be misreported as non-covered.

We are working on improving the tools, but also analyzing the impact of unit tests, integration tests and other types of tests individually.

In addition to languages, we will also investigate other factors that might influence coverage, such as platforms and frameworks, to allow all future research to account for their effect.

We will be publishing more of our findings in the future, so stay tuned.

And if this sounds like something you would like to work on, why not apply on our job site?

Categories: Software Testing

Pushing Up All The Levels On The Mixing Board

Eric Jacobson's Software Testing Blog - Fri, 07/11/2014 - 09:50

…may not be a good way to start testing.

I heard a programmer use this metaphor to describe the testing habits of a tester he had worked with. 

As a tester, taking all test input variables to their extreme may be an effective way to find bugs.  However, it may not be an effective way to report bugs.  Skilled testers will repeat the same test until they isolate the minimum variable(s) that cause the bug.  Or, using this metaphor, they may repeat the same test with all levels on the mixing board pulled down except the one they are interested in observing.

Once the culprit is identified, the skilled tester will repeat the test changing only the isolated variable, and accurately predict a pass or fail result.
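
As an illustration of that isolation step (mine, not the author’s), here is a short Go sketch of the one-level-at-a-time idea: hold every input at a quiet baseline and raise only the level under observation on each run.  The input names and the bugReproduces stand-in are invented for the example.

// A sketch of isolating the variable that triggers a failure.
package main

import "fmt"

// bugReproduces is a hypothetical stand-in for running the product-under-test
// with the given inputs and observing whether the failure appears.
func bugReproduces(inputs map[string]int) bool {
    return inputs["gain"] > 90 // placeholder: in this sketch only "gain" matters
}

func main() {
    baseline := map[string]int{"gain": 0, "treble": 0, "bass": 0, "reverb": 0}
    for suspect := range baseline {
        inputs := make(map[string]int, len(baseline))
        for k, v := range baseline {
            inputs[k] = v // start from the all-levels-down baseline
        }
        inputs[suspect] = 100 // push only the level under observation to its extreme
        if bugReproduces(inputs) {
            fmt.Printf("bug isolated: it reproduces when only %q is at its extreme\n", suspect)
        }
    }
}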

Categories: Software Testing

“Bug In The Test” vs. “Bug In The Product”

Eric Jacobson's Software Testing Blog - Wed, 07/09/2014 - 12:44

Dear Test Automators,

The next time you discuss automation results, please consider qualifying the context of the word “bug”.

If automation fails, it means one of two things:

  1. There is a bug in the product-under-test.
  2. There is a bug in the automation.

The former is waaaaaay more important than the latter.  Maybe not to you, but certainly for your audience.

Instead of saying,

“This automated check failed”,

consider saying,

“This automated check failed because of a bug in the product-under-test”.

 

Instead of saying,

“I’m working on a bug”,

consider saying,

“I’m working on a bug in the automation”.

 

Your world is arguably more complex than that of testers who don’t use automation.  You must test twice as many programs (the automation and the product-under-test).  Please consider being precise when you communicate.

Categories: Software Testing

QA Music: Quality Assurance, Defined

QA Hates You - Mon, 07/07/2014 - 04:24

Avenged Sevenfold, “This Means War”:

Are we at war with the others in the software industry who accept poor quality software? Are we at war with ourselves because we give it just slightly less than we’ve got and sometimes a lot less than it takes?

Yes.

Categories: Software Testing

Data Testing vs. Functional Testing

Eric Jacobson's Software Testing Blog - Wed, 07/02/2014 - 12:32

So, you’ve got a green thumb.  You’ve been growing houseplants your whole life.  Now try to grow an orchid.  What you’ve learned about houseplants has taught you very little about orchids.

  • Put one in soil and you’ll kill it (orchids grow on rocks or bark). 
  • Orchids need about 20 degrees Fahrenheit difference between day and night.
  • Orchids need wind and humidity to thrive.
  • Orchids need indirect sunlight.  Lots of it.  But put them in the sun and they’ll burn.
  • Fading flowers does not mean your orchid is dying (orchids bloom in cycles).

So, you’re a skilled tester.  You’ve been testing functional applications with user interfaces your whole career.  Now try to test a data warehouse.  What you’ve learned about functionality testing has taught you very little about data testing.

  • “Acting like a user” will not get you far.  Efficient data testing does not involve a UI and depends little on other interfaces.  There are no buttons to click or text boxes to interrogate during a massive data quality investigation.
  • Lack of technical skills will kill you.  Interacting with a DB requires database language skills (e.g., T-SQL).  Testing millions of rows of data requires coding skills to enlist the help of machine-aided-exploratory-testing.
  • Checking the health of your data warehouse prior to deployments probably requires automated checks (a minimal sketch follows this list).
  • For functional testing, executing shallow tests first to cover breadth, then deep tests later, is normally a good approach.  In data testing, the opposite may be true.
  • If you are skilled at writing bug reports with detailed repro steps, this skill may hinder your effectiveness at communicating data warehouse bugs, where repro steps may not be important.
  • If you are used to getting by as a tester without reading books about the architecture or technology of your system-under-test, you may fail at data warehouse testing.  In order to design valuable tests, a tester will need to study data warehouses until they grok concepts like Inferred Members, Junk Dimensions, Partitioning, Null handling, 3NF, grain, and Rapidly Changing Monster Dimensions.
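
Here is a minimal sketch of the kind of pre-deployment automated data check mentioned above.  It is an illustration only: the table names, the reconciliation rules, the connection string, and the go-sql-driver/mysql driver are assumptions; a real warehouse would use its own driver and its own checks.

// A sketch of automated data warehouse health checks, run before a deployment.
package main

import (
    "database/sql"
    "fmt"
    "log"

    _ "github.com/go-sql-driver/mysql" // assumed driver; substitute your warehouse's driver
)

func main() {
    db, err := sql.Open("mysql", "user:password@/warehouse") // placeholder DSN
    if err != nil {
        log.Fatal(err)
    }
    defer db.Close()

    // Each check is a query that should return 0 when the warehouse is healthy.
    checks := map[string]string{
        "fact rows match staging":   "SELECT (SELECT COUNT(*) FROM fact_sales) - (SELECT COUNT(*) FROM staging_sales)",
        "no orphaned customer keys": "SELECT COUNT(*) FROM fact_sales f LEFT JOIN dim_customer d ON f.customer_key = d.customer_key WHERE d.customer_key IS NULL",
    }
    for name, query := range checks {
        var result int
        if err := db.QueryRow(query).Scan(&result); err != nil {
            log.Fatalf("%s: query failed: %v", name, err)
        }
        if result != 0 {
            fmt.Printf("CHECK FAILED: %s (got %d, expected 0)\n", name, result)
        }
    }
}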

Testers, let’s respect the differences in the projects we test, and grow our skills accordingly.  Please don’t use a one-size-fits-all approach.

Categories: Software Testing

GTAC is Almost Here!

Google Testing Blog - Tue, 07/01/2014 - 13:28
by The GTAC Committee

GTAC is just around the corner, and we’re all very busy and excited. I know we say this every year, but this is going to be the best GTAC ever! We have updated the GTAC site with important details:


If you are on the attendance list, we’ll see you on April 23rd. If not, check out the Live Stream page where you can watch the conference live and can get involved in Q&A after each talk. Perhaps your team can gather in a conference room and attend remotely.

Categories: Software Testing

GTAC 2013 Wrap-up

Google Testing Blog - Tue, 07/01/2014 - 13:27
by The GTAC Committee

The Google Test Automation Conference (GTAC) was held last week in NYC on April 23rd & 24th. The theme for this year's conference was Mobile and Media. We were fortunate to have a cross section of attendees and presenters from industry and academia. This year’s talks focused on trends we are seeing in industry, combined with compelling talks on tools and infrastructure that can have a direct impact on our products. We believe we achieved a conference that was for engineers, by engineers. GTAC 2013 demonstrated that there is a strong trend toward the emergence of test engineering as a computer science discipline across companies and academia alike.

All of the slides, video recordings, and photos are now available on the GTAC site. Thank you to all the speakers and attendees who made this event spectacular. We are already looking forward to the next GTAC. If you have suggestions for next year’s location or theme, please comment on this post. To receive GTAC updates, subscribe to the Google Testing Blog.

Here are some responses to GTAC 2013:

“My first GTAC, and one of the best conferences of any kind I've ever been to. The talks were consistently great and the chance to interact with so many experts from all over the map was priceless.” - Gareth Bowles, Netflix

“Adding my own thanks as a speaker (and consumer of the material, I learned a lot from the other speakers) -- this was amazingly well run, and had facilities that I've seen many larger conferences not provide. I got everything I wanted from attending and more!” - James Waldrop, Twitter

“This was a wonderful conference. I learned so much in two days and met some great people. Can't wait to get back to Denver and use all this newly acquired knowledge!” - Crystal Preston-Watson, Ping Identity

“GTAC is hands down the smoothest conference/event I've attended. Well done to Google and all involved.” - Alister Scott, ThoughtWorks

“Thanks and compliments for an amazingly brain activity spurring event. I returned very inspired. First day back at work and the first thing I am doing is looking into improving our build automation and speed (1 min is too long. We are not building that much, groovy is dynamic).” - Irina Muchnik, Zynx Health

Categories: Software Testing

ThreadSanitizer: Slaughtering Data Races

Google Testing Blog - Mon, 06/30/2014 - 15:30
by Dmitry Vyukov, Synchronization Lookout, Google, Moscow

Hello,

I work in the Dynamic Testing Tools team at Google. Our team develops tools like AddressSanitizer, MemorySanitizer and ThreadSanitizer which find various kinds of bugs. In this blog post I want to tell you about ThreadSanitizer, a fast data race detector for C++ and Go programs.

First of all, what is a data race? A data race occurs when two threads access the same variable concurrently, and at least one of the accesses is a write. Most programming languages provide very weak guarantees, or no guarantees at all, for programs with data races. For example, in C++ absolutely any data race renders the behavior of the whole program completely undefined (yes, it can suddenly format the hard drive). Data races are common in concurrent programs, and they are notoriously hard to debug and localize. A typical manifestation of a data race is when a program occasionally crashes with obscure symptoms; the symptoms are different each time and do not point to any particular place in the source code. Such bugs can take several months of debugging without particular success, since typical debugging techniques do not work. Fortunately, ThreadSanitizer can catch most data races in the blink of an eye. See Chromium issue 15577 for an example of such a data race and issue 18488 for the resolution.

Due to the complex nature of bugs caught by ThreadSanitizer, we don't suggest waiting until product release validation to use the tool. For example, at Google, we've made our tools easily accessible to programmers during development, so that anyone can use the tool for testing if they suspect that new code might introduce a race. For both Chromium and the Google internal server codebase, we run unit tests that use the tool continuously. This catches many regressions instantly. The Chromium project has recently started using ThreadSanitizer on ClusterFuzz, a large scale fuzzing system. Finally, some teams also set up periodic end-to-end testing with ThreadSanitizer under a realistic workload, which proves to be extremely valuable. Our team has zero tolerance for races found by the tool and does not consider any race to be benign, as even the most benign-looking races can lead to memory corruption.

Our tools are dynamic (as opposed to static tools). This means that they do not merely "look" at the code and try to surmise where bugs can be; instead they instrument the binary at build time and then analyze the dynamic behavior of the program to catch it red-handed. This approach has its pros and cons. On one hand, the tool does not have any false positives, so it does not bother a developer with something that is not a bug. On the other hand, in order to catch a bug, the test must expose it -- the racing data accesses must actually be executed in different threads. This requires writing good multi-threaded tests and makes end-to-end testing especially effective.

As a bonus, ThreadSanitizer finds some other types of bugs: thread leaks, deadlocks, incorrect uses of mutexes, malloc calls in signal handlers, and more. It also natively understands atomic operations and thus can find bugs in lock-free algorithms (see e.g. this bug in the V8 concurrent garbage collector).

The tool is supported by both Clang and GCC compilers (only on Linux/Intel64). Using it is very simple: you just need to add a -fsanitize=thread flag during compilation and linking. For Go programs, you simply need to add a -race flag to the go tool (supported on Linux, Mac and Windows).
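
As a minimal illustration of the kind of bug being discussed (my sketch, not from the post), here is a Go program with a deliberate data race on a shared counter; running it with the -race flag mentioned above, e.g. `go run -race race.go`, reports the conflicting accesses.

// file: race.go — a deliberately racy counter.
package main

import (
    "fmt"
    "sync"
)

func main() {
    counter := 0 // shared variable
    var wg sync.WaitGroup
    for i := 0; i < 2; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            // Both goroutines read and write counter with no synchronization;
            // at least one access is a write, so this is a data race.
            counter++
        }()
    }
    wg.Wait()
    fmt.Println(counter) // the race detector points at the increments above
}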

Interestingly, after integrating the tool into compilers, we've found some bugs in the compilers themselves. For example, LLVM was illegally widening stores, which can introduce very harmful data races into otherwise correct programs. And GCC was injecting unsafe code for initialization of function static variables. Among our other trophies are more than a thousand bugs in Chromium, Firefox, the Go standard library, WebRTC, OpenSSL, and of course in our internal projects.

So what are you waiting for? You know what to do!
Categories: Software Testing

Software Testing Careers - Should I Stay or Should I Go?

Randy Rice's Software Testing & Quality - Fri, 06/27/2014 - 13:10
At the recent Better Software Conference, I really enjoyed a presentation by James Whittaker in which he gave some keys to succeeding as a tester, or any career, for that matter. The short list is:
1. Ambition
2. Passion
3. Specialization
4. Imitation of mentors
5. Re-invention
6. Clairvoyance - Keep abreast of the industry. Read!
7. Leadership
8. Find your "super powers" (Things you do really, really well)

That got me thinking about this topic again, because my passion is to help people have better lives, not just be better testers. If you are a great tester stuck in a bad company, working for a clueless boss, etc., then you will be miserable. That leads to an unhappy life everywhere outside of work. Believe me, I know.

It also caused me to rethink things I have taught in the past - especially the idea that you can thrive even in a dysfunctional environment.

Some of you reading this are fortunate to be in a great company, working for a great manager and most days are sunny. However, I talk with thousands of testers every year and I think the great majority of them are in frustrating situations.

One of my most popular "lightning talks" is "Ten Proven Ways to Demotivate Your Team." I included those slides as an opener for my keynote at StarEast 2014. Instead of being funny, I think it evoked serious feelings. I guess that's how stand-up comedy works - sometimes the material works and sometimes, it bombs.

After the StarEast session, I had several people pull me aside to ask me how to deal with their dysfunctional workplace issues. So I decided to write here about some ways to survive terrible workplaces and go on to thrive in your career.

I really thought about these points and got feedback from others in the QA and test profession. Their consensus was the same as mine.

1. The main thing is that you must see yourself as a free agent. The idea that a company will take care of you until retirement died about 25 years ago. You have to develop yourself, invest in yourself, and be prepared to work in many situations - like a sports player or a musician can go from team to team or band to band. Unfortunately, corporations have destroyed trust by outsourcing and reorganizations that have devalued people and the value they bring to the company.

2. It's one thing to have a bad manager, but another to be in a company that has a terrible culture. The problem is that even if your manager leaves or gets fired, you are still in the same cesspool as before. No matter how the team structure changes, you are still in a no-win situation. Even if you became the new manager, you will be up against the same people who make the same bad, misguided policies as before.

3. You are responsible for developing your own skills. It's sad, but too many companies don't want to invest in their people when it comes to training or skill building. They are a) tight-fisted with money and b) afraid that people will benefit from the training by leaving the company afterwards. This just shows that management knows the culture is bad, but they don't care enough to fix it. Or, they don't want to take the risk to take a few positive steps, like training.

4. You have to "know when to hold them, know when to fold them." Leaving a job has risks, just as taking a job has risks. Try your best to do your homework by talking to current employees you may have access to. It's not a sign of failure to leave a job. In fact, sometimes that's the only way to get ahead. Just be careful to not appear as a "job hopper." That can be a career killer. When explaining to a prospective employer why you are considering a new position, you want to stay on the positive side. It's risky to be critical of your present situation. You can say that you feel your skills and talents are being underused and that you can make a greater contribution someplace else. Back when I had a "real job" I took the high road and thanked the employer I was leaving. That paid off.

I think I have a nomination for the worst treatment ever. It happened recently to one of my sons who had just taken a new job and got married. The company gave him an expensive gift - worth about $300. While on his honeymoon, they fired him (He didn't find out until he opened his mail upon returning home). Then, they deducted the cost of the gift from his final paycheck.

There are no perfect answers on this issue. You have to carefully consider the options, but just keep in mind that toxic environments will eventually wear you down. As much as we like to say we can separate work life from personal life, it's practically impossible to do because we are emotional beings.

Do yourself a favor and build a career, not a job.

I hope this helps and I would love to hear your comments.

Randy




Categories: Software Testing

I Know It’s Like Training Wheels, But….

QA Hates You - Thu, 06/26/2014 - 03:58

I know this is just a simple trick that marks me as a beginner, but I like to add a comment at the end of a block to indicate what block of code is ending.

Java:

      } // if button displayed
    } catch (NoSuchElementException e) {
      buttonState = "not present";
    } // try/catch
    return buttonState;
  } // check button
} // class

Ruby:

      end # until
    end # wait for browser
  end # class
end

Sure, an IDE will show me, sometimes faintly, what block a bracket closes, but I prefer this clearer indication which is easier to see.

Categories: Software Testing

Who Cares About Test Results?

Eric Jacobson's Software Testing Blog - Wed, 06/25/2014 - 12:28

I think it’s only people who experience bugs.

Sadly, devs, BAs, other testers, stakeholders, QA managers, directors, etc. seldom appear interested in the fruits of our labor.  The big exception is when any of these people experience a bug, downstream of our test efforts.

“Hey, did you test this?  Did it pass?  It’s not working when I try it.”

Despite the disinterest, we testers spend a lot of effort standing up ways to report test results.  Whether it be elaborate pass/fail charts or low-tech information radiators on public whiteboards, we do our best.  I’ve put lots of energy into coaching my testers to give better test reports, but I often second-guess this…wondering how beneficial the skill is.

Why isn’t anyone listening?  These are some reasons I can think of:

  • Testers have done such a poor job of communicating test results, in the past, that people don’t find the results valuable.
  • Testers have done such a poor job of testing, that people don’t find the results valuable.
  • People are mainly interested in completing their own work.  They assume all is well with their product until a bug report shows up.
  • Testing is really difficult to summarize.  Testers haven't found an effective way of doing this.
  • Testing is really difficult to summarize.  Potentially interested parties don’t want to take the time to understand the results.
  • People think testers are quality cops instead of quality investigators; People will wait for the cops to knock on their door to deliver bad news.
  • Everyone else did their own testing and already knows the results.
  • Test results aren’t important.  They have no apparent bearing on success or failure of a product.
Categories: Software Testing

In Other Words in Other Places

QA Hates You - Wed, 06/25/2014 - 05:15

Now on StickyMinds: Picture Imperfect: Methods for Testing How Your App Handles Images.

It’s a list of dirty tricks but without the snark.

Categories: Software Testing

Once The Users Smell Blood…

Eric Jacobson's Software Testing Blog - Tue, 06/24/2014 - 11:19

We had a relatively disastrous prod deployment last week.  Four bugs, caused by a large refactor, were missed in test.  But here’s the weirder part: along with those four bugs, the users started reporting previously existing functionality as new bugs, and in some cases convincing us to do emergency patches to change said previously existing functionality.

It seems bugs beget bugs.

Apparently the shock of these initial four bugs created a priming effect, which resulted in overly-critical user perceptions:

“I’ve never noticed that before…must be something else those clowns broke.”

I’ve heard people are more likely to tidy up if they smell a faint scent of cleaning liquid.  Same thing occurs with bugs I guess. 

What’s the lesson here?  Releasing four bugs might be more expensive than fixing four bugs.  It might mean fixing seven and dealing with extra support calls until the priming effect wears off.

Categories: Software Testing

The Secret of My Success

QA Hates You - Tue, 06/24/2014 - 09:18

Haters gonna hate – but it makes them better at their job: Grumpy and negative people are more efficient than happy colleagues:

Everyone hates a hater. They’re the ones who hate the sun because it’s too hot, and the breeze because it’s too cold.

The rest of us, then, can take comfort in the fact that haters may not want to get involved in as many activities as the rest of us.

But in a twist of irony, that grumpy person you know may actually be better at their job since they spend so much time on fewer activities.

It’s not true, of course.

Haters don’t hate other haters.

But the rest could hold true.

Categories: Software Testing

QA Music: Is There Any Hope for QA?

QA Hates You - Mon, 06/23/2014 - 05:27

Devour the Day, “Good Man”:

Categories: Software Testing

LoadStorm v2.0 Now with Transaction Response Timing – What does this mean for you?

LoadStorm - Tue, 06/17/2014 - 12:17

Today, LoadStorm published a press release announcing our new Transaction Response Timing. For many professional performance testers, especially those used to products like HP LoadRunner or SOASTA CloudTest, wrapping timings around logical business processes and related transactions is a familiar concept. For those of you who aren’t familiar, I’ll explain.

What is Transaction Response Time?

Transaction Response Time represents the time taken for the application to complete a defined transaction or business process.

Why is it important to measure Transaction Response Time?

The objective of a performance test is to ensure that the application works optimally under load. However, the definition of “optimally” under load may vary from system to system.
By defining an initial acceptable response time, we can benchmark whether the application is performing as anticipated.

The importance of Transaction Response Time is that it gives the project team/application team an idea of how the application is performing in terms of time. With this information, they can tell users/customers what response time to expect when processing a request, and better understand how the application performed.

What does Transaction Response Time encompass?

The Transaction Response Time encompasses the time taken for the request to reach the Web Server, be processed by the Web Server and sent to the Application Server, which in most instances will make a request to the Database Server. All of this is then repeated in reverse, from the Database Server to the Application Server to the Web Server and back to the user. Take note that the time the request and data spend in network transmission is also factored in.

To simplify, the Transaction Response Time comprises the following:

  • Processing time on the Web Server
  • Processing time on the Application Server
  • Processing time on the Database Server
  • Network latency between the servers and the client

The following diagram illustrates Transaction Response Time.

Transaction Response Time = (t1 + t2 + t3 + t4 + t5 + t6 + t7 + t8 + t9) × 2
Note: the × 2 accounts for the time taken for the data to return to the client.

How do we measure?

Measurement of the Transaction Response Time begins when the defined transaction makes its first request to the application and stops when the transaction completes, before the next transaction’s request begins.
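
To make the idea concrete, here is a short Go sketch (not LoadStorm’s implementation) that wraps a timer around a multi-step business transaction. The “checkout” transaction and the example.com URLs are assumptions for illustration.

// Measuring the response time of a multi-request "checkout" transaction.
package main

import (
    "fmt"
    "log"
    "net/http"
    "time"
)

func main() {
    steps := []string{
        "https://shop.example.com/cart",
        "https://shop.example.com/checkout",
        "https://shop.example.com/confirm",
    }

    start := time.Now() // the transaction timer starts with the first request...
    for _, url := range steps {
        resp, err := http.Get(url)
        if err != nil {
            log.Fatalf("transaction aborted at %s: %v", url, err)
        }
        resp.Body.Close()
    }
    elapsed := time.Since(start) // ...and stops when the last response completes

    fmt.Printf("transaction response time (checkout): %v\n", elapsed)
}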

How can we use Transaction Response Time to analyze performance issues?

Transaction Response Time allows us to identify abnormalities when performance issues surface. These show up as slow transaction responses that differ significantly (or even slightly) from the average Transaction Response Time. With this, we can correlate other measurements, such as the number of virtual users accessing the application at that point in time and system-related metrics (e.g., CPU utilization), to identify the root cause.

With all the data that has been collected during the load test, we can correlate the measurements to find trends and bottlenecks between the response time, and the amount of load that was generated.

How is it beneficial to the Project Team?

Using Transaction Response Time, the Project Team can better relate to their users by using transactions as a common language that their users can comprehend. Users will know whether transactions (or business processes) are performing at an acceptable level in terms of time.
Users may be unable to understand the meaning of CPU utilization or memory usage, so the common language of time is ideal for conveying performance-related issues.

The post LoadStorm v2.0 Now with Transaction Response Timing – What does this mean for you? appeared first on LoadStorm.

You Can Learn From Others’ Failures

QA Hates You - Tue, 06/17/2014 - 04:27

10 Things You Can Learn From Bad Copy:

We’ve all read copy that makes us cringe. Sometimes it’s hard to put a finger on exactly what it is that makes the copy so bad. Nonetheless, its lack of appeal doesn’t go unnoticed.

Of course, writing is subjective in nature, but there are certain blunders that are universal. While poor writing doesn’t do much to engage the reader or lend authority to its publisher, it can help you gain a better understanding of what is needed to produce quality content.

It’s most applicable to content-heavy Web sites, but some of the points are more broadly applicable to applications in general. Including #8, Grammar Matters:

Obviously, you wouldn’t use poor grammar on purpose. Unfortunately, many don’t know when they’re using poor grammar.

That’s one of the things we’re here for.

(Link via SupaTrey.)

Categories: Software Testing

GTAC 2014: Call for Proposals & Attendance

Google Testing Blog - Mon, 06/16/2014 - 15:57
Posted by Anthony Vallone on behalf of the GTAC Committee

The application process is now open for presentation proposals and attendance for GTAC (Google Test Automation Conference) (see initial announcement) to be held at the Google Kirkland office (near Seattle, WA) on October 28 - 29th, 2014.

GTAC will be streamed live on YouTube again this year, so even if you can’t attend, you’ll be able to watch the conference from your computer.

Speakers
Presentations are targeted at student, academic, and experienced engineers working on test automation. Full presentations and lightning talks are 45 minutes and 15 minutes respectively. Speakers should be prepared for a question and answer session following their presentation.

Application
For presentation proposals and/or attendance, complete this form. We will be selecting about 300 applicants for the event.

Deadline
The due date for both presentation and attendance applications is July 28, 2014.

Fees
There are no registration fees, and we will send out detailed registration instructions to each invited applicant. Meals will be provided, but speakers and attendees must arrange and pay for their own travel and accommodations.



Categories: Software Testing
