Thursday, January 7, 2010

Powerful Metrics for Testers

Insanity has been defined as "doing the same thing over and over and expecting a different result." As a tester, it is easy to get into a rut of repeating the same activities without ever improving your test process or identifying the things that bring real success to your job. To improve, we need to understand our goals and measure our progress toward them. This article discusses how to develop metrics that help you achieve specific goals.

What are Powerful Metrics?

Metrics are simply a way to measure specific things you are doing. You could create hundreds of irrelevant metrics that never move you any closer to solving your everyday problems; powerful metrics are a small, targeted set of indicators that help you accomplish a specific set of goals.

Metrics Driven by Specific Goals

Before defining your metrics, ask yourself "What goal(s) am I trying to accomplish?" and write the answers down. For example, typical goals for a tester might be to:

  • Ensure that each new feature in the release is fully tested
  • Ensure that our release date is realistic
  • Ensure that existing features don't break with the new release

Now that we have our goals defined, let's figure out how we can create a set of metrics to help us accomplish them.

"Ensure that each new feature in the release is fully tested"
To do this, we must be confident that we have enough positive and negative test cases to fully test each new feature. One way to check is to create a traceability matrix that counts the total number of test cases, positive test cases, and negative test cases for each requirement:

Requirement                              # Test Cases  # Positive  # Negative  Comments
Visual Studio Integration                104           64          40          Good Coverage
Perforce Integration                     0             0           0           No Coverage
Enhanced Contact Management Reporting    45            45          0           No Negative Test Case Coverage
Contact Management Pipeline Graph        10            8           2           Too few test cases

“Ensure that our release date is realistic”
To do this, we need daily running totals that show test progress: how many test cases have been run, how many have passed, how many have failed, and how many are blocked. By trending these totals day by day, we can determine whether we are progressing toward the release date.
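If you track test case statuses in a spreadsheet or database, the daily totals are a simple tally. Here is a minimal Python sketch; the status values and the sample day's results are hypothetical:

    from collections import Counter

    # Hypothetical snapshot for one day: one status per test case
    statuses = ["Passed", "Passed", "Failed", "Blocked", "Awaiting Run",
                "Passed", "Failed", "Awaiting Run", "Blocked", "Passed"]

    totals = Counter(statuses)
    run = totals["Passed"] + totals["Failed"]  # cases actually executed
    print(f"Run: {run}  Passed: {totals['Passed']}  "
          f"Failed: {totals['Failed']}  Blocked: {totals['Blocked']}  "
          f"Awaiting Run: {totals['Awaiting Run']}")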

Software Planner provides graphs that make this easy.

Another way to do it is to create a burn down chart that spans the testing days. Each day, the chart decrements by the number of outstanding test cases still to be run (awaiting run, failed, or blocked), and we can compare those totals to the expected daily numbers to determine whether the release date is realistic. For example, with a two-week QA cycle, we define the total number of test cases to run, then track day by day how many are still left to complete versus how many should be left.

In the example above, we can see that we are not getting through the test cases quickly enough. On day 5, we are behind schedule by 15 test cases, and the burn down graph shows this clearly: the red area (actual test cases left) is taller than the blue area (expected test cases left).
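The burn down arithmetic itself is straightforward. Here is a minimal Python sketch, assuming a hypothetical 10-working-day cycle with 150 test cases and made-up daily actuals chosen to match the day-5 example above:

    total_cases = 150   # hypothetical total for the two-week (10 working day) cycle
    cycle_days = 10
    burn_rate = total_cases / cycle_days  # expected cases completed per day

    # Hypothetical actual cases still left (awaiting run, failed, or blocked)
    actual_left = [140, 128, 115, 103, 90]

    for day, left in enumerate(actual_left, start=1):
        expected_left = total_cases - burn_rate * day
        gap = left - expected_left
        status = "behind" if gap > 0 else "on/ahead of"
        print(f"Day {day}: expected {expected_left:.0f} left, actual {left} left "
              f"({abs(gap):.0f} cases {status} schedule)")

On day 5 this prints an expected 75 cases left versus 90 actually left, the 15-case shortfall described above.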

“Ensure that existing features don’t break with the new release”
To do this, we must run regression tests (manual or automated) that exercise the old functionality. Note: this assumes we already have enough test cases to fully test the old functionality; if coverage is thin here, we must create additional test cases first. For regression tests, you can normally stick to positive test cases, unless you know of specific negative test cases that help prevent common issues in production.

To ensure you have enough of these test cases, it is a good idea to group the regression test cases by functional area. Let's assume we are developing a Contact Management system and want to be sure we have regression test cases assigned to each functional area of that system:

Functional Area                      # Test Cases  Comments
Contact Management Maintenance       25            Good Coverage
Contact Management Reporting         0             No Coverage
Contact Management Import Process    10            Good Coverage
Contact Management Export Process    5             Coverage may be too light
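This grouping is another easy tally to automate. Here is a minimal Python sketch; the area names and case counts are the hypothetical ones from the table above, and the threshold of 10 cases per area is an assumption:

    from collections import Counter

    # Hypothetical regression suite: each test case tagged with its functional area
    regression_cases = (
        ["Contact Management Maintenance"] * 25
        + ["Contact Management Import Process"] * 10
        + ["Contact Management Export Process"] * 5
    )

    # Every area the system has, so untested areas still show up as gaps
    areas = [
        "Contact Management Maintenance",
        "Contact Management Reporting",
        "Contact Management Import Process",
        "Contact Management Export Process",
    ]

    counts = Counter(regression_cases)
    for area in areas:
        n = counts.get(area, 0)
        if n == 0:
            comment = "No Coverage"
        elif n < 10:  # assumed minimum cases per functional area
            comment = "Coverage may be too light"
        else:
            comment = "Good Coverage"
        print(f"{area}: {n} test cases - {comment}")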

To determine if release dates are reasonable for your regression testing effort, use the same burn down approach illustrated above.

Summary

As the new year begins, dedicate yourself to improving your work by identifying your goals and tracking metrics that show how you are trending toward them. The metrics listed above work well for my team, but I would like to hear which metrics you find helpful in your organization. To share them with me, please fill out this survey: http://www.surveymonkey.com/s/DW656JZ.

If we get great feedback, we will publish the results in next month's article so that we can all learn from each other.
