Are We Done Yet?

Happy Valentine’s Day! I ran into a lot of people at last night’s SPIN. The meeting was held at the PDR offices, and PDR’s QA Practice Director, Pat Freeman, presented on Software Quality Metrics for Decision Making, which we’ll get into in a minute. Kishore distributed copies of Better Software magazine; you can apply for a free digital subscription. About 30 folks attended. Matt Cardarella, Katie Pattison, and Bridget Ganow (I’m sorry if I butchered your last names) of Speedway traveled down from north Dayton to attend. As I said earlier, SPIN will change the way you work and increase the quality of your output, and seeing these folks make the trek just reinforces SPIN’s value for me.

Bridget and I talked about the Speedway loyalty program for a few minutes. She’s an analyst for the program. I signed up for it midsummer. What confused me is that the application for the Speedway program didn’t mention any of the benefits, so it took me a few days to complete the app because I was not sure of the value. I finally completed it, mostly because they promised not to sell my information, figuring I could learn the benefits at some future time. That future time was last night, as Bridget filled me in on all the details.

Kishore, Joe, Julie Nimitz, Senior Account Director of Partner Technology, and Russ McMahon, all regular attendees, were there. I also ran into Brian Williams, a fellow ex-Whittman-Hart-er who’s been a contract analyst for TEKSystems at Cincinnati Financial. I had a chance to meet Mark Yozwiak, VP of Business Development for TPSi, for the first time. TPSi provides technology, engineering, and database solutions, and is currently positioned as a smaller-scale dbaDIRECT.

I believe PDR provided some great food for this event. Be sure to check out the March meeting: Putting Engineering in Software Engineering. As an aside, I’ve also been learning of some interesting IT initiatives in Cincinnati. Pay attention to Friday’s news for some changes in the local IT market.

So Pat began the presentation, and it was quickly obvious that the guy knows his stuff. In line with the KISS mentality (Keep It Simple, Stupid), Pat taught us to answer two, and only two, critical questions: 1) What is the current state of testing in terms of quality and progress? 2) Is the testing finished? Any testing effort, metrics, measurement, meetings, and all other testing output have to align with these two questions, or the effort is wasted and distracting. Many companies can’t put a finger on the current state of quality, which boils down to the defect rate and the total number of defects. The other question, are-we-done-yet, means more than just reaching the end of a test cycle. Done needs to be defined in terms of the current state of testing.

The purpose of software quality metrics is to facilitate critical decision making. Exit criteria are defined up front so that the organization agrees on the definition of Done before getting into the weeds of an actual test cycle. Process control allows an organization to tweak testing during a cycle to better answer question 1. When an organization can apply disciplined testing based on specified metrics over the course of a test cycle, then multiple cycles, then multiple systems, it can engage in continuous process improvement, where the metrics become more and more meaningful and are interpreted more consistently.

Pat defines 7 simple metrics:

  • Total open defects
  • Total critical defects
  • Total defect arrival rate (defects per day)
  • Critical defect arrival rate (defects per day)
  • Total mean time to discover a defect (testing hours per defect found)
  • Mean time to discover a critical defect (testing hours per defect found)
  • Test cases executed vs test cases planned

These seven metrics align with the two questions of current-state and are-we-done, and facilitate critical decision making. Measurements are typically a composite 5-day rolling average in order to preempt knee-jerk reactions to one-day spikes. What you need from your testers is simply the total hours spent testing each day. You should have all of the other numbers from counting test cases and defects; from there you can derive the rates and mean times, as in the sketch below.
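To make that derivation concrete, here’s a minimal sketch in Python. This is my own illustration, not anything from Pat’s presentation, and every number in it is hypothetical:

```python
from statistics import mean

# Hypothetical raw capture: one entry per day of the test cycle.
# Testers report only their hours; defect counts come from the tracker.
daily_new_defects   = [12, 9, 14, 8, 6, 5, 4, 3, 2, 1]
daily_new_critical  = [3, 2, 4, 1, 1, 0, 1, 0, 0, 0]
daily_testing_hours = [40, 38, 42, 40, 36, 40, 40, 38, 40, 40]

WINDOW = 5  # composite 5-day rolling average, per the presentation

def rolling_mean(series, window=WINDOW):
    """Trailing rolling average to smooth out one-day spikes."""
    return [mean(series[max(0, i - window + 1):i + 1])
            for i in range(len(series))]

# Metrics 3 and 4: defect arrival rates (defects per day)
total_arrival_rate    = rolling_mean(daily_new_defects)
critical_arrival_rate = rolling_mean(daily_new_critical)

def hours_per_defect(hours, defects, window=WINDOW):
    """Metrics 5 and 6: testing hours spent per defect found,
    computed over the same trailing window."""
    out = []
    for i in range(len(hours)):
        lo = max(0, i - window + 1)
        h, d = sum(hours[lo:i + 1]), sum(defects[lo:i + 1])
        out.append(h / d if d else float("inf"))  # inf: nothing found
    return out

mttd_any      = hours_per_defect(daily_testing_hours, daily_new_defects)
mttd_critical = hours_per_defect(daily_testing_hours, daily_new_critical)

print(f"latest arrival rate: {total_arrival_rate[-1]:.1f} defects/day")
print(f"latest MTTD (critical): {mttd_critical[-1]:.1f} hours/defect")
```

Metrics 1 and 2 (total open and open critical defects) come straight from the tracker’s open counts, and metric 7 is just a pair of counts, so nothing here needs more than arithmetic.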

Just as critical as choosing the right measurements is knowing which metrics you don’t need for decision making. Pat cautioned us to stay away from any metric that is too difficult to measure or too open to interpretation, as well as anything irrelevant to making decisions around the two questions. Some examples might be measuring the age of open defects, or the number of defects per developer. Neither of these answers the two questions. You’ll also want to stay away from any punitive use or interpretation of the metrics, as you’ll find people misreporting in the future in order to avoid penalties.

To find success, Pat urged disciplined and ruthless capture of every defect, including rejected defects, since you still need to take some action on them. Just as important, as you’ll see from the charts, is capturing tester time on a daily basis. Now, you want to make this as simple as possible for the testing team in order to make sure you get the metrics. Don’t overburden a testing team with a complex metrics-capturing system. A successful system can be as simple as Excel, just as long as you capture the data. So, if this means having the team email you total hours each day so that you can enter the data, so be it.
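If Excel feels like too much ceremony, even a tiny script appending to a CSV will do. A sketch, with a hypothetical file name and column layout of my own choosing:

```python
import csv
from datetime import date

def record_day(path, hours, new_defects, new_critical, executed):
    """Append one day's raw numbers; everything else is derived later."""
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow([date.today().isoformat(), hours,
                                new_defects, new_critical, executed])

# e.g. after the team emails in their hours for the day:
record_day("test_metrics.csv", 38.5, 7, 1, 112)
```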

Now that you have your metrics, create charts that make the information simple to understand. Each chart has a purpose, and no chart alone tells the entire story. Together, all the information allows for informed decision making. You’ll want to work hard to ensure buy-in from key decision makers before testing begins so that everyone understands the purpose of the metrics and how they will support decision making. Then distribute the information every day at the same time, along with your summary or interpretation of the metrics for that day. As people get used to receiving your information, if you’re late by 15 minutes one day you’ll get a call about it. That’s when you know you’ve been successful in educating the organization.

If you don’t find yourself getting buy-in on the value of the metrics, get out of your chair and go talk to people. You’ll need to reinforce the process a number of times until folks understand the value.

Pat asked me not to post the example charts here in order to maintain the integrity of his presentation. That’s understandable. You can find the presentation posted on the SPIN site.

You’ll find 7 charts. Here’s my interpretation of what you need to know. On each chart you’ll see a horizontal waterline that describes the exit criteria you agreed upon before the testing cycle began. This waterline and the accompanying data allow you to make informed decisions.
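Since I can’t reproduce the charts here, here’s a sketch of the waterline idea in code. The metric names, thresholds, and directions below are hypothetical examples of mine, not Pat’s actual exit criteria:

```python
# Each chart's exit criterion is a horizontal waterline; "done" on a
# given chart means the latest data point has crossed it.
waterlines = {
    # metric: (threshold, which side of the line counts as passing)
    "open_defects":          (25, "at_or_below"),
    "open_critical_defects": (0,  "at_or_below"),
    "defect_arrival_rate":   (2,  "at_or_below"),
    "mttd_hours":            (7,  "at_or_above"),
}

def meets_exit_criteria(metric, latest_value):
    threshold, side = waterlines[metric]
    return (latest_value <= threshold if side == "at_or_below"
            else latest_value >= threshold)

# e.g. the 10/31 situation on chart 1: total defects still above the line
print(meets_exit_criteria("open_defects", 31))          # False
print(meets_exit_criteria("open_critical_defects", 0))  # True
```

Remember, though, that crossing one waterline doesn’t decide anything by itself; the charts are read together.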

Chart 1 – Total Open Defects

This chart describes all open defects of any severity level. Points of interest include the appearance of a downward trend toward the end of the testing cycle. This trend needs additional corroboration against the other data, but looks good. Also, the 10/31 defect count is above the waterline yet includes no high-severity defects that prohibit a user from doing their job. So stakeholders might be able to make a decision to release even though the total number of defects is above the agreed exit criteria.

Chart 2 – Open Critical Defects

This chart describes all defects that prohibit a user from performing their job responsibilities. The waterline here is at zero. You can see this testing team reached the waterline threshold with one day to spare. Again, the downward trend at the end of the testing cycle looks good; you’ll want to ensure this trend agrees with the other metrics. Let’s say critical defects still existed at the end of the testing cycle. Stakeholders could potentially make a decision to release anyway, knowing the functionality where the defect exists won’t be exercised for three months, which gives the development team an opportunity to fix the issue and release again. The point is that you now have the metrics to make a decision like this.

Chart 3 – Defect Arrival Rate

This chart describes how many new defects are discovered each day. You’ll see the same downward trend toward the end of the testing cycle. Also notice that the metrics here are below the waterline at the end of the cycle. Points of interest include the 10/23 and 10/24 points in the cycle, which show there are still a lot of defects left to find. Around 10/13 and 10/14 the metrics could show that you’re done with critical defects, but you’ll want to dig to ensure that is the case. It could be that testing has slowed due to vacation schedules, or because the development team is in fix-and-release mode and has asked for testing to slow.

Chart 4 – Critical Defect Arrival Rate

This chart explains how many new critical defects are found each day. Again, notice the downward trend and the waterline at the end of the cycle.

Chart 5 – Mean Time to Discover a New Defect

This chart describes how many testing hours the team spends to find a new defect of any severity. According to Pat, if this number begins reaching the 4-to-7-hour mark, the testing team may be getting close to the end of a testing cycle. A higher number is better here, because it means the team has to spend more and more time to uncover issues. This chart’s accuracy depends on testers reporting their actual testing hours.

Chart 6 – Mean Time to Discover Critical Defects

This chart shows how many testing hours the team spends to find a new critical defect. You’ll see the waterline set much higher here.

Chart 7 – Test Execution Progress

This chart maps the planned test case progress vs. the actual testing progress. The waterline is set to “all test cases.”
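As a last sketch, metric 7 is just cumulative executed cases against the plan, with the waterline at 100 percent. The counts here are hypothetical:

```python
planned_total = 300  # hypothetical size of the test plan
executed_per_day = [20, 25, 30, 28, 35, 40, 38, 30, 30, 24]

cumulative = 0
for day, n in enumerate(executed_per_day, start=1):
    cumulative += n
    pct = 100 * cumulative / planned_total
    print(f"day {day:2}: {cumulative}/{planned_total} cases ({pct:.0f}%)")
```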

In the end, with all of the data mapped onto easy-to-understand charts, a project stakeholder can make informed decisions about the software process based on testing progress.

Andy
