Posts Tagged ‘cmmi’

Volatile measures

March 11, 2010

Many clients working towards CMMI maturity level 2 have to deal with measurement & analysis as well as requirements management. In fact, they are expected to measure their requirements process. They often resort to the good old standby of measuring “requirements volatility”. Until I ask them why…

You may have a good reason to measure requirements volatility. If you do, please write it down, because it should be fundamental to your measurement & analysis process. At maturity levels 1 and 2, most companies don’t have a good understanding of their processes. I would therefore expect most measurement indicators to focus on gaining an understanding of the process: indicators that answer questions such as “what factors have a significant impact on the effort or quality of my requirements process?”. Answering this requires trial and error. You may have a hunch that, say, the quality of the coffee has a significant influence on the effort required to develop requirements. In that case, you must develop measures to put this assumption to the test. Whatever the results, they will be valuable because you will learn something. Either you will learn that the quality of the coffee does indeed have a significant impact – in which case you can move on to controlling, and after that improving, the quality of the coffee. Or you will learn that the quality of the coffee does not have a significant impact, in which case you must develop a different theory and put that to the test. (Note: you may still want to keep the quality of the coffee at an acceptable level – it may have an impact on other processes…)
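To make the idea of putting an assumption to the test concrete, here is a minimal sketch – all data and names are hypothetical illustrations, not a prescribed metric. It checks whether a hypothesized factor (a weekly coffee-quality rating) correlates with the effort spent developing requirements:

```python
# Hedged sketch: testing whether a hypothesized factor (here, a weekly
# coffee-quality rating) correlates with requirements effort.
# All numbers are made up for illustration.

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# One observation per week: (coffee quality 1-5, effort in person-hours)
observations = [(2, 40), (4, 31), (3, 35), (5, 28), (1, 44), (4, 30)]
quality = [q for q, _ in observations]
effort = [e for _, e in observations]

r = pearson(quality, effort)
print(f"correlation: {r:.2f}")  # strongly negative here -> worth a closer look
```

A strong correlation would not prove causation, of course – but either outcome teaches you something about the process, which is exactly the point at maturity levels 1 and 2.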

So, how does requirements volatility fit into this understand-control-improve scheme? Presumably, at maturity level 1 or 2, it is based on an assumption that requirements volatility is significant in some way. The tale I’m often told is that high requirements volatility indicates that the requirements are not ready for the next stage of the development process. To me that means the requirements are not stable enough to create a baseline. Unfortunately, most definitions for measuring requirements volatility are set up to measure changes to requirements after a baseline has been created. Either this is based on a very different definition of ‘baseline’ (closer to my definition of ‘snapshot’), or the indicator cannot be used during the crucial early stages. Sure, requirements volatility can be of use in the later stages of a project. But when used only in the later stages, what does it tell us about the requirements process? Well, some say, high requirements volatility shows that the quality of the baselined requirements was insufficient. That is an assumption, and I would hope that the first step is to determine whether it is valid. So I would prefer to start collecting measures that can clarify whether the assumption holds, before leaping off and taking action on it.

In summary, while requirements volatility may be a useful indicator in some organizations, it should not be the first one to adopt. Also, initial indicators could be short-lived (volatile, even): as the organization finds out which assumptions hold and which don’t, it moves on to the next set of indicators.