How many metrics?

When I demo Moose, I often get this question: "How many metrics does Moose offer?"

This simple, seemingly innocent question captures the essence of what is wrong with how software analysis is approached today. But first, let me answer it.

In Moose, we see metrics as properties of entities. Because we model entities as Smalltalk classes, metrics are defined as Smalltalk methods. Here is the definition of the number of methods (NOM) metric attached to an object-oriented type:

FAMIXType>>numberOfMethods
  <MSEProperty: #numberOfMethods type: #Number>
  <derived>
  <MSEComment: 'The number of methods in a class'>
  ^ self
    lookUpPropertyNamed: #numberOfMethods
    computedAs: [self methods size]

Besides the implementation, the method also carries a number of annotations. These are used to display the property in the user interface and to offer automatic import/export. The essential annotation is #'MSEProperty:type:'. Its first argument is the name of the property, and the second is the type of the property. All metrics have #Number as their type, and we can use this information to distinguish a metric definition from other methods.
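The same pattern applies to any other metric. For example, a number of attributes metric could look like the sketch below; it follows the convention above, but the actual implementation in Moose may differ in details:

FAMIXType>>numberOfAttributes
  <MSEProperty: #numberOfAttributes type: #Number>
  <derived>
  <MSEComment: 'The number of attributes in a class'>
  ^ self
    lookUpPropertyNamed: #numberOfAttributes
    computedAs: [self attributes size]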

To do the actual counting, we use Moose to analyze itself. Once we have the model, the computation is straightforward:

"count all methods annotated as a numeric MSE property, i.e., all metrics"
(self allMethods select: [:eachMethod | 
  eachMethod annotationInstances anySatisfy: [:eachAnn | 
    (eachAnn name = #'MSEProperty:type:') 
      and: [eachAnn attributes second value = #Number]]]) size

The result is 171. This is how many metrics are implemented in Moose. At least, this is the answer I can give on January 8 at 15:52 (CET).

Still, "how many metrics?" is the wrong question. The premise is that the more metrics there are, the greater the value of the tool. The premise behind the premise is that the more predefined metrics there are, the better you will be able to measure what is important in your context. This is not to say that predefined metrics are not useful. They are, but only when used in context.

Here is the catch: regardless of how many one-click metrics, or analyses for that matter, a tool ships with, they cannot accommodate your context exactly. In our case, there is no way a tool can know from outside what annotations we are using and what they mean. Yet, precisely this knowledge is the key to providing a useful answer.

Instead, you should ask: "How expensive is it to craft a new analysis?" It took me less than 5 minutes to build the above metric. This is what makes a platform truly useful.
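To make the cost concrete, here is a sketch of a context-specific metric one could craft in those few minutes. The numberOfTestMethods name is mine, not a predefined Moose property, and it assumes that test methods can be recognized by their name:

FAMIXType>>numberOfTestMethods
  <MSEProperty: #numberOfTestMethods type: #Number>
  <derived>
  <MSEComment: 'The number of methods whose name starts with test'>
  ^ self
    lookUpPropertyNamed: #numberOfTestMethods
    computedAs: [(self methods select: [:each |
      each name beginsWith: 'test']) size]

Because it carries the same #'MSEProperty:type:' annotation with #Number, it would automatically be picked up by the counting query above, turning 171 into 172.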

The number 171 is secondary. What makes the difference is the 5 minutes.

Posted by Tudor Girba at 10 January 2012, 7:55 am with tags assessment, analysis, economics

Comments

Well said!

Posted by Alexandre Bergel at 11 January 2012, 1:03 am