I've been getting into code metrics quite a bit recently, and I've been particularly interested in the way that they change as code changes. Complexity metrics are particularly interesting. There's no shortage of them: McCabe, Halstead, etc. People have written many papers comparing and contrasting them. At the end of the day, it seems that many of them give you the same thing. There's a lot of correlation, but that doesn't stop discussion about metrics. People look at a number and they want to know what it means.
Recently, Chad Fowler, Corey Haines and I have been working on a tool called Turbulence which plots complexity versus commits for Rails projects. When you're in Ruby, gems are your goldmine. For Turbulence, we picked off flog from metric_fu and used it as our complexity measure. It's nice, except it's not so obvious what the number means. It's a weighted sum of particular constructs in methods, constructs that are supposed to make code hard to test. That's fine, but I want a measure of what's hard to understand and change. I'm sure the two overlap, but when I'm looking at a 4.5, it would be nice to know what that means.
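To make the "weighted sum of constructs" idea concrete, here's a toy scorer in plain Ruby. The weights and the construct list are invented for illustration; they are not flog's actual weights or its parsing approach (flog walks the AST rather than scanning text):

```ruby
# Toy "weighted construct" complexity score, loosely in the spirit
# of flog: each construct carries a weight, and a method's score is
# the sum over everything it contains. These weights are made up for
# illustration -- they are NOT flog's real values.
WEIGHTS = {
  /\b(?:if|unless|while)\b/ => 1.0,  # plain branching
  /\.send\b/                => 3.0,  # dynamic dispatch
  /\binstance_eval\b/       => 5.0,  # metaprogramming
}

def toy_score(source)
  WEIGHTS.sum do |pattern, weight|
    source.scan(pattern).size * weight
  end
end

method_body = <<~RUBY
  def apply(user)
    if user.admin?
      user.instance_eval(@hook)
    end
  end
RUBY

puts toy_score(method_body)  # one branch plus one instance_eval => 6.0
```

The point of the sketch is the shape of the number: a 6.0 here is one branch and one piece of metaprogramming, but you can't tell that from the 6.0 alone.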
Generally, with metrics you develop a "feel for the number." When you have that feel, the number really speaks to you. But, really, the number is a proxy for the thing you actually care about, which is usually something like "how hard is this to test or understand?" So, there's this little game that we play. We look at a number and we try to imagine what led to it. "Sure, the code is using instance_eval, but maybe it was used in a nice clear way?" I think, though, that this sort of reasoning is easier when the metrics are very concrete. It makes me wonder whether we should go back to using line counts or counts of conditionals as simple measures of complexity.
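What would those simple measures look like? A minimal sketch, again just scanning text rather than parsing (the helper name and keyword list are my own):

```ruby
# Two deliberately concrete measures: non-blank line count and a
# count of conditional keywords. Crude, but "2 conditionals in 8
# lines" is something you can picture without developing a feel
# for an abstract score.
CONDITIONALS = /\b(?:if|unless|elsif|case|while|until)\b/

def simple_measures(source)
  lines        = source.lines.reject { |l| l.strip.empty? }.size
  conditionals = source.scan(CONDITIONALS).size
  { lines: lines, conditionals: conditionals }
end

code = <<~RUBY
  def discount(order)
    return 0 unless order.eligible?
    if order.total > 100
      10
    else
      5
    end
  end
RUBY

p simple_measures(code)  # => {:lines=>8, :conditionals=>2}
```

When a number like this surprises you, you know exactly which question to ask of the code, which is the whole point of a concrete metric.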