By Joost Visser

Writing code is easy, maintaining it is not. So what if we could make it easy to write maintainable code? “Current development speed is a function of past development quality” (@brianm). In other words: the quality of what you write today hinders or supports the development of new stuff tomorrow. There is all kinds of evidence that code quality influences the speed and cost of software development. Yet 37% of organizations have no code quality control at all, and only 17% fully apply code quality control (full = running code quality tools regularly and acting on the results). In a way, metrics are there to measure code quality, and thus (indirectly) maintainability.

Sustainable business requires maintainable software. Maintainability is the prime quality for software: to remain suitable, your software needs to change. The standard calls this evolvability: analysability (to understand where and how to modify), modifiability (making the change), testability (did you do it right?), and modularity and reusability (to ensure you can build on it in the next iteration). Note there is nothing about architecture or documentation here! This all sounds nice, but how do you measure it? One way is to look at code metrics, as done in [1]. Many metrics apply to a single software unit, such as a method or module. Just taking the average is not sufficient, as such metrics do not follow a normal distribution. In [1], they decided to aggregate several metrics (in a statistically correct(!) way) into categories (low, moderate, high and very high).
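The aggregation idea can be sketched as follows: classify each unit into a risk category and then report what share of the total code volume falls into each category, rather than averaging. The threshold values and the toy data below are illustrative assumptions, not the published calibration from [1].

```python
# Sketch of metric aggregation into risk categories, in the spirit of [1].
# The cut-off values below are assumptions for illustration only.

def risk_category(cyclomatic_complexity):
    """Classify a single unit (method/function) by its complexity."""
    if cyclomatic_complexity <= 10:
        return "low"
    if cyclomatic_complexity <= 20:
        return "moderate"
    if cyclomatic_complexity <= 50:
        return "high"
    return "very high"

def risk_profile(units):
    """Aggregate units into a risk profile: the fraction of total code
    volume (lines of code) that falls into each risk category."""
    total = sum(loc for loc, _ in units)
    profile = {"low": 0.0, "moderate": 0.0, "high": 0.0, "very high": 0.0}
    for loc, complexity in units:
        profile[risk_category(complexity)] += loc / total
    return profile

# Each unit is (lines_of_code, cyclomatic_complexity).
units = [(120, 4), (80, 15), (40, 25), (10, 60)]
print(risk_profile(units))
```

The point of the profile over the average: one 60-complexity unit disappears in a mean, but stays visible as a "very high" slice of the codebase.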

One way to improve the model was perception: instead of a scale from −− to ++, use a scale from 1 star to 5 stars. No change in the measure, just in perception. Sentiment is important as well (gamification 🙂)! Also, because they were consultants and saw lots of code, they could build benchmarks to derive metric thresholds, rather than just coming up with some numbers. This way, the numbers stay in tune with practice. It only works under the assumption (or hope, as Joost puts it) that quality will always improve. A bit like Cito stating that our mathematics education is still at a high level, since every year the distribution of grades is the same…
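Deriving thresholds from a benchmark rather than inventing them can be sketched as taking percentiles over measurements from many systems. The 70/80/90 percentile split and the sample data below are illustrative assumptions, not the calibration actually used by the consultants.

```python
# Benchmark-derived thresholds (a sketch): take percentiles over a
# benchmark of measured values, so the scale recalibrates as industry
# practice (hopefully) improves. Percentile choices are assumptions.

def percentile(sorted_values, p):
    """Nearest-rank percentile of a pre-sorted list (0 < p <= 100)."""
    k = max(0, int(len(sorted_values) * p / 100.0 + 0.5) - 1)
    return sorted_values[k]

def derive_thresholds(benchmark_values):
    """Cut-offs for 'moderate', 'high' and 'very high' risk, taken
    from the benchmark distribution instead of from gut feeling."""
    values = sorted(benchmark_values)
    return {
        "moderate": percentile(values, 70),
        "high": percentile(values, 80),
        "very_high": percentile(values, 90),
    }

# Toy benchmark: complexities observed across many systems.
complexities = [1, 2, 2, 3, 3, 4, 5, 6, 8, 12, 15, 22, 30, 45, 80]
print(derive_thresholds(complexities))
```

Re-running this on a fresh benchmark each year is what keeps the star rating relative to current practice, which is exactly the Cito-style normalization the talk jokes about.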

Another metric added was component balance [2,3]: is there a good number of components, of good relative size, again relative to all other systems (so if everyone has 7 components, do you need 7 as well?). Encapsulation is seen as a good practice. Hence, the degree of component independence can be calculated: how many components do you need to alter to change something? Conclusion: maintainability is something relative?
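The independence idea can be illustrated with a toy call graph: how much of the code lives in modules that neither call into nor are called from other components? This is a simplified sketch inspired by the dependency profiles of [3], not the paper's exact definition; the module and call data are assumptions.

```python
# Sketch of component independence: the fraction of code volume in
# modules that have no cross-component calls in either direction.
# Toy data below is assumed for illustration.

def independence(modules, calls):
    """modules: {module: (component, lines_of_code)}
    calls: set of (caller_module, callee_module) pairs."""
    crossing = set()
    for caller, callee in calls:
        if modules[caller][0] != modules[callee][0]:
            crossing.add(caller)   # has an outbound cross-component call
            crossing.add(callee)   # receives an inbound cross-component call
    total = sum(loc for _, loc in modules.values())
    internal = sum(loc for m, (_, loc) in modules.items() if m not in crossing)
    return internal / total

modules = {
    "a1": ("A", 100), "a2": ("A", 50),
    "b1": ("B", 200), "b2": ("B", 150),
}
calls = {("a1", "b1"), ("b1", "b2")}  # only a1 -> b1 crosses components
print(independence(modules, calls))
```

The higher this fraction, the more of a change is likely to stay inside a single component, which is the encapsulation argument in one number.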

Bottom line: metrics are hints, not the holy grail! You can test and use it yourself.

[1] Ilja Heitlager, Tobias Kuipers, and Joost Visser, A Practical Model for Measuring Maintainability, in proceedings of the 6th International Conference on the Quality of Information and Communications Technology (QUATIC 2007), IEEE, 2007.

[2] Eric Bouwers, José Pedro Correia, Arie van Deursen, and Joost Visser, Quantifying the Analyzability of Software Architectures, in proceedings of the 9th Working IEEE/IFIP Conference on Software Architecture (WICSA 2011), pp. 83-92, IEEE Computer Society, 2011.

[3] Eric Bouwers, Arie van Deursen, and Joost Visser, Dependency Profiles for Software Architecture Evaluations, in proceedings of the 27th IEEE International Conference on Software Maintenance (ICSM 2011), pp. 540-543, IEEE, 2011.

IPA Fall days 2017 – Building Maintainable Software