Hiroaki Kitano of Sony Computer Science Laboratories is visiting my department today and giving a talk on 'robustness' as applied to biological and engineering systems. Robustness is an interesting property to think about, as indicated in his abstract:

Robustness is a ubiquitously observed property of biological systems. It is considered to be a fundamental feature of complex evolvable systems. It is attained by several underlying principles that are universal to both biological organisms and sophisticated engineering systems. Robustness facilitates evolvability and robust traits are often selected by evolution. Such a mutually beneficial process is made possible by specific architectural features observed in robust systems. But there are trade-offs between robustness, fragility, performance and resource demands, which explain system behaviour, including the patterns of failure. Insights into inherent properties of robust systems will provide us with a better understanding of complex diseases and a guiding principle for therapy design.

His website is at:

<http://www.symbio.jst.go.jp/symbio2/>

and a particularly interesting paper is "Biological Robustness":

<http://www.symbio.jst.go.jp/symbio2/papers/NRGRobustnessKitano2004.pdf>

Basically, the idea is that "robustness" is the tendency of a system to return to a stable state after being perturbed by some external force. I can see at least a couple of interesting applications to Hackystat-based research in software engineering:
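To make that definition concrete, here is a minimal sketch of return-to-equilibrium behavior: a system state gets pushed away from its stable value by a perturbation and then relaxes back. The relaxation model and the rate parameter k are my own invented illustration, not something from Kitano's paper:

```python
def simulate(steps=50, baseline=0.0, k=0.3, perturb_at=10, perturb_size=5.0):
    """First-order relaxation toward a baseline, with a single perturbation.

    Each step, the state moves a fraction k of the way back toward the
    stable baseline; at step perturb_at, an external shock is applied.
    """
    x = baseline
    trajectory = []
    for t in range(steps):
        if t == perturb_at:
            x += perturb_size          # the external shock
        x += k * (baseline - x)        # relaxation back toward the stable state
        trajectory.append(x)
    return trajectory

traj = simulate()
# Right after the shock the state is far from baseline; by the end it has
# decayed back to (essentially) the stable value.
print(traj[10], traj[-1])
```

A 'robust' system in this toy sense is one where the trajectory always comes back to the same baseline, no matter where the shock lands.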

1. Can we see evidence of 'robustness' in the measures that we take from a developing software system? In other words, when the system is 'perturbed' (i.e. lots of commits, resulting in more build failures than normal), do we see some sort of response that brings build failures back down to a more 'normal' level? (This is the positive spin on robustness.)

2. Does robustness also explain some not-so-great properties of our development process? For example, why does our test coverage remain at 80% and never get higher over time?

3. The paper I cited above talks about "attractors": certain combinations of system parameters are more robust (more able to return to stability following a perturbation) than others. In the realm of software development, this might mean that certain combinations of software development practices are robust, while others aren't. As one example, Kent Beck has famously said something to the effect that if you don't follow all 12 practices of XP, you don't get XP. One could explain this in terms of robustness by saying that you need all 12 to create the "attractor". Interestingly, this should be testable: if it's true, then once a system (i.e. a development group) attains all 12 practices, it should be "robust", in the sense of resisting attempts to drift away from those practices.
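Question 1 above suggests one concrete measurement: estimate a metric's pre-perturbation baseline, then see how long after the perturbation the metric re-enters that baseline band and stays there. A hedged sketch on synthetic data (the daily build-failure counts, window size, and tolerance are all invented; real Hackystat sensor data would replace them):

```python
# Hypothetical daily build-failure counts: a stable baseline, a spike after
# a burst of commits, then (if the process is "robust") a return to baseline.
failures = [2, 3, 2, 2, 3,            # normal period
            9, 8, 7, 5, 4,            # perturbation: commit burst
            3, 2, 3, 2, 2]            # recovery

def recovery_time(series, baseline_window=5, tolerance=1.5):
    """Steps after the baseline window until the series re-enters
    baseline +/- tolerance and stays there; None if it never recovers."""
    baseline = sum(series[:baseline_window]) / baseline_window
    for t in range(baseline_window, len(series)):
        if all(abs(v - baseline) <= tolerance for v in series[t:]):
            return t - baseline_window
    return None

print(recovery_time(failures))  # → 5
```

A process that never yields a finite recovery time for a given metric would, on this view, not be robust with respect to that perturbation; comparing recovery times across projects or practice sets would be one way to probe the "attractor" idea in item 3.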

Fun stuff to think about.

Cheers,
Philip
