Excerpts from Mike Spreitzer's message of 2014-10-16 22:24:30 -0700:
> I like the idea of measuring complexity. I looked briefly at `python -m
> mccabe`. It seems to measure each method independently. Is this really
> fair? If I have a class with some big methods, and I break it down into
> more numerous and smaller methods, then the largest method gets smaller,
> but the number of methods gets larger. A large number of methods is
> itself a form of complexity. It is not clear to me that said re-org has
> necessarily made the class easier to understand. I can also break one
> class into two, but it is not clear to me that the project has
> necessarily become easier to understand. While it is true that when you
> truly make a project easier to understand you sometimes break it into
> more classes, it is also true that you can do a bad job of re-organizing
> a set of classes while still reducing the size of the largest method.
> Has the McCabe metric been evaluated on Python projects? There is a
> danger in focusing on what is easy to measure if that is not really what
> you want to optimize.
>
> BTW, I find that one of the complexity issues for me when I am learning
> about a Python class is doing the whole-program type inference so that I
> know what the arguments are. It seems to me that if you want to measure
> complexity of Python code then something like the complexity of the
> argument typing should be taken into account.
>
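The per-method scoring the quoted message describes can be illustrated with a stdlib-only sketch. The `Big`/`Split` classes are invented examples, and the metric here (1 + number of branching statements) is a simplified stand-in for mccabe's real algorithm, not its actual implementation — the point is only that scoring each method independently rewards splitting one big method into several small ones:

```python
import ast

# Two semantically equivalent classes: Split is Big with its loop
# extracted into a helper method.
SOURCE = """
class Big:
    def huge(self, x):
        if x > 0:
            for i in range(x):
                if i % 2:
                    x += i
        return x

class Split:
    def huge(self, x):
        if x > 0:
            x = self._loop(x)
        return x

    def _loop(self, x):
        for i in range(x):
            if i % 2:
                x += i
        return x
"""

# Simplified stand-in for cyclomatic complexity: count branch nodes.
BRANCHES = (ast.If, ast.For, ast.While, ast.Try, ast.With, ast.BoolOp)

def complexity(func):
    """Score one function independently: 1 + number of branch nodes."""
    return 1 + sum(isinstance(n, BRANCHES) for n in ast.walk(func))

def report(source):
    """Map 'Class.method' -> per-method score for every method."""
    scores = {}
    for cls in ast.parse(source).body:
        if isinstance(cls, ast.ClassDef):
            for node in cls.body:
                if isinstance(node, ast.FunctionDef):
                    scores[f"{cls.name}.{node.name}"] = complexity(node)
    return scores

scores = report(SOURCE)
# Big has one method scoring 4; Split has two methods scoring 2 and 3.
# The worst per-method score drops, but the method count grows -- the
# class-level complexity the mail worries about is never measured.
print(scores)
```

Running this shows `Big.huge` at 4 while `Split`'s worst method is 3, even though `Split` carries an extra method — exactly the re-org the mail argues a per-method metric cannot evaluate.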
Fences don't solve problems. Fences make it harder to cause problems. Of
course you can still do the wrong thing and make the code worse. But you
can't do _this_ wrong thing without asserting why you need to.

_______________________________________________
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev