"Size" in terms of "number of bytes" or "number of lines" isn't interesting to me. It's a variable that is easy to measure, but is dependent on too many other variables. Just as with any model, we should be looking for variables as independent as possible.
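To make that concrete, here is a throwaway sketch (Python; both snippets are made up, not code from this thread) of why line count is a dependent variable: the same function, reformatted, changes its line count without changing its content.

```python
# A throwaway sketch of why "number of lines" is a dependent variable:
# the same content, formatted two ways, yields different line counts.
# Both snippets below are hypothetical examples.

dense = "def f(x): return x * x + 1"

spread = """def f(x):
    y = x * x
    return y + 1"""

# Same behavior, different "size" in lines:
print(len(dense.splitlines()))   # 1
print(len(spread.splitlines()))  # 3
```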
"Complexity of an algorithm" without programmers is absolutely nonsensical. I was talking about complexity *assuming maximal proficiency with a model of computation*. All I'm doing is taking the programmer's skill out as a free variable - I'm fixing a variable in order to get rid of a dimension, nothing more.

Alejandro said:

> Which one of the following systems is more complex?

Those aren't systems, they are graphs. You can define "complexity" to be whatever you wish. If you define it as (|edges| + |vertices|), system B is more complex.

I think I'm on quite a different wavelength than you guys. Arguing about the "inherent" meaning of "complexity" is an idiotic exercise in futility - leave it to the philosophy undergrads.

John Zabroski wrote:

> Andrey Fedorov wrote:
>> Let me be explicit. Here is my model: [...]
>
> This is wrong, of course.

A model is a bunch of definitions - I defined the inter-dependence of the arbitrary terms "size", "complexity", "perspective", etc. in terms of models used in Lakoff's work on cognition. Just because it doesn't agree with your intuition about what the words "inherently" mean doesn't make it "wrong". I agree that if you asked me to find measures and actual mathematical relationships between these variables, I would have to do a lot more thinking.

> Concepts, Techniques, and Models of Computer Programming is inferior to
> Design Concepts in Programming Languages by Franklyn Turbak and David
> Gifford. The former is about "how to think in Oz", while the latter is "how
> to think like a programming language designer".

That's a bit crass, but thanks for the recommendation. I'll bump it to the next thing on my reading list after I'm done with CTM.

> Of course, you are actually not that far off, too. The problem is that
> when all solutions are uniformly small, you need a way to compare their
> expressive power when they are extended.
> Thus, you study complexity in
> terms of what changes and what is fixed (the absence of change), and also
> how many bits of information you've encoded into each solution size. Then,
> there is a question of how much effort you must expend to make something
> trustworthy. Trustworthiness itself has complexity issues, e.g. how many
> bits of information are added, removed or changed from one executable
> specification to another. And just studying bits of information is only one
> technique, derived from work by Kolmogorov and Chaitin. Here is a great
> example using a precedence graph: http://cr.yp.to/qhasm/20050210-fxch.txt
> We can therefore measure, for example, the complexity in mapping a model to
> a linear store, and also try to predict timing characteristics of the
> physical model. We can also then compare them to other models.

You're talking way past me. By "complexity" I just mean "how hard is this to understand?". An experiment to measure the complexity of some code would be to take an experienced Java engineer, give him Java code he hasn't seen, and ask him to understand it to the point of being able to recreate it from memory, modify it, or otherwise demonstrate that he's internalized it to some extent. I don't see how Kolmogorov's or Chaitin's models will help here.

There are two "schools of thought" I've seen about making something "trustworthy": (1) repeating your intentions - type systems, unit tests, etc. - and (2) creating abstractions. I'm not interested in (1), as I don't think it's viable in the long term, and (2) collapses nicely into the definition of "complexity".

> CTM really doesn't do a great job discussing any such stuff. Chapter 10 on
> GUIs is a good example; they make a similar "low slope" encoding decision
> Microsoft architect Mike Hillberg made; they argue that using syntax to
> express containment is a good idea.
> I am saying syntax is better
> instead for expressing the mathematical relationships that define that
> containment, because if you need to change containment relations, it is
> algebraically manipulable. You can then separately define how to make
> changes conflict-serializable. WPF doesn't think this way; instead, it
> defines concepts like a Visual Tree, Logical Tree, Visual and Freezable
> objects. This in turn leads to the API controlling you, and so you spend
> most of your most "productive" hours figuring out how you can manipulate the
> static production rules of WPF's Retained Mode architecture. Retained Mode
> should be more flexible, and also allow for higher-level control of resource
> virtualization, as well as defining UI compositions in a way more amenable to
> combining interaction graphs and scene graphs. -- This is intended to
> totally *blow up* any preconceived notions you have about user interfaces
> and how they should be built, e.g. what you may have read from Martin
> Fowler's bliki, such as Reenskaug's Model-View-Controller, Taligent's
> Model-View-Presenter, and Sun/Oracle/IBM/Apple/Microsoft's various
> interpretations of these, etc.

I don't follow. Will try again after I read Turbak & Gifford. By then, though, we'll probably have forgotten this conversation ever took place.
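For what it's worth, the "containment as algebraically manipulable relations" idea from the quoted passage can be sketched in a few lines. Everything here (the element names, the `reparent` helper) is hypothetical for illustration, not WPF's actual API:

```python
# Hypothetical sketch: containment expressed as a relation (a set of
# parent-child pairs) rather than as nested syntax. Reparenting is then
# plain set algebra, not a structural edit of an object tree.
# All names below are made up.

contains = {("Window", "Panel"), ("Panel", "Button"), ("Panel", "Label")}

def reparent(relation, child, new_parent):
    """Rewrite the containment relation so `child` hangs off `new_parent`."""
    return {(p, c) for (p, c) in relation if c != child} | {(new_parent, child)}

# Moving Button from Panel to Window is one relational rewrite:
moved = reparent(contains, "Button", "Window")
print(("Window", "Button") in moved)  # True
print(("Panel", "Button") in moved)   # False
```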
_______________________________________________
fonc mailing list
[email protected]
http://vpri.org/mailman/listinfo/fonc
