Sean Heber wrote:

> I could send a single message to an object that, by itself, seemed obvious
> and straightforward but which actually causes a huge cascade of messages to
> flow among a huge set of objects that result in all sorts of unexpected
> consequences to the state of the original object which drastically alters
> the meaning of future messages to that object. That's the kind of complexity
> that causes problems, IMO, and it has nothing to do with size.
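A toy sketch of the point above (my own example, not anything from the thread; all names are hypothetical): the call looks like one small message, but it mutates state that silently changes the meaning of every later message to the same object.

```python
# Hypothetical example: a "small" message with a hidden cascade of
# side effects that alters the object's future behavior.

class Account:
    def __init__(self, balance):
        self.balance = balance
        self.frozen = False

    def withdraw(self, amount):
        # Looks like an obvious one-liner from the caller's perspective...
        self.balance -= amount
        # ...but overdrawing freezes the account, silently changing
        # how every future message to this object behaves.
        if self.balance < 0:
            self.frozen = True

    def deposit(self, amount):
        if self.frozen:
            raise RuntimeError("account frozen")
        self.balance += amount

acct = Account(10)
acct.withdraw(25)         # one "small" message...
try:
    acct.deposit(100)     # ...and this now fails, far from the cause
except RuntimeError as e:
    print(e)              # prints "account frozen"
```

The program is tiny by line count, yet understanding it requires tracing state that the call site never mentions.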
We just have a difference of definitions, then. I would say the side effects of a given operation definitely contribute to its size.

> Perhaps another kind of example is constructing parser grammars. I've spent
> a lot of time looking at OMeta examples and thinking I understand it. After
> all, they are often short and seemingly simple. Then I sit down and try to
> implement one from scratch and fail miserably. Perhaps I have yet to learn
> the correct mental model to use when building these, but they sure "seem"
> small and simple when I'm not trying to write one...

That's unrelated, as I said "assuming you trust the interpreter". The problems you (and I, as well) have with understanding OMeta don't have a place in a model of computation. Whatever model we use for computation, I think it shouldn't confuse the notions of "programmer competency" and "expressive power of a language".

John Zabroski wrote:

> My simplest counter-example would be...

A counter-example to your definitions of "simple", "small", and "trustworthy" isn't enough, though, because I don't agree with how you're using those words. I don't think the way you're using them conforms to any useful model. I'm literally asking for a general method I could use to measure the "simplicity", the "size", and the "trustworthiness" of a program in a way that is consistent.

Let me be explicit. Here is my model:

A programming language combines a certain number of "perspectives" from which it lets you express algorithms (a "kernel language" [1] tries to isolate a single perspective). Perspectives consist of a set of related metaphors [2] which provide "thinking blocks" for you to structure an algorithm. The "size" of an algorithm expressed from a certain perspective is measured in the number of these "thinking blocks" your mind needs to understand its behavior. Finding the optimal perspective from which to express an algorithm will lead to a representation which is "small", "simple", and "trustworthy" all at once.
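To make the "thinking blocks" idea concrete, here is a toy illustration (mine, not from the thread): the same algorithm expressed from two perspectives, where the declarative one needs fewer blocks to verify, assuming you trust the interpreter.

```python
# Hypothetical illustration of "size" measured in thinking blocks.
# Both functions compute the same result; the perspectives differ.

# State-mutation perspective: verifying this means tracking an
# accumulator, the iteration variable, the loop, and the update
# step -- roughly four "thinking blocks".
def total_imperative(xs):
    acc = 0
    for x in xs:
        acc = acc + x
    return acc

# Declarative perspective: "the sum of the list" -- closer to one
# block, assuming you trust the interpreter's built-in sum().
def total_declarative(xs):
    return sum(xs)

print(total_imperative([1, 2, 3]))   # 6
print(total_declarative([1, 2, 3]))  # 6
```

The block counts here are informal, of course; the point is only that the same behavior can cost a reader a different number of conceptual units depending on the perspective chosen.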
The Big Problem In Our Field is finding two things: (one) new perspectives of computation, and (two) ways of implementing them in machines.

An important feature of this model is that it isolates these definitions from questions about the "learning curve" or "experience" of a given programmer. It doesn't address the pertinent question of "how well can a given programmer solve a given problem?", as that would involve measuring the "intuition" a programmer has in regard to different perspectives, and the size of the solution from those perspectives. Those are non-trivial to measure.

Cheers,
Andrey

1. This term is from Van Roy and Haridi's "Concepts, Techniques, and Models of Computer Programming". I remember reading Kay refer to such a language as being a "crystallization of style".
2. http://en.wikipedia.org/wiki/Conceptual_metaphor

On Tue, Mar 2, 2010 at 4:59 PM, Sean Heber <[email protected]> wrote:

> On Mar 2, 2010, at 3:18 PM, Andrey Fedorov wrote:
>
> > John Zabroski wrote:
> > the three stumbling blocks are size, complexity and trustworthiness
> >
> > How are these different?
> >
> > A small program is a simple program by definition, assuming it's
> > expressed in an intuitively comprehensible way.
>
> I'm not so sure that a small program is necessarily simple.
>
> I could send a single message to an object that, by itself, seemed obvious
> and straightforward but which actually causes a huge cascade of messages to
> flow among a huge set of objects that result in all sorts of unexpected
> consequences to the state of the original object which drastically alters
> the meaning of future messages to that object. That's the kind of complexity
> that causes problems, IMO, and it has nothing to do with size.
>
> Perhaps another kind of example is constructing parser grammars. I've spent
> a lot of time looking at OMeta examples and thinking I understand it. After
> all, they are often short and seemingly simple.
> Then I sit down and try to
> implement one from scratch and fail miserably. Perhaps I have yet to learn
> the correct mental model to use when building these, but they sure "seem"
> small and simple when I'm not trying to write one...
>
> l8r
> Sean
>
> _______________________________________________
> fonc mailing list
> [email protected]
> http://vpri.org/mailman/listinfo/fonc
