On Tue, Mar 2, 2010 at 6:16 PM, Andrey Fedorov <[email protected]> wrote:

> Let me be explicit. Here is my model: A programming language combines a
> certain number of "perspectives" from which it lets you express algorithms
> (a "kernel language" [1] tries to isolate a single perspective).
> Perspectives consist of a set of related metaphors [2] which provide
> "thinking-blocks" for you to structure an algorithm. The "size" of "an
> algorithm expressed from a certain perspective" is measured in the number of
> these "thinking blocks" your mind needs to understand its behavior. Finding
> the optimal perspective from which to express an algorithm will lead to a
> representation which is at once "small", "simple", and "trustworthy".
>
> The Big Problem In Our Field is finding two things: --one-- new
> perspectives of computation and --two-- ways of implementing them in
> machines.
>
> 1. This term is from Van Roy and Haridi's "Concepts, Techniques, and Models
> of Computer Programming". I remember reading Kay refer to such a language
> as being a "crystallization of style".
> 2. http://en.wikipedia.org/wiki/Conceptual_metaphor
>
>
This is wrong, of course.  The idea behind multiple models, as opposed to
defining a single kernel language, is that a model need only capture some
task minimally, whereas a kernel language is designed to allow modeling of
many tasks; the simplest example of such a kernel language is the untyped
lambda calculus (e.g., see papers such as Lambda The Ultimate Declarative
and Lambda The Ultimate Imperative).  Models can complement one another,
and two models are not necessarily comparable just because they are
different.  Two models may not even be comparable in terms of abstractions:
they may be abstracting two different things, so a direct comparison would
be a category error, a potential sign of negligent design (a "design
hole"), and therefore also a chance for innovation.
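As a sketch of what "lambda calculus as kernel language" means in practice, here is my own Python rendering (not from the thread) of the state-passing idea behind Lambda The Ultimate Imperative: an imperative counter loop expressed as a pure function of explicit state, with no mutation anywhere.

```python
# An imperative loop such as:
#     total = 0
#     for i in range(1, 6):
#         total += i
# can be modeled purely, in the style of "Lambda The Ultimate Imperative",
# by threading the state through function arguments instead of mutating it.

def loop(i, total):
    # Each "iteration" is a function application; the new state is
    # a fresh value, never an update in place.
    return total if i > 5 else loop(i + 1, total + i)

result = loop(1, 0)
print(result)  # 15, same as the imperative loop above
```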

Concepts, Techniques, and Models of Computer Programming is inferior to
Design Concepts in Programming Languages by Franklyn Turbak and David
Gifford.  The former is about "how to think in Oz", while the latter is
about "how to think like a programming language designer".

Of course, you are actually not that far off, either.  The problem is that
when all solutions are uniformly small, you need a way to compare their
expressive power when they are extended.  Thus, you study complexity in
terms of what changes and what stays fixed (the absence of change), and how
many bits of information you have encoded into each solution.  Then there
is the question of how much effort you must expend to make something
trustworthy.  Trustworthiness itself has complexity issues, e.g. how many
bits of information are added, removed, or changed from one executable
specification to another.  Studying bits of information is only one
technique, derived from the work of Kolmogorov and Chaitin.  Here is a
great example using a precedence graph:
http://cr.yp.to/qhasm/20050210-fxch.txt
We can therefore measure, for example, the complexity of mapping a model to
a linear store, and also try to predict timing characteristics of the
physical model.  We can then compare these measurements across models.
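Since Kolmogorov complexity is uncomputable, a common practical proxy (my own illustration, not something from the thread) is to approximate the bits of information one specification adds to another with a general-purpose compressor:

```python
import zlib

def approx_bits(data: bytes) -> int:
    """Rough upper bound on description length, in bits, via zlib
    compression at maximum level (a crude Kolmogorov-complexity proxy)."""
    return 8 * len(zlib.compress(data, 9))

def change_bits(spec_a: bytes, spec_b: bytes) -> int:
    """Approximate bits of new information in spec_b relative to spec_a:
    how much larger is the compressed pair than spec_a alone?"""
    return max(0, approx_bits(spec_a + spec_b) - approx_bits(spec_a))

# Hypothetical "executable specifications", version 1 and version 2.
v1 = b"loop i from 1 to 5: total := total + i\n" * 20
v2 = v1 + b"assert total == 15\n"
print(change_bits(v1, v2))
```

This only bounds the true description length from above, but it is often enough to compare how much genuinely new information one revision of a specification carries over another.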

CTM really doesn't do a great job discussing any of this.  Chapter 10 on
GUIs is a good example: they make the same "low slope" encoding decision
Microsoft architect Mike Hillberg made, arguing that using syntax to
express containment is a good idea.  I am saying syntax is better used to
express the mathematical relationships that define that containment,
because if you need to change containment relations, a relation is
algebraically manipulable.  You can then separately define how to make
changes conflict-serializable.  WPF doesn't think this way; instead, it
defines concepts like a Visual Tree, a Logical Tree, and Visual and
Freezable objects.  This in turn leads to the API controlling you, and so
you spend most of your most "productive" hours figuring out how to
manipulate the static production rules of WPF's Retained Mode
architecture.  Retained Mode should be more flexible: it should allow
higher-level control of resource virtualization, and it should define UI
compositions in a way more amenable to combining interaction graphs and
scene graphs.  -- This is intended to totally *blow up* any preconceived
notions you have about user interfaces and how they should be built, e.g.
what you may have read on Martin Fowler's bliki, such as Reenskaug's
Model-View-Controller, Taligent's Model-View-Presenter, and
Sun/Oracle/IBM/Apple/Microsoft's various interpretations of these, etc.
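To make the containment point concrete, here is a minimal sketch of my own (not code from CTM or WPF; all names are invented for illustration): containment expressed as a relation of (parent, child) pairs, so that re-parenting is one algebraic operation on the relation rather than a rewrite of a nested syntactic tree.

```python
# Containment as a relation: a set of (parent, child) pairs.
# Nested syntax like Window(Panel(Button())) freezes the shape;
# a relation can be queried and transformed algebraically.

contains = {("window", "panel"), ("panel", "button"), ("panel", "label")}

def children(rel, parent):
    """All direct children of `parent` under the containment relation."""
    return {c for (p, c) in rel if p == parent}

def reparent(rel, child, new_parent):
    """Move `child` under `new_parent`: remove its old edge, add the new
    one. One algebraic step, not a structural rewrite of nested syntax."""
    return {(p, c) for (p, c) in rel if c != child} | {(new_parent, child)}

rel2 = reparent(contains, "label", "window")
assert children(rel2, "window") == {"panel", "label"}
assert children(rel2, "panel") == {"button"}
```

Because the containment relation is just data, one could then separately define which transformations of it are allowed to commute, which is where conflict-serializability of changes would enter.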
_______________________________________________
fonc mailing list
[email protected]
http://vpri.org/mailman/listinfo/fonc
