On Nov 20, 2007, at 2:50 AM, Waldemar Kornewald wrote:
Something is conceptually wrong and I don't yet have an idea what it is, but from reading on your website (separating intention from optimization, etc.) I guess you do. Could you please elaborate? (if you are allowed to do so :)
This is something we'll be building above the *OLA stuff (we build it to fully understand it) before the results can be trickled down into *OLA itself. So it's a little off topic for the immediate goals of the *OLA/FoNC work, but what the hey: here's the harmonic mean of the answers you might get from Alan and me for the above question...
We can define 'y = sort x' as 'y = permute x and y[i] <= y[i+1]'. This is possibly the shortest (ASCII) way of writing the intent of sorting, but it's a useful implementation only for very small sizes of x.
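To make that concrete, here is a minimal Python sketch (my illustration, not from the original post) that runs the meaning directly: search the permutations of x for one whose adjacent elements are ordered. It is factorial time, which is exactly why it is only an "implementation" for very small x.

```python
from itertools import permutations

def sort_meaning(x):
    """The meaning 'y = permute x and y[i] <= y[i+1]', run literally.

    Enumerate permutations of x and return the first one whose
    adjacent elements are in order.  Correct by construction, but
    O(n!) -- usable only when len(x) is very small.
    """
    for y in permutations(x):
        if all(y[i] <= y[i + 1] for i in range(len(y) - 1)):
            return list(y)

# e.g. sort_meaning([3, 1, 2]) yields [1, 2, 3]
```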
There are many optimisations that have been devised for this meaning. None adds anything (except useful pragmatics) and all reduce the clarity of the meaning. The optimisations can be separated out into a completely distinct category, with the original meaning used as an expression of intent and as a check on the correctness of its optimisations.
One of our longer-term goals is to devise a dynamic 'runnable mathematics' that can serve as the vehicle for expressing these meanings. In some cases the optimisations can (and should) be generated from the meanings; elsewhere the optimisations will have to be supplied by hand. In all cases the system must be able to run via the meanings alone (however slowly) and the meanings are the sole arbiters of the integrity of the optimisations.
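A small sketch of that last point (again my own illustration, with hypothetical names): the meaning of sorting is cheap to *check* even though it is expensive to *run*, so it can serve as the sole arbiter of a hand-supplied optimisation such as quicksort.

```python
import random
from collections import Counter

def is_sort_of(x, y):
    """The meaning as a checker: y sorts x iff y is a permutation
    of x (same multiset of elements) and y is ordered."""
    return (Counter(y) == Counter(x)
            and all(y[i] <= y[i + 1] for i in range(len(y) - 1)))

def quicksort(x):
    """A hand-supplied optimisation of the same meaning."""
    if len(x) <= 1:
        return list(x)
    pivot, rest = x[0], x[1:]
    return (quicksort([a for a in rest if a < pivot])
            + [pivot]
            + quicksort([a for a in rest if a >= pivot]))

# The meaning arbitrates the integrity of the optimisation:
for _ in range(100):
    x = [random.randrange(50) for _ in range(random.randrange(20))]
    assert is_sort_of(x, quicksort(x))
```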
An immediate goal of *OLA is to provide the flexible syntax and semantics needed to implement this 'runnable mathematics of meaning', to empower it with abstractions drawn from diverse paradigms (imperative, declarative, functional, logical, etc. -- whichever best fits a clear and concise expression of a particular intent), and to do so in such a way that the math can 'reason' about itself and generate its own optimisations in as many cases as possible.
This isn't a panacea: meanings will have to be debugged for the same reasons that proofs in math are not always correct (not all cases are identified, small and large errors of logic, etc.).
There are too many degrees of freedom for something like 'predicates and proofs' to work everywhere. Many methods will be collections of 'bounding heuristics' (TCP is a good example) that together successfully give rise to dynamic stability. So a 'meaning' will generally be the simplest runnable heuristics that can both generate and check the desired results, where the aim is comprehensive simplicity and the pragmatics are left to the separated optimisations.
HTH. Cheers,
Ian

_______________________________________________
fonc mailing list
[email protected]
http://vpri.org/mailman/listinfo/fonc
