We would first have to define what we mean by the term "computation".
Computation is a way to transform the terms of a language
"syntactically" according to defined rules. The lambda calculus is a
fundamental way of performing such transformations via its reduction
rules (the alpha, beta, and eta rules).
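
To make the substitution reading concrete, here is a minimal sketch in
C of a single beta step over a toy term representation. All names
(Term, subst, beta, ...) are mine and purely illustrative, and I omit
capture-avoiding renaming (the alpha rule), so it is only correct when
bound names are distinct:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

typedef enum { VAR, LAM, APP } Tag;

typedef struct Term {
    Tag tag;
    const char *name;     /* VAR: its name; LAM: the bound name        */
    struct Term *a, *b;   /* LAM: body in a; APP: function a, argument b */
} Term;

static Term *mk(Tag tag, const char *name, Term *a, Term *b) {
    Term *t = malloc(sizeof *t);
    t->tag = tag; t->name = name; t->a = a; t->b = b;
    return t;
}

/* subst(t, x, v): replace free occurrences of variable x in t by v */
static Term *subst(Term *t, const char *x, Term *v) {
    switch (t->tag) {
    case VAR: return strcmp(t->name, x) == 0 ? v : t;
    case LAM: return strcmp(t->name, x) == 0
                   ? t                                /* x is shadowed */
                   : mk(LAM, t->name, subst(t->a, x, v), NULL);
    case APP: return mk(APP, NULL, subst(t->a, x, v), subst(t->b, x, v));
    }
    return t;
}

/* one beta step: (\x. body) arg  ->  body[x := arg] */
static Term *beta(Term *t) {
    if (t->tag == APP && t->a->tag == LAM)
        return subst(t->a->a, t->a->name, t->b);
    return t;
}

int main(void) {
    /* (\x. x) y  beta-reduces to  y */
    Term *id  = mk(LAM, "x", mk(VAR, "x", NULL, NULL), NULL);
    Term *app = mk(APP, NULL, id, mk(VAR, "y", NULL, NULL));
    printf("%s\n", beta(app)->name);   /* prints: y */
    return 0;
}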

In the end, beta-reduction is term substitution. But abstraction and
substitution on a general-purpose von Neumann-style computer have to be
modelled accordingly: a variable becomes a memory location or a
register that can be updated (though the correspondence is not 1:1). A
function, for example, becomes a jump to a certain code location, with
the arguments passed by writing to certain memory locations or
registers.
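
As a toy illustration of that modelling (plain C, nothing else
assumed): the bound variable of \x. x*x becomes a fixed memory slot,
and applying the function becomes "write the argument, then transfer
control":

#include <stdio.h>

static int mem[16];              /* the machine's "registers"/memory   */
enum { ARG0 = 0, RET = 1 };      /* calling convention: fixed slots    */

static void square(void) {       /* body of \x. x * x                  */
    mem[RET] = mem[ARG0] * mem[ARG0];
}

int main(void) {
    mem[ARG0] = 7;               /* pass the argument by writing memory */
    square();                    /* "jump" to the code location         */
    printf("%d\n", mem[RET]);    /* prints: 49                          */
    return 0;
}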

IMHO the computational model of objects and method dispatch is more of
a black-box, communication-oriented model: one does not know much about
the destination; one dispatches a message and interprets the result. In
functional languages the model is more white-box: one can always
decompose a term into its subterms and interpret them. That is also why
functional languages do not grow easily into distributed programming,
where knowledge about the terms is limited.
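
A small C sketch of the contrast, with all names (Obj, send, eval, ...)
hypothetical: the black-box side can only look a method up by name at
run time and trust the result, while the white-box side can take the
whole term apart:

#include <stdio.h>
#include <string.h>

/* black box: the receiver owns a method table; the sender knows names */
typedef struct Obj Obj;
typedef int (*Method)(Obj *self, int arg);
struct Obj {
    const char *names[4];
    Method      impls[4];
    int         state;
};

static int send(Obj *o, const char *msg, int arg) {
    for (int i = 0; o->names[i]; i++)
        if (strcmp(o->names[i], msg) == 0)
            return o->impls[i](o, arg);   /* late-bound lookup        */
    return -1;                            /* message not understood   */
}

static int add_impl(Obj *self, int arg) { return self->state + arg; }

/* white box: the interpreter sees the whole term and takes it apart  */
typedef enum { NUM, PLUS } Tag;
typedef struct Expr { Tag tag; int n; struct Expr *l, *r; } Expr;

static int eval(Expr *e) {
    switch (e->tag) {
    case NUM:  return e->n;
    case PLUS: return eval(e->l) + eval(e->r);  /* decompose subterms */
    }
    return 0;
}

int main(void) {
    Obj counter = { { "add", NULL }, { add_impl, NULL }, 40 };
    printf("%d\n", send(&counter, "add", 2));   /* prints: 42 */

    Expr two = { NUM, 2, NULL, NULL }, forty = { NUM, 40, NULL, NULL };
    Expr sum = { PLUS, 0, &forty, &two };
    printf("%d\n", eval(&sum));                 /* prints: 42 */
    return 0;
}

Note that send() needs no knowledge of the receiver's structure, while
eval() depends on seeing all of it.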

Best,
Jakob

On 12.02.12 03:20, Steve Wart wrote:
> I think of OO as an organization mechanism. It doesn't add
> fundamentally to computation, but it allows complexity to be managed
> more easily.
>
> On Sat, Feb 11, 2012 at 5:23 PM, Kurt Stephens
> <k...@kurtstephens.com> wrote:
>
>
>     COLAs or CLOAs: are lambda systems fundamentally simpler than
>     object systems?
>
>     Should Combined-Object-Lambda-Architecture really be
>     Combined-Lambda-Object-Architecture?
>
>     Ian Piumarta's IDST bootstraps an object system, then a compiler,
>     then a lisp evaluator.  Maru bootstraps a lisp evaluator, then
>     crafts an object system, then a compiler.  Maru is much smaller
>     and more elegant than IDST.
>
>     Are object systems necessarily more complex than lambda
>     evaluators?  Or is this just another demonstration of how Lisp
>     code/data unification is more powerful?
>
>     If message send and function calls are decomposed into lookup()
>     and apply(), the only difference between basic OO message-passing
>     and function calling is lookup(): the former is late-bound, the
>     latter is early-bound (in the link editor, for example).  Is OO
>     lookup() the sole complicating factor?  Is a lambda-oriented
>     compiler fundamentally less complex than an OO compiler?
>
>     I took the object->lambda approach in TORT
>     (http://github.com/kstephens/tort), tried to keep the OO kernel
>     small, and delayed the compiler until after the lisp evaluator.
>     The object system started out "tiny", but supporting the lisp
>     evaluator, created in an OO style (and on which the REPL and
>     compiler are built), required a lot of basic foundational object
>     functionality.  Despite its name, TORT is no longer tiny; I
>     probably didn't restrain myself enough; it tries too hard to
>     support C extensions and dynamic linking.
>
>     Did Gregor Kiczales, Ian, and others stumble upon the benefits of
>     lisp->object bootstrapping vs. object->lisp bootstrapping?  I've
>     written object-oriented LISPs before
>     (http://github.com/kstephens/ll, based on ideas from OAKLISP).
>     Do OO techniques make language implementation "feel" easier in
>     the beginning, only to complect later on?
>
>      Just some ideas,
>      Kurt Stephens
>
>     http://kurtstephens.com/node/154
>


_______________________________________________
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc
