Re: [fonc] COLAs or CLOAs? : are lambda systems fundamentally simpler than object systems?

2012-02-12 Thread Julian Leviston
Hiya,

On 13/02/2012, at 2:47 PM, Kurt Stephens wrote:

> Read Ian Piumarta's "Open, extensible object models" ( 
> http://piumarta.com/software/cola/objmodel2.pdf ).
> At a certain level, send(), lookup() and apply() have bootstrap 
> implementations to break the infinite regress.  TORT was directly inspired by 
> Ian's paper.  MOS (written a while ago) has a very similar object model and 
> short-circuit.  There is an object, well-known by the system, that describes 
> its self -- this is where the short-circuit lives.
> 

Yeah I've read it about 20 times. I found it quite difficult... (mostly because 
I didn't understand some of the C idioms, but also because the architecture was 
difficult for me to comprehend in general). Ian's object model didn't actually 
reify the message though, did it? I should go read it more. I wish that it was 
more accessible to me, and/or that I could somehow increase my ability to 
comprehend it. Any further recommendations for things to read along these 
lines? I'm QUITE interested (this was my favourite part of the FoNC project, to 
a point).

> A graph of TORT's object model is here: 
> http://kurtstephens.com/pub/tort/tort.svg .  It's turtles all the way down, 
> er... um... to the right.  The last turtle, the "method table", is chewing 
> its tail. :)
> 
>>> I'm not a fan of HTTP/SOAP bloat.  The fundamentals are often bulldozed by 
>>> it.
>> 
>> How about REST?
>> 
> REST is not object messaging or HTTP.
> 

REST works over HTTP. It can be object messaging, can't it? At least, as far as 
my understanding and implementations have gone. Maybe I'm deluded.


> -- Kurt
> 

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc


Re: [fonc] COLAs or CLOAs? : are lambda systems fundamentally simpler than object systems?

2012-02-12 Thread Kurt Stephens

On 2/12/12 4:11 PM, Julian Leviston wrote:

> On 13/02/2012, at 6:01 AM, Kurt Stephens wrote:
>
>> If send(sel, rcvr, args) can be decomposed into apply(lookup(sel, rcvr,
>> args), rcvr, args), then this follows:
>>
>>   Message.new(sel, rcvr, args).lookup().apply()
>
> ...


> I don't follow why a "message" isn't simply a token.
> For example, in Ruby, a message is simply a symbol, is it not?

You are confusing the name of a method (its selector), used to locate 
the method, with the message.  The message contains the selector, 
receiver, arguments, and, in the case of Ruby, an optional block -- 
everything needed to pass data and transfer control.  The selector 
represents many methods -- the actual behavior is in the method.  In 
some object systems (TORT) selectors and methods can be anonymous.


> One has to draw the line in the sand at some point. Why bother building an
> entire object for something which simply holds a reference to a Method name
> (or not, as the case may be)? In this case, the media *is* the message, is
> it not? ;-) or are you stipulating that a Message would know how to perform
> lookup and call, and therefore warrants an entire class... the only issue
> here is that of efficiency, isn't it?


Why draw a line at all?  The elements of the message are already there, 
at least in CRuby, but are buried in the C stack -- not visible to the 
user in an organized manner.  TORT allocates Message objects on the C 
stack; however, a Message object does not need to "exist" until it's 
referenced.


In TORT, the receiver's method table implements lookup(msg) and the 
located method implements apply(msg, rcvr, args...); in fact, all 
objects in TORT implement apply(), though most of them raise an error 
when apply() is called (similar to Self's code slot).  The message 
object, not the receiver, is always the first implicit argument.


I could imagine Message implementing both lookup() and apply(), but they 
could be delegated to any other object(s); or the default object 
responsible for send() could be replaced with another that prints traces, 
then delegates to some other object.


Message.apply() could act like invoking a continuation.  You could take 
a Message, install it as a method somewhere and restart it.
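
The decomposition can be sketched in a few lines of plain Ruby (my
illustration of the idea, not TORT's or MOS's actual code; the Message
class here is hypothetical):

```ruby
# Hypothetical sketch: send(sel, rcvr, args) reified as
# Message.new(sel, rcvr, args).lookup.apply
class Message
  attr_reader :selector, :receiver, :args

  def initialize(selector, receiver, *args)
    @selector, @receiver, @args = selector, receiver, args
  end

  # Late-bound lookup: ask the receiver's class for the method object.
  def lookup
    @method = @receiver.class.instance_method(@selector)
    self
  end

  # Transfer control to the located method.
  def apply
    @method.bind(@receiver).call(*@args)
  end
end

Message.new(:+, 40, 2).lookup.apply  # => 42
```

Because the message is a tangible object, it can be inspected, logged,
serialized, or handed to a different lookup()/apply() implementation
before control is transferred.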



> Sorry if I'm asking really obviously answerable questions. Please bear with
> me. I'm eager to learn.
>
> Another question I have is: how do you solve the obvious recursive quality
> of having Message.new(sel, rcvr, args) and also then sending messages to
> the message object?
>
> Is "new" the bottom-out point?




Read Ian Piumarta's "Open, extensible object models" ( 
http://piumarta.com/software/cola/objmodel2.pdf ).
At a certain level, send(), lookup() and apply() have bootstrap 
implementations to break the infinite regress.  TORT was directly 
inspired by Ian's paper.  MOS (written a while ago) has a very similar 
object model and short-circuit.  There is an object, well-known by the 
system, that describes its self -- this is where the short-circuit lives.


A graph of TORT's object model is here: 
http://kurtstephens.com/pub/tort/tort.svg .  It's turtles all the way 
down, er... um... to the right.  The last turtle, the "method table", is 
chewing its tail. :)



>> I'm not a fan of HTTP/SOAP bloat.  The fundamentals are often bulldozed by it.
>
> How about REST?

REST is not object messaging or HTTP.

-- Kurt



Re: [fonc] The death of CPU scaling: From one core to many — and why we’re still stuck

2012-02-12 Thread David Barbour
I thought the recent article from Herb Sutter was quite good.

http://herbsutter.com/welcome-to-the-jungle/

On Sun, Feb 12, 2012 at 12:53 PM, John Zabroski wrote:

> This is a very good article but it does not mention the ultimate
> bottleneck above Amdahl's Law: the speed of light is a constant, we cannot
> change it, therefore poorly designed communication protocols will be the
> next big target for (operating) systems research as propagation delay will
> be the ultimate bottleneck for an intergalactic network.
>
> Cheers,
> Z-Bo
>
> Sent from my Droid Charge on Verizon 4GLTE
>
> --Original Message--
> From: Eugen Leitl 
> To: ,
> Cc: "Fundamentals of New Computing" ,
> Date: Thursday, February 9, 2012 12:12:12 PM GMT+0100
> Subject: [fonc] The death of CPU scaling: From one core to many — and why
> we’re still stuck
>
> [...]

Re: [fonc] COLAs or CLOAs? : are lambda systems fundamentally simpler than object systems?

2012-02-12 Thread Julian Leviston

On 13/02/2012, at 6:01 AM, Kurt Stephens wrote:

> On 2/12/12 11:15 AM, Steve Wart wrote:
> 
>> Can the distributed computation model you describe be formalized as a
>> set of rewrite rules, or is the "black box" model really about a
>> protocol for message dispatch? Attempts to build distributed messaging
>> systems haven't been particularly simple. In fact I consider both CORBA
>> and Web Services to be failures for that reason.
> 
> Perhaps it's because the "Message" in OO systems is often forgotten: message 
> passing is described as "calling a method", instead of "sending a message".
> 
> Many languages do not reify the message itself as an object.
> 
> If send(sel, rcvr, args) can be decomposed into apply(lookup(sel, rcvr, args), 
> rcvr, args), then this follows:
> 
>  Message.new(sel, rcvr, args).lookup().apply()
> 
> Tort does this, so does MOS ( 
> http://kurtstephens.com/pub/mos/current/src/mos/ ), however Ruby, for 
> example, does not -- not sure how many others do.
> 
> The Self Language handles apply() by cloning the method object, assigning 
> arguments into its slots, then transferring control to the object's code 
> slot.  Yet, there is still no tangible Message object.
> 

I don't follow why a "message" isn't simply a token. For example, in Ruby, a 
message is simply a symbol, is it not? One has to draw the line in the sand at 
some point. Why bother building an entire object for something which simply 
holds a reference to a Method name (or not, as the case may be)? In this case, 
the media *is* the message, is it not? ;-) or are you stipulating that a 
Message would know how to perform lookup and call, and therefore warrants an 
entire class... the only issue here is that of efficiency, isn't it?

Sorry if I'm asking really obviously answerable questions. Please bear with me. 
I'm eager to learn.

Another question I have is: how do you solve the obvious recursive quality of 
having Message.new(sel, rcvr, args) and also then sending messages to the 
message object? Is "new" the bottom-out point?


>> It's very difficult to use OO in this way without imposing excessive
>> knowledge about the internal representation of objects if you need to
>> serialize parameters or response objects.
>> 
> 
> Remembering there is an implicit Message object behind the scenes makes 
> message distribution a bottom-up abstraction, starting with identity 
> transforms:
> 
> http://kurtstephens.com/pub/abstracting_services_in_ruby/asir.slides/index.html
> 
> This doesn't fully remove the serialization issues, but those are readily, 
> and often already, solved.  One generic serialization is suitable for the 
> parameters and the message objects; decomposed into Request and Response 
> objects.  The Transport is then agnostic of encoding/decoding details.
> 
>> HTTP seems to have avoided this by using MIME types, but this is more
>> about agreed upon engineering standards rather than computational
>> abstractions.
>> 
> 
> I'm not a fan of HTTP/SOAP bloat.  The fundamentals are often bulldozed by it.

How about REST?

> 
>> Cheers,
>> Steve
>> 
> 
> -- Kurt



Re: [fonc] The death of CPU scaling: From one core to many — and why we’re still stuck

2012-02-12 Thread John Zabroski
This is a very good article but it does not mention the ultimate bottleneck 
above Amdahl's Law: the speed of light is a constant, we cannot change it, 
therefore poorly designed communication protocols will be the next big target 
for (operating) systems research as propagation delay will be the ultimate 
bottleneck for an intergalactic network.

Cheers,
Z-Bo

Sent from my Droid Charge on Verizon 4GLTE

--Original Message--
From: Eugen Leitl 
To: ,
Cc: "Fundamentals of New Computing" ,
Date: Thursday, February 9, 2012 12:12:12 PM GMT+0100
Subject: [fonc] The death of CPU scaling: From one core to many — and why we’re 
still stuck


http://www.extremetech.com/computing/116561-the-death-of-cpu-scaling-from-one-core-to-many-and-why-were-still-stuck?print

The death of CPU scaling: From one core to many — and why we’re still stuck

By Joel Hruska on February 1, 2012 at 2:31 pm

It’s been nearly eight years since Intel canceled Tejas and announced its
plans for a new multi-core architecture. The press wasted little time in
declaring conventional CPU scaling dead — and while the media has a tendency
to bury products, trends, and occasionally people well before their
expiration date, this is one declaration that’s stood the test of time.

To understand the magnitude of what happened in 2004 it may help to consult
the following chart. It shows transistor counts, clock speeds, power
consumption, and instruction-level parallelism (ILP). The doubling of
transistor counts every two years is known as Moore’s law, but over time,
assumptions about performance and power consumption were also made and shown
to advance along similar lines. Moore got all the credit, but he wasn’t the
only visionary at work. For decades, microprocessors followed what’s known as
Dennard scaling. Dennard predicted that oxide thickness, transistor length,
and transistor width could all be scaled by a constant factor. Dennard
scaling is what gave Moore’s law its teeth; it’s the reason the
general-purpose microprocessor was able to overtake and dominate other types
of computers.

CPU Scaling [1]CPU scaling showing transistor density, power consumption, and
efficiency. Chart originally from The Free Lunch Is Over: A Fundamental Turn
Toward Concurrency in Software [2]

The original 8086 drew ~1.84W and the P3 1GHz drew 33W, meaning that CPU
power consumption increased by 17.9x while CPU frequency improved by 125x.
Note that this doesn’t include the other advances that occurred over the same
time period, such as the adoption of L1/L2 caches, the invention of
out-of-order execution, or the use of superscaling and pipelining to improve
processor efficiency. It’s for this reason that the 1990s are sometimes
referred to as the golden age of scaling. This expanded version of Moore’s
law held true into the mid-2000s, at which point the power consumption and
clock speed improvements collapsed. The problem at 90nm was that transistor
gates became too thin to prevent current from leaking out into the substrate.

Intel and other semiconductor manufacturers have fought back with innovations
[3] like strained silicon, hi-k metal gate, FinFET, and FD-SOI — but none of
these has re-enabled anything like the scaling we once enjoyed. From 2007 to
2011, maximum CPU clock speed (with Turbo Mode enabled) rose from 2.93GHz to
3.9GHz, an increase of 33%. From 1994 to 1998, CPU clock speeds rose by 300%.

Next page: The multi-core swerve [4]

The multi-core swerve

For the past seven years, Intel and AMD have emphasized multi-core CPUs as
the answer to scaling system performance, but there are multiple reasons to
think the trend towards rising core counts is largely over. First and
foremost, there’s the fact that adding more CPU cores never results in
perfect scaling. In any parallelized program, performance is ultimately
limited by the amount of serial code (code that can only be executed on one
processor). This is known as Amdahl’s law. Other factors, such as the
difficulty of maintaining concurrency across a large number of cores, also
limit the practical scaling of multi-core solutions.

Amdahl's Law [5]

AMD’s Bulldozer is a further example of how bolting more cores together can
result in a slower end product [6]. Bulldozer was designed to share logic and
caches in order to reduce die size and allow for more cores per processor,
but the chip’s power consumption badly limits its clock speed while slow
caches hamstring instructions per cycle (IPC). Even if Bulldozer had been a
significantly better chip, it wouldn’t change the long-term trend towards
diminishing marginal returns. The more cores per die, the lower the chip’s
overall clock speed. This leaves the CPU ever more reliant on parallelism to
extract acceptable performance. AMD isn’t the only company to run into this
problem; Oracle’s new T4 processor is the first Niagara-class chip to focus
on improving single-thread performance rather than pushing up the total
number of threads per CPU.

Rage Jobs [7

Re: [fonc] COLAs or CLOAs? : are lambda systems fundamentally simpler than object systems?

2012-02-12 Thread John Zabroski
Applying the Linda analogy to object systems has been done before, perhaps most 
notably by Ehud Shapiro's work on Concurrent Prolog. Ehud once wrote a reply to 
Gelernter in the Communications of the ACM, explaining how he could implement 
Tuple-space programs as logic programs. The Dining Philosophers Problem was 
used as an example.

Even more recently we have things like TeaTime, which furthers the tuple-space 
example with infinite backtracking. More broadly, Linda is just another 
implementation of the fully decoupled Observer pattern, where the Subject is 
decoupled from the Observer by a Registry object. TeaTime can then be seen as a 
further generalization to allow for multi-version concurrency control. Other 
systems and/or languages use the Observer pattern as a litmus test for type 
safety or true "safe" mutable state. Some people describe the problem in very 
weak languages as Callback Hell, because an unsafe, non-decoupled Observer 
pattern implementation doesn't have a fairness strategy or a way to prioritize 
Observers out of the box. For example, should the first object to get a 
notification from the subject be the first to reply? What about race conditions 
and dirty reads due to the first observer causing a second change notification 
from the Subject?

It turns out that strategies like fairness and message prioritization can be 
built from a trustworthy core - as shown in the E language and first 
hypothesized by Hewitt in his Actor Model. In the Actor Model an actor can 
reorder messages and thus decide priority, even providing fairness, which the 
Actor Model axioms say nothing about. In E, communication is handled by 'vats', 
which are essentially a different way to implement a Registry object to fully 
decouple the Subject and Observer and still provide fair resource allocation. 
Yet E lacks static, well-defined types for reasoning about behavior. So type 
theorists come along and ask if they can make a type-safe Observable behavior 
that addresses these concerns - see Scala for example.
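
One naive version of such a fairness strategy, a Registry that decouples
Subject from Observer and drains notifications in FIFO order, might look
like this in Ruby (my sketch, not how E's vats or TeaTime are actually
implemented):

```ruby
# Hypothetical sketch: Subject and Observer decoupled by a Registry.
# Notifications are queued and drained FIFO, so a handler can't
# re-enter mid-notification -- one simple fairness/ordering strategy.
class Registry
  def initialize
    @subscribers = Hash.new { |h, k| h[k] = [] }
    @queue = []
    @draining = false
  end

  def subscribe(topic, &handler)
    @subscribers[topic] << handler
  end

  def publish(topic, event)
    @queue << [topic, event]
    drain unless @draining
  end

  private

  # Drain one event at a time; events published by handlers are
  # appended and run only after the current event's handlers finish.
  def drain
    @draining = true
    until @queue.empty?
      topic, event = @queue.shift
      @subscribers[topic].each { |h| h.call(event) }
    end
    @draining = false
  end
end

log = []
r = Registry.new
r.subscribe(:tick) { |n| log << [:a, n]; r.publish(:tock, n + 1) if n.zero? }
r.subscribe(:tick) { |n| log << [:b, n] }
r.subscribe(:tock) { |n| log << [:c, n] }
r.publish(:tick, 0)
log  # => [[:a, 0], [:b, 0], [:c, 1]]
```

Without the queue, the first handler's nested publish would run the
:tock observer before the second :tick observer ever saw the event,
which is exactly the re-entrancy/dirty-read hazard described above.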

Hope this perspective on history helps.

Cheers, 
Z-Bo

Sent from my Droid Charge on Verizon 4GLTE

--Original Message--
From: Hans-Martin Mosner 
To: "Fundamentals of New Computing" 
Date: Sunday, February 12, 2012 8:19:37 PM GMT+0100
Subject: Re: [fonc] COLAs or CLOAs? : are lambda systems fundamentally simpler 
than object systems?

On 12.02.2012 20:01, Kurt Stephens wrote:
> Many languages do not reify the message itself as an object.
>
I have been musing lately how the Linda model (tuple spaces) could be helpful.
Tuples can be understood as messages sent to an anonymous receiver (whoever 
does a "get" with a matching signature).
One nice analogy is that cell biology works pretty similarly to that - proteins 
and other molecules are built from constituents, float freely (more or less) in 
the cytoplasm and cause effects when they bind to a matching site.
Of course, programming on this level alone seems pretty tedious - I don't know 
how well it scales to higher abstraction
levels. But it does have some nice practical consequences:
- the state of the system is defined by the tuples, so snapshotting and 
in-place software upgrading is relatively painless
- it lends itself well to parallelisation and distribution

Cheers,
Hans-Martin



Re: [fonc] COLAs or CLOAs? : are lambda systems fundamentally simpler than object systems?

2012-02-12 Thread Kurt Stephens

On 2/12/12 1:19 PM, Hans-Martin Mosner wrote:

On 12.02.2012 20:01, Kurt Stephens wrote:

Many languages do not reify the message itself as an object.


I have been musing lately how the Linda model (tuple spaces) could be helpful.


We've been using tuple spaces at my current job for 5+ years.  It is too 
low-level for distributing object messages.  It feels a little too 
"lispy", to boot.  :)  The Linda model is a distribution abstraction 
that doesn't fit very well into a "Web SOA"; there's too much 
HTTP-centrism now to ignore.


It's *very* likely we will replace it with the ASIR framework described 
in my slides.  One could implement a Tuple-space (or HTTP) Transport 
with it.




Cheers,
Hans-Martin


-- Kurt


Re: [fonc] COLAs or CLOAs? : are lambda systems fundamentally simpler than object systems?

2012-02-12 Thread John Zabroski
Hi Kurt,

Lisp is more expressive than lambda alone, since Lisp contains forms that 
cannot be represented by lambda directly - an encoding must be used. A novel 
explanation of this mathematical phenomenon can be found in Barry Jay's recent 
book, Pattern Calculus, published by Springer-Verlag. But very briefly, pattern 
matching is more general than the beta reduction used in the lambda calculus.

I hope this pointer will help you work through your intuition, but the hard 
work is on you since reading Barry's book has some pre-reqs. ;)

Barry does read this list here and there, fyi.



Sent from my Droid Charge on Verizon 4GLTE

--Original Message--
From: Kurt Stephens 
To: "Fundamentals of New Computing" 
Date: Saturday, February 11, 2012 7:23:01 PM GMT-0600
Subject: [fonc] COLAs or CLOAs? : are lambda systems fundamentally simpler than 
object systems?


COLAs or CLOAs? : are lambda systems fundamentally simpler than object 
systems?

Should Combined-Object-Lambda-Architecture really be 
Combined-Lambda-Object-Architecture?

Ian Piumarta’s IDST bootstraps an object system, then a compiler, then a 
lisp evaluator.  Maru bootstraps a lisp evaluator, then crafts an object 
system, then a compiler.  Maru is much smaller and more elegant than IDST.

Are object systems necessarily more complex than lambda evaluators?  Or 
is this just another demonstration of how Lisp code/data unification is 
more powerful?

If message sends and function calls are decomposed into lookup() and 
apply(), the only difference between basic OO message passing and 
function calling is lookup(): the former is late-bound, the latter 
early-bound (in the link-editor, for example).  Is OO lookup() the sole 
complicating factor?  Is a lambda-oriented compiler fundamentally less 
complex than an OO compiler?
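
The late-bound/early-bound split can be demonstrated in stock Ruby (an
illustration of the distinction only, not code from IDST, Maru, or TORT):

```ruby
# Early binding: resolve the method once, up front. A Ruby Method
# object captures the definition that existed at lookup time, much
# as a link-editor wires in one concrete function.
class Greeter
  def hello
    "hello"
  end
end

g = Greeter.new
early = g.method(:hello)   # lookup happens here, exactly once

class Greeter              # redefine *after* the early binding
  def hello
    "hi"
  end
end

early.call  # => "hello"   early-bound: old definition still wired in
g.hello     # => "hi"      late-bound: lookup repeated at each send
```

The late-bound send is what keeps an OO system open to redefinition at
run time, and it is also the piece a lambda-only compiler never has to
implement.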

I took the object->lambda approach in TORT 
(http://github.com/kstephens/tort), tried to keep the OO kernel small, 
and delayed the compiler until after the lisp evaluator.  The object 
system started out “tiny”, but supporting the lisp evaluator, created in 
an OO style (which the REPL and compiler are built on), required a lot of 
basic foundational object functionality.  Despite its name, TORT is no 
longer tiny; I probably didn’t restrain myself enough; it tries too hard 
to support C extension and dynamic linking.

Did Gregor Kiczales, Ian, and others stumble upon the benefits of 
lisp->object bootstrapping vs. object->lisp bootstrapping?  I’ve 
written object-oriented LISPs before (http://github.com/kstephens/ll, 
based on ideas from OAKLISP).  Do OO techniques make language 
implementation “feel” easier in the beginning, only to complect later on?

   Just some ideas,
   Kurt Stephens

http://kurtstephens.com/node/154





Re: [fonc] COLAs or CLOAs? : are lambda systems fundamentally simpler than object systems?

2012-02-12 Thread Hans-Martin Mosner
On 12.02.2012 20:01, Kurt Stephens wrote:
> Many languages do not reify the message itself as an object.
>
I have been musing lately how the Linda model (tuple spaces) could be helpful.
Tuples can be understood as messages sent to an anonymous receiver (whoever 
does a "get" with a matching signature).
One nice analogy is that cell biology works pretty similarly to that - proteins 
and other molecules are built from constituents, float freely (more or less) in 
the cytoplasm and cause effects when they bind to a matching site.
Of course, programming on this level alone seems pretty tedious - I don't know 
how well it scales to higher abstraction
levels. But it does have some nice practical consequences:
- the state of the system is defined by the tuples, so snapshotting and 
in-place software upgrading is relatively painless
- it lends itself well to parallelisation and distribution

Cheers,
Hans-Martin
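
A minimal in-process sketch of the Linda idea (my illustration, not a
real Linda system; `out` and `inp` follow Linda's traditional operation
names, and the matching rule here is deliberately simplified):

```ruby
# Tuples as messages to an anonymous receiver: whoever does a get
# with a matching signature consumes the tuple.
class TupleSpace
  def initialize
    @tuples = []
  end

  # Publish a tuple into the space.
  def out(tuple)
    @tuples << tuple
  end

  # Remove and return the first tuple matching the pattern, where
  # nil in the pattern means "match anything"; nil if none match.
  def inp(pattern)
    i = @tuples.index do |t|
      t.size == pattern.size &&
        pattern.zip(t).all? { |p, v| p.nil? || p == v }
    end
    i && @tuples.delete_at(i)
  end
end

ts = TupleSpace.new
ts.out([:sum, 1, 2])
ts.out([:product, 3, 4])

op, a, b = ts.inp([:sum, nil, nil])  # matched by signature, not address
a + b  # => 3
```

The state of the system is exactly the contents of `@tuples`, which is
why snapshotting and in-place upgrades fall out of this model so easily.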


Re: [fonc] COLAs or CLOAs? : are lambda systems fundamentally simpler than object systems?

2012-02-12 Thread Kurt Stephens

On 2/12/12 11:15 AM, Steve Wart wrote:


Can the distributed computation model you describe be formalized as a
set of rewrite rules, or is the "black box" model really about a
protocol for message dispatch? Attempts to build distributed messaging
systems haven't been particularly simple. In fact I consider both CORBA
and Web Services to be failures for that reason.


Perhaps it's because the "Message" in OO systems is often forgotten: 
message passing is described as "calling a method", instead of "sending 
a message".


Many languages do not reify the message itself as an object.

If send(sel, rcvr, args) can be decomposed into apply(lookup(sel, rcvr, 
args), rcvr, args), then this follows:


  Message.new(sel, rcvr, args).lookup().apply()

Tort does this, so does MOS ( 
http://kurtstephens.com/pub/mos/current/src/mos/ ), however Ruby, for 
example, does not -- not sure how many others do.


The Self Language handles apply() by cloning the method object, 
assigning arguments into its slots, then transferring control to the 
object's code slot.  Yet, there is still no tangible Message object.



It's very difficult to use OO in this way without imposing excessive
knowledge about the internal representation of objects if you need to
serialize parameters or response objects.



Remembering there is an implicit Message object behind the scenes makes 
message distribution a bottom-up abstraction, starting with identity 
transforms:



http://kurtstephens.com/pub/abstracting_services_in_ruby/asir.slides/index.html

This doesn't fully remove the serialization issues, but those are 
readily, and often already, solved.  One generic serialization is 
suitable for the parameters and the message objects; decomposed into 
Request and Response objects.  The Transport is then agnostic of 
encoding/decoding details.
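
This bottom-up idea can be sketched roughly as follows (a hedged
illustration; Request, Response, JsonCoder, and IdentityTransport are my
names, not the actual asir API):

```ruby
# Sketch: the implicit message decomposed into Request and Response
# objects, with one generic serialization for both, so the Transport
# only ever sees opaque encoded payloads.
require 'json'

Request  = Struct.new(:receiver, :selector, :args)
Response = Struct.new(:result)

class JsonCoder
  def encode(req)
    JSON.dump(receiver: req.receiver, selector: req.selector, args: req.args)
  end

  def decode(payload)
    h = JSON.parse(payload)
    Request.new(h["receiver"], h["selector"], h["args"])
  end
end

# Identity transport: delivery is local, but the request still makes a
# full round trip through the coder, exactly as it would over a pipe,
# file, queue, or HTTP transport.
class IdentityTransport
  def initialize(coder)
    @coder = coder
  end

  def send_request(req)
    payload = @coder.encode(req)      # the transport is agnostic here
    r = @coder.decode(payload)
    rcvr = Object.const_get(r.receiver)
    Response.new(rcvr.public_send(r.selector, *r.args))
  end
end

t = IdentityTransport.new(JsonCoder.new)
t.send_request(Request.new("Math", :sqrt, [9])).result  # => 3.0
```

Swapping IdentityTransport for a networked one changes nothing above it,
which is the "identity transform first" point being made.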



HTTP seems to have avoided this by using MIME types, but this is more
about agreed upon engineering standards rather than computational
abstractions.



I'm not a fan of HTTP/SOAP bloat.  The fundamentals are often bulldozed 
by it.



Cheers,
Steve



-- Kurt


On Sun, Feb 12, 2012 at 4:02 AM, Jakob Praher <j...@hapra.at> wrote:

We would have to define what you mean by the term computation.
Computation is a way to transform a language "syntactically" by
defined rules.
The lambda calculus is a fundamental way of performing such
transformation via reduction rules (the alpha, beta, gamma rules).

In the end the beta-reduction is term substitution. But abstraction
and substitution in a general-purpose von Neumann-style computer have
to be modelled accordingly: A variable in the computer is a memory
location/a register that can be updated (but it is not a 1:1
correspondence). E.g. a function in a computer is a jump to a certain
code location, which has to write to certain locations in
memory/registers to get the arguments passed.

IMHO the computational model of objects and method dispatch is more
of a black-box / communication-oriented model. One does not know much
about the destination and dispatches a message, interpreting the
result. In functional languages the model is more white-box. One
can always decompose a term into subterms and interpret it.
Therefore functional languages do not grow easily to distributed
programming, where the knowledge over the terms is limited.

Best,
Jakob







Re: [fonc] COLAs or CLOAs? : are lambda systems fundamentally simpler than object systems?

2012-02-12 Thread Casey Ransberger
There's always

http://en.wikipedia.org/wiki/Actor_model

and

http://www.dalnefre.com/wp/humus/

...which seem to make concurrency less of a PITA. Like most languages that
crystalize a particular style, though, there's some learning involved for
folks (like me!) who hadn't really thought about the actor model in a deep
way.

On Sun, Feb 12, 2012 at 9:15 AM, Steve Wart  wrote:

> Simplicity, like productivity, is an engineering metric that can only be
> measured in the context of a particular application. Most successful
> programming languages aren't "mathematically pure" but some make it easier
> than others to use functional idioms (by which I mean some mechanism to
> emulate the types of operations you describe below). However many problems
> are simpler to solve without such abstractions. That's why "practical
> languages" like Smalltalk-80 and Lisp and their mainstream derivatives have
> succeeded over their more pure counterparts.
>
> I think I responded to this message because I was sick in bed yesterday,
> reading the intro paragraphs of The Algorithmic Beauty of Plants (
> http://algorithmicbotany.org/papers/#abop), and shared the introduction
> of DOL-systems with my son. We both enjoyed it, but I'm not sure how this
> translates into reasoning about distributed systems.
>
> Can the distributed computation model you describe be formalized as a set
> of rewrite rules, or is the "black box" model really about a protocol for
> message dispatch? Attempts to build distributed messaging systems haven't
> been particularly simple. In fact I consider both CORBA and Web Services to
> be failures for that reason.
>
> It's very difficult to use OO in this way without imposing excessive
> knowledge about the internal representation of objects if you need to
> serialize parameters or response objects.
>
> HTTP seems to have avoided this by using MIME types, but this is more
> about agreed upon engineering standards rather than computational
> abstractions.
>
> Cheers,
> Steve
>
>
> On Sun, Feb 12, 2012 at 4:02 AM, Jakob Praher  wrote:
>
>>  We would have to define what you mean by the term computation.
>> Computation is a way to transform a language "syntactically" by defined
>> rules.
>> The lambda calculus is a fundamental way of performing such
>> transformation via reduction rules (the alpha, beta, and eta rules).
>>
>> In the end, beta-reduction is term substitution. But abstraction and
>> substitution on a general-purpose von Neumann-style computer have to be
>> modelled accordingly: a variable in the computer is a memory location or
>> a register that can be updated (though it is not a 1:1 correspondence).
>> For example, a function in a computer is a jump to a certain code
>> location, which has to write to certain memory locations/registers to
>> receive its arguments.
>>
>> IMHO the computational model of objects and method dispatch is more of a
>> black-box, communication-oriented model: one does not know much about the
>> destination; one dispatches a message and interprets the result. In
>> functional languages the model is more white-box: one can always decompose
>> a term into subterms and interpret it. Therefore functional languages do
>> not extend easily to distributed programming, where knowledge of the terms
>> is limited.
>>
>> Best,
>> Jakob
>>
>>


-- 
Casey Ransberger


Re: [fonc] COLAs or CLOAs? : are lambda systems fundamentally simpler than object systems?

2012-02-12 Thread Steve Wart
Simplicity, like productivity, is an engineering metric that can only be
measured in the context of a particular application. Most successful
programming languages aren't "mathematically pure" but some make it easier
than others to use functional idioms (by which I mean some mechanism to
emulate the types of operations you describe below). However many problems
are simpler to solve without such abstractions. That's why "practical
languages" like Smalltalk-80 and Lisp and their mainstream derivatives have
succeeded over their more pure counterparts.

I think I responded to this message because I was sick in bed yesterday,
reading the intro paragraphs of The Algorithmic Beauty of Plants (
http://algorithmicbotany.org/papers/#abop), and shared the introduction of
DOL-systems with my son. We both enjoyed it, but I'm not sure how this
translates into reasoning about distributed systems.
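A DOL-system is itself nothing but a set of parallel rewrite rules, which makes the connection to formal rewriting concrete. A minimal sketch, using Lindenmayer's classic algae example (the productions a -> ab, b -> a; not quoted from the book's text):

```python
# A minimal DOL-system (deterministic, context-free L-system):
# Lindenmayer's algae example, with production rules a -> ab, b -> a.
rules = {"a": "ab", "b": "a"}

def step(word):
    # All symbols are rewritten in parallel in a single derivation step.
    return "".join(rules.get(ch, ch) for ch in word)

word = "a"
for _ in range(4):
    word = step(word)
print(word)  # abaababa
```

Each derivation step rewrites every symbol at once; the word lengths follow the Fibonacci sequence, which is part of the book's opening example.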

Can the distributed computation model you describe be formalized as a set
of rewrite rules, or is the "black box" model really about a protocol for
message dispatch? Attempts to build distributed messaging systems haven't
been particularly simple. In fact I consider both CORBA and Web Services to
be failures for that reason.

It's very difficult to use OO in this way without imposing excessive
knowledge about the internal representation of objects if you need to
serialize parameters or response objects.

HTTP seems to have avoided this by using MIME types, but this is more about
agreed upon engineering standards rather than computational abstractions.

Cheers,
Steve

On Sun, Feb 12, 2012 at 4:02 AM, Jakob Praher  wrote:

>  We would have to define what you mean by the term computation.
> Computation is a way to transform a language "syntactically" by defined
> rules.
> The lambda calculus is a fundamental way of performing such transformation
> via reduction rules (the alpha, beta, and eta rules).
>
> In the end, beta-reduction is term substitution. But abstraction and
> substitution on a general-purpose von Neumann-style computer have to be
> modelled accordingly: a variable in the computer is a memory location or a
> register that can be updated (though it is not a 1:1 correspondence). For
> example, a function in a computer is a jump to a certain code location,
> which has to write to certain memory locations/registers to receive its
> arguments.
>
> IMHO the computational model of objects and method dispatch is more of a
> black-box, communication-oriented model: one does not know much about the
> destination; one dispatches a message and interprets the result. In
> functional languages the model is more white-box: one can always decompose
> a term into subterms and interpret it. Therefore functional languages do
> not extend easily to distributed programming, where knowledge of the terms
> is limited.
>
> Best,
> Jakob
>
>


Re: [fonc] COLAs or CLOAs? : are lambda systems fundamentally simpler than object systems?

2012-02-12 Thread Jakob Praher
We would have to define what you mean by the term computation.
Computation is a way to transform a language "syntactically" by defined
rules.
The lambda calculus is a fundamental way of performing such
transformation via reduction rules (the alpha, beta, and eta rules).

In the end, beta-reduction is term substitution. But abstraction and
substitution on a general-purpose von Neumann-style computer have to be
modelled accordingly: a variable in the computer is a memory location or
a register that can be updated (though it is not a 1:1 correspondence).
For example, a function in a computer is a jump to a certain code
location, which has to write to certain memory locations/registers to
receive its arguments.
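Beta-reduction as substitution can be sketched in a few lines. This is a capture-naive sketch under an assumed tuple encoding of terms (alpha-renaming of bound variables is deliberately omitted):

```python
# Minimal lambda-calculus terms as tuples:
#   ("var", name), ("lam", param, body), ("app", operator, operand).
# A capture-naive sketch of beta-reduction as term substitution.

def subst(term, name, value):
    """Replace free occurrences of `name` in `term` with `value`."""
    kind = term[0]
    if kind == "var":
        return value if term[1] == name else term
    if kind == "lam":
        param, body = term[1], term[2]
        if param == name:          # `name` is shadowed under this binder
            return term
        return ("lam", param, subst(body, name, value))
    if kind == "app":
        return ("app", subst(term[1], name, value), subst(term[2], name, value))

def beta(term):
    """One beta step: ((lam x. body) arg) -> body[x := arg]."""
    if term[0] == "app" and term[1][0] == "lam":
        _, param, body = term[1]
        return subst(body, param, term[2])
    return term

# ((lam x. x) y)  reduces to  y
identity_app = ("app", ("lam", "x", ("var", "x")), ("var", "y"))
print(beta(identity_app))  # ('var', 'y')
```

Mapping this onto a von Neumann machine is exactly the part that is not 1:1: the interpreter above copies terms, whereas compiled code updates registers and stack slots in place.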

IMHO the computational model of objects and method dispatch is more of a
black-box, communication-oriented model: one does not know much about
the destination; one dispatches a message and interprets the result. In
functional languages the model is more white-box: one can always
decompose a term into subterms and interpret it. Therefore functional
languages do not extend easily to distributed programming, where
knowledge of the terms is limited.
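The white-box/black-box contrast can be made concrete with a small sketch (the tuple encoding and the `Counter` class are illustrative, not from any system discussed here):

```python
# White-box: a functional term is plain data; any caller can take it
# apart into subterms. (Hypothetical tuple encoding of terms.)
term = ("app", ("lam", "x", ("var", "x")), ("var", "y"))
operator, operand = term[1], term[2]      # structural decomposition is free
print(operator[0], operand[0])            # lam var

# Black-box: an object hides its representation; a caller can only
# dispatch a message and interpret the result. (Illustrative class.)
class Counter:
    def __init__(self):
        self._count = 0                   # not visible to senders
    def increment(self):
        self._count += 1
        return self._count

c = Counter()
print(c.increment())                      # 1 -- the reply is all we learn
```

The black-box discipline is what survives distribution: a remote caller can still send `increment`, but it could never have inspected `_count` directly.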

Best,
Jakob

Am 12.02.12 03:20, schrieb Steve Wart:
> I think of OO as an organization mechanism. It doesn't add
> fundamentally to computation, but it allows complexity to be managed
> more easily.
>
> On Sat, Feb 11, 2012 at 5:23 PM, Kurt Stephens wrote:
>
>
> COLAs or CLOAs? : are lambda systems fundamentally simpler than
> object systems?
>
> Should Combined-Object-Lambda-Architecture really be
> Combined-Lambda-Object-Architecture?
>
> Ian Piumarta's IDST bootstraps an object system, then a compiler,
> then a lisp evaluator.  Maru bootstraps a lisp evaluator, then
> crafts an object system, then a compiler.  Maru is much smaller
> and more elegant than IDST.
>
> Are object systems necessarily more complex than lambda
> evaluators?  Or is this just another demonstration of how Lisp
> code/data unification is more powerful?
>
> If message send and function calls are decomposed into lookup()
> and apply(), the only difference between basic OO message passing
> and function calling is lookup(): the former is late-bound, the
> latter is early-bound (in the link-editor, for example).  Is OO
> lookup() the sole complicating factor?  Is a lambda-oriented
> compiler fundamentally less complex than an OO compiler?
>
> I took the object->lambda approach in TORT
> (http://github.com/kstephens/tort) and tried to keep the OO kernel
> small, delaying the compiler until after the lisp evaluator.  The
> object system started out "tiny", but supporting a lisp evaluator
> written in an OO style (on which the REPL and compiler are built)
> required a lot of basic foundational object functionality.
>  Despite its name, TORT is no longer tiny; I probably didn't
> restrain myself enough; it tries too hard to support C extensions
> and dynamic linking.
>
> Did Gregor Kiczales, Ian, and others stumble upon the benefits of
> lisp->object bootstrapping vs. object->lisp bootstrapping?  I've
> written object-oriented LISPs before
> (http://github.com/kstephens/ll, based on ideas from OAKLISP).  Do
> OO techniques make language implementation "feel" easier in the
> beginning, only to complect later on?
>
>  Just some ideas,
>  Kurt Stephens
>
> http://kurtstephens.com/node/154
>
>
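Kurt's lookup()/apply() decomposition quoted above can be sketched directly. The method-table encoding here is hypothetical, in the spirit of TORT's object model rather than its actual C implementation:

```python
# Sketch of message send decomposed into lookup() then apply():
# the only difference from a plain function call is that lookup()
# runs at call time (late-bound) instead of at link time.

def lookup(obj, selector):
    """Walk the method-table chain at call time (late binding)."""
    table = obj
    while table is not None:
        method = table.get("methods", {}).get(selector)
        if method is not None:
            return method
        table = table.get("parent")
    raise AttributeError(selector)

def send(obj, selector, *args):
    return lookup(obj, selector)(obj, *args)   # lookup, then apply

# Hypothetical "class" and instance, encoded as dicts.
point_class = {"parent": None,
               "methods": {"x": lambda self: self["x_slot"]}}
p = {"parent": point_class, "methods": {}, "x_slot": 3}
print(send(p, "x"))  # 3

# An early-bound call skips lookup(): the binding is fixed before run
# time, as the link-editor would fix it for an ordinary function.
get_x = point_class["methods"]["x"]
print(get_x(p))      # 3
```

Seen this way, the whole cost of OO relative to lambda is concentrated in that one late-bound `lookup()` step.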

