Re: [fonc] Terminology: "Object Oriented" vs "Message Oriented"

2013-02-14 Thread Brown, John Mickey
> From: Jeff Gonis <jeff.go...@gmail.com>
> To: Alan Kay <alan.n...@yahoo.com>
> Cc: Fundamentals of New Computing <fonc@vpri.org>
> Sent: Tuesday, February 12, 2013 10:33 AM
> Subject: Re: [fonc] Terminology: "Object Oriented" vs "Message Oriented"
>
> I see no one has taken Alan's bait and asked the million-dollar question:
> if you decided that messaging is no longer the right path for scaling,
> what approach are you currently using?


Since my last trip to the SPLASH conference a few years ago, I've been
contemplating a lot of these ideas, especially the messaging paradigm and
the current conundrum with concurrency and scaling.
A common theme (brought up by Ivan Sutherland, paraphrased here) is that in
the past processing was expensive; now it's communication. Consider the sheer
amount of development code and processing spent packing data into an
understandable form, serializing it, encrypting it, etc., so it can get to
another component to be processed (at least at the macro, distributed-systems
level). Now it's a concern with multi-core processors that need to process
data as well. I went to some of the workshops on functional languages
and Actors as a way to handle some of these issues through asynchronous
messaging.

My thought (though I've not found much work in this area) is that in this era
of virtualization, a small program or service can be instantiated anywhere, as
long as location independence is honored.
So why not do a lot of the "distributed work" by instantiating these
services where the majority of the data is located? Sure, there will be
times when data needs to be shipped somewhere, but most of the communication
needed would be smaller control data to coordinate the program
instantiations and handoff (like some of the IPC work I used to do with
Unix processes using semaphores). The most significant drawback I see to this
is handling fault tolerance so there are no deadlocks.
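
To make the idea concrete, here is a minimal sketch in plain Python. All the
names (Node, submit, run_near_data) are invented for illustration; there is no
real virtualization or networking layer here, just the shape of the control
flow: the task travels to the node that already holds the bulk of the data,
and only a small result comes back.

# Hypothetical sketch: run work where the data lives, exchange only
# small control messages. Names are illustrative, not any real
# framework's API.

class Node:
    def __init__(self, name, data):
        self.name = name
        self.data = data          # the bulk data already resident here

    def submit(self, task):
        # "Instantiate" the service locally; only the task description
        # and the small result travel, never the bulk data.
        return task(self.data)

def run_near_data(nodes, task):
    # Control-plane decision: pick the node holding the most data.
    target = max(nodes, key=lambda n: len(n.data))
    return target.name, target.submit(task)

nodes = [Node("a", list(range(10))), Node("b", list(range(1000)))]
where, total = run_near_data(nodes, lambda data: sum(data))
print(where, total)   # -> b 499500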

I realize there are far more experienced, smart people on this forum who
could shoot some holes in this approach, and I invite you to do so. My
feelings won't be hurt as long as I can learn some more.
Most of my experience is in the boring arena of Business IT (with my
earlier years in Defense with Simulators and Communication Systems).
Thanks for any reply,
   John Brown

From: Alan Kay <alan.n...@yahoo.com>
Reply-To: Alan Kay <alan.n...@yahoo.com>, Fundamentals of New Computing <fonc@vpri.org>
Date: Wed, 13 Feb 2013 18:51:59 -0500
To: Fundamentals of New Computing <fonc@vpri.org>
Subject: [SUSPECTED SPAM] Re: [fonc] Terminology: "Object Oriented" vs "Message Oriented"

Hi Barry

I like your characterization, and do think the next level will also require a
qualitatively different approach.

Cheers,

Alan


From: Barry Jay <barry@uts.edu.au>
To: fonc@vpri.org
Sent: Wednesday, February 13, 2013 1:13 PM
Subject: Re: [fonc] Terminology: "Object Oriented" vs "Message Oriented"

Hi Alan,

The phrase I picked up on was "doing experiments". One way to think of the
problem is that we are trying to automate the scientific process, which is a
blend of reasoning and experiments. Most of us focus on one or the other, as in
deductive AI versus databases of common knowledge, but the history of physics
etc. suggests that we need to develop both within a single system, e.g. a
language that supports both higher-order programming (for strategies, etc.) and
generic queries (for conducting experiments on newly met systems).

Yours,
Barry
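
As a toy illustration of that blend (plain Python, invented names; this is not
Barry's actual system): a higher-order strategy decides which experiments to
run, and a generic query performs them on objects we have just met.

def generic_query(obj, predicate):
    # Experiment: probe every public attribute of an unknown object.
    findings = {}
    for name in dir(obj):
        if name.startswith("_"):
            continue
        findings[name] = predicate(getattr(obj, name))
    return findings

def search_strategy(objects, query):
    # Reasoning: a higher-order strategy deciding what to query.
    return [generic_query(o, query) for o in objects]

# Which attributes of these newly met objects are callable operations?
print(search_strategy(["abc", 42], callable))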


On 02/14/2013 02:26 AM, Alan Kay wrote:
Hi Thiago

I think you are on a good path.

One way to think about this problem is that the broker is a human programmer 
who has received a module from halfway around the world that claims to provide
important services. The programmer would confine it in an address space and 
start doing experiments with it to try to discover what it does (and/or perhaps 
how well its behavior matches up to its claims). Many of the discovery 
approaches of Lenat in AM and Eurisko could be very useful here.
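
A toy sketch of that broker-as-experimenter loop (plain Python, invented
names; real confinement would need an actual address-space or capability
boundary): probe the untrusted module with sample inputs and score how well
its behavior matches its claims.

def claimed_sort(xs):            # stand-in for the untrusted module
    return sorted(xs)

def probe(fn, trials):
    # Run experiments; report the fraction of claims that held up.
    results = []
    for inp, expected in trials:
        try:
            results.append(fn(list(inp)) == expected)
        except Exception:
            results.append(False)
    return sum(results) / len(results)

trials = [([3, 1, 2], [1, 2, 3]), ([], []), ([5, 5], [5, 5])]
print(probe(claimed_sort, trials))   # 1.0 -> behavior matches its claim so far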

Another part of the scaling of modules approach could be to require modules to 
have much better models of the environments they expect/need in order to run.

For example, suppose a module has a variable that it would like to refer to 
some external resource. Both static and dynamic typing are insufficient here 
because they are only about kinds of results rather than meanings of results.

But we could readily imagine a language in which the variable had associated 
with it a "dummy" or "stand-in" model of what is desired. It could be a slow 
version of something we are hoping to get a faster version of. It could be 
sample values and tests, etc. All of these would be useful for debugging our 
module -- in fact, we could make this a requirement of our module system, that 
the modules carry enough information to allow them to be debugged with only 
their own model of the environment.
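
A hedged sketch of what such a stand-in might look like (plain Python,
invented names; this is one reading of the idea, not any actual module
system): the variable carries a slow reference version plus sample values and
tests, so candidate bindings can be vetted and the module debugged offline.

class StandIn:
    def __init__(self, slow_version, samples):
        self.slow_version = slow_version    # slow but trusted semantics
        self.samples = samples              # (input, expected) pairs

    def check(self, candidate):
        # Would this faster candidate be an acceptable binding?
        return all(candidate(x) == expected for x, expected in self.samples)

# The module wants "something that computes squares", fast.
square_model = StandIn(slow_version=lambda x: sum(x for _ in range(x)),
                       samples=[(0, 0), (3, 9), (10, 100)])

fast_square = lambda x: x * x
print(square_model.check(fast_square))          # True: safe to bind
print(square_model.check(lambda x: x + x))      # False: reject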

Re: [fonc] [SUSPECTED SPAM] Re: Terminology: "Object Oriented" vs "Message Oriented"

2013-02-13 Thread Brown, John Mickey


On 2/13/13 5:09 AM, "Thiago Silva"  wrote:

>Hello,
>
>as I was thinking over these problems today, here are some initial
>thoughts, just to get the conversation going...
>
>The first time I read about the Method Finder and Ted's memo, I tried to
>grasp the broader issue, and I'm still thinking of some interesting
>examples to explore.
>
>I can see the problem of finding operations by their meanings, the problem
>of finding objects by the services they provide, and the overall structure
>of the discovery, negotiation, and binding.
>
>My feeling is that, besides using worlds as a mechanism, an explicit
>"discovery" context may be required (though I can't say much without
>further experimentation), especially when trying to figure out operations
>that don't produce a distinguishable value but rather change the state of
>the computation (authenticating, opening a file, sending a message through
>the network, etc.) or when doing remote discovery.
>
>For brokering (and I'm presuming the use of such entities, as I could not
>get rid of them in my mind so far), my first thought was that a chain of
>brokers of some sort could be useful in the architecture, where each could
>have specific ways of mediating discovery and negotiation through the
>"levels" (or narrowed options, providing isolation for some services.
>Worlds come to mind).
>
>During the "binding time", I think it would be important that some
>requirements of the client could be relaxed or even tagged optional, to
>allow the module to execute at least a subset of its features (or to
>execute features with suboptimal operations) when full binding isn't
>possible -- though this might require special attention to guarantee that,
>e.g., disabling optional features doesn't break the execution.
>
>Further, different versions of services may require different kinds of
>pre/post-processing (e.g. initialization and finalization routines). When
>abstracting a service (e.g. storage) like this, I think that's when the
>"glue code" starts to require sophistication (because it needs to fill in
>more blanks)... and to have it automated, the provider will need to make
>requirements of the client as well. This is where I think a common
>vocabulary will be most necessary.
>
>--
>Thiago
>
>Excerpts from Alan Kay's message of 2013-02-12 16:12:40 -0300:
>> Hi Jeff
>>
>> I think "intermodule communication schemes" that *really scale* is one
>> of the most important open issues of the last 45 years or so.
>>
>> It is one of the several "pursuits" written into the STEPS proposal
>> that we didn't use our initial efforts on -- so we've done little to
>> advance this over the last few years. But now that the NSF-funded part
>> of STEPS has concluded, we are planning to use much of the other strand
>> of STEPS to look at some of these neglected issues.
>>
>> There are lots of facets, and one has to do with messaging. The idea
>> that "sending a message" has scaling problems is one that has been
>> around for quite a while. It was certainly something that we pondered at
>> PARC 35 years ago, and it was an issue earlier for both the ARPAnet and
>> its offspring: the Internet.
>>
>> Several members of th
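
A toy reading of Thiago's "binding time" relaxation above (plain Python, all
names invented): required capabilities must bind, while optional ones may fall
back to a suboptimal substitute or be cleanly disabled.

def bind(provider, required, optional):
    # Required capabilities must all be present, or binding fails.
    missing = [cap for cap in required if cap not in provider]
    if missing:
        raise RuntimeError(f"cannot bind, missing required: {missing}")
    bound = {cap: provider[cap] for cap in required}
    disabled = []
    for cap, fallback in optional.items():
        if cap in provider:
            bound[cap] = provider[cap]
        elif fallback is not None:
            bound[cap] = fallback        # suboptimal but workable
        else:
            disabled.append(cap)         # feature tagged off, not broken
    return bound, disabled

provider = {"read": lambda k: f"value-of-{k}"}
bound, off = bind(provider,
                  required=["read"],
                  optional={"cache": None, "compress": lambda b: b})
print(sorted(bound), off)   # ['compress', 'read'] ['cache']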

Re: [fonc] Terminology: "Object Oriented" vs "Message Oriented"

2013-02-12 Thread Brown, John Mickey
Dude…. You said shiny "objects"…. Lol.

Messaging certainly seems to have a larger focus with multi-core, many-core,
and cloud computing concepts (which are themselves morphing into shiny objects).
I also enjoy these history lessons and discussions.

John

From: David Hussman <david.huss...@devjam.com>
Reply-To: Fundamentals of New Computing <fonc@vpri.org>
Date: Tue, 12 Feb 2013 11:36:35 -0500
To: 'Alan Kay' <alan.n...@yahoo.com>, 'Fundamentals of New Computing' <fonc@vpri.org>
Subject: Re: [fonc] Terminology: "Object Oriented" vs "Message Oriented"

Alan,

Thanks for the thoughtful words / history. I am a lurker on this group and I 
dig seeing this kind of dialog during times when I am so often surrounded by 
bright shiny object types.

David

From: fonc-boun...@vpri.org 
[mailto:fonc-boun...@vpri.org] On Behalf Of Alan Kay
Sent: Tuesday, February 12, 2013 10:23 AM
To: Fundamentals of New Computing
Subject: Re: [fonc] Terminology: "Object Oriented" vs "Message Oriented"

Hi Loup

I think how this happened has already been described in "The Early History of 
Smalltalk".

But ...

In the Fall of 1966, Sketchpad was what got me started thinking about 
"representing concepts as whole things". Simula, a week later, provided a 
glimpse of how one could "deal with issues that couldn't be done wonderfully 
with constraints and solving" (namely, you could hide procedures inside the 
entities).

This triggered off many thoughts in a few minutes, bringing in "ideas that 
seemed similar" from biology, math (algebras), logic (Carnap's intensional 
logic), philosophy (Plato's "Ideas"), hardware (running multiple active units 
off a bus), systems design (the use of virtual machines in time-sharing), and 
networking (the ARPA community was getting ready to do the ARPAnet). Bob Barton 
had pronounced that "recursive design is making the parts have the same powers 
as the wholes", which for the first time I was able to see was really powerful 
if the wholes and the parts were entire computers (hardware, software, or some
mixture).

The latter was hugely important to me because it allowed a "universal 
simulation system" to be created from just a few ideas that would cover 
everything and every other kind of thing.

During this period I had no label for what I was doing, not even "this thing I
was doing"; I was just doing.

A few months later someone asked me what I was doing, and I didn't think about 
the answer -- I was still trying to see how the synthesis of ideas could be 
pulled off without a lot of machinery (kind of the math stage of the process).

Back then, there was already a term in use called "data driven programming". 
This is where "data" contains info that will help find appropriate procedures.

And the term "objects" was also used for "composite data" i.e. blocks of 
storage with different fields containing values of various kinds. This came 
naturally from "card images" (punched cards were usually 80 or more characters 
long and divided into fields).

At some point someone (probably in the 50s) decided to use some of the fields 
to help the logic of plug board programming and "drive" the processes off the 
cards rather than "just processing" them.
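
In modern terms, that old sense of "data driven" is just dispatch keyed off a
field in the record. A minimal Python illustration (invented names, not any
historical system's code):

# A field in each record selects the procedure that processes it,
# much as fields on a punched card could "drive" plug-board logic.

handlers = {
    "deposit":  lambda rec: f"credit {rec['amount']}",
    "withdraw": lambda rec: f"debit {rec['amount']}",
}

def process(record):
    # The data itself names its appropriate procedure.
    return handlers[record["kind"]](record)

print(process({"kind": "deposit", "amount": 100}))   # credit 100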

So if you looked at how Sketchpad was implemented you would see, in the terms 
of the day: "objects that were data driven". Ivan gives Doug Ross credit for 
his "plex structures", which were an MIT way to think about these ideas. 
Sketchpad also used "threaded lists" in its blocks (this was not a great idea 
but it was popular back then -- Simula later took this up as well).

So I just said "object oriented programming" and went back to work.

Later I regretted this (and some of the other labels that were also put in 
service) after the ideas worked out nicely and were very powerful for us at 
PARC.

The success of the ideas made what we were doing popular, and people wanted to 
be a part of it. This led to using the term "object oriented" as a designer 
jeans label for pretty much anything (there was even an "object-oriented" 
COBOL!). This appropriation of labels without content is a typical pop culture 
"fantasy football" syndrome.

PARC was an integral part of the ARPA community, the last gasp of which in the 
70s was designing the Internet via a design group that contained PARC people 
(PARC had actually already done an "internetwork" -- called PUP -- with 
"gateways" (routers) to interconnect Ethernetworks and other networks within 
Xerox).

It was clear to all in this community from the mid-60s onward that how 
"messaging" was done was one of the keys to achieving scaling. This is why 
"what I was working on" had "messages" as the larger coordination idea (rather 
than the subset of "calls").

At PARC we wanted to do a complete personal computing system on the Alto, which 
was a microcoded ~150 ns cycle CPU with 16 program counters and 64K 16-bit
words of memory that cycled at ~750 ns (where hal

Re: [fonc] Ask For Forgiveness Programming - Or How We'll Program 1000 Cores

2012-04-13 Thread Brown, John Mickey
Nice write-up. It seems to convey the same topics I thought of during the
conference. I told David Ungar that I mostly agreed with his premise because at
the macro scale (distributed computing), we've been dealing with this issue of
scaling and data staleness for decades. During requirements gathering, I've
often asked the business about its tolerance for receiving correct, timely
data. Purely synchronized solutions in the enterprise are extremely costly.

Although the application may be different and have different constraints at the 
micro level, there's opportunity for the two camps (micro and enterprise) to 
learn practices, patterns, and innovation from each other that can impact both 
fields.

I'm particularly interested in how to apply Actor-language concepts to
enterprise business applications so they can become more mainstream (perhaps in
the ESB and Fabric space).

Glad to find someone else that recognized the same parallels (pardon the pun).

John Mickey Brown - Application Architect

"In times of drastic change, it is the learners who inherit the future. The 
learned usually find themselves equipped to live in a world that no longer 
exists" - Eric Hoffner

-Original Message-
From: fonc-boun...@vpri.org [mailto:fonc-boun...@vpri.org] On Behalf Of Eugen 
Leitl
Sent: Friday, April 13, 2012 5:53 AM
To: t...@postbiota.org; i...@postbiota.org; forkit!
Cc: Fundamentals of New Computing
Subject: [fonc] Ask For Forgiveness Programming - Or How We'll Program 1000 
Cores


http://highscalability.com/blog/2012/3/6/ask-for-forgiveness-programming-or-how-well-program-1000-cor.html

Ask For Forgiveness Programming - Or How We'll Program 1000 Cores

Tuesday, March 6, 2012 at 9:15AM

The argument for a massively multicore future is now familiar: while clock
speeds have leveled off, device density is increasing, so the future is cheap
chips with hundreds and thousands of cores. That’s the inexorable logic
behind our multicore future.

The unsolved question that lurks deep in the dark part of a programmer’s mind
is: how on earth are we to program these things? For problems that aren’t
embarrassingly parallel, we really have no idea. IBM Research’s David Ungar
has an idea. And it’s radical in the extreme...

Grace Hopper once advised “It's easier to ask for forgiveness than it is to
get permission.” I wonder if she had any idea that her strategy for dealing
with human bureaucracy would be the same strategy David Ungar thinks will help
us tame the technological bureaucracy of 1000+ core systems?

You may recognize David as the co-creator of the Self programming language,
inspiration for the HotSpot technology in the JVM and the prototype model
used by JavaScript. He’s also the innovator behind using cartoon animation
techniques to build user interfaces. Now he’s applying that same creative
zeal to solving the multicore problem.

During a talk on his research, Everything You Know (about Parallel
Programming) Is Wrong! A Wild Screed about the Future, he called his approach
“anti-lock” or “race and repair”, because the core idea is that the only way
we’re going to be able to program the new multicore chips of the future is to
sidestep Amdahl’s Law and program without serialization, without locks,
embracing non-determinism. Without locks, calculations will obviously be
wrong, but correct answers can be approached over time using techniques like
fresheners:

A thread that, instead of responding to user requests, repeatedly selects
a cached value according to some strategy, and recomputes that value from its
inputs, in case the value had been inconsistent. Experimentation with a
prototype showed that on a 16-core system with a 50/50 split between workers
and fresheners, fewer than 2% of the queries would return an answer that had
been stale for at least eight mean query times. These results suggest that
tolerance of inconsistency can be an effective strategy in circumventing
Amdahl’s law.
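
Here is a hedged sketch of that freshener idea in plain Python threads
(illustrative only; Ungar's prototype is not this code): workers read a cached
value without locks while a freshener thread repeatedly recomputes it from its
inputs, repairing staleness after the fact.

import threading, time, random

inputs = {"x": 1}
cache = {"total": 1}            # possibly-stale derived value

def recompute():
    return inputs["x"] * 2      # the "correct" derivation

def freshener(stop):
    # Race and repair: keep rewriting the cached value, no locks.
    while not stop.is_set():
        cache["total"] = recompute()
        time.sleep(0.001)

def worker(i):
    # Readers tolerate inconsistency: answers may briefly be stale.
    print(f"worker {i} sees total={cache['total']}")

stop = threading.Event()
threading.Thread(target=freshener, args=(stop,), daemon=True).start()
for i in range(3):
    inputs["x"] = random.randint(1, 10)     # unsynchronized update
    time.sleep(0.005)
    worker(i)
stop.set()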

During his talk David mentioned that he’s trying to find a better name than
“anti-lock” or “race and repair” for this line of thinking. Throwing my hat
into the name game, I want to call it Ask For Forgiveness Programming (AFFP),
based on the idea that using locks is “asking for permission” programming, so
not using locks along with fresheners is really “asking for forgiveness.” I
think it works, but it’s just a thought.

No Shared Lock Goes Unpunished

Amdahl’s Law is used to understand why simply having more cores won’t save us
for a large class of problems. The idea is that any program is made up of a
serial fraction and a parallel fraction, and more cores only help with the
parallel portion. If an operation takes 10 seconds, for example, and one
second of it is serial, then even infinitely many cores can only speed up the
parallelizable part; the serial code will always take one second. Amdahl says
you can never do better than that serial second, so the speedup is capped at
10x. As long as your code has a serial portion it’s impossible to go faster.
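
For reference, the standard Amdahl arithmetic behind that example (not spelled
out in the post itself), with parallel fraction f = 0.9, i.e. 9 of the 10
seconds:

  S(N) = 1 / ((1 - f) + f/N)
  S(infinity) = 1 / (1 - 0.9) = 10

So infinitely many cores buy at most a 10x speedup; the one serial second is
the floor.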

Jakob Engb

RE: [fonc] Fwd: [AGERE! at SPLASH] Talks by Mark Miller

2011-11-08 Thread Brown, John Mickey
I was able to attend the AGERE! workshop at SPLASH. Very interesting concepts.
I'm interested to see how Actor-based programming can enter mainstream
programming to provide some consistency in EDA systems.

John Mickey Brown - Application Architect

From: fonc-boun...@vpri.org [mailto:fonc-boun...@vpri.org] On Behalf Of Max 
OrHai
Sent: Tuesday, November 08, 2011 3:54 PM
To: Fundamentals of New Computing
Subject: [fonc] Fwd: [AGERE! at SPLASH] Talks by Mark Miller

Some on this list with interests in security may enjoy these, too...

Related:
- The AGERE! (Actors and Agents Reloaded) workshop webpage: 
http://www.alice.unibo.it/xwiki/bin/view/AGERE/

- AmbientTalk (actor language for mobile devices): http://soft.vub.ac.be/amop/

-- Max

-- Forwarded message --
From: Tom Van Cutsem <tomvc...@gmail.com>
Date: Thu, Nov 3, 2011 at 12:37 PM
Subject: [AGERE! at SPLASH] Talks by Mark Miller
To: agere-at-spl...@googlegroups.com


Dear all,

During the panel session, Mark Miller showed some slides from a talk he gave at 
our university (University of Brussels, Belgium) a couple of weeks ago. At the 
workshop, I promised to forward links to the videos of the full talks once
they became available. See the abstract and links below.

How does this relate to actors? Mark talks about capability-based security,
which meshes really well with object-oriented programming and, in the
distributed case, with actor-based programming. Don't worry if you are not an
expert on security: Mark explains the issues in a very clear and
understandable way.

Thanks again to the organizers for a successful AGERE! workshop.

Kind regards,
Tom Van Cutsem

Talk 1/2: Secure Distributed Programming with Object-capabilities in JavaScript

Until now, browser-based security has been hell. The object-capability (ocap) 
model provides a simple and expressive alternative. Google's Caja project uses 
the latest JavaScript standard, EcmaScript 5, to support fine-grained safe 
mobile code, solving the secure mashup problem. Dr. SES -- Distributed 
Resilient Secure EcmaScript -- extends the ocap model cryptographically over 
the network, enabling RESTful composition of mutually suspicious web services. 
We show how to apply the expressiveness of object programming to the expression 
of security patterns, solving security problems normally thought to be 
difficult with simple elegant programs.

Slides: 
Video: 

Talk 2/2: Bringing Object-orientation to Security Programming

Just as we should not expect our base programming language to provide all the 
data types we need, so we should not expect our security foundation to provide 
all the abstractions we need to express security policy. The answer to both is 
the same: We need foundations that provide simple abstraction mechanisms, which 
we use to build an open ended set of abstractions, which we then use to express 
policy. We show how to use EcmaScript 5 to enforce the security latent in 
object-oriented abstraction mechanisms: encapsulation, message-passing, 
polymorphism, and interposition. With these secured, we show how to build 
abstractions for confinement, rights amplification, transitive wrapping and 
revocation, and smart contracts.
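
The revocation piece, at least, has a well-known shape: the "caretaker"
(revocable forwarder) pattern. Below is that generic pattern sketched in
Python rather than EcmaScript 5; it is not Caja's or Dr. SES's actual API.

def make_caretaker(target):
    # Return (proxy, revoke): proxy forwards to target until revoked.
    state = {"target": target}

    class Proxy:
        def __getattr__(self, name):
            t = state["target"]
            if t is None:
                raise PermissionError("capability revoked")
            return getattr(t, name)

    def revoke():
        state["target"] = None     # cuts off all access through the proxy

    return Proxy(), revoke

class File:
    def read(self):
        return "secret contents"

proxy, revoke = make_caretaker(File())
print(proxy.read())    # "secret contents" -- access granted via proxy
revoke()
try:
    proxy.read()
except PermissionError as e:
    print(e)           # capability revoked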

Slides: 
Video: 
--
You received this message because you are subscribed to the Google Groups 
"AGERE! at SPLASH" group.
To post to this group, send email to 
agere-at-spl...@googlegroups.com.
To unsubscribe from this group, send email to 
agere-at-splash+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/agere-at-splash?hl=en.

___
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc