The same problems appear at the very high level of computing (beyond the
application boundary at the enterprise-wide level, and especially beyond the
enterprise boundary at the level of business ecosystems). There are "BPMN
engines", "issue trackers", "project management systems", "document
management/workflow systems", etc. And when you try to execute a
workflow/process/case across this mess of "engines", you have huge problems:
too many high-level execution paradigms (project, process, and case
management, complex event processing, etc.) and too few good architectures
and tools to handle them.

 

I think that scalability should extend not only from hardware up to the
"application as desktop publishing" level, but on to support enterprise
architecture and beyond (business-ecosystem architecture with federated
enterprises). SOA ideas in their current "enterprise bus" state are
definitely not helpful here.

 

I consider programming, modeling, and ontologizing from CPU hardware up to
the business-ecosystem level to be one and the same discipline, and I see
the transfer from programming/modeling/ontologizing-in-the-small to the
same-in-the-large as one of our urgent needs. We should generalize the
concept of external execution so that it keeps its meaning from the hardware
CPU core, to the OS/browser/distributed-application level, up to the
extended enterprise (a network of hundreds of enterprises carrying out
complex industrial projects such as the design and construction of a nuclear
power station).

 

Best regards,

Anatoly Levenchuk

 

From: fonc-boun...@vpri.org [mailto:fonc-boun...@vpri.org] On Behalf Of Alan
Kay
Sent: Thursday, March 01, 2012 3:10 AM
To: Duncan Mak; Fundamentals of New Computing
Subject: Re: [fonc] Error trying to compile COLA

 

Hi Duncan

 

The short answers to these questions have already been given a few times on
this list. But let me try another direction to approach this.

 

The first thing to notice about the overlapping windows interface "personal
computer experience" is that it is logically independent of the
code/processes running underneath. This means (a) you don't have to have a
single religion "down below", (b) the different kinds of things that might be
running can be protected from each other using the address space mechanisms
of the CPU(s), and (c) you can think about allowing "outsiders" to do pretty
much what they want to create a really scalable, really expandable WWW.

 

If you are going to put a "browser app" on an "OS", then the "browser" has
to be a mini-OS, not an app. 

 

But "standard apps" are a bad idea (we thought we'd gotten rid of them in
the 70s) because what you really want to do is to integrate functionality
visually and operationally using the overlapping windows interface, which
can safely get images from the processes and composite them on the screen.
(Everything is now kind of "super-desktop-publishing".) An "app" is now just
a kind of integration.
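
A minimal sketch of that compositing idea, in Python rather than a real
windowing system (all names here are illustrative, and OS address-space
protection is only approximated by separate processes): each "app" runs as
its own process and hands back nothing but a finished image, which the
compositor overlaps on the shared screen.

    # Toy compositor: "apps" run in separate OS processes and return only
    # pixel data (characters here); the compositor overlaps the images.
    from multiprocessing import Pool

    WIDTH, HEIGHT = 40, 8

    def render_clock(_):
        return (2, 1, ["+------+", "| 3:10 |", "+------+"])

    def render_editor(_):
        return (6, 3, ["+----------+", "| hello    |", "+----------+"])

    def composite(frames):
        screen = [[" "] * WIDTH for _ in range(HEIGHT)]
        for x, y, rows in frames:          # later windows overlap earlier ones
            for dy, row in enumerate(rows):
                for dx, ch in enumerate(row):
                    if 0 <= y + dy < HEIGHT and 0 <= x + dx < WIDTH:
                        screen[y + dy][x + dx] = ch
        return "\n".join("".join(r) for r in screen)

    if __name__ == "__main__":
        with Pool(2) as pool:              # one isolated process per "app"
            frames = [pool.apply(render_clock, (None,)),
                      pool.apply(render_editor, (None,))]
        print(composite(frames))

No app code ever runs in the compositor's address space; only images cross
the boundary, which is what makes the arrangement safe to open to outsiders.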

 

But the route that was actually taken with the WWW and the browser flew in
the face of what was already being done.

 

HyperCard existed, and showed what a WYSIWYG authoring system for end-users
could do. This was ignored.

 

PostScript existed, and showed that a small interpreter could be moved
easily from machine to machine while retaining meaning. This was ignored.
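
The property in question is easy to show with a toy (a hypothetical postfix
language, not PostScript itself): the program is plain text, so it travels
between machines unchanged, and any machine carrying the small interpreter
recovers the same meaning.

    # A tiny postfix interpreter: programs are just data that travel intact.
    def run(program: str):
        stack = []
        for token in program.split():
            if token.lstrip("-").isdigit():
                stack.append(int(token))
            elif token == "add":
                b, a = stack.pop(), stack.pop()
                stack.append(a + b)
            elif token == "mul":
                b, a = stack.pop(), stack.pop()
                stack.append(a * b)
            elif token == "print":
                print(stack.pop())
            else:
                raise ValueError("unknown word: " + token)

    run("3 4 add 10 mul print")   # prints 70 on any machine with the interpreter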

 

And so forth.

 

19 years later we see various attempts at inventing things that were already
around when the WWW was tacked together.

 

But the thing that is amazing to me is that in spite of the almost universal
deployment of it, it still can't do what you can do on any of the machines
it runs on. And there have been very few complaints about this from the
mostly naive end-users (and what seem to be mostly naive computer folks who
deal with it).

 

Some of the blame should go to Apple and MS for not making real OSs for
personal computers -- or better, going the distance to make something better
than the old OS model. In either case, both companies blew it on basic
protections between processes.

 

On the other hand, the WWW and first browsers were originally done on
workstations that had stronger systems underneath -- so why were they so
blind?

 

As an aside I should mention that there have been a number of attempts to do
something about "OS bloat". Unix was always "too little too late" but its
one outstanding feature early on was its tiny kernel with a design that
wanted everything else to be done in "user-mode-code". Many good things
could have come from the later programmers of this system realizing that
being careful about dependencies is a top priority. (And you especially do
not want to have your dependencies handled by a central monolith, etc.)

 

So, this gradually turned into an awful mess. But Linus went back to square
one and redefined a tiny kernel again -- the realization here is that you do
have to arbitrate basic resources of memory and process management, but you
should allow everyone else to make the systems they need. This really can
work well if processes can be small and interprocess communication fast (not
the way Intel and Motorola saw it ...). 
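
That division of labor can be shown in miniature (a toy sketch with
illustrative names; a real microkernel uses hardware protection and far
faster IPC): the "kernel" below does nothing but carry messages, and a file
service is just an ordinary user-level process.

    # Toy microkernel sketch: the kernel only provides processes and message
    # passing; services like this "file system" live entirely in user space.
    from multiprocessing import Process, Queue

    def fs_service(requests: Queue, replies: Queue):
        files = {"readme.txt": b"hello from user space"}
        while True:
            op, name = requests.get()
            if op == "read":
                replies.put(files.get(name, b""))
            elif op == "shutdown":
                break

    if __name__ == "__main__":
        requests, replies = Queue(), Queue()   # the "kernel": two IPC channels
        Process(target=fs_service, args=(requests, replies), daemon=True).start()
        requests.put(("read", "readme.txt"))
        print(replies.get())                   # b'hello from user space'
        requests.put(("shutdown", None))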

 

And I've also mentioned Popek's LOCUS system as a nice model for migrating
processes over a network. It was Unix only, but there was nothing about his
design that required this.
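
Migration of that sort reduces to a simple idea (sketched here with Python
pickling, which is only a stand-in for what LOCUS did with real processes
over a real network): capture a process's state, ship the bytes, and resume
on the other side.

    # Toy process migration: freeze state, move it, continue where it left off.
    import pickle

    class Counter:
        def __init__(self):
            self.n = 0
        def step(self):
            self.n += 1
            return self.n

    proc = Counter()
    for _ in range(3):
        proc.step()                  # runs on "machine A"

    wire = pickle.dumps(proc)        # state crosses the network as bytes
    resumed = pickle.loads(wire)     # "machine B" resumes the same process
    print(resumed.step())            # prints 4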

 

Cutting to the chase with a current day example. We made Etoys 15 years ago
so children could learn about math, science, systems, etc. It has a particle
system that allows many interesting things to be explored.
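
To give a sense of the load involved (a minimal sketch, nothing like the
actual Etoys code; the sizes and update rule are illustrative), a particle
system touches every particle on every frame, which is exactly the kind of
inner loop a slow interpreter chokes on:

    # Minimal particle system: per-frame position/velocity update with a bounce.
    import random

    def step(particles, dt=0.02, gravity=-9.8):
        for p in particles:
            p["vy"] += gravity * dt
            p["x"] += p["vx"] * dt
            p["y"] += p["vy"] * dt
            if p["y"] < 0:                       # bounce off the floor
                p["y"], p["vy"] = 0.0, -0.8 * p["vy"]

    particles = [{"x": 0.0, "y": 1.0,
                  "vx": random.uniform(-1, 1),
                  "vy": random.uniform(0, 5)} for _ in range(10_000)]

    for frame in range(100):                     # a million updates per 100 frames
        step(particles)
    print(particles[0])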

 

Windows (especially) is so porous that SysAdmins (especially in school
districts) will not allow teachers to download .exe files. This wipes out
the Squeak plugin that provides all the functionality.

 

But there is still the browser and Javascript. But Javascript isn't fast
enough to do the particle system. But why can't we just download the
particle system and run it in a safe address space? The browser people don't
yet understand that this is what they should have allowed in the first
place. So right now there is only one route for this (and a few years ago
there were none) -- and that is Native Client on Google Chrome. 

 

But Google Chrome has only 13% penetration, and the other browser fiefdoms
don't like NaCl.... Google Chrome is an .exe file, so teachers can't
download it (and if they could, they could download the Etoys plugin).

 

Just in from browserland ... there is now -- 19 years later -- an allowed
route to put samples in your machine's sound buffer that works on some of
the browsers.

 

Holy cow folks!

 

Alan

 

 

 


  _____  


From: Duncan Mak <duncan...@gmail.com>
To: Alan Kay <alan.n...@yahoo.com>; Fundamentals of New Computing
<fonc@vpri.org> 
Sent: Wednesday, February 29, 2012 11:50 AM
Subject: Re: [fonc] Error trying to compile COLA

 

Hello Alan,

On Tue, Feb 28, 2012 at 4:30 PM, Alan Kay <alan.n...@yahoo.com> wrote:

For example, one of the many current day standards that was dismissed
immediately is the WWW (one could hardly imagine more of a mess). 


I was talking to a friend the other day about the conversations going on in
this mailing list - my friend firmly believes that the Web (HTTP) is one of
the most important innovations in recent decades.


 

One thing he cites as innovative is a point that I think TimBL mentions
often: that the Web was successful (unlike prior hypertext systems) because
it allowed for broken links.

 

Is that really a good architectural choice? If not, is there a reason why
the Web succeeded, where previous hypertext systems failed? Is it only
because of "pop culture"?

 

What are the architectural flaws of the current Web? Is there anything that
could be done to make it better, in light of these flaws?

 

-- 
Duncan.

 

_______________________________________________
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc
