Hi John

Yes, we are talking about ideas spread out over time. The Alto happened in 1973
and the Ethernet was put into practical use a year or so later. And after that
came the Dolphin and Dorado computers (which were Alto-like but faster and
bigger). Smalltalk images in the latter part of the 70s included microcode
"personalities" for each of the three machines, which turned them into the same
Smalltalk Virtual Machine. So one could get the image from the distributed
file system and run it efficiently regardless of what machine one was on.


Gerry Popek from UCLA spent a year at PARC and decided to make an "on the fly"
distributed process load-balancing network OS (called the LOCUS Distributed
Operating System -- there's a book of the same name from MIT Press in the late
80s -- it is well worth reading). This was essentially a modified Unix (he was
not at PARC anymore), with portable processes that could be automatically moved
around the network while they were running. The implementation was working
quite well by 1985 on heterogeneous collections of PDP-11s, Macs, and PCs
(using similar "personality" hooks for the code) on the commercial Ethernet of
the day. I tried to get Apple to buy this, but to no avail.

So the idea that hardware on networks should just be caches for movable process
descriptions and the processes themselves goes back quite a ways. There's a
real sense in which MS and Apple never understood networking or operating
systems (or what objects really are), and when they decided to beef up their
OSs, they went to (different) very old bad mainframe models of OS design to try
to adapt to personal computers.

Cheers,

Alan




________________________________
From: John Zabroski <johnzabro...@gmail.com>
To: Fundamentals of New Computing <fonc@vpri.org>
Sent: Mon, January 3, 2011 2:59:51 PM
Subject: Re: [fonc] The Elements of Computing Systems




On Mon, Jan 3, 2011 at 2:01 PM, Alan Kay <alan.n...@yahoo.com> wrote:

>Please say more ...
>
>The Alto didn't have any hardware for this ... nor did it have any regular code
>... it was microcoded and almost a Turing Machine in many ways. The main feature
>of the hardware architecture was the 16-way zero-overhead multitasking of the
>microcode pre-triggered by simple events.
>
>Are you actually commenting on the way the first Ethernet interface was done?
>
>Cheers,
>
>Alan
>


Thought about this...

I think I confused three different projects: the Interim Dynabook OS for
Smalltalk-72, the Alto, and Butler Lampson's much later work on secure
distributed computing
(http://research.microsoft.com/en-us/um/people/blampson/Systems.html#DSsecurity).

I don't think I've ever read a hardware description of the Alto. I'm 26 years 
old and there are only so many bits an eye can see ;-)

From what I read about the Alto, it was fairly cool in that, if I understood the
description right, the network was basically a virtual paging system for the
local computer, and you could essentially have your local computer hold whatever
local features were most important for you to have (e.g., in the event of
network failures and unavailability of services). But I think it was reading
Butler's later work that made me think the Alto was half-baked in this regard
and mainly a hacked-together good idea. For example, I didn't see anywhere that
said the image could be persisted locally, and that the contents of that image
could be named elsewhere on the network (e.g., some metadata describing the
content of that image). Imagine content-based routing for programs, since
programs are really just content; you kind of see the modern Web Browser edging
slowly in this direction, vis-à-vis MS Research showing that the best way to
partition programs is by their features. Given that, it is a short jump to
realize that through modular definition you can basically share lots and lots
of code throughout a network, but the key is that the representation of the
code has to be at a suitably high level of abstraction so that the content can
be cached closer to the edges of the network.

Basically, an operating system should be a recursively defined resource, and
anybody who wants a copy of that operating system should be able to just point
to an address where an image of that resource is located, automatically suck in
that image, and start running. And there should be no obligation that, once the
thing starts running, it has to be the same thing forever.
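
To make that concrete, here is a minimal sketch (Python, purely illustrative;
the URL, content hash, and run_image() hook are hypothetical, not anything from
the Alto or LOCUS) of a node that names an image by its content, fetches it
from wherever it happens to live, caches it locally, and hands it to whatever
local "personality" can run it:

import hashlib
import pathlib
import urllib.request

CACHE = pathlib.Path("image-cache")

def fetch_image(url, expected_sha256):
    """Return the image bytes, treating local storage as just a cache."""
    CACHE.mkdir(exist_ok=True)
    cached = CACHE / expected_sha256
    if cached.exists():
        return cached.read_bytes()
    data = urllib.request.urlopen(url).read()
    if hashlib.sha256(data).hexdigest() != expected_sha256:
        raise ValueError("fetched image does not match its content name")
    cached.write_bytes(data)  # the node holds a copy, not "the" copy
    return data

def run_image(image):
    """Placeholder for whatever local interpreter boots the image."""
    print("would boot a %d-byte image here" % len(image))

# Usage (hypothetical address and content name):
# run_image(fetch_image("http://example.org/some.image", "<sha256 of the image>"))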

Hope that helps.



      
_______________________________________________
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc
