Hi Alan,

thanks for elaborating.  I guess I had seen some of those criticisms before but 
not connected them with actual design flaws.

While there is certainly room for criticism, it seems hard to dispute that the 
WWW is a remarkably successful artifact, a vast and unprecedented global 
distributed computing and document engine that actually works at a scale of 
1,000,000,000 : 1.  Just in terms of scaling, that seems pretty good to me and 
worthy of study.  And it manages to work where previous efforts by Really Smart 
People™ utterly failed, which is also interesting.  Last but not least, the fact 
that it contradicts our theories, which say that it is put together "wrong", 
makes it not less but more interesting, at least to the scientist/empiricist in 
me. 


More specifically, some of your criticisms seem a bit unfair to me.  For 
example, TBL was trying to build a document distribution system for scientists, 
not the ultimate distributed computing platform.  That part happened sort of by 
accident.  While a document is a special case of an app, the fact is that this 
view does make a lot of things much more complicated (I am currently adopting 
that approach for tablet educational content), and limiting the design in the 
way he did seems entirely appropriate for the desired goal.  Being based on the 
NeXT text system, the original WWW app also naturally included authoring as 
well as viewing.   The fact that later browsers omitted this feature can't 
really be laid on the doorstep of the original design, and in fact seems to be 
more a reflection of a human condition than a technical one:  our wishes 
notwithstanding, people consume much more than they author.  Even in forums 
where "authoring" is as trivial as typing text and hitting return, the ratio is 
in the range of 10:1 to 100:1 in favor of consumption.  And even those 
consumption/creation ratios typically have an unfavorable signal-to-noise 
ratio.  Creation being harder and rarer than consumption is not (no longer?) 
primarily a technical problem.

In terms of dynamic content in a browser, we already have tons of Web 2.0 apps; 
Fabrice Bellard has shown that we can run Linux in JavaScript ( 
http://bellard.org/jslinux/index.html ), and another site runs Win XP in a Java 
sandbox ( http://jpc2.com/ ).  Google Native Client may give us a bit of a 
performance boost, but I don't see it bringing anything fundamentally new to 
the table that will drastically change the overall situation. 


I am also not sure whether taking a single-address-space model of computing and 
extending it to internet-scale is the right direction to take.  Apart from the 
well-documented pitfalls of this approach, one of the big lessons I *thought* I 
had learned from you (i.e. your writings) was to take something large 
(computers in a network exchanging messages) and scale it down (objects and 
messages in a single computer), rather than take something small (CPU, 
instructions) and attempt to scale it up (ADTs, RPCs, …).  Of course, it is 
likely that I misunderstood.  Before the WWW, we really didn't have a large 
(global-scale) distributed system to scale down, just our ideas and analogies 
(cells from biology being one example) of what such a system might look like.   
Well, now we actually have a 1E9 : 1 scale system to look at and scale down, 
and it turns out that it looks a bit different than we thought it would. 
This seems like a good thing to me (see above), because it means we have an 
opportunity for learning.

So maybe it is true that there should be no difference between our "local" and 
our "global" apps, but instead of making our global apps look like our local 
ones, a more fruitful approach could be to make our local apps look like our 
global ones.

There are a bunch of features that appear worthwhile to me, the first one being 
the fact that we can hide computation behind a "static document" interface.   
So when I type in a URI, http://www.vpri.org/ , the interface I am using 
treats the resource as just a thing that I have referenced using a single name. 
Underneath, a lot of messages are exchanged in various forms to make this 
happen, but this is hidden.  Furthermore, the endpoint that the name is 
resolved to may be a static resource or a program that generates the resource 
dynamically, I have no way of finding out.  That's pretty decoupled!  If you 
believe that the "Rule of Least Expressiveness" stated by Van Roy/Haridi ("When 
programming a component, the right computation model for the component is the 
least expressive model that results in a natural program") is a good thing, 
which I do, then this is a powerful feature.  
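To make the decoupling concrete, here is a toy sketch (in Python, not real HTTP, and with made-up resource names): a single resolver dereferences a URI-like path, and the caller cannot tell whether the name is bound to a static representation or to a program that computes one on demand.

```python
# Toy sketch: a single name can hide either a static resource or a
# computation; the client dereferences it the same way in both cases.
import datetime

# Hypothetical resource table: values are either plain strings (static)
# or zero-argument callables (dynamic).
resources = {
    "/about": "<html>We build software.</html>",
    "/now": lambda: f"<html>{datetime.date.today()}</html>",
}

def get(path):
    """Resolve a name to a representation; static vs. dynamic is hidden."""
    endpoint = resources[path]
    return endpoint() if callable(endpoint) else endpoint

# Both calls look identical from the client's side:
print(get("/about"))
print(get("/now"))
```

The client's "computation model" here is as inexpressive as it gets: look up a name, get a representation. Everything richer stays behind the interface.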

It also has many powerful and serendipitous consequences.  For example, when I 
helped build a CMS in a largely RESTful style (we weren't aware of the term or 
the architecture, just like we weren't aware of eXtreme Programming, we just 
thought it was a good idea to build it that way), not only were we almost 
totally resilient against crashes (system updates between mouse-clicks!), our 
users were able to configure their UI themselves by saving bookmarks to parts 
of the program they needed to access frequently.  Documentation could easily 
become active by embedding links to the live-system right in the help-files 
describing the functionality.  With the adoption of a "cooler" dynamic 
JavaScript / Web 2.0 interface that is more like a traditional app (Squeak or 
otherwise), these capabilities were lost.  I personally find dynamic sites 
(dynamic on the client, so JavaScript, Flash, Java) less usable/useful than 
static ones. 
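The "bookmarks as UI configuration" effect falls out of one property: every screen is addressable by a URI, so all the state needed to rebuild a view lives in the name itself, and the server holds no per-session UI state to lose between clicks. A minimal sketch (all paths and parameters invented for illustration):

```python
# Sketch: if the URI carries all the view state, a bookmark *is* a saved
# UI configuration, and nothing is lost when the server restarts
# between mouse-clicks.
def render(uri):
    """Rebuild a view purely from the name; no hidden session state."""
    path, _, query = uri.partition("?")
    filters = dict(p.split("=") for p in query.split("&")) if query else {}
    return {"view": path.strip("/"), "filters": filters}

# Two users "configure" their UIs just by keeping different bookmarks:
editor_bookmark = "/articles?status=draft&author=me"
admin_bookmark = "/users?role=admin"

print(render(editor_bookmark))
print(render(admin_bookmark))
```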


Back to useful features of the REST model, having the "what can I do next" 
information embedded in the answer to my request ("Hypertext as the carrier of 
application state") also seems to be a powerful way of really, really, really 
late binding APIs.  Pushing content negotiation into the infrastructure makes 
things less brittle by allowing multiple users to have different views of the 
same resource without having to pollute the model.   Clearly separating 
simple/idempotent GET and PUT requests from rarer/more complex POSTs not only 
enables caching and scalability on a global scale, but also seems like a good 
way of separating basic CRUD tasks, which just won't go away, from the 
semantically richer, intensional messages that should be at the heart of good 
OO design.  No more accessor messages, let the URIs take care of that and make 
the messaging interface intensional  :)
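Two of these ideas can be sketched in a few lines (the resource, link names, and media types below are all made up): the reply carries the next legal transitions as links, so the client binds to the API as late as possible, and content negotiation gives different users different views of one resource without touching the model.

```python
# Sketch of "hypertext as the carrier of application state" plus
# content negotiation, over an invented "order" resource.
import json

order = {"id": 42, "status": "open", "total": "9.99"}

def represent(resource, accept):
    """Content negotiation: one model, several representations."""
    if accept == "application/json":
        return json.dumps(resource)
    # fall back to a plain-text view; the model itself is untouched
    return "\n".join(f"{k}: {v}" for k, v in resource.items())

def get_order(accept="application/json"):
    """The reply embeds what the client may do next as links,
    instead of the client hard-coding the API up front."""
    return {
        "body": represent(order, accept),
        "links": {"pay": "/orders/42/payment", "cancel": "/orders/42"},
    }

reply = get_order()
# The client discovers its next steps from the reply itself:
print(reply["links"])
```

A client written against this style only ever hard-codes the entry point; everything after that is late-bound through the links it is handed.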


Anyway:  while I was thinking and working on software architecture and what it 
might mean for the next steps in programming 
(http://lists.squeakfoundation.org/pipermail/squeak-dev/1998-October/024942.html)
 the web just sort of happened.  The fact that most of the components are 
cobbled together in Perl ( http://xkcd.com/224/ ), and thus rather ungainly, 
caused me to fail to realize that the way the pieces fit together, the 
interstitial aspects ("ma"?), was actually rather profound, often not for what 
they put in but rather for what they left out to make things work on a global 
scale.

That's why your slightly off-the-cuff remark startled me and made me ask.  
Thanks again for clarifying, and I hope my response is a positive contribution.

Just my 2 €-¢ and worth every single one of the two :-)

Marcel



On Jul 24, 2011, at 19:24 , Alan Kay wrote:

> Hi Marcel
> 
> I think I've already said a bit about the Web on this list -- mostly about 
> the complete misunderstanding of the situation the web and browser designers 
> had. 
> 
> All the systems principles needed for a good design were already extant, but 
> I don't think they were known to the designers, even though many of them were 
> embedded in the actual computers and operating systems they used.
> 
> The simplest way to see what I'm talking about is to notice the many-many 
> things that could be done on a personal computer/workstation that couldn't be 
> done in the web & browser running on the very same personal 
> computer/workstation. There was never any good reason for these differences.
> 
> Another way to look at this is from the point of view of "separation of 
> concerns". A big question in any system is "how much does 'Part A' have to 
> know about 'Part B' (and vice versa) in order to make things happen?" The web 
> and browser designs fail on this really badly, and have forced set after set 
> of weak conventions into larger and larger, but still weak browsers and, 
> worse, onto zillions of web pages on the net. 
> 
> Basically, one of the main parts of good systems design is to try to find 
> ways to finesse safe actions without having to know much. So -- for example 
> -- Squeak runs everywhere because it can carry all of its own resources with 
> it, and the OS processes/address-spaces allow it to run safely, but do not 
> have to know anything about Squeak to run it. Similarly Squeak does not have 
> to know much to run on every machine - just how to get events, a display 
> buffer, and to map its file conventions onto the local ones. On a bare 
> machine, Squeak *is* the OS, etc. So much for old ideas from the 70s!
> 
> The main idea here is that a windowing 2.5 D UI can compose views from many 
> sources into a "page". The sources can be opaque because they can even do 
> their own rendering if needed. Since the sources can run in protected 
> address-spaces their actions can be confined, and "we" the mini-OS running 
> all this do not have to know anything about them. This is how apps work on 
> personal computers, and there is no reason why things shouldn't work this way 
> when the address-spaces come from other parts of the net. There would then be 
> no difference between "local" and "global" apps.
> 
> Since parts of the address spaces can be externalized, indexing as rich (and 
> richer) to what we have now still can be done.
> 
> And so forth.
> 
> The Native Client part of Chrome finally allows what should have been done in 
> the first place (we are now about 20+ years after the first web proposals by 
> Berners-Lee).  However, this approach will need to be adopted by most of the 
> already existing multiple browsers before it can really be used in a 
> practical way in the world of personal computing -- and there are signs that 
> there is not a lot of agreement or understanding why this would be a good 
> thing. 
> 
> The sad and odd thing is that so many people in the computer field were so 
> lacking in "systems consciousness" that they couldn't see this, and failed to 
> complain mightily as the web was being set up and a really painful genii was 
> being let out of the bottle.
> 
> As Kurt Vonnegut used to say "And so it goes".
> 
> Cheers,
> 
> Alan
> 
> From: Marcel Weiher <[email protected]>
> To: Fundamentals of New Computing <[email protected]>
> Cc: Alan Kay <[email protected]>
> Sent: Sun, July 24, 2011 5:39:26 AM
> Subject: Re: [fonc] Alan Kay talk at HPI in Potsdam
> 
> [..]
> There was one question I had on the scaling issue that would not have fitted 
> in the Q&A:   while praising the design of the Internet, you spoke less well 
> of the World Wide Web, which surprised me a bit.   Can you elaborate?


_______________________________________________
fonc mailing list
[email protected]
http://vpri.org/mailman/listinfo/fonc
