On 2/29/2012 4:09 PM, Alan Kay wrote:
Hi Duncan

The short answers to these questions have already been given a few times on this list. But let me try another direction to approach this.

The first thing to notice about the overlapping windows interface "personal computer experience" is that it is logically independent of the code/processes running underneath. This means (a) you don't have to have a single religion "down below" (b) the different kinds of things that might be running can be protected from each other using the address space mechanisms of the CPU(s), and (c) you can think about allowing "outsiders" to do pretty much what they want to create a really scalable really expandable WWW.

If you are going to put a "browser app" on an "OS", then the "browser" has to be a mini-OS, not an app.


agreed.

I started writing up my own response, but this one beat mine, and seems to address things fairly well.


But "standard apps" are a bad idea (we thought we'd gotten rid of them in the 70s) because what you really want to do is to integrate functionality visually and operationally using the overlapping windows interface, which can safely get images from the processes and composite them on the screen. (Everything is now kind of "super-desktop-publishing".) An "app" is now just a kind of integration.

yep. even on the PC with native apps, typically much of what is going on is in the domain of shared components in DLLs and similar, often with "the apps" being mostly front-end interfaces for this shared functionality.


one doesn't really need to see all of what is going on in the background, and the front-end UI and the background library functionality may be unrelated.

an annoyance though (generally seen among newbie developers) is that they confuse the UI with the app as a whole, thinking that throwing together a few forms in Visual Studio or similar, and invoking some functionality from these shared DLLs, suddenly makes them a big-shot developer (taking for granted all of the hard work that various people put into the libraries their app depends on).

it sort of makes it harder to get much respect though when much of one's work goes into this sort of functionality rather than a big flashy-looking GUI (with piles of buttons everywhere, ...). it is also sad when many people judge how "advanced" an app is based primarily on how many GUI widgets they see on screen at once.


But the route that was actually taken with the WWW and the browser was in the face of what was already being done.

Hypercard existed, and showed what a WYSIWYG authoring system for end-users could do. This was ignored.

Postscript existed, and showed that a small interpreter could be moved easily from machine to machine while retaining meaning. This was ignored.

And so forth.


yep. PostScript was itself a notable influence on me, and my VM designs actually tend to borrow somewhat from PostScript, albeit often using bytecode in place of text, generally mapping nearly all "words" to opcode numbers, and using blocks more sparingly (typically using internal jumps instead)... so, partway between PS and more traditional bytecode.
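
roughly, the flavor is something like this (a made-up sketch in C; the "source" line, opcode names, and numbering are purely illustrative, not my actual instruction set):

#include <stdio.h>

enum { OP_LIT, OP_GT, OP_JMPF, OP_PRINT, OP_END };

/* PS-like source:  5 10 gt { 42 print } if
   each "word" becomes an opcode number, and the "{ ... } if" block
   is compiled down to a conditional jump over the block body. */
static const int code[] = {
    OP_LIT, 5,
    OP_LIT, 10,
    OP_GT,
    OP_JMPF, 10,    /* if false, jump past the block body (to OP_END) */
    OP_LIT, 42,
    OP_PRINT,
    OP_END
};

int main(void) {
    int stack[64], sp = 0, ip = 0;
    for (;;) {
        switch (code[ip++]) {
        case OP_LIT:   stack[sp++] = code[ip++]; break;
        case OP_GT:    sp--; stack[sp-1] = (stack[sp-1] > stack[sp]); break;
        case OP_JMPF:  if (!stack[--sp]) ip = code[ip]; else ip++; break;
        case OP_PRINT: printf("%d\n", stack[--sp]); break;
        case OP_END:   return 0;
        }
    }
}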


19 years later we see various attempts at inventing things that were already around when the WWW was tacked together.

But the thing that is amazing to me is that in spite of the almost universal deployment of it, it still can't do what you can do on any of the machines it runs on. And there have been very few complaints about this from the mostly naive end-users (and what seem to be mostly naive computer folks who deal with it).

yep.

it is also notable that I can easily copy files around within the same computer, but putting files online or sharing them with others quickly turns into a big pile of suck. part of the reason, I think, is a lack of good integration between local and network file storage (in both Windows and Linux, there has often been this "thing" of implementing access to network resources more as an Explorer / File Manager / ... hack than doing it "properly" at the OS filesystem level).

at this point, Windows has at least integrated things at the FS level (one can mount SMB/CIFS shares, FTP servers, and WebDAV shares as drive letters).

on Linux, it is still partly broken though, with GVFS and Samba dealing with the issues, but in a kind of half-assed way (and it is lame to have to route through GVFS something which should theoretically be handled by the OS filesystem).

never mind that Java and Flash fail to address these issues as well; both knew full well what they were doing, yet both retain an obvious separation between "filesystem" and "URLs".

why not make it so that if you open a file and its name is a URL, you open the URL, and beyond that, allow a URL path as part of the working directory (not that Java didn't do "something" to file IO... whether it was good or sensible is another matter...).
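
something vaguely like this, say (a minimal sketch; vf_open and url_open are hypothetical names, with url_open standing in for whatever HTTP/FTP/... transport the runtime would actually provide; it is not a real library call):

#include <stdio.h>
#include <string.h>

FILE *url_open(const char *url, const char *mode);  /* hypothetical network-backed open */

/* open either a local file or a URL, depending on what the name looks like */
FILE *vf_open(const char *name, const char *mode)
{
    if (strstr(name, "://"))           /* "http://...", "ftp://...", ... */
        return url_open(name, mode);
    return fopen(name, mode);          /* plain local path */
}

then code which only knows about vf_open() wouldn't have to care whether it was handed "notes.txt" or "http://example.com/notes.txt".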


Some of the blame should go to Apple and MS for not making real OSs for personal computers -- or better, going the distance to make something better than the old OS model. In either case both companies blew doing basic protections between processes.


to be fair though, both Apple and MS got going when the internet was still a relative novelty...

albeit, both have been sort of dragging a bit.


On the other hand, the WWW and first browsers were originally done on workstations that had stronger systems underneath -- so why were they so blind?

"vision"? the WWW started out as a simplistic system for accessing shared documents. I don't really know if "web applications" were under consideration at the time, in all likelihood they were an afterthought (to be built piece by piece and kludge-by-kludge over the following decades).


As an aside I should mention that there have been a number of attempts to do something about "OS bloat". Unix was always "too little too late" but its one outstanding feature early on was its tiny kernel with a design that wanted everything else to be done in "user-mode-code". Many good things could have come from the later programmers of this system realizing that being careful about dependencies is a top priority. (And you especially do not want to have your dependencies handled by a central monolith, etc.)

agreed.

this is a big problem with many FOSS programs though.
I personally think systems like GNU Autoconf, ... are an example of how poorly some of this stuff has been handled.

OTOH, many Java developers go on endlessly about "WORA" ("Write Once, Run Anywhere"), and seem to believe that Java owns the concept, and that Java/JVM and homogeneity are the only way this is possible.


I have disagreed, asserting that it is technically possible for non-Java VMs to also embrace this concept (even if few others have really done so, or even made a good solid attempt), and that the concept does not require some sort of centralized homogeneous platform (rather than, say, dealing with dependencies and heterogeneity in ways which aren't stupid...).

a few ideas I have mentioned, namely having a concept similar to "ifdef", and maybe having something similar to "XML namespaces" and URLs for dependency management, have generally caused people to balk.

but, I don't personally think that something like:
ifdef(IS_WINDOWS_DESKTOP) { ... }
or:
ifdef(IS_CELLPHONE) { ... }
or:
ifdef(HAS_TOUCHSCREEN_ONLY) { ... }
or:
...

necessarily hinders WORA, and in fact it may improve the situation, since the application then has the information it needs to adapt itself to the target it is currently running on (in this case, the ifdefs would tend to be evaluated on the final target, rather than when initially building the code, so the "defines" are essentially closer to being special symbolic constants at runtime).
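
as a rough sketch of what I mean (all names here are hypothetical), the "defines" could just be flags queried from the runtime on the final target:

#include <stdbool.h>

bool target_has(const char *feature);   /* hypothetical query into the runtime/VM */

void setup_ui(void)
{
    if (target_has("IS_WINDOWS_DESKTOP")) {
        /* mouse/keyboard UI, resizable windows, ... */
    } else if (target_has("IS_CELLPHONE") ||
               target_has("HAS_TOUCHSCREEN_ONLY")) {
        /* big touch targets, single-window layout, ... */
    }
    /* the same distributed code adapts itself at runtime,
       rather than being built separately per target. */
}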


So, this gradually turned into an awful mess. But Linus went back to square one and redefined a tiny kernel again -- the realization here is that you do have to arbitrate basic resources of memory and process management, but you should allow everyone else to make the systems they need. This really can work well if processes can be small and interprocess communication fast (not the way Intel and Motorola saw it ...).

except, sadly, the Linux kernel has since grown again into a 14 Mloc beast, so it is hardly tiny.

granted, very little of this code is actually built in any particular configuration, as otherwise the kernel would probably be too large to boot (unless the boot process has changed and can now allow an arbitrarily large kernel...).


the basic process model works fairly well though...


And I've also mentioned Popek's LOCUS system as a nice model for migrating processes over a network. It was Unix only, but there was nothing about his design that required this.


could be cool.


Cutting to the chase with a current day example. We made Etoys 15 years ago so children could learn about math, science, systems, etc. It has a particle system that allows many interesting things to be explored.

Windows (especially) is so porous that SysAdmins (especially in school districts) will not allow teachers to download .exe files. This wipes out the Squeak plugin that provides all the functionality.


yes, Windows security kind of sucks...


But there is still the browser and Javascript. But Javascript isn't fast enough to do the particle system. But why can't we just download the particle system and run it in a safe address space? The browser people don't yet understand that this is what they should have allowed in the first place. So right now there is only one route for this (and a few years ago there were none) -- and that is Native Client on Google Chrome.


JavaScript has been getting faster though.
in recent years, Mozilla has put a lot of effort into exploring newer/faster JITs.
granted, speed is not the only drawback of JS though.


in my own "vision", C is also part of the picture, although I had generally imagined it in a VM-managed form, probably with some amount of (potentially not strictly free) background checking for "untrusted" code (the pointers could also be "fake", and very possibly boxed in many cases, as in my current VM).

potentially, the C code could also be disallowed from holding a pointer to any address which is not in a "safe" area or which lies outside its known authority range (pointers could be validated during an untrusted load, or could throw an exception on the first attempt to access them). with luck, something like this could be mostly optimized away by the JIT (by attempting to prove cases where such checks or boxing are unnecessary).

ironically, I have often wanted a lot of these sorts of checks as a debugging feature as well, mostly to help trap things like accidental array overruns, ...

granted, if done well, this could also allow the implementation to use either an NaCl-style strategy (a restricted address space and other checks) or other possible strategies, without significantly impacting the code's performance or functionality (a bytecode could be used, with it being left up to the JIT how to compile it to native code and perform any relevant security checks).
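
as a very rough sketch of the sort of check I mean (hypothetical code; ideally the JIT would prove most of these checks away):

#include <stdint.h>
#include <stddef.h>

typedef struct {
    uintptr_t base;    /* start of a region the module may touch */
    size_t    size;
} AuthRange;

extern AuthRange vm_ranges[];    /* regions granted to the untrusted module */
extern int       vm_num_ranges;

/* conceptually run before a boxed/"fake" pointer is dereferenced */
int vm_check_ptr(uintptr_t addr, size_t len)
{
    for (int i = 0; i < vm_num_ranges; i++) {
        AuthRange *r = &vm_ranges[i];
        if (addr >= r->base && addr + len <= r->base + r->size)
            return 1;    /* within the module's authority */
    }
    return 0;            /* outside: trap or throw an exception */
}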


I haven't done much with C in a VM though, partly because this is a more complex problem and less immediately relevant (to myself) than some other stuff I am working on (it is along the same lines as trying to get my FFI tools to support C++ features, ...).



But Google Chrome is only 13% penetrated, and the other browser fiefdoms don't like NaCl..... Google Chrome is an .exe file so teachers can't download it (and if they could, they could download the Etoys plugin).

Just in from browserland ... there is now -- 19 years later -- an allowed route to put samples in your machine's sound buffer that works on some of the browsers.

Holy cow folks!


yep.


Alan



    ------------------------------------------------------------------------
    *From:* Duncan Mak <duncan...@gmail.com>
    *To:* Alan Kay <alan.n...@yahoo.com>; Fundamentals of New
    Computing <fonc@vpri.org>
    *Sent:* Wednesday, February 29, 2012 11:50 AM
    *Subject:* Re: [fonc] Error trying to compile COLA

    Hello Alan,

    On Tue, Feb 28, 2012 at 4:30 PM, Alan Kay <alan.n...@yahoo.com> wrote:

        For example, one of the many current day standards that was
        dismissed immediately is the WWW (one could hardly imagine
        more of a mess).


    I was talking to a friend the other day about the conversations
    going on in this mailing list - my friend firmly believes that the
    Web (HTTP) is one of the most important innovations in recent decades.

    One thing he cites as innovative is a point that I think TimBL
    mentions often: that the Web was successful (and not prior
    hypertext systems) because it allowed for broken links.

    Is that really a good architectural choice? If not, is there a
    reason why the Web succeeded, where previous hypertext systems
    failed? Is it only because of "pop culture"?

    What are the architectural flaws of the current Web? Is there
    anything that could be done to make it better, in light of these
    flaws?

-- Duncan.




_______________________________________________
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc
