On 2/28/2012 2:30 PM, Alan Kay wrote:
Yes, this is why the STEPS proposal was careful to avoid "the current day world".

For example, one of the many current day standards that was dismissed immediately is the WWW (one could hardly imagine more of a mess).


I don't think "the web" is entirely horrible:
HTTP basically works, XML is "ok" IMO, and an XHTML variant could be workable.

Granted, moving up the stack from there, things quickly turn terrible: poorly designed, and full of "shiny new technologies" which are almost absurdly bad.


Practically, though, the WWW is difficult to escape: a system lacking support for it is likely to be rejected outright.


But the functionality plus more can be replaced in our "ideal world" with encapsulated confined migratory VMs ("Internet objects") as a kind of next version of Gerry Popek's LOCUS.

The browser and other storage confusions are all replaced by the simple idea of separating out the safe objects from the various modes one uses to send and receive them. This covers files, email, web browsing, search engines, etc. What is left in this model is just a UI that can integrate the visual (etc.) outputs from the various encapsulated VMs, and send them events to react to. (The original browser folks missed that a scalable browser is more like an OS kernel than an app.)

It is possible.

In my case, I had mostly assumed files and message-passing.
Theoretically, script code could be passed along as well, but the problem with passing code is how best to ensure that things are kept secure.


In some of my own uses, an option is to throw a UID/GID+privileges system into the mix, though this has potential drawbacks (luckily, the performance impact seems to be relatively minor). Granted, a more comprehensive system (making use of ACLs and/or "keyrings") could be somewhat more costly than simple UID/GID rights checking, but all of this shouldn't be too difficult to mostly optimize away in the common cases.

The big issue is mostly setting up all of the permissions in a way that is actually "fairly secure".

Currently, nearly everything defaults to requiring root access. Unprivileged code would thus need interfaces exposed to it explicitly (probably via "setuid" functions). As-is, however, the scheme is largely defeated by most application code simply defaulting to "root".
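
As a rough illustration of the kind of check involved, here is a minimal sketch of UID/GID rights checking (all the type and function names here are invented for illustration, not from any actual codebase):

    /* Minimal sketch of UID/GID rights checking as described above;
     * all names here are hypothetical. */
    #include <stdbool.h>

    #define PERM_R 4
    #define PERM_W 2
    #define PERM_X 1

    typedef struct {
        int uid, gid;       /* owning user and group */
        unsigned mode;      /* rwx bits for owner/group/other, Unix-style */
    } ObjPerm;

    typedef struct {
        int uid, gid;       /* identity of the calling task */
    } TaskCred;

    /* Check whether 'cred' may access the object with the requested
     * rwx bits. Note that uid 0 ("root") bypasses the check entirely,
     * which is exactly the weakness noted above: if most code runs as
     * root, the check never bites. */
    bool perm_check(const TaskCred *cred, const ObjPerm *p, unsigned want)
    {
        unsigned granted;

        if (cred->uid == 0)              /* root: allow everything */
            return true;

        if (cred->uid == p->uid)         /* owner bits */
            granted = (p->mode >> 6) & 7;
        else if (cred->gid == p->gid)    /* group bits */
            granted = (p->mode >> 3) & 7;
        else                             /* "other" bits */
            granted = p->mode & 7;

        return (granted & want) == want;
    }

The fast path here is a few compares and a mask, which fits the performance impact being minor; an ACL or keyring walk would add a loop over entries, hence the extra (but still optimizable) cost.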

Somehow, though, I think I am probably the only person I know who considers this system "sane".

It did seem, however, like it would probably be easier to set up and secure than one based on scoping and visibility.


Otherwise, yeah: maybe one can provide a set of APIs, with "apps" implemented mostly as scripts which invoke these APIs?...


These are old ideas, but the vendors etc. didn't get it ...


Maybe: the browser vendors originally saw the browser merely as a document-viewing app, rather than as a "platform".


Support for usable network filesystems, and for "applications which aren't raw OS binaries", has been slow in coming.

AFAIK, the main current contenders in the network filesystem space are SMB2/CIFS and WebDAV.

Possibly useful would be integrating things in a form which is not terrible, for example:
the OS has a basic HTML layout engine (which doesn't care where the data comes from);
the OS's VFS can directly access HTTP, ideally without having to "mount" anything first;
...

In this case, the "browser" is essentially just a fairly trivial script: say, it creates a window and binds an HTML layout object into a form with a few other widgets, and it passes any HTTP requests off to the OS's filesystem API, with the OS handling fetching the contents from the servers.

A partial advantage, then, is that other apps wouldn't have to deal with libraries or sockets or similar to get files from web servers, and likewise shell utilities would work, by default, with web-based files:

"cp http://someserver/somefile ~/myfiles/"

or similar...
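
To make the idea concrete, here is a minimal sketch of what this looks like from the application side, assuming (hypothetically) a VFS which accepts "http://" paths directly; this is not any real OS's API:

    /* Sketch: fetching a web-based file through the OS VFS, assuming
     * (hypothetically) that open() accepts "http://" paths directly.
     * The app never touches sockets or an HTTP library. */
    #include <stdio.h>
    #include <unistd.h>
    #include <fcntl.h>

    int main(void)
    {
        char buf[4096];
        ssize_t n;

        /* the VFS recognizes the scheme and performs the HTTP fetch */
        int fd = open("http://someserver/somefile", O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        while ((n = read(fd, buf, sizeof(buf))) > 0)
            write(STDOUT_FILENO, buf, (size_t)n);

        close(fd);
        return 0;
    }

The "browser" script then reduces to wiring such reads into the HTML layout object, with caching, redirects, and the like handled once in the VFS rather than per-application.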


Actually, IIRC, my OS project may have done this (or it was planned; either way). I do remember that sockets were available as part of the filesystem (under "/dev/" somewhere), so no sockets API was needed; it was instead based on opening the socket and using "ioctl()" calls...
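
Roughly, the general shape might have been something like the following (a reconstruction of the overall idea; the "/dev/tcp" path, the ioctl request code, and the argument struct are hypothetical stand-ins, not from any actual system):

    /* Sketch: sockets exposed through the filesystem rather than via a
     * sockets API. The "/dev/tcp" path, TCP_CONNECT code, and argument
     * struct are hypothetical. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <fcntl.h>
    #include <sys/ioctl.h>

    #define TCP_CONNECT 0x7401        /* hypothetical ioctl request */

    struct tcp_connect_args {         /* hypothetical argument block */
        char host[256];
        int  port;
    };

    int main(void)
    {
        struct tcp_connect_args args;

        int fd = open("/dev/tcp", O_RDWR);   /* new, unconnected socket */
        if (fd < 0) { perror("open"); return 1; }

        memset(&args, 0, sizeof(args));
        strcpy(args.host, "someserver");
        args.port = 80;
        if (ioctl(fd, TCP_CONNECT, &args) < 0) {  /* connect via ioctl */
            perror("ioctl");
            return 1;
        }

        /* from here on, plain read()/write() on the descriptor */
        const char *req = "GET /somefile HTTP/1.0\r\n\r\n";
        write(fd, req, strlen(req));
        close(fd);
        return 0;
    }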


Side note: what originally killed my OS project was, at the time, reaching the conclusion that it would likely not have been possible for me to compete on equal terms with Windows and Linux, rendering the effort pointless versus just developing purely in userspace. It does bring up some interesting thoughts, though.


or such...


Cheers,

Alan



    ------------------------------------------------------------------------
    *From:* Reuben Thomas <r...@sc3d.org>
    *To:* Fundamentals of New Computing <fonc@vpri.org>
    *Sent:* Tuesday, February 28, 2012 1:01 PM
    *Subject:* Re: [fonc] Error trying to compile COLA

    On 28 February 2012 20:51, Niklas Larsson <metanik...@gmail.com> wrote:
    >
    > But Linux contains much more duplication than drivers only: it
    > supports many filesystems, many networking protocols, and many
    > architectures, of which only a few of each are widely used. It also
    > contains a lot of complicated optimizations of operations that would
    > be unwanted in a simple, transparent OS.

    Absolutely. And many of these cannot be removed, because otherwise you
    cannot interoperate with the systems that use them. (A similar
    argument can be made for hardware if you want your OS to be widely
    usable, but the software argument is rather more difficult to avoid.)

    > Let's put a number on that: the first public
    > release of Linux, 0.01, contains 5929 lines in C files and 2484 in
    > header files. I'm sure that is far closer to what a minimal viable OS
    > is than what current Linux is.

    I'm not sure that counts as "viable".

    A portable system will always have to cope with a wide range of
    hardware. Alan has already pointed to a solution to this: devices that
    come with their own drivers. At the very least, it's not unreasonable
    to assume something like the old Windows model, where drivers are
    installed with the device, rather than shipped with the OS. So that
    percentage of code can indeed be removed.

    More troublingly, an interoperable system will always have to cope
    with a wide range of file formats, network protocols &c. As FoNC has
    demonstrated with TCP/IP, implementations of these can sometimes be
    made much smaller, but many formats and protocols will not be
    susceptible to reimplementation, for technical or legal reasons, or
    simply for lack of interest.

    As far as redundancy in the Linux model goes, then, one is left with
    those parts of the system that can either be implemented with less
    code (hopefully, a lot of it), or where there is already duplication
    (e.g. schedulers).

    Unfortunately again, one person's "little-used architecture" is
    another's bread and butter (and since old architectures are purged
    from Linux, it's a reasonable bet that there are significant numbers
    of users of each supported architecture), and one person's
    "complicated optimization" is another's essential performance boost.
    It's precisely due to heavy optimization of the kernel and libc that
    the basic UNIX programming model has remained stable for so long in
    Linux, while still delivering the performance of advanced hardware
    undreamed-of when UNIX was designed.

-- http://rrt.sc3d.org




_______________________________________________
fonc mailing list
fonc@vpri.org
http://vpri.org/mailman/listinfo/fonc
