On 6/9/2011 11:10 AM, Josh Gargus wrote:
That all sounds very cool.

However, I don't think that it's feasible to try to ship something like this as 
standard in all browsers, if only for political reasons.  It would be 
impossible to get Mozilla, Google, Apple, and Microsoft to agree on it.

That's what's cool about NaCl.  It's minimal enough to be a feasible candidate 
for universal adoption.  If it's adopted, then an ecosystem springs up with 
people inventing recursive exokernels to run in the browser.

Cheers,
Josh


I don't understand, though, why one needs "recursive exokernels"...

why not just "local virtual filesystems"?...

I guess there is always the issue that if only a virtual environment (say, x86) is provided, then about as soon as someone needs scripting, they will build an interpreter or JIT on top of it (or drag in an external one, say CPython or Lua...), meaning recursive interpretation overhead...
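To make the overhead concrete, here is a minimal sketch (all names hypothetical) of a toy stack-machine interpreter that counts instruction dispatches. Each guest instruction costs several host-level operations; if this interpreter were itself running on an emulated x86 inside the sandbox, each layer would contribute its own decode/dispatch cost, roughly multiplicatively.

```python
# Toy stack-machine interpreter illustrating dispatch overhead.
# Every name here is hypothetical, for illustration only.

def run(program, trace=None):
    """Interpret a list of (op, arg) pairs; record each dispatch in `trace`."""
    stack = []
    for op, arg in program:
        if trace is not None:
            trace.append(op)        # one dispatch per guest instruction
        if op == "push":
            stack.append(arg)
        elif op == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "mul":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        else:
            raise ValueError(f"unknown op: {op}")
    return stack

# Guest program computing (2 + 3) * 4
prog = [("push", 2), ("push", 3), ("add", None), ("push", 4), ("mul", None)]
trace = []
result = run(prog, trace=trace)
print(result[-1])    # 20
print(len(trace))    # 5 dispatches for 5 guest instructions
```

Running an interpreter written for this machine on top of another interpreter would repeat the same per-instruction dispatch work at every level, which is the "recursive interpretation overhead" worry above.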

a partial solution could be to provide interpreters for higher-level bytecodes (such as Java bytecode, or another higher-level bytecode), or higher-level scripting facilities (JavaScript and eval) built right into the core API. probably also an assembler, ...

or, possibly, some "high-level" features could be implemented as ISA-extension hacks (x86 with optional built-in dynamic typing, OO facilities, ...). such that people are less tempted to supply their own (and further degrade performance).

or such...


On Jun 9, 2011, at 10:56 AM, Toby Watson wrote:

How about _recursive_ VM/JITs *beneath* the level that HTML/JS is implemented.

So the "browser" that ships only supports this recursive VM.

HTML is an application of this that can be evolved by open source at
internet scale / time. Web pages can point at a specific HTML
implementation or a general redirector like google apis to get the
commonly agreed standard version.

Other containers/'plugins', Squeak, Flash, Java run as VMs, can run
their native bytecode/images but also, potentially, expose the VM
interface up again. Nesting VMs is useful also. Though you won't spare
the use-case any love, Flash video players often load multiple ad
SDKs, an arrangement that could benefit from isolation, i.e.
browser-more-like-OS.

If the top and bottom VM interfaces are the same then we can stack
them (as well as nesting them).
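The stacking idea can be sketched in a few lines: if a VM's guest-facing ("top") interface is the same as the host-facing ("bottom") interface it consumes, instances compose freely. This is a hypothetical illustration, not any particular system's API.

```python
# Hypothetical sketch: VMs whose top and bottom interfaces match, so
# they nest/stack arbitrarily.

class Host:
    """Bottom-most layer: the native capabilities of the device."""
    def write(self, data):
        return f"host:{data}"

class VM(Host):
    """A VM consumes a Host-shaped layer below and exposes the
    same interface above, mediating (sandboxing, tagging, metering...)
    on the way through."""
    def __init__(self, below, name):
        self.below = below
        self.name = name
    def write(self, data):
        # Tag the request, then pass it to the layer below.
        return self.below.write(f"{self.name}:{data}")

# Because the interfaces match, VMs stack to any depth:
stack = VM(VM(Host(), "outer"), "inner")
print(stack.write("hello"))   # host:outer:inner:hello
```

Each layer sees only the interface of the layer below, which is the property that makes both nesting (side by side) and stacking (one atop another) work.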

Base VM would have exokernel / NaCL like exposure of the native
capabilities of the device. Exokernel & FluxOS have some nifty tricks
to punch through layers so performance is not so impacted by stacking.

An intermediate VM layer could provide ISA / hardware abstraction so
that everything above that looks the same.

I re-read history of Smalltalk recently and was reminded of this from Alan,

'Bob Barton, the main designer of the B5000 and a professor at Utah
had said in one of his talks a few days earlier: "The basic principle
of recursive design is to make the parts have the same power as the
whole." For the first time I thought of the whole as the entire
computer and wondered why anyone would want to divide it up into
weaker things called data structures and procedures. Why not divide it
up into little computers, as time sharing was starting to? But not in
dozens. Why not thousands of them, each simulating a useful
structure?'

Toby

On 9 June 2011 10:25, Josh Gargus <[email protected]> wrote:
On Jun 9, 2011, at 2:04 AM, BGB wrote:

On 6/9/2011 12:56 AM, Josh Gargus wrote:

On May 31, 2011, at 7:30 AM, Alan Kay wrote:

Hi Cornelius

There are lots of egregiously wrong things in the web design. Perhaps one of 
the simplest is that the browser folks have lacked the perspective to see that 
the browser is not like an application, but like an OS. i.e. what it really 
needs to do is to take in and run foreign code (including low level code) 
safely and coordinate outputs to the screen (Google is just starting to realize 
this with NaCl after much prodding and beating.)

I think everyone can see the implications of these two perspectives and what 
they enable or block.

Some of the implications, anyway.  The benefits of the OS-perspective are 
clear.  Once it hits its stride, there will be no (technical) barriers to 
deploying the sorts of systems that we talk about here 
(Croquet-Worlds-Frank-OMeta-whatnot).  Others will be doing their own cool 
things, and there will be much creativity and innovation.

However, elsewhere in this thread it is noted that the HTML-web is structured-enough to 
be indexable, mashupable, and so forth.  It makes me wonder: is there a risk that the 
searchability, etc. of the web will be degraded by the appearance of a number of 
mutually-incompatible better-than-HTML web technologies?  Probably not... in the worst 
case, someone who wants to be searchable can also publish in the "legacy" 
format.

However, can we do better than that?   I guess the answer depends on which 
aspect of the status quo we're trying to improve on (searchability, mashups, 
etc).  For search, there must be plenty of technologies that can improve on 
HTML by decoupling search-metadata from presentation/interaction (such as 
OpenSearch, mentioned elsewhere in this thread).  Mashups seem harder... maybe 
it needs to happen organically as some of the newly-possible systems find 
themselves converging in some areas.

But I'm not writing because I know the answers, but rather the opposite.  What 
do you think?

hmm... it is a mystery....

actually, possibly a relevant question here, would be why Java applets largely fell on 
their face, but Flash largely took off (in all its uses from YouTube to "Punch The 
Monkey"...).

but, yeah, there is another downside to deploying one's technology in a browser:
writing browser plug-ins...


and, for browser-as-OS, what exactly will this mean, technically?...
network-based filesystem and client-side binaries and virtual processes?...
like, say, if one runs a tiny sand-boxed Unix-like system inside the browser, 
then push or pull binary files, which are executed, and may perform tasks?...

This isn't quite what I had in mind.  Perhaps "hypervisor" is better than "OS" 
to describe what I'm talking about (and, I believe, what Alan means too): a thin-as-possible platform that 
provides access to computing resources such as end-user I/O (mouse, multitouch, speakers, display, 
webcam, etc.), CPU/GPU, local persistent storage, and network.  Just enough to enable others to run 
OSes on top of this hypervisor.
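A rough sketch of how small that surface could be, as an abstract interface. Every name below is hypothetical, chosen only to show the shape of "just enough": I/O, compute, persistent storage, and network, with everything else left to guest OSes.

```python
# Hedged sketch of a thin-as-possible hypervisor surface.
# All method names are hypothetical, for illustration only.

from abc import ABC, abstractmethod

class Hypervisor(ABC):
    """Just enough for a guest OS to build on."""

    @abstractmethod
    def framebuffer(self, width, height): ...   # display output

    @abstractmethod
    def input_events(self): ...                 # mouse/multitouch/keyboard

    @abstractmethod
    def storage(self, key): ...                 # local persistent blobs

    @abstractmethod
    def socket(self, host, port): ...           # network access

    @abstractmethod
    def spawn(self, code): ...                  # run sandboxed (native) code
```

Everything above this line — filesystems, widget toolkits, document formats, HTML itself — would be the business of whatever OS a page chooses to run on top.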

If it tickles your fancy, then by all means use it to run a sand-boxed Unix.  
Undoubtedly someone will; witness the cool hack to run Linux in the browser, 
accomplished by writing an x86 emulator in Javascript 
(http://bellard.org/jslinux/).

However, such a hypervisor will also host more ambitious OSes, for example, platforms for 
persistent capability-secure peer-to-peer real-time collaborative end-use-scriptable 
augmented-reality environments.  (again, trying to use word-associations to roughly 
sketch what I'm referring to, as I did earlier with 
"Croquet-Worlds-Frank-OMeta-whatnot").

Does this make my original question clearer?

Cheers,
Josh




_______________________________________________
fonc mailing list
[email protected]
http://vpri.org/mailman/listinfo/fonc



