On 6/9/2011 10:25 AM, Josh Gargus wrote:
On Jun 9, 2011, at 2:04 AM, BGB wrote:
On 6/9/2011 12:56 AM, Josh Gargus wrote:
On May 31, 2011, at 7:30 AM, Alan Kay wrote:
Hi Cornelius
There are lots of egregiously wrong things in the web design. Perhaps one of
the simplest is that the browser folks have lacked the perspective to see that
the browser is not like an application, but like an OS. i.e. what it really
needs to do is to take in and run foreign code (including low level code)
safely and coordinate outputs to the screen (Google is just starting to realize
this with NaCl after much prodding and beating.)
I think everyone can see the implications of these two perspectives and what
they enable or block
Some of the implications, anyway. The benefits of the OS-perspective are
clear. Once it hits its stride, there will be no (technical) barriers to
deploying the sorts of systems that we talk about here
(Croquet-Worlds-Frank-OMeta-whatnot). Others will be doing their own cool
things, and there will be much creativity and innovation.
However, elsewhere in this thread it is noted that the HTML-web is structured-enough to
be indexable, mashupable, and so forth. It makes me wonder: is there a risk that the
searchability, etc. of the web will be degraded by the appearance of a number of
mutually-incompatible better-than-HTML web technologies? Probably not... in the worst
case, someone who wants to be searchable can also publish in the "legacy"
format.
However, can we do better than that? I guess the answer depends on which
aspect of the status quo we're trying to improve on (searchability, mashups,
etc). For search, there must be plenty of technologies that can improve on
HTML by decoupling search-metadata from presentation/interaction (such as
OpenSearch, mentioned elsewhere in this thread). Mashups seem harder... maybe
it needs to happen organically as some of the newly-possible systems find
themselves converging in some areas.
But I'm not writing because I know the answers, but rather the opposite. What
do you think?
hmm... it is a mystery....
actually, possibly a relevant question here, would be why Java applets largely fell on
their face, but Flash largely took off (in all its uses from YouTube to "Punch The
Monkey"...).
but, yeah, there is another downside to deploying one's technology in a browser:
writing browser plug-ins...
and, for browser-as-OS, what exactly will this mean, technically?...
network-based filesystem and client-side binaries and virtual processes?...
like, say, if one runs a tiny sand-boxed Unix-like system inside the browser,
then push or pull binary files, which are executed, and may perform tasks?...
This isn't quite what I had in mind. Perhaps "hypervisor" is better than "OS"
to describe what I'm talking about (and, I believe, what Alan is talking about too): a thin-as-possible platform that
provides access to computing resources such as end-user I/O (mouse, multitouch, speakers, display,
webcam, etc.), CPU/GPU, local persistent storage, and network. Just enough to enable others to run
OSes on top of this hypervisor.
If it tickles your fancy, then by all means use it to run a sand-boxed Unix.
Undoubtedly someone will; witness the cool hack to run Linux in the browser,
accomplished by writing an x86 emulator in Javascript
(http://bellard.org/jslinux/).
interesting...
less painfully slow than I would have expected from the description...
I wasn't thinking of exactly "run an emulator, run an OS in the emulator",
but more of a browser plugin which looked and acted similar to a
small Unix (with processes and so on, and a POSIX-like API, and a
filesystem), but would likely be different in that it would "mount"
content from the website as part of its local filesystem (probably
read-only by default), and possibly each process could have its own
local VFS.
screen/input/... would be provided by APIs.
granted, from its description, I think NaCl may already be sort of like
this, but I haven't really messed with it.
as noted before, I wrote an x86 interpreter/emulator which
exposed a POSIX-like set of core APIs.
however, the "kernel" was actually just running inside the interpreter
(so "system calls" just sort of broke out of the interpreter, and were
handled directly by native code).
hence, this interpreter only ran Ring-3 code.
it could have also done Ring-0 stuff, but this would have been more
effort, and would require a more "authentic" emulation of x86 (since I
was only dealing with Ring-3, many variations existed from "true" x86,
such as the segmentation system being mostly absent, and the use of
"spans" for the MMU rather than pages or page-tables). also, real-mode
did not exist...
granted, this interpreter ran a bit slower than native code, but most of
the time went into operations like (internal) loading/storing from
various registers (especially EAX, ECX, and EDX, IIRC...), and general
memory word loads/stores (for sake of being "generic", I used
byte-for-byte loads and shifting to implement these).
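For illustration, the byte-for-byte word access mentioned above might look roughly like this. This is a minimal sketch, not the original emulator's code; the names and the flat little-endian guest address space are assumptions:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical guest-memory accessors: read/write a 32-bit little-endian
 * word one byte at a time, so the emulator behaves the same regardless of
 * host endianness or alignment rules.  `mem` stands in for the guest
 * address space. */
static uint32_t guest_load32(const uint8_t *mem, uint32_t addr)
{
    return (uint32_t)mem[addr]
         | ((uint32_t)mem[addr + 1] << 8)
         | ((uint32_t)mem[addr + 2] << 16)
         | ((uint32_t)mem[addr + 3] << 24);
}

static void guest_store32(uint8_t *mem, uint32_t addr, uint32_t val)
{
    mem[addr]     = (uint8_t)(val);
    mem[addr + 1] = (uint8_t)(val >> 8);
    mem[addr + 2] = (uint8_t)(val >> 16);
    mem[addr + 3] = (uint8_t)(val >> 24);
}
```

The per-byte shifting is what makes this "generic" (and slow); a host-specific fast path could use a single unaligned load where the host allows it.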
initially, the "main switch" was a big performance killer, but
then I switched partly to threaded code, which made this problem mostly
go away. in this case, threaded code means that operations were handled
by calling directly through function pointers.
there was effectively a cache of pre-decoded instructions (a big hash
table holding structs), each with their own function pointers
(instruction handlers). any detected SMC (self-modifying-code) worked by
simply flushing the entire hash.
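The dispatch scheme described above can be sketched as follows, using a toy two-opcode ISA. All names, the direct-mapped hash, and the handlers are illustrative assumptions, not the original code; the point is just the shape of the fast path (indirect call through a cached pointer) and the flush-everything SMC response:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

typedef struct VCpu VCpu;
typedef struct DecodedOp DecodedOp;
typedef void (*OpHandler)(VCpu *cpu, const DecodedOp *op);

struct DecodedOp {
    uint32_t  pc;       /* guest address this decode belongs to */
    OpHandler handler;  /* pre-resolved handler: no per-step switch */
    int       valid;
};

struct VCpu {
    uint32_t       pc;
    uint32_t       acc;      /* stand-in register */
    int            running;
    const uint8_t *code;     /* guest code bytes */
};

#define OPCACHE_SIZE 256
static DecodedOp opcache[OPCACHE_SIZE];

/* On detected self-modifying code: simply flush the whole cache. */
static void opcache_flush(void) { memset(opcache, 0, sizeof(opcache)); }

/* Toy handlers: opcode 0 halts, opcode 1 increments the accumulator. */
static void op_halt(VCpu *cpu, const DecodedOp *op) { (void)op; cpu->running = 0; }
static void op_inc (VCpu *cpu, const DecodedOp *op) { (void)op; cpu->acc++; cpu->pc++; }

/* Slow path: decode the byte at pc once and cache the handler. */
static const DecodedOp *decode_at(VCpu *cpu)
{
    DecodedOp *op = &opcache[cpu->pc % OPCACHE_SIZE];
    op->pc      = cpu->pc;
    op->handler = (cpu->code[cpu->pc] == 1) ? op_inc : op_halt;
    op->valid   = 1;
    return op;
}

static void run(VCpu *cpu)
{
    cpu->running = 1;
    while (cpu->running) {
        const DecodedOp *op = &opcache[cpu->pc % OPCACHE_SIZE];
        if (!op->valid || op->pc != cpu->pc)
            op = decode_at(cpu);   /* miss: decode and fill the cache */
        op->handler(cpu, op);      /* hit: direct call through the pointer */
    }
}
```

A real x86 decoder would of course store operands and lengths in the struct too; the win over a central switch is that decoding happens once per cached instruction rather than once per execution.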
likely, some micro-optimization could be done, such as handling
threaded code more efficiently, or avoiding internal switches for
register loads/stores (for example, DWORD registers being loaded/stored
directly by an index number, ...). but, it worked at the time...
more aggressively, one could JIT to native code (IOW: "dynamic
translation"...).
However, such a hypervisor will also host more ambitious OSes, for example, platforms for
persistent capability-secure peer-to-peer real-time collaborative end-user-scriptable
augmented-reality environments. (again, trying to use word-associations to roughly
sketch what I'm referring to, as I did earlier with
"Croquet-Worlds-Frank-OMeta-whatnot").
Does this make my original question clearer?
ok.
what exactly this would be like is less obvious.
I personally have a much easier time imagining what "Unix in a browser"
would look like.
with just a plain OS in the browser though, one could run apps...
then one could have 3D mostly by having this virtual OS expose OpenGL
(or GL ES).
possibly, for sake of simplicity, the "app" could always use OpenGL;
its "text mode" would just be using OpenGL to draw all the characters.
then maybe some special API calls for handling input, and "enabling" GL
(disabling drawing the console UI).
I wondered before about the problem of what to do about client-program
memory use, but it seems like there is a nifty solution: if a limit is
exceeded, allocation calls fail (say, each process is limited to a
certain amount of memory).
possibly, a given "app" is also limited to a certain maximum number of
child processes, at which point "fork()" calls will fail (or send out a
"SIGKILL" or similar to all processes belonging to the parent app).
or such...
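The two limits suggested above (allocation quota, child-process cap) could be sketched roughly like this. The names, numbers, and API are purely illustrative assumptions, not an existing interface:

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

/* Per-app resource limits: allocations beyond a memory quota fail, and
 * spawning beyond a child cap fails, rather than letting a guest app
 * exhaust the host. */
typedef struct {
    size_t mem_used;
    size_t mem_limit;     /* e.g. some MiB per process */
    int    children;
    int    child_limit;   /* e.g. at most N child processes */
} AppLimits;

/* Allocation call simply fails (returns NULL) once over quota. */
static void *app_alloc(AppLimits *app, size_t n)
{
    if (n > app->mem_limit - app->mem_used)
        return NULL;               /* over quota: fail, don't kill */
    app->mem_used += n;
    return malloc(n);
}

/* fork()-like call fails once the app has too many children. */
static int app_fork(AppLimits *app)
{
    if (app->children >= app->child_limit)
        return -1;                 /* EAGAIN-style failure */
    app->children++;
    return app->children;          /* stand-in child id */
}
```

This mirrors how POSIX already behaves under setrlimit(): malloc() returns NULL and fork() fails with EAGAIN when limits are hit, so well-written guest code needs no new error paths.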
_______________________________________________
fonc mailing list
[email protected]
http://vpri.org/mailman/listinfo/fonc