> Mounting a bin directory from some remote servers is a potential vector
for malicious code and requires all services to provide binaries for all
platforms (arm, x86, riscv, ...). Instead, serving the source code and
mkfile allows for auditability (what did I just run?) and support for
their own platform. Plan 9 compilers were designed not just to produce
optimal code but also for speed of compilation.

Would this have been fast enough for what we experienced back then with
early websites, though? Given the stats on how many people close or click
away from a tab if it hasn't fully loaded within N seconds, I'd think that
having to compile at all could have been prohibitive compared to plain web
forms. Then again, I'm not sure what user behaviors or expectations of
speed were like for the web back then.

I was thinking that what may have eventually happened is that an
interpreted language would have popped up for writing web applications,
sort of like Java applets, except not embedded in the browser, and
hopefully a simpler language. Early web applications were also very simple
'put info in a textbox and hit enter' forms, if I remember correctly, so a
small, expandable runtime that started up quickly in a sandbox might have
been on equal footing with HTML, assuming it really could start up and run
quickly (or maybe just fork a running instance into a new namespace?).
Ideally, developers could then write their own libraries (in C or whatever
they like) that the web language would hook into for more powerful
things; those libraries might be the place to provide source and
makefiles, or binaries if they wanted to keep things close to the chest.

Thinking more on the 'writing to a ctl file' idea, which I'm really
getting into: users may have ended up building their own graphical
front-ends for web services, UIs that abstracted away the actual writing
to ctl files for less advanced users. It would've been interesting to see
competition in UI design among OSS interfaces for web services, sort of
like what we see today with Reddit apps on phones (except hopefully they
wouldn't all be awful). Or maybe everyone would just use the service
provider's own interfaces.
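The ctl-file idea can be sketched in a few lines of shell. Everything here
is invented for illustration: the path, the command verbs, and the idea
that a GUI would simply write these lines on the user's behalf (on a real
Plan 9 system the file would sit under a mount like /n/bank.com/... rather
than /tmp):

```shell
# Hypothetical ctl-file interface: the service exposes a plain file and a
# GUI (or a user at the shell) drives it by writing text commands to it.
# Path and command syntax are invented for illustration.
ctl=/tmp/bank-demo-ctl
: > "$ctl"                                      # the "service's" ctl file
echo 'transfer 100 checking savings' >> "$ctl"  # what a GUI button might write
echo 'statement jan' >> "$ctl"
cat "$ctl"
```

The appeal is that the GUI is just sugar: anything it can do, a user can
also do with echo, and the service only ever has to parse one text
protocol.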

Do you think there would've been fewer large databases if things had gone
this way? Thinking of my banking example, it seems like it'd be easiest to
just have a /bank.com/users/<username> folder with the relevant files in
it that users could union-mount, since you're not forced to present things
through a web interface. Though I suppose a bank could just expose that
folder as an interface to what's actually a DB running in the background.
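As a toy sketch of that per-user tree (all names and file contents
invented; a Plan 9 client would union-mount the real thing with something
like bind -a /n/bank.com/users/$user $home/bank rather than building it
locally with mkdir):

```shell
# Invented layout for a hypothetical per-user service directory: plain
# files stand in for DB rows, plus a ctl file as the command channel.
dir=/tmp/bank.com/users/alice
mkdir -p "$dir"
echo 1500 > "$dir/balance"            # account state as a readable file
echo 'posted 2 payments' > "$dir/activity"
: > "$dir/ctl"                        # write commands here to act on the account
ls "$dir"
```

Whether those files are backed by a real filesystem or synthesized on the
fly from a database is invisible to the client, which is rather the point.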

> This was however because I wanted to call a site "Troff the Crime

I chortled.

I was wondering if something similar could be done today with Docker or
the Rocket (rkt) containers, but I'm not familiar enough with them. It
seems like they're somewhat baked images, not just namespaced folders with
the relevant things union-mounted inside, so it might not be easy or fast
to union-mount just what you need for the web app you're loading into a
new container. Also, they have no UI support, though it seems you can
bind-mount an X socket into the container, with some extra work, to draw
to an X session on the host.
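For what it's worth, the X-socket trick is usually done with a bind mount
and the DISPLAY variable. A sketch, with the image name a placeholder and
the command only printed rather than run (it assumes a docker daemon and
an X server on the host):

```shell
# Sketch: share the host's X socket with a container so a GUI app inside
# can draw on the host's X session. 'my-webapp-image' is a placeholder.
cmd="docker run --rm \
  -e DISPLAY=$DISPLAY \
  -v /tmp/.X11-unix:/tmp/.X11-unix \
  my-webapp-image"
echo "$cmd"    # printed, not executed, since it needs a running daemon
```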

Let me know if this conversation is not really appropriate for this mailing
list at this point, by the way. I don't want to be a nuisance.

I appreciate the discussion so far - thanks!
