Virtualization and jailing are hacks to work around the inherent
limitation that in unix resources can not be easily
abstracted/isolated and are plagued by the 'only root can do X'
restriction ('only root can become another user', hence su/sudo, only
root can open certain ports, etc.) which Plan 9 cleanly does away
with.

Virtualization is much more than that. It has a future, and the future is here. It also has a rather glorious past in IBM VM/CMS.

By assuming _anyone_ at a terminal is root, while sometimes the "terminal" is not a terminal at all. What happens when your home computer is bootstrapped? Is there anything glenda can't do? I mean, if someone other than you turns your home computer on, is it OK for them to be entitled to the same privileges that you normally have? That is, assuming there's a method of stopping them from disconnecting the hard disk inside and/or from peeking into the data on it (there are practical solutions to both of these problems).

A plan9 terminal can run programs, and can have a local storage file
system, with multiple users. As for authentication, in such a use case
unix auth is little more than a farce of security theater which could
easily be implemented in plan9 (and I think some people have) if you
wanted to keep your three year old child from accessing your account
but is futile for much else.

A "terminal" per se should be dumb. How come it can run programs? It seems a Plan 9 term isn't exactly a terminal--not a dumb one, for sure. If it can run a program, any program, who's going to control what the program accesses, especially when there are _multiple_ users, some of whom may not be exactly trustworthy, and there's a local store of sensitive information?

Basically, a terminal should not hold _any_ information on its users. Where does the security of not keeping authentication information on a so-called terminal go when you _keep_ it on the "terminal"? But with multiple users you're going to need authentication. Right?
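For multiple users, the usual way to avoid storing secrets on the terminal is challenge-response: the user proves knowledge of a key to a separate auth server, and the terminal keeps nothing permanently. A minimal sketch in Python--the HMAC scheme and names here are illustrative, not Plan 9's actual authsrv protocol:

```python
import hashlib
import hmac
import secrets

# Shared key held by the auth server and the user (e.g. derived from a
# password); the terminal itself stores nothing permanently.
KEY = b"per-user secret known to user and auth server"

def make_challenge():
    # The auth server sends a fresh random nonce for every login attempt.
    return secrets.token_bytes(16)

def respond(key, challenge):
    # The client proves knowledge of the key without ever transmitting it.
    return hmac.new(key, challenge, hashlib.sha256).hexdigest()

def verify(key, challenge, response):
    # Constant-time comparison to avoid timing leaks.
    return hmac.compare_digest(respond(key, challenge), response)

challenge = make_challenge()
print(verify(KEY, challenge, respond(KEY, challenge)))       # genuine user
print(verify(KEY, challenge, respond(b"wrong", challenge)))  # impostor
```

Note that a replayed response fails as soon as a new challenge is issued, which is the whole point of the nonce.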

My impression: the UNIX authentication "farce" happened because UNIX began as a replacement for a time-sharing system on more or less physically secure computers, but then was downsized to an OS--many OS's, in fact--also usable on personal computers, e.g. 386BSD. Personal computers aren't as physically secure as the proverbial "big computer in the basement," hence the need for role-based security, which was, incidentally, introduced in 386BSD. However, as long as the physical security problem persists, the "farce" goes on. Nothing wrong with UNIX. The twist is in the placement and role of personal computers, which can be flaky vessels for sensitive information.

Plan 9 doesn't solve that problem for the most common form of computer, i.e. the _home_ computer. Not even for the so-called "workstation." It solves the problem only for the corporate/university/organization "access point," if you know what I mean. Even then that isn't a _new_ solution--it was there when the original time-sharing systems were in operation. Of course, the Plan 9 solution costs--any solution does--and for the home computer these costs aren't followed by gains.

The real problem: "standalone" terminal, also known as the home computer

The real solution: physical security for anything that may carry sensitive information. Physical security must include software security against physical threats as well, e.g. encryption.
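Disk encryption is the usual software answer to the physical threat: data at rest is useless without a key derived from a passphrase. A minimal sketch of the key-derivation step using Python's standard library (the parameters are illustrative, not a recommendation for any particular system):

```python
import hashlib
import secrets

def derive_disk_key(passphrase: str, salt: bytes) -> bytes:
    # PBKDF2 stretches a passphrase into a fixed-size key; the iteration
    # count slows down offline guessing by whoever walks off with the disk.
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt,
                               iterations=200_000, dklen=32)

salt = secrets.token_bytes(16)  # stored in the clear next to the ciphertext
key = derive_disk_key("correct horse battery staple", salt)
print(len(key))  # 32-byte key, suitable for e.g. AES-256
```

The salt need not be secret; it only prevents precomputed-table attacks, so it can sit on the disk beside the encrypted data.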

As a side note, Rob Pike has been quoted--I take no responsibility for authenticity--saying, "a smart terminal is not a smart ass terminal, but rather a terminal you can educate."

That's the root of the problem: underestimation of home computers. A home computer is a smart terminal as well as a smart ass terminal and there's nothing you can do about it.

Try to do ioctl over the network.

I think I said ioctl serves a less generic function.
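The difference can be sketched: ioctl passes opaque in-memory structures to the kernel, which don't serialize across a wire, while a Plan 9-style ctl file takes plain-text commands through an ordinary write(), so any file transport (9P included) carries them unchanged. A toy Python model of the ctl-file side--the command set here is made up for illustration:

```python
class CtlFile:
    """Toy model of a /net-style ctl file: control operations are plain
    text written with an ordinary write(), so they cross any file
    transport unchanged -- no binary ioctl structs to marshal."""

    def __init__(self):
        self.state = "closed"
        self.remote = None

    def write(self, command: str):
        # Every control operation is a parseable line of text.
        verb, _, arg = command.strip().partition(" ")
        if verb == "connect":
            self.state, self.remote = "connected", arg
        elif verb == "hangup":
            self.state, self.remote = "closed", None
        else:
            raise ValueError(f"unknown ctl message: {verb}")

ctl = CtlFile()
ctl.write("connect 192.168.0.1!80")  # same bytes whether local or remote
print(ctl.state, ctl.remote)
```

Because the "system call" is just a write of text, a remote client speaking 9P can drive this interface exactly as a local one does.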

Here is a reason: Because Plan 9 has no network-related syscalls, and
applications contain no networking code (even when they are still
network transparent thanks to 9P), when ipv6 was added to plan9, no
[...]

UNIX could accommodate this approach any minute now, figuratively speaking. It has the infrastructure. Current networking traditions in UNIX aren't inherent; they're circumstantial. Remember, the file system abstraction began in UNIX--or even before UNIX?
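The protocol-independence point can be illustrated with Plan 9's dial-string convention, where the network is just the first field of a textual address, so the application's code is identical whichever protocol is underneath. A hedged Python sketch of the parsing only (not Plan 9's actual dial() implementation; the network names are illustrative):

```python
def parse_dial(addr: str):
    # Plan 9-style dial strings: network!host!service.  The application
    # never touches sockaddr structs, so pointing it at a new network
    # changes only the string it passes, not its code.
    net, host, service = addr.split("!")
    return {"net": net, "host": host, "service": service}

for addr in ("tcp!plan9.example.org!564", "tcp6!2001:db8::1!564"):
    print(parse_dial(addr))
```

Note that the IPv6 literal needs no special quoting here, since the separator is "!" rather than ":".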

I don't think any unix systems allows a single application (or
namespace) to access *multiple* network stacks concurrently... and
remote network stacks? don't think so either.

So, what exactly is happening when the same process is sending HTTP requests to a server on the local 802.3 network, a second server on the Internet accessible through my dial-up connection, and a third server on an 802.11 network? Aren't there _three_ network stacks beneath (or over?) the PPP, the Ethernet, and the WiFi interfaces? To my meager knowledge, these are distinct at least up to the network layer, i.e. the physical-to-host, medium access (if present), and data link layers are different.

namespace) to access *multiple* network stacks concurrently... and
remote network stacks? don't think so either.

Accessing another computer's network stack is possible through RPC. Though the actual requirements for that feat are way beyond my scope.
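As a sketch of the RPC route, Python's standard library can stand in for the idea: a proxy object forwards method calls to a process on another machine, and that process could just as well wrap the remote machine's network stack. The `dial` method below is a made-up placeholder, and the server runs in-process only to keep the example self-contained:

```python
import threading
from xmlrpc.client import ServerProxy
from xmlrpc.server import SimpleXMLRPCServer

# "Remote" process: in real use this runs on the other computer, and its
# handlers would drive that computer's own network stack.
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
server.register_function(lambda target: f"dialed {target} remotely", "dial")
threading.Thread(target=server.serve_forever, daemon=True).start()

# Local process: method calls on the proxy travel over the wire.
port = server.server_address[1]
proxy = ServerProxy(f"http://127.0.0.1:{port}/")
result = proxy.dial("tcp!example.org!80")
print(result)
server.shutdown()
```

Whether this counts as "accessing the remote stack" or merely asking a remote program nicely is, of course, exactly the distinction at issue.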

Ah, interesting example, isn't it sad that every database system on
unix (or windows) needs to include its own networking code, its own
authentication, etc.?

Please take a look at a simple application using System::Data::DataGrid. Networking is completely transparent to the DataGrid class. It's been abstracted away, much as in Plan 9, though not in a technically identical way. In fact, the .NET framework has a whole range of abstractions for various purposes.

--On Thursday, August 21, 2008 9:42 AM +0200 Uriel <[EMAIL PROTECTED]> wrote:

On Wed, Aug 20, 2008 at 11:46 PM, Eris Discordia
<[EMAIL PROTECTED]> wrote:
Thank you, sqweek. The second golden Golden Apple with καλλιστι
on it is totally yours. The first one went to Russ Cox.

 You don't care who mounts what where, because the rest of the system
doesn't notice the namespace change.

So essentially there shouldn't be a problem with mounting on a single
"public" namespace as long as there is one user on the system. mount
restriction in UNIX systems was put in place because multiple users exist
some of whom may be malicious. Virtualization and jailing will relax that
requirement.

Mount restrictions on unix are needed (among other reasons) because of
a broken security model (ie., suid).

Virtualization and jailing are hacks to work around the inherent
limitation that in unix resources can not be easily
abstracted/isolated and are plagued by the 'only root can do X'
restriction ('only root can become another user', hence su/sudo, only
root can open certain ports, etc.) which Plan 9 cleanly does away
with.

Linux could do many things plan9 can do, if it got rid of all suid
programs (by perhaps using the cap device implementation for the linux
kernel, if that is ever accepted in mainline linux), but until then...

 Uh, what now? You either have an interesting definition of home
computer or some fucked up ideas about plan 9. You only need a cpu
server if you want to let other machines run processes on your
machine. You only need an auth server if you want to serve resources
to a remote machine.

Neither statement is true. On a home computer you certainly need a term.
You'll need a cpu for a number of tasks. And you'll need auth if there's
going to be more than one user on the system, or if you need a safe way
of authenticating yourself to your computer. A single glenda account
doesn't quite cut it. If you're going to access your storage you'll need
some fs('s), too.

The bottom line is: term is _certainly_ not enough for doing all the
tasks a *BSD does, and requiring a home computer to do all these tasks
is far from inconceivable. One *BSD system is almost functionally
equivalent to a combination of term, cpu, auth, and some fs('s).

A plan9 terminal can run programs, and can have a local storage file
system, with multiple users. As for authentication, in such a use case
unix auth is little more than a farce of security theater which could
easily be implemented in plan9 (and I think some people have) if you
wanted to keep your three year old child from accessing your account
but is futile for much else.

incantation, that's beside the point. In 9p, the abstraction is a file
tree, and the interface is

auth/attach/open/read/write/clunk/walk/remove/stat.

ioctl and VFS are suspiciously similar even though they serve less
generic functions.
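The quoted interface is small enough to caricature in a few lines. A toy in-memory "file server" answering a 9P-shaped verb set--this is a sketch of the shape of the protocol, not real 9P, which speaks these as wire messages (Twalk/Rwalk, and so on) and includes auth, create, and wstat:

```python
class ToyFS:
    """Toy file tree answering a 9P-shaped verb set as plain methods."""

    def __init__(self, tree):
        self.tree = tree  # nested dicts = directories, str = file contents
        self.fids = {}    # fid -> node, like 9P fids

    def attach(self, fid):
        self.fids[fid] = self.tree         # fid now names the root

    def walk(self, fid, newfid, *names):
        node = self.fids[fid]
        for name in names:
            node = node[name]              # descend one path element
        self.fids[newfid] = node

    def read(self, fid):
        node = self.fids[fid]
        if isinstance(node, dict):         # directory: list its entries
            return sorted(node)
        return node                        # file: its contents

    def clunk(self, fid):
        del self.fids[fid]                 # forget the fid

fs = ToyFS({"net": {"tcp": {"clone": "0"}}, "usr": {"glenda": {}}})
fs.attach(0)
fs.walk(0, 1, "net", "tcp")
print(fs.read(1))  # ['clone']
fs.clunk(1)
```

The point of the caricature: any resource that can answer these few verbs can be mounted into a namespace, which is why the abstraction generalizes so far.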

Try to do ioctl over the network.

network operations - everything is done via /net. Thanks to private
namespaces, you can transparently replace /net with some other crazy
[compatible] filesystem, which might load balance over multiple

How does that differ from a block device on UNIX presenting a network
interface? And why should avoiding system calls be considered an
advantage? Your VFS layer could do anything expected from /net, provided
that the file system abstraction for the resources represented under /net
is viable in the first place.

Here is a reason: Because Plan 9 has no network-related syscalls, and
applications contain no networking code (even when they are still
network transparent thanks to 9P), when ipv6 was added to plan9, no
changes were required to either any syscalls or any applications. On
the other hand on unix they are still to this day adding ipv6 support
to certain apps (and every app that needs to access remote resources
needs its own networking code that is aware of each protocol it wants
to support, etc).

When ipv6 needs to be replaced, the pain in the unix software
ecosystem will be even greater, while in plan9 it will be virtually
painless.

There are also the benefits of allowing different applications
(namespaces) use different network stacks without requiring full
virtualization of the whole OS (the few unix systems that have been
able to implement this functionality have done so after many years of
painful efforts and the result is incredibly clunky and complex), and
I don't think any unix systems allows a single application (or
namespace) to access *multiple* network stacks concurrently... and
remote network stacks? don't think so either.


implemented on any system, which is true [to an extent]. But it's
apparent that no others have the taste to do it as elegantly as plan 9 -

It's not a matter of taste. There are situations, many situations
actually, where the file system abstraction is plainly naive. Sticking
with it for every application verges on being an "ideology."

The VFS approach is by no means inferior to Plan 9's
everything-is-a-file, but on UNIX systems it is limited to resources
that can be meaningfully represented as file systems. Representing a
relational database as a file system is meaningless. The better
representation is something along the lines of the
System::Data::DataGrid class on Microsoft .NET framework.

Ah, interesting example, isn't it sad that every database system on
unix (or windows) needs to include its own networking code, its own
authentication, etc.?

Peace

uriel
