On 8/10/07, Eric Van Hensbergen <[EMAIL PROTECTED]> wrote:

>
> However, it was decided at the last IWP9 that we had probably taken
> the wrong approach with 9p2000.u and it would probably have been
> better to extend 9p with new operations that had different
> syntax/semantics from standard 9p as these would be easier to filter
> out.

And, before anyone jumps on this idea too hard: this approach works;
it's how I solved this messy problem in 1998. It was well worth trying
the .u variant, however, as you can't know some things until you try
them :-)
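The "easier to filter out" point can be sketched in a few lines. If extensions are distinct message types in their own numbering range, rather than modified forms of the standard messages, a strict server or proxy can pass or reject them with a single range check instead of parsing per-op variations. The opcode names and values below are illustrative, not the real 9p message numbers:

```c
#include <assert.h>

/* Illustrative sketch only: extension operations as distinct
 * message types.  The opcode values here are made up and do
 * not match real 9p T-message numbers. */
enum {
    Tversion = 100, Twalk = 110, Topen = 112, Tread = 116, /* "standard" ops */
    Textmin  = 200,                  /* extensions live in their own range */
    Tsetxattr = 200, Tgetxattr = 202,
    Textmax  = 255,
};

/* Standard ops always pass; extension ops are accepted or
 * rejected wholesale, with no per-op special casing. */
int op_allowed(int type, int allow_extensions)
{
    if (type >= Textmin && type <= Textmax)
        return allow_extensions;
    return 1;
}
```

A .u-style variant, by contrast, changes the wire format of existing messages, so a filter has to understand every modified operation to strip the extension out.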

> It was further suggested by some that a better approach on Linux/UNIX
> would have been to take what we know from 9p and design a protocol
> specific to the VFS (similar to FUSE but capable of transparent
> network, etc.)

That's probably true. What's weird is that the first one I did (for
2.0.3x) fit into the Linux VFS layer just fine. That's because the
Linux VFS was not as "mature" as it is now; it was actually far easier
in 1998 than it is now to fit a "non-standard" VFS into Linux. Back in
1998 I had
- plan 9 style union mounts
- private name spaces that worked for all programs
- 9p mounts

and it was all pretty easy. The newer VFS layer has frozen more design
assumptions into practice, with the result that it is harder, not
easier, to put interesting file systems into Linux. It was a bit
harder for 2.2, even harder for 2.4, and, as you know, a pain in the
neck in 2.6. It's easier, I suppose, to put in boring file systems
that act like all the other ones. There's a paper in that reality
somewhere ...

> However, such support is mainly necessary for folks looking to remote
> access feature rich on-disk file-systems.  I believe straight 9p is
> more than adequate for 99% of synthetic file systems with something as
> simple as 9p2000.u giving you extra bits necessary for basic UNIX
> compatibility.

I think that you are absolutely right. My experience certainly
supports this statement ...

It's an interesting idea to rethink v9fs with the current Linux VFS in
mind. I had not heard that one. It makes a lot of sense, however.

At the Google meeting, one suggestion for putting private name spaces
in-kernel (as opposed to the current 9p-fuse, which several at Google
are using) was to have a single per-user name space, mounted at some
well-known place, and work with that. This is very similar to what I
did in the original: codify the mount point to be at /private. All
users had to do a private mount (which was not a privileged operation,
hence easy) at /private; no user could see another's private mount;
and I never had to deal with the issue of multi-user access to a
single file, which has caused some work (see the code). On the old
stuff, for each private namespace file there was only one user, and
you figured out who it was by looking in the "inode". Going to one
user per mount might also make life simpler. That would, however, go
against another idea, which was to use 9p to serve root partitions.
I'm not a big believer in network-mounted root file systems, so am
less concerned with this :-)
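The per-user /private scheme boils down to a small amount of bookkeeping, which the sketch below illustrates (the names and table here are hypothetical, not the original code): each uid gets at most one name space, attaching is unprivileged, lookup only ever returns the caller's own mount, and the owner of a name space is just the uid recorded in its "inode", so no multi-user state is needed per file.

```c
#include <sys/types.h>

/* Hypothetical sketch of a per-user private name space table,
 * in the spirit of the /private scheme described above.
 * One name space per uid; ownership is simply the uid recorded
 * in the "inode", so there is no multi-user bookkeeping. */

#define MAXNS 16

struct ns {
    uid_t owner;   /* uid recorded in the "inode" */
    int   inuse;
};

static struct ns nstab[MAXNS];

/* Attach (mount) a private name space for uid.  Unprivileged,
 * so any user may do it.  Returns an index, or -1 if the table
 * is full or the user already has a mount. */
int ns_attach(uid_t uid)
{
    for (int i = 0; i < MAXNS; i++)
        if (nstab[i].inuse && nstab[i].owner == uid)
            return -1;              /* one name space per user */
    for (int i = 0; i < MAXNS; i++)
        if (!nstab[i].inuse) {
            nstab[i].owner = uid;
            nstab[i].inuse = 1;
            return i;
        }
    return -1;
}

/* Resolve /private for uid: a user sees only their own mount,
 * never another user's. */
int ns_lookup(uid_t uid)
{
    for (int i = 0; i < MAXNS; i++)
        if (nstab[i].inuse && nstab[i].owner == uid)
            return i;
    return -1;
}
```

The appeal is that isolation falls out of the lookup rule itself: there is no access-control check to get wrong, because another user's mount is simply never visible.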

ron
