one thing of note: linux vfs implements a dcache, which connects the
virtual memory system to the filesystem.  (oddly, linux network
buffers are handled separately.)  there is no client-side caching in
the plan 9 kernel.

there is one notable exception: the kernel keeps a cache of
executable images.

>> One more point, I googled a lot on "kernel resident file systems and
>> non kernel resident file systems", but I could not find a single
>> useful link. It would be great if you could specify the difference
>> between the two. I wish that eases up the situation a bit.

since the devtab[] functions map 1:1 onto 9p messages, all the mount
driver needs to do for calls that leave the kernel is marshal and
demarshal 9p messages.

it's important to remember that most in-kernel file servers could
just as easily exist outside the kernel.  the entire ip stack can be
implemented in user space.  (and it has been in the past.)

> "Kernel resident filesystem" in this context simply means a filesystem  
> which was created for use by the kernel; this may or may not be  
> visible to user-space applications - I'm not too sure. 

every element of devtab[] has an associated device letter.  to
mount the device, one typically does
        bind -a '#'^$letter /dev

for example, to bind a second ip stack on /net.alt,
        bind -a '#I1' /net.alt
 
> To sum up, you  
> use the 9 primitive operations provided by each 'Dev' when you work  
> with kernel-resident filesystems, while all other filesystems are  
> dealt with using regular 9P.

all devices are accessed through devtab[].  that entry may itself be
the mount driver, which turns each devtab[] function call into the
corresponding 9p message.  (and vice versa.)

- erik
