> > There is no difference between a file opened over the network and one
> > opened locally, except that in the network open case the network
> > filesystem does not maintain consistency between the client view of the
> > file and the server view of that same file, because doing so takes time
> > and the network filesystem designer decided to trade off in favour of
> > speed over consistency, usually without provision of a knob to change this
> > decision.

> Keith, I think you're ignoring the fact that when opening a file across a
> network the file system knows it's operating across the network and does
> not make the assumption that its cache is valid.  If things operated the
> way you describe, almost all shared access operations would lead to
> corruption.  Operations when opening a file across a network are slower
> because the NFS client has to check whether the file has been updated
> since its local copy was cached.

> The various teams which develop network file systems like NFS, SMB, and AFP
> all worried about these things and got them right.  Individual clients
> won't write out-of-date caches back to a centrally held file unless you've
> explicitly turned off the network-savvy routines.

I believe you are incorrect, Simon.  

To my knowledge there are *no* network filesystems which properly arbitrate 
multiple shared-read and shared-write client access to files opened across the 
network and provide a consistent view (similar to the view of the file that one 
would obtain if all the clients were operating on a local file with no network 
filesystem involved).  Moreover, as time goes on there is less and less ability 
for things to work properly.  Except in very rare cases, it has always been my 
experience that concurrent updates (as in record updates) lead to file 
corruption.
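
To make the point concrete: the only way an application can do record updates 
safely across one of these filesystems is to arbitrate the access *itself*, 
typically with explicit byte-range locks.  Below is a rough sketch of the sort 
of thing I mean, using POSIX fcntl() advisory locks (the record size is just a 
hypothetical constant, and whether the server's lock daemon actually honours 
these locks over NFS is another question entirely):

/* record_update.c - sketch of the explicit arbitration an application must
 * do itself, because the network filesystem will not arbitrate concurrent
 * record updates on its behalf.  Uses POSIX fcntl() advisory locks.
 */
#include <sys/types.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define RECORD_SIZE 128   /* hypothetical fixed-length record */

/* Lock, or unlock, exactly one record's byte range; blocks until granted. */
static int lock_record(int fd, int recno, short type)
{
    struct flock fl;
    memset(&fl, 0, sizeof fl);
    fl.l_type   = type;                        /* F_WRLCK, F_RDLCK or F_UNLCK */
    fl.l_whence = SEEK_SET;
    fl.l_start  = (off_t)recno * RECORD_SIZE;
    fl.l_len    = RECORD_SIZE;
    return fcntl(fd, F_SETLKW, &fl);
}

int main(int argc, char **argv)
{
    if (argc != 3) {
        fprintf(stderr, "usage: %s <file> <record-number>\n", argv[0]);
        return 1;
    }
    int recno = atoi(argv[2]);
    int fd = open(argv[1], O_RDWR);
    if (fd < 0) { perror("open"); return 1; }

    if (lock_record(fd, recno, F_WRLCK) < 0) {
        perror("fcntl(F_SETLKW)");
        return 1;
    }

    /* Read-modify-write of one record.  Without the surrounding lock, two
     * clients doing this concurrently can interleave and corrupt the record. */
    char buf[RECORD_SIZE];
    if (pread(fd, buf, sizeof buf, (off_t)recno * RECORD_SIZE) < 0) {
        perror("pread");
        return 1;
    }
    /* ... modify buf here ... */
    if (pwrite(fd, buf, sizeof buf, (off_t)recno * RECORD_SIZE) < 0) {
        perror("pwrite");
        return 1;
    }

    lock_record(fd, recno, F_UNLCK);   /* also released implicitly on close() */
    close(fd);
    return 0;
}

And note that these locks are only advisory: nothing in the filesystem forces a 
second client to take the same lock before scribbling on the record, so the 
corruption I described is still only ever one careless client away.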

If shared network access worked properly then Client/Server computing would 
never have needed to be invented.





