Until a few days ago, I thought that a sequence like the following (two
nfs servers exporting the same clustered data)

on client 1: lock range A through B of file1 (exported from nfs server 1)
on client 2: lock range A through C of file1 (exported from nfs server 2)
on client 1: write A through B
on client 2: write A through C
on client 1: unlock A through B
on client 2: unlock A through C

would corrupt data.  (In theory that could now be fixed, since nfsd
calls the filesystem's lock methods:
http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commit;h=fd85b8170dabbf021987875ef7f903791f4f181e)

But the more obvious point is that with two nfsd servers exporting the
same file data via the same cluster fs, latencies can be longer, and
the opportunity for stale metadata (e.g. file sizes) and for writes
getting reordered is greater.
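
To make the race concrete, here is a minimal userspace sketch of
client 1's side of that sequence (the mount point, file name, offsets,
and lengths are made up for illustration); client 2 would run the
equivalent against a mount from nfs server 2:

/*
 * Illustrative only: /mnt/nfs1/file1 is a hypothetical mount of file1
 * through nfs server 1.  If the two nfsd instances do not push the
 * byte-range lock down into the shared cluster fs, the matching
 * F_SETLKW on client 2 (through server 2) is granted at the same time
 * and the overlapping writes can interleave.
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
        struct flock fl;
        int fd = open("/mnt/nfs1/file1", O_RDWR);

        if (fd < 0) {
                perror("open");
                return 1;
        }

        memset(&fl, 0, sizeof(fl));
        fl.l_type = F_WRLCK;
        fl.l_whence = SEEK_SET;
        fl.l_start = 0;         /* offset A */
        fl.l_len = 4096;        /* through B */

        /* lock A through B; blocks only if the server sees the other lock */
        if (fcntl(fd, F_SETLKW, &fl) < 0) {
                perror("fcntl(F_SETLKW)");
                return 1;
        }

        /* write A through B while (supposedly) holding an exclusive lock */
        if (pwrite(fd, "client 1 data", 13, 0) < 0)
                perror("pwrite");

        fl.l_type = F_UNLCK;    /* unlock A through B */
        fcntl(fd, F_SETLK, &fl);
        close(fd);
        return 0;
}

With a single nfs server, the second client's F_SETLKW would block
until the first client's unlock; with two servers, that only happens
if nfsd hands the lock down to the cluster fs, which is what the
commit above makes possible.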


Steve French
Senior Software Engineer
Linux Technology Center - IBM Austin
phone: 512-838-2294
email: sfrench at-sign us dot ibm dot com



"J. Bruce Fields" <[EMAIL PROTECTED]> 
05/16/2007 11:02 AM

To
Steven French/Austin/[EMAIL PROTECTED]
cc
Christoph Hellwig <[EMAIL PROTECTED]>, [EMAIL PROTECTED], 
[EMAIL PROTECTED], linux-fsdevel@vger.kernel.org, [EMAIL PROTECTED]
Subject
Re: + knfsd-exportfs-add-exportfsh-header-fix.patch added to -mm tree






On Wed, May 16, 2007 at 09:55:41AM -0500, Steven French wrote:
> Any ideas what the minimum export operation(s) are that cifs would
> need to add to export under nfsd?  It was not clear to me after
> reading the Exporting document in the Documentation directory.
> 
> (Some users had wanted to export files from Windows servers to nfs
> clients by putting an nfs server mounted over cifs in between - I
> realize that this can corrupt data due to nfs client caching etc., as
> could even happen in some cases if you try to export a cluster file
> system under nfsd.)

What cases are you thinking of?

--b.

