On Wed, May 16, 2007 at 12:03:57PM -0500, Steven French wrote:
> I thought that until a few days ago, a sequence like the following (two 
> nfs servers exporting the same clustered data)
> 
> on client 1 lock range A through B of file1 (exported from nfs server 1)
> on client 2 lock range A through C of file1 (exported from nfs server 2)
> on client 1 write A through B
> on client 2 write A through C
> on client 1 unlock A through B
> on client 2 unlock A through C
> 
> would corrupt data (theoretically could be fixed as nfsd calls lock 
> methods 
> http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commit;h=fd85b8170dabbf021987875ef7f903791f4f181e)
>  

Right.

> but the more obvious point is that with two nfsd servers exporting the 
> same file data via the same cluster fs (under nfsd), the latencies can be 
> longer and the opportunity for stale metadata (file sizes) is greater

Hm.  How could nfsd get stale metadata?

I'm just (probably naively) assuming that a "cluster" filesystem
attempts to provide much stronger cache consistency than is actually
necessary to keep nfs clients happy.  But, if not, it would be nice to
understand the problem.

--b.