On Thu, 2008-01-17 at 11:40 -0500, J. Bruce Fields wrote:
> On Thu, Jan 17, 2008 at 11:31:22AM -0500, Wendy Cheng wrote:
> >>    it *should* be the case that the set of locks held on the
> >>    filesystem(s) that are moving are the same as the set of locks
> >>    held by the virtual ip that is moving.
> >>
> >> is still true in the cluster filesystem case, right?
> >>
> >> --b.
> >>   
> > Yes .... Wendy
> 
> In what situations (buggy client?  Weird network failure?) could that
> fail to be the case?
> 
> Would there be any advantage to enforcing that requirement in the
> server?  (For example, teaching nlm to reject any locking request for a
> certain filesystem that wasn't sent to a certain server IP.)

Trying to dredge up my clustered nfsd/lockd memories from having worked
on an implementation more than 7 years ago...

When a clustered filesystem is being exported, it might be the case that
the cluster has a set of IP addresses (probably one per node) that are
used to load-balance clients, and that each node exports all file
systems. As nodes fail (and all of this only matters when an interface
failure is the cause of the node failure; a full node crash need not
apply here), IP addresses are failed over to other nodes, taking with
them the set of clients that were accessing the cluster via that IP
address.
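As a rough illustration of the failover model described above (this is
not real nfsd/lockd code; the class and names are invented for the
sketch), the per-IP client mapping and an IP fail-over might look like:

```python
# Hypothetical sketch: per-node virtual IPs load-balancing NFS clients,
# with an IP (and its client set) failing over to another node.

class Cluster:
    def __init__(self, node_ips):
        # ip_owner maps a virtual IP to the node currently serving it,
        # e.g. {"10.0.0.1": "nodeA", "10.0.0.2": "nodeB"}.
        self.ip_owner = dict(node_ips)
        self.clients_by_ip = {ip: set() for ip in self.ip_owner}

    def mount(self, client, ip):
        # A client mounts through one virtual IP and keeps using it.
        self.clients_by_ip[ip].add(client)

    def fail_over(self, ip, new_node):
        # The virtual IP moves to another node; the clients that were
        # reaching the cluster via that IP move with it.
        self.ip_owner[ip] = new_node
        return self.clients_by_ip[ip]
```

With this sketch, an interface failure on nodeA would lead to
`fail_over("10.0.0.1", "nodeB")`, after which nodeB serves every client
that had mounted through that address.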

I assume the intent with this implementation is that the node taking
over will start lock recovery for the IP address? From that
perspective, it would be important that each file system be accessed
through only a single IP address; otherwise lock recovery will not work
correctly, since another node/IP could accept locks for that
filesystem, possibly "stealing" a lock that is still in recovery. As I
recall, our implementation put the entire filesystem into recovery
cluster-wide during fail-over.
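The two rules discussed above - each filesystem bound to a single
server IP, and only reclaims accepted while a filesystem is in
recovery - could be sketched roughly like this (hypothetical code, not
lockd; the class, names, and return values are invented):

```python
# Hypothetical sketch: reject lock requests that arrive on the wrong
# server IP for a filesystem, and allow only reclaims during recovery.

class LockManager:
    def __init__(self, fs_to_ip):
        self.fs_to_ip = dict(fs_to_ip)   # filesystem -> its designated IP
        self.in_recovery = set()         # filesystems currently in grace

    def start_recovery(self, fs):
        # On fail-over, put the whole filesystem into recovery
        # cluster-wide, as described above.
        self.in_recovery.add(fs)

    def end_recovery(self, fs):
        self.in_recovery.discard(fs)

    def lock_request(self, fs, dest_ip, reclaim=False):
        # Single-IP rule: reject a request that was not sent to the IP
        # this filesystem is exported through.
        if self.fs_to_ip.get(fs) != dest_ip:
            return "REJECT"
        # During recovery, only reclaims of previously held locks may
        # proceed; a fresh lock could "steal" a lock still in recovery.
        if fs in self.in_recovery and not reclaim:
            return "GRACE"
        return "GRANTED"
```

This also matches the enforcement idea raised earlier in the thread:
the server itself refuses locking requests for a filesystem that were
not sent to that filesystem's server IP.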

Frank

