On Thu, Apr 25, 2013 at 02:51:20PM -0400, Chuck Lever wrote:
> 
> On Apr 25, 2013, at 2:46 PM, "bfie...@fieldses.org" <bfie...@fieldses.org> 
> wrote:
> 
> > On Thu, Apr 25, 2013 at 02:40:11PM -0400, Chuck Lever wrote:
> >> 
> >> On Apr 25, 2013, at 2:19 PM, "bfie...@fieldses.org" <bfie...@fieldses.org> 
> >> wrote:
> >> 
> >>> On Thu, Apr 25, 2013 at 02:10:36PM +0000, Myklebust, Trond wrote:
> >>>> On Thu, 2013-04-25 at 09:49 -0400, bfie...@fieldses.org wrote:
> >>>>> On Thu, Apr 25, 2013 at 01:30:58PM +0000, Myklebust, Trond wrote:
> >>>>>> On Thu, 2013-04-25 at 09:29 -0400, bfie...@fieldses.org wrote:
> >>>>>> 
> >>>>>>> My position is that we simply have no idea even what order of
> >>>>>>> magnitude the delay should be.  And in such a situation the
> >>>>>>> exponential backoff implemented in the synchronous case seems the
> >>>>>>> reasonable default, as it guarantees at worst doubling the delay
> >>>>>>> while still bounding the long-term average frequency of retries.
> >>>>>> 
> >>>>>> So we start with a 15 second delay, and then go to 60 seconds?
> >>>>> 
> >>>>> I agree that a server should normally do the wait on its own if
> >>>>> the wait would be on the order of an RPC round trip.
> >>>>> 
> >>>>> So I'd be inclined to start with a delay that was an order of magnitude
> >>>>> or two more than a round trip.
> >>>>> 
> >>>>> And I'd expect NFS isn't common on networks with 1-second latencies.
> >>>>> 
> >>>>> So the 1/10 second we're using in the synchronous case sounds closer to
> >>>>> the right ballpark to me.
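> >>>>>
> >>>>> Concretely, I'd expect the async case to do more or less what
> >>>>> nfs4_delay() already does for the sync case.  Untested sketch,
> >>>>> reusing the existing NFS4_POLL_RETRY_MIN/MAX constants from
> >>>>> fs/nfs/nfs4proc.c (the helper name is made up):
> >>>>>
> >>>>> 	/* Pick the next NFS4ERR_DELAY retry interval: start at
> >>>>> 	 * NFS4_POLL_RETRY_MIN (HZ/10, i.e. 100ms) and double on
> >>>>> 	 * each retry, capped at NFS4_POLL_RETRY_MAX (15*HZ). */
> >>>>> 	static long nfs4_update_delay(long *timeout)
> >>>>> 	{
> >>>>> 		long ret;
> >>>>>
> >>>>> 		if (*timeout <= 0)
> >>>>> 			*timeout = NFS4_POLL_RETRY_MIN;
> >>>>> 		if (*timeout > NFS4_POLL_RETRY_MAX)
> >>>>> 			*timeout = NFS4_POLL_RETRY_MAX;
> >>>>> 		ret = *timeout;
> >>>>> 		*timeout <<= 1;
> >>>>> 		return ret;
> >>>>> 	}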
> >>>> 
> >>>> OK, then. Now all I need is actual motivation for changing the existing
> >>>> code other than handwaving arguments about "polling is better than flat
> >>>> waits".
> >>>> What actual use cases are impacting us now, other than the AIX design
> >>>> decision to force CLOSE to retry at least once before succeeding?
> >>> 
> >>> Nah, I've got nothing, and I agree that the AIX problem is their bug.
> >>> 
> >>> Just for fun I re-checked the Linux server cases.  As far as I
> >>> can tell they are:
> >>> 
> >>>   - delegations: returned immediately on detection of any
> >>>     conflict.  The current behavior in the sync case looks
> >>>     reasonable to me.
> >>>   - allocation failures: not really sure it's the best error, but
> >>>     it seems to be all the protocol offers.  We probably don't
> >>>     care much what the client does in this case.
> >>>   - some rare cases that would probably indicate bugs (e.g.,
> >>>     attempting to destroy a client while other RPCs from that
> >>>     client are running).  Again, we don't care what the client
> >>>     does here.
> >>>   - the 4.1 slot-inuse case.
> >>> 
> >>> We also by default map four errors (ETIMEDOUT, EAGAIN, EWOULDBLOCK,
> >>> ENOMEM) to delay.  I thought I remembered one of those being used by
> >>> some HSM system, but can't actually find an example now.  A quick grep
> >>> doesn't show anything interesting.
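> >>>
> >>> (For reference, those mappings all funnel through nfsd's nfserrno()
> >>> table.  Paraphrasing the relevant entries -- the table name here is
> >>> made up, and nfserr_jukebox is what goes out on the wire as
> >>> NFS4ERR_DELAY for v4:)
> >>>
> >>> 	static const struct {
> >>> 		__be32	nfserr;
> >>> 		int	syserr;
> >>> 	} delay_errtbl[] = {
> >>> 		{ nfserr_jukebox, -ETIMEDOUT },
> >>> 		{ nfserr_jukebox, -EAGAIN },
> >>> 		{ nfserr_jukebox, -EWOULDBLOCK },	/* == EAGAIN on Linux */
> >>> 		{ nfserr_jukebox, -ENOMEM },
> >>> 	};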
> >> 
> >> It's worth mentioning that servers that have frozen state (say, in 
> >> preparation for Transparent State Migration) may use NFS4ERR_DELAY to 
> >> prevent clients from modifying open or lock state until that state has 
> >> transitioned to a destination server.
> > 
> > I thought they'd decided they'd be forced to find a different way to
> > do that?
> > 
> > (The issue being that it only works if you're using 4.1, and if the
> > session state itself isn't part of the state to be transferred.
> > Otherwise you're forced to modify the state anyway since NFS4ERR_DELAY
> > is seqid-modifying.)
> 
> The answer is not to return NFS4ERR_DELAY on seqid-modifying operations.
> 
> The source server can return NFS4ERR_DELAY to the client's migration
> recovery operations (the GETATTR(fs_locations) request, for example).
> 
> Or, the server could return it on the initial PUTFH operation in a compound 
> containing seqid-modifying operations.
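> 
> (A hypothetical sketch of the latter, against nfsd's existing PUTFH
> handler; the freeze predicate is made up, and nfserr_jukebox is nfsd's
> name for what v4 clients see as NFS4ERR_DELAY:)
> 
> 	static __be32
> 	nfsd4_putfh(struct svc_rqst *rqstp,
> 		    struct nfsd4_compound_state *cstate,
> 		    struct nfsd4_putfh *putfh)
> 	{
> 		fh_put(&cstate->current_fh);
> 		cstate->current_fh.fh_handle.fh_size = putfh->pf_fhlen;
> 		memcpy(&cstate->current_fh.fh_handle.fh_base,
> 		       putfh->pf_fhval, putfh->pf_fhlen);
> 		/* Made-up check: fail the compound here while this
> 		 * filesystem's state is frozen for migration, so that no
> 		 * seqid-modifying op in the compound ever runs. */
> 		if (nfsd4_fs_frozen_for_migration(&cstate->current_fh))
> 			return nfserr_jukebox;
> 		return fh_verify(rqstp, &cstate->current_fh, 0,
> 				 NFSD_MAY_NOP);
> 	}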

Oh, right, I'd forgotten that approach....

--b.