(Apologies for the double-posting to individual recipients; resending
in plain text to get through vger list filters.)

The current code does not have a bug in this path (until we find one,
of course). nrs_resource_put_safe() releases references to the related
NRS resource hierarchies and policies for an RPC; as Oleg pointed out,
the server-side code which is included in the Intel/OpenSFS tree
(http://git.whamcloud.com/fs/lustre-release.git) has more than just
the FIFO policy, and so up to NRS_RES_MAX (currently 2) policies can
be associated with an NRS head (struct ptlrpc_nrs).
nrs_resource_put_safe() takes the NRS head's spinlock (nrs_lock),
releases references to the NRS head's policies, and then drops that
same spinlock. nrs_policy_stop0() can be entered from
nrs_policy_put_locked(), but whichever policy is passed to it, it
releases (and re-acquires) the same spinlock, since all policies for
an RPC share the same NRS head, and thus the same nrs_lock.

Liang wrote this part; I believe the aim is to avoid a lock dance on
nrs_lock when releasing the NRS head's policy references. The proposed
patch would add that lock dance, so I am not sure it would help much;
not that it would hurt, but I don't see a good reason to add it other
than to suppress the sparse warning.

Btw, I am guessing that the only reason parts of both NRS and the
server side of the PTLRPC code are in the kernel is to satisfy build
dependencies, and this code is unreachable; i.e. as the TODO file
suggests, the client and server sides of PTLRPC will be decoupled, and
this code will eventually be removed from the kernel, likely before
the transition out of staging.

Cheers,
Nikitas

On Thu, May 21, 2015 at 8:33 AM, Drokin, Oleg <oleg.dro...@intel.com> wrote:
> On May 21, 2015, at 11:12 AM, Dan Carpenter wrote:
>
>> Oh, sorry, I didn't read your patch very carefully.  It won't cause a
>> deadlock.  But I'm going to assume it's still not right until a lustre
>> expert Acks it.
>
> I just took a closer look and it appears the original code is buggy and the
> patch just propagates the bugginess.
>
> If we look at nrs_policy_put_locked, it eventually ends up in
> nrs_policy_stop0, which would hold a lock on whatever happened to be the
> first non-NULL policy in the array.
> But nrs_policy_stop0 would unlock the lock on the policy it was called on
> (already likely deadlock material) and then relock it.
>
> The problems would arise only if there is more than one NRS policy
> registered, which is theoretically possible but certainly makes no sense on
> a client (besides, none of the advanced NRS policies made it in anyway, and
> I almost feel like they just add unnecessary complication in client-only
> code).
>
> The code looks elaborate enough, as if the first policy's lock is always to
> be used as the guardian lock, but then the stop0 behavior might be a bug?
> Or it's possible we never end up in stop0 due to the NRS state machine?
> Let's see what Nikitas and Liang remember about any of this (one of them is
> the original author of this code, but I am not sure who.)
>
> Nikitas, Liang: The code in question is in nrs_resource_put_safe:
>         for (i = 0; i < NRS_RES_MAX; i++) {
>                 if (pols[i] == NULL)
>                         continue;
>
>                 if (nrs == NULL) {
>                         nrs = pols[i]->pol_nrs;
>                         spin_lock(&nrs->nrs_lock);
>                 }
>                 nrs_policy_put_locked(pols[i]);
>         }
>
>         if (nrs != NULL)
>                 spin_unlock(&nrs->nrs_lock);
>
_______________________________________________
devel mailing list
de...@linuxdriverproject.org
http://driverdev.linuxdriverproject.org/mailman/listinfo/driverdev-devel