Oleg,

Apologies for taking so long to reply. I have recently returned from a long
vacation.

On 2024/9/28 1:18, Oleg Nesterov wrote:
> On 09/27, Liao Chang wrote:
>>
>> +int recycle_utask_slot(struct uprobe_task *utask, struct xol_area *area)
>> +{
>> +    int slot = UINSNS_PER_PAGE;
>> +
>> +    /*
>> +     * Ensure that the slot is not in use on other CPU. However, this
>> +     * check is unnecessary when called in the context of an exiting
>> +     * thread. See xol_free_insn_slot() called from uprobe_free_utask()
>> +     * for more details.
>> +     */
>> +    if (test_and_put_task_slot(utask)) {
>> +            list_del(&utask->gc);
>> +            clear_bit(utask->insn_slot, area->bitmap);
>> +            atomic_dec(&area->slot_count);
>> +            utask->insn_slot = UINSNS_PER_PAGE;
>> +            refcount_set(&utask->slot_ref, 1);
> 
> This lacks a barrier, CPU can reorder the last 2 insns
> 
>               refcount_set(&utask->slot_ref, 1);
>               utask->insn_slot = UINSNS_PER_PAGE;
> 
> so the "utask->insn_slot == UINSNS_PER_PAGE" check in xol_get_insn_slot()
> can be false negative.

Good catch! Would an atomic_set() with release ordering be sufficient here
instead of a full smp_mb()?
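
Something like this is what I have in mind (a minimal, uncompiled sketch of
the recycle path; I don't see a release-ordered refcount_set() variant in my
tree, hence the full smp_mb()):

        if (test_and_put_task_slot(utask)) {
                list_del(&utask->gc);
                clear_bit(utask->insn_slot, area->bitmap);
                atomic_dec(&area->slot_count);
                utask->insn_slot = UINSNS_PER_PAGE;
                /*
                 * Order the insn_slot reset before republishing slot_ref;
                 * the reader side presumably needs matching acquire
                 * ordering in test_and_get_task_slot().
                 */
                smp_mb();
                refcount_set(&utask->slot_ref, 1);
        }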

> 
>> +static unsigned long xol_get_insn_slot(struct uprobe_task *utask,
>> +                                   struct uprobe *uprobe)
>>  {
>>      struct xol_area *area;
>>      unsigned long xol_vaddr;
>> @@ -1665,16 +1740,46 @@ static unsigned long xol_get_insn_slot(struct uprobe *uprobe)
>>      if (!area)
>>              return 0;
>>
>> -    xol_vaddr = xol_take_insn_slot(area);
>> -    if (unlikely(!xol_vaddr))
>> +    /*
>> +     * The racing on the utask associated slot_ref can occur unless the
>> +     * area runs out of slots. This isn't a common case. Even if it does
>> +     * happen, the scalability bottleneck will shift to another point.
>> +     */
> 
> I don't understand the comment, I guess it means the race with
> recycle_utask_slot() above.
> 
>> +    if (!test_and_get_task_slot(utask))

Exactly. While introducing another refcount operation here might seem like
a downside, contention on it should be lower than on xol_area->bitmap and
xol_area->slot_count (which you have already optimized).
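
To make the intended semantics of the two helpers explicit (simplified, the
idea rather than the literal implementation), I think of them as:

        static bool test_and_get_task_slot(struct uprobe_task *utask)
        {
                /* Fails only while recycle_utask_slot() owns the slot. */
                return refcount_inc_not_zero(&utask->slot_ref);
        }

        static bool test_and_put_task_slot(struct uprobe_task *utask)
        {
                /* GC side: take 1 -> 0 only if no thread holds the slot. */
                return refcount_dec_if_one(&utask->slot_ref);
        }

So slot_ref == 1 means the cached slot is idle, 0 means the GC is recycling
it, and a failed get should be rare unless the area runs out of slots.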

>>              return 0;
> 
> No, we can't do this. xol_get_insn_slot() should never fail.
> 
> OK, OK, currently xol_get_insn_slot() _can_ fail, but only if get_xol_area()
> fails to allocate the memory. Which should "never" happen and we can do
> nothing in this case anyway.

Sorry, I haven't traced the exact path where xol_get_insn_slot() fails. I
suspect the task would repeatedly trigger BRK exceptions until get_xol_area()
successfully returns. Please correct me if I am wrong.

> 
> But it certainly must not fail if it races with another thread, this is
> insane.

Agreed, it is too costly to fail just because we lost the race. I suggest
falling back to allocating a fresh slot from the area when the race is lost,
instead of returning 0. A rough sketch below.
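
Something like this (uncompiled; re-caching the fresh slot in utask and the
corresponding slot_ref handling are omitted for brevity):

        static unsigned long xol_get_insn_slot(struct uprobe_task *utask,
                                               struct uprobe *uprobe)
        {
                struct xol_area *area;
                unsigned long xol_vaddr;

                area = get_xol_area();
                if (!area)
                        return 0;

                if (test_and_get_task_slot(utask)) {
                        /* Fast path: reuse the slot cached in utask. */
                        xol_vaddr = area->vaddr +
                                    utask->insn_slot * UPROBE_XOL_SLOT_BYTES;
                } else {
                        /*
                         * Lost the race with recycle_utask_slot(): take a
                         * fresh slot from the bitmap instead of failing.
                         */
                        xol_vaddr = xol_take_insn_slot(area);
                        if (unlikely(!xol_vaddr))
                                return 0;
                }

                arch_uprobe_copy_ixol(area->pages[0], xol_vaddr,
                                      &uprobe->arch.ixol,
                                      sizeof(uprobe->arch.ixol));
                return xol_vaddr;
        }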

> 
> And. This patch changes the functions which ask for cleanups. I'll try to send
> a couple of simple patches on Monday.

Thank you for pointing that out. I must have missed some patches while I was
on vacation; I will go through the mailing list carefully and make sure this
patch works on top of any recent cleanups.

> 
> Oleg.
> 
> 

-- 
BR
Liao, Chang

