On Sun, Mar 12, 2017 at 6:49 PM, Eric Dumazet <eric.duma...@gmail.com> wrote:
> On Sun, 2017-03-12 at 17:49 +0200, Saeed Mahameed wrote:
>> On Sun, Mar 12, 2017 at 5:29 PM, Eric Dumazet <eric.duma...@gmail.com> wrote:
>> > On Sun, 2017-03-12 at 07:57 -0700, Eric Dumazet wrote:
>> >
>> >> Problem is XDP TX :
>> >>
>> >> I do not see any guarantee mlx4_en_recycle_tx_desc() runs while the RX
>> >> NAPI is owned by the current cpu.
>> >>
>> >> Since TX completion is using a different NAPI, I really do not believe
>> >> we can avoid an atomic operation, like a spinlock, to protect the list
>> >> of pages ( ring->page_cache )
>> >
>> > A quick fix for net-next would be :
>> >
>>
>> Hi Eric, good catch.
>>
>> I don't think we need to complicate this with an expensive spinlock.
>> We can simply fix it by not enabling interrupts on the XDP TX CQ (not
>> arming this CQ at all) and handling XDP TX CQ completions from the RX
>> NAPI context, serially (atomically), before handling the RX
>> completions themselves. This way no locking is required, since all
>> page cache handling is done from the same context (RX NAPI).
>>
>> This is how we do it in mlx5, and it is the best approach
>> (performance-wise), since we delay handling XDP TX CQ completions
>> until we really need the space they hold (on new RX packets).
>
> SGTM, can you provide the patch for mlx4 ?
>

Of course, we will send it soon.
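
For clarity, here is a rough sketch of what the RX NAPI poll looks like
under this scheme. All function, struct and field names below are
illustrative only (e.g. mlx4_en_process_xdp_tx_cq() and
priv->xdp_tx_cq[]), not the actual mlx4 code we will send:

/*
 * Illustrative sketch only -- the XDP TX CQ is created with interrupts
 * disabled and is never armed, so its completions are reaped exclusively
 * here, from the RX NAPI poll.  Since this is then the only context that
 * touches ring->page_cache, no spinlock is needed around the recycle list.
 */
static int mlx4_en_rx_napi_poll(struct napi_struct *napi, int budget)
{
	struct mlx4_en_cq *rx_cq = container_of(napi, struct mlx4_en_cq, napi);
	struct mlx4_en_priv *priv = netdev_priv(rx_cq->dev);
	struct mlx4_en_cq *xdp_tx_cq = priv->xdp_tx_cq[rx_cq->ring];
	int done;

	/* Drain XDP TX completions first: this recycles pages back into
	 * ring->page_cache right before the RX path needs them.
	 */
	mlx4_en_process_xdp_tx_cq(xdp_tx_cq);

	/* Normal RX processing; recycled pages are consumed here. */
	done = mlx4_en_process_rx_cq(rx_cq->dev, rx_cq, budget);

	if (done < budget && napi_complete_done(napi, done))
		mlx4_en_arm_cq(priv, rx_cq);	/* re-arm only the RX CQ */

	return done;
}

Compared to a spinlock around the page cache, this keeps the XDP hot
path free of atomics and naturally batches the TX completion work with
the RX work on the same CPU.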

> Thanks !
>
>
