> -----Original Message-----
> From: Simon Horman <[email protected]>
> Sent: Friday, July 26, 2024 9:22 AM
> To: Lobakin, Aleksander <[email protected]>
> Cc: [email protected]; Nguyen, Anthony L
> <[email protected]>; David S. Miller <[email protected]>; Eric
> Dumazet <[email protected]>; Jakub Kicinski <[email protected]>; Paolo
> Abeni <[email protected]>; NEX SW NCIS OSDT ITP Upstreaming
> <[email protected]>; [email protected]; linux-
> [email protected]; Kubiak, Michal <[email protected]>
> Subject: Re: [PATCH iwl-net 3/3] idpf: fix UAFs when destroying the queues
> 
> On Wed, Jul 24, 2024 at 03:40:24PM +0200, Alexander Lobakin wrote:
> > The second tagged commit started occasionally (very rarely, but
> > reproducibly) triggering WARNs from
> > net/core/page_pool.c:page_pool_disable_direct_recycling().
> > It turned out that idpf frees the interrupt vectors, with their
> > embedded NAPIs, *before* freeing the queues, so the page_pools' NAPI
> > pointers point to freed memory before libeth destroys the pools.
> > It's unclear whether destroying the queues accesses the freed vectors
> > in other ways, but in any case, queue/interrupt vectors should be
> > freed only once the queues are destroyed and the NAPIs are guaranteed
> > not to be referenced anywhere.
> >
> > Invert the allocation and freeing order so that queue/interrupt
> > vectors are allocated first and freed last. Vectors don't require the
> > queues to be present, so this is safe. This change also lets us drop
> > the redundant queue->q_vector pointer cleanup, as the vectors are
> > still valid when the queues are freed (both are freed within one
> > function, so nullifying the pointers served no purpose).
> >
> > Fixes: 1c325aac10a8 ("idpf: configure resources for TX queues")
> > Fixes: 90912f9f4f2d ("idpf: convert header split mode to libeth + napi_build_skb()")
> > Reported-by: Michal Kubiak <[email protected]>
> > Signed-off-by: Alexander Lobakin <[email protected]>
> 
> Reviewed-by: Simon Horman <[email protected]>
> 

Tested-by: Krishneil Singh <[email protected]>
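
The ordering fix described in the quoted commit message can be sketched in miniature. This is a toy userspace illustration only, assuming hypothetical `vector`/`queue` structs standing in for the real idpf structures: vectors are allocated before the queues that reference them and freed after, so queue teardown never dereferences a freed vector and no `q_vector` nullification is needed.

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical stand-ins for the driver's interrupt vector and queue
 * structures; only the reference from queue to vector matters here. */
struct vector { int id; };
struct queue  { struct vector *q_vector; };

/* Returns 1 if every queue still saw a valid vector during teardown. */
static int setup_and_teardown(int n)
{
	/* Allocate: vectors first, then the queues that point at them. */
	struct vector *vecs = calloc(n, sizeof(*vecs));
	struct queue *qs = calloc(n, sizeof(*qs));
	int ok = vecs && qs;

	for (int i = 0; ok && i < n; i++) {
		vecs[i].id = i;
		qs[i].q_vector = &vecs[i];
	}

	/* Free: queues first -- each q_vector is still valid here, so
	 * there is nothing to nullify -- then the vectors last. */
	for (int i = 0; ok && i < n; i++)
		ok &= (qs[i].q_vector->id == i);
	free(qs);
	free(vecs);
	return ok;
}
```

The buggy order would be the mirror image: freeing `vecs` before the loop that walks `qs[i].q_vector` turns every such access into a use-after-free, which is what the page_pool WARN was catching via the stale NAPI pointers.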
