> -----Original Message-----
> From: Intel-wired-lan <[email protected]> On Behalf Of
> Joshua Hay
> Sent: Friday, July 25, 2025 11:42 AM
> To: [email protected]
> Cc: [email protected]; Hay, Joshua A <[email protected]>; Luigi
> Rizzo <[email protected]>; Brian Vazquez <[email protected]>; Chittim,
> Madhu <[email protected]>; Loktionov, Aleksandr
> <[email protected]>
> Subject: [Intel-wired-lan] [PATCH iwl-net v3 4/6] idpf: replace flow
> scheduling buffer ring with buffer pool
> 
> Replace the TxQ buffer ring with one large pool/array of buffers (only for
> flow scheduling). This eliminates tag generation and makes it impossible for
> a tag to be associated with more than one packet.
> 
> The completion tag passed to HW through the descriptor is the index into the
> array. That same completion tag is posted back to the driver in the completion
> descriptor and is used to index into the array to quickly retrieve the buffer
> during cleaning. In this way, the tags are treated as a fixed-size resource.
> If all tags are in use, no more packets can be sent on that particular queue
> (until some are freed up). The tag pool size is 64K since the completion tag
> width is 16 bits.
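
The fixed-size tag pool described above can be sketched in userspace C roughly
as follows (all names here — tag_pool, tag_alloc, tag_free — are illustrative,
not the driver's actual identifiers, and the pool is shrunk for readability):

```c
#include <stdint.h>

#define TAG_POOL_SIZE 8 /* the driver uses 64K: 16-bit completion tag width */

struct tag_pool {
	uint16_t free_tags[TAG_POOL_SIZE]; /* refillq: stack of free buf_ids */
	int top;                           /* number of free tags remaining */
};

/* The refillq starts out holding every possible tag. */
static void tag_pool_init(struct tag_pool *p)
{
	for (int i = 0; i < TAG_POOL_SIZE; i++)
		p->free_tags[i] = (uint16_t)i;
	p->top = TAG_POOL_SIZE;
}

/* Returns -1 when no tags are free: the queue must stop sending. */
static int tag_alloc(struct tag_pool *p)
{
	if (!p->top)
		return -1;
	return p->free_tags[--p->top];
}

/* Cleaning posts the tag back, making the buf_id reusable. */
static void tag_free(struct tag_pool *p, uint16_t tag)
{
	p->free_tags[p->top++] = tag;
}
```

Since each tag is handed out at most once until cleaning returns it, a tag can
never refer to two in-flight packets at the same time.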
> 
> For each packet, the driver pulls a free tag from the refillq to get the next
> free buffer index. When cleaning is complete, the tag is posted back to the
> refillq. A multi-frag packet spans multiple buffers in the driver, and
> therefore uses multiple buffer indexes/tags from the pool. Each frag pulls
> from the refillq to get the next free buffer index. These are tracked in a
> next_buf field that replaces the completion tag field in the buffer struct.
> This chains the buffers together so that the packet can be cleaned starting
> from the completion tag taken from the completion descriptor, then from the
> next_buf field of each subsequent buffer.
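
A minimal sketch of that chained clean, assuming an illustrative tx_buf layout
(the next_buf field name matches the description above; END_OF_CHAIN and the
mapped flag are stand-ins, not the driver's real definitions):

```c
#include <stdint.h>

#define POOL_SIZE 16
#define END_OF_CHAIN 0xFFFF /* illustrative chain terminator */

struct tx_buf {
	uint16_t next_buf; /* replaces the old completion-tag field */
	int mapped;        /* stand-in for a DMA mapping to release */
};

/* Clean every buffer of one packet, starting from the completion tag
 * reported by hardware in the completion descriptor; returns how many
 * buf_ids were freed (each would be posted back to the refillq). */
static int clean_packet(struct tx_buf *pool, uint16_t compl_tag)
{
	int freed = 0;
	uint16_t idx = compl_tag;

	while (idx != END_OF_CHAIN) {
		uint16_t next = pool[idx].next_buf;

		pool[idx].mapped = 0; /* unmap + free this buffer */
		pool[idx].next_buf = END_OF_CHAIN;
		freed++;
		idx = next;
	}
	return freed;
}
```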
> 
> If a dma_mapping_error occurs or the refillq runs out of free buf_ids, the
> packet takes the rollback error path. This unmaps any buffers previously
> mapped for the packet. Since several free buf_ids could have already been
> pulled from the refillq, we need to restore its original state as well.
> Otherwise, the buf_ids/tags will be leaked and not used again until the
> queue is reallocated.
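
The rollback idea can be sketched like this: snapshot the refillq's consumer
state before the first frag is mapped, and on failure unmap what was mapped
and restore that snapshot. All names below (refillq, send_packet, fail_at) are
illustrative, and the refillq is modeled as a simple stack:

```c
#include <stdint.h>

#define RQ_SIZE 8

struct refillq {
	uint16_t ids[RQ_SIZE];
	int top; /* number of free buf_ids */
};

static void rq_init(struct refillq *rq)
{
	for (int i = 0; i < RQ_SIZE; i++)
		rq->ids[i] = (uint16_t)i;
	rq->top = RQ_SIZE;
}

static int rq_get(struct refillq *rq)
{
	return rq->top ? rq->ids[--rq->top] : -1;
}

/* Map nfrags frags; fail_at injects a mapping error on that frag
 * (fail_at < 0 means every mapping succeeds). mapped[] stands in for
 * the per-buffer DMA mapping state. */
static int send_packet(struct refillq *rq, int nfrags, int fail_at,
		       int *mapped)
{
	int saved_top = rq->top; /* snapshot before pulling any buf_ids */

	for (int f = 0; f < nfrags; f++) {
		int id = rq_get(rq);

		if (id < 0 || f == fail_at) {
			/* Rollback: unmap everything mapped so far... */
			for (int i = saved_top - 1; i >= rq->top; i--)
				mapped[rq->ids[i]] = 0;
			/* ...and restore the refillq's original state so
			 * the pulled buf_ids/tags are not leaked. */
			rq->top = saved_top;
			return -1;
		}
		mapped[id] = 1;
	}
	return 0;
}
```

After a failed send, the refillq holds exactly the same free buf_ids it held
before the packet was attempted.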
> 
> Descriptor completions only advance the descriptor ring index to "clean"
> the descriptors. The packet completions only clean the buffers associated with
> the given packet completion tag and do not update the descriptor ring index.
> 
> When operating in queue-based scheduling mode, the array still acts as a ring
> and will only have TxQ descriptor count entries. The tx_bufs are still
> associated 1:1 with the descriptor ring entries and we can use the
> conventional indexing mechanisms.
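
In other words, in queue-based mode the buf_id degenerates to the ordinary
ring index, which might be sketched as (illustrative helper name, not driver
code):

```c
#include <stdint.h>

/* In queue-based scheduling mode the pool is sized to the descriptor
 * count, so each tx_buf pairs 1:1 with its descriptor and the buf_id
 * is just the conventional wrapping ring index. */
static uint16_t qb_buf_id(uint16_t next_to_use, uint16_t desc_count)
{
	return next_to_use % desc_count;
}
```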
> 
> Fixes: c2d548cad150 ("idpf: add TX splitq napi poll support")
> Signed-off-by: Luigi Rizzo <[email protected]>
> Signed-off-by: Brian Vazquez <[email protected]>
> Signed-off-by: Joshua Hay <[email protected]>
> Reviewed-by: Madhu Chittim <[email protected]>
> Reviewed-by: Aleksandr Loktionov <[email protected]>
> ---
> v3:
> - remove unreachable code
> 
> v2:
> - removed unused buf_size
> - miscellaneous cleanup based on changes to prior patches and addition
>   of packet rollback changes patch
> - refactor packet rollback logic to iterate through chained bufs
> - add refillq state restore if rollback occurs
> ---
> 2.39.2

Tested-by: Samuel Salin <[email protected]>
