On 2023/12/8 1:20, Alexander Lobakin wrote:
...

> +
> +/**
> + * libie_rx_page_pool_create - create a PP with the default libie settings
> + * @bq: buffer queue struct to fill
> + * @napi: &napi_struct covering this PP (no usage outside its poll loops)
> + *
> + * Return: 0 on success, -errno on failure.
> + */
> +int libie_rx_page_pool_create(struct libie_buf_queue *bq,
> +                           struct napi_struct *napi)
> +{
> +     struct page_pool_params pp = {
> +             .flags          = PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV,
> +             .order          = LIBIE_RX_PAGE_ORDER,
> +             .pool_size      = bq->count,
> +             .nid            = NUMA_NO_NODE,

Is there a reason NUMA_NO_NODE is used here instead of
dev_to_node(napi->dev->dev.parent)?
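I.e. something along these lines (untested, just to illustrate what I mean):

	.nid		= dev_to_node(napi->dev->dev.parent),

so the pool's pages would default to the device's local node rather than
whichever node the allocating CPU happens to be on.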

> +             .dev            = napi->dev->dev.parent,
> +             .netdev         = napi->dev,
> +             .napi           = napi,
> +             .dma_dir        = DMA_FROM_DEVICE,
> +             .offset         = LIBIE_SKB_HEADROOM,
> +     };
> +
> +     /* HW-writeable / syncable length per one page */
> +     pp.max_len = LIBIE_RX_BUF_LEN(pp.offset);
> +
> +     /* HW-writeable length per buffer */
> +     bq->rx_buf_len = libie_rx_hw_len(&pp);
> +     /* Buffer size to allocate */
> +     bq->truesize = roundup_pow_of_two(SKB_HEAD_ALIGN(pp.offset +
> +                                                      bq->rx_buf_len));
> +
> +     bq->pp = page_pool_create(&pp);
> +
> +     return PTR_ERR_OR_ZERO(bq->pp);
> +}
> +EXPORT_SYMBOL_NS_GPL(libie_rx_page_pool_create, LIBIE);
> +

...

> +/**
> + * libie_rx_sync_for_cpu - synchronize or recycle buffer post DMA
> + * @buf: buffer to process
> + * @len: frame length from the descriptor
> + *
> + * Process the buffer after it's written by HW. The regular path is to
> + * synchronize DMA for CPU, but in case of no data it will be immediately
> + * recycled back to its PP.
> + *
> + * Return: true when there's data to process, false otherwise.
> + */
> +static inline bool libie_rx_sync_for_cpu(const struct libie_rx_buffer *buf,
> +                                      u32 len)
> +{
> +     struct page *page = buf->page;
> +
> +     /* Very rare, but possible case. The most common reason:
> +      * the last fragment contained FCS only, which was then
> +      * stripped by the HW.
> +      */
> +     if (unlikely(!len)) {
> +             page_pool_recycle_direct(page->pp, page);
> +             return false;
> +     }
> +
> +     page_pool_dma_sync_for_cpu(page->pp, page, buf->offset, len);

Is there a reason why page_pool_dma_sync_for_cpu() is still needed when
page_pool_create() is called with the PP_FLAG_DMA_SYNC_DEV flag? Isn't the
syncing already handled by the page_pool core when that flag is set?
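If that's the case, I'd have expected the body to shrink to something like
this (only a sketch of what I mean, not tested):

	if (unlikely(!len)) {
		page_pool_recycle_direct(page->pp, page);
		return false;
	}

	/* DMA sync left to the PP core, assuming the flag covers it */
	return true;

i.e. with the explicit sync call dropped.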

> +
> +     return true;
> +}
>  
>  /* O(1) converting i40e/ice/iavf's 8/10-bit hardware packet type to a parsed
>   * bitfield struct.
> 