Re: [RFC PATCH net-next v7 04/14] netdev: support binding dma-buf to netdevice

2024-03-28 Thread Simon Horman
On Thu, Mar 28, 2024 at 11:55:23AM -0700, Mina Almasry wrote:
> On Thu, Mar 28, 2024 at 11:28 AM Simon Horman  wrote:
> >
> > On Tue, Mar 26, 2024 at 03:50:35PM -0700, Mina Almasry wrote:
> > > Add a netdev_dmabuf_binding struct which represents the
> > > dma-buf-to-netdevice binding. The netlink API will bind the dma-buf to
> > > rx queues on the netdevice. At binding time, dma_buf_attach &
> > > dma_buf_map_attachment are called. The entries in the sg_table from
> > > the mapping will be inserted into a genpool to make the memory ready
> > > for allocation.
> > >
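For readers less familiar with the dma-buf importer side, the attach and
map step described above corresponds roughly to the standard importer
flow sketched below. This is a minimal illustration, not the patch's
actual code; the local names and the use of dev->dev.parent as the
DMA-capable device are assumptions.

        struct dma_buf_attachment *attachment;
        struct sg_table *sgt;
        struct scatterlist *sg;
        int i;

        /* Attach the importing device to the dma-buf, then map the
         * attachment to get a scatter-gather table of DMA addresses.
         */
        attachment = dma_buf_attach(dmabuf, dev->dev.parent);
        if (IS_ERR(attachment))
                return PTR_ERR(attachment);

        sgt = dma_buf_map_attachment(attachment, DMA_FROM_DEVICE);
        if (IS_ERR(sgt))
                return PTR_ERR(sgt);

        /* Each DMA-mapped entry becomes one region handed to the genpool. */
        for_each_sgtable_dma_sg(sgt, sg, i) {
                dma_addr_t addr = sg_dma_address(sg);
                unsigned int len = sg_dma_len(sg);

                /* insert [addr, addr + len) into the genpool here */
        }
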
> > > The chunks in the genpool are owned by a dmabuf_chunk_owner struct which
> > > holds the dma-buf offset of the base of the chunk and the dma_addr of
> > > the chunk. Both are needed to use allocations that come from this chunk.
> > >
> > > We create a new type that represents an allocation from the genpool:
> > > net_iov. We set up the net_iov allocation size in the
> > > genpool to PAGE_SIZE for simplicity, to match the PAGE_SIZE normally
> > > allocated by the page pool and given to the drivers.
> > >
> > > The user can unbind the dmabuf from the netdevice by closing the netlink
> > > socket that established the binding. We do this so that the binding is
> > > automatically released even if the userspace process crashes.
> > >
> > > Binding and unbinding leave an indicator in struct netdev_rx_queue
> > > that the given queue is bound, but the binding doesn't take effect until
> > > the driver actually reconfigures its queues, and re-initializes its page
> > > pool.
> > >
> > > The netdev_dmabuf_binding struct is refcounted, and releases its
> > > resources only when all the refs are released.
> > >
> > > Signed-off-by: Willem de Bruijn 
> > > Signed-off-by: Kaiyuan Zhang 
> > > Signed-off-by: Mina Almasry 
> >
> > ...
> >
> > > +int net_devmem_bind_dmabuf_to_queue(struct net_device *dev, u32 rxq_idx,
> > > + struct net_devmem_dmabuf_binding *binding)
> > > +{
> > > + struct netdev_rx_queue *rxq;
> > > + u32 xa_idx;
> > > + int err;
> > > +
> > > + if (rxq_idx >= dev->num_rx_queues)
> > > + return -ERANGE;
> > > +
> > > + rxq = __netif_get_rx_queue(dev, rxq_idx);
> > > + if (rxq->mp_params.mp_priv)
> > > + return -EEXIST;
> > > +
> > > + err = xa_alloc(&binding->bound_rxq_list, &xa_idx, rxq, xa_limit_32b,
> > > +GFP_KERNEL);
> > > + if (err)
> > > + return err;
> > > +
> > > + /* We hold the rtnl_lock while binding/unbinding dma-buf, so we can't
> > > +  * race with another thread that is also modifying this value. However,
> > > +  * the driver may read this config while it's creating its rx-queues.
> > > +  * WRITE_ONCE() here to match the READ_ONCE() in the driver.
> > > +  */
> > > + WRITE_ONCE(rxq->mp_params.mp_ops, &dmabuf_devmem_ops);
> >
> > Hi Mina,
> >
> > This causes a build failure because dmabuf_devmem_ops is not added until a
> > subsequent patch in this series.
> >
> 
> My apologies. I see the failure in patchwork now. I'll do a
> patch-by-patch build for the next iteration.

Thanks, much appreciated.

> > > + WRITE_ONCE(rxq->mp_params.mp_priv, binding);
> > > +
> > > + err = net_devmem_restart_rx_queue(dev, rxq_idx);
> > > + if (err)
> > > + goto err_xa_erase;
> > > +
> > > + return 0;
> > > +
> > > +err_xa_erase:
> > > + WRITE_ONCE(rxq->mp_params.mp_ops, NULL);
> > > + WRITE_ONCE(rxq->mp_params.mp_priv, NULL);
> > > + xa_erase(&binding->bound_rxq_list, xa_idx);
> > > +
> > > + return err;
> > > +}
> >
> > ...
> 
> 
> 
> -- 
> Thanks,
> Mina
> 


Re: [RFC PATCH net-next v7 04/14] netdev: support binding dma-buf to netdevice

2024-03-28 Thread Mina Almasry
On Thu, Mar 28, 2024 at 11:28 AM Simon Horman  wrote:
>
> On Tue, Mar 26, 2024 at 03:50:35PM -0700, Mina Almasry wrote:
> > Add a netdev_dmabuf_binding struct which represents the
> > dma-buf-to-netdevice binding. The netlink API will bind the dma-buf to
> > rx queues on the netdevice. At binding time, dma_buf_attach &
> > dma_buf_map_attachment are called. The entries in the sg_table from
> > the mapping will be inserted into a genpool to make the memory ready
> > for allocation.
> >
> > The chunks in the genpool are owned by a dmabuf_chunk_owner struct which
> > holds the dma-buf offset of the base of the chunk and the dma_addr of
> > the chunk. Both are needed to use allocations that come from this chunk.
> >
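To expand on the chunk-owner idea: conceptually, each chunk added to the
genpool carries a small owner struct, roughly like the sketch below. The
field names are illustrative, not necessarily the ones in the patch.

        struct dmabuf_chunk_owner {
                /* Offset into the dma-buf where this chunk starts. */
                unsigned long base_virtual;

                /* DMA address of the start of this chunk. */
                dma_addr_t base_dma_addr;

                /* The binding this chunk belongs to. */
                struct net_devmem_dmabuf_binding *binding;
        };

An allocation handed out at genpool address addr then resolves to DMA
address base_dma_addr + (addr - base_virtual), which is why both fields
are needed to use allocations from the chunk.
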
> > We create a new type that represents an allocation from the genpool:
> > net_iov. We set up the net_iov allocation size in the
> > genpool to PAGE_SIZE for simplicity, to match the PAGE_SIZE normally
> > allocated by the page pool and given to the drivers.
> >
> > The user can unbind the dmabuf from the netdevice by closing the netlink
> > socket that established the binding. We do this so that the binding is
> > automatically released even if the userspace process crashes.
> >
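For reviewers unfamiliar with this pattern: one common way to tie kernel
state to the lifetime of a netlink socket is a NETLINK_URELEASE notifier,
sketched below. This is only a generic illustration of the idea, under
the assumption that bindings can be looked up by the owning socket's
portid; it is not necessarily how this series wires it up.

        #include <linux/netlink.h>
        #include <linux/notifier.h>

        static int devmem_netlink_notify(struct notifier_block *nb,
                                         unsigned long state, void *_notify)
        {
                struct netlink_notify *notify = _notify;

                /* Called whenever a netlink socket is released. */
                if (state != NETLINK_URELEASE)
                        return NOTIFY_DONE;

                /* Look up bindings created by notify->portid in notify->net
                 * and unbind them here.
                 */
                return NOTIFY_DONE;
        }

        static struct notifier_block devmem_netlink_nb = {
                .notifier_call = devmem_netlink_notify,
        };

        /* registered once at init:
         * netlink_register_notifier(&devmem_netlink_nb);
         */
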
> > Binding and unbinding leave an indicator in struct netdev_rx_queue
> > that the given queue is bound, but the binding doesn't take effect until
> > the driver actually reconfigures its queues, and re-initializes its page
> > pool.
> >
> > The netdev_dmabuf_binding struct is refcounted, and releases its
> > resources only when all the refs are released.
> >
> > Signed-off-by: Willem de Bruijn 
> > Signed-off-by: Kaiyuan Zhang 
> > Signed-off-by: Mina Almasry 
>
> ...
>
> > +int net_devmem_bind_dmabuf_to_queue(struct net_device *dev, u32 rxq_idx,
> > + struct net_devmem_dmabuf_binding *binding)
> > +{
> > + struct netdev_rx_queue *rxq;
> > + u32 xa_idx;
> > + int err;
> > +
> > + if (rxq_idx >= dev->num_rx_queues)
> > + return -ERANGE;
> > +
> > + rxq = __netif_get_rx_queue(dev, rxq_idx);
> > + if (rxq->mp_params.mp_priv)
> > + return -EEXIST;
> > +
> > + err = xa_alloc(&binding->bound_rxq_list, &xa_idx, rxq, xa_limit_32b,
> > +GFP_KERNEL);
> > + if (err)
> > + return err;
> > +
> > + /* We hold the rtnl_lock while binding/unbinding dma-buf, so we can't
> > +  * race with another thread that is also modifying this value. However,
> > +  * the driver may read this config while it's creating its rx-queues.
> > +  * WRITE_ONCE() here to match the READ_ONCE() in the driver.
> > +  */
> > + WRITE_ONCE(rxq->mp_params.mp_ops, &dmabuf_devmem_ops);
>
> Hi Mina,
>
> This causes a build failure because dmabuf_devmem_ops is not added until a
> subsequent patch in this series.
>

My apologies. I see the failure in patchwork now. I'll do a
patch-by-patch build for the next iteration.

> > + WRITE_ONCE(rxq->mp_params.mp_priv, binding);
> > +
> > + err = net_devmem_restart_rx_queue(dev, rxq_idx);
> > + if (err)
> > + goto err_xa_erase;
> > +
> > + return 0;
> > +
> > +err_xa_erase:
> > + WRITE_ONCE(rxq->mp_params.mp_ops, NULL);
> > + WRITE_ONCE(rxq->mp_params.mp_priv, NULL);
> > + xa_erase(&binding->bound_rxq_list, xa_idx);
> > +
> > + return err;
> > +}
>
> ...



-- 
Thanks,
Mina


Re: [RFC PATCH net-next v7 04/14] netdev: support binding dma-buf to netdevice

2024-03-28 Thread Simon Horman
On Tue, Mar 26, 2024 at 03:50:35PM -0700, Mina Almasry wrote:
> Add a netdev_dmabuf_binding struct which represents the
> dma-buf-to-netdevice binding. The netlink API will bind the dma-buf to
> rx queues on the netdevice. At binding time, dma_buf_attach &
> dma_buf_map_attachment are called. The entries in the sg_table from
> the mapping will be inserted into a genpool to make the memory ready
> for allocation.
> 
> The chunks in the genpool are owned by a dmabuf_chunk_owner struct which
> holds the dma-buf offset of the base of the chunk and the dma_addr of
> the chunk. Both are needed to use allocations that come from this chunk.
> 
> We create a new type that represents an allocation from the genpool:
> net_iov. We set up the net_iov allocation size in the
> genpool to PAGE_SIZE for simplicity, to match the PAGE_SIZE normally
> allocated by the page pool and given to the drivers.
> 
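As an aside, fixing the allocation unit at PAGE_SIZE falls out of the
genpool's minimum allocation order. A sketch of what such setup typically
looks like (illustrative only; virt, len and owner stand in for the
per-chunk values described above):

        #include <linux/genalloc.h>

        struct gen_pool *pool;
        unsigned long virt;     /* chunk's offset into the dma-buf */
        size_t len;             /* chunk length */
        void *owner;            /* per-chunk owner struct */
        int err;

        /* PAGE_SHIFT as min_alloc_order makes every allocation a
         * PAGE_SIZE multiple, matching what the page pool normally
         * hands to drivers.
         */
        pool = gen_pool_create(PAGE_SHIFT, NUMA_NO_NODE);
        if (!pool)
                return -ENOMEM;

        /* One region per sg_table entry; -1 means no physical address
         * is associated with the region.
         */
        err = gen_pool_add_owner(pool, virt, -1, len, NUMA_NO_NODE, owner);
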
> The user can unbind the dmabuf from the netdevice by closing the netlink
> socket that established the binding. We do this so that the binding is
> automatically released even if the userspace process crashes.
> 
> Binding and unbinding leave an indicator in struct netdev_rx_queue
> that the given queue is bound, but the binding doesn't take effect until
> the driver actually reconfigures its queues, and re-initializes its page
> pool.
> 
> The netdev_dmabuf_binding struct is refcounted, and releases its
> resources only when all the refs are released.
> 
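On the refcounting point, the usual kernel pattern applies; something
like the sketch below, where the helper names and the ref field are
illustrative guesses rather than the patch's exact definitions.

        #include <linux/refcount.h>

        struct net_devmem_dmabuf_binding {
                /* ... dma-buf, attachment, genpool, bound queues ... */
                refcount_t ref;
        };

        static void
        net_devmem_dmabuf_binding_get(struct net_devmem_dmabuf_binding *binding)
        {
                refcount_inc(&binding->ref);
        }

        static void
        net_devmem_dmabuf_binding_put(struct net_devmem_dmabuf_binding *binding)
        {
                if (!refcount_dec_and_test(&binding->ref))
                        return;

                /* Last reference gone: unmap the dma-buf, destroy the
                 * genpool, and free the binding.
                 */
        }
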
> Signed-off-by: Willem de Bruijn 
> Signed-off-by: Kaiyuan Zhang 
> Signed-off-by: Mina Almasry 

...

> +int net_devmem_bind_dmabuf_to_queue(struct net_device *dev, u32 rxq_idx,
> + struct net_devmem_dmabuf_binding *binding)
> +{
> + struct netdev_rx_queue *rxq;
> + u32 xa_idx;
> + int err;
> +
> + if (rxq_idx >= dev->num_rx_queues)
> + return -ERANGE;
> +
> + rxq = __netif_get_rx_queue(dev, rxq_idx);
> + if (rxq->mp_params.mp_priv)
> + return -EEXIST;
> +
> > + err = xa_alloc(&binding->bound_rxq_list, &xa_idx, rxq, xa_limit_32b,
> +GFP_KERNEL);
> + if (err)
> + return err;
> +
> + /* We hold the rtnl_lock while binding/unbinding dma-buf, so we can't
> +  * race with another thread that is also modifying this value. However,
> > +  * the driver may read this config while it's creating its rx-queues.
> +  * WRITE_ONCE() here to match the READ_ONCE() in the driver.
> +  */
> > + WRITE_ONCE(rxq->mp_params.mp_ops, &dmabuf_devmem_ops);

Hi Mina,

This causes a build failure because dmabuf_devmem_ops is not added until a
subsequent patch in this series.

> + WRITE_ONCE(rxq->mp_params.mp_priv, binding);
> +
> + err = net_devmem_restart_rx_queue(dev, rxq_idx);
> + if (err)
> + goto err_xa_erase;
> +
> + return 0;
> +
> +err_xa_erase:
> + WRITE_ONCE(rxq->mp_params.mp_ops, NULL);
> + WRITE_ONCE(rxq->mp_params.mp_priv, NULL);
> > + xa_erase(&binding->bound_rxq_list, xa_idx);
> +
> + return err;
> +}

...
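
One last note on the locking comment above: it implies the bind path runs
with the rtnl lock held, so a caller would look roughly like the sketch
below (assuming the netlink handler takes rtnl itself):

        rtnl_lock();
        err = net_devmem_bind_dmabuf_to_queue(dev, rxq_idx, binding);
        rtnl_unlock();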