Re: [PATCH v7 3/6] iio: core: Add new DMABUF interface infrastructure

2024-03-04 Thread Nuno
On Mon, 2024-03-04 at 13:44 +0100, Christian König wrote:
> Am 23.02.24 um 13:14 schrieb Nuno Sa:
> > From: Paul Cercueil 
> > 
> > Add the necessary infrastructure to the IIO core to support a new
> > optional DMABUF based interface.
> > 
> > With this new interface, DMABUF objects (externally created) can be
> > attached to a IIO buffer, and subsequently used for data transfer.
> > 
> > A userspace application can then use this interface to share DMABUF
> > objects between several interfaces, allowing it to transfer data in a
> > zero-copy fashion, for instance between IIO and the USB stack.
> > 
> > The userspace application can also memory-map the DMABUF objects, and
> > access the sample data directly. The advantage of doing this vs. the
> > read() interface is that it avoids an extra copy of the data between the
> > kernel and userspace. This is particularly useful for high-speed
> > devices which produce several megabytes or even gigabytes of data per
> > second.
> > 
> > As part of the interface, 3 new IOCTLs have been added:
> > 
> > IIO_BUFFER_DMABUF_ATTACH_IOCTL(int fd):
> >   Attach the DMABUF object identified by the given file descriptor to the
> >   buffer.
> > 
> > IIO_BUFFER_DMABUF_DETACH_IOCTL(int fd):
> >   Detach the DMABUF object identified by the given file descriptor from
> >   the buffer. Note that closing the IIO buffer's file descriptor will
> >   automatically detach all previously attached DMABUF objects.
> > 
> > IIO_BUFFER_DMABUF_ENQUEUE_IOCTL(struct iio_dmabuf *):
> >   Request a data transfer to/from the given DMABUF object. Its file
> >   descriptor, as well as the transfer size and flags are provided in the
> >   "iio_dmabuf" structure.
> > 
> > These three IOCTLs have to be performed on the IIO buffer's file
> > descriptor, obtained using the IIO_BUFFER_GET_FD_IOCTL() ioctl.
> 
> A few nit picks and one bug below, apart from that looks good to me.

Hi Christian,

Thanks for looking at it. I'll just add a comment on the bug below; for the
other points I hope Paul is able to step in and reply. My assumption (which
seems to have been wrong) was that the dmabuf bits were already good to go, as
they should be pretty similar to the USB part of it.

...

> 
> > +   if (dma_to_ram) {
> > +   /*
> > +* If we're writing to the DMABUF, make sure we don't have
> > +* readers
> > +*/
> > +   retl = dma_resv_wait_timeout(dmabuf->resv,
> > +    DMA_RESV_USAGE_READ, true,
> > +    timeout);
> > +   if (retl == 0)
> > +   retl = -EBUSY;
> > +   if (retl < 0) {
> > +   ret = (int)retl;
> > +   goto err_resv_unlock;
> > +   }
> > +   }
> > +
> > +   if (buffer->access->lock_queue)
> > +   buffer->access->lock_queue(buffer);
> > +
> > +   ret = dma_resv_reserve_fences(dmabuf->resv, 1);
> > +   if (ret)
> > +   goto err_queue_unlock;
> > +
> > +   dma_resv_add_fence(dmabuf->resv, &fence->base,
> > +      dma_resv_usage_rw(dma_to_ram));
> 
> That is incorrect use of the function dma_resv_usage_rw(). That function 
> is for retrieving fences and not adding them.
> 
> See the function implementation and comments, when you use it like this 
> you get exactly what you don't want.
> 

Does that mean that the USB stuff [1] is also wrong?

[1]: 
https://git.kernel.org/pub/scm/linux/kernel/git/gregkh/usb.git/tree/drivers/usb/gadget/function/f_fs.c?h=usb-next#n1669

- Nuno Sá




Re: [PATCH v7 1/6] dmaengine: Add API function dmaengine_prep_peripheral_dma_vec()

2024-03-04 Thread Nuno
On Fri, 2024-02-23 at 13:13 +0100, Nuno Sa wrote:
> From: Paul Cercueil 
> 
> This function can be used to initiate a scatter-gather DMA transfer,
> where the address and size of each segment is located in one entry of
> the dma_vec array.
> 
> The major difference with dmaengine_prep_slave_sg() is that it supports
> specifying the lengths of each DMA transfer; as trying to override the
> length of the transfer with dmaengine_prep_slave_sg() is a very tedious
> process. The introduction of a new API function is also justified by the
> fact that scatterlists are on their way out.
> 
> Note that dmaengine_prep_interleaved_dma() is not helpful either in that
> case, as it assumes that the address of each segment will be higher than
> the one of the previous segment, which we just cannot guarantee in case
> of a scatter-gather transfer.
> 
> Signed-off-by: Paul Cercueil 
> Signed-off-by: Nuno Sa 
> ---

Hi Vinod,

Is this already good for you? I don't want to be pushy, but we're trying to see
if we can have this in the 6.9 cycle, and Jonathan definitely wants an ack from
you before merging this into his tree. I have more or less until Wednesday,
which is why I'm asking today already, so I still have time to re-spin if you
want some changes.

- Nuno Sá




Re: [PATCH v7 0/6] iio: new DMABUF based API

2024-03-03 Thread Nuno
On Sun, 2024-03-03 at 17:42 +, Jonathan Cameron wrote:
> On Fri, 23 Feb 2024 13:13:58 +0100
> Nuno Sa  wrote:
> 
> > Hi Jonathan, likely you're wondering why I'm sending v7. Well, to be
> > honest, we're hoping to get this merged for the 6.9 merge window.
> > The main reason is that the USB part is already in (so it would be nice
> > to get the whole thing in). Moreover, the changes asked for in v6 were simple
> > (even though I'm not quite sure about one of them) and Paul has no access to
> > his laptop, so he can't send v7 himself. So he kind of asked me to do it.
> 
> So, we are cutting this very fine. If Linus hints strongly at an rc8 maybe we
> can sneak this in. However, I need an Ack from Vinod for the dma engine
> changes first.
> 
> Also I'd love a final 'looks ok' comment from DMABUF folk (Ack even better!)
> 
> Seems that the other side got resolved in the USB gadget, but the last we
> heard from Daniel and Christian looks to have been back on v5. I'd like them
> to confirm they are fine with the changes made as a result.
> 

I can ask Christian or Daniel for some acks but my feeling (I still need, at
some point, to get really familiar with all of this) is that this should be
pretty similar to the USB series (from a DMABUF point of view) as they are both
importers.

> I've been happy with the IIO parts for a few versions now but my ability to
> review
> the DMABUF and DMA engine bits is limited.
> 
> A realistic path to get this in: rc8 happens, all Acks are in place by
> Wednesday, I apply it and it hits linux-next on Thursday, the pull request
> goes to Greg on Saturday, and Greg is feeling particularly generous and takes
> one on the day he normally closes his trees.
> 

Well, it looks like we still have a shot. I'll try to see if Vinod is fine with
the DMAENGINE stuff.

- Nuno Sá



Re: [PATCH v5 7/8] iio: buffer-dmaengine: Support new DMABUF based userspace API

2023-12-22 Thread Nuno
On Thu, 2023-12-21 at 18:30 +0100, Paul Cercueil wrote:
> Hi Jonathan,
> 
> Le jeudi 21 décembre 2023 à 16:12 +, Jonathan Cameron a écrit :
> > On Tue, 19 Dec 2023 18:50:08 +0100
> > Paul Cercueil  wrote:
> > 
> > > Use the functions provided by the buffer-dma core to implement the
> > > DMABUF userspace API in the buffer-dmaengine IIO buffer
> > > implementation.
> > > 
> > > Since we want to be able to transfer an arbitrary number of bytes
> > > and
> > > not necessarily the full DMABUF, the associated scatterlist is
> > > converted
> > > to an array of DMA addresses + lengths, which is then passed to
> > > dmaengine_prep_slave_dma_array().
> > > 
> > > Signed-off-by: Paul Cercueil 
> > One question inline. Otherwise looks fine to me.
> > 
> > J
> > > 
> > > ---
> > > v3: Use the new dmaengine_prep_slave_dma_array(), and adapt the
> > > code to
> > >     work with the new functions introduced in industrialio-buffer-
> > > dma.c.
> > > 
> > > v5: - Use the new dmaengine_prep_slave_dma_vec().
> > >     - Restrict to input buffers, since output buffers are not yet
> > >   supported by IIO buffers.
> > > ---
> > >  .../buffer/industrialio-buffer-dmaengine.c    | 52
> > > ---
> > >  1 file changed, 46 insertions(+), 6 deletions(-)
> > > 
> > > diff --git a/drivers/iio/buffer/industrialio-buffer-dmaengine.c
> > > b/drivers/iio/buffer/industrialio-buffer-dmaengine.c
> > > index 5f85ba38e6f6..825d76a24a67 100644
> > > --- a/drivers/iio/buffer/industrialio-buffer-dmaengine.c
> > > +++ b/drivers/iio/buffer/industrialio-buffer-dmaengine.c
> > > @@ -64,15 +64,51 @@ static int
> > > iio_dmaengine_buffer_submit_block(struct iio_dma_buffer_queue
> > > *queue,
> > > struct dmaengine_buffer *dmaengine_buffer =
> > > iio_buffer_to_dmaengine_buffer(&queue->buffer);
> > > struct dma_async_tx_descriptor *desc;
> > > +   unsigned int i, nents;
> > > +   struct scatterlist *sgl;
> > > +   struct dma_vec *vecs;
> > > +   size_t max_size;
> > > dma_cookie_t cookie;
> > > +   size_t len_total;
> > >  
> > > -   block->bytes_used = min(block->size, dmaengine_buffer->max_size);
> > > -   block->bytes_used = round_down(block->bytes_used,
> > > -   dmaengine_buffer->align);
> > > +   if (queue->buffer.direction != IIO_BUFFER_DIRECTION_IN) {
> > > +   /* We do not yet support output buffers. */
> > > +   return -EINVAL;
> > > +   }
> > >  
> > > -   desc = dmaengine_prep_slave_single(dmaengine_buffer->chan,
> > > -   block->phys_addr, block->bytes_used,
> > > DMA_DEV_TO_MEM,
> > > -   DMA_PREP_INTERRUPT);
> > > +   if (block->sg_table) {
> > > +   sgl = block->sg_table->sgl;
> > > +   nents = sg_nents_for_len(sgl, block->bytes_used);
> > 
> > Are we guaranteed the length in the sglist is enough?  If not this
> > can return an error code.
> 
> The length of the sglist will always be enough; the
> iio_buffer_enqueue_dmabuf() function already checks that
> block->bytes_used is equal to or smaller than the size of the DMABUF.
> 
> It is quite a few functions above in the call stack though, so I can
> handle the errors of sg_nents_for_len() here if you think it makes sense.

Maybe put something like the above in a comment?

- Nuno Sá




Re: [PATCH v5 6/8] iio: buffer-dma: Enable support for DMABUFs

2023-12-22 Thread Nuno
io_dma_buffer_block
> > *block)
> > _iio_dma_buffer_block_done(block);
> > spin_unlock_irqrestore(&queue->list_lock, flags);
> >  
> > +   if (!block->fileio)
> > +   iio_buffer_signal_dmabuf_done(block->attach, 0);
> > +
> > iio_buffer_block_put_atomic(block);
> > wake_up_interruptible_poll(&queue->buffer.pollq, EPOLLIN | 
> > EPOLLRDNORM);
> >  }
> > @@ -237,10 +257,14 @@ void iio_dma_buffer_block_list_abort(struct
> > iio_dma_buffer_queue *queue,
> > list_del(&block->head);
> > block->bytes_used = 0;
> > _iio_dma_buffer_block_done(block);
> > +
> > +   if (!block->fileio)
> > +   iio_buffer_signal_dmabuf_done(block->attach, 
> > -EINTR);
> > iio_buffer_block_put_atomic(block);
> > }
> > spin_unlock_irqrestore(&queue->list_lock, flags);
> >  
> > +   queue->fileio.enabled = false;
> 
> While this obviously doesn't need to be conditional if it can already be false,
> it might be easier to follow the code flow if it didn't check whether we were
> doing fileio or not before disabling it.
> 
> > wake_up_interruptible_poll(&queue->buffer.pollq, EPOLLIN | 
> > EPOLLRDNORM);
> >  }
> >  EXPORT_SYMBOL_GPL(iio_dma_buffer_block_list_abort);
> > @@ -261,6 +285,12 @@ static bool iio_dma_block_reusable(struct
> > iio_dma_buffer_block *block)
> > }
> >  }
> >  
> > +static bool iio_dma_buffer_fileio_mode(struct iio_dma_buffer_queue *queue)
> > +{
> > +   return queue->fileio.enabled ||
> > +   queue->num_blocks == queue->num_fileio_blocks;
> This is a little odd, so it would be good to have a comment on why this condition,
> or to rename the function to imply it's checking whether fileio is enabled, or can be enabled.
> 
> At first glanced I expected a function with this name to just be an accessor
> function. e.g.
> return queue->fileio.enabled;
> 
> > +}
> > +
> >  /**
> >   * iio_dma_buffer_request_update() - DMA buffer request_update callback
> >   * @buffer: The buffer which to request an update
> > @@ -287,6 +317,12 @@ int iio_dma_buffer_request_update(struct iio_buffer 
> > *buffer)
> >  
> > mutex_lock(&queue->lock);
> >  
> > +   queue->fileio.enabled = iio_dma_buffer_fileio_mode(queue);
> > +
> > +   /* If DMABUFs were created, disable fileio interface */
> > +   if (!queue->fileio.enabled)
> > +   goto out_unlock;
> > +
> > /* Allocations are page aligned */
> > if (PAGE_ALIGN(queue->fileio.block_size) == PAGE_ALIGN(size))
> > try_reuse = true;
> > @@ -317,7 +353,7 @@ int iio_dma_buffer_request_update(struct iio_buffer 
> > *buffer)
> > block = queue->fileio.blocks[i];
> > if (block->state == IIO_BLOCK_STATE_DEAD) {
> > /* Could not reuse it */
> > -   iio_buffer_block_put(block);
> > +   iio_buffer_block_put_atomic(block);
> > block = NULL;
> > } else {
> > block->size = size;
> > @@ -327,7 +363,7 @@ int iio_dma_buffer_request_update(struct iio_buffer 
> > *buffer)
> > }
> >  
> > if (!block) {
> > -   block = iio_dma_buffer_alloc_block(queue, size);
> > +   block = iio_dma_buffer_alloc_block(queue, size, 
> > true);
> > if (!block) {
> > ret = -ENOMEM;
> > goto out_unlock;
> > @@ -363,7 +399,7 @@ static void iio_dma_buffer_fileio_free(struct
> > iio_dma_buffer_queue *queue)
> > for (i = 0; i < ARRAY_SIZE(queue->fileio.blocks); i++) {
> > if (!queue->fileio.blocks[i])
> > continue;
> > -   iio_buffer_block_put(queue->fileio.blocks[i]);
> > +   iio_buffer_block_put_atomic(queue->fileio.blocks[i]);
> 
> For these cases that are atomic or not, it might be worth calling out why 
> they need
> to be
> atomic.
> 
> > queue->fileio.blocks[i] = NULL;
> > }
> > queue->fileio.active_block = NULL;
> > @@ -384,8 +420,12 @@ static void iio_dma_buffer_submit_block(struct
> > iio_dma_buffer_queue *queue,
> >  
> > block->state = IIO_BLOCK_STATE_ACTIVE;
> > iio_buffer_block_get(block);
> > +
> > ret = queue->ops->submit(queue, block);
> > if (ret) {
> > +   if (!block->fileio)
> > +   iio_buffer_signal_dmabuf_done(block->attach, ret);
> > +
> > /*
> >  * This is a bit of a problem and there is not much we can 
> > do
> >  * other then wait for the buffer to be disabled and 
> > re-enabled
> > @@ -588,6 +628,97 @@ size_t iio_dma_buffer_data_available(struct iio_buffer 
> > *buf)
> >  }
> >  EXPORT_SYMBOL_GPL(iio_dma_buffer_data_available);
> >  
> > +struct iio_dma_buffer_block *
> > +iio_dma_buffer_attach_dmabuf(struct iio_buffer *buffer,
> > +    struct dma_buf_attachment *attach)
> > +{
> > +   struct iio_dma_buffer_queue *queue = iio_buffer_to_queue(buffer);
> > +   struct iio_dma_buffer_block *block;
> > +   int err;
> > +
> > +   mutex_lock(&queue->lock);
> 
> guard(mutex)(&queue->lock);

I'm also a big fan of this new stuff, but shouldn't we have a prep patch making
the transition to it? Otherwise we'll end up with a mix of styles. I'm happy to
clean up stuff with follow-up patches (even some coding style could be improved,
IIRC), but typically that's not how we handle things like this, I believe...

- Nuno Sá
> 



Re: [PATCH v3] drm: adv7511: Fix low refresh rate register for ADV7533/5

2023-07-18 Thread Nuno
On Tue, 2023-07-18 at 14:45 +0300, Alexandru Ardelean wrote:
> On Tue, Jul 18, 2023 at 11:50 AM Robert Foss  wrote:
> > 
> > On Tue, Jul 18, 2023 at 10:42 AM Alexandru Ardelean 
> > wrote:
> > > 
> > > From: Bogdan Togorean 
> > > 
> > > For ADV7533 and ADV7535 low refresh rate is selected using
> > > bits [3:2] of 0x4a main register.
> > > So depending on ADV model write 0xfb or 0x4a register.
> > > 
> > > Fixes: 2437e7cd88e8 ("drm/bridge: adv7533: Initial support for ADV7533")
> > > Reviewed-by: Nuno Sa 
> > > Signed-off-by: Bogdan Togorean 
> > > Signed-off-by: Alexandru Ardelean 
> > > ---
> > > 
> > > Changelog v2 -> v3:
> > > *
> > > https://lore.kernel.org/dri-devel/1c3fde3a873b0f948d3c4d37107c5bb67dc9f7bb.ca...@gmail.com/T/#u
> > > * Added my S-o-b tag back
> > > 
> > > Changelog v1 -> v2:
> > > *
> > > https://lore.kernel.org/dri-devel/20190716131005.761-1-bogdan.togor...@analog.com/
> > > * added Fixes: tag
> > > * added Reviewed-by: tag for Nuno
> > > 
> > > 
> > >  drivers/gpu/drm/bridge/adv7511/adv7511_drv.c | 9 +++--
> > >  1 file changed, 7 insertions(+), 2 deletions(-)
> > > 
> > > diff --git a/drivers/gpu/drm/bridge/adv7511/adv7511_drv.c
> > > b/drivers/gpu/drm/bridge/adv7511/adv7511_drv.c
> > > index ddceafa7b637..09290a377957 100644
> > > --- a/drivers/gpu/drm/bridge/adv7511/adv7511_drv.c
> > > +++ b/drivers/gpu/drm/bridge/adv7511/adv7511_drv.c
> > > @@ -786,8 +786,13 @@ static void adv7511_mode_set(struct adv7511 *adv7511,
> > >     else
> > >     low_refresh_rate = ADV7511_LOW_REFRESH_RATE_NONE;
> > > 
> > > -   regmap_update_bits(adv7511->regmap, 0xfb,
> > > -   0x6, low_refresh_rate << 1);
> > > +   if (adv7511->type == ADV7511)
> > > +   regmap_update_bits(adv7511->regmap, 0xfb,
> > > +   0x6, low_refresh_rate << 1);
> > > +   else
> > > +   regmap_update_bits(adv7511->regmap, 0x4a,
> > > +   0xc, low_refresh_rate << 2);
> > > +
> > >     regmap_update_bits(adv7511->regmap, 0x17,
> > >     0x60, (vsync_polarity << 6) | (hsync_polarity << 5));
> > > 
> > > --
> > > 2.41.0
> > > 
> > 
> > This looks good, but I'm seeing some checkpatch style warnings, with
> > those fixed feel free to add my r-b.
> 
> Thanks.
> Will do.
> May I ask what options you are using for checkpatch.pl?

'--strict' should trigger those CHECKS...

Cool enough (or not), it seems the option is not really listed when you type
--help.


- Nuno Sá


Re: [PATCH V2] drm: adv7511: Fix low refresh rate register for ADV7533/5

2023-07-18 Thread Nuno
On Tue, 2023-07-18 at 09:28 +0300, Alexandru Ardelean wrote:
> From: Bogdan Togorean 
> 
> For ADV7533 and ADV7535 low refresh rate is selected using
> bits [3:2] of 0x4a main register.
> So depending on ADV model write 0xfb or 0x4a register.
> 
> Fixes: 2437e7cd88e8 ("drm/bridge: adv7533: Initial support for ADV7533")
> Reviewed-by: Nuno Sa 
> Signed-off-by: Bogdan Togorean 
> ---
> 

It looks like you dropped your S-o-b tag in v2? I guess it was not on purpose :)

- Nuno Sá

> Changelog v1 -> v2:
> *
> https://lore.kernel.org/dri-devel/20190716131005.761-1-bogdan.togor...@analog.com/
> * added Fixes: tag
> * added Reviewed-by: tag for Nuno
> 
>  drivers/gpu/drm/bridge/adv7511/adv7511_drv.c | 9 +++--
>  1 file changed, 7 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/gpu/drm/bridge/adv7511/adv7511_drv.c
> b/drivers/gpu/drm/bridge/adv7511/adv7511_drv.c
> index ddceafa7b637..09290a377957 100644
> --- a/drivers/gpu/drm/bridge/adv7511/adv7511_drv.c
> +++ b/drivers/gpu/drm/bridge/adv7511/adv7511_drv.c
> @@ -786,8 +786,13 @@ static void adv7511_mode_set(struct adv7511 *adv7511,
> else
> low_refresh_rate = ADV7511_LOW_REFRESH_RATE_NONE;
>  
> -   regmap_update_bits(adv7511->regmap, 0xfb,
> -   0x6, low_refresh_rate << 1);
> +   if (adv7511->type == ADV7511)
> +   regmap_update_bits(adv7511->regmap, 0xfb,
> +   0x6, low_refresh_rate << 1);
> +   else
> +   regmap_update_bits(adv7511->regmap, 0x4a,
> +   0xc, low_refresh_rate << 2);
> +
> regmap_update_bits(adv7511->regmap, 0x17,
> 0x60, (vsync_polarity << 6) | (hsync_polarity << 5));
>  



Re: [PATCH] drm: adv7511: Fix low refresh rate register for ADV7533/5

2023-07-14 Thread Nuno
Hey Alex,

On Thu, 2023-07-13 at 17:19 -0300, Fabio Estevam wrote:
> On Wed, May 17, 2023 at 4:08 AM Alexandru Ardelean  wrote:
> > 
> > From: Bogdan Togorean 
> > 
> > For ADV7533 and ADV7535 low refresh rate is selected using
> > bits [3:2] of 0x4a main register.
> > So depending on ADV model write 0xfb or 0x4a register.
> > 
> > Signed-off-by: Bogdan Togorean 
> > Signed-off-by: Alexandru Ardelean 
> 
> Should this contain a Fixes tag so that it could be backported to
> stable kernels?

Yeah, most likely yes... With that:

Reviewed-by: Nuno Sa 




Re: [PATCH v3 07/11] iio: core: Add new DMABUF interface infrastructure

2023-04-04 Thread Nuno
> +   return "iio";
> +}
> +
> +static void iio_buffer_dma_fence_release(struct dma_fence *fence)
> +{
> +   struct iio_dma_fence *iio_fence = to_iio_dma_fence(fence);
> +
> +   kfree(iio_fence);
> +}
> +
> +static const struct dma_fence_ops iio_buffer_dma_fence_ops = {
> +   .get_driver_name=
> iio_buffer_dma_fence_get_driver_name,
> +   .get_timeline_name  =
> iio_buffer_dma_fence_get_driver_name,
> +   .release= iio_buffer_dma_fence_release,
> +};
> +
> +static int iio_buffer_enqueue_dmabuf(struct iio_dev_buffer_pair *ib,
> +    struct iio_dmabuf __user
> *iio_dmabuf_req,
> +    bool nonblock)
> +{
> +   struct iio_buffer *buffer = ib->buffer;
> +   struct iio_dmabuf iio_dmabuf;
> +   struct dma_buf_attachment *attach;
> +   struct iio_dmabuf_priv *priv;
> +   enum dma_data_direction dir;
> +   struct iio_dma_fence *fence;
> +   struct dma_buf *dmabuf;
> +   struct sg_table *sgt;
> +   unsigned long timeout;
> +   bool dma_to_ram;
> +   bool cyclic;
> +   int ret;
> +
> +   if (copy_from_user(&iio_dmabuf, iio_dmabuf_req,
> sizeof(iio_dmabuf)))
> +   return -EFAULT;
> +
> +   if (iio_dmabuf.flags & ~IIO_BUFFER_DMABUF_SUPPORTED_FLAGS)
> +   return -EINVAL;
> +
> +   cyclic = iio_dmabuf.flags & IIO_BUFFER_DMABUF_CYCLIC;
> +
> +   /* Cyclic flag is only supported on output buffers */
> +   if (cyclic && buffer->direction != IIO_BUFFER_DIRECTION_OUT)
> +   return -EINVAL;
> +
> +   dmabuf = dma_buf_get(iio_dmabuf.fd);
> +   if (IS_ERR(dmabuf))
> +   return PTR_ERR(dmabuf);
> +
> +   if (!iio_dmabuf.bytes_used || iio_dmabuf.bytes_used > dmabuf->size) {
> +   ret = -EINVAL;
> +   goto err_dmabuf_put;
> +   }
> +
> +   attach = iio_buffer_find_attachment(ib->indio_dev, dmabuf);
> +   if (IS_ERR(attach)) {
> +   ret = PTR_ERR(attach);
> +   goto err_dmabuf_put;
> +   }
> +
> +   priv = attach->importer_priv;
> +
> +   dma_to_ram = buffer->direction == IIO_BUFFER_DIRECTION_IN;
> +   dir = dma_to_ram ? DMA_FROM_DEVICE : DMA_TO_DEVICE;
> +
> +   sgt = dma_buf_map_attachment(attach, dir);
> +   if (IS_ERR(sgt)) {
> +   ret = PTR_ERR(sgt);
> +   pr_err("Unable to map attachment: %d\n", ret);

dev_err()? We should be able to reach the iio_dev

> +   goto err_attachment_put;
> +   }
> +
> +   fence = kmalloc(sizeof(*fence), GFP_KERNEL);
> +   if (!fence) {
> +   ret = -ENOMEM;
> +   goto err_unmap_attachment;
> +   }
> +
> 

...

>  static const struct file_operations iio_buffer_chrdev_fileops = {
> .owner = THIS_MODULE,
> .llseek = noop_llseek,
> .read = iio_buffer_read,
> .write = iio_buffer_write,
> +   .unlocked_ioctl = iio_buffer_chrdev_ioctl,
> +   .compat_ioctl = compat_ptr_ioctl,
> .poll = iio_buffer_poll,
> .release = iio_buffer_chrdev_release,
>  };

Hmmm, what about the legacy buffer? We should also support this
interface using it, right? Otherwise, using one of the new IOCTLs in
iio_device_buffer_ioctl() (on /dev/iio:device0) will error out.

- Nuno Sá



Re: [PATCH v3 07/11] iio: core: Add new DMABUF interface infrastructure

2023-04-04 Thread Nuno
On Tue, 2023-04-04 at 09:55 +0200, Paul Cercueil wrote:
> Hi Nuno,
> 
> Le mardi 04 avril 2023 à 09:32 +0200, Nuno Sá a écrit :
> > On Mon, 2023-04-03 at 17:47 +0200, Paul Cercueil wrote:
> > > Add the necessary infrastructure to the IIO core to support a new
> > > optional DMABUF based interface.
> > > 
> > > With this new interface, DMABUF objects (externally created) can be
> > > attached to a IIO buffer, and subsequently used for data transfer.
> > > 
> > > A userspace application can then use this interface to share DMABUF
> > > objects between several interfaces, allowing it to transfer data in
> > > a
> > > zero-copy fashion, for instance between IIO and the USB stack.
> > > 
> > > The userspace application can also memory-map the DMABUF objects,
> > > and
> > > access the sample data directly. The advantage of doing this vs.
> > > the
> > > read() interface is that it avoids an extra copy of the data
> > > between
> > > the
> > > kernel and userspace. This is particularly useful for high-speed
> > > devices which produce several megabytes or even gigabytes of data
> > > per
> > > second.
> > > 
> > > As part of the interface, 3 new IOCTLs have been added:
> > > 
> > > IIO_BUFFER_DMABUF_ATTACH_IOCTL(int fd):
> > >  Attach the DMABUF object identified by the given file descriptor
> > > to
> > > the
> > >  buffer.
> > > 
> > > IIO_BUFFER_DMABUF_DETACH_IOCTL(int fd):
> > >  Detach the DMABUF object identified by the given file descriptor
> > > from
> > >  the buffer. Note that closing the IIO buffer's file descriptor
> > > will
> > >  automatically detach all previously attached DMABUF objects.
> > > 
> > > IIO_BUFFER_DMABUF_ENQUEUE_IOCTL(struct iio_dmabuf *):
> > >  Request a data transfer to/from the given DMABUF object. Its file
> > >  descriptor, as well as the transfer size and flags are provided in
> > > the
> > >  "iio_dmabuf" structure.
> > > 
> > > These three IOCTLs have to be performed on the IIO buffer's file
> > > descriptor, obtained using the IIO_BUFFER_GET_FD_IOCTL() ioctl.
> > > 
> > > Signed-off-by: Paul Cercueil 
> > > 
> > > ---
> > > v2: Only allow the new IOCTLs on the buffer FD created with
> > >     IIO_BUFFER_GET_FD_IOCTL().
> > > 
> > > v3: - Get rid of the old IOCTLs. The IIO subsystem does not create
> > > or
> > >     manage DMABUFs anymore, and only attaches/detaches externally
> > >     created DMABUFs.
> > >     - Add IIO_BUFFER_DMABUF_CYCLIC to the supported flags.
> > > ---
> > >  drivers/iio/industrialio-buffer.c | 402
> > > ++
> > >  include/linux/iio/buffer_impl.h   |  22 ++
> > >  include/uapi/linux/iio/buffer.h   |  22 ++
> > >  3 files changed, 446 insertions(+)
> > > 
> > > diff --git a/drivers/iio/industrialio-buffer.c
> > > b/drivers/iio/industrialio-buffer.c
> > > index 80c78bd6bbef..5d88e098b3e7 100644
> > > --- a/drivers/iio/industrialio-buffer.c
> > > +++ b/drivers/iio/industrialio-buffer.c
> > > @@ -13,10 +13,14 @@
> > >  #include 
> > >  #include 
> > >  #include 
> > > +#include 
> > > +#include 
> > > +#include 
> > >  #include 
> > >  #include 
> > >  #include 
> > >  #include 
> > > +#include 
> > >  #include 
> > >  #include 
> > >  
> > > @@ -28,11 +32,41 @@
> > >  #include 
> > >  #include 
> > >  
> > > +#define DMABUF_ENQUEUE_TIMEOUT_MS 5000
> > > +
> > > +struct iio_dma_fence;
> > > +
> > > +struct iio_dmabuf_priv {
> > > +   struct list_head entry;
> > > +   struct kref ref;
> > > +
> > > +   struct iio_buffer *buffer;
> > > +   struct iio_dma_buffer_block *block;
> > > +
> > > +   u64 context;
> > > +   spinlock_t lock;
> > > +
> > > +   struct dma_buf_attachment *attach;
> > > +   struct iio_dma_fence *fence;
> > > +};
> > > +
> > > +struct iio_dma_fence {
> > > +   struct dma_fence base;
> > > +   struct iio_dmabuf_priv *priv;
> > > +   struct sg_table *sgt;
> > > +   enum dma_data_direction dir;
> > > +};
> > >

Re: [PATCH v3 00/11] iio: new DMABUF based API, v3

2023-04-04 Thread Nuno
On Mon, 2023-04-03 at 17:47 +0200, Paul Cercueil wrote:
> Hi Jonathan,
> 
> Here's the v3 of my patchset that introduces a new interface based on
> DMABUF objects to complement the fileio API, and adds write() support to
> the existing fileio API.
> 
> It changed quite a lot since V2; the IIO subsystem is now just a DMABUF
> importer, and all the complexity related to handling creation, deletion
> and export of DMABUFs (including DMA mapping etc.) is gone.
> 
> This new interface will be used by Libiio. The code is ready[1] and will
> be merged to the main branch as soon as the kernel bits are accepted
> upstream.
> 
> Note that Libiio (and its server counterpart, iiod) use this new
> interface in two different ways:
> - by memory-mapping the DMABUFs to access the sample data directly,
>   which is much faster than using the existing fileio API as the sample
>   data does not need to be copied;
> - by passing the DMABUFs around directly to the USB stack, in a
>   device-to-device zero-copy fashion, using a new DMABUF interface for
>   the USB (FunctionFS to be exact) stack, which is being upstreamed in
>   parallel of this patchset [2].
> 
> As for write() support, Nuno (Cc'd) said he will work on upstreaming the
> DAC counterpart of adc/adi-axi-adc.c in the next few weeks, so there
> will be a user for the buffer write() support. I hope you are okay with
> this - otherwise, we can just wait until this work is done, and I still
> benefit from sending this patchset early to get feedback.
> 

Indeed, I already started a discussion [1], since what we have now for
adc/adi-axi-adc.c has some major flaws (IMHO). So I'm hoping to get some
feedback/discussion so we get things "righter" from the beginning.


[1]: 
https://lore.kernel.org/linux-iio/dac3967805d7ddbd4653ead6d50e614844e0b70b.ca...@gmail.com/T/#u

- Nuno Sá



Re: [PATCH v2 02/12] iio: buffer-dma: Enable buffer write support

2022-03-29 Thread Nuno
On Mon, 2022-03-28 at 19:39 +0100, Paul Cercueil wrote:
> Hi Jonathan,
> 
> Le lun., mars 28 2022 at 18:24:09 +0100, Jonathan Cameron 
>  a écrit :
> > On Mon,  7 Feb 2022 12:59:23 +
> > Paul Cercueil  wrote:
> > 
> > >  Adding write support to the buffer-dma code is easy - the
> > > write()
> > >  function basically needs to do the exact same thing as the
> > > read()
> > >  function: dequeue a block, read or write the data, enqueue the
> > > block
> > >  when entirely processed.
> > > 
> > >  Therefore, the iio_buffer_dma_read() and the new 
> > > iio_buffer_dma_write()
> > >  now both call a function iio_buffer_dma_io(), which will perform
> > > this
> > >  task.
> > > 
> > >  The .space_available() callback can return the exact same value
> > > as 
> > > the
> > >  .data_available() callback for input buffers, since in both
> > > cases we
> > >  count the exact same thing (the number of bytes in each
> > > available
> > >  block).
> > > 
> > >  Note that we preemptively reset block->bytes_used to the
> > > buffer's 
> > > size
> > >  in iio_dma_buffer_request_update(), as in the future the
> > >  iio_dma_buffer_enqueue() function won't reset it.
> > > 
> > >  v2: - Fix block->state not being reset in
> > >    iio_dma_buffer_request_update() for output buffers.
> > >  - Only update block->bytes_used once and add a comment about
> > > why we
> > >    update it.
> > >  - Add a comment about why we're setting a different state
> > > for 
> > > output
> > >    buffers in iio_dma_buffer_request_update()
> > >  - Remove useless cast to bool (!!) in iio_dma_buffer_io()
> > > 
> > >  Signed-off-by: Paul Cercueil 
> > >  Reviewed-by: Alexandru Ardelean 
> > One comment inline.
> > 
> > I'd be tempted to queue this up with that fixed, but do we have
> > any users?  Even though it's trivial I'm not that keen on code
> > upstream well in advance of it being used.
> 
> There's a userspace user in libiio. On the kernel side we do have 
> drivers that use it in ADI's downstream kernel, that we plan to 
> upstream in the long term (but it can take some time, as we need to 
> upstream other things first, like JESD204B support).
> 
> 

You mean, users for DMA output buffers? If so, I have it on my queue to
add the DAC counterpart of this one:

https://elixir.bootlin.com/linux/latest/source/drivers/iio/adc/adi-axi-adc.c

Which is a user of DMA buffers. Though this one does not depend on
JESD204, I suspect it will also be a tricky process, mainly because I
think there are major issues with how things are done right now (in the
ADC driver).

But yeah, not a topic for here, and I do plan to first start the discussion
on the mailing list before starting development (hopefully in the coming
weeks)...

- Nuno Sá




[PATCH] dma-buf: return -EINVAL if dmabuf object is NULL

2021-08-18 Thread Nuno
On top of warning about a NULL object, we also want to return with a
proper error code (as done in 'dma_buf_begin_cpu_access()'). Otherwise,
we will get a NULL pointer dereference.

Fixes: fc13020e086b ("dma-buf: add support for kernel cpu access")
Signed-off-by: Nuno Sá 
---
 drivers/dma-buf/dma-buf.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/dma-buf/dma-buf.c b/drivers/dma-buf/dma-buf.c
index 63d32261b63f..8ec7876dd523 100644
--- a/drivers/dma-buf/dma-buf.c
+++ b/drivers/dma-buf/dma-buf.c
@@ -1231,7 +1231,8 @@ int dma_buf_end_cpu_access(struct dma_buf *dmabuf,
 {
int ret = 0;
 
-   WARN_ON(!dmabuf);
+   if (WARN_ON(!dmabuf))
+   return -EINVAL;
 
might_lock(&dmabuf->resv->lock.base);
 
-- 
2.32.0