Re: [PATCH v2 07/12] iio: buffer-dma: Use DMABUFs instead of custom solution

2022-03-28 Thread Andy Shevchenko
On Mon, Mar 28, 2022 at 11:30 PM Paul Cercueil  wrote:
> On Mon, Mar 28 2022 at 18:54:25 +0100, Jonathan Cameron wrote:
> > On Mon,  7 Feb 2022 12:59:28 +
> > Paul Cercueil  wrote:
> >
> >>  Enhance the current fileio code by using DMABUF objects instead of
> >>  custom buffers.
> >>
> >>  This adds more code than it removes, but:
> >>  - a lot of the complexity can be dropped, e.g. custom kref and
> >>iio_buffer_block_put_atomic() are not needed anymore;
> >>  - it will be much easier to introduce an API to export these DMABUF
> >>objects to userspace in a following patch.

> > I'm a bit rusty on dma mappings, but you seem to have
> > a mixture of streaming and coherent mappings going on in here.
>
> That's OK, so am I. What do you call "streaming mappings"?

dma_*_coherent() is for coherent mappings: you usually map once, and
cache coherency is guaranteed whether the memory is accessed by the
device or by the CPU.
dma_map_*() is for streaming mappings, where you map arbitrary pages
for the duration of a transfer. This is typically used when you want to
keep the previously transferred data around while handling newly
arriving data, or when the new data is supplied at a different virtual
address and hence has to be mapped for DMA each time.
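
As a minimal sketch of the two styles (illustrative only: "dev", "buf"
and "len" are placeholders, but the dma_* calls are the real DMA
mapping API):

    #include <linux/dma-mapping.h>

    /* Coherent mapping: allocate once; CPU and device can both access
     * the memory without explicit cache maintenance. */
    static void *coherent_example(struct device *dev, size_t len,
                                  dma_addr_t *dma)
    {
            return dma_alloc_coherent(dev, len, dma, GFP_KERNEL);
    }

    /* Streaming mapping: map an existing buffer for one transfer, and
     * unmap (or sync) it before the CPU looks at the data again. */
    static int streaming_example(struct device *dev, void *buf, size_t len)
    {
            dma_addr_t addr = dma_map_single(dev, buf, len, DMA_FROM_DEVICE);

            if (dma_mapping_error(dev, addr))
                    return -ENOMEM;

            /* ... start the DMA transfer and wait for completion ... */

            dma_unmap_single(dev, addr, len, DMA_FROM_DEVICE);
            return 0;
    }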

> > Is it the case that the current code is using the coherent mappings
> > and a potential 'other user' of the dma buffer might need
> > streaming mappings?
>
> Something like that. There are two different things; in both cases,
> userspace needs to create a DMABUF with IIO_BUFFER_DMABUF_ALLOC_IOCTL,
> and the backing memory is allocated with dma_alloc_coherent().
>
> - For the userspace interface, you then have a "cpu access" IOCTL
> (DMA_BUF_IOCTL_SYNC) that lets userspace signal when it starts and
> finishes processing the buffer in user-space (which syncs/invalidates
> the data cache if needed). A buffer can then be enqueued for DMA
> processing (TX or RX) with the new IIO_BUFFER_DMABUF_ENQUEUE_IOCTL.
>
> - When the DMABUF created via the IIO core is sent to another driver
> through the driver's custom DMABUF import function, this driver will
> call dma_buf_attach(), which will call iio_buffer_dma_buf_map(). Since
> it has to return a "struct sg_table *", this function then simply
> creates an sg_table with a single entry that points to the backing memory.

...

> >>  +   ret = dma_map_sgtable(at->dev, &dba->sg_table, dma_dir, 0);
> >>  +   if (ret) {
> >>  +   kfree(dba);
> >>  +   return ERR_PTR(ret);
> >>  +   }

Missed DMA mapping error check.

> >>  +
> >>  +   return &dba->sg_table;
> >>  +}

...

> >>  -   /* Must not be accessed outside the core. */
> >>  -   struct kref kref;


> >>  +   struct dma_buf *dmabuf;

Is it okay to access this outside the core now? If not, why did you
remove the comment instead of updating it?
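
(For illustration only, roughly what keeping the comment for the
replacement field might look like; not code from the patch:)

    struct iio_dma_buffer_block {
            /* ... other fields ... */

            /* Must not be accessed outside the core. */
            struct dma_buf *dmabuf;
    };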

-- 
With Best Regards,
Andy Shevchenko


Re: [PATCH v2 07/12] iio: buffer-dma: Use DMABUFs instead of custom solution

2022-03-28 Thread Paul Cercueil

Hi Jonathan,

On Mon, Mar 28 2022 at 18:54:25 +0100, Jonathan Cameron wrote:

On Mon,  7 Feb 2022 12:59:28 +
Paul Cercueil  wrote:


 Enhance the current fileio code by using DMABUF objects instead of
 custom buffers.

 This adds more code than it removes, but:
 - a lot of the complexity can be dropped, e.g. custom kref and
   iio_buffer_block_put_atomic() are not needed anymore;
 - it will be much easier to introduce an API to export these DMABUF
   objects to userspace in a following patch.

 Signed-off-by: Paul Cercueil 

Hi Paul,

I'm a bit rusty on dma mappings, but you seem to have
a mixture of streaming and coherent mappings going on in here.


That's OK, so am I. What do you call "streaming mappings"?


Is it the case that the current code is using the coherent mappings
and a potential 'other user' of the dma buffer might need
streaming mappings?


Something like that. There are two different things; in both cases,
userspace needs to create a DMABUF with IIO_BUFFER_DMABUF_ALLOC_IOCTL, 
and the backing memory is allocated with dma_alloc_coherent().


- For the userspace interface, you then have a "cpu access" IOCTL
(DMA_BUF_IOCTL_SYNC) that lets userspace signal when it starts and
finishes processing the buffer in user-space (which syncs/invalidates
the data cache if needed). A buffer can then be enqueued for DMA
processing (TX or RX) with the new IIO_BUFFER_DMABUF_ENQUEUE_IOCTL.
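
As a rough userspace sketch of that cpu-access flow (assuming "fd" is
the DMABUF file descriptor for the buffer; DMA_BUF_IOCTL_SYNC and
struct dma_buf_sync are the existing dma-buf UAPI, while the enqueue
step is only hinted at in a comment since its argument layout is
defined by this series):

    #include <sys/ioctl.h>
    #include <linux/dma-buf.h>

    /* Tell the kernel the CPU is about to write into the buffer. */
    static int begin_cpu_access(int fd)
    {
            struct dma_buf_sync sync = {
                    .flags = DMA_BUF_SYNC_START | DMA_BUF_SYNC_WRITE,
            };

            return ioctl(fd, DMA_BUF_IOCTL_SYNC, &sync);
    }

    /* Tell the kernel the CPU is done touching the buffer. */
    static int end_cpu_access(int fd)
    {
            struct dma_buf_sync sync = {
                    .flags = DMA_BUF_SYNC_END | DMA_BUF_SYNC_WRITE,
            };

            return ioctl(fd, DMA_BUF_IOCTL_SYNC, &sync);
    }

    /*
     * After end_cpu_access(), the buffer would be queued for TX/RX with
     * IIO_BUFFER_DMABUF_ENQUEUE_IOCTL (argument layout omitted here).
     */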


- When the DMABUF created via the IIO core is sent to another driver
through that driver's custom DMABUF import function, the driver will
call dma_buf_attach(), which will call iio_buffer_dma_buf_map(). Since
it has to return a "struct sg_table *", this function simply creates
an sg_table with a single entry that points to the backing memory.
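
On the importing driver's side, that path goes through the standard
dma-buf calls; a hedged sketch, with "importer_dev" and "dmabuf" as
placeholders:

    #include <linux/dma-buf.h>
    #include <linux/dma-mapping.h>
    #include <linux/err.h>

    /* Importer view: attach to the DMABUF, then map it to obtain the
     * single-entry sg_table built by iio_buffer_dma_buf_map(). */
    static int importer_map_example(struct device *importer_dev,
                                    struct dma_buf *dmabuf)
    {
            struct dma_buf_attachment *attach;
            struct sg_table *sgt;

            attach = dma_buf_attach(dmabuf, importer_dev);
            if (IS_ERR(attach))
                    return PTR_ERR(attach);

            sgt = dma_buf_map_attachment(attach, DMA_FROM_DEVICE);
            if (IS_ERR(sgt)) {
                    dma_buf_detach(dmabuf, attach);
                    return PTR_ERR(sgt);
            }

            /* ... program the DMA engine with sg_dma_address(sgt->sgl) ... */

            dma_buf_unmap_attachment(attach, sgt, DMA_FROM_DEVICE);
            dma_buf_detach(dmabuf, attach);
            return 0;
    }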


Note that I added the iio_buffer_dma_buf_map() / _unmap() functions 
because the dma-buf core would WARN() if these were not provided. But 
since this code doesn't yet support importing/exporting DMABUFs to 
other drivers, these are never called, and I should probably just make 
them return an ERR_PTR() unconditionally.
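
Roughly, that would turn into something like this (a sketch of the
idea, not code from the patch):

    static struct sg_table *
    iio_buffer_dma_buf_map(struct dma_buf_attachment *at,
                           enum dma_data_direction dma_dir)
    {
            /* Importing into other drivers is not supported yet. */
            return ERR_PTR(-EOPNOTSUPP);
    }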


Cheers,
-Paul


Jonathan


 ---
  drivers/iio/buffer/industrialio-buffer-dma.c | 192 ---
  include/linux/iio/buffer-dma.h   |   8 +-
  2 files changed, 122 insertions(+), 78 deletions(-)

 diff --git a/drivers/iio/buffer/industrialio-buffer-dma.c b/drivers/iio/buffer/industrialio-buffer-dma.c
 index 15ea7bc3ac08..54e6000cd2ee 100644
 --- a/drivers/iio/buffer/industrialio-buffer-dma.c
 +++ b/drivers/iio/buffer/industrialio-buffer-dma.c
 @@ -14,6 +14,7 @@
  #include 
  #include 
  #include 
 +#include 
  #include 
  #include 

 @@ -90,103 +91,145 @@
   * callback is called from within the custom callback.
   */

 -static void iio_buffer_block_release(struct kref *kref)
 -{
 -  struct iio_dma_buffer_block *block = container_of(kref,
 -  struct iio_dma_buffer_block, kref);
 -
 -  WARN_ON(block->state != IIO_BLOCK_STATE_DEAD);
 -
 -  dma_free_coherent(block->queue->dev, PAGE_ALIGN(block->size),
 -  block->vaddr, block->phys_addr);
 -
 -  iio_buffer_put(&block->queue->buffer);
 -  kfree(block);
 -}
 -
 -static void iio_buffer_block_get(struct iio_dma_buffer_block *block)
 -{
 -  kref_get(&block->kref);
 -}
 -
 -static void iio_buffer_block_put(struct iio_dma_buffer_block *block)
 -{
 -  kref_put(&block->kref, iio_buffer_block_release);
 -}
 -
 -/*
 - * dma_free_coherent can sleep, hence we need to take some special care to be
 - * able to drop a reference from an atomic context.
 - */
 -static LIST_HEAD(iio_dma_buffer_dead_blocks);
 -static DEFINE_SPINLOCK(iio_dma_buffer_dead_blocks_lock);
 -
 -static void iio_dma_buffer_cleanup_worker(struct work_struct *work)
 -{
 -  struct iio_dma_buffer_block *block, *_block;
 -  LIST_HEAD(block_list);
 -
 -  spin_lock_irq(&iio_dma_buffer_dead_blocks_lock);
 -  list_splice_tail_init(&iio_dma_buffer_dead_blocks, &block_list);
 -  spin_unlock_irq(&iio_dma_buffer_dead_blocks_lock);
 -
 -  list_for_each_entry_safe(block, _block, &block_list, head)
 -  iio_buffer_block_release(&block->kref);
 -}
 -static DECLARE_WORK(iio_dma_buffer_cleanup_work, iio_dma_buffer_cleanup_worker);
 -
 -static void iio_buffer_block_release_atomic(struct kref *kref)
 -{
 +struct iio_buffer_dma_buf_attachment {
 +  struct scatterlist sgl;
 +  struct sg_table sg_table;
struct iio_dma_buffer_block *block;
 -  unsigned long flags;
 -
 -  block = container_of(kref, struct iio_dma_buffer_block, kref);
 -
 -  spin_lock_irqsave(&iio_dma_buffer_dead_blocks_lock, flags);
 -  list_add_tail(&block->head, &iio_dma_buffer_dead_blocks);
 -  spin_unlock_irqrestore(&iio_dma_buffer_dead_blocks_lock, flags);
 -
 -  schedule_work(&iio_dma_buffer_cleanup_work);
 -}
 -
 -/*
 - * Version of iio_buffer_block_put() that can be called from atomic context
 - */
 -static void iio_buffer_block_put_atomic(struct iio_dma_buffer_block *block)
 -{

Re: [PATCH v2 07/12] iio: buffer-dma: Use DMABUFs instead of custom solution

2022-03-28 Thread Christian König

On 28.03.22 at 19:54, Jonathan Cameron wrote:

On Mon,  7 Feb 2022 12:59:28 +
Paul Cercueil  wrote:


Enhance the current fileio code by using DMABUF objects instead of
custom buffers.

This adds more code than it removes, but:
- a lot of the complexity can be dropped, e.g. custom kref and
   iio_buffer_block_put_atomic() are not needed anymore;
- it will be much easier to introduce an API to export these DMABUF
   objects to userspace in a following patch.

Signed-off-by: Paul Cercueil 

Hi Paul,

I'm a bit rusty on dma mappings, but you seem to have
a mixture of streaming and coherent mappings going on in here.

Is it the case that the current code is using the coherent mappings
and a potential 'other user' of the dma buffer might need
streaming mappings?


Streaming mappings are generally not supported by DMA-buf.

You always have only coherent mappings.

Regards,
Christian.



Jonathan


---
  drivers/iio/buffer/industrialio-buffer-dma.c | 192 ---
  include/linux/iio/buffer-dma.h   |   8 +-
  2 files changed, 122 insertions(+), 78 deletions(-)

diff --git a/drivers/iio/buffer/industrialio-buffer-dma.c 
b/drivers/iio/buffer/industrialio-buffer-dma.c
index 15ea7bc3ac08..54e6000cd2ee 100644
--- a/drivers/iio/buffer/industrialio-buffer-dma.c
+++ b/drivers/iio/buffer/industrialio-buffer-dma.c
@@ -14,6 +14,7 @@
  #include 
  #include 
  #include 
+#include 
  #include 
  #include 
  
@@ -90,103 +91,145 @@

   * callback is called from within the custom callback.
   */
  
-static void iio_buffer_block_release(struct kref *kref)

-{
-   struct iio_dma_buffer_block *block = container_of(kref,
-   struct iio_dma_buffer_block, kref);
-
-   WARN_ON(block->state != IIO_BLOCK_STATE_DEAD);
-
-   dma_free_coherent(block->queue->dev, PAGE_ALIGN(block->size),
-   block->vaddr, block->phys_addr);
-
-   iio_buffer_put(&block->queue->buffer);
-   kfree(block);
-}
-
-static void iio_buffer_block_get(struct iio_dma_buffer_block *block)
-{
-   kref_get(&block->kref);
-}
-
-static void iio_buffer_block_put(struct iio_dma_buffer_block *block)
-{
-   kref_put(&block->kref, iio_buffer_block_release);
-}
-
-/*
- * dma_free_coherent can sleep, hence we need to take some special care to be
- * able to drop a reference from an atomic context.
- */
-static LIST_HEAD(iio_dma_buffer_dead_blocks);
-static DEFINE_SPINLOCK(iio_dma_buffer_dead_blocks_lock);
-
-static void iio_dma_buffer_cleanup_worker(struct work_struct *work)
-{
-   struct iio_dma_buffer_block *block, *_block;
-   LIST_HEAD(block_list);
-
-   spin_lock_irq(&iio_dma_buffer_dead_blocks_lock);
-   list_splice_tail_init(&iio_dma_buffer_dead_blocks, &block_list);
-   spin_unlock_irq(&iio_dma_buffer_dead_blocks_lock);
-
-   list_for_each_entry_safe(block, _block, &block_list, head)
-   iio_buffer_block_release(&block->kref);
-}
-static DECLARE_WORK(iio_dma_buffer_cleanup_work, 
iio_dma_buffer_cleanup_worker);
-
-static void iio_buffer_block_release_atomic(struct kref *kref)
-{
+struct iio_buffer_dma_buf_attachment {
+   struct scatterlist sgl;
+   struct sg_table sg_table;
struct iio_dma_buffer_block *block;
-   unsigned long flags;
-
-   block = container_of(kref, struct iio_dma_buffer_block, kref);
-
-   spin_lock_irqsave(&iio_dma_buffer_dead_blocks_lock, flags);
-   list_add_tail(&block->head, &iio_dma_buffer_dead_blocks);
-   spin_unlock_irqrestore(&iio_dma_buffer_dead_blocks_lock, flags);
-
-   schedule_work(&iio_dma_buffer_cleanup_work);
-}
-
-/*
- * Version of iio_buffer_block_put() that can be called from atomic context
- */
-static void iio_buffer_block_put_atomic(struct iio_dma_buffer_block *block)
-{
-   kref_put(&block->kref, iio_buffer_block_release_atomic);
-}
+};
  
  static struct iio_dma_buffer_queue *iio_buffer_to_queue(struct iio_buffer *buf)

  {
return container_of(buf, struct iio_dma_buffer_queue, buffer);
  }
  
+static struct iio_buffer_dma_buf_attachment *

+to_iio_buffer_dma_buf_attachment(struct sg_table *table)
+{
+   return container_of(table, struct iio_buffer_dma_buf_attachment, 
sg_table);
+}
+
+static void iio_buffer_block_get(struct iio_dma_buffer_block *block)
+{
+   get_dma_buf(block->dmabuf);
+}
+
+static void iio_buffer_block_put(struct iio_dma_buffer_block *block)
+{
+   dma_buf_put(block->dmabuf);
+}
+
+static int iio_buffer_dma_buf_attach(struct dma_buf *dbuf,
+struct dma_buf_attachment *at)
+{
+   at->priv = dbuf->priv;
+
+   return 0;
+}
+
+static struct sg_table *iio_buffer_dma_buf_map(struct dma_buf_attachment *at,
+  enum dma_data_direction dma_dir)
+{
+   struct iio_dma_buffer_block *block = at->priv;
+   struct iio_buffer_dma_buf_attachment *dba;
+   int ret;
+
+   dba = kzalloc(sizeof(*dba), GFP_KERNEL);
+   if (!dba)
+   return ERR_PTR(-ENOMEM);
+
+   sg_init_one(&dba->sgl, 

Re: [PATCH v2 07/12] iio: buffer-dma: Use DMABUFs instead of custom solution

2022-03-28 Thread Jonathan Cameron
On Mon,  7 Feb 2022 12:59:28 +
Paul Cercueil  wrote:

> Enhance the current fileio code by using DMABUF objects instead of
> custom buffers.
> 
> This adds more code than it removes, but:
> - a lot of the complexity can be dropped, e.g. custom kref and
>   iio_buffer_block_put_atomic() are not needed anymore;
> - it will be much easier to introduce an API to export these DMABUF
>   objects to userspace in a following patch.
> 
> Signed-off-by: Paul Cercueil 
Hi Paul,

I'm a bit rusty on dma mappings, but you seem to have
a mixture of streaming and coherent mappings going on in here.

Is it the case that the current code is using the coherent mappings
and a potential 'other user' of the dma buffer might need
streaming mappings?

Jonathan

> ---
>  drivers/iio/buffer/industrialio-buffer-dma.c | 192 ---
>  include/linux/iio/buffer-dma.h   |   8 +-
>  2 files changed, 122 insertions(+), 78 deletions(-)
> 
> diff --git a/drivers/iio/buffer/industrialio-buffer-dma.c 
> b/drivers/iio/buffer/industrialio-buffer-dma.c
> index 15ea7bc3ac08..54e6000cd2ee 100644
> --- a/drivers/iio/buffer/industrialio-buffer-dma.c
> +++ b/drivers/iio/buffer/industrialio-buffer-dma.c
> @@ -14,6 +14,7 @@
>  #include 
>  #include 
>  #include 
> +#include 
>  #include 
>  #include 
>  
> @@ -90,103 +91,145 @@
>   * callback is called from within the custom callback.
>   */
>  
> -static void iio_buffer_block_release(struct kref *kref)
> -{
> - struct iio_dma_buffer_block *block = container_of(kref,
> - struct iio_dma_buffer_block, kref);
> -
> - WARN_ON(block->state != IIO_BLOCK_STATE_DEAD);
> -
> - dma_free_coherent(block->queue->dev, PAGE_ALIGN(block->size),
> - block->vaddr, block->phys_addr);
> -
> - iio_buffer_put(&block->queue->buffer);
> - kfree(block);
> -}
> -
> -static void iio_buffer_block_get(struct iio_dma_buffer_block *block)
> -{
> - kref_get(&block->kref);
> -}
> -
> -static void iio_buffer_block_put(struct iio_dma_buffer_block *block)
> -{
> - kref_put(&block->kref, iio_buffer_block_release);
> -}
> -
> -/*
> - * dma_free_coherent can sleep, hence we need to take some special care to be
> - * able to drop a reference from an atomic context.
> - */
> -static LIST_HEAD(iio_dma_buffer_dead_blocks);
> -static DEFINE_SPINLOCK(iio_dma_buffer_dead_blocks_lock);
> -
> -static void iio_dma_buffer_cleanup_worker(struct work_struct *work)
> -{
> - struct iio_dma_buffer_block *block, *_block;
> - LIST_HEAD(block_list);
> -
> - spin_lock_irq(&iio_dma_buffer_dead_blocks_lock);
> - list_splice_tail_init(&iio_dma_buffer_dead_blocks, &block_list);
> - spin_unlock_irq(&iio_dma_buffer_dead_blocks_lock);
> -
> - list_for_each_entry_safe(block, _block, &block_list, head)
> - iio_buffer_block_release(&block->kref);
> -}
> -static DECLARE_WORK(iio_dma_buffer_cleanup_work, 
> iio_dma_buffer_cleanup_worker);
> -
> -static void iio_buffer_block_release_atomic(struct kref *kref)
> -{
> +struct iio_buffer_dma_buf_attachment {
> + struct scatterlist sgl;
> + struct sg_table sg_table;
>   struct iio_dma_buffer_block *block;
> - unsigned long flags;
> -
> - block = container_of(kref, struct iio_dma_buffer_block, kref);
> -
> - spin_lock_irqsave(&iio_dma_buffer_dead_blocks_lock, flags);
> - list_add_tail(&block->head, &iio_dma_buffer_dead_blocks);
> - spin_unlock_irqrestore(&iio_dma_buffer_dead_blocks_lock, flags);
> -
> - schedule_work(&iio_dma_buffer_cleanup_work);
> -}
> -
> -/*
> - * Version of iio_buffer_block_put() that can be called from atomic context
> - */
> -static void iio_buffer_block_put_atomic(struct iio_dma_buffer_block *block)
> -{
> - kref_put(&block->kref, iio_buffer_block_release_atomic);
> -}
> +};
>  
>  static struct iio_dma_buffer_queue *iio_buffer_to_queue(struct iio_buffer 
> *buf)
>  {
>   return container_of(buf, struct iio_dma_buffer_queue, buffer);
>  }
>  
> +static struct iio_buffer_dma_buf_attachment *
> +to_iio_buffer_dma_buf_attachment(struct sg_table *table)
> +{
> + return container_of(table, struct iio_buffer_dma_buf_attachment, 
> sg_table);
> +}
> +
> +static void iio_buffer_block_get(struct iio_dma_buffer_block *block)
> +{
> + get_dma_buf(block->dmabuf);
> +}
> +
> +static void iio_buffer_block_put(struct iio_dma_buffer_block *block)
> +{
> + dma_buf_put(block->dmabuf);
> +}
> +
> +static int iio_buffer_dma_buf_attach(struct dma_buf *dbuf,
> +  struct dma_buf_attachment *at)
> +{
> + at->priv = dbuf->priv;
> +
> + return 0;
> +}
> +
> +static struct sg_table *iio_buffer_dma_buf_map(struct dma_buf_attachment *at,
> +enum dma_data_direction dma_dir)
> +{
> + struct iio_dma_buffer_block *block = at->priv;
> + struct iio_buffer_dma_buf_attachment *dba;
> + int ret;
> +
> + dba = kzalloc(sizeof(*dba), GFP_KERNEL);
> + if (!dba)
> + return ERR_PTR(-ENOMEM);
> +
> + sg_init_one(&dba->sgl, 

[PATCH v2 07/12] iio: buffer-dma: Use DMABUFs instead of custom solution

2022-02-07 Thread Paul Cercueil
Enhance the current fileio code by using DMABUF objects instead of
custom buffers.

This adds more code than it removes, but:
- a lot of the complexity can be dropped, e.g. custom kref and
  iio_buffer_block_put_atomic() are not needed anymore;
- it will be much easier to introduce an API to export these DMABUF
  objects to userspace in a following patch.

Signed-off-by: Paul Cercueil 
---
 drivers/iio/buffer/industrialio-buffer-dma.c | 192 ---
 include/linux/iio/buffer-dma.h   |   8 +-
 2 files changed, 122 insertions(+), 78 deletions(-)

diff --git a/drivers/iio/buffer/industrialio-buffer-dma.c 
b/drivers/iio/buffer/industrialio-buffer-dma.c
index 15ea7bc3ac08..54e6000cd2ee 100644
--- a/drivers/iio/buffer/industrialio-buffer-dma.c
+++ b/drivers/iio/buffer/industrialio-buffer-dma.c
@@ -14,6 +14,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 #include 
 
@@ -90,103 +91,145 @@
  * callback is called from within the custom callback.
  */
 
-static void iio_buffer_block_release(struct kref *kref)
-{
-   struct iio_dma_buffer_block *block = container_of(kref,
-   struct iio_dma_buffer_block, kref);
-
-   WARN_ON(block->state != IIO_BLOCK_STATE_DEAD);
-
-   dma_free_coherent(block->queue->dev, PAGE_ALIGN(block->size),
-   block->vaddr, block->phys_addr);
-
-   iio_buffer_put(&block->queue->buffer);
-   kfree(block);
-}
-
-static void iio_buffer_block_get(struct iio_dma_buffer_block *block)
-{
-   kref_get(&block->kref);
-}
-
-static void iio_buffer_block_put(struct iio_dma_buffer_block *block)
-{
-   kref_put(&block->kref, iio_buffer_block_release);
-}
-
-/*
- * dma_free_coherent can sleep, hence we need to take some special care to be
- * able to drop a reference from an atomic context.
- */
-static LIST_HEAD(iio_dma_buffer_dead_blocks);
-static DEFINE_SPINLOCK(iio_dma_buffer_dead_blocks_lock);
-
-static void iio_dma_buffer_cleanup_worker(struct work_struct *work)
-{
-   struct iio_dma_buffer_block *block, *_block;
-   LIST_HEAD(block_list);
-
-   spin_lock_irq(&iio_dma_buffer_dead_blocks_lock);
-   list_splice_tail_init(&iio_dma_buffer_dead_blocks, &block_list);
-   spin_unlock_irq(&iio_dma_buffer_dead_blocks_lock);
-
-   list_for_each_entry_safe(block, _block, &block_list, head)
-   iio_buffer_block_release(&block->kref);
-}
-static DECLARE_WORK(iio_dma_buffer_cleanup_work, 
iio_dma_buffer_cleanup_worker);
-
-static void iio_buffer_block_release_atomic(struct kref *kref)
-{
+struct iio_buffer_dma_buf_attachment {
+   struct scatterlist sgl;
+   struct sg_table sg_table;
struct iio_dma_buffer_block *block;
-   unsigned long flags;
-
-   block = container_of(kref, struct iio_dma_buffer_block, kref);
-
-   spin_lock_irqsave(&iio_dma_buffer_dead_blocks_lock, flags);
-   list_add_tail(&block->head, &iio_dma_buffer_dead_blocks);
-   spin_unlock_irqrestore(&iio_dma_buffer_dead_blocks_lock, flags);
-
-   schedule_work(&iio_dma_buffer_cleanup_work);
-}
-
-/*
- * Version of iio_buffer_block_put() that can be called from atomic context
- */
-static void iio_buffer_block_put_atomic(struct iio_dma_buffer_block *block)
-{
-   kref_put(&block->kref, iio_buffer_block_release_atomic);
-}
+};
 
 static struct iio_dma_buffer_queue *iio_buffer_to_queue(struct iio_buffer *buf)
 {
return container_of(buf, struct iio_dma_buffer_queue, buffer);
 }
 
+static struct iio_buffer_dma_buf_attachment *
+to_iio_buffer_dma_buf_attachment(struct sg_table *table)
+{
+   return container_of(table, struct iio_buffer_dma_buf_attachment, 
sg_table);
+}
+
+static void iio_buffer_block_get(struct iio_dma_buffer_block *block)
+{
+   get_dma_buf(block->dmabuf);
+}
+
+static void iio_buffer_block_put(struct iio_dma_buffer_block *block)
+{
+   dma_buf_put(block->dmabuf);
+}
+
+static int iio_buffer_dma_buf_attach(struct dma_buf *dbuf,
+struct dma_buf_attachment *at)
+{
+   at->priv = dbuf->priv;
+
+   return 0;
+}
+
+static struct sg_table *iio_buffer_dma_buf_map(struct dma_buf_attachment *at,
+  enum dma_data_direction dma_dir)
+{
+   struct iio_dma_buffer_block *block = at->priv;
+   struct iio_buffer_dma_buf_attachment *dba;
+   int ret;
+
+   dba = kzalloc(sizeof(*dba), GFP_KERNEL);
+   if (!dba)
+   return ERR_PTR(-ENOMEM);
+
+   sg_init_one(&dba->sgl, block->vaddr, PAGE_ALIGN(block->size));
+   dba->sg_table.sgl = &dba->sgl;
+   dba->sg_table.nents = 1;
+   dba->block = block;
+
+   ret = dma_map_sgtable(at->dev, &dba->sg_table, dma_dir, 0);
+   if (ret) {
+   kfree(dba);
+   return ERR_PTR(ret);
+   }
+
+   return &dba->sg_table;
+}
+
+static void iio_buffer_dma_buf_unmap(struct dma_buf_attachment *at,
+struct sg_table *sg_table,
+enum dma_data_direction dma_dir)
+{
+   struct