Re: [PATCH v4 02/26] iommu/sva: Manage process address spaces

2020-02-28 Thread Jonathan Cameron
On Fri, 28 Feb 2020 15:43:04 +0100
Jean-Philippe Brucker  wrote:

> On Wed, Feb 26, 2020 at 12:35:06PM +, Jonathan Cameron wrote:
> > > + * A single Process Address Space ID (PASID) is allocated for each mm. In the
> > > + * example, devices use PASID 1 to read/write into address space X and PASID 2
> > > + * to read/write into address space Y. Calling iommu_sva_get_pasid() on bond 1
> > > + * returns 1, and calling it on bonds 2-4 returns 2.
> > > + *
> > > + * Hardware tables describing this configuration in the IOMMU would typically
> > > + * look like this:
> > > + *
> > > + *                              PASID tables
> > > + *                               of domain A
> > > + *                            .->+--------+
> > > + *                           / 0 |        |-------> io_pgtable
> > > + *                          /    +--------+
> > > + *   Device tables         /   1 |        |-------> pgd X
> > > + *         +-----+        /      +--------+
> > > + *  00:00.0 |  A  |------'     2 |        |--.
> > > + *         +-----+               +--------+   \
> > > + *           ::                3 |        |    \
> > > + *         +-----+               +--------+     --> pgd Y
> > > + *  00:01.0 |  B  |--.                         /
> > > + *         +-----+   \                        |
> > > + *  00:01.1 |  B  |---+      PASID tables     |
> > > + *         +-----+     \      of domain B     |
> > > + *                      '->+--------+         |
> > > + *                       0 |        |---------|--> io_pgtable
> > > + *                         +--------+         |
> > > + *                       1 |        |         |
> > > + *                         +--------+         |
> > > + *                       2 |        |---------'
> > > + *                         +--------+
> > > + *                       3 |        |
> > > + *                         +--------+
> > > + *
> > > + * With this model, a single call binds all devices in a given domain to an
> > > + * address space. Other devices in the domain will get the same bond implicitly.
> > > + * However, users must issue one bind() for each device, because IOMMUs may
> > > + * implement SVA differently. Furthermore, mandating one bind() per device
> > > + * allows the driver to perform sanity-checks on device capabilities.
> >   
> > > + *
> > > + * In some IOMMUs, one entry of the PASID table (typically the first one) can
> > > + * hold non-PASID translations. In this case PASID 0 is reserved and the first
> > > + * entry points to the io_pgtable pointer. In other IOMMUs the io_pgtable
> > > + * pointer is held in the device table and PASID 0 is available to the
> > > + * allocator.
> > 
> > Is it worth hammering home in here that we can only do this because the PASID space
> > is global (with exception of PASID 0)?  It's a convenient simplification but not
> > necessarily a hardware restriction so perhaps we should remind people somewhere in here?
> 
> I could add this four paragraphs up:
> 
> "A single Process Address Space ID (PASID) is allocated for each mm. It is
> a choice made for the Linux SVA implementation, not a hardware
> restriction."

Perfect.

> 
> > > + */
> > > +
> > > +struct io_mm {
> > > + struct list_headdevices;
> > > + struct mm_struct*mm;
> > > + struct mmu_notifier notifier;
> > > +
> > > + /* Late initialization */
> > > + const struct io_mm_ops  *ops;
> > > + void*ctx;
> > > + int pasid;
> > > +};
> > > +
> > > +#define to_io_mm(mmu_notifier)  container_of(mmu_notifier, struct io_mm, notifier)
> > > +#define to_iommu_bond(handle)   container_of(handle, struct iommu_bond, sva)
> > 
> > Code ordering wise, do we want this after the definition of iommu_bond?
> > 
> > For both of these it's a bit non-obvious what they come 'from'.
> > I wouldn't naturally assume to_io_mm gets me from notifier to the io_mm
> > for example.  Not sure it matters though if these are only used in a few
> > places.  
> 
> Right, I can rename the first one to mn_to_io_mm(). The second one I think
> might be good enough.

Agreed. The second one does feel more natural.

> 
> 
> > > +static struct iommu_sva *
> > > +io_mm_attach(struct device *dev, struct io_mm *io_mm, void *drvdata)
> > > +{
> > > + int ret = 0;  
> > 
> > I'm fairly sure this is set in all paths below.  Now, of course the
> > compiler might not think that in which case fair enough :)
> >   
> > > + bool attach_domain = true;
> > > + struct iommu_bond *bond, *tmp;
> > > + struct iommu_domain *domain, *other;
> > > + struct iommu_sva_param *param = dev->iommu_param->sva_param;
> > > +
> > > + domain = iommu_get_domain_for_dev(dev);
> > > +
> > > + bond = kzalloc(sizeof(*bond), GFP_KERNEL);
> > > + if 

Re: [PATCH v4 02/26] iommu/sva: Manage process address spaces

2020-02-28 Thread Jason Gunthorpe
On Fri, Feb 28, 2020 at 03:40:07PM +0100, Jean-Philippe Brucker wrote:
> > > Device
> > > + * 00:00.0 accesses address spaces X and Y, each corresponding to an mm_struct.
> > > + * Devices 00:01.* only access address space Y. In addition each
> > > + * IOMMU_DOMAIN_DMA domain has a private address space, io_pgtable, that is
> > > + * managed with iommu_map()/iommu_unmap(), and isn't shared with the CPU MMU.
> > So this would allow IOVA and SVA to co-exist in the same address space?
> 
> Hmm, not in the same address space, but they can co-exist in a device. In
> fact the endpoint I'm testing (hisi zip accelerator) already needs normal
> DMA alongside SVA for queue management. This one is integrated on an
> Arm-based platform so shouldn't be a concern for VT-d at the moment, but
> I suspect we might see more of this kind of device with mixed DMA.

Probably the most interesting use cases for PASID definitely require
this, so it's more than a "suspect we might see".

We want to see the privileged kernel control the general behavior of
the PCI function and delegate only some DMAs to PASIDs associated with
the user mm_struct. The device is always trusted to label its DMA
properly.

These programming models have already been used for years with the
OpenCAPI implementation.

Jason
_______________________________________________
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu


Re: [PATCH v4 02/26] iommu/sva: Manage process address spaces

2020-02-28 Thread Jean-Philippe Brucker
On Wed, Feb 26, 2020 at 12:35:06PM +, Jonathan Cameron wrote:
> > + * A single Process Address Space ID (PASID) is allocated for each mm. In the
> > + * example, devices use PASID 1 to read/write into address space X and PASID 2
> > + * to read/write into address space Y. Calling iommu_sva_get_pasid() on bond 1
> > + * returns 1, and calling it on bonds 2-4 returns 2.
> > + *
> > + * Hardware tables describing this configuration in the IOMMU would typically
> > + * look like this:
> > + *
> > + *                              PASID tables
> > + *                               of domain A
> > + *                            .->+--------+
> > + *                           / 0 |        |-------> io_pgtable
> > + *                          /    +--------+
> > + *   Device tables         /   1 |        |-------> pgd X
> > + *         +-----+        /      +--------+
> > + *  00:00.0 |  A  |------'     2 |        |--.
> > + *         +-----+               +--------+   \
> > + *           ::                3 |        |    \
> > + *         +-----+               +--------+     --> pgd Y
> > + *  00:01.0 |  B  |--.                         /
> > + *         +-----+   \                        |
> > + *  00:01.1 |  B  |---+      PASID tables     |
> > + *         +-----+     \      of domain B     |
> > + *                      '->+--------+         |
> > + *                       0 |        |---------|--> io_pgtable
> > + *                         +--------+         |
> > + *                       1 |        |         |
> > + *                         +--------+         |
> > + *                       2 |        |---------'
> > + *                         +--------+
> > + *                       3 |        |
> > + *                         +--------+
> > + *
> > + * With this model, a single call binds all devices in a given domain to an
> > + * address space. Other devices in the domain will get the same bond implicitly.
> > + * However, users must issue one bind() for each device, because IOMMUs may
> > + * implement SVA differently. Furthermore, mandating one bind() per device
> > + * allows the driver to perform sanity-checks on device capabilities.
> 
> > + *
> > + * In some IOMMUs, one entry of the PASID table (typically the first one) can
> > + * hold non-PASID translations. In this case PASID 0 is reserved and the first
> > + * entry points to the io_pgtable pointer. In other IOMMUs the io_pgtable
> > + * pointer is held in the device table and PASID 0 is available to the
> > + * allocator.
> 
> Is it worth hammering home in here that we can only do this because the PASID space
> is global (with exception of PASID 0)?  It's a convenient simplification but not
> necessarily a hardware restriction so perhaps we should remind people somewhere in here?

I could add this four paragraphs up:

"A single Process Address Space ID (PASID) is allocated for each mm. It is
a choice made for the Linux SVA implementation, not a hardware
restriction."

> > + */
> > +
> > +struct io_mm {
> > +   struct list_headdevices;
> > +   struct mm_struct*mm;
> > +   struct mmu_notifier notifier;
> > +
> > +   /* Late initialization */
> > +   const struct io_mm_ops  *ops;
> > +   void*ctx;
> > +   int pasid;
> > +};
> > +
> > +#define to_io_mm(mmu_notifier)  container_of(mmu_notifier, struct io_mm, notifier)
> > +#define to_iommu_bond(handle)   container_of(handle, struct iommu_bond, sva)
> 
> Code ordering wise, do we want this after the definition of iommu_bond?
> 
> For both of these it's a bit non-obvious what they come 'from'.
> I wouldn't naturally assume to_io_mm gets me from notifier to the io_mm
> for example.  Not sure it matters though if these are only used in a few
> places.

Right, I can rename the first one to mn_to_io_mm(). The second one I think
might be good enough.


> > +static struct iommu_sva *
> > +io_mm_attach(struct device *dev, struct io_mm *io_mm, void *drvdata)
> > +{
> > +   int ret = 0;
> 
> I'm fairly sure this is set in all paths below.  Now, of course the
> compiler might not think that in which case fair enough :)
> 
> > +   bool attach_domain = true;
> > +   struct iommu_bond *bond, *tmp;
> > +   struct iommu_domain *domain, *other;
> > +   struct iommu_sva_param *param = dev->iommu_param->sva_param;
> > +
> > +   domain = iommu_get_domain_for_dev(dev);
> > +
> > +   bond = kzalloc(sizeof(*bond), GFP_KERNEL);
> > +   if (!bond)
> > +   return ERR_PTR(-ENOMEM);
> > +
> > +   bond->sva.dev   = dev;
> > +   bond->drvdata   = drvdata;
> > +   refcount_set(&bond->refs, 1);
> > +   RCU_INIT_POINTER(bond->io_mm, io_mm);
> > +
> > +   mutex_lock(&iommu_sva_lock);
> > +   /* Is it already bound to the device or domain? */
> > +   list_for_each_entry(tmp, &io_mm->devices, mm_head) {
> 

Re: [PATCH v4 02/26] iommu/sva: Manage process address spaces

2020-02-28 Thread Jean-Philippe Brucker
On Wed, Feb 26, 2020 at 11:13:20AM -0800, Jacob Pan wrote:
> Hi Jean,
> 
> A few comments inline. I am also trying to converge to the common sva
> APIs. I sent out the first step w/o iopage fault and the generic ops
> you have here.

Great, thanks for sending it out, it's on my list to look at

> On Mon, 24 Feb 2020 19:23:37 +0100
> Jean-Philippe Brucker  wrote:
> 
> > From: Jean-Philippe Brucker 
> > 
> > Add a small library to help IOMMU drivers manage process address
> > spaces bound to their devices. Register an MMU notifier to track
> > modification on each address space bound to one or more devices.
> > 
> > IOMMU drivers must implement the io_mm_ops and can then use the
> > helpers provided by this library to easily implement the SVA API
> > introduced by commit 26b25a2b98e4. The io_mm_ops are:
> > 
> > void *alloc(struct mm_struct *)
> >   Allocate a PASID context private to the IOMMU driver. There is a
> >   single context per mm. IOMMU drivers may perform arch-specific
> >   operations in there, for example pinning down a CPU ASID (on Arm).
> > 
> > int attach(struct device *, int pasid, void *ctx, bool attach_domain)
> >   Attach a context to the device, by setting up the PASID table entry.
> > 
> > int invalidate(struct device *, int pasid, void *ctx,
> >    unsigned long vaddr, size_t size)
> >   Invalidate TLB entries for this address range.
> > 
> > int detach(struct device *, int pasid, void *ctx, bool detach_domain)
> >   Detach a context from the device, by clearing the PASID table entry
> >   and invalidating cached entries.
> > 
> > void free(void *ctx)
> you meant release()?

Yes

[...]
> > +/**
> > + * DOC: io_mm model
> > + *
> > + * The io_mm keeps track of process address spaces shared between CPU and IOMMU.
> > + * The following example illustrates the relation between structures
> > + * iommu_domain, io_mm and iommu_sva. The iommu_sva struct is a bond between
> > + * io_mm and device. A device can have multiple io_mm and an io_mm may be bound
> > + * to multiple devices.
> > + *  ___________________________
> > + * |  IOMMU domain A           |
> > + * |  ________________         |
> > + * | |  IOMMU group   |        +------- io_pgtables
> > + * | |                |        |
> > + * | |   dev 00:00.0 ----+------ bond 1 --- io_mm X
> > + * | |________________|   \    |
> > + * |                       '------- bond 2 ---.
> > + * |___________________________|               \
> > + *  ___________________________                 \
> > + * |  IOMMU domain B           |               io_mm Y
> > + * |  ________________         |              / /
> > + * | |  IOMMU group   |        |             / /
> > + * | |                |        |            / /
> > + * | |   dev 00:01.0 ------------- bond 3 -' /
> > + * | |   dev 00:01.1 ------------- bond 4 --'
> > + * | |________________|        |
> > + * |                           +------- io_pgtables
> > + * |___________________________|
> > + *
> > + * In this example, device 00:00.0 is in domain A, devices 00:01.* are in domain
> > + * B. All devices within the same domain access the same address spaces.
> Hmm, devices in domain A have access to both X & Y, isn't that
> contradictory?

I guess it's unclear; this is meant to explain that any device in domain B,
for example, would access all address spaces bound to any other device in
that domain.

> 
> > Device
> > + * 00:00.0 accesses address spaces X and Y, each corresponding to an mm_struct.
> > + * Devices 00:01.* only access address space Y. In addition each
> > + * IOMMU_DOMAIN_DMA domain has a private address space, io_pgtable, that is
> > + * managed with iommu_map()/iommu_unmap(), and isn't shared with the CPU MMU.
> So this would allow IOVA and SVA to co-exist in the same address space?

Hmm, not in the same address space, but they can co-exist in a device. In
fact the endpoint I'm testing (hisi zip accelerator) already needs normal
DMA alongside SVA for queue management. This one is integrated on an
Arm-based platform so shouldn't be a concern for VT-d at the moment, but
I suspect we might see more of this kind of device with mixed DMA.

In addition, on Arm, MSI addresses are translated by the IOMMU, and since
they are requests w/o PASID they need the private address space on entry 0.

Are you not planning to use the RID_PASID entry of Scalable-Mode
Context-Entry in VT-d?

> I guess this is the PASID 0 for DMA request w/o PASID. If that is the
> case, perhaps needs more explanation since the private address space
> also has a private PASID within the domain.

The last sentence refers to this private address space used for requests
w/o PASID. I don't like referring to it as "PASID 0" since it might be
more confusing. It's entry 0 of 

Re: [PATCH v4 02/26] iommu/sva: Manage process address spaces

2020-02-26 Thread Jacob Pan
Hi Jean,

A few comments inline. I am also trying to converge to the common sva
APIs. I sent out the first step w/o iopage fault and the generic ops
you have here.

On Mon, 24 Feb 2020 19:23:37 +0100
Jean-Philippe Brucker  wrote:

> From: Jean-Philippe Brucker 
> 
> Add a small library to help IOMMU drivers manage process address
> spaces bound to their devices. Register an MMU notifier to track
> modification on each address space bound to one or more devices.
> 
> IOMMU drivers must implement the io_mm_ops and can then use the
> helpers provided by this library to easily implement the SVA API
> introduced by commit 26b25a2b98e4. The io_mm_ops are:
> 
> void *alloc(struct mm_struct *)
>   Allocate a PASID context private to the IOMMU driver. There is a
>   single context per mm. IOMMU drivers may perform arch-specific
>   operations in there, for example pinning down a CPU ASID (on Arm).
> 
> int attach(struct device *, int pasid, void *ctx, bool attach_domain)
>   Attach a context to the device, by setting up the PASID table entry.
> 
> int invalidate(struct device *, int pasid, void *ctx,
>    unsigned long vaddr, size_t size)
>   Invalidate TLB entries for this address range.
> 
> int detach(struct device *, int pasid, void *ctx, bool detach_domain)
>   Detach a context from the device, by clearing the PASID table entry
>   and invalidating cached entries.
> 
> void free(void *ctx)
you meant release()?

>   Free a context.
> 
> Signed-off-by: Jean-Philippe Brucker 
> ---
>  drivers/iommu/Kconfig |   7 +
>  drivers/iommu/Makefile|   1 +
>  drivers/iommu/iommu-sva.c | 561 ++
>  drivers/iommu/iommu-sva.h |  64 +
>  drivers/iommu/iommu.c |   1 +
>  include/linux/iommu.h |   3 +
>  6 files changed, 637 insertions(+)
>  create mode 100644 drivers/iommu/iommu-sva.c
>  create mode 100644 drivers/iommu/iommu-sva.h
> 
> diff --git a/drivers/iommu/Kconfig b/drivers/iommu/Kconfig
> index d2fade984999..acca20e2da2f 100644
> --- a/drivers/iommu/Kconfig
> +++ b/drivers/iommu/Kconfig
> @@ -102,6 +102,13 @@ config IOMMU_DMA
>   select IRQ_MSI_IOMMU
>   select NEED_SG_DMA_LENGTH
>  
> +# Shared Virtual Addressing library
> +config IOMMU_SVA
> + bool
> + select IOASID
> + select IOMMU_API
> + select MMU_NOTIFIER
> +
>  config FSL_PAMU
>   bool "Freescale IOMMU support"
>   depends on PCI
> diff --git a/drivers/iommu/Makefile b/drivers/iommu/Makefile
> index 9f33fdb3bb05..40c800dd4e3e 100644
> --- a/drivers/iommu/Makefile
> +++ b/drivers/iommu/Makefile
> @@ -37,3 +37,4 @@ obj-$(CONFIG_S390_IOMMU) += s390-iommu.o
>  obj-$(CONFIG_QCOM_IOMMU) += qcom_iommu.o
>  obj-$(CONFIG_HYPERV_IOMMU) += hyperv-iommu.o
>  obj-$(CONFIG_VIRTIO_IOMMU) += virtio-iommu.o
> +obj-$(CONFIG_IOMMU_SVA) += iommu-sva.o
> diff --git a/drivers/iommu/iommu-sva.c b/drivers/iommu/iommu-sva.c
> new file mode 100644
> index ..64f1d1c82383
> --- /dev/null
> +++ b/drivers/iommu/iommu-sva.c
> @@ -0,0 +1,561 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Manage PASIDs and bind process address spaces to devices.
> + *
> + * Copyright (C) 2018 ARM Ltd.
> + */
> +
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +
> +#include "iommu-sva.h"
> +
> +/**
> + * DOC: io_mm model
> + *
> + * The io_mm keeps track of process address spaces shared between CPU and IOMMU.
> + * The following example illustrates the relation between structures
> + * iommu_domain, io_mm and iommu_sva. The iommu_sva struct is a bond between
> + * io_mm and device. A device can have multiple io_mm and an io_mm may be bound
> + * to multiple devices.
> + *  ___________________________
> + * |  IOMMU domain A           |
> + * |  ________________         |
> + * | |  IOMMU group   |        +------- io_pgtables
> + * | |                |        |
> + * | |   dev 00:00.0 ----+------ bond 1 --- io_mm X
> + * | |________________|   \    |
> + * |                       '------- bond 2 ---.
> + * |___________________________|               \
> + *  ___________________________                 \
> + * |  IOMMU domain B           |               io_mm Y
> + * |  ________________         |              / /
> + * | |  IOMMU group   |        |             / /
> + * | |                |        |            / /
> + * | |   dev 00:01.0 ------------- bond 3 -' /
> + * | |   dev 00:01.1 ------------- bond 4 --'
> + * | |________________|        |
> + * |                           +------- io_pgtables
> + * |___________________________|
> + *
> + * In this example, device 00:00.0 is in domain A, devices 00:01.* are in domain
> + * B. All devices within the same domain access the same address spaces.
Hmm, devices in domain A has access to both X & 

Re: [PATCH v4 02/26] iommu/sva: Manage process address spaces

2020-02-26 Thread Jonathan Cameron
On Mon, 24 Feb 2020 19:23:37 +0100
Jean-Philippe Brucker  wrote:

> From: Jean-Philippe Brucker 
> 
> Add a small library to help IOMMU drivers manage process address spaces
> bound to their devices. Register an MMU notifier to track modification
> on each address space bound to one or more devices.
> 
> IOMMU drivers must implement the io_mm_ops and can then use the helpers
> provided by this library to easily implement the SVA API introduced by
> commit 26b25a2b98e4. The io_mm_ops are:
> 
> void *alloc(struct mm_struct *)
>   Allocate a PASID context private to the IOMMU driver. There is a
>   single context per mm. IOMMU drivers may perform arch-specific
>   operations in there, for example pinning down a CPU ASID (on Arm).
> 
> int attach(struct device *, int pasid, void *ctx, bool attach_domain)
>   Attach a context to the device, by setting up the PASID table entry.
> 
> int invalidate(struct device *, int pasid, void *ctx,
>    unsigned long vaddr, size_t size)
>   Invalidate TLB entries for this address range.
> 
> int detach(struct device *, int pasid, void *ctx, bool detach_domain)
>   Detach a context from the device, by clearing the PASID table entry
>   and invalidating cached entries.
> 
> void free(void *ctx)
>   Free a context.
> 
> Signed-off-by: Jean-Philippe Brucker 

Hi Jean-Philippe,

A few trivial comments from me in line.  Otherwise this all seems sensible.

Jonathan

> ---
>  drivers/iommu/Kconfig |   7 +
>  drivers/iommu/Makefile|   1 +
>  drivers/iommu/iommu-sva.c | 561 ++
>  drivers/iommu/iommu-sva.h |  64 +
>  drivers/iommu/iommu.c |   1 +
>  include/linux/iommu.h |   3 +
>  6 files changed, 637 insertions(+)
>  create mode 100644 drivers/iommu/iommu-sva.c
>  create mode 100644 drivers/iommu/iommu-sva.h
> 
> diff --git a/drivers/iommu/Kconfig b/drivers/iommu/Kconfig
> index d2fade984999..acca20e2da2f 100644
> --- a/drivers/iommu/Kconfig
> +++ b/drivers/iommu/Kconfig
> @@ -102,6 +102,13 @@ config IOMMU_DMA
>   select IRQ_MSI_IOMMU
>   select NEED_SG_DMA_LENGTH
>  
> +# Shared Virtual Addressing library
> +config IOMMU_SVA
> + bool
> + select IOASID
> + select IOMMU_API
> + select MMU_NOTIFIER
> +
>  config FSL_PAMU
>   bool "Freescale IOMMU support"
>   depends on PCI
> diff --git a/drivers/iommu/Makefile b/drivers/iommu/Makefile
> index 9f33fdb3bb05..40c800dd4e3e 100644
> --- a/drivers/iommu/Makefile
> +++ b/drivers/iommu/Makefile
> @@ -37,3 +37,4 @@ obj-$(CONFIG_S390_IOMMU) += s390-iommu.o
>  obj-$(CONFIG_QCOM_IOMMU) += qcom_iommu.o
>  obj-$(CONFIG_HYPERV_IOMMU) += hyperv-iommu.o
>  obj-$(CONFIG_VIRTIO_IOMMU) += virtio-iommu.o
> +obj-$(CONFIG_IOMMU_SVA) += iommu-sva.o
> diff --git a/drivers/iommu/iommu-sva.c b/drivers/iommu/iommu-sva.c
> new file mode 100644
> index ..64f1d1c82383
> --- /dev/null
> +++ b/drivers/iommu/iommu-sva.c
> @@ -0,0 +1,561 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Manage PASIDs and bind process address spaces to devices.
> + *
> + * Copyright (C) 2018 ARM Ltd.

Worth updating the date?

> + */
> +
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +
> +#include "iommu-sva.h"
> +
> +/**
> + * DOC: io_mm model
> + *
> + * The io_mm keeps track of process address spaces shared between CPU and IOMMU.
> + * The following example illustrates the relation between structures
> + * iommu_domain, io_mm and iommu_sva. The iommu_sva struct is a bond between
> + * io_mm and device. A device can have multiple io_mm and an io_mm may be bound
> + * to multiple devices.
> + *  ___________________________
> + * |  IOMMU domain A           |
> + * |  ________________         |
> + * | |  IOMMU group   |        +------- io_pgtables
> + * | |                |        |
> + * | |   dev 00:00.0 ----+------ bond 1 --- io_mm X
> + * | |________________|   \    |
> + * |                       '------- bond 2 ---.
> + * |___________________________|               \
> + *  ___________________________                 \
> + * |  IOMMU domain B           |               io_mm Y
> + * |  ________________         |              / /
> + * | |  IOMMU group   |        |             / /
> + * | |                |        |            / /
> + * | |   dev 00:01.0 ------------- bond 3 -' /
> + * | |   dev 00:01.1 ------------- bond 4 --'
> + * | |________________|        |
> + * |                           +------- io_pgtables
> + * |___________________________|
> + *
> + * In this example, device 00:00.0 is in domain A, devices 00:01.* are in domain
> + * B. All devices within the same domain access the same address spaces. Device
> + * 00:00.0 accesses address spaces X and Y, each corresponding to an mm_struct.
> +