RE: [RFC v3 1/8] vfio: Add VFIO_IOMMU_PASID_REQUEST(alloc/free)

2020-02-17 Thread Liu, Yi L
> From: Liu, Yi L 
> Sent: Friday, January 31, 2020 8:41 PM
> To: Alex Williamson 
> Subject: RE: [RFC v3 1/8] vfio: Add VFIO_IOMMU_PASID_REQUEST(alloc/free)
> > > +static int vfio_iommu_type1_pasid_free(struct vfio_iommu *iommu,
> > > +unsigned int pasid)
> > > +{
> > > + struct vfio_mm *vmm = iommu->vmm;
> > > + int ret = 0;
> > > +
> > > + mutex_lock(&iommu->lock);
> > > + if (!IS_IOMMU_CAP_DOMAIN_IN_CONTAINER(iommu)) {
> >
> > But we could have been IOMMU backed when the pasid was allocated, did we
> > just leak something?  In fact, I didn't spot anything in this series that
> > handles a container with pasids allocated losing iommu backing.
> > I'd think we want to release all pasids when that happens since permission
> > for the user to hold pasids goes along with having an iommu backed device.
> 
> oh, yes. If a container loses its iommu backing, the allocated PASIDs
> need to be reclaimed, right? I'll add it. :-)

Hi Alex,

I went through the flow again; maybe the current series already covers
it. There is a vfio_mm structure which tracks the allocated PASIDs, and
its lifetime spans type1 driver open and release. If I understand
correctly, the type1 driver release happens when there are no more
iommu-backed groups in a container.

static void __vfio_group_unset_container(struct vfio_group *group)
{
[...]

/* Detaching the last group deprivileges a container, remove iommu */
if (driver && list_empty(&container->group_list)) {
driver->ops->release(container->iommu_data);
module_put(driver->ops->owner);
container->iommu_driver = NULL;
container->iommu_data = NULL;
}
[...]
}

Regards,
Yi Liu


___
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu


Re: [RFC v3 1/8] vfio: Add VFIO_IOMMU_PASID_REQUEST(alloc/free)

2020-02-06 Thread Jacob Pan
Hi Alex,

On Fri, 31 Jan 2020 12:41:06 +
"Liu, Yi L"  wrote:

> > > +static int vfio_iommu_type1_pasid_free(struct vfio_iommu *iommu,
> > > +unsigned int pasid)
> > > +{
> > > + struct vfio_mm *vmm = iommu->vmm;
> > > + int ret = 0;
> > > +
> > > + mutex_lock(&iommu->lock);
> > > + if (!IS_IOMMU_CAP_DOMAIN_IN_CONTAINER(iommu)) {  
> > 
> > But we could have been IOMMU backed when the pasid was allocated,
> > did we just leak something?  In fact, I didn't spot anything in
> > this series that handles a container with pasids allocated losing
> > iommu backing. I'd think we want to release all pasids when that
> > happens since permission for the user to hold pasids goes along
> > with having an iommu backed device.  
> 
> oh, yes. If a container loses its iommu backing, the allocated PASIDs
> need to be reclaimed, right? I'll add it. :-)
> 
> > Also, do we want _free() paths that can fail?  
> 
> I remember we discussed whether a _free() path can fail; I think we
> agreed to let the _free() path always succeed. :-)

Just to add some details: we introduced an IOASID notifier so that when
VFIO frees a PASID, consumers such as the IOMMU driver can do the
cleanup, thereby ensuring that free always succeeds.
https://www.spinics.net/lists/kernel/msg3349928.html
https://www.spinics.net/lists/kernel/msg3349930.html
This was not in my v9 set as I was considering some race conditions
w.r.t. registering the notifier, receiving notifications, and the free
call. I will post it in v10.

Thanks,

Jacob


RE: [RFC v3 1/8] vfio: Add VFIO_IOMMU_PASID_REQUEST(alloc/free)

2020-02-06 Thread Liu, Yi L
> From: Liu, Yi L 
> Sent: Friday, January 31, 2020 8:41 PM
> To: Alex Williamson 
> Subject: RE: [RFC v3 1/8] vfio: Add VFIO_IOMMU_PASID_REQUEST(alloc/free)
> 
> Hi Alex,
> 
> > From: Alex Williamson [mailto:alex.william...@redhat.com]
> > Sent: Thursday, January 30, 2020 7:56 AM
> > To: Liu, Yi L 
> > Subject: Re: [RFC v3 1/8] vfio: Add
> > VFIO_IOMMU_PASID_REQUEST(alloc/free)
> >
> > On Wed, 29 Jan 2020 04:11:45 -0800
> > "Liu, Yi L"  wrote:
> >
> > > From: Liu Yi L 
> > >
[...]
> > > +
> > > +int vfio_mm_pasid_alloc(struct vfio_mm *vmm, int min, int max)
> > > +{
> > > + ioasid_t pasid;
> > > + int ret = -ENOSPC;
> > > +
> > > + mutex_lock(&vmm->pasid_lock);
> > > + if (vmm->pasid_count >= vmm->pasid_quota) {
> > > + ret = -ENOSPC;
> > > + goto out_unlock;
> > > + }
> > > + /* Track ioasid allocation owner by mm */
> > > + pasid = ioasid_alloc((struct ioasid_set *)vmm->mm, min,
> > > + max, NULL);
> >
> > Is mm effectively only a token for this?  Maybe we should have a
> > struct vfio_mm_token since gets and puts are not creating a reference
> > to an mm, but to an "mm token".
> 
> yes, it is supposed to be a kind of token; vfio_mm_token is a better name. :-)

Hi Alex,

Just to double-check that I got your point: do you mean a separate
structure which is only a wrapper around mm, or would just renaming the
current vfio_mm be enough?


Regards,
Yi Liu



RE: [RFC v3 1/8] vfio: Add VFIO_IOMMU_PASID_REQUEST(alloc/free)

2020-01-31 Thread Liu, Yi L
Hi Alex,

> From: Alex Williamson [mailto:alex.william...@redhat.com]
> Sent: Thursday, January 30, 2020 7:56 AM
> To: Liu, Yi L 
> Subject: Re: [RFC v3 1/8] vfio: Add VFIO_IOMMU_PASID_REQUEST(alloc/free)
> 
> On Wed, 29 Jan 2020 04:11:45 -0800
> "Liu, Yi L"  wrote:
> 
> > From: Liu Yi L 
> >
> > For a long time, devices have had only one DMA address space from the
> > platform IOMMU's point of view. This is true for both bare metal and
> > direct device assignment in virtualization, because the source ID of a
> > DMA request in PCIe is the BDF (bus/device/function), which allows only
> > device-granularity DMA isolation. However, this is changing with the
> > latest advances in I/O technology. More and more platform vendors are
> > utilizing the PCIe PASID TLP prefix in DMA requests, giving devices
> > multiple DMA address spaces identified by their individual PASIDs. For
> > example, Shared Virtual Addressing (SVA, a.k.a. Shared Virtual Memory)
> > lets a device access multiple process virtual address spaces by binding
> > each address space to a PASID, where the PASID is allocated in software
> > and programmed to the device in a device-specific manner. Devices which
> > support this are called PASID-capable devices. If such devices are
> > passed through to VMs, guest software is also able to bind guest process
> > virtual address spaces on them, so the guest can reuse the bare metal
> > programming model: guest software will also allocate PASIDs and program
> > them to the device directly. This is a dangerous situation, since it
> > risks PASID conflicts and unauthorized address space access. It is safer
> > to let the host intercept the guest software's PASID allocation, so that
> > PASIDs are managed system-wide.

[...]

> > +static void vfio_mm_unlock_and_free(struct vfio_mm *vmm)
> > +{
> > +   mutex_unlock(&vfio.vfio_mm_lock);
> > +   kfree(vmm);
> > +}
> > +
> > +/* called with vfio.vfio_mm_lock held */
> > +static void vfio_mm_release(struct kref *kref)
> > +{
> > +   struct vfio_mm *vmm = container_of(kref, struct vfio_mm, kref);
> > +
> > +   list_del(&vmm->vfio_next);
> > +   vfio_mm_unlock_and_free(vmm);
> > +}
> > +
> > +void vfio_mm_put(struct vfio_mm *vmm)
> > +{
> > +   kref_put_mutex(&vmm->kref, vfio_mm_release, &vfio.vfio_mm_lock);
> > +}
> > +EXPORT_SYMBOL_GPL(vfio_mm_put);
> > +
> > +/* Assume vfio_mm_lock or vfio_mm reference is held */
> > +static void vfio_mm_get(struct vfio_mm *vmm)
> > +{
> > +   kref_get(&vmm->kref);
> > +}
> > +
> > +struct vfio_mm *vfio_mm_get_from_task(struct task_struct *task)
> > +{
> > +   struct mm_struct *mm = get_task_mm(task);
> > +   struct vfio_mm *vmm;
> > +
> > +   mutex_lock(&vfio.vfio_mm_lock);
> > +   list_for_each_entry(vmm, &vfio.vfio_mm_list, vfio_next) {
> > +   if (vmm->mm == mm) {
> > +   vfio_mm_get(vmm);
> > +   goto out;
> > +   }
> > +   }
> > +
> > +   vmm = vfio_create_mm(mm);
> > +   if (IS_ERR(vmm))
> > +   vmm = NULL;
> > +out:
> > +   mutex_unlock(&vfio.vfio_mm_lock);
> > +   mmput(mm);
> > +   return vmm;
> > +}
> > +EXPORT_SYMBOL_GPL(vfio_mm_get_from_task);
> > +
> > +int vfio_mm_pasid_alloc(struct vfio_mm *vmm, int min, int max)
> > +{
> > +   ioasid_t pasid;
> > +   int ret = -ENOSPC;
> > +
> > +   mutex_lock(&vmm->pasid_lock);
> > +   if (vmm->pasid_count >= vmm->pasid_quota) {
> > +   ret = -ENOSPC;
> > +   goto out_unlock;
> > +   }
> > +   /* Track ioasid allocation owner by mm */
> > +   pasid = ioasid_alloc((struct ioasid_set *)vmm->mm, min,
> > +   max, NULL);
> 
> Is mm effectively only a token for this?  Maybe we should have a struct
> vfio_mm_token since gets and puts are not creating a reference to an mm,
> but to an "mm token".

yes, it is supposed to be a kind of token; vfio_mm_token is a better name. :-)

> > +   if (pasid == INVALID_IOASID) {
> > +   ret = -ENOSPC;
> > +   goto out_unlock;
> > +   }
> > +   vmm->pasid_count++;
> > +
> > +   ret = pasid;
> > +out_unlock:
> > +   mutex_unlock(&vmm->pasid_lock);
> > +   return ret;
> > +}
> > +EXPORT_SYMBOL_GPL(vfio_mm_pasid_alloc);
> > +
> > +int vfio_mm_pasid_free(struct vfio_mm *vmm, ioasid_t pasid)
> > +{
> > +   void *pdata;
> > +   int ret = 0;
> > +
> > +   mutex_lock(&vmm->pasid_lock);
> > +   p

Re: [RFC v3 1/8] vfio: Add VFIO_IOMMU_PASID_REQUEST(alloc/free)

2020-01-29 Thread Alex Williamson
On Wed, 29 Jan 2020 04:11:45 -0800
"Liu, Yi L"  wrote:

> From: Liu Yi L 
> 
> For a long time, devices have had only one DMA address space from the
> platform IOMMU's point of view. This is true for both bare metal and
> direct device assignment in virtualization, because the source ID of a
> DMA request in PCIe is the BDF (bus/device/function), which allows only
> device-granularity DMA isolation. However, this is changing with the
> latest advances in I/O technology. More and more platform vendors are
> utilizing the PCIe PASID TLP prefix in DMA requests, giving devices
> multiple DMA address spaces identified by their individual PASIDs. For
> example, Shared Virtual Addressing (SVA, a.k.a. Shared Virtual Memory)
> lets a device access multiple process virtual address spaces by binding
> each address space to a PASID, where the PASID is allocated in software
> and programmed to the device in a device-specific manner. Devices which
> support this are called PASID-capable devices. If such devices are
> passed through to VMs, guest software is also able to bind guest process
> virtual address spaces on them, so the guest can reuse the bare metal
> programming model: guest software will also allocate PASIDs and program
> them to the device directly. This is a dangerous situation, since it
> risks PASID conflicts and unauthorized address space access. It is safer
> to let the host intercept the guest software's PASID allocation, so that
> PASIDs are managed system-wide.
> 
> This patch adds the VFIO_IOMMU_PASID_REQUEST ioctl, which passes down
> PASID allocation/free requests from the virtual IOMMU. Since such
> requests are intended to be invoked by QEMU or other userspace
> applications, a mechanism is needed to prevent a single application from
> abusing the available PASIDs in the system. With this in mind, this
> patch tracks VFIO PASID allocations per-VM. There was a discussion about
> making the quota per assigned device, e.g. a VM with many assigned
> devices should get a larger quota. However, it is unclear how many
> PASIDs an assigned device will use; a VM with multiple assigned devices
> may well request fewer PASIDs. Therefore a per-VM quota is better.
> 
> This patch uses the struct mm pointer as a per-VM token. Using the task
> structure pointer or the vfio_iommu structure pointer was also
> considered. However, the task structure is per-thread, so it cannot
> serve the per-VM PASID tracking purpose, and the vfio_iommu structure is
> visible only within vfio. Therefore the struct mm pointer was selected.
> This patch adds a structure vfio_mm: a vfio_mm is created when the first
> vfio container is opened by a VM and, in reverse, freed when the last
> vfio container is released. Each VM is assigned a PASID quota so that it
> cannot request PASIDs beyond it. This patch adds a default quota of
> 1000, which can be tuned by the administrator; making the PASID quota
> tunable is added in a later patch in this series.
> 
> Previous discussions:
> https://patchwork.kernel.org/patch/11209429/
> 
> Cc: Kevin Tian 
> CC: Jacob Pan 
> Cc: Alex Williamson 
> Cc: Eric Auger 
> Cc: Jean-Philippe Brucker 
> Signed-off-by: Liu Yi L 
> Signed-off-by: Yi Sun 
> Signed-off-by: Jacob Pan 
> ---
>  drivers/vfio/vfio.c | 125 
> 
>  drivers/vfio/vfio_iommu_type1.c |  92 +
>  include/linux/vfio.h|  15 +
>  include/uapi/linux/vfio.h   |  41 +
>  4 files changed, 273 insertions(+)
> 
> diff --git a/drivers/vfio/vfio.c b/drivers/vfio/vfio.c
> index c848262..c43c757 100644
> --- a/drivers/vfio/vfio.c
> +++ b/drivers/vfio/vfio.c
> @@ -32,6 +32,7 @@
>  #include 
>  #include 
>  #include 
> +#include 
>  
>  #define DRIVER_VERSION   "0.3"
>  #define DRIVER_AUTHOR   "Alex Williamson "
> @@ -46,6 +47,8 @@ static struct vfio {
>   struct mutexgroup_lock;
>   struct cdev group_cdev;
>   dev_t   group_devt;
> + struct list_headvfio_mm_list;
> + struct mutexvfio_mm_lock;
>   wait_queue_head_t   release_q;
>  } vfio;
>  
> @@ -2129,6 +2132,126 @@ int vfio_unregister_notifier(struct device *dev, enum vfio_notify_type type,
>  EXPORT_SYMBOL(vfio_unregister_notifier);
>  
>  /**
> + * VFIO_MM objects - create, release, get, put, search
> + * Caller of the function should have held vfio.vfio_mm_lock.
> + */
> +static struct vfio_mm *vfio_create_mm(struct mm_struct *mm)
> +{
> + struct vfio_mm *vmm;
> +
> + vmm = kzalloc(sizeof(*vmm), GFP_KERNEL);
> + if (!vmm)
> + return ERR_PTR(-ENOMEM);
> +
> > + kref_init(&vmm->kref);
> + vmm->mm = mm;
> + 

[RFC v3 1/8] vfio: Add VFIO_IOMMU_PASID_REQUEST(alloc/free)

2020-01-29 Thread Liu, Yi L
From: Liu Yi L 

For a long time, devices have had only one DMA address space from the
platform IOMMU's point of view. This is true for both bare metal and
direct device assignment in virtualization, because the source ID of a
DMA request in PCIe is the BDF (bus/device/function), which allows only
device-granularity DMA isolation. However, this is changing with the
latest advances in I/O technology. More and more platform vendors are
utilizing the PCIe PASID TLP prefix in DMA requests, giving devices
multiple DMA address spaces identified by their individual PASIDs. For
example, Shared Virtual Addressing (SVA, a.k.a. Shared Virtual Memory)
lets a device access multiple process virtual address spaces by binding
each address space to a PASID, where the PASID is allocated in software
and programmed to the device in a device-specific manner. Devices which
support this are called PASID-capable devices. If such devices are
passed through to VMs, guest software is also able to bind guest process
virtual address spaces on them, so the guest can reuse the bare metal
programming model: guest software will also allocate PASIDs and program
them to the device directly. This is a dangerous situation, since it
risks PASID conflicts and unauthorized address space access. It is safer
to let the host intercept the guest software's PASID allocation, so that
PASIDs are managed system-wide.

This patch adds the VFIO_IOMMU_PASID_REQUEST ioctl, which passes down
PASID allocation/free requests from the virtual IOMMU. Since such
requests are intended to be invoked by QEMU or other userspace
applications, a mechanism is needed to prevent a single application from
abusing the available PASIDs in the system. With this in mind, this
patch tracks VFIO PASID allocations per-VM. There was a discussion about
making the quota per assigned device, e.g. a VM with many assigned
devices should get a larger quota. However, it is unclear how many
PASIDs an assigned device will use; a VM with multiple assigned devices
may well request fewer PASIDs. Therefore a per-VM quota is better.

This patch uses the struct mm pointer as a per-VM token. Using the task
structure pointer or the vfio_iommu structure pointer was also
considered. However, the task structure is per-thread, so it cannot
serve the per-VM PASID tracking purpose, and the vfio_iommu structure is
visible only within vfio. Therefore the struct mm pointer was selected.
This patch adds a structure vfio_mm: a vfio_mm is created when the first
vfio container is opened by a VM and, in reverse, freed when the last
vfio container is released. Each VM is assigned a PASID quota so that it
cannot request PASIDs beyond it. This patch adds a default quota of
1000, which can be tuned by the administrator; making the PASID quota
tunable is added in a later patch in this series.

Previous discussions:
https://patchwork.kernel.org/patch/11209429/

Cc: Kevin Tian 
CC: Jacob Pan 
Cc: Alex Williamson 
Cc: Eric Auger 
Cc: Jean-Philippe Brucker 
Signed-off-by: Liu Yi L 
Signed-off-by: Yi Sun 
Signed-off-by: Jacob Pan 
---
 drivers/vfio/vfio.c | 125 
 drivers/vfio/vfio_iommu_type1.c |  92 +
 include/linux/vfio.h|  15 +
 include/uapi/linux/vfio.h   |  41 +
 4 files changed, 273 insertions(+)

diff --git a/drivers/vfio/vfio.c b/drivers/vfio/vfio.c
index c848262..c43c757 100644
--- a/drivers/vfio/vfio.c
+++ b/drivers/vfio/vfio.c
@@ -32,6 +32,7 @@
 #include 
 #include 
 #include 
+#include 
 
 #define DRIVER_VERSION "0.3"
 #define DRIVER_AUTHOR  "Alex Williamson "
@@ -46,6 +47,8 @@ static struct vfio {
struct mutexgroup_lock;
struct cdev group_cdev;
dev_t   group_devt;
+   struct list_headvfio_mm_list;
+   struct mutexvfio_mm_lock;
wait_queue_head_t   release_q;
 } vfio;
 
@@ -2129,6 +2132,126 @@ int vfio_unregister_notifier(struct device *dev, enum vfio_notify_type type,
 EXPORT_SYMBOL(vfio_unregister_notifier);
 
 /**
+ * VFIO_MM objects - create, release, get, put, search
+ * Caller of the function should have held vfio.vfio_mm_lock.
+ */
+static struct vfio_mm *vfio_create_mm(struct mm_struct *mm)
+{
+   struct vfio_mm *vmm;
+
+   vmm = kzalloc(sizeof(*vmm), GFP_KERNEL);
+   if (!vmm)
+   return ERR_PTR(-ENOMEM);
+
+   kref_init(&vmm->kref);
+   vmm->mm = mm;
+   vmm->pasid_quota = VFIO_DEFAULT_PASID_QUOTA;
+   vmm->pasid_count = 0;
+   mutex_init(&vmm->pasid_lock);
+
+   list_add(&vmm->vfio_next, &vfio.vfio_mm_list);
+
+   return vmm;
+}
+
+static void vfio_mm_unlock_and_free(struct vfio_mm *vmm)
+{
+   mutex_unlock(&vfio.vfio_mm_lock);
+