Re: [PATCH v9 10/11] iommu: Per-domain I/O page fault handling

2022-06-29 Thread Ethan Zhao


On 2022/6/28 22:20, Jean-Philippe Brucker wrote:

On Tue, Jun 28, 2022 at 07:53:39PM +0800, Baolu Lu wrote:

Once iopf_handle_single() is removed, the name of
iopf_handle_group() looks a little weird

and confusing. Does this group mean the iommu group (domain)?
It took me some minutes to

No. This is not the iommu group. It's the page request group defined by
the PCI SIG spec. Multiple page requests can be put in a group with the
same group id, and all page requests in a group can be responded to the
device in one shot.

Thanks for your explanation; I understand the concept of the PCIe PRG
now. I meant: do we still need to mention "group" in the name
iopf_handle_group()? Which one is better, iopf_handle_prg() or
iopf_handler()? Perhaps neither of them? :)

Oh! Sorry for the misunderstanding.

I have no strong feelings about this naming. :-) All of these names
express what the helper does. Jean is the author of this framework; if
he has the same idea as you, I don't mind renaming it in this patch.

I'm not attached to the name, and I see how it could be confusing. Given
that io-pgfault is not only for PCIe, 'prg' is not the best here either.
iopf_handle_faults(), or just iopf_handler(), seem more suitable.


Both iopf_handle_faults() and iopf_handler() look straightforward;
iopf_handler() saves one word, 'faults', and iopf already means 'I/O
page fault', so iopf_handler() is clear enough, I think.


Thanks,

Ethan



Thanks,
Jean


--
"firm, enduring, strong, and long-lived"


Re: [PATCH v9 10/11] iommu: Per-domain I/O page fault handling

2022-06-28 Thread Baolu Lu

On 2022/6/28 22:20, Jean-Philippe Brucker wrote:

On Tue, Jun 28, 2022 at 07:53:39PM +0800, Baolu Lu wrote:

Once iopf_handle_single() is removed, the name of
iopf_handle_group() looks a little weird

and confusing. Does this group mean the iommu group (domain)?
It took me some minutes to


No. This is not the iommu group. It's the page request group defined by
the PCI SIG spec. Multiple page requests can be put in a group with the
same group id, and all page requests in a group can be responded to the
device in one shot.


Thanks for your explanation; I understand the concept of the PCIe PRG
now. I meant: do we still need to mention "group" in the name
iopf_handle_group()? Which one is better, iopf_handle_prg() or
iopf_handler()? Perhaps neither of them? :)


Oh! Sorry for the misunderstanding.

I have no strong feelings about this naming. :-) All of these names
express what the helper does. Jean is the author of this framework; if
he has the same idea as you, I don't mind renaming it in this patch.


I'm not attached to the name, and I see how it could be confusing. Given
that io-pgfault is not only for PCIe, 'prg' is not the best here either.
iopf_handle_faults(), or just iopf_handler(), seem more suitable.


Okay, so I will rename it to iopf_handle_faults() in this patch.

Best regards,
baolu

Re: [PATCH v9 10/11] iommu: Per-domain I/O page fault handling

2022-06-28 Thread Jean-Philippe Brucker
On Tue, Jun 28, 2022 at 07:53:39PM +0800, Baolu Lu wrote:
> > > > Once iopf_handle_single() is removed, the name of
> > > > iopf_handle_group() looks a little weird
> > > > 
> > > > and confusing. Does this group mean the iommu group (domain)?
> > > > It took me some minutes to
> > > 
> > > No. This is not the iommu group. It's the page request group defined
> > > by the PCI SIG spec. Multiple page requests can be put in a group with
> > > the same group id, and all page requests in a group can be responded
> > > to the device in one shot.
> > 
> > Thanks for your explanation; I understand the concept of the PCIe PRG
> > now. I meant: do we still need to mention "group" in the name
> > iopf_handle_group()? Which one is better, iopf_handle_prg() or
> > iopf_handler()? Perhaps neither of them? :)
> 
> Oh! Sorry for the misunderstanding.
> 
> I have no strong feelings about this naming. :-) All of these names
> express what the helper does. Jean is the author of this framework; if
> he has the same idea as you, I don't mind renaming it in this patch.

I'm not attached to the name, and I see how it could be confusing. Given
that io-pgfault is not only for PCIe, 'prg' is not the best here either.
iopf_handle_faults(), or just iopf_handler(), seem more suitable.

Thanks,
Jean

Re: [PATCH v9 10/11] iommu: Per-domain I/O page fault handling

2022-06-28 Thread Baolu Lu

On 2022/6/28 18:02, Tian, Kevin wrote:

From: Jean-Philippe Brucker 
Sent: Tuesday, June 28, 2022 5:44 PM

On Tue, Jun 28, 2022 at 08:39:36AM +, Tian, Kevin wrote:

From: Lu Baolu 
Sent: Tuesday, June 21, 2022 10:44 PM

Tweak the I/O page fault handling framework to route page faults to
the domain and call the page fault handler retrieved from the domain.
This makes it possible for the I/O page fault handling framework to
serve more usage scenarios, as long as they have an IOMMU domain and
install a page fault handler in it. Some unused functions are also
removed to avoid dead code.

iommu_get_domain_for_dev_pasid(), which retrieves the attached domain
for a {device, PASID} pair, is used. It will be called by the page
fault handling framework, which knows the {device, PASID} reported
from the iommu driver. We have a guarantee that the SVA domain doesn't
go away during IOPF handling, because unbind() waits for pending
faults with iopf_queue_flush_dev() before freeing the domain. Hence,
there's no need to synchronize the life cycle of the iommu domains
between the unbind() and the interrupt threads.


I found that iopf_queue_flush_dev() is only called in the intel-iommu
driver. Did I overlook anything?


The SMMU driver will need it as well when we upstream PRI support.
Currently it only supports stall, and that requires the device driver to
flush all DMA, including stalled transactions, *before* calling unbind(),
so there is no need for iopf_queue_flush_dev() in this case.



Then it makes sense. Perhaps Baolu can add this information to the
commit msg so others with a similar question can quickly get the
point here.


Sure. Updated.

Best regards,
baolu


Re: [PATCH v9 10/11] iommu: Per-domain I/O page fault handling

2022-06-28 Thread Baolu Lu

On 2022/6/28 17:10, Ethan Zhao wrote:

Hi, Baolu

On 2022/6/28 14:28, Baolu Lu wrote:

Hi Ethan,

On 2022/6/27 21:03, Ethan Zhao wrote:

Hi,

On 2022/6/21 22:43, Lu Baolu wrote:

Tweak the I/O page fault handling framework to route page faults to
the domain and call the page fault handler retrieved from the domain.
This makes it possible for the I/O page fault handling framework to
serve more usage scenarios, as long as they have an IOMMU domain and
install a page fault handler in it. Some unused functions are also
removed to avoid dead code.

iommu_get_domain_for_dev_pasid(), which retrieves the attached domain
for a {device, PASID} pair, is used. It will be called by the page
fault handling framework, which knows the {device, PASID} reported
from the iommu driver. We have a guarantee that the SVA domain doesn't
go away during IOPF handling, because unbind() waits for pending
faults with iopf_queue_flush_dev() before freeing the domain. Hence,
there's no need to synchronize the life cycle of the iommu domains
between the unbind() and the interrupt threads.

Signed-off-by: Lu Baolu 
Reviewed-by: Jean-Philippe Brucker 
---
  drivers/iommu/io-pgfault.c | 64 +-
  1 file changed, 7 insertions(+), 57 deletions(-)

diff --git a/drivers/iommu/io-pgfault.c b/drivers/iommu/io-pgfault.c
index aee9e033012f..4f24ec703479 100644
--- a/drivers/iommu/io-pgfault.c
+++ b/drivers/iommu/io-pgfault.c
@@ -69,69 +69,18 @@ static int iopf_complete_group(struct device *dev, struct iopf_fault *iopf,
  return iommu_page_response(dev, &resp);
  }
-static enum iommu_page_response_code
-iopf_handle_single(struct iopf_fault *iopf)
-{
-    vm_fault_t ret;
-    struct mm_struct *mm;
-    struct vm_area_struct *vma;
-    unsigned int access_flags = 0;
-    unsigned int fault_flags = FAULT_FLAG_REMOTE;
-    struct iommu_fault_page_request *prm = &iopf->fault.prm;
-    enum iommu_page_response_code status = IOMMU_PAGE_RESP_INVALID;
-
-    if (!(prm->flags & IOMMU_FAULT_PAGE_REQUEST_PASID_VALID))
-    return status;
-
-    mm = iommu_sva_find(prm->pasid);
-    if (IS_ERR_OR_NULL(mm))
-    return status;
-
-    mmap_read_lock(mm);
-
-    vma = find_extend_vma(mm, prm->addr);
-    if (!vma)
-    /* Unmapped area */
-    goto out_put_mm;
-
-    if (prm->perm & IOMMU_FAULT_PERM_READ)
-    access_flags |= VM_READ;
-
-    if (prm->perm & IOMMU_FAULT_PERM_WRITE) {
-    access_flags |= VM_WRITE;
-    fault_flags |= FAULT_FLAG_WRITE;
-    }
-
-    if (prm->perm & IOMMU_FAULT_PERM_EXEC) {
-    access_flags |= VM_EXEC;
-    fault_flags |= FAULT_FLAG_INSTRUCTION;
-    }
-
-    if (!(prm->perm & IOMMU_FAULT_PERM_PRIV))
-    fault_flags |= FAULT_FLAG_USER;
-
-    if (access_flags & ~vma->vm_flags)
-    /* Access fault */
-    goto out_put_mm;
-
-    ret = handle_mm_fault(vma, prm->addr, fault_flags, NULL);
-    status = ret & VM_FAULT_ERROR ? IOMMU_PAGE_RESP_INVALID :
-    IOMMU_PAGE_RESP_SUCCESS;
-
-out_put_mm:
-    mmap_read_unlock(mm);
-    mmput(mm);
-
-    return status;
-}
-


Once iopf_handle_single() is removed, the name of
iopf_handle_group() looks a little weird

and confusing. Does this group mean the iommu group (domain)? It took
me some minutes to


No. This is not the iommu group. It's the page request group defined by
the PCI SIG spec. Multiple page requests can be put in a group with the
same group id, and all page requests in a group can be responded to the
device in one shot.


Thanks for your explanation; I understand the concept of the PCIe PRG
now. I meant: do we still need to mention "group" in the name
iopf_handle_group()? Which one is better, iopf_handle_prg() or
iopf_handler()? Perhaps neither of them? :)


Oh! Sorry for the misunderstanding.

I have no strong feelings about this naming. :-) All of these names
express what the helper does. Jean is the author of this framework; if
he has the same idea as you, I don't mind renaming it in this patch.

Best regards,
baolu

Re: [PATCH v9 10/11] iommu: Per-domain I/O page fault handling

2022-06-28 Thread Baolu Lu

On 2022/6/28 16:39, Tian, Kevin wrote:

  static void iopf_handle_group(struct work_struct *work)
  {
struct iopf_group *group;
+   struct iommu_domain *domain;
struct iopf_fault *iopf, *next;
	enum iommu_page_response_code status = IOMMU_PAGE_RESP_SUCCESS;

group = container_of(work, struct iopf_group, work);
+   domain = iommu_get_domain_for_dev_pasid(group->dev,
+   group->last_fault.fault.prm.pasid);
+   if (!domain || !domain->iopf_handler)
+   status = IOMMU_PAGE_RESP_INVALID;

Missing a comment on why no refcount is required on the domain, as
explained in the commit msg.


I had some comments around iommu_queue_iopf() in the previous patch.
iommu_queue_iopf() is the generic page fault handler exposed by the
iommu core, hence that's the right place to document this.

Posting it below as well:

diff --git a/drivers/iommu/io-pgfault.c b/drivers/iommu/io-pgfault.c
index 1df8c1dcae77..aee9e033012f 100644
--- a/drivers/iommu/io-pgfault.c
+++ b/drivers/iommu/io-pgfault.c
@@ -181,6 +181,13 @@ static void iopf_handle_group(struct work_struct *work)
  * request completes, outstanding faults will have been dealt with by the time
  * the PASID is freed.
  *
+ * Any valid page fault will be eventually routed to an iommu domain and the
+ * page fault handler installed there will get called. The users of this
+ * handling framework should guarantee that the iommu domain could only be
+ * freed after the device has stopped generating page faults (or the iommu
+ * hardware has been set to block the page faults) and the pending page faults
+ * have been flushed.
+ *
  * Return: 0 on success and <0 on error.
  */
 int iommu_queue_iopf(struct iommu_fault *fault, void *cookie)
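
To make the required ordering concrete, an unbind path following this
rule would look roughly like the sketch below. This is illustrative
only; the helper names are assumptions based on this series, and the
Intel driver does the equivalent in intel_svm_unbind_mm():

/*
 * Illustrative sketch of the teardown ordering described in the
 * comment above. Helper names are assumptions from this series.
 */
static void example_sva_unbind(struct iommu_domain *domain,
			       struct device *dev, ioasid_t pasid)
{
	/* 1. Stop the hardware from generating new faults for this PASID. */
	iommu_detach_device_pasid(domain, dev, pasid);

	/* 2. Drain the page faults already reported for this device. */
	iopf_queue_flush_dev(dev);

	/* 3. Only now is it safe to free the domain. */
	iommu_domain_free(domain);
}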

Best regards,
baolu


RE: [PATCH v9 10/11] iommu: Per-domain I/O page fault handling

2022-06-28 Thread Tian, Kevin
> From: Jean-Philippe Brucker 
> Sent: Tuesday, June 28, 2022 5:44 PM
> 
> On Tue, Jun 28, 2022 at 08:39:36AM +, Tian, Kevin wrote:
> > > From: Lu Baolu 
> > > Sent: Tuesday, June 21, 2022 10:44 PM
> > >
> > > Tweak the I/O page fault handling framework to route page faults to
> > > the domain and call the page fault handler retrieved from the domain.
> > > This makes it possible for the I/O page fault handling framework to
> > > serve more usage scenarios, as long as they have an IOMMU domain and
> > > install a page fault handler in it. Some unused functions are also
> > > removed to avoid dead code.
> > >
> > > iommu_get_domain_for_dev_pasid(), which retrieves the attached domain
> > > for a {device, PASID} pair, is used. It will be called by the page
> > > fault handling framework, which knows the {device, PASID} reported
> > > from the iommu driver. We have a guarantee that the SVA domain doesn't
> > > go away during IOPF handling, because unbind() waits for pending
> > > faults with iopf_queue_flush_dev() before freeing the domain. Hence,
> > > there's no need to synchronize the life cycle of the iommu domains
> > > between the unbind() and the interrupt threads.
> >
> > I found that iopf_queue_flush_dev() is only called in the intel-iommu
> > driver. Did I overlook anything?
> 
> The SMMU driver will need it as well when we upstream PRI support.
> Currently it only supports stall, and that requires the device driver to
> flush all DMA, including stalled transactions, *before* calling unbind(),
> so there is no need for iopf_queue_flush_dev() in this case.
>

Then it makes sense. Perhaps Baolu can add this information to the
commit msg so others with a similar question can quickly get the
point here.


Re: [PATCH v9 10/11] iommu: Per-domain I/O page fault handling

2022-06-28 Thread Jean-Philippe Brucker
On Tue, Jun 28, 2022 at 08:39:36AM +, Tian, Kevin wrote:
> > From: Lu Baolu 
> > Sent: Tuesday, June 21, 2022 10:44 PM
> > 
> > Tweak the I/O page fault handling framework to route page faults to
> > the domain and call the page fault handler retrieved from the domain.
> > This makes it possible for the I/O page fault handling framework to
> > serve more usage scenarios, as long as they have an IOMMU domain and
> > install a page fault handler in it. Some unused functions are also
> > removed to avoid dead code.
> > 
> > iommu_get_domain_for_dev_pasid(), which retrieves the attached domain
> > for a {device, PASID} pair, is used. It will be called by the page
> > fault handling framework, which knows the {device, PASID} reported
> > from the iommu driver. We have a guarantee that the SVA domain doesn't
> > go away during IOPF handling, because unbind() waits for pending
> > faults with iopf_queue_flush_dev() before freeing the domain. Hence,
> > there's no need to synchronize the life cycle of the iommu domains
> > between the unbind() and the interrupt threads.
> 
> I found that iopf_queue_flush_dev() is only called in the intel-iommu
> driver. Did I overlook anything?

The SMMU driver will need it as well when we upstream PRI support.
Currently it only supports stall, and that requires the device driver to
flush all DMA, including stalled transactions, *before* calling unbind(),
so there is no need for iopf_queue_flush_dev() in this case.

Thanks,
Jean

> 
> >  static void iopf_handle_group(struct work_struct *work)
> >  {
> > struct iopf_group *group;
> > +   struct iommu_domain *domain;
> > struct iopf_fault *iopf, *next;
> > 	enum iommu_page_response_code status = IOMMU_PAGE_RESP_SUCCESS;
> > 
> > group = container_of(work, struct iopf_group, work);
> > +   domain = iommu_get_domain_for_dev_pasid(group->dev,
> > +   group->last_fault.fault.prm.pasid);
> > +   if (!domain || !domain->iopf_handler)
> > +   status = IOMMU_PAGE_RESP_INVALID;
> 
> Missing a comment on why no refcount is required on the domain, as
> explained in the commit msg.


Re: [PATCH v9 10/11] iommu: Per-domain I/O page fault handling

2022-06-28 Thread Ethan Zhao

Hi, Baolu

On 2022/6/28 14:28, Baolu Lu wrote:

Hi Ethan,

On 2022/6/27 21:03, Ethan Zhao wrote:

Hi,

On 2022/6/21 22:43, Lu Baolu wrote:

Tweak the I/O page fault handling framework to route page faults to
the domain and call the page fault handler retrieved from the domain.
This makes it possible for the I/O page fault handling framework to
serve more usage scenarios, as long as they have an IOMMU domain and
install a page fault handler in it. Some unused functions are also
removed to avoid dead code.

iommu_get_domain_for_dev_pasid(), which retrieves the attached domain
for a {device, PASID} pair, is used. It will be called by the page
fault handling framework, which knows the {device, PASID} reported
from the iommu driver. We have a guarantee that the SVA domain doesn't
go away during IOPF handling, because unbind() waits for pending
faults with iopf_queue_flush_dev() before freeing the domain. Hence,
there's no need to synchronize the life cycle of the iommu domains
between the unbind() and the interrupt threads.

Signed-off-by: Lu Baolu 
Reviewed-by: Jean-Philippe Brucker 
---
  drivers/iommu/io-pgfault.c | 64 +-
  1 file changed, 7 insertions(+), 57 deletions(-)

diff --git a/drivers/iommu/io-pgfault.c b/drivers/iommu/io-pgfault.c
index aee9e033012f..4f24ec703479 100644
--- a/drivers/iommu/io-pgfault.c
+++ b/drivers/iommu/io-pgfault.c
@@ -69,69 +69,18 @@ static int iopf_complete_group(struct device *dev, struct iopf_fault *iopf,
  return iommu_page_response(dev, &resp);
  }
-static enum iommu_page_response_code
-iopf_handle_single(struct iopf_fault *iopf)
-{
-    vm_fault_t ret;
-    struct mm_struct *mm;
-    struct vm_area_struct *vma;
-    unsigned int access_flags = 0;
-    unsigned int fault_flags = FAULT_FLAG_REMOTE;
-    struct iommu_fault_page_request *prm = &iopf->fault.prm;
-    enum iommu_page_response_code status = IOMMU_PAGE_RESP_INVALID;
-
-    if (!(prm->flags & IOMMU_FAULT_PAGE_REQUEST_PASID_VALID))
-    return status;
-
-    mm = iommu_sva_find(prm->pasid);
-    if (IS_ERR_OR_NULL(mm))
-    return status;
-
-    mmap_read_lock(mm);
-
-    vma = find_extend_vma(mm, prm->addr);
-    if (!vma)
-    /* Unmapped area */
-    goto out_put_mm;
-
-    if (prm->perm & IOMMU_FAULT_PERM_READ)
-    access_flags |= VM_READ;
-
-    if (prm->perm & IOMMU_FAULT_PERM_WRITE) {
-    access_flags |= VM_WRITE;
-    fault_flags |= FAULT_FLAG_WRITE;
-    }
-
-    if (prm->perm & IOMMU_FAULT_PERM_EXEC) {
-    access_flags |= VM_EXEC;
-    fault_flags |= FAULT_FLAG_INSTRUCTION;
-    }
-
-    if (!(prm->perm & IOMMU_FAULT_PERM_PRIV))
-    fault_flags |= FAULT_FLAG_USER;
-
-    if (access_flags & ~vma->vm_flags)
-    /* Access fault */
-    goto out_put_mm;
-
-    ret = handle_mm_fault(vma, prm->addr, fault_flags, NULL);
-    status = ret & VM_FAULT_ERROR ? IOMMU_PAGE_RESP_INVALID :
-    IOMMU_PAGE_RESP_SUCCESS;
-
-out_put_mm:
-    mmap_read_unlock(mm);
-    mmput(mm);
-
-    return status;
-}
-


Once iopf_handle_single() is removed, the name of
iopf_handle_group() looks a little weird

and confusing. Does this group mean the iommu group (domain)? It took
me some minutes to


No. This is not the iommu group. It's the page request group defined by
the PCI SIG spec. Multiple page requests can be put in a group with the
same group id, and all page requests in a group can be responded to the
device in one shot.


Thanks for your explanation; I understand the concept of the PCIe PRG
now. I meant: do we still need to mention "group" in the name
iopf_handle_group()? Which one is better, iopf_handle_prg() or
iopf_handler()? Perhaps neither of them? :)


Thanks,

Ethan



Best regards,
baolu



look into the code to see that it actually means a batch / list / queue
of iopfs, and that iopf_handle_group() becomes a generic iopf_handler().

Does it make sense to revise the names of iopf_handle_group(),
iopf_complete_group(), and iopf_group in this patch set?


Thanks,

Ethan


  static void iopf_handle_group(struct work_struct *work)
  {
  struct iopf_group *group;
+    struct iommu_domain *domain;
  struct iopf_fault *iopf, *next;
  enum iommu_page_response_code status = IOMMU_PAGE_RESP_SUCCESS;
  group = container_of(work, struct iopf_group, work);
+    domain = iommu_get_domain_for_dev_pasid(group->dev,
+    group->last_fault.fault.prm.pasid);
+    if (!domain || !domain->iopf_handler)
+    status = IOMMU_PAGE_RESP_INVALID;
  list_for_each_entry_safe(iopf, next, &group->faults, list) {
  /*
@@ -139,7 +88,8 @@ static void iopf_handle_group(struct work_struct *work)

   * faults in the group if there is an error.
   */
  if (status == IOMMU_PAGE_RESP_SUCCESS)
-    status = iopf_handle_single(iopf);
+    status = domain->iopf_handler(&iopf->fault,
+  domain->fault_data);
  if (!(iopf->fault.prm.flags &
    IOMMU_FAULT_PAGE_REQUEST_LAST_PAGE))



RE: [PATCH v9 10/11] iommu: Per-domain I/O page fault handling

2022-06-28 Thread Tian, Kevin
> From: Lu Baolu 
> Sent: Tuesday, June 21, 2022 10:44 PM
> 
> Tweak the I/O page fault handling framework to route page faults to
> the domain and call the page fault handler retrieved from the domain.
> This makes it possible for the I/O page fault handling framework to
> serve more usage scenarios, as long as they have an IOMMU domain and
> install a page fault handler in it. Some unused functions are also
> removed to avoid dead code.
> 
> iommu_get_domain_for_dev_pasid(), which retrieves the attached domain
> for a {device, PASID} pair, is used. It will be called by the page
> fault handling framework, which knows the {device, PASID} reported
> from the iommu driver. We have a guarantee that the SVA domain doesn't
> go away during IOPF handling, because unbind() waits for pending
> faults with iopf_queue_flush_dev() before freeing the domain. Hence,
> there's no need to synchronize the life cycle of the iommu domains
> between the unbind() and the interrupt threads.

I found that iopf_queue_flush_dev() is only called in the intel-iommu
driver. Did I overlook anything?

>  static void iopf_handle_group(struct work_struct *work)
>  {
>   struct iopf_group *group;
> + struct iommu_domain *domain;
>   struct iopf_fault *iopf, *next;
>   enum iommu_page_response_code status = IOMMU_PAGE_RESP_SUCCESS;
> 
>   group = container_of(work, struct iopf_group, work);
> + domain = iommu_get_domain_for_dev_pasid(group->dev,
> + group->last_fault.fault.prm.pasid);
> + if (!domain || !domain->iopf_handler)
> + status = IOMMU_PAGE_RESP_INVALID;

Missing a comment on why no refcount is required on the domain, as
explained in the commit msg.


Re: [PATCH v9 10/11] iommu: Per-domain I/O page fault handling

2022-06-28 Thread Baolu Lu

Hi Ethan,

On 2022/6/27 21:03, Ethan Zhao wrote:

Hi,

On 2022/6/21 22:43, Lu Baolu wrote:

Tweak the I/O page fault handling framework to route page faults to
the domain and call the page fault handler retrieved from the domain.
This makes it possible for the I/O page fault handling framework to
serve more usage scenarios, as long as they have an IOMMU domain and
install a page fault handler in it. Some unused functions are also
removed to avoid dead code.

iommu_get_domain_for_dev_pasid(), which retrieves the attached domain
for a {device, PASID} pair, is used. It will be called by the page
fault handling framework, which knows the {device, PASID} reported
from the iommu driver. We have a guarantee that the SVA domain doesn't
go away during IOPF handling, because unbind() waits for pending
faults with iopf_queue_flush_dev() before freeing the domain. Hence,
there's no need to synchronize the life cycle of the iommu domains
between the unbind() and the interrupt threads.

Signed-off-by: Lu Baolu 
Reviewed-by: Jean-Philippe Brucker 
---
  drivers/iommu/io-pgfault.c | 64 +-
  1 file changed, 7 insertions(+), 57 deletions(-)

diff --git a/drivers/iommu/io-pgfault.c b/drivers/iommu/io-pgfault.c
index aee9e033012f..4f24ec703479 100644
--- a/drivers/iommu/io-pgfault.c
+++ b/drivers/iommu/io-pgfault.c
@@ -69,69 +69,18 @@ static int iopf_complete_group(struct device *dev, struct iopf_fault *iopf,
  return iommu_page_response(dev, &resp);
  }
-static enum iommu_page_response_code
-iopf_handle_single(struct iopf_fault *iopf)
-{
-    vm_fault_t ret;
-    struct mm_struct *mm;
-    struct vm_area_struct *vma;
-    unsigned int access_flags = 0;
-    unsigned int fault_flags = FAULT_FLAG_REMOTE;
-    struct iommu_fault_page_request *prm = &iopf->fault.prm;
-    enum iommu_page_response_code status = IOMMU_PAGE_RESP_INVALID;
-
-    if (!(prm->flags & IOMMU_FAULT_PAGE_REQUEST_PASID_VALID))
-    return status;
-
-    mm = iommu_sva_find(prm->pasid);
-    if (IS_ERR_OR_NULL(mm))
-    return status;
-
-    mmap_read_lock(mm);
-
-    vma = find_extend_vma(mm, prm->addr);
-    if (!vma)
-    /* Unmapped area */
-    goto out_put_mm;
-
-    if (prm->perm & IOMMU_FAULT_PERM_READ)
-    access_flags |= VM_READ;
-
-    if (prm->perm & IOMMU_FAULT_PERM_WRITE) {
-    access_flags |= VM_WRITE;
-    fault_flags |= FAULT_FLAG_WRITE;
-    }
-
-    if (prm->perm & IOMMU_FAULT_PERM_EXEC) {
-    access_flags |= VM_EXEC;
-    fault_flags |= FAULT_FLAG_INSTRUCTION;
-    }
-
-    if (!(prm->perm & IOMMU_FAULT_PERM_PRIV))
-    fault_flags |= FAULT_FLAG_USER;
-
-    if (access_flags & ~vma->vm_flags)
-    /* Access fault */
-    goto out_put_mm;
-
-    ret = handle_mm_fault(vma, prm->addr, fault_flags, NULL);
-    status = ret & VM_FAULT_ERROR ? IOMMU_PAGE_RESP_INVALID :
-    IOMMU_PAGE_RESP_SUCCESS;
-
-out_put_mm:
-    mmap_read_unlock(mm);
-    mmput(mm);
-
-    return status;
-}
-


Once iopf_handle_single() is removed, the name of
iopf_handle_group() looks a little weird

and confusing. Does this group mean the iommu group (domain)? It took
me some minutes to


No. This is not the iommu group. It's the page request group defined by
the PCI SIG spec. Multiple page requests can be put in a group with the
same group id, and all page requests in a group can be responded to the
device in one shot.
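
For illustration, the batching works roughly like this. This is a
simplified sketch of the logic in iommu_queue_iopf() in
drivers/iommu/io-pgfault.c; locking and error unwinding are omitted,
and example_queue_iopf is just an illustrative name:

/*
 * Simplified sketch of PRG batching: non-last page requests are
 * stashed on a per-device "partial" list, and the request flagged
 * LAST_PAGE pulls everything with the same grpid into one iopf_group,
 * which is then handled and responded to in one shot.
 */
static int example_queue_iopf(struct device *dev,
			      struct iopf_device_param *iopf_param,
			      struct iommu_fault *fault)
{
	struct iopf_fault *iopf, *next;
	struct iopf_group *group;

	if (!(fault->prm.flags & IOMMU_FAULT_PAGE_REQUEST_LAST_PAGE)) {
		/* Not the last request of the group: postpone it. */
		iopf = kzalloc(sizeof(*iopf), GFP_KERNEL);
		if (!iopf)
			return -ENOMEM;
		iopf->fault = *fault;
		list_add(&iopf->list, &iopf_param->partial);
		return 0;
	}

	/* Last request: gather all partial faults with the same grpid. */
	group = kzalloc(sizeof(*group), GFP_KERNEL);
	if (!group)
		return -ENOMEM;
	group->dev = dev;
	group->last_fault.fault = *fault;
	INIT_LIST_HEAD(&group->faults);
	INIT_WORK(&group->work, iopf_handle_group);
	list_add(&group->last_fault.list, &group->faults);

	list_for_each_entry_safe(iopf, next, &iopf_param->partial, list) {
		if (iopf->fault.prm.grpid == fault->prm.grpid)
			list_move(&iopf->list, &group->faults);
	}

	/* The whole group is handled, then responded to, as one unit. */
	queue_work(iopf_param->queue->wq, &group->work);
	return 0;
}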

Best regards,
baolu



look into the code to see that it actually means a batch / list / queue
of iopfs, and that iopf_handle_group() becomes a generic iopf_handler().

Does it make sense to revise the names of iopf_handle_group(),
iopf_complete_group(), and iopf_group in this patch set?


Thanks,

Ethan


  static void iopf_handle_group(struct work_struct *work)
  {
  struct iopf_group *group;
+    struct iommu_domain *domain;
  struct iopf_fault *iopf, *next;
  enum iommu_page_response_code status = IOMMU_PAGE_RESP_SUCCESS;
  group = container_of(work, struct iopf_group, work);
+    domain = iommu_get_domain_for_dev_pasid(group->dev,
+    group->last_fault.fault.prm.pasid);
+    if (!domain || !domain->iopf_handler)
+    status = IOMMU_PAGE_RESP_INVALID;
  list_for_each_entry_safe(iopf, next, &group->faults, list) {
  /*
@@ -139,7 +88,8 @@ static void iopf_handle_group(struct work_struct *work)

   * faults in the group if there is an error.
   */
  if (status == IOMMU_PAGE_RESP_SUCCESS)
-    status = iopf_handle_single(iopf);
+    status = domain->iopf_handler(&iopf->fault,
+  domain->fault_data);
  if (!(iopf->fault.prm.flags &
    IOMMU_FAULT_PAGE_REQUEST_LAST_PAGE))





Re: [PATCH v9 10/11] iommu: Per-domain I/O page fault handling

2022-06-27 Thread Ethan Zhao

Hi,

On 2022/6/21 22:43, Lu Baolu wrote:

Tweak the I/O page fault handling framework to route page faults to
the domain and call the page fault handler retrieved from the domain.
This makes it possible for the I/O page fault handling framework to
serve more usage scenarios, as long as they have an IOMMU domain and
install a page fault handler in it. Some unused functions are also
removed to avoid dead code.

iommu_get_domain_for_dev_pasid(), which retrieves the attached domain
for a {device, PASID} pair, is used. It will be called by the page
fault handling framework, which knows the {device, PASID} reported
from the iommu driver. We have a guarantee that the SVA domain doesn't
go away during IOPF handling, because unbind() waits for pending
faults with iopf_queue_flush_dev() before freeing the domain. Hence,
there's no need to synchronize the life cycle of the iommu domains
between the unbind() and the interrupt threads.

Signed-off-by: Lu Baolu 
Reviewed-by: Jean-Philippe Brucker 
---
  drivers/iommu/io-pgfault.c | 64 +-
  1 file changed, 7 insertions(+), 57 deletions(-)

diff --git a/drivers/iommu/io-pgfault.c b/drivers/iommu/io-pgfault.c
index aee9e033012f..4f24ec703479 100644
--- a/drivers/iommu/io-pgfault.c
+++ b/drivers/iommu/io-pgfault.c
@@ -69,69 +69,18 @@ static int iopf_complete_group(struct device *dev, struct iopf_fault *iopf,
	return iommu_page_response(dev, &resp);
  }
  
-static enum iommu_page_response_code

-iopf_handle_single(struct iopf_fault *iopf)
-{
-   vm_fault_t ret;
-   struct mm_struct *mm;
-   struct vm_area_struct *vma;
-   unsigned int access_flags = 0;
-   unsigned int fault_flags = FAULT_FLAG_REMOTE;
-   struct iommu_fault_page_request *prm = &iopf->fault.prm;
-   enum iommu_page_response_code status = IOMMU_PAGE_RESP_INVALID;
-
-   if (!(prm->flags & IOMMU_FAULT_PAGE_REQUEST_PASID_VALID))
-   return status;
-
-   mm = iommu_sva_find(prm->pasid);
-   if (IS_ERR_OR_NULL(mm))
-   return status;
-
-   mmap_read_lock(mm);
-
-   vma = find_extend_vma(mm, prm->addr);
-   if (!vma)
-   /* Unmapped area */
-   goto out_put_mm;
-
-   if (prm->perm & IOMMU_FAULT_PERM_READ)
-   access_flags |= VM_READ;
-
-   if (prm->perm & IOMMU_FAULT_PERM_WRITE) {
-   access_flags |= VM_WRITE;
-   fault_flags |= FAULT_FLAG_WRITE;
-   }
-
-   if (prm->perm & IOMMU_FAULT_PERM_EXEC) {
-   access_flags |= VM_EXEC;
-   fault_flags |= FAULT_FLAG_INSTRUCTION;
-   }
-
-   if (!(prm->perm & IOMMU_FAULT_PERM_PRIV))
-   fault_flags |= FAULT_FLAG_USER;
-
-   if (access_flags & ~vma->vm_flags)
-   /* Access fault */
-   goto out_put_mm;
-
-   ret = handle_mm_fault(vma, prm->addr, fault_flags, NULL);
-   status = ret & VM_FAULT_ERROR ? IOMMU_PAGE_RESP_INVALID :
-   IOMMU_PAGE_RESP_SUCCESS;
-
-out_put_mm:
-   mmap_read_unlock(mm);
-   mmput(mm);
-
-   return status;
-}
-


Once iopf_handle_single() is removed, the name of
iopf_handle_group() looks a little weird

and confusing. Does this group mean the iommu group (domain)? It took
me some minutes to


look into the code to see that it actually means a batch / list / queue
of iopfs, and that iopf_handle_group() becomes a generic iopf_handler().

Does it make sense to revise the names of iopf_handle_group(),
iopf_complete_group(), and iopf_group in this patch set?


Thanks,

Ethan


  static void iopf_handle_group(struct work_struct *work)
  {
struct iopf_group *group;
+   struct iommu_domain *domain;
struct iopf_fault *iopf, *next;
enum iommu_page_response_code status = IOMMU_PAGE_RESP_SUCCESS;
  
  	group = container_of(work, struct iopf_group, work);

+   domain = iommu_get_domain_for_dev_pasid(group->dev,
+   group->last_fault.fault.prm.pasid);
+   if (!domain || !domain->iopf_handler)
+   status = IOMMU_PAGE_RESP_INVALID;
  
  	list_for_each_entry_safe(iopf, next, &group->faults, list) {

/*
@@ -139,7 +88,8 @@ static void iopf_handle_group(struct work_struct *work)
 * faults in the group if there is an error.
 */
if (status == IOMMU_PAGE_RESP_SUCCESS)
-   status = iopf_handle_single(iopf);
+   status = domain->iopf_handler(&iopf->fault,
+ domain->fault_data);
  
  		if (!(iopf->fault.prm.flags &

  IOMMU_FAULT_PAGE_REQUEST_LAST_PAGE))


--
"firm, enduring, strong, and long-lived"


[PATCH v9 10/11] iommu: Per-domain I/O page fault handling

2022-06-21 Thread Lu Baolu
Tweak the I/O page fault handling framework to route page faults to
the domain and call the page fault handler retrieved from the domain.
This makes it possible for the I/O page fault handling framework to
serve more usage scenarios, as long as they have an IOMMU domain and
install a page fault handler in it. Some unused functions are also
removed to avoid dead code.

iommu_get_domain_for_dev_pasid(), which retrieves the attached domain
for a {device, PASID} pair, is used. It will be called by the page
fault handling framework, which knows the {device, PASID} reported
from the iommu driver. We have a guarantee that the SVA domain doesn't
go away during IOPF handling, because unbind() waits for pending
faults with iopf_queue_flush_dev() before freeing the domain. Hence,
there's no need to synchronize the life cycle of the iommu domains
between the unbind() and the interrupt threads.

Signed-off-by: Lu Baolu 
Reviewed-by: Jean-Philippe Brucker 
---
 drivers/iommu/io-pgfault.c | 64 +-
 1 file changed, 7 insertions(+), 57 deletions(-)

diff --git a/drivers/iommu/io-pgfault.c b/drivers/iommu/io-pgfault.c
index aee9e033012f..4f24ec703479 100644
--- a/drivers/iommu/io-pgfault.c
+++ b/drivers/iommu/io-pgfault.c
@@ -69,69 +69,18 @@ static int iopf_complete_group(struct device *dev, struct iopf_fault *iopf,
	return iommu_page_response(dev, &resp);
 }
 
-static enum iommu_page_response_code
-iopf_handle_single(struct iopf_fault *iopf)
-{
-   vm_fault_t ret;
-   struct mm_struct *mm;
-   struct vm_area_struct *vma;
-   unsigned int access_flags = 0;
-   unsigned int fault_flags = FAULT_FLAG_REMOTE;
-   struct iommu_fault_page_request *prm = &iopf->fault.prm;
-   enum iommu_page_response_code status = IOMMU_PAGE_RESP_INVALID;
-
-   if (!(prm->flags & IOMMU_FAULT_PAGE_REQUEST_PASID_VALID))
-   return status;
-
-   mm = iommu_sva_find(prm->pasid);
-   if (IS_ERR_OR_NULL(mm))
-   return status;
-
-   mmap_read_lock(mm);
-
-   vma = find_extend_vma(mm, prm->addr);
-   if (!vma)
-   /* Unmapped area */
-   goto out_put_mm;
-
-   if (prm->perm & IOMMU_FAULT_PERM_READ)
-   access_flags |= VM_READ;
-
-   if (prm->perm & IOMMU_FAULT_PERM_WRITE) {
-   access_flags |= VM_WRITE;
-   fault_flags |= FAULT_FLAG_WRITE;
-   }
-
-   if (prm->perm & IOMMU_FAULT_PERM_EXEC) {
-   access_flags |= VM_EXEC;
-   fault_flags |= FAULT_FLAG_INSTRUCTION;
-   }
-
-   if (!(prm->perm & IOMMU_FAULT_PERM_PRIV))
-   fault_flags |= FAULT_FLAG_USER;
-
-   if (access_flags & ~vma->vm_flags)
-   /* Access fault */
-   goto out_put_mm;
-
-   ret = handle_mm_fault(vma, prm->addr, fault_flags, NULL);
-   status = ret & VM_FAULT_ERROR ? IOMMU_PAGE_RESP_INVALID :
-   IOMMU_PAGE_RESP_SUCCESS;
-
-out_put_mm:
-   mmap_read_unlock(mm);
-   mmput(mm);
-
-   return status;
-}
-
 static void iopf_handle_group(struct work_struct *work)
 {
struct iopf_group *group;
+   struct iommu_domain *domain;
struct iopf_fault *iopf, *next;
enum iommu_page_response_code status = IOMMU_PAGE_RESP_SUCCESS;
 
group = container_of(work, struct iopf_group, work);
+   domain = iommu_get_domain_for_dev_pasid(group->dev,
+   group->last_fault.fault.prm.pasid);
+   if (!domain || !domain->iopf_handler)
+   status = IOMMU_PAGE_RESP_INVALID;
 
	list_for_each_entry_safe(iopf, next, &group->faults, list) {
/*
@@ -139,7 +88,8 @@ static void iopf_handle_group(struct work_struct *work)
 * faults in the group if there is an error.
 */
if (status == IOMMU_PAGE_RESP_SUCCESS)
-   status = iopf_handle_single(iopf);
+   status = domain->iopf_handler(&iopf->fault,
+ domain->fault_data);
 
if (!(iopf->fault.prm.flags &
  IOMMU_FAULT_PAGE_REQUEST_LAST_PAGE))
-- 
2.25.1
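
For reference, the consumer side that installs the per-domain handler
invoked above looks roughly like the sketch below. It is modelled on
the SVA domain support earlier in this series; iommu_sva_handle_iopf
and the exact allocation path are assumptions from that patch, not
part of this one:

/*
 * Sketch: allocating an SVA domain and wiring up the per-domain fault
 * handler that iopf_handle_group() calls. Names other than the
 * domain fields added by this series are assumptions.
 */
static struct iommu_domain *example_sva_domain_alloc(struct device *dev,
						     struct mm_struct *mm)
{
	const struct iommu_ops *ops = dev->bus->iommu_ops; /* simplified lookup */
	struct iommu_domain *domain;

	domain = ops->domain_alloc(IOMMU_DOMAIN_SVA);
	if (!domain)
		return NULL;

	domain->type = IOMMU_DOMAIN_SVA;
	mmgrab(mm);
	domain->mm = mm;
	/* Faults routed to this domain land in the SVA handler. */
	domain->iopf_handler = iommu_sva_handle_iopf;
	domain->fault_data = mm;

	return domain;
}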
