[PATCH 1/1] iommu/vt-d: Use device numa domain if RHSA is missing

2020-09-22 Thread Lu Baolu
If there are multiple NUMA domains but the RHSA is missing in the ACPI/DMAR
table, we could default to the device NUMA domain as a fallback.

Signed-off-by: Lu Baolu 
Reviewed-by: Kevin Tian 
Cc: Jacob Pan 
Cc: Kevin Tian 
Cc: Ashok Raj 
Link: https://lore.kernel.org/r/20200904010303.2961-1-baolu...@linux.intel.com
---
 drivers/iommu/intel/iommu.c | 37 +++--
 1 file changed, 35 insertions(+), 2 deletions(-)

diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
index 87b17bac04c2..1b7d390beb68 100644
--- a/drivers/iommu/intel/iommu.c
+++ b/drivers/iommu/intel/iommu.c
@@ -698,12 +698,47 @@ static int domain_update_iommu_superpage(struct dmar_domain *domain,
return fls(mask);
 }
 
+static int domain_update_device_node(struct dmar_domain *domain)
+{
+   struct device_domain_info *info;
+   int nid = NUMA_NO_NODE;
+
+   assert_spin_locked(&device_domain_lock);
+
+   if (list_empty(&domain->devices))
+   return NUMA_NO_NODE;
+
+   list_for_each_entry(info, &domain->devices, link) {
+   if (!info->dev)
+   continue;
+
+   /*
+* There could possibly be multiple device NUMA nodes, as devices
+* within the same domain may sit behind different IOMMUs. There
+* is no perfect answer in such a situation, so we select the
+* first-come-first-served policy.
+*/
+   nid = dev_to_node(info->dev);
+   if (nid != NUMA_NO_NODE)
+   break;
+   }
+
+   return nid;
+}
+
 /* Some capabilities may be different across iommus */
 static void domain_update_iommu_cap(struct dmar_domain *domain)
 {
domain_update_iommu_coherency(domain);
domain->iommu_snooping = domain_update_iommu_snooping(NULL);
domain->iommu_superpage = domain_update_iommu_superpage(domain, NULL);
+
+   /*
+* If RHSA is missing, we should default to the device numa domain
+* as a fallback.
+*/
+   if (domain->nid == NUMA_NO_NODE)
+   domain->nid = domain_update_device_node(domain);
 }
 
 struct context_entry *iommu_context_addr(struct intel_iommu *iommu, u8 bus,
@@ -5095,8 +5130,6 @@ static struct iommu_domain *intel_iommu_domain_alloc(unsigned type)
if (type == IOMMU_DOMAIN_DMA)
intel_init_iova_domain(dmar_domain);
 
-   domain_update_iommu_cap(dmar_domain);
-
domain = &dmar_domain->domain;
domain->geometry.aperture_start = 0;
domain->geometry.aperture_end   =
-- 
2.17.1

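For readers who want to see the fallback policy in isolation, the following
is a minimal standalone sketch of the decision the patch adds to
domain_update_iommu_cap(): keep the RHSA-provided node when it exists,
otherwise take the first valid node reported by a device attached to the
domain. This is illustrative userspace C, not driver code; pick_domain_node()
and its parameters are made-up names.

#include <stdio.h>

#define NUMA_NO_NODE (-1)

/*
 * Toy model of the policy: prefer the node provided by RHSA; if it is
 * missing (NUMA_NO_NODE), fall back to the first device in the domain
 * that reports a valid node.
 */
static int pick_domain_node(int rhsa_node, const int *dev_nodes, int ndevs)
{
        int i;

        if (rhsa_node != NUMA_NO_NODE)
                return rhsa_node;

        for (i = 0; i < ndevs; i++)
                if (dev_nodes[i] != NUMA_NO_NODE)
                        return dev_nodes[i];    /* first hit wins */

        return NUMA_NO_NODE;
}

int main(void)
{
        /* Nodes reported by the devices attached to one domain. */
        int devs[] = { NUMA_NO_NODE, 1, 0 };

        /* RHSA present: the device nodes are ignored. */
        printf("with RHSA:    node %d\n", pick_domain_node(0, devs, 3));
        /* RHSA missing: the first device with a valid node (node 1) wins. */
        printf("without RHSA: node %d\n", pick_domain_node(NUMA_NO_NODE, devs, 3));
        return 0;
}

Built with any C compiler, this prints node 0 for the RHSA case and node 1
for the fallback case, mirroring domain_update_device_node() above.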


Re: [PATCH 1/1] iommu/vt-d: Use device numa domain if RHSA is missing

2020-08-30 Thread Lu Baolu

Hi Kevin,

Thanks a lot for looking at my patch.

On 8/28/20 10:13 AM, Tian, Kevin wrote:

From: Lu Baolu 
Sent: Thursday, August 27, 2020 1:57 PM

If there are multiple NUMA domains but the RHSA is missing in the ACPI/DMAR
table, we could default to the device NUMA domain as a fallback. This also
benefits the vIOMMU use case where only a single vIOMMU is exposed, hence
no RHSA will be present but the device NUMA domain can still be correct.


this benefits vIOMMU, but it is not necessarily limited to the single-vIOMMU
case. The logic still holds with multiple vIOMMUs as long as RHSA is
not provided.


Yes. Will refine the description.





Cc: Jacob Pan 
Cc: Kevin Tian 
Cc: Ashok Raj 
Signed-off-by: Lu Baolu 
---
  drivers/iommu/intel/iommu.c | 31 +--
  1 file changed, 29 insertions(+), 2 deletions(-)

diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
index e0516d64d7a3..bce158468abf 100644
--- a/drivers/iommu/intel/iommu.c
+++ b/drivers/iommu/intel/iommu.c
@@ -700,12 +700,41 @@ static int
domain_update_iommu_superpage(struct dmar_domain *domain,
return fls(mask);
  }

+static int domain_update_device_node(struct dmar_domain *domain)
+{
+   struct device_domain_info *info;
+   int nid = NUMA_NO_NODE;
+
+   assert_spin_locked(&device_domain_lock);
+
+   if (list_empty(&domain->devices))
+   return NUMA_NO_NODE;
+
+   list_for_each_entry(info, &domain->devices, link) {
+   if (!info->dev)
+   continue;
+
+   nid = dev_to_node(info->dev);
+   if (nid != NUMA_NO_NODE)
+   break;
+   }


There could be multiple device NUMA nodes, as devices within the
same domain may sit behind different IOMMUs. Of course there
is no perfect answer in such a situation, and this patch is still an
obvious improvement over the current always-on-node0 policy. But
a comment about this implication would be welcome.


I will add a comment here. Without more knowledge, we currently
choose to use the first hit.




+
+   return nid;
+}
+
  /* Some capabilities may be different across iommus */
  static void domain_update_iommu_cap(struct dmar_domain *domain)
  {
domain_update_iommu_coherency(domain);
domain->iommu_snooping =
domain_update_iommu_snooping(NULL);
domain->iommu_superpage =
domain_update_iommu_superpage(domain, NULL);
+
+   /*
+* If RHSA is missing, we should default to the device numa domain
+* as a fallback.
+*/
+   if (domain->nid == NUMA_NO_NODE)
+   domain->nid = domain_update_device_node(domain);
  }

  struct context_entry *iommu_context_addr(struct intel_iommu *iommu, u8
bus,
@@ -5086,8 +5115,6 @@ static struct iommu_domain
*intel_iommu_domain_alloc(unsigned type)
if (type == IOMMU_DOMAIN_DMA)
intel_init_iova_domain(dmar_domain);

-   domain_update_iommu_cap(dmar_domain);
-


Is it intended or a mistake? If the former, it looks like a separate fix...


It's a cleanup. When a domain is newly created, this function is
actually a no-op.




domain = &dmar_domain->domain;
domain->geometry.aperture_start = 0;
domain->geometry.aperture_end   =
--
2.17.1




Best regards,
baolu
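To make the implication discussed above concrete: when the devices in a
domain sit behind IOMMUs on different NUMA nodes, the first-hit policy picks
whichever valid node happens to appear first in the domain's device list, so
the result can depend on attach order. The standalone sketch below
(illustrative userspace C, not driver code; first_valid_node() is a made-up
name) shows the same pair of devices yielding different nodes for two
orderings. Either answer is defensible; the point of the patch is simply to
return some valid node rather than none when RHSA is absent.

#include <stdio.h>

#define NUMA_NO_NODE (-1)

/* First-come-first-served node selection, as in the patch. */
static int first_valid_node(const int *dev_nodes, int ndevs)
{
        int i;

        for (i = 0; i < ndevs; i++)
                if (dev_nodes[i] != NUMA_NO_NODE)
                        return dev_nodes[i];

        return NUMA_NO_NODE;
}

int main(void)
{
        /* The same two devices, listed in two different attach orders. */
        int order_a[] = { 0, 1 };
        int order_b[] = { 1, 0 };

        printf("order A picks node %d\n", first_valid_node(order_a, 2));
        printf("order B picks node %d\n", first_valid_node(order_b, 2));
        return 0;
}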


RE: [PATCH 1/1] iommu/vt-d: Use device numa domain if RHSA is missing

2020-08-27 Thread Tian, Kevin
> From: Lu Baolu 
> Sent: Thursday, August 27, 2020 1:57 PM
> 
> If there are multiple NUMA domains but the RHSA is missing in the ACPI/DMAR
> table, we could default to the device NUMA domain as a fallback. This also
> benefits the vIOMMU use case where only a single vIOMMU is exposed, hence
> no RHSA will be present but the device NUMA domain can still be correct.

this benefits vIOMMU, but it is not necessarily limited to the single-vIOMMU
case. The logic still holds with multiple vIOMMUs as long as RHSA is
not provided.

> 
> Cc: Jacob Pan 
> Cc: Kevin Tian 
> Cc: Ashok Raj 
> Signed-off-by: Lu Baolu 
> ---
>  drivers/iommu/intel/iommu.c | 31 +--
>  1 file changed, 29 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
> index e0516d64d7a3..bce158468abf 100644
> --- a/drivers/iommu/intel/iommu.c
> +++ b/drivers/iommu/intel/iommu.c
> @@ -700,12 +700,41 @@ static int
> domain_update_iommu_superpage(struct dmar_domain *domain,
>   return fls(mask);
>  }
> 
> +static int domain_update_device_node(struct dmar_domain *domain)
> +{
> + struct device_domain_info *info;
> + int nid = NUMA_NO_NODE;
> +
> + assert_spin_locked(&device_domain_lock);
> +
> + if (list_empty(&domain->devices))
> + return NUMA_NO_NODE;
> +
> + list_for_each_entry(info, &domain->devices, link) {
> + if (!info->dev)
> + continue;
> +
> + nid = dev_to_node(info->dev);
> + if (nid != NUMA_NO_NODE)
> + break;
> + }

There could be multiple device NUMA nodes, as devices within the
same domain may sit behind different IOMMUs. Of course there
is no perfect answer in such a situation, and this patch is still an
obvious improvement over the current always-on-node0 policy. But
a comment about this implication would be welcome.

> +
> + return nid;
> +}
> +
>  /* Some capabilities may be different across iommus */
>  static void domain_update_iommu_cap(struct dmar_domain *domain)
>  {
>   domain_update_iommu_coherency(domain);
>   domain->iommu_snooping =
> domain_update_iommu_snooping(NULL);
>   domain->iommu_superpage =
> domain_update_iommu_superpage(domain, NULL);
> +
> + /*
> +  * If RHSA is missing, we should default to the device numa domain
> +  * as a fallback.
> +  */
> + if (domain->nid == NUMA_NO_NODE)
> + domain->nid = domain_update_device_node(domain);
>  }
> 
>  struct context_entry *iommu_context_addr(struct intel_iommu *iommu, u8
> bus,
> @@ -5086,8 +5115,6 @@ static struct iommu_domain
> *intel_iommu_domain_alloc(unsigned type)
>   if (type == IOMMU_DOMAIN_DMA)
>   intel_init_iova_domain(dmar_domain);
> 
> - domain_update_iommu_cap(dmar_domain);
> -

Is it intended or a mistake? If the former, it looks like a separate fix...

> domain = &dmar_domain->domain;
>   domain->geometry.aperture_start = 0;
>   domain->geometry.aperture_end   =
> --
> 2.17.1



[PATCH 1/1] iommu/vt-d: Use device numa domain if RHSA is missing

2020-08-27 Thread Lu Baolu
If there are multiple NUMA domains but the RHSA is missing in the ACPI/DMAR
table, we could default to the device NUMA domain as a fallback. This also
benefits the vIOMMU use case where only a single vIOMMU is exposed, hence
no RHSA will be present but the device NUMA domain can still be correct.

Cc: Jacob Pan 
Cc: Kevin Tian 
Cc: Ashok Raj 
Signed-off-by: Lu Baolu 
---
 drivers/iommu/intel/iommu.c | 31 +--
 1 file changed, 29 insertions(+), 2 deletions(-)

diff --git a/drivers/iommu/intel/iommu.c b/drivers/iommu/intel/iommu.c
index e0516d64d7a3..bce158468abf 100644
--- a/drivers/iommu/intel/iommu.c
+++ b/drivers/iommu/intel/iommu.c
@@ -700,12 +700,41 @@ static int domain_update_iommu_superpage(struct dmar_domain *domain,
return fls(mask);
 }
 
+static int domain_update_device_node(struct dmar_domain *domain)
+{
+   struct device_domain_info *info;
+   int nid = NUMA_NO_NODE;
+
+   assert_spin_locked(&device_domain_lock);
+
+   if (list_empty(&domain->devices))
+   return NUMA_NO_NODE;
+
+   list_for_each_entry(info, &domain->devices, link) {
+   if (!info->dev)
+   continue;
+
+   nid = dev_to_node(info->dev);
+   if (nid != NUMA_NO_NODE)
+   break;
+   }
+
+   return nid;
+}
+
 /* Some capabilities may be different across iommus */
 static void domain_update_iommu_cap(struct dmar_domain *domain)
 {
domain_update_iommu_coherency(domain);
domain->iommu_snooping = domain_update_iommu_snooping(NULL);
domain->iommu_superpage = domain_update_iommu_superpage(domain, NULL);
+
+   /*
+* If RHSA is missing, we should default to the device numa domain
+* as a fallback.
+*/
+   if (domain->nid == NUMA_NO_NODE)
+   domain->nid = domain_update_device_node(domain);
 }
 
 struct context_entry *iommu_context_addr(struct intel_iommu *iommu, u8 bus,
@@ -5086,8 +5115,6 @@ static struct iommu_domain *intel_iommu_domain_alloc(unsigned type)
if (type == IOMMU_DOMAIN_DMA)
intel_init_iova_domain(dmar_domain);
 
-   domain_update_iommu_cap(dmar_domain);
-
domain = &dmar_domain->domain;
domain->geometry.aperture_start = 0;
domain->geometry.aperture_end   =
-- 
2.17.1
