Re: [PATCH v4 07/15] iommu/vt-d: Delegate the dma domain to upper layer

2020-08-24 Thread Lu Baolu

Hi Chris,

On 8/24/20 4:35 PM, Chris Wilson wrote:

> Quoting Lu Baolu (2020-08-24 07:31:23)
>
>> Hi Chris,
>>
>> On 2020/8/22 2:33, Chris Wilson wrote:
>>
>>> Quoting Lu Baolu (2019-05-25 06:41:28)
>>>
>>>> This allows the iommu generic layer to allocate a dma domain and
>>>> attach it to a device through the iommu api's. With all types of
>>>> domains being delegated to upper layer, we can remove an internal
>>>> flag which was used to distinguish domains managed internally or
>>>> externally.
>>>
>>> I'm seeing some really strange behaviour with this patch on a 32b
>>> Skylake system (and still present on mainline). Before this patch
>>> everything is peaceful and appears to work correctly. Applying this patch,
>>> and we fail to initialise the GPU with a few DMAR errors reported, e.g.
>>>
>>> [   20.279445] DMAR: DRHD: handling fault status reg 3
>>> [   20.279508] DMAR: [DMA Read] Request device [00:02.0] fault addr 8900a000 [fault reason 05] PTE Write access is not set
>>>
>>> Setting an identity map for the igfx made the DMAR errors disappear, but
>>> the GPU still failed to initialise.
>>>
>>> There's no difference in the DMAR configuration dmesg between working and
>>> the upset patch. And the really strange part is that switching to a 64b
>>> kernel with this patch, it's working.
>>>
>>> Any suggestions on what I should look for?
>>
>> Can the patch titled "[PATCH] iommu/intel: Handle 36b addressing for
>> x86-32" solve this problem?
>
> It does. Not sure why, but that mystery I can leave for others.


It's caused by a left shift for 36-bit addressing being performed on a
32-bit integer. Your patch fixes this by converting the integer from
unsigned long to u64. It looks good to me. Thanks!


> -Chris



Best regards,
baolu
___
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu


Re: [PATCH v4 07/15] iommu/vt-d: Delegate the dma domain to upper layer

2020-08-24 Thread Chris Wilson
Quoting Lu Baolu (2020-08-24 07:31:23)
> Hi Chris,
> 
> On 2020/8/22 2:33, Chris Wilson wrote:
> > Quoting Lu Baolu (2019-05-25 06:41:28)
> >> This allows the iommu generic layer to allocate a dma domain and
> >> attach it to a device through the iommu api's. With all types of
> >> domains being delegated to upper layer, we can remove an internal
> >> flag which was used to distinguish domains managed internally or
> >> externally.
> > 
> > I'm seeing some really strange behaviour with this patch on a 32b
> > Skylake system (and still present on mainline). Before this patch
> > everything is peaceful and appears to work correctly. Applying this patch,
> > and we fail to initialise the GPU with a few DMAR errors reported, e.g.
> > 
> > [   20.279445] DMAR: DRHD: handling fault status reg 3
> > [   20.279508] DMAR: [DMA Read] Request device [00:02.0] fault addr 
> > 8900a000 [fault reason 05] PTE Write access is not set
> > 
> > Setting an identity map for the igfx made the DMAR errors disappear, but
> > the GPU still failed to initialise.
> > 
> > There's no difference in the DMAR configuration dmesg between working and
> > the upset patch. And the really strange part is that switching to a 64b
> > kernel with this patch, it's working.
> > 
> > Any suggestions on what I should look for?
> 
> Can the patch titled "[PATCH] iommu/intel: Handle 36b addressing for
> x86-32" solve this problem?

It does. Not sure why, but that mystery I can leave for others.
-Chris


Re: [PATCH v4 07/15] iommu/vt-d: Delegate the dma domain to upper layer

2020-08-24 Thread Lu Baolu

Hi Chris,

On 2020/8/22 2:33, Chris Wilson wrote:

> Quoting Lu Baolu (2019-05-25 06:41:28)
>
>> This allows the iommu generic layer to allocate a dma domain and
>> attach it to a device through the iommu api's. With all types of
>> domains being delegated to upper layer, we can remove an internal
>> flag which was used to distinguish domains managed internally or
>> externally.
>
> I'm seeing some really strange behaviour with this patch on a 32b
> Skylake system (and still present on mainline). Before this patch
> everything is peaceful and appears to work correctly. Applying this patch,
> and we fail to initialise the GPU with a few DMAR errors reported, e.g.
>
> [   20.279445] DMAR: DRHD: handling fault status reg 3
> [   20.279508] DMAR: [DMA Read] Request device [00:02.0] fault addr 8900a000 [fault reason 05] PTE Write access is not set
>
> Setting an identity map for the igfx made the DMAR errors disappear, but
> the GPU still failed to initialise.
>
> There's no difference in the DMAR configuration dmesg between working and
> the upset patch. And the really strange part is that switching to a 64b
> kernel with this patch, it's working.
>
> Any suggestions on what I should look for?


Can the patch titled "[PATCH] iommu/intel: Handle 36b addressing for
x86-32" solve this problem?

Best regards,
baolu


Re: [PATCH v4 07/15] iommu/vt-d: Delegate the dma domain to upper layer

2020-08-21 Thread Chris Wilson
Quoting Lu Baolu (2019-05-25 06:41:28)
> This allows the iommu generic layer to allocate a dma domain and
> attach it to a device through the iommu api's. With all types of
> domains being delegated to upper layer, we can remove an internal
> flag which was used to distinguish domains managed internally or
> externally.

I'm seeing some really strange behaviour with this patch on a 32b
Skylake system (and still present on mainline). Before this patch
everything is peaceful and appears to work correctly. Applying this patch,
and we fail to initialise the GPU with a few DMAR errors reported, e.g.

[   20.279445] DMAR: DRHD: handling fault status reg 3
[   20.279508] DMAR: [DMA Read] Request device [00:02.0] fault addr 8900a000 [fault reason 05] PTE Write access is not set

Setting an identity map for the igfx made the DMAR errors disappear, but
the GPU still failed to initialise.

There's no difference in the DMAR configuration dmesg between working and
the upset patch. And the really strange part is that switching to a 64b
kernel with this patch, it's working.

Any suggestions on what I should look for?
-Chris


[PATCH v4 07/15] iommu/vt-d: Delegate the dma domain to upper layer

2019-05-24 Thread Lu Baolu
This allows the iommu generic layer to allocate a dma domain and
attach it to a device through the iommu api's. With all types of
domains being delegated to upper layer, we can remove an internal
flag which was used to distinguish domains managed internally or
externally.

Signed-off-by: James Sewart 
Signed-off-by: Lu Baolu 
---
 drivers/iommu/intel-iommu.c | 71 ++---
 1 file changed, 18 insertions(+), 53 deletions(-)

diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
index 907f9323921d..d246b4a9ac1d 100644
--- a/drivers/iommu/intel-iommu.c
+++ b/drivers/iommu/intel-iommu.c
@@ -302,14 +302,8 @@ static inline void context_clear_entry(struct context_entry *context)
 static struct dmar_domain *si_domain;
 static int hw_pass_through = 1;
 
-/*
- * Domain represents a virtual machine, more than one devices
- * across iommus may be owned in one domain, e.g. kvm guest.
- */
-#define DOMAIN_FLAG_VIRTUAL_MACHINE (1 << 0)
-
 /* si_domain contains mulitple devices */
-#define DOMAIN_FLAG_STATIC_IDENTITY (1 << 1)
+#define DOMAIN_FLAG_STATIC_IDENTITY BIT(0)
 
 #define for_each_domain_iommu(idx, domain) \
for (idx = 0; idx < g_num_of_iommus; idx++) \
@@ -543,22 +537,11 @@ static inline void free_devinfo_mem(void *vaddr)
kmem_cache_free(iommu_devinfo_cache, vaddr);
 }
 
-static inline int domain_type_is_vm(struct dmar_domain *domain)
-{
-   return domain->flags & DOMAIN_FLAG_VIRTUAL_MACHINE;
-}
-
 static inline int domain_type_is_si(struct dmar_domain *domain)
 {
return domain->flags & DOMAIN_FLAG_STATIC_IDENTITY;
 }
 
-static inline int domain_type_is_vm_or_si(struct dmar_domain *domain)
-{
-   return domain->flags & (DOMAIN_FLAG_VIRTUAL_MACHINE |
-   DOMAIN_FLAG_STATIC_IDENTITY);
-}
-
 static inline int domain_pfn_supported(struct dmar_domain *domain,
   unsigned long pfn)
 {
@@ -606,7 +589,9 @@ struct intel_iommu *domain_get_iommu(struct dmar_domain *domain)
int iommu_id;
 
/* si_domain and vm domain should not get here. */
-   BUG_ON(domain_type_is_vm_or_si(domain));
+   if (WARN_ON(domain->domain.type != IOMMU_DOMAIN_DMA))
+   return NULL;
+
for_each_domain_iommu(iommu_id, domain)
break;
 
@@ -1675,7 +1660,6 @@ static void disable_dmar_iommu(struct intel_iommu *iommu)
if (!iommu->domains || !iommu->domain_ids)
return;
 
-again:
spin_lock_irqsave(&device_domain_lock, flags);
list_for_each_entry_safe(info, tmp, &device_domain_list, global) {
struct dmar_domain *domain;
@@ -1689,18 +1673,6 @@ static void disable_dmar_iommu(struct intel_iommu *iommu)
domain = info->domain;
 
__dmar_remove_one_dev_info(info);
-
-   if (!domain_type_is_vm_or_si(domain)) {
-   /*
-* The domain_exit() function  can't be called under
-* device_domain_lock, as it takes this lock itself.
-* So release the lock here and re-run the loop
-* afterwards.
-*/
-   spin_unlock_irqrestore(&device_domain_lock, flags);
-   domain_exit(domain);
-   goto again;
-   }
}
spin_unlock_irqrestore(&device_domain_lock, flags);
 
@@ -2365,7 +2337,7 @@ static int domain_mapping(struct dmar_domain *domain, unsigned long iov_pfn,
  struct scatterlist *sg, unsigned long phys_pfn,
  unsigned long nr_pages, int prot)
 {
-   int ret;
+   int iommu_id, ret;
struct intel_iommu *iommu;
 
/* Do the real mapping first */
@@ -2373,18 +2345,8 @@ static int domain_mapping(struct dmar_domain *domain, unsigned long iov_pfn,
if (ret)
return ret;
 
-   /* Notify about the new mapping */
-   if (domain_type_is_vm(domain)) {
-   /* VM typed domains can have more than one IOMMUs */
-   int iommu_id;
-
-   for_each_domain_iommu(iommu_id, domain) {
-   iommu = g_iommus[iommu_id];
-   __mapping_notify_one(iommu, domain, iov_pfn, nr_pages);
-   }
-   } else {
-   /* General domains only have one IOMMU */
-   iommu = domain_get_iommu(domain);
+   for_each_domain_iommu(iommu_id, domain) {
+   iommu = g_iommus[iommu_id];
__mapping_notify_one(iommu, domain, iov_pfn, nr_pages);
}
 
@@ -4619,9 +4581,6 @@ static int device_notifier(struct notifier_block *nb,
return 0;
 
dmar_remove_one_dev_info(dev);
-   if (!domain_type_is_vm_or_si(domain) &&
-   list_empty(&domain->devices))
-   domain_exit(domain);