Re: [PATCH v4 12/15] iommu/vt-d: Cleanup get_valid_domain_for_dev()

2019-08-03 Thread Lu Baolu

Hi,

On 8/3/19 12:54 AM, Alex Williamson wrote:

On Fri, 2 Aug 2019 15:17:45 +0800
Lu Baolu  wrote:


Hi Alex,

Thanks for reporting this. I will try to find a machine with a
pcie-to-pci bridge and get this issue fixed. I will update you
later.


Further debug below...


Thanks for the debugging.




On 8/2/19 9:30 AM, Alex Williamson wrote:

DMAR: No ATSR found
DMAR: dmar0: Using Queued invalidation
DMAR: dmar1: Using Queued invalidation
pci 0000:00:00.0: DMAR: Software identity mapping
pci 0000:00:01.0: DMAR: Software identity mapping
pci 0000:00:02.0: DMAR: Software identity mapping
pci 0000:00:16.0: DMAR: Software identity mapping
pci 0000:00:1a.0: DMAR: Software identity mapping
pci 0000:00:1b.0: DMAR: Software identity mapping
pci 0000:00:1c.0: DMAR: Software identity mapping
pci 0000:00:1c.5: DMAR: Software identity mapping
pci 0000:00:1c.6: DMAR: Software identity mapping
pci 0000:00:1c.7: DMAR: Software identity mapping
pci 0000:00:1d.0: DMAR: Software identity mapping
pci 0000:00:1f.0: DMAR: Software identity mapping
pci 0000:00:1f.2: DMAR: Software identity mapping
pci 0000:00:1f.3: DMAR: Software identity mapping
pci 0000:01:00.0: DMAR: Software identity mapping
pci 0000:01:00.1: DMAR: Software identity mapping
pci 0000:03:00.0: DMAR: Software identity mapping
pci 0000:04:00.0: DMAR: Software identity mapping
DMAR: Setting RMRR:
pci 0000:00:02.0: DMAR: Setting identity map [0xbf800000 - 0xcf9fffff]
pci 0000:00:1a.0: DMAR: Setting identity map [0xbe8d1000 - 0xbe8dffff]
pci 0000:00:1d.0: DMAR: Setting identity map [0xbe8d1000 - 0xbe8dffff]
DMAR: Prepare 0-16MiB unity mapping for LPC
pci 0000:00:1f.0: DMAR: Setting identity map [0x0 - 0xffffff]
pci 0000:00:00.0: Adding to iommu group 0
pci 0000:00:00.0: Using iommu direct mapping
pci 0000:00:01.0: Adding to iommu group 1
pci 0000:00:01.0: Using iommu direct mapping
pci 0000:00:02.0: Adding to iommu group 2
pci 0000:00:02.0: Using iommu direct mapping
pci 0000:00:16.0: Adding to iommu group 3
pci 0000:00:16.0: Using iommu direct mapping
pci 0000:00:1a.0: Adding to iommu group 4
pci 0000:00:1a.0: Using iommu direct mapping
pci 0000:00:1b.0: Adding to iommu group 5
pci 0000:00:1b.0: Using iommu direct mapping
pci 0000:00:1c.0: Adding to iommu group 6
pci 0000:00:1c.0: Using iommu direct mapping
pci 0000:00:1c.5: Adding to iommu group 7
pci 0000:00:1c.5: Using iommu direct mapping
pci 0000:00:1c.6: Adding to iommu group 8
pci 0000:00:1c.6: Using iommu direct mapping
pci 0000:00:1c.7: Adding to iommu group 9


Note that group 9 contains device 00:1c.7


Yes.




pci 0000:00:1c.7: Using iommu direct mapping


I'm booted with iommu=pt, so the domain type is IOMMU_DOMAIN_PT


Yes.




pci 0000:00:1d.0: Adding to iommu group 10
pci 0000:00:1d.0: Using iommu direct mapping
pci 0000:00:1f.0: Adding to iommu group 11
pci 0000:00:1f.0: Using iommu direct mapping
pci 0000:00:1f.2: Adding to iommu group 11
pci 0000:00:1f.3: Adding to iommu group 11
pci 0000:01:00.0: Adding to iommu group 1
pci 0000:01:00.1: Adding to iommu group 1
pci 0000:03:00.0: Adding to iommu group 12
pci 0000:03:00.0: Using iommu direct mapping
pci 0000:04:00.0: Adding to iommu group 13
pci 0000:04:00.0: Using iommu direct mapping
pci 0000:05:00.0: Adding to iommu group 9


05:00.0 is downstream of 00:1c.7 and in the same group.  As above, the
domain is type IOMMU_DOMAIN_IDENTITY, so we take the following branch:

	} else {
		if (device_def_domain_type(dev) == IOMMU_DOMAIN_DMA) {

Default domain type is IOMMU_DOMAIN_DMA because of the code block in
device_def_domain_type() handling bridges to conventional PCI and
conventional PCI endpoints.

			ret = iommu_request_dma_domain_for_dev(dev);

This fails in request_default_domain_for_dev() because there's more
than one device in the group.

			if (ret) {
				dmar_domain->flags |= DOMAIN_FLAG_LOSE_CHILDREN;
				if (!get_private_domain_for_dev(dev)) {

With this commit, this now returns NULL because find_domain() does find
a domain, the same one we found before this code block.
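
(For reference, a sketch of why iommu_request_dma_domain_for_dev() fails
above -- reconstructed from memory of the 5.3-era
request_default_domain_for_dev() in drivers/iommu/iommu.c, so treat the
details as approximate:)

	static int request_default_domain_for_dev(struct device *dev,
						  unsigned long type)
	{
		struct iommu_group *group = iommu_group_get(dev);
		int ret = -EINVAL;

		if (!group)
			return -EINVAL;

		mutex_lock(&group->mutex);

		/* Changing the default domain is only supported for
		 * singleton groups; 00:1c.7 and 05:00.0 share group 9,
		 * so the request fails with -EINVAL. */
		if (iommu_group_device_count(group) != 1)
			goto out;

		/* ... allocate and attach the requested default domain ... */
	out:
		mutex_unlock(&group->mutex);
		iommu_group_put(group);
		return ret;
	}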


This seems to be problematic.

Since the driver failed to request the right default domain type for
the group, it decided to assign a private domain to the device.

However, the device has already been attached to a domain, as shown by
"pci 0000:05:00.0: Adding to iommu group 9", so
get_private_domain_for_dev() returned an error. The same applies to the
identity private domain case.

We need to detach the device from its current domain before requesting
a private one. Can you please check whether the change below works for
you?

diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
index bdaed2da8a55..3ee9b0d20c28 100644
--- a/drivers/iommu/intel-iommu.c
+++ b/drivers/iommu/intel-iommu.c
@@ -4803,7 +4803,8 @@ static void dmar_remove_one_dev_info(struct device *dev)
 
 	spin_lock_irqsave(&device_domain_lock, flags);
 	info = dev->archdata.iommu;
-
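
(The archived diff is truncated at this point. Independent of the hunk
above, a minimal sketch of the "detach first" idea for the caller in
intel_iommu_add_device() -- a hypothetical illustration, not the actual
patch:)

	if (device_def_domain_type(dev) == IOMMU_DOMAIN_DMA) {
		ret = iommu_request_dma_domain_for_dev(dev);
		if (ret) {
			/* Detach the domain the device was attached to
			 * when it joined the group, so that a private
			 * domain can be attached afterwards. */
			dmar_remove_one_dev_info(dev);
			dmar_domain->flags |= DOMAIN_FLAG_LOSE_CHILDREN;
			if (!get_private_domain_for_dev(dev))
				return -ENOMEM;
		}
	}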

Re: [PATCH 4.19 17/32] iommu/vt-d: Don't queue_iova() if there is no flush queue

2019-08-03 Thread Pavel Machek
Hi!

> --- a/drivers/iommu/intel-iommu.c
> +++ b/drivers/iommu/intel-iommu.c
> @@ -3721,7 +3721,7 @@ static void intel_unmap(struct device *d
>  
>   freelist = domain_unmap(domain, start_pfn, last_pfn);
>  
> -	if (intel_iommu_strict) {
> +	if (intel_iommu_strict || !has_iova_flush_queue(&domain->iovad)) {
>   iommu_flush_iotlb_psi(iommu, domain, start_pfn,
> nrpages, !freelist, 0);
>   /* free iova */
> --- a/drivers/iommu/iova.c
> +++ b/drivers/iommu/iova.c
> @@ -65,9 +65,14 @@ init_iova_domain(struct iova_domain *iov
>  }
>  EXPORT_SYMBOL_GPL(init_iova_domain);
>  
> +bool has_iova_flush_queue(struct iova_domain *iovad)
> +{
> + return !!iovad->fq;

Should this be READ_ONCE()?

> @@ -100,13 +106,17 @@ int init_iova_flush_queue(struct iova_do
>   for_each_possible_cpu(cpu) {
>   struct iova_fq *fq;
>  
> - fq = per_cpu_ptr(iovad->fq, cpu);
> + fq = per_cpu_ptr(queue, cpu);
>   fq->head = 0;
>   fq->tail = 0;
>  
> 		spin_lock_init(&fq->lock);
>   }
>  
> + smp_wmb();
> +
> + iovad->fq = queue;
> +

Could we have a comment on why the barrier is needed, and perhaps there
should be an opposing smp_rmb() somewhere? Does this need to be
WRITE_ONCE() as it is racing against the reader?

Thanks,
Pavel

-- 
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) 
http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html
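
(For illustration, the publish/consume pairing Pavel is asking about can
be expressed with release/acquire primitives -- a hypothetical sketch,
not the committed fix:)

	/* Writer: fully initialize every per-cpu fq, then publish the
	 * pointer.  smp_store_release() orders the initialization before
	 * the pointer store, making the bare smp_wmb() and the
	 * WRITE_ONCE() question moot. */
	for_each_possible_cpu(cpu) {
		struct iova_fq *fq = per_cpu_ptr(queue, cpu);

		fq->head = 0;
		fq->tail = 0;
		spin_lock_init(&fq->lock);
	}
	smp_store_release(&iovad->fq, queue);

	/* Reader: READ_ONCE() guarantees a single, non-torn load; the
	 * address dependency on the returned pointer then orders later
	 * dereferences of the queue against the writer's release. */
	bool has_iova_flush_queue(struct iova_domain *iovad)
	{
		return READ_ONCE(iovad->fq) != NULL;
	}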



Re: [PATCH 5/5] dma-mapping: remove ARCH_NO_COHERENT_DMA_MMAP

2019-08-03 Thread Takashi Iwai
On Sat, 03 Aug 2019 12:30:24 +0200,
Christoph Hellwig wrote:
> 
> On Fri, Aug 02, 2019 at 10:24:02AM +0200, Takashi Iwai wrote:
> > I wasn't careful enough to look at that change, sorry.
> > 
> > The code there tries to check whether dma_mmap_coherent() would always
> > fail on some platforms.  Then the driver clears the mmap capability
> > flag at the device open time and notifies user-space to fall back to
> > the dumb read/write mode.
> > 
> > So I'm afraid that simply dropping the check would cause the behavior
> > regression, e.g. on PARISC.
> > 
> > Is there any simple way to test whether dma_mmap_coherent() would work
> > or not in general on the target platform?  It's not necessarily in an
> > ifdef at all.
> 
> This isn't really a platform, but a per-device question.  I can add a
> "bool dma_can_mmap(struct device *dev)" helper to check that.

Yes, this would fit perfectly.

> But how
> do I get at a suitable struct device in hw_support_mmap()?

substream->dma_buffer.dev.dev can be that; it is the device already used
on the mmap helper side, in snd_pcm_lib_default_mmap().


Thanks!

Takashi
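
(A hypothetical sketch of how the proposed helper could slot into the
sound core's check -- dma_can_mmap() did not exist at the time, and the
hw_support_mmap() body below is illustrative only:)

	/* Proposed DMA API addition. */
	bool dma_can_mmap(struct device *dev);

	static bool hw_support_mmap(struct snd_pcm_substream *substream)
	{
		if (!(substream->runtime->hw.info & SNDRV_PCM_INFO_MMAP))
			return false;

		/* Instead of guessing from arch ifdefs, ask the DMA layer
		 * whether dma_mmap_coherent() can work for the device the
		 * buffer was allocated for. */
		return dma_can_mmap(substream->dma_buffer.dev.dev);
	}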


Re: [PATCH 5/5] dma-mapping: remove ARCH_NO_COHERENT_DMA_MMAP

2019-08-03 Thread Christoph Hellwig
On Fri, Aug 02, 2019 at 10:24:02AM +0200, Takashi Iwai wrote:
> I wasn't careful enough to look at that change, sorry.
> 
> The code there tries to check whether dma_mmap_coherent() would always
> fail on some platforms.  Then the driver clears the mmap capability
> flag at the device open time and notifies user-space to fall back to
> the dumb read/write mode.
> 
> So I'm afraid that simply dropping the check would cause the behavior
> regression, e.g. on PARISC.
> 
> Is there any simple way to test whether dma_mmap_coherent() would work
> or not in general on the target platform?  It's not necessarily in an
> ifdef at all.

This isn't really a platform, but a per-device question.  I can add a
"bool dma_can_mmap(struct device *dev)" helper to check that.  But how
do I get at a suitable struct device in hw_support_mmap()?


Re: [PATCH] dma-mapping: fix page attributes for dma_mmap_*

2019-08-03 Thread Christoph Hellwig
On Fri, Aug 02, 2019 at 11:38:03AM +0100, Will Deacon wrote:
> 
> So this boils down to a terminology mismatch. The Arm architecture
> doesn't have anything called "write combine", so in Linux we instead
> provide what the Arm architecture calls "Normal non-cacheable" memory
> for pgprot_writecombine(). Amongst other things, this memory type
> permits speculation, unaligned accesses and merging of writes. I found
> something in the architecture spec about non-cacheable memory, but it's
> written in Armglish[1].
> 
> pgprot_noncached(), on the other hand, provides what the architecture calls
> Strongly Ordered or Device-nGnRnE memory. This is intended for mapping MMIO
> (i.e. PCI config space) and therefore forbids speculation, preserves access
> size, requires strict alignment and also forces write responses to come from
> the endpoint.
> 
> I think the naming mismatch is historical, but on arm64 we wanted to use the
> same names as arm32 so that any drivers using these things directly would get
> the same behaviour.
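
(For reference, the two arm64 mappings Will describes differ only in the
memory-type index in the PTE; quoted from memory of
arch/arm64/include/asm/pgtable.h, so verify before relying on it:)

	/* "Write combine": Normal non-cacheable memory. */
	#define pgprot_writecombine(prot) \
		__pgprot_modify(prot, PTE_ATTRINDX_MASK, \
				PTE_ATTRINDX(MT_NORMAL_NC) | PTE_PXN | PTE_UXN)

	/* "Noncached": Device-nGnRnE, suitable for MMIO. */
	#define pgprot_noncached(prot) \
		__pgprot_modify(prot, PTE_ATTRINDX_MASK, \
				PTE_ATTRINDX(MT_DEVICE_nGnRnE) | PTE_PXN | PTE_UXN)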

That all makes sense, but it totally needs a comment.  I'll try to draft
one based on this.  I've also looked at the arm32 code a bit more, and
it seems arm always (?) supported the Normal non-cacheable attribute, but
Linux only optionally uses it for arm v6+ because of fears of drivers
missing barriers.  The other really weird thing is that in arm32
pgprot_dmacoherent includes the L_PTE_XN bit, which from my understanding
is the no-execute bit, but pgprot_writecombine does not.  This seems
unintentional.  So minus that, the whole DMA_ATTR_WRITE_COMBINE seems to
be about flagging old arm-specific drivers as having the proper barriers
in place, and otherwise is a no-op.

Here is my tentative plan:

 - respin this patch with a small fix to handle the
   DMA_ATTR_NON_CONSISTENT (as in ignore it unless actually supported),
   but keep the name as-is to avoid churn.  This should allow 5.3
   inclusion and backports
 - remove DMA_ATTR_WRITE_COMBINE support from mips, probably also 5.3
   material.
 - move all architectures but arm over to just define
   pgprot_dmacoherent, including a comment with the above explanation
   for arm64.
 - make DMA_ATTR_WRITE_COMBINE a no-op and schedule it for removal,
   thus removing the last instances of arch_dma_mmap_pgprot
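
(A sketch of what the third step could look like: a generic fallback
plus an arm64 override -- hypothetical, not the eventual patch:)

	/* Generic fallback: the conservative attribute. */
	#ifndef pgprot_dmacoherent
	#define pgprot_dmacoherent(prot)	pgprot_noncached(prot)
	#endif

	/* arm64: Normal non-cacheable is sufficient for non-coherent DMA
	 * buffers, since the DMA API supplies the barriers that old arm32
	 * drivers were suspected of missing. */
	#define pgprot_dmacoherent(prot) \
		__pgprot_modify(prot, PTE_ATTRINDX_MASK, \
				PTE_ATTRINDX(MT_NORMAL_NC) | PTE_PXN | PTE_UXN)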


Re: [PATCH v1] dma-mapping: normal memory for mmap() on coherent architectures

2019-08-03 Thread Christoph Hellwig
On Fri, Aug 02, 2019 at 11:35:48PM -0700, Gavin Li wrote:
> Ah, seems like it wasn't as simple of a fix as I thought :)

With our mess of dma flags nothing is ever as simple as it seems...

> I intended to use dma_mmap_coherent() in the usbfs driver
> (drivers/usb/core/devio.c) because its mmap() was broken on arm64 and
> all the other noncoherent DMA architectures.

Good to know, that is the right thing to do!

I'll try to expedite the fix to 5.3-rc + stable backports once we agree
on the details.


Re: [PATCH v1] dma-mapping: normal memory for mmap() on coherent architectures

2019-08-03 Thread Gavin Li
Ah, seems like it wasn't as simple of a fix as I thought :)

I intended to use dma_mmap_coherent() in the usbfs driver
(drivers/usb/core/devio.c) because its mmap() was broken on arm64 and
all the other noncoherent DMA architectures.

Patch here: https://www.spinics.net/lists/linux-usb/msg183148.html
More info: https://www.spinics.net/lists/linux-usb/msg183180.html

On Fri, Aug 2, 2019 at 11:23 PM Christoph Hellwig  wrote:
>
> See the discussion at:
>
> https://lists.linuxfoundation.org/pipermail/iommu/2019-August/037716.html
>
> Just curious, what driver do you use that uses dma_mmap_coherent on
> x86?
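
(A minimal sketch of the usbfs conversion described above; see the
linked patches for the real change.  "ps" follows the devio.c naming,
error handling is omitted, and the exact DMA device is an assumption:)

	static int usbdev_mmap(struct file *file, struct vm_area_struct *vma)
	{
		struct usb_dev_state *ps = file->private_data;
		size_t size = vma->vm_end - vma->vm_start;
		dma_addr_t dma_handle;
		void *mem;

		mem = usb_alloc_coherent(ps->dev, size, GFP_USER, &dma_handle);
		if (!mem)
			return -ENOMEM;

		/* dma_mmap_coherent() picks the correct pgprot and backing
		 * pages on every architecture, unlike an open-coded
		 * remap_pfn_range() on the kernel virtual address, which
		 * is what broke on noncoherent arm64. */
		return dma_mmap_coherent(ps->dev->bus->sysdev, vma, mem,
					 dma_handle, size);
	}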


Re: [PATCH v1] dma-mapping: normal memory for mmap() on coherent architectures

2019-08-03 Thread Christoph Hellwig
See the discussion at:

https://lists.linuxfoundation.org/pipermail/iommu/2019-August/037716.html

Just curious, what driver do you use that uses dma_mmap_coherent on
x86?