On Wed, 2015-10-14 at 13:19 +0200, Joerg Roedel wrote:
> On Tue, Oct 13, 2015 at 05:18:29PM +0200, Jinpu Wang wrote:
> > > and during search I found a patch from Christian:
> > > http://permalink.gmane.org/gmane.linux.kernel.iommu/9993
> 
> I think David already applied this patch.

However, in doing so I've noticed another problem.

Note how we carefully avoid *freeing* the old page table pages, in
__domain_unmap(). We keep them on a list and don't free them until
*after* the IOTLB has been flushed.

That's because the hardware can still be accessing them.

However, that call to dma_pte_free_pagetable() from within
__domain_mapping() doesn't do that.

It's irrelevant in the DMA API case, since we'll never be overwriting
the existing mappings in an IOVA range anyway. That would be a bug.

But for the IOMMU API when we're changing an existing mapping, we
really need to do that properly.

I think we want something like an additional (struct page **) argument
to __domain_mapping(), for it to put any such freed pages into.

Callers *other* than intel_iommu_map() can just pass NULL, and if it
has to free any page table pages when it's NULL, it can BUG().

Then intel_iommu_map() can free the pages once it's done the IOTLB
flush... an IOTLB flush which, I note, is currently completely absent
from intel_iommu_map(). Which is another bug.
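A rough sketch of what I mean, with hypothetical names and arguments elided; this is illustrative pseudocode of the proposed interface, not actual kernel code:

```c
/* __domain_mapping() grows a (struct page **) out-argument for any
 * page-table pages it displaces; intel_iommu_map() frees them only
 * after its (currently missing) IOTLB flush. */
static int __domain_mapping(struct dmar_domain *domain, unsigned long iov_pfn,
			    ..., struct page **freed_pages)
{
	...
	if (old_pte_page) {
		/* DMA API callers never overwrite mappings, so they
		 * pass NULL; freeing here with NULL is a bug. */
		BUG_ON(!freed_pages);
		/* chain the displaced page instead of freeing it now */
		old_pte_page->freelist = *freed_pages;
		*freed_pages = old_pte_page;
	}
	...
}

static int intel_iommu_map(struct iommu_domain *domain, ...)
{
	struct page *freed_pages = NULL;
	int ret = __domain_mapping(dmar_domain, iov_pfn, ..., &freed_pages);

	/* the flush this path is currently missing */
	flush_iotlb(dmar_domain, ...);
	/* only now is it safe to free the old page-table pages */
	free_page_list(freed_pages);
	return ret;
}
```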

-- 
David Woodhouse                            Open Source Technology Centre
[email protected]                              Intel Corporation

_______________________________________________
iommu mailing list
[email protected]
https://lists.linuxfoundation.org/mailman/listinfo/iommu
