Re: [PATCH v2 0/3] iommu/amd: I/O VA address limits

2020-07-22 Thread Sironi, Filippo via iommu
On Wed, 2020-07-22 at 14:19 +0200, j...@8bytes.org wrote:
> 
> On Fri, Jul 17, 2020 at 03:15:43PM +, Sironi, Filippo wrote:
> > I don't believe that we want to trust a userspace driver here; this
> > may result in hosts becoming unstable because devices are asked to
> > do things they aren't meant to do (e.g., DMA beyond 48 bits).
> 
> How is the host's stability affected when a device is programmed with
> handles outside of its DMA mask?

I wouldn't be surprised if a PCIe device raises a PCIe SERR if it is
asked to do DMA beyond its abilities.

> Anyway, someone needs to know how to use the device in the end, and
> this entity also needs to know the DMA mask of the device and pass it
> down the path to the dma-iommu code.
> 
> Regards,
> 
> Joerg

I agree, device drivers should do the right thing, and if we have
generic device drivers they may need "quirks" based on VID:DID to
behave correctly.

Still, I believe that the VFIO case is special because the device driver
is effectively in userspace, and I really think that trusting userspace
isn't quite correct (we can keep discussing on this front).

I think that this discussion is orthogonal to the spirit of the
patches. They are actually adding a few bits to the AMD IOMMU driver
(and propagating them to the right places) to implement a portion of the
specification that wasn't implemented before, i.e., relying on the IVRS
table. These patches are valuable independently of the content of the
IVRS table, be it 32, 48, or 64 bits.
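
For reference, the driver-side mechanism discussed elsewhere in this
thread is the DMA mask API. A minimal sketch of a PCI driver declaring a
48-bit limit (the probe function name is hypothetical;
dma_set_mask_and_coherent() is the real kernel API):

    #include <linux/dma-mapping.h>
    #include <linux/pci.h>

    /* Sketch: a driver telling the DMA layer that its device can only
     * generate 48-bit addresses, with a 32-bit fallback. */
    static int my_driver_probe(struct pci_dev *pdev,
                               const struct pci_device_id *id)
    {
            int ret;

            ret = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(48));
            if (ret)
                    ret = dma_set_mask_and_coherent(&pdev->dev,
                                                    DMA_BIT_MASK(32));
            return ret;
    }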

Filippo







Re: [PATCH v2 0/3] iommu/amd: I/O VA address limits

2020-07-17 Thread Sironi, Filippo via iommu
On Fri, 2020-07-17 at 15:36 +0100, Robin Murphy wrote:
> On 2020-07-17 14:22, Sironi, Filippo wrote:
> > On Fri, 2020-07-17 at 10:47 +0100, Robin Murphy wrote:
> > > 
> > > On 2020-07-17 10:20, Sebastian Ott via iommu wrote:
> > > > Hello Joerg,
> > > > 
> > > > On 2020-07-10 14:31, Joerg Roedel wrote:
> > > > > On Wed, Jul 01, 2020 at 12:46:31AM +0200, Sebastian Ott wrote:
> > > > > > The IVRS ACPI table specifies maximum address sizes for I/O
> > > > > > virtual addresses that can be handled by the IOMMUs in the
> > > > > > system. Parse that data from the IVRS header to provide
> > > > > > aperture information for DMA mappings and users of the iommu
> > > > > > API.
> > > > > > 
> > > > > > Changes for V2:
> > > > > >- use limits in iommu_setup_dma_ops()
> > > > > >- rebased to current upstream
> > > > > > 
> > > > > > Sebastian Ott (3):
> > > > > > iommu/amd: Parse supported address sizes from IVRS
> > > > > > iommu/amd: Restrict aperture for domains to conform with
> > > > > > IVRS
> > > > > > iommu/amd: Actually enforce geometry aperture
> > > > > 
> > > > > Thanks for the changes. May I ask what the reason for those
> > > > > changes is? AFAIK all AMD IOMMU implementations (in hardware)
> > > > > support full 64-bit address spaces, and the IVRS table might
> > > > > actually be wrong, limiting the address space in the worst
> > > > > case to only 32 bits.
> > > > 
> > > > It's not the IOMMU, but we've encountered devices that are
> > > > capable of more than 32- but less than 64-bit IOVA, and there's
> > > > no way to express that to the IOVA allocator in the PCIe spec.
> > > > Our solution was to have our platforms express an IVRS entry
> > > > that says the IOMMU is capable of 48-bit, which these devices
> > > > can generate. 48 bits is plenty of address space in this
> > > > generation for the application we have.
> > > 
> > > Hmm, for constraints of individual devices, it should really be
> > > their drivers' job to call dma_set_mask*() with the appropriate
> > > value in the first place rather than just assuming that 64 means
> > > anything >32. If it's a case where the device itself technically
> > > is 64-bit capable, but an upstream bridge is constraining it, then
> > > that limit can also be described either by dedicated firmware
> > > properties (e.g. ACPI _DMA) or with a quirk like via_no_dac().
> > > 
> > > Robin.
> > 
> > You cannot rely on the device driver alone because the attached
> > driver might be a generic one like vfio-pci, for instance, that
> > doesn't have any device-specific knowledge.
> 
> Indeed, but on the other hand a generic driver that doesn't know the
> device is highly unlikely to set up any DMA transactions by itself
> either. In the case of VFIO, it would then be the guest/userspace
> driver's responsibility to take the equivalent action to avoid
> allocating addresses the hardware can't actually use.

I don't believe that we want to trust a userspace driver here; this may
result in hosts becoming unstable because devices are asked to do things
they aren't meant to do (e.g., DMA beyond 48 bits).

> I'm mostly just wary that trying to fake up a per-device restriction
> as a global one is a bit crude, and has the inherent problem that
> whatever you think the lowest common denominator is, there's the
> potential for some device to be hotplugged in later and break the
> assumption you've already had to commit to.

I agree: if the BIOS sets up an IVRS table with an aperture of 48 bits
and all of a sudden we hotplug a device that only supports 36 bits,
we're in a bad place.
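
For context, this is roughly how the dma-iommu layer applies such an
aperture when allocating IOVAs (a sketch following the clamp in
iommu_dma_alloc_iova() in drivers/iommu/dma-iommu.c; the helper name
here is ours):

    /* Sketch: the device's DMA limit is clamped to the domain aperture,
     * so a 48-bit IVRS aperture caps every allocation -- but it cannot
     * help a hotplugged device that only supports 36 bits. */
    static u64 clamp_to_aperture(struct iommu_domain *domain, u64 dma_limit)
    {
            if (domain->geometry.force_aperture)
                    dma_limit = min(dma_limit,
                                    (u64)domain->geometry.aperture_end);
            return dma_limit;
    }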

> And of course I am taking a bit of a DMA-API-centric viewpoint here -
> I think exposing per-device properties like bus_dma_limit that aren't
> easily identifiable for VFIO users to take into account is still
> rather an open problem.
> 
> Robin.

The use of ACPI _DMA that you suggest looks interesting and would, I
think, allow the kernel to prevent a dumb userspace driver using VFIO
from doing damage.

It doesn't look like there's much support for ACPI _DMA though. Are you
aware of existing efforts on this front?
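
For what it's worth, the kernel-side plumbing that a firmware-described
limit like _DMA feeds into is dev->bus_dma_limit. A sketch of how the
effective limit is derived, following the dma-direct code (zero means
"no bus limit"):

    #include <linux/device.h>
    #include <linux/dma-mapping.h>
    #include <linux/kernel.h>

    /* Sketch: dma-direct combines the driver-set DMA mask with any
     * firmware-imposed bus limit. */
    static u64 effective_dma_limit(struct device *dev)
    {
            return min_not_zero(dma_get_mask(dev), dev->bus_dma_limit);
    }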

Filippo







Re: [PATCH v2 0/3] iommu/amd: I/O VA address limits

2020-07-17 Thread Sironi, Filippo via iommu
On Fri, 2020-07-17 at 10:47 +0100, Robin Murphy wrote:
> 
> On 2020-07-17 10:20, Sebastian Ott via iommu wrote:
> > Hello Joerg,
> > 
> > On 2020-07-10 14:31, Joerg Roedel wrote:
> > > On Wed, Jul 01, 2020 at 12:46:31AM +0200, Sebastian Ott wrote:
> > > > The IVRS ACPI table specifies maximum address sizes for I/O
> > > > virtual addresses that can be handled by the IOMMUs in the
> > > > system. Parse that data from the IVRS header to provide aperture
> > > > information for DMA mappings and users of the iommu API.
> > > > 
> > > > Changes for V2:
> > > >   - use limits in iommu_setup_dma_ops()
> > > >   - rebased to current upstream
> > > > 
> > > > Sebastian Ott (3):
> > > >iommu/amd: Parse supported address sizes from IVRS
> > > >iommu/amd: Restrict aperture for domains to conform with IVRS
> > > >iommu/amd: Actually enforce geometry aperture
> > > 
> > > Thanks for the changes. May I ask what the reason for those
> > > changes is? AFAIK all AMD IOMMU implementations (in hardware)
> > > support full 64-bit address spaces, and the IVRS table might
> > > actually be wrong, limiting the address space in the worst case to
> > > only 32 bits.
> > 
> > It's not the IOMMU, but we've encountered devices that are capable
> > of more than 32- but less than 64-bit IOVA, and there's no way to
> > express that to the IOVA allocator in the PCIe spec. Our solution
> > was to have our platforms express an IVRS entry that says the IOMMU
> > is capable of 48-bit, which these devices can generate. 48 bits is
> > plenty of address space in this generation for the application we
> > have.
> 
> Hmm, for constraints of individual devices, it should really be their
> drivers' job to call dma_set_mask*() with the appropriate value in the
> first place rather than just assuming that 64 means anything >32. If
> it's a case where the device itself technically is 64-bit capable, but
> an upstream bridge is constraining it, then that limit can also be
> described either by dedicated firmware properties (e.g. ACPI _DMA) or
> with a quirk like via_no_dac().
> 
> Robin.

You cannot rely on the device driver alone because the attached driver
might be a generic one like vfio-pci, for instance, that doesn't have
any device-specific knowledge.
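
As an aside, VFIO does expose the IOMMU-side usable IOVA ranges -- which
these IVRS patches would shrink -- via VFIO_IOMMU_GET_INFO, though not
any narrower per-device mask. A rough userspace sketch (error handling
omitted):

    #include <linux/vfio.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/ioctl.h>

    /* Sketch: walk the capability chain of VFIO_IOMMU_GET_INFO and
     * print the usable IOVA ranges of a type1 container. */
    static void print_iova_ranges(int container_fd)
    {
            struct vfio_iommu_type1_info *info;
            struct vfio_info_cap_header *hdr;
            struct vfio_iommu_type1_info_cap_iova_range *cap;
            __u32 i;

            info = calloc(1, sizeof(*info));
            info->argsz = sizeof(*info);
            ioctl(container_fd, VFIO_IOMMU_GET_INFO, info); /* learn argsz */
            info = realloc(info, info->argsz);
            ioctl(container_fd, VFIO_IOMMU_GET_INFO, info); /* fetch caps */

            if (!(info->flags & VFIO_IOMMU_INFO_CAPS))
                    goto out;

            hdr = (void *)info + info->cap_offset;
            for (;;) {
                    if (hdr->id == VFIO_IOMMU_TYPE1_INFO_CAP_IOVA_RANGE) {
                            cap = (void *)hdr;
                            for (i = 0; i < cap->nr_iovas; i++)
                                    printf("iova range: 0x%llx-0x%llx\n",
                                           cap->iova_ranges[i].start,
                                           cap->iova_ranges[i].end);
                    }
                    if (!hdr->next)
                            break;
                    hdr = (void *)info + hdr->next;
            }
    out:
            free(info);
    }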

Filippo







Re: [PATCH 6/6] iommu/amd: Lock code paths traversing protection_domain->dev_list

2019-09-25 Thread Sironi, Filippo via iommu



> On 25. Sep 2019, at 06:23, Joerg Roedel  wrote:
> 
> From: Joerg Roedel 
> 
> The traversing of this list requires protection_domain->lock to be taken
> to avoid nasty races with attach/detach code. Make sure the lock is held
> on all code-paths traversing this list.
> 
> Reported-by: Filippo Sironi 
> Fixes: 92d420ec028d ("iommu/amd: Relax locking in dma_ops path")
> Signed-off-by: Joerg Roedel 
> ---
> drivers/iommu/amd_iommu.c | 25 -
> 1 file changed, 24 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/iommu/amd_iommu.c b/drivers/iommu/amd_iommu.c
> index bac4e20a5919..9c26976a0f99 100644
> --- a/drivers/iommu/amd_iommu.c
> +++ b/drivers/iommu/amd_iommu.c
> @@ -1334,8 +1334,12 @@ static void domain_flush_np_cache(struct protection_domain *domain,
>   dma_addr_t iova, size_t size)
> {
>   if (unlikely(amd_iommu_np_cache)) {
> + unsigned long flags;
> +
> + spin_lock_irqsave(&domain->lock, flags);
>   domain_flush_pages(domain, iova, size);
>   domain_flush_complete(domain);
> + spin_unlock_irqrestore(&domain->lock, flags);
>   }
> }
> 
> @@ -1700,8 +1704,13 @@ static int iommu_map_page(struct protection_domain *dom,
>   ret = 0;
> 
> out:
> - if (updated)
> + if (updated) {
> + unsigned long flags;
> +
> + spin_lock_irqsave(&dom->lock, flags);
>   update_domain(dom);
> + spin_unlock_irqrestore(&dom->lock, flags);
> + }
> 
>   /* Everything flushed out, free pages now */
>   free_page_list(freelist);
> @@ -1857,8 +1866,12 @@ static void free_gcr3_table(struct protection_domain *domain)
> 
> static void dma_ops_domain_flush_tlb(struct dma_ops_domain *dom)
> {
> + unsigned long flags;
> +
> + spin_lock_irqsave(&dom->domain.lock, flags);
>   domain_flush_tlb(&dom->domain);
>   domain_flush_complete(&dom->domain);
> + spin_unlock_irqrestore(&dom->domain.lock, flags);
> }
> 
> static void iova_domain_flush_tlb(struct iova_domain *iovad)
> @@ -2414,6 +2427,7 @@ static dma_addr_t __map_single(struct device *dev,
> {
>   dma_addr_t offset = paddr & ~PAGE_MASK;
>   dma_addr_t address, start, ret;
> + unsigned long flags;
>   unsigned int pages;
>   int prot = 0;
>   int i;
> @@ -2451,8 +2465,10 @@ static dma_addr_t __map_single(struct device *dev,
>   iommu_unmap_page(&dma_dom->domain, start, PAGE_SIZE);
>   }
> 
> + spin_lock_irqsave(&dma_dom->domain.lock, flags);
>   domain_flush_tlb(&dma_dom->domain);
>   domain_flush_complete(&dma_dom->domain);
> + spin_unlock_irqrestore(&dma_dom->domain.lock, flags);
> 
>   dma_ops_free_iova(dma_dom, address, pages);
> 
> @@ -2481,8 +2497,12 @@ static void __unmap_single(struct dma_ops_domain *dma_dom,
>   }
> 
>   if (amd_iommu_unmap_flush) {
> + unsigned long flags;
> +
> + spin_lock_irqsave(&dma_dom->domain.lock, flags);
>   domain_flush_tlb(&dma_dom->domain);
>   domain_flush_complete(&dma_dom->domain);
> + spin_unlock_irqrestore(&dma_dom->domain.lock, flags);
>   dma_ops_free_iova(dma_dom, dma_addr, pages);
>   } else {
>   pages = __roundup_pow_of_two(pages);
> @@ -3246,9 +3266,12 @@ static bool amd_iommu_is_attach_deferred(struct iommu_domain *domain,
> static void amd_iommu_flush_iotlb_all(struct iommu_domain *domain)
> {
>   struct protection_domain *dom = to_pdomain(domain);
> + unsigned long flags;
> 
> + spin_lock_irqsave(&dom->lock, flags);
>   domain_flush_tlb_pde(dom);
>   domain_flush_complete(dom);
> + spin_unlock_irqrestore(&dom->lock, flags);
> }
> 
> static void amd_iommu_iotlb_sync(struct iommu_domain *domain,
> -- 
> 2.17.1
> 

Reviewed-by: Filippo Sironi 





Re: [PATCH 5/6] iommu/amd: Lock dev_data in attach/detach code paths

2019-09-25 Thread Sironi, Filippo via iommu



> On 25. Sep 2019, at 06:22, Joerg Roedel  wrote:
> 
> From: Joerg Roedel 
> 
> Make sure that attaching and detaching a device can't race against
> each other and protect the iommu_dev_data with a spin_lock in these
> code paths.
> 
> Fixes: 92d420ec028d ("iommu/amd: Relax locking in dma_ops path")
> Signed-off-by: Joerg Roedel 
> ---
> drivers/iommu/amd_iommu.c   | 9 +
> drivers/iommu/amd_iommu_types.h | 3 +++
> 2 files changed, 12 insertions(+)
> 
> diff --git a/drivers/iommu/amd_iommu.c b/drivers/iommu/amd_iommu.c
> index 459247c32dc0..bac4e20a5919 100644
> --- a/drivers/iommu/amd_iommu.c
> +++ b/drivers/iommu/amd_iommu.c
> @@ -201,6 +201,7 @@ static struct iommu_dev_data *alloc_dev_data(u16 devid)
>   if (!dev_data)
>   return NULL;
> 
> + spin_lock_init(&dev_data->lock);
>   dev_data->devid = devid;
>   ratelimit_default_init(&dev_data->rs);
> 
> @@ -2157,6 +2158,8 @@ static int attach_device(struct device *dev,
> 
>   dev_data = get_dev_data(dev);
> 
> + spin_lock(&dev_data->lock);
> +
>   ret = -EBUSY;
>   if (dev_data->domain != NULL)
>   goto out;
> @@ -2199,6 +2202,8 @@ static int attach_device(struct device *dev,
>   domain_flush_complete(domain);
> 
> out:
> + spin_unlock(&dev_data->lock);
> +
>   spin_unlock_irqrestore(&domain->lock, flags);
> 
>   return ret;
> @@ -2218,6 +2223,8 @@ static void detach_device(struct device *dev)
> 
>   spin_lock_irqsave(&domain->lock, flags);
> 
> + spin_lock(&dev_data->lock);
> +
>   /*
>* First check if the device is still attached. It might already
>* be detached from its domain because the generic
> @@ -2240,6 +2247,8 @@ static void detach_device(struct device *dev)
>   dev_data->ats.enabled = false;
> 
> out:
> + spin_unlock(&dev_data->lock);
> +
>   spin_unlock_irqrestore(&domain->lock, flags);
> }
> 
> diff --git a/drivers/iommu/amd_iommu_types.h b/drivers/iommu/amd_iommu_types.h
> index 0186501ab971..c9c1612d52e0 100644
> --- a/drivers/iommu/amd_iommu_types.h
> +++ b/drivers/iommu/amd_iommu_types.h
> @@ -633,6 +633,9 @@ struct devid_map {
>  * This struct contains device specific data for the IOMMU
>  */
> struct iommu_dev_data {
> + /* Protect against attach/detach races */
> + spinlock_t lock;
> +
>   struct list_head list;/* For domain->dev_list */
>   struct llist_node dev_data_list;  /* For global dev_data_list */
>   struct protection_domain *domain; /* Domain the device is bound to */
> -- 
> 2.17.1
> 

Reviewed-by: Filippo Sironi 





Re: [PATCH 4/6] iommu/amd: Check for busy devices earlier in attach_device()

2019-09-25 Thread Sironi, Filippo via iommu



> On 25. Sep 2019, at 06:22, Joerg Roedel  wrote:
> 
> From: Joerg Roedel 
> 
> Check early in attach_device whether the device is already attached to a
> domain. This also simplifies the code path so that __attach_device() can
> be removed.
> 
> Fixes: 92d420ec028d ("iommu/amd: Relax locking in dma_ops path")
> Signed-off-by: Joerg Roedel 
> ---
> drivers/iommu/amd_iommu.c | 25 +++--
> 1 file changed, 7 insertions(+), 18 deletions(-)
> 
> diff --git a/drivers/iommu/amd_iommu.c b/drivers/iommu/amd_iommu.c
> index 2919168577ff..459247c32dc0 100644
> --- a/drivers/iommu/amd_iommu.c
> +++ b/drivers/iommu/amd_iommu.c
> @@ -2072,23 +2072,6 @@ static void do_detach(struct iommu_dev_data *dev_data)
>   domain->dev_cnt -= 1;
> }
> 
> -/*
> - * If a device is not yet associated with a domain, this function makes the
> - * device visible in the domain
> - */
> -static int __attach_device(struct iommu_dev_data *dev_data,
> -struct protection_domain *domain)
> -{
> - if (dev_data->domain != NULL)
> - return -EBUSY;
> -
> - /* Attach alias group root */
> - do_attach(dev_data, domain);
> -
> - return 0;
> -}
> -
> -
> static void pdev_iommuv2_disable(struct pci_dev *pdev)
> {
>   pci_disable_ats(pdev);
> @@ -2174,6 +2157,10 @@ static int attach_device(struct device *dev,
> 
>   dev_data = get_dev_data(dev);
> 
> + ret = -EBUSY;
> + if (dev_data->domain != NULL)
> + goto out;
> +
>   if (!dev_is_pci(dev))
>   goto skip_ats_check;
> 
> @@ -2198,7 +2185,9 @@ static int attach_device(struct device *dev,
>   }
> 
> skip_ats_check:
> - ret = __attach_device(dev_data, domain);
> + ret = 0;
> +
> + do_attach(dev_data, domain);
> 
>   /*
>* We might boot into a crash-kernel here. The crashed kernel
> -- 
> 2.17.1
> 

Reviewed-by: Filippo Sironi 





Re: [PATCH 3/6] iommu/amd: Take domain->lock for complete attach/detach path

2019-09-25 Thread Sironi, Filippo via iommu



> On 25. Sep 2019, at 06:22, Joerg Roedel  wrote:
> 
> From: Joerg Roedel 
> 
> The code-paths from which __attach_device() and __detach_device() are
> called also access and modify domain state, so take the domain lock
> there too. This allows us to get rid of the __detach_device()
> function.
> 
> Fixes: 92d420ec028d ("iommu/amd: Relax locking in dma_ops path")
> Signed-off-by: Joerg Roedel 
> ---
> drivers/iommu/amd_iommu.c | 65 ---
> 1 file changed, 26 insertions(+), 39 deletions(-)
> 
> diff --git a/drivers/iommu/amd_iommu.c b/drivers/iommu/amd_iommu.c
> index 37a9c04fc728..2919168577ff 100644
> --- a/drivers/iommu/amd_iommu.c
> +++ b/drivers/iommu/amd_iommu.c
> @@ -2079,27 +2079,13 @@ static void do_detach(struct iommu_dev_data *dev_data)
> static int __attach_device(struct iommu_dev_data *dev_data,
>  struct protection_domain *domain)
> {
> - unsigned long flags;
> - int ret;
> -
> - /* lock domain */
> - spin_lock_irqsave(&domain->lock, flags);
> -
> - ret = -EBUSY;
>   if (dev_data->domain != NULL)
> - goto out_unlock;
> + return -EBUSY;
> 
>   /* Attach alias group root */
>   do_attach(dev_data, domain);
> 
> - ret = 0;
> -
> -out_unlock:
> -
> - /* ready */
> - spin_unlock_irqrestore(&domain->lock, flags);
> -
> - return ret;
> + return 0;
> }
> 
> 
> @@ -2181,8 +2167,11 @@ static int attach_device(struct device *dev,
> {
>   struct pci_dev *pdev;
>   struct iommu_dev_data *dev_data;
> + unsigned long flags;
>   int ret;
> 
> + spin_lock_irqsave(&domain->lock, flags);
> +
>   dev_data = get_dev_data(dev);
> 
>   if (!dev_is_pci(dev))
> @@ -2190,12 +2179,13 @@ static int attach_device(struct device *dev,
> 
>   pdev = to_pci_dev(dev);
>   if (domain->flags & PD_IOMMUV2_MASK) {
> + ret = -EINVAL;
>   if (!dev_data->passthrough)
> - return -EINVAL;
> + goto out;
> 
>   if (dev_data->iommu_v2) {
>   if (pdev_iommuv2_enable(pdev) != 0)
> - return -EINVAL;
> + goto out;
> 
>   dev_data->ats.enabled = true;
>   dev_data->ats.qdep= pci_ats_queue_depth(pdev);
> @@ -2219,24 +2209,10 @@ static int attach_device(struct device *dev,
> 
>   domain_flush_complete(domain);
> 
> - return ret;
> -}
> -
> -/*
> - * Removes a device from a protection domain (unlocked)
> - */
> -static void __detach_device(struct iommu_dev_data *dev_data)
> -{
> - struct protection_domain *domain;
> - unsigned long flags;
> -
> - domain = dev_data->domain;
> -
> - spin_lock_irqsave(&domain->lock, flags);
> -
> - do_detach(dev_data);
> -
> +out:
>   spin_unlock_irqrestore(&domain->lock, flags);
> +
> + return ret;
> }
> 
> /*
> @@ -2246,10 +,13 @@ static void detach_device(struct device *dev)
> {
>   struct protection_domain *domain;
>   struct iommu_dev_data *dev_data;
> + unsigned long flags;
> 
>   dev_data = get_dev_data(dev);
>   domain   = dev_data->domain;
> 
> + spin_lock_irqsave(&domain->lock, flags);
> +
>   /*
>* First check if the device is still attached. It might already
>* be detached from its domain because the generic
> @@ -2257,12 +2236,12 @@ static void detach_device(struct device *dev)
>* our alias handling.
>*/
>   if (WARN_ON(!dev_data->domain))
> - return;
> + goto out;
> 
> - __detach_device(dev_data);
> + do_detach(dev_data);
> 
>   if (!dev_is_pci(dev))
> - return;
> + goto out;
> 
>   if (domain->flags & PD_IOMMUV2_MASK && dev_data->iommu_v2)
>   pdev_iommuv2_disable(to_pci_dev(dev));
> @@ -2270,6 +2249,9 @@ static void detach_device(struct device *dev)
>   pci_disable_ats(to_pci_dev(dev));
> 
>   dev_data->ats.enabled = false;
> +
> +out:
> + spin_unlock_irqrestore(&domain->lock, flags);
> }
> 
> static int amd_iommu_add_device(struct device *dev)
> @@ -2904,13 +2886,18 @@ int __init amd_iommu_init_dma_ops(void)
> static void cleanup_domain(struct protection_domain *domain)
> {
>   struct iommu_dev_data *entry;
> + unsigned long flags;
> +
> + spin_lock_irqsave(&domain->lock, flags);
> 
>   while (!list_empty(&domain->dev_list)) {
>   entry = list_first_entry(&domain->dev_list,
>struct iommu_dev_data, list);
>   BUG_ON(!entry->domain);
> - __detach_device(entry);
> + do_detach(entry);
>   }
> +
> + spin_unlock_irqrestore(&domain->lock, flags);
> }
> 
> static void protection_domain_free(struct protection_domain *domain)
> -- 
> 2.17.1
> 

Reviewed-by: Filippo Sironi 




Re: [PATCH 2/6] iommu/amd: Remove amd_iommu_devtable_lock

2019-09-25 Thread Sironi, Filippo via iommu



> On 25. Sep 2019, at 08:50, Sironi, Filippo  wrote:
> 
> 
> 
>> On 25. Sep 2019, at 06:22, Joerg Roedel  wrote:
>> 
>> From: Joerg Roedel 
>> 
>> The lock is not necessary because the device table does not
>> contain shared state that needs protection. Locking is only
>> needed on an individual entry basis, and that needs to
>> happen on the iommu_dev_data level.
>> 
>> Fixes: 92d420ec028d ("iommu/amd: Relax locking in dma_ops path")
>> Signed-off-by: Joerg Roedel 
>> ---
>> drivers/iommu/amd_iommu.c | 23 ++-
>> 1 file changed, 6 insertions(+), 17 deletions(-)
>> 
>> diff --git a/drivers/iommu/amd_iommu.c b/drivers/iommu/amd_iommu.c
>> index 042854bbc5bc..37a9c04fc728 100644
>> --- a/drivers/iommu/amd_iommu.c
>> +++ b/drivers/iommu/amd_iommu.c
>> @@ -70,7 +70,6 @@
>> */
>> #define AMD_IOMMU_PGSIZES((~0xFFFUL) & ~(2ULL << 38))
>> 
>> -static DEFINE_SPINLOCK(amd_iommu_devtable_lock);
>> static DEFINE_SPINLOCK(pd_bitmap_lock);
>> 
>> /* List of all available dev_data structures */
>> @@ -2080,10 +2079,11 @@ static void do_detach(struct iommu_dev_data *dev_data)
>> static int __attach_device(struct iommu_dev_data *dev_data,
>> struct protection_domain *domain)
>> {
>> +unsigned long flags;
>>  int ret;
>> 
>>  /* lock domain */
>> -spin_lock(&domain->lock);
>> +spin_lock_irqsave(&domain->lock, flags);
>> 
>>  ret = -EBUSY;
>>  if (dev_data->domain != NULL)
>> @@ -2097,7 +2097,7 @@ static int __attach_device(struct iommu_dev_data *dev_data,
>> out_unlock:
>> 
>>  /* ready */
>> -spin_unlock(&domain->lock);
>> +spin_unlock_irqrestore(&domain->lock, flags);
>> 
>>  return ret;
>> }
>> @@ -2181,7 +2181,6 @@ static int attach_device(struct device *dev,
>> {
>>  struct pci_dev *pdev;
>>  struct iommu_dev_data *dev_data;
>> -unsigned long flags;
>>  int ret;
>> 
>>  dev_data = get_dev_data(dev);
>> @@ -2209,9 +2208,7 @@ static int attach_device(struct device *dev,
>>  }
>> 
>> skip_ats_check:
>> -spin_lock_irqsave(&amd_iommu_devtable_lock, flags);
>>  ret = __attach_device(dev_data, domain);
>> -spin_unlock_irqrestore(&amd_iommu_devtable_lock, flags);
>> 
>>  /*
>>   * We might boot into a crash-kernel here. The crashed kernel
>> @@ -2231,14 +2228,15 @@ static int attach_device(struct device *dev,
>> static void __detach_device(struct iommu_dev_data *dev_data)
>> {
>>  struct protection_domain *domain;
>> +unsigned long flags;
>> 
>>  domain = dev_data->domain;
>> 
>> -spin_lock(&domain->lock);
>> +spin_lock_irqsave(&domain->lock, flags);
>> 
>>  do_detach(dev_data);
>> 
>> -spin_unlock(&domain->lock);
>> +spin_unlock_irqrestore(&domain->lock, flags);
>> }
>> 
>> /*
>> @@ -2248,7 +2246,6 @@ static void detach_device(struct device *dev)
>> {
>>  struct protection_domain *domain;
>>  struct iommu_dev_data *dev_data;
>> -unsigned long flags;
>> 
>>  dev_data = get_dev_data(dev);
>>  domain   = dev_data->domain;
>> @@ -2262,10 +2259,7 @@ static void detach_device(struct device *dev)
>>  if (WARN_ON(!dev_data->domain))
>>  return;
>> 
>> -/* lock device table */
>> -spin_lock_irqsave(&amd_iommu_devtable_lock, flags);
>>  __detach_device(dev_data);
>> -spin_unlock_irqrestore(&amd_iommu_devtable_lock, flags);
>> 
>>  if (!dev_is_pci(dev))
>>  return;
>> @@ -2910,9 +2904,6 @@ int __init amd_iommu_init_dma_ops(void)
>> static void cleanup_domain(struct protection_domain *domain)
>> {
>>  struct iommu_dev_data *entry;
>> -unsigned long flags;
>> -
>> -spin_lock_irqsave(&amd_iommu_devtable_lock, flags);
>> 
>>  while (!list_empty(&domain->dev_list)) {
>>  entry = list_first_entry(&domain->dev_list,
>> @@ -2920,8 +2911,6 @@ static void cleanup_domain(struct protection_domain *domain)
>>  BUG_ON(!entry->domain);
>>  __detach_device(entry);
>>  }
>> -
>> -spin_unlock_irqrestore(&amd_iommu_devtable_lock, flags);
>> }
> 
> I'm guessing that it is fine not to take the domain lock in
> cleanup_domain since this is called when freeing the domain. Right?

My guess was wrong but I see that you've fixed this in patch 3/6.

>> static void protection_domain_free(struct protection_domain *domain)
>> -- 
>> 2.17.1

Reviewed-by: Filippo Sironi 





Re: [PATCH 2/6] iommu/amd: Remove amd_iommu_devtable_lock

2019-09-25 Thread Sironi, Filippo via iommu



> On 25. Sep 2019, at 06:22, Joerg Roedel  wrote:
> 
> From: Joerg Roedel 
> 
> The lock is not necessary because the device table does not
> contain shared state that needs protection. Locking is only
> needed on an individual entry basis, and that needs to
> happen on the iommu_dev_data level.
> 
> Fixes: 92d420ec028d ("iommu/amd: Relax locking in dma_ops path")
> Signed-off-by: Joerg Roedel 
> ---
> drivers/iommu/amd_iommu.c | 23 ++-
> 1 file changed, 6 insertions(+), 17 deletions(-)
> 
> diff --git a/drivers/iommu/amd_iommu.c b/drivers/iommu/amd_iommu.c
> index 042854bbc5bc..37a9c04fc728 100644
> --- a/drivers/iommu/amd_iommu.c
> +++ b/drivers/iommu/amd_iommu.c
> @@ -70,7 +70,6 @@
>  */
> #define AMD_IOMMU_PGSIZES ((~0xFFFUL) & ~(2ULL << 38))
> 
> -static DEFINE_SPINLOCK(amd_iommu_devtable_lock);
> static DEFINE_SPINLOCK(pd_bitmap_lock);
> 
> /* List of all available dev_data structures */
> @@ -2080,10 +2079,11 @@ static void do_detach(struct iommu_dev_data *dev_data)
> static int __attach_device(struct iommu_dev_data *dev_data,
>  struct protection_domain *domain)
> {
> + unsigned long flags;
>   int ret;
> 
>   /* lock domain */
> - spin_lock(&domain->lock);
> + spin_lock_irqsave(&domain->lock, flags);
> 
>   ret = -EBUSY;
>   if (dev_data->domain != NULL)
> @@ -2097,7 +2097,7 @@ static int __attach_device(struct iommu_dev_data *dev_data,
> out_unlock:
> 
>   /* ready */
> - spin_unlock(&domain->lock);
> + spin_unlock_irqrestore(&domain->lock, flags);
> 
>   return ret;
> }
> @@ -2181,7 +2181,6 @@ static int attach_device(struct device *dev,
> {
>   struct pci_dev *pdev;
>   struct iommu_dev_data *dev_data;
> - unsigned long flags;
>   int ret;
> 
>   dev_data = get_dev_data(dev);
> @@ -2209,9 +2208,7 @@ static int attach_device(struct device *dev,
>   }
> 
> skip_ats_check:
> - spin_lock_irqsave(&amd_iommu_devtable_lock, flags);
>   ret = __attach_device(dev_data, domain);
> - spin_unlock_irqrestore(&amd_iommu_devtable_lock, flags);
> 
>   /*
>* We might boot into a crash-kernel here. The crashed kernel
> @@ -2231,14 +2228,15 @@ static int attach_device(struct device *dev,
> static void __detach_device(struct iommu_dev_data *dev_data)
> {
>   struct protection_domain *domain;
> + unsigned long flags;
> 
>   domain = dev_data->domain;
> 
> - spin_lock(&domain->lock);
> + spin_lock_irqsave(&domain->lock, flags);
> 
>   do_detach(dev_data);
> 
> - spin_unlock(&domain->lock);
> + spin_unlock_irqrestore(&domain->lock, flags);
> }
> 
> /*
> @@ -2248,7 +2246,6 @@ static void detach_device(struct device *dev)
> {
>   struct protection_domain *domain;
>   struct iommu_dev_data *dev_data;
> - unsigned long flags;
> 
>   dev_data = get_dev_data(dev);
>   domain   = dev_data->domain;
> @@ -2262,10 +2259,7 @@ static void detach_device(struct device *dev)
>   if (WARN_ON(!dev_data->domain))
>   return;
> 
> - /* lock device table */
> - spin_lock_irqsave(&amd_iommu_devtable_lock, flags);
>   __detach_device(dev_data);
> - spin_unlock_irqrestore(&amd_iommu_devtable_lock, flags);
> 
>   if (!dev_is_pci(dev))
>   return;
> @@ -2910,9 +2904,6 @@ int __init amd_iommu_init_dma_ops(void)
> static void cleanup_domain(struct protection_domain *domain)
> {
>   struct iommu_dev_data *entry;
> - unsigned long flags;
> -
> - spin_lock_irqsave(&amd_iommu_devtable_lock, flags);
> 
>   while (!list_empty(&domain->dev_list)) {
>   entry = list_first_entry(&domain->dev_list,
> @@ -2920,8 +2911,6 @@ static void cleanup_domain(struct protection_domain *domain)
>   BUG_ON(!entry->domain);
>   __detach_device(entry);
>   }
> -
> - spin_unlock_irqrestore(&amd_iommu_devtable_lock, flags);
> }

I'm guessing that it is fine not to take the domain lock in
cleanup_domain since this is called when freeing the domain. Right?

> static void protection_domain_free(struct protection_domain *domain)
> -- 
> 2.17.1
> 






Re: [PATCH 1/6] iommu/amd: Remove domain->updated

2019-09-25 Thread Sironi, Filippo via iommu



> On 25. Sep 2019, at 06:22, Joerg Roedel  wrote:
> 
> From: Joerg Roedel 
> 
> This struct member was used to track whether a domain
> change requires updates to the device-table and IOMMU cache
> flushes. The problem is that access to this field is racy
> since locking in the common mapping code-paths has been
> eliminated.
> 
> Move the updated field to the stack to get rid of all
> potential races and remove the field from the struct.
> 
> Fixes: 92d420ec028d ("iommu/amd: Relax locking in dma_ops path")
> Signed-off-by: Joerg Roedel 
> ---
> drivers/iommu/amd_iommu.c   | 49 +
> drivers/iommu/amd_iommu_types.h |  1 -
> 2 files changed, 25 insertions(+), 25 deletions(-)
> 
> diff --git a/drivers/iommu/amd_iommu.c b/drivers/iommu/amd_iommu.c
> index 7bdce3b10f3d..042854bbc5bc 100644
> --- a/drivers/iommu/amd_iommu.c
> +++ b/drivers/iommu/amd_iommu.c
> @@ -1458,10 +1458,11 @@ static void free_pagetable(struct protection_domain *domain)
>  * another level increases the size of the address space by 9 bits to a size up
>  * to 64 bits.
>  */
> -static void increase_address_space(struct protection_domain *domain,
> +static bool increase_address_space(struct protection_domain *domain,
>  gfp_t gfp)
> {
>   unsigned long flags;
> + bool ret = false;
>   u64 *pte;
> 
>   spin_lock_irqsave(&domain->lock, flags);
> @@ -1478,19 +1479,21 @@ static void increase_address_space(struct protection_domain *domain,
>   iommu_virt_to_phys(domain->pt_root));
>   domain->pt_root  = pte;
>   domain->mode+= 1;
> - domain->updated  = true;
> +
> + ret = true;
> 
> out:
>   spin_unlock_irqrestore(&domain->lock, flags);
> 
> - return;
> + return ret;
> }
> 
> static u64 *alloc_pte(struct protection_domain *domain,
> unsigned long address,
> unsigned long page_size,
> u64 **pte_page,
> -   gfp_t gfp)
> +   gfp_t gfp,
> +   bool *updated)
> {
>   int level, end_lvl;
>   u64 *pte, *page;
> @@ -1498,7 +1501,7 @@ static u64 *alloc_pte(struct protection_domain *domain,
>   BUG_ON(!is_power_of_2(page_size));
> 
>   while (address > PM_LEVEL_SIZE(domain->mode))
> - increase_address_space(domain, gfp);
> + *updated = increase_address_space(domain, gfp) || *updated;
> 
>   level   = domain->mode - 1;
>   pte = &domain->pt_root[PM_LEVEL_INDEX(level, address)];
> @@ -1530,7 +1533,7 @@ static u64 *alloc_pte(struct protection_domain *domain,
>   for (i = 0; i < count; ++i)
>   cmpxchg64(&lpte[i], __pte, 0ULL);
> 
> - domain->updated = true;
> + *updated = true;
>   continue;
>   }
> 
> @@ -1547,7 +1550,7 @@ static u64 *alloc_pte(struct protection_domain *domain,
>   if (cmpxchg64(pte, __pte, __npte) != __pte)
>   free_page((unsigned long)page);
>   else if (IOMMU_PTE_PRESENT(__pte))
> - domain->updated = true;
> + *updated = true;
> 
>   continue;
>   }
> @@ -1656,28 +1659,29 @@ static int iommu_map_page(struct protection_domain *dom,
> gfp_t gfp)
> {
>   struct page *freelist = NULL;
> + bool updated = false;
>   u64 __pte, *pte;
> - int i, count;
> + int ret, i, count;
> 
>   BUG_ON(!IS_ALIGNED(bus_addr, page_size));
>   BUG_ON(!IS_ALIGNED(phys_addr, page_size));
> 
> + ret = -EINVAL;
>   if (!(prot & IOMMU_PROT_MASK))
> - return -EINVAL;
> + goto out;
> 
>   count = PAGE_SIZE_PTE_COUNT(page_size);
> - pte   = alloc_pte(dom, bus_addr, page_size, NULL, gfp);
> + pte   = alloc_pte(dom, bus_addr, page_size, NULL, gfp, &updated);
> 
> - if (!pte) {
> - update_domain(dom);
> - return -ENOMEM;
> - }
> + ret = -ENOMEM;
> + if (!pte)
> + goto out;
> 
>   for (i = 0; i < count; ++i)
>   freelist = free_clear_pte(&pte[i], pte[i], freelist);
> 
>   if (freelist != NULL)
> - dom->updated = true;
> + updated = true;
> 
>   if (count > 1) {
>   __pte = PAGE_SIZE_PTE(__sme_set(phys_addr), page_size);
> @@ -1693,12 +1697,16 @@ static int iommu_map_page(struct protection_domain *dom,
>   for (i = 0; i < count; ++i)
>   pte[i] = __pte;
> 
> - update_domain(dom);
> + ret = 0;
> +
> +out:
> + if (updated)
> + update_domain(dom);
> 
>   /* Everything flushed out, free pages now */
>   free_page_list(freelist);
> 
> - return 0;
> + return ret;
> }
> 
> static unsigned long iommu_unmap_page(struct protection_dom

Re: [PATCH 4/5] iommu/amd: Hold the domain lock when calling iommu_map_page

2019-09-20 Thread Sironi, Filippo via iommu



> On 10. Sep 2019, at 19:49, Filippo Sironi  wrote:
> 
> iommu_map_page calls into __domain_flush_pages, which requires the
> domain lock since it traverses the device list, which the lock protects.
> 
> Signed-off-by: Filippo Sironi 
> ---
> drivers/iommu/amd_iommu.c | 5 +
> 1 file changed, 5 insertions(+)
> 
> diff --git a/drivers/iommu/amd_iommu.c b/drivers/iommu/amd_iommu.c
> index d4f25767622e..3714ae5ded31 100644
> --- a/drivers/iommu/amd_iommu.c
> +++ b/drivers/iommu/amd_iommu.c
> @@ -2562,6 +2562,7 @@ static int map_sg(struct device *dev, struct scatterlist *sglist,
>   unsigned long address;
>   u64 dma_mask;
>   int ret;
> + unsigned long flags;
> 
>   domain = get_domain(dev);
>   if (IS_ERR(domain))
> @@ -2587,7 +2588,9 @@ static int map_sg(struct device *dev, struct scatterlist *sglist,
> 
>   bus_addr  = address + s->dma_address + (j << PAGE_SHIFT);
>   phys_addr = (sg_phys(s) & PAGE_MASK) + (j << PAGE_SHIFT);
> + spin_lock_irqsave(&domain->lock, flags);
>   ret = iommu_map_page(domain, bus_addr, phys_addr, PAGE_SIZE, prot, GFP_ATOMIC);
> + spin_unlock_irqrestore(&domain->lock, flags);
>   if (ret)
>   goto out_unmap;
> 
> @@ -3095,7 +3098,9 @@ static int amd_iommu_map(struct iommu_domain *dom, unsigned long iova,
>   prot |= IOMMU_PROT_IW;
> 
>   mutex_lock(&domain->api_lock);
> + spin_lock(&domain->lock);
>   ret = iommu_map_page(domain, iova, paddr, page_size, prot, GFP_KERNEL);
> + spin_unlock(&domain->lock);
>   mutex_unlock(&domain->api_lock);

The spin_lock/spin_unlock aren't the correct choice.
These should be spin_lock_irqsave and spin_unlock_irqrestore.
Of course, with the variant Joerg suggested, this isn't a
problem anymore.
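
For clarity, a sketch of this hunk with the _irqsave variant (matching
the other domain->lock users in this series), had it been kept:

    unsigned long flags;

    mutex_lock(&domain->api_lock);
    spin_lock_irqsave(&domain->lock, flags);
    ret = iommu_map_page(domain, iova, paddr, page_size, prot,
                         GFP_KERNEL);
    spin_unlock_irqrestore(&domain->lock, flags);
    mutex_unlock(&domain->api_lock);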

>   domain_flush_np_cache(domain, iova, page_size);
> -- 
> 2.7.4
> 






Re: [PATCH] intel-iommu: Don't be too aggressive when clearing one context entry

2017-08-31 Thread Sironi, Filippo via iommu
Hi Jacob,

> On 30. Aug 2017, at 17:50, Jacob Pan  wrote:
> 
> On Mon, 28 Aug 2017 16:16:29 +0200
> Filippo Sironi via iommu  wrote:
> 
>> Previously, we were invalidating context cache and IOTLB globally when
>> clearing one context entry.  This is a tad too aggressive.
>> Invalidate the context cache and IOTLB for the affected device only.
>> 
>> Signed-off-by: Filippo Sironi 
>> Cc: David Woodhouse 
>> Cc: David Woodhouse 
>> Cc: Joerg Roedel 
>> Cc: iommu@lists.linux-foundation.org
>> Cc: linux-ker...@vger.kernel.org
>> ---
>> drivers/iommu/intel-iommu.c | 25 ++---
>> 1 file changed, 22 insertions(+), 3 deletions(-)
>> 
>> diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
>> index 3e8636f1220e..4bf3e59b0929 100644
>> --- a/drivers/iommu/intel-iommu.c
>> +++ b/drivers/iommu/intel-iommu.c
>> @@ -2351,13 +2351,32 @@ static inline int domain_pfn_mapping(struct dmar_domain *domain, unsigned long i
>> 
>> static void domain_context_clear_one(struct intel_iommu *iommu, u8 bus, u8 devfn)
>> {
>> +unsigned long flags;
>> +struct context_entry *context;
>> +u16 did_old;
>> +
>>  if (!iommu)
>>  return;
>> 
>> +spin_lock_irqsave(&iommu->lock, flags);
>> +context = iommu_context_addr(iommu, bus, devfn, 0);
>> +if (!context) {
>> +spin_unlock_irqrestore(&iommu->lock, flags);
>> +return;
>> +}
> perhaps check with device_context_mapped()?

Using device_context_mapped() wouldn't simplify the code since it would
just tell me that, at the time of the check, there was a context. I
would still need to lock, get the context, check whether the context is
valid, do the work, and unlock.

Modifying device_context_mapped() to return the context isn't
going to work either because it may go away in the meantime since
I wouldn't hold the lock.

>> +did_old = context_domain_id(context);
>> +spin_unlock_irqrestore(&iommu->lock, flags);
>>  clear_context_table(iommu, bus, devfn);
>> -iommu->flush.flush_context(iommu, 0, 0, 0,
>> -   DMA_CCMD_GLOBAL_INVL);
>> -iommu->flush.flush_iotlb(iommu, 0, 0, 0, DMA_TLB_GLOBAL_FLUSH);
>> +iommu->flush.flush_context(iommu,
>> +   did_old,
>> +   (((u16)bus) << 8) | devfn,
>> +   DMA_CCMD_MASK_NOBIT,
>> +   DMA_CCMD_DEVICE_INVL);
>> +iommu->flush.flush_iotlb(iommu,
>> + did_old,
>> + 0,
>> + 0,
>> + DMA_TLB_DSI_FLUSH);
>> }
>> 
>> static inline void unlink_domain_info(struct device_domain_info *info)
> 
> [Jacob Pan]

Regards,
Filippo




Re: [PATCH] intel-iommu: Don't be too aggressive when clearing one context entry

2017-08-31 Thread Sironi, Filippo via iommu
Hi Joerg,

> On 30. Aug 2017, at 15:31, Joerg Roedel  wrote:
> 
> Hi Filippo,
> 
> please change the subject to:
> 
>   iommu/vt-d: Don't be too aggressive when clearing one context entry
> 
> to follow the convention used in the iommu-tree. Another comment below.

Will do.

> On Mon, Aug 28, 2017 at 04:16:29PM +0200, Filippo Sironi wrote:
>> static void domain_context_clear_one(struct intel_iommu *iommu, u8 bus, u8 devfn)
>> {
>> +unsigned long flags;
>> +struct context_entry *context;
>> +u16 did_old;
>> +
>>  if (!iommu)
>>  return;
>> 
>> +spin_lock_irqsave(&iommu->lock, flags);
>> +context = iommu_context_addr(iommu, bus, devfn, 0);
>> +if (!context) {
>> +spin_unlock_irqrestore(&iommu->lock, flags);
>> +return;
>> +}
>> +did_old = context_domain_id(context);
>> +spin_unlock_irqrestore(&iommu->lock, flags);
>>  clear_context_table(iommu, bus, devfn);
> 
> This function is the only caller of clear_context_table(), which does
> similar things (like fetching the context-entry) as you are adding
> above.
> 
> So you can either make clear_context_table() return the old domain-id
> so that you don't need to do it here, or you get rid of the function
> entirely and add the context_clear_entry() and __iommu_flush_cache()
> calls into this code-path.
> 
> Regards,
> 
>   Joerg

I went for merging domain_context_clear_one() with context_clear_one().
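
A sketch of what such a merged function might look like, combining the
hunk above with the context_clear_entry()/__iommu_flush_cache() calls
you mention (the actual merge may differ):

    static void domain_context_clear_one(struct intel_iommu *iommu,
                                         u8 bus, u8 devfn)
    {
            unsigned long flags;
            struct context_entry *context;
            u16 did_old;

            if (!iommu)
                    return;

            spin_lock_irqsave(&iommu->lock, flags);
            context = iommu_context_addr(iommu, bus, devfn, 0);
            if (!context) {
                    spin_unlock_irqrestore(&iommu->lock, flags);
                    return;
            }
            did_old = context_domain_id(context);
            context_clear_entry(context);
            __iommu_flush_cache(iommu, context, sizeof(*context));
            spin_unlock_irqrestore(&iommu->lock, flags);

            iommu->flush.flush_context(iommu, did_old,
                                       (((u16)bus) << 8) | devfn,
                                       DMA_CCMD_MASK_NOBIT,
                                       DMA_CCMD_DEVICE_INVL);
            iommu->flush.flush_iotlb(iommu, did_old, 0, 0,
                                     DMA_TLB_DSI_FLUSH);
    }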

Regards,
Filippo

