Re: [PATCH 3/4] iommu: add qcom_iommu

2017-05-11 Thread Sricharan R
Hi,

On 5/11/2017 10:20 PM, Rob Clark wrote:
> On Thu, May 11, 2017 at 11:08 AM, Sricharan R  
> wrote:
>> Hi Rob,
>>
>> 
>>
>>> +static irqreturn_t qcom_iommu_fault(int irq, void *dev)
>>> +{
>>> + struct qcom_iommu_ctx *ctx = dev;
>>> + u32 fsr, fsynr;
>>> + unsigned long iova;
>>> +
>>> + fsr = iommu_readl(ctx, ARM_SMMU_CB_FSR);
>>> +
>>> + if (!(fsr & FSR_FAULT))
>>> + return IRQ_NONE;
>>> +
>>> + fsynr = iommu_readl(ctx, ARM_SMMU_CB_FSYNR0);
>>> + iova = iommu_readq(ctx, ARM_SMMU_CB_FAR);
>>> +
>>> + dev_err_ratelimited(ctx->dev,
>>> + "Unhandled context fault: fsr=0x%x, "
>>> + "iova=0x%08lx, fsynr=0x%x, cb=%d\n",
>>> + fsr, iova, fsynr, ctx->asid);
>>> +
>>> + iommu_writel(ctx, ARM_SMMU_CB_FSR, fsr);
>>
>> Just thinking whether the clocks should be enabled in the fault handler
>> to handle faults that happen outside the master's context. Global faults
>> are one such case, though here those are anyway handled in the secure
>> world. Another is something like the bootloader having used the iommu
>> and left a fault unhandled, so we get that fault in the kernel the
>> moment we enable the ctx. At least downstream seems to enable the
>> clocks in the fault handler explicitly.
> 
> hmm, I wonder if we should instead do something to clear interrupts
> when we initialize the context?
> 
> I guess we probably don't want to get fault irq's from the bootloader..

Right, it is better to clear it at the beginning; that could be added.
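
Something along these lines perhaps, as a rough sketch only (reusing the
iommu_readl()/iommu_writel() helpers and the FSR_FAULT mask from this
patch, assuming the context clocks are already enabled at this point;
the helper name is made up):

	static void qcom_iommu_clear_stale_fault(struct qcom_iommu_ctx *ctx)
	{
		u32 fsr = iommu_readl(ctx, ARM_SMMU_CB_FSR);

		/* ack any fault state left behind, e.g. by the bootloader */
		if (fsr & FSR_FAULT)
			iommu_writel(ctx, ARM_SMMU_CB_FSR, fsr);
	}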


Regards,
 Sricharan

> 
> BR,
> -R
> 
>> Regards,
>>  Sricharan
>>
>>
>>> +
>>> + return IRQ_HANDLED;
>>> +}
>>> +
>>> +static int qcom_iommu_init_domain(struct iommu_domain *domain,
>>> +   struct qcom_iommu_dev *qcom_iommu,
>>> +   struct iommu_fwspec *fwspec)
>>> +{
>>> + struct qcom_iommu_domain *qcom_domain = to_qcom_iommu_domain(domain);
>>> + struct io_pgtable_ops *pgtbl_ops;
>>> + struct io_pgtable_cfg pgtbl_cfg;
>>> + int i, ret = 0;
>>> + u32 reg;
>>> +
>>> + mutex_lock(&qcom_domain->init_mutex);
>>> + if (qcom_domain->iommu)
>>> + goto out_unlock;
>>> +
>>> + pgtbl_cfg = (struct io_pgtable_cfg) {
>>> + .pgsize_bitmap  = qcom_iommu_ops.pgsize_bitmap,
>>> + .ias= 32,
>>> + .oas= 40,
>>> + .tlb= &qcom_gather_ops,
>>> + .iommu_dev  = qcom_iommu->dev,
>>> + };
>>> +
>>> + qcom_domain->iommu = qcom_iommu;
>>> + pgtbl_ops = alloc_io_pgtable_ops(ARM_32_LPAE_S1, &pgtbl_cfg, fwspec);
>>> + if (!pgtbl_ops) {
>>> + dev_err(qcom_iommu->dev, "failed to allocate pagetable ops\n");
>>> + ret = -ENOMEM;
>>> + goto out_clear_iommu;
>>> + }
>>> +
>>> + /* Update the domain's page sizes to reflect the page table format */
>>> + domain->pgsize_bitmap = pgtbl_cfg.pgsize_bitmap;
>>> + domain->geometry.aperture_end = (1ULL << pgtbl_cfg.ias) - 1;
>>> + domain->geometry.force_aperture = true;
>>> +
>>> + for (i = 0; i < fwspec->num_ids; i++) {
>>> + struct qcom_iommu_ctx *ctx = to_ctx(fwspec, fwspec->ids[i]);
>>> +
>>> + if (!ctx->secure_init) {
>>> + ret = qcom_scm_restore_sec_cfg(qcom_iommu->sec_id, ctx->asid);
>>> + if (ret) {
>>> + dev_err(qcom_iommu->dev, "secure init failed: %d\n", ret);
>>> + goto out_clear_iommu;
>>> + }
>>> + ctx->secure_init = true;
>>> + }
>>> +
>>> + /* TTBRs */
>>> + iommu_writeq(ctx, ARM_SMMU_CB_TTBR0,
>>> + pgtbl_cfg.arm_lpae_s1_cfg.ttbr[0] |
>>> + ((u64)ctx->asid << TTBRn_ASID_SHIFT));
>>> + iommu_writeq(ctx, ARM_SMMU_CB_TTBR1,
>>> + pgtbl_cfg.arm_lpae_s1_cfg.ttbr[1] |
>>> + ((u64)ctx->asid << TTBRn_ASID_SHIFT));
>>> +
>>> + /* TTBCR */
>>> + iommu_writel(ctx, ARM_SMMU_CB_TTBCR2,
>>> + (pgtbl_cfg.arm_lpae_s1_cfg.tcr >> 32) |
>>> + TTBCR2_SEP_UPSTREAM);
>>> + iommu_writel(ctx, ARM_SMMU_CB_TTBCR,
>>> + pgtbl_cfg.arm_lpae_s1_cfg.tcr);
>>> +
>>> + /* MAIRs (stage-1 only) */
>>> + iommu_writel(ctx, ARM_SMMU_CB_S1_MAIR0,
>>> + pgtbl_cfg.arm_lpae_s1_cfg.mair[0]);
>>> + iommu_writel(ctx, ARM_SMMU_CB_S1_MAIR1,
>>> + pgtbl_cfg.arm_lpae_s1_cfg.mair[1]);
>>> +
>>> + /* SCTLR */
>>> + reg = SCTLR_CFIE | SCTLR_CFRE | SCTLR_AFE | SCTLR_TRE |
>>> + SCTLR_M | SCTLR_S1_ASIDPNE;
>>> +
>>> + if 

Re: [PATCH] iova: allow allocation of lowest domain PFN

2017-05-11 Thread Aaron Sierra
- Original Message -
> From: "Robin Murphy" 
> To: "Aaron Sierra" , "Joerg Roedel" 
> Cc: "iommu" , "Nate Watterson" 
> 
> Sent: Thursday, May 11, 2017 12:39:01 PM
> Subject: Re: [PATCH] iova: allow allocation of lowest domain PFN

Robin,

> Hi Aaron,
> 
> On 11/05/17 14:36, Aaron Sierra wrote:
>> I found it was impossible to allocate all of the address space
>> defined by an IOVA domain. For example, a trivial domain with 2 pages
>> would cover PFNs 0 (start) and 1 (limit), but attempting to allocate
>> more than one total PFN would fail because the start PFN could never be
>> allocated.
> 
> Does this only happen when start_pfn is 0? The recent fix in
> 5016bdb796b3 implies that allocating the entire address space works
> fine, albeit with a non-zero start_pfn. I was about to say that for all
> current callers, start_pfn is always at least 1 anyway, but I see the
> Nvidia DRM stuff in next is making that no longer true, oh well.

No, this limitation exists regardless of the start_pfn value. I've tested
0, 1, and powers of two up to half of the limit_pfn.

The problem with the test case laid out in 5016bdb796b3 is that start_pfn
is 1 and limit_pfn is odd (as usual), so the number of available PFNs is odd.
Therefore allocating two at a time can't possibly allocate all of the address
space.
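
To put concrete (made-up) numbers on it: with start_pfn = 1 and
limit_pfn = 0xFFFFF, the domain holds

	limit_pfn - start_pfn + 1 = 0xFFFFF - 1 + 1 = 0xFFFFF PFNs,

which is an odd count, so allocations of two PFNs at a time can cover at
most 0xFFFFE of them and never the whole domain.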

>> This adds a function to prevent PFN limit calculations from dipping
>> "below" the start PFN in __alloc_and_insert_iova_range() and
>> __get_cached_rbnode(). It also alters the PFN validity checks in
>> __alloc_and_insert_iova_range() to anticipate possible PFN rollover.
>> These combine to allow every PFN within the IOVA domain to be available
>> for allocation.
>> 
>> Signed-off-by: Aaron Sierra 
>> ---
>>  drivers/iommu/iova.c | 43 +++
>>  1 file changed, 31 insertions(+), 12 deletions(-)
>> 
>> diff --git a/drivers/iommu/iova.c b/drivers/iommu/iova.c
>> index 5c88ba7..9d92005b 100644
>> --- a/drivers/iommu/iova.c
>> +++ b/drivers/iommu/iova.c
>> @@ -32,6 +32,18 @@ static unsigned long iova_rcache_get(struct iova_domain
>> *iovad,
>>  static void init_iova_rcaches(struct iova_domain *iovad);
>>  static void free_iova_rcaches(struct iova_domain *iovad);
>>  
>> +static inline unsigned long
>> +bounded_next_pfn(struct iova_domain *iovad, struct iova *iova)
>> +{
>> +unsigned long next_pfn = iova->pfn_lo - 1;
>> +
>> +if (!iova->pfn_lo)
>> +return iovad->start_pfn;
>> +
>> +/* make sure the PFN doesn't rollover */
>> +return (next_pfn > iova->pfn_lo) ? iovad->start_pfn : next_pfn;
> 
> Why this check? Substituting the definition of next_pfn:
> 
>   ((iova->pfn_lo - 1) > iova->pfn_lo)
>
> which is only true if iova->pfn_lo == 0, in which case we never get this
> far anyway.

That's a good question. It should be possible to simplify that.
 
> I'm also not sure about returning start_pfn instead of 0 in the
> underflow case, since reserved entries may exist in the tree such that
> pfn_lo < start_pfn, and I'm not convinced that returning a "next" PFN
> which is actually "previous" to the lowest entry won't cause things to
> go horrendously wrong.

I didn't realize that reservations could occur below the defined IOVA
domain lower limit. Is that really intended behavior? I'd expected that if
you wanted to reserve 0, for example, you'd define a domain starting at
zero, then reserve that PFN before making the domain generally available.
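
Roughly like this, as a sketch only (the granule and 32-bit limit values
are made up; the helpers are the existing init_iova_domain()/reserve_iova()):

	init_iova_domain(&iovad, SZ_4K, 0, DMA_BIT_MASK(32) >> PAGE_SHIFT);
	reserve_iova(&iovad, 0, 0);	/* keep PFN 0 out of general use */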

[ snip ]

>>  
>> -if (!curr) {
>> -if (size_aligned)
>> -pad_size = iova_get_pad_size(size, limit_pfn);
>> -if ((iovad->start_pfn + size + pad_size) > limit_pfn) {
> 
> Wouldn't it be sufficient to just change this to
> 
>   if ((iovad->start_pfn + size + pad_size - 1) > limit_pfn)

That may be. My original solution was equivalent, but factored differently
(i.e. limit_pfn + 1). I couldn't fully wrap my head around the logic and
was worried about missing corner cases. If that is indeed a complete fix,
could the refactoring below still be beneficial?

-Aaron S.

> ?
> 
> Since a size of 0 is invalid, that would appear to be sufficiently
> resistant to underflow.
> 
> Robin.
> 
>> -spin_unlock_irqrestore(&iovad->iova_rbtree_lock, flags);
>> -return -ENOMEM;
>> -}
>> -}
>> +if (size_aligned)
>> +pad_size = iova_get_pad_size(size, limit_pfn);
>>  
>>  /* pfn_lo will point to size aligned address if size_aligned is set */
>> -new->pfn_lo = limit_pfn - (size + pad_size) + 1;
>> -new->pfn_hi = new->pfn_lo + size - 1;
>> +pfn_lo = limit_pfn - (size + pad_size) + 1;
>> +pfn_hi = pfn_lo + size - 1;
>> +
>> +/*
>> + * We're working with unsigned values, so we have to be careful about
>> + * how we detect a PFN "below" the lowest PFN 


Re: [PATCH] iova: allow allocation of lowest domain PFN

2017-05-11 Thread Robin Murphy
Hi Aaron,

On 11/05/17 14:36, Aaron Sierra wrote:
> I found it was impossible to allocate all of the address space
> defined by an IOVA domain. For example, a trivial domain with 2 pages
> would cover PFNs 0 (start) and 1 (limit), but attempting to allocate
> more than one total PFN would fail because the start PFN could never be
> allocated.

Does this only happen when start_pfn is 0? The recent fix in
5016bdb796b3 implies that allocating the entire address space works
fine, albeit with a non-zero start_pfn. I was about to say that for all
current callers, start_pfn is always at least 1 anyway, but I see the
Nvidia DRM stuff in next is making that no longer true, oh well.

> This adds a function to prevent PFN limit calculations from dipping
> "below" the start PFN in __alloc_and_insert_iova_range() and
> __get_cached_rbnode(). It also alters the PFN validity checks in
> __alloc_and_insert_iova_range() to anticipate possible PFN rollover.
> These combine to allow every PFN within the IOVA domain to be available
> for allocation.
> 
> Signed-off-by: Aaron Sierra 
> ---
>  drivers/iommu/iova.c | 43 +++
>  1 file changed, 31 insertions(+), 12 deletions(-)
> 
> diff --git a/drivers/iommu/iova.c b/drivers/iommu/iova.c
> index 5c88ba7..9d92005b 100644
> --- a/drivers/iommu/iova.c
> +++ b/drivers/iommu/iova.c
> @@ -32,6 +32,18 @@ static unsigned long iova_rcache_get(struct iova_domain 
> *iovad,
>  static void init_iova_rcaches(struct iova_domain *iovad);
>  static void free_iova_rcaches(struct iova_domain *iovad);
>  
> +static inline unsigned long
> +bounded_next_pfn(struct iova_domain *iovad, struct iova *iova)
> +{
> + unsigned long next_pfn = iova->pfn_lo - 1;
> +
> + if (!iova->pfn_lo)
> + return iovad->start_pfn;
> +
> + /* make sure the PFN doesn't rollover */
> + return (next_pfn > iova->pfn_lo) ? iovad->start_pfn : next_pfn;

Why this check? Substituting the definition of next_pfn:

((iova->pfn_lo - 1) > iova->pfn_lo)

which is only true if iova->pfn_lo == 0, in which case we never get this
far anyway.

I'm also not sure about returning start_pfn instead of 0 in the
underflow case, since reserved entries may exist in the tree such that
pfn_lo < start_pfn, and I'm not convinced that returning a "next" PFN
which is actually "previous" to the lowest entry won't cause things to
go horrendously wrong.

> +}
> +
>  void
>  init_iova_domain(struct iova_domain *iovad, unsigned long granule,
>   unsigned long start_pfn, unsigned long pfn_32bit)
> @@ -63,7 +75,7 @@ __get_cached_rbnode(struct iova_domain *iovad, unsigned 
> long *limit_pfn)
>   struct rb_node *prev_node = rb_prev(iovad->cached32_node);
>   struct iova *curr_iova =
>   rb_entry(iovad->cached32_node, struct iova, node);
> - *limit_pfn = curr_iova->pfn_lo - 1;
> + *limit_pfn = bounded_next_pfn(iovad, curr_iova);
>   return prev_node;
>   }
>  }
> @@ -146,6 +158,7 @@ static int __alloc_and_insert_iova_range(struct 
> iova_domain *iovad,
>   unsigned long flags;
>   unsigned long saved_pfn;
>   unsigned int pad_size = 0;
> + unsigned long pfn_lo, pfn_hi;
>  
>   /* Walk the tree backwards */
>   spin_lock_irqsave(&iovad->iova_rbtree_lock, flags);
> @@ -166,24 +179,30 @@ static int __alloc_and_insert_iova_range(struct 
> iova_domain *iovad,
>   break;  /* found a free slot */
>   }
>  adjust_limit_pfn:
> - limit_pfn = curr_iova->pfn_lo ? (curr_iova->pfn_lo - 1) : 0;
> + limit_pfn = bounded_next_pfn(iovad, curr_iova);
>  move_left:
>   prev = curr;
>   curr = rb_prev(curr);
>   }
>  
> - if (!curr) {
> - if (size_aligned)
> - pad_size = iova_get_pad_size(size, limit_pfn);
> - if ((iovad->start_pfn + size + pad_size) > limit_pfn) {

Wouldn't it be sufficient to just change this to

if ((iovad->start_pfn + size + pad_size - 1) > limit_pfn)

?

Since a size of 0 is invalid, that would appear to be sufficiently
resistant to underflow.
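
For instance, with made-up values start_pfn = 0, limit_pfn = 1, size = 2
and pad_size = 0:

	old check:  0 + 2 + 0     = 2 > 1  -> fails, although PFNs 0-1 are free
	new check:  0 + 2 + 0 - 1 = 1 > 1  -> false, so the allocation proceeds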

Robin.

> - spin_unlock_irqrestore(&iovad->iova_rbtree_lock, flags);
> - return -ENOMEM;
> - }
> - }
> + if (size_aligned)
> + pad_size = iova_get_pad_size(size, limit_pfn);
>  
>   /* pfn_lo will point to size aligned address if size_aligned is set */
> - new->pfn_lo = limit_pfn - (size + pad_size) + 1;
> - new->pfn_hi = new->pfn_lo + size - 1;
> + pfn_lo = limit_pfn - (size + pad_size) + 1;
> + pfn_hi = pfn_lo + size - 1;
> +
> + /*
> +  * We're working with unsigned values, so we have to be careful about
> +  * how we detect a PFN "below" the lowest PFN possible; zero.
> +  */
> + if (pfn_lo < iovad->start_pfn || pfn_lo > limit_pfn) {
> + 

Re: [PATCH 3/4] iommu: add qcom_iommu

2017-05-11 Thread Rob Clark
On Thu, May 11, 2017 at 11:08 AM, Sricharan R  wrote:
> Hi Rob,
>
> 
>
>> +static irqreturn_t qcom_iommu_fault(int irq, void *dev)
>> +{
>> + struct qcom_iommu_ctx *ctx = dev;
>> + u32 fsr, fsynr;
>> + unsigned long iova;
>> +
>> + fsr = iommu_readl(ctx, ARM_SMMU_CB_FSR);
>> +
>> + if (!(fsr & FSR_FAULT))
>> + return IRQ_NONE;
>> +
>> + fsynr = iommu_readl(ctx, ARM_SMMU_CB_FSYNR0);
>> + iova = iommu_readq(ctx, ARM_SMMU_CB_FAR);
>> +
>> + dev_err_ratelimited(ctx->dev,
>> + "Unhandled context fault: fsr=0x%x, "
>> + "iova=0x%08lx, fsynr=0x%x, cb=%d\n",
>> + fsr, iova, fsynr, ctx->asid);
>> +
>> + iommu_writel(ctx, ARM_SMMU_CB_FSR, fsr);
>
> Just thinking whether the clocks should be enabled in the fault handler
> to handle faults that happen outside the master's context. Global faults
> are one such case, though here those are anyway handled in the secure
> world. Another is something like the bootloader having used the iommu
> and left a fault unhandled, so we get that fault in the kernel the
> moment we enable the ctx. At least downstream seems to enable the
> clocks in the fault handler explicitly.

hmm, I wonder if we should instead do something to clear interrupts
when we initialize the context?

I guess we probably don't want to get fault irq's from the bootloader..

BR,
-R

> Regards,
>  Sricharan
>
>
>> +
>> + return IRQ_HANDLED;
>> +}
>> +
>> +static int qcom_iommu_init_domain(struct iommu_domain *domain,
>> +   struct qcom_iommu_dev *qcom_iommu,
>> +   struct iommu_fwspec *fwspec)
>> +{
>> + struct qcom_iommu_domain *qcom_domain = to_qcom_iommu_domain(domain);
>> + struct io_pgtable_ops *pgtbl_ops;
>> + struct io_pgtable_cfg pgtbl_cfg;
>> + int i, ret = 0;
>> + u32 reg;
>> +
>> + mutex_lock(&qcom_domain->init_mutex);
>> + if (qcom_domain->iommu)
>> + goto out_unlock;
>> +
>> + pgtbl_cfg = (struct io_pgtable_cfg) {
>> + .pgsize_bitmap  = qcom_iommu_ops.pgsize_bitmap,
>> + .ias= 32,
>> + .oas= 40,
>> + .tlb= &qcom_gather_ops,
>> + .iommu_dev  = qcom_iommu->dev,
>> + };
>> +
>> + qcom_domain->iommu = qcom_iommu;
>> + pgtbl_ops = alloc_io_pgtable_ops(ARM_32_LPAE_S1, &pgtbl_cfg, fwspec);
>> + if (!pgtbl_ops) {
>> + dev_err(qcom_iommu->dev, "failed to allocate pagetable ops\n");
>> + ret = -ENOMEM;
>> + goto out_clear_iommu;
>> + }
>> +
>> + /* Update the domain's page sizes to reflect the page table format */
>> + domain->pgsize_bitmap = pgtbl_cfg.pgsize_bitmap;
>> + domain->geometry.aperture_end = (1ULL << pgtbl_cfg.ias) - 1;
>> + domain->geometry.force_aperture = true;
>> +
>> + for (i = 0; i < fwspec->num_ids; i++) {
>> + struct qcom_iommu_ctx *ctx = to_ctx(fwspec, fwspec->ids[i]);
>> +
>> + if (!ctx->secure_init) {
>> + ret = qcom_scm_restore_sec_cfg(qcom_iommu->sec_id, ctx->asid);
>> + if (ret) {
>> + dev_err(qcom_iommu->dev, "secure init failed: %d\n", ret);
>> + goto out_clear_iommu;
>> + }
>> + ctx->secure_init = true;
>> + }
>> +
>> + /* TTBRs */
>> + iommu_writeq(ctx, ARM_SMMU_CB_TTBR0,
>> + pgtbl_cfg.arm_lpae_s1_cfg.ttbr[0] |
>> + ((u64)ctx->asid << TTBRn_ASID_SHIFT));
>> + iommu_writeq(ctx, ARM_SMMU_CB_TTBR1,
>> + pgtbl_cfg.arm_lpae_s1_cfg.ttbr[1] |
>> + ((u64)ctx->asid << TTBRn_ASID_SHIFT));
>> +
>> + /* TTBCR */
>> + iommu_writel(ctx, ARM_SMMU_CB_TTBCR2,
>> + (pgtbl_cfg.arm_lpae_s1_cfg.tcr >> 32) |
>> + TTBCR2_SEP_UPSTREAM);
>> + iommu_writel(ctx, ARM_SMMU_CB_TTBCR,
>> + pgtbl_cfg.arm_lpae_s1_cfg.tcr);
>> +
>> + /* MAIRs (stage-1 only) */
>> + iommu_writel(ctx, ARM_SMMU_CB_S1_MAIR0,
>> + pgtbl_cfg.arm_lpae_s1_cfg.mair[0]);
>> + iommu_writel(ctx, ARM_SMMU_CB_S1_MAIR1,
>> + pgtbl_cfg.arm_lpae_s1_cfg.mair[1]);
>> +
>> + /* SCTLR */
>> + reg = SCTLR_CFIE | SCTLR_CFRE | SCTLR_AFE | SCTLR_TRE |
>> + SCTLR_M | SCTLR_S1_ASIDPNE;
>> +
>> + if (IS_ENABLED(CONFIG_BIG_ENDIAN))
>> + reg |= SCTLR_E;
>> +
>> + iommu_writel(ctx, ARM_SMMU_CB_SCTLR, reg);
>> + }
>> +
>> + mutex_unlock(_domain->init_mutex);
>> +
>> + /* Publish page table ops for map/unmap */
>> + qcom_domain->pgtbl_ops = pgtbl_ops;

[PATCH 4.9 044/103] x86/pci-calgary: Fix iommu_free() comparison of unsigned expression >= 0

2017-05-11 Thread Greg Kroah-Hartman
4.9-stable review patch.  If anyone has any objections, please let me know.

--

From: Nikola Pajkovsky 

commit 68dee8e2f2cacc54d038394e70d22411dee89da2 upstream.

commit 8fd524b355da ("x86: Kill bad_dma_address variable") killed the
bad_dma_address variable and used the DMA_ERROR_CODE macro, which is
always zero, in its place. Since dma_addr is unsigned, the statement

   dma_addr >= DMA_ERROR_CODE

is always true and not needed.

arch/x86/kernel/pci-calgary_64.c: In function ‘iommu_free’:
arch/x86/kernel/pci-calgary_64.c:299:2: warning: comparison of unsigned 
expression >= 0 is always true [-Wtype-limits]
  if (unlikely((dma_addr >= DMA_ERROR_CODE) && (dma_addr < badend))) {

Fixes: 8fd524b355da ("x86: Kill bad_dma_address variable")
Signed-off-by: Nikola Pajkovsky 
Cc: iommu@lists.linux-foundation.org
Cc: Jon Mason 
Cc: Muli Ben-Yehuda 
Link: 
http://lkml.kernel.org/r/7612c0f9dd7c1290407dbf8e809def922006920b.1479161177.git.npajkov...@suse.cz
Signed-off-by: Thomas Gleixner 
Signed-off-by: Greg Kroah-Hartman 

---
 arch/x86/kernel/pci-calgary_64.c |2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

--- a/arch/x86/kernel/pci-calgary_64.c
+++ b/arch/x86/kernel/pci-calgary_64.c
@@ -296,7 +296,7 @@ static void iommu_free(struct iommu_tabl
 
/* were we called with bad_dma_address? */
badend = DMA_ERROR_CODE + (EMERGENCY_PAGES * PAGE_SIZE);
-   if (unlikely((dma_addr >= DMA_ERROR_CODE) && (dma_addr < badend))) {
+   if (unlikely(dma_addr < badend)) {
WARN(1, KERN_ERR "Calgary: driver tried unmapping bad DMA "
   "address 0x%Lx\n", dma_addr);
return;



[PATCH] iova: allow allocation of lowest domain PFN

2017-05-11 Thread Aaron Sierra
I found it was impossible to allocate all of the address space
defined by an IOVA domain. For example, a trivial domain with 2 pages
would cover PFNs 0 (start) and 1 (limit), but attempting to allocate
more than one total PFN would fail because the start PFN could never be
allocated.

This adds a function to prevent PFN limit calculations from dipping
"below" the start PFN in __alloc_and_insert_iova_range() and
__get_cached_rbnode(). It also alters the PFN validity checks in
__alloc_and_insert_iova_range() to anticipate possible PFN rollover.
These combine to allow every PFN within the IOVA domain to be available
for allocation.

Signed-off-by: Aaron Sierra 
---
 drivers/iommu/iova.c | 43 +++
 1 file changed, 31 insertions(+), 12 deletions(-)

diff --git a/drivers/iommu/iova.c b/drivers/iommu/iova.c
index 5c88ba7..9d92005b 100644
--- a/drivers/iommu/iova.c
+++ b/drivers/iommu/iova.c
@@ -32,6 +32,18 @@ static unsigned long iova_rcache_get(struct iova_domain 
*iovad,
 static void init_iova_rcaches(struct iova_domain *iovad);
 static void free_iova_rcaches(struct iova_domain *iovad);
 
+static inline unsigned long
+bounded_next_pfn(struct iova_domain *iovad, struct iova *iova)
+{
+   unsigned long next_pfn = iova->pfn_lo - 1;
+
+   if (!iova->pfn_lo)
+   return iovad->start_pfn;
+
+   /* make sure the PFN doesn't rollover */
+   return (next_pfn > iova->pfn_lo) ? iovad->start_pfn : next_pfn;
+}
+
 void
 init_iova_domain(struct iova_domain *iovad, unsigned long granule,
unsigned long start_pfn, unsigned long pfn_32bit)
@@ -63,7 +75,7 @@ __get_cached_rbnode(struct iova_domain *iovad, unsigned long 
*limit_pfn)
struct rb_node *prev_node = rb_prev(iovad->cached32_node);
struct iova *curr_iova =
rb_entry(iovad->cached32_node, struct iova, node);
-   *limit_pfn = curr_iova->pfn_lo - 1;
+   *limit_pfn = bounded_next_pfn(iovad, curr_iova);
return prev_node;
}
 }
@@ -146,6 +158,7 @@ static int __alloc_and_insert_iova_range(struct iova_domain 
*iovad,
unsigned long flags;
unsigned long saved_pfn;
unsigned int pad_size = 0;
+   unsigned long pfn_lo, pfn_hi;
 
/* Walk the tree backwards */
	spin_lock_irqsave(&iovad->iova_rbtree_lock, flags);
@@ -166,24 +179,30 @@ static int __alloc_and_insert_iova_range(struct 
iova_domain *iovad,
break;  /* found a free slot */
}
 adjust_limit_pfn:
-   limit_pfn = curr_iova->pfn_lo ? (curr_iova->pfn_lo - 1) : 0;
+   limit_pfn = bounded_next_pfn(iovad, curr_iova);
 move_left:
prev = curr;
curr = rb_prev(curr);
}
 
-   if (!curr) {
-   if (size_aligned)
-   pad_size = iova_get_pad_size(size, limit_pfn);
-   if ((iovad->start_pfn + size + pad_size) > limit_pfn) {
-   spin_unlock_irqrestore(&iovad->iova_rbtree_lock, flags);
-   return -ENOMEM;
-   }
-   }
+   if (size_aligned)
+   pad_size = iova_get_pad_size(size, limit_pfn);
 
/* pfn_lo will point to size aligned address if size_aligned is set */
-   new->pfn_lo = limit_pfn - (size + pad_size) + 1;
-   new->pfn_hi = new->pfn_lo + size - 1;
+   pfn_lo = limit_pfn - (size + pad_size) + 1;
+   pfn_hi = pfn_lo + size - 1;
+
+   /*
+* We're working with unsigned values, so we have to be careful about
+* how we detect a PFN "below" the lowest PFN possible; zero.
+*/
+   if (pfn_lo < iovad->start_pfn || pfn_lo > limit_pfn) {
+   spin_unlock_irqrestore(&iovad->iova_rbtree_lock, flags);
+   return -ENOMEM;
+   }
+
+   new->pfn_lo = pfn_lo;
+   new->pfn_hi = pfn_hi;
 
/* If we have 'prev', it's a valid place to start the insertion. */
	iova_insert_rbtree(&iovad->rbroot, new, prev);
-- 
2.7.4


[PATCH 3.18 12/39] x86/pci-calgary: Fix iommu_free() comparison of unsigned expression >= 0

2017-05-11 Thread Greg Kroah-Hartman
3.18-stable review patch.  If anyone has any objections, please let me know.

--

From: Nikola Pajkovsky 

commit 68dee8e2f2cacc54d038394e70d22411dee89da2 upstream.

commit 8fd524b355da ("x86: Kill bad_dma_address variable") killed the
bad_dma_address variable and used the DMA_ERROR_CODE macro, which is
always zero, in its place. Since dma_addr is unsigned, the statement

   dma_addr >= DMA_ERROR_CODE

is always true and not needed.

arch/x86/kernel/pci-calgary_64.c: In function ‘iommu_free’:
arch/x86/kernel/pci-calgary_64.c:299:2: warning: comparison of unsigned 
expression >= 0 is always true [-Wtype-limits]
  if (unlikely((dma_addr >= DMA_ERROR_CODE) && (dma_addr < badend))) {

Fixes: 8fd524b355da ("x86: Kill bad_dma_address variable")
Signed-off-by: Nikola Pajkovsky 
Cc: iommu@lists.linux-foundation.org
Cc: Jon Mason 
Cc: Muli Ben-Yehuda 
Link: 
http://lkml.kernel.org/r/7612c0f9dd7c1290407dbf8e809def922006920b.1479161177.git.npajkov...@suse.cz
Signed-off-by: Thomas Gleixner 
Signed-off-by: Greg Kroah-Hartman 

---
 arch/x86/kernel/pci-calgary_64.c |2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

--- a/arch/x86/kernel/pci-calgary_64.c
+++ b/arch/x86/kernel/pci-calgary_64.c
@@ -296,7 +296,7 @@ static void iommu_free(struct iommu_tabl
 
/* were we called with bad_dma_address? */
badend = DMA_ERROR_CODE + (EMERGENCY_PAGES * PAGE_SIZE);
-   if (unlikely((dma_addr >= DMA_ERROR_CODE) && (dma_addr < badend))) {
+   if (unlikely(dma_addr < badend)) {
WARN(1, KERN_ERR "Calgary: driver tried unmapping bad DMA "
   "address 0x%Lx\n", dma_addr);
return;



[PATCH] iommu/mediatek: include linux/dma-mapping.h

2017-05-11 Thread Arnd Bergmann
The mediatek iommu driver relied on an implicit include of dma-mapping.h,
but for some reason that is no longer there in 4.12-rc1:

drivers/iommu/mtk_iommu_v1.c: In function 'mtk_iommu_domain_finalise':
drivers/iommu/mtk_iommu_v1.c:233:16: error: implicit declaration of function 
'dma_zalloc_coherent'; did you mean 'debug_dma_alloc_coherent'? 
[-Werror=implicit-function-declaration]
drivers/iommu/mtk_iommu_v1.c: In function 'mtk_iommu_domain_free':
drivers/iommu/mtk_iommu_v1.c:265:2: error: implicit declaration of function 
'dma_free_coherent'; did you mean 'debug_dma_free_coherent'? 
[-Werror=implicit-function-declaration]

This adds an explicit #include to make it build again.

Signed-off-by: Arnd Bergmann 
---
 drivers/iommu/mtk_iommu_v1.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/iommu/mtk_iommu_v1.c b/drivers/iommu/mtk_iommu_v1.c
index a27ef570c328..bc1efbfb9ddf 100644
--- a/drivers/iommu/mtk_iommu_v1.c
+++ b/drivers/iommu/mtk_iommu_v1.c
@@ -18,6 +18,7 @@
 #include 
 #include 
 #include 
+#include <linux/dma-mapping.h>
 #include 
 #include 
 #include 
-- 
2.9.0



Re: [PATCH 1/2] PCI: Save properties required to handle FLR for replay purposes.

2017-05-11 Thread Jean-Philippe Brucker
Hi,

On 10/05/17 19:39, Ashok Raj wrote:
> From: CQ Tang 
> 
> Requires: https://patchwork.kernel.org/patch/9593891

Since your series is likely to go in much earlier than my SVM mess, maybe
you could carry that PCI patch along with it? Or I could resend it on its
own if you prefer.

I'm planning to resend the SVM series in a few weeks but it still won't
make it into mainline since it hasn't run on hardware.

Thanks,
Jean-Philippe

> After an FLR, the PCI state needs to be restored. This patch caches the
> PASID features and the PRI request allocation so that they can be restored.
> 
> Cc: Jean-Phillipe Brucker 
> Cc: David Woodhouse 
> Cc: iommu@lists.linux-foundation.org
> 
> Signed-off-by: CQ Tang 
> Signed-off-by: Ashok Raj 
> ---
>  drivers/pci/ats.c   | 65 
> +
>  drivers/pci/pci.c   |  3 +++
>  include/linux/pci-ats.h | 10 
>  include/linux/pci.h |  6 +
>  4 files changed, 69 insertions(+), 15 deletions(-)
> 
> diff --git a/drivers/pci/ats.c b/drivers/pci/ats.c
> index 2126497..a769955 100644
> --- a/drivers/pci/ats.c
> +++ b/drivers/pci/ats.c
> @@ -160,17 +160,16 @@ int pci_enable_pri(struct pci_dev *pdev, u32 reqs)
>   if (!pos)
>   return -EINVAL;
>  
> - pci_read_config_word(pdev, pos + PCI_PRI_CTRL, &control);
>   pci_read_config_word(pdev, pos + PCI_PRI_STATUS, &status);
> - if ((control & PCI_PRI_CTRL_ENABLE) ||
> - !(status & PCI_PRI_STATUS_STOPPED))
> + if (!(status & PCI_PRI_STATUS_STOPPED))
>   return -EBUSY;
>  
>   pci_read_config_dword(pdev, pos + PCI_PRI_MAX_REQ, &max_requests);
>   reqs = min(max_requests, reqs);
> + pdev->pri_reqs_alloc = reqs;
>   pci_write_config_dword(pdev, pos + PCI_PRI_ALLOC_REQ, reqs);
>  
> - control |= PCI_PRI_CTRL_ENABLE;
> + control = PCI_PRI_CTRL_ENABLE;
>   pci_write_config_word(pdev, pos + PCI_PRI_CTRL, control);
>  
>   pdev->pri_enabled = 1;
> @@ -206,6 +205,29 @@ void pci_disable_pri(struct pci_dev *pdev)
>  EXPORT_SYMBOL_GPL(pci_disable_pri);
>  
>  /**
> + * pci_restore_pri_state - Restore PRI
> + * @pdev: PCI device structure
> + *
> + */
> +void pci_restore_pri_state(struct pci_dev *pdev)
> +{
> +   u16 control = PCI_PRI_CTRL_ENABLE;
> +   u32 reqs = pdev->pri_reqs_alloc;
> +   int pos;
> +
> +   pos = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_PRI);
> +   if (!pos)
> +   return;
> +
> +   if (!pdev->pri_enabled)
> +   return;
> +
> +   pci_write_config_dword(pdev, pos + PCI_PRI_ALLOC_REQ, reqs);
> +   pci_write_config_word(pdev, pos + PCI_PRI_CTRL, control);
> +}
> +EXPORT_SYMBOL_GPL(pci_restore_pri_state);
> +
> +/**
>   * pci_reset_pri - Resets device's PRI state
>   * @pdev: PCI device structure
>   *
> @@ -224,12 +246,7 @@ int pci_reset_pri(struct pci_dev *pdev)
>   if (!pos)
>   return -EINVAL;
>  
> - pci_read_config_word(pdev, pos + PCI_PRI_CTRL, &control);
> - if (control & PCI_PRI_CTRL_ENABLE)
> - return -EBUSY;
> -
> - control |= PCI_PRI_CTRL_RESET;
> -
> + control = PCI_PRI_CTRL_RESET;
>   pci_write_config_word(pdev, pos + PCI_PRI_CTRL, control);
>  
>   return 0;
> @@ -259,12 +276,7 @@ int pci_enable_pasid(struct pci_dev *pdev, int features)
>   if (!pos)
>   return -EINVAL;
>  
> - pci_read_config_word(pdev, pos + PCI_PASID_CTRL, &control);
>   pci_read_config_word(pdev, pos + PCI_PASID_CAP, &supported);
> -
> - if (control & PCI_PASID_CTRL_ENABLE)
> - return -EINVAL;
> -
>   supported &= PCI_PASID_CAP_EXEC | PCI_PASID_CAP_PRIV;
>  
>   /* User wants to enable anything unsupported? */
> @@ -272,6 +284,7 @@ int pci_enable_pasid(struct pci_dev *pdev, int features)
>   return -EINVAL;
>  
>   control = PCI_PASID_CTRL_ENABLE | features;
> + pdev->pasid_features = features;
>  
>   pci_write_config_word(pdev, pos + PCI_PASID_CTRL, control);
>  
> @@ -305,6 +318,28 @@ void pci_disable_pasid(struct pci_dev *pdev)
>  EXPORT_SYMBOL_GPL(pci_disable_pasid);
>  
>  /**
> + * pci_restore_pasid_state - Restore PASID capabilities.
> + * @pdev: PCI device structure
> + *
> + */
> +void pci_restore_pasid_state(struct pci_dev *pdev)
> +{
> +   u16 control;
> +   int pos;
> +
> +   pos = pci_find_ext_capability(pdev, PCI_EXT_CAP_ID_PASID);
> +   if (!pos)
> +   return;
> +
> +   if (!pdev->pasid_enabled)
> +   return;
> +
> +   control = PCI_PASID_CTRL_ENABLE | pdev->pasid_features;
> +   pci_write_config_word(pdev, pos + PCI_PASID_CTRL, control);
> +}
> +EXPORT_SYMBOL_GPL(pci_restore_pasid_state);
> +
> +/**
>   * pci_pasid_features - Check which PASID features are supported
>   * @pdev: PCI device structure
>   *
> diff --git a/drivers/pci/pci.c b/drivers/pci/pci.c
> index 7904d02..c9a6510 100644
> --- a/drivers/pci/pci.c
> +++ b/drivers/pci/pci.c
> @@ -28,6 +28,7 @@
>  #include 
>  #include 
>  #include 
> +#include 
>  #include 
>  

Re: [v5 1/4] ACPICA: IORT: Add Cavium ThunderX2 SMMUv3 model definition.

2017-05-11 Thread Will Deacon
On Thu, May 11, 2017 at 02:26:02AM +0200, Rafael J. Wysocki wrote:
> On Wednesday, May 10, 2017 05:01:55 PM Geetha sowjanya wrote:
> > From: Linu Cherian 
> > 
> > Add SMMUv3 model definition for ThunderX2.
> > 
> > Signed-off-by: Linu Cherian 
> > Signed-off-by: Geetha Sowjanya 
> 
> This is an ACPICA change, but you have not included the ACPICA maintainers
> into your original CC list (added now).
> 
> Bob, Lv, how should this be routed?
> 
> Do you want to apply this patch upstream first or can we make this change in
> Linux and upstream in parallel?  That shouldn't be a big deal, right?

I think we're still waiting for the updated IORT document to be published (I
think this should be in the next week or so), so I don't think we should
commit the new ID before that happens.

Will

> > ---
> >  include/acpi/actbl2.h | 2 ++
> >  1 file changed, 2 insertions(+)
> > 
> > diff --git a/include/acpi/actbl2.h b/include/acpi/actbl2.h
> > index faa9f2c..76a6f5d 100644
> > --- a/include/acpi/actbl2.h
> > +++ b/include/acpi/actbl2.h
> > @@ -779,6 +779,8 @@ struct acpi_iort_smmu {
> >  #define ACPI_IORT_SMMU_CORELINK_MMU400  0x0002 /* ARM Corelink MMU-400 
> > */
> >  #define ACPI_IORT_SMMU_CORELINK_MMU500  0x0003 /* ARM Corelink MMU-500 
> > */
> >  
> > +#define ACPI_IORT_SMMU_V3_CAVIUM_CN99XX 0x0002 /* Cavium ThunderX2 
> > SMMUv3 */
> > +
> >  /* Masks for Flags field above */
> >  
> >  #define ACPI_IORT_SMMU_DVM_SUPPORTED(1)
> > 
> 
> Thanks,
> Rafael
> 


Re: [PATCH] iommu: remove unnecessary code

2017-05-11 Thread David Woodhouse
On Wed, 2017-05-10 at 22:49 -0500, Gustavo A. R. Silva wrote:
> did_old is an unsigned variable, and a greater-than-or-equal-to-zero
> comparison of an unsigned variable is always true.
> 
> Addresses-Coverity-ID: 1398477
> Signed-off-by: Gustavo A. R. Silva 

So... why do you think that check was there? Do you think it's possible
that someone mistakenly *thought* it could be negative? What were they
actually checking for? Have you actually *fixed* a bug here, or have
you just masked it?

Even if you've done all that analysis and it *is* correct just to drop
the comparison rather than fixing it, you need to put verbiage to that
effect into the commit comment.

Never write patches to "fix warnings". Always to fix bugs.

> ---
>  drivers/iommu/intel-iommu.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
> index d412a31..98daf4a 100644
> --- a/drivers/iommu/intel-iommu.c
> +++ b/drivers/iommu/intel-iommu.c
> @@ -2050,7 +2050,7 @@ static int domain_context_mapping_one(struct 
> dmar_domain *domain,
>   if (context_copied(context)) {
>   u16 did_old = context_domain_id(context);
>  
> - if (did_old >= 0 && did_old < cap_ndoms(iommu->cap))
> + if (did_old < cap_ndoms(iommu->cap))
>   iommu->flush.flush_context(iommu, did_old,
>      (((u16)bus) << 8) | devfn,
>      DMA_CCMD_MASK_NOBIT,


[PATCH] iommu: remove unnecessary code

2017-05-11 Thread Gustavo A. R. Silva
did_old is an unsigned variable, and a greater-than-or-equal-to-zero
comparison of an unsigned variable is always true.

Addresses-Coverity-ID: 1398477
Signed-off-by: Gustavo A. R. Silva 
---
 drivers/iommu/intel-iommu.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/iommu/intel-iommu.c b/drivers/iommu/intel-iommu.c
index d412a31..98daf4a 100644
--- a/drivers/iommu/intel-iommu.c
+++ b/drivers/iommu/intel-iommu.c
@@ -2050,7 +2050,7 @@ static int domain_context_mapping_one(struct dmar_domain 
*domain,
if (context_copied(context)) {
u16 did_old = context_domain_id(context);
 
-   if (did_old >= 0 && did_old < cap_ndoms(iommu->cap))
+   if (did_old < cap_ndoms(iommu->cap))
iommu->flush.flush_context(iommu, did_old,
   (((u16)bus) << 8) | devfn,
   DMA_CCMD_MASK_NOBIT,
-- 
2.5.0
