Re: [RFC PATCH v1] iommu/io-pgtable-arm-v7s: Check for leaf entry right after finding it

2017-02-21 Thread Oleksandr Tyshchenko
Hi, Robin.

On Tue, Feb 21, 2017 at 2:00 PM, Robin Murphy <robin.mur...@arm.com> wrote:
> On 16/02/17 13:52, Oleksandr Tyshchenko wrote:
>> From: Oleksandr Tyshchenko <oleksandr_tyshche...@epam.com>
>>
>> Do a check for an already installed leaf entry at the current level
>> before performing any actions when trying to map.
>>
>> This check is already present in arm_v7s_init_pte(), i.e. before
>> installing a new leaf entry at the current level if the conditions to
>> do so are met (num_entries > 0).
>>
>> But this might be insufficient in the case when we have already
>> installed a block mapping at this level and it is not yet time to
>> install a new leaf entry (num_entries == 0).
>> In that case we continue walking the page table down with a wrong
>> pointer to the next level.
>>
>> So, move the check from arm_v7s_init_pte() to __arm_v7s_map() in
>> order to cover all cases.
>
> Would it not be more logical (and simpler) to just check that the thing
> we dereference is valid to dereference when we dereference it? i.e.:
>
> -8<-
> diff --git a/drivers/iommu/io-pgtable-arm-v7s.c b/drivers/iommu/io-pgtable-arm-v7s.c
> index 0769276c0537..f3112f9ff494 100644
> --- a/drivers/iommu/io-pgtable-arm-v7s.c
> +++ b/drivers/iommu/io-pgtable-arm-v7s.c
> @@ -418,8 +418,10 @@ static int __arm_v7s_map(struct arm_v7s_io_pgtable *data, unsigned long iova,
> pte |= ARM_V7S_ATTR_NS_TABLE;
>
> __arm_v7s_set_pte(ptep, pte, 1, cfg);
> -   } else {
> +   } else if (ARM_V7S_PTE_IS_TABLE(pte, lvl)) {
> cptep = iopte_deref(pte, lvl);
> +   } else {
> +   return -EEXIST;
> }
>
> /* Rinse, repeat */
> ->8-

Agree. Sounds reasonable.
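To make the difference concrete, here is a minimal, self-contained C model of the restructured walk Robin suggests above. All names (pte_kind, toy_pte, walk_step) are invented for this sketch and are not the kernel's; it only illustrates the three-way branch: install a new table entry, descend into an existing table, or bail out with -EEXIST on a leaf.

```c
/* Toy model of the restructured page-table walk step.  The names here
 * (pte_kind, toy_pte, walk_step) are invented for this sketch and do
 * not exist in the kernel. */
#include <assert.h>
#include <errno.h>
#include <stddef.h>

enum pte_kind { PTE_NONE, PTE_TABLE, PTE_BLOCK };

struct toy_pte {
	enum pte_kind kind;
	struct toy_pte *next;	/* valid only when kind == PTE_TABLE */
};

/* Returns 0 and sets *cptep on success, -EEXIST if a leaf is in the way. */
static int walk_step(struct toy_pte *ptep, struct toy_pte *new_table,
		     struct toy_pte **cptep)
{
	if (ptep->kind == PTE_NONE) {
		/* Empty slot: install a next-level table entry. */
		ptep->kind = PTE_TABLE;
		ptep->next = new_table;
		*cptep = new_table;
	} else if (ptep->kind == PTE_TABLE) {
		*cptep = ptep->next;	/* safe to dereference */
	} else {
		return -EEXIST;		/* block mapping: unmap first */
	}
	return 0;
}
```

With the original two-way else, the block case would have fallen into the dereference branch and the walk would have continued through a bogus next-level pointer.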

>
> I think the equivalent could be done in LPAE as well.

OK.

I will resend both modified patches without the RFC prefix, right?

>
> Robin.
>
>> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshche...@epam.com>
>> ---
>> This patch does similar things to the patch I pushed a week ago for
>> the long-descriptor format [1].
>> The reason why this patch is RFC is that I am not sure I did the
>> right thing, and I haven't even tested it.
>>
>> [1] https://lists.linuxfoundation.org/pipermail/iommu/2017-February/020411.html
>> ---
>> ---
>>  drivers/iommu/io-pgtable-arm-v7s.c | 15 ++++++++++-----
>>  1 file changed, 10 insertions(+), 5 deletions(-)
>>
>> diff --git a/drivers/iommu/io-pgtable-arm-v7s.c b/drivers/iommu/io-pgtable-arm-v7s.c
>> index f50e51c..7f7594b 100644
>> --- a/drivers/iommu/io-pgtable-arm-v7s.c
>> +++ b/drivers/iommu/io-pgtable-arm-v7s.c
>> @@ -364,10 +364,6 @@ static int arm_v7s_init_pte(struct arm_v7s_io_pgtable *data,
>>   if (WARN_ON(__arm_v7s_unmap(data, iova + i * sz,
>>   sz, lvl, tblp) != sz))
>>   return -EINVAL;
>> - } else if (ptep[i]) {
>> - /* We require an unmap first */
>> - WARN_ON(!selftest_running);
>> - return -EEXIST;
>>   }
>>
>>   pte |= ARM_V7S_PTE_TYPE_PAGE;
>> @@ -392,11 +388,20 @@ static int __arm_v7s_map(struct arm_v7s_io_pgtable *data, unsigned long iova,
>>  {
>>   struct io_pgtable_cfg *cfg = &data->iop.cfg;
>>   arm_v7s_iopte pte, *cptep;
>> - int num_entries = size >> ARM_V7S_LVL_SHIFT(lvl);
>> + int i = 0, num_entries = size >> ARM_V7S_LVL_SHIFT(lvl);
>>
>>   /* Find our entry at the current level */
>>   ptep += ARM_V7S_LVL_IDX(iova, lvl);
>>
>> + /* Check for already installed leaf entry */
>> +     do {
>> + if (ptep[i] && !ARM_V7S_PTE_IS_TABLE(ptep[i], lvl)) {
>> + /* We require an unmap first */
>> + WARN_ON(!selftest_running);
>> + return -EEXIST;
>> + }
>> + } while (++i < num_entries);
>> +
>>   /* If we can install a leaf entry at this level, then do so */
>>   if (num_entries)
>>   return arm_v7s_init_pte(data, iova, paddr, prot,
>>
>



-- 
Regards,

Oleksandr Tyshchenko
___
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu


Re: [RFC PATCH v1] iommu/io-pgtable-arm: Check for leaf entry right after finding it

2017-02-13 Thread Oleksandr Tyshchenko
On Mon, Feb 13, 2017 at 1:27 PM, Will Deacon <will.dea...@arm.com> wrote:
> On Mon, Feb 13, 2017 at 01:07:02PM +0200, Oleksandr Tyshchenko wrote:
>> Any comments?
>
> Looks fine to me, but I don't think it's urgent and I already sent my
> SMMU pull for 4.11. I'll send this as a fix after the merge window.
OK. Thank you.

>
> I suspect we need something similar for io-pgtable-arm-v7s.c, too.
Agree. On the whole I will be able to make a similar patch for arm-v7s,
but I won't be 100% sure about it since I don't have any boards with an
arm-v7s-compatible IOMMU installed.

Shall I make a patch for arm-v7s too?

>
> Will



-- 
Regards,

Oleksandr Tyshchenko


Re: [RFC PATCH v1] iommu/io-pgtable-arm: Check for leaf entry right after finding it

2017-02-13 Thread Oleksandr Tyshchenko
Hi, all.

Any comments?

On Thu, Feb 9, 2017 at 3:56 PM, Oleksandr Tyshchenko
<olekst...@gmail.com> wrote:
> From: Oleksandr Tyshchenko <oleksandr_tyshche...@epam.com>
>
> Do a check for an already installed leaf entry at the current level
> before performing any actions when trying to map.
>
> This check is already present in arm_lpae_init_pte(), i.e. before
> installing a new leaf entry at the current level if the conditions to
> do so are met (size == block_size).
>
> But this might be insufficient in the case when we have already
> installed a block mapping at this level and it is not yet time to
> install a new leaf entry (size != block_size).
> In that case we continue walking the page table down with a wrong
> pointer to the next level.
>
> So, move the check from arm_lpae_init_pte() to __arm_lpae_map() in
> order to cover all cases.
>
> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshche...@epam.com>
> ---
> I hope that the following actions can help to catch it:
> 1. Call iommu_map for a block mapping (e.g. 2M) at some address
>    (e.g. iova 0x8000 pa 0x8000).
> 2. Call iommu_map for a page mapping (4k) at some address from
>    the previous mapped region (e.g. iova 0x80008000 pa 0x9000).
>
> I understand that iommu_map should normally be followed by
> iommu_unmap, but different scenarios may occur.
> ---
> ---
>  drivers/iommu/io-pgtable-arm.c | 13 ++++++++-----
>  1 file changed, 8 insertions(+), 5 deletions(-)
>
> diff --git a/drivers/iommu/io-pgtable-arm.c b/drivers/iommu/io-pgtable-arm.c
> index f5c90e1..ebdb82f 100644
> --- a/drivers/iommu/io-pgtable-arm.c
> +++ b/drivers/iommu/io-pgtable-arm.c
> @@ -272,11 +272,7 @@ static int arm_lpae_init_pte(struct arm_lpae_io_pgtable *data,
> arm_lpae_iopte pte = prot;
> struct io_pgtable_cfg *cfg = &data->iop.cfg;
>
> -   if (iopte_leaf(*ptep, lvl)) {
> -   /* We require an unmap first */
> -   WARN_ON(!selftest_running);
> -   return -EEXIST;
> -   } else if (iopte_type(*ptep, lvl) == ARM_LPAE_PTE_TYPE_TABLE) {
> +   if (iopte_type(*ptep, lvl) == ARM_LPAE_PTE_TYPE_TABLE) {
> /*
>  * We need to unmap and free the old table before
>  * overwriting it with a block entry.
> @@ -315,6 +311,13 @@ static int __arm_lpae_map(struct arm_lpae_io_pgtable *data, unsigned long iova,
> /* Find our entry at the current level */
> ptep += ARM_LPAE_LVL_IDX(iova, lvl, data);
>
> +   /* Check for already installed leaf entry */
> +   if (iopte_leaf(*ptep, lvl)) {
> +   /* We require an unmap first */
> +   WARN_ON(!selftest_running);
> +   return -EEXIST;
> +   }
> +
> /* If we can install a leaf entry at this level, then do so */
> if (size == block_size && (size & cfg->pgsize_bitmap))
> return arm_lpae_init_pte(data, iova, paddr, prot, lvl, ptep);
> --
> 2.7.4
>



-- 
Regards,

Oleksandr Tyshchenko


[RFC PATCH v1] iommu/io-pgtable-arm: Check for leaf entry right after finding it

2017-02-09 Thread Oleksandr Tyshchenko
From: Oleksandr Tyshchenko <oleksandr_tyshche...@epam.com>

Do a check for an already installed leaf entry at the current level
before performing any actions when trying to map.

This check is already present in arm_lpae_init_pte(), i.e. before
installing a new leaf entry at the current level if the conditions to
do so are met (size == block_size).

But this might be insufficient in the case when we have already
installed a block mapping at this level and it is not yet time to
install a new leaf entry (size != block_size).
In that case we continue walking the page table down with a wrong
pointer to the next level.

So, move the check from arm_lpae_init_pte() to __arm_lpae_map() in
order to cover all cases.

Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshche...@epam.com>
---
I hope that the following actions can help to catch it:
1. Call iommu_map for a block mapping (e.g. 2M) at some address
   (e.g. iova 0x8000 pa 0x8000).
2. Call iommu_map for a page mapping (4k) at some address from
   the previous mapped region (e.g. iova 0x80008000 pa 0x9000).

I understand that iommu_map should normally be followed by
iommu_unmap, but different scenarios may occur.
---
---
 drivers/iommu/io-pgtable-arm.c | 13 ++++++++-----
 1 file changed, 8 insertions(+), 5 deletions(-)

diff --git a/drivers/iommu/io-pgtable-arm.c b/drivers/iommu/io-pgtable-arm.c
index f5c90e1..ebdb82f 100644
--- a/drivers/iommu/io-pgtable-arm.c
+++ b/drivers/iommu/io-pgtable-arm.c
@@ -272,11 +272,7 @@ static int arm_lpae_init_pte(struct arm_lpae_io_pgtable *data,
arm_lpae_iopte pte = prot;
struct io_pgtable_cfg *cfg = &data->iop.cfg;
 
-   if (iopte_leaf(*ptep, lvl)) {
-   /* We require an unmap first */
-   WARN_ON(!selftest_running);
-   return -EEXIST;
-   } else if (iopte_type(*ptep, lvl) == ARM_LPAE_PTE_TYPE_TABLE) {
+   if (iopte_type(*ptep, lvl) == ARM_LPAE_PTE_TYPE_TABLE) {
/*
 * We need to unmap and free the old table before
 * overwriting it with a block entry.
@@ -315,6 +311,13 @@ static int __arm_lpae_map(struct arm_lpae_io_pgtable *data, unsigned long iova,
/* Find our entry at the current level */
ptep += ARM_LPAE_LVL_IDX(iova, lvl, data);
 
+   /* Check for already installed leaf entry */
+   if (iopte_leaf(*ptep, lvl)) {
+   /* We require an unmap first */
+   WARN_ON(!selftest_running);
+   return -EEXIST;
+   }
+
/* If we can install a leaf entry at this level, then do so */
if (size == block_size && (size & cfg->pgsize_bitmap))
return arm_lpae_init_pte(data, iova, paddr, prot, lvl, ptep);
-- 
2.7.4
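The failure mode the reproduction notes above describe can be demonstrated with a toy two-level table. This is a sketch under invented names (map_block, map_page, E_BLOCK, ...), not the kernel code; it shows how the moved leaf-entry check turns the second mapping attempt into a clean -EEXIST instead of a walk through a block entry.

```c
/* Toy two-level page table: level 1 holds block mappings or pointers
 * to level-2 tables of page mappings.  All names are invented for
 * this model. */
#include <assert.h>
#include <errno.h>
#include <stdlib.h>

#define ENTRIES 4
enum { E_NONE, E_TABLE, E_BLOCK, E_PAGE };

struct ent { int kind; struct ent *tbl; };

static struct ent root[ENTRIES];	/* level 1, zero == E_NONE */

/* Map a large block at level-1 index idx1. */
static int map_block(unsigned idx1)
{
	if (root[idx1].kind != E_NONE)
		return -EEXIST;
	root[idx1].kind = E_BLOCK;
	return 0;
}

/* Map a small page at (idx1, idx2), walking through level 1. */
static int map_page(unsigned idx1, unsigned idx2)
{
	struct ent *l1 = &root[idx1];

	/* Check for an already installed leaf entry (the moved check). */
	if (l1->kind == E_BLOCK || l1->kind == E_PAGE)
		return -EEXIST;
	if (l1->kind == E_NONE) {
		l1->tbl = calloc(ENTRIES, sizeof(*l1->tbl));
		l1->kind = E_TABLE;
	}
	if (l1->tbl[idx2].kind != E_NONE)
		return -EEXIST;
	l1->tbl[idx2].kind = E_PAGE;
	return 0;
}
```

Without the leaf check, map_page() would treat the block entry's payload as a level-2 table pointer, which is exactly the "wrong pointer to the next level" the patch description warns about.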



[RFC PATCH v1] iommu/io-pgtable-arm-v7s: Check for leaf entry right after finding it

2017-02-16 Thread Oleksandr Tyshchenko
From: Oleksandr Tyshchenko <oleksandr_tyshche...@epam.com>

Do a check for an already installed leaf entry at the current level
before performing any actions when trying to map.

This check is already present in arm_v7s_init_pte(), i.e. before
installing a new leaf entry at the current level if the conditions to
do so are met (num_entries > 0).

But this might be insufficient in the case when we have already
installed a block mapping at this level and it is not yet time to
install a new leaf entry (num_entries == 0).
In that case we continue walking the page table down with a wrong
pointer to the next level.

So, move the check from arm_v7s_init_pte() to __arm_v7s_map() in
order to cover all cases.

Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshche...@epam.com>
---
This patch does similar things to the patch I pushed a week ago for
the long-descriptor format [1].
The reason why this patch is RFC is that I am not sure I did the
right thing, and I haven't even tested it.

[1] https://lists.linuxfoundation.org/pipermail/iommu/2017-February/020411.html
---
---
 drivers/iommu/io-pgtable-arm-v7s.c | 15 ++++++++++-----
 1 file changed, 10 insertions(+), 5 deletions(-)

diff --git a/drivers/iommu/io-pgtable-arm-v7s.c b/drivers/iommu/io-pgtable-arm-v7s.c
index f50e51c..7f7594b 100644
--- a/drivers/iommu/io-pgtable-arm-v7s.c
+++ b/drivers/iommu/io-pgtable-arm-v7s.c
@@ -364,10 +364,6 @@ static int arm_v7s_init_pte(struct arm_v7s_io_pgtable *data,
if (WARN_ON(__arm_v7s_unmap(data, iova + i * sz,
sz, lvl, tblp) != sz))
return -EINVAL;
-   } else if (ptep[i]) {
-   /* We require an unmap first */
-   WARN_ON(!selftest_running);
-   return -EEXIST;
}
 
pte |= ARM_V7S_PTE_TYPE_PAGE;
@@ -392,11 +388,20 @@ static int __arm_v7s_map(struct arm_v7s_io_pgtable *data, unsigned long iova,
 {
struct io_pgtable_cfg *cfg = &data->iop.cfg;
arm_v7s_iopte pte, *cptep;
-   int num_entries = size >> ARM_V7S_LVL_SHIFT(lvl);
+   int i = 0, num_entries = size >> ARM_V7S_LVL_SHIFT(lvl);
 
/* Find our entry at the current level */
ptep += ARM_V7S_LVL_IDX(iova, lvl);
 
+   /* Check for already installed leaf entry */
+   do {
+   if (ptep[i] && !ARM_V7S_PTE_IS_TABLE(ptep[i], lvl)) {
+   /* We require an unmap first */
+   WARN_ON(!selftest_running);
+   return -EEXIST;
+   }
+   } while (++i < num_entries);
+
/* If we can install a leaf entry at this level, then do so */
if (num_entries)
return arm_v7s_init_pte(data, iova, paddr, prot,
-- 
2.7.4



Re: [RFC PATCH v1] iommu/io-pgtable-arm-v7s: Check for leaf entry right after finding it

2017-02-16 Thread Oleksandr Tyshchenko
Hi, Robin.

It would be greatly appreciated if you could test this patch if nobody
has objections to the patch itself.

On Thu, Feb 16, 2017 at 3:52 PM, Oleksandr Tyshchenko
<olekst...@gmail.com> wrote:
> From: Oleksandr Tyshchenko <oleksandr_tyshche...@epam.com>
>
> Do a check for an already installed leaf entry at the current level
> before performing any actions when trying to map.
>
> This check is already present in arm_v7s_init_pte(), i.e. before
> installing a new leaf entry at the current level if the conditions to
> do so are met (num_entries > 0).
>
> But this might be insufficient in the case when we have already
> installed a block mapping at this level and it is not yet time to
> install a new leaf entry (num_entries == 0).
> In that case we continue walking the page table down with a wrong
> pointer to the next level.
>
> So, move the check from arm_v7s_init_pte() to __arm_v7s_map() in
> order to cover all cases.
>
> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshche...@epam.com>
> ---
> This patch does similar things to the patch I pushed a week ago for
> the long-descriptor format [1].
> The reason why this patch is RFC is that I am not sure I did the
> right thing, and I haven't even tested it.
>
> [1] https://lists.linuxfoundation.org/pipermail/iommu/2017-February/020411.html
> ---
> ---
>  drivers/iommu/io-pgtable-arm-v7s.c | 15 ++++++++++-----
>  1 file changed, 10 insertions(+), 5 deletions(-)
>
> diff --git a/drivers/iommu/io-pgtable-arm-v7s.c b/drivers/iommu/io-pgtable-arm-v7s.c
> index f50e51c..7f7594b 100644
> --- a/drivers/iommu/io-pgtable-arm-v7s.c
> +++ b/drivers/iommu/io-pgtable-arm-v7s.c
> @@ -364,10 +364,6 @@ static int arm_v7s_init_pte(struct arm_v7s_io_pgtable *data,
> if (WARN_ON(__arm_v7s_unmap(data, iova + i * sz,
> sz, lvl, tblp) != sz))
> return -EINVAL;
> -   } else if (ptep[i]) {
> -   /* We require an unmap first */
> -   WARN_ON(!selftest_running);
> -   return -EEXIST;
> }
>
> pte |= ARM_V7S_PTE_TYPE_PAGE;
> @@ -392,11 +388,20 @@ static int __arm_v7s_map(struct arm_v7s_io_pgtable *data, unsigned long iova,
>  {
> struct io_pgtable_cfg *cfg = &data->iop.cfg;
> arm_v7s_iopte pte, *cptep;
> -   int num_entries = size >> ARM_V7S_LVL_SHIFT(lvl);
> +   int i = 0, num_entries = size >> ARM_V7S_LVL_SHIFT(lvl);
>
> /* Find our entry at the current level */
> ptep += ARM_V7S_LVL_IDX(iova, lvl);
>
> +   /* Check for already installed leaf entry */
> +   do {
> +   if (ptep[i] && !ARM_V7S_PTE_IS_TABLE(ptep[i], lvl)) {
> +   /* We require an unmap first */
> +   WARN_ON(!selftest_running);
> +   return -EEXIST;
> +   }
> +   } while (++i < num_entries);
> +
> /* If we can install a leaf entry at this level, then do so */
> if (num_entries)
> return arm_v7s_init_pte(data, iova, paddr, prot,
> --
> 2.7.4
>



-- 
Regards,

Oleksandr Tyshchenko


[PATCH v1 1/2] iommu/io-pgtable-arm: Check for leaf entry before dereferencing it

2017-02-27 Thread Oleksandr Tyshchenko
From: Oleksandr Tyshchenko <oleksandr_tyshche...@epam.com>

Do a check for already installed leaf entry at the current level before
dereferencing it in order to avoid walking the page table down with
wrong pointer to the next level.

Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshche...@epam.com>
CC: Will Deacon <will.dea...@arm.com>
CC: Robin Murphy <robin.mur...@arm.com>
---
 drivers/iommu/io-pgtable-arm.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/drivers/iommu/io-pgtable-arm.c b/drivers/iommu/io-pgtable-arm.c
index feacc54..f9bc6eb 100644
--- a/drivers/iommu/io-pgtable-arm.c
+++ b/drivers/iommu/io-pgtable-arm.c
@@ -335,8 +335,12 @@ static int __arm_lpae_map(struct arm_lpae_io_pgtable *data, unsigned long iova,
if (cfg->quirks & IO_PGTABLE_QUIRK_ARM_NS)
pte |= ARM_LPAE_PTE_NSTABLE;
__arm_lpae_set_pte(ptep, pte, cfg);
-   } else {
+   } else if (!iopte_leaf(pte, lvl)) {
cptep = iopte_deref(pte, data);
+   } else {
+   /* We require an unmap first */
+   WARN_ON(!selftest_running);
+   return -EEXIST;
}
 
/* Rinse, repeat */
-- 
2.7.4



[PATCH v1 2/2] iommu/io-pgtable-arm-v7s: Check for leaf entry before dereferencing it

2017-02-27 Thread Oleksandr Tyshchenko
From: Oleksandr Tyshchenko <oleksandr_tyshche...@epam.com>

Do a check for already installed leaf entry at the current level before
dereferencing it in order to avoid walking the page table down with
wrong pointer to the next level.

Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshche...@epam.com>
CC: Will Deacon <will.dea...@arm.com>
CC: Robin Murphy <robin.mur...@arm.com>
---
 drivers/iommu/io-pgtable-arm-v7s.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/drivers/iommu/io-pgtable-arm-v7s.c b/drivers/iommu/io-pgtable-arm-v7s.c
index 1c049e2..8d6ca28 100644
--- a/drivers/iommu/io-pgtable-arm-v7s.c
+++ b/drivers/iommu/io-pgtable-arm-v7s.c
@@ -422,8 +422,12 @@ static int __arm_v7s_map(struct arm_v7s_io_pgtable *data, unsigned long iova,
pte |= ARM_V7S_ATTR_NS_TABLE;
 
__arm_v7s_set_pte(ptep, pte, 1, cfg);
-   } else {
+   } else if (ARM_V7S_PTE_IS_TABLE(pte, lvl)) {
cptep = iopte_deref(pte, lvl);
+   } else {
+   /* We require an unmap first */
+   WARN_ON(!selftest_running);
+   return -EEXIST;
}
 
/* Rinse, repeat */
-- 
2.7.4



[PATCH v1 0/2] Misc fixes for ARM page table allocator (long/short descriptor format)

2017-02-27 Thread Oleksandr Tyshchenko
Hi.

Here is a small patch series containing the same fix for both
descriptor formats, based on the preceding RFC patches:
https://lists.linuxfoundation.org/pipermail/iommu/2017-February/020411.html
https://lists.linuxfoundation.org/pipermail/iommu/2017-February/020477.html

Oleksandr Tyshchenko (2):
  iommu/io-pgtable-arm: Check for leaf entry before dereferencing it
  iommu/io-pgtable-arm-v7s: Check for leaf entry before dereferencing it

 drivers/iommu/io-pgtable-arm-v7s.c | 6 +++++-
 drivers/iommu/io-pgtable-arm.c | 6 +++++-
 2 files changed, 10 insertions(+), 2 deletions(-)

-- 
2.7.4



Re: [PATCH v1 0/2] Misc fixes for ARM page table allocator (long/short descriptor format)

2017-03-16 Thread Oleksandr Tyshchenko
Hi, all.

Any comments?

On Mon, Feb 27, 2017 at 2:30 PM, Oleksandr Tyshchenko
<olekst...@gmail.com> wrote:
> Hi.
>
> Here is a small patch series containing the same fix for both
> descriptor formats, based on the preceding RFC patches:
> https://lists.linuxfoundation.org/pipermail/iommu/2017-February/020411.html
> https://lists.linuxfoundation.org/pipermail/iommu/2017-February/020477.html
>
> Oleksandr Tyshchenko (2):
>   iommu/io-pgtable-arm: Check for leaf entry before dereferencing it
>   iommu/io-pgtable-arm-v7s: Check for leaf entry before dereferencing it
>
>  drivers/iommu/io-pgtable-arm-v7s.c | 6 +++++-
>  drivers/iommu/io-pgtable-arm.c | 6 +++++-
>  2 files changed, 10 insertions(+), 2 deletions(-)
>
> --
> 2.7.4
>



-- 
Regards,

Oleksandr Tyshchenko


Re: [PATCH v1] iommu/ipmmu-vmsa: Set context_id to non-existent value if allocation failed

2017-08-22 Thread Oleksandr Tyshchenko
Hi,

On Tue, Aug 22, 2017 at 5:24 PM, Joerg Roedel <j...@8bytes.org> wrote:
> On Mon, Aug 21, 2017 at 03:40:41PM +0300, Oleksandr Tyshchenko wrote:
>> From: Oleksandr Tyshchenko <oleksandr_tyshche...@epam.com>
>>
>> In ipmmu_domain_init_context() we are trying to allocate a context,
>> and if the allocation fails we will call free_io_pgtable_ops(),
>> but "domain->context_id" hasn't been initialized yet (likely it is 0
>> because of kzalloc). Given the following call stack:
>> free_io_pgtable_ops() -> io_pgtable_tlb_flush_all() ->
>> ipmmu_tlb_flush_all() -> ipmmu_tlb_invalidate()
>> we will get a mistaken cache flush for the context pointed to by the
>> uninitialized "domain->context_id".
>>
>> So, set context_id to a non-existent value (IPMMU_CTX_MAX) before
>> calling free_io_pgtable_ops() and check it for a valid value
>> (< IPMMU_CTX_MAX) before calling ipmmu_tlb_invalidate().
>>
>> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshche...@epam.com>
>> ---
>>  drivers/iommu/ipmmu-vmsa.c | 4 ++++
>>  1 file changed, 4 insertions(+)
>
> Applied, thanks.
Thank you.
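The guard this patch adds can be modeled in a few lines of plain C. The structure and counter below (toy_domain, flushes) are invented for the sketch; it only shows that an unallocated context (sentinel IPMMU_CTX_MAX) makes the flush path a no-op instead of invalidating a context that was never set up.

```c
/* Sketch of the flush guard: an invalid context_id short-circuits the
 * flush.  toy_domain and flushes are invented for this model. */
#include <assert.h>

#define IPMMU_CTX_MAX 8

struct toy_domain { unsigned context_id; };

static int flushes;	/* counts real invalidations in this model */

static void toy_tlb_invalidate(struct toy_domain *d)
{
	(void)d;
	flushes++;
}

static void toy_tlb_flush_all(struct toy_domain *d)
{
	if (d->context_id >= IPMMU_CTX_MAX)
		return;		/* context never allocated: skip */
	toy_tlb_invalidate(d);
}
```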

>

-- 
Regards,

Oleksandr Tyshchenko


Re: [PATCH v1] iommu/ipmmu-vmsa: Set context_id to non-existent value if allocation failed

2017-08-23 Thread Oleksandr Tyshchenko
Hi, Laurent.

On Wed, Aug 23, 2017 at 12:25 AM, Laurent Pinchart
<laurent.pinch...@ideasonboard.com> wrote:
> Hi Oleksandr,
>
> Thank you for the patch.
>
> On Monday, 21 August 2017 15:40:41 EEST Oleksandr Tyshchenko wrote:
>> From: Oleksandr Tyshchenko <oleksandr_tyshche...@epam.com>
>>
>> In ipmmu_domain_init_context() we are trying to allocate a context,
>> and if the allocation fails we will call free_io_pgtable_ops(),
>> but "domain->context_id" hasn't been initialized yet (likely it is 0
>> because of kzalloc). Given the following call stack:
>> free_io_pgtable_ops() -> io_pgtable_tlb_flush_all() ->
>> ipmmu_tlb_flush_all() -> ipmmu_tlb_invalidate()
>> we will get a mistaken cache flush for the context pointed to by the
>> uninitialized "domain->context_id".
>>
>> So, set context_id to a non-existent value (IPMMU_CTX_MAX) before
>> calling free_io_pgtable_ops() and check it for a valid value
>> (< IPMMU_CTX_MAX) before calling ipmmu_tlb_invalidate().
>>
>> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshche...@epam.com>
>> ---
>>  drivers/iommu/ipmmu-vmsa.c | 4 ++++
>>  1 file changed, 4 insertions(+)
>>
>> diff --git a/drivers/iommu/ipmmu-vmsa.c b/drivers/iommu/ipmmu-vmsa.c
>> index 2a38aa1..5b226c0 100644
>> --- a/drivers/iommu/ipmmu-vmsa.c
>> +++ b/drivers/iommu/ipmmu-vmsa.c
>> @@ -303,6 +303,9 @@ static void ipmmu_tlb_flush_all(void *cookie)
>>  {
>>   struct ipmmu_vmsa_domain *domain = cookie;
>>
>> + if (domain->context_id >= IPMMU_CTX_MAX)
>> + return;
>> +
>>   ipmmu_tlb_invalidate(domain);
>>  }
>>
>> @@ -380,6 +383,7 @@ static int ipmmu_domain_init_context(struct
>> ipmmu_vmsa_domain *domain) */
>>   ret = ipmmu_domain_allocate_context(domain->mmu, domain);
>>   if (ret == IPMMU_CTX_MAX) {
>> + domain->context_id = IPMMU_CTX_MAX;
>
> Wouldn't it make more sense to allocate the pgtable ops after initializing the
> context (moving the ipmmu_domain_allocate_context() call to the very end of
> the function) ? That way we would be less dependent on changes to pgtable ops
> init/cleanup code that could require the context to be set up.

Why not. But I am not sure about the very end of the function, since
for writing some HW registers further down the function
(IMTTLBR0/IMTTUBR0, IMMAIR0) we need what the pgtable ops set up for
us (ttbr, mair). What about just swapping alloc_io_pgtable_ops() and
ipmmu_domain_allocate_context()?

...
diff --git a/drivers/iommu/ipmmu-vmsa.c b/drivers/iommu/ipmmu-vmsa.c
index 2a38aa1..90af1c7 100644
--- a/drivers/iommu/ipmmu-vmsa.c
+++ b/drivers/iommu/ipmmu-vmsa.c
@@ -370,22 +370,22 @@ static int ipmmu_domain_init_context(struct
ipmmu_vmsa_domain *domain)
 */
domain->cfg.iommu_dev = domain->mmu->dev;

-   domain->iop = alloc_io_pgtable_ops(ARM_32_LPAE_S1, &domain->cfg,
-  domain);
-   if (!domain->iop)
-   return -EINVAL;
-
/*
 * Find an unused context.
 */
ret = ipmmu_domain_allocate_context(domain->mmu, domain);
-   if (ret == IPMMU_CTX_MAX) {
-   free_io_pgtable_ops(domain->iop);
+   if (ret == IPMMU_CTX_MAX)
return -EBUSY;
-   }

domain->context_id = ret;

+   domain->iop = alloc_io_pgtable_ops(ARM_32_LPAE_S1, &domain->cfg,
+  domain);
+   if (!domain->iop) {
+   ipmmu_domain_free_context(domain->mmu, domain->context_id);
+   return -EINVAL;
+   }
+
/* TTBR0 */
ttbr = domain->cfg.arm_lpae_s1_cfg.ttbr[0];
ipmmu_ctx_write(domain, IMTTLBR0, ttbr);
...

>
>>   free_io_pgtable_ops(domain->iop);
>>   return -EBUSY;
>>   }
>
>
> --
> Regards,
>
> Laurent Pinchart
>



-- 
Regards,

Oleksandr Tyshchenko


Re: [PATCH v1] iommu/ipmmu-vmsa: Set context_id to non-existent value if allocation failed

2017-08-23 Thread Oleksandr Tyshchenko
Hi, Robin

On Wed, Aug 23, 2017 at 1:05 PM, Robin Murphy <robin.mur...@arm.com> wrote:
> On 23/08/17 10:36, Oleksandr Tyshchenko wrote:
>> Hi, Laurent.
>>
>> On Wed, Aug 23, 2017 at 12:25 AM, Laurent Pinchart
>> <laurent.pinch...@ideasonboard.com> wrote:
>>> Hi Oleksandr,
>>>
>>> Thank you for the patch.
>>>
>>> On Monday, 21 August 2017 15:40:41 EEST Oleksandr Tyshchenko wrote:
>>>> From: Oleksandr Tyshchenko <oleksandr_tyshche...@epam.com>
>>>>
>>>> In ipmmu_domain_init_context() we are trying to allocate a context,
>>>> and if the allocation fails we will call free_io_pgtable_ops(),
>>>> but "domain->context_id" hasn't been initialized yet (likely it is 0
>>>> because of kzalloc). Given the following call stack:
>>>> free_io_pgtable_ops() -> io_pgtable_tlb_flush_all() ->
>>>> ipmmu_tlb_flush_all() -> ipmmu_tlb_invalidate()
>>>> we will get a mistaken cache flush for the context pointed to by the
>>>> uninitialized "domain->context_id".
>>>>
>>>> So, set context_id to a non-existent value (IPMMU_CTX_MAX) before
>>>> calling free_io_pgtable_ops() and check it for a valid value
>>>> (< IPMMU_CTX_MAX) before calling ipmmu_tlb_invalidate().
>>>>
>>>> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshche...@epam.com>
>>>> ---
>>>>  drivers/iommu/ipmmu-vmsa.c | 4 ++++
>>>>  1 file changed, 4 insertions(+)
>>>>
>>>> diff --git a/drivers/iommu/ipmmu-vmsa.c b/drivers/iommu/ipmmu-vmsa.c
>>>> index 2a38aa1..5b226c0 100644
>>>> --- a/drivers/iommu/ipmmu-vmsa.c
>>>> +++ b/drivers/iommu/ipmmu-vmsa.c
>>>> @@ -303,6 +303,9 @@ static void ipmmu_tlb_flush_all(void *cookie)
>>>>  {
>>>>   struct ipmmu_vmsa_domain *domain = cookie;
>>>>
>>>> + if (domain->context_id >= IPMMU_CTX_MAX)
>>>> + return;
>>>> +
>>>>   ipmmu_tlb_invalidate(domain);
>>>>  }
>>>>
>>>> @@ -380,6 +383,7 @@ static int ipmmu_domain_init_context(struct
>>>> ipmmu_vmsa_domain *domain) */
>>>>   ret = ipmmu_domain_allocate_context(domain->mmu, domain);
>>>>   if (ret == IPMMU_CTX_MAX) {
>>>> + domain->context_id = IPMMU_CTX_MAX;
>>>
>>> Wouldn't it make more sense to allocate the pgtable ops after
>>> initializing the context (moving the ipmmu_domain_allocate_context()
>>> call to the very end of the function)? That way we would be less
>>> dependent on changes to pgtable ops init/cleanup code that could
>>> require the context to be set up.
>>
>> Why not. But I am not sure about the very end of the function, since
>> for writing some HW registers further down the function
>> (IMTTLBR0/IMTTUBR0, IMMAIR0) we need what the pgtable ops set up for
>> us (ttbr, mair). What about just swapping alloc_io_pgtable_ops() and
>> ipmmu_domain_allocate_context()?
>
> This looks a lot more reasonable - reserving a free context is both
> quicker and more likely to fail (due to limited hardware resources) than
> setting up a pagetable, so it makes a lot of sense to do that before
> anything else.
Agree.

>
> Robin.
>
>> ...
>> diff --git a/drivers/iommu/ipmmu-vmsa.c b/drivers/iommu/ipmmu-vmsa.c
>> index 2a38aa1..90af1c7 100644
>> --- a/drivers/iommu/ipmmu-vmsa.c
>> +++ b/drivers/iommu/ipmmu-vmsa.c
>> @@ -370,22 +370,22 @@ static int ipmmu_domain_init_context(struct
>> ipmmu_vmsa_domain *domain)
>>  */
>> domain->cfg.iommu_dev = domain->mmu->dev;
>>
>> -   domain->iop = alloc_io_pgtable_ops(ARM_32_LPAE_S1, &domain->cfg,
>> -  domain);
>> -   if (!domain->iop)
>> -   return -EINVAL;
>> -
>> /*
>>  * Find an unused context.
>>  */
>> ret = ipmmu_domain_allocate_context(domain->mmu, domain);
>> -   if (ret == IPMMU_CTX_MAX) {
>> -   free_io_pgtable_ops(domain->iop);
>> +   if (ret == IPMMU_CTX_MAX)
>> return -EBUSY;
>> -   }
>>
>> domain->context_id = ret;
>>
>> +   domain->iop = alloc_io_pgtable_ops(ARM_32_LPAE_S1, &domain->cfg,
>> +  domain);
>> +   if (!domain->iop) {
>> +   ipmmu_domain_free_context(domain->mmu, domain->context_id);
>> +   return -EINVAL;
>> +   }
>> +
>> /* TTBR0 */
>> ttbr = domain->cfg.arm_lpae_s1_cfg.ttbr[0];
>> ipmmu_ctx_write(domain, IMTTLBR0, ttbr);
>> ...
>>
>>>
>>>>   free_io_pgtable_ops(domain->iop);
>>>>   return -EBUSY;
>>>>   }
>>>
>>>
>>> --
>>> Regards,
>>>
>>> Laurent Pinchart
>>>
>>
>>
>>
>



-- 
Regards,

Oleksandr Tyshchenko


Re: [PATCH v1] iommu/ipmmu-vmsa: Set context_id to non-existent value if allocation failed

2017-08-23 Thread Oleksandr Tyshchenko
Hi, Laurent

On Wed, Aug 23, 2017 at 4:46 PM, Laurent Pinchart
<laurent.pinch...@ideasonboard.com> wrote:
> Hi Oleksandr,
>
> On Wednesday, 23 August 2017 14:58:47 EEST Oleksandr Tyshchenko wrote:
>> On Wed, Aug 23, 2017 at 1:05 PM, Robin Murphy wrote:
>> > On 23/08/17 10:36, Oleksandr Tyshchenko wrote:
>> >> On Wed, Aug 23, 2017 at 12:25 AM, Laurent Pinchart wrote:
>> >>> On Monday, 21 August 2017 15:40:41 EEST Oleksandr Tyshchenko wrote:
>> >>>> From: Oleksandr Tyshchenko <oleksandr_tyshche...@epam.com>
>> >>>>
>> >>>> In ipmmu_domain_init_context() we are trying to allocate a context,
>> >>>> and if the allocation fails we will call free_io_pgtable_ops(),
>> >>>> but "domain->context_id" hasn't been initialized yet (likely it is 0
>> >>>> because of kzalloc). Given the following call stack:
>> >>>> free_io_pgtable_ops() -> io_pgtable_tlb_flush_all() ->
>> >>>> ipmmu_tlb_flush_all() -> ipmmu_tlb_invalidate()
>> >>>> we will get a mistaken cache flush for the context pointed to by the
>> >>>> uninitialized "domain->context_id".
>> >>>>
>> >>>> So, set context_id to a non-existent value (IPMMU_CTX_MAX) before
>> >>>> calling free_io_pgtable_ops() and check it for a valid value
>> >>>> (< IPMMU_CTX_MAX) before calling ipmmu_tlb_invalidate().
>> >>>>
>> >>>> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshche...@epam.com>
>> >>>> ---
>> >>>>
>> >>>>  drivers/iommu/ipmmu-vmsa.c | 4 ++++
>> >>>>  1 file changed, 4 insertions(+)
>> >>>>
>> >>>> diff --git a/drivers/iommu/ipmmu-vmsa.c b/drivers/iommu/ipmmu-vmsa.c
>> >>>> index 2a38aa1..5b226c0 100644
>> >>>> --- a/drivers/iommu/ipmmu-vmsa.c
>> >>>> +++ b/drivers/iommu/ipmmu-vmsa.c
>> >>>> @@ -303,6 +303,9 @@ static void ipmmu_tlb_flush_all(void *cookie)
>> >>>>  {
>> >>>>   struct ipmmu_vmsa_domain *domain = cookie;
>> >>>>
>> >>>> + if (domain->context_id >= IPMMU_CTX_MAX)
>> >>>> + return;
>> >>>> +
>> >>>>   ipmmu_tlb_invalidate(domain);
>> >>>>  }
>> >>>>
>> >>>> @@ -380,6 +383,7 @@ static int ipmmu_domain_init_context(struct ipmmu_vmsa_domain *domain)
>> >>>>  */
>> >>>>   ret = ipmmu_domain_allocate_context(domain->mmu, domain);
>> >>>>   if (ret == IPMMU_CTX_MAX) {
>> >>>> + domain->context_id = IPMMU_CTX_MAX;
>> >>>
>> >>> Wouldn't it make more sense to allocate the pgtable ops after
>> >>> initializing the context (moving the ipmmu_domain_allocate_context()
>> >>> call to the very end of the function) ? That way we would be less
>> >>> dependent on changes to pgtable ops init/cleanup code that could
>> >>> require the context to be set up.
>> >>
>> >> Why not. But, not sure about the very end of the function. Since for
>> >> writing some HW registers down the function (IMTTLBR0/IMTTUBR0,
>> >> IMMAIR0) we need to have what pgtable ops sets up for us (ttbr, mair).
>> >> What about to just swap alloc_io_pgtable_ops() and
>> >> ipmmu_domain_allocate_context()?
>> >
>> > This looks a lot more reasonable - reserving a free context is both
>> > quicker and more likely to fail (due to limited hardware resources) than
>> > setting up a pagetable, so it makes a lot of sense to do that before
>> > anything else.
>>
>> Agree.
>
> That looks good to me too. In general I prefer initializing everything needed
> by the error path before calling anything that could trigger that error path,
> instead of initializing variables to magic values that mean part of the
> cleanup should be skipped.
Makes sense.

>
> Will you send a v2 ?
Yes.

>
> --
> Regards,
>
> Laurent Pinchart
>
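[Editorial note: the ordering agreed above — reserve the cheap, failure-prone resource first, then build the heavier one, rolling back on failure — can be modelled in a few lines of plain C. This is a hypothetical userspace sketch with simplified stand-ins, not the driver code itself; allocate_context(), alloc_pgtable() and the error values merely mirror ipmmu_domain_allocate_context(), alloc_io_pgtable_ops(), -EBUSY and -EINVAL.]

```c
/* Hypothetical stand-ins for the driver helpers discussed above:
 * reserve the context (quick, limited by hardware) first, and roll
 * it back if the later page-table setup fails. */
#include <assert.h>
#include <stdbool.h>

#define CTX_MAX 8

static bool ctx_used[CTX_MAX];

static int allocate_context(void)          /* quick, likely to fail */
{
	for (int i = 0; i < CTX_MAX; i++)
		if (!ctx_used[i]) {
			ctx_used[i] = true;
			return i;
		}
	return CTX_MAX;                    /* no free context */
}

static void free_context(int id)
{
	ctx_used[id] = false;
}

static bool alloc_pgtable(bool fail)       /* stand-in for alloc_io_pgtable_ops() */
{
	return !fail;
}

static int domain_init(bool pgtable_fails, int *out_id)
{
	int id = allocate_context();
	if (id == CTX_MAX)
		return -1;                 /* -EBUSY in the driver */

	if (!alloc_pgtable(pgtable_fails)) {
		free_context(id);          /* rollback uses a valid id */
		return -2;                 /* -EINVAL in the driver */
	}

	*out_id = id;
	return 0;
}
```

With this shape, the error path never touches an uninitialized context_id: the rollback only ever frees an id that was actually reserved.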

-- 
Regards,

Oleksandr Tyshchenko


Re: [PATCH v2] iommu/ipmmu-vmsa: Rereserving a free context before setting up a pagetable

2017-08-28 Thread Oleksandr Tyshchenko
Hi, all.

Any comments?

On Wed, Aug 23, 2017 at 5:31 PM, Oleksandr Tyshchenko
<olekst...@gmail.com> wrote:
> From: Oleksandr Tyshchenko <oleksandr_tyshche...@epam.com>
>
> Reserving a free context is both quicker and more likely to fail
> (due to limited hardware resources) than setting up a pagetable.
> What is more the pagetable init/cleanup code could require
> the context to be set up.
>
> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshche...@epam.com>
> CC: Robin Murphy <robin.mur...@arm.com>
> CC: Laurent Pinchart <laurent.pinch...@ideasonboard.com>
> CC: Joerg Roedel <j...@8bytes.org>
>
> ---
> This patch fixes a bug during rollback logic:
> In ipmmu_domain_init_context() we are trying to find an unused context
> and if operation fails we will call free_io_pgtable_ops(),
> but "domain->context_id" hasn't been initialized yet (likely it is 0
> because of kzalloc). Having the following call stack:
> free_io_pgtable_ops() -> io_pgtable_tlb_flush_all() ->
> ipmmu_tlb_flush_all() -> ipmmu_tlb_invalidate()
> we will get a mistaken cache flush for a context pointed by
> uninitialized "domain->context_id".
>
>
> v1 here:
> https://lists.linuxfoundation.org/pipermail/iommu/2017-August/023857.html
> ---
>  drivers/iommu/ipmmu-vmsa.c | 42 +++++++++++++++++++++---------------------
>  1 file changed, 21 insertions(+), 21 deletions(-)
>
> diff --git a/drivers/iommu/ipmmu-vmsa.c b/drivers/iommu/ipmmu-vmsa.c
> index 2a38aa1..382f387 100644
> --- a/drivers/iommu/ipmmu-vmsa.c
> +++ b/drivers/iommu/ipmmu-vmsa.c
> @@ -341,6 +341,19 @@ static int ipmmu_domain_allocate_context(struct ipmmu_vmsa_device *mmu,
> return ret;
>  }
>
> +static void ipmmu_domain_free_context(struct ipmmu_vmsa_device *mmu,
> + unsigned int context_id)
> +{
> +   unsigned long flags;
> +
> +   spin_lock_irqsave(&mmu->lock, flags);
> +
> +   clear_bit(context_id, mmu->ctx);
> +   mmu->domains[context_id] = NULL;
> +
> +   spin_unlock_irqrestore(&mmu->lock, flags);
> +}
> +
>  static int ipmmu_domain_init_context(struct ipmmu_vmsa_domain *domain)
>  {
> u64 ttbr;
> @@ -370,22 +383,22 @@ static int ipmmu_domain_init_context(struct ipmmu_vmsa_domain *domain)
>  */
> domain->cfg.iommu_dev = domain->mmu->dev;
>
> -   domain->iop = alloc_io_pgtable_ops(ARM_32_LPAE_S1, &domain->cfg,
> -  domain);
> -   if (!domain->iop)
> -   return -EINVAL;
> -
> /*
>  * Find an unused context.
>  */
> ret = ipmmu_domain_allocate_context(domain->mmu, domain);
> -   if (ret == IPMMU_CTX_MAX) {
> -   free_io_pgtable_ops(domain->iop);
> +   if (ret == IPMMU_CTX_MAX)
> return -EBUSY;
> -   }
>
> domain->context_id = ret;
>
> +   domain->iop = alloc_io_pgtable_ops(ARM_32_LPAE_S1, &domain->cfg,
> +  domain);
> +   if (!domain->iop) {
> +   ipmmu_domain_free_context(domain->mmu, domain->context_id);
> +   return -EINVAL;
> +   }
> +
> /* TTBR0 */
> ttbr = domain->cfg.arm_lpae_s1_cfg.ttbr[0];
> ipmmu_ctx_write(domain, IMTTLBR0, ttbr);
> @@ -426,19 +439,6 @@ static int ipmmu_domain_init_context(struct ipmmu_vmsa_domain *domain)
> return 0;
>  }
>
> -static void ipmmu_domain_free_context(struct ipmmu_vmsa_device *mmu,
> -         unsigned int context_id)
> -{
> -   unsigned long flags;
> -
> -   spin_lock_irqsave(&mmu->lock, flags);
> -
> -   clear_bit(context_id, mmu->ctx);
> -   mmu->domains[context_id] = NULL;
> -
> -   spin_unlock_irqrestore(&mmu->lock, flags);
> -}
> -
>  static void ipmmu_domain_destroy_context(struct ipmmu_vmsa_domain *domain)
>  {
> /*
> --
> 2.7.4
>



-- 
Regards,

Oleksandr Tyshchenko


[PATCH v2] iommu/ipmmu-vmsa: Rereserving a free context before setting up a pagetable

2017-08-23 Thread Oleksandr Tyshchenko
From: Oleksandr Tyshchenko <oleksandr_tyshche...@epam.com>

Reserving a free context is both quicker and more likely to fail
(due to limited hardware resources) than setting up a pagetable.
What is more the pagetable init/cleanup code could require
the context to be set up.

Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshche...@epam.com>
CC: Robin Murphy <robin.mur...@arm.com>
CC: Laurent Pinchart <laurent.pinch...@ideasonboard.com>
CC: Joerg Roedel <j...@8bytes.org>

---
This patch fixes a bug during rollback logic:
In ipmmu_domain_init_context() we are trying to find an unused context
and if operation fails we will call free_io_pgtable_ops(),
but "domain->context_id" hasn't been initialized yet (likely it is 0
because of kzalloc). Having the following call stack:
free_io_pgtable_ops() -> io_pgtable_tlb_flush_all() ->
ipmmu_tlb_flush_all() -> ipmmu_tlb_invalidate()
we will get a mistaken cache flush for a context pointed by
uninitialized "domain->context_id".


v1 here:
https://lists.linuxfoundation.org/pipermail/iommu/2017-August/023857.html
---
 drivers/iommu/ipmmu-vmsa.c | 42 +++++++++++++++++++++---------------------
 1 file changed, 21 insertions(+), 21 deletions(-)

diff --git a/drivers/iommu/ipmmu-vmsa.c b/drivers/iommu/ipmmu-vmsa.c
index 2a38aa1..382f387 100644
--- a/drivers/iommu/ipmmu-vmsa.c
+++ b/drivers/iommu/ipmmu-vmsa.c
@@ -341,6 +341,19 @@ static int ipmmu_domain_allocate_context(struct ipmmu_vmsa_device *mmu,
return ret;
 }
 
+static void ipmmu_domain_free_context(struct ipmmu_vmsa_device *mmu,
+ unsigned int context_id)
+{
+   unsigned long flags;
+
+   spin_lock_irqsave(&mmu->lock, flags);
+
+   clear_bit(context_id, mmu->ctx);
+   mmu->domains[context_id] = NULL;
+
+   spin_unlock_irqrestore(&mmu->lock, flags);
+}
+
 static int ipmmu_domain_init_context(struct ipmmu_vmsa_domain *domain)
 {
u64 ttbr;
@@ -370,22 +383,22 @@ static int ipmmu_domain_init_context(struct ipmmu_vmsa_domain *domain)
 */
domain->cfg.iommu_dev = domain->mmu->dev;
 
-   domain->iop = alloc_io_pgtable_ops(ARM_32_LPAE_S1, &domain->cfg,
-  domain);
-   if (!domain->iop)
-   return -EINVAL;
-
/*
 * Find an unused context.
 */
ret = ipmmu_domain_allocate_context(domain->mmu, domain);
-   if (ret == IPMMU_CTX_MAX) {
-   free_io_pgtable_ops(domain->iop);
+   if (ret == IPMMU_CTX_MAX)
return -EBUSY;
-   }
 
domain->context_id = ret;
 
+   domain->iop = alloc_io_pgtable_ops(ARM_32_LPAE_S1, &domain->cfg,
+  domain);
+   if (!domain->iop) {
+   ipmmu_domain_free_context(domain->mmu, domain->context_id);
+   return -EINVAL;
+   }
+
/* TTBR0 */
ttbr = domain->cfg.arm_lpae_s1_cfg.ttbr[0];
ipmmu_ctx_write(domain, IMTTLBR0, ttbr);
@@ -426,19 +439,6 @@ static int ipmmu_domain_init_context(struct ipmmu_vmsa_domain *domain)
return 0;
 }
 
-static void ipmmu_domain_free_context(struct ipmmu_vmsa_device *mmu,
- unsigned int context_id)
-{
-   unsigned long flags;
-
-   spin_lock_irqsave(&mmu->lock, flags);
-
-   clear_bit(context_id, mmu->ctx);
-   mmu->domains[context_id] = NULL;
-
-   spin_unlock_irqrestore(&mmu->lock, flags);
-}
-
 static void ipmmu_domain_destroy_context(struct ipmmu_vmsa_domain *domain)
 {
/*
-- 
2.7.4



Re: [PATCH v2] iommu/ipmmu-vmsa: Rereserving a free context before setting up a pagetable

2017-08-29 Thread Oleksandr Tyshchenko
Hi Laurent, Robin

On Tue, Aug 29, 2017 at 4:01 PM, Laurent Pinchart
<laurent.pinch...@ideasonboard.com> wrote:
> Hi Oleksandr,
>
> Thank you for the patch.
>
> On Wednesday, 23 August 2017 17:31:42 EEST Oleksandr Tyshchenko wrote:
>> From: Oleksandr Tyshchenko <oleksandr_tyshche...@epam.com>
>>
>> Reserving a free context is both quicker and more likely to fail
>> (due to limited hardware resources) than setting up a pagetable.
>> What is more the pagetable init/cleanup code could require
>> the context to be set up.
>>
>> Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshche...@epam.com>
>> CC: Robin Murphy <robin.mur...@arm.com>
>> CC: Laurent Pinchart <laurent.pinch...@ideasonboard.com>
>> CC: Joerg Roedel <j...@8bytes.org>
>
> Reviewed-by: Laurent Pinchart <laurent.pinch...@ideasonboard.com>
>
>> ---
>> This patch fixes a bug during rollback logic:
>> In ipmmu_domain_init_context() we are trying to find an unused context
>> and if operation fails we will call free_io_pgtable_ops(),
>> but "domain->context_id" hasn't been initialized yet (likely it is 0
>> because of kzalloc). Having the following call stack:
>> free_io_pgtable_ops() -> io_pgtable_tlb_flush_all() ->
>> ipmmu_tlb_flush_all() -> ipmmu_tlb_invalidate()
>> we will get a mistaken cache flush for a context pointed by
>> uninitialized "domain->context_id".
>>
>>
>> v1 here:
>> https://lists.linuxfoundation.org/pipermail/iommu/2017-August/023857.html
>> ---
>>  drivers/iommu/ipmmu-vmsa.c | 42 +++++++++++++++++++++---------------------
>>  1 file changed, 21 insertions(+), 21 deletions(-)
>>
>> diff --git a/drivers/iommu/ipmmu-vmsa.c b/drivers/iommu/ipmmu-vmsa.c
>> index 2a38aa1..382f387 100644
>> --- a/drivers/iommu/ipmmu-vmsa.c
>> +++ b/drivers/iommu/ipmmu-vmsa.c
>> @@ -341,6 +341,19 @@ static int ipmmu_domain_allocate_context(struct ipmmu_vmsa_device *mmu,
>>  return ret;
>>  }
>>
>> +static void ipmmu_domain_free_context(struct ipmmu_vmsa_device *mmu,
>> +   unsigned int context_id)
>> +{
>> + unsigned long flags;
>> +
>> + spin_lock_irqsave(&mmu->lock, flags);
>> +
>> + clear_bit(context_id, mmu->ctx);
>> + mmu->domains[context_id] = NULL;
>> +
>> + spin_unlock_irqrestore(&mmu->lock, flags);
>> +}
>> +
>>  static int ipmmu_domain_init_context(struct ipmmu_vmsa_domain *domain)
>>  {
>>   u64 ttbr;
>> @@ -370,22 +383,22 @@ static int ipmmu_domain_init_context(struct ipmmu_vmsa_domain *domain)
>>  */
>>   domain->cfg.iommu_dev = domain->mmu->dev;
>>
>> - domain->iop = alloc_io_pgtable_ops(ARM_32_LPAE_S1, &domain->cfg,
>> -domain);
>> - if (!domain->iop)
>> - return -EINVAL;
>> -
>>   /*
>>* Find an unused context.
>>*/
>>   ret = ipmmu_domain_allocate_context(domain->mmu, domain);
>> - if (ret == IPMMU_CTX_MAX) {
>> - free_io_pgtable_ops(domain->iop);
>> + if (ret == IPMMU_CTX_MAX)
>>   return -EBUSY;
>> - }
>>
>>   domain->context_id = ret;
>>
>> + domain->iop = alloc_io_pgtable_ops(ARM_32_LPAE_S1, &domain->cfg,
>> +domain);
>> + if (!domain->iop) {
>> + ipmmu_domain_free_context(domain->mmu, domain->context_id);
>> + return -EINVAL;
>> + }
>> +
>>   /* TTBR0 */
>>   ttbr = domain->cfg.arm_lpae_s1_cfg.ttbr[0];
>>   ipmmu_ctx_write(domain, IMTTLBR0, ttbr);
>> @@ -426,19 +439,6 @@ static int ipmmu_domain_init_context(struct ipmmu_vmsa_domain *domain)
>>  return 0;
>>  }
>>
>> -static void ipmmu_domain_free_context(struct ipmmu_vmsa_device *mmu,
>> -   unsigned int context_id)
>> -{
>> - unsigned long flags;
>> -
>> - spin_lock_irqsave(&mmu->lock, flags);
>> -
>> - clear_bit(context_id, mmu->ctx);
>> - mmu->domains[context_id] = NULL;
>> -
>> - spin_unlock_irqrestore(&mmu->lock, flags);
>> -}
>> -
>>  static void ipmmu_domain_destroy_context(struct ipmmu_vmsa_domain *domain)
>>  {
>>   /*
>
>
> --
> Regards,
>
> Laurent Pinchart
>

Thank you for the review.
Shall I resend patch with added R-bs?

-- 
Regards,

Oleksandr Tyshchenko


[PATCH V3 5/8] dt-bindings: Add xen, grant-dma IOMMU description for xen-grant DMA ops

2022-05-30 Thread Oleksandr Tyshchenko
From: Oleksandr Tyshchenko 

The main purpose of this binding is to communicate Xen specific
information using generic IOMMU device tree bindings (which is
a good fit here) rather than introducing a custom property.

Introduce Xen specific IOMMU for the virtualized device (e.g. virtio)
to be used by Xen grant DMA-mapping layer in the subsequent commit.

The reference to Xen specific IOMMU node using "iommus" property
indicates that Xen grant mappings need to be enabled for the device,
and it specifies the ID of the domain where the corresponding backend
resides. The domid (domain ID) is used as an argument to the Xen grant
mapping APIs.

This is needed for the option to restrict memory access using Xen grant
mappings to work which primary goal is to enable using virtio devices
in Xen guests.

Signed-off-by: Oleksandr Tyshchenko 
---
Changes RFC -> V1:
   - update commit subject/description and text in description
   - move to devicetree/bindings/arm/

Changes V1 -> V2:
   - update text in description
   - change the maintainer of the binding
   - fix validation issue
   - reference xen,dev-domid.yaml schema from virtio/mmio.yaml

Change V2 -> V3:
   - Stefano already gave his Reviewed-by, I dropped it due to the changes 
(significant)
   - use generic IOMMU device tree bindings instead of custom property
 "xen,dev-domid"
   - change commit subject and description, was
 "dt-bindings: Add xen,dev-domid property description for xen-grant DMA ops"
---
 .../devicetree/bindings/iommu/xen,grant-dma.yaml   | 49 ++
 1 file changed, 49 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/iommu/xen,grant-dma.yaml

diff --git a/Documentation/devicetree/bindings/iommu/xen,grant-dma.yaml b/Documentation/devicetree/bindings/iommu/xen,grant-dma.yaml
new file mode 100644
index ..ab5765c
--- /dev/null
+++ b/Documentation/devicetree/bindings/iommu/xen,grant-dma.yaml
@@ -0,0 +1,49 @@
+# SPDX-License-Identifier: (GPL-2.0-only or BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/iommu/xen,grant-dma.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Xen specific IOMMU for virtualized devices (e.g. virtio)
+
+maintainers:
+  - Stefano Stabellini 
+
+description:
+  The reference to Xen specific IOMMU node using "iommus" property indicates
+  that Xen grant mappings need to be enabled for the device, and it specifies
+  the ID of the domain where the corresponding backend resides.
+  The binding is required to restrict memory access using Xen grant mappings.
+
+properties:
+  compatible:
+    const: xen,grant-dma
+
+  '#iommu-cells':
+    const: 1
+    description:
+      Xen specific IOMMU is multiple-master IOMMU device.
+      The single cell describes the domid (domain ID) of the domain where
+      the backend is running.
+
+required:
+  - compatible
+  - "#iommu-cells"
+
+additionalProperties: false
+
+examples:
+  - |
+    xen_iommu {
+        compatible = "xen,grant-dma";
+        #iommu-cells = <1>;
+    };
+
+    virtio@3000 {
+        compatible = "virtio,mmio";
+        reg = <0x3000 0x100>;
+        interrupts = <41>;
+
+        /* The backend is located in Xen domain with ID 1 */
+        iommus = <&xen_iommu 1>;
+    };
-- 
2.7.4



[PATCH V4 5/8] dt-bindings: Add xen, grant-dma IOMMU description for xen-grant DMA ops

2022-06-02 Thread Oleksandr Tyshchenko
From: Oleksandr Tyshchenko 

The main purpose of this binding is to communicate Xen specific
information using generic IOMMU device tree bindings (which is
a good fit here) rather than introducing a custom property.

Introduce Xen specific IOMMU for the virtualized device (e.g. virtio)
to be used by Xen grant DMA-mapping layer in the subsequent commit.

The reference to Xen specific IOMMU node using "iommus" property
indicates that Xen grant mappings need to be enabled for the device,
and it specifies the ID of the domain where the corresponding backend
resides. The domid (domain ID) is used as an argument to the Xen grant
mapping APIs.

This is needed for the option to restrict memory access using Xen grant
mappings to work which primary goal is to enable using virtio devices
in Xen guests.

Signed-off-by: Oleksandr Tyshchenko 
Reviewed-by: Stefano Stabellini 
---
Changes RFC -> V1:
   - update commit subject/description and text in description
   - move to devicetree/bindings/arm/

Changes V1 -> V2:
   - update text in description
   - change the maintainer of the binding
   - fix validation issue
   - reference xen,dev-domid.yaml schema from virtio/mmio.yaml

Change V2 -> V3:
   - Stefano already gave his Reviewed-by, I dropped it due to the changes 
(significant)
   - use generic IOMMU device tree bindings instead of custom property
 "xen,dev-domid"
   - change commit subject and description, was
 "dt-bindings: Add xen,dev-domid property description for xen-grant DMA ops"

Changes V3 -> V4:
   - add Stefano's R-b
   - remove underscore in iommu node name
   - remove consumer example virtio@3000
   - update text for two descriptions
---
 .../devicetree/bindings/iommu/xen,grant-dma.yaml   | 39 ++
 1 file changed, 39 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/iommu/xen,grant-dma.yaml

diff --git a/Documentation/devicetree/bindings/iommu/xen,grant-dma.yaml b/Documentation/devicetree/bindings/iommu/xen,grant-dma.yaml
new file mode 100644
index ..be1539d
--- /dev/null
+++ b/Documentation/devicetree/bindings/iommu/xen,grant-dma.yaml
@@ -0,0 +1,39 @@
+# SPDX-License-Identifier: (GPL-2.0-only or BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/iommu/xen,grant-dma.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Xen specific IOMMU for virtualized devices (e.g. virtio)
+
+maintainers:
+  - Stefano Stabellini 
+
+description:
+  The Xen IOMMU represents the Xen grant table interface. Grant mappings
+  are to be used with devices connected to the Xen IOMMU using the "iommus"
+  property, which also specifies the ID of the backend domain.
+  The binding is required to restrict memory access using Xen grant mappings.
+
+properties:
+  compatible:
+    const: xen,grant-dma
+
+  '#iommu-cells':
+    const: 1
+    description:
+      The single cell is the domid (domain ID) of the domain where the backend
+      is running.
+
+required:
+  - compatible
+  - "#iommu-cells"
+
+additionalProperties: false
+
+examples:
+  - |
+    iommu {
+        compatible = "xen,grant-dma";
+        #iommu-cells = <1>;
+    };
-- 
2.7.4
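[Editorial note: V4 keeps only the IOMMU node in the schema example; the consumer side was dropped from the binding. For readers wanting the full wiring, the V3 example above showed how a device references this IOMMU. The snippet below restates it for convenience; the address, size and interrupt number are illustrative only.]

```dts
/* Illustrative consumer wiring, adapted from the V3 example above */
iommu: iommu {
    compatible = "xen,grant-dma";
    #iommu-cells = <1>;
};

virtio@3000 {
    compatible = "virtio,mmio";
    reg = <0x3000 0x100>;
    interrupts = <41>;

    /* The backend is located in Xen domain with ID 1 */
    iommus = <&iommu 1>;
};
```

The single cell after the phandle is the backend domid, which the Xen grant DMA-mapping layer passes to the grant mapping APIs.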



Re: [PATCH v3 6/8] genirq: Add and use an irq_data_update_affinity helper

2022-07-07 Thread Oleksandr Tyshchenko via iommu
>>> index 0fe2d79fb123..5ebb1771b4ab 100644
>>> --- a/arch/parisc/kernel/irq.c
>>> +++ b/arch/parisc/kernel/irq.c
>>> @@ -315,7 +315,7 @@ unsigned long txn_affinity_addr(unsigned int irq, int cpu)
>>> {
>>>#ifdef CONFIG_SMP
>>> struct irq_data *d = irq_get_irq_data(irq);
>>> -   cpumask_copy(irq_data_get_affinity_mask(d), cpumask_of(cpu));
>>> +   irq_data_update_affinity(d, cpumask_of(cpu));
>>>#endif
>>> return per_cpu(cpu_data, cpu).txn_addr;
>>> diff --git a/drivers/irqchip/irq-bcm6345-l1.c b/drivers/irqchip/irq-bcm6345-l1.c
>>> index 142a7431745f..6899e37810a8 100644
>>> --- a/drivers/irqchip/irq-bcm6345-l1.c
>>> +++ b/drivers/irqchip/irq-bcm6345-l1.c
>>> @@ -216,11 +216,11 @@ static int bcm6345_l1_set_affinity(struct irq_data *d,
>>> enabled = intc->cpus[old_cpu]->enable_cache[word] & mask;
>>> if (enabled)
>>> __bcm6345_l1_mask(d);
>>> -   cpumask_copy(irq_data_get_affinity_mask(d), dest);
>>> +   irq_data_update_affinity(d, dest);
>>> if (enabled)
>>> __bcm6345_l1_unmask(d);
>>> } else {
>>> -   cpumask_copy(irq_data_get_affinity_mask(d), dest);
>>> +   irq_data_update_affinity(d, dest);
>>> }
>>> raw_spin_unlock_irqrestore(&intc->lock, flags);
>>>diff --git a/drivers/parisc/iosapic.c b/drivers/parisc/iosapic.c
>>> index 8a3b0c3a1e92..3a8c98615634 100644
>>> --- a/drivers/parisc/iosapic.c
>>> +++ b/drivers/parisc/iosapic.c
>>> @@ -677,7 +677,7 @@ static int iosapic_set_affinity_irq(struct irq_data *d,
>>> if (dest_cpu < 0)
>>> return -1;
>>> -   cpumask_copy(irq_data_get_affinity_mask(d), cpumask_of(dest_cpu));
>>> +   irq_data_update_affinity(d, cpumask_of(dest_cpu));
>>> vi->txn_addr = txn_affinity_addr(d->irq, dest_cpu);
>>> spin_lock_irqsave(&iosapic_lock, flags);
>>> diff --git a/drivers/sh/intc/chip.c b/drivers/sh/intc/chip.c
>>> index 358df7510186..828d81e02b37 100644
>>> --- a/drivers/sh/intc/chip.c
>>> +++ b/drivers/sh/intc/chip.c
>>> @@ -72,7 +72,7 @@ static int intc_set_affinity(struct irq_data *data,
>>> if (!cpumask_intersects(cpumask, cpu_online_mask))
>>> return -1;
>>> -   cpumask_copy(irq_data_get_affinity_mask(data), cpumask);
>>> +   irq_data_update_affinity(data, cpumask);
>>> return IRQ_SET_MASK_OK_NOCOPY;
>>>}
>>> diff --git a/drivers/xen/events/events_base.c b/drivers/xen/events/events_base.c
>>> index 46d9295d9a6e..5e8321f43cbd 100644
>>> --- a/drivers/xen/events/events_base.c
>>> +++ b/drivers/xen/events/events_base.c
>>> @@ -528,9 +528,10 @@ static void bind_evtchn_to_cpu(evtchn_port_t evtchn, unsigned int cpu,
>>> BUG_ON(irq == -1);
>>> if (IS_ENABLED(CONFIG_SMP) && force_affinity) {
>>> -   cpumask_copy(irq_get_affinity_mask(irq), cpumask_of(cpu));
>>> -   cpumask_copy(irq_get_effective_affinity_mask(irq),
>>> -cpumask_of(cpu));
>>> +   struct irq_data *data = irq_get_irq_data(irq);
>>> +
>>> +   irq_data_update_affinity(data, cpumask_of(cpu));
>>> +   irq_data_update_effective_affinity(data, cpumask_of(cpu));
>>> }
>>
>>
>> Nit: commit description says about reusing irq_data_update_affinity()
>> only, but here we also reuse irq_data_update_effective_affinity(), so
>> I would mention that in the description.
>>
>> Reviewed-by: Oleksandr Tyshchenko  # Xen bits
> b4 shouts because of your email address:
>
> NOTE: some trailers ignored due to from/email mismatches:
>  ! Trailer: Reviewed-by: Oleksandr Tyshchenko 
>  # Xen bits
>   Msg From: Oleksandr 

sorry for the inconvenience


>
> I've used the tag anyway,


thank you


>   but you may want to fix your setup in the
> future.

yes, will do


>
> Thanks,
>
>   M.
>
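[Editorial note: the helper adopted across this series is essentially a one-line wrapper around the open-coded cpumask_copy() pattern seen at every call site above. The sketch below models it in self-contained, userspace-testable form; the struct layouts and the memcpy-based mask copy are simplified stand-ins for the kernel's cpumask and irq_data internals, not the actual genirq implementation.]

```c
/* Userspace model of the refactor: every call site that open-coded
 * cpumask_copy(irq_data_get_affinity_mask(d), mask) now goes through
 * one helper. Struct layouts are simplified stand-ins. */
#include <assert.h>
#include <string.h>

typedef struct { unsigned long bits; } cpumask_t;   /* toy cpumask */

struct irq_common_data { cpumask_t affinity; };
struct irq_data { struct irq_common_data *common; };

static cpumask_t *irq_data_get_affinity_mask(struct irq_data *d)
{
	return &d->common->affinity;
}

/* The new helper: one place to change if the affinity mask ever moves
 * or needs extra bookkeeping on update. */
static void irq_data_update_affinity(struct irq_data *d, const cpumask_t *m)
{
	memcpy(irq_data_get_affinity_mask(d), m, sizeof(*m));
}
```

The point of the indirection is exactly what the series exploits: callers stop assuming the mask is directly writable through the getter, so the core can later change how affinity is stored without touching every driver.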
-- 
Regards,

Oleksandr Tyshchenko