Re: [PATCH kernel] powerpc/iommu: Annotate nested lock for lockdep

2021-02-22 Thread Alexey Kardashevskiy




On 18/02/2021 23:59, Frederic Barrat wrote:



On 16/02/2021 04:20, Alexey Kardashevskiy wrote:

The IOMMU table is divided into pools for concurrent mappings and each
pool has a separate spinlock. When taking ownership of an IOMMU group
to pass a device through to a VM, we lock these spinlocks, which triggers
a false positive warning in lockdep (below).

This fixes it by annotating the large pool's spinlock as a nest lock.

===
WARNING: possible recursive locking detected
5.11.0-le_syzkaller_a+fstn1 #100 Not tainted

qemu-system-ppc/4129 is trying to acquire lock:
c000119bddb0 (&(p->lock)/1){}-{2:2}, at: iommu_take_ownership+0xac/0x1e0


but task is already holding lock:
c000119bdd30 (&(p->lock)/1){}-{2:2}, at: iommu_take_ownership+0xac/0x1e0


other info that might help us debug this:
  Possible unsafe locking scenario:

    CPU0
    ----
   lock(&(p->lock)/1);
   lock(&(p->lock)/1);
===

Signed-off-by: Alexey Kardashevskiy 
---
  arch/powerpc/kernel/iommu.c | 2 +-
  1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/powerpc/kernel/iommu.c b/arch/powerpc/kernel/iommu.c
index 557a09dd5b2f..2ee642a6731a 100644
--- a/arch/powerpc/kernel/iommu.c
+++ b/arch/powerpc/kernel/iommu.c
@@ -1089,7 +1089,7 @@ int iommu_take_ownership(struct iommu_table *tbl)
  spin_lock_irqsave(&tbl->large_pool.lock, flags);
  for (i = 0; i < tbl->nr_pools; i++)
-    spin_lock(&tbl->pools[i].lock);
+    spin_lock_nest_lock(&tbl->pools[i].lock, &tbl->large_pool.lock);



We have the same pattern and therefore should have the same problem in 
iommu_release_ownership().


But as I understand it, we're hacking our way around lockdep here, since 
conceptually those locks are independent. I was wondering why it seems 
to be fixed by worrying only about the large pool lock.


It is the other way around: we are telling lockdep not to worry about 
the small pool locks when the nest lock (== the large pool lock) is 
held. The warning is printed when a recursive lock is detected and 
lockdep checks, in check_deadlock(), whether there is a nest lock 
registered for the lock being acquired.
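The locking pattern under discussion can be sketched in plain userspace C 
(pthread mutexes standing in for kernel spinlocks; the struct and function 
names here are invented for illustration, not the kernel's):

```c
#include <assert.h>
#include <pthread.h>

#define NR_POOLS 4

/* Userspace analogue of the iommu_table locking layout:
 * one "large pool" lock plus one lock per small pool. */
struct fake_iommu_table {
	pthread_mutex_t large_pool_lock;
	pthread_mutex_t pool_lock[NR_POOLS];
};

static void fake_table_init(struct fake_iommu_table *tbl)
{
	pthread_mutex_init(&tbl->large_pool_lock, NULL);
	for (int i = 0; i < NR_POOLS; i++)
		pthread_mutex_init(&tbl->pool_lock[i], NULL);
}

/* Mirrors iommu_take_ownership(): take the large pool lock first,
 * then every small pool lock. In the kernel, the small pool locks
 * are taken with spin_lock_nest_lock(..., &large_pool.lock), which
 * tells lockdep the large pool lock serialises the whole sequence. */
static void fake_take_ownership(struct fake_iommu_table *tbl)
{
	pthread_mutex_lock(&tbl->large_pool_lock);
	for (int i = 0; i < NR_POOLS; i++)
		pthread_mutex_lock(&tbl->pool_lock[i]);
}

static void fake_release_ownership(struct fake_iommu_table *tbl)
{
	for (int i = NR_POOLS - 1; i >= 0; i--)
		pthread_mutex_unlock(&tbl->pool_lock[i]);
	pthread_mutex_unlock(&tbl->large_pool_lock);
}
```

While ownership is held, every pool lock is taken, so any concurrent 
mapper is excluded; that is the whole point of the loop, and why the 
per-pool acquisitions all share one lock class and need the nest 
annotation.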



That loop can take 
many locks (up to 4 with the current config). However, if the DMA window 
is less than 1GB, we would only have one, so it would make sense for 
lockdep to stop complaining.


Why would it stop if the large pool is always there?

Is that what happened? In that case, this 
patch doesn't really fix it. Or I'm missing something :-)


I tried with 1 or 2 small pools, no difference at all. I might be 
missing something here too :)





   Fred




  iommu_table_release_pages(tbl);



--
Alexey


Re: [PATCH kernel] powerpc/iommu: Annotate nested lock for lockdep

2021-02-22 Thread Alexey Kardashevskiy




On 20/02/2021 14:49, Alexey Kardashevskiy wrote:



On 18/02/2021 23:59, Frederic Barrat wrote:



On 16/02/2021 04:20, Alexey Kardashevskiy wrote:

The IOMMU table is divided into pools for concurrent mappings and each
pool has a separate spinlock. When taking ownership of an IOMMU group
to pass a device through to a VM, we lock these spinlocks, which triggers
a false positive warning in lockdep (below).

This fixes it by annotating the large pool's spinlock as a nest lock.

===
WARNING: possible recursive locking detected
5.11.0-le_syzkaller_a+fstn1 #100 Not tainted

qemu-system-ppc/4129 is trying to acquire lock:
c000119bddb0 (&(p->lock)/1){}-{2:2}, at: iommu_take_ownership+0xac/0x1e0


but task is already holding lock:
c000119bdd30 (&(p->lock)/1){}-{2:2}, at: iommu_take_ownership+0xac/0x1e0


other info that might help us debug this:
  Possible unsafe locking scenario:

    CPU0
    ----
   lock(&(p->lock)/1);
   lock(&(p->lock)/1);
===

Signed-off-by: Alexey Kardashevskiy 
---
  arch/powerpc/kernel/iommu.c | 2 +-
  1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/powerpc/kernel/iommu.c b/arch/powerpc/kernel/iommu.c
index 557a09dd5b2f..2ee642a6731a 100644
--- a/arch/powerpc/kernel/iommu.c
+++ b/arch/powerpc/kernel/iommu.c
@@ -1089,7 +1089,7 @@ int iommu_take_ownership(struct iommu_table *tbl)
  spin_lock_irqsave(&tbl->large_pool.lock, flags);
  for (i = 0; i < tbl->nr_pools; i++)
-    spin_lock(&tbl->pools[i].lock);
+    spin_lock_nest_lock(&tbl->pools[i].lock, &tbl->large_pool.lock);



We have the same pattern and therefore should have the same problem in 
iommu_release_ownership().


But as I understand it, we're hacking our way around lockdep here, since 
conceptually those locks are independent. I was wondering why it seems 
to be fixed by worrying only about the large pool lock. That loop can 
take many locks (up to 4 with the current config). However, if the DMA 
window is less than 1GB, we would only have one, so it would make sense 
for lockdep to stop complaining. Is that what happened? In that case, 
this patch doesn't really fix it. Or I'm missing something :-)



My rough understanding is that when spin_lock_nest_lock() is called for 
the first time, it does some magic with lockdep classes somewhere in 
__lock_acquire()/register_lock_class(), and right after that the nested 
lock is not the same as before: it is annotated so that we cannot take 
the nested locks without taking the nest lock first, and no 
(re)annotation is needed. I'll try to poke at this code once again and 
see; it was just easier with p9/nested, which is gone for now because of 
a little snow in one of the southern states :)
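The check_deadlock() behaviour being described can be modelled with a toy 
checker. This is an illustrative model only, not the real lockdep code, 
and every name in it is invented: acquiring a lock whose class is already 
held is flagged as possible recursion, unless the acquisition names a 
nest lock the task already holds.

```c
#include <assert.h>
#include <stdbool.h>

#define MAX_HELD 16

/* Toy model of a task's held-lock stack, tracked by lock class. */
struct toy_task {
	int held_class[MAX_HELD];
	int nr_held;
};

static bool holds_class(const struct toy_task *t, int class)
{
	for (int i = 0; i < t->nr_held; i++)
		if (t->held_class[i] == class)
			return true;
	return false;
}

/* Record an acquisition; return true if toy-lockdep would warn.
 * nest_class < 0 means "no nest lock was named" (plain spin_lock). */
static bool toy_acquire(struct toy_task *t, int class, int nest_class)
{
	bool warn = holds_class(t, class) &&
		    !(nest_class >= 0 && holds_class(t, nest_class));
	t->held_class[t->nr_held++] = class;
	return warn;
}
```

In the iommu_take_ownership() case all small pool spinlocks share one 
lock class, so the second iteration of the loop re-acquires an 
already-held class; naming the large pool lock as the nest lock is what 
suppresses the warning.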



Turns out I have a good imagination: it does in fact print this huge 
warning in the release hook as well, so v2 is coming. Thanks,








   Fred




  iommu_table_release_pages(tbl);





--
Alexey


Re: [PATCH kernel] powerpc/iommu: Annotate nested lock for lockdep

2021-02-19 Thread Alexey Kardashevskiy




On 18/02/2021 23:59, Frederic Barrat wrote:



On 16/02/2021 04:20, Alexey Kardashevskiy wrote:

The IOMMU table is divided into pools for concurrent mappings and each
pool has a separate spinlock. When taking ownership of an IOMMU group
to pass a device through to a VM, we lock these spinlocks, which triggers
a false positive warning in lockdep (below).

This fixes it by annotating the large pool's spinlock as a nest lock.

===
WARNING: possible recursive locking detected
5.11.0-le_syzkaller_a+fstn1 #100 Not tainted

qemu-system-ppc/4129 is trying to acquire lock:
c000119bddb0 (&(p->lock)/1){}-{2:2}, at: iommu_take_ownership+0xac/0x1e0


but task is already holding lock:
c000119bdd30 (&(p->lock)/1){}-{2:2}, at: iommu_take_ownership+0xac/0x1e0


other info that might help us debug this:
  Possible unsafe locking scenario:

    CPU0
    ----
   lock(&(p->lock)/1);
   lock(&(p->lock)/1);
===

Signed-off-by: Alexey Kardashevskiy 
---
  arch/powerpc/kernel/iommu.c | 2 +-
  1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/powerpc/kernel/iommu.c b/arch/powerpc/kernel/iommu.c
index 557a09dd5b2f..2ee642a6731a 100644
--- a/arch/powerpc/kernel/iommu.c
+++ b/arch/powerpc/kernel/iommu.c
@@ -1089,7 +1089,7 @@ int iommu_take_ownership(struct iommu_table *tbl)
  spin_lock_irqsave(&tbl->large_pool.lock, flags);
  for (i = 0; i < tbl->nr_pools; i++)
-    spin_lock(&tbl->pools[i].lock);
+    spin_lock_nest_lock(&tbl->pools[i].lock, &tbl->large_pool.lock);



We have the same pattern and therefore should have the same problem in 
iommu_release_ownership().


But as I understand it, we're hacking our way around lockdep here, since 
conceptually those locks are independent. I was wondering why it seems 
to be fixed by worrying only about the large pool lock. That loop can 
take many locks (up to 4 with the current config). However, if the DMA 
window is less than 1GB, we would only have one, so it would make sense 
for lockdep to stop complaining. Is that what happened? In that case, 
this patch doesn't really fix it. Or I'm missing something :-)



My rough understanding is that when spin_lock_nest_lock() is called for 
the first time, it does some magic with lockdep classes somewhere in 
__lock_acquire()/register_lock_class(), and right after that the nested 
lock is not the same as before: it is annotated so that we cannot take 
the nested locks without taking the nest lock first, and no 
(re)annotation is needed. I'll try to poke at this code once again and 
see; it was just easier with p9/nested, which is gone for now because of 
a little snow in one of the southern states :)





   Fred




  iommu_table_release_pages(tbl);



--
Alexey


Re: [PATCH kernel] powerpc/iommu: Annotate nested lock for lockdep

2021-02-18 Thread Frederic Barrat




On 16/02/2021 04:20, Alexey Kardashevskiy wrote:

The IOMMU table is divided into pools for concurrent mappings and each
pool has a separate spinlock. When taking ownership of an IOMMU group
to pass a device through to a VM, we lock these spinlocks, which triggers
a false positive warning in lockdep (below).

This fixes it by annotating the large pool's spinlock as a nest lock.

===
WARNING: possible recursive locking detected
5.11.0-le_syzkaller_a+fstn1 #100 Not tainted

qemu-system-ppc/4129 is trying to acquire lock:
c000119bddb0 (&(p->lock)/1){}-{2:2}, at: iommu_take_ownership+0xac/0x1e0

but task is already holding lock:
c000119bdd30 (&(p->lock)/1){}-{2:2}, at: iommu_take_ownership+0xac/0x1e0

other info that might help us debug this:
  Possible unsafe locking scenario:

    CPU0
    ----
   lock(&(p->lock)/1);
   lock(&(p->lock)/1);
===

Signed-off-by: Alexey Kardashevskiy 
---
  arch/powerpc/kernel/iommu.c | 2 +-
  1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/powerpc/kernel/iommu.c b/arch/powerpc/kernel/iommu.c
index 557a09dd5b2f..2ee642a6731a 100644
--- a/arch/powerpc/kernel/iommu.c
+++ b/arch/powerpc/kernel/iommu.c
@@ -1089,7 +1089,7 @@ int iommu_take_ownership(struct iommu_table *tbl)
  
 	spin_lock_irqsave(&tbl->large_pool.lock, flags);

 	for (i = 0; i < tbl->nr_pools; i++)
-		spin_lock(&tbl->pools[i].lock);
+		spin_lock_nest_lock(&tbl->pools[i].lock, &tbl->large_pool.lock);



We have the same pattern and therefore should have the same problem in 
iommu_release_ownership().


But as I understand it, we're hacking our way around lockdep here, since 
conceptually those locks are independent. I was wondering why it seems 
to be fixed by worrying only about the large pool lock. That loop can 
take many locks (up to 4 with the current config). However, if the DMA 
window is less than 1GB, we would only have one, so it would make sense 
for lockdep to stop complaining. Is that what happened? In that case, 
this patch doesn't really fix it. Or I'm missing something :-)


  Fred




iommu_table_release_pages(tbl);
  



[PATCH kernel] powerpc/iommu: Annotate nested lock for lockdep

2021-02-15 Thread Alexey Kardashevskiy
The IOMMU table is divided into pools for concurrent mappings and each
pool has a separate spinlock. When taking ownership of an IOMMU group
to pass a device through to a VM, we lock these spinlocks, which triggers
a false positive warning in lockdep (below).

This fixes it by annotating the large pool's spinlock as a nest lock.

===
WARNING: possible recursive locking detected
5.11.0-le_syzkaller_a+fstn1 #100 Not tainted

qemu-system-ppc/4129 is trying to acquire lock:
c000119bddb0 (&(p->lock)/1){}-{2:2}, at: iommu_take_ownership+0xac/0x1e0

but task is already holding lock:
c000119bdd30 (&(p->lock)/1){}-{2:2}, at: iommu_take_ownership+0xac/0x1e0

other info that might help us debug this:
 Possible unsafe locking scenario:

   CPU0
   ----
  lock(&(p->lock)/1);
  lock(&(p->lock)/1);
===

Signed-off-by: Alexey Kardashevskiy 
---
 arch/powerpc/kernel/iommu.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/powerpc/kernel/iommu.c b/arch/powerpc/kernel/iommu.c
index 557a09dd5b2f..2ee642a6731a 100644
--- a/arch/powerpc/kernel/iommu.c
+++ b/arch/powerpc/kernel/iommu.c
@@ -1089,7 +1089,7 @@ int iommu_take_ownership(struct iommu_table *tbl)
 
 	spin_lock_irqsave(&tbl->large_pool.lock, flags);
 	for (i = 0; i < tbl->nr_pools; i++)
-		spin_lock(&tbl->pools[i].lock);
+		spin_lock_nest_lock(&tbl->pools[i].lock, &tbl->large_pool.lock);
 
iommu_table_release_pages(tbl);
 
-- 
2.17.1