Re: [PATCH 1/3] kvm: arm/arm64: Take mmap_sem in stage2_unmap_vm

2017-03-15 Thread Paolo Bonzini


On 15/03/2017 10:17, Christoffer Dall wrote:
> On Tue, Mar 14, 2017 at 02:52:32PM +, Suzuki K Poulose wrote:
>> From: Marc Zyngier 
>>
>> We don't hold the mmap_sem while searching for the VMAs when
>> we try to unmap each memslot for a VM. Fix this properly to
>> avoid unexpected results.
>>
>> Fixes: commit 957db105c997 ("arm/arm64: KVM: Introduce stage2_unmap_vm")
>> Cc: sta...@vger.kernel.org # v3.19+
>> Cc: Christoffer Dall 
>> Signed-off-by: Marc Zyngier 
>> Signed-off-by: Suzuki K Poulose 
>> ---
>>  arch/arm/kvm/mmu.c | 2 ++
>>  1 file changed, 2 insertions(+)
>>
>> diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
>> index 962616f..f2e2e0c 100644
>> --- a/arch/arm/kvm/mmu.c
>> +++ b/arch/arm/kvm/mmu.c
>> @@ -803,6 +803,7 @@ void stage2_unmap_vm(struct kvm *kvm)
>>  int idx;
>>  
>>  idx = srcu_read_lock(&kvm->srcu);
>> +down_read(&current->mm->mmap_sem);
>>  spin_lock(&kvm->mmu_lock);
>>  
>>  slots = kvm_memslots(kvm);
>> @@ -810,6 +811,7 @@ void stage2_unmap_vm(struct kvm *kvm)
>>  stage2_unmap_memslot(kvm, memslot);
>>  
>>  spin_unlock(&kvm->mmu_lock);
>> +up_read(&current->mm->mmap_sem);
>>  srcu_read_unlock(&kvm->srcu, idx);
>>  }
>>  
>> -- 
>> 2.7.4
>>
> 
> Are we sure that holding mmu_lock is valid while holding the mmap_sem?

Sure, spinlock-inside-semaphore and spinlock-inside-mutex are always okay.
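
For illustration, a minimal sketch of that nesting rule with placeholder
locks (hypothetical code, not taken from the patch): taking a spinlock
under a sleeping lock is fine, while the reverse would sleep in an
atomic context.

#include <linux/rwsem.h>
#include <linux/spinlock.h>

static DECLARE_RWSEM(example_sem);	/* placeholder sleeping lock */
static DEFINE_SPINLOCK(example_lock);	/* placeholder spinlock */

static void ok_nesting(void)
{
	down_read(&example_sem);	/* may sleep: fine, nothing atomic held yet */
	spin_lock(&example_lock);	/* atomic section begins */
	/* ... critical section ... */
	spin_unlock(&example_lock);
	up_read(&example_sem);
}

static void broken_nesting(void)
{
	spin_lock(&example_lock);
	down_read(&example_sem);	/* BUG: may sleep while holding a spinlock */
	up_read(&example_sem);
	spin_unlock(&example_lock);
}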

Paolo


Re: [PATCH 1/3] kvm: arm/arm64: Take mmap_sem in stage2_unmap_vm

2017-03-15 Thread Christoffer Dall
On Wed, Mar 15, 2017 at 09:34:53AM +, Marc Zyngier wrote:
> On 15/03/17 09:17, Christoffer Dall wrote:
> > On Tue, Mar 14, 2017 at 02:52:32PM +, Suzuki K Poulose wrote:
> >> From: Marc Zyngier 
> >>
> >> We don't hold the mmap_sem while searching for the VMAs when
> >> we try to unmap each memslot for a VM. Fix this properly to
> >> avoid unexpected results.
> >>
> >> Fixes: commit 957db105c997 ("arm/arm64: KVM: Introduce stage2_unmap_vm")
> >> Cc: sta...@vger.kernel.org # v3.19+
> >> Cc: Christoffer Dall 
> >> Signed-off-by: Marc Zyngier 
> >> Signed-off-by: Suzuki K Poulose 
> >> ---
> >>  arch/arm/kvm/mmu.c | 2 ++
> >>  1 file changed, 2 insertions(+)
> >>
> >> diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
> >> index 962616f..f2e2e0c 100644
> >> --- a/arch/arm/kvm/mmu.c
> >> +++ b/arch/arm/kvm/mmu.c
> >> @@ -803,6 +803,7 @@ void stage2_unmap_vm(struct kvm *kvm)
> >>int idx;
> >>  
> >>	idx = srcu_read_lock(&kvm->srcu);
> >> +	down_read(&current->mm->mmap_sem);
> >>	spin_lock(&kvm->mmu_lock);
> >>  
> >>slots = kvm_memslots(kvm);
> >> @@ -810,6 +811,7 @@ void stage2_unmap_vm(struct kvm *kvm)
> >>stage2_unmap_memslot(kvm, memslot);
> >>  
> >>	spin_unlock(&kvm->mmu_lock);
> >> +	up_read(&current->mm->mmap_sem);
> >>	srcu_read_unlock(&kvm->srcu, idx);
> >>  }
> >>  
> >> -- 
> >> 2.7.4
> >>
> > 
> > Are we sure that holding mmu_lock is valid while holding the mmap_sem?
> 
> Maybe I'm just confused by the many levels of locking. Here's my rationale:
> 
> - kvm->srcu protects the memslot list
> - mmap_sem protects the kernel VMA list
> - mmu_lock protects the stage2 page tables (at least here)
> 
> I don't immediately see any issue with holding the mmap_sem here
> (unless there is a path that would retrigger a down operation on the
> mmap_sem?).
> 
> Or am I missing something obvious?

I was worried that someone else could hold the mmu_lock and then take the
mmap_sem, but of course that isn't allowed, because acquiring the semaphore
can sleep. So I agree, you should be good.

I just needed this conversation to feel good about this patch ;)

Reviewed-by: Christoffer Dall 


Re: [PATCH 1/3] kvm: arm/arm64: Take mmap_sem in stage2_unmap_vm

2017-03-15 Thread Marc Zyngier
On 15/03/17 09:17, Christoffer Dall wrote:
> On Tue, Mar 14, 2017 at 02:52:32PM +, Suzuki K Poulose wrote:
>> From: Marc Zyngier 
>>
>> We don't hold the mmap_sem while searching for the VMAs when
>> we try to unmap each memslot for a VM. Fix this properly to
>> avoid unexpected results.
>>
>> Fixes: commit 957db105c997 ("arm/arm64: KVM: Introduce stage2_unmap_vm")
>> Cc: sta...@vger.kernel.org # v3.19+
>> Cc: Christoffer Dall 
>> Signed-off-by: Marc Zyngier 
>> Signed-off-by: Suzuki K Poulose 
>> ---
>>  arch/arm/kvm/mmu.c | 2 ++
>>  1 file changed, 2 insertions(+)
>>
>> diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
>> index 962616f..f2e2e0c 100644
>> --- a/arch/arm/kvm/mmu.c
>> +++ b/arch/arm/kvm/mmu.c
>> @@ -803,6 +803,7 @@ void stage2_unmap_vm(struct kvm *kvm)
>>  int idx;
>>  
>>  idx = srcu_read_lock(&kvm->srcu);
>> +down_read(&current->mm->mmap_sem);
>>  spin_lock(&kvm->mmu_lock);
>>  
>>  slots = kvm_memslots(kvm);
>> @@ -810,6 +811,7 @@ void stage2_unmap_vm(struct kvm *kvm)
>>  stage2_unmap_memslot(kvm, memslot);
>>  
>>  spin_unlock(&kvm->mmu_lock);
>> +up_read(&current->mm->mmap_sem);
>>  srcu_read_unlock(&kvm->srcu, idx);
>>  }
>>  
>> -- 
>> 2.7.4
>>
> 
> Are we sure that holding mmu_lock is valid while holding the mmap_sem?

Maybe I'm just confused by the many levels of locking. Here's my rationale:

- kvm->srcu protects the memslot list
- mmap_sem protects the kernel VMA list
- mmu_lock protects the stage2 page tables (at least here)

I don't immediately see any issue with holding the mmap_sem here
(unless there is a path that would retrigger a down operation on the
mmap_sem?).

Or am I missing something obvious?
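
(For illustration only, a hypothetical sketch of the kind of VMA walk that
makes the read side of mmap_sem necessary here; the helper name and loop
shape are assumptions, not the actual stage2_unmap_memslot code:)

#include <linux/kvm_host.h>
#include <linux/mm.h>

static void example_walk_memslot_vmas(struct kvm_memory_slot *memslot)
{
	hva_t hva = memslot->userspace_addr;
	hva_t end = hva + (memslot->npages << PAGE_SHIFT);

	/* caller is assumed to hold down_read(&current->mm->mmap_sem) */
	while (hva < end) {
		struct vm_area_struct *vma = find_vma(current->mm, hva);

		if (!vma || vma->vm_start >= end)
			break;
		/* ... unmap the stage2 range backing this VMA ... */
		hva = vma->vm_end;
	}
}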

Thanks,

M.
-- 
Jazz is not dead. It just smells funny...


Re: [PATCH 1/3] kvm: arm/arm64: Take mmap_sem in stage2_unmap_vm

2017-03-15 Thread Christoffer Dall
On Tue, Mar 14, 2017 at 02:52:32PM +, Suzuki K Poulose wrote:
> From: Marc Zyngier 
> 
> We don't hold the mmap_sem while searching for the VMAs when
> we try to unmap each memslot for a VM. Fix this properly to
> avoid unexpected results.
> 
> Fixes: commit 957db105c997 ("arm/arm64: KVM: Introduce stage2_unmap_vm")
> Cc: sta...@vger.kernel.org # v3.19+
> Cc: Christoffer Dall 
> Signed-off-by: Marc Zyngier 
> Signed-off-by: Suzuki K Poulose 
> ---
>  arch/arm/kvm/mmu.c | 2 ++
>  1 file changed, 2 insertions(+)
> 
> diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
> index 962616f..f2e2e0c 100644
> --- a/arch/arm/kvm/mmu.c
> +++ b/arch/arm/kvm/mmu.c
> @@ -803,6 +803,7 @@ void stage2_unmap_vm(struct kvm *kvm)
>   int idx;
>  
>   idx = srcu_read_lock(&kvm->srcu);
> + down_read(&current->mm->mmap_sem);
>   spin_lock(&kvm->mmu_lock);
>  
>   slots = kvm_memslots(kvm);
> @@ -810,6 +811,7 @@ void stage2_unmap_vm(struct kvm *kvm)
>   stage2_unmap_memslot(kvm, memslot);
>  
>   spin_unlock(&kvm->mmu_lock);
> + up_read(&current->mm->mmap_sem);
>   srcu_read_unlock(&kvm->srcu, idx);
>  }
>  
> -- 
> 2.7.4
> 

Are we sure that holding mmu_lock is valid while holding the mmap_sem?

Thanks,
-Christoffer


[PATCH 1/3] kvm: arm/arm64: Take mmap_sem in stage2_unmap_vm

2017-03-14 Thread Suzuki K Poulose
From: Marc Zyngier 

We don't hold the mmap_sem while searching for the VMAs when
we try to unmap each memslot for a VM. Fix this properly to
avoid unexpected results.

Fixes: commit 957db105c997 ("arm/arm64: KVM: Introduce stage2_unmap_vm")
Cc: sta...@vger.kernel.org # v3.19+
Cc: Christoffer Dall 
Signed-off-by: Marc Zyngier 
Signed-off-by: Suzuki K Poulose 
---
 arch/arm/kvm/mmu.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/arch/arm/kvm/mmu.c b/arch/arm/kvm/mmu.c
index 962616f..f2e2e0c 100644
--- a/arch/arm/kvm/mmu.c
+++ b/arch/arm/kvm/mmu.c
@@ -803,6 +803,7 @@ void stage2_unmap_vm(struct kvm *kvm)
int idx;
 
	idx = srcu_read_lock(&kvm->srcu);
+	down_read(&current->mm->mmap_sem);
	spin_lock(&kvm->mmu_lock);
 
slots = kvm_memslots(kvm);
@@ -810,6 +811,7 @@ void stage2_unmap_vm(struct kvm *kvm)
stage2_unmap_memslot(kvm, memslot);
 
	spin_unlock(&kvm->mmu_lock);
+	up_read(&current->mm->mmap_sem);
	srcu_read_unlock(&kvm->srcu, idx);
 }
 
-- 
2.7.4
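
For reference, the whole of stage2_unmap_vm() as it looks with the patch
applied, reconstructed from the hunks above; the local declarations and the
kvm_for_each_memslot loop are assumptions based on context not shown in
the diff.

void stage2_unmap_vm(struct kvm *kvm)
{
	struct kvm_memslots *slots;
	struct kvm_memory_slot *memslot;
	int idx;

	idx = srcu_read_lock(&kvm->srcu);	/* pin the memslot array */
	down_read(&current->mm->mmap_sem);	/* protect the VMA lookups below */
	spin_lock(&kvm->mmu_lock);		/* protect the stage2 page tables */

	slots = kvm_memslots(kvm);
	kvm_for_each_memslot(memslot, slots)
		stage2_unmap_memslot(kvm, memslot);

	spin_unlock(&kvm->mmu_lock);
	up_read(&current->mm->mmap_sem);
	srcu_read_unlock(&kvm->srcu, idx);
}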


