On 04/18/2013 07:00 PM, Gleb Natapov wrote:
> On Tue, Apr 16, 2013 at 02:32:46PM +0800, Xiao Guangrong wrote:
>> pte_list_clear_concurrently allows us to reset pte-desc entry
>> out of mmu-lock. We can reset spte out of mmu-lock if we can protect the
>> lifecycle of sp, we use this way to achieve the goal:
>>
>> unmap_memslot_rmap_nolock():
>> for-each-rmap-in-slot:
>>       preempt_disable
>>       kvm->arch.being_unmapped_rmap = rmapp
>>       clear spte and reset rmap entry
>>       kvm->arch.being_unmapped_rmap = NULL
>>       preempt_enable
>>
>> Other patch like zap-sp and mmu-notify which are protected
>> by mmu-lock:
>>       clear spte and reset rmap entry
>> retry:
>>       if (kvm->arch.being_unmapped_rmap == rmap)
>>              goto retry
>> (the wait is very rare and clear one rmap is very fast, it
>> is not bad even if wait is needed)
>>
> I do not understand how this achieves the goal. Suppose that rmap
> == X and kvm->arch.being_unmapped_rmap == NULL so "goto retry" is skipped,
> but a moment later unmap_memslot_rmap_nolock() does
> kvm->arch.being_unmapped_rmap = X.

Accessing the rmap is always safe, since the rmap and its entries remain
valid until the memslot is destroyed.

What this algorithm protects is the spte, since an spte can only be
freed under the protection of mmu-lock.

In your scenario:

======
   CPU 1                                      CPU 2

vcpu / mmu-notify accesses the RMAP       unmaps the rmap out of mmu-lock,
under mmu-lock                            under slot-lock

zap spte1
clear RMAP entry

kvm->arch.being_unmapped_rmap == NULL,
so it does not wait

free spte1

                                          set kvm->arch.being_unmapped_rmap = RMAP
                                          walk the RMAP and do not see spte1
                                          on the RMAP (its entry was already
                                          reset by CPU 1)
                                          set kvm->arch.being_unmapped_rmap = NULL
======

That is how CPU 2 is prevented from accessing the freed spte.

