On 02/08/2011 01:55 PM, Jan Kiszka wrote:
Just for walking the list of VMs, we do not need to hold the
preemption-disabling kvm_lock. Convert the stat services, the cpufreq
callback, and mmu_shrink to RCU. For the latter, special care is required
to synchronize its list_move_tail with kvm_destroy_vm.


diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index b6a9963..e9d0ed8 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -3587,9 +3587,9 @@ static int mmu_shrink(struct shrinker *shrink, int nr_to_scan, gfp_t gfp_mask)
        if (nr_to_scan == 0)
                goto out;

-       raw_spin_lock(&kvm_lock);
+       rcu_read_lock();

-       list_for_each_entry(kvm, &vm_list, vm_list) {
+       list_for_each_entry_rcu(kvm, &vm_list, vm_list) {
                int idx, freed_pages;
                LIST_HEAD(invalid_list);

You also have to #include <linux/rculist.h>, and change all list operations on vm_list to their RCU variants.
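For reference, the conversion on the writer side would look roughly like this (a sketch against the kvm_create_vm/kvm_destroy_vm paths in kvm_main.c; not compile-tested, and the exact placement follows the existing locking):

```c
#include <linux/rculist.h>

/* kvm_create_vm(): writers keep taking kvm_lock, only readers go RCU */
raw_spin_lock(&kvm_lock);
list_add_rcu(&kvm->vm_list, &vm_list);
raw_spin_unlock(&kvm_lock);

/*
 * kvm_destroy_vm(): unlink under the lock, then wait one grace period
 * so no RCU list walker can still see the VM before it is freed.
 */
raw_spin_lock(&kvm_lock);
list_del_rcu(&kvm->vm_list);
raw_spin_unlock(&kvm_lock);
synchronize_rcu();
```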


@@ -3607,10 +3607,14 @@ static int mmu_shrink(struct shrinker *shrink, int nr_to_scan, gfp_t gfp_mask)
                spin_unlock(&kvm->mmu_lock);
                srcu_read_unlock(&kvm->srcu, idx);
        }
-       if (kvm_freed)
-               list_move_tail(&kvm_freed->vm_list, &vm_list);
+       if (kvm_freed) {
+               raw_spin_lock(&kvm_lock);
+               if (!kvm->deleted)
+                       list_move_tail(&kvm_freed->vm_list, &vm_list);

There is no list_move_tail_rcu().
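And as far as I can see it cannot simply be open-coded either, since re-adding an entry rewrites its ->next pointer while an RCU reader may still be standing on it:

```c
/* NOT a substitute for the missing list_move_tail_rcu(): */
list_del_rcu(&kvm_freed->vm_list);
list_add_tail_rcu(&kvm_freed->vm_list, &vm_list);
/*
 * The re-add rewrites kvm_freed->vm_list.next; a reader currently on
 * that entry jumps straight to the list tail and silently skips every
 * VM between the old and the new position.  Making this safe would
 * need a synchronize_rcu() between the two calls -- far too heavy for
 * a shrinker callback -- hence falling back to kvm_lock around the
 * plain list_move_tail() here.
 */
```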

Why check kvm->deleted? The VM is in the process of being torn down anyway; it doesn't matter whether mmu_shrink or kvm_destroy_vm pulls the trigger.

+               raw_spin_unlock(&kvm_lock);
+       }

-       raw_spin_unlock(&kvm_lock);
+       rcu_read_unlock();




--
error compiling committee.c: too many arguments to function
