2019-05-30 09:05+0800, Wanpeng Li:
> From: Wanpeng Li <[email protected]>
> 
> The target vCPUs are in a runnable state after vcpu_kick and are
> suitable as yield targets. This patch implements the sched yield
> hypercall.
> 
> A 17% performance increase on the ebizzy benchmark can be observed in
> an over-subscribed environment (with kvm-pv-tlb disabled, exercising
> the TLB-flush call-function IPI-many path, since call-function IPIs
> are not easy to trigger from a userspace workload).
> 
> Cc: Paolo Bonzini <[email protected]>
> Cc: Radim Krčmář <[email protected]>
> Cc: Liran Alon <[email protected]>
> Signed-off-by: Wanpeng Li <[email protected]>
> ---
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> @@ -7172,6 +7172,28 @@ void kvm_vcpu_deactivate_apicv(struct kvm_vcpu *vcpu)
>       kvm_x86_ops->refresh_apicv_exec_ctrl(vcpu);
>  }
>  
> +static void kvm_sched_yield(struct kvm *kvm, unsigned long dest_id)
> +{
> +     struct kvm_vcpu *target = NULL;
> +     struct kvm_apic_map *map = NULL;
> +
> +     rcu_read_lock();
> +     map = rcu_dereference(kvm->arch.apic_map);
> +
> +     if (unlikely(!map) || dest_id > map->max_apic_id)
> +             goto out;
> +
> +     if (map->phys_map[dest_id]->vcpu) {

This should check that map->phys_map[dest_id] itself is non-NULL before
dereferencing it.
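
E.g. something along these lines (only sketching the added NULL check,
otherwise keeping the structure of the patch):

        if (map->phys_map[dest_id] && map->phys_map[dest_id]->vcpu) {
                target = map->phys_map[dest_id]->vcpu;
                rcu_read_unlock();
                kvm_vcpu_yield_to(target);
        }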

> +             target = map->phys_map[dest_id]->vcpu;
> +             rcu_read_unlock();
> +             kvm_vcpu_yield_to(target);
> +     }
> +
> +out:
> +     if (!target)
> +             rcu_read_unlock();

Also, I find the following logic clearer:

  {
        struct kvm_vcpu *target = NULL;
        struct kvm_apic_map *map;
        
        rcu_read_lock();
        map = rcu_dereference(kvm->arch.apic_map);
        
        if (likely(map) && dest_id <= map->max_apic_id &&
            map->phys_map[dest_id])
                target = map->phys_map[dest_id]->vcpu;
        
        rcu_read_unlock();
        
        if (target)
                kvm_vcpu_yield_to(target);
  }
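
(The dispatch side isn't shown in this excerpt; I'd assume the rest of the
series wires the helper into kvm_emulate_hypercall() roughly along the lines
of the sketch below, where KVM_HC_SCHED_YIELD is the new hypercall number and
a0 is the destination APIC ID passed by the guest.)

        case KVM_HC_SCHED_YIELD:
                /* a0 holds the APIC ID of the vCPU to yield to */
                kvm_sched_yield(vcpu->kvm, a0);
                ret = 0;
                break;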

thanks.
