Re: [PATCH v3 2/3] KVM: X86: Implement PV sched yield hypercall
On Mon, 10 Jun 2019 at 22:17, Radim Krčmář wrote:
>
> 2019-05-30 09:05+0800, Wanpeng Li:
> > From: Wanpeng Li
> >
> > The target vCPUs are in a runnable state after vcpu_kick and are
> > suitable as yield targets. This patch implements the sched yield
> > hypercall.
> >
> > A 17% performance increase in the ebizzy benchmark can be observed
> > in an over-subscribed environment. (w/ kvm-pv-tlb disabled, testing
> > the TLB flush call-function IPI-many path, since call-function is
> > not easy to trigger from a userspace workload.)
> >
> > Cc: Paolo Bonzini
> > Cc: Radim Krčmář
> > Cc: Liran Alon
> > Signed-off-by: Wanpeng Li
> > ---
> > diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> > @@ -7172,6 +7172,28 @@ void kvm_vcpu_deactivate_apicv(struct kvm_vcpu *vcpu)
> >  	kvm_x86_ops->refresh_apicv_exec_ctrl(vcpu);
> >  }
> >
> > +static void kvm_sched_yield(struct kvm *kvm, unsigned long dest_id)
> > +{
> > +	struct kvm_vcpu *target = NULL;
> > +	struct kvm_apic_map *map = NULL;
> > +
> > +	rcu_read_lock();
> > +	map = rcu_dereference(kvm->arch.apic_map);
> > +
> > +	if (unlikely(!map) || dest_id > map->max_apic_id)
> > +		goto out;
> > +
> > +	if (map->phys_map[dest_id]->vcpu) {
>
> This should check for map->phys_map[dest_id].

Yeah, I made a mistake here.

> > +		target = map->phys_map[dest_id]->vcpu;
> > +		rcu_read_unlock();
> > +		kvm_vcpu_yield_to(target);
> > +	}
> > +
> > +out:
> > +	if (!target)
> > +		rcu_read_unlock();
>
> Also, I find the following logic clearer
>
> {
> 	struct kvm_vcpu *target = NULL;
> 	struct kvm_apic_map *map;
>
> 	rcu_read_lock();
> 	map = rcu_dereference(kvm->arch.apic_map);
>
> 	if (likely(map) && dest_id <= map->max_apic_id &&
> 	    map->phys_map[dest_id])
> 		target = map->phys_map[dest_id]->vcpu;
>
> 	rcu_read_unlock();
>
> 	if (target)
> 		kvm_vcpu_yield_to(target);
> }

Much better, thanks.

Regards,
Wanpeng Li
Re: [PATCH v3 2/3] KVM: X86: Implement PV sched yield hypercall
2019-05-30 09:05+0800, Wanpeng Li:
> From: Wanpeng Li
>
> The target vCPUs are in a runnable state after vcpu_kick and are
> suitable as yield targets. This patch implements the sched yield
> hypercall.
>
> A 17% performance increase in the ebizzy benchmark can be observed in
> an over-subscribed environment. (w/ kvm-pv-tlb disabled, testing the
> TLB flush call-function IPI-many path, since call-function is not
> easy to trigger from a userspace workload.)
>
> Cc: Paolo Bonzini
> Cc: Radim Krčmář
> Cc: Liran Alon
> Signed-off-by: Wanpeng Li
> ---
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> @@ -7172,6 +7172,28 @@ void kvm_vcpu_deactivate_apicv(struct kvm_vcpu *vcpu)
>  	kvm_x86_ops->refresh_apicv_exec_ctrl(vcpu);
>  }
>
> +static void kvm_sched_yield(struct kvm *kvm, unsigned long dest_id)
> +{
> +	struct kvm_vcpu *target = NULL;
> +	struct kvm_apic_map *map = NULL;
> +
> +	rcu_read_lock();
> +	map = rcu_dereference(kvm->arch.apic_map);
> +
> +	if (unlikely(!map) || dest_id > map->max_apic_id)
> +		goto out;
> +
> +	if (map->phys_map[dest_id]->vcpu) {

This should check for map->phys_map[dest_id].

> +		target = map->phys_map[dest_id]->vcpu;
> +		rcu_read_unlock();
> +		kvm_vcpu_yield_to(target);
> +	}
> +
> +out:
> +	if (!target)
> +		rcu_read_unlock();

Also, I find the following logic clearer

{
	struct kvm_vcpu *target = NULL;
	struct kvm_apic_map *map;

	rcu_read_lock();
	map = rcu_dereference(kvm->arch.apic_map);

	if (likely(map) && dest_id <= map->max_apic_id &&
	    map->phys_map[dest_id])
		target = map->phys_map[dest_id]->vcpu;

	rcu_read_unlock();

	if (target)
		kvm_vcpu_yield_to(target);
}

thanks.
[PATCH v3 2/3] KVM: X86: Implement PV sched yield hypercall
From: Wanpeng Li

The target vCPUs are in a runnable state after vcpu_kick and are
suitable as yield targets. This patch implements the sched yield
hypercall.

A 17% performance increase in the ebizzy benchmark can be observed in
an over-subscribed environment. (w/ kvm-pv-tlb disabled, testing the
TLB flush call-function IPI-many path, since call-function is not easy
to trigger from a userspace workload.)

Cc: Paolo Bonzini
Cc: Radim Krčmář
Cc: Liran Alon
Signed-off-by: Wanpeng Li
---
 arch/x86/kvm/x86.c | 26 ++++++++++++++++++++++++++
 1 file changed, 26 insertions(+)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index e7e57de..8575b36 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -7172,6 +7172,28 @@ void kvm_vcpu_deactivate_apicv(struct kvm_vcpu *vcpu)
 	kvm_x86_ops->refresh_apicv_exec_ctrl(vcpu);
 }
 
+static void kvm_sched_yield(struct kvm *kvm, unsigned long dest_id)
+{
+	struct kvm_vcpu *target = NULL;
+	struct kvm_apic_map *map = NULL;
+
+	rcu_read_lock();
+	map = rcu_dereference(kvm->arch.apic_map);
+
+	if (unlikely(!map) || dest_id > map->max_apic_id)
+		goto out;
+
+	if (map->phys_map[dest_id]->vcpu) {
+		target = map->phys_map[dest_id]->vcpu;
+		rcu_read_unlock();
+		kvm_vcpu_yield_to(target);
+	}
+
+out:
+	if (!target)
+		rcu_read_unlock();
+}
+
 int kvm_emulate_hypercall(struct kvm_vcpu *vcpu)
 {
 	unsigned long nr, a0, a1, a2, a3, ret;
@@ -7218,6 +7240,10 @@ int kvm_emulate_hypercall(struct kvm_vcpu *vcpu)
 	case KVM_HC_SEND_IPI:
 		ret = kvm_pv_send_ipi(vcpu->kvm, a0, a1, a2, a3, op_64_bit);
 		break;
+	case KVM_HC_SCHED_YIELD:
+		kvm_sched_yield(vcpu->kvm, a0);
+		ret = 0;
+		break;
 	default:
 		ret = -KVM_ENOSYS;
 		break;
-- 
2.7.4