Use a hypercall to send IPIs to multiple vCPUs with a single vmexit,
instead of one vmexit per destination vCPU in xAPIC/x2APIC physical mode
and one vmexit per cluster in x2APIC cluster mode.
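
For reference, the guest-side send path amounts to collecting the
destination APIC IDs into two 64-bit bitmaps and issuing a single
hypercall. A minimal sketch, assuming a three-argument KVM_HC_SEND_IPI
(low bitmap, high bitmap, vector); the function name and exact ABI here
are illustrative, see patch 1/2 for the real hooks:

#include <linux/bitops.h>
#include <linux/cpumask.h>
#include <asm/kvm_para.h>
#include <asm/smp.h>

/*
 * Send @vector to every CPU in @mask with one hypercall.  APIC IDs
 * 0..127 are encoded in two 64-bit bitmaps; returns false so the
 * caller can fall back to the original apic hooks when a destination
 * APIC ID is out of range.
 */
static bool kvm_send_ipi_mask(const struct cpumask *mask, int vector)
{
	unsigned long ipi_bitmap_low = 0, ipi_bitmap_high = 0;
	unsigned int cpu;

	for_each_cpu(cpu, mask) {
		u32 apic_id = per_cpu(x86_cpu_to_apicid, cpu);

		if (apic_id < 64)
			__set_bit(apic_id, &ipi_bitmap_low);
		else if (apic_id < 128)
			__set_bit(apic_id - 64, &ipi_bitmap_high);
		else
			return false;	/* sparse APIC ID, not covered */
	}

	return kvm_hypercall3(KVM_HC_SEND_IPI, ipi_bitmap_low,
			      ipi_bitmap_high, vector) == 0;
}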

Even with QEMU interrupt remapping and PV TLB shootdown enabled, I can
still observe a ~14% performance boost with the ebizzy benchmark on a
64-vCPU VM, and the total number of MSR-induced vmexits drops by ~70%.

The patchset implements PV IPIs for VMs with <= 128 vCPUs, which is
really common in cloud environments. After this patchset is applied, I
can continue to add support for VMs with > 128 vCPUs; that
implementation will have to introduce more complex logic.
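
On the host side, the hypercall handler walks the two bitmaps, looks up
the vCPU with each set physical APIC ID, and injects a fixed-mode IPI.
A rough sketch under the same assumed ABI (locking details, x2APIC
handling, bounds and error checks elided; names are illustrative):

/*
 * Handle KVM_HC_SEND_IPI: bit N of @ipi_bitmap_low targets APIC ID N,
 * bit N of @ipi_bitmap_high targets APIC ID N + 64.
 */
static int kvm_pv_send_ipi(struct kvm *kvm, unsigned long ipi_bitmap_low,
			   unsigned long ipi_bitmap_high, int vector)
{
	struct kvm_lapic_irq irq = {
		.delivery_mode = APIC_DM_FIXED,
		.vector = vector,
	};
	struct kvm_apic_map *map;
	int i;

	rcu_read_lock();
	map = rcu_dereference(kvm->arch.apic_map);

	if (map) {
		for_each_set_bit(i, &ipi_bitmap_low, BITS_PER_LONG)
			if (map->phys_map[i])
				kvm_apic_set_irq(map->phys_map[i]->vcpu,
						 &irq, NULL);

		for_each_set_bit(i, &ipi_bitmap_high, BITS_PER_LONG)
			if (map->phys_map[i + 64])
				kvm_apic_set_irq(map->phys_map[i + 64]->vcpu,
						 &irq, NULL);
	}

	rcu_read_unlock();
	return 0;
}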

Cc: Paolo Bonzini <[email protected]>
Cc: Radim Krčmář <[email protected]>
Cc: Vitaly Kuznetsov <[email protected]>

v1 -> v2:
 * fall back to the original apic hooks for sparse APIC IDs > 128 or any other errors
 * take two bitmask arguments so that one hypercall handles 128 vCPUs
 * fix the KVM_FEATURE_PV_SEND_IPI doc
 * document the hypercall
 * fix NMI selftest failures
 * fix build errors reported by 0day

Wanpeng Li (2):
  KVM: X86: Implement PV IPI in linux guest
  KVM: X86: Implement PV send IPI support

 Documentation/virtual/kvm/cpuid.txt      |  4 ++
 Documentation/virtual/kvm/hypercalls.txt |  6 ++
 arch/x86/include/uapi/asm/kvm_para.h     |  1 +
 arch/x86/kernel/kvm.c                    | 99 ++++++++++++++++++++++++++++++++
 arch/x86/kvm/cpuid.c                     |  3 +-
 arch/x86/kvm/x86.c                       | 42 ++++++++++++++
 include/uapi/linux/kvm_para.h            |  1 +
 7 files changed, 155 insertions(+), 1 deletion(-)

-- 
2.7.4
