On 16/09/21 20:15, Oliver Upton wrote:
From: Paolo Bonzini<[email protected]>

Protect the reference point for kvmclock with a seqcount, so that
kvmclock updates for all vCPUs can proceed in parallel.  Xen runstate
updates will also run in parallel and not bounce the kvmclock cacheline.

nr_vcpus_matched_tsc, however, is updated outside pvclock_update_vm_gtod_copy,
so a spinlock must still be kept for it.

Signed-off-by: Paolo Bonzini <[email protected]>
[Oliver - drop unused locals, don't double acquire tsc_write_lock]
Signed-off-by: Oliver Upton <[email protected]>
---

This needs a small adjustment:

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 07d00e711043..b0c21d42f453 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -11289,6 +11289,7 @@ void kvm_arch_free_vm(struct kvm *kvm)
 int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
 {
        int ret;
+       unsigned long flags;

        if (type)
                return -EINVAL;
@@ -11314,7 +11315,10 @@ int kvm_arch_init_vm(struct kvm *kvm, unsigned long type)
        mutex_init(&kvm->arch.apic_map_lock);
        seqcount_raw_spinlock_init(&kvm->arch.pvclock_sc, &kvm->arch.tsc_write_lock);
        kvm->arch.kvmclock_offset = -get_kvmclock_base_ns();
+
+       raw_spin_lock_irqsave(&kvm->arch.tsc_write_lock, flags);
        pvclock_update_vm_gtod_copy(kvm);
+       raw_spin_unlock_irqrestore(&kvm->arch.tsc_write_lock, flags);

        kvm->arch.guest_can_read_msr_platform_info = true;
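
For reference, the extra locking is needed because pvclock_sc is a
seqcount_raw_spinlock_t backed by tsc_write_lock, so (at least with
lockdep enabled) the write side asserts that the associated lock is
held.  Since the patch presumably enters the seqcount write section
inside pvclock_update_vm_gtod_copy(), a caller that does not already
hold tsc_write_lock, like kvm_arch_init_vm() here, has to take it
first.  A minimal sketch of the pattern, assuming the standard
<linux/seqlock.h> API and eliding the actual clock update:

	/* Writer: must hold the raw spinlock backing the seqcount. */
	raw_spin_lock_irqsave(&kvm->arch.tsc_write_lock, flags);
	write_seqcount_begin(&kvm->arch.pvclock_sc);
	/* ... recompute the kvmclock/gtod reference point ... */
	write_seqcount_end(&kvm->arch.pvclock_sc);
	raw_spin_unlock_irqrestore(&kvm->arch.tsc_write_lock, flags);

	/* Readers (per-vCPU kvmclock updates) run lockless, in parallel. */
	do {
		seq = read_seqcount_begin(&kvm->arch.pvclock_sc);
		/* ... snapshot the reference point ... */
	} while (read_seqcount_retry(&kvm->arch.pvclock_sc, seq));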