On 02/08/19 09:48, Anup Patel wrote:
> +struct kvm_vmid {
> +     unsigned long vmid_version;
> +     unsigned long vmid;
> +};
> +

Please document that both fields are written under vmid_lock, and read
outside it.
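
For example, something along these lines (the exact comment wording is
just a sketch):

	struct kvm_vmid {
		/*
		 * Writes to both fields are serialized by vmid_lock;
		 * reads can happen outside the lock, which is why the
		 * accessors use READ_ONCE()/WRITE_ONCE().
		 */
		unsigned long vmid_version;
		unsigned long vmid;
	};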

> +             /*
> +              * On SMP we know no other CPUs can use this CPU's or
> +              * each other's VMID after forced exit returns since the
> +              * vmid_lock blocks them from re-entry to the guest.
> +              */
> +             force_exit_and_guest_tlb_flush(cpu_all_mask);

Please use kvm_flush_remote_tlbs(kvm) instead.  All you need to do to
support it is check for KVM_REQ_TLB_FLUSH and handle it by calling
__kvm_riscv_hfence_gvma_all.  Also, since your spinlock is global, you
should probably release it around the call to kvm_flush_remote_tlbs.
(Think of an implementation that has a very small number of VMID bits.)
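
Roughly like this; where exactly this sits in your run loop and the
lock drop/retake are only a sketch, not a drop-in replacement:

	/* in the vcpu entry path, next to the other request checks */
	if (kvm_check_request(KVM_REQ_TLB_FLUSH, vcpu))
		__kvm_riscv_hfence_gvma_all();

	/* in the VMID rollover path, instead of the forced exit */
	spin_unlock(&vmid_lock);
	kvm_flush_remote_tlbs(kvm);	/* kvm == the vcpu's struct kvm */
	spin_lock(&vmid_lock);
	/* state may have changed while the lock was dropped; recheck */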

> +     if (unlikely(vmid_next == 0)) {
> +             WRITE_ONCE(vmid_version, READ_ONCE(vmid_version) + 1);
> +             vmid_next = 1;
> +             /*
> +              * On SMP we know no other CPUs can use this CPU's or
> +              * each other's VMID after forced exit returns since the
> +              * vmid_lock blocks them from re-entry to the guest.
> +              */
> +             force_exit_and_guest_tlb_flush(cpu_all_mask);
> +     }
> +
> +     vmid->vmid = vmid_next;
> +     vmid_next++;
> +     vmid_next &= (1 << vmid_bits) - 1;
> +
> +     /* Ensure VMID next update is completed */
> +     smp_wmb();

This barrier is not necessary.  The write to vmid->vmid need not be
ordered against the write to vmid->vmid_version, because the two fields
are read and written in completely different places.

(As a rule of thumb, each smp_wmb() should have a matching smp_rmb()
somewhere, and this one doesn't.)
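
That is, the canonical pairing looks like:

	CPU 0 (writer)			CPU 1 (reader)
	WRITE_ONCE(a, 1);		r1 = READ_ONCE(b);
	smp_wmb();			smp_rmb();
	WRITE_ONCE(b, 1);		r2 = READ_ONCE(a);

where, if r1 == 1, the reader is guaranteed r2 == 1.  Without a reader
side like this, the smp_wmb() orders nothing that anyone can observe.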

Paolo

> +     WRITE_ONCE(vmid->vmid_version, READ_ONCE(vmid_version));
> +
