On Thu, Jul 29, 2021 at 11:40:06AM +0100, Shameer Kolothum wrote:
> A new VMID allocator for arm64 KVM use. This is based on
> the arm64 ASID allocator algorithm.
> 
> One major deviation from the ASID allocator is the way we
> flush the context. Unlike the ASID allocator, we expect less
> frequent rollover in the case of VMIDs. Hence, instead of
> marking the CPU as flush_pending and issuing a local context
> invalidation on the next context switch, we broadcast TLB
> flush + I-cache invalidation over the inner shareable domain
> on rollover.
> 
> Signed-off-by: Shameer Kolothum <[email protected]>
> ---

[...]

> +void kvm_arm_vmid_update(struct kvm_vmid *kvm_vmid)
> +{
> +     unsigned long flags;
> +     unsigned int cpu;
> +     u64 vmid, old_active_vmid;
> +
> +     vmid = atomic64_read(&kvm_vmid->id);
> +
> +     /*
> +      * Please refer to the comments in check_and_switch_context()
> +      * in arch/arm64/mm/context.c.
> +      */
> +     old_active_vmid = atomic64_read(this_cpu_ptr(&active_vmids));
> +     if (old_active_vmid && vmid_gen_match(vmid) &&
> +         atomic64_cmpxchg_relaxed(this_cpu_ptr(&active_vmids),
> +                                  old_active_vmid, vmid))
> +             return;
> +
> +     raw_spin_lock_irqsave(&cpu_vmid_lock, flags);
> +
> +     /* Check that our VMID belongs to the current generation. */
> +     vmid = atomic64_read(&kvm_vmid->id);
> +     if (!vmid_gen_match(vmid)) {
> +             vmid = new_vmid(kvm_vmid);
> +             atomic64_set(&kvm_vmid->id, vmid);

new_vmid() can just set kvm_vmid->id directly, so there's no need to
return the value and assign it again here.

> +     }
> +
> +     cpu = smp_processor_id();

Why do you need the CPU number here?

Will
_______________________________________________
kvmarm mailing list
[email protected]
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm