On Mon, Dec 12, 2011 at 9:40 AM, Avi Kivity <a...@redhat.com> wrote:
> On 12/11/2011 12:24 PM, Christoffer Dall wrote:
>> This commit introduces the framework for guest memory management
>> through the use of 2nd stage translation. Each VM has a pointer
>> to a level-1 table (the pgd field in struct kvm_arch) which is
>> used for the 2nd stage translations. Entries are added when handling
>> guest faults (later patch) and the table itself can be allocated and
>> freed through the following functions implemented in
>> arch/arm/kvm/arm_mmu.c:
>>  - kvm_alloc_stage2_pgd(struct kvm *kvm);
>>  - kvm_free_stage2_pgd(struct kvm *kvm);
>>
>> Further, each entry in the TLBs and caches is tagged with a VMID
>> in addition to the ASID. The VMIDs are managed using
>> a bitmap and assigned when creating the VM in kvm_arch_init_vm()
>> where the 2nd stage pgd is also allocated. The table is freed in
>> kvm_arch_destroy_vm(). Both functions are called from the main
>> KVM code.
>>
>>
>>  struct kvm_arch {
>> -     pgd_t *pgd;     /* 1-level 2nd stage table */
>> +     u32    vmid;    /* The VMID used for the virt. memory system */
>> +     pgd_t *pgd;     /* 1-level 2nd stage table */
>> +     u64    vttbr;   /* VTTBR value associated with above pgd and vmid */
>>  };
>>
>
> I can't say I have a solid grasp here, but my feeling is that vmid needs
> to be per-vcpu.  Otherwise vcpu 1 can migrate to a cpu that previously
> ran vcpu 0, and reuse its tlb since they have the same vmid.
>
According to the ARM guys, the same rules apply as for ASIDs, which
results in one VMID per VM. TLB entries are matched on the VMID
together with the guest-specific ASID, so two vcpus would only share
TLB entries when running the same guest process, which I assume is
fine - actually desirable. But, granted, my SMP knowledge is limited
so far, so Marc or Catalin may want to chip in here...
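Just to make the scheme concrete, here is a rough userspace sketch of
how I picture the per-VM VMID allocation from the bitmap and the VTTBR
composition. The names, bit positions and widths below are illustrative
assumptions for the sketch, not code taken from the patch:

/*
 * Rough sketch only: per-VM VMID allocation from a bitmap plus the
 * VTTBR value combining the stage-2 pgd base with that VMID.
 */
#include <stdint.h>
#include <stdio.h>

#define NUM_VMIDS   256                     /* assuming 8-bit VMIDs */
static uint8_t vmid_bitmap[NUM_VMIDS / 8];  /* one bit per VMID, 0 = free */

/* Find and claim a free VMID; returns -1 if the space is exhausted. */
static int alloc_vmid(void)
{
        /* skip VMID 0, assuming it stays reserved for the host */
        for (int i = 1; i < NUM_VMIDS; i++) {
                if (!(vmid_bitmap[i / 8] & (1 << (i % 8)))) {
                        vmid_bitmap[i / 8] |= 1 << (i % 8);
                        return i;
                }
        }
        return -1;
}

/*
 * Combine the stage-2 pgd base address with the VMID: low bits hold
 * the table base, VMID in bits [55:48]. Treat the exact field layout
 * as an assumption for this sketch.
 */
static uint64_t make_vttbr(uint64_t pgd_phys, uint32_t vmid)
{
        return (pgd_phys & ((1ULL << 48) - 1)) | ((uint64_t)vmid << 48);
}

int main(void)
{
        int vmid = alloc_vmid();            /* would happen in kvm_arch_init_vm() */
        uint64_t vttbr = make_vttbr(0x80004000ULL, (uint32_t)vmid);

        printf("vmid=%d vttbr=%#llx\n", vmid, (unsigned long long)vttbr);
        return 0;
}

Since the VMID lives in struct kvm_arch rather than in the vcpu, every
vcpu of a VM would program the same value into VTTBR, which is exactly
why the ASID is still needed to distinguish guest processes.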