On 02/10/2024 12:02 pm, Jan Beulich wrote:
> On 13.08.2024 21:01, Vaishali Thakkar wrote:
>> +static DEFINE_SPINLOCK(hvm_asid_lock);
>> +
>> /*
>> - * ASIDs partition the physical TLB. In the current implementation ASIDs are
>> - * introduced to reduce the number of TLB flushes. Each time the guest's
>> - * virtual address space changes (e.g. due to an INVLPG, MOV-TO-{CR3, CR4}
>> - * operation), instead of flushing the TLB, a new ASID is assigned. This
>> - * reduces the number of TLB flushes to at most 1/#ASIDs. The biggest
>> - * advantage is that hot parts of the hypervisor's code and data retain in
>> - * the TLB.
>> - *
>> * Sketch of the Implementation:
>> - *
>> - * ASIDs are a CPU-local resource. As preemption of ASIDs is not possible,
>> - * ASIDs are assigned in a round-robin scheme. To minimize the overhead of
>> - * ASID invalidation, at the time of a TLB flush, ASIDs are tagged with a
>> - * 64-bit generation. Only on a generation overflow the code needs to
>> - * invalidate all ASID information stored at the VCPUs with are run on the
>> - * specific physical processor. This overflow appears after about 2^80
>> - * host processor cycles, so we do not optimize this case, but simply disable
>> - * ASID useage to retain correctness.
>> + * ASIDs are assigned in a round-robin scheme per domain as part of
>> + * global allocator scheme and doesn't change during the lifecycle of
>> + * the domain. Once vcpus are initialized and are up, we assign the
>> + * same ASID to all vcpus of that domain at the first VMRUN. With the
>> + * new scheme, we don't need to assign the new ASID during MOV-TO-{CR3,
>> + * CR4}. In the case of INVLPG, we flush the TLB entries belonging to
>> + * the vcpu and do the reassignment of the ASID belonging to the given
>> + * domain.
>
> Why reassignment? In the description you say that ASIDs no longer change
> while a domain exists.
>
>> Currently we do not do anything special for flushing guest
>> + * TLBs in flush_area_local as wbinvd() should able to handle this.
>
> How's WBINVD coming into the picture here? Here we solely care about TLBs,
> while WBINVD is solely about caches.
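(For reference, the per-domain scheme the revised comment describes amounts
to roughly the sketch below. It is illustrative only: hvm_asid_alloc_domain()
and the d->arch.hvm.asid field are placeholder names, not code from the
patch.)

static DEFINE_SPINLOCK(hvm_asid_lock);
static unsigned int next_asid = 1; /* ASID 0 is reserved for the host. */
static unsigned int max_asid;      /* Hardware limit, e.g. CPUID 0x8000000a EBX. */

/*
 * Assign one ASID per domain, round-robin from a global pool.  The ASID
 * never changes for the lifetime of the domain; every vcpu of the domain
 * picks it up at its first VMRUN.
 */
static int hvm_asid_alloc_domain(struct domain *d)
{
    int rc = 0;

    spin_lock(&hvm_asid_lock);

    if ( next_asid <= max_asid )
        d->arch.hvm.asid = next_asid++;
    else
        rc = -EBUSY; /* Pool exhausted: no generations or recycling here. */

    spin_unlock(&hvm_asid_lock);

    return rc;
}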
I suspect there might be confusion based on the eventual linkage in
encrypted VMs.

The ASID (== encryption key index) is stuffed in the upper physical address
bits, and memory accesses are non-coherent between different encryption
domains (they're genuinely different addresses as far as the physical
address logic is concerned). Therefore, in due course, Xen will need to
issue WBINVD on all CPUs before the ASP (AMD Secure Processor) will free up
the ASID for reuse.

But this is a policy requirement for managing the VM lifecycle with the
platform, rather than an actual linkage of TLBs and caches. Otherwise, a
malicious Xen could create a new encrypted VM using the ASID and the old
VM's key (there's a protocol for using the same key "opaquely" to Xen for
NVDIMM reasons), and read any not-yet-evicted cachelines in the clear
(because both the ASID and the loaded key are the same).
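To put a rough shape on it, the eventual ASID-retirement path will want to
look something like this. Sketch only: Xen has no ASP driver today, so
sev_cmd_deactivate() and sev_cmd_df_flush() are placeholders for issuing
the SEV firmware's DEACTIVATE and DF_FLUSH commands.

static int sev_retire_asid(unsigned int asid)
{
    int rc;

    /* Detach the dying guest from its ASID (SEV DEACTIVATE). */
    rc = sev_cmd_deactivate(asid);
    if ( rc )
        return rc;

    /*
     * WBINVD on every CPU.  Cachelines tagged with the old ASID/key are
     * not coherent with any other encryption domain, so nothing else
     * will ever write them back or evict them on our behalf.
     */
    flush_all(FLUSH_CACHE);

    /*
     * Tell the ASP the flush happened (SEV DF_FLUSH).  Only now will the
     * firmware allow the ASID to be bound to a new key/guest.
     */
    return sev_cmd_df_flush();
}

~Andrew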