On 28/08/2025 at 15:05, Jan Beulich wrote:
> On 26.06.2025 16:01, Teddy Astie wrote:
>> From: Vaishali Thakkar <vaishali.thak...@suse.com> (formerly vates.tech)
>>
>> Currently ASID generation and management is done per-PCPU. This
>> scheme is incompatible with SEV technologies, as SEV VMs need to
>> have a fixed ASID associated with all vCPUs of the VM throughout
>> its lifetime.
>>
>> This commit introduces a Xen-wide allocator which initializes
>> the ASIDs at the start of Xen and allows having a fixed ASID
>> throughout the lifecycle of all domains. Having a fixed ASID
>> for non-SEV domains also presents us with the opportunity to
>> further make use of AMD instructions like TLBSYNC and INVLPGB
>> for broadcasting TLB invalidations.
>>
>> Introduce the vcpu->needs_tlb_flush attribute to schedule a guest TLB
>> flush for the next VMRUN/VMENTER. This will later be done using
>> either the TLB_CONTROL field (AMD) or INVEPT (Intel). This flush method
>> is used in place of the current ASID swapping logic.
>>
>> Signed-off-by: Teddy Astie <teddy.as...@vates.tech>
>> Signed-off-by: Vaishali Thakkar <vaishali.thak...@suse.com> (formerly vates.tech)
>
> Not sure whether you may legitimately alter pre-existing S-o-b. In
> any event the two S-o-b look to be in the wrong order; like most
> other tags they want to be sorted chronologically.
>
Yes, I misordered it; will fix it for the next version.

>> ---
>> CC: Oleksii Kurochko <oleksii.kuroc...@gmail.com>
>>
>> Should the ASID/VPID/VMID management logic be shared across
>> x86/ARM/RISC-V?
>
> If all present and future architectures agree on how exactly they want
> to use these IDs. Which I'm unsure of. RISC-V is now vaguely following
> the original x86 model.
>
>> Is it possible to have multiple vCPUs of the same domain simultaneously
>> scheduled on top of a single pCPU? If so, it would need special
>> consideration for this corner case, so that we don't miss a TLB flush
>> in such cases.
>
> No, how would two entities be able to run on a single pCPU at any single
> point in time?
>

My concern was rather about two vCPUs of the same domain (thus sharing
the same ASID) running consecutively on the same core, e.g.
dom1.vcpu1 -> dom1.vcpu2 without an appropriate TLB flush in between,
because the address space may not be consistent between the two vCPUs.

I found an approach to fix this by tracking, per domain, which vCPU last
ran on each pCPU, so that on a mismatch we do a TLB flush. The TLB is
also explicitly flushed when a vCPU is migrated, because in that case
too it can be inconsistent for the ASID.

>> I get varying stability when testing shadow paging with these patches,
>> unsure what the exact root cause is. HAP works perfectly fine though.
>>
>> TODO:
>> - Intel: Don't assign the VPID at each VMENTER, though we need
>>   to rethink how we manage the VMCS with nested virtualization / altp2m
>>   before changing this behavior.
>> - AMD: Consider hot-plug of CPUs with ERRATA_170. (is it possible?)
>> - Consider cases where we don't have enough ASIDs (e.g. Xen as a nested guest)
>> - Nested virtualization ASID management
>
> For these last two points - maybe we really need a mixed model?
>

A mixed model would not allow future support for broadcast TLB
invalidation (even for hypervisor use) with e.g. AMD INVLPGB or
(future) Intel Remote Action Request.
>> ---
>>  xen/arch/x86/flushtlb.c                |  31 +++---
>>  xen/arch/x86/hvm/asid.c                | 148 +++++++++----
>>  xen/arch/x86/hvm/emulate.c             |   2 +-
>>  xen/arch/x86/hvm/hvm.c                 |  14 ++-
>>  xen/arch/x86/hvm/nestedhvm.c           |   6 +-
>>  xen/arch/x86/hvm/svm/asid.c            |  77 ++++++++-----
>>  xen/arch/x86/hvm/svm/nestedsvm.c       |   1 -
>>  xen/arch/x86/hvm/svm/svm.c             |  36 +++---
>>  xen/arch/x86/hvm/svm/svm.h             |   4 -
>>  xen/arch/x86/hvm/vmx/vmcs.c            |   5 +-
>>  xen/arch/x86/hvm/vmx/vmx.c             |  68 ++++++------
>>  xen/arch/x86/hvm/vmx/vvmx.c            |   5 +-
>>  xen/arch/x86/include/asm/flushtlb.h    |   7 --
>>  xen/arch/x86/include/asm/hvm/asid.h    |  25 ++---
>>  xen/arch/x86/include/asm/hvm/domain.h  |   1 +
>>  xen/arch/x86/include/asm/hvm/hvm.h     |  15 +--
>>  xen/arch/x86/include/asm/hvm/svm/svm.h |   5 +
>>  xen/arch/x86/include/asm/hvm/vcpu.h    |  10 +-
>>  xen/arch/x86/include/asm/hvm/vmx/vmx.h |   4 +-
>>  xen/arch/x86/mm/hap/hap.c              |   6 +-
>>  xen/arch/x86/mm/p2m.c                  |   7 +-
>>  xen/arch/x86/mm/paging.c               |   2 +-
>>  xen/arch/x86/mm/shadow/hvm.c           |   1 +
>>  xen/arch/x86/mm/shadow/multi.c         |  12 +-
>>  xen/include/xen/sched.h                |   2 +
>>  25 files changed, 227 insertions(+), 267 deletions(-)
>
> I think I said this to Vaishali already: This really wants splitting up some,
> to become halfway reviewable.
>

Indeed, I am looking for an approach to split the change without
breaking things in-between patches.

> Jan
>

Teddy

Teddy Astie | Vates XCP-ng Developer

XCP-ng & Xen Orchestra - Vates solutions

web: https://vates.tech