On 24.11.2025 13:33, Oleksii Kurochko wrote:
> --- a/docs/misc/xen-command-line.pandoc
> +++ b/docs/misc/xen-command-line.pandoc
> @@ -3096,3 +3096,12 @@ the hypervisor was compiled with `CONFIG_XSM` enabled.
> * `silo`: this will deny any unmediated communication channels between
> unprivileged VMs. To choose this, the separated option in kconfig must also
> be enabled.
> +
> +### vmid (RISC-V)
> +> `= <boolean>`
> +
> +> Default: `true`
> +
> +Permit Xen to use Virtual Machine Identifiers. This is an optimisation which
> +tags the TLB entries with an ID per vcpu. This allows for guest TLB flushes
> +to be performed without the overhead of a complete TLB flush.
Please obey the alphabetical sorting within this file.
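As an aside, since this is a boolean_param() option, the optimisation can then
be disabled at boot time with e.g.

    vmid=false

(or any other spelling parse_bool() accepts, such as "vmid=0", or the generic
"no-vmid" negated form).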
> --- /dev/null
> +++ b/xen/arch/riscv/vmid.c
> @@ -0,0 +1,170 @@
> +/* SPDX-License-Identifier: GPL-2.0-only */
> +
> +#include <xen/domain.h>
> +#include <xen/init.h>
> +#include <xen/lib.h>
> +#include <xen/param.h>
> +#include <xen/percpu.h>
> +#include <xen/sections.h>
> +
> +#include <asm/atomic.h>
> +#include <asm/csr.h>
> +#include <asm/flushtlb.h>
> +#include <asm/p2m.h>
> +
> +/* Xen command-line option to enable VMIDs */
> +static bool __ro_after_init opt_vmid = true;
> +boolean_param("vmid", opt_vmid);
> +
> +/*
> + * VMIDs partition the physical TLB. In the current implementation VMIDs are
> + * introduced to reduce the number of TLB flushes. Each time a guest-physical
> + * address space changes, instead of flushing the TLB, a new VMID is
> + * assigned. This reduces the number of TLB flushes to at most 1/#VMIDs.
> + * The biggest advantage is that hot parts of the hypervisor's code and data
> + * remain in the TLB.
> + *
> + * Sketch of the Implementation:
> + *
> + * VMIDs are a hart-local resource. As preemption of VMIDs is not possible,
> + * VMIDs are assigned in a round-robin scheme. To minimize the overhead of
> + * VMID invalidation, at the time of a TLB flush, VMIDs are tagged with a
> + * 64-bit generation. Only on a generation overflow does the code need to
> + * invalidate all VMID information stored in the vCPUs which run on the
> + * specific physical processor. When this overflow occurs, VMID usage is
> + * disabled to retain correctness.
> + */
> +
> +/* Per-Hart VMID management. */
> +struct vmid_data {
> +    uint64_t generation; /* Generation the hart's current VMIDs belong to. */
> +    uint16_t next_vmid;  /* Next VMID to hand out (round-robin). */
> +    uint16_t max_vmid;   /* Largest VMID this hart supports. */
> +    bool used;           /* Whether VMID tagging is in use on this hart. */
> +};
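As an aside, for readers connecting the comment with the structure: the
per-vCPU fast path this scheme implies would presumably look roughly like the
sketch below, where vmid_refresh(), v->arch.vmid, and v->arch.vmid_generation
are made-up names, not taken from the patch.

    static void vmid_refresh(struct vcpu *v)
    {
        struct vmid_data *data = &this_cpu(vmid_data);

        /* Fast path: the vCPU's VMID is from this hart's current generation. */
        if ( v->arch.vmid_generation == data->generation )
            return;

        if ( data->next_vmid > data->max_vmid )
        {
            /* VMID space exhausted: flush, then open a new generation. */
            local_hfence_gvma_all();
            data->generation++;
            data->next_vmid = 0;
        }

        v->arch.vmid = data->next_vmid++;
        v->arch.vmid_generation = data->generation;
    }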
> +
> +static DEFINE_PER_CPU(struct vmid_data, vmid_data);
> +
> +static unsigned int vmidlen_detect(void)
> +{
> +    unsigned int vmid_bits;
> +    unsigned char gstage_mode = get_max_supported_mode();
> +
> +    /*
> +     * According to the RISC-V Privileged Architecture Spec:
> +     *   When MODE=Bare, guest physical addresses are equal to supervisor
> +     *   physical addresses, and there is no further memory protection
> +     *   for a guest virtual machine beyond the physical memory protection
> +     *   scheme described in Section "Physical Memory Protection".
> +     *   In this case, the remaining fields in hgatp must be set to zeros.
> +     * Hence gstage_mode must not be Bare here.
> +     */
> +    ASSERT(gstage_mode != HGATP_MODE_OFF);
> +    csr_write(CSR_HGATP,
> +              MASK_INSR(gstage_mode, HGATP_MODE_MASK) | HGATP_VMID_MASK);
> +    vmid_bits = MASK_EXTR(csr_read(CSR_HGATP), HGATP_VMID_MASK);
> +    vmid_bits = flsl(vmid_bits);
> +    csr_write(CSR_HGATP, _AC(0, UL));
> +
> +    /* local_hfence_gvma_all() will be called at the end of pre_gstage_init. */
> +
> +    return vmid_bits;
> +}
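Presumably the (not quoted) caller then seeds the per-hart state from the
detected width, along the lines of the sketch below; the variable and field
uses are assumptions based on the structure above, not taken from the patch.

    struct vmid_data *data = &this_cpu(vmid_data);

    data->max_vmid = (1U << vmid_bits) - 1;
    data->used = opt_vmid && vmid_bits;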
> +
> +void vmid_init(void)
With the presently sole caller being __init, this (and likely the helper above)
could be __init. IIRC you intend to also call this as secondary harts come up,
so this may be okay. But then it wants justifying in the description.
Jan