On 20.10.2025 17:57, Oleksii Kurochko wrote:
> This patch adds the initial logic for constructing PTEs from MFNs in the
> RISC-V
> p2m subsystem. It includes:
> - Implementation of p2m_pte_from_mfn(): Generates a valid PTE using the
>   given MFN, p2m_type_t, including permission encoding and PBMT attribute
>   setup.
> - New helper p2m_set_permission(): Encodes access rights (r, w, x) into the
>   PTE based on both p2m type and access permissions.
> - p2m_set_type(): Stores the p2m type in PTE's bits. The storage of types,
>   which don't fit PTE bits, will be implemented separately later.
> - Add detection of Svade extension to properly handle a possible page-fault
>   if A and D bits aren't set.
>
> PBMT type encoding support:
> - Introduces an enum pbmt_type_t to represent the PBMT field values.
> - Maps types like p2m_mmio_direct_dev to p2m_mmio_direct_io, others default
>   to pbmt_pma.
>
> Signed-off-by: Oleksii Kurochko <[email protected]>
> ---
> Changes in V5:
> - Moved setting of p2m_mmio_direct_io inside (!is_table) case in
>   p2m_pte_from_mfn().
> - Extend comment about the place of setting A/D bits with explanation
>   why it is done in this way for now.
> ---
> Changes in V4:
> - p2m_set_permission() updates:
>   - Update permissions for p2m_ram_rw case, make it also executable.
>   - Add pernissions setting for p2m_map_foreign_* types.
>   - Drop setting peromissions for p2m_ext_storage.
>   - Only turn off PTE_VALID bit for p2m_invalid, don't touch other bits.
> - p2m_pte_from_mfn() updates:
>   - Update ASSERT(), add a check that mfn isn't INVALID_MFN (1)
>     explicitly to avoid the case when PADDR_MASK isn't narrow enough to
>     catch the case (1).
>   - Drop unnessary check around call of p2m_set_type() as this check
>     is already included inside p2m_set_type().
> - Introduce new p2m type p2m_first_external to detect that passed type
>   is stored in external storage.
> - Add handling of PTE's A and D bits in pm2_set_permission. Also, set
>   PTE_USER bit. For this cpufeatures.{h and c} were updated to be able
>   to detect availability of Svade extension.
> - Drop grant table related code as it isn't going to be used at the moment.
> ---
> Changes in V3:
> - s/p2m_entry_from_mfn/p2m_pte_from_mfn.
> - s/pbmt_type_t/pbmt_type.
> - s/pbmt_max/pbmt_count.
> - s/p2m_type_radix_set/p2m_set_type.
> - Rework p2m_set_type() to handle only types which are fited into PTEs bits.
>   Other types will be covered separately.
>   Update arguments of p2m_set_type(): there is no any reason for p2m anymore.
> - p2m_set_permissions() updates:
>   - Update the code in p2m_set_permission() for cases p2m_raw_rw and
>     p2m_mmio_direct_io to set proper type permissions.
>   - Add cases for p2m_grant_map_rw and p2m_grant_map_ro.
>   - Use ASSERT_UNEACHABLE() instead of BUG() in switch cases of
>     p2m_set_permissions.
>   - Add blank lines non-fall-through case blocks in switch cases.
> - Set MFN before permissions are set in p2m_pte_from_mfn().
> - Update prototype of p2m_entry_from_mfn().
> ---
> Changes in V2:
> - New patch. It was a part of a big patch "xen/riscv: implement p2m mapping
>   functionality" which was splitted to smaller.
> ---
>  xen/arch/riscv/cpufeature.c             |   1 +
>  xen/arch/riscv/include/asm/cpufeature.h |   1 +
>  xen/arch/riscv/include/asm/page.h       |   8 ++
>  xen/arch/riscv/p2m.c                    | 112 +++++++++++++++++++++++-
>  4 files changed, 118 insertions(+), 4 deletions(-)
>
> diff --git a/xen/arch/riscv/cpufeature.c b/xen/arch/riscv/cpufeature.c
> index b846a106a3..02b68aeaa4 100644
> --- a/xen/arch/riscv/cpufeature.c
> +++ b/xen/arch/riscv/cpufeature.c
> @@ -138,6 +138,7 @@ const struct riscv_isa_ext_data __initconst
> riscv_isa_ext[] = {
>      RISCV_ISA_EXT_DATA(zbs),
>      RISCV_ISA_EXT_DATA(smaia),
>      RISCV_ISA_EXT_DATA(ssaia),
> +    RISCV_ISA_EXT_DATA(svade),
>      RISCV_ISA_EXT_DATA(svpbmt),
>  };
>
> diff --git a/xen/arch/riscv/include/asm/cpufeature.h
> b/xen/arch/riscv/include/asm/cpufeature.h
> index 768b84b769..5f756c76db 100644
> --- a/xen/arch/riscv/include/asm/cpufeature.h
> +++ b/xen/arch/riscv/include/asm/cpufeature.h
> @@ -37,6 +37,7 @@ enum riscv_isa_ext_id {
>      RISCV_ISA_EXT_zbs,
>      RISCV_ISA_EXT_smaia,
>      RISCV_ISA_EXT_ssaia,
> +    RISCV_ISA_EXT_svade,
>      RISCV_ISA_EXT_svpbmt,
>      RISCV_ISA_EXT_MAX
>  };
> diff --git a/xen/arch/riscv/include/asm/page.h
> b/xen/arch/riscv/include/asm/page.h
> index 78e53981ac..4b6baeaaf2 100644
> --- a/xen/arch/riscv/include/asm/page.h
> +++ b/xen/arch/riscv/include/asm/page.h
> @@ -73,6 +73,14 @@
>  #define PTE_SMALL BIT(10, UL)
>  #define PTE_POPULATE BIT(11, UL)
>
> +enum pbmt_type {
> +    pbmt_pma,
> +    pbmt_nc,
> +    pbmt_io,
> +    pbmt_rsvd,
> +    pbmt_count,
> +};
> +
>  #define PTE_ACCESS_MASK (PTE_READABLE | PTE_WRITABLE | PTE_EXECUTABLE)
>
>  #define PTE_PBMT_MASK (PTE_PBMT_NOCACHE | PTE_PBMT_IO)
> diff --git a/xen/arch/riscv/p2m.c b/xen/arch/riscv/p2m.c
> index 71b211410b..f4658e2560 100644
> --- a/xen/arch/riscv/p2m.c
> +++ b/xen/arch/riscv/p2m.c
> @@ -11,6 +11,7 @@
>  #include <xen/sections.h>
>  #include <xen/xvmalloc.h>
>
> +#include <asm/cpufeature.h>
>  #include <asm/csr.h>
>  #include <asm/flushtlb.h>
>  #include <asm/paging.h>
> @@ -349,6 +350,18 @@ static pte_t *p2m_get_root_pointer(struct p2m_domain
> *p2m, gfn_t gfn)
>      return __map_domain_page(p2m->root + root_table_indx);
>  }
>
> +static int p2m_set_type(pte_t *pte, p2m_type_t t)
> +{
> +    int rc = 0;
> +
> +    if ( t > p2m_first_external )
> +        panic("unimplemeted\n");
> +    else
> +        pte->pte |= MASK_INSR(t, P2M_TYPE_PTE_BITS_MASK);
> +
> +    return rc;
> +}
> +
>  static p2m_type_t p2m_get_type(const pte_t pte)
>  {
>      p2m_type_t type = MASK_EXTR(pte.pte, P2M_TYPE_PTE_BITS_MASK);
> @@ -379,11 +392,102 @@ static inline void p2m_clean_pte(pte_t *p, bool
> clean_pte)
>      p2m_write_pte(p, pte, clean_pte);
>  }
>
> -static pte_t p2m_pte_from_mfn(mfn_t mfn, p2m_type_t t)
> +static void p2m_set_permission(pte_t *e, p2m_type_t t)
>  {
> -    panic("%s: hasn't been implemented yet\n", __func__);
> +    e->pte &= ~PTE_ACCESS_MASK;
> +
> +    e->pte |= PTE_USER;
> +
> +    /*
> +     * Two schemes to manage the A and D bits are defined:
> +     * • The Svade extension: when a virtual page is accessed and the A bit
> +     *   is clear, or is written and the D bit is clear, a page-fault
> +     *   exception is raised.
> +     * • When the Svade extension is not implemented, the following scheme
> +     *   applies.
> +     *   When a virtual page is accessed and the A bit is clear, the PTE is
> +     *   updated to set the A bit. When the virtual page is written and the
> +     *   D bit is clear, the PTE is updated to set the D bit. When G-stage
> +     *   address translation is in use and is not Bare, the G-stage virtual
> +     *   pages may be accessed or written by implicit accesses to VS-level
> +     *   memory management data structures, such as page tables.
Can you point me at the part of the spec where this behavior is
described? If things indeed work like this, ...

> +     * Thereby to avoid a page-fault in case of Svade is available, it is
> +     * necesssary to set A and D bits.

... I'd then agree with the "necessary" here. (Nit: note the extra 's'
in your spelling.)

Jan
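
P.S. Just to illustrate what I'd expect the Svade handling to boil down to,
a minimal sketch follows. It is not taken from the patch: svade_available()
is a made-up placeholder for whatever detection the cpufeature code provides,
and PTE_ACCESSED / PTE_DIRTY are assumed to be the names page.h uses for the
A and D bits.

bool svade_available(void);    /* placeholder for the real Svade detection */

static void p2m_set_ad_bits(pte_t *e)
{
    /*
     * With Svade implemented the hardware raises a page fault instead of
     * updating A/D itself, so set both bits up front when constructing a
     * valid leaf PTE.  If, as the comment above claims, hardware updates
     * the bits when Svade is absent, pre-setting them there is merely a
     * harmless shortcut.
     */
    if ( svade_available() )    /* placeholder predicate */
        e->pte |= PTE_ACCESSED | PTE_DIRTY;
}

Setting the bits unconditionally would of course work as well; the gating
above only mirrors the wording of the comment.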
