Hi Eric,

On 2025/12/2 23:53, Eric Auger wrote:

On 10/12/25 5:12 PM, Tao Tang wrote:
Enhance the page table walker to correctly handle secure and non-secure
memory accesses. This change introduces logic to select the appropriate
address space and enforce architectural security policies during walks.

The page table walker now correctly processes Secure Stage 1
translations. Key changes include:

- The get_pte function now uses the security context to fetch table
entries from either the Secure or Non-secure address space.

- The stage 1 walker tracks the security state, respecting the NSCFG
and NSTable attributes. It correctly handles the hierarchical security
model: if a table descriptor in a secure walk has NSTable=1, all
subsequent lookups for that walk are forced into the Non-secure space.
This is a one-way transition, as specified by the architecture.

- A check is added to fault nested translations that produce a Secure
IPA when Secure stage 2 is not supported (SMMU_S_IDR1.SEL2 == 0).

- The final TLB entry is tagged with the correct output address space,
ensuring proper memory isolation.

Stage 2 translations are currently limited to Non-secure lookups. Full
support for Secure Stage 2 translation will be added in a future series.
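As an illustrative aside (not part of the patch itself), the one-way NSTable transition described above can be modeled as a sticky flag that is OR-accumulated at each walk level; names here are hypothetical stand-ins for the real walker state:

```c
#include <assert.h>
#include <stdbool.h>

/* Sketch of the hierarchical security model for a Secure stage 1 walk:
 * once any table descriptor has NSTable=1, every subsequent descriptor
 * fetch in that walk uses the Non-secure space, and the state can never
 * flip back to Secure. */
typedef struct {
    bool ns;  /* true: fetch descriptors from the Non-secure space */
} WalkState;

static void apply_nstable(WalkState *ws, bool nstable_bit)
{
    /* OR-accumulate: a one-way transition to Non-secure */
    ws->ns = ws->ns || nstable_bit;
}
```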

Signed-off-by: Tao Tang <[email protected]>
---
  hw/arm/smmu-common.c | 64 +++++++++++++++++++++++++++++++++++++++-----
  hw/arm/trace-events  |  2 +-
  2 files changed, 59 insertions(+), 7 deletions(-)

diff --git a/hw/arm/smmu-common.c b/hw/arm/smmu-common.c
index 5fabe30c75..a092bb5a8d 100644
--- a/hw/arm/smmu-common.c
+++ b/hw/arm/smmu-common.c
@@ -399,20 +399,26 @@ void smmu_iotlb_inv_vmid_s1(SMMUState *s, int vmid)
   * @base_addr[@index]
   */
  static int get_pte(dma_addr_t baseaddr, uint32_t index, uint64_t *pte,
-                   SMMUPTWEventInfo *info)
+                   SMMUPTWEventInfo *info, SMMUSecSID sec_sid)
  {
      int ret;
      dma_addr_t addr = baseaddr + index * sizeof(*pte);
-
+    MemTxAttrs attrs = smmu_get_txattrs(sec_sid);
+    AddressSpace *as = smmu_get_address_space(sec_sid);
+    if (!as) {
+        info->type = SMMU_PTW_ERR_WALK_EABT;
is it WALK_EABT or PERMISSION in that case? I fail to find where it is
specified in the spec. Add a reference once?


Maybe this is the same situation I described earlier in the previous thread [1]? I’m still not confident the architecture maps this condition to a specific PTW event type. Rather than arbitrarily picking WALK_EABT or PERMISSION, I am leaning towards treating it as a pure model bug:

I’ll switch this to a g_assert(as) so we don’t report an architected event for something that should never happen on a correctly wired machine model.


[1] https://lore.kernel.org/qemu-devel/[email protected]/
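For illustration, here is a minimal standalone sketch of that direction, with the standard assert() standing in for g_assert() and hypothetical stub types in place of the QEMU ones:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical stand-ins for the QEMU types; the real code would use
 * smmu_get_address_space() and g_assert(). */
typedef enum { SMMU_SEC_SID_NS, SMMU_SEC_SID_S } SMMUSecSID;
typedef struct { const char *name; } AddressSpace;

static AddressSpace as_ns = { "ns" }, as_s = { "secure" };

static AddressSpace *smmu_get_address_space(SMMUSecSID sec_sid)
{
    switch (sec_sid) {
    case SMMU_SEC_SID_NS: return &as_ns;
    case SMMU_SEC_SID_S:  return &as_s;
    default:              return NULL;  /* only reachable on a model bug */
    }
}

static AddressSpace *get_walk_as(SMMUSecSID sec_sid)
{
    AddressSpace *as = smmu_get_address_space(sec_sid);
    /* A NULL address space means the machine model is mis-wired, not a
     * guest-visible condition, so assert instead of raising a PTW event. */
    assert(as);
    return as;
}
```

The point is simply that no SMMUPTWEventInfo is ever filled in for this case; the walker only sees valid address spaces.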

+        info->addr = addr;
+        return -EINVAL;
+    }
      /* TODO: guarantee 64-bit single-copy atomicity */
-    ret = ldq_le_dma(&address_space_memory, addr, pte, MEMTXATTRS_UNSPECIFIED);
+    ret = ldq_le_dma(as, addr, pte, attrs);
------------------------------<snip>------------------------------



------------------------------<snip>------------------------------
          tlbe->entry.translated_addr = gpa;
          tlbe->entry.iova = iova & ~mask;
          tlbe->entry.addr_mask = mask;
@@ -688,7 +726,8 @@ static int smmu_ptw_64_s2(SMMUTransCfg *cfg,
          dma_addr_t pte_addr = baseaddr + offset * sizeof(pte);
          uint8_t s2ap;
-        if (get_pte(baseaddr, offset, &pte, info)) {
+        /* Use NS as Secure Stage 2 is not implemented (SMMU_S_IDR1.SEL2 == 0) */
I don't really get this as you passed the sel2 in the cfg?


In the next revision I’ll simplify the story: SMMUTransCfg will no longer carry a sel2 field, and this series will explicitly not support Secure Stage 2. In that context, the Stage-2 PTW will be hard-coded to use SMMU_SEC_SID_NS. If/when we add SEL2 support in a follow-up series, we can make this configuration-driven instead of hard-coded.
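To make the intent concrete, a minimal compilable sketch (types are stand-ins for the QEMU ones, not the final identifiers):

```c
#include <assert.h>

/* Sketch: with no sel2 field in SMMUTransCfg, the stage 2 walker's
 * descriptor fetches are unconditionally Non-secure. */
typedef enum { SMMU_SEC_SID_NS, SMMU_SEC_SID_S } SMMUSecSID;

/* The security SID that every stage 2 get_pte() call would receive */
static SMMUSecSID s2_walk_sec_sid(void)
{
    /* Hard-coded until Secure Stage 2 (SMMU_S_IDR1.SEL2 == 1) is added */
    return SMMU_SEC_SID_NS;
}
```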


+        if (get_pte(baseaddr, offset, &pte, info, SMMU_SEC_SID_NS)) {
                  goto error;
          }
          trace_smmu_ptw_level(stage, level, ipa, subpage_size,
@@ -741,6 +780,8 @@ static int smmu_ptw_64_s2(SMMUTransCfg *cfg,
              goto error_ipa;
          }
+        tlbe->sec_sid = SMMU_SEC_SID_NS;
+        tlbe->entry.target_as = &address_space_memory;
          tlbe->entry.translated_addr = gpa;
          tlbe->entry.iova = ipa & ~mask;
          tlbe->entry.addr_mask = mask;
@@ -825,6 +866,17 @@ int smmu_ptw(SMMUState *bs, SMMUTransCfg *cfg, dma_addr_t iova,
          return ret;
      }
+    if (!cfg->sel2 && tlbe->sec_sid > SMMU_SEC_SID_NS) {
+        /*
+         * Nested translation with Secure IPA output is not supported if
+         * Secure Stage 2 is not implemented.
+         */
+        info->type = SMMU_PTW_ERR_TRANSLATION;
pointer to the spec for TRANSLATION error?

Otherwise looks good

Eric


After re-reading the spec, I think we should move the check earlier, when decoding the STE/CD, and use the combination of SMMU_S_IDR1.SEL2, Config == 0b11x, and the Secure Stream table context to detect an architecturally illegal nested configuration.


In that case I’ll report a C_BAD_STE-style configuration error and bail out before running any Secure Stage-1 page walk. That matches the spec more closely and avoids doing extra work in this unsupported configuration. What do you think?
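A rough standalone sketch of such a decode-time check (identifier names are illustrative only, not the final QEMU ones):

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical stand-in for the security SID enum */
typedef enum { SMMU_SEC_SID_NS, SMMU_SEC_SID_S } SMMUSecSID;

/* STE.Config == 0b11x selects nested (stage 1 + stage 2) translation */
static bool ste_cfg_is_nested(unsigned cfg)
{
    return (cfg & 0x6) == 0x6;
}

/* Returns true for the architecturally illegal combination: a secure
 * stream requests nested translation while SMMU_S_IDR1.SEL2 == 0.
 * The caller would then record a C_BAD_STE event and abort the
 * translation before any page walk starts. */
static bool ste_nested_needs_sel2(bool sel2, unsigned cfg, SMMUSecSID sec_sid)
{
    return !sel2 && ste_cfg_is_nested(cfg) && sec_sid != SMMU_SEC_SID_NS;
}
```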


Thanks again for your review.

Tao
