On 2023/4/18 00:25, Daniel Henrique Barboza wrote:
On 4/13/23 06:01, Weiwei Li wrote:
When a PMP entry overlaps part of the page, we'll set the tlb_size to 1, and
this will make the address set with TLB_INVALID_MASK to make the page
uncached. However, if we clear TLB_INVALID_MASK when the TLB is re-filled,
then the TLB host address will be cached, and the following instructions can
use this host address directly, which may lead to the bypass of PMP-related
checks.
Signed-off-by: Weiwei Li <liwei...@iscas.ac.cn>
Signed-off-by: Junqiang Wang <wangjunqi...@iscas.ac.cn>
---
For this commit I believe it's worth mentioning that it's partially reverting
commit c3c8bf579b431b6b ("accel/tcg: Suppress auto-invalidate in
probe_access_internal"), which was made to handle a particularity/quirk that
was present in s390x code.
At first glance this patch seems benign, but we must make sure that no other
assumptions were made with this particular change in probe_access_internal().
I think this change will introduce no external functional change, except that
we will always walk the page table (tlb_fill) for memory accesses to that
page. And this is needed for pages that are partially overlapped by a PMP
region.
Regards,
Weiwei Li
Thanks,
Daniel
accel/tcg/cputlb.c | 7 -------
1 file changed, 7 deletions(-)
diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index e984a98dc4..d0bf996405 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -1563,13 +1563,6 @@ static int probe_access_internal(CPUArchState *env, target_ulong addr,
             /* TLB resize via tlb_fill may have moved the entry. */
             index = tlb_index(env, mmu_idx, addr);
             entry = tlb_entry(env, mmu_idx, addr);
-
-            /*
-             * With PAGE_WRITE_INV, we set TLB_INVALID_MASK immediately,
-             * to force the next access through tlb_fill. We've just
-             * called tlb_fill, so we know that this entry *is* valid.
-             */
-            flags &= ~TLB_INVALID_MASK;
         }
         tlb_addr = tlb_read_ofs(entry, elt_ofs);
     }