When a PMP entry overlaps part of a page, we set tlb_size to 1, which marks the address with TLB_INVALID_MASK so that the page is not cached. However, if we clear TLB_INVALID_MASK when the TLB entry is re-filled, the TLB host address will be cached, and subsequent accesses can use that host address directly, bypassing the PMP-related checks.
Signed-off-by: Weiwei Li <liwei...@iscas.ac.cn>
Signed-off-by: Junqiang Wang <wangjunqi...@iscas.ac.cn>
---
 accel/tcg/cputlb.c | 7 -------
 1 file changed, 7 deletions(-)

diff --git a/accel/tcg/cputlb.c b/accel/tcg/cputlb.c
index e984a98dc4..d0bf996405 100644
--- a/accel/tcg/cputlb.c
+++ b/accel/tcg/cputlb.c
@@ -1563,13 +1563,6 @@ static int probe_access_internal(CPUArchState *env, target_ulong addr,
         /* TLB resize via tlb_fill may have moved the entry. */
         index = tlb_index(env, mmu_idx, addr);
         entry = tlb_entry(env, mmu_idx, addr);
-
-        /*
-         * With PAGE_WRITE_INV, we set TLB_INVALID_MASK immediately,
-         * to force the next access through tlb_fill. We've just
-         * called tlb_fill, so we know that this entry *is* valid.
-         */
-        flags &= ~TLB_INVALID_MASK;
     }
     tlb_addr = tlb_read_ofs(entry, elt_ofs);
-- 
2.25.1