Re: [Qemu-devel] [PATCH v9 17/20] cpu: TLB_FLAGS_MASK bit to force memory slow path

2019-08-23 Thread Richard Henderson
On 8/23/19 11:36 AM, Tony Nguyen wrote:
> The fast path is taken when the TLB entry has no TLB_FLAGS_MASK bits set.
> 
> TLB_FORCE_SLOW is simply a TLB_FLAGS_MASK bit that forces the slow path;
> it has no other side effects.
> 
> Signed-off-by: Tony Nguyen 
> Reviewed-by: Richard Henderson 
> ---
>  include/exec/cpu-all.h | 10 ++++++++--
>  1 file changed, 8 insertions(+), 2 deletions(-)

FYI, while looking at this again, we do not need a new bit.  You can simply set
TLB_MMIO for this use case, like we do for ROM.
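As a minimal sketch of what that looks like (illustrative only, not the
queued change; tlb_entry_addr() and its parameters are hypothetical
names, and TARGET_PAGE_BITS is given an example value):

#include <stdbool.h>
#include <stdint.h>

#define TARGET_PAGE_BITS 12            /* example value */
#define TLB_MMIO (1 << (TARGET_PAGE_BITS - 3))

/* Compute the address word to store in a TLB entry.  OR-ing in
 * TLB_MMIO (exactly as for ROM) breaks the fast-path address compare,
 * so every access through the entry takes the slow path; no new flag
 * bit is needed. */
uint64_t tlb_entry_addr(uint64_t vaddr_page, bool force_slow)
{
    return force_slow ? (vaddr_page | TLB_MMIO) : vaddr_page;
}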

This seems to be the only change to be made for this patch set; I can fix this
up myself while queuing.


r~



[Qemu-devel] [PATCH v9 17/20] cpu: TLB_FLAGS_MASK bit to force memory slow path

2019-08-23 Thread Tony Nguyen
The fast path is taken when the TLB entry has no TLB_FLAGS_MASK bits set.

TLB_FORCE_SLOW is simply a TLB_FLAGS_MASK bit that forces the slow path;
it has no other side effects.

Signed-off-by: Tony Nguyen 
Reviewed-by: Richard Henderson 
---
 include/exec/cpu-all.h | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/include/exec/cpu-all.h b/include/exec/cpu-all.h
index 536ea58f81..e496f9900f 100644
--- a/include/exec/cpu-all.h
+++ b/include/exec/cpu-all.h
@@ -331,12 +331,18 @@ CPUArchState *cpu_copy(CPUArchState *env);
 #define TLB_MMIO            (1 << (TARGET_PAGE_BITS - 3))
 /* Set if TLB entry must have MMU lookup repeated for every access */
 #define TLB_RECHECK         (1 << (TARGET_PAGE_BITS - 4))
+/* Set if TLB entry must take the slow path.  */
+#define TLB_FORCE_SLOW      (1 << (TARGET_PAGE_BITS - 5))
 
 /* Use this mask to check interception with an alignment mask
  * in a TCG backend.
  */
-#define TLB_FLAGS_MASK  (TLB_INVALID_MASK | TLB_NOTDIRTY | TLB_MMIO \
-                         | TLB_RECHECK)
+#define TLB_FLAGS_MASK \
+    (TLB_INVALID_MASK  \
+     | TLB_NOTDIRTY    \
+     | TLB_MMIO        \
+     | TLB_RECHECK     \
+     | TLB_FORCE_SLOW)
 
 /**
  * tlb_hit_page: return true if page aligned @addr is a hit against the
-- 
2.23.0
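
For reference, a self-contained sketch of the fast-path check the
commit message above describes (mock values; TARGET_PAGE_BITS,
fast_path_hit() and the entry layout here are stand-ins, not QEMU
source):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define TARGET_PAGE_BITS   12
#define TARGET_PAGE_MASK   (~(((uint64_t)1 << TARGET_PAGE_BITS) - 1))

#define TLB_INVALID_MASK   (1 << (TARGET_PAGE_BITS - 1))
#define TLB_NOTDIRTY       (1 << (TARGET_PAGE_BITS - 2))
#define TLB_MMIO           (1 << (TARGET_PAGE_BITS - 3))
#define TLB_RECHECK        (1 << (TARGET_PAGE_BITS - 4))
#define TLB_FORCE_SLOW     (1 << (TARGET_PAGE_BITS - 5))

#define TLB_FLAGS_MASK \
    (TLB_INVALID_MASK  \
     | TLB_NOTDIRTY    \
     | TLB_MMIO        \
     | TLB_RECHECK     \
     | TLB_FORCE_SLOW)

/* A TLB entry stores the page-aligned virtual address with flag bits
 * folded into the low, sub-page bits.  Any flag bit set in tlb_addr
 * makes this compare fail, so even a valid entry falls through to the
 * slow path. */
static bool fast_path_hit(uint64_t tlb_addr, uint64_t vaddr)
{
    return tlb_addr == (vaddr & TARGET_PAGE_MASK);
}

int main(void)
{
    uint64_t vaddr = 0x12345678;
    uint64_t page  = vaddr & TARGET_PAGE_MASK;

    printf("plain entry:    %s\n",
           fast_path_hit(page, vaddr) ? "fast path" : "slow path");
    printf("TLB_FORCE_SLOW: %s\n",
           fast_path_hit(page | TLB_FORCE_SLOW, vaddr) ? "fast path"
                                                       : "slow path");
    return 0;
}

This prints "fast path" for the plain entry and "slow path" once
TLB_FORCE_SLOW is folded into the stored address, which is the only
effect the new bit has.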