On 2020/6/30 22:56, Richard Henderson wrote:
On 6/29/20 6:07 AM, LIU Zhiwei wrote:
@@ -3189,7 +3189,7 @@ static void do_nonatomic_op_i32(TCGv_i32 ret, TCGv addr, TCGv_i32 val,
     memop = tcg_canonicalize_memop(memop, 0, 0);
-    tcg_gen_qemu_ld_i32(t1, addr, idx, memop & ~MO_SIGN);
+    tcg_gen_qemu_ld_i32(t1, addr, idx, memop);
     gen(t2, t1, val);
     tcg_gen_qemu_st_i32(t2, addr, idx, memop);
@@ -3232,7 +3232,7 @@ static void do_nonatomic_op_i64(TCGv_i64 ret, TCGv addr, TCGv_i64 val,
     memop = tcg_canonicalize_memop(memop, 1, 0);
-    tcg_gen_qemu_ld_i64(t1, addr, idx, memop & ~MO_SIGN);
+    tcg_gen_qemu_ld_i64(t1, addr, idx, memop);
     gen(t2, t1, val);
     tcg_gen_qemu_st_i64(t2, addr, idx, memop);
This is insufficient for smin/smax -- we also need to extend the "val" input.

Do you mean we should call tcg_gen_ext_i64(val, val, memop) before gen(t2, t1, val) for do_nonatomic_op_i64?

I think that will be fine, as long as it doesn't have any other side effects.

Zhiwei


r~

