On 4/11/23 13:37, gaosong wrote:
> static bool trans_vseteqz_v(DisasContext *ctx, arg_cv *a)
> {
>     TCGv_i64 t1, t2, al, ah, zero;
>
>     al = tcg_temp_new_i64();
>     ah = tcg_temp_new_i64();
>     t1 = tcg_temp_new_i64();
>     t2 = tcg_temp_new_i64();
>     zero = tcg_constant_i64(0);
>
>     get_vreg64(ah, a->vj, 1);
>     get_vreg64(al, a->vj, 0);
>
>     CHECK_SXE;
>
>     tcg_gen_setcond_i64(TCG_COND_EQ, t1, al, zero);
>     tcg_gen_setcond_i64(TCG_COND_EQ, t2, ah, zero);
>     tcg_gen_and_i64(t1, t1, t2);

    tcg_gen_or_i64(t1, al, ah);
    tcg_gen_setcondi_i64(TCG_COND_EQ, t1, t1, 0);
But otherwise correct, yes.
>>> +#define SETANYEQZ(NAME, BIT, E) \
>>> +void HELPER(NAME)(CPULoongArchState *env, uint32_t cd, uint32_t vj) \
>>> +{ \
>>> +    int i; \
>>> +    bool ret = false; \
>>> +    VReg *Vj = &(env->fpr[vj].vreg); \
>>> + \
>>> +    for (i = 0; i < LSX_LEN/BIT; i++) { \
>>> +        ret |= (Vj->E(i) == 0); \
>>> +    } \
>>> +    env->cf[cd & 0x7] = ret; \
>>> +}
>>> +SETANYEQZ(vsetanyeqz_b, 8, B)
>>> +SETANYEQZ(vsetanyeqz_h, 16, H)
>>> +SETANYEQZ(vsetanyeqz_w, 32, W)
>>> +SETANYEQZ(vsetanyeqz_d, 64, D)
>> These could be inlined, though slightly harder.
>> C.f. target/arm/sve_helper.c, do_match2 (your n == 0).
> Do you mean an inline like trans_vseteqz_v, or just an inline helper function?
I meant inline tcg code generation, instead of a call to a helper.
But even if we keep this in a helper, see do_match2 for avoiding the loop over
bytes.
r~