On 11/15/18 11:36 PM, Alistair Francis wrote:
> +    tcg_out_opc_reg(s, OPC_ADD, base, TCG_GUEST_BASE_REG, addr_regl);

Should avoid this when guest_base == 0, which happens fairly regularly for a
64-bit guest.
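Something like this, perhaps (a sketch using the names from the quoted
hunk; I'm assuming BASE is a scratch register that's free to reassign):

    if (guest_base == 0) {
        /* No guest displacement; use the guest address directly.  */
        base = addr_regl;
    } else {
        tcg_out_opc_reg(s, OPC_ADD, base, TCG_GUEST_BASE_REG, addr_regl);
    }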

> +        /* Prefer to load from offset 0 first, but allow for overlap.  */
> +        if (TCG_TARGET_REG_BITS == 64) {
> +            tcg_out_opc_imm(s, OPC_LD, lo, base, 0);
> +        } else {
> +            tcg_out_opc_imm(s, OPC_LW, lo, base, 0);
> +            tcg_out_opc_imm(s, OPC_LW, hi, base, 4);
> +        }

The comment sounds like two lines of code are missing: nothing here
actually allows for overlap between LO and BASE.
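Concretely, something like this (a sketch; register names as in the
quoted hunk):

    /* Prefer to load from offset 0 first, but allow for overlap.  */
    if (TCG_TARGET_REG_BITS == 64) {
        tcg_out_opc_imm(s, OPC_LD, lo, base, 0);
    } else if (lo != base) {
        tcg_out_opc_imm(s, OPC_LW, lo, base, 0);
        tcg_out_opc_imm(s, OPC_LW, hi, base, 4);
    } else {
        /* LO aliases BASE: load the high word first so that BASE
           is not clobbered before the second load.  */
        tcg_out_opc_imm(s, OPC_LW, hi, base, 4);
        tcg_out_opc_imm(s, OPC_LW, lo, base, 0);
    }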

> +    const TCGMemOp bswap = opc & MO_BSWAP;
> +
> +    /* TODO: Handle byte swapping */

Should assert rather than emit bad code.
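E.g. something like (a sketch; the point is to fail loudly if a
byte-swapped access reaches this path before it's implemented):

    const TCGMemOp bswap = opc & MO_BSWAP;

    /* Byte-swapped loads are not handled yet; assert rather than
       silently emit a load with the wrong endianness.  */
    g_assert(!bswap);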

I do still plan to change tcg to allow backends to *not* handle byte swapping
if they don't want to.  This will make the i386 and arm32 backends less icky.


r~
