On 01/17/2014 10:44 AM, Peter Maydell wrote:
> +/* Common SHL/SLI - Shift left with an optional insert */
> +static void handle_shli_with_ins(TCGv_i64 tcg_res, TCGv_i64 tcg_src,
> +                                 bool insert, int shift)
> +{
> +    tcg_gen_shli_i64(tcg_src, tcg_src, shift);
> +    if (insert) {
> +        /* SLI */
> +        uint64_t mask = (1ULL << shift) - 1;
> +        tcg_gen_andi_i64(tcg_res, tcg_res, mask);
> +        tcg_gen_or_i64(tcg_res, tcg_res, tcg_src);

This is

  tcg_gen_deposit_i64(tcg_res, tcg_res, tcg_src, shift, 64 - shift);

We already special-case such remaining-width deposits for hosts that don't
implement deposit, so we should get exactly the same insn sequence on x86.

> +        tcg_gen_mov_i64(tcg_res, tcg_src);

Which means that in the else branch you can elide the move and just shift
directly into the result.


r~

