On Tue, Apr 16, 2013 at 10:17 PM, Kenneth Zadeck <zad...@naturalbridge.com> wrote:
> Here is a refreshed version of the rtl changes for wide-int.  the only
> change from the previous versions is that the wide-int binary operations
> have been simplified to use the new wide-int binary templates.
Looking for from_rtx calls (to see where we get the mode/precision from)
I see for example

-      o = rtx_to_double_int (outer);
-      i = rtx_to_double_int (inner);
-
-      m = double_int::mask (width);
-      i &= m;
-      m = m.llshift (offset, HOST_BITS_PER_DOUBLE_INT);
-      i = i.llshift (offset, HOST_BITS_PER_DOUBLE_INT);
-      o = o.and_not (m) | i;
-
+
+      o = (wide_int::from_rtx (outer, GET_MODE (SET_DEST (temp)))
+           .insert (wide_int::from_rtx (inner, GET_MODE (dest)),
+                    offset, width));

where I'd rather have the original code preserved as much as possible
and not introduce a new primitive wide_int::insert for this.  The
conversion and review process will be much more error-prone if we do
multiple things at once (and avoiding the new primitive would keep the
initial wide_int interface leaner).

Btw, the wide_int::insert implementation doesn't assert anything about
the inputs' precision.  Instead it reads

+  if (start + width >= precision)
+    width = precision - start;
+
+  mask = shifted_mask (start, width, false, precision);
+  tmp = op0.lshift (start, 0, precision, NONE);
+  result = tmp & mask;
+
+  tmp = and_not (mask);
+  result = result | tmp;

which eventually ends up performing everything in target precision.
So we don't really care about the mode or precision of inner.

Then I see

diff --git a/gcc/dwarf2out.h b/gcc/dwarf2out.h
index ad03a34..531a7c1 100644
@@ -180,6 +182,7 @@ typedef struct GTY(()) dw_val_struct {
       HOST_WIDE_INT GTY ((default)) val_int;
       unsigned HOST_WIDE_INT GTY ((tag ("dw_val_class_unsigned_const"))) val_unsigned;
       double_int GTY ((tag ("dw_val_class_const_double"))) val_double;
+      wide_int GTY ((tag ("dw_val_class_wide_int"))) val_wide;
       dw_vec_const GTY ((tag ("dw_val_class_vec"))) val_vec;
       struct dw_val_die_union
         {

ick.  That makes dw_val_struct really large ... (and thus
dw_attr_struct).  You need to make this a pointer to a wide_int at
least.

-/* Return a CONST_INT or CONST_DOUBLE corresponding to target reading
+/* Return a constant integer corresponding to target reading
    GET_MODE_BITSIZE (MODE) bits from string constant STR.  */

 static rtx
 c_readstr (const char *str, enum machine_mode mode)
 {
-  HOST_WIDE_INT c[2];
+  wide_int c;
...
-  return immed_double_const (c[0], c[1], mode);
+
+  c = wide_int::from_array (tmp, len, mode);
+  return immed_wide_int_const (c, mode);
 }

err - what's this good for?  It doesn't look necessary as part of the
initial wide-int conversion at least.  (please audit your patches for
such cases)

@@ -4994,12 +4999,12 @@ expand_builtin_signbit (tree exp, rtx target)

   if (bitpos < GET_MODE_BITSIZE (rmode))
     {
-      double_int mask = double_int_zero.set_bit (bitpos);
+      wide_int mask = wide_int::set_bit_in_zero (bitpos, rmode);

       if (GET_MODE_SIZE (imode) > GET_MODE_SIZE (rmode))
        temp = gen_lowpart (rmode, temp);
       temp = expand_binop (rmode, and_optab, temp,
-                          immed_double_int_const (mask, rmode),
+                          immed_wide_int_const (mask, rmode),
                           NULL_RTX, 1, OPTAB_LIB_WIDEN);
     }
   else

Likewise.  I suppose you remove immed_double_int_const, but I see no
reason to do that; it just makes your patch larger than necessary.
[what was the reason again to have TARGET_SUPPORTS_WIDE_INT at all?
It's supposed to be a no-op conversion, right?]
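Going back to the first hunk above: it could stay structurally close to
the removed double_int code by using only the primitives the
wide_int::insert body itself is built from.  This is just a sketch - the
shifted_mask / lshift signatures and the NONE sign flag are assumed to be
as quoted from the patch, and PREC stands for the outer mode's precision:

  /* Sketch only: mirror the old mask/shift sequence instead of adding
     a wide_int::insert primitive.  The signatures are taken from the
     insert body quoted above and are not verified against the patch.  */
  wide_int o = wide_int::from_rtx (outer, GET_MODE (SET_DEST (temp)));
  wide_int i = wide_int::from_rtx (inner, GET_MODE (dest));
  wide_int m = wide_int::shifted_mask (offset, width, false, prec);
  i = i.lshift (offset, 0, prec, wide_int::NONE) & m;
  o = o.and_not (m) | i;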
@@ -95,38 +95,9 @@ plus_constant (enum machine_mode mode, rtx x, HOST_WIDE_INT c)

   switch (code)
     {
-    case CONST_INT:
-      if (GET_MODE_BITSIZE (mode) > HOST_BITS_PER_WIDE_INT)
-       {
-         double_int di_x = double_int::from_shwi (INTVAL (x));
-         double_int di_c = double_int::from_shwi (c);
-
-         bool overflow;
-         double_int v = di_x.add_with_sign (di_c, false, &overflow);
-         if (overflow)
-           gcc_unreachable ();
-
-         return immed_double_int_const (v, VOIDmode);
-       }
-
-      return GEN_INT (INTVAL (x) + c);
-
-    case CONST_DOUBLE:
-      {
-       double_int di_x = double_int::from_pair (CONST_DOUBLE_HIGH (x),
-                                                CONST_DOUBLE_LOW (x));
-       double_int di_c = double_int::from_shwi (c);
-
-       bool overflow;
-       double_int v = di_x.add_with_sign (di_c, false, &overflow);
-       if (overflow)
-         /* Sorry, we have no way to represent overflows this wide.
-            To fix, add constant support wider than CONST_DOUBLE.  */
-         gcc_assert (GET_MODE_BITSIZE (mode) <= HOST_BITS_PER_DOUBLE_INT);
-
-       return immed_double_int_const (v, VOIDmode);
-      }
-
+    CASE_CONST_SCALAR_INT:
+      return immed_wide_int_const (wide_int::from_rtx (x, mode)
+                                  + wide_int::from_shwi (c, mode), mode);

You said you didn't want to convert CONST_INT to wide-int.  But the
above is certainly a lot less efficient than before - and given your
change to support operator+ on RTX it is even less efficient than it
needs to be.  The above also shows three 'mode' arguments while a
single one to immed_wide_int_const would be enough (given it would
truncate the arbitrary-precision result of the addition to the mode's
precision).  That is, I see no reason to remove the CONST_INT case or
the CONST_DOUBLE case.  [why is the above not in any way guarded with
TARGET_SUPPORTS_WIDE_INT?]

What happens with overflows in the wide-int case?  The double-int case
asserted that there is no overflow across 2 * hwi precision; the
wide-int case does not.  Still the wide-int case now truncates to
'mode' precision while the CONST_DOUBLE case did not.  That's a change
in behavior, no?  Effectively the code for CONST_INT and CONST_DOUBLE
did "arbitrary" precision arithmetic (up to the precision they can
encode), which wide-int changes.  Can we in such cases please do a
preparatory patch and change the CONST_INT/CONST_DOUBLE paths to do an
explicit [sz]ext to mode precision first?

What does wide-int do with VOIDmode mode inputs?  It seems to ICE on
them for from_rtx and to use garbage (0) for from_shwi.  Ugh.  Btw,
plus_constant asserts that mode is either VOIDmode (semantically, I
suppose, "do arbitrary precision") or the same mode as the mode of x
(semantically "do mode precision").  Neither the current nor your
implementation seems to do something consistent here :/

So please, for those cases (I suppose there are many more, possibly one
of the reasons why you think that requiring a mode for all
CONST_DOUBLEs is impossible), can we massage the current code to
1) document what is desired, and 2) follow that specification with
regard to computation mode / precision and result mode / precision?

Thanks,
Richard.

> kenny
>
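P.S.  To make the preparatory-patch idea concrete, the CONST_DOUBLE
path could make the precision semantics explicit along the following
lines.  This is only a sketch on top of the existing double_int code,
and whether the result should keep being created with VOIDmode is
exactly the kind of thing 1) above should document:

     case CONST_DOUBLE:
       {
        double_int di_x = double_int::from_pair (CONST_DOUBLE_HIGH (x),
                                                 CONST_DOUBLE_LOW (x));
        double_int di_c = double_int::from_shwi (c);

        bool overflow;
        /* Overflow is deliberately ignored in this sketch; the explicit
           sign-extension below defines the result instead of the old
           assert.  */
        double_int v = di_x.add_with_sign (di_c, false, &overflow);

        /* Sketch: when a mode is given, sign-extend the result to
           MODE's precision; VOIDmode keeps the old "arbitrary
           precision" behavior unchanged.  */
        if (mode != VOIDmode
            && GET_MODE_BITSIZE (mode) <= HOST_BITS_PER_DOUBLE_INT)
          v = v.sext (GET_MODE_BITSIZE (mode));

        return immed_double_int_const (v, VOIDmode);
       }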