> No, about the disagreement of the precision of ptrdiff_t and that
> of sizetype.  See c-common.c:pointer_int_sum:
>   /* Convert the integer argument to a type the same size as sizetype
>      so the multiply won't overflow spuriously.  */
>   if (TYPE_PRECISION (TREE_TYPE (intop)) != TYPE_PRECISION (sizetype)
>       || TYPE_UNSIGNED (TREE_TYPE (intop)) != TYPE_UNSIGNED (sizetype))
>     intop = convert (c_common_type_for_size (TYPE_PRECISION (sizetype),
>                                              TYPE_UNSIGNED (sizetype)),
>                      intop);
> and consider what happens for example on m32c - we truncate the
> 24-bit ptrdiff_t to the 16-bit sizetype, losing bits.  And we are
> performing the index * size multiplication in a maybe artificially
> large type, losing information about overflow behavior and possibly
> generating slow code for no good reason.

That seems to be again the POINTER_PLUS_EXPR issue, not sizetype per se.

> Well, because if sizetype is SImode (with -m32) and bitsizetype DImode
> (we round up its precision to 64 bits), then a negative byte-offset
> in the unsigned sizetype is 0xffff for example.  When we then perform
> arithmetic on bits, say (bitsizetype)sz * BITS_PER_UNIT + 9, we get
> 0xffff * 8 + 9 == 0x7fff8 + 9 == 0x80001 (oops).  bitsizetype is of too
> large a precision to be a modulo-arithmetic bit-equivalent to sizetype
> (at least for our constant-folding code) for "negative" offsets.

OK.  The definitive fix would be to use ssizetype for offsets and restrict
sizetype to size calculations.  Changing the precision would be a kludge.

Eric Botcazou
