Looking through the old problems: pointer addition is not done
right. What happens is simple - p + i turns into
BINOP[+]
    <tree for p>
    BINOP[*]
        <tree for i>
        VAL[sizeof(*p)]
and we end up multiplying i and sizeof(*p) in the wrong type. On a 64bit host
we _must_ expand i to 64 bits before we multiply; otherwise we get
wraparounds.
The question is, what do we do about that? The obvious way would be
to do cast_to(), i.e. turn the above into
BINOP[+]
    <tree for p>
    BINOP[*]
        IMPLIED_CAST[ptrdiff_t]
            <tree for i>
        VAL[sizeof(*p)]
But that means a fsckload of extra nodes allocated in pretty much any
program - use of arrays is not rare and indices tend to be int, so we
hit an extra allocated node at each such place.
Another possible solution is to add a primitive for combined conversion
and multiplication - basically, convert the first argument to the type of
the second one and multiply. We would actually need it only for ptrdiff_t;
sizeof(*p) is going to fit into that range anyway (it has to - the difference
between (char *)(p+1) and (char *)p must fit into it, or we couldn't do any
arithmetic on that pointer type at all). And as soon as the product overflows
ptrdiff_t we are free to do whatever the hell we like, since that's
undefined behaviour, and "multiply as ptrdiff_t values" gives a reasonable
result even in such cases.
That would cut down on the extra nodes, but we pay for it with an
extra node type for the linearizer, etc., to deal with. Not sure how nasty
that's going to be...
Comments?
-
To unsubscribe from this list: send the line "unsubscribe linux-sparse" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at http://vger.kernel.org/majordomo-info.html