https://gcc.gnu.org/bugzilla/show_bug.cgi?id=108949
Bug ID: 108949
Summary: Optimize shift counts
Product: gcc
Version: 13.0
Status: UNCONFIRMED
Severity: normal
Priority: P3
Component: tree-optimization
Assignee: unassigned at gcc dot gnu.org
Reporter: jakub at gcc dot gnu.org
Target Milestone: ---
From https://gcc.gnu.org/bugzilla/show_bug.cgi?id=108941#c13 :
Because various backends support shift count truncation or have patterns that
recognize it in certain cases, I wonder if the middle-end couldn't canonicalize
a shift count of (N + x), where N is a multiple of the bitsize B of the shift's
first operand, to x & (B - 1); the latter is often optimized away while the
former is not.
For the similar N - x case it is more questionable, because N - x is a single
GIMPLE statement while -x & (B - 1) is two; perhaps it could be done at
expansion time though.
In generic code at least for SHIFT_COUNT_TRUNCATED targets; otherwise maybe
only if one can easily detect that the target has a negation optab and that
its subtraction instruction does not accept an immediate for the minuend.
Or should all this be handled in each of the backends?
int
foo (int x, int y)
{
  return x << (y & 31);
}

int
bar (int x, int y)
{
  return x << (32 + y);
}

int
baz (int x, int y)
{
  return x << (-y & 31);
}

int
qux (int x, int y)
{
  return x << (32 - y);
}