https://gcc.gnu.org/bugzilla/show_bug.cgi?id=113669

--- Comment #1 from Jakub Jelinek <jakub at gcc dot gnu.org> ---
This is because the FE already optimizes it: when it sees that
((int)(g_B * g_A[1])) & (g_A[1] & g_A[0]) | g_A[0]
is only being added to an unsigned char element, the upper bits of the result
aren't needed, so the multiplication, & and | are all performed in unsigned
char rather than in wider types.
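
A minimal sketch of what the narrowing amounts to (the testcase's declarations
aren't quoted in this comment, so the ones below are assumed for
illustration; the FE does the narrowing internally, this just shows why the
two forms store the same value):

  unsigned char g_A[2];
  int g_B;
  unsigned char dst;

  void orig (void)
  {
    /* As written: multiplication, & and | are evaluated in int.  */
    dst += ((int)(g_B * g_A[1])) & (g_A[1] & g_A[0]) | g_A[0];
  }

  void narrowed (void)
  {
    /* Only the low 8 bits survive the += into an unsigned char, and the
       low byte of a product, AND or OR depends only on the low bytes of
       the operands, so the whole expression can be evaluated in
       unsigned char.  */
    dst += (unsigned char) (((unsigned char) (g_B * g_A[1])
			     & (unsigned char) (g_A[1] & g_A[0]))
			    | g_A[0]);
  }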
