https://gcc.gnu.org/bugzilla/show_bug.cgi?id=123156
Bug ID: 123156
Summary: [12-16 Regression] wrong code at all optimization levels
Product: gcc
Version: 16.0
Status: UNCONFIRMED
Severity: normal
Priority: P3
Component: c
Assignee: unassigned at gcc dot gnu.org
Reporter: xxs_chy at outlook dot com
Target Milestone: ---
Reproducer: https://godbolt.org/z/YMWWzqvsv
Testcase:
#include <stdint.h>
#include <stdio.h>
#define BS_VEC(type, num) type __attribute__((vector_size(num * sizeof(type))))
#define BITCAST(T, F, arg) \
  ((union {                \
    F src;                 \
    T dst;                 \
  })arg)                   \
      .dst
uint64_t func_1()
{
BS_VEC(uint64_t, 32) BS_VAR_0;
BS_VAR_0 = BITCAST(
BS_VEC(uint64_t, 32), BS_VEC(uint8_t, 256),
__builtin_shufflevector(
        (BS_VEC(uint8_t, 2)){ 2 }, (BS_VEC(uint8_t, 2)){},
        1, 0, 2, 2, 2, 3,
        1, 2, 3, 1, 3, 3, 2, 1, 3, 0, 2, 2, 1, 2, 1, 3, 1, 1, 0, 0, 2, 0, 3,
        2, 2, 0, 1, 3, 1, 2, 0, 2, 0, 3, 0, 0, 2, 0, 2, 1, 3, 2, 3, 2, 1, 1,
        2, 3, 3, 3, 3, 0, 1, 2, 3, 2, 0, 2, 2, 2, 0, 3, 3, 3, 1, 3, 0, 0, 2,
        3, 1, 1, 2, 2, 1, 2, 0, 0, 3, 2, 2, 3, 2, 2, 3, 2, 0, 2, 2, 0, 2, 1,
        3, 1, 0, 2, 1, 3, 2, 1, 0, 2, 3, 0, 1, 3, 2, 1, 3, 1, 1, 3, 2, 2, 0,
        3, 2, 2, 0, 3, 0, 3, 2, 3, 1, 3, 2, 3, 3, 2, 2, 0, 0, 0, 2, 1, 3, 1,
        2, 2, 3, 0, 1, 3, 1, 1, 2, 0, 1, 2, 1, 2, 0, 2, 0, 2, 3, 3, 3, 1, 2,
        0, 3, 1, 2, 0, 1, 0, 3, 0, 0, 3, 2, 2, 0, 3, 1, 2, 0, 1, 1, 3, 0, 1,
        3, 1, 3, 2, 1, 3, 1, 2, 1, 1, 0, 1, 3, 3, 2, 3, 2, 2, 0, 2, 2, 2, 1,
        1, 0, 3, 1, 3, 0, 3, 0, 0, 1, 3, 3, 1, 2, 1, 1, 3, 1, 0, 2, 1, 3, 2,
        1, 3, 2, 2, 2, 3, 2, 0, 1, 3, 3, 3, 0, 0, 0, 1, 3, 2, 2, 1));
return BS_VAR_0[0];
}
int main()
{
uint64_t BS_CHECKSUM = func_1();
printf("BackSmith Checksum = 0x%016llx\n", BS_CHECKSUM);
}
Clang and GCC produce different results for this testcase, even with the
-fno-strict-aliasing flag:
Clang:
> BackSmith Checksum = 0x0000000000000200
GCC:
> BackSmith Checksum = 0x0200000202020200
The results become consistent when I reduce the number of vector elements:
https://godbolt.org/z/4dz7rca7K