On architectures where plain char defaults to unsigned, 'f' is
zero-extended to int in the expression 'd = ~(f & ~2880764155)', and
'd' then becomes -1, which causes the test case to fail.
The test is therefore fine on architectures where plain char defaults
to signed, such as x86, but fails on architectures where it defaults
to unsigned, such as arm and csky.  Change char to signed char to
avoid this problem.
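
A minimal sketch of the promotion difference (illustrative only, not
the testcase itself; the byte value 0xA9 is an arbitrary choice):

    #include <stdio.h>

    int main (void)
    {
      char plain = (char) 0xA9;                /* -87 if char is signed, 169 if unsigned */
      signed char forced = (signed char) 0xA9; /* always -87 */

      /* In arithmetic both are promoted to int: plain char is
         sign-extended where char is signed (x86) but zero-extended
         where char is unsigned (arm, csky), so the first value printed
         below differs between such targets.  */
      printf ("plain char promotes to %d\n", (int) plain);
      printf ("signed char promotes to %d\n", (int) forced);
      return 0;
    }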

gcc/testsuite:
        * gcc.dg/torture/pr108574-3.c (b, f): Change type from char to
        signed char.
---
 gcc/testsuite/gcc.dg/torture/pr108574-3.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/gcc/testsuite/gcc.dg/torture/pr108574-3.c b/gcc/testsuite/gcc.dg/torture/pr108574-3.c
index 3c9146e31ac..b4d5dae9f80 100644
--- a/gcc/testsuite/gcc.dg/torture/pr108574-3.c
+++ b/gcc/testsuite/gcc.dg/torture/pr108574-3.c
@@ -1,7 +1,7 @@
 /* { dg-do run } */
 
 int a = 3557301289, d;
-char b, f;
+signed char b, f;
 unsigned short c = 241;
 short e, g;
 static void h() {
-- 
2.32.1 (Apple Git-133)
