[Bug target/113341] Using GCC as the bootstrap compiler breaks LLVM on 32-bit PowerPC
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=113341 --- Comment #8 from Jessica Clarke --- The clang/ subdirectory should be building itself with -fno-strict-aliasing on GCC already
[Bug target/111908] Port CheriBSD-specific compiler warnings to GCC
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=111908 --- Comment #1 from Jessica Clarke --- NB: Arm have a vendor branch for Morello (intended to be generic across CHERI with a Morello-specific backend, rather than overly tied to the Morello prototype) at refs/vendors/ARM/heads/morello. I have no experience of it, and it's less mature than our decade-old Clang/LLVM, but it purports to both add CHERI C diagnostics and introduce Morello code generation. The two are tied together there, as they are in our Clang/LLVM, but there's no reason one couldn't port some of the diagnostics to non-CHERI targets and warn people that they're doing things outside ISO (or even GNU) C, even if those things happen to work on those architectures. Disambiguating intptr_t from ((un)signed) long (long) (or even int on ILP32) may not be feasible, though, given that mixing them *is* very much OK on ILP32/LP64/LLP64.
[Bug c/110910] New: weakref should allow incomplete array type
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=110910 Bug ID: 110910 Summary: weakref should allow incomplete array type Product: gcc Version: 13.2.0 Status: UNCONFIRMED Severity: normal Priority: P3 Component: c Assignee: unassigned at gcc dot gnu.org Reporter: jrtc27 at jrtc27 dot com Target Milestone: ---

Consider:

  extern char foo[];
  static char weak_foo[] __attribute__((weakref("foo")));

Normally, being a tentative definition with internal linkage, weak_foo would not be allowed to have an incomplete type, and this is what GCC enforces today ("warning: array 'weak_foo' assumed to have one element" or "error: array size missing in 'weak_foo'" depending on -pedantic). However, weakref is special: it makes weak_foo not exist at all, being instead an alias for foo, which can legitimately have an incomplete type. Therefore I believe this restriction, namely C99 6.9.2p3, should be relaxed when weakref is used. This does mean the diagnostic for something like:

  static char weak_foo[];
  ...
  static char weak_foo[] __attribute__((weakref("foo")));

needs to be delayed until the whole file has been parsed, but given GCC already supports:

  static char foo[];
  ...
  static char foo[42];

as an extension, that doesn't seem to be a problem.
[Bug c++/60512] would be useful if gcc implemented __has_feature similary to clang
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=60512 Jessica Clarke changed: What|Removed |Added CC||jrtc27 at jrtc27 dot com --- Comment #11 from Jessica Clarke --- Macros and __has_feature are equally expressive, sure, but why should Clang change what it’s been doing from the start because GCC doesn’t want to be compatible with how it’s always done it? It seems a bit rude to expect Clang to change when it was the one to define how these worked first and GCC took its implementation. It’s not like it’s a complicated thing for GCC to implement, and it should really have done so when it added sanitizer support in order to be fully compatible rather than do things differently and force users to support both ways in their code (which, to this day, isn’t reliably done, so there is code out there that only works with Clang’s sanitizers).
[Bug middle-end/107498] Wrong optimization leads to unaligned access when compiling OpenLDAP
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=107498 --- Comment #2 from Jessica Clarke ---

  #define mp_lower mp_pb.pb.pb_lower
  #define mp_upper mp_pb.pb.pb_upper
  #define mp_pages mp_pb.pb_pages
  union {
      struct {
          indx_t pb_lower; /**< lower bound of free space */
          indx_t pb_upper; /**< upper bound of free space */
      } pb;
      uint32_t pb_pages;   /**< number of overflow pages */
  } mp_pb;

That's the code. GCC is perfectly OK to optimise that to do a 32-bit load. If it has an alignment fault, that's OpenLDAP's problem: the uint32_t there means the union must be 32-bit aligned if you don't want UB.
[Bug target/105733] New: riscv: Poor codegen for large stack frames
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=105733 Bug ID: 105733 Summary: riscv: Poor codegen for large stack frames Product: gcc Version: 12.1.0 Status: UNCONFIRMED Severity: normal Priority: P3 Component: target Assignee: unassigned at gcc dot gnu.org Reporter: jrtc27 at jrtc27 dot com Target Milestone: --- Target: riscv*-*-*

For the following test:

  #define BUF_SIZE 2064

  void foo(unsigned long i) {
      volatile char buf[BUF_SIZE];
      buf[i] = 0;
  }

GCC currently generates:

  foo:
          li      t0,-4096
          addi    t0,t0,2016
          li      a4,4096
          add     sp,sp,t0
          li      a5,-4096
          addi    a4,a4,-2032
          add     a4,a4,a5
          addi    a5,sp,16
          add     a5,a4,a5
          add     a0,a5,a0
          li      t0,4096
          sd      a5,8(sp)
          sb      zero,2032(a0)
          addi    t0,t0,-2016
          add     sp,sp,t0
          jr      ra

whereas Clang generates the much shorter:

  foo:
          lui     a1, 1
          addiw   a1, a1, -2016
          sub     sp, sp, a1
          addi    a1, sp, 16
          add     a0, a0, a1
          sb      zero, 0(a0)
          lui     a0, 1
          addiw   a0, a0, -2016
          add     sp, sp, a0
          ret

The:

  li      a4,4096
  ...
  li      a5,-4096
  addi    a4,a4,-2032
  add     a4,a4,a5

sequence in particular is rather surprising to see rather than just li a4,-2032; constant-folding that alone would halve the instruction count difference between GCC and Clang. See: https://godbolt.org/z/8EGc85dsf
[Bug c/101645] -Wsign-conversion misses negation of unsigned int
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=101645 Jessica Clarke changed: What|Removed |Added CC||jrtc27 at jrtc27 dot com --- Comment #1 from Jessica Clarke --- Correction: "x is negated but as an *unsigned* int", which is key here
[Bug target/97534] [10/11 Regression] ICE in decompose, at rtl.h:2280 (arm-linux-gnueabihf)
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=97534 James Clarke changed: What|Removed |Added CC||jrtc27 at jrtc27 dot com, ||rearnsha at arm dot com --- Comment #4 from James Clarke --- [Adding Richard to CC] Richard, I see you committed a big series of changes in Oct 2019 to gcc/config/arm that affected subtraction; is it possible one of those broke this test case?