https://gcc.gnu.org/bugzilla/show_bug.cgi?id=68221
Bug ID: 68221
Summary: libgomp reduction-11/12 failures
Product: gcc
Version: 6.0
Status: UNCONFIRMED
Severity: normal
Priority: P3
Component: middle-end
Assignee: unassigned at gcc dot gnu.org
Reporter: jakub at gcc dot gnu.org
Target Milestone: ---

Without the XFAILs I've added, I'm getting:

FAIL: libgomp.c/reduction-11.c execution test
FAIL: libgomp.c/reduction-12.c execution test
FAIL: libgomp.c++/reduction-11.C execution test
FAIL: libgomp.c++/reduction-12.C execution test

on i686-linux (32-bit only; 64-bit x86_64-linux works) when the testcases are compiled with -fopenmp -O2. At -O0 they work.

These testcases exercise array reductions with a non-zero low bound, where for stack-space reasons the compiler creates a private variable covering only the array section, not the elements before or after it in the original array. E.g. for reduction(b[2:3]) the user is allowed to touch b[2], b[3] and b[4] in the region, but not b[0], b[1] or b[5]. The current implementation allocates a

  short int b.23[3];

automatic variable in this case, and needs to replace the original b with &b.23 - 2, so that b + 2 is then in range.

If the low bound is not constant (or if it is zero), all is fine, but when it is a non-zero constant, after IL simplifications we end up with

  MEM[(short int *)&b.23 + 4294967292B][2]

and apparently on i686-linux with -m32 -O2 -fopenmp, at least on reduction-11.c, PRE seems to think that stores to MEM[(short int *)&b.23 + 4294967292B][2] can't alias reads from it.

On the OpenMP side I guess I could try casting &b.23 to uintptr_t and then back, but I'm afraid it would get folded away anyway. Another option is to add some optimization barrier like

  short *p;
  p = &b.23;
  asm ("" : "+g" (p));

but then points-to analysis will pessimize the code. If we could get the above folded into MEM[(short int *)&b.23][0], that would be nice, but can we really rely on that always being done?