> On Sep 13, 2018, at 8:54 PM, Georg Rudoy <0xd34df...@gmail.com> wrote:
>
>> On 14.09.2018 at 0:44 user Richard Yao <r...@gentoo.org> wrote:
>> This is a really odd design decision by the GCC developers. With other
>> compilers, the separation between the front end and the back end is
>> strong enough that you will never have this sort of thing. It does not
>> seem necessary to me either. :/

I didn't want to get into this, but GCC's back end is actually split into a
middle end, which does architecture-independent optimizations, and a back
end, which generates assembly code (and likely does architecture-specific
optimization):
https://en.wikibooks.org/wiki/GNU_C_Compiler_Internals/GNU_C_Compiler_Architecture

You can get it to dump what appears to be the input to the middle end with
-fdump-translation-unit. You can get GCC to dump each stage of the middle
end with -fdump-tree-all. Presumably, any warnings that depend on
optimization won't be generated, although someone would need to study the
code to verify that.

I suspect quite a few people talking about what -Werror -Wall does have
never actually touched the internals of a compiler, and perhaps that should
change. I won't claim to be an expert, but I have minor experience with
front-end development, writing code to do error generation, etcetera.
Honestly, when I first looked at GCC's sources, I decided that I would be
happier never looking at them ever again, but I am starting to reconsider
that decision.

> You might be able to perform certain additional data/control flow
> analysis after things like inlining, dead code removal or
> devirtualization.

Do you have any examples? I am having some trouble making a test case that
shows a difference in behavior at different optimization levels either way.
Here is an example:

__attribute__((always_inline)) static inline int test(int x)
{
	return x;
}

int main()
{
	int x;
	return (test(x) * 0);
}

GCC 7.3.0 will emit a warning regardless of optimization level, provided
that -Wall is passed. However, it will not emit a warning for this at any
optimization level:

int main()
{
	int x;
	return (x * 0);
}

> Moving that logic to the frontend would require essentially duplicating
> what's the optimizer's gonna do anyway, which might have negative effects
> on compilation times (both with and without optimizations) and compiler
> code maintenance.

If it is easier to use the optimizer's infrastructure to figure this out,
then the code could be written to call it to do analysis as a
pseudo-optimization pass, which GCC actually appears to do when it runs its
"Function test" "optimization" pass.
There is no code maintenance burden.

Fabian was right about needing -Wall for -Werror to catch many things,
because most warnings are off by default. However, I am extremely skeptical
that the optimization level has a significant effect. -Wall gives us
whatever we need, and so far, I cannot find a code example showing
differences in warning generation at different optimization levels.

I'll accept the idea that the warning quality might change (although I
don't fully understand the reasoning for it), based on the GCC
documentation. However, I find the idea that -O3 will make a warning appear
that otherwise would not have appeared very difficult to accept. The
documentation claims that optimization passes can affect the emission of
warnings for uninitialized variables, but the only way I could imagine that
happening would be if an optimization pass turned x*0 into 0 before the
code generating the warning is run. That is the complete opposite of what
is being claimed here. In actual tests, I am unable to get GCC to emit a
warning for that at any optimization level. I am also unable to get it to
behave differently for uninitialized variables at different optimization
levels, no matter what I try.
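To make the -Wall point concrete, here is a minimal sketch (assuming gcc is installed; unused.c is just an example file name). It uses an unused local variable, since -Wunused-variable is one of the warnings that -Wall enables but that is off by default:

```shell
# A local variable that is assigned but never used.
cat > unused.c <<'EOF'
int main(void)
{
	int unused = 42;
	return 0;
}
EOF

# Without -Wall, -Wunused-variable is off by default, so this compiles
# cleanly even with -Werror.
gcc -Werror -c unused.c

# With -Wall, the warning is enabled, and -Werror promotes it to an error,
# so this command fails.
gcc -Wall -Werror -c unused.c || echo "rejected under -Wall -Werror"
```

This is exactly why -Werror alone catches so little: it only hardens whatever warnings are already enabled.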