staffantj added a comment.

In D86559#2250344 <https://reviews.llvm.org/D86559#2250344>, @Mordante wrote:
> In D86559#2250106 <https://reviews.llvm.org/D86559#2250106>, @staffantj wrote:
>
>> In D86559#2250034 <https://reviews.llvm.org/D86559#2250034>, @aaron.ballman wrote:
>>
>>> In D86559#2243575 <https://reviews.llvm.org/D86559#2243575>, @staffantj wrote:
>>>
>>>> This is not a "teach everyone about it, it's safe" feature. It's there to provide very fine-grained control in those cases where it really matters, and where profile-guided optimizations would produce exactly the wrong result. Using this feature should prompt an automatic "is this needed?" question in a code review. It is needed sometimes, just rarely.
>>>
>>> +1, but I would point out that when PGO is enabled this attribute is ignored, so it's an either/or feature. Either you get tuned optimizations, or you get to guess at them, but the current design doesn't let you mix and match. Do you see that as being a major issue with the design?
>>
>> We did have discussions in SG14 about the possible need for "inverse PGO" optimization passes, but we didn't have the compiler dev expertise to know whether we were on a good track or not. In the 8 years I've spent working on this kind of code, I've never mixed the two, since the interactions are just too unpredictable. What I will frequently do, however, is compare benchmarks between PGO-generated code and our hand-specified optimizations. Sometimes the PGO detects interesting program flow that a mere human brain could not have envisioned, which we can then take advantage of (or de-prioritize) as needed.
>
> Are you currently using `__builtin_expect` and testing whether these expectations match the PGO expectations? This isn't part of the current patches, but it's something I intend to look at later.

Not per se (as I understand the question). We use PGO as inspiration to find interesting code paths, but if we find a substantial overlap between such a code path and our annotations, then that is a decided code smell - we're manually stating something the caches and system will discover on their own.

To our mind, `__builtin_expect`, the `noinline` attribute, and the other tools at our disposal for shaping the generated code should only be used in those extreme cases where the compiler's metrics, together with the runtime cache / preload systems, will get it wrong for our "one-in-ten-million" use cases.

The testing we do is typically by looking at the overall performance of the application over a day's processing (albeit at accelerated timescales). If we want to take advantage of an interesting code path discovered by examining the results of a PGO run, we'll craft the code to produce that outcome (including manual assembly if needed). That way we avoid trying to do both manual and automatic tuning - in our experience that rarely ends in a happy place.
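For readers following along, here is a minimal sketch of the kind of manual rare-path shaping described above. The names are entirely hypothetical (nothing is taken from D86559 or from the poster's codebase); only the two GCC/Clang extensions it uses, `__builtin_expect` and `__attribute__((noinline))`, are real. The cold handler is kept out of line so it does not sit in the hot function's instruction stream, and the branch leading to it is marked as almost never taken.

```
// Hypothetical example of hand-annotating a rare error path.
#include <cstdint>
#include <cstdio>

struct Order {
  std::uint64_t id;
  double price;
  bool malformed;
};

// Keep the rare error path out of line so the fast path stays compact.
__attribute__((noinline)) static void handleMalformed(const Order &o) {
  std::fprintf(stderr, "rejecting malformed order %llu\n",
               static_cast<unsigned long long>(o.id));
}

static double processOrder(const Order &o) {
  // Tell the compiler this branch is almost never taken; C++20's
  // [[unlikely]] attribute (the subject of this review) expresses
  // a similar hint at the language level.
  if (__builtin_expect(o.malformed, 0)) {
    handleMalformed(o);
    return 0.0;
  }
  return o.price * 1.0025; // hot path: trivial fee adjustment
}

int main() {
  Order o{42, 100.0, false};
  std::printf("%f\n", processOrder(o));
}
```

Under PGO the collected profile would normally drive the same layout decision on its own, which is exactly the either/or tension discussed above.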