https://gcc.gnu.org/bugzilla/show_bug.cgi?id=77484
Bug ID: 77484
Summary: Static branch predictor causes ~6-8% regression of SPEC2000 GAP
Product: gcc
Version: 6.0
Status: UNCONFIRMED
Severity: normal
Priority: P3
Component: middle-end
Assignee: unassigned at gcc dot gnu.org
Reporter: wdijkstr at arm dot com
Target Milestone: ---

Changes in the static branch predictor (around August last year) caused regressions on SPEC2000. The PRED_CALL predictor causes GAP to regress by 6-8% on AArch64, and this has not been fixed on trunk. With this predictor turned off, SPEC2000 INT is 0.6% faster and FP is 0.4% faster.

The reason is that the predictor causes calls guarded by if-statements to be placed at the end of the function. For GAP this is bad: it often executes several such statements in a row, resulting in two extra taken branches and additional I-cache misses per if-statement. So on average this prediction appears to make things worse.

Overall, static prediction and -freorder-blocks provide a benefit. However, does the gain from each static predictor being correct outweigh the cost when it is incorrect? Has this been measured for each of the static predictors across multiple targets?