https://gcc.gnu.org/bugzilla/show_bug.cgi?id=79067
            Bug ID: 79067
           Summary: gcc.dg/tree-prof/cold_partition_label.c runs a million
                    times longer than it used to and times out
           Product: gcc
           Version: 7.0
            Status: UNCONFIRMED
          Severity: normal
          Priority: P3
         Component: testsuite
          Assignee: unassigned at gcc dot gnu.org
          Reporter: sandra at gcc dot gnu.org
  Target Milestone: ---

The change committed as part of r238325 made
gcc.dg/tree-prof/cold_partition_label.c run 1000000 times as long as it used
to, causing a timeout on a slow embedded target (nios2-linux-gnu):

Index: gcc/testsuite/gcc.dg/tree-prof/cold_partition_label.c
===================================================================
--- gcc/testsuite/gcc.dg/tree-prof/cold_partition_label.c  (revision 238324)
+++ gcc/testsuite/gcc.dg/tree-prof/cold_partition_label.c  (revision 238325)
@@ -29,9 +29,11 @@ foo (int path)
 int
 main (int argc, char *argv[])
 {
+  int i;
   buf_hot = "hello";
   buf_cold = "world";
-  foo (argc);
+  for (i = 0; i < 1000000; i++)
+    foo (argc);
   return 0;
 }

Other tests modified in the same commit have similar changes.

I did some experiments with different loop counts, measuring the total clock
time to run the one test as reported by the test harness:

  iterations  time
     1000     0:12
     10000    0:19
     100000   1:32
     1000000  timed out after 5:11

I'm not sure how to fix this.  I looked at the discussion on the mailing list
about the patch that caused the regression and saw a comment that, even with
the high counts, these tests don't run long enough to get good auto-fdo
results on the targets that support that feature, so cutting down the count
across the board doesn't seem like a good idea.  Nor does simply bumping up
dg-timeout-factor across the board -- why waste 10+ minutes running this test
on a slow target like nios2-linux-gnu that doesn't even support the auto-fdo
feature it's trying to test?

Maybe add something to target-supports.exp to test for the presence of
auto-fdo support, and use it either to skip the tests or to control a -D
option on the command line that sets the number of iterations?