Ralph Goers wrote:
> What Ceki is doing is an imperfect, but better approach than what you
> are suggesting. The current approach adjusts its expectations based on
> the baseline performance of the build machine. So as builds are done on
> slower or faster hardware the expected baseline should change with it.
>
> The problem with hard time limits is exactly what you say - as machines
> get faster they will naturally pass tests they should have failed. So
> over time the performance tests will become meaningless.
Hello Ralph,

As machines get faster, their BIPS (bogo instructions per second) score
will also increase, and the test threshold numbers will be adjusted
accordingly. However, I agree that the current performance tests do not
take further optimizations in the JDK and associated software into
account, so that after a performance regression in SLF4J code, tests
that should have failed may still pass because JDK optimizations mask
the regression.

> The challenge with the current approach is that it might need to use a
> wider mix of instructions to get a more accurate representation of the
> machine.

True. At the same time, I don't want to become a benchmark specialist.
Do you?

The performance tests have a slack factor of *3*. As long as the
performance of the component under test does not degrade by a higher
factor (3 or more), the tests should continue to pass. The core idea
behind the performance tests is to detect wild regressions in
performance, not minor ones.

> Ralph

--
Ceki Gülcü
Logback: The reliable, generic, fast and flexible logging framework
for Java. http://logback.qos.ch

_______________________________________________
dev mailing list
[email protected]
http://www.slf4j.org/mailman/listinfo/dev
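[Editor's note: the calibration scheme discussed in this thread could be sketched roughly as below. All class names, method names, and reference constants here are hypothetical illustrations, not taken from the actual SLF4J test harness.]

```java
// Sketch of a baseline-calibrated performance test: measure the machine's
// raw speed first, scale the expected cost accordingly, then apply a
// generous slack factor so only wild regressions (3x or worse) fail.
public class CalibratedPerfTest {

    // Allow up to 3x degradation before failing (the "slack factor").
    static final double SLACK = 3.0;

    // Crude baseline in the spirit of a BIPS score: how many cheap
    // operations per millisecond can this machine execute?
    static double measureOpsPerMilli() {
        long iterations = 10_000_000L;
        long sink = 0;
        long start = System.nanoTime();
        for (long i = 0; i < iterations; i++) {
            sink += i ^ (i >>> 3); // mix of cheap instructions
        }
        long elapsedNanos = System.nanoTime() - start;
        if (sink == 42) System.out.println(sink); // defeat dead-code elimination
        return iterations / (elapsedNanos / 1_000_000.0);
    }

    // Stand-in for timing the real component under test.
    static double timeOperationUnderTest() {
        long start = System.nanoTime();
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 10_000; i++) sb.append(i);
        if (sb.length() == 0) System.out.println(sb);
        return (System.nanoTime() - start) / 1_000_000.0;
    }

    public static void main(String[] args) {
        double baseline = measureOpsPerMilli();

        // Numbers recorded once on a reference machine (hypothetical values):
        double referenceOpsPerMilli = 50_000.0;
        double referenceCostMillis = 500.0;

        // Scale the expected cost to this machine's speed, then apply slack.
        double expectedMillis = referenceCostMillis * (referenceOpsPerMilli / baseline);
        double threshold = expectedMillis * SLACK;

        double measuredMillis = timeOperationUnderTest();
        if (measuredMillis > threshold) {
            throw new AssertionError("performance regressed: " + measuredMillis
                    + " ms > threshold " + threshold + " ms");
        }
        System.out.println("PASS");
    }
}
```

Because the threshold scales with the measured baseline, a slower build machine gets a proportionally larger budget; the weakness noted above remains, though: JDK-level speedups inflate the baseline and the budget alike, so they can hide a real regression in the code under test.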
