I am just wondering whether it should be the other way round: measure
how many iterations can be done in time x. The logic here is that as
processors get faster, a given number of iterations will eventually
take a time near zero. On the other hand, for a given time period
there is much more room for growth in the number of iterations
possible.
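A minimal sketch of this fixed-time-budget idea (purely illustrative; the function name and budget value are assumptions, not part of any actual WebKit test):

```javascript
// Hypothetical sketch of Andre's suggestion: instead of timing a
// fixed iteration count, count how many iterations fit in a fixed
// time budget. Faster machines simply report a larger count.
function iterationsInBudget(workFn, budgetMs) {
  const start = Date.now();
  let iterations = 0;
  while (Date.now() - start < budgetMs) {
    workFn(iterations); // one unit of the work under test
    iterations++;
  }
  return iterations;
}
```

A regression would then show up as the iteration count dropping well below an established baseline, rather than as a timeout.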
Andre
On 7-Aug-07, at 19:04 , Mitz Pettel wrote:
If I understood the bug fix and the test correctly, then a test
like this might work: do 5000 iterations (instead of 30000) and
look at the ratio between the times the first 500 and the last 500
iterations take.
On Aug 7, 2007, at 5:22 PM, Antti Koivisto wrote:
On 8/7/07, Mitz Pettel <[EMAIL PROTECTED]> wrote:
On Aug 7, 2007, at 2:15 PM, [EMAIL PROTECTED] wrote:
- added performance test. With a debug build on MBP this takes about 1.5s to run.
* fast/block/basic/stress-shallow-nested-expected.txt: Added.
* fast/block/basic/stress-shallow-nested.html: Added.
(emphases mine). Nothing about that makes sense to me.
Are you objecting to testing performance regressions as part of the
test suite in general, or to the particular method here? I'm open to
suggestions.
It does not take long to run (300ms on release, 1.5s on debug MBP), so
I thought it would be appropriate as an automatic test. There are
similar cases in the suite already. The test needs to have non-zero
execution time so that the O(n^2) nature of the bug shows up in a
testable way.
antti
_______________________________________________
webkit-dev mailing list
[email protected]
http://lists.webkit.org/mailman/listinfo/webkit-dev