On Monday, May 19, 2014 4:36:49 AM UTC+3, Boris Zbarsky wrote:
> On 5/18/14, 2:23 PM, Gavin Sharp wrote:
> > OMTC is important, and I'm excited to see it land on Windows, but the
> > Firefox and Performance teams have just come off a months-long effort
> > to make significant wins in TART, and the thought of taking a 20%
> > regression (huge compared to some of the improvements we fought for)
> > is pretty disheartening.
>
> My question here is whether we have data that indicates why there is a
> regression. Are we painting more, or are we waiting on things more?
>
> In particular, if I understand correctly TART uses a somewhat bizarre
> configuration: it tries to run refresh drivers at 10,000 frames per
> second (yes, 10 kHz). That may not interact at all well with
> compositing at 60 Hz, and I'm not even sure how well it'll interact
> with the work to trigger the refresh driver off vsync.
>
> In any case, it's entirely possible to get regressions on TART that
> have nothing to do with actual slowdowns at normal frame rates. That
> may not be the case here, but it's a distinct possibility that it is.
> For example, on Mac we ended up special-casing the TART configuration
> and doing non-blocking buffer swaps in it (see bug 888899) precisely
> because otherwise TART ended up gated on things other than actual
> rendering time. I would not be terribly surprised if something like
> that needs to be done on Windows too...
>
> -Boris
It's not really 10 kHz. It just means that the refresh driver sets the timeout to its next iteration to 0 or 1 ms, instead of aiming at 16-17 ms intervals. On OS X it indeed also includes non-blocking swap; on Windows, I don't know. We call it ASAP mode, and it's used on several tests (tscrollx, tsvgx, TART, CART). Since we started using it, those tests have become much more sensitive and much better at detecting perf changes, with very few false positives, if any.

But ASAP is not that weird. Except for the non-blocking swap, the refresh driver can set its intervals to 0-1 ms under normal conditions too - when the load is high. In ASAP mode we explicitly induce this state even when the load is low. Without this mode, those tests would (and did) just flatline around 16.7 ms, detecting neither regressions nor improvements, and in practice were quite useless. Sure, it's different from "normal" mode, and possibly shifts some internal balances to less-than-optimal ones, but overall, for measuring rendering throughput (and changes in it), we don't have a better tool right now, and it has proved useful and reliable.

Its problem, however, is that the numbers we measure with OMTC are not necessarily comparable to the numbers without OMTC. As long as it's without OMTC it's reliable, and as long as it's with OMTC it's also reliable (bug 946567). But the perf changes we measure on the switch itself might not be reliable, if only because defining "throughput" is not easy with OMTC.

The gfx/layout folks are aware of these factors, and we occasionally try to check whether we could come up with something better than ASAP mode, but so far we're not aware of anything better. As I mentioned earlier, for this kind of switch, it would help to put a few real machines in front of different human eyes to assess whether there's an observable difference.
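To illustrate the flatlining point, here is a rough back-of-the-envelope simulation - not Gecko code, just an idealized model where rendering cost is constant per frame and "normal" mode waits for the next 60 Hz vsync while ASAP mode re-fires the refresh driver after at most 1 ms:

```python
import math

def measured_intervals(render_ms, frames, asap):
    """Frame-to-frame intervals (ms) a TART-like test would record,
    under the simplifying assumption of a fixed per-frame render cost."""
    vsync_ms = 1000 / 60  # ~16.7 ms display refresh period
    intervals = []
    for _ in range(frames):
        if asap:
            # ASAP mode: the refresh driver fires again after 0-1 ms,
            # so the interval is dominated by the rendering time itself.
            intervals.append(max(render_ms, 1.0))
        else:
            # Normal mode: the next frame waits for the next vsync,
            # hiding any rendering cost below one vsync period.
            intervals.append(math.ceil(render_ms / vsync_ms) * vsync_ms)
    return intervals

def mean(xs):
    return sum(xs) / len(xs)

# A regression from 5 ms to 8 ms of rendering work per frame:
print(mean(measured_intervals(5.0, 100, asap=False)),
      mean(measured_intervals(8.0, 100, asap=False)))  # both ~16.7 ms: invisible
print(mean(measured_intervals(5.0, 100, asap=True)),
      mean(measured_intervals(8.0, 100, asap=True)))   # 5.0 vs 8.0 ms: visible
```

Both sub-vsync workloads report the same ~16.7 ms in the vsync-locked case, while ASAP mode separates them cleanly - which is the sensitivity the tests rely on.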
- avih

_______________________________________________
dev-platform mailing list
dev-platform@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-platform