On Sep 30, 2008, at 1:41 PM, Peter Kasting wrote:
> On Tue, Sep 30, 2008 at 1:35 PM, Brady Eidson <[EMAIL PROTECTED]> wrote:
> > If we add a new well specified API that all browser vendors agree
> > on, everybody wins.
>
> No; everybody who's willing and able to change wins. Everyone else
> wins or loses depending on whether the new behavior is better or
> worse for them. My argument is that this makes life better for
> nearly all pages affected. The entire reason to change setTimeout()
> is precisely _because_ not everyone will change their web pages.
Okay, let's try this. There are three possibilities -- win, no change,
and lose -- and here are the groups:

                                              New API     No timer clamp
  Benefits from higher precision timer        No change   Win
  Hurt by high precision timer                No change   Lose
  Hurt by timer and willing to update         No change   Lose (extra work)
  Benefits from timer and willing to update   Win         Win

So while two groups "win" with the Chrome model, two groups actually
lose, either due to site breakage or due to having to do extra work to
avoid breakage. Whereas with a new API, while only one group actually
"wins", no others are affected.
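A minimal sketch of what such an opt-in API could look like, just to
make the "no change for everyone else" column concrete. The name
setHighResTimeout and its signature are purely illustrative -- nothing
like it is actually specified:

```typescript
// Purely hypothetical opt-in API -- illustrative only, not a real proposal.
// Pages that want sub-clamp precision ask for it explicitly; existing
// content keeps calling setTimeout() and keeps the historical clamp.
declare function setHighResTimeout(callback: () => void, delayMs: number): number;

// A page that benefits from higher precision opts in:
setHighResTimeout(() => {
  // runs roughly delayMs later, if the platform can deliver that precision
}, 1);

// Everyone else is untouched -- this stays clamped to ~10-15 ms as before:
setTimeout(() => { /* unchanged behaviour */ }, 0);
```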
> (Furthermore, I claim the number of people who will realize they
> could get something better, and change their code to get it, is
> lower than the number of people who will see that something is wrong
> and fix it.)
I would disagree -- people who need high precision timers, and realise
that they're there, *will* use them. Sites that are broken by a buggy
setTimeout implementation won't get fixed. Hell, I have seen sites with
actual bugs (i.e. bugs in the site, not in the browser) where I have
provided an actual patch to correct the bug, and they still don't fix
it. All a broken setTimeout implementation will do is result in a site
making the "easy" change of saying "don't use this browser because it's
broken". They do that even when the bug is in their site, so an actual
browser bug is even easier to ignore.
> > ... negates the need to introduce new incompatibilities into the
> > already published web by changing setTimeout().
>
> This still implies there is a meaningful compatibility hit to making
> this change. I have not yet seen any reason to agree that is the
> case (in the sense of "CPU usage is not a web compatibility
> issue"). There is _already_ no compatibility here. Browsers do
> completely different things, of an equivalent magnitude (6 ms) to
> the suggested change of 10 ms -> 3 or 4 ms. Firefox is even
> different based on whether Flash happens to be running! How can
> there be compatibility problems introduced by this proposal that
> don't already exist?
Um, I would guess that on Vista all browsers have a 10ms timeout; on XP
the only reason the 15ms timeout clamp exists is because of XP's low
default timer resolution. On Mac (and, I assume, on all
Unixes/BSDs/Linux) the timeout clamp is likely to be 10ms. But even the
15ms timeout is only 1.5x longer than 10ms, whereas 1ms represents an
order of magnitude difference.
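For reference, a rough way to measure the clamp a given browser
actually applies is a loop like the following. This is an illustrative
sketch only; the granularity of Date.now() on some platforms will
itself limit what it can show:

```typescript
// Rough measurement of the effective setTimeout(..., 0) floor.
// Illustrative sketch; results are bounded by Date.now() granularity.
function measureTimeoutClamp(iterations: number = 100): void {
  let count = 0;
  const start = Date.now();
  function tick(): void {
    if (++count >= iterations) {
      const elapsed = Date.now() - start;
      console.log("average setTimeout(0) interval: " +
                  (elapsed / iterations).toFixed(2) + " ms");
      return;
    }
    setTimeout(tick, 0);  // reschedule; the clamp sets the real interval
  }
  setTimeout(tick, 0);
}

measureTimeoutClamp();
```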
If we were to look at a game that, for instance, assumed a 15ms clamp
on setTimeout and used that as the game clock tick (which happens),
then a game that used to get maybe 50 updates a second will get 66
updates with a 10ms timer. With a 1ms resolution timer, though, the
game will get *160fps*, i.e. three times faster than was intended.
(Those figures assume roughly 5ms of script work per tick on top of the
clamp: about 20ms, 15ms, and 6ms per update respectively.)
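The pattern in question looks something like this sketch, where
moveEverything() is a hypothetical stand-in for the per-tick game
update:

```typescript
// Sketch of a game loop that leans on the setTimeout clamp as its clock.
// moveEverything() is a hypothetical stand-in for the per-tick update.
declare function moveEverything(): void;

function gameTick(): void {
  moveEverything();          // advances the game by one fixed step
  setTimeout(gameTick, 0);   // "as fast as possible" -- the real rate is
                             // the clamp (15 ms, 10 ms, 1 ms, ...) plus
                             // the time moveEverything() takes
}

setTimeout(gameTick, 0);
```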
--Oliver
* A very quick Google search brought up
http://www.c-point.com/javascript_tutorial/games_tutorial/how_to_create_games_using_javascript.htm,
which uses a 0ms timeout to trigger torpedo motion.
* From a comment on John Resig's blog: "I saw a javascript game
yesterday that had to LIMIT the framerate because google chrome made
it unplayable." So there are sites that have already had to do work
to not break with this model -- how many more are out there? (A sketch
of that kind of frame-rate limiting follows below.)
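A sketch of the kind of frame-rate limiting such a site ends up adding,
so the game speed no longer tracks the browser's clamp. The 60fps
target and the update() function are illustrative assumptions:

```typescript
// Illustrative frame-rate limiter: run update() at most ~60 times per
// second regardless of how fast setTimeout(..., 0) actually fires.
declare function update(): void;

const TARGET_INTERVAL_MS = 1000 / 60;   // assumed 60fps target
let lastUpdate = Date.now();

function loop(): void {
  const now = Date.now();
  if (now - lastUpdate >= TARGET_INTERVAL_MS) {
    update();               // advance the game by one step
    lastUpdate = now;
  }
  setTimeout(loop, 0);      // still spins as fast as the clamp allows
}

setTimeout(loop, 0);
```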
_______________________________________________
webkit-dev mailing list
webkit-dev@lists.webkit.org
http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev