Re: [webkit-dev] beforeload link (esp rel prefetch)

2011-01-14 Thread Mike Belshe
2011/1/14 Maciej Stachowiak m...@apple.com


 On Jan 13, 2011, at 2:49 PM, Gavin Peters (蓋文彼德斯) wrote:

 Thanks everyone for your replies on link headers and rel types.

 Mike Belshe from Chrome team put together a spec for these as part of
 Server Hints for SPDY.  His server hint information is at:
 https://sites.google.com/a/chromium.org/dev/spdy/link-headers-and-server-hint,
  and link rel=subresource is at
 https://sites.google.com/a/chromium.org/dev/spdy/link-headers-and-server-hint/link-rel-subresource.
   The bottom line for rel=subresource is that we've found in early
 experiments that some page loads, especially of pages with dynamic content,
 are sped up by 15-20%; it's much more than mere milliseconds that we're
 talking about here.  I'd like to do more experimentation with this, and to
 continue this we'd like to both have this rel type (with its prioritization)
 and the Link header (with its early arrival).

 I am skeptical of this testing and I would like to see the details.
 Anything you can put in a Link header in response headers, you can also put
 in a link element in the HTML response document. Even if the response is
 dynamically generated and therefore slow overall, it should be possible to
 emit a fixed portion with the prefetch hints. Therefore this strikes me as a
 workaround for poor Web application design.


The statement above makes it sound like I said there was a 15+% speedup from
this generically - that is certainly not the case!  It is very dependent on
content.  The testing was done almost two years ago and not by me.  You can
find some of it on the chromium.org website; but because it is old data and
not by me, I don't want to cite it as accurate.  I simply want to say that
there is some promise.  We're ready to actively experiment with this inside
of Chrome, but we need some leeway on testing it.  It's a chicken-and-egg
problem: we can't test it in real scenarios if we don't build it.

Regarding poor web page design - you're right, web pages can always optimize
around this.  But the web page optimization space is moving more and more
toward smart servers and automated optimizers.  Web document developers
shouldn't need to know how to do these optimizations - it is something they
are inherently not good at, and with every new site layout they can easily
regress.  Enabling the HTTP-level headers for this allows accelerators to
detect when these headers are appropriate and automatically insert them in an
efficient manner.  We are planning to test this case.

Mike




 Link rel types significantly change the semantics of each link.
  rel=stylesheet changes the HTML page's presentation, and in bug 20018,
 Alexey raised some good points about how this affects saving web pages, and
 I think these rel types in an HTTP header are justifiably more
 controversial.  But that having been said, the rel types prefetch,
 subresource, dns-prefetch are basically network level; they are instructions
 about cache seeding.  No resultant document should render differently based
 on these headers; it should only load faster.

 I agree that beforeload support could be more pervasive than it is today.
  The exclusion of prefetch, icon and dns-prefetch from beforeload events
 bears revisiting.  But are these insurmountable?  Currently the bug up for
 review, bug 51941, doesn't remove beforeload handling from anything that had
 it.  The semantics of beforeload handlers for Link headers with respect to
 extensions bear some thought, but I suspect these can be solved: can we
 create another bug for adding this support?

 It's not obvious how it could work, since a load triggered by a Link header
 has no associated element, and in fact starts before any elements exist. So
 there is nothing that can act as the event target. If you think it can be
 done, how about a proposal?

 Regards,
 Maciej


 - Gavin




 On 13 January 2011 12:48, Alexey Proskuryakov a...@webkit.org wrote:


 On 13.01.2011, at 09:14, Julian Reschke wrote:

  I'm wondering what the use cases are. To me, adding a way for metadata
 to change document behavior sounds like a misfeature - it adds significant
 complexity to the system, while taking away existing functionality. As a
 specific example discussed in this thread, we'd break some browser
 extensions like Incognito if resource loading could bypass onbeforeload. As
 another example, we'd break the <base> element.
 
  Well, there are document types where you *can't* inline the metadata.


 Indeed, and I don't have anything against metadata as long as it doesn't
 directly modify actual data. For example, Last-Modified and Cache-Control
 are quite obvious examples of what can be in HTTP headers. Despite the
 practical/historical difficulties that I mentioned with Content-Type, it's
 arguably a valid example of metadata, too.

 Subresource references on the other hand are a part of a document, not of
 its metadata. Am I just missing a reason why one would want to prefetch
 subresources for a JPEG image?

  We should

[webkit-dev] Bug system: Platform and OS fields don't quite align

2010-09-14 Thread Mike Belshe
Hi,

I tend to hit code which is often Chromium-platform-specific.  It's hard to
know the appropriate Platform and OS fields for such a bug.  For
instance, I am working on a small change to WebKit/chromium/src/WebKit.cpp.
 It's not a Mac bug, it's not a PC bug; it's a Chromium bug.

Chromium is a platform of sorts.  Chromium bugs could be OS-specific (like
Windows, MacOS, etc).   So I think the platform field would be the right
place to surface such a thing.  Should the bug system surface a platform for
Chromium?  If so, perhaps there are other platforms to surface as well.  I'm
not sure when a particular flavor of WebKit warrants a field in the bug
system.

If we aren't willing to reflect this through the bug system, what are the
right values for these fields?  I'm guessing the Apple guys want to filter
out these bugs - how do you do it today?

Thanks,
Mike


[webkit-dev] PreloadScanner aggressiveness

2010-01-07 Thread Mike Belshe
Hi -

I've been working on SPDY, but I think I may have found a good performance
win for HTTP.  Specifically, the PreloadScanner, which is responsible for
scanning ahead within an HTML document to find subresources, is throttled
today.  The throttling is intentional and probably sometimes necessary.
 Nonetheless, un-throttling it may lead to a 5-10% performance boost in some
configurations.  I believe Antti is no longer working on this?  Is there
anyone else working in this area that might have data on how aggressive the
PreloadScanner should be?  Below I'll describe some of my tests.

The PreloadScanner throttling happens in a couple of ways.  First, the
PreloadScanner only runs when we're blocked on JavaScript (see
HTMLTokenizer.cpp).  But further, as it discovers resources to be fetched,
it may delay or reject loading the subresource at all due to throttling in
loader.cpp and DocLoader.cpp.  The throttling is very important, depending
on the implementation of the HTTP networking stack, because throwing too
many resources (or the low-priority ones) into the network stack could
adversely affect HTTP load performance.  This latter problem does not impact
my Chromium tests, because the Chromium network stack does its own
prioritization and throttling (not too dissimilar from the work done by
loader.cpp).

*Theory*:
The theory I'm working under is that when the RTT of the network is
sufficiently high, the *best* thing the browser can do is to discover
resources as quickly as possible and pass them to the network layer so that
we can get started with fetching.  This is not speculative - these are
resources which will be required to render the full page.   The SPDY
protocol is designed around this concept - allowing the browser to schedule
all resources it needs to the network (rather than being throttled by
connection limits).  However, even with SPDY enabled, WebKit itself prevents
resource requests from fully flowing to the network layer in 3 ways:
   a) loader.cpp orders requests and defers requests based on the state of
the page load and a number of criteria.
   b) HTMLTokenizer.cpp only looks for resources further in the body when
we're blocked on JS
   c) preload requests are treated specially (docloader.cpp); if they are
discovered too early by the tokenizer, then they are either queued or
discarded.
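
For context, a preload scanner conceptually just scans raw markup ahead of the
real parser for fetchable URLs.  The toy sketch below is my illustration only -
WebKit's actual scanner works on tokenizer output in HTMLTokenizer.cpp and
feeds DocLoader, not regexes - but it shows the idea of discovering
subresources while the parser is blocked:

// Toy preload scanner: scan raw HTML that has arrived from the network and
// collect candidate subresource URLs, so fetches can start even while the
// real parser is blocked (e.g. executing a synchronous script).
function scanForPreloads(htmlChunk) {
  const candidates = [];
  // <script src=...> and <img src=...>
  const srcRe = /<(script|img)\b[^>]*\bsrc=["']?([^"'\s>]+)/gi;
  // <link rel=stylesheet href=...>
  const cssRe = /<link\b[^>]*\brel=["']?stylesheet["']?[^>]*\bhref=["']?([^"'\s>]+)/gi;
  let m;
  while ((m = srcRe.exec(htmlChunk)) !== null) {
    candidates.push({ type: m[1].toLowerCase(), url: m[2] });
  }
  while ((m = cssRe.exec(htmlChunk)) !== null) {
    candidates.push({ type: 'stylesheet', url: m[1] });
  }
  return candidates;
}

// Example: everything after the blocking script is still discoverable.
const html = '<script src="a.js"></script><link rel="stylesheet" href="b.css"><img src="c.png">';
console.log(scanForPreloads(html));
// -> [{type:'script',url:'a.js'}, {type:'img',url:'c.png'}, {type:'stylesheet',url:'b.css'}]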

*Test Case*
Can aggressive preloadscanning (e.g. always preload scan before parsing an
HTML Document) improve page load time?

To test this, I'm calling the PreloadScanner basically as the first part of
HTMLTokenizer::write().  I've then removed all throttling from loader.cpp
and DocLoader.cpp.  I've also instrumented the PreloadScanner to measure its
effectiveness.

*Benchmark Setup*
Windows client (chromium).
Simulated network with 4Mbps download, 1Mbps upload, 100ms RTT, 0% packet
loss.
I run through a set of 25 URLs, loading each 30 times; not recycling any
connections and clearing the cache between each page.
These are running over HTTP; there is no SPDY involved here.

*Results:*
Baseline (without my changes) vs. Unthrottled:
- Average PLT: 2377ms vs. 2239ms.  +5.8% latency redux.
- Time spent in the PreloadScanner: 1160ms vs. 4540ms.  As expected, we spend
  about 4x more time in the PreloadScanner. In this test, we loaded 750 pages,
  so it is about 6ms per page. My machine is fast, though.
- Preload Scripts discovered: 2621 vs. 9440.  4x more scripts discovered.
- Preload CSS discovered: 348 vs. 1022.  3x more CSS discovered.
- Preload Images discovered: 11952 vs. 39144.  3x more images discovered.
- Preload items throttled: 9983 vs. 0.
- Preload Complete hits: 3803 vs. 6950.  This is the count of items which were
  completely preloaded before WebKit even tried to look them up in the cache.
  This is pure goodness.
- Preload Partial hits: 1708 vs. 7230.  These are partial hits, where the item
  had already started loading, but not finished, before WebKit tried to look
  them up.
- Preload Unreferenced: 42 vs. 130.  These are bad and the count should be
  zero. I'll try to find them and see if there isn't a fix - the PreloadScanner
  is just sometimes finding resources that are never used. It is likely due to
  clever JS which changes the DOM.



*Conclusions:*
For this network speed/client processor, more aggressive PreloadScanning
clearly is a win.   More testing is needed for slower machines and other
network types.  I've tested many network types; the aggressive preload
scanning seems to always be either a win or a wash; for very slow network
connections, where we're already at capacity, the extra CPU burning is
basically free.  For super fast networks, with very low RTT, it also appears
to be a wash.  The networks in the middle (including mobile simulations) see
nice gains.

*Next Steps and Questions:*
I'd like to land my changes so that we can continue to gather data.  I can
enable these via macro definitions or I can enable these via dynamic
settings.  I can then try to do more A/B testing.

Are there any existing web pages which the WebKit team would like tested
under these configurations?  I don't see a lot of testing 

Re: [webkit-dev] PreloadScanner aggressiveness

2010-01-07 Thread Mike Belshe
On Thu, Jan 7, 2010 at 12:49 PM, Maciej Stachowiak m...@apple.com wrote:


 On Jan 7, 2010, at 12:09 PM, Mike Belshe wrote:

 Hi -

 I've been working on SPDY, but I think I may have found a good performance
 win for HTTP.  Specifically, the PreloadScanner, which is responsible for
 scanning ahead within an HTML document to find subresources, is throttled
 today.  The throttling is intentional and probably sometimes necessary.
  Nonetheless, un-throttling it may lead to a 5-10% performance boost in some
 configurations.  I believe Antti is no longer working on this?  Is there
 anyone else working in this area that might have data on how aggressive the
 PreloadScanner should be?  Below I'll describe some of my tests.

 The PreloadScanner throttling happens in a couple of ways.  First, the
 PreloadScanner only runs when we're blocked on JavaScript (see
 HTMLTokenizer.cpp).  But further, as it discovers resources to be fetched,
 it may delay or reject loading the subresource at all due to throttling in
 loader.cpp and DocLoader.cpp.  The throttling is very important, depending
 on the implementation of the HTTP networking stack, because throwing too
 many resources (or the low-priority ones) into the network stack could
 adversely affect HTTP load performance.  This latter problem does not impact
 my Chromium tests, because the Chromium network stack does its own
 prioritization and throttling (not too dissimilar from the work done by
 loader.cpp).


 The reason we do this is to prevent head-of-line blocking by low-priority
 resources inside the network stack (mainly considering how CFNetwork /
 NSURLConnection works).


Right - understood.




 *Theory*:
 The theory I'm working under is that when the RTT of the network is
 sufficiently high, the *best* thing the browser can do is to discover
 resources as quickly as possible and pass them to the network layer so that
 we can get started with fetching.  This is not speculative - these are
 resources which will be required to render the full page.   The SPDY
 protocol is designed around this concept - allowing the browser to schedule
 all resources it needs to the network (rather than being throttled by
 connection limits).  However, even with SPDY enabled, WebKit itself prevents
 resource requests from fully flowing to the network layer in 3 ways:
a) loader.cpp orders requests and defers requests based on the state of
 the page load and a number of criteria.
b) HTMLTokenizer.cpp only looks for resources further in the body when
 we're blocked on JS
c) preload requests are treated specially (docloader.cpp); if they are
 discovered too early by the tokenizer, then they are either queued or
 discarded.


 I think your theory is correct when SPDY is enabled, and possibly when
 using HTTP with pipelining. It may be true to a lesser extent with
 non-pipelining HTTP implementations when the network stack does its own
 prioritization and throttling, by reducing latency in getting the request to
 the network stack.


right.


 This is especially so when issuing a network request to the network stack
 may involve significant latency due to IPC or cross-thread communication or
 the like.


I hadn't considered IPC or cross-thread latencies.  When I've measured these
in the past they are very, very low.  One problem with the single-threaded
nature of our preloader and parser right now is that if the HTMLTokenizer is
in the middle of executing JS code, we're not doing anything to scan for
preloads; tons of data can be flowing in off the network that we're
oblivious to.  I'm not trying to change this for now, though; it's much more
involved, I think, due to thread-safety requirements for the WebCore cache.




 *Test Case*
 Can aggressive preloadscanning (e.g. always preload scan before parsing an
 HTML Document) improve page load time?

 To test this, I'm calling the PreloadScanner basically as the first part of
 HTMLTokenizer::write().  I've then removed all throttling from loader.cpp
 and DocLoader.cpp.  I've also instrumented the PreloadScanner to measure its
 effectiveness.

 *Benchmark Setup*
 Windows client (chromium).
 Simulated network with 4Mbps download, 1Mbps upload, 100ms RTT, 0% packet
 loss.
 I run through a set of 25 URLs, loading each 30 times; not recycling any
 connections and clearing the cache between each page.
 These are running over HTTP; there is no SPDY involved here.


 I'm interested in the following:

 - What kind of results do you get in Safari?


I've not done much benchmarking in Safari; do you have a good way to do
this?  Is there something I can read about or tools I can use?

For chromium, I use the benchmarking extension which lets me run through
lots of pages quickly.



 - How much of this effect is due to more aggressive preload scanning and
 how much is due to disabling throttling? Since the test includes multiple
 logically independent changes, it is hard to tell which are the ones that had
 an effect.


Great question

Re: [webkit-dev] PreloadScanner aggressiveness

2010-01-07 Thread Mike Belshe
On Thu, Jan 7, 2010 at 12:52 PM, Joe Mason jma...@rim.com wrote:

  I don’t think every port should be required to implement prioritization
 and throttling itself – that’s just duplication of effort.

I agree.  I wasn't thinking of turning this on globally; rather, I was thinking
about how to turn it on selectively for ports that want it.

Mike



 Maybe there’s a good middle-ground, where PreloadScanner is run more often
 but still does the priority sorting?



 Joe



 *From:* webkit-dev-boun...@lists.webkit.org [mailto:
 webkit-dev-boun...@lists.webkit.org] *On Behalf Of *Mike Belshe
 *Sent:* Thursday, January 07, 2010 3:09 PM
 *To:* webkit-dev@lists.webkit.org
 *Subject:* [webkit-dev] PreloadScanner aggressiveness



 Hi -


 I've been working on SPDY, but I think I may have found a good performance
 win for HTTP.  Specifically, the PreloadScanner, which is responsible for
 scanning ahead within an HTML document to find subresources, is throttled
 today.  The throttling is intentional and probably sometimes necessary.
  Nonetheless, un-throttling it may lead to a 5-10% performance boost in some
 configurations.  I believe Antti is no longer working on this?  Is there
 anyone else working in this area that might have data on how aggressive the
 PreloadScanner should be?  Below I'll describe some of my tests.



 The PreloadScanner throttling happens in a couple of ways.  First, the
 PreloadScanner only runs when we're blocked on JavaScript (see
 HTMLTokenizer.cpp).  But further, as it discovers resources to be fetched,
 it may delay or reject loading the subresource at all due to throttling in
 loader.cpp and DocLoader.cpp.  The throttling is very important, depending
 on the implementation of the HTTP networking stack, because throwing too
 many resources (or the low-priority ones) into the network stack could
 adversely affect HTTP load performance.  This latter problem does not impact
 my Chromium tests, because the Chromium network stack does its own
 prioritization and throttling (not too dissimilar from the work done by
 loader.cpp).



 *Theory*:

 The theory I'm working under is that when the RTT of the network is
 sufficiently high, the *best* thing the browser can do is to discover
 resources as quickly as possible and pass them to the network layer so that
 we can get started with fetching.  This is not speculative - these are
 resources which will be required to render the full page.   The SPDY
 protocol is designed around this concept - allowing the browser to schedule
 all resources it needs to the network (rather than being throttled by
 connection limits).  However, even with SPDY enabled, WebKit itself prevents
 resource requests from fully flowing to the network layer in 3 ways:

a) loader.cpp orders requests and defers requests based on the state of
 the page load and a number of criteria.

b) HTMLTokenizer.cpp only looks for resources further in the body when
 we're blocked on JS

c) preload requests are treated specially (docloader.cpp); if they are
 discovered too early by the tokenizer, then they are either queued or
 discarded.



 *Test Case*

 Can aggressive preloadscanning (e.g. always preload scan before parsing an
 HTML Document) improve page load time?



 To test this, I'm calling the PreloadScanner basically as the first part of
 HTMLTokenizer::write().  I've then removed all throttling from loader.cpp
 and DocLoader.cpp.  I've also instrumented the PreloadScanner to measure its
 effectiveness.



 *Benchmark Setup*

 Windows client (chromium).

 Simulated network with 4Mbps download, 1Mbps upload, 100ms RTT, 0% packet
 loss.

 I run through a set of 25 URLs, loading each 30 times; not recycling any
 connections and clearing the cache between each page.

 These are running over HTTP; there is no SPDY involved here.



 *Results:*

 *Baseline
 (without my changes)*

 *Unthrottled*

 *Notes*

 Average PLT

 2377ms

 2239ms

 +5.8% latency redux.

 Time spent in the PreloadScanner

 1160ms

 4540ms

 As expected, we spend about 4x more time in the PreloadScanner. In this
 test, we loaded 750 pages, so it is about 6ms per page. My machine is fast,
 though.

 Preload Scripts discovered

 2621

 9440

 4x more scripts discovered

 Preload CSS discovered

 348

 1022

 3x more CSS discovered

 Preload Images discovered

 11952

 39144

 3x more images discovered

 Preload items throttled

 9983

 0

 Preload Complete hits

 3803

 6950

 This is the count of items which were completely preloaded before WebKit
 even tried to look them up in the cache. This is pure goodness.

 Preload Partial hits

 1708

 7230

 These are partial hits, where the item had already started loading, but not
 finished, before WebKit tried to look them up.

 Preload Unreferenced

 42

 130

 These are bad and the count should be zero. I'll try to find them and see
 if there isn't a fix - the PreloadScanner is just sometimes finding
 resources that are never used. It is likely due to clever JS

Re: [webkit-dev] Sunspider 0.9.1 preview

2009-12-15 Thread Mike Belshe
[+cc John Resig since he's using this as part of dromaeo]

Overall, sounds like good progress.

A couple of ideas:
   - can we make it so that if you try to cut-and-paste comparisons of 0.9
to 0.9.1 results, it will say these results are from a different version?
   - can we make the version more prominent in the title?
   - what would you think of reducing the setTimeout(..., 500) to something
like setTimeout(..., 100)?  This will cut the runtime of the test by ~80%
:-)

I'll volunteer to do any of these tasks this week if you want me to look at
it.

Mike


On Mon, Dec 14, 2009 at 11:32 PM, Maciej Stachowiak m...@apple.com wrote:


 Hello folks,

 Over the past few days I made some changes to SunSpider to address some of
 the more serious issues reported. I focused on only changes that seem to
 make a significant difference to fairness and validity, so for example I did
 not remove accidental access to global variables. I also made a small number
 of harness changes that do not affect results but fix flaws in the harness.

 We are hesitant to change the SunSpider content or harness much at all,
 since it's been used for cross-version and cross-browser comparisons for so
 long. But these problems (many originally pointed out by Chrome or
 Mozilla folks) seemed important enough to address. Also, in addition to the
 patched content set, the original sunspider-0.9 content set is also
 available to run through the new harness.

 The most important harness change is greatly reducing the time between
 tests (as suggested by Mike Belshe) to avoid the negative impact of power
 management on many systems (both Mac and Windows), which is most apparent
 for very fast browsers.

 I'm deliberately not posting this on the web site yet because I don't want
 a flood of gawkers testing their browser before enough people have had a
 chance to review and verify these changes.


 Harness changes:

 In-browser SunSpider suffers excessive penalty under power management
 https://bugs.webkit.org/show_bug.cgi?id=32505

 Enable Web-hosted version of SunSpider to handle multiple versions
 https://bugs.webkit.org/show_bug.cgi?id=32478

 Use JSON.parse instead of eval for Web-hosted SunSpider results processing
 https://bugs.webkit.org/show_bug.cgi?id=32490

 Some Browser-hosted SunSpider files are not valid HTML5
 https://bugs.webkit.org/show_bug.cgi?id=32536

 Make sunspider-0.9.1 the default content set (both command-line and hosted)
 https://bugs.webkit.org/show_bug.cgi?id=32537


 Content changes (in sunspider-0.9.1 suite only; sunspider-0.9 is as
 originally posted):

 SunSpider/tests/string-base64.js does not compute a valid base64 encoded
 string
 https://bugs.webkit.org/show_bug.cgi?id=16806

 sunspider regexp-dna is inaccurate on firefox
 https://bugs.webkit.org/show_bug.cgi?id=18989



 Further changes I'm considering but am unsure about:
 - Add correctness checking to all tests that don't use random numbers.
 - Stop using array-like indexing of strings in the base64 test since that
 doesn't work in IE8 and lower; but it is a standard construct now (ES5),
 future IE will support it, and it's a useful thing to test.

 Changes that probably won't be considered until a 2.0 version:
 - Adding new tests to cover other areas.
 - Rebalancing the runtime of the existing tests.
 - Considering different scoring methodology such as bigger-is-better or
 geometric mean or the like.
 - Removing use of random numbers from tests that do use them.

 Regards,
 Maciej



Re: [webkit-dev] Sunspider 0.9.1 preview

2009-12-15 Thread Mike Belshe
On Tue, Dec 15, 2009 at 1:01 PM, Maciej Stachowiak m...@apple.com wrote:


 On Dec 15, 2009, at 12:30 PM, Mike Belshe wrote:

 [+cc John Resig since he's using this as part of dromaeo]

 Overall, sounds like good progress.

 A couple of ideas:
- can we make it so that if you try to cut-and-paste comparisons of 0.9
 to 0.9.1 results, it will say these results are from a different version?


 Good idea. Filed https://bugs.webkit.org/show_bug.cgi?id=32573

- can we make the version more prominent in the title?


 I'll see if I can find a reasonable way to do so. 
 https://bugs.webkit.org/show_bug.cgi?id=32574

- what would you think of reducing the setTimeout(..., 500) to something
 like setTimeout(..., 100)?  This will cut the runtime of the test by ~80%
 :-)


 I don't know if you noticed in my comments below, but the gap between
 individual tests is now 10ms; there is only a one-time initial delay of
 500ms to give browsers a chance to recover from the effects of loading the
 driver page.


Ok - I had checked the code, saw the 500ms setTimeout there, and then griped
:-)

This sounds great!





 Or are you talking about the initial 500ms delay? On my MacBook Pro, in
 64-bit Safari, each run of the test takes 370ms measured time, plus 260ms
 for the gaps between tests, and there are a total of 5 cycles. So actual
 test time is 3780. So reducing the initial pause from 500ms to 100ms would
 be a 10% improvement on total runtime of the benchmark. I don't think that
 would be a meaningful difference. But I can look at whether this number can
 be reduced without distorting the results.


Agree - I don't care about that.

thanks,
Mike




 I'll volunteer to do any of these tasks this week if you want me to look at
 it.


 In general help is welcome, but I think I can take care of the two bugs
 cited above.

 Regards,
 Maciej



 Mike


 On Mon, Dec 14, 2009 at 11:32 PM, Maciej Stachowiak m...@apple.com wrote:


 Hello folks,

 Over the past few days I made some changes to SunSpider to address some of
 the more serious issues reported. I focused on only changes that seem to
 make a significant difference to fairness and validity, so for example I did
 not remove accidental access to global variables. I also made a small number
 of harness changes that do not affect results but fix flaws in the harness.

 We are hesitant to change the SunSpider content or harness much at all,
 since it's been used for cross-version and cross-browser comparisons for so
 long. But these problems (many originally pointed out by Chrome or
 Mozilla folks) seemed important enough to address. Also, in addition to the
 patched content set, the original sunspider-0.9 content set is also
 available to run through the new harness.

 The most important harness change is greatly reducing the time between
 tests (as suggested by Mike Belshe) to avoid the negative impact of power
 management on many systems (both Mac and Windows), which is most apparent
 for very fast browsers.

 I'm deliberately not posting this on the web site yet because I don't want
 a flood of gawkers testing their browser before enough people have had a
 chance to review and verify these changes.


 Harness changes:

 In-browser SunSpider suffers excessive penalty under power management
 https://bugs.webkit.org/show_bug.cgi?id=32505

 Enable Web-hosted version of SunSpider to handle multiple versions
 https://bugs.webkit.org/show_bug.cgi?id=32478

 Use JSON.parse instead of eval for Web-hosted SunSpider results processing
 https://bugs.webkit.org/show_bug.cgi?id=32490

 Some Browser-hosted SunSpider files are not valid HTML5
 https://bugs.webkit.org/show_bug.cgi?id=32536

 Make sunspider-0.9.1 the default content set (both command-line and
 hosted)
 https://bugs.webkit.org/show_bug.cgi?id=32537


 Content changes (in sunspider-0.9.1 suite only; sunspider-0.9 is as
 originally posted):

 SunSpider/tests/string-base64.js does not compute a valid base64 encoded
 string
 https://bugs.webkit.org/show_bug.cgi?id=16806

 sunspider regexp-dna is inaccurate on firefox
 https://bugs.webkit.org/show_bug.cgi?id=18989



 Further changes I'm considering but am unsure about:
 - Add correctness checking to all tests that don't use random numbers.
 - Stop using array-like indexing of strings in the base64 test since that
 doesn't work in IE8 and lower; but it is a standard construct now (ES5),
 future IE will support it, and it's a useful thing to test.

 Changes that probably won't be considered until a 2.0 version:
 - Adding new tests to cover other areas.
 - Rebalancing the runtime of the existing tests.
 - Considering different scoring methodology such as bigger-is-better or
 geometric mean or the like.
 - Removing use of random numbers from tests that do use them.

 Regards,
 Maciej


Re: [webkit-dev] Making browsers faster: Resource Packages

2009-11-22 Thread Mike Belshe
On Sat, Nov 21, 2009 at 3:00 PM, Steve Souders st...@souders.org wrote:

  Here's my understanding of how this would work: In addition to the
 resource package LINK and the not-packaged stylesheet LINK, you still need
 LINKs for the other stylesheets. So the page could look like this:
 <link rel=resource-package href=pkg.zip>
 <link rel=stylesheet href=in-package-A.css>
 <link rel=stylesheet href=in-package-B.css>
 <link rel=stylesheet href=NOT-in-package-C.css>

 or this:
 <link rel=resource-package href=pkg.zip>
 <link rel=stylesheet href=NOT-in-package-C.css>
 <link rel=stylesheet href=in-package-A.css>
 <link rel=stylesheet href=in-package-B.css>

 Browsers probably shouldn't download any other resources until they've
 gotten the manifest.txt. In the first case, there isn't an extra RT
 (assuming in-package-A.css is the first file in the package), and the page
 should render faster, esp in IE  7 (if all the resources are on the same
 domain). In the second case, there is an extra RT delay for painting.
 Presumably, core stylesheets are packaged and come first, and
 page-specific stylesheets aren't packaged and come last, so the first
 situation is more typical.


CSS and JS can't be declared in arbitrary orders.  So while your argument is
good (about when the extra RTT exists), in practice, it is not always an
option.  If there are 3 scripts, two of which can't be bundled and one which
can, then you may or may not suffer the extra RTT.

This is really subtle stuff -  web designers could think they are speeding
up their pages when they're slowing them down.  The tools need to prevent
that.  It can't be manual.

Mike





 -Steve



 Mike Belshe wrote:

 Alexander - when you do the testing on this, one case I'd really like to
 see results on is this:

  Page contains a resource bundle, and the bundle contains a bunch of
 stylesheets, JS and other, but DOES NOT include one of the CSS files.
  Immediately following the link resource bundle, put a reference to the
 style sheet not included in the bundle.

  When the browser sees the link to the CSS, which is critical to the page
 download, does it wait for the resource bundle to load (I realize that
 technically it only needs to get the manifest)?  If not, it might download
 it twice (since it doesn't know the status of the bundle yet).

  Now simulate over a 200ms RTT link.  I believe you've just added a full
 RT to get the CSS, which was critical for layout.  Overall PLT won't suffer
 the full RTT, but time-to-first-paint will.

  Mike


 On Wed, Nov 18, 2009 at 3:57 PM, Peter Kasting pkast...@google.comwrote:

  On Wed, Nov 18, 2009 at 3:54 PM, Dirk Pranke dpra...@chromium.orgwrote:

 Another caching-related issue involves versioning of the archives. If
 version 2 of a zip contains only a few files modified since version 1,
 and I have version 1 cached, is there some way to take advantage of
 that?


  This is a specific case of my more general question: "One of your stated
 goals is to avoid downloading resources you already have, but even with
 manifests, I see no way to do this, since the client can't actually tell the
 server 'only send items x, y, and z'."  This was the one point Alexander
 didn't copy in his reply mail.

  PK





Re: [webkit-dev] Making browsers faster: Resource Packages

2009-11-18 Thread Mike Belshe
Overall, I think the general idea.

I'm concerned about the head-of-line blocking that it introduces.  If an
administrator poorly constructs the bundle, he could significantly hurt
perf.  Instead of using gzip, you could use a framer which chunked items
before gzipping.  This might be more trouble than it is worth.
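
As a purely illustrative sketch of the "framer" idea above (my interpretation,
using Node's zlib; not part of the Resource Packages proposal): compress each
member on its own and length-prefix it, so a receiver can unpack early members
without waiting for the whole archive.

// Hypothetical per-item framing: each member is gzipped on its own and
// preceded by a one-line JSON header giving its path and compressed length,
// so the client can unpack members as they stream in instead of waiting for
// the end of a single gzip/zip stream.
const zlib = require('zlib');

function frameBundle(members) { // members: [{ path: string, data: Buffer }]
  return Buffer.concat(members.map(({ path, data }) => {
    const body = zlib.gzipSync(data);
    const header = Buffer.from(JSON.stringify({ path, length: body.length }) + '\n');
    return Buffer.concat([header, body]);
  }));
}

// Example usage with made-up content.
const bundle = frameBundle([
  { path: 'foo.gif', data: Buffer.from('...gif bytes...') },
  { path: 'baz.gif', data: Buffer.from('...gif bytes...') },
]);
console.log(bundle.length);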

Inside the browser, the caching is going to be kind of annoying.  Example:
 Say foo.zip contains foo.gif and baz.gif, and foo.zip expires in one week.
  When the browser downloads the manifest, it needs to unfold it and store
foo.gif and baz.gif in the cache.  Then, a week later, if the browser tries
to use foo.gif, it will be expired; does the browser fetch foo.zip?  or just
foo.gif?  Obviously, either will work.  But now you've got an inconsistent
cache.  If you hit another page which references foo.zip next, you'll
download the whole zip file when all you needed was baz.gif.  This is
probably a minor problem - I can't see this being very significant in
practice.  Did you consider having the resources for a bundle be addressed
such as:  http://www.foo.com/bundle.zip/foo.gif  ?  This would eliminate the
problem of two names for the same resource.  Maybe this was your intent -
the spec was unclear about the identity (URL) of the bundled resources.
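
To illustrate the addressing idea in the previous sentence, here is a tiny
sketch (hypothetical naming scheme, not from the spec) of how a bundle-scoped
URL gives each member exactly one cache identity:

// Hypothetical bundle-scoped addressing: a resource inside a package is only
// ever known as <bundle-url>/<member-path>, so there is a single cache key
// and no ambiguity about whether to refetch the member or the whole zip.
function parseBundleUrl(url) {
  const marker = '.zip/';
  const i = url.indexOf(marker);
  if (i === -1) return null; // not a bundled resource
  return {
    bundleUrl: url.slice(0, i + 4),           // e.g. http://www.foo.com/bundle.zip
    memberPath: url.slice(i + marker.length), // e.g. foo.gif
  };
}

console.log(parseBundleUrl('http://www.foo.com/bundle.zip/foo.gif'));
// -> { bundleUrl: 'http://www.foo.com/bundle.zip', memberPath: 'foo.gif' }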

I think it is a good enough idea to warrant an implementation.  Once we have
data about performance, it will be clear whether this should be made
official or not.

Mike


On Wed, Nov 18, 2009 at 11:56 AM, Alexander Limi l...@mozilla.com wrote:

 On Tue, Nov 17, 2009 at 5:56 PM, Alexander Limi l...@mozilla.com wrote:

 On Tue, Nov 17, 2009 at 5:53 PM, James Robinson jam...@google.comwrote:

 Yes, actual numbers would be nice to have.


 Steve Souders just emailed me some preliminary numbers from a bunch of
 major web sites, so that should be on his blog shortly.


 Numbers are up:

 http://www.stevesouders.com/blog/2009/11/18/fewer-requests-through-resource-packages/


 --
 Alexander Limi · Firefox User Experience · http://limi.net





Re: [webkit-dev] Making browsers faster: Resource Packages

2009-11-18 Thread Mike Belshe
On Wed, Nov 18, 2009 at 2:47 PM, Mike Belshe m...@belshe.com wrote:

 Overall, I think the general idea.


I meant to say Overall I like the general idea




 I'm concerned about the head-of-line blocking that it introduces.  If an
 administrator poorly constructs the bundle, he could significantly hurt
 perf.  Instead of using gzip, you could use a framer which chunked items
 before gzipping.  This might be more trouble than it is worth.

 Inside the browser, the caching is going to be kind of annoying.  Example:
  Say foo.zip contains foo.gif and baz.gif, and foo.zip expires in one week.
   When the browser downloads the manifest, it needs to unfold it and store
 foo.gif and baz.gif in the cache.  Then, a week later, if the browser tries
 to use foo.gif, it will be expired; does the browser fetch foo.zip?  or just
 foo.gif?  Obviously, either will work.  But now you've got an inconsistent
 cache.  If you hit another page which references foo.zip next, you'll
  download the whole zip file when all you needed was baz.gif.  This is
 probably a minor problem - I can't see this being very significant in
 practice.  Did you consider having the resources for a bundle be addressed
 such as:  http://www.foo.com/bundle.zip/foo.gif  ?  This would eliminate
 the problem of two names for the same resource.  Maybe this was your intent
 - the spec was unclear about the identity (URL) of the bundled resources.

 I think it is a good enough idea to warrant an implementation.  Once we
 have data about performance, it will be clear whether this should be made
 official or not.

 Mike


 On Wed, Nov 18, 2009 at 11:56 AM, Alexander Limi l...@mozilla.com wrote:

 On Tue, Nov 17, 2009 at 5:56 PM, Alexander Limi l...@mozilla.com wrote:

 On Tue, Nov 17, 2009 at 5:53 PM, James Robinson jam...@google.comwrote:

 Yes, actual numbers would be nice to have.


 Steve Souders just emailed me some preliminary numbers from a bunch of
 major web sites, so that should be on his blog shortly.


 Numbers are up:

 http://www.stevesouders.com/blog/2009/11/18/fewer-requests-through-resource-packages/


 --
 Alexander Limi · Firefox User Experience · http://limi.net





Re: [webkit-dev] Iterating SunSpider

2009-07-07 Thread Mike Belshe
On Mon, Jul 6, 2009 at 10:11 AM, Geoffrey Garen gga...@apple.com wrote:

  So, what you end up with is after a couple of years, the slowest test in
 the suite is the most significant part of the score.  Further, I'll predict
 that the slowest test will most likely be the least relevant test, because
 the truly important parts of JS engines were already optimized.  This has
 happened with Sunspider 0.9 - the regex portions of the test became the
 dominant factor, even though they were not nearly as prominent in the real
 world as they were in the benchmark.  This leads to implementors optimizing
 for the benchmark - and that is not what we want to encourage.


 How did you determine that regex performance is not nearly as prominent in
 the real world?


For a while regex was 20-30% of the benchmark on most browsers even though
it didn't consume 20-30% of the time that browsers spent inside javascript.

So, I determined this through profiling.  If you profile your browser while
browsing websites, you won't find that it spends 20-30% of its javascript
execution time running regex (even with the old pcre).  It's more like 1%.
 If this is true, then it's a shame to see this consume 20-30% of any
benchmark, because it means the benchmark scoring is not indicative of the
real world.  Maybe I just disagree with the mix ever having been very
representative?  Or maybe it changed over time?  I don't know because I
can't go back in time :-)  Perhaps one solution is to better document how a
mix is chosen.

I don't really want to make this a debate about regex and he-says/she-says
how expensive it is.  We should talk about the framework.  If the framework
is subject to this type of skew, where it can disproportionately weight a
test, is that something we should avoid?

Keep in mind I'm not recommending any change to existing SunSpider 0.9 -
just changes to future versions.

Maciej pointed out a case where he thought the geometric mean was worse; I
think that's a fair consideration if you have the perfect benchmark with an
exactly representative workload.  But we don't have the ability to make a
perfectly representative benchmark workload, and even if we did it would
change over time - eventually making the benchmark useless...

Mike


Re: [webkit-dev] Iterating SunSpider

2009-07-07 Thread Mike Belshe
As I said, we can argue the mix of tests forever, but it is not useful.
 Yes, I would test using top-100 sites.  In the future, if a benchmark
claims to have a representative mix, it should document why.  Right?
Are you saying that you did see Regex as being such a high percentage of
javascript code?  If so, we're using very different mixes of content for our
tests.

Mike


On Tue, Jul 7, 2009 at 3:08 PM, Geoffrey Garen gga...@apple.com wrote:

 So, I determined this through profiling.  If you profile your browser while
 browsing websites, you won't find that it spends 20-30% of its javascript
 execution time running regex (even with the old pcre).


 What websites did you browse, and how did you choose them?

 Do you think your browsing is representative of all JavaScript
 applications?

 Geoff



Re: [webkit-dev] Iterating SunSpider

2009-07-07 Thread Mike Belshe
On Sat, Jul 4, 2009 at 3:27 PM, Maciej Stachowiak m...@apple.com wrote:


 On Jul 4, 2009, at 11:47 AM, Mike Belshe wrote:

 I'd like to understand what's going to happen with SunSpider in the future.
  Here is a set of questions and criticisms.  I'm interested in how these can
 be addressed.

 There are 3 areas I'd like to see improved in
 SunSpider, some of which we've discussed before:


 #1: SunSpider is currently version 0.9.  Will SunSpider ever change?  Or is
 it static?
 I believe that benchmarks need to be able to move with the times.  As JS
 engines change and improve, and as new areas need to be benchmarked, we need
 to be able to roll the version, fix bugs, and benchmark new features.  The
 SunSpider version has not changed for ~2yrs.  How can we change this
 situation?  Are there plans for a new version already underway?


 I've been thinking about updating SunSpider for some time. There are two
 categories of changes I've thought about:

 1) Quality-of-implementation changes to the harness. Among these might be
 ability to use the harness with multiple test sets. That would be 1.0.

 2) An updated set of tests - the current tests are too short, and don't
 adequately cover some areas of the language. I'd like to make the tests take
 at least 100ms each on modern browsers on recent hardware. I'd also be
 interested in incorporating some of the tests from the v8 benchmark suite,
 if the v8 developers were ok with this. That would be SunSpider 2.0.

 The reason I've been hesitant to make any changes is that the press and
 independent analysts latched on to SunSpider as a way of comparing
 JavaScript implementations. Originally, it was primarily intended to be a
 tool for the WebKit team to help us make our JavaScript faster. However, now
 that third parties are relying on it, there are two things I want to be really
 careful about:

 a) I don't want to invalidate people's published data, so significant
 changes to the test content would need to be published as a clearly separate
 version.

 b) I want to avoid accidentally or intentionally making changes that are
 biased in favor of Safari or WebKit-based browsers in general, or that even
 give that impression. That would hurt the test's credibility. When we first
 made SunSpider, Safari actually didn't do that great on it, which I think
 helped people believe that the test wasn't designed to make us look good, it
 was designed to be a relatively unbiased comparison.

 Thus, any change to the content would need to be scrutinized in some way.
 I'm not sure what it would take to get widespread agreement that a 2.0
 content set is fair, but I agree it's time to make one soonish (before the
 end of the year probably). Thoughts on this are welcome.


 #2: Use of summing as a scoring mechanism is problematic
 Unfortunately, the sum-based scoring techniques do not withstand the test
 of time as browsers improve.  When the benchmark was first introduced, each
 test was equally weighted and reasonably large.  Over time, however, the
 test becomes dominated by the slowest tests - basically the weighting of the
 individual tests is variable based on the performance of the JS engine under
 test.  Today's engines spend ~50% of their time on just string and date
 tests.  The other tests are largely irrelevant at this point, and becoming
 less relevant every day.  Eventually many of the tests will take near-zero
 time, and the benchmark will have to be scrapped unless we figure out a
 better way to score it.  Benchmarking research which long pre-dates
 SunSpider confirms that geometric means provide a better basis for
 comparison:  http://portal.acm.org/citation.cfm?id=5673 Can future
 versions of the SunSpider driver be made so that they won't become
 irrelevant over time?


 Use of summation instead of geometric mean was a considered choice. The
 intent is that engines should focus on whatever is slowest. A simplified
 example: let's say it's estimated that likely workload in the field will
 consist of 50% Operation A, and 50% of Operation B, and I can benchmark them
 in isolation. Now let's say in implementation Foo these operations are
 equally fast, while in implementation Bar, Operation A is 4x as fast as in
 Foo, while Operation B is 4x as slow as in Foo. A comparison by geometric
 means would imply that Foo and Bar are equally good, but Bar would actually
 be twice as slow on the intended workload.
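
As an aside, Maciej's Foo/Bar numbers are easy to check; this throwaway
snippet (illustrative units only) computes both scores for a 50/50 A+B
workload:

// Illustrative check of the Foo/Bar example: times are arbitrary units for a
// workload that is 50% Operation A and 50% Operation B.
const foo = { a: 100, b: 100 };           // Foo: A and B equally fast
const bar = { a: 25,  b: 400 };           // Bar: A is 4x faster, B is 4x slower

const sum = t => t.a + t.b;               // SunSpider-style total
const geomean = t => Math.sqrt(t.a * t.b); // geometric mean of the two

console.log(sum(foo), sum(bar));          // 200 vs 425 -> Bar ~2x slower overall
console.log(geomean(foo), geomean(bar));  // 100 vs 100 -> geomean calls them equal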


BTW - the way to work around this is to have enough sub-benchmarks such that
this just doesn't happen.  If we have the right test coverage, it seems
unlikely to me that a code change would dramatically improve exactly one
test at an exponential expense of exactly one other test.  I'm not saying it
is impossible - just that code changes don't generally cause that behavior.
 To combat this we can implement a broader base of benchmarks as well as
longer-running tests that are not too micro.

This brings up another problem with summation.  The only case

Re: [webkit-dev] Iterating SunSpider

2009-07-07 Thread Mike Belshe
On Tue, Jul 7, 2009 at 4:20 PM, Maciej Stachowiak m...@apple.com wrote:


 On Jul 7, 2009, at 4:01 PM, Mike Belshe wrote:


 I'd like benchmarks to:
a) have meaning even as browsers change over time
b) evolve.  as new areas of JS (or whatever) become important, the
 benchmark should have facilities to include that.

 Fair?  Good? Bad?


 I think we can't rule out the possibility of a benchmark becoming less
 meaningful over time. I do think that we should eventually produce a new and
 rebalanced set of test content. I think it's fair to say that time is
 approaching for SunSpider.


I certainly agree that updating the benchmark over time is necessary :-)




 In particular, I don't think geometric means are a magic bullet.


Yes, using a geometric mean does not mean that you never need to update the
test suite.  But it does give you a lot of mileage :-)  And I think it's
closer to an industry standard than anything else (spec.org).



 When SunSpider was first created, regexps were a small proportion of the
 total execution in what were the fastest publicly available engines at the time.
 Eventually, everything else got much faster. So at some point, SunSpider
 said it might be a good idea to quadruple the speed of regexp matching
 now. But if it used a geometric mean, it would always say it's a good idea
 to quadruple the speed of regexp matching, unless it omitted regexp tests
 entirely. From any starting point, and regardless of speed of other
 facilities, speeding up regexps by a factor of N would always show the same
 improvement in your overall score. SunSpider, on the other hand, was
 deliberately designed to highlight the area where an engine most needs
 improvement.


I don't think the optimization of regex would have been affected by using a
different scoring mechanism.  In both scoring methods, the score of the
slowest test is the best pick for improving your overall score.  So vendors
would still need to optimize it to keep up.


Mike




 I think the only real way to deal with this is to periodically revise and
 rebalance the benchmark.

 Regards,
 Maciej




Re: [webkit-dev] Iterating SunSpider

2009-07-07 Thread Mike Belshe
On Tue, Jul 7, 2009 at 7:01 PM, Maciej Stachowiak m...@apple.com wrote:


 On Jul 7, 2009, at 6:43 PM, Mike Belshe wrote:


 (There are other benchmarks that use summation, for example iBench, though
 I am not sure these are examples of excellent benchmarks. Any benchmark that
 consists of a single test also implicitly uses summation. I'm not sure what
 other benchmarks do is as relevant as the technical merits.)

Hehe - I don't think anyone has iBench except Apple :-)


 This is now extremely tangential to the original point, but iBench is
 available to the general public here: 
 http://www.lionbridge.com/lionbridge/en-US/services/software-product-engineering/testing-veritest/benchmark-software.htm
 


Thanks!




  A lot of research has been put into benchmarking over the years; there is
 good reason for these choices, and they aren't arbitrary.  I have not seen
 research indicating that summing of scores is statistically useful, but
 there are plenty that have chosen geometric means.



 I think we're starting to repeat our positions at this point, without
 adding new information or really persuading each other.

 If you have research that shows statistical benefits to geometric mean
 scoring, or other new information to add, I would welcome it.


Only what is already on this thread, or Google "geometric mean benchmark".

Mike





 Regards,
 Maciej




Re: [webkit-dev] Iterating SunSpider

2009-07-05 Thread Mike Belshe
On Sat, Jul 4, 2009 at 3:27 PM, Maciej Stachowiak m...@apple.com wrote:


 On Jul 4, 2009, at 11:47 AM, Mike Belshe wrote:

 I'd like to understand what's going to happen with SunSpider in the future.
  Here is a set of questions and criticisms.  I'm interested in how these can
 be addressed.

 There are 3 areas I'd like to see improved in
 SunSpider, some of which we've discussed before:


 #1: SunSpider is currently version 0.9.  Will SunSpider ever change?  Or is
 it static?
 I believe that benchmarks need to be able to move with the times.  As JS
 engines change and improve, and as new areas need to be benchmarked, we need
 to be able to roll the version, fix bugs, and benchmark new features.  The
 SunSpider version has not changed for ~2yrs.  How can we change this
 situation?  Are there plans for a new version already underway?


 I've been thinking about updating SunSpider for some time. There are two
 categories of changes I've thought about:

 1) Quality-of-implementation changes to the harness. Among these might be
 ability to use the harness with multiple test sets. That would be 1.0.


Cool



 2) An updated set of tests - the current tests are too short, and don't
 adequately cover some areas of the language. I'd like to make the tests take
 at least 100ms each on modern browsers on recent hardware. I'd also be
 interested in incorporating some of the tests from the v8 benchmark suite,
 if the v8 developers were ok with this. That would be SunSpider 2.0.


Cool.  Use of v8 tests is just fine; they're all open source.



 The reason I've been hesitant to make any changes is that the press and
 independent analysts latched on to SunSpider as a way of comparing
 JavaScript implementations. Originally, it was primarily intended to be a
 tool for the WebKit team to help us make our JavaScript faster. However, now
 that third parties are relying on it, there are two things I want to be really
 careful about:

 a) I don't want to invalidate people's published data, so significant
 changes to the test content would need to be published as a clearly separate
 version.


Of course.  Small UI nit - the current SunSpider benchmark doesn't make the
version very prominent at all.  It would be nice to make it more salient.



 b) I want to avoid accidentally or intentionally making changes that are
 biased in favor of Safari or WebKit-based browsers in general, or that even
 give that impression. That would hurt the test's credibility. When we first
 made SunSpider, Safari actually didn't do that great on it, which I think
 helped people believe that the test wasn't designed to make us look good, it
 was designed to be a relatively unbiased comparison.


Of course.



 Thus, any change to the content would need to be scrutinized in some way.
 I'm not sure what it would take to get widespread agreement that a 2.0
 content set is fair, but I agree it's time to make one soonish (before the
 end of the year probably). Thoughts on this are welcome.


 #2: Use of summing as a scoring mechanism is problematic
 Unfortunately, the sum-based scoring techniques do not withstand the test
 of time as browsers improve.  When the benchmark was first introduced, each
 test was equally weighted and reasonably large.  Over time, however, the
 test becomes dominated by the slowest tests - basically the weighting of the
 individual tests is variable based on the performance of the JS engine under
 test.  Today's engines spend ~50% of their time on just string and date
 tests.  The other tests are largely irrelevant at this point, and becoming
 less relevant every day.  Eventually many of the tests will take near-zero
 time, and the benchmark will have to be scrapped unless we figure out a
 better way to score it.  Benchmarking research which long pre-dates
 SunSpider confirms that geometric means provide a better basis for
 comparison:  http://portal.acm.org/citation.cfm?id=5673 Can future
 versions of the SunSpider driver be made so that they won't become
 irrelevant over time?


 Use of summation instead of geometric mean was a considered choice. The
 intent is that engines should focus on whatever is slowest. A simplified
 example: let's say it's estimated that likely workload in the field will
 consist of 50% Operation A, and 50% of Operation B, and I can benchmark them
 in isolation. Now let's say in implementation Foo these operations are
 equally fast, while in implementation Bar, Operation A is 4x as fast as in
 Foo, while Operation B is 4x as slow as in Foo. A comparison by geometric
 means would imply that Foo and Bar are equally good, but Bar would actually
 be twice as slow on the intended workload.


I could almost buy this if:
   a)  we had a really really representative workload of what web pages do,
broken down into the exactly correct proportions.
   b)  the representative workload remains representative over time.

I'll argue that we'll never be very good at (a), and that (b) is impossible.

So, what

[webkit-dev] Iterating SunSpider

2009-07-04 Thread Mike Belshe
I'd like to understand what's going to happen with SunSpider in the future.
 Here is a set of questions and criticisms.  I'm interested in how these can
be addressed.

There are 3 areas I'd like to see improved in
SunSpider, some of which we've discussed before:

#1: SunSpider is currently version 0.9.  Will SunSpider ever change?
Or is it static?
I believe that benchmarks need to be able to move with the times.  As JS
engines change and improve, and as new areas need to be benchmarked, we need
to be able to roll the version, fix bugs, and benchmark new features.  The
SunSpider version has not changed for ~2yrs.  How can we change this
situation?  Are there plans for a new version already underway?

#2: Use of summing as a scoring mechanism is problematic
Unfortunately, the sum-based scoring techniques do not withstand the test of
time as browsers improve.  When the benchmark was first introduced, each
test was equally weighted and reasonably large.  Over time, however, the
test becomes dominated by the slowest tests - basically the weighting of the
individual tests is variable based on the performance of the JS engine under
test.  Today's engines spend ~50% of their time on just string and date
tests.  The other tests are largely irrelevant at this point, and becoming
less relevant every day.  Eventually many of the tests will take near-zero
time, and the benchmark will have to be scrapped unless we figure out a
better way to score it.  Benchmarking research which long pre-dates
SunSpider confirms that geometric means provide a better basis for
comparison:  http://portal.acm.org/citation.cfm?id=5673 Can future versions
of the SunSpider driver be made so that they won't become irrelevant over
time?
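
One shape such scoring could take is a geometric mean of per-test ratios
against a fixed reference run - a sketch of the idea, not SunSpider or Dromaeo
code:

    // Score as the geometric mean of per-test ratios against a reference run.
    // Because each test contributes as a ratio, a test that becomes very fast
    // does not shrink to irrelevance the way it does in a simple sum.
    function score(times, referenceTimes) {
      var logSum = 0, n = 0;
      for (var name in times) {
        logSum += Math.log(times[name] / referenceTimes[name]);
        n++;
      }
      return Math.exp(logSum / n);   // < 1.0 means faster than the reference
    }
    // e.g. score({string: 120, date: 80}, {string: 100, date: 100}) is ~0.98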

#3: The SunSpider harness has a variance problem due to CPU power savings
modes.
Because the test runs a tiny amount of Javascript (often under 10ms)
followed by a 500ms sleep, CPUs will go into power savings modes between
test runs.  This radically changes the performance measurements and makes it
so that comparison between two runs is dependent on the user's power savings
mode.  To demonstrate this, run SunSpider on two machines - one with the
Windows balanced (default) setting for power, and then again with high
performance.  It's easy to see skews of 30% between these two modes.  I
think we should change the test harness to avoid such accidental effects.

(BTW - if you change SunSpider's sleep from 500ms to 10ms, the test runs in
just a few seconds.  It is unclear to me why the pauses are so large.  My
browser gets a 650ms score, so run 5 times, the test should take ~3000ms.
But due to the pauses, it takes over 1 minute to run the test, leaving the CPU
~96% idle).
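
For reference, the driver pattern being described looks roughly like this (an
illustrative sketch, not the actual SunSpider harness code):

    // Run each test, then sleep PAUSE_MS before the next one.  With a 500ms
    // pause and ~10ms of actual JS per test, the CPU sits idle almost the
    // whole time, long enough for power-saving modes to kick in.
    var PAUSE_MS = 500;   // dropping this to ~10 makes the run take seconds
    function runAll(tests, done) {
      var times = [], i = 0;
      function next() {
        if (i >= tests.length) { done(times); return; }
        var start = new Date().getTime();
        tests[i++]();
        times.push(new Date().getTime() - start);
        setTimeout(next, PAUSE_MS);
      }
      next();
    }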

Possible solution:
The Dromaeo test suite already incorporates the SunSpider individual tests
under a new benchmark harness which fixes all 3 of the above issues.  Thus,
one approach would be to retire SunSpider 0.9 in favor of Dromaeo.
http://dromaeo.com/?sunspider  Dromaeo has also done a lot of good work to
ensure statistical significance of the results.  Once we have a better
benchmarking framework, it would be great to build a new microbenchmark mix
which more realistically exercises today's JavaScript.

Thanks,
Mike
___
webkit-dev mailing list
webkit-dev@lists.webkit.org
http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev


Re: [webkit-dev] how to have multiple Javascript engines inside Webkit?

2008-11-08 Thread Mike Belshe
Hi, Haithem,
Both JSC and V8 run through the Chromium tree, so you could try that.
If you run test_shell, it's pretty much just a shell for driving the
WebKit engine, and it works with both JS engines.
You can find more information here for how to build it:
http://dev.chromium.org/developers/how-tos/getting-around-the-chrome-source-code

Basically there are two Visual Studio projects; one builds with JSC
(chrome_kjs) and the other is just chrome.

If you have any trouble, drop me an email.

Mike


On Sat, Nov 8, 2008 at 2:00 PM, Darin Adler [EMAIL PROTECTED] wrote:

 On Nov 8, 2008, at 12:44 PM, haithem rahmani wrote:

  I would like, for benchmarking purposes, to have different Javascript
 engines inside Webkit and to have a runtime option to enable/disable them.


 That's not supported; it would be very difficult to do so efficiently and
 no one has even tried to do this.

-- Darin

 ___
 webkit-dev mailing list
 webkit-dev@lists.webkit.org
 http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev

___
webkit-dev mailing list
webkit-dev@lists.webkit.org
http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev


Re: [webkit-dev] My Windows build notes

2008-10-14 Thread Mike Belshe
Thanks Adam.

On Tue, Oct 14, 2008 at 5:47 AM, Adam Roben [EMAIL PROTECTED] wrote:

 On Oct 13, 2008, at 7:09 PM, Mike Belshe wrote:

 It took me a while to get my windows build going, so I thought I'd share
 what I learned:


 Thanks, Mike! This kind of information is very helpful in keeping our
 instructions up-to-date.

 1) I had to completely start over with cygwin.  I uninstalled and
 reinstalled using the cygwin-downloader from here:
 http://webkit.org/building/tools.html

 2) Several components were missing from cygwin:
 - perl, make, gcc, bison, gperf, curl, unzip, flex

 3) Used cpan get Win32API::Registry to download that module


 Steps (2) and (3) were required even after installing via
 cygwin-downloader? You can see the list of packages it installs here: 
 http://trac.webkit.org/browser/trunk/WebKitTools/CygwinDownloader/cygwin-downloader.py#L47.
 You can see that the list includes all the packages you mentioned (even
 perl-libwin32, which should install the Win32API::Registry module, I
 believe).


Yes.  I've read a bit about multiple cygwin installations - maybe I have
multiple cygwins on one box?  The cygwin install/uninstall process is
basically voodoo.  Even after I deleted everything cygwin-related I could
find and reinstalled using the cygwin-downloader, I still didn't have these
components.

Overall, I think cygwin is really brittle.  We might be able to make this
more reliable by putting this into the Windows-specific tooling and then
referencing it explicitly rather than relying on cygwin's installer?






 4) After downloading the source, I also had to run update-webkit.  I
 suspect this is a required step, although I don't think it is documented?


 This is documented here http://webkit.org/building/checkout.html:


    1. Type this command to update your source tree:

           WebKit/WebKitTools/Scripts/update-webkit

       If you downloaded the tarball, this will bring it up to date. Windows
       users must always execute this command after first obtaining the code,
       since it will download additional libraries that are needed to build.


Not sure how I missed that.  Seems so obvious now! :-)



 I'm happy to update documentation if you point me at it; but I'm not sure
 if my experience is due to pilot error or if things have changed.


 The documentation all lives in the WebKitSite directory of the WebKit
 source tree. Patches to clarify/fix the build instructions can be posted on
 https://bugs.webkit.org/. Thanks!

 -Adam


___
webkit-dev mailing list
webkit-dev@lists.webkit.org
http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev


[webkit-dev] My Windows build notes

2008-10-13 Thread Mike Belshe
It took me a while to get my windows build going, so I thought I'd share
what I learned:
1) I had to completely start over with cygwin.  I uninstalled and
reinstalled using the cygwin-downloader from here:
http://webkit.org/building/tools.html

2) Several components were missing from cygwin:
- perl, make, gcc, bison, gperf, curl, unzip, flex

3) Used cpan get Win32API::Registry to download that module

4) After downloading the source, I also had to run update-webkit.  I suspect
this is a required step, although I don't think it is documented?

I'm happy to update documentation if you point me at it; but I'm not sure if
my experience is due to pilot error or if things have changed.

Thanks,
Mike
___
webkit-dev mailing list
webkit-dev@lists.webkit.org
http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev


Re: [webkit-dev] setTimeout as browser speed throttle

2008-10-01 Thread Mike Belshe
I think you've already seen this, but in case you haven't - here is the bug
where I've been tracking this.  Every report I've seen related to minimum
timers is referenced in here.
   http://code.google.com/p/chromium/issues/detail?id=792

I think the evidence is pretty compelling that something lower than 10ms,
down to 1ms, is a better value.

Mike




On Tue, Sep 30, 2008 at 11:55 PM, Maciej Stachowiak [EMAIL PROTECTED] wrote:


 On Sep 30, 2008, at 10:36 PM, Darin Fisher wrote:

 On Tue, Sep 30, 2008 at 7:14 PM, Maciej Stachowiak [EMAIL PROTECTED] wrote:

 ...

 2) Consider making WebKit's default minimum timer limit lower - something
 like 3ms-5ms. I don't know what we would do to verify that this is safe
 enough or who would do the work. Maybe Hyatt?



 We are in the process of verifying this now ;-)  Our eyes and ears are open
 (have been) for bug reports related to this.  Right now, we are happy to be
 trying something radical.  In the absence of problem reports, when do we
 declare success?


 Based on what Peter said, it sounds like there has been at least one
 problem report. There was at least one more vague report made informally in
 John Resig's blog comments. Maybe we need to start by agreeing what counts
 as absence of problem reports.

 However, by way of comparison, we added a 10ms clamp three and a half years
 after the first Safari beta was released to the public, as it took about
 that long to get enough bug reports to convince us that we had to do it for
 compatibility.

 Given that, I think we may want to find some more active way to look for
 potential problems and potential benefits.

 Regards,
 Maciej


 ___
 webkit-dev mailing list
 webkit-dev@lists.webkit.org
 http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev


___
webkit-dev mailing list
webkit-dev@lists.webkit.org
http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev


Re: [webkit-dev] Proposed Timer API

2008-10-01 Thread Mike Belshe
If you're going to propose a new API designed for hi-res timers, it ought to
use units of microseconds instead of milliseconds.
Mike
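
Purely as an illustration of the unit question - a hypothetical sketch of what
such a timer might look like with its delay in microseconds.  This is not a
real or proposed interface; the setTimeout fallback is only there so the
sketch runs:

    // Hypothetical sketch only: a periodic timer whose delay is specified in
    // microseconds.  A native implementation would not round to milliseconds.
    function HighResTimer(delayMicros, callback) {
      this.delayMicros = delayMicros;
      this.callback = callback;
      this.id = null;
    }
    HighResTimer.prototype.start = function () {
      var self = this;
      this.id = setTimeout(function () { self.callback(); self.start(); },
                           this.delayMicros / 1000);
    };
    HighResTimer.prototype.stop = function () { clearTimeout(this.id); };

    // Usage: new HighResTimer(250 /* microseconds */, onTick).start();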


On Tue, Sep 30, 2008 at 7:32 PM, Justin Haygood [EMAIL PROTECTED]wrote:


 http://blog.justinhaygood.com/2008/09/30/proposed-high-resolution-timer-api/

 It's based off Adobe's flash.utils.Timer API that they have in AS3. Once
 I get enough WK comments, I'll see about how to get it to HTML5...

 --Justin Haygood

 ___
 webkit-dev mailing list
 webkit-dev@lists.webkit.org
 http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev


___
webkit-dev mailing list
webkit-dev@lists.webkit.org
http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev


Re: [webkit-dev] JS binding wapper pointers: inline vs. separate hash table

2008-10-01 Thread Mike Belshe
On Wed, Oct 1, 2008 at 4:30 PM, Peter Kasting [EMAIL PROTECTED] wrote:

 On Wed, Oct 1, 2008 at 4:03 PM, Mike Belshe [EMAIL PROTECTED] wrote:

 Also - Chrome currently taps into RefCountable and adds Peerable across
 any RefCountable object, whether it needs Peerable or not.  Strings are an
 obvious example where we don't need Peer, and there are a lot of String
 objects.  We took this tax in Chrome because we didn't want to fork further
 from Webkit, and we didn't see a better way to do it.  We hope to correct
 this soon as we reconcile differences with WebKit.


 I bet this weighs pretty heavily in the memory figuring below.  Without
 hard evidence, I'd suspect that just making things Peerable only if they
 need to be (not even including some of Maciej, Sam, et al.'s earlier
 suggestions on tuning that down even further) would slash the memory numbers
 below pretty noticeably.


I think Mads already accounted for this, so the numbers below should be
pretty accurate.



 For each RefCountable, we can save 8 bytes if we remove Peerable.  For
 each TreeShared we can only save 4 bytes because TreeShared already has a
 vtable pointer.


 Mike, do you know if this is before or after the effort to chop the memory
 impact of Peerable by reducing the inherent overhead?  Seems like those 8
 bytes were going to get reduced to 4 (which would save noticeably)?


I think this is our current implementation; so if we can do better than 8
bytes, it would be less.



 Site                     Total size   Potential savings
 www.cnn.com:             43M          410K
 www.facebook.com:        43M          408K
 www.slashdot.org:        36M          208K
 m.ext.google.com:        45M          475K
 docs (normal doc):       42M          341K
 docs (big spreadsheet):  55M          905K
 maps:                    38M          159K


 My guess is that docs balloons so much between normal doc and big
 spreadsheet in large part because of the overhead of strings getting hit
 with this (although maybe it also creates lots more DOM nodes there).

 I guess the summary of all these comments from me is I strongly suspect
 that, if we remove the previous constraint of making as few changes as
 possible, we can slash these memory overhead numbers by at least 50% if not
 more.

 PK

___
webkit-dev mailing list
webkit-dev@lists.webkit.org
http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev


Re: [webkit-dev] setTimeout as browser speed throttle

2008-09-30 Thread Mike Belshe
Thanks for the concrete examples, Dave!  I tested all 3 of these, and
haven't yet found any problems.  But I don't have specific URLs.  I also
looked through the webkit bugs database as much as I could, and could not
locate them.
BTW - if the primary concern is not spinning the CPU, we could just drop the
throttle down to 2 or 3ms, and the CPU will be largely idle on most
machines.  In a few years, with faster processors, we'll be able to drop to
1ms and still have largely idle CPU.

Would 3ms be viable in your eyes?

Mike


On Mon, Sep 29, 2008 at 8:58 PM, David Hyatt [EMAIL PROTECTED] wrote:

 We encountered 100% CPU spins on amazon.com, orbitz.com, mapquest.com,
 among others (looking through Radar histories).  This was pre-clamp.  Web
 sites make this mistake because they don't know any better, and it works
 fine in IE.  It is a mistake these sites will continue to make, and Chrome
 is the only browser that will be susceptible.  Being different from IE here
 is not a good thing.  You will end up having to evangelize sites over and
 over to fix 100% CPU spins that occur only in your browser.  Do you really
 want that kind of headache?
 A new API will let Web apps get the performance they need while avoiding
 compatibility problems.

 dave

 On Sep 29, 2008, at 10:06 PM, Maciej Stachowiak wrote:


 On Sep 29, 2008, at 7:26 PM, Mike Belshe wrote:

 Hi,
 One of the differences between Chrome and Safari is that Chrome sets the
 setTimeout clamp to 1ms as opposed to 10ms.  This means that if the
 application writer requests a timer of less than 10ms, Chrome will allow it,
 whereas Safari will clamp the minimum timeout to 10ms.  The reason we did
 this was to minimize browser delays when running graphical javascript
 applications.

 This has been a concern for some, so I wanted to bring it up here and get
 an open discussion going.  My hope is to lower or remove the clamp over
 time.

 To demonstrate the benefit, here is one test case which benefits from
 removing the setTimeout clamp.  Chrome gets about a ~4x performance boost by
 reducing the setTimeout clamp.  This programming pattern in javascript is
 very common.


 http://www.belshe.com/test/sort/sort.html

 One counter argument brought up is a claim that all other browsers use a
 10ms clamp, and this might cause incompatibilities.  However, it turns out
 that browsers already use widely varying values.


 I believe all major browsers (besides Chrome) have a minimum of either 10ms
 or 15.6ms. I don't think this is widely varying.

  We also really haven't seen any incompatibilities due to this change.  It
 is true that having a lower clamp can provide an easy way for web developers
 to accidentally spin the CPU, and we have seen one high-profile instance of
 this.  But of course spinning the CPU can be done in javascript all by
 itself :-)


 The kinds of problems we are concerned about are of four forms:

 1) Animations that run faster than intended by the author (it's true that
 10ms vs 16ms floors will give slight differences in speed, but not nearly as
 much so as 10ms vs no delay).

 2) Burning CPU and battery on pages where the author did not expect this to
 happen, and had not seen it on the browsers he or she has tested with.

 3) Possibly slowing things down if a page is using a 0-delay timer to poll
 for completion of network activity. The popular JavaScript library jQuery
 does this to detect when all stylesheets have loaded. Lack of clamping could
 actually slow down the loading it is intended to wait for.

 4) Future content that is authored in one of Safari or Chrome that depends
 on timing of 0-delay timers will have different behavior in the other. Thus,
 we get less compatibility benefit for WebKit-based browsers through
 cross-testing.

 The fact that you say you have seen one high-profile instance doesn't sound
 to me like there are no incompatibilities. It sounds like there are some,
 and you have encountered at least one of them. Points 1 and 2 are what made
 us add the timer minimum in the first place, as documented in WebKit's SVN
 history and ChangeLogs. We originally did not have one, and added it for
 compatibility with other browsers.

 Currently Chrome gets an advantage on some benchmarks by accepting this
 compatibility risk. This leads to misleading performance comparisons, in
 much the same way as firing the load event before images are loaded would.

 Here is a summary of the minimum timeout for existing browsers (you can
 test your browser with this page: http://www.belshe.com/test/timers.html):
 Safari for the Mac:  10ms
 Safari for Windows:  15.6ms
 Firefox:             10ms or 15.6ms, depending on whether or not Flash is
                      running on the system
 IE:                  15.6ms
 Chrome:              1ms (future - remove the clamp?)

 So here are a couple of options:
1) Remove or lower the clamp so that javascript

Re: [webkit-dev] setTimeout as browser speed throttle

2008-09-30 Thread Mike Belshe
Thanks - I did see that bug.  Intentionally spinning the CPU via
setTimeout(..., 0) is not a problem if it is what the application intended.
The bug you mention lists mapquest as a potential site with this issue, but I
can't reproduce that, and there are no specific URLs.
Mike


2008/9/30 Alexey Proskuryakov [EMAIL PROTECTED]


 On Sep 30, 2008, at 6:37 PM, Mike Belshe wrote:

  Thanks for the concrete examples, Dave!  I tested all 3 of these, and
 haven't yet found any problems.  But I don't have specific URLs.  I also
 looked through the webkit bugs database as much as I could, and could not
 locate them.



 One example is https://bugs.webkit.org/show_bug.cgi?id=6998.

 - WBR, Alexey Proskuryakov



___
webkit-dev mailing list
webkit-dev@lists.webkit.org
http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev


Re: [webkit-dev] setTimeout as browser speed throttle

2008-09-30 Thread Mike Belshe
I think we agree on this point - it's a matter of what the website author
intended.  I believe there are legitimate cases to ask for setTimeout(1).
With a pure clamp-based approach, this feature is not available today.  On
the other hand, if the website author accidentally uses setTimeout(1), then
they'll see varying behavior on different browsers.
I'm sure most website authors are not aware that setTimeout(1) could mean 10
or 15ms.  The distinction here is quite subtle, and of course there is no
standard which indicates that it would be anything other than the timeout
requested.

As for keeping the fan off - if a 3ms minimum timeout loop keeps the CPU
idle, does that resolve your concern?
Mike


2008/9/30 Alexey Proskuryakov [EMAIL PROTECTED]


 On Sep 30, 2008, at 8:13 PM, Mike Belshe wrote:

  Thanks - I did see that bug.  Intentionally spinning the CPU via
 setTimeout(..., 0) is not a problem if it is what the application intended.



 I don't quite agree - even though this may have been the intention, the
 application developer will not be aware of all the consequences without
 testing in Safari/Chrome. The site will work perfectly in IE and Firefox,
 but in WebKit-based browsers, it will eat battery, make the computer too hot
 to hold on one's knees, and change the pitch of noise coming from it. These
 are quite practical issues.

 It may also work faster - but chances are that this delay is not
 important for user experience, given that the code was deployed and works
 adequately in other browsers.

 Otherwise, I certainly agree that having a high resolution timer support is
 a good idea.

 - WBR, Alexey Proskuryakov


___
webkit-dev mailing list
webkit-dev@lists.webkit.org
http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev


Re: [webkit-dev] setTimeout as browser speed throttle

2008-09-30 Thread Mike Belshe


  BTW - if the primary concern is not spinning the CPU, we could just drop
 the throttle down to 2 or 3ms, and the CPU will be largely idle on most
 machines.  In a few years, with faster processors, we'll be able to drop to
 1ms and still have largely idle CPU.

 Would 3ms be viable in your eyes?


 Yes, I was considering this also.  To me the primary issue with removing
 the setTimeout clamp is CPU hogging and not animation speed.  I do think
 lowering the clamp is reasonable.  I still think we should have an unclamped
 high res timer API though, regardless of what we decide with setTimeout.


OK - I created a second version of my timer test to help with testing of
this.  (these are only interesting tests to run on browsers which unclamp
the timers)

Here is the test I created before:  http://www.belshe.com/test/timers.html
The problem is that this will use lots of CPU just doing paints (it paints
with every iteration).  Most background spinners don't have this property.

http://www.belshe.com/test/timers2.html
This case only updates the UI every 1000 iterations.  With this test, the
CPU load is almost zero even with a 2ms timer.
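
The pattern is roughly this (a sketch of the idea, not the actual
timers2.html source):

    // Do a tiny unit of work on every timer tick, but only touch the DOM
    // once every 1000 iterations; CPU load stays near zero even at ~2ms.
    var iterations = 0;
    function tick() {
      iterations++;
      if (iterations % 1000 === 0) {
        document.title = 'iterations: ' + iterations;   // the rare UI update
      }
      setTimeout(tick, 2);
    }
    tick();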

Using a slightly more conservative minimum should account for slower
processors and also allow for a little more work to be done in the
setInterval loop.

Regarding the API - yes.  I'd like the units of time for the new API to be
at most measured in microseconds; 100-nanos might be more forward-thinking.

Mike
___
webkit-dev mailing list
webkit-dev@lists.webkit.org
http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev


Re: [webkit-dev] setTimeout as browser speed throttle

2008-09-30 Thread Mike Belshe
Subjective note:

I'm much more worried about sites spinning the CPU accidentally (e.g. they
used setTimeout(0) somewhere by accident) than I am about frame rates on
games.  Using the clock as your frame rate is super buggy, and sites need to
know better.  It won't work now and it won't work going forward.

If you recall - there used to be a TURBO button on PCs as they made the
switch from 8MHz to 12MHz to address this issue.  Turbo buttons don't exist
anymore.

Anyway, this is a personal preference; I have little sympathy on the game
front - but more sympathy on the accidental CPU front.

Mike




On Tue, Sep 30, 2008 at 2:47 PM, Oliver Hunt [EMAIL PROTECTED] wrote:


 On Sep 30, 2008, at 1:41 PM, Peter Kasting wrote:

 On Tue, Sep 30, 2008 at 1:35 PM, Brady Eidson [EMAIL PROTECTED] wrote:

 If we add a new well specified API that all browser vendors agree on,
 everybody wins.


 No; everybody who's willing and able to change wins.


 Everyone else wins or loses depending on whether the new behavior is better
 or worse for them.  My argument is that this makes life better for nearly
 all pages affected.  The entire reason to change setTimeout() is precisely
 _because_ not everyone will change their web pages.


 Okay, let's try this.  There are 3 possibilities - win, no change, and lose -
 and here are the groups:

 Group                                       New API    No timer clamp
 Benefits from higher precision timer        No change  Win
 Hurt by high precision timer                No change  Lose
 Hurt by timer and willing to update         No change  Lose (extra work)
 Benefits from timer and willing to update   Win        Win

 So while two groups win with the chrome model, two groups actually lose
 either due to site breakage, or having to do extra work to avoid breakage.
  Whereas with a new API, while only one group actually wins, no others are
  affected.


 (Furthermore, I claim the number of people who will realize they could get
 something better, and change their code to get it, is lower than the number
 of people who will see that something is wrong and fix it.)

 I would disagree -- people who need high-precision timers, and realise that
 they're there, *will* use them.  Sites that are broken by a buggy setTimeout
 implementation won't.  Hell, I have seen sites with actual bugs (e.g. bugs in
 the site, not in the browser) where I have provided an actual patch to
 correct the bug and they still don't fix it.  All a broken setTimeout
 implementation will do is result in a site making the easy change of saying
 "don't use this browser because it's broken."  They do that even when the
 bug is in their site, so an actual browser bug is even easier to ignore.



 negates the need to introduce new incompatibilities into the already
 published web by changing setTimeout().


 This still implies there is a meaningful compatibility hit to making this
 change.  I have not yet seen any reason to agree that is the case (in the
 sense of CPU usage is not a web compatibility issue).  There is _already_
 no compatibility here.  Browsers do completely different things, of an
 equivalent magnitude (6 ms) to the suggested change of 10 ms - 3 or 4 ms.
  Firefox is even different based on whether Flash happens to be running!
  How can there be compatibility problems introduced by this proposal that
 don't already exist?

 Um, I would guess on Vista all browsers have a 10ms timeout; on XP the only
 reason the 15ms timeout clamp exists is because of XP's low default timer
 resolution.  On Mac (and, I assume, all unixes/bsds/linux) the timeout clamp
 is likely to be 10ms.  But even the 15ms timeout is only 1.5x longer than
 10ms, whereas 1ms represents an order of magnitude difference.

 If we were to look at a game that, for instance, assumed a 15ms clamp on
 setTimeout and used that as the game clock tick (which happens), then a
 game that used to get maybe 50 updates a second will get 66 updates if you
 have a 10ms timer.  With a 1ms resolution timer, though, the game will get
 *160fps*, i.e. 3 times faster than was intended.

 --Oliver

 * A very quick google brought up
 http://www.c-point.com/javascript_tutorial/games_tutorial/how_to_create_games_using_javascript.htm
 which uses a 0ms timeout to trigger torpedo motion.
 * From a comment on John Resig's blog I saw a javascript game yesterday
 that had to LIMIT the framerate because Google Chrome made it unplayable
 -- so there are sites that have already had to do work to not break with
 this model.  How many more are out there?



 PK
 ___
 webkit-dev mailing list
 webkit-dev@lists.webkit.org
 http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev



 ___
 webkit-dev mailing list
 webkit-dev@lists.webkit.org
 http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev


___
webkit-dev mailing list
webkit-dev@lists.webkit.org
http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev


[webkit-dev] setTimeout as browser speed throttle

2008-09-29 Thread Mike Belshe
Hi,
One of the differences between Chrome and Safari is that Chrome sets the
setTimeout clamp to 1ms as opposed to 10ms.  This means that if the
application writer requests a timer of less than 10ms, Chrome will allow it,
whereas Safari will clamp the minimum timeout to 10ms.  The reason we did
this was to minimize browser delays when running graphical javascript
applications.

This has been a concern for some, so I wanted to bring it up here and get an
open discussion going.  My hope is to lower or remove the clamp over time.

To demonstrate the benefit, here is one test case which benefits from
removing the setTimeout clamp.  Chrome gets about a 4x performance boost by
reducing the setTimeout clamp.  This programming pattern in javascript is
very common.

   
http://www.belshe.com/test/sort/sort.html
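
The pattern in question is doing a large job in small slices with zero-delay
timers, so the effective speed is bounded by the clamp.  A minimal sketch of
the idea (illustrative only, not the actual sort.html code):

    // Process an array in small chunks, yielding to the event loop between
    // chunks via setTimeout(..., 0).  Every chunk pays the browser's minimum
    // timeout, so a 10ms clamp makes the whole job roughly 10x slower than a
    // 1ms clamp when the per-chunk work is small.
    function processInChunks(data, chunkSize, workFn, done) {
      var i = 0;
      function step() {
        var end = Math.min(i + chunkSize, data.length);
        for (; i < end; i++) {
          workFn(data[i]);
        }
        if (i < data.length) {
          setTimeout(step, 0);
        } else {
          done();
        }
      }
      step();
    }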

One counter argument brought up is a claim that all other browsers use a
10ms clamp, and this might cause incompatibilities.  However, it turns out
that browsers already use widely varying values.  We also really haven't
seen any incompatibilities due to this change.  It is true that having a
lower clamp can provide an easy way for web developers to accidentally spin
the CPU, and we have seen one high-profile instance of this.  But of course
spinning the CPU can be done in javascript all by itself :-)

Here is a summary of the minimum timeout for existing browsers (you can test
your browser with this page: http://www.belshe.com/test/timers.html):
Safari for the Mac:  10ms
Safari for Windows:  15.6ms
Firefox:             10ms or 15.6ms, depending on whether or not Flash is
                     running on the system
IE:                  15.6ms
Chrome:              1ms (future - remove the clamp?)
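
For reference, a page can estimate the effective minimum along these lines
(an illustrative sketch; the actual timers.html page may be implemented
differently):

    // Chain N zero-delay timers and divide the elapsed time by N.
    var RUNS = 100, count = 0, start = new Date().getTime();
    (function tick() {
      if (++count < RUNS) {
        setTimeout(tick, 0);
      } else {
        var avg = (new Date().getTime() - start) / RUNS;
        alert('effective minimum timeout: ~' + avg + 'ms');
      }
    })();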

So here are a couple of options:
   1) Remove or lower the clamp so that javascript apps can run
substantially faster.
   2) Keep the clamp and let them run slowly :-)

Thoughts?  It would be great to see Safari and Chrome use the same clamping
values.

Thanks,
Mike
___
webkit-dev mailing list
webkit-dev@lists.webkit.org
http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev