Re: [webkit-dev] setTimeout and Safari

2010-10-08 Thread Xianzhu Wang
I think a different way works: hooking window.setTimeout,
window.setInterval, window.clearTimeout and window.clearInterval after
the page is loaded and before any script is run, like the following:

(function() {
  var orgSetTimeout = window.setTimeout;
  window.setTimeout = function(f, t) {
    // ... bookkeeping before scheduling ...
    var id = orgSetTimeout.call(window, function() {
      // ... bookkeeping when the callback fires ...
      f();
    }, t);
    // ... record the pending id ...
    return id;
  };
  // ... wrap setInterval, clearTimeout and clearInterval similarly ...
})();

In Chrome, this can be done with a run_at document_start content
script in an extension.
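To make the hook concrete, here is a self-contained sketch that keeps a count of outstanding timeouts. The counter and the `pendingTimeoutCount` helper are my invention, not part of the original mail, and handling of setInterval and extra setTimeout arguments is omitted:

```javascript
(function(target) {
  // Count of setTimeout callbacks scheduled but not yet fired or cleared.
  var pending = 0;
  var live = new Map();  // timer id -> true while still outstanding

  var orgSetTimeout = target.setTimeout;
  var orgClearTimeout = target.clearTimeout;

  target.setTimeout = function(f, t) {
    var id = orgSetTimeout.call(target, function() {
      // The callback runs on a later turn, so `id` is assigned by now.
      if (live.has(id)) { live.delete(id); pending--; }
      f();
    }, t);
    live.set(id, true);
    pending++;
    return id;
  };

  target.clearTimeout = function(id) {
    if (live.has(id)) { live.delete(id); pending--; }
    return orgClearTimeout.call(target, id);
  };

  // Expose the count so a harness can poll for "no timers pending".
  target.pendingTimeoutCount = function() { return pending; };
})(globalThis);
```

A test harness would then poll `pendingTimeoutCount()` and treat the page as settled once it stays at zero.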

Xianzhu

2010/10/8 Steve Conover scono...@gmail.com:
 It's true.  Maybe I'm wrong about this but it seems to me that at some
 point most pages settle.  I'm also planning to put in a hard timeout
 in what I'm building.  And I'm slightly more concerned about js than
 css.

 Do you know of a good way of getting at an event queue or something
 else containing a list or a count of upcoming setTimeout/setInterval
 operations, in either Safari or QtWebKit?


 On Thu, Oct 7, 2010 at 12:41 PM, Simon Fraser simon.fra...@apple.com wrote:
 On Oct 7, 2010, at 12:23 PM, Steve Conover wrote:

 So that I don't have to guess whether a page is done rendering.
 Many developers defer rendering using setTimeout; I'd like to wait
 until the setTimeouts are done and then check the result.  This
 would be superior to guessing at a sleep interval in the calling code.

 Are you trying to choose a good time to snapshot the page?

 There are many things that can cause the page to keep changing; chained
 setTimeouts, setInterval, CSS transitions and animations, SVG animation,
 plugins etc etc. This is not a simple question to answer.

 Simon


 On Wed, Oct 6, 2010 at 5:54 PM, Simon Fraser simon.fra...@apple.com wrote:
 Why do you need to know if there are no more pending setTimeouts?

 Simon

 On Oct 6, 2010, at 5:52 PM, Steve Conover wrote:

 Hoping someone on -dev might have an idea about this...

 -Steve


 -- Forwarded message --
 From: Steve Conover scono...@gmail.com
 Date: Tue, Sep 28, 2010 at 11:19 PM
 Subject: Re: setTimeout and Safari
 To: webkit-h...@lists.webkit.org


 Actually I am discovering what one might describe as a normal
 problem here... how to know when the setTimeouts are done firing.  The
 ideal would be that I could somehow drill into the DOM implementation and
 ask whether any setTimeout events are waiting to fire (and stop
 polling if the queue length is zero).

 I'm sure that's way off in terms of how this is actually implemented.
 Does such a thing exist?  Could someone please point me to the
 relevant sourcecode?

 Regards,
 Steve

 On Tue, Sep 28, 2010 at 10:36 PM, Steve Conover scono...@gmail.com 
 wrote:
 Sigh.  Please disregard.  After an hour of troubleshooting, I sent
 this email, and two minutes later realized the problem was bad js
 (blush).

 On Tue, Sep 28, 2010 at 10:22 PM, Steve Conover scono...@gmail.com 
 wrote:
 I hope this is the right place to be asking this question.

 I'm using the Cocoa API, and am able to load a web page in a WebView.
 However, I have some JavaScript in the page that uses setTimeout to
 cause a function to fire 100ms in the future - but the page loads
 and ignores the setTimeouts.

 How do I get my setTimeouts to fire?  I suspect this has something to
 do with the Run Loop, but my experiments so far with various parts of
 the Run Loop API have been failures.

 Regards,
 Steve


 ___
 webkit-dev mailing list
 webkit-dev@lists.webkit.org
 http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev







Re: [webkit-dev] Pixel test experiment

2010-10-08 Thread Nikolas Zimmermann


On 08.10.2010, at 00:44, Maciej Stachowiak wrote:



On Oct 7, 2010, at 6:34 AM, Nikolas Zimmermann wrote:


Good evening webkit folks,

I've finished landing svg/ pixel test baselines, which pass with
--tolerance 0 on my 10.5 & 10.6 machines.
As the pixel testing is very important for the SVG tests, I'd like
to run them on the bots, experimentally, so we can catch
regressions easily.


Maybe someone with direct access to the leopard & snow leopard
bots could just run run-webkit-tests --tolerance 0 -p svg and
mail me the results?
If it passes, we could maybe run the pixel tests for the svg/
subdirectory on these bots?


Running pixel tests would be great, but can we really expect the  
results to be stable cross-platform with tolerance 0? Perhaps we  
should start with a higher tolerance level.


Sure, we could do that. But I'd really like to get a feeling for
what's problematic first. If we see 95% of the SVG tests pass with
--tolerance 0, and only a few need higher tolerances
(64-bit vs. 32-bit AA differences, etc.), I could come up with a
per-file pixel test tolerance extension to DRT, if it's needed.


How about starting with just one build slave (say, Mac Leopard) that
runs the pixel tests for SVG, with --tolerance 0 for a while. I'd be
happy to identify the problems, and see
if we can make it work, somehow :-)

Cheers,
Niko



Re: [webkit-dev] Pixel test experiment

2010-10-08 Thread Dirk Schulze
We missed many changes because of the existing tolerance level in the past. We 
made baselines for MacOS Leopard as well as Snow Leopard, and I would activate 
pixel tests just for those two bots. I don't expect any problems. Niko and I 
run pixel tests on different machines and get the same results.

Dirk

On 08.10.2010, at 00:44, Maciej Stachowiak wrote:

 
 On Oct 7, 2010, at 6:34 AM, Nikolas Zimmermann wrote:
 
 Good evening webkit folks,
 
 I've finished landing svg/ pixel test baselines, which pass with --tolerance 
 0 on my 10.5 & 10.6 machines.
 As the pixel testing is very important for the SVG tests, I'd like to run 
 them on the bots, experimentally, so we can catch regressions easily.
 
 Maybe someone with direct access to the leopard & snow leopard bots could 
 just run run-webkit-tests --tolerance 0 -p svg and mail me the results?
 If it passes, we could maybe run the pixel tests for the svg/ subdirectory 
 on these bots?
 
 Running pixel tests would be great, but can we really expect the results to 
 be stable cross-platform with tolerance 0? Perhaps we should start with a 
 higher tolerance level.
 
 Regards,
 Maciej
 



Re: [webkit-dev] Pixel test experiment

2010-10-08 Thread Nikolas Zimmermann


On 07.10.2010, at 22:28, Evan Martin wrote:


Chromium also runs pixel tests (for all tests).  For SVG, I recall we
have problems where 32-bit and 64-bit code will end up drawing
(antialiasing) curves differently.  Does this sound familiar?  Do you
have any suggestions on how to address it?


This doesn't sound familiar, because I don't have many machines to
test with. I'm mainly working on an older MacBook Pro, using 10.5,
32-bit, and a new MacBook Pro using 10.6, which is 64-bit. The pixel
test baseline in platform/mac/svg is generated using the 10.6 machine,
the 10.5 baseline using the other 32-bit machine. If I recall
correctly, I only saw a few AA differences, but many more font
differences.


When we had pixel tests enabled in the past, I recall that e.g. the
Leopard bot passed them 100%, and I had several hundred tests failing
on my local Leopard machine. Do you guys have baselines that pass on
most developer machines (default installations of e.g. Windows, no
special fonts installed)
_and_ the bot? Or do you always rely on the pixel test results from
the bots, and don't run pixel tests locally?


Cheers,
Niko



[webkit-dev] X-Purpose, again!

2010-10-08 Thread 蓋文彼德斯
Two weeks ago, I tried to land a patch in bug 46529 to add an X-Purpose:
prefetch request header to link prefetch motivated requests from WebKit.
 I'd like to land this change, as I think it's an important part of making
link prefetch work well.  Link prefetching is part of HTML5.  Link
prefetching has been implemented by the Firefox browser for years.
Chrome's prefetching experiments have shown that prefetching on the web
today makes browsing faster.

Currently, Firefox's longstanding prefetch implementation sends a request
header X-Moz: prefetch on its prefetch requests.  The proposed header name
X-Purpose was chosen because it is what Safari already uses for its bitmap
previews.  In particular, Safari sends X-Purpose: preview headers on
requests for resources and subresources motivated by the previw feature of
Safari.  I've also started a discussion in HTTPbis about changing this
header to simply Purpose.  That has not been decided yet, and I think we
can go forward with X-Purpose, and rename the header as soon as there's
good consensus in HTTPbis.

Servers will use the X-Purpose header for a number of purposes.  Servers
under load may 500 prefetch requests rather than serve requests not
motivated by navigation.  Servers may 500 prefetch requests that would have
retrieved non-cacheable content.  Server operators who want accurate
information on user-initiated navigations to their resources can use this
header: when prefetch motivated requests arrive, they can respond with
Cache-Control: must-revalidate which should force a 304 at user-initiated
navigation[1].  Some server operators today opt out of X-Moz prefetch
motivated requests, and that may continue to happen.

The X-Purpose header will not make requests any longer or different, except
for prefetching motivated requests (which will be longer by about 20 bytes).
 Nor will this header make server response headers longer.  Alexey in our
earlier discussion was concerned that all server responses would have
to be longer due to Vary: X-Purpose responses.  This turns out to not
be the case; a Vary response to prefetches doesn't make sense, since the
entire purpose of the feature is to prime caches.  To treat prefetch
requests differently, servers can instead 500 them as described above, or
use the Cache-Control: must-revalidate trick.
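As an illustration of the server-side options described above (this sketch is mine, not from the proposal; the function name and option flags are hypothetical, only the header names follow the mail):

```javascript
// Decide how to answer a request given its headers, using the options
// discussed above: refuse prefetches under load, or force revalidation
// at real navigation so logs record an accurate 304.
function handleRequest(headers, opts) {
  var isPrefetch = headers["x-purpose"] === "prefetch";
  if (isPrefetch && opts.overloaded) {
    // A loaded server may 500 prefetch requests outright.
    return { status: 500, headers: {} };
  }
  if (isPrefetch && opts.wantAccurateStats) {
    // Serve the resource, but force a revalidation at user-initiated
    // navigation, which should then show up as a 304 in the logs.
    return { status: 200,
             headers: { "cache-control": "must-revalidate" } };
  }
  // Ordinary, navigation-motivated requests are served normally.
  return { status: 200, headers: {} };
}
```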

Alexey asked how X-Purpose headers will affect corporate network monitoring
software; he was particularly concerned that this header would make reports
from network monitoring software less accurate.  This turns out to not be
the case; prefetches right now transit the network unmarked.  The proposal
is to mark prefetch motivated requests, which is more information than we
had previously.  Today you can't attribute any resource request to user
navigation (since it might be prefetch motivated), but with this header, you
have some requests (prefetch motivated requests that didn't use the
cache-control trick) that you can't attribute user navigation to.  So
network monitoring has strictly more information.

Eric Seidel, in the X-Purpose bug, asked why we'd tag prefetch motivated
requests, when we don't, for instance, tag iframe or XMLHttpRequest
motivated requests.  I think the difference here is that there's effective
conditional handling that servers can perform; there is a wide variety of
responses that servers can make to prefetches, discussed above, all of which
result in well-defined resource & load semantics for user navigation.  In
the case of iframes, even display:none iframes, or XMLHttpRequest, the result
of this kind of differential handling is undefined, and without putting very
difficult-to-enforce requirements on pages themselves, it will stay
undefined.

Maciej last week suggested he'd get back to us and explain the motivations
that the Safari team used for adding X-Purpose: preview, in case they were
helpful here.  He hasn't yet, but I'm sure I'd appreciate hearing about
that!

Have I missed anything?  I think X-Purpose (or Purpose, or X-Moz...) is an
important part of a link prefetching implementation.  It's requested by
server operators, it has been used in the X-Moz form for years, and it helps
servers gather statistics properly & handle load.  It also makes network
monitoring software more accurate.  And all this, without significantly
lengthening requests.  I hope I don't sound thick headed here, but I don't
see a downside.

Can we go ahead and land this change?  Is there something missing from the
above?  If something is missing, please reply and explain what it is and how
you think it might be addressed.  Thanks!

- Gavin

[1] Unfortunately, such servers are losing some of the benefit of
prefetching since there's at least one critical-path RTT, and possibly two
(if no warm TCP connection is available) for these kinds of validated
navigations.


Re: [webkit-dev] X-Purpose, again!

2010-10-08 Thread Dan Bernstein

On Oct 8, 2010, at 6:26 AM, Gavin Peters (蓋文彼德斯) wrote:

 In particular, Safari sends X-Purpose: preview headers on requests for 
 resources and subresources motivated by the previw feature of Safari.

That’s incorrect. The header is only present in the request for the main
resource.


[webkit-dev] Tests that fail across different architectures

2010-10-08 Thread Martin Robinson
The GTK+ port runs layout tests on both x86_64 and i386 operating
system installations. We have a growing list of tests that generate
different expected results between architectures. These are almost all
SVG tests. Last night, tests were added that even have different
results between our debug and release bots.

Here's an example of a failure:
http://build.webkit.org/results/GTK%20Linux%2064-bit%20Debug/r69399%20(14499)/svg/custom/relative-sized-inner-svg-pretty-diff.html

Tests that depend on this amount of precision are very sensitive to
rounding differences between architectures and library versions. I'm
just curious what the approach should be that isn't skipping these
tests. Perhaps a fudge factor in the output?

Martin


Re: [webkit-dev] Tests that fail across different architectures

2010-10-08 Thread Simon Fraser
On Oct 8, 2010, at 9:16 AM, Martin Robinson wrote:

 The GTK+ port runs layout tests on both x86_64 and i386 operating
 system installations. We have a growing list of tests that generate
 different expected results between architectures. These are almost all
 SVG tests. Last night, tests were added that even have different
 results between our debug and release bots.
 
 Here's an example of a failure:
 http://build.webkit.org/results/GTK%20Linux%2064-bit%20Debug/r69399%20(14499)/svg/custom/relative-sized-inner-svg-pretty-diff.html
 
 Tests that depend on this amount of precision are very sensitive to
 rounding differences between architectures and library versions. I'm
 just curious what the approach should be that isn't skipping these
 tests. Perhaps a fudge factor in the output?

Yes, I would round the output to some number of significant digits (or decimal 
places perhaps).

Simon



Re: [webkit-dev] Tests that fail across different architectures

2010-10-08 Thread Nikolas Zimmermann


On 08.10.2010, at 18:16, Martin Robinson wrote:


The GTK+ port runs layout tests on both x86_64 and i386 operating
system installations. We have a growing list of tests that generate
different expected results between architectures. These are almost all
SVG tests. Last night, tests were added that even have different
results between our debug and release bots.

Here's an example of a failure:
http://build.webkit.org/results/GTK%20Linux%2064-bit%20Debug/r69399%20(14499)/svg/custom/relative-sized-inner-svg-pretty-diff.html

Tests that depend on this amount of precision are very sensitive to
rounding differences between architectures and library versions. I'm
just curious what the approach should be that isn't skipping these
tests. Perhaps a fudge factor in the output?



Hi Martin,

we just switched over to a platform-independent way to dump circles,
rects, ellipses, etc.
Paths are prepared, but the switch is not fully done yet. I'm working
on a patch that rounds the results for each command & coordinate pair
in the path debug string output.

This will hopefully resolve the issue!

Cheers,
Niko




Re: [webkit-dev] setTimeout and Safari

2010-10-08 Thread Evan Martin
On Thu, Oct 7, 2010 at 12:41 PM, Simon Fraser simon.fra...@apple.com wrote:
 On Oct 7, 2010, at 12:23 PM, Steve Conover wrote:

 So that I don't have to guess whether a page is done rendering.
 Many developers defer rendering using setTimeout; I'd like to wait
 until the setTimeouts are done and then check the result.  This
 would be superior to guessing at a sleep interval in the calling code.

 Are you trying to choose a good time to snapshot the page?

 There are many things that can cause the page to keep changing; chained
 setTimeouts, setInterval, CSS transitions and animations, SVG animation,
 plugins etc etc. This is not a simple question to answer.

Also img tags that are taking a long time to load.


It's a pretty ugly hack, but to performance test the new tab page in
Chrome we use a timeout keyed on painting.  If the page hasn't painted
in two seconds or so, we call it done, and record that it took up to
the time of the last seen paint to finish loading.  This metric at
least captures what matters to a user: whether the page contents have
settled, at the cost of the assumption that we never sit still for
more than two seconds while loading.
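Chrome's internal implementation isn't shown in the mail; as a rough sketch of the heuristic Evan describes, with invented names, the "settled" decision boils down to checking whether a quiet period has elapsed since the last observed paint:

```javascript
// Given timestamps (ms) of observed paints, in order, decide whether the
// page has "settled": no paint for at least quietMs before `now`.
// Returns the time of the last paint if settled, else null. The page is
// then recorded as having taken until that last paint to finish loading.
function settleTime(paintTimes, quietMs, now) {
  if (paintTimes.length === 0) return null;
  var last = paintTimes[paintTimes.length - 1];
  // Settled only if we have sat still for quietMs since the last paint.
  return (now - last >= quietMs) ? last : null;
}
```

This captures the stated trade-off: a page that keeps painting at intervals shorter than the quiet period is never declared done.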


Re: [webkit-dev] Pixel test experiment

2010-10-08 Thread Maciej Stachowiak

On Oct 8, 2010, at 12:46 AM, Nikolas Zimmermann wrote:

 
 On 08.10.2010, at 00:44, Maciej Stachowiak wrote:
 
 
 On Oct 7, 2010, at 6:34 AM, Nikolas Zimmermann wrote:
 
 Good evening webkit folks,
 
 I've finished landing svg/ pixel test baselines, which pass with 
 --tolerance 0 on my 10.5 & 10.6 machines.
 As the pixel testing is very important for the SVG tests, I'd like to run 
 them on the bots, experimentally, so we can catch regressions easily.
 
 Maybe someone with direct access to the leopard & snow leopard bots could 
 just run run-webkit-tests --tolerance 0 -p svg and mail me the results?
 If it passes, we could maybe run the pixel tests for the svg/ subdirectory 
 on these bots?
 
 Running pixel tests would be great, but can we really expect the results to 
 be stable cross-platform with tolerance 0? Perhaps we should start with a 
 higher tolerance level.
 
 Sure, we could do that. But I'd really like to get a feeling for what's 
 problematic first. If we see 95% of the SVG tests pass with --tolerance 0, 
 and only a few need higher tolerances
 (64bit vs. 32bit aa differences, etc.), I could come up with a per-file pixel 
 test tolerance extension to DRT, if it's needed.
 
 How about starting with just one build slave (say, Mac Leopard) that runs the 
 pixel tests for SVG, with --tolerance 0 for a while. I'd be happy to identify 
 the problems, and see
 if we can make it work, somehow :-)

The problem I worry about is that on future Mac OS X releases, rendering of 
shapes may change in some tiny way that is not visible but enough to cause 
failures at tolerance 0. In the past, such false positives arose from time to 
time, which is one reason we added pixel test tolerance in the first place. I 
don't think running pixel tests on just one build slave will help us understand 
that risk.

Why not start with some low but non-zero tolerance (0.1?) and see if we can at 
least make that work consistently, before we try the bolder step of tolerance 0?
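The thread never spells out what DRT's tolerance actually measures; as one plausible illustration of what a percentage tolerance could mean for an image comparison (the function and the "percentage of differing pixels" definition are assumptions of mine, not DRT's documented behaviour):

```javascript
// Compare two images given as flat arrays of 8-bit channel values.
// Returns the percentage of pixels that differ in any channel; a run
// would "pass" if that percentage is <= the chosen tolerance.
function diffPercent(a, b, channelsPerPixel) {
  var pixels = a.length / channelsPerPixel;
  var differing = 0;
  for (var p = 0; p < pixels; p++) {
    for (var c = 0; c < channelsPerPixel; c++) {
      if (a[p * channelsPerPixel + c] !== b[p * channelsPerPixel + c]) {
        differing++;
        break;  // count each pixel at most once
      }
    }
  }
  return 100 * differing / pixels;
}
```

Under a definition like this, --tolerance 0 demands byte-identical output, while 0.1 would let roughly one pixel in a thousand drift, e.g. from antialiasing differences.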

Also, and as a side note, we probably need to add more build slaves to run 
pixel tests at all, since just running the test suite without pixel tests is 
already slow enough that the testers are often significantly behind the 
builders.

Regards,
Maciej



Re: [webkit-dev] Tests that fail across different architectures

2010-10-08 Thread Martin Robinson
On Fri, Oct 8, 2010 at 9:21 AM, Nikolas Zimmermann
zimmerm...@physik.rwth-aachen.de wrote:
 we just switched over to a platform-independent way to dump circles, rects,
 ellipses, etc.
 Paths are prepared, but the switch is not fully done yet. I'm working on a
 patch that rounds the results for each command & coordinate pair in the
 path debug string output.
 This will hopefully resolve the issue!


Thanks for the information. This sounds perfect!

Martin


Re: [webkit-dev] Pixel test experiment

2010-10-08 Thread Dirk Schulze

 The problem I worry about is that on future Mac OS X releases, rendering of 
 shapes may change in some tiny way that is not visible but enough to cause 
 failures at tolerance 0. In the past, such false positives arose from time to 
 time, which is one reason we added pixel test tolerance in the first place. I 
 don't think running pixel tests on just one build slave will help us 
 understand that risk.
 
 Why not start with some low but non-zero tolerance (0.1?) and see if we can 
 at least make that work consistently, before we try the bolder step of 
 tolerance 0?
 
 Also, and as a side note, we probably need to add more build slaves to run 
 pixel tests at all, since just running the test suite without pixel tests is 
 already slow enough that the testers are often significantly behind the 
 builders.
 
 Regards,
 Maciej
Running pixel tests with a tolerance of 0.1 is still better than not running 
pixel tests at all. So if we get a consensus with a small tolerance, I'm fine.
And yes, we might get problems with a new MacOS release. We have a lot of 
differences (0.1%) between 10.5 and 10.6 right now.
But I don't see a problem with it as long as someone manages the results. Niko 
and I are doing it for SVG on MacOS X 10.6, and will also continue it on 10.5 
for a while.

Dirk


Re: [webkit-dev] Pixel test experiment

2010-10-08 Thread Jeremy Orlow
I'm not an expert on Pixel tests, but my understanding is that in Chromium
(where we've always run with tolerance 0) we've seen real regressions that
would have slipped by with something like tolerance 0.1.  When you have
0 tolerance, it is more maintenance work, but if we can avoid regressions,
it seems worth it.

J

On Fri, Oct 8, 2010 at 10:58 AM, Nikolas Zimmermann 
zimmerm...@physik.rwth-aachen.de wrote:


 On 08.10.2010, at 19:53, Maciej Stachowiak wrote:



 On Oct 8, 2010, at 12:46 AM, Nikolas Zimmermann wrote:


 On 08.10.2010, at 00:44, Maciej Stachowiak wrote:


 On Oct 7, 2010, at 6:34 AM, Nikolas Zimmermann wrote:

  Good evening webkit folks,

 I've finished landing svg/ pixel test baselines, which pass with
 --tolerance 0 on my 10.5 & 10.6 machines.
 As the pixel testing is very important for the SVG tests, I'd like to
 run them on the bots, experimentally, so we can catch regressions easily.

 Maybe someone with direct access to the leopard & snow leopard bots,
 could just run run-webkit-tests --tolerance 0 -p svg and mail me the
 results?
 If it passes, we could maybe run the pixel tests for the svg/
 subdirectory on these bots?


 Running pixel tests would be great, but can we really expect the results
 to be stable cross-platform with tolerance 0? Perhaps we should start with
 a higher tolerance level.


 Sure, we could do that. But I'd really like to get a feeling for what's
 problematic first. If we see 95% of the SVG tests pass with --tolerance 0,
 and only a few need higher tolerances
 (64bit vs. 32bit aa differences, etc.), I could come up with a per-file
 pixel test tolerance extension to DRT, if it's needed.

 How about starting with just one build slave (say, Mac Leopard) that runs
 the pixel tests for SVG, with --tolerance 0 for a while. I'd be happy to
 identify the problems, and see
 if we can make it work, somehow :-)


 The problem I worry about is that on future Mac OS X releases, rendering
 of shapes may change in some tiny way that is not visible but enough to
 cause failures at tolerance 0. In the past, such false positives arose from
 time to time, which is one reason we added pixel test tolerance in the first
 place. I don't think running pixel tests on just one build slave will help
 us understand that risk.


 I think we'd just update the baseline to the newer OS X release, then, like
 it has been done for the tiger -> leopard and leopard -> snow leopard
 switches?
 platform/mac/ should always contain the newest release baseline; when
 there are differences on leopard, the results go into
 platform/mac-leopard/


  Why not start with some low but non-zero tolerance (0.1?) and see if we
 can at least make that work consistently, before we try the bolder step of
 tolerance 0?
 Also, and as a side note, we probably need to add more build slaves to run
 pixel tests at all, since just running the test suite without pixel tests is
 already slow enough that the testers are often significantly behind the
 builders.


 Well, I thought about just running the pixel tests for the svg/
 subdirectory as a separate step, hence my request for tolerance 0, as the
 baseline passes without problems at least on my & Dirk's machines already.
 I wouldn't want to argue for running 20,000+ pixel tests with tolerance 0 as
 the first step :-) But the 1000 SVG tests might be fine with tolerance 0?

 Even tolerance 0.1 as default for SVG would be fine with me, as long as we
 can get the bots to run the SVG pixel tests :-)

 Cheers,
 Niko





Re: [webkit-dev] Tests that fail across different architectures

2010-10-08 Thread Evan Martin
On Fri, Oct 8, 2010 at 9:21 AM, Nikolas Zimmermann
zimmerm...@physik.rwth-aachen.de wrote:
 Paths are prepared, but the switch is not fully done yet. I'm working on a
 patch, that rounds the results for each command  coordinate-pair in the
 path debug string output.

Since you are going to cause rebaselines for all of these tests, I
would really appreciate it if you considered
  https://bugs.webkit.org/show_bug.cgi?id=18994
in the process.

(Basically: printf with %f/%g does the wrong thing in many European
locales, so we should switch how we print floats.)
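The DRT code in question is C++, but the desired dump behaviour can be sketched in the document's own language: round every coordinate to a fixed number of decimal places and always emit '.' as the separator, independent of the system locale (the function names here are mine, purely for illustration):

```javascript
// Render one path coordinate for test dump output: fixed decimal
// places, '.' as the separator regardless of locale. (Unlike C's
// printf %f/%g, Number.prototype.toFixed is locale-independent.)
function dumpCoord(value, places) {
  return value.toFixed(places);
}

// Round every number in an SVG-path-like debug string the same way,
// so baselines agree across architectures that round differently.
function roundPathString(path, places) {
  return path.replace(/-?\d+(?:\.\d+)?/g, function(num) {
    return dumpCoord(parseFloat(num), places);
  });
}
```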


[webkit-dev] pixel tests and --tolerance (was Re: Pixel test experiment)

2010-10-08 Thread Dirk Pranke
Jeremy is correct; the Chromium port has seen real regressions that
virtually no concept of a fuzzy match I can imagine would've
caught.
new-run-webkit-tests doesn't currently support the tolerance concept
at all, and I am inclined to argue that it shouldn't.

However, I frequently am wrong about things, so it's quite possible
that there are good arguments for supporting it that I'm not aware of.
I'm not particularly interested in working on a tool that doesn't do
what the group wants it to do, and I would like all of the other
WebKit ports to be running pixel tests by default (and
new-run-webkit-tests ;) ) since I think it catches bugs.

As far as I know, the general sentiment on the list has been that we
should be running pixel tests by default, and the reason that we
aren't is largely due to the work involved in getting them back up to
date and keeping them up to date. I'm sure that fuzzy matching reduces
the workload, especially for the sort of mismatches caused by
differences in text antialiasing.

In addition, I have heard concerns that we'd like to keep fuzzy
matching because people might potentially get different results on
machines with different hardware configurations, but I don't know that
we have any confirmed cases of that (except for arguably the case of
different code paths for gpu-accelerated rendering vs. unaccelerated
rendering).

If we made it easier to maintain the baselines (improved tooling like
Chromium's rebaselining tool, reftest support, etc.), are there
still compelling reasons for supporting --tolerance-based testing as
opposed to exact matching?

-- Dirk

On Fri, Oct 8, 2010 at 11:14 AM, Jeremy Orlow jor...@chromium.org wrote:
 I'm not an expert on Pixel tests, but my understanding is that in Chromium
 (where we've always run with tolerance 0) we've seen real regressions that
 would have slipped by with something like tolerance 0.1.  When you have
 0 tolerance, it is more maintenance work, but if we can avoid regressions,
 it seems worth it.
 J

 On Fri, Oct 8, 2010 at 10:58 AM, Nikolas Zimmermann
 zimmerm...@physik.rwth-aachen.de wrote:

 On 08.10.2010, at 19:53, Maciej Stachowiak wrote:


 On Oct 8, 2010, at 12:46 AM, Nikolas Zimmermann wrote:


 On 08.10.2010, at 00:44, Maciej Stachowiak wrote:


 On Oct 7, 2010, at 6:34 AM, Nikolas Zimmermann wrote:

 Good evening webkit folks,

 I've finished landing svg/ pixel test baselines, which pass with
 --tolerance 0 on my 10.5 & 10.6 machines.
 As the pixel testing is very important for the SVG tests, I'd like to
 run them on the bots, experimentally, so we can catch regressions easily.

 Maybe someone with direct access to the leopard & snow leopard bots,
 could just run run-webkit-tests --tolerance 0 -p svg and mail me the
 results?
 If it passes, we could maybe run the pixel tests for the svg/
 subdirectory on these bots?

 Running pixel tests would be great, but can we really expect the
 results to be stable cross-platform with tolerance 0? Perhaps we should
 start with a higher tolerance level.

 Sure, we could do that. But I'd really like to get a feeling for what's
 problematic first. If we see 95% of the SVG tests pass with --tolerance 0,
 and only a few need higher tolerances
 (64bit vs. 32bit aa differences, etc.), I could come up with a per-file
 pixel test tolerance extension to DRT, if it's needed.

 How about starting with just one build slave (say, Mac Leopard) that
 runs the pixel tests for SVG, with --tolerance 0 for a while. I'd be happy
 to identify the problems, and see
 if we can make it work, somehow :-)

 The problem I worry about is that on future Mac OS X releases, rendering
 of shapes may change in some tiny way that is not visible but enough to
 cause failures at tolerance 0. In the past, such false positives arose from
 time to time, which is one reason we added pixel test tolerance in the first
 place. I don't think running pixel tests on just one build slave will help
 us understand that risk.

 I think we'd just update the baseline to the newer OS X release, then,
 like it has been done for the tiger -> leopard and leopard -> snow leopard
 switches?
 platform/mac/ should always contain the newest release baseline; when
 there are differences on leopard, the results go into
 platform/mac-leopard/

 Why not start with some low but non-zero tolerance (0.1?) and see if we
 can at least make that work consistently, before we try the bolder step of
 tolerance 0?
 Also, and as a side note, we probably need to add more build slaves to
 run pixel tests at all, since just running the test suite without pixel
 tests is already slow enough that the testers are often significantly behind
 the builders.

 Well, I thought about just running the pixel tests for the svg/
 subdirectory as a separate step, hence my request for tolerance 0, as the
 baseline already passes without problems, at least on my & Dirk's machines.
 I wouldn't want to argue for running 20,000+ pixel tests with tolerance 0 as
 a first step :-)

Re: [webkit-dev] pixel tests and --tolerance (was Re: Pixel test experiment)

2010-10-08 Thread Simon Fraser
I think the best solution to this pixel matching problem is ref tests.

How practical would it be to use ref tests for SVG?

Simon
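
For context, a ref test pairs a test page with a reference page that must render to exactly the same pixels, so no checked-in image baseline (and hence no tolerance) is needed. A minimal SVG ref test pair might look like the following; the file names and harness conventions are illustrative, since DRT had no reftest support at this point:

```html
<!-- rect.html (the test): draw a green square with SVG -->
<!DOCTYPE html>
<body style="margin: 0">
<svg xmlns="http://www.w3.org/2000/svg" width="100" height="100"
     style="display: block">
  <rect width="100" height="100" fill="green"/>
</svg>

<!-- rect-expected.html (the reference): same pixels via plain CSS -->
<!DOCTYPE html>
<body style="margin: 0">
<div style="width: 100px; height: 100px; background: green"></div>
```

The harness renders both pages and fails the test if any pixel differs, sidestepping the baseline-maintenance problem entirely for tests that can be expressed this way.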

On Oct 8, 2010, at 12:43 PM, Dirk Pranke wrote:

 Jeremy is correct; the Chromium port has seen real regressions that
 virtually no fuzzy-matching scheme I can imagine would have
 caught.
 new-run-webkit-tests doesn't currently support the tolerance concept
 at all, and I am inclined to argue that it shouldn't.
 
 However, I frequently am wrong about things, so it's quite possible
 that there are good arguments for supporting it that I'm not aware of.
 I'm not particularly interested in working on a tool that doesn't do
 what the group wants it to do, and I would like all of the other
 WebKit ports to be running pixel tests by default (and
 new-run-webkit-tests ;) ) since I think it catches bugs.
 
 As far as I know, the general sentiment on the list has been that we
 should be running pixel tests by default, and the reason that we
 aren't is largely due to the work involved in getting them back up to
 date and keeping them up to date. I'm sure that fuzzy matching reduces
 the workload, especially for the sort of mismatches caused by
 differences in text antialiasing.
 
 In addition, I have heard concerns that we'd like to keep fuzzy
 matching because people might potentially get different results on
 machines with different hardware configurations, but I don't know that
 we have any confirmed cases of that (except for arguably the case of
 different code paths for gpu-accelerated rendering vs. unaccelerated
 rendering).
 
 If we made it easier to maintain the baselines (improved tooling like
 Chromium's rebaselining tool, reftest support, etc.), are there
 still compelling reasons for supporting --tolerance-based testing as
 opposed to exact matching?
 
 -- Dirk
 
 On Fri, Oct 8, 2010 at 11:14 AM, Jeremy Orlow jor...@chromium.org wrote:
 I'm not an expert on Pixel tests, but my understanding is that in Chromium
 (where we've always run with tolerance 0) we've seen real regressions that
 would have slipped by with something like tolerance 0.1.  When you have
 0 tolerance, it is more maintenance work, but if we can avoid regressions,
 it seems worth it.
 J
 

Re: [webkit-dev] X-Purpose, again!

2010-10-08 Thread Alexey Proskuryakov

On 08.10.2010, at 6:26, Gavin Peters (蓋文彼德斯) wrote:

 Can we go ahead and land this change?


I still think that it's a misfeature to have cross-origin prefetch, and that 
hitting search results with prefetch requests is an abuse of network 
infrastructure, akin to classical spamming.

Since link prefetch isn't enabled in Safari, I don't really care.

- WBR, Alexey Proskuryakov

___
webkit-dev mailing list
webkit-dev@lists.webkit.org
http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev


[webkit-dev] ArrayBuffer support

2010-10-08 Thread Jian Li
Hi,

I am looking into hooking up ArrayBuffer support in FileReader and
BlobBuilder. It seems that most of the typed array types (ArrayBuffer,
ArrayBufferView, Uint8Array, etc.) have already been implemented in
WebKit, except TypedArray.get() and DataView.

Currently all of this typed array support is placed under the 3D_CANVAS
feature guard. Since these types are going to be widely used in other areas,
like the File API and XHR, should we remove the 3D_CANVAS conditional guard
from all the related files? Should we also move these files out of
html/canvas, say into html/ or html/typedarrays?

In addition, get() is not implemented for typed array views. Should we add
it?

Jian


Re: [webkit-dev] ArrayBuffer support

2010-10-08 Thread Darin Adler
On Oct 8, 2010, at 2:46 PM, Jian Li wrote:

 Currently all of this typed array support is placed under the 3D_CANVAS
 feature guard. Since these types are going to be widely used in other areas,
 like the File API and XHR, should we remove the 3D_CANVAS conditional guard
 from all the related files?

I suggest we keep the guard up to date. These arrays should be compiled in if 
any of the features that use them are compiled in. Once they are used in 
XMLHttpRequest, then they won’t need feature guards at all, but I believe the 
File API may still be under a feature guard.

-- Darin



Re: [webkit-dev] How to save a page

2010-10-08 Thread Darin Adler
On Oct 7, 2010, at 10:53 AM, Jonas Galvez wrote:

 I realize this is probably not the right list to be making this question, and 
 I apologize in advance for the disruption.
 
 I'm just having a really hard time finding resources and documentation on 
 webkit's API.


See http://webkit.org/contact.html.

Your question is off topic for the mailing list we are on. On topic for this 
mailing list: http://lists.apple.com/mailman/listinfo/webkitsdk-dev. Possibly 
on topic for this mailing list: 
http://lists.webkit.org/mailman/listinfo/webkit-help.

-- Darin



Re: [webkit-dev] ArrayBuffer support

2010-10-08 Thread Jian Li
Sounds good. I will add the File API feature guard to it and still keep
those files under html/canvas.

On Fri, Oct 8, 2010 at 2:55 PM, Darin Adler da...@apple.com wrote:

 On Oct 8, 2010, at 2:46 PM, Jian Li wrote:

  Currently all of this typed array support is placed under the 3D_CANVAS
 feature guard. Since these types are going to be widely used in other areas,
 like the File API and XHR, should we remove the 3D_CANVAS conditional guard
 from all the related files?

 I suggest we keep the guard up to date. These arrays should be compiled in
 if any of the features that use them are compiled in. Once they are used in
 XMLHttpRequest, then they won’t need feature guards at all, but I believe
 the File API may still be under a feature guard.

-- Darin




Re: [webkit-dev] ArrayBuffer support

2010-10-08 Thread Maciej Stachowiak

On Oct 8, 2010, at 3:05 PM, Jian Li wrote:

 Sounds good. I will add the File API feature guard to it and still keep those 
 files under html/canvas.

Another possibility is to have an ArrayBuffer feature guard and ensure that it 
is on if at least one of the features depending on it is on.

 - Maciej

 
 On Fri, Oct 8, 2010 at 2:55 PM, Darin Adler da...@apple.com wrote:
 On Oct 8, 2010, at 2:46 PM, Jian Li wrote:
 
  Currently all of this typed array support is placed under the 3D_CANVAS
  feature guard. Since these types are going to be widely used in other areas,
  like the File API and XHR, should we remove the 3D_CANVAS conditional guard
  from all the related files?
 
 I suggest we keep the guard up to date. These arrays should be compiled in if 
 any of the features that use them are compiled in. Once they are used in 
 XMLHttpRequest, then they won’t need feature guards at all, but I believe the 
 File API may still be under a feature guard.
 
-- Darin
 
 



Re: [webkit-dev] ArrayBuffer support

2010-10-08 Thread Jian Li
On Fri, Oct 8, 2010 at 3:29 PM, Maciej Stachowiak m...@apple.com wrote:


 On Oct 8, 2010, at 3:05 PM, Jian Li wrote:

 Sounds good. I will add the File API feature guard to it and still keep
 those files under html/canvas.


 Another possibility is to have an ArrayBuffer feature guard and ensure that
 it is on if at least one of the features depending on it is on.

This also sounds good. Personally I prefer adding the File API feature guard,
since it is simpler. When ArrayBuffer is used in XHR, we can then simply
remove all the feature guards. Otherwise, we would have to update all the
config files to add a new feature guard.


  - Maciej


 On Fri, Oct 8, 2010 at 2:55 PM, Darin Adler da...@apple.com wrote:

 On Oct 8, 2010, at 2:46 PM, Jian Li wrote:

  Currently all of this typed array support is placed under the 3D_CANVAS
 feature guard. Since these types are going to be widely used in other areas,
 like the File API and XHR, should we remove the 3D_CANVAS conditional guard
 from all the related files?

 I suggest we keep the guard up to date. These arrays should be compiled in
 if any of the features that use them are compiled in. Once they are used in
 XMLHttpRequest, then they won’t need feature guards at all, but I believe
 the File API may still be under a feature guard.

-- Darin







[webkit-dev] Unapplying execCommand

2010-10-08 Thread Ryosuke Niwa
Greetings all,

I realized that the behavior of undo is different in WebKit and Chromium.
Namely, on Safari execCommand('undo') undoes all previous execCommands,
while in Chromium it undoes exactly one execCommand.

For example, if we have

<div id="test" contenteditable>hello</div>
<script>
window.getSelection().selectAllChildren(test);
document.execCommand('bold', false, null);
document.execCommand('italic', false, null);
document.execCommand('undo', false, null);
</script>

WebKit will have the plain "hello" after the script is run, i.e. "hello" is
neither bolded nor italicized. On Chromium, "hello" is bolded after the
script is run, which is also consistent with Firefox, Internet Explorer, and
Opera (I had to add an artificial delay for MSIE and Opera to execute undo
properly). I feel like this should be a bug and possibly a regression.

Does anyone know if this was a UI / functionality decision or it is a bug?

Best regards,
Ryosuke Niwa
Software Engineer
Google Inc.
___
webkit-dev mailing list
webkit-dev@lists.webkit.org
http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev


Re: [webkit-dev] Unapplying execCommand

2010-10-08 Thread Darin Adler
On Oct 8, 2010, at 3:47 PM, Ryosuke Niwa wrote:

 Does anyone know if this was a UI / functionality decision or it is a bug?

This undo behavior you describe in Safari is what is automatically implemented 
by Cocoa’s NSUndoManager. The concept is that when a user undoes something, 
they want to undo everything they did that corresponds to one end user event, 
not a single command from some lower level point of view.

Getting different behavior on Mac OS X might be challenging. The Mac version of 
WebKit ties undo in with the Cocoa undo management rather than creating its own 
separate undo mechanism. This allows the end user to undo a set of operations 
that involve web views interleaved with other operations. That’s good for the 
Undo menu items in applications. But when a webpage does execCommand('undo'), 
that behavior is not as useful. If this turns out to be important from a web 
compatibility point of view we can try to find some way to change it without 
losing the beneficial effects.

-- Darin
