Re: [webkit-dev] Compiling WebKit up to 25% faster in Windows?

2013-03-26 Thread James Robinson
Also keep in mind that currently different build systems hack the include
path up to have the same #include point to different headers depending on
the build configuration, so the path expansion for a given #include will
not be the same for all ports.  It's basically a very non-obvious way to do
#if PLATFORM() guards at include sites without looking like it.  For
instance there are 7 different versions of AuthenticationChallenge.h but
only one #include statement in ResourceLoader.cpp.

Consider:

$ find Source/WebCore -name '*.h' -printf '%f\n' | wc -l
3383

$ find Source/WebCore -name '*.h' -printf '%f\n' | sort | uniq | wc -l
3288


- James


On Tue, Mar 26, 2013 at 1:40 PM, Dirk Pranke dpra...@chromium.org wrote:

 On Tue, Mar 26, 2013 at 1:29 PM, Benjamin Poulain benja...@webkit.org wrote:

 On Tue, Mar 26, 2013 at 1:20 PM, Dirk Pranke dpra...@chromium.org wrote

 If we have consensus that we should just switch to paths relative to
 Source (or maybe a couple different options), that would be (IMO) a big
 win. It sounds like Daniel & co. have already done the big bang conversion.


 I think using full path would be a serious hit regarding hackability.

 I would rather spend some time tweaking my compiler to cache each
 directory content than waste time finding where is every single header I
 need to include.


 Interesting. I have the exact opposite experience, having to paw around to
 figure out where Font.h actually lives rather than just seeing
 WebCore/platform/graphics/Font.h.

 At any rate, to be clear, I would be in favor of that change but I'm not
 expecting it to happen :).

 -- Dirk

 ___
 webkit-dev mailing list
 webkit-dev@lists.webkit.org
 https://lists.webkit.org/mailman/listinfo/webkit-dev


___
webkit-dev mailing list
webkit-dev@lists.webkit.org
https://lists.webkit.org/mailman/listinfo/webkit-dev


Re: [webkit-dev] window.internals abuse

2013-03-14 Thread James Robinson
The patch in question landed nearly a year ago and was added in order to
test functionality that was (at the time) implemented in WebCore.
 window.testRunner may have been a better choice, I don't remember what the
state of it at the time was.  Nowadays the functionality under test is no
longer in WebCore and thus this infrastructure is being removed so I think
the question is moot.

- James


On Wed, Mar 13, 2013 at 12:10 PM, Ryosuke Niwa rn...@webkit.org wrote:

 On Wed, Mar 13, 2013 at 11:50 AM, Dana Jansens dan...@chromium.org wrote:

 On Wed, Mar 13, 2013 at 1:25 PM, Ryosuke Niwa rn...@webkit.org wrote:

 On Wed, Mar 13, 2013 at 10:22 AM, Dana Jansens dan...@chromium.org wrote:

 On Wed, Mar 13, 2013 at 1:21 PM, Simon Fraser 
  simon.fra...@apple.com wrote:

 On Mar 13, 2013, at 10:15 AM, Dana Jansens dan...@chromium.org
 wrote:

 On Wed, Mar 13, 2013 at 12:42 PM, Simon Fraser simon.fra...@apple.com
  wrote:

 https://bugs.webkit.org/show_bug.cgi?id=80046 (landed in
 http://trac.webkit.org/changeset/114081) added
  internals.setBackgroundBlurOnNode().

 I consider this to be abuse of the Internals object. As shown by the
 location of the files (Source/WebCore/*testing*/Internals.*), the
 Internals interface is intended for testing, not adding API to toggle
 features in WebCore for non-test code.


 Sorry, but I think this is based on a misunderstanding. That function
 is used for enabling the feature in layout tests in order to verify it is
 working. It is not used in non-test code.


 So why does the feature exist at all?


 It can be enabled by UI code via the platform layer directly.


 That sounds like a feature to be tested via testRunner object, not
 internals object.

 internals object should be reserved for testing cross-platform,
  cross-port features. Otherwise, we'll end up with a zillion if-defs for each
  and every port-specific feature we want to test :(


 I don't think that testRunner existed in its current form a year ago when
 this was written. I can look into moving it there if it would be better
 suited.


  Of course not in the current form, but the testRunner (or layoutTestController,
  before the rename) object has existed for years.  It had been the de facto
  standard way of exposing new testing functions to layout tests until we
  introduced the internals object two years ago.

 - R. Niwa


 ___
 webkit-dev mailing list
 webkit-dev@lists.webkit.org
 https://lists.webkit.org/mailman/listinfo/webkit-dev


___
webkit-dev mailing list
webkit-dev@lists.webkit.org
https://lists.webkit.org/mailman/listinfo/webkit-dev


Re: [webkit-dev] window.internals abuse

2013-03-14 Thread James Robinson
On Thu, Mar 14, 2013 at 1:47 PM, Ryosuke Niwa rn...@webkit.org wrote:

  On Thu, Mar 14, 2013 at 1:43 PM, James Robinson jam...@google.com wrote:

 The patch in question landed nearly a year ago and was added in order to
 test functionality that was (at the time) implemented in WebCore.
  window.testRunner may have been a better choice, I don't remember what the
 state of it at the time was.  Nowadays the functionality under test is no
 longer in WebCore and thus this infrastructure is being removed so I think
 the question is moot.


 Great. Thanks for the follow up. Perhaps we need a wiki page to illustrate
 the use case for testRunner vs. internals?

 I don't think many new contributors are aware of the different use cases
 each object has.


That sounds great.
http://trac.webkit.org/wiki/Exposing%20WebKit%20internals%20for%20Layout%20Tests
seems to cover some of this, but it seems a bit dated.
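Roughly, the split looks like this in a layout test (dumpAsText, waitUntilDone
and notifyDone are real testRunner methods; the internals call is a made-up
placeholder for "tweak some WebCore-internal state"):

if (window.testRunner) {
    // testRunner / layoutTestController: drives the test harness itself.
    testRunner.dumpAsText();
    testRunner.waitUntilDone();
}

if (window.internals) {
    // internals: inspect or tweak cross-platform WebCore state under test.
    internals.setSomeWebCoreStateForTesting(true);  // placeholder name
}

// ... exercise the feature being tested, then:
if (window.testRunner)
    testRunner.notifyDone();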

- James



 - R. Niwa


___
webkit-dev mailing list
webkit-dev@lists.webkit.org
https://lists.webkit.org/mailman/listinfo/webkit-dev


Re: [webkit-dev] Feature Announcement: IndieUI: Events

2013-03-05 Thread James Robinson
On Tue, Mar 5, 2013 at 9:11 AM, Chris Fleizach cfleiz...@apple.com wrote:

 Hello,

 I'm planning on implementing the IndieUI Events 1.0 W3C spec
 https://dvcs.w3.org/hg/IndieUI/raw-file/default/src/indie-ui-events.html



 This feature's primary goal is to allow assistive technologies (like a
 Screen reader) a way to control certain events that normally rely on a
 standard device, but it's abstract enough that it will be useful in other
 contexts.

 Example: The escape key on a keyboard might be used to dismiss a dialog,
 but an assistive technology might not be able to press the escape key.

 The spec's introduction states:

 IndieUI: Events 1.0 is an abstraction between physical, device-specific
 user interaction events and inferred user intent such as scrolling or
 changing values. This provides an intermediate layer between device- and
 modality-specific user interaction events, and the basic user interface
 functionality used by web applications. IndieUI: Events focuses on granular
 user interface interactions such as scrolling the view, canceling an
 action, changing the value of a user input widget, selecting a range,
 placing focus on an object, etc. Implementing platforms will combine
 modality-specific user input, user idiosyncratic heuristics to determine
 the specific corresponding Indie UI event, and send that to the web
 application in addition to the modality-specific input such as mouse or
 keyboard events, should applications wish to process it.


 The WebKit bug can be found here
  https://bugs.webkit.org/show_bug.cgi?id=111446 https://bugs.webkit.org/show_bug.cgi?id=106742


This bug appears to be a closed issue about speech synthesis, not IndieUI.
 Do you have a link to the IndieUI tracking bug?  Thanks!

- James




 The feature flag I plan on using is
 INDIEUI_EVENTS

 Let me know if you have any questions.

 Thanks

 ___
 webkit-dev mailing list
 webkit-dev@lists.webkit.org
 https://lists.webkit.org/mailman/listinfo/webkit-dev


___
webkit-dev mailing list
webkit-dev@lists.webkit.org
https://lists.webkit.org/mailman/listinfo/webkit-dev


Re: [webkit-dev] New web-facing CSS feature: -webkit-cursor-visibility: auto-hide

2013-03-04 Thread James Robinson
On Mon, Mar 4, 2013 at 4:52 PM, Jer Noble jer.no...@apple.com wrote:


 On Mar 4, 2013, at 4:46 PM, Ryosuke Niwa rn...@webkit.org wrote:

 Could you add either build or runtime flag?


  I most definitely could.  But are there any ports that would disable the
  flag?  (Honestly asking, here.)  If not, adding a feature flag may be more
  trouble than it's worth.


In Chromium we would like the ability to monitor and, when appropriate,
disable vendor-prefixed non-standard CSS properties.  I think it's a bad
idea to assume that by default all ports will want to expose non-standard
API to the web platform without at least considering the situation and
having a plan to remove at least the prefixed version.  Please add a flag
and, for bonus points, hook up FeatureObserver so we can monitor usage of
this property.
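
Whether a given port exposes the property is also something pages can probe
for; a rough detection sketch (CSS.supports isn't available in every engine,
hence the setProperty fallback):

function supportsCursorVisibility() {
    if (window.CSS && CSS.supports)
        return CSS.supports("-webkit-cursor-visibility", "auto-hide");
    // Fallback: setProperty silently drops a property the engine doesn't know.
    var probe = document.createElement("div");
    probe.style.setProperty("-webkit-cursor-visibility", "auto-hide");
    return probe.style.getPropertyValue("-webkit-cursor-visibility") === "auto-hide";
}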

- James



 -Jer

 ___
 webkit-dev mailing list
 webkit-dev@lists.webkit.org
 https://lists.webkit.org/mailman/listinfo/webkit-dev


___
webkit-dev mailing list
webkit-dev@lists.webkit.org
https://lists.webkit.org/mailman/listinfo/webkit-dev


Re: [webkit-dev] Enable CANVAS_PATH by default

2013-02-14 Thread James Robinson
On Thu, Feb 14, 2013 at 9:55 AM, Dirk Schulze dschu...@adobe.com wrote:

 Hi WebKit folks,

 I worked on the Path interface defined by the Canvas spec of W3C and
 WHATWG [1][2] for the last couple of weeks.

 Summary:
 Canvas supports a new DOM interface called Path. The Path interface takes
  a series of well-known path methods like moveTo, lineTo, cubicCurveTo, and
  rect, and allows creating and keeping a path independent of a Canvas context.
 Additionally, I added the attribute 'currentPath' to the Canvas context to
 provide read and write access to the current path created on the Canvas
 context. Code snippet:

 var path = new Path();
 path.rect(0,0,100,100);

 var ctx = canvas.getContext('2d');
 ctx.currentPath = path;
 ctx.lineTo(200,200);
 ctx.closePath();

 var path2 = ctx.currentPath; // path2 != path

  Not implemented are addText, addPath, and addPathByStrokingText. Another
 proposal from Rik Cabanier[3] seems to address the idea behind these
 methods better.

 I would like to ask to enable CANVAS_PATH by default on trunk. Ports can
  opt out of the flag again. More information about some implementation details
 in a short article[4]. If there are any concerns, suggestions or questions,
 I am happy to answer them.


Could you please add a runtime enable flag before flipping this on for
chromium?  Thanks.
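
With a runtime flag in place, content can also feature-detect the interface
instead of assuming it; a rough sketch, using the Path/currentPath names from
the proposal above:

var canvas = document.createElement("canvas");
var ctx = canvas.getContext("2d");

if (typeof Path === "function" && "currentPath" in ctx) {
    // Flag on: build the path independently of the context.
    var path = new Path();
    path.rect(0, 0, 100, 100);
    ctx.currentPath = path;
} else {
    // Flag off: fall back to building the path on the context directly.
    ctx.beginPath();
    ctx.rect(0, 0, 100, 100);
}
ctx.stroke();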

- James



 Greetings,
 Dirk

 [1] http://www.w3.org/html/wg/drafts/2dcontext/html5_canvas/#path-objects
 [2]
 http://www.whatwg.org/specs/web-apps/current-work/multipage/the-canvas-element.html#path-objects
 [3] http://blogs.adobe.com/webplatform/2013/01/31/revised-canvas-paths/
 [4] http://dschulze.com/blog/articles/10/html-canvas-path-object-in-webkit
 ___
 webkit-dev mailing list
 webkit-dev@lists.webkit.org
 https://lists.webkit.org/mailman/listinfo/webkit-dev

___
webkit-dev mailing list
webkit-dev@lists.webkit.org
https://lists.webkit.org/mailman/listinfo/webkit-dev


Re: [webkit-dev] Exporting symbols (was Re: Common build system (was Re: WebKit Wishes))

2013-02-02 Thread James Robinson
On Fri, Feb 1, 2013 at 5:12 PM, Balazs Kelemen kbal...@webkit.org wrote:

 On 02/01/2013 02:28 AM, Darin Fisher wrote:


 It would be nice if, in the shared library build of chromium, webcore and
 perhaps the modules and platform were separate DLLs.


 The shared library build is kind of a developer build, right? In this case
 I believe you can solve this by setting the default visibility to public at
 compiler/linker level, no need for exports.


That doesn't work on windows.

- James



 -kbalazs

  ___
  webkit-dev mailing list
  webkit-dev@lists.webkit.org
  https://lists.webkit.org/mailman/listinfo/webkit-dev

___
webkit-dev mailing list
webkit-dev@lists.webkit.org
https://lists.webkit.org/mailman/listinfo/webkit-dev


Re: [webkit-dev] Breaking other ports

2013-01-29 Thread James Robinson
On Tue, Jan 29, 2013 at 6:29 PM, Ryosuke Niwa rn...@webkit.org wrote:

 On Tue, Jan 29, 2013 at 5:46 PM, Adam Barth aba...@webkit.org wrote:

 I understand that the new rules of the road for WebKit2 are that
 contributors are allowed to break non-Apple ports.  However, those new
 norms do not extend to WebCore.

 In http://trac.webkit.org/changeset/138962, Alexey broke form
 resubmission confirmation in the Chromium port.  In
 https://bugs.webkit.org/show_bug.cgi?id=108214, he is refusing to
 fix the regression.  I don't view that as acceptable behavior from a
 member of the WebKit community.


 I understand your frustration but I don't think he intentionally broke the
 feature given that neither EWS nor commit queue detected the regression.


I don't see Adam or anyone else implying that he intentionally broke a
feature, but regardless when a patch to WebCore causes a regression in an
upstream port the options are to investigate and fix the regression or roll
the patch out.  If Alexey doesn't want to look into the regression, which
is valid, then the only clear option is to roll it out until someone who is
familiar with the breakage can look at it.


  As far as I checked both of those bugs, there isn't even a Chromium test
  failing because of his patch. So his patch broke something that had not been
 tested before.

  Also, given that the fix needs to be Chromium-port-specific, it doesn't seem
 productive to ask Alexey, who presumably doesn't know much about Chromium
 port, to come up with a fix for it. And both of those bugs and
 https://code.google.com/p/chromium/issues/detail?id=172721 don't seem to
 contain any information as to why his patch broke the feature or what kind
 of changes he needs to make in order to fix it. Someone who knows how form
 resubmission confirmation works in Chromium needs to help him so that we
 can fix this regression.


If Alexey is interested in learning about this and fixing it, that's fine,
but in the meantime the regression can't be left in the tree.  As long as
I've been involved with the WebKit project there has been a strict
no-regression policy.

- James


 - R. Niwa


 ___
 webkit-dev mailing list
 webkit-dev@lists.webkit.org
 https://lists.webkit.org/mailman/listinfo/webkit-dev


___
webkit-dev mailing list
webkit-dev@lists.webkit.org
https://lists.webkit.org/mailman/listinfo/webkit-dev


Re: [webkit-dev] Adding main element to WebCore

2012-11-29 Thread James Robinson
On Thu, Nov 29, 2012 at 6:08 PM, James Craig jcr...@apple.com wrote:

 Snipping somewhat for brevity…


This is an interesting standards debate but as many people have noted it
does not belong on the webkit-dev list, which is for coordinating the
development of WebKit.  Please take this over to whatwg@ or some other
appropriate standards list.

- James
___
webkit-dev mailing list
webkit-dev@lists.webkit.org
http://lists.webkit.org/mailman/listinfo/webkit-dev


Re: [webkit-dev] Status of multithreaded image decoding

2012-08-20 Thread James Robinson
On Fri, Aug 17, 2012 at 1:54 AM, Dong Seong Hwang
 luxte...@company100.net wrote:

 2012/8/14 Dong Seong Hwang luxte...@company100.net
  Trigger image decoding early in layout and scroll will certainly
  relieve flashing though it can’t completely remove the problem. We
  will apply this optimization to parallel image decoder.
 

 I changed parallel image decoder to trigger decoding on layout as
 Alpha suggested. Unfortunately, it does not seem to relieve flashing.
 I tested the change on Tumblr site, but the difference was visually
 indistinguishable.


By itself it won't help - you also need to synchronize at raster time (i.e.
block if the image isn't available).  Once the encoded bits are downloaded
off the wire and we've fired the load event on the image, we have to block
the next raster on having the decoded pixels available.

I think the way to look at this is as running a race.  Your goal is to have a
non-main thread run a race - decode the images - and try to complete faster
than the main thread would have.  The finish line is the point at which the
raster engine needs access to the decoded pixels for a given image.  At
that point, you have to block the rasterization process until the pixels
are ready.  The start line is sometime after the bits are downloaded off of
the network and before the raster engine needs it.

If you think of the current code in that model, the start line is when we
first attempt to paint an image that isn't in the decode cache and the
finish line is when we hand it to the GraphicsContext.  Those happen to be
a straight call through and there's no useful main thread work to be done
in that time so we decode on the main thread.  The only way to speed this
up with threads would be to use multiple threads to decode a given image or
to split the decode and resize operations across threads.  Useful, but
there's limited utility.

The real trick is to move either the start or the finish line.  Starting
the decode on layout is one way to move the start line forward and provide
more time for the thread to get the image ready before paint time.  You
could even consider starting sooner - such as on network load - but the
tradeoff there is the sooner you start decoding the less information you
will have about how the image will actually be used.  You might start
decoding an image that will never be painted, or decode at the wrong scale
or with the wrong clip.  The closer you get to the actual paint time the
more accurate your information will be.  For instance after layout you can
have a pretty good guess at whether an image will be in the viewport and
the scale, although the layout might change again before painting.  This
indicates that you may want some sort of decoding queue with decoding jobs
that can be added, modified, or removed after being entered into the queue.
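
As a toy model (plain script, not WebCore code; every name here is invented),
such a queue only needs jobs that can be added after layout, updated when the
layout changes, and dropped when an image turns out not to be needed:

function DecodeQueue() {
    this.jobs = {};                     // image URL -> { scale, clip }
}
DecodeQueue.prototype.add = function(url, params) {
    this.jobs[url] = params;            // scheduled after layout
};
DecodeQueue.prototype.update = function(url, params) {
    if (this.jobs[url])
        this.jobs[url] = params;        // layout changed scale/clip before paint
};
DecodeQueue.prototype.remove = function(url) {
    delete this.jobs[url];              // image left the viewport / was never painted
};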

The other line to move is the finish line.  You could imagine that instead
of passing GraphicsContext commands directly to the raster backend
(CoreGraphics/cairo/skia/etc) you instead buffer the commands into a queue
and play them later.  Then you could start the decoding process when you
hit the WebCore paint call for the image and only block when passing the
buffered commands into the actual backend.  Buffering the GraphicsContext
commands could allow for a number of other optimizations (collapsing draws,
dropping fully obscured draw calls, etc.). The raster backend may even be
capable of taking a promise for decoded data instead of the data itself
which would let it push the finish line further back.  This last option is
what we're pursuing in skia as part of other optimizations, but might be
useful for other backends.

I think you have a lot of options here.  I want to emphasize that I think
the user experience is the most important thing to keep in mind with all of
this work.  Displaying the page faster is a great enhancement to a user's
happiness and improving our image decoding can definitely get us some big
wins.  I think any successful project here will have to be invisible to the
end user - we can't have pages flashing in unpredictably.

- James


 However, theoretically there must be some improvements. I want to
 discuss more about it while parallel image decoder is reviewed.

___
 webkit-dev mailing list
 webkit-dev@lists.webkit.org
 http://lists.webkit.org/mailman/listinfo/webkit-dev

___
webkit-dev mailing list
webkit-dev@lists.webkit.org
http://lists.webkit.org/mailman/listinfo/webkit-dev


Re: [webkit-dev] Status of multithreaded image decoding

2012-08-10 Thread James Robinson
On Thu, Aug 9, 2012 at 5:10 PM, Alpha Lam hc...@chromium.org wrote:

 Hi everyone!

 A few weeks ago some folks from company100.net have started a thread of
 multithreaded (parallel) image decoding in WebKit. We have worked together
 since then to get a better idea how to complete this feature. I would like
 to report on the progress and our plan.

 (The goal for Chromium is slightly different but it will reuse most of the
 architecture discussed here.)

 Master bug: https://bugs.webkit.org/show_bug.cgi?id=90375

 In WebKit today image rendering is progressive, meaning that image is
 rendered as bytes are received from the network. This is done through
 ImageObserver (http://trac.webkit.org/browser/trunk/Source/WebCore/platform/graphics/ImageObserver.h)
  and CachedImageClient (http://trac.webkit.org/browser/trunk/Source/WebCore/loader/cache/CachedImageClient.h)
  interfaces.
 Multithreaded image decoding uses the same notification architecture,
 clients of BitmapImage are notified when a new region is decoded and
 available for painting.

 Today image decoding happens synchronously when a BitmapImage is drawn
 into a GraphicsContext. We plan to use the draw operation as a trigger to
 start image decoding asynchronously. Which means the first draw of
 BitmapImage will get a transparent image, subsequent draws will have the
 most recently decoded bitmap.


I don't think this approach will work terribly well (at least not by
itself).  It seems to require that we flash at least one frame without the
image data even when we really do have it available.  For instance, imagine
a page with a number of small image resources (small enough that they all
load completely in roughly the same instant), or cached image resources,
that impact the layout of the page.  Today, when the images load we will
redo the layout to accomodate the resources size, then at paint time decode
the images and render them.  The user never sees an intermediate state
unless the resources isn't fully loaded at paint time.  With this proposal,
the user would always see a flash where the images occupy layout space but
are not actually rendered.  I think that will be an unacceptably bad user
experience.

To do this, I think you either need deeper integration with the raster
system to make sure that it actually renders the images on the first paint
(perhaps by deferring the actual raster ops to give the decoder some time),
or kicking off the decode steps sooner and adding synchronization at paint
time to make sure we actually see the pixels.

I'm really happy someone is looking into these difficult issues.

- James

This architecture will take several steps:

 https://bugs.webkit.org/show_bug.cgi?id=93467

 This modifies ImageSource to be asynchronous. ImageSource is used as the
 public interface for decoding images. By making this interface asynchronous
 individual port can implement parallel image decoding or a similar
 architecture. This change is ready for review. I would like to get more
 feedback on the interface since this touches all ports.

 https://bugs.webkit.org/show_bug.cgi?id=93590

 Progressive painting of an image may not be possible everywhere. For
 example canvas and accelerated-composited img requires synchronous
 decoding. We plan to keep synchronous decoding the default case. This
 change identifies code location where asynchronous decoding is possible and
 tell BitmapImage asynchronous image decoding is requested.

 After these two changes we will be able to implement multithreaded image
 decoder inside BitmapImage and ImageSource. I will report on the progress
 once we get to this point.

 Alpha

 ___
 webkit-dev mailing list
 webkit-dev@lists.webkit.org
 http://lists.webkit.org/mailman/listinfo/webkit-dev


___
webkit-dev mailing list
webkit-dev@lists.webkit.org
http://lists.webkit.org/mailman/listinfo/webkit-dev


Re: [webkit-dev] Can we distinguish imported tests in LayoutTests/css3 ?

2012-06-11 Thread James Robinson
There are plenty of non-text tests in fast/, even entire directories of
them (fast/repaint). My understanding of the meaning of fast/ is that it is
where new tests that are not imported should go. This meaning is not
universally applied.

- James
On Jun 11, 2012 7:33 PM, Mike Lawther mikelawt...@chromium.org wrote:

 I'm the guy that added css3/calc tests.

 I did so because I not all my tests are 'text only' tests, but I still
 wanted them all together. My understanding was that the 'fast' directory
 was intended for 'text only' tests only.

 Does having the 'fast' directory still serve a useful purpose? I reckon
 the original intent could be pushed into the tools, eg have a
 'new-run-webkit-tests --fast', which will only run text-only tests. Then
 the developer adding new tests doesn't have to worry about where to put
 them.

 mike

 On 11 June 2012 18:57, Ryosuke Niwa rn...@webkit.org wrote:

 Hi,

 I realized that there are a whole bunch of tests in LayoutTests/css3 that
 use layoutTestController (e.g. css3/calc, css3/filters, etc...), which
 appears to mean that they're our own tests. However, css3/selectors3 is an
 imported W3C test suite. It's very confusing to mix imported tests and
 WebKit's own tests. Can we put the imported W3C tests in imported-w3c
 directory as proposed earlier?

 Best,
 Ryosuke Niwa
 Software Engineer
 Google Inc.


 ___
 webkit-dev mailing list
 webkit-dev@lists.webkit.org
 http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev



 ___
 webkit-dev mailing list
 webkit-dev@lists.webkit.org
 http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev


___
webkit-dev mailing list
webkit-dev@lists.webkit.org
http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev


Re: [webkit-dev] Settings::devicePixelRatio and Settings::defaultDeviceScaleFactor

2012-05-31 Thread James Robinson
(top-posting to fit in)

Doesn't the same argument apply to *deviceScaleFactor?  That doesn't make
sense as a Setting either.

Pushing one or both out of settings doesn't answer the question of whether
they are redundant.  My gut feeling is that they are and we should fold
them together.  Alternately, delete defaultDeviceScaleFactor completely.

- James

On Thu, May 31, 2012 at 3:50 PM, Simon Fraser simon.fra...@apple.comwrote:

 I don't think devicePixelRatio() belongs in settings. It should either
 be obtained from the platform code in WebCore/WebKit, or pushed in via API.

 Simon

 On May 31, 2012, at 3:48 PM, Adam Barth wrote:

  Is there any difference between Settings::devicePixelRatio [1] and
  Settings::defaultDeviceScaleFactor [2] ?  They appear to be used by
  disjoint sets of ports.  Shall we delete one in favor of the other?
 
  Adam
 
  [1]
 http://trac.webkit.org/browser/trunk/Source/WebCore/page/Settings.h#L478
  [2]
 http://trac.webkit.org/browser/trunk/Source/WebCore/page/Settings.h#L106
  ___
  webkit-dev mailing list
  webkit-dev@lists.webkit.org
  http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev

 ___
 webkit-dev mailing list
 webkit-dev@lists.webkit.org
 http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev

___
webkit-dev mailing list
webkit-dev@lists.webkit.org
http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev


Re: [webkit-dev] Using ref tests for repaint bugs

2012-05-25 Thread James Robinson
On Fri, May 25, 2012 at 1:42 PM, Žan Doberšek zandober...@gmail.com wrote:



 On Fri, May 25, 2012 at 9:57 AM, Dominik Röttsches 
 dominik.rottsc...@intel.com wrote:

  Andrei,


 On 05/25/2012 02:43 AM, Andrei Bucur wrote:

 Ojan,

  As Simon states, some repaint tests will likely not be possible to
 write correctly as ref tests, but some of them I think they fit very well
 in the happy-no-pixel-test bucket :). If people decide it's a direction
 worth investigating, I'll give the idea a spin.


 I'd be interested to hear about your progress. Recently I was facing a
 couple of similar issues:
 https://bugs.webkit.org/show_bug.cgi?id=73409

 and then GTK decided to always do repaints before dumping pixel results:
 https://bugs.webkit.org/show_bug.cgi?id=86284
 which might not be the right solution either - so if you come up with
 something more fine-grained - that'd be great.


 Here's why a repaint is forced for the Gtk port before dumping pixel
 results:

 Gtk port uses a backing store which is updated every 0.017 seconds,
 i.e. 60 times per second, a reasonable interval.


This is reasonable for an actual browser, but not reasonable for
DumpRenderTree.

- James


 The problem is that many tests are structured in the way that they make an
 alteration that produces a layout change and then immediately finish. This
 leaves no time for the backing store to be updated. It would also be
 ridiculous to go around fixing tests by delaying the finish by any amount
 of time.

 Because of that a repaint is forced just before dumping the pixel results
 so the dirty regions of the backing store (only those and not the complete
 backing store) are updated.

 Regards,
 Zan




 Dominik


 ___
 webkit-dev mailing list
 webkit-dev@lists.webkit.org
 http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev



 ___
 webkit-dev mailing list
 webkit-dev@lists.webkit.org
 http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev


___
webkit-dev mailing list
webkit-dev@lists.webkit.org
http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev


Re: [webkit-dev] Using namespace std

2012-05-16 Thread James Robinson
On Wed, May 16, 2012 at 12:04 PM, Anders Carlsson ander...@apple.com wrote:


 On May 16, 2012, at 9:31 AM, Darin Adler da...@apple.com wrote:

  On May 16, 2012, at 9:20 AM, Allan Sandfeld Jensen wrote:
 
  there is another conflict which is entirely our own fault. It is
 between WTF::bind and the new std::bind from C++11
 
  We should find a good solution for this. I’d suggest talking with Anders
 Carlsson about it.

 I've run into this and had to use the fully qualified WTF::bind in those
 cases.

 FWIW, I don't think we really need using directives for the std namespace
 - the fully qualified name is short enough and I like the additional
 clarity that we're calling something in the standard library.


I would be strongly in favor of using fully qualified names for std classes
(std::bind instead of just bind).  This isn't a problem in other large
codebases, even ones that use far more types from std:: (like containers)
and that have column limits.

- James


 - Anders
 ___
 webkit-dev mailing list
 webkit-dev@lists.webkit.org
 http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev

___
webkit-dev mailing list
webkit-dev@lists.webkit.org
http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev


Re: [webkit-dev] github mirror

2012-04-24 Thread James Robinson
On Tue, Apr 24, 2012 at 7:15 AM, Tor Arne Vestbø
tor.arne.ves...@nokia.com wrote:

 On 24.04.12 16:04, ext Shezan Baig wrote:

  On Tue, Apr 24, 2012 at 9:55 AM, Adam Roben aro...@webkit.org wrote:

 In what situation does this cause issues?


 Probably the biggest issue is for people who've been using
 git.webkit.org and now want to try out GitHub. Since the commits are
 distinct between the two repositories, they have to do a full clone to
 make the switch.



 In theory though, these users should be able to just add a remote to
 their existing clone.  Then it will just sync the commit objects, and
 not the trees and blobs.  Not ideal, they would have two different
 'masters', but still doable, and not *that* much of an overhead.
 Switching between the different masters should also be fast since the
 trees will be the same.


 Right, a fetch should ideally just pull down the commit objects, but it
 appears git does not have this optimization. If it did, I don't think the
 issue of two remote masters would be that big, since you would at some
 point likely transition to use one of the mirrors anyways. And if not,
 having multiple mirrors/remotes should be fine -- I'm using both the github
 and gitorious mirror without any issues.


  But I agree these two repos should probably merge sooner rather than
 later, just to avoid confusion for new users etc :)


 I would support that if it means cleaning up the author-script (which I'm
 happy to do), and using that on webkit.org.


Whatever we decide to do in the future, author rewriting seems like
extremely low value compared to having matching SHA1s.  I think we should
get a clone on github.com that matches the existing git.webkit.org SHA1s
and then make sure that they stay in sync (either with rewriting or not,
but whatever webkit.org does).

- James



 tor arne


  ___
  webkit-dev mailing list
  webkit-dev@lists.webkit.org
  http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev

___
webkit-dev mailing list
webkit-dev@lists.webkit.org
http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev


Re: [webkit-dev] Git/SVN is slow

2012-03-14 Thread James Robinson
On Wed, Mar 14, 2012 at 2:08 PM, William Siegrist wsiegr...@apple.com wrote:

 On Mar 14, 2012, at 1:47 PM, Robert Hogan li...@roberthogan.net wrote:

  On Wednesday 14 March 2012 08:44:46 Ashod Nakashian wrote:
  From: Nikolas Zimmermann zimmerm...@physik.rwth-aachen.de
  To: WebKit Development webkit-dev@lists.webkit.org
  Cc:
  Sent: Wednesday, March 14, 2012 1:31 AM
  Subject: [webkit-dev] Git/SVN is slow
 
  Good morning WebKit crowd,
 
  since at least some weeks, updating git.webkit.org and/or
  svn.webkit.org is extremely slow for me.
  I have a git fetch rate below 3KiB/s average - surfing the web and/or
  downloading anything else is acceptable with my 10MBit connection.
  I'm located near Cologne, in Germany -- do any other european
  WebKittens suffer from the same problems?
 
 
  It has always been pretty slow for me too (Ireland), occasionally as bad
 as
  you describe but more often in the 80-100KiB/s average. Thanks for
 sharing
  the chromium mirror tip.
 


 Our network provider is looking into our bandwidth problems. It looks like
 this may affect more than just git or svn.


For the record this is not only an issue with overseas connections.   I've
seen extremely slow pull times as well and lots of 500s from various
webkit.org servers.  I'm in Mountain View, CA and have what I would
consider to be ample bandwidth available to the internet.

- James


 -Bill

 ___
 webkit-dev mailing list
 webkit-dev@lists.webkit.org
 http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev

___
webkit-dev mailing list
webkit-dev@lists.webkit.org
http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev


Re: [webkit-dev] Please set the svn:mime-type property on binary files before committing

2012-03-07 Thread James Robinson
On Wed, Mar 7, 2012 at 3:21 PM, Dan Bernstein m...@apple.com wrote:

 Please set the svn:mime-type property on binary files that you add to the
 tree, such as *-expected.png, before committing. Otherwise the resulting
 webkit-changes message will include those files as text, which is
 inconvenient.


Step 3 of
http://trac.webkit.org/wiki/CommitterTips#Walkingyouthroughyourfirstcommit
should take care of this
(http://trac.webkit.org/wiki/UsingGitWithWebKit#Misc.TipsandTricks has
slightly more verbose directions).

- James



 Thanks.

 ___
 webkit-dev mailing list
 webkit-dev@lists.webkit.org
 http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev


___
webkit-dev mailing list
webkit-dev@lists.webkit.org
http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev


Re: [webkit-dev] Initial painting time changed?

2012-01-30 Thread James Robinson
On Mon, Jan 30, 2012 at 3:56 AM, Nikolas Zimmermann 
zimmerm...@physik.rwth-aachen.de wrote:

 Good evening WebKit folks,

 while working on some SVG Filter dynamic update issues I found out that
 long-standing assumptions about the paint time don't hold anymore:

 <svg>
 <rect id="rect" width="1" height="100"/>
 <script>
 function doTest() {
   document.getElementById("rect").setAttribute("width", "100");
   layoutTestController.notifyDone();
 }

 layoutTestController.waitUntilDone();
 setTimeout(doTest, 0);
 </script>
 </svg>

 The document was laid out and painted once before the timeout fired. Most
 SVG repainting tests work this way: setup document initially, then use a
 setTimeout(doSomething, 0) to eg. change an attribute, then dump the
 document to capture that invalidations/repaints work properly.


That's racy and doesn't match what actual browsers will do.


 Something has changed recently - to get the same effect as we used to,
 following is needed:
 <script>
 function doTest() {
   setTimeout(function() { // EXTRA TIMEOUT 1
     document.getElementById("rect").setAttribute("width", "100");
     setTimeout(function() { layoutTestController.notifyDone(); }, 0); // EXTRA TIMEOUT 2
   }, 0);
 }

 layoutTestController.waitUntilDone();
 setTimeout(doTest, 0);
 </script>


This doesn't look any better.

If you want to paint at a specific time in a repaint test, use
layoutTestController.display().

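
Concretely, the non-racy shape is something like this (rough sketch; display(),
waitUntilDone() and notifyDone() are existing layoutTestController methods):

if (window.layoutTestController) {
    layoutTestController.waitUntilDone();

    window.onload = function() {
        layoutTestController.display();   // force the initial paint at a known point

        // Make the change whose invalidation/repaint the test wants to capture.
        document.getElementById("rect").setAttribute("width", "100");

        // The final dump paints again, so no timer is needed before finishing.
        layoutTestController.notifyDone();
    };
}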


 Does that ring a bell to anyone? Why is the initial painting not done yet,
 if setTimeout(, 0) fires the first time?
 There's also an extra setTimeout needed now, before calling notifyDone(),
 otherwise the repaint is not captured.

 Currently most SVG tests that exercise dynamic updates are useless, as the
 painting always happens after the whole script ran, that means it's not
 possible to capture repaint problems at the moment w/o fixing all these
 tests.


I think you should fix the tests.

- James



 Cheers,
 Niko


 ___
 webkit-dev mailing list
 webkit-dev@lists.webkit.org
 http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev

___
webkit-dev mailing list
webkit-dev@lists.webkit.org
http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev


Re: [webkit-dev] Initial painting time changed?

2012-01-30 Thread James Robinson
On Mon, Jan 30, 2012 at 12:19 PM, Nikolas Zimmermann 
zimmerm...@physik.rwth-aachen.de wrote:


 On 30.01.2012, at 19:38, James Robinson wrote:


 The document was laid out and painted once before the timeout fired. Most
 SVG repainting tests work this way: setup document initially, then use a
 setTimeout(doSomething, 0) to eg. change an attribute, then dump the
 document to capture that invalidations/repaints work properly.


 That's racy and doesn't match what actual browsers will do.

  Not correct, this was never racy, and it has been the usual way we've been
  testing for half a decade.
 This has changed recently w/o a note, which is not acceptable.

 If you want to paint at a specific time in a repaint test, use
 layoutTestController.display().

  That's only an option if we could use it to force repainting w/o painting the
  gray rectangle first (e.g. add a bool flag to display() that toggles graying
  the view first or not).



 Does that ring a bell to anyone? Why is the initial painting not done
 yet, if setTimeout(, 0) fires the first time?
 There's also an extra setTimeout needed now, before calling notifyDone(),
 otherwise the repaint is not captured.

 Currently most SVG tests that exercise dynamic updates are useless, as
 the painting always happens after the whole script ran, that means it's not
 possible to capture repaint problems at the moment w/o fixing all these
 tests.


 I think you should fix the tests.


 Well, I still think such a change in behavior should be discussed first,
 w/o breaking existing test coverage.
 It's also a bit much to ask me to fix sth. around 500-1000 tests?


I didn't see any regressions, pixel tests or otherwise, when running with
this patch, nor did the bots.  What platform / test configuration are you
seeing issues on?

This should not be an observable change in behavior.  setTimeout() vs
painting behavior has never been guaranteed in DRT or in actual browsers.

- James


 Cheers,
 Niko



 ___
 webkit-dev mailing list
 webkit-dev@lists.webkit.org
 http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev


___
webkit-dev mailing list
webkit-dev@lists.webkit.org
http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev


Re: [webkit-dev] Question regarding rebaseline for Layout Test Result for EFL port

2012-01-23 Thread James Robinson
On Mon, Jan 16, 2012 at 12:02 AM, Gyuyoung Kim gyuyo...@gmail.com wrote:

 Hello WebKit folks.

  It looks like the layout test results for the EFL port need a rebaseline
  because of r101343. The revision modified the line spacing of fonts in
 SimpleFontDataFreeType.cpp, and it seems that the existing layout test
 result of EFL port was influenced by it. (
 http://trac.webkit.org/changeset/101343)

 3,936 test cases are failed after the revision (101343).

 * EFL port's layout test result with r101343

   = Results: 16139/27915 tests passed (57.8%)
   = Tests to be fixed (11776):
 4423 text diff mismatch   (37.6%)
 9 image mismatch   ( 0.1%)
 7344 skipped  (62.4%)


 * EFL port's layout test Result without r101343

   = Results: 20074/27915 tests passed (71.9%)
   = Tests to be fixed (7841):
 1 test timed out   ( 0.0%)
 487 text diff mismatch   ( 6.2%)
 9 image mismatch   ( 0.1%)
 7344 skipped  (93.7%)


  It looks like the GTK port also did a rebaseline due to the revision.
  - http://trac.webkit.org/changeset/101354
  - http://trac.webkit.org/changeset/101352
  - http://trac.webkit.org/changeset/101351
  - And so on.

  I think the EFL port also needs a rebaseline of its layout test results. The GTK
  port did its rebaseline without review because the patch was too huge. Is there
  any process for rebaselining?


In Chromium, rebaselines are nearly always landed by Chromium committers
without review.  If there is some doubt on whether a rebaseline is correct
or not it's expected that the committer will check with the appropriate
party, but we don't require the full review process.

I'd expect other ports to follow a similar guideline for their own baselines.

- James



 - gyuyoung

 ___
 webkit-dev mailing list
 webkit-dev@lists.webkit.org
 http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev


___
webkit-dev mailing list
webkit-dev@lists.webkit.org
http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev


Re: [webkit-dev] layout tests: how are some compared against image, and others only text?

2012-01-04 Thread James Robinson
On Wed, Jan 4, 2012 at 3:39 PM, Elliot Poger epo...@chromium.org wrote:

 What is it that causes some tests to require baseline images (and not text
 files) for comparison, while others require text and not image baselines?

 (I know that I can specifically SKIP comparison against IMAGE and/or TEXT
 using test_expectations.txt... but even without the use of
 test_expectations, I believe that some tests are compared against only text
 or only image.)

 As an example, I see that this test has only image baselines and no text
 baselines:

 $ ls LayoutTests/platform/*/fast/canvas/canvas-text-baseline*.png | wc -l
 10
 $ ls LayoutTests/platform/*/fast/canvas/canvas-text-baseline*.txt | wc -l
 ls: cannot access
 LayoutTests/platform/*/fast/canvas/canvas-text-baseline*.txt: No such file
 or directory

0


Wrong, you've missed some:

 $ find LayoutTests/ -name canvas-text-baseline*
LayoutTests/platform/chromium-linux/fast/canvas/canvas-text-baseline-expected.png
LayoutTests/platform/gtk/fast/canvas/canvas-text-baseline-expected.png
LayoutTests/platform/qt/fast/canvas/canvas-text-baseline-expected.png
LayoutTests/platform/efl/fast/canvas/canvas-text-baseline-expected.png
LayoutTests/platform/chromium-win/fast/canvas/canvas-text-baseline-expected.png
LayoutTests/platform/chromium-gpu-linux/fast/canvas/canvas-text-baseline-expected.png
LayoutTests/platform/chromium-gpu-win/fast/canvas/canvas-text-baseline-expected.png
LayoutTests/platform/mac/fast/canvas/canvas-text-baseline-expected.png
LayoutTests/fast/canvas/canvas-text-baseline.html
LayoutTests/fast/canvas/canvas-text-baseline-expected.txt   <-- this one applies


 while this test has only text baselines and no image baselines:

 $ ls LayoutTests/platform/*/fast/canvas/canvas-lineWidth*.png | wc -l
 ls: cannot access
 LayoutTests/platform/*/fast/canvas/canvas-lineWidth*.png: No such file or
 directory
 0
 $ ls LayoutTests/platform/*/fast/canvas/canvas-lineWidth*.txt | wc -l
 5


This test calls layoutTestController.dumpAsText() (in js-test-pre.js), so
there's no .png.

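So whether pixel baselines exist is mostly decided inside the test itself:

// Text-only test: no -expected.png is generated or compared.
if (window.layoutTestController)
    layoutTestController.dumpAsText();

// A test that doesn't call dumpAsText() dumps the render tree plus pixels
// (when pixel tests are enabled), so it needs per-platform -expected.png
// baselines as well.
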
- James


 Is there something inherent in each test that indicates whether its
 results will be compared against image and/or text baselines?  Or is it
 simply a matter of what baseline files are found to compare against?


 ___
 webkit-dev mailing list
 webkit-dev@lists.webkit.org
 http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev


___
webkit-dev mailing list
webkit-dev@lists.webkit.org
http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev


Re: [webkit-dev] New feature flag proposal: Joystick API

2011-10-06 Thread James Robinson
On Thu, Oct 6, 2011 at 11:07 AM, Alexey Proskuryakov a...@webkit.org wrote:


  I share Simon's concern that providing low-level access to every
 possible controller creates fragmentation, with purportedly HTML content
 that only works on a few devices. There is no clear cut border here - it's
 been mentioned that even touch events can be seen as rare - and then I
 advocate that adding more mouse specific events is a bad idea for the same
 reason.

 As we add joystick/gamepad support, should steering wheels be next on the
 agenda? 3d mice?


Would you mind raising this concern in the relevant standards body (in this
case, webevents-pub...@w3.org seems to be relevant)?  Debating issues with
standards on webkit-dev is unproductive - it does not include all parties
who have an interest, in particular other browser vendors and web
developers, and it does include a lot of WebKit developers who probably
aren't interested.

- James




  - WBR, Alexey Proskuryakov

  On 06.10.2011, at 10:01, Darin Fisher wrote:

 This proposal has matured somewhat, so an update is in order.

 FYI:
 http://dvcs.w3.org/hg/webevents/raw-file/default/gamepad.html
 http://www.w3.org/2010/webevents/charter/2011/Overview.html

 We are working behind the ENABLE(GAMEPAD) flag at the moment.  Mozilla is
 also building a prototype of this API.

 We are doing development on the trunk (disabled by default) so that we can
 more easily solicit game developers for feedback using our existing Chrome
 distribution channels.  This feature should have a very light touch on
 existing cross platform code.

 We encourage folks to share feedback on the API design to
 webevents-pub...@w3.org.
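
 For a sense of the usage model, polling-style access as it eventually shipped
 looks roughly like this (navigator.getGamepads() is the accessor that was
 standardized later; the draft above predates that exact shape):

 function pollGamepads() {
   var pads = navigator.getGamepads ? navigator.getGamepads() : [];
   for (var i = 0; i < pads.length; i++) {
     var pad = pads[i];
     if (!pad)
       continue;
     if (pad.buttons[0] && pad.buttons[0].pressed)
       console.log(pad.id + ": button 0 pressed");
   }
   requestAnimationFrame(pollGamepads);
 }
 requestAnimationFrame(pollGamepads);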

 Regards,
 -Darin


 On Wed, Aug 24, 2011 at 9:19 AM, Simon Fraser simon.fra...@apple.comwrote:

 I think it's too early to implement this. We should wait until it's a W3C
 draft at least.

  window.addEventListener("MozJoyConnected"..), really?

 Simon

 On Aug 24, 2011, at 8:36 AM, Scott Graham wrote:

  Hi,
 
  I wanted to let everyone know that I propose to add a new feature
  flag, JOYSTICK. http://webkit.org/b/66859
 
  This flag will enable an API and events for accessing joysticks and
  related devices. There's a prototype effort happening in Mozilla also
  (https://wiki.mozilla.org/JoystickAPI), and the design is intended to
  be similar.
 
  As it will not necessarily make sense for all ports, nor be
  implemented immediately in all ports, a feature flag seems
  appropriate.
 
  Please let me know if you have any concerns or comments.
 
  Thanks
  ___
  webkit-dev mailing list
  webkit-dev@lists.webkit.org
  http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev

 ___
 webkit-dev mailing list
 webkit-dev@lists.webkit.org
 http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev


 ___
 webkit-dev mailing list
 webkit-dev@lists.webkit.org
 http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev




 ___
 webkit-dev mailing list
 webkit-dev@lists.webkit.org
 http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev


___
webkit-dev mailing list
webkit-dev@lists.webkit.org
http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev


Re: [webkit-dev] New feature flag proposal: Joystick API

2011-10-06 Thread James Robinson
On Thu, Oct 6, 2011 at 2:07 PM, Alexey Proskuryakov a...@webkit.org wrote:


  On 06.10.2011, at 13:49, Scott Graham wrote:

 The first revision of the spec (from the Scope section) is intended to
 handle:

 ... support for devices common to current gaming systems including
 gamepads, directional pads, joysticks, wheels, pedals, accelerometers.


 Why does the spec title and abstract talk about gamepads (joysticks)
 only? Perhaps it's my mistake that I didn't read the scope section, but with
 title and abstract being so specific, that seemed unnecessary.

 Skipping scope section, I went right to IDL. Why is the interface called
 Gamepad if it's not only about gamepads?


I hate to repeat myself but this feedback really should go to the working
group defining the specification.

- James



  - WBR, Alexey Proskuryakov


 ___
 webkit-dev mailing list
 webkit-dev@lists.webkit.org
 http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev


___
webkit-dev mailing list
webkit-dev@lists.webkit.org
http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev


Re: [webkit-dev] Please use platform prefixes in bug titles

2011-10-05 Thread James Robinson
On Wed, Oct 5, 2011 at 5:43 PM, Simon Fraser simon.fra...@apple.com wrote:

 On Oct 5, 2011, at 5:37 PM, Darin Adler wrote:

  Hi folks.
 
  It really helps frequent reviewers like me if you use platform prefixes
 on bugs. A platform prefix indicates that the bug only affects code on a
 particular platform. For example:
 
 [Mac] Use four more named cursors if present
 
  The above title tells you the patch is Mac-specific. Or:
 
 [Chromium] Switch threaded compositor from a compile time option to a
 run time one
 
  The above title tells you the patch is Chromium-specific. There are
 various other common prefixes, such as [Skia] or [Qt].
 
  Including a prefix on a bug title shows courtesy for others not working
 on your platform and is good for sharing the WebKit project. You should
 leave off a prefix if the patch includes cross-platform changes as well as
 platform-specific changes.

 Why don't we just fix bugzilla to have a useful list of ports in one of
 the Platform popups?


That would certainly be nice, but it wouldn't be as useful as having the
platform in the bug description.  The Platform field is not very obvious
in the bugzilla UI and doesn't show up at all in bugzilla-generated emails,
ChangeLogs, or commit messages.  Many of our bugs are filed via tools like
webkit-patch or sheriffbot which could be taught about the Platform entry,
but doing so would decrease the usability of the tools and I suspect many
people would get it wrong.

- James


 Simon

 ___
 webkit-dev mailing list
 webkit-dev@lists.webkit.org
 http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev

___
webkit-dev mailing list
webkit-dev@lists.webkit.org
http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev


Re: [webkit-dev] Change to style guideline: should use type instead of type* for out arguments

2011-10-04 Thread James Robinson
On Tue, Oct 4, 2011 at 2:06 PM, Ryosuke Niwa rn...@webkit.org wrote:

 Hi,

 It came to my attention that some people are using raw pointers to pass
 out-arguments (e.g. bug 69366). In my understanding, we use pass by
 reference for out arguments when they have to be modified in callees.

  If there's no objection, I'm going to file a bug & upload a patch to state
 this explicitly in the style guideline.


Could you explain why?  Is it to document the nullity of the out-param?

I personally find pointers for out parameters to be appropriate in many
situations.  It makes the side effects of manipulating the parameter more
obvious, and it provides a clear way for the caller to indicate that they
don't care about a certain out parameter.  It sounds like you have reasons
to prefer the opposite behavior.

- James



 Best,
 Ryosuke Niwa
 Software Engineer
 Google Inc.


 ___
 webkit-dev mailing list
 webkit-dev@lists.webkit.org
 http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev


___
webkit-dev mailing list
webkit-dev@lists.webkit.org
http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev


Re: [webkit-dev] Enable REQUEST_ANIMATION_FRAME on all ports? (was Re: ENABLE flag cleanup strawman proposal)

2011-09-27 Thread James Robinson
On Tue, Sep 27, 2011 at 10:34 AM, Chris Marrin cmar...@apple.com wrote:


 On Sep 26, 2011, at 9:48 PM, James Robinson wrote:
 
  What happened at frame 9?  Instead of advancing by 15 milliseconds worth,
 the bird jumped forward by twice the normal amount.  Why?  We ran the rAF
 callback twice between frames 8 and 9 - once at 135ms and once at 150ms.
  What's actually going on here is we're accumulating a small amount of drift
 on every frame (1.6... milliseconds, to be precise) between when the
 display is refreshing and when the callbacks are being invoked.  This has to
 catch up sometime so we end up with a beat pattern every (16 2/3) / abs(16
 2/3 - 15) = 10 frames.  The same thing happens with a perfect 16ms timer
 every 25 frames, or with a perfect 17ms timer every 50 frames.  Even a very
 close timer will produce these regular beat patterns and as it turns out the
 human eye is incredibly good at picking out and getting annoyed by these
 effects in an otherwise smooth animation.

 I generally agree with your analysis, but I believe your example is
 misleading. Skipping a frame would only cause the bird to jump by 30 units
 rather than 15 if you were simply adding 15 units to its position on every
 call to rAF. But that would make the rate of movement of the bird change
 based on the rate at which rAF is called, and that would be poor design. If
 an implementation decided to call rAF at 30ms intervals (due to system load,
 for instance) then the bird would appear to move half as fast, which isn't
 what you want.

 Assuming you're basing the position on the time at which the animation
 started, then the bird's apparent rate will not change depending on the rate
 at which rAF is firing.

 With that said, I agree with you that there will still be a visual glitch
 in the current implementation. But what's actually happening is that the
 timestamp we're sending to rAF is wrong. We're sending current time.
 Depending on when rAF fires relative to the display refresh, the timestamp
 might be as much as 16ms behind the time the frame is actually seen. If
 you're basing motion on this timestamp, there will be an occasion when one
 frame will have a timestamp that is very close to the display time and the
 next will have a timestamp that is 15ms or so behind. That's why the glitch
 is happening.


I'm assuming in this example that the script is changing the position of the
bird to match the timestamp parameter passed in. You are correct in saying
that changing the timestamp parameter to reflect the next display time would
get rid of the visual glitch in this example.  In that case the behavior
between frames 8 and 10 would be:

time (millis) : action
120: rAF fired with timestamp 133 1/3
133 1/3: frame 8 produced
135: rAF fired with timestamp 150
150: rAF fired with timestamp 150
 150 0/3: frame 9 produced
165: rAF fired with timestamp 166 2/3
166 2/3: frame 10 produced

The problem here is that in the real world, frames aren't infinitely cheap
to produce and so attempting to run the rAF callback twice between frames 8
and 9 is just as likely to produce a rendering glitch as the problem in the
original example - even though the timestamp is correct.  In order to keep
the animation running smoothly here it's necessary to keep the timestamp and
the scheduling in sync with the actual display rate.


 So I don't believe this has anything to do with Timers per se, but with the
 wrong timestamp we happen to be sending to rAF. We knew this would happen
 and we chose this method because it gave us a nice simple first
 implementation. I still believe it's a fine implementation and it is
 platform independent, so it allows all the ports to support rAF.


 
  For this reason, you really need a precise time source that is tied in to
 the actual display's refresh rate.  Not all displays are exactly 60Hz - at
 smaller form factors 50 or even 55hz displays are not completely unheard of.
  Additionally the normal clock APIs aren't always precise enough to stay in
 sync with the actual display - particularly on windows it's really hard to
 find a clock that doesn't drift around all over the place.
 
  The above analysis assumes that all calls are infinitely fast and there's
 no real contention for system resources.  In practice, though, this is
 rarely the case.  It's not uncommon that the system will temporarily get
 overloaded and has to make tradeoffs between maintaining a high framerate
 and remaining responsive to user input.  In Chromium, we have some logic to
 ensure that we load balance between handling input events and painting to
 ensure that processing one type doesn't completely starve the other.  In a
 multi-process environment, such as WebKit2 or Chromium, there needs to be
 coordination between the two processes in the non-composited path in order
 to paint a bitmap and get it onscreen.  If this logic is all operating
 completely independently from the rAF scheduling then it's very easy to end
 up triggering

Re: [webkit-dev] Enable REQUEST_ANIMATION_FRAME on all ports? (was Re: ENABLE flag cleanup strawman proposal)

2011-09-26 Thread James Robinson
On Sun, Sep 25, 2011 at 6:52 PM, Darin Adler da...@apple.com wrote:

 On Sep 25, 2011, at 12:20 AM, James Robinson wrote:

  The TIMER based support for RAF is very new (only a few weeks old) and
 still has several major bugs. I'd suggest letting it bake for a bit before
 considering turning it on for all ports.

 Got it.

  Fundamentally I don't think this feature can be implemented reasonably
 well with just timers, so port maintainers should take a really careful look
 at the level of support they want to have for this feature when deciding if
 they want to support it.

 This may contradict the recommendation above. If the timer-based version is
 too low quality then maybe we shouldn’t put ports in the position of
 shipping with a substandard implementation rather than simply having the
 feature omitted.


Perhaps if I expand on my concerns a bit it'll be clearer what the right
option is.

The goal of requestAnimationFrame is to allow web authors to have
high-quality script-driven animations.  To use a concrete example, when
playing angry birds (http://chrome.angrybirds.com/) and flinging a bird
across the terrain, the RAF-based animation should move the bird at a
uniform rate across the screen at the same framerate as the physical display
without hitches or interruptions.  An additional goal is that we shouldn't
do any unnecessary work for frames that do not show up on screen, although
it's generally necessary to do this in order to satisfy the first goal as
I'll show below.  There are two main things that you need in order to
achieve this that are difficult or impossible to do with a WebCore Timer: a
reliable display-rate aligned time source, and a source of feedback from the
underlying display mechanism.

The first is easiest to think about with an example.  When the angry bird
mentioned above is flying across the screen, the user should experience the
bird advancing by the same amount every time their display's update
refreshes.  Let's assume a 60Hz display and a 15ms timer (as the
current REQUEST_ANIMATION_FRAME_TIMER code uses), and furthermore assume
(somewhat optimistically) that every frame takes 0ms to process in
javascript and 0ms to display.  The screen will update at the following
times (in milliseconds): 0, 16 2/3, 33 1/3, 50, 66 2/3, 83 1/3, 100, etc.
 The visual X position of the bird on the display is directly proportional
to the time elapsed when the rAF handler runs, since it's interpolating the
bird's position, and the rAF handler will run at times 0, 15, 30, 45, 60,
etc.  We can thus determine the visual X position of the bird for each
frame:

Frame 0, time 0 ms, position: 0, delta from last frame: n/a
Frame 1, time 16 2/3 ms, position: 15, delta from last frame: 15
Frame 2, time 33 1/3 ms, position: 30, delta from last frame: 15
Frame 3, time 50 ms, position: 45, delta from last frame: 15
Frame 4, time 66 2/3 ms, position: 60, delta from last frame: 15
Frame 5, time 83 1/3 ms, position: 75, delta from last frame: 15
Frame 6, time 100 ms, position: 90, delta from last frame: 15
Frame 7, time 116 2/3 ms, position: 105, delta from last frame: 15
Frame 8, time 133 1/3 ms, position: 120, delta from last frame: 15
Frame 9, time 150 ms, position: 150, delta from last frame: 30 (!)
Frame 10, time 166 2/3 ms, position: 165, delta from last frame: 15
Frame 11, time 183 1/3 ms, position: 180, delta from last frame: 15
Frame 12, time 200 ms, position: 195, delta from last frame: 15

What happened at frame 9?  Instead of advancing by 15 milliseconds worth,
the bird jumped forward by twice the normal amount.  Why?  We ran the rAF
callback twice between frames 8 and 9 - once at 135ms and once at 150ms.
What's actually going on here is we're accumulating a small amount of drift
on every frame (1 2/3 milliseconds, to be precise) between when the
display is refreshing and when the callbacks are being invoked.  This has to
catch up sometime so we end up with a beat pattern every (16 2/3) / abs(16
2/3 - 15) = 10 frames.  The same thing happens with a perfect 16ms timer
every 25 frames, or with a perfect 17ms timer every 50 frames.  Even a very
close timer will produce these regular beat patterns and as it turns out the
human eye is incredibly good at picking out and getting annoyed by these
effects in an otherwise smooth animation.
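
For reference, the beat period can be written in closed form (this just
restates the figures above, and reconciles them with the table: counted in
timer periods it gives the 10/25/50 numbers, counted in display refreshes it
gives the jump at frame 9):

\[
T_{\mathrm{beat}} \;=\; \frac{T_{\mathrm{display}}\,T_{\mathrm{timer}}}{\lvert T_{\mathrm{display}} - T_{\mathrm{timer}}\rvert},
\qquad
\frac{T_{\mathrm{beat}}}{T_{\mathrm{timer}}} \;=\; \frac{T_{\mathrm{display}}}{\lvert T_{\mathrm{display}} - T_{\mathrm{timer}}\rvert},
\qquad
\frac{T_{\mathrm{beat}}}{T_{\mathrm{display}}} \;=\; \frac{T_{\mathrm{timer}}}{\lvert T_{\mathrm{display}} - T_{\mathrm{timer}}\rvert}.
\]

With T_display = 16 2/3 ms this gives 150 ms for the 15 ms timer (10 timer
callbacks, i.e. every 9th refresh - the jump at frame 9 above), 400 ms for a
perfect 16 ms timer (25 callbacks) and 850 ms for a perfect 17 ms timer (50
callbacks).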

For this reason, you really need a precise time source that is tied in to
the actual display's refresh rate.  Not all displays are exactly 60Hz - at
smaller form factors 50 or even 55hz displays are not completely unheard of.
 Additionally the normal clock APIs aren't always precise enough to stay in
sync with the actual display - particularly on windows it's really hard to
find a clock that doesn't drift around all over the place.
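
To see how sensitive this is, here is a small self-contained simulation (a
sketch for illustration, not WebKit code) of a fixed-period timer driving a
fixed-rate display; with the 15 ms / 16 2/3 ms values above it reports a
doubled callback every ninth refresh, and changing displayInterval or
timerInterval shows the same beat pattern appearing at other rates:

#include <cstdio>

int main()
{
    const double displayInterval = 1000.0 / 60.0; // 16 2/3 ms per refresh
    const double timerInterval = 15.0;            // the REQUEST_ANIMATION_FRAME_TIMER period
    double nextCallback = timerInterval;          // the callback at t=0 belongs to frame 0
    for (int frame = 1; frame <= 60; ++frame) {   // simulate one second of refreshes
        double frameTime = frame * displayInterval;
        int callbacks = 0;
        while (nextCallback <= frameTime + 1e-9) { // epsilon tolerates rounding at exact coincidences
            ++callbacks;
            nextCallback += timerInterval;
        }
        if (callbacks != 1)
            std::printf("frame %d (t = %.2f ms) consumed %d callbacks\n", frame, frameTime, callbacks);
    }
    return 0;
}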

The above analysis assumes that all calls are infinitely fast and there's no
real contention for system resources.  In practice, though, this is rarely
the case.  It's not uncommon that the system will temporarily get overloaded

Re: [webkit-dev] Enable REQUEST_ANIMATION_FRAME on all ports? (was Re: ENABLE flag cleanup strawman proposal)

2011-09-25 Thread James Robinson
On Sat, Sep 24, 2011 at 11:29 PM, Adam Barth aba...@webkit.org wrote:

 On Sat, Sep 24, 2011 at 11:21 PM, Adam Barth aba...@webkit.org wrote:
  On Thu, Sep 15, 2011 at 11:06 AM, Adam Barth aba...@webkit.org wrote:
  On Wed, Sep 14, 2011 at 11:06 PM, Darin Fisher da...@chromium.org
 wrote:
  What about ENABLE(REQUEST_ANIMATION_FRAME)?
 
  That might make sense to always enable.  It seems to be very popular.
 
  REQUEST_ANIMATION_FRAME appears not to be enabled for GTK, Qt, or
  AppleWin.  Is there port-specific work that needs to be done to enable
  the feature for those ports?

 Looking at the code briefly, it appears that the
 USE(REQUEST_ANIMATION_FRAME_TIMER) flavor doesn't require
 port-specific work, but the non-timer version had a dependency on
 ChromeClient::scheduleAnimation().

 I'll try to cook up a patch that enables REQUEST_ANIMATION_FRAME
 everywhere and uses USE(REQUEST_ANIMATION_FRAME_TIMER) on ports
 without ChromeClient::scheduleAnimation().


The TIMER based support for RAF is very new (only a few weeks old) and still
has several major bugs.  I'd suggest letting it bake for a bit before
considering turning it on for all ports.  Fundamentally I don't think this
feature can be implemented reasonably well with just timers, so port
maintainers should take a really careful look at the level of support they
want to have for this feature when deciding if they want to support it.

- James



 Adam
 ___
 webkit-dev mailing list
 webkit-dev@lists.webkit.org
 http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev

___
webkit-dev mailing list
webkit-dev@lists.webkit.org
http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev


Re: [webkit-dev] New feature announcement - Implement HTML5 Microdata in WebKit

2011-09-22 Thread James Robinson
On Thu, Sep 22, 2011 at 2:32 PM, Charles Pritchard ch...@jumis.com wrote:

 On 9/22/2011 2:13 PM, Ian Hickson wrote:

 On Fri, 23 Sep 2011, Dean Jackson wrote:

 However, isn't prefixing designed to avoid incompatibilities in spec
 changes, not incompatibilities between implementations? Ensuring no
 conflicts in implementations doesn't matter too much if the spec
 changes.

 It's designed to ensure that authors can reliably use a name and expect to
 get the same result in any UA that supports that name.

 I'm not going to change the spec in a way that conflicts with that -- if
 the spec has to change, it'll change either in a compatible way (e.g. to
 match what was actually implemented), or in a way that doesn't conflict
 (e.g. by changing the name in the spec).


  Note I'm not talking about Microdata in particular. I don't even know
 what that spec is :) I'm just talking about the general approach. If the
 world can guarantee that this spec will never change, then I guess your
 technique works.

 FWIW, there is an in-between approach, which is the one used by WebGL.
 It defines a prefix that all implementations share.

 canvas.getContext("experimental-webgl");

 That'll just result in that name becoming the standard.


 I would like some kind of fast track method for these kind of issues.
 Something like a Request for dropping prefix RfDP protocol would be
 super.


Please post this feedback to some thread where it's relevant, not on a
WebKit development mailing list discussion about a specific feature.

- James



 RfDP: Microdata. First the spec editor would have to vouch for it, then,
 if Moz, MS, Opera, Apple and Google reps can give a nod within a few weeks,
 we've got something.

 I'd really like to avoid repeats of  the CSS -vnd-transform baggage, when
 possible.
 WebKit went back and forth on BlobBuilder. Now it's at:
 WebKitBlobBuilder. That was not so fun.
 That's another situation I'd like to avoid.

 For this particular method, the microdata section, I'm happy enough hearing
 that the spec editor will vouch for it.
 If that's the precedent, I'll take it. I'd like to learn how we can build
 on that precedent.

 Reps from the major vendors have been quite responsive this year. I know
 they can't commit to supporting
 an API in a short time frame (such as the File API), but they have been
 great about voicing issues.


 -Charles

 ___
 webkit-dev mailing list
 webkit-dev@lists.webkit.org
 http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev

___
webkit-dev mailing list
webkit-dev@lists.webkit.org
http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev


Re: [webkit-dev] new APIs for strokes with dash in Canvas

2011-09-22 Thread James Robinson
On Thu, Sep 22, 2011 at 7:31 PM, Young Han Lee joybro...@gmail.com wrote:

 Hi all,

 I have a patch to add webkitLineDash and webkitLineDashOffset APIs on
 CanvasRenderingContext2D on https://bugs.webkit.org/show_bug.cgi?id=63933

 The purpose of these APIs is to support for strokes with dash in Canvas.

 Although the API is not specified in the HTML Canvas specification yet, I
 believe it would be great to support the API. As Dirk said in the above bug,
 Mozilla already implemented the APIs named mozDash and mozDashOffset[1], and
 there seems to be many needs for the API. Some people even implemented their
 own javascript functions to draw strokes with dash in Canvas[2,3].

 As GraphicsContext already has an interface for strokes with dash,
 setLineDash, all we have to do is expose it to the JS level. So the syntax
 and meaning of the new APIs are the same as the arguments of
 setLineDash. webkitLineDash is an array of floats, and webkitLineDashOffset
 is a float.

 I uploaded a patch for JSC first. After the patch is landed, I will make a
 follow-up patch for V8.

 Any thought?


Has this proposal been raised in the WHATWG, which specifies the canvas 2d
interface?  It sounds like in this email you have done most of the work
necessary for a good spec proposal (see
http://wiki.whatwg.org/wiki/FAQ#Is_there_a_process_for_adding_new_features_to_a_specification.3F
).

- James




 [1] https://bugzilla.mozilla.org/show_bug.cgi?id=662038
 [2]
 http://davidowens.wordpress.com/2010/09/07/html-5-canvas-and-dashed-lines/
 [3]
 http://stackoverflow.com/questions/7352769/dashed-curves-on-html5-canvas-bezier

 ___
 webkit-dev mailing list
 webkit-dev@lists.webkit.org
 http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev


___
webkit-dev mailing list
webkit-dev@lists.webkit.org
http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev


Re: [webkit-dev] ENABLE flag cleanup strawman proposal

2011-09-14 Thread James Robinson
== Always Enable ==


 ENABLE(DETAILS) ???
 ENABLE(DOM_STORAGE)
 ENABLE(EVENTSOURCE)
 ENABLE(INSPECTOR) ???
 ENABLE(METER_TAG)
 ENABLE(OFFLINE_WEB_APPLICATIONS)
 ENABLE(PROGRESS_TAG)
 ENABLE(SVG_ANIMATION) = Gated on ENABLE(SVG)
 ENABLE(SVG_AS_IMAGE) = Gated on ENABLE(SVG)
 ENABLE(SVG_DOM_OBJC_BINDINGS) = Gated on ENABLE(SVG)
 ENABLE(SVG_FONTS) = Gated on ENABLE(SVG)
 ENABLE(WEB_TIMING) ??? (I think Maciej has concerns about this
 feature, so maybe keep configurable.)
 ENABLE(WTF_MULTIPLE_THREADS) -- ggaren is already making this happen,
 right?
 ENABLE(XHR_RESPONSE_BLOB) = Gated by ENABLE(BLOB)
 ENABLE(XPATH)
 ENABLE(XSLT)



Is your proposal that, since these ENABLE()s are always true, we delete
the compile guards from the codebase and build scripts?  If so that sounds
like a great idea in general (although I'm not saying that necessarily makes
sense for everything in this list).

- James
___
webkit-dev mailing list
webkit-dev@lists.webkit.org
http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev


Re: [webkit-dev] run-bindings-tests

2011-09-08 Thread James Robinson
On Thu, Sep 8, 2011 at 11:49 AM, Alexey Proskuryakov a...@webkit.org wrote:


 On 08.09.2011, at 11:32, Eric Seidel wrote:

  FYI:  As many of you already know, the build.webkit.org bots run
  run-bindings-tests on (almost) all platforms.
 
  They've been running (mostly w/o incident) on the bots since 6/20:
  http://trac.webkit.org/changeset/89267
 
 
  These just make sure that our generated bindings look sane, by
  comparing the generated results against checked-in baselines.
 
  run-bindings-tests makes it easier to make cross-platform bindings
  changes w/o needing a Gtk/Qt/V8/etc. port of WebKit.
 
  If you're changing binding generation you should be aware of this script.

 As discussed on IRC, I do not think that bots should run this test at all.
 It has a non-trivial maintenance cost, but provides very little benefit.
 Even the time spent by multiple engineers on IRC today discussing bot
 complaints is likely more than the test could save in the lifetime of the
 project, at my guesstimate.

 A test like this is almost like keeping a separate text file with a number
 of space characters in WebKit sources, and chastising anyone who fails to
 update this text file with their commit. Why would we care about the number
 of spaces, or about the exact look of generated code?

 Specifically, this is today's failure: 
 http://build.webkit.org/builders/SnowLeopard%20Intel%20Release%20%28Tests%29/builds/32923/steps/bindings-generation-tests/logs/stdio
 - a test that complains about such changes doesn't test for the right thing.

 A script like this might be useful to run locally when making bindings
 changes if in doubt, comparing before and after results. There is no
 need to check in most recent results for this though. I'm not sure if this
 gives you more than manually copying DerivedSources directory and diffing
 new derived sources to that, but if someone finds a little automation
 valuable, then why not.



We used to not run these tests on the bots.  This meant that people would
change the bindings code and not update the expected results, so the
expected results were always massively out of date.  This meant when
patching the bindings scripts you could not rely on run-bindings-tests at
all, because the expectations were already broken before you made any
changes!  This is not theoretical, it happened to me multiple times and I
know I'm not the only one.

The real problem here is that people check in without looking at the bots
and then do not respond when the bots go red.  That's a people problem.

- James



 - WBR, Alexey Proskuryakov

 ___
 webkit-dev mailing list
 webkit-dev@lists.webkit.org
 http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev

___
webkit-dev mailing list
webkit-dev@lists.webkit.org
http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev


Re: [webkit-dev] help with commit queue failure

2011-08-18 Thread James Robinson
On Thu, Aug 18, 2011 at 2:05 PM, Sailesh Agrawal s...@chromium.org wrote:

 Hi all, I'm trying to commit a patch through commit queue and I'm getting a
 strange error:
 http://queues.webkit.org/results/9419886


From that output, I'd guess that there was a merge conflict trying to apply
the patch.  That output only shows the last 500 chars of output so merge
conflicts earlier than that will not be displayed.

Can you try applying that patch locally on top of ToT?  Try webkit-patch
apply-from-bug and see what happens, that's what the commit queue does.

- James



 As far as I can tell there's nothing wrong but it still fails. Does anyone
 know what's going on here?

 The bug I'm working on is this:
 https://bugs.webkit.org/show_bug.cgi?id=6

 Thanks!
 Sailesh

 ___
 webkit-dev mailing list
 webkit-dev@lists.webkit.org
 http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev


___
webkit-dev mailing list
webkit-dev@lists.webkit.org
http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev


Re: [webkit-dev] LayoutTests results fallback graph

2011-07-10 Thread James Robinson
On Jul 10, 2011 10:53 AM, Adam Barth aba...@webkit.org wrote:

 Hi webkit-dev,

 In trying to understand how our LayoutTest results system works, I've
 created a digram of the fallback graph among the various
 platform-specific directories:


https://docs.google.com/drawings/d/1z65SKkWrD4Slm6jobIphHwwRADyUtjOAxwGBVKBY8Kc/edit?hl=en_US

 Unfortunately, the fallback graph is not a tree, as one might imagine
 initially.  I'd like to propose two small changes, which will
 hopefully make the system more sensible globally.  I'm happy to do all
 the work required to make these changes:

 1) The win port should fall back either to all (the platform
 independent results) or to mac, but not to mac-snowleopard, as it
 does currently.  (I slightly prefer all, but mac would also be
 fine with me.)

 2) The chromium port should fall back directly to all rather than
 taking a detour through various Apple-maintained ports, as it does
 currently.

 These changes have the following virtues:

 A) The resulting fallback graph will be a tree, making the fallback
 graph easier to understand for both humans and automated tools.
 B) The Chromium port will behave more like the other ports (e.g., GTK
 and Qt), rather than being a parasite on Apple-maintained ports.

 These changes might increase the number of image baselines we store in
 the tree for chromium-mac-derived ports (because there will be fewer
 redundant fallback paths), but I expect that cost to be relatively
 small because essentially every port has different image baselines
 anyway

Could you measure this? I suspect that not falling back on the mac pixel
results will mean checking in a few thousand more pngs, but that's just a
guess.

- James


 Thoughts?
 Adam
 ___
 webkit-dev mailing list
 webkit-dev@lists.webkit.org
 http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev
___
webkit-dev mailing list
webkit-dev@lists.webkit.org
http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev


Re: [webkit-dev] Accessibility Object Searching

2011-06-22 Thread James Robinson
On Wed, Jun 22, 2011 at 4:21 PM, Chris Fleizach cfleiz...@apple.com wrote:


 On Jun 22, 2011, at 3:34 PM, Dominic Mazzoni wrote:

  I just had another thought: how would this work in a multi-process
 browser?
 
  As you may know, Chrome runs an instance of webkit in a separate
  renderer process for each tab, but the GUI and all accessibility
  handling happens in the main browser process. Because accessibility
  calls are synchronous, we cache accessibility information in the
  browser process and respond to queries from that cache, rather than
  sending an IPC to the renderer process and getting a response. I
  realized that we wouldn't be able to use this search function very
  easily as you've proposed it. If NSAccessibility gets such an API,
  we'd have to write our own implementation that could run in Chrome's
  browser process.
 

 I think this is one drawback of Chrome's approach, the inability to
 retrieve dynamic information. I suspect Chrome has similar issues for the
 NSAccessibilityParameterizedAttribute attributes, and that it doesn't
 support the AXTextMarker APIs that Safari does for the same reason.

 To support this (along with the other things I mentioned) Chrome would need
 to synchronously called into WebCore, wait for the answer, and then return
 that information.


Assuming that the data is needed from the browser process (or UI process in
WebKit2 terminology) that's not acceptable from either a performance or bug
point of view.  In a multi-process browser, you can never let the browser
(or UI) process block on a synchronous message from the renderer (or web)
process.

- James



  My understanding was that WebKit2 and Safari are moving to a
  multi-process model as well. Is this design going to be compatible
  with this?
 

 Yes, WebKit2 works with accessibility. You'll be able to see the code to
 support that strewn throughout WebKit2. Let me know if you need more
 pointers.

  - Dominic
  ___
  webkit-dev mailing list
  webkit-dev@lists.webkit.org
  http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev

 ___
 webkit-dev mailing list
 webkit-dev@lists.webkit.org
 http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev

___
webkit-dev mailing list
webkit-dev@lists.webkit.org
http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev


Re: [webkit-dev] Adding ENABLE_FLEXBOX to WebCore

2011-06-08 Thread James Robinson
On Wed, Jun 8, 2011 at 4:20 PM, Darin Fisher da...@chromium.org wrote:

 It seems like it doesn't scale very well to have to stand-up new buildbots
 for each
 new feature.  At least in the Chromium port, it is possible for the
 Chromium repo
 to override ENABLE_ flags so that only DRT gets built with a prototype
 feature,
 making it easy to test a prototype feature using existing buildbots.


If you mean by using feature_overrides.gypi to overwrite features.gypi, that
mechanism doesn't actually work and people have attempted to remove it.  The
bots that we actually pay attention to all use feature_overrides.gypi.  I
would not encourage that pattern.

- James

-Darin


 On Wed, Jun 8, 2011 at 3:43 PM, Adam Barth aba...@webkit.org wrote:

 If you're super worried about folks shipping the feature before it's
 ready, then that approach can make sense.  I'm not sure how well it
 scales, but we can worry about that problem when we have N such
 configurations.

 Adam


 On Wed, Jun 8, 2011 at 3:19 PM, Tony Chang t...@chromium.org wrote:
  I don't understand how changing the name prevents the feature from being
  shipped half-done.  If we're going to ship a half-done feature, we may
 as
  well use the vendor prefixed name so sites don't depend on the goofy
 name.
 
  However, I'm willing to run a buildbot for this and that seems better
 than
  shipping a half-done feature.  I don't expect it to be a core builder
 and
  Ojan and I will be the ones keeping it green.  Isn't this what we're
 doing
  for other features like CSS Regions and Exclusions?
 
  On Wed, Jun 8, 2011 at 12:02 PM, Adam Barth aba...@webkit.org wrote:
 
  It seems like the simplest thing is to have an ENABLE macro that's
  turned on and to use the normal bots.  If you're really worried about
  folks shipping the feature half-done by accident, you can use a goofy
  name like -webkit-goofybox (or whatever) and rename it to the final
  name when you're ready.
 
  Adam
 
 
  On Wed, Jun 8, 2011 at 11:50 AM, Ojan Vafai o...@chromium.org wrote:
   Kind of. We could make the functionality only work at runtime, but
   adding
   the properties to the CSS parser would be difficult to make runtime
   configurable. So, the CSS properties would parse correctly but do
   nothing.
   That's especially problematic for properties like display that
 would
   then
   get an invalid value.
   My current plan was still to test this incrementally. We'd include
 tests
   as
   we went, but skip the flexbox subdirectory. We would just run the
 tests
   locally during development. This has the downside that other changes
   might
   break the flexbox tests, but thats a pain I'm willing to live with.
   I'm fine doing this differently if people have strong opinions.
   Ojan
  
   On Wed, Jun 8, 2011 at 11:41 AM, Darin Fisher da...@chromium.org
   wrote:
  
   Is it possible for this feature to be enabled at runtime?
  
   On Jun 8, 2011 11:38 AM, Adam Barth aba...@webkit.org wrote:
New features should be tested incrementally as they are developed.
That means running them on build.webkit.org. The decision to ship
 a
feature is separate.
   
Adam
   
   
On Wed, Jun 8, 2011 at 11:33 AM, Ojan Vafai o...@chromium.org
wrote:
I don't think we want to ship this until we have a reasonably
feature
complete implementation of the spec and that we're convinced the
spec
is
stable. I expect that in implementing this we'll find areas of
 the
spec
that
need reworking, but at this point it's mainly blocked on
implementation
experience.
I'm not sure it's worth setting a bot up just for this, although
 I'm
not
opposed to it. I expect we should have this shippable within a
couple
months.
   
Ojan
On Wed, Jun 8, 2011 at 11:21 AM, Adam Barth aba...@webkit.org
wrote:
   
Can't we just define ENABLE_FLEXBOX on one or more of the
 commonly
used ports and use the regular bots?
   
Adam
   
   
On Wed, Jun 8, 2011 at 10:57 AM, Tony Chang t...@chromium.org
wrote:
 Hi webkit-dev,
 I wanted to let you know that Ojan and I plan to add flexbox
 layout
 support
 to WebCore.  WebCore already supports an older flexbox
 implementation
 (display: box), but the new spec is designed to be easier for
 developers
 to
 understand and more powerful.  The old flexbox will still
 remain
 in
 WebCore
 since none of the CSS properties overlap with the new flexbox
 spec.
  The
 spec can be found



 at: http://www.w3.org/TR/css3-flexbox/ (
 http://dev.w3.org/csswg/css3-flexbox/)
 This support will be behind the ENABLE_FLEXBOX feature define
 (https://bugs.webkit.org/show_bug.cgi?id=62049) and there is
 a
 meta
 bug
 tracking the feature's development
 (https://bugs.webkit.org/show_bug.cgi?id=62048).  I expect
 this
 feature
 to
 eventually be enabled by all ports.
 I am ready to 

Re: [webkit-dev] Adding ENABLE_FLEXBOX to WebCore

2011-06-08 Thread James Robinson
On Wed, Jun 8, 2011 at 4:55 PM, Darin Fisher da...@chromium.org wrote:

 Oh, okay.  Why do we have override_features.gypi then?


We don't, Adam tried to remove it earlier this week and was foiled by some
weird complex failure.  We should get rid of it ASAP.


 Regardless, it seems like we could create a mechanism so that the result of
 build-webkit
 uses different ENABLE_ options than a stock Chromium build.  There's a
 trivial way to switch
 b/w the two in the GYP files.


There's danger in testing a different set of ENABLE_s than we ship unless we
are really confident in understanding how the ENABLE_'d code interacts with
the rest of the codebase.

- James


 -Darin


 On Wed, Jun 8, 2011 at 4:29 PM, James Robinson jam...@google.com wrote:

 On Wed, Jun 8, 2011 at 4:20 PM, Darin Fisher da...@chromium.org wrote:

 It seems like it doesn't scale very well to have to stand-up new
 buildbots for each
 new feature.  At least in the Chromium port, it is possible for the
 Chromium repo
 to override ENABLE_ flags so that only DRT gets built with a prototype
 feature,
 making it easy to test a prototype feature using existing buildbots.


 If you mean by using feature_overrides.gypi to overwrite features.gypi,
 that mechanism doesn't actually work and people have attempted to remove it.
  The bots that we actually pay attention to all use feature_overrides.gypi.
  I would not encourage that pattern.

 - James

 -Darin


 On Wed, Jun 8, 2011 at 3:43 PM, Adam Barth aba...@webkit.org wrote:

 If you're super worried about folks shipping the feature before it's
 ready, then that approach can make sense.  I'm not sure how well it
 scales, but we can worry about that problem when we have N such
 configurations.

 Adam


 On Wed, Jun 8, 2011 at 3:19 PM, Tony Chang t...@chromium.org wrote:
  I don't understand how changing the name prevents the feature from
 being
  shipped half-done.  If we're going to ship a half-done feature, we may
 as
  well use the vendor prefixed name so sites don't depend on the goofy
 name.
 
  However, I'm willing to run a buildbot for this and that seems better
 than
  shipping a half-done feature.  I don't expect it to be a core builder
 and
  Ojan and I will be the ones keeping it green.  Isn't this what we're
 doing
  for other features like CSS Regions and Exclusions?
 
  On Wed, Jun 8, 2011 at 12:02 PM, Adam Barth aba...@webkit.org
 wrote:
 
  It seems like the simplest thing is to have an ENABLE macro that's
  turned on and to use the normal bots.  If you're really worried about
  folks shipping the feature half-done by accident, you can use a goofy
  name like -webkit-goofybox (or whatever) and rename it to the final
  name when you're ready.
 
  Adam
 
 
  On Wed, Jun 8, 2011 at 11:50 AM, Ojan Vafai o...@chromium.org
 wrote:
   Kind of. We could make the functionality only work at runtime, but
   adding
   the properties to the CSS parser would be difficult to make runtime
   configurable. So, the CSS properties would parse correctly but do
   nothing.
   That's especially problematic for properties like display that
 would
   then
   get an invalid value.
   My current plan was still to test this incrementally. We'd include
 tests
   as
   we went, but skip the flexbox subdirectory. We would just run the
 tests
   locally during development. This has the downside that other
 changes
   might
   break the flexbox tests, but thats a pain I'm willing to live with.
   I'm fine doing this differently if people have strong opinions.
   Ojan
  
   On Wed, Jun 8, 2011 at 11:41 AM, Darin Fisher da...@chromium.org
   wrote:
  
   Is it possible for this feature to be enabled at runtime?
  
   On Jun 8, 2011 11:38 AM, Adam Barth aba...@webkit.org wrote:
New features should be tested incrementally as they are
 developed.
That means running them on build.webkit.org. The decision to
 ship a
feature is separate.
   
Adam
   
   
On Wed, Jun 8, 2011 at 11:33 AM, Ojan Vafai o...@chromium.org
wrote:
I don't think we want to ship this until we have a reasonably
feature
complete implementation of the spec and that we're convinced
 the
spec
is
stable. I expect that in implementing this we'll find areas of
 the
spec
that
need reworking, but at this point it's mainly blocked on
implementation
experience.
I'm not sure it's worth setting a bot up just for this,
 although I'm
not
opposed to it. I expect we should have this shippable within a
couple
months.
   
Ojan
On Wed, Jun 8, 2011 at 11:21 AM, Adam Barth aba...@webkit.org
 
wrote:
   
Can't we just define ENABLE_FLEXBOX on one or more of the
 commonly
used ports and use the regular bots?
   
Adam
   
   
On Wed, Jun 8, 2011 at 10:57 AM, Tony Chang 
 t...@chromium.org
wrote:
 Hi webkit-dev,
 I wanted to let you know that Ojan and I plan to add flexbox
 layout
 support
 to WebCore.  WebCore already supports an older

Re: [webkit-dev] Do we have a style preference about const member functions?

2011-06-01 Thread James Robinson
On Tue, May 31, 2011 at 12:08 PM, Geoffrey Garen gga...@apple.com wrote:

 I agree that const should be used for logical constness. The rule should
 not be merely doesn't alter any data members of this object but rather
 does not alter observable state of this object or vend any type of pointer
 or reference by which observable state of this object could be altered.


 Precisely!


 I like this explanation too.

 I'm trying to think of a simple way to explain / test whether something
 falls into the category of logical constness, since it can be ambiguous.

 It occurred to me that a simple, though imperfect, test is just, Is this
 function called by an owner of a const pointer / reference? After all, a
 const member function is meaningless if nobody points to the class through a
 const pointer  / reference. For classes like DOM and render tree nodes,
 which have no meaningful const-pointer-owning clients, the answer is always
 no. For other classes, the answer is yes, but only if someone has found a
 meaningful use for a const pointer or reference to the class.

 So, perhaps the real style question we should answer is when to use const
 pointers / references, since the answer to when to use const member
 functions will follow from it.

 What are some good examples of existing code that meaningfully uses a const
 pointer or reference? (Something other than, say, the obligatory const in a
 copy constructor.)


I personally find that in a number of render tree functions declared to take
a const RenderObject* that the const-ness is a useful hint that the function
will not be manipulating the object in unexpected ways.  Examples:
http://codesearch.google.com/codesearch?hl=en&vert=chromium&lr=&q=%22const+RenderObject%22+file%3AWebCore&sbtn=Search
(click Next at the bottom to see more results).

My understanding is that these functions could not take a const
RenderObject* parameter if RenderObject did not supply a set of const simple
accessors and utility functions.  We could convert RenderObject over to have
no const member functions and convert all of these utility functions to take
RenderObject* parameters instead, but I think that would lose some of the
intent of the code.
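
A minimal illustration of the pattern (hypothetical class and function names,
not actual WebCore code): the helper below can only be written against a const
pointer because the accessor it needs is itself const.

class Widget {
public:
    Widget() : m_visible(false) { }
    bool isVisible() const { return m_visible; }            // logically const query
    void setVisible(bool visible) { m_visible = visible; }  // mutator, deliberately non-const
private:
    bool m_visible;
};

// The const parameter documents (and the compiler enforces) that this utility
// only inspects the object; it cannot call setVisible() or hand out a
// non-const pointer through which the widget could be changed.
static bool shouldPaint(const Widget* widget)
{
    return widget && widget->isVisible();
}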

I'm curious if there was a specific patch or piece of code that led you to
send this email out - perhaps we can make a better decision about whether to
change our approach with more concrete examples of problems the current
situation has caused or is causing.

- James


 Geoff

 ___
 webkit-dev mailing list
 webkit-dev@lists.webkit.org
 http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev


___
webkit-dev mailing list
webkit-dev@lists.webkit.org
http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev


Re: [webkit-dev] Cherry-Pick Bug Comments

2011-05-27 Thread James Robinson
I find these cherry-pick bug comments annoying and hope that you will stop
generating them.  There are many ports that make releases based off of
WebKit trunk, and all of them have some notion of release branches that
contain cherry-picked revisions, reverts, etc.  As a developer it's nearly
always irrelevant to me whether a given patch is cherry-picked into a given
Qt release or not, just as it would be to know if that revision was
cherry-picked into a given Gtk, EFL, Safari, or Chromium release branch.
 When I do need to know the status of a specific branch, I look in the
port-specific location of the branch to see what happened.  For example, to
see what's in a given chromium release I look in the appropriate
subdirectory of http://trac.webkit.org/browser/branches/chromium.  For the
Safari 534 branch, http://trac.webkit.org/browser/branches/safari-534-branch
 etc.

I would recommend that the people who work on QtWebKit figure out a way to
track revisions in their release branches in a way that does not involve
spamming non-Qt bugs on bugs.webkit.org or developers who aren't working
directly on Qt.

- James

On Fri, May 27, 2011 at 10:27 AM, Antonio Gomes toniki...@gmail.com wrote:


 An important question: besides the notification e-mails, does the rest of
 our release process bother someone?

 Not me. It works fine and is very transparent.


 --
 --Antonio Gomes

 ___
 webkit-dev mailing list
 webkit-dev@lists.webkit.org
 http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev


___
webkit-dev mailing list
webkit-dev@lists.webkit.org
http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev


Re: [webkit-dev] Dropping support for WML?

2011-04-12 Thread James Robinson
I have a related question - what sort of test coverage is there for WML
currently on the core bots?  Do any of the core bots compile with WML
enabled?  I've made and reviewed a few changes to WML code in the past, made in
parallel with changes to the HTML forms code, and have made the WML edits
essentially blindly.  I don't know if I've made the WML code better or worse
in the process and have no way to check.  I'm not even sure how to tell if
WML still compiles after my patches.

If the code is completely untested (as I somehow suspect it is) then I think
it's more dangerous to leave the code in than to take it out, given that it
is likely to accumulate more and more bugs as the related code changes.

- James

On Tue, Apr 12, 2011 at 9:57 AM, Ryosuke Niwa rn...@webkit.org wrote:


 -- Forwarded message --
 From: Gyuyoung Kim gyuyoung@samsung.com
 Date: 2011/4/10
 Subject: Re: [webkit-dev] webkit-dev Digest, Vol 71, Issue 8
 To: webkit-dev@lists.webkit.org
 Cc: Krzysztof Czech k.cz...@samsung.com, 임상석 sangseok@samsung.com


  I know Samsung is using the feature but they're not sure if they'll be
  supporting it in the future. Are there are other folks who are actively
  using WML?

 We're using WML in WebKit trunk now. Even though there are some
 maintenance difficulties,
 I think WML needs to be maintained for some time.

 BTW, as far as I know, there are some bugs in WML. So, we plan to
 contribute our patches
 soon.

 - Gyuyoung Kim


 ___
 webkit-dev mailing list
 webkit-dev@lists.webkit.org
 http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev


___
webkit-dev mailing list
webkit-dev@lists.webkit.org
http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev


Re: [webkit-dev] XHTMLMP support

2011-03-08 Thread James Robinson
Which WebKit browser is using XHTMLMP?

- James

On Tue, Mar 8, 2011 at 4:43 PM, Ra Kyounga kyounga...@gmail.com wrote:

 Currently, XHTMLMP is used in WebKit browsers for mobile business.

 So, I think it's good to leave it for a while :)

 On 2011. 3. 8., at 5:09 AM, Eric Seidel wrote:

  Are any ports still using that?  Or should we remove it.
 
  If so, happy to leave it in.  But if no ports are actively using it,
  its best to remove the code (and restore it from SVN history later if
  someone needs it again).
 
  -eric
  ___
  webkit-dev mailing list
  webkit-dev@lists.webkit.org
  http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev

 ___
 webkit-dev mailing list
 webkit-dev@lists.webkit.org
 http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev

___
webkit-dev mailing list
webkit-dev@lists.webkit.org
http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev


[webkit-dev] http://build.webkit.org/console fails to load in Chromium (appears to be server side bug)

2011-01-19 Thread James Robinson
http://build.webkit.org/console fails to load in Chromium (and Google
Chrome) today.  It seems that the bug is in the server's Accept-Encoding:
handling.  If the Accept-Encoding line includes 'sdch' then the server
closes the socket without sending any response bytes.  Requests for other
pages like /console?category=core or /waterfall succeed, probably because
the response pages for /console?category=core and /waterfall are smaller
(253031/289609 bytes vs 379152 bytes for /console) and presumably something on
the server side is barfing on the larger response.  Other browsers do not
currently include sdch in their
Accept-Encoding lines and so are unaffected.

A workaround for Chromium users is to pass the command line flag
--enable-sdch=.google.com which will prevent sdch from being passed in the
Accept-Encoding line for all servers except for ones under .google.com.
 This is obviously far from ideal as it prevents sdch use on pages other
than *.google.com that support it.  Can whoever runs the web server for
build.webkit.org please try to fix this?  I'm also curious to know where the
actual problem is so we can determine how widespread this might be.

- James
___
webkit-dev mailing list
webkit-dev@lists.webkit.org
http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev


Re: [webkit-dev] make-script-test-wrappers not being maintained

2011-01-18 Thread James Robinson
On Tue, Jan 18, 2011 at 4:30 PM, Darin Adler da...@apple.com wrote:

 It seems that a lot of people are making script tests with non-standard
 wrappers. And not adding exceptions to the make-script-test-wrappers
 script. I ran the script and it created 4 files, and modified 33 others.

 Any ideas on how to improve the situation?


I didn't even know such a script existed.  What are the advantages of using
it vs generating script tests in other ways?  I see that the use is
documented here:
https://trac.webkit.org/wiki/Writing%20Layout%20Tests%20for%20DumpRenderTree
but there aren't any motivations listed for it.

- James


 -- Darin


 ___
 webkit-dev mailing list
 webkit-dev@lists.webkit.org
 http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev


___
webkit-dev mailing list
webkit-dev@lists.webkit.org
http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev


Re: [webkit-dev] Exposing CSS pixel metrics to the scripting environment

2010-11-26 Thread James Robinson
Are you posting here because there is something specific to WebKit in
your query or because you dislike the outcome of the WHATWG thread?
Most of us follow the WHATWG closely and generally prefer to discuss
standardization issues such as this in that forum to get a broader
feedback base.

- James

On Friday, November 26, 2010, Charles Pritchard ch...@jumis.com wrote:
 Recently I brought this issue up to the WHATWG mailing list, without much 
 luck.

 Currently, mobile devices are given access to window.devicePixelRatio, used 
 for managing what are essentially higher resolution displays. See the iPhone, 
 Android, etc. Within the desktop environment, devicePixelRatio is not updated 
 on zoom events. I don't think it should be, but it's something to consider. 
 Such information is critical to adjusting the resolution of bitmaps, be they 
 from an image source or a canvas source, to be as crisp as can be.

 Microsoft has gone ahead in IE9 and just exposed a collection of metrics data:
 deviceXDPI, logicalXDPI, systemXDPI and Y counterparts.

 Source:
 http://msdn.microsoft.com/en-us/library/ms535868%28VS.85%29.aspx

 The task at hand is deciding whether or not to expose this information, and 
 where. Technically, it's quite simple, as it's only a few floating point 
 values which are already available to the WebKit environment. Zoom events 
 always trigger a 'resize' event for window, as they alter the innerWidth and 
 innerHeight of the layout. That resize event is the point in which the 
 scripting environment would check to see if CSS pixel metrics have changed, 
 and adjust the page accordingly. I want to note, that I am not speaking at 
 all about changing how zoom works, or in any way suggesting that zoom be 
 controlled by the scripting environment / web authors.

 I am recommending that we take a look at Microsoft's window.screen extensions, and
 decide whether they hold merit, and may be included in WebKit. Doing so would 
 mean that an independent implementation has picked up the extension, and it 
 may be on the fast track for standardization.

 I had a rough time bringing this up with Mozilla. I'm hoping for a little 
 more focus here, on this mailing list. Again, I'm merely looking to have CSS 
 pixel metrics exposed, and I'm suggesting the MS proposal as it's certain to 
 exist in their upcoming IE9 release. Their proposal exposes six floating 
 point variables in the window.screen object, and is sufficient for current 
 needs.

 Please let me know thoughts on the matter, and try to keep focused on the 
 fact that we're just looking to expose a few floating points to the scripting 
 environment, we're not looking to change any existing behaviors in any 
 existing elements.

 -Charles
 ___
 webkit-dev mailing list
 webkit-dev@lists.webkit.org
 http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev

___
webkit-dev mailing list
webkit-dev@lists.webkit.org
http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev


Re: [webkit-dev] W3C Proposal: User Interface Independence for Accessible Rich Internet Applications

2010-11-05 Thread James Robinson
On Fri, Nov 5, 2010 at 6:32 PM, James Craig jcr...@apple.com wrote:

 Ojan Vafai wrote:

 How is [ DOMAttributeChangeRequestEvent ] any different than
 DOMAttrModified? The spec claims it doesn't have the problems that
 DOMAttrModified has, but I don't see how that's the case.



 DOMAttrModified is a notification, but DOMAttributeChangeRequestEvent is a
 request. The reason these all inherit from UIRequestEvent is because they
 are all requests, not notifications. For example, the following sequences
 may occur in a real user scenario.


 *DOMAttrModified*

 1. An attribute in the DOM changes (for whatever reason).
 2. UA fires DOMAttrModified. This may recursively fire a series of other
 mutation events, hence the performance problems.
 *3. Web application (web page author) can do whatever, but this is a
 notification of the change, not a request to make the change.*


 *DOMAttributeChangeRequestEvent*
 Path 1: ignored by the web application, so no DOM change occurs.

 1. User performs some UI action. (e.g. VoiceOver user may press Ctrl+Opt+\)
 2. AT interprets the intent of that action. (In VoiceOver, that key combo
 means expand or collapse a tree node.)
 3. UA fires a single DOMAttributeChangeRequestEvent (e.g. Web application,
 please change 'aria-expanded' value of Node to 'true')
 *4. Web application (web page author) ignores the event, so no DOM change
 occurs.*
 5. AT receives the unsupressed/uncancelled event at the end of the bubble
 phase. (e.g. VoiceOver may beep to indicate to the user that no change
 occurred.)


 *DOMAttributeChangeRequestEvent*
 Path 2: received by the web application, which makes the change on the
 user's behalf.

 1. User performs some UI action. (e.g. VoiceOver user may press Ctrl+Opt+\)
 2. AT interprets the intent of that action. (In VoiceOver, that key combo
 means expand or collapse a tree node.)
 3. UA fires a single DOMAttributeChangeRequestEvent (e.g. Web application,
 please change 'aria-expanded' value of Node to 'true')
 4. Web application (web page author) has registered for the event, and
 receives it.
 *5. Web application (web page author) interprets the request, and calls 
 Node.setAttribute('aria-expanded',
 'true')*
 6. Web application (web page author) cancels or suppresses the event. (e.g.
 UA, I have received your request, and have changed this DOM attribute on
 your behalf.)
 7. AT receives the supressed/cancelled event which indicates that the
 'request' was successful.
 8. AT can indicate to the user what happened. (e.g. VoiceOver might review
 the AX API and speak expanded. 7 items enclosed.)



 Or, phrased another way:

 *DOMAttrModified*
 I took your dollar. By the way, my dad and granddad are probably gonna want
 some cash from you, too.

 *DOMAttributeChangeRequestEvent*
 Path 1: Can I have a dollar? No? Okay.
 Path 2: Can I have a dollar? Yes? Sweet.


This is far worse than an async DOMAttrModified since it means that all code
that attempts to change an attribute has to deal with the case where the
page cancels the change or makes arbitrary other changes.  It also has worse
performance characteristics since it means that the browser would have to
fire the DOMAttributeChangeRequestEvent synchronously when it wanted to
change the attribute and wouldn't be allowed to batch up the events at all.

- James



 ___
 webkit-dev mailing list
 webkit-dev@lists.webkit.org
 http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev


___
webkit-dev mailing list
webkit-dev@lists.webkit.org
http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev


Re: [webkit-dev] Protecting against stale pointers in DrawingBuffer and GraphicsContext3D

2010-10-12 Thread James Robinson
On Tue, Oct 12, 2010 at 9:47 AM, Chris Marrin cmar...@apple.com wrote:


 On Oct 11, 2010, at 5:15 PM, Maciej Stachowiak wrote:

 
  On Oct 11, 2010, at 4:03 PM, Chris Marrin wrote:
 
 
  On Oct 11, 2010, at 3:35 PM, James Robinson wrote:
 
  On Mon, Oct 11, 2010 at 3:15 PM, Chris Marrin cmar...@apple.com
 wrote:
 
  For accelerated 2D rendering we created a class called DrawingBuffer.
 This encapsulates the accelerated drawing surface (usually in the GPU) and
 the compositing layer used to display that surface on the page. The drawing
 surface (which is called a Framebuffer Object or FBO) is allocated by
 GraphicsContext3D, so DrawingBuffer needs a reference to that. Currently
 this is a weak reference. DrawingBuffer has a ::create() method and you pass
 the GraphicsContext3D to that method.
 
  If you destroy the GraphicsContext3D, DrawingBuffer has a stale
 pointer. If you were to try to destroy the DrawingBuffer it would attempt to
 use that pointer (to destroy its FBO) and crash or worse. Currently we have
 an implicit policy that you should never destroy a GraphicsContext3D before
 its DrawingBuffers are all destroyed. That works fine in the current use
 case, CanvasRenderingContext2D. And we can follow that same policy when in
 the future when we use DrawingBuffer in WebGLRenderingContext.
 
  My concern is that this sort of implicit policy can lead to errors in
 the future when other potential clients of these classes use them. So I
 posted https://bugs.webkit.org/show_bug.cgi?id=47501. In that patch I move
 the creation of DrawingBuffer to the GraphicsContext3D and keep back
 pointers to all the DrawingBuffers allocated so they can be cleaned up when
 GraphicsContext3D is destroyed. In talking to James R. he's concerned this
 adds unnecessary complexity and would rather stick with the implicit policy.
 
  So is this something I should be concerned about, or is an implicit
 policy sufficient in this case?
 
  Before somebody suggests it, I think Chris and I are in agreement that
 neither GraphicsContext3D nor DrawingBuffer should be RefCounted.  They both
 have clear single-ownership semantics.
 
  True, although Simon and I just chatted and he pointed out that
 Refcounting both classes would solve this problem. The fact that
 GraphicsContext3D wouldn't need a back pointer to DrawingBuffer means we
 wouldn't have any circular references. I don't think the overhead of
 refcounting is an issue here, so maybe that would be a simpler solution?
 
  I think having two independent objects that must be deleted in a specific
 order, or else you crash, is a hazardous design. APIs (even internal APIs)
 are better when they do not have a way to be used wrong. So, I think this
 should be changed one way or the other.
 
  I have to say that to my taste, refcounting seems like a cleaner solution
 than ad-hoc weak pointers. I'm skeptical of the claim that refcounting is
 not good for a heavyweight object. If there's really a time when special
 resources (VRAM, etc) need to be freed at a known point in program code,
 then it may be better to have an explicit close type function instead of
 counting on the destructor. On the other hand, this might end up being
 roughly equivalent to the clear backpointers solution, but moves the
 complexity of being in a possibly-invalid state from DrawingBuffer to
 GraphicsContext3D.
 
  Either way, I am pretty confident that a design where objects must be
 destroyed in a specific order is not the best choice.

 So it seems like we have two choices: 1) my current patch, which uses
 backpointers to manage the lifetime of the weak pointers, or 2) refcounting.
 My current approach has the advantage that the resources are cleared as soon
 as the DrawingBuffer is destroyed. But it is more complex and therefore more
 error prone. I think that complexity is manageable so that would be my
 preferred implementation. But refcounting is simpler and my current patch
 has a clear() method on DrawingBuffer which gets rid of all the resources. I
 could leave that method and change to a refcounted model, so the author can
 control when the resources are destroyed.


Another option would be to generalize the WeakPtr<T> implementation from
that patch into a generic class and use that.  Then that logic could be
implemented, reviewed and tested independently from the graphics code.  I
know that Maciej has expressed concern about this pattern in the past due to
the runtime cost it imposes.
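
Roughly the shape such a generic helper could take - a self-contained sketch in
present-day C++, with std::shared_ptr standing in for whatever refcounted
control block would actually be used; this is not the patch under review and
not any existing WTF or Chromium class:

#include <memory>

// The owner embeds a WeakPtrFactory; destroying the factory nulls out the slot
// that every outstanding WeakPtr shares, so stale pointers read back as null
// instead of dangling.
template<typename T> class WeakPtr {
public:
    WeakPtr() { }
    explicit WeakPtr(std::shared_ptr<T*> slot) : m_slot(slot) { }
    T* get() const { return m_slot ? *m_slot : nullptr; }
private:
    std::shared_ptr<T*> m_slot;
};

template<typename T> class WeakPtrFactory {
public:
    explicit WeakPtrFactory(T* owner) : m_slot(std::make_shared<T*>(owner)) { }
    ~WeakPtrFactory() { *m_slot = nullptr; }          // invalidate all outstanding WeakPtrs
    WeakPtr<T> createWeakPtr() const { return WeakPtr<T>(m_slot); }
private:
    std::shared_ptr<T*> m_slot;
};

With something like this, a DrawingBuffer could hold a WeakPtr to its
GraphicsContext3D and null-check it before tearing down its FBO, rather than
relying on destruction order.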

Ref counting is a fairly blunt instrument but it would fit in best with the
rest of the codepath.

- James





 What do you think, James?

 -
 ~Chris
 cmar...@apple.com





___
webkit-dev mailing list
webkit-dev@lists.webkit.org
http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev


Re: [webkit-dev] Pixel test experiment

2010-10-12 Thread James Robinson
To add a concrete data point, http://trac.webkit.org/changeset/69517 caused
a number of SVG tests to fail.  It required 14 text rebaselines for Mac and
a further two more for Leopard (done by Adam Barth).  In order to pass the
pixel tests in Chromium, it required 1506 new pixel baselines (checked in by
the very brave Albert Wong, http://trac.webkit.org/changeset/69543).  None
of the rebaselining was done by the patch authors, and in general I would not
expect a patch author who doesn't work on Chromium to update
Chromium-specific baselines.  I'm a little skeptical of the claim that all
SVG changes are run through the pixel tests given that to date none of the
affected platform/mac SVG pixel baselines have been updated.  This sort of
mass-rebaselining is required fairly regularly for minor changes in SVG and
in other parts of the codebase.

I'd really like for the bots to run the pixel tests on every run, preferably
with 0 tolerance.  We catch a lot of regressions by running these tests on
the Chromium bots that would probably otherwise go unnoticed.  However there
is a large maintenance cost associated with this coverage.  We normally have
two engineers (one in PST, one elsewhere in the world) who watch the
Chromium bots to triage, suppress, and rebaseline tests as churn is
introduced.

Questions:
- If the pixel tests were running either with a tolerance of 0 or 0.1, what
would the expectation be for a patch like
http://trac.webkit.org/changeset/69517 which requires hundreds of pixel
rebaselines?  Would the patch author be expected to update the baselines for
the platform/mac port, or would someone else?  Thus far the Chromium folks
have been the only ones actively maintaining the pixel baselines - which I
think is entirely reasonable since we're the only ones trying to run the
pixel tests on bots.

- Do we have the tools and infrastructure needed to do mass rebaselines in
WebKit currently?  We've built a number of tools to deal with the Chromium
expectations, but since this has been a need unique to Chromium so far the
tools only work for Chromium.

- James

On Fri, Oct 8, 2010 at 11:18 PM, Nikolas Zimmermann 
zimmerm...@physik.rwth-aachen.de wrote:


 Am 08.10.2010 um 20:14 schrieb Jeremy Orlow:


  I'm not an expert on Pixel tests, but my understanding is that in Chromium
 (where we've always run with tolerance 0) we've seen real regressions that
 would have slipped by with something like tolerance 0.1.  When you have 0
 tolerance, it is more maintenance work, but if we can avoid regressions, it
 seems worth it.


 Well, that's why I initially argued for tolerance 0. Especially in SVG we
 had lots of regressions in the past that were below the 0.1 tolerance. I
 fully support --tolerance 0 as default.

 Dirk  me are also willing to investigate possible problem sources and
 minimize them.
 Reftests as Simon said, are a great thing, but it won't help with official
 test suites like the W3C one - it would be a huge amount of work to create
 reftests for all of these...


 Cheers,
 Niko

 ___
 webkit-dev mailing list
 webkit-dev@lists.webkit.org
 http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev

___
webkit-dev mailing list
webkit-dev@lists.webkit.org
http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev


Re: [webkit-dev] Protecting against stale pointers in DrawingBuffer and GraphicsContext3D

2010-10-12 Thread James Robinson
On Tue, Oct 12, 2010 at 3:46 PM, Maciej Stachowiak m...@apple.com wrote:


 On Oct 12, 2010, at 2:37 PM, Darin Fisher wrote:

 On Tue, Oct 12, 2010 at 1:20 PM, Maciej Stachowiak m...@apple.com wrote:

 Hmm, I've found weak pointer abstractions to be very useful.  The issue
 with reference counting is that it is easy to introduce memory leaks, and, as
 has been mentioned, it is sometimes nice to have deterministic object
 destruction.


 We used to have lots of problems with leaking refcounted objects back when
 most refcounting was done manually, but this has become much less common as
 we deployed smart pointers.


 It is also nice to avoid having to have explicit clear() functions and then
 add checks to every other method to assert that they are not called after
 clear().


 Well, if you have a weak pointer you have to check weak pointer validity,
 so you don't save any checks. I don't think explicit close calls are that
 bad. I think it's actually a better pattern when you hold significant
 resources other than just some memory (e.g. VRAM, file handles, sockets,
 limited kernel resources, window server handles...). That way, cleaning up
 your external resources does not become dependent on your ownership strategy
 for the object.


 In the Chromium code base, we have a helper for weak pointers:
 http://src.chromium.org/viewvc/chrome/trunk/src/base/weak_ptr.h?view=markup

 We tend to use this in cases in which:

 1) there are many consumers interested in holding a back pointer to some
 shared resource, and
 2) we'd like the shared resource to die at some predictable time.

 Without a utility like this, the alternative is to make the shared resource
 notify each of the consumers about the impending destruction of the shared
 resource.

 It is true that WeakPtr<T> adds a null check in front of each method call
 made by the consumers, but that runtime cost is often justified in exchange
 for reduced code complexity (i.e., eliminating the need to notify consumers
 when the shared resource dies).


 The cost isn't just the null checks. You need either storage for all the
 backpointers, or the smaller storage overhead for a refcounted handle. But
 then almost by definition you have more overhead than refcounting, since you
 have a refcounted object plus more work.

 I do think weak pointers have their uses, but only in relatively unusual
 cases (e.g. when you'd have a cycle otherwise). It doesn't sound like a huge
 win in this case.


Another advantage of weak pointers over refcounted pointers is that with a
combination of OwnPtr/WeakPtr the ownership is explicit
and easy to examine locally.  By this I mean if I have:

class A {
  OwnPtr<T> m_t;
};

class B {
private:
  WeakPtr<T> m_t;
};

it's immediately and locally obvious which object is responsible for the
lifetime of the T.  With the alternative:

class A {
  RefPtr<T> m_t;
};

class B {
  RefPtr<T> m_t;
};

there's no way to know which class drives the lifetime of the T without
reading through the definitions of both A and B (and any other class that
holds a RefPtr).  This makes it much harder to review changes to A or to B
without carefully reading through many mostly-unrelated classes.  This is
especially true when

A concrete example of this is LayerRendererChromium which used to have an
OwnPtr<GraphicsContext3D> member variable, took a
PassOwnPtr<GraphicsContext3D> parameter in its create() factory function and
exposed a GraphicsContext3D* getter.  The lifetime of the GraphicsContext3D
associated with the compositor was crystal clear just by examining
LayerRendererChromium.h - it took exclusive ownership of the context at
creation time and retained exclusive ownership for the lifetime of the
LayerRendererChromium.  Now that GraphicsContext3D is ref counted, it's not
nearly as obvious whether the LayerRendererChromium is supposed to be the
exclusive owner of the underlying context or whether callers might extend
its lifetime (especially since the GraphicsContext3D* getter now allows
callers to grab their own references).  Ambiguous and non-local ownership
semantics makes code harder to review and reason about.
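
To make the contrast concrete, here is a rough sketch of the single-owner-plus-raw-getter shape described above.  The names are illustrative stand-ins (this is not the actual LayerRendererChromium code), but the ownership story is visible from the class definition alone:

#include <wtf/OwnPtr.h>
#include <wtf/PassOwnPtr.h>

class Context3D { };   // stand-in for GraphicsContext3D

class Renderer {       // stand-in for the compositor class
public:
    static PassOwnPtr<Renderer> create(PassOwnPtr<Context3D> context)
    {
        return adoptPtr(new Renderer(context));
    }

    // Callers may use the context but cannot extend its lifetime.
    Context3D* context() const { return m_context.get(); }

private:
    explicit Renderer(PassOwnPtr<Context3D> context)
        : m_context(context)
    {
    }

    OwnPtr<Context3D> m_context;   // Renderer is the sole owner.
};

A reviewer reading just this class knows exactly when the context is destroyed; with RefPtr members spread across two or three classes, that information has to be pieced together from all of them.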

- James


 Regards,
 Maciej


___
webkit-dev mailing list
webkit-dev@lists.webkit.org
http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev


Re: [webkit-dev] Protecting against stale pointers in DrawingBuffer and GraphicsContext3D

2010-10-11 Thread James Robinson
On Mon, Oct 11, 2010 at 3:15 PM, Chris Marrin cmar...@apple.com wrote:


 For accelerated 2D rendering we created a class called DrawingBuffer. This
 encapsulates the accelerated drawing surface (usually in the GPU) and the
 compositing layer used to display that surface on the page. The drawing
 surface (which is called a Framebuffer Object or FBO) is allocated by
 GraphicsContext3D, so DrawingBuffer needs a reference to that. Currently
 this is a weak reference. DrawingBuffer has a ::create() method and you pass
 the GraphicsContext3D to that method.

 If you destroy the GraphicsContext3D, DrawingBuffer has a stale pointer. If
 you were to try to destroy the DrawingBuffer it would attempt to use that
 pointer (to destroy its FBO) and crash or worse. Currently we have an
 implicit policy that you should never destroy a GraphicsContext3D before its
 DrawingBuffers are all destroyed. That works fine in the current use case,
 CanvasRenderingContext2D. And we can follow that same policy when in the
 future when we use DrawingBuffer in WebGLRenderingContext.

 My concern is that this sort of implicit policy can lead to errors in the
 future when other potential clients of these classes use them. So I posted
 https://bugs.webkit.org/show_bug.cgi?id=47501. In that patch I move the
 creation of DrawingBuffer to the GraphicsContext3D and keep back pointers to
 all the DrawingBuffers allocated so they can be cleaned up when
 GraphicsContext3D is destroyed. In talking to James R. he's concerned this
 adds unnecessary complexity and would rather stick with the implicit policy.

 So is this something I should be concerned about, or is an implicit policy
 sufficient in this case?


Before somebody suggests it, I think Chris and I are in agreement that
neither GraphicsContext3D nor DrawingBuffer should be RefCounted.  They both
have clear single-ownership semantics.

- James



 -
 ~Chris
 cmar...@apple.com




 ___
 webkit-dev mailing list
 webkit-dev@lists.webkit.org
 http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev

___
webkit-dev mailing list
webkit-dev@lists.webkit.org
http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev


Re: [webkit-dev] Protecting against stale pointers in DrawingBuffer and GraphicsContext3D

2010-10-11 Thread James Robinson
On Mon, Oct 11, 2010 at 4:03 PM, Chris Marrin cmar...@apple.com wrote:


 On Oct 11, 2010, at 3:35 PM, James Robinson wrote:

  On Mon, Oct 11, 2010 at 3:15 PM, Chris Marrin cmar...@apple.com wrote:
 
  For accelerated 2D rendering we created a class called DrawingBuffer.
 This encapsulates the accelerated drawing surface (usually in the GPU) and
 the compositing layer used to display that surface on the page. The drawing
 surface (which is called a Framebuffer Object or FBO) is allocated by
 GraphicsContext3D, so DrawingBuffer needs a reference to that. Currently
 this is a weak reference. DrawingBuffer has a ::create() method and you pass
 the GraphicsContext3D to that method.
 
  If you destroy the GraphicsContext3D, DrawingBuffer has a stale pointer.
 If you were to try to destroy the DrawingBuffer it would attempt to use that
 pointer (to destroy its FBO) and crash or worse. Currently we have an
 implicit policy that you should never destroy a GraphicsContext3D before its
 DrawingBuffers are all destroyed. That works fine in the current use case,
 CanvasRenderingContext2D. And we can follow that same policy when in the
 future when we use DrawingBuffer in WebGLRenderingContext.
 
  My concern is that this sort of implicit policy can lead to errors in the
 future when other potential clients of these classes use them. So I posted
 https://bugs.webkit.org/show_bug.cgi?id=47501. In that patch I move the
 creation of DrawingBuffer to the GraphicsContext3D and keep back pointers to
 all the DrawingBuffers allocated so they can be cleaned up when
 GraphicsContext3D is destroyed. In talking to James R. he's concerned this
 adds unnecessary complexity and would rather stick with the implicit policy.
 
  So is this something I should be concerned about, or is an implicit
 policy sufficient in this case?
 
  Before somebody suggests it, I think Chris and I are in agreement that
 neither GraphicsContext3D nor DrawingBuffer should be RefCounted.  They both
 have clear single-ownership semantics.

 True, although Simon and I just chatted and he pointed out that Refcounting
 both classes would solve this problem. The fact that GraphicsContext3D
 wouldn't need a back pointer to DrawingBuffer means we wouldn't have any
 circular references. I don't think the overhead of refcounting is an issue
 here, so maybe that would be a simpler solution?


I'd really prefer not to make them RefCounted.  The problem is that
GraphicsContext3D and DrawingBuffer are very heavyweight objects, which
means we need to manage their lifetimes tightly and avoid leaving them lying
around longer than necessary.  The exact resource use of these objects
depends on the system but a GraphicsContext3D will typically be backed by an
OpenGL context, which implies some set of driver resources, and a
DrawingBuffer is normally backed by a few megabytes of VRAM.  In the current
code, there is always a single object responsible for managing the lifetime
of a given GraphicsContext3D or a DrawingBuffer which makes it very easy to
ensure that they live for as long as necessary but no longer.  With a
RefCounted object it can be difficult to ensure that all references to a
given object go away when necessary.

In this particular case DrawingBuffer exists at a slightly higher
abstraction layer than GraphicsContext3D.  DrawingBuffer depends on GC3D,
but there is no dependency the other way (and IMHO there should not be).
 This means that the lifetime of a DrawingBuffer depends on the underlying
GraphicsContext3D, but not the other way 'round.  All callers that use or
will want to use a DrawingBuffer already have to be aware of the lifetime of
the GraphicsContext3D associated with it since the only way to use a
DrawingBuffer is by using the GraphicsContext3D API.

For those following along at home, the current user of DrawingBuffer is
CanvasRenderingContext2D which has an OwnPtr<DrawingBuffer> and uses a
GraphicsContext3D that is guaranteed to outlive the
CanvasRenderingContext2D.  The next proposed user of DrawingBuffer is
WebGLRenderingContext which manages its own GraphicsContext3D via a member
OwnPtr.

- James


 -
 ~Chris
 cmar...@apple.com





___
webkit-dev mailing list
webkit-dev@lists.webkit.org
http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev


Re: [webkit-dev] XHR responseArrayBuffer attribute

2010-09-27 Thread James Robinson
On Mon, Sep 27, 2010 at 2:40 PM, Anne van Kesteren 
annevankeste...@gmail.com wrote:

 (I'm subscribed to webkit-dev with a different address.)

 On Mon, Sep 27, 2010 at 8:31 PM, Michael Nordman micha...@google.com
 wrote:
  Yes, if we go with telling xhr up front for the array buffer case, I
 guess
  an enum would be slightly better.

 I do not think this should be told up front. You already need to keep
 the octet buffer in memory for overrideMimeType to work the way it
 does. We designed responseBlob as an optimization so you would not
 have to deal with the other types. I do not think you can optimize the
 reverse with the design as it stands today.


I see the source of confusion.  I take it that your reading of
http://www.w3.org/TR/XMLHttpRequest2/#the-overridemimetype-method is that if
the mime type is ever overridden and there is a subsequent access of
.responseText, the response body must be re-decoded with the encoding
specified by the new mimetype.

WebKit does not  implement this functionality - the response text decoder is
initialized when the first byte is received from the network based on the
override mime type (if present) and response encoding at that time.  The
decoding behavior from this point on does not change.  WebKit does not save
the raw bytes off the network between receiving chunks of data from the
network, instead it decodes chunks as they arrive and saves them as UTF16
strings.  Frankly, being able to arbitrarily change the encoding at any
point in time and ask for the responseText using a different encoding
sounds dumb - but I'm guessing you will say that is a debate for
public-webapps@
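
To make the decode-as-you-go behavior concrete, here is a purely hypothetical sketch (invented names, plain C++, not WebCore's actual XMLHttpRequest code): the encoding is fixed when the first chunk arrives and only the decoded text is retained, so there is nothing left to re-decode later.

#include <cstddef>
#include <string>

class ResponseTextBuilder {               // hypothetical, for illustration only
public:
    explicit ResponseTextBuilder(const std::string& encoding)
        : m_encoding(encoding)            // fixed when the first byte arrives
    {
    }

    void didReceiveChunk(const char* data, std::size_t length)
    {
        // A real implementation would convert from m_encoding to UTF-16 here;
        // the raw network bytes are dropped once this call returns.
        m_decodedText.append(data, length);
    }

    const std::string& responseText() const { return m_decodedText; }

private:
    std::string m_encoding;
    std::string m_decodedText;            // only the decoded form is kept
};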

- James


 In any event, any discussion on changes to the specification really
 ought to happen on public-weba...@w3.org.

 Cheers,


 --
 Anne van Kesteren
 http://annevankesteren.nl/
 ___
 webkit-dev mailing list
 webkit-dev@lists.webkit.org
 http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev

___
webkit-dev mailing list
webkit-dev@lists.webkit.org
http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev


Re: [webkit-dev] XHR responseArrayBuffer attribute

2010-09-27 Thread James Robinson
On Mon, Sep 27, 2010 at 6:37 PM, Maciej Stachowiak m...@apple.com wrote:


 On Sep 27, 2010, at 3:19 PM, Michael Nordman wrote:

 Webkit's XHR currently does not keep two copies of the data that I can see.
 I think we should avoid that.


 We could keep the raw data around, which hopefully is directly usable as an
 ArrayBuffer backing store, and only translate it to text format when/if the
 client requests responseText.


It would be unfortunate to have to keep the raw data around after the page
access .responseText, given that the overwhelming majority of pages will
touch .responseText and nothing else.  I found when improving the V8 XHR
implementation that the memory footprint of .responseText being held off of
the XHR object itself was often significant so I would be very reluctant to
grow it by an addition 50-100% (depending on encoding) in the common case.

- James
___
webkit-dev mailing list
webkit-dev@lists.webkit.org
http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev


Re: [webkit-dev] Arena is crufty?

2010-09-01 Thread James Robinson
On Wed, Sep 1, 2010 at 4:20 PM, Chris Marrin cmar...@apple.com wrote:


 Ken's PODRedBlackTree patch has made me go back and take a closer look at
 WebKit's Arena class. Turns out it's not a class at all, just some structs
 and macros. That seems very un-WebKit-like to me. Ken's patch also has a
 PODArena class, which uses Arena in its implementation. Sam suggests that
 PODRedBlackTree should really go into WTF, which means PODArena and Arena
 would need to go there as well.

 It seems like Arena really needs to be brought into the 21st century and
 made a proper class. Maybe now is the right time to:

 1) Make Arena a class

 2) Integrate Ken's PODArena functionality into this new Arena class (or
 maybe just make Ken's PODArena the new Arena class).

 3) Move the new Arena class to WTF

 4) Put PODRedBlackTree in WTF

 It looks like RenderArena is currently the only client of Arena.h, so this
 change shouldn't be too hard. Of course, looking at RenderArena, it's a
 little odd, too. It is not renderer specific at all. It's just an Arena that
 recycles freed objects. Maybe we should move that functionality into the new
 Arena class. But RenderArena is used all over the place, so maybe that's
 going one step too far down this road?


I'm pretty sure we want to delete RenderArena (since
the minuscule-to-negative performance gain is not worth the extra
complexity) at some point anyway.  If that is still the case, then maybe it
makes more sense to have PODArena be the only arena class in WebKit.

I'm not sure we have to move PODArena / PODRedBlackTree to WTF right away,
but it can't hurt.  My gut feeling is that historically most classes that
think they want Arenas actually do not need the extra complexity and we
shouldn't encourage more users of arena classes without some discussion.  I
actually prefer having PODArena be an implementation detail of
PODRedBlackTree and not exposed as a general utility until we have a clear
need for something else to use it.

- James



 -
 ~Chris
 cmar...@apple.com




 ___
 webkit-dev mailing list
 webkit-dev@lists.webkit.org
 http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev

___
webkit-dev mailing list
webkit-dev@lists.webkit.org
http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev


Re: [webkit-dev] style question for empty for loops

2010-08-26 Thread James Robinson
The style guide currently covers this
http://webkit.org/coding/coding-style.html:

4. Control clauses without a body should use empty braces:

Right:

for ( ; current; current = current->next) { }

Wrong:

for ( ; current; current = current->next);


- James

On Thu, Aug 26, 2010 at 12:22 PM, Chris Fleizach cfleiz...@apple.com wrote:


 On 26. aug. 2010, at 11.49, Darin Adler wrote:

  On Aug 26, 2010, at 11:35 AM, Chris Fleizach wrote:
 
 for (...; ...; ...) { }
 

 So maybe this is the best option. I can add a style guide check for that,
 unless there are objections

 ___
 webkit-dev mailing list
 webkit-dev@lists.webkit.org
 http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev

___
webkit-dev mailing list
webkit-dev@lists.webkit.org
http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev


Re: [webkit-dev] Checking allocation failures [was: Re: Naked new considered harmful]

2010-08-25 Thread James Robinson
On Wed, Aug 25, 2010 at 2:34 PM, Adam Barth aba...@webkit.org wrote:

 On Wed, Aug 25, 2010 at 2:31 PM, Stephan Assmus supersti...@gmx.de
 wrote:
  Am 25.08.2010 18:35, schrieb Adam Barth:
  On Wed, Aug 25, 2010 at 7:09 AM, Stephan Assmus supersti...@gmx.de
   wrote:
  Am 24.08.2010 19:46, schrieb Adam Barth:
  One thing Darin and I discussed at WWDC (yes, this email has been a
  long time coming) is better programming patterns to prevent memory
  leaks.  As I'm sure you know, whenever you allocate a RefCounted
  object, you need to call adoptRef to prevent a memory leak by adopting
  the object's initial reference:
 
  adoptRef(new FancyRefCountedObject())
 
  We now have an ASSERT that enforces this programming pattern, so we
  should be able to catch errors pretty easily.  Recently, Darin also
  introduced an analogous function for OwnPtr:
 
  adoptPtr(new NiftyNonRefCountedObject())
 
  What adoptPtr does is shove the newly allocated object into a
  PassOwnPtr, which together with OwnPtr, makes sure we don't leak the
  object.  By using one of these two functions every time we call new,
  it's easy to see that we don't have any memory leaks.  In the cases
  where we have an intentional memory leak (e.g., for a static), please
  use the leakPtr() member function to document the leak.
 
  In writing new code, please avoid using naked calls to new.  If
  you're making an object that you expect to be heap-allocated, please
  add a create method, similar to the create method we use for
  RefCounted objects:
 
  static PassOwnPtr<NiftyNonRefCountedObject> create(ParamClass* param)
  {
  return adoptPtr(new NiftyNonRefCountedObject(param));
  }
 
  You should also make the constructor non-public so folks are forced to
  call the create method.  (Of course, stack-allocated objects should
  have public constructors.)
 
  I'm going through WebCore and removing as many naked news as I can.
  At some point, we'll add a stylebot rule to flag violations for new
  code.  If you'd like to help out, pick a directory and change all the
  calls to new to use adoptRef or adoptPtr, as appropriate.
 
  this reminds me that I've always been wondering about checks for
  allocation
  failure in WebCore (versus concerns about leaks). A plain call to new
 may
  throw an std::bad_alloc exception. If this is not considered, it may
  leave
  objects in an invalid state, even when using objects such as RefPtr to
  wrap
  around allocations. I don't remember any specific place in the WebCore
  code
  where it would be a problem, I just don't remember seeing any code that
  deals with allocation failures. Is this a design choice somehow? If so,
  is
  there some online documentation/discussion about it? (Tried to google
 it
  but
  found nothing...)
 
  Failed allocations in WebKit call CRASH().  This prevents us from
  ending up in an inconsistent state.
 
  Ok, but WebKit is used on devices where allocation failure is somewhat of
 a
  realistic possibility, no? Wouldn't it be desirable to handle such a
  situation gracefully? (I.e. display of an error message when trying to
 open
  one more tab rather than crashing the entire browser, for example.)
  Hopefully I am not missing something obvious.

 Dunno.  The crash-on-allocation failure assumption is baked into lots
 of lines of code.


We do have a tryFastMalloc() path that returns NULL on allocation failure
instead of crashing.  It's used in some pieces of code that are carefully
written to handle an allocation failure gracefully.   However, this is rare.

In general it's very difficult to recover from an allocation failure in an
arbitrary piece of code with an arbitrary callstack.
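
Roughly, the opt-in pattern looks like the sketch below.  It assumes the TryMallocReturnValue / getValue() shape of the API; check wtf/FastMalloc.h in your tree for the exact signatures.

#include <wtf/FastMalloc.h>

// Sketch of the "try" allocation path: plain fastMalloc() would CRASH() on
// failure, while this lets a carefully written caller recover.
static char* tryAllocateBuffer(size_t size)
{
    void* data;
    if (!WTF::tryFastMalloc(size).getValue(data))
        return 0;   // allocation failed; the caller must tolerate this
    return static_cast<char*>(data);
}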

- James


 Adam
 ___
 webkit-dev mailing list
 webkit-dev@lists.webkit.org
 http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev

___
webkit-dev mailing list
webkit-dev@lists.webkit.org
http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev


Re: [webkit-dev] cr-win bot broken -- GYP Ninjas, please help.

2010-08-14 Thread James Robinson
Indeed! Dimitri or I will fix the python by Monday.

- James

On Aug 14, 2010 11:31 AM, Adam Barth aba...@webkit.org wrote:
The Chromium bots were supposed to have their Python upgraded ages
ago.  If we ever want to run the LayoutTests on them, they'll need
their Python upgraded anyway.  I think the correct solution is to fix
the bots.

Adam



On Sat, Aug 14, 2010 at 12:00 AM, Eric Seidel e...@webkit.org wrote:
 Fixed that one, but error...
___
webkit-dev mailing list
webkit-dev@lists.webkit.org
http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev


Re: [webkit-dev] DOM Mutation Events. WAS: Fwd: webkit editing rewrite?

2010-08-09 Thread James Robinson
On Mon, Aug 9, 2010 at 4:39 PM, Ojan Vafai o...@chromium.org wrote:

 On Thu, Aug 5, 2010 at 2:22 AM, Adam Barth aba...@webkit.org wrote:

 On Thu, Aug 5, 2010 at 1:59 AM, Maciej Stachowiak m...@apple.com wrote:
  If mutation events tend to break editing, one simple solution is to turn
 then off within the scope of editing operations and send a single mutation
 event at the end. It's not clear if that kind of solution would be
 appropriate to expose to Web content, though; you can't do the RAII idiom in
 JavaScript so there could be too much risk of staying stuck in the mutation
 events deferred state. On the other hand, a dynamic-wind-style solution
 might be appropriate. This is one tiny example of how the same abstractions
 might not be right internally and externally.

 If we don't already have it, we definitely want a RAII for turning off
 mutation events (or at least ASSERTing that we're not trying to fire
 them) for the parser.  The new and old tree builders might explode if
 they ever fire a mutation event.


 I'm trying to figure out what we should do with DOM Mutation Events. The
 general agreement from the www-dom discussion of mutation events was that
 they are the wrong abstraction for editing related code. beforeinput/input
 events meet editing needs better. So, that leaves a small list of relatively
 low-priority use-cases. See
 http://www.w3.org/2008/webapps/wiki/MutationReplacement for some of them.

 Mutation events are a huge source of crashes and they complicate WebCore
 code considerably. Also, I don't see any evidence that mutation events are
 used much other than as a blunt dirty bit for editing code, which can now be
 replaced with the input event. Here's some options I see:

1. Make the existing mutation events async. That would address all
the issues with them except for the performance cost and would only be
backwards-incompatible for the small percentage of sites that depend on the
events firing synchronously.
2. Drop support for them with no replacement.
3. Complete the implementation of beforeinput/input events to support
editing use-cases then do 1 or 2.
4. Same as above, but also implement a low-level replacement for
mutation events (e.g. something like
http://lists.w3.org/Archives/Public/www-dom/2009AprJun/0081.html).

 Do any of these options sound feasible or is the only option to do the very
 large amount of work to make existing mutation events work?


An additional data point is that the DOM events level 3 specification
declares mutation events to be deprecated and includes the following text:

Many single modifications of the tree can cause multiple mutation events to
be dispatched. Rather than attempt to specify the ordering of mutation
events due to every possible modification of the tree, the ordering of these
events is left to the implementation.

This means that the mutation event behavior for anything but the absolute
simplest of cases is implementation-defined, and in practice different
across browsers and browser versions.  Additionally, the DOM events spec
currently has no editor and so there's not a great chance of improvements in
the spec in the short term.

I'd strongly suggest that we go with option 2.  Problems with editing and
input should be fixed by editing and input APIs and there honestly are not
many other compelling use cases that actually work today.  Mutation events
have caused a lot of nasty crashes and complexity in WebKit and there's no
reason to believe that this will improve in the short term.

- James



 Ojan

 ___
 webkit-dev mailing list
 webkit-dev@lists.webkit.org
 http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev


___
webkit-dev mailing list
webkit-dev@lists.webkit.org
http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev


Re: [webkit-dev] review queue crazy idea

2010-07-21 Thread James Robinson
I've had patches sit in the review queue for 4 weeks then receive a
positive review and land without much incident.  Some patches are difficult
to review or have a limited number of potential reviewers.  I would have
really appreciated a reminder email about that patch in particular (I
honestly had forgotten about it by the time the review email came) but it
would not have been helpful to mark it review-.  I would have just marked it
review? again, which given the eventual outcome (r+ with some comments)
would have been the right move.

We should make it clearer to new contributors that it is the role of the
patch submitter to sell it to the project.  This means making it clear why
the patch is an improvement, making it easy for the reviewer to understand,
and making a convincing argument that the patch is correct by thorough
testing.  This also tends to mean figuring out who could review a patch and
chasing them down via email or IRC.  I think if contributors thought in
these terms then it would naturally encourage the behaviors reviewers like -
breaking patches up, providing more test cases, and clearly explaining what
a patch is trying to do.

- James


On Wed, Jul 21, 2010 at 10:17 PM, Ryosuke Niwa rn...@webkit.org wrote:

 On Wed, Jul 21, 2010 at 2:40 PM, Ojan Vafai o...@chromium.org wrote:

 There are currently 38 (of 171 total) patches in the review queue where
 the bugs have not been modified in over 1 month. I propose we have a bot
 that educates people about writing easy to review patches and auto-rejects
 any patches in bugs that haven't been touched in over a month.


 I'd love to see a bot that educates people but I'm not sure if we should
 auto-rejects patches that are simply old. Maybe we need a little bit more
 of graduality like pinging the author of the patch first, and then make it
 obsolete if nobody responds etc...

 This is just an initial proposal. I'm not wed to any of the details of how
 this would work. I do think that auto-rejecting old patches is valuable to
 the project as a whole. Having the review queue be so large makes
 it daunting for any reviewer to try and tackle it.


 I'd agree that having a really large review queue isn't ideal.  Maybe we
 can customize the review queue so that it only shows patches of your
 expertise.

 Best,
 Ryosuke Niwa


 ___
 webkit-dev mailing list
 webkit-dev@lists.webkit.org
 http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev


___
webkit-dev mailing list
webkit-dev@lists.webkit.org
http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev


Re: [webkit-dev] Frustrated at inconsiderate behavior

2010-07-07 Thread James Robinson
On Wed, Jul 7, 2010 at 7:19 PM, Oliver Hunt oli...@apple.com wrote:


 On Jul 7, 2010, at 7:16 PM, Tony Gentilcore wrote:



 On Wed, Jul 7, 2010 at 6:50 PM, Mo, Zhenyao zhen...@gmail.com wrote:

 Maybe I should complain this in a different threads, but recently the
 commit bot waiting time is way too long.  Several times a patch of mine got
 the r+ and cq+ and it landed two days later.  This is really frustrating.

 I am very tempted to use svn directly to commit patches, but that means
 the patch only gets tested in my local environment. One time my patch
 broke the Leopard bot; it turned out the failed test was skipped on Leopard,
 which is exactly my OS.  If I had landed it through the commit bots, I could
 have identified the issue earlier.


 I agree they are closely related. A greener tree means a faster commit
 queue and a faster commit queue means less people subvert it and break the
 tree. The hard problem is figuring out how to fix the incentives so
 subverting the queue isn't so desirable.


 What do you mean by subvert the queue?  The commit queue is a tool to
 streamline commits from contributors who do not have commit access to the
 repository.  If you have the ability to commit you should not be using the
 commit queue to land your patches.


That's not my understanding of the commit queue.  I use the commit queue to
land my patches when possible so that the patch receives further testing
before it hits the tree and potentially affects a large number of
contributors.  Why do you think this is a bad idea?  Is this preference
codified somewhere (formally or informally)?

- James


 --Oliver


 ___
 webkit-dev mailing list
 webkit-dev@lists.webkit.org
 http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev


___
webkit-dev mailing list
webkit-dev@lists.webkit.org
http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev


Re: [webkit-dev] On adding 'console.memory' API (and about the whole 'console' object.)

2010-06-04 Thread James Robinson
How?  Visited state information is not stored in the javascript heap (which
is what this object contains information about).

- James

On Fri, Jun 4, 2010 at 12:06 PM, David Hyatt hy...@apple.com wrote:

 I'm fairly certain I could construct an attack on :visited history privacy
 using this object.

 dave

 On Jun 4, 2010, at 2:02 PM, Ojan Vafai wrote:

 On Fri, Jun 4, 2010 at 11:27 AM, Sam Weinig sam.wei...@gmail.com wrote:

 After talking it over with some folks here at Apple, I want to formally
 object to adding the Console.memory extension to the Console object and I
 think we should remove support for Console.profiles as soon as we can.  They
 both provide information to users that is not generally useful (beyond the
 continuous integration/buildbot use-case which I think is of limited
 utility) and therefore should not be exposed to the web.


 Why is the continuous integration/buildbot use-case of limited utility? Or
 are you saying that Console.memory doesn't really support that use-case
 well? I think we want to make it as easy as possible for complex apps (e.g.
 email apps, mapping apps, etc.) to care about and monitor memory use.

 While I'm not convinced by the utility argument, I do buy the security
 argument. How would you feel about leaving the code in, but disabling it by
 default? Then it could be enabled by command-line or via a preference.

 That said, I'm OK with rolling back for now given that the code was
 committed without this discussion actually coming to a conclusion.

 Ojan
 ___
 webkit-dev mailing list
 webkit-dev@lists.webkit.org
 http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev



 ___
 webkit-dev mailing list
 webkit-dev@lists.webkit.org
 http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev


___
webkit-dev mailing list
webkit-dev@lists.webkit.org
http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev


Re: [webkit-dev] Pixel test differences between Leopard and Snow Leopard

2010-05-19 Thread James Robinson
Thanks for the feedback.  I've filed the following bugs for the different
steps involved (rebaselining, moving expectations, etc).

https://bugs.webkit.org/show_bug.cgi?id=39317
https://bugs.webkit.org/show_bug.cgi?id=39319
https://bugs.webkit.org/show_bug.cgi?id=39325

In the process of updating the Leopard baselines for tests that pass on Snow
Leopard I've already discovered (and filed) several repaint regressions on
Leopard involving animations and reflections.

On Tue, May 18, 2010 at 8:27 AM, Ojan Vafai o...@chromium.org wrote:

 On Mon, May 17, 2010 at 3:59 PM, Simon Fraser simon.fra...@apple.com wrote:

 On May 17, 2010, at 3:44 PM, James Robinson wrote:

 After this is done it should be possible to run the full layout test suite
 with --tolerance=0 on Snow Leopard and have no failures.  Assuming this
 happens, I would like to switch the default value of --tolerance from 0.1 to
 0.0 to try to keep regressions from creeping back in.

 You may find that some of the text antialiasing differences are
 hardware-dependent, so I'm not sure if a tolerance of 0 would be possible.


 I'd really like to see us default to a tolerance of 0 for at least one mac
 platform. Snow Leopard makes sense to be that platform. If we find that
 there are antialiasing differences that are hardware-dependent, then we
 should consider the hardware on the bots the default and, if
 possible, dynamically detect different hardware to set a higher tolerance.


Yes, I'm hoping that we'll be able to somehow limit this to a subset of
tests and/or hardware configurations if needed.  At a minimum we should be
able to make the tolerance much smaller than 0.1% (which is large enough to
paper over real regressions in some of our current tests).

- James

  Does this sound like a good plan?  The biggest impact on the project will
 be a number of large commits in LayoutTests/platform/ to do the svn move and
 svn add operations to move results from platform/mac - platform/mac-leopard
 and to add new Snow Leopard results to platform/mac.  This can be done over
 a weekend and mostly by a script to try to minimize impact.

 This sounds like a great short-term plan. I think a longer-term plan would
 be to move as many tests to ref-tests as possible, which eliminates any
 text-aliasing issues.


 I fully support this plan.

 Ojan

___
webkit-dev mailing list
webkit-dev@lists.webkit.org
http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev


[webkit-dev] Pixel test differences between Leopard and Snow Leopard

2010-05-17 Thread James Robinson
Leopard and Snow Leopard have subtle differences in the way they render
antialiased text.  This does not affect the text metrics but does cause
slight pixel differences.  The majority of our existing pixel test baselines
appear to have been generated on Leopard, but there is a growing minority of
tests that have been generated on Snow Leopard as well as a small number of
baselines that are out of date or just plain incorrect.  This means that
when run with --tolerance=0, there are a significant number of failures on
Snow Leopard that do not have anything to do with the behavior they are
intended to test.  Here are some concrete numbers (taken around r59000, the
exact results vary slightly from revision to revision but not hugely).

On a Leopard machine, running the tests with -p --tolerance=0 yields 120
failures.  With the default tolerance (0.1%) there are 68 failures.

On Snow Leopard, with -p --tolerance=0 there are 3934 failures.  With the
default tolerance (0.1%) there are 144 failures.

Based on this, it seems reasonable to assume that there are ~3814 pixel test
results in platform/mac that are Leopard specific.  There is also a smaller
number of results generated from Snow Leopard machines that fail on Leopard.
 I think this is an unhealthy state for the project as it prevents people
developing on a Mac on Snow Leopard from running the pixel tests without
getting a huge number of spurious failures.  Currently pretty much all Mac
developers are likely to be on Snow Leopard, except for Google employees who
will be moving to Snow Leopard in the near future.  The pixel tests are our
only coverage for lots of the repaint logic currently.

I think fundamentally there is a short term and a longer term problem here.
 The short term problem is that since the Mac pixel results are in such a
bad state for people on Snow Leopard, people making changes to the code are
not likely to update the pixel results at all which leads to an even worse
mess in platform/mac.  The longer term problem is that since the pixel tests
are only run on build bots by one port (Chromium) and not run on
build.webkit.org by anyone, the pixel tests are likely to languish.

To resolve the immediate problem of Leopard-specific results in
platform/mac, I'd like to move all of the Leopard-specific results currently
in platform/mac to platform/mac-leopard and to generate new Snow Leopard
specific results for those tests in platform/mac.  Specifically my plan is
to do this:

For each test T with pixel results in platform/mac that fails on Leopard or
Snow Leopard with -p --tolerance=0:
- if T passes pixel tests exactly on Leopard and fails with a < 0.5% pixel
difference on Snow Leopard, assume the only pixel difference is due to text
antialiasing and do the following:
 - svn mv the current platform/mac pixel results to platform/mac-leopard
 - add new pixel results generated on Snow Leopard into platform/mac
 - there should be around 3800 tests in this category
- if T passes pixel tests exactly on Snow Leopard and fails with a < 0.5%
pixel difference on Leopard:
 - generate new pixel baselines for Leopard and add to platform/mac-leopard
 - there should be around 50 tests in this category
- if T fails on both Leopard and Snow Leopard, remove the current
expectations and file a bug
 - there should be around 70 tests in this category

After this is done it should be possible to run the full layout test suite
with --tolerance=0 on Snow Leopard and have no failures.  Assuming this
happens, I would like to switch the default value of --tolerance from 0.1 to
0.0 to try to keep regressions from creeping back in.

Does this sound like a good plan?  The biggest impact on the project will be
a number of large commits in LayoutTests/platform/ to do the svn move and
svn add operations to move results from platform/mac - platform/mac-leopard
and to add new Snow Leopard results to platform/mac.  This can be done over
a weekend and mostly by a script to try to minimize impact.

- James
___
webkit-dev mailing list
webkit-dev@lists.webkit.org
http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev


Re: [webkit-dev] Testing changes to CodeGenerator*.pm

2010-04-29 Thread James Robinson
As a concrete example, I found this test setup helpful for this patch:
http://trac.webkit.org/changeset/58345.  A nice side effect was that it
revealed a bug in CodeGeneratorGObject.pm and let me fix it without having
to set up the build for whatever it is that uses the GObject bindings.

I agree that golden file testing is a very high-maintenance fragile test
method, but it's better than nothing.  If this framework didn't exist then I
would have likely made the change and relied on spot checking and our
existing automated tests to catch any regressions which is less than ideal.

- James

On Thu, Apr 29, 2010 at 11:44 AM, Ojan Vafai o...@chromium.org wrote:

 On Thu, Apr 29, 2010 at 11:39 AM, Alexey Proskuryakov a...@webkit.org wrote:

 On 29.04.2010, at 11:17, Yaar Schnitman wrote:

 I've been using the tool for a couple of patches in V8. It really boosts
 the development cycle, helps reviewers understand what a cryptic block of
 Perl code actually does, and makes side effects easy to find. Once you
 start using it, it becomes hard to work without it. Give it a try!


 I'm thinking about how this tool could have helped with the CodeGenerator
 changes I made in the past, and it seems that it wouldn't have detected any
 changes, and could require me to find creative ways to test the new
 behavior.


 I don't really follow the what the maintenance overhead is. How does this
 actually cause you more than a trivial amount of extra work? Maybe a
 specific example would help.

 Isn't this just like having a layout test with expected results? It's a
 small isolated test instead of testing everything. That seems like a good
 thing.

 More importantly, it lets you be sure that every feature of the code
 generator has some testing. In the real IDLs, a feature might stop getting
 used temporarily and then changes to the code generator would not be readily
 apparent.

 Ojan

 P.S. Sorry for the double-post some of you got. Sent from the wrong email
 address at first.

 ___
 webkit-dev mailing list
 webkit-dev@lists.webkit.org
 http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev


___
webkit-dev mailing list
webkit-dev@lists.webkit.org
http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev


Re: [webkit-dev] WTF::callOnMainThread() and re-entrancy

2010-03-08 Thread James Robinson
I saw a very similar bug recently regarding modal dialogs and focus/blur
events: https://bugs.webkit.org/show_bug.cgi?id=33962.  I think you will see
the crash from that stacktrace with or without r55593 in place.  For
example, currently some mouse event handlers can fire underneath a
window.alert on GMail.  It's simple enough to check for
page()->defersLoading() and suppress the task, but I am not sure how to
re-enqueue the event to fire later.  Maybe the PageLoadDeferrer should
maintain a queue of things to do once it stops deferring (i.e. once the
modal dialog is dismissed).
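
A hypothetical sketch of that queue idea, just to show the shape (none of these names exist in WebCore, and this is not a proposed patch):

#include <wtf/Deque.h>

class DeferredTask {                         // hypothetical
public:
    virtual ~DeferredTask() { }
    virtual void performTask() = 0;
};

class DeferredTaskQueue {                    // hypothetical
public:
    // Called instead of running the task when page()->defersLoading() is true.
    void append(DeferredTask* task) { m_tasks.append(task); }   // takes ownership

    // Called by the deferrer once the modal dialog has been dismissed.
    void runPendingTasks()
    {
        while (!m_tasks.isEmpty()) {
            DeferredTask* task = m_tasks.first();
            m_tasks.removeFirst();
            task->performTask();
            delete task;
        }
    }

private:
    WTF::Deque<DeferredTask*> m_tasks;
};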

- James

On Mon, Mar 8, 2010 at 11:21 AM, Drew Wilson atwil...@google.com wrote:

 Hi all,

 This weekend I spent some time trying to track down a regression caused by
 r55593. In particular:

 http://trac.webkit.org/changeset/55593/trunk/WebCore/dom/Document.cpp

 This was a change to Document.postTask() to always use callOnMainThread()
 (previously, calls to postTask() on the main thread were using one-shot
 timers).

 We've since reverted r55593, but I've been playing around locally with that
 same change to Document.postTask(), and I noticed that it *appeared* that
 tasks posted via callOnMainThread() were still being executed even though
 the main thread was displaying a javascript alert. The end result is that if
 a worker thread was sending messages to the main thread via a message port,
 the event handlers for incoming messages would still be invoked (so we'd run
 javascript even though the javascript execution context should be blocked on
 the alert()). In my test, I ended up trying to display nested alert()
 dialogs, which resulted in a failed assertion.

 So, my question is: does it surprise anyone that tasks posted via
 callOnMainThread() are getting executed even though there's a modal dialog
 shown? And is there anything I should be doing in my task handler to make
 sure we aren't re-entering JS execution inappropriately in these cases? I'm
 just concerned that the way we're posting tasks from worker threads to the
 main thread may cause reentrancy problems.

 Here's the stack from my nested call to alert():

 #0  0x03ff073d in WebCore::DOMTimer::suspend (this=0x15c8a5b0) at
 /Volumes/source/chrome.git/src/third_party/WebKit/WebCore/page/DOMTimer.cpp:181
 #1  0x046b2945 in WebCore::ScriptExecutionContext::suspendActiveDOMObjects
 (this=0x81d8434) at
 /Volumes/source/chrome.git/src/third_party/WebKit/WebCore/dom/ScriptExecutionContext.cpp:206
 #2  0x04538ae5 in WebCore::PageGroupLoadDeferrer::PageGroupLoadDeferrer
 (this=0xbfffc6bc, page=0xc97940, deferSelf=true) at
 /Volumes/source/chrome.git/src/third_party/WebKit/WebCore/page/PageGroupLoadDeferrer.cpp:47
 #3  0x03e0d977 in WebCore::Chrome::runJavaScriptAlert (this=0xc8ddb0,
 frame=0x805a600, messa...@0xbfffc790) at
 /Volumes/source/chrome.git/src/third_party/WebKit/WebCore/page/Chrome.cpp:264
 #4  0x03ff6c2c in WebCore::DOMWindow::alert (this=0x147e23f0,
 messa...@0xbfffc790) at
 /Volumes/source/chrome.git/src/third_party/WebKit/WebCore/page/DOMWindow.cpp:795
 #5  0x04266617 in WebCore::jsDOMWindowPrototypeFunctionAlert
 (exec=0x14f0c1b0, thisValue={u = {asEncodedJSValue = -8455721472, asDouble =
 -nan(0xe07ffee00), asBits = {payload = 134213120, tag = -2}}},
 ar...@0xbfffc7bc) at
 /Volumes/source/chrome.git/src/third_party/WebKit/WebKitBuild/Debug/DerivedSources/WebCore/JSDOMWindow.cpp:8320
 #6  0x00d8b166 in ?? ()
 #7  0x0070bd6b in JSC::JITCode::execute (this=0x147a7690,
 registerFile=0x1473a7fc, callFrame=0x14f0c150, globalData=0x818e800,
 exception=0x818f60c) at JITCode.h:77
 #8  0x006f542f in JSC::Interpreter::execute (this=0x1473a7f0,
 functionExecutable=0x147a7680, callFrame=0x81ed664, function=0x7fd5400,
 thisObj=0x7fd5200, ar...@0xbfffca64, scopeChain=0x147a8920,
 exception=0x818f60c) at
 /Volumes/source/chrome.git/src/third_party/WebKit/JavaScriptCore/interpreter/Interpreter.cpp:687
 #9  0x0074d767 in JSC::JSFunction::call (this=0x7fd5400, exec=0x81ed664,
 thisValue={u = {asEncodedJSValue = -8455892480, asDouble =
 -nan(0xe07fd5200), asBits = {payload = 134042112, tag = -2}}},
 ar...@0xbfffca64) at
 /Volumes/source/chrome.git/src/third_party/WebKit/JavaScriptCore/runtime/JSFunction.cpp:122
 #10 0x0069d481 in JSC::call (exec=0x81ed664, functionObject={u =
 {asEncodedJSValue = -8455891968, asDouble = -nan(0xe07fd5400), asBits =
 {payload = 134042624, tag = -2}}}, callType=JSC::CallTypeJS,
 callda...@0xbfffca34, thisValue={u = {asEncodedJSValue = -8455892480,
 asDouble = -nan(0xe07fd5200), asBits = {payload = 134042112, tag =
 -2}}}, ar...@0xbfffca64) at
 /Volumes/source/chrome.git/src/third_party/WebKit/JavaScriptCore/runtime/CallData.cpp:39
 #11 0x0429f5bb in WebCore::JSEventListener::handleEvent (this=0x15c31ef0,
 scriptExecutionContext=0x81d8434, event=0xc34d40) at
 /Volumes/source/chrome.git/src/third_party/WebKit/WebCore/bindings/js/JSEventListener.cpp:115
 #12 0x04037da0 in WebCore::EventTarget::fireEventListeners
 (this=0x15c9c400, 

Re: [webkit-dev] Making browsers faster: Resource Packages

2009-11-17 Thread James Robinson
On Tue, Nov 17, 2009 at 2:19 PM, Alexander Limi l...@mozilla.com wrote:

 Good people of Webkit!

 We'd all like for the web to be faster, and therefore I'd love your feedback 
 on my proposal — it would be great to see support for this in additional 
 browsers, not just Firefox:

 http://limi.net/articles/resource-packages/

 Summary:
 What if there was a backwards compatible way to transfer all of the resources 
 that are used on every single page in your site — CSS, JS, images, anything 
 else — in a single HTTP request at the start of the first visit to the page? 
 This is what Resource Package support in browsers will let you do.

 Looking forward to hear your thoughts on this.

It seems like a browser will have to essentially stop rendering until
it has finished downloading the entire .zip and examined it.  This
will most likely slow down the time taken to render parts of the page
as they arrive. From the blog post:

A given browser will probably block downloading any resources until
the lists of files that are available in resource packages have been
accounted for — or there may be a way to do opportunistic requests or
similar, we leave this up to the browser vendor unless there’s a
compelling reason to specify how this should work.

This also means that a browser would have to stop tokenizing the HTML
when it hits the next <script src=...> tag, since it would be unable to
know if the javascript was in the bundled zip or not.  This seems to
go against the idea that as much of the page be rendered as fast as
possible.

- James


 Thanks!

 --
 Alexander Limi · Firefox User Experience · http://limi.net


 ___
 webkit-dev mailing list
 webkit-dev@lists.webkit.org
 http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev

___
webkit-dev mailing list
webkit-dev@lists.webkit.org
http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev


Re: [webkit-dev] Making browsers faster: Resource Packages

2009-11-17 Thread James Robinson
On Tue, Nov 17, 2009 at 3:02 PM, Peter Kasting pkast...@google.com wrote:
 On Tue, Nov 17, 2009 at 3:00 PM, James Robinson jam...@google.com wrote:

 It seems like a browser will have to essentially stop rendering until
 it has finished downloading the entire .zip and examined it.

 I think mitigating this is why there are optional manifests.  I agree that
 if there's no manifest, this is really, really painful.  I think manifests
 should be made mandatory.
 PK

Do you mean external manifests?  Either way, the browser cannot start
a download for any external resource until it downloads and parses out
the manifest.txt for every resource bundle seen in the page so far.
Whether it is pulling the manifest out of a .zip file or as a .txt by
itself doesn't matter much, it's still an extra HTTP round-trip before
any content can be downloaded (including content that is not bundled
at all).

- James
___
webkit-dev mailing list
webkit-dev@lists.webkit.org
http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev


Re: [webkit-dev] Making browsers faster: Resource Packages

2009-11-17 Thread James Robinson
On Tue, Nov 17, 2009 at 5:36 PM, Alexander Limi l...@mozilla.com wrote:
 (Adding in some of the people involved with Resource Packages earlier to
 this thread, so they can help me out — I'm just a lowly UI designer, so
some
 of these questions have to be answered by people that know how browsers
 work. I'm just the messenger. Hope you don't mind, guys, and remember that
 webkit-dev requires you to sign up before you can post.)

 On Tue, Nov 17, 2009 at 2:44 PM, Peter Kasting pkast...@google.com
wrote:

 I have read the whole document, but I read it quickly, so please do point
 out places where I've overlooked an obvious response.

 This is what everyone does, so no worries, happy to clarify. 95% of the
 this is why this won't work statements are actually answered by the
 article in some way. But I guess I shouldn't be surprised. :)


 Reduced parallelism is a big concern of mine.  Lots of sites make heavy
 use of resource sharding across many hostnames to take advantage of
multiple
 connections, which this defeats.

 If you package up everything in a single zip file, yes. Realistically, if
 you have a lot of resources, you'd want to spread them out over several
 files to increase parallelism. Also, there's usually resources that are
 page-specific (e.g. belong to the article being rendered). As with
 everything, there are possibilities to use this the wrong way, and
packaging
 up everything in one zip file will definitely affect parallelism. Don't do
 that.

If the contents are spread across N zip files then the browser still has to
download (at least part of) N files in order to see all the manifests before
it can start fetching other resources.  The page-specific resources end up
getting blocked behind all of the manifest downloads.  If resource bundles
are allowed to include other resource bundles (and I see nothing in the spec
about this), then each of the N downloads would have to be made serially
since the browser would have to check the manifest of each bundle to see if
it includes any of the remaining ones.

I think this line of concerns would be lesser if the author could declare
the contents of the manifest in the HTML itself to avoid an extra download
or give some sort of explicit signal to the browser that a given resource
was not in any resource bundle.  The downside of this is that it increases
the HTML's size even more which is a big loss on browsers that do not
support this.


 I am concerned about the instruction to prefer the packaged resources to
 any separate resources.  This seems to increase the maintenance burden
since
 you can never incrementally override the contents of a package, but
always
 have to repackage.

 This is something we could look at, of course. There are easy ways to
 invalidate the zip using ETags etc.


 If an author has resources only used on some pages, then he can either
 make multiple packages (more maintenance burden and exacerbates problem
 above), or include everything in one package (may result in downloading
 excessive resources for pages where clients don't need them).

 I don't think it's unreasonable to expect most big sites to have a
standard
 core of resources they use everywhere. It's important not to try to put
 *everything* in resource packages, just the stuff that should be present
 everywhere (and the specialized thumbnail search result case I mentioned).


 You note that SPDY has to be implemented by both UAs and web servers, but
 conversely this proposal needs to be implemented by UAs and _authors_.  I
 would rather burden the guys writing Apache than the guys making
webpages,
 and I think if a technique is extremely useful, it's easier to get
support
 into Apache than into, say, 50% of the webpages out there.

 There's no damage if you don't do this as a web author. If you care enough
 to do CSS spriting and CSS/JS combining, this gives you a more
maintainable,
 easier, faster solution.

 On Tue, Nov 17, 2009 at 3:00 PM, James Robinson jam...@google.com
wrote:

 It seems like a browser will have to essentially stop rendering until
 it has finished downloading the entire .zip and examined it.

 No. That's why the manifest is there, since it can be read early on, so
the
 browser doesn't have to block.

 I see a lot of I don't think this will work or I don't think this will
be
 any faster here. I guess I should get someone to help me create some
 reasonable benchmarks and show what the difference would be. Maybe Steve
 Souders or someone else that is better at this stuff than me can help out
 with some data.

Yes, actual numbers would be nice to have.

- James


 On Tue, Nov 17, 2009 at 3:02 PM, Peter Kasting pkast...@google.com
wrote:

 I think mitigating this is why there are optional manifests.  I agree
that
 if there's no manifest, this is really, really painful.  I think
manifests
 should be made mandatory.

 The manifests *are* mandatory. Without a manifest, it won't do anything
(ie.
 proceed to load the resources as usual), since

Re: [webkit-dev] [chromium-dev] Learning Webkit: High Level Webkit overview?

2009-10-06 Thread James Robinson
On Sun, Oct 4, 2009 at 8:19 AM, Buakaw San buakaw@gmail.com wrote:


 There is a document called How Chromium Displays Web Pages
 (http://dev.chromium.org/developers/design-documents/displaying-a-web-page-in-chrome),
 however I haven't found an equivalent page for Webkit. E.g.
 How Webkit Renders Web Pages. The Chromium document doesn't go into
 the Webkit part. There is an old blog
 http://webkit.org/blog/114/webcore-rendering-i-the-basics/.
 Which talks about some of the render process but it seems to focus on
 CSS handling.

 I'm trying to diagram this process. E.g. We're load a simple web page
 with some body text, a couple divs; one with an image tag, one with a
 plugin like flash. What is happening from start to finish?

 I imagine the process like such:

 Content = HTML Parsing = DOM Construction = Layout (Render Tree
 Construction) = Rendering


That's pretty much spot on.  Note that it's not always happening exactly in
this order, especially if the document is large and arrives from the server
in multiple chunks.  Then it could look something like:

Some content received
HTML parsing (which builds up the DOM as it goes)
Some more content received
More HTML parsing, DOM construction
Layout
Rendering (aka painting)


 What is the first thing that is done?  Which class is initially hit
 when a new web page request arrives? Which classes are responsible for
 parsing HTML/DOM/CSS? How is the plugin loaded? How is the image
 loaded?


The picture is a bit complex, but I can try to give some pointers for
starting.  Resources are pulled in through a variety of Loader classes -
start with FrameLoader (although it's too complicated at the moment).  HTML
Content is fed into an HTMLTokenizer which builds up the DOM tree.  For
images, see ImageLoader.  I'm not sure exactly how plugins work.  Layout and
painting are covered in the Surfin' Safari blog post - if you want to step
through them in a debugger start in FrameView.



 While these are questions that could be answered by studying the code
 in depth, it would be nice if there was such a basic introduction to
 Webkit rendering. Ideally one with nice pictures (like the Chromium
 docs), interaction diagrams and such.


Is that volunteering? :)

- James



 Perhaps this thread could become a source of knowledge for new comers.

 --~--~-~--~~~---~--~~
 Chromium Developers mailing list: chromium-...@googlegroups.com
 View archives, change email options, or unsubscribe:
http://groups.google.com/group/chromium-dev
 -~--~~~~--~~--~--~---


___
webkit-dev mailing list
webkit-dev@lists.webkit.org
http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev


Re: [webkit-dev] Purging as much memory as possible

2009-10-02 Thread James Robinson
On Fri, Oct 2, 2009 at 12:10 PM, Peter Kasting pkast...@google.com wrote:

 On Fri, Oct 2, 2009 at 11:13 AM, Geoffrey Garen gga...@apple.com wrote:

 Live items cannot be removed from the cache.


 Yeah, I think that is what I'm seeing.

 Assume the page is not being painted (e.g. the window is minimized).  Is
 there a way to turn the live items into dead ones so they can be flushed?
  Obviously the instant we repaint we will have to obtain these again.

 Even if this is possible, it might be _too_ aggressive, but first I just
 wonder if it can be done at all.


It appears that FontCache::invalidate() does this based on the
implementation - is that correct?  I'm not seeing any callers (but that
might just be Visual Studio being stupid).

- James


 PK

 ___
 webkit-dev mailing list
 webkit-dev@lists.webkit.org
 http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev


___
webkit-dev mailing list
webkit-dev@lists.webkit.org
http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev


[webkit-dev] Instrumentation framework for WebKit

2009-04-30 Thread James Robinson
It's difficult to understand the performance of a large web app nowadays -
think GMail scale and beyond, in part because a lot of things happen in the
UI thread outside of javascript.  For example layout, selector matching and
painting can all happen in a deferred way, take a lot of time and yet not
show up in any javascript-based profiling or metrics.  Even within
javascript it's not always obvious why things happen when they do - for
example a setTimeout() might get blocked behind an XHR readystatechange
handler or a layout causing the timer code to run much later than expected.


To help with this I propose adding a number of instrumentation points within
WebCore designed to bracket the more significant pieces of logic that can
block the UI thread.  Then a client could register itself for the
instrumentation points and aggregate statistics or build reports of what's
going on while a web app is running.
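
As a rough sketch of what one of these brackets could look like (hypothetical names, not necessarily what the actual patch does; it assumes WTF::currentTime() for timestamps):

#include <wtf/CurrentTime.h>

class InstrumentationClient {                 // hypothetical
public:
    virtual ~InstrumentationClient() { }
    virtual void didRun(const char* category, double seconds) = 0;
};

// RAII bracket placed at the top of layout(), paint(), style recalc, etc.
class InstrumentationScope {                  // hypothetical
public:
    InstrumentationScope(InstrumentationClient* client, const char* category)
        : m_client(client)
        , m_category(category)
        , m_startTime(WTF::currentTime())
    {
    }

    ~InstrumentationScope()
    {
        if (m_client)
            m_client->didRun(m_category, WTF::currentTime() - m_startTime);
    }

private:
    InstrumentationClient* m_client;
    const char* m_category;
    double m_startTime;
};

A client like the DumpRenderTree aggregator would then simply sum the reported durations per category to produce output like the listing further down.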


This would not be another generic profiler per se but more targeted at
things useful to a web developer.  As a nice side effect the instrumentation
can have a significantly smaller observer effect than a full-scale profiler.


I have a basic implementation of the framework and plumbing for WebKit/mac
at https://bugs.webkit.org/show_bug.cgi?id=25503 with a simple aggregator
for the DumpRenderTree tool.  The output looks something like this:


$ ./WebKitTools/Scripts/run-webkit-tests fast/dynamic/ --dump


Running tests from /usr/local/home/jamesr/WebKit/LayoutTests

Testing 57 test cases.

fast/dynamic .

1.92s total testing time

html_write time = 38.554668ms count = 230

recalc_style time = 3.255128ms count = 248

paint time = 58.205367ms count = 70

layout time = 117.240668ms count = 240


all 57 test cases succeeded

Any thoughts or feedback?


Thanks,

- James
___
webkit-dev mailing list
webkit-dev@lists.webkit.org
http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev