Re: [webkit-dev] Timing attacks on CSS Shaders (was Re: Security problems with CSS shaders)
On Dec 7, 2011, at 7:23 PM, Vincent Hardy wrote: Hello, @chris So I take back my statement that CSS Shaders are less dangerous than WebGL. They are more!!! It seems to me that the differences are: a. It is easier to do the timing portion of a timing attack in WebGL because it all happens in a script and the timing is precise. With CSS shaders, the timing is pretty coarse. b. The content that a CSS shader has access to may be more sensitive than the content a WebGL shader has access to because currently, WebGL cannot render HTML (but isn't it possible to render an SVG with a foreignObject containing HTML into a 2D canvas, and then use that as a texture? In that case, wouldn't the risk be the same? Or is the canvas tainted in that case and cannot be used as a texture?). Yes, if that were possible (it's not today in WebKit) then WebGL shaders would be even more dangerous because of their more precise timing. - ~Chris cmar...@apple.com ___ webkit-dev mailing list webkit-dev@lists.webkit.org http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev
Re: [webkit-dev] Timing attacks on CSS Shaders (was Re: Security problems with CSS shaders)
On Dec 3, 2011, at 11:57 PM, Adam Barth wrote: On Sat, Dec 3, 2011 at 11:37 PM, Dean Jackson d...@apple.com wrote: On 04/12/2011, at 6:06 PM, Adam Barth wrote: On Mon, Oct 24, 2011 at 9:51 PM, Adam Barth aba...@webkit.org wrote: Personally, I don't believe it's possible to implement this feature securely, at least not using the approach prototyped by Adobe. However, I would love to be proven wrong because this is certainly a powerful primitive with many use cases. I spent some more time looking into timing attacks on CSS Shaders. I haven't created a proof-of-concept exploit, but I believe the current design is vulnerable to timing attacks. I've written up a blog post explaining the issue: http://www.schemehostport.com/2011/12/timing-attacks-on-css-shaders.html Thanks for writing this up. I'm still interested to know what the potential rate of data leakage is. Like I mentioned before, there are plenty of existing techniques that could expose information to a timing attack. For example, SVG Filters can manipulate the color channels of cross-domain images, and using CSS overflow on an iframe could potentially detect rendering slowdowns as particular colors/elements/images come into view. My understanding is that shader languages allow several orders of magnitude greater differences in rendering times than these approaches. However, as I wrote in the post, I don't have a proof-of-concept, so I cannot give you exact figures. CSS shaders increase the rate of leakage because they execute fast and can be tweaked to exaggerate the timing, but one could imagine that the GPU renderers now being used in many of WebKit's ports could be exploited in the same manner (e.g. discover a CSS trick that drops the renderer into software mode). I don't understand how those attacks would work without shaders. Can you explain in more detail? Specifically, how would an attacker extract the user's identity from a Facebook Like button?
In the CSS Shader scenario, I can write a shader that runs 1000x slower on a black pixel than on a white pixel, which means I can extract the text that accompanies the Like button. Once I have the text, I'm sure you'd agree I'd have little trouble identifying the user. To be clear, it's not the difference between white and black pixels, it's the difference between pixels with transparency and those without. And I've never seen a renderer that runs 1000x slower when rendering a pixel with transparency. It may run a few times slower, and maybe even 10x slower. But that's about it. I'm still waiting to see an actual compelling attack. The one you mention here: http://www.contextis.co.uk/resources/blog/webgl/poc/index.html has never seemed very compelling to me. At the default medium quality setting the image still takes over a minute to be generated and it's barely recognizable. You can't read the text in the image or even really tell what the image is unless you had the reference next to it. For something big, like the WebGL logo, you can see the shape. But that's because it's a lot of big solid color. And of course the demo only captures black and white, so none of the colors in an image come through. If you turn it to its highest quality mode you can make out the blocks of text, but that takes well over 15 minutes to generate. And this exploit is using WebGL, where the author has a huge amount of control over the rendering. CSS Shaders (and other types of rendering on the page) give you much less control over when rendering occurs, so it makes it much more difficult to time the operations. I stand behind the statement, "...it seems difficult to mount such an attack with CSS shaders because the means to measure the time taken by a cross-domain shader are limited", which you dismissed as dubious in your missive. With WebGL you can render a single triangle, wait for it to finish, and time it.
Even if you tuned a CSS attack to a given browser whose rendering behavior you understand, it would take many frame times to determine the value of a single pixel and even then I think the accuracy and repeatability would be very low. I'm happy to be proven wrong about this, but I've never seen a convincing demo of any CSS rendering exploit. This all begs the question. What is an exploit? If I can reproduce with 90% accuracy a 100x100 block of RGB pixels in 2 seconds, then I think we'd all agree that we have a pretty severe exploit. But what if I can determine the color of a single pixel on the page with 50% accuracy in 30 seconds. Is that an exploit? Some would say yes, because that can give back information (half the time) about visited links. If that's the case, then our solution is very different than in the first case. I think we need to agree on the problem we're trying to solve and then prove that the problem actually exists before trying to solve it. In fact, I think that's a general rule I live my
Re: [webkit-dev] Timing attacks on CSS Shaders (was Re: Security problems with CSS shaders)
On Dec 5, 2011, at 11:32 AM, Adam Barth wrote: On Mon, Dec 5, 2011 at 10:53 AM, Chris Marrin cmar...@apple.com wrote: To be clear, it's not the difference between white and black pixels, it's the difference between pixels with transparency and those without. Can you explain why the attack is limited to distinguishing between black and transparent pixels? My understanding is that these attacks are capable of distinguishing arbitrary pixel values. This is my misunderstanding. I was referring to the attacks using WebGL, which measure the difference between rendering alpha and non-alpha pixels. But I think there is another, more dangerous attack vector specific to CSS shaders. Shaders have the source image (the image of that part of the page) available. So it is an easy thing to make a certain color pixel take a lot longer to render (your 1000x slower case). So you can easily and quickly detect, for instance, the color of a link. So I take back my statement that CSS Shaders are less dangerous than WebGL. They are more!!! As I've said many times (with many more expletives), I hate the Internet. I think the solution is clear. We should create a whole new internet where we only let in people we trust. :-) - ~Chris cmar...@apple.com
Re: [webkit-dev] Starting implementation on W3C Filter Effects
On Nov 3, 2011, at 7:00 PM, Charles Pritchard wrote: In my experience, implementing filters leads to writing them multiple times for various targets. I suggest starting with the lowest common denominator before targeting platforms like webgl. I understand that Google is working on an in-software webgl implementation (angle is just a conversion lib); at some point LLVM may have sufficient semantics-- it's certainly been attempted (there's a polyhedron article somewhere on the site). You're saying you believe Google is developing a version of WebGL that runs completely on the CPU? I haven't heard of such a thing and I would be surprised if it were true. Running a GLSL shader in software is possible; in fact, OSX has a software renderer that does just that. And while it can get a few fps with a simple shader, it's not practical for serious realtime 3D graphics. The initial WebKit implementation of CSS filters will use the filter code already in the SVG implementation. This does use vector optimizations on some platforms for some shaders. So it will be fully CPU based. From there several options exist for hardware acceleration, some platform specific and others more generic, based on WebGL or some other GPU based acceleration. In https://bugs.webkit.org/show_bug.cgi?id=68479 I plan on adding some filter infrastructure at the GraphicsLayer level to make it simpler to implement layer-based hardware accelerated filters. - ~Chris cmar...@apple.com
Re: [webkit-dev] Enable REQUEST_ANIMATION_FRAME on all ports? (was Re: ENABLE flag cleanup strawman proposal)
On Sep 27, 2011, at 8:57 PM, James Robinson wrote: ...With that said, I agree with you that there will still be a visual glitch in the current implementation. But what's actually happening is that the timestamp we're sending to rAF is wrong. We're sending current time. Depending on when rAF fires relative to the display refresh, the timestamp might be as much as 16ms behind the time the frame is actually seen. If you're basing motion on this timestamp, there will be an occasion when one frame will have a timestamp that is very close to the display time and the next will have a timestamp that is 15ms or so behind. That's why the glitch is happening. I'm assuming in this example that the script is changing the position of the bird to match the timestamp parameter passed in. You are correct in saying that changing the timestamp parameter to reflect the next display time would get rid of the visual glitch in this example. In that case the behavior between frames 8 and 10 would be:

time (millis): action
120: rAF fired with timestamp 133 1/3
133 1/3: frame 8 produced
135: rAF fired with timestamp 150
150: rAF fired with timestamp 150
150: frame 9 produced
165: rAF fired with timestamp 166 2/3
166 2/3: frame 10 produced

The problem here is that in the real world, frames aren't infinitely cheap to produce, and so attempting to run the rAF callback twice between frames 8 and 9 is just as likely to produce a rendering glitch as the problem in the original example - even though the timestamp is correct. In order to keep the animation running smoothly here it's necessary to keep the timestamp and the scheduling in sync with the actual display rate. Right. That's why I'm doing an implementation which uses CVDisplayLink, which is synchronized with the display. ... I think the issue of supplying rAF with accurate timestamps is independent of whatever feedback mechanism an implementation uses to do the throttling. I'm sure those heuristics will improve over time.
But the first step is to supply rAF with an accurate timestamp. I've opened https://bugs.webkit.org/show_bug.cgi?id=68911 for this. My intention is to create a call, similar to scheduleAnimation() but which simply asks platform specific code for a time estimate of when the next frame will be visible. That can not only be used as the timestamp sent to rAF, but as the basis for when the next call to rAF is made. That should avoid any excessive calls to rAF. That sounds like a good start, but I don't really think it will be sufficient. How will the WebKit layer know when the next frame will be visible? There are many considerations in frame scheduling in addition to the screen's display rate. I think you'll need to end up duplicating all of the WebKit-specific frame scheduling logic into the WebCore implementation, or just be wrong most of the time. I'm not sure what logic you're talking about. But CVDisplayLink is synchronized with the display, so it should (hopefully) be sufficient. For Mac, I plan to look into adding a displayLink thread which will maintain a timestamp value tied to refresh. I didn't try using a displayLink at first because I initially thought I'd use it to actually drive the firing of the callback, which would have been complicated and require a lot of communication between the threads. Just having the displayLink maintain a timestamp means I just need to provide thread safe access to that value. Hopefully that will keep overhead low but will achieve the synchronization goal. Getting the refresh interval is only the first step. In the contention case (which is the really interesting one) just knowing the refresh time of the display does not give you enough insight into when to pump the animation in order to make the next frame. There is no way to know (no matter what technique you use) if you'll make it to the display on time. 
Even if you start your rendering a full 16ms before the next frame is to be displayed AND you're supplied with the timestamp of when that frame will be displayed AND your rendering code takes much less than 16ms to complete, you still might not make it. Javascript garbage collection, layout, an image getting loaded or any one of a number of other things inside the app could take enough time that you miss the window. And other apps could get a time slice and prevent you from making it as well. Desktop OSes are not real-time systems, so the best you can do is to give yourself the best chance of success. That means getting as much time as possible (~16ms) to render before the frame appears and finishing your rendering as quickly as possible (probably at least a couple of ms before the frame is to appear). Regardless of your concerns, today's completely unsynchronized Timer based rAF implementation looks pretty good. It mostly achieves the above criteria. It just has a slightly
Re: [webkit-dev] Enable REQUEST_ANIMATION_FRAME on all ports? (was Re: ENABLE flag cleanup strawman proposal)
On Sep 26, 2011, at 9:48 PM, James Robinson wrote: On Sun, Sep 25, 2011 at 6:52 PM, Darin Adler da...@apple.com wrote: On Sep 25, 2011, at 12:20 AM, James Robinson wrote: The TIMER based support for RAF is very new (only a few weeks old) and still has several major bugs. I'd suggest letting it bake for a bit before considering turning it on for all ports. Got it. Fundamentally I don't think this feature can be implemented reasonably well with just timers, so port maintainers should take a really careful look at the level of support they want to have for this feature when deciding if they want to support it. This may contradict the recommendation above. If the timer-based version is too low quality then maybe we shouldn’t put ports in the position of shipping with a substandard implementation rather than simply having the feature omitted. Perhaps if I expand on my concerns a bit it'll be clearer what the right option is. The goal of requestAnimationFrame is to allow web authors to have high-quality script-driven animations. To use a concrete example, when playing angry birds (http://chrome.angrybirds.com/) and flinging a bird across the terrain, the RAF-based animation should move the bird at a uniform rate across the screen at the same framerate as the physical display without hitches or interruptions. An additional goal is that we shouldn't do any unnecessary work for frames that do not show up on screen, although it's generally necessary to do this in order to satisfy the first goal as I'll show below. There are two main things that you need in order to achieve this that are difficult or impossible to do with a WebCore Timer: a reliable display-rate aligned time source, and a source of feedback from the underlying display mechanism. The first is easiest to think about with an example. When the angry bird mentioned above is flying across the screen, the user should experience the bird advancing by the same amount every time their display refreshes.
Let's assume a 60Hz display and a 15ms timer (as the current REQUEST_ANIMATION_FRAME_TIMER code uses), and furthermore assume (somewhat optimistically) that every frame takes 0ms to process in javascript and 0ms to display. The screen will update at the following times (in milliseconds): 0, 16 2/3, 33 1/3, 50, 66 2/3, 83 1/3, 100, etc. The visual X position of the bird on the display is directly proportional to the time elapsed when the rAF handler runs, since it's interpolating the bird's position, and the rAF handler will run at times 0, 15, 30, 45, 60, etc. We can thus determine the visual X position of the bird for each frame:

Frame 0, time 0 ms, position: 0, delta from last frame: -
Frame 1, time 16 2/3 ms, position: 15, delta from last frame: 15
Frame 2, time 33 1/3 ms, position: 30, delta from last frame: 15
Frame 3, time 50 ms, position: 45, delta from last frame: 15
Frame 4, time 66 2/3 ms, position: 60, delta from last frame: 15
Frame 5, time 83 1/3 ms, position: 75, delta from last frame: 15
Frame 6, time 100 ms, position: 90, delta from last frame: 15
Frame 7, time 116 2/3 ms, position: 105, delta from last frame: 15
Frame 8, time 133 1/3 ms, position: 120, delta from last frame: 15
Frame 9, time 150 ms, position: 150, delta from last frame: 30 (!)
Frame 10, time 166 2/3 ms, position: 165, delta from last frame: 15
Frame 11, time 183 1/3 ms, position: 180, delta from last frame: 15
Frame 12, time 200 ms, position: 195, delta from last frame: 15

What happened at frame 9? Instead of advancing by 15 milliseconds worth, the bird jumped forward by twice the normal amount. Why? We ran the rAF callback twice between frames 8 and 9 - once at 135ms and once at 150ms. What's actually going on here is we're accumulating a small amount of drift on every frame (1 2/3 milliseconds, to be precise) between when the display is refreshing and when the callbacks are being invoked.
This has to catch up sometime so we end up with a beat pattern every (16 2/3) / abs(16 2/3 - 15) = 10 frames. The same thing happens with a perfect 16ms timer every 25 frames, or with a perfect 17ms timer every 50 frames. Even a very close timer will produce these regular beat patterns and as it turns out the human eye is incredibly good at picking out and getting annoyed by these effects in an otherwise smooth animation. I generally agree with your analysis, but I believe your example is misleading. Skipping a frame would only cause the bird to jump by 30 units rather than 15 if you were simply adding 15 units to its position on every call to rAF. But that would make the rate of movement of the bird change based on the rate at which rAF is called, and that would be poor design. If an implementation decided to call rAF at 30ms intervals (due to system load, for instance) then the bird would appear to move
Re: [webkit-dev] Enable ArrayBuffer by default?
On Feb 11, 2011, at 9:09 AM, Alex Milowski wrote: On Fri, Feb 11, 2011 at 1:41 AM, Adam Barth aba...@webkit.org wrote: How would folks feel about enabling ArrayBuffer by default? It seems to be a basic data type that's used by a bunch of stuff today and likely to be used by more stuff in the future. The API seems pretty stable and it's implemented by Firefox as well. +1 I'd like to have it enabled by default so we can get more feedback on its use. It's been discussed and I'm all for it. When you say ArrayBuffer, you mean all the Typed Array classes, right? - ~Chris cmar...@apple.com
[webkit-dev] The future of TransformationMatrix
The current reworking of the TransformationMatrix class and friends (https://bugs.webkit.org/show_bug.cgi?id=48031) got me thinking about the future of this class. I've chit-chatted about this with various people, but nothing serious has been done yet. As WebKit and HTML5 get more 3D functionality (CSS Transforms now, WebGL later) a 4x4 matrix class becomes more important. There has even been talk of adding 3D transforms to SVG. Today, the workhorse 3D matrix class is TransformationMatrix. This is used by SVGMatrix and CSSMatrix and internally in several places. The above bug is making the functions in TransformationMatrix more rational and that is a good first step. But we need to architect as efficient a class hierarchy in this area as possible. For instance, today you can use CSSMatrix in WebGL. But it has an inordinate amount of overhead because the class is immutable. So every call that is made must construct a new CSSMatrix. This not only adds call overhead, but adds to the GC workload as well. I feel that making a mutable 4x4 matrix available is very important. This could be done with a base Matrix class, available to JavaScript, which would have mutable calls. Then SVGMatrix and CSSMatrix could derive from this with their own APIs. Internally, I think we should eventually restructure TransformationMatrix to have mutable fast-path operations. Today, for instance, operator*= calls into operator*, so there's always an extra matrix copy. That should be changed and we should similarly have mutable versions of all the calls (translate, rotate, etc.) on the fast path. We should also restructure TransformationMatrix to allow for platform specific versions of the calls. For instance, PLATFORM(CA) has access to CATransform3D, which has accelerated versions of its API where possible. I'm sure QMatrix, SkMatrix and others might also be able to be used for similar performance gains.
I've opened a couple of bugs for this: https://bugs.webkit.org/show_bug.cgi?id=52488 https://bugs.webkit.org/show_bug.cgi?id=52490 Please feel free to comment here or there… - ~Chris cmar...@apple.com
Re: [webkit-dev] The future of TransformationMatrix
On Jan 14, 2011, at 4:12 PM, Dirk Schulze wrote: First, SVGMatrix and the SVG code itself are not using TransformationMatrix. We had bigger performance problems and memory use rose by 6-10%. That's why we decided to turn back to AffineTransform, because of the platform dependencies of TransformationMatrix. I noted that replacing AffineTransform, which was platform dependent at that time, with TransformationMatrix, which wasn't, may cause performance losses. At the time it was not clear that we would use 3D all over the place. I was told that the great benefit of an independent implementation is that we get the same results across platforms (a bigger problem that we saw on DRT and which caused a lot of platform dependent results). Is this argument not valid or important anymore? Sorry, you're right. AffineTransform and TransformationMatrix interoperate because of the casting operators. I've had a desire for a while to supplant AffineTransform and make TransformationMatrix switch its internal storage when a 2D affine transform is stored. Maybe that's another bug! - ~Chris cmar...@apple.com
Re: [webkit-dev] Plan to move TypedArray out of WebGL feature guard
On Dec 22, 2010, at 5:34 PM, Jian Li wrote: Hi, TypedArray has been used in some non-WebGL areas, like File API and XHR. It would be nice if we moved it out of the WebGL feature guard. Any objection? It would probably be best if it had its own guard. Then its various users could turn it on in config.h. - ~Chris cmar...@apple.com
Re: [webkit-dev] Plan to move TypedArray out of WebGL feature guard
On Dec 23, 2010, at 11:01 AM, Darin Adler wrote: Yes, we want correct conditionals, and TypedArray should not be in the WebGL feature guard if it’s used in other features. Adding a new feature guard would not be good if it has to be set explicitly. It would be much better if the build just decided correctly based on the other feature guards. If TypedArray is used in multiple conditional features, then we have to use the correct expression to check the feature guards. If it’s used in any unconditional feature, then it should not have a feature guard at all. So then that brings up another question. Is there any reason for it NOT to be unguarded? It adds some prototypes to JS and takes up a bit of space. But it's not dependent on any platform or hardware that I'm aware of. - ~Chris cmar...@apple.com
Re: [webkit-dev] Plan to move TypedArray out of WebGL feature guard
On Dec 23, 2010, at 11:23 AM, Darin Adler wrote: On Dec 23, 2010, at 11:21 AM, Jian Li wrote: We do not add an additional check expression when TypedArray is added to XHR. Is the TypedArray support in XHR a feature in its own right? Should it be off by default or is it ready to be on for all versions of WebKit? There's a discussion right now about adding ArrayBuffer (the foundation of the Typed Arrays) support to the XHR Level 2 spec. If this were to happen then it would seem that Typed Array should be unguarded. Right now ArrayBuffer support for XHR is in WebKit TOT, guarded by: #if ENABLE(3D_CANVAS) || ENABLE(BLOB) It seems odd that there would be any dependency between WebGL and XHR function calls. As for Typed Arrays being ready for prime-time, I'm not sure. You're right about the fact that increasing the security risk would be bad. There are many layout tests for Typed Arrays now at 'LayoutTests/fast/canvas/webgl/'. We should really move them elsewhere. Right now they will not get run at all if WebGL is not turned on. How would we go about feeling comfortable about the security risk? - ~Chris cmar...@apple.com
Re: [webkit-dev] Plan to move TypedArray out of WebGL feature guard
On Dec 23, 2010, at 1:47 PM, Charles Pritchard wrote: You need to have Blobs for ArrayBuffer to be of much use for XHR, because you need to be able to set the Content-Type, and browsers may/will fiddle with the content-type header you set, if you have not passed a Blob. Blobs are defined under the File API (I believe). Blob is already under the XHR spec, though I haven't seen it used in WebKit distros (responseBlob). We've discussed a different approach to responses in XHR Level 2. The idea is to have a 'responseType' property which is set to the data type you're interested in (text, xml, arrayBuffer, etc.) and then a 'response' property that would be of the appropriate type. One question I have is whether Blobs are necessary if we have ArrayBuffers. - ~Chris cmar...@apple.com
Re: [webkit-dev] Bools are strictly worse than enums
On Dec 6, 2010, at 10:15 AM, Darin Adler wrote: On Dec 4, 2010, at 3:01 PM, Maciej Stachowiak wrote: Passing a true or false literal (at least in cases where it's not the sole argument) is a likely indicator of unclear style, as opposed to taking a boolean argument. Agreed. In fact, even putting a boolean literal in a named variable and then passing that is likely to be fairly clear. Use of a boolean literal can easily hide the fact that a call site is passing a boolean to the wrong argument or even has the sense of the boolean backward. The enum technique is considerably more powerful in the way that it ties the call site to the called function, providing a benefit that goes beyond readability. That having been said, the enum has at least these costs:

- The enumeration definition has to be in a file included anywhere it’s used.
- Coming up with good names for the enumeration and its values can be difficult.
- At call sites that need to compute a value to pass in rather than passing a constant, the enum can obscure the code’s meaning rather than clarifying it.
- Mangled names of functions get longer.

I find one other cost when converting to enums. Since C++ doesn't do typing of enums, you have to worry about name clashes. This leads to enum names decorated with the enum type. Putting the enum in the associated class helps. But as the use of enums goes up you even have to worry about clashes within the class. Plus, putting enums in a class requires qualifying each enum with the class name. All these things lead to reduced readability. I haven't found any guidance in the style guidelines about naming and qualifying enums. Right now you see every manner of enum naming imaginable. Adding something to the style guidelines would be very helpful. I agree that using enums is better. But I'd like to get some guidance on how to do it right. - ~Chris cmar...@apple.com
Re: [webkit-dev] XHR responseArrayBuffer attribute: possible implementation
On Oct 25, 2010, at 12:22 PM, Darin Fisher wrote: The solution for .responseBlob was to add an .asBlob attribute that would need to be set to true before calling .send(). We could do the same for .responseArrayBuffer. -Darin On Mon, Oct 25, 2010 at 12:17 PM, Geoffrey Garen gga...@apple.com wrote: Hi Chris. I like the efficiency of this approach. And I agree with your premise that a developer will probably only want one type of data (raw, text, or XML) per request, and not more than one. My biggest concern with this idea is that there's nothing obvious about the API pattern of three properties -- .responseText, .responseXML, and .responseArrayBuffer -- that makes clear that accessing one should prohibit access to the others. I wonder if there's a good way to make this clearer. Maybe the API should require the programmer to specify in advance what type of data he/she will ask for. For example, an extra responseType parameter passed to open. The default behavior would be the values currently supported, but you could opt into something specific for extra safety/performance, and new types of data:

request.open("GET", "data.xml", true, "Text");
request.open("GET", "data.xml", true, "XML");
request.open("GET", "data.xml", true, "Bytes");

I'd sure like to try to avoid an explosion in the API. I like Geoff's suggestion of specifying the type of request in open(). Seems like the best API would be to have Geoff's API and then:

any responseObject();
DOMString responseType();

That would allow us to expand the types supported without any additional API. We'd need to support the current API calls for backward compatibility. But now seems like a good time to plan for the future. - ~Chris cmar...@apple.com
Re: [webkit-dev] Protecting against stale pointers in DrawingBuffer and GraphicsContext3D
On Oct 11, 2010, at 5:15 PM, Maciej Stachowiak wrote: On Oct 11, 2010, at 4:03 PM, Chris Marrin wrote: On Oct 11, 2010, at 3:35 PM, James Robinson wrote: On Mon, Oct 11, 2010 at 3:15 PM, Chris Marrin cmar...@apple.com wrote: For accelerated 2D rendering we created a class called DrawingBuffer. This encapsulates the accelerated drawing surface (usually in the GPU) and the compositing layer used to display that surface on the page. The drawing surface (which is called a Framebuffer Object or FBO) is allocated by GraphicsContext3D, so DrawingBuffer needs a reference to that. Currently this is a weak reference. DrawingBuffer has a ::create() method and you pass the GraphicsContext3D to that method. If you destroy the GraphicsContext3D, DrawingBuffer has a stale pointer. If you were to try to destroy the DrawingBuffer it would attempt to use that pointer (to destroy its FBO) and crash or worse. Currently we have an implicit policy that you should never destroy a GraphicsContext3D before its DrawingBuffers are all destroyed. That works fine in the current use case, CanvasRenderingContext2D. And we can follow that same policy when in the future when we use DrawingBuffer in WebGLRenderingContext. My concern is that this sort of implicit policy can lead to errors in the future when other potential clients of these classes use them. So I posted https://bugs.webkit.org/show_bug.cgi?id=47501. In that patch I move the creation of DrawingBuffer to the GraphicsContext3D and keep back pointers to all the DrawingBuffers allocated so they can be cleaned up when GraphicsContext3D is destroyed. In talking to James R. he's concerned this adds unnecessary complexity and would rather stick with the implicit policy. So is this something I should be concerned about, or is an implicit policy sufficient in this case? Before somebody suggests it, I think Chris and I are in agreement that neither GraphicsContext3D nor DrawingBuffer should be RefCounted. 
They both have clear single-ownership semantics. True, although Simon and I just chatted and he pointed out that Refcounting both classes would solve this problem. The fact that GraphicsContext3D wouldn't need a back pointer to DrawingBuffer means we wouldn't have any circular references. I don't think the overhead of refcounting is an issue here, so maybe that would be a simpler solution? I think having two independent objects that must be deleted in a specific order, or else you crash, is a hazardous design. APIs (even internal APIs) are better when they do not have a way to be used wrong. So, I think this should be changed one way or the other. I have to say that to my taste, refcounting seems like a cleaner solution than ad-hoc weak pointers. I'm skeptical of the claim that refcounting is not good for a heavyweight object. If there's really a time when special resources (VRAM, etc) need to be freed at a known point in program code, then it may be better to have an explicit close type function instead of counting on the destructor. On the other hand, this might end up being roughly equivalent to the clear backpointers solution, but moves the complexity of being in a possibly-invalid state from DrawingBuffer to GraphicsContext3D. Either way, I am pretty confident that a design where objects must be destroyed in a specific order is not the best choice. So it seems like we have two choices: 1) my current patch, which uses backpointers to manage the lifetime of the weak pointers, or 2) refcounting. My current approach has the advantage that the resources are cleared as soon as the DrawingBuffer is destroyed. But it is more complex and therefore more error prone. I think that complexity is manageable so that would be my preferred implementation. But refcounting is simpler and my current patch has a clear() method on DrawingBuffer which gets rid of all the resources. 
I could leave that method and change to a refcounted model, so the author can control when the resources are destroyed. What do you think, James? - ~Chris cmar...@apple.com ___ webkit-dev mailing list webkit-dev@lists.webkit.org http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev
Re: [webkit-dev] Protecting against stale pointers in DrawingBuffer and GraphicsContext3D
On Oct 12, 2010, at 10:44 AM, Darin Adler wrote: On Oct 12, 2010, at 9:47 AM, Chris Marrin wrote: But refcounting is simpler and my current patch has a clear() method on DrawingBuffer which gets rid of all the resources. I could leave that method and change to a refcounted model, so the author can control when the resources are destroyed. What do you think, James? I think that the combination of reference counting and explicit cleanup is a good one, relatively easy to program correctly with, and I would lean toward that combination. The main cost of successfully implementing that design is making sure that each function on DrawingBuffer is either harmless to call after clear() or contains an assertion that it is not called after clear() with some solid reason to know it won’t be called. Ok, I'll redo the patch using that technique. And yes, clear() sets the GraphicsContext3D pointer to 0 and I check that pointer for null in each call. - ~Chris cmar...@apple.com
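A minimal sketch of the refcounting-plus-explicit-cleanup pattern discussed above, as I read it: DrawingBuffer holds a strong reference to GraphicsContext3D, clear() frees the GPU resources and drops the reference at a known point, and every later call checks for null. Names are simplified and std::shared_ptr stands in for WebKit's RefPtr; this is illustrative, not the actual WebCore code.

```cpp
#include <cassert>
#include <memory>

// Stand-in for the real context; deleteFramebuffer would release the FBO in VRAM.
struct GraphicsContext3D {
    void deleteFramebuffer(unsigned /*fbo*/) { /* release the FBO on the GPU */ }
};

class DrawingBuffer {
public:
    explicit DrawingBuffer(std::shared_ptr<GraphicsContext3D> context)
        : m_context(std::move(context)), m_fbo(1) {}

    ~DrawingBuffer() { clear(); }

    // Explicit cleanup: free the heavyweight resources deterministically.
    void clear() {
        if (!m_context)
            return; // already cleared; calling again is harmless
        m_context->deleteFramebuffer(m_fbo);
        m_context = nullptr; // later calls see a null context
    }

    // Each method is either harmless after clear() or bails out on null.
    bool bind() {
        if (!m_context)
            return false;
        // ... bind m_fbo on m_context ...
        return true;
    }

private:
    std::shared_ptr<GraphicsContext3D> m_context;
    unsigned m_fbo;
};
```

The key property is that destruction order no longer matters: the context outlives every buffer that references it, while clear() still gives callers a known point to release VRAM.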
Re: [webkit-dev] Protecting against stale pointers in DrawingBuffer and GraphicsContext3D
On Oct 12, 2010, at 2:37 PM, Darin Fisher wrote: ...So it seems like we have two choices: 1) my current patch, which uses backpointers to manage the lifetime of the weak pointers, or 2) refcounting. My current approach has the advantage that the resources are cleared as soon as the DrawingBuffer is destroyed. But it is more complex and therefore more error prone. I think that complexity is manageable so that would be my preferred implementation. But refcounting is simpler and my current patch has a clear() method on DrawingBuffer which gets rid of all the resources. I could leave that method and change to a refcounted model, so the author can control when the resources are destroyed. Another option would be to generalize the WeakPtr<T> implementation from that patch into a generic class and use that. Then that logic could be implemented, reviewed and tested independently from the graphics code. I know that Maciej has expressed concern about this pattern in the past due to the runtime cost it imposes. Ref counting is a fairly blunt instrument but it would fit in best with the rest of the codepath. Weak pointers are both more complicated than refcounting and introduce a comparable or possibly even greater level of runtime cost. So if there's ever a problem that can be solved either way, I would tend to prefer refcounting. Regards, Maciej Hmm, I've found weak pointer abstractions to be very useful. The issue with reference counting is that it is easy to introduce memory leaks, and, as has been mentioned, it is sometimes nice to have deterministic object destruction. It is also nice to avoid having to have explicit clear() functions and then add checks to every other method to assert that they are not called after clear(). 
In the Chromium code base, we have a helper for weak pointers: http://src.chromium.org/viewvc/chrome/trunk/src/base/weak_ptr.h?view=markup We tend to use this in cases in which: 1) there are many consumers interested in holding a back pointer to some shared resource, and 2) we'd like the shared resource to die at some predictable time. Without a utility like this, the alternative is to make the shared resource notify each of the consumers about the impending destruction of the shared resource. It is true that WeakPtr<T> adds a null check in front of each method call made by the consumers, but that runtime cost is often justified in exchange for reduced code complexity (i.e., eliminating the need to notify consumers when the shared resource dies). In this case I agree with Maciej that the simplest solution is to just use a RefPtr. This is a simple case where a class (DrawingBuffer) must not outlive the GraphicsContext3D used to create it. I have a patch which uses RefPtrs and it simplifies things quite a bit. I'm not too concerned about resource management. I think the typical case will be either that DrawingBuffer and GraphicsContext3D are destroyed at around the same time, or that GraphicsContext3D is persistent and DrawingBuffers come and go. The RefPtr just makes sure that the GraphicsContext3D is never destroyed too early. With that said, there are some places in this area of the code that would benefit from a general WeakPtr pattern. For instance, the WebGLObject set of classes use an ad hoc weak pointer mechanism which would be more readable and reliable with a WeakPtr implementation. I think the accelerated 2D Canvas logic may have some unprotected weak pointers as well (for instance the Shader objects). - ~Chris cmar...@apple.com
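The weak-pointer idea described above can be sketched in a few lines: the owner hands out handles that read as null once the owner dies, so no destruction notifications are needed. This is a simplified, single-threaded toy in the spirit of Chromium's base::WeakPtr (backed here by a shared validity flag), not the real implementation.

```cpp
#include <cassert>
#include <memory>

template <typename T> class WeakPtrFactory;

// A handle that consumers hold; get() returns null after the owner is gone.
template <typename T>
class WeakPtr {
public:
    WeakPtr() = default;
    T* get() const {
        auto flag = m_flag.lock();
        return (flag && *flag) ? m_ptr : nullptr;
    }
private:
    friend class WeakPtrFactory<T>;
    WeakPtr(std::shared_ptr<bool> flag, T* ptr) : m_flag(flag), m_ptr(ptr) {}
    std::weak_ptr<bool> m_flag;
    T* m_ptr = nullptr;
};

// Owned by the shared resource; its destruction invalidates every handle.
template <typename T>
class WeakPtrFactory {
public:
    explicit WeakPtrFactory(T* owner)
        : m_flag(std::make_shared<bool>(true)), m_owner(owner) {}
    ~WeakPtrFactory() { *m_flag = false; } // all outstanding handles go null
    WeakPtr<T> createWeakPtr() { return WeakPtr<T>(m_flag, m_owner); }
private:
    std::shared_ptr<bool> m_flag;
    T* m_owner;
};
```

The per-call cost is exactly the null check mentioned above, traded for not having to track and notify every consumer when the resource dies.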
[webkit-dev] Protecting against stale pointers in DrawingBuffer and GraphicsContext3D
For accelerated 2D rendering we created a class called DrawingBuffer. This encapsulates the accelerated drawing surface (usually in the GPU) and the compositing layer used to display that surface on the page. The drawing surface (which is called a Framebuffer Object or FBO) is allocated by GraphicsContext3D, so DrawingBuffer needs a reference to that. Currently this is a weak reference. DrawingBuffer has a ::create() method and you pass the GraphicsContext3D to that method. If you destroy the GraphicsContext3D, DrawingBuffer has a stale pointer. If you were to try to destroy the DrawingBuffer it would attempt to use that pointer (to destroy its FBO) and crash or worse. Currently we have an implicit policy that you should never destroy a GraphicsContext3D before its DrawingBuffers are all destroyed. That works fine in the current use case, CanvasRenderingContext2D. And we can follow that same policy in the future when we use DrawingBuffer in WebGLRenderingContext. My concern is that this sort of implicit policy can lead to errors in the future when other potential clients of these classes use them. So I posted https://bugs.webkit.org/show_bug.cgi?id=47501. In that patch I move the creation of DrawingBuffer to the GraphicsContext3D and keep back pointers to all the DrawingBuffers allocated so they can be cleaned up when GraphicsContext3D is destroyed. In talking to James R. he's concerned this adds unnecessary complexity and would rather stick with the implicit policy. So is this something I should be concerned about, or is an implicit policy sufficient in this case? - ~Chris cmar...@apple.com
Re: [webkit-dev] Protecting against stale pointers in DrawingBuffer and GraphicsContext3D
On Oct 11, 2010, at 3:35 PM, James Robinson wrote: On Mon, Oct 11, 2010 at 3:15 PM, Chris Marrin cmar...@apple.com wrote: For accelerated 2D rendering we created a class called DrawingBuffer. This encapsulates the accelerated drawing surface (usually in the GPU) and the compositing layer used to display that surface on the page. The drawing surface (which is called a Framebuffer Object or FBO) is allocated by GraphicsContext3D, so DrawingBuffer needs a reference to that. Currently this is a weak reference. DrawingBuffer has a ::create() method and you pass the GraphicsContext3D to that method. If you destroy the GraphicsContext3D, DrawingBuffer has a stale pointer. If you were to try to destroy the DrawingBuffer it would attempt to use that pointer (to destroy its FBO) and crash or worse. Currently we have an implicit policy that you should never destroy a GraphicsContext3D before its DrawingBuffers are all destroyed. That works fine in the current use case, CanvasRenderingContext2D. And we can follow that same policy when in the future when we use DrawingBuffer in WebGLRenderingContext. My concern is that this sort of implicit policy can lead to errors in the future when other potential clients of these classes use them. So I posted https://bugs.webkit.org/show_bug.cgi?id=47501. In that patch I move the creation of DrawingBuffer to the GraphicsContext3D and keep back pointers to all the DrawingBuffers allocated so they can be cleaned up when GraphicsContext3D is destroyed. In talking to James R. he's concerned this adds unnecessary complexity and would rather stick with the implicit policy. So is this something I should be concerned about, or is an implicit policy sufficient in this case? Before somebody suggests it, I think Chris and I are in agreement that neither GraphicsContext3D nor DrawingBuffer should be RefCounted. They both have clear single-ownership semantics. 
True, although Simon and I just chatted and he pointed out that refcounting both classes would solve this problem. The fact that GraphicsContext3D wouldn't need a back pointer to DrawingBuffer means we wouldn't have any circular references. I don't think the overhead of refcounting is an issue here, so maybe that would be a simpler solution? - ~Chris cmar...@apple.com
Re: [webkit-dev] Protecting against stale pointers in DrawingBuffer and GraphicsContext3D
On Oct 11, 2010, at 4:34 PM, James Robinson wrote: On Mon, Oct 11, 2010 at 4:03 PM, Chris Marrin cmar...@apple.com wrote: On Oct 11, 2010, at 3:35 PM, James Robinson wrote: On Mon, Oct 11, 2010 at 3:15 PM, Chris Marrin cmar...@apple.com wrote: ... So is this something I should be concerned about, or is an implicit policy sufficient in this case? Before somebody suggests it, I think Chris and I are in agreement that neither GraphicsContext3D nor DrawingBuffer should be RefCounted. They both have clear single-ownership semantics. True, although Simon and I just chatted and he pointed out that Refcounting both classes would solve this problem. The fact that GraphicsContext3D wouldn't need a back pointer to DrawingBuffer means we wouldn't have any circular references. I don't think the overhead of refcounting is an issue here, so maybe that would be a simpler solution? I'd really prefer not to make them RefCounted. The problem is that GraphicsContext3D and DrawingBuffer are very heavyweight objects, which means we need to manage their lifetimes tightly and avoid leaving them lying around longer than necessary. The exact resource use of these objects depends on the system but a GraphicsContext3D will typically be backed by an OpenGL context, which implies some set of driver resources, and a DrawingBuffer is normally backed by a few megabytes of VRAM. In the current code, there is always a single object responsible for managing the lifetime of a given GraphicsContext3D or a DrawingBuffer which makes it very easy to ensure that they live for as long as necessary but no longer. With a RefCounted object it can be difficult to ensure that all references to a given object go away when necessary. In this particular case DrawingBuffer exists at a slightly higher abstraction layer than GraphicsContext3D. DrawingBuffer depends on GC3D, but there is no dependency the other way (and IMHO there should not be). 
This means that the lifetime of a DrawingBuffer depends on the underlying GraphicsContext3D, but not the other way 'round. All callers that use or will want to use a DrawingBuffer already have to be aware of the lifetime of the GraphicsContext3D associated with it since the only way to use a DrawingBuffer is by using the GraphicsContext3D API. For those following along at home, the current user of DrawingBuffer is CanvasRenderingContext2D which has an OwnPtr<DrawingBuffer> and uses a GraphicsContext3D that is guaranteed to outlive the CanvasRenderingContext2D. The next proposed user of DrawingBuffer is WebGLRenderingContext which manages its own GraphicsContext3D via a member OwnPtr. How can you make that guarantee? The GraphicsContext3D is owned by SharedGraphicsContext3D, which is owned by Page. When Page is destroyed, it will destroy the SharedGraphicsContext3D which will destroy the GraphicsContext3D. The associated DrawingBuffers are owned by CanvasRenderingContext2D which are owned by HTMLCanvasElement. These live in the DOM Tree, which should be destroyed when the Page is destroyed. But Elements are refcounted, so what if there is a JavaScript reference to the HTMLCanvasElement? Or what if it's referenced by some other mechanism which does deferred destruction (like a run loop observer or something)? Do we guarantee that all these references will be removed before the Page is destroyed? If this is ever not the case and the DrawingBuffer ever happens to get destroyed after the GraphicsContext3D, you will get a crash, or worse. When I say worse, I'm referring to the fact that some of the uses of these graphics resources rely on the graphics context being previously bound. So it's possible in these cases that the wrong context is bound. It's not inconceivable that an exploit can be found that tries to access a Canvas while destroying a Page. Time it just right and maybe you can get access to some pixels from another window, which is a pretty bad security hole. 
It just feels safer to me to manage the lifetime of these objects explicitly rather than relying on a complex sequence of events. - ~Chris cmar...@apple.com
Re: [webkit-dev] ArrayBuffer support
On Oct 8, 2010, at 3:51 PM, Jian Li wrote: On Fri, Oct 8, 2010 at 3:29 PM, Maciej Stachowiak m...@apple.com wrote: On Oct 8, 2010, at 3:05 PM, Jian Li wrote: Sounds good. I will add the File API feature guard to it and still keep those files under html/canvas. Another possibility is to have an ArrayBuffer feature guard and ensure that it is on if at least one of the features depending on it is on. This also sounds good. Personally I prefer appending File API feature guard since it is simpler. When array buffer is used in XHR, we can then simply remove all the feature guards. Otherwise, we will have to update all the config files to add a new feature guard. I agree, especially since we will hopefully be able to get rid of the guards completely in the not too distant future. And please feel free to finish out the ArrayBuffer implementation :-) - ~Chris cmar...@apple.com
Re: [webkit-dev] Any objections to switching to Xcode 3.2.4 or newer?
+1 +1 +1 ! On Oct 6, 2010, at 5:00 PM, Darin Adler wrote: Hi folks. For those working on Mac OS X: Any objection to upgrading to Xcode 3.2.4? It’s now showing up in Apple’s Software Update for all Xcode users, I believe. I ask because this adds a developmentRegion = English string to the project file but the older versions of Xcode remove that string. If we all agree to use the newer version, then we can let that string get checked in. If some of us are using the older version it will be frustrating because the string will be removed each time someone edits the project file with an older version and checks it in. If no one objects, we’ll start checking in the project files with developmentRegion in there. -- Darin - ~Chris cmar...@apple.com
Re: [webkit-dev] XHR responseArrayBuffer attribute
On Sep 29, 2010, at 6:34 PM, Maciej Stachowiak wrote: ...The idea is that when an ArrayBuffer is sent via postMessage, it is atomically closed on this side; its publicly visible length goes to 0, as do the lengths of any views referring to it. On the other side, a new ArrayBuffer wrapper object is synthesized, pointing to the same storage and with the original length. To be able to reuse the same memory region over and over again, the other side would simply send the ArrayBuffer back for re-filling via postMessage. Ping-ponging ArrayBuffers back and forth achieves zero-copy transfer of large amounts of data while still maintaining the shared-nothing semantic. The only allocations are for the (tiny) ArrayBuffer wrapper objects, but the underlying storage is stable. Implementing this idea will require a couple of minor additions to the TypedArray specification (in particular, the addition of a close method on ArrayBuffer) as well as defining the semantics of sending an ArrayBuffer via postMessage. I hope to prototype it soon. Regarding your scenario, I would simply post the ArrayBuffer from the XHR object to the worker with the above semantics. The main thread would then not be able to access the data in the ArrayBuffer, but sending it to the worker for processing would not involve any data copies. Sure, transfer semantics avoid shared mutable state, though it would be inconsistent with most other pure data types. But what if you have some data that doesn't need mutating but you'd like to share with multiple other Workers? Now you'd be forced to explicitly copy. The availability of an immutable variant would let you avoid that. At most, you'd need to copy once if your ArrayBuffer started immutable; or you could have the ability to convert mutable to immutable at runtime (it would have to be a one-way conversion, of course). I'm thinking about how this would be implemented. Ken talks about a close function to make it possible to pass an ArrayBuffer to a worker. 
If I have it right, this would detach the contents of the ArrayBuffer from its owning object, replacing it with a 0 length buffer. Then the worker attaches the contents to a new ArrayBuffer owned by the worker. To do that we'd need to figure out the magic of passing this bare buffer to the worker. An ImmutableArrayBuffer would not need any such magic. But without any additional functionality, you'd always need an additional copy (even if it's a copy-on-write) for Maciej's example. In Maciej's example, he wants to take an incoming buffer and pass it to a worker, presumably so it can be mutated into something else. So you'd pass the ImmutableArrayBuffer to the worker (no copy) and it would create a new ArrayBuffer with one or more views which it would fill with the mutated data. But to pass this buffer back to the main thread, you'd need to convert this ArrayBuffer to an ImmutableArrayBuffer, which would require some sort of copy. What's needed is a way to pass that ArrayBuffer back to the main thread without a copy. So maybe we just need a function like Ken's close but without the magic. A makeImmutable() function could be called on the ArrayBuffer, which would create a new ImmutableArrayBuffer, attach the contents of the ArrayBuffer to it and set the contents of the ArrayBuffer to a 0 length buffer, as in Ken's design. So now you'd pass the incoming ImmutableArrayBuffer to the worker, create a new ArrayBuffer for the mutated data, fill it, call makeImmutable on it and return the result. No copies would be needed. Once the process starts, the old buffers can be recycled to avoid memory allocations as well. Would something like that work? - ~Chris cmar...@apple.com
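The detach-on-transfer semantics being discussed can be sketched like this: the storage is moved out of the source buffer, which is left at length 0, and the receiving side wraps the same storage in a fresh buffer object with no bytes copied. ArrayBufferSketch and detach() are illustrative names, not the TypedArray spec's API.

```cpp
#include <cassert>
#include <cstddef>
#include <utility>
#include <vector>

class ArrayBufferSketch {
public:
    explicit ArrayBufferSketch(size_t length) : m_data(length) {}
    explicit ArrayBufferSketch(std::vector<unsigned char> storage)
        : m_data(std::move(storage)) {}

    size_t byteLength() const { return m_data.size(); }

    // Like the close()/postMessage hand-off: give up the storage and become
    // a zero-length buffer. The bytes themselves are never copied.
    std::vector<unsigned char> detach() {
        return std::exchange(m_data, std::vector<unsigned char>());
    }

private:
    std::vector<unsigned char> m_data;
};
```

Ping-ponging a buffer back and forth this way reuses the same underlying storage, so only the small wrapper objects are ever allocated.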
Re: [webkit-dev] XHR responseArrayBuffer attribute
On Sep 30, 2010, at 10:46 AM, Kenneth Russell wrote: ... Sure, transfer semantics avoid shared mutable state, though it would be inconsistent with most other pure data types. But what if you have some data that doesn't need mutating but you'd like to share with multiple other Workers? Now you'd be forced to explicitly copy. The availability of an immutable variant would let you avoid that. At most, you'd need to copy once if your ArrayBuffer started immutable; or you could have the ability to convert mutable to immutable at runtime (it would have to be a one-way conversion, of course). I'm thinking about how this would be implemented. Ken talks about a close function to make it possible to pass an ArrayBuffer to a worker. If I have it right, this would detach the contents of the ArrayBuffer from it's owning object, replacing it with a 0 length buffer. Then the worker attaches the contents to a new ArrayBuffer owned by the worker. To do that we'd need to figure out the magic of passing this bare buffer to the worker. An ImmutableArrayBuffer would not need any such magic. But without any additional functionality, you'd always need an additional copy (even it's a copy-on-write) for Maciej's example. In Maciej's example, he wants to take an incoming buffer and pass it to a worker, presumably so it can be mutated into something else. So you'd pass the ImmutableArrayBuffer to the worker (no copy) and it would create a new ArrayBuffer with one or more views which it would fill with the mutated data. But to pass this buffer back to the main thread, you'd need to convert this ArrayBuffer to an ImmutableArrayBuffer, which would require some sort of copy. What's needed is a way to pass that ArrayBuffer back to the main thread without a copy. So maybe we just need a function like Ken's close but without the magic. 
A makeImmutable() function could be called on the ArrayBuffer, which would create a new ImmutableArrayBuffer, attach the contents of the ArrayBuffer to it and set the contents of the ArrayBuffer to a 0 length buffer, as in Ken's design. So now you'd pass the incoming ImmutableArrayBuffer to the worker, create a new ArrayBuffer for the mutated data, fill it, call makeImmutable on it and return the result. No copies would be needed. Once the process starts, the old buffers can be recycled to avoid memory allocations as well. Would something like that work? I can see the need both for immutable data and transfer semantics. I don't think that adding a new type (ImmutableArrayBuffer) is the right way to do it, because it significantly complicates the type hierarchy. Rather, I think immutability should be a read-only property on the ArrayBuffer, set at creation time, and affecting the kinds of views that can be attached to it. I'll raise the issue and a proposal on the public_webgl mailing list. There are many ways to do it. If we do it as a read-only property, then we need to do a write check on every access. Doing it as a completely separate set of immutable classes (ArrayBuffer and views) would double the number of classes. But there are only 9 classes now, so the increase wouldn't be that bad. This is especially true with the way the spec is now. All the views are collapsed into a single section. So we're really just talking about adding 2 new sections, plus a description of the semantics, the new makeImmutable() function on ArrayBuffer and probably some copy functions. - ~Chris cmar...@apple.com
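The read-only-property alternative Ken raises can be sketched as follows: immutability is fixed at creation time and every write path pays a check. The class name and the throwing behavior are illustrative assumptions, not any spec's API; the point is to show where the per-write cost lands.

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <stdexcept>
#include <vector>

class SharedBufferSketch {
public:
    SharedBufferSketch(size_t length, bool immutable)
        : m_data(length), m_immutable(immutable) {}

    bool isImmutable() const { return m_immutable; }
    uint8_t get(size_t index) const { return m_data.at(index); }

    // The write check on every access that Chris notes as the cost of this
    // design, compared with a separate immutable class hierarchy.
    void set(size_t index, uint8_t value) {
        if (m_immutable)
            throw std::logic_error("write to immutable buffer");
        m_data.at(index) = value;
    }

private:
    std::vector<uint8_t> m_data;
    bool m_immutable;
};
```

A separate immutable class would move this check to compile time at the price of doubling the number of buffer/view classes, which is exactly the trade-off being weighed in the thread.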
Re: [webkit-dev] XHR responseArrayBuffer attribute
On Sep 27, 2010, at 6:37 PM, Maciej Stachowiak wrote: On Sep 27, 2010, at 3:19 PM, Michael Nordman wrote: Webkit's XHR currently does not keep two copies of the data that I can see. I think we should avoid that. We could keep the raw data around, which hopefully is directly usable as an ArrayBuffer backing store, and only translate it to text format when/if the client requests responseText. Yes, the raw data should be usable without translation in an ArrayBuffer. But we'd still need to make a copy of the raw bits when a new ArrayBuffer is created via responseArrayBuffer(), because that object is mutable. - ~Chris cmar...@apple.com
Re: [webkit-dev] XHR responseArrayBuffer attribute
On Sep 27, 2010, at 6:40 PM, James Robinson wrote: On Mon, Sep 27, 2010 at 6:37 PM, Maciej Stachowiak m...@apple.com wrote: On Sep 27, 2010, at 3:19 PM, Michael Nordman wrote: Webkit's XHR currently does not keep two copies of the data that I can see. I think we should avoid that. We could keep the raw data around, which hopefully is directly usable as an ArrayBuffer backing store, and only translate it to text format when/if the client requests responseText. It would be unfortunate to have to keep the raw data around after the page accesses .responseText, given that the overwhelming majority of pages will touch .responseText and nothing else. I found when improving the V8 XHR implementation that the memory footprint of .responseText being held off of the XHR object itself was often significant so I would be very reluctant to grow it by an additional 50-100% (depending on encoding) in the common case. But do you think you'd ever need more than one copy of the raw bits from the response? Seems like you should be able to return a responseText and a responseArrayBuffer from the same raw bits. Am I missing some detail of how XHR works? - ~Chris cmar...@apple.com
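The single-raw-buffer idea in this thread can be sketched like this: the XHR keeps only the raw response bytes, decodes responseText from them on demand, and hands out a copy for responseArrayBuffer (a copy is needed because ArrayBuffer is mutable). Class and method names are illustrative; real decoding would go through a charset decoder rather than the byte-for-byte conversion used here.

```cpp
#include <cassert>
#include <string>
#include <utility>
#include <vector>

class XHRResponseSketch {
public:
    explicit XHRResponseSketch(std::vector<unsigned char> rawBytes)
        : m_raw(std::move(rawBytes)) {}

    // Decoded on demand from the raw bytes; ASCII assumed for illustration.
    std::string responseText() const {
        return std::string(m_raw.begin(), m_raw.end());
    }

    // A fresh copy each call, so callers may mutate their buffer freely
    // without corrupting the text view derived from the same raw bits.
    std::vector<unsigned char> responseArrayBuffer() const {
        return m_raw;
    }

private:
    std::vector<unsigned char> m_raw; // the only long-lived copy
};
```

This addresses James's footprint concern in the common case: only one long-lived copy exists, and the second representation is materialized only for callers that ask for it.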
Re: [webkit-dev] XHR responseArrayBuffer attribute
On Sep 28, 2010, at 9:45 AM, Maciej Stachowiak wrote: On Sep 28, 2010, at 7:15 AM, Chris Marrin wrote: On Sep 27, 2010, at 6:37 PM, Maciej Stachowiak wrote: On Sep 27, 2010, at 3:19 PM, Michael Nordman wrote: Webkit's XHR currently does not keep two copies of the data that I can see. I think we should avoid that. We could keep the raw data around, which hopefully is directly usable as an ArrayBuffer backing store, and only translate it to text format when/if the client requests responseText. Yes, the raw data should be usable without translation in an ArrayBuffer. But we'd still need to make a copy of the raw bits when a new ArrayBuffer is created via responseArrayBuffer(), because that object is mutable. Is there an immutable variant of ArrayBuffer? If not, we really need one. But even without that, note that you don't necessarily need to make an immediate copy, you can use copy-on-write. The immutable variant would be helpful since we could avoid implementing threadsafe copy-on-write just to allow efficient passing of ArrayBuffers to Workers. There's not an immutable variant, but that's a good idea. I will propose it. - ~Chris cmar...@apple.com
Re: [webkit-dev] Review tool changes
On Sep 20, 2010, at 11:36 AM, Adam Roben wrote: On Sep 20, 2010, at 2:34 PM, Oliver Hunt wrote: I really would like to be able to select some text and add a comment that uses the selection as context, a single line of context is frequently insufficient, this is about the only thing that still makes the new review tool less effective than the old review mechanism (for me at least). This already works. Just click-and-drag on the line numbers of the lines you want to include in the context. Ah, I didn't know about this. Very cool. This will make comments so much more clear! - ~Chris cmar...@apple.com
Re: [webkit-dev] Arena is crufty?
On Sep 2, 2010, at 9:41 AM, Kenneth Russell wrote: On Thu, Sep 2, 2010 at 8:51 AM, Chris Marrin cmar...@apple.com wrote: On Sep 1, 2010, at 7:20 PM, Kenneth Russell wrote: I would be happy to not add another Arena client, but the primary reason I need an arena is not just for performance but to avoid having to keep track of all of the objects I need to delete. Is there any consensus yet on how to proceed with https://bugs.webkit.org/show_bug.cgi?id=45059 ? I'm concerned about taking on large-scale restructuring with potential performance impact as a prerequisite for my landing any initial code. I could revert my PODArena class to use its own memory allocation rather than that in Arena.h. I just posted that it seems like your RB tree could be replaced by std::multimap. And, given comments from others, it seems like the right thing to do with Arena is to put PODArena into the gpu directory like you were originally going to do, but to not use Arena.h (suck its functionality into PODArena). Alternately, you could try Jeremy's idea and ref count your objects. If you use std::multimap, elements can be of type RefPtr<something>, so you can avoid all memory management issues. I haven't seen that reply yet, but replacing my red-black tree with std::multimap is not a solution. My red-black tree is specifically designed to be augmentable, and the IntervalTree built on it is a core data structure used in the path processing code. The wheels go round and round. Seems like the right solution is to put PODRedBlackTree and PODArena in gpu as originally planned. But still suck in the functionality of Arena.h rather than using it directly. That gives us the option of getting rid of Arena.h at some point. - ~Chris cmar...@apple.com
Re: [webkit-dev] Complex and Vector3 classes in WTF?
On Aug 31, 2010, at 6:59 PM, Kenneth Russell wrote: On Tue, Aug 31, 2010 at 6:42 PM, Maciej Stachowiak m...@apple.com wrote: On Aug 31, 2010, at 5:29 PM, Chris Marrin wrote: On Aug 31, 2010, at 5:25 PM, Kenneth Russell wrote: ...Yes, I did the Google search and you're right that the term is not in common usage (although I still maintain it's a completely reasonable term). The reason I think it's meaningful is because it really is a matrix of sorts, but a specialized one that handles only affine transformations. We could call it AffineTransform, but then why not call our 4x4 matrix HomogeneousTransform? I'd just like to be consistent. HomogenousTransform is fine. I would also be fond of PerspectiveProjection. PerspectiveProjection is not a good name for a 4x4 matrix class. Such a matrix might be used to represent an orthographic projection. I think TransformMatrix is not a good name. It immediately raises the question, what kind of transform. I also think Matrix does not need to be in the name. That is to some extent an implementation detail, from the mathematical perspective. It's more important to identify the type of transformation. I'm concerned about the route of adding a class for each kind of transformation. It will lead to a proliferation of confusingly named types and excess type conversion, or re-identification of the type of transformation, when composing transformations. At least in the 3D realm, all that is desired is one simple 4x4 matrix class. Additional classes to represent e.g. 4x3 matrices add unnecessary complexity. I don't think we need a huge number, but the 2D affine transform case is clearly special - it's too expensive to use a 4x4 matrix for this. Agreed. I agree. So, in order to appease Maciej :-) what if we keep AffineTransform as is, and change TransformationMatrix to Matrix (or Matrix4 if Matrix is too generic)? Is HomogenousTransform an inaccurate representation of what it does? 
The class has methods such as mapPoint, projectPoint, scale, rotate3d, applyPerspective, etc. It is clearly oriented around being some sort of transform, not a generic 4x4 matrix. Is it ever used to represent something that is not a transform at all? Would you use this class for something totally unrelated to transforms, for example if you wanted to solve a system of linear equations via Gaussian elimination? My expectation is that you would not. I would certainly not preclude this possibility. If you look at the implementation of TransformationMatrix, it actually does matrix decomposition to extract components like rotation and perspective of a given transform. I could easily see the need to expose the underlying operations, or other operations like LU decomposition, in the public API. But I agree with Maciej that all of the public API is transformation oriented. Even things like inverse() and transpose() have application in doing transforms. I think it would be a stretch to use this 4x4 matrix for general purposes. A general matrix class usually has to deal with different dimensions, for instance. But I am loath to call it HomogeneousTransform, given the fact that Maciej and I have spelled it differently (both are accepted spellings) and it will be really hard for people to get used to spelling such an uncommon word. If you look at the Uses section of http://en.wikipedia.org/wiki/Transformation_matrix you'll see that they consider the term improper as well. And they have a good point. Since they recommend the term general transformation matrix to distinguish it from the more restricted affine transformation, then simply calling it Transform seems appropriate. - ~Chris cmar...@apple.com ___ webkit-dev mailing list webkit-dev@lists.webkit.org http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev
Re: [webkit-dev] Complex and Vector3 classes in WTF?
On Sep 1, 2010, at 1:48 PM, Simon Fraser wrote: On Sep 1, 2010, at 1:04 PM, Chris Marrin wrote: On Sep 1, 2010, at 12:29 PM, Maciej Stachowiak wrote: On Sep 1, 2010, at 11:43 AM, Chris Marrin wrote: But I agree with Maciej that all of the public API is transformation oriented. Even things like inverse() and transpose() have application in doing transforms. I think it would be a stretch to use this 4x4 matrix for general purposes. A general matrix class usually has to deal with different dimensions, for instance. But I am loath to call it HomogeneousTransform, given the fact that Maciej and I have spelled it differently (both are accepted spellings) and it will be really hard for people to get used to spelling such an uncommon word. If you look at the Uses section of http://en.wikipedia.org/wiki/Transformation_matrix you'll see that they consider the term improper as well. And they have a good point. Since they recommend the term general transformation matrix to distinguish it from the more restricted affine transformation, then simply calling it Transform seems appropriate. Transform sounds ok to me, actually, even though it is a little broad. Filed as https://bugs.webkit.org/show_bug.cgi?id=45051. I also opened https://bugs.webkit.org/show_bug.cgi?id=45052 for the work to change Transform back to floats. I'm not sure we've fully committed to do this, but I wanted it recorded in the bug list. We can invalidate it if we don't end up doing it. SVG already has SVGTransform, the interface for one of the component transformations within an SVGTransformList, which has an SVGMatrix property, which represents the matrix. I think Transform is going to get too easily confused with existing transform terminology related to CSS and SVG transforms, and maybe XSLT too. Alternative? - ~Chris cmar...@apple.com ___ webkit-dev mailing list webkit-dev@lists.webkit.org http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev
[webkit-dev] Arena is crufty?
Ken's PODRedBlackTree patch has made me go back and take a closer look at WebKit's Arena class. Turns out it's not a class at all, just some structs and macros. That seems very un-WebKit-like to me. Ken's patch also has a PODArena class, which uses Arena in its implementation. Sam suggests that PODRedBlackTree should really go into WTF, which means PODArena and Arena would need to go there as well. It seems like Arena really needs to be brought into the 21st century and made a proper class. Maybe now is the right time to: 1) Make Arena a class 2) Integrate Ken's PODArena functionality into this new Arena class (or maybe just make Ken's PODArena the new Arena class). 3) Move the new Arena class to WTF 4) Put PODRedBlackTree in WTF It looks like RenderArena is currently the only client of Arena.h, so this change shouldn't be too hard. Of course, looking at RenderArena, it's a little odd, too. It is not renderer specific at all. It's just an Arena that recycles freed objects. Maybe we should move that functionality into the new Arena class. But RenderArena is used all over the place, so maybe that's going one step too far down this road? - ~Chris cmar...@apple.com ___ webkit-dev mailing list webkit-dev@lists.webkit.org http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev
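The "Arena that recycles freed objects" behavior described above could look roughly like this sketch. It is purely illustrative (the names are not WebKit's, a real arena carves blocks out of large chunks rather than calling malloc per block, and it assumes allocation sizes of at least sizeof(void*) so a freed block can be threaded onto a free list):

```cpp
#include <cstddef>
#include <cstdlib>
#include <map>
#include <vector>

// Freed blocks go onto per-size free lists and are handed back on the next
// allocation of the same size, so hot objects recycle memory instead of
// growing the arena.
class RecyclingArena {
public:
    void* allocate(size_t size) {
        void*& head = m_freeList[size];
        if (head) {                       // recycle a previously freed block
            void* block = head;
            head = *static_cast<void**>(block);
            return block;
        }
        void* block = std::malloc(size);  // real arenas carve from chunks
        m_all.push_back(block);
        return block;
    }
    void deallocate(void* block, size_t size) {
        // Thread the block onto the free list for its size class
        // (requires size >= sizeof(void*)).
        *static_cast<void**>(block) = m_freeList[size];
        m_freeList[size] = block;
    }
    ~RecyclingArena() {
        for (void* block : m_all)
            std::free(block);
    }
private:
    std::map<size_t, void*> m_freeList;
    std::vector<void*> m_all;
};
```

This is the piece that is renderer-agnostic, which is the argument for folding it into a general Arena class.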
Re: [webkit-dev] Arena is crufty?
On Sep 1, 2010, at 4:39 PM, Maciej Stachowiak wrote: On Sep 1, 2010, at 4:20 PM, Chris Marrin wrote: Ken's PODRedBlackTree patch has made me go back and take a closer look at WebKit's Arena class. Turns out it's not a class at all, just some structs and macros. That seems very un-WebKit-like to me. Ken's patch also has a PODArena class, which uses Arena in its implementation. Sam suggests that PODRedBlackTree should really go into WTF, which means PODArena and Arena would need to go there as well. It seems like Arena really needs to be brought into the 21st century and made a proper class. Maybe now is the right time to: 1) Make Arena a class 2) Integrate Ken's PODArena functionality into this new Arena class (or maybe just make Ken's PODArena the new Arena class). 3) Move the new Arena class to WTF 4) Put PODRedBlackTree in WTF It looks like RenderArena is currently the only client of Arena.h, so this change shouldn't be too hard. Of course, looking at RenderArena, it's a little odd, too. It is not renderer specific at all. It's just an Arena that recycles freed objects. Maybe we should move that functionality into the new Arena class. But RenderArena is used all over the place, so maybe that's going one step too far down this road? Arena was imported from Mozilla and could certainly benefit from modernization. For the rendering use case though, it is essential to handle non-POD types correctly. But RenderArena doesn't deal with types at all, does it? It just allocs and frees void*s. It's not even a template class. So maybe we should have an Arena class which deals with void*s and a PODArena template class which lets you type arena objects? RenderArena would use the Arena class. - ~Chris cmar...@apple.com ___ webkit-dev mailing list webkit-dev@lists.webkit.org http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev
Re: [webkit-dev] Arena is crufty?
On Sep 1, 2010, at 5:12 PM, Kenneth Russell wrote: On Wed, Sep 1, 2010 at 5:08 PM, Chris Marrin cmar...@apple.com wrote: On Sep 1, 2010, at 4:39 PM, Maciej Stachowiak wrote: On Sep 1, 2010, at 4:20 PM, Chris Marrin wrote: Ken's PODRedBlackTree patch has made me go back and take a closer look at WebKit's Arena class. Turns out it's not a class at all, just some structs and macros. That seems very un-WebKit-like to me. Ken's patch also has a PODArena class, which uses Arena in its implementation. Sam suggests that PODRedBlackTree should really go into WTF, which means PODArena and Arena would need to go there as well. It seems like Arena really needs to be brought into the 21st century and made a proper class. Maybe now is the right time to: 1) Make Arena a class 2) Integrate Ken's PODArena functionality into this new Arena class (or maybe just make Ken's PODArena the new Arena class). 3) Move the new Arena class to WTF 4) Put PODRedBlackTree in WTF It looks like RenderArena is currently the only client of Arena.h, so this change shouldn't be too hard. Of course, looking at RenderArena, it's a little odd, too. It is not renderer specific at all. It's just an Arena that recycles freed objects. Maybe we should move that functionality into the new Arena class. But RenderArena is used all over the place, so maybe that's going one step too far down this road? Arena was imported from Mozilla and could certainly benefit from modernization. For the rendering use case though, it is essential to handle non-POD types correctly. But RenderArena doesn't deal with types at all, does it? It just allocs and free's void*'s. It's not even a template class. So maybe we should have an Arena class which deals with void*'s and a PODArena template class which lets you type arena objects? RenderArena would use the Arena class. RenderArena is used by classes which want to override operator new and delete in order to be allocated in an arena. 
PODArena is designed to be non-intrusive with respect to the POD data types that are allocated in it; overloaded operator new and delete do not need to be provided. Right, so I think we should have (in WTF) an Arena class which simply does arena based alloc and free of void*'s, and a PODArena template class which does what your PODArena does today, using the Arena class as its implementation. Then RenderArena should be changed to use Arena. This is minimal change and doesn't change the functionality of any end-user classes. - ~Chris cmar...@apple.com ___ webkit-dev mailing list webkit-dev@lists.webkit.org http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev
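The proposed layering, a void*-level Arena with a typed PODArena on top, might be sketched roughly as below. This is illustrative rather than the actual patch: alignment handling is elided (a real arena rounds sizes up), and the names are only suggestive.

```cpp
#include <cstddef>
#include <new>
#include <vector>

// A plain Arena hands out raw bytes from fixed-size chunks.
class Arena {
public:
    void* allocate(size_t size) {
        if (m_chunks.empty() || m_used + size > kChunkSize) {
            m_chunks.push_back(std::vector<char>(kChunkSize));
            m_used = 0;
        }
        void* p = m_chunks.back().data() + m_used;
        m_used += size;   // a real arena would also align `size`
        return p;
    }
private:
    static const size_t kChunkSize = 4096;
    std::vector<std::vector<char>> m_chunks;
    size_t m_used = 0;
};

// A thin typed wrapper placement-news POD objects into the arena. Because the
// types are POD, no destructors need to be tracked; everything is released
// when the arena's chunks are freed.
class PODArena {
public:
    template <typename T>
    T* allocateObject() { return new (m_arena.allocate(sizeof(T))) T(); }
private:
    Arena m_arena;
};
```

RenderArena would then sit on the untyped Arena, overriding operator new and delete in its client classes as it does today.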
[webkit-dev] Complex and Vector3 classes in WTF?
I just noticed these classes, added 7 months ago as part of Chris Rogers' audio work. I think it's a mistake to have these in WTF for a few reasons: 1) Complex is just std::complex with a single added function, complexFromMagnitudePhase(), which seems pretty audio specific, so it should go with the audio code 2) Vector3 has a name very similar to Vector, but with completely different functionality. I actually opened wtf/Vector.h thinking I was going to see a 2D Vector class (because I was in that mindset), then I remembered the _other_ meaning of Vector! So I think it's pretty confusing. 3) Vector3 goes along with other classes, like 2D point, matrices and maybe even lines, planes and other geometry related things. Right now we have FloatPoint2D, FloatPoint3D and TransformationMatrix in WebCore/platform/graphics. These should all be together. I think we should move Complex.h over to live with the rest of Chris's audio code. Vector3 is a more complex ( ! ) issue. Should we move all the geometry related classes to WTF? If we did I think that should include all the Rect and Box classes as well. Or should we get rid of Vector3, add the functionality it needs to FloatPoint3D and use that? Ken Russell already has plans to add the functions to FloatPoint3D, so I would vote for that. There's one other problem. Vector3 uses doubles, while FloatPoint3D uses floats. Chris, do you need doubles for your use, or would floats suffice? What do others think? - ~Chris cmar...@apple.com ___ webkit-dev mailing list webkit-dev@lists.webkit.org http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev
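For reference, the one extra helper in question is easy to reproduce with the standard library alone, which is part of the argument for keeping Complex with the audio code. A sketch, assuming the signature implied above (the exact signature in the branch may differ): it is essentially std::polar.

```cpp
#include <complex>

typedef std::complex<double> Complex;

// Build a complex number from polar form: magnitude * (cos(phase) + i*sin(phase)).
Complex complexFromMagnitudePhase(double magnitude, double phase) {
    return std::polar(magnitude, phase);
}
```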
Re: [webkit-dev] Complex and Vector3 classes in WTF?
On Aug 31, 2010, at 3:25 PM, Maciej Stachowiak wrote: On Aug 31, 2010, at 2:06 PM, Chris Marrin wrote: On Aug 31, 2010, at 11:48 AM, Kenneth Russell wrote: On Tue, Aug 31, 2010 at 11:05 AM, David Hyatt hy...@apple.com wrote: On Aug 31, 2010, at 10:36 AM, Chris Marrin wrote: Or should we get rid of Vector3, added the functionality it needs to FloatPoint3D and use that? Ken Russell already has plans to do add the functions to FloatPoint3D, so I would vote for that. I would vote for this. I don't think the geometry classes should move to wtf. I'd like to unify the math, geometry, and linear algebra classes that are scattered around the WebKit tree -- for example, FloatPoint, FloatPoint3D, FloatRect, FloatSize, the classes under WebCore/platform/graphics/transforms/, these Complex and Vector3 types, ... -- under a directory like WebCore/math, remove duplicate functionality, and provide a cohesive set of interfaces that can be easily used by other modules like graphics and audio. It would be nice if we could do this unification and then later on we can enhance it so the classes play nice together. For instance, TransformationMatrix deals with many, but not all of the other geometric classes. You can't cast between FloatPoint and FloatPoint3D, etc. Maybe we could also use this opportunity to change TransformationMatrix to Matrix. The current name is such a mouthful. And we might also want to think about changing FloatPoint3D to FloatPoint3. That would make it more natural if and when we want to add a FloatPoint4. We should also change AffineTransform to AffineMatrix so it matches Matrix. Mathematically, you can have an affine transform, or a matrix that represents an affine transform. And there's such a thing as an affine space (in fact IntPoint and IntSize form an affine space). But there's no such thing as an affine matrix. Sure there is. It's a matrix that performs affine transformations. 
Mathematically it's represented as a 3x3 matrix, but like others, we just represent it as a linear transformation matrix (2x2) plus a 2D translation value. I think the name AffineMatrix is descriptive because, unlike a general 3x3 matrix, our truncated representation can only handle affine transformations. - ~Chris cmar...@apple.com ___ webkit-dev mailing list webkit-dev@lists.webkit.org http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev
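The "2x2 linear part plus a 2D translation" representation described above amounts to six values (a, b, c, d, e, f) standing for the 3x3 matrix [a c e; b d f; 0 0 1], whose bottom row is implicit precisely because only affine maps are representable. A hedged sketch (illustrative struct, not WebKit's AffineTransform API):

```cpp
// Six floats encode the affine map; the implicit bottom row (0, 0, 1) is what
// rules out perspective and makes this strictly an affine transform.
struct AffineSketch {
    double a, b, c, d, e, f;

    // Map a point: the 2x2 linear part, then the translation.
    void map(double x, double y, double& outX, double& outY) const {
        outX = a * x + c * y + e;
        outY = b * x + d * y + f;
    }
};
```

This matches the (a, b, c, d, e, f) ordering used by CSS and SVG matrix() notation.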
Re: [webkit-dev] Complex and Vector3 classes in WTF?
On Aug 31, 2010, at 3:43 PM, Kenneth Russell wrote: On Tue, Aug 31, 2010 at 3:39 PM, Chris Marrin cmar...@apple.com wrote: On Aug 31, 2010, at 3:25 PM, Maciej Stachowiak wrote: On Aug 31, 2010, at 2:06 PM, Chris Marrin wrote: On Aug 31, 2010, at 11:48 AM, Kenneth Russell wrote: On Tue, Aug 31, 2010 at 11:05 AM, David Hyatt hy...@apple.com wrote: On Aug 31, 2010, at 10:36 AM, Chris Marrin wrote: Or should we get rid of Vector3, added the functionality it needs to FloatPoint3D and use that? Ken Russell already has plans to do add the functions to FloatPoint3D, so I would vote for that. I would vote for this. I don't think the geometry classes should move to wtf. I'd like to unify the math, geometry, and linear algebra classes that are scattered around the WebKit tree -- for example, FloatPoint, FloatPoint3D, FloatRect, FloatSize, the classes under WebCore/platform/graphics/transforms/, these Complex and Vector3 types, ... -- under a directory like WebCore/math, remove duplicate functionality, and provide a cohesive set of interfaces that can be easily used by other modules like graphics and audio. It would be nice if we could do this unification and then later on we can enhance it so the classes play nice together. For instance, TransformationMatrix deals with many, but not all of the other geometric classes. You can't cast between FloatPoint and FloatPoint3D, etc. Maybe we could also use this opportunity to change TransformationMatrix to Matrix. The current name is such a mouthful. And we might also want to think about changing FloatPoint3D to FloatPoint3. That would make it more natural if and when we want to add a FloatPoint4. We should also change AffineTransform to AffineMatrix so it matches Matrix. Mathematically, you can have an affine transform, or a matrix that represents an affine transform. And there's such a thing as an affine space (in fact IntPoint and IntSize form an affine space). But there's no such thing as an affine matrix. Sure there is. 
It's a matrix that performs affine transformations. Mathematically it's represented as a 3x3 matrix, but like others, we just represent it as a linear transformation matrix (2x2) plus a 2D translation value. I think the name AffineMatrix is descriptive because, unlike a general 3x3 matrix, our truncated representation can only handle affine transformations. Chris, based on the precision of Maciej's reply, I suspect you do not want to get into a semantic argument here... :) http://www.google.com/search?q=affine+matrix Oh, Ken, I'll argue about anything, you know that :-) Yes, I did the Google search and you're right that the term is not in common usage (although I still maintain it's a completely reasonable term). The reason I think it's meaningful is because it really is a matrix of sorts, but a specialized one that handles only affine transformations. We could call it AffineTransform, but then why not call our 4x4 matrix HomogeneousTransform? I'd just like to be consistent. - ~Chris cmar...@apple.com ___ webkit-dev mailing list webkit-dev@lists.webkit.org http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev
Re: [webkit-dev] Complex and Vector3 classes in WTF?
On Aug 31, 2010, at 5:25 PM, Kenneth Russell wrote: ...Yes, I did the Google search and you're right that the term is not in common usage (although I still maintain it's a completely reasonable term). The reason I think it's meaningful is because it really is a matrix of sorts, but a specialized one that handles only affine transformations. We could call it AffineTransform, but then why not call our 4x4 matrix HomogeneousTransform? I'd just like to be consistent. HomogenousTransform is fine. I would also be fond of PerspectiveProjection. PerspectiveProjection is not a good name for a 4x4 matrix class. Such a matrix might be used to represent an orthographic projection. I think TransformMatrix is not a good name. It immediately raises the question, what kind of transform. I also think Matrix does not need to be in the name. That is to some extent an implementation detail, from the mathematical perspective. It's more important to identify the type of transformation. I'm concerned about the route of adding a class for each kind of transformation. It will lead to a proliferation of confusingly named types and excess type conversion, or re-identification of the type of transformation, when composing transformations. At least in the 3D realm, all that is desired is one simple 4x4 matrix class. Additional classes to represent e.g. 4x3 matrices add unnecessary complexity. I agree. So, in order to appease Maciej :-) what if we keep AffineTransform as is, and change TransformationMatrix to Matrix (or Matrix4 if Matrix is too generic)? - ~Chris cmar...@apple.com ___ webkit-dev mailing list webkit-dev@lists.webkit.org http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev
Re: [webkit-dev] Accelerated 2D Tesselation Implementation
On Aug 30, 2010, at 11:56 AM, Kenneth Russell wrote: On Sat, Aug 28, 2010 at 12:36 PM, Darin Fisher da...@chromium.org wrote: On Sat, Aug 28, 2010 at 11:32 AM, Adam Barth aba...@webkit.org wrote: On Sat, Aug 28, 2010 at 7:44 AM, Chris Marrin cmar...@apple.com wrote: That's why I still think this should all go into a branch for now. It will help us all see the results without having to deal with the issues of (2) right now. An alternative to a branch is to use a run-time setting. That worked well for the HTML5 parser project. If there's a clean abstraction boundary in the code, we can use that as the branch point for the setting. The advantage of using a run-time setting is that you can leverage all the tools for working on trunk (including code reviews, etc) but you can avoid disturbing the vast majority of other developers while your feature bakes. Adam Such a runtime setting already exists for toggling accelerated canvas 2d on and off. This GPU based path rendering support is initially only going to be hooked in to the accelerated 2D canvas implementation, which since it's already covered by this run-time flag will not disturb other developers. I am going to substantially restructure the code based on feedback and submit new patches, but still against trunk. Given all the discussion that has gone on, I agree that trunk is the right place to do this. But I remain concerned about antialiasing. On that subject. If you were to have to do multi-sampling to solve the AA problem, does it make sense to skip the anti-aliasing on the edges of the spline triangles? Would that buy any performance? - ~Chris cmar...@apple.com ___ webkit-dev mailing list webkit-dev@lists.webkit.org http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev
Re: [webkit-dev] Accelerated 2D Tesselation Implementation
On Aug 27, 2010, at 4:32 PM, Kenneth Russell wrote: ... Since I decided not to attach these files, here are the non-quantized versions: http://www.rawbw.com/~kbrussel/tmp/butterfly.png http://www.rawbw.com/~kbrussel/tmp/butterfly-o3d.png Another thing we need to discuss are the rendering errors in the images you posted. If you compare them with a zoomed in version of the original svg file: Butterfly.svg you can see several places where there are cracks in your rendering that don't appear in the original (as rendered in WebKit and Illustrator). These errors are small, but the errors may be large enough to make the hardware accelerated results unacceptable. It's just something to discuss. - ~Chris cmar...@apple.com ___ webkit-dev mailing list webkit-dev@lists.webkit.org http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev
Re: [webkit-dev] Accelerated 2D Tesselation Implementation
On Aug 27, 2010, at 5:32 PM, Kenneth Russell wrote: ... So here's my concern. If you have to multisample for anti-aliasing anyway, why not just tesselate the shape at an appropriate sampling resolution and multisample the resulting triangle rendering. That would have the disadvantage of not being resolution independent, but it would be much simpler. And you can control the re-tessellation by oversampling the curve and only re-tesselating every few frames (maybe in the background) or only re-tesselating when not animating, or something like that. Would it be significantly faster or slower? Would the quality be better or worse? We'd have to run some tests. I just don't want to jump into a big expensive algorithm when something simpler would suffice. ... I definitely do not want to commit the WebKit project to using this particular algorithm and none other. I suspect we will need to investigate multiple approaches for GPU accelerating path rendering. That having been said, there is ongoing work to accelerate 2D canvas rendering using the GPU, and it has been determined that path rendering is a major bottleneck on some benchmarks. We believe that this algorithm will help eliminate this bottleneck. In order to continue our GPU accelerated canvas work, it is essential that the work continue on the WebKit trunk, where it is currently ongoing. We can not maintain a fixed branch of both the Chromium and WebKit projects, especially as other GPU infrastructure work is actively ongoing in both. I am committed to making this code work well in the WebKit infrastructure, and am not wedded to the particular algorithm; if a better one comes along, let's switch to it. Please work with me to find a way forward that is mutually acceptable. That's what we're doing now :-) I think the way forward is two-fold: 1) decide that this is the best algorithm to use and 2) decide how best to include all the parts in WebKit. We need to do (1) before (2). 
For (1) there are still some open issues: a) solve the anti-aliasing issue (showing the results), b) understand and fix the cracking issues (may just have to add stroking or something) and c) understand the performance. That's why I still think this should all go into a branch for now. It will help us all see the results without having to deal with the issues of (2) right now. - ~Chris cmar...@apple.com ___ webkit-dev mailing list webkit-dev@lists.webkit.org http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev
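The simpler, resolution-dependent alternative discussed in this thread, tessellating at an appropriate sampling resolution, usually means recursively flattening each curve segment until it is straight at the target tolerance. A hedged sketch for a quadratic segment (names and the flatness test are illustrative, not taken from the patch):

```cpp
#include <cmath>
#include <vector>

struct Pt { double x, y; };

static Pt mid(Pt p, Pt q) { return {(p.x + q.x) / 2, (p.y + q.y) / 2}; }

// Distance from the control point to the chord decides whether to recurse;
// flat-enough pieces are emitted as line-segment endpoints into `out`.
void flatten(Pt p0, Pt p1, Pt p2, double tolerance, std::vector<Pt>& out) {
    double dx = p2.x - p0.x, dy = p2.y - p0.y;
    double deviation = std::fabs((p1.x - p0.x) * dy - (p1.y - p0.y) * dx)
        / (std::hypot(dx, dy) + 1e-12);
    if (deviation <= tolerance) {
        out.push_back(p2);            // flat enough: one line segment
        return;
    }
    // de Casteljau split at t = 1/2 into two smaller quadratics.
    Pt q0 = mid(p0, p1), q1 = mid(p1, p2), r = mid(q0, q1);
    flatten(p0, q0, r, tolerance, out);
    flatten(r, q1, p2, tolerance, out);
}
```

The resulting polyline can be triangulated and multisampled; re-running this per frame (or every few frames, as suggested above) trades resolution independence for simplicity.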
[webkit-dev] Accelerated 2D Tesselation Implementation
Hi Ken, It would help me, and I think many others, if we could have a discussion of how exactly your tessellation logic works and what it is intended to do. For instance, is the algorithm you're using based on Loop-Blinn? I'm a bit familiar with that algorithm and some of the problems it has with rendering this type of geometry. For instance, there are typically anti-aliasing artifacts at the points where the interior triangles touch the edges. These are described in section 5.1 of the paper and the authors added additional (and not well described) logic to solve the problem. If you could describe your algorithm a bit and show some expected results with typical cases, that would be really helpful. For those not familiar with Loop-Blinn, here is a link to their original paper, presented at Siggraph 2005: Resolution Independent Curve Rendering using Programmable Graphics ... It's a great algorithm for rendering resolution independent 2D objects using the GPU. It has potential to render both 2D shapes (as used in Canvas and SVG) and text glyphs. Its advantage is that once you generate the triangles for the shape, you can render the shape at any resolution. Its disadvantage is that the triangle generation is quite expensive, so mutating shapes can potentially be slower than a simpler resolution dependent tessellation. - ~Chris cmar...@apple.com ___ webkit-dev mailing list webkit-dev@lists.webkit.org http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev
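For readers unfamiliar with the algorithm: the heart of Loop-Blinn for a quadratic segment is a per-pixel implicit test. Each curve triangle carries texture coordinates (0,0), (1/2,0), (1,1) at its vertices, the GPU interpolates them, and a fragment is kept when u^2 - v <= 0. Because the test is evaluated per fragment against interpolated coordinates, the curve stays exact at any zoom. A CPU sketch of that test (illustrative, not the paper's shader code), evaluated at barycentric coordinates (b0, b1, b2) of a point inside the triangle:

```cpp
struct QuadCurveTriangle {
    // Interpolate the canonical (u, v) assignment across the triangle and
    // apply the resolution-independent inside test u^2 - v <= 0.
    static bool insideCurve(double b0, double b1, double b2) {
        double u = b0 * 0.0 + b1 * 0.5 + b2 * 1.0;
        double v = b0 * 0.0 + b1 * 0.0 + b2 * 1.0;
        return u * u - v <= 0;
    }
};
```

The implicit curve u^2 = v passes through the first and last vertices; the control-point vertex (1/2, 0) lies on the outside, which is why the triangle interior is clipped to the curve.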
Re: [webkit-dev] Accelerated 2D Tesselation Implementation
On Aug 27, 2010, at 10:59 AM, Nico Weber wrote: On Fri, Aug 27, 2010 at 10:51 AM, Chris Marrin cmar...@apple.com wrote: On Aug 27, 2010, at 10:32 AM, Nico Weber wrote: On Fri, Aug 27, 2010 at 10:18 AM, Chris Marrin cmar...@apple.com wrote: Hi Ken, It would help me, and I think many others, if we could have a discussion of how exactly your tessellation logic works and what it is intended to do. For instance, is the algorithm you're using based on Loop-Blinn? I'm a bit familiar with that algorithm and some of the problems it has with rendering this type of geometry. For instance, there are typically anti-aliasing artifacts at the points where the interior triangles touch the edges. These are described in section 5.1 of the paper and the authors added additional (and not well described) logic to solve the problem. If you could describe your algorithm a bit and show some expected results with typical cases, that would be really helpful. For those not familiar with Loop-Blinn, here is a link to their original paper, presented at Siggraph 2005: Resolution Independent Curve Rendering using Programmable Graphics ... It's a great algorithm for rendering resolution independent 2D objects using the GPU. It has potential to render both 2D shapes (as used in Canvas and SVG) and text glyphs. Its advantage is that once you generate the triangles for the shape, you can render the shape at any resolution. Its disadvantage is that the triangle generation is quite expensive, so mutating shapes can potentially be slower than a simpler resolution dependent tessellation. I think there's a variant of the algorithm that uses the stencil buffer polygon rendering method ( http://zrusin.blogspot.com/2006/07/hardware-accelerated-polygon-rendering.html ) instead of triangulation. The paper I think I read on that only covered quadratic splines, but maybe someone has extended that method to cubic splines by now?
It looks like that technique deals with polygons, so as long as you convert the shape to a piecewise linear curve it seems like it can handle any curve form, right? What I linked is how to render polygons with the z buffer. http://http.developer.nvidia.com/GPUGems3/gpugems3_ch25.html says: Recently, Kokojima et al. 2006 presented a variant on our approach for quadratic splines that used the stencil buffer to avoid triangulation. Their idea is to connect all points on the curve path and draw them as a triangle fan into the stencil buffer with the invert operator. Only pixels drawn an odd number of times will be nonzero, thus giving the correct image of concavities and holes. Next, they draw the curve segments, treating them all as convex quadratic elements. This will either add to or carve away a curved portion of the shape. A quad large enough to cover the extent of the stencil buffer is then drawn to the frame buffer with a stencil test. The result is the same as ours without triangulation or subdivision, and needing only one quadratic curve orientation. Furthermore, eliminating the triangulation steps makes high-performance rendering of dynamic curves possible. The disadvantage of their approach is that two passes over the curve data are needed. For static curves, they are trading performance for implementation overhead. Maybe someone extended Kokojima et al's work to cover cubic splines. Ok, that makes sense. One thing that strikes me about that technique is that it touches many pixels outside the actual shape. Some edge cases could have overdraw of several hundred, which seems like it would affect performance. But I have not seen any data about this. - ~Chris cmar...@apple.com ___ webkit-dev mailing list webkit-dev@lists.webkit.org http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev
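The stencil trick quoted above implements the even-odd fill rule: drawing a triangle fan with the INVERT stencil op leaves set exactly those pixels covered an odd number of times, which handles concavities and holes for free. The same parity can be checked per point on the CPU with a classic crossing count; this sketch is an illustrative analogue, not the GPU code:

```cpp
#include <cstddef>
#include <vector>

struct P { double x, y; };

// Even-odd point-in-polygon test: cast a horizontal ray from (x, y) and
// toggle parity each time it crosses an edge. Odd parity means "inside",
// exactly what the stencil-invert fan computes per pixel.
bool insideEvenOdd(const std::vector<P>& polygon, double x, double y) {
    bool inside = false;
    size_t n = polygon.size();
    for (size_t i = 0, j = n - 1; i < n; j = i++) {
        const P& a = polygon[i];
        const P& b = polygon[j];
        if ((a.y > y) != (b.y > y)
            && x < (b.x - a.x) * (y - a.y) / (b.y - a.y) + a.x)
            inside = !inside;
    }
    return inside;
}
```

On the GPU the same parity is accumulated per pixel, which is where the overdraw concern above comes from: every fan triangle touches pixels far outside the final shape.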
Re: [webkit-dev] Accelerated 2D Tesselation Implementation
On Aug 27, 2010, at 4:21 PM, Kenneth Russell wrote: On Fri, Aug 27, 2010 at 10:18 AM, Chris Marrin cmar...@apple.com wrote: Hi Ken, It would help me, and I think many others, if we could have a discussion of how exactly your tessellation logic works and what it is intended to do. For instance, is the algorithm you're using based on Loop-Blinn? I'm a bit familiar with that algorithm and some of the problems it has with rendering this type of geometry. For instance, there are typically anti-aliasing artifacts at the points where the interior triangles touch the edges. These are described in section 5.1 of the paper and the authors added additional (and not well described) logic to solve the problem. If you could describe your algorithm a bit and show some expected results with typical cases, that would be really helpful. For those not familiar with Loop-Blinn, here is a link to their original paper, presented at Siggraph 2005: Resolution Independent Curve Rendering using Programmable Graphics ... It's a great algorithm for rendering resolution independent 2D objects using the GPU. It has potential to render both 2D shapes (as used in Canvas and SVG) and text glyphs. It's advantage is that once you generate the triangles for the shape, you can render the shape at any resolution. It's disadvantage is that the triangle generation is quite expensive, so mutating shapes can potentially be slower than a simpler resolution dependent tessellation. The code that is out for review (see https://bugs.webkit.org/show_bug.cgi?id=44729 ) is, as described in the bug report, precisely an implementation of Loop and Blinn's algorithm. It comes from their simplified reformulation in Chapter 25 of GPU Gems 3, which can be found online at http://http.developer.nvidia.com/GPUGems3/gpugems3_ch25.html . Ok, great, thanks. That makes everything much more clear. 
Their GPU Gems chapter discusses the antialiasing artifacts you mention, where interior triangles not handled by their shader touch the surface of the shape. They point out that enabling multisampling antialiasing solves this problem; the contents of these pixels are antialiased via MSAA, and the pixels covered by their shader are handled by their antialiasing formula. My own experience has been that this works very well; there are no distinguishable bumps or aliasing artifacts with both MSAA enabled and using the antialiasing version of their shader. Yeah, this is what I thought might be the solution. The problem with multisampling is that it's not hardware accelerated everywhere and where it is, it increases expense in both storage and rendering time. I'm not saying this is a showstopper, it's just an issue. So here's my concern. If you have to multisample for anti-aliasing anyway, why not just tesselate the shape at an appropriate sampling resolution and multisample the resulting triangle rendering. That would have the disadvantage of not being resolution independent, but it would be much simpler. And you can control the re-tessellation by oversampling the curve and only re-tesselating every few frames (maybe in the background) or only re-tesselating when not animating, or something like that. Would it be significantly faster or slower? Would the quality be better or worse? We'd have to run some tests. I just don't want to jump into a big expensive algorithm when something simpler would suffice. This is actually the second implementation of this algorithm I have done. In the first, we did much more experimentation, including trying Kokojima et al's stencil buffer algorithm rather than tessellating the interior of the shape. We found that the increased fill rate requirements of Kokojima's technique were detrimental to performance, and that it was better to tessellate to reduce overdraw. Yeah, I can see that algorithm having some really horrible edge cases. 
In my first implementation of this algorithm, we found that detecting and resolving overlaps of cubic curve control points (section 25.4 in the GPU Gems 3 chapter) was by far the most expensive part of the algorithm. It is also absolutely necessary in order to handle arbitrary inputs. After some investigation I realized that the plane-sweep algorithm mapped well to the problem of detecting overlaps of control point triangles, and incorporating a variant of this algorithm reduced the curve processing time dramatically. This optimization is included in the code out for review. Currently we have hooked up this code to a 2D Canvas implementation, and are performing no caching; we tessellate each path as it comes in. Performance of the classic SVG butterfly drawn using CanvasRenderingContext2D is similar to that of the software renderer, although I do not have concrete performance numbers yet. For retained mode APIs like SVG where the processing results can be cached easily, I expect significant performance improvements based on previous
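The plane-sweep variant Ken describes isn't spelled out in the thread, but the general shape of a sweep-line overlap pass can be sketched. This is a simplified, hypothetical illustration using axis-aligned bounding boxes of the control-point triangles, not the code under review; it shows why the sweep reduces pairwise comparisons to only the currently "active" candidates instead of all O(n²) pairs.

```javascript
// Simplified sweep-line overlap detection: sort bounding boxes by
// their left edge, then sweep left to right, comparing each new box
// only against "active" boxes whose right edge the sweep line has
// not yet passed. Illustrative only; real control-point-triangle
// overlap resolution must test the triangles themselves.
function findOverlappingBoxes(boxes) {
    const sorted = boxes.slice().sort((a, b) => a.minX - b.minX);
    const active = [];
    const overlaps = [];
    for (const box of sorted) {
        // Retire boxes the sweep line has moved past.
        for (let i = active.length - 1; i >= 0; i--) {
            if (active[i].maxX < box.minX)
                active.splice(i, 1);
        }
        // Only y-extents need checking; x-overlap is implied.
        for (const other of active) {
            if (box.minY <= other.maxY && other.minY <= box.maxY)
                overlaps.push([other.id, box.id]);
        }
        active.push(box);
    }
    return overlaps;
}

const boxes = [
    { id: "a", minX: 0, maxX: 2, minY: 0, maxY: 2 },
    { id: "b", minX: 1, maxX: 3, minY: 1, maxY: 3 }, // overlaps a
    { id: "c", minX: 5, maxX: 6, minY: 0, maxY: 1 }, // overlaps nothing
];
console.log(findOverlappingBoxes(boxes)); // [["a", "b"]]
```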
Re: [webkit-dev] Web Audio API
On Aug 24, 2010, at 12:05 PM, Chris Rogers wrote: Over the past months I've been refining the web audio API implementation that I've been developing in the 'audio' branch of WebKit (per Maciej's recommendation). The API has been through a good amount of review by WebKit developers at Apple, Google, and in the W3C Audio Incubator group. For those who are interested, the draft specification is here: http://chromium.googlecode.com/svn/trunk/samples/audio/specification/specification.html I have working demos here: http://chromium.googlecode.com/svn/trunk/samples/audio/index.html I'll be posting a series of patches to migrate the working code from the audio branch to WebKit trunk. Most of the files are new, with only a few places which will touch existing WebKit files (such as EventTarget, Event). The files will be conditionally compiled. I'm considering using the following enable: #if ENABLE(AUDIOCONTEXT) After discussing the directory layout in some detail with Eric Carlson, Chris Marrin, Simon Fraser, and Jer Noble, we've decided that the files will primarily live in two places: WebCore/audio WebCore/platform/audio I know that some had expressed concern that a directory called 'audio' in WebCore would be confused with the audio element. The reason I think 'audio' would be a good name is because the API does have a direct relationship to the audio element and, over time, when the API becomes more broadly used will be associated with the audio capabilities of the web platform. That said, if anybody has grave concerns over this name, then we can discuss alternatives. I'd rather see the directories named webaudio and the enable flag named WEBAUDIO. This would match the naming of 'websockets' (although not web workers, which is simply named 'workers'). I agree that this is directly related to the audio element, but it is an optional piece (hence the enable flag) and so I think it should have its own naming. 
- ~Chris cmar...@apple.com ___ webkit-dev mailing list webkit-dev@lists.webkit.org http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev
Re: [webkit-dev] Web Audio API
On Aug 24, 2010, at 3:53 PM, Chris Rogers wrote: Hi Chris, That also sounds like a reasonable naming scheme. The only counter-argument I would have is that we have several directories in WebCore which don't have the 'web' prefix such as: WebCore/notifications WebCore/storage WebCore/workers (and not webnotifications, webstorage, webworkers) I guess I'm just trying to keep to a simpler naming convention. Since WebKit is all about the web, it seems like 'web' is implied. Either way is fine with me, but I have a preference for the simpler 'audio'. Yes, WebKit is not consistent. But websockets does follow this model. And the word 'audio' is so generic, I think making it more specific would help people better understand what it is. Also, the name of the spec is Web Audio API, so using the web prefix seems like the best choice. - ~Chris cmar...@apple.com ___ webkit-dev mailing list webkit-dev@lists.webkit.org http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev
Re: [webkit-dev] ANGLE compile failure?
On Aug 17, 2010, at 1:42 PM, Eric Uhrhane wrote: I'm getting the same failure in two clients, and the second has nothing checked out. This is on OSX 10.5.8, using the standard webkit build scripts and code synced yesterday [several times, same error]. Given that I don't hear anyone else screaming, there's probably something wrong with my environment, but I can't see what it is. I've got Xcode 3.1.4 with Component versions Xcode IDE: 1203.0 Xcode Core: 1204.0 ToolSupport: 1186.0. I've used these clients many times before without any issues, and haven't changed anything I can think of recently. Does this error look familiar to anyone? This means you're not building ANGLE. What are you using to build? If you use build-webkit or make, ANGLE should get built automatically. It should be built first if you're using build-webkit and right after JavaScriptGlue if you're using make. Are you seeing ANGLE attempting to build? Is it failing? Also, look for WebKitBuild/Debug/usr/local/include/ANGLE. That's where the missing include should be. If you don't have that dir, then you're not building ANGLE. - ~Chris cmar...@apple.com ___ webkit-dev mailing list webkit-dev@lists.webkit.org http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev
Re: [webkit-dev] ANGLE compile failure?
On Aug 17, 2010, at 3:24 PM, Eric Uhrhane wrote: On Tue, Aug 17, 2010 at 2:00 PM, Chris Marrin cmar...@apple.com wrote: On Aug 17, 2010, at 1:42 PM, Eric Uhrhane wrote: I'm getting the same failure in two clients, and the second has nothing checked out. This is on OSX 10.5.8, using the standard webkit build scripts and code synced yesterday [several times, same error]. Given that I don't hear anyone else screaming, there's probably something wrong with my environment, but I can't see what it is. I've got Xcode 3.1.4 with Component versions Xcode IDE: 1203.0 Xcode Core: 1204.0 ToolSupport: 1186.0. I've used these clients many times before without any issues, and haven't changed anything I can think of recently. Does this error look familiar to anyone? This means you're not building ANGLE. What are you using to build? If you use build-webkit or make, ANGLE should get built automatically. It should be built first if you're using build-webkit and right after JavaScriptGlue if you're using make. Are you seeing ANGLE attempting to build? Is it failing? Also, look for WebKitBuild/Debug/usr/local/include/ANGLE. That's where the missing include should be. If you don't have that dir, then you're not building ANGLE. I'm using build-webkit. There is no WebKitBuild/Debug/usr/local/include/ANGLE. Building now [not a clean rebuild], the first thing it goes through is various parts of JavaScriptCore, then it does JavaScriptGlue, before failing on WebCore. The only occurrence of "ANGLE" in the whole output is the error about the missing include file. Is your copy of build-webkit out of date? Line 345 should read: splice @projects, 0, 0, ANGLE; The OpenSource Leopard buildbot is compiling ANGLE, so I'm not sure why your machine is not. I'm firing off a Leopard build now to see if I can reproduce it here. In the meantime, try a clean build by deleting the WebKitBuild directory. 
- ~Chris cmar...@apple.com ___ webkit-dev mailing list webkit-dev@lists.webkit.org http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev
Re: [webkit-dev] ANGLE
On Aug 13, 2010, at 4:25 AM, Zoltan Herczeg wrote: Hi, ANGLE looks like a graphics helper library. Why is it placed in the root WebKit directory? Perhaps WebCore/platform/graphics or some kind of /3rd-party directory would be better, wouldn't it? ANGLE is a library from Google (http://code.google.com/p/angleproject/) which is mirrored in the WebKit tree for convenience. Since it's not actually part of WebKit, it was felt that keeping it in the root was best. - ~Chris cmar...@apple.com ___ webkit-dev mailing list webkit-dev@lists.webkit.org http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev
Re: [webkit-dev] Video feature request
On Jan 14, 2010, at 9:23 AM, Oliver Hunt wrote: On Jan 14, 2010, at 9:00 AM, Zack S wrote: Hi all, There's a feature that I would find useful that's not, as far as I know, a part of HTML5/JavaScript in WebKit-based browsers. Namely, I'd like to be able to open a video from within JavaScript without necessarily wanting to play it, but rather I want to be able to extract frames out of it as image objects and/or to extract the corresponding sound portions of frames. I can't speak to the sound aspect of this, but you can get pixel data by painting a video to the canvas element. There are cross-domain restrictions on this functionality, right? - ~Chris cmar...@apple.com ___ webkit-dev mailing list webkit-dev@lists.webkit.org http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev
[webkit-dev] DOMAttrModified events in WebKit
There is a bug posted (https://bugs.webkit.org/show_bug.cgi?id=8191) about the implementation of DOMAttrModified events. The implementation is quite far along and there are many posts begging for it, but it is stalled because of one comment (https://bugs.webkit.org/show_bug.cgi?id=8191#c17) which is opposed to it on the grounds that it would be slow and buggy, and that there is an alternative proposal. I'm interested in this issue because of WebGL. There is currently an implementation of a subset of X3D in WebGL (http://x3dom.org), which allows you to add a 3D scene as a hierarchy of nodes rendered by WebGL. They have an example of using mutation events to allow the DOM to change the attributes of nodes in the hierarchy and have that redraw the rendered scene. That's just one of the many uses of mutation events. Others are sprinkled throughout the bug log. I don't think the discussion here should be whether mutation events are A Good Thing or not. They are already being put to good use in Firefox, and they are part of DOM Level 2. The question should be whether they would add value to WebKit without penalizing performance when they are not used. I think there is enough evidence to say that they would be useful for many purposes. And from a cursory look at the patch it appears that, while there would be overhead in doing the checks, it would be fairly minor unless an event were actually attached to an attribute. And more might be done to short circuit earlier and further reduce the overhead. But I haven't actually tried the patch to see what the baseline overhead is. If we reach consensus that we should add this feature, I would be happy to get some numbers about the overhead. - ~Chris cmar...@apple.com ___ webkit-dev mailing list webkit-dev@lists.webkit.org http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev
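The "short circuit earlier" idea, paying dispatch cost only when a listener is actually attached, can be illustrated outside any real DOM. The class and method names below are invented for this sketch and bear no relation to WebKit's actual DOMAttrModified plumbing:

```javascript
// Toy sketch of short-circuited mutation-event dispatch: an
// attribute change pays nothing beyond a length check unless a
// listener is actually registered. Hypothetical names throughout.
class ObservableNode {
    constructor() {
        this.attributes = {};
        this.listeners = []; // DOMAttrModified-style listeners
    }
    addAttrModifiedListener(fn) {
        this.listeners.push(fn);
    }
    setAttribute(name, value) {
        const prev = this.attributes[name];
        this.attributes[name] = value;
        // Cheap early-out: no listeners means no event object is
        // ever created and no dispatch machinery runs.
        if (this.listeners.length === 0)
            return;
        const event = { attrName: name, prevValue: prev, newValue: value };
        for (const fn of this.listeners)
            fn(event);
    }
}

const node = new ObservableNode();
const seen = [];
node.setAttribute("x", 1);                 // no listener: no dispatch
node.addAttrModifiedListener(e => seen.push(e));
node.setAttribute("x", 2);                 // dispatched
console.log(seen.length);                  // 1
console.log(seen[0].prevValue, seen[0].newValue); // 1 2
```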
[webkit-dev] Resolution on switch statement indentation
I saw another patch get rejected today because of switch statement indentation. We discussed this last week, and I saw a lot of support for my proposal of indenting case labels from their switch. But the discussion did not end in resolution. To summarize, here are the options mentioned: 1) Case labels always have the same indentation as their switch (today's rule) 2) Case labels always indent 4 spaces in from their switch (my preference) 3) Case labels indent 2 spaces in from their switch. (Maciej's rule) I was a little unclear on Maciej's rule. The last part of his rule is In the case where a case label is followed by a block, include the open brace on the same line as the case label and indent the matching close brace only two spaces (but still 4 spaces for the contained statements). Did he mean that the contained statements would be indented 4 spaces from the case label, meaning they would be indented 6 spaces from the switch? That's the only way the closing brace could be indented 2 spaces from the switch and the code indented 4 spaces from the brace. If so, I especially dislike this rule because it places the entire body of a block at a nonstandard indentation. Anyway, how do we come to resolution on this? - ~Chris cmar...@apple.com ___ webkit-dev mailing list webkit-dev@lists.webkit.org http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev
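To make the options concrete, here is the same trivial switch written under option 1 (today's rule) and option 2 (case labels indented from the switch); the snippets are plain JavaScript for illustration, not WebKit code:

```javascript
// Option 1 (current rule): case labels flush with the switch.
function describe(x) {
    switch (x) {
    case 1:
        return "one";
    default:
        return "other";
    }
}

// Option 2 (proposed): case labels indented 4 spaces from the
// switch. (Option 3 is the same shape with 2-space label indents.)
function describeIndented(x) {
    switch (x) {
        case 1:
            return "one";
        default:
            return "other";
    }
}

console.log(describe(1), describeIndented(2)); // one other
```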
Re: [webkit-dev] Resolution on switch statement indentation
On Dec 9, 2009, at 8:13 AM, Adam Treat wrote: On Wednesday 09 December 2009 10:26:24 am Chris Marrin wrote: I saw another patch get rejected today because of switch statement indentation. We discussed this last week, and I saw a lot of support for my proposal of indenting case labels from their switch. But the discussion did not end in resolution. To summarize, here are the options mentioned: 1) Case labels always have the same indentation as their switch (today's rule) ... Anyway, how do we come to resolution on this? What is wrong with keeping the current rule? As I pointed out in the previous thread, I feel like it makes the code harder to read, and got several responses of agreement. Also most of the switch statements in the code currently indent the case labels, so it will mean lots of code changes. I think it would be better to change the rule. - ~Chris cmar...@apple.com ___ webkit-dev mailing list webkit-dev@lists.webkit.org http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev
Re: [webkit-dev] Resolution on switch statement indentation
On Dec 9, 2009, at 9:41 AM, Maciej Stachowiak wrote: On Dec 9, 2009, at 7:26 AM, Chris Marrin wrote: I saw another patch get rejected today because of switch statement indentation. We discussed this last week, and I saw a lot of support for my proposal of indenting case labels from their switch. But the discussion did not end in resolution. To summarize, here are the options mentioned: 1) Case labels always have the same indentation as their switch (today's rule) 2) Case labels always indent 4 spaces in from their switch (my preference) 3) Case labels indent 2 spaces in from their switch. (Maciej's rule) I was a little unclear on Maciej's rule. The last part of his rule is In the case where a case label is followed by a block, include the open brace on the same line as the case label and indent the matching close brace only two spaces (but still 4 spaces for the contained statements). Did he mean that the contained statements would be indented 4 spaces from the case label, meaning they would be indented 6 spaces from the switch? I meant 4 spaces from the switch (i.e. 2 additional spaces from the case label). switch (x) { case foo: { fooFunc(); } case bar: barFunc(); } Ok, the example above seems to be missing a space (it indents the case label 1 and the block 3). But I assume you meant to indent 2 and 4. If so, I understand what you are proposing. I still don't like it, but I understand it. I think a consistent 4 space indentation scheme avoids confusion and makes all the indentation tools in editors work correctly. If excessive indentation really is that big of a concern (which I don't think it is) I would rather see the current rule (rule 1) used. - ~Chris cmar...@apple.com ___ webkit-dev mailing list webkit-dev@lists.webkit.org http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev
[webkit-dev] Switch statement indentation
The style script flagged an issue in my code yesterday that I didn't even know existed. How do you indent case clauses for a switch statement? The WebKit style states that case clauses have the same indentation as their switch. I HATE that style. And I had no idea that was the WebKit style. I use the "indent the case" style and have never had anyone flag it in the past. Without getting into style religion, I was looking at the code and it seems that there are many more uses of the "indent the case" style than the correct style. Maybe we could change the style rule in the interest of changing fewer files (and because I think it generally reads better)? I'm fine with changing my code to match the style. But the style script is going to be kicking out a lot of these errors and I think we should make sure we want to go down this road before that happens. - ~Chris cmar...@apple.com ___ webkit-dev mailing list webkit-dev@lists.webkit.org http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev
Re: [webkit-dev] Switch statement indentation
On Dec 2, 2009, at 4:47 PM, Alexey Proskuryakov wrote: On 02.12.2009, at 15:25, Chris Marrin wrote: Maybe we could change the style rule in the interest of changing fewer files (and because I think it generally reads better)? I support changing or dropping this rule. Because of this rule, there is no good way to format cases that need braces, such as: switch (i) { case 1: { String a(a); break; } case 2: { String b(b); break; } } The downside is that some code can get indented too far, which is particularly unfortunate for large switches. But I'm not convinced that having a standard for this improves consistency of the code in any meaningful way (*), perhaps this should be decided on a case by case basis. The indented too far problem can be solved by sticking really big switches in their own function. I think this is better style anyway. I've always found huge switches in the middle of a long function to be very confusing. - ~Chris cmar...@apple.com ___ webkit-dev mailing list webkit-dev@lists.webkit.org http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev
Re: [webkit-dev] New requirement for building on Windows coming
On Nov 24, 2009, at 11:12 AM, Alexey Proskuryakov wrote: On 24.11.2009, at 9:46, Adam Roben wrote: On second thought, even if we soft-link, we'll still have dependencies on the D3D headers... Can we make a local copy of those? I've used the DXSDK_DIR env var to handle both the include and lib locations. This all seems to work fine and will only require the DX SDK when we turn on ACCELERATED_COMPOSITING. Given that, do we still need soft-linking or a local copy of the headers? - ~Chris cmar...@apple.com ___ webkit-dev mailing list webkit-dev@lists.webkit.org http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev
Re: [webkit-dev] Running WebGL layout tests
On Nov 3, 2009, at 12:35 PM, Kenneth Russell wrote: Hi, Trying to run the WebGL layout tests in LayoutTests/fast/canvas/webgl. Here's the command line I'm using: run-webkit-tests --debug LayoutTests/fast/canvas/webgl/[test name.html] (I built WebKit --debug.) All of the tests fail while attempting to fetch the 3D context. (TypeError: Result of expression 'context' [undefined] is not an object.) The same tests run fine when run under run-safari --debug. Do I need to do some other defaults write or other setup to allow the layout test harness to run with WebGL enabled? So you're saying that the overridePreferences() call is no longer turning on the WebKitWebGLEnabled flag in DRT? That's what should turn on the option. What happens if you run: defaults write com.apple.Safari WebKitWebGLEnabled -bool YES and then run layout tests? - ~Chris cmar...@apple.com ___ webkit-dev mailing list webkit-dev@lists.webkit.org http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev
Re: [webkit-dev] Style guide for indenting nested #if #define in headers
On Oct 15, 2009, at 8:54 AM, Timothy Hatcher wrote: This is rather ugly and does not match the majority of the code we have in WebCore already. I agree. I don't find any issues with the current, unindented style. I just think that ifdefs that span more than 10 lines or so should always put the condition on the #endif as a comment. On Oct 15, 2009, at 8:12 AM, Eric Seidel wrote: I really like the indented style that some folks have started using: #if foo #define BAR BAZ #else #define BAR BARF #endif I think we should standardize on something and add it to the style guides. ___ webkit-dev mailing list webkit-dev@lists.webkit.org http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev - ~Chris cmar...@apple.com ___ webkit-dev mailing list webkit-dev@lists.webkit.org http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev
Re: [webkit-dev] WebKit and Khronos Group
On Aug 8, 2009, at 7:02 PM, Jeremy Orlow wrote: On Sat, Aug 8, 2009 at 2:02 PM, Maciej Stachowiak m...@apple.com wrote: On Aug 8, 2009, at 11:39 AM, Harry Underwood wrote: Thanks for the link. Didn't even know that WebGL is being considered by WebKit. What Oliver showed you is patches to pretty much fully implement it, done by an Apple employee. So we're doing more than considering it. I expect there will be more to announce when the patches land. But another question, if you don't mind. Is O3D considered as a technical competitor or conflict with Apple's CSS and SVG extensions, or should it be considered as such? We do have some extensions to do 3D transforms with CSS, creating 2.5D visuals with flat CSS boxes manipulated in 3D, and fully integrated with the page content. For example, you can use it to apply 3D effects to a navigation menu or a video. O3D is much more about creating full 3D models and scenes, without tight integration with the Web content. In that respect, O3D is more of a competitor for WebGL than 3D transforms/transitions. I'm not personally involved in the WebGL or O3D efforts, but I can speak to some of this. I agree that O3D and WebGL are more similar to each other than the CSS 3D transforms. Both are fairly low level, though they take fairly different approaches to rendering. O3D is a retained mode API (somewhat like SVG) whereas WebGL is an immediate mode API (much like Canvas). In other words, for O3D, you use JavaScript to build up a scene and transform it between frames. In WebGL you use JavaScript to explicitly render each frame. The latter gives you more control but is more limited by the speed of JavaScript and the WebGL bindings. I guess the point I'm trying to make here is that all three technologies are actually complementary to each other. Incidentally, it's my understanding that Google showed off a prototype version of Chrome running both O3D and WebGL at some conference last week. 
It's pretty cool how fast things are moving with respect to 3D on the web. :-) It was the OpenGL conference and they showed a working implementation of WebGL (with some unknown vintage of the current API) at the OpenGL BOF. It was well received. But they were also showing O3D on the exhibition floor. So they are apparently pursuing both approaches. As far as I know they are not attempting to move O3D through any standards body. So for now it is just a Google experiment. - ~Chris cmar...@apple.com ___ webkit-dev mailing list webkit-dev@lists.webkit.org http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev
Re: [webkit-dev] Patch process - let's make it better
On Jul 10, 2009, at 3:55 PM, Maciej Stachowiak wrote: Hi everyone, One common topic for discussion has been how to make our process around patch submission better. As the project grows, it's becoming more important for this process to work really smoothly, and we are seeing some breakdowns. I've been doing a lot of thinking about this, and discussion with some project old hands. I think the right way to tackle this is to identify the process problems, and make sure we address each one. I'd also like to start by fixing the problems that can be addressed without making major wholesale tools changes first, then look at the bigger changes. Here are my thoughts on the steps in the lifecycle of a patch: === 1) Submitting the patch === Steps: 1.1) File a bug if there isn't one already. 1.2) Put the bug number in your ChangeLog entry. Maybe it's because I'm a noob and there is a better way, but one of the most annoying things about the patch process is the need to add ChangeLog entries. It's not hard to create a ChangeLog entry (given the existence of prepare-ChangeLog). The annoying part is the fact that I ALWAYS get conflicts in at least one ChangeLog file when I try to check in. I have to fix these by hand, do svn resolved, and try to check in again. Assuming someone hasn't checked something in under me in the 2 minutes it took me to fix the ChangeLogs (which has happened a couple of times), I can successfully commit. This isn't THAT big of a deal, but it is annoying. And I'm not sure why we need ChangeLogs when we have a complete log of every checkin from svn anyway? Maybe it would be better to do away with ChangeLogs and just put stricter controls on what's in a commit message. - ~Chris cmar...@apple.com ___ webkit-dev mailing list webkit-dev@lists.webkit.org http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev
Re: [webkit-dev] Proposed Timer API
On Oct 2, 2008, at 6:13 PM, Maciej Stachowiak wrote: On Oct 2, 2008, at 6:01 PM, Cameron McCormack wrote: Hi Maciej. Cameron McCormack: If possible, it would be nice if there could be some degree of compatibility between this proposed API and the one in SVG Tiny 1.2: http://dev.w3.org/SVG/profiles/1.2T/publish/svgudom.html#svg__SVGTimer Maciej Stachowiak: I considered that, but I don't like the fact that it makes the common zero-delay continuation callback case into three lines of code instead of one, for what I think is no practical benefit. Justin’s proposed API seems to need four lines for that case: var t = new Timer(); t.repeatCount = 1; t.addEventListener('timercomplete', function() { … }, false); t.start(); compared with the three for SVG’s timer: var t = createTimer(0, -1); t.addEventListener('SVGTimer', function() { … }, false); t.start(); See my proposal on another thread, which makes this: startTimer(0, false, function() { ... }); I really like the idea of a Timer object. It would allow you to separate creation from starting, allows you to pause and add other API's to the interface. Can the constructor be used to simplify the creation: var t = new Timer(0, false, function() { ...}); which would start the timer immediately, as in your example. Or you could do: var t = new Timer(function() { ... }); ... t.startOneShot(1.76); etc. And you could easily add animation or media API's for synchronization: var t = new Timer(1.76, function() { ... }); // when the timer is triggered, it will run for 1.76 seconds var transition = window.getTransitionForElement(element, left); transition.trigger(t); ... element.style.left = 100px; This would cause the timer to start when the left transition starts and fire its event 1.76 seconds later. - ~Chris [EMAIL PROTECTED] ___ webkit-dev mailing list webkit-dev@lists.webkit.org http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev
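A rough sketch of the Timer object being discussed, layered on top of ordinary setTimeout/setInterval, might look like the following. The names (Timer, start, startOneShot, stop) are taken from the proposals in this thread; the exact semantics here are my guess at the intent, not the proposed implementation:

```javascript
// Hypothetical sketch of the proposed Timer object built on the
// existing setTimeout/setInterval primitives. Delays are in
// seconds, matching the examples in the thread.
class Timer {
    constructor(delaySeconds, repeats, callback) {
        this.delaySeconds = delaySeconds;
        this.repeats = !!repeats;
        this.callback = callback;
        this._handle = null;
        this._isInterval = false;
    }
    start() {
        this.stop(); // restarting an active timer resets it
        const ms = this.delaySeconds * 1000;
        this._isInterval = this.repeats;
        this._handle = this._isInterval
            ? setInterval(this.callback, ms)
            : setTimeout(this.callback, ms);
    }
    startOneShot(delaySeconds) {
        this.repeats = false;
        this.delaySeconds = delaySeconds;
        this.start();
    }
    stop() {
        if (this._handle === null)
            return;
        if (this._isInterval)
            clearInterval(this._handle);
        else
            clearTimeout(this._handle);
        this._handle = null;
    }
    get isActive() { return this._handle !== null; }
}

// Mirrors "var t = new Timer(0, false, function() { ... })" above,
// with creation separated from starting.
const t = new Timer(0, false, () => {});
t.start();
console.log(t.isActive); // true
t.stop();
console.log(t.isActive); // false
```

Separating construction from start() is what makes the deferred-trigger use case (e.g. handing a not-yet-running timer to an animation or media event) expressible at all.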
Re: [webkit-dev] Proposed Timer API
On Oct 3, 2008, at 1:48 PM, Maciej Stachowiak wrote: On Oct 3, 2008, at 11:01 AM, Chris Marrin wrote: On Oct 2, 2008, at 6:13 PM, Maciej Stachowiak wrote: On Oct 2, 2008, at 6:01 PM, Cameron McCormack wrote: Hi Maciej. Cameron McCormack: If possible, it would be nice if there could be some degree of compatibility between this proposed API and the one in SVG Tiny 1.2: http://dev.w3.org/SVG/profiles/1.2T/publish/svgudom.html#svg__SVGTimer Maciej Stachowiak: I considered that, but I don't like the fact that it makes the common zero-delay continuation callback case into three lines of code instead of one, for what I think is no practical benefit. Justin’s proposed API seems to need four lines for that case: var t = new Timer(); t.repeatCount = 1; t.addEventListener('timercomplete', function() { … }, false); t.start(); compared with the three for SVG’s timer: var t = createTimer(0, -1); t.addEventListener('SVGTimer', function() { … }, false); t.start(); See my proposal on another thread, which makes this: startTimer(0, false, function() { ... }); I really like the idea of a Timer object. It would allow you to separate creation from starting, allows you to pause and add other API's to the interface. Can the constructor be used to simplify the creation: var t = new Timer(0, false, function() { ...}); which would start the timer immediately, as in your example. Or you could do: var t = new Timer(function() { ... }); ... t.startOneShot(1.76); I don't expect it to be a common use case to create a timer in one place and stop it in another. That being said, you can do this with the API as proposed: var t = startTimer(0, false, function() { ... }); t.stop(); // now you have a set up but non-running timer ... t.restart(); // now it's actually going I think wanting the timer to start right away is the more common case, so the API is biased in that direction rather than towards initially not running timers. 
I think the reason you don't see the pattern of deferred timer triggering is because today you just can't do it. I think the use case I described (triggering a timer on an animation or media event) will be common if and when we have that ability. In the above example, does the system guarantee that starting a timer and immediately stopping it will not ever fire that timer? I can't imagine that guarantee being possible, especially for very short (or zero) duration timers. If an implementation chooses to queue up timer events as soon as they time out (plus the optimization that zero duration timers would immediately queue), they would have to dig into that queue and rip out any timers that are stopped. And that might not even be desirable in many cases. What should happen when a timer times out while a JS function is running and you stop it? Should its event still run? I'm sure there are many interesting race conditions possible here. It seems like you would avoid these issues if you could have a param to startTimer (or a separate createTimer function) that prevented the timer from starting in the first place. - ~Chris [EMAIL PROTECTED] ___ webkit-dev mailing list webkit-dev@lists.webkit.org http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev