There are other cases as well where you want a copy.  Patterns are one 
example: you can create a pattern from another canvas, and I don't think the 
pattern is supposed to be live if that other canvas later changes.  SVG uses 
patterns as well, and I'm pretty sure a copy is the desired behavior there 
too.
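
To make the semantics concrete, here is a standalone toy sketch in C++ 
(made-up names, not the canvas or WebCore code): the pattern takes a snapshot 
of the pixels it was created from, so drawing into the source canvas 
afterwards does not affect it.

    #include <cstdint>
    #include <vector>

    using Pixels = std::vector<uint32_t>;

    // A pattern keeps its own copy of the pixels it was created from.
    struct Pattern {
        Pixels snapshot;
    };

    Pattern createPattern(const Pixels& sourceCanvasPixels)
    {
        return Pattern { sourceCanvasPixels }; // deep copy at creation time
    }

    int main()
    {
        Pixels canvas(16, 0xff0000ffu);           // the source canvas has content
        Pattern pattern = createPattern(canvas);  // the pattern snapshots it
        canvas.assign(16, 0xff00ff00u);           // the canvas changes afterwards
        // The pattern still holds the original pixels, i.e. it is not live.
        return pattern.snapshot[0] == 0xff0000ffu ? 0 : 1;
    }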

I don't think it's as simple as saying "image()" should never copy.

I spent some time thinking about this from a CG perspective, and I couldn't 
really come up with a good solution.  We use a bitmap context so that pixel 
access for get/putImageData is efficient, and any platform that wants those 
functions to be fast has to keep the pixels easily obtainable.  This means, 
for example, that we can't produce a live image using CGLayers.  Qt, the only 
current platform that returns a "live" image to canvas for rendering, appears 
to have made that tradeoff: get/putImageData look slower there than on other 
platforms.  The get has to call toImage() on a pixmap and then convert the 
result to obtain a usable representation of the data as a QImage, and the put 
has to render the changes back into the pixmap.
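
To make that cost concrete, the two paths look roughly like this, assuming 
the buffer is backed by a QPixmap; this is only an illustration of the point, 
not the actual WebKit/Qt ImageBuffer code:

    #include <QImage>
    #include <QPainter>
    #include <QPixmap>
    #include <QPoint>

    // getImageData-style read: the pixmap's pixels have to be pulled back
    // into a QImage and then converted before script can see raw bytes.
    QImage readBackPixels(const QPixmap& backingStore)
    {
        QImage image = backingStore.toImage();               // expensive readback
        return image.convertToFormat(QImage::Format_ARGB32); // plus a conversion
    }

    // putImageData-style write: the modified pixels have to be rendered back
    // into the pixmap rather than written into it directly.
    void writeBackPixels(QPixmap& backingStore, const QImage& data,
                         const QPoint& destination)
    {
        QPainter painter(&backingStore);
        painter.drawImage(destination, data);
    }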

I tried renaming the two methods (image() and imageForRendering()) to 
copiedImage and liveImage, and ended up with about an 80/20 split between the 
two at the call sites.  It looked really ugly to me, since in nearly every 
case the "liveImage" was only being used to paint the current contents of the 
ImageBuffer, i.e., it was just being passed to drawImage.  The copiedImage 
was typically passed to somebody who needed to retain it for later rendering, 
e.g., a pattern.
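
In other words, the two call-site shapes look roughly like this (a sketch 
only; the class and the paintCurrentContents/copiedImage names are made up, 
not a proposal for the real ImageBuffer interface):

    #include <cstdint>
    #include <memory>
    #include <vector>

    struct Image {
        std::vector<uint32_t> pixels; // toy pixel storage
    };

    class ImageBuffer {
    public:
        // The ~80% case: throwaway drawing of the current contents, e.g. the
        // body of drawImage(canvas, ...).  Nothing outlives the call, so a
        // live (non-copying) view of the pixels would be fine here.
        void paintCurrentContents(/* GraphicsContext& destination */) const
        {
            // ... blit m_pixels into the destination ...
        }

        // The ~20% case: the caller retains the result for later rendering,
        // e.g. a pattern, so it must be a copy that is independent of any
        // subsequent drawing into this buffer.
        std::unique_ptr<Image> copiedImage() const
        {
            auto image = std::make_unique<Image>();
            image->pixels = m_pixels; // deep copy
            return image;
        }

    private:
        std::vector<uint32_t> m_pixels;
    };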

So to sum up, the call sites want two things:

(1) To render the current contents of the image buffer efficiently.
(2) To obtain a copied image, independent of the image buffer, that is 
typically retained by someone else.

And the current implementations of image()/imageForRendering() do one of 
three things, depending on the platform:

(1) Make a copy of the image buffer into an Image (e.g., Skia).
(2) Make an Image that is a live representation of the image buffer contents 
(e.g., Qt, via imageForRendering).
(3) Make a copy-on-write image wrapper (e.g., CG).

Assuming the copy-on-write behavior in CG actually works (and I suspect it 
does), it's a pretty good solution.  You don't copy for throwaway drawing, 
pixel access via get/putImageData stays efficient, and there's no need to 
distinguish copied images from live ones, with the caveat that the cached 
image has to be cleared (only an issue for canvas, since nobody else draws 
continually into an ImageBuffer).
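
Here is a minimal model of behavior (3), using shared_ptr in place of 
whatever CG does internally; the class and method names are invented and it 
is only meant to show the shape of the behavior, including the cached-image 
caveat:

    #include <cstddef>
    #include <cstdint>
    #include <memory>
    #include <vector>

    using Pixels = std::vector<uint32_t>;

    class CowImageBuffer {
    public:
        explicit CowImageBuffer(size_t pixelCount)
            : m_pixels(std::make_shared<Pixels>(pixelCount)) { }

        // image() is cheap: it returns a cached snapshot that shares the
        // current pixels, so throwaway drawing never pays for a copy.
        std::shared_ptr<const Pixels> image() const
        {
            if (!m_cachedImage)
                m_cachedImage = m_pixels;
            return m_cachedImage;
        }

        // Drawing into the buffer copies the pixels only if a snapshot is
        // still alive (the copy-on-write), and it clears the cached image so
        // the next image() call sees the new contents.  That clearing is the
        // caveat above; only canvas hits it, because only canvas keeps
        // drawing into its buffer.
        void setPixel(size_t index, uint32_t value)
        {
            if (m_pixels.use_count() > 1)
                m_pixels = std::make_shared<Pixels>(*m_pixels); // copy on write
            m_cachedImage = nullptr;
            (*m_pixels)[index] = value;
        }

    private:
        std::shared_ptr<Pixels> m_pixels;
        mutable std::shared_ptr<const Pixels> m_cachedImage;
    };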

Anyway, I'm open to suggestions here. :)

As for removing the ImageBuffer completely, I don't think you can get away 
with that.  The canvas buffer's size does not necessarily match the size into 
which it is drawn, so you need to be able to get at the pixels of a possibly 
larger (or smaller) buffer.  You also have to be able to draw the contents of 
one canvas into another canvas, which means treating a canvas as an image.  
Again, that means being able to get at the contents of the canvas itself as a 
separate buffer/image.
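
As a crude standalone sketch of those two requirements (made-up types, 
nearest-neighbor scaling for brevity): the backing buffer has its own size, 
and one buffer's contents can be drawn into another buffer at a different 
size.

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    // A toy backing store: its width/height are independent of wherever its
    // contents end up being drawn.
    struct Buffer {
        size_t width = 0, height = 0;
        std::vector<uint32_t> pixels; // width * height pixels

        uint32_t at(size_t x, size_t y) const { return pixels[y * width + x]; }
        uint32_t& at(size_t x, size_t y) { return pixels[y * width + x]; }
    };

    // Draw the whole of `source` (think: another canvas's buffer) into the
    // top-left destWidth x destHeight rect of `destination`.  This is the
    // "treat the canvas as an image" operation, with the scaling handled here.
    void drawBufferScaled(const Buffer& source, Buffer& destination,
                          size_t destWidth, size_t destHeight)
    {
        if (!source.width || !source.height)
            return;
        for (size_t y = 0; y < destHeight && y < destination.height; ++y) {
            for (size_t x = 0; x < destWidth && x < destination.width; ++x) {
                size_t sourceX = x * source.width / destWidth;
                size_t sourceY = y * source.height / destHeight;
                destination.at(x, y) = source.at(sourceX, sourceY);
            }
        }
    }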

dave
(hy...@apple.com)

