Re: [whatwg] Offscreen canvas (or canvas for web workers).

2010-03-22 Thread Mike Shaver
On Mon, Mar 15, 2010 at 3:05 AM, Maciej Stachowiak m...@apple.com wrote:
 === Summary of Data ===

 1) In all browsers tested, copying to an ImageData and then back to a canvas
 (two blits) is faster than a 2x scale.
 2) In all browsers tested, twice the cost of a canvas-to-canvas blit is
 considerably less than the cost of copy to and back from ImageData.
 3) In all browsers tested, twice the cost of a canvas-to-canvas blit is still
 less than the cost of a single 0.5x scale or a rotate.

With hardware acceleration in play, things seem to change a lot there,
though it may be that it's just breaking the test somehow?  The
displayed images look correct, FWIW.

My results on a Windows 7 machine, with the browsers maximized on a
1600x1200 display.

FF 3.6:
direct copy: 8ms
indirect: 408ms
2x scale: 1344ms
0.5x scale: 85ms
rotate: 440ms

FF 3.7a (no D2D):
direct copy: 12.5ms
indirect: 101ms
2x scale: 532ms
0.5x scale: 33ms
rotate: 389ms

FF 3.7a (D2D):
direct copy: 0.5ms
indirect: 136ms
2x scale: 0.5ms
0.5x scale: 0.5ms
rotate: 0.5ms

WebKit r56194:
direct copy: 18.5ms
indirect copy: 113ms
2x scale: 670ms
0.5x scale: 112ms
rotate: 129ms

This supports the idea of a specialized API, perhaps, since it will keep
authors from having to figure out which path to take in order to avoid
a massive hit when using the canvas copies (100x or more for
D2D-enabled FF, if the test's results are correct). It also probably
indicates that everyone is going to get a lot faster in the next
while, so performance tradeoffs should perhaps not be baked too deeply
into the premises for these APIs.
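
A measurement along these lines is easy to sketch; the canvas sizes and
iteration count below are illustrative, not taken from the actual test page:

  // Minimal blit-vs-scale timing sketch -- assumed setup, not the code
  // from webkit.org/demos/canvas-perf/canvas.html. Note that with a
  // GPU-accelerated canvas these calls may complete asynchronously,
  // which can skew numbers exactly as described above.
  var src = document.createElement('canvas');
  src.width = src.height = 512;
  var dst = document.createElement('canvas');
  dst.width = dst.height = 1024;
  src.getContext('2d').fillRect(0, 0, 512, 512);
  var dctx = dst.getContext('2d');

  function time(label, fn, n) {
    var start = Date.now();
    for (var i = 0; i < n; i++) fn();
    console.log(label + ': ' + (Date.now() - start) / n + 'ms');
  }

  time('direct copy', function () { dctx.drawImage(src, 0, 0); }, 100);
  time('2x scale', function () { dctx.drawImage(src, 0, 0, 1024, 1024); }, 100);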

Other more complex tests like blurring or desaturating or doing edge
detection, etc. may show other tradeoffs, and I think we're working on
a performance suite for tracking our own work that may illuminate some
of those.  Subject, of course, to the incredibly fluid nature of all
browser performance analysis these days!

Mike


Re: [whatwg] Offscreen canvas (or canvas for web workers).

2010-03-15 Thread Maciej Stachowiak


On Mar 14, 2010, at 6:22 PM, Jonas Sicking wrote:



One way to do it would be to have a function somewhere, not
necessarily on the 2D context, which given a Blob, returns an
ImageData object. However this still results in the image being loaded
twice into memory, so would only really help if you want to operate on
an ImageData object directly.


I think loading via an img element in order to draw the image is
fine. An HTMLImageElement is just as drawable as an ImageData and
already has all the logic for asynchronous loading and processing of
images.
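
A minimal sketch of that pattern (the URL is a placeholder):

  // Asynchronous load via an img element, then a synchronous draw; no
  // ImageData round trip is needed just to get pixels onto a canvas.
  var img = new Image();
  img.onload = function () {
    document.getElementsByTagName('canvas')[0]
        .getContext('2d').drawImage(img, 0, 0);
  };
  img.src = 'photo.jpg'; // placeholder URL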



I agree that the number of steps is not important for responsiveness
or performance (though it is for complexity). However several of those
steps seemed to involve a non-trivial amount of CPU usage, which was the
concern expressed in my initial mail.

At the very least I think we have a skewed proposal. The main use
cases that have been brought up are scaling and rotating images.
However the proposal is far from optimal for fulfilling that use case.
For scaling, it's fairly complex and uses more CPU cycles, both on the
main thread, and in total, than would be needed with an API more
optimized for that use case. For rotating it doesn't do anything.


You're assuming a scale and a rotate are both less expensive than two  
blits. Since no one else has provided perf data, I made my own test:


http://webkit.org/demos/canvas-perf/canvas.html

=== Results ===

= Safari (w/ WebKit trunk) =

Direct image copy: 39ms
Indirect copy (via ImageData): 138.5ms
Copy with 2x scale: 717ms
Copy with 0.5x scale: 59.5ms
Copy with rotate: 142.5ms

= Chrome dev 5.0.322.2 =

Direct image copy: 63ms
Indirect copy (via ImageData): 161.5ms
Copy with 2x scale: 1376.5ms
Copy with 0.5x scale: 70.5ms
Copy with rotate: 259ms

= Opera 10.5 alpha =

Direct image copy: 89ms
Indirect copy (via ImageData): 428.5ms
Copy with 2x scale: 963.5ms
Copy with 0.5x scale: 61ms
Copy with rotate: 150ms

= Firefox 3.6 =

Direct image copy: 81ms
Indirect copy (via ImageData): 693.5ms
Copy with 2x scale: 1703.5ms
Copy with 0.5x scale: 284.5ms
Copy with rotate: 568.5ms

=== Summary of Data ===

1) In all browsers tested, copying to an ImageData and then back to a  
canvas (two blits) is faster than a 2x scale.
2) In all browsers tested, twice the cost of a canvas-to-canvas blit  
is considerably less than the cost of copy to and back from ImageData.
3) In all browsers tested, twice the cost of a canvas-to-canvas blit is
still less than the cost of a single 0.5x scale or a rotate.



=== Conclusions ===

1) For scaling an image up 2x, copying to an ImageData and back for  
processing on a Worker would improve responsiveness, relative to just  
doing the scale on the main thread.


2) Copying from one canvas to another is much faster than copying to/ 
from ImageData. To make copying to a Worker worthwhile as a  
responsiveness improvement for rotations or downscales, in addition to  
the OffscreenCanvas proposal we would need a faster way to copy image  
data to a Worker. One possibility is to allow an OffscreenCanvas to be  
copied to and from a background thread. It seems this would be much  
much faster than copying via ImageData.
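
For concreteness, a sketch of the round trip measured above, assuming a
page-global canvas and a worker script named scale-worker.js (both names
are illustrative; the invert loop stands in for real scale/rotate work):

  // Main thread: one getImageData blit out, one putImageData blit back,
  // plus a structured-clone copy of the pixels in each postMessage.
  var ctx = canvas.getContext('2d');
  var worker = new Worker('scale-worker.js');
  worker.onmessage = function (e) {
    ctx.putImageData(e.data, 0, 0);
  };
  worker.postMessage(ctx.getImageData(0, 0, canvas.width, canvas.height));

  // scale-worker.js: per-pixel work happens off the main thread.
  onmessage = function (e) {
    var d = e.data.data;
    for (var i = 0; i < d.length; i += 4) {
      d[i] = 255 - d[i];
      d[i + 1] = 255 - d[i + 1];
      d[i + 2] = 255 - d[i + 2];
    }
    postMessage(e.data);
  };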



Regards,
Maciej



Re: [whatwg] Offscreen canvas (or canvas for web workers).

2010-03-15 Thread Jonas Sicking
 I agree that the number of steps is not important for responsiveness
 or performance (though it is for complexity). However several of those
 steps seemed to involve a non-trivial amount of CPU usage, which was the
 concern expressed in my initial mail.

 At the very least I think we have a skewed proposal. The main use
 cases that have been brought up are scaling and rotating images.
 However the proposal is far from optimal for fulfilling that use case.
 For scaling, it's fairly complex and uses more CPU cycles, both on the
 main thread, and in total, than would be needed with an API more
 optimized for that use case. For rotating it doesn't do anything.

 You're assuming a scale and a rotate are both less expensive than two blits.
 Since no one else has provided perf data, I made my own test:

 http://webkit.org/demos/canvas-perf/canvas.html

 === Results ===

 = Safari (w/ WebKit trunk) =

 Direct image copy: 39ms
 Indirect copy (via ImageData): 138.5ms
 Copy with 2x scale: 717ms
 Copy with 0.5x scale: 59.5ms
 Copy with rotate: 142.5ms

 = Chrome dev 5.0.322.2 =

 Direct image copy: 63ms
 Indirect copy (via ImageData): 161.5ms
 Copy with 2x scale: 1376.5ms
 Copy with 0.5x scale: 70.5ms
 Copy with rotate: 259ms

 = Opera 10.5 alpha =

 Direct image copy: 89ms
 Indirect copy (via ImageData): 428.5ms
 Copy with 2x scale: 963.5ms
 Copy with 0.5x scale: 61ms
 Copy with rotate: 150ms

 = Firefox 3.6 =

 Direct image copy: 81ms
 Indirect copy (via ImageData): 693.5ms
 Copy with 2x scale: 1703.5ms
 Copy with 0.5x scale: 284.5ms
 Copy with rotate: 568.5ms

 === Summary of Data ===

 1) In all browsers tested, copying to an ImageData and then back to a canvas
 (two blits) is faster than a 2x scale.
 2) In all browsers tested, twice the cost of a canvas-to-canvas blit is
 considerably less than the cost of copy to and back from ImageData.
 3) In all browsers tested, twice the cost of a canvas-to-canvas blit is still
 less than the cost of a single 0.5x scale or a rotate.


 === Conclusions ===

 1) For scaling an image up 2x, copying to an ImageData and back for
 processing on a Worker would improve responsiveness, relative to just doing
 the scale on the main thread.

 2) Copying from one canvas to another is much faster than copying to/from
 ImageData. To make copying to a Worker worthwhile as a responsiveness
 improvement for rotations or downscales, in addition to the OffscreenCanvas
 proposal we would need a faster way to copy image data to a Worker. One
 possibility is to allow an OffscreenCanvas to be copied to and from a
 background thread. It seems this would be much much faster than copying via
 ImageData.

We're clearly going in circles here. My point is this:

The two main use cases that have been brought up are scaling and
rotating images off the main thread in order to improve
responsiveness. The proposed solution addresses these use cases fairly
poorly, both in that APIs could be designed that make these things
simpler, and in that APIs could be designed that perform better, both
by putting less work on the main thread and by doing less work in
general.

This does not take away from the fact that the proposal can (based
on your data) be used to improve performance when doing scaling.

/ Jonas


Re: [whatwg] Offscreen canvas (or canvas for web workers).

2010-03-15 Thread Maciej Stachowiak


On Mar 15, 2010, at 12:28 AM, Jonas Sicking wrote:


=== Conclusions ===

1) For scaling an image up 2x, copying to an ImageData and back for
processing on a Worker would improve responsiveness, relative to just
doing the scale on the main thread.

2) Copying from one canvas to another is much faster than copying
to/from ImageData. To make copying to a Worker worthwhile as a
responsiveness improvement for rotations or downscales, in addition to
the OffscreenCanvas proposal we would need a faster way to copy image
data to a Worker. One possibility is to allow an OffscreenCanvas to be
copied to and from a background thread. It seems this would be much
much faster than copying via ImageData.


We're clearly going in circles here. My point is this:

The two main use cases that have been brought up are scaling and
rotating images off the main thread in order to improve
responsiveness. The proposed solution addresses these use cases fairly
poorly, both in that APIs could be designed that make these things
simpler, and in that APIs could be designed that perform better, both
by putting less work on the main thread and by doing less work in
general.


Do you have a specific proposal for how to handle those particular use  
cases? (Side note: I didn't test how efficient it would be to use  
WebGL to scale or rotate images, in part because I'm not sure how to  
do it. If you know how to code it, I'll gladly add it to my test case.)


BTW, although no one has provided specific use cases along these lines,
I can imagine that Photoshop-style image processing effects may be
compute-intensive enough that you want to do them off the main thread.
At least, I think there are some Photoshop filters that take noticeable
time to complete even as native-compiled C++. Maybe WebGL could be used
to do some or all of those things, it's hard to tell. It seems like
ImageData is *not* a great way to do them if you can help it, since
turning a large image into an ImageData is so expensive.
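
A desaturate pass, for example, is exactly this kind of per-pixel work once
the image is in an ImageData (a sketch; the weights are the usual Rec. 601
luma coefficients):

  // Desaturate an ImageData in place -- representative of the filters
  // that would be worth farming out to a Worker for large images.
  function desaturate(imageData) {
    var d = imageData.data;
    for (var i = 0; i < d.length; i += 4) {
      var y = 0.299 * d[i] + 0.587 * d[i + 1] + 0.114 * d[i + 2];
      d[i] = d[i + 1] = d[i + 2] = y;
    }
    return imageData;
  }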



This does not take away from the fact that the proposal can (based
on your data) be used to improve performance when doing scaling.


It looks to me like it could improve performance quite a lot if we add
a more efficient way to copy image data to a Worker.


Actually, given that ImageData is already copiable to a background  
thread, it seems like a good idea to add some form of image data that  
can be copied to a Worker with less work on the main thread. That  
seems valuable even if there is no actual graphics API on the  
background thread.


Regards,
Maciej





Re: [whatwg] Offscreen canvas (or canvas for web workers).

2010-03-15 Thread Philip Taylor
On Mon, Mar 15, 2010 at 7:05 AM, Maciej Stachowiak m...@apple.com wrote:
 Copying from one canvas to another is much faster than copying to/from
 ImageData. To make copying to a Worker worthwhile as a responsiveness
 improvement for rotations or downscales, in addition to the OffscreenCanvas
 proposal we would need a faster way to copy image data to a Worker. One
 possibility is to allow an OffscreenCanvas to be copied to and from a
 background thread. It seems this would be much much faster than copying via
 ImageData.

Maybe this indicates that implementations of getImageData/putImageData
ought to be optimised? e.g. do the expensive multiplications and
divisions in the premultiplication code with SIMD. (A seemingly
similar thing at http://bugzilla.openedhand.com/show_bug.cgi?id=1939
suggests SSE2 makes things 3x as fast). That would avoid the need to
invent new API, and would also benefit anyone who wants to use
ImageData for other purposes.
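
For reference, the conversion being discussed is roughly the following
per-pixel work (a scalar sketch of what an implementation does internally,
not any browser's actual code):

  // putImageData must premultiply each channel by alpha/255 when the
  // backing store is premultiplied (the +127 approximates rounding);
  // getImageData has to undo it with a divide per channel.
  function premultiply(d) { // d: a flat RGBA byte array
    for (var i = 0; i < d.length; i += 4) {
      var a = d[i + 3];
      d[i]     = (d[i]     * a + 127) / 255 | 0;
      d[i + 1] = (d[i + 1] * a + 127) / 255 | 0;
      d[i + 2] = (d[i + 2] * a + 127) / 255 | 0;
    }
  }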

-- 
Philip Taylor
exc...@gmail.com


Re: [whatwg] Offscreen canvas (or canvas for web workers).

2010-03-15 Thread Maciej Stachowiak


On Mar 15, 2010, at 3:46 AM, Philip Taylor wrote:

On Mon, Mar 15, 2010 at 7:05 AM, Maciej Stachowiak m...@apple.com wrote:

Copying from one canvas to another is much faster than copying to/from
ImageData. To make copying to a Worker worthwhile as a responsiveness
improvement for rotations or downscales, in addition to the OffscreenCanvas
proposal we would need a faster way to copy image data to a Worker. One
possibility is to allow an OffscreenCanvas to be copied to and from a
background thread. It seems this would be much much faster than copying via
ImageData.


Maybe this indicates that implementations of getImageData/putImageData
ought to be optimised? e.g. do the expensive multiplications and
divisions in the premultiplication code with SIMD. (A seemingly
similar thing at http://bugzilla.openedhand.com/show_bug.cgi?id=1939
suggests SSE2 makes things 3x as fast). That would avoid the need to
invent new API, and would also benefit anyone who wants to use
ImageData for other purposes.


It might be possible to make getImageData/putImageData faster than
they are currently; certainly the browsers at the slower end of the
ImageData performance spectrum must have a lot of headroom. But they
probably also have room to optimize drawImage. (Looking back at my
data I noticed that getImageData + putImageData in Safari is about as
fast as or faster than two drawImage calls in the other browsers
tested.)


In the end, though, I doubt that it's possible for getImageData or  
putImageData to be as fast as drawImage, since drawImage doesn't have  
to do any conversion of the pixel format.


Regards,
Maciej



Re: [whatwg] Offscreen canvas (or canvas for web workers).

2010-03-15 Thread Vladimir Vukicevic

On 3/15/2010 4:22 AM, Maciej Stachowiak wrote:


On Mar 15, 2010, at 3:46 AM, Philip Taylor wrote:

On Mon, Mar 15, 2010 at 7:05 AM, Maciej Stachowiak m...@apple.com wrote:

Copying from one canvas to another is much faster than copying to/from
ImageData. To make copying to a Worker worthwhile as a responsiveness
improvement for rotations or downscales, in addition to the OffscreenCanvas
proposal we would need a faster way to copy image data to a Worker. One
possibility is to allow an OffscreenCanvas to be copied to and from a
background thread. It seems this would be much much faster than copying via
ImageData.


Maybe this indicates that implementations of getImageData/putImageData
ought to be optimised? e.g. do the expensive multiplications and
divisions in the premultiplication code with SIMD. (A seemingly
similar thing at http://bugzilla.openedhand.com/show_bug.cgi?id=1939
suggests SSE2 makes things 3x as fast). That would avoid the need to
invent new API, and would also benefit anyone who wants to use
ImageData for other purposes.


It might be possible to make getImageData/putImageData faster than
they are currently; certainly the browsers at the slower end of the
ImageData performance spectrum must have a lot of headroom. But they
probably also have room to optimize drawImage. (Looking back at my
data I noticed that getImageData + putImageData in Safari is about as
fast as or faster than two drawImage calls in the other browsers
tested.)


In the end, though, I doubt that it's possible for getImageData or 
putImageData to be as fast as drawImage, since drawImage doesn't have 
to do any conversion of the pixel format.


This is true -- getImageData/putImageData unfortunately saddled us with 
two performance-killing bits:


1) clamping on assignment.  Not so bad, but doesn't help.

2) Unpremultiplied alpha.  This is the biggest chunk.  We have more 
optimized code in nightly builds of Firefox now that uses a lookup table 
and gets a pretty significant speedup for this part of put/get, but it's 
not going to be as fast as drawImage.
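
A sketch of the lookup-table idea (not the actual Firefox code): precompute
a per-alpha scale factor so the inner loop replaces each divide with a
multiply:

  // Table-driven unpremultiplication: a 256-entry reciprocal table
  // turns three divides per pixel into three multiplies.
  var inv = new Array(256);
  for (var a = 0; a < 256; a++) inv[a] = a ? 255 / a : 0;

  function unpremultiply(d) { // d: flat premultiplied RGBA bytes
    for (var i = 0; i < d.length; i += 4) {
      var s = inv[d[i + 3]];
      d[i]     = Math.min(255, (d[i]     * s + 0.5) | 0);
      d[i + 1] = Math.min(255, (d[i + 1] * s + 0.5) | 0);
      d[i + 2] = Math.min(255, (d[i + 2] * s + 0.5) | 0);
    }
  }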


Also, canvas is often (or can be) backed by actual hardware surfaces, 
and drawImage from one to another is going to be much faster than 
reading the data into system memory and then drawing from there back to 
the hardware surface.


If we wanted to support this across workers (and I think it would be
helpful to figure out how to do so), we could say that if a canvas
object was passed (somehow) between workers, it would be a copy -- and
internally it could be implemented using copy-on-write semantics.
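
In code, that might look something like this (the constructor, the worker
variable, and the posting semantics are all hypothetical -- nothing here is
specced):

  // Hypothetical: posting an OffscreenCanvas hands the worker a copy
  // (implementable lazily via copy-on-write), not shared mutable state.
  var off = new OffscreenCanvas(512, 512); // hypothetical constructor
  off.getContext('2d').fillRect(0, 0, 512, 512);
  worker.postMessage(off);                 // worker receives a snapshot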


- Vlad


Re: [whatwg] Offscreen canvas (or canvas for web workers).

2010-03-15 Thread Oliver Hunt

On Mar 15, 2010, at 2:24 PM, Vladimir Vukicevic wrote:
 If we wanted to support this across workers (and I think it would be helpful
 to figure out how to do so), we could say that if a canvas object was passed
 (somehow) between workers, it would be a copy -- and internally it could be
 implemented using copy-on-write semantics.

I had been thinking the same -- it would allow the general case (e.g. the data
is only expected to be used in the worker) to get a mostly free copy.

 
--Oliver



Re: [whatwg] Offscreen canvas (or canvas for web workers).

2010-03-14 Thread Maciej Stachowiak


On Mar 13, 2010, at 12:30 PM, Jonas Sicking wrote:

On Sat, Mar 13, 2010 at 12:09 PM, Oliver Hunt oli...@apple.com  
wrote:


On Mar 13, 2010, at 9:10 AM, Jonas Sicking wrote:

There is a use case, which I suspect is quite common, for using
canvas to manipulate files on the user's file system. For example
when creating a photo uploader which does client-side scaling before
uploading the images, or for creating a web-based GIMP-like
application.

In this case we'll start out with a File object that needs to be read
into a canvas. One solution could be to read the File into memory
in a ByteArray (or similar) and add a synchronous
canvas2dcontext.fromByteArray function. This has the advantage of
being more generic, but the downside of forcing both the encoded and
decoded image to be read into memory.


Honestly I think a nice and consistent way for this to work would
simply be to support

someImage.src = someFileObject

which would be asynchronous, and support all the image formats the
browser already supports.


That is already possible:

someImage.src = someFileObject.urn;

However this brings us back to the very long list of steps I listed
earlier in this thread.


I think it is cleaner to have an asynchronous image load operation (as  
shown above) and then a synchronous image paint operation, rather than  
to introduce an asynchronous paint operation directly on the 2D context.


I don't think there is any sane way to add an asynchronous draw
command to the 2D context, given that all the existing drawing
commands are synchronous. What happens if you do an async paint of a
File, followed by synchronous painting operations? It seems like the
only options are to force synchronous I/O, give unpredictable results,
or break the invariants on current drawing operations (i.e. the
guarantee that they are complete by the time you return to the event
loop and thus canvas updates are atomic).


Separating the async I/O from drawing allows the 2D context to remain  
100% synchronous and thus to have sane semantics.
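
Concretely, with the File APIs mentioned in this thread (the .urn property,
or an equivalent URL for the File; ctx is an existing 2D context), the
split looks like:

  // The async I/O and decode hide behind the img element; the paint
  // itself stays synchronous and atomic with other canvas operations.
  var img = new Image();
  img.onload = function () {
    ctx.drawImage(img, 0, 0);   // synchronous paint, sane semantics
  };
  img.src = someFileObject.urn; // asynchronous load of the File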


I think the number of steps is not the primary concern here. The issue  
driving the proposal for offscreen canvas is responsiveness - i.e. not  
blocking the main thread for a long time. It seems to me that number  
of steps is not the main issue for responsiveness, but rather whether  
there are operations that take a lot of CPU and are done  
synchronously, and therefore, whether it is worthwhile to farm some of  
that work out to a Worker. I/O is not really a major consideration  
because we already have ways to do asynchronous I/O.


Regards,
Maciej



Re: [whatwg] Offscreen canvas (or canvas for web workers).

2010-03-14 Thread Jonas Sicking
On Sun, Mar 14, 2010 at 1:43 AM, Maciej Stachowiak m...@apple.com wrote:

 On Mar 13, 2010, at 12:30 PM, Jonas Sicking wrote:

 On Sat, Mar 13, 2010 at 12:09 PM, Oliver Hunt oli...@apple.com wrote:

 On Mar 13, 2010, at 9:10 AM, Jonas Sicking wrote:

 There is a use case, which I suspect is quite common, for using
 canvas to manipulate files on the user's file system. For example
 when creating a photo uploader which does client-side scaling before
 uploading the images, or for creating a web-based GIMP-like
 application.

 In this case we'll start out with a File object that needs to be read
 into a canvas. One solution could be to read the File into memory
 in a ByteArray (or similar) and add a synchronous
 canvas2dcontext.fromByteArray function. This has the advantage of
 being more generic, but the downside of forcing both the encoded and
 decoded image to be read into memory.

 Honestly I think a nice and consistent way for this to work would simply
 be to support
 someImage.src = someFileObject

 which would be asynchronous, and support all the image formats the
 browser already supports.

 That is already possible:

 someImage.src = someFileObject.urn;

 However this brings us back to the very long list of steps I listed
 earlier in this thread.

 I think it is cleaner to have an asynchronous image load operation (as shown
 above) and then a synchronous image paint operation, rather than to
 introduce an asynchronous paint operation directly on the 2D context.

 I don't think there is any sane way to add an asynchronous draw command to
 the 2D context, given that all the existing drawing commands are
 synchronous. What happens if you do an async paint of a File, followed by
 synchronous painting operations? It seems like the only options are to force
 synchronous I/O, give unpredictable results, or break the invariants on
 current drawing operations (i.e. the guarantee that they are complete by the
 time you return to the event loop and thus canvas updates are atomic).

 Separating the async I/O from drawing allows the 2D context to remain 100%
 synchronous and thus to have sane semantics.

One way to do it would be to have a function somewhere, not
necessarily on the 2D context, which given a Blob, returns an
ImageData object. However this still results in the image being loaded
twice into memory, so would only really help if you want to operate on
an ImageData object directly.

 I think the number of steps is not the primary concern here. The issue
 driving the proposal for offscreen canvas is responsiveness - i.e. not
 blocking the main thread for a long time. It seems to me that number of
 steps is not the main issue for responsiveness, but rather whether there are
 operations that take a lot of CPU and are done synchronously, and therefore,
 whether it is worthwhile to farm some of that work out to a Worker. I/O is
 not really a major consideration because we already have ways to do
 asynchronous I/O.

I agree that the number of steps is not important for responsiveness
or performance (though it is for complexity). However several of those
steps seemed to involve a non-trivial amount of CPU usage, which was the
concern expressed in my initial mail.

At the very least I think we have a skewed proposal. The main use
cases that have been brought up are scaling and rotating images.
However the proposal is far from optimal for fulfilling that use case.
For scaling, it's fairly complex and uses more CPU cycles, both on the
main thread, and in total, than would be needed with an API more
optimized for that use case. For rotating it doesn't do anything.

/ Jonas


Re: [whatwg] Offscreen canvas (or canvas for web workers).

2010-03-13 Thread Jonas Sicking
On Fri, Mar 12, 2010 at 10:07 PM, Maciej Stachowiak m...@apple.com wrote:
  On Mar 12, 2010, at 6:20 PM, Jonas Sicking wrote:
  Oh, another thing to keep in mind is that if/when we add fromBlob to
  the main-thread canvas, it has to be asynchronous in order to avoid
  main thread synchronous IO. This isn't a big deal, but I figured I
  should mention it while we're on the subject.

 This is part of why I think Blob is the wrong tool for the job - we really
 want to use a data type here that can promise synchronous access to the
 data. When you copy the canvas backing store to a new in-memory
 representation, it seems silly to put that behind an interface that's the
 same as data to which you can only promise async access, such as part of a
 file on disk. There's nothing about copying bits from one canvas to another
 that needs to be async.
 (Also it's not clear to me why a Blob would be faster to copy from, copy to,
 or copy cross-thread than ImageData; I thought the motivation for adding it
 was to have a binary container that can be uploaded to a server via XHR.)

There is a use case, which I suspect is quite common, for using
canvas to manipulate files on the user's file system. For example
when creating a photo uploader which does client-side scaling before
uploading the images, or for creating a web-based GIMP-like
application.

In this case we'll start out with a File object that needs to be read
into a canvas. One solution could be to read the File into memory
in a ByteArray (or similar) and add a synchronous
canvas2dcontext.fromByteArray function. This has the advantage of
being more generic, but the downside of forcing both the encoded and
decoded image to be read into memory.

This is why I suggested adding an asynchronous fromBlob function.

For extracting image data from a canvas I agree that a toBlob
function has little advantage over a toByteArray function (with the
possible exception that ByteArray so far is still vaporware).

 In general I wonder if we should add API to convert directly between
 Blob and ImageData. Or at least Blob->ImageData and
 ImageData->ByteArray. That could avoid overhead of going through a
 canvas context. That is probably a win no matter which thread we are
 on.

 We could even add APIs to rotate and scale ImageData objects directly.
 If those are asynchronous the implementation could easily implement
 them using a background thread. I'm less sure that this is worth it
 though given that you can implement this yourself using workers if we
 add the other stuff we've talked about.

 Scaling and rotation can be done with just pixels if you code it by hand,
 but you can get native code to do it for you if you can manipulate actually
 offscreen buffers - you just establish the appropriate transform before
 painting the ImageData. Really the question is, how much slower is a scaling
 or rotating image paint than an image paint with the identity transform? Is
 it more than twice as expensive? That's the only way copying image data to a
 background thread will give you a responsiveness win. I'd like to see some
 data to establish that this is the case, if scales and rotates are the only
 concrete use cases we have in mind.

I agree that data would be great. Though for scaling I suspect that
it's complicated enough that it's worth exposing *some* built-in API
for doing it. Especially considering that you want to use
anti-aliasing and ideally things like gamma correction. Be that
through what we already have on the 2d context, or on ImageData
directly.
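
Pending such an API, one common workaround is to downscale in 0.5x steps
with drawImage, which approximates proper filtering better than one large
jump in many implementations (a sketch; it does nothing about gamma):

  // Downscale in halving steps; each pass gets (typically bilinear)
  // filtering, which beats a single big jump for large ratios.
  function downscale(src, targetW, targetH) {
    var c = src, w = src.width, h = src.height;
    while (w / 2 >= targetW && h / 2 >= targetH) {
      w = Math.floor(w / 2); h = Math.floor(h / 2);
      var next = document.createElement('canvas');
      next.width = w; next.height = h;
      next.getContext('2d').drawImage(c, 0, 0, w, h);
      c = next;
    }
    var out = document.createElement('canvas');
    out.width = targetW; out.height = targetH;
    out.getContext('2d').drawImage(c, 0, 0, targetW, targetH);
    return out;
  }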

/ Jonas


Re: [whatwg] Offscreen canvas (or canvas for web workers).

2010-03-13 Thread Oliver Hunt

On Mar 13, 2010, at 9:10 AM, Jonas Sicking wrote:
 There is a use case, which I suspect is quite common, for using
 canvas to manipulate files on the user's file system. For example
 when creating a photo uploader which does client-side scaling before
 uploading the images, or for creating a web-based GIMP-like
 application.

 In this case we'll start out with a File object that needs to be read
 into a canvas. One solution could be to read the File into memory
 in a ByteArray (or similar) and add a synchronous
 canvas2dcontext.fromByteArray function. This has the advantage of
 being more generic, but the downside of forcing both the encoded and
 decoded image to be read into memory.

Honestly I think a nice and consistent way for this to work would simply be
to support
someImage.src = someFileObject

which would be asynchronous, and support all the image formats the browser
already supports.

 
 This is why I suggested adding an asynchronous fromBlob function.
 
 For extracting image data from a canvas I agree that a toBlob
 function has little advantage over a toByteArray function (with the
 possible exception that ByteArray so far is still vaporware).
CanvasPixelArray is a ByteArray for all intents and purposes, and WebKit and I
_think_ Opera implement it as such. It would of course be good just to get all
the native array types into ES, and then make WebGL + Canvas make use of those
directly, and I doubt that is something we'll need to wait too long for.

--Oliver



Re: [whatwg] Offscreen canvas (or canvas for web workers).

2010-03-13 Thread Jonas Sicking
On Sat, Mar 13, 2010 at 12:09 PM, Oliver Hunt oli...@apple.com wrote:

 On Mar 13, 2010, at 9:10 AM, Jonas Sicking wrote:
 There is a use case, which I suspect is quite common, for using
 canvas to manipulate files on the user's file system. For example
 when creating a photo uploader which does client-side scaling before
 uploading the images, or for creating a web-based GIMP-like
 application.

 In this case we'll start out with a File object that needs to be read
 into a canvas. One solution could be to read the File into memory
 in a ByteArray (or similar) and add a synchronous
 canvas2dcontext.fromByteArray function. This has the advantage of
 being more generic, but the downside of forcing both the encoded and
 decoded image to be read into memory.

 Honestly I think a nice and consistent way for this to work would simply be
 to support
 someImage.src = someFileObject

 which would be asynchronous, and support all the image formats the browser
 already supports.

That is already possible:

someImage.src = someFileObject.urn;

However this brings us back to the very long list of steps I listed
earlier in this thread.

/ Jonas


Re: [whatwg] Offscreen canvas (or canvas for web workers).

2010-03-12 Thread David Levin
On Mon, Feb 22, 2010 at 11:57 AM, Drew Wilson atwil...@google.com wrote:

 Do we feel that text APIs are, in general, difficult to implement in a
 multi-thread safe manner?


On Mon, Feb 22, 2010 at 11:51 AM, Michael Nordman micha...@google.com
 wrote:

 The lack of support for text drawing in the worker context seems like a
 short sighted mistake.


On Mon, Feb 22, 2010 at 2:35 PM, Jeremy Orlow jor...@chromium.org wrote:

 It does indeed seem pretty short sighted.


On Mon, Feb 22, 2010 at 2:48 PM, Maciej Stachowiak m...@apple.com wrote:

 1) Like others, I would recommend not omitting the text APIs. I can see why
 they are a bit trickier to implement than the others, but I don't see a
 fundamental barrier.


I did want to add the text APIs but it does add implementation difficulties
for WebKit (and, as I understand it, Firefox). However, they are part of what
people want and it does simplify the interfaces, so done.


On Mon, Feb 22, 2010 at 2:48 PM, Maciej Stachowiak m...@apple.com wrote:

 2) I would propose adding createPattern and drawImage overloads that take
 an OffscreenCanvas. The other overloads would in practice not be usefully
 callable in the worker case since you couldn't get an image, canvas or video
 element.

 3) This would leave the only difference between the two interfaces as the
 drawFocusRing method. This would not be usefully callable in a worker, since
 there would be no way to get an Element. But it doesn't seem worth it to add
 an interface just for one method's worth of difference.


Sounds good.


On Mon, Feb 22, 2010 at 3:10 PM, Jonas Sicking jo...@sicking.cc wrote:

 What is the use case for this? It seems like in most cases you'll want
 to display something on screen to the user, and so the difference
 comes down to shipping drawing commands across the pipe, vs. shipping
 the pixel data.


Apologies for not including this at the start. As now mentioned in several
places in the thread, the simplest use case is resize/rotate of images.

However, more complex use cases may involve heavy users of canvas who would
like to prerender the canvas and/or move the time taken to generate the
canvas off the main thread. While it is true that simple uses of canvas
would not get a performance win out of this, if you are doing many canvas
operations (like many complex operations), it is faster to copy a result
than it is to generate it.

Ideally canvas will get some form of toBlob, which will allow the image to
be appropriately compressed as well; that will also reduce the number of
bits that need to be copied from the worker to the main thread (as well as
make for a smaller upload than the raw bits).
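
Assuming toBlob ends up with roughly the shape discussed elsewhere (a
callback receiving the encoded Blob -- the signature here is a guess, not
settled API), the upload path would look like:

  // Upload the compressed encoding rather than raw pixels.
  canvas.toBlob(function (blob) {
    var xhr = new XMLHttpRequest();
    xhr.open('POST', '/upload'); // placeholder endpoint
    xhr.send(blob);              // PNG/JPEG bytes, not a pixel dump
  }, 'image/jpeg');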

Lastly, in the future, one can see other uses for the OffscreenCanvas,
including WebGL for workers, which several folks (Maciej and Jonas) have
expressed interest in on this thread.

dave


Re: [whatwg] Offscreen canvas (or canvas for web workers).

2010-03-12 Thread Jonas Sicking
On Fri, Mar 12, 2010 at 11:57 AM, David Levin le...@google.com wrote:
 On Mon, Feb 22, 2010 at 3:10 PM, Jonas Sicking jo...@sicking.cc wrote:

 What is the use case for this? It seems like in most cases you'll want
 to display something on screen to the user, and so the difference
 comes down to shipping drawing commands across the pipe, vs. shipping
 the pixel data.

 Apologies for not including this at the start. As now mentioned in several
 places in the thread, the simplest use case is resize/rotate of images.

As Hixie pointed out, resizing/rotating images does not seem to be solved
by this API. In order to resize an image with this API you need to:

1. Load the image into an img
2. Copy the image into a canvas
3. Extract an ImageData from the canvas
4. Send the ImageData to the worker thread
5. Import the ImageData into the worker thread canvas
6. Resize/rotate the image using the worker thread canvas
7. Extract an ImageData from the worker thread canvas
8. Send the ImageData to the main thread
9. Import the ImageData into a main thread canvas

And if you want to send the resized image to the server:

10. Extract the data in a serialized format from the canvas
11. Send using XHR.

Just looking at the work happening on the main thread, it sounds
like resizing/rotating on the main thread is faster. Not to
mention much less complex.

I'm not saying that the proposed API is bad. It just doesn't seem to
solve the (seemingly most commonly requested) use case of
rotating/scaling images. So if we want to solve those use cases we
need to either come up with a separate API for that, or extend this
proposal to solve that use case somehow.

/ Jonas


Re: [whatwg] Offscreen canvas (or canvas for web workers).

2010-03-12 Thread David Levin
On Fri, Mar 12, 2010 at 12:16 PM, Jonas Sicking jo...@sicking.cc wrote:

 On Fri, Mar 12, 2010 at 11:57 AM, David Levin le...@google.com wrote:
  On Mon, Feb 22, 2010 at 3:10 PM, Jonas Sicking jo...@sicking.cc wrote:
 
  What is the use case for this? It seems like in most cases you'll want
  to display something on screen to the user, and so the difference
  comes down to shipping drawing commands across the pipe, vs. shipping
  the pixel data.
 
  Apologies for not including this at the start. As now mentioned in
 several
  places in the thread, the simplest use case is resize/rotate of images.

 As Hixie pointed out, resizing/rotating images does not seem to be solved by
 this API. In order to resize an image with this API you need to:

 1. Load the image into an img
 2. Copy the image into a canvas
 3. Extract an ImageData from the canvas
 4. Send the ImageData to the worker thread
 5. Import the ImageData into the worker thread canvas
 6. Resize/rotate the image using the worker thread canvas
 7. Extract an ImageData from the worker thread canvas
 8. Send the ImageData to the main thread
 9. Import the ImageData into a main thread canvas

 And if you want to send the resized image to the server:

 10. Extract the data in a serialized format from the canvas
 11. Send using XHR.

 Just looking at the work happening on the main thread, it sounds
 like resizing/rotating on the main thread is faster. Not to
 mention much less complex.

 I'm not saying that the proposed API is bad. It just doesn't seem to
 solve the (seemingly most commonly requested) use case of
 rotating/scaling images. So if we want to solve those use cases we
 need to either come up with a separate API for that, or extend this
 proposal to solve that use case somehow.


If fromBlob and toBlob were on canvas, it gets rid of steps 1-3 and changes
step 4 to "send the File to the worker thread". I simply didn't include
fromBlob/toBlob because toBlob was already being discussed in another
thread. I thought it best to let that topic get discussed in parallel, but
it is part of this whole thing, so I am interested in that happening (and
in discussing those APIs further).

So it looks like this:

1. Send the File to the worker thread
2. Import the File/blob into the worker thread canvas
3. Resize/rotate the image using the worker thread canvas (to a thumbnail,
for instance)
4. Extract a blob from the worker thread canvas

Either

5. Send the blob using XHR in the worker.

or

5. Send the Blob to the main thread
6. Import the Blob into a main thread canvas
(or both).

Given the blob support, this would be a better user experience overall
because the loading of the image is done in the worker, as well as the
resize to a much smaller size, so the I/O happening on the main thread is
much lower overall.
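
A sketch of that worker, with OffscreenCanvas, fromBlob and toBlob written
as the hypothetical APIs discussed in this thread (the callback signatures
are invented for illustration):

  // thumbnail-worker.js -- every API below is proposed/hypothetical.
  onmessage = function (e) {                // 1. File arrives from the page
    var canvas = new OffscreenCanvas(160, 120);
    var ctx = canvas.getContext('2d');
    ctx.fromBlob(e.data, function (img) {   // 2. decode the File/Blob
      ctx.drawImage(img, 0, 0, 160, 120);   // 3. resize to a thumbnail
      canvas.toBlob(function (blob) {       // 4. re-encode, compressed
        postMessage(blob);                  // 5. hand back to the page
      }, 'image/jpeg');
    });
  };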

dave


Re: [whatwg] Offscreen canvas (or canvas for web workers).

2010-03-12 Thread Oliver Hunt

On Mar 12, 2010, at 12:16 PM, Jonas Sicking wrote:
 I'm not saying that the proposed API is bad. It just doesn't seem to
 solve the (seemingly most commonly requested) use case of
 rotating/scaling images. So if we want to solve those use cases we
 need to either come up with a separate API for that, or extend this
 proposal to solve that use case somehow.

Just for reference, I think one thing that people are forgetting is that
there is a difference between being computationally faster and being more
responsive.

If we imagine a scenario where you're doing something that takes
100ms/frame, and it takes 10ms to post the data between the main thread and
a worker:

I _could_ do this all on the main thread, in which case I have a fixed cost
of 100ms/frame, but if I offload the processing to a worker my processing
time per frame is now 120ms/frame. The initial thought may be "ick, slower"
but the page is actually much more responsive -- when processing on the
main thread, the main thread is blocked for 100ms at a time, whereas
processing on a worker means the main thread is not blocked for more than
10ms. That's the difference between being able to scroll in realtime vs.
10fps.

That also ignores the possibility of splitting the processing among
multiple workers; once again the total CPU time may increase, but the
wall-clock time can decrease dramatically.
--Oliver




Re: [whatwg] Offscreen canvas (or canvas for web workers).

2010-03-12 Thread Jonas Sicking
On Fri, Mar 12, 2010 at 12:46 PM, Oliver Hunt oli...@apple.com wrote:

 On Mar 12, 2010, at 12:16 PM, Jonas Sicking wrote:
 I'm not saying that the proposed API is bad. It just doesn't seem to
 solve the (seemingly most commonly requested) use case of
 rotating/scaling images. So if we want to solve those use cases we
 need to either come up with a separate API for that, or extend this
 proposal to solve that use case somehow.

 Just for reference, I think one thing that people are forgetting is that
 there is a difference between being computationally faster and being more
 responsive.

As I mentioned in my email, if you look at the steps listed, enough of
them happen *on the main thread* that you're spending far more of the
main thread's CPU cycles than you'd like. Possibly even more than doing
all the resizing on the main thread.

With the other improvements suggested by David things do definitely
look different, but those are not in a proposal yet.

/ Jonas


Re: [whatwg] Offscreen canvas (or canvas for web workers).

2010-03-12 Thread David Levin
On Fri, Mar 12, 2010 at 2:35 PM, Jonas Sicking jo...@sicking.cc wrote:

 On Fri, Mar 12, 2010 at 12:46 PM, Oliver Hunt oli...@apple.com wrote:
 
  On Mar 12, 2010, at 12:16 PM, Jonas Sicking wrote:
  I'm not saying that the proposed API is bad. It just doesn't seem to
  solve the (seemingly most commonly requested) use case of
  rotating/scaling images. So if we want to solve those use cases we
  need to either come up with a separate API for that, or extend this
  proposal to solve that use case somehow.
 
  Just for reference, I think one thing that people are forgetting is that
  there is a difference between being computationally faster and being more
  responsive.

 As I mentioned in my email, if you look at the steps listed, enough of
 them happen *on the main thread* that you're spending far more of the
 main thread's CPU cycles than you'd like. Possibly even more than doing
 all the resizing on the main thread.

 With the other improvements suggested by David things do definitely
 look different, but those are not in a proposal yet.


There is the other scenario I mentioned, but I'll see what I can do about
separately working up a proposal for adding those methods because they were
next on my list to deal with. (fromBlob/load may be enough for this.)

dave


Re: [whatwg] Offscreen canvas (or canvas for web workers).

2010-03-12 Thread Jonas Sicking
On Fri, Mar 12, 2010 at 3:38 PM, David Levin le...@google.com wrote:


 On Fri, Mar 12, 2010 at 2:35 PM, Jonas Sicking jo...@sicking.cc wrote:

 On Fri, Mar 12, 2010 at 12:46 PM, Oliver Hunt oli...@apple.com wrote:
 
  On Mar 12, 2010, at 12:16 PM, Jonas Sicking wrote:
  I'm not saying that the proposed API is bad. It just doesn't seem to
  solve the (seemingly most commonly requested) use case of
  rotating/scaling images. So if we want to solve those use cases we
  need to either come up with a separate API for that, or extend this
  proposal to solve that use case somehow.
 
  Just for reference, I think one thing that people are forgetting is that
  there is a difference between being computationally faster and being more
  responsive.

 As I mentioned in my email, if you look at the steps listed, enough of
 them happen *on the main thread* that you're spending far more of the
 main thread's CPU cycles than you'd like. Possibly even more than doing
 all the resizing on the main thread.

 With the other improvements suggested by David things do definitely
 look different, but those are not in a proposal yet.

 There is the other scenario I mentioned, but I'll see what I can do about
 separately working up a proposal for adding those methods because they were
 next on my list to deal with. (fromBlob/load may be enough for this.)

Note that the other proposals that have been made have put toBlob on
HTMLCanvasElement, not on the context. That makes the most sense for
the main-thread canvas as that way it's available on all contexts.

/ Jonas


Re: [whatwg] Offscreen canvas (or canvas for web workers).

2010-03-12 Thread Maciej Stachowiak


On Mar 12, 2010, at 2:35 PM, Jonas Sicking wrote:

On Fri, Mar 12, 2010 at 12:46 PM, Oliver Hunt oli...@apple.com  
wrote:


On Mar 12, 2010, at 12:16 PM, Jonas Sicking wrote:

I'm not saying that the proposed API is bad. It just doesn't seem to
solve the (seemingly most commonly requested) use case of
rotating/scaling images. So if we want to solve those use cases we
need to either come up with a separate API for that, or extend this
proposal to solve that use case somehow.


Just for reference, I think one thing that people are forgetting is
that there is a difference between being computationally faster and
being more responsive.


As I mentioned in my email, if you look at the steps listed, enough of
them happen *on the main thread* that you're spending far more of the
main thread's CPU cycles than you'd like. Possibly even more than doing
all the resizing on the main thread.

With the other improvements suggested by David things do definitely
look different, but those are not in a proposal yet.


In general a copy is a fair bit faster than an image rotate or resize,
though I don't know if it is enough faster for reasonable image sizes
to matter.


Regards,
Maciej



Re: [whatwg] Offscreen canvas (or canvas for web workers).

2010-03-12 Thread Jonas Sicking
On Fri, Mar 12, 2010 at 4:19 PM, Jonas Sicking jo...@sicking.cc wrote:
 On Fri, Mar 12, 2010 at 3:38 PM, David Levin le...@google.com wrote:


 On Fri, Mar 12, 2010 at 2:35 PM, Jonas Sicking jo...@sicking.cc wrote:

 On Fri, Mar 12, 2010 at 12:46 PM, Oliver Hunt oli...@apple.com wrote:
 
  On Mar 12, 2010, at 12:16 PM, Jonas Sicking wrote:
  I'm not saying that the proposed API is bad. It just doesn't seem to
  solve the (seemingly most commonly requested) use case of
  rotating/scaling images. So if we want to solve those use cases we
  need to either come up with a separate API for that, or extend this
  proposal to solve that use case somehow.
 
  Just for reference, I think one thing that people are forgetting is that
  there is a difference between being computationally faster and being more
  responsive.

 As I mentioned in my email, if you look at the steps listed, enough of
 them happen *on the main thread* that you're spending far more of the
 main thread's CPU cycles than you'd like. Possibly even more than doing
 all the resizing on the main thread.

 With the other improvements suggested by David things do definitely
 look different, but those are not in a proposal yet.

 There is the other scenario I mentioned, but I'll see what I can do about
 separately working up a proposal for adding those methods because they were
 next on my list to deal with. (fromBlob/load may be enough for this.)

 Note that the other proposals that have been made have put toBlob on
 HTMLCanvasElement, not on the context. That makes the most sense for
 the main-thread canvas as that way it's available on all contexts.

Oh, another thing to keep in mind is that if/when we add fromBlob to
the main-thread canvas, it has to be asynchronous in order to avoid
main thread synchronous IO. This isn't a big deal, but I figured I
should mention it while we're on the subject.

In general I wonder if we should add API to convert directly between
Blob and ImageData. Or at least Blob->ImageData and
ImageData->ByteArray. That could avoid overhead of going through a
canvas context. That is probably a win no matter which thread we are
on.

We could even add APIs to rotate and scale ImageData objects directly.
If those are asynchronous the implementation could easily implement
them using a background thread. I'm less sure that this is worth it
though given that you can implement this yourself using workers if we
add the other stuff we've talked about.

/ Jonas


Re: [whatwg] Offscreen canvas (or canvas for web workers).

2010-03-12 Thread Maciej Stachowiak


On Mar 12, 2010, at 6:20 PM, Jonas Sicking wrote:

On Fri, Mar 12, 2010 at 4:19 PM, Jonas Sicking jo...@sicking.cc wrote:
On Fri, Mar 12, 2010 at 3:38 PM, David Levin le...@google.com wrote:
On Fri, Mar 12, 2010 at 2:35 PM, Jonas Sicking jo...@sicking.cc wrote:
On Fri, Mar 12, 2010 at 12:46 PM, Oliver Hunt oli...@apple.com wrote:
On Mar 12, 2010, at 12:16 PM, Jonas Sicking wrote:

I'm not saying that the proposed API is bad. It just doesn't seem to
solve the (seemingly most commonly requested) use case of
rotating/scaling images. So if we want to solve those use cases we
need to either come up with a separate API for that, or extend this
proposal to solve that use case somehow.

Just for reference, I think one thing that people are forgetting is
that there is a difference between being computationally faster and
being more responsive.

As I mentioned in my email, if you look at the steps listed, enough of
them happen *on the main thread* that you're spending far more of the
main thread's CPU cycles than you'd like. Possibly even more than
doing all the resizing on the main thread.

With the other improvements suggested by David things do definitely
look different, but those are not in a proposal yet.

There is the other scenario I mentioned, but I'll see what I can do
about separately working up a proposal for adding those methods
because they were next on my list to deal with. (fromBlob/load may be
enough for this.)

Note that the other proposals that have been made have put toBlob on
HTMLCanvasElement, not on the context. That makes the most sense for
the main-thread canvas as that way it's available on all contexts.

Oh, another thing to keep in mind is that if/when we add fromBlob to
the main-thread canvas, it has to be asynchronous in order to avoid
main thread synchronous IO. This isn't a big deal, but I figured I
should mention it while we're on the subject.


This is part of why I think Blob is the wrong tool for the job - we
really want to use a data type here that can promise synchronous
access to the data. When you copy the canvas backing store to a new
in-memory representation, it seems silly to put that behind an
interface that's the same as data to which you can only promise async
access, such as part of a file on disk. There's nothing about copying
bits from one canvas to another that needs to be async.


(Also it's not clear to me why a Blob would be faster to copy from,  
copy to, or copy cross-thread than ImageData; I thought the motivation  
for adding it was to have a binary container that can be uploaded to a  
server via XHR.)




In general I wonder if we should add API to convert directly between
Blob and ImageData. Or at least Blob->ImageData and
ImageData->ByteArray. That could avoid overhead of going through a
canvas context. That is probably a win no matter which thread we are
on.

We could even add APIs to rotate and scale ImageData objects directly.
If those are asynchronous the implementation could easily implement
them using a background thread. I'm less sure that this is worth it
though given that you can implement this yourself using workers if we
add the other stuff we've talked about.


Scaling and rotation can be done with just pixels if you code it by
hand, but you can get native code to do it for you if you can
manipulate actually offscreen buffers - you just establish the
appropriate transform before painting the ImageData. Really the
question is, how much slower is a scaling or rotating image paint than
an image paint with the identity transform? Is it more than twice as
expensive? That's the only way copying image data to a background
thread will give you a responsiveness win. I'd like to see some data
to establish that this is the case, if scales and rotates are the only
concrete use cases we have in mind.
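
For what it's worth, a sketch of the transform-before-paint idea (with one
wrinkle: putImageData ignores the transform, so the pixels go through a
scratch canvas first):

  // Let native code do the rotation: putImageData into a scratch canvas,
  // then drawImage resamples it under the current transform.
  function drawRotated(ctx, imageData, radians) {
    var scratch = document.createElement('canvas');
    scratch.width = imageData.width;
    scratch.height = imageData.height;
    scratch.getContext('2d').putImageData(imageData, 0, 0);
    ctx.save();
    ctx.rotate(radians);
    ctx.drawImage(scratch, 0, 0);
    ctx.restore();
  }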


Regards,
Maciej









Re: [whatwg] Offscreen canvas (or canvas for web workers).

2010-02-24 Thread Maciej Stachowiak


On Feb 23, 2010, at 10:04 PM, Jonas Sicking wrote:

On Tue, Feb 23, 2010 at 9:57 PM, Maciej Stachowiak m...@apple.com wrote:

- Raytracing a complex scene at high resolution.
- Drawing a highly zoomed in high resolution portion of the Mandelbrot set.

To be fair though, you could compute the pixels for those with just math,
there is no need to have a graphics context type abstraction.
there is no need to have a graphics context type abstraction.


http://people.mozilla.com/~sicking/webgl/ray.html


I did not think it was possible to write a proper raytracer for  
arbitrary content all as a shader program, but I do not know enough  
about 3D graphics to know if that demo is correct or if that is  
possible in general. Point conceded though.



http://people.mozilla.com/~sicking/webgl/juliaanim.html
http://people.mozilla.com/~sicking/webgl/mandjulia.html


Neither of the examples you posted seems to have the ability to zoom in,
so I don't think they show anything about doing this to extremely high
accuracy. But I see your point that much of this particular computation
can be done on the GPU, up to probably quite high limits. Replace this
example with your choice of non-data-parallel computation.


Very sexy demos, by the way.

Regards,
Maciej



Re: [whatwg] Offscreen canvas (or canvas for web workers).

2010-02-24 Thread Maciej Stachowiak


On Feb 24, 2010, at 12:09 AM, Maciej Stachowiak wrote:



On Feb 23, 2010, at 10:04 PM, Jonas Sicking wrote:

On Tue, Feb 23, 2010 at 9:57 PM, Maciej Stachowiak m...@apple.com wrote:

- Raytracing a complex scene at high resolution.
- Drawing a highly zoomed in high resolution portion of the Mandelbrot set.

To be fair though, you could compute the pixels for those with just math,
there is no need to have a graphics context type abstraction.

http://people.mozilla.com/~sicking/webgl/ray.html

I did not think it was possible to write a proper raytracer for arbitrary
content all as a shader program, but I do not know enough about 3D graphics
to know if that demo is correct or if that is possible in general. Point
conceded though.

http://people.mozilla.com/~sicking/webgl/juliaanim.html
http://people.mozilla.com/~sicking/webgl/mandjulia.html

Neither of the examples you posted seems to have the ability to zoom in,
so I don't think they show anything about doing this to extremely high
accuracy. But I see your point that much of this particular computation
can be done on the GPU, up to probably quite high limits. Replace this
example with your choice of non-data-parallel computation.


Following the links, this demo does do zoom, but it will go all jaggy  
past a certain zoom level, presumably due to limitations of GLSL.


http://learningwebgl.com/lessons/example01/?

Regards,
Maciej



Re: [whatwg] Offscreen canvas (or canvas for web workers).

2010-02-24 Thread Jonas Sicking
On Wed, Feb 24, 2010 at 12:14 AM, Maciej Stachowiak m...@apple.com wrote:

 On Feb 24, 2010, at 12:09 AM, Maciej Stachowiak wrote:

 On Feb 23, 2010, at 10:04 PM, Jonas Sicking wrote:

 On Tue, Feb 23, 2010 at 9:57 PM, Maciej Stachowiak m...@apple.com wrote:

 - Raytracing a complex scene at high resolution.

 - Drawing a highly zoomed in high resolution portion of the Mandelbrot set.

 To be fair though, you could compute the pixels for those with just math,

 there is no need to have a graphics context type abstraction.

 http://people.mozilla.com/~sicking/webgl/ray.html

 I did not think it was possible to write a proper raytracer for arbitrary
 content all as a shader program, but I do not know enough about 3D graphics
 to know if that demo is correct or if that is possible in general. Point
 conceded though.

The big thing that GLSL is lacking is a stack, making it impossible to
recurse properly. This isn't a huge problem to work around, though it can
result in ugly code. Especially if you want to support transparent
objects, in which case you'll essentially have to unroll recursion
manually by copying code.

This of course makes it impossible to recurse to arbitrary levels,
though that is something you generally don't want to do anyway in a
ray tracer since it costs a lot of CPU (or in this case GPU) cycles
for very little visual gain.

 http://people.mozilla.com/~sicking/webgl/juliaanim.html
 http://people.mozilla.com/~sicking/webgl/mandjulia.html

 Neither of the examples you posted seems to have the ability to zoom in, so I
 don't think they show anything about doing this to extremely high accuracy.
 But I see your point that much of this particular computation can be done on
 the GPU, up to probably quite high limits. Replace this example with your
 choice of non-data-parallel computation.

 Following the links, this demo does do zoom, but it will go all jaggy past a
 certain zoom level, presumably due to limitations of GLSL.
 http://learningwebgl.com/lessons/example01/?

Indeed. Zooming is no problem at all and doesn't require any heavier
math than what is done in my demo. I experimented with allowing the
animations to be stopped at arbitrary points and then allowing
zooming. However it required more UI work than I was interested in
doing at the time.

The reason the demo goes jaggy after a while is due to limitations in
IEEE 754 floats.
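
A quick way to see that limit, using a Float32Array to round coordinates to
single precision (the constants are illustrative):

  // Two coordinates a deep-zoom pixel apart collapse to the same
  // single-precision value -- which is where the jaggies come from.
  var f = new Float32Array(2);
  f[0] = 0.3002591;
  f[1] = 0.3002591 + 1e-8;    // a neighbouring pixel at ~1e-8 spacing
  console.log(f[0] === f[1]); // true: float32 can't tell them apart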

But I should clarify that my point wasn't that WebGL makes
off-main-thread graphics processing unneeded. I just thought it was
funny that the two examples you brought up were exactly the things
that I had experimented with. Although I wouldn't be surprised if a
lot of the image processing effects that people want to do can be
written as shader programs. Would definitely be interesting to know if
WebGL could be supported on workers.

/ Jonas


Re: [whatwg] Offscreen canvas (or canvas for web workers).

2010-02-24 Thread Jonas Sicking
On Wed, Feb 24, 2010 at 1:35 AM, Jonas Sicking jo...@sicking.cc wrote:
 On Wed, Feb 24, 2010 at 12:14 AM, Maciej Stachowiak m...@apple.com wrote:

 On Feb 24, 2010, at 12:09 AM, Maciej Stachowiak wrote:

 On Feb 23, 2010, at 10:04 PM, Jonas Sicking wrote:

 On Tue, Feb 23, 2010 at 9:57 PM, Maciej Stachowiak m...@apple.com wrote:

 - Raytracing a complex scene at high resolution.

 - Drawing a highly zoomed in high resolution portion of the Mandelbrot set.

 To be fair though, you could compute the pixels for those with just math,

 there is no need to have a graphics context type abstraction.

 http://people.mozilla.com/~sicking/webgl/ray.html

 I did not think it was possible to write a proper raytracer for arbitrary
 content all as a shader program, but I do not know enough about 3D graphics
 to know if that demo is correct or if that is possible in general. Point
 conceded though.

 The big thing that GLSL is lacking is a stack, making it impossible to
 recurse properly. This isn't a huge problem to work around, though it can
 result in ugly code. Especially if you want to support transparent
 objects, in which case you'll essentially have to unroll recursion
 manually by copying code.

 This of course makes it impossible to recurse to arbitrary levels,
 though that is something you generally don't want to do anyway in a
 ray tracer since it costs a lot of CPU (or in this case GPU) cycles
 for very little visual gain.

Oh, but the math is definitely correct, so GLSL ray tracing is quite possible.

/ Jonas


Re: [whatwg] Offscreen canvas (or canvas for web workers).

2010-02-24 Thread Maciej Stachowiak


On Feb 24, 2010, at 1:35 AM, Jonas Sicking wrote:

On Wed, Feb 24, 2010 at 12:14 AM, Maciej Stachowiak m...@apple.com  
wrote:


On Feb 24, 2010, at 12:09 AM, Maciej Stachowiak wrote:

On Feb 23, 2010, at 10:04 PM, Jonas Sicking wrote:

On Tue, Feb 23, 2010 at 9:57 PM, Maciej Stachowiak m...@apple.com  
wrote:


- Raytracing a complex scene at high resolution.

- Drawing a highly zoomed in high resolution portion of the  
Mandelbrot set.


To be fair though, you could compute the pixels for those with just  
math,


there is no need to have a graphics context type abstraction.

http://people.mozilla.com/~sicking/webgl/ray.html

I did not think it was possible to write a proper raytracer for  
arbitrary
content all as a shader program, but I do not know enough about 3D  
graphics
to know if that demo is correct or if that is possible in general.  
Point

conceded though.


The big thing that GLSL is lacking is a stack, making it impossible to
recurse properly. This isn't a huge problem to work around, though it can
result in ugly code. Especially if you want to support transparent
objects, in which case you'll essentially have to unroll recursion
manually by copying code.

This of course makes it impossible to recurse to arbitrary levels,
though that is something you generally don't want to do anyway in a
ray tracer since it costs a lot of CPU (or in this case GPU) cycles
for very little visual gain.


http://people.mozilla.com/~sicking/webgl/juliaanim.html
http://people.mozilla.com/~sicking/webgl/mandjulia.html

Neither of the examples you posted seems to have the ability to zoom  
in, so I
don't think they show anything about doing this to extremely high  
accuracy.
But I see your point that much of this particular computation can  
be done on
the GPU, up to probably quite high limits. Replace this example  
with your

choice of non-data-parallel computation.

Following the links, this demo does do zoom, but it will go all  
jaggy past a

certain zoom level, presumably due to limitations of GLSL.
http://learningwebgl.com/lessons/example01/?


Indeed. Zooming is no problem at all and doesn't require any heavier
math than what is done in my demo.


Zooming does require more iterations to get an accurate edge, and  
WebGL has to limit your loop cycles at some point to prevent locking  
up the GPU. But of course once you are at that level it would be  
pretty darn slow on a CPU. I have seen Mandelbrot demos that allow  
essentially arbitrary zoom (or at least, the limit would be the size  
of your RAM, not the size of a float).
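
For reference, the escape-time core that both constraints act on, as a
plain-JavaScript sketch (a GPU version is the same loop, with the driver
effectively capping maxIter):

    function mandelbrotIter(cx, cy, maxIter) {
      var x = 0, y = 0, i = 0;
      // Iterate z -> z^2 + c until escape or the budget runs out.
      while (x * x + y * y <= 4 && i < maxIter) {
        var xt = x * x - y * y + cx;
        y = 2 * x * y + cy;
        x = xt;
        i++;
      }
      return i; // maps to a colour; deeper zooms need a larger maxIter
    }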



I experimented with allowing the
animations to be stopped at arbitrary points and then allowing
zooming. However it required more UI work than I was interested in
doing at the time.

The reason the demo goes jaggy after a while is due to limitations in
IEEE 754 floats.


On the CPU you could go past that if you cared to by coding your own  
high precision math. But it would be quite slow.




But I should clarify that my point wasn't that WebGL makes
off-main-thread graphics processing unneeded. I just thought it was
funny that the two examples you brought up were exactly the things
that I had experimented with. Although I wouldn't be surprised if a
lot of the image processing effects that people want to do can be
written as shader programs. Would definitely be interesting to know if
WebGL could be supported on workers.


I'm very much interested in the possibility of WebGL on Workers, which  
is why I suggested, when reviewing early drafts of this proposal, that  
the object should be an OffscreenCanvas rather than a special Worker- 
only version of a 2d context (with implied built-in buffer). This  
makes it possible to extend it to include WebGL.


Regards,
Maciej



Re: [whatwg] Offscreen canvas (or canvas for web workers).

2010-02-24 Thread Andrew Grieve
Regarding the three steps of offscreen rendering:

1. Draw stuff
2. Ship pixels to main thread
3. Draw them on the screen.

How would you do #2 efficiently? If you used toDataURL(), then you have to
encode a PNG on one side and then decode the PNG on the main thread. I think
we might want to add some sort of API for blitting directly from an
offscreen canvas to an onscreen one. Perhaps via a canvas ID.

Andrew


On Wed, Feb 24, 2010 at 6:12 AM, Maciej Stachowiak m...@apple.com wrote:


 On Feb 24, 2010, at 1:35 AM, Jonas Sicking wrote:

  On Wed, Feb 24, 2010 at 12:14 AM, Maciej Stachowiak m...@apple.com
 wrote:


 On Feb 24, 2010, at 12:09 AM, Maciej Stachowiak wrote:

 On Feb 23, 2010, at 10:04 PM, Jonas Sicking wrote:

 On Tue, Feb 23, 2010 at 9:57 PM, Maciej Stachowiak m...@apple.com
 wrote:

 - Raytracing a complex scene at high resolution.

 - Drawing a highly zoomed in high resolution portion of the Mandelbrot
 set.

 To be fair though, you could compute the pixels for those with just math,

 there is no need to have a graphics context type abstraction.

 http://people.mozilla.com/~sicking/webgl/ray.html

 I did not think it was possible to write a proper raytracer for arbitrary
 content all as a shader program, but I do not know enough about 3D
 graphics
 to know if that demo is correct or if that is possible in general. Point
 conceded though.


 The big thing that GLSL is lacking is a stack, making it impossible to
 recurse properly. This isn't a huge problem to work around, though it can
 result in ugly code. Especially if you want to support transparent
 objects, in which case you'll essentially have to unroll recursion
 manually by copying code.

 This of course makes it impossible to recurse to arbitrary levels,
 though that is something you generally don't want to do anyway in a
 ray tracer since it costs a lot of CPU (or in this case GPU) cycles
 for very little visual gain.

  http://people.mozilla.com/~sicking/webgl/juliaanim.html
 http://people.mozilla.com/~sicking/webgl/mandjulia.html

 Neither of the examples you posted seems to have the ability to zoom in, so I
 don't think they show anything about doing this to extremely high
 accuracy.
 But I see your point that much of this particular computation can be done
 on
 the GPU, up to probably quite high limits. Replace this example with your
 choice of non-data-parallel computation.

 Following the links, this demo does do zoom, but it will go all jaggy
 past a
 certain zoom level, presumably due to limitations of GLSL.
 http://learningwebgl.com/lessons/example01/?


 Indeed. Zooming is no problem at all and doesn't require any heavier
 math than what is done in my demo.


 Zooming does require more iterations to get an accurate edge, and WebGL has
 to limit your loop cycles at some point to prevent locking up the GPU. But
 of course once you are at that level it would be pretty darn slow on a CPU.
 I have seen mandelbrot demos that allow essentially arbitrary zoom (or at
 least, the limit would be the size of your RAM, not the size of a float).


  I experimented with allowing the
 animations to be stopped at arbitrary points and then allowing
 zooming. However it required more UI work than I was interested in
 doing at the time.

 The reason the demo goes jaggy after a while is due to limitations in
 IEEE 754 floats.


 On the CPU you could go past that if you cared to by coding your own high
 precision math. But it would be quite slow.



 But I should clarify that my point wasn't that WebGL makes
 off-main-thread graphics processing unneeded. I just thought it was
 funny that the two examples you brought up were exactly the things
 that I had experimented with. Although I wouldn't be surprised if a
 lot of the image processing effects that people want to do can be
 written as shader programs. Would definitely be interesting to know if
 WebGL could be supported on workers.


 I'm very much interested in the possibility of WebGL on Workers, which is
 why I suggested, when reviewing early drafts of this proposal, that the
 object should be an OffscreenCanvas rather than a special Worker-only
 version of a 2d context (with implied built-in buffer). This makes it
 possible to extend it to include WebGL.

 Regards,
 Maciej




Re: [whatwg] Offscreen canvas (or canvas for web workers).

2010-02-24 Thread Anne van Kesteren
On Wed, 24 Feb 2010 16:45:44 +0100, Andrew Grieve agri...@google.com  
wrote:
How would you do #2 efficiently? If you used toDataURL(), then you
have to
encode a PNG on one side and then decode the PNG on the main thread. I
think

we might want to add some sort of API for blitting directly from an
offscreen canvas to an onscreen one. Perhaps via a canvas ID.


postMessage() supports ImageData so you could use that.
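
A sketch of that path, assuming the proposed worker-side OffscreenCanvas
(its construction is not spelled out in the proposal, so the first line
of the worker is hypothetical; the file name and canvas id are invented
too). postMessage, onmessage, getImageData and putImageData are existing
APIs:

    // worker.js
    var off = new OffscreenCanvas();   // hypothetical construction
    off.width = 256; off.height = 256;
    var ctx = off.getContext('2d');
    // ... expensive drawing here ...
    postMessage(ctx.getImageData(0, 0, 256, 256));

    // main page
    var worker = new Worker('worker.js');
    var onscreen = document.getElementById('view').getContext('2d');
    worker.onmessage = function (e) {
      onscreen.putImageData(e.data, 0, 0); // one blit, no PNG round trip
    };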


--
Anne van Kesteren
http://annevankesteren.nl/


Re: [whatwg] Offscreen canvas (or canvas for web workers).

2010-02-23 Thread Jeremy Orlow
On Tue, Feb 23, 2010 at 12:46 AM, Jonas Sicking jo...@sicking.cc wrote:

 On Mon, Feb 22, 2010 at 4:34 PM, Jeremy Orlow jor...@chromium.org wrote:
  On Tue, Feb 23, 2010 at 12:05 AM, Jonas Sicking jo...@sicking.cc
 wrote:
 
  On Mon, Feb 22, 2010 at 3:43 PM, Jeremy Orlow jor...@chromium.org
 wrote:
   On Mon, Feb 22, 2010 at 11:10 PM, Jonas Sicking jo...@sicking.cc
   wrote:
  
   On Mon, Feb 22, 2010 at 11:13 AM, David Levin le...@google.com
 wrote:
I've talked with some other folks on WebKit (Maciej and Oliver)
 about
having
a canvas that is available to workers. They suggested some nice
modifications to make it an offscreen canvas, which may be used in
the
Document or in a Worker.
  
   What is the use case for this? It seems like in most cases you'll
 want
   to display something on screen to the user, and so the difference
   comes down to shipping drawing commands across the pipe, vs. shipping
   the pixel data.
  
   Sometimes the commands take up a lot more CPU power than shipping the
   pixels.  Let's say you wanted to have a really rich map application
 that
   looked great, was highly interactive/fluid, but didn't use a lot of
   bandwidth.  Rendering different parts of the screen on different
 workers
   seems like a legit use.
 
  I admit to not being a graphics expert, but I would imagine you have
  to do quite a lot of drawing before
  1. Drawing on offscreen canvas
  2. Cloning the pixel data in order to ship it to a different thread
  3. Drawing the pixel data to the on-screen canvas
 
  Presumably a smart UA implementation could make 1 and 3 be nearly nothing
  (with copy on write and such) in many cases.

 Huh? I thought the whole point was that 1 was expensive, which was why
 you wanted to do it off the main thread.

 And 3 is what puts pixels on the screen so I don't see how you could
 do that without copying. You could possibly implement 3 using
 blitting, but that's still not nearly nothing.

 Possibly 2 is what you could get rid of using copy-on-write.

  gets to be cheaper than
 
  1. Drawing to on-screen canvas.
 
  You're assuming only one core.  The norm on the desktops and laptops
 these
  days is multiple cores.

 I did not assume that, no. But it sounded like your use case was to
 rasterize off the main thread, get the pixels to the main thread, and
 then draw there. The last part (step 3 above) will definitely happen
 on the main thread no matter how many cores you have.


Sorry, I didn't read clearly before sending.

Yes, 1 would presumably be expensive and thus worth doing on a worker.  Even
on a single core machine, workers are great for long tasks since you needn't
add breaks to your code via setTimeout so the UI can be updated (which
doesn't always work perfectly anyway).

2 could be done with copy on write in many cases.  3 is just blitting which
is generally a pretty fast operation.


I've gotten a couple responses back on use cases.  I'll admit that about
half are simply resize/rotate.  The others I've been told I cannot talk about
publicly.  I think my map tiles example is similar enough to a bunch of them
though that you can get at least the conceptual idea.  And my understanding
is that in prototypes, it's been very hard to do the (computationally
complex) rendering without making the UI go janky.  There might be some more
use cases that come up that I can generalize into a public form, in which
case I'll do my best.


Re: [whatwg] Offscreen canvas (or canvas for web workers).

2010-02-23 Thread Darin Fisher
On Mon, Feb 22, 2010 at 4:05 PM, Jonas Sicking jo...@sicking.cc wrote:

 On Mon, Feb 22, 2010 at 3:43 PM, Jeremy Orlow jor...@chromium.org wrote:
  On Mon, Feb 22, 2010 at 11:10 PM, Jonas Sicking jo...@sicking.cc
 wrote:
 
  On Mon, Feb 22, 2010 at 11:13 AM, David Levin le...@google.com wrote:
   I've talked with some other folks on WebKit (Maciej and Oliver) about
   having
   a canvas that is available to workers. They suggested some nice
   modifications to make it an offscreen canvas, which may be used in the
   Document or in a Worker.
 
  What is the use case for this? It seems like in most cases you'll want
  to display something on screen to the user, and so the difference
  comes down to shipping drawing commands across the pipe, vs. shipping
  the pixel data.
 
  Sometimes the commands take up a lot more CPU power than shipping the
  pixels.  Let's say you wanted to have a really rich map application that
  looked great, was highly interactive/fluid, but didn't use a lot of
  bandwidth.  Rendering different parts of the screen on different workers
  seems like a legit use.

 I admit to not being a graphics expert, but I would imagine you have
 to do quite a lot of drawing before
 1. Drawing on offscreen canvas
 2. Cloning the pixel data in order to ship it to a different thread
 3. Drawing the pixel data to the on-screen canvas


The pixel copies are not as expensive as you might imagine.  (You just
described how rendering works in Chrome.)  Step #1 can vastly dominate if
drawing is complex.

Imagine if it involved something as complicated and expensive as rendering a
web page.  Doing work that expensive on a background thread becomes
imperative to maintaining good responsiveness of the main UI thread of the
application, so the extra copies can be well worth the cost.

-Darin




 gets to be cheaper than

 1. Drawing to on-screen canvas.

  The other use case I can think of is doing image manipulation and then
  sending the result directly to the server, without ever displaying it
  to the user. However this is first of all not supported by the
  suggested API, and second I can't think of any image manipulation that
  you wouldn't want to display to the user except for scaling down a
  high resolution image. But that seems like a much simpler API than all
  of canvas. And again, not even this simple use case is supported by
  the current API.
 
  OK, so you solve this one problem.  Then soon enough someone wants to do
 something more than just scale an image.  So you add another one-off
  solution.  Then another.  Next thing you've essentially created canvas
  prime

 We've always started with use cases and then created APIs that
 fulfill those use cases, rather than coming up with APIs and hoping
 that they fulfill some future use case. That seems like a much wiser path
 here too.

  I'll note that there are a bunch of teams that want this behavior, though
 I
  can't remember exactly what for.

 But you're sure that it fulfills their requirements? ;-)

  At least some of it is simple image
  resizing type stuff.  Most of it is related to doing image manipulation
 work
  that the app is probably going to need soon (but isn't on the screen
  yet...and that we don't want to slow the main thread for).
  Really, if you use Picasa (or iPhoto or some other competitor) it really
  isn't hard to think of a lot of uses for this.  Even for non-photo Apps
  (like Bespin) I could totally see it being worth it to them to do some
  rendering off the main loop.

 For many of these things you want to display the image to the user at
 the same time as the

  To be honest, I think the applications are largely self-evident...especially
  if you think about taking rich desktop apps and making them web apps.

 So Picasa and/or iPhoto uses off-main-thread *drawing* (not image
 scaling) today?

   Are
  you sure that your negativity towards an offscreen canvas isn't simply
  being driven by implementation-related worries?

 Quite certain. I can promise that, for every API suggested, if there
 are no use cases included, and no one else asks, I will ask what the
 use case is.

 / Jonas



Re: [whatwg] Offscreen canvas (or canvas for web workers).

2010-02-23 Thread Jeremy Orlow
On Tue, Feb 23, 2010 at 11:31 AM, Jeremy Orlow jor...@chromium.org wrote:

 On Tue, Feb 23, 2010 at 12:46 AM, Jonas Sicking jo...@sicking.cc wrote:

 On Mon, Feb 22, 2010 at 4:34 PM, Jeremy Orlow jor...@chromium.org
 wrote:
  On Tue, Feb 23, 2010 at 12:05 AM, Jonas Sicking jo...@sicking.cc
 wrote:
 
  On Mon, Feb 22, 2010 at 3:43 PM, Jeremy Orlow jor...@chromium.org
 wrote:
   On Mon, Feb 22, 2010 at 11:10 PM, Jonas Sicking jo...@sicking.cc
   wrote:
  
   On Mon, Feb 22, 2010 at 11:13 AM, David Levin le...@google.com
 wrote:
I've talked with some other folks on WebKit (Maciej and Oliver)
 about
having
a canvas that is available to workers. They suggested some nice
modifications to make it an offscreen canvas, which may be used in
the
Document or in a Worker.
  
   What is the use case for this? It seems like in most cases you'll
 want
   to display something on screen to the user, and so the difference
   comes down to shipping drawing commands across the pipe, vs.
 shipping
   the pixel data.
  
   Sometimes the commands take up a lot more CPU power than shipping the
   pixels.  Let's say you wanted to have a really rich map application
 that
   looked great, was highly interactive/fluid, but didn't use a lot of
   bandwidth.  Rendering different parts of the screen on different
 workers
   seems like a legit use.
 
  I admit to not being a graphics expert, but I would imagine you have
  to do quite a lot of drawing before
  1. Drawing on offscreen canvas
  2. Cloning the pixel data in order to ship it to a different thread
  3. Drawing the pixel data to the on-screen canvas
 
  Presumably a smart UA implementation could make 1 and 3 be nearly
 nothing
  (with copy on write and such) in many cases.

 Huh? I thought the whole point was that 1 was expensive, which was why
 you wanted to do it off the main thread.

 And 3 is what puts pixels on the screen so I don't see how you could
 do that without copying. You could possibly implement 3 using
 blitting, but that's still not nearly nothing.

 Possibly 2 is what you could get rid of using copy-on-write.

  gets to be cheaper than
 
  1. Drawing to on-screen canvas.
 
  You're assuming only one core.  The norm on the desktops and laptops
 these
  days is multiple cores.

 I did not assume that, no. But it sounded like your use case was to
 rasterize off the main thread, get the pixels to the main thread, and
 then draw there. The last part (step 3 above) will definitely happen
 on the main thread no matter how many cores you have.


 Sorry, I didn't read clearly before sending.

 Yes, 1 would presumably be expensive and thus worth doing on a worker.
  Even on a single core machine, workers are great for long tasks since you
 needn't add breaks to your code via setTimeout so the UI can be updated
 (which doesn't always work perfectly anyway).

 2 could be done with copy on write in many cases.  3 is just blitting which
 is generally a pretty fast operation.


 I've gotten a couple responses back on use cases.  I'll admit that about
 half are simply resize/rotate.  The others I've been told I cannot talk about
 publicly.  I think my map tiles example is similar enough to a bunch of them
 though that you can get at least the conceptual idea.  And my understanding
 is that in prototypes, it's been very hard to do the (computationally
 complex) rendering without making the UI go janky.  There might be some more
 use cases that come up that I can generalize into a public form, in which
 case I'll do my best.


Note that doing rendering in a worker and then displaying it on the main
thread also gives you double buffering for no additional cost.  This is
something our teams are excited about as well.


Re: [whatwg] Offscreen canvas (or canvas for web workers).

2010-02-23 Thread Maciej Stachowiak


On Feb 23, 2010, at 7:40 AM, Jeremy Orlow wrote:



Note that doing rendering in a worker and then displaying it on the  
main thread also gives you double buffering for no additional  
cost.  This is something our teams are excited about as well.


While I think the use cases presented for background thread image  
manipulation are valid, I'm confused about this claim:


(A) canvas implementations do double-buffering anyway, to avoid  
drawing intermediate states in the middle of JS execution.
(B) If you want triple-buffering (maybe you want to use multiple event  
loop cycles to draw the next frame), you don't need a Worker to do it.  
You can



BTW some examples of shipping pixels being much less expensive than
shipping drawing commands:


- Raytracing a complex scene at high resolution.
- Drawing a highly zoomed in high resolution portion of the Mandelbrot  
set.


To be fair though, you could compute the pixels for those with just  
math, there is no need to have a graphics context type abstraction.


Regards,
Maciej



Re: [whatwg] Offscreen canvas (or canvas for web workers).

2010-02-23 Thread Jonas Sicking
On Tue, Feb 23, 2010 at 9:57 PM, Maciej Stachowiak m...@apple.com wrote:
 - Raytracing a complex scene at high resolution.
 - Drawing a highly zoomed in high resolution portion of the Mandelbrot set.

 To be fair though, you could compute the pixels for those with just math,
 there is no need to have a graphics context type abstraction.

http://people.mozilla.com/~sicking/webgl/ray.html
http://people.mozilla.com/~sicking/webgl/juliaanim.html
http://people.mozilla.com/~sicking/webgl/mandjulia.html

Done and done, no need for workers ;-)

/ Jonas


Re: [whatwg] Offscreen canvas (or canvas for web workers).

2010-02-23 Thread Jonas Sicking
On Tue, Feb 23, 2010 at 10:04 PM, Jonas Sicking jo...@sicking.cc wrote:
 On Tue, Feb 23, 2010 at 9:57 PM, Maciej Stachowiak m...@apple.com wrote:
 - Raytracing a complex scene at high resolution.
 - Drawing a highly zoomed in high resolution portion of the Mandelbrot set.

 To be fair though, you could compute the pixels for those with just math,
 there is no need to have a graphics context type abstraction.

 http://people.mozilla.com/~sicking/webgl/ray.html
 http://people.mozilla.com/~sicking/webgl/juliaanim.html
 http://people.mozilla.com/~sicking/webgl/mandjulia.html

 Done and done, no need for workers ;-)

(Updated to support recent webkit and chromium nightlies)

/ Jonas


[whatwg] Offscreen canvas (or canvas for web workers).

2010-02-22 Thread David Levin
I've talked with some other folks on WebKit (Maciej and Oliver) about having
a canvas that is available to workers. They suggested some nice
modifications to make it an offscreen canvas, which may be used in the
Document or in a Worker.

Proposal:
Introduce an OffscreenCanvas which may be created from a Document or a
Worker context.

interface OffscreenCanvas {
 attribute unsigned long width;
 attribute unsigned long height;
DOMString toDataURL (in optional DOMString type, in any... args);
object getContext(in DOMString contextId);
};


When it is created in the Worker context, OffscreenCanvas.getContext('2d')
returns a CanvasWorkerContext2D. In the Document context, it returns a
CanvasRenderingContext2D.
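
In usage terms (a sketch; the proposal leaves open how an
OffscreenCanvas is constructed, so the first line is hypothetical):

    var off = new OffscreenCanvas();   // hypothetical construction
    off.width = 128;
    off.height = 128;
    var ctx = off.getContext('2d');
    // In a Worker, ctx is a CanvasWorkerContext2D; in a Document it is
    // a CanvasRenderingContext2D. Both share the CanvasContext2D
    // members listed below.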

The base class for both CanvasWorkerContext2D and CanvasRenderingContext2D
is CanvasContext2D. CanvasContext2D is just like a CanvasRenderingContext2D
except for omitting the font methods and any method which uses HTML
elements. It does have some replacement methods for createPattern/drawImage
which take an OffscreenCanvas. The canvas object attribute is either a
HTMLCanvasElement or an OffscreenCanvas depending on what the canvas context
came from.

interface CanvasContext2D {
readonly attribute object canvas;

void save();
void restore();

void scale(in float sx, in float sy);
void rotate(in float angle);
void translate(in float tx, in float ty);
void transform(in float m11, in float m12, in float m21, in float
m22, in float dx, in float dy);
void setTransform(in float m11, in float m12, in float m21, in float
m22, in float dx, in float dy);

 attribute float globalAlpha;
 attribute [ConvertNullToNullString] DOMString
globalCompositeOperation;

CanvasGradient createLinearGradient(in float x0, in float y0, in
float x1, in float y1)
raises (DOMException);
CanvasGradient createRadialGradient(in float x0, in float y0, in
float r0, in float x1, in float y1, in float r1)
raises (DOMException);
CanvasPattern createPattern(in OffscreenCanvas image, in DOMString
repetition);

 attribute float lineWidth;
 attribute [ConvertNullToNullString] DOMString lineCap;
 attribute [ConvertNullToNullString] DOMString lineJoin;
 attribute float miterLimit;

 attribute float shadowOffsetX;
 attribute float shadowOffsetY;
 attribute float shadowBlur;
 attribute [ConvertNullToNullString] DOMString shadowColor;

void clearRect(in float x, in float y, in float width, in float
height);
void fillRect(in float x, in float y, in float width, in float
height);
void strokeRect(in float x, in float y, in float w, in float h);

void beginPath();
void closePath();
void moveTo(in float x, in float y);
void lineTo(in float x, in float y);
void quadraticCurveTo(in float cpx, in float cpy, in float x, in
float y);
void bezierCurveTo(in float cp1x, in float cp1y, in float cp2x, in
float cp2y, in float x, in float y);
void arcTo(in float x1, in float y1, in float x2, in float y2, in
float radius);
void rect(in float x, in float y, in float width, in float height);
void arc(in float x, in float y, in float radius, in float
startAngle, in float endAngle, in boolean anticlockwise);
void fill();
void stroke();
void clip();
boolean isPointInPath(in float x, in float y);

void drawImage(in OffscreenCanvas image, in float dx, in float dy,
in optional float dw, in optional float dh);
void drawImage(in OffscreenCanvas image, in float sx, in float sy,
in float sw, in float sh, in float dx, in float dy, in float dw, in float
dh);

// pixel manipulation
ImageData createImageData(in float sw, in float sh)
raises (DOMException);
ImageData getImageData(in float sx, in float sy, in float sw, in
float sh)
raises(DOMException);
void putImageData(in ImageData imagedata, in float dx, in float dy,
in optional float dirtyX, in optional float dirtyY, in optional float
dirtyWidth, in optional float dirtyHeight);
};

interface CanvasWorkerContext2D : CanvasContext2D {
};

interface CanvasRenderingContext2D : CanvasContext2D {
 CanvasPattern createPattern(in HTMLImageElement image, in DOMString
repetition);
 CanvasPattern createPattern(in HTMLCanvasElement image, in
DOMString repetition);
 CanvasPattern createPattern(in HTMLVideoElement image, in DOMString
repetition);

 // focus management
 boolean drawFocusRing(in Element element, in float xCaret, in float
yCaret, in optional boolean canDrawCustom);

// text
 attribute DOMString font;
 attribute DOMString textAlign;
  

Re: [whatwg] Offscreen canvas (or canvas for web workers).

2010-02-22 Thread Michael Nordman
The lack of support for text drawing in the worker context seems like a
short-sighted mistake. I understand there may be implementation issues in
some browsers, but lack of text support feels like a glaring omission
spec-wise.

On Mon, Feb 22, 2010 at 11:13 AM, David Levin le...@google.com wrote:

 I've talked with some other folks on WebKit (Maciej and Oliver) about
 having a canvas that is available to workers. They suggested some nice
 modifications to make it an offscreen canvas, which may be used in the
 Document or in a Worker.

 Proposal:
 Introduce an OffscreenCanvas which may be created from a Document or a
 Worker context.

 interface OffscreenCanvas {
  attribute unsigned long width;
  attribute unsigned long height;
 DOMString toDataURL (in optional DOMString type, in any... args);
 object getContext(in DOMString contextId);
 };


 When it is created in the Worker context, OffscreenCanvas.getContext('2d')
 returns a CanvasWorkerContext2D. In the Document context, it returns a
 CanvasRenderingContext2D.

 The base class for both CanvasWorkerContext2D and CanvasRenderingContext2D
 is CanvasContext2D. CanvasContext2D is just like a CanvasRenderingContext2D
 except for omitting the font methods and any method which uses HTML
 elements. It does have some replacement methods for createPattern/drawImage
 which take an OffscreenCanvas. The canvas object attribute is either a
 HTMLCanvasElement or an OffscreenCanvas depending on what the canvas context
 came from.

 interface CanvasContext2D {
 readonly attribute object canvas;

 void save();
 void restore();

 void scale(in float sx, in float sy);
 void rotate(in float angle);
 void translate(in float tx, in float ty);
 void transform(in float m11, in float m12, in float m21, in float
 m22, in float dx, in float dy);
 void setTransform(in float m11, in float m12, in float m21, in
 float m22, in float dx, in float dy);

  attribute float globalAlpha;
  attribute [ConvertNullToNullString] DOMString
 globalCompositeOperation;

 CanvasGradient createLinearGradient(in float x0, in float y0, in
 float x1, in float y1)
 raises (DOMException);
 CanvasGradient createRadialGradient(in float x0, in float y0, in
 float r0, in float x1, in float y1, in float r1)
 raises (DOMException);
 CanvasPattern createPattern(in OffscreenCanvas image, in DOMString
 repetition);

  attribute float lineWidth;
  attribute [ConvertNullToNullString] DOMString lineCap;
  attribute [ConvertNullToNullString] DOMString lineJoin;
  attribute float miterLimit;

  attribute float shadowOffsetX;
  attribute float shadowOffsetY;
  attribute float shadowBlur;
  attribute [ConvertNullToNullString] DOMString shadowColor;

 void clearRect(in float x, in float y, in float width, in float
 height);
 void fillRect(in float x, in float y, in float width, in float
 height);
 void strokeRect(in float x, in float y, in float w, in float h);

 void beginPath();
 void closePath();
 void moveTo(in float x, in float y);
 void lineTo(in float x, in float y);
 void quadraticCurveTo(in float cpx, in float cpy, in float x, in
 float y);
 void bezierCurveTo(in float cp1x, in float cp1y, in float cp2x, in
 float cp2y, in float x, in float y);
 void arcTo(in float x1, in float y1, in float x2, in float y2, in
 float radius);
 void rect(in float x, in float y, in float width, in float height);
 void arc(in float x, in float y, in float radius, in float
 startAngle, in float endAngle, in boolean anticlockwise);
 void fill();
 void stroke();
 void clip();
 boolean isPointInPath(in float x, in float y);

 void drawImage(in OffscreenCanvas image, in float dx, in float dy,
 in optional float dw, in optional float dh);
 void drawImage(in OffscreenCanvas image, in float sx, in float sy,
 in float sw, in float sh, in float dx, in float dy, in float dw, in float
 dh);

 // pixel manipulation
 ImageData createImageData(in float sw, in float sh)
 raises (DOMException);
 ImageData getImageData(in float sx, in float sy, in float sw, in
 float sh)
 raises(DOMException);
 void putImageData(in ImageData imagedata, in float dx, in float dy,
 in optional float dirtyX, in optional float dirtyY, in optional float
 dirtyWidth, in optional float dirtyHeight);
 };

 interface CanvasWorkerContext2D : CanvasContext2D {
 };

 interface CanvasRenderingContext2D : CanvasContext2D {
   CanvasPattern createPattern(in HTMLImageElement image, in
 DOMString repetition);
  CanvasPattern createPattern(in HTMLCanvasElement 

Re: [whatwg] Offscreen canvas (or canvas for web workers).

2010-02-22 Thread Drew Wilson
On Mon, Feb 22, 2010 at 11:13 AM, David Levin le...@google.com wrote:

 I've talked with some other folks on WebKit (Maciej and Oliver) about
 having a canvas that is available to workers. They suggested some nice
 modifications to make it an offscreen canvas, which may be used in the
 Document or in a Worker.

 Proposal:
 Introduce an OffscreenCanvas which may be created from a Document or a
 Worker context.

 interface OffscreenCanvas {
  attribute unsigned long width;
  attribute unsigned long height;
 DOMString toDataURL (in optional DOMString type, in any... args);
 object getContext(in DOMString contextId);
 };


 When it is created in the Worker context, OffscreenCanvas.getContext('2d')
 returns a CanvasWorkerContext2D. In the Document context, it returns a
 CanvasRenderingContext2D.

 The base class for both CanvasWorkerContext2D and CanvasRenderingContext2D
 is CanvasContext2D. CanvasContext2D is just like a CanvasRenderingContext2D
 except for omitting the font methods and any method which uses HTML
 elements. It does have some replacement methods for createPattern/drawImage
 which take an OffscreenCanvas. The canvas object attribute is either a
 HTMLCanvasElement or an OffscreenCanvas depending on what the canvas context
 came from.

 interface CanvasContext2D {
 readonly attribute object canvas;

 void save();
 void restore();

 void scale(in float sx, in float sy);
 void rotate(in float angle);
 void translate(in float tx, in float ty);
 void transform(in float m11, in float m12, in float m21, in float
 m22, in float dx, in float dy);
 void setTransform(in float m11, in float m12, in float m21, in
 float m22, in float dx, in float dy);

  attribute float globalAlpha;
  attribute [ConvertNullToNullString] DOMString
 globalCompositeOperation;

 CanvasGradient createLinearGradient(in float x0, in float y0, in
 float x1, in float y1)
 raises (DOMException);
 CanvasGradient createRadialGradient(in float x0, in float y0, in
 float r0, in float x1, in float y1, in float r1)
 raises (DOMException);
 CanvasPattern createPattern(in OffscreenCanvas image, in DOMString
 repetition);

  attribute float lineWidth;
  attribute [ConvertNullToNullString] DOMString lineCap;
  attribute [ConvertNullToNullString] DOMString lineJoin;
  attribute float miterLimit;

  attribute float shadowOffsetX;
  attribute float shadowOffsetY;
  attribute float shadowBlur;
  attribute [ConvertNullToNullString] DOMString shadowColor;

 void clearRect(in float x, in float y, in float width, in float
 height);
 void fillRect(in float x, in float y, in float width, in float
 height);
 void strokeRect(in float x, in float y, in float w, in float h);

 void beginPath();
 void closePath();
 void moveTo(in float x, in float y);
 void lineTo(in float x, in float y);
 void quadraticCurveTo(in float cpx, in float cpy, in float x, in
 float y);
 void bezierCurveTo(in float cp1x, in float cp1y, in float cp2x, in
 float cp2y, in float x, in float y);
 void arcTo(in float x1, in float y1, in float x2, in float y2, in
 float radius);
 void rect(in float x, in float y, in float width, in float height);
 void arc(in float x, in float y, in float radius, in float
 startAngle, in float endAngle, in boolean anticlockwise);
 void fill();
 void stroke();
 void clip();
 boolean isPointInPath(in float x, in float y);

 void drawImage(in OffscreenCanvas image, in float dx, in float dy,
 in optional float dw, in optional float dh);
 void drawImage(in OffscreenCanvas image, in float sx, in float sy,
 in float sw, in float sh, in float dx, in float dy, in float dw, in float
 dh);

 // pixel manipulation
 ImageData createImageData(in float sw, in float sh)
 raises (DOMException);
 ImageData getImageData(in float sx, in float sy, in float sw, in
 float sh)
 raises(DOMException);
 void putImageData(in ImageData imagedata, in float dx, in float dy,
 in optional float dirtyX, in optional float dirtyY, in optional float
 dirtyWidth, in optional float dirtyHeight);
 };

 interface CanvasWorkerContext2D : CanvasContext2D {
 };

 interface CanvasRenderingContext2D : CanvasContext2D {
   CanvasPattern createPattern(in HTMLImageElement image, in
 DOMString repetition);
  CanvasPattern createPattern(in HTMLCanvasElement image, in
 DOMString repetition);
  CanvasPattern createPattern(in HTMLVideoElement image, in
 DOMString repetition);

  // focus management
  boolean drawFocusRing(in Element element, in float xCaret, 

Re: [whatwg] Offscreen canvas (or canvas for web workers).

2010-02-22 Thread Maciej Stachowiak


On Feb 22, 2010, at 11:13 AM, David Levin wrote:

I've talked with some other folks on WebKit (Maciej and Oliver)  
about having a canvas that is available to workers. They suggested  
some nice modifications to make it an offscreen canvas, which may be  
used in the Document or in a Worker.


Comments:

1) Like others, I would recommend not omitting the text APIs. I can  
see why they are a bit trickier to implement than the others, but I  
don't see a fundamental barrier.


2) I would propose adding createPattern and drawImage overloads that  
take an OffscreenCanvas. The other overloads would in practice not be  
usefully callable in the worker case since you couldn't get an image,  
canvas or video element.


3) This would leave the only difference between the two interfaces as  
the drawFocusRing method. This would not be usefully callable in a  
worker, since there would be no way to get an Element. But it doesn't  
seem worth it to add an interface just for one method's worth of  
difference.


Regards,
Maciej



Proposal:
Introduce an OffscreenCanvas which may be created from a Document or  
a Worker context.


interface OffscreenCanvas {
 attribute unsigned long width;
 attribute unsigned long height;
DOMString toDataURL (in optional DOMString type, in any...  
args);

object getContext(in DOMString contextId);
};


When it is created in the Worker context, OffscreenCanvas.getContext('2d')
returns a CanvasWorkerContext2D. In the Document context, it  
returns a CanvasRenderingContext2D.


The base class for both CanvasWorkerContext2D and  
CanvasRenderingContext2D is CanvasContext2D. CanvasContext2D is just  
like a CanvasRenderingContext2D except for omitting the font methods  
and any method which uses HTML elements. It does have some  
replacement methods for createPattern/drawImage which take an  
OffscreenCanvas. The canvas object attribute is either a  
HTMLCanvasElement or an OffscreenCanvas depending on what the canvas  
context came from.


interface CanvasContext2D {
readonly attribute object canvas;

void save();
void restore();

void scale(in float sx, in float sy);
void rotate(in float angle);
void translate(in float tx, in float ty);
void transform(in float m11, in float m12, in float m21, in  
float m22, in float dx, in float dy);
void setTransform(in float m11, in float m12, in float m21,  
in float m22, in float dx, in float dy);


 attribute float globalAlpha;
 attribute [ConvertNullToNullString] DOMString  
globalCompositeOperation;


CanvasGradient createLinearGradient(in float x0, in float  
y0, in float x1, in float y1)

raises (DOMException);
CanvasGradient createRadialGradient(in float x0, in float  
y0, in float r0, in float x1, in float y1, in float r1)

raises (DOMException);
CanvasPattern createPattern(in OffscreenCanvas image, in  
DOMString repetition);


 attribute float lineWidth;
 attribute [ConvertNullToNullString] DOMString  
lineCap;
 attribute [ConvertNullToNullString] DOMString  
lineJoin;

 attribute float miterLimit;

 attribute float shadowOffsetX;
 attribute float shadowOffsetY;
 attribute float shadowBlur;
 attribute [ConvertNullToNullString] DOMString  
shadowColor;


void clearRect(in float x, in float y, in float width, in  
float height);
void fillRect(in float x, in float y, in float width, in  
float height);
void strokeRect(in float x, in float y, in float w, in float  
h);


void beginPath();
void closePath();
void moveTo(in float x, in float y);
void lineTo(in float x, in float y);
void quadraticCurveTo(in float cpx, in float cpy, in float  
x, in float y);
void bezierCurveTo(in float cp1x, in float cp1y, in float  
cp2x, in float cp2y, in float x, in float y);
void arcTo(in float x1, in float y1, in float x2, in float  
y2, in float radius);
void rect(in float x, in float y, in float width, in float  
height);
void arc(in float x, in float y, in float radius, in float  
startAngle, in float endAngle, in boolean anticlockwise);

void fill();
void stroke();
void clip();
boolean isPointInPath(in float x, in float y);

void drawImage(in OffscreenCanvas image, in float dx, in  
float dy, in optional float dw, in optional float dh);
void drawImage(in OffscreenCanvas image, in float sx, in  
float sy, in float sw, in float sh, in float dx, in float dy, in  
float dw, in float dh);


// pixel manipulation
ImageData createImageData(in float sw, in float sh)
raises (DOMException);
ImageData getImageData(in float sx, in float sy, in float  
sw, in float sh)

 

Re: [whatwg] Offscreen canvas (or canvas for web workers).

2010-02-22 Thread Jonas Sicking
On Mon, Feb 22, 2010 at 11:13 AM, David Levin le...@google.com wrote:
 I've talked with some other folks on WebKit (Maciej and Oliver) about having
 a canvas that is available to workers. They suggested some nice
 modifications to make it an offscreen canvas, which may be used in the
 Document or in a Worker.

What is the use case for this? It seems like in most cases you'll want
to display something on screen to the user, and so the difference
comes down to shipping drawing commands across the pipe, vs. shipping
the pixel data.

The other use case I can think of is doing image manipulation and then
sending the result directly to the server, without ever displaying it
to the user. However this is first of all not supported by the
suggested API, and second I can't think of any image manipulation that
you wouldn't want to display to the user except for scaling down a
high resolution image. But that seems like a much simpler API than all
of canvas. And again, not even this simple use case is supported by
the current API.

/ Jonas


Re: [whatwg] Offscreen canvas (or canvas for web workers).

2010-02-22 Thread Jeremy Orlow
On Mon, Feb 22, 2010 at 11:10 PM, Jonas Sicking jo...@sicking.cc wrote:

 On Mon, Feb 22, 2010 at 11:13 AM, David Levin le...@google.com wrote:
  I've talked with some other folks on WebKit (Maciej and Oliver) about
 having
  a canvas that is available to workers. They suggested some nice
  modifications to make it an offscreen canvas, which may be used in the
  Document or in a Worker.

 What is the use case for this? It seems like in most cases you'll want
 to display something on screen to the user, and so the difference
 comes down to shipping drawing commands across the pipe, vs. shipping
 the pixel data.


Sometimes the commands take up a lot more CPU power than shipping the
pixels.  Let's say you wanted to have a really rich map application that
looked great, was highly interactive/fluid, but didn't use a lot of
bandwidth.  Rendering different parts of the screen on different workers
seems like a legit use.
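
A sketch of that tiling setup from the page side, with the workers'
rendering elided; tile-renderer.js and the canvas id are invented names,
and the pixels are assumed to come back as 256x256 ImageData:

    var ctx = document.getElementById('map').getContext('2d');
    for (var i = 0; i < 4; i++) {
      (function (tile) {
        var w = new Worker('tile-renderer.js');
        w.onmessage = function (e) {
          // Each worker ships its finished tile back as ImageData.
          ctx.putImageData(e.data,
                           (tile % 2) * 256,
                           Math.floor(tile / 2) * 256);
        };
        w.postMessage(tile); // tell the worker which tile to render
      })(i);
    }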


 The other use case I can think of is doing image manipulation and then
 sending the result directly to the server, without ever displaying it
 to the user. However this is first of all not supported by the
 suggested API, and second I can't think of any image manipulation that
 you wouldn't want to display to the user except for scaling down a
 high resolution image. But that seems like a much simpler API than all
 of canvas. And again, not even this simple use case is supported by
 the current API.


OK, so you solve this one problem.  Then soon enough someone wants to do
something more than just scale an image.  So you add another one-off
solution.  Then another.  Next thing you've essentially created canvas
prime


I'll note that there are a bunch of teams that want this behavior, though I
can't remember exactly what for.  At least some of it is simple image
resizing type stuff.  Most of it is related to doing image manipulation work
that the app is probably going to need soon (but isn't on the screen
yet...and that we don't want to slow the main thread for).

Really, if you use Picasa (or iPhoto or some other competitor) it really
isn't hard to think of a lot of uses for this.  Even for non-photo Apps
(like Bespin) I could totally see it being worth it to them to do some
rendering off the main loop.

To be honest, I think the applications are largely self-evident...especially
if you think about taking rich desktop apps and making them web apps.  Are
you sure that your negativity towards an offscreen canvas isn't simply
being driven by implementation-related worries?

J


Re: [whatwg] Offscreen canvas (or canvas for web workers).

2010-02-22 Thread Jonas Sicking
On Mon, Feb 22, 2010 at 3:36 PM, David Levin le...@google.com wrote:


 On Mon, Feb 22, 2010 at 3:10 PM, Jonas Sicking jo...@sicking.cc wrote:

 On Mon, Feb 22, 2010 at 11:13 AM, David Levin le...@google.com wrote:
  I've talked with some other folks on WebKit (Maciej and Oliver) about
  having
  a canvas that is available to workers. They suggested some nice
  modifications to make it an offscreen canvas, which may be used in the
  Document or in a Worker.

 What is the use case for this? It seems like in most cases you'll want
 to display something on screen to the user, and so the difference
 comes down to shipping drawing commands across the pipe, vs. shipping
 the pixel data.

 The other use case I can think of is doing image manipulation and then
 sending the result directly to the server, without ever displaying it
 to the user. However this is first of all not supported by the
 suggested API, and second I can't think of any image manipulation that
 you wouldn't want to display to the user except for scaling down a
 high resolution image. But that seems like a much simpler API than all
 of canvas. And again, not even this simple use case is supported by
 the current API.


 A simple use case is image resizing, which is what started the last thread,
 and that is similar to the use case I heard internally.
 I don't understand what you mean about things not being supported.
 1. Given the current structure clone support, it is certainly possible to
 transfer image data to and from a worker, so it seems possible to display
 the result to the user. It is orthogonal to this feature but adding
 something like toFile (your proposal) and a corresponding fromFile/load
 would also aid in this (as well as aid in sending things to the server).
 2. Resize may be done using the scale(x, y) method.
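
A sketch of that resize path (point 2), assuming the proposed
CanvasWorkerContext2D; the OffscreenCanvas constructions are
hypothetical, and the source ImageData is assumed to arrive via
postMessage:

    // worker.js
    onmessage = function (e) {
      var src = e.data;                    // ImageData sent by the page
      var full = new OffscreenCanvas();    // hypothetical
      full.width = src.width; full.height = src.height;
      full.getContext('2d').putImageData(src, 0, 0);

      var thumb = new OffscreenCanvas();   // hypothetical
      thumb.width = src.width / 4; thumb.height = src.height / 4;
      var tctx = thumb.getContext('2d');
      tctx.scale(0.25, 0.25);              // the scale(x, y) in point 2
      tctx.drawImage(full, 0, 0);          // proposal's drawImage overload
      postMessage(tctx.getImageData(0, 0, thumb.width, thumb.height));
    };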

OK, if you ship the data to the main thread you can do it. However
this means copying the image in order to get it to the main thread, which
seems pretty suboptimal.

And again, if this is the use case, then it seems like we could
construct a far simpler API.

/ Jonas


Re: [whatwg] Offscreen canvas (or canvas for web workers).

2010-02-22 Thread Jonas Sicking
On Mon, Feb 22, 2010 at 3:43 PM, Jeremy Orlow jor...@chromium.org wrote:
 On Mon, Feb 22, 2010 at 11:10 PM, Jonas Sicking jo...@sicking.cc wrote:

 On Mon, Feb 22, 2010 at 11:13 AM, David Levin le...@google.com wrote:
  I've talked with some other folks on WebKit (Maciej and Oliver) about
  having
  a canvas that is available to workers. They suggested some nice
  modifications to make it an offscreen canvas, which may be used in the
  Document or in a Worker.

 What is the use case for this? It seems like in most cases you'll want
 to display something on screen to the user, and so the difference
 comes down to shipping drawing commands across the pipe, vs. shipping
 the pixel data.

 Sometimes the commands take up a lot more CPU power than shipping the
 pixels.  Let's say you wanted to have a really rich map application that
 looked great, was highly interactive/fluid, but didn't use a lot of
 bandwidth.  Rendering different parts of the screen on different workers
 seems like a legit use.

I admit to not being a graphics expert, but I would imagine you have
to do quite a lot of drawing before
1. Drawing on offscreen canvas
2. Cloning the pixel data in order to ship it to a different thread
3. Drawing the pixel data to the on-screen canvas

gets to be cheaper than

1. Drawing to on-screen canvas.

 The other use case I can think of is doing image manipulation and then
 sending the result directly to the server, without ever displaying it
 to the user. However this is first of all not supported by the
 suggested API, and second I can't think of any image manipulation that
 you wouldn't want to display to the user except for scaling down a
 high resolution image. But that seems like a much simpler API than all
 of canvas. And again, not even this simple use case is supported by
 the current API.

 OK, so you solve this one problem.  Then soon enough someone wants to do
 something more than just scale an image.  So you add another one-off
 solution.  Then another.  Next thing you've essentially created canvas
 prime

We've always started with use cases and then created APIs that
fulfill those use cases, rather than coming up with APIs and hoping
that they fulfill some future use case. That seems like a much wiser path
here too.

 I'll note that there are a bunch of teams that want this behavior, though I
 can't remember exactly what for.

But you're sure that it fulfills their requirements? ;-)

 At least some of it is simple image
 resizing type stuff.  Most of it is related to doing image manipulation work
 that the app is probably going to need soon (but isn't on the screen
 yet...and that we don't want to slow the main thread for).
  Really, if you use Picasa (or iPhoto or some other competitor) it really
 isn't hard to think of a lot of uses for this.  Even for non-photo Apps
 (like Bespin) I could totally see it being worth it to them to do some
 rendering off the main loop.

For many of these things you want to display the image to the user at
the same time as the

 To be honest, I think the applications are largely self-evident...especially
 if you think about taking rich desktop apps and making them web apps.

So Picasa and/or iPhoto uses off-main-thread *drawing* (not image
scaling) today?

  Are
 you sure that your negativity towards an offscreen canvas isn't simply
 being driven by implementation-related worries?

Quite certain. I can promise that, for every API suggested, if there
are no use cases included, and no one else asks, I will ask what the
use case is.

/ Jonas


Re: [whatwg] Offscreen canvas (or canvas for web workers).

2010-02-22 Thread Jeremy Orlow
On Tue, Feb 23, 2010 at 12:05 AM, Jonas Sicking jo...@sicking.cc wrote:

 On Mon, Feb 22, 2010 at 3:43 PM, Jeremy Orlow jor...@chromium.org wrote:
  On Mon, Feb 22, 2010 at 11:10 PM, Jonas Sicking jo...@sicking.cc
 wrote:
 
  On Mon, Feb 22, 2010 at 11:13 AM, David Levin le...@google.com wrote:
   I've talked with some other folks on WebKit (Maciej and Oliver) about
   having
   a canvas that is available to workers. They suggested some nice
   modifications to make it an offscreen canvas, which may be used in the
   Document or in a Worker.
 
  What is the use case for this? It seems like in most cases you'll want
  to display something on screen to the user, and so the difference
  comes down to shipping drawing commands across the pipe, vs. shipping
  the pixel data.
 
  Sometimes the commands take up a lot more CPU power than shipping the
   pixels.  Let's say you wanted to have a really rich map application that
  looked great, was highly interactive/fluid, but didn't use a lot of
  bandwidth.  Rendering different parts of the screen on different workers
  seems like a legit use.

 I admit to not being a graphics expert, but I would imagine you have
 to do quite a lot of drawing before
 1. Drawing on offscreen canvas
 2. Cloning the pixel data in order to ship it to a different thread
 3. Drawing the pixel data to the on-screen canvas


Presumably a smart UA implementation could make 1 and 3 be nearly nothing
(with copy on write and such) in many cases.


 gets to be cheaper than

 1. Drawing to on-screen canvas.


You're assuming only one core.  The norm on the desktops and laptops these
days is multiple cores.

  The other use case I can think of is doing image manipulation and then
  sending the result directly to the server, without ever displaying it
  to the user. However this is first of all not supported by the
  suggested API, and second I can't think of any image manipulation that
  you wouldn't want to display to the user except for scaling down a
  high resolution image. But that seems like a much simpler API than all
  of canvas. And again, not even this simple use case is supported by
  the current API.
 
  OK, so you solve this one problem.  Then soon enough someone wants to do
  something more than just scale an image.  So you add another one-off
  solution.  Then another.  Next thing you've essentially created canvas
  prime

 We've always started with use cases and then created APIs that
  fulfill those use cases, rather than coming up with APIs and hoping
  that they fulfill some future use case. That seems like a much wiser path
 here too.


I've pinged a couple people within Google to see if we can re-gather what
some of the original use cases were.  I'll admit that resizing and rotating
were definitely at the top of the list, but I believe vector based drawing
was there too.  Will report back on this when I have more data.


  I'll note that there are a bunch of teams that want this behavior, though
 I
  can't remember exactly what for.

 But you're sure that it fulfills their requirements? ;-)

  At least some of it is simple image
  resizing type stuff.  Most of it is related to doing image manipulation
 work
  that the app is probably going to need soon (but isn't on the screen
  yet...and that we don't want to slow the main thread for).
  Really, if you use Picasa (or iPhoto or some other competitor) it really
  isn't hard to think of a lot of uses for this.  Even for non-photo Apps
  (like Bespin) I could totally see it being worth it to them to do some
  rendering off the main loop.

 For many of these things you want to display the image to the user at
 the same time as the

  To be honest, I think the applications are largely self-evident...especially
  if you think about taking rich desktop apps and making them web apps.

 So Picasa and/or iPhoto uses off-main-thread *drawing* (not image
 scaling) today?


I don't know.  But you're probably right that scaling (and rotating) is
probably the bulk of what is computationally expensive.


Are
  you sure that your negativity towards an offscreen canvas isn't simply
  being driven by implementation-related worries?

 Quite certain. I can promise that, for every API suggested, if there
 are no use cases included, and no one else asks, I will ask what the
 use case is.


Fair enough.


Re: [whatwg] Offscreen canvas (or canvas for web workers).

2010-02-22 Thread Jonas Sicking
On Mon, Feb 22, 2010 at 4:34 PM, Jeremy Orlow jor...@chromium.org wrote:
 On Tue, Feb 23, 2010 at 12:05 AM, Jonas Sicking jo...@sicking.cc wrote:

 On Mon, Feb 22, 2010 at 3:43 PM, Jeremy Orlow jor...@chromium.org wrote:
  On Mon, Feb 22, 2010 at 11:10 PM, Jonas Sicking jo...@sicking.cc
  wrote:
 
  On Mon, Feb 22, 2010 at 11:13 AM, David Levin le...@google.com wrote:
   I've talked with some other folks on WebKit (Maciej and Oliver) about
   having
   a canvas that is available to workers. They suggested some nice
   modifications to make it an offscreen canvas, which may be used in
   the
   Document or in a Worker.
 
  What is the use case for this? It seems like in most cases you'll want
  to display something on screen to the user, and so the difference
  comes down to shipping drawing commands across the pipe, vs. shipping
  the pixel data.
 
  Sometimes the commands take up a lot more CPU power than shipping the
   pixels.  Let's say you wanted to have a really rich map application that
  looked great, was highly interactive/fluid, but didn't use a lot of
  bandwidth.  Rendering different parts of the screen on different workers
  seems like a legit use.

 I admit to not being a graphics expert, but I would imagine you have
 to do quite a lot of drawing before
 1. Drawing on offscreen canvas
 2. Cloning the pixel data in order to ship it to a different thread
 3. Drawing the pixel data to the on-screen canvas

 Presumably a smart UA implementation could make 1 and 3 be nearly nothing
 (with copy on write and such) in many cases.

Huh? I thought the whole point was that 1 was expensive, which was why
you wanted to do it off the main thread.

And 3 is what puts pixels on the screen so I don't see how you could
do that without copying. You could possibly implement 3 using
blitting, but that's still not nearly nothing.

Possibly 2 is what you could get rid of using copy-on-write.

 gets to be cheaper than

 1. Drawing to on-screen canvas.

 You're assuming only one core.  The norm on the desktops and laptops these
 days is multiple cores.

I did not assume that, no. But it sounded like your use case was to
rasterize off the main thread, get the pixels to the main thread, and
then draw there. The last part (step 3 above) will definitely happen
on the main thread no matter how many cores you have.

/ Jonas