Re: [whatwg] CSS Filter Effects for Canvas 2D Context

2012-01-24 Thread Chris Marrin

On Jan 24, 2012, at 11:56 AM, Ronald Jett wrote:

 Hello,
 
 I think that bringing the new CSS filters 
 (http://html5-demos.appspot.com/static/css/filters/index.html) to canvas 
 might be a good idea. Some of the new filters, specifically blur, would 
 definitely speed up some applications. I saw that there was a previous 
 discussion on this list about bringing SVG filters to canvas, but it was a 
 few years back and it doesn't seem like the discussion yielded much.
 
 It would be great if you could turn the filters on and off while drawing. 
 Something like:
 
 ctx.blur(20); // turns on a 20px blur
 ctx.fillRect(0, 0, 50, 50); // this will be blurred
 ctx.blur(0); // turns off blur
 ctx.fillRect(100, 100, 50, 50); // this will not be blurred
 
 You could even do multiples:
 
 ctx.blur(2);
 ctx.sepia(1);
 ctx.drawImage(img, 0, 0);
 ctx.endFilters(); // turn all filters off
 
 Another benefit of having these effects in canvas is that we could utilize 
 toDataURL to save out an image that a user/application has filtered.
 
 Thoughts?

You can apply CSS Filters to a Canvas element. Maybe it would be better to put 
the items you want filtered into a separate canvas element and use CSS Filters 
on that? The big advantage of doing it that way is that the CSS filters can be 
animated and hardware accelerated. Adding filter functions to canvas would 
require you to re-render the items for every filter change and you'd have to 
animate it all yourself.

Generally, I think a hybrid approach to Canvas, where you draw into multiple 
Canvas elements and use CSS transforms, animations (and now filters) for 
positioning and effects, can give you the best of both worlds...
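
For example, a minimal sketch (the filter property is still vendor-prefixed, 
and blurCanvas/blurCtx are placeholder names for the canvas holding the 
content you want filtered):

// draw the items you want filtered into their own canvas
blurCtx.drawImage(img, 0, 0);

// then let CSS do the filtering, and animate it if you like
blurCanvas.style.webkitFilter = 'blur(20px)';
blurCanvas.style.webkitTransition = '-webkit-filter 1s';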

-
~Chris
cmar...@apple.com






Re: [whatwg] ArrayBuffer and ByteArray questions

2010-09-08 Thread Chris Marrin

On Sep 8, 2010, at 12:13 AM, Anne van Kesteren wrote:

 On Wed, 08 Sep 2010 01:09:13 +0200, Jian Li jia...@chromium.org wrote:
 Several specs, like File API and WebGL, use ArrayBuffer, while other specs, 
 like XMLHttpRequest Level 2, use ByteArray. Should we change to use the same 
 name all across our specs? Since we define ArrayBuffer in the Typed Arrays 
 spec (
 https://cvs.khronos.org/svn/repos/registry/trunk/public/webgl/doc/spec/TypedArray-spec.html),
 should we favor ArrayBuffer?
 
 In addition, can we consider adding ArrayBuffer support to BlobBuilder,
 FormData, and XMLHttpRequest.send()?
 
 So TC39 is going to leave this thing alone? I.e. are we sure ArrayBuffer is 
 the way of the future?

ArrayBuffer certainly has momentum behind it. It started as a part of the WebGL 
spec as a way of passing buffers of data of various types (sometimes 
heterogeneous types) to the WebGL engine. Since then, it has found uses in the 
Web Audio proposal and the File API, and there has been talk of using it as a 
way to pass data to Web Workers. We have discussed using it in XHR as well, and I 
think that would be a great idea. From a WebGL standpoint, it is the one 
missing piece to make it possible to easily get data of any type from a URL 
into the WebGL engine. But it would have uses in many other places as well.
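
As a sketch of how the XHR piece could look (the responseType value here is 
illustrative of the integration being discussed, not a shipping API; gl is a 
WebGL context):

var xhr = new XMLHttpRequest();
xhr.open('GET', 'model.bin', true);
xhr.responseType = 'arraybuffer'; // illustrative: ask XHR for an ArrayBuffer
xhr.onload = function () {
    // wrap the raw bytes in a typed view and hand them straight to WebGL
    var vertices = new Float32Array(xhr.response);
    gl.bufferData(gl.ARRAY_BUFFER, vertices, gl.STATIC_DRAW);
};
xhr.send();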

For reference, here is the latest proposal:


https://cvs.khronos.org/svn/repos/registry/trunk/public/webgl/doc/spec/TypedArray-spec.html

-
~Chris
cmar...@apple.com






Re: [whatwg] ArrayBuffer and ByteArray questions

2010-09-08 Thread Chris Marrin

On Sep 8, 2010, at 9:44 AM, Simon Pieters wrote:

 On Wed, 08 Sep 2010 17:22:44 +0200, Chris Marrin cmar...@apple.com wrote:
 
 
 On Sep 8, 2010, at 12:13 AM, Anne van Kesteren wrote:
 
 On Wed, 08 Sep 2010 01:09:13 +0200, Jian Li jia...@chromium.org wrote:
 Several specs, like File API and WebGL, use ArrayBuffer, while other specs, 
 like XMLHttpRequest Level 2, use ByteArray. Should we change to use the 
 same name all across our specs? Since we define ArrayBuffer in the Typed 
 Arrays spec (
 https://cvs.khronos.org/svn/repos/registry/trunk/public/webgl/doc/spec/TypedArray-spec.html),
 should we favor ArrayBuffer?
 
 In addition, can we consider adding ArrayBuffer support to BlobBuilder,
 FormData, and XMLHttpRequest.send()?
 
 So TC39 is going to leave this thing alone? I.e. are we sure ArrayBuffer is 
 the way of the future?
 
 ArrayBuffer certainly has momentum behind it. It started as a part of the 
 WebGL spec as a way of passing buffers of data of various types (sometimes 
 heterogeneous types) to the WebGL engine. Since then, it has found uses in 
 the Web Audio proposal and the File API, and there has been talk of using it 
 as a way to pass data to Web Workers.
 
 Do you mean WebSockets?

Web Sockets is certainly another candidate, but I meant Web Workers. There have 
been informal discussions on using ArrayBuffers as a way to safely share binary 
data between threads. I don't believe anything has been formalized here.

-
~Chris
cmar...@apple.com






Re: [whatwg] ArrayBuffer and ByteArray questions

2010-09-08 Thread Chris Marrin

On Sep 8, 2010, at 11:21 AM, Oliver Hunt wrote:

 
 On Sep 8, 2010, at 11:13 AM, Chris Marrin wrote:
 
 Web Sockets is certainly another candidate, but I meant Web Workers. There 
 have been informal discussions on using ArrayBuffers as a way to safely 
 share binary data between threads. I don't believe anything has been 
 formalized here.
 
 You can't share data between workers. There is no (and there cannot be) 
 shared state between multiple threads of JS execution.

Right. I didn't mean literal sharing. But you can imagine some copy-on-write 
semantics which would make it more efficient to pass data this way.
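
As a purely illustrative sketch (this assumes postMessage learns to accept an 
ArrayBuffer, which is not specced anywhere yet):

// main thread: logically this is a copy (structured clone); copy-on-write
// would let the engine defer the real copy until one side writes
var worker = new Worker('worker.js');
var buffer = new ArrayBuffer(1024 * 1024);
worker.postMessage(buffer);

// worker.js
onmessage = function (e) {
    var bytes = new Uint8Array(e.data); // view over the received copy
};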

-
~Chris
cmar...@apple.com






Re: [whatwg] [canvas] getContext multiple contexts

2010-08-04 Thread Chris Marrin

On Aug 3, 2010, at 3:41 PM, Ian Hickson wrote:

 On Tue, 3 Aug 2010, Chris Marrin wrote:
 On Aug 2, 2010, at 3:16 PM, Ian Hickson wrote:
 On Thu, 29 Apr 2010, Vladimir Vukicevic wrote:
 
 A while ago questions came up in the WebGL WG about using a canvas 
 with multiple rendering contexts, and synchronization issues that 
 arise there. Here's our suggested change to getContext.
 
 This seems overly complex. I've gone for a somewhat simpler approach, 
 which basically makes canvas fail getContext() if you call it with a 
 context that isn't compatible with the last one that was used, as 
 defined by a registry of context types. Currently, only '2d' and '3d' 
 are defined in this registry, and they are not defined as compatible.
 
 '3d'? We're calling it 'webgl'. Is there another 3D context registered 
 somewhere?
 
 Sorry, typo in the e-mail. The spec correctly refers to a webgl context.
 
 (I have to say, I'd rather we called it 3d. I hate it when we embed 
 marketing names into the platform.)

I generally agree. But I consider WebGL to be a clarifying name, like HTML, 
rather than a marketing name.

 
 
 [arguments on getContext]
 
 We feel it's more appropriate on the getContext() call because it 
 involves creation of the resources for the context. If it were a 
 separate call, you'd need to defer creation of those resources until the 
 attribute call is made or create them as needed. This not only involves 
 overhead in every call, but it requires you to provide specific rules on 
 which calls cause automatic resource creation. Making it a parameter to 
 getContext simplifies the definition. And it seems this would be a 
 useful parameter for many types of contexts, even the 2D context as Vlad 
 pointed out.
 
 What happens if you call getContext with the same contextID but different 
 attributes?

Good question. It's addressed in
https://cvs.khronos.org/svn/repos/registry/trunk/public/webgl/doc/spec/WebGL-spec.html#2.1.
It says that subsequent calls ignore the attributes. There is a
getContextAttributes call on the context to return what attributes were
actually set.
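
Concretely, that behavior looks like this (a sketch):

var gl = canvas.getContext('webgl', { antialias: false });
// a later call with different attributes returns the same context,
// and the new attributes are ignored...
var same = canvas.getContext('webgl', { antialias: true }); // same === gl
// ...but you can query what was actually honored:
var attrs = gl.getContextAttributes();
// attrs.antialias reflects what the implementation really gave you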

-
~Chris
cmar...@apple.com






Re: [whatwg] [canvas] getContext multiple contexts

2010-08-03 Thread Chris Marrin

On Aug 2, 2010, at 3:16 PM, Ian Hickson wrote:

 
 On Thu, 29 Apr 2010, Vladimir Vukicevic wrote:
 
 A while ago questions came up in the WebGL WG about using a canvas with 
 multiple rendering contexts, and synchronization issues that arise 
 there. Here's our suggested change to getContext.
 
 This seems overly complex. I've gone for a somewhat simpler approach, 
 which basically makes canvas fail getContext() if you call it with a 
 context that isn't compatible with the last one that was used, as 
 defined by a registry of context types. Currently, only '2d' and '3d' are 
 defined in this registry, and they are not defined as compatible.

'3d'? We're calling it 'webgl'. Is there another 3D context registered 
somewhere? I don't have a problem with this simplification.

 
 
 It essentially allows for multiple contexts but adds no synchronization 
 primitives other than the requirement that rendering must be visible to 
 all contexts (that is, that they're rendered to the same destination 
 space).
 
 Having 3D and 2D contexts rendering to the same space -- especially given 
 getImageData() and the like -- seems like an interoperability nightmare.

I agree.

 
 
 This also adds the 'attributes' parameter which can customize the 
 context that's created, as defined by the context itself.  WebGL has its 
 own context attributes object, and I'd suggest that the 2D context gain 
 at least an attribute to specify whether the context should be opaque or 
 not; but that's a separate suggestion from the below text.
 
 I haven't added this. Could you elaborate on why this is needed? What 
 happens if the method is invoked again with different parameters?
 
 It seems far preferable to have this on the API rather than as part of the 
 getContext() method.

We feel it's more appropriate on the getContext() call because it involves 
creation of the resources for the context. If it were a separate call, you'd 
need to defer creation of those resources until the attribute call is made or 
create them as needed. This not only involves overhead in every call, but it 
requires you to provide specific rules on which calls cause automatic resource 
creation. Making it a parameter to getContext simplifies the definition. And it 
seems this would be a useful parameter for many types of contexts, even the 2D 
context as Vlad pointed out.
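
For instance, the opaque 2D context Vlad suggested might look like this (the 
'alpha' attribute name is hypothetical):

// hypothetical: an opaque 2D context, so the implementation can skip
// compositing the canvas with whatever is behind it
var ctx = canvas.getContext('2d', { alpha: false });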

-
~Chris
cmar...@apple.com






Re: [whatwg] suggestion for HTML5 spec

2010-08-03 Thread Chris Marrin

On Aug 2, 2010, at 7:20 PM, Dirk Pranke wrote:

 On Mon, Aug 2, 2010 at 7:09 PM, Ian Hickson i...@hixie.ch wrote:
 On Mon, 2 Aug 2010, Dirk Pranke wrote:
 On Mon, Aug 2, 2010 at 6:56 PM, Ian Hickson i...@hixie.ch wrote:
 On Mon, 2 Aug 2010, Dirk Pranke wrote:
 
 Why would a user ever want anyone to disable their GPU acceleration?
 
 I believe I've heard people say that they might sometimes want this for
 power management, i.e. performing the same computation on the GPU might
 take more power than performing it more slowly on the CPU. I imagine
 this would depend on the specific configuration and computations
 involved, though.
 
 This seems like a matter for the user, not the Web page, though.
 
 Ah, I knew you were going to say this. I agree, but I can also imagine
 that the way the user selects this is by choosing one of two different
 resources from a page, just like we do today for videos of different
 bandwidths.
 
 It seems better to have a way for the user agent to automatically negotiate
 the right bandwidth usage based on user preference, too.
 
 Any setting like this that we offer authors _will_ be misused, possibly as
 often as used correctly. Unless there's a really compelling reason to have
 it, it seems better to let the user be in control.
 
 If users can choose between two links on a page labelled "high FPS -
 will destroy your battery" and "low FPS", they are in control, in a
 way that is easily understood by the user and allows them to make the
 choice at the point in time that it matters. Compare this with
 changing the streaming settings on QT Player or Windows Media Player,
 or even toggling the "use the video card" button on your laptop (and
 hoping that the content is smart enough to degrade gracefully instead
 of choking).

But an author can't make that claim if it involves forcing the GPU on or off. 
If we were to do this, I'm sure there would be implementations where the exact 
opposite of the author's intent would be the result. Saying something like 
"turn off the GPU" can result in more or less battery usage, depending on the 
hardware, software and content. Preserving battery life should be the job of 
the system (possibly with "I care more about battery life than quality" input 
from the User Agent).

 
 We've seen this exact same argument play out over the last fifteen
 years in video on the web. The technology for detecting and adjusting
 bandwidth dynamically has been around forever (actually implemented,
 even), and yet for every one multi-bitrate stream available on the
 web, I imagine there are very many more that are single-bitrate. A big
 part of the reason for this is that doing it this way is (in my
 opinion) a better user experience.

Sure, you might be able to say that a lower bitrate video will use less power 
than a higher bitrate one. So the author might want to provide two videos. But 
leave it up to the system to decide what hardware to use to play them.

-
~Chris
cmar...@apple.com






Re: [whatwg] [canvas] getContext multiple contexts

2010-08-03 Thread Chris Marrin

On Aug 3, 2010, at 3:15 PM, Chris Marrin wrote:

 
 On Aug 2, 2010, at 3:16 PM, Ian Hickson wrote:
 
 
 On Thu, 29 Apr 2010, Vladimir Vukicevic wrote:
 
 A while ago questions came up in the WebGL WG about using a canvas with 
 multiple rendering contexts, and synchronization issues that arise 
 there. Here's our suggested change to getContext.
 
 This seems overly complex. I've gone for a somewhat simpler approach, 
 which basically makes canvas fail getContext() if you call it with a 
 context that isn't compatible with the last one that was used, as 
 defined by a registry of context types. Currently, only '2d' and '3d' are 
 defined in this registry, and they are not defined as compatible.
 
 '3d'? We're calling it 'webgl'. Is there another 3D context registered 
 somewhere? I don't have a problem with this simplification.

Sorry, in rereading this I realize that the last statement is confusing. I 
don't have a problem with Hixie's simplification on when to fail getContext. 
The string passed for a WebGL context should be 'webgl', not '3d'.

-
~Chris
cmar...@apple.com






Re: [whatwg] Image resize API proposal

2010-05-25 Thread Chris Marrin

On May 24, 2010, at 2:09 PM, David Levin wrote:

 
 
 On Mon, May 24, 2010 at 1:40 PM, Aryeh Gregor simetrical+...@gmail.com 
 wrote:
 On Mon, May 24, 2010 at 1:21 PM, David Levin le...@google.com wrote:
 We've discussed the leading alternate proposal, optimized canvas (plus JS to
 read the EXIF information) and then getting the bits out of canvas, but there
 are several issues with this proposal, including
 
 that not all browsers will have an implementation using the GPU that allows
  web sites to use this and not hang the UI
 
 This is a nonissue.  There's no point in speccing one feature to work
 around the fact that browsers haven't implemented another -- it makes
 more sense to just get the browsers to implement the latter feature,
 making the former moot.  Browsers look like they're moving toward GPU
 acceleration for everything now, and that has many more benefits, so
 we should assume that by the time they'd implement this API, they'll
 already be GPU-accelerated.
 
  that even if it was implemented everywhere, this solution involves readback
  from the GPU which, as Chris mentioned, is generally evil and should be
  avoided at all costs.
 
 This I'm not qualified to comment on, though.  To the best of my
 knowledge, GPUs are magical boxes that make things go faster via pixie
 dust.  ;)
 
 Thanks for your opinion. :)
 
 Chris is qualified, as are other people I've spoken to who have said the 
 same thing, so using the GPU is not pixie dust in this particular scenario, 
 even though folks would like to believe it is.

I didn't mean to say that GPUs are in general evil, just readback. GPUs can do 
magical things as long as you keep the data there. If your API were to return 
some abstract wrapper around the resultant image (like ImageData), it would 
allow the bits to stay on the GPU. Then you can use the abstract API to pass 
the image around, render with it, whatever. There might still be some 
platforms that have to read the pixels back to use them, but not always. For 
instance, WebGL has an API to take an ImageData and load it as a texture. If 
we see that the image data from that object is already on the GPU, we can use 
it directly or copy it (GPU-to-GPU copies are very fast) into texture memory.
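
A sketch of that upload path (the exact texImage2D signature is still in flux 
in the WebGL draft):

var tex = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, tex);
// if imageData's pixels already live on the GPU, an implementation can
// satisfy this with a fast GPU-to-GPU copy instead of a CPU upload
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, imageData);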

I'm not sure such a representation is appropriate for all the use cases here, 
but it would make image handling much faster in many cases.

-
~Chris
cmar...@apple.com






Re: [whatwg] Image resize API proposal

2010-05-25 Thread Chris Marrin

On May 25, 2010, at 3:58 AM, Kornel Lesinski wrote:

 On 24 May 2010, at 22:09, David Levin wrote:
 
 that even if it was implemented everywhere, this solution involves readback
 from the GPU which, as Chris mentioned, is generally evil and should be
 avoided at all costs.
 
 This I'm not qualified to comment on, though.  To the best of my
 knowledge, GPUs are magical boxes that make things go faster via pixie
 dust.  ;)
 
 Thanks for your opinion. :)
 
 Chris is qualified, as are other people I've spoken to who have said the 
 same thing, so using the GPU is not pixie dust in this particular scenario, 
 even though folks would like to believe it is.
 
 I think GPU readback is a red herring. It's an operation that takes 
 milliseconds. It's slow for realtime graphics, but it's not something that 
 user is going to notice when uploading images — users are not uploading 
 hundreds of images per second.

It's not a red herring. The cost of readback has little to do with how fast 
pixels can be read from the GPU. As it turns out, reading from a GPU is 
somewhat slower than writing, but you're right that it's only a few ms. The 
problem is that GPUs are heavily pipelined. You can add lots of commands to 
the input queue and have them executed much later, without you waiting around. 
As soon as a readback comes in, all those pipelined commands have to be 
executed, and during that time no other commands can be accepted. All users of 
the GPU (and there are many users other than you) sit and wait, including you. 
When all the commands are flushed, your readback is done; then all those 
waiting get their commands submitted and eventually executed. If you're doing 
this for several images, it's even worse. Your readbacks will get interleaved 
with all the other command submissions, so you'll be stalling the pipe several 
times (as opposed to doing all your readbacks together). The flushes make 
readback slower than just copying the pixels, and they make every operation 
slower for every GPU user.
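
In WebGL terms, the stall looks like this (a sketch; w, h, count and pixels 
are placeholders):

// each readPixels forces every queued command (yours and everyone
// else's) to execute before it can return
gl.readPixels(0, 0, w, h, gl.RGBA, gl.UNSIGNED_BYTE, pixels); // stall
gl.drawArrays(gl.TRIANGLES, 0, count);                        // queue refills
gl.readPixels(0, 0, w, h, gl.RGBA, gl.UNSIGNED_BYTE, pixels); // stalls again

// better: queue all the drawing first, then do the readbacks together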

There are times when you have no choice but to do readback. But the API should 
be designed so that it can be avoided whenever possible.

It's simply evil :-)

-
~Chris
cmar...@apple.com






Re: [whatwg] Image resize API proposal

2010-05-23 Thread Chris Marrin

On May 22, 2010, at 3:03 AM, Robert O'Callahan wrote:

 On Sat, May 22, 2010 at 10:12 AM, David Levin le...@google.com wrote:
 There are a few issues here:
 - This only applies when you can accelerate with a GPU. Not all devices may
   support this.
 - This only applies to browsers that implement the acceleration with a GPU.
   When Mike Shaver mentioned this, he referred to a Windows version of
   Firefox. It is unclear if Firefox supports this on any other platform, nor
   does it seem that all other browsers will support the accelerated canvas in
   the near-ish future.
 - The GPU results are due to the fact that the operation is done async from
   the call (which is great as far as not hanging the UI until you try to get
   the data out of the canvas, which leads to...).
 - Even with GPU acceleration, in order to use the result in an XHR, one has
   to get the result back from the GPU, and this is a more expensive operation
   (because getting the data out of the GPU is slow), as indicated by the
   indirect-copy results from Firefox, and it forces the completion of all of
   the operations that were being done async.
  
 1. Phones have GPUs now. You won't see new devices being built that can run 
 real Web browsers but don't have some kind of GPU, because the limiting 
 factor on hardware now is not silicon but power.
 2. Your proposal depends on browsers that implement your new API. As a 
 browser developer, I would rather make canvas faster across the board than 
 implement new API.
 3. The GPU results are largely because GPUs are massively parallel.
 4. The Firefox results include time to unpremultiply data and premultiply it 
 again, all on the CPU. They don't indicate how long readback from the GPU 
 actually takes on that machine. Also:
 4a) the cost of readback is proportional to the size of the scaled image, so 
 if your use case is scaling down images to small sizes, readback is cheap.
 4b) you can easily read back and send one chunk of the scaled image at a time

Just to be clear, readback is never cheap. The cost of readback isn't so much 
about the image size as the fact that you need to stall the pipeline, do the 
read, and then fill the pipeline again. So anything happening on the GPU (not 
just in your app, but systemwide) will slow down the readback, and the 
readback will slow down everything else.

Readback is just generally evil and should be avoided at all costs :-)

-
~Chris
cmar...@apple.com






Re: [whatwg] canvas, img, file api and blobs

2010-02-16 Thread Chris Marrin

On Feb 16, 2010, at 9:00 AM, Eric Carlson wrote:

 Chris -
 
   Welcome to the HTML5 WG email torrent ;-)
 
   Here is a message that you might actually care to read.
 
 eric
 
 
 
 Begin forwarded message:
 
 From: Joel Webber j...@google.com
 Date: February 16, 2010 8:39:31 AM PST
 To: Stefan Haustein haust...@google.com
 Cc: Maciej Stachowiak m...@apple.com, wha...@whatwg.org, Jonas Sicking 
 jo...@sicking.cc, Stef Epardaud s...@epardaud.fr
 Subject: Re: [whatwg] canvas, img, file api and blobs
 
 On Tue, Feb 16, 2010 at 7:38 AM, Stefan Haustein haust...@google.com wrote:
 On Tue, Feb 16, 2010 at 10:08 AM, Maciej Stachowiak m...@apple.com wrote:
 
 On Feb 16, 2010, at 12:13 AM, Jonas Sicking wrote:
 
 
 Absolutely! I definitely agree that we need a type like this. The
 sooner the better. On that note, do you know what the latest status is
 within ECMA on this? I know you made a proposal on the webapps list
 (or was it here?), did that go anywhere?
 
 I made my proposal to ECMA TC-39 (the relevant committee). I will try to 
 polish it and request it as an agenda item for the next face-to-face (in 
 March). Independently, WebGL's typed arrays have been proposed.
 
 Hi Maciej,
 
 do you have a link to your proposal?
 
 And in particular, does it bear any resemblance to the WebGLArray 
 interfaces, as proposed in 
 (http://people.mozilla.com/~vladimir/jsvec/TypedArray-spec.html)? I'm 
 particularly concerned with the interfaces among all these different 
 subsystems (WebGL, Canvas, XHR, File, etc., as being discussed on this 
 thread) that want to operate on binary data.
 
 We've found getting data from XHR to WebGL via WebGLArrays to be a huge 
 (read: probably orders-of-magnitude) bottleneck; but being able to slice 
 mesh and texture data out of arrays directly from XHR responses would 
 completely fix this.
 

We've been getting pretty good traction on Vlad's ArrayBuffers proposal, which 
was taken from the WebGL spec. Our current plan is to change the names in the 
browsers (WebKit, Chrome and Mozilla) to the non-WebGL-specific names Vlad 
proposes in his spec. We'd really like this to be the one true binary data 
access mechanism for HTML. We're talking to the File API guys about this and I 
think this API can be adapted in all the other places as well.
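
For the kind of slicing Joel describes, the typed-array views in that spec 
already allow something like this (a sketch; response is an ArrayBuffer from 
XHR, and the offsets and counts are purely illustrative):

// one ArrayBuffer from the wire, several typed views over it, no copying
var header    = new Uint32Array(response, 0, 4);              // bytes 0-15
var positions = new Float32Array(response, 16, numVerts * 3); // xyz floats
var indices   = new Uint16Array(response, 16 + numVerts * 12, numIndices);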

As far as performance goes, can you point me at some quantitative data? When 
you say it's an orders-of-magnitude bottleneck, what are you comparing it to? 
The API is very new and we certainly want to improve it for the various 
purposes it can be put to. We've even talked about optimizations inside the JS 
implementations to improve access performance.

-
~Chris
cmar...@apple.com