Re: [whatwg] Hardware accelerated canvas

2012-09-05 Thread Jonas Sicking
On Tue, Sep 4, 2012 at 10:15 AM, Boris Zbarsky bzbar...@mit.edu wrote:
 So now our list is:

 1)  Have a way for pages to opt in to software rendering.
 2)  Opt canvases in to software rendering via some sort of heuristic
 (e.g. software by default until there has been drawing to it for
 several event loop iterations, or whatever).
 3)  Have a way for pages to opt in to having snapshots taken.
 4)  Auto-snapshot based on some heuristics.
 5)  Save command stream.
 6)  Have a way for pages to explicitly snapshot a canvas.
 7)  Require opt in for hardware accelerated rendering.
 8)  Authors use toDataURL() when they want their data to stick around.
 9)  Context lost event that lets authors regenerate the canvas.
 10) Do nothing, assume users will hit reload if their canvas goes blank.

11) Default to best-effort (current behavior), but allow opting in to
getting notifications about lost context, in which case the browser
would not need to do various tricks in order to attempt to save the
current state.

I.e. basically 4, but with the ability for the page to opt in to 9.

It sounds like no browsers do any such tricks right now, so
effectively the opt-in would be to just be notified. But possibly
browsers might feel the need to do various snap-shot heuristics on
mobile as they start to hardware accelerate there.

/ Jonas


Re: [whatwg] Hardware accelerated canvas

2012-09-05 Thread Benoit Jacob
- Original Message -
 On Tue, Sep 4, 2012 at 10:15 AM, Boris Zbarsky bzbar...@mit.edu
 wrote:
  So now our list is:
 
  1)  Have a way for pages to opt in to software rendering.
  2)  Opt canvases in to software rendering via some sort of
  heuristic
  (e.g. software by default until there has been drawing to it
  for
  several event loop iterations, or whatever).
  3)  Have a way for pages to opt in to having snapshots taken.
  4)  Auto-snapshot based on some heuristics.
  5)  Save command stream.
  6)  Have a way for pages to explicitly snapshot a canvas.
  7)  Require opt in for hardware accelerated rendering.
  8)  Authors use toDataURL() when they want their data to stick
  around.
  9)  Context lost event that lets authors regenerate the canvas.
  10) Do nothing, assume users will hit reload if their canvas goes
  blank.
 
 11) Default to best-effort (current behavior), but allow opting in to
 getting notifications about lost context, in which case the browser
 would not need to do various tricks in order to attempt to save the
 current state.
 
 I.e. basically 4, but with the ability for the page to opt in to 9.

Keep in mind that snapshotting as in 4 will cause a large memory-usage increase 
for large canvases, and will cause animation choppiness on certain pages on 
systems where readback is expensive. So the heuristics have to be specified in 
a precise manner, or else browser vendors could well decide that the best 
heuristic is "never".
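To make the memory point concrete, here is a rough sketch of the cost of a single RGBA snapshot (assuming the common 4 bytes per pixel; the function name is invented for illustration):

```javascript
// Back-of-envelope cost of one snapshot: an RGBA8 readback copy of the
// canvas costs width * height * 4 bytes of main memory.
function snapshotBytes(width, height) {
  return width * height * 4;
}

snapshotBytes(1024, 1024); // 4 MiB per copy (4194304 bytes)
snapshotBytes(2048, 2048); // 16 MiB per copy (16777216 bytes)
```

A browser keeping even one such copy per large canvas doubles that canvas's memory footprint, which is why the heuristic matters.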

Benoit

 
 It sounds like no browsers do any such tricks right now, so
 effectively the opt-in would be to just be notified. But possibly
 browsers might feel the need to do various snap-shot heuristics on
 mobile as they start to hardware accelerate there.
 
 / Jonas
 


Re: [whatwg] Hardware accelerated canvas

2012-09-04 Thread Robert O'Callahan
We might be able to do some sort of hack where if a 2D canvas isn't drawn
to for a while (say five seconds), we read back a copy of it for
safe-keeping.
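A sketch of the decision logic behind that hack (the threshold and names are invented for illustration; the actual readback would happen inside the browser, not in page script):

```javascript
// Idle-readback heuristic: if nothing has drawn to the canvas for a
// while and no safe-keeping copy exists yet, take a snapshot.
const IDLE_MS = 5000; // "say five seconds"

function shouldSnapshot(lastDrawTime, now, alreadySnapshotted) {
  return !alreadySnapshotted && (now - lastDrawTime) >= IDLE_MS;
}

shouldSnapshot(0, 6000, false); // true: idle past the threshold
shouldSnapshot(0, 3000, false); // false: drawn to recently
shouldSnapshot(0, 6000, true);  // false: copy already taken
```

The appeal is that draw-once pages (graphs, generated images) go idle and get snapshotted, while games and animations never trip the threshold and never pay for a readback.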

I have to say though, we've been shipping 2D canvas with the context-loss
problem to millions of users for a couple of years now and I don't recall
seeing any bug reports about it. And it's the sort of bug users would
notice if it happened.

Rob
-- 
“You have heard that it was said, ‘Love your neighbor and hate your enemy.’
But I tell you, love your enemies and pray for those who persecute you,
that you may be children of your Father in heaven. ... If you love those
who love you, what reward will you get? Are not even the tax collectors
doing that? And if you greet only your own people, what are you doing more
than others? [Matthew 5:43-47]


Re: [whatwg] Hardware accelerated canvas

2012-09-04 Thread Erik Möller
On Mon, 03 Sep 2012 23:47:57 +0200, Tobie Langel tobie.lan...@gmail.com  
wrote:



I apologize in advance, as this is slightly off-topic. I've been
unsuccessfully looking for info on how Canvas hardware acceleration
actually works and haven't found much.

Would anyone have pointers?

Thanks.

--tobie


I think that varies a lot between vendors, and I haven't seen any  
externally available documentation on the topic. In general, images are  
drawn as textured quads, and paths are triangulated and drawn as tristrips.  
Some level of caching is performed to reduce draw calls and improve  
performance. Some operations that are hard to do in hardware make use of  
stencil buffers and multipass rendering. I think that's about as specific  
as the information you'll get.


If you're really interested, run your favourite browser through PIX  
http://en.wikipedia.org/wiki/PIX_(Microsoft)


--
Erik Möller
Core Gfx Lead
Opera Software
twitter.com/erikjmoller


Re: [whatwg] Hardware accelerated canvas

2012-09-04 Thread James Robinson
I believe this ship has already sailed for the most part - several major
browsers (starting with IE9) have shipped GPU based canvas 2d
implementations that simply lose the image buffer on a lost context.  Given
that there are a fair number of benchmarks (of varying quality) around
canvas 2d speed I doubt vendors will be able to give up speed.

It's also important to note that unlike WebGL the only thing lost on a lost
context is the image buffer itself.  With WebGL, the page has to regenerate
a large number of resources (shaders, buffers, textures) before it can
render the next frame.  With canvas the page can just start drawing.  Many
applications redraw the entire canvas on every frame so lost context
recovery is identical to normal operation - just draw the thing.  All other
resources are managed and can be regenerated by the browser without script
intervention.
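A minimal sketch of the redraw-everything pattern being described (names and state shape are invented; in a browser, `ctx` would be the canvas's 2D context and the loop would be driven by requestAnimationFrame):

```javascript
// If every frame repaints the whole canvas from application state,
// recovering from a lost image buffer is just drawing the next frame.
function drawFrame(ctx, state) {
  ctx.clearRect(0, 0, state.width, state.height);
  for (const sprite of state.sprites) {
    ctx.fillRect(sprite.x, sprite.y, sprite.w, sprite.h);
  }
}
```

Nothing here depends on pixels from previous frames, so a lost buffer costs at most one frame of output.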

On Mon, Sep 3, 2012 at 9:11 AM, Ian Hickson i...@hixie.ch wrote:

 There are ways to make it work without forgoing acceleration, e.g. taking
 regular backups of the canvas contents, remembering every instruction
 that was sent to the canvas, etc.


We investigated these and other options when first looking at GPU
acceleration in Chrome.  None seemed feasible.  Readbacks are expensive.
 Bandwidth from GPU to main memory in split memory systems is limited, and
doing a readback is a pipeline stall.  Recording draw commands works for
some path-only use cases, but many canvases reference dynamic sources
such as videos or other canvases.  Keeping these resources around is
quite expensive, especially when they might be GPU-resident to start with
and require a readback.

The more basic problem with all of these approaches is that they require
considerable complexity, time and memory to deal with a (hopefully) rare
situation.  There will never be a benchmark that involves a context loss in
the middle, so any time spent on recovery is time wasted.


 On Mon, 3 Sep 2012, Benoit Jacob wrote:
 
  Remember this adage from high-performance computing which applies here
  as well: "The fast drives out the slow, even if the fast is wrong."

This isn't an issue of the spec -- there is existing content that would be
 affected.


It is the spec's problem so far as the spec wants to reflect reality.  I
really doubt UAs are going to be able to implement something significantly
more complicated or slow than what they have been shipping for a few years.

I think it would be useful for some sorts of applications to be notified
when the image buffer data is lost so that they could regenerate it.  This
would be useful for applications that use a canvas to cache mostly-static
intermediate data or applications that only repaint dirty rectangles in
normal operation.

- James


Re: [whatwg] Hardware accelerated canvas

2012-09-04 Thread Boris Zbarsky

On 9/4/12 12:30 PM, James Robinson wrote:

Many applications redraw the entire canvas on every frame


This is already assuming there are frames involved.

There are lots of applications (graphing comes to mind!) where you 
really want the canvas to be essentially a write-once-read-forever image.


So perhaps the question should be: what can we do to make such 
applications robust?


Options seem to include (just brainstorming; no feasibility issues 
considered so far):


1)  Have a way for pages to opt in to software rendering.
2)  Opt canvases in to software rendering via some sort of heuristic
(e.g. software by default until there has been drawing to it for
several event loop iterations, or whatever).
3)  Have a way for pages to opt in to having snapshots taken.
4)  Auto-snapshot based on some heuristics.
5)  Save command stream.
6)  Have a way for pages to explicitly snapshot a canvas.
7)  Require opt in for hardware accelerated rendering.

Any others?

Of the above, I don't think #5 and #7 are realistic, for what it's 
worth.  I haven't put enough thought into the rest yet to decide what I 
think about them.



I think it would be useful for some sorts of applications to be notified
when the image buffer data is lost so that they could regenerate it.  This
would be useful for applications that use a canvas to cache mostly-static
intermediate data or applications that only repaint dirty rectangles in
normal operation.


Or applications for which the output is basically static data and the 
canvas is the output medium.  Note that in such cases regeneration might 
be _very_ expensive, effectively requiring rerunning the whole 
compute-intensive part of the application.


-Boris



Re: [whatwg] Hardware accelerated canvas

2012-09-04 Thread Boris Zbarsky

On 9/4/12 12:43 PM, Boris Zbarsky wrote:

1)  Have a way for pages to opt in to software rendering.
2)  Opt canvases in to software rendering via some sort of heuristic
 (e.g. software by default until there has been drawing to it for
 several event loop iterations, or whatever).
3)  Have a way for pages to opt in to having snapshots taken.
4)  Auto-snapshot based on some heuristics.
5)  Save command stream.
6)  Have a way for pages to explicitly snapshot a canvas.
7)  Require opt in for hardware accelerated rendering.

Any others?


Ms2ger points out (without endorsing) that there's an:

8)  Have every author who wants their canvas to stick around call 
toDataURL() and stick the result in an img src.


This is much like #6 above, except with more pain and suffering and 
memory usage and whatnot, but does mean that there is precedent in the 
platform for #6...


-Boris


Re: [whatwg] Hardware accelerated canvas

2012-09-04 Thread David Geary
On Tue, Sep 4, 2012 at 10:43 AM, Boris Zbarsky bzbar...@mit.edu wrote:

 On 9/4/12 12:30 PM, James Robinson wrote:

 Many applications redraw the entire canvas on every frame


 This is already assuming there are frames involved.

 There are lots of applications (graphing comes to mind!) where you really
 want the canvas to be essentially a write-once-read-forever image.

 So perhaps the question should be: what can we do to make such
 applications robust?

 Options seem to include (just brainstorming; no feasibility issues
 considered so far):

 1)  Have a way for pages to opt in to software rendering.
 2)  Opt canvases in to software rendering via some sort of heuristic
 (e.g. software by default until there has been drawing to it for
 several event loop iterations, or whatever).
 3)  Have a way for pages to opt in to having snapshots taken.
 4)  Auto-snapshot based on some heuristics.
 5)  Save command stream.
 6)  Have a way for pages to explicitly snapshot a canvas.
 7)  Require opt in for hardware accelerated rendering.

 Any others?

 Of the above, I don't think #5 and #7 are realistic, for what it's worth.
  I haven't put enough thought into the rest yet to decide what I think
 about them.


I'm not crazy about any of them. They all seem like sticky wickets to me.
Implementation issues aside, they are at the wrong level of abstraction, so
they obfuscate the real reason for their existence.



  I think it would be useful for some sorts of applications to be notified
 when the image buffer data is lost so that they could regenerate it.  This
 would be useful for applications that use a canvas to cache mostly-static
 intermediate data or applications that only repaint dirty rectangles in
 normal operation.


 Or applications for which the output is basically static data and the
 canvas is the output medium.  Note that in such cases regeneration might be
 _very_ expensive, effectively requiring rerunning the whole
 compute-intensive part of the application.


Sure, but those use cases will be in the minority, and we're already
talking about a very rare occurrence in the first place, so the odds of a
very expensive regeneration on a lost context must be near Lotto levels.

I think it makes the most sense to add a context lost handler to the spec
and leave it up to developers to redraw the canvas. It's straightforward to
understand and to implement. It has the distasteful downside of forcing
some developers to add a few lines of code to their existing apps, but if
the apps are used and maintained, is it really that big of a deal?


david




 -Boris




Re: [whatwg] Hardware accelerated canvas

2012-09-04 Thread David Geary
On Tue, Sep 4, 2012 at 10:53 AM, Boris Zbarsky bzbar...@mit.edu wrote:

 On 9/4/12 12:43 PM, Boris Zbarsky wrote:

 1)  Have a way for pages to opt in to software rendering.
 2)  Opt canvases in to software rendering via some sort of heuristic
  (e.g. software by default until there has been drawing to it for
  several event loop iterations, or whatever).
 3)  Have a way for pages to opt in to having snapshots taken.
 4)  Auto-snapshot based on some heuristics.
 5)  Save command stream.
 6)  Have a way for pages to explicitly snapshot a canvas.
 7)  Require opt in for hardware accelerated rendering.

 Any others?


 Ms2ger points out (without endorsing) that there's an:

 8)  Have every author who wants their canvas to stick around call
 toDataURL() and stick the result in an img src.


And then the browser presumably uses the img to regenerate the canvas on a
lost context? Why not just give developers a callback and let them restore
the canvas as they see fit?


david




 -Boris



Re: [whatwg] Hardware accelerated canvas

2012-09-04 Thread Tab Atkins Jr.
On Tue, Sep 4, 2012 at 10:07 AM, David Geary david.mark.ge...@gmail.com wrote:
 On Tue, Sep 4, 2012 at 10:53 AM, Boris Zbarsky bzbar...@mit.edu wrote:
 Ms2ger points out (without endorsing) that there's an:

 8)  Have every author who wants their canvas to stick around call
 toDataURL() and stick the result in an img src.

 And then the browser presumably uses the img to regenerate the canvas on a
 lost context? Why not just give developers a callback and let them restore
 the canvas as they see fit?

No, the author just uses the img in their page instead.  The
canvas is only used in JS to generate the image, and is never put
into the document at all.
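A sketch of that pattern, with the canvas and img passed in so the data flow is visible (names are invented; in a real page the canvas would come from `document.createElement('canvas')` and never be inserted into the document):

```javascript
// Option 8: the canvas is only a scratch buffer; the pixels live on as
// a data: URL in an <img>, which is not affected by GPU context loss.
function renderToImg(canvas, img, draw) {
  draw(canvas.getContext('2d'));            // paint the scratch canvas
  img.src = canvas.toDataURL('image/png');  // persist as a data: URL
  return img.src;
}
```

The cost is an encode plus a base64 copy of the pixels held in the img's src, which is the "pain and suffering and memory usage" Boris mentions.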

And again, the reason that "just give them a contextloss event" is bad
is that most people simply won't do it.  It doesn't make any sense!
 The browser just... forgets about your image, which it was displaying
fine just a second ago?

~TJ


Re: [whatwg] Hardware accelerated canvas

2012-09-04 Thread Boris Zbarsky

On 9/4/12 1:02 PM, David Geary wrote:

Sure, but those use cases will be in the minority


What makes you say that?

Outside of games, I think they're a majority of the canvas-using things 
I've seen.



I think it makes the most sense to add a context lost handler to the
spec and leave it up to developers to redraw the canvas.


OK, yes, let's call that option 9.  And I'll add option 10: do nothing.

So now our list is:


1)  Have a way for pages to opt in to software rendering.
2)  Opt canvases in to software rendering via some sort of heuristic
(e.g. software by default until there has been drawing to it for
several event loop iterations, or whatever).
3)  Have a way for pages to opt in to having snapshots taken.
4)  Auto-snapshot based on some heuristics.
5)  Save command stream.
6)  Have a way for pages to explicitly snapshot a canvas.
7)  Require opt in for hardware accelerated rendering.
8)  Authors use toDataURL() when they want their data to stick around.
9)  Context lost event that lets authors regenerate the canvas.
10) Do nothing, assume users will hit reload if their canvas goes blank.

Any other options, before we start trying to actually decide which if 
any of these might be workable?


-Boris


Re: [whatwg] Hardware accelerated canvas

2012-09-04 Thread Boris Zbarsky

On 9/4/12 1:07 PM, David Geary wrote:

And then the browser presumably uses the img to regenerate the canvas on
a lost context?


No, then the author just forgets about the broken-ass canvas and shows 
the img to the user.  Basically using a canvas as a transient buffer 
to get the image data being generated into a base64 data: URI form.



Why not just give developers a callback and let them
restore the canvas as they see fit?


Because so far I'm listing possible solutions, not trying to pick one.

-Boris




Re: [whatwg] Hardware accelerated canvas

2012-09-04 Thread Glenn Maynard
On Tue, Sep 4, 2012 at 11:43 AM, Boris Zbarsky bzbar...@mit.edu wrote:

 2)  Opt canvases in to software rendering via some sort of heuristic
 (e.g. software by default until there has been drawing to it for
 several event loop iterations, or whatever).


4)  Auto-snapshot based on some heuristics.


These are mostly the same, and should be able to keep the majority of
draw-once apps from breaking without hurting draw-repeatedly apps (games,
animations).

The only reason I can think of to switch renderers, instead of snapshotting,
is to deal with losing the context *mid*-render, while a script is still
drawing.  (That seems like a problem so rare as to be almost theoretical,
though.)

On Tue, Sep 4, 2012 at 12:02 PM, David Geary david.mark.ge...@gmail.com wrote:

  Or applications for which the output is basically static data and the
  canvas is the output medium.  Note that in such cases regeneration might
 be
  _very_ expensive, effectively requiring rerunning the whole
  compute-intensive part of the application.

 Sure, but those use cases will be in the minority,


This is a huge assumption.  I seriously doubt that apps that draw to a
canvas just once are a minority.


 and we're already talking about a very rare occurrence in the first place


That's another big assumption.  From what I understand, this is a regular
occurrence on mobile.

-- 
Glenn Maynard


Re: [whatwg] Hardware accelerated canvas

2012-09-04 Thread Mark Callow
On 12/09/04 10:02, David Geary wrote:
 Sure, but those use cases will be in the minority, and we're already
 talking about a very rare occurrence in the first place, so the odds of a
 very expensive regeneration on a lost context must be near Lotto levels.

It is not a rare occurrence on mobile devices. On my tablet, WebGL apps
lose their context every time the tablet goes to sleep. Since the
timeout is so short, it only takes a brief distraction and poof! the
tablet is asleep. The loss can happen while the application is in the
middle of drawing the canvas.

Regards

-Mark

P.S. Why do so many threads on whatwg get forked? My threaded message
viewer is now showing 3 threads with the title Hardware accelerated
canvas.

-- 

NOTE: This electronic mail message may contain confidential and
privileged information from HI Corporation. If you are not the intended
recipient, any disclosure, photocopying, distribution or use of the
contents of the received information is prohibited. If you have received
this e-mail in error, please notify the sender immediately and
permanently delete this message and all related copies.



Re: [whatwg] Hardware accelerated canvas

2012-09-04 Thread Szymon Piłkowski
On 4 September 2012 19:15, Boris Zbarsky bzbar...@mit.edu wrote:

 On 9/4/12 1:02 PM, David Geary wrote:

 Sure, but those use cases will be in the minority


 What makes you say that?

 Outside of games, I think they're a majority of the canvas-using things
 I've seen.


I'd like to point out that existing games and game engines very often
rely on 'off-screen' canvases, using them as buffers to reduce the number
of draw calls and calculations (tilemaps, pre-processed sprites, etc.).

-- 
Szymon Piłkowski :-: http://twitter.com/ard


Re: [whatwg] Hardware accelerated canvas

2012-09-04 Thread Ian Hickson
On Mon, 3 Sep 2012, Glenn Maynard wrote:
 
 As Erik said, taking a snapshot of the canvas is very expensive on some
 platforms.  If you're rendering a game in realtime, you never have a time
 out where you can tolerate an expensive readback.

If you're rendering a game in realtime, the issue doesn't come up. The 
most you'll be out is a frame.

You only have to back up the canvas when you notice it's not being updated 
and there's a chance the video card is going to get upset.


On Tue, 4 Sep 2012, Robert O'Callahan wrote:
 
 I have to say though, we've been shipping 2D canvas with the 
 context-loss problem to millions of users for a couple of years now and 
 I don't recall seeing any bug reports about it. And it's the sort of bug 
 users would notice if it happened.

This suggests that we don't need to worry about it. Do we have any 
concrete metrics on how much of a problem this actually is? Is there a way 
to force it? (Changing video driver on Windows should do it, right? Is 
there a way to do that that isn't that heavy-duty? How about on Mac or X?)


On Mon, 3 Sep 2012, David Geary wrote:
 
 I would like to see a provision for handling lost contexts along the 
 lines of Rick's proposal, perhaps with some underlying integration with 
 requestAnimationFrame() so application developers don't have to get 
 directly involved.

I think it makes a lot of sense for the browser to give the page an 
animation frame pronto if it has heard that the context has gone away. 
That doesn't require any special new features though.


On Mon, 3 Sep 2012, Rik Cabanier wrote:

 It's perfectly reasonable for an author to draw into a canvas once and 
 expect that the browser will manage it properly.

I agree.


On Tue, 4 Sep 2012, James Robinson wrote:
 
 It's also important to note that unlike WebGL the only thing lost on a 
 lost context is the image buffer itself.  With WebGL, the page has to 
 regenerate a large number of resources (shaders, buffers, textures) 
 before it can render the next frame.  With canvas the page can just 
 start drawing.  Many applications redraw the entire canvas on every 
 frame so lost context recovery is identical to normal operation - just 
 draw the thing.  All other resources are managed and can be regenerated 
 by the browser without script intervention.

For applications where there is redrawing going on, I agree that it's 
basically a non-issue.

The point is 2D canvas is used for a lot of things that _never_ redraw. 
Even within the WHATWG sphere for example we have this:

   http://www.whatwg.org/issues/data.html

...which fetches data from the network and draws a graph. There's no 
redrawing going on.


On Tue, 4 Sep 2012, David Geary wrote:
 
 I think it makes the most sense to add a context lost handler to the 
 spec and leave it up to developers to redraw the canvas. It's 
 straightforward to understand and to implement. It has the distasteful 
 downside of forcing some developers to add a few lines of code to their 
 existing apps, but if the apps are used and maintained is it really that 
 big of a deal?

Used and maintained are entirely orthogonal on the Web. Most pages are 
not maintained. Many actively-used apps are written by people who were 
contracted to write the app and who are no longer on retainer.


I think it's reasonable for us to add an event that fires on the canvas 
element, or on the CanvasRenderingContext2D object, when the canvas gets 
cleared because the video card is reset. I just don't expect it to be used 
very much, so I don't consider it a solution to all the problems we're 
discussing here. It's only a solution to some of the problems (namely, to 
those apps that only repaint dirty areas, and have authors who are aware 
that this problem can ever happen, and for which just reload the page 
isn't a sufficiently clean answer).

What should the event be called?

canvas.onforcerepaint?
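If such an event existed, page-side use would be a one-liner. A sketch using the suggested name (purely hypothetical; no browser fires this event), with a plain EventTarget standing in for the canvas element:

```javascript
// Hypothetical: the browser would fire 'forcerepaint' after the canvas
// backing store has been cleared, asking the page to repaint it.
function armRepaintHandler(canvas, repaint) {
  canvas.addEventListener('forcerepaint', () => repaint());
}

// Wiring demo with a stand-in object (a real page would pass the
// canvas element itself):
const canvasStandIn = new EventTarget();
let repainted = false;
armRepaintHandler(canvasStandIn, () => { repainted = true; });
canvasStandIn.dispatchEvent(new Event('forcerepaint'));
```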

-- 
Ian Hickson   U+1047E)\._.,--,'``.fL
http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'


Re: [whatwg] Hardware accelerated canvas

2012-09-04 Thread David Geary
On Tue, Sep 4, 2012 at 11:12 AM, Tab Atkins Jr. jackalm...@gmail.com wrote:

 On Tue, Sep 4, 2012 at 10:07 AM, David Geary david.mark.ge...@gmail.com
 wrote:
  On Tue, Sep 4, 2012 at 10:53 AM, Boris Zbarsky bzbar...@mit.edu wrote:
  Ms2ger points out (without endorsing) that there's an:
 
  8)  Have every author who wants their canvas to stick around call
  toDataURL() and stick the result in an img src.
 
  And then the browser presumably uses the img to regenerate the canvas on
 a
  lost context? Why not just give developers a callback and let them
 restore
  the canvas as they see fit?

 No, the author just uses the img in their page instead.  The
 canvas is only used in JS to generate the image, and is never put
 into the document at all.


Ah, okay. Thanks for the clarification.


 And again, the reason that just give them a contextloss event is bad
 is because most people simply won't do it.  It doesn't make any sense!
  The browser just... forgets about your image, which it was displaying
 fine just a second ago?


If you tell developers they have to use toDataURL() to create an image for
static canvases, then you're going to have to tell them why. And then
you're back to square one.

That's what I meant by most of Boris's solutions being at the wrong level
of abstraction. Solving the problem at a higher level of abstraction to
obfuscate the real reason is a mistake, IMO, even when the reason may not
make apparent sense. I believe developers are pretty smart, and will be
able to make sense of it.


david



 ~TJ



Re: [whatwg] Hardware accelerated canvas

2012-09-04 Thread Boris Zbarsky

On 9/4/12 1:20 PM, Glenn Maynard wrote:

The only reason I can think of switch renderers, instead of
snapshotting, is to deal with losing the context *mid*-render, while a
script is still drawing.  (That seems like a problem so rare as to be
almost theoretical, though.)


The main reason to switch renderers instead of snapshotting is that if 
you default to software rendering and then upgrade to GPU you just need 
to take your buffer and dump it to the GPU, whereas if you snapshot you 
have to do a readback.


If the only thing around is the one canvas, they're probably not that 
different.  If there are other GPU ops going on, moving data to the GPU 
won't affect those as much as doing the readback, as I understand.  But 
maybe I understand wrong?



That's another big assumption.  From what I understand, this is a
regular occurance on mobile.


Yes, exactly.

-Boris



Re: [whatwg] Hardware accelerated canvas

2012-09-04 Thread Justin Novosad
On Tue, Sep 4, 2012 at 10:22 AM, Mark Callow callow_m...@hicorp.co.jp wrote:


 It is not a rare occurrence on mobile devices. On my tablet WebGL app's
 lose their context every time the tablet goes to sleep. Since the
 timeout is so short, it only take a brief distraction and poof! the
 tablet is asleep. The loss can happen while the application is in the
 middle of drawing the canvas.


It seems like this is much more of a problem on mobile OSes. On win7, I
tried unsuccessfully to hose GPU-accelerated 2D canvases in IE and Chrome.
 The OS (or is it the graphics driver?) is doing a good job of making 2D
canvas render buffers persist through various GPU calamities such as system
hibernation and remote desktop sessions.  So, is context loss even an issue
on any modern desktop OS?  Can anyone reliably repro a 2D canvas context
loss on Windows (Vista or 7), or MacOS X with up to date drivers and a
current version of any browser?


Re: [whatwg] Hardware accelerated canvas

2012-09-04 Thread Ashley Gullen
It sounds like the real issue is mobile:
- it seems pretty difficult to make a desktop lose a context
- most mobile browsers still use software rendering, or at least haven't
had GPU acceleration very long, so there are unlikely to be bug reports
about it
- it sounds like mobile devices lose contexts much more easily
Even so, I think an 'onforcerepaint' / 'onneedredraw' / 'onreset' type
event is the best way to handle this.  If it's really that rare, most devs
can ignore the event.  If it happens a lot (e.g. on mobile), it will
gradually become common knowledge, built in to frameworks, etc.

Ashley

On 4 September 2012 18:49, Justin Novosad ju...@chromium.org wrote:

 On Tue, Sep 4, 2012 at 10:22 AM, Mark Callow callow_m...@hicorp.co.jp
 wrote:

 
  It is not a rare occurrence on mobile devices. On my tablet WebGL app's
  lose their context every time the tablet goes to sleep. Since the
  timeout is so short, it only take a brief distraction and poof! the
  tablet is asleep. The loss can happen while the application is in the
  middle of drawing the canvas.
 
 
 It seems like this is much more of a problem on mobile OSes. On win7, I
 tried unsuccessfully to hose GPU-accelerated 2D canvases in IE and Chrome.
  The OS (or is it the graphics driver?) is doing a good job of making 2D
 canvas render buffers persist through various GPU calamities such as system
 hibernation and remote desktop sessions.  So, is context loss even an issue
 on any modern desktop OS?  Can anyone reliably repro a 2D canvas context
 loss on Windows (Vista or 7), or MacOS X with up to date drivers and a
 current version of any browser?



Re: [whatwg] Hardware accelerated canvas

2012-09-04 Thread Justin Novosad
On Tue, Sep 4, 2012 at 11:04 AM, Ashley Gullen ash...@scirra.com wrote:

 It sounds like the real issue is mobile:
 - it seems pretty difficult to make a desktop lose a context
 - most mobile browsers still use software rendering, or at least haven't
 had GPU acceleration very long, so there are unlikely to be bug reports
 about it
 - it sounds like mobile devices lose contexts much more easily
 Even so, I think an 'onforcerepaint' / 'onneedredraw' / 'onreset' type
 event is the best way to handle this.  If it's really that rare, most devs
 can ignore the event.  If it happens a lot (e.g. on mobile), it will
 gradually become common knowledge, built in to frameworks, etc.


That doesn't sound too evil, but the ideal solution would be one that does
not involve web standards at all. If there were a way of ensuring GPU
resource persistence on mobile platforms (swapping out resources rather
than discarding them), then we would not be having this conversation.
Making that happen is a debate for a different audience. Unfortunately, OS
and graphics APIs don't evolve at whatwg pace.

IMHO, the convenience of a living standard means that amending the
standard will often be an attractive shortcut even when it is not the best
solution.

-Justin


Re: [whatwg] Hardware accelerated canvas

2012-09-04 Thread Erik Möller

On Tue, 04 Sep 2012 19:15:46 +0200, Boris Zbarsky bzbar...@mit.edu wrote:


On 9/4/12 1:02 PM, David Geary wrote:

Sure, but those use cases will be in the minority


What makes you say that?

Outside of games, I think they're a majority of the canvas-using things  
I've seen.



I think it makes the most sense to add a context lost handler to the
spec and leave it up to developers to redraw the canvas.


OK, yes, let's call that option 9.  And I'll add option 10: do nothing.

So now our list is:


1)  Have a way for pages to opt in to software rendering.
2)  Opt canvases in to software rendering via some sort of heuristic
 (e.g. software by default until there has been drawing to it for
 several event loop iterations, or whatever).
3)  Have a way for pages to opt in to having snapshots taken.
4)  Auto-snapshot based on some heuristics.
5)  Save command stream.
6)  Have a way for pages to explicitly snapshot a canvas.
7)  Require opt in for hardware accelerated rendering.
8)  Authors use toDataURL() when they want their data to stick around.
9)  Context lost event that lets authors regenerate the canvas.
10) Do nothing, assume users will hit reload if their canvas goes blank.

Any other options, before we start trying to actually decide which if  
any of these might be workable?


-Boris


It's important to discuss implementation details so we don't spec  
something that's not implementable on all platforms. That said, we  
obviously should try to stay clear of specifying how things should be  
implemented and instead spec what end result we're after. There's little  
distinction from the end user's point of view, for example, between  
snapshotting and saving the command stream.


Can we live with a weaker statement than a guarantee that the canvas  
content will be retained? Perhaps best effort may be enough?
It's obviously in the vendors' interests to do as good a job as possible  
with retaining canvas content, and I believe for example on Android it's  
possible to get notified before a power-save event occurs. That would  
enable us to do a read-back and properly restore the canvas (investigation  
needed). For applications that just cannot lose any data, canvas  
obviously isn't the best choice for storage. If we do have the context  
lost event then at least new versions of those applications can be sure to  
render out all their data to a canvas and move it over to an img while  
listening for the context lost event. Existing applications that cannot  
lose data... well, maybe that's a loss we'll have to accept, but by all  
means dazzle me with a brilliant solution if you have one.
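For what it's worth, the recovery pattern being discussed here is short to sketch. The 'contextlost' event name below is hypothetical (nothing is specified for 2D canvas yet), and a mock canvas stands in for the real element so the wiring is clear:

```javascript
// Sketch of the proposed recovery pattern. The 'contextlost' event name
// is hypothetical -- no such event is specified for 2D canvas here.
// Because a 2D canvas app holds its scene state in JS, recovery is just
// "run the draw function again".
function attachRedraw(canvas, draw) {
  canvas.addEventListener('contextlost', draw);
  draw(); // initial paint
}

// Minimal mock standing in for an HTMLCanvasElement, so the wiring can
// be exercised outside a browser.
class MockCanvas {
  constructor() { this.listeners = {}; }
  addEventListener(type, fn) {
    (this.listeners[type] = this.listeners[type] || []).push(fn);
  }
  dispatchEvent(type) {
    (this.listeners[type] || []).forEach(fn => fn());
  }
}

const canvas = new MockCanvas();
let paints = 0;
attachRedraw(canvas, () => { paints++; });
canvas.dispatchEvent('contextlost'); // simulated GPU context loss
// paints is now 2: the initial paint plus one recovery repaint
```

In a real page the draw function would replay the application's drawing commands against the 2D context; once a page has opted in this way, the browser no longer needs to snapshot on its behalf.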


JFYI I'd say by far the most common lost-context scenario on desktop would  
be browsing to your driver manufacturer's page, downloading and installing  
a new driver.


--
Erik Möller
Core Gfx Lead
Opera Software
twitter.com/erikjmoller


Re: [whatwg] Hardware accelerated canvas

2012-09-04 Thread Kornel Lesiński
On Tue, 04 Sep 2012 19:35:32 +0100, Justin Novosad ju...@chromium.org  
wrote:


That doesn't sound too evil, but the ideal solution would be one that  
would not involve web standards at all. If there was a way of ensuring  
GPU

resource persistence on mobile platforms (swap-out resources rather than
discard them), then we would not be having this conversation. Making that
happen is a debate for a different audience. Unfortunately OS and  
graphics APIs don't evolve at whatwg pace.


Indeed.

I think it'd be ideal if browsers could hide this problem from developers  
(with command logging, snapshotting or other tricks) until improvements in  
OS/drivers/hardware make this a non-issue (e.g. if the OS can notify  
applications before the gfx context is lost, then browsers could snapshot  
at that point and the problem will be gone for good).


Until then, great performance can still be achieved with some heuristics  
and an accepted risk of loss, e.g. don't snapshot for 1/10th of a second  
after the canvas has been cleared, don't log commands from  
requestAnimationFrame(), etc.


--
regards, Kornel


Re: [whatwg] Hardware accelerated canvas

2012-09-04 Thread Kornel Lesiński

On Tue, 04 Sep 2012 17:43:11 +0100, Boris Zbarsky bzbar...@mit.edu wrote:


5)  Save command stream.
6)  Have a way for pages to explicitly snapshot a canvas.
7)  Require opt in for hardware accelerated rendering.

Any others?

Of the above, I don't think #5 and #7 are realistic, for what it's  
worth.  I haven't put enough thought into the rest yet to decide what I  
think about them.


Would a mix of #5 and snapshotting work?

1. create a (fixed-size?) append-only buffer for drawing commands,
2. log all drawing commands until the buffer is full or a  
non-cheaply-serializable command (e.g. draw of video) is executed,

3. snapshot,
4. empty the buffer
5. goto 2

That could make readbacks much less frequent. Would this still be a  
prohibitively expensive solution?
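The scheme above is simple to express in code. This is an illustrative sketch only: CommandLog is an invented name, and the injected snapshot callback stands in for the expensive GPU readback.

```javascript
// Sketch of the buffer-then-snapshot scheme: log cheaply-serializable
// draw commands into a bounded buffer; when the buffer is full, or a
// non-serializable command arrives, take a snapshot and empty the
// buffer. CommandLog and its method names are illustrative only.
class CommandLog {
  constructor(capacity, snapshot) {
    this.capacity = capacity;
    this.snapshot = snapshot; // stand-in for the expensive GPU readback
    this.commands = [];
    this.snapshots = 0;
  }
  record(cmd, serializable = true) {
    if (!serializable || this.commands.length >= this.capacity) {
      this.snapshot();    // readback happens here, rarely
      this.snapshots++;
      this.commands = []; // empty the buffer, back to step 2
    }
    if (serializable) this.commands.push(cmd);
  }
}

const log = new CommandLog(3, () => {});
for (let i = 0; i < 7; i++) log.record({ op: 'fillRect', args: [i] });
// 7 serializable commands with capacity 3 -> snapshots at the 4th and
// 7th record() calls, leaving 1 command buffered
```

The point is that the readback cost is paid once per full buffer rather than once per draw call.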


--
regards, Kornel


Re: [whatwg] Hardware accelerated canvas

2012-09-04 Thread Erik Möller
On Tue, 04 Sep 2012 20:49:57 +0200, Kornel Lesiński kor...@geekhood.net  
wrote:


until improvements in OS/drivers/hardware make this a non-issue (e.g. if  
the OS can notify applications before gfx context is lost, then browsers  
could snapshot then and problem will be gone for good)


We've just worked hard to get this behaviour into the GPUs to allow long  
running shaders to be terminated for security reasons, so it's not likely  
to go away. Besides, snapshotting right before a lost context doesn't help  
us at all. For all we know the GPU could be halfway through rendering  
something when the event is triggered... even if we could read back the  
half-rendered content in the rendertarget, how do we generate the correct  
output from there? We'd have to take our DeLorean back to before the frame  
was started and replay the rendering commands.


--
Erik Möller
Core Gfx Lead
Opera Software
twitter.com/erikjmoller


Re: [whatwg] Hardware accelerated canvas

2012-09-04 Thread Jussi Kalliokoski
This might be a silly idea, but what about this:

When all references to the context are lost (garbage collected), simply
store the image on the canvas and make it behave like it was just an image.
This would lose all the state of the context, but since the problem seems
to be mostly with things like graphing and stuff like that, it might not be
a problem, since if the reference to the context was lost, it seems to
imply that it won't be needed anymore and what's left is just a static
image.

Maybe I'm just missing something.

Cheers,
Jussi

On Tue, Sep 4, 2012 at 9:51 PM, Kornel Lesiński kor...@geekhood.net wrote:

 On Tue, 04 Sep 2012 17:43:11 +0100, Boris Zbarsky bzbar...@mit.edu
 wrote:

  5)  Save command stream.
 6)  Have a way for pages to explicitly snapshot a canvas.
 7)  Require opt in for hardware accelerated rendering.

 Any others?

 Of the above, I don't think #5 and #7 are realistic, for what it's worth.
  I haven't put enough thought into the rest yet to decide what I think
 about them.


 Would a mix of #5 and snapshotting work?

 1. create a (fixed-size?) append-only buffer for drawing commands,
 2. log all drawing commands until the buffer is full or a
 non-cheaply-serializable command (e.g. draw of video) is executed,
 3. snapshot,
 4. empty the buffer
 5. goto 2

 That could make readbacks much less frequent. Would this still be a
 prohibitively expensive solution?

 --
 regards, Kornel



Re: [whatwg] Hardware accelerated canvas

2012-09-04 Thread Glenn Maynard
(some quotes restored)

On Tue, Sep 4, 2012 at 12:28 PM, Ian Hickson i...@hixie.ch wrote:

   Realistically, there are too many pages that have 2D canvases that are
   drawn to once and never updated for any solution other than don't
   lose the data to be adopted. How exactly this is implemented is a
   quality of implementation issue.

  There are ways to make it work without forgoing acceleration, e.g. taking
  regular backups of the canvas contents, remembering every instruction
  that was sent to the canvas, etc.

 On Mon, 3 Sep 2012, Glenn Maynard wrote:
 
  As Erik said, taking a snapshot of the canvas is very expensive on some
  platforms.  If you're rendering a game in realtime, you never have a
 time
  out where you can tolerate an expensive readback.

 If you're rendering a game in realtime, the issue doesn't come up. The
 most you'll be out is a frame.


You were listing approaches for never exposing the effects of context loss,
in claiming that don't lose data is workable.  To do that with snapshots,
you'd have to take a snapshot *every* frame, regardless of whether it's a
realtime game or a one-shot renderer.  That's the only way to get don't
lose the data.  After all, the game might decide to stop rendering at any
moment (eg. it might pause).  If your snapshot isn't up-to-date when
context loss happens, you won't have an up-to-date snapshot to restore, so
there's nothing you can do but lose data (and showing an out-of-date
snapshot is probably even worse than reverting to a blank canvas).

I do think that *heuristic* snapshots are probably one part of the
solution, but they're distinctly heuristic and they will always leave edge
cases where it's possible to lose canvas data.

It doesn't follow that since there are pages that only render once, the
only solution is to not lose data, since it's fairly straightforward to
only snapshot the first couple of frames and then stop snapshotting (or e.g.
stop snapshotting until the page doesn't render for 500ms, or something
like that).

-- 
Glenn Maynard


Re: [whatwg] Hardware accelerated canvas

2012-09-04 Thread Boris Zbarsky

On 9/4/12 3:17 PM, Jussi Kalliokoski wrote:

When all references to the context are lost (garbage collected)


That never happens while the canvas itself is alive, since if nothing 
else the canvas has a reference to the context.


-Boris


Re: [whatwg] Hardware accelerated canvas

2012-09-04 Thread Jussi Kalliokoski
Hmm... Is it visible to the page outside getContext() ?

On Tue, Sep 4, 2012 at 10:21 PM, Boris Zbarsky bzbar...@mit.edu wrote:

 On 9/4/12 3:17 PM, Jussi Kalliokoski wrote:

 When all references to the context are lost (garbage collected)


 That never happens while the canvas itself is alive, since if nothing else
 the canvas has a reference to the context.

 -Boris



Re: [whatwg] Hardware accelerated canvas

2012-09-04 Thread Oliver Hunt
The context is owned by the canvas element.  If the canvas element is still 
alive then by definition so is the context.

--Oliver

On Sep 4, 2012, at 12:31 PM, Jussi Kalliokoski jussi.kallioko...@gmail.com 
wrote:

 Hmm... Is it visible to the page outside getContext() ?
 
 On Tue, Sep 4, 2012 at 10:21 PM, Boris Zbarsky bzbar...@mit.edu wrote:
 
 On 9/4/12 3:17 PM, Jussi Kalliokoski wrote:
 
 When all references to the context are lost (garbage collected)
 
 
 That never happens while the canvas itself is alive, since if nothing else
 the canvas has a reference to the context.
 
 -Boris
 



Re: [whatwg] Hardware accelerated canvas

2012-09-04 Thread Boris Zbarsky

On 9/4/12 3:31 PM, Jussi Kalliokoski wrote:

Hmm... Is it visible to the page outside getContext() ?


No.  Why does that matter?

-Boris


Re: [whatwg] Hardware accelerated canvas

2012-09-04 Thread Jussi Kalliokoski
I'm just wondering if there's a way to detect if the canvas was made with
the purpose of drawing a static image.

On providing a better way for those single-shot canvases to make sure that
the image is preserved, I think one way would be to add a method
complementing toDataURL(), say toBlobURL(). This has the advantage that the
Blob URL could act as a cross-origin resource if non-CORS-enabled resources
were used to draw to the canvas, whereas toDataURL() will just throw a
security exception. The Blob URL could still be used to serve an image tag
on the page.
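A rough sketch of how this could look with primitives that exist today: canvas.toBlob() plus URL.createObjectURL() approximates the proposed toBlobURL(). Note the cross-origin behaviour described above is part of the proposal, not current behaviour, and canvasToBlobURL is an invented helper name:

```javascript
// Approximation of the proposed toBlobURL() using existing primitives.
// canvasToBlobURL() is an invented helper, not a platform API.
function canvasToBlobURL(canvas) {
  return new Promise((resolve, reject) => {
    canvas.toBlob(blob => blob
      ? resolve(URL.createObjectURL(blob)) // blob: URL usable as img.src
      : reject(new Error('canvas could not be encoded')));
  });
}

// Usage in a page (sketch):
//   canvasToBlobURL(canvas).then(url => { img.src = url; });
```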

Cheers,
Jussi

On Tue, Sep 4, 2012 at 11:03 PM, Boris Zbarsky bzbar...@mit.edu wrote:

 On 9/4/12 3:31 PM, Jussi Kalliokoski wrote:

 Hmm... Is it visible to the page outside getContext() ?


 No.  Why does that matter?

 -Boris



Re: [whatwg] Hardware accelerated canvas

2012-09-04 Thread Rik Cabanier
Rereading the mail thread, it seems like most people want/can live with a
callback that informs the developer that the canvas needs to be recreated.

If the developer doesn't use this new feature, he will get current behavior
where the browser will do snapshotting at reasonable intervals (or fail
outright in extreme circumstances).
If the callback is set, the browser will not do any snapshotting and will
ask the user to re-render the canvas context after a context loss (or a low
memory situation). If there is a context loss during drawing, it's probably
reasonable to just ignore all drawing commands during that rendering pass
and ask for a re-render immediately afterwards.

Someone did bring up that more complex applications use off-screen canvas
elements. Those would need to set the callback as well to avoid having to
snapshot them.
A possible problem here is that the user would need to be intelligent and
not re-render everything every time the callback is executed.
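Those opt-in semantics can be sketched as follows, with invented names (oncontextlost is hypothetical): the browser reports only the canvases actually lost, and the application re-renders just those rather than everything.

```javascript
// Sketch of the opt-in semantics. oncontextlost is a hypothetical
// property; setting it would tell the browser it may skip snapshotting
// for that canvas. Only the canvases reported lost get re-rendered.
const redrawers = new Map(); // canvas -> its redraw function

function optIn(canvas, redraw) {
  redrawers.set(canvas, redraw);
  canvas.oncontextlost = () => redrawers.get(canvas)();
}

// Mock objects standing in for an on-screen and an off-screen canvas.
const onScreen = {}, offScreen = {};
let onDraws = 0, offDraws = 0;
optIn(onScreen, () => onDraws++);
optIn(offScreen, () => offDraws++);

onScreen.oncontextlost(); // only the lost canvas is re-rendered
// onDraws === 1, offDraws === 0
```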

Rik

On Tue, Sep 4, 2012 at 12:15 PM, Erik Möller emol...@opera.com wrote:

 On Tue, 04 Sep 2012 20:49:57 +0200, Kornel Lesiński kor...@geekhood.net
 wrote:

  until improvements in OS/drivers/hardware make this a non-issue (e.g. if
 the OS can notify applications before gfx context is lost, then browsers
 could snapshot then and problem will be gone for good)


 We've just worked hard to get this behaviour into the GPUs to allow long
 running shaders to be terminated for security reasons, so it's not likely to
 go away. Besides, snapshotting right before a lost context doesn't help us
 at all. For all we know the GPU could be halfway through rendering
 something when the event is triggered... even if we could read back the
 half-rendered content in the rendertarget, how do we generate the correct
 output from there? We'd have to take our DeLorean back to before the frame
 was started and replay the rendering commands.


 --
 Erik Möller
 Core Gfx Lead
 Opera Software
 twitter.com/erikjmoller



Re: [whatwg] Hardware accelerated canvas

2012-09-04 Thread Charles Pritchard

On 9/4/2012 10:12 PM, Rik Cabanier wrote:

Rereading the mail thread, it seems like most people want/can live with a
callback that informs the developer that the canvas needs to be recreated.


Wouldn't this be more appropriate as a webgl-2d extension?

It seems like webgl support is a prerequisite for this kind of work.

If we are going to go down this route, I would like to reiterate my 
prior requests:
low memory condition events, reporting of the Microsoft window.screen 
extensions (for pixel ratios)
and a consideration of on-screen magnification as a reason for 
requesting a per-canvas re-paint.


The latter would be pretty cool to have supported as a UA feature. 
Currently, authors can zoom in with browser zoom (thus the window.screen 
extensions + window.onresize).
I'd love to see magnification added on as well. That's something that we 
do see at the OS-level, where the user zooms in on a particular portion 
of the screen and moves around.


With a per-canvas repaint request, as authors, we could listen for that 
event, and the UA would simply have a transformation matrix appropriate 
to the user's zoom level.
With some consideration of low memory conditions (and perhaps media 
stream processing), we might have more incentive as authors to want to 
hook into these APIs.


I came across the low-memory issue while working on a very memory hungry 
application on the iPhone. At some point, I had to do my own memory 
management and estimation
to keep from running out of RAM (at which point, iOS terminates the 
application).



This whole cluster of use-cases has been brought up before on these 
lists. If those cases are taken into account, I think it makes a lot 
more sense to accommodate this hypothetical lost-context issue.

Otherwise, this does just seem like webgl-2d.

As for performance: nobody has touched the obvious items, like using 
array buffers to describe scenes, instead of using thousands of method 
calls.
It's a hell of a lot better to upload a float16 array of x/y coordinates 
than to run thousands of drawImage calls (for something like the IE 
fish demo).


I'm asking that, if we're going to take the plunge, we go one way or the 
other. Satisfy this broad range of real-world uses for canvas 2d,
or go the other route, and think about a webgl-2d, which authors could 
select when doing getContext().


-Charles






Re: [whatwg] Hardware accelerated canvas

2012-09-03 Thread Erik Möller
On Mon, 03 Sep 2012 00:14:49 +0200, Benoit Jacob bja...@mozilla.com  
wrote:



- Original Message -

On Sun, 2 Sep 2012, Erik Möller wrote:

 As we hardware accelerate the rendering of canvas, not just with
 the webgl
 context, we have to figure out how to best handle the fact that
 GPUs lose the
 rendering context for various reasons. Reasons for losing the
 context differ
 from platform to platform but range from going into power-save
 mode, to
 internal driver errors and the famous long running shader
 protection.
 A lost context means all resources uploaded to the GPU will be gone
 and have
 to be recreated. For canvas it is not impossible, though IMO
 prohibitively
 expensive to try to automatically restore a lost context and
 guarantee the
 same behaviour as in software.
 The two options I can think of would be to:
 a) read back the framebuffer after each draw call.
 b) read back the framebuffer before the first draw call of a
 frame and build
 a display list of all other draw operations.

 Neither seem like a particularly good option if we're looking to
 actually
 improve on canvas performance. Especially on mobile where read-back
 performance is very poor.

 The WebGL solution is to fire an event and let the
 js-implementation deal with
 recovering after a lost context
 http://www.khronos.org/registry/webgl/specs/latest/#5.15.2

 My preferred option would be to make a generic context lost event
 for canvas,
 but I'm interested to hear what people have to say about this.

Realistically, there are too many pages that have 2D canvases that
are
drawn to once and never updated for any solution other than don't
lose
the data to be adopted. How exactly this is implemented is a quality
of
implementation issue.


With all the current graphics hardware, this means don't use a GL/D3D  
surface to implement the 2d canvas drawing buffer storage, which  
implies: don't hardware-accelerate 2d canvases.


If we agree that 2d canvas acceleration is worth it despite the  
possibility of context loss, then Erik's proposal is really the only  
thing to do, as far as current hardware is concerned.


Erik's proposal doesn't worsen the problem in any way --- it acknowledges  
a problem that already exists and offers to Web content a way to recover  
from it.


Hardware-accelerated 2d contexts are no different from  
hardware-accelerated WebGL contexts, and WebGL's solution has been  
debated at length already and is known to be the only thing to do on  
current hardware. Notice that similar solutions preexist in the system  
APIs underlying any hardware-accelerated canvas context: Direct3D's lost  
devices, EGL's lost contexts, OpenGL's ARB_robustness context loss  
statuses.


Benoit



--
Ian Hickson   U+1047E)\._.,--,'``.fL
http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'


I agree with Benoit, this is already an existing problem, I'm just  
pointing the spotlight at it. If we want to take advantage of hardware  
acceleration on canvas this is an issue we will have to deal with.


I don't particularly like this idea, but for the sake of having all the  
options on the table I'll mention it. We could default to the old  
behaviour and have an opt-in for hardware-accelerated canvas, in which  
case you would have to respond to said context lost event. That would  
allow the existing content to keep working as it is without changes. It  
would be more work for vendors, but it's up to every vendor to decide how  
to best solve it, either by doing it in software or using the expensive  
read-back alternative in hardware.


Like I said, not my favourite option, but I agree it's bad to break the  
web.


--
Erik Möller
Core Gfx Lead
Opera Software
twitter.com/erikjmoller


Re: [whatwg] Hardware accelerated canvas

2012-09-03 Thread Erik Möller
On Mon, 03 Sep 2012 03:37:24 +0200, Charles Pritchard ch...@jumis.com  
wrote:



Canvas GPU acceleration today is done via transform3d and transitions.


I hope everyone is aware that this connection is just coincidental. The  
fact that one vendor decided to flip the hardware acceleration switch when  
there was a 3d-transform doesn't mean everyone will. Hardware acceleration  
and 3d-transforms are separate features. 3d transforms should be available  
in software rendering as well.


Most [installed] GPUs are not able to accelerate the Canvas path drawing  
mechanism.

They are able to take an array of floats for WebGL, though.


It's true that there is no dedicated hardware for rendering paths in the  
GPUs of today, but they are very good at rendering line segments and  
triangle strips, and paths can be triangulated. With some preprocessing,  
paths can even be rendered directly using shaders:  
http://research.microsoft.com/en-us/um/people/cloop/loopblinn05.pdf




What is really meant here by Canvas GPU acceleration?



I can of course only speak for Opera, but we strive to hardware accelerate  
all parts of the drawing, and for canvas that also entails triangulating  
paths and batching to reduce the number of draw calls. E.g. using an image  
atlas to draw several pieces in succession should give a good performance  
boost. Of course if we want to take it one step further, then adding  
support at the API level for drawing multiple images would be good.


--
Erik Möller
Core Gfx Lead
Opera Software
twitter.com/erikjmoller


Re: [whatwg] Hardware accelerated canvas

2012-09-03 Thread Benoit Jacob


- Original Message -
 On Mon, 03 Sep 2012 00:14:49 +0200, Benoit Jacob bja...@mozilla.com
 wrote:
 
  - Original Message -
  On Sun, 2 Sep 2012, Erik Möller wrote:
  
   As we hardware accelerate the rendering of canvas, not just with
   the webgl
   context, we have to figure out how to best handle the fact that
   GPUs lose the
   rendering context for various reasons. Reasons for losing the
   context differ
   from platform to platform but range from going into power-save
   mode, to
   internal driver errors and the famous long running shader
   protection.
   A lost context means all resources uploaded to the GPU will be
   gone
   and have
   to be recreated. For canvas it is not impossible, though IMO
   prohibitively
   expensive to try to automatically restore a lost context and
   guarantee the
   same behaviour as in software.
   The two options I can think of would be to:
   a) read back the framebuffer after each draw call.
   b) read back the framebuffer before the first draw call of a
   frame and build
   a display list of all other draw operations.
  
   Neither seem like a particularly good option if we're looking to
   actually
   improve on canvas performance. Especially on mobile where
   read-back
   performance is very poor.
  
   The WebGL solution is to fire an event and let the
   js-implementation deal with
   recovering after a lost context
   http://www.khronos.org/registry/webgl/specs/latest/#5.15.2
  
   My preferred option would be to make a generic context lost
   event
   for canvas,
   but I'm interested to hear what people have to say about this.
 
  Realistically, there are too many pages that have 2D canvases that
  are
  drawn to once and never updated for any solution other than don't
  lose
  the data to be adopted. How exactly this is implemented is a
  quality
  of
  implementation issue.
 
  With all the current graphics hardware, this means don't use a
  GL/D3D
  surface to implement the 2d canvas drawing buffer storage, which
  implies: don't hardware-accelerate 2d canvases.
 
  If we agree that 2d canvas acceleration is worth it despite the
  possibility of context loss, then Erik's proposal is really the
  only
  thing to do, as far as current hardware is concerned.
 
  Erik's proposal doesn't worsen the problem in any way --- it
  acknowledges
  a problem that already exists and offers to Web content a way to
  recover
  from it.
 
  Hardware-accelerated 2d contexts are no different from
  hardware-accelerated WebGL contexts, and WebGL's solution has been
  debated at length already and is known to be the only thing to do
  on
  current hardware. Notice that similar solutions preexist in the
  system
  APIs underlying any hardware-accelerated canvas context: Direct3D's
  lost
  devices, EGL's lost contexts, OpenGL's ARB_robustness context loss
  statuses.
 
  Benoit
 
 
  --
  Ian Hickson   U+1047E)\._.,--,'``.fL
  http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
  Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'
 
 I agree with Benoit, this is already an existing problem, I'm just
 pointing the spotlight at it. If we want to take advantage of
 hardware
 acceleration on canvas this is an issue we will have to deal with.
 
 I don't particularly like this idea, but for the sake of having all
 the
 options on the table I'll mention it. We could default to the old
 behaviour and have an opt in for hardware accelerated canvas in
 which
 case you would have to respond to said context lost event.

Two objections against this:

1. Remember this adage from high-performance computing which applies here as 
well: The fast drives out the slow even if the fast is wrong. Browsers want 
to have good performance on Canvas games, demos and benchmarks. Users want good 
performance too. GL/D3D helps a lot there, at the cost of a rather rare -- and 
practically untestable -- problem with context loss. So browsers are going to 
use GL/D3D, period. On the desktop, most browsers already do. It seems 
impossible for the spec to require not using GL/D3D and get obeyed.

2. This would effectively force browsers to ship an implementation that does 
not rely on GL/D3D. For browsers that do have a GL/D3D based canvas 
implementation and target platforms where GL/D3D availability can be taken for 
granted (typically on mobile devices), it is reasonable to expect that in the 
foreseeable future they might want to get rid of their non-GL/D3D canvas impl.

Benoit


 That would
 allow the existing content to keep working as it is without changes.
 It
 would be more work for vendors, but it's up to every vendor to decide
 how
 to best solve it, either by doing it in software or using the
 expensive
 read back alternative in hardware.
 
 Like I said, not my favourite option, but I agree it's bad to break
 the
 web.
 
 --
 Erik Möller
 Core Gfx Lead
 Opera Software
 

Re: [whatwg] Hardware accelerated canvas

2012-09-03 Thread Benoit Jacob


- Original Message -
 What is really meant here by Canvas GPU acceleration?

This means use GL/D3D to implement the 2D canvas drawing primitives; but what 
really matters here, is that this requires using a GL/D3D texture/surface as 
the primary storage for the 2D canvas drawing buffer.

Because of the way that current GPUs work, this entails that the canvas drawing 
buffer is a /discardable/ resource. Erik's proposal is about dealing with this 
dire reality.

Again, accelerated canvases have been widely used for a year and a half now. 
It's not realistic to expect the world to go back to non-accelerated by default 
now.

Benoit


Re: [whatwg] Hardware accelerated canvas

2012-09-03 Thread Ian Hickson
On Sun, 2 Sep 2012, Benoit Jacob wrote:
  
  Realistically, there are too many pages that have 2D canvases that are 
  drawn to once and never updated for any solution other than don't 
  lose the data to be adopted. How exactly this is implemented is a 
  quality of implementation issue.
 
 With all the current graphics hardware, this means don't use a GL/D3D 
 surface to implement the 2d canvas drawing buffer storage, which 
 implies: don't hardware-accelerate 2d canvases.

There are ways to make it work without forgoing acceleration, e.g. taking 
regular backups of the canvas contents, remembering every instruction 
that was sent to the canvas, etc.


 Erik's proposal doesn't worsen the problem in anyway --- it acknowledges 
 a problem that already exists and offers to Web content a way to recover 
 from it.

The problem is that there is content that doesn't recover, and assumes the 
problem doesn't exist. That makes it our problem.


On Mon, 3 Sep 2012, Benoit Jacob wrote:
 
 Remember this adage from high-performance computing which applies here 
 as well: The fast drives out the slow even if the fast is wrong. 
 Browsers want to have good performance on Canvas games, demos and 
 benchmarks. Users want good performance too. GL/D3D helps a lot there, 
 at the cost of a rather rare -- and practically untestable -- problem 
 with context loss. So browsers are going to use GL/D3D, period. On the 
 desktop, most browsers already do. It seems impossible for the spec to 
 require not using GL/D3D and get obeyed.

On Sun, 2 Sep 2012, Glenn Maynard wrote:
 
 If the choice becomes follow the spec and don't hardware-accelerate 
 canvas vs. don't follow the spec and get orders of magnitude better 
 performance, I suspect I can guess the choice implementors will make 
 (implementors invited to speak for themselves, of course).

This isn't an issue of the spec -- there is existing content that would be 
affected.


On Mon, 3 Sep 2012, Erik Möller wrote:
 
 I don't particularly like this idea, but for the sake of having all the 
 options on the table I'll mention it. We could default to the old 
 behaviour and have an opt in for hardware accelerated canvas in which 
 case you would have to respond to said context lost event. That would 
 allow the existing content to keep working as it is without changes. It 
 would be more work for vendors, but it's up to every vendor to decide 
 how to best solve it, either by doing it in software or using the 
 expensive read back alternative in hardware.

On Sun, 2 Sep 2012, Rik Cabanier wrote:
 
 If there was a callback for context loss and if the user had set it, a 
 browser could throw the entire canvas out and ask for it to be 
 re-rendered if the canvas is shown again. This would even make sense if 
 you don't have a HW accelerated canvas.
 
 There would be no backward compatibility issue either. If the user 
 doesn't set the callback, a browser would have to do something 
 reasonable to keep the canvas bitmap around.

This is an interesting idea... do other vendors want to provide something 
like this?

-- 
Ian Hickson   U+1047E)\._.,--,'``.fL
http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'

Re: [whatwg] Hardware accelerated canvas

2012-09-03 Thread Glenn Maynard
On Mon, Sep 3, 2012 at 11:11 AM, Ian Hickson i...@hixie.ch wrote:

 There are ways to make it work without forgoing acceleration, e.g. taking
 regular backups of the canvas contents, remembering every instruction
 that was sent to the canvas, etc.


As Erik said, taking a snapshot of the canvas is very expensive on some
platforms.  If you're rendering a game in realtime, you never have a time
out where you can tolerate an expensive readback.  You can't remember
every single instruction sent to the canvas--that would mean keeping every
drawImage source alive forever, too.  You have to be able to snapshot the
backing store and purge the draw list at some point (thus the readback in
b) of the original post).

I definitely disagree with Benoit's assumption that since WebGL can't come
up with anything better, Canvas can't either.  2d canvas and WebGL aren't
the same--WebGL has far more state to restore, some of which isn't directly
accessible in GLES (eg. depth buffers, IIRC).  It's definitely worth
evaluating every option before assuming that exposing context loss to
developers is really the only possible solution.

 If the choice becomes follow the spec and don't hardware-accelerate
  canvas vs. don't follow the spec and get orders of magnitude better
  performance, I suspect I can guess the choice implementors will make
  (implementors invited to speak for themselves, of course).

 This isn't an issue of the spec -- there is existing content that would be
 affected.


Again, there are approaches which can alleviate the common draw once and
forget about it cases.  For the benefits, I suspect the remaining content
breakage would fall well below the threshold people will tolerate, if it
came down to it.

On Sun, 2 Sep 2012, Rik Cabanier wrote:
 
  If there was a callback for context loss and if the user had set it, a
  browser could throw the entire canvas out and ask for it to be
  re-rendered if the canvas is shown again. This would even make sense if
  you don't have a HW accelerated canvas.
 
  There would be no backward compatibility issue either. If the user
  doesn't set the callback, a browser would have to do something
  reasonable to keep the canvas bitmap around.

 This is an interesting idea... do other vendors want to provide something
 like this?


Also, would vendors actually be willing to shift existing content to this
slower path?  This is only a partial solution if implementations don't do
that part.

-- 
Glenn Maynard


Re: [whatwg] Hardware accelerated canvas

2012-09-03 Thread David Geary
On Mon, Sep 3, 2012 at 7:21 AM, Benoit Jacob bja...@mozilla.com wrote:



 - Original Message -
  What is really meant here by Canvas GPU acceleration?

 This means use GL/D3D to implement the 2D canvas drawing primitives; but
 what really matters here, is that this requires using a GL/D3D
 texture/surface as the primary storage for the 2D canvas drawing buffer.

 Because of the way that current GPUs work, this entails that the canvas
 drawing buffer is a /discardable/ resource. Erik's proposal is about
 dealing with this dire reality.

 Again, accelerated canvases have been widely used for a year and a half
 now. It's not realistic to expect the world to go back to non-accelerated
 by default now.

It seems to me that one way or another we have to break something. Canvases
drawn into once with no animation loop may go blank with GL-based hardware
acceleration, whereas most video games will not function properly without
it. I much prefer the former to the latter.

I agree that it's unrealistic to go back to non-accelerated canvas. I would
like to see a provision for handling lost contexts along the lines of
Rik's proposal, perhaps with some underlying integration with
requestAnimationFrame() so application developers don't have to get
directly involved.

HTML is a living specification and I believe developers would rather have
occasional breaks with backwards compatibility instead of severely reduced
performance.


david


 Benoit



Re: [whatwg] Hardware accelerated canvas

2012-09-03 Thread Rik Cabanier
On Mon, Sep 3, 2012 at 10:31 AM, David Geary david.mark.ge...@gmail.comwrote:

 On Mon, Sep 3, 2012 at 7:21 AM, Benoit Jacob bja...@mozilla.com wrote:

 
 
  - Original Message -
   What is really meant here by Canvas GPU acceleration?
 
  This means use GL/D3D to implement the 2D canvas drawing primitives; but
  what really matters here, is that this requires using a GL/D3D
  texture/surface as the primary storage for the 2D canvas drawing buffer.
 
  Because of the way that current GPUs work, this entails that the canvas
  drawing buffer is a /discardable/ resource. Erik's proposal is about
  dealing with this dire reality.
 
  Again, accelerated canvases have been widely used for a year and a half
  now. It's not realistic to expect the world to go back to non-accelerated
  by default now.

 It seems to me that one way or another we have to break something. Canvases
 drawn into once with no animation loop may go blank with GL-based hardware
 acceleration, whereas most video games will not function properly without
 it. I much prefer the former to the latter.


No, we can't break the current implementation.
It's perfectly reasonable for an author to draw into a canvas once and
expect that the browser will manage it properly.



 I agree that it's unrealistic to go back to non-accelerated canvas. I would
 like to see a provision for handling lost contexts along the lines of
 Rik's proposal, perhaps with some underlying integration with
 requestAnimationFrame() so application developers don't have to get
 directly involved.


I'm unsure why you bring up requestAnimationFrame().
Can you elaborate?



 HTML is a living specification and I believe developers would rather have
 occasional breaks with backwards compatibility instead of severely reduced
 performance.


 david

 
  Benoit
 



Re: [whatwg] Hardware accelerated canvas

2012-09-03 Thread Tobie Langel
I apologize in advance, as this is slightly off-topic. I've been
unsuccessfully looking for info on how Canvas hardware acceleration
actually works and haven't found much.

Would anyone have pointers?

Thanks.

--tobie


[whatwg] Hardware accelerated Canvas

2012-09-03 Thread Saurabh Jain
Hi,

Hardware accelerated Canvas is a necessity. If it is not done, then HTML5
will never be able to compete with native platforms. Most applications need
2D rendering for UI, but application developers these days demand performance
on par with their native counterparts. WebGL is not a good option for 2D-only
stuff, since it's relatively complicated and still not fully supported
on all platforms. Even open source libraries like Three.js are not fully
documented.

So Canvas remains the only viable option for application developers for
most UI related stuff. Yes, many web developers do not know about rendering
loops, but with requestAnimationFrame this will change. This change has to
happen, as users are demanding more complex and beautiful UIs every day, thanks
to their use of iOS and other mobile apps.

Saurabh Jain
Director, SKJ Technologies Private Ltd http://www.skjapp.com/
Founder, OpenClass http://www.skjapp.com/openclass (Community of web and
mobile app developers)
http://www.facebook.com/openpad
Author : Mobile Phone Programming using Java ME (J2ME)
http://library.skjworld.com/mobile-technology/java/java-me
Twitter : http://twitter.com/skjsaurabh


[whatwg] Hardware accelerated canvas

2012-09-02 Thread Erik Möller
As we hardware accelerate the rendering of canvas, not just with the
webgl context, we have to figure out how to best handle the fact that GPUs
lose the rendering context for various reasons. Reasons for losing the
context differ from platform to platform, but range from going into
power-save mode to internal driver errors and the famous long-running
shader protection.
A lost context means all resources uploaded to the GPU will be gone and  
have to be recreated. For canvas it is not impossible, though IMO  
prohibitively expensive to try to automatically restore a lost context and  
guarantee the same behaviour as in software.

The two options I can think of would be to:
a) read back the framebuffer after each draw call.
b) read back the framebuffer before the first draw call of a frame and  
build a display list of all other draw operations.


Neither seems like a particularly good option if we're looking to actually
improve on canvas performance, especially on mobile, where read-back
performance is very poor.


The WebGL solution is to fire an event and let the js-implementation deal  
with recovering after a lost context  
http://www.khronos.org/registry/webgl/specs/latest/#5.15.2


My preferred option would be to make a generic context lost event for  
canvas, but I'm interested to hear what people have to say about this.
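
To make the shape of such an event concrete: a page might use it roughly as
below, modeled on WebGL's webglcontextlost/webglcontextrestored pair. The 2D
event names and the mock canvas are hypothetical stand-ins (this is a sketch
of the idea, not a proposed API); the mock exists only so the flow runs
outside a browser.

```javascript
// Minimal stand-in for an EventTarget-like canvas, so the sketch is runnable.
function makeMockCanvas() {
  const listeners = {};
  return {
    addEventListener(type, fn) { (listeners[type] = listeners[type] || []).push(fn); },
    dispatchEvent(type, ev) { (listeners[type] || []).forEach(fn => fn(ev)); },
  };
}

function setupCanvas(canvas, redraw) {
  let lost = false;
  canvas.addEventListener('contextlost', ev => {
    // As in WebGL: preventDefault() signals that the page will handle
    // restoration itself, so the browser need not preserve the bitmap.
    ev.preventDefault();
    lost = true;
  });
  canvas.addEventListener('contextrestored', () => {
    lost = false;
    redraw(); // regenerate the canvas contents from application state
  });
  redraw(); // initial paint
  return () => lost;
}

let paints = 0;
const canvas = makeMockCanvas();
const isLost = setupCanvas(canvas, () => { paints++; });
// Simulate one loss/restore cycle:
canvas.dispatchEvent('contextlost', { preventDefault() {} });
canvas.dispatchEvent('contextrestored', {});
console.log(paints); // 2: the initial paint plus one repaint after restore
```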


For reference (since our own BTS isn't public yet).  
http://code.google.com/p/chromium/issues/detail?id=91308


--
Erik Möller
Core Gfx Lead
Opera Software
twitter.com/erikjmoller


Re: [whatwg] Hardware accelerated canvas

2012-09-02 Thread Ashley Gullen
Why is it prohibitively expensive to handle a lost context automatically in
a canvas 2D?

Having written a 2D engine which supports this (albeit in DirectX), don't
you just need to recreate the surface, set up your render state again,
recreate any textures that were referenced, then continue? (In some
engines, this can amount to simply calling init() again)

WebGL's intent is just to expose OpenGL ES to javascript, and since OpenGL
ES makes you handle lost contexts yourself, so does WebGL.  I think that,
in contrast, the canvas 2D API should be kept simple and
straightforward to use, and handle lost contexts automatically.

Ashley


On 2 September 2012 10:05, Erik Möller emol...@opera.com wrote:

 As we hardware accelerate the rendering of canvas, not just with the
 webgl context, we have to figure out how to best handle the fact that GPUs
 lose the rendering context for various reasons. Reasons for losing the
 context differ from platform to platform, but range from going into
 power-save mode to internal driver errors and the famous long-running
 shader protection.
 A lost context means all resources uploaded to the GPU will be gone and
 have to be recreated. For canvas it is not impossible, though IMO
 prohibitively expensive to try to automatically restore a lost context and
 guarantee the same behaviour as in software.
 The two options I can think of would be to:
 a) read back the framebuffer after each draw call.
 b) read back the framebuffer before the first draw call of a frame and
 build a display list of all other draw operations.

 Neither seems like a particularly good option if we're looking to actually
 improve on canvas performance, especially on mobile, where read-back
 performance is very poor.

 The WebGL solution is to fire an event and let the js-implementation deal
 with recovering after a lost context
 http://www.khronos.org/registry/webgl/specs/latest/#5.15.2

 My preferred option would be to make a generic context lost event for
 canvas, but I'm interested to hear what people have to say about this.

 For reference (since our own BTS isn't public yet).
 http://code.google.com/p/chromium/issues/detail?id=91308

 --
 Erik Möller
 Core Gfx Lead
 Opera Software
 twitter.com/erikjmoller



Re: [whatwg] Hardware accelerated canvas

2012-09-02 Thread Glenn Maynard
On Sun, Sep 2, 2012 at 12:13 PM, Ashley Gullen ash...@scirra.com wrote:

 Why is it prohibitively expensive to handle a lost context automatically in
 a canvas 2D?

 Having written a 2D engine which supports this (albeit in DirectX), don't
 you just need to recreate the surface, set up your render state again,
 recreate any textures that were referenced, then continue? (In some
 engines, this can amount to simply calling init() again)


That would erase the canvas, since you don't know its contents in order to
recreate it.

WebGL's intent is just to expose OpenGL ES to javascript, and since OpenGL
 ES makes you handle lost contexts yourself, so does WebGL.


If there was a way to make WebGL transparently handle context loss, they'd
have done it.  It's easily the most unpleasant part of WebGL and will
probably end up being the biggest source of bugs and failed interop (on
platforms where it happens).  WebGL makes you handle lost contexts yourself
because it's the only thing that can be implemented in practice.

It'd be easier on users with 2d canvases, since there's much less
unrestorable state (only the contents of the canvas, not textures, shaders,
and so on), but it would still be a major source of interop issues.

-- 
Glenn Maynard


Re: [whatwg] Hardware accelerated canvas

2012-09-02 Thread Ian Hickson
On Sun, 2 Sep 2012, Erik Möller wrote:

 As we hardware accelerate the rendering of canvas, not just with the webgl
 context, we have to figure out how to best handle the fact that GPUs loose the
 rendering context for various reasons. Reasons for loosing the context differ
 from platform to platform but ranges from going into power-save mode, to
 internal driver errors and the famous long running shader protection.
 A lost context means all resources uploaded to the GPU will be gone and have
 to be recreated. For canvas it is not impossible, though IMO prohibitively
 expensive to try to automatically restore a lost context and guarantee the
 same behaviour as in software.
 The two options I can think of would be to:
 a) read back the framebuffer after each draw call.
 b) read back the framebuffer before the first draw call of a frame and build
 a display list of all other draw operations.
 
 Neither seems like a particularly good option if we're looking to actually
 improve on canvas performance, especially on mobile, where read-back
 performance is very poor.
 
 The WebGL solution is to fire an event and let the js-implementation deal with
 recovering after a lost context
 http://www.khronos.org/registry/webgl/specs/latest/#5.15.2
 
 My preferred option would be to make a generic context lost event for canvas,
 but I'm interested to hear what people have to say about this.

Realistically, there are too many pages that have 2D canvases that are 
drawn to once and never updated for any solution other than don't lose 
the data to be adopted. How exactly this is implemented is a quality of 
implementation issue.

-- 
Ian Hickson   U+1047E)\._.,--,'``.fL
http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'

Re: [whatwg] Hardware accelerated canvas

2012-09-02 Thread Benoit Jacob


- Original Message -
 On Sun, 2 Sep 2012, Erik Möller wrote:
 
  As we hardware accelerate the rendering of canvas, not just with
  the webgl
  context, we have to figure out how to best handle the fact that
  GPUs lose the
  rendering context for various reasons. Reasons for losing the
  context differ
  from platform to platform, but range from going into power-save
  mode, to
  internal driver errors and the famous long-running shader
  protection.
  A lost context means all resources uploaded to the GPU will be gone
  and have
  to be recreated. For canvas it is not impossible, though IMO
  prohibitively
  expensive to try to automatically restore a lost context and
  guarantee the
  same behaviour as in software.
  The two options I can think of would be to:
  a) read back the framebuffer after each draw call.
  b) read back the framebuffer before the first draw call of a
  frame and build
  a display list of all other draw operations.
  
  Neither seems like a particularly good option if we're looking to
  actually
  improve on canvas performance, especially on mobile, where read-back
  performance is very poor.
  
  The WebGL solution is to fire an event and let the
  js-implementation deal with
  recovering after a lost context
  http://www.khronos.org/registry/webgl/specs/latest/#5.15.2
  
  My preferred option would be to make a generic context lost event
  for canvas,
  but I'm interested to hear what people have to say about this.
 
 Realistically, there are too many pages that have 2D canvases that
 are
 drawn to once and never updated for any solution other than don't
 lose
 the data to be adopted. How exactly this is implemented is a quality
 of
 implementation issue.

With all the current graphics hardware, this means don't use a GL/D3D surface 
to implement the 2d canvas drawing buffer storage, which implies: don't 
hardware-accelerate 2d canvases.

If we agree that 2d canvas acceleration is worth it despite the possibility of 
context loss, then Erik's proposal is really the only thing to do, as far as 
current hardware is concerned.

Erik's proposal doesn't worsen the problem in any way --- it acknowledges a
problem that already exists and offers Web content a way to recover from it.

Hardware-accelerated 2d contexts are no different from hardware-accelerated 
WebGL contexts, and WebGL's solution has been debated at length already and is 
known to be the only thing to do on current hardware. Notice that similar 
solutions preexist in the system APIs underlying any hardware-accelerated 
canvas context: Direct3D's lost devices, EGL's lost contexts, OpenGL's 
ARB_robustness context loss statuses.

Benoit

 
 --
 Ian Hickson   U+1047E)\._.,--,'``.fL
 http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
 Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'


Re: [whatwg] Hardware accelerated canvas

2012-09-02 Thread Rik Cabanier
On Sun, Sep 2, 2012 at 2:24 PM, Ian Hickson i...@hixie.ch wrote:

 On Sun, 2 Sep 2012, Erik Möller wrote:
 
  As we hardware accelerate the rendering of canvas, not just with the
 webgl
  context, we have to figure out how to best handle the fact that GPUs
 lose the
  rendering context for various reasons. Reasons for losing the context
 differ
  from platform to platform, but range from going into power-save mode, to
  internal driver errors and the famous long-running shader protection.
  A lost context means all resources uploaded to the GPU will be gone and
 have
  to be recreated. For canvas it is not impossible, though IMO
 prohibitively
  expensive to try to automatically restore a lost context and guarantee
 the
  same behaviour as in software.
  The two options I can think of would be to:
  a) read back the framebuffer after each draw call.
  b) read back the framebuffer before the first draw call of a frame and
 build
  a display list of all other draw operations.
 
  Neither seems like a particularly good option if we're looking to actually
  improve on canvas performance, especially on mobile, where read-back
  performance is very poor.
 
  The WebGL solution is to fire an event and let the js-implementation
 deal with
  recovering after a lost context
  http://www.khronos.org/registry/webgl/specs/latest/#5.15.2
 
  My preferred option would be to make a generic context lost event for
 canvas,
  but I'm interested to hear what people have to say about this.

 Realistically, there are too many pages that have 2D canvases that are
 drawn to once and never updated for any solution other than don't lose
 the data to be adopted. How exactly this is implemented is a quality of
 implementation issue.


It would be interesting to hear what other implementors have done to work
around this. Chrome has code to do most of canvas in hardware, and they are
able to run it on devices with limited resources.

I do think that Erik's request has merit. With regular images, you can
always remove it from memory if needed and retrieve it from the history or
reload it from the original source.
An implementor could save a bitmap representation of the canvas to its
cache but this is probably slow if it has to be read back from the GPU and
the image could potentially be large.

If there was a callback for context loss and if the user had set it, a
browser could throw the entire canvas out and ask for it to be re-rendered
if the canvas is shown again. This would even make sense if you don't have
a HW accelerated canvas.

There would be no backward compatibility issue either. If the user doesn't
set the callback, a browser would have to do something reasonable to keep
the canvas bitmap around.
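
To make the opt-in semantics above concrete: the stub below sketches the
decision a browser would face on context loss under this proposal. The
"browser" function, the oncontextlost attribute name, and the canvas objects
are all hypothetical stand-ins, not a real or proposed API.

```javascript
// Sketch: discard the canvas bitmap only if the page has opted in by
// installing a context-loss callback; otherwise preserve it somehow.
function browserHandleContextLoss(canvas) {
  if (typeof canvas.oncontextlost === 'function') {
    canvas.contents = null;   // safe to throw the whole canvas out...
    canvas.oncontextlost();   // ...the page re-renders when asked
  } else {
    // No callback set: keep a (possibly expensive) saved copy around,
    // preserving today's behaviour for existing content.
    canvas.contents = canvas.savedCopy;
  }
}

// A legacy page that never sets the callback keeps its bitmap:
const legacy = { contents: 'bitmap', savedCopy: 'bitmap' };
browserHandleContextLoss(legacy);
console.log(legacy.contents); // 'bitmap'

// An opted-in page regenerates its contents on demand:
const optedIn = { contents: 'bitmap' };
optedIn.oncontextlost = () => { optedIn.contents = 'redrawn'; };
browserHandleContextLoss(optedIn);
console.log(optedIn.contents); // 'redrawn'
```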

Rik


Re: [whatwg] Hardware accelerated canvas

2012-09-02 Thread Glenn Maynard
On Sun, Sep 2, 2012 at 4:24 PM, Ian Hickson i...@hixie.ch wrote:

 Realistically, there are too many pages that have 2D canvases that are
 drawn to once and never updated for any solution other than don't lose
 the data to be adopted. How exactly this is implemented is a quality of
 implementation issue.


If the choice becomes follow the spec and don't hardware-accelerate
canvas vs. don't follow the spec and get orders of magnitude better
performance, I suspect I can guess the choice implementors will make
(implementors invited to speak for themselves, of course).  If I was
playing a game rendered with Canvas and one browser had GPU-acceleration
and one did not, I know for sure which one I'd choose.

It wouldn't be very hard to special-case the draw-once case; take a
backing store snapshot after the first render (or the first few), e.g. at
the end of the task where drawing was performed.  That would allow
restoring those one-shot canvases without imposing a huge cost on canvases
that are drawn to continuously.  It'd be a hack and wouldn't work
everywhere (or every time--you can lose the context mid-script, at least in
theory), but it would avoid most breakage while still allowing
GPU-acceleration, so I wouldn't be surprised if implementations compromised
on something like this.
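
The heuristic described above can be modeled as follows. This is a sketch of
the idea only; the class, its method names, and the snapshot cap are
illustrative assumptions, not any browser's actual internals.

```javascript
// Sketch: snapshot the backing store at the end of any task that drew to
// the canvas, but give up once the canvas looks continuously animated,
// so games don't pay for a read-back every frame.
class SnapshotHeuristic {
  constructor(maxSnapshots = 3) {
    this.maxSnapshots = maxSnapshots; // stop snapshotting after this many
    this.snapshots = 0;
    this.drewThisTask = false;
  }
  onDraw() { this.drewThisTask = true; }
  // Called by the browser at the end of each event-loop task.
  onTaskEnd(readBackFramebuffer) {
    if (this.drewThisTask && this.snapshots < this.maxSnapshots) {
      readBackFramebuffer(); // the expensive GPU read-back
      this.snapshots++;
    }
    this.drewThisTask = false;
  }
}

let readbacks = 0;
const h = new SnapshotHeuristic(3);
h.onDraw();                        // a one-shot canvas: drawn once...
h.onTaskEnd(() => readbacks++);    // ...snapshotted once, then restorable
for (let frame = 0; frame < 10; frame++) { // an animated canvas
  h.onDraw();
  h.onTaskEnd(() => readbacks++);
}
console.log(readbacks); // 3: the heuristic stops at the cap
```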

-- 
Glenn Maynard


Re: [whatwg] Hardware accelerated canvas

2012-09-02 Thread Charles Pritchard

On 9/2/2012 5:36 PM, Glenn Maynard wrote:

On Sun, Sep 2, 2012 at 4:24 PM, Ian Hickson i...@hixie.ch wrote:


Realistically, there are too many pages that have 2D canvases that are
drawn to once and never updated for any solution other than don't lose
the data to be adopted. How exactly this is implemented is a quality of
implementation issue.


If the choice becomes follow the spec and don't hardware-accelerate
canvas vs. don't follow the spec and get orders of magnitude better
performance, I suspect I can guess the choice implementors will make
(implementors invited to speak for themselves, of course).  If I was
playing a game rendered with Canvas and one browser had GPU-acceleration
and one did not, I know for sure which one I'd choose.



Canvas GPU acceleration today is done via transform3d and transitions.
Yes, you are quite likely to notice the difference on a mobile device.

Other than that, there are some niche instances of using drawImage
repeatedly, such as the Fish demo.


Largely, if you're thinking GPU acceleration, you're thinking WebGL.
And yes, you're going to notice a big difference there, too.

Most [installed] GPUs are not able to accelerate the Canvas path drawing 
mechanism.

They are able to take an array of floats for WebGL, though.


GPU-acceleration, so I wouldn't be surprised if implementations compromised
on something like this.


What is really meant here by Canvas GPU acceleration?

Largely, the issues we have are with filters: an item that Vincent from 
Adobe and Rik have both brought up.


I've been frustrated a few times following Chrome development as they 
speed up the MS Fish Tank demo at the cost of ruining the performance of 
pen input/drawing programs.

It's bounced back and forth a few times now.

-Charles