Re: exposing CANVAS or something like it to Web Workers

2013-02-08 Thread Gregg Tavares
On Thu, Feb 7, 2013 at 10:46 PM, Travis Leithead 
travis.leith...@microsoft.com wrote:

  Having thought about this before, I wonder why we don’t use a
 producer/consumer model rather than a transfer of canvas ownership model?
 


 A completely orthogonal idea (just my rough 2c after reading Gregg’s
 proposal), is to have an internal frame buffer accessible via a
 WorkerCanvas API which supports some set of canvas 2d/3d APIs as
 appropriate, and can “push” a completed frame onto a stack in the internal
 frame buffer. Thus the worker can produce frames as fast as desired.


 On the document side, canvas gets a 3rd kind of context—a
 WorkerRemoteContext, which just offers the “pop” API to pop a frame from
 the internal frame buffer into the canvas.


 Then you just add some basic signaling events on both ends of the frame
 buffer and you’re good (as far as synchronizing the worker with the
 document). The producer (in the worker) is free to produce multiple frames
 in advance (if desired), while the consumer is able to pop frames when
 available. You could even have the framebuffer depth configurable.


What would be the advantage? If you wanted to keep DOM elements in sync
with the canvas, you'd still have to post something from the worker back to
the main thread so the main thread would know to pop.
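The push/pop frame buffer described above can be sketched as a plain queue. This is a hypothetical illustration of the signaling model only; the name `FrameQueue` and its methods are invented for this sketch and come from no spec.

```javascript
// Hypothetical sketch of the proposed frame buffer (names are
// illustrative, not from any spec). The worker "push"es completed
// frames; the main-thread WorkerRemoteContext "pop"s them.
class FrameQueue {
  constructor(maxDepth) {
    this.maxDepth = maxDepth; // the configurable framebuffer depth
    this.frames = [];
  }
  push(frame) {
    // Drop the oldest frame when full, so a slow consumer sees the
    // freshest frames rather than an ever-growing backlog.
    if (this.frames.length >= this.maxDepth) this.frames.shift();
    this.frames.push(frame);
  }
  pop() {
    // Returns undefined when no frame is available yet.
    return this.frames.shift();
  }
}
```

Even with such a queue, Gregg's objection stands: the main thread still needs some signal telling it when to pop.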



 


 *From:* Gregg Tavares [mailto:g...@google.com]
 *Sent:* Thursday, February 7, 2013 2:25 PM
 *To:* Ian Hickson
 *Cc:* Charles Pritchard; Web Applications Working Group WG
 *Subject:* Re: exposing CANVAS or something like it to Web Workers


 I put up a new proposal for canvas in workers


 http://wiki.whatwg.org/wiki/CanvasInWorkers


 Please take a look. 


 This proposal comes from offline discussions with representatives from the
 various browsers as well as input from the Google Maps team. I can post a
 summary here if you'd like, but it might be easier to read the wiki.


 Looking forward to feedback.


 On Tue, Jan 8, 2013 at 10:50 AM, Ian Hickson i...@hixie.ch wrote:

  On Wed, 2 Jan 2013, Gregg Tavares (社用) wrote:
 
  Another issue has come up: being able to synchronize updates of a canvas
  in a worker with changes in the main page.

 For 2D, the intended solution is to just ship the ImageBitmap from the
 worker canvas to the main thread via a MessagePort and then render it on
 the canvas at the appropriate time.

 I don't know how you would do it for WebGL.



  Similarly, let's say you have 2 canvases and are rendering to both in a
  worker.  Does
 
 context1.commit();
 context2.commit();
 
  guarantee you'll see both commits together?

 No, unfortunately not. There's no synchronisation between workers and the
 main thread (by design, to prevent any possibility of deadlocks), and
 there's not currently a batching API.

 However, if this becomes a common problem (which we can determine by
 seeing if we get bugs complaining about different parts of apps/games
 seeming to slide around or generally be slightly out of sync, or if we see
 a lot of authors shunting multiple ImageBitmap objects across MessagePort
 channels) we can always add an explicit batching API to make this kind of
 thing easy.

 Note that in theory, for 2D at least, shunting ImageBitmaps across threads
 can be as efficient as commit().


 --
 Ian Hickson   U+1047E)\._.,--,'``.fL
 http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
 Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'




Re: exposing CANVAS or something like it to Web Workers

2013-02-07 Thread Gregg Tavares
I put up a new proposal for canvas in workers

http://wiki.whatwg.org/wiki/CanvasInWorkers

Please take a look.

This proposal comes from offline discussions with representatives from the
various browsers as well as input from the Google Maps team. I can post a
summary here if you'd like, but it might be easier to read the wiki.

Looking forward to feedback.




On Tue, Jan 8, 2013 at 10:50 AM, Ian Hickson i...@hixie.ch wrote:

 On Wed, 2 Jan 2013, Gregg Tavares (社用) wrote:
 
  Another issue has come up: being able to synchronize updates of a canvas
  in a worker with changes in the main page.

 For 2D, the intended solution is to just ship the ImageBitmap from the
 worker canvas to the main thread via a MessagePort and then render it on
 the canvas at the appropriate time.

 I don't know how you would do it for WebGL.


  Similarly, let's say you have 2 canvases and are rendering to both in a
  worker.  Does
 
 context1.commit();
 context2.commit();
 
  guarantee you'll see both commits together?

 No, unfortunately not. There's no synchronisation between workers and the
 main thread (by design, to prevent any possibility of deadlocks), and
 there's not currently a batching API.

 However, if this becomes a common problem (which we can determine by
 seeing if we get bugs complaining about different parts of apps/games
 seeming to slide around or generally be slightly out of sync, or if we see
 a lot of authors shunting multiple ImageBitmap objects across MessagePort
 channels) we can always add an explicit batching API to make this kind of
 thing easy.

 Note that in theory, for 2D at least, shunting ImageBitmaps across threads
 can be as efficient as commit().

 --
 Ian Hickson   U+1047E)\._.,--,'``.fL
 http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
 Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'



Re: Reading image bytes to a PNG in a typed array

2013-01-26 Thread Gregg Tavares
Could this be solved in workers?

x) Create canvas, set to desired size
x) Create 2D context
x) Create imageData object
x) Create a WebGL framebuffer object
x) Attach texture as color target to framebuffer
x) read back pixels into canvas2d's imageData.data member
x) ctx.putImageData into the canvas

1) Set CanvasProxy (or whatever it's called) to the size you want
2) Draw Texture
3) call CanvasProxy's toDataURL('image/png')
4) Set the CanvasProxy back to the original size
5) snip off the mime/encoding header
6) implement base64 decode in JS and decode to Uint8Array

Fewer steps, and it's now async as well.
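Steps 5 and 6 above can be sketched as follows. This is a minimal illustration only; the function name is invented, and a production base64 decoder would validate its input far more carefully.

```javascript
// Minimal sketch of steps 5 and 6: snip the data-URL header, then
// base64-decode the payload into a Uint8Array.
const B64 = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/';

function dataURLToBytes(dataURL) {
  // Step 5: everything after the first comma is the base64 payload;
  // strip any trailing '=' padding.
  const b64 = dataURL.slice(dataURL.indexOf(',') + 1).replace(/=+$/, '');
  const out = new Uint8Array(Math.floor(b64.length * 3 / 4));
  let bits = 0, acc = 0, i = 0;
  for (const ch of b64) {
    acc = (acc << 6) | B64.indexOf(ch); // accumulate 6 bits per character
    bits += 6;
    if (bits >= 8) {
      bits -= 8;
      out[i++] = (acc >> bits) & 0xff;  // emit a byte once 8 bits are ready
    }
  }
  return out;
}
```

For example, `dataURLToBytes('data:image/png;base64,AQID')` yields the bytes 1, 2, 3.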




On Wed, Jan 16, 2013 at 8:02 AM, Florian Bösch pya...@gmail.com wrote:

 Whatever the eventual solution to this problem, it should be the user of
 the API driving the decision how to get the data.


 On Wed, Jan 16, 2013 at 4:56 PM, Kyle Huey m...@kylehuey.com wrote:


 On Wed, Jan 16, 2013 at 7:50 AM, Glenn Maynard gl...@zewt.org wrote:

 On Wed, Jan 16, 2013 at 9:40 AM, Florian Bösch pya...@gmail.com wrote:

 Perhaps we should think of a better scheme to export data than toFoo().
 Maybe toData('url'), toData('arraybuffer'), toData('blob'), or perhaps
 toData(URL), toData(ArrayBuffer), or toData(Blob). I tend to think that if
 you're starting to write toA, toB, toC, toX methods on an object, you've
 not really thought through what's a parameter and what's a method.


 We should be avoiding the need to return data in a bunch of different
 interfaces in the first place.  If the data is large, or takes a long or
 nondeterministic amount of time to create (eg. something that would be
 async in the UI thread), return a Blob; otherwise return an ArrayBuffer.
  The user can convert from there as needed.


 Well, the problem is that we fundamentally screwed up when we specced
 Blob.  It has a synchronous size getter which negates many of the
 advantages of FileReader extracting data asynchronously.  For something like
 image encoding (that involves compression), where you have to perform the
 operation to know the size, Blob and ArrayBuffer are effectively
 interchangeable from the implementation perspective, since both require you
 to perform the operation up front.

 - Kyle





Re: exposing CANVAS or something like it to Web Workers

2013-01-03 Thread Gregg Tavares
On Wed, Jan 2, 2013 at 2:52 PM, Gregg Tavares (社用) g...@google.com wrote:

 Another issue has come up: being able to synchronize updates of a canvas
 in a worker with changes in the main page.

 For a real world example see Google's MapsGL
 (http://support.google.com/maps/bin/answer.py?hl=en&answer=1630790)

 Apparently MapsGL uses 2 canvases and/or some DOM objects overlaid on top
 of each other.
 Dragging the mouse moves objects in all of those layers and they need to
 move simultaneously
 to have a good UX.

 You can imagine issues if a canvas is being rendered to from a worker.
 How would the user
 guarantee that changes from the worker are synchronized with changes to
 the DOM in the
 main thread?

 Similarly, let's say you have 2 canvases and are rendering to both in a
 worker.  Does

context1.commit();
context2.commit();

 guarantee you'll see both commits together?


Let me retract this. There is a way under the current API to solve sync:
use drawImage to another canvas in the main thread, and treat the canvases
being drawn to by the worker basically as offscreen canvases.

It might not be the ideal solution, since a drawImage call is an extra draw,
which is not cheap, especially if it's a large canvas. But it does mean
there's a solution for now without adding any extra API.




 Random thoughts

 *) Leave things the way they are and add another mechanism for syncing?

 In other words, by default things are not sync. Through some other API or
 settings the user can opt
 into getting synchronization

 *) Look at OpenGL swap groups as inspiration for an API?

 http://www.opengl.org/registry/specs/NV/wgl_swap_group.txt
 http://www.opengl.org/registry/specs/NV/glx_swap_group.txt

 *) Consider an 'oncommit' or 'onswap' event on 'Window'?

 The idea being you want to give the main thread a chance to update stuff
 (DOM elements) in response
 to a worker having called 'commit' on a canvas.

 Of course not sure how that would work if you have 2 workers each
 rendering to a different canvas.

 Note: I haven't thought through these issues at all and I've personally
 not had to deal with them, but it seems clear from the MapsGL example that
 a solution will be needed for some subset of apps to have a good UX. I
 know, for example, there's a game engine that has an API to keep DOM
 elements positioned relative to game objects being rendered in a canvas,
 to make it easy to show HTML-based stats for the game objects as opposed
 to having to render the stats manually with canvas.


 On Fri, Nov 16, 2012 at 8:35 PM, Ian Hickson i...@hixie.ch wrote:

 On Fri, 16 Nov 2012, Charles Pritchard wrote:
  
  
  
 http://lists.w3.org/Archives/Public/public-whatwg-archive/2012Nov/0199.html
 
  Seems like we might use requestAnimationFrame in the main thread to
  postMessage to the worker as an alternative to using setInterval in
  workers for repaints.

 The idea in due course is to just expose rAF in workers. Please do read
 the e-mail above, which actually mentions that.

 --
 Ian Hickson   U+1047E)\._.,--,'``.fL
 http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
 Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'





Re: [FileAPI] Updates to FileAPI Editor's Draft

2011-06-30 Thread Gregg Tavares (wrk)
On Tue, Jun 21, 2011 at 10:17 AM, Arun Ranganathan a...@mozilla.com wrote:


 Sorry if these have all been discussed before. I just read the File API for
 the first time and 2 random questions popped in my head.

  1) If I'm using readAsText with a particular encoding and the data in the
 file is not actually in that encoding, such that code points in the file
 cannot be mapped to valid code points, what happens? Is that implementation
 specific or is it specified? I can imagine at least 3 different behaviors.


 This should be specified better and isn't.  I'm inclined to then return the
 file in the encoding it is in rather than force an encoding (in other words,
 ignore the encoding parameter if it is determined that code points can't be
 mapped to valid code points in the encoding... also note that we say to
 "Replace bytes or sequences of bytes that are not valid according to the
 charset with a single U+FFFD character"
 [Unicode: http://dev.w3.org/2006/webapi/FileAPI/#Unicode]).  Right now, the
 spec isn't specific to this scenario ("... if the user agent cannot decode
 blob using encoding, then let charset be null" comes before the algorithmic
 steps, which essentially forces UTF-8).

 Can we list your three behaviors here, just so we get them on record?
  Which behavior do you think is ideal?  More importantly, is substituting
 U+FFFD and defaulting to UTF-8 good enough for your scenario above?


The 3 off the top of my head were

1) Throw an exception. (content not valid for encoding)
2) Remap bad codes to some other value (sounds like that's the one above)
3) Remove the bad character

I see you've listed a 4th: ignore the encoding on error and assume UTF-8.
That one seems problematic because of partial reads. If you are decoding as
Shift-JIS, have returned a partial read, and then later hit a bad code
point, the stuff you've seen previously will all need to change by
switching to no encoding.

I'd choose #2, which it sounds like is already the case according to the
spec.

Regardless of what solution is chosen is there a way for me to know
something was lost?
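Behavior #2 and the loss-detection question can be sketched with the Encoding API's TextDecoder, which postdates this thread; it is used here only to illustrate the two behaviors being debated, not as the FileReader API itself.

```javascript
// Sketch of behavior #2 (remap bad bytes to U+FFFD) and of detecting
// that something was lost, using the later Encoding API. 0xFF can
// never begin a valid UTF-8 sequence.
const bytes = new Uint8Array([0x48, 0x69, 0xff]); // "Hi" + invalid byte
const text = new TextDecoder('utf-8').decode(bytes);

// Default mode: the invalid byte becomes U+FFFD, so loss is detectable
// only by scanning the result for the replacement character.
const somethingWasLost = text.includes('\uFFFD');

// fatal mode instead gives behavior #1: throw on invalid input.
let threw = false;
try {
  new TextDecoder('utf-8', { fatal: true }).decode(bytes);
} catch (e) {
  threw = true;
}
```

So the Encoding API eventually answered Gregg's question by offering both behaviors, selected up front by the caller.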





  2) If I'm reading using readAsText a multibyte encoding (utf-8,
 shift-jis, etc..) is it implementation dependent whether or not it can
 return partial characters when returning partial results during reading? In
 other words,  Let's say the next character in a file is a 3 byte code point
 but the reader has only read 2 of those 3 bytes so far. Is it implementation
 dependent whether the result includes those 2 bytes before reading the 3rd
 byte or not?


 Yes, partial results are currently implementation dependent; the spec. only
 says they SHOULD be returned.  There was reluctance to have MUST condition
 on partial file reads.  I'm open to revisiting this decision if the
 justification is a really good one.


I'm assuming by MUST condition you mean a UA doesn't have to support
partial reads at all, not that how partial reads work shouldn't be
specified.

Here's an example.

Assume we stick with unknown characters get mapped to U+FFFD.
Assume my stream is utf8 and in hex the bytes are.

E3 83 91 E3 83 91

That's 2 code points of 0x30D1. Now assume the reader has only read the
first 5 bytes.

Should the partial results be

(a) filereader.result.length == 1 where the content is 0x30D1

 or should the partial result be

(b) filereader.result.length == 2 where the content is 0x30D1, 0xFFFD
 because at that point the E3 83 at the end of the partial result is not a
valid codepoint

I think the spec should specify that if the UA supports partial reads it
should follow example (a)
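Option (a) is exactly how streaming decoders later came to behave. The sketch below uses the Encoding API's TextDecoder (which postdates this thread) on Gregg's own byte sequence, purely to illustrate the distinction:

```javascript
// Gregg's example bytes: two UTF-8 code points of U+30D1.
const bytes = new Uint8Array([0xe3, 0x83, 0x91, 0xe3, 0x83, 0x91]);
const decoder = new TextDecoder('utf-8');

// Partial read: only the first 5 bytes have arrived. With stream: true
// the dangling E3 83 is buffered rather than emitted as U+FFFD --
// i.e. behavior (a), not (b).
const partial = decoder.decode(bytes.subarray(0, 5), { stream: true });
// partial.length === 1 (just the first U+30D1)

// The final byte completes the second code point.
const rest = decoder.decode(bytes.subarray(5));
```

Without the `stream: true` flag on the first call, the decoder would finalize and emit U+FFFD for the incomplete sequence, which is behavior (b).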





 -- A



Re: Mouse Lock

2011-06-28 Thread Gregg Tavares (wrk)
On Mon, Jun 27, 2011 at 6:17 PM, Glenn Maynard gl...@zewt.org wrote:

 On Mon, Jun 27, 2011 at 8:59 PM, Gregg Tavares (wrk) g...@google.com wrote:

 As far as I know if a game wants to limit movement of the mouse inside a
 window they just mouselock and display their own mouse pointer. The original
 is hidden and their pointer logic uses the deltas to move their software
 mouse pointer.


 Rendering a cursor manually is a non-option.  It invariably results in a
 laggy mouse cursor, even in native applications.  Even a single extra frame
 of latency makes a mouse painful to use.


I beg to differ. Nearly every game that has a mouse renders a mouse cursor
manually. At least all the ones I've played. A quick search on YouTube for
RTS gameplay shows this is true.

The point is, (1) it's unrelated to mouselock, and (2) you can implement it
on top of mouselock for now. If that's too slow for your app and there's a
huge need, something else can be added later.



 But again, this seems like an unrelated feature.

 --
 Glenn Maynard





Re: Mouse Lock

2011-06-27 Thread Gregg Tavares (wrk)
On Fri, Jun 24, 2011 at 10:58 AM, Aryeh Gregor simetrical+...@gmail.com wrote:

 On Wed, Jun 22, 2011 at 5:20 AM, Simon Pieters sim...@opera.com wrote:
  On Tue, 21 Jun 2011 00:43:52 +0200, Aryeh Gregor 
 simetrical+...@gmail.com
  wrote:
  There's a middle ground here: you can lock the mouse to the window,
  but not completely.  That is, if the user moves the mouse to the edge,
  it remains inside, but if they move it fast enough it escapes.  This
  is enough to stop the window from accidentally losing focus when
  you're trying to click on something near the edge of the screen, but
  it lets you easily get outside the window if you actually want to.
  IIRC, Wine does this in windowed mode.  Of course, it might not be
  suitable for games that want to hide the cursor, like FPSes, but it
  might be a possible fallback if the browser doesn't trust the site
  enough for whatever reason to let it fully lock the mouse.
 
  This seems weird. When would you use this middle ground? Would users
  understand it? Also, as you say, totally inappropriate for FPS games.

 Well, the time when I noticed it in Wine is when I was running some
 kind of isometric RPG or strategy game or something, and had to run in
 windowed mode because it was buggy in fullscreen.  In these games you
 have a map, and you scroll around on the map by moving the mouse to
 the edge of the screen.  You do have a visible mouse cursor, but you
 don't want it to leave the window because then you have to position it
 pixel-perfect to get the window to scroll, instead of just slamming it
 against the side.

 Of course, you could just trap the mouse for real in this case as
 well.  In practice that might be fine.  Also, it occurs to me that the
 author could always make the cursor transparent if they wanted to
 confuse the user, and the user might not think to move it quicker to
 get it out even if they could see it (although it's an intuitive thing
 to try).  So it might not be a security advantage at all relative to
 actually locking the cursor.

 But this does highlight the fact that we probably want to support
 mouse-locking that doesn't hide the cursor also, for this kind of
 mouse-based scrolling.  In that case, though, the coordinates and
 mouse events should behave just like regular.  If the user presses the
 cursor up against the side of the screen and it visually stops moving,
 no mousemove events should be fired even if the user does keep moving
 the actual mouse.  The application would then want to check the
 cursor's location instead of listening for events.


As far as I know if a game wants to limit movement of the mouse inside a
window they just mouselock and display their own mouse pointer. The original
is hidden and their pointer logic uses the deltas to move their software
mouse pointer.

That's a simpler solution than adding stuff to the API for this specific
case.
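The software-pointer approach described above can be sketched as a small piece of pure logic: mousemove deltas drive a virtual cursor that is clamped to the window. The factory name and the centered starting position are illustrative assumptions, not from any proposal.

```javascript
// Hypothetical sketch of the software-pointer approach: while the
// mouse is locked and hidden, per-event deltas move a virtual cursor
// that can "slam against" the window edge without escaping.
function makeSoftwareCursor(width, height) {
  let x = width / 2, y = height / 2; // assume it starts centered
  return {
    // Feed mousemove deltas; the cursor never leaves the window.
    move(dx, dy) {
      x = Math.min(Math.max(x + dx, 0), width - 1);
      y = Math.min(Math.max(y + dy, 0), height - 1);
      return { x, y };
    },
  };
}
```

This gives the edge-scrolling behavior Aryeh describes (slamming the cursor against the side) using nothing but the deltas mouselock already provides.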


Re: Mouse Lock

2011-06-20 Thread Gregg Tavares (wrk)
On Mon, Jun 20, 2011 at 4:53 PM, Adam Barth w...@adambarth.com wrote:

 On Mon, Jun 20, 2011 at 3:30 PM, Tab Atkins Jr. jackalm...@gmail.com
 wrote:
  On Mon, Jun 20, 2011 at 3:26 PM, Olli Pettay olli.pet...@helsinki.fi
 wrote:
  On 06/21/2011 01:08 AM, Tab Atkins Jr. wrote:
  On Mon, Jun 20, 2011 at 3:03 PM, Olli Pettayolli.pet...@helsinki.fi
   wrote:
  On 06/21/2011 12:25 AM, Tab Atkins Jr. wrote:
  The use-case is non-fullscreen games and similar, where you'd prefer
  to lock the mouse as soon as the user clicks into the game.
  Minecraft
  is the first example that pops into my head that works like this -
  it's windowed, and mouselocks you as soon as you click at it.
 
  And how would the user unlock when some evil site locks the mouse?
  Could you give some concrete example about
   It's probably also useful to instruct the user how to release the
  lock.
 
  I'm assuming that the browser reserves some logical key (like Esc) for
  releasing things like this, and communicates this in the overlay
  message.
 
  And what if the web page moves focus to some browser window, so that ESC
  is fired there? Or what if the web page moves the window to be outside the
  screen, so that the user can't actually see the message about how to
  unlock the mouse?
 
  How is a webpage able to do either of those things?

 window.focus()


Seems like calling window.focus() would cancel mouselock, as I suspect
would changing the focus any other way, like Alt-Tab, Cmd-Tab, etc.




 Adam



Re: Mouse Lock

2011-06-19 Thread Gregg Tavares (wrk)
On Sun, Jun 19, 2011 at 5:10 AM, Olli Pettay olli.pet...@helsinki.fi wrote:

 On 06/17/2011 01:21 AM, Vincent Scheib wrote:

 - 2 new methods on an element to enter and exit mouse lock. Two
 callbacks on the entering call provide notification of success or failure.
 - Mousemove event gains .deltaX .deltaY members, always valid, not just
 during mouse lock.

 I don't understand the need for .deltaX/Y.


I'm sure there are lots of other use cases, but a typical one for deltaX
and deltaY is camera movement in a first-person game. You move the mouse to
the left to look left. The current mouseX, mouseY stop when the mouse hits
the edge of the window/screen; deltaX, deltaY do not stop.
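The first-person-camera use case can be sketched as follows. The function name, the sensitivity constant, and the pitch limit are illustrative assumptions; the point is only that deltas keep accumulating where absolute coordinates would have pinned at the screen edge.

```javascript
// Sketch of the FPS-camera use case: mousemove deltas accumulate into
// yaw/pitch even after the real cursor would have hit the screen edge.
function makeCamera() {
  let yaw = 0, pitch = 0;
  const sensitivity = 0.005;     // radians per pixel of mouse movement
  const limit = Math.PI / 2;     // don't let the camera flip over
  return {
    onMouseDelta(dx, dy) {
      yaw += dx * sensitivity;   // unbounded: keep turning forever
      pitch = Math.min(Math.max(pitch - dy * sensitivity, -limit), limit);
      return { yaw, pitch };
    },
  };
}
```

With absolute mouseX/mouseY this is impossible: once the cursor pins at the edge, the coordinates stop changing and the camera stops turning.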



 More comments later.


 -Olli



Re: [FileAPI] Updates to FileAPI Editor's Draft

2011-06-14 Thread Gregg Tavares (wrk)
Sorry if these have all been discussed before. I just read the File API for
the first time and 2 random questions popped in my head.

1) If I'm using readAsText with a particular encoding and the data in the
file is not actually in that encoding, such that code points in the file
cannot be mapped to valid code points, what happens? Is that implementation
specific or is it specified? I can imagine at least 3 different behaviors.

2) If I'm reading using readAsText a multibyte encoding (utf-8, shift-jis,
etc..) is it implementation dependent whether or not it can return partial
characters when returning partial results during reading? In other words,
 Let's say the next character in a file is a 3 byte code point but the
reader has only read 2 of those 3 bytes so far. Is it implementation
dependent whether the result includes those 2 bytes before reading the 3rd
byte or not?


Re: Using ArrayBuffer as payload for binary data to/from Web Workers

2011-05-28 Thread Gregg Tavares (wrk)
On Thu, May 26, 2011 at 8:20 PM, Jonas Sicking jo...@sicking.cc wrote:

 On Fri, Apr 22, 2011 at 6:26 PM, Kenneth Russell k...@google.com wrote:
  On Mon, Mar 7, 2011 at 6:17 PM, Kenneth Russell k...@google.com wrote:
  On Mon, Mar 7, 2011 at 5:18 PM, Chris Marrin cmar...@apple.com wrote:
 
  On Mar 7, 2011, at 4:46 PM, Kenneth Russell wrote:
 
  On Mon, Mar 7, 2011 at 3:54 PM, Glenn Maynard gl...@zewt.org wrote:
  On Mon, Mar 7, 2011 at 6:05 PM, Chris Marrin cmar...@apple.com
 wrote:
 
  Now that ArrayBuffer has made its way into XHR, I think it would be
  reasonable to somehow use this new object type as a way to pass data
 to and
  from Workers without copying. I've seen hints and thoughts about
 this here
  and there, but I've never seen a formal discussion. I'm not even
 sure if
  webapps is the place for this discussion, although it seems like a
  reasonable place. Please let me know if there is a better place.
 
  ArrayBuffer is the most obvious use for zero-copy messaging, but I
 don't
  think it should be limited to it...
 
  Has there been discussion anywhere that I've missed?
 
  Probably not the only one, but check the WebWorkers and images
 thread on
  whatwg.
 
  There's definitely interest among the editors of the Typed Array spec
  in revising the spec to support zero-copy data transfers to and from
  web workers. In informal offline discussions, there was a tentative
  plan to put up a new draft for discussion within the next month or so.
  A goal was to prototype it before solidifying a spec so that we can be
  assured it will work well for real-world use cases.
 
  Yeah, I guess the question is whether we should put the functionality
 into ArrayBuffer, or into a wrapper class which would part of the Web Worker
 spec. The latter might make it easier to add other resources (like image and
 canvas) at some point. But I agree, it should be implemented before
 finalizing anything.
 
  Did I hear you volunteer to add a strawman proposal to the Typed Array
 spec? :-)
 
  Yes, you did. :-)
 
  The editors' draft of the typed array spec has been updated with a
  strawman proposal for this zero-copy, transfer-of-ownership behavior:
 
  http://www.khronos.org/registry/typedarray/specs/latest/
 
  Feedback would be greatly appreciated. For the purposes of keeping the
  conversation centralized, it might be helpful if we could use the
  public_webgl list; see
  https://www.khronos.org/webgl/public-mailing-list/ .

 While I see the need for this, I think it will be very surprising to
 authors that for all other data, postMessage is purely a read-only
 action. However for ArrayBuffers it would not be. There are two ways
 we can improve this situation:

 1. Add a separate method next to postMessage which has the prescribed
 functionality. This also has the advantage that it lets users choose
 if they want the transfer-ownership functionality or not, for example
 for cases when performance isn't as big a requirement, and when
 ArrayBuffers are small enough that the transferring ownership logic
 adds more overhead than memory copying logic would.

 2. Add a separate argument to postMessage, similar to the 'ports'
 argument, which contains a list of array buffers whose ownership
 should be transferred.


Riffing off idea #2, the second argument could be an array of objects whose
ownership should be transferred. For now only ArrayBuffers would be legal
objects, but at some point in the future other types of objects could be
added. (Not sure what those objects would be, but that's a much more
flexible interface than #1: you can choose to copy some ArrayBuffers and
transfer others.)



 / Jonas




Re: API for matrix manipulation

2011-03-15 Thread Gregg Tavares (wrk)
On Mon, Mar 14, 2011 at 4:27 PM, Chris Marrin cmar...@apple.com wrote:


 On Mar 14, 2011, at 12:19 PM, Lars Knudsen wrote:

  Hi,
 
  related to this:  Is there any work ongoing to tie these (or more generic
 vector / matrix) classes to OpenCL / WebCL for faster computation across
 CPUs and GPUs?

 On WebKit I've experimented with an API to copy a CSSMatrix to an
 Float32Array, which can be directly uploaded to the GPU. It's surprising how
 much more efficient this was than copying the 16 floating point values out
 of the CSSMatrix using JS. But I've hesitated proposing such an API while
 WebGL and Typed Arrays were still in draft. Now that they're not, maybe it's
 time to discuss it.

 I've also experimented with API in CSSMatrix to do in-place operations,
 rather than creating a new CSSMatrix to hold the results. This too was a big
 win, mostly I think because you get rid of all the churn of creating and
 collecting CSSMatrix objects.


Would it be an even bigger win if CSSMatrix took a destination? That way
you can avoid all allocations, whereas if they do it in place then you
always need to make at least some copies to temps to get anything done.
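The destination-parameter idea can be sketched with a plain 4x4 multiply. This is not the CSSMatrix API; it's a hypothetical free function showing why caller-supplied output buffers let a render loop allocate nothing per frame.

```javascript
// Sketch of the destination-parameter idea: the caller supplies the
// output Float32Array, so scratch matrices can be reused frame after
// frame. Matrices are column-major 4x4, as WebGL expects.
function multiply(a, b, dst) {
  dst = dst || new Float32Array(16); // destination is optional
  for (let col = 0; col < 4; ++col) {
    for (let row = 0; row < 4; ++row) {
      let sum = 0;
      for (let k = 0; k < 4; ++k) {
        sum += a[k * 4 + row] * b[col * 4 + k];
      }
      dst[col * 4 + row] = sum;
    }
  }
  return dst;
}
```

A render loop can then keep one preallocated scratch matrix and call `multiply(view, model, scratch)` every frame, avoiding both the allocation churn of returning new objects and the forced temp copies of pure in-place operations.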


 -
 ~Chris
 cmar...@apple.com








Re: Mouse Capture for Canvas

2011-02-10 Thread Gregg Tavares (wrk)
Sorry I don't have anything to add to the main discussion points but I want
to point out that this should NOT be limited to the canvas tag.

There are whole game engines that work on nothing but manipulating DOM
elements with z-index and setting style.left and style.top.  I can imagine
plenty of other apps that don't use canvas that might benefit from
mouselock.  A mapping page. A graph exploration page. A photo viewing page.


Re: requestAnimationFrame

2010-11-23 Thread Gregg Tavares (wrk)
How about this way of looking at it

Goals

1) prevent programmer error
2) provide a good user experience (browser is responsive with lots of tabs)

The solution to #1 as currently proposed is to guarantee that
requestAnimationFrame will have its callback called periodically, even if
it's not visible.

What's the solution to #2 in a world where requestAnimationFrame is always
called periodically? One solution mentioned is that the browser can freeze
the tab. I don't see how guaranteeing to call requestAnimationFrame once a
second or so is compatible with freezing the tab. Wouldn't that break the
contract?

The problem I'm trying to address is not one of freezing a tab. That has
issues. Sites like gmail, meebo, hotmail, yahoo mail, various twitter sites,
etc all do setInterval or setTimeout processing to get the messages from the
server. That's a reasonable thing to do with setTimeout and setInterval.
Those are generally not very heavy operations. Getting a few k or 100k of
text and processing it.  You can't freeze them without breaking their
functionality.

requestAnimationFrame though is generally designed to be used for updating a
canvas (2d or 3d) which will likely be heavy both in terms of CPU usage
(drawing lots of lines/curves/images into the canvas) and in terms of memory
usage (accessing lots of images).

So, imagine you have a netbook and you've got 10-20 tabs open (less than I
usually have open, YMMV). Imagine all the flash content (ads, video UIs) on
those pages is done with canvas. Imagine there has been enough social
pressure so that sites that used to use setInterval or setTimeout to update
their canvas and therefore bog down the browser have switched to
requestAnimationFrame. Now, given that the there's a promise to call the
callbacks periodically, what would be your solution to fix the issue that
the browser is running really slow since each time a callback comes in, 10s
of megs of images have to get swapped in so they can be drawn into a canvas
and then swapped back out when the next canvas gets its animationFrame
callback?


Re: requestAnimationFrame

2010-11-18 Thread Gregg Tavares (wrk)
On Thu, Nov 18, 2010 at 12:45 AM, Robert O'Callahan rob...@ocallahan.org wrote:

 On Thu, Nov 18, 2010 at 4:03 PM, Gregg Tavares (wrk) g...@google.com wrote:

 On (a)
 Take this page (http://boingboing.net) At the time I checked it today
 (6:55pm PST) it had 10 instances of flash running. O page load 3 were
 animating continuallty, 7 were idle. The 7 idle ones were all video with
 custom UIs. Of those 7, 6 of them will animate constantly once the video is
 finished advertising other videos to play. That means at any one time, if
 this was canvas based, 9 areas will be re-rendered by JavaScript on each
 animationFrame.  It seems like giving the browser more info by tying these
 animation frames to specific elements would let the browser not call
 animationFrame callbacks unless the element it is tied to is on the screen
 and that would be a pretty good thing. If all 10 of those areas were
 re-rendering their ads all the time I suspect that page would get pretty
 slow, especially on netbooks. As it is, only 1-3 areas are ever visible at
 once.


 Yeah, that makes sense.

 Then I suggest adding requestAnimationFrame to elements, so you can call
 element.requestAnimationFrame(callback).

 I think there needs to be a guarantee that the callback is eventually
 called even if the element never becomes visible. People sometimes want to
 take action when an animation finishes. window.mozRequestAnimationFrame
 fires once per second even for hidden tabs for this reason.


I see your point but I'm a little worried about adding that exception. Take
the boingboing.net example again. There are 10 areas, 1 is on the screen,
it's playing video. If once a second all 9 areas have their animationFrame
callback called that video will probably glitch or stutter once a second.  I
suppose the UA can stagger the offscreen animationFrame calls but I just
wonder if calling them at all when they are offscreen is really warranted.
Or how about: I've got boingboing.net in one tab (10 areas),
techcrunch.com in another (4 areas),
geekologie.com in a 3rd (7 areas), nytimes.com in a 4th (3 areas), add a few
more and soon my machine will end up being bogged down making animationFrame
callbacks to offscreen tabs.  Is the solution to just keep calling each one
less and less frequently? I'd prefer it if my under powered netbook wasn't
spending it's time and battery rendering things offscreen ever if possible.
Not only that, but as I add more and more tabs I'd like to be able to swap
them out of memory, but if every second or 2 seconds or 10 seconds each one
gets an animationFrame callback then it will be swapped back into memory.






 It's a little tricky to define exactly what it means for an element to be
 visible. I think we could probably leave that up to the UA without hurting
 interop.


 Rob
 --
 Now the Bereans were of more noble character than the Thessalonians, for
 they received the message with great eagerness and examined the Scriptures
 every day to see if what Paul said was true. [Acts 17:11]



Re: requestAnimationFrame

2010-11-18 Thread Gregg Tavares (wrk)
On Thu, Nov 18, 2010 at 1:54 PM, Robert O'Callahan rob...@ocallahan.orgwrote:

 On Fri, Nov 19, 2010 at 10:48 AM, Robert O'Callahan 
 rob...@ocallahan.orgwrote:

 On Fri, Nov 19, 2010 at 10:46 AM, Darin Fisher da...@chromium.orgwrote:

 I agree.  What's the use case for animating hidden tabs (or canvases that
 are hidden)?

 One of the big problems with JavaScript based animations is that they
 have no way of knowing they should go idle when their window is hidden (in
 a background tab or otherwise).  The same API used to rate-limit rendering
 could address the problem of hidden tabs too.


 Yes. As I mentioned, we do that in Firefox. Please read what I wrote
 above.


 Hmm, maybe I didn't mention it in this thread yet, sorry.

 I did mention the reason we want to guarantee the callback eventually
 fires: apps often want to do something when an animation ends. Having to
 write a special code path specifically to handle the case where an animation
 ends but the tab/element is hidden sounds to me like it's going to be
 error-prone.


I totally see that some bad code could be error-prone if we don't guarantee
the callback is eventually fired.  On the other hand, guaranteeing it gets
fired even when offscreen has all the other repercussions (excess CPU usage,
excess memory paging), both of which could lead to a bad browsing
experience.  It seems like guaranteeing animationFrame callbacks get called
even when offscreen helps bad programmers (their incorrect code works),
whereas never calling them when offscreen helps users (their browser is more
likely to be responsive).  Is there some way to solve both issues?
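For reference, the Firefox behavior Robert describes above (callbacks fire normally when visible, but at most once per second when hidden) can be written down as a small decision rule. This is purely an illustration of the described policy; the function and constant names are hypothetical:

```javascript
// Decision rule matching the described mozRequestAnimationFrame policy:
// visible content animates at full rate, hidden content is throttled to
// at most one callback per second (so "animation ended" logic still runs).
const HIDDEN_INTERVAL_MS = 1000;

function shouldFire(isVisible, msSinceLastFire) {
  if (isVisible) return true;                    // normal frame pacing
  return msSinceLastFire >= HIDDEN_INTERVAL_MS;  // throttled when hidden
}
```

Under this rule a hidden tab still eventually observes its animation finishing, at the cost of roughly one wakeup per second per hidden page.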





 Rob



Re: requestAnimationFrame

2010-11-17 Thread Gregg Tavares (wrk)
On Tue, Nov 16, 2010 at 12:28 PM, Robert O'Callahan rob...@ocallahan.orgwrote:

 On Wed, Nov 17, 2010 at 7:52 AM, Gregg Tavares (wrk) g...@google.comwrote:

 So if the JS on the beforePaint takes a while to complete what happens to
 the browser? For example if you are resizing the browser? Is the browser
 forced not to be able to actually paint until JS returns?


 Not necessarily. In Firefox 4, yes. In Mobile Firefox, which supports
 compositing in a separate process from the content, no.




 Now, when animation is happening on a separate compositor thread that
 guarantee has to be relaxed a bit. But we'll still try to meet it on a
 best-effort basis --- i.e. we'll run the JS animations once per composited
 frame, if the JS can keep up.


 So you're saying that there's no guarantee that requestAnimationFrame will
 actually keep things in sync?


 Right. A cast-iron guarantee that requestAnimationFrame callbacks will run
 to completion before painting is incompatible with the goal of being able to
 repaint the browser window even if scripts are running too long or
 completely hung.

 But we *can* guarantee that a) scripted animations stay in sync with each
 other, and b) if the HTML5 event loop is not too busy (e.g., animation
 scripts take much less time to complete than the interval between composited
 frames and the content process is otherwise idle), scripted animations will
 stay in sync with with declarative animations even if the declarative
 animations are being processed by an off-main-thread compositing framework.
 (OK, this is a bit speculative since we haven't implemented it yet, but the
 approach seems straightforward.)


Just blue-skying here, but it seems like if your goal is to keep
animations in sync, the trigger should be an animation tick, not a repaint.
In other words, you want to give JS a chance to update stuff anytime a CSS
animation updates stuff. That would separate the issue from painting. So
what about an onAnimation type of event?

That would separate this issue of the browser having to wait for JS during a
paint and still keep things in sync.




 Rob



Re: requestAnimationFrame

2010-11-17 Thread Gregg Tavares (wrk)
On Wed, Nov 17, 2010 at 10:45 AM, Gregg Tavares (wrk) g...@google.comwrote:



 On Tue, Nov 16, 2010 at 12:28 PM, Robert O'Callahan 
 rob...@ocallahan.orgwrote:

 On Wed, Nov 17, 2010 at 7:52 AM, Gregg Tavares (wrk) g...@google.comwrote:

 So if the JS on the beforePaint takes a while to complete what happens to
 the browser? For example if you are resizing the browser? Is the browser
 forced not to be able to actually paint until JS returns?


 Not necessarily. In Firefox 4, yes. In Mobile Firefox, which supports
 compositing in a separate process from the content, no.




 Now, when animation is happening on a separate compositor thread that
 guarantee has to be relaxed a bit. But we'll still try to meet it on a
 best-effort basis --- i.e. we'll run the JS animations once per composited
 frame, if the JS can keep up.


 So you're saying that there's no guarantee that requestAnimationFrame
 will actually keep things in sync?


 Right. A cast-iron guarantee that requestAnimationFrame callbacks will run
 to completion before painting is incompatible with the goal of being able to
 repaint the browser window even if scripts are running too long or
 completely hung.

 But we *can* guarantee that a) scripted animations stay in sync with each
 other, and b) if the HTML5 event loop is not too busy (e.g., animation
 scripts take much less time to complete than the interval between composited
 frames and the content process is otherwise idle), scripted animations will
 stay in sync with with declarative animations even if the declarative
 animations are being processed by an off-main-thread compositing framework.
 (OK, this is a bit speculative since we haven't implemented it yet, but the
 approach seems straightforward.)


 Just blue-skying here, but it seems like if your goal is to keep
 animations in sync, the trigger should be an animation tick, not a repaint.
 In other words, you want to give JS a chance to update stuff anytime a CSS
 animation updates stuff. That would separate the issue from painting. So
 what about an onAnimation type of event?

 That would separate this issue of the browser having to wait for JS during
 a paint and still keep things in sync.



Thinking about this some more: the point of the previous suggestion is that
keeping a JS animation in sync with a CSS animation has nothing to do with
painting or rendering. The fact that apparently Firefox ties those 2 things
together is an artifact of Firefox's implementation. It's just as valid to,
for example, have animation running on 1 thread and rendering on another.
Many console/PC games do this. To keep animations in sync requires syncing
the animation values in the animation thread, not deciding to update values
during rendering.

So, if the goal is to let JS sync with CSS animations, putting it in events
related to painting is the wrong model. To solve that particular goal it
would make more sense to add a cssAnimationTick event or something. Any JS
that wants to stay in sync would add an event handler:

window.addEventListener('animationTick', updateAnimations, false);

The handler would get passed a clock similar to how beforePaint works now.

Of course going down that path doesn't solve the issue I'm trying to solve
which could be stated as

*) Don't render from JS unless visible (ie, don't execute expensive 2d or 3d
canvas rendering calls when not visible)

With the caveats of

  a) Make it extremely easy to do the right thing so that few if any sites
making canvas ads or canvas games hog the CPU when not visible.
  b) don't make the browser wait on JavaScript.
  c) don't render more than needed. (ie, don't render 60 frames a second if
you're only changing stuff at 15)


It seems like an 'element.setRenderCallback(func, time)' which works exactly
the same as setInterval but only fires when that element is visible would
solve both of those issues. If you don't want it on element then make it
window.setRenderCallback(element, func, time).  The problem with a name like
setRenderCallback is that it seems tied to rendering, which is what I want
to avoid, which is why setIntervalIfVisible makes more sense. That name does
not imply it has anything to do with rendering.
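A rough sketch of what such an API could look like as a library shim, purely illustrative: setIntervalIfVisible and isInViewport are hypothetical names, the visibility test is split into a pure helper, and a real UA would use its own, more precise notion of visibility (occlusion, hidden tabs, etc.):

```javascript
// Pure visibility rule: does a client rect intersect the viewport?
// A UA-level implementation would be more precise than this.
function isInViewport(rect, viewportWidth, viewportHeight) {
  return rect.bottom > 0 && rect.right > 0 &&
         rect.top < viewportHeight && rect.left < viewportWidth;
}

// Hypothetical shim for the proposed API: same shape as setInterval,
// but the callback is skipped while the element is offscreen.
function setIntervalIfVisible(element, func, interval) {
  return setInterval(function () {
    var r = element.getBoundingClientRect();
    if (isInViewport(r, window.innerWidth, window.innerHeight)) {
      func();  // only invoked while the element intersects the viewport
    }
  }, interval);
}
```

Note the shim still wakes up on every interval to run the visibility check; the point of a native API is that the UA could avoid even that wakeup (and the associated paging-in of swapped-out tabs).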




 Rob





Re: requestAnimationFrame

2010-11-17 Thread Gregg Tavares (wrk)
On Wed, Nov 17, 2010 at 5:20 PM, Robert O'Callahan rob...@ocallahan.orgwrote:

 On Thu, Nov 18, 2010 at 11:22 AM, Gregg Tavares (wrk) g...@google.comwrote:

 Thinking about this some more: the point of the previous suggestion is
 that keeping a JS animation in sync with a CSS animation has nothing to
 do with painting or rendering. The fact that apparently Firefox ties those
 2 things together is an artifact of Firefox's implementation. It's just as
 valid to, for example, have animation running on 1 thread and rendering on
 another. Many console/PC games do this. To keep animations in sync requires
 syncing the animation values in the animation thread, not deciding to
 update values during rendering.


 But for maximum smoothness you want to update your animation once per
 painted frame. It doesn't matter how many threads you use and whether you do
 it during rendering.


  So, if the goal is to let JS sync with CSS animations, putting it in
 events related to painting is the wrong model. To solve that particular
 goal it would make more sense to add a cssAnimationTick event or
 something.  Any JS that wants to stay in sync would add an event handler

 window.addEventListener('animationTick', updateAnimations, false);

 The handler would get passed a clock similar to how beforePaint works now.


 Let's not get hung up on the beforePaint name. As Cameron mentioned, we
 should probably just ditch the event from the proposal altogether.

 Of course going down that path doesn't solve the issue I'm trying to solve
 which could be stated as

 *) Don't render from JS unless visible (ie, don't execute expensive 2d or
 3d canvas rendering calls when not visible)


 With the caveats of

   a) Make it extremely easy to do the right thing so that few if any sites
 making canvas ads or canvas games hog the CPU when not visible.
   b) don't make the browser wait on JavaScript.
   c) don't render more than needed. (ie, don't render 60 frames a second
 if you're only changing stuff at 15)


 Those are good goals, except I think we need to drill down into (c). Are
 people changing stuff at 15Hz for crude performance tuning, or for some
 other reason?


Let's ignore (c) for the moment. I feel with (a) and (b) alone there are
still issues to discuss.

On (a)
Take this page (http://boingboing.net). At the time I checked it today
(6:55pm PST) it had 10 instances of Flash running. On page load 3 were
animating continually, 7 were idle. The 7 idle ones were all video with
custom UIs. Of those 7, 6 of them will animate constantly once the video is
finished advertising other videos to play. That means at any one time, if
this was canvas based, 9 areas will be re-rendered by JavaScript on each
animationFrame.  It seems like giving the browser more info by tying these
animation frames to specific elements would let the browser not call
animationFrame callbacks unless the element it is tied to is on the screen
and that would be a pretty good thing. If all 10 of those areas were
re-rendering their ads all the time I suspect that page would get pretty
slow, especially on netbooks. As it is, only 1-3 areas are ever visible at
once.



 Rob



Re: requestAnimationFrame

2010-11-16 Thread Gregg Tavares (wrk)
On Mon, Nov 15, 2010 at 7:24 PM, Robert O'Callahan rob...@ocallahan.orgwrote:

 On Tue, Nov 16, 2010 at 1:45 PM, Gregg Tavares (wrk) g...@google.comwrote:

 On Mon, Nov 15, 2010 at 4:07 PM, Robert O'Callahan 
 rob...@ocallahan.orgwrote:

 On Tue, Nov 16, 2010 at 12:55 PM, Gregg Tavares (wrk) 
 g...@google.comwrote:

 I've seen proposals for something more like

   element.setIntervalIfVisible(func, interval);

 Which is the same as setInterval but only gets called if the element is
 visible.  With that kind of API there is no connection to rendering. Each
 area that needs animation can set the framerate it is hoping to get. The UA
 can throttle if it wants to.


 What happens if one element's event handler makes another element
 visible, will the second element's timer be able to fire or not?


  Does it matter?


 Yes, I think it probably does matter for interop.

 What happens now?


 With mozRequestAnimationFrame, visibility is not relevant to whether the
 callbacks fire, so the question does not arise.


  Now, with setInterval there is no connection to rendering. I set the code
  to update one element to have an interval of 16 and another to have an
  interval of 100. If the first one makes the second one visible that doesn't
  affect whether or not the second one's setInterval function gets called. If
  there were a setIntervalIfVisible and that behavior was browser independent,
  how would that make things worse than they are today? It seems like visible
  is just a hint to the browser that it doesn't need to call the interval
  function if it doesn't want to. It doesn't need to be a guarantee that it
  will be called when visible any more than the current setInterval is a
  guarantee that it will be called at the interval rate.


 mozRequestAnimationFrame actually guarantees that you will be called when
 the browser paints. Otherwise we can't guarantee that JS animations will
 stay in sync with declarative animations.


So if the JS on the beforePaint takes a while to complete what happens to
the browser? For example if you are resizing the browser? Is the browser
forced not to be able to actually paint until JS returns?



 Now, when animation is happening on a separate compositor thread that
 guarantee has to be relaxed a bit. But we'll still try to meet it on a
 best-effort basis --- i.e. we'll run the JS animations once per composited
 frame, if the JS can keep up.


So you're saying that there's no guarantee that requestAnimationFrame will
actually keep things in sync?





 When an element becomes visible, does its timer fire immediately if the
 last firing was more than 'interval' ago?


 Yes? No? Does it matter? What happens now?


 I suspect it would matter for interop, yes. Again, with
 requestAnimationFrame the question does not arise.

  I'm not trying to be argumentative. I'm just not seeing the issue.
  Certainly I'd like various areas to be updated together, or in sync, or
  when visible, but that seems like it could be up to the UA. If one UA has a
  simple implementation and another UA has a more complex one that gives a
  better user experience, then that's a reason to switch to that browser.


 As Boris mentioned, keeping multiple animations (including declarative
 animations) in sync was a design goal for requestAnimationFrame.

  This seems like you'd just pass in 0 for the interval. The UA can decide
  whether or not to call you as fast as it can or at 60hz or whatever it
  thinks is appropriate, just as it does for setInterval today.


 OK.

 To summarize, I think you have raised two separate feature requests here:
 1) Provide an API that lets authors specify a maximum frame rate
 2) Provide an API that lets authors avoid getting a callback while a
 particular element is invisible

 I'm curious about the use-cases that require #1, and given it can be
 implemented on top of requestAnimationFrame, the question for any proposal
 is whether the extra convenience justifies the surface area. (And note that
 even something like setIntervalIfVisible requires some lines of code for
 the animation script to figure out which frame it should display.)

  I'm not sure how important #2 is. If your callback includes if
  (element.getBoundingClientRect().top > window.innerHeight) return;, I think
  you'd be pretty close to the same effect. But if you have a lot of animated
 elements, most of which are not visible, I can see that native support could
 be helpful.


 Rob



requestAnimationFrame

2010-11-15 Thread Gregg Tavares (wrk)
following in the footsteps of a previous thread
http://lists.w3.org/Archives/Public/public-webapps/2009OctDec/0223.html

I'd like to see if there is some consensus on this issue. Various people are
anxious to see this happen in more browsers.

A couple of questions came up for requestAnimationFrame
(see
http://weblogs.mozillazine.org/roc/archives/2010/08/mozrequestanima.html)

One is, how should this api be used if I want an app to update at 10hz.  It
seems to be designed to assume I want the maximum frame rate. If I want to
run slower would I just use

setInterval(function() {
  window.requestAnimationFrame(draw);  // draw() renders one frame
}, 100); // request frames at 10hz?

That's fine if that's the answer

But that brings up the next question. I'm in some alternate world where
there is no Flash; instead all ads are implemented in Canvas. Therefore a
site like cnn.com or msnbc.com has 5 canvases, each running an ad.  I
don't really want all 5 canvases redrawn if they are not on the screen, but
the current design makes requestAnimationFrame and beforePaint window-level
APIs.

That seems to have 2 issues.

1) All of them will get a beforePaint event even if most or all of them are
scrolled off the visible area since there is only 1 beforePaint event
attached to the window.

2) All of them will get beforePaint events at the speed of the fastest one.
If one ad only needs to update at 5hz and other updates at 60hz both will
update at 60hz.

Do those issues matter? If they do matter would making both
requestAnimationFrame and beforePaint be element level APIs solve it?


Re: requestAnimationFrame

2010-11-15 Thread Gregg Tavares (wrk)
On Mon, Nov 15, 2010 at 3:28 PM, Boris Zbarsky bzbar...@mit.edu wrote:

 On 11/15/10 6:03 PM, Gregg Tavares (wrk) wrote:

 One is, how should this api be used if I want an app to update at 10hz.
  It seems to be designed to assume I want the maximum frame rate.


 The initial API was designed to replace setTimeout(..., 0) animations,
  yeah.  The idea was to ask for "animate as smoothly as possible" but allow
 the browser to choose the actual animation rate.


  setInterval(function() {
window.requestAnimationFrame();
   }, 100); // request frames at 10hz?


  This seems like it would work, yes. It may be suboptimal if you have
  multiple consumers, because you end up with multiple timers in flight. But
  you'd get that anyway if you had some consumers wanting 10Hz and some
  wanting 11Hz and some wanting 9Hz.

 It's worth thinking about this a bit, but I'm not sure there's a great
 solution here.

  But that brings up the next question. I'm in some alternate world where
 there is no Flash, instead all ads are implemented in Canvas. Therefore
 a site like cnn.com http://cnn.com or msnbc.com http://msnbc.com has

 5 canvases running ads in each one.  I don't really want all 5 canvases
 redrawn if they are not on the screen but the current design has
 requestAnimationFrame and beforePaint to be window level apis.


 Note that the beforePaint event can make arbitrary changes to the DOM (and
 in particular can change whether things are visible)...


  1) All of them will get a beforePaint event even if most or all of them
 are scrolled off the visible area since there is only 1 beforePaint
 event attached to the window.

 2) All of them will get beforePaint events at the speed of the fastest
 one. If one ad only needs to update at 5hz and other updates at 60hz
 both will update at 60hz.

 Do those issues matter? If they do matter would making both
 requestAnimationFrame and beforePaint be element level APIs solve it?


 The current mozRequestAnimationFrame implementation allows passing a
 function.  If no function is passed, an event is fired at the window. If a
 function is passed, that function is called.  So in this case, each canvas
 could pass a separate function that just modifies that canvas. If they want
 to throttle themselves they'd use your setInterval suggestion above as
 needed.  That would address #2 above.


How would setInterval with multiple functions on mozRequestAnimationFrame
solve this issue? They are still all going to get called at the fastest
interval right? Or did you mean if mozRequestAnimationFrame was moved to an
element level function?


 I suppose we could do something similar with events and event handlers if
 people think it's a better API.

 For #1, things are more difficult; a lot of the work you have to do to
 determine whether something is inside the visible area is work you want to
 put off until _after_ all the beforePaint handlers have run.


I've seen proposals for something more like

  element.setIntervalIfVisible(func, interval);

Which is the same as setInterval but only gets called if the element is
visible.  With that kind of API there is no connection to rendering. Each
area that needs animation can set the framerate it is hoping to get. The UA
can throttle if it wants to.

Would something more like that solve the issue?



 -Boris



Re: requestAnimationFrame

2010-11-15 Thread Gregg Tavares (wrk)
On Mon, Nov 15, 2010 at 3:58 PM, Robert O'Callahan rob...@ocallahan.orgwrote:

 On Tue, Nov 16, 2010 at 12:03 PM, Gregg Tavares (wrk) g...@google.comwrote:

 One is, how should this api be used if I want an app to update at 10hz.
  It seems to be designed to assume I want the maximum frame rate. If I want
 to run slower would I just use

 setInterval(function() {
window.requestAnimationFrame();
   }, 100); // request frames at 10hz?

 That's fine if that's the answer


 If you really want to animate in 10Hz steps, then I suggest you do
 something like
 var start = window.animationTime;
 var rate = 10; // Hz
 var duration = 10; // s
 var lastFrameNumber;
 function animate() {
   var elapsed = window.animationTime - start;
    if (elapsed < duration) {
 window.requestAnimationFrame(animate);
   }
   var frameNumber = Math.round(elapsed/(1000/rate));
   if (frameNumber == lastFrameNumber)
 return;
   lastFrameNumber = frameNumber;
   // ... update the display based on frameNumber ...
 }
 window.requestAnimationFrame(animate);


That seems quite a bit more complicated than

   setInterval(myRenderFunction, 100);

Which is what you'd do today.
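For what it's worth, the frame-quantization step in Rob's sketch boils down to a one-line pure function (the name here is hypothetical), which makes the 10Hz rounding easy to check:

```javascript
// Quantize elapsed time (in ms) to a frame number at the given rate (Hz).
// Rob's sketch redraws only when this number changes, so a 60Hz callback
// stream drives a 10Hz animation without redundant renders.
function frameNumberAt(elapsedMs, rateHz) {
  return Math.round(elapsedMs / (1000 / rateHz));
}
```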



 (Out of curiosity, what are the use-cases for this?)


There is plenty of Flash content that has a lower than 60hz (or as fast as
possible) refresh rate. When something is implemented in HTML5 instead of
Flash, what should they do to get similar results? Checking cnn.com,
time.com, arstechnica.com, wired.com and msnbc.com I found that 7 ads were
set to run at 18hz, 3 were set to run at 24hz, and 2 were set to run at
30hz. I used SWF Info
(https://addons.mozilla.org/en-US/firefox/addon/45361/) to check the fps
setting. I have no idea why they don't choose to run as fast as possible.
It could be laziness, it could be that it makes the pages too slow and
unresponsive to set them to as fast as possible, it could be that rendering
3 times more than necessary (60hz vs 18hz) would eat battery life, it could
be an artistic choice, it could be just that Flash makes you pick one vs
defaulting to as fast as possible.



 That seems to have 2 issues.

 1) All of them will get a beforePaint event even if most or all of them
 are scrolled off the visible area since there is only 1 beforePaint event
 attached to the window.


 Something could be done here, but it seems rather complex to specify things
 in a way that avoids firing callbacks when relevant content is out of
 view.

 2) All of them will get beforePaint events at the speed of the fastest one.
 If one ad only needs to update at 5hz and other updates at 60hz both will
 update at 60hz.


 The above approach solves this I think.

 Rob



Re: requestAnimationFrame

2010-11-15 Thread Gregg Tavares (wrk)
On Mon, Nov 15, 2010 at 4:07 PM, Robert O'Callahan rob...@ocallahan.orgwrote:

 On Tue, Nov 16, 2010 at 12:55 PM, Gregg Tavares (wrk) g...@google.comwrote:

 I've seen proposals for something more like

   element.setIntervalIfVisible(func, interval);

 Which is the same as setInterval but only gets called if the element is
 visible.  With that kind of API there is no connection to rendering. Each
 area that needs animation can set the framerate it is hoping to get. The UA
 can throttle if it wants to.


 What happens if one element's event handler makes another element visible,
 will the second element's timer be able to fire or not?


Does it matter? What happens now? Now, with setInterval there is no
connection to rendering. I set the code to update one element to have an
interval of 16 and another to have an interval of 100. If the first one
makes the second one visible that doesn't affect whether or not the second
one's setInterval function gets called. If there were a setIntervalIfVisible
and that behavior was browser independent, how would that make things worse
than they are today? It seems like visible is just a hint to the browser
that it doesn't need to call the interval function if it doesn't want to.
It doesn't need to be a guarantee that it will be called when visible any
more than the current setInterval is a guarantee that it will be called at
the interval rate.



 When an element becomes visible, does its timer fire immediately if the
 last firing was more than 'interval' ago?


Yes? No? Does it matter? What happens now?

I'm not trying to be argumentative. I'm just not seeing the issue. Certainly
I'd like various areas to be updated together, or in sync, or when visible,
but that seems like it could be up to the UA. If one UA has a simple
implementation and another UA has a more complex one that gives a better
user experience, then that's a reason to switch to that browser.



 If the author just wants a smooth animation, what should they pass as the
 interval? For smooth animations, it seems to me that the browser alone
 should choose the frame rate.


This seems like you'd just pass in 0 for the interval. The UA can decide
whether or not to call you as fast as it can or at 60hz or whatever it
thinks is appropriate, just as it does for setInterval today.



 Rob



Re: requestAnimationFrame

2010-11-15 Thread Gregg Tavares (wrk)
On Mon, Nov 15, 2010 at 5:10 PM, Jonas Sicking jo...@sicking.cc wrote:

 On Mon, Nov 15, 2010 at 5:01 PM, Bjoern Hoehrmann derhoe...@gmx.net
 wrote:
  * Gregg Tavares (wrk) wrote:
 There is plenty of flash content that has a lower than 60hz (or fast as
 possible) refresh rate. When something is instead implementing in HTML5
 instead of Flash what should they do to get the similar results? Checking
 cnn.com, time.com, arstechnica.com, wired.com and msnbc.com I found that
 7
 ads were set to run at 18hz, 3 were set to run at 24hz, 2 were set to run
 at
 30hz. I used SWF
  Info (https://addons.mozilla.org/en-US/firefox/addon/45361/) to check
 the fps setting. I have no idea why they don't choose run as fast
 as possible. I could be laziness, it could be that it makes the pages
 too
 slow and unresponsive to set them to as fast as possible, it could be
 that
 rendering 3 times more then necessary, 60hz vs 18hz would eat battery
 life, it could be an artistic choice, it could be just that flash makes
 you
 pick one vs defaulting to fast as possible.
 
  The frame rate is a number in the swf header that cannot be set to a as
  fast as possible value.


How does that info help resolve this? "As fast as possible" is effectively
the same as 60hz for all practical purposes, and yet lots of people are not
setting their Flash animations to 60hz.



 Ah, so that also means that different animations can't run with
 different frame rates?


Yes they can. One instance of flash is set to 18hz, another is set to 24hz,
both are on the same page. Or are we talking about something else?



 Maybe having a global property which defines the maximum frame rate
 for all animations on the page would be enough then? Though it'll give
 ads and their embedders a fun property to fight over.

 / Jonas



Re: [widgets] Zip vs GZip Tar

2010-04-28 Thread Gregg Tavares
On Wed, Apr 28, 2010 at 2:28 PM, timeless timel...@gmail.com wrote:

 On Wed, Apr 28, 2010 at 7:48 PM, Gregg Tavares g...@google.com wrote:
  I'm sorry if I'm not familiar with all the details of how the widgets
 spec
  is going but the specs encourage comment so I'm commenting :-)
 
  It seems like widgets have 2 uses
 
  #1) As a way to package an HTML5 app that can be downloaded similar to a
  native
  executable
 
   #2) As a way to package an HTML5 app that can be embedded in a page but
   easily distributed as a single file (ie, do what Flash / Silverlight /
   Unity3D currently do except based on HTML5 / ECMAScript)
  
   Use #2 would benefit tremendously if, like Flash / Silverlight / Unity3D,
   the application could start as soon as it has enough info to start, as
   opposed to having to wait for the entire package to download.
 
  To accomplish that goal requires using a format that can be streamed.
  Zip is not such a format. Zip files store their table of contents at the
 end
  of
  the file. They can have multiple table of contents but only the last one
  found is valid. That means the entire file has to be downloaded before a
 UA
  can
  correctly figure out what's in the file.
 

 That's incorrect. Zip is streamable. Go read the format.


I have read the format in extreme detail as well as implemented plugins
that support asset streaming for Firefox, Safari, IE and Chrome.

It's not streamable.


Re: [widgets] Zip vs GZip Tar

2010-04-28 Thread Gregg Tavares
On Wed, Apr 28, 2010 at 2:28 PM, timeless timel...@gmail.com wrote:


 That's incorrect. Zip is streamable. Go read the format.



In more detail:

A zip file is allowed to look like this

offset : content
0100 : [fileheader: foo.txt]
0180 : [content for foo.txt]
0400 : [fileheader: foo.txt]
0480 : [content for foo.txt]
0900 : [table of contents: foo.txt: location 0400]

Only the table of contents is the authority on which files
are valid inside a zip file, not the per-file headers.

This is so you can safely append new versions
of files to the end of a zip file and just write a new
table of contents, and so you avoid false positives from
other content in the file.

Only the last table of contents
is valid, and only the files it points to are valid.

Therefore, unless you are implementing some kind of
hack, hoping that reading the file headers will work,
you aren't actually reading zip files. You're reading some
subset that happens to work with your hack.

Zip files are not streamable.
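To make the point concrete, here is a rough Node-style sketch of the step every correct zip reader must do first: scan backward from the end of the file for the End Of Central Directory record. The signature 0x06054b50, the 22-byte minimum record size, and the 65535-byte maximum comment length come from the zip format itself; the function name is hypothetical.

```javascript
// Hypothetical sketch: locate a zip file's End Of Central Directory
// (EOCD) record. Because a variable-length comment may follow the
// record, a reader must scan BACKWARD from the end of the buffer --
// which is exactly why the whole file must be present before the
// authoritative table of contents can be found.
function findEOCD(buf) {
  const SIG = 0x06054b50; // EOCD signature ("PK\x05\x06" little-endian)
  // EOCD is at least 22 bytes; the trailing comment is at most 65535.
  const earliest = Math.max(0, buf.length - 22 - 0xffff);
  for (let i = buf.length - 22; i >= earliest; i--) {
    if (buf.readUInt32LE(i) === SIG) return i;
  }
  return -1; // no EOCD found: not a (complete) zip file
}
```

Nothing before the EOCD can be trusted, so a streaming reader that acts on local file headers as they arrive is guessing.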


solving the CPU usage issue for non-visible pages

2009-10-19 Thread Gregg Tavares
I posted something about this in the whatwg list and was told to bring it
here.

Currently, AFAIK, the only way to do animation in HTML5 + JavaScript is
using setInterval. That's great, but it has the problem that even when the
window is minimized or the page is not the front tab, JavaScript has no way
to know it should stop animating. So, for a CPU-heavy animation using canvas 2d or
canvas 3d, even a hidden tab uses lots of CPU. Of course the browser does
not copy the bits from the canvas to the window, but JavaScript is still
drawing hundreds of thousands of pixels to the canvas's internal image
buffer through canvas commands.

To see an example run this sample in any browser

http://mrdoob.com/projects/chromeexperiments/depth_of_field/

Minimize the window or switch to another tab and notice that it's still
taking up a bunch of CPU time.

Conversely, look at this flash page.

http://www.alissadean.com/

While it might look simple there is actually a lot of CPU based pixel work
required to composite the buttons with alpha over the scrolling clouds with
alpha over the background.

Minimize that window or switch to another tab and, unlike HTML5 + JavaScript,
Flash has no problem knowing that it no longer needs to render.

There are probably other possible solutions to this problem but it seems
like the easiest would be either

*) adding an option to window.setInterval to only call back if the window is
visible

*) adding window.setIntervalIfVisible (same as the previous option really)

A possibly better solution would be

*) element.setIntervalIfVisible

Which would only call the callback if that particular element is visible.
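The gating logic behind such a setIntervalIfVisible can be sketched in userland (all names hypothetical; the visibility check is injected as a function so the sketch does not depend on a browser, but in a real page it would consult something like a document-hidden flag):

```javascript
// Hypothetical sketch of the proposed setIntervalIfVisible: the timer
// still fires, but the user's callback is only invoked while the page
// (or element) reports itself visible. `isHidden` is injected so the
// gate itself is easy to see and to test.
function makeVisibilityGate(isHidden, callback) {
  return function () {
    if (!isHidden()) callback();
  };
}

function setIntervalIfVisible(isHidden, callback, ms) {
  return setInterval(makeVisibilityGate(isHidden, callback), ms);
}
```

A userland version like this still burns a (cheap) timer tick while hidden; the point of putting it in the platform is that the UA can stop the clock entirely.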

It seems like this will become an issue as more and more HTML5 pages start
using canvas to do stuff they would have been doing in Flash, like ads or
games. Without a solution those ads and games will continue to eat CPU even
when not visible, which will make the user experience very poor.

There may be other solutions. The advantage to this solution is it requires
almost no changes to logic of current animating applications.

Some have suggested the UA can solve this but I don't see how a UA can know
when JavaScript should and should not run. For example chat and mail
applications run using setInterval even when not visible so just stopping
non-visible pages from running at all is not an option.

Another suggested solution is for pages to default to only being processed when
visible, requiring the page to somehow notify the UA that it needs processing
even when not visible. This could break some existing apps, but they would
likely be updated immediately. This solution might lessen the probability of
resource-hogging pages in the future as HTML5+JavaScript+canvas ads, games
and other apps become more common, since the default would be for them not to
hog the CPU when not visible.

-gregg


Re: RFC: WebApp timing

2009-08-13 Thread Gregg Tavares
On Wed, Aug 12, 2009 at 6:12 PM, Zhiheng Wang zhihe...@google.com wrote:

 Hello,

We recently started a draft to provide timing-related APIs in browsers.
 The goal is to add the missing pieces in webapp latency measurements using
 Javascript. As a starter, right now we've only included a
  minimum set of interfaces we consider necessary, which mainly focus on the
  time and type of the
  navigation.

The first cut of the draft is attached below. It's sketchy but should
 hold much of our ideas. We are
 still actively working on it. Any interest and feedback on the draft are
 highly welcome.


Is this a place that app specific timing would be useful to add or is that
already covered somewhere else?

In other words, I'm looking for an API that helps me do this

var timer = new Timer();
timer.start();
for (var x = 0; x < 10; ++x) { }
var elapsedTime = timer.elapsedTime;
document.write("loop took " + elapsedTime + " seconds");

Where elapsedTime is some relatively high precision number so this might
print

loop took 0.0145 seconds

(using Date, which only has a precision of milliseconds, is not enough)



 cheers,
 Zhiheng



Re: New FileAPI Draft | was Re: FileAPI feedback

2009-08-06 Thread Gregg Tavares
On Wed, Aug 5, 2009 at 8:08 PM, Garrett Smith dhtmlkitc...@gmail.comwrote:

 On Wed, Aug 5, 2009 at 1:04 AM, Arun Ranganathana...@mozilla.com wrote:
  Garrett Smith wrote:
 
  Please show the subsequent use cases you've studied and please do
  publish your studies.
 
 
 
  What I meant by use cases was this exchange:
 
  http://lists.w3.org/Archives/Public/public-webapps/2009JulSep/0371.html
 
  http://lists.w3.org/Archives/Public/public-webapps/2009JulSep/0457.html
 

 Those are the discussions of Events that you did not participate in.
 Where is the complexity study?

  In the case of changing UI indicators, using a common codepath for
 success
  as well as errors seemed more useful than multiple error callbacks.

 Multiple error callbacks? Who brought that up? Are you making a Straw Man?

  In the case of file read APIs, simply getting the data asynchronously is
  more convenient than using events.  There is no intrigue at work here,
  merely disagreement.
 

 Is it? In reading that discussion, I see no disagreement from you
 whatsoever.

 I see that you posted this new thread. You said you studied the use
 cases and that your original design was best. We still have no
 evidence that any studying has taken place. Please post the studies so
 that they can be understood.

 The route you chose makes a permanent design decision.  Once done, it
 can not be undone. If it goes through as is, the best case at that
 point would be to start over.

 I am not going to argue with hand-waving summations or the multiple
 error handlers straw man.


 Garrett


I probably have no understanding of the issue but casually glancing at
the discussion links above, can't you solve the multiple callback issue
by wrapping the callback in JavaScript?

If I understand correctly, a File is not an element nor is it attached to
the DOM so some random JavaScript can not do something like
document.getElementsByTagName('File');

In the DOM case multiple callbacks per event make some sense because
2 different pieces of code can find the same elements, but in the File case
only the code that creates a File object holds a reference to it; some other
random code can't go
searching for that object. That means the code that created the File object
can manage it however it sees fit, including wrapping the entire thing in
a JavaScript object and having that object support multiple callbacks if it
wants to.
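The wrapping idea above takes only a few lines: a plain JavaScript object that accepts one completion value (say, the result of a single File read callback) and fans it out to any number of registered listeners. All names here are hypothetical.

```javascript
// Hypothetical fan-out wrapper: the code that owns the single
// underlying callback resolves this object once, and any number of
// listeners -- added before or after resolution -- are invoked with
// the same result.
function makeMulticast() {
  const listeners = [];
  let done = false;
  let result;
  return {
    addListener(fn) {
      if (done) fn(result);   // late subscribers still get the value
      else listeners.push(fn);
    },
    resolve(value) {
      if (done) return;       // a result can only be delivered once
      done = true;
      result = value;
      listeners.forEach(fn => fn(value));
    }
  };
}
```

So a single-callback File API does not preclude multiple consumers; it just pushes that policy into page script.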


Re: New FileAPI Draft | was Re: FileAPI feedback

2009-08-06 Thread Gregg Tavares
On Thu, Aug 6, 2009 at 2:35 AM, Anne van Kesteren ann...@opera.com wrote:

 On Thu, 06 Aug 2009 10:53:31 +0200, Gregg Tavares g...@google.com wrote:

 On Thu, Aug 6, 2009 at 12:48 AM, Anne van Kesteren ann...@opera.com
 wrote:

 XHR does not do local data. It also does not do raw file data very well.


 I don't quite understand this comment. Isn't the point of these
 discussions how to extend browsers and HTML? XHR was just extended to
 support cross-site requests and new properties were added. Couldn't it be
 extended again to
 support local files (through the filedata: url system) and as well to
 support raw data?


 Sorry, it indeed could be extended in that way. I just think it is a bad
 idea. XMLHttpRequest provides an almost complete HTTP API and such a thing
 is completely different and way more complex than what is needed to read
 files from disk. In addition XMLHttpRequest is quite complex and overloading
 the whole object and all its members with this functionality is not worth
 saving a few members on the File/FileData objects.


well, here's an issue that NOT doing it through XMLHttpRequest seems to
bring up.

Say I'm writing a word processor or blog posting software. I want to add the
feature where the user can import an RTF file, and I'm going to parse the
file in JavaScript, pull out the text and formatting, and stick it in their
current document.  If you do it through XMLHttpRequest then there is one
path to get the data. One way or another the user ends up providing a URL.
That URL could be "http://foo.com/mydoc.rtf" or it could be "filedata: uuid,
mydoc.rtf", but I pass that to XMLHttpRequest and I get back the data in a
consistent way.  Otherwise, if you go forward with getting the data through
FileData.get??? methods, there are now two code paths needed, 2 ways to
set up callbacks, 2 ways to get progress events, 2 ways to deal with errors,
etc.

That seems less than ideal to me.
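The single-code-path argument can be made concrete: one loader that neither knows nor cares whether the URL is http: or the proposed filedata: scheme, because XHR hides the difference. The XHR constructor is injected here purely so the sketch is self-contained and testable outside a browser; in a page it would just be XMLHttpRequest.

```javascript
// Hypothetical single code path: one function loads data from ANY URL
// scheme the XHR implementation understands (http:, or the proposed
// filedata:). Same callbacks, same progress events, same error
// handling, regardless of where the bytes come from.
function loadData(XHRClass, url, onDone, onError) {
  const req = new XHRClass();
  req.open("GET", url, true);
  req.onload = () => onDone(req.responseText);
  req.onerror = onError;
  req.send();
}
```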







 --
 Anne van Kesteren
 http://annevankesteren.nl/



Re: New FileAPI Draft | was Re: FileAPI feedback

2009-08-05 Thread Gregg Tavares
On Wed, Aug 5, 2009 at 1:47 AM, Arun Ranganathan a...@mozilla.com wrote:

 Gregg Tavares wrote:

 I'd really like to contribute to this as I'm helping implement WebGL and
 we
 need a way to get LOTS of data into WebGL. Hundreds of files per app.

 That said, there's a bunch of things I don't understand about the API

 *) Given that XMLHttpRequest is limited to a same domain policy

 Firefox 3.5 and Safari 4 support cross-domain XMLHttpRequest, mitigated by
 CORS.  See, for example,
 http://hacks.mozilla.org/2009/07/cross-site-xmlhttprequest-with-cors/

 but the img
 tag, audio tag, video tag, script tag and others are not, how do you
 resolve
 that with the FIle API?


 The File API is meant to talk to your local file system.  It isn't a
 network download API, but it seems that's what you want :-).  Perhaps I am
 misunderstanding your question?


Sorry, I was told on the HTML5 list that this is where network downloads and
archive support stuff belonged.

It certainly seems like a good fit to me.




 -- A*



Re: Multipart or TAR archive/package support for all APIs (Performance and scalability)

2009-08-05 Thread Gregg Tavares
On Tue, Aug 4, 2009 at 12:15 PM, Sebastian Markbåge
sebast...@calyptus.euwrote:

 There has been some talk about supporting packages/archives in web APIs.

 http://lists.whatwg.org/htdig.cgi/whatwg-whatwg.org/2009-July/021586.html
 http://lists.w3.org/Archives/Public/public-webapps/2009JulSep/0460.html

 --

 Why?

 The main purpose is performance because of overhead in opening several
 connections. While this could potentially be solved using HTTP pipelining
 there are several advantages to working with packages in single requests.

 - HTTP pipelining has various bugs in several servers and proxies.
 Therefore, it's disabled by default in most(?) current browsers and several
 proxies. If it's going to be usable it needs several specification changes
 and updates across the board.

 - Even if HTTP pipelining worked as expected, Keep-Alive connections
 require that servers keeps connections open for a certain timeout period.
 That can be detrimental to high performance servers. The solution is to set
 the timeout so low that the client may timeout during page load - making it
 worse than no pipelining.

 - By packaging small files as a single unit, you can gzip the entire
 package using Content-Encoding. That can have major bandwidth benefits
 compared to gzipping each file individually. (.tar.gz vs .gz.tar)

 - High performance servers can easily handle packaged data. It's quicker to
 read a large file as a single consecutive read than making lots of look ups
 and seeks to find lots of small files on disk.

 - Clients can cache the package as a single unit, giving clients the same
 boost on disk seeks, if a simple caching mechanism is used.

 - If it's ubiquitous - it's easier for authors to package and deploy
 widgets and client-side tools as single files.

 --

 How?

 My suggestion would be to define the fragment part of the URI for a certain
 multipart type. The fragment identifier denotes a certain file within the
 package. E.g. http://domain/archive#filename This is similar to fragment's
 use for rows in text/plain (rfc5147 http://tools.ietf.org/html/rfc5147),
 anchors in text/html (rfc2854 http://www.ietf.org/rfc/rfc2854.txt), etc.

 The idea is that you could reference a single file within an archive in any
 other web API. The UA would download the archive and load the file when it
 reaches a file with said identifier within that archive.

 The packaging format could be any existing format: application/tar (using
 filenames), multipart/form-data (using the name attribute in
 Content-Disposition part-header) or multipart/related (using Content-ID
 part-header). But it's probably good to settle on one.

 The identifier fragment can itself have an additional fragment when the
 inner mime type defines a special usage: a
 href=archive#file.html#anchorname or any other place where you need a
 fragment to define behavior (SVG, XBL, etc). Multiple # should be fine
 according to the generic uri syntax 
 (rfc3986http://tools.ietf.org/html/rfc3986).
 Does it break any other existing specs or implementations?

 --

 Compatibility?

 Additionally you could add an additional attribute to HTML5 and CSS for
 archive URLs. That way, compatible UAs can use the package, if supported,
 otherwise fallback to regular files. Perhaps you could use media types using
 nested mimes: <audio src="archive#audiofile" type="multipart/related;
 fragmenttype=audio/ogg" />

 Example usage:

 <img src="file.jpg" msrc="archive.tar#file.jpg" />


 {

 background-image: url(file.jpg);

 background-image: murl(archive.tar#file.jpg);

 }


 <script src="file.js" msrc="archive.tar#file.js" type="text/javascript" />


 var img = new Image();

 img.msrc = "archive.tar#file.png";



 xhr.open("GET", "archive.tar#file.xml", true);


 -

 The purpose of this suggestion is that it is a rather easy specification.
 It's a minor tweak that would open up many possibilities using existing
 tools. It may not be so minor for implementations though. I'd love to hear
 other suggestions on how to best to address this issue.


This is a neat idea but it doesn't appear to solve these use cases I see as
fairly common.

#1) I'm making a version of any medium to heavy flash app but using only
HTML5 standards audio, video, canvas.  I need lots of assets. I want to
start my app immediately, put up a loading progress bar while I download the
assets.

How does

<img src="archive#img1.jpg">
<img src="archive#img2.jpg">
<img src="archive#img3.jpg">

Get me info for a progress bar?

#2) I'm making a game where I want to download user content. The user makes
a character using some editor, online or offline, the character is put in an
archive with the user's images and other data.

How would the above suggestion let me download this archive and query what's
inside so I can use this user's data?

#3) I'm making WorldOfSpaceCraft in WebGL. Knowing that I need to download
LOTS of assets I make an archive file with low-poly lods and low-res
textures at the 

Re: New FileAPI Draft | was Re: FileAPI feedback

2009-08-05 Thread Gregg Tavares
How about this?

Why make a new API for getting the contents of a file (local or otherwise)
when we already have one which is XHR?

What if FileList was just array of File objects where each File object is
just a URL in the format

filedata: uuid, filename

Then you can use that URL anywhere in HTML a URL is valid: script, img,
audio, video, CSS, AND XHR.

That would mean you wouldn't be adding a new API to get the contents of a
file. If you want the contents just use XHR and use the URL from the File in
the FileList.

You could add a few more functions to XHR like request.getAsDataURL(),
request.getAsTextInEncoding(), etc., if needed.


Adding gzipped archive support to XMLHttpRequest and the File API?

2009-08-03 Thread Gregg Tavares
I hope I'm in the right place.

I'd like to propose / suggest that XMLHttpRequest be extended to handle
gzipped tar files. Usage would be something like

var req = new XMLHttpRequest();
req.onfileavailable = myOnFileAvailable;
req.open("GET", "http://someplace.com/somefile.tgz", true);
req.send();

// This function is called as files in the archive are downloaded.
// In other words, you do NOT have to wait for the entire archive
// to be downloaded. The archive is downloaded and decompressed
// asynchronously, and as each file in the tar is completely
// downloaded this callback is called with a File object representing
// that file.
function myOnFileAvailable(fileObject) {
  // fileObject is a File API type object. The data is available at this
  // point.
}

This extension is needed for rich web applications of the type that would
use
the canvas 2d or canvas 3d API to implement things like games or other apps
that require thousands of small to large assets.

In that vein I'd also like to suggest a new URL scheme. (Forgive me if there is
already a standard for this. Just point me in that direction.)

File currently has getAsDataURI and getAsText. Unfortunately neither of
those are sufficient for the needs of the applications this proposal is
trying to deal with. getAsText clearly only handles text and getAsDataURI
has severe size restrictions.

How about adding getAsBinaryURI, something to this effect:

binary:??? where ??? is some ID made up by the browser, guaranteed to be
unique within one HTML page. The data it represents is not available
directly to HTML but you can pass it to things that use URIs so for example.

img.src = file.getAsBinaryURI();
audio.src = file.getAsBinaryURI();

etc..

I'll be honest, I feel like there is a certain mismatch between the File API
as currently specified and the use cases that apps like the ones I'm referring to need.

Given that in the example above when myOnFileAvailable is called the file is
completely available it would be cumbersome to have to add yet another
callback to have to use file.getAsBinaryURI in an asynchronous manner.  In
other words.  I'm hoping I wouldn't have to do this

// NOT THIS PLEASE

function myOnFileAvailable(fileObject) {
  fileObject.getFileAsBinaryURI(myFileAsBinaryURICallback);
}

function myFileAsBinaryURICallback(binaryURI) {
  someImgElement.src = binaryURI;
}


Though maybe that's not so bad. It would get called immediately in this case
because the fileObject already has the data but would still allow it to be
used for files not in an archive as well.

Thoughts?