Re: [whatwg] Reviving ImageBitmap options: Intend to spec and implement

2016-02-10 Thread Kenneth Russell
On Wed, Feb 10, 2016 at 10:25 AM, Domenic Denicola  wrote:

> From: whatwg [mailto:whatwg-boun...@lists.whatwg.org] On Behalf Of Xida
> Chen
>
> > We intend to push this feature forward in Blink, particularly we intend
> to
> > spec and implement the "Strongly desired options" listed on the Whatwg
> > proposal page. We would appreciate comments and suggestions on the
> > proposal.
> >
> > Any thoughts or objections before we start drafting a change to the spec?
>
> Exciting stuff!
>

Agreed -- thanks Xida for pushing this feature forward. It'll significantly
improve texture handling performance in WebGL applications.



>
> > 'none': Do not change orientation.
>
> Does this mean "disregard image orientation metadata" or does it mean "do
> not change orientation from what the metadata specifies"?
>
> > enum colorspaceConversion, default = 'default'
>
> Is there any precedent on the web platform (or in OpenGL APIs, perhaps?)
> for whether "colorspace" is one word or two words? If two words, this
> should be colorSpaceConversion. Happy to defer to your expertise here.
>

The WebGL spec has been using a single word for this, but two words are
probably more correct.



>
> > 'default': Implementation-specific behavior, possibly optimized for the
> implementation's graphics framework.
>
> This is a bit unfortunate as an interop hazard. I suppose it's no worse
> than today though. Has this been discussed in the past? Apologies, I was
> only able to skim the previous email thread.
>

This provides the functionality of the existing
UNPACK_COLORSPACE_CONVERSION_WEBGL flag in the WebGL spec. By default
images uploaded to WebGL are decoded "however the browser does it" -- which
isn't tightly specified -- but most WebGL apps that draw images just do
this and get correct results.

However, in special cases and for sophisticated apps, it's absolutely
necessary to get the raw data that's in the image, which is why there has
to be a switch to turn it off.
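The relationship between the proposed option and the existing WebGL flag can be sketched as follows (the option name uses the two-word spelling discussed above, and the 'none' value is taken from the wiki proposal; treat both as assumptions, and 'texture.png' as a hypothetical URL):

```javascript
// Hedged sketch: decode an image without color space conversion so the
// raw pixel data in the file is preserved, then upload it to WebGL.
const blob = await (await fetch('texture.png')).blob();
const bitmap = await createImageBitmap(blob, { colorSpaceConversion: 'none' });
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, bitmap);

// The existing WebGL switch this mirrors, for uploads of <img> elements:
gl.pixelStorei(gl.UNPACK_COLORSPACE_CONVERSION_WEBGL, gl.NONE);
```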

-Ken




>
> ---
>
> This comment is on something outside the "strongly desired features"
> section so probably not relevant to your immediate work. Just wanted to
> mention it.
>
> > DOMString? crossOrigin
>
> In new JavaScript-only APIs we've made the decision to move away from the
> potentially-confusing HTML style crossOrigin enums in favor of the
> RequestCredentials enum used by Fetch:
> https://fetch.spec.whatwg.org/#requestcredentials. You can see this in
> e.g. https://github.com/whatwg/html/pull/608 where I chose the same
> initial crossOrigin design and Anne convinced me to move to credentials. I
> imagine we'll continue to use crossorigin="" and corresponding reflected
> crossOrigin IDL attributes for any HTML elements, but for JS-only APIs
> RequestCredentials is the way to go.
>


Re: [whatwg] Reviving ImageBitmap options: Intend to spec and implement

2016-02-10 Thread Kenneth Russell
On Wed, Feb 10, 2016 at 12:06 PM, Justin Novosad  wrote:

> On Wed, Feb 10, 2016 at 2:29 PM, Boris Zbarsky  wrote:
>
> > On 2/10/16 1:25 PM, Domenic Denicola wrote:
> >
> >> In new JavaScript-only APIs we've made the decision to move away from
> the
> >> potentially-confusing HTML style crossOrigin enums in favor of the
> >> RequestCredentials enum used by Fetch:
> >> https://fetch.spec.whatwg.org/#requestcredentials. You can see this in
> >> e.g. https://github.com/whatwg/html/pull/608 where I chose the same
> >> initial crossOrigin design and Anne convinced me to move to
> credentials. I
> >> imagine we'll continue to use crossorigin="" and corresponding reflected
> >> crossOrigin IDL attributes for any HTML elements, but for JS-only APIs
> >> RequestCredentials is the way to go.
> >>
> >
> > That's not _quite_ the same thing.  The HTML setup basically lets you
> > specify one of:
> >
> > 1)  No CORS (attr not set).
> > 2)  CORS, RequestCredentials == "include" (crossorigin="use-credentials")
> > 3)  CORS, RequestCredentials == "same-origin" (any other attr value)
> >
> > Note that in the pull request you reference your default was not actually
> > any of those situations, as far as I can tell, so I agree that using
> > "crossOrigin" there was not a good fit.  But for the ImageBitmap case, we
> > do want to support case 1 above, at least assuming tainted ImageBitmap
> is a
> > thing.  If it's not, then I agree that just a RequestCredentials value is
> > probably sufficient and all the loads involved should use CORS.
> >
>
> Tainted ImageBitmaps is a thing: https://github.com/whatwg/html/pull/385
> That said, this is not one of the options we intend to move forward with in
> the near term (It is not in the highly desired list). It is not clear that
> the feature is even needed. I am no CORS expert, but I think you can get an
> untainted ImageBitmap from a cross-origin image by using: XHR (with
> credentials) -> blob -> createImageBitmap
>

Agreed -- I don't think the ImageBitmap constructor from URL, or the
crossOrigin ImageBitmap constructor argument, are needed any more. It would
be good to fully implement ImageBitmap (including asynchronous decoding),
plus some of the WebGL-related constructor arguments, and verify that the
texture uploads are much faster. At that point let's delete the crossOrigin
ImageBitmap constructor argument, and the proposal for creating
ImageBitmaps from URLs.
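The pipeline Justin describes above can be sketched like this (using fetch rather than XHR for brevity; the URL is hypothetical):

```javascript
// Hedged sketch: obtain an untainted ImageBitmap from a cross-origin
// image by fetching it with CORS credentials and decoding the Blob,
// instead of a crossOrigin argument on an ImageBitmap constructor.
const response = await fetch('https://other.example/image.png', {
  mode: 'cors',
  credentials: 'include',
});
const blob = await response.blob();
const bitmap = await createImageBitmap(blob);  // untainted, usable in WebGL
```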

-Ken




>
>
> > That said, the actual phrasing around "crossOrigin" in
> > https://wiki.whatwg.org/wiki/ImageBitmap_Options doesn't make much sense
> > (e.g. it in fact is not a "CORS settings attribute" because it's not a
> > markup attribute at all).  But we can wordsmith it better once we agree
> on
> > what we actually want it to do.
> >
> > -Boris
> >
>


Re: [whatwg] Opinions on window.console = "foo", and other oddities of window.console

2016-02-08 Thread Kenneth Russell
The test harnesses used by the working group which I chair rely on being
able to replace window.console. I'd like to see it continue to be mutable.
Thanks.
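The pattern such test harnesses rely on is roughly the following (a minimal sketch; `globalThis.console` stands in for `window.console` so it also applies in workers):

```javascript
// Minimal sketch: replace the console object with a recording wrapper,
// then restore the original afterwards -- this only works while the
// console property remains writable.
const originalConsole = globalThis.console;
const recorded = [];

globalThis.console = {
  ...originalConsole,
  log(...args) {
    recorded.push(args.join(' '));   // capture for later inspection
    originalConsole.log(...args);    // and still forward to the real one
  },
};

console.log('harness', 'running');
globalThis.console = originalConsole;  // restore on teardown
// recorded now holds ['harness running']
```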


On Mon, Feb 8, 2016 at 3:25 PM, Domenic Denicola  wrote:

> As you may know, we now have a standard for the console object:
> http://console.spec.whatwg.org/
>
> One of the first issues we encountered while investigating console
> behavior is https://github.com/whatwg/console/issues/1, which is that
> browsers currently allow setting `window.console` (and presumably
> `self.console` in workers) to any value. This is pretty weird, but every
> tested browser (Firefox, Chrome, Edge, and Safari) follows it.
>
> Probably the default course of action here is to just spec this. But we
> wanted to check to see if any implementers thought this was weird, and
> wanted to experiment with e.g. removing the setter.
>
> Even if we do go this route, we have another weirdness:
>
> - Firefox has an accessor descriptor (get/set), Chrome, Safari, and Edge
> have a data descriptor (value)
> - Firefox, Chrome, and Edge are enumerable; Safari is not enumerable
>
> I guess Chrome and Safari's data descriptors are not surprising, given
> that their bindings infrastructure for Window isn't fully Web
> IDL-compatible yet. But Edge is pretty surprising. Travis, do you have any
> info on that?
>
> So I think the plan of record is: `attribute any console`, with prose
> describing how the getter returns "the window's console object" which is
> initially set to a new instance of Console, but the setter can change it to
> any value. This means accessor descriptors and enumerable, since that's how
> IDL works. If anyone thinks this is bad, let us know. Otherwise the
> standard will be updated any day now in that direction.
>


Re: [whatwg] OffscreenCanvas from a worker and synchronization with DOM/CSS

2016-01-26 Thread Kenneth Russell
That's right. From discussions with Mozilla I don't think that code path's
enabled in their implementation yet, but it will be (as well as in Chrome's
future implementation of the same spec).


On Sat, Jan 23, 2016 at 12:40 PM, Gregg Tavares  wrote:

> Never mind me. For whatever reason my mind blanked out.
>
> You can transfer to the main thread and then apply to a canvas.
>


Re: [whatwg] High-density canvases

2015-04-15 Thread Kenneth Russell
On Wed, Apr 15, 2015 at 1:46 PM, Dean Jackson d...@apple.com wrote:

 On 16 Apr 2015, at 6:42 am, Elliott Sprehn espr...@chromium.org wrote:

 3. Accessing rendered pixel size is layout-inducing. To avoid layout

 thrashing, we should consider making this an asynchronous getter (e.g.
 asyncGetBoundingClientRect). This would also prevent renderedsizechanged
 events from firing from within the evaluation of the
 renderedPixelWidth/Height
 attributes, which is weird.


 renderedsizechanged feels like it's related to the general per element
 resize event problem. It'd be unfortunate to add an event to the browser
 specifically for canvas instead of solving the general problem. In that case
 authors will start resorting to hacks where they insert canvases into the
 page to listen for resizes (we've seen this with other events like
 overflowchanged).

 Perhaps we should add per element resize events and spec that they fire
 after the layout, but before the paint, during normal frame creation. That
 does mean you can cause an infinite loop if your resize handler keeps
 mutating the page, but there's so many other ways to cause that already.


 +1

If that's a viable option let's do that. It would solve many
longstanding problems.

-Ken


Re: [whatwg] Canvas image to blob/dataurl within Worker

2015-04-14 Thread Kenneth Russell
On Sun, Apr 12, 2015 at 2:41 PM, Robert O'Callahan rob...@ocallahan.org wrote:
 On Sat, Apr 11, 2015 at 1:49 PM, Kenneth Russell k...@google.com wrote:

 Suppose src="myimage.png" is set on an image element and then, while
 it is downloading, an ImageBitmap is transferred into it.


 This should behave as if src was removed just before the ImageBitmap is
 transferred into it.

 In spec terms, I think we'd define a new state flag has an ImageBitmap.
 Transferring an ImageBitmap into an img element would set that flag and
 trigger update the image data. After step 2 of update the image data,
 when the flag is set, we'd synchronously do what update the image data
 would do with no source: clear last selected source (step 4), and do steps
 9.1 and 9.2 (but never fire the error event). Then, transferring the
 ImageBitmap would do the load complete work of step 14: set the
 completely available state, set the image data, and add the image to the
 list of available images.

 Note that this is also exactly what we'd need to do to support srcObject on
 img, which would be nice to have so you can render a Blob with an
 HTMLImageElement without dealing with the annoying createObjectURL lifetime
 issues.

 Thus:

 1. What is displayed in the webpage as "myimage.png" is downloading?


 As normal before an ImageBitmap is transferred, the ImageBitmap afterward.

 2. Do the downloaded bits overwrite what you transferred or do we stop
 the download?


 Stop the download.


 3. Are onload and onerror events fired? This question applies both to
 the in-progress download and to the transferred-in ImageBitmap.


 No.


 4. What should the 'complete' property and 'currentSrc' attribute reflect?


 True and the empty string respectively.


 5. Can the developer go back to having the img element contain what
 was set on 'src' and not what was transferred in?


 As written, yes, by performing another relevant mutation to trigger update
 the image data again.

 srcset and the picture element make the situation even more complex.


 I think the above covers it.

Thanks for these answers.

They sound reasonable to me from a complexity standpoint, though the
needed updates to the img tag's spec sound a little involved. I have
another concern which is mentioned below.


 In comparison, displaying ImageBitmaps with a custom canvas context
 offers simple semantics and avoids other problems raised on this
 thread like requiring a layout, or firing events, upon receipt of a new
 ImageBitmap.


 AFAIK there aren't any issues with events. The issue with layout is simply
 whether or not we want the element receiving an ImageBitmap to have the
 expected intrinsic size. AFAICT for unnecessary layouts to occur would
 require repeated ImageBitmap transfers into the same HTMLImageElement, where
 the ImageBitmaps have varying sizes and the element has auto 'width' or
 'height', but the author actually intends that the images all be scaled or
 cropped to the same size, and doesn't notice this is not happening. Is there
 a more realistic scenario?

If repeatedly transferring ImageBitmaps of the same size into the same
img won't cause repeated re-layouts, that alleviates my concern
about efficiency of using images as the output path. I expect that a
single OffscreenCanvas will be used to produce output to multiple
areas on the page (whether into images or canvases) but that it will
be sized large enough so that all of the ImageBitmaps it produces will
cover the largest of those areas, avoiding repeatedly resizing the
OffscreenCanvas.
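The sizing strategy described above can be sketched like this (API shapes follow later drafts of the OffscreenCanvas proposal and should be read as assumptions):

```javascript
// Hedged sketch: one OffscreenCanvas sized to cover the largest output
// region, producing ImageBitmaps for several on-page destinations
// without repeatedly resizing the OffscreenCanvas itself.
const offscreen = new OffscreenCanvas(1024, 1024);  // covers all regions
const gl = offscreen.getContext('webgl');

function renderRegion(width, height) {
  gl.viewport(0, 0, width, height);
  // ... draw the scene for this output region ...
  return offscreen.transferToImageBitmap();  // method name assumed
}
```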

There is one other problem which has come up repeatedly in canvas
applications: needing to be able to accurately measure the number of
device pixels covered by a canvas, so pixel-accurate rendering can be
done. https://wiki.whatwg.org/wiki/CanvasRenderedPixelSize addresses
this and it's currently being implemented in Chromium. There is no
such mechanism for images. It would be necessary to understand exactly
how many device pixels the output image is covering in the document so
that the OffscreenCanvas can be sized appropriately. Using a canvas
element for the display of these ImageBitmaps avoids this problem. It
also makes the API more symmetric; a canvas-like object produces the
ImageBitmap, and a canvas element is used to view it. Do you have any
suggestion for how this issue would be solved with img?
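The canvas-side mechanism referred to above would be used roughly as follows (attribute and event names are taken from the CanvasRenderedPixelSize wiki page; exact shapes are assumptions):

```javascript
// Hedged sketch: resize the backing store to exactly cover the device
// pixels the canvas element occupies, enabling pixel-accurate rendering.
canvas.addEventListener('renderedsizechanged', () => {
  canvas.width = canvas.renderedPixelWidth;
  canvas.height = canvas.renderedPixelHeight;
  draw();  // hypothetical redraw hook
});
```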

-Ken


Re: [whatwg] Canvas image to blob/dataurl within Worker

2015-04-10 Thread Kenneth Russell
On Thu, Apr 9, 2015 at 6:25 PM, Tab Atkins Jr. jackalm...@gmail.com wrote:
 On Thu, Apr 9, 2015 at 5:23 PM, Kenneth Russell k...@google.com wrote:
 1. An image element that didn't have a width or height explicitly
 specified in the markup has an ImageBitmap transferred in. Will its
 width and height attributes change? Will layout have to occur on the
 page?

 Its width and height *attributes* won't change, but its *intrinsic*
 width and height will, to match the bitmap and density.  (Btw, the
 ImageBitmap feature needs to have a notion of density.)

I think the idea of introducing density to ImageBitmap needs a
completely separate thread. I don't understand all the ramifications,
but it looks to me like the fact that ImageBitmap's width and height
are specified in CSS pixels makes it impossible for users' code to
compute the exact number of pixels in the ImageBitmap. This is still a
problem for developers using canvas and
https://wiki.whatwg.org/wiki/CanvasRenderedPixelSize is intended to
solve it.
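To make the concern concrete, a small numeric sketch (the values are assumed examples):

```javascript
// Why CSS-pixel dimensions can't recover the exact device-pixel size:
// the product with devicePixelRatio may be non-integral, and the
// engine's snapping choice is not observable from script.
const devicePixelRatio = 1.5;                 // e.g. a 150% display scale

const deviceWidthA = 300 * devicePixelRatio;  // 450 -- unambiguous
const deviceWidthB = 301 * devicePixelRatio;  // 451.5 -- snapped to 451 or
                                              // 452 at render time; the
                                              // page cannot tell which
console.log(deviceWidthA, deviceWidthB);
```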

 Yes, layout occurs, assuming the size changed.

 2. The same image element has another ImageBitmap transferred in which
 has a different width and height. Do the image's width and height
 change? Will layout have to occur on the page?

 Yes and yes, assuming the size changed.

This is problematic. It's essential that displaying a newly produced
ImageBitmap be a cheap operation. Layout is expensive.


 When an image's src is set to a URL and its width and height
 attributes aren't set, the page is laid out again when the image is
 loaded. For acceptable performance it's important that this not happen
 when displaying a new ImageBitmap. Using a new canvas context type
 solves this problem. The OffscreenCanvas proposal defines the
 ImageBitmap as being rendered at the upper left corner of the canvas's
 box with no scaling applied, and clipped to the box. It's not as
 flexible as having the object-fit and object-position CSS properties
 available, but will give developers explicit control over when they
 want layout to happen, and still let them display their content as
 they wish.

 Why not just use object-fit/position?  I think we support those.  No
 need to make new weird behavior when you can just opt into the desired
 behavior with existing standardized stuff.

Can the right settings of object-fit and object-position avoid the
layouts in the scenarios described above?
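For reference, the settings that come closest to the OffscreenCanvas proposal's described behavior would look something like this (a hedged sketch; whether they also avoid relayout when the intrinsic size changes is exactly the open question):

```css
/* Pin the bitmap to the top-left of the box at 1:1 scale, clipped to
   the box -- approximating "rendered at the upper left corner with no
   scaling applied". The class name is hypothetical. */
img.bitmap-output {
  object-fit: none;
  object-position: left top;
}
```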

-Ken


Re: [whatwg] Canvas image to blob/dataurl within Worker

2015-04-10 Thread Kenneth Russell
On Fri, Apr 10, 2015 at 2:33 PM, Robert O'Callahan rob...@ocallahan.org wrote:
 On Sat, Apr 11, 2015 at 2:18 AM, Justin Novosad ju...@google.com wrote:

 Riddle me this: What would be the value of an HTMLImageElement's src
 attribute
 after an ImageBitmap is transferred in? A data URL? What if the
 ImageBitmap
 was sourced from an actual resource with a URL, would we pipe that
 through?  In cases where there is no tractable URL (ex: ImageBitmap was
 grabbed from a tainted canvas), then what?


 We have the same issue for HTMLMediaElement.srcObject:
 http://dev.w3.org/2011/webrtc/editor/archives/20140619/getusermedia.html#widl-HTMLMediaElement-srcObject
 The current approach is to leave 'src' unchanged, and the spec says that if
 there is a current srcObject (or ImageBitmap in our case), render that and
 ignore 'src'.

Here are some more questions, some of which are from a colleague. I'm
not trying to claim credit for thinking of them, but want to make it
clear that there are several more ambiguities with attempting to use
an image element to display ImageBitmaps.

Suppose src="myimage.png" is set on an image element and then, while
it is downloading, an ImageBitmap is transferred into it.

1. What is displayed in the webpage as "myimage.png" is downloading?
2. Do the downloaded bits overwrite what you transferred or do we stop
the download?
3. Are onload and onerror events fired? This question applies both to
the in-progress download and to the transferred-in ImageBitmap.
4. What should the 'complete' property and 'currentSrc' attribute reflect?
5. Can the developer go back to having the img element contain what
was set on 'src' and not what was transferred in?

srcset and the picture element make the situation even more complex.

In comparison, displaying ImageBitmaps with a custom canvas context
offers simple semantics and avoids other problems raised on this
thread like requiring a layout, or firing events, upon receipt of a new
ImageBitmap.

-Ken


Re: [whatwg] Canvas image to blob/dataurl within Worker

2015-04-08 Thread Kenneth Russell
On Tue, Apr 7, 2015 at 6:42 PM, Robert O'Callahan rob...@ocallahan.org wrote:
 On Wed, Apr 8, 2015 at 11:41 AM, Kenneth Russell k...@google.com wrote:

 I apologize for taking so long to update this proposal, but it's now
 in a reasonable state:
 https://wiki.whatwg.org/wiki/OffscreenCanvas

 (Renamed per feedback from Anne -- thanks.)

 Please offer your feedback. Multiple browser implementers agreed on
 this form of the API at a recent face-to-face meeting of the WebGL
 working group, and the proposal should be able to address multiple use
 cases. I'm very interested in prototyping it to see if it can offer
 better parallelism.


 It seems fine.

Great.

 As far as I can tell, the only significant difference between
 this and WorkerCanvas is using an HTMLCanvasElement bitmaprenderer as a
 zero-copy way to render an ImageBitmap, instead of HTMLImageElement.

That's the main difference. Another is that OffscreenCanvas can be
directly instantiated from both the main thread and workers.

Can you explain what the problem was with using HTMLImageElement?

One browser implementer pointed out that HTMLImageElement has several
complex internal states related to loading, and they do not want to
conflate that with the display of ImageBitmaps. I've documented this
in the OffscreenCanvas proposal. (I originally thought Canvas had more
complex internal state, but now think that it can be easier to define
new behavior against a new canvas context than img.)

-Ken


 Rob
 --
 oIo otoeololo oyooouo otohoaoto oaonoyooonoeo owohooo oioso oaonogoroyo
 owoiotoho oao oboroootohoeoro oooro osoiosotoeoro owoiololo oboeo
 osouobojoeocoto otooo ojouodogomoeonoto.o oAogoaoiono,o oaonoyooonoeo
 owohooo
 osoaoyoso otooo oao oboroootohoeoro oooro osoiosotoeoro,o o‘oRoaocoao,o’o
 oioso
 oaonosowoeoroaoboloeo otooo otohoeo ocooouoroto.o oAonodo oaonoyooonoeo
 owohooo
 osoaoyoso,o o‘oYooouo ofolo!o’o owoiololo oboeo oiono odoaonogoeoro
 ooofo
 otohoeo ofoioroeo ooofo ohoeololo.


Re: [whatwg] Canvas image to blob/dataurl within Worker

2015-04-08 Thread Kenneth Russell
On Wed, Apr 8, 2015 at 8:46 AM, Ashley Gullen ash...@scirra.com wrote:
 (sorry for double email) Just one question about how requestAnimationFrame
 is expected to work. Could we get rAF added to workers? Would it be able to
 simply fire whenever it does on the main thread, as if it postMessage'd to
 the worker every time it was called but without the extra postMessage
 latency? To do that I guess workers need to be associated with a window. For
 normal workers this seems fine but if OffscreenCanvas is expected to work in
 shared or service workers this is a little trickier to define. Perhaps
 requestAnimationFrame could be a method on OffscreenCanvas and it fires
 based on the window the HTMLCanvasElement it came from is currently in? Then
 you have a convenient way to synchronise rendering with the window it will
 ultimately be displayed in, without having to know in advance which window
 that is.

A usable requestAnimationFrame will be needed in workers in order to
reliably do animation. I'm hoping that that can be a somewhat
orthogonal proposal to this one. There are two ways to use an
OffscreenCanvas and only one of them causes rendering results to be
pushed implicitly to an existing canvas on the main thread. (The other
way involves directly instantiating an OffscreenCanvas and using
postMessage to do some work on the main thread to display its
results.) requestAnimationFrame is typically triggered on the main
thread as early as possible after the previous frame's vsync interval,
and I'd expect that other workers' requestAnimationFrame callbacks
should be triggered in the same way. Dedicated workers are always
associated with a particular page which is displayed in a window, so
the connection to an on-screen window should always be there.
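An animation loop in a worker would then look roughly like this (a sketch assuming requestAnimationFrame becomes available as a worker global, which is not yet specified at the time of this thread):

```javascript
// Hedged sketch: a vsync-driven render loop inside a dedicated worker.
const offscreen = new OffscreenCanvas(640, 480);
const gl = offscreen.getContext('webgl');

function frame(time) {
  // ... render the next frame with gl ...
  requestAnimationFrame(frame);   // assumed worker-global rAF
}
requestAnimationFrame(frame);
```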

-Ken


 Ashley



 On 8 April 2015 at 16:36, Ashley Gullen ash...@scirra.com wrote:

 This looks like it will cover running an HTML5 game engine from a worker
 very nicely and with little performance overhead. Good stuff!

 Ashley


 On 8 April 2015 at 00:41, Kenneth Russell k...@google.com wrote:

 On Wed, Mar 25, 2015 at 6:41 PM, Kenneth Russell k...@google.com wrote:
  On Wed, Mar 25, 2015 at 1:22 PM, Robert O'Callahan
  rob...@ocallahan.org wrote:
  On Wed, Mar 25, 2015 at 11:41 PM, Anne van Kesteren ann...@annevk.nl
  wrote:
 
  On Fri, Mar 20, 2015 at 11:15 PM, Robert O'Callahan
  rob...@ocallahan.org wrote:
   My understanding is that the current consensus proposal for canvas
   in
   Workers is not what's in the spec, but this:
   https://wiki.whatwg.org/wiki/WorkerCanvas
   See Canvas in Workers threads from October 2013 for the
   discussion.
   svn
   is failing me but the CanvasProxy proposal in the spec definitely
   predates
   those threads.
  
   Ian, unless I'm wrong, it would be helpful to remove the
   CanvasProxy
   stuff
   from the spec to avoid confusion.
  
   That proposal contains WorkerCanvas.toBlob, which needs to be
   updated to
   use promises.
 
  There's also https://wiki.whatwg.org/wiki/WorkerCanvas2 it seems. It
  would be interesting to know what the latest on this is.
 
 
  Right now that appears to be just a copy of WorkerCanvas with some
  boilerplate text added. I assume someone's working on it and will
  mention it
  on this list if/when they're ready to discuss it :-).
 
  Yes, apologies for letting it sit there in an incomplete state.
  Recently feedback from more browser implementers was gathered about
  the WorkerCanvas proposal. I will assemble it into WorkerCanvas2 and
  follow up on this thread in a day or two.

 I apologize for taking so long to update this proposal, but it's now
 in a reasonable state:
 https://wiki.whatwg.org/wiki/OffscreenCanvas

 (Renamed per feedback from Anne -- thanks.)

 Please offer your feedback. Multiple browser implementers agreed on
 this form of the API at a recent face-to-face meeting of the WebGL
 working group, and the proposal should be able to address multiple use
 cases. I'm very interested in prototyping it to see if it can offer
 better parallelism.

 -Ken



  -Ken
 
 
  Rob
  --





Re: [whatwg] Canvas image to blob/dataurl within Worker

2015-04-07 Thread Kenneth Russell
On Wed, Mar 25, 2015 at 6:41 PM, Kenneth Russell k...@google.com wrote:
 On Wed, Mar 25, 2015 at 1:22 PM, Robert O'Callahan rob...@ocallahan.org 
 wrote:
 On Wed, Mar 25, 2015 at 11:41 PM, Anne van Kesteren ann...@annevk.nl
 wrote:

 On Fri, Mar 20, 2015 at 11:15 PM, Robert O'Callahan
 rob...@ocallahan.org wrote:
  My understanding is that the current consensus proposal for canvas in
  Workers is not what's in the spec, but this:
  https://wiki.whatwg.org/wiki/WorkerCanvas
  See Canvas in Workers threads from October 2013 for the discussion.
  svn
  is failing me but the CanvasProxy proposal in the spec definitely
  predates
  those threads.
 
  Ian, unless I'm wrong, it would be helpful to remove the CanvasProxy
  stuff
  from the spec to avoid confusion.
 
  That proposal contains WorkerCanvas.toBlob, which needs to be updated to
  use promises.

 There's also https://wiki.whatwg.org/wiki/WorkerCanvas2 it seems. It
 would be interesting to know what the latest on this is.


 Right now that appears to be just a copy of WorkerCanvas with some
 boilerplate text added. I assume someone's working on it and will mention it
 on this list if/when they're ready to discuss it :-).

 Yes, apologies for letting it sit there in an incomplete state.
 Recently feedback from more browser implementers was gathered about
 the WorkerCanvas proposal. I will assemble it into WorkerCanvas2 and
 follow up on this thread in a day or two.

I apologize for taking so long to update this proposal, but it's now
in a reasonable state:
https://wiki.whatwg.org/wiki/OffscreenCanvas

(Renamed per feedback from Anne -- thanks.)

Please offer your feedback. Multiple browser implementers agreed on
this form of the API at a recent face-to-face meeting of the WebGL
working group, and the proposal should be able to address multiple use
cases. I'm very interested in prototyping it to see if it can offer
better parallelism.

-Ken



 -Ken


 Rob
 --


Re: [whatwg] Canvas image to blob/dataurl within Worker

2015-03-25 Thread Kenneth Russell
On Wed, Mar 25, 2015 at 1:22 PM, Robert O'Callahan rob...@ocallahan.org wrote:
 On Wed, Mar 25, 2015 at 11:41 PM, Anne van Kesteren ann...@annevk.nl
 wrote:

 On Fri, Mar 20, 2015 at 11:15 PM, Robert O'Callahan
 rob...@ocallahan.org wrote:
  My understanding is that the current consensus proposal for canvas in
  Workers is not what's in the spec, but this:
  https://wiki.whatwg.org/wiki/WorkerCanvas
  See Canvas in Workers threads from October 2013 for the discussion.
  svn
  is failing me but the CanvasProxy proposal in the spec definitely
  predates
  those threads.
 
  Ian, unless I'm wrong, it would be helpful to remove the CanvasProxy
  stuff
  from the spec to avoid confusion.
 
  That proposal contains WorkerCanvas.toBlob, which needs to be updated to
  use promises.

 There's also https://wiki.whatwg.org/wiki/WorkerCanvas2 it seems. It
 would be interesting to know what the latest on this is.


 Right now that appears to be just a copy of WorkerCanvas with some
 boilerplate text added. I assume someone's working on it and will mention it
 on this list if/when they're ready to discuss it :-).

Yes, apologies for letting it sit there in an incomplete state.
Recently feedback from more browser implementers was gathered about
the WorkerCanvas proposal. I will assemble it into WorkerCanvas2 and
follow up on this thread in a day or two.

-Ken


 Rob
 --


Re: [whatwg] High-density canvases

2014-06-24 Thread Kenneth Russell
On Mon, Jun 23, 2014 at 6:06 PM, Robert O'Callahan rob...@ocallahan.org wrote:
 On Tue, Jun 24, 2014 at 12:27 PM, Robert O'Callahan rob...@ocallahan.org
 wrote:

 I'll do that now.


 Done.
 http://wiki.whatwg.org/wiki/CanvasRenderedPixelSize

Fantastic. Thanks Rob. That looks great. Filed crbug.com/388532 about
implementing them in Chrome.

-Ken


Re: [whatwg] High-density canvases

2014-06-23 Thread Kenneth Russell
On Thu, Jun 19, 2014 at 4:20 PM, Robert O'Callahan rob...@ocallahan.org wrote:
 On Fri, Jun 20, 2014 at 6:06 AM, Kenneth Russell k...@google.com wrote:

 It is a little unfortunate that a canvas-specific solution is needed.
 Is it known whether document.getBoxQuads solves this problem in
 Firefox?


 It doesn't.

 Gecko (and I assume other engines) typically snaps CSS box edges to device
 pixel edges at render time, so that CSS backgrounds and borders look crisp.
 So if a canvas has a CSS background exactly filling its content box, our
 snapping of the content box is what determines the ideal device pixel size
 of the canvas buffer.

 Authors can estimate the canvas device pixel size using getBoxQuads to get
 the canvas content box in fractional CSS pixels and multiplying by
 devicePixelRatio (assuming no CSS transforms or other effects present). But
 if that estimate results in a non-integral device pixel size, there's no way
 for authors to know how we will snap that to an integral size.

 We could move these attributes up the element hierarchy, or better still to
 a new generic API that works on all elements (e.g. some variant of
 getBoxQuads). (CSS fragmentation might split an element into multiple boxes
 with different sizes.) I don't know of any good use-cases for that, but
 maybe there are some?

It's hard to predict. A more general API would be better than a
canvas-specific one, assuming it solves the problem. getBoxQuads can
return different types of CSS boxes (content, padding, border, margin)
-- will it be obvious to web developers which to use, and is it more
likely all implementations will handle them all correctly rather than
a couple of new properties on the canvas?

Could you suggest a name for the new API? getBoxQuadsInDevicePixels?
getDevicePixelBoxQuads?

-Ken
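The estimation-vs-snapping gap Rob describes can be sketched as two small pure functions. This is an illustrative sketch only; real engines may snap differently, which is exactly the interop problem under discussion:

```javascript
// Author-side estimate: fractional CSS width times devicePixelRatio,
// rounded. This is the only computation available to page script today.
function estimateDevicePixelWidth(cssWidth, dpr) {
  return Math.round(cssWidth * dpr);
}

// Engine-side snapping typically rounds the box *edges*, so the result
// also depends on where the box sits in device-pixel space.
function snapBySides(cssLeft, cssWidth, dpr) {
  const left = Math.round(cssLeft * dpr);
  const right = Math.round((cssLeft + cssWidth) * dpr);
  return right - left;
}
```

For a box at CSS x = 0.3 with width 100.25 at dpr 2, the estimate gives 201 device pixels while edge snapping gives 200 -- the author has no way to predict which integral size the engine will pick.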


Re: [whatwg] High-density canvases

2014-06-23 Thread Kenneth Russell
On Mon, Jun 23, 2014 at 4:25 PM, Robert O'Callahan rob...@ocallahan.org wrote:
 On Tue, Jun 24, 2014 at 8:54 AM, Kenneth Russell k...@google.com wrote:

 It's hard to predict. A more general API would be better than a
 canvas-specific one, assuming it solves the problem. getBoxQuads can
 return different types of CSS boxes (content, padding, border, margin)
 -- will it be obvious to web developers which to use, and is it more
 likely all implementations will handle them all correctly rather than
 a couple of new properties on the canvas?

 Could you suggest a name for the new API? getBoxQuadsInDevicePixels?
 getDevicePixelBoxQuads?


 AFAIK all we need from this API is device pixel sizes. Extending getBoxQuads
 with an option to return geometry in device pixels doesn't really make sense
 to me since it doesn't make sense to ask what is element A's quad relative
 to element B in device pixels. So a general, dedicated API would probably
 look something like:
   interface DOMSize {
 readonly attribute unrestricted double width;
 readonly attribute unrestricted double height;
   };
   dictionary BoxSizeOptions {
 CSSBoxType box = "border";
   };
   sequence<DOMSize> getBoxDevicePixelSizes(optional BoxSizeOptions options);

 So renderedPixelWidth/Height on HTMLCanvasElement is definitely simpler. I
 have no strong opinion about which way to go, but I lean towards the
 attributes unless someone has a good use-case for the general API. We could
 add the general API later if we need to, the duplication would not be a big
 deal.

If there's no concern about potential duplicated functionality then
let's add the attributes to the canvas element. They'll be easier for
developers to understand and easier to verify as obviously correct.

How should we proceed? Would you add an entry to
http://wiki.whatwg.org/wiki/Category:Proposals ?
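Assuming the attribute names floated in the Mozilla bug (renderedPixelWidth/renderedPixelHeight -- hypothetical, nothing is specced yet), author code consuming them might look like this sketch; a plain object stands in for the canvas element:

```javascript
// Resize the canvas bitmap to the size the UA reports it will actually
// render at. Returns true when the caller must repaint (setting width
// or height clears a canvas). `renderedPixelWidth/Height` are the
// proposed, not-yet-specced attributes under discussion.
function syncBitmapToRenderedSize(canvas) {
  const w = canvas.renderedPixelWidth;
  const h = canvas.renderedPixelHeight;
  if (canvas.width === w && canvas.height === h) return false;
  canvas.width = w;
  canvas.height = h;
  return true;
}
```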


Re: [whatwg] High-density canvases

2014-06-19 Thread Kenneth Russell
On Thu, Jun 19, 2014 at 7:54 AM, Stephen White senorbla...@chromium.org wrote:
 On Thu, Jun 12, 2014 at 11:42 PM, Robert O'Callahan rob...@ocallahan.org
 wrote:

 I think I'd rather not take control of canvas resizing away from
 applications, even opt-in. That leads to complexity such as extra API for
 slaving other canvases. I also think we'd be better off sticking to the
 invariant that the units of canvas coordinate space are single pixels in
 the canvas bitmap; I think that simplifies things for implementers and
 authors.


 I agree.


 Here's an alternative proposal which I think is a bit simpler and more
 flexible:
 Expose two new DOM attributes on HTMLCanvasElement:
 readonly attribute long preferredWidth;
 readonly attribute long preferredHeight;
 These attributes are the UA's suggested canvas size for optimizing output
 quality. It's basically what Ian's proposal would have set as the automatic
 size. We would also add a preferredsizechange event when those attributes
 change.

 Applications that want DPI-aware canvases would read those attributes and
 use them to set the size, just once if they only want to paint once, during
 each requestAnimationFrame callback for games and other animations, and in
 the preferredsizechange event handler if they are willing to paint multiple
 times but aren't actually animating. The application would be responsible
 for scaling its output to the new canvas coordinate space size. Giving the
 application control of the size change simplifies things for the browser
 and gives applications maximum flexibility, e.g. resizing ancillary
 resources as needed, or imposing constraints on the chosen size.
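The application-controlled flow described above might look like the following sketch. preferredWidth/preferredHeight are the proposed attributes, not shipped API, and the clamp illustrates "imposing constraints on the chosen size":

```javascript
// Apply the UA-suggested size at the top of each animation frame,
// clamped to an app-imposed limit. Returns true when ancillary
// resources should be rescaled and the frame fully repainted.
function applyPreferredSize(canvas, maxPixels) {
  const w = Math.min(canvas.preferredWidth, maxPixels);
  const h = Math.min(canvas.preferredHeight, maxPixels);
  if (canvas.width === w && canvas.height === h) return false;
  canvas.width = w;
  canvas.height = h;
  return true;
}
```

In a page this would run inside each requestAnimationFrame callback, or in the proposed preferredsizechange handler for apps that only repaint on demand.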


 I like this new proposal. It works for both the canvas and WebGL use cases,
 and it puts the app in control of behaviour in a way that makes sense for
 immediate-mode APIs.

 I assume that the size change event would fire:

 - on browser page zoom
 - on pinch-zoom
 - when a CSS animation (e.g., scale) changes the canvas size in CSS pixels

 For browsers that implement the latter two off the main thread, perhaps
 they should only fire at end-of-gesture or end-of-animation, to avoid the
 rendered size being out-of-sync with scaled size by the time the canvas
 gets composited.

 I agree with Mark that the names need work. How about something that
 incorporates device pixel in some way, to reflect that this is roughly
 dpr * css scale * size?

 devicePixelWidth
 widthInDevicePixels
 pixelExactWidth
 exactPixelWidth

widthInDevicePixels and exactPixelWidth both sound clear.
renderedPixelWidth per
https://bugzilla.mozilla.org/show_bug.cgi?id=1024493 also sounds like
a good option. It would be great to spec these and their associated
change event. I'd be interested in adding support to Chrome.

It is a little unfortunate that a canvas-specific solution is needed.
Is it known whether document.getBoxQuads solves this problem in
Firefox?

-Ken


 pixelWidth
 pixelRatioExactWidth
 unscaledWidth
 unscaledPixelWidth
 nativeWidth
 nativePixelWidth

 Stephen



 Rob
 --
 Jtehsauts  tshaei dS,o n Wohfy  Mdaon  yhoaus  eanuttehrotraiitny  eovni
 le atrhtohu gthot sf oirng iyvoeu rs ihnesa.rt sS?o  Whhei csha iids  teoa
 stiheer :p atroa lsyazye,d  'mYaonu,r  sGients  uapr,e  tfaokreg iyvoeunr,
 'm aotr  atnod  sgaoy ,h o'mGee.t  uTph eann dt hwea lmka'n?  gBoutt  uIp
 waanndt  wyeonut  thoo mken.o w



Re: [whatwg] WebGL and ImageBitmaps

2014-05-16 Thread Kenneth Russell
On Fri, May 16, 2014 at 9:27 AM, Ian Hickson i...@hixie.ch wrote:
 On Fri, 16 May 2014, Justin Novosad wrote:

 This is a longstanding issue with 2D canvas that several developers have
 complained about over the years.  The color space of the canvas backing
 store is currently unspecified.

 It's defined that the colour space must be the same as used for CSS and
 img, but yeah the final colour space is left up to the UA:

http://whatwg.org/html#color-spaces-and-color-correction


 Blink/WebKit uses output-referred color space, which is bad for some
 inter-op cases, but good for performance. Calling drawImage will produce
 inter-operable behavior as far as visual output is concerned, but
 getImageData will yield values that have the display color profile baked
 in.

 I'm not quite sure what you mean here. If you mean that you can set
 'fillStyle' to a colour, fillRect the canvas, and the get the data for
 that pixel and find that it's a different colour, then that's
 non-conforming. If you mean that you can take a bitmap image without
 colour-profile information and draw it on a canvas and then getImageData()
 will return different results, then again, that's non-conforming.

 If you mean that drawing an image with a color profile will result in
 getImageData() returning different colour pixels on different systems,
 then that's allowed, because the colour space of the canvas (and the rest
 of the Web platform, which must be the same colour space) is not defined.


 Some web developers have worked around this by reverse-engineering the
 client-specific canvas to sRGB colorspace transform by running a test
 pattern through drawImage+getImageData.  It is horrible that we are
 making devs resort to this.
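The workaround Justin mentions -- drawing a known test pattern, reading it back with getImageData, and inverting the hidden transform -- reduces to building a per-channel inverse lookup table. The canvas sampling itself only runs in a browser; in this sketch `measured[i]` stands in for the value getImageData returned when channel value `i` was drawn:

```javascript
// Build a 256-entry inverse table: for each observed output value, find
// the source value whose measured response is closest. Applying this
// table to getImageData results approximately undoes the display
// profile that the UA baked in.
function buildInverseLUT(measured) {
  const inverse = new Uint8Array(256);
  for (let out = 0; out < 256; out++) {
    let best = 0;
    let bestDist = Infinity;
    for (let src = 0; src < 256; src++) {
      const d = Math.abs(measured[src] - out);
      if (d < bestDist) {
        bestDist = d;
        best = src;
      }
    }
    inverse[out] = best;
  }
  return inverse;
}
```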

 I'm not really sure what this work around achieves. Can you elaborate?

 If you just want to do everything in sRGB, then putting all your images in
 sRGB but without giving color space information (or setting the option to
 'strip', if we add these createImageBitmap() options) would result in what
 you want, no?

 You'd have to manually (or on the server) convert images that were in
 other colour spaces, though.


 Adding a colorspace option to createImageBitmap is not enough IMHO. I
 think we need a more global color-management approach for canvas.

 If we need colour management, we need it for the Web as a whole, not just
 for canvas. So far, though, implementations have not been implementing the
 features that have been proposed, so...:

http://www.w3.org/TR/css3-color/#dropped

Recently there has been activity in some implementations to finally
solve the color management problem. I can't speak for Safari, but have
heard that as of Mac OS X 10.9, that implementation is rendering web
pages into the sRGB color space per the CSS spec, and converting to
the display's color profile on the way to the screen. I was told that
there was a concern there would be a compatibility impact with web
apps implicitly expecting to work in the display's color space, but
surprisingly, everything seemed to work during this transition.

In Chrome there is also work ongoing to handle multiple monitors with
different display color profiles. Noel Gordon (CC'd) is driving this
and the list of active issues can be seen at
https://code.google.com/p/chromium/issues/list?q=owner%3Anoel%40chromium.org
. It's likely the implementation will be done in stages: first
rendering images correctly when dragging windows from monitor to
monitor, and then rendering all web page content correctly.

In sum, while it has taken a while for implementations to finally
start tackling the color management problem, it's happening now.

-Ken


Re: [whatwg] WebGL and ImageBitmaps

2014-05-15 Thread Kenneth Russell
On Fri, May 9, 2014 at 2:58 PM, Ian Hickson i...@hixie.ch wrote:
 On Fri, 9 May 2014, Ian Hickson wrote:
 On Thu, 18 Jul 2013, Justin Novosad wrote:
 
  To help us iterate further, I've attempted to capture the essence of
  this thread on the whatwg wiki, using the problem solving template. I
  tried to capture the main ideas that we seem to agree on so far and I
  started to think about how to handle special cases.
 
  http://wiki.whatwg.org/wiki/ImageBitmap_Options

 Are the strongly desired options in the above wiki page still the
 options we should be adding?

 On the assumption that they are, I filed some bugs to cover this:

https://www.w3.org/Bugs/Public/show_bug.cgi?id=25642
createImageBitmap() options: image data orientation

https://www.w3.org/Bugs/Public/show_bug.cgi?id=25643
createImageBitmap() options: color space handling

https://www.w3.org/Bugs/Public/show_bug.cgi?id=25644
createImageBitmap() options: alpha channel handling

 As usual with such bugs, if there is interest amongst browser vendors in
 implementing any of these, please comment on the bug.

For the record, I've commented on the above bugs. I'd personally like
to see them all specified and implemented so that image loading for
the purpose of uploading to WebGL textures can be more efficient.

-Ken


Re: [whatwg] WebGL and ImageBitmaps

2014-05-15 Thread Kenneth Russell
On Thu, May 15, 2014 at 10:59 AM, Ian Hickson i...@hixie.ch wrote:
 On Wed, 14 May 2014, Kenneth Russell wrote:
 
  On the assumption that they are, I filed some bugs to cover this:
 
 https://www.w3.org/Bugs/Public/show_bug.cgi?id=25642
 createImageBitmap() options: image data orientation
 
 https://www.w3.org/Bugs/Public/show_bug.cgi?id=25643
 createImageBitmap() options: color space handling
 
 https://www.w3.org/Bugs/Public/show_bug.cgi?id=25644
 createImageBitmap() options: alpha channel handling
 
  As usual with such bugs, if there is interest amongst browser vendors
  in implementing any of these, please comment on the bug.

 For the record, I've commented on the above bugs. I'd personally like to
 see them all specified and implemented so that image loading for the
 purpose of uploading to WebGL textures can be more efficient.

 I see that your comment was that the WebGL WG would like them specified.

 Does your saying that mean that Chrome wants to implement these also? Just
 having a working group want them specified doesn't help get us to multiple
 implementations...

To clarify the Chrome team's desire to implement these I added another
comment to each bug, and pointed to the relevant portion of the WebGL
spec in response to the latest comment on
https://www.w3.org/Bugs/Public/show_bug.cgi?id=25644 .

-Ken


Re: [whatwg] Synchronizing Canvas updates in a worker to DOM changes in the UI thread

2013-10-22 Thread Kenneth Russell
On Tue, Oct 22, 2013 at 7:37 AM, Glenn Maynard gl...@zewt.org wrote:
 I just noticed that Canvas already has a Canvas.setContext() method

That's there in support of CanvasProxy, which is a flawed API and
which this entire discussion is aiming to rectify.

 , which
 seems to do exactly what I'm proposing, even down to clearing the backbuffer
 on attach.  The only difference is that it lives on Canvas instead of the
 context--the only reason I put it there in my proposal was because this only
 seemed useful for WebGL.  Given that, I think this proposal can be
 simplified down to just: put setContext on WorkerCanvas too.

Also, adding a present() method to Canvas.

 On Mon, Oct 21, 2013 at 9:03 PM, Kenneth Russell k...@google.com wrote:

 There are some unexpected consequences of the attachToCanvas API
 style. For example, what if two contexts use attachToCanvas to target
 the same canvas?


 I left out these details in my initial post in order to see what people
 thought at a high level before delving into details.

At a high level I prefer the form of the WorkerCanvas API, including
transferToImageBitmap and the ability to transfer an ImageBitmap into
an HTMLImageElement for viewing, and removing the CanvasProxy concept
and associated APIs. I'd like to focus my own efforts in writing a
full draft for WorkerCanvas under
http://wiki.whatwg.org/wiki/Category:Proposals .

-Ken


Re: [whatwg] Canvas in workers

2013-10-22 Thread Kenneth Russell
On Mon, Oct 21, 2013 at 8:03 AM, Glenn Maynard gl...@zewt.org wrote:
 On Sun, Oct 20, 2013 at 11:53 PM, Robert O'Callahan rob...@ocallahan.org
 wrote:

 Glenn, taking a step back for a bit, is there anything in
 https://wiki.mozilla.org/User:Roc/WorkerCanvasProposal that you would
 actually object to? IOW, is there anything there that you would think is
 completely superfluous to the platform if all your proposals were to be
 adopted as well?


 I have no objection to the overall change from CanvasProxy to WorkerCanvas,
 eg. the stuff in Kyle's original mail to the thread.  (Being able to settle
 on that is one of the reasons I've tried to detach discussion for the other
 use cases.)

 I'd only recommend leaving out transferToImageBitmap, srcObject and
 ImageBitmap.close() parts.  I do think that would be redundant with with
 present proposal.  They can always be added later, and leaving them out
 keeps the WorkerCanvas proposal itself focused.

Robert, please don't remove those APIs from your proposal. They're
needed in order to address known use cases, and splitting them off
will make it difficult to understand how they interact with
WorkerCanvas later.

I would like to suggest changing the 'srcObject' property on
HTMLImageElement into some sort of method taking ImageBitmap as
argument. If an ImageBitmap had been previously set against the
HTMLImageElement, the method would automatically call 'close()'
against it. Fundamentally there should be some easy way to repeatedly
update the HTMLImageElement without having to query its previous
ImageBitmap and call close() against it before setting srcObject.
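The set-and-auto-close behavior suggested here could be sketched as a small helper. All names are hypothetical -- neither srcObject taking an ImageBitmap nor this method exists yet -- and stub objects stand in for the element and bitmaps:

```javascript
// Attach a new ImageBitmap to an image-like object, closing whatever
// bitmap was previously attached so its GPU backing is released
// promptly instead of waiting for garbage collection.
function presentBitmap(img, bitmap) {
  const previous = img.srcObject;
  if (previous && typeof previous.close === "function") {
    previous.close();
  }
  img.srcObject = bitmap;
}
```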

Would you consider copying
https://wiki.mozilla.org/User:Roc/WorkerCanvasProposal to
http://wiki.whatwg.org/wiki/Category:Proposals so that it's easier to
collaborate on it?


Re: [whatwg] Canvas in workers

2013-10-22 Thread Kenneth Russell
On Tue, Oct 22, 2013 at 1:44 PM, Robert O'Callahan rob...@ocallahan.org wrote:
 On Tue, Oct 22, 2013 at 7:31 PM, Kenneth Russell k...@google.com wrote:

 Robert, please don't remove those APIs from your proposal. They're
 needed in order to address known use cases, and splitting them off
 will make it difficult to understand how they interact with
 WorkerCanvas later.


 Yes, I think it's a good idea to specify a complete set of APIs that fit
 together logically and if there are some we don't need, we can remove them
 or just delay implementing them until they're needed.

 I would like to suggest changing the 'srcObject' property on
 HTMLImageElement into some sort of method taking ImageBitmap as
 argument. If an ImageBitmap had been previously set against the
 HTMLImageElement, the method would automatically call 'close()'
 against it. Fundamentally there should be some easy way to repeatedly
 update the HTMLImageElement without having to query its previous
 ImageBitmap and call close() against it before setting srcObject.


 Hmm. I'm not sure how this should work.

 Maybe instead we should use canvas elements and define
 ImageBitmap.transferToCanvas(HTMLCanvasElement). That would mitigate Glenn's
 argument about having to change element types too.

Using a Canvas as the target for displaying an ImageBitmap is fraught
with problems. Because Canvases are rendering targets themselves,
transferring in an ImageBitmap has to interoperate somehow with the
canvas's rendering context as well as other APIs like toDataURL. The
DrawingBuffer proposal in http://wiki.whatwg.org/wiki/CanvasInWorkers
was admittedly complex, but did properly handle fully detaching a
canvas's framebuffer and attaching it to another one. ImageBitmap,
being semantically read-only, isn't well suited for this task.

The idea to use an HTMLImageElement to display the ImageBitmaps is
elegant and I would like to see it explored, including prototyping it
behind an experimental flag and seeing how well it works.

 Would you consider copying
 https://wiki.mozilla.org/User:Roc/WorkerCanvasProposal to
 http://wiki.whatwg.org/wiki/Category:Proposals so that it's easier to
 collaborate on it?


 No problem at all. Can you do it? I need to get a WHATWG account :-).

Yes, I'll take care of the initial draft. The whatwg was pretty
responsive when I got my account though.

-Ken


 Rob
 --


Re: [whatwg] Canvas in workers

2013-10-22 Thread Kenneth Russell
Great.


On Tue, Oct 22, 2013 at 2:54 PM, Robert O'Callahan rob...@ocallahan.org wrote:
 I got an account and I'm uploading the proposal now.


 Rob
 --


Re: [whatwg] Synchronizing Canvas updates in a worker to DOM changes in the UI thread

2013-10-21 Thread Kenneth Russell
On Mon, Oct 21, 2013 at 3:39 PM, Glenn Maynard gl...@zewt.org wrote:
 On Sun, Oct 20, 2013 at 11:16 PM, Robert O'Callahan rob...@ocallahan.org
 wrote:

 With all these proposals I think it's OK to allow the main thread to do
 (e.g.) a toDataURL and read what the current contents of the canvas is,


 We can defer this discussion, since it's not something new to this proposal
 (or any other proposal we're discussing).


 On Sun, Oct 20, 2013 at 11:33 PM, Robert O'Callahan rob...@ocallahan.org
 wrote:

 To me, passing the image data explicitly in an ImageBitmap along with the
 present message seems like a better fit to the workers message-passing
 model than this proposal, where the data is stored as hidden state in the
 canvas element with (effectively) a setter in the worker and a getter in the
 main thread, and that setting and getting has to be coordinated with
 postMessage for synchronization. The relationship between a commit and its
 present has to be deduced by reasoning about the timing of messages,
 rather than by just reasoning about JS data flow through postMessage.


 Using ImageBitmap for this has a lot of issues.  It requires synchronizing
 with scripts in the UI thread.

This isn't difficult, and amounts to a few additional lines of code in
the main thread's onmessage handler.

The ImageBitmap style proposal has another significant advantage in
that it allows a single canvas context to present results in multiple
output regions on the page.


  It requires manually resizing your canvas
 repeatedly to fit different destinations.  It also may potentially create
 lots of backbuffers. Here's an example of code using ImageBitmap
 incorrectly, leading to excess memory allocation:

 function render()
 {
 var canvas = myWorkerCanvas;
 renderTo(canvas);
 var buffer = canvas.transferToImageBitmap();
 postMessage(buffer);
 }
 setTimeout(render, 1);

 We start with one backbuffer available, render to it (renderTo), peel it off
 the canvas to be displayed somewhere, and toss it off to the main thread.
 (For the sake of the example, the main thread is busy and doesn't process it
 immediately.)  The worker enters render() again, and when it gets to
 renderTo, a new backbuffer has to be allocated, since the one buffer we have
 is still used by the ImageBitmap and can't be changed. This happens
 repeatedly, creating new backbuffers each time, since none of them can be
 reused.

 This is an extreme example, but if this ever happens even once, it means
 potentially allocating an extra backbuffer.

This sort of resource exhaustion is certainly possible, but I view
this downside as smaller than the upside of addressing both of the
above use cases.
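One mitigation for the pile-up in Glenn's example is to bound the number of frames in flight and have the main thread acknowledge each consumed ImageBitmap; only the bookkeeping is sketched here (the postMessage plumbing is omitted):

```javascript
// Track how many rendered frames the main thread has not yet consumed.
// The worker skips rendering when the cap is hit, so at most
// maxInFlight backbuffers ever exist at once.
function makeFramePacer(maxInFlight) {
  let inFlight = 0;
  return {
    tryBeginFrame() {        // worker: call before rendering a frame
      if (inFlight >= maxInFlight) return false;
      inFlight += 1;
      return true;
    },
    frameConsumed() {        // main thread acked: a buffer is free again
      if (inFlight > 0) inFlight -= 1;
    },
  };
}
```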

-Ken



 This proposal also requires that whenever a worker is going to return
 image data to the main thread, the main thread must start things off by
 creating a canvas element. It's also not possible for a worker to spawn off
 sub-workers to do drawing (at least, not without some really ugly
 coordination with the main thread.)


 Sure it is.  If you want an offscreen buffer, you just new WorkerCanvas().
 This is unrelated to offscreen drawing.

 --
 Glenn Maynard



Re: [whatwg] Synchronizing Canvas updates in a worker to DOM changes in the UI thread

2013-10-21 Thread Kenneth Russell
On Mon, Oct 21, 2013 at 4:34 PM, Glenn Maynard gl...@zewt.org wrote:
 On Mon, Oct 21, 2013 at 6:08 PM, Kenneth Russell k...@google.com wrote:

  Using ImageBitmap for this has a lot of issues.  It requires
  synchronizing
  with scripts in the UI thread.

 This isn't difficult, and amounts to a few additional lines of code in
 the main thread's onmessage handler.


 Synchronization with the UI thread isn't bad because it's difficult.
 Avoiding synchronization with the main thread has been raised as a desirable
 goal:
 http://lists.w3.org/Archives/Public/public-whatwg-archive/2013Oct/0152.html
 including that it isn't possible to render from a worker without
 synchronizing with the main thread.

 (My previous comments on this are here:
 http://www.mail-archive.com/whatwg@lists.whatwg.org/msg35959.html)


 The ImageBitmap style proposal has another significant advantage in
 that it allows a single canvas context to present results in multiple
 output regions on the page.


 You can do that.  You just create a WorkerCanvas for each canvas you want to
 present to, hand them to the worker, then attachToCanvas in the worker to
 switch from canvas to canvas.  (That's orthogonal to explicit present.)

OK, I misunderstood that part of your attachToCanvas proposal.

There are some unexpected consequences of the attachToCanvas API
style. For example, what if two contexts use attachToCanvas to target
the same canvas? What if one of those contexts is 2D and the other is
WebGL? Currently it's illegal to try to fetch two different context
types for a single Canvas. The current CanvasProxy spec contains
several complex rules for these cases, and they're not easy to
understand.

Will it be guaranteed that if you have a WebGL context, attachToCanvas
to canvas1, do some rendering, and then attachToCanvas to canvas2,
that the only remaining buffer in canvas1 is its color buffer? No
depth buffers, multisample buffers, etc. will have to remain for some
reason?

How would WebGL's preserveDrawingBuffer attribute, which is a property
of the context, interact with directing its output to multiple
canvases?

Fundamentally I think the behavior is easier to spec, and the
implementation is easier to make correct, if the ultimate destination
is an image rather than a canvas, and the color buffer is transferred
out of the WorkerCanvas in an explicit step.

-Ken



 This sort of resource exhaustion is certainly possible, but I view
 this downside as smaller than the upside of addressing both of the
 above use cases.


 I can only find one thing above that you might be referring to as a use case
 (the one I replied to immediately above).  What was the other?

 --
 Glenn Maynard



Re: [whatwg] Canvas in workers

2013-10-18 Thread Kenneth Russell
On Tue, Oct 15, 2013 at 5:30 PM, Kenneth Russell k...@google.com wrote:
 On Tue, Oct 15, 2013 at 4:41 PM, Robert O'Callahan rob...@ocallahan.org 
 wrote:
 On Wed, Oct 16, 2013 at 11:55 AM, Kenneth Russell k...@google.com wrote:

 On Mon, Oct 14, 2013 at 1:34 PM, Robert O'Callahan rob...@ocallahan.org
 wrote:
  On Mon, Oct 14, 2013 at 2:20 PM, Kenneth Russell k...@google.com wrote:
 
  Would you mind looking at the proposal
  http://wiki.whatwg.org/wiki/CanvasInWorkers and commenting on it?
 
 
  Sure. Kyle and I looked at it while we were working on our proposal. The
  main issues I have with it are that rearchitecting canvas to introduce
  the
  DrawingBuffer layer of abstraction seems unnecessarily complex, and it
  doesn't handle direct presentation of frames from the worker, bypassing
  the
  main thread.

 Note that the CanvasInWorkers draft solves some other longstanding
 issues not addressed by the WorkerCanvas proposal. It provides the
 ability to render to multiple canvases from a single context, whether
 workers are involved or not.


 That may be a useful feature, but I'd like to see it justified in its own
 right.

 There has been a lot of developer feedback on the WebGL mailing lists
 over the past couple of years about exactly this feature. Web sites
 like Turbosquid want to present lots of little thumbnails of models --
 see for example http://www.turbosquid.com/Search/3D-Models/Vehicle/Car
 -- and have them be interactive. It's too resource-intensive to create
 a separate WebGL context for each. The most direct solution is to
 allow one context to render to multiple canvases.


 It achieves ideal memory utilization by
 being very explicit in the API, without the need for extensive and
 subtle optimizations behind the scenes.


 We can be more explicit with ImageBitmaps. We could provide
 WorkerCanvas.transferToImageBitmap which transfers the current canvas
 contents to an ImageBitmap and clears the canvas. (Canvas implementations
 are already optimized to support a zero-cost cleared state, because
 existing benchmarks require it.) Sharing ImageBitmap contents across threads
 during structured clone is not subtle. We can add an
 HTMLImageElement.srcObject attribute which could take a Blob or an
 ImageBitmap to enable explicit zero-copy rendering of ImageBitmaps. Would
 that be explicit enough for you?

 Yes, that generally sounds good.


 Personally I think high-performance manipulation of ImageBitmaps would be
 more generally useful than detachable DrawingBuffers, and would be easier
 for authors to understand.

 If you squint, WorkerCanvas.transferToImageBitmap is similar to detaching a
 DrawingBuffer. But I don't see a need to reattach a buffer to a canvas for
 further drawing. Do you?

 Not immediately. The ability to transfer out the canvas's contents,
 and render them in an HTMLImageElement without incurring an extra
 blit, should address the Maps team's requirements.

 Actually, adding transferToImageBitmap to HTMLCanvasElement as well
 would address the use case of rendering to multiple targets using one
 context. Instead of using multiple canvases as the targets, one would
 simply use multiple images. That sounds appealing.

 If WorkerCanvas is changed so that its width and height are mutable
 within the worker as you mentioned above, it sounds like it's
 addressing the known use cases.

Capturing Glenn Maynard's feedback from the other thread started by
Rik Cabanier, Glenn made a good point that there needs to be a way to
explicitly deallocate the ImageBitmap. Otherwise, the JavaScript
objects will have to be garbage collected before the GPU resource
(texture) it references can be freed, and that will not work -- GPU
resources will quickly pile up.

Would it be acceptable to say that setting the 'src' property of an
HTMLImageElement to an ImageBitmap neuters the incoming ImageBitmap?
If not, would it be feasible to add another method to HTMLImageElement
like setToImageBitmap which does this?

-Ken


 It's worth considering whether a change to the CanvasInWorkers
 proposal could support presenting frames directly from the worker.


 Sure, by adding a commit() method to DrawingBuffer. Right?

 I'm not exactly sure how it would be done. In the proposal as written,
 the DrawingBuffer's not shared between threads, only transferred.

 -Ken


 Rob
 --


Re: [whatwg] Counterproposal for canvas in workers

2013-10-17 Thread Kenneth Russell
On Wed, Oct 16, 2013 at 10:26 PM, Robert O'Callahan
rob...@ocallahan.org wrote:
 On Thu, Oct 17, 2013 at 3:34 PM, Rik Cabanier caban...@gmail.com wrote:

 The tasks themselves can also launch synchronized/unsynchronized subtasks
 with promises. A task is considered done if it exits and all its promises
 are fulfilled.


 It seems that tasks are like workers, but different, and you'd have to do a
 lot of extra work to precisely define the execution environment of the task
 script.

 It also seems that you have to precisely define how different tasks
 interact. For example is the current path left in the canvas by task 1
 usable by the code in task 2? You also have to define how this works in
 WebGL.

 I don't think this supports a worker/task generating a steady stream of
 frames, e.g. for a 3D game. Does it?

 I'm not all that enthusiastic :-)

Sorry, neither am I. OpenGL (and WebGL) applications do a lot of
one-time setup, and then repeatedly redraw using the previously
uploaded objects. This stateless drawing model isn't compatible with
that structure.

-Ken


Re: [whatwg] Counterproposal for canvas in workers

2013-10-17 Thread Kenneth Russell
On Thu, Oct 17, 2013 at 2:08 PM, Rik Cabanier caban...@gmail.com wrote:



 On Thu, Oct 17, 2013 at 2:03 PM, Kenneth Russell k...@google.com wrote:

 On Wed, Oct 16, 2013 at 10:26 PM, Robert O'Callahan
 rob...@ocallahan.org wrote:
  On Thu, Oct 17, 2013 at 3:34 PM, Rik Cabanier caban...@gmail.com
  wrote:
 
  The tasks themselves can also launch synchronized/unsynchronized
  subtasks
  with promises. A task is considered done if it exits and all its
  promises
  are fulfilled.
 
 
  It seems that tasks are like workers, but different, and you'd have to
  do a
  lot of extra work to precisely define the execution environment of the
  task
  script.
 
  It also seems that you have to precisely define how different tasks
  interact. For example is the current path left in the canvas by task 1
  usable by the code in task 2? You also have to define how this works in
  WebGL.
 
  I don't think this supports a worker/task generating a steady stream of
  frames, e.g. for a 3D game. Does it?
 
  I'm not all that enthusiastic :-)

 Sorry, neither am I. OpenGL (and WebGL) applications do a lot of
 one-time setup, and then repeatedly redraw using the previously
 uploaded objects. This stateless drawing model isn't compatible with
 that structure.


 Every task per ID and per canvas has access to its own state/VM.
 So, the first time a task is executed (or we could provide an 'init' phase),
 it could do setup which will be maintained between tasks.

OK, I see. Sorry for misinterpreting.

It seems to me that this proposal would restrict even further what the
worker executing the task can do, and be harder to program to than the
existing worker model. I'm interested in pursuing the other discussion
around a WorkerCanvas rather than this one.

-Ken


Re: [whatwg] Canvas in workers

2013-10-16 Thread Kenneth Russell
On Wed, Oct 16, 2013 at 5:39 AM, Justin Novosad ju...@google.com wrote:
 On Tue, Oct 15, 2013 at 8:30 PM, Kenneth Russell k...@google.com wrote:

 On Tue, Oct 15, 2013 at 4:41 PM, Robert O'Callahan rob...@ocallahan.org
 wrote:
  If you squint, WorkerCanvas.transferToImageBitmap is similar to
  detaching a
  DrawingBuffer. But I don't see a need to reattach a buffer to a canvas
  for
  further drawing. Do you?

 Not immediately. The ability to transfer out the canvas's contents,
 and render them in an HTMLImageElement without incurring an extra
 blit, should address the Maps team's requirements.


 WorkerCanvas.copyToImageBitmap could be just as effective with a proper lazy
 copy-on-write mechanism. It would offer the same performance in cases where
 you would just need to transfer (as opposed to copy) the buffer, with the
 added flexibility that it reattaches a new buffer to the canvas, only if
 needed (at next draw).  Also the lazy copy can be skipped if the next draw
 operation to the canvas context is a clear, in which case the UA only needs
 to attach an uninitialized buffer.

I'm assuming that transferToImageBitmap will attach a new buffer to
the canvas as well. The semantic would be that the color buffer of the
canvas gets transferred to the ImageBitmap, and the canvas gets a new,
blank color buffer. (Any auxiliary buffers, like a depth buffer, would
also be implicitly cleared.)
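[Editor's note: the transfer semantic described above can be sketched with plain objects standing in for the canvas and ImageBitmap; the names and structure here are illustrative only, not spec'ed API.]

```javascript
// Hypothetical sketch of transferToImageBitmap semantics: the canvas's
// color buffer is handed to the ImageBitmap without a copy, and the
// canvas receives a fresh, blank buffer of the same size.
function transferToImageBitmap(canvas) {
  const bitmap = { buffer: canvas.colorBuffer };                    // zero-copy handoff
  canvas.colorBuffer = new Uint8ClampedArray(bitmap.buffer.length); // blank replacement
  return bitmap;
}

const canvas = { colorBuffer: Uint8ClampedArray.of(255, 0, 0, 255) }; // one red pixel
const bmp = transferToImageBitmap(canvas);
console.log(bmp.buffer[0], canvas.colorBuffer[0]); // 255 0
```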

It's easier to understand how to make the transfer operation efficient
than how to optimize the copy-on-write, which requires deeper analysis
of the calls made against the canvas and context in order to get peak
performance. If both are present then both the createImageBitmap and
transfer implementations can be really simple. Do you have a
particular objection to including the transfer API?

 Also, because ImageBitmaps are immutable objects, the API should probably be
 more like var imageBitmap = createImageBitmap(myWorkerCanvas);

Yes, right. That factory method is already spec'ed on the
WorkerGlobalScope [1]. It actually returns a Promise, so presumably
transferToImageBitmap would have to as well.

[1] 
http://www.whatwg.org/specs/web-apps/current-work/multipage/timers.html#imagebitmapfactories

-Ken


Re: [whatwg] Canvas in workers

2013-10-16 Thread Kenneth Russell
On Wed, Oct 16, 2013 at 1:13 PM, Justin Novosad ju...@google.com wrote:



 On Wed, Oct 16, 2013 at 3:53 PM, Robert O'Callahan rob...@ocallahan.org
 wrote:

 On Thu, Oct 17, 2013 at 6:35 AM, Kenneth Russell k...@google.com wrote:

 Yes, right. That factory method is already spec'ed on the
 WorkerGlobalScope [1]. It actually returns a Promise, so presumably
 transferToImageBitmap would have to as well.


 The whole point of transferToImageBitmap is that it's really fast, so I
 don't see why it has to be async.


 True. I also wonder why all of the currently spec'ed ImageBitmap creation
 methods are async. I think asynchrony makes sense when creating ImageBitmaps
 from blobs, image elements and video elements, which may not be in an
 immediately accessible state, but creating an ImageBitmap from a Canvas or
 canvas context (or a WorkerCanvas) could be immediate.

It would be fine in my opinion for transferToImageBitmap to simply
return an ImageBitmap. The suggestion to return a Promise was merely
for symmetry with createImageBitmap.

While the Promise returned from createImageBitmap(HTMLCanvasElement)
can be fulfilled immediately, is it worth introducing a special
overload with a different return type?

-Ken


Re: [whatwg] Canvas in workers

2013-10-15 Thread Kenneth Russell
On Mon, Oct 14, 2013 at 1:34 PM, Robert O'Callahan rob...@ocallahan.org wrote:
 On Mon, Oct 14, 2013 at 2:20 PM, Kenneth Russell k...@google.com wrote:

 Would you mind looking at the proposal
 http://wiki.whatwg.org/wiki/CanvasInWorkers and commenting on it?


 Sure. Kyle and I looked at it while we were working on our proposal. The
 main issues I have with it are that rearchitecting canvas to introduce the
 DrawingBuffer layer of abstraction seems unnecessarily complex, and it
 doesn't handle direct presentation of frames from the worker, bypassing the
 main thread.

Note that the CanvasInWorkers draft solves some other longstanding
issues not addressed by the WorkerCanvas proposal. It provides the
ability to render to multiple canvases from a single context, whether
workers are involved or not. It achieves ideal memory utilization by
being very explicit in the API, without the need for extensive and
subtle optimizations behind the scenes.

It's worth considering whether a change to the CanvasInWorkers
proposal could support presenting frames directly from the worker.


 There's been some recent discussion in the WebGL WG on this topic and
 concerns were raised from other parties at Mozilla about the
 DrawingBuffer proposal above, including that it isn't possible to
 render from a worker without synchronizing with the main thread.


 This is a huge limitation. For games and other animated content, achieving a
 stable frame rate is super important and a key motivation for adding canvas
 support to workers.

 My vision for handling the Maps use-cases based on our proposal is this: the
 worker renders to one or more WorkerCanvases, then does
 createImageBitmap(canvasContext), then ships the ImageBitmap(s) to the main
 thread using postMessage, and then renders those ImageBitmaps either by
 drawing them to a canvas or in some more direct way.

 This can all be implemented in a zero-copy way with the APIs currently in
 the spec, though it's not trivial. You'd need to do a few optimizations:
 -- createImageBitmap(canvasContext) would take a lazy snapshot of the canvas
 contents; i.e., further modifications to the canvas would trigger a copy (on
 the worker), but if the canvas is untouched no copy is required.
 -- structured clone of the ImageBitmap would not copy. This may actually
 require a small spec change.
 -- drawing an ImageBitmap to a 2D canvas, if it covers the whole canvas,
 would simply replace the canvas buffer with the ImageBitmap contents (and
 perform copy-on-write if the script later makes changes to that canvas).
 These optimizations would all be useful in other contexts too. Whatever
 happens with canvas-in-a-worker, I bet we'll end up doing these
 optimizations for other reasons.

I agree that it's probably possible to make this work. It is still
worth considering whether these optimizations are going to be fragile,
and whether developers will fall off a performance cliff if something
subtle changes in their code or in the browser's behavior in the
future.


 It might make sense to create an API that renders an ImageBitmap more
 directly than drawing it to a canvas. For example we could create an API
 that allows an img element to render an ImageBitmap. It would be a bit
 simpler for authors and perhaps a bit simpler to implement than the final
 optimization in my list above.

That's an interesting possibility. It would probably be quite a bit
simpler to both specify and implement rather than using a canvas on
the receiving end.

-Ken


 Rob
 --
 Jtehsauts  tshaei dS,o n Wohfy  Mdaon  yhoaus  eanuttehrotraiitny  eovni le
 atrhtohu gthot sf oirng iyvoeu rs ihnesa.rt sS?o  Whhei csha iids  teoa
 stiheer :p atroa lsyazye,d  'mYaonu,r  sGients  uapr,e  tfaokreg iyvoeunr,
 'm aotr  atnod  sgaoy ,h o'mGee.t  uTph eann dt hwea lmka'n?  gBoutt  uIp
 waanndt  wyeonut  thoo mken.o w


Re: [whatwg] Canvas in workers

2013-10-15 Thread Kenneth Russell
On Tue, Oct 15, 2013 at 4:41 PM, Robert O'Callahan rob...@ocallahan.org wrote:
 On Wed, Oct 16, 2013 at 11:55 AM, Kenneth Russell k...@google.com wrote:

 On Mon, Oct 14, 2013 at 1:34 PM, Robert O'Callahan rob...@ocallahan.org
 wrote:
  On Mon, Oct 14, 2013 at 2:20 PM, Kenneth Russell k...@google.com wrote:
 
  Would you mind looking at the proposal
  http://wiki.whatwg.org/wiki/CanvasInWorkers and commenting on it?
 
 
  Sure. Kyle and I looked at it while we were working on our proposal. The
  main issues I have with it are that rearchitecting canvas to introduce
  the
  DrawingBuffer layer of abstraction seems unnecessarily complex, and it
  doesn't handle direct presentation of frames from the worker, bypassing
  the
  main thread.

 Note that the CanvasInWorkers draft solves some other longstanding
 issues not addressed by the WorkerCanvas proposal. It provides the
 ability to render to multiple canvases from a single context, whether
 workers are involved or not.


 That may be a useful feature, but I'd like to see it justified in its own
 right.

There has been a lot of developer feedback on the WebGL mailing lists
over the past couple of years about exactly this feature. Web sites
like Turbosquid want to present lots of little thumbnails of models --
see for example http://www.turbosquid.com/Search/3D-Models/Vehicle/Car
-- and have them be interactive. It's too resource-intensive to create
a separate WebGL context for each. The most direct solution is to
allow one context to render to multiple canvases.


 It achieves ideal memory utilization by
 being very explicit in the API, without the need for extensive and
 subtle optimizations behind the scenes.


 We can be more explicit with ImageBitmaps. We could provide
 WorkerCanvas.transferToImageBitmap which transfers the current canvas
 contents to an ImageBitmap and clears the canvas. (Canvas implementations
 are already optimized to support a zero-cost cleared state, because
 existing benchmarks require it.) Sharing ImageBitmap contents across threads
 during structured clone is not subtle. We can add an
 HTMLImageElement.srcObject attribute which could take a Blob or an
 ImageBitmap to enable explicit zero-copy rendering of ImageBitmaps. Would
 that be explicit enough for you?

Yes, that generally sounds good.


 Personally I think high-performance manipulation of ImageBitmaps would be
 more generally useful than detachable DrawingBuffers, and would be easier
 for authors to understand.

 If you squint, WorkerCanvas.transferToImageBitmap is similar to detaching a
 DrawingBuffer. But I don't see a need to reattach a buffer to a canvas for
 further drawing. Do you?

Not immediately. The ability to transfer out the canvas's contents,
and render them in an HTMLImageElement without incurring an extra
blit, should address the Maps team's requirements.

Actually, adding transferToImageBitmap to HTMLCanvasElement as well
would address the use case of rendering to multiple targets using one
context. Instead of using multiple canvases as the targets, one would
simply use multiple images. That sounds appealing.

If WorkerCanvas is changed so that its width and height are mutable
within the worker as you mentioned above, it sounds like it's
addressing the known use cases.


 It's worth considering whether a change to the CanvasInWorkers
 proposal could support presenting frames directly from the worker.


 Sure, by adding a commit() method to DrawingBuffer. Right?

I'm not exactly sure how it would be done. In the proposal as written,
the DrawingBuffer's not shared between threads, only transferred.

-Ken


 Rob


Re: [whatwg] Canvas in workers

2013-10-14 Thread Kenneth Russell
Would you mind looking at the proposal
http://wiki.whatwg.org/wiki/CanvasInWorkers and commenting on it? This
was arrived at after extensive discussions with the Google Maps team,
and addresses their key use cases. Compared to the one below, it
solves the following problems:

  1) Rendering from a worker and displaying on the main thread with no
extra blits of the rendering results
  2) Allows one context to render to multiple canvases
  3) Supports resizing of the drawing buffer

The main difference in my mind is that in the DrawingBuffer proposal,
the back buffer of the canvas can be detached, transferred via
postMessage to the other side, and attached to a Canvas, replacing its
previous back buffer. The semantics are simple, clear, avoid extra
blits of the rendered content, and support rendering into multiple
canvases from one context.

There's been some recent discussion in the WebGL WG on this topic and
concerns were raised from other parties at Mozilla about the
DrawingBuffer proposal above, including that it isn't possible to
render from a worker without synchronizing with the main thread.
Still, it seems to me it's worth considering all aspects of the
proposal, because it was motivated by a major existing web app which
is both using Canvas 2D and WebGL extensively, and desires to use
workers more heavily in its rendering pipeline.

-Ken



On Sat, Oct 12, 2013 at 9:12 PM, Kyle Huey m...@kylehuey.com wrote:
 I talked at length with Robert O'Callahan about what the DOM API for
 supporting canvas in web workers should look like and we came up with the
 following modifications to the spec.

1. Rename CanvasProxy to WorkerCanvas and only allow it to be
transferred to workers.  I don't think we're interested in supporting
cross-origin canvas via CanvasProxy (I would be curious to hear more
about what the use cases are).
2. Add a worker-only WorkerCanvas constructor that takes the desired
width/height of the drawing surface.
3. Remove the rendering context constructors and the setContext method
on WorkerCanvas (née CanvasProxy).
4. Copy all of the sensible non-node related things from
HTMLCanvasElement to WorkerCanvas.  This would include
- width and height as readonly attributes
   - getContext (to replace what we removed in step 3).  roc prefers to
   have getContext2D and getContextWebGL, and dispense with the string
   argument version entirely, but I don't have strong feelings.
   - toBlob.  We do not intend to implement toDataURL here.
5. Add a commit method to WorkerCanvas.  For a WorkerCanvas obtained
from a main thread canvas element, this would cause the buffer displayed
on screen to swap.  For a WorkerCanvas created *de novo* on a worker
thread, it would do nothing.  This commit method would also commit a minor
violation of run-to-completion semantics, described below.
6. We would rely on extracting ImageBitmaps from the WorkerCanvas and
shipping them to the main thread via postMessage to allow synchronizing
canvas updates with DOM updates.  We explored a couple other options but we
didn't come up with anything else that allows synchronizing updates to
multiple canvases from a worker.  This isn't really sketched out here.

 So the IDL would look something like:

 [Constructor(unsigned long width, unsigned long height)]
 interface WorkerCanvas {
   readonly attribute unsigned long width;
   readonly attribute unsigned long height;

   CanvasRenderingContext2D? getContext2D(any... args);
   WebGLRenderingContext? getContextWebGL(any... args);

   void toBlob(FileCallback? _callback, optional DOMString type, any... arguments);

   bool commit();
 };

 WorkerCanvas implements Transferable;

 Everything would behave pretty much as one would expect, except perhaps
 for the commit method.  The width and height of the canvas can be modified
 on the main thread while the worker is drawing.  This would fire an event
 off to the worker to update the WorkerCanvas's dimensions that would be
 scheduled as if the main thread had postMessage()d something to the
 worker.  But it's possible that the worker would attempt to draw to the
 canvas before that update runs.  It's also possible that the worker would
 simply draw in a loop without yielding.  To solve this, if commit is called
 and the current dimensions on the main thread don't match the dimensions of
 the WorkerCanvas it would fail (return false) and update the dimensions of
 the WorkerCanvas before returning.  This is technically a violation of
 run-to-completion semantics, but is needed to support workers that do not
 yield.
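[Editor's note: the dimension-mismatch behavior described above can be sketched as follows; plain objects stand in for the real WorkerCanvas and the main thread's view of the element, so this is a hedged illustration, not the proposed implementation.]

```javascript
// If the main thread resized the canvas since the worker last synced,
// commit() resyncs the worker-side dimensions and reports failure
// instead of swapping buffers.
function commit(workerCanvas, mainThreadDims) {
  if (workerCanvas.width !== mainThreadDims.width ||
      workerCanvas.height !== mainThreadDims.height) {
    workerCanvas.width = mainThreadDims.width;   // resync before returning
    workerCanvas.height = mainThreadDims.height;
    return false; // caller should redraw at the new size
  }
  // ...the displayed buffer would be swapped here in a real implementation...
  return true;
}

const wc = { width: 300, height: 150 };
console.log(commit(wc, { width: 600, height: 150 })); // false: resized under us
console.log(wc.width, commit(wc, { width: 600, height: 150 })); // 600 true
```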

 Thoughts?

 - Kyle


Re: [whatwg] Proposal: ImageData constructor or factory method with preexisting data

2013-08-13 Thread Kenneth Russell
On Tue, Aug 13, 2013 at 11:57 AM, Ian Hickson i...@hixie.ch wrote:
 On Mon, 12 Aug 2013, Kenneth Russell wrote:

 The use case is the passing of ImageData objects back and forth to
 workers to fill and refill with data.

 An ImageData is essentially a wrapper for the underlying
 Uint8ClampedArray, providing an associated width and height. However,
 the only way to draw raw pixels into a Canvas is using an ImageData.

 Currently a Uint8ClampedArray can be constructed, but the only way to
 create an ImageData is to ask a Canvas for one, and there's no way to
 associate it with a pre-allocated Uint8ClampedArray.

 Why can't you just send an ImageData over?


 This means that if you want to pass an ImageData to a worker for
 filling, transferring the underlying Uint8ClampedArray, you need to be
 very careful about bookkeeping, and to not lose the reference to the
 ImageData object.

 Sure. Just send the ImageData over. That seems relatively
 straight-forward. What am I missing?


 IMO there ought to be a factory method for ImageData taking a
 Uint8ClampedArray, width, height, and possibly resolution (or a
 dictionary?), which validates the width, height and resolution against
 the size of the Uint8ClampedArray, and makes a new ImageData object.
 This would ease management of ImageData instances.

 We could have a constructor for ImageData objects, sure. That would be
 relatively easy to add, if it's really needed. I don't understand why it's
 hard to keep track of ImageData objects, though. Can you elaborate?

I have in mind new APIs for typed arrays which allow sharding of typed
arrays to workers and re-assembly of the component pieces when the
work is complete. This would involve multiple manipulations of the
ArrayBuffer and its views. It would be most convenient if the result
could be wrapped in an ImageData if it's destined to be drawn to a
Canvas. Otherwise it's likely that a data copy will need to be
incurred.

-Ken


Re: [whatwg] Proposal: ImageData constructor or factory method with preexisting data

2013-08-12 Thread Kenneth Russell
On Fri, Aug 9, 2013 at 2:34 PM, Ian Hickson i...@hixie.ch wrote:
 On Fri, 9 Aug 2013, Rik Cabanier wrote:
 On Fri, Aug 9, 2013 at 1:32 PM, Ian Hickson i...@hixie.ch wrote:
  On Mon, 11 Mar 2013, Kenneth Russell wrote:
  
   It would be useful to be able to create an ImageData [1] object with
   preexisting data. The main use case is to display arbitrary data in
   the 2D canvas context with no data copies.
 
  Doesn't ImageBitmap support this already? I'm not sure I understand
  the use case here. Where are you getting the image data from, that
  it's already in raw RGBA form rather than compressed e.g. as a PNG?
  (Presumably this isn't coming over the network, since I would imagine
  the time to compress and decompress an image is far smaller than the
  time to send uncompressed data. But maybe I'm wrong about that.)

 From re-reading the thread, it seems that this data comes from the
 server (or a web worker?) as uncompressed data. The http protocol likely
 did compression on the packets so the size difference is probably not
 that great.

 I think the use-case is to avoid having to copy over the data pixel by
 pixel from the arraybuffer.

 Could you elaborate on the use case?

 I'm happy to believe that there are times that a server or worker is
 generating lots of pixel data, but having never run into such a case
 myself, I would very much like to understand it better. It may be that
 there are better solutions to the real underlying problem.

The use case is the passing of ImageData objects back and forth to
workers to fill and refill with data.

An ImageData is essentially a wrapper for the underlying
Uint8ClampedArray, providing an associated width and height. However,
the only way to draw raw pixels into a Canvas is using an ImageData.

Currently a Uint8ClampedArray can be constructed, but the only way to
create an ImageData is to ask a Canvas for one, and there's no way to
associate it with a pre-allocated Uint8ClampedArray. This means that
if you want to pass an ImageData to a worker for filling, transferring
the underlying Uint8ClampedArray, you need to be very careful about
bookkeeping, and to not lose the reference to the ImageData object.

IMO there ought to be a factory method for ImageData taking a
Uint8ClampedArray, width, height, and possibly resolution (or a
dictionary?), which validates the width, height and resolution against
the size of the Uint8ClampedArray, and makes a new ImageData object.
This would ease management of ImageData instances.
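[Editor's note: a minimal sketch of the validation such a factory would perform. makeImageData is a hypothetical name, and a real ImageData would be a platform object rather than a plain wrapper.]

```javascript
// Validate width/height against the buffer and wrap without copying:
// the resulting object shares the caller's Uint8ClampedArray.
function makeImageData(data, width, height) {
  if (!Number.isInteger(width) || !Number.isInteger(height) ||
      width <= 0 || height <= 0) {
    throw new RangeError("width and height must be positive integers");
  }
  if (data.length !== width * height * 4) { // 4 bytes per RGBA pixel
    throw new RangeError("data length does not match width * height * 4");
  }
  return { data, width, height };
}

const pixels = new Uint8ClampedArray(2 * 2 * 4);
const img = makeImageData(pixels, 2, 2);
console.log(img.width, img.height, img.data === pixels); // 2 2 true
```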

-Ken


Re: [whatwg] BinaryEncoding for Typed Arrays using window.btoa and window.atob

2013-08-05 Thread Kenneth Russell
On Mon, Aug 5, 2013 at 2:04 PM, Simon Pieters sim...@opera.com wrote:
 On Mon, 05 Aug 2013 22:39:22 +0200, Chang Shu csh...@gmail.com wrote:

 I see your point now, Simon. Technically both approaches should work.
 As you said, yours has the limitation that the implementation does not
 know which view to return unless you provide an enum type of parameter
 instead of boolean to atob. And mine has the performance issue. How
 about we don't return the 'binary' string in case the 2nd parameter is
 provided in my case?


 That works for me.

Chang, in your proposal for modifying atob, how does the developer
know how many characters were written into the outgoing
ArrayBufferView?

What happens if the ArrayBufferView argument isn't large enough to
hold the decoded string?

During the decoding process, is the type of the ArrayBufferView
significant? In your example, what would happen if an Int16Array were
passed instead of an Int32Array?
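[Editor's note: one possible answer to these questions, sketched with Node's Buffer standing in for the base64 decode step. atobInto is an illustrative name, not the proposed API: it reports the number of bytes written, throws if the destination is too small, and writes through a byte view so the destination's element type does not affect the decoded bytes.]

```javascript
// Decode base64 into a caller-supplied view, returning bytes written.
function atobInto(b64, view) {
  const decoded = Buffer.from(b64, "base64"); // stand-in for atob's decode step
  if (decoded.length > view.byteLength) {
    throw new RangeError("destination view too small for decoded data");
  }
  // Write byte-wise over the view's buffer, regardless of element type.
  new Uint8Array(view.buffer, view.byteOffset, view.byteLength).set(decoded);
  return decoded.length;
}

const out = new Uint8Array(16);
const n = atobInto(Buffer.from("hello").toString("base64"), out);
console.log(n); // 5 bytes written
```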

The Encoding spec at http://encoding.spec.whatwg.org/ seems to have
handled issues like these. Perhaps a better route would be to fold
this functionality into that spec.

-Ken


Re: [whatwg] Adding features needed for WebGL to ImageBitmap

2013-07-16 Thread Kenneth Russell
On Tue, Jul 16, 2013 at 7:16 AM, Justin Novosad ju...@google.com wrote:



 On Tue, Jul 16, 2013 at 12:25 AM, Mark Callow callow.m...@artspark.co.jp
 wrote:

 On 2013/07/15 10:46, Justin Novosad wrote:

 But to circle back to your point, I agree that an exception is a good idea
 to avoid having to hold a triplicate copy in RAM, or having to redecode
 all
 the time. Better to force the dev to make additional copies explicitly if
 needed than to make a potentially uselessly costly implementation.

 Maybe I am misunderstanding but the only reason I can see for 3 copies (2
 if you ignore the undecoded copy) is if you propose to ignore the specified
 parameters when drawing the image to a 2D canvas.


 Yes, that is what I was referring to because that was suggested earlier on
 this thread. But I think it is becoming clearer that that is not the right
 thing to do.


 I would expect to always draw the image decoded as indicated by the
 proposed parameters so no additional copy would be necessary. Sure the image
 might not be correct (colors off or image upside down) but that would be a
 programmer error.


 Exactly.  That is what I am suggesting.  If the programmer wants several
 copies of the same image with different baked-in transformations, then the
 programmer should create several ImageBitmaps explicitly.  No under the hood
 magic. It's clearer that way.

This sounds good. Additionally, the WebGL spec can be updated to state
that the parameters UNPACK_FLIP_Y_WEBGL, etc. don't apply to
ImageBitmap, so the only way to affect the decoding is with the
dictionary of options.
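[Editor's note: for illustration, the dictionary-of-options approach might normalize its input roughly as below. The member names and defaults are assumptions extrapolated from the WebGL parameters discussed in this thread, not spec'ed values.]

```javascript
// Fill in defaults and ignore unknown members, as WebIDL dictionaries do.
const DEFAULTS = {
  premultiplyAlpha: "default",     // mirrors UNPACK_PREMULTIPLY_ALPHA_WEBGL
  imageOrientation: "none",        // mirrors UNPACK_FLIP_Y_WEBGL
  colorSpaceConversion: "default", // mirrors UNPACK_COLORSPACE_CONVERSION_WEBGL
};

function normalizeImageBitmapOptions(options = {}) {
  const result = { ...DEFAULTS };
  for (const key of Object.keys(DEFAULTS)) {
    if (key in options) result[key] = options[key];
  }
  return result;
}

console.log(normalizeImageBitmapOptions({ premultiplyAlpha: "none" }));
```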


 I would like to see ImageBitmap fully support WebGL so WebGL apps can use
 a Browser's built-in image decoders. And, if the rumors are true, come IE11,
 WebGL will be supported by all major browsers so it should be treated as a
 first-class citizen.


 Yes!

Agree with this sentiment also.

-Ken



 Regards

 -Mark





Re: [whatwg] Adding features needed for WebGL to ImageBitmap

2013-07-10 Thread Kenneth Russell
(Replying on behalf of Gregg, who unfortunately isn't at Google any more)

On Wed, Jul 10, 2013 at 3:17 PM, Ian Hickson i...@hixie.ch wrote:
 On Wed, 19 Jun 2013, Gregg Tavares wrote:

 In order for ImageBitmap to be useful for WebGL we need more options

 ImageBitmap is trying to just be a generic HTMLImageElement, that is, a
 bitmap image. It's not trying to be anything more than that.

 Based on some of these questions, though, maybe you mean ImageData?

Gregg meant ImageBitmap.

Some background: when uploading HTMLImageElements to WebGL it's
required to be able to specify certain options, such as whether to
premultiply the alpha channel, or perform colorspace conversion.
Because it seemed infeasible at the time to modify the HTML spec,
these options are set via the WebGL API. If they're set differently
from the browser's defaults (which are generally to do
premultiplication, and do colorspace conversion), then the WebGL
implementation has to re-decode the image when it's uploaded to a
WebGL texture. (There's no way to know in advance whether a given
image is intended for upload to WebGL as opposed to insertion into the
document, and making image decoding lazier than it currently is would
introduce bad hiccups while scrolling.)

We'd like to avoid the same problems with the new ImageBitmap concept.

The current ImageBitmap draft has the problem that when the callback
is called, image decoding will already have been done, just like
HTMLImageElement -- at least, this is almost surely how it'll be
implemented, in order to obey the rule "An ImageBitmap object
represents a bitmap image that can be painted to a canvas without
undue latency." Just like HTMLImageElement, these options need to be
set before decoding occurs, to avoid redundant work and rendering
pauses which would happen if operations like colorspace conversion
were done lazily. (By the way, colorspace conversion is typically
implemented inside the image decoder itself, and it would be a lot of
work to factor it out into code which can be applied to a
previously-decoded image. In fact from looking again at the code in
Blink which does this I'd say it's completely infeasible.)


 premultipliedAlpha: true/false (default true)
 Nearly all GL games use non-premultiplied alpha textures. So all those
 games people want to port to WebGL will require non-premultiplied textures.
 Often in games the alpha might not even be used for alpha but rather for
 glow maps or specular maps or the other kinds of data.

 How do you do this with img today?

Per above, by specifying the option via the WebGL API, and performing
a synchronous image re-decode. This re-decode is really expensive, and
a major pain point for WebGL developers. It's so bad that developers
are using pure JavaScript decoders for PNG and JPG formats just so
that they can do this on a worker thread.
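[Editor's note: a quick numeric sketch of why premultiplication is destructive when the alpha channel carries non-alpha data such as a specular or glow map.]

```javascript
// Premultiplying scales each color channel by alpha/255; any color data
// stored alongside a zero "alpha" value is irrecoverably zeroed.
function premultiply([r, g, b, a]) {
  const f = a / 255;
  return [Math.round(r * f), Math.round(g * f), Math.round(b * f), a];
}

console.log(premultiply([200, 100, 50, 0]));   // color wiped to [0, 0, 0, 0]
console.log(premultiply([200, 100, 50, 255])); // alpha 255 leaves data intact
```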


 flipY: true/false (default false)
 Nearly all 3D modeling apps expect the bottom left pixel to be the first
 pixel in a texture so many 3D engines flip the textures on load. WebGL
 provides this option but it takes time and memory to flip a large image
 therefore it would be nice if that flip happened before the callback
 from ImageBitmap

 No pixel is the first pixel in an ImageBitmap. I don't really understand
 what this means.

There's a longstanding difference between the coordinate systems used
by most 2D libraries, and 3D APIs. OpenGL in particular long ago
adopted the convention that the origin of a texture is its lower-left
corner, with the Y axis pointing up.

Every image loading library ever created for OpenGL has had an option
to flip (or not) loaded textures along the Y axis; the option is
required to support pipelines for loading artists' work.

The WebGL spec offers this option via the UNPACK_FLIP_Y_WEBGL state.
http://www.khronos.org/registry/webgl/specs/latest/#TEXIMAGE2D_HTML
defines that the upper left pixel of images is by default the first
pixel transferred to the GPU.

Flipping large images vertically is expensive, taking a significant
percentage of frame time. As with premultiplication of alpha, we want
to avoid doing it unnecessarily, redundantly, or synchronously with
respect to the application. For this reason we want to make it an
option on createImageBitmap so when the callback is called, the
decoded image data is already oriented properly for upload to the GPU.
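[Editor's note: the cost described above comes from touching every row of the image; a straightforward in-place Y-flip of an RGBA buffer looks roughly like the following. This is illustrative, not how any browser implements it.]

```javascript
// Swap rows top-to-bottom so the bottom-left pixel comes first,
// matching OpenGL's texture origin convention.
function flipYInPlace(pixels, width, height) {
  const rowBytes = width * 4;                  // RGBA: 4 bytes per pixel
  const tmp = new Uint8ClampedArray(rowBytes); // scratch space for one row
  for (let y = 0; y < Math.floor(height / 2); y++) {
    const top = y * rowBytes;
    const bottom = (height - 1 - y) * rowBytes;
    tmp.set(pixels.subarray(top, top + rowBytes));
    pixels.copyWithin(top, bottom, bottom + rowBytes);
    pixels.set(tmp, bottom);
  }
  return pixels;
}

// A 1x2 image whose rows are [1,1,1,1] and [2,2,2,2]:
const px = flipYInPlace(Uint8ClampedArray.from([1, 1, 1, 1, 2, 2, 2, 2]), 1, 2);
console.log(Array.from(px)); // bottom row now comes first
```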


 colorspaceConversion: true/false (default true)
 Some browsers apply color space conversion to match monitor settings.
 That's fine for images with color but WebGL apps often load heightmaps,
 normalmaps, lightmaps, global illumination maps and many other kinds of
 data through images. If the browser applies a colorspace conversion the
 data is no longer suitable for its intended purpose, therefore many WebGL
 apps turn off color conversions. As it is now, when an image is uploaded to
 WebGL, if colorspace conversion is off
 (http://www.khronos.org/registry/webgl/specs/latest/#PIXEL_STORAGE_PARAMETERS),
 WebGL has to 

Re: [whatwg] Adding features needed for WebGL to ImageBitmap

2013-07-10 Thread Kenneth Russell
On Wed, Jul 10, 2013 at 4:37 PM, Ian Hickson i...@hixie.ch wrote:
 On Wed, 10 Jul 2013, Kenneth Russell wrote:

 Some background: when uploading HTMLImageElements to WebGL it's required
 to be able to specify certain options, such as whether to premultiply
 the alpha channel, or perform colorspace conversion. Because it seemed
 infeasible at the time to modify the HTML spec, these options are set
 via the WebGL API. If they're set differently from the browser's
 defaults (which are generally to do premultiplication, and do colorspace
 conversion), then the WebGL implementation has to re-decode the image
 when it's uploaded to a WebGL texture. (There's no way to know in
 advance whether a given image is intended for upload to WebGL as opposed
 to insertion into the document, and making image decoding lazier than it
 currently is would introduce bad hiccups while scrolling.)

 It seems like the right solution is to create a primitive for WebGL that
 represents images that are going to be used in WebGL calls. Such a
 primitive could use the same sources for images as ImageBitmap, but would
 be specifically for use with WebGL, in the same way that ImageBitmap is
 used just by the 2D Canvas API.

That sounds like the wrong solution to me. The goal of HTML5 should be
good integration of all of the component APIs, not to treat some, like
WebGL, as bolt-on mechanisms.

ImageBitmap can cleanly address all of the desired use cases simply by
adding an optional dictionary of options. I suspect that in the future
some options will be desired even for the 2D canvas use case, and
having the dictionary already specified will make that easier. There
is no need to invent a new primitive and means of loading it.

-Ken
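The options-dictionary approach can be sketched concretely. The option names below follow the ones discussed in this thread for ImageBitmap (imageOrientation, premultiplyAlpha, colorSpaceConversion), but the helper function and its enum values are illustrative, not normative spec text:

```javascript
// Hypothetical defaults for a proposed ImageBitmapOptions dictionary;
// names follow the thread's proposal and are illustrative only.
const IMAGE_BITMAP_DEFAULTS = {
  imageOrientation: "none",
  premultiplyAlpha: "default",
  colorSpaceConversion: "default",
};

// Mimics (loosely) how WebIDL dictionary conversion would fill in
// defaults and reject unknown enum values.
function normalizeImageBitmapOptions(options = {}) {
  const normalized = Object.assign({}, IMAGE_BITMAP_DEFAULTS, options);
  const allowed = {
    imageOrientation: ["none", "flipY"],
    premultiplyAlpha: ["none", "premultiply", "default"],
    colorSpaceConversion: ["none", "default"],
  };
  for (const key of Object.keys(allowed)) {
    if (!allowed[key].includes(normalized[key])) {
      throw new TypeError(key + ": invalid value '" + normalized[key] + "'");
    }
  }
  return normalized;
}
```

Adding a 2D-canvas-relevant option later would then only mean extending the dictionary, exactly as argued above.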


 --
 Ian Hickson   U+1047E)\._.,--,'``.fL
 http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
 Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'


Re: [whatwg] Adding features needed for WebGL to ImageBitmap

2013-07-10 Thread Kenneth Russell
On Wed, Jul 10, 2013 at 5:13 PM, Peter Kasting pkast...@google.com wrote:
 On Wed, Jul 10, 2013 at 5:07 PM, Ian Hickson i...@hixie.ch wrote:

 (The other two options don't make much sense to me even for GL. If you
 don't want a color space, don't set one. If you don't want an alpha
 channel, don't set one. You control the image, after all.)


 I only have a small amount of graphics experience, but I don't think that
 latter comment is right, at least.

 At least for the alpha channel, as Gregg already wrote, a lot of GL
 algorithms use that data for something per-pixel that's not alpha
 (generally some other kind of per-pixel map).  It's not appropriate for the
 browser to assume that it's safe to muck with the values there.  Fixing this
 by instead trying to pass these values separate from the rest of the pixel
 data is inefficient as well as just weird from the perspective of anyone
 with significant experience in using these sorts of algorithms.

This is correct. Further, even if an image doesn't contain any color
space information, the browser may still incorrectly decide to adjust
the colorspace of decoded image data based on the client machine's
settings. It's necessary to be able to tell the browser not to do this.

I would find it really discouraging if the WebGL spec had to subsume
image loading functionality. It's a statement of fact that with a
dictionary of options, ImageBitmap can work as efficiently for the
WebGL canvas context as it's intended to for the 2D canvas context.

-Ken
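The premultiplication at issue is a simple per-pixel transform; a minimal sketch of what a browser does when it premultiplies RGBA data (illustrative, not any engine's actual code):

```javascript
// Multiply each color channel by alpha/255; alpha itself is unchanged.
function premultiplyAlpha(rgba) {
  const out = new Uint8ClampedArray(rgba.length);
  for (let i = 0; i < rgba.length; i += 4) {
    const a = rgba[i + 3];
    out[i + 0] = Math.round(rgba[i + 0] * a / 255);
    out[i + 1] = Math.round(rgba[i + 1] * a / 255);
    out[i + 2] = Math.round(rgba[i + 2] * a / 255);
    out[i + 3] = a; // alpha channel passes through
  }
  return out;
}
```

The transform is lossy: every color collapses to zero wherever alpha is zero, which is exactly why an app storing non-alpha per-pixel data in that channel cannot recover it after the browser has premultiplied.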


Re: [whatwg] Challenging canvas.supportsContext

2013-06-19 Thread Kenneth Russell
Looking back at the previous discussion:

  http://lists.whatwg.org/pipermail/whatwg-whatwg.org/2012-September/037229.html
(and succeeding emails)
  http://lists.whatwg.org/pipermail/whatwg-whatwg.org/2012-October/037693.html

Accurate feature detection in libraries like Modernizr was mentioned
as a key use case:

  http://lists.whatwg.org/pipermail/whatwg-whatwg.org/2012-September/037249.html

Additionally, though, application developers wanted to be able to ask
questions like is software rendering supported for WebGL:

  http://lists.whatwg.org/pipermail/whatwg-whatwg.org/2012-September/037232.html

I think this second use case is valid. One could imagine a graphing or
charting library which would use low-power, software-rendered WebGL if
available, but otherwise would fall back to using
CanvasRenderingContext2D.

I'd like to see supportsContext remain in the spec.

-Ken
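The fallback-path pattern this enables might look like the following sketch, with the capability query injected as a function (in a browser it would wrap supportsContext) so the selection logic is shown in isolation:

```javascript
// Walk a preference list of (context type, attributes) pairs and return
// the first one the injected `supports` predicate accepts; the caller
// then creates only that one context.
function pickContext(preferences, supports) {
  for (const pref of preferences) {
    if (supports(pref.type, pref.attributes)) return pref;
  }
  return null; // caller falls back to its last-resort path
}
```

For the charting-library example: preferences could be `[{type: 'webgl', attributes: {softwareRendered: true}}, {type: '2d', attributes: {}}]`, with the 2D context as the final fallback.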



On Wed, Jun 19, 2013 at 12:34 PM, Tab Atkins Jr. jackalm...@gmail.com wrote:
 On Wed, Jun 19, 2013 at 11:29 AM, Boris Zbarsky bzbar...@mit.edu wrote:
 On 6/19/13 2:17 PM, Benoit Jacob wrote:
 The closest thing that I could find being discussed, was use cases by JS
 frameworks or libraries that already expose similar feature-detection
 APIs.
 However, that only shifts the question to: what is the reason for them to
 expose such APIs?


 I _think_ the issue is poorly-designed detection APIs that do the detection
 even if the consumer of the framework/library doesn't care about that
 particular feature.

 That means that right now those frameworks are doing a getContext() call but
 then no one cares that they did.


 There is also the argument that supportsContext can be much cheaper than a
 getContext, given that it only has to guarantee that getContext must fail
 if supportsContext returned false. But this argument is overlooking that
 in
 the typical failure case, which is failure due to system/driver
 blacklisting, getContext returns just as fast as supportsContext

 I think the argument here is that the common case for getContext is in fact
 more and more the success case.  So the framework/library is wasting time
 successfully creating a context that no one actually cares about.

 If the above is correct, I agree with Benoit: the right fix is to fix the
 libraries to do the getContext() lazily when someone actually asks whether
 WebGL is enabled.

 If I'm wrong, then I'd like to understand what problem we _are_ trying to
 solve.  That is, what the cases are that want to check that they can create
 a context but not actually create one.

 This is missing the point.  You don't want to wait until it's actually
 time to create the context.  Unless you torture your code flow, by the
 time you're creating a context you should already know that the
 context is supported.  The knowledge of which context to use is most
 useful well before that, when you're first entering the app.

 Plus, it doesn't matter how late you do the detection - if you do a
 straight *detection* at all rather than an initialization (that is, if
 you throw away the context you've just created for testing), you'll
 still incur the start-up costs of spinning up a context.  Doing that
 early or late doesn't matter too much - it's bad either way.

 Like @supports, the supportsContext() method can be easy and reliable
 with a very simple definition for supports - it returns true if
 calling getContext() with the same arguments would return a context
 rather than erroring, and false otherwise.  No capacity for lying
 there without breaking sites.

 ~TJ
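The lazy-detection fix Boris and Benoit describe is straightforward to express; in this sketch the (potentially expensive) probe is injected as a function — in a browser it would be a getContext call — and runs at most once:

```javascript
// Returns a detector that defers the probe until first query and then
// caches the boolean result for all subsequent queries.
function makeLazyDetector(probe) {
  let cached; // undefined until first query
  return function isSupported() {
    if (cached === undefined) cached = !!probe();
    return cached;
  };
}
```

Browser usage would look something like `const hasWebGL = makeLazyDetector(() => document.createElement('canvas').getContext('webgl'));` — the context is only spun up if someone actually asks, which is the crux of the disagreement with an eager supportsContext-style check.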


Re: [whatwg] Challenging canvas.supportsContext

2013-06-19 Thread Kenneth Russell
On Wed, Jun 19, 2013 at 2:20 PM, Brandon Benvie bben...@mozilla.com wrote:
 On 6/19/2013 2:05 PM, James Robinson wrote:

 What would a page using Modernizr (or other library) to feature detect
 WebGL do if the supportsContext('webgl') call succeeds but the later
 getContext('webgl') call fails?


 I don't have an example, I was just explaining how Modernizr is often used.


 I'm also failing to see the utility of the supportsContext() call.  It's
 impossible for a browser to promise that supportsContext('webgl') implies
 that getContext('webgl') will succeed without doing all of the expensive
 work, so any correctly authored page will have to handle a
 getContext('webgl') failure anyway.


 Given this, it would seem supportsContext is completely useless. The whole
 purpose of a feature detection check is to detect if a feature actually
 works or not. Accuracy is more important than cost.

supportsContext() can give a much more accurate answer than
!!window.WebGLRenderingContext. I can only speak for Chromium, but in
that browser, it can take into account factors such as whether the GPU
sub-process was able to start, whether WebGL is blacklisted on the
current card, whether WebGL is disabled on the current domain due to
previous GPU resets, and whether WebGL initialization succeeded on any
other page. All of these checks can be done without the heavyweight
operation of actually creating an OpenGL context.

-Ken


Re: [whatwg] Challenging canvas.supportsContext

2013-06-19 Thread Kenneth Russell
On Wed, Jun 19, 2013 at 3:06 PM, James Robinson jam...@google.com wrote:
 On Wed, Jun 19, 2013 at 3:04 PM, Kenneth Russell k...@google.com wrote:

 On Wed, Jun 19, 2013 at 2:20 PM, Brandon Benvie bben...@mozilla.com
 wrote:
  On 6/19/2013 2:05 PM, James Robinson wrote:
 
  What would a page using Modernizr (or other library) to feature detect
  WebGL do if the supportsContext('webgl') call succeeds but the later
  getContext('webgl') call fails?
 
 
  I don't have an example, I was just explaining how Modernizr is often
  used.
 
 
  I'm also failing to see the utility of the supportsContext() call.
  It's
  impossible for a browser to promise that supportsContext('webgl')
  implies
  that getContext('webgl') will succeed without doing all of the
  expensive
  work, so any correctly authored page will have to handle a
  getContext('webgl') failure anyway.
 
 
  Given this, it would seem supportsContext is completely useless. The
  whole
  purpose of a feature detection check is to detect if a feature actually
  works or not. Accuracy is more important than cost.

 supportsContext() can give a much more accurate answer than
 !!window.WebGLRenderingContext. I can only speak for Chromium, but in
 that browser, it can take into account factors such as whether the GPU
 sub-process was able to start, whether WebGL is blacklisted on the
 current card, whether WebGL is disabled on the current domain due to
 previous GPU resets, and whether WebGL initialization succeeded on any
 other page. All of these checks can be done without the heavyweight
 operation of actually creating an OpenGL context.


 That's true, but the answer still doesn't promise anything about what
 getContext() will do.  It may still return null and code will have to check
 for that.  What's the use case for calling supportsContext() without calling
 getContext()?

Any application which has a complex set of fallback paths. For example,

  - Preference 1: supportsContext('webgl', { softwareRendered: true })
  - Preference 2: supportsContext('2d', { gpuAccelerated: true })
  - Preference 3: supportsContext('webgl', { softwareRendered: false })
  - Fallback: 2D canvas

I agree that ideally, if supportsContext returns true then -- without
any other state changes that might affect supportsContext's result --
getContext should return a valid rendering context. It's simply
impossible to guarantee this correspondence 100% of the time, but if
supportsContext's spec were tightened somehow, and conformance tests
were added which asserted consistent results between supportsContext
and getContext, would that address your concern?

-Ken


Re: [whatwg] Challenging canvas.supportsContext

2013-06-19 Thread Kenneth Russell
On Wed, Jun 19, 2013 at 3:39 PM, James Robinson jam...@google.com wrote:



 On Wed, Jun 19, 2013 at 3:24 PM, Kenneth Russell k...@google.com wrote:

 On Wed, Jun 19, 2013 at 3:06 PM, James Robinson jam...@google.com wrote:
  On Wed, Jun 19, 2013 at 3:04 PM, Kenneth Russell k...@google.com wrote:
 
  supportsContext() can give a much more accurate answer than
  !!window.WebGLRenderingContext. I can only speak for Chromium, but in
  that browser, it can take into account factors such as whether the GPU
  sub-process was able to start, whether WebGL is blacklisted on the
  current card, whether WebGL is disabled on the current domain due to
  previous GPU resets, and whether WebGL initialization succeeded on any
  other page. All of these checks can be done without the heavyweight
  operation of actually creating an OpenGL context.
 
 
  That's true, but the answer still doesn't promise anything about what
  getContext() will do.  It may still return null and code will have to
  check
  for that.  What's the use case for calling supportsContext() without
  calling
  getContext()?

 Any application which has a complex set of fallback paths. For example,

   - Preference 1: supportsContext('webgl', { softwareRendered: true })
   - Preference 2: supportsContext('2d', { gpuAccelerated: true })
   - Preference 3: supportsContext('webgl', { softwareRendered: false })
   - Fallback: 2D canvas


 I'm assuming you have (1) and (3) flipped here and both supportsContext()
 and getContext() support additional attributes to dictate whether a
 software-provided context can be supplied.  In that case, in order to write
 correct code I'd still have to attempt to fetch the contexts before using
 them, i.e.:

 var ctx = canvas.getContext('webgl', { 'allowSoftware': false});
 if (ctx) {
   doPreference1(ctx);
   return;
 }
 ctx = canvas.getContext('2d', {'allowSoftware': false});
 if (ctx) {
   doPreference2(ctx);
 // etc

 how could I simplify this code using supportsContext() ?



 I agree that ideally, if supportsContext returns true then -- without
 any other state changes that might affect supportsContext's result --
 getContext should return a valid rendering context.


 It seems overwhelmingly likely that one of the state changes that might
 affect the result will be attempting to instantiate a real context.

In my experience, in Chromium, creation of the underlying OpenGL
context for a WebGLRenderingContext almost never fails in isolation.
Instead, more general failures happen such as the GPU process failing
to boot, or creation of all OpenGL contexts (including the
compositor's) failing. These failures would be detected before the app
calls supportsContext('webgl'). For this reason I believe
supportsContext's answer can be highly accurate in almost every
situation.


 It's simply
 impossible to guarantee this correspondence 100% of the time, but if
 supportsContext's spec were tightened somehow, and conformance tests
 were added which asserted consistent results between supportsContext
 and getContext, would that address your concern?


 I don't see how supportsContext() could be as accurate as getContext()
 without doing all of the work getContext() does.  If it's not 100% accurate,
 when is it useful?

This is a specious question. My point is that the answer can be
accurate enough to be useful, and I'm personally willing to sign up to
make the implementation follow a stricter spec.

-Ken


 - James




 -Ken




Re: [whatwg] Proposal: ImageData constructor or factory method with preexisting data

2013-03-12 Thread Kenneth Russell
It should simply reference the Uint8ClampedArray, not copy it or do
anything else esoteric. The only way to display an ImageData in the 2D
canvas context is via the putImageData API. I am not proposing
changing those semantics.

-Ken



On Mon, Mar 11, 2013 at 5:00 PM, Rik Cabanier caban...@gmail.com wrote:
 Do you expect that createImageData creates an internal copy of the
 Uint8ClampedArray object or is it live?


 On Mon, Mar 11, 2013 at 4:28 PM, Kenneth Russell k...@google.com wrote:

 It would be useful to be able to create an ImageData [1] object with
 preexisting data. The main use case is to display arbitrary data in
 the 2D canvas context with no data copies.

 Proposed IDL:

 [NoInterfaceObject]
 interface ImageDataFactories {
   ImageData createImageData(Uint8ClampedArray data, double sw, double sh);
 };
 Window implements ImageDataFactories;
 WorkerGlobalScope implements ImageDataFactories;

 createImageData would throw an exception if the length of the
 Uint8ClampedArray was not equal to 4 * floor(sw) * floor(sh), or at
 least, if the length of the array was less than this value. (Similar
 wording would be used to that of CanvasRenderingContext2D's
 createImageData.)

 I don't think it is necessary to provide a createImageDataHD in this
 interface. The caller will know the devicePixelRatio and determine
 whether to generate high-DPI data.

 [1]
 http://www.whatwg.org/specs/web-apps/current-work/multipage/the-canvas-element.html#imagedata

 Comments?

 Thanks,

 -Ken




Re: [whatwg] Proposal: ImageData constructor or factory method with preexisting data

2013-03-12 Thread Kenneth Russell
I much prefer your suggestion to just add a constructor to ImageData.
I was not sure whether that style was preferred nowadays. ImageData is
already exposed in the global namespace, so making it a callable
constructor function seems like an easy change.

As mentioned in another reply, the intent here is to reference the
Uint8ClampedArray, not make a copy.

-Ken


On Mon, Mar 11, 2013 at 7:03 PM, Boris Zbarsky bzbar...@mit.edu wrote:
 On 3/11/13 7:28 PM, Kenneth Russell wrote:

 Proposed IDL:

 [NoInterfaceObject]
 interface ImageDataFactories {
ImageData createImageData(Uint8ClampedArray data, double sw, double
 sh);
 };
 Window implements ImageDataFactories;
 WorkerGlobalScope implements ImageDataFactories;


 How about just:

   [Constructor(Uint8ClampedArray data, double sw, double sh)]
   interface ImageData {
 /* Whatever is currently there */
   };

 and then you create one with:

   new ImageData(someData, someWidth, someHeight);

 Other than needing to specify whether the array is copied or held on to by
 reference, and specifying that this interface should be exposed in workers,
 this seems fine to me.

 -Boris


Re: [whatwg] Proposal: ImageData constructor or factory method with preexisting data

2013-03-12 Thread Kenneth Russell
On Tue, Mar 12, 2013 at 11:15 AM, Rik Cabanier caban...@gmail.com wrote:
 sounds good!
 I think this is a convenient and useful addition.

Great.

 do you want to keep doubles to define the dimensions instead of integers? If
 so, the size should probably be 4 * ceil(sw) * ceil(sh)

I would prefer to use integers, and only used doubles to be consistent
with the other APIs like getImageData and createImageData. In this
case it would make more sense to use integers, since the width and
height are simply being used to interpret preexisting data in the
Uint8ClampedArray.

-Ken


 On Tue, Mar 12, 2013 at 10:50 AM, Kenneth Russell k...@google.com wrote:

 It should simply reference the Uint8ClampedArray, not copy it or do
 anything else esoteric. The only way to display an ImageData in the 2D
 canvas context is via the putImageData API. I am not proposing
 changing those semantics.

 -Ken



 On Mon, Mar 11, 2013 at 5:00 PM, Rik Cabanier caban...@gmail.com wrote:
  Do you expect that createImageData creates an internal copy of the
  Uint8ClampedArray object or is it live?
 
 
  On Mon, Mar 11, 2013 at 4:28 PM, Kenneth Russell k...@google.com wrote:
 
  It would be useful to be able to create an ImageData [1] object with
  preexisting data. The main use case is to display arbitrary data in
  the 2D canvas context with no data copies.
 
  Proposed IDL:
 
  [NoInterfaceObject]
  interface ImageDataFactories {
ImageData createImageData(Uint8ClampedArray data, double sw, double
  sh);
  };
  Window implements ImageDataFactories;
  WorkerGlobalScope implements ImageDataFactories;
 
  createImageData would throw an exception if the length of the
  Uint8ClampedArray was not equal to 4 * floor(sw) * floor(sh), or at
  least, if the length of the array was less than this value. (Similar
  wording would be used to that of CanvasRenderingContext2D's
  createImageData.)
 
  I don't think it is necessary to provide a createImageDataHD in this
  interface. The caller will know the devicePixelRatio and determine
  whether to generate high-DPI data.
 
  [1]
 
  http://www.whatwg.org/specs/web-apps/current-work/multipage/the-canvas-element.html#imagedata
 
  Comments?
 
  Thanks,
 
  -Ken
 
 




Re: [whatwg] Proposal: ImageData constructor or factory method with preexisting data

2013-03-12 Thread Kenneth Russell
On Tue, Mar 12, 2013 at 3:49 PM, Rik Cabanier caban...@gmail.com wrote:


 On Tue, Mar 12, 2013 at 3:03 PM, Kenneth Russell k...@google.com wrote:

 On Tue, Mar 12, 2013 at 2:04 PM, Rik Cabanier caban...@gmail.com wrote:
 
 
  On Tue, Mar 12, 2013 at 11:40 AM, Kenneth Russell k...@google.com
  wrote:
 
  On Tue, Mar 12, 2013 at 11:15 AM, Rik Cabanier caban...@gmail.com
  wrote:
   sounds good!
   I think this is a convenient and useful addition.
 
  Great.
 
   do you want to keep doubles to define the dimensions instead of
   integers? If
   so, the size should probably be 4 * ceil(sw) * ceil(sh)
 
  I would prefer to use integers, and only used doubles to be consistent
  with the other APIs like getImageData and createImageData. In this
  case it would make more sense to use integers, since the width and
  height are simply being used to interpret preexisting data in the
  Uint8ClampedArray.
 
 
  The current canvas spec doesn't specifically state what happens with
  partial
  pixels. What happens today?
  (Also is there a definition somewhere that states when a pixel is
  considered
  filled?)

 Safari, Firefox and Chrome all round the double arguments to
 putImageData to integers using the truncate rounding mode and then
 draw the source ImageData pixel-for-pixel. For example, passing 64.5
 or 64.99 for the dx or dy arguments is equivalent to passing 64.
 Here's a test case.

 <canvas id="canvas" width="256" height="256"></canvas>
 <script>
 var canvas = document.getElementById("canvas");
 var ctx = canvas.getContext("2d");
 var width = canvas.width;
 var height = canvas.height;
 ctx.fillRect(0, 0, width, height);
 var imageData = ctx.createImageData(width / 2, height / 2);
 for (var ii = 0; ii < imageData.data.length; ii += 4) {
   imageData.data[ii + 0] = 0;
   imageData.data[ii + 1] = 255;
   imageData.data[ii + 2] = 0;
   imageData.data[ii + 3] = 255;
 }
 // Try passing 64.5, 64.99, or 65 for one or both of these arguments
 // and see the results
 ctx.putImageData(imageData, 64, 64);
 </script>

 In other words, the source ImageData would not be rendered into the
 canvas at a half-pixel offset if ctx.putImageData(imageData, 64.5,
 64.5) were called.


 Thanks for investigating this. The fact that 'truncate' is used should
 probably go in the spec.
 Maybe we should change the IDL to integer.

I think that would be a good idea. I believe it would be backward
compatible and leave the definition of double-to-long conversion to
the Web IDL and ECMAScript specs.


  I don't think it is necessary to provide a createImageDataHD in this
  interface. The caller will know the devicePixelRatio and determine
  whether to generate high-DPI data.
 
  That probably won't work since it results in code that executes
  differently
  on devices that are HD.

 I think it works. The application will call the new ImageData
 constructor and pass it to either putImageData or putImageDataHD.
 These interpret the incoming ImageData differently depending on the
 devicePixelRatio.

 In contrast, CanvasRenderingContext2D's existing createImageDataHD and
 getImageDataHD methods will create an ImageData that may have a
 different width and height from those passed in. The reason for this
 is that these methods are referring to the canvas's backing store. For
 this new constructor which simply wraps existing pixel data, the
 application knows exactly how many pixels are contained in the array,
 so it makes the most sense to take the incoming width and height
 verbatim. I don't see any advantage to having an alternate high-DPI
 constructor which would multiply the width and height by the
 devicePixelRatio behind the scenes.


 Your proposal is:

 createImageData would throw an exception if the length of the
 Uint8ClampedArray was not equal to 4 * floor(sw) * floor(sh), or at
 least, if the length of the array was less than this value.

Yes.

 So, if you create an imageData that is going to be used in putImageDataHD,
 the bounds checking happens when you pass it into putImageDataHD?
 It seems the imageData object should know if it was meant for an HD call.
 There is no real reason why you could use it in both HD and non-HD APIs.

Glenn already answered this, but you can already call getImageData and
pass the result to putImageDataHD (and vice versa --
getImageDataHD/putImageData). The only difference between putImageData
and putImageDataHD is that on high-DPI displays putImageDataHD will
take the same image data and display it in a smaller region. All of
the size checking would be performed in the new ImageData constructor
we're discussing. I don't think ImageData itself should know anything
about high DPI because it's designed for pixel-by-pixel manipulation.

-Ken



 
 
   On Tue, Mar 12, 2013 at 10:50 AM, Kenneth Russell k...@google.com
   wrote:
  
   It should simply reference the Uint8ClampedArray, not copy it or do
   anything else esoteric. The only way to display an ImageData in the
   2D
   canvas context is via the putImageData API. I am

Re: [whatwg] Proposal: ImageData constructor or factory method with preexisting data

2013-03-12 Thread Kenneth Russell
On Tue, Mar 12, 2013 at 4:54 PM, Rik Cabanier caban...@gmail.com wrote:
 On Tue, Mar 12, 2013 at 4:16 PM, Glenn Maynard gl...@zewt.org wrote:

 On Tue, Mar 12, 2013 at 12:14 AM, Boris Zbarsky bzbar...@mit.edu wrote:

  CSE can get rid of the redundant .data gets.  Similarly, .data gets can
 be
  loop-hoisted in many cases.
 

 Doing COW based on page-faults is nicer anyway, but I don't know about the
 data structures of JS engines to know whether this is feasible.  (For
 example, if an object in JS is preceded by a header that gets written by
 the engine now and then, it'll probably lie on the same page as the data,
 which would trigger an expensive fault each time.)

 I suppose padding the backing store so it doesn't share pages with anything
 else might be reasonable here: up to about 8k of waste on a system with 4kb
 pages.  The cost of marking the pages read-only would only have to be paid
 when the copy-on-write action (eg. a call to putImageData) is actually
 made.  Very small buffers could simply disable copy-on-write and always
 perform a copy, where the waste for padding is more significant and the
 benefits of avoiding a copy are smaller.

 (For what it's worth, marking a 128 MB buffer read-only in Linux with
 mprotect takes on the order of 3 microseconds on my typical desktop-class
 system.  I don't know if Windows's VirtualProtect is slower.)

 On Tue, Mar 12, 2013 at 4:04 PM, Rik Cabanier caban...@gmail.com wrote:

   I don't think it is necessary to provide a createImageDataHD in this
 
   interface. The caller will know the devicePixelRatio and determine
   whether to generate high-DPI data.
 
  That probably won't work since it results in code that executes
 differently
  on devices that are HD.
 

 The difference between getImageData(HD) and putImageData(HD) is in the
 canvas operation, not the ImageData: it determines how pixels are scaled
 when being read out of and written into the canvas backing store.  It
 doesn't apply to this API; ImageData objects don't know anything beyond
 their pixel data and dimensions.

 (Code executing differently on high-DPI devices is a bridge we've already
 crossed.  getImageData scales pixels down from the device's pixel ratio;
 getImageDataHD doesn't, copying backing store pixels one to one.)

 There is no real reason why you could use it in both HD and non-HD APIs.
 

 Rather, there's no reason you couldn't.  You can definitely create an
 ImageData with getImageData and then pass it to putImageDataHD (which would
 cause the image to be scaled on devices with a pixel ratio other than 1, of
 course).


 It feels like something is missing. How does putImageDataHD know that the
 bitmap should be scaled? Width and height refer to the pixel dimensions and
 not the 'px' unit

putImageData measures the affected region of the scratch bitmap in CSS
pixels and putImageDataHD measures it in device pixels.
http://www.whatwg.org/specs/web-apps/current-work/multipage/the-canvas-element.html#dom-context-2d-putimagedata
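Put as arithmetic (illustrative), the only difference is a devicePixelRatio division when computing the painted region's size in CSS pixels:

```javascript
// For an ImageData of a given width in pixels, putImageData paints that
// many CSS pixels, while putImageDataHD treats them as device pixels,
// i.e. width / devicePixelRatio CSS pixels.
function paintedCssWidth(imageDataWidth, devicePixelRatio, hd) {
  return hd ? imageDataWidth / devicePixelRatio : imageDataWidth;
}
```

So a 256-pixel-wide ImageData put with putImageDataHD on a devicePixelRatio-2 display covers a 128-CSS-pixel region, one backing-store pixel per data pixel.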


[whatwg] Proposal: ImageData constructor or factory method with preexisting data

2013-03-11 Thread Kenneth Russell
It would be useful to be able to create an ImageData [1] object with
preexisting data. The main use case is to display arbitrary data in
the 2D canvas context with no data copies.

Proposed IDL:

[NoInterfaceObject]
interface ImageDataFactories {
  ImageData createImageData(Uint8ClampedArray data, double sw, double sh);
};
Window implements ImageDataFactories;
WorkerGlobalScope implements ImageDataFactories;

createImageData would throw an exception if the length of the
Uint8ClampedArray was not equal to 4 * floor(sw) * floor(sh), or at
least, if the length of the array was less than this value. (Similar
wording would be used to that of CanvasRenderingContext2D's
createImageData.)

I don't think it is necessary to provide a createImageDataHD in this
interface. The caller will know the devicePixelRatio and determine
whether to generate high-DPI data.

[1] 
http://www.whatwg.org/specs/web-apps/current-work/multipage/the-canvas-element.html#imagedata

Comments?

Thanks,

-Ken
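The proposed length check is easy to state in code; this sketch uses the stricter equality variant, and the helper name is illustrative:

```javascript
// The array must hold exactly 4 bytes (RGBA) per pixel for
// floor(sw) x floor(sh) pixels, per the proposal above.
function checkImageDataLength(data, sw, sh) {
  const expected = 4 * Math.floor(sw) * Math.floor(sh);
  if (data.length !== expected) {
    throw new RangeError("expected " + expected + " bytes, got " + data.length);
  }
}
```

Note how the floor() makes fractional dimensions harmless: 2.9 × 2.9 still requires exactly 4 × 2 × 2 = 16 bytes.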


Re: [whatwg] Fetch: crossorigin=anonymous and XMLHttpRequest

2013-02-26 Thread Kenneth Russell
Are you referring to the crossOrigin attribute on HTMLImageElement and
HTMLMediaElement? Those are implemented in WebKit. It should be fine
to change crossOrigin=anonymous requests to satisfy (a) and (b). Any
server that satisfies these anonymous requests in a way compatible
with UAs' caching will ignore the incoming origin and the referrer.

-Ken


On Tue, Feb 26, 2013 at 2:52 PM, Adam Barth w...@adambarth.com wrote:
 WebKit hasn't implemented either, so we don't have any implementation
 constraints in this area.

 Adam


 On Tue, Feb 26, 2013 at 3:35 AM, Anne van Kesteren ann...@annevk.nl wrote:
 There's an unfortunate mismatch currently. new
 XMLHttpRequest({anon:true}) will generate a request where a) origin is
 a globally unique identifier b) referrer source is the URL
 about:blank, and c) credentials are omitted. From those
 crossorigin=anonymous only does c. Can we still change
 crossorigin=anonymous to match the anonymous flag semantics of
 XMLHttpRequest or is it too late?


 --
 http://annevankesteren.nl/


[whatwg] Reporting errors during Web Worker startup

2013-01-09 Thread Kenneth Russell
http://www.whatwg.org/specs/web-apps/current-work/multipage/workers.html#creating-workers
doesn't seem to define what happens if there aren't enough resources
to create a separate parallel execution environment.

Would it be legal for a UA to consider this as violating a policy
decision and throw SecurityError? Or  is that step intended to reflect
a static decision, such as whether the UA allows workers to run at
all?

If this behavior isn't specified, could some graceful failure mode be
specified? Currently some UAs terminate the execution of the page
attempting to start the worker, which is obviously undesirable.

Thanks,

-Ken


Re: [whatwg] Endianness of typed arrays

2012-03-28 Thread Kenneth Russell
On Wed, Mar 28, 2012 at 3:46 AM, Boris Zbarsky bzbar...@mit.edu wrote:
 On 3/28/12 3:14 AM, Mark Callow wrote:

 vertexAttribPointer lets you specifiy to WebGL the layout and type of
 the data in the buffer object.

 Sure.  That works for the GPU, but it doesn't allow for the sort of
 on-the-fly endianness conversion that would be needed to make webgl still
 work on big-endian platforms if the JS-visible typed arrays were always
 little-endian.


 The API follows OpenGL {,ES} for familiarity and reflects its
 heritage of a C API avoiding use of structures.

 Yep.  I know the history.  I think this was a mistake, if we care about the
 web ever being usable on big-endian hardware.  Whether we do is a separate
 question.

 But it works.

 Sort of, but maybe not; see below.


 OpenGL {,ES} developers typically load data from a serialized form and
 perform endianness conversion during deserialization. The serialized
 form is what would be loaded into an ArrayBuffer via XHR. It is then
 deserialized into 1 or more additional ArrayBuffers.


 The point is that developers are:

 1)  Loading data in serialized forms that has nothing to do with WebGL
    via XHR and then reading it using typed array views on the
    resulting array buffer.
 2)  Not doing endianness conversions, either for the use case in point
    1 or indeed for WebGL.

 Again, I think we all agree how this would work if everyone using the typed
 array APIs were perfect in every way and had infinite resources. But they're
 not and they don't... The question is where we go from here.

 In practice, it sounds like a UA on a big-endian system has a few options:

 A)  Native-endianness typed arrays.  Breaks anyone loading data via XHR
 arraybuffer responses (whether for WebGL or not) and not doing manual
 endianness conversions.

 B)  Little-endian typed arrays.  Breaks WebGL, unless developers switch to a
 more struct-based API.  Makes the non-WebGL cases of XHR arraybuffer
 responses work.

 C)  Try to guess based on where the array buffer came from and have
 different behavior for different array buffers.  With enough luck (or good
 enough heuristics), would make at least some WebGL work, while also making
 non-WebGL things loaded over XHR work.

 In practice, if forced to implement a UA on a big-endian system today, I
 would likely pick option (C). I wouldn't classify that as a victory for
 standardization, but I'm also not sure what we can do at this point to fix
 the brokenness.

The top priority should be to implement DataView universally. DataView
is specifically designed for correct, portable manipulation of binary
data coming from or going to files or the network. Fortunately,
DataView is supported in nearly every actively developed UA; once
https://bugzilla.mozilla.org/show_bug.cgi?id=575688 is fixed, it
should be present in every major UA -- even the forthcoming IE 10! See
http://blogs.msdn.com/b/ie/archive/2011/12/01/working-with-binary-data-using-typed-arrays.aspx
.

Once DataView is available everywhere then the top priority should be
to write educational materials regarding binary I/O. It should be
possible to educate the web development community about correct
practices with only a few high profile articles.

Changing the endianness of Uint16Array and the other multi-byte typed
arrays is not a feasible solution. Existing WebGL programs already
work correctly on big-endian architectures specifically because the
typed array views use the host's endianness. If the typed array views
were changed to be explicitly little-endian, it would be a requirement
to introduce new big-endian views, and all applications using typed
arrays would have to be rewritten, not just those which use WebGL.
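The host-endianness semantics described above are directly observable from script, which is why they cannot be changed without breaking deployed code. A small probe (an illustrative sketch, not part of any spec) makes the dependence visible:

```javascript
// Host byte order is observable by aliasing one buffer with two views.
// On a little-endian machine the low-order byte comes first in memory.
const probe = new ArrayBuffer(2);
new Uint16Array(probe)[0] = 0x0102;
const bytes = new Uint8Array(probe);
const littleEndian = bytes[0] === 0x02;
```

Any application that writes multi-byte values through one view and reads the underlying bytes through another — as WebGL vertex-assembly code routinely does — is relying on this behavior.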

Finally, to reiterate one point: the typed array design was informed
by prior experience with the design and performance characteristics of
a similar API, specifically Java's New I/O (NIO) Buffer classes. NIO
merged the two distinct use cases of file and network I/O, and
interaction with graphics and audio devices, into one API. The result
was increased polymorphism at call sites, which defeated the Java VM's
optimizing compiler and led to 10x slowdowns in many common
situations. It was so difficult to fix these performance pitfalls that
they remained for many years, and I don't know how robust the
solutions are in current Java VMs. To avoid these issues the typed
array spec consciously treats these use cases separately. It is
possible to make incorrect assumptions leading to non-portable code,
but at some level this is possible with nearly any API that extends
beyond a small, closed world. I believe the focus should be on
educating developers about correct use of the APIs, developing
supporting libraries to ease development, and advancing the ECMAScript
language with constructs like struct types
(http://wiki.ecmascript.org/doku.php?id=harmony:binary_data).

-Ken


Re: [whatwg] Endianness of typed arrays

2012-03-28 Thread Kenneth Russell
On Wed, Mar 28, 2012 at 12:34 PM, Benoit Jacob bja...@mozilla.com wrote:
 Before I joined this mailing list, Boris Zbarsky wrote:
 C)  Try to guess based on where the array buffer came from and have
 different behavior for different array buffers.  With enough luck (or
 good enough heuristics), would make at least some WebGL work, while also
 making non-WebGL things loaded over XHR work.

 FWIW, here is a way to do this that will always work and won't rely on 
 luck. The key idea is that by the time one draws stuff, all the information 
 about how vertex attributes use buffer data must be known.

 1. In webgl.bufferData implementation, don't call glBufferData, instead just 
 cache the buffer data.

 2. In webgl.vertexAttribPointer, record the attributes structure (their 
 types, how they use buffer data). Do not convert/upload buffers yet.

 3. In the first WebGL draw call (like webgl.drawArrays) since the last 
 bufferData/vertexAttribPointer call, do the conversion of buffers and the 
 glBufferData calls. Use some heuristics to drop the buffer data cache, as 
 most WebGL apps will not have a use for it anymore.

It would never be possible to drop the CPU side buffer data cache. A
subsequent draw call may set up the vertex attribute pointers
differently for the same buffer object, which would necessitate going
back through the buffer's data and generating new, appropriately
byte-swapped data for the GPU.

 In practice, if forced to implement a UA on a big-endian system today, I
 would likely pick option (C). I wouldn't classify that as a victory
 for standardization, but I'm also not sure what we can do at this point
 to fix the brokenness.

 I agree that seems to be the only way to support universal webgl content on 
 big-endian UAs. It's not great due to the memory overhead, but at least it 
 shouldn't incur a significant performance overhead, and it typically only 
 incurs a temporary memory overhead as we should be able to drop the buffer 
 data caches quickly in most cases. Also, buffers are typically 10x smaller 
 than textures, so the memory overhead would typically be ~ 10% in corner 
 cases where we couldn't drop the caches.

Our emails certainly crossed, but please refer to my other email.
WebGL applications that assemble vertex data for the GPU using typed
arrays will already work correctly on big-endian architectures. This
was a key consideration when these APIs were being designed. The
problems occur when binary data is loaded via XHR and uploaded to
WebGL directly. DataView is supposed to be used in such cases to load
the binary data, because the endianness of the file format must
necessarily be known.

The possibility of forcing little-endian semantics was considered when
typed arrays were originally being designed. I don't have absolute
performance numbers to quote you, but based on previous experience
with Java's NIO Buffer classes, I am positive that the performance
impact for WebGL applications on big-endian architectures would be
very large. It would prevent applications which manipulate vertices in
JavaScript from running acceptably on big-endian machines.

-Ken

 In conclusion: WebGL is not the worst here; there is a pretty reasonable 
 avenue for big-endian UAs to implement it in a way that allows running the 
 same unmodified content as little-endian UAs.

 Benoit


Re: [whatwg] Endianness of typed arrays

2012-03-28 Thread Kenneth Russell
On Wed, Mar 28, 2012 at 2:27 PM, Brandon Jones toj...@gmail.com wrote:
 I was initially on the "just make it little endian and don't make me worry
 about it" side of the fence, but on further careful consideration I've
 changed my mind: I think having typed arrays use the platform endianness is
 the right call.

 As Ken pointed out, if you are populating your arrays from javascript or a
 JSON file or something similar this is a non-issue. The problem only occurs
 when you are attempting to load a binary blob directly into a typed array.
 Unless that blob is entirely homogenous (ie: all Float32's or all Int16's,
 etc) it's impossible to trivially swap endianness without being provided a
 detailed breakdown of the data patterns contained within the blob.

 Consider this example (using WebGL, but the same could apply elsewhere): I
 download a binary file containing tightly packed interleaved vertices that I
 want to pass directly to a WebGL buffer. The data contains little endian
 vertex positions, texture coordinates, texture ID's and a 32 bit color per
 vertex, so the data looks something like this:

 struct {
     Float32[3] pos,
     Float32[4] uv,
     Uint16 textureId,
     Uint32 color
 };

 I will receive this data from XHR as an opaque TypedArray, and if the
 platform is little endian I can pass it directly to the GPU. But on big
 endian systems, a translation needs to be done somewhere:

 xhr.responseType = "arraybuffer";
 xhr.onload = function() {
     var vertBuffer = gl.createBuffer();
     gl.bindBuffer(gl.ARRAY_BUFFER, vertBuffer);

     // If bigEndian then... magic!

     gl.bufferData(gl.ARRAY_BUFFER, this.response, gl.STATIC_DRAW);
 }

 So the question is: What exactly are we expecting that magic to be? We
 can't just swizzle every 4 bytes. Either the graphics driver must do the
 endian swap as it processes the buffer, which is possible but entirely out
 of the browsers control, or we would have to provide data packing
 information to the browser so that it could do the appropriate swap for us.
 And if I'm going to have to build up a data definition and pass that through
 to the browser anyway... well I've just destroyed the whole "don't make me
 care about endianness" ideal, haven't I? I might as well just do the swap in
 my own code via a DataView, or better yet cache a big endian version of the
 same file on the server side if I'm worried about performance.

I would suggest that you pass down the schema of the data to the
client application along with the raw binary file, and always iterate
down it with DataView, reading each individual value and storing it
into one of multiple typed array views of a new ArrayBuffer. Then
upload the new ArrayBuffer to WebGL. This way, if you get the code
working on one platform, you are guaranteed that it will work on all
platforms.
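For Brandon's interleaved format, the suggested approach can be sketched as follows. This is an illustrative sketch, not production code: each field is read with DataView (the source is little-endian by contract), then rewritten into a fresh buffer in the host's byte order, which is what the driver expects. Note that because the 34-byte packed stride leaves the Uint32 misaligned, strided typed-array views can't be used directly; a second DataView with the detected host order is used for writing instead.

```javascript
// Layout (tightly packed, 34 bytes per vertex):
//   Float32[3] pos | Float32[4] uv | Uint16 textureId | Uint32 color
const VERTEX_SIZE = 34;

function hostIsLittleEndian() {
  const probe = new ArrayBuffer(2);
  new Uint16Array(probe)[0] = 1;
  return new Uint8Array(probe)[0] === 1;
}

function toHostOrder(srcBuffer) {
  const src = new DataView(srcBuffer);
  const dst = new DataView(new ArrayBuffer(srcBuffer.byteLength));
  const le = hostIsLittleEndian();
  const count = srcBuffer.byteLength / VERTEX_SIZE;
  for (let i = 0; i < count; i++) {
    const o = i * VERTEX_SIZE;
    for (let f = 0; f < 7; f++) {   // 3 position + 4 uv floats
      dst.setFloat32(o + 4 * f, src.getFloat32(o + 4 * f, true), le);
    }
    dst.setUint16(o + 28, src.getUint16(o + 28, true), le);
    dst.setUint32(o + 30, src.getUint32(o + 30, true), le);
  }
  return dst.buffer;   // ready to hand to gl.bufferData(...)
}
```

On a little-endian host this degenerates to a byte-for-byte copy; on a big-endian host each field is swapped individually, which is why the schema must be known.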

As one simple concrete example, please look at
http://code.google.com/p/webglsamples/source/browse/hdr/hdr.js#235 .
This demo downloads high dynamic range textures as binary files
containing floating-point values. The data is copied from the XHR's
ArrayBuffer using a DataView, knowing that the source data is in
little endian format, and stored into a Float32Array for upload to
WebGL. This code works identically on big-endian and little-endian
architectures.

 So yeah, it sucks that we have to plan for devices that are practically
 non-existant and difficult to test for, but I don't really see a nicer
 (practical) solution.

 That said, one thing that DataView doesn't handle too nicely right now is
 arrays. You're basically stuck for-looping over your data, even if it's all
 the same type. I would fully support having new DataView methods available
 like:

 Int32Array getInt32Array(unsigned long byteOffset, unsigned long elements,
 optional boolean littleEndian);

 Which would be a nice, sensible optimization since I'm pretty sure the
 browser backend could do that faster than a JS loop.
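The proposed batch accessor — note that getInt32Array is a suggestion in this thread, not an actual DataView method — can be approximated in userland today with exactly the loop being discussed:

```javascript
// Userland sketch of the proposed DataView batch reader: read
// `elements` consecutive Int32 values starting at byteOffset.
function getInt32Array(view, byteOffset, elements, littleEndian) {
  const out = new Int32Array(elements);
  for (let i = 0; i < elements; i++) {
    out[i] = view.getInt32(byteOffset + 4 * i, littleEndian);
  }
  return out;
}
```

A native implementation could validate bounds once and bulk-copy (with a single conditional swap pass), which is the speedup a built-in method would offer over this per-element loop.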

Definitely agree that adding array readers and writers to DataView is
worth considering; it's even mentioned in the typed array spec at
http://www.khronos.org/registry/typedarray/specs/latest/#11 . I would
however like to work on optimizing DataView's single-element accessors
first so that we could do a good measurement of the potential speedup.
Right now DataView is completely unoptimized in WebKit's
implementation, but the typed array views have had the benefit of
months of optimization work in both the JavaScriptCore and V8 engines.

-Ken

 --Brandon

 On Wed, Mar 28, 2012 at 1:39 PM, Kenneth Russell k...@google.com wrote:

 On Wed, Mar 28, 2012 at 12:34 PM, Benoit Jacob bja...@mozilla.com wrote:
  Before I joined this mailing list, Boris Zbarsky wrote:
  C)  Try to guess based on where the array buffer came from and have
  different behavior for different array buffers.  With enough luck (or
  good enough heuristics), would make at least some WebGL work, while
  also
  making non-WebGL things loaded over XHR work.
 
  FWIW, here is a way

Re: [whatwg] API for encoding/decoding ArrayBuffers into text

2012-03-27 Thread Kenneth Russell
On Mon, Mar 26, 2012 at 10:28 PM, Jonas Sicking jo...@sicking.cc wrote:
 On Mon, Mar 26, 2012 at 6:11 PM, Kenneth Russell k...@google.com wrote:
 On Mon, Mar 26, 2012 at 5:33 PM, Jonas Sicking jo...@sicking.cc wrote:
 On Mon, Mar 26, 2012 at 4:40 PM, Joshua Bell jsb...@chromium.org wrote:
 * We lost the ability to decode from an ArrayBuffer and see how many
 bytes were consumed before a null-terminator was hit. One not terribly
 elegant solution would be to add a TextDecoder.decodeWithLength method
 which returns a DOMString+length tuple.

 Agreed, but of course see above - there was consensus earlier in the thread
 that searching for null terminators should be done outside the API,
 therefore the caller will have the length handy already. Yes, this would be
 a big flaw since decoding a tightly packed data structure (e.g. array of
 null terminated strings w/o length) would be impossible with just the
 nullTerminator flag.

 Requiring callers to find the null character first, and then use that
 will require one additional pass over the encoded binary data though.
 Also, if we put the API for finding the null character on the Decoder
 object it doesn't seem like we're creating an API which is easier to
 use, just one that has moved some of the logic from the API to every
 caller.

 Though I guess the best solution would be to add methods to DataView
 which allows consuming an ArrayBuffer up to a null terminated point
 and returns the decoded string. Potentially such a method could take a
 Decoder object as argument.

 The rationale for specifying the string encoding and decoding
 functionality outside the typed array specification is to keep the
 typed array spec small and easily implementable. The indexed property
 getters and setters on the typed array views, and methods on DataView,
 are designed to be implementable with a small amount of assembly code
 in JavaScript engines. I'd strongly prefer to continue to design the
 encoding/decoding functionality separately from the typed array views.

 Is there a reason you couldn't keep the current set of functions on
 DataView implemented using a small amount of assembly code, and let
 the new functions fall back to slower C++ functions?

That's possible.

Another motivation for keeping encoding/decoding functionality
separate is that it is likely that it will require a lot of spec text,
which would dramatically increase the size of the typed array spec.
Perhaps once all of the details have been hammered out on this thread
it will be more obvious whether these methods would be much clearer if
added directly to DataView.

A couple of comments on the current StringEncoding proposal:

  - I think it should reference DataView directly rather than
ArrayBufferView. The typed array spec was specifically designed with
two use cases in mind: in-memory assembly of data to be sent to the
graphics card or audio device, where the byte order must be that of
the host architecture; and assembly of data for network transmission,
where the byte order needs to be explicit. DataView covers the latter
case.

  - It would be preferable if the encoding API had a way to avoid
memory allocation, for example to encode into a passed-in DataView.

-Ken


Re: [whatwg] API for encoding/decoding ArrayBuffers into text

2012-03-27 Thread Kenneth Russell
On Tue, Mar 27, 2012 at 6:44 PM, Glenn Maynard gl...@zewt.org wrote:
 On Tue, Mar 27, 2012 at 7:12 PM, Kenneth Russell k...@google.com wrote:

   - I think it should reference DataView directly rather than
 ArrayBufferView. The typed array spec was specifically designed with
 two use cases in mind: in-memory assembly of data to be sent to the
 graphics card or audio device, where the byte order must be that of
 the host architecture;


 This is wrong, broken, won't be implemented this way by any production
 browser, isn't how it's used in practice, and needs to be fixed in the
 spec.  It violates the most basic web API requirement: interoperability.
 Please see earlier in the thread; the views affected by endianness need to
 be specced as little endian.  That's what everyone is going to implement,
 and what everyone's pages are going to depend on, so it's what the spec
 needs to say.  Separate types should be added for big-endian (eg.
 Int16BEArray).

Thanks for your input.

The design of the typed array classes was informed by requirements
about how the OpenGL, and therefore WebGL, API work; and from prior
experience with the design and implementation of Java's New I/O Buffer
classes, which suffered from horrible performance pitfalls because of
a design similar to that which you suggest.

Production browsers already implement typed arrays with their current
semantics. It is not possible to change them and have WebGL continue
to function. I will go so far as to say that the semantics will not be
changed.

In the typed array specification, unlike Java's New I/O specification,
the API was split between two use cases: in-memory data construction
(for consumption by APIs like WebGL and Web Audio), and file and
network I/O. The API was carefully designed to avoid roadblocks that
would prevent maximum performance from being achieved for these use
cases. Experience has shown that the moment an artificial performance
barrier is imposed, it becomes impossible to build certain kinds of
programs. I consider it unacceptable to prevent developers from
achieving their goals.


 I also disagree that it should use DataView.  Views are used to access
 arrays (including strings) within larger data structures.  DataView is used
 to access packed data structures, where constructing a view for each
 variable in the struct is unwieldy.  It might be useful to have a helper in
 DataView, but the core API should work on views.

This is one point of view. The true design goal of DataView is to
supply the primitives for fast file and network input/output, where
the endianness is explicitly specified in the file format. Converting
strings to and from binary encodings is obviously an operation
associated with transfer of data to or from files or the network.
According to this taxonomy, the string encoding and decoding
operations should only be associated with DataView, and not the other
typed array types, which are designed for in-memory data assembly for
consumption by other hardware on the system.


  - It would be preferable if the encoding API had a way to avoid
 memory allocation, for example to encode into a passed-in DataView.


 This was an earlier design, and discussion led to it being removed as a
 premature optimization, to simplify the API.  I'd recommend reading the rest
 of the thread.

I do apologize for not being fully caught up on the thread, but hope
that the input above was still useful.

-Ken


Re: [whatwg] API for encoding/decoding ArrayBuffers into text

2012-03-26 Thread Kenneth Russell
On Mon, Mar 26, 2012 at 5:33 PM, Jonas Sicking jo...@sicking.cc wrote:
 On Mon, Mar 26, 2012 at 4:40 PM, Joshua Bell jsb...@chromium.org wrote:
 * We lost the ability to decode from an ArrayBuffer and see how many
 bytes were consumed before a null-terminator was hit. One not terribly
 elegant solution would be to add a TextDecoder.decodeWithLength method
 which returns a DOMString+length tuple.

 Agreed, but of course see above - there was consensus earlier in the thread
 that searching for null terminators should be done outside the API,
 therefore the caller will have the length handy already. Yes, this would be
 a big flaw since decoding a tightly packed data structure (e.g. array of
 null terminated strings w/o length) would be impossible with just the
 nullTerminator flag.

 Requiring callers to find the null character first, and then use that
 will require one additional pass over the encoded binary data though.
 Also, if we put the API for finding the null character on the Decoder
 object it doesn't seem like we're creating an API which is easier to
 use, just one that has moved some of the logic from the API to every
 caller.

 Though I guess the best solution would be to add methods to DataView
 which allows consuming an ArrayBuffer up to a null terminated point
 and returns the decoded string. Potentially such a method could take a
 Decoder object as argument.

The rationale for specifying the string encoding and decoding
functionality outside the typed array specification is to keep the
typed array spec small and easily implementable. The indexed property
getters and setters on the typed array views, and methods on DataView,
are designed to be implementable with a small amount of assembly code
in JavaScript engines. I'd strongly prefer to continue to design the
encoding/decoding functionality separately from the typed array views.

-Ken


Re: [whatwg] API for encoding/decoding ArrayBuffers into text

2012-03-13 Thread Kenneth Russell
Joshua Bell has been working on a string encoding and decoding API
that supports the needed encodings, and which is separable from the
core typed array API:

http://wiki.whatwg.org/wiki/StringEncoding

This is the direction I prefer. String encoding and decoding seems to
be a complex enough problem that it should be expressed separately
from the typed array spec itself.
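For historical context: the StringEncoding proposal referenced above is the ancestor of what eventually shipped in browsers as the Encoding Standard's TextEncoder/TextDecoder — living alongside, not inside, the typed array types, exactly as argued here. A minimal usage sketch with the shipped API:

```javascript
// Encode a string to UTF-8 bytes and decode it back, using the API
// that eventually standardized out of the StringEncoding proposal.
const bytes = new TextEncoder().encode("héllo");      // Uint8Array of UTF-8
const text  = new TextDecoder("utf-8").decode(bytes); // back to a string
```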

-Ken


On Tue, Mar 13, 2012 at 5:59 PM, Ian Hickson i...@hixie.ch wrote:
 On Tue, 13 Mar 2012, Jonas Sicking wrote:

 Something that has come up a couple of times with content authors
 lately has been the desire to convert an ArrayBuffer (or part thereof)
 into a decoded string. Similarly being able to encode a string into an
 ArrayBuffer (or part thereof).

 Something as simple as

 DOMString decode(ArrayBufferView source, DOMString encoding);
 ArrayBufferView encode(DOMString source, DOMString encoding,
 [optional] ArrayBufferView destination);

 would go a very long way. The question is where to stick these
 functions. Internationalization doesn't have a obvious object we can
 hang functions off of (unlike, for example crypto), and the above
 names are much too generic to turn into global functions.

 Shouldn't this just be another ArrayBufferView type with special
 semantics, like Uint8ClampedArray? DOMStringArray or some such? And/or a
 getString()/setString() method pair on DataView?

 Incidentally I _strongly_ suggest we only support UTF-8 here.

 --
 Ian Hickson               U+1047E                )\._.,--,'``.    fL
 http://ln.hixie.ch/       U+263A                /,   _.. \   _\  ;`._ ,.
 Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'


Re: [whatwg] API for encoding/decoding ArrayBuffers into text

2012-03-13 Thread Kenneth Russell
On Tue, Mar 13, 2012 at 6:10 PM, Jonas Sicking jo...@sicking.cc wrote:
 On Tue, Mar 13, 2012 at 4:08 PM, Kenneth Russell k...@google.com wrote:
 Joshua Bell has been working on a string encoding and decoding API
 that supports the needed encodings, and which is separable from the
 core typed array API:

 http://wiki.whatwg.org/wiki/StringEncoding

 This is the direction I prefer. String encoding and decoding seems to
 be a complex enough problem that it should be expressed separately
 from the typed array spec itself.

 Very cool. Where do I provide feedback to this? Here?

This list seems like a good place to discuss it.

-Ken


Re: [whatwg] [CORS] WebKit tainting image instead of throwing error

2011-10-04 Thread Kenneth Russell
On Tue, Oct 4, 2011 at 11:55 AM, Boris Zbarsky bzbar...@mit.edu wrote:
 On 10/4/11 2:44 PM, Anne van Kesteren wrote:

 On Tue, 04 Oct 2011 20:32:02 +0200, Ian Hickson i...@hixie.ch wrote:

 The idea is that if the server explicitly rejected the CORS request, then
 the image should not be usable at all.

 FWIW, from a CORS-perspective both scenarios are fine. CORS only cares
 about whether data gets shared in the end.

 Displaying images involves sharing data, basically.  That's why we're having
 to jump through all these hoops

As far as I can tell the tainting behavior WebKit implements is
correct, and is specified by the text in
http://www.whatwg.org/specs/web-apps/current-work/multipage/embedded-content-1.html#the-img-element
. Scroll down to step 6 in the algorithm for "when the user agent is
to update the image data". Note that the "default origin behaviour"
is set to "taint" when fetching images.

-Ken


Re: [whatwg] [CORS] WebKit tainting image instead of throwing error

2011-10-04 Thread Kenneth Russell
On Tue, Oct 4, 2011 at 12:11 PM, Boris Zbarsky bzbar...@mit.edu wrote:
 "default origin behavior" is only relevant when the mode is "No CORS". See

http://www.whatwg.org/specs/web-apps/current-work/multipage/fetching-resources.html#potentially-cors-enabled-fetch

 So for images it only applies when the crossorigin attribute is not set.

 So no, WebKit's implementation is not correct if you were trying to
 implement the spec.

 In particular, if crossorigin is set, you end up at

http://www.whatwg.org/specs/web-apps/current-work/multipage/fetching-resources.html#potentially-cors-enabled-fetch
 step 3 item 1 in the 'If mode is Anonymous or Use Credentials'
section,
 which is exactly what was cited in the mail that started this thread.

Right, I see now. Sorry.

On Tue, Oct 4, 2011 at 12:10 PM, Ian Hickson i...@hixie.ch wrote:

 On Tue, 4 Oct 2011, Anne van Kesteren wrote:
  On Tue, 04 Oct 2011 20:32:02 +0200, Ian Hickson i...@hixie.ch wrote:
   The idea is that if the server explicitly rejected the CORS request,
   then the image should not be usable at all.
 
  FWIW, from a CORS-perspective both scenarios are fine. CORS only cares
  about whether data gets shared in the end. One advantage I can see about
  img crossorigin still displaying the image is that the request does
  not use cookies. Not displaying the image probably makes debugging
  easier however.

 On Tue, 4 Oct 2011, Boris Zbarsky wrote:
 
  Displaying images involves sharing data, basically.  That's why we're
  having to jump through all these hoops

 On Tue, 4 Oct 2011, Anne van Kesteren wrote:
 
  Sure, but not more than per usual. Note that if you do not specify the
  crossorigin attribute the image can still get untainted. And if it does
  not you would still display the image (as always).

 The thing is that you can grab the image data (at least the alpha channel
 using 2D canvas) from these images, even with tainting enabled, so if the
 server says no, we really should honour it.


I don't think that this is a good argument for the currently specified
behavior. The server only has the option of declining cross-origin access if
the application specified the crossorigin attribute. A hostile application
would simply not specify that attribute, would receive the tainted image,
and would use the timing attack (the one I assume you're referring to) to
infer the alpha channel.

The far more common case today is that the server doesn't understand the
CORS request, not that it explicitly forbids cross-origin access to the
resource.

It seems to me that tainting the image if the CORS check fails is more
graceful behavior, but I also see the advantages in early error reporting.

Odin, if you file a bug on bugs.webkit.org, would you CC my email address?

-Ken

On Tue, 4 Oct 2011, Boris Zbarsky wrote:
 
  And in particular an <img crossorigin> that's in the DOM and fails the
  CORS checks should not render the image on the page.  Anything else is
  just broken.

 Agreed.

 --
 Ian Hickson   U+1047E)\._.,--,'``.fL
 http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
 Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'



[whatwg] CORS requests for image and video elements

2011-05-17 Thread Kenneth Russell
Last week, a proof of concept of a previously theoretical timing
attack against WebGL was published which allows theft of cross-domain
images' content.

To address this vulnerability it appears to be necessary to ban the
use of cross-domain images and videos in WebGL. Unfortunately, doing
so will prevent entire classes of applications from being written, and
break a not insignificant percentage of current applications.

We would like to use CORS to solve this problem; if the server grants
access to the image or video, WebGL can use it. Initial discussions
with image hosting services have been positive, and it seems that CORS
support could be enabled fairly quickly. Many such services already
support other access control mechanisms such as Flash's
crossdomain.xml. Unfortunately, experimentation indicates that it is
not possible to simply send CORS' Origin header with every HTTP GET
request for images; some servers do not behave properly when this is
done.

We would like to propose adding a new Boolean property, useCORS, to
HTMLImageElement and HTMLMediaElement, defaulting to false. If set to
true, then HTTP requests sent for these elements will set the Origin
header from the page's URL. If the Access-Control-Allow-Origin header
in the response grants access, then the content's origin will be
treated as the same as the page's.

Perhaps an API could also be added to find out whether the server
granted CORS access to the resulting media, though this is less
important. (Note that the canvas element does not have an explicit API
for querying the origin-clean flag.)

Thoughts on this proposal? We would like to decide on a path quickly
so that we can update both specs and implementations.

Thanks,

-Ken


Re: [whatwg] CORS requests for image and video elements

2011-05-17 Thread Kenneth Russell
On Tue, May 17, 2011 at 2:52 PM, Glenn Maynard gl...@zewt.org wrote:
 On Tue, May 17, 2011 at 5:40 PM, Jonas Sicking jo...@sicking.cc wrote:

 If the supports credentials flag is set to false, the request will
 be made without cookies, and the server may respond with either
 Access-Control-Allow-Origin:* or Access-Control-Allow-Origin:
 origin.

 I propose that the latter mode is used as it will make servers easier
 to configure as they can just add a static header to all their
 responses.

 This could be specified, eg. <img cors> without credentials and <img
 cors=credentials> with.  I don't know if there are use cases to justify
 it.

In general I think we need to enable as close behavior to the normal
image fetching code path as possible. For example, a mashup might
require you to be logged in to a site in order to display thumbnails
of movie trailers. If normal image fetches send cookies, then it has
to be possible to send them when doing a CORS request. I like the idea
of <img cors> vs. <img cors=credentials>.

-Ken


Re: [whatwg] CORS requests for image and video elements

2011-05-17 Thread Kenneth Russell
On Tue, May 17, 2011 at 6:11 PM, Ian Hickson i...@hixie.ch wrote:
 On Tue, 17 May 2011, Kenneth Russell wrote:

 Last week, a proof of concept of a previously theoretical timing attack
 against WebGL was published which allows theft of cross-domain images'
 content.

 To address this vulnerability it appears to be necessary to ban the use
 of cross-domain images and videos in WebGL. Unfortunately, doing so will
 prevent entire classes of applications from being written, and break a
 not insignificant percentage of current applications.

 We would like to use CORS to solve this problem; if the server grants
 access to the image or video, WebGL can use it. Initial discussions with
 image hosting services have been positive, and it seems that CORS
 support could be enabled fairly quickly. Many such services already
 support other access control mechanisms such as Flash's crossdomain.xml.
 Unfortunately, experimentation indicates that it is not possible to
 simply send CORS' Origin header with every HTTP GET request for images;
 some servers do not behave properly when this is done.

 We would like to propose adding a new Boolean property, useCORS, to
 HTMLImageElement and HTMLMediaElement, defaulting to false. If set to
 true, then HTTP requests sent for these elements will set the Origin
 header from the page's URL. If the Access-Control-Allow-Origin header in
 the response grants access, then the content's origin will be treated as
 the same as the page's.

 On Tue, 17 May 2011, Jonas Sicking wrote:

 Does setting useCORS make the CORS implementation execute with the
 supports credentials flag set to true or false?

 When set to true, the request to the server will contain the normal
 cookies which the user has set for that domain. However, the response
 from the server will have to contain Access-Control-Allow-Origin:
 origin. In particular Access-Control-Allow-Origin:* will not be
 treated as a valid response.

 If the supports credentials flag is set to false, the request will be
 made without cookies, and the server may respond with either
 Access-Control-Allow-Origin:* or Access-Control-Allow-Origin:
 origin.

 I propose that the latter mode is used as it will make servers easier to
 configure as they can just add a static header to all their responses.
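The two credentials modes Jonas describes reduce to a small check (a sketch; the function name and parameters are illustrative):

```javascript
// With credentials, only an exact origin match grants access; "*" is rejected.
// Without credentials, either "*" or an exact origin match is accepted.
function corsGrantsAccess(allowOriginHeader, pageOrigin, withCredentials) {
  if (withCredentials) {
    return allowOriginHeader === pageOrigin;
  }
  return allowOriginHeader === '*' || allowOriginHeader === pageOrigin;
}
```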

 On Tue, 17 May 2011, Glenn Maynard wrote:

 This could be specified, e.g. <img cors> without credentials and <img
 cors=credentials> with.  I don't know if there are use cases to
 justify it.

 On Tue, 17 May 2011, Kenneth Russell wrote:

 In general I think we need to enable as close behavior to the normal
 image fetching code path as possible. For example, a mashup might
 require you to be logged in to a site in order to display thumbnails of
 movie trailers. If normal image fetches send cookies, then it has to be
 possible to send them when doing a CORS request. I like the idea of <img
 cors> vs. <img cors=credentials>.

 I've added a content attribute to <img>, <video>, and <audio> that makes
 the image or media resource be fetched with CORS and have the origin of
 the page if CORS succeeded.

 The attribute is cross-origin and it has two allowed values,
 use-credentials and anonymous. The latter is the default, so you can
 just say <img cross-origin src="data.png">.

 This is only a first draft, I'm not sure it's perfect. In particular,
 right now cross-origin media is not allowed at all without this attribute
 (this is not a new change, but I'm not sure it's what implementations do).
 Also, right now as specced if you give a local URL that redirects to a
 remote URL, I don't have CORS kick in, even if you specified cross-origin.
 (This is mostly an editorial thing, I'm going to wait for Anne to get back
 and then see if he can help me out with some editorial changes to CORS to
 make it easier to make that work generally.)

 Implementation and author experience feedback would be very welcome on
 this.

Thanks very much for your prompt attention. This sounds like a great
first step. I'll personally try to implement this in WebKit ASAP, and
encourage other browser vendors who are members of the WebGL working
group to do so as well and provide feedback.

 On Tue, 17 May 2011, Kenneth Russell wrote:

 Perhaps an API could also be added to find out whether the server
 granted CORS access to the resulting media, though this is less
 important. (Note that the canvas element does not have an explicit API
 for querying the origin-clean flag.)

 I haven't exposed this. You can work around it by trying to use the image
 in a canvas, then rereading the canvas, and seeing if you get a security
 error. If there are compelling use cases for that I'd be happy to add an
 API to handle this feature to the DOM though.

That sounds fine.

-Ken

 --
 Ian Hickson               U+1047E                )\._.,--,'``.    fL
 http://ln.hixie.ch/       U+263A                /,   _.. \   _\  ;`._ ,.
 Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'



Re: [whatwg] Canvas.getContext error handling

2011-04-13 Thread Kenneth Russell
On Tue, Apr 12, 2011 at 4:32 PM, Glenn Maynard gl...@zewt.org wrote:
 Based on some discussion[1], it looks like a clean way to handle the
 permanent failure case is: If the GPU is blacklisted, or any other
 permanent error occurs, treat "webgl" as an unsupported context.  This means
 instead of WebGL's context creation algorithm executing and returning null,
 it would never be run at all; instead, step 2 of getContext[2] would return
 null.

 For transient errors, eg. too many open contexts, return a WebGL context in
 the lost state as Kenneth described.

 It was mentioned that the GPU blacklist can change as the browser runs.
 That's supported with this method, since whether a context type is
 supported or not can change over time.

 Are there any cases where this wouldn't work?

 (I'm not sure if or how webglcontextcreationerror fits in this.  It would
 either go away entirely, or be wedged between steps 1 and 2 of getContext; I
 don't know how WebGL would specify that.)

Thanks for the pointer to the IRC logs. It looks like it was a useful
discussion.

It's essential to be able to report more detail about why context
creation failed. We have already received a lot of feedback from users
and developers of popular projects like Google Body that doing so will
reduce end user frustration and provide them a path toward getting the
content to work.

At a minimum, we need to either continue to allow the generation of
webglcontextcreationerror at some point during the getContext() call,
throw an exception from getContext() in this case, or do something
else. Do you have a suggestion on which way to proceed?

-Ken

 [1] http://krijnhoetmer.nl/irc-logs/whatwg/20110413#l-77
 [2]
 http://dev.w3.org/html5/spec/the-canvas-element.html#dom-canvas-getcontext

 --
 Glenn Maynard



Re: [whatwg] Canvas.getContext error handling

2011-04-13 Thread Kenneth Russell
On Wed, Apr 13, 2011 at 4:43 PM, Glenn Maynard gl...@zewt.org wrote:
 On Wed, Apr 13, 2011 at 4:21 PM, Kenneth Russell k...@google.com wrote:

 It's essential to be able to report more detail about why context
 creation failed. We have already received a lot of feedback from users
 and developers of popular projects like Google Body that doing so will
 reduce end user frustration and provide them a path toward getting the
 content to work.

 Hixie says this is a bad idea, for security reasons, and that the UA should
 just tell the user directly:
 http://krijnhoetmer.nl/irc-logs/whatwg/20110413#l-1056

 That said, the discussion lead to another approach:

 Calling canvas.getContext("webgl", {async: true}) will cause it to *always*
 return an object immediately, without attempting to initialize the
 underlying drawing context.  This context starts out in the lost state.
 As long as WebGL is supported by the browser, getContext will never return
 null, even for blacklisted GPUs.  The context is initialized
 asynchronously.  On success, webglcontextrestored is fired, as if the
 context had just come back from a normal context loss.  On failure,
 webglcontextcreationerror is fired with a statusMessage, and possibly a flag
 indicating whether it's a permanent failure (GPU blacklisted) or a
 recoverable one (insufficient resources).

 If {async: true} isn't specified, then an initial context failure returns
 null (using the unsupported contextId approach), and there's no interface
 to get an error message--people should be strongly discouraged from using
 this API (deprecating it if possible).

 (If it's possible to make the backwards-incompatible change to remove sync
 initialization entirely, that would be good to do, but I'm assuming it's
 not.)

 There are other fine details (such as feature detection, and possibly
 distinguishing "initializing" from "lost"), but I'll wait for people to give
 their thoughts before delving in deeper.  Aside from giving a consistent way
 to report errors, this allows browsers to initialize WebGL contexts in the
 background.
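The proposed flow can be sketched as a small state machine. Note this models a mailing-list proposal: `{async: true}` was never standardized, and every name below is illustrative:

```javascript
// Sketch of the proposed async creation flow: the context object is returned
// immediately in the "lost" state; initialization later either fires a
// restored event or a creation-error event carrying a statusMessage.
// Initialization is asynchronous in the proposal; modeled here as a callback.
function createContextAsync(initialize, handlers) {
  const ctx = { state: 'lost' };
  initialize((ok, statusMessage) => {
    if (ok) {
      ctx.state = 'live';
      handlers.oncontextrestored(ctx);
    } else {
      handlers.oncreationerror({ statusMessage });
    }
  });
  return ctx; // never null, even if initialization later fails
}
```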

Providing a programmatic status message about why WebGL initialization
failed (for example, that the user's card or driver is blacklisted) is
not a security issue. First, there would be no way to issue work to
the GPU to exploit any vulnerabilities that might exist, since the app
couldn't get a WebGLRenderingContext. Second, there wouldn't be
detailed enough information in the error message to find out what
graphics card is in use and attempt any other kind of targeted attacks
using other web rendering mechanisms.

Adding support for asynchronous initialization of WebGL is a good
idea, and should be proposed on public_webgl, but this discussion
should focus solely on improving the specification of the existing
synchronous initialization path, and its error conditions.

Given that the proposed asynchronous initialization path above uses
webglcontextcreationerror and provides a status message, I think that
should continue to be the error reporting mechanism for the current
initialization path. Then the introduction of any asynchronous
initialization path would be very simple: the application should
anticipate that it will receive a context lost event immediately,
rather than assuming it can immediately do its initialization. Error
reporting would be identical in the two scenarios.

-Ken


Re: [whatwg] Canvas.getContext error handling

2011-04-12 Thread Kenneth Russell
On Tue, Apr 12, 2011 at 2:21 AM, Boris Zbarsky bzbar...@mit.edu wrote:
 On 4/12/11 12:06 AM, Ian Hickson wrote:

 Now, that's a problem for WebGL, because it's not possible to tell in
 advance whether the underlying rendering context can be created.

 It would be helpful if someone could explain what conditions could lead to
 a situation where getContext() could fail to return a WebGL context.

 In at least Gecko's implementation, creating of an actual GLContext to back
 the WebGL context could fail.  Unfortunately, this can happen any time too
 many WebGL contexts are live; what too many means depends on the exact GPU
 resources available.

 I think we consider that part a quality-of-implementation issue,
 though we haven't figured out how to do the high-quality thing here
 yet.  ;)

 Is it something the author can do anything about

 Still in Gecko's case, drop references to some GL contexts and canvases, and
 hope for GC to happen.

In Chromium the same basic issue is present. We are close to being
able to forcibly evict old OpenGL contexts in response, so that
creation of the current one could proceed. In this case the author
wouldn't need to do anything.

There are two more cases I can think of. The first is when the
graphics card or driver version is blacklisted. In this case
getContext() will always return null, and there's nothing the
developer can do. Currently this would be reported via
webglcontextcreationerror. We could consider throwing an exception
containing the detail message.

The second is if the graphics card is in a powered-off state when the
app calls Canvas.getContext("webgl") -- for example, if a notebook is
awakening from sleep. In this case a good quality WebGL implementation
would like to notify the app when the graphics card is available. In
order for this to work, WebGL would actually need to return a non-null
WebGLRenderingContext, but immediately dispatch a webglcontextlost
event to the canvas.

To sum up, in general I think that whenever getContext("webgl")
returns null, it's unrecoverable in a high quality WebGL
implementation.

-Ken


Re: [whatwg] Canvas.getContext error handling

2011-04-11 Thread Kenneth Russell
On Sat, Apr 9, 2011 at 7:55 PM, Glenn Maynard gl...@zewt.org wrote:
 getContext doesn't specify error handling.  WebGL solves this oddly: if an
 error occurs, it dispatches an event with error details at the canvas.  It's
 odd for a synchronous API to report error information with an event; it
 would make a lot more sense to raise an exception.  However, getContext
 doesn't specify error handling for the Return a new object for contextId
 algorithms.

 The primary context should only be set by getContext after Return a new
 object for contextId completes successfully; it shouldn't be set on error.
 The cached return value used in step 5 should also only happen after
 success; don't cache a null response.  This way, you can retry getContext on
 failure, and getContext is a straightforward no-op after an error.
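Glenn's proposed caching rule — cache only successes so a failure can be retried — can be sketched as follows (an illustrative model, not the spec's actual getContext algorithm):

```javascript
// Sketch: wrap a context factory so that only successful creations are
// cached. A null (failed) result is never cached, so a later call retries.
function makeGetContext(create) {
  const cache = new Map();
  return function getContext(contextId) {
    if (cache.has(contextId)) return cache.get(contextId); // prior success
    const ctx = create(contextId);                 // may return null on failure
    if (ctx !== null) cache.set(contextId, ctx);   // don't cache failures
    return ctx;
  };
}
```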

 (I don't know if the WebGL folks could be convinced to change to throwing an
 exception or if they want to continue returning null and firing an event.)

 Related thread:
 https://www.khronos.org/webgl/public-mailing-list/archives/1104/msg00027.html

Just for the record, I'm sure the WebGL working group would be
amenable to making changes in this area. However, there is a general
problem: if getContext() throws an exception, how does the caller know
whether a later call to getContext() might succeed, or will always
fail?

I don't remember all of the discussions in the WebGL working group
which led to the currently defined behavior, but I think that the fact
that 3D graphics contexts can be spontaneously lost, and recovered,
factored into the current design.

-Ken


Re: [whatwg] ArrayBuffer and the structured clone algorithm

2011-02-01 Thread Kenneth Russell
On Tue, Feb 1, 2011 at 11:08 AM, Tab Atkins Jr. jackalm...@gmail.com wrote:
 On Tue, Feb 1, 2011 at 10:04 AM, Anne van Kesteren ann...@opera.com wrote:
 On Tue, 01 Feb 2011 18:36:19 +0100, Boris Zbarsky bzbar...@mit.edu wrote:

 On 2/1/11 5:19 AM, Simon Pieters wrote:

 While you're discussing efficient handoff of ArrayBuffer, do you also
 keep in mind efficient handoff of other objects (e.g. ImageData) as
 discussed in this thread?:

 http://lists.whatwg.org/pipermail/whatwg-whatwg.org/2011-January/029885.html

 For what it's worth, in Gecko that's the same thing, since imagedata is
 just a typed array in our implementation.

 ImageData.data you mean? I wonder if we can still remove CanvasPixelArray.

 Only if the out-of-bounds behavior for entries in Typed Arrays matches
 the current clamping behavior for CanvasPixelArray.  I don't see any
 explicit indication of what should be done in the Typed Array spec,
 which I suppose means that they're relying on WebIDL's coercion algos
 to keep things in-range for the given view.  WebIDL has the wrong
 behavior here right now (it wraps), though I think heycam is receptive
 to changing it.

For this reason I think we need to keep CanvasPixelArray distinct. I
certainly hope that Web IDL does not change its conversion rules to
mimic the clamping behavior in CanvasPixelArray. Right now Web IDL
delegates to the ECMA-262 specification for primitive conversions,
which have the wrapping behavior of C-style casts rather than clamping
behavior. Forcing clamping for out-of-range integer values would
impose a significant negative performance constraint on typed arrays.

-Ken
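The distinction Ken draws can be seen directly with the typed arrays that were eventually standardized (a sketch added for illustration; Uint8ClampedArray was introduced later precisely to preserve the canvas clamping behavior, while Uint8Array kept the C-style wrapping):

```javascript
// Wrapping (C-style cast): out-of-range stores are reduced modulo 256.
const wrapping = new Uint8Array(1);
wrapping[0] = 300;            // 300 mod 256
console.log(wrapping[0]);     // 44

// Clamping (CanvasPixelArray behavior): out-of-range stores saturate.
const clamping = new Uint8ClampedArray(1);
clamping[0] = 300;
console.log(clamping[0]);     // 255
clamping[0] = -5;
console.log(clamping[0]);     // 0
```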


Re: [whatwg] ArrayBuffer and the structured clone algorithm

2011-01-31 Thread Kenneth Russell
On Mon, Jan 31, 2011 at 3:10 PM, Ian Hickson i...@hixie.ch wrote:
 On Fri, 7 Jan 2011, David Flanagan wrote:

 The structured clone algorithm currently allows ImageData and Blob
 objects to be cloned but doesn't mention ArrayBuffer.  Is this
 intentional?  I assume there are no security issues involved, since one
 could copy the bytes of an ArrayBuffer into either a Blob or an
 ImageData object in order to clone them.

 It's intentional in that I'm waiting for ArrayBuffer to be more stable
 before I add it throughout the spec. (Same with CORS and the various
 places that might support cross-origin communication, e.g. Web Workers,
 Server-Sent Events, <img>+<canvas>, etc.)

There's been some preliminary discussion within the WebGL working
group (where ArrayBuffer / Typed Arrays originated) about using
ArrayBuffer with Web Workers in particular. There is a strong desire
to support handoff of an ArrayBuffer from the main thread to a worker
and vice versa; this would allow efficient producer/consumer queues to
be built without violating ECMAScript's shared-nothing semantics.

All of the parties involved are pretty busy getting WebGL 1.0 out the
door; once that happens, we aim to make one more revision to the Typed
Array spec to support (1) read-only arrays for more efficient XHRs and
(2) handoff of ArrayBuffers. Expect public discussions to start in
about six to eight weeks.

-Ken


Re: [whatwg] ArrayBuffer and ByteArray questions

2010-09-08 Thread Kenneth Russell
On Wed, Sep 8, 2010 at 11:21 AM, Oliver Hunt oli...@apple.com wrote:

 On Sep 8, 2010, at 11:13 AM, Chris Marrin wrote:

 Web Sockets is certainly another candidate, but I meant Web Workers. There 
 have been informal discussions on using ArrayBuffers as a way to safely 
 share binary data between threads. I don't believe anything has been 
 formalized here.

 You can't share data between workers.  There is no (and there cannot be) any 
 shared state between multiple threads of JS execution.

Let's say "efficiently send" rather than "share". The current thinking
has been around a way to post one ArrayBuffer to a worker which would
close that ArrayBuffer and all views on the main thread. The way to
get the same backing store from the worker back to the main thread
would be to post the ArrayBuffer from the worker to the main thread,
at which point the ArrayBuffer and all views on the worker would be
closed. This ping-ponging would allow efficient implementation of
producer/consumer queues without allocating new backing store each
time the worker wants to produce something for the main thread.

This would require some small API additions to the typed array spec,
and a prototype so we can convince ourselves of its effectiveness.

-Ken
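The hand-off Ken sketches here later shipped as "Transferable" objects: transferring an ArrayBuffer detaches it on the sending side, closing all views on it, which is exactly the proposed semantics. As an illustration (not part of the original discussion), `structuredClone` with a transfer list shows the detach without needing a worker (Node >= 17, modern browsers):

```javascript
// Transferring an ArrayBuffer moves the backing store and detaches the
// original, rather than copying it.
const buf = new ArrayBuffer(8);
new Uint8Array(buf)[0] = 42;

const moved = structuredClone(buf, { transfer: [buf] });

console.log(moved.byteLength); // 8  -- the data moved intact
console.log(buf.byteLength);   // 0  -- the original is detached
```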


Re: [whatwg] api for fullscreen()

2010-02-01 Thread Kenneth Russell
On Thu, Jan 28, 2010 at 8:55 PM, Robert O'Callahan rob...@ocallahan.org wrote:
 On Fri, Jan 29, 2010 at 5:06 PM, Geoff Stearns tensafefr...@google.com
 wrote:

 enterFullscreen always returns immediately. If fullscreen mode is
 currently supported and permitted, enterFullscreen dispatches a task that a)
 imposes the fullscreen style, b) fires the beginfullscreen event on the
 element and c) actually initiates fullscreen display of the element. The UA
 may asynchronously display confirmation UI and dispatch the task when the
 user has confirmed (or never).

 Don't you think it would make more sense to dispatch the enterFullscreen
 event only when the element actually goes fullscreen? If the user clicks the
 fullscreen button, but then doesn't accept whatever options (likely a
 security dialog or something) then it doesn't make sense to broadcast an
 enterFullscreen event, as you'd just have to broadcast an exitFullscreen
 event right away to show that the user isn't actually in fullscreen.

 That was my intent in the last sentence of the paragraph you quoted.



 The enableKeys parameter to enterFullscreen is a hint to the UA that the
 application would like to be able to receive arbitrary keyboard input.
 Otherwise the UA is likely to disable alphanumeric keyboard input. If
 enableKeys is specified, the UA might require more severe confirmation UI.

 This seems overly complicated. I think it would suffice to simply show a
 dialog the first time a user wants to go fullscreen within a domain with an
 option to remember this choice for this domain. Then the user won't have
 to jump through the hoops again when they return, but will still protect
 them from random websites going fullscreen and trying to phish things. This
 way blocking or restricting keyboard events isn't needed.

 Those kinds of dialogs are dangerous because users tend to just dismiss them
 without reading. Passive (ignorable and asynchronous) confirmation works
 better.

 The enableKeys option would let authors who don't need alphanumeric input
 (video playback) go fullscreen with a low confirmation bar (perhaps none at
 all, if the fullscreen request is in a click event handler).

 Also consider what happens if the user focuses something on another
 display. Do you then drop out of fullscreen, or just blur() the fullscreen
 window? (I'd vote to leave it and just blur() it, so you can do things like
 watch fullscreen video on one display and continue working in the other).

 That sounds like a good idea, but I don't think it needs to be in the spec.
 It's up to the UA.

 Another thing to add in here I haven't seen discussed yet is what to show
 as the background to the fullscreen element. Consider the example of a 16:9
 video going fullscreen on a 4:3 display. How do you tell the browser to fill
 in the extra space around the video with black (or whatever other color you
 want). Is this a custom css element?


 The <video> element already letterboxes. So you'd do something like this:
 <div class="fullscreen" style="background:black; position:relative;
 width:640px; height:480px">
   <video style="position:absolute; width:100%; height:100%"
 src="..."></video>
   ... controls ...
 </div>

 Making the div fullscreen would override the author geometry and produce
 the effect you want.

When you say that the DOM viewport of the element is aligned with the
screen when it goes fullscreen, does that mean that the .width and
.height properties are changed? Or does it mean that the element's
size is changed by a CSS style?

The case I'm thinking about is when a Canvas element is taken
fullscreen; on that element changing the .width and .height properties
changes the size of the backing store, but applying a CSS style to
change its width and height causes the backing store to be scaled to
fit. The desired behavior is for the backing store to be resized.

-Ken

 Rob
 --
 He was pierced for our transgressions, he was crushed for our iniquities;
 the punishment that brought us peace was upon him, and by his wounds we are
 healed. We all, like sheep, have gone astray, each of us has turned to his
 own way; and the LORD has laid on him the iniquity of us all. [Isaiah
 53:5-6]



Re: [whatwg] api for fullscreen()

2010-02-01 Thread Kenneth Russell
On Mon, Feb 1, 2010 at 11:05 AM, Robert O'Callahan rob...@ocallahan.org wrote:
 On Tue, Feb 2, 2010 at 7:39 AM, Kenneth Russell k...@google.com wrote:

 When you say that the DOM viewport of the element is aligned with the
 screen when it goes fullscreen, does that mean that the .width and
 .height properties are changed? Or does it mean that the element's
 size is changed by a CSS style?

 The latter. The window's viewport is aligned with the screen bounds, and by
 default the element is styled with position:fixed; left:0; right:0; top:0;
 bottom:0, which resizes it in CSS to fill the viewport.

 The case I'm thinking about is when a Canvas element is taken
 fullscreen; on that element changing the .width and .height properties
 changes the size of the backing store, but applying a CSS style to
 change its width and height causes the backing store to be scaled to
 fit. The desired behavior is for the backing store to be resized.


 The author would have to handle the beginfullscreen event and manually set
 the canvas width/height attributes, e.g. to
 getBoundingClientRect().width/height. I don't think we should change
 width/height attributes automatically, since that has the side effect of
 clearing the canvas.

OK, that sounds reasonable. Thanks.

-Ken

 Rob
 --
 He was pierced for our transgressions, he was crushed for our iniquities;
 the punishment that brought us peace was upon him, and by his wounds we are
 healed. We all, like sheep, have gone astray, each of us has turned to his
 own way; and the LORD has laid on him the iniquity of us all. [Isaiah
 53:5-6]



Re: [whatwg] Canvas pixel manipulation and performance

2009-12-04 Thread Kenneth Russell
On Fri, Dec 4, 2009 at 9:30 AM, Jason Oster paras...@kodewerx.org wrote:
 I guess this suggestion to access the full pixel data in a single array
 element has fallen by the wayside.  Are there any direct objections to
 including additional API to allow this kind of behavior?  It seems most
 developers believe it would be unnecessary, but I haven't heard much in the
 way of reasoning (technical or personal).

 I cannot comment on the typical uses of accessing pixel data from script;
 if it is [in general] more important to have each of the R,G,B,A components
 separated for script access, or not.  But for cases involving indexed
 palettes, having the ability to directly treat each pixel as a single
 property is very much desired.

 It is not going to provide a huge boost in performance.  At worst, it will
 help make code cleaner.  But at best, it will do that and [slightly?] reduce
 the performance penalty of reading/writing 3 superfluous (in my eyes) array
 accesses.  The only negative aspect I can think of with additional API
 functions is the introduction of new developer confusion; Which one do I
 use?

I think you'd get more traction if you had performance measurements;
minimally, profiles showing that this is hot in your current
application. Ideally, you could do a prototype in one of the browsers
supporting WebGL which exposes the ImageData's backing store as a
WebGLUnsignedIntArray. If this showed a significant speedup it would
provide strong motivation.

-Ken


Re: [whatwg] Canvas pixel manipulation and performance

2009-11-29 Thread Kenneth Russell
On Sat, Nov 28, 2009 at 9:47 PM, Boris Zbarsky bzbar...@mit.edu wrote:
 On 11/29/09 12:15 AM, Kenneth Russell wrote:

 I assume you meant JS bitwise operators?  Do we have any indication that
 this would be faster than four array property sets?  The bitwise ops in
 JS
 are not necessarily particulary fast.

 Yes, that's what I meant. I don't have any data on whether this would
 currently be faster than the four separate byte stores.

 Are they even byte stores, necessarily?  I know in Gecko imagedata is just a
 JS array at the moment; it stores each of R,G,B,A as a JS Number (with the
 usual "if it's an integer, store as an integer" optimization arrays do).
  That might well change in the future, and I hope it does, but that's the
 current code.

 I can't speak to what the behavior is in Webkit, and in particular whether
 it's even the same when using V8 vs Nitro.

In Chromium (WebKit + V8), CanvasPixelArray property stores write
individual bytes to memory. WebGLByteArray and WebGLUnsignedByteArray
behave similarly but have simpler clamping semantics.

-Ken


Re: [whatwg] Canvas pixel manipulation and performance

2009-11-29 Thread Kenneth Russell
On Sun, Nov 29, 2009 at 11:05 AM, Philip Taylor excors+wha...@gmail.com wrote:
 On Sun, Nov 29, 2009 at 6:59 PM, Kenneth Russell k...@google.com wrote:
 On Sat, Nov 28, 2009 at 9:47 PM, Boris Zbarsky bzbar...@mit.edu wrote:
 Are they even byte stores, necessarily?  I know in Gecko imagedata is just a
 JS array at the moment; it stores each of R,G,B,A as a JS Number (with the
 usual "if it's an integer, store as an integer" optimization arrays do).
  That might well change in the future, and I hope it does, but that's the
 current code.

 I can't speak to what the behavior is in Webkit, and in particular whether
 it's even the same when using V8 vs Nitro.

 In Chromium (WebKit + V8), CanvasPixelArray property stores write
 individual bytes to memory. WebGLByteArray and WebGLUnsignedByteArray
 behave similarly but have simpler clamping semantics.

 Would it be helpful (for simplicity or performance or consistency etc)
 to change the specification of CanvasPixelArray to have those simpler
 clamping semantics? (I don't expect there would be compatibility
 problems with changing it now, particularly since Firefox doesn't
 implement clamping at all in CPA.)

It would. Vladimir Vukicevic from Mozilla was planning to raise this
issue with the whatwg upon release of the first public draft of the
WebGL spec.

-Ken


Re: [whatwg] Canvas pixel manipulation and performance

2009-11-28 Thread Kenneth Russell
On Sat, Nov 28, 2009 at 12:44 PM, Jason Oster paras...@kodewerx.org wrote:

 Once again, I agree.  My confusion on the type-specific arrays for WebGL is
 that they were specific and general enough to use in other cases.  If they
 should not be used in 2D canvas implementations (or elsewhere) then a
 2D-canvas-specific array or object would be the way forward.

I and other members of the WebGL working group are hoping that the new
array-like types being introduced with this specification will be
general enough to repurpose in other areas. The first public draft of
the spec will be released in the next week or two, and we're hoping
that will enable discussion with the broader web community.

From a technical standpoint, it would be feasible to use the
WebGLUnsignedIntArray to access the Canvas's pixel data, and
assemble RGBA pixels into integer values using just JavaScript logical
operators. To keep things sane, the specification would need to state
something along the lines that the high (logical, not addressing) 8
bits are the red bits and the low 8 bits are the alpha bits. This
means that implementations on big-endian and little-endian machines
would need to store the data differently internally so that the
behavior at the JavaScript level is identical; the WebGL array types
currently deliberately do no byte swapping.

-Ken
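The packing Ken describes can be written with plain bitwise operators (a sketch; the function names are illustrative). The layout is logical, not physical: red occupies the high 8 bits of the integer value regardless of machine endianness, and `>>> 0` keeps the packed result an unsigned 32-bit value:

```javascript
// Pack an RGBA pixel with red in the logical high 8 bits and alpha in the
// low 8 bits, as described above.
function packRGBA(r, g, b, a) {
  return (((r & 0xff) << 24) | ((g & 0xff) << 16) |
          ((b & 0xff) << 8)  |  (a & 0xff)) >>> 0;
}

function unpackRGBA(pixel) {
  return {
    r: (pixel >>> 24) & 0xff,
    g: (pixel >>> 16) & 0xff,
    b: (pixel >>> 8)  & 0xff,
    a:  pixel         & 0xff,
  };
}

console.log(packRGBA(0x12, 0x34, 0x56, 0x78).toString(16)); // "12345678"
```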


Re: [whatwg] Canvas pixel manipulation and performance

2009-11-28 Thread Kenneth Russell
On Sat, Nov 28, 2009 at 9:00 PM, Boris Zbarsky bzbar...@mit.edu wrote:
 On 11/28/09 11:42 PM, Kenneth Russell wrote:

 From a technical standpoint, it would be feasible to use the
 WebGLUnsignedIntArray to access the Canvas's pixel data, and
 assemble RGBA pixels into integer values using just JavaScript logical
 operators.

 I assume you meant JS bitwise operators?  Do we have any indication that
 this would be faster than four array property sets?  The bitwise ops in JS
 are not necessarily particulary fast.

Yes, that's what I meant. I don't have any data on whether this would
currently be faster than the four separate byte stores.

-Ken


Re: [whatwg] An BinaryArchive API for HTML5?

2009-07-30 Thread Kenneth Russell
On Thu, Jul 30, 2009 at 6:13 AM, Sebastian Markbåge sebast...@calyptus.eu wrote:
 This suggestion seems similar to Digg's Stream project that uses multipart
 documents: http://github.com/digg/stream

 While it would be nice to have a way to parse and handle this in JavaScript,
 it shouldn't be JavaScript's responsibility to work with large object data
 and duplicating it as in-memory data strings.
 The real issue here is the overhead of each additional HTTP request for
 those thousands of objects. But that's useful for all parts of the spec if
 you can download it as a single package even without JavaScript. Images,
 CSS, background-images, JavaScript, etc. Currently you can include graphics
 as data URLs in CSS. Using a package you could package whole widgets (or
 apps) as a single request.
 I'd suggest that this belongs in a lower level API such as the URIs and
 network stack for the tags. You could specify a file within an archive by
 adding an hash with the filename to the URI:
 <img src="http://someplace.com/somearchive.tgz#myimage.jpg" />
 <style type="text/css">
 #id { background-image:
 url(http://someplace.com/somearchive.tgz#mybackgroundimage.jpg); }
 </style>
 <script src="http://someplace.com/somearchive.tgz#myscript.js"
 type="text/javascript"></script>
 var img = new Image();
 img.src = "http://someplace.com/somearchive.tgz#myimage.png";
 Now which packaging format to use would be a discussion on it's own. An easy
 route would be to use multipart/mixed that is already used for this in
 e-mails and can also be gzipped using Content-Encoding.

In the context of the 3d canvas discussions, it looks like there is a
need to load binary blobs of vertex data and feed them to the graphics
card via a JavaScript call. Here is some hypothetical IDL similar to
what is being considered:

[IndexGetter, IndexSetter]
interface CanvasFloatArray {
    readonly attribute unsigned long length;
};

interface CanvasRenderingContextGL {
    ...
    typedef unsigned long GLenum;
    void glBufferData(in GLenum target, in CanvasFloatArray data,
                      in GLenum usage);
    ...
};

Do you have some suggestions for how the data could be transferred
most efficiently to the glBufferData call? As far as I know there is
no tag which could be used to refer to the binary file within the
archive. If there were then presumably it could provide its contents
as a CanvasFloatArray or other type.

-Ken

 On Thu, Jul 30, 2009 at 11:41 AM, Anne van Kesteren ann...@opera.com
 wrote:

 On Thu, 30 Jul 2009 08:49:12 +0200, Gregg Tavares g...@google.com wrote:
  What are people's feelings on adding a Binary Archive API to HTML5?

 I think it makes more sense to build functionality like this on top of the
 File API rather than add more things into HTML5.


  It seems like it would be useful if there was browser API that let you
  download something like gzipped tar files.

 We already have that: XMLHttpRequest.


  The API would look something like
 
  var request = createArchiveRequest();
  request.open("GET", "http://someplace.com/somearchive.tgz");
  request.onfileavailable = doSomethingWithEachFileAsItArrives;
  request.send();

 I don't think we should introduce a new HTTP API.


  function doSomethingWithEachFileAsItArrives(binaryBlob) {
    // Load every image in archive
    if (binaryBlob.url.substr(-3) == ".jpg") {
       var image = new Image();
       image.src = binaryBlob.toDataURL();  // or something
       ...
    }
    // Look for a specific text file
    else if (binaryBlob.url === "myspecial.txt") {
      // getText only works if binaryBlob is valid utf-8 text.
      var text = binaryBlob.getText();
      document.getElementById("content").innerHTML = text;
    }
  }

 Having dedicated support for a subset of archiving formats in within the
 API for File objects makes sense to me. Latest draft of the File API I know
 of is

  http://dev.w3.org/2006/webapi/FileUpload/publish/FileAPI.xhtml

 and the mailing list would be public-weba...@w3.org.


 --
 Anne van Kesteren
 http://annevankesteren.nl/