[whatwg] Fwd: Why can't ImageBitmap objects have width and height attributes? (and other e-mails)

2013-07-18 Thread K. Gadd
Re-sending this because the listserv silently discarded it (You guys should
fix it to actually send the notice...)

-- Forwarded message --
From: K. Gadd k...@luminance.org
Date: Wed, Jul 17, 2013 at 6:46 PM
Subject: Re: [whatwg] Why can't ImageBitmap objects have width and height
attributes? (and other e-mails)
To: Ian Hickson i...@hixie.ch
Cc: wha...@whatwg.org


Responses inline


On Wed, Jul 17, 2013 at 5:17 PM, Ian Hickson i...@hixie.ch wrote:

 On Tue, 18 Dec 2012, Kevin Gadd wrote:
 
  Is it possible to expose the width/height of an ImageBitmap, or even
  expose all the rectangle coordinates? Exposing width/height would be
  nice for parity with Image and Canvas when writing functions that accept
  any drawable image source.
 
  Thanks for the prompt action here, this looks like a straightforward
  solution.

 I've added height, width, and pixel density. Not sure what you meant by
 the other coordinates.


By 'the other coordinates' I mean that if you constructed it from a
subrectangle of another image (via the sx, sy, sw, sh parameters) it would
be good to expose *all* those constructor arguments. This allows you to
more easily maintain a cache of ImageBitmaps without additional bookkeeping
data.
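
For illustration, a user-space wrapper of the sort that this attribute
exposure would make unnecessary might look like the following sketch (the
SubImage name and shape are hypothetical, not part of any spec):

```javascript
// Hypothetical wrapper pairing a bitmap with the source rectangle it was
// cut from, so a cache can be keyed without separate bookkeeping data.
class SubImage {
  constructor(bitmap, sx, sy, sw, sh) {
    this.bitmap = bitmap;
    this.sx = sx;
    this.sy = sy;
    this.sw = sw;
    this.sh = sh;
  }
  // A stable string key for Map-based caches.
  get key() {
    return `${this.sx},${this.sy},${this.sw},${this.sh}`;
  }
}
```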



 On Tue, 18 Dec 2012, Kevin Gadd wrote:
 
  Sorry, upon reading over the ImageBitmap part of the spec again I'm
  confused: Why is constructing an ImageBitmap asynchronous?

 Because it might involve network I/O.

  I thought any decoding isn't supposed to happen until drawImage, so I
  don't really understand why this operation involves a callback and a
  delay. Making ImageBitmap creation async means that you *cannot* use
  this as a replacement for drawImage source rectangles unless you know
  all possible source rectangles in advance. This is not possible for
  many, many use cases (scrolling through a bitmap would be one trivial
  example).

 Yeah, it's not supposed to be a replacement for drawImage().


This is why I was confused then, since I was told on this list that
ImageBitmap was a solution for the problem of drawing subrectangles of
images via drawImage (since the current specified behavior makes it
impossible to precisely draw a subrectangle). :(




  Is it async because it supports using Video and Blob as the source?

 Mainly Blob, but maybe other things in the future.


  I really love the feature set (being able to pass ImageData in is going
  to be a huge boon - no more temporary canvases just to create images
  from pixel data!) but if it's async-only I don't know how useful it will
  be for the issues that led me to starting this discussion thread in the
  first place.

 Can you elaborate on the specific use cases you have in mind?


The use case is being able to draw lots of different subrectangles of lots
of different images in a single frame.



 On Tue, 18 Dec 2012, Kevin Gadd wrote:
 
  How do you wait synchronously for a callback from inside
  requestAnimationFrame?

 You return and wait for another frame.


  Furthermore, wouldn't that mean returning once to the event loop for
  each individual drawImage call you wish to make using a source rectangle
  - so for a single scene containing lots of dynamic source rectangles you
  could end up having to wait for dozens of callbacks.

 I don't understand. Why can't you prepare them ahead of time all together?
 (As in the example in the spec, for instance.)


You can, but it's significantly more complicated. It's not something you
can easily expose in a user-consumable library wrapper either, since it
literally alters the execution model for your entire rendering frame and
introduces a pause for every group of images that needs temporary
ImageBitmap instances. I'm compiling classic 2D games to JavaScript to run
in the browser, so I literally call drawImage hundreds or thousands of
times per frame, most of the calls having a unique source rectangle. I
would potentially have to construct thousands of ImageBitmaps and wait for
all those callbacks. A cache will reduce the number of constructions I have
to do per frame, but then I have to somehow balance the risk of blowing
through the entirety of the end user's memory (a very likely thing on
mobile) against creating a very aggressive, manually flushed cache that may
not even have room for all the rectangles used in a given frame. Given that
an ImageBitmap creation operation may not be instantaneous, this really
makes me worry that the performance consequences of creating an ImageBitmap
will make it unusable for this scenario.
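
For what it's worth, the manually flushed cache described above can be
sketched as a Map keyed on source rectangle, using insertion order as LRU
order (the capacity number and key format are illustrative assumptions):

```javascript
// Illustrative LRU cache for per-rectangle bitmaps. Map iteration order is
// insertion order, so the first key is always the least recently used.
class RectCache {
  constructor(capacity = 256) {
    this.capacity = capacity;
    this.map = new Map();
  }
  get(key) {
    if (!this.map.has(key)) return undefined;
    const value = this.map.get(key);
    // Re-insert to mark this entry as most recently used.
    this.map.delete(key);
    this.map.set(key, value);
    return value;
  }
  set(key, value) {
    this.map.delete(key);
    this.map.set(key, value);
    // Evict the least recently used entry when over capacity.
    if (this.map.size > this.capacity) {
      const oldest = this.map.keys().next().value;
      this.map.delete(oldest);
    }
  }
}
```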

(I do agree that if you're building a game from scratch for HTML5 Canvas
based on the latest rev of the API, you can probably design for this by
having all your rectangles known in advance - but there are specific
rendering primitives that rely on dynamic rectangles, like for example
filling a progress bar with a texture, tiling a texture within a window, or
scrolling a larger texture within a region. I've encountered all these in
real games.)



Re: [whatwg] Adding features needed for WebGL to ImageBitmap

2013-07-18 Thread K. Gadd
To respond on the topic of WebGL/ImageBitmap integration - and in
particular some of the features requested earlier in the thread. Apologies
if I missed a post where this stuff was already addressed directly; I
couldn't follow this thread easily because of how much context was stripped
out of replies:

Having control over when or where colorspace conversion occurs would be
tremendously valuable. Right now the only place where you have control over
this is in WebGL, and when it comes to canvas each browser seems to
implement it differently. This is already a problem for people trying to do
image processing in JavaScript; an end-user of my compiler ran into this by
writing a simple app that read pixel data out of PNGs and then discovered
that every browser had its own unique interpretation of what a simple
image's data should look like when using getImageData:

https://bugzilla.mozilla.org/show_bug.cgi?id=867594

Ultimately the core issue here is that without control over colorspace
conversion, any sort of deterministic image processing in HTML5 is off the
table, and you have to write your own image decoders, encoders, and
manipulation routines in JavaScript using raw typed arrays. Maybe that's
how it has to be, but it would be cool to at least support basic variations
of these use cases in Canvas since getImageData/putImageData already exist
and are fairly well-specified (other than this problem, and some nits
around source rectangles and alpha transparency).
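
As a small illustration of why unspecified conversion breaks determinism:
even the plain sRGB transfer function reshapes pixel values non-linearly,
so browsers that disagree on whether to apply it (or an embedded profile)
will return different bytes from getImageData. The standard sRGB
encode/decode pair, for reference:

```javascript
// Standard sRGB transfer functions, operating on values in [0, 1].
function srgbToLinear(c) {
  return c <= 0.04045 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
}
function linearToSrgb(c) {
  return c <= 0.0031308 ? c * 12.92 : 1.055 * Math.pow(c, 1 / 2.4) - 0.055;
}
```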

Out of the features suggested previously in the thread, I would immediately
be able to make use of control over colorspace conversion and an ability to
opt into premultiplied alpha. Not getting premultiplied alpha, as is the
case in virtually every canvas implementation I've tried, has visible
negative consequences for image quality and also reduces the performance of
some use cases where bitmap manipulation needs to happen, due to the fact
that premultiplied alpha is the 'preferred' form for certain types of
rendering and the math works out better. I think the upsides to getting
premultiplication are the same here as they are in WebGL: faster
uploads/downloads, better results, etc.

I understand the rationale behind gregg's suggestion for flipY, but
ultimately don't know if that one makes any sense in a HTML5 context. It
basically only exists because of the annoying disagreement between APIs
like OpenGL and other APIs like HTML5 Canvas or Direct3D, specifically
about which direction the Y axis goes. Normally one would assume that you
can correct this by simply inverting heights/y coordinates in the correct
places, but when you're rendering to offscreen surfaces, the confusion over
the Y axis ends up causing you to have to do a bunch of weird things to
coordinates and sampling in order to get correct results, because your
offscreen surfaces are *actually* upside down. It's gross.

To clearly state what would make ImageBitmap useful for the use cases I
encounter and my end-users encounter:
ImageBitmap should be a canonical representation of a 2D bitmap, with a
known color space, known pixel format, known alpha representation
(premultiplied/not premultiplied), and ready for immediate rendering or
pixel data access. It's okay if it's immutable, and it's okay if
constructing one from an img or a Blob takes time, as long as once I have
an ImageBitmap I can use it to render and use it to extract pixel data
without user configuration/hardware producing unpredictable results.

Colorspace conversion would allow me to address outstanding bugs that
currently require my end users to manually strip color profiles and gamma
from their image files, and premultiplied alpha would dramatically improve
the performance of some test cases and shipped games out there based on my
compiler. (Naturally, this all requires browser vendors to implement this
stuff, so I understand that these gains would probably not manifest for
years.)

-kg


Re: [whatwg] URL resolution of fragment urls in html5 webapps

2013-07-18 Thread Igor Minar
Holy cow! IE8 and 9 (but not 10) actually resolve the in-document urls
just as I want even with base[href] set.

Could that be used as an argument for clarifying the spec on url resolution
and making the in-document navigation with base set possible?

btw Alex, I looked at the Navigation Controller and it would definitely
help if the in-document navigation was something that I could control with
it. But ideally, I shouldn't need to do anything for the in-document
navigation to work.




On Wed, Jul 10, 2013 at 11:14 AM, Igor Minar imi...@google.com wrote:




 On Wed, Jul 10, 2013 at 10:24 AM, Alex Russell slightly...@google.com wrote:

 hey Igor,

 Was just discussing this with Rafael, and it seems like the core issue
 you're flagging is that if a document has a base element, all #anchor
 navigations (which would otherwise be document relative) are now full-page
 navigations to the URL specified in the base, not the document's
 natural URL. Is that right?


 correct



 If so, we might be able to give you some control over this in the Navigation
 Controller (although it's not currently scoped as many folks didn't want to
 contemplate in-document navigation for the time being).

 But perhaps we don't need to do that: is the current behavior the same
 across browsers? If it's not, we might be able to change the spec. If it
 is, it'll be harder.


 As far as I can tell it is, because that's the easiest thing to
 implement. It sort of makes sense why - a relative anchor url is treated
 just as any relative url and is resolved as such. However, just as
 Rafael pointed out, unlike path-relative urls, I can't think of a
 scenario where resolving relative anchor urls against anything but self
 would be useful, and therefore I have a hard time thinking of existing
 code that would take advantage of and rely on this kind of resolution.

 In the ideal world, I'd love for the spec to say that
 - all relative urls except for relative anchor urls should be resolved
 against document.baseURI (which is tied to location.href unless base[href]
 is set)
 - relative anchor urls should always resolve against location.href

 I think that this kind of behavior would make the url resolution work in
 all common and currently used scenarios.
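
Igor's proposed split can be illustrated with the standard URL parser (the
urls below are made up):

```javascript
const baseHref = 'https://example.com/app/';               // value of base[href]
const locationHref = 'https://example.com/app/deep/page';  // after pushState

// Current behavior: every relative url, fragments included, resolves
// against the base.
const current = new URL('#section', baseHref).href;

// Proposed behavior: fragment-only urls would resolve against
// location.href instead, staying in-document.
const proposed = new URL('#section', locationHref).href;
```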

 /i





 Regards


 On Wed, Jul 10, 2013 at 7:11 AM, Igor Minar imi...@google.com wrote:

 The current url resolution as described in the spec
 (http://www.whatwg.org/specs/web-apps/current-work/#resolving-urls)
 results in some unhelpful behavior when the following combination of
 web technologies is used in a client-side web app:

 - a combination of path-relative urls (<a
 href="relative/url/to/somewhere">link</a>) and fragment/anchor urls (<a
 href="#anchorUrl">link</a>)
 - history.pushState - used for deep-linking
 - base[href] - used to properly resolve the relative urls to the root of
 the application in various deployment environments


 Once history.pushState is used to change location.href, the
 path-relative urls resolve correctly as expected against the base[href],
 but anchor urls that are only useful if resolved against the current
 document.baseURI also, unsurprisingly, resolve against the base[href].
 This behavior makes them unsuitable for these kinds of applications,
 which is a big loss in the developer's toolbox and in fact breaks
 existing web features like svg that depend on anchor urls to reference
 nodes in the current document.

 Does anyone have thoughts on how one could build a client-side app that
 can be deployed in various contexts without any special server-side
 templating or build-time pre-processing?

 The base element looks like a perfect solution for this, if only it
 didn't break anchor urls.






Re: [whatwg] URL resolution of fragment urls in html5 webapps

2013-07-18 Thread Jonas Sicking
On Wed, Jul 10, 2013 at 10:24 AM, Alex Russell slightly...@google.com wrote:
 hey Igor,

 Was just discussing this with Rafael, and it seems like the core issue
 you're flagging is that if a document has a base element, all #anchor
 navigations (which would otherwise be document relative) are now full-page
 navigations to the URL specified in the base, not the document's
 natural URL. Is that right?

 If so, we might be able to give you some control over this in the Navigation
 Controller (although it's not currently scoped as many folks didn't want to
 contemplate in-document navigation for the time being).

 But perhaps we don't need to do that: is the current behavior the same
 across browsers? If it's not, we might be able to change the spec. If it
 is, it'll be harder.

I really don't want to add something to the navigation controller
specifically for this unless we can show that this is a common use
case.

Navigation controller is hairy enough as it is without trying to toss
in edge cases into it in at least the first version.

Igor: I don't quite understand the problem that you are running in to.
Can you provide an example which includes URLs of the initial document
url, the url that you pass to pushState (including if it's relative or
absolute), the value in base (again, including if it's relative or
absolute).

/ Jonas

 On Wed, Jul 10, 2013 at 7:11 AM, Igor Minar imi...@google.com wrote:

 The current url resolution as described in the spec
 (http://www.whatwg.org/specs/web-apps/current-work/#resolving-urls)
 results in some unhelpful behavior when the following combination of
 web technologies is used in a client-side web app:

 - a combination of path-relative urls (<a
 href="relative/url/to/somewhere">link</a>) and fragment/anchor urls (<a
 href="#anchorUrl">link</a>)
 - history.pushState - used for deep-linking
 - base[href] - used to properly resolve the relative urls to the root of
 the application in various deployment environments


 Once history.pushState is used to change location.href, the
 path-relative urls resolve correctly as expected against the base[href],
 but anchor urls that are only useful if resolved against the current
 document.baseURI also, unsurprisingly, resolve against the base[href].
 This behavior makes them unsuitable for these kinds of applications,
 which is a big loss in the developer's toolbox and in fact breaks
 existing web features like svg that depend on anchor urls to reference
 nodes in the current document.

 Does anyone have thoughts on how one could build a client-side app that can
 be deployed in various contexts without any special server-side templating
 or build-time pre-processing?

 The base element looks like a perfect solution for this, if only it didn't
 break anchor urls.



Re: [whatwg] Adding features needed for WebGL to ImageBitmap

2013-07-18 Thread Mark Callow
On 2013/07/18 16:34, K. Gadd wrote:

 I understand the rationale behind gregg's suggestion for flipY, but
 ultimately don't know if that one makes any sense in a HTML5 context. It
 basically only exists because of the annoying disagreement between APIs
 like OpenGL and other APIs like HTML5 Canvas or Direct3D, specifically
 about which direction the Y axis goes. 
It exists because of the annoying disagreement between the orientation
of the data in most image file formats and the default orientation for
texture images in OpenGL. There are some image file formats that have a
bottom-left orientation, and there is one extremely common format, EXIF,
that includes metadata giving the visual orientation of the image.
The flipY item in the proposed dictionary could be handily extended to
an enum. E.g.,

  * none - leave orientation alone
  * flipY - ignore the EXIF orientation, just flip in Y
  * topLeftEXIF - identify visual orientation from EXIF data and
re-order data so top-down, left-to-right processing for display
results in correct visual orientation
  * bottomRightEXIF - as above but ordered for bottom-up,
left-to-right processing
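
As a sketch of how such an enum might drive decoding, a hypothetical
helper that decides whether pixel rows should be reversed (reducing EXIF
orientation to a single bottom-up boolean is a simplifying assumption;
real EXIF defines eight orientations):

```javascript
// Decide whether pixel rows must be reversed for the requested
// orientation option, given whether the source data is stored bottom-up.
function shouldFlipRows(option, exifBottomUp) {
  switch (option) {
    case 'none':            return false;          // leave data alone
    case 'flipY':           return true;           // unconditional flip
    case 'topLeftEXIF':     return exifBottomUp;   // normalize to top-down
    case 'bottomRightEXIF': return !exifBottomUp;  // normalize to bottom-up
    default: throw new RangeError('unknown orientation option: ' + option);
  }
}
```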

Regards

-Mark

-- 

NOTE: This electronic mail message may contain confidential and
privileged information from HI Corporation. If you are not the intended
recipient, any disclosure, photocopying, distribution or use of the
contents of the received information is prohibited. If you have received
this e-mail in error, please notify the sender immediately and
permanently delete this message and all related copies.



Re: [whatwg] Proposal: Media element - add attributes for discovery of playback rate support

2013-07-18 Thread John Mellor
If the user is speeding up playback to improve their productivity (spend
less time watching e.g. a lecture), then they may well be willing to wait
until enough of the video is buffered, since they can do something else in
the meantime.

For example, by spending 30 minutes buffering the first half of a 1-hour
live stream, the user could then watch the whole hour at double speed.

Obviously the UI should make it clear what's going on (rather than
lengthily buffering without explanation).
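
The arithmetic generalizes: to watch D minutes of content at rate r from a
live stream without stalling, roughly D*(1 - 1/r) minutes must be buffered
before starting (30 minutes for a 60-minute stream at 2x, matching the
example above). A minimal sketch:

```javascript
// Minutes of live content to buffer before starting playback at `rate`,
// so that `durationMin` minutes can then play without stalling: the
// stream produces durationMin of content while playback consumes it in
// durationMin / rate wall-clock minutes.
function bufferLeadMinutes(durationMin, rate) {
  return durationMin * (1 - 1 / rate);
}
```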
On 17 Jul 2013 18:41, Peter Carlson (carlsop) carl...@cisco.com wrote:

 Ian

 For on-demand movies or VOD, the available playback speeds may be
 determined by the server of the content. This cannot be overcome by
 client-side buffering.

 Peter Carlson




Re: [whatwg] URL resolution of fragment urls in html5 webapps

2013-07-18 Thread Igor Minar
On Thu, Jul 18, 2013 at 2:13 AM, Jonas Sicking jo...@sicking.cc wrote:

 On Wed, Jul 10, 2013 at 10:24 AM, Alex Russell slightly...@google.com
 wrote:
  hey Igor,
 
  Was just discussing this with Rafael, and it seems like the core issue
  you're flagging is that if a document has a base element, all #anchor
  navigations (which would otherwise be document relative) are now
 full-page
  navigations to the URL specified in the base, not the document's
  natural URL. Is that right?
 
  If so, we might be able to give you some control over this in the Navigation
  Controller (although it's not currently scoped as many folks didn't want
 to
  contemplate in-document navigation for the time being).
 
  But perhaps we don't need to do that: is the current behavior the same
  across browsers? If it's not, we might be able to change the spec. If it
  is, it'll be harder.

 I really don't want to add something to the navigation controller
 specifically for this unless we can show that this is a common use
 case.

 Navigation controller is hairy enough as it is without trying to toss
 in edge cases into it in at least the first version.

 Igor: I don't quite understand the problem that you are running in to.
 Can you provide an example which includes URLs of the initial document
 url, the url that you pass to pushState (including if it's relative or
 absolute), the value in base (again, including if it's relative or
 absolute).


pushState is actually not even needed to reproduce the same problem. It's
enough that the base[href] doesn't match the url of the current document.

Check out this simple document:
- code+preview: http://plnkr.co/edit/TtH7rjQVKU6qN0QOxULW?p=preview
- preview only: http://run.plnkr.co/bY3fF8OOXKq5MrSu/

pushState is just an easy way to get into a situation where the url of
the current document changes, and base[href] prevents all in-document
links from resolving correctly.

/i



 / Jonas

  On Wed, Jul 10, 2013 at 7:11 AM, Igor Minar imi...@google.com wrote:
 
  The current url resolution as described in the spec
  (http://www.whatwg.org/specs/web-apps/current-work/#resolving-urls)
  results in some unhelpful behavior when the following combination of
  web technologies is used in a client-side web app:
 
  - a combination of path-relative urls (<a
  href="relative/url/to/somewhere">link</a>) and fragment/anchor urls (<a
  href="#anchorUrl">link</a>)
  - history.pushState - used for deep-linking
  - base[href] - used to properly resolve the relative urls to the root of
  the application in various deployment environments
 
 
  Once history.pushState is used to change location.href, the
  path-relative urls resolve correctly as expected against the
  base[href], but anchor urls that are only useful if resolved against
  the current document.baseURI also, unsurprisingly, resolve against the
  base[href]. This behavior makes them unsuitable for these kinds of
  applications, which is a big loss in the developer's toolbox and in
  fact breaks existing web features like svg that depend on anchor urls
  to reference nodes in the current document.
 
  Does anyone have thoughts on how one could build a client-side app
  that can be deployed in various contexts without any special
  server-side templating or build-time pre-processing?

  The base element looks like a perfect solution for this, if only it
  didn't break anchor urls.
 



Re: [whatwg] Fwd: Why can't ImageBitmap objects have width and height attributes? (and other e-mails)

2013-07-18 Thread Justin Novosad
On Thu, Jul 18, 2013 at 3:18 AM, K. Gadd k...@luminance.org wrote:



 
   I thought any decoding isn't supposed to happen until drawImage, so I
   don't really understand why this operation involves a callback and a
   delay. Making ImageBitmap creation async means that you *cannot* use
   this as a replacement for drawImage source rectangles unless you know
   all possible source rectangles in advance. This is not possible for
   many, many use cases (scrolling through a bitmap would be one trivial
   example).
 
  Yeah, it's not supposed to be a replacement for drawImage().
 

 This is why I was confused then, since I was told on this list that
 ImageBitmap was a solution for the problem of drawing subrectangles of
 images via drawImage (since the current specified behavior makes it
 impossible to precisely draw a subrectangle). :(


This is a really good point and a compelling case. I was under the
impression that the color bleeding prevention was to be solved with
ImageBitmaps, but as you point out, it breaks down for cutting rectangles
on the fly. Furthermore, I think there is also no good solution for
synchronously cutting rectangles out of animated image sources like an
animated canvas or a playing video. Two possible solutions that have been
brought up so far on this list:

a) have synchronous versions of createImageBitmap
b) have a rendering option to modify drawImage's edge filtering behavior
(either an argument to drawImage or a rendering context attribute)

I took note of this concern here:
http://wiki.whatwg.org/index.php?title=New_Features_Awaiting_Implementation_Interest




  -kg



Re: [whatwg] Canvas 2D memory management

2013-07-18 Thread Ian Hickson
On Wed, 9 Jan 2013, Ashley Gullen wrote:

 Some developers are starting to design large scale games using our HTML5 
 game engine, and we're finding we're running in to memory management 
 issues.  Consider a device with 50mb of texture memory available.  A 
 game might contain 100mb of texture assets, but only use a maximum of 
 30mb of them at a time (e.g. if there are three levels each using 30mb 
 of different assets, and a menu that uses 10mb of assets).  This game 
 ought to fit in memory at all times, but if a user agent is not smart 
 about how image loading is handled, it could run out of memory.

 [...]
 
 Some ideas:
 1) add new functions to the canvas 2D context, such as:
 ctx.load(image): cache an image in memory so it can be immediately drawn
 when drawImage() is first used
 ctx.unload(image): release the image from memory

The Web API tries to use garbage collection for this; the idea being that 
you load the images you need when you need them, then discard them when 
you're done, and the memory gets reclaimed when possible.

We could introduce a mechanism to flush ImageBitmap objects more forcibly, 
e.g. imageBitmap.discard(). This would be a pretty new thing, though. Are 
there any browser vendors who have opinions about this?

We should probably wait to see if people are able to use ImageBitmap with 
garbage collection first. Note, though, that ImageBitmap doesn't really 
add anything you couldn't do with img before, in the non-Worker case. 
That is, you could just create img elements then lose references to them 
when you wanted them GC'ed; if that isn't working today, I don't see why 
it would start working with ImageBitmap.
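
A user-space approximation of Ashley's load/unload idea that still leans
on garbage collection, as suggested above, would track an estimated byte
budget and drop the oldest strong references once it is exceeded; a sketch
(the 4-bytes-per-pixel estimate, budget figure, and class shape are all
assumptions):

```javascript
// Rough user-space texture budget: hold strong references to at most
// `budgetBytes` worth of decoded images (estimated at 4 bytes per pixel)
// and drop the oldest references beyond that so the GC can reclaim them.
class ImageBudget {
  constructor(budgetBytes = 50 * 1024 * 1024) {
    this.budgetBytes = budgetBytes;
    this.usedBytes = 0;
    this.entries = new Map(); // name -> { image, bytes }, insertion-ordered
  }
  retain(name, image, width, height) {
    const bytes = width * height * 4;
    this.entries.set(name, { image, bytes });
    this.usedBytes += bytes;
    // Drop oldest references until back under budget (keep at least one).
    while (this.usedBytes > this.budgetBytes && this.entries.size > 1) {
      const [oldest, entry] = this.entries.entries().next().value;
      this.entries.delete(oldest); // reference dropped; GC may reclaim it
      this.usedBytes -= entry.bytes;
    }
  }
}
```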

-- 
Ian Hickson   U+1047E)\._.,--,'``.fL
http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'


Re: [whatwg] [canvas] coordinate space definition bogus?

2013-07-18 Thread Ian Hickson
On Tue, 29 Jan 2013, Dirk Schulze wrote:

 I think the definition of coordinate space is misleading in the 
 specification.
 
# The canvas element has two attributes to control the size of the 
# coordinate space: width and height. 
 
 This implies that the coordinate space is limited by this size. This is 
 not the case. The coordinate space can be transformed and scaled all the 
 time. In theory the size of the coordinate space is infinite. But the 
 size of the surface could be defined by 'width' and 'height'.

Yeah, that's bogus. I've tried to fix the text.


 The same problem occurs with the definition of clipping regions, that by 
 default, depend on the size of the coordinate space. A simple scale and 
 drawing over the size of the 'width' and 'height' values demonstrate 
 that the clipping region can not be measured by the size of the 
 coordinate space (or it could, if it is assumed to be infinite).
 
 For clip, why isn't it possible to just say that clip() does not clip if 
 there is no currentPath? This would at least avoid this trap.

Fixed. Thanks.

-- 
Ian Hickson   U+1047E)\._.,--,'``.fL
http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'


Re: [whatwg] Adding features needed for WebGL to ImageBitmap

2013-07-18 Thread Justin Novosad
On Thu, Jul 18, 2013 at 5:45 AM, Mark Callow callow.m...@artspark.co.jp wrote:

 On 2013/07/18 16:34, K. Gadd wrote:
 
  I understand the rationale behind gregg's suggestion for flipY, but
  ultimately don't know if that one makes any sense in a HTML5 context. It
  basically only exists because of the annoying disagreement between APIs
  like OpenGL and other APIs like HTML5 Canvas or Direct3D, specifically
  about which direction the Y axis goes.
 It exists because of the annoying disagreement between the orientation
 of the data in most image file formats and the default orientation for
 texture images in OpenGL. There are some image file formats that have a
 bottom-left orientation, and there is one extremely common format, EXIF,
 that includes metadata giving the visual orientation of the image.
 The flipY item in the proposed dictionary could be handily extended to
 an enum. E.g.,

   * none - leave orientation alone
   * flipY - ignore the EXIF orientation, just flip in Y
   * topLeftEXIF - identify visual orientation from EXIF data and
 re-order data so top-down, left-to-right processing for display
 results in correct visual orientation
   * bottomRightEXIF - as above but ordered for bottom-up,
 left-to-right processing

 Regards

 -Mark


EXIF may be a de facto standard in some image formats we use today, but I
think we should probably be more general and just refer to this as image
media metadata. What should the default be? I hesitate between 'topLeft'
and 'none'.


Re: [whatwg] remove resetClip from the Canvas 2D spec

2013-07-18 Thread Ian Hickson
On Tue, 29 Jan 2013, Rik Cabanier wrote:
 
 we were looking at how resetClip could be implemented in WebKit. Looking 
 over the Core Graphics implementation, this feature can't be implemented 
 without significant overhead. I also found an email from 2007 where 
 Maciej states the same concern:
http://permalink.gmane.org/gmane.org.w3c.whatwg.discuss/10582

The solution on Mac is probably for Apple to update CoreGraphics to 
support this feature.

This is a quite widely requested feature.


 Since no browser has implemented it, can it be removed from the spec?

It's new, so no browser having implemented it is expected.


If browsers don't implement it, it'll get removed in due course. But it 
would be sad for authors, who are the main concern here.

-- 
Ian Hickson   U+1047E)\._.,--,'``.fL
http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'


Re: [whatwg] [Canvas] Behavior on non-invertable CTM

2013-07-18 Thread Ian Hickson
On Tue, 29 Jan 2013, Dirk Schulze wrote:
 
 The spec doesn't have any wording about the behavior on non-invertible 
 CTMs on Canvas contexts. Is it still possible to add segments to the 
 current path once a CTM is not invertible anymore? Does the path get 
 rejected completely then? Implementations are fairly different.
 
 Here are two examples (code attached at the end of the mail as well):
 
 http://jsfiddle.net/Dghuh/1/
 http://jsfiddle.net/Dghuh/2/
 
 Note that the path is stroked after restoring the initial CTM in both 
 examples.
 
 The first one does scale(0), which should make the CTM non-invertible, 
 yet WebKit still applies lineTo and closePath for some reason. IE and FF 
 refuse to draw anything.

scale(0) is invalid, and should throw an exception.

If you do scale(0,0), the browsers act the same as with your second test 
that uses setTransform() with 6 zeros.


 The second does setTransform(0,0,0,0,0,0), which should reset the CTM to 
 a zero matrix (again, not invertible). IE, Opera and FF draw a line to 
 0,0 and close the path afterwards (which kind of makes sense, since the 
 universe is convoluted to one point). WebKit refuses the lineTo command 
 and closes the path as expected.

WebKit seems to just be wrong here, and the others right.


 This is an edge case, but should still be clarified in the spec.

I don't understand what there is to clarify. In both cases, the behaviour 
seems well-defined: if you're transforming everything to zero, that's what 
the result will be. Zero. Firefox's behaviour is the right one.
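
The invertibility question reduces to the determinant of the 2x2 linear
part of the canvas transform matrix (a, b, c, d, e, f); a quick sketch:

```javascript
// A canvas transform (a, b, c, d, e, f) maps (x, y) to
// (a*x + c*y + e, b*x + d*y + f). It is invertible iff the determinant
// of its 2x2 linear part, a*d - b*c, is nonzero.
function isInvertible(a, b, c, d) {
  return a * d - b * c !== 0;
}
```

Under this test, both scale(0, 0) and setTransform(0, 0, 0, 0, 0, 0) yield
a non-invertible matrix that collapses every point to a single location.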

-- 
Ian Hickson   U+1047E)\._.,--,'``.fL
http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'


Re: [whatwg] Adding features needed for WebGL to ImageBitmap

2013-07-18 Thread Justin Novosad
To help us iterate further, I've attempted to capture the essence of this
thread on the whatwg wiki, using the problem solving template.
I tried to capture the main ideas that we seem to agree on so far and I
started to think about how to handle special cases.

http://wiki.whatwg.org/wiki/ImageBitmap_Options

-Justin




On Thu, Jul 18, 2013 at 1:22 PM, Justin Novosad ju...@google.com wrote:




 On Thu, Jul 18, 2013 at 5:45 AM, Mark Callow 
 callow.m...@artspark.co.jp wrote:

 On 2013/07/18 16:34, K. Gadd wrote:
 
  I understand the rationale behind gregg's suggestion for flipY, but
  ultimately don't know if that one makes any sense in a HTML5 context. It
  basically only exists because of the annoying disagreement between APIs
  like OpenGL and other APIs like HTML5 Canvas or Direct3D, specifically
  about which direction the Y axis goes.
 It exists because of the annoying disagreement between the orientation
 of the data in most image file formats and the default orientation for
 texture images in OpenGL. There are some image file formats that have
 a bottom-left orientation, and there is one extremely common format,
 EXIF, that includes metadata giving the visual orientation of the image.
 The flipY item in the proposed dictionary could be handily extended to
 an enum. E.g.,

   * none - leave orientation alone
   * flipY - ignore the EXIF orientation, just flip in Y
   * topLeftEXIF - identify visual orientation from EXIF data and
 re-order data so top-down, left-to-right processing for display
 results in correct visual orientation
   * bottomRightEXIF - as above but ordered for bottom-up,
 left-to-right processing

 Regards

 -Mark


 EXIF may be a de facto standard in some image formats we use today, but I
 think we should probably be more general and just refer to this as image
 media metadata.  What should the default be? I hesitate between 'topLeft'
 and 'none'.
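
Mark's proposed enum could be applied at decode time roughly as follows; a
sketch over plain pixel rows (the function name is mine, and the two EXIF
cases are omitted since they depend on per-image metadata):

```javascript
// Apply an orientation mode to decoded pixel data, represented here as an
// array of rows (top-down). Mode names follow the enum proposed in this thread.
function orientRows(rows, mode) {
  switch (mode) {
    case 'none':
      return rows;                   // leave orientation alone
    case 'flipY':
      return rows.slice().reverse(); // ignore EXIF, just flip in Y
    default:
      // 'topLeftEXIF' / 'bottomRightEXIF' would need the image's EXIF tag here.
      throw new Error('unhandled mode: ' + mode);
  }
}
```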





[whatwg] Notifications: eventTime

2013-07-18 Thread Anne van Kesteren
Chrome supports Notifications.eventTime:
http://developer.chrome.com/extensions/notifications.html#type-NotificationOptions
I suggest we add that to the specification. By default it's not there,
but if specified it signifies the notification is for an event at a
specific time, such as a calendar entry or boarding pass alarm.

Whether this should be a DOMTimeStamp or Date is a bit unclear to me
and so far es-discuss has been of little help, but I hope that'll
clear up soon enough.


--
http://annevankesteren.nl/


Re: [whatwg] Challenging canvas.supportsContext

2013-07-18 Thread Benoit Jacob
The thread seems to have settled down.

I still believe that supportsContext, in its current form, should be
removed from the HTML spec, because as currently spec'd it could be
implemented as just returning whether WebGLRenderingContext is defined. I
also still believe that it will be exceedingly hard to spec supportsContext
in a way that makes it useful as opposed to just calling getContext.

Emails in this thread have conveyed the idea that it is already useful to
return whether WebGL is at least not blacklisted, regardless of whether
actual context creation would succeed. That, however, is impossible to
specify, and depends too much on details of how some current browsers and
platforms work:
 - driver blacklists will hopefully be a thing of the past, eventually.
 - on low-end mobile devices, the main cause of WebGL context creation
failure is not blacklists, but plain OpenGL context creation failures, or
non-conformant OpenGL behavior, or OOM'ing right after context creation.
For these reasons, justifying supportsContext by driver blacklisting seems
like encoding short-term contingencies into the HTML spec, which we
shouldn't do. Besides, even if we wanted to do that, there would remain the
problem that that's impossible to spec in a precise and testable way.

For these reasons, I still think that supportsContext should be removed
from the spec.

Benoit



2013/6/19 Benoit Jacob jacob.benoi...@gmail.com

 Dear list,

 I'd like to question the usefulness of canvas.supportsContext. I tried to
 think of an actual application use case for it, and couldn't find one. It
 also doesn't seem like any valid application use case was given on this
 list when this topic was discussed around September 2012.

 The closest thing that I could find being discussed, was use cases by JS
 frameworks or libraries that already expose similar feature-detection APIs.
 However, that only shifts the question to: what is the reason for them to
 expose such APIs? In the end, I claim that the only thing that we should
 recognize as a reason to add a feature to the HTML spec, is *application*
 use cases.

 So let's look at the naive application usage pattern for supportsContext:

   if (canvas.supportsContext('webgl')) {
     context = canvas.getContext('webgl');
   }

 The problem is that the same can be achieved with just the getContext
 call, and checking whether it succeeded.

 In other words, I'm saying that no matter what JS libraries/frameworks may
 offer for feature detection, in the end, applications don't want to just *
 detect* features --- applications want to *use* features. So they'll just
 pair supportsContext calls with getContext calls, making the
 supportsContext calls useless.
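
The detect-then-use pairing collapses into a single call; a sketch of
detection via getContext alone (the helper name is mine):

```javascript
// Sketch: feature-detect by attempting context creation directly.
// getContext returns null on failure (and may throw for unrecognized
// context types), so no separate supportsContext probe is needed.
function tryGetContext(canvas, type) {
  try {
    return canvas.getContext(type);
  } catch (e) {
    return null;
  }
}
```

Usage would be: const gl = tryGetContext(canvas, 'webgl'); if (gl) { /* use it */ } --
the truthiness check replaces supportsContext entirely.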

 There is also the argument that supportsContext can be much cheaper than a
 getContext, given that it only has to guarantee that getContext must fail
 if supportsContext returned false. But this argument is overlooking that in
 the typical failure case, which is failure due to system/driver
 blacklisting, getContext returns just as fast as supportsContext --- as
 they both just check the blacklist and return. Outside of exceptional cases
 (out of memory...), the slow path in getContext is the *success* case,
 and again, in that case a real application would want to actually *use*
 that context.

 Keep in mind that supportsContext can't guarantee that if it returns true,
 then a subsequent getContext will succeed. The spec doesn't require it to,
 either. So if the existence of supportsContext misleads application
 developers into no longer checking for getContext failures, then we'll just
 have rendered canvas-using applications a little bit more fragile. Another
 problem with supportsContext is that it's untestable, at least when it
 returns true; it is spec-compliant to just implement it as returning
 whether the JS interface for the required canvas context exists, which is
 quite useless. Given such deep problems, I think that the usefulness bar
 for accepting supportsContext into the spec should be quite high.

 So, is there an application use case that actually benefits from
 supportsContext?

 Cheers,
 Benoit




Re: [whatwg] Proposal: Media element - add attributes for discovery of playback rate support

2013-07-18 Thread Brendan Long
On 07/18/2013 06:54 AM, John Mellor wrote:
 If the user is speeding up playback to improve their productivity (spend
 less time watching e.g. a lecture), then they may well be willing to wait
 until enough of the video is buffered, since they can do something else in
 the meantime.

 For example by spending 30m buffering the first half of a 1 hour live
 stream, the user could then watch the whole hour at double speed.
This is how DVRs work with live TV, and people seem to like it (well,
they like it more than not being able to fast-forward at all...).


Re: [whatwg] Canvas 2D memory management

2013-07-18 Thread Justin Novosad
On Thu, Jul 18, 2013 at 12:50 PM, Ian Hickson i...@hixie.ch wrote:

 On Wed, 9 Jan 2013, Ashley Gullen wrote:
 
  Some developers are starting to design large scale games using our HTML5
  game engine, and we're finding we're running in to memory management
  issues.  Consider a device with 50mb of texture memory available.  A
  game might contain 100mb of texture assets, but only use a maximum of
  30mb of them at a time (e.g. if there are three levels each using 30mb
  of different assets, and a menu that uses 10mb of assets).  This game
  ought to fit in memory at all times, but if a user agent is not smart
  about how image loading is handled, it could run out of memory.
 
  [...]
 
  Some ideas:
  1) add new functions to the canvas 2D context, such as:
  ctx.load(image): cache an image in memory so it can be immediately drawn
  when drawImage() is first used
  ctx.unload(image): release the image from memory

 The Web API tries to use garbage collection for this; the idea being that
 you load the images you need when you need them, then discard them when
 you're done, and the memory gets reclaimed when possible.

 We could introduce a mechanism to flush ImageBitmap objects more forcibly,
 e.g. imageBitmap.discard(). This would be a pretty new thing, though. Are
 there any browser vendors who have opinions about this?

 We should probably wait to see if people are able to use ImageBitmap with
 garbage collection first. Note, though, that ImageBitmap doesn't really
 add anything you couldn't do with img before, in the non-Worker case.
 That is, you could just create img elements then lose references to them
 when you wanted them GC'ed; if that isn't working today, I don't see why
 it would start working with ImageBitmap.


This is probably an area where most browsers could do a better job.
Browsers should be able to handle the texture memory issues automatically
without any new APIs; if they can't, then file bug reports.  If garbage
collection is not kicking in at the right time, report it to the vendor.
ImageBitmap should provide the same kind of pinning semantics as the
suggested ctx.load/unload. However, one weakness of the current API is that
upon construction of the ImageBitmap, the browser does not know whether the
asset will be used with a GPU-accelerated rendering context or not. If this
information were available, the asset could be pre-cached on the GPU when
appropriate.  Maybe something like ctx.prefetch(image) would be appropriate
for warming up the caches.
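
For comparison, ctx.load/unload-style pinning can already be approximated in
script today by holding and dropping references; a sketch (the class and
method names are mine, not proposed API):

```javascript
// Sketch: explicit pinning layered on ordinary GC semantics. load() pins a
// bitmap by holding a strong reference; unload() merely drops the reference,
// and actual reclamation is still up to the garbage collector.
class BitmapCache {
  constructor() {
    this.map = new Map();
  }
  load(key, bitmap) {
    this.map.set(key, bitmap);
    return bitmap;
  }
  unload(key) {
    return this.map.delete(key);
  }
  get(key) {
    return this.map.get(key);
  }
}
```

What this cannot do, and what Justin's ctx.prefetch(image) idea targets, is
tell the browser ahead of time which rendering context the asset is destined
for.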


 --
 Ian Hickson   U+1047E)\._.,--,'``.fL
 http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
 Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'



Re: [whatwg] Proposal: Media element - add attributes for discovery of playback rate support

2013-07-18 Thread Eric Carlson

On Jul 18, 2013, at 1:13 PM, Brendan Long s...@brendanlong.com wrote:

 On 07/18/2013 06:54 AM, John Mellor wrote:
 If the user is speeding up playback to improve their productivity (spend
 less time watching e.g. a lecture), then they may well be willing to wait
 until enough of the video is buffered, since they can do something else in
 the meantime.
 
 For example by spending 30m buffering the first half of a 1 hour live
 stream, the user could then watch the whole hour at double speed.
 This is how DVRs work with live TV, and people seem to like it (well,
 they like it more than not being able to fast-forward at all...).

  And it works because a DVR has lots of disk space. This is not the case with 
all devices that support the media element.

 Even a DVR, however, won't always let you change the playback speed. For 
example it isn't possible to play at greater than 1x past the current time when 
watching a live stream. If I am watching a live stream and I try to play past 
the end of the buffered video, my DVR drops back to 1x and won't let me change 
the speed. It doesn't automatically pause and buffer for a while so it can play 
at a faster rate.

  It isn't always possible to play a media stream at an arbitrary speed. It is 
foolish to pretend otherwise as the current spec does. 

eric



Re: [whatwg] Canvas 2D memory management

2013-07-18 Thread Rik Cabanier
On Thu, Jul 18, 2013 at 2:03 PM, Justin Novosad ju...@google.com wrote:

 On Thu, Jul 18, 2013 at 12:50 PM, Ian Hickson i...@hixie.ch wrote:

  On Wed, 9 Jan 2013, Ashley Gullen wrote:
  
   Some developers are starting to design large scale games using our
 HTML5
   game engine, and we're finding we're running in to memory management
   issues.  Consider a device with 50mb of texture memory available.  A
   game might contain 100mb of texture assets, but only use a maximum of
   30mb of them at a time (e.g. if there are three levels each using 30mb
   of different assets, and a menu that uses 10mb of assets).  This game
   ought to fit in memory at all times, but if a user agent is not smart
   about how image loading is handled, it could run out of memory.
  
   [...]
  
   Some ideas:
   1) add new functions to the canvas 2D context, such as:
   ctx.load(image): cache an image in memory so it can be immediately
 drawn
   when drawImage() is first used
   ctx.unload(image): release the image from memory
 
  The Web API tries to use garbage collection for this; the idea being that
  you load the images you need when you need them, then discard them when
  you're done, and the memory gets reclaimed when possible.
 
  We could introduce a mechanism to flush ImageBitmap objects more
 forcibly,
  e.g. imageBitmap.discard(). This would be a pretty new thing, though. Are
  there any browser vendors who have opinions about this?
 
  We should probably wait to see if people are able to use ImageBitmap with
  garbage collection first. Note, though, that ImageBitmap doesn't really
  add anything you couldn't do with img before, in the non-Worker case.
  That is, you could just create img elements then lose references to
 them
  when you wanted them GC'ed; if that isn't working today, I don't see why
  it would start working with ImageBitmap.
 

 This is probably an area where most browsers could do a better job.
 Browsers should be able to handle the texture memory issues automatically
 without any new APIs; if they can't, then file bug reports.  If garbage
 collection is not kicking in at the right time, report it to the vendor.


Does the JS VM know about the image bits? It seems not since they live on
the C++ side so the imageBitmap could look like a small object that is
GC'ed later.


 ImageBitmap should provide the same kind of pinning semantics as the
 suggested ctx.load/unload. However, one weakness of the current API is that
 upon construction of the ImageBitmap, the browser does not know whether the
 asset will be used with a GPU-accelerated rendering context or not. If this
 information were available, the asset could be pre-cached on the GPU when
 appropriate.  Maybe something like ctx.prefetch(image) would be appropriate
 for warming up the caches.


That seems too implementation specific.


Re: [whatwg] Canvas 2D memory management

2013-07-18 Thread Boris Zbarsky

On 7/18/13 5:18 PM, Rik Cabanier wrote:

Does the JS VM know about the image bits?


For what it's worth, at least in Firefox it would; we already tell the 
JS VM about all sorts of other large C++-side allocations owned by JS 
objects.


-Boris


Re: [whatwg] Canvas 2D memory management

2013-07-18 Thread Justin Novosad
On Thu, Jul 18, 2013 at 5:18 PM, Rik Cabanier caban...@gmail.com wrote:



 On Thu, Jul 18, 2013 at 2:03 PM, Justin Novosad ju...@google.com wrote:

 On Thu, Jul 18, 2013 at 12:50 PM, Ian Hickson i...@hixie.ch wrote:

  On Wed, 9 Jan 2013, Ashley Gullen wrote:
  
   Some developers are starting to design large scale games using our
 HTML5
   game engine, and we're finding we're running in to memory management
   issues.  Consider a device with 50mb of texture memory available.  A
   game might contain 100mb of texture assets, but only use a maximum of
   30mb of them at a time (e.g. if there are three levels each using 30mb
   of different assets, and a menu that uses 10mb of assets).  This game
   ought to fit in memory at all times, but if a user agent is not smart
   about how image loading is handled, it could run out of memory.
  
   [...]
  
   Some ideas:
   1) add new functions to the canvas 2D context, such as:
   ctx.load(image): cache an image in memory so it can be immediately
 drawn
   when drawImage() is first used
   ctx.unload(image): release the image from memory
 
  The Web API tries to use garbage collection for this; the idea being
 that
  you load the images you need when you need them, then discard them when
  you're done, and the memory gets reclaimed when possible.
 
  We could introduce a mechanism to flush ImageBitmap objects more
 forcibly,
  e.g. imageBitmap.discard(). This would be a pretty new thing, though.
 Are
  there any browser vendors who have opinions about this?
 
  We should probably wait to see if people are able to use ImageBitmap
 with
  garbage collection first. Note, though, that ImageBitmap doesn't really
  add anything you couldn't do with img before, in the non-Worker case.
  That is, you could just create img elements then lose references to
 them
  when you wanted them GC'ed; if that isn't working today, I don't see why
  it would start working with ImageBitmap.
 

 This is probably an area where most browsers could do a better job.
 Browsers should be able to handle the texture memory issues automatically
 without any new APIs; if they can't, then file bug reports.  If garbage
 collection is not kicking in at the right time, report it to the vendor.


 Does the JS VM know about the image bits? It seems not since they live on
 the C++ side so the imageBitmap could look like a small object that is
 GC'ed later.


If there is any memory consumed by the ImageBitmap that is pinned for the
life-time of the object, then that memory needs to be declared to the JS VM
even if the data does not live in the JS heap. I know V8 and JavaScriptCore
have APIs for that and other engines probably do too, so the problem is
manageable.


Re: [whatwg] Proposal: Media element - add attributes for discovery of playback rate support

2013-07-18 Thread Brendan Long
On 07/18/2013 03:17 PM, Eric Carlson wrote:
 Even a DVR, however, won't always let you change the playback speed.
 For example it isn't possible to play at greater than 1x past the
 current time when watching a live stream. If I am watching a live
 stream and I try to play past the end of the buffered video, my DVR
 drops back to 1x and won't let me change the speed. It doesn't
 automatically pause and buffer for a while so it can play at a faster
 rate. It isn't always possible to play a media stream at an arbitrary
 speed. It is foolish to pretend otherwise as the current spec does. 
That makes sense, but we also don't want to limit ourselves to playback
speeds that a server supports when the client /does/ have data buffered.

What if we added a supportedPlaybackRates attribute, which holds an
array of playback rates supported by the server, plus (optionally) any
rates the user agent can support due to the currently buffered data
(possibly requiring that the user agent have enough data buffered to
play at that speed for some amount of time).
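
One way a client might consume such an attribute, combining server-declared
rates with its own buffer state (the function and its parameters are
hypothetical, not proposed API):

```javascript
// Hypothetical check: a rate is playable if the server streams it natively,
// or if the client has already buffered enough media to sustain that rate
// for `lookaheadSeconds` of wall-clock playback on its own.
function canPlayAt(rate, serverRates, bufferedSeconds, lookaheadSeconds) {
  if (serverRates.includes(rate)) return true;
  // Playing `lookaheadSeconds` of wall-clock time at `rate` consumes
  // rate * lookaheadSeconds seconds of media from the buffer.
  return bufferedSeconds >= rate * lookaheadSeconds;
}
```

This captures Eric's DVR point: with an empty buffer and a live stream,
nothing above the server-supported rates is playable.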


Re: [whatwg] Script preloading

2013-07-18 Thread Kyle Simpson
About a week ago, I presented a set of code comparing the script 
dependencies=.. approach to the script preload approach, as far as creating 
generalized script loaders.

There were a number of concerns presented in those code snippets, and the 
surrounding discussions. I asked for input on the code and the issues raised.

Later in the thread, I also identified issues with needing to more robustly 
handle error recovery and it was suggested that Navigation Controller was a 
good candidate for partnering with script loading for this task. I asked for 
some code input in that respect as well.

AFAICT, the thread basically went dormant at roughly the same time.

I'm sure people are busy with plenty of other things, so I'm not trying to be 
annoying or impatient. But I would certainly like for the thread not to die, 
but to instead make productive progress.

If you have any feedback on the code comparisons I posted earlier 
(https://gist.github.com/getify/5976429) please do feel free to share.

Thanks!



--Kyle







Re: [whatwg] Sortable Tables

2013-07-18 Thread Ian Hickson
On Fri, 28 Dec 2012, Stuart Langridge wrote:
   
   Sorttable also allows authors to specify alternate content for a 
   cell. <td sorttable_customkey="11">eleven</td>
 
  <td><data value="11">eleven</data></td>
 
   The sorttable.js solution is to specify a custom key, which 
   sorttable pretends was the cell content for the purposes of sorting, 
   so <td sorttable_customkey="20121107-10">Wed 7th November, 
   10.00am GMT</td> and then the script can sort it.
 
  <td><time datetime="2012-11-07T10:00Z">Wed 7th November, 10.00am 
  GMT</time></td>
 
 I can see using data for this, because it's deliberately semantically 
 meaningless (right?)

I wouldn't say it's semantically meaningless, but sure.


 but time is more of a problem if you have multiple things in one cell. 
 For example, one semi-common pattern is to put some data and an input 
 type=checkbox in a single cell, like
 <td>Wed 7th November, 10.00am GMT <input type="checkbox" name="whatever"></td>

Why can't the checkbox be in a separate cell?


 Using data to wrap the whole cell is OK, but using time to wrap a 
 bunch of non-time content isn't, really. In this situation would you 
 recommend
 <td><data value="2012-11-07T10:00Z"><time datetime="2012-11-07T10:00Z">Wed 
 7th November, 10.00am GMT</time></data></td>
 which seems rather redundant to me?

I would recommend using two cells, but you could do that too. It would 
mean the keys were compared as strings, though, rather than as datetimes. 
Things wouldn't work if you mixed data and time elements with those 
values (e.g. if some cells didn't have checkboxes and so you used just 
time in some cases), since strings sort after times.


   and this, like many other things on this list, suggests that some 
   sort of here is the JavaScript function I want you to use to 
   produce sort keys for table cells in this column function is a 
   useful idea. Sorttable allows this, and people use it a lot.)
 
  I tried to do this but couldn't figure out a sane way to do it. A 
  comparator can totally destroy the table we're sorting, and I don't 
  know what to do if that happens.
 
 As in, you specify that there's a comparator function and then the
 sorter passes the comparator function two TD elements for comparing,
 and the comparator function looks like this?
 function comparator(td1, td2) { td1.parentNode.removeChild(td1); }

Right. Or worse (e.g. moving cells around on rows that have been 
compared before).

Also it totally destroys any ability to cache information per-row, which 
I think would be disastrous given how much work it takes to compare rows.


 On the other hand, surely I could make the same argument about any 
 handler, right? If you put scriptdocument.body.innerHTML += 
 hahaha!/script as a child of body, browsers used to crash (because 
 it's an infinite loop), and the implementor response boiled down to 
 don't do that, at least at first.

Only at first, because it wasn't tenable. We had to eventually define what 
happens, exactly.


 It's hard to see how such a malicious script could get into a page 
 without author knowledge --

It might well be with author knowledge. More likely it's a bug in their 
code.


 of course, an author might include a third-party script which does this 
 to destroy a page, but the same third-party script could set 
 document.body.innerHTML to "0wned", which is even more destructive of 
 page content.

That's not really a problem. The problem is making sure that the algorithm 
is stable in the face of crazy comparators, because any lack of stability 
could lead to security bugs (e.g. if you make it crash somehow, and can 
use that to run arbitrary code).


 It would be reasonable, I think, for the sort process to halt 
 uncompleted if a comparator function destroys the things it's comparing, 
 although perhaps your concern is that it's hard to know *whether that 
 happened* (since it might just reparent them to a different table or 
 something)?

It's hard to detect cheaply, certainly.


 Maybe pass a cloneNode of each TD?

Too expensive (what if one of the nodes is a 24 MB image, or a plugin?).


 Or have the sorter work out the sortable *value* of the field (from the 
 content, or the data value wrapper) and then pass the values, not the 
 actual cells? Then the comparator can't destroy anything.

It seems to me like that doesn't give you anything that you couldn't do by 
just setting the keys manually on the table before the sort happens (which 
you can do easily onsort= in the current model).
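
Precomputing the keys before sorting also sidesteps the malicious-comparator
problem, since the comparison only ever sees plain values and cannot reach
back into the table; a sketch (helper name is mine):

```javascript
// Sketch: compute one sort key per row up front, then sort on the keys.
// This is the decorate-sort-undecorate pattern: the rows themselves are
// never handed to the comparison, so it cannot mutate the table mid-sort.
function sortByKey(rows, keyOf) {
  return rows
    .map(row => [keyOf(row), row])
    .sort((a, b) => (a[0] < b[0] ? -1 : a[0] > b[0] ? 1 : 0))
    .map(pair => pair[1]);
}
```

The trade-off Ian notes still applies: keys extracted as strings compare as
strings, not as datetimes.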


   13. What happens if a table has multiple tbody elements? Do they 
   sort as independent units, or mingle together? Sorttable just sorts 
   the first one and ignores the rest, because multiple tbodies are 
   uncommon, but that's not really acceptable ;-)
 
  Independent.
 
 Hm. They can sort independently, no problem, but how does a user command 
 a sort of one tbody and not the rest?

They can't.


 All the tbodies will identify the same thead tr as their highest one. 
 This suggests that if you've got 

Re: [whatwg] Proposal: createImageBitmap should return a Promise instead of using a callback

2013-07-18 Thread Silvia Pfeiffer
Promises are new to browsers and people who have used them before have
raised issues about the extra resources they require. It may be a
non-issue in the browser, but it's still something we should be wary
of.

Would it be possible for the first browser that implements this to
have both implementations (callback and Promise objects) and use the
below code or something a little more complex to see how much overhead
is introduced by the Promise object and whether it is in fact
negligible both from a memory and execution time POV?

Silvia.


On Thu, Jul 18, 2013 at 8:54 AM, Ian Hickson i...@hixie.ch wrote:
 On Thu, 18 Jul 2013, Silvia Pfeiffer wrote:

 In this case you did remove the non-promise based approach - presumably
 because it has not been implemented in browsers yet, which is fair
 enough for browsers.

 Right.


 However, for JS developers it means that if they want to use this
 function, they now have to move to introduce a Promise model in their
 libraries.

 Not really. You don't have to use the promise API for anything other than
 a callback if you don't want to.

 As in, if your code uses the style that the HTML spec used to have for the
 createImageBitmap() example:

var sprites = {};
function loadMySprites(loadedCallback) {
  var image = new Image();
  image.src = 'mysprites.png';
  image.onload = function () {
    // ... do something to fill in sprites, and then call loadedCallback
  };
}

function runDemo() {
  var canvas = document.querySelector('canvas#demo');
  var context = canvas.getContext('2d');
  context.drawImage(sprites.tree, 30, 10);
  context.drawImage(sprites.snake, 70, 10);
}

loadMySprites(runDemo);

 ...then you can still do this with promises:

var sprites = {};
function loadMySprites(loadedCallback) {
  var image = new Image();
  image.src = 'mysprites.png';
  image.onload = function () {
    // only the comment from the snippet above is different here:
    Promise.every(
      createImageBitmap(image,  0,  0, 40, 40).then(function (image) { sprites.woman = image }),
      createImageBitmap(image, 40,  0, 40, 40).then(function (image) { sprites.man   = image }),
      createImageBitmap(image, 80,  0, 40, 40).then(function (image) { sprites.tree  = image }),
      createImageBitmap(image,  0, 40, 40, 40).then(function (image) { sprites.hut   = image }),
      createImageBitmap(image, 40, 40, 40, 40).then(function (image) { sprites.apple = image }),
      createImageBitmap(image, 80, 40, 40, 40).then(function (image) { sprites.snake = image })
    ).then(loadedCallback);
  };
}

function runDemo() {
  var canvas = document.querySelector('canvas#demo');
  var context = canvas.getContext('2d');
  context.drawImage(sprites.tree, 30, 10);
  context.drawImage(sprites.snake, 70, 10);
}

loadMySprites(runDemo);

 The promises are very localised, just to the code that uses them. But
 then when you want to use them everywhere, you can do so easily too,
 just slowly extending them out as you want to. And when two parts of
 the codebase that use promises touch, suddenly the code that glues
 them together gets simpler, since you can use promise utility methods
 instead of rolling your own synchronisation.


 I'm just dubious whether they are ready for that yet (in fact, I have
 heard that devs are not ready yet).

 Ready for what?


 At the same time, I think we should follow a clear pattern for
 introducing a Promise based API, which the .create() approach would
 provide.

 I don't understand what that means.


 I guess I'm asking for JS dev input here...

 Promises are just regular callbacks, with the synchronisation done by the
 browser (or shim library) rather than by author code. I don't really
 understand the problem here.

 --
 Ian Hickson   U+1047E)\._.,--,'``.fL
 http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
 Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'