Re: [whatwg] canvas feedback

2014-05-14 Thread Jürg Lehni
On Apr 30, 2014, at 00:27 , Ian Hickson i...@hixie.ch wrote:

 On Mon, 7 Apr 2014, Jürg Lehni wrote:
 
 Well this particular case, yes. But in the same way we allow a group of 
 items to have an opacity applied to it in Paper.js, and expect it to 
 behave the same way as in SVG: the group should appear as if its children 
 were first rendered at 100% alpha and then blitted over with the desired 
 transparency.
 
 Layers would offer exactly this flexibility, and having them around 
 would make a whole lot of sense, because currently the above can only be 
 achieved by drawing into a separate canvas and blitting the result over. 
 The performance of this is really low on all browsers, a true bottleneck 
 in our library currently.
 
 It's not clear to me why it would be faster if implemented as layers. 
 Wouldn't the solution here be for browsers to make canvas-on-canvas 
 drawing faster? I mean, fundamentally, they're the same feature.

I was perhaps wrongly assuming that including layering in the API would allow 
the browser vendors to better optimize this use case. The problem with the 
current solution is that drawing a canvas into another canvas is inexplicably 
slow across all browsers. The only reason I can imagine for this is that the 
pixels are copied back and forth between the GPU and the main memory, and 
perhaps converted along the way, while they could simply stay on the GPU as 
they are only used there. But reality is probably more complicated than that.

So if the proposed API addition would allow a better optimization then I'd be 
all for it. If not, then I am wondering how I can get the vendors' attention to 
improve this particular case. It really is very slow currently, to the point 
where it doesn't make sense to use it for any sort of animation technique.
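
For reference, the workaround looks roughly like this (a minimal sketch;
drawGroupChildren() is a made-up stand-in for painting the group's children):

  // Render the children at 100% alpha into a scratch canvas, then
  // composite that result once with the group's opacity.
  var scratch = document.createElement('canvas');
  scratch.width = ctx.canvas.width;
  scratch.height = ctx.canvas.height;
  drawGroupChildren(scratch.getContext('2d')); // hypothetical helper
  ctx.globalAlpha = 0.5; // the group's opacity
  ctx.drawImage(scratch, 0, 0); // this blit is the slow part
  ctx.globalAlpha = 1;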

J


Re: [whatwg] @autocomplete sections

2014-05-14 Thread Ilya Sherman
That's a good question.  Initially, sections were motivated by the desire
to distinguish between shipping and billing, i.e. the recommendation
was to use section-shipping and section-billing.  We eventually
realized that shipping and billing are so commonly used that they
merited having their own unique tokens.  Now that those are separately
canonicalized, the motivation for section-* tokens is much less clear.

However, there are still plenty of cases where sections *could* be useful.
 For example, a social network might ask for multiple points of contact
info, e.g. a home address and also a work address.  There are other types
of addresses as well: For example, not all mailing addresses, such as P.O.
boxes, are shipping addresses to which packages can be delivered.  The idea
is that section-* tokens allow a website to ask for multiple addresses of
types that are not necessarily billing or shipping.
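
For concreteness, markup along these lines is the idea (a sketch; the labels
after "section-" are arbitrary and chosen by the site):

  <label>Home address:
    <input name=home-addr autocomplete="section-home street-address">
  </label>
  <label>Work address:
    <input name=work-addr autocomplete="section-work street-address">
  </label>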

It's certainly possible to use multiple forms, or to use a fieldset, to
describe such a form.  Using a single form can be more convenient for the
user, as there's just a single submit button.  Using a fieldset can be
inconvenient for the developer, as fields belonging to the same section
might not be listed adjacent to one another in an HTML file.  (Most
commonly, this occurs when a developer is allowing presentation to guide
their HTML structure, so perhaps we should actively discourage this as an
anti-pattern.)

Section tokens were designed before rAc was a consideration.  In Chrome, we
use them for the Autofill feature (which presents a helpful popup as the
user interacts with a regular ol' visible form), but not for rAc.  It's
possible that the use case for section-* tokens is so marginal that it
would be better to simply remove them, since billing and shipping cover
the common case.


On Tue, May 13, 2014 at 6:17 PM, Matthew Noorenberghe 
mattn+wha...@mozilla.com wrote:

 Hello,

 While looking at implementing the new autocomplete attribute syntax, I was
 wondering about the driver for section-* tokens.  The example in the
 spec[1] with multiple shipping addresses for one checkout isn't something
 I've seen done in the wild in one flow. In the example, how did the website
 know that the two items should be in different sections in the first place?
 The only idea that comes to mind is a checkbox to indicate an item was a gift
 when it was added to the cart. If the website already knew about the
 different shipping addresses of the user when the item was added to the
 cart, it wouldn't really need to autocomplete the shipping address again.

 Example:
 * On the page for product A, the user chooses address A from a select
 that the page populated from information on past shipping addresses or
 checks a checkbox that the item is a gift.
 * The user clicks add to cart for product A
 * On the page for product B, the user chooses address B from a select
 that the page populated from information on past shipping addresses.
 * The user clicks add to cart for product B
 * The checkout page knows that products A and B are getting shipped to
 different addresses so it can show them in different sections.

 Also, why is sectioning better than using two forms for the example in the
 spec? One of the complexities with arbitrary sectioning is how to display
 the multiple sections in the browser's rAc UI. Choosing multiple sets of
 information for different sections at once can be overwhelming/confusing.
 How are UAs expected to communicate what section goes with each profile?
 Surely UAs aren't going to show "red" and "blue" for the example in the
 spec but it's not so straightforward to find an appropriate label/heading.
 A fieldset could have multiple sections so one can't just find the first
 fieldset parent and use its legend. It's also possible there isn't a
 fieldset or legend and so UAs may then use the outlining algorithm to
 find headings. I'm not sure if the ability to have arbitrary sections is
 worth the complexity this adds. How are other UAs planning on supporting
 multiple arbitrary sections? I'd like to hear more of an argument
 supporting this feature before implementing it. Is this something that
 others intend to implement?

 Thanks,
 Matthew Noorenberghe

 [1]
 http://www.whatwg.org/specs/web-apps/current-work/multipage/association-of-controls-and-forms.html#attr-fe-autocomplete



Re: [whatwg] WebGL and ImageBitmaps

2014-05-14 Thread Glenn Maynard
On Mon, May 12, 2014 at 3:19 AM, K. Gadd k...@luminance.org wrote:

 On Fri, May 9, 2014 at 12:02 PM, Ian Hickson i...@hixie.ch wrote:
  I'm assuming you're referring to the case where if you try to draw a
  subpart of an image and for some reason it has to be sampled (e.g. you're
  drawing it larger than the source), the anti-aliasing is optimised for
  tiling and so you get leakage from the next sprite over.
 
  If so, the solution is just to separate the sprites by a pixel of
  transparent black, no?

 This is the traditional solution for scenarios where you are sampling
 from a filtered texture in 3d. However, it only works if you never
 scale images, which is actually not the case in many game scenarios.


That's only an issue when sampling without premultiplication, right?

I had to refresh my memory on this:

https://zewt.org/~glenn/test-premultiplied-scaling/

The first image is using WebGL to blit unpremultiplied.  The second is
WebGL blitting premultiplied.  The last is 2d canvas.  (We're talking about
canvas here, of course, but WebGL makes it easier to test the different
behavior.)  This blits a red rectangle surrounded by transparent space on
top of a red canvas.  The black square is there so I can tell that it's
actually drawing something.

The first one gives a seam around the transparent area, as the white pixels
(which are completely transparent in the image) are sampled into the
visible part.  I think this is the problem we're talking about.  The second
gives no seam, and the Canvas one gives no seam, indicating that it's a
premultiplied blit.  I don't know if that's specified, but the behavior is
the same in Chrome and FF.
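
For anyone reproducing this, the difference between the first two cases comes
down to the upload and blend settings (a sketch, not the exact test code):

  // Unpremultiplied: straight-alpha texels, blended with SRC_ALPHA.
  gl.pixelStorei(gl.UNPACK_PREMULTIPLY_ALPHA_WEBGL, false);
  gl.blendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA);

  // Premultiplied: alpha folded into RGB at upload, blended with ONE.
  gl.pixelStorei(gl.UNPACK_PREMULTIPLY_ALPHA_WEBGL, true);
  gl.blendFunc(gl.ONE, gl.ONE_MINUS_SRC_ALPHA);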


On Tue, May 13, 2014 at 8:59 PM, K. Gadd k...@luminance.org wrote:

 On Mon, May 12, 2014 at 4:44 PM, Rik Cabanier caban...@gmail.com wrote:
  Can you give an explicit example where browsers are having different
  behavior when using drawImage?

 I thought I was pretty clear about this... colorspace conversion and
 alpha conversion happen here depending on the user's display
 configuration, the color profile of the source image, and what browser
 you're using. I've observed differences between Firefox and Chrome
 here, along with different behavior on OS X (presumably due to their
 different implementation of color profiles).

 In this case 'different' means 'loading & drawing an image to a canvas
 gives different results via getImageData'.


That's a description, not an explicit example.  An example would be a URL
demonstrating the issue.

The effects of color profiles should never be visible to script--they
should be applied when the canvas is drawn to the screen, not when the
image is decoded or the canvas is manipulated.  That seems hard to
implement, though, if you're blitting images to a canvas that all have
different color profiles.  It's probably better to ignore color profiles
for canvas entirely than to expose the user's monitor configuration like
this...
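
For what it's worth, a minimal way to construct such an example: author a PNG
as solid (255, 0, 0) with a non-sRGB profile attached, then check what scripts
observe (the file name below is hypothetical):

  var img = new Image();
  img.onload = function () {
    var canvas = document.createElement('canvas');
    canvas.width = canvas.height = 1;
    var ctx = canvas.getContext('2d');
    ctx.drawImage(img, 0, 0);
    // Anything but [255, 0, 0, 255] here means decoding or drawing
    // applied a color transformation that scripts can observe.
    console.log(ctx.getImageData(0, 0, 1, 1).data);
  };
  img.src = 'red-with-profile.png';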

-- 
Glenn Maynard


Re: [whatwg] Proposal: Event.creationTime

2014-05-14 Thread Glenn Maynard
On Thu, May 8, 2014 at 2:33 AM, Brian Birtles bbirt...@mozilla.com wrote:

 (2014/05/08 0:49), Glenn Maynard wrote:

 Can you remind me why this shouldn't just use real time, e.g. using the
 Unix epoch as the time base?  It was some privacy concern, but I can't
 think of any privacy argument for giving high-resolution event timestamps
 in units that are this limited and awkward.


 [1] has some justification for why we don't use 1970. As does [2].
 I'm not sure what the privacy concerns raised in the past were with
 regards to 1970.


Okay, I remember.  It's not that using the epoch here is itself a privacy
issue, it's that the solutions to the monotonicity problem introduce
privacy issues: if you add a global base time that isn't per-origin, that's
a tracking vector.

Maybe a solution would be to make DOMHighResTimeStamp structured clonable
(or a wrapper class, since the type itself is just double).  If you post a
timestamp to another thread, it arrives in that thread's own time base.
That way, each thread can always calculate precise deltas between two
timestamps, without exposing the actual time base.  (You still can't send
it to a server, but that's an inherent problem for a timer on a monotonic
clock.)
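
In use it would look something like this (illustrative only - today this
exact code computes a meaningless delta, because each thread has its own
time base):

  // Main thread:
  worker.postMessage({ stamp: performance.now() });

  // Worker: under this proposal the stamp would arrive rebased into the
  // worker's own timeline, making the subtraction meaningful.
  onmessage = function (e) {
    var delta = performance.now() - e.data.stamp;
  };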

If you treat Date.now() as your global clock, you can roughly convert
 between different performance timelines but with the caveat that you lose
 precision and are vulnerable to system clock adjustments. (There is
 actually a method defined for converting between timelines in Web
 Animations but the plan is to remove it.)


That would defeat the purpose of using high-resolution timers in the first
place.
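
(The conversion in question amounts to the line below, which is exactly where
the precision loss and the sensitivity to clock adjustments come in; 'stamp'
is a performance.now()-style value:

  // Rebase a monotonic-clock stamp onto the wall clock (lossy):
  var epochMs = Date.now() - performance.now() + stamp;
)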

-- 
Glenn Maynard


Re: [whatwg] @autocomplete sections

2014-05-14 Thread Matthew Noorenberghe
- Original Message -
 From: Ilya Sherman isher...@google.com
 To: whatwg@lists.whatwg.org
 Sent: Wednesday, May 14, 2014 2:33:58 PM
 Subject: Re: [whatwg] @autocomplete sections
 
 That's a good question.  Initially, sections were motivated by the desire
 to distinguish between shipping and billing, i.e. the recommendation
 was to use section-shipping and section-billing.  We eventually
 realized that shipping and billing are so commonly used that they
 merited having their own unique tokens.  Now that those are separately
 canonicalized, the motivation for section-* tokens is much less clear.

OK, that makes sense. If that's the case, could we at least not allow both 
section-* and billing/shipping? i.e. use one token for either billing, 
shipping, or section-*.

 However, there are still plenty of cases where sections *could* be useful.
  For example, a social network might ask for multiple points of contact
 info, e.g. a home address and also a work address.

I think that would be better addressed by allowing the home/work token before 
addresses so the UA can make a more informed decision about which addresses to 
suggest instead of using heuristics to figure out what the arbitrary section 
suffixes mean and trying to figure out a way to convey the distinction to the 
user in their own language. Simply asking the user to choose two addresses in 
the rAc UI without distinguishing them would be the trivial behaviour that 
would provide a poor UX.

 There are other types
 of addresses as well: For example, not all mailing addresses, such as P.O.
 boxes, are shipping addresses to which packages can be delivered.  The idea
 is that section-* tokens allow a website to ask for multiple addresses of
 types that are not necessarily billing or shipping.

Like above, is the UA supposed to figure out what the section suffix means? Or 
shall it simply remember the fact that a given address was used with that 
suffix and prefer the chosen address on another site which happens to use the 
same section name? Does allowing the home/work tokens before an address cover 
this case? If not, could you provide a real-world example of this different 
class of address? Can we add a new token for it instead?

 It's certainly possible to use multiple forms, or to use a fieldset, to
 describe such a form.  Using a single form can be more convenient for the
 user, as there's just a single submit button.

It may be more convenient in terms of the number of clicks but it can be more 
confusing if the user is confronted with UI to choose profiles for multiple 
sections that they can't meaningfully distinguish due to the lack of context 
(partly from the complexity for UAs to use heuristics to make guesses).

 Using a fieldset can be
 inconvenient for the developer, as fields belonging to the same section
 might not be listed adjacent to one another in an HTML file.  (Most
 commonly, this occurs when a developer is allowing presentation to guide
 their HTML structure, so perhaps we should actively discourage this as an
 anti-pattern.)

I had thought about proposing that fieldsets work like forms in that fields 
can be part of a form without being a child (using @form=myForm) and have 
fieldset have an elements attribute to get a list of all fields belonging to 
a fieldset. With that, we could require the section/hint tokens to be in 
@autocomplete on fieldset instead of duplicating them in every @autocomplete 
attribute of fields in the section.
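
Sketched out, the markup would be something like this (none of this syntax 
exists today; the association attribute name is a placeholder):

  <fieldset id=work autocomplete="section-work">
    <legend>Work address</legend>
  </fieldset>
  <!-- elsewhere in the markup, but belonging to the fieldset's section -->
  <input fieldset=work autocomplete="street-address">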

 Section tokens were designed before rAc was a consideration.  In Chrome, we
 use them for the Autofill feature (which presents a helpful popup as the
 user interacts with a regular ol' visible form), but not for rAc.  It's
 possible that the use case for section-* tokens is so marginal that it
 would be better to simply remove them, since billing and shipping cover
 the common case.

OK, it's useful to know they're not used for rAc in Chrome at this time. I feel 
inclined to have them removed at this point.


Re: [whatwg] WebGL and ImageBitmaps

2014-05-14 Thread K. Gadd
Replies inline

On Wed, May 14, 2014 at 4:27 PM, Glenn Maynard gl...@zewt.org wrote:
 On Mon, May 12, 2014 at 3:19 AM, K. Gadd k...@luminance.org wrote:

 This is the traditional solution for scenarios where you are sampling
 from a filtered texture in 3d. However, it only works if you never
 scale images, which is actually not the case in many game scenarios.


 That's only an issue when sampling without premultiplication, right?

 I had to refresh my memory on this:

 https://zewt.org/~glenn/test-premultiplied-scaling/

 The first image is using WebGL to blit unpremultiplied.  The second is WebGL
 blitting premultiplied.  The last is 2d canvas.  (We're talking about canvas
 here, of course, but WebGL makes it easier to test the different behavior.)
 This blits a red rectangle surrounded by transparent space on top of a red
 canvas.  The black square is there so I can tell that it's actually drawing
 something.

 The first one gives a seam around the transparent area, as the white pixels
 (which are completely transparent in the image) are sampled into the visible
 part.  I think this is the problem we're talking about.  The second gives no
 seam, and the Canvas one gives no seam, indicating that it's a premultiplied
 blit.  I don't know if that's specified, but the behavior is the same in
 Chrome and FF.

The reason one pixel isn't sufficient is that if the minification
ratio is below 50% (say, 33%), sampling algorithms other than
non-mipmapped-bilinear will begin sampling more than 4 pixels (or one
quad, in gpu shading terminology), so you now need enough transparent
pixels around all your textures to ensure that sampling never crosses
the boundaries into another image.
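
As a rough rule of thumb (my own estimate, not something from the thread):
with mipmaps, at minification scale s the sampler reads around mip level
log2(1/s), and one bilinear tap at that level covers about 2^level source
pixels in each direction, so the padding has to grow as the scale shrinks:

  // Estimated padding (in source pixels) around each sprite so that taps
  // at the selected mip level never cross into a neighboring image.
  function paddingFor(minScale) {
    var level = Math.ceil(Math.log(1 / minScale) / Math.LN2);
    return Math.pow(2, level); // e.g. minScale = 0.33 -> 4px, not 1px
  }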

http://fgiesen.wordpress.com/2011/07/10/a-trip-through-the-graphics-pipeline-2011-part-8/
explains the concept of quads, along with relevant issues like
centroid interpolation. Anyone talking about correctness or
performance in modern accelerated rendering might benefit from reading
this whole series.

You do make the good point that whether or not the canvas
implementation is using premultiplied textures has an effect on the
result of scaling and filtering (since doing scaling/filtering on
nonpremultiplied rgba produces color bleeding from transparent
pixels). Is that currently specified? I don't think I've seen bleeding
artifacts recently, but I'm not certain whether the spec requires this
explicitly.

This issue, however, is not color bleeding - color bleeding is a math
'error' that results from not using premultiplication - but rather that
the filtering algorithm samples pixels outside the actual 'rectangle'
intended to be drawn. (This is an implicit problem with sampling based
on texture coordinates and derivatives instead of pixel offsets.)

If you search for 'padding texture atlases' you can see some examples
that show why this is a tricky problem and a single pixel of padding
is not sufficient:
http://wiki.polycount.com/EdgePadding

There are some related problems here for image compression as well,
due to the block-oriented nature of codecs like JPEG and DXTC. Luckily
they aren't something the user agent has to deal with in their canvas
implementation, but that's another example where a single pixel of
padding isn't enough.

 On Tue, May 13, 2014 at 8:59 PM, K. Gadd k...@luminance.org wrote:
 I thought I was pretty clear about this... colorspace conversion and
 alpha conversion happen here depending on the user's display
 configuration, the color profile of the source image, and what browser
 you're using. I've observed differences between Firefox and Chrome
 here, along with different behavior on OS X (presumably due to their
 different implementation of color profiles).

 In this case 'different' means 'loading & drawing an image to a canvas
 gives different results via getImageData'.


 That's a description, not an explicit example.  An example would be a URL
 demonstrating the issue.

http://joedev.net/JSIL/Numbers/ was the first game whose developer reported
an issue from this, because its levels are authored as images. He ended up
solving the problem by following my advice to manually strip color
profile information from all his images (though this is not a panacea;
a browser could decide that profile-information-less images are now
officially sRGB, and then profile-convert them to the display profile).

It's been long enough that I don't know if his uploaded build works
anymore or whether it will demonstrate the issue. It's possible he
removed his dependency on images by now.

Here is what I told the developer in an email thread when he first
reported the issue (and by 'reported' I mean 'sent me a very confused
email saying that his game didn't work in Firefox and he had no idea
why'):

 The reason it's not working in Firefox right now is due to a firefox bug, 
 because your PNG files contain what's called a 'sRGB chunk': 
 https://bugzilla.mozilla.org/show_bug.cgi?id=867594
 I don't know if this bug can be fixed on Firefox's side because it's an 

Re: [whatwg] canvas feedback

2014-05-14 Thread K. Gadd
Is it ever possible to make canvas-to-canvas blits consistently fast?
It's my understanding that browsers still make
intelligent/heuristic-based choices about which canvases to
accelerate, if any, and that it depends on the size of the canvas,
whether it's in the DOM, etc. I've had to report bugs related to this
against Firefox and Chrome in the past, and I'm sure more exist. There's
also the scenario where you need to blit between Canvas2D canvases and
WebGL canvases - the last time I tried this, a single blit could cost
*hundreds* of milliseconds because of pipeline stalls and cpu-gpu
transfers.

Canvas-to-canvas blits are a way to implement layering, but it seems
like making it consistently fast via canvas-canvas blits is a much
more difficult challenge than making sure that there are fast & cheap
ways to layer separate canvases at a composition stage. The latter
just requires that the browser have a good way to composite the
canvases, the former requires that various scenarios with canvases
living in CPU and GPU memory, deferred rendering queues, etc all get
resolved efficiently in order to copy bits from one place to another.

(In general, I think any solution that relies on using
canvas-on-canvas drawing any time a single layer is invalidated is
suspect. The browser already has a compositing engine for this that
can efficiently update only modified subregions and knows how to cache
reusable data; re-rendering the entire surface from JS on change is
going to be a lot more expensive than that. Don't some platforms
actually have compositing/layers at the OS level, like CoreAnimation
on iOS/OSX?)
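
Concretely, the compositor-driven alternative is just stacked canvases; a
minimal sketch (the element id is made up):

  // The browser's compositor blends the two elements, so applying the
  // group opacity never involves a canvas-to-canvas drawImage call.
  var base = document.getElementById('scene');
  var layer = document.createElement('canvas');
  layer.width = base.width;
  layer.height = base.height;
  layer.style.cssText = 'position:absolute;left:0;top:0;opacity:0.5';
  base.parentNode.appendChild(layer);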

On Wed, May 14, 2014 at 6:30 AM, Jürg Lehni li...@scratchdisk.com wrote:
 On Apr 30, 2014, at 00:27 , Ian Hickson i...@hixie.ch wrote:

 On Mon, 7 Apr 2014, Jürg Lehni wrote:

 Well this particular case, yes. But in the same way we allow a group of
 items to have an opacity applied to it in Paper.js, and expect it to
 behave the same way as in SVG: the group should appear as if its children
 were first rendered at 100% alpha and then blitted over with the desired
 transparency.

 Layers would offer exactly this flexibility, and having them around
 would make a whole lot of sense, because currently the above can only be
 achieved by drawing into a separate canvas and blitting the result over.
 The performance of this is really low on all browsers, a true bottleneck
 in our library currently.

 It's not clear to me why it would be faster if implemented as layers.
 Wouldn't the solution here be for browsers to make canvas-on-canvas
 drawing faster? I mean, fundamentally, they're the same feature.

 I was perhaps wrongly assuming that including layering in the API would allow 
 the browser vendors to better optimize this use case. The problem with the 
 current solution is that drawing a canvas into another canvas is inexplicably 
 slow across all browsers. The only reason I can imagine for this is that the 
 pixels are copied back and forth between the GPU and the main memory, and 
 perhaps converted along the way, while they could simply stay on the GPU as 
 they are only used there. But reality is probably more complicated than that.

 So if the proposed API addition would allow a better optimization then I'd be 
 all for it. If not, then I am wondering how I can get the vendors' attention 
 to improve this particular case. It really is very slow currently, to the 
 point where it doesn't make sense to use it for any sort of animation 
 technique.

 J


Re: [whatwg] WebGL and ImageBitmaps

2014-05-14 Thread Rik Cabanier
On Tue, May 13, 2014 at 6:59 PM, K. Gadd k...@luminance.org wrote:

 On Mon, May 12, 2014 at 4:44 PM, Rik Cabanier caban...@gmail.com wrote:
  Can you give an explicit example where browsers are having different
  behavior when using drawImage?

 I thought I was pretty clear about this... colorspace conversion and
 alpha conversion happen here depending on the user's display
 configuration, the color profile of the source image, and what browser
 you're using. I've observed differences between Firefox and Chrome
 here, along with different behavior on OS X (presumably due to their
 different implementation of color profiles).

 In this case 'different' means 'loading & drawing an image to a canvas
 gives different results via getImageData'.

 On Mon, May 12, 2014 at 4:44 PM, Rik Cabanier caban...@gmail.com wrote:
  Would this be solved with Greg's proposal for flags on ImageBitmap:
 
 http://lists.whatwg.org/htdig.cgi/whatwg-whatwg.org/2013-June/251541.html

 I believe so. I think I was on record when he first posted that I
 consider the alpha and colorspace flags he described as adequate.
 FlipY is considerably less important to me, but I can see how people
 might want it (honestly, reversing the order of scanlines is a very
 cheap operation; you can do it in the sampling stage of your shader,
 and actually *have* to in OpenGL because of their coordinate system
 when you're doing render-to-texture.)
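
 In GLSL terms that's a one-liner at sampling time; the varying/uniform
 names below are just the usual placeholders:

   vec4 color = texture2D(u_tex, vec2(v_uv.x, 1.0 - v_uv.y));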

 On Mon, May 12, 2014 at 4:44 PM, Rik Cabanier caban...@gmail.com wrote:
  Very specifically here, by 'known color space' I just mean that the
  color space of the image is exposed to the end user. I don't think we
  can possibly pick a standard color space to always use; the options
  are 'this machine's current color space' and 'the color space of the
  input bitmap'. In many cases the color space of the input bitmap is
  effectively 'no color space', and game developers feed the raw rgba to
  the GPU. It's important to support that use case without degrading the
  image data.
 
 
  Is that not the case today?

 It is very explicitly not the case, which is why we are discussing it.
 It is not currently possible to do lossless manipulation of PNG images
 in a web browser using canvas. The issues I described where you get
 different results from getImageData are a part of that.

 On Mon, May 12, 2014 at 4:44 PM, Rik Cabanier caban...@gmail.com wrote:
  Safari never created a temporary image and I recently updated Firefox so
 it
  matches Safari.
  Safari, IE, and Firefox will now all sample outside of the drawImage
  region.
  Chrome does not but they will fix that at some point.

 This is incorrect. A quick Google search for 'webkit drawimage source
 rectangle temporary' reveals as much, in a post to this list.

 http://lists.whatwg.org/pipermail/whatwg-whatwg.org/2012-December/080583.html
 My statement to this effect was based on my (imperfect) memory of that
 post. 'CGImage' (to me) says Safari since it's an Apple API, and the
 post mentions Safari.


I made a codepen that showed the issue: http://codepen.io/adobe/pen/jIzbv
Firefox was not matching the behavior on Mac because it created an
intermediate image. I fixed that in
https://bugzilla.mozilla.org/show_bug.cgi?id=987292

I agree that the code you linked to exists in WebKit but they add padding
so it samples outside the source again.


Re: [whatwg] @autocomplete sections

2014-05-14 Thread Ilya Sherman
On Wed, May 14, 2014 at 6:59 PM, Matthew Noorenberghe 
mattn+wha...@mozilla.com wrote:

 - Original Message -
  From: Ilya Sherman isher...@google.com
  To: whatwg@lists.whatwg.org
  Sent: Wednesday, May 14, 2014 2:33:58 PM
  Subject: Re: [whatwg] @autocomplete sections
 
  That's a good question.  Initially, sections were motivated by the desire
  to distinguish between shipping and billing, i.e. the recommendation
  was to use section-shipping and section-billing.  We eventually
  realized that shipping and billing are so commonly used that they
  merited having their own unique tokens.  Now that those are separately
  canonicalized, the motivation for section-* tokens is much less clear.

 OK, that makes sense. If that's the case, could we at least not allow both
 section-* and billing/shipping? i.e. use one token for either billing,
 shipping, or section-*.

  However, there are still plenty of cases where sections *could* be
 useful.
   For example, a social network might ask for multiple points of contact
  info, e.g. a home address and also a work address.

 I think that would be better addressed by allowing the home/work token
 before addresses so the UA can make a more informed decision about which
 addresses to suggest instead of using heuristics to figure out what the
 arbitrary section suffixes mean and trying to figure out a way to convey
 the distinction to the user in their own language. Simply asking the user
 to choose two addresses in the rAc UI without distinguishing them would be
 the trivial behaviour that would provide a poor UX.

  There are other types
  of addresses as well: For example, not all mailing addresses, such as
 P.O.
  boxes, are shipping addresses to which packages can be delivered.  The
 idea
  is that section-* tokens allow a website to ask for multiple addresses of
  types that are not necessarily billing or shipping.

 Like above, is the UA supposed to figure out what the section suffix
 means? Or shall it simply remember the fact that a given address was used
 with that suffix and prefer the chosen address on another site which
 happens to use the same section name? Does allowing the home/work tokens
 before an address cover this case? If not, could you provide a real-world
 example of this different class of address? Can we add a new token for it
 instead?


IMO it makes sense to ignore section-* tokens for rAc for now.  I don't
think we need to add home, work, and other such tokens at this time.
 At least, I haven't heard any concrete demand for them.

It likely makes sense to remove section-* tokens from the spec entirely.
I'm not sure how much they're used, but I would guess almost not at all.
 It would be nice to have some concrete numbers, but unfortunately I'm not
aware of any metrics tracking the usage of section-* tokens.


  It's certainly possible to use multiple forms, or to use a fieldset, to
  describe such a form.  Using a single form can be more convenient for the
  user, as there's just a single submit button.

 It may be more convenient in terms of the number of clicks but it can be
 more confusing if the user is confronted with UI to choose profiles for
 multiple sections that they can't meaningfully distinguish due to the lack
 of context (partly from the complexity for UAs to use heuristics to make
 guesses).


In terms of rAc, I agree that it's hard to present sections delineated by
section-* with a meaningful UI.  In terms of Chrome's Autofill feature,
which originally motivated these tokens, the webpage provides its own UI;
Autofill simply draws a small popup menu on top of the page.  Hence, the
page is able to provide its own context.


  Using a fieldset can be
  inconvenient for the developer, as fields belonging to the same section
  might not be listed adjacent to one another in an HTML file.  (Most
  commonly, this occurs when a developer is allowing presentation to guide
  their HTML structure, so perhaps we should actively discourage this as an
  anti-pattern.)

 I had thought about proposing that fieldsets work like forms in that
 fields can be part of a form without being a child (using @form=myForm)
 and have fieldset have an elements attribute to get a list of all fields
 belonging to a fieldset. With that, we could require the section/hint
 tokens to be in @autocomplete on fieldset instead of duplicating them in
 every @autocomplete attribute of fields in the section.

  Section tokens were designed before rAc was a consideration.  In Chrome,
 we
  use them for the Autofill feature (which presents a helpful popup as
 the
  user interacts with a regular ol' visible form), but not for rAc.  It's
  possible that the use case for section-* tokens is so marginal that it
  would be better to simply remove them, since billing and shipping
 cover
  the common case.

 OK, it's useful to know they're not used for rAc in Chrome at this time. I
 feel inclined to have them removed at this point.



Re: [whatwg] WebGL and ImageBitmaps

2014-05-14 Thread Rik Cabanier
On Wed, May 14, 2014 at 7:45 PM, Glenn Maynard gl...@zewt.org wrote:

 On Wed, May 14, 2014 at 6:27 PM, Glenn Maynard gl...@zewt.org wrote:

  That's only an issue when sampling without premultiplication, right?
 
  I had to refresh my memory on this:
 
  https://zewt.org/~glenn/test-premultiplied-scaling/
 
  The first image is using WebGL to blit unpremultiplied.  The second is
  WebGL blitting premultiplied.  The last is 2d canvas.  (We're talking
 about
  canvas here, of course, but WebGL makes it easier to test the different
  behavior.)  This blits a red rectangle surrounded by transparent space on
  top of a red canvas.  The black square is there so I can tell that it's
  actually drawing something.
 
  The first one gives a seam around the transparent area, as the white
  pixels (which are completely transparent in the image) are sampled into
 the
  visible part.  I think this is the problem we're talking about.  The
 second
  gives no seam, and the Canvas one gives no seam, indicating that it's a
  premultiplied blit.  I don't know if that's specified, but the behavior
 is
  the same in Chrome and FF.
 

 It looks right on red, but if the background is green you can still see the
 post-premultiplied black being pulled in.  It's really GL_CLAMP_TO_EDGE that
 you want, repeating the outer edge pixels.
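
 That is, for a standalone texture the fix is just the clamp wrap mode:

   gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
   gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);

 For sprites packed into an atlas, clamping only helps at the atlas border,
 which is why the padding question below still matters.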


 On Wed, May 14, 2014 at 9:21 PM, K. Gadd k...@luminance.org wrote:

  The reason one pixel isn't sufficient is that if the minification
  ratio is below 50% (say, 33%), sampling algorithms other than
  non-mipmapped-bilinear will begin sampling more than 4 pixels (or one
  quad, in gpu shading terminology), so you now need enough transparent
  pixels around all your textures to ensure that sampling never crosses
  the boundaries into another image.
 

 I'm well aware of the issues of sampling sprite sheets; I've dealt with the
 issue at length in the past.  That's unrelated to my last mail, however,
 which was about premultiplication (which is something I've not used as
 much).


  I agree with this, but I'm not going to assume it's actually possible
  for a canvas implementation to work this way. I assume that color
  profile conversions are non-trivial (in fact, I'm nearly certain they
  are non-trivial), so doing the conversion every time you render a
  canvas to the compositor is probably expensive, especially if your GPU
  isn't powerful enough to do it in a shader (mobile devices, perhaps) -
  so I expect that most implementations do the conversion once at load
  time, to prepare an image for rendering. Until it became possible to
  retrieve image pixels with getImageData, this was a good, safe
  optimization.
 

 What I meant is that I think color correction simply shouldn't apply to
 canvas at all.  That may not be ideal, but I'm not sure of anything else
 that won't cause severe interop issues.


Maybe the color correction described here is happening:
https://hsivonen.fi/png-gamma/

If so, the image that's drawn on the canvas should match what the browser
is showing on screen.
Without an example, it's just speculation of course.


 To be clear, colorspace conversion--converting from sRGB to RGB--isn't a
 problem, other than probably needing to be specified more clearly and being
 put behind an option somewhere, so you can avoid a lossy colorspace
 conversion.  The problem is color correction that takes the user's monitor
 configuration into account, since the user's monitor settings shouldn't be
 visible to script.  I don't know enough about color correction to know if
 this can be done efficiently in an interoperable way, so the data scripts
 see isn't affected by the user's configuration.


Yes, color correction from sRGB to your monitor should not affect drawing
on canvas. (What if you had multiple monitors :-))


Re: [whatwg] canvas feedback

2014-05-14 Thread Rik Cabanier
On Wed, May 14, 2014 at 6:30 AM, Jürg Lehni li...@scratchdisk.com wrote:

 On Apr 30, 2014, at 00:27 , Ian Hickson i...@hixie.ch wrote:

  On Mon, 7 Apr 2014, Jürg Lehni wrote:
 
  Well this particular case, yes. But in the same way we allow a group of
  items to have an opacity applied to it in Paper.js, and expect it to
  behave the same way as in SVG: the group should appear as if its children
  were first rendered at 100% alpha and then blitted over with the desired
  transparency.
 
  Layers would offer exactly this flexibility, and having them around
  would make a whole lot of sense, because currently the above can only be
  achieved by drawing into a separate canvas and blitting the result over.
  The performance of this is really low on all browsers, a true bottleneck
  in our library currently.
 
  It's not clear to me why it would be faster if implemented as layers.
  Wouldn't the solution here be for browsers to make canvas-on-canvas
  drawing faster? I mean, fundamentally, they're the same feature.

 I was perhaps wrongly assuming that including layering in the API would
 allow the browser vendors to better optimize this use case.


No, you are correct; having layers will make drawing more efficient as you
can make certain assumptions and you don't have to create/recycle
intermediate canvases.


 The problem with the current solution is that drawing a canvas into
 another canvas is inexplicably slow across all browsers. The only reason I
 can imagine for this is that the pixels are copied back and forth between
 the GPU and the main memory, and perhaps converted along the way, while
 they could simply stay on the GPU as they are only used there. But reality
 is probably more complicated than that.


I don't know why this would be. Do you have data on this?


 So if the proposed API addition would allow a better optimization then I'd
 be all for it. If not, then I am wondering how I can get the vendors'
 attention to improve this particular case. It really is very slow
 currently, to the point where it doesn't make sense to use it for any sort
 of animation technique.


I think we just need to find some time to start implementing it. The API is
simple and in the case of Core Graphics, it maps directly.
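
For reference, usage of such a layering API would look roughly like this
(method names are hypothetical; in Core Graphics this would map onto
CGContextBeginTransparencyLayer / CGContextEndTransparencyLayer):

  ctx.globalAlpha = 0.5;  // the group's opacity
  ctx.beginLayer();       // children render at full alpha into the layer
  drawGroupChildren(ctx); // hypothetical helper
  ctx.endLayer();         // the layer composites once with the group alpha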


Re: [whatwg] canvas feedback

2014-05-14 Thread Rik Cabanier
On Wed, May 14, 2014 at 7:30 PM, K. Gadd k...@luminance.org wrote:

 Is it ever possible to make canvas-to-canvas blits consistently fast?
 It's my understanding that browsers still make
 intelligent/heuristic-based choices about which canvases to
 accelerate, if any, and that it depends on the size of the canvas,
 whether it's in the DOM, etc. I've had to report bugs related to this
 against Firefox and Chrome in the past, and I'm sure more exist. There's
 also the scenario where you need to blit between Canvas2D canvases and
 WebGL canvases - the last time I tried this, a single blit could cost
 *hundreds* of milliseconds because of pipeline stalls and cpu-gpu
 transfers.


Chrome has made some optimizations recently in this area and will try to
keep everything on the GPU for transfers between canvas 2d and WebGL.
Are you still seeing issues there?


 Canvas-to-canvas blits are a way to implement layering, but it seems
 like making it consistently fast via canvas-canvas blits is a much
 more difficult challenge than making sure that there are fast & cheap
 ways to layer separate canvases at a composition stage. The latter
 just requires that the browser have a good way to composite the
 canvases, the former requires that various scenarios with canvases
 living in CPU and GPU memory, deferred rendering queues, etc all get
 resolved efficiently in order to copy bits from one place to another.


Small canvases are usually not hardware accelerated. Do you have any data
that this is causing slowdowns?
Layering should also mitigate this since if the canvas is HW accelerated,
so should its layers.


 (In general, I think any solution that relies on using
 canvas-on-canvas drawing any time a single layer is invalidated is
 suspect. The browser already has a compositing engine for this that
 can efficiently update only modified subregions and knows how to cache
 reusable data; re-rendering the entire surface from JS on change is
 going to be a lot more expensive than that.


I don't think the canvas code is that smart. I think you're thinking about
drawing SVG and HTML.


 Don't some platforms
 actually have compositing/layers at the OS level, like CoreAnimation
 on iOS/OSX?)


Yes, but AFAIK they don't use this for Canvas.



 On Wed, May 14, 2014 at 6:30 AM, Jürg Lehni li...@scratchdisk.com wrote:
  On Apr 30, 2014, at 00:27 , Ian Hickson i...@hixie.ch wrote:
 
  On Mon, 7 Apr 2014, Jürg Lehni wrote:
 
  Well this particular case, yes. But in the same way we allow a group of
  items to have an opacity applied to it in Paper.js, and expect it to
  behave the same way as in SVG: the group should appear as if its
  children were first rendered at 100% alpha and then blitted over with
  the desired transparency.
 
  Layers would offer exactly this flexibility, and having them around
  would make a whole lot of sense, because currently the above can only
  be achieved by drawing into a separate canvas and blitting the result
  over. The performance of this is really low on all browsers, a true
  bottleneck in our library currently.
 
  It's not clear to me why it would be faster if implemented as layers.
  Wouldn't the solution here be for browsers to make canvas-on-canvas
  drawing faster? I mean, fundamentally, they're the same feature.
 
  I was perhaps wrongly assuming that including layering in the API would
 allow the browser vendors to better optimize this use case. The problem
 with the current solution is that drawing a canvas into another canvas is
 inexplicably slow across all browsers. The only reason I can imagine for
 this is that the pixels are copied back and forth between the GPU and the
 main memory, and perhaps converted along the way, while they could simply
 stay on the GPU as they are only used there. But reality is probably more
 complicated than that.
 
  So if the proposed API addition would allow a better optimization then
 I'd be all for it. If not, then I am wondering how I can get the vendors'
 attention to improve this particular case. It really is very slow
 currently, to the point where it doesn't make sense to use it for any sort
 of animation technique.
 
  J