Re: [whatwg] Canvas and color colorspaces (was: WebGL and ImageBitmaps)

2016-05-01 Thread Rik Cabanier
Great to hear!
Are there minutes posted?

On Sunday, May 1, 2016, Justin Novosad <ju...@google.com> wrote:

> There is currently an ongoing discussion with the Khronos Web3D group to
> develop a proposal that solves these problems in canvas. Over the past few
> weeks we have converged on a solution that I think is pretty solid. I am in
> the process of writing up the HTML (non-WebGL) part of the proposal and I
> intend to post it to the WICG shortly so that we can incubate it further,
> with a broader audience. When that happens, I will update this thread.
>
> On Sat, Apr 30, 2016 at 2:07 PM, Rik Cabanier <caban...@gmail.com> wrote:
>
>> [Sorry to revive this old thread]
>> All,
>>
>> with the advent of DCI-P3 compliant monitors and Apple's Safari doing
>> color management to the device, we're seeing some issues in this area.
>>
>> - Currently, WebKit sets the profile of the canvas backing store to sRGB
>> regardless of the output device. Because of this, high gamut images are
>> always clipped to sRGB. [1]
>> It would be ideal if we could specify that the canvas backing store is in
>> the device profile.
>> Alternatively, we could add an API to attach a color profile to the
>> canvas.
>> - The spec currently states that toDataURL should not include a profile.
>> However, if the backing store is in the device color space, the generated
>> image should include the correct profile. Otherwise, if you draw the bitmap
>> in a compliant browser (i.e. Safari), the colors will look too saturated.
>>
>> If we agree that canvas is in the device space, I'd like to see the spec
>> [2] clarified to state that compositing on the canvas should match
>> compositing on the HTML surface.
>> Specifically:
>>
>> The canvas
>> <https://html.spec.whatwg.org/multipage/scripting.html#the-canvas-element> 
>> APIs
>> must perform colour correction at only two points: when rendering images
>> with their own gamma correction and colour space information onto a bitmap,
>> to convert the image to the colour space used by the bitmaps (e.g. using
>> the 2D Context's drawImage()
>> <https://html.spec.whatwg.org/multipage/scripting.html#dom-context-2d-drawimage>
>>  method
>> with an HTMLOrSVGImageElement
>> <https://html.spec.whatwg.org/multipage/scripting.html#htmlorsvgimageelement>
>>  object),
>> and when rendering the actual canvas bitmap to the output device.
>>
>> Becomes:
>>
>> The canvas
>> <https://html.spec.whatwg.org/multipage/scripting.html#the-canvas-element> 
>> APIs
>> must perform colour correction at only one point: when rendering content
>> with its own gamma correction and colour space information onto a bitmap to
>> the colour space used by the bitmaps (e.g. using the 2D Context's
>> drawImage()
>> <https://html.spec.whatwg.org/multipage/scripting.html#dom-context-2d-drawimage>
>>  method
>> with an HTMLOrSVGImageElement
>> <https://html.spec.whatwg.org/multipage/scripting.html#htmlorsvgimageelement>
>>  object).
>>
>>
>> toDataURL and toBlob [3] should also be enhanced so they include the
>> device profile if it is different from sRGB.
>>
>> It would also be great if the browser could let us know what profile (if
>> any) it was using.
>>
>> 1:
>> https://github.com/WebKit/webkit/blob/112c663463807e8676765cb7a006d415c372f447/Source/WebCore/platform/graphics/ImageBuffer.h#L73
>> 2:
>> https://html.spec.whatwg.org/multipage/scripting.html#colour-spaces-and-colour-correction
>> 3:
>> https://html.spec.whatwg.org/multipage/scripting.html#dom-canvas-todataurl
>>
>>
>>
>> On Thu, May 22, 2014 at 12:21 PM, Justin Novosad <ju...@google.com> wrote:
>>
>>> tl;dr: The color space of canvas backing stores is undefined, which
>>> causes problems for many web devs, but also has non-negligible advantages.
>>> So be careful what you wish for.
>>>
>>> I saw some confusion and questions needing answers in the "WebGL and
>>> ImageBitmaps" thread regarding color management. I will attempt to clarify
>>> to the best of my abilities. Though I am knowledgeable on the subject, I am
>>> not an absolute authority, so others are welcome to correct me if I am
>>> wrong about anything.
>>>
>>> Color management... To make a long story short, there are two types of
>>> color profiles: input profiles and output profiles for 

Re: [whatwg] Canvas and color colorspaces (was: WebGL and ImageBitmaps)

2016-05-01 Thread Rik Cabanier
On Sat, Apr 30, 2016 at 6:35 PM, Ron Waldon  wrote:

> What if we could just declare the colour-space that content uses, and
> allow the browser to make a best-effort translation if the current display
> uses a different colour-space?
>

That is pretty much the situation in Safari today.
My original points are about some deficiencies in the canvas implementation
for this workflow.


> This way, we don't need to expose colour profiles or other fingerprinting
> details to JavaScript code. That code can just declare that it uses Adobe
> sRGB (which might be the default if not specified?), and the browser can
> apply a transform as needed depending on the hardware.
>

I am not convinced that this is a fingerprinting concern, especially if we
just allow generic names for profiles.
Certainly the additions to CSS wrt color and media queries allow you to
infer the used profile.
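
For example, something along these lines (a sketch using the color-gamut
media feature from Media Queries Level 4) already narrows it down:

if (window.matchMedia('(color-gamut: p3)').matches) {
  // the display covers roughly the DCI-P3 gamut, which constrains
  // the profile the browser is likely compositing into
}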


> The declaration could be a MIME-type parameter, for visual content
> delivered via HTTP, or there could be a Canvas attribute or constructor
> option. /shrug
>


Re: [whatwg] Canvas and color colorspaces (was: WebGL and ImageBitmaps)

2016-04-30 Thread Rik Cabanier
On Sat, Apr 30, 2016 at 4:27 PM, Rik Cabanier <caban...@gmail.com> wrote:

>
>
> On Sat, Apr 30, 2016 at 3:25 PM, Kornel <kor...@geekhood.net> wrote:
>
>>
>> On 30 Apr 2016, at 21:19, Rik Cabanier <caban...@gmail.com> wrote:
>>
>>>
>>> > It would be ideal if we can specify that the canvas backing store is
>>> in the device profile.
>>>
>>> How would the website know what profile this is? If it's just a boolean
>>> setting, then I don't see how it would make it possible to use such a
>>> canvas correctly, e.g. convert an XYZ color to the canvas' color space.
>>>
>>
>> This is how content is drawn today. A website doesn't know what profile a
>> browser is using.
>> Introducing this would make canvas drawing match HTML, which is what the
>> spec intends and what users want.
>>
>>
>> I think HTML colors being interpreted as colors in device color space is
>> a bug. It makes it hard/impossible to get consistent colors across HTML,
>> GIF and JPEG/PNG on wide-gamut displays:
>> https://kornel.ski/en/color
>>
>
> I don't see why that would be the case. The device color space doesn't
> imply that it is uncalibrated.
>
>
>> IMHO HTML/CSS and unlabelled image colors should be interpreted as sRGB
>> colors. That makes all content displayed consistently and without
>> over-saturation on wide gamut displays. That's what Safari does, and I
>> really like that behavior.
>>
>
> That is incorrect. With the advent of the DCI-P3 devices (iMac retina and
> iPad Pro), Safari switched to rendering using the monitor profile.
> So, if you place a DCI-P3 image on a web page, it will display with all
> its colors on the new devices, while it will look more washed out on
> other devices.
>

Sorry, after rereading my message it looks like we're talking about
different things.
Yes, Safari's behavior is great. We should keep that and hopefully push all
browsers towards this model.
We DO want compositing in the output device color profile though, so we
can use all available colors.


> Is device profile exposed somewhere in the platform yet? If not, I think
>>> it'd be better to leave it hidden to avoid adding more fingerprinting
>>> vectors.
>>>
>>
>> I'm unsure how this would contribute to fingerprinting.
>> If browsers start following the spec wrt ICC profile conversion, you could
>> infer the profile by drawing an image and looking at the pixels.
>>
>>
>> A user may have a custom, personal monitor calibration; e.g. on OS X,
>> System Preferences -> Color -> Calibrate does this. This is likely to
>> create a unique profile that can be used as a supercookie that uniquely
>> identifies the user, even across different browsers and private mode.
>>
>> Implementations must avoid exposing pixel data that has been converted to
>> display color space at any time, because it is possible to recreate the
>> profile by observing posterization.
>>
>> Therefore to avoid creation of a supercookie, by default canvas backing
>> store must be in sRGB, unlabelled images rendered to canvas must be assumed
>> to be in sRGB too, and toDataURL() has to export it in sRGB.
>>
>
> No. Canvas is defined to render like HTML so this needs to stay consistent.
> Also, we should hardcode such a limitation in the platform; if I have a
> nice monitor, I'd like my web browser to use it.
>

That is: we should NOT hardcode such a limitation :-)

> Setting the canvas to a website-supplied profile seems OK to me. It'd mean
>>> the website already knows how to convert colors to the given colorspace,
>>> and the same profile could be passed back by toDataURL().
>>>
>>
>> That would indeed be the ideal solution. My worry is that it introduces a
>> lot of changes in the browser (i.e. see Justin's email that started this
>> thread) and I'd like to see a solution sooner rather than later.
>>
>>
>> I'd rather not see any half-measures for mixed device RGB and sRGB.
>>
>
> This is not a half-measure. This is getting everyone to agree on the
> basics so authors can design websites that look consistent and that can
> take advantage of high gamut displays.
> More advanced features can be added later (and this is something the CSS
> WG is working on).
>
>
>> Color handling in Chrome and Firefox is currently problematic on
>> wide-gamut displays, not just in canvas, but everywhere. It's just not
>> possible to have a photo that matches a CSS background and doesn't have
>> orange faces on wide gamut displays. It's very frustrating from an
>> author's

Re: [whatwg] Canvas and color colorspaces (was: WebGL and ImageBitmaps)

2016-04-30 Thread Rik Cabanier
On Sat, Apr 30, 2016 at 12:38 PM, Kornel <kor...@geekhood.net> wrote:

>
> > On 30 Apr 2016, at 19:07, Rik Cabanier <caban...@gmail.com> wrote:
> >
> > It would be ideal if we can specify that the canvas backing store is in
> the device profile.
>
> How would the website know what profile this is? If it's just a boolean
> setting, then I don't see how it would make it possible to use such a
> canvas correctly, e.g. convert an XYZ color to the canvas' color space.
>

This is how content is drawn today. A website doesn't know what profile a
browser is using.
Introducing this would make canvas drawing match HTML, which is what the
spec intends and what users want.


> Is device profile exposed somewhere in the platform yet? If not, I think
> it'd be better to leave it hidden to avoid adding more fingerprinting
> vectors.
>

I'm unsure how this would contribute to fingerprinting.
If browsers start following the spec wrt ICC profile conversion, you could
infer the profile by drawing an image and looking at the pixels.


> Setting the canvas to a website-supplied profile seems OK to me. It'd mean
> the website already knows how to convert colors to the given colorspace,
> and the same profile could be passed back by toDataURL().
>

That would indeed be the ideal solution. My worry is that it introduces a
lot of changes in the browser (i.e. see Justin's email that started this
thread) and I'd like to see a solution sooner rather than later.


Re: [whatwg] Canvas and color colorspaces (was: WebGL and ImageBitmaps)

2016-04-30 Thread Rik Cabanier
[Sorry to revive this old thread]
All,

with the advent of DCI-P3 compliant monitors and Apple's Safari doing color
management to the device, we're seeing some issues in this area.

- Currently, WebKit sets the profile of the canvas backing store to sRGB
regardless of the output device. Because of this, high gamut images are
always clipped to sRGB. [1]
It would be ideal if we could specify that the canvas backing store is in
the device profile.
Alternatively, we could add an API to attach a color profile to the canvas.
- The spec currently states that toDataURL should not include a profile.
However, if the backing store is in the device color space, the generated
image should include the correct profile. Otherwise, if you draw the bitmap
in a compliant browser (i.e. Safari), the colors will look too saturated.

If we agree that canvas is in the device space, I'd like to see the spec
[2] clarified to state that compositing on the canvas should match
compositing on the HTML surface.
Specifically:

The canvas
<https://html.spec.whatwg.org/multipage/scripting.html#the-canvas-element>
APIs
must perform colour correction at only two points: when rendering images
with their own gamma correction and colour space information onto a bitmap,
to convert the image to the colour space used by the bitmaps (e.g. using
the 2D Context's drawImage()
<https://html.spec.whatwg.org/multipage/scripting.html#dom-context-2d-drawimage>
method
with an HTMLOrSVGImageElement
<https://html.spec.whatwg.org/multipage/scripting.html#htmlorsvgimageelement>
object),
and when rendering the actual canvas bitmap to the output device.

Becomes:

The canvas
<https://html.spec.whatwg.org/multipage/scripting.html#the-canvas-element>
APIs
must perform colour correction at only one point: when rendering content
with its own gamma correction and colour space information onto a bitmap to
the colour space used by the bitmaps (e.g. using the 2D Context's
drawImage()
<https://html.spec.whatwg.org/multipage/scripting.html#dom-context-2d-drawimage>
method
with an HTMLOrSVGImageElement
<https://html.spec.whatwg.org/multipage/scripting.html#htmlorsvgimageelement>
object).


toDataURL and toBlob [3] should also be enhanced so they include the device
profile if it is different from sRGB.

It would also be great if the browser could let us know what profile (if
any) it was using.

1:
https://github.com/WebKit/webkit/blob/112c663463807e8676765cb7a006d415c372f447/Source/WebCore/platform/graphics/ImageBuffer.h#L73
2:
https://html.spec.whatwg.org/multipage/scripting.html#colour-spaces-and-colour-correction
3:
https://html.spec.whatwg.org/multipage/scripting.html#dom-canvas-todataurl
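
For the alternative of attaching a color profile to the canvas, a
hypothetical sketch of what that could look like (the attribute name and
values here are made up for illustration; no spec defines them):

var canvas = document.createElement('canvas');
canvas.colorProfile = 'device'; // hypothetical: composite in the output device's space
var ctx = canvas.getContext('2d');
// ... draw ...
// toDataURL()/toBlob() would then tag the encoded image with the used
// profile whenever it differs from sRGB.
var url = canvas.toDataURL('image/png');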



On Thu, May 22, 2014 at 12:21 PM, Justin Novosad  wrote:

> tl;dr: The color space of canvas backing stores is undefined, which causes
> problems for many web devs, but also has non-negligible advantages. So be
> careful what you wish for.
>
> I saw some confusion and questions needing answers in the "WebGL and
> ImageBitmaps" thread regarding color management. I will attempt to clarify
> to the best of my abilities. Though I am knowledgeable on the subject, I am
> not an absolute authority, so others are welcome to correct me if I am
> wrong about anything.
>
> Color management... To make a long story short, there are two types of
> color profiles : input profiles and output profiles for characterizing
> input devices (cameras, scanners) and output devices (displays, printers)
> respectively.
> Image files will usually encode their color information in a standard
> color space or in an input-device-dependent space. If colors are encoded
> in a color space that is different from the format's default, then a color
> profile or a color space identifier must be encoded into the image
> resource's metadata.
>
> To present color-managed image content on screen, the image needs to be
> converted from whatever color space it was encoded in into a standard
> "connection space", using the color profile or color space metadata
> from the image resource. Then the colors need to be converted from the
> profile connection space to the output space, which is provided by the
> OS/display driver. Depending on the OS and hardware configuration, the
> output space may be a standard color space (like sRGB), or a
> device-specific color profile.
>
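
A conceptual sketch of the two conversions described above (the profile
objects and methods here are made up purely to illustrate the steps):

var pcs = inputProfile.toPCS(decodedPixels);    // image space -> PCS, e.g. CIE XYZ
var screenPixels = outputProfile.fromPCS(pcs);  // PCS -> display-referred values
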
> Currently, many color-managed software applications rely on the codec to
> take care of the entire color-management process for image and video
> content, meaning that the decoded image data is in an output-referred color
> space (i.e. the display's profile was applied). There are practical
> reasons for this, the most important ones being color fidelity and memory
> consumption. Let me explain. The profile connection space is typically CIE
> XYZ or CIE L*a*b. I won't get into the technical details of how these work
> except to say that they are device independent and allow for an accurate
> representation of the whole spectrum of human-visible 

Re: [whatwg] Interpretation of CanvasRenderingContext2D.closePath()

2015-11-16 Thread Rik Cabanier
On Mon, Nov 16, 2015 at 9:02 AM, Justin Novosad  wrote:

> Hi All,
>
> The text in the spec:
>
> 
>
> The closePath() method must do nothing if the object's path has no
> subpaths. Otherwise, it must mark the last subpath as closed, create a new
> subpath whose first point is the same as the previous subpath's first
> point, and finally add this new subpath to the path.
>
> Note: If the last subpath had more than one point in its list of points,
> then this is equivalent to adding a straight line connecting the last point
> back to the first point, thus "closing" the shape, and then repeating the
> last (possibly implied) moveTo() call.
>
> 
>
> Problematic use case:
>
> ctx.moveTo(9.8255,71.1829);
> ctx.lineTo(103,25);
> ctx.lineTo(118,25);
> ctx.moveTo(9.8255,71.1829);
> ctx.closePath();
> ctx.stroke();
>
> Should this draw a closed triangle or two connected line segments?
> According to the "Note" (or at least my interpretation of it), this should
> draw a closed triangle. But it appears that this is not what many browsers
> have implemented.  Chrome recently became compliant (or what I think is
> compliant), and the change in behavior was reported as a regression.
>
> Thoughts?
>

moveTo creates a new subpath. This means the closePath is going to do
nothing because the subpath is empty.
So according to the spec, this should create 2 connected lines.
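
Annotating the use case under this reading:

ctx.moveTo(9.8255, 71.1829); // subpath A: [(9.8255,71.1829)]
ctx.lineTo(103, 25);         // subpath A: adds (103,25)
ctx.lineTo(118, 25);         // subpath A: adds (118,25)
ctx.moveTo(9.8255, 71.1829); // starts subpath B with a single point
ctx.closePath();             // applies to subpath B; closing a one-point
                             // subpath produces no visible segment
ctx.stroke();                // strokes only the two segments of subpath A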


Re: [whatwg] Interpretation of CanvasRenderingContext2D.closePath()

2015-11-16 Thread Rik Cabanier
On Mon, Nov 16, 2015 at 9:41 AM, Justin Novosad <ju...@google.com> wrote:

> Also, the part about "repeating the last (possibly implied) moveTo() call"
> doesn't make much sense if we assume that closePath() applies to the new
> subpath that was started by the last moveTo() call.
>

It *is* super confusing. I complained about this in the past but it didn't
go anywhere.

For the implied moveTo case, take the following code:

ctx.lineTo(0,0);     // no moveTo, so moveTo(0,0) is implied -> create new subpath with points (0,0), (0,0)
ctx.lineTo(100,100); // -> subpath (0,0), (0,0), (100,100)
ctx.closePath();     // draw line to (0,0) -> subpath (0,0), (0,0), (100,100), (0,0), then create new subpath with point (0,0)
ctx.stroke();




> On Mon, Nov 16, 2015 at 12:38 PM, Justin Novosad <ju...@google.com> wrote:
>
>> That makes sense, but the text for closePath() talks about "the last
>> subpath", which I guess is a bit unclear.
>>
>> On Mon, Nov 16, 2015 at 12:30 PM, Rik Cabanier <caban...@gmail.com>
>> wrote:
>>
>>>
>>>
>>> On Mon, Nov 16, 2015 at 9:02 AM, Justin Novosad <ju...@google.com>
>>> wrote:
>>>
>>>> Hi All,
>>>>
>>>> The text in the spec:
>>>>
>>>> 
>>>>
>>>> The closePath() method must do nothing if the object's path has no
>>>> subpaths. Otherwise, it must mark the last subpath as closed, create a
>>>> new
>>>> subpath whose first point is the same as the previous subpath's first
>>>> point, and finally add this new subpath to the path.
>>>>
>>>> Note: If the last subpath had more than one point in its list of points,
>>>> then this is equivalent to adding a straight line connecting the last
>>>> point
>>>> back to the first point, thus "closing" the shape, and then repeating
>>>> the
>>>> last (possibly implied) moveTo() call.
>>>>
>>>> 
>>>>
>>>> Problematic use case:
>>>>
>>>> ctx.moveTo(9.8255,71.1829);
>>>> ctx.lineTo(103,25);
>>>> ctx.lineTo(118,25);
>>>> ctx.moveTo(9.8255,71.1829);
>>>> ctx.closePath();
>>>> ctx.stroke();
>>>>
>>>> Should this draw a closed triangle or two connected line segments?
>>>> According to the "Note" (or at least my interpretation of it), this
>>>> should
>>>> draw a closed triangle. But it appears that this is not what many
>>>> browsers
>>>> have implemented.  Chrome recently became compliant (or what I think is
>>>> compliant), and the change in behavior was reported as a regression.
>>>>
>>>> Thoughts?
>>>>
>>>
>>> moveTo creates a new subpath. This means the closePath is going to do
>>> nothing because the subpath is empty.
>>> So according to the spec, this should create 2 connected lines.
>>>
>>
>>
>


Re: [whatwg] Interpretation of CanvasRenderingContext2D.closePath()

2015-11-16 Thread Rik Cabanier
On Mon, Nov 16, 2015 at 10:54 AM, Justin Novosad <ju...@google.com> wrote:

>
>
> On Mon, Nov 16, 2015 at 1:40 PM, Rik Cabanier <caban...@gmail.com> wrote:
>
>>
>>
>> On Mon, Nov 16, 2015 at 9:41 AM, Justin Novosad <ju...@google.com> wrote:
>>
>>> Also, the part about "repeating the last (possibly implied) moveTo()
>>> call" doesn't make much sense if we assume that closePath() applies to the
>>> new subpath that was started by the last moveTo() call.
>>>
>>
>> It *is* super confusing. I complained about this in the past but it
>> didn't go anywhere.
>>
>
>> For the implied moveTo case, take the following code:
>>
>> ctx.lineTo(0,0);     // no moveTo, so moveTo(0,0) is implied -> create new subpath with points (0,0), (0,0)
>> ctx.lineTo(100,100); // -> subpath (0,0), (0,0), (100,100)
>> ctx.closePath();     // draw line to (0,0) -> subpath (0,0), (0,0), (100,100), (0,0), then create new subpath with point (0,0)
>> ctx.stroke();
>>
>>
> To be clear, my problem with the wording is that "(possibly implied)"
> implies that the moveTo may also be explicit. In the case where there is an
> explicit (non-implied) moveTo, does that make closePath essentially a no-op?
>

Can you write out the calls so it's clear if we're talking about the
current or previous subpath?


> On Mon, Nov 16, 2015 at 12:38 PM, Justin Novosad <ju...@google.com> wrote:
>>>
>>>> That makes sense, but the text for closePath() talks about "the last
>>>> subpath", which I guess is a bit unclear.
>>>>
>>>> On Mon, Nov 16, 2015 at 12:30 PM, Rik Cabanier <caban...@gmail.com>
>>>> wrote:
>>>>
>>>>>
>>>>>
>>>>> On Mon, Nov 16, 2015 at 9:02 AM, Justin Novosad <ju...@google.com>
>>>>> wrote:
>>>>>
>>>>>> Hi All,
>>>>>>
>>>>>> The text in the spec:
>>>>>>
>>>>>> 
>>>>>>
>>>>>> The closePath() method must do nothing if the object's path has no
>>>>>> subpaths. Otherwise, it must mark the last subpath as closed, create
>>>>>> a new
>>>>>> subpath whose first point is the same as the previous subpath's first
>>>>>> point, and finally add this new subpath to the path.
>>>>>>
>>>>>> Note: If the last subpath had more than one point in its list of
>>>>>> points,
>>>>>> then this is equivalent to adding a straight line connecting the last
>>>>>> point
>>>>>> back to the first point, thus "closing" the shape, and then repeating
>>>>>> the
>>>>>> last (possibly implied) moveTo() call.
>>>>>>
>>>>>> 
>>>>>>
>>>>>> Problematic use case:
>>>>>>
>>>>>> ctx.moveTo(9.8255,71.1829);
>>>>>> ctx.lineTo(103,25);
>>>>>> ctx.lineTo(118,25);
>>>>>> ctx.moveTo(9.8255,71.1829);
>>>>>> ctx.closePath();
>>>>>> ctx.stroke();
>>>>>>
>>>>>> Should this draw a closed triangle or two connected line segments?
>>>>>> According to the "Note" (or at least my interpretation of it), this
>>>>>> should
>>>>>> draw a closed triangle. But it appears that this is not what many
>>>>>> browsers
>>>>>> have implemented.  Chrome recently became compliant (or what I think
>>>>>> is
>>>>>> compliant), and the change in behavior was reported as a regression.
>>>>>>
>>>>>> Thoughts?
>>>>>>
>>>>>
>>>>> moveTo creates a new subpath. This means the closePath is going to do
>>>>> nothing because the subpath is empty.
>>>>> So according to the spec, this should create 2 connected lines.
>>>>>
>>>>
>>>>
>>>
>>
>


Re: [whatwg] Handling out of memory issues with getImageData/createImageData

2015-09-26 Thread Rik Cabanier
On Fri, Sep 25, 2015 at 7:51 AM, Boris Zbarsky  wrote:

> On 9/25/15 10:48 AM, Justin Novosad wrote:
>
>> I am sharing this here in case there would be interest in standardizing
>> this behavior.
>>
>
> I personally think it's a good idea (and throwing an exception is how
> Gecko handles, or at least aims to handle, this situation).


In the past, we discussed that error conditions such as this shouldn't
throw exceptions. Most of the time, this type of error is transient and
is resolved in the next frame. Rare exceptions are almost never caught by
the author, so the application crashes.

Maybe for out-of-memory conditions, we could return a fake ImageData object
with nothing but transparent pixels. In addition, an 'isValid' property
could signal whether you have a real ImageData object.
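
A sketch of how author code might use that (the 'isValid' property is the
proposal above, not an existing API):

var data = ctx.getImageData(0, 0, canvas.width, canvas.height);
if (data.isValid) {      // hypothetical property from this proposal
  processPixels(data);   // processPixels: the author's own routine
} else {
  // allocation failed; skip this frame and retry on the next one
}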


Re: [whatwg] Why CanvasRenderingContext2D uses WebIDL unrestricted float type?

2015-03-24 Thread Rik Cabanier
On Tue, Mar 24, 2015 at 1:06 PM, Tetsuharu OHZEKI saneyuki.s...@gmail.com
wrote:

 Hi everybody.

 I have a question about the definition of CanvasRenderingContext2D's
 behavior.

 The current spec about CanvasRenderingContext2D says the following:

  Except where otherwise specified, for the 2D context interface,
  any method call with a numeric argument whose value is infinite
  or a NaN value must be ignored.

 https://html.spec.whatwg.org/multipage/scripting.html#canvasrenderingcontext2d

 But why doesn't CanvasRenderingContext2D use the restricted
 float type defined in WebIDL if these methods ignore the value when
 it is not finite?

 By the current WebIDL spec
 (http://heycam.github.io/webidl/#es-double), restricted values,
 'float' and 'double', will raise a TypeError in the conversion phase under
 an ECMAScript environment if the passed value is NaN or +/-Infinity.

 For the purpose of ignoring non-finite values, I feel it would be
 better to restrict by the IDL type. So is the definition in the current
 spec for backward compatibility, or simply a spec issue?


We had a long discussion on this about a year ago.
In short, we want web APIs to be robust so that if a developer makes a
mistake and passes a NaN or other strange value, the application will
attempt to keep on running.
Worst case, the app will crash later on anyway and best case, it will show
up as a short flicker or not at all.
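
For example, under the current spec text this runs without throwing:

ctx.beginPath();
ctx.moveTo(0, 0);
ctx.lineTo(NaN, 100);  // ignored per the spec: no TypeError, path unchanged
ctx.lineTo(100, 100);  // still draws from (0,0)
ctx.stroke();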


Re: [whatwg] Canvas image to blob/dataurl within Worker

2015-03-22 Thread Rik Cabanier
On Sat, Mar 21, 2015 at 11:02 PM, Robert O'Callahan rob...@ocallahan.org
wrote:

 On Sun, Mar 22, 2015 at 6:45 PM, Rik Cabanier caban...@gmail.com wrote:

 On Sat, Mar 21, 2015 at 1:44 PM, Robert O'Callahan rob...@ocallahan.org
 wrote:

  On Sun, Mar 22, 2015 at 7:16 AM, Rik Cabanier caban...@gmail.com
 wrote:
 
  Justin is worried that in order to make this asynchronous, Chrome has
 to
  create a snapshot of the canvas bits which is slow if it resides on the
  GPU.
  Of course, his workaround to use getImageData is just as slow since it
 has
  to do the same operation.
 
 
  One of the advantages of having a native async toBlob API is that the
  browser can asynchronously read back from GPU memory (when the graphics
 API
  permits this --- D3D11 does, at least). Gecko currently doesn't take
  advantage of this, however.
 

 You would need a copy in GPU memory first to do the async readback on.


 Not necessarily.


 There are many scenarios (i.e. a fullscreen hidpi canvas) where this might
 fill the GPU's memory.


 Unlikely in practice.


Hopefully Justin can chime in on this. Google Maps in particular taxes the
GPU a lot.
If Chrome has to reserve enough space for another backbuffer, this will
certainly make rendering of complex scenes slower.


  To alleviate this, I have 2 proposals:
  - After calling toBlob, the canvas is read-only until the promise is
  fulfilled
  - If the user changes the canvas after calling toBlob, the promise is
  cancelled.
 
  Maybe we should only allow 1 outstanding toBlob per canvas element too.
 
 
  I don't think we should impose any of these restrictions. They're not
  necessary.
 

 How else would you avoid making a copy?


 It depends on lots of variables, but there are certainly scenarios when
 you can do async readback without making a copy.

 Even if you have to make a copy in GPU memory, that's not a big problem
 most of the time, not compared to doing a synchronous readback.

 Rob



Re: [whatwg] Canvas image to blob/dataurl within Worker

2015-03-21 Thread Rik Cabanier
On Sat, Mar 21, 2015 at 4:19 AM, Jake Archibald jaffathec...@gmail.com
wrote:

 I'd rather we did that by introducing promises to HTMLCanvasElement.
 Returning a promise from toBlob is easy; making the callback arg optional
 by checking the type of the first arg is hacky but possible (and is done in
 js libs).

The spec (if there is one?) should be updated to return a promise and leave
out the callback:

promise canvas.toBlob(optional type, optional encoderOptions);

Mozilla would keep their existing implementation around and the IDL logic
would automatically pick the right call.
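
Until then, a minimal user-land shim is possible (assuming only the
callback-based toBlob; the promise-returning name here is made up):

if (!HTMLCanvasElement.prototype.toBlobPromise) {
  HTMLCanvasElement.prototype.toBlobPromise = function (type, encoderOptions) {
    var canvas = this;
    return new Promise(function (resolve) {
      canvas.toBlob(resolve, type, encoderOptions);
    });
  };
}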


 On Sat, 21 Mar 2015 10:56 Robert O'Callahan rob...@ocallahan.org wrote:

 On Sat, Mar 21, 2015 at 5:45 PM, Rik Cabanier caban...@gmail.com wrote:

 Ah, OK. I thought we were changing it for both cases. This will cause a
 lot
 of confusion...


 If we want to keep HTMLCanvasElement and WorkerCanvas in sync, we can.




Re: [whatwg] Canvas image to blob/dataurl within Worker

2015-03-21 Thread Rik Cabanier
On Sat, Mar 21, 2015 at 6:21 AM, Ashley Gullen ash...@scirra.com wrote:

 Is everyone here aware that currently Google has stated they do not plan
 to implement toBlob?:
 https://code.google.com/p/chromium/issues/detail?id=67587

 IMO this is the wrong call. We should be favouring blobs over data URLs
 since they are more efficient (no size bloat, can be requested async like
 other network resources, no need to copy round very large strings).


Justin is worried that in order to make this asynchronous, Chrome has to
create a snapshot of the canvas bits which is slow if it resides on the GPU.
Of course, his workaround to use getImageData is just as slow since it has
to do the same operation.

To alleviate this, I have 2 proposals:
- After calling toBlob, the canvas is read-only until the promise is
fulfilled
- If the user changes the canvas after calling toBlob, the promise is
cancelled.

Maybe we should only allow 1 outstanding toBlob per canvas element too.

I made a small code example of toBlob here:
http://codepen.io/adobe/full/raoZdQ/
It works smoothly on my Mac and PC laptops, but is really janky on my PC
desktop.



 On 21 March 2015 at 11:19, Jake Archibald jaffathec...@gmail.com wrote:

 I'd rather we did that by introducing promises to HTMLCanvasElement.
 Returning a promise from toBlob is easy, making the callback arg optional
 by checking the type of the first arg is hacky but possible (and is done
 in
 js libs).

 On Sat, 21 Mar 2015 10:56 Robert O'Callahan rob...@ocallahan.org wrote:

  On Sat, Mar 21, 2015 at 5:45 PM, Rik Cabanier caban...@gmail.com
 wrote:
 
  Ah, OK. I thought we were changing it for both cases. This will cause a
  lot
  of confusion...
 
 
  If we want to keep HTMLCanvasElement and WorkerCanvas in sync, we can.
 
  Rob
 





Re: [whatwg] Canvas image to blob/dataurl within Worker

2015-03-21 Thread Rik Cabanier
On Sat, Mar 21, 2015 at 1:44 PM, Robert O'Callahan rob...@ocallahan.org
wrote:

 On Sun, Mar 22, 2015 at 7:16 AM, Rik Cabanier caban...@gmail.com wrote:

 Justin is worried that in order to make this asynchronous, Chrome has to
 create a snapshot of the canvas bits which is slow if it resides on the
 GPU.
 Of course, his workaround to use getImageData is just as slow since it has
 to do the same operation.


 One of the advantages of having a native async toBlob API is that the
 browser can asynchronously read back from GPU memory (when the graphics API
 permits this --- D3D11 does, at least). Gecko currently doesn't take
 advantage of this, however.


You would need a copy in GPU memory first to do the async readback on.
There are many scenarios (i.e. a fullscreen hidpi canvas) where this might
fill the GPU's memory.


 To alleviate this, I have 2 proposals:
 - After calling toBlob, the canvas is read-only until the promise is
 fulfilled
 - If the user changes the canvas after calling toBlob, the promise is
 cancelled.

 Maybe we should only allow 1 outstanding toBlob per canvas element too.


 I don't think we should impose any of these restrictions. They're not
 necessary.


How else would you avoid making a copy?


Re: [whatwg] Canvas image to blob/dataurl within Worker

2015-03-20 Thread Rik Cabanier
On Fri, Mar 20, 2015 at 3:15 PM, Robert O'Callahan rob...@ocallahan.org
wrote:

 On Sat, Mar 21, 2015 at 1:13 AM, Jake Archibald jaffathec...@gmail.com
 wrote:

  Receiving a push message results in a 'push' event within the
  ServiceWorker. The likely action at this point is to show a notification.
  It should be possible to generate an image to use as an icon for this
  notification (
  https://notifications.spec.whatwg.org/#dom-notificationoptions-icon).
 
  This could be a badge showing some kind of unread count, some combination
  of app icon  avatar, or even a default avatar (Google Inbox generates an
  avatar from the senders names first letter).
 
  This is also useful for generating images to go into the cache API, or
  transforming images as they pass through the ServiceWorker.
 
  API:
 
  Almost all the pieces already exist, except a way to get the image data
 of
  a CanvasRenderingContext2D into a format that can be read from a
  url. ImageBitmap seems like a good fit for such an API:
 
  var context = new CanvasRenderingContext2D(192, 192);

  Promise.all([
    caches.match('/avatars/ben.png')
      .then(r => r.blob())
      .then(b => createImageBitmap(b)),
    caches.match('/avatars/julie.png')
      .then(r => r.blob())
      .then(b => createImageBitmap(b))
  ]).then(([ben, julie]) => {
    context.drawImage(ben, 0, 0);
    context.drawImage(julie, 96, 96);
    return createImageBitmap(context);
  }).then(
    // … and here's the bit we're missing …
    image => image.toDataURL()
  ).then(icon => {
    self.registration.showNotification('Hello!', { icon });
  });
 

 My understanding is that the current consensus proposal for canvas in
 Workers is not what's in the spec, but this:
 https://wiki.whatwg.org/wiki/WorkerCanvas
 See the "Canvas in Workers" threads from October 2013 for the discussion.
 svn is failing me, but the CanvasProxy proposal in the spec definitely
 predates those threads.

 Ian, unless I'm wrong, it would be helpful to remove the CanvasProxy stuff
 from the spec to avoid confusion.

 That proposal contains WorkerCanvas.toBlob, which needs to be updated to
 use promises


Do you know how many sites use toBlob in Firefox?
A quick search on GitHub shows a very high number of pages [1], so it might
be too late to change.

Maybe you can keep the callback and return a promise?


1: https://github.com/search?l=javascript&q=toblob&type=Code&utf8=%E2%9C%93
https://github.com/search?q=toblob&type=Code&utf8=%E2%9C%93


Re: [whatwg] Canvas image to blob/dataurl within Worker

2015-03-20 Thread Rik Cabanier
On Fri, Mar 20, 2015 at 9:42 PM, Robert O'Callahan rob...@ocallahan.org
wrote:

 On Sat, Mar 21, 2015 at 3:38 PM, Rik Cabanier caban...@gmail.com wrote:

 Do you know how many sites use toBlob in Firefox?
 A quick search on GitHub shows a very high number of pages [1], so it
 might be too late to change.

 Maybe you can keep the callback and return a promise?


 None of them use WorkerCanvas.toBlob since we don't implement that yet.


Ah, OK. I thought we were changing it for both cases. This will cause a lot
of confusion...


Re: [whatwg] Support filters in Canvas

2014-09-29 Thread Rik Cabanier
On Mon, Sep 29, 2014 at 10:20 AM, Markus Stange msta...@themasta.com
wrote:

 Hi,

 I'd like to revive this discussion.

 On Sat, Mar 15, 2014 at 12:03 AM, Dirk Schulze dschu...@adobe.com wrote:

  I would suggest a filter attribute that takes a list of filter operations
  similar to the CSS Image filter function[1]. Similar to shadows[2], each
  drawing operation would be filtered. The API looks like this:
 
  partial interface CanvasRenderingContext2D {
  attribute DOMString filter;
  }
 
  A filter DOMString could look like: “contrast(50%) blur(3px)”

 This approach sounds good to me, and it's what I've implemented for
 Firefox in bug 927892 [1]. The Firefox implementation is behind the
 preference "canvas.filters.enabled", which is currently off by default.

  Filter functions include a reference to a filter element and a
  specification of SVG filters[4]. I am unsure if a reference to an element
  within a document can cause problems. If it does, we would just not
  support SVG filter references.

 I've included support for SVG filters in the Firefox implementation.
 It's a bit of work and it increases the number of edge cases we need
 to specify, but I think it's worth it.


Can we limit it to just the set of CSS filter shorthands for now?
I think other UAs are further behind in their implementation of
integrating SVG filters in their rendering pipeline.
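
For reference, usage under Dirk's proposal would look like this (a sketch;
img is assumed to be an already-loaded image):

ctx.filter = 'contrast(50%) blur(3px)';
ctx.drawImage(img, 0, 0); // drawn with the filter applied
ctx.filter = 'none';      // back to unfiltered drawing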


 Here's a more fleshed-out proposal that attempts to define the edge
 cases I've encountered during development.

 The ctx.filter property should behave like the ctx.font property in some
 senses:
  - It's part of the state of the context and honors ctx.save() and
 ctx.restore().
  - Setting an invalid value is ignored silently.
  - Both 'inherit' and 'initial' are invalid values, as with font.
  - Setting a valid value sets the current state's filter to that
 value, and the getter will now return this value, possibly
 reserialized.
 Question: Do we want the getter to return the serialized form of the
 filter?


Since it's an attribute, it would be strange if it returned a different
string. It should return the same value that it was set to.


 I don't really mind either way, and I'm not sure in what cases
 the results would differ. I guess extraneous whitespace between
 individual filter functions would be cleaned up, and 0 length values
 would get set to 0px. Anything else?
  - Resetting the state to no filtering is done using ctx.filter =
  "none". Values such as "", null, or undefined are invalid and will be
  ignored and will not unset the filter.
  Question: Is this what we want?

 Filter rendering should work similarly to shadow rendering:
  - It happens on every drawing operation, with the input to the filter
 being what that operation would have rendered regularly.
  - The transform of the context is applied during rendering of the
 input. The actual filtering is not be subject to the transform and
 happens in device space. This means that e.g. a drop-shadow(0px 10px
 black) filter always offsets the shadow towards the bottom, regardless
 of the transform.
  - The results in the canvas pixel buffer will be the same regardless
 of the CSS size of the canvas in the page, and regardless of whether
 the canvas element is in the page at all or just a detached DOM node.
  - The global composite operation is respected when compositing the
 filtered results into the canvas contents. The filter input drawing
  operation is always rendered with "over" into a conceptual transparent
 temporary surface.
  - The same applies for global alpha.

 Interaction with shadow:
  - If both a filter and a shadow are set on the canvas, filtering will
 happen first, with the shadow being applied to the filtered results.
 In that case the global composite operation will be respected when
 compositing the result with shadow into the canvas.
  - As a consequence of the other statements, this is true: If a valid
  filter is used, appending " drop-shadow(<shadowOffsetX>px
  <shadowOffsetY>px <shadowBlur>px <shadowColor>)" to the filter will
 have the same results as using the shadow properties, even if there is
 a transform on the context.


Since you can do a shadow with the filter attribute, maybe we can specify
that the shadow attribute is ignored?
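
A sketch of the equivalence stated above (the values are arbitrary):

// per the proposal, with a transform active these two render identically:
ctx.filter = 'blur(2px)';
ctx.shadowOffsetX = 5; ctx.shadowOffsetY = 5;
ctx.shadowBlur = 3; ctx.shadowColor = 'black';
// ...versus folding the shadow into the filter list:
// ctx.filter = 'blur(2px) drop-shadow(5px 5px 3px black)';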


 Units:
  - The CSS px unit refers to one canvas pixel, independent of the CSS
 size of the canvas on the page. That is, a drop-shadow(0 10px black)
 filter will have the same results in the canvas-internal pixel buffer,
  regardless of whether that canvas is specified using <canvas
  width="100" height="100" style="width: 100px; height: 100px;"> or
  <canvas width="100" height="100" style="width: 20px; height: 20px;">.
  - Lengths in non-px units refer to the number of canvas pixels you
 get if you convert the length to CSS px and interpret that number as
 canvas pixels.

 Font size relative units:
  - Lengths in em are relative to the font size of the canvas context
 as specified by ctx.font.
  - The same applies for lengths in ex; and those use the x-height of
 

Re: [whatwg] Support filters in Canvas

2014-09-29 Thread Rik Cabanier
On Mon, Sep 29, 2014 at 8:52 PM, Robert O'Callahan rob...@ocallahan.org
wrote:

 On Mon, Sep 29, 2014 at 2:12 PM, Rik Cabanier caban...@gmail.com wrote:

 Can we limit it to just the set of CSS filter shorthands for now?
 I think other UA's are further behind in their implementation of
 integrating SVG filters in their rendering pipeline.


 How about we put CSS + SVG filters into the one spec draft for now, and
 split them up into two drafts if we actually get to the point where vendors
 want to ship one and not the other? It seems premature to create both HTML
 Canvas Filters Level 1 and HTML Canvas Filters Level 2 at the same time.


Are you proposing that this is developed as a separate spec and not as an
addition to the canvas specification?


Re: [whatwg] Canvas Path.addPath SVGMatrix not optimal?

2014-08-25 Thread Rik Cabanier
On Mon, Mar 24, 2014 at 7:31 AM, Justin Novosad ju...@google.com wrote:

 On Sat, Mar 22, 2014 at 4:20 AM, Dirk Schulze dschu...@adobe.com wrote:

  So can we agree on:
 
  addPath(Path, optional SVGMatrix)
 

 lgtm


Firefox [1], WebKit [2] and Blink [3] implemented addPath with the matrix
as an optional parameter.
Can the spec be updated to reflect this?

1: https://bugzilla.mozilla.org/show_bug.cgi?id=985801
2: https://bugs.webkit.org/show_bug.cgi?id=130461
3: https://codereview.chromium.org/170503002


Re: [whatwg] [2D Canvas] Proposal: batch variants of drawImage

2014-08-08 Thread Rik Cabanier
On Thu, Aug 7, 2014 at 7:11 PM, Katelyn Gadd k...@luminance.org wrote:

 Sorry, in this context rgba multiplication refers to per-channel
 multipliers (instead of only one multiplier for the alpha channel), so
 that you can color tint images when drawing them. As mentioned, it's
 used for fades, drawing colored text, and similar effects.


I see. Any reason that this couldn't be done with a 'multiply' blend mode?


 Premultiplication is a different subject, sorry if I confused you with
 the similar language. There are past discussions about both in the
 list archives.

 On Thu, Aug 7, 2014 at 10:59 AM, Rik Cabanier caban...@gmail.com wrote:
 
 
 
  On Mon, Aug 4, 2014 at 4:35 PM, Katelyn Gadd k...@luminance.org wrote:
 
  Many, many uses of drawImage involve transform and/or other state
  changes per-blit (composite mode, global alpha).
 
  I think some of those state changes could be viably batched for most
  games (composite mode) but others absolutely cannot (global alpha,
  transform). I see that you handle transform with
  source-rectangle-and-transform (nice!) but you do not currently handle
  the others. I'd suggest that this needs to at least handle
  globalAlpha.
 
  Replacing the overloading with individual named methods is something
  I'm also in favor of. I think it would be ideal if the format-enum
  argument were not there so that it's easier to feature-detect what
  formats are available (for example, if globalAlpha data is added later
  instead of in the '1.0' version of this feature).
 
 
  We can define the functions so they throw a type error if an unknown
 enum is
  passed. That way you can feature detect future additions to the enum.
 
  What should we do about error detection in general? If we require the
 float
  array to be well formed before drawing, we need an extra pass to make
 sure
  that they are correct.
  If we don't require it, we can skip that pass but content could be
 partially
  drawn to the canvas before the exception is thrown.
 
 
  I get the impression that ordering is implicit for this call - the
  batch's drawing operations occur in exact order. It might be
  worthwhile to have a way to indicate to the implementation that you
  don't care about order, so that it is free to rearrange the draw
  operations by image and reduce state changes. Doing that in userspace
  js is made difficult since you can't easily do efficient table lookup
  for images.
 
  if rgba multiplication were to make it into canvas2d sometime in the
  next decade, that would nicely replace globalAlpha as a per-draw
  value. This is an analogue to per-vertex colors in 3d graphics and is
  used in virtually every hardware-accelerated 2d game out there,
  whether to tint characters when drawing text, fade things in and out,
  or flash the screen various colors. That would be another reason to
  make feature detection easier.
 
  Would it be possible to sneak rgba multiplication in under the guise
  of this feature? ;) Without it, I'm forced to use WebGL and reduce
  compatibility just for something relatively trivial on the
  implementer's side. (I should note that from what I've heard, Direct2D
  actually makes this hard to implement.)
 
 
  Is this the other proposal to control the format of the canvas buffer
 that
  is passed to WebGL?
 
 
  On the bright side there's a workaround for RGBA multiplication based
  on generating per-channel bitmaps from the source bitmap (k, r/g/b),
  then blending them source-over/add/add/add. drawImageBatch would
  improve perf for the r/g/b part of it, so it's still an improvement.
 
  On Mon, Aug 4, 2014 at 3:39 PM, Robert O'Callahan rob...@ocallahan.org
 
  wrote:
   It looks reasonable to me.
  
   How do these calls interact with globalAlpha etc? You talk about
   decomposing them to individual drawImage calls; does that mean each
   image
   draw is treated as a separate composite operation?
  
   Currently you have to choose between using a single image or passing
 an
   array with one element per image-draw. It seems to me it would be more
   flexible to always pass an array but allow the parameters array to
 refer
   to
   an image by index. Did you consider that approach?
  
   Rob
 
 



Re: [whatwg] [2D Canvas] Proposal: batch variants of drawImage

2014-08-08 Thread Rik Cabanier
On Fri, Aug 8, 2014 at 7:25 AM, Ashley Gullen ash...@scirra.com wrote:

 As Justin stated, 20% of current Chrome users fall back to canvas 2d.


 1. What fraction of those 20% actually still get a GPU accelerated canvas
 vs. software rendered? Batching will be of very little use to the software
 rendered audience, making it an even smaller target market.


There will still be a noticeable gain since we wouldn't have to cross the
JS boundary as much.
More importantly though, things will still work for non-accelerated canvas
while WebGL won't.


 2. In Firefox's case, that number has reduced from 67% to 15% over a
 couple of years. Surely in time this will fall even further to a negligible
 amount. Why standardise a feature whose target market is disappearing?


How long will it be before we're at a reliable 100%? Graphics drivers are
still flaky and it's not as if they only came out a couple of years ago.
Note that the supported % for graphics layers (accelerated compositing)
is much lower.


 Small developers that don't have the resources to develop for concurrent
 WebGL and Canvas2D code paths


 They don't have to: there are free, high-quality open-source libraries
 like Pixi.js that do this already, so even small developers have an easy
 way to make use of a WebGL renderer without much extra effort.


 When do you envision that OpenGL drivers are bug free everywhere? History
 is not on your side here...
 I would much rather have something short term that can be implemented
 with low effort and improves performance.


 No software is ever bug-free, but this is irrelevant. To not be
 blacklisted, drivers don't need to be perfect, they just need to meet a
 reasonable threshold of security and reliability. If a driver is insecure
 or crashes constantly it is blacklisted. Drivers are being improved so they
 are no longer this poor, and it is not unrealistic to imagine 99%+ of
 drivers meeting this threshold in the near future, even if none of them are
 completely bug-free.


I think Justin can fill you in better on this. Chrome has to jump through
many hoops to make canvas reliable on top of OpenGL, and it still suffers
from random crashes when you stress the system.
Both Safari and Firefox use higher-level system calls and are more reliable
(albeit slower) than Chrome.


 I don't really understand why you and Brian are so opposed to improving
 the performance of canvas 2D.


 I see it as a feature targeted at a rapidly disappearing segment of the
 market that will disappear in the long run, leaving the web platform with
 unnecessary API cruft.


 Following your logic, why work on new canvas or SVG features as they can
 theoretically be emulated in WebGL?
 Or now that we have asm.js, why even bother with new JavaScript features?


 I am in general against duplication on the web platform, but new features
 deserve to be implemented if they have a valid use case or solve a real
 problem.


The problem is that a large number of drawImage calls have a lot of
overhead due to JS crossings and housekeeping. This proposal solves that.


 In this case I don't see that any real problem is being solved, since
 widely available frameworks and engines already solve it with WebGL in a
 way accessible even to individual developers, and this solution is already
 production-grade and widely deployed.


Sure, but that is in WebGL which not everyone wants to use and is less
widely supported.


 On further thought this particular proposal doesn't even appear to solve
 the batching problem very well. Many games consist of large numbers of
 rotated sprites. If a canvas2d batching facility needs to break the batch
 every time it needs to call rotate(), this will revert back to individual
 draw-calls for many kinds of game. WebGL does not have this limitation and
 can batch in to single calls objects of a variety of scales, angles,
 tiling, opacity and more. This is done by control over individual vertex
 positions and texture co-ordinates, which is a fundamental break from the
 style of the canvas2d API. Therefore even with the proposed batching
 facility, for maximum performance it is still necessary to use WebGL. This
 proposal solves a very narrowly defined performance problem.


I'm unsure if I follow. The point of Justin's proposal is to do just that
under the hood.
Why do you think the batching needs to be broken up? Did you see that the
proposal has a matrix per draw?


 An alternate solution is for browser vendors to implement canvas2d
 entirely in JS on top of WebGL. This reduces per-call overhead by staying
 in JS land, while not needing to add any new API surface. In fact it looks
 like this has already been attempted here:
 https://github.com/corbanbrook/webgl-2d -


Implementing canvas on top of WebGL is not realistic.
Please look into Chrome's implementation to make canvas reliable and fast.
This cannot be achieved today.

I totally agree that if the web platform matures to a point where this IS
possible, we 

Re: [whatwg] [2D Canvas] Proposal: batch variants of drawImage

2014-08-08 Thread Rik Cabanier
On Fri, Aug 8, 2014 at 7:54 PM, Katelyn Gadd k...@luminance.org wrote:

 A multiply blend mode by itself is not sufficient because the image
 being rgba multiplied typically has alpha transparency. The closest
 approximation is to generate (offline, in software with getImageData)
 an image per channel - rgbk - and to source-over blend the 'k' channel
 and then additive blend the r/g/b channels with individual alpha. This
 approximates the per-channel alpha values with nearly equivalent
 results (I say nearly equivalent because some browsers do weird stuff
 with gamma/colorspace conversion that completely breaks this.)

 If it helps, you could think of this operation as a layer group in
 photoshop. The first layer in the group is the source image, the
 second layer is a solid color filled layer containing the rgba
 'multiplier', in multiply mode, and then the layer group has a mask on
 it that contains the source image's alpha information. Note that this
 representation (a layer group with two layers and a mask) implies that
 drawing an image this way requires multiple passes, which is highly
 undesirable. My current fallback requires 4 passes, along with 4
 texture changes and two blend state changes. Not wonderful.


I see; you're asking for a feature like Photoshop's Color Overlay layer
effect. Is that correct?



 RGBA multiplication dates back to early fixed-function graphics
 pipelines. If a blend with globalAlpha and a premultiplied source is
 represented like this:

 result(r, g, b) = ( source-premultiplied(r, g, b) * globalAlpha ) + (
 dest(r, g, b) * (1 - (source(a) * globalAlpha)) )

 Then if you take a premultiplied color constant and use that as the
 multiplier for your image (instead of a global alpha value - this is
 the input to rgba multiplication, i.e. a 'vertex color'):

 result(r, g, b) = ( source-premultiplied(r, g, b) *
 rgba-multiplier-premultiplied(r, g, b) ) + ( dest(r, g, b) * (1 -
 (source(a) * rgba-multiplier-premultiplied(a))) )

 (Sorry if this is unclear, I don't have a math education)

 So you basically take the global alpha multiplier and you go from that
 to a per-channel multiplier. If you're using premultiplied alpha
 already, this ends up being pretty straightforward... you just take a
 color (premultiplied, like everything else) and use that as your
 multiplier. You can multiply directly by each channel since the global
 'alpha' part of the multiplier is already baked in by the
 premultiplication step.

 This is a really common primitive since it's so easy to implement, if
 not entirely free - you're already doing that global alpha
 multiplication, so you just introduce a different multiplier
 per-channel, which is really trivial in a SIMD model like the ones
 used in computer graphics. You go from vec4 * scalar to vec4 * vec4.


 Text rendering is the most compelling reason to support this, IMO.
 With this feature you can build glyph atlases inside 2d canvases
 (using something like freetype, etc), then trivially draw colored
 glyphs out of them without having to drop down into getImageData or
 use WebGL. It is trivially expressed in most graphics APIs since it
 uses the same machinery as a global alpha multiplier - if you're
 drawing a premultiplied image with an alpha multiplier in hardware,
 you're almost certainly doing vec4 * scalar in your shader. If you're
 using the fixed-function pipeline from bad old 3d graphics, vec4 *
 scalar didn't even exist - the right hand side was *always* another
 vec4 so this feature literally just changed the constant on the right
 hand side.


 I harp on this feature since nearly every 2d game I encounter uses it,
 and JSIL has to do it in software. If not for this one feature it
 would be very easy to make the vast majority of ported titles Just
 Work against canvas, which makes them more likely to run correctly on
 mobile.


Maybe it would be best to bring this up as a separate topic on this mailing
list. (just copy/paste most of your message)


 On Fri, Aug 8, 2014 at 5:28 PM, Rik Cabanier caban...@gmail.com wrote:
 
 
 
  On Thu, Aug 7, 2014 at 7:11 PM, Katelyn Gadd k...@luminance.org wrote:
 
  Sorry, in this context rgba multiplication refers to per-channel
  multipliers (instead of only one multiplier for the alpha channel), so
  that you can color tint images when drawing them. As mentioned, it's
  used for fades, drawing colored text, and similar effects.
 
 
  I see. Any reason that this couldn't be done with a 'multiply' blend
 mode?
 
 
  Premultiplication is a different subject, sorry if I confused you with
  the similar language. There are past discussions about both in the
  list archives.
 



Re: [whatwg] [2D Canvas] Proposal: batch variants of drawImage

2014-08-07 Thread Rik Cabanier
On Mon, Aug 4, 2014 at 4:35 PM, Katelyn Gadd k...@luminance.org wrote:

 Many, many uses of drawImage involve transform and/or other state
 changes per-blit (composite mode, global alpha).

 I think some of those state changes could be viably batched for most
 games (composite mode) but others absolutely cannot (global alpha,
 transform). I see that you handle transform with
 source-rectangle-and-transform (nice!) but you do not currently handle
 the others. I'd suggest that this needs to at least handle
 globalAlpha.

 Replacing the overloading with individual named methods is something
 I'm also in favor of. I think it would be ideal if the format-enum
 argument were not there so that it's easier to feature-detect what
 formats are available (for example, if globalAlpha data is added later
 instead of in the '1.0' version of this feature).


We can define the functions so they throw a TypeError if an unknown enum
value is passed. That way you can feature-detect future additions to the enum.
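
For example, a detection sketch, assuming a hypothetical
drawImageBatch(image, format, data) signature (all names illustrative):

  // Probe a format enum value; a TypeError means it's unknown.
  function supportsBatchFormat(ctx, format) {
    if (typeof ctx.drawImageBatch !== 'function') return false;
    var dummy = document.createElement('canvas'); // blank source image
    try {
      ctx.drawImageBatch(dummy, format, new Float32Array(0)); // draws nothing
      return true;
    } catch (e) {
      if (e instanceof TypeError) return false;
      throw e;
    }
  }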

What should we do about error detection in general? If we require the float
array to be well formed before drawing, we need an extra pass to make sure
the values are correct.
If we don't require it, we can skip that pass, but content could be
partially drawn to the canvas before the exception is thrown.


 I get the impression that ordering is implicit for this call - the
 batch's drawing operations occur in exact order. It might be
 worthwhile to have a way to indicate to the implementation that you
 don't care about order, so that it is free to rearrange the draw
 operations by image and reduce state changes. Doing that in userspace
 js is made difficult since you can't easily do efficient table lookup
 for images.
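
 (For what it's worth, an ES2015 Map can key on image objects directly,
 so the order-free bucketing itself is expressible in userspace; a
 sketch:)

   // Bucket draw records by source image to minimize state changes.
   // Only valid when the caller knows the draws don't overlap.
   function groupByImage(draws) {
     var buckets = new Map();
     draws.forEach(function (d) {
       var list = buckets.get(d.image);
       if (!list) buckets.set(d.image, list = []);
       list.push(d.params);
     });
     return buckets;
   }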

 if rgba multiplication were to make it into canvas2d sometime in the
 next decade, that would nicely replace globalAlpha as a per-draw
 value. This is an analogue to per-vertex colors in 3d graphics and is
 used in virtually every hardware-accelerated 2d game out there,
 whether to tint characters when drawing text, fade things in and out,
 or flash the screen various colors. That would be another reason to
 make feature detection easier.

 Would it be possible to sneak rgba multiplication in under the guise
 of this feature? ;) Without it, I'm forced to use WebGL and reduce
 compatibility just for something relatively trivial on the
 implementer's side. (I should note that from what I've heard, Direct2D
 actually makes this hard to implement.)


Is this the other proposal to control the format of the canvas buffer that
is passed to WebGL?


 On the bright side there's a workaround for RGBA multiplication based
 on generating per-channel bitmaps from the source bitmap (k, r/g/b),
 then blending them source-over/add/add/add. drawImageBatch would
 improve perf for the r/g/b part of it, so it's still an improvement.

 On Mon, Aug 4, 2014 at 3:39 PM, Robert O'Callahan rob...@ocallahan.org
 wrote:
  It looks reasonable to me.
 
  How do these calls interact with globalAlpha etc? You talk about
  decomposing them to individual drawImage calls; does that mean each image
  draw is treated as a separate composite operation?
 
  Currently you have to choose between using a single image or passing an
  array with one element per image-draw. It seems to me it would be more
  flexible to always pass an array but allow the parameters array to refer
 to
  an image by index. Did you consider that approach?
 
  Rob



Re: [whatwg] [2D Canvas] Proposal: batch variants of drawImage

2014-08-06 Thread Rik Cabanier
On Tue, Aug 5, 2014 at 10:04 AM, Brian Blakely anewpage.me...@gmail.com
wrote:

 On Tue, Aug 5, 2014 at 11:21 AM, Justin Novosad ju...@google.com wrote:

  On Tue, Aug 5, 2014 at 7:47 AM, Ashley Gullen ash...@scirra.com wrote:
 
   I am against this suggestion. If you are serious about performance then
   you should use WebGL and implement your own batching system, which is
  what
   every major 2D HTML5 game framework I'm aware of does already. Adding
   batching features to canvas2d has three disadvantages in my view:
  
   1. Major 2D engines already support WebGL, so even if this new feature
  was
   supported, in practice it would not be used.
   2. There is opportunity cost in speccing something that is unlikely to
 be
   used and already well-covered by another part of the web platform. We
  could
   be speccing something else more useful.
   3. canvas2d should not end up being specced closer and closer to WebGL:
   canvas2d should be kept as a high-level easy-to-use API even with
   performance cost, whereas WebGL is the low-level high-performance API.
   These are two different use cases and it's good to have two different
  APIs
   to cover them. If you want to keep improving canvas2d performance I
 would
   worry you will simply end up reinventing WebGL.
  
  
  These are good points. The only counter argument I have to that is that a
  fallback from WebGL to canvas2d is unfortunately necessary for a
  significant fraction of users. Even on web-browsers that do support
 WebGL,
  gl may be emulated in software, which can be detected by web apps and
  warrants falling back to canvas2d (approx. 20% of Chrome users, for
  example). I realize that there is currently a clear ease of use vs.
  performance dichotomy between 2d and webgl, and this proposal blurs that
  boundary. Nonetheless, there is developer-driven demand for this based
 on a
  real-world problem. Also, if 2D canvas had better performance
  characteristics, it would not be necessary for some game engines to have
  dual (2d/webgl) implementations.
 
  -Justin
 

 My take is similar to Ashley's, and I wonder how buffing up the toy API
 (2D) compensates for the fact that the performance API (GL) has
 compatibility problems, even on platforms that support it.  If the goal is
 to solve the latter, why not introduce more direct proposals?


Can you explain what you're asking for? Are you asking for a proposal that
fixes compatibility?


Re: [whatwg] [2D Canvas] Proposal: batch variants of drawImage

2014-08-06 Thread Rik Cabanier
On Tue, Aug 5, 2014 at 5:55 PM, Ashley Gullen ash...@scirra.com wrote:

 If your argument is that WebGL sometimes falls back to canvas2d, this
 generally only happens when the system has crappy drivers that are
 blacklisted for being insecure/unstable. The solution to this is to develop
 and distribute better drivers that are not blacklisted. This is already
 happening and making good progress - according to Mozilla's stats, Firefox
 users who get WebGL support has increased from 33% in 2011 to 85% in 2014 (
 http://people.mozilla.org/~bjacob/gfx_features_stats/).


As Justin stated, 20% of Chrome users currently fall back to canvas
2d.
This is a large chunk of the market.
Small developers that don't have the resources to develop for concurrent
WebGL and Canvas2D code paths will certainly code for just Canvas since
that will give them close to 100%.


 I feel it is likely
 to continue to approach ubiquitous WebGL support, making fallbacks
 unnecessary. This also solves the problem of having to have dual renderer
 implementations: only the WebGL renderer will be necessary, and this is far
 more compelling than a souped-up canvas2d, since WebGL can use shader
 effects, have advanced control over textures and co-ordinates, also do 3D,
 and so on. This cannot all be brought to canvas2d without simply
 reinventing WebGL. Further, crappy drivers can cause software-rendered
 canvas2d as well, which is likely so slow to begin with that batching will
 yield no meaningful performance improvement. Software-rendered WebGL is just
 another workaround to crappy drivers (or in rare cases systems without
 GPUs, but then who's going to be gunning for high performance there?) and
 there is still no guarantee falling back to canvas2d will be
 GPU-accelerated, especially since the system already has such poor drivers
 that the browser has blacklisted it for WebGL support.

 The real problem is that there is not 100% WebGL support everywhere, but
 with drivers improving and Apple and Microsoft on board I'm sure that will
 fix itself eventually. Please don't spec features to improve canvas2d
 performance in the mean time; I don't see it having any long-term utility
 for the web platform.


When do you envision that OpenGL drivers will be bug-free everywhere? History
is not on your side here...
I would much rather have something short term that can be implemented with
low effort and improves performance.

I don't really understand why you and Brian are so opposed to improving the
performance of canvas 2D.
There are a lot of people that use and like its API. WebGL on the other
hand, has a very steep learning curve and problems are not always obvious.

Following your logic, why work on new canvas or SVG features as they can
theoretically be emulated in WebGL?
Or now that we have asm.js, why even bother with new JavaScript features?


 On 5 August 2014 16:21, Justin Novosad ju...@google.com wrote:

  On Mon, Aug 4, 2014 at 6:39 PM, Robert O'Callahan rob...@ocallahan.org
  wrote:
 
   It looks reasonable to me.
  
   How do these calls interact with globalAlpha etc? You talk about
   decomposing them to individual drawImage calls; does that mean each
 image
   draw is treated as a separate composite operation?
  
 
  Composited separately is the intent. A possible internal optimization: the
  implementation could group non-overlapping draws and composite them
  together.
 
 
   Currently you have to choose between using a single image or passing an
   array with one element per image-draw. It seems to me it would be more
   flexible to always pass an array but allow the parameters array to
 refer
  to
   an image by index. Did you consider that approach?
  
 
  Had not thought of that. Good idea.
 
  On Mon, Aug 4, 2014 at 7:35 PM, Katelyn Gadd k...@luminance.org wrote:
 
   I'd suggest that this needs to at least handle
   globalAlpha.
  
 
  It would be trivial to add an additional format that includes alpha.
 
 
   Replacing the overloading with individual named methods is something
   I'm also in favor of.
 
 
  That's something I pondered and was not sure about. Eliminating the
  parameter format argument would be nice. Your feature-detection argument
 is
  a really good reason.
 
  
   I get the impression that ordering is implicit for this call - the
   batch's drawing operations occur in exact order. It might be
   worthwhile to have a way to indicate to the implementation that you
   don't care about order, so that it is free to rearrange the draw
   operations by image and reduce state changes. Doing that in userspace
   js is made difficult since you can't easily do efficient table lookup
   for images.
  
 
  I am not sure exposing that in the API is a good idea because it opens
 the
  door to undefined behavior. It could result in different implementations
  producing drastically different yet compliant results.
  Perhaps implementations could auto-detect draw operations that are
  commutative based on a quick 

Re: [whatwg] Proposal: navigator.cores

2014-07-02 Thread Rik Cabanier
On Wed, Jul 2, 2014 at 2:19 AM, Ryosuke Niwa rn...@apple.com wrote:

 On May 3, 2014, at 10:49 AM, Adam Barth w...@adambarth.com wrote:

  Over on blink-dev, we've been discussing [1] adding a property to
 navigator
  that reports the number of cores [2].  As far as I can tell, this
  functionality exists in every other platform (including iOS and Android).
  Some of the use cases for this feature have been discussed previously on
  this mailing list [3] and rejected in favor of a more complex system,
  perhaps similar to Grand Central Dispatch [4].  Others have raised
 concerns
  that exposing the number of cores could lead to increased fidelity of
  fingerprinting [5].
 
  My view is that the fingerprinting risks are minimal.  This information
 is
  already available to web sites that wish to spend a few seconds probing
  your machine [6].  Obviously, exposing this property makes that easier
 and
  more accurate, which is why it's useful for developers.
 
  IMHO, a more complex worker pool system would be valuable, but most
 systems
  that have such a worker pool system also report the number of hardware
  threads available.  Examples:
 
  C++:
  std::thread::hardware_concurrency();
 
  Win32:
  GetSystemInfo returns dwNumberOfProcessors
 
  POSIX:
  sysctl returns HW_AVAILCPU or HW_NCPU
 
  Java:
  Runtime.getRuntime().availableProcessors();
 
  Python:
  multiprocessing.cpu_count()
 
  In fact, the web was the only platform I could find that didn't make the
  number of cores available to developers.
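
  (The property under discussion was added to WebKit/Blink under the name
  navigator.hardwareConcurrency; a minimal pool-sizing sketch, with the
  worker script name hypothetical:)

    // Size a worker pool from the core count, with a fallback.
    var cores = navigator.hardwareConcurrency || 2;
    var workers = [];
    for (var i = 0; i < cores; i++) {
      workers.push(new Worker('task-worker.js')); // hypothetical script
    }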

 FWIW, this property has been added to WebKit [1] and Blink [2], although
 for WebKit that's not an indication of any browser actually shipping it.


Since there are now 2 implementations, it should be added to the spec
instead of just being a wiki.


Re: [whatwg] Proposal: navigator.cores

2014-07-02 Thread Rik Cabanier
On Wed, Jul 2, 2014 at 10:37 AM, Anne van Kesteren ann...@annevk.nl wrote:

 On Wed, Jul 2, 2014 at 8:58 AM, Rik Cabanier caban...@gmail.com wrote:
  Since there are now 2 implementations, it should be added to the spec
  instead of just being a wiki.

 That depends on whether other vendors are objecting.

 Looks like that is the case:

 https://groups.google.com/d/msg/mozilla.dev.platform/QnhfUVw9jCI/PEFuf5a_0YQJ


That thread concluded with a "let's see how this feature is going to be
used before we commit".
Blink and WebKit certainly are in favor.


Re: [whatwg] Proposal: navigator.cores

2014-07-02 Thread Rik Cabanier
On Wed, Jul 2, 2014 at 9:27 PM, Boris Zbarsky bzbar...@mit.edu wrote:

 On 7/2/14, 3:21 PM, Rik Cabanier wrote:

 facts = 2 implementations. I certainly didn't say anything else.


 You said, and I quote:


   That thread concluded with a "let's see how this feature is going to
   be used before we commit".


Ah, I see now that he responded to my second message. Yes, I was off
there.


 Anyway, 2 implementations is a necessary condition for a REC, not a
 sufficient one.


This is from the WHATWG site:
http://wiki.whatwg.org/wiki/FAQ#Is_there_a_process_for_adding_new_features_to_a_specification.3F

The WHATWG doesn't have a hard requirement for 2 implementations, but having
them certainly is an indication that this should be more than just a wiki.


[whatwg] Hit regions: exception when the region has no pixels

2014-07-02 Thread Rik Cabanier
The canvas spec [1] currently states:

If any of the following conditions are met, throw a NotSupportedError
exception and abort these steps:

...

The specified pixels has no pixels.


Since the specified pixels are the union of the clipping path and the
current path, it will be nearly impossible for an author to determine if a
hit region has no pixels.
Can't we relax this requirement and simply not set up a new hit region if
there are no pixels?

1:
http://www.whatwg.org/specs/web-apps/current-work/multipage/the-canvas-element.html#hit-region
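
For illustration, the author-side ceremony the current text forces, using
the hit regions API as specified (sketch):

  // Authors must catch NotSupportedError, since whether the region's
  // pixels end up empty is nearly impossible to predict.
  function addRegionSafely(ctx, x, y, w, h, id) {
    ctx.beginPath();
    ctx.rect(x, y, w, h); // may fall entirely outside the clip
    try {
      ctx.addHitRegion({ id: id });
    } catch (e) {
      if (e.name !== 'NotSupportedError') throw e;
      // No pixels: with the relaxation proposed above, the
      // implementation would simply not set up a region here.
    }
  }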


Re: [whatwg] High-density canvases

2014-06-24 Thread Rik Cabanier
On Mon, Jun 23, 2014 at 6:06 PM, Robert O'Callahan rob...@ocallahan.org
wrote:

 On Tue, Jun 24, 2014 at 12:27 PM, Robert O'Callahan rob...@ocallahan.org
 wrote:

 I'll do that now.


 Done.
 http://wiki.whatwg.org/wiki/CanvasRenderedPixelSize


The wiki states:

Add a new event renderedsizechange to HTMLCanvasElement. This event does
not bubble and is not cancelable. Whenever the value that would be returned
by renderedPixelWidth or renderedPixelHeight changes, queue a task to fire
renderedsizechange at the HTMLCanvasElement if there is not already a task
pending to fire such an event at that element.
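
Usage under that proposal might look like this sketch (attribute and
event names are from the wiki text; redraw() is a hypothetical
application repaint):

  canvas.addEventListener('renderedsizechange', function () {
    canvas.width = canvas.renderedPixelWidth;   // proposed attribute
    canvas.height = canvas.renderedPixelHeight; // proposed attribute
    redraw();
  });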


- If there's a transition or animation that affects the canvas element,
should it receive resize events at 60fps?
- Will CSS 3D transforms affect the rendered canvas size? If so, what would
the optimal resolution be if there's a rotation?
- What happens if the canvas element is not in the document? Will it just
return the pixel width/height?


Re: [whatwg] High-density canvases

2014-06-18 Thread Rik Cabanier
On Wed, Jun 18, 2014 at 8:30 AM, Justin Novosad ju...@google.com wrote:

 In the previous incarnation of High density canvases (i.e. getImageDataHD
 and friends), we worked under the assumption that it was okay to have a
 backing store with a pixel density that is higher than CSS pixel density.
 And I think that was perfectly acceptable.

 If I recall correctly, the feature failed because some websites were
 already using CSS hacks to boost their pixel density, which led to 16x
 (rather than 4x) memory consumption for canvas pixel buffers on devices
 with a 2:1 device to CSS pixel ratio.  I think that failure could have been
  avoided by making the feature smarter: dynamically adjusting the HD ratio
 applied to the canvas to prevent canvas pixel density from being boosted
 beyond the display's pixel density.

 I am currently trying an experimental approach where canvases that are
 drawn to, but never read from (no toDataURL or getImageData calls) have
 their contents stored as a command buffer, rather than a pixel buffer. This
 way, the contents can be painted at any resolution (à la SVG).  This
 approach also allows canvases to be rasterized asynchronously, it allows
 contents to change pixel density without redrawing in JS (in reaction to a
 page zoom, for example), and it can support arbitrarily large canvas sizes.
 In theory, it would be possible to inject this behavior without any changes
 to the spec, but some side effects may be hard to resolve and/or live with.
 Having an experimental implementation will help us discover and iron out
 the issues. In the end, it may have to ship as opt-in behavior, but that
 remains to be determined.

 My main point is, there is potentially significant progress in terms of HD
 canvas rendering that can be achieved without any changes to the spec
 (other than perhaps an opt-in flag). If it works out well without an
 explicit opt-in, legacy websites will benefit.


This should be an explicit opt-in, otherwise applications that call
'getImageData' will suddenly get a different rendition.
Also, they might *want* to see the pixels, in which case upscaling is
undesired.

To make this work, you might have to store an unbounded number of
commands, which is problematic.


 On Tue, Jun 17, 2014 at 11:06 PM, Mark Callow callow.m...@artspark.co.jp
 wrote:

 On 13/06/2014 12:42, Robert O'Callahan wrote:
  Here's an alternative proposal which I think is a bit simpler and more
  flexible:
  Expose two new DOM attributes on HTMLCanvasElement:
  readonly attribute long preferredWidth;
  readonly attribute long preferredHeight;
  These attributes are the UA's suggested canvas size for optimizing
 output
  quality. It's basically what Ian's proposal would have set as the
 automatic
  size. We would also add a preferredsizechange event when those
 attributes
  change.
 I like the functionality but these names really don't convey that
 functionality. The names you originally proposed over in Bug 1024493
 (https://bugzilla.mozilla.org/show_bug.cgi?id=1024493) at mozilla.org,
 renderedPixelWidth/Height, while not perfect, convey it much better.

 Regards

 -Mark






Re: [whatwg] Proposal: toDataURL “image/png” compression control

2014-06-02 Thread Rik Cabanier
On Mon, Jun 2, 2014 at 10:05 AM, Justin Novosad ju...@google.com wrote:

 On Sat, May 31, 2014 at 8:44 AM, Robert O'Callahan rob...@ocallahan.org
 wrote:

  On Sat, May 31, 2014 at 3:44 AM, Justin Novosad ju...@google.com
 wrote:
 
  My point is, we need a proper litmus test for the "just do it in script"
  argument because, let's be honest, a lot of new features being added to
  the Web platform could be scripted efficiently, and that does not
  necessarily make them bad features.
 
 
  Which ones?
 

 The examples I had in mind when I wrote that were Path2D


Crossing the JS boundary is still an issue so implementing this in pure JS
would be too slow.
Path2D is only there to minimize DOM calls.


 and HitRegions.


I agree that most of hit regions can be implemented using JS.
The reason for hit regions is a11y and people felt a feature that just does
accessibility, will end up unused or unimplemented.


Re: [whatwg] Proposal: toDataURL “image/png” compression control

2014-06-02 Thread Rik Cabanier
On Mon, Jun 2, 2014 at 10:16 AM, Justin Novosad ju...@google.com wrote:

 On Sat, May 31, 2014 at 1:46 PM, Glenn Maynard gl...@zewt.org wrote:

  On Fri, May 30, 2014 at 1:25 PM, Justin Novosad ju...@google.com
 wrote:
 
  I think this proposal falls short of enshrining.  The cost of adding this
  feature is minuscule.
 
 
  I don't think the cost is ever really miniscule.
 

 https://codereview.chromium.org/290893002


That's implementation cost to you :-)
Now we need to convince the other vendors. Do they want it, want more, want
it in a different way?
Then it needs to be documented. How can authors discover that this is
supported? How can it be polyfilled?


  True, you'd never want to use toDataURL with a compression operation
  that takes many seconds (or even tenths of a second) to complete, and data
  URLs don't make sense for large images in the first place.  It makes
 sense
  for toBlob(), though, and having the arguments to toBlob and toDataURL
 be
  different seems like gratuitous inconsistency.
 
 
  Yes, toBlob is async, but it can still be polyfilled.
 
 
  (I'm not sure how this replies to what I said--this feature makes more
  sense for toBlob than toDataURL, but I wouldn't add it to toBlob and not
  toDataURL.)
 

 What I meant is that I agree that adding the compression argument to toBlob
 answers the need for an async API (being synchronous was one of the
 criticisms of the original proposal, which only mentioned toDataURL).
  However, this does not address the other criticism that we should not add
 features to toDataURL (and by extension to toBlob) because the new
 functionality could implemented more or less efficiently in JS.


  --
  Glenn Maynard
 
 



Re: [whatwg] Proposal: toDataURL “image/png” compression control

2014-05-31 Thread Rik Cabanier
On Sat, May 31, 2014 at 10:46 AM, Glenn Maynard gl...@zewt.org wrote:

 On Fri, May 30, 2014 at 1:25 PM, Justin Novosad ju...@google.com wrote:

  I think this proposal falls short of enshrining.  The cost of adding this
  feature is minuscule.
 

 I don't think the cost is ever really miniscule.


 
 
  True, you'd never want to use toDataURL with a compression operation
 that
  takes many seconds (or even tenths of a second) to complete, and data
 URLs
  don't make sense for large images in the first place.  It makes sense
 for
  toBlob(), though, and having the arguments to toBlob and toDataURL be
  different seems like gratuitous inconsistency.
 
 
  Yes, toBlob is async, but it can still be polyfilled.
 

 (I'm not sure how this replies to what I said--this feature makes more
 sense for toBlob than toDataURL, but I wouldn't add it to toBlob and not
 toDataURL.)


 On Sat, May 31, 2014 at 7:44 AM, Robert O'Callahan rob...@ocallahan.org
 wrote:

  On Sat, May 31, 2014 at 3:44 AM, Justin Novosad ju...@google.com
 wrote:
 
  My point is, we need a proper litmus test for the "just do it in script"
  argument because, let's be honest, a lot of new features being added to
  the Web platform could be scripted efficiently, and that does not
  necessarily make them bad features.
 
 
  Which ones?
 

 The ones that are used so frequently that providing a standard API for them
 benefits everyone, by avoiding the fragmentation of everyone rolling their
 own.  For example, URL parsing and manipulation, and lots of DOM interfaces
 like element.closest(), element.hidden and element.classList.  (Cookies are
 another one that should be in this category; document.cookie isn't a sane
 API without a wrapper.)

 This isn't one of those, though.


roc was asking which NEW feature is being added that can be done in script.


Re: [whatwg] Proposal: toDataURL “image/png” compression control

2014-05-31 Thread Rik Cabanier
On Sat, May 31, 2014 at 4:06 PM, andreas@gmail.com wrote:

 Does SIMD support in JS change this equation?


Glenn is asking how much more compression there is to gain from this extra
parameter and how much extra processing it requires.

He's not asking how long it would take to do it in JavaScript. I would be
interested though :-)
PNG likely won't gain as much from SIMD as JPEG.


  On May 31, 2014, at 18:58, Glenn Maynard gl...@zewt.org wrote:
 
  On Sat, May 31, 2014 at 4:00 PM, Rik Cabanier caban...@gmail.com
 wrote:
 
  roc was asking which NEW feature is being added that can be done in
  script.
 
  He asked which new features have already been added that can be done
  efficiently in script.  Element.closest() was added less than a week ago.
 
  But again, image decoding *can't* be done efficiently in script:
  platform-independent code with performance competitive with native SIMD
  assembly is a thing of myth.  (People have been trying unsuccessfully to
 do
  that since day one of MMX, so it's irrelevant until the day it actually
  happens.)  Anyhow, I think I'll stop helping to derail this thread and
  return to the subject.
 
  Noel, if you're still around, I'd suggest fleshing out your suggestion by
  providing some real-world benchmarks that compare the PNG compression
 rates
  against the relative time it takes to compress.  If spending 10x the
  compression time gains you a 50% improvement in compression, that's a lot
  more compelling than if it only gains you 10%.  I don't know what the
  numbers are myself.
 
  --
  Glenn Maynard



Re: [whatwg] Proposal: toDataURL “image/png” compression control

2014-05-30 Thread Rik Cabanier
On Fri, May 30, 2014 at 8:44 AM, Justin Novosad ju...@google.com wrote:

 Backtracking here.

 The "just do it in script" argument saddens me quite a bit. :-(

 I don't agree that it is okay to be in a state where web apps have to
 depend on script libraries that duplicate the functionality of existing Web
 APIs. I mean, we put a lot of effort into avoiding introducing
 non-orthogonal APIs in order to keep the platform lean. In that sense it is
 hypocritical to keep web APIs in a state that forces web developers to use
 scripts that are non-orthogonal to web APIs.  The browser has a png
 encoder, and it is exposed in the API.  So why should web developers be
 forced provide their own scripted codec implementation?!

 I understand that we should not add features to the Web platform that can
 be implemented efficiently in client-side code using existing APIs. But
 where do we draw the line? An extreme interpretation of that argument would
 be to stop adding any new features in CanvasRenderingContext2D because
 almost anything can be polyfilled on top of putImageData/getImageData with
 an efficient asm.js (or something else) implementation.  In fact, why do we
 continue to implement any rendering features? Let's stop adding features to
 DOM and CSS, because we could just have JS libraries that dump pixels into
 canvases! Pwshh (mind blown)

 My point is, we need a proper litmus test for the "just do it in script"
 argument because, let's be honest, a lot of new features being added to
 the Web platform could be scripted efficiently, and that does not
 necessarily make them bad features.


Yes, we need to weigh the cost of implementing new features natively
against the cost of doing them in script. If it's a feature that is not
often requested and it can be done almost as efficiently in script (and
asynchronously!), I believe it should not be added to the platform.

When canvas was created, JS interpreters were slow so the decision to do it
natively was clear; that decision still makes sense today.
However, if in the future someone writes a complete canvas implementation
on top of WebGL and it is just as fast, memory efficient and reliable, we
should just freeze the current spec and tell people to use that library.


 Also, there are plenty of browser/OS/HW combinations for which it is
 unreasonable to expect a scripted implementation of a codec to rival the
 performance of a native implementation.  For example, browsers are not
 required to support asm.js (which is kind of the point of it). More
 generally speaking, asm.js or any other script performance boosting
 technology, may not support the latest processing technology hotness that
 may be used in browser implementations (SIMD instructions that aren't
 mapped by the script compiler, CUDA, ASICs, PPUs, who knows...)


Do you know of any browser that is not interested in making its JavaScript
interpreter faster and compatible with asm.js?
Note that we're talking about a new feature here so the argument that
asm.js is too slow in old browsers doesn't count :-)


  On Thu, May 29, 2014 at 8:54 PM, Glenn Maynard gl...@zewt.org wrote:

  On Thu, May 29, 2014 at 5:34 PM, Nils Dagsson Moskopp 
  n...@dieweltistgarnichtso.net wrote:
 
and time it takes to compress.
  
   What benefit does it give then if the result is the same perceptually?
  
 
  Time it takes to compress.  There's a big difference between waiting one
  second for a quick save and 60 seconds for a high-compression final
 export.
 
 
  On Thu, May 29, 2014 at 7:31 PM, Kornel Lesiński kor...@geekhood.net
  wrote:
 
   I don't think it's a no-brainer. There are several ways it could be
   interpreted:
  
 
  The API is a no-brainer.  That doesn't mean it should be done carelessly.
   That said, how it's implemented is an implementation detail, just like
 the
  JPEG quality parameter, though it should probably be required to never
 use
  lossy compression (strictly speaking this may not actually be required
  today...).
 
  FYI, I don't plan to spend much time arguing for this feature.  My main
  issue is with the "just do it in script" argument.  It would probably
 help
  for people more strongly interested in this to show a comparison of
  resulting file sizes and the relative amount of time it takes to compress
  them.
 
  --
  Glenn Maynard
 



Re: [whatwg] Proposal: toDataURL “image/png” compression control

2014-05-29 Thread Rik Cabanier
On Wed, May 28, 2014 at 10:36 PM, Noel Gordon noel.gor...@gmail.com wrote:

 canvas.toDataURL supports an optional quality argument for the
 “image/jpeg” mime type to control image compression. Developers have no
 control over “image/png” compression.

 “image/png” is a lossless image compression format and the proposal is to
 allow developers some control over the compression process. For example, a
 developer might request maximum compression once their artwork is complete
 to minimize the encoded image size for transmission or storage. Encoding
 speed might be more important while creating the work, and less compression
 (faster encoding) could be requested in that case.

 An optional toDataURL parameter on [0.0 ... 1.0], similar to the optional
 quality argument used for image/jpeg, could be defined for “image/png” to
 control compression:

   canvas.toDataURL("image/png", [compression-control-value]);

 The default value, and how the browser controls the image encoder to gain
 more compression with increasing values, would be internal implementation
 details of the browser.
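
 Usage would mirror the existing JPEG quality argument; for example (the
 meaning of the second argument for "image/png" is the proposed part):

   var draft = canvas.toDataURL('image/png', 0.1);       // fast save
   var finalExport = canvas.toDataURL('image/png', 1.0); // max compression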


This has been requested before. ie
http://lists.whatwg.org/pipermail/help-whatwg.org/2013-May/001209.html
The conclusion was that this can be accomplished using JavaScript. There
are JS libraries that can compress images and performance is very good
these days.

If you're worried about blocking the main thread, you can use workers to do
offline processing.
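
A sketch of that worker hand-off (the encoder itself is left out; any
pure-JS PNG encoder could sit behind the hypothetical png-worker.js):

  // Main thread: ship raw pixels to a worker so encoding doesn't block.
  var imageData = ctx.getImageData(0, 0, canvas.width, canvas.height);
  var worker = new Worker('png-worker.js');
  worker.onmessage = function (e) {
    // e.data: the encoded PNG bytes, e.g. an ArrayBuffer
  };
  worker.postMessage(
    { width: imageData.width, height: imageData.height,
      pixels: imageData.data.buffer },
    [imageData.data.buffer] // transfer, don't copy
  );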


Re: [whatwg] Proposal: toDataURL “image/png” compression control

2014-05-29 Thread Rik Cabanier
On Thu, May 29, 2014 at 7:45 AM, Justin Novosad ju...@google.com wrote:

 On Thu, May 29, 2014 at 9:59 AM, Glenn Maynard gl...@zewt.org wrote:

 On Thu, May 29, 2014 at 1:32 AM, Rik Cabanier caban...@gmail.com wrote:

  This has been requested before. ie
 
 http://lists.whatwg.org/pipermail/help-whatwg.org/2013-May/001209.html
  The conclusion was that this can be accomplished using JavaScript. There
  are JS libraries that can compress images and performance is very good
  these days.
 

 This is a nonsensical conclusion.  People shouldn't have to pull in a PNG
 compressor and deflate code when a PNG compression API already exists on
 the platform.  This is an argument against adding toDataURL at all, which
 is a decision that's already been made.

 +1
 I would add that the fact that such libraries even exist despite the fact
 that the platform provides a competing API proves that the API is not what
 it should be.

 Also, an encoder written in JavaScript cannot produce color-managed
 results because we do not have any APIs that expose color profiles. I am
 guessing that png encoders written in JS probably assume that data returned
 by getImageData is in sRGB, which is often not the case.  toDataURL, on the
 other hand, has the possibility of encoding into the png, a color profile
 that expresses the canvas backing store's color space. I know current
 implementations of toDataURL don't do that, but we could and should.


I'm not sure if we want to bake the device's color profile into the
output bitmap by default, because on re-import it will then go through color
management and its pixels will look different from the unmanaged canvas
ones.


Re: [whatwg] Proposal: toDataURL “image/png” compression control

2014-05-29 Thread Rik Cabanier
On Thu, May 29, 2014 at 6:59 AM, Glenn Maynard gl...@zewt.org wrote:

 On Thu, May 29, 2014 at 1:32 AM, Rik Cabanier caban...@gmail.com wrote:

 This has been requested before. ie

 http://lists.whatwg.org/pipermail/help-whatwg.org/2013-May/001209.html
 The conclusion was that this can be accomplished using JavaScript. There
 are JS libraries that can compress images and performance is very good
 these days.


 This is a nonsensical conclusion.  People shouldn't have to pull in a PNG
 compressor and deflate code when a PNG compression API already exists on
 the platform.  This is an argument against adding toDataURL at all, which
 is a decision that's already been made.


If performance is good, why would this not be acceptable?
It seems that this would be a fragmented solution as file formats and
features would be added at different stages to browser engines. Would there
be a way to feature test that the optional arguments are supported?


Re: [whatwg] Proposal: toDataURL “image/png” compression control

2014-05-29 Thread Rik Cabanier
On Thu, May 29, 2014 at 8:50 AM, Justin Novosad ju...@google.com wrote:




 On Thu, May 29, 2014 at 11:21 AM, Rik Cabanier caban...@gmail.com wrote:




 On Thu, May 29, 2014 at 7:45 AM, Justin Novosad ju...@google.com wrote:

 On Thu, May 29, 2014 at 9:59 AM, Glenn Maynard gl...@zewt.org wrote:

 On Thu, May 29, 2014 at 1:32 AM, Rik Cabanier caban...@gmail.com
 wrote:

  This has been requested before. ie
 
 http://lists.whatwg.org/pipermail/help-whatwg.org/2013-May/001209.html
  The conclusion was that this can be accomplished using JavaScript.
 There
  are JS libraries that can compress images and performance is very good
  these days.
 

 This is a nonsensical conclusion.  People shouldn't have to pull in a
 PNG
 compressor and deflate code when a PNG compression API already exists on
 the platform.  This is an argument against adding toDataURL at all,
 which
 is a decision that's already been made.

 +1
 I would add that the fact that such libraries even exist despite the
 fact that the platform provides a competing API proves that the API is not
 what it should be.

 Also, an encoder written in JavaScript cannot produce color-managed
 results because we do not have any APIs that expose color profiles. I am
 guessing that png encoders written in JS probably assume that data returned
 by getImageData is in sRGB, which is often not the case.  toDataURL, on the
 other hand, has the possibility of encoding into the png, a color profile
 that expresses the canvas backing store's color space. I know current
 implementations of toDataURL don't do that, but we could and should.


  I'm not sure if we want to bake the device's color profile into the
  output bitmap by default, because on re-import it will then go through color
 management and its pixels will look different from the unmanaged canvas
 ones.


  I think you meant "encode" rather than "bake in" in that above sentence.
 Correct?  Currently, the non-color managed output of toDataURL has the
 display profile baked in.

 Take the following code:

 var image = new Image();
 image.onload = function() { canvas.getContext('2d').drawImage(image, 0, 0); };
 image.src = canvas.toDataURL('image/png');

 Under a non color managed implementation, the above code will not modify
 the content of the canvas in any way because there are no color space
 conversions since the png is not color managed... All is good.  If
 toDataURL encoded a color profile, the behavior would remain unchanged
 because the color correction applied during the image decode would do
 nothing (converting to and from the same color space). Again, all is good.


That's right.
The values of pixels on the canvas are the same on every machine, but we
have many different types of monitors. The PNGs that are generated should
all be identical pixel-wise but their attached profiles might be different.


 However, if the data URL was to be sent over the network to be decoded on
 a different machine, then you are screwed with a non-color managed png,
  because the sender's display's color profile is baked into the image but
  there is no color profile metadata to allow the receiver to bring the
 image into a known color space.


You are screwed either way :-)
I think authors are going to be surprised that pixels will end up
different. (Imagine taking screenshots of achievements in a game that are
put in an online gallery)
If you put the profile in and it is different from sRGB, the png will look
different from the original canvas because you will now go through an
intermediate sRGB space, which will warp the color range.

As an example, I did a toDataURL of this codepen example:
http://codepen.io/Boshnik/pen/vFbgw
I then opened it up in Photoshop, attached my monitor profile and wrote a
small script that does a difference on them:
http://cabanier.github.io/BlendExamples/images.htm

If you run Windows, you will see that there's content in the canvas output.
For some reason, Mac doesn't do any color conversion on any browser.
Even on my own system, there's a difference because of the sRGB conversion:
http://cabanier.github.io/BlendExamples/screenshot.png


Re: [whatwg] Proposal: toDataURL “image/png” compression control

2014-05-29 Thread Rik Cabanier
On Thu, May 29, 2014 at 12:17 PM, Glenn Maynard gl...@zewt.org wrote:

 On Thu, May 29, 2014 at 10:29 AM, Rik Cabanier caban...@gmail.com wrote:

 If performance is good, why would this not be acceptable?


  I don't know why we'd provide an API to compress PNGs, then tell people
 to use a script reimplementation if they want to set a common option.

 As far as performance, I'm not sure about PNG, but there's no way that a
 JS compressor would compete with native for JPEG.  Assembly (MMX, SSE)
 optimization gives a significant performance improvement over C, so I doubt
 JS will ever be in the running.  (
 http://www.libjpeg-turbo.org/About/Performance)


MMX/SSE-style optimization is being addressed by asm.js.
We're also just dealing with screenshots here. I doubt people are going to
do toDataURL at 60fps.




 It seems that this would be a fragmented solution as file formats and
 features would be added at different stages to browser engines. Would there
 be a way to feature test that the optional arguments are supported?


 No more than any other new feature.  I don't know if feature testing for
 dictionary arguments has been solved yet (it's come up before), but if not
 that's something that needs to be figured out in general.



Re: [whatwg] Proposal: toDataURL “image/png” compression control

2014-05-29 Thread Rik Cabanier
On Thu, May 29, 2014 at 1:33 PM, Rik Cabanier caban...@gmail.com wrote:




 On Thu, May 29, 2014 at 12:17 PM, Glenn Maynard gl...@zewt.org wrote:

 On Thu, May 29, 2014 at 10:29 AM, Rik Cabanier caban...@gmail.com
 wrote:

 If performance is good, why would this not be acceptable?


  I don't know why we'd provide an API to compress PNGs, then tell people
 to use a script reimplementation if they want to set a common option.

 As far as performance, I'm not sure about PNG, but there's no way that a
 JS compressor would compete with native for JPEG.  Assembly (MMX, SSE)
 optimization gives a significant performance improvement over C, so I doubt
 JS will ever be in the running.  (
 http://www.libjpeg-turbo.org/About/Performance)


  MMX/SSE-style optimization is being addressed by asm.js.
 We're also just dealing with screenshots here. I doubt people are going to
 do toDataURL at 60fps.


Here's a link to an experiment:
http://multimedia.cx/eggs/playing-with-emscripten-and-asm-js/



  It seems that this would be a fragmented solution as file formats and
 features would be added at different stages to browser engines. Would there
 be a way to feature test that the optional arguments are supported?


 No more than any other new feature.  I don't know if feature testing for
 dictionary arguments has been solved yet (it's come up before), but if not
 that's something that needs to be figured out in general.






Re: [whatwg] Proposal: toDataURL “image/png” compression control

2014-05-29 Thread Rik Cabanier
On Thu, May 29, 2014 at 2:28 PM, Glenn Maynard gl...@zewt.org wrote:

 On Thu, May 29, 2014 at 4:21 PM, Boris Zbarsky bzbar...@mit.edu wrote:

  On 5/29/14, 5:13 PM, Glenn Maynard wrote:
 
  Assembly language is inherently incompatible with the Web.
 
 
  A SIMD API, however is not.  Under the hood, it can be implemented in
  terms of MMX, SSE, NEON, or just by forgetting about the SIMD bit and
  pretending like you have separate operations.  In particular, you could
  have a SIMD API that desugars to plain JS as the default implementation
 in
  browsers but that JITs can recognize and vectorize as they desire. This
  sort of API will happen, for sure.
 

 I doubt it, at least with performance competitive with native assembly.  We
 certainly shouldn't delay features while we hope for it.


You don't need to hope for it. The future is already here:
http://www.j15r.com/blog/2014/05/23/Box2d_2014_Update
asm.js will be fast on all modern browsers before this feature would ship.
As an author, I'd certainly prefer the most flexible solution that works
everywhere.


Re: [whatwg] Canvas and color colorspaces (was: WebGL and ImageBitmaps)

2014-05-22 Thread Rik Cabanier
Hi justin,

thanks for this explanation!


On Thu, May 22, 2014 at 12:21 PM, Justin Novosad ju...@google.com wrote:

 tl;dr: The color space of canvas backing stores is undefined, which causes
 problems for many web devs, but also has non-negligible advantages. So be
 careful what you wish for.

 I saw some confusion and questions needing answers in the WebGL and
 ImageBitmaps thread regarding color management. I will attempt to clarify
 to the best of my abilities. Though I am knowledgeable on the subject, I am
 not an absolute authority, so others are welcome to correct me if I am
 wrong about anything.

 Color management... To make a long story short, there are two types of
 color profiles : input profiles and output profiles for characterizing
 input devices (cameras, scanners) and output devices (displays, printers)
 respectively.
 Image files will usually encode their color information in a standard
 color space or in an input-device-dependent space. If colors are encoded
 in a color space that is different from the format's default, then a color
 profile or a color space identifier must be encoded into the image
 resource's metadata.

 To present color-managed image content on screen, the image needs to be
 converted from whatever color space the image was encoded in, into a
 standard connection space using the color profile or color space metadata
 from the image resource. Then the colors need to be converted from the
 profile connection space to the output space, which is provided by the
 OS/display driver. Depending on the OS and hardware configuration, the
 output space may be a standard color space (like sRGB), or a
 device-specific color profile.

 Currently, many color-managed software applications rely on the codec to
 take care of the entire color-management process for image and video
 content, meaning that the decoded image data is in output-referred color
 space (i.e. the display's profile was applied).  There are practical
 reasons for this, the most important ones being color fidelity and memory
 consumption.  Let me explain. The profile connection space is typically CIE
 XYZ or CIE L*a*b. I won't get into the technical details of how these work
 except to say that they are device independent and allow for an accurate
 representation of the whole spectrum of human-visible colors. This makes it
 possible to map colors from a wide gamut camera to a wide gamut display
 with high color fidelity for all the colors that are located in the
 intersection of the color gamuts of both the input and output devices. If
 we were forced to convert the image to an intermediate sRGB representation,
 the colors in the image would be clamped to the sRGB gamut (which is
 narrower than the gamuts of many devices). Currently, most browsers avoid
 doing that for <img>, and therefore provide (more or less) optimal image
 and video color fidelity for users of wide gamut devices. Also, an
 intermediate representation in 8-bit sRGB means loss of precision due to
 rounding errors, as opposed to the profile connection space which uses
 higher precision registers for intermediate color values to avoid precision
 issues caused by rounding.  To avoid perceptible precision issues in an
 intermediate sRGB representation, we'd have to increase the bit depth and
 therefore use more RAM for storing decoded image data.

 All of this is to say that there are good reasons for the current
 situation where we deal with decoded images that have the output device's
 color profile pre-applied: color fidelity and memory consumption.

 In the case of 2D canvas, the color space for the backing store is
 unspecified, and many implementations have chosen to use the output
 device's color space, which has many advantages:
 * images and videos are already decoded directly into that space
 * no color conversion is necessary when presenting the canvas on screen
 (good for performance)
 * there is no loss of precision due the use of a limited-precision
 intermediate color space.
 * the color gamut is not constrained by an intermediate color space (like
 sRGB).
 And disadvantages:
 * Compositing operations produce incorrect results because most of them
 (including source-over) are affected by the color space.
 * direct pixel manipulation using put/getImageData exposes data in a color
 space that is undefined, making it extremely challenging to perform many
 types of image processing and image generation tasks in a
 device-independent way.
 * The device-dependent behavior of a drawImage/getImageData round trip is
 a known fingerprinting vector.

 Right now, I am hearing a lot of complaints regarding the lack of a
 standardized color space for canvases, and in particular the impact this
 has on applications that try to do cool things with put/getImageData, or
 generate images procedurally.  I want to make sure everyone understands
 there is a trade-off to fixing this, so be careful what you wish for.

 I am especially concerned about the 

Re: [whatwg] WebGL and ImageBitmaps

2014-05-18 Thread Rik Cabanier
On Sun, May 18, 2014 at 2:15 AM, K. Gadd k...@luminance.org wrote:

 I'd expect that the error might not accumulate for most color values.
 Rounding would potentially kick in once you get the first loss of
 precision.


That doesn't make sense. If this is a shift because of color management, it
should happen for pretty much all values.
I changed my profile to generate wild color shifts and tried random color
values but don't see any changes in any browser.

Could this just be happening with images that have profiles?
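
For reference, the kind of test I'm running (sketch):

  // Fill a color, round-trip the pixels repeatedly, look for drift.
  ctx.fillStyle = 'rgb(200, 100, 50)';
  ctx.fillRect(0, 0, canvas.width, canvas.height);
  var before = ctx.getImageData(0, 0, 1, 1).data;
  var start = [before[0], before[1], before[2]];
  for (var i = 0; i < 100; i++) {
    ctx.putImageData(ctx.getImageData(0, 0, canvas.width, canvas.height), 0, 0);
  }
  var after = ctx.getImageData(0, 0, 1, 1).data;
  console.log(start, 'vs', [after[0], after[1], after[2]]);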


 I've only historically seen color shifts upon repeated
 rendering in scenarios where you're losing lots of precision, or
  losing energy (bad RGB-to-HSV conversions, for example) - you don't
 actually need a lot of precision to fix that as long as your
 coefficients are right.
 On Fri, May 16, 2014 at 8:41 PM, Rik Cabanier caban...@gmail.com wrote:
 
 
 
  On Fri, May 16, 2014 at 3:06 PM, Justin Novosad ju...@google.com
 wrote:
 
  On Fri, May 16, 2014 at 5:42 PM, Rik Cabanier caban...@gmail.com
 wrote:
 
 
  Is the Web page not composited in sRGB? If so, it seems the backing
 store
  should be sRGB too.
 
 
 
  The web page is not composited in sRGB. It is composited in the output
  device's color space, which is often sRGB or close to sRGB, but not
 always.
   A notable exception is pre-Snow Leopard Macs, which use a gamma 1.8
   transfer curve.
  By the way, sniffing the display color profile through getImageData is a
  known fingerprinting technique. This factor alone can be sufficient to
  fingerprint a user who has a calibrated monitor.
 
 
  I'm unable to reproduce what you're describing. So, if I fill with a
 color
  and repeatedly do a getImageData/putImageData, should I see color shifts?
 



Re: [whatwg] WebGL and ImageBitmaps

2014-05-18 Thread Rik Cabanier
On Sun, May 18, 2014 at 8:10 PM, K. Gadd k...@luminance.org wrote:

 The point I was trying to make there is that for many format
  conversions or encoding conversions (RGB-to-YUV, RGB-to-HSL), not all
 input values are degraded equally. The amount of error introduced
 depends on the inputs. There are going to be some values for which the
 conversion is more or less accurate - for example, in most cases I
 would expect black and white to convert without any error. As a
 result, you can't just pick a few random colors and fill a canvas with
 them and decide based on that whether or not error is being
 introduced. At a minimum, you should use a couple test pattern bitmaps
 and do a comparison of the result. Keep in mind that all the
 discussions of profile conversion so far have been about bitmaps, not
 synthesized solid colors.

 I am, of course, not an expert - but I have observed this with
 repeated RGB-HSL conversions in the past (testing poor
 implementations that introduced accumulated error against relatively
 good implementations that did not accumulate very much error over
 time.)

 http://en.wikipedia.org/wiki/SRGB#Specification_of_the_transformation

 Note that as described there, clipping and rounding may occur and
  linear-to-gamma-corrected conversions may also occur.


But that was not what Justin said. He said they didn't know what profile was
used, so sometimes things are not color managed correctly.
It was also possible to infer what machine you were running based on the
detected profile.

Color management is not the same as the simple color transformations you
describe.


 We also can't
 know what color profile configuration your machine happens to be using
 when you run these tests, and what browser you're using. Both of those
 are important when saying that you can/can't reproduce the issue.


I think you have things backwards. YOU raised an issue with color
management and I'm trying to reproduce it but I'm failing because you
didn't give us enough information.
I tried:
- many different colors
- different profiles
- repeated put/getImageData calls on the same canvas
and can't reproduce.
When and on what platform/browser does this problem occur?


 On Sun, May 18, 2014 at 8:22 AM, Rik Cabanier caban...@gmail.com wrote:
 
 
 
  On Sun, May 18, 2014 at 2:15 AM, K. Gadd k...@luminance.org wrote:
 
  I'd expect that the error might not accumulate for most color values.
  Rounding would potentially kick in once you get the first loss of
  precision.
 
 
  That doesn't make sense. If this is a shift because of color management,
 it
  should happen for pretty much all values.
  I changed my profile to generate wild color shifts and tried random color
  values but don't see any changes in any browser.
 
  Could this just be happening with images that have profiles?
 
 
  I've only historically seen color shifts upon repeated
  rendering in scenarios where you're losing lots of precision, or
  losing energy (bad RGB - HSV conversions, for example) - you don't
  actually need a lot of precision to fix that as long as your
  coefficients are right.
  On Fri, May 16, 2014 at 8:41 PM, Rik Cabanier caban...@gmail.com
 wrote:
  
  
  
   On Fri, May 16, 2014 at 3:06 PM, Justin Novosad ju...@google.com
   wrote:
  
   On Fri, May 16, 2014 at 5:42 PM, Rik Cabanier caban...@gmail.com
   wrote:
  
  
   Is the Web page not composited in sRGB? If so, it seems the backing
   store
   should be sRGB too.
  
  
  
   The web page is not composited in sRGB. It is composited in the
 output
   device's color space, which is often sRGB or close to sRGB, but not
   always.
   A notable significant exception is pre Snow Leopard Macs that use a
   gamma
   1.8 transfer curve.
   By the way, sniffing the display color profile through getImageData
 is
   a
   known fingerprinting technique. This factor alone can be sufficient
 to
   fingerprint a user who has a calibrated monitor.
  
  
   I'm unable to reproduce what you're describing. So, if I fill with a
   color
   and repeatedly do a getImageData/putImageData, should I see color
   shifts?
  
 
 



Re: [whatwg] WebGL and ImageBitmaps

2014-05-16 Thread Rik Cabanier
On Fri, May 16, 2014 at 12:16 PM, Justin Novosad ju...@google.com wrote:




 On Fri, May 16, 2014 at 12:27 PM, Ian Hickson i...@hixie.ch wrote:

 On Fri, 16 May 2014, Justin Novosad wrote
  Blink/WebKit uses output-referred color space, which is bad for some
  inter-op cases, but good for performance. Calling drawImage will produce
  inter-operable behavior as far as visual output is concerned, but
  getImageData will yield values that have the display color profile baked
  in.

 I'm not quite sure what you mean here. If you mean that you can set
 'fillStyle' to a colour, fillRect the canvas, and the get the data for
 that pixel and find that it's a different colour, then that's
 non-conforming. If you mean that you can take a bitmap image without
 colour-profile information and draw it on a canvas and then getImageData()
 will return different results, then again, that's non-conforming.

 If you mean that drawing an image with a color profile will result in
 getImageData() returning different colour pixels on different systems,
 then that's allowed, because the colour space of the canvas (and the rest
 of the Web platform, which must be the same colour space) is not defined.

 Yes, the latter is what I mean. It is allowed, and it is causing headaches
 for many web developers. One possible solution would be to impose that
 ImageData be in sRGB color space. Unfortunately, that would imply loss of
 precision due to color space conversion rounding errors in a
 getImageData/putImageData round trip.


Can you explain why that is? Presumably, the image data is converted to
sRGB before you use it to composite its pixels.


 But that is probably a lesser evil.  I wonder if making this change would
 break anything on the web...


  Some web developers have worked around this by reverse-engineering the
  client-specific canvas to sRGB colorspace transform by running a test
  pattern through drawImage+getImageData.  It is horrible that we are
  making devs resort to this.

 I'm not really sure what this work around achieves. Can you elaborate?


 For example, if a web app wants to apply an image processing algorithm, it
 would use getImageData to retrieve the original pixel values, process the
 data, and display the results using putImageData.  The color space of the
 image data is undefined and it affects the behavior of the image processing
 algorithm.  In order to standardize the behavior of the image processing
 algorithm, the image data must be converted to a known color space.  The
 required color space transformation cannot be queried but it can be
 determined experimentally by taking an img that is in a known color space
 and contains known color values.  You draw that image to a canvas using
 drawImage, and read it back using getImageData.  The color values returned
 by getImageData and the known corresponding color values of the original
 image provide a set of color-space correspondences that can be used to feed
 a curve fitting algorithm in order to reverse-engineer the parameters of
 the color space conversion that maps the unknown ImageData color space to
 the known color space of the test image.


I agree. That is horrible!
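
For concreteness, the workaround amounts to something like the sketch below
(illustrative only; the test image, its known values and the curve-fitting
routine are hypothetical names):

var img = new Image();
img.onload = function () {
  var canvas = document.createElement('canvas');
  canvas.width = img.width;
  canvas.height = img.height;
  var ctx = canvas.getContext('2d');
  ctx.drawImage(img, 0, 0);
  var observed = ctx.getImageData(0, 0, img.width, img.height).data;
  // Pair each known sRGB channel value from the test pattern with the
  // value getImageData actually returned on this machine...
  var pairs = [];
  for (var i = 0; i < knownSRGBValues.length; i++) {
    pairs.push([knownSRGBValues[i], observed[i]]);
  }
  // ...and fit a curve that approximates the (unknown) ImageData ->
  // sRGB transform.
  var transform = fitTransferCurve(pairs); // hypothetical curve fitter
};
img.src = 'srgb-test-pattern.png'; // hypothetical test image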


 If you just want to do everything in sRGB, then putting all your images in
 sRGB but without giving color space information (or setting the option to
 'strip', if we add these createImageBitmap() options) would result in what
 you want, no?


 Only if the canvas backing store is forced to be in sRGB.


Is the Web page not composited in sRGB? If so, it seems the backing store
should be sRGB too.


 You'd have to manually (or on the server) convert images that were in
 other colour spaces, though.


  Adding a colorspace option to createImageBitmap is not enough IMHO. I
  think we need a more global color-management approach for canvas.

 If we need colour management, we need it for the Web as a whole, not just
 for canvas. So far, though, implementations have not been implementing the
 features that have been proposed, so...:

http://www.w3.org/TR/css3-color/#dropped


 I think CSS and HTML can survive well without color management
 features, as long as the color behavior is well defined, which seems to be
 the case except for canvas.
 ImageData is problematic because it stores data in an undefined color
 space.



  --
 Ian Hickson   U+1047E)\._.,--,'``.fL
 http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
 Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'





Re: [whatwg] WebGL and ImageBitmaps

2014-05-16 Thread Rik Cabanier
On Fri, May 16, 2014 at 3:06 PM, Justin Novosad ju...@google.com wrote:

 On Fri, May 16, 2014 at 5:42 PM, Rik Cabanier caban...@gmail.com wrote:


 Is the Web page not composited in sRGB? If so, it seems the backing store
 should be sRGB too.



 The web page is not composited in sRGB. It is composited in the output
 device's color space, which is often sRGB or close to sRGB, but not always.
 A notable exception is pre-Snow Leopard Macs, which use a gamma
 1.8 transfer curve.
 By the way, sniffing the display color profile through getImageData is a
 known fingerprinting technique. This factor alone can be sufficient to
 fingerprint a user who has a calibrated monitor.


I'm unable to reproduce what you're describing. So, if I fill with a color
and repeatedly do a getImageData/putImageData, should I see color shifts?


Re: [whatwg] WebGL and ImageBitmaps

2014-05-14 Thread Rik Cabanier
On Tue, May 13, 2014 at 6:59 PM, K. Gadd k...@luminance.org wrote:

 On Mon, May 12, 2014 at 4:44 PM, Rik Cabanier caban...@gmail.com wrote:
  Can you give an explicit example where browsers are having different
  behavior when using drawImage?

 I thought I was pretty clear about this... colorspace conversion and
 alpha conversion happen here depending on the user's display
 configuration, the color profile of the source image, and what browser
 you're using. I've observed differences between Firefox and Chrome
 here, along with different behavior on OS X (presumably due to their
 different implementation of color profiles).

 In this case 'different' means 'loading & drawing an image to a canvas
 gives different results via getImageData'.

 On Mon, May 12, 2014 at 4:44 PM, Rik Cabanier caban...@gmail.com wrote:
  Would this be solved with Greg's proposal for flags on ImageBitmap:
 
 http://lists.whatwg.org/htdig.cgi/whatwg-whatwg.org/2013-June/251541.html

 I believe so. I think I was on record when he first posted that I
 consider the alpha and colorspace flags he described as adequate.
 FlipY is considerably less important to me, but I can see how people
 might want it (honestly, reversing the order of scanlines is a very
 cheap operation; you can do it in the sampling stage of your shader,
 and actually *have* to in OpenGL because of their coordinate system
 when you're doing render-to-texture.)

 On Mon, May 12, 2014 at 4:44 PM, Rik Cabanier caban...@gmail.com wrote:
  Very specifically here, by 'known color space' i just mean that the
  color space of the image is exposed to the end user. I don't think we
  can possibly pick a standard color space to always use; the options
  are 'this machine's current color space' and 'the color space of the
  input bitmap'. In many cases the color space of the input bitmap is
  effectively 'no color space', and game developers feed the raw rgba to
  the GPU. It's important to support that use case without degrading the
  image data.
 
 
  Is that not the case today?

 It is very explicitly not the case, which is why we are discussing it.
 It is not currently possible to do lossless manipulation of PNG images
 in a web browser using canvas. The issues I described where you get
 different results from getImageData are a part of that.

 On Mon, May 12, 2014 at 4:44 PM, Rik Cabanier caban...@gmail.com wrote:
  Safari never created a temporary image and I recently updated Firefox so
 it
  matches Safari.
  Both Safari, IE and Firefox will now sample outside of the drawImage
 region.
  Chrome does not but they will fix that at some point.

 This is incorrect. A quick Google search for 'webkit drawimage source
 rectangle temporary' reveals such, in a post to this list.

 http://lists.whatwg.org/pipermail/whatwg-whatwg.org/2012-December/080583.html
 My statement to this effect was based on my (imperfect) memory of that
 post. 'CGImage' (to me) says Safari since it's an Apple API, and the
 post mentions Safari.


I made a codepen that showed the issue: http://codepen.io/adobe/pen/jIzbv
Firefox was not matching the behavior on Mac because it created an
intermediate image. I fixed that in
https://bugzilla.mozilla.org/show_bug.cgi?id=987292

I agree that the code you linked to exists in WebKit but they add padding
so it samples outside the source again.


Re: [whatwg] WebGL and ImageBitmaps

2014-05-14 Thread Rik Cabanier
On Wed, May 14, 2014 at 7:45 PM, Glenn Maynard gl...@zewt.org wrote:

 On Wed, May 14, 2014 at 6:27 PM, Glenn Maynard gl...@zewt.org wrote:

  That's only an issue when sampling without premultiplication, right?
 
  I had to refresh my memory on this:
 
  https://zewt.org/~glenn/test-premultiplied-scaling/
 
  The first image is using WebGL to blit unpremultiplied.  The second is
  WebGL blitting premultiplied.  The last is 2d canvas.  (We're talking
 about
  canvas here, of course, but WebGL makes it easier to test the different
  behavior.)  This blits a red rectangle surrounded by transparent space on
  top of a red canvas.  The black square is there so I can tell that it's
  actually drawing something.
 
  The first one gives a seam around the transparent area, as the white
  pixels (which are completely transparent in the image) are sampled into
 the
  visible part.  I think this is the problem we're talking about.  The
 second
  gives no seam, and the Canvas one gives no seam, indicating that it's a
  premultiplied blit.  I don't know if that's specified, but the behavior
 is
  the same in Chrome and FF.
 

 It looks right on red, but if the background is green you can still see the
 post-premultiplied black being pulled in.  It's really just GL_CLAMP_TO_EDGE
 that you want, repeating the outer edge.


 On Wed, May 14, 2014 at 9:21 PM, K. Gadd k...@luminance.org wrote:

  The reason one pixel isn't sufficient is that if the minification
  ratio is below 50% (say, 33%), sampling algorithms other than
  non-mipmapped-bilinear will begin sampling more than 4 pixels (or one
  quad, in gpu shading terminology), so you now need enough transparent
  pixels around all your textures to ensure that sampling never crosses
  the boundaries into another image.
 

 I'm well aware of the issues of sampling sprite sheets; I've dealt with the
 issue at length in the past.  That's unrelated to my last mail, however,
 which was about premultiplication (which is something I've not used as
 much).


  I agree with this, but I'm not going to assume it's actually possible
  for a canvas implementation to work this way. I assume that color
  profile conversions are non-trivial (in fact, I'm nearly certain they
  are non-trivial), so doing the conversion every time you render a
  canvas to the compositor is probably expensive, especially if your GPU
  isn't powerful enough to do it in a shader (mobile devices, perhaps) -
  so I expect that most implementations do the conversion once at load
  time, to prepare an image for rendering. Until it became possible to
  retrieve image pixels with getImageData, this was a good, safe
  optimization.
 

 What I meant is that I think color correction simply shouldn't apply to
 canvas at all.  That may not be ideal, but I'm not sure of anything else
 that won't cause severe interop issues.


Maybe the color correction described here is happening:
https://hsivonen.fi/png-gamma/

If so, the image that's drawn on the canvas should match what the browser
is showing on screen.
Without an example, it's just speculation of course.


 To be clear, colorspace conversion--converting from sRGB to RGB--isn't a
 problem, other than probably needing to be specified more clearly and being
 put behind an option somewhere, so you can avoid a lossy colorspace
 conversion.  The problem is color correction that takes the user's monitor
 configuration into account, since the user's monitor settings shouldn't be
 visible to script.  I don't know enough about color correction to know if
 this can be done efficiently in an interoperable way, so the data scripts
 see isn't affected by the user's configuration.


Yes, color correction from sRGB to your monitor should not affect drawing
on canvas. (What if you had multiple monitors :-))


Re: [whatwg] canvas feedback

2014-05-14 Thread Rik Cabanier
On Wed, May 14, 2014 at 6:30 AM, Jürg Lehni li...@scratchdisk.com wrote:

 On Apr 30, 2014, at 00:27 , Ian Hickson i...@hixie.ch wrote:

  On Mon, 7 Apr 2014, Jürg Lehni wrote:
 
  Well this particular case, yes. But in the same way we allow a group of
  items to have an opacity applied to it in Paper.js, and expect it to behave
  the same way as in SVG: The group should appear as if its children were
  first rendered at 100% alpha and then blitted over with the desired
  transparency.
 
  Layers would offer exactly this flexibility, and having them around
  would make a whole lot of sense, because currently the above can only be
  achieved by drawing into a separate canvas and blitting the result over.
  The performance of this is really low on all browsers, a true bottleneck
  in our library currently.
 
  It's not clear to me why it would be faster if implemented as layers.
  Wouldn't the solution here be for browsers to make canvas-on-canvas
  drawing faster? I mean, fundamentally, they're the same feature.

 I was perhaps wrongly assuming that including layering in the API would
 allow the browser vendors to better optimize this use case.


No, you are correct; having layers will make drawing more efficient as you
can make certain assumptions and you don't have to create/recycle
intermediate canvases.


 The problem with the current solution is that drawing a canvas into
 another canvas is inexplicably slow across all browsers. The only reason I
 can imagine for this is that the pixels are copied back and forth between
 the GPU and the main memory, and perhaps converted along the way, while
 they could simply stay on the GPU as they are only used there. But reality
 is probably more complicated than that.


I don't know why this would be. Do you have data on this?
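
A micro-benchmark along these lines would be a useful data point
(illustrative sketch; the getImageData call is only there to flush any
deferred drawing before the clock stops):

var src = document.createElement('canvas');
src.width = src.height = 1024;
var sctx = src.getContext('2d');
sctx.fillStyle = 'green';
sctx.fillRect(0, 0, 1024, 1024);
var dst = document.createElement('canvas');
dst.width = dst.height = 1024;
var dctx = dst.getContext('2d');
var t0 = performance.now();
for (var i = 0; i < 100; i++) {
  dctx.drawImage(src, 0, 0);
}
dctx.getImageData(0, 0, 1, 1); // force queued drawing to complete
var t1 = performance.now();
console.log('100 canvas-to-canvas blits: ' + (t1 - t0).toFixed(1) + ' ms');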


 So if the proposed API addition would allow a better optimization then I'd
 be all for it. If not, then I am wondering how I can get the vendor's
 attention to improve this particular case. It really is very slow
 currently, to the point where it doesn't make sense to use it for any sort
 of animation technique.


I think we just need to find some time to start implementing it. The API is
simple and in the case of Core Graphics, it maps directly.


Re: [whatwg] canvas feedback

2014-05-14 Thread Rik Cabanier
On Wed, May 14, 2014 at 7:30 PM, K. Gadd k...@luminance.org wrote:

 Is it ever possible to make canvas-to-canvas blits consistently fast?
 It's my understanding that browsers still make
 intelligent/heuristic-based choices about which canvases to
 accelerate, if any, and that it depends on the size of the canvas,
 whether it's in the DOM, etc. I've had to report bugs related to this
 against firefox and chrome in the past, I'm sure more exist. There's
 also the scenario where you need to blit between Canvas2D canvases and
 WebGL canvases - the last time I tried this, a single blit could cost
 *hundreds* of milliseconds because of pipeline stalls and cpu-gpu
 transfers.


Chrome has made some optimizations recently in this area and will try to
keep everything on the GPU for transfers between canvas 2d and WebGL.
Are you still seeing issues there?
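
(For reference, these are the two blit directions under discussion; a
minimal sketch using standard APIs, with setup elided to the essentials:)

var glCanvas = document.createElement('canvas');
var gl = glCanvas.getContext('webgl') ||
         glCanvas.getContext('experimental-webgl');
var canvas2d = document.createElement('canvas');
var ctx2d = canvas2d.getContext('2d');

// WebGL canvas -> 2D canvas: drawImage accepts a WebGL-backed canvas.
ctx2d.drawImage(glCanvas, 0, 0);

// 2D canvas -> WebGL texture: texImage2D accepts a 2D canvas as source.
var tex = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, tex);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, canvas2d);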


 Canvas-to-canvas blits are a way to implement layering, but it seems
 like making it consistently fast via canvas-canvas blits is a much
 more difficult challenge than making sure that there are fast & cheap
 ways to layer separate canvases at a composition stage. The latter
 just requires that the browser have a good way to composite the
 canvases, the former requires that various scenarios with canvases
 living in CPU and GPU memory, deferred rendering queues, etc all get
 resolved efficiently in order to copy bits from one place to another.


Small canvases are usually not hardware accelerated. Do you have any data
that this is causing slowdowns?
Layering should also mitigate this since if the canvas is HW accelerated,
so should its layers.


 (In general, I think any solution that relies on using
 canvas-on-canvas drawing any time a single layer is invalidated is
 suspect. The browser already has a compositing engine for this that
 can efficiently update only modified subregions and knows how to cache
 reusable data; re-rendering the entire surface from JS on change is
 going to be a lot more expensive than that.


I don't think the canvas code is that smart. I think you're thinking about
drawing SVG and HTML.


 Don't some platforms
 actually have compositing/layers at the OS level, like CoreAnimation
 on iOS/OSX?)


Yes, but AFAIK they don't use this for Canvas.






Re: [whatwg] WebGL and ImageBitmaps

2014-05-12 Thread Rik Cabanier
On Mon, May 12, 2014 at 1:19 AM, K. Gadd k...@luminance.org wrote:

 Gosh, this thread is old. I'm going to try and compose a coherent
 response but at this point I've forgotten a lot of the context...

 On Fri, May 9, 2014 at 12:02 PM, Ian Hickson i...@hixie.ch wrote:
  On Thu, 18 Jul 2013, K. Gadd wrote:
 
  Ultimately the core here is that without control over colorspace
  conversion, any sort of deterministic image processing in HTML5 is off
  the table, and you have to write your own image decoders, encoders, and
  manipulation routines in JavaScript using raw typed arrays. Maybe that's
  how it has to be, but it would be cool to at least support basic
  variations of these use cases in Canvas since getImageData/putImageData
  already exist and are fairly well-specified (other than this problem,
  and some nits around source rectangles and alpha transparency).
 
  Given that the user's device could be a very low-power device, or one
 with
  a very small screen, but the user might still want to be manipulating
 very
  large images, it might be best to do the master manipulation on the
  server anyway.

 This request is not about efficient image manipulation (as you point
 out, this is best done on a high-powered machine) - without control over
 colorspace conversion any image processing is nondeterministic. There
 are games and apps out there that rely on getting the exact same
 pixels out of a given Image on all machines, and that's impossible
 right now due to differing behaviors. You see demoscene projects
 packing data into bitmaps (yuck), or games using images as the
 canonical representation of user-generated content. The latter, I
 think, is entirely defensible - maybe even desirable, since it lets
 end users interact with the game using photoshop or mspaint.
 Supporting these use cases in a cross-browser manner is impossible
 right now, yet they work in the desktop versions of these games.

 On Fri, May 9, 2014 at 12:02 PM, Ian Hickson i...@hixie.ch wrote:
  On Thu, 18 Jul 2013, K. Gadd wrote:
  Out of the features suggested previously in the thread, I would
  immediately be able to make use of control over colorspace conversion
  and an ability to opt into premultiplied alpha. Not getting
  premultiplied alpha, as is the case in virtually every canvas
  implementation I've tried, has visible negative consequences for image
  quality and also reduces the performance of some use cases where bitmap
  manipulation needs to happen, due to the fact that premultiplied alpha
  is the 'preferred' form for certain types of rendering and the math
  works out better. I think the upsides to getting premultiplication are
  the same here as they are in WebGL: faster uploads/downloads, better
  results, etc.
 
  Can you elaborate on exactly what this would look like in terms of the
 API
  implications? What changes to the spec did you have in mind?

 I don't remember what my exact intent here was, but I'll try to
 resynthesize it:
 The key here is to have a clear understanding of what data you get out
 of an ImageBitmap. It is *not* necessary for the end user to be able
 to specify it, as long as the spec dictates that all browsers provide
 the exact same format to end users.
 If we pick one format and lock to it, we want a format that discards
 as little source image data as possible (preferably *no* data is
 discarded) - which would mean the raw source image data, without any
 colorspace or alpha channel conversion applied.


Can you give an explicit example where browsers are having different
behavior when using drawImage?


 This allows all the procedural image manipulation cases described
 above, and makes it a very fast and straightforward path for loading
 images you plan to pass off to the GPU as a WebGL texture. There's a
 bit more on this below...


Would this be solved with Greg's proposal for flags on ImageBitmap:
http://lists.whatwg.org/htdig.cgi/whatwg-whatwg.org/2013-June/251541.html


 On Fri, May 9, 2014 at 12:02 PM, Ian Hickson i...@hixie.ch wrote:
  On Thu, 18 Jul 2013, K. Gadd wrote:
  To clearly state what would make ImageBitmap useful for the use cases I
  encounter and my end-users encounter:
  ImageBitmap should be a canonical representation of a 2D bitmap, with a
  known color space, known pixel format, known alpha representation
  (premultiplied/not premultiplied), and ready for immediate rendering or
  pixel data access. It's okay if it's immutable, and it's okay if
  constructing one from an img or a Blob takes time, as long as once I
 have
  an ImageBitmap I can use it to render and use it to extract pixel data
  without user configuration/hardware producing unpredictable results.
 
  This seems reasonable, but it's not really detailed enough for me to turn
  it into spec. What colour space? What exactly should we be doing to the
  alpha channel?

 Very specifically here, by 'known color space' i just mean that the
 color space of the image is exposed to the end user. I don't think 

Re: [whatwg] Proposal: navigator.cores

2014-05-08 Thread Rik Cabanier
FYI
From the WebKit side, people are leaning towards returning the logical CPU
count but limiting the maximum value to 8 [1].
This should cover the vast majority of systems and use cases for this
property and still not expose users that are on high-value devices.

1: https://bugs.webkit.org/show_bug.cgi?id=132588
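
(In other words, something like the clamp below; an illustrative sketch of
the idea, not WebKit's actual code:)

function reportedCoreCount(logicalCores) {
  // Report the logical core count, capped at 8.
  return Math.min(logicalCores, 8);
}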





Re: [whatwg] Proposal: navigator.cores

2014-05-08 Thread Rik Cabanier
On Thu, May 8, 2014 at 7:07 PM, Joe Gregorio jcgrego...@google.com wrote:

 Maybe we can also return their RAM, but limit it to a maximum of 640K,
 since no one will need more than that :-)

 I think in a few years the limit to 8 cores will look just as silly.


Once 16 is common, WebKit will be updated to 16.
Maybe by then we'll also have a task scheduling framework to go along.


 



Re: [whatwg] Proposal: navigator.cores

2014-05-06 Thread Rik Cabanier
On Tue, May 6, 2014 at 8:51 AM, Joe Gregorio jcgrego...@google.com wrote:

 On Tue, May 6, 2014 at 7:57 AM, João Eiras jo...@opera.com wrote:
 ...
 
  I guess everyone that is reading this thread understands the use cases
 well
  and agrees with them.
 
  The disagreement is what kind of API you need. Many people, rightly so,
 have
  stated that a core count gives little information that can be useful.
 
  It's better to have an API that determines the optimal number of parallel
  tasks that can run, because who knows what else runs in a different
 process
  (the webpage the worker is in, the browser UI, plugins, other webpages,
  iframes, etc) with what load. Renaming 'cores' to 'parallelTaskCount'
 would
  be a start.
 

 +1

 The solution proposed should actually be a solution to the problem as
 stated, which, from the abstract, reads:

The intended use for the API is to help developers make informed
 decisions regarding
the size of their worker threadpools to perform parallel algorithms.

 So the solution should be some information about the maximum number of
 parallel
 workers that a page can expect to run, which may have no relation to
 the number of
 cores, physical or virtual. The browser should be allowed to determine
 what that number
 is based on all the factors it has visibility to, such as load, cores,
 and policy.

 Returning the number is actually important, for example, physics
 engines for WebGL
 games, how you shard the work may depend on knowing how many parallel
 workers
 you should schedule.


It seems everyone is in agreement that this API should return the number of
useful parallel tasks.

So far, people have proposed several names:
- cores - this seems confusing since the returned number might be lower
- concurrency - there can be more concurrent tasks than there are logical
cores
- hardwareConcurrency
- parallelTaskCount

Leaving the question of fingerprinting aside for now, what name would
people prefer?
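
Whatever the final name, the intended use is pool sizing, roughly as in the
sketch below ('hardwareConcurrency' and 'worker.js' are placeholders):

// Size a worker pool from the proposed property, with a fallback guess
// for browsers that don't expose it.
var poolSize = navigator.hardwareConcurrency || 4;
var workers = [];
for (var i = 0; i < poolSize; i++) {
  workers.push(new Worker('worker.js'));
}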


Re: [whatwg] Proposal: navigator.cores

2014-05-06 Thread Rik Cabanier
On Tue, May 6, 2014 at 5:24 PM, Glenn Maynard gl...@zewt.org wrote:

 On Sun, May 4, 2014 at 4:49 PM, Adam Barth w...@adambarth.com wrote:

  You're right that Panopticlick doesn't bother to spend the few seconds it
  takes to estimate the number of cores because it already has sufficient
  information to fingerprint 99.1% of visitors:
 
  https://panopticlick.eff.org/browser-uniqueness.pdf
 

 It's pretty unpleasant to use a paper arguing that fingerprinting is a
 threat to online privacy as an argument that we should give up trying to
 prevent fingerprinting.


What do you mean?

For fingerprinting, the algorithm would not have to be precise. Instead, a
routine that returns 1 or 2 for cheap machines and more than 12 for
expensive machines would be enough.
The fact that this is so easily accomplished today, and that we have no
evidence it is happening, tells me that it is not that valuable.


 On Mon, May 5, 2014 at 10:20 PM, Ian Hickson i...@hixie.ch wrote:

  of Workers today, as bz pointed out earlier). Indeed, on a high-core
  machine as we should expect to start seeing widely in the coming years,
 it
  might make sense for the browser to randomly limit the number of cores on
  a per-origin/session basis, specifically to mitigate fingerprinting.
 

 This might make sense in browser modes like Chrome's incognito mode, but
 I think it would be overboard to do this in a regular browser window.  If
 I've paid for a CPU with 16 cores, I expect applications which are able to
 use them all to do so consistently, and not be randomly throttled to
 something less.


Exactly.


 On Tue, May 6, 2014 at 4:38 PM, Boris Zbarsky bzbar...@mit.edu wrote:

  On 5/6/14, 5:30 PM, Rik Cabanier wrote:
 
  Leaving the question of fingerprinting aside for now, what name would
  people prefer?
 
 
  mauve?
 
  Failing that, maxUsefulWorkers?

 It can be useful to start more workers than processors, when they're not
 CPU-bound.


Yes. The 'hardwareConcurrency' and 'parallelTaskCount' terms feel better
because they don't imply a maximum.


Re: [whatwg] Proposal: navigator.cores

2014-05-05 Thread Rik Cabanier
On Mon, May 5, 2014 at 11:10 AM, David Young dyo...@pobox.com wrote:

 On Sat, May 03, 2014 at 10:49:00AM -0700, Adam Barth wrote:
  Over on blink-dev, we've been discussing [1] adding a property to
 navigator
  that reports the number of cores [2].  As far as I can tell, this
  functionality exists in every other platform (including iOS and Android).
   Some of the use cases for this feature have been discussed previously on
  this mailing list [3] and rejected in favor of a more complex system,
  perhaps similar to Grand Central Dispatch [4].  Others have raised
 concerns
  that exposing the number of cores could lead to increased fidelity of
  fingerprinting [5].

 navigator.cores seems to invite developers to try to write web apps
 that make local decisions about the scheduling of work on cores when
 they're missing important essential knowledge about the global context:
 cores may not be equally fast, energy-efficient, or available;


Eli already pointed out that this is not a problem. Heterogeneous systems
still allow concurrency on the different cores; the faster ones will simply
finish their work faster.


 global
 scheduling decisions may be more sophisticated than, or in opposition to
 your local scheduling decisions.

 I think that a more complex system, such as Grand Central Dispatch, is
 probably more effective and less complicated than each web app trying
 independently, without any real information, to optimize its use of
 cores.


We are now 6 years from that proposal and nothing has happened since. (Is
there any data on how successful Grand Central is?)
This is not an optimal solution but at least it gives authors a way to make
a semi-informed decision. If this is not provided, they will either make a
poor guess or just optimize for the most popular platform.

As previously pointed out, a thread scheduling solution will need the optimal
number of concurrent tasks too.


Re: [whatwg] Proposal: navigator.cores

2014-05-05 Thread Rik Cabanier
On Mon, May 5, 2014 at 7:35 PM, Boris Zbarsky bzbar...@mit.edu wrote:

 On 5/5/14, 7:29 PM, Kenneth Russell wrote:

 There's no provision in the web worker
 specification for allocation of a web worker to fail gracefully, or
 for a worker to be suspended indefinitely.


 This is not actually true.  Nothing in the spec requires a UA to expose
 the full parallelism of the hardware to workers, so you can just start
 queueing them when you hit too many; that's not black-box distinguishable
 from limited hardware resources.


  Even if a worker had its
 priority designated as low, it would still need to be started.


 Sure, in the sense that the caller has to be able to postMessage to it.


  On 32-bit systems, at least, spawning too many workers will cause the
 user agent to run out of address space fairly quickly.


 You're assuming that each worker, as soon as it can accept messages, has a
 thread with its own address space dedicated to it.

 While this is one possible implementation strategy, it's not required by
 the spec (and e.g. Gecko does not use this implementation strategy).


  It would be great to design a new parallelism architecture for the
 web, but from a practical standpoint, no progress has been made in
 this area for a number of years, and web developers are hampered today
 by the absence of this information. I think it should be exposed to
 the platform.


 Again, note that currently nothing requires a UA to allow an arbitrary
 number of workers running at once.  Gecko will certainly limit the number
 of in-flight workers available to a particular origin, queueing up new ones
 until old ones terminate.  So navigator.cores on machines with a large
 number of cores may not actually be a useful measure, since the UA may not
 allow that many workers to be actively on the CPU at once for a given
 origin anyway.


This is why this API should return the number of actual parallel tasks that
are allowed by the browser.
There would be no point in this API returning a number larger than that.


Re: [whatwg] Proposal: navigator.cores

2014-05-04 Thread Rik Cabanier
On Sun, May 4, 2014 at 1:11 PM, Ian Hickson i...@hixie.ch wrote:

 On Sat, 3 May 2014, Adam Barth wrote:
 
  Over on blink-dev, we've been discussing [1] adding a property to
 navigator
  that reports the number of cores [2].
  [1]
 https://groups.google.com/a/chromium.org/forum/#!topic/blink-dev/B6pQClqfCp4
  [2] http://wiki.whatwg.org/wiki/NavigatorCores
  Some of the use cases for this feature have been discussed previously on
  this mailing list [3] and rejected in favor of a more complex system,
  perhaps similar to Grand Central Dispatch [4].
  [3]
 http://lists.whatwg.org/htdig.cgi/whatwg-whatwg.org/2009-November/024251.html
  [4] http://en.wikipedia.org/wiki/Grand_Central_Dispatch

 It's not clear what has changed since that discussion. Why are the
 concerns raised at that time no longer valid?


  As far as I can tell, this functionality exists in every other platform
  (including iOS and Android).

 This is true, but all those platforms have security mechanisms in place to
 mitigate the risk: you have to manually install the application, thus
 granting it either essentially full access to your machine (Win32),


Not quite true for Win32. An admin can install an application but a user
with very limited privileges can run it and call the
'GetLogicalProcessorInformation' API to get information about the number of
logical cores.
Microsoft did not consider this an API that needs additional security. OS X
is the same (and likely most other OSes).


 or you have to have it vetted by a third party (iOS), or you have to
 examine
 permissions that the application is requesting, and explicitly grant it
 the right to run on your machine.

 The Web's security model is radically different. On the Web, we assume
 that it is safe to run any random hostile code, and that that code cannot
 harm you or violate your privacy. There are flaws in the privacy
 protection (i.e. fingerprinting vectors) that browsers are slowly
 patching, but we have worked hard to avoid adding new fingerprinting
 vectors. We should continue to do so.


  Others have raised concerns that exposing the number of cores could lead
  to increased fidelity of fingerprinting [5].
 
  My view is that the fingerprinting risks are minimal.  This information
  is already available to web sites that wish to spend a few seconds
  probing your machine [6].  Obviously, exposing this property makes that
  easier and more accurate, which is why it's useful for developers.
  [5]
 https://groups.google.com/a/chromium.org/d/msg/blink-dev/B6pQClqfCp4/bfPhYPPQqwYJ
  [6] http://wg.oftn.org/projects/core-estimator/demo/

 The core estimator is wildly inaccurate. For example, it is highly
 sensitive to machine load. I don't think it's fair to say well, you can
 get this data with poor fidelity over a few seconds, therefore providing a
 precise number with zero milliseconds latency is no worse.


Are you saying it's better that people use an estimator polyfill? Authors
want to know this information and will use other means to get this
information. This will likely favor popular platforms.


  IMHO, a more complex worker pool system would be valuable, but most
  systems that have such a worker pool system also report the number of
  hardware threads available.

 They don't have to, though.


Yes, they have to, because even with worker pools an application wants to know
how it can best break up the problem.


  In fact, the web was the only platform I could find that didn't make the
  number of cores available to developers.

 The Web is unique in attempting to protect users' privacy in the face of
 hostile code without requiring installation or a trust-granting step.


I agree with Adam and fail to see what possible information of value could
leak. This is nothing like reading pixels from the screen or giving access
to a GPS device.
Browsers already directly and indirectly give the author access to its
capabilities. Having an optimal number of concurrent tasks should be basic
information.


Re: [whatwg] Proposal: navigator.cores

2014-05-04 Thread Rik Cabanier
On Sun, May 4, 2014 at 6:49 AM, Adam Barth w...@adambarth.com wrote:

 On Sun, May 4, 2014 at 12:13 AM, Tobie Langel tobie.lan...@gmail.comwrote:

 On May 4, 2014, at 7:45, Rik Cabanier caban...@gmail.com wrote:
  On Sat, May 3, 2014 at 10:32 PM, Eli Grey m...@eligrey.com wrote:
 
  The proposal specifically states using logical cores, which handles
  all of the CPUs you mentioned properly.
 
  Intel CPUs with hyperthreading enabled report logical cores as double
  the hardware cores. Depending on the version and configuration of the
  Samsung Exynos Octa big.LITTLE CPUs, you will get either 4 logical
  cores (only one cluster can run at a time) or 8 logical cores
  (big.LITTLE MP, available in Exynos 5420 or later only).
 
 
  Great!
  Make sure this is captured when it is put in a specification.
  Otherwise the subtlety between an actual and a logical core might get
 lost.

 Shouldn't this also be captured in the API's name?


 Maybe navigator.hardwareConcurrency as a nod to the C++11 name?


That sounds reasonable.
`navigator.concurrency` is not quite correct since you can have higher
concurrency than the number of hardware threads.


Re: [whatwg] Proposal: navigator.cores

2014-05-04 Thread Rik Cabanier
On Sun, May 4, 2014 at 8:35 PM, Ian Hickson i...@hixie.ch wrote:

 On Sun, 4 May 2014, Adam Barth wrote:
 
  The world of computing has changed since 2009.  At that time, the iPhone
  3G had just been released and Apple hadn't even released the first iPad.
 
  The needs of the web as a platform have changed because now the web
  faces stiff competition from other mobile application frameworks.

 I'm not arguing that we shouldn't provide solid APIs that allow authors to
 provide multi-core solutions. I'm arguing that when we do so, we should do
 so while protecting the privacy of users.


  My personal view is that the fingerprinting horse left the barn years
  ago. I don't believe vendors will succeed in patching the existing
  fingerprint vectors.  For example, the WebKit project cataloged a number
  of vectors three year ago and has made very little progress patching any
  of them:
 
  http://trac.webkit.org/wiki/Fingerprinting

 I'm not responsible for what individual browser vendors do.


  Moreover, vendors are adding new state vectors all the time.  For
  example, the HTTP2 protocol contains an explicit protocol element for
  persisting data on the client:
 
  http://tools.ietf.org/html/draft-ietf-httpbis-http2-12#section-6.5

 I'm not responsible for what editors of other standards do.


  The web cannot afford to avoid exposing useful, non-privacy sensitive
  information, such as the number of cores, to developers out of a fear of
  fingerprinting.

 Sure we can. You don't need to know how many cores a system has, you need
 to know how you can make optimal use of the resources of the system
 without affecting other tasks that the user is running. There are plenty
 of ways we can address this use case that don't expose the number of cores
 as a reliable metric.


 On Sun, 4 May 2014, Rik Cabanier wrote:
   
As far as I can tell, this functionality exists in every other
platform (including iOS and Android).
  
   This is true, but all those platforms have security mechanisms in
   place to mitigate the risk: you have to manually install the
   application, thus granting it either essentially full access to your
   machine (Win32),
 
  Not quite true for Win32. An admin can install an application but a user
  with very limited privileges can run it and call the
  'GetLogicalProcessorInformation' API to get information about the number
  of logical cores.

 Right. You have to install the application. At that point, game over.


No, you misunderstood.
The admin installs the application and has all privileges.
A guest user with the most limited set of privileges can still call this
API since it is considered so low risk. (This is likely also why Chrome can
use it, since its processes run in a very restricted sandbox.)


 The point is that on the Web there's no installation step. This is a
 feature. It's one of the Web's most critically powerful assets.


  Microsoft did not consider this an API that needs additional security.
  OS X is the same (and likely most other OSes).

 Sure. Once you've agreed to just let the application reside on your
 system, then you can fingerprint the user with impunity.

 The Web is better than that, or at least, we should make it better.


Yes, the web is like the reverse of an OS: instead of the admin/OS vendor
being in charge of the machine and managing the privileges, it's the user
who is in charge and who gets to decide if an application gets to use
restricted features.


  Are you saying it's better that people use an estimator poly-fill?

 No, I'm saying we should provide an API to address the underlying use case
 -- making optimal use of CPU resources -- without increasing the
 fingerprinting risk.


I think we already agree on that. The API should return the optimal number
of concurrent threads. This doesn't need to be related only to hardware
resources, and I can even see it being variable depending on the state of
the machine (battery - low battery - too hot - background - load).

Designing a thread scheduler like one the other proposals is a different
problem and much more difficult to generalize.


 On Sun, 4 May 2014, Eli Grey wrote:
  On Sun, May 4, 2014 at 4:11 PM, Ian Hickson i...@hixie.ch wrote:
   or you have to examine permissions that the application is requesting,
   and explicitly grant it the right to run on your machine
 
  I am not aware of this in any platforms. Can you provide one example of
  a platform that requests an explicit permission for access to core
  count?

 The explicit permission is you can run on this system. On iOS,
 Android, MacOS, Linux, Windows, and pretty much every other platform,
 before you can run code on the system, the user or administrator has to
 explicitly install your code.

 On the Web, all it takes is visiting a URL. There's no installation step,
 there's no need for the user to click through a dialog saying running
 native code is highly risky.

 Because the Web has a dramatically lower bar for running code, we have

Re: [whatwg] Proposal: navigator.cores

2014-05-03 Thread Rik Cabanier
On Sat, May 3, 2014 at 10:49 AM, Adam Barth w...@adambarth.com wrote:

 Over on blink-dev, we've been discussing [1] adding a property to navigator
 that reports the number of cores [2].  As far as I can tell, this
 functionality exists in every other platform (including iOS and Android).
  Some of the use cases for this feature have been discussed previously on
 this mailing list [3] and rejected in favor of a more complex system,
 perhaps similar to Grand Central Dispatch [4].  Others have raised concerns
 that exposing the number of cores could lead to increased fidelity of
 fingerprinting [5].

 My view is that the fingerprinting risks are minimal.  This information is
 already available to web sites that wish to spend a few seconds probing
 your machine [6].  Obviously, exposing this property makes that easier and
 more accurate, which is why it's useful for developers.

 IMHO, a more complex worker pool system would be valuable, but most systems
 that have such a worker pool system also report the number of hardware
 threads available.  Examples:

 C++:
 std::thread::hardware_concurrency();

 Win32:
 GetSystemInfo returns dwNumberOfProcessors

 POSIX:
 sysctl returns HW_AVAILCPU or HW_NCPU

 Java:
 Runtime.getRuntime().availableProcessors();

 Python:
 multiprocessing.cpu_count()

 In fact, the web was the only platform I could find that didn't make the
 number of cores available to developers.


This sounds like a great addition to the platform. I agree that there are no
real fingerprinting concerns and it will really benefit advanced authors
that want to optimize performance.

I wonder if this value should return the number of concurrent tasks
(including the main thread) that the system can support, as opposed to the
number of cores.
For instance, Samsung's Exynos Octa processor [1] has 8 cores, but only 4
should be used at a time. Desktop CPUs often support hyperthreading, so
they support double the number of tasks per core [2].

1:
http://www.samsung.com/global/business/semiconductor/minisite/Exynos/products5octa_5410.html
2: http://ark.intel.com/products/77780


Re: [whatwg] Proposal: navigator.cores

2014-05-03 Thread Rik Cabanier
On Sat, May 3, 2014 at 10:32 PM, Eli Grey m...@eligrey.com wrote:

 The proposal specifically states using logical cores, which handles
 all of the CPUs you mentioned properly.

 Intel CPUs with hyperthreading enabled report logical cores as double
 the hardware cores. Depending on the version and configuration of the
 Samsung Exynos Octa big.LITTLE CPUs, you will get either 4 logical
 cores (only one cluster can run at a time) or 8 logical cores
 (big.LITTLE MP, available in Exynos 5420 or later only).


Great!
Make sure this is captured when it is put in a specification.
Otherwise the subtlety between an actual and a logical core might get lost.





Re: [whatwg] canvas feedback

2014-04-08 Thread Rik Cabanier
On Mon, Apr 7, 2014 at 3:35 PM, Ian Hickson i...@hixie.ch wrote:


 On Wed, 12 Mar 2014, Rik Cabanier wrote:
  On Wed, Mar 12, 2014 at 3:44 PM, Ian Hickson wrote:
   On Thu, 28 Nov 2013, Rik Cabanier wrote:
On Thu, Nov 28, 2013 at 8:30 AM, Jürg Lehni wrote:

  I meant to say that I think it would make more sense if the
 path was in the current transformation matrix, so it would
 represent the same coordinate values in which it was drawn, and
 could be used in the same 'context' of transformations applied to
 the drawing context later on.
   
No worries, it *is* confusing. For instance, if you emit coordinates
and then scale the matrix by 2, those coordinates from
getCurrentPath will have a scale of .5 applied.
  
   That's rather confusing, and a pretty good reason not to have a way to
   go from the current default path to an explicit Path, IMHO.
  
   Transformations affect the building of the current default path at
   each step of the way, which is really a very confusing API. The Path
   API on the other hand doesn't have this problem -- it has no
   transformation matrix. It's only when you use Path objects that they
   get transformed.
 
  This happens transparently to the author so it's not confusing.

 I've been confused by it multiple times over the years, and I wrote the
 spec. I am confident in calling it confusing.


Only when you think about it :-)


  For instance:
 
  ctx.rect(0,0,10,10);
  ctx.scale(2,2); - should not affect geometry of the previous rect
  ctx.stroke(); - linewidth is scaled by 2, but rect is still 10x10

 It's confusing because it's not at all clear why this doesn't result in
 two rectangles of different sizes:

  ctx.rect(0,0,10,10);
  ctx.scale(2,2);
  ctx.stroke();
  ctx.scale(2,2);
  ctx.stroke();

 ...while this does:

  ctx.rect(0,0,10,10);
  ctx.scale(2,2);
  ctx.stroke();
  ctx.beginPath();
  ctx.rect(0,0,10,10);
  ctx.scale(2,2);
  ctx.stroke();

 It appears to be the same path in both cases, after all.


Maybe you can think about drawing paths like drawing in a graphics
application.
- moveTo, lineTo, etc = drawing line segments in the document
- scale = hitting the magnifying glass/zooming
- translate = panning the document (0,0) is the upper left of the screen
- coordinates in path segments/rect = coordinates on the screen

It would be very surprising if line art changed when zooming in or out or
panning.


Re: [whatwg] canvas feedback

2014-04-08 Thread Rik Cabanier
On Mon, Apr 7, 2014 at 3:35 PM, Ian Hickson i...@hixie.ch wrote:


 So this is not how most implementations currently have it defined.
   
I'm unsure what you mean. Browser implementations? If so, they
definitely do store the path in user coordinates. The spec currently
says otherwise [1] though.
  
   I'm not sure what you're referring to here.
 
  All graphics backends for canvas that I can inspect don't apply the CTM
  to the current path when you call a painting operator. Instead, the path
  is passed as segments in the current CTM and the graphics library will
  apply the transform to the segments.

 Right. That's what the spec says too, for the current default path.


No, the spec says this:

For CanvasRenderingContext2D objects, the points passed to the methods, and
the resulting lines added to current default path by these methods, must be
transformed according to the current transformation matrix before being
added to the path.




 This is the confusing behaviour to which I was referring. The Path API
 (or
 Path2D or whatever we call it) doesn't have this problem.


That is correct. The Path2D object is in user space and can be passed
directly to the graphics API (along with the CTM).


  ...
 var s = new Shape();

 ctx.beginPath(); ctx.lineTo(...); ...; ctx.fill();
 s.add(new Shape(ctx.currentPath));
 ...
 ctx.beginPath(); ctx.lineTo(...); ...; ctx.stroke();
 s.add(new Shape(ctx.currentPath, ctx.currentDrawingStyle));

 ctx.addHitRegion({shape: s, id: control});
  
   Why not just add ctx.addHitRegion() calls after the fill and stroke
 calls?
 
  That does not work as the second addHitRegion will remove the control and
  id from the first one.
  The 'add' operation is needed to get a union of the region shapes.

 Just use two different IDs with two different addHitRegion() calls. That's
 a lot less complicated than having a whole new API.


That doesn't work if you want to have the same control for the 2 areas,
from the spec for addHitRegion:

If there is a previous region with this control, remove it from the scratch
bitmap's hit region list; then, if it had a parent region, decrement that
hit region's child count by one.


Even if you don't use the control, it would be strange to have 2 separate
hit regions for something that represents 1 object.


   On Fri, 6 Dec 2013, Jürg Lehni wrote:
 ...

copy, and would help memory consummation and performance.
  
   I don't really understand the use case here.
 
  Jurg was just talking about an optimization (so you don't have to make
  an internal copy)

 Sure, but that doesn't answer the question of what the use case is.


From my recent experiments with porting canvg (
https://code.google.com/p/canvg/) to use Path2D, they have a routine that
continually plays a path into the context which is called from a routine
that does the fill, clip or stroke.
Because that routine can't simply set the current path, a lot more changes
were needed.
Some pseudocode that shows the added complexity, without currentPath:

function drawpath() {

  if(Path2DSupported) {

return myPath;

  } else

  for(...) {

ctx.moveTo/lineTo/...

  }

}

function fillpath() {

var p = drawpath();
if(p)
  ctx.fill(p);
else
  ctx.fill();

}

with currentPath:

function drawpath() {
  if(Path2DSupported) { // only 2 extra lines of code
ctx.currentPath = myPath;
  } else
  for(...) {
ctx.moveTo/lineTo/...
  }
function fillpath() {
  drawpath();
  ctx.fill();
}



 On Wed, 12 Mar 2014, Rik Cabanier wrote:
 ...
   You say, here are some paths, here are some fill rules, here are some
   operations you should perform, now give me back a path that describes
   the result given a particular fill rule.
 
  I think you're collapsing a couple of different concepts here:
 
  path + fillrule - shape
  union of shapes - shape
  shape can be converted to a path

 I'm saying shape is an unnecessary primitive. You can do it all with
 paths.

union of (path + fillrule)s - path


No, that makes no sense. What would you get when combining a path with a
fillrule and no fillrule?


   A shape is just a path with a fill rule, essentially.
 
  So, a path can now have a fillrule? Sorry, that makes no sense.

 I'm saying a shape is just the combination of a fill rule and a path. The
 path is just a path, the fill rule is just a fill rule.


After applying a fillrule, there is no longer a path. You can *convert* it
back to a path that describes the outline of the shape if you want, but
that is something different.
The way you've defined things now, you can apply another fill rule on a
path with a fill rule. What would the result of that be?


   Anything you can do
   with one you can do with the other.
 
  You can't add segments from one shape to another as shapes represent
  regions.
  Likewise, you can't union, intersect or xor path segments.

 But you can union, intersect, or xor lists of pairs of paths and
 fillrules.


would you start

Re: [whatwg] canvas feedback

2014-04-08 Thread Rik Cabanier
On Mon, Apr 7, 2014 at 3:35 PM, Ian Hickson i...@hixie.ch wrote:


 So this is not how most implementations currently have it defined.
   
I'm unsure what you mean. Browser implementations? If so, they
definitely do store the path in user coordinates. The spec currently
says otherwise [1] though.
  
   I'm not sure what you're referring to here.
 
  All graphics backends for canvas that I can inspect don't apply the CTM
  to the current path when you call a painting operator. Instead, the path
  is passed as segments in the current CTM and the graphics library will
  apply the transform to the segments.

 Right. That's what the spec says too, for the current default path.


No, the spec says this:

For CanvasRenderingContext2D objects, the points passed to the methods, and
the resulting lines added to current default path by these methods, must be
transformed according to the current transformation matrix before being
added to the path.




 This is the confusing behaviour to which I was referring. The Path API
 (or
 Path2D or whatever we call it) doesn't have this problem.


That is correct. The Path2D object is in user space and can be passed
directly to the graphics API (along with the CTM).


Another use case is to allow authors to quickly migrate to hit
 regions.
   
ctx.beginPath(); ctx.lineTo(...); ...; ctx.fill();
... // lots of complex drawing operation for a control
ctx.beginPath(); ctx.lineTo(...); ...; ctx.stroke();
   
   
To migrate that to a region (with my proposed shape interface [1]):
   
var s = new Shape();
   
ctx.beginPath(); ctx.lineTo(...); ...; ctx.fill(); s.add(new
Shape(ctx.currentPath));
...
ctx.beginPath(); ctx.lineTo(...); ...; ctx.stroke(); s.add(new
Shape(ctx.currentPath, ctx.currentDrawingStyle));
   
ctx.addHitRegion({shape: s, id: control});
  
   Why not just add ctx.addHitRegion() calls after the fill and stroke
 calls?
 
  That does not work as the second addHitRegion will remove the control and
  id from the first one.
  The 'add' operation is needed to get a union of the region shapes.

 Just use two different IDs with two different addHitRegion() calls. That's
 a lot less complicated than having a whole new API.


That doesn't work if you want to have the same control for the 2 areas,
from the spec for addHitRegion:

If there is a previous region with this control, remove it from the scratch
bitmap's hit region list; then, if it had a parent region, decrement that
hit region's child count by one.


Even if you don't use the control, it would be strange to have 2 separate
hit regions for something that represents 1 object.
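
A sketch of the failure mode, per the spec text just quoted (addHitRegion
was a proposed API and has since been removed from engines; myControl is an
assumed reference to a form control element):

  ctx.beginPath();
  ctx.rect(0, 0, 50, 50);                   // first area of the control
  ctx.fill();
  ctx.addHitRegion({ control: myControl }); // region A
  ctx.beginPath();
  ctx.rect(100, 0, 50, 50);                 // second area, same control
  ctx.stroke();
  ctx.addHitRegion({ control: myControl }); // removes region A; only B is left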


   On Fri, 6 Dec 2013, Jürg Lehni wrote:
 ...
copy, and would help memory consumption and performance.
  
   I don't really understand the use case here.
 
  Jurg was just talking about an optimization (so you don't have to make
  an internal copy)

 Sure, but that doesn't answer the question of what the use case is.


From my recent experiments with porting canvg (
https://code.google.com/p/canvg/) to use Path2D: it has a routine that
continually replays a path into the context, and that routine is called
from a routine that does the fill, clip or stroke.
Because that routine can't simply set the current path, a lot more changes
were needed.
Some pseudocode that shows the added complexity, without currentPath:

function drawpath() {
  if (Path2DSupported) {
    return myPath;
  } else {
    for (...) {
      ctx.moveTo/lineTo/...
    }
  }
}

function fillpath() {
  var p = drawpath();
  if (p)
    ctx.fill(p);
  else
    ctx.fill();
}

with currentPath:

function drawpath() {
  if (Path2DSupported) { // only 2 extra lines of code
    ctx.currentPath = myPath;
  } else {
    for (...) {
      ctx.moveTo/lineTo/...
    }
  }
}

function fillpath() {
  drawpath();
  ctx.fill();
}



 On Wed, 12 Mar 2014, Rik Cabanier wrote:

 You can do unions and so forth with just paths, no need for
 regions.
   
How would you do a union with paths? If you mean that you can just
aggregate the segments, sure, but that doesn't seem very useful.
  
   You say, here are some paths, here are some fill rules, here are some
   operations you should perform, now give me back a path that describes
   the result given a particular fill rule.
 
  I think you're collapsing a couple of different concepts here:
 
  path + fillrule -> shape
  union of shapes -> shape
  shape can be converted to a path

 I'm saying shape is an unnecessary primitive. You can do it all with
 paths.

union of (path + fillrule)s -> path


No, that makes no sense. What would you get when combining a path with a
fillrule and no fillrule?


   A shape is just a path with a fill rule, essentially.
 
  So, a path can now have a fillrule? Sorry, that makes no sense.

 I'm saying a shape is just the combination of a fill rule and a path. The
 path is just a path, the fill rule is just a fill rule.


After applying a fillrule, there is no longer a path. You can *convert* it
back to a path that describes the outline of the shape if you want, but
that is something different.
The way you've defined things now, you can apply another fill rule on a
path with a fill rule. What would the result of that be?


   Anything you can do
   with one you can do with the other.
 
  You can't add segments from one shape to another as shapes represent
  regions.
  Likewise, you can't union, intersect or xor path segments.

 But you can union, intersect, or xor lists of pairs of paths and
 fillrules.
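
For concreteness, a minimal sketch of why a fill rule changes the region a
path encloses (assumes Path2D and the two-argument fill(), both of which
later shipped):

  const ctx = document.createElement('canvas').getContext('2d');
  const ring = new Path2D();
  ring.rect(0, 0, 100, 100);  // outer square
  ring.rect(25, 25, 50, 50);  // inner square, same winding direction
  ctx.fill(ring, 'nonzero');  // inner square is filled: one solid block
  ctx.clearRect(0, 0, 100, 100);
  ctx.fill(ring, 'evenodd');  // inner square is a hole: a square ring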

Re: [whatwg] canvas feedback

2014-04-08 Thread Rik Cabanier
On Tue, Apr 8, 2014 at 12:25 PM, Justin Novosad ju...@google.com wrote:


 
  On Mon, 17 Mar 2014, Justin Novosad wrote:
  
   Yes, but there is still an issue that causes problems in Blink/WebKit:
   because the canvas rendering context stores its path in local
   (untransformed) space, whenever the CTM changes, the path needs to be
   transformed to follow the new local spcae.  This transform requires the
  CTM
   to be invertible. So now webkit and blink have a bug that causes all
   previously recorded parts of the current path to be discarded when the
  CTM
   becomes non-invertible (even if it is only temporarily non-invertible,
  even
   if the current path is not even touched while the matrix is
   non-invertible). I have a fix in flight that fixes that problem in
 Blink
  by
   storing the current path in transformed coordinates instead. I've had
 the
   fix on the back burner pending the outcome of this thread.
 
  Indeed. It's possible to pick implementation strategies that just can't
 be
  compliant; we shouldn't change the spec every time any implementor
 happens
  to make that kind of mistake, IMHO.
 
  (Of course the better long-term solution here is the Path objects, which
  are transform-agnostic during building.)
 
 
  Just to be clear, we should support this because otherwise the results
 are
  just wrong. For example, here some browsers currently show a straight
 line
  in the default state, and this causes the animation to look ugly in the
  transition from the first frame to the secord frame (hover over the
 yellow
  to begin the transition):
 
 http://junkyard.damowmow.com/538
 
  Contrast this to the equivalent code with the transforms explicitly
  multiplied into the coordinates:
 
 http://junkyard.damowmow.com/539
 
  I don't see why we would want these to be different. From the author's
  perspective, they're identical.


These examples are pretty far-fetched.
How many times do people change the CTM in the middle of a drawing operation
without changing the geometry?

 If we stick to that, there are still some behaviors that need to be resolved.
 One issue that comes to mind is what happens if stroke or fill are called
 while the CTM is non-invertible? To be more precise, how would the styles
 be mapped?  If the fillStyle is collapsed to a point, does that mean the
 path gets filled in transparent black?  If we go down this road, we will
 likely uncover more questions of this nature.


Indeed


  On Tue, 25 Mar 2014, Justin Novosad wrote:
  
   I prepared a code change to that effect, but then there was talk of
   changing the spec to skip path primitives when the CTM is not
   invertible, which I think is a good idea. It would avoid a lot of
   needless hoop jumping on the implementation side for supporting weird
   edge cases that have little practical usefulness.
 
  I'm not sure I agree that they have little practical usefulness. Zeros
  often occur at the edges of transitions, and if we changed the spec then
  these transitions would require all the special-case code to go in author
  code instead of implementor code.
 

 Yes, I think that may be the strongest argument so far in this discussion.
 The examples you provided earlier illustrate it well.
 I would like to hear what Rik and Dirk think about this now.


I looked at the WebKit and Chrome bug databases and I haven't found anyone
who complained about their current behavior.
Implementing this consistently will either add a bunch of special-case code
to deal with non-invertible matrices, or require double (triple?) conversion
of all segment points like Firefox does. After that, fill, stroke and clip
will still not work when there's a non-invertible matrix.

I do not think it's worth the effort...
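
For reference, a sketch of the edge case, adapted from the example given
later in this thread (behavior as observed at the time; stroke() is used
here so the degenerate result is visible):

  ctx.setTransform(1, 1, 1, 1, 0, 0); // determinant 0: non-invertible
  ctx.moveTo(0, 0);
  ctx.lineTo(10, 0);  // recorded through the singular CTM, which
                      // collapses the segment onto the line y = x
  ctx.setTransform(1, 0, 0, 1, 0, 0);
  ctx.stroke();       // Firefox/IE: a line from (0,0) to (10,10);
                      // WebKit/Blink: the path was discarded, nothing draws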


Re: [whatwg] canvas feedback

2014-04-08 Thread Rik Cabanier
On Mon, Apr 7, 2014 at 3:35 PM, Ian Hickson i...@hixie.ch wrote:

 ...

Stroking will be completely wrong too, because joins and end caps
are drawn separately, so they would be stroked as separate paths.
This will not give you the effect of a double-stroked path.
  
   I don't understand why you think joins and end caps are drawn
   separately. That is not what the spec requires.
 
  Sure it does, for instance from
 
 http://www.whatwg.org/specs/web-apps/current-work/multipage/the-canvas-element.html#trace-a-path
  :
 
  The round value means that a filled arc connecting the two
  aforementioned corners of the join, abutting (and not overlapping) the
  aforementioned triangle, with the diameter equal to the line width and
  the origin at the point of the join, must be added at joins.
 
  If you mean, drawn with a separate fill call, yes that is true.
  What I meant was that they are drawn as a separate closed path that will
  interact with other paths as soon as there are different winding rules or
  holes.

 The word "filled" is a bit misleading here (I've removed it), but I don't
 see why that led you to the conclusion you reached. The step in question
 begins with "Create a new path that describes the edge of the areas that
 would be covered if a straight line of length equal to the style's
 lineWidth was swept along each path in path while being kept at an angle
 such that the line is orthogonal to the path being swept, replacing each
 point with the end cap necessary to satisfy the style's lineCap attribute
 as described previously and elaborated below, and replacing each join with
 the join necessary to satisfy the style's lineJoin type, as defined below",
 which seems pretty unambiguous.


Thinking about this some more, it looks like you came around and specified
stroking like I requested from the beginning.
For instance,
http://lists.w3.org/Archives/Public/public-whatwg-archive/2013Oct/0354.html
 or
http://lists.w3.org/Archives/Public/public-whatwg-archive/2013Oct/0213.html
Now that you made that change, 'addPathByStrokingPath' is specified
correctly. I still don't know how it could be implemented, though... (It
*could* be implemented as a shape, but not as a path.)


Re: [whatwg] canvas feedback

2014-04-08 Thread Rik Cabanier
On Tue, Apr 8, 2014 at 4:50 PM, Rik Cabanier caban...@gmail.com wrote:




 On Mon, Apr 7, 2014 at 3:35 PM, Ian Hickson i...@hixie.ch wrote:

 ...


Stroking will be completely wrong too, because joins and end caps
are drawn separately, so they would be stroked as separate paths.
This will not give you the effect of a double-stroked path.
  
   I don't understand why you think joins and end caps are drawn
   separately. That is not what the spec requires.
 
  Sure it does, for instance from
 
 http://www.whatwg.org/specs/web-apps/current-work/multipage/the-canvas-element.html#trace-a-path
  :
 
  The round value means that a filled arc connecting the two
  aforementioned corners of the join, abutting (and not overlapping) the
  aforementioned triangle, with the diameter equal to the line width and
  the origin at the point of the join, must be added at joins.
 
  If you mean, drawn with a separate fill call, yes that is true.
  What I meant was that they are drawn as a separate closed path that will
  interact with other paths as soon as there are different winding rules
 or
  holes.

 The word "filled" is a bit misleading here (I've removed it), but I don't
 see why that led you to the conclusion you reached. The step in question
 begins with "Create a new path that describes the edge of the areas that
 would be covered if a straight line of length equal to the style's
 lineWidth was swept along each path in path while being kept at an angle
 such that the line is orthogonal to the path being swept, replacing each
 point with the end cap necessary to satisfy the style's lineCap attribute
 as described previously and elaborated below, and replacing each join with
 the join necessary to satisfy the style's lineJoin type, as defined below",
 which seems pretty unambiguous.


 Thinking about this some more, it looks like you came around and specified
 stroking like I requested from the beginning.
 For instance,
 http://lists.w3.org/Archives/Public/public-whatwg-archive/2013Oct/0354.html
  or
 http://lists.w3.org/Archives/Public/public-whatwg-archive/2013Oct/0213.html
 Now that you made that change, 'addPathByStrokingPath' is specified
 correctly. I still don't know how it could be implemented, though... (It
 *could* be implemented as a shape, but not as a path.)


The spec is still confusingly written and could be misinterpreted:

Create a new path that describes the edge of the areas that would be
covered if a straight line of length equal to the style's lineWidth was
swept along each subpath in path while being kept at an angle such that the
line is orthogonal to the path being swept, replacing each point with the
end cap necessary to satisfy the style's lineCap attribute as described
previously and elaborated below, and replacing each join with the join
necessary to satisfy the style's lineJoin type, as defined below.


Maybe could become:

Create a new path that describes the edge of the coverage of the following
areas:
- a straight line of length equal to the style's lineWidth that was swept
along each subpath in path while being kept at an angle such that the line
is orthogonal to the path being swept,
- the end cap necessary to satisfy the style's lineCap attribute as
described previously and elaborated below,
- the join necessary to satisfy the style's lineJoin type, as defined
below.


Re: [whatwg] Canvas normalize rect() and strokeRect()

2014-04-06 Thread Rik Cabanier
On Sat, Apr 5, 2014 at 11:00 PM, Dirk Schulze dschu...@adobe.com wrote:


 On Apr 6, 2014, at 3:23 AM, Rik Cabanier caban...@gmail.com wrote:

 
 
 
  On Sat, Apr 5, 2014 at 9:01 AM, Dirk Schulze dschu...@adobe.com wrote:
  Hi,
 
  I looked at the behavior of negative width or height for the rect() and
 strokeRect() functions.
 
  All browsers normalize the passed parameters for strokeRect() to have
 positive width and height.
 
  strokeRect(90,10,-80,80) -> strokeRect(10,10,80,80)
 
  http://jsfiddle.net/za945/
 
  It also seems that only Firefox is following the spec [1] when width or
 height is 0: http://jsfiddle.net/za945/2/
  I'm unsure why such a rectangle is defined as a straight line.

 You mean you would rather let it draw a one dimensional rectangle? So for
 the dimension that is not zero, you would see two overlapping lines + the 0
 dimensional sides?


yes

That seems indeed to be the case for IE, Safari and Blink:
 http://jsfiddle.net/Gh9XK/

 
  Just WebKit seems to normalize for rect() as well:
 
  http://jsfiddle.net/VT4MG/
 
  The behavior of normalizing is not specified. Especially it seems odd
 that the behavior for fillRect()/strokeRect() should differ from rect(). So
 we should either normalize for all functions or for none of them, IMO.
 
  Note: fillRect() and clearRect() are not affected. The behavior for
 rect() is important for filling with different winding rules as well. It is
 not just stroking with dash arrays that is affected.
 
  yes, the spec needs to say "in that order" as it does for fillRect and
 strokeRect.

 Ok, that means you would be in favor of not normalizing. Again, all current
 browsers normalize and do NOT draw "in that order" for fillRect() and
 strokeRect(). That means you would require giving up the currently
 interoperable behavior.


I changed your test a bit so you can more easily see the normalisation:
http://jsfiddle.net/za945/3/
Safari and Chrome are doing as you say, but Firefox does not. (I don't have
IE to test)

I would be in favor of changing the Blink/WebKit behavior, as the specified
one makes more sense.
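
A short sketch of the distinction at issue, assuming the spec's "in that
order" behavior for rect():

  ctx.strokeRect(90, 10, -80, 80); // normalized everywhere: drawn as
                                   // strokeRect(10, 10, 80, 80)
  ctx.beginPath();
  ctx.rect(90, 10, -80, 80);       // per spec, the four points are added
  ctx.stroke();                    // "in that order", so the subpath winds
                                   // the other way; winding rules and dash
                                   // arrays can then give different results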


Re: [whatwg] Canvas normalize rect() and strokeRect()

2014-04-05 Thread Rik Cabanier
On Sat, Apr 5, 2014 at 9:01 AM, Dirk Schulze dschu...@adobe.com wrote:

 Hi,

 I looked at the behavior of negative width or height for the rect() and
 strokeRect() functions.

 All browsers normalize the passed parameters for strokeRect() to have
 positive width and height.

 strokeRect(90,10,-80,80) -> strokeRect(10,10,80,80)

 http://jsfiddle.net/za945/


It also seems that only Firefox is following the spec [1] when width or
height is 0: http://jsfiddle.net/za945/2/
I'm unsure why such a rectangle is defined as a straight line.


 Just WebKit seems to normalize for rect() as well:

 http://jsfiddle.net/VT4MG/

 The behavior of normalizing is not specified. Especially it seems odd that
 the behavior for fillRect()/strokeRect() should differ from rect(). So we
 should either normalize for all functions or for none of them, IMO.

 Note: fillRect() and clearRect() are not affected. The behavior for rect()
 is important for filling with different winding rules as well. It is not
 just stroking with dash arrays that is affected.


yes, the spec needs to say "in that order" as it does for fillRect and
strokeRect.

1:
http://www.whatwg.org/specs/web-apps/current-work/multipage/the-canvas-element.html#dom-context-2d-fillrect


Re: [whatwg] Bicubic filtering on context.drawImage

2014-03-27 Thread Rik Cabanier
On Wed, Mar 26, 2014 at 10:23 PM, Ryosuke Niwa rn...@apple.com wrote:


 On Mar 26, 2014, at 9:22 PM, Rik Cabanier caban...@gmail.com wrote:

 On Wed, Mar 26, 2014 at 8:59 PM, Ryosuke Niwa rn...@apple.com wrote:


 On Mar 24, 2014, at 8:25 AM, Justin Novosad ju...@google.com wrote:

 On Sat, Mar 22, 2014 at 1:47 AM, K. Gadd k...@luminance.org wrote:


 A list of resampling methods defined by the spec would be a great
 overengineered (not in a bad way) solution, but I think you really
 only need to worry about breaking existing apps - so providing an
 escape valve to demand bilinear (this is pretty straightforward,
 everything can do bilinear) instead of the 'best' filtering being
 offered is probably enough for future-proofing. It might be better to
 default to bilinear and instead require canvas users to opt into
 better filtering, in which case a list of available filters would
 probably be preferred, since that lets the developer do feature
 detection.

 I think we missed an opportunity to make filtering future-proof when it

 got spec'ed as a boolean. Should have been an enum IMHO :-(
 Anyways, if we add another image smoothing attribute to select the
 algorithm let's at least make that one an enum.

 I'm not sure the spec should impose specific filter implementations, or
 perhaps only bi-linear absolutely needs to be supported, and all other
 modes can have fallbacks.
 For example, we could have an attribute named imageSmoothingQuality.
 Possible values could be 'best' and 'fast'. Perhaps 'fast' would mean
 bi-linear. Not sure which mode should be the default.


 We could also have an interpolateEndpointsCleanly flag that forces bilinear
 or an equivalent algorithm that ensures endpoints do not get affected by
 inner contents.


 Is that to clamp the sampling to the source rect?
 http://jsfiddle.net/6vh5q/9/ shows that Safari samples outside the source
 rect when smoothing is turned off, which is a bit strange.


 In general, it's better to define semantic based flags and options so that
 UAs could optimize it in the future.  Mandating a particular scaling
 algorithm in the spec. would limit such optimizations in the future.  e.g.
 there could be hardware that natively supports Lanczos sampling but not
 bicubic sampling.


 If it were an enum/string, an author could set the desired sampling method,
 and if the UA doesn't support it, the attribute would not change.


 The point I was trying to make isn't so much about some UA not supporting
 a particular sampling algorithm.  It's more about that the
 right/most-effective sampling algorithm depending on platform/hardware.  In
 general, UA is in a much better position to determine what sampling
 algorithm works best given the constraints such as smoothness and
 interpolating endpoints cleanly on a given hardware.


Yes, the various aliasing options will be difficult to figure out.
Katelyn's use case is about controlling the sampling outside the source rect
in drawImage, which we can treat as a separate, more trivial issue.


Re: [whatwg] Bicubic filtering on context.drawImage

2014-03-26 Thread Rik Cabanier
On Wed, Mar 26, 2014 at 8:59 PM, Ryosuke Niwa rn...@apple.com wrote:


 On Mar 24, 2014, at 8:25 AM, Justin Novosad ju...@google.com wrote:

  On Sat, Mar 22, 2014 at 1:47 AM, K. Gadd k...@luminance.org wrote:
 
 
  A list of resampling methods defined by the spec would be a great
  overengineered (not in a bad way) solution, but I think you really
  only need to worry about breaking existing apps - so providing an
  escape valve to demand bilinear (this is pretty straightforward,
  everything can do bilinear) instead of the 'best' filtering being
  offered is probably enough for future-proofing. It might be better to
  default to bilinear and instead require canvas users to opt into
  better filtering, in which case a list of available filters would
  probably be preferred, since that lets the developer do feature
  detection.
 
  I think we missed an opportunity to make filtering future-proof when it
  got spec'ed as a boolean. Should have been an enum IMHO :-(
  Anyways, if we add another image smoothing attribute to select the
  algorithm let's at least make that one an enum.
 
  I'm not sure the spec should impose specific filter implementations, or
  perhaps only bi-linear absolutely needs to be supported, and all other
  modes can have fallbacks.
  For example, we could have an attribute named imageSmoothingQuality.
  Possible values could be 'best' and 'fast'. Perhaps 'fast' would mean
  bi-linear. Not sure which mode should be the default.

 We could also have an interpolateEndpointsCleanly flag that forces bilinear
 or an equivalent algorithm that ensures endpoints do not get affected by
 inner contents.


Is that to clamp the sampling to the source rect?
http://jsfiddle.net/6vh5q/9/ shows that Safari samples outside the source
rect when smoothing is turned off, which is a bit strange.


 In general, it's better to define semantic based flags and options so that
 UAs could optimize it in the future.  Mandating a particular scaling
 algorithm in the spec. would limit such optimizations in the future.  e.g.
 there could be hardware that natively supports Lanczos sampling but not
 bicubic sampling.


If it were an enum/string, an author could set the desired sampling method,
and if the UA doesn't support it, the attribute would not change.
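
A sketch of that enum-style feature detection ('best'/'fast' were only
proposals; the attribute that eventually shipped is imageSmoothingQuality
with the values 'low' | 'medium' | 'high'):

  ctx.imageSmoothingQuality = 'high';
  if (ctx.imageSmoothingQuality !== 'high') {
    // the assignment was ignored because the value is unsupported,
    // so the attribute still holds a value the UA does support
  }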


Re: [whatwg] Singular CTM and currentTransform

2014-03-25 Thread Rik Cabanier
On Tue, Mar 25, 2014 at 8:49 AM, Justin Novosad ju...@google.com wrote:

 On Tue, Mar 25, 2014 at 8:25 AM, Dirk Schulze dschu...@adobe.com wrote:

  Hi,
 
  Independent if getter getTransform/getCTM or attribute currentTransform,
  what should be returned for a CTM that is singular (not invertible)?
 
  In WebKit we do not track all transformations of the CTM that caused a
  singular matrix or are following a transformation that would have caused
 a
  singular matrix.
 
  Example:
 
  ctx.scale(0,0);
  ctx.translate(10,10);
 
  In WebKit we would not apply the transformation scale(0,0), and would
  instead mark the CTM as not invertible. So we could not return an SVGMatrix
  object with a = b = c = d = 0; e = f = 10, because we actually don't know
  the CTM after applying all transforms.
 
  I would suggest that the getter either:
  1) throws an invalid state error if the CTM is not invertible
  2) returns null. In WebIDL it would look like: SVGMatrix? getTransform();
 
  Greetings,
  Dirk
 

 The notion that a non-invertible matrix is an unusable state is somewhat of
 a WebKit-ism.  I think there is a prerequisite question that needs to be
 resolved before we can ponder what you propose: should we proceed with draw
 operations when the canvas matrix is non-invertible?  Right now some browsers do
 and some don't. The current state of the spec suggests that webkit/blink
 are *not* doing the right thing.  In another thread we discussed skipping
 path primitives (and presumably all draw calls) when the matrix is
 non-invertible. We should probably finalize that issue first.


I agree. That issue has the same root problem as currentTransform.
It would be nice to get closure.

Justin, you hinted that you would be willing to follow the spec, which would
make you match Firefox and IE.
Are you still planning on doing that?

Note that Firefox is still non-compliant if there's a non-invertible matrix
during filling/stroking/clipping


  PS: This is one reason I prefer a getter over an attribute because the
  getter does not return a mutable (live) SVGMatrix. But even then the
  problem above is not fully solved of course.



Re: [whatwg] Singular CTM and currentTransform

2014-03-25 Thread Rik Cabanier
On Tue, Mar 25, 2014 at 12:35 PM, Justin Novosad ju...@google.com wrote:

 On Tue, Mar 25, 2014 at 3:15 PM, Rik Cabanier caban...@gmail.com wrote:


 I agree. That issue has the same root problem as currentTransform.
 It would be nice to get closure.

 Justin, you hinted that you would be willing to follow the spec, which
 would make you match Firefox and IE.
 Are you still planning on doing that?


 I'm in a holding pattern. I prepared a code change to that effect, but
 then there was talk of changing the spec to skip path primitives when the
 CTM is not invertible, which I think is a good idea. It would avoid a lot
 of needless hoop jumping on the implementation side for supporting weird
 edge cases that have little practical usefulness.

 Right now, there is no browser interoperability when using non-invertible
 CTMs, and the web has been in this inconsistent state for a long time.  The
 fact that this issue has never escalated (AFAIK) is a strong hint that no
 one out there really cares about this use case, so we should probably just
 go for simplicity. Making path primitives and draw calls no-ops when the
 CTM is non-invertible is simple to spec, implement, test, and understand
 for developers.


Great to hear!
I volunteer to update the Firefox implementation if we can get consensus.
(see https://bugzilla.mozilla.org/show_bug.cgi?id=931587)


 Note that Firefox is still non-compliant if there's a non-invertible
 matrix during filling/stroking/clipping


  PS: This is one reason I prefer a getter over an attribute because the
  getter does not return a mutable (live) SVGMatrix. But even then the
  problem above is not fully solved of course.






Re: [whatwg] Proposal: change 2D canvas currentTransform to getter method

2014-03-24 Thread Rik Cabanier
On Mon, Mar 24, 2014 at 8:34 AM, Simon Sarris simon.sar...@gmail.com wrote:


 On Mon, Mar 24, 2014 at 11:26 AM, Hwang, Dongseong 
 dongseong.hw...@intel.com wrote:

 Looking over this thread, we have reached a consensus not to
 expose the currentTransform attribute.

 Now, all we have to decide is the API.

 Option 1,
 SVGMatrix getTransform();
 void setTransform(SVGMatrix);  -- it overrides void
 setTransform(unrestricted double a, unrestricted double b, unrestricted
 double c, unrestricted double d, unrestricted double e, unrestricted double
 f);

 Option 2,
 SVGMatrix getCTM();
 void setCTM(SVGMatrix);

 Option 3,
 SVGMatrix getCurrentTransform();
 void setCurrentTransform(SVGMatrix);

 Which is the best?

 Greetings, DS


 I'm heavily in favor of option 1.

 I think using Current in the naming convention is silly. The transform is
 just as much a part of state as lineWidth/etc, but nobody would propose
 naming lineWidth something like currentLineWidth! There's no way to get a
 *non-current* transformation matrix (or lineWidth), so I think the
 distinction is unnecessary.

 CTM only seems like a good idea if we're worried that the name is too
 long, but since Current is redundant/extraneous, I don't think an
 initialism is worth the added layer of confusion.


+1
There's already a transform function that takes an array that works the
same way.


Re: [whatwg] effect of smoothing on drawImage (was: Bicubic filtering on context.drawImage)

2014-03-24 Thread Rik Cabanier
I created a test case that rotates your test image:
http://jsfiddle.net/6vh5q/9/
According to the spec [1] the red line should show up when you rotate the
image, regardless of smoothing.

Chrome/Firefox with cairo or core graphics backend: no red line
IE/Firefox with d2d backend: checkered red when smoothing is off, solid
with smoothing on (= this is correct)
Safari: checkered red with smoothing off but no red with smoothing on (?)

It might be surprising to an author that content from outside the
source rect leaks in when smoothing is turned off.
I'm unsure if it's important enough to address in the spec.

On Sat, Mar 22, 2014 at 9:43 PM, K. Gadd k...@luminance.org wrote:

 On windows with smoothing disabled I get it in every browser except
 Chrome. Maybe this is due to Direct2D, and for some reason it
 antialiases the edges of the bitmaps? It's nice to hear that it
 doesn't happen on other configurations.

 It's important to test this when drawing with transforms active because
 you may not see the full set of artifacts without them. (The application
 where I first observed this is rotating/scaling these bitmaps with
 smoothing disabled.)

 Copying to a temporary canvas is an interesting idea; is it possible for
 typical browser implementations to optimize this or does it forcibly
 degrade things to a pair of individual draw calls (with full state changes
 and 6-vertex buffers) for every bitmap rendered?

 I don't really have any problems with the behavior when smoothing is
 enabled; sorry if this was unclear.

 -kg

 On Sat, Mar 22, 2014 at 9:09 PM, Rik Cabanier caban...@gmail.com wrote:
 
  On Fri, Mar 21, 2014 at 10:47 PM, K. Gadd k...@luminance.org wrote:
 
  Hi, the attached screenshots and test case in
  https://bugzilla.mozilla.org/show_bug.cgi?id=782054 demonstrate how
  the issue affects 2D games that perform scaling/rotation of bitmaps.
  There are other scenarios I probably haven't considered as well. As
  far as I can tell the mechanism used to render these quads is
  rendering quads that are slightly too large (probably for coverage
  purposes or to handle subpixel coordinates?) which results in
  effectively drawing a rectangle larger than the input rectangle, so
  you sample a bit outside of it and get noise when texture atlases are
  in use.
 
  Interestingly, I raised this on the list previously and it was pointed
  out that Chrome's previous ('correct' for that test case) behavior was
  actually incorrect, so it was changed. If I remember correctly there
  are good reasons for this behavior when bilinear filtering is enabled,
  but it's quite unexpected to basically get 'antialiasing'
  on the edges of your bitmaps when filtering is explicitly disabled.
  Getting opted into a different filter than the filter you expect could
  probably be similarly problematic but I don't know of any direct
  examples other than the gradient fill one.
 
 
  ah. I remember looking at your test case.
  I made a simpler version that shows the issue:
  http://codepen.io/adobe/pen/jIzbv
  According to the spec, there should be a faint line [1]:
 
  If the original image data is a bitmap image, the value painted at a
 point
  in the destination rectangle is computed by filtering the original image
  data. The user agent may use any filtering algorithm (for example
 bilinear
  interpolation or nearest-neighbor). When the filtering algorithm
 requires a
  pixel value from outside the original image data, it must instead use the
  value from the nearest edge pixel. (That is, the filter uses
 'clamp-to-edge'
  behavior.) When the filtering algorithm requires a pixel value from
 outside
  the source rectangle but inside the original image data, then the value
 from
  the original image data must be used.
 
  You were told correctly that the Chrome behavior is incorrect. When doing
  smoothing, Chrome is not looking outside the bounds of the source image,
 so
  you don't get the faint line.
  This is also an issue with the cairo and core graphics backends of
 Firefox.
  Safari and IE seem to work correctly.
 
  I will log bugs against Chrome and Firefox so we can get interoperable
  behavior here.
 
  I was not able to reproduce the issue if smoothing was disabled. If you
  want smoothing but not the lines, you can do a drawImage to an intermediate
  canvas with the same resolution as the source canvas. I verified that this
  works.


1:
http://www.whatwg.org/specs/web-apps/current-work/multipage/the-canvas-element.html#dom-context-2d-drawimage


Re: [whatwg] ImageData constructor questions

2014-03-23 Thread Rik Cabanier
On Sat, Mar 22, 2014 at 11:31 PM, Dirk Schulze dschu...@adobe.com wrote:

 Hi,

 I was reading the spec part about ImageData [1].

 1)
 I noticed that createImageData() is explicit that it represents a
 transparent black rectangle. The constructor for ImageData is not that
 explicit.


Yes, that seems like an oversight.


 2)
 The last step of the 2nd constructor that takes an Uint8ClampedArray says:
* Return a new ImageData object whose width is sw, whose height is
 height, and whose data is source.

 Is data a reference to the original source or a copy of source?


It is the original source. The spec does not say anything about creating a
new array and copying the bits over.


 For the former, there might be two ImageData objects referencing the same
 ByteArray. How would that be useful?


You *could* use the API that way, but I don't see why there has to be a way
to prevent that from happening. Is there an implementation issue if two
different ImageData objects point to the same data array?
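
A small runnable sketch of the aliasing in question, assuming the
constructor behaves as the spec text above describes (no copy):

  const bytes = new Uint8ClampedArray(4 * 10 * 10); // 10x10 RGBA
  const a = new ImageData(bytes, 10, 10);
  const b = new ImageData(bytes, 10, 10);
  a.data[0] = 255;        // write through one object...
  console.log(b.data[0]); // ...255: both share the same backing array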



 [1]
 http://www.whatwg.org/specs/web-apps/current-work/multipage/the-canvas-element.html#dfnReturnLink-7(4.12.4.2.16
  Pixel manipulation)


Correct link is:
http://www.whatwg.org/specs/web-apps/current-work/multipage/the-canvas-element.html#pixel-manipulation


[whatwg] effect of smoothing on drawImage (was: Bicubic filtering on context.drawImage)

2014-03-22 Thread Rik Cabanier
On Fri, Mar 21, 2014 at 10:47 PM, K. Gadd k...@luminance.org wrote:

 Hi, the attached screenshots and test case in
 https://bugzilla.mozilla.org/show_bug.cgi?id=782054 demonstrate how
 the issue affects 2D games that perform scaling/rotation of bitmaps.
 There are other scenarios I probably haven't considered as well. As
 far as I can tell the mechanism used to render these quads is
 rendering quads that are slightly too large (probably for coverage
 purposes or to handle subpixel coordinates?) which results in
 effectively drawing a rectangle larger than the input rectangle, so
 you sample a bit outside of it and get noise when texture atlases are
 in use.

 Interestingly, I raised this on the list previously and it was pointed
 out that Chrome's previous ('correct' for that test case) behavior was
 actually incorrect, so it was changed. If I remember correctly there
 are good reasons for this behavior when bilinear filtering is enabled,
 but it's quite unexpected to basically get 'antialiasing'
 on the edges of your bitmaps when filtering is explicitly disabled.
 Getting opted into a different filter than the filter you expect could
 probably be similarly problematic but I don't know of any direct
 examples other than the gradient fill one.


ah. I remember looking at your test case.
I made a simpler version that shows the issue:
http://codepen.io/adobe/pen/jIzbv
According to the spec, there should be a faint line [1]:

If the original image data is a bitmap image, the value painted at a point
in the destination rectangle is computed by filtering the original image
data. The user agent may use any filtering algorithm (for example bilinear
interpolation or nearest-neighbor). When the filtering algorithm requires a
pixel value from outside the original image data, it must instead use the
value from the nearest edge pixel. (That is, the filter uses
'clamp-to-edge' behavior.) When the filtering algorithm requires a pixel
value from outside the source rectangle but inside the original image
data, *then
the value from the original image data must be used.*

You were told correctly that the Chrome behavior is incorrect. When doing
smoothing, Chrome is not looking outside the bounds of the source image, so
you don't get the faint line.
This is also an issue with the cairo and core graphics backends of Firefox.
Safari and IE seem to work correctly.

I will log bugs against Chrome and Firefox so we can get interoperable
behavior here.

I was not able to reproduce the issue if smoothing was disabled. If you
want smoothing but not the lines, you can do a drawImage to an intermediate
canvas with the same resolution as the source canvas. I verified that this
works.
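
A sketch of that workaround (drawSprite and the atlas parameters are my
naming, not an existing API):

  function drawSprite(ctx, atlas, sx, sy, sw, sh, dx, dy, dw, dh) {
    var tmp = document.createElement('canvas');
    tmp.width = sw;
    tmp.height = sh;
    // 1:1 copy, no scaling, so nothing outside the source rect is sampled
    tmp.getContext('2d').drawImage(atlas, sx, sy, sw, sh, 0, 0, sw, sh);
    // the scaled draw now clamps to the edges of tmp, not the atlas
    ctx.drawImage(tmp, 0, 0, sw, sh, dx, dy, dw, dh);
  }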

1:
http://www.whatwg.org/specs/web-apps/current-work/multipage/the-canvas-element.html#dom-context-2d-drawimage


Re: [whatwg] addPath and CanvasPathMethods

2014-03-21 Thread Rik Cabanier
On Fri, Mar 21, 2014 at 11:52 AM, Joe Gregorio jcgrego...@google.com wrote:


 On Fri, Mar 21, 2014 at 12:17 AM, Rik Cabanier caban...@gmail.com wrote:

 On Thu, Mar 20, 2014 at 4:24 PM, Robert O'Callahan rob...@ocallahan.org
 wrote:
  An implementation can turn #2 into #1 if the paths obviously don't
  overlap. If they might overlap, the author probably shouldn't be doing
 the
  latter!
 

 For this reason I don't particularly like the addPath API. It tricks
 authors into thinking that they will get a union of the paths.


  TBH I don't see why authors would choose the latter approach.
 

 I agree.
 'addPath' will always cause the generation of a new path so if an author
 chooses this strange approach, the retessellation will have to happen
 again
 anyway.


 Agreed, the speed of Path2D comes from creating the object and then
 reusing it multiple
 times via fill, stroke or clip. Adding addPath to CRC2D would seem to
 undermine
 that and encourage non-optimal uses.

 I've wondered at times about an API that made the const-ness of paths even
 more explicit:

   var b = new Path2DBuilder();
   b.moveTo(...);
   b.ellipseTo(...);
   var path = b.getPath2D();

 Where path has no attributes or methods.


Agreed. That is what my shape proposal is: an immutable representation of a
filled/stroked region.



 But then I think that looks too much like Java


:-)


Re: [whatwg] Proposal: change 2D canvas currentTransform to getter method

2014-03-21 Thread Rik Cabanier
On Thu, Mar 20, 2014 at 11:18 AM, Simon Sarris simon.sar...@gmail.com wrote:

 On Thu, Mar 20, 2014 at 1:52 PM, Justin Novosad ju...@google.com wrote:

  Hello all,
 
  The recently added currentTransform attribute on CanvasRenderingContext2D
  gives shared access to the rendering context's transform. By shared, I
  mean:
 
  a) this code modifies the CTM:
  var matrix = context.currentTransform;
  matrix.a = 2;
 
  b) In this code, the second line modifies matrix:
  var matrix = context.currentTransform;
  context.scale(2, 2);
 
  This behavior is probably not what most developers would expect.
  I would like to propose changing this to a getter method instead.  We
  already have a setter method (setTransform).
 
  In another thread entitled Canvas Path.addPath SVGMatrix not optimal,
  Dirk Schulze proposed using the name getCTM, which would be consistent
 with
  the SVGLocatable interface, where getCTM returns an SVGMatrix. On the
 other
  hand, we could call it getTransform to be consistent with the existing
  setTransform on CRC2D. Opinions? Perhaps we should also have an overload
 of
  setTransform (or setCTM) that would take an SVGMatrix.
 
  First of all, have any browsers shipped currentTransform yet?
 
  Thoughts?
 
  -Justin Novosad


 FF (at least Aurora/Nightlies) has for some time had mozCurrentTransform
 (and mozCurrentTransformInverse), which return an Array (so not
 spec-compliant, since spec wants SVGMatrix). It is not shared, so it does
 not do what your a) and b) examples do.

 I agree that changing it to a getter method would be better, it would be
 more intuitive and clear for developers.


Looking over this thread, getTransform gets the most support.
We could add the following methods:

SVGMatrix getTransform();

void setTransform(SVGMatrix);
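
Usage would then look something like this (a sketch; the getter returns a
snapshot rather than a live object, and SVGMatrix was the matrix type under
discussion at the time):

  var m = ctx.getTransform(); // a copy of the CTM, not a live reference
  ctx.scale(2, 2);            // does not affect m
  ctx.setTransform(m);        // restore the saved transform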


Re: [whatwg] Bicubic filtering on context.drawImage

2014-03-21 Thread Rik Cabanier
Hi Katelyn,

Would this be solved by creating a list of resampling methods that are clearly
defined in the spec?
Do you have a list in mind?


On Sat, Mar 15, 2014 at 4:14 AM, K. Gadd k...@luminance.org wrote:

 In game scenarios it is sometimes necessary to have explicit control
 over the filtering mechanism used, too. My HTML5 ports of old games
 all have severe rendering issues in every modern browser because of
 changes they made to canvas semantics - using filtering when not
 requested by the game, sampling outside of texture rectangles as a
 result of filtering


Can you give an example of when that sampling happens?


 , etc - imageSmoothingEnabled doesn't go far enough
 here, and I am sure there are applications that would break if
 bilinear was suddenly replaced with bicubic, or bicubic was replaced
 with lanczos, or whatever. This matters since some applications may be
 using getImageData to sample the result of a scaled drawImage and
 changing the scaling algorithm can change the data they get.

 One example I can think of is that many games bilinear scale a tiny
 (2-16 pixel wide) image to get a large, detailed gradient (since
 bilinear cleanly interpolates the endpoints). If you swap to another
 algorithm the gradient may end up no longer being linear, and the
 results would change dramatically.

 On Fri, Mar 14, 2014 at 1:45 PM, Simon Sarris sar...@acm.org wrote:
  On Fri, Mar 14, 2014 at 2:40 PM, Justin Novosad ju...@google.com
 wrote:
 
 
  Yes, and if we fixed it to make it prettier, people would complain
 about a
  performance regression. It is impossible to make everyone happy right
 now.
  Would be nice to have some kind of speed versus quality hint.
 
 
  As a canvas/web author (not vendor) I agree with Justin. Quality is very
  important for some canvas apps (image viewers/editors), performance is
 very
  important for others (games).
 
  Canvas fills a lot of roles, and leaving a decision like that up to
  browsers forces them to pick one or the other in a utility
  dichotomy. I don't think it's a good thing to leave debatable choices up
  to browser vendors. It ought to be something solved at the spec level.
 
  Either that or end users/programmers need to get really lucky and hope
 all
  the browsers pick a similar method, because the alternative is a
  (admittedly soft) version of "This site/webapp best viewed in Netscape
  Navigator".
 
  Simon Sarris



Re: [whatwg] Canvas Path.addPath SVGMatrix not optimal?

2014-03-20 Thread Rik Cabanier
On Thu, Mar 20, 2014 at 7:01 AM, Justin Novosad ju...@google.com wrote:




 On Wed, Mar 19, 2014 at 5:47 PM, Rik Cabanier caban...@gmail.com wrote:


 On Wed, Mar 19, 2014 at 2:22 PM, Justin Novosad ju...@google.com wrote:


 I agree it should be optional, but just to play devil's advocate : you
 can
 create an identity SVGMatrix with a simpler bit of code. Just do this
 right
 after creating a canvas rendering context: var identity =
 context.currentTransform;


 Hi Justin,

 did Blink already expose this property?


 It is implemented, but not exposed (hidden behind the experimental canvas
 features flag)


 As currently specified, this must return a live SVGMatrix object, meaning
 that as you change the CTM on the 2d context, your reference to the
 SVGMatrix should change as well. [1]


 D'oh! I totally missed that when I reviewed the implementation. In fact,
 the implementer even went to great lengths to ensure the opposite behavior
 (making a copy).
 https://codereview.chromium.org/24233004
 I'll make sure that gets fixed.


By "fixed", do you mean you will return a reference or change the name of
the API? :-)


 It's unlikely that you actually want this...
 This API should be renamed to get/setCurrentTransform() and return a copy.


 Yes, making a copy felt like the most desirable behavior, so I did not
 think twice about the fact that the implementation performs a copy.
 Anyways, thanks for catching this.



Re: [whatwg] [Canvas] Behavior on non-invertable CTM

2014-03-20 Thread Rik Cabanier
On Mon, Mar 17, 2014 at 2:41 PM, Rik Cabanier caban...@gmail.com wrote:




 On Mon, Mar 17, 2014 at 2:30 PM, Justin Novosad ju...@google.com wrote:




 On Mon, Mar 17, 2014 at 2:18 PM, Rik Cabanier caban...@gmail.com wrote:




 On Mon, Mar 17, 2014 at 1:47 PM, Justin Novosad ju...@google.comwrote:





 I have a fix in flight that fixes that problem in Blink by storing
 the current path in transformed coordinates instead. I've had the fix on
 the back burner pending the outcome of this thread.


 That seems like an expensive solution because this causes the
 coordinates to be transformed twice.
 Why not store the matrix that was applied to the path coordinates and
 use that to undo the transformation?


 Dirk and I looked over the WebKit code and it's actually already doing
 this.


 Good, maybe that can be the reference then.




 If we decide that the right thing is to do nothing when when the CTM is
 non-invertible, then sure, we can just do that. The idea of storing the
 current path in transformed coordinates was to also support drawing with a
 non-invertible CTM, like Firefox does, which is what Ian stated was the
 correct behavior earlier in this thread.


 yeah, but then FF bails at draw time anyway.


 Only if the CTM is still non-invertible at draw time.  If the CTM was
 transiently non-invertible during the path construction, FF produces
 results consistent with applying the transform to the points used to
 construct the path, which is technically compliant with the current wording
 of the spec.


 That's correct. If someone did this in Firefox:

 ctx.setTransform(1,1,1,1,0,0);

 ctx.moveTo(0,0);

 ctx.lineTo(10,0);

 ctx.setTransform(1,0,0,1,0,0);

 ctx.fill();

 the end result would be a line from (0,0) to (10, 10). (IE does this as
 well).
 Nothing draws in Safari and Chrome currently.


It would be great if we could get clarification on this.
Firefox and IE are conformant per the spec when it comes to drawing paths
but not fill/stroke/clip. Supporting this small edge case comes at a large
cost in Firefox and likely also IE.

Many APIs in canvas are running into this issue, which results in a lack of
interoperability.


[whatwg] addPath and CanvasPathMethods

2014-03-20 Thread Rik Cabanier
addPath is currently defined on the Path2D object. [1]
Is there a reason why it's not defined on CanvasPathMethods instead? That
way this method is available on the 2d context so you can append a path to
the current graphics state.

This would also negate the need for setCurrentPath.

1:
http://www.whatwg.org/specs/web-apps/current-work/multipage/the-canvas-element.html#dom-path-addpath
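
For reference, a sketch of the shipped method on Path2D next to the
extension suggested here (ctx.addPath is hypothetical and did not ship):

  var square = new Path2D('M0 0 H10 V10 H0 Z'); // SVG path-data constructor
  var p = new Path2D();
  p.addPath(square);      // shipped: append one Path2D to another
  // ctx.addPath(square); // proposed: append to the current default path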


Re: [whatwg] Proposal: change 2D canvas currentTransform to getter method

2014-03-20 Thread Rik Cabanier
On Thu, Mar 20, 2014 at 10:52 AM, Justin Novosad ju...@google.com wrote:

 Hello all,

 The recently added currentTransform attribute on CanvasRenderingContext2D
 gives shared access to the rendering context's transform. By shared, I
 mean:

 a) this code modifies the CTM:
 var matrix = context.currentTransform;
 matrix.a = 2;

 b) In this code, the second line modifies matrix:
 var matrix = context.currentTransform;
 context.scale(2, 2);

 This behavior is probably not what most developers would expect.
 I would like to propose changing this to a getter method instead.  We
 already have a setter method (setTransform).

 In another thread entitled Canvas Path.addPath SVGMatrix not optimal,
 Dirk Schulze proposed using the name getCTM, which would be consistent with
 the SVGLocatable interface, where getCTM returns an SVGMatrix. On the other
 hand, we could call it getTransform to be consistent with the existing
 setTransform on CRC2D. Opinions?


getCTM is nice and short and matches SVG. getCurrentTransform is more clear
so I'm fine with either one.

Since setTransform takes 6 arguments, it would be a bit strange that
getTransform would return an SVGMatrix.


 Perhaps we should also have an overload of
 setTransform (or setCTM) that would take an SVGMatrix


yes, it would be nice to have a transform/setTransform with that signature


 First of all, have any browsers shipped currentTransform yet?


No.


 Thoughts?

 -Justin Novosad



Re: [whatwg] addPath and CanvasPathMethods

2014-03-20 Thread Rik Cabanier
On Thu, Mar 20, 2014 at 12:15 PM, Justin Novosad ju...@google.com wrote:

 Sorry for the confusion, the point I was trying to make was unrelated to
 the CTM question (almost). My point is that the tessellation of a path is
 something that can be cached in a Path2D object.


Path2D does not contain the winding rule or a hint that it will be used for
fill/clip or stroking. I'm unsure if it gives you enough information to
cache tessellation before you call a marking operation.


 If you do this, you can take advantage of the cached tessellation:
 (apply tranform 1 to ctx)
 ctx.fill(path1)
 (apply tranform 2 to ctx)
 ctx.fill(path2)

 If you do it this way, the aggregated path needs to be re-tessellated each
 time because the winding rule would need to be re-applied:
 (apply tranform 1 to ctx)
 ctx.addPath(path1)
 (apply tranform 2 to ctx)
 ctx.addPath(path2)
 ctx.fill();

 Technically, these two ways of drawing are not equivalent (depends on
 compositing mode, transparency, and winding rule, overlaps between paths),
 but they can be used to achieve similar things.  Nonetheless the second way
 is detrimental to performance, and we'd be encouraging it by providing an
 addPath method on the context.  Besides, if the dev really needs to add
 paths together, it can be done inside an intermediate path object.


How would using intermediate path objects not be detrimental to performance?

I do not like the addPath method myself, but if it's offered on Path2D,
it's reasonable to have it on the context.
I was experimenting with porting canvg to use the Path2D object, and the
implementation would be much cleaner if there were a way to set the path in
the graphics state.
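
A sketch of the reuse pattern whose performance is being discussed here
(transforms is an assumed list of {a,b,c,d,e,f} records):

  var p = new Path2D();
  p.rect(0, 0, 10, 10);            // build the path once
  for (var i = 0; i < transforms.length; i++) {
    var t = transforms[i];
    ctx.setTransform(t.a, t.b, t.c, t.d, t.e, t.f);
    ctx.fill(p);                   // the engine may reuse p's tessellation
  }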


 On Thu, Mar 20, 2014 at 2:52 PM, Dirk Schulze dschu...@adobe.com wrote:


 On Mar 20, 2014, at 7:44 PM, Justin Novosad ju...@google.com wrote:

  This would apply the CTM to the incoming path, correct?  I am a little
 bit concerned that this API could end up being used in ways that would
 cancel some of the performance benefits (internal caching opportunities) of
 Path2D objects.

 How is this different from fill(Path2D), stroke(Path2D) and clip(Path2D)?
 The path will always need to be transformed by the CTM. Graphics libraries
 usually do this for you already. The addPath() proposal is no different.

 Greetings,
 Dirk

 
 
  On Thu, Mar 20, 2014 at 1:49 PM, Dirk Schulze dschu...@adobe.com
 wrote:
  On Mar 20, 2014, at 6:31 PM, Rik Cabanier caban...@gmail.com wrote:
 
   addPath is currently defined on the Path2D object. [1]
   Is there a reason why it's not defined on CanvasPathMethods instead?
 That
   way this method is available on the 2d context so you can append a
 path to
   the current graphics state.
  
   This would also negate the need for setCurrentPath.
 
  I am supportive of this idea! I agree that this would address one of the
 reasons why I came up with currentPath for WebKit in the first place.
 
  Greetings,
  Dirk
 
 
  
   1:
  
 http://www.whatwg.org/specs/web-apps/current-work/multipage/the-canvas-element.html#dom-path-addpath
 
 





Re: [whatwg] addPath and CanvasPathMethods

2014-03-20 Thread Rik Cabanier
On Thu, Mar 20, 2014 at 4:24 PM, Robert O'Callahan rob...@ocallahan.org wrote:

 On Fri, Mar 21, 2014 at 8:15 AM, Justin Novosad ju...@google.com wrote:

 Sorry for the confusion, the point I was trying to make was unrelated to
 the CTM question (almost). My point is that the tesselation of a path is
 something that can be cached in a Path2D object.

 If you do this, you can take advantage of the cached tessellation:
 (apply tranform 1 to ctx)
 ctx.fill(path1)
 (apply tranform 2 to ctx)
 ctx.fill(path2)

 If you do it this way, the aggregated path needs to be re-tessellated each
 time because the winding rule would need to be re-applied:
 (apply tranform 1 to ctx)
 ctx.addPath(path1)
 (apply tranform 2 to ctx)
 ctx.addPath(path2)
 ctx.fill();

 Technically, these two ways of drawing are not equivalent (depending on
 the compositing mode, transparency, winding rule, and overlaps between
 paths), but they can be used to achieve similar things. Nonetheless, the
 second way is detrimental to performance, and we'd be encouraging it by
 providing an addPath method on the context. Besides, if the dev really
 needs to add paths together, it can be done inside an intermediate path
 object.


 An implementation can turn #2 into #1 if the paths obviously don't
 overlap. If they might overlap, the author probably shouldn't be doing the
 latter!


For this reason I don't particularly like the addPath API. It tricks
authors into thinking that they will get a union of the paths.


 TBH I don't see why authors would choose the latter approach.


I agree.
'addPath' will always cause the generation of a new path, so if an author
chooses this strange approach, the re-tessellation will have to happen again
anyway.
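
For reference, a concrete version of the two snippets above (the paths and
transforms are illustrative, and addPath on the context is only a proposal
here, not a shipped API):

// #1: each Path2D's cached tessellation can be reused
ctx.setTransform(1, 0, 0, 1, 20, 0); // transform 1
ctx.fill(path1);
ctx.setTransform(2, 0, 0, 2, 0, 0);  // transform 2
ctx.fill(path2);

// #2: the aggregate path must be re-tessellated at fill time
ctx.setTransform(1, 0, 0, 1, 20, 0);
ctx.addPath(path1); // proposed API
ctx.setTransform(2, 0, 0, 2, 0, 0);
ctx.addPath(path2);
ctx.fill();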


Re: [whatwg] Canvas Path.addPath SVGMatrix not optimal?

2014-03-19 Thread Rik Cabanier
On Wed, Mar 19, 2014 at 2:22 PM, Justin Novosad ju...@google.com wrote:

 On Wed, Mar 19, 2014 at 4:46 PM, Dirk Schulze dschu...@adobe.com wrote:

  Hi,
 
  I just looked at the definition of Path.addPath[1]:
 
  void addPath(Path path, SVGMatrix? transformation);
 
  SVGMatrix is nullable but cannot be omitted altogether. Why isn't it
  optional as well? I think it should be optional, especially because
  creating an SVGMatrix at the moment means writing:
 
  var matrix = document.createElementNS('http://www.w3.org/2000/svg', 'svg').createSVGMatrix();
 
  Greetings,
  Dirk
 
  [1]
 
 http://www.whatwg.org/specs/web-apps/current-work/multipage/the-canvas-element.html#path


 I agree it should be optional, but just to play devil's advocate : you can
 create an identity SVGMatrix with a simpler bit of code. Just do this right
 after creating a canvas rendering context: var identity =
 context.currentTransform;


Hi Justin,

Did Blink already expose this property?
As currently specified, this must return a live SVGMatrix object, meaning
that as you change the CTM on the 2d context, your reference to the
SVGMatrix should change as well. [1]

It's unlikely that you actually want this...
This API should be renamed to get/setCurrentTransform() and return a copy.

1:
http://www.whatwg.org/specs/web-apps/current-work/multipage/the-canvas-element.html#dom-context-2d-currenttransform
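
A short sketch of why a live matrix is surprising (the identifier m is
illustrative):

var m = ctx.currentTransform; // live SVGMatrix per the spec text
ctx.scale(2, 2);
// m now reflects the scaled CTM even though it was read before scale();
// a get/setCurrentTransform() pair returning a copy would avoid this.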


Re: [whatwg] Support filters in Canvas

2014-03-19 Thread Rik Cabanier
On Sat, Mar 15, 2014 at 12:03 AM, Dirk Schulze dschu...@adobe.com wrote:

 Hi,

 Apologies if this was already discussed, but I couldn't find a mail on this
 topic.


Yes, this was brought up a couple of times.
Last exchange was in Sept of 2012 (!):
http://lists.whatwg.org/pipermail/whatwg-whatwg.org/2012-September/037432.html


 In one of the early drafts of Filter Effects we had filter operation
 support for HTML Canvas. We deferred it and IMO it makes more sense to have
 it as part of the Canvas specification text.

 I would suggest a filter attribute that takes a list of filter operations
 similar to the CSS Image filter function[1]. Similar to shadows[2], each
 drawing operation would be filtered. The API looks like this:

 partial interface CanvasRenderingContext2D {
     attribute DOMString filter;
 };

 A filter DOMString could look like: "contrast(50%) blur(3px)"

 With the combination of grouping in canvas[3] it would be possible to
 group drawing operations and filter them together.

 Filter functions include a reference to a filter element and a
 specification of SVG filters[4]. I am unsure if a reference to an element
 within a document can cause problems. If it does, we would just not support
 SVG filter references.


Yes, I'd prefer if we supported just the CSS filter shorthands for now.
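
For illustration, usage of the proposed attribute might look like this
(the filter values are illustrative):

ctx.filter = "contrast(50%) blur(3px)";
ctx.fillRect(10, 10, 100, 100); // drawn with the filter applied
ctx.filter = "none";            // later drawing is unfiltered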


Re: [whatwg] [Canvas] Behavior on non-invertable CTM

2014-03-17 Thread Rik Cabanier
On Mon, Mar 17, 2014 at 8:45 AM, Justin Novosad ju...@google.com wrote:

 On Mon, Mar 17, 2014 at 11:35 AM, Dirk Schulze dschu...@adobe.com wrote:

 
   Hmmm, I gave this a bit more thought...  To apply the construction
   algorithm in transformed space, the ellipse parameters (radiusX,
 radiusY,
   rotation) would have to be transformed. Transforming the parameters
 would
   be intractable under a projective transform (e.g. perspective), but
 since
   we are limited to affine transforms, it can be done.  Now, in the case
  of
   a non-invertible CTM, we would end up with radiusX or radiusY or both
  equal
   to zero.  And what happens when you have that?  Your arcTo just turned
  into
   lineTo(x1, y1). Tada!
 
  Why does radiusX or radiusY need to be zero? Because you define it that
  way for a non-invertible matrix? That makes sense for scale(0,0). What
  about infinity or NaN? If Ian didn't update the spec then this is still
  undefined and therefore up to the UA to decide.
 
 
 Oh yeah, I was totally forgetting about singularities caused by non-finite
  values.  Could we all the same agree to resolve that case by treating
 arcTo as lineTo(x1, y1) in the case of a non-invertible CTM?  Or do you
 think there is a more logical thing to do?


Make a clean cut and define that drawing operators are ignored when there's
a non-invertible matrix.


Re: [whatwg] [Canvas] Behavior on non-invertable CTM

2014-03-17 Thread Rik Cabanier
On Mon, Mar 17, 2014 at 10:18 AM, Justin Novosad ju...@google.com wrote:




 On Mon, Mar 17, 2014 at 12:59 PM, Rik Cabanier caban...@gmail.com wrote:




 On Mon, Mar 17, 2014 at 8:45 AM, Justin Novosad ju...@google.com wrote:

 On Mon, Mar 17, 2014 at 11:35 AM, Dirk Schulze dschu...@adobe.com
 wrote:

 
   Hmmm, I gave this a bit more thought...  To apply the construction
   algorithm in transformed space, the ellipse parameters (radiusX,
 radiusY,
   rotation) would have to be transformed. Transforming the parameters
 would
   be intractable under a projective transform (e.g. perspective), but
 since
   we are limited to affine transforms, it can be done.  Now, in the
 case
  of
   a non-invertible CTM, we would end up with radiusX or radiusY or both
  equal
   to zero.  And what happens when you have that?  Your arcTo just
 turned
  into
   lineTo(x1, y1). Tada!
 
  Why does radiusX or radiusY need to be zero? Because you define it that
  way for a non-invertible matrix? That makes sense for scale(0,0). What
  about infinity or NaN? If Ian didn't update the spec then this is still
  undefined and therefore up to the UA to decide.
 
 
 Oh yeah, I was totally forgetting about singularities caused by
 non-finite
 values.  Could we all the same agree to resolve that case by treating
 arcTo as lineTo(x1, y1) in the case of a non-invertible CTM?  Or do you
 think there is a more logical thing to do?


 Make a clean cut and define that drawing operators are ignored when
 there's a non-invertible matrix.

 I could totally go for that, but you are talking about going back on the
 spec of a feature that has shipped, as opposed to clarifying edge cases.
 Maybe that would be fine in this case though...


I'm unsure if anyone has shipped that part of the spec. There's certainly
no interop...

Looking at the implementation in Blink and WebKit, all of the drawing
methods and fill/stroke/clip start with:

if (!isTransformInvertible())
    return;


At first glance, Firefox seems to do what the spec says (which results in
slow double transforming of all coordinates) but then they punt as well:

Matrix inverse = mTarget->GetTransform();
if (!inverse.Invert()) {
    NS_WARNING("Could not invert transform");
    return;
}


So, what we could say is:
- when drawing paths, ignore all calls if the matrix is non-invertible
(WebKit and Blink do this)
- when filling/stroking/clipping, ignore all calls if the matrix is
non-invertible (Firefox, WebKit and Blink do this)
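
For reference, the check the engines perform amounts to a determinant test
on the 2x2 part of the CTM (a sketch in JS, not the actual engine code):

// For a CTM [a c e; b d f], only a*d - b*c matters for invertibility.
function isTransformInvertible(a, b, c, d) {
  var det = a * d - b * c;
  return det !== 0 && isFinite(det);
}

isTransformInvertible(0, 0, 0, 0); // false, e.g. after ctx.scale(0, 0)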


Re: [whatwg] [Canvas] Behavior on non-invertable CTM

2014-03-17 Thread Rik Cabanier
On Mon, Mar 17, 2014 at 1:23 PM, Justin Novosad ju...@google.com wrote:




 On Mon, Mar 17, 2014 at 2:06 PM, Rik Cabanier caban...@gmail.com wrote:





 Make a clean cut and define that drawing operators are ignored when
 there's a non-invertible matrix.

 I could totally go for that, but you are talking about going back on
  the spec of a feature that has shipped, as opposed to clarifying edge
 cases. Maybe that would be fine in this case though...


 I'm unsure if anyone has shipped that part of the spec. There's certainly
 no interop...


  Plenty of browsers have shipped drawing paths to canvas. I agree about the
  no-interop part. It is the main reason I think it may still be acceptable
 to redefine the spec.


Sure, but no one implemented transforming of the path as the spec
describes. At the time the drawing operation happens, all browsers hold the
path in the local space of the current CTM.


 Looking at the implementation in Blink and WebKit, all of the drawing
 methods and fill/stroke/clip start with:

 if (!isTransformInvertible())
     return;


 At first glance, Firefox seems to do what the spec says (which results in
 slow double transforming of all coordinates) but then they punt as well:

 Matrix inverse = mTarget->GetTransform();
 if (!inverse.Invert()) {
     NS_WARNING("Could not invert transform");
     return;
 }


 So, what we could say is:
 - when drawing paths, ignore all calls if the matrix is non-invertible
 (WebKit and Blink do this)
 - when filling/stroking/clipping, ignore all calls if the matrix is
 non-invertible (Firefox, WebKit and Blink do this)


 Yes, but there is still an issue that causes problems in Blink/WebKit:
 because the canvas rendering context stores its path in local
 (untransformed) space, whenever the CTM changes, the path needs to be
  transformed to follow the new local space.  This transform requires the CTM
 to be invertible. So now webkit and blink have a bug that causes all
 previously recorded parts of the current path to be discarded when the CTM
 becomes non-invertible (even if it is only temporarily non-invertible, even
 if the current path is not even touched while the matrix is
 non-invertible).


This is something the Blink team introduced after they branched.
WebKit doesn't do this flagging, so if a non-invertible matrix is later
reset, the old path will still be around.


 I have a fix in flight that fixes that problem in Blink by storing the
 current path in transformed coordinates instead. I've had the fix on the
 back burner pending the outcome of this thread.


That seems like an expensive solution because this causes the coordinates
to be transformed twice.
Why not store the matrix that was applied to the path coordinates and use
that to undo the transformation?
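
A sketch of that bookkeeping (a hypothetical helper; DOMMatrix and Path2D
are used for brevity, though the thread predates DOMMatrix):

class TrackedPath {
  constructor() {
    this.path = new Path2D();       // points stored in transformed space
    this.applied = new DOMMatrix(); // CTM in effect when points were added
  }
  // Re-express the stored points in a new space: undo the old transform
  // and apply the new one with a single combined matrix.
  retarget(newCTM) {
    var combined = newCTM.multiply(this.applied.inverse());
    var p = new Path2D();
    p.addPath(this.path, combined);
    this.path = p;
    this.applied = DOMMatrix.fromMatrix(newCTM);
  }
}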


Re: [whatwg] [Canvas] Behavior on non-invertable CTM

2014-03-17 Thread Rik Cabanier
On Mon, Mar 17, 2014 at 1:47 PM, Justin Novosad ju...@google.com wrote:





 I have a fix in flight that fixes that problem in Blink by storing the
 current path in transformed coordinates instead. I've had the fix on the
 back burner pending the outcome of this thread.


 That seems like an expensive solution because this causes the coordinates
 to be transformed twice.
 Why not store the matrix that was applied to the path coordinates and use
 that to undo the transformation?


Dirk and I looked over the WebKit code and it's actually already doing this.


 If we decide that the right thing is to do nothing when when the CTM is
 non-invertible, then sure, we can just do that. The idea of storing the
 current path in transformed coordinates was to also support drawing with a
 non-invertible CTM, like Firefox does, which is what Ian stated was the
 correct behavior earlier in this thread.


Yeah, but then FF bails at draw time anyway.
IMO no author relies on the behavior when there's a non-invertible matrix,
so we should just implement the simplest and most efficient solution.


 See why I kept the fix on the back burner? :-)


:-P yes


Re: [whatwg] [Canvas] Behavior on non-invertable CTM

2014-03-17 Thread Rik Cabanier
On Mon, Mar 17, 2014 at 2:30 PM, Justin Novosad ju...@google.com wrote:




 On Mon, Mar 17, 2014 at 2:18 PM, Rik Cabanier caban...@gmail.com wrote:




 On Mon, Mar 17, 2014 at 1:47 PM, Justin Novosad ju...@google.com wrote:





 I have a fix in flight that fixes that problem in Blink by storing the
 current path in transformed coordinates instead. I've had the fix on the
 back burner pending the outcome of this thread.


 That seems like an expensive solution because this causes the
 coordinates to be transformed twice.
 Why not store the matrix that was applied to the path coordinates and
 use that to undo the transformation?


 Dirk and I looked over the WebKit code and it's actually already doing
 this.


 Good, maybe that can be the reference then.




 If we decide that the right thing is to do nothing when when the CTM is
 non-invertible, then sure, we can just do that. The idea of storing the
 current path in transformed coordinates was to also support drawing with a
 non-invertible CTM, like Firefox does, which is what Ian stated was the
 correct behavior earlier in this thread.


  Yeah, but then FF bails at draw time anyway.


 Only if the CTM is still non-invertible at draw time.  If the CTM was
 transiently non-invertible during the path construction, FF produces
 results consistent with applying the transform to the points used to
 construct the path, which is technically compliant with the current wording
 of the spec.


That's correct. If someone did this in Firefox:

ctx.setTransform(1, 1, 1, 1, 0, 0); // singular matrix (determinant is 0)
ctx.moveTo(0, 0);
ctx.lineTo(10, 0);
ctx.setTransform(1, 0, 0, 1, 0, 0); // back to the identity
ctx.fill();

the end result would be a line from (0,0) to (10,10). (IE does this as
well.)
Nothing draws in Safari and Chrome currently.
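
(For reference: setTransform(a, b, c, d, e, f) maps a point (x, y) to
(a*x + c*y + e, b*x + d*y + f), so (10, 0) under (1, 1, 1, 1, 0, 0) lands
at (10, 10).)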


Re: [whatwg] Canvas and Paths

2014-03-17 Thread Rik Cabanier

 On Mon, 10 Mar 2014, Tab Atkins Jr. wrote:
 
  This is also my question.  Given that generating a path for a particular
  context isn't a magic bullet *anyway* (because the details of the
  context can change), I don't understand why caching isn't the answer.

 On Mon, 10 Mar 2014, Rik Cabanier wrote:
 
  At usage time, the path could be retargeted to a new backend.

 If the backend changes, knowing the backend at creation time doesn't help.

 If it doesn't, then the cost seems to be the same either way.


  I don't think that should be done as a cached copy since that would
  require too many resources. I will see if this is an acceptable solution
  for Mozilla.

 How many resources could a path possibly take?


 On Mon, 10 Mar 2014, Justin Novosad wrote:
 
  Isn't caching ideal for that situation? In the case of re-targeting, you
  can either replace the cached encoding, or append the new encoding to a
  collection of cached encodings.  Both of those options seem more
  effective than to stick to an encoding type that was baked-in at
  construction time. It may also be great to have a heuristic to choose
  whether to discard the previously cached re-encoding. Something like: if
  we are re-encoding because the destination backing type changed due to a
  resize, then discard previous encodings; if re-encoding because the path
  is drawn to multiple canvases, then retain multiple cached encodings.

 That makes sense to me.


FYI:
The Firefox people agreed to a solution that retargets the path if its
backend doesn't match the canvas context's backend.
There's no need to change the current API.


Re: [whatwg] Grouping in canvas 2d

2014-03-14 Thread Rik Cabanier
On Fri, Mar 14, 2014 at 11:09 AM, Ian Hickson i...@hixie.ch wrote:

 On Wed, 4 Dec 2013, Jürg Lehni wrote:
 
  Implementing [layering/grouping] would help us greatly to optimize
  aspects of Paper.js, as double buffering into separate canvases is very
  slow and costly.

 Can you elaborate on what precisely the performance bottleneck is? I was
 looking through this thread but I can't find a description of the use
 cases it addresses, so it's hard to evaluate the proposals.


Let's say you're drawing a scene and there is a bunch of artwork that you
want to apply a multiply effect or opacity to.
With today's code, it would look something like this:

var bigcanvas = document.getElementById("c");
var ctx = bigcanvas.getContext("2d");
// ... draw the underlying scene ...

var c = document.createElement("canvas");
ctx = c.getContext("2d");
// ... draw the scene that needs the effect ...

ctx = bigcanvas.getContext("2d");
ctx.globalAlpha = 0.5; // globalAlpha is a property, not a method
ctx.drawImage(c, 0, 0);

With layers, it would become:

var bigcanvas = document.getElementById("c");
var ctx = bigcanvas.getContext("2d");
// ... draw the underlying scene ...

ctx.globalAlpha = 0.5;
ctx.beginLayer(); // proposed API
// ... draw the scene that needs the effect ...
ctx.endLayer();

So, with layers you
- avoid creating (expensive) DOM elements
- simplify the drawing (especially when there's a transformation)


Re: [whatwg] Questions regarding Path object

2014-03-14 Thread Rik Cabanier
On Wed, Dec 4, 2013 at 5:18 PM, Rik Cabanier caban...@gmail.com wrote:




 On Wed, Dec 4, 2013 at 11:10 AM, Jürg Lehni li...@scratchdisk.com wrote:

  I somehow managed to overlook all the things that happened in this
  discussion, but I'm very happy to see that Path2D is being proposed and
  agreed on now. It's also what I originally suggested on April 10 this
 year, and I completely agree that it leaves much less doubt about its
 functionality and context of use. It also has a history as a term in Java2D:

 http://docs.oracle.com/javase/7/docs/api/java/awt/geom/Path2D.html

 So is this going through?


 Yes, all that needs to happen is for someone to implement this :-)


Path2D has now landed in Blink [1]. Blink also implemented the 'addPath'
method.
WebKit just landed a patch to rename Path to Path2D, remove currentPath and
add fill/stroke/clip with a path [2].
A patch is under review for Firefox to add Path2D [3].

Given this, can we change the spec to reflect the new name?

1: https://codereview.chromium.org/178673002/
2: https://webkit.org/b/130236
3: https://bugzilla.mozilla.org/show_bug.cgi?id=830734



  On Nov 18, 2013, at 19:03, Elliott Sprehn espr...@gmail.com wrote:

  On Monday, November 18, 2013, Rik Cabanier wrote:
 
 
 
 
  On Wed, Nov 13, 2013 at 1:36 PM, Robert O'Callahan rob...@ocallahan.org
  wrote:
 
  On Wed, Nov 13, 2013 at 12:12 PM, Jussi Kalliokoski 
  jussi.kallioko...@gmail.com wrote:
 
  Path is also too generic even in the context of graphics. If we later on
  want to add a path object for 3-dimensional paths, you end up with Path
  and Path3D? Yay for consistency. Path2D would immediately inform what
  dimensions we're dealing with and also that this is to do with graphics,
  and thus sounds like a good name to me.
 
 
  Sounds good to me.
 
 
  Elliott,
 
  what do you think, is Path2D acceptable?
 
 
  Sounds great to me, let's do it!
 
  - E





Re: [whatwg] Canvas hit regions

2014-03-14 Thread Rik Cabanier
On Fri, Mar 14, 2014 at 4:56 PM, Ian Hickson i...@hixie.ch wrote:


 I've done some more work on the spec for event rerouting for hit regions,
 based on the feedback sent to this list.

 On Wed, 5 Mar 2014, Robert O'Callahan wrote:
  On Wed, Mar 5, 2014 at 12:53 PM, Ian Hickson i...@hixie.ch wrote:
   On Fri, 28 Feb 2014, Rik Cabanier wrote:
For instance, if the fallback is an edit control and the user
drag-selects some text on the canvas, is it expected that this text
is also selected in the edit control?
  
   You can't validly include a text field in canvas fallback precisely
   because of this kind of thing. See:
  
  http://whatwg.org/html#best-practices
 
  The question remains: what should happen in Rik's example?

 If the control is a text edit control, the event isn't rerouted. This was
 always the intention (hit regions couldn't be set for text edit controls),
 but there was a loophole before, where you could register a hit region for
 one kind of control and then change that control to be something else.
 I've adjusted the spec to close that loophole.

 Event retargeting now explicitly applies to the control represented by
 the region, which is always null if the given control is now a text
 field.


Does this change the eventTarget attribute on the event object [1]? It
doesn't seem like it does, but should it?
I'm not an expert, but it seems strange to send an event to an element with
a different eventTarget.


  More generally, is this event rerouting supposed to be able to trigger
  browser default event handling behavior, or only DOM event dispatch?

 As it was specified, I don't see how it could trigger default actions of
 anything other than the canvas and its ancestors. The canvas hook ran in
 the middle of the "when a pointing device is clicked, the user agent must
 run these steps" algorithm, which refers to the original target, not the
 rerouted target.

 I've now changed this so that it does in fact trigger the default action
 if applicable.


This will still just reroute events, right?
For instance, if the fallback element is an <a href=...>, will clicking on
the region cause the browser to follow the hyperlink?


 On Wed, 5 Mar 2014, Robert O'Callahan wrote:
 
  The problem is that if these retargeted events can trigger default
  browser behavior, the browser has to be able to compute the position of
  the event relative to the new target DOM node, and it's not clear how to
  do that.

 I've made it explicit that the elements that can get clicks targeted to
 them only include elements that don't have subregions. In particular,
 image maps and image buttons are excluded.


Thanks for updating the spec. It's getting quite complex though :-(
Maybe it's simpler to just add the id to the event and leave the canvas
element as the target? Since this is not a major feature, the complexity
might stop implementors.
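
A sketch of that simpler model (the region field on the event is the
suggestion here, not a settled API; the id is illustrative):

canvas.addEventListener("click", function (e) {
  // the canvas stays the event target; the hit region id is just data
  if (e.region === "save-button") {
    save(); // illustrative handler
  }
});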


 On Tue, 4 Mar 2014, Rik Cabanier wrote:
  On Tue, Mar 4, 2014 at 8:30 PM, Ian Hickson i...@hixie.ch wrote:
   On Tue, 4 Mar 2014, Rik Cabanier wrote:

 So what would you do in the case where you start two touches on
 different regions, then move them at the same time to two other
  different regions? What would you put in the
 touchmove event's object?
 
  The touches attribute [1] of the touch event would contain 2 touch
  elements.
 
  Each touch element would have as target the canvas element and contain
  the id of the hit region.

 Oh, so it's not the TouchEvent object you think should be adjusted, but the
 Touch object itself? (I'm assuming that's what you are referring to when
 you say "touch element".)


yes. Thanks for changing this.



 Presumably we would just set the region at creation time, like the
 target attribute, right?


yes

I've specced this.


 On Mon, 10 Mar 2014, Rik Cabanier wrote:
 
  Currently, the specification states that if you create a region and then
  create another region that completely covers it, the first region is
  removed from the hit region list [1].
 
  This is a complex operation that involves either drawing the regions to a
  bitmap and counting pixels, or path intersection.

 There are two trivial ways to implement this, depending on whether the hit
 regions are backed by a bitmap (the simplest and fastest solution, but it
 uses a lot of memory) or a region list (slower, but much more memory
 efficient). In the case of a bitmap, you just draw on the new region, and
 the old region is no longer in the bitmap, so it's trivially gone. In the
 case of a list, you put the new region ahead of the old region so that you
 never actually get around to checking the old region.
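
For illustration, the list variant might look like this (assuming each
region carries a Path2D; all names are illustrative):

var hitRegions = [];

function addHitRegion(region) {
  hitRegions.unshift(region); // newer regions shadow older ones
}

function regionAt(ctx, x, y) {
  // first match wins, so fully covered older regions are never reached
  for (var i = 0; i < hitRegions.length; i++) {
    if (ctx.isPointInPath(hitRegions[i].path, x, y))
      return hitRegions[i];
  }
  return null;
}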


The following step still needs to run though:
http://www.whatwg.org/specs/web-apps/current-work/multipage/the-canvas-element.html#garbage-collect-the-regions

"Let victim be the first hit region in list to have an empty set of pixels
and a zero child count, if any."


If this was implemented with a bitmap, the only way to figure

  1   2   3   4   5   >