Re: [whatwg] canvas feedback

2014-05-14 Thread Jürg Lehni
On Apr 30, 2014, at 00:27 , Ian Hickson i...@hixie.ch wrote:

 On Mon, 7 Apr 2014, Jürg Lehni wrote:
 
 Well this particular case, yes. But in the same way we allow a group of 
 items to have an opacity applied to it in Paper.js, and expect it to 
 behave the same way as in SVG: The group should appear as if its children 
 were first rendered at 100% alpha and then blitted over with the desired 
 transparency.
 
 Layers would offer exactly this flexibility, and having them around 
 would make a whole lot of sense, because currently the above can only be 
 achieved by drawing into a separate canvas and blitting the result over. 
 The performance of this is really low on all browsers, a true bottleneck 
 in our library currently.
 
 It's not clear to me why it would be faster if implemented as layers. 
 Wouldn't the solution here be for browsers to make canvas-on-canvas 
 drawing faster? I mean, fundamentally, they're the same feature.

I was perhaps wrongly assuming that including layering in the API would allow 
the browser vendors to better optimize this use case. The problem with the 
current solution is that drawing a canvas into another canvas is inexplicably 
slow across all browsers. The only reason I can imagine for this is that the 
pixels are copied back and forth between the GPU and the main memory, and 
perhaps converted along the way, while they could simply stay on the GPU as 
they are only used there. But reality is probably more complicated than that.

So if the proposed API addition would allow a better optimization then I'd be 
all for it. If not, then I am wondering how I can get the vendor's attention to 
improve this particular case. It really is very slow currently, to the point 
where it doesn't make sense to use it for any sort of animation technique.

J
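The group-opacity workaround described above can be sketched as follows. This is illustrative, not Paper.js source: `drawGroupWithOpacity` is a hypothetical helper, and the canvas factory is injected rather than hard-coded to `document.createElement('canvas')` so the sketch is not tied to one environment.

```javascript
// Sketch: emulate SVG group opacity on canvas by rendering the group's
// children into a temporary canvas at full alpha, then blitting the result
// once with the group's opacity.
function drawGroupWithOpacity(ctx, width, height, opacity, drawChildren, createCanvas) {
  const tmp = createCanvas(width, height);
  const tmpCtx = tmp.getContext('2d');
  drawChildren(tmpCtx);        // children composite against each other at 100% alpha
  ctx.save();
  ctx.globalAlpha = opacity;   // the whole group fades as one unit
  ctx.drawImage(tmp, 0, 0);    // the canvas-on-canvas blit whose cost is being discussed
  ctx.restore();
}
```

In a browser, `createCanvas` would be something like `(w, h) => Object.assign(document.createElement('canvas'), { width: w, height: h })`.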







Re: [whatwg] canvas feedback

2014-05-14 Thread K. Gadd
Is it ever possible to make canvas-to-canvas blits consistently fast?
It's my understanding that browsers still make
intelligent/heuristic-based choices about which canvases to
accelerate, if any, and that it depends on the size of the canvas,
whether it's in the DOM, etc. I've had to report bugs related to this
against firefox and chrome in the past, I'm sure more exist. There's
also the scenario where you need to blit between Canvas2D canvases and
WebGL canvases - the last time I tried this, a single blit could cost
*hundreds* of milliseconds because of pipeline stalls and cpu-gpu
transfers.
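For the Canvas2D-to-WebGL direction at least, the upload need not involve an explicit pixel readback on the JS side, since `texImage2D` accepts a canvas element directly; whether the copy actually stays on the GPU is up to the browser. A sketch, assuming a WebGL context `gl` already exists:

```javascript
// Sketch: upload a 2D canvas into a WebGL texture. texImage2D takes a
// <canvas> as its source, so no getImageData()/readback is needed in JS.
function canvasToTexture(gl, srcCanvas) {
  const tex = gl.createTexture();
  gl.bindTexture(gl.TEXTURE_2D, tex);
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, srcCanvas);
  // NPOT-safe parameters: no mipmaps, clamp at the edges
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
  return tex;
}
```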

Canvas-to-canvas blits are a way to implement layering, but it seems
like making it consistently fast via canvas-canvas blits is a much
more difficult challenge than making sure that there are fast/cheap
ways to layer separate canvases at a composition stage. The latter
just requires that the browser have a good way to composite the
canvases, the former requires that various scenarios with canvases
living in CPU and GPU memory, deferred rendering queues, etc all get
resolved efficiently in order to copy bits from one place to another.
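The "let the compositor do the layering" alternative can be sketched as stacking one canvas per layer; `createLayerStack` is an illustrative helper (the document object is injected so the sketch isn't browser-bound):

```javascript
// Sketch: one <canvas> per layer, absolutely positioned on top of each other.
// The browser's compositor merges the layers, so only the layer that changed
// needs to be redrawn from JS.
function createLayerStack(container, width, height, layerCount, doc) {
  container.style.position = 'relative';
  const contexts = [];
  for (let i = 0; i < layerCount; i++) {
    const c = doc.createElement('canvas');
    c.width = width;
    c.height = height;
    c.style.position = 'absolute';
    c.style.left = '0';
    c.style.top = '0';
    c.style.zIndex = String(i);   // paint order = layer index
    container.appendChild(c);
    contexts.push(c.getContext('2d'));
  }
  return contexts;   // draw each layer on its own context; compositing is free
}
```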

(In general, I think any solution that relies on using
canvas-on-canvas drawing any time a single layer is invalidated is
suspect. The browser already has a compositing engine for this that
can efficiently update only modified subregions and knows how to cache
reusable data; re-rendering the entire surface from JS on change is
going to be a lot more expensive than that. Don't some platforms
actually have compositing/layers at the OS level, like CoreAnimation
on iOS/OSX?)

On Wed, May 14, 2014 at 6:30 AM, Jürg Lehni li...@scratchdisk.com wrote:
 On Apr 30, 2014, at 00:27 , Ian Hickson i...@hixie.ch wrote:

 On Mon, 7 Apr 2014, Jürg Lehni wrote:

 Well this particular case, yes. But in the same way we allow a group of
 items to have an opacity applied to it in Paper.js, and expect it to
 behave the same way as in SVG: The group should appear as if its children
 were first rendered at 100% alpha and then blitted over with the desired
 transparency.

 Layers would offer exactly this flexibility, and having them around
 would make a whole lot of sense, because currently the above can only be
 achieved by drawing into a separate canvas and blitting the result over.
 The performance of this is really low on all browsers, a true bottleneck
 in our library currently.

 It's not clear to me why it would be faster if implemented as layers.
 Wouldn't the solution here be for browsers to make canvas-on-canvas
 drawing faster? I mean, fundamentally, they're the same feature.

 I was perhaps wrongly assuming that including layering in the API would allow 
 the browser vendors to better optimize this use case. The problem with the 
 current solution is that drawing a canvas into another canvas is inexplicably 
 slow across all browsers. The only reason I can imagine for this is that the 
 pixels are copied back and forth between the GPU and the main memory, and 
 perhaps converted along the way, while they could simply stay on the GPU as 
 they are only used there. But reality is probably more complicated than that.

 So if the proposed API addition would allow a better optimization then I'd be 
 all for it. If not, then I am wondering how I can get the vendor's attention 
 to improve this particular case. It really is very slow currently, to the 
 point where it doesn't make sense to use it for any sort of animation 
 technique.

 J







Re: [whatwg] canvas feedback

2014-05-14 Thread Rik Cabanier
On Wed, May 14, 2014 at 6:30 AM, Jürg Lehni li...@scratchdisk.com wrote:

 On Apr 30, 2014, at 00:27 , Ian Hickson i...@hixie.ch wrote:

  On Mon, 7 Apr 2014, Jürg Lehni wrote:
 
  Well this particular case, yes. But in the same way we allow a group of
  items to have an opacity applied to it in Paper.js, and expect it to
  behave the same way as in SVG: The group should appear as if its children
  were first rendered at 100% alpha and then blitted over with the desired
  transparency.
 
  Layers would offer exactly this flexibility, and having them around
  would make a whole lot of sense, because currently the above can only be
  achieved by drawing into a separate canvas and blitting the result over.
  The performance of this is really low on all browsers, a true bottleneck
  in our library currently.
 
  It's not clear to me why it would be faster if implemented as layers.
  Wouldn't the solution here be for browsers to make canvas-on-canvas
  drawing faster? I mean, fundamentally, they're the same feature.

 I was perhaps wrongly assuming that including layering in the API would
 allow the browser vendors to better optimize this use case.


No, you are correct; having layers will make drawing more efficient as you
can make certain assumptions and you don't have to create/recycle
intermediate canvases.


 The problem with the current solution is that drawing a canvas into
 another canvas is inexplicably slow across all browsers. The only reason I
 can imagine for this is that the pixels are copied back and forth between
 the GPU and the main memory, and perhaps converted along the way, while
 they could simply stay on the GPU as they are only used there. But reality
 is probably more complicated than that.


I don't know why this would be. Do you have data on this?


 So if the proposed API addition would allow a better optimization then I'd
 be all for it. If not, then I am wondering how I can get the vendor's
 attention to improve this particular case. It really is very slow
 currently, to the point where it doesn't make sense to use it for any sort
 of animation technique.


I think we just need to find some time to start implementing it. The API is
simple and in the case of Core Graphics, it maps directly.


Re: [whatwg] canvas feedback

2014-05-14 Thread Rik Cabanier
On Wed, May 14, 2014 at 7:30 PM, K. Gadd k...@luminance.org wrote:

 Is it ever possible to make canvas-to-canvas blits consistently fast?
 It's my understanding that browsers still make
 intelligent/heuristic-based choices about which canvases to
 accelerate, if any, and that it depends on the size of the canvas,
 whether it's in the DOM, etc. I've had to report bugs related to this
 against firefox and chrome in the past, I'm sure more exist. There's
 also the scenario where you need to blit between Canvas2D canvases and
 WebGL canvases - the last time I tried this, a single blit could cost
 *hundreds* of milliseconds because of pipeline stalls and cpu-gpu
 transfers.


Chrome has made some optimizations recently in this area and will try to
keep everything on the GPU for transfers between canvas 2d and WebGL.
Are you still seeing issues there?


 Canvas-to-canvas blits are a way to implement layering, but it seems
 like making it consistently fast via canvas-canvas blits is a much
 more difficult challenge than making sure that there are fast/cheap
 ways to layer separate canvases at a composition stage. The latter
 just requires that the browser have a good way to composite the
 canvases, the former requires that various scenarios with canvases
 living in CPU and GPU memory, deferred rendering queues, etc all get
 resolved efficiently in order to copy bits from one place to another.


Small canvases are usually not hardware accelerated. Do you have any data
that this is causing slowdowns?
Layering should also mitigate this since if the canvas is HW accelerated,
so should its layers.


 (In general, I think any solution that relies on using
 canvas-on-canvas drawing any time a single layer is invalidated is
 suspect. The browser already has a compositing engine for this that
 can efficiently update only modified subregions and knows how to cache
 reusable data; re-rendering the entire surface from JS on change is
 going to be a lot more expensive than that.


I don't think the canvas code is that smart. I think you're thinking about
drawing SVG and HTML.


 Don't some platforms
 actually have compositing/layers at the OS level, like CoreAnimation
 on iOS/OSX?)


Yes, but AFAIK they don't use this for Canvas.



 On Wed, May 14, 2014 at 6:30 AM, Jürg Lehni li...@scratchdisk.com wrote:
  On Apr 30, 2014, at 00:27 , Ian Hickson i...@hixie.ch wrote:
 
  On Mon, 7 Apr 2014, Jürg Lehni wrote:
 
  Well this particular case, yes. But in the same way we allow a group of
  items to have an opacity applied to it in Paper.js, and expect it to
  behave the same way as in SVG: The group should appear as if its children
  were first rendered at 100% alpha and then blitted over with the desired
  transparency.
 
  Layers would offer exactly this flexibility, and having them around
  would make a whole lot of sense, because currently the above can only be
  achieved by drawing into a separate canvas and blitting the result over.
  The performance of this is really low on all browsers, a true bottleneck
  in our library currently.
 
  It's not clear to me why it would be faster if implemented as layers.
  Wouldn't the solution here be for browsers to make canvas-on-canvas
  drawing faster? I mean, fundamentally, they're the same feature.
 
  I was perhaps wrongly assuming that including layering in the API would
 allow the browser vendors to better optimize this use case. The problem
 with the current solution is that drawing a canvas into another canvas is
 inexplicably slow across all browsers. The only reason I can imagine for
 this is that the pixels are copied back and forth between the GPU and the
 main memory, and perhaps converted along the way, while they could simply
 stay on the GPU as they are only used there. But reality is probably more
 complicated than that.
 
  So if the proposed API addition would allow a better optimization then
 I'd be all for it. If not, then I am wondering how I can get the vendor's
 attention to improve this particular case. It really is very slow
 currently, to the point where it doesn't make sense to use it for any sort
 of animation technique.
 
  J
 
 
 
 
 



[whatwg] canvas feedback

2014-04-29 Thread Ian Hickson

On Tue, 8 Apr 2014, Rik Cabanier wrote:
 On Mon, Apr 7, 2014 at 3:35 PM, Ian Hickson i...@hixie.ch wrote:
 
  So this is not how most implementations currently have it 
  defined.

 I'm unsure what you mean. Browser implementations? If so, they 
 definitely do store the path in user coordinates. The spec 
 currently says otherwise [1] though.
   
I'm not sure what you're referring to here.
  
   All graphics backends for canvas that I can inspect, don't apply the 
   CTM to the current path when you call a painting operator. Instead, 
   the path is passed as segments in the current CTM and the graphics 
   library will apply the transform to the segments.
 
  Right. That's what the spec says too, for the current default path.
 
 No, the spec says this:
 
 For CanvasRenderingContext2D objects, the points passed to the methods, 
 and the resulting lines added to current default path by these methods, 
 must be transformed according to the current transformation matrix 
 before being added to the path.

As far as I can tell, these are black-box indistinguishable statements.

Can you show a test case that demonstrates how the spec doesn't match 
browsers?


   [use case: two paths mapping to the same region]
 
  Just use two different IDs with two different addHitRegion() calls. 
  That's a lot less complicated than having a whole new API.
 
 That doesn't work if you want to have the same control for the 2 areas, 
 from the spec for addHitRegion:
 
# If there is a previous region with this control, remove it from the 
# scratch bitmap's hit region list; then, if it had a parent region, 
# decrement that hit region's child count by one.

Interesting. You mean like a case where you had a button that got split 
into multiple segments animated separately that then joined together, but 
where you want to be able to click on any part of that split-up button?

Hmm.

There's several ways we could support that.

The simple way would be to allow multiple regions to refer to a control.

The more subtle way would be to allow the control logic to defer to the 
parent region, but that's probably a bad idea (where would you put the 
parent region?).

So I guess the question is: is it more useful to be able to refer to 
the same control from multiple regions, or is it more useful for the 
previous region to be automatically discarded when you add a new one?

It's probably more useful to have multiple regions. You can always do the 
discarding using IDs.


 Even if you don't use the control, it would be strange to have 2 
 separate hit regions for something that represents 1 object.

Why? I think that makes a lot of sense. There are multiple regions, why 
not have multiple hit regions? This becomes especially true when the 
multiple regions might be differently overlapped by other regions, or 
where different parts of the canvas are renderings from different angles 
of the same underlying scene. It would be silly for things to be easier to 
do with two canvases than one, in that kind of case, no?

I've changed the spec to not discard regions based on the control.
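With that change, the split-up button case could look like the following sketch. It targets the hit-region proposal under discussion (which was experimental), and the region ids and helper name are illustrative:

```javascript
// Sketch: two regions, two ids, one shared control. After the spec change,
// the second addHitRegion no longer evicts the first region for the control,
// so both halves of the split-up button remain clickable.
function addSplitButtonRegions(ctx, control, leftPath, rightPath) {
  ctx.addHitRegion({ id: 'button-left',  control: control, path: leftPath });
  ctx.addHitRegion({ id: 'button-right', control: control, path: rightPath });
}
```

Discarding, when wanted, can still be done explicitly by reusing an id.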


On Fri, 6 Dec 2013, Jürg Lehni wrote:

 Instead of using getCurrentPath and setCurrentPath methods as a 
 solution, this could perhaps be solved by returning the internal 
 path instead of a copy, but with a flag that would prevent 
 further alterations on it.

 The setter of the currentPath accessor / data member could then 
 make the copy instead when a new path is to be set.

 This would also make sense from a a caching point of view, where 
 storing the currentPath for caching might not actually mean that 
 it will be used again in the future (e.g. because the path's 
 geometry changes completely on each frame of an animation), so 
 copying only when setting would postpone the actual work of 
 having to make the copy, and would help memory consumption and 
 performance.
   
I don't really understand the use case here.
  
   Jurg was just talking about an optimization (so you don't have to 
   make an internal copy)
 
  Sure, but that doesn't answer the question of what the use case is.
 
 From my recent experiments with porting canvg ( 
 https://code.google.com/p/canvg/) to use Path2D, they have a routine 
 that continually plays a path into the context which is called from a 
 routine that does the fill, clip or stroke. Because that routine can't 
 simply set the current path, a lot more changes were needed.

Sure, but the brief transitional cost of moving from canvas current 
default paths to Path2D objects is minor in the long run, and not worth 
the added complexity cost paid over the lifetime of the Web for the 
feature. So for something like this, we need a stronger use case than "it 
makes transitioning to Path2D slightly easier".


  On Wed, 12 Mar 2014, Rik Cabanier wrote:
 
  You can do unions and so forth with just paths, no need for 
 

Re: [whatwg] canvas feedback

2014-04-29 Thread Justin Novosad


and it is not possible to resolve font sizes in physical length
units unless the document is associated with a view.
  
   Why not? The canvas has a pixel density (currently always 1:1), no?
 
  1:1 is not a physical pixel density. To resolve a font size that is
  specified in physical units (e.g. millimeters or inches) you need
  something like a DPI value, which requires information about the output
  device.

 No, not any more. CSS physical units are defined as mapping to CSS
 pixels at 96 CSS pixels per CSS inch, and canvas is defined as mapping CSS
 pixels to canvas coordinate space units at one CSS pixel per coordinate
 space unit. As far as I know, all browsers do this now.


Right... So I think there is a bug in Blink then. Thanks.
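Given that fixed mapping (96 CSS px per CSS inch, one coordinate space unit per CSS pixel), resolving physical lengths on canvas needs no device DPI at all. A minimal sketch (the helper name is illustrative):

```javascript
// Sketch: resolve CSS physical length units to canvas coordinate space units
// using the fixed CSS anchoring: 1in = 96px, 1in = 25.4mm, 1pt = 1/72in.
const CSS_PX_PER_INCH = 96;
const MM_PER_INCH = 25.4;

function physicalToCanvasUnits(value, unit) {
  switch (unit) {
    case 'in': return value * CSS_PX_PER_INCH;
    case 'mm': return value * CSS_PX_PER_INCH / MM_PER_INCH;
    case 'cm': return value * 10 * CSS_PX_PER_INCH / MM_PER_INCH;
    case 'pt': return value * CSS_PX_PER_INCH / 72;
    default: throw new Error('unsupported unit: ' + unit);
  }
}
```

So a "5mm" font size resolves to the same number of canvas units in every conforming browser, view or no view.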


My 2 cents: specifying fallback behaviors for all use cases that are
context dependent could be tedious and I have yet to see a
real-world use case that requires being able to paint a canvas in a
frame-less document. Therefore, I think the spec should clearly
state canvas elements that are in a document without a browsing
context are unusable. Not sure what the exact behavior should be
though.  Should an exception be thrown upon trying to use the
rendering context? Perhaps canvas draws should fail silently, and
using the canvas as an image source should give transparent black
pixels?
  
   As far as I can tell, this is all already specified, and it just gets
   treated like a normal canvas.
 
  Agreed. The fallback behavior is specified. But is it good enough? There
  will be discrepancies, sometimes large ones, between text rendered with
  and without a browsing context.

 I don't think there should be any discrepancies.


One major discrepancy I noticed is that web font resolution fails but I
don't think that is due to lack of a browsing context per se. It is more
precisely due to the fact that we don't compute style on documents that are
not displayed (web fonts are defined in CSS).


   Instead, we should use adaptive algorithms, for example always using
   the prettiest algorithms unless we find that frame rate is suffering,
   and then stepping down to faster algorithms.
 
  Such an adaptive algorithm implies making some kind of weighted decision
  to chose a reasonable compromise between quality and performance.
  Sounds like the perfect place to use a hint.

 If we really need a hint. But do we? Do we have data showing that adaptive
 algorithms can't do a good job without a hint?


Fair enough. Will give it a try.


 On Mon, 7 Apr 2014, Justin Novosad wrote:
 
  Dashing is one thing that would be affected.  I think some
  implementations are currently in a non-compliant state probably because
  the line dashing feature was added recently.  Back when strokeRect was
  originally implemented, we could get away with blindly normalizing
  rectangles because there was no impact on the rendering result.  The
  other thing that is affected is fill rule application.  For example, if
  you have a path that contains two intersecting rectangles and you are
  filling in with the nonzero rule.  If one of the two rectangles is
  flipped, then the intersection region should be unfilled.  If the
  rectangles are normalized internally by the implementation, then you
  will get the wrong (non spec compliant) result.

 I've added in that order to rect().


Thanks.
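The flipped-rectangle case described above can be modelled with winding numbers. This is a toy model, not canvas code: for axis-aligned rectangles, a clockwise loop contributes +1 to the winding number of interior points and a counter-clockwise (flipped) loop contributes -1, and "nonzero" fills a point iff its total winding is not zero.

```javascript
// Toy model of the nonzero fill rule for axis-aligned rectangles. The overlap
// of a normal and a flipped rectangle winds to 0, i.e. stays unfilled -- which
// an implementation that silently normalizes rectangles would get wrong.
function winding(point, rects) {
  let w = 0;
  for (const r of rects) {
    const inside = point.x > Math.min(r.x0, r.x1) && point.x < Math.max(r.x0, r.x1) &&
                   point.y > Math.min(r.y0, r.y1) && point.y < Math.max(r.y0, r.y1);
    if (inside) w += r.clockwise ? 1 : -1;
  }
  return w;
}

const rects = [
  { x0: 0,  y0: 0,  x1: 20, y1: 20, clockwise: true  },  // normal rect
  { x0: 10, y0: 10, x1: 30, y1: 30, clockwise: false },  // flipped rect
];
// a point in the overlap winds to 0 (unfilled); a point in only one rect is filled
```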


 I couldn't find the original reason for strokeRect() only drawing one line
 in the one-dimensional case, though it dates back to 2007 at least.


That speaks for itself: If no one has complained about that since 2007...


 I haven't changed rect() to do that too.


Good. I think it is best for rect to not optimize to a line because that
would affect filling in an undesirable way and it would affect the start
point of the next sub-path.  That being said, it is probably safe to
optimize it to two lines, but that does not have to be detailed in the spec
since it is an implementation optimization that has no effect on the
rendered result.



 On Sun, 6 Apr 2014, Dirk Schulze wrote:
 
  The spec says that the object TextMetrics[1] must return font and actual
  text metrics. All things require information from the font or the font
  system. In many cases either font or font system just do not provide
  these information.

 Which cases? The information is needed for text layout purposes; if the
 browser can't get the information, how is it conforming to CSS?


It conforms by applying approximation rules (or guesses?) that derive the
missing metrics from the ones that are available.  It's ugly, but it kind of
works.
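That kind of approximation might look like the sketch below: read the TextMetrics fields when the font system provides them, and fall back to a guess otherwise. `approximateLineHeight` is illustrative, and the 0.8/0.2 ascent/descent split is a common rough fallback, not something from the spec.

```javascript
// Sketch: derive a usable line height from TextMetrics, tolerating fonts
// (or font systems) that don't expose real ascent/descent metrics.
function approximateLineHeight(ctx, text, fontSizePx) {
  const m = ctx.measureText(text);
  const ascent = m.fontBoundingBoxAscent != null
    ? m.fontBoundingBoxAscent
    : fontSizePx * 0.8;   // guess when the metric is unavailable
  const descent = m.fontBoundingBoxDescent != null
    ? m.fontBoundingBoxDescent
    : fontSizePx * 0.2;   // guess when the metric is unavailable
  return ascent + descent;
}
```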


Re: [whatwg] canvas feedback

2014-04-08 Thread Rik Cabanier
On Mon, Apr 7, 2014 at 3:35 PM, Ian Hickson i...@hixie.ch wrote:


 On Wed, 12 Mar 2014, Rik Cabanier wrote:
  On Wed, Mar 12, 2014 at 3:44 PM, Ian Hickson wrote:
   On Thu, 28 Nov 2013, Rik Cabanier wrote:
On Thu, Nov 28, 2013 at 8:30 AM, Jürg Lehni wrote:

 I meant to say that I think it would make more sense if the
 path was in the current transformation matrix, so it would
 represent the same coordinate values in which it was drawn, and
 could be used in the same 'context' of transformations applied to
 the drawing context later on.
   
No worries, it *is* confusing. For instance, if you emit coordinates
and then scale the matrix by 2, those coordinates from
getCurrentPath will have a scale of .5 applied.
  
   That's rather confusing, and a pretty good reason not to have a way to
   go from the current default path to an explicit Path, IMHO.
  
   Transformations affect the building of the current default path at
   each step of the way, which is really a very confusing API. The Path
   API on the other hand doesn't have this problem -- it has no
   transformation matrix. It's only when you use Path objects that they
   get transformed.
 
  This happens transparently to the author so it's not confusing.

 I've been confused by it multiple times over the years, and I wrote the
 spec. I am confident in calling it confusing.


Only when you think about it :-)


  For instance:
 
  ctx.rect(0,0,10,10);
  ctx.scale(2,2); - should not affect geometry of the previous rect
  ctx.stroke(); - linewidth is scaled by 2, but rect is still 10x10

 It's confusing because it's not at all clear why this doesn't result in
 two rectangles of different sizes:

  ctx.rect(0,0,10,10);
  ctx.scale(2,2);
  ctx.stroke();
  ctx.scale(2,2);
  ctx.stroke();

 ...while this does:

  ctx.rect(0,0,10,10);
  ctx.scale(2,2);
  ctx.stroke();
  ctx.beginPath();
  ctx.rect(0,0,10,10);
  ctx.scale(2,2);
  ctx.stroke();

 It appears to be the same path in both cases, after all.


Maybe you can think about drawing paths like drawing in a graphics
application.
- moveTo, lineTo, etc = drawing line segments in the document
- scale = hitting the magnifying glass/zooming
- translate = panning the document (0,0) is the upper left of the screen
- coordinates in path segments/rect = coordinates on the screen

It would be very surprising if line art changed when zooming in or
out or panning.
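The behaviour both sides are describing, that the current default path records points already transformed by the CTM at the moment each segment is added, can be shown with a toy model (not browser code; the model keeps a single scale factor where a real context keeps a full matrix):

```javascript
// Toy model of the current default path: points are transformed when the
// segment is added, so a later scale() changes the pen (line width) but not
// geometry that was already recorded -- the "zoom after inking" behaviour.
class MiniContext {
  constructor() {
    this.scaleFactor = 1;   // stand-in for the CTM
    this.points = [];       // stand-in for the current default path
  }
  scale(s) { this.scaleFactor *= s; }
  lineTo(x, y) {
    // transformed immediately, as the spec's quoted sentence requires
    this.points.push([x * this.scaleFactor, y * this.scaleFactor]);
  }
}

const ctx = new MiniContext();
ctx.lineTo(10, 10);   // recorded as (10, 10)
ctx.scale(2);
ctx.lineTo(10, 10);   // recorded as (20, 20)
```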


Re: [whatwg] canvas feedback

2014-04-08 Thread Rik Cabanier
On Mon, Apr 7, 2014 at 3:35 PM, Ian Hickson i...@hixie.ch wrote:


 So this is not how most implementations currently have it defined.
   
I'm unsure what you mean. Browser implementations? If so, they
definitely do store the path in user coordinates. The spec currently
says otherwise [1] though.
  
   I'm not sure what you're referring to here.
 
  All graphics backends for canvas that I can inspect, don't apply the CTM
  to the current path when you call a painting operator. Instead, the path
  is passed as segments in the current CTM and the graphics library will
  apply the transform to the segments.

 Right. That's what the spec says too, for the current default path.


No, the spec says this:

For CanvasRenderingContext2D objects, the points passed to the methods, and
the resulting lines added to current default path by these methods, must be
transformed according to the current transformation matrix before being
added to the path.




 This is the confusing behaviour to which I was referring. The Path API
 (or
 Path2D or whatever we call it) doesn't have this problem.


That is correct. The Path2D object is in user space and can be passed
directly to the graphics API (along with the CTM).


 ...
var s = new Shape();
   
ctx.beginPath(); ctx.lineTo(...); ...; ctx.fill(); s.add(new
Shape(ctx.currentPath));
...
ctx.beginPath(); ctx.lineTo(...); ...; ctx.stroke(); s.add(new
Shape(ctx.currentPath, ctx.currentDrawingStyle));
   
ctx.addHitRegion({shape: s, id: control});
  
   Why not just add ctx.addHitRegion() calls after the fill and stroke
 calls?
 
  That does not work as the second addHitRegion will remove the control and
  id from the first one.
  The 'add' operation is needed to get a union of the region shapes.

 Just use two different IDs with two different addHitRegion() calls. That's
 a lot less complicated than having a whole new API.


That doesn't work if you want to have the same control for the 2 areas,
from the spec for addHitRegion:

If there is a previous region with this control, remove it from the scratch
bitmap's hit region list; then, if it had a parent region, decrement that
hit region's child count by one.


Even if you don't use the control, it would be strange to have 2 separate
hit regions for something that represents 1 object.


   On Fri, 6 Dec 2013, Jürg Lehni wrote:
 ...

 copy, and would help memory consumption and performance.
  
   I don't really understand the use case here.
 
  Jurg was just talking about an optimization (so you don't have to make
  an internal copy)

 Sure, but that doesn't answer the question of what the use case is.


From my recent experiments with porting canvg (
https://code.google.com/p/canvg/) to use Path2D, they have a routine that
continually plays a path into the context which is called from a routine
that does the fill, clip or stroke.
Because that routine can't simply set the current path, a lot more changes
were needed.
Some pseudocode that shows the added complexity, without currentPath:

function drawpath() {
  if (Path2DSupported) {
    return myPath;
  } else {
    for (...) {
      ctx.moveTo/lineTo/...
    }
  }
}

function fillpath() {
  var p = drawpath();
  if (p)
    ctx.fill(p);
  else
    ctx.fill();
}

with currentPath:

function drawpath() {
  if (Path2DSupported) {  // only 2 extra lines of code
    ctx.currentPath = myPath;
  } else {
    for (...) {
      ctx.moveTo/lineTo/...
    }
  }
}

function fillpath() {
  drawpath();
  ctx.fill();
}



 On Wed, 12 Mar 2014, Rik Cabanier wrote:
 ...
   You say, here are some paths, here are some fill rules, here are some
   operations you should perform, now give me back a path that describes
   the result given a particular fill rule.
 
  I think you're collapsing a couple of different concepts here:
 
  path + fillrule - shape
  union of shapes - shape
  shape can be converted to a path

 I'm saying shape is an unnecessary primitive. You can do it all with
 paths.

union of (path + fillrule)s - path


No, that makes no sense. What would you get when combining a path with a
fillrule and no fillrule?


   A shape is just a path with a fill rule, essentially.
 
  So, a path can now have a fillrule? Sorry, that makes no sense.

 I'm saying a shape is just the combination of a fill rule and a path. The
 path is just a path, the fill rule is just a fill rule.


After applying a fillrule, there is no longer a path. You can *convert* it
back to a path that describes the outline of the shape if you want, but
that is something different.
The way you've defined things now, you can apply another fill rule on a
path with a fill rule. What would the result of that be?


   Anything you can do
   with one you can do with the other.
 
  You can't add segments from one shape to another as shapes represent
  regions.
  Likewise, you can't union, intersect or xor path segments.

 But you can union, intersect, or xor lists of pairs of paths and
 fillrules.


would you start 

[whatwg] canvas feedback

2014-04-08 Thread Ian Hickson

(Note: I started responding to this feedback last week, so this is missing 
responses to feedback sent in the last few days. Sorry about that. I'll 
get to that feedback in due course as well!)

On Mon, 3 Mar 2014, Justin Novosad wrote:
 
 Say you create a new document using 
 document.implementation.createHTMLDocument(), you get a document without 
 a browsing context. This means that style and layout will never be 
 calculated on the document.  Some of those calculations are context 
 dependent, so they can't even be resolved.  Now, what about canvas 
 elements? If JS code draws to a canvas that is in a document with no 
 browsing context, what should happen?

It should draw. In theory, anywhere in the canvas API where it depends on 
computed styles, it has prose saying what should happen if the computed 
style cannot be used. This is needed for display:none canvases, for 2D 
contexts in workers, and for the case you describe.
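The case in question can be sketched as below. `drawInFramelessDocument` is an illustrative helper; the document object is passed in so the sketch can also be driven headlessly, but in a page you would pass the global `document`.

```javascript
// Sketch: draw into a canvas whose document has no browsing context
// (document.implementation.createHTMLDocument). Per the position above,
// this should just work, with spec'd fallbacks wherever computed style
// would normally be consulted.
function drawInFramelessDocument(document) {
  const doc = document.implementation.createHTMLDocument('scratch');
  const canvas = doc.createElement('canvas');
  canvas.width = 200;
  canvas.height = 100;
  const ctx = canvas.getContext('2d');
  ctx.fillStyle = 'green';
  ctx.fillRect(10, 10, 50, 50);      // geometry: uncontroversial
  ctx.font = '16px sans-serif';      // needs font/style resolution
  ctx.fillText('hello', 10, 80);     // the call browsers disagreed on
  return canvas;
}
```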


 For example, there is no locale for font family resolution

I'm not clear on what you mean by locale here. What is the locale that a 
displayed canvas in a Document in a browsing context has, that a 
non-displayed canvas outside a Document and without a browsing context 
does not have?


 and it is not possible to resolve font sizes in physical length units 
 unless the document is associated with a view.

Why not? The canvas has a pixel density (currently always 1:1), no?


 My 2 cents: specifying fallback behaviors for all use cases that are 
 context dependent could be tedious and I have yet to see a real-world 
 use case that requires being able to paint a canvas in a frame-less 
 document. Therefore, I think the spec should clearly state canvas 
 elements that are in a document without a browsing context are unusable.  
 Not sure what the exact behavior should be though.  Should an exception 
 be thrown upon trying to use the rendering context? Perhaps canvas draws 
 should fail silently, and using the canvas as an image source should 
 give transparent black pixels?

As far as I can tell, this is all already specified, and it just gets 
treated like a normal canvas.


On Wed, 5 Mar 2014, Rik Cabanier wrote:
 
 Testing all browsers (except IE since 
 document.implementation.createHTMLDocument() doesn't work) they seem to 
 handle canvas contexts with no browsing context except when you use 
 text. Chrome crashes, firefox throws an exception and Safari draws the 
 text with a very small scale

I don't really understand why this is problematic in practice. What does a 
browsing context provide that is needed for rendering text that a user 
agent couldn't fake for itself in other contexts? We're definitely going 
to need text in worker canvases.


On Thu, 6 Mar 2014, Justin Novosad wrote:
 
 Thanks for checking.  The reason I started this thread is that I just 
 recently solved the crash in Chrome, and I wasn't satisfied with my 
 resolution.  I just added an early exit, so Chrome 35 will fail silently 
 on calls that depend on style resolution when the canvas has no browsing 
 context.  So now we have three different behaviors. Yay!
 
 I don't think the Safari behavior is the right thing to do because it 
 will never match the developer's intent.

I agree. The developer's intent is that text be drawn as specified in the 
API. Why would we do anything else?


On Wed, 12 Mar 2014, Rik Cabanier wrote:
 On Wed, Mar 12, 2014 at 3:44 PM, Ian Hickson wrote:
  On Thu, 28 Nov 2013, Rik Cabanier wrote:
   On Thu, Nov 28, 2013 at 8:30 AM, Jürg Lehni wrote:
   
I meant to say that I think it would make more sense if the 
path was in the current transformation matrix, so it would 
represent the same coordinate values in which it was drawn, and 
could be used in the same 'context' of transformations applied to 
the drawing context later on.
  
   No worries, it *is* confusing. For instance, if you emit coordinates 
   and then scale the matrix by 2, those coordinates from 
   getCurrentPath will have a scale of .5 applied.
 
  That's rather confusing, and a pretty good reason not to have a way to 
  go from the current default path to an explicit Path, IMHO.
 
  Transformations affect the building of the current default path at 
  each step of the way, which is really a very confusing API. The Path 
  API on the other hand doesn't have this problem -- it has no 
  transformation matrix. It's only when you use Path objects that they 
  get transformed.
 
 This happens transparently to the author so it's not confusing.

I've been confused by it multiple times over the years, and I wrote the 
spec. I am confident in calling it confusing.


 For instance:
 
 ctx.rect(0,0,10,10);
 ctx.scale(2,2);   // should not affect geometry of the previous rect
 ctx.stroke();     // lineWidth is scaled by 2, but the rect is still 10x10

It's confusing because it's not at all clear why this doesn't result in 
two rectangles of different sizes:

 ctx.rect(0,0,10,10);
 ctx.scale(2,2);
 
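A minimal sketch (not browser code; all helper names are hypothetical) of the bookkeeping the spec requires: each point is transformed by the CTM at the moment it is added to the current default path, so a later scale() cannot retroactively resize what was already recorded.

```javascript
// Sketch of the "bake the CTM into the default path" model under discussion.
function makeMatrix(a, b, c, d, e, f) {
  return { a, b, c, d, e, f };
}

function apply(m, x, y) {
  // [a c e; b d f; 0 0 1] * [x y 1]^T
  return { x: m.a * x + m.c * y + m.e, y: m.b * x + m.d * y + m.f };
}

function makeContext() {
  return {
    ctm: makeMatrix(1, 0, 0, 1, 0, 0),
    defaultPath: [],
    scale(sx, sy) {
      this.ctm = makeMatrix(this.ctm.a * sx, this.ctm.b * sx,
                            this.ctm.c * sy, this.ctm.d * sy,
                            this.ctm.e, this.ctm.f);
    },
    lineTo(x, y) {
      // Spec behaviour: transform the point *now*, then store it.
      this.defaultPath.push(apply(this.ctm, x, y));
    },
  };
}

const ctx = makeContext();
ctx.lineTo(10, 10);   // recorded as (10, 10)
ctx.scale(2, 2);      // does NOT touch the already-recorded point
ctx.lineTo(10, 10);   // recorded as (20, 20)
```

This is why the two rect() calls in the example above end up at different device positions even though they were issued with identical coordinates.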

Re: [whatwg] canvas feedback

2014-04-08 Thread Rik Cabanier
On Mon, Apr 7, 2014 at 3:35 PM, Ian Hickson i...@hixie.ch wrote:


 So this is not how most implementations currently have it defined.
   
I'm unsure what you mean. Browser implementations? If so, they
definitely do store the path in user coordinates. The spec currently
says otherwise [1] though.
  
   I'm not sure what you're referring to here.
 
  All graphics backends for canvas that I can inspect, don't apply the CTM
  to the current path when you call a painting operator. Instead, the path
  is passed as segments in the current CTM and the graphics library will
  apply the transform to the segments.

 Right. That's what the spec says too, for the current default path.


No, the spec says this:

For CanvasRenderingContext2D objects, the points passed to the methods, and
the resulting lines added to current default path by these methods, must be
transformed according to the current transformation matrix before being
added to the path.




 This is the confusing behaviour to which I was referring. The Path API
 (or Path2D or whatever we call it) doesn't have this problem.


That is correct. The Path2D object is in user space and can be passed
directly to the graphics API (along with the CTM).


Another use case is to allow authors to quickly migrate to hit
 regions.
   
ctx.beginPath(); ctx.lineTo(...); ...; ctx.fill();
... // lots of complex drawing operation for a control
ctx.beginPath(); ctx.lineTo(...); ...; ctx.stroke();
   
   
To migrate that to a region (with my proposed shape interface [1]):
   
var s = new Shape();
   
ctx.beginPath(); ctx.lineTo(...); ...; ctx.fill(); s.add(new
Shape(ctx.currentPath));
...
ctx.beginPath(); ctx.lineTo(...); ...; ctx.stroke(); s.add(new
Shape(ctx.currentPath, ctx.currentDrawingStyle));
   
ctx.addHitRegion({shape: s, id: 'control'});
  
   Why not just add ctx.addHitRegion() calls after the fill and stroke
 calls?
 
  That does not work as the second addHitRegion will remove the control and
  id from the first one.
  The 'add' operation is needed to get a union of the region shapes.

 Just use two different IDs with two different addHitRegion() calls. That's
 a lot less complicated than having a whole new API.


That doesn't work if you want to have the same control for the 2 areas,
from the spec for addHitRegion:

If there is a previous region with this control, remove it from the scratch
bitmap's hit region list; then, if it had a parent region, decrement that
hit region's child count by one.


Even if you don't use the control, it would be strange to have 2 separate
hit regions for something that represents 1 object.
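The replacement semantics quoted above can be sketched with a hypothetical minimal region list (names are illustrative, not the spec's internal data model): adding a region whose id or control matches an existing one removes the old region, so two addHitRegion() calls cannot accumulate area for one object.

```javascript
// Sketch of hit-region bookkeeping: same id/control replaces, never unions.
function makeHitRegionList() {
  const regions = [];
  return {
    addHitRegion(options) {
      // Spec behaviour: remove any previous region with the same id or control.
      for (let i = regions.length - 1; i >= 0; i--) {
        if ((options.id && regions[i].id === options.id) ||
            (options.control && regions[i].control === options.control)) {
          regions.splice(i, 1);
        }
      }
      regions.push({ ...options });
    },
    count: () => regions.length,
  };
}

const list = makeHitRegionList();
list.addHitRegion({ id: 'knob', shape: 'area-1' });
list.addHitRegion({ id: 'knob', shape: 'area-2' }); // replaces, does not union
```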


   On Fri, 6 Dec 2013, Jürg Lehni wrote:
 ...
copy, and would help memory consumption and performance.
  
   I don't really understand the use case here.
 
  Jurg was just talking about an optimization (so you don't have to make
  an internal copy)

 Sure, but that doesn't answer the question of what the use case is.


From my recent experiments with porting canvg
(https://code.google.com/p/canvg/) to use Path2D: it has a routine that
repeatedly replays a path into the context, which is called from a routine
that does the fill, clip, or stroke.
Because that routine can't simply set the current path, a lot more changes
were needed.
Some pseudocode that shows the added complexity, without currentPath:

function drawpath() {
  if (Path2DSupported) {
    return myPath;
  } else {
    for (...) {
      ctx.moveTo/lineTo/...
    }
  }
}

function fillpath() {
  var p = drawpath();
  if (p)
    ctx.fill(p);
  else
    ctx.fill();
}

with currentPath:

function drawpath() {
  if (Path2DSupported) { // only 2 extra lines of code
    ctx.currentPath = myPath;
  } else {
    for (...) {
      ctx.moveTo/lineTo/...
    }
  }
}

function fillpath() {
  drawpath();
  ctx.fill();
}
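A runnable restatement of the fallback pattern sketched above. The stub context and the hasPath2D flag are stand-ins (hypothetical names) for a real CanvasRenderingContext2D and for feature detection such as `typeof Path2D !== 'undefined'`.

```javascript
// Stub recording context so the fallback logic can run outside a browser.
function makeStubContext() {
  return {
    ops: [],
    moveTo(x, y) { this.ops.push(['moveTo', x, y]); },
    lineTo(x, y) { this.ops.push(['lineTo', x, y]); },
    fill(path) { this.ops.push(['fill', path || 'default']); },
  };
}

function drawPath(ctx, hasPath2D, path2d, segments) {
  if (hasPath2D) {
    return path2d;               // caller passes it to fill()/stroke()
  }
  for (const [x, y] of segments) {
    ctx.lineTo(x, y);            // replay into the default path
  }
  return null;
}

function fillPath(ctx, hasPath2D, path2d, segments) {
  const p = drawPath(ctx, hasPath2D, path2d, segments);
  if (p) ctx.fill(p); else ctx.fill();
}

const ctx = makeStubContext();
fillPath(ctx, false, null, [[0, 0], [10, 0], [10, 10]]);
```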



 On Wed, 12 Mar 2014, Rik Cabanier wrote:

 You can do unions and so forth with just paths, no need for
 regions.
   
How would you do a union with paths? If you mean that you can just
aggregate the segments, sure but that doesn't seem very useful.
  
   You say, here are some paths, here are some fill rules, here are some
   operations you should perform, now give me back a path that describes
   the result given a particular fill rule.
 
  I think you're collapsing a couple of different concepts here:
 
  path + fillrule -> shape
  union of shapes -> shape
  shape can be converted to a path

 I'm saying shape is an unnecessary primitive. You can do it all with
 paths.

union of (path + fillrule)s -> path


No, that makes no sense. What would you get when combining a path with a
fillrule and no fillrule?


   A shape is just a path with a fill rule, essentially.
 
  So, a path can now have a fillrule? Sorry, that makes no sense.

 I'm saying a shape is just the combination of a fill rule and a path. The
 path is just a path, the fill rule is just a fill rule.


After applying a fillrule, there is no longer a path. You 
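The point being argued can be made concrete with a from-scratch sketch (not browser code): the same self-overlapping geometry encloses different points under nonzero vs. even-odd winding, which is why "path + fill rule" carries more information than a path alone. The ray-cast helper below is hypothetical illustration code.

```javascript
// Cast a ray along +x from (px, py); count signed crossings (nonzero rule)
// and total crossings (even-odd rule) over all polygonal subpaths.
function windingAt(subpaths, px, py) {
  let signed = 0, crossings = 0;
  for (const poly of subpaths) {
    for (let i = 0; i < poly.length; i++) {
      const [x1, y1] = poly[i];
      const [x2, y2] = poly[(i + 1) % poly.length];
      if ((y1 <= py) !== (y2 <= py)) {           // edge spans the ray's y
        const xCross = x1 + (py - y1) * (x2 - x1) / (y2 - y1);
        if (xCross > px) {                        // crossing right of the point
          crossings += 1;
          signed += y2 > y1 ? 1 : -1;
        }
      }
    }
  }
  return { nonzero: signed !== 0, evenodd: crossings % 2 === 1 };
}

// Two concentric squares wound the SAME way: the center is filled under
// nonzero (windings add up) but is a hole under even-odd (two crossings).
const outer = [[0, 0], [10, 0], [10, 10], [0, 10]];
const innerSame = [[2, 2], [8, 2], [8, 8], [2, 8]];
const mid = windingAt([outer, innerSame], 5, 5);
```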

Re: [whatwg] canvas feedback

2014-04-08 Thread Rik Cabanier
On Tue, Apr 8, 2014 at 12:25 PM, Justin Novosad ju...@google.com wrote:


 
  On Mon, 17 Mar 2014, Justin Novosad wrote:
  
    Yes, but there is still an issue that causes problems in Blink/WebKit:
    because the canvas rendering context stores its path in local
    (untransformed) space, whenever the CTM changes, the path needs to be
    transformed to follow the new local space. This transform requires the
    CTM to be invertible. So now WebKit and Blink have a bug that causes
    all previously recorded parts of the current path to be discarded when
    the CTM becomes non-invertible (even if it is only temporarily
    non-invertible, even if the current path is not even touched while the
    matrix is non-invertible). I have a fix in flight that fixes that
    problem in Blink by storing the current path in transformed coordinates
    instead. I've had the fix on the back burner pending the outcome of
    this thread.
 
  Indeed. It's possible to pick implementation strategies that just can't
 be
  compliant; we shouldn't change the spec every time any implementor
 happens
  to make that kind of mistake, IMHO.
 
  (Of course the better long-term solution here is the Path objects, which
  are transform-agnostic during building.)
 
 
   Just to be clear, we should support this because otherwise the results
   are just wrong. For example, here some browsers currently show a
   straight line in the default state, and this causes the animation to
   look ugly in the transition from the first frame to the second frame
   (hover over the yellow to begin the transition):
  
      http://junkyard.damowmow.com/538
  
   Contrast this to the equivalent code with the transforms explicitly
   multiplied into the coordinates:
  
      http://junkyard.damowmow.com/539
  
   I don't see why we would want these to be different. From the author's
   perspective, they're identical.


These examples are pretty far-fetched.
How many times do people change the CTM in the middle of a drawing
operation and not change the geometry?

If we stick to that, there are still some behaviors that need to be resolved.
 One issue that comes to mind is what happens if stroke or fill are called
 while the CTM is non-invertible? To be more precise, how would the styles
 be mapped?  If the fillStyle is collapsed to a point, does that mean the
 path gets filled in transparent black?  If we go down this road, we will
 likely uncover more questions of this nature.


Indeed


  On Tue, 25 Mar 2014, Justin Novosad wrote:
  
   I prepared a code change to that effect, but then there was talk of
   changing the spec to skip path primitives when the CTM is not
   invertible, which I think is a good idea. It would avoid a lot of
   needless hoop jumping on the implementation side for supporting weird
   edge cases that have little practical usefulness.
 
  I'm not sure I agree that they have little practical usefulness. Zeros
  often occur at the edges of transitions, and if we changed the spec then
  these transitions would require all the special-case code to go in author
  code instead of implementor code.
 

 Yes, I think that may be the strongest argument so far in this discussion.
 The examples you provided earlier illustrate it well.
 I would like to hear what Rik and Dirk think about this now.


I looked at the WebKit and Chrome bug databases and I haven't found anyone
who complained about their current behavior.
Implementing this consistently will either add a bunch of special-case code
to deal with non-invertible matrices, or double (triple?) conversion of all
segment points like Firefox does. After that, fill, stroke and clip will
still not work when there's a non-invertible matrix.

I do not think it's worth the effort...
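The implementation concern here can be sketched in a few lines (hypothetical helper names, not engine code): a 2D canvas matrix [a b c d e f] is non-invertible exactly when its determinant a*d - b*c is zero, which is what e.g. scale(0, 0) produces, and an engine that stores the default path in user space must special-case that before mapping points back.

```javascript
// Determinant of the 2x2 linear part of a canvas matrix [a b c d e f].
function det(m) {
  return m.a * m.d - m.b * m.c;
}

// A CTM is usable for round-tripping path coordinates only when the
// determinant is finite and nonzero.
function isInvertible(m) {
  const d = det(m);
  return d !== 0 && Number.isFinite(d);
}

const identity = { a: 1, b: 0, c: 0, d: 1, e: 0, f: 0 };
const collapsed = { a: 0, b: 0, c: 0, d: 0, e: 0, f: 0 }; // scale(0, 0)
```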


Re: [whatwg] canvas feedback

2014-04-08 Thread Rik Cabanier
On Mon, Apr 7, 2014 at 3:35 PM, Ian Hickson i...@hixie.ch wrote:

 ...

Stroking will be completely wrong too, because joins and end caps
are drawn separately, so they would be stroked as separate paths.
This will not give you the effect of a double-stroked path.
  
   I don't understand why you think joins and end caps are drawn
   separately. That is not what the spec requires.
 
  Sure it does, for instance from
 
 http://www.whatwg.org/specs/web-apps/current-work/multipage/the-canvas-element.html#trace-a-path
  :
 
  The round value means that a filled arc connecting the two
  aforementioned corners of the join, abutting (and not overlapping) the
  aforementioned triangle, with the diameter equal to the line width and
  the origin at the point of the join, must be added at joins.
 
  If you mean, drawn with a separate fill call, yes that is true.
  What I meant was that they are drawn as a separate closed path that will
  interact with other paths as soon as there are different winding rules or
  holes.

 The word "filled" is a bit misleading here (I've removed it), but I don't
 see why that led you to the conclusion you reached. The step in question
 begins with "Create a new path that describes the edge of the areas that
 would be covered if a straight line of length equal to the style's
 lineWidth was swept along each path in path while being kept at an angle
 such that the line is orthogonal to the path being swept, replacing each
 point with the end cap necessary to satisfy the style's lineCap attribute
 as described previously and elaborated below, and replacing each join with
 the join necessary to satisfy the style's lineJoin type, as defined
 below", which seems pretty unambiguous.


Thinking about this some more, it looks like you came around and specified
stroking like I requested from the beginning.
For instance,
http://lists.w3.org/Archives/Public/public-whatwg-archive/2013Oct/0354.html
 or
http://lists.w3.org/Archives/Public/public-whatwg-archive/2013Oct/0213.html
Now that you made that change, 'addPathByStrokingPath' is specified
correctly. I still don't know how it could be implemented though... (It
*could* as a shape but not as a path)


Re: [whatwg] canvas feedback

2014-04-08 Thread Rik Cabanier
On Tue, Apr 8, 2014 at 4:50 PM, Rik Cabanier caban...@gmail.com wrote:




 On Mon, Apr 7, 2014 at 3:35 PM, Ian Hickson i...@hixie.ch wrote:

 ...


Stroking will be completely wrong too, because joins and end caps
are drawn separately, so they would be stroked as separate paths.
This will not give you the effect of a double-stroked path.
  
   I don't understand why you think joins and end caps are drawn
   separately. That is not what the spec requires.
 
  Sure it does, for instance from
 
 http://www.whatwg.org/specs/web-apps/current-work/multipage/the-canvas-element.html#trace-a-path
  :
 
  The round value means that a filled arc connecting the two
  aforementioned corners of the join, abutting (and not overlapping) the
  aforementioned triangle, with the diameter equal to the line width and
  the origin at the point of the join, must be added at joins.
 
  If you mean, drawn with a separate fill call, yes that is true.
  What I meant was that they are drawn as a separate closed path that will
  interact with other paths as soon as there are different winding rules
 or
  holes.

 The word "filled" is a bit misleading here (I've removed it), but I don't
 see why that led you to the conclusion you reached. The step in question
 begins with "Create a new path that describes the edge of the areas that
 would be covered if a straight line of length equal to the style's
 lineWidth was swept along each path in path while being kept at an angle
 such that the line is orthogonal to the path being swept, replacing each
 point with the end cap necessary to satisfy the style's lineCap attribute
 as described previously and elaborated below, and replacing each join with
 the join necessary to satisfy the style's lineJoin type, as defined
 below", which seems pretty unambiguous.


 Thinking about this some more, it looks like you came around and specified
 stroking like I requested from the beginning.
 For instance,
 http://lists.w3.org/Archives/Public/public-whatwg-archive/2013Oct/0354.html
  or
 http://lists.w3.org/Archives/Public/public-whatwg-archive/2013Oct/0213.html
 Now that you made that change, 'addPathByStrokingPath' is specified
 correctly. I still don't know how it could be implemented though... (It
 *could* as a shape but not as a path)


The spec is still confusingly written and could be misinterpreted:

"Create a new path that describes the edge of the areas that would be
covered if a straight line of length equal to the style's lineWidth was
swept along each subpath in path while being kept at an angle such that
the line is orthogonal to the path being swept, replacing each point with
the end cap necessary to satisfy the style's lineCap attribute as
described previously and elaborated below, and replacing each join with
the join necessary to satisfy the style's lineJoin type, as defined
below."

Maybe it could become:

"Create a new path that describes the edge of the coverage of the
following areas:
- a straight line of length equal to the style's lineWidth swept along
each subpath in path while being kept at an angle such that the line is
orthogonal to the path being swept,
- the end caps necessary to satisfy the style's lineCap attribute as
described previously and elaborated below,
- the joins necessary to satisfy the style's lineJoin type, as defined
below."
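The "swept orthogonal line" wording can be illustrated for the simplest case (hypothetical helper, not the spec algorithm): for one straight segment, sweeping a line of length lineWidth kept orthogonal to the segment covers a rectangle, before caps and joins are added.

```javascript
// Quadrilateral covered by sweeping a line of length lineWidth along a
// single straight segment, kept orthogonal to it (caps/joins not included).
function strokeQuadForSegment(x1, y1, x2, y2, lineWidth) {
  const dx = x2 - x1, dy = y2 - y1;
  const len = Math.hypot(dx, dy);
  // Unit normal to the segment, scaled to half the line width per side.
  const nx = (-dy / len) * (lineWidth / 2);
  const ny = (dx / len) * (lineWidth / 2);
  return [
    [x1 + nx, y1 + ny], [x2 + nx, y2 + ny],
    [x2 - nx, y2 - ny], [x1 - nx, y1 - ny],
  ];
}

// Horizontal segment (0,0)-(10,0) with lineWidth 4 covers y in [-2, 2]:
const quad = strokeQuadForSegment(0, 0, 10, 0, 4);
// → [[0, 2], [10, 2], [10, -2], [0, -2]]
```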


Re: [whatwg] Canvas feedback (various threads)

2011-02-11 Thread Ian Hickson
On Thu, 10 Feb 2011, Boris Zbarsky wrote:
 On 2/10/11 11:31 PM, Ian Hickson wrote:
  I think you had a typo in your test. As far as I can tell, all
  WebKit-based browsers act the same as Opera and Firefox 3 on this:
  
  
  http://software.hixie.ch/utilities/js/canvas/?c.clearRect(0%2C%200%2C%20640%2C%20480)%3B%0Ac.save()%3B%0Atry%20%7B%0A%20%20c.strokeStyle%20%3D%20'red'%3B%0A%20%20c.fillText(c.strokeStyle%2C%2020%2C%2080)%3B%0A%20%20c.strokeStyle%20%3D%20'transparent'%3B%0A%20%20c.fillText(c.strokeStyle%2C%2020%2C%20120)%3B%0A%7D%20finally%20%7B%0A%20%20c.restore()%3B%0A%7D%0A
 
 On that test, Safari 5.0.3 on Mac outputs "red" and "transparent" for 
 the two strings.

Huh. Interesting. I never test release browsers, totally missed this. :-)

Thanks.


   Which is less interop than it seems (due to Safari's behavior), and 
   about to disappear completely, since both IE9 and Firefox 4 will 
   ship with the 0 instead of 0.0  :(
  
  Is there no chance to fix this in Firefox 4? It _is_ a regression. :-)
 
 At this point, probably not.  If it's not actively breaking websites 
 it's not being changed before final release.  If it is, we'd at least 
 think about it...

Well I don't really mind what we do at this point.

I'm assuming you're not suggesting changing the alpha=1 case (which is 
also different between CSS and canvas). Is that right?

I guess with IE9 and Firefox4 about to go to the 0 behaviour, and Safari 
still having 0 behaviour, and Opera and Chrome being the only ones doing 
what the spec says, we should move to 0...

I've changed the spec to make alpha<1 colours work like CSS, and I've 
marked this part of the spec as controversial so that people are aware 
that it could change again. I guess we'll look at the interop here again 
in a few months and see if it's any better. My apologies to Opera and 
WebKit for getting screwed by following the spec.

-- 
Ian Hickson   U+1047E)\._.,--,'``.fL
http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'


Re: [whatwg] Canvas feedback (various threads)

2011-02-11 Thread Boris Zbarsky

On 2/11/11 3:34 PM, Ian Hickson wrote:

I'm assuming you're not suggesting changing the alpha=1 case (which is
also different between CSS and canvas). Is that right?


Sadly, probably right.  For the alpha=1 case, I would expect there to be 
sites depending on this...


-Boris


Re: [whatwg] Canvas feedback (various threads)

2011-02-10 Thread Ian Hickson

On Thu, 3 Feb 2011, Boris Zbarsky wrote:

 It looks like CSS rgba colors with an alpha value of 0 are serialized as 
 rgba() with 0 as the alpha value. in at least Gecko, Webkit, and 
 Presto.
 
 It also looks like canvas style color with an alpha value of 0 are 
 serialized as rgba() with 0.0 as the alpha value in Gecko 3.6, Webkit, 
 and Presto.
 
 In Gecko 2.0 we made a change to the canvas code to fix some bugs in the 
 serialization by the simple expedient of reusing the well-tested code 
 that CSS colors use.  This has the incidental benefit of more behavior 
 consistency for authors.  Unfortunately, this makes us not do what these 
 other UAs do, nor do what the current spec draft says.
 
 While we can clearly special-case 0 in this code, it seems like an 
 authoring pitfall to have the two different serialization styles, so I 
 would prefer to have consistent behavior between the two.  What do other 
 UA vendors think of standardizing on 0 as the serialization for canvas 
 color alpha channels when they're transparent?  Alternately, what about 
 using 0.0 for CSS colors?

On Fri, 4 Feb 2011, Anne van Kesteren wrote:
 
 Either way is fine. Note however that when there is no alpha-channel 
 involved the differences are much greater between CSS and canvas. At 
 least in Gecko/WebKit. I.e. rgb() vs #rrggbb. That probably cannot be 
 changed though.

Given Anne's point (that this is already different in other contexts), and 
given the current deployed interop on this issue, I'm reluctant to change 
this. I agree that it's unfortunate that we needlessly differ from CSS 
here. It's another example of why it's important for us to define 
everything, and that we never leave anything up to UAs to decide. :-)


On Mon, 17 Jan 2011, carol.sz...@nokia.com wrote:

 I propose changing the drawing model steps 2 to 5 the following way:
 
 Step 2: Multiply the alpha channel of every pixel in A by globalAlpha. 
 (Prior Step 5)

 Step 3: When shadows are drawn, render the shadow from image A, using 
 the current shadow styles, creating image B.

 Step 4: When shadows are drawn, composite image B onto image A using 
 destination-over compositing operation.
 
 This algorithm is less expensive than the prior one (it saves one 
 multiplication of the AlphaChannel over the entire B bitmap), and treats 
 the image/object drawn and its shadow as one entity. Which results in 
 the shadow being preserved for composite operations such as copy and 
 source-in and produces less strange results in operations such as xor 
 and destination-out when shadows are drawn.
 
 This algorithm yields the same result as the current version of the 
 spec for source-over, but for completeness supports shadows in all 
 modes. Indeed, the way the current spec stands, many non-source-over 
 modes including xor yield very strange results if shadows are used. I do 
 not care that much whether the spec has my proposal in it or whether it 
 specs that shadows are not rendered in modes other than source-over, but 
 it would be nice to hear an agreement from browser implementors on this.
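The reordered steps in the proposal above can be sketched per pixel (premultiplied rgba in 0..1; the helper names are hypothetical, and shadow rendering itself is elided): step 2 scales the alpha of A by globalAlpha, and step 4 composites the shadow under A with destination-over, so the object stays on top of its own shadow.

```javascript
// Step 2: multiply alpha by globalAlpha. With premultiplied pixels,
// scaling alpha scales every channel.
function scaleAlpha(px, globalAlpha) {
  return { r: px.r * globalAlpha, g: px.g * globalAlpha,
           b: px.b * globalAlpha, a: px.a * globalAlpha };
}

// Step 4: destination-over keeps dst (image A) on top; src (the shadow
// image B) shows through only where dst is transparent.
function destinationOver(src, dst) {
  const f = 1 - dst.a;
  return { r: dst.r + src.r * f, g: dst.g + src.g * f,
           b: dst.b + src.b * f, a: dst.a + src.a * f };
}

// Opaque red at globalAlpha 0.5, composited over a half-opaque black
// shadow pixel (shadow values are illustrative):
const a = scaleAlpha({ r: 1, g: 0, b: 0, a: 1 }, 0.5);
const shadow = { r: 0, g: 0, b: 0, a: 0.5 };
const composed = destinationOver(shadow, a);
```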

On Tue, 18 Jan 2011, Robert O'Callahan wrote:
 
 [...] if we don't have good use cases for using shadows with 
 non-source-over operators (I don't), let's just say that shadows don't 
 draw for non-source-over operators. That would reduce spec and 
 implementation complexity.

I'm happy to do either of these, but I'm very relunctant to change the 
spec away from what browsers do. Currently, browsers pretty much agree on 
how shadows work in composition, they just disagree over what gets 
composited (and even then it's only really WebKit that disagrees).

If there is interest in changing this, I think the best thing would be for 
browser vendors to indicate a commitment to change it, so that I can make 
sure I'm not changing the spec away from what browsers want to implement.


On Tue, 23 Nov 2010, Tab Atkins Jr. wrote:

 Right now, canvas gradients interpolate their colors in 
 non-premultiplied space; that is, the raw values of r, g, b, and a are 
 interpolated independently.  This has the unfortunate effect that colors 
 darken as they transition to transparent, as transparent is defined as 
 rgba(0,0,0,0), a transparent black.  Under this scheme, the color 
 halfway between yellow and transparent is rgba(127,127,0,.5), a 
 partially-transparent dark yellow, rather than rgba(255,255,0,.5).*
 
 The rest of the platform has switched to using premultiplied colors for 
 interpolation, because they react better in cases like this**. CSS 
 transitions and CSS gradients now explicitly use premultiplied colors, 
 and SVG ends up interpolating similarly (they don't quite have the same 
 problem - they track opacity separate from color, so transitioning from 
 color:yellow;opacity:1 to color:yellow;opacity:0 gives you 
 color:yellow;opacity:.5 in the middle, which is the moral equivalent 
 of rgba(255,255,0,.5)).
 
 It would be unfortunate for canvas gradients to be the only 

Re: [whatwg] Canvas feedback (various threads)

2011-02-10 Thread Tab Atkins Jr.
On Thu, Feb 10, 2011 at 4:56 PM, Ian Hickson i...@hixie.ch wrote:
 Looking at that demo, it seems that premultiplied removes possible options
 from the author. How do you go from red to actually transparent black,
 getting darker as you transition? Do you have to give a nearly-transparent
 black (alpha=0.01) to get around it or some such? That seems weird.

If you want to mimic the appearance of red-transparent black in
non-premultiplied space, then yes, you need to put in several
additional color-stops.  In premultiplied space, that transition is a
curved path.
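The two interpolation schemes under discussion can be sketched directly (hypothetical helper names): midway from opaque yellow to transparent (rgba(0,0,0,0)), non-premultiplied interpolation darkens toward rgba(127.5, 127.5, 0, 0.5) — the rgba(127,127,0,.5) from the example above, before rounding — while premultiplied interpolation keeps the hue at rgba(255, 255, 0, 0.5).

```javascript
function lerp(a, b, t) { return a + (b - a) * t; }

// Interpolate r, g, b, a channels independently.
function mixNonPremultiplied(c1, c2, t) {
  return { r: lerp(c1.r, c2.r, t), g: lerp(c1.g, c2.g, t),
           b: lerp(c1.b, c2.b, t), a: lerp(c1.a, c2.a, t) };
}

// Premultiply, interpolate, then un-premultiply for display.
function mixPremultiplied(c1, c2, t) {
  const p = (c) => ({ r: c.r * c.a, g: c.g * c.a, b: c.b * c.a, a: c.a });
  const m = mixNonPremultiplied(p(c1), p(c2), t);
  return m.a === 0 ? { r: 0, g: 0, b: 0, a: 0 }
                   : { r: m.r / m.a, g: m.g / m.a, b: m.b / m.a, a: m.a };
}

const yellow = { r: 255, g: 255, b: 0, a: 1 };
const transparent = { r: 0, g: 0, b: 0, a: 0 };
const mid1 = mixNonPremultiplied(yellow, transparent, 0.5); // dark yellow
const mid2 = mixPremultiplied(yellow, transparent, 0.5);    // full yellow
```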


 It's not only the 'transparent' keyword; it affects all cases of
 gradients between colors with different alpha values and different color
 values.  And in cases where one of the endpoint alphas is not 0, it's
 not possible to get the correct (premultiplied) result with a gradient
 computed in nonpremultiplied space.

 Can you elaborate on that? I'm interested in seeing the cases that you
 can't do in one or the other of the colour spaces we're discussing. If one
 is a strict superset of the other, then it would make sense to specify
 that we use that one. If you can do the same gradients in both, then
 interoperability seems more important.

The two color spaces are equivalent in terms of the colors and gradients
that can be expressed, though the ease of expressing certain gradients
differs between them.

CSS does transitions in premultiplied space, and both FF and Webkit
are planning to do CSS gradients in premultiplied space as well (the
spec already requires it).  It would be unfortunate to have canvas
work differently than CSS here.


 The canvas gradient spec is pretty uniformly and interoperably implemented
 on this front:

   
 http://software.hixie.ch/utilities/js/canvas/?c.clearRect(0%2C%200%2C%20640%2C%20480)%3B%0Ac.save()%3B%0Avar%20gradient%20%3D%20c.createLinearGradient(0%2C0%2C640%2C480)%3B%0Agradient.addColorStop(0%2C%20'rgba(255%2C255%2C255%2C1)')%3B%0Agradient.addColorStop(1%2C%20'rgba(0%2C0%2C0%2C0)')%3B%0Ac.fillStyle%20%3D%20gradient%3B%0Ac.fillRect(0%2C0%2C640%2C480)%3B%0Ac.restore()%3B%0A

 It's easy to work around this issue:

   
 http://software.hixie.ch/utilities/js/canvas/?c.clearRect(0%2C%200%2C%20640%2C%20480)%3B%0Ac.save()%3B%0Avar%20gradient%20%3D%20c.createLinearGradient(0%2C0%2C640%2C480)%3B%0Agradient.addColorStop(0%2C%20'rgba(255%2C255%2C255%2C1)')%3B%0Agradient.addColorStop(1%2C%20'rgba(255%2C255%2C255%2C0)')%3B%0Ac.fillStyle%20%3D%20gradient%3B%0Ac.fillRect(0%2C0%2C640%2C480)%3B%0Ac.restore()%3B%0A

 I'm happy to change the spec on this, but I'm not going to change it ahead
 of the implementations. If you want this changed, I recommend getting the
 browser vendors to change this.

Ok.

~TJ


Re: [whatwg] Canvas feedback (various threads)

2011-02-10 Thread Ian Hickson
On Thu, 10 Feb 2011, Boris Zbarsky wrote:
 On 2/10/11 7:56 PM, Ian Hickson wrote:
  On Thu, 3 Feb 2011, Boris Zbarsky wrote:
   
   It looks like CSS rgba colors with an alpha value of 0 are serialized as
   rgba() with 0 as the alpha value. in at least Gecko, Webkit, and
   Presto.
   
   It also looks like canvas style color with an alpha value of 0 are
   serialized as rgba() with 0.0 as the alpha value in Gecko 3.6, Webkit,
   and Presto.
 
 I have to correct myself.  The above is the behavior in _Chrome_, not in 
 all Webkit-based browsers.  In Safari, canvas style colors are 
 serialized as the original string they were set to, apparently (so if 
 you set it to 0.0 you get 0.0 back; if you set it to 0 you get 0 back, 
 and if you set interoperability you get interoperability back...).

I think you had a typo in your test. As far as I can tell, all 
WebKit-based browsers act the same as Opera and Firefox 3 on this:

   
http://software.hixie.ch/utilities/js/canvas/?c.clearRect(0%2C%200%2C%20640%2C%20480)%3B%0Ac.save()%3B%0Atry%20%7B%0A%20%20c.strokeStyle%20%3D%20'red'%3B%0A%20%20c.fillText(c.strokeStyle%2C%2020%2C%2080)%3B%0A%20%20c.strokeStyle%20%3D%20'transparent'%3B%0A%20%20c.fillText(c.strokeStyle%2C%2020%2C%20120)%3B%0A%7D%20finally%20%7B%0A%20%20c.restore()%3B%0A%7D%0A


 An additional data point is that in the IE9 RC you get 0, just like in 
 Firefox 4.

Good to know.


 Which is less interop than it seems (due to Safari's behavior), and 
 about to disappear completely, since both IE9 and Firefox 4 will ship 
 with the 0 instead of 0.0  :(

Is there no chance to fix this in Firefox 4? It _is_ a regression. :-)

-- 
Ian Hickson   U+1047E)\._.,--,'``.fL
http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'


Re: [whatwg] Canvas feedback (various threads)

2011-02-10 Thread James Robinson
On Thu, Feb 10, 2011 at 8:39 PM, Boris Zbarsky bzbar...@mit.edu wrote:

 On 2/10/11 11:31 PM, Ian Hickson wrote:

 I think you had a typo in your test. As far as I can tell, all
 WebKit-based browsers act the same as Opera and Firefox 3 on this:


 http://software.hixie.ch/utilities/js/canvas/?c.clearRect(0%2C%200%2C%20640%2C%20480)%3B%0Ac.save()%3B%0Atry%20%7B%0A%20%20c.strokeStyle%20%3D%20'red'%3B%0A%20%20c.fillText(c.strokeStyle%2C%2020%2C%2080)%3B%0A%20%20c.strokeStyle%20%3D%20'transparent'%3B%0A%20%20c.fillText(c.strokeStyle%2C%2020%2C%20120)%3B%0A%7D%20finally%20%7B%0A%20%20c.restore()%3B%0A%7D%0A


  On that test, Safari 5.0.3 on Mac outputs "red" and "transparent" for the
  two strings.

 And this test:
 http://software.hixie.ch/utilities/js/canvas/?c.clearRect(0%2C%200%2C%20640%2C%20480)%3B%0Ac.save()%3B%0Atry%20%7B%0A%20%20c.strokeStyle%20%3D%20'red'%3B%0A%20%20c.fillText(c.strokeStyle%2C%2020%2C%2080)%3B%0A%20%20c.strokeStyle%20%3D%20'orly%2C%20do%20you%20think%20so'%3B%0A%20%20c.fillText(c.strokeStyle%2C%2020%2C%20120)%3B%0A%7D%20finally%20%7B%0A%20%20c.restore()%3B%0A%7D%0A

  outputs "red" and "orly, do you think so" in the same browser.

 Does Safari on Mac behave differently from Safari on Windows here?


The version of WebKit used by Safari 5.0.3 is rather antiquated at this
point.  Using the latest WebKit nightly build, or Chrome 10.0.648.45
dev (which has a significantly newer version of WebKit), I get "#ff0000" and
"rgba(0, 0, 0, 0.0)" on the first test and "#ff0000" / "#ff0000" on the second.
 Presumably at some point Apple will release a new version of Safari that
matches the behavior nightlies currently have.

- James



  Which is less interop than it seems (due to Safari's behavior), and
 about to disappear completely, since both IE9 and Firefox 4 will ship
 with the 0 instead of 0.0  :(


 Is there no chance to fix this in Firefox 4? It _is_ a regression. :-)


 At this point, probably not.  If it's not actively breaking websites it's
 not being changed before final release.  If it is, we'd at least think about
 it...

 -Boris



Re: [whatwg] Canvas feedback (various threads)

2011-02-10 Thread Boris Zbarsky

On 2/10/11 11:54 PM, James Robinson wrote:

The version of WebKit used by Safari 5.0.3 is rather antiquated at this
point.  Using the latest WebKit nightly build, or Chrome 10.0.648.45
dev (which has a significantly newer version of WebKit), I get "#ff0000"
and "rgba(0, 0, 0, 0.0)" on the first test and "#ff0000" / "#ff0000" on the
second.  Presumably at some point Apple will release a new version of
Safari that matches the behavior nightlies currently have.


Ah, I see.

My point stands: interoperability among UAs that users are using right 
now (which doesn't include webkit nightlies, but does include Chrome 9 
which seems to have the new behavior) is just not there.


-Boris


Re: [whatwg] Canvas feedback (various threads)

2010-11-15 Thread Ian Hickson
On Wed, 11 Aug 2010, Philip Taylor wrote:
 On Wed, Aug 11, 2010 at 9:35 PM, Ian Hickson i...@hixie.ch wrote:
  On Thu, 29 Jul 2010, Gregg Tavares (wrk) wrote:
  source-over
     glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
 
  I tried searching the OpenGL specification for either glBlendFunc or 
  GL_ONE_MINUS_SRC_ALPHA and couldn't find either. Could you be more 
  specific regarding what exactly we would be referencing?  I'm not 
  really sure I understand your proposal. 
 
 The OpenGL spec omits the gl/GL_ prefixes - search for BlendFunc 
 instead. (In the GL 3.0 spec, tables 4.1 (the FUNC_ADD row) and 4.2 seem 
 relevant for defining the blend equations.)

Maybe I'm looking at the wrong specification, but I couldn't find any 
conformance requirements in the document I read on any of the pages that 
contain BlendFunc:

   http://www.opengl.org/registry/doc/glspec40.core.20100311.pdf

If there's specific text you would like to have in the HTML specification 
to replace the current definitions, I'm happy to use it. I've been unable 
to work out what such text should be, however.
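As a sketch of what that blend-function reference would pin down: glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA) is the per-channel Porter-Duff source-over operation on premultiplied-alpha pixels. A minimal illustration (assuming premultiplied [r, g, b, a] values in 0..1; the function name is mine):

```javascript
// Porter-Duff source-over on premultiplied-alpha pixels, the operation
// glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA) performs per channel:
//   out = src * 1 + dst * (1 - srcAlpha)
function sourceOver(src, dst) {   // src, dst: premultiplied [r, g, b, a]
  const inv = 1 - src[3];
  return [
    src[0] + dst[0] * inv,
    src[1] + dst[1] * inv,
    src[2] + dst[2] * inv,
    src[3] + dst[3] * inv,
  ];
}

// Opaque red over opaque blue leaves red:
// sourceOver([1, 0, 0, 1], [0, 0, 1, 1]) → [1, 0, 0, 1]
```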


On Wed, 11 Aug 2010, David Flanagan wrote:
 
 I think that the sentence The transformations must be performed in 
 reverse order is sufficient to remove the ambiguity in multiplication 
 order.  So the spec is correct (but confusing) as it stands, except that 
 it doesn't actually say that the CTM is to be replaced with the product 
 of the CTM and the new matrix.  It just says multiply them.
 
 I suggest changing the description of transform() from:
 
  must multiply the current transformation matrix with the matrix 
  described by:
 
 To something like this:
 
 must set the current transformation matrix to the matrix obtained by 
 postmultiplying the current transformation matrix with this matrix:
 
 a c e
 b d f
 0 0 1
 
 That is:
 
  a c e
 CTM = CTM *  b d f
  0 0 1

I tried to update the text a little, but I didn't explicitly say 
postmultiplying, since saying that you multiple A by B seems less 
ambiguous to me than saying that you postmultiply A with B or that the 
result should be A * B (in the latter two cases you'd have to define 
postmultiply and * respectively).


 Changing translate(), scale() and rotate() to formally define them in 
 terms of transform() would be simple, and the current prose descriptions 
 of the methods could then be moved to the non-normative green box.  The 
 current descriptions suffer from the use of the word add near the word 
 matrix when in fact a matrix multiplication is to be performed, but I 
 don't think they can be mis-interpreted as they stand. I'd be happy to 
 write new method descriptions if you want to tighten things up in this 
 way, however.

I'm happy with the text as it stands if it's unambiguous, but if you have 
any specific proposed text let me know and I'll see if it is better. :-)
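David's suggestion of defining translate(), scale() and rotate() in terms of transform() can be sketched as follows. This assumes the column-vector a..f convention used elsewhere in the thread; the wrapper functions are illustrative, not spec text:

```javascript
// How translate(), scale() and rotate() could be defined purely in terms
// of transform(a, b, c, d, e, f), where the six numbers fill the matrix
//   | a c e |
//   | b d f |
//   | 0 0 1 |
function translate(ctx, x, y) { ctx.transform(1, 0, 0, 1, x, y); }
function scale(ctx, sx, sy)  { ctx.transform(sx, 0, 0, sy, 0, 0); }
function rotate(ctx, angle) {
  const c = Math.cos(angle), s = Math.sin(angle);
  ctx.transform(c, s, -s, c, 0, 0);
}
```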


On Wed, 11 Aug 2010, Boris Zbarsky wrote:
 On 8/11/10 5:42 PM, David Flanagan wrote:
  I think that the sentence The transformations must be performed in 
  reverse order is sufficient to remove the ambiguity in multiplication 
  order.
 
 It is?  It sounds pretty confusing to me... reverse from what?
 
 The right way to specify what happens when composing two transformations 
 is to just explicitly say which transformation is applied first, instead 
 of talking about the algebraic operations on the matrix representations.  
 In my opinion.

Yeah, I'm not sure it's perfect as is either. If anyone has any suggested 
improvements for the text please do propose it. I'm happy to massage it 
into RFC2119-speak; unfortunately my understanding of the maths and 
graphics here is not sufficient for me to write the actual requirements.


  must set the current transformation matrix to the matrix obtained by 
  postmultiplying the current transformation matrix with this matrix:
  
  a c e
  b d f
  0 0 1
 
 See, that makes inherent assumptions about row vs column vectors that 
 aren't stated anywhere, right?

Yes, and that assumption has in fact bitten us in the behind before.


On Wed, 11 Aug 2010, Boris Zbarsky wrote:
 On 8/11/10 4:35 PM, Ian Hickson wrote:
  On Mon, 19 Jul 2010, Boris Zbarsky wrote:
   
   I do think the spec could benefit from an example akin to the one in 
   the CoreGraphics documentation.
  
  I followed your references but I couldn't figure out which example you 
  meant. What exactly do you think we should add?
 
 Effectively the part starting with the second paragraph under 
 Discussion at 
 http://developer.apple.com/mac/library/documentation/GraphicsImaging/Reference/CGAffineTransform/Reference/reference.html#//apple_ref/doc/c_ref/CGAffineTransform
  
 and going through the two linear equations defining x' and y'.  Plus a 
 bit that says how the linear list of arguments passed to transform() 
 maps to the 2-dimensional array of numbers in the transformation matrix.

I assume you mean the discussion in the definition of struct 
CGAffineTransform? I'm 

Re: [whatwg] Canvas feedback (various threads)

2010-11-15 Thread Charles Pritchard

On Sun, 3 Oct 2010, Charles Pritchard wrote:

Having worked quite awhile with WebApps APIs, Canvas included, I've
concluded that HTML can be implemented within the web stack.
It's my firm belief that the Web Apps specifications can and should be
proven complete.

If by complete you mean self-hosting, then: why? That seems like a
very arbitrary goal.

If not, what do you mean?
I intended that section 4, The elements of HTML, with minor exceptions, 
should be displayable with Canvas, and operable with a minimal 
canvas+webapps profile.

Exceptions include, but are not limited to: script, iframe, embed, 
object, and for some time, video and audio.


There are minimal instances in the current spec which prohibit this. 
For those instances, there are additional use cases, outside of 
'completion', mainly having relevance to accessibility (a11y).


I'm concerned that the issue is being avoided because it originated from
a project you disagree with; and has biased your judgment of additional
use cases or possible remedies.

Good lord no. This is merely a prioritisation issue. I'm sure we'll add
lots of metrics over time.

But I'm not requesting a discussion on various aspects of fonts.

I'm pointing to the non-availability of one particular metric, as a 
blocking issue in my ability to keep a string of text on the same 
baseline.

The problem is that anytime we add anything to canvas, implementors get so
excited that they drop what they're doing and implement it (in some cases,
overnight!). This takes resources away from other features. If we're to
get the whole platform to improve, we need to make sure that everything
gets a chance to be implemented. This means we can't just be adding stuff
to canvas all the time.
I'm no newbie; I understand and appreciate your oversight of HTML5 
implementations, your flexibility in exploring new standards (like 
WebSRT), and your conscientious approach to managing a living document.

You're making a slippery slope argument, and I don't think it fits: 
I'd asked for one property to be added to the spec document, as an 
extension of the textBaseline attribute already in the document.

Its implementation time is minimal.

It's part of my approach, to give weight to implementation time and scope.


We need to expose baseline positioning somewhere

Why? What's the use case? Implementing a browser isn't a sane use case,
sorry.

Continuing that discussion is not sane...

Exposing the baseline position of inline data would allow for 
fine-grained positioning of elements and better control of 
interactivity with text.

Currently, I can't use fillText with two separate font sizes and 
underline them.


The textBaseline attribute gets me close, but it's insufficient.
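For concreteness, the workaround available today can be sketched as below. The 0.1em underline offset is a guess, which is exactly the deficiency being reported: the API exposes no real descent metric. Function and parameter names are illustrative:

```javascript
// Draw runs of different sizes on one shared baseline with
// textBaseline = 'alphabetic', faking the underline with a stroked line
// at a guessed offset below the baseline.
function underlinedRuns(ctx, runs, x, baselineY) {
  ctx.textBaseline = 'alphabetic';
  for (const run of runs) {          // run: { text, px }
    ctx.font = run.px + 'px sans-serif';
    ctx.fillText(run.text, x, baselineY);
    const w = ctx.measureText(run.text).width;
    const offset = run.px * 0.1;     // guessed descent fraction, not a metric
    ctx.beginPath();
    ctx.moveTo(x, baselineY + offset);
    ctx.lineTo(x + w, baselineY + offset);
    ctx.stroke();
    x += w;
  }
  return x;                          // pen position after the last run
}
```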


Nobody wants to see another vendor-specific extension; can we try to
form an agreement on this, so we can avoid that?

On the contrary, we _do_ want to see vendor-specific extensions. That's
how we get implementation experience and how the Web improves.

Standardisation is the penultimate stage in a feature's development, after
studying author practices, experimental implementations in vendor-specific
features, and studying use cases, and ahead only of the final convergence
of browser implementations


Your enthusiasm for the final convergence is charming.

I made a poor generalization. Most of us do not want to see 
vendor-specific extensions which stick.

Example:
-moz-transform = -webkit-transform = transform = ...;




Re: [whatwg] Canvas feedback (various threads)

2010-10-03 Thread Charles Pritchard

 On 8/11/2010 1:35 PM, Ian Hickson wrote:

On Tue, 10 Aug 2010, Charles Pritchard wrote:

I've worked on a substantial amount of code dealing with text editing.
At present, the descent of the current font has been the only
deficiency.




I feel that using Canvas to implement HTML5/CSS provides a quality proof
of the completeness of the 2D API.

The 2D API isn't complete by a long shot, there's no difficulty in proving
that. It's not trying to be complete.


Having worked quite awhile with WebApps APIs, Canvas included, I've 
concluded that HTML can be implemented within the web stack.


CSS, Canvas and DOM interfaces are sufficient to provide an HTML and SVG 
user agent, and WebApps APIs associate well with the host environment.


To this date, there have been very few issues that have blocked me from 
implementing such agents.


It's my firm belief that the Web Apps specifications can and should be 
proven complete. Hypertext and DOM manipulation are well tested, parsing 
has been well documented. We should hold HTML5 elements to the same 
standard: the WebApps API should be sufficient to implement HTML UI 
elements. Canvas contexts are the de facto standard for painting to a 
screen. If an HTML element or an SVG element can not be produced within 
the Canvas API, the WebApps spec is deficient.


Currently, there's a deficiency in the interoperability of these 
standards: Web Fonts, HTML Forms, CSS line boxes, and SVG text, in 
relation to baseline positioning. It's not a canvas issue; it simply 
came to light while I was using canvas.


I'm certain that you've not heard a browser vendor tell you that 
returning additional font data would be costly, or take away valuable 
resources. I'm concerned that the issue is being avoided because it 
originated from a project you disagree with; and has biased your 
judgment of additional use cases or possible remedies.


We need to expose baseline positioning somewhere; we typically have 
start positions by checking directionality and getting the compute style 
of the css box. There are some items in range selectors as well as 
TextMetrics that can help in returning the positioning of glyphs within 
a string. What we don't have are standard ways to access baseline 
metrics. It's a one-way process, it is not currently exposed as a 
scripting API. Lets fix it.


John Daggett has given me some constructive feedback; he let me know 
that a string may be composed of several fonts, and that the condition 
may affect metrics. This is why, he believes, returning baseline metrics 
on a given font is not a complete solution. He recommended revising 
TextMetrics in canvas. I provided an alternative solution, getting the 
computed style of a string within an inline element. That is... using 
document.createElement('span').textContent = 'complex string'; And 
gathering the computed value of that span.


Some issues in the interoperability of Web Fonts and Canvas text APIs 
still exist. I recommend implementing both solutions, adding baseline 
metrics to TextMetrics in canvas,
and returning baseline attributes for CSS inline elements, as a computed 
style.


This approach would avoid the interop issue I mentioned, and return 
reliable information to scripting APIs across CSS, HTML and Canvas.


That information, about baseline positioning of text, could then be used 
for various use cases. The computed information is already available to 
browser vendors, and would be inexpensive to expose to existing APIs. 
Nobody wants to see another vendor-specific extension; can we try to 
form an agreement on this, so we can avoid that?


-Charles








Re: [whatwg] Canvas feedback (various threads)

2010-08-12 Thread David Flanagan

Boris Zbarsky wrote:

On 8/11/10 5:42 PM, David Flanagan wrote:

I think that the sentence The transformations must be performed in
reverse order is sufficient to remove the ambiguity in multiplication
order.


It is?  It sounds pretty confusing to me... reverse from what?


I agree that it is confusing.  But Ian had asked whether it is possible 
to implement the spec, as now written, incorrectly.  I suspect that any 
implementation that did transformations wrong would violate the spec 
somewhere.  I still think it is worth clarifying the spec, but by Ian's 
criteria, I suspect it is not strictly necessary.


The right way to specify what happens when composing two transformations 
is to just explicitly say which transformation is applied first, instead 
of talking about the algebraic operations on the matrix representations. 
 In my opinion.


But if you don't talk about the algebraic operations then you haven't 
really defined what a transformation is, have you?





must set the current transformation matrix to the matrix obtained by
postmultiplying the current transformation matrix with this matrix:

a c e
b d f
0 0 1


See, that makes inherent assumptions about row vs column vectors that 
aren't stated anywhere, right?


I suppose it does.  So to be complete, the spec would have to show the 
math required to transform a point (x,y) using the CTM.


Are you suggesting that there is some way that the spec can be written 
generically without any assumptions about row vector or column vector 
format?  Note that the matrix shown above already appears in the current 
version of the transform() method description.  I don't see how to avoid 
picking one form or another unless you want to define a CTM as an array 
of 6 numbers and show the formulas for updating each of those numbers 
without referring to matrix multiplication at all.


David


-Boris





Re: [whatwg] Canvas feedback (various threads)

2010-08-12 Thread Boris Zbarsky

On 8/12/10 10:59 AM, David Flanagan wrote:

But if you don't talk about the algebraic operations then you haven't
really defined what a transformation is, have you?


Given that the input is in coordinates, we need to talk about algebraic 
operations to define the transformation on vectors those coordinates 
produce.  That is, we need a clear definition of what the output vector 
is given an input vector and a list of 6 numbers.


But once we have that, composition of transformation can be described 
simply as composition (thinking of them as functions), without reference 
to algebraic manipulation of their particular matrix representations.



I suppose it does. So to be complete, the spec would have to show the
math required to transform a point (x,y) using the CTM.


Yes, indeed.
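The definition Boris is asking for can be written out directly: given the six numbers (a, b, c, d, e, f) and an input point (x, y), the output point is x' = a*x + c*y + e and y' = b*x + d*y + f (column-vector convention, matching CSS/SVG matrix(a,b,c,d,e,f)). As a sketch:

```javascript
// Apply a 2D affine transform given as the six numbers (a, b, c, d, e, f)
// to the point (x, y), column-vector convention:
//   x' = a*x + c*y + e
//   y' = b*x + d*y + f
function applyCTM([a, b, c, d, e, f], x, y) {
  return [a * x + c * y + e, b * x + d * y + f];
}

// Identity leaves the point alone; a pure translation adds (e, f):
// applyCTM([1, 0, 0, 1, 0, 0], 3, 4)   → [3, 4]
// applyCTM([1, 0, 0, 1, 10, 20], 3, 4) → [13, 24]
```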


Are you suggesting that there is some way that the spec can be written
generically without any assumptions about row vector or column vector
format?


No, I'm just saying that the definition of transform composition doesn't 
need to make such assumptions.



Note that the matrix shown above already appears in the current
version of the transform() method description. I don't see how to avoid
picking one form or another unless you want to define a CTM as an array
of 6 numbers and show the formulas for updating each of those numbers
without referring to matrix multiplication at all.


While that would be a viable course of action, I don't think there's a 
need for that.  Defining the CTM as a matrix and defining how the 6 
numbers produce the matrix and exactly how the matrix acts on vectors is 
fine; while it's actually a bit more text than the other it produces a 
simpler conceptual picture.
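The "array of 6 numbers" formulation David mentions would look like this when composing the CTM with a new matrix, i.e. CTM = CTM * M written out per component (column-vector convention; the function name is mine):

```javascript
// Compose the current transform (a, b, c, d, e, f) with a new matrix
// (a2, b2, c2, d2, e2, f2), each representing
//   | a c e |
//   | b d f |
//   | 0 0 1 |
// and return the six numbers of the product CTM * M.
function composeCTM([a, b, c, d, e, f], [a2, b2, c2, d2, e2, f2]) {
  return [
    a * a2 + c * b2,
    b * a2 + d * b2,
    a * c2 + c * d2,
    b * c2 + d * d2,
    a * e2 + c * f2 + e,
    b * e2 + d * f2 + f,
  ];
}
```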


-Boris


[whatwg] Canvas feedback (various threads)

2010-08-11 Thread Ian Hickson
On Mon, 19 Jul 2010, David Flanagan wrote:
 The spec describes the transform() method as follows:
 
  The transform(m11, m12, m21, m22, dx, dy) method must multiply the
  current transformation matrix with the matrix described by:
 
  m11 m21 dx
  m12 m22 dy
  0   0   1

 The first number in these argument names is the column number and the
 second is the row number.  This surprises me, and I want to check that
 it is not an inadvertent error:

 1) Wikipedia says (http://en.wikipedia.org/wiki/Matrix_multiplication)
 that the convention is to list row numbers first

 2) Java's java.awt.geom.AffineTransform class also lists the row index
 first, as in the following javadoc excerpt:

  [ x']   [  m00  m01  m02  ] [ x ]   [ m00x + m01y + m02 ]
  [ y'] = [  m10  m11  m12  ] [ y ] = [ m10x + m11y + m12 ]
  [ 1 ]   [   0    0    1   ] [ 1 ]   [         1         ]

 It would be nice if this spec was not inconsistent with other usage.
 Even changing the argument names to neutral a,b,c,d,dx,dy would be
 better than what is there currently.

Done.


On Mon, 19 Jul 2010, Boris Zbarsky wrote:

 I do think the spec could benefit from an example akin to the one in the
 CoreGraphics documentation.

I followed your references but I couldn't figure out which example you
meant. What exactly do you think we should add?


On Tue, 20 Jul 2010, Yp C wrote:

 But I think the number can indicate the position of the value in the
 matrix,if change them into a,b,c... like cairo, I think it will still
 confuse the beginner.

The a,b,c,... notation is at least as common as the m11,m12,... notation.

On Mon, 19 Jul 2010, Brendan Kenny wrote:

 Looking at that last CoreGraphics link, it seems like the current names
 are an artifact of a row-vector matrix format (in which 'b' *is* m12)
 that is transposed for external exposure in the browser, but retains the
 same entry indexing.

Yes.


 The row- vs column-vector dispute is an ancient one, but I can't think
 of anyone that refers to an entry of a matrix by [column, row].

It appears at least .NET uses the same notation and order.


On Mon, 19 Jul 2010, David Flanagan wrote:

 While I'm harping on the transform() method, I'd like to point out that
 the current spec text must multiply the current transformation matrix
 with the matrix described by... is ambiguous because matrix
 multiplication is not commutative.  Perhaps an explicit formula that
 showed the order would be clearer.

 Furthermore, if the descriptions for translate(), scale() and rotate()
 were to altered to describe them in terms of transform() that would
 tighten things up.

Could you describe what interpretations of the current text would be valid
but would not be compatible with the bulk of existing implementations? I'm
not sure how to fix this exactly. (Graphics is not my area of expertise,
unfortunately. I'm happy to apply any proposed text though!)


On Tue, 20 Jul 2010, Andreas Kling wrote:

 Greetings!

 The current draft of HTML5 says about rendering radial gradients:

 This effectively creates a cone, touched by the two circles defined in
 the creation of the gradient, with the part of the cone before the start
 circle (0.0) using the color of the first offset, the part of the cone
 after the end circle (1.0) using the color of the last offset, and areas
 outside the cone untouched by the gradient (transparent black).

 I find this behavior of transparent spread rather strange and it
 doesn't match any of the SVG gradient's spreadMethod options.

 The sensible behavior here IMO is pad spread (SVG default, and what
 most browsers implementing canvas currently do) which means repeating
 the terminal color stops indefinitely.

I'm pretty sure it's too late to change this.
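For illustration only: "pad spread" amounts to clamping the gradient position into [0, 1] so the terminal color stops repeat indefinitely, where the spec instead yields transparent black outside the cone. A crude nearest-stop sketch, not the spec's interpolation:

```javascript
// Sample a gradient's color at position t with pad spread: positions
// outside [0, 1] are clamped, so the terminal stops extend forever.
// stops: [{ offset, color }], sorted by offset. Nearest-stop only; real
// implementations interpolate between adjacent stops.
function padSample(stops, t) {
  const clamped = Math.min(1, Math.max(0, t));
  let color = stops[0].color;
  for (const s of stops) if (s.offset <= clamped) color = s.color;
  return color;
}

// padSample([{offset: 0, color: 'red'}, {offset: 1, color: 'blue'}],  1.5) → 'blue'
// padSample([{offset: 0, color: 'red'}, {offset: 1, color: 'blue'}], -0.5) → 'red'
```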


On Wed, 28 Jul 2010, David Flanagan wrote:

 Firefox and Chrome disagree about the implementation of the
 destination-atop, source-in, destination-in, and source-out compositing
 operators. [...]

 I suspect, based on the reference to an infinite transparent black
 bitmap in 4.8.11.1.13 Drawing model that Firefox gets this right and
 Chrome gets it wrong, but it would be nice to have that confirmed.

 I suggest clarifying 4.8.11.1.3 Compositing to mention that the
 compositing operation takes place on all pixels within the clipping
 region, and that some compositing operators clear large portions of the
 canvas.

On Wed, 28 Jul 2010, Tab Atkins Jr. wrote:

 The spec is completely clear on this matter - Firefox is right,
 Chrome/Safari are wrong.  They do it wrongly because that's how
 CoreGraphics, their graphics library, does things natively.

On Wed, 28 Jul 2010, Oliver Hunt wrote:

 This is the way the webkit canvas implementation has always worked,
 firefox implemented this incorrectly, and the spec was based off of that
 implementation.

Actually the spec was based off the WebKit implementation, but this
particular part had no documentation describing what it did, so I
couldn't specify it. :-(
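To see why some operators "clear large portions of the canvas" under the infinite-transparent-black-bitmap model: every pixel in the clip region is composited, and source-in, for example, keeps only destination-weighted source (out = src * dstAlpha). A sketch on premultiplied pixels (the function name is mine):

```javascript
// Porter-Duff source-in on premultiplied-alpha pixels:
//   out = src * dstAlpha
// Wherever the destination pixel is transparent, the result is
// transparent black — which is why drawing a small shape with source-in
// wipes the rest of the clip region.
function sourceIn(src, dst) {   // src, dst: premultiplied [r, g, b, a]
  return [src[0] * dst[3], src[1] * dst[3], src[2] * dst[3], src[3] * dst[3]];
}

// Opaque red drawn where the destination is empty vanishes entirely:
// sourceIn([1, 0, 0, 1], [0, 0, 0, 0]) → [0, 0, 0, 0]
```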


On Fri, 30 Jul 2010, 

Re: [whatwg] Canvas feedback (various threads)

2010-08-11 Thread Philip Taylor
On Wed, Aug 11, 2010 at 9:35 PM, Ian Hickson i...@hixie.ch wrote:
 On Thu, 29 Jul 2010, Gregg Tavares (wrk) wrote:
 source-over
    glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);

 I tried searching the OpenGL specification for either glBlendFunc or
 GL_ONE_MINUS_SRC_ALPHA and couldn't find either. Could you be more
 specific regarding what exactly we would be referencing?  I'm not really
 sure I understand your proposal.

The OpenGL spec omits the gl/GL_ prefixes - search for BlendFunc
instead. (In the GL 3.0 spec, tables 4.1 (the FUNC_ADD row) and 4.2
seem relevant for defining the blend equations.)

-- 
Philip Taylor
exc...@gmail.com


Re: [whatwg] Canvas feedback (various threads)

2010-08-11 Thread David Flanagan

Ian Hickson wrote:

On Mon, 19 Jul 2010, David Flanagan wrote:

Even changing the argument names to neutral a,b,c,d,dx,dy would be
better than what is there currently.


Done.



Thanks



On Mon, 19 Jul 2010, David Flanagan wrote:

While I'm harping on the transform() method, I'd like to point out that
the current spec text must multiply the current transformation matrix
with the matrix described by... is ambiguous because matrix
multiplication is not commutative.  Perhaps an explicit formula that
showed the order would be clearer.

Furthermore, if the descriptions for translate(), scale() and rotate()
were to altered to describe them in terms of transform() that would
tighten things up.


Could you describe what interpretations of the current text would be valid
but would not be compatible with the bulk of existing implementations? I'm
not sure how to fix this exactly. (Graphics is not my area of expertise,
unfortunately. I'm happy to apply any proposed text though!)



I think that the sentence The transformations must be performed in 
reverse order is sufficient to remove the ambiguity in multiplication 
order.  So the spec is correct (but confusing) as it stands, except that 
it doesn't actually say that the CTM is to be replaced with the product 
of the CTM and the new matrix.  It just says multiply them.


I suggest changing the description of transform() from:


must multiply the current transformation matrix with the matrix described by:


To something like this:

must set the current transformation matrix to the matrix obtained by 
postmultiplying the current transformation matrix with this matrix:


a c e
b d f
0 0 1

That is:

 a c e
CTM = CTM *  b d f
 0 0 1

Changing translate(), scale() and rotate() to formally define them in 
terms of transform() would be simple, and the current prose descriptions 
of the methods could then be moved to the non-normative green box.  The 
current descriptions suffer from the use of the word add near the word 
matrix when in fact a matrix multiplication is to be performed, but I 
don't think they can be mis-interpreted as they stand. I'd be happy to 
write new method descriptions if you want to tighten things up in this 
way, however.


David


Re: [whatwg] Canvas feedback (various threads)

2010-08-11 Thread Charles Pritchard


 On Tue, 10 Aug 2010, Charles Pritchard wrote:
 
 I recommend not using canvas for text editing.
 
 I've worked on a substantial amount of code dealing with text editing. 
 At present, the descent of the current font has been the only 
 deficiency.
 
 Well, there's also the way it doesn't interact with the OS text selection, 
 copy-and-paste, drag-and-drop, accessibility APIs, the browsers' undo 
 logic, the OS spell-checker and grammar-checker, the OS text tools like 
 Search in Spotlight, and the i18n features like bidi handling. And 
 that's just for starters. :-)

Drag-and-drop works just fine, it's covered by event.dataTransfer. 
Accessibility has been addressed through drawFocusRing amongst other techniques 
of including relevant HTML within the canvas tag. Undo logic is covered by the 
history state objects. Text selection visuals work using measureText and 
textBaseline bottom.

OS tools are fairly out of spec, though I do understand the value of extending 
a dataTransfer API to assist with UA context menu hooks, such as 
spellcheck/autocomplete/grammar suggestions. That said, there's nothing in the 
way of  implementing those features within an HTML app. Even offline, both can 
be accomplished via WebSQL.

Perhaps there should be a discussion about dataTransfer and the context menu.


 
 I feel that using Canvas to implement HTML5/CSS provides a quality proof 
 of the completeness of the 2D API.
 
 The 2D API isn't complete by a long shot, there's no difficulty in proving 
 that. It's not trying to be complete.

Perhaps completeness is a poor choice. I'm concerned about obstructions.

At present, there are very few obstacles. CSS and Canvas have allowed me to 
create implementations of HTML Forms, CSS line boxes and SVG.

Baseline positioning is an obstacle.

I realize that SVG has an even-odd fill mode. This is something that can be 
converted, and is rarely used.


 
 -- 
 Ian Hickson               U+1047E                )\._.,--....,'``.    fL
 http://ln.hixie.ch/       U+263A                /,   _.. \   _\  ;`._ ,.
 Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'

Through CSS computed styles, or a canvas interface, I will get accessible 
baseline offsets. :-)

Re: [whatwg] Canvas feedback (various threads)

2010-08-11 Thread Boris Zbarsky

On 8/11/10 4:35 PM, Ian Hickson wrote:

On Mon, 19 Jul 2010, Boris Zbarsky wrote:


I do think the spec could benefit from an example akin to the one in the
CoreGraphics documentation.


I followed your references but I couldn't figure out which example you
meant. What exactly do you think we should add?


Effectively the part starting with the second paragraph under 
Discussion at 
http://developer.apple.com/mac/library/documentation/GraphicsImaging/Reference/CGAffineTransform/Reference/reference.html#//apple_ref/doc/c_ref/CGAffineTransform 
and going through the two linear equations defining x' and y'.  Plus a 
bit that says how the linear list of arguments passed to transform() 
maps to the 2-dimensional array of numbers in the transformation matrix.


-Boris


Re: [whatwg] canvas feedback

2010-03-12 Thread David Levin
On Thu, Mar 11, 2010 at 9:16 PM, Ian Hickson i...@hixie.ch wrote:


 On Mon, 22 Feb 2010, David Levin wrote:
 
  I've talked with some other folks on WebKit (Maciej and Oliver) about
  having a canvas that is available to workers. They suggested some nice
  modifications to make it an offscreen canvas, which may be used in the
  Document or in a Worker.

 What are the use cases?


The simplest is resize/rotate for large images.
However, there are more advanced uses in which the page heavily uses 
canvas. Since there are many canvas operations, they take a noticeable 
amount of time, and it would be better that they are done on a 
different thread than the main one.

Another related use case is when a page needs to render multiple 
canvases and each one is involved. (One could think of some sort of 
animation.) It would also be nice if they could do multiple operations 
in parallel (using multiple workers, for instance.)


[whatwg] canvas feedback

2010-03-11 Thread Ian Hickson
On Mon, 7 Dec 2009, Gregg Tavares wrote:

 Has there been a proposal for allowing mouse events to go through a 
 canvas element where it is transparent to the element below?

On Mon, 7 Dec 2009, Jason Oster wrote:

 The pointer-events CSS property was recently added to Firefox for web 
 (HTML) content:  https://developer.mozilla.org/en/CSS/pointer-events
 
 Although the current implementation only supports the values 'auto' and 
 'none' for web content, it seems like some of the SVG-only values could 
 possibly be used for your example case.  Or at the very least, I expect 
 the property could be extended with additional values specifically for 
 similar use cases with web content.

On Mon, 7 Dec 2009, Jonas Sicking wrote:
 
 Indeed. We've talked about using this CSS property as a way to let only 
 the non-transparent parts of an image receive events. Unfortunately if I 
 recall correctly none of the currently defined values of pointer-events 
 fit that behavior, but the intent would be to add one. In any case I 
 think this CSS property is the way to add the feature you want, and so 
 the discussion is probably best had on the www-style mailing list.

On Tue, 8 Dec 2009, Robert O'Callahan wrote:
 
 www-svg would be better, since it's an SVG property.

I agree that pointer-events should be the way to fix this.
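For reference, the pointer-events approach amounts to the following (only 'auto' and 'none' applied to HTML content at the time of this thread; the per-pixel hit testing discussed above would need a new value):

```javascript
// Toggle whether a canvas overlay intercepts mouse events at all.
// With pointer-events: none, clicks fall through to whatever element
// is underneath the canvas.
function makeClickThrough(canvas) {
  canvas.style.pointerEvents = 'none';
}
function makeInteractive(canvas) {
  canvas.style.pointerEvents = 'auto';
}
```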


On Sat, 5 Dec 2009, Franz Buchinger wrote:

 Gears introduced the concept of an offscreen canvas that doesn't draw 
 anything in the browser window, but can be used to manipulate images in 
 a web worker.
 
 I used this functionality to implement a resize-before-upload feature 
 in my photo gallery uploader. Now I'm trying to port my uploader to 
 HTML5 but there seems no way to delegate the scaling work to a HTML5 web 
 worker. Surely I could use the DOM canvas to scale down the photos in 
 the main browser thread, but this means that the UI gets blocked 
 during this process.

The specific use case of resizing images before uploading them would also 
require some kind of image object in the worker, unless you had the main 
thread paint each image to a canvas and then grabbed the pixels and sent 
them over, which would be almost as expensive as resizing them in the main 
thread in the first place. To have an image in the worker, we'd need the 
DOM in the worker, and for now, that's not on the cards, as several 
implementations have non-thread-safe DOM implementations.

I imagine we'll handle this use case in due course, though.
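The main-thread fallback being described can be sketched as below; it blocks the UI while drawImage and the readback run, which is exactly the complaint. Names are illustrative:

```javascript
// Resize an already-loaded image on the main thread by painting it into
// a canvas at the target size. The result can be read back with
// canvas.toDataURL() / toBlob() for upload.
function resizeImage(img, maxW, maxH) {
  // Never upscale; preserve aspect ratio.
  const scale = Math.min(maxW / img.width, maxH / img.height, 1);
  const canvas = document.createElement('canvas');
  canvas.width = Math.round(img.width * scale);
  canvas.height = Math.round(img.height * scale);
  canvas.getContext('2d').drawImage(img, 0, 0, canvas.width, canvas.height);
  return canvas;
}
```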


On Sun, 6 Dec 2009, Oliver Hunt wrote:
 
 That said I'm not sure whether we should really be trying to spec such a 
 feature until we've had more time to see how workers are used in the 
 wild.

Indeed.


On Fri, 11 Dec 2009, Jeremy Orlow wrote:
 
 Resizing images was just one use.  I could easily imagine apps wanting 
 to generate more complex images on background threads without needing to 
 implement things like spline drawing, pattern filling, and text 
 themselves.

If there are specific use cases other than batch image resizing for 
upload, it would be good to have specific use cases put forward. We can 
only evaluate proposals based on the use cases they're intended to solve 
if we know the use cases.


On Mon, 21 Dec 2009, Gregg Tavares wrote:

 What is the intent of the getContext function on the canvas tag?
 
 Should it be possible to get multiple simultaneous different contexts as 
 in?
 
  var ctx2d = canvas.getContext('2d');
  var ctxText = canvas.getContext('fancy-text-api');
  var ctxFilter = canvas.getContext('image-filter-api');
  
  ctx2d.drawImage(someImage, 0, 0);
  ctxText.drawText(0, 0, 'hello world');
  ctxFilter.radialBlur(0.1);
 
 ?
 
 OR
 
 is canvas only allowed 1 context at a time?

That's basically up to the contexts to define.


On Mon, 21 Dec 2009, Gregg Tavares wrote:

 Is disallowing other contexts when certain contexts, eg webgl, okay or 
 is that really an incompatible extension of the canvas tag?
 
 Can portable code be written if some browsers let me get both a 2d 
 context and a 3d context and others don't?

All the browsers should do the same thing, but what that thing is depends 
on the context's spec.

In the case of the 2d context, whenever you invoke methods on the context, 
the bitmap is changed. If another context clears the context, or does 
something more complicated, it has to define how it works with the 2d 
context.


On Thu, 7 Jan 2010, Jonas Sicking wrote:

 So at mozilla we've been implementing and playing around with various 
 File APIs lately. One of the most common use cases today for file 
 manipulation is photo uploaders. For example to sites like flickr, 
 facebook, twitpic and icanhascheezburger. Some of these sites allow 
 additional modifications of the uploaded image, most commonly cropping 
 and rotating the image. But even things like manipulating colors, adding 
 text, and red eye reduction are things that sites do, or would like to 
 do.
 
 We do already have a great tool for image manipulation in HTML, the 
 canvas 

Re: [whatwg] Canvas feedback

2009-04-28 Thread Ian Hickson
On Thu, 26 Mar 2009, Biju wrote:
 On Wed, Mar 25, 2009 at 3:11 AM, Ian Hickson i...@hixie.ch wrote:
  On Sat, 14 Mar 2009, Biju wrote:
 
  Just like canvas.getImageData() and canvas.putImageData()
 
  Why are canvas.getImageHSLData() and canvas.putImageHSLData() APIs not 
  provided? Is it something that was discussed and deliberately left out?
 
  In practice user agents actually store image data as memory buffers of 
  RGBA data, so handling RGBA data is (or at least can be) extremely 
  efficient. HSLA data is not a native type, so it would be an 
  inefficient type to provide native access to.
 
  In practice, if you need to work in the HSLA space, then you should 
  convert the data to HSLA and back manually, which would highlight the 
  performance bottleneck implied in such an approach. Alternatively, 
  convert whatever algorithms you are using to work in the RGBA space.

 As in both cases there will be a performance hit, isn't it better to take 
 care of that in C/compiled code rather than in JavaScript? The code for 
 converting from HSLA to RGBA will already be there in the browser, so it 
 need only be exposed to JavaScript.

If we do that, then people will be more likely to use HSLA, which would be 
bad since using HSLA is not the optimal way of doing this.
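
For anyone who does need HSLA, the manual conversion Ian describes is short enough to inline. A sketch of the RGB-to-HSL direction over getImageData pixels (the formulas are the standard hexcone definitions, not anything from the spec):

```javascript
// Convert one 8-bit RGB pixel to HSL (h in degrees, s and l in [0, 1]),
// using the standard hexcone formulas.
function rgbToHsl(r, g, b) {
  r /= 255; g /= 255; b /= 255;
  var max = Math.max(r, g, b), min = Math.min(r, g, b);
  var l = (max + min) / 2;
  if (max === min) return { h: 0, s: 0, l: l };  // achromatic
  var d = max - min;
  var s = l > 0.5 ? d / (2 - max - min) : d / (max + min);
  var h;
  if (max === r)      h = ((g - b) / d + (g < b ? 6 : 0)) * 60;
  else if (max === g) h = ((b - r) / d + 2) * 60;
  else                h = ((r - g) / d + 4) * 60;
  return { h: h, s: s, l: l };
}

// Typical use over ImageData from a 2d context:
//   var data = ctx.getImageData(0, 0, w, h).data;
//   for (var i = 0; i < data.length; i += 4) {
//     var hsl = rgbToHsl(data[i], data[i + 1], data[i + 2]);
//     // ...adjust hsl, convert back to RGB, write into data...
//   }
```

Doing the per-pixel loop in script is exactly the cost Ian is pointing at; exposing a native HSLA buffer would hide that cost rather than remove it.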

-- 
Ian Hickson   U+1047E)\._.,--,'``.fL
http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'


[whatwg] Canvas feedback

2009-03-25 Thread Ian Hickson
On Fri, 13 Mar 2009, Hans Schmucker wrote:

 This problem recently became apparent while trying to process a public 
 video on tinyvid.tv:
 
 In section 4.8.11.3, "Security with canvas elements", the "origin-clean" 
 flag is only set depending on an element's origin. However there are 
 many scenarios where an image/video may actually be public and actively 
 allowing processing on other domains (as indicated by 
 Access-Control-Allow-Origin).
 
 Is this an oversight or is there a specific reason why Access Control 
 for Cross-Site Requests should not work for Canvas?

I'm waiting for CORS to be a proven technology before using it widely in 
HTML. I think eventually it will be plugged into several features, 
including canvas and video, but for now it seems safer to wait and see.


On Sat, 14 Mar 2009, Hans Schmucker wrote:

 Question is: what would be the best way to fix it? Of course the spec 
 could be changed for video and image, but wouldn't it be simpler to 
 update the definition of origins to include patterns that can represent 
 "allow" rules?

On Sat, 14 Mar 2009, Robert O'Callahan wrote:
 
 I don't think changing the definition of origins is the right way to go.

I agree with Rob that we don't want to be changing the 'origin' concept. 
It's brittle enough as it is.


 It seems better to define a category of public resources, specify that 
 a resource served with Access-Control-Allow-Origin: * is public, and 
 have canvas treat public resources specially.

I don't think we need public resources, just resources that have been 
allowed and resources that haven't (like with video). But this, IMHO, 
can wait for a while longer, until we have tested CORS with a few 
technologies and proved that it is a good technology.


On Sat, 14 Mar 2009, Biju wrote:
 
 Just like canvas.getImageData() and canvas.putImageData()

 Why are canvas.getImageHSLData() and canvas.putImageHSLData() APIs 
 not provided? Is it something that was discussed and deliberately left out?

In practice user agents actually store image data as memory buffers of 
RGBA data, so handling RGBA data is (or at least can be) extremely 
efficient. HSLA data is not a native type, so it would be an inefficient 
type to provide native access to.

In practice, if you need to work in the HSLA space, then you should 
convert the data to HSLA and back manually, which would highlight the 
performance bottleneck implied in such an approach. Alternatively, convert 
whatever algorithms you are using to work in the RGBA space.

-- 
Ian Hickson   U+1047E)\._.,--,'``.fL
http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'


Re: [whatwg] Canvas feedback

2009-03-25 Thread Biju
On Wed, Mar 25, 2009 at 3:11 AM, Ian Hickson i...@hixie.ch wrote:
 On Sat, 14 Mar 2009, Biju wrote:

 Just like canvas.getImageData() and canvas.putImageData()

 Why are canvas.getImageHSLData() and canvas.putImageHSLData() APIs
 not provided? Is it something that was discussed and deliberately left out?

 In practice user agents actually store image data as memory buffers of
 RGBA data, so handling RGBA data is (or at least can be) extremely
 efficient. HSLA data is not a native type, so it would be an inefficient
 type to provide native access to.

 In practice, if you need to work in the HSLA space, then you should
 convert the data to HSLA and back manually, which would highlight the
 performance bottleneck implied in such an approach. Alternatively, convert
 whatever algorithms you are using to work in the RGBA space.
As in both cases there will be a performance hit, isn't it better to take
care of that in C/compiled code rather than in JavaScript? The code for
converting from HSLA to RGBA will already be there in the browser, so it
need only be exposed to JavaScript.