On Tue, 8 Apr 2014, Rik Cabanier wrote:
> On Mon, Apr 7, 2014 at 3:35 PM, Ian Hickson <i...@hixie.ch> wrote:
> > > > > >
> > > > > > So this is not how most implementations currently have it 
> > > > > > defined.
> > > > >
> > > > > I'm unsure what you mean. Browser implementations? If so, they 
> > > > > definitely do store the path in user coordinates. The spec 
> > > > > currently says otherwise [1] though.
> > > >
> > > > I'm not sure what you're referring to here.
> > >
> > > All graphics backends for canvas that I can inspect don't apply the 
> > > CTM to the current path when you call a painting operator. Instead, 
> > > the path is passed as segments in the current CTM and the graphics 
> > > library will apply the transform to the segments.
> >
> > Right. That's what the spec says too, for the current default path.
> No, the spec says this:
> For CanvasRenderingContext2D objects, the points passed to the methods, 
> and the resulting lines added to current default path by these methods, 
> must be transformed according to the current transformation matrix 
> before being added to the path.

As far as I can tell, these are black-box indistinguishable statements.

Can you show a test case that demonstrates how the spec doesn't match 
implementations?

> > > [use case: two paths mapping to the same region]
> >
> > Just use two different IDs with two different addHitRegion() calls. 
> > That's a lot less complicated than having a whole new API.
> That doesn't work if you want to have the same control for the 2 areas, 
> from the spec for addHitRegion:
># If there is a previous region with this control, remove it from the 
># scratch bitmap's hit region list; then, if it had a parent region, 
># decrement that hit region's child count by one.

Interesting. You mean like a case where you had a button that got split 
into multiple segments animated separately that then joined together, but 
where you want to be able to click on any part of that split-up button?


There are several ways we could support that.

The simple way would be to allow multiple regions to refer to a control.

The more subtle way would be to allow the control logic to defer to the 
parent region, but that's probably a bad idea (where would you put the 
parent region?).

So I guess the question is: is it more useful to be able to refer to 
the same control from multiple regions, or is it more useful for the 
previous region to be automatically discarded when you add a new one?

It's probably more useful to have multiple regions. You can always do the 
discarding using IDs.
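A sketch of the multiple-regions approach (it assumes the addHitRegion() 
API as specified at the time, with `id` and `control` options; the 
split-button setup is hypothetical):

```javascript
// Hypothetical: a button rendered as two separately animated halves that
// should act as one logical control. Each half gets its own hit region id,
// but both point at the same fallback control element.
function drawSplitButton(ctx, buttonEl, left, right) {
  ctx.beginPath();
  ctx.rect(left.x, left.y, left.w, left.h);
  ctx.fill();
  ctx.addHitRegion({ id: "button-left", control: buttonEl });

  ctx.beginPath();
  ctx.rect(right.x, right.y, right.w, right.h);
  ctx.fill();
  ctx.addHitRegion({ id: "button-right", control: buttonEl });
}
```

A click on either half can then be routed to the same handler by checking 
the event's region against both ids, or just by using the shared control.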

> Even if you don't use the control, it would be strange to have 2 
> separate hit regions for something that represents 1 object.

Why? I think that makes a lot of sense. There are multiple regions, why 
not have multiple hit regions? This becomes especially true when the 
multiple regions might be differently overlapped by other regions, or 
where different parts of the canvas are renderings from different angles 
of the same underlying scene. It would be silly for things to be easier to 
do with two canvases than one, in that kind of case, no?

I've changed the spec to not discard regions based on the control.

> > > > On Fri, 6 Dec 2013, Jürg Lehni wrote:
> > > > >
> > > > > Instead of using getCurrentPath and setCurrentPath methods as a 
> > > > > solution, this could perhaps be solved by returning the internal 
> > > > > path instead of a copy, but with a flag that would prevent 
> > > > > further alterations on it.
> > > > >
> > > > > The setter of the currentPath accessor / data member could then 
> > > > > make the copy instead when a new path is to be set.
> > > > >
> > > > > This would also make sense from a caching point of view, where 
> > > > > storing the currentPath for caching might not actually mean that 
> > > > > it will be used again in the future (e.g. because the path's 
> > > > > geometry changes completely on each frame of an animation), so 
> > > > > copying only when setting would postpone the actual work of 
> > > > > having to make the copy, and would help memory consumption and 
> > > > > performance.
> > > >
> > > > I don't really understand the use case here.
> > >
> > > Jurg was just talking about an optimization (so you don't have to 
> > > make an internal copy)
> >
> > Sure, but that doesn't answer the question of what the use case is.
> From my recent experiments with porting canvg ( 
> https://code.google.com/p/canvg/) to use Path2D, they have a routine 
> that continually plays a path into the context which is called from a 
> routine that does the fill, clip or stroke. Because that routine can't 
> simply set the current path, a lot more changes were needed.

Sure, but the brief transitional cost of moving from canvas current 
default paths to Path2D objects is minor in the long run, and not worth 
the added complexity cost paid over the lifetime of the Web for the 
feature. So for something like this, we need a stronger use case than "it 
makes transitioning to Path2D slightly easier".

> > On Wed, 12 Mar 2014, Rik Cabanier wrote:
> > > > > >
> > > > > > You can do unions and so forth with just paths, no need for 
> > > > > > regions.
> > > > >
> > > > > How would you do a union with paths? If you mean that you can 
> > > > > just aggregate the segments, sure but that doesn't seem very 
> > > > > useful.
> > > >
> > > > You say, here are some paths, here are some fill rules, here are 
> > > > some operations you should perform, now give me back a path that 
> > > > describes the result given a particular fill rule.
> > >
> > > I think you're collapsing a couple of different concepts here:
> > >
> > > path + fillrule -> shape
> > > union of shapes -> shape
> > > shape can be converted to a path
> >
> > I'm saying "shape" is an unnecessary primitive. You can do it all with
> > paths.
> >
> >    union of (path + fillrule)s -> path
> No, that makes no sense. What would you get when combining a path with a 
> fillrule and no fillrule?

Why would you combine a path with a fillrule and no fillrule?

I'm saying that you can replace "shape" with "path+fillrule", that's all.

> > > > A shape is just a path with a fill rule, essentially.
> > >
> > > So, a path can now have a fillrule? Sorry, that makes no sense.
> >
> > I'm saying a shape is just the combination of a fill rule and a path. 
> > The path is just a path, the fill rule is just a fill rule.
> After applying a fillrule, there is no longer a path. You can *convert* 
> it back to a path that describes the outline of the shape if you want, 
> but that is something different. The way you've defined things now, you 
> can apply another fill rule on a path with a fill rule. What would the 
> result of that be?

Exactly what you say -- you'd take the path, and apply the other fill 
rule to it. Applying fill rules to paths seems like a known operation.

> > > > Anything you can do with one you can do with the other.
> > >
> > > You can't add segments from one shape to another as shapes represent 
> > > regions. Likewise, you can't union, intersect or xor path segments.
> >
> > But you can union, intersect, or xor lists of pairs of paths and 
> > fillrules.
> would you start throwing when doing these operations on paths without 
> fill rules?

I'm not saying Path2D objects would get a built-in fill rule. I'm saying 
that you would pass Path2D objects and fill rules together.

An analogy:

You could have a "Length" object that represented a particular length, 
e.g. "4cm" or "12 lightyears".

You could add these "Length" objects together, so "1 inch" plus "2cm" is 
"0.0454m" or some such.

You could, instead, have numbers and units, e.g. "4" "cm" and "12" 
"lightyears". Then you could add pairs of numbers and units, as in "1" 
"inch" plus "2" "cm" is "0.0454" "m".

Numbers are like paths, units are like fill rules.
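That analogy can even be made concrete (a toy sketch; the conversion table 
and function names are mine, not part of any proposal):

```javascript
// Toy model of the analogy: a "length" is a (number, unit) pair, just as a
// "shape" would be a (path, fillRule) pair. Units convert through metres.
const METRES_PER_UNIT = { m: 1, cm: 0.01, inch: 0.0254 };

function addLengths([n1, u1], [n2, u2], resultUnit) {
  const metres = n1 * METRES_PER_UNIT[u1] + n2 * METRES_PER_UNIT[u2];
  return [metres / METRES_PER_UNIT[resultUnit], resultUnit];
}

// "1 inch" plus "2 cm" is "0.0454 m":
addLengths([1, "inch"], [2, "cm"], "m"); // → approximately [0.0454, "m"]
```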

> > > > > > > The path object should represent the path in the graphics 
> > > > > > > state. You can't add a stroked path or text outline to the 
> > > > > > > graphics state and then fill/stroke it.
> > > > > >
> > > > > > Why not?
> > > > >
> > > > > As designed today, you could fill it, as long as you use 
> > > > > non-zero winding. If you use even-odd, the results will be very 
> > > > > wrong. (ie where joins and line segments meet, there will be 
> > > > > white regions)
> > > >
> > > > I think "wrong" here implies a value judgement that's unwarranted.
> > >
> > > "Wrong" meaning: if the author has a bunch of geometry and wants to 
> > > put it in 1 path object so he can just execute 1 fill operation, he 
> > > might be under the impression that "adding" the geometry will just 
> > > work.
> >
> > Well, sure, an author might be under any number of false impressions.
> >
> > The API has a way for a bunch of paths to be merged with a single 
> > fillrule to generate a new path with no crossing subpaths (which is 
> > also fillrule agnostic), essentially giving you the union of the 
> > shapes represented by those paths interpreted with that fillrule.
> Is this the API you're referring to?
> path = new Path2D(paths [, fillRule ] )
> The first argument could point to paths that need different winding 
> rules.

Sure, if you happen to make your paths with different winding rules, you 
might have that. In that case, merge all your paths that need one winding 
rule together, then merge all your paths that need the other winding rule 
together, then merge the two resulting paths together. If this is 
something that people do a lot (why would it be?) then we can provide a 
dedicated API for that.

> > > There are very few use cases where you want to add partial path 
> > > segments together but I agree that there are some cases that it's 
> > > useful to have.
> >
> > I disagree that there are few such cases. Pretty much any time you create 
> > a path, you are adding partial path segments together. Whether you do 
> > so using one Path object all at once or multiple Path objects that you 
> > later add together is just a matter of programming style.
> It's the multiple path objects use case that is unclear to me. Is there 
> any tool/library that does this?

Right now, with canvas, any time you create a path with a transform in the 
middle of creating the path, you are doing the equivalent of using 
multiple Path2D objects then merging them.
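A sketch of that equivalence (it assumes Path2D's addPath(path, transform) 
member and DOMMatrix; the car/wheel shapes are made up):

```javascript
// Building one path from reusable pieces: define a wheel once, then add it
// to the car body at two transformed positions. This is the Path2D
// equivalent of changing the CTM in the middle of building a canvas path.
function buildCar() {
  const wheel = new Path2D();
  wheel.arc(0, 0, 10, 0, 2 * Math.PI);

  const car = new Path2D();
  car.rect(0, 10, 100, 30);
  // addPath(path, transform) appends the wheel twice, pre-transformed,
  // instead of mutating the context's CTM mid-construction.
  car.addPath(wheel, new DOMMatrix().translate(25, 45));
  car.addPath(wheel, new DOMMatrix().translate(75, 45));
  return car; // ctx.fill(buildCar()) fills body and wheels in one operation
}
```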

> With the new wording, the last sentence should be updated:
> Subpaths in the newly created path must wind clockwise, regardless of 
> the direction of paths in path.
> Since you now create 'holes', the separate paths need to be reoriented 
> like you specify in other parts.

Good point, fixed. Thanks.

> > addPath() is useful for shifting a path according to a transform.
> Why not just transform() then?

I'm not sure what you're proposing here. Can you elaborate?

> > addPathByStrokingPath() is for creating a stroked path.
> > addText() is for writing text.
> >
> > I don't see how removing any of them is a win.
> Yes, they are useful. The issue is that they shouldn't be implemented as 
> currently specified.

Instead of arguing for their removal, then, describe the use case that 
they do not handle.

> > > > > > On Mon, 4 Nov 2013, Rik Cabanier wrote:
> > > > > > >
> > > > > > > However, for your example, I'm unsure what the right 
> > > > > > > solution is. The canvas specification is silent on what the 
> > > > > > > behavior is for non-invertible matrices.
> > > > > >
> > > > > > What question do you think the spec doesn't answer?
> > > > > >
> > > > > > > I think setting scale(0,0) or another matrix operation that 
> > > > > > > is not reversible, should remove drawing operations from the 
> > > > > > > state because: - how would you stroke with such a matrix?
> > > > > >
> > > > > > You'd get a point.
> > > > >
> > > > > How would you get a point? the width is scaled to 0.
> > > >
> > > > That's how you get a point -- scale(0,0) essentially reverts 
> > > > everything to a zero dimensional point.
> > >
> > > OK, but the width of the point is also transformed to 0 so you get 
> > > nothing.
> >
> > Points are always zero-width, by definition.
> You can still stroke it though and get a point of the strokewidth.

If you are scaling by 0, the strokeWidth essentially gets scaled to 0 also.

> > > The APIs that you define have use cases and I agree with them. 
> > > However the way you defined those APIs does not make sense and will 
> > > not give the result that authors want.
> >
> > The way to make this point would be to start from the use case, 
> > describe the desired effect, show the "obvious" way to achieve this 
> > using the API, and then demonstrate how it doesn't match the desired 
> > effect.
> The obvious way is to go with Shape2D.

At no point in the steps I just described does giving a solution enter 
into the equation.

> It's not because I invented it; many advanced graphics APIs offer this 
> (including D2D and skia).

I'm not arguing that Shape2D is a bad idea, nor that it is a good idea. 
I'm arguing that you haven't explained the use cases or shown why the 
current spec doesn't address the use cases.

> > > The bad news is that this algorithm is very expensive and there are 
> > > few libraries that do a decent job (I only know of 1). So, it's not 
> > > realistic to add this to the Path2D object.
> >
> > I don't really see why it's unrealistic. In most cases, the user agent 
> > doesn't actually have to do any work -- e.g. if all that you're doing 
> > is merging two paths so that you can fill them simultaneously later, 
> > the UA can just keep the two paths as is and, when necessary, fill 
> > them.
> >
> > For cases where you really want to have this effect -- e.g. when you 
> > want to get the outline of the dashed outline of text -- then I don't 
> > really see any way to work around it.
> That is true. That is why I proposed to make the interface more limited 
> for now until there is a time that this functionality is available.

Making the interface more limited fails to address the use cases that are 
currently addressed.

> > > The reason for that is that even though a UA could emulate the union 
> > > by doing multiple fill operations, Path2D allows you to stroke 
> > > another path object. At that point, you really have to do 
> > > planarization. By defining a Shape2D object and not allowing it to 
> > > be stroked, we can work around this.
> >
> > Sure, by limiting the feature set dramatically we can avoid the cases 
> > where you have to do the hard work, but we also lose a bunch of 
> > features.
> For now. They can be added later.
> Until then, this is confusing implementors.

The only thing I see that is confusing implementors here is the forking of 
the canvas spec that you and W3C staff are doing at the W3C.

If you are concerned about confusion, then stop doing that. Then, maybe, 
arguments about the WHATWG spec being confusing could have credibility.

> > > > > No one has implemented them and they are confusing the browser 
> > > > > vendors.
> > > >
> > > > I don't think they're confusing anyone.
> > >
> > > The blink people were looking at adding this until they thought it 
> > > through and realized that it wouldn't work.
> >
> > Realised what wouldn't work? As far as I'm aware, there's nothing that 
> > wouldn't work.
> See this thread: 
> http://lists.whatwg.org/htdig.cgi/whatwg-whatwg.org/2014-January/168925.html

I see nothing in that e-mail that shows anyone confused. Could you be more 
specific?

On Tue, 8 Apr 2014, Justin Novosad wrote:
> > >
> > > For example, there is no locale for font family resolution
> >
> > I'm not clear on what you mean by "locale" here. What is the locale 
> > that a displayed <canvas> in a Document in a browsing context has, 
> > that a non-displayed <canvas> outside a Document and without a 
> > browsing context does not have?
> I am not sure exactly how this relates to the specification, but when 
> reading the code in Blink, I saw that font family resolution goes 
> through different paths if the view has a Korean, Chinese or Japanese 
> locale.  Some OSes allow you to have different locales on a per window 
> basis, so you need to have a view (i.e. a browsing context) associated 
> with the Document in order to resolve this.

Ah, I see. I don't think the HTML spec mentions this (I don't know what it 
would say exactly), but I would recommend using the script's script 
settings object's responsible browsing context or script's script settings 
object's responsible document or something similar. That way it's defined 
regardless of what the <canvas> element is.

> > > and it is not possible to resolve font sizes in physical length 
> > > units unless the document is associated with a view.
> >
> > Why not? The canvas has a pixel density (currently always 1:1), no?
> 1:1 is not a physical pixel density. To resolve a font size that is 
> specified in physical units (e.g. millimeters or inches) you need 
> something like a DPI value, which requires information about the output 
> device.

No, not any more. CSS "physical" units are defined as mapping to CSS 
pixels at 96 CSS pixels per CSS inch, and canvas is defined as mapping CSS 
pixels to canvas coordinate space units at one CSS pixel per coordinate 
space unit. As far as I know, all browsers do this now.
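So resolving a physical font size is plain arithmetic with no device 
information involved (a sketch of that fixed mapping; the helper name is 
mine):

```javascript
// CSS defines 1in = 96 CSS px and 1in = 25.4mm, and canvas maps one CSS px
// to one coordinate-space unit, so "physical" units resolve to fixed numbers.
function mmToCanvasUnits(mm) {
  return (mm / 25.4) * 96;
}

mmToCanvasUnits(25.4); // → 96
mmToCanvasUnits(10);   // → approximately 37.8
```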

> > > My 2 cents: specifying fallback behaviors for all use cases that are 
> > > context dependent could be tedious and I have yet to see a 
> > > real-world use case that requires being able to paint a canvas in a 
> > > frame-less document. Therefore, I think the spec should clearly 
> > > state <canvas> elements that are in a document without a browsing 
> > > context are unusable. Not sure what the exact behavior should be 
> > > though.  Should an exception be thrown upon trying to use the 
> > > rendering context? Perhaps canvas draws should fail silently, and 
> > > using the canvas as an image source should give transparent black 
> > > pixels?
> >
> > As far as I can tell, this is all already specified, and it just gets 
> > treated like a normal canvas.
> Agreed. The fallback behavior is specified. But is it good enough? There 
> will be discrepancies, sometimes large ones, between text rendered with 
> and without a browsing context.

I don't think there should be any discrepancies.

> > Instead, we should use adaptive algorithms, for example always using 
> > the prettiest algorithms unless we find that frame rate is suffering, 
> > and then stepping down to faster algorithms.
> Such an adaptive algorithm implies making some kind of weighted decision 
> to choose a reasonable compromise between quality and performance.  
> Sounds like the perfect place to use a hint.

If we really need a hint. But do we? Do we have data showing that adaptive 
algorithms can't do a good job without a hint?

> One issue that comes to mind is what happens if stroke or fill are 
> called while the CTM is non-invertible? To be more precise, how would 
> the styles be mapped?  If the fillStyle is collapsed to a point, does 
> that mean the path gets filled in transparent black?  If we go down this 
> road, we will likely uncover more questions of this nature.

The spec handles fills already as far as I can tell (e.g. "When the value 
is a color, it must not be affected by the transformation matrix when used 
to draw on bitmaps", "If a radial gradient or repeated pattern is used 
when the transformation matrix is singular, the resulting style must be 
transparent black", and so on). For strokes, you get nothing if the stroke 
is transformed to 0.

On Tue, 8 Apr 2014, Rik Cabanier wrote:
> > >
> > > Just to be clear, we should support this because otherwise the 
> > > results are just wrong. For example, here some browsers currently 
> > > show a straight line in the default state, and this causes the 
> > > animation to look ugly in the transition from the first frame to the 
> > > second frame (hover over the yellow to begin the transition):
> > >
> > >    http://junkyard.damowmow.com/538
> > >
> > > Contrast this to the equivalent code with the transforms explicitly 
> > > multiplied into the coordinates:
> > >
> > >    http://junkyard.damowmow.com/539
> > >
> > > I don't see why we would want these to be different. From the 
> > > author's perspective, they're identical.
> These examples are pretty far-fetched.
> How many times do people change the CTM in the middle of a drawing 
> operation without changing the geometry?

That kind of reasoning is not how we design a good platform.

A good platform is consistent and predictable. It doesn't have surprises.

Intuitively, a transform (as in 538) is equivalent to the same logic with 
the transform explicitly applied (as in 539). It is surprising if this is 
not the case.
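The equivalence the two test cases rely on is just the CTM arithmetic (a 
sketch; the matrix uses canvas's [a, b, c, d, e, f] ordering):

```javascript
// Applying a canvas-style matrix [a, b, c, d, e, f] to a point by hand.
// transform(a, b, c, d, e, f) followed by lineTo(x, y) must hit the same
// device pixel as lineTo(a*x + c*y + e, b*x + d*y + f) with no transform set.
function applyCTM([a, b, c, d, e, f], [x, y]) {
  return [a * x + c * y + e, b * x + d * y + f];
}

// Scale by 2 with a 10-unit x translation:
applyCTM([2, 0, 0, 2, 10, 0], [3, 4]); // → [16, 8]
```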

We should make sure our platform is a good platform when we can. It has 
enough crazy surprises that we can't prevent as it is.

On Tue, 8 Apr 2014, Rik Cabanier wrote:
> The spec is still confusingly written and could be misinterpreted:
> Create a new path that describes the edge of the areas that would be 
> covered if a straight line of length equal to the styles lineWidth was 
> swept along each subpath in path while being kept at an angle such that 
> the line is orthogonal to the path being swept, replacing each point 
> with the end cap necessary to satisfy the styles lineCap attribute as 
> described previously and elaborated below, and replacing each join with 
> the join necessary to satisfy the styles lineJoin type, as defined 
> below.
> Maybe could become:
> Create a new path that describes the edge of the coverage of the following
> areas:
> - a straight line of length equal to the styles lineWidth that was swept
> along each subpath in path while being kept at an angle such that the line
> is orthogonal to the path being swept,
> - the end cap necessary to satisfy the styles lineCap attribute as
> described previously and elaborated below,
> - the join with the join necessary to satisfy the styles lineJoin type, as
> defined below.

Can you elaborate on what the possible misinterpretation of the current 
paragraph is that is resolved by this change?

On Thu, 3 Apr 2014, Jürg Lehni wrote:
> When both filling and stroking a path and then drawing it with an 
> opacity of less than 100%, the path will be rendered differently than in 
> an SVG (a large stroke width will make the issue more apparent):
> - In Canvas, both the fill and the stroke will be rendered with the 
> given opacity, and the fill will shine through the inner half of the 
> stroke.
> - In SVG, the stroke will cover the fill, and the fill will not shine 
> through the inner half of the stroke, regardless of the opacity.
> If you'd like to emulate the SVG behavior in Canvas (which we happen to 
> do in Paper.js), then the only way to do so currently is to draw the 
> path's fill and stroke at 100% opacity into a separate canvas, and then 
> blit the whole thing over with the given opacity.
> This is *much* slower than directly drawing into the Canvas, and happens 
> to be one of the worst bottlenecks in Paper.js
> I would really appreciate a solution to this problem.


Would you still want the overlapping in the case of the stroke itself 
being semi-transparent, though?
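For reference, the scratch-canvas workaround described in the quoted 
message looks roughly like this (a sketch; it assumes a Path2D argument 
and copies only a few of the context's attributes):

```javascript
// Emulating SVG paint order: fill and stroke at full alpha into a scratch
// canvas, then composite the whole result once at the desired opacity, so
// the fill cannot shine through the inner half of the stroke.
function fillAndStrokeAtOpacity(ctx, path2d, opacity) {
  const scratch = document.createElement("canvas");
  scratch.width = ctx.canvas.width;
  scratch.height = ctx.canvas.height;
  const sctx = scratch.getContext("2d");
  sctx.fillStyle = ctx.fillStyle;
  sctx.strokeStyle = ctx.strokeStyle;
  sctx.lineWidth = ctx.lineWidth;
  sctx.fill(path2d);
  sctx.stroke(path2d); // stroke covers the fill at full alpha
  ctx.save();
  ctx.globalAlpha = opacity;
  ctx.drawImage(scratch, 0, 0); // one blit: this is the slow part
  ctx.restore();
}
```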

On Mon, 7 Apr 2014, Rik Cabanier wrote:
> Maybe this would better be solved by a function that does fill and 
> stroke at the same time, for instance:
> void fillAndStroke(optional CanvasFillRule fillRule = "nonzero");
> void fillAndStroke(Path2D path, optional CanvasFillRule fillRule =
> "nonzero");
> globalAlpha would then apply to the operation as a whole.

This would also allow us to shift the fill so that it doesn't overlap the 
stroke, so that a semi-transparent stroke no longer blends with the fill 
underneath it.

On Mon, 7 Apr 2014, Jürg Lehni wrote:
> Well this particular case, yes. But in the same way we allow a group of 
> items to have an opacity applied to it in Paper.js, and expect it to behave 
> the same way as in SVG: The group should appear as if its children were 
> first rendered at 100% alpha and then blitted over with the desired 
> transparency.
> Layers would offer exactly this flexibility, and having them around 
> would make a whole lot of sense, because currently the above can only be 
> achieved by drawing into a separate canvas and blitting the result over. 
> The performance of this is really low on all browsers, a true bottleneck 
> in our library currently.

It's not clear to me why it would be faster if implemented as layers. 
Wouldn't the solution here be for browsers to make canvas-on-canvas 
drawing faster? I mean, fundamentally, they're the same feature.

On Sat, 5 Apr 2014, Dirk Schulze wrote:
> I looked at the behavior of negative width or height for the rect() and 
> strokeRect() functions.
> All browsers normalize the passed parameters for strokeRect() to have 
> positive width and height.
> strokeRect(90,10,-80,80) —> strokeRect(10,10,80,80)
> http://jsfiddle.net/za945/


> Just WebKit seems to normalize for rect() as well:
> http://jsfiddle.net/VT4MG/


> The behavior of normalizing is not specified. In particular, it seems odd 
> that the behavior for fillRect()/strokeRect() should differ from rect(). 
> So we should either normalize for all functions or for none of them, IMO.
> Note: fillRect() and clearRect() are not affected. The behavior for 
> rect() is important for filling with different winding rules as well. It 
> is not just stroking with dash arrays that is affected.

On Sat, 5 Apr 2014, Rik Cabanier wrote:
> yes, the spec needs to say "in that order" as it does for fillRect and 
> strokeRect.

On Sat, 5 Apr 2014, Rik Cabanier wrote:
> It also seems that only firefox is following the spec [1] when width or
> height are 0: http://jsfiddle.net/za945/2/


> I'm unsure why such a rectangle is defined as a straight line.

On Sun, 6 Apr 2014, Dirk Schulze wrote:
> You mean you would rather let it draw a one dimensional rectangle? So 
> for the dimension that is not zero, you would see two overlapping lines 
> + the 0 dimensional sides?
> That seems indeed to be the case for IE, Safari and Blink: 
> http://jsfiddle.net/Gh9XK/

On Mon, 7 Apr 2014, Justin Novosad wrote:
> Dashing is one thing that would be affected.  I think some 
> implementations are currently in a non-compliant state probably because 
> the line dashing feature was added recently.  Back when strokeRect was 
> originally implemented, we could get away with blindly normalizing 
> rectangles because there was no impact on the rendering result.  The 
> other thing that is affected is fill rule application.  For example, if 
> you have a path that contains two intersecting rectangles and you are 
> filling in with the nonzero rule.  If one of the two rectangles is 
> flipped, then the intersection region should be unfilled.  If the 
> rectangles are "normalized" internally by the implementation, then you 
> will get the wrong (non spec compliant) result.

I've added "in that order" to rect().

I couldn't find the original reason for strokeRect() only drawing one line 
in the one-dimensional case, though it dates back to 2007 at least. I 
haven't changed rect() to do that too.
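Justin's winding-rule point can be checked with a little arithmetic (a 
sketch; the helper models only the winding contribution of an axis-aligned 
rect() subpath, which is enough to show the sign flip):

```javascript
// Winding-number contribution of a rect(x, y, w, h) subpath at point (px, py).
// With "in that order", flipping exactly one of w or h reverses the winding;
// an implementation that silently normalizes to positive w/h loses that sign.
function rectWinding(px, py, x, y, w, h) {
  const inside =
    px > Math.min(x, x + w) && px < Math.max(x, x + w) &&
    py > Math.min(y, y + h) && py < Math.max(y, y + h);
  if (!inside) return 0;
  return Math.sign(w) * Math.sign(h);
}

// Two intersecting rectangles, the second one "flipped" in x:
const total = rectWinding(50, 50, 10, 10, 60, 60)   // +1
            + rectWinding(50, 50, 90, 10, -60, 60); // -1
// total === 0, so nonzero filling leaves the intersection unfilled.
```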

On Sun, 6 Apr 2014, Dirk Schulze wrote:
> The spec says that the object TextMetrics[1] must return font and actual 
> text metrics. All things require information from the font or the font 
> system. In many cases the font or the font system simply does not provide 
> this information.

Which cases? The information is needed for text layout purposes; if the 
browser can't get the information, how is it conforming to CSS?

Ian Hickson               U+1047E                )\._.,--....,'``.    fL
http://ln.hixie.ch/       U+263A                /,   _.. \   _\  ;`._ ,.
Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'
