Date: Sat, 20 Nov 2010 21:57:02 -0500
From: Boris Zbarsky <[email protected]>
To: [email protected]
Subject: Re: [whatwg] Processing the zoom level - MS extensions to
        window.screen
Message-ID: <[email protected]>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed

On 11/20/10 3:59 PM, Charles Pritchard wrote:

This response is from the digest: I'm glad to see activity here.

>> Canvas is supposed to be resolution independent,
>
> No, it's not.  Vector images are supposed to be resolution independent.
> Canvas is very explicitly a _bitmap_.  It's not a vector image.
Canvas is an immediate-mode rendering framework. I realize that it uses a
bitmap back end, but the drawing itself works very much like vector imaging:
the scene graph is built in the scripting environment instead of in a markup
file.
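
As a rough illustration of what I mean (the names below are my own, not from
any spec): the "scene graph" is just application state held in script, and
the canvas is repainted from it on demand with immediate-mode calls.

    // Hypothetical scene description kept entirely in the scripting
    // environment (the role an SVG document would otherwise play).
    var scene = [
      { type: 'rect', x: 10, y: 10, w: 100, h: 50 },
      { type: 'text', x: 20, y: 40, text: 'Hello' }
    ];

    // Immediate-mode repaint: walk the scene and issue drawing calls.
    function paint(ctx) {
      ctx.clearRect(0, 0, ctx.canvas.width, ctx.canvas.height);
      for (var i = 0; i < scene.length; i++) {
        var node = scene[i];
        if (node.type === 'rect') {
          ctx.strokeRect(node.x, node.y, node.w, node.h);
        } else if (node.type === 'text') {
          ctx.fillText(node.text, node.x, node.y);
        }
      }
    }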

>> When a user zooms in, I need to be able to reprint my fillText
>> to match their resolution.
>
> This is a valid use case if using canvas is the right requirement,
> though it really feels like you're using the wrong tool here; if you
> want resolution independence you should be using SVG, which is designed
> precisely to accomplish that.
I've heard this before, and I'm afraid it's a stuck issue I'll never unstick.
Canvas is a low-level drawing API; SVG is a serialized format for a scene
graph. They're not the same thing.

You may implement an SVG rendering engine in Canvas, or you may use some other
scene graph.

I've been through this discussion with several people, and I really do lack
the perspective to understand the hang-up on "SVG" vs. Canvas. One is a
rendering API, the other is a serialized file format. They're two different
classes of thing.
> That said, this seems like a general quality-of-implementation issue,
> right?  Expecting the page to rerender the entire canvas on any zoom
> operation doesn't seem reasonable....  A UA could handle this by
> supersampling the canvas, for example (and in the past we've considered
> doing that for Firefox, actually).
While Apple has certainly put work into supersampling, it's completely
unnecessary. I don't see why expecting a page to re-render is unreasonable;
that's exactly what pages do right now. In most implementations, Canvas is
tied to the same rasterization engine as HTML.

I've demonstrated a rich application with zooming quality equivalent to the
HTML rendering. There's a reason they are equivalent: they use the same logic
and the same raster libraries. They are, in a very physical sense, the same.

Stated succinctly: it is entirely reasonable to re-render a canvas when an
"onresize" event is received; it is standard practice. There's no reason for
the UA to handle it any differently than it does now (scaling the CSS pixels).
This is something to be left up to the implementer.
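
A minimal sketch of that practice, assuming a paint(ctx) routine like the one
above and a UA that exposes window.devicePixelRatio (falling back to 1 where
it doesn't):

    var canvas = document.getElementById('app');   // hypothetical element id
    var ctx = canvas.getContext('2d');

    function rescale() {
      // Size of the element in CSS pixels.
      var cssWidth = canvas.clientWidth;
      var cssHeight = canvas.clientHeight;
      // Device pixels per CSS pixel, where the UA reports it.
      var ratio = window.devicePixelRatio || 1;
      // Resize the backing store to the device resolution...
      canvas.width = cssWidth * ratio;
      canvas.height = cssHeight * ratio;
      // ...and scale the context so drawing code keeps using CSS pixels.
      ctx.setTransform(ratio, 0, 0, ratio, 0, 0);
      // Re-render the scene at the new resolution.
      paint(ctx);
    }

    window.onresize = rescale;
    rescale();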

>> Boris, Rob: As an accessibility use case, this is quite important.
>> Please let me know if there are objections.
>
> I don't think it's reasonable to demand resolution independence from
> what is designed to be a bitmap format.  We really do have better tools
> for them; using them instead seems more appropriate than grafting
> poor-man's resolution independence onto canvas.
PNG is a bitmap format. Canvas is a drawing API. This is not poor-man's
resolution independence: it's a reasonable, standards-based implementation.
It exists now, it works fine, and I am notably frustrated by your responses.
They are not grounded in an appreciation for my use cases or the evidence I
have to offer.

When I say, "please expose additional screen metrics," you respond: you're
doing it wrong, it's poor, and we tried to do it for you, but it didn't work
out.

I mean.. come on.
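
For the record, the kind of metrics I'm asking for already ship in IE. A
rough sketch of deriving the zoom level from its window.screen extensions
(deviceXDPI/logicalXDPI are IE-only properties; treat this as an assumption
about what other UAs could expose, not a cross-browser recipe):

    // IE reports the zoomed DPI (deviceXDPI) and the unzoomed DPI
    // (logicalXDPI); their ratio approximates the current zoom level.
    function getZoomLevel() {
      if (screen.deviceXDPI && screen.logicalXDPI) {
        return screen.deviceXDPI / screen.logicalXDPI;
      }
      return 1; // metrics not exposed: assume no zoom
    }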

My evidence is essentially nullified when you make broad statements about how there are better tools and better formats. I don't doubt your good intentions here, but I am suggesting
that you've made an error in judgement.


-Charles
