I was just looking at the spec text on getImageData/putImageData, and I had a few comments. Of the three, #3 is the most important one:

1)  It may be worth noting that if the canvas backing store is stored
    as premultiplied rgba, then getImageData right after a putImageData
    may well not return the values in the CanvasPixelArray that was
    put, due to rounding when converting to and from premultiplied
    colors.
2)  The description of putImageData says it "Paints the data from the
    given ImageData object onto the canvas".  It may be worth
    specifying that this uses the SOURCE operator, though this is
    clear later on when defining what the method _really_ does.
3)  It's not clear to me why ImageData actually exposes device pixels,
    nor is it clear to me how this is supposed to work if the same
    document is being rendered to multiple devices.  Is a UA allowed
    to have a higher internal resolution for a canvas (in device pixels)
    and then sample when painting to the device?  This might well be
    desirable if the UA expects the canvas to be scaled; it can well
    reduce scaling artifacts in that situation.  It doesn't seem
    reasonable, to me, to expose such super-sampling via imageData;
    it's entirely an optimization detail.

    Worse yet, the current setup means that a script that calls
    createImageData, fills in the pixels, and then paints the result to
    the canvas needs to fill different numbers of pixels depending on
    the output device.  I fully expect script authors to get this very
    very wrong, since it's such non-intuitive behavior.  It would make more
    sense to just have the script work entirely in CSS pixels; if it
    wishes to create a higher-resolution image it can create a canvas
    with bigger dimensions (and scale its actual display via setting
    its width and height CSS properties).
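    To make point 1 concrete, here is a minimal sketch of the rounding
    loss, assuming an 8-bit premultiplied backing store; the function
    names are illustrative, not anything from the spec:

    ```javascript
    // Convert a non-premultiplied 8-bit channel to premultiplied form,
    // as a backing store using premultiplied RGBA would on putImageData.
    function premultiply(value, alpha) {
      return Math.round(value * alpha / 255);
    }

    // Convert back, as getImageData would have to do.
    function unpremultiply(value, alpha) {
      return alpha === 0 ? 0 : Math.round(value * 255 / alpha);
    }

    // Put channel value 200 at alpha 10, then read it back:
    const stored = premultiply(200, 10);        // round(200 * 10 / 255) = 8
    const readBack = unpremultiply(stored, 10); // round(8 * 255 / 10) = 204
    console.log(stored, readBack);              // 8 204 -- not the 200 we put
    ```

    At low alpha values many distinct input values collapse onto the
    same premultiplied byte, so the round-trip cannot be lossless.
    
    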
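    For point 2, a sketch of why the operator matters.  With SOURCE the
    destination pixel is replaced outright; with the default SOURCE-OVER
    it would be blended.  Pixels here are premultiplied [r, g, b, a]
    arrays in [0, 1], purely for illustration:

    ```javascript
    // Porter-Duff SOURCE: destination is ignored entirely.
    function source(src, dest) {
      return src.slice();
    }

    // Porter-Duff SOURCE-OVER on premultiplied values:
    // result = src + dest * (1 - srcAlpha)
    function sourceOver(src, dest) {
      return src.map((s, i) => s + dest[i] * (1 - src[3]));
    }

    const dest = [0, 0, 1, 1];     // opaque blue
    const src  = [0.5, 0, 0, 0.5]; // half-transparent red (premultiplied)
    console.log(source(src, dest));     // [0.5, 0, 0, 0.5]
    console.log(sourceOver(src, dest)); // [0.5, 0, 0.5, 1]
    ```

    Only the SOURCE behavior lets putImageData write transparent pixels
    over opaque ones, which is why it is worth calling out up front.
    
    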
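    The pitfall in point 3 can be sketched as follows.  If ImageData is
    in device pixels, the buffer size a script must fill depends on the
    CSS-to-device pixel ratio; the helper below is a hypothetical model
    of that behavior, not anything the spec defines:

    ```javascript
    // Hypothetical: dimensions of the ImageData a UA would hand back for
    // a given CSS-pixel rectangle, if ImageData is in device pixels.
    function imageDataDimensions(cssWidth, cssHeight, devicePixelRatio) {
      return {
        width: Math.round(cssWidth * devicePixelRatio),
        height: Math.round(cssHeight * devicePixelRatio),
      };
    }

    // The same 100x100 CSS-pixel region on two devices:
    console.log(imageDataDimensions(100, 100, 1)); // { width: 100, height: 100 }
    console.log(imageDataDimensions(100, 100, 2)); // { width: 200, height: 200 }
    // A script that hard-codes a 100 * 100 * 4-entry fill loop silently
    // fills only a quarter of the buffer on the 2x device.
    ```

    Working entirely in CSS pixels would make the loop bounds the same
    everywhere, which is the behavior scripts will naturally assume.
    
    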

-Boris
