Gosh, this thread is old. I'm going to try to compose a coherent
response, but at this point I've forgotten a lot of the context...

On Fri, May 9, 2014 at 12:02 PM, Ian Hickson <i...@hixie.ch> wrote:
> On Thu, 18 Jul 2013, K. Gadd wrote:
>> Ultimately the core here is that without control over colorspace
>> conversion, any sort of deterministic image processing in HTML5 is off
>> the table, and you have to write your own image decoders, encoders, and
>> manipulation routines in JavaScript using raw typed arrays. Maybe that's
>> how it has to be, but it would be cool to at least support basic
>> variations of these use cases in Canvas since getImageData/putImageData
>> already exist and are fairly well-specified (other than this problem,
>> and some nits around source rectangles and alpha transparency).
> Given that the user's device could be a very low-power device, or one with
> a very small screen, but the user might still want to be manipulating very
> large images, it might be best to do the "master" manipulation on the
> server anyway.

This request is not about efficient image manipulation (as you point
out, that's best done on a high-powered machine) - without control over
colorspace conversion, any image processing is nondeterministic. There
are games and apps out there that rely on getting the exact same
pixels out of a given Image on all machines, and that's impossible
right now due to differing behaviors. You see demoscene projects
packing data into bitmaps (yuck), or games using images as the
canonical representation of user-generated content. The latter, I
think, is entirely defensible - maybe even desirable, since it lets
end users interact with the game using Photoshop or MSPaint.
Supporting these use cases in a cross-browser manner is impossible
right now, yet they work in the desktop versions of these games.
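To make the determinism requirement concrete, here's a minimal sketch (the helper name is mine, not from any real project) of the kind of data-in-bitmap scheme I mean. Any transform the decoder applies to the channels, however small, corrupts the payload:

```javascript
// Sketch (hypothetical helper): reading a 32-bit payload packed into the
// RGBA channels of one decoded pixel, little-endian. This only works if
// every browser hands back the exact bytes that were encoded.
function unpackUint32(rgba, offset) {
  return (rgba[offset] |
          (rgba[offset + 1] << 8) |
          (rgba[offset + 2] << 16) |
          (rgba[offset + 3] << 24)) >>> 0;
}

// If colorspace conversion nudges even one channel by one step, the
// payload changes: [0x78, ...] and [0x79, ...] decode to different values.
```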

On Fri, May 9, 2014 at 12:02 PM, Ian Hickson <i...@hixie.ch> wrote:
> On Thu, 18 Jul 2013, K. Gadd wrote:
>> Out of the features suggested previously in the thread, I would
>> immediately be able to make use of control over colorspace conversion
>> and an ability to opt into premultiplied alpha. Not getting
>> premultiplied alpha, as is the case in virtually every canvas
>> implementation I've tried, has visible negative consequences for image
>> quality and also reduces the performance of some use cases where bitmap
>> manipulation needs to happen, due to the fact that premultiplied alpha
>> is the 'preferred' form for certain types of rendering and the math
>> works out better. I think the upsides to getting premultiplication are
>> the same here as they are in WebGL: faster uploads/downloads, better
>> results, etc.
> Can you elaborate on exactly what this would look like in terms of the API
> implications? What changes to the spec did you have in mind?

I don't remember what my exact intent here was, but I'll try to resynthesize it:
The key here is to have a clear understanding of what data you get out
of an ImageBitmap. It is *not* necessary for the end user to be able
to specify it, as long as the spec dictates that all browsers provide
the exact same format to end users.
If we pick one format and lock to it, we want a format that discards
as little source image data as possible (preferably *no* data is
discarded) - which would mean the raw source image data, without any
colorspace or alpha channel conversion applied.

This allows all the procedural image manipulation cases described
above, and makes it a very fast and straightforward path for loading
images you plan to pass off to the GPU as a WebGL texture. There's a
bit more on this below...
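As a sketch of what opting out of these conversions could look like - an options bag passed at ImageBitmap creation time. The option names and values here are my own illustration, not anything in the current spec:

```javascript
// Sketch: hypothetical decode options that pin down the pixel format you
// get back, instead of leaving it up to the implementation.
function rawDecodeOptions() {
  return {
    colorSpaceConversion: 'none', // hand back the file's raw pixel values
    premultiplyAlpha: 'none'      // keep the unpremultiplied alpha channel
  };
}

// In a browser this might be used as:
//   createImageBitmap(blob, rawDecodeOptions()).then(bitmap => { ... });
```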

On Fri, May 9, 2014 at 12:02 PM, Ian Hickson <i...@hixie.ch> wrote:
> On Thu, 18 Jul 2013, K. Gadd wrote:
>> To clearly state what would make ImageBitmap useful for the use cases I
>> encounter and my end-users encounter:
>> ImageBitmap should be a canonical representation of a 2D bitmap, with a
>> known color space, known pixel format, known alpha representation
>> (premultiplied/not premultiplied), and ready for immediate rendering or
>> pixel data access. It's okay if it's immutable, and it's okay if
>> constructing one from an <img> or a Blob takes time, as long as once I have
>> an ImageBitmap I can use it to render and use it to extract pixel data
>> without user configuration/hardware producing unpredictable results.
> This seems reasonable, but it's not really detailed enough for me to turn
> it into spec. What colour space? What exactly should we be doing to the
> alpha channel?

Very specifically here, by 'known color space' i just mean that the
color space of the image is exposed to the end user. I don't think we
can possibly pick a standard color space to always use; the options
are 'this machine's current color space' and 'the color space of the
input bitmap'. In many cases the color space of the input bitmap is
effectively 'no color space', and game developers feed the raw rgba to
the GPU. It's important to support that use case without degrading the
image data.

Alpha channel is simpler, but just as important. Most image formats
store unpremultiplied image data, where you can have both
255,255,255,0 and 255,255,255,255 as valid colors - transparent white
and opaque white. Premultiplied alpha is common in game scenarios and
some game image formats, where transparent white is the same as
transparent black - 0,0,0,0. Premultiplied alpha is common in games
because it simplifies rendering considerably. A browser's rendering
pipeline is likely to use premultiplied alpha in some places, if not
all places.
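Concretely, the two representations are related by a lossy transform - a sketch of the math, assuming 8-bit RGBA:

```javascript
// Sketch: converting between unpremultiplied and premultiplied 8-bit RGBA.
function premultiply([r, g, b, a]) {
  const f = a / 255;
  return [Math.round(r * f), Math.round(g * f), Math.round(b * f), a];
}

function unpremultiply([r, g, b, a]) {
  if (a === 0) return [0, 0, 0, 0]; // color was discarded; unrecoverable
  const f = 255 / a;
  return [Math.round(r * f), Math.round(g * f), Math.round(b * f), a];
}
```

Note that transparent white [255,255,255,0] and transparent black [0,0,0,0] both premultiply to [0,0,0,0] - that round trip is exactly the information loss at issue.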

Given this, you have a dilemma: Premultiplied alpha is faster and
often desirable, but if all image data comes premultiplied, you're
discarding information (just like with colorspace conversion). This is
worsened by the fact that canvas (as currently specified) does not
accept or provide premultiplied image data when you use it to access
image pixels. This imposes a performance cost for painting
premultiplied pixels to a canvas (you have to unpremultiply them, ugh)
and it means that the inverse operation costs more too.

Ideally ImageBitmap would provide the raw unpremultiplied image data
for a given image. It would make sense to expose a way to get it
premultiplied, but that is at least an operation you can trivially
implement in JS if you have to. It's just important to specify what
happens, versus leaving it up to the implementation as it is now.
Personally, I think it would be great if the ImageBitmap creation operation
could create a premultiplied bitmap, because if the browser already
has a premultiplied version of the image cached (i.e. if that's the
canonical version it uses for rasterization/compositing), creating the
bitmap is as simple as a memcpy instead of a more complex operation.

On Fri, May 9, 2014 at 12:02 PM, Ian Hickson <i...@hixie.ch> wrote:
> On Wed, 17 Jul 2013, K. Gadd wrote:
>> By 'the other coordinates' I mean that if you constructed it from a
>> subrectangle of another image (via the sx, sy, sw, sh parameters) it
>> would be good to expose *all* those constructor arguments. This allows
>> you to more easily maintain a cache of ImageBitmaps without additional
>> bookkeeping data.
> Can you elaborate on this? Do you mean, e.g. making one ImageBitmap per
> sprite in a sprite sheet? If so, wouldn't you index by the name or ID of
> the sprite rather than the coordinates of the sprite in the sheet?
>> The use case is being able to draw lots of different subrectangles of
>> lots of different images in a single frame.
> Like, sprites?
> Wouldn't you know these ahead of time?

As I tried to elaborate before, no. Real-world rendering scenarios
often involve picking an arbitrary source rectangle every frame and
drawing it. Imagine scrolling through a huge bitmap that represents a
level in a game world - many classic 2D games and some classic 3D
games used this technique to represent prerendered environments. In
this context, it's impossible to 'know' the rectangles in advance and
cache the ImageBitmaps. You end up having to split the world up into
arbitrarily-sized tiles (and then you have the same problem as before;
how do you crop them without using sx/sy/sw/sh, which are established
to not do the right thing here?) or pay the cost of creating a new
ImageBitmap every frame. This is not a hypothetical scenario;
something like half the games I've ported use source rectangles this
way. Some of them do have easily predictable source rectangles, but
even in that scenario, ImageBitmap is worse because I still have to
retain the full-size image for correctness. (This is the reason why
efficient subregions are important.)
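The scrolling case can be sketched in a few lines - the source rectangle is a function of the camera position, so the set of rectangles is effectively unbounded (function and parameter names here are mine):

```javascript
// Sketch: the per-frame source rectangle for a camera scrolling over a
// large level bitmap. Every frame can yield a different rectangle, so
// there is no finite set of ImageBitmaps you could precompute.
function cameraSourceRect(camX, camY, viewW, viewH, levelW, levelH) {
  const sx = Math.max(0, Math.min(Math.round(camX), levelW - viewW));
  const sy = Math.max(0, Math.min(Math.round(camY), levelH - viewH));
  return { sx, sy, sw: viewW, sh: viewH };
}

// In a browser, each frame would then do something like:
//   const r = cameraSourceRect(cam.x, cam.y, 640, 480, level.width, level.height);
//   ctx.drawImage(level, r.sx, r.sy, r.sw, r.sh, 0, 0, r.sw, r.sh);
```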

Exposing all the inputs as properties enables more straightforward use
of the instance as a cache element. It's not essential, but it makes
it easier to identify whether you've got an instance with the
appropriate information, and also makes it easier to clone one (if you
need to). This sort of cache is already a nightmare to maintain since
the closest thing we have to a usable cache (WeakMap) still isn't
available on the actual web (sigh).
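As a sketch of why exposing the inputs matters: a subrect cache has to reconstruct exactly the key the bitmap was created with, so today you carry that bookkeeping alongside every instance yourself. The class and names below are hypothetical:

```javascript
// Sketch: a cache of per-subrectangle objects keyed on the construction
// inputs (image identity plus sx/sy/sw/sh). If ImageBitmap exposed those
// inputs as properties, the value alone could serve as its own key.
class SubrectCache {
  constructor() { this.map = new Map(); }
  key(imageId, sx, sy, sw, sh) {
    return `${imageId}:${sx},${sy},${sw},${sh}`;
  }
  get(imageId, sx, sy, sw, sh, create) {
    const k = this.key(imageId, sx, sy, sw, sh);
    let v = this.map.get(k);
    if (v === undefined) {
      v = create(); // e.g. kick off createImageBitmap for this subrect
      this.map.set(k, v);
    }
    return v;
  }
}
```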

On Fri, May 9, 2014 at 12:02 PM, Ian Hickson <i...@hixie.ch> wrote:
> On Wed, 17 Jul 2013, K. Gadd wrote:
>> To be clear, I think this is essential because it is a synchronous
>> operation (this form of ImageBitmap could potentially not even involve a
>> copy, though I understand if for some reason you can't provide that) and
>> it's an operation that is extremely common in performance-sensitive 2D
>> rendering. To me, the GC pressure from ImageBitmap instances is bad
>> enough; adding an event loop turn and a copy and potentially another
>> decode is just plain ridiculous. It'll force people to go straight to
>> WebGL, which would be a shame (especially due to the compatibility
>> penalty that results from that.)
> I'm not really understanding why you can't just use drawImage() if you are
> in fact just drawing arbitrary subparts of a master image. Why would you
> want to use ImageBitmap?

drawImage is specified to be unusable for these scenarios. It samples
pixels from outside the source rectangle, and has to in order to be
spec compliant. Chrome used to provide the desired behavior here, but
it was changed because it was not compliant with the spec. If memory
serves, Vlad told me that this is required to handle particular
scenarios (dirty rectangle updates of filtered images, etc.), so it's
not possible to simply 'fix' drawImage for this scenario unless it's
via the addition of some sort of opt-in filtering state that clamps
samples to the source rectangle.

IIRC drawImage source rectangles have miserable performance
characteristics in most accelerated canvas backends, sometimes
involving a temporary texture being created with the subrect - Safari
and Firefox both did this the last time I tested - so even given
correct sampling it would still be a poorly-performing solution. :/

On Fri, May 9, 2014 at 12:02 PM, Ian Hickson <i...@hixie.ch> wrote:
> I'm assuming you're referring to the case where if you try to draw a
> subpart of an image and for some reason it has to be sampled (e.g. you're
> drawing it larger than the source), the anti-aliasing is optimised for
> tiling and so you get "leakage" from the next sprite over.
> If so, the solution is just to separate the sprites by a pixel of
> transparent black, no?

This is the traditional solution for scenarios where you are sampling
from a filtered texture in 3D. However, it only works if you never
scale images, which is actually not the case in many game scenarios.
Where a texture is going to be scaled, you end up having to pad
everything with as much as 8 pixels of transparent black, bloating
your memory usage considerably. Having to manually pad images with
transparent pixels is something that a lot of game developers aren't
familiar with, so many games you port to the web will not have assets
with appropriate padding - they don't need it on desktop PCs if they
have filtering disabled or don't scale, but they need it on the web
because drawImage samples outside the rectangle.
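The memory cost is easy to quantify - a sketch of the arithmetic, assuming 4 bytes per RGBA pixel and `pad` pixels of transparent black on every side of each sprite:

```javascript
// Sketch: total bytes for `count` sprites once each is padded with `pad`
// pixels of transparent black on every side (RGBA, 4 bytes per pixel).
function paddedAtlasBytes(spriteW, spriteH, count, pad) {
  const w = spriteW + 2 * pad;
  const h = spriteH + 2 * pad;
  return w * h * 4 * count;
}

// For 256 sprites of 32x32, 8 pixels of padding grows 1 MB of pixel data
// to 2.25 MB - a 2.25x bloat just to keep the sampler inside each sprite.
```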

On Fri, May 9, 2014 at 12:02 PM, Ian Hickson <i...@hixie.ch> wrote:
> ImageBitmap wasn't meant for these cases. If you want to make a new image
> in this way, I would recommend using drawImage() onto a new canvas.

If this is canonically the correct solution, I'm kind of okay with it,
but I know from history (and from my experience/familiarity with
graphics pipelines) that this kind of reuse of temporary surfaces
introduces nasty stalls into rendering pipelines and can force
synchronous flushes. I think if we force developers to do this it may
harm the performance of HTML5 games overall, forcing more people to
rely on WebGL - which means platforms without WebGL (due to
blacklisting, etc) can't run those games effectively.

On Fri, May 9, 2014 at 12:02 PM, Ian Hickson <i...@hixie.ch> wrote:
>> b) have a rendering option to modify drawImage's edge filtering behavior
>> (either an argument to drawImage or a rendering context attribute)
> Yeah, maybe we should do that. I filed a bug:
>    https://www.w3.org/Bugs/Public/show_bug.cgi?id=25635


Thanks for the follow-up!

I can provide games that demonstrate some of the problems I describe
if it helps - assuming they're still up on the internet; some of them
are not on servers hosted by me, because some of my customers port
games themselves using my compiler.
