On 12/17/13 10:55 PM, Rik Cabanier wrote:
Ah. I see what you're saying now. My first reaction was "that's
brilliant!". Unfortunately, my second reaction was that drawImage()
would then block on the image decoding, and unless this was being done
in a Worker, I'm almost certain that would be an unacceptable
performance hit. (One of my use cases is scanning an SD card for
hundreds of images and generating thumbnails for each of them. If doing
this used drawImage() calls that blocked the main thread, my UI would be
unresponsive.)
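The Worker route mentioned above might be sketched like this. This is only an illustrative outline: `createImageBitmap` availability inside the Worker is an assumption, and `thumbnailSize`/`decodeThumb` are invented helper names, not anything from the spec.

```javascript
// Pure helper: fit (srcW x srcH) inside (maxW x maxH), preserving aspect ratio.
function thumbnailSize(srcW, srcH, maxW, maxH) {
  const scale = Math.min(maxW / srcW, maxH / srcH, 1);
  return { width: Math.round(srcW * scale), height: Math.round(srcH * scale) };
}

// Hypothetical Worker-side step: decode each Blob scanned from the card,
// then hand the bitmap back for drawing. Note createImageBitmap() may still
// decode at full size here -- which is exactly the memory concern below.
async function decodeThumb(blob, maxW, maxH) {
  const bitmap = await createImageBitmap(blob);
  const { width, height } = thumbnailSize(bitmap.width, bitmap.height, maxW, maxH);
  return { bitmap, width, height };
}
```

The point of the sketch is only that the decode can be kept off the main thread; it does not by itself avoid the full-size allocation discussed later in the thread.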
On Tue, Dec 17, 2013 at 9:36 PM, David Flanagan <dflana...@mozilla.com> wrote:
On 12/17/13 8:36 PM, Rik Cabanier wrote:
Is there a reason why you are completely decoding the image when
you create the ImageBitmap?
I assume that is the intent of calling createImageBitmap() on
a blob. Since JPEG decoding probably takes significantly longer
than blocking on memory access, I assume that lazy decoding is not
allowed.
No, nothing in the spec says that you *must* decode the bits:
The exact judgement of what is undue latency of this is left up to
the implementer, but in general if making use of the bitmap
requires network I/O, or even local disk I/O, then the latency is
probably undue; whereas if it only requires a blocking read from a
GPU or system RAM, the latency is probably acceptable.
In your case, things are reversed. Allocating system RAM will kill
performance and cause undue latency. Reading the JPEG image on the fly
from a Flash disk will be less disruptive and faster.
But that misses my point. On the devices I'm concerned with I can
never completely decode the image whether it is deferred or not.
If I decode at full size, apps running in the background are
likely to be killed because of low memory. I need the ability to
do the downsampling during the decoding process, so that there is
never the memory impact of holding the entire full-size image in
memory.
If you detect a situation where this operation causes excessive
memory consumption, you could hold on to the compressed data URL
and defer decoding until the point where it is actually needed.
Since exhausting VM will create "undue latency", this workaround
follows the spirit of the spec.
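That deferral could look roughly like the following. This is a sketch only: the cache shape and the `remember`/`bitmapFor` names are invented for illustration, and it assumes `createImageBitmap()` is available.

```javascript
// Sketch: keep only the compressed Blob around, and decode lazily
// the first time the bitmap is actually needed.
const pending = new Map(); // key -> { blob, bitmap | null }

function remember(key, blob) {
  pending.set(key, { blob, bitmap: null });
}

async function bitmapFor(key) {
  const entry = pending.get(key);
  if (!entry) throw new Error("unknown key: " + key);
  if (!entry.bitmap) {
    // The decode happens here, at first use, not at registration time.
    entry.bitmap = await createImageBitmap(entry.blob);
  }
  return entry.bitmap;
}
```

Whether this counts as avoiding "undue latency" is exactly the judgment call the spec leaves to the implementer.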
If you really want to have the downsampled bits in memory, you
could create a canvas and draw your image into it.
I can't do that because I don't have (and cannot have) a full-size
decoded image. I've got a blob that is a JPEG encoded 5 megapixel
image. And I want to end up with a decoded 320x480 image. And I
want to get from A to B without ever allocating 20 MB and decoding
the image at full size.
The downsampling happens *during* the drawImage() of the ImageBitmap
into the canvas. At no point do you have to allocate 20 MB.
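The memory numbers in this exchange work out roughly as follows (back-of-the-envelope, assuming 4 bytes per RGBA pixel and 2592x1944 as a representative 5-megapixel frame; both figures are assumptions, not from the thread):

```javascript
// Rough memory cost of a decoded bitmap at 4 bytes (RGBA) per pixel.
function decodedBytes(width, height) {
  return width * height * 4;
}

const fullSize = decodedBytes(2592, 1944); // ~5 MP -> ~20 MB decoded
const thumb = decodedBytes(320, 480);      // target size -> ~0.6 MB decoded
```

So the full-size decode costs roughly 30x the memory of the thumbnail, which is why doing the downsample inside the decode step matters on a memory-constrained device.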
I also suspect that adding an async version of drawImage() to the canvas
API is a non-starter because that API is pretty fundamentally synchronous.