Re: [whatwg] ImageBitmap feature requests
Well, you could assign a unique sequential identifier or GUID to ImageBitmaps, like object URLs, as long as you remove the lifetime relationship where the object has to be manually freed. That would let you do some of those caching scenarios; the key is that the lifetime is now managed by 'do any elements use this ImageBitmap as a source, or is it retained by user JS', I think?

On Tue, May 20, 2014 at 1:01 PM, Justin Novosad ju...@google.com wrote:

On Sun, May 18, 2014 at 11:02 PM, Robert O'Callahan rob...@ocallahan.org wrote:

On Sat, May 17, 2014 at 4:18 AM, Anne van Kesteren ann...@annevk.nl wrote: Maybe we should have img.srcObject similar to what we're doing for media elements. img.src can simply return about:imagebitmap or some such. That way you can also assign a Blob to an img element without having to do the weird createObjectURL() hack that might leak memory if you're not careful.

I like this approach, but I think it's simpler to continue to have HTMLImageElement.src reflect the src content attribute.

I wonder what kind of broader effect it would have if image content can no longer be uniquely identified or retrieved using a URL. In many places in Blink/WebKit (and presumably other implementations as well), URLs are used as keys and handles for image resources. All of that would have to be refactored.
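The createObjectURL hack Anne mentions, and the proposed alternative, can be sketched side by side. Hedged: srcObject on img elements is only the proposal being discussed in this thread, not a shipping feature; the createObjectURL path is the real API.

```javascript
// Hedged sketch: srcObject on <img> is the *proposed* API from this thread
// (hypothetical); the createObjectURL path is today's workaround.
function displayBlob(img, blob) {
  if ('srcObject' in img) {
    // Proposed: lifetime is managed by the element, no manual revocation.
    img.srcObject = blob;
  } else {
    // Today's hack: leaks memory unless the URL is revoked explicitly.
    const url = URL.createObjectURL(blob);
    img.onload = () => URL.revokeObjectURL(url);
    img.src = url;
  }
}
```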
Re: [whatwg] WebGL and ImageBitmaps
I'd expect that the error might not accumulate for most color values. Rounding would potentially kick in once you get the first loss of precision. I've only historically seen color shifts upon repeated rendering in scenarios where you're losing lots of precision, or losing energy (bad RGB -> HSV conversions, for example) - you don't actually need a lot of precision to fix that as long as your coefficients are right.

On Fri, May 16, 2014 at 8:41 PM, Rik Cabanier caban...@gmail.com wrote:

On Fri, May 16, 2014 at 3:06 PM, Justin Novosad ju...@google.com wrote:

On Fri, May 16, 2014 at 5:42 PM, Rik Cabanier caban...@gmail.com wrote: Is the Web page not composited in sRGB? If so, it seems the backing store should be sRGB too.

The web page is not composited in sRGB. It is composited in the output device's color space, which is often sRGB or close to sRGB, but not always. A notable exception is pre-Snow Leopard Macs, which use a gamma 1.8 transfer curve. By the way, sniffing the display color profile through getImageData is a known fingerprinting technique. This factor alone can be sufficient to fingerprint a user who has a calibrated monitor.

I'm unable to reproduce what you're describing. So, if I fill with a color and repeatedly do a getImageData/putImageData, should I see color shifts?
Re: [whatwg] WebGL and ImageBitmaps
The point I was trying to make there is that for many format conversions or encoding conversions (RGB -> YUV, RGB -> HSL), not all input values are degraded equally. The amount of error introduced depends on the inputs. There are going to be some values for which the conversion is more or less accurate - for example, in most cases I would expect black and white to convert without any error. As a result, you can't just pick a few random colors, fill a canvas with them, and decide based on that whether or not error is being introduced. At a minimum, you should use a couple of test pattern bitmaps and do a comparison of the result. Keep in mind that all the discussions of profile conversion so far have been about bitmaps, not synthesized solid colors. I am, of course, not an expert - but I have observed this with repeated RGB -> HSL conversions in the past (testing poor implementations that introduced accumulated error against relatively good implementations that did not accumulate very much error over time.)

http://en.wikipedia.org/wiki/SRGB#Specification_of_the_transformation

Note that as described there, clipping and rounding may occur, and linear -> gamma-corrected conversions may also occur. We also can't know what color profile configuration your machine happens to be using when you run these tests, or what browser you're using. Both of those are important when saying that you can/can't reproduce the issue.

On Sun, May 18, 2014 at 8:22 AM, Rik Cabanier caban...@gmail.com wrote:

On Sun, May 18, 2014 at 2:15 AM, K. Gadd k...@luminance.org wrote: I'd expect that the error might not accumulate for most color values. Rounding would potentially kick in once you get the first loss of precision.

That doesn't make sense. If this is a shift because of color management, it should happen for pretty much all values. I changed my profile to generate wild color shifts and tried random color values but don't see any changes in any browser.
Could this just be happening with images that have profiles?
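The sRGB transformation linked earlier in this thread is easy to write down, and a standalone sketch (not from the thread) shows the point about unequal degradation: an 8-bit quantization step in the middle of a linear round trip leaves black, white, and most mid-tones intact while collapsing dark values.

```javascript
// sRGB <-> linear transfer functions, per the sRGB specification.
function srgbToLinear(c) {            // c in [0, 1]
  return c <= 0.04045 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
}
function linearToSrgb(c) {
  return c <= 0.0031308 ? c * 12.92 : 1.055 * Math.pow(c, 1 / 2.4) - 0.055;
}
// Round-trip an 8-bit sRGB value through an 8-bit *linear* buffer,
// the kind of precision loss a naive conversion pipeline introduces.
function roundTripVia8bitLinear(v) {
  const linear8 = Math.round(srgbToLinear(v / 255) * 255); // quantize!
  return Math.round(linearToSrgb(linear8 / 255) * 255);
}
// 0 and 255 survive exactly, but dark values collapse:
// roundTripVia8bitLinear(1) === 0
```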
Re: [whatwg] WebGL and ImageBitmaps
Replies inline.

On Wed, May 14, 2014 at 4:27 PM, Glenn Maynard gl...@zewt.org wrote:

On Mon, May 12, 2014 at 3:19 AM, K. Gadd k...@luminance.org wrote: This is the traditional solution for scenarios where you are sampling from a filtered texture in 3d. However, it only works if you never scale images, which is actually not the case in many game scenarios.

That's only an issue when sampling without premultiplication, right? I had to refresh my memory on this: https://zewt.org/~glenn/test-premultiplied-scaling/ The first image is using WebGL to blit unpremultiplied. The second is WebGL blitting premultiplied. The last is 2d canvas. (We're talking about canvas here, of course, but WebGL makes it easier to test the different behavior.) This blits a red rectangle surrounded by transparent space on top of a red canvas. The black square is there so I can tell that it's actually drawing something. The first one gives a seam around the transparent area, as the white pixels (which are completely transparent in the image) are sampled into the visible part. I think this is the problem we're talking about. The second gives no seam, and the Canvas one gives no seam, indicating that it's a premultiplied blit. I don't know if that's specified, but the behavior is the same in Chrome and FF.

The reason one pixel isn't sufficient is that if the minification ratio is below 50% (say, 33%), sampling algorithms other than non-mipmapped-bilinear will begin sampling more than 4 pixels (or one quad, in gpu shading terminology), so you now need enough transparent pixels around all your textures to ensure that sampling never crosses the boundaries into another image. http://fgiesen.wordpress.com/2011/07/10/a-trip-through-the-graphics-pipeline-2011-part-8/ explains the concept of quads, along with relevant issues like centroid interpolation. Anyone talking about correctness or performance in modern accelerated rendering might benefit from reading this whole series.
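Glenn's seam demonstration comes down to simple interpolation math. This standalone sketch (not from the thread) blends an opaque red pixel with a fully transparent white neighbor, which is exactly the situation at the edge of his test image:

```javascript
// Why non-premultiplied filtering bleeds color: bilinear-blend an opaque
// red pixel with a fully transparent *white* pixel. Straight-alpha
// interpolation lets the invisible white leak into RGB; premultiplied
// interpolation does not.
function lerp(a, b, t) { return a + (b - a) * t; }

// Straight (non-premultiplied) blend of two RGBA pixels (channels in [0,1]).
function blendStraight(p0, p1, t) {
  return p0.map((c, i) => lerp(c, p1[i], t));
}

// Premultiplied blend: multiply RGB by alpha, blend, then un-premultiply.
function blendPremultiplied(p0, p1, t) {
  const pm = ([r, g, b, a]) => [r * a, g * a, b * a, a];
  const [r, g, b, a] = blendStraight(pm(p0), pm(p1), t);
  return a === 0 ? [0, 0, 0, 0] : [r / a, g / a, b / a, a];
}

const red = [1, 0, 0, 1];
const transparentWhite = [1, 1, 1, 0];
// Halfway sample:
//   blendStraight(red, transparentWhite, 0.5)      -> [1, 0.5, 0.5, 0.5] (pink seam)
//   blendPremultiplied(red, transparentWhite, 0.5) -> [1, 0, 0, 0.5]     (still red)
```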
You do make the good point that whether or not the canvas implementation is using premultiplied textures has an effect on the result of scaling and filtering (since doing scaling/filtering on non-premultiplied RGBA produces color bleeding from transparent pixels). Is that currently specified? I don't think I've seen bleeding artifacts recently, but I'm not certain whether the spec requires this explicitly.

This issue is however not color bleeding - color bleeding is a math 'error' that results from not using premultiplication - but that the filtering algorithm samples pixels outside the actual 'rectangle' intended to be drawn. (This is an implicit problem with sampling based on texture coordinates and derivatives instead of pixel offsets.) If you search for 'padding texture atlases' you can see some examples that show why this is a tricky problem and a single pixel of padding is not sufficient: http://wiki.polycount.com/EdgePadding

There are some related problems here for image compression as well, due to the block-oriented nature of codecs like JPEG and DXTC. Luckily they aren't something the user agent has to deal with in their canvas implementation, but that's another example where a single pixel of padding isn't enough.

On Tue, May 13, 2014 at 8:59 PM, K. Gadd k...@luminance.org wrote: I thought I was pretty clear about this... colorspace conversion and alpha conversion happen here depending on the user's display configuration, the color profile of the source image, and what browser you're using. I've observed differences between Firefox and Chrome here, along with different behavior on OS X (presumably due to their different implementation of color profiles). In this case 'different' means 'loading and drawing an image to a canvas gives different results via getImageData'.

That's a description, not an explicit example. An example would be a URL demonstrating the issue.
http://joedev.net/JSIL/Numbers/ was the first game whose developer reported this issue, because his levels are authored as images. He ended up solving the problem by following my advice to manually strip color profile information from all his images (though this is not a panacea; a browser could decide that profile-information-less images are now officially sRGB, and then profile-convert them to the display profile).

It's been long enough that I don't know if his uploaded build works anymore or whether it will demonstrate the issue. It's possible he removed his dependency on images by now. Here is what I told the developer in an email thread when he first reported the issue (and by 'reported' I mean 'sent me a very confused email saying that his game didn't work in Firefox and he had no idea why'):

The reason it's not working in Firefox right now is due to a Firefox bug, because your PNG files contain what's called an 'sRGB chunk': https://bugzilla.mozilla.org/show_bug.cgi?id=867594 I don't know if this bug can be fixed on Firefox's side because it's
Re: [whatwg] canvas feedback
Is it ever possible to make canvas-to-canvas blits consistently fast? It's my understanding that browsers still make intelligent/heuristic-based choices about which canvases to accelerate, if any, and that it depends on the size of the canvas, whether it's in the DOM, etc. I've had to report bugs related to this against Firefox and Chrome in the past; I'm sure more exist. There's also the scenario where you need to blit between Canvas2D canvases and WebGL canvases - the last time I tried this, a single blit could cost *hundreds* of milliseconds because of pipeline stalls and cpu-gpu transfers.

Canvas-to-canvas blits are a way to implement layering, but it seems like making it consistently fast via canvas-canvas blits is a much more difficult challenge than making sure that there are fast/cheap ways to layer separate canvases at a composition stage. The latter just requires that the browser have a good way to composite the canvases; the former requires that various scenarios with canvases living in CPU and GPU memory, deferred rendering queues, etc. all get resolved efficiently in order to copy bits from one place to another.

(In general, I think any solution that relies on using canvas-on-canvas drawing any time a single layer is invalidated is suspect. The browser already has a compositing engine for this that can efficiently update only modified subregions and knows how to cache reusable data; re-rendering the entire surface from JS on change is going to be a lot more expensive than that. Don't some platforms actually have compositing/layers at the OS level, like CoreAnimation on iOS/OSX?)

On Wed, May 14, 2014 at 6:30 AM, Jürg Lehni li...@scratchdisk.com wrote:

On Apr 30, 2014, at 00:27, Ian Hickson i...@hixie.ch wrote:

On Mon, 7 Apr 2014, Jürg Lehni wrote: Well this particular case, yes.
But in the same way we allow a group of items to have an opacity applied to it in Paper.js, and expect it to behave the same way as in SVG: the group should appear as if its children were first rendered at 100% alpha and then blitted over with the desired transparency. Layers would offer exactly this flexibility, and having them around would make a whole lot of sense, because currently the above can only be achieved by drawing into a separate canvas and blitting the result over. The performance of this is really low on all browsers, a true bottleneck in our library currently.

It's not clear to me why it would be faster if implemented as layers. Wouldn't the solution here be for browsers to make canvas-on-canvas drawing faster? I mean, fundamentally, they're the same feature.

I was perhaps wrongly assuming that including layering in the API would allow the browser vendors to better optimize this use case. The problem with the current solution is that drawing a canvas into another canvas is inexplicably slow across all browsers. The only reason I can imagine for this is that the pixels are copied back and forth between the GPU and the main memory, and perhaps converted along the way, while they could simply stay on the GPU as they are only used there. But reality is probably more complicated than that. So if the proposed API addition would allow a better optimization then I'd be all for it. If not, then I am wondering how I can get the vendors' attention to improve this particular case. It really is very slow currently, to the point where it doesn't make sense to use it for any sort of animation technique. J
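The render-at-full-alpha-then-blit technique Jürg describes can be sketched like this. Hedged: createCanvas is an assumed helper standing in for document.createElement('canvas') plus sizing (or node-canvas outside a browser); this is a sketch of the workaround, not Paper.js code.

```javascript
// Group opacity via a scratch canvas: render the group's children at 100%
// alpha, then blit the composed result once with the group's globalAlpha.
// createCanvas(w, h) is an assumed helper, e.g. in a browser:
//   (w, h) => Object.assign(document.createElement('canvas'), { width: w, height: h })
function drawGroupWithOpacity(ctx, createCanvas, width, height, opacity, drawChildren) {
  const scratch = createCanvas(width, height);
  drawChildren(scratch.getContext('2d')); // children rendered at full alpha
  ctx.save();
  ctx.globalAlpha = opacity;              // the group's transparency
  ctx.drawImage(scratch, 0, 0);           // single blit of the composed group
  ctx.restore();
}
```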
Re: [whatwg] WebGL and ImageBitmaps
On Mon, May 12, 2014 at 4:44 PM, Rik Cabanier caban...@gmail.com wrote: Can you give an explicit example where browsers are having different behavior when using drawImage?

I thought I was pretty clear about this... colorspace conversion and alpha conversion happen here depending on the user's display configuration, the color profile of the source image, and what browser you're using. I've observed differences between Firefox and Chrome here, along with different behavior on OS X (presumably due to their different implementation of color profiles). In this case 'different' means 'loading and drawing an image to a canvas gives different results via getImageData'.

On Mon, May 12, 2014 at 4:44 PM, Rik Cabanier caban...@gmail.com wrote: Would this be solved with Greg's proposal for flags on ImageBitmap: http://lists.whatwg.org/htdig.cgi/whatwg-whatwg.org/2013-June/251541.html

I believe so. I think I was on record when he first posted that I consider the alpha and colorspace flags he described as adequate. FlipY is considerably less important to me, but I can see how people might want it (honestly, reversing the order of scanlines is a very cheap operation; you can do it in the sampling stage of your shader, and actually *have* to in OpenGL because of their coordinate system when you're doing render-to-texture.)

On Mon, May 12, 2014 at 4:44 PM, Rik Cabanier caban...@gmail.com wrote:

Very specifically here, by 'known color space' I just mean that the color space of the image is exposed to the end user. I don't think we can possibly pick a standard color space to always use; the options are 'this machine's current color space' and 'the color space of the input bitmap'. In many cases the color space of the input bitmap is effectively 'no color space', and game developers feed the raw RGBA to the GPU. It's important to support that use case without degrading the image data.

Is that not the case today?

It is very explicitly not the case, which is why we are discussing it.
It is not currently possible to do lossless manipulation of PNG images in a web browser using canvas. The issues I described where you get different results from getImageData are a part of that.

On Mon, May 12, 2014 at 4:44 PM, Rik Cabanier caban...@gmail.com wrote: Safari never created a temporary image, and I recently updated Firefox so it matches Safari. Safari, IE, and Firefox will now sample outside of the drawImage region. Chrome does not, but they will fix that at some point.

This is incorrect. A quick Google search for 'webkit drawimage source rectangle temporary' reveals such, in a post to this list: http://lists.whatwg.org/pipermail/whatwg-whatwg.org/2012-December/080583.html My statement to this effect was based on my (imperfect) memory of that post. 'CGImage' (to me) says Safari since it's an Apple API, and the post mentions Safari. -kg
Re: [whatwg] WebGL and ImageBitmaps
Gosh, this thread is old. I'm going to try to compose a coherent response, but at this point I've forgotten a lot of the context...

On Fri, May 9, 2014 at 12:02 PM, Ian Hickson i...@hixie.ch wrote:

On Thu, 18 Jul 2013, K. Gadd wrote: Ultimately the core here is that without control over colorspace conversion, any sort of deterministic image processing in HTML5 is off the table, and you have to write your own image decoders, encoders, and manipulation routines in JavaScript using raw typed arrays. Maybe that's how it has to be, but it would be cool to at least support basic variations of these use cases in Canvas since getImageData/putImageData already exist and are fairly well-specified (other than this problem, and some nits around source rectangles and alpha transparency).

Given that the user's device could be a very low-power device, or one with a very small screen, but the user might still want to be manipulating very large images, it might be best to do the master manipulation on the server anyway.

This request is not about efficient image manipulation (as you point out, this is best done on a high power machine) - without control over colorspace conversion any image processing is nondeterministic. There are games and apps out there that rely on getting the exact same pixels out of a given Image on all machines, and that's impossible right now due to differing behaviors. You see demoscene projects packing data into bitmaps (yuck), or games using images as the canonical representation of user-generated content. The latter, I think, is entirely defensible - maybe even desirable, since it lets end users interact with the game using Photoshop or MSPaint. Supporting these use cases in a cross-browser manner is impossible right now, yet they work in the desktop versions of these games.

On Fri, May 9, 2014 at 12:02 PM, Ian Hickson i...@hixie.ch wrote:

On Thu, 18 Jul 2013, K.
Gadd wrote: Out of the features suggested previously in the thread, I would immediately be able to make use of control over colorspace conversion and an ability to opt into premultiplied alpha. Not getting premultiplied alpha, as is the case in virtually every canvas implementation I've tried, has visible negative consequences for image quality and also reduces the performance of some use cases where bitmap manipulation needs to happen, due to the fact that premultiplied alpha is the 'preferred' form for certain types of rendering and the math works out better. I think the upsides to getting premultiplication are the same here as they are in WebGL: faster uploads/downloads, better results, etc.

Can you elaborate on exactly what this would look like in terms of the API implications? What changes to the spec did you have in mind?

I don't remember what my exact intent here was, but I'll try to resynthesize it: the key here is to have a clear understanding of what data you get out of an ImageBitmap. It is *not* necessary for the end user to be able to specify it, as long as the spec dictates that all browsers provide the exact same format to end users. If we pick one format and lock to it, we want a format that discards as little source image data as possible (preferably *no* data is discarded) - which would mean the raw source image data, without any colorspace or alpha channel conversion applied. This allows all the procedural image manipulation cases described above, and makes it a very fast and straightforward path for loading images you plan to pass off to the GPU as a WebGL texture. There's a bit more on this below...

On Fri, May 9, 2014 at 12:02 PM, Ian Hickson i...@hixie.ch wrote:

On Thu, 18 Jul 2013, K.
Gadd wrote: To clearly state what would make ImageBitmap useful for the use cases I encounter and my end-users encounter: ImageBitmap should be a canonical representation of a 2D bitmap, with a known color space, known pixel format, known alpha representation (premultiplied/not premultiplied), and ready for immediate rendering or pixel data access. It's okay if it's immutable, and it's okay if constructing one from an img or a Blob takes time, as long as once I have an ImageBitmap I can use it to render and use it to extract pixel data without user configuration/hardware producing unpredictable results.

This seems reasonable, but it's not really detailed enough for me to turn it into spec. What colour space? What exactly should we be doing to the alpha channel?

Very specifically here, by 'known color space' I just mean that the color space of the image is exposed to the end user. I don't think we can possibly pick a standard color space to always use; the options are 'this machine's current color space' and 'the color space of the input bitmap'. In many cases the color space of the input bitmap is effectively 'no color space', and game developers feed the raw RGBA to the GPU. It's important to support that use case without degrading the image data. Alpha channel is simpler
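For what it's worth, the flags being debated in this thread eventually took roughly this shape in the ImageBitmapOptions dictionary. Hedged: at the time of this discussion these were only a proposal, and browser support for the 'none' values still varies, so treat this as a sketch rather than a guaranteed API.

```javascript
// Options requesting an untouched decode: straight alpha and no
// display-profile conversion, i.e. the "raw RGBA for the GPU" use case
// described above. Support varies by browser; this is a sketch.
function rawDecodeOptions() {
  return {
    premultiplyAlpha: 'none',      // keep the alpha channel unpremultiplied
    colorSpaceConversion: 'none',  // hand back the decoded pixels as-is
  };
}
// In a browser:
//   const bitmap = await createImageBitmap(blob, rawDecodeOptions());
```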
Re: [whatwg] Bicubic filtering on context.drawImage
As I mentioned to Ryosuke off-list, I think the interpolateEndpointsCleanly attribute is a (relatively) simple solution to the problem I have with the current spec, and it doesn't overcomplicate things or make it hard to improve filtering in the future. It's also trivial to feature-detect, which means I can use it when available and fall back to a temporary canvas otherwise. I think providing this option would also make it easier to solve situations where applications rely on the getImageData output after rendering a scaled bitmap. I'd probably call it something (to me) clearer about semantics, though, like 'sampleInsideRectangle'.

On Wed, Mar 26, 2014 at 9:22 PM, Rik Cabanier caban...@gmail.com wrote:

On Wed, Mar 26, 2014 at 8:59 PM, Ryosuke Niwa rn...@apple.com wrote:

On Mar 24, 2014, at 8:25 AM, Justin Novosad ju...@google.com wrote:

On Sat, Mar 22, 2014 at 1:47 AM, K. Gadd k...@luminance.org wrote: A list of resampling methods defined by the spec would be a great overengineered (not in a bad way) solution, but I think you really only need to worry about breaking existing apps - so providing an escape valve to demand bilinear (this is pretty straightforward, everything can do bilinear) instead of the 'best' filtering being offered is probably enough for future-proofing. It might be better to default to bilinear and instead require canvas users to opt into better filtering, in which case a list of available filters would probably be preferred, since that lets the developer do feature detection.

I think we missed an opportunity to make filtering future-proof when it got spec'ed as a boolean. Should have been an enum IMHO :-( Anyways, if we add another image smoothing attribute to select the algorithm, let's at least make that one an enum. I'm not sure the spec should impose specific filter implementations; perhaps only bi-linear absolutely needs to be supported, and all other modes can have fallbacks. For example.
We could have an attribute named imageSmoothingQuality. Possible values could be 'best' and 'fast'. Perhaps 'fast' would mean bi-linear. Not sure which mode should be the default.

We could also have an interpolateEndpointsCleanly flag that forces bilinear or an equivalent algorithm that ensures endpoints do not get affected by inner contents.

Is that to clamp the sampling to the source rect? http://jsfiddle.net/6vh5q/9/ shows that Safari samples when smoothing is turned off, which is a bit strange.

In general, it's better to define semantics-based flags and options so that UAs could optimize them in the future. Mandating a particular scaling algorithm in the spec would limit such optimizations in the future, e.g. there could be hardware that natively supports Lanczos sampling but not bicubic sampling. If it were an enum/string, an author could set the desired sampling method, and if the UA doesn't support it, the attribute would not change.
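As it happens, an enum along these lines later shipped as imageSmoothingQuality with values 'low', 'medium', and 'high'. A hedged sketch of the feature-detection pattern the "attribute would not change" semantics enable:

```javascript
// Feature-detect the imageSmoothingQuality attribute before relying on it.
// On engines without it the assignment would be silently ignored anyway,
// but explicit detection lets the caller choose a fallback strategy
// (e.g. a manual multi-pass downscale).
function requestBestSmoothing(ctx) {
  ctx.imageSmoothingEnabled = true;
  if ('imageSmoothingQuality' in ctx) {
    ctx.imageSmoothingQuality = 'high';
    return true;   // UA exposes the quality enum
  }
  return false;    // caller should fall back
}
```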
Re: [whatwg] effect of smoothing on drawImage (was: Bicubic filtering on context.drawImage)
On Windows with smoothing disabled I get it in every browser except Chrome. Maybe this is due to Direct2D, and for some reason it antialiases the edges of the bitmaps? It's nice to hear that it doesn't happen on other configurations. It's important to test this when drawing with transforms active because you may not see the full set of artifacts without them. (The application where I first observed this is rotating/scaling these bitmaps with smoothing disabled.)

Copying to a temporary canvas is an interesting idea; is it possible for typical browser implementations to optimize this, or does it forcibly degrade things to a pair of individual draw calls (with full state changes and 6-vertex buffers) for every bitmap rendered? I don't really have any problems with the behavior when smoothing is enabled; sorry if this was unclear. -kg

On Sat, Mar 22, 2014 at 9:09 PM, Rik Cabanier caban...@gmail.com wrote:

On Fri, Mar 21, 2014 at 10:47 PM, K. Gadd k...@luminance.org wrote: Hi, the attached screenshots and test case in https://bugzilla.mozilla.org/show_bug.cgi?id=782054 demonstrate how the issue affects 2D games that perform scaling/rotation of bitmaps. There are other scenarios I probably haven't considered as well. As far as I can tell, the mechanism used to render these quads is rendering quads that are slightly too large (probably for coverage purposes, or to handle subpixel coordinates?), which results in effectively drawing a rectangle larger than the input rectangle, so you sample a bit outside of it and get noise when texture atlases are in use. Interestingly, I raised this on the list previously and it was pointed out that Chrome's previous ('correct' for that test case) behavior was actually incorrect, so it was changed. If I remember correctly there are good reasons for this behavior when bilinear filtering is enabled, but it's quite unexpected to basically get 'antialiasing' on the edges of your bitmaps when filtering is explicitly disabled.
Getting opted into a different filter than the filter you expect could probably be similarly problematic, but I don't know of any direct examples other than the gradient fill one.

Ah, I remember looking at your test case. I made a simpler version that shows the issue: http://codepen.io/adobe/pen/jIzbv According to the spec, there should be a faint line [1]:

"If the original image data is a bitmap image, the value painted at a point in the destination rectangle is computed by filtering the original image data. The user agent may use any filtering algorithm (for example bilinear interpolation or nearest-neighbor). When the filtering algorithm requires a pixel value from outside the original image data, it must instead use the value from the nearest edge pixel. (That is, the filter uses 'clamp-to-edge' behavior.) When the filtering algorithm requires a pixel value from outside the source rectangle but inside the original image data, then the value from the original image data must be used."

You were told correctly that the Chrome behavior is incorrect. When doing smoothing, Chrome is not looking outside the bounds of the source image, so you don't get the faint line. This is also an issue with the cairo and Core Graphics backends of Firefox. Safari and IE seem to work correctly. I will log bugs against Chrome and Firefox so we can get interoperable behavior here. I was not able to reproduce the issue when smoothing was disabled. If you want smoothing but not the lines, you can do a drawImage to an intermediate canvas with the same resolution as the source canvas. I verified that this works.

1: http://www.whatwg.org/specs/web-apps/current-work/multipage/the-canvas-element.html#dom-context-2d-drawimage
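Rik's intermediate-canvas workaround can be sketched like this. Hedged: createCanvas is an assumed helper standing in for document.createElement('canvas') plus sizing; the function name is invented for illustration.

```javascript
// Copy the source sub-rectangle into its own canvas first, so the filter's
// clamp-to-edge behavior happens at the sub-image's boundary instead of
// pulling in neighboring atlas pixels. createCanvas(w, h) is an assumed
// helper returning a sized canvas.
function drawImageIsolated(ctx, atlas, sx, sy, sw, sh, dx, dy, dw, dh, createCanvas) {
  const tmp = createCanvas(sw, sh);
  // 1:1 copy of just the region we want; no scaling, so no filtering here.
  tmp.getContext('2d').drawImage(atlas, sx, sy, sw, sh, 0, 0, sw, sh);
  // The scaled draw now has nothing outside the region to sample from.
  ctx.drawImage(tmp, 0, 0, sw, sh, dx, dy, dw, dh);
}
```

The cost is the extra copy per draw, which is exactly the overhead being questioned earlier in this message.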
Re: [whatwg] Bicubic filtering on context.drawImage
Hi, the attached screenshots and test case in https://bugzilla.mozilla.org/show_bug.cgi?id=782054 demonstrate how the issue affects 2D games that perform scaling/rotation of bitmaps. There are other scenarios I probably haven't considered as well. As far as I can tell, the mechanism used to render these quads is rendering quads that are slightly too large (probably for coverage purposes, or to handle subpixel coordinates?), which results in effectively drawing a rectangle larger than the input rectangle, so you sample a bit outside of it and get noise when texture atlases are in use.

Interestingly, I raised this on the list previously and it was pointed out that Chrome's previous ('correct' for that test case) behavior was actually incorrect, so it was changed. If I remember correctly there are good reasons for this behavior when bilinear filtering is enabled, but it's quite unexpected to basically get 'antialiasing' on the edges of your bitmaps when filtering is explicitly disabled. Getting opted into a different filter than the filter you expect could probably be similarly problematic, but I don't know of any direct examples other than the gradient fill one.

A list of resampling methods defined by the spec would be a great overengineered (not in a bad way) solution, but I think you really only need to worry about breaking existing apps - so providing an escape valve to demand bilinear (this is pretty straightforward, everything can do bilinear) instead of the 'best' filtering being offered is probably enough for future-proofing. It might be better to default to bilinear and instead require canvas users to opt into better filtering, in which case a list of available filters would probably be preferred, since that lets the developer do feature detection. -kg

On Fri, Mar 21, 2014 at 9:38 PM, Rik Cabanier caban...@gmail.com wrote: Hi Katelyn, would this be solved by creating a list of resampling methods that are clearly defined in the spec? Do you have a list in mind?
On Sat, Mar 15, 2014 at 4:14 AM, K. Gadd k...@luminance.org wrote: In game scenarios it is sometimes necessary to have explicit control over the filtering mechanism used, too. My HTML5 ports of old games all have severe rendering issues in every modern browser because of changes they made to canvas semantics - using filtering when not requested by the game, sampling outside of texture rectangles as a result of filtering

Can you give an example of when that sampling happens?

, etc - imageSmoothingEnabled doesn't go far enough here, and I am sure there are applications that would break if bilinear was suddenly replaced with bicubic, or bicubic was replaced with Lanczos, or whatever. This matters since some applications may be using getImageData to sample the result of a scaled drawImage, and changing the scaling algorithm can change the data they get. One example I can think of is that many games bilinear-scale a tiny (2-16 pixel wide) image to get a large, detailed gradient (since bilinear cleanly interpolates the endpoints). If you swap to another algorithm the gradient may end up no longer being linear, and the results would change dramatically.

On Fri, Mar 14, 2014 at 1:45 PM, Simon Sarris sar...@acm.org wrote:

On Fri, Mar 14, 2014 at 2:40 PM, Justin Novosad ju...@google.com wrote: Yes, and if we fixed it to make it prettier, people would complain about a performance regression. It is impossible to make everyone happy right now. Would be nice to have some kind of speed versus quality hint.

As a canvas/web author (not vendor) I agree with Justin. Quality is very important for some canvas apps (image viewers/editors), performance is very important for others (games). Canvas fills a lot of roles, and leaving a decision like that up to browsers forces them to pick one side of a utility dichotomy. I don't think it's a good thing to leave debatable choices up to browser vendors. It ought to be something solved at the spec level.
Either that or end users/programmers need to get really lucky and hope all the browsers pick a similar method, because the alternative is a (admittedly soft) version of This site/webapp best viewed in Netscape Navigator. Simon Sarris
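To make the gradient example above concrete, here is a minimal sketch (not from the original thread) of the 1D core of bilinear filtering: upscaling a tiny row of samples with linear interpolation. A 2-pixel source produces an exactly linear ramp, which is the property games depend on; a bicubic or Lanczos kernel would reshape this curve, changing getImageData results. The function name is illustrative.

```javascript
// Upscale a one-pixel-row "image" with linear interpolation (the 1D
// equivalent of bilinear filtering between two texels).
function linearResampleRow(src, dstWidth) {
  const dst = new Array(dstWidth);
  for (let i = 0; i < dstWidth; i++) {
    // "Align corners" mapping: the first and last destination pixels
    // land exactly on the first and last source texels.
    const x = (i * (src.length - 1)) / (dstWidth - 1);
    const x0 = Math.floor(x);
    const x1 = Math.min(src.length - 1, x0 + 1);
    const t = x - x0;
    dst[i] = src[x0] * (1 - t) + src[x1] * t;
  }
  return dst;
}

// A 2-pixel source scaled to 8 pixels yields a strictly linear ramp
// with equal steps between neighboring pixels.
const ramp = linearResampleRow([0, 255], 8);
```

Because the interpolation weights are linear in the destination coordinate, every consecutive difference in `ramp` is identical, which is exactly what breaks if the browser silently switches resampling algorithms.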
[whatwg] IE10 inconsistency with Blob / createObjectURL
Apologies if this has come up on the list before: IE10 appears to have shipped implementations of the Blob constructor along with createObjectURL. While they work, there appears to be a significant deviation from the spec behavior (at the very least, Firefox and Chrome implement these APIs as I'd expect). When a Blob gets GCed in IE10, it appears to intentionally destroy the object URL associated with it, instead of waiting for you to revoke the object URL. When this happens it spits out a vague console message: HTML7007: One or more blob URLs were revoked by closing the blob for which they were created. These URLs will no longer resolve as the data backing the URL has been freed. This is confusing because the API doesn't even have a way to 'close' blobs; all that is necessary to trigger this is to let a Blob get collected by going out of scope. From reading the spec I don't see any language that suggests this behavior is allowed or expected. I can try to work around it by retaining all the blobs but it seems unnecessary... -kg
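The retain-all-blobs workaround mentioned above can be sketched as a small registry that holds a strong reference to each Blob for as long as its object URL is alive, so the Blob can never be garbage collected out from under the URL. The names (`retainedBlobs`, `makeRetainedUrl`, `releaseUrl`) are illustrative, not from any spec.

```javascript
// Hypothetical workaround: pair every object URL with a strong
// reference to its Blob so GC cannot reclaim the Blob early.
const retainedBlobs = new Map(); // url -> Blob

function makeRetainedUrl(blob) {
  const url = URL.createObjectURL(blob);
  retainedBlobs.set(url, blob); // keeps the Blob reachable
  return url;
}

function releaseUrl(url) {
  // Explicit revocation is the only point at which the Blob may die.
  URL.revokeObjectURL(url);
  retainedBlobs.delete(url);
}
```

The cost, as noted in the email, is that you pay for every Blob's memory until you remember to call `releaseUrl`, which is exactly the bookkeeping the spec's lifetime rules were supposed to make unnecessary.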
Re: [whatwg] Canvas 2D memory management
Some of my applications would definitely benefit from this as well. A port of one client's game managed to hit around 1GB of backing store/bitmap data combined when preloading all their image assets using img. Even though browsers then discard the bitmap data, it made it difficult to get things running without killing a tab due to hitting a memory limit temporarily. (The assets were not all in use at once, so the actual usage while playing is fine.) Having explicit control over whether bitmaps are resident in memory would be great for this use case since I can preload the actual file over the network, then do the async forced decode by creating an ImageBitmap from a Blob, and discard it when the pixel data is no longer needed (the game already has this information since it uses the C# IDisposable pattern, where resources are disposed after use).

On Fri, Jul 19, 2013 at 12:34 PM, Justin Novosad ju...@google.com wrote: On Fri, Jul 19, 2013 at 7:09 AM, Ashley Gullen ash...@scirra.com wrote: FWIW, imageBitmap.discard() wouldn't be unprecedented - WebGL allows you to explicitly release memory with deleteTexture() rather than letting the GC collect unused textures. A related issue we have now is with canvas backing stores. It is common for web apps to create temporary canvases to do some offscreen rendering. When the temporary canvas goes out of scope, it continues to consume RAM or GPU memory until it is garbage collected. Occasionally this results in memory-leak-like symptoms. The usual workaround is to use a single persistent global canvas for offscreen work instead of temporary ones (yuck). This could be handled in a cleaner way if there were a .discard() method on canvas elements too. Ashley

On 18 July 2013 17:50, Ian Hickson i...@hixie.ch wrote: On Wed, 9 Jan 2013, Ashley Gullen wrote: Some developers are starting to design large scale games using our HTML5 game engine, and we're finding we're running into memory management issues.
Consider a device with 50mb of texture memory available. A game might contain 100mb of texture assets, but only use a maximum of 30mb of them at a time (e.g. if there are three levels each using 30mb of different assets, and a menu that uses 10mb of assets). This game ought to fit in memory at all times, but if a user agent is not smart about how image loading is handled, it could run out of memory. [...] Some ideas: 1) add new functions to the canvas 2D context, such as: ctx.load(image): cache an image in memory so it can be immediately drawn when drawImage() is first used; ctx.unload(image): release the image from memory.

The Web API tries to use garbage collection for this; the idea being that you load the images you need when you need them, then discard them when you're done, and the memory gets reclaimed when possible. We could introduce a mechanism to flush ImageBitmap objects more forcibly, e.g. imageBitmap.discard(). This would be a pretty new thing, though. Are there any browser vendors who have opinions about this? We should probably wait to see if people are able to use ImageBitmap with garbage collection first. Note, though, that ImageBitmap doesn't really add anything you couldn't do with img before, in the non-Worker case. That is, you could just create img elements then lose references to them when you wanted them GC'ed; if that isn't working today, I don't see why it would start working with ImageBitmap. -- Ian Hickson, http://ln.hixie.ch/ -- "Things that are impossible just take longer."
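The IDisposable-style pattern referenced earlier in the thread can be sketched generically, independent of any particular canvas API. The `release` callback stands in for a hypothetical `imageBitmap.discard()` or `canvas.discard()` call; nothing here is a real browser API.

```javascript
// Sketch of C#-style deterministic disposal: the owner decides when
// the underlying memory is released, instead of waiting for GC.
class DisposableResource {
  constructor(resource, release) {
    this.resource = resource;
    this.release = release; // e.g. () => bitmap.discard() (hypothetical)
    this.disposed = false;
  }
  dispose() {
    if (this.disposed) return; // idempotent, like IDisposable.Dispose()
    this.disposed = true;
    this.release(this.resource);
  }
}
```

A game engine would wrap each decoded asset in one of these and call `dispose()` when a level's assets are unloaded, keeping peak memory bounded even though the JS object itself is collected later.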
[whatwg] Fwd: Why can't ImageBitmap objects have width and height attributes? (and other e-mails)
Re-sending this because the listserv silently discarded it (You guys should fix it to actually send the notice...) -- Forwarded message -- From: K. Gadd k...@luminance.org Date: Wed, Jul 17, 2013 at 6:46 PM Subject: Re: [whatwg] Why can't ImageBitmap objects have width and height attributes? (and other e-mails) To: Ian Hickson i...@hixie.ch Cc: wha...@whatwg.org Responses inline On Wed, Jul 17, 2013 at 5:17 PM, Ian Hickson i...@hixie.ch wrote: On Tue, 18 Dec 2012, Kevin Gadd wrote: Is it possible to expose the width/height of an ImageBitmap, or even expose all the rectangle coordinates? Exposing width/height would be nice for parity with Image and Canvas when writing functions that accept any drawable image source. Thanks for the prompt action here, this looks like a straightforward solution. I've added height, width, and pixel density. Not sure what you meant by the other coordinates. By 'the other coordinates' I mean that if you constructed it from a subrectangle of another image (via the sx, sy, sw, sh parameters) it would be good to expose *all* those constructor arguments. This allows you to more easily maintain a cache of ImageBitmaps without additional bookkeeping data. On Tue, 18 Dec 2012, Kevin Gadd wrote: Sorry, upon reading over the ImageBitmap part of the spec again I'm confused: Why is constructing an ImageBitmap asynchronous? Because it might involve network I/O. I thought any decoding isn't supposed to happen until drawImage, so I don't really understand why this operation involves a callback and a delay. Making ImageBitmap creation async means that you *cannot* use this as a replacement for drawImage source rectangles unless you know all possible source rectangles in advance. This is not possible for many, many use cases (scrolling through a bitmap would be one trivial example). Yeah, it's not supposed to be a replacement for drawImage(). 
This is why I was confused then, since I was told on this list that ImageBitmap was a solution for the problem of drawing subrectangles of images via drawImage (since the current specified behavior makes it impossible to precisely draw a subrectangle). :( Is it async because it supports using Video and Blob as the source? Mainly Blob, but maybe other things in the future. I really love the feature set (being able to pass ImageData in is going to be a huge boon - no more temporary canvases just to create images from pixel data!) but if it's async-only I don't know how useful it will be for the issues that led me to starting this discussion thread in the first place. Can you elaborate on the specific use cases you have in mind? The use case is being able to draw lots of different subrectangles of lots of different images in a single frame. On Tue, 18 Dec 2012, Kevin Gadd wrote: How do you wait synchronously for a callback from inside requestAnimationFrame? You return and wait for another frame. Furthermore, wouldn't that mean returning once to the event loop for each individual drawImage call you wish to make using a source rectangle - so for a single scene containing lots of dynamic source rectangles you could end up having to wait for dozens of callbacks. I don't understand. Why can't you prepare them ahead of time all together? (As in the example in the spec, for instance.) You can, it's just significantly more complicated. It's not something you can easily expose in a user-consumable library wrapper either, since it literally alters the execution model for your entire rendering frame and introduces a pause for every group of images that need the use of temporary ImageBitmap instances. I'm compiling classic 2D games to JavaScript to run in the browser, so I literally call drawImage hundreds or thousands of times per frame, most of the calls having a unique source rectangle. 
I will have to potentially construct thousands of ImageBitmaps and wait for all those callbacks. A cache will reduce the number of constructions I have to do per frame, but then I have to somehow balance the risk of blowing through the entirety of the end user's memory (a very likely thing on mobile) or create a very aggressive, manually flushed cache that may not even have room for all the rectangles used in a given frame. Given that an ImageBitmap creation operation may not be instantaneous this really makes me worry that the performance consequences of creating an ImageBitmap will make it unusable for this scenario. (I do agree that if you're building a game from scratch for HTML5 Canvas based on the latest rev of the API, you can probably design for this by having all your rectangles known in advance - but there are specific rendering primitives that rely on dynamic rectangles, like for example filling a progress bar with a texture, tiling a texture within a window, or scrolling a larger texture within a region. I've encountered all these in real games
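The bounded cache described above (reuse ImageBitmaps per source rectangle, but cap memory so mobile devices survive) can be sketched as a small LRU keyed on the rectangle. The `decode` parameter is a stand-in for the async `createImageBitmap(image, sx, sy, sw, sh)` call, injected so the cache itself stays API-agnostic; all names here are illustrative.

```javascript
// LRU cache of decoded subrectangles, keyed by source rect.
class SubrectCache {
  constructor(maxEntries, decode) {
    this.maxEntries = maxEntries;
    this.decode = decode; // async (image, sx, sy, sw, sh) => bitmap
    this.entries = new Map(); // Map insertion order doubles as LRU order
  }
  async get(image, sx, sy, sw, sh) {
    const key = `${sx},${sy},${sw},${sh}`;
    if (this.entries.has(key)) {
      const hit = this.entries.get(key);
      this.entries.delete(key); // move to most-recently-used position
      this.entries.set(key, hit);
      return hit;
    }
    const bitmap = await this.decode(image, sx, sy, sw, sh);
    this.entries.set(key, bitmap);
    if (this.entries.size > this.maxEntries) {
      // Evict the least recently used entry; a real implementation
      // would also discard/close the evicted bitmap's memory here.
      this.entries.delete(this.entries.keys().next().value);
    }
    return bitmap;
  }
}
```

This addresses the repeated-rectangle case, but as the email points out, it does nothing for frames whose working set of unique rectangles simply exceeds the cap.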
Re: [whatwg] Adding features needed for WebGL to ImageBitmap
To respond on the topic of WebGL/ImageBitmap integration - and in particular some of the features requested earlier in the thread: apologies if I missed a post where this stuff was already addressed directly; I couldn't follow this thread easily because of how much context was stripped out of replies.

Having control over when or where colorspace conversion occurs would be tremendously valuable. Right now the only place where you have control over this is in WebGL, and when it comes to canvas each browser seems to implement it differently. This is already a problem for people trying to do image processing in JavaScript; an end-user of my compiler ran into this by writing a simple app that read pixel data out of PNGs and then discovered that every browser had its own unique interpretation of what a simple image's data should look like when using getImageData: https://bugzilla.mozilla.org/show_bug.cgi?id=867594 Ultimately the core issue here is that without control over colorspace conversion, any sort of deterministic image processing in HTML5 is off the table, and you have to write your own image decoders, encoders, and manipulation routines in JavaScript using raw typed arrays. Maybe that's how it has to be, but it would be cool to at least support basic variations of these use cases in Canvas since getImageData/putImageData already exist and are fairly well-specified (other than this problem, and some nits around source rectangles and alpha transparency).

Out of the features suggested previously in the thread, I would immediately be able to make use of control over colorspace conversion and an ability to opt into premultiplied alpha.
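For context on what "colorspace conversion" involves here, a minimal sketch of the standard sRGB transfer functions (the piecewise gamma curve defined by the sRGB spec, operating on components normalized to [0, 1]). Browsers that push image data through a display profile apply curves like this, or the display's own, which is one reason naive getImageData readback differs across machines.

```javascript
// Standard sRGB encode: linear-light intensity -> encoded value.
function linearToSrgb(linear) {
  return linear <= 0.0031308
    ? 12.92 * linear
    : 1.055 * Math.pow(linear, 1 / 2.4) - 0.055;
}

// Standard sRGB decode: encoded value -> linear-light intensity.
function srgbToLinear(encoded) {
  return encoded <= 0.04045
    ? encoded / 12.92
    : Math.pow((encoded + 0.055) / 1.055, 2.4);
}
```

In double precision the round trip is lossless, but once values are quantized to 8-bit bytes at each step (as canvas backing stores do), repeated conversion through mismatched curves is where the cross-browser drift comes from.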
Not getting premultiplied alpha, as is the case in virtually every canvas implementation I've tried, has visible negative consequences for image quality and also reduces the performance of some use cases where bitmap manipulation needs to happen, due to the fact that premultiplied alpha is the 'preferred' form for certain types of rendering and the math works out better. I think the upsides to getting premultiplication are the same here as they are in WebGL: faster uploads/downloads, better results, etc. I understand the rationale behind Gregg's suggestion for flipY, but ultimately don't know if that one makes any sense in an HTML5 context. It basically only exists because of the annoying disagreement between APIs like OpenGL and other APIs like HTML5 Canvas or Direct3D, specifically about which direction the Y axis goes. Normally one would assume that you can correct this by simply inverting heights/y coordinates in the correct places, but when you're rendering to offscreen surfaces, the confusion over the Y axis ends up causing you to have to do a bunch of weird things to coordinates and sampling in order to get correct results, because your offscreen surfaces are *actually* upside down. It's gross.

To clearly state what would make ImageBitmap useful for the use cases I encounter and my end-users encounter: ImageBitmap should be a canonical representation of a 2D bitmap, with a known color space, known pixel format, known alpha representation (premultiplied/not premultiplied), and ready for immediate rendering or pixel data access. It's okay if it's immutable, and it's okay if constructing one from an img or a Blob takes time, as long as once I have an ImageBitmap I can use it to render and use it to extract pixel data without user configuration/hardware producing unpredictable results.
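The image-quality cost of alpha conversion mentioned above can be shown with plain byte math: converting 8-bit RGBA to premultiplied form and back is lossy at low alpha, because many distinct color values collapse onto the same premultiplied byte. This is an illustrative sketch, not any browser's actual code path.

```javascript
// Convert an [r, g, b, a] byte pixel to premultiplied-alpha form.
function premultiply([r, g, b, a]) {
  const s = a / 255;
  return [Math.round(r * s), Math.round(g * s), Math.round(b * s), a];
}

// Invert the conversion (undefined at alpha 0, so map that to black).
function unpremultiply([r, g, b, a]) {
  if (a === 0) return [0, 0, 0, 0];
  const s = 255 / a;
  return [Math.round(r * s), Math.round(g * s), Math.round(b * s), a];
}

// At alpha 16 the round trip cannot recover the original color: the
// premultiplied channel only has ~16 distinct values to work with.
const original = [200, 100, 50, 16];
const roundTripped = unpremultiply(premultiply(original));
```

At full alpha the round trip is exact; the degradation only bites on translucent pixels, which is why it shows up as fringing and banding in partially transparent sprite edges.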
Colorspace conversion would allow me to address outstanding bugs that currently require my end users to manually strip color profiles and gamma from their image files, and premultiplied alpha would dramatically improve the performance of some test cases and shipped games out there based on my compiler. (Naturally, this all requires browser vendors to implement this stuff, so I understand that these gains would probably not manifest for years.) -kg