Re: [whatwg] Persistent state for homescreen web apps (without reloading each time)
On Mon, Jun 8, 2015 at 8:10 PM, Zac Spitzer zac.spit...@gmail.com wrote: Is it within the scope of the spec to specify whether home screen web apps should retain their loaded state when switching from foreground to background and back to foreground again? Chrome behaves exactly as expected, however, iOS reloads the web app each time http://zacster.blogspot.com.au/2015/04/broken-web-apps-launched-from-ios-home.html If you want to more reliably store state, for home screen app bookmarks as well as for regular web pages, take a look at the history API. That gives the browser a clear, self-contained block of data to store and restore, unlike the state of the page itself which isn't always possible to restore cleanly. The History API also means that state can be cleanly restored after browser forward/back, even across session restarts. I seem to recall this works on iOS for homescreen apps, but it's been a while since I've tested it. It'll definitely store the URL (so you can encode state in the hash, as usual), but you should be able to store data using the data argument as well, for more complex persisted state. -- Glenn Maynard
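The hash-encoding approach described above can be sketched briefly. This is a minimal illustration, not anything specced; the helper names are mine, and it assumes the app state fits in a few key/value pairs:

```javascript
// Encode a small state object into a URL hash string, and back.
// URLSearchParams handles the escaping.
function encodeStateToHash(state) {
  return '#' + new URLSearchParams(state).toString();
}

function decodeStateFromHash(hash) {
  return Object.fromEntries(new URLSearchParams(hash.replace(/^#/, '')));
}

// In the browser you would pair this with the History API, e.g.:
//   history.replaceState(state, '', encodeStateToHash(state));
// so the state survives the reload of a homescreen web app.
```

For more complex state, the `data` argument to `pushState`/`replaceState` avoids squeezing everything into the URL, as the reply notes.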
Re: [whatwg] Persistent and temporary storage
On Fri, Mar 13, 2015 at 3:13 PM, Silvia Pfeiffer silviapfeiff...@gmail.com wrote: On 14 Mar 2015 05:49, Tab Atkins Jr. jackalm...@gmail.com wrote: Users install a relatively small number of apps, and the uninstall flow (which deletes their storage) is also trivial. Users visit a relatively large number of web-pages (and even more distinct origins, due to iframes and ads), and we don't have any good notion of uninstall yet on the web; the existing flows for deleting storage are terrible. First you need a notion of install. Not having to install web pages is a feature, not a bug. In fact, it's one of the defining features of the platform. -- Glenn Maynard
Re: [whatwg] Unicode - ASCII copy/paste fallback
On Sun, Feb 15, 2015 at 4:40 AM, David Sheets kosmo...@gmail.com wrote: If you're reading documentation which includes types, it's nice to see implication arrows but copy valid syntax. This is rather vague, but it sounds along the lines of using → for pointer dereferencing in a C++ document, which sounds pretty strange and unconventional. There's so much rampant abuse of the user's clipboard (most notably people inserting ads into the user's clipboard when he tries to copy text) that whenever a feature allows a page to cause copies to grab text other than what the user specified, we should be looking carefully at the use cases. If you have nothing more useful to discuss beyond uninformed, opinionated naysaying, I'll be leaving this thread lie. You should make an effort to remain civil when people don't immediately agree with you. -- Glenn Maynard
Re: [whatwg] Unicode - ASCII copy/paste fallback
On Sat, Feb 14, 2015 at 12:34 PM, David Sheets kosmo...@gmail.com wrote: I am writing a documentation generation tool for a programming language with right arrows represented as -> but would like to render them as →. Programmers are used to writing in ASCII and reading typeset mathematics. If I present documentation to them via a purpose-built document browser, I should give them the option (at the generation/styling stage) of making those documents as pleasing as possible. Programmers a decade or two ago, maybe, but not today. As a programmer, if I see → on a page, select it and copy it, I expect to copy →, just as I selected. This sounds like something browsers should actively discourage. -- Glenn Maynard
Re: [whatwg] Unicode - ASCII copy/paste fallback
On Fri, Feb 13, 2015 at 5:45 AM, David Sheets kosmo...@gmail.com wrote: Hello, I have a page with a <span class="rarr"><span>-&gt;</span></span> b and style .rarr span { overflow: hidden; height: 0; width: 0; display: inline-block; } .rarr::after { content: "→"; } (That's RIGHTWARDS ARROW, U+2192.) In Firefox 36, this copies and pastes like a -> b which is the desired behavior. In Chrome 40, this copies and pastes like a b. Is my desired behavior (to show unicode but copy an ASCII representation) generally possible? Are there specs somewhere about copy/paste behavior? I looked in https://html.spec.whatwg.org/ but found nothing relevant. Copying ASCII isn't desirable. It should copy the Unicode string a → b. After all, that's what gets copied if you had done <span>a → b</span> in the first place. (Chrome's issue isn't related to Unicode. It just doesn't know how to select text that's inside CSS content, so it isn't included in the copy.) -- Glenn Maynard
Re: [whatwg] Unicode - ASCII copy/paste fallback
On Fri, Feb 13, 2015 at 9:02 AM, Glenn Maynard gl...@zewt.org wrote: Copying ASCII isn't desirable. It should copy the Unicode string a → b. After all, that's what gets copied if you had done <span>a → b</span> in the first place. (Oh, I missed the obvious--the -> from Firefox is coming from the HTML, of course.) I guess what you're after is being able to have separate text for display vs. copy. I'm sure you don't actually want to use a hacky custom font. What's the actual use case? In general I think browsers should always copy just what the user selected, and not let pages cause something other than that to be copied, since things like that are generally abused (eg. inserting linkback ads to copied text). -- Glenn Maynard
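For the record, the display-vs-copy split the original poster wants is achievable with a `copy` event handler, though as the replies argue, it deserves caution. A sketch under the assumption that the only substitution needed is the arrow (the helper name is mine):

```javascript
// Replace typeset arrows with their ASCII source form.
function toAsciiFallback(text) {
  return text.replace(/\u2192/g, '->');
}

// Browser wiring (not run here): intercept copy and substitute.
// document.addEventListener('copy', (e) => {
//   const sel = document.getSelection().toString();
//   e.clipboardData.setData('text/plain', toAsciiFallback(sel));
//   e.preventDefault();
// });
```

This is exactly the class of clipboard rewriting the thread warns is ripe for abuse, so the substitution should stay strictly lossless.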
Re: [whatwg] PSA: Chrome ignoring autocomplete=off for Autofill data
On Thu, Nov 13, 2014 at 7:17 PM, Roger Hågensen resca...@emsai.net wrote: On 2014-11-13 20:20, Evan Stade wrote: Currently this new behavior is available behind a flag. We will soon be inverting the flag, so you have to opt into respecting autocomplete=off. I don't like that browsers ignore HTML functionality hints like that. It's not ignoring hints, this is just removing a bad feature. One of the most common irritants of day to day browsing is pages disabling form autocomplete and password management, and making me enter everything by hand. It's working extremely poorly in the real world. I have one real live use case that would be affected by this. http://player.gridstream.org/request/ Unfortunately, even if a couple pages have a legitimate use for a feature, when countless thousands of pages abuse it, the feature needs to go. The damage to people's day-to-day experience outweighs any benefits by orders of magnitude. This radio song request uses autocomplete=off for the music request because a listener would probably not request the same bunch of songs over and over. (The use case doesn't really matter to me--the abuse is too widespread--but this is wrong. If I request a song today, requesting it again tomorrow or the next day is perfectly natural, especially if my request was never played.) Also, banks generally prefer to have autocomplete=off for credit card numbers, names, addresses etc. for security reasons. And that is now to be ignored? Yes, absolutely. My bank's preference is irrelevant. It's my browser, not my bank's. This is *exactly* the sort of misuse of this feature which makes it need to be removed. Also the reason the name field also has autocomplete=off is simple, if somebody uses a public terminal then not having the name remembered is nice. This is another perfect example of the confused misuse of this feature. 
You don't disable autocompletion because some people are on public terminals--by that logic, every form everywhere would always disable autocomplete. This must be addressed on the terminal itself, in a consistent way, not by every site individually. (Public terminals need to wipe the entire profile when a user leaves, since you also need cache, browser history, cookies, etc.) -- Glenn Maynard
Re: [whatwg] PSA: Chrome ignoring autocomplete=off for Autofill data
(Trimming for time and to avoid exploding the thread. Others can respond to the rest if they like.) On Thu, Nov 13, 2014 at 8:26 PM, Roger Hågensen resca...@emsai.net wrote: Punishing those who do it right because of the stupidity of the many, can't say I'm too thrilled about that. Leaving it in is punishing every user of the Web. This is just one of many well-intentioned features that is failing in practice. No it's inherently correct for the use case as listeners tend to enter things like: Could you play Gun's'Rose? Love you show, more rock please? Where are you guys sending from? (You said would probably not request the same bunch of songs over and over, and now you're replying as if you said something completely different.) Is that what you want them to start doing? If a bank or security site wishes to have input fields without autocomplete they can just use textarea. Are you going to enforce autocomplete=on for textarea now? I'm not worried about that at all. When autocomplete doesn't happen, people blame the browser (most people aren't web authors and don't know this is the web page's fault). When text entry is glitchy because the page used a textarea or other ugly hacks, it's the web page that looks bad. That's its own deterrent. On Thu, Nov 13, 2014 at 8:57 PM, Ben Maurer ben.mau...@gmail.com wrote: If the site sets autocomplete=off could you disable the saving of new suggestions? One of the main use cases for turning off autocomplete is to disable the saving of sensitive or irrelevant information. If the user is filling in an address or cc num it's likely they have the opportunity to save that on other sites. It wouldn't make sense for all sites to autocomplete credit cards, but only 50% to save them. -- Glenn Maynard
Re: [whatwg] Passwords
On Sat, Oct 18, 2014 at 2:50 PM, Anne van Kesteren ann...@annevk.nl wrote: I'd be interested in hearing why sites such as forums have not made the switch yet. If you're hosting passwords it seems downright irresponsible at this point to not use TLS. The most common reasons I've seen are:
- People asking why would this page need encryption?, which is always the wrong question. (The right question is why does this page need to not have encryption?)
- People don't want to jump through the hoops to get a certificate and install it. I still have to search to find the right OpenSSL magic commands, and it still takes fiddling to get TLS enabled on web servers. (It should require editing two or three lines to enable it on Apache, not uncommenting dozens of lines of sample configuration then figuring out how to sync it up to your HTTP configuration. I suspect Apache can do this much more simply, and that the sample configurations that come with installations are just garbage...)
- People don't want to pay for a certificate. (There's StartSSL, but when I tried it, it was so bad that I prefer to pay GoDaddy. That should say a lot given how bad *that* site is...)
- They don't want the additional latency that TLS causes. I assume this is why Amazon puts most of the storefront on HTTP, and only selectively switches to HTTPS. (They've put a lot of design behind making this secure, but most authors can't do that, and it still has a big privacy cost.) This is at least a valid issue.
- Some web services don't support HTTPS. (There's no excuse for this, but saying that doesn't make the problem go away. I don't recall particular examples.)
-- Glenn Maynard
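For reference, the minimal Apache TLS configuration really is only a few directives once the certificate files exist. The paths and hostname below are placeholders, and the exact directive set varies slightly by Apache version:

```apache
<VirtualHost *:443>
    ServerName example.org
    SSLEngine on
    SSLCertificateFile      /etc/ssl/certs/example.org.crt
    SSLCertificateKeyFile   /etc/ssl/private/example.org.key
    SSLCertificateChainFile /etc/ssl/certs/chain.crt
</VirtualHost>
```

The complaint in the email is that distribution-shipped sample configs bury these few lines under dozens of commented-out options.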
Re: [whatwg] getting rid of anonymizing redirects
On Tue, Oct 7, 2014 at 7:28 AM, Peter Lepeska bizzbys...@gmail.com wrote: Hi Chris, Looks like this is already supported: https://html.spec.whatwg.org/multipage/semantics.html#link-type-noreferrer . Just need to educate web developers to use it. People don't use it because it's not supported in most browsers. It's too bad, since link anonymizers are terrible and the lack of this feature is causing them to continue to be used. -- Glenn Maynard
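For concreteness, the mechanism in the spec linked above is just a link type, and there is a separate page-wide `meta` referrer policy (mentioned later in the thread; the `no-referrer` value below is the eventual standardized spelling, not necessarily what browsers accepted in 2014):

```html
<!-- Per-link: suppress the Referer header for this navigation. -->
<a href="https://example.org/" rel="noreferrer">link without a referrer</a>

<!-- Page-wide, separate mechanism: -->
<meta name="referrer" content="no-referrer">
```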
Re: [whatwg] getting rid of anonymizing redirects
I don't have a list, but as far as I know the only browser with complete support is WebKit (and now Blink, I guess), though there are apparently some bugs there. Firefox has had a ticket open for this for about half a decade (which has had some activity recently, but with tickets that old I'm doubtful until it actually gets released...). I don't think IE has any support. I haven't retested any of this recently, so I'd recommend testing for yourself if you need to be sure. I haven't tested meta referer at all and don't know anything about its support. On Tue, Oct 7, 2014 at 9:09 AM, Peter Lepeska bizzbys...@gmail.com wrote: Thanks Glenn. Do you happen to have a list of which browsers support it and which do not? Thanks, Peter -- Glenn Maynard
Re: [whatwg] Proposal: navigator.cores
On Wed, Jul 2, 2014 at 11:31 AM, Rik Cabanier caban...@gmail.com wrote: I thought that those concerns were addressed with the addition of a maximum number of cores? That doesn't address much, if anything. Also, WebKit's implementation also caps the number of cores at eight to mitigate some of the fingerprinting / privacy concerns raised. This is a misunderstanding of what fingerprinting is. It's not about having rare values, like the one user in a thousand with 32 cores. (That matters too, but it's not the main issue.) Fingerprinting is having data that persists for the user at all, such as whether a user has one or two or four cores, which are then combined with as many other data points as possible to create a fingerprint. Limiting the maximum exposed number of cores doesn't affect this. -- Glenn Maynard
Re: [whatwg] Proposal: toDataURL “image/png” compression control
On Mon, Jun 2, 2014 at 12:49 PM, Rik Cabanier caban...@gmail.com wrote: That's implementation cost to you :-) Now we need to convince the other vendors. Do they want it, want more, want it in a different way? Then it needs to be documented. How can authors discover that this is supported? How can it be poly-filled? Polyfill isn't really an issue, since this is just a browser hint. We definitely need a way to feature test option arguments, but we should start another thread for that. This needs a bit more guidance in the spec as far as what different numbers mean. A quality number of 0-1 with JPEG is fairly well-understood--you won't always get the same result, but nobody interprets 1 as spend 90 seconds trying as hard as you possibly can to make the image smaller. There's no common understanding for PNG compression levels, and there's a wide variety of ways you can try harder to compress a PNG, with wildly different space/time tradeoffs. In order of cost:
- Does 0 mean output a PNG as quickly as possible, even if it results in zero compression?
- What number means be quick, but don't turn off compression entirely?
- What number means use a reasonable tradeoff, eg. the default today?
- What number means prefer smaller file sizes, but I'm expecting on the order of 25% extra time cost, not 1500%?
- Does 1 mean spend two minutes if you want, make the image as small as you can? (pngcrush does this, and Photoshop in some versions does this--which is incredibly annoying, by the way.)
If there's no guidance given at all, 0 might mean either of the first two, and 1 might mean either of the last two. My suggestion is an enum, with three values: fast, normal, small, with non-normative spec guidance suggesting that fast means make the compression faster if possible at the cost of file size, but don't go overboard and turn compression off entirely, and small means spend a bit more time if it helps create a smaller file, but don't go overboard and spend 15x as long.
If we want to support the other two, they can be added later (eg. uncompressed and crush). Since this is only a hint, implementations can choose which ones to implement; if the choice isn't known, fall back on default. A normative requirement for all PNG compression is that it should always round-trip the RGBA value for each pixel. That means that--regardless of this option--a UA can use paletted output only if the image color fits in a palette, and it prohibits things like clamping pixels with a zero alpha to #00, which is probably one strategy for improving compression (but if you're compressing non-image data, like helper textures for WebGL, you don't want that). On Mon, Jun 2, 2014 at 1:23 PM, Nils Dagsson Moskopp n...@dieweltistgarnichtso.net wrote: As an author, I do not see why I should ever want to tell a browser losslessly encoding an image any other quality argument different from „maximum speed" or „minimum size" – on a cursory look, anything else would probably not be interoperable. Also, is 0.5 the default value? Image compression is uninteroperable from the start, in the sense that each UA can always come up with different output files. This feature (and the JPEG quality level feature) doesn't make it worse. -- Glenn Maynard
Re: [whatwg] Proposal: toDataURL “image/png” compression control
On Fri, May 30, 2014 at 1:25 PM, Justin Novosad ju...@google.com wrote: I think this proposal falls short of enshrining. The cost of adding this feature is minuscule. I don't think the cost is ever really minuscule. True, you'd never want to use toDataURL with a compression operation that takes many seconds (or even tenths of a second) to complete, and data URLs don't make sense for large images in the first place. It makes sense for toBlob(), though, and having the arguments to toBlob and toDataURL be different seems like gratuitous inconsistency. Yes, toBlob is async, but it can still be polyfilled. (I'm not sure how this replies to what I said--this feature makes more sense for toBlob than toDataURL, but I wouldn't add it to toBlob and not toDataURL.) On Sat, May 31, 2014 at 7:44 AM, Robert O'Callahan rob...@ocallahan.org wrote: On Sat, May 31, 2014 at 3:44 AM, Justin Novosad ju...@google.com wrote: My point is, we need a proper litmus test for the just do it in script argument because, let's be honest, a lot of new features being added to the Web platform could be scripted efficiently, and that does not necessarily make them bad features. Which ones? The ones that are used so frequently that providing a standard API for them benefits everyone, by avoiding the fragmentation of everyone rolling their own. For example, URL parsing and manipulation, and lots of DOM interfaces like element.closest(), element.hidden and element.classList. (Cookies are another one that should be in this category; document.cookie isn't a sane API without a wrapper.) This isn't one of those, though. -- Glenn Maynard
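The polyfill direction mentioned above is straightforward: synchronously produce a data URL, decode it into a Blob, and deliver it on a callback. A sketch of the decoding half; the canvas wiring is commented out since it's browser-only, and the function name is mine:

```javascript
// Convert a base64 data URL into a Blob.
function dataURLToBlob(dataURL) {
  const [meta, b64] = dataURL.split(',');
  const mime = meta.slice(5).replace(';base64', '');  // drop "data:" prefix
  const bin = atob(b64);
  const bytes = new Uint8Array(bin.length);
  for (let i = 0; i < bin.length; i++) bytes[i] = bin.charCodeAt(i);
  return new Blob([bytes], { type: mime });
}

// Polyfill wiring (browser only):
// HTMLCanvasElement.prototype.toBlob ??= function (cb, type, quality) {
//   const blob = dataURLToBlob(this.toDataURL(type, quality));
//   setTimeout(() => cb(blob), 0);  // preserve the async contract
// };
```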
Re: [whatwg] Proposal: toDataURL “image/png” compression control
On Sat, May 31, 2014 at 4:00 PM, Rik Cabanier caban...@gmail.com wrote: roc was asking which NEW feature is being added that can be done in script. He asked which new features have already been added that can be done efficiently in script. Element.closest() was added less than a week ago. But again, image decoding *can't* be done efficiently in script: platform-independent code with performance competitive with native SIMD assembly is a thing of myth. (People have been trying unsuccessfully to do that since day one of MMX, so it's irrelevant until the day it actually happens.) Anyhow, I think I'll stop helping to derail this thread and return to the subject. Noel, if you're still around, I'd suggest fleshing out your suggestion by providing some real-world benchmarks that compare the PNG compression rates against the relative time it takes to compress. If spending 10x the compression time gains you a 50% improvement in compression, that's a lot more compelling than if it only gains you 10%. I don't know what the numbers are myself. -- Glenn Maynard
Re: [whatwg] Proposal: toDataURL “image/png” compression control
On Fri, May 30, 2014 at 12:46 PM, Anne van Kesteren ann...@annevk.nl wrote: On Fri, May 30, 2014 at 5:44 PM, Justin Novosad ju...@google.com wrote: The just do it in script argument saddens me quite a bit. :-( Agreed, however for this particular case, I'm not sure it makes much sense to further enshrine a synchronous API for serializing an image. True, you'd never want to use toDataURL with a compression operation that takes many seconds (or even tenths of a second) to complete, and data URLs don't make sense for large images in the first place. It makes sense for toBlob(), though, and having the arguments to toBlob and toDataURL be different seems like gratuitous inconsistency. -- Glenn Maynard
Re: [whatwg] Proposal: toDataURL “image/png” compression control
On Thu, May 29, 2014 at 1:32 AM, Rik Cabanier caban...@gmail.com wrote: This has been requested before. ie http://lists.whatwg.org/pipermail/help-whatwg.org/2013-May/001209.html The conclusion was that this can be accomplished using JavaScript. There are JS libraries that can compress images and performance is very good these days. This is a nonsensical conclusion. People shouldn't have to pull in a PNG compressor and deflate code when a PNG compression API already exists on the platform. This is an argument against adding toDataURL at all, which is a decision that's already been made. -- Glenn Maynard
[whatwg] Proposal: toDataURL “image/png” compression control
On Thu, May 29, 2014 at 10:29 AM, Rik Cabanier caban...@gmail.com wrote: If performance is good, why would this not be acceptable? I don't know why we'd provide an API to compress PNGs, then tell people to use a script reimplementation if they want to set a common option. As far as performance, I'm not sure about PNG, but there's no way that a JS compressor would compete with native for JPEG. Assembly (MMX, SSE) optimization gives a significant performance improvement over C, so I doubt JS will ever be in the running. ( http://www.libjpeg-turbo.org/About/Performance) It seems that this would be a fragmented solution as file formats and features would be added at different stages to browser engines. Would there be a way to feature test that the optional arguments are supported? No more than any other new feature. I don't know if feature testing for dictionary arguments has been solved yet (it's come up before), but if not that's something that needs to be figured out in general. -- Glenn Maynard
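One pattern that later became common for feature-testing dictionary members is to pass an object with a getter and observe whether the implementation reads it (the same trick used to detect passive-listener support). A sketch in plain JS; the function name is mine:

```javascript
// Returns true if `fn` reads the named option off its options argument.
function readsOption(fn, name) {
  let read = false;
  const probe = {};
  Object.defineProperty(probe, name, {
    get() { read = true; return undefined; }
  });
  try { fn(probe); } catch (e) { /* the call itself may fail; that's fine */ }
  return read;
}
```

An implementation that supports a given dictionary member would be expected to read it during the call; one that ignores the dictionary never triggers the getter.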
Re: [whatwg] Proposal: toDataURL “image/png” compression control
On Thu, May 29, 2014 at 3:33 PM, Rik Cabanier caban...@gmail.com wrote: MMX, SSE is being addressed using asm.js. Assembly language is inherently incompatible with the Web. We already have an API for compressing images, and compression level is an ordinary input to image compressors, yet you're arguing that rather than add the option to the API we have, we should require people to bundle their own image compressors and write MMX assembly on the Web to make it fast. Sorry if I think that's a bizarre argument... We're also just dealing with screenshots here. I doubt people are going to do toDataURL at 60fps. (I hope we can all see more use cases than just screenshots.) -- Glenn Maynard
Re: [whatwg] Proposal: toDataURL “image/png” compression control
On Thu, May 29, 2014 at 4:21 PM, Boris Zbarsky bzbar...@mit.edu wrote: On 5/29/14, 5:13 PM, Glenn Maynard wrote: Assembly language is inherently incompatible with the Web. A SIMD API, however, is not. Under the hood, it can be implemented in terms of MMX, SSE, NEON, or just by forgetting about the SIMD bit and pretending like you have separate operations. In particular, you could have a SIMD API that desugars to plain JS as the default implementation in browsers but that JITs can recognize and vectorize as they desire. This sort of API will happen, for sure. I doubt it, at least with performance competitive with native assembly. We certainly shouldn't delay features while we hope for it. -- Glenn Maynard
Re: [whatwg] Proposal: toDataURL “image/png” compression control
On Thu, May 29, 2014 at 4:50 PM, Rik Cabanier caban...@gmail.com wrote: You don't need to hope for it. The future is already here: http://www.j15r.com/blog/2014/05/23/Box2d_2014_Update asm.js will be fast on all modern browsers before this feature would ship. As an author, I'd certainly prefer the most flexible solution that works everywhere. I don't have the time to read all of this, but it doesn't seem to have anything to do with SIMD instruction sets (which are notoriously difficult to generalize). Anyway, this has derailed the thread. We have an API for compression already. It already supports a compression level argument for JPEG. Having an equivalent argument for PNG is a no-brainer. The only difference from JPEG is that it should be described as the compression level rather than quality level, since with PNG it has no effect on quality, only the file size and time it takes to compress. -- Glenn Maynard
Re: [whatwg] Proposal: toDataURL “image/png” compression control
On Thu, May 29, 2014 at 5:34 PM, Nils Dagsson Moskopp n...@dieweltistgarnichtso.net wrote: and time it takes to compress. What benefit does it give then if the result is the same perceptually? Time it takes to compress. There's a big difference between waiting one second for a quick save and 60 seconds for a high-compression final export. On Thu, May 29, 2014 at 7:31 PM, Kornel Lesiński kor...@geekhood.net wrote: I don't think it's a no-brainer. There are several ways it could be interpreted: The API is a no-brainer. That doesn't mean it should be done carelessly. That said, how it's implemented is an implementation detail, just like the JPEG quality parameter, though it should probably be required to never use lossy compression (strictly speaking this may not actually be required today...). FYI, I don't plan to spend much time arguing for this feature. My main issue is with the just do it in script argument. It would probably help for people more strongly interested in this to show a comparison of resulting file sizes and the relative amount of time it takes to compress them. -- Glenn Maynard
Re: [whatwg] WebGL and ImageBitmaps
On Mon, May 12, 2014 at 3:19 AM, K. Gadd k...@luminance.org wrote: On Fri, May 9, 2014 at 12:02 PM, Ian Hickson i...@hixie.ch wrote: I'm assuming you're referring to the case where if you try to draw a subpart of an image and for some reason it has to be sampled (e.g. you're drawing it larger than the source), the anti-aliasing is optimised for tiling and so you get leakage from the next sprite over. If so, the solution is just to separate the sprites by a pixel of transparent black, no? This is the traditional solution for scenarios where you are sampling from a filtered texture in 3d. However, it only works if you never scale images, which is actually not the case in many game scenarios. That's only an issue when sampling without premultiplication, right? I had to refresh my memory on this: https://zewt.org/~glenn/test-premultiplied-scaling/ The first image is using WebGL to blit unpremultiplied. The second is WebGL blitting premultiplied. The last is 2d canvas. (We're talking about canvas here, of course, but WebGL makes it easier to test the different behavior.) This blits a red rectangle surrounded by transparent space on top of a red canvas. The black square is there so I can tell that it's actually drawing something. The first one gives a seam around the transparent area, as the white pixels (which are completely transparent in the image) are sampled into the visible part. I think this is the problem we're talking about. The second gives no seam, and the Canvas one gives no seam, indicating that it's a premultiplied blit. I don't know if that's specified, but the behavior is the same in Chrome and FF. On Tue, May 13, 2014 at 8:59 PM, K. Gadd k...@luminance.org wrote: On Mon, May 12, 2014 at 4:44 PM, Rik Cabanier caban...@gmail.com wrote: Can you give an explicit example where browsers are having different behavior when using drawImage? I thought I was pretty clear about this... 
colorspace conversion and alpha conversion happen here depending on the user's display configuration, the color profile of the source image, and what browser you're using. I've observed differences between Firefox and Chrome here, along with different behavior on OS X (presumably due to their different implementation of color profiles). In this case 'different' means 'loading and drawing an image to a canvas gives different results via getImageData'. That's a description, not an explicit example. An example would be a URL demonstrating the issue. The effects of color profiles should never be visible to script--they should be applied when the canvas is drawn to the screen, not when the image is decoded or the canvas is manipulated. That seems hard to implement, though, if you're blitting images to a canvas that all have different color profiles. It's probably better to ignore color profiles for canvas entirely than to expose the user's monitor configuration like this... -- Glenn Maynard
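The seam in the first test image above falls directly out of the interpolation arithmetic. A worked sketch, sampling halfway between an opaque red pixel and a fully transparent white pixel (the numbers are illustrative, with channels in 0..1):

```javascript
// Linear interpolation of two RGBA pixels, straight (unpremultiplied) alpha.
function lerpStraight(p, q, t) {
  return p.map((v, i) => v + (q[i] - v) * t);
}

// The same interpolation done on premultiplied pixels.
function lerpPremultiplied(p, q, t) {
  const pm = ([r, g, b, a]) => [r * a, g * a, b * a, a];
  return lerpStraight(pm(p), pm(q), t);
}

const red = [1, 0, 0, 1];         // opaque red
const clearWhite = [1, 1, 1, 0];  // fully transparent white

// Straight alpha leaks the hidden white into the visible half:
//   lerpStraight(red, clearWhite, 0.5) -> [1, 0.5, 0.5, 0.5], a pink seam.
// Premultiplied stays pure red at half coverage:
//   lerpPremultiplied(red, clearWhite, 0.5) -> [0.5, 0, 0, 0.5].
```

This is why the premultiplied blit in the test page shows no seam: the invisible white pixels contribute zero color once multiplied by their zero alpha.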
Re: [whatwg] Proposal: Event.creationTime
On Thu, May 8, 2014 at 2:33 AM, Brian Birtles bbirt...@mozilla.com wrote: (2014/05/08 0:49), Glenn Maynard wrote: Can you remind me why this shouldn't just use real time, eg. using the Unix epoch as the time base? It was some privacy concern, but I can't think of any privacy argument for giving high-resolution event timestamps in units that are this limited and awkward. [1] has some justification for why we don't use 1970. As does [2]. I'm not sure what the privacy concerns raised in the past were with regards to 1970. Okay, I remember. It's not that using the epoch here is itself a privacy issue, it's that the solutions to the monotonicity problem introduce privacy issues: if you add a global base time that isn't per-origin, that's a tracking vector. Maybe a solution would be to make DOMHighResTimeStamp structured clonable (or a wrapper class, since the type itself is just double). If you post a timestamp to another thread, it arrives in that thread's own time base. That way, each thread can always calculate precise deltas between two timestamps, without exposing the actual time base. (You still can't send it to a server, but that's an inherent problem for a timer on a monotonic clock.) If you treat Date.now() as your global clock, you can roughly convert between different performance timelines but with the caveat that you lose precision and are vulnerable to system clock adjustments. (There is actually a method defined for converting between timelines in Web Animations but the plan is to remove it.) That would defeat the purpose of using high-resolution timers in the first place. -- Glenn Maynard
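The rough Date-based conversion between timelines mentioned above is essentially what `performance.timeOrigin` later standardized: map a monotonic timestamp onto the wall clock, accepting exactly the precision and clock-adjustment caveats raised in the email. A sketch:

```javascript
// Map a monotonic high-resolution timestamp onto the wall clock.
// Caveat from the thread: the wall clock can be adjusted under you,
// so this is only suitable for rough cross-timeline comparisons.
function toEpochMs(highResTs, timeOrigin = performance.timeOrigin) {
  return timeOrigin + highResTs;
}
```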
Re: [whatwg] Proposal: navigator.cores
On Thu, May 8, 2014 at 10:13 PM, Adam Barth w...@adambarth.com wrote: I've updated the spec proposal [1] to sanction reporting fewer than the actual number of logical cores as a fingerprinting mitigation. The spec should allow the UA to do this (the real value isn't script-visible, so it can't really prohibit it), but it shouldn't recommend effectively limiting high-end machines. This also shouldn't be confused to be a solution for fingerprinting. It would still be another axis to segment average users on. On Fri, May 9, 2014 at 9:56 AM, David Young dyo...@pobox.com wrote: The algorithms don't have to run as fast as possible, they only have to run fast enough that the system is responsive to the user. If there is a motion graphic, you need to run the algorithm fast enough that the motion isn't choppy. That's not correct. For image processing and compression, you want to use as many cores as you can so the operation completes more quickly. For the rest, using more cores means that the algorithm can do a better job, giving a more accurate physics simulation, detecting motion more quickly and accurately, and so on. -- Glenn Maynard
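For the image-processing case above, the way a core count (`navigator.cores` in the proposal, shipped as `navigator.hardwareConcurrency`) actually gets used is simply to size a worker pool. A sketch of the sizing logic; the clamp values are my own defaults, not from the proposal:

```javascript
// Pick a worker-pool size from the reported core count.
// `cores` may be undefined on browsers that don't expose it.
function poolSize(cores, pendingTasks) {
  const reported = cores || 2;  // conservative fallback
  return Math.max(1, Math.min(reported, pendingTasks));
}

// In the browser: poolSize(navigator.hardwareConcurrency, tasks.length)
```

Note this is why under-reporting cores (the fingerprinting mitigation) directly costs throughput on high-end machines: the pool is sized from the reported value.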
Re: [whatwg] WebGL and ImageBitmaps
On Fri, May 9, 2014 at 2:02 PM, Ian Hickson i...@hixie.ch wrote: Given that the user's device could be a very low-power device, or one with a very small screen, but the user might still want to be manipulating very large images, it might be best to do the master manipulation on the server anyway. If I have a photo library with thousands of images, I don't want to upload each image--possibly megabytes each--to the server in order to manipulate it. Also, doing the work on the user's system scales to lots of users more sensibly than doing manipulations of large images on a server. I'm assuming you're referring to the case where if you try to draw a subpart of an image and for some reason it has to be sampled (e.g. you're drawing it larger than the source), the anti-aliasing is optimised for tiling and so you get leakage from the next sprite over. If so, the solution is just to separate the sprites by a pixel of transparent black, no? If you're downscaling by more than 2:1, you need to put more than one pixel between the images, which means you have to author sprite sheets differently depending on how far down you need to zoom. A drawing flag makes a lot more sense. -- Glenn Maynard
Re: [whatwg] Proposal: navigator.cores
On Tue, May 6, 2014 at 9:33 PM, Rik Cabanier caban...@gmail.com wrote: What do you mean? The paper explains that fingerprinting is a problem for privacy, and here it's being used to argue fingerprinting is already so bad that we should stop trying. (I'm not saying he can't do it or that it's unethical, just that it's unpleasant.) On Thu, May 8, 2014 at 9:07 PM, Joe Gregorio jcgrego...@google.com wrote: Maybe we can also return their RAM, but limit it to a maximum of 640K, since no one will need more than that :-) I think in a few years the limit to 8 cores will look just as silly. I'd imagine that this won't be the final version of WebKit, and that they'd increase that number if 16 cores was average and 64 cores was on the outside. (At least for desktops, I'm not at all convinced that anything like that will happen, though--high-end desktop machines have been hovering around 4 cores with HT for years. I think there's just not much market demand for faster and faster CPUs like there used to be...) That said, if I spent lots of money on a 16-core processor, then I'd be pretty angry if this caused pages to only use half of it. -- Glenn Maynard
Re: [whatwg] Proposal: Event.creationTime
On Wed, May 7, 2014 at 10:08 AM, Boris Zbarsky bzbar...@mit.edu wrote: On 5/7/14, 6:43 AM, Anne van Kesteren wrote: Is there a good reference somewhere for what the time would be relative to? https://w3c.github.io/web-performance/specs/HighResolutionTime2/Overview.html#sec-time-origin seems like the right thing. This seems to make it impossible to get the time of an event in real time, so you can compare it to external events or send it to shared workers. Can you remind me why this shouldn't just use real time, e.g. using the Unix epoch as the time base? It was some privacy concern, but I can't think of any privacy argument for giving high-resolution event timestamps in units that are this limited and awkward. -- Glenn Maynard
Re: [whatwg] Proposal: navigator.cores
On Sun, May 4, 2014 at 4:49 PM, Adam Barth w...@adambarth.com wrote: You're right that Panopticlick doesn't bother to spend the few seconds it takes to estimate the number of cores because it already has sufficient information to fingerprint 99.1% of visitors: https://panopticlick.eff.org/browser-uniqueness.pdf It's pretty unpleasant to use a paper arguing that fingerprinting is a threat to online privacy as an argument that we should give up trying to prevent fingerprinting. On Mon, May 5, 2014 at 10:20 PM, Ian Hickson i...@hixie.ch wrote: of Workers today, as bz pointed out earlier). Indeed, on a high-core machine as we should expect to start seeing widely in the coming years, it might make sense for the browser to randomly limit the number of cores on a per-origin/session basis, specifically to mitigate fingerprinting. This might make sense in browser modes like Chrome's incognito mode, but I think it would be overboard to do this in a regular browser window. If I've paid for a CPU with 16 cores, I expect applications which are able to use them all to do so consistently, and not be randomly throttled to something less. On Tue, May 6, 2014 at 4:38 PM, Boris Zbarsky bzbar...@mit.edu wrote: On 5/6/14, 5:30 PM, Rik Cabanier wrote: Leaving the question of fingerprinting aside for now, what name would people prefer? mauve? Failing that, maxUsefulWorkers? It can be useful to start more workers than processors, when they're not CPU-bound. -- Glenn Maynard
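The last point, that starting more workers than processors can be useful when they aren't CPU-bound, could be captured in a small pool-sizing helper. This is a hypothetical sketch, not anything proposed in the thread; `cores` stands in for whatever `navigator.cores`/`navigator.hardwareConcurrency` would report, with an assumed fallback when it's unavailable.

```javascript
// Illustrative only: pick a worker pool size for a batch of tasks.
function pickWorkerCount(cores, taskCount, cpuBound) {
  const n = cores || 4; // assumed fallback when the core count is unknown
  // CPU-bound work gains nothing from exceeding the core count, but
  // I/O-bound workers spend most of their time waiting, so running
  // more than one per core is fine.
  return cpuBound ? Math.min(n, taskCount) : taskCount;
}
```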
Re: [whatwg] hidden attribute useless with display: flex?
Previous discussion: http://lists.whatwg.org/pipermail/whatwg-whatwg.org/2012-November/037905.html On Wed, Apr 30, 2014 at 6:01 AM, Anne van Kesteren ann...@annevk.nl wrote: <div hidden></div> Per spec, the div should be shown right? I imagine there is no way back on that? We could change the specification to use display-box instead. That might work. It's too bad that display-box also has multiple uses--it doesn't only display or hide the content, it has a third contents mode. That means the same problem would happen as soon as you set display-box: contents on something--it would override [hidden]. What we really need is a CSS property that only sets whether the element is visible or not and nothing else, like visible: false. That way, the only way [hidden] gets overridden is if you're actually setting the visibility style. I assume it's too late to change the style [hidden] uses, though. Lots of pages do things like d = elem.style.display; elem.style.display = "block"; width = elem.offsetWidth; elem.style.display = d; to work around offset* being 0 while hidden, and if [hidden] changes to some other style (or to !important) that code will break. I always just put [hidden] { display: none !important; } in my stylesheets to work around this. That sucks, since it makes [hidden] in pages and scripts I write incompatible with everyone else, who may be writing scripts that don't understand this (such as the above pattern), or may work around it in some other way. -- Glenn Maynard
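Spelled out, the save-and-restore measurement idiom the email describes looks roughly like this (a sketch of the common pattern, not code from the thread). The fragility is visible in the middle line: if a stylesheet hides the element with an `!important` rule, the inline `display` override has no effect and the measurement silently returns 0.

```javascript
// Measure an element that may currently be display:none.
// offsetWidth reports 0 for non-rendered elements, so the idiom
// temporarily forces a rendered display value, reads, and restores.
function measureWidth(elem) {
  const saved = elem.style.display;
  elem.style.display = "block"; // no effect if a stylesheet uses !important
  const width = elem.offsetWidth;
  elem.style.display = saved;
  return width;
}
```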
Re: [whatwg] hidden attribute useless with display: flex?
On Wed, Apr 30, 2014 at 4:12 PM, Tab Atkins Jr. jackalm...@gmail.com wrote: On Wed, Apr 30, 2014 at 7:32 AM, Glenn Maynard gl...@zewt.org wrote: It's too bad that display-box also has multiple uses--it doesn't only display or hide the content, it has a third contents mode. That means the same problem would happen as soon as you set display-box: contents on something--it would override [hidden]. What we really need is a CSS property that only sets whether the element is visible or not and nothing else, like visible: false. That way, the only way [hidden] gets overridden is if you're actually setting the visibility style. Mind bringing this up in www-style? My thinking in that design is (It's confusing to move conversations to lists some people aren't subscribed to, since they'll inevitably miss part of the discussion. It'd help a lot if the lists wouldn't bounce mails if you're subscribed to *any* w3 mailing list, so cross-posting would work better. But, I think that starting a new thread on another list without copying it to this one is even more confusing, so I've CC'd both.) that display-box controls whether an element generates boxes at all, which seems consistent with including the 'contents' value. But if it seems useful to have a property dedicated to literally just hiding the element, we can see about rejiggering things. If an element is @hidden, I don't want style rules for other behaviors to override that. Just as today I don't want a display: block; style to break hidden (I just wanted this inline element to be block when it's not hidden, not override hidden entirely!), I don't want display-box: contents; to break hidden either (I just wanted to cause the element itself to not be rendered when it's not hidden, not override hidden entirely!). That said, we may be past the point where it will really help. 
It's too late to actually use this with the default [hidden] rule, which means authors will have to put a [hidden] { rendered: false; } (or something) rule in their stylesheet. If authors all need to add a boilerplate rule to fix @hidden anyway, [hidden] { display: none !important; } works. -- Glenn Maynard
Re: [whatwg] canvas drawImage and EXIF orientation metadata
On Thu, Apr 17, 2014 at 2:46 AM, Jonas Sicking jo...@sicking.cc wrote: The problem here stems from the fact that orientation data lives as metadata in the EXIF data of image formats. This means that many tools have simply ignored that metadata. The result seems to have been that people open their images in tools that ignore the EXIF metadata, then rotate the pixel data using that tool, then save the image again while keeping the EXIF metadata unchanged. This now means the pixels have been rotated (say) 90 degrees, but the EXIF metadata still says rotate image 90 degrees. So any tool that now honors the EXIF renders the picture *wrong*. So effectively the EXIF metadata has to be ignored in order to keep webcompat. That was the case even before image-orientation was implemented. FWIW I believe that WebP is remaking this same mistake. Would be cool if someone tried to prevent this from happening. The question was why is this a CSS style instead of a property on img, not why isn't this just the default. -- Glenn Maynard
Re: [whatwg] [notifications][editorial] tweaking the Activating a notification window.focus() note
On Wed, Apr 16, 2014 at 6:10 PM, Edward O'Connor eocon...@apple.com wrote: Hi, In §4.6 Activating a notification, there's a note that currently reads User agents are strongly encouraged to make window.focus() work from within the event listener for the event named click as a means of focusing the browsing context related to the notification. This note assumes that the UA doesn't automatically focus the browsing context when a notification is activated. (Safari on OS X is one example of a UA which does this.) The note should be adjusted so that readers understand that calling window.focus() may not be necessary on some combinations of UA and system notification service. Rather, pages should never be allowed to window.focus() when a notification is activated. If the platform's notification design wants that to happen, it's the platform's job to do that, and pages shouldn't all be required to call window.focus() to make this happen consistently. (If for some reason the platform doesn't want that to happen, the page shouldn't be able to override that, either.) If there are notifications that don't want to focus a page when activated, that should be a setting on the notification. -- Glenn Maynard
Re: [whatwg] Media sink device selection on audio/video
On Fri, Apr 11, 2014 at 6:23 PM, Edward O'Connor eocon...@apple.com wrote: The consensus opinion at WebRTC and MediaCapture seemed to be that the ability to let an app say which of these 5 microphones do you want? is more amenable to creating good apps than leaving this UI to the browser chrome. Seems to me that the privacy aspects (the fingerprinting vulnerabilities from exposing this data), and the abuse aspects (giving hostile sites the ability to access all the user's devices if any are made available) would trump this. Surely we can rely on user agents to provide nice UIs. The fingerprinting could be pretty specific, too. For example, my apple TV advertises itself with a custom AirPlay name. I agree with Ian. For instance, on iOS we provide features that allow Web developers to take AirPlay into account when building custom video controls, but we do not expose the list of AirPlay targets to Web content. Some other issues: - The browser will give a consistent UI. I don't get a different Save As dialog for each site, and I shouldn't get a different which mic do you want to use? dialog for each site either. - The browser will give a UI. My guess is that the vast majority of web apps wouldn't provide a selection UI *at all* for mics or speakers, and just use the default. - Web apps shouldn't need to implement basic UI for things like this, just like they shouldn't have to implement their own Save As dialogs. That's the platform's job. -- Glenn Maynard
Re: [whatwg] Proposal: requestBackgroundProcessing()
On Thu, Feb 20, 2014 at 12:35 PM, Rik Cabanier caban...@gmail.com wrote: This sounds like work that should be done in a worker. Worker timers aren't throttled when in the background, and this is exactly something workers are for. Is WebRTC available in a worker? I don't know, but if not, fixing that is probably closer to the right direction than letting people run fast timers in minimized UI threads. If this is just messaging of game state, he could probably just relay that through the UI thread, so the game simulation still takes place in a worker. -- Glenn Maynard
Re: [whatwg] Proposal: requestBackgroundProcessing()
On Thu, Feb 20, 2014 at 3:29 PM, Ashley Gullen ash...@scirra.com wrote: There's a lot of worker features that aren't widely supported yet, like rendering to canvas with WebGL and 2D, Web Audio support, inputs, various APIs like speech and fullscreen, so I don't think that's practical right now. I guess that's not a reason to standardise a new feature, but is there not at least a workaround in the meantime? Are workers able to wake the UI with postMessage()? You were talking about running the server of a multiplayer game. Other than communicating with other clients, it doesn't need any of that, right? Those are all client-side behaviors. You can send a message to the UI thread. I didn't suggest that, since it feels like an arms race of trying to sidestep browser behavior. It may not matter, since the common things they're trying to stop are probably things like never-ending animation timers running when you can't even see them (which have no reason to drive their timers from a worker to bypass the timer throttling), but I'd recommend trying to move your actual server logic into a worker. -- Glenn Maynard
Re: [whatwg] Proposal: requestBackgroundProcessing()
On Thu, Feb 20, 2014 at 4:32 PM, Ashley Gullen ash...@scirra.com wrote: Since it's a peer-to-peer engine, the user acting as the server is also a participant in the game. This means the server is also running the full game experience with rendering and audio. The user is a client, and also a server for all clients (including itself). There should be no need for the client to run in the same environment as the server. The user's client is just another client to the server (that happens to be running in a worker in the same browser). The game logic is tied to rAF, since we intend to step the world once per screen draw, including for the player acting as a server. That doesn't make sense. requestAnimationFrame is bound to the refresh rate of the user's monitor, and each user can have a monitor with a different refresh rate. (You can even have different monitors on the same system with different refresh rates, though I don't know if that happens in practice today.) Even if everyone's monitor happens to have the same refresh rate, the monitors won't all be refreshing in sync. You also wouldn't want gameplay behavior to change in subtle ways depending on whether some user's display was at 50Hz or 60Hz or 120Hz. It looks like the server is caught between running in the UI thread and hanging when in the background, or running in the worker and not having a straightforward way to provide the rendering and audio game experience for the host player. The host player just connects to the server, like any other client does. (The actual connection layer is likely to be different, of course, since you can just post messages across and not establish any network connection.) One solution is to make web workers able to access many of the same APIs the UI thread can, especially with WebGL, Web Audio, WebRTC and rAF. Then it might be practical to move a full game engine in to a worker. If that is planned, I guess there is no need for a new feature. 
Not a full game engine, just the server logic where gameplay state, physics and so on are handled and communicated to clients. -- Glenn Maynard
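The refresh-rate dependence described above is what fixed-timestep loops avoid: the simulation advances in constant increments and an accumulator absorbs whatever frame interval rAF (or a worker timer) happens to deliver. A minimal sketch of just the accumulator logic, as an illustration rather than anything proposed in the thread:

```javascript
// Advance the world in fixed-size ticks regardless of frame timing.
// stepMs is the simulation tick (e.g. 1000/60); frameDeltaMs is the
// elapsed time the caller measured since the previous frame.
function runFixedSteps(state, frameDeltaMs, stepMs, stepWorld) {
  state.accumulator += frameDeltaMs;
  let steps = 0;
  while (state.accumulator >= stepMs) {
    stepWorld();                  // one deterministic simulation tick
    state.accumulator -= stepMs;  // leftover time carries to the next frame
    steps++;
  }
  return steps; // number of ticks run this frame
}
```

With this shape, a client on a 50Hz display and one on a 120Hz display step the world at the same rate, which is exactly the property a peer acting as the game server needs.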
Re: [whatwg] Bicubic filtering on context.drawImage
On Mon, Dec 9, 2013 at 1:21 AM, Tab Atkins Jr. jackalm...@gmail.com wrote: Hm, I wonder if image-interpolation on the canvas should affect this? It's defined to only have an effect when you scale the canvas element itself, but I think it probably makes sense that whatever scaling intent you specify for the element should probably apply to images you draw into it with a scale. What is image-interpolation? It looks like a CSS property, but Google doesn't distinguish between image-interpolation and image interpolation, so it's impossible to search for. If it is, having CSS state affect drawing of 2d canvas seems wrong. Aside from the bad layering, it would lead to different rendering if you draw to a Canvas before stylesheets finish loading (equivalent to not waiting for images, but much easier to get wrong without noticing), and if you offscreen render a Canvas before actually putting it in a document. -- Glenn Maynard
Re: [whatwg] Bicubic filtering on context.drawImage
On Mon, Dec 9, 2013 at 9:42 AM, Juriy Zaytsev kan...@gmail.com wrote: Well, doesn't this already happen with remote web fonts and fillText/strokeText? I'm not familiar with that, but at least if a font size is incorrect it's a lot more noticeable than if a resampling filter is sometimes bilinear when you wanted bicubic. -- Glenn Maynard
Re: [whatwg] Canvas in workers
- Original Message - From: Robert O'Callahan rob...@ocallahan.org We talked through this proposal with a lot of Mozilla people in a meeting and collectively decided that we don't care about the case of workers that commit multiple frames to a canvas without yielding --- at least for now. So we want to remove commit() and copy the main-thread semantics that a canvas frame is eligible for presentation whenever script is not running in the worker. On Thu, Oct 24, 2013 at 7:25 AM, Jeff Gilbert jgilb...@mozilla.com wrote: This is not the current WebGL semantics: WebGL presents its drawing buffer to the HTML page compositor immediately before a compositing operation[...] (Can you please quote correctly? Having one person top-quoting makes a mess of the whole thread, and it looked like you were saying that the WebGL spec language you were quoting was incorrect.) The assumption WebGL is making here is that compositing is a synchronous task in the event loop, which happens while no script is running. That is, the semantics Robert describes are the same as what the WebGL spec is trying to say. That's not necessarily how compositing actually works, though, and that language also won't make sense with threaded rendering. It might be better for WebGL to define this using the global script clean-up jobs task that HTML now defines. http://www.whatwg.org/specs/web-apps/current-work/#run-the-global-script-clean-up-jobs I'd recommend spinning off a separate thread if we want to go into this further. -- Glenn Maynard
Re: [whatwg] Synchronizing Canvas updates in a worker to DOM changes in the UI thread
I just noticed that Canvas already has a Canvas.setContext() method, which seems to do exactly what I'm proposing, even down to clearing the backbuffer on attach. The only difference is that it lives on Canvas instead of the context--the only reason I put it there in my proposal was because this only seemed useful for WebGL. Given that, I think this proposal can be simplified down to just: put setContext on WorkerCanvas too. On Mon, Oct 21, 2013 at 9:03 PM, Kenneth Russell k...@google.com wrote: There are some unexpected consequences of the attachToCanvas API style. For example, what if two contexts use attachToCanvas to target the same canvas? I left out these details in my initial post in order to see what people thought at a high level before delving into details. Attaching when already attached would replace the old attachment. It's not possible for two workers to attach to the same canvas, since only a single WorkerCanvas can exist for any given canvas; and the original Canvas can't be attached to if a WorkerCanvas was created (eg. it's in the proxied mode). What if one of those contexts is 2D and the other is WebGL? Currently it's illegal to try to fetch two different context types for a single Canvas. The current CanvasProxy spec contains several complex rules for these cases, and they're not easy to understand. This is handled by setContext: attaching a context detaches any previously-attached context. Will it be guaranteed that if you have a WebGL context, attachToCanvas to canvas1, do some rendering, and then attachToCanvas to canvas2, that the only remaining buffer in canvas1 is its color buffer? No depth buffers, multisample buffers, etc. will have to remain for some reason? If you reattach to canvas1 in the future, the buffers are cleared, which means you can discard or reuse those buffers as soon as you attach to a different canvas. 
How would WebGL's preserveDrawingBuffer attribute, which is a property of the context, interact with directing its output to multiple canvases? Since attaching the canvas clears it, that would override preserveDrawingBuffer. Fundamentally I think the behavior is easier to spec, and the implementation is easier to make correct, if the ultimate destination is an image rather than a canvas, and the color buffer is transferred out of the WorkerCanvas in an explicit step. Whether that's true or not, making things easy for the user takes priority over making things easy for spec writers and implementers. On Tue, Oct 22, 2013 at 2:48 AM, Robert O'Callahan rob...@ocallahan.org wrote: This code actually does something potentially useful which can't easily be done with attachToCanvas: generating a series of images as fast as possible which will be processed on another thread in some way other than just rendering them on the screen. (E.g., be encoded into an animated image or video file.) (This is a proposal for attachToCanvas--now setContext--not against transferToImageBitmap, if there are use cases that transferToImageBitmap solves best in its own right. It seems like toBlob already handles this, though.) -- Glenn Maynard
Re: [whatwg] Synchronizing Canvas updates in a worker to DOM changes in the UI thread
On Tue, Oct 22, 2013 at 2:48 AM, Robert O'Callahan rob...@ocallahan.org wrote: This code actually does something potentially useful which can't easily be done with attachToCanvas: generating a series of images as fast as possible which will be processed on another thread in some way other than just rendering them on the screen. (E.g., be encoded into an animated image or video file.) (Err, wait. A few issues come to mind. 1: You can already say createImageBitmap(canvas) to create an ImageBitmap, which handles the store a snapshot of a frame use cases. 2: If the reason to have a transfer version for these use cases is just an optimization, then it's not obvious that it's a useful optimization. The use cases you mention suggest a GPU readback anyway. 3: If you're doing the encoding yourself in script, you want ImageData anyway, not ImageBitmap. I don't object as such to adding such a method if it's useful, and I don't think I have the energy right now to debate these in much depth, but this feels like taking a proposal and searching for uses for it.) On Tue, Oct 22, 2013 at 12:20 PM, Kenneth Russell k...@google.com wrote: On Tue, Oct 22, 2013 at 7:37 AM, Glenn Maynard gl...@zewt.org wrote: I just noticed that Canvas already has a Canvas.setContext() method That's there in support of CanvasProxy, which is a flawed API and which this entire discussion is aiming to rectify. I don't see flaws with the setContext() API, which appears to have already solved the problem of being able to make one context render to multiple canvases. Any relation to CanvasProxy isn't relevant to this. , which seems to do exactly what I'm proposing, even down to clearing the backbuffer on attach. The only difference is that it lives on Canvas instead of the context--the only reason I put it there in my proposal was because this only seemed useful for WebGL. Given that, I think this proposal can be simplified down to just: put setContext on WorkerCanvas too. 
Also, adding a present() method to Canvas. That's mixing up proposals, actually. Adding present() is for the explicitpresent proposal, which aims at solving the "synchronizing rendering in a worker to DOM changes in the main thread" use cases. Reusing setContext() replaces my attachToCanvas() proposal, which is for the "one context rendering to multiple canvases" use cases. They're orthogonal, not mutually exclusive, and solve different problems. (We're mixing up proposals because we're trying to solve too many problems simultaneously, which is one reason I've tried to split this stuff into smaller chunks.) At a high level I prefer the form of the WorkerCanvas API, including transferToImageBitmap and the ability to transfer an ImageBitmap into an HTMLImageElement for viewing, and removing the CanvasProxy concept and associated APIs. I'd like to focus my own efforts in writing a full draft for WorkerCanvas under http://wiki.whatwg.org/wiki/Category:Proposals . Again, this is a supplement to WorkerCanvas, not a replacement for it. (It may be compatible with CanvasProxy too, but I haven't looked at it closely to see.) We're circling around: you keep saying we should use transferToImageBitmap, I keep pointing out the problems with it that my proposal solves, and you reply by saying we should use transferToImageBitmap, without addressing those problems. I don't think we have any more information to bring to the discussion right now, so I think we're at a good point to wait for Hixie to get around to these threads rather than going over the same stuff again (and giving him more reading material :). Here's a summary of my proposal: - The WorkerCanvas adjustments to CanvasProxy (minus the transferToImageBitmap stuff), to better address the "rendering to a Canvas from a worker" and "creating off-screen Canvases in a worker" use cases. - Include setContext() on WorkerCanvas, to support rendering from one context to multiple canvases when in a worker. 
- Add explicitpresent and present() to Canvas, to support synchronizing rendering in a worker to DOM changes in the main thread without forcing that synchronization on everybody. The second and third are independent and can be implemented separately, after WorkerCanvas itself has time to settle. -- Glenn Maynard
Re: [whatwg] Canvas in workers
On Sun, Oct 20, 2013 at 11:53 PM, Robert O'Callahan rob...@ocallahan.org wrote: Glenn, taking a step back for a bit, is there anything in https://wiki.mozilla.org/User:Roc/WorkerCanvasProposal that you would actually object to? IOW, is there anything there that you would think is completely superfluous to the platform if all your proposals were to be adopted as well? I have no objection to the overall change from CanvasProxy to WorkerCanvas, eg. the stuff in Kyle's original mail to the thread. (Being able to settle on that is one of the reasons I've tried to detach discussion for the other use cases.) I'd only recommend leaving out the transferToImageBitmap, srcObject and ImageBitmap.close() parts. I do think those would be redundant with the present proposal. They can always be added later, and leaving them out keeps the WorkerCanvas proposal itself focused. -- Glenn Maynard
Re: [whatwg] Synchronizing Canvas updates in a worker to DOM changes in the UI thread
On Sun, Oct 20, 2013 at 11:16 PM, Robert O'Callahan rob...@ocallahan.org wrote: With all these proposals I think it's OK to allow the main thread to do (e.g.) a toDataURL and read what the current contents of the canvas is, We can defer this discussion, since it's not something new to this proposal (or any other proposal we're discussing). On Sun, Oct 20, 2013 at 11:33 PM, Robert O'Callahan rob...@ocallahan.org wrote: To me, passing the image data explicitly in an ImageBuffer along with the present message seems like a better fit to the workers message-passing model than this proposal, where the data is stored as hidden state in the canvas element with (effectively) a setter in the worker and a getter in the main thread, and that setting and getting has to be coordinated with postMessage for synchronization. The relationship between a commit and its present has to be deduced by reasoning about the timing of messages, rather than by just reasoning about JS data flow through postMessage. Using ImageBitmap for this has a lot of issues. It requires synchronizing with scripts in the UI thread. It requires manually resizing your canvas repeatedly to fit different destinations. It also may potentially create lots of backbuffers. Here's an example of code using ImageBitmap incorrectly, leading to excess memory allocation:

function render() {
    var canvas = myWorkerCanvas;
    renderTo(canvas);
    var buffer = canvas.transferToImageBitmap();
    postMessage(buffer);
}
setTimeout(render, 1);

We start with one backbuffer available, render to it (renderTo), peel it off the canvas to be displayed somewhere, and toss it off to the main thread. (For the sake of the example, the main thread is busy and doesn't process it immediately.) The worker enters render() again, and when it gets to renderTo, a new backbuffer has to be allocated, since the one buffer we have is still used by the ImageBitmap and can't be changed. 
This happens repeatedly, creating new backbuffers each time, since none of them can be reused. This is an extreme example, but if this ever happens even once, it means potentially allocating an extra backbuffer. This proposal also requires that whenever a worker is going to return image data to the main thread, the main thread must start things off by creating a canvas element. It's also not possible for a worker to spawn off sub-workers to do drawing (at least, not without some really ugly coordination with the main thread.) Sure it is. If you want an offscreen buffer, you just new WorkerCanvas(). This is unrelated to offscreen drawing. -- Glenn Maynard
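The unbounded allocation described above is the kind of thing a small buffer-budget handshake prevents: the worker renders only while fewer than N buffers are outstanding, and the main thread acks each frame after close()ing its ImageBitmap, returning a buffer to the pool. A sketch of just the bookkeeping (hypothetical; in practice the acquire/ack would travel over postMessage between the worker and the main thread):

```javascript
// Cap the number of backbuffers in flight between worker and main thread.
class FrameBudget {
  constructor(maxBuffers) { this.free = maxBuffers; }
  // Worker side: call before rendering a frame; if it fails, skip the
  // frame instead of allocating yet another backbuffer.
  tryAcquire() {
    if (this.free === 0) return false;
    this.free--;
    return true;
  }
  // Main-thread side (via an ack message): the displayed ImageBitmap has
  // been close()d, so its buffer is reusable again.
  release() { this.free++; }
}
```

With a budget of 2, a slow main thread causes the worker to drop frames rather than pile up backbuffers, which is the failure mode the example above demonstrates.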
Re: [whatwg] Synchronizing Canvas updates in a worker to DOM changes in the UI thread
On Mon, Oct 21, 2013 at 6:08 PM, Kenneth Russell k...@google.com wrote: Using ImageBitmap for this has a lot of issues. It requires synchronizing with scripts in the UI thread. This isn't difficult, and amounts to a few additional lines of code in the main thread's onmessage handler. Synchronization with the UI thread isn't bad because it's difficult. Avoiding synchronization with the main thread has been raised as a desirable goal: http://lists.w3.org/Archives/Public/public-whatwg-archive/2013Oct/0152.html , including that it isn't possible to render from a worker without synchronizing with the main thread. (My previous comments on this are here: http://www.mail-archive.com/whatwg@lists.whatwg.org/msg35959.html) The ImageBitmap style proposal has another significant advantage in that it allows a single canvas context to present results in multiple output regions on the page. You can do that. You just create a WorkerCanvas for each canvas you want to present to, hand them to the worker, then attachToCanvas in the worker to switch from canvas to canvas. (That's orthogonal to explicitpresent.) This sort of resource exhaustion is certainly possible, but I view this downside as smaller than the upside of addressing both of the above use cases. I can only find one thing above that you might be referring to as a use case (the one I replied to immediately above). What was the other? -- Glenn Maynard
Re: [whatwg] Counterproposal for canvas in workers
On Sun, Oct 20, 2013 at 2:22 AM, Robert O'Callahan rob...@ocallahan.org wrote: On Fri, Oct 18, 2013 at 3:10 PM, Glenn Maynard gl...@zewt.org wrote: Also, with the transferToImageBuffer approach, if you want to render from a worker into multiple canvases in the UI thread, you have to post those ImageBuffers over to the main thread each frame, which has the same (potential) synchronization issues as the transferDrawingBufferToCanvas proposal. I'm confused here. You said if you want to render from a worker into multiple canvases in the UI thread, which I took to mean that you wanted to synchronize canvas updates from workers with DOM changes made by the UI thread. But now you're saying you don't want to do that. So I don't know what you meant. This has nothing to do with synchronizing to DOM updates. The point is to be able to render from a single WebGL context to multiple canvases, without having to create multiple WebGL contexts and upload a second copy of textures, vertex programs, etc. into it, which is very expensive. Doing that efficiently and asynchronously is what this is trying to solve. (The particular problem I pointed out is specific to doing that from Workers with canvases in the UI thread, but the goal itself is not.) -- Glenn Maynard
Re: [whatwg] Canvas in workers
On Sat, Oct 19, 2013 at 10:11 AM, Robert O'Callahan rob...@ocallahan.org wrote: It's not clear to me how attachToCanvas works. An application like Google Maps wants to draw to multiple canvases from a worker and then render the updated canvas contents all at once, in synchrony with changes to the DOM made by the main thread. How would you do that with attachToCanvas? That's not the problem attachToCanvas tries to solve. It tries to solve rendering to multiple canvases, without requiring synchronization with the UI thread. I have a proposal for handling synchronization with DOM updates, but I'll post it in a separate thread. (To clarify, this thread is talking about three different things: rendering from a worker to the UI thread, rendering to multiple canvases, and synchronizing rendering in a worker to DOM updates in the main thread. The only reason they're in the same thread is because some proposals are trying to handle two or all three of them together. Trying to do that is leading to unwanted limitations, such as forcing synchronization when you don't want it, and it's making the conversation hard to follow. Since my proposal is orthogonal to the rest--it's separate and compatible with both WorkerCanvas and attachToCanvas--and this thread is discussing too many things at once, I'll move the third to a separate thread.) - If you're rendering in a worker and the eventual target is in the main thread, the worker needs to be careful to not start rendering again until the main thread has assigned the ImageBitmap to where it wants it, and called .close(). You'd need to send a message back to the worker going okay, you can continue now. Otherwise, you'd start rendering before a buffer has been freed up for reuse, and end up creating more backbuffers than you intended (which matters for large screens). This seems easy to get wrong, and attachToCanvas doesn't have this problem. Not if you use transferToImageBitmap. transferToImageBitmap does have this problem. 
If you transferToImageBitmap to detach your backing store to display it somewhere, then start rendering the next frame without waiting for the ImageBitmap to be given to the target, then as soon as you start rendering you'll end up creating a 3rd rendering buffer. (The present() proposal also has this problem, but users are only affected by it if they're actually synchronizing to DOM updates.) With attachToCanvas, you just size both canvases normally once, and switch between them with a single function call. I'm not sure how helpful this is. In the case of WebGL, the rendering context has resources that need to be sized the same as the destination color buffer, so they'll need to be resized anyway if you're actually using a single context to render to canvases of different sizes. My guess is that the advice will always be don't do that. If you think that a single context outputting to multiple canvases fundamentally won't work well with canvases of different sizes, then forget about the feature. When you attachToCanvas, you're attaching that canvas's rendering buffer, not just its color buffer. In WebGL terms, each canvas is a Framebuffer, with all its associated Renderbuffers. Attaching the context to a canvas is like using bindFramebuffer to render to its backing store. I believe these are minor changes, especially compared to moving drawing to a worker. Moving drawing to a worker is unrelated to drawing to multiple canvases. I don't agree that it's a minor change: today you can create a black-box API that renders to a context, without caring about the DOM at all. If you have to move the image to an img to display it, suddenly you have to care about the DOM to render. It breaks the basic model. -- Glenn Maynard
[whatwg] Synchronizing Canvas updates in a worker to DOM changes in the UI thread
(This is a spin-off thread from Canvas in workers. Some of this is written in terms of the WorkerCanvas proposal, but it works fine with the current CanvasProxy API. I'm skipping some steps here and going straight to a proposal, since my main goal at the moment is to detangle this from the other thread...)

Here's a way to synchronize updates to DOM changes, so scenes rendered in a worker only appear when the UI thread is ready for them to be.

- Add a flag to the Canvas to enable this. For now, let's call this explicitpresent, eg. <canvas explicitpresent>.
- When a script finishes rendering (eg. calls commit()), the buffer is not automatically displayed. Instead, it's simply made available to be displayed.
- Add a method, Canvas.present(), to present the most recently-available frame.

To describe this in terms of triple-buffering, you have three buffers: a rendering buffer (aka the backbuffer), a display buffer (aka the front-buffer), and a ready buffer. You render (possibly in a worker) to the rendering buffer. When you're finished, you call commit(), and the rendering buffer and the ready buffer are swapped. Now that a new frame is ready, you can call canvas.present() to swap the ready buffer and the display buffer.

Essentially, that's it. You don't actually need to allocate a third buffer, as long as the user doesn't start rendering a new frame before present()ing the previous one. This could be a behind-the-scenes optimization to avoid the extra memory cost--only allocate a third buffer if actually needed.

It must not be possible for the UI thread to detect whether present() did anything--if there's no frame in the ready buffer, nothing changes and the UI thread can't detect this. Similarly, it must not be possible for the rendering thread to detect if the ready frame has been presented. These rules are to prevent exposing asynchronous behavior. 
Example:

<canvas id=canvas explicitpresent></canvas>
<script>
var canvas = document.querySelector("#canvas");
var worker = createWorker();
worker.postMessage({
    cmd: "init",
    canvas: canvas.getWorkerCanvas(),
});

worker.onmessage = function(e) {
    // The worker told us that a frame has been committed.  Present it for display.
    canvas.present();

    // Tell the worker that it should start rendering the next frame.
    worker.postMessage({cmd: "update"});

    // Do any DOM changes here, to synchronize them with displaying the new canvas.
    updateUI();
}
</script>

Worker:

onmessage = function(e) {
    // On initialization only:
    if(e.data.cmd == "init")
        canvas = e.data.canvas;

    // Render our scene.
    renderFrame(canvas);

    // Commit the scene.
    canvas.commit();

    // Tell the main thread that the frame is ready.
    postMessage("present");
}

function renderFrame(workerCanvas) { }

-- Glenn Maynard
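To make the triple-buffering rules above concrete, here is a minimal plain-JavaScript sketch of just the buffer bookkeeping -- no real canvas or worker is involved, and makeBufferedCanvas(), draw(), and displayed() are illustrative stand-ins, not proposed API:

```javascript
// commit() swaps the rendering buffer with the ready buffer;
// present() swaps the ready buffer with the display buffer.
function makeBufferedCanvas() {
  let rendering = { frame: null };  // backbuffer the renderer draws into
  let ready = { frame: null };      // last committed frame, not yet shown
  let display = { frame: null };    // what the compositor shows
  let hasNewFrame = false;
  return {
    draw(frame) { rendering.frame = frame; },
    commit() {
      [rendering, ready] = [ready, rendering];
      hasNewFrame = true;
    },
    present() {
      // No frame in the ready buffer: nothing changes (and per the
      // proposal, the UI thread must not be able to detect that).
      if (!hasNewFrame) return;
      [ready, display] = [display, ready];
      hasNewFrame = false;
    },
    displayed() { return display.frame; },
  };
}

const c = makeBufferedCanvas();
c.draw("frame 1");
c.commit();                  // frame 1 moves to the ready buffer
c.draw("frame 2");           // the renderer starts the next frame immediately
console.log(c.displayed());  // null: nothing presented yet
c.present();                 // the UI thread presents alongside its DOM update
console.log(c.displayed());  // "frame 1"
```

Note that calling present() again with no newly committed frame is a no-op, which is what keeps the asynchrony unobservable.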
Re: [whatwg] Synchronizing Canvas updates in a worker to DOM changes in the UI thread
(Whoops. Why did Gmail send that as my work email? It shouldn't have made it through to the list, since it's not subscribed...) On Sun, Oct 20, 2013 at 9:26 PM, Kyle Huey m...@kylehuey.com wrote: On Sun, Oct 20, 2013 at 11:33 PM, Glenn Maynard gl...@bluegoji.com wrote: It must not be possible for the UI thread to detect whether present() did anything--if there's no frame in the ready buffer, nothing changes and the UI thread can't detect this. Similarly, it must not be possible for the rendering thread to detect if the ready frame has been presented. These rules are to prevent exposing asynchronous behavior. Well you can read back from canvases, so how is that going to work? However it works today, since CanvasProxy needs the same thing. If a CanvasProxy/WorkerCanvas exists for a canvas, you should have to use a toBlob method on that, and calls to that (and earlier calls in progress) on the Canvas itself should fail. (If CanvasProxy isn't doing that it seems like a bug.)
Re: [whatwg] Counterproposal for canvas in workers
On Thu, Oct 17, 2013 at 10:25 PM, Robert O'Callahan rob...@ocallahan.org wrote: On Fri, Oct 18, 2013 at 3:10 PM, Glenn Maynard gl...@zewt.org wrote: transferToImageBuffer looks like it would create a new ImageBuffer for each frame, so you'd need to add a close() method to make sure they don't accumulate due to GC lag, That's a good point. We will need something like that. It would only neuter that thread's (main thread or worker thread) version of the ImageBitmap. But don't forget that this is a cost to authors, who now have to .close() the object. If they forget, or don't know they need to do that, or miss some code paths, then there are no blatant side-effects--things are just mysteriously slower, and probably with more of an effect in some implementations than others (which is never good). With attachToCanvas, this can't happen. and it seems like turning this into a fast buffer swap under the hood would be harder. I don't see why. To me it seems obviously more complicated, but I guess I'll leave that evaluation to implementors. Also, with the transferToImageBuffer approach, if you want to render from a worker into multiple canvases in the UI thread, you have to post those ImageBuffers over to the main thread each frame, which has the same (potential) synchronization issues as the transferDrawingBufferToCanvas proposal. What are those issues? You can do a single postMessage passing a complete set of ImageBitmaps. See http://lists.w3.org/Archives/Public/public-whatwg-archive/2013Oct/0193.html. I don't know the answer to this; my feeling is that posting to the UI thread and scripts in the UI thread may or may not have (performance/smoothness) issues, but doing it all in the worker avoids any potential for this issue. On Thu, Oct 17, 2013 at 10:48 PM, Rik Cabanier caban...@gmail.com wrote: This proposal implies an extra buffer for the 2d context. My proposal doesn't require that so it's more memory efficient + you can draw in parallel. 
You always need at least two buffers: a back-buffer for drawing and a front-buffer for display (compositing). Otherwise, as soon as you start drawing the next frame, the old frame is gone, so you won't be able to recomposite (on reflow, CSS filter changes, etc). Double-buffering at a minimum is pretty standard, even for native applications (with none of this Web complexity in the way). Won't you need another front-buffer for the worker to draw to? I don't see why. You just use double-buffering as always: the worker draws to the backbuffer, then the drawing buffer (back-buffer) and the buffer being displayed (front-buffer) are flipped and you start over. I don't think there's any difference in this between native OpenGL, today-WebGL, and WorkerCanvas-WebGL. (I realize I'm looking at this from a WebGL-biased perspective, which clears the buffer between presentations unless you tell it not to. This is specifically to allow this sort of fast buffer flipping. 2d canvas doesn't do that, so to allow copy-free display it'd need a flag like WebGL's preserveDrawingBuffer = false. This applies to any API trying to get buffer flipping out of 2d canvas, though--something has to be added or changed. We don't need to address this here.) -- Glenn Maynard
Re: [whatwg] Canvas in workers
On Fri, Oct 18, 2013 at 2:06 PM, Kenneth Russell k...@google.com wrote: Capturing Glenn Maynard's feedback from the other thread started by Rik Cabanier, Glenn made a good point that there needs to be a way to explicitly deallocate the ImageBitmap. Otherwise, the JavaScript objects will have to be garbage collected before the GPU resource (texture) it references can be freed, and that will not work -- GPU resources will quickly pile up. I'd like to hear thoughts on the context.attachToCanvas approach. I think it has important advantages over ImageBitmap: - ImageBitmap requires the user to call close(). If the user forgets, or doesn't know, or misses it in some code paths, the problems caused aren't obvious. Worse, they may only appear in some implementations and not others, depending on GC strategies. attachToCanvas doesn't need cleanup in the first place, which is a nicer solution--there's nothing for the user to get wrong. - If you're rendering in a worker and the eventual target is in the main thread, the worker needs to be careful to not start rendering again until the main thread has assigned the ImageBitmap to where it wants it, and called .close(). You'd need to send a message back to the worker going okay, you can continue now. Otherwise, you'd start rendering before a buffer has been freed up for reuse, and end up creating more backbuffers than you intended (which matters for large screens). This seems easy to get wrong, and attachToCanvas doesn't have this problem. - With ImageBitmap, you need to create a helper canvas, then each time you render to a new target, you need to resize the canvas to match where it'll eventually go, so the resulting ImageBitmap is the size of its destination. (This may also need to be carefully optimized, so the implementation doesn't actually resize the backing store every time its size changes.) With attachToCanvas, you just size both canvases normally once, and switch between them with a single function call. 
- attachToCanvas matches the way Canvas works today: you create a Canvas, put it in the document (if it's for display), and render to it. For two canvases, you'd just add a second Canvas, and toggle as needed. With ImageBitmap, you have to restructure everything as soon as you want a second canvas, since you'd want to have a single offscreen Canvas for rendering, and to have img elements in the document instead of canvases.

Here's the example from the other thread to consolidate the discussion. If you're in a worker, canvas and canvas2 can both be WorkerCanvases posted from the main thread or created directly:

var canvas = document.querySelector(".canvas1");
var gl = canvas.getContext("webgl");
loadExpensiveResources(gl);
drawStuff(gl);

var canvas2 = document.querySelector(".canvas2");
gl.attachToCanvas(canvas2);
drawStuff(gl); // don't need to loadExpensiveResources again

-- Glenn Maynard
Re: [whatwg] Counterproposal for canvas in workers
On Wed, Oct 16, 2013 at 9:34 PM, Rik Cabanier caban...@gmail.com wrote: When drawing to canvas, Chrome stores the drawing commands in a buffer and executes them when the main function returns (or access to pixel data is requested). It occurred to me that this could be re-purposed for canvas workers. A worker could create a list of drawing commands and if the worker is done, this list is executed either on the main thread or the worker or a compositor thread depending on what your architecture supports. The worker would not be allowed to read pixels or resize the canvas but all other operations would be allowed. This sounds like it serializes setting up the queue, and actually drawing the queue. OpenGL doesn't do that: it starts sending drawing commands to the GPU as soon as you make them, so the CPU can be setting up rendering of the same scene while the GPU is rendering earlier commands. It only needs to buffer if you send commands faster than the GPU can process them (the specific details of this are internal driver magic, but that's the gist). Waiting until all rendering commands have been called before starting to render would be catastrophic for performance, since it would prevent parallelism between the CPU and GPU. On Thu, Oct 17, 2013 at 3:35 PM, Rik Cabanier caban...@gmail.com wrote: I'm unsure how this would work for WebGL since I'm not all that familiar with its architecture. However, it seems that the end result of a webgl application, is a series of commands that are sent to the graphics chip. In theory, this should be compatible. All of that happens inside the OpenGL driver, which browsers have no control over. -- Glenn Maynard
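The record-commands-then-replay idea quoted above can be shown in miniature (all names here are illustrative, not a real API). Nothing reaches the target context until replay() runs -- which is exactly the serialization being objected to: recording and execution can't overlap the way streamed OpenGL commands can.

```javascript
// Record drawing calls as closures; execute them later against a real context.
function makeRecordingContext() {
  const commands = [];
  return {
    clear() { commands.push(ctx => ctx.clear()); },
    fillRect(x, y, w, h) { commands.push(ctx => ctx.fillRect(x, y, w, h)); },
    replay(realCtx) {
      for (const cmd of commands) cmd(realCtx);
      commands.length = 0;
    },
  };
}

// A fake target context that just logs what it executed.
const log = [];
const realCtx = {
  clear() { log.push("clear()"); },
  fillRect(x, y, w, h) { log.push(`fillRect(${x},${y},${w},${h})`); },
};

const rec = makeRecordingContext();
rec.clear();
rec.fillRect(0, 0, 10, 10);
console.log(log.length);  // 0: nothing has executed yet
rec.replay(realCtx);
console.log(log);         // [ 'clear()', 'fillRect(0,0,10,10)' ]
```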
Re: [whatwg] Counterproposal for canvas in workers
On Thu, Oct 17, 2013 at 4:50 PM, Rik Cabanier caban...@gmail.com wrote: It seemed like that proposal was harder. Synchronization with the main drawing thread and the continuous committing seemed difficult too. Have implementors said that synchronizing the flip is (unreasonably) hard to implement? (I'm not an implementor, but this proposal feels unimplementable to me, or at least catastrophically difficult for WebGL. Compositors are often already threaded, so synchronizing a buffer flip with the compositor doesn't seem too far out there.) In addition, Ken wanted multiple workers to access the same canvas, which I didn't see addressed (unless I missed it). I don't remember multiple workers accessing the same canvas and I'm not quite sure what it means. I do remember a single (WebGL) context rendering to multiple canvases. Is that what you're thinking of? On Thu, Oct 17, 2013 at 4:51 PM, Rik Cabanier caban...@gmail.com wrote: Thanks Glenn! With that info, will there ever be a way to use WebGL in different workers but going to the same webgl context? Sorry, which use case is this for? I'm not sure why you'd want to do that, and it sounds like it would expose thread-safety issues to the platform. (I'm not sure if you mean the same thing here and above--they sound similar, but you said canvas in one place and WebGL context in the other.) (Sorry if I'm forgetting things, the subject has been busy and a little bit noisy...) -- Glenn Maynard
Re: [whatwg] Counterproposal for canvas in workers
On Thu, Oct 17, 2013 at 5:14 PM, Rik Cabanier caban...@gmail.com wrote: Compositors are often already threaded, so synchronizing a buffer flip with the compositor doesn't seem too far out there.) This proposal implies an extra buffer for the 2d context. My proposal doesn't require that so it's more memory efficient + you can draw in parallel. You always need at least two buffers: a back-buffer for drawing and a front-buffer for display (compositing). Otherwise, as soon as you start drawing the next frame, the old frame is gone, so you won't be able to recomposite (on reflow, CSS filter changes, etc). Double-buffering at a minimum is pretty standard, even for native applications (with none of this Web complexity in the way). I think WorkerCanvas (as well as CanvasProxy that's in the spec today--this isn't new to WorkerCanvas) allows full parallelism in drawing, both between the script and the GPU and between the worker and the main UI thread. I don't remember multiple workers accessing the same canvas and I'm not quite sure what it means. I do remember a single (WebGL) context rendering to multiple canvases. Is that what you're thinking of? I went back over the history and that was indeed his use case. That's a good use case, I've wanted to do that myself. We haven't tried very hard to fit it into the WorkerCanvas approach yet, and it may also be that the best way to do that is orthogonal to the whole canvas in workers use case. The obvious approach is to add a new method on the context, attachToCanvas(Canvas or WorkerCanvas), which would just take the context and cause its output to be directed to a new Canvas (or WorkerCanvas), probably clearing the contents of the new canvas as a side-effect. (This could be added to both CanvasRenderingContext2D and WebGLRenderingContext, though I suspect this is only really useful for WebGL. There's no expensive resource loading with 2d canvas.) 
var canvas = document.querySelector(".canvas1");
var gl = canvas.getContext("webgl");
loadExpensiveResources(gl);
drawStuff(gl);

var canvas2 = document.querySelector(".canvas2");
gl.attachToCanvas(canvas2);
drawStuff(gl); // don't need to loadExpensiveResources again

I think that's by far the most straightforward approach for users. Maybe there are implementation issues that make this hard, but if so I think they would apply to every approach to this use case (they're really all different interfaces to the same functionality)... -- Glenn Maynard
Re: [whatwg] Counterproposal for canvas in workers
On Thu, Oct 17, 2013 at 8:22 PM, Robert O'Callahan rob...@ocallahan.org wrote: That's not really a use-case. What would you actually be trying to do? IIUC Ken agreed that his use-cases that appeared to require a single context rendering to multiple canvases would be addressed just as easily (or better) by using multiple image elements, a single canvas, and doing image.srcObject = canvas.transferToImageBuffer(). I wasn't arguing a use case, I was agreeing with a feature. I think the use cases for rendering to multiple DOM elements (canvases or otherwise) using WebGL are already well-established (less so for 2d canvas). transferToImageBuffer looks like it would create a new ImageBuffer for each frame, so you'd need to add a close() method to make sure they don't accumulate due to GC lag, and it seems like turning this into a fast buffer swap under the hood would be harder. If you just point the context at the final canvas in the first place, it can render directly into that canvas's backbuffer, so the buffer flipping mechanics are identical to when it isn't being used at all. Also, with the transferToImageBuffer approach, if you want to render from a worker into multiple canvases in the UI thread, you have to post those ImageBuffers over to the main thread each frame, which has the same (potential) synchronization issues as the transferDrawingBufferToCanvas proposal. With attachToCanvas, it's just like WorkerCanvas: the buffer flipping can happen entirely within the worker. -- Glenn Maynard
Re: [whatwg] Canvas in workers
On Wed, Oct 16, 2013 at 8:01 AM, Justin Novosad ju...@google.com wrote: ... oh... so the UI could be updated even if JS is blocking... the future is bright :-) If the UI is all painted in a canvas, then yes. Let's not get ahead of ourselves though. Browsers that have a compositor in a separate thread can present frames without synchronizing with the main thread, but updating a regular (DOM-based) UI would likely require style and layout calculations to be propagated from the main thread to the compositor. Right, that's the only sort of async update we're talking about here: changing what gets composited, and not anything that affects layout or is detectable by scripts. But actually, there's no disagreement. I misread Kenneth's mail as saying your proposal requires synchronization with the main thread, but he actually said this other proposal requires synchronization with the main thread (but has other benefits). I'm not sure how big a problem that synchronization will be. Posting messages should be extremely cheap: essentially free, when you're just transferring an object to a thread in the same process. The UI thread should also not be heavily loaded: you shouldn't have to wait long for the script to receive the object and push it into the canvas. But, even a cost of 2ms would be a massive hit, since that's 12% of the time you have available when rendering at 60 FPS. In practice it might make it harder to sustain smooth 60 FPS animation. For example, if the main thread occasionally spends 4ms doing GC work, you may have a rendering hitch that you wouldn't have otherwise. -- Glenn Maynard
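For the record, the 12% figure above is just the 60 FPS frame budget arithmetic:

```javascript
// At 60 FPS each frame has a budget of 1000/60 ms, so a fixed 2 ms
// round-trip cost eats about an eighth of it.
const frameBudgetMs = 1000 / 60;   // ~16.7 ms per frame
const costMs = 2;
const share = Math.round((costMs / frameBudgetMs) * 100);
console.log(frameBudgetMs.toFixed(1)); // 16.7
console.log(share);                    // 12
```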
Re: [whatwg] Canvas in workers
On Mon, Oct 14, 2013 at 1:20 PM, Kenneth Russell k...@google.com wrote: 1) Rendering from a worker and displaying on the main thread with no extra blits of the rendering results 2) Allows one context to render to multiple canvases 3) Supports resizing of the drawing buffer The WorkerCanvas proposal should allow #1 and #3. (It doesn't support #3 for purely offscreen worker canvases, but that'd be easy to add.) #2 would be nice with WebGL, where setting up extra contexts can be expensive, and it may be simpler to do at the Canvas level than by mimicking OpenGL (eg. shared resources across contexts). There's been some recent discussion in the WebGL WG on this topic and concerns were raised from other parties at Mozilla about the DrawingBuffer proposal above, including that it isn't possible to render from a worker without synchronizing with the main thread. Your proposal does seem to require synchronization with the main thread, at least with double-buffering. You postMessage the DrawingBuffer to the main thread to ask it to be displayed. The worker can't start drawing the next frame until it knows that the drawing buffers have been flipped; the buffer flip happens in the main thread, when transferDrawingBufferToCanvas is called. WorkerCanvas performs the flip itself in the worker, when .commit() is called (and possibly also when the script returns). Even if the main thread is busy, the worker should be able to do this immediately. It does need to be a thread-safe operation, but it doesn't need to block if the UI thread is busy. This proposal doesn't handle synchronizing DOM updates with threaded canvas updates, but it seems like that inherently requires synchronization... -- Glenn Maynard
Re: [whatwg] Proposal: Adding methods like getElementById and getElementsByTagName to DocumentFragments
On Thu, Oct 10, 2013 at 1:41 PM, Ian Hickson i...@hixie.ch wrote: Leaving aside the issue that CSS-escape is more than one operation depending on what kind of token you're creating, My understanding is that you can do both of them, at least for selector-related escaping, so the author doesn't have to know about the difference. That's based on Simon's earlier mail: On Thu, Oct 10, 2013 at 6:06 AM, Simon Pieters sim...@opera.com wrote: The common case is escaping as ident. An API to escape as ident could be used for escaping strings, too. In order to not make people think more than just remembering to escape at all, it might be a good idea to just have one API to serve both cases, e.g. CSS.escape(foo). I don't think it's actually as trivial as you think. document.getElementById(id) ...becomes: document.querySelector('#' + escapeCSSIdent(id)) ...which is a lot less pretty and understandable, especially when you consider that many authors are actually coming from: document.all[id] ...which is briefer than either, and still self-explanatory. I feel this is a case where we're not putting authors first, but are instead putting spec purity first. (Nothing about this discussion relates to spec purity, whatever that means. My argument is that this function is useless legacy, and that proliferating it to DocumentFragment seems to be for consistency's sake only.) I think the example you gave is trivial and perfectly fine, particularly since you need to do the same thing anyway as soon as you're doing anything other than ID lookups or the other couple special cases. I find that happens very quickly, so my code is a lot more readable when I just use querySelector everywhere. But adding another getElementById is probably low cost, so it doesn't bother me that much. -- Glenn Maynard
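A deliberately simplified sketch of the ident-escaping under discussion (escapeCSSIdent is a hypothetical name, and the real algorithm has more cases -- leading digits, control characters, NULs, etc.): it backslash-escapes anything CSS-significant so an arbitrary id can be dropped into a selector.

```javascript
// Simplified: escape everything outside [A-Za-z0-9_-]. The real
// escaping algorithm being proposed handles more edge cases.
function escapeCSSIdent(ident) {
  return String(ident).replace(/[^A-Za-z0-9_-]/g, ch => "\\" + ch);
}

console.log("#" + escapeCSSIdent("foo.bar")); // #foo\.bar
// Usage as in the mail (with the hypothetical helper):
//   document.querySelector("#" + escapeCSSIdent(id))
```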
Re: [whatwg] Canvas in workers
On Sat, Oct 12, 2013 at 11:12 PM, Kyle Huey m...@kylehuey.com wrote: 1. Rename CanvasProxy to WorkerCanvas and only allow it to be transferred to workers. I don't think we're interested in supporting cross-origin canvas via CanvasProxy (I would be curious to hear more about what the use cases are). You can transfer data to a worker that's cross-origin, if you have a MessagePort where the other side goes to another origin's worker (possibly given to you via eg. window.postMessage). Is the real goal here trying to limit this to threads and avoid IPC, or is this actually a cross-origin concern? 2. Add a worker-only WorkerCanvas constructor that takes the desired width/height of the drawing surface. This looks like it's trying to allow entirely off-screen rendering within a Worker, which is fine, but there's no way to resize the backing store in this mode. I don't know if that would need a separate subclass of WorkerCanvas to allow making width/height writable. - getContext (to replace what we removed in step 3). roc prefers to have getContext2D and getContextWebGL, and dispense with the string argument version entirely, but I don't have strong feelings. CanvasRenderingContext2D? getContext2D(any... args); WebGLRenderingContext? getContextWebGL(any... args); This is crazy. The platform is inconsistent enough. We have an API for this already, getContext(); don't add a different API for the exact same thing. 5. Add a commit method to WorkerCanvas. For a WorkerCanvas obtained I'm not sure what this is for. If you draw in a worker and return without calling .commit(), is the commit implicit when you return to the event loop? (See below for where this matters.) simply draw in a loop without yielding. To solve this, if commit is called and the current dimensions on the main thread don't match the dimensions of the WorkerCanvas it would fail (return false) and update the dimensions of the WorkerCanvas before returning. 
This is technically a violation of run-to-completion semantics, but is needed to support workers that do not yield. This sounds like it's easy to get wrong, since it'll probably be rare. An exception might be better, so if you don't handle it you at least get an error logged to the console. There will be flicker issues with this. The canvas is cleared when you change width or height. In the UI thread that's OK, since the author can synchronously redraw immediately after changing the size. Here, it's likely it won't be redrawn in time, so it'll flicker whenever the size changes, especially if it's being changed smoothly. Here's a suggestion to fix this: - When the UI thread changes Canvas.width or Canvas.height, it doesn't actually resize buffers. Instead, it sends a message to the WorkerCanvas asking for the change. Until the change actually happens, the Canvas continues to be composited as before. (However, the change to .width and .height is visible on the object immediately.) - When the WorkerCanvas's event loop receives a message asking for a size change: - Change the size of the back-buffer, and update WorkerCanvas.width and WorkerCanvas.height accordingly. - Fire onresize on the WorkerCanvas. The worker is expected to redraw here. (This is where the implicit commit matters: we want to guarantee a commit here.) - Only when the newly-redrawn buffer is committed does the front buffer's size get updated to match the back-buffer. In other words, when you change the size in the UI thread, it continues to composite the same image (possibly not filling the whole element, or being stretched) until the worker actually gets the resize and has a chance to redraw it. This also means the idea of not being able to commit because of a resize can go away, and commit() can be void, since the back-buffer size never actually changes while the worker is drawing. 
On Sun, Oct 13, 2013 at 11:01 AM, David Bruant bruan...@gmail.com wrote: bool commit(); Boolean as return value for success? :-s A promise instead maybe? throw instead of false at least? In any case, it looks like commit could be a long operation (tell me if I'm wrong here. Do you have numbers on how long it takes/would take?), having it async sounds reasonable. This should be synchronous and never block. Even if the Canvas is in a different process, it should be possible to do this with IPC without waiting for the other side to process the change. -- Glenn Maynard
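A toy, single-threaded simulation of the resize handshake sketched in the previous mail (all names are illustrative; the postMessage hop is collapsed into a direct call). The point it demonstrates: the front buffer keeps its old size -- and its old contents -- until the worker has redrawn at the new size and committed.

```javascript
function makeResizableCanvas(width, height) {
  const front = { width, height, frame: null };  // composited buffer
  const back = { width, height, frame: null };   // worker's backbuffer
  const workerCanvas = {
    onresize: null,
    draw(frame) { back.frame = frame; },
    commit() {
      front.frame = back.frame;
      front.width = back.width;    // the size change lands only here
      front.height = back.height;
    },
  };
  const canvas = {
    // UI thread requests a resize; the "message" is delivered to the
    // worker immediately in this single-threaded toy.
    setSize(w, h) {
      back.width = w;
      back.height = h;
      if (workerCanvas.onresize) workerCanvas.onresize();
    },
    frontSize() { return [front.width, front.height]; },
    displayed() { return front.frame; },
  };
  return { canvas, workerCanvas };
}

const { canvas, workerCanvas } = makeResizableCanvas(100, 100);
workerCanvas.onresize = () => { workerCanvas.draw("frame at new size"); };
canvas.setSize(200, 150);
console.log(canvas.frontSize()); // [ 100, 100 ] -- still the old size
workerCanvas.commit();
console.log(canvas.frontSize()); // [ 200, 150 ]
```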
Re: [whatwg] Canvas in workers
On Sun, Oct 13, 2013 at 11:22 AM, David Bruant bruan...@gmail.com wrote: bool commit(); Boolean as return value for success? :-s A promise instead maybe? throw instead of false at least? In any case, it looks like commit could be a long operation (tell me if I'm wrong here. Do you have numbers on how long it takes/would take?), having it async sounds reasonable. This should be synchronous and never block. I might have misused the word async again, sorry about that. I think we agree. Sorry, I also stated this imprecisely (block is somewhat overloaded). Specifically, this function never has to sit around and wait for the corresponding Canvas thread to become available; it can perform the flip while the other thread is working. And if it really did need to block--we're in a worker anyway, so that's OK. Even if the Canvas is in a different process, it should be possible to do this with IPC without waiting for the other side to process the change. How does a worker know when the changes on the screen happened? I imagine a worker would want to know that before performing other changes to the canvas. If the Canvas wants to change its size, then it should ask the owner of the WorkerCanvas to make the change, which means the canvas size will never change while the worker is drawing. The WorkerCanvas would never change size while the worker is rendering, and it would know that the canvas changed size when it receives an onresize event. On Sun, Oct 13, 2013 at 4:42 PM, Robert O'Callahan rob...@ocallahan.orgwrote: 1. Rename CanvasProxy to WorkerCanvas and only allow it to be transferred to workers. I don't think we're interested in supporting cross-origin canvas via CanvasProxy (I would be curious to hear more about what the use cases are). Basically it's simpler to have CanvasProxy/WorkerCanvas only supported on workers. Cross-origin isn't itself a concern. I think restricting this to workers is fine. 
A simple way is for calls to any method to throw an exception if you're not in a worker. This keeps structured clone and transfer simple: the rules for passing it around are the same as anything else, you just can't use it if you're not in a Worker. If https://www.w3.org/Bugs/Public/show_bug.cgi?id=23358 is implemented, that'd give an easy way to define this. 2. Add a worker-only WorkerCanvas constructor that takes the desired width/height of the drawing surface. This looks like it's trying to allow entirely off-screen rendering within a Worker, which is fine, but there's no way to resize the backing store in this mode. We don't have a use-case for resizing the backing store of a worker-created canvas. I suspect this will come up with WebGL, since recreating the context from scratch can be a lot more expensive there. We should at least make sure this is possible later. Actually, there is a way: change the width and height on the CanvasRenderingContext2D you get from getContext, which isn't readonly. I'm guessing that would actually want to be read-only in this proposal, since it allows making changes that are visible to scripts in the UI thread. There is the slight problem that changing both width and height would fire two events. Changes to width and height should probably only be applied when the script returns to its event loop. A bigger problem is that your approach isn't compatible with a worker that draws frames in a loop without yielding. I'm uncertain how important that is, so I'll wait for Kyle to address that. OK. What are the use cases for doing that? Being able to do complex work in a worker in a linear, non-event-based manner is an important use case for workers in general, but I can't think of any way this applies to drawing successive frames to a canvas. -- Glenn Maynard
Re: [whatwg] Proposal: Adding methods like getElementById and getElementsByTagName to DocumentFragments
On Thu, Oct 10, 2013 at 6:06 AM, Simon Pieters sim...@opera.com wrote: $('li[id = ' + textId + ']', $slideshow3485780.context) $('[n_id='+allN_id+'] .notificationContainer a span') $('.recommend .bd.b_con ul[city='+city1+']') (The above is just a small subset of some interesting cases.) I didn't see a single case that actually used an escaping utility. When I'm doing this I just make sure that the strings don't need escaping in the first place. Many of these look like they do that (probably most ID cases are things like random numbers or alphanumerics). FWIW, I rarely use IDs at all: I use classes, even if there will probably only be one of something. (Once templates enter the picture, IDs don't make sense, so I generally just avoid them.) On Thu, Oct 10, 2013 at 8:41 AM, Glenn Adams gl...@skynav.com wrote: Given the existence of Window.escape(), i.e., the JS escape(string) function property of the Global object, I wonder if choosing a longer, different name would be better to avoid confusion. I think the CSS scope makes it perfectly clear and unambiguous. -- Glenn Maynard
Re: [whatwg] Proposal: Adding methods like getElementById and getElementsByTagName to DocumentFragments
On Thu, Oct 10, 2013 at 9:22 AM, Boris Zbarsky bzbar...@mit.edu wrote: On 10/10/13 10:15 AM, Glenn Maynard wrote: When I'm doing this I just make sure that the strings don't need escaping in the first place. Many of these look like they do that (probably most ID cases are things like random numbers or alphanumerics). Let's take a look at Simon's examples from actual web pages: .querySelectorAll("#"+M+" "+m) .querySelectorAll('.'+classes[i]) If M is a random number, it needs escaping. Similar if classes[i] is a random number. In particular, ID and class selectors cannot start with a digit. That's why I said "many". There are obviously several cases that do need escaping. FWIW, I rarely use IDs at all: I use classes, even if there will probably only be one of something. Classes have the same syntax as IDs in CSS (both are identifiers), so it's the same issue. My point was that I never use getElementById (and getElementsByClassName returns an array, so it's wrong too). -- Glenn Maynard
Re: [whatwg] Proposal: Adding methods like getElementById and getElementsByTagName to DocumentFragments
On Wed, Oct 9, 2013 at 7:02 PM, Boris Zbarsky bzbar...@mit.edu wrote: On 6/28/13 10:01 PM, Boris Zbarsky wrote: On 6/28/13 5:06 PM, Tab Atkins Jr. wrote: getElementById("foo") is just querySelector("#foo") This is actually false. For example, getElementById("foo:bar") is just querySelector("#foo\\:bar"), which is ... nonobvious. And today someone asked me how to do the equivalent of getElementById("\n") with querySelector. That one is even more non-obvious. But it's already been suggested--by you--that we need a function to CSS-escape a string, which seems to solve that problem trivially (for users). I often do things like saving an element's elem.dataset.someId, and then finding the element again later by saying container.querySelector('[data-some-id=' + saved_id + ']'). (That lets me find the element later, even if it's been replaced by a new element, which doesn't work if I just save a reference.) That would help there, too, since I wouldn't need to make sure that my IDs don't need to be escaped. -- Glenn Maynard
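To make the escaping discussion concrete, here is a rough sketch of such a utility. This is a hypothetical helper, not the algorithm any spec defines; it leans on the fact that hex escapes are valid anywhere in a CSS identifier.

```javascript
// Hypothetical sketch of a CSS identifier-escaping helper, along the lines
// being discussed (not the exact algorithm of any spec'd CSS.escape).
// Hex escapes like "\3a " are valid anywhere in a CSS identifier, so escaping
// every non-identifier character, plus any leading digit, is sufficient.
function cssEscapeSketch(value) {
  return String(value)
    // escape anything outside [a-zA-Z0-9_-] and non-ASCII
    .replace(/[^a-zA-Z0-9_\u00A0-\uFFFF-]/g, function (ch) {
      return "\\" + ch.charCodeAt(0).toString(16) + " ";
    })
    // identifiers can't start with a digit, so escape a leading one too
    .replace(/^[0-9]/, function (d) {
      return "\\3" + d + " ";
    });
}
```

With something like this, container.querySelector('[data-some-id=' + cssEscapeSketch(saved_id) + ']') works even when the saved id contains colons, spaces, or starts with a digit.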
Re: [whatwg] Forms: input type=file and directory tree picking
On Wed, Oct 2, 2013 at 1:02 PM, Jonas Sicking jo...@sicking.cc wrote: That's not the only alternative. For example, a third alternative is that the user's selection (e.g. a directory) is returned quickly, not pre-expanded, and then any uploading happens in the background with the author script doing the walk and uploading the files. It's unclear to me what you are proposing here. Can you elaborate? The same thing I did, I think: an API to navigate the directory tree as needed, never greedily recursing the directory tree. -- Glenn Maynard
Re: [whatwg] Forms: input type=file and directory tree picking
On Wed, Oct 2, 2013 at 3:35 PM, Jonas Sicking jo...@sicking.cc wrote: Though of course you or anyone else is free to propose changes to the spec to improve that situation. That's what we're doing: suggesting that we expose an API to navigate the tree. (I'm not proposing actual APIs for this here, since lots of people have done that already in the filesystem API threads, but I hope we can agree that it would be a much better solution.) -- Glenn Maynard
Re: [whatwg] related subject -- access to local files RE: Forms: input type=file and directory tree picking
(Dropped CC's, since this isn't really related to the thread you're replying to.) On Wed, Oct 2, 2013 at 5:38 PM, David Dailey ddai...@zoominternet.net wrote: A few years ago, probably on www-html5, I remember posing a question about enabling the once-unbroken ability to allow JavaScript, with user consent, to insert an image file (as the src of an img) into a web page, viewed in the browser. You can access the File object of files selected with input type=file (and similarly with drag-and-drop), create a URL representing it with URL.createObjectURL, and use that with img src. -- Glenn Maynard
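A minimal sketch of that flow (names here are illustrative): take the File from the input or drop event, mint a blob: URL for it, and point an img at it.

```javascript
// Sketch of the File -> blob: URL -> <img src> flow described above.
// Works the same for <input type=file> selections and drag-and-drop Files.
function showSelectedImage(file, img) {
  const url = URL.createObjectURL(file);        // short-lived blob: URL for the File
  img.onload = () => URL.revokeObjectURL(url);  // release the mapping once shown
  img.src = url;
  return url;
}
```

In a page this would typically be wired up as input.onchange = () => showSelectedImage(input.files[0], someImg).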
Re: [whatwg] Forms: input type=file and directory tree picking
On Wed, Oct 2, 2013 at 7:48 PM, Jonas Sicking jo...@sicking.cc wrote: Proposals for filesystem is just part of what we need. We also need a way to expose it through HTMLInputElement. And a way to allow not exposing the files through .files. Assuming for now that we need separate modes for files and directories, I'd suggest input type=directory, which causes a directory picker to be shown, doesn't populate .files at all, and adds an API entry point to retrieve a filesystem. If somebody suggests an implementable way to expose UI that doesn't need to separate files and directories then we may want something else, but that doesn't seem likely to me. (Implementations could still allow selecting individual files, or groups of files, as long as it's exposed transparently as if they're files in a directory. So, something like type=filesystem might be a better name.) Actually, a filesystem might not even be needed. We could just expose an asynchronous iterator. A use case is an image viewer for photographers, allowing the user to open a directory possibly containing tens of thousands of files. The image viewer should be able to allow the user to navigate through directories. An iterator couldn't do that--it could only grab the files in the order they happen to come in. Also, if the page already knows that the last image the user viewed is 2013/sunset.jpg, and the user opens the same directory, the page should be able to grab the same file by its filename immediately if it still exists, without having to iterate. Again, and with extra oomph, proposals welcome. I'd need to review the threads first, but I'll try to get to that and see if I have any new suggestions. The last thing I remember is movement away from the Filesystem API, and a lighter API being proposed, but I don't recall where that left off. -- Glenn Maynard
Re: [whatwg] Forms: input type=file and directory tree picking
On Tue, Oct 1, 2013 at 3:44 PM, Ian Hickson i...@hixie.ch wrote: * Websites wants to do their own pick UI * OSs can't display pickers which allow picking either a file or a directory. I don't think I've ever seen a native application on any platform offer two buttons, one to pick one or more files, and one to pick one (or more?) directories. I think this should be a large red flag. Now if I'm wrong and this kind of UI is in fact a thing, then fair enough, but if it's not, maybe we should go and study how this problem is solved in native apps. I can't find any applications off-hand which allow both in the first place, but Windows doesn't have a dialog that can do both. Typically you end up with two separate buttons/menu items, but presented as separate features, eg. "Open File" and "Import Directory". (Drag and drop doesn't need to make this distinction. I'm not sure what should happen if you have a files input, and drag a directory into it that you couldn't have selected with the file picker.) -- Glenn Maynard
Re: [whatwg] Blurry lines in 2D Canvas
On Fri, Sep 27, 2013 at 4:38 PM, Jasper St. Pierre jstpie...@mecheye.net wrote: The issue here is that the canvas API does not specify how pixels are sited on the canvas: if you imagine pixels as enlarged squares on a grid (shush, I know), does an X coordinate of 5 name the center of the square, or the intersection between 4th and 5th squares? That's not the issue this thread is about. I don't know if it's specified (though I suspect it is), but WebKit, Firefox and IE10's Canvas implementations all use OpenGL's coordinate system. This can be seen in the very first post in the thread, which renders consistently in all three browsers. http://jsfiddle.net/V92Gn/128/ -- Glenn Maynard
Re: [whatwg] Proposal: q and qq for document.querySelector and document.querySelectorAll
On Wed, Sep 18, 2013 at 7:18 AM, Niels Keurentjes niels.keurent...@omines.com wrote: The spec should only concern itself with exposing functionality. Practical considerations such as length of code are the responsibility of the developer - if you like to have q and qq aliases you can add them yourself at runtime, that's the whole point of a prototyped language. Common libraries like jQuery, prototype and Mootools expose the behaviour as $ and $$ for exactly the reason given, no reason to impose that on every developer if they choose not to use a library. This is nonsense. Usability and practicality are absolutely concerns of the spec. If libraries like jQuery need to be used for it to be convenient to develop for the platform, and everyone has different and incompatible convenience wrappers for everything, then that's a failure of the platform. (I don't think this proposal is a good idea, though.) -- Glenn Maynard
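For concreteness, the roll-your-own aliases being referred to are a one-liner each; here sketched as a helper over any Document-like object (the function name is illustrative).

```javascript
// The "add them yourself at runtime" aliases under discussion, wrapped as a
// helper over any object exposing querySelector/querySelectorAll.
function addQueryAliases(doc) {
  return {
    q:  (sel) => doc.querySelector(sel),
    qq: (sel) => doc.querySelectorAll(sel),
  };
}
```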
Re: [whatwg] Should video controls generate click events?
On Tue, Sep 10, 2013 at 6:35 PM, Ian Hickson i...@hixie.ch wrote: On Tue, 20 Aug 2013, Glenn Maynard wrote: It's the behavior users expect when watching videos, which is the case video should optimize for. If you're doing something else where the user interacts with the video in other ways, then it's expected that you need to prevent this behavior explicitly. Unlike browser controls, this is visible to scripts and something that affects authors, so this probably should be in the spec if it isn't. I'm not sure what you want in the spec here. Can you elaborate? The same thing you described: the activation behavior for videos should be to toggle play/pause. If only some browsers do it, it's an interop problem, and it seems like the right default behavior. I'm not sure whether this should only be when browser controls are enabled or not. It might be best to keep them orthogonal, so browser controls are always UI controls that don't generate click events at all. On Wed, 21 Aug 2013, Silvia Pfeiffer wrote: This is why I am saying: Philip's example is not a typical use case. It only happens when the developer made the choice to roll their own, but the user activates the default controls (e.g. through the context menu) as well. This can't happen on YouTube, because YouTube hides away the context menu on the video element. You can't do that. Browsers have options to remove the ability for pages to prevent the context menu from opening. I always use it, since it's disruptive (the browser's context menu belongs to me, not the page). -- Glenn Maynard
Re: [whatwg] High-density canvases
On Mon, Sep 9, 2013 at 7:31 PM, Ian Hickson i...@hixie.ch wrote: Right, resetting the context would definitely be part of the deal. This mode would be specifically defined as a mode where you had to listen to onresize or your canvas would almost certainly get cleared sooner or later. In fact, we could go further, and say that canvases that aren't getting rendered at all (e.g. display:none, off-screen, background tab) can get cleared, with the deal being that next time you need to show the canvas you immediately get an onresize. It would be better if the resize didn't happen until the page is actually ready to re-render. That way, the canvas doesn't flicker if it's rendered before the page actually does render (eg. it may need to reload resources to render). Rendering a blurry canvas briefly is better than rendering a blank canvas. For example, add a method resizeToCurrentDPI(), and don't do it automatically at all. Fire an event when calling the method *would* cause a change of canvas size. The page can then load resources asynchronously as needed, and call the method when it's ready to redraw, avoiding any period where a blank canvas might be composited. Yeah, my suggestion, if we do this, would be to not do it until high density displays are even more widely available than now. This is mostly a convenience and performance-improving API, not a critical feature add. High-DPI displays are already widespread in mobile (all Apple devices except for the iPad Mini; the Kindle Fire HD), and by contrast there's no sign of them for desktops, so I think we're either there now or we won't be for a long time. -- Glenn Maynard
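The flow being proposed might look like the sketch below. Both resizeToCurrentDPI() and the surrounding event are hypothetical proposals from this thread, not shipped APIs; the point is purely the ordering: load assets first, resize and redraw only once everything the new density needs is ready.

```javascript
// Sketch of the proposed flow. canvas.resizeToCurrentDPI() is the
// hypothetical method from the message above; loadAssets takes a
// completion callback (e.g. it fetches higher-resolution images).
function adoptNewDensity(canvas, loadAssets, redraw) {
  loadAssets(() => {
    canvas.resizeToCurrentDPI(); // hypothetical: backing store changes here
    redraw();                    // so the canvas is never composited blank
  });
}
```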
Re: [whatwg] Script preloading
I don't like the name jit, because it already has a different meaning when talking about scripting. If this was for CSS or WebVTT or something else other than scripts, it wouldn't be as bad... On Fri, Aug 30, 2013 at 7:22 PM, Ryosuke Niwa rn...@apple.com wrote: I don't quite understand why we need two values for whenneeded. Why can't we simply have prefetch (since we already use that term in the link element) and needs (I'd prefer calling it requires) content attributes? When a script element has the prefetch attribute, it doesn't execute until execute() is called upon the element unless (i.e. the script is executed immediately when the script has been loaded) if at least one of its dependencies is not a prefetch (i.e. doesn't have the prefetch content attribute). I'm not sure what you mean (skipping the parenthetical this says unless if, so I'm not sure how to parse that), but prefetch sounds like something different than jit. prefetch sounds like a hint about networking behavior, eg. download this script, even if it isn't needed yet. On the other hand, jit changes when the script is executed, not when it's downloaded: it means don't execute the script's contents until the scripts that depend on this one are also ready to be downloaded. Could you clarify which use case this alternative proposal doesn't address? The use case was download several scripts, then execute them all at once. I'm not sure about that use case, but a prefetch hint doesn't seem right for that. You'd end up downloading the scripts even if they're never used. With jit, the browser can still avoid downloading the scripts entirely if they're not used. -- Glenn Maynard
Re: [whatwg] Zip archives as first-class citizens
On Wed, Aug 28, 2013 at 12:25 PM, Eric Uhrhane er...@chromium.org wrote: Broken files don't work, and I'm OK with that. I'm saying that legal zips can have multiple directories, where the definitive one is last in the file, so it's not a good format for streaming. If you're saying that you want to change the format to make an earlier directory definitive, that's going to break compat for the existing archives everywhere, and would be confusing enough that we should just go with a different archive format that doesn't require changes. Or at least don't call it zip when you're done messing with the spec. I'm saying that if the directories are out of sync, the filenames are going to be broken in existing clients already. We should only try to guarantee that files always work if their internal data is consistent. If their records are out of sync, then we should only ensure that the files work the same in all browsers, even if there are some files that won't work nicely as a result. That said, we don't actually have use cases or a feature proposal for streaming from ZIPs, so it's hard to make any further analysis. The feature we're discussing here doesn't need streaming, only random access. It wouldn't read the whole ZIP, it would just read the end of the file to grab the central directory (which gives you the information you need to decide what to read from there). (The access patterns of having to read the central directory first aren't ideal for optimizing away fetches, since Content-Range has no way of saying give me the last 64K of the file so you have to ask for the size first, but I'd rather that than introducing a new archive format into the wild...) -- Glenn Maynard
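The random-access pattern described above starts by parsing the End of Central Directory (EOCD) record out of the file's tail. A sketch, assuming the tail fetched is large enough to contain the 22-byte EOCD plus any trailing comment (up to 64 KiB):

```javascript
// Locate the ZIP End of Central Directory record in the tail of a file.
// A random-access client would fetch roughly the last 64 KiB + 22 bytes
// with a ranged request and parse it like this.
const EOCD_SIG = 0x06054b50; // bytes "PK\x05\x06", read little-endian

function findCentralDirectory(tail) { // tail: Uint8Array of the file's end
  const view = new DataView(tail.buffer, tail.byteOffset, tail.byteLength);
  // Scan backwards, since a trailing comment may follow the EOCD record.
  for (let i = tail.length - 22; i >= 0; i--) {
    if (view.getUint32(i, true) === EOCD_SIG) {
      return {
        entryCount: view.getUint16(i + 10, true), // total central-dir records
        cdSize: view.getUint32(i + 12, true),     // central directory size
        cdOffset: view.getUint32(i + 16, true),   // offset from start of file
      };
    }
  }
  throw new Error("EOCD not found; fetch a larger tail");
}
```

With cdOffset and cdSize in hand, one more ranged fetch retrieves the central directory, and each file's data can then be fetched by its recorded offset, without ever downloading the whole archive.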
Re: [whatwg] Forms: input type=file and directory tree picking
On Thu, Aug 29, 2013 at 5:06 PM, Jonas Sicking jo...@sicking.cc wrote: We don't have to do any enumeration synchronously. It can all happen off the main thread. The .click() API is asynchronous. It's asynchronous to the JS, sure, but at the end of the day the user can't get any work done until it's complete. It's synchronous as far as the user is concerned. Sure. The alternative is that the user attaches each file separately. Which, while it means smaller synchronous actions, is not really a better UX. In other words, synchronousness is not the only design constraint here. The alternative is to provide an interface that explores the supplied directory on-demand, as the page needs it, rather than greedily scanning the entire directory before giving it to script. Scanning a large directory tree in advance is almost never what applications or users want. A static file list isn't a sensible API for recursively exposing directory trees. -- Glenn Maynard
Re: [whatwg] Handling of invalid UTF-8
On Thu, Aug 29, 2013 at 5:29 PM, Cameron Zemek grom...@gmail.com wrote: In the spec preview it had a section about UTF-8 decoding and the handling of invalid byte sequences, http://dev.w3.org/html5/spec-preview/infrastructure.html#utf-8 . But I have noticed this section has been removed from the current version. So what algorithm is used for handling of invalid UTF-8 byte sequences? Or this no longer part of the HTML 5 specification? http://www.whatwg.org/specs/web-apps/current-work/#dependencies has a reference to the Encoding spec, which is where the UTF-8 decoding logic lives now: http://encoding.spec.whatwg.org/#utf-8 -- Glenn Maynard
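Concretely, the Encoding Standard's UTF-8 decoder replaces invalid sequences with U+FFFD by default, and can instead throw in fatal mode. Illustrated with that spec's TextDecoder API:

```javascript
// The Encoding Standard's UTF-8 error handling, via TextDecoder:
// invalid bytes become U+FFFD unless { fatal: true } is requested.
function decodeLenient(bytes) {
  return new TextDecoder("utf-8").decode(new Uint8Array(bytes));
}
function decodeStrict(bytes) {
  // fatal mode throws a TypeError on any invalid sequence
  return new TextDecoder("utf-8", { fatal: true }).decode(new Uint8Array(bytes));
}
```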
Re: [whatwg] Script preloading
On Tue, Aug 27, 2013 at 4:55 PM, Ian Hickson i...@hixie.ch wrote: IMHO, if you have to write a script to solve use cases like these, you haven't really solved the use cases. It seems that the opportunity we have here is to provide a feature or set of features that addresses these use cases directly, so that anyone can use them without much work. This is especially true for a module loader, which will be used to deal with interactions between scripts written by different parties. If the platform doesn't provide a standard, universal way to do this, then people will keep rolling their own incompatible solutions. That's bearable for self-contained code used by a module, but it doesn't make sense for the piece that handles the cross-vendor interactions. Anyway, the idea of only providing basic building blocks and making people roll their own solutions isn't the web's design philosophy at all, so I don't think it's a valid objection. script whenneeded=jit is a special mode where instead of running once the script's dependencies are met, it additionally waits until all the scripts that depend on _it_ are ready to run. (Just-in-time execution.) (The default is whenneeded=asap, as-soon-as-possible execution.) This mode seems to be specifically for this use case: [Use-case U:] I have a set of script A.js, B.js, and C.js. B relies on A, and C relies on B. So they need to execute strictly in that order. [Now], imagine they progressively render different parts of a widget. [...] I only want to execute A, B and C once all 3 are preloaded and ready to go. It's [...] about minimizing delays between them, for performance PERCEPTION. This one seems uncommon, and less like a dependency use case than the others. How often is this wanted?
Is it too inconvenient to just mark them all @whenneeded, and say something like: document.querySelector("#C").execute(function() { A.render(); B.render(); C.render(); }); That does require that the modules render in a function, and not when the script is first executed. I don't know how much of a burden that is for this case. Alternatively, if an event is fired when a script's dependencies have been met, then you could mark all three scripts @whenneeded, and call ("#C").execute() once C's dependencies have been met. Maybe the jit feature isn't a big deal, but it seems like a bit of an oddball for a narrow use case. You can manually increase or decrease a dependency count on script elements by calling incDependencies() and decDependencies(). Will a @defer dependency effectively defer all scripts that depend on it? incDependencies() and decDependencies() may be hard to debug, since if somebody messes up the counter, it's hard to tell whose fault it is. A named interface could help with this: script.addDependency(thing); /* script.dependencies is now [thing] */ script.removeDependency(thing); On Thu, Aug 29, 2013 at 10:37 AM, Nicholas Zakas standa...@nczconsulting.com wrote: The question of dependency management is, in my mind, a separate issue and one that doesn't belong in this layer of the web platform. HTML isn't the right spot for a dependency tree to be defined for scripts (or anything else). To me, that is a problem to be solved within the ECMAScript world much the way CSS has @import available from within CSS code. This would serialize script loading, because you wouldn't know a script's dependencies until you've actually fetched the script. That would make page loads very slow. I think the use cases other than the initial one (preload/execute later) are best relegated to script loaders I disagree. See above. (Please remember to trim quotes.) -- Glenn Maynard
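A toy model of the named-dependency interface suggested above (entirely hypothetical; the draft under discussion used bare incDependencies()/decDependencies() counters instead):

```javascript
// Named dependencies instead of a bare counter: when a count is wrong,
// you can inspect *which* dependency was never removed.
function makeDependencyTracker(onReady) {
  const deps = new Set();
  return {
    addDependency(thing) { deps.add(thing); },
    removeDependency(thing) {
      deps.delete(thing);
      if (deps.size === 0) onReady(); // all dependencies satisfied
    },
    get dependencies() { return [...deps]; }, // debuggable, unlike a counter
  };
}
```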
Re: [whatwg] Zip archives as first-class citizens
On Wed, Aug 28, 2013 at 4:54 PM, Eric Uhrhane er...@chromium.org wrote: Without commenting on the other parts of the proposal, let me just mention that every time .zip support comes up, we notice that it's not a great web archive format because it's not streamable. That is, you can't actually use any of the contents until you've downloaded the whole file. ZIPs support both streaming and random access. You can access files in a ZIP as the ZIP is downloaded, using the local file headers. In this mode, they work like tars (except that you don't have to decompress unneeded data, like you do with a tar.gz). This feature wouldn't want that, since you need to read the whole file up to the file you want. Instead, it wants random access, which ZIPs also support. You download the central directory record first, to find out where the file you want lies in the archive, then download just the slice of data you need. You don't need to download the whole file. -- Glenn Maynard
Re: [whatwg] Zip archives as first-class citizens
On Wed, Aug 28, 2013 at 12:07 PM, Eric Uhrhane er...@chromium.org wrote: We've covered this several times. The directory records in a zip can be superseded by further directories later in the archive, so you can't trust that you've got the right directory until you're done downloading. Both the local headers and the central record can be wrong. (As mentioned on IRC the other day, apparently EPUB files often have broken central records, so eBook readers probably prefer the local records.) If they're out of sync, then they'll always be broken in some clients. We just have to make sure that the record that takes priority in any particular case is well-defined, so we have interop. If some malformed archives won't work in some cases as a result, using a different format isn't an improvement: that just means *zero* existing archives would work. This applies to various other aspects of the format: the maximum supported length of comments and handling of duplicate filenames, for example. This would all need to be specified; the ZIP AppNote doesn't specify a parser or error handling in the way the web needs, it just describes the format. -- Glenn Maynard
Re: [whatwg] Should video controls generate click events?
On Tue, Aug 20, 2013 at 3:46 PM, Silvia Pfeiffer silviapfeiff...@gmail.com wrote: What I'm saying is that the idea that the JS developer controls pause/play as well as exposes video controls is a far-fetched example. I don't understand what's far-fetched about that. They seem orthogonal to me. On Tue, Aug 20, 2013 at 6:18 PM, Rick Waldron waldron.r...@gmail.com wrote: Firefox actually implements click-to-play video by default. It's unfortunate and all video interaction projects that I've worked on directly or consulted for have been forced to include video surface click - event.preventDefault() calls to stop the behaviour. It's the behavior users expect when watching videos, which is the case video should optimize for. If you're doing something else where the user interacts with the video in other ways, then it's expected that you need to prevent this behavior explicitly. Unlike browser controls, this is visible to scripts and something that affects authors, so this probably should be in the spec if it isn't. -- Glenn Maynard
Re: [whatwg] Blurry lines in 2D Canvas (and SVG)
On Sat, Aug 10, 2013 at 9:43 PM, Rik Cabanier caban...@gmail.com wrote: Ah, so you are relying on pixel snapping (=rounded up to 2 pixels). Rounded to the nearest integer pixel. If you give 1.25, the width would be 1. If you can do that with your approach, why not with strokes that are drawn from the center? It might be possible, in principle, to only snap the border of the stroke instead of the whole thing, but I don't know how to do that or if it'd be worthwhile. It seems like sharp lines are only particularly important for thin strokes (especially 1px), and in those cases the difference between a center and an outer stroke are minor. (I don't know if it's harder to implement, eg. so there's no gap between a fill followed by an outer stroke.) I was wondering if this is something that happens in Flash as well. It turns out that there's an option called hinting: "Keep stroke anchors on full pixels to prevent blurry lines." There's a blog post on what this does: http://www.kaourantin.net/2005/08/stroke-hinting-in-flash-player-8-aka.html I don't know about this, but the description sounds similar to what I'm suggesting. -- Glenn Maynard
Re: [whatwg] Antialiasing of line widths < 1 (was Re: Blurry lines in 2D Canvas (and SVG))
On Sat, Aug 10, 2013 at 7:42 AM, Stephen White senorbla...@chromium.org wrote: Chrome (well, Skia actually) uses a hairline mode for line widths < 1. It draws a line of width 1, and uses the width to modulate the alpha. I think the idea is to prevent blotchiness/unevenness caused by undersampling or missed coverage (Skia uses 16 samples of AA). That sounds like it should be fine, since it should give results similar to what users would expect from simple coverage antialiasing. I'm not sure that's what I'm seeing, though. http://jsfiddle.net/eZEyH/1/ The 0.001 width stroke is being drawn solid black in the pixel-centered (left) case. In the right one, horizontally aligned to the edge of a pixel, the stroke disappears. (I left it vertically pixel-centered, so the box didn't disappear entirely.) The right is what I'd expect to always happen with a lineWidth that thin. Similar things happen with thicker widths, the 0.001 just makes it very easy to see. This can become visible during animation, eg. http://jsfiddle.net/xSUuB/1/. In Chrome, the line flickers between solid black and grey. In Firefox, it's antialiased normally, so it consistently appears grey (actually shifting between one pixel of grey and two pixels of lighter grey). -- Glenn Maynard
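The hairline behavior described can be modeled like this (an assumption about the mechanism Stephen describes, not Skia's actual code):

```javascript
// Model of the hairline mode described above: widths below 1 are drawn as
// a 1-pixel line whose alpha is scaled down by the requested width.
function hairline(lineWidth, alpha) {
  if (lineWidth >= 1) return { drawWidth: lineWidth, alpha: alpha };
  return { drawWidth: 1, alpha: alpha * lineWidth };
}
```

Under this model a 0.001-width stroke should render almost fully transparent, which is why a solid-black result in the fiddle above suggests something else is going on.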
Re: [whatwg] Blurry lines in 2D Canvas (and SVG)
On Fri, Aug 9, 2013 at 4:17 PM, Stephen White senorbla...@chromium.org wrote: If the stroke was instead drawn centered over half pixels, the stroked rects would be centered along (5.5, 5.5) - (14.5, 5.5) - (14.5, 14.5) - (5.5, 14.5) - (5.5, 5.5). This would touch pixels 5-15 in each dimension. If drawn with transparency, the resulting left and top edges would look different than the bottom and right edges. E.g., http://jsfiddle.net/9xbkX/ My proposal addresses this, by adding an outer stroke mode. http://lists.whatwg.org/htdig.cgi/whatwg-whatwg.org/2013-July/040252.html -- Glenn Maynard
Re: [whatwg] Blurry lines in 2D Canvas (and SVG)
On Fri, Aug 9, 2013 at 7:16 PM, Rik Cabanier caban...@gmail.com wrote: In addition if the corners of the path don't align with the grid, you will get a blurry outline again. That's the purpose of the second half of my proposal: snapping coordinates and line widths to integers. As an experiment, I drew 4 rectangles in JSFiddle with stroke width of .5, .75, 1, 1.5 and 2: http://jsfiddle.net/6KS4V/2/ I aligned them to the grid as Glenn suggested. This is a blown up screenshot from IE (Firefox looked the same): http://bit.ly/16FVCKd and here's one from Chrome: http://bit.ly/19Tf9Ko The rectangle that's 2 points wide is somewhat blurry, but the one that is 1.5 is very bad. Right. In case anyone's not following, this is what's happening: https://zewt.org/~glenn/stroke-alignment.png The red box is the rectangle being drawn. The blue lines are the actual strokes. (This was created by hand, it's not an actual Canvas rendering.) The top row is drawing with integer coordinates. With a 1px stroke, the stroke sits across two pixels, so it aliases. With a 2px stroke, it fully covers two pixels and doesn't alias. With a 3px stroke, it aliases again. The middle row is drawing with half-coordinates. The pattern is reversed: clean, aliased, clean. Additionally, fills (with no stroke) always alias, since the red box lies between pixels. The bottom row is an outer stroke and integer coordinates: neither strokes nor fills alias, in all three cases. This is the mode I'm suggesting. Chrome seems to ignore stroke widths that are smaller than 1 (which is reasonable). (That seems wrong to me--it should continue to draw based on pixel coverage--but that's a separate issue...) -- Glenn Maynard
Re: [whatwg] Blurry lines in 2D Canvas (and SVG)
On Fri, Aug 9, 2013 at 11:07 PM, Rik Cabanier caban...@gmail.com wrote: How would you fix a 1.5 pixel width for the stroke or a 1.5 transform? By snapping the final, post-transform width of the stroke to an integer. If you scale by 1.25, eg. ctx.scale(1.25, 1.25), then draw a stroke with a lineWidth of 1.5, the resulting width is 1.875 pixels. That would be rounded up to 2 pixels, after applying the transform (scale) and before invoking the trace a path algorithm. Chrome seems to ignore stroke widths that are smaller than 1 (which is reasonable). (That seems wrong to me--it should continue to draw based on pixel coverage--but that's a separate issue...) Is it? Obviously you can't draw less than a pixel, but the user did specify that he wants it to look black. strokeStyle = black doesn't mean every pixel in the stroke should be black. It's the color of the pen. If you draw over half of a pixel with a black pen, you get 50% grey. It'd be one thing if Chrome didn't antialias at all, but if Chrome is antialiasing a stroke with a lineWidth of 1.5, it doesn't make sense that it's not antialiasing a stroke with a lineWidth of 0.75. I don't think this is strictly specified; the only mention of anti-aliasing is an example of how to do it (oversampling). This is tangential, though. Might want to start another thread if you want to go over this more, or we'll derail this one... -- Glenn Maynard
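The width-snapping rule described above (scale first, then round) can be sketched as a small helper; the names and the clamp-to-1 choice are mine, not spec text:

```javascript
// Snap a stroke's post-transform width to an integer number of device
// pixels: apply the current scale first, then round, as described above.
// The minimum-width clamp is one possible policy, not a spec requirement.
function snappedDeviceWidth(lineWidth, scale) {
  const deviceWidth = lineWidth * scale;       // e.g. 1.5 * 1.25 = 1.875
  return Math.max(1, Math.round(deviceWidth)); // rounds 1.875 up to 2
}
```

This would run after applying the transform and before the trace-a-path step, so a lineWidth of 1.5 under scale(1.25, 1.25) strokes exactly 2 device pixels.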
Re: [whatwg] BinaryEncoding for Typed Arrays using window.btoa and window.atob
On Wed, Aug 7, 2013 at 4:21 PM, Chang Shu csh...@gmail.com wrote: If we plan to enhance the Encoding spec, I personally prefer a new pair of BinaryDecoder/BinaryEncoder, which will be less confusing than reusing TextDecoder/TextEncoder. I disagree with the idea of adding a new method for something that behaves exactly like something we already have, just to give it a different name. (It may not be too late to rename those functions, if nobody has implemented them yet, but I'm not convinced it's much of a problem.) -- Glenn Maynard
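For reference, the existing btoa/atob pair already round-trips binary data if you go through a binary string, which is the behavior under discussion; a minimal sketch (helper names are mine):

```javascript
// Base64-encode a Uint8Array with the existing btoa/atob pair, via a
// "binary string" where each char code is one byte. Helper names are
// illustrative; this is the existing behavior, not a new API.
function bytesToBase64(bytes) {
  let binary = "";
  for (const b of bytes) binary += String.fromCharCode(b);
  return btoa(binary);
}

function base64ToBytes(s) {
  const binary = atob(s);
  return Uint8Array.from(binary, c => c.charCodeAt(0));
}
```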
Re: [whatwg] Forms: input type=file and directory tree picking
On Sun, Aug 4, 2013 at 2:47 AM, Jonas Sicking jo...@sicking.cc wrote: We can't do what you are suggesting for a plain input type=file multiple since there's already a defined API for that, and that API only exposes a .files property in the DOM. Sure we can; we can always add to that API, such as adding a getFileSystem() method. A different @type may be better anyway, though. But we could certainly add some way to enable creating an input which exposes files and directories in the DOM, rather than just files. Doing that will depend on coming up with a filesystem proposal which all parties actually agree on implementing; so far we don't have such a proposal. Unless we think it won't ever happen, it'd be better to keep working towards that than to rush things and implement greedy recursion. It also requires an actual proposal for what such an input would look like. I.e. would it be an input type=file directory? Or input type=file multiple with some sort of API call saying that flattening directories into files isn't needed? Or input type=filesanddirectories? It's probably not worth worrying about this part too much until we have a filesystem API for it to enable. Any of these seem fine (though I'd lean away from an API call), or maybe input type=file multiple=fs, which would cause it to fall back on the current (files only, non-recursive FileList) behavior on browsers that don't support it. (I don't think flattening directories into files is something that should ever happen in the first place, but if it does we'd definitely need to make sure it doesn't happen in this mode.) -- Glenn Maynard
Re: [whatwg] Forms: input type=file and directory tree picking
On Fri, Aug 2, 2013 at 11:15 AM, Jonathan Watt jw...@jwatt.org wrote: In my prototype implementation it took around 30 seconds to build the FileList for a directory of 200,000 files with a top end SSD; so depending on what the page is doing, directory picking could take some time. A static list isn't appropriate for recursively exposing a large directory. I hope that won't be implemented, since that's the sort of halfway-feature--not quite good enough, but it sort of works--that can delay a good API indefinitely. An interface to allow scripts to navigate the tree like a filesystem should be used, to avoid having to do a full recursion. For example, a photo browser probably only wants to read data on demand, as the user navigates. Also, doing it synchronously means that if the user adds another photo, he'd have to reopen the directory (and wait for the long recursion) all over again for it to be seen by the app. A previous discussion (on drag and drop, but the issues are the same) is here: http://lists.whatwg.org/pipermail/whatwg-whatwg.org/2011-November/033814.html That centered around FS-API, which is probably not the direction things are going, but whichever API things land on for filesystem access should probably be used for this--or vice-versa, if this comes first. I suspect they're actually the same thing, since a file picker (along with drag and drop) is the likely way to expose a filesystem-of-real-files API. -- Glenn Maynard
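To illustrate the difference, here is a toy sketch; the entry objects are invented for illustration, not any shipped API. A lazy generator only descends into a directory when the consumer asks for more entries, instead of building a 200,000-entry FileList up front:

```javascript
// Hypothetical sketch of on-demand directory traversal. The entry shape
// ({ name, kind, children }) is invented for illustration; the point is
// that a generator is lazy: nothing is walked until the consumer asks.
function* entries(dir) {
  for (const child of dir.children) {
    yield child;
    if (child.kind === "directory") yield* entries(child);
  }
}

// A tiny mock tree standing in for a real picked directory:
const mockRoot = {
  name: "photos", kind: "directory", children: [
    { name: "2013", kind: "directory", children: [
      { name: "a.jpg", kind: "file", children: [] },
    ]},
    { name: "index.txt", kind: "file", children: [] },
  ],
};

// A photo browser can pull one entry at a time as the user navigates,
// rather than waiting 30 seconds for a full recursive listing:
const it = entries(mockRoot);
const first = it.next().value; // "2013"; nothing below it is read yet
```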
Re: [whatwg] Blurry lines in 2D Canvas (and SVG)
On Thu, Jul 25, 2013 at 10:29 PM, Rik Cabanier caban...@gmail.com wrote: We're getting a bit off track here :-) We're figuring out an unclear use case. That's as on-track as it gets. :) No, you need to scale, otherwise the content of your canvas won't scale up. For instance, if you have a 100x100 device pixel rect, it has to become a 110x110 device pixel rect if you zoom by 10% Okay, that wasn't clear to me. Pixel ratios are peripheral to what you're describing: you could ask for the same thing any time you're rendering to a dynamically-sized canvas, which simplifies the discussion. I don't know if a complex semi-antialiasing mode is a good approach, though. It'll always have issues (rounded corners won't connect cleanly; it's not clear if it works for fills, or if it works for patterned fills). I don't know if this would work well in practice, or if it's implementable, but here's a two-part approach that might work: - First, add the inner and/or outer stroke modes. This seems useful in and of itself, but the purpose here is to make it so integer coordinates give hard edges, whether or not you have a 1px stroke. - Second, add a mode which causes coordinates to be snapped to integers. This would happen when you make the API call, and be applied after the canvas transform. If you're in scale(1.25), and you call rect(100, 100, 75, 75), it would draw a rect from 100x100 to 194x194, instead of to 193.75x193.75. This would give clean output for rounded edges, since you're adjusting the size of the path as a whole. It would work for fills (which also get aliased edges when transformed). It also works if the fill is a pattern, where turning off antialiasing would make the pattern ugly. -- Glenn Maynard
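The second part, post-transform coordinate snapping, reduces to rounding after applying the current scale. A sketch (the helper name is mine, not proposed API surface):

```javascript
// Snap a coordinate to an integer device pixel *after* applying the
// current canvas scale, per the proposal above. Name is illustrative.
function snapAfterTransform(value, scale) {
  return Math.round(value * scale);
}

// e.g. an edge at 155 canvas units under scale(1.25):
//   155 * 1.25 = 193.75, snapped to 194 device pixels.
// Because the whole path is snapped as a unit, rounded corners and
// patterned fills keep their shape; only the final edges move.
```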
Re: [whatwg] Blurry lines in 2D Canvas (and SVG)
On Thu, Jul 25, 2013 at 12:24 AM, Rik Cabanier caban...@gmail.com wrote: Yes, that's what I had in mind: the developer detects the device pixel ratio and scales up the canvas so the pixels match. That reduces to the simple case, then. The pixel ratio gets out of the picture entirely if you adjust the canvas so it's rendered 1:1 to pixels, so the rules for getting hard edges are the same (half-pixels for strokes, integer pixels for fills). -- Glenn Maynard
Re: [whatwg] Blurry lines in 2D Canvas (and SVG)
On Thu, Jul 25, 2013 at 2:36 PM, Rik Cabanier caban...@gmail.com wrote: On Thu, Jul 25, 2013 at 7:05 AM, Glenn Maynard gl...@zewt.org wrote: On Thu, Jul 25, 2013 at 12:24 AM, Rik Cabanier caban...@gmail.com wrote: Yes, that's what I had in mind: the developer detects the device pixel ratio and scales up the canvas so the pixels match. That reduces to the simple case, then. The pixel ratio gets out of the picture entirely if you adjust the canvas so it's rendered 1:1 to pixels, so the rules for getting hard edges are the same (half-pixels for strokes, integer pixels for fills). Unfortunately, no. Let's say you have a device pixel ratio of 1.1 and a canvas of 100x100px. The underlying canvas bitmap should now be created as 110 x 110 pixels and your content should be scaled by 1.1. This will make everything blurry :-( If you have a pixel ratio of 1.1 (100 CSS pixels = 110 device pixels), and you're displaying in a 100x100 box in CSS pixels, then you create a canvas of 110x110 pixels, so the backing store has the same resolution as the final device pixels. If you don't do that--if you create a 100x100 backing store and then display it in 100x100 CSS pixels--then nothing Canvas can do will prevent it from being blurry, because the backing store is being upscaled by 10% after it's already been drawn. -- Glenn Maynard
Re: [whatwg] Blurry lines in 2D Canvas (and SVG)
On Thu, Jul 25, 2013 at 3:49 PM, Rik Cabanier caban...@gmail.com wrote: You still need the scale though, otherwise the canvas content isn't zoomed (which is what the user requested) (We were talking about device pixel ratios, not zooming--user zooming scales the backing store, which is what we can't do anything about.) I think we're misunderstanding each other, but I'm not sure where. If you're on a 1.1x device, a 100x100 CSS pixel (px) box has 110x110 physical pixels. To draw cleanly into that box, you create a canvas with a 110x110 pixel backing store, and display it in the 100x100px region, eg. canvas width=110 height=110 style=width: 100px; height: 100px; You don't do any scaling within the 2d canvas itself, you just draw to it like a 110x110-pixel canvas. -- Glenn Maynard
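The sizing rule being described can be sketched as follows (assuming the page knows its CSS size and the device pixel ratio; helper name is mine):

```javascript
// Compute the backing-store size for a canvas so that one canvas pixel
// maps to exactly one device pixel: multiply the CSS size by the device
// pixel ratio. Helper name is illustrative.
function backingStoreSize(cssWidth, cssHeight, dpr) {
  return {
    width: Math.round(cssWidth * dpr),
    height: Math.round(cssHeight * dpr),
  };
}

// In a page (not run here), for a 100x100 CSS-pixel box at ratio 1.1:
//   const { width, height } = backingStoreSize(100, 100, window.devicePixelRatio);
//   canvas.width = width;           // 110: backing store in device pixels
//   canvas.height = height;         // 110
//   canvas.style.width = "100px";   // displayed size in CSS pixels
//   canvas.style.height = "100px";
// Then draw as if it were a 110x110-pixel canvas; no ctx.scale() needed.
```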
Re: [whatwg] Blurry lines in 2D Canvas (and SVG)
(The below is about Canvas only; I'm not very familiar with SVG. I think they should be two separate discussions.) On Tue, Jul 23, 2013 at 6:19 PM, Rik Cabanier caban...@gmail.com wrote: we've noticed that if you draw lines in canvas or SVG, they always end up blurry. For instance see this fiddle: http://jsfiddle.net/V92Gn/128/ This happens because you offset 1 pixel and then draw a half pixel stroke on each side. Since it covers only half the pixel, the color gets mapped to 50% gray. You can work around this by doing an extra offset of half the devicepixelratio, For Canvas, you should always add 0.5, since you're in the canvas coordinate space, before the pixel ratio is applied. This is the same coordinate system used by OpenGL and Direct3D 10 (and up), with pixels centered around 0.5x0.5. That is, a pixel sits between 0x0 and 1x1. If you're specifying the center of the line (eg. where the stroke grows outwards from), you need to add a half pixel. (When you're specifying a bounding box, such as drawImage, you don't, since you're at the edge rather than the center of a pixel.) I'm not sure if there's a way to disable antialiasing for paths. Disabling antialiasing to allow people to specify wrong coordinates only seems like it would be more confusing, though. The only solution is to educate people about when and why they need to add a half pixel; even if there was a way to avoid this in general (I'm not sure there is, for an API with Canvas's functionality), it's much too late to change this. -- Glenn Maynard
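The add-0.5 rule for stroke centers can be wrapped in a tiny helper (the name is mine):

```javascript
// Align a stroke's center line so a 1px stroke exactly covers one pixel
// row/column: snap to the nearest half-pixel (N + 0.5). Only line
// *centers* need this; bounding-box coordinates (e.g. drawImage) do not.
function alignStroke(coord) {
  return Math.floor(coord) + 0.5;
}

// In a page (not run here):
//   ctx.lineWidth = 1;
//   ctx.beginPath();
//   ctx.moveTo(alignStroke(10), alignStroke(20));  // 10.5, 20.5
//   ctx.lineTo(alignStroke(90), alignStroke(20));
//   ctx.stroke();  // a crisp horizontal line, not a 50%-grey smear
```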
Re: [whatwg] Forcing orientation in content
On Fri, Jul 12, 2013 at 2:45 PM, Ian Hickson i...@hixie.ch wrote: Why? As a user on desktop, I can resize my window however I want, to be landscape or portrait. Why wouldn't I be allowed to do the same on any other device? In mobile accelerometer/gyro-based games, you don't want the user's shifting the device around to cause the screen to change orientation while they're playing. This means locking the current orientation, though, rather than a specific orientation (for example, you'd probably want to unlock it when the user is in a menu and not actually playing the game). On Fri, Jul 12, 2013 at 7:07 PM, Ian Hickson i...@hixie.ch wrote: Sure, some orientations might be better -- just like the HTML spec is more readable on a taller large screen than on a landscape phone screen -- but if the user wants to play the other way, it seems wrong to be able to prevent it. In practice, game developers are rarely willing to spend the time to make their games work well in both portrait and landscape. The Web solution is probably not to lock the display, though, but to letterbox the display if the window's aspect ratio is too far off, as with videos. -- Glenn Maynard
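Locking the current orientation during gameplay might look like the following, using the Screen Orientation API shape (screen.orientation.lock/unlock); support varies, so treat this as a sketch rather than a portable recipe:

```javascript
// Lock whatever orientation the game is currently in, rather than forcing
// a specific one; unlock again when the player returns to a menu. Uses
// the Screen Orientation API shape (screen.orientation); sketch only.
function lockCurrentOrientation(scr) {
  // scr.orientation.type is e.g. "landscape-primary", which is a valid
  // lock target, so the screen stays put however the device is tilted.
  return scr.orientation.lock(scr.orientation.type);
}

function unlockOrientation(scr) {
  scr.orientation.unlock();
}

// In a page: lockCurrentOrientation(screen) when gameplay starts,
// unlockOrientation(screen) when a menu opens.
```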
Re: [whatwg] Forcing orientation in content
On Fri, Jul 12, 2013 at 10:25 PM, Jonas Sicking jo...@sicking.cc wrote: If the content is sized better for portrait or landscape, then it's generally good for the user if the mode is forced by the application. Otherwise the user will have to scroll, or will see content that is smaller than it otherwise would be. I disagree completely. Even for landscape videos, I'd strongly prefer that a video not force my phone to portrait. Leave my orientation alone, and vertically letterbox the content, despite that giving a small viewing area. That lets me rotate the device at my convenience, rather than rotating the device on its own, so browser controls don't suddenly jump to a different place (eg. if I decide to back out instead of viewing), and not forcing me to rotate the device if I don't feel the need (eg. if it's a short clip and the small viewing area is sufficient). Changing orientation is disruptive. I can hardly imagine how obnoxious Web browsing would be on a mobile device, if every second page I navigated to decided to flip my device to a different orientation. This feels like the same sort of misfeature as allowing pages to resize the browser window: best viewed in 800x600 (so we'll force it), best viewed in portrait (so we'll force it). -- Glenn Maynard
Re: [whatwg] Proposal: Adding methods like getElementById and getElementsByTagName to DocumentFragments
On Mon, Jul 1, 2013 at 1:56 AM, Octavian Damiean odami...@linux.com wrote: I completely agree with Jussi here. It's also not really constructive to argue whether querySelector is more powerful or not, we're talking about consistency. It's a little inconsistent to agree with something other than consistency, then tell people not to argue about anything but consistency. :) Consistency isn't a magic word that justifies things by itself. When it comes to backwards-compatibility with obsolete APIs, consistency often just means bloat. (I've used querySelector exclusively for quite some time, and I find arguments that querySelector isn't readable or the wrong tool to simply not hold up. I find it more readable, actually, since I don't have to change interfaces depending on whether I'm searching for an ID or a class.) -- Glenn Maynard
Re: [whatwg] Requiring the Encoding Standard preferred name is too strict for no good reason
On Mon, Jul 1, 2013 at 6:16 PM, Ian Hickson i...@hixie.ch wrote: It seems bad, and maybe rather full of hubris, to make it conforming to use a label that we know will be interpreted in a manner that is a willful violation of its spec (that is, the ISO spec). It's hard enough to get people to label their encodings in the first place. It doesn't seem like a good idea to spend people's limited attention on encodings with you should change your encoding label, even though what you already have will always work, especially given how widespread the ISO-8859-1 label is. (FWIW, I wouldn't change a server to say windows-1252. The ISO spec is so far out of touch with reality that it's hard to consider it authoritative; in reality, ISO-8859-1 is 1252.) -- Glenn Maynard
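The Encoding Standard makes this concrete: the iso-8859-1 label is defined as a label for windows-1252, which is observable through TextDecoder in any conformant implementation. A small demonstration (assumes an Encoding Standard-conformant runtime):

```javascript
// Per the Encoding Standard, the label "iso-8859-1" resolves to the
// windows-1252 decoder. Byte 0x93 is an unassigned C1 control in true
// ISO/IEC 8859-1, but windows-1252 maps it to U+201C (left double
// quotation mark), and that is what web content actually expects.
const dec = new TextDecoder("iso-8859-1");
const resolved = dec.encoding;                  // "windows-1252"
const ch = dec.decode(Uint8Array.of(0x93));     // "\u201C"
```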
Re: [whatwg] Proposal: Adding methods like getElementById and getElementsByTagName to DocumentFragments
On Sat, Jun 29, 2013 at 4:55 PM, Tim Streater t...@clothears.org.uk wrote: But what I'm doing, I'm not doing for CSS purposes. I'm trying to find a particular row, by id, in order to modify the contents of cells in that row. I find it perverse to be using a style-related API call to do that. CSS uses selectors, not the other way around. querySelector() has nothing to do with styles. -- Glenn Maynard
Re: [whatwg] Challenging canvas.supportsContext
On Tue, Jun 25, 2013 at 3:28 PM, Simon Pieters sim...@opera.com wrote: On Tue, 25 Jun 2013 21:01:27 +0200, Dean Jackson d...@apple.com wrote: Showing or hiding interface objects is not something I want to do. It's possible that I missed it, but, why not? There is precedent for doing so. For instance, in Opera 11, the WebSocket constructor was absent unless WebSockets were enabled in opera:config. This allowed feature detection like the following to work: var supports_websockets = WebSocket in window; Also, the HTML spec actually requires it: [[ When support for a feature is disabled (e.g. as an emergency measure to mitigate a security problem, or to aid in development, or for performance reasons), user agents must act as if they had no support for the feature whatsoever, and as if the feature was not mentioned in this specification. For example, if a particular feature is accessed via an attribute in a Web IDL interface, the attribute itself would be omitted from the objects that implement that interface — leaving the attribute on the object but making it return null or throw an exception is insufficient. ]] This is done if the feature is being disabled completely at page load time, with no chance of it coming back: you simply don't put the interface into the environment. WebGL is different, since it might go away after the page is already loaded (eg. the GPU blacklist is updated); going in and trying to remove the interface after the page is loaded would be weird. It might also become available after previously being unavailable (eg. video drivers are updated), in which case you'd have to go in and insert the interface. It also doesn't provide any way to query arguments to getContext, eg. to see if null would be returned if a particular option is provided, which supportsContext allows. (I don't know if there are any cases where this actually happens, since most options are best effort and don't cause context creation to fail if they're not available.) 
-- Glenn Maynard
Re: [whatwg] Challenging canvas.supportsContext
On Tue, Jun 25, 2013 at 6:48 PM, Simon Pieters sim...@opera.com wrote: On Wed, 26 Jun 2013 01:39:01 +0200, Glenn Maynard gl...@zewt.org wrote: This is done if the feature is being disabled completely at page load time, with no chance of it coming back: you simply don't put the interface into the environment. WebGL is different, since it might go away after the page is already loaded (eg. the GPU blacklist is updated); going in and trying to remove the interface after the page is loaded would be weird. It might also become available after previously being unavailable (eg. video drivers are updated), in which case you'd have to go in and insert the interface. That's a good point. But the above also means that supportsContext is not useful in such cases since the environment can have changed between the time supportsContext is called and the time you want to create a context. That's inherent however it's done, since it's usually impossible to guarantee this; too much is out of the control of the browser. Even if you call getContext(gl) twice in a row, one might succeed and the other fail. That doesn't mean it's not useful, but it does mean it's harder to use correctly. For example, if Google Maps wants to show an enable WebGL maps button only if WebGL is available, supportsContext() can be useful to tell whether to show the button. That's useful even if it's not perfect: if that hides the button correctly for 99% of users, and gives a button that shows sorry, WebGL didn't actually work! for the remaining 1%, then that's an improvement over a useless button for 100% of users. If they want to show the button in the uncommon case of WebGL becoming available later on, they'd also want to recheck support periodically (eg. on focus or something). This is all far from perfect--web APIs try hard to avoid this sort of nondeterministic behavior. I don't know enough about the costs of actually creating a context to know whether it's worth it. 
But, I disagree that being imperfect means it's not useful at all. (FWIW, if I remember correctly, the basic idea of supportsContext was to discourage badly-written libraries, used on pages that don't even care about WebGL, from always creating a context just to fill in a feature support table, causing lots of pages to create and immediately discard rendering contexts all the time.) On Tue, Jun 25, 2013 at 6:46 PM, Jonas Sicking jo...@sicking.cc wrote: I don't think any of the current proposals supports that use case. For that to be really supported we'd need some sort of event that is fired whenever support for WebGL is dynamically added or removed. Pages having to continuously poll .supportsContext() is not a real solution. Has any browser actually expressed interest in supporting that use case? I recall the driver blacklist issue coming up before, where WebGL is available when the page is loaded, but is disabled later due to a background update to the blacklist. Sorry, it was years ago and I don't recall who that discussion was with. https://www.khronos.org/webgl/public-mailing-list/archives/1104/msg00136.html is the closest reference to the discussion I can find. -- Glenn Maynard
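A detection helper along the lines of the Maps example might look like this; it probes getContext directly (so it works without supportsContext), and since the answer can change over time, callers may want to re-run it, eg. on focus:

```javascript
// Probe for WebGL by attempting to create a context on a scratch canvas.
// The result is a best-effort snapshot: as discussed above, support can
// appear or disappear after page load, so this may be re-checked later.
function webglAvailable(canvas) {
  try {
    return !!(canvas.getContext("webgl") ||
              canvas.getContext("experimental-webgl"));
  } catch (e) {
    return false; // some implementations throw when creation is blocked
  }
}

// In a page: webglAvailable(document.createElement("canvas")) decides
// whether to show an "enable WebGL maps" style button; a later failure
// still needs a graceful "WebGL didn't actually work" path.
```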