Re: [whatwg] Canvas and color colorspaces (was: WebGL and ImageBitmaps)
> On 30 Apr 2016, at 21:19, Rik Cabanier wrote:
>
>>> It would be ideal if we can specify that the canvas backing store is in
>>> the device profile.
>>
>> How would the website know what profile this is? If it's just a boolean
>> setting, then I don't see how it would make it possible to use such canvas
>> correctly, e.g. convert an XYZ color to the canvas' color space.
>
> This is how content is drawn today. A website doesn't know what profile a
> browser is using. Introducing this would make canvas drawing match HTML,
> which is what the spec is intending and users want.

I think HTML colors being interpreted as colors in the device color space is a bug. It makes it hard/impossible to get consistent colors across HTML, GIF and JPEG/PNG on wide-gamut displays: https://kornel.ski/en/color

IMHO HTML/CSS and unlabelled image colors should be interpreted as sRGB colors. That makes all content display consistently and without over-saturation on wide-gamut displays. That's what Safari does, and I really like that behavior.

>> Is the device profile exposed somewhere in the platform yet? If not, I
>> think it'd be better to leave it hidden to avoid adding more
>> fingerprinting vectors.
>
> I'm unsure how this would contribute to fingerprinting. If browsers start
> following the spec wrt ICC profile conversion, you could infer the profile
> by drawing an image and looking at the pixels.

Users may have a custom, personal monitor calibration, e.g. on OS X, System Preferences -> Color -> Calibrate does this. This is likely to create a very unique profile that can be used as a supercookie that uniquely identifies the user, even across different browsers and private mode.

Implementations must avoid exposing pixel data that has been converted to the display color space at any time, because it is possible to recreate the profile by observing posterization.
Therefore, to avoid creation of a supercookie, by default the canvas backing store must be in sRGB, unlabelled images rendered to canvas must be assumed to be in sRGB too, and toDataURL() has to export it in sRGB.

>> Setting the canvas to a website-supplied profile seems OK to me. It'd
>> mean the website already knows how to convert colors to the given
>> colorspace, and the same profile could be passed back by toDataURL().
>
> That would indeed be the ideal solution. My worry is that it introduces a
> lot of changes in the browser (ie see Justin's email that started this
> thread) and I'd like to see a solution sooner than later.

I'd rather not see any half-measures that mix device RGB and sRGB. Color handling in Chrome and Firefox is currently problematic on wide-gamut displays, not just in canvas, but everywhere. It's just not possible to have a photo that matches a CSS background and doesn't have orange faces on wide-gamut displays. It's very frustrating from an author's perspective (I'm a developer of web-oriented image optimizers for Mac, so I'm hearing from many new iMac users annoyed with Chrome).

If you must implement a quick fix, then perhaps render everything in the browser in the sRGB color space internally, and then if needed convert to device RGB as the very last step (on the GPU/by the OS)? It would make all current web content render consistently, as expected. Support for the niche use case of true display of the full gamut of wider-than-sRGB profiles can be added less urgently.

-- 
Kind regards, Kornel
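The "composite in sRGB, convert to the device profile last" suggestion works because sRGB is fully specified (IEC 61966-2-1), so the final conversion is a single well-defined step. As a minimal sketch (not from the thread itself), the standard sRGB transfer function looks like this:

```javascript
// The sRGB transfer function per IEC 61966-2-1. A browser compositing in
// sRGB would decode channels to linear light, apply the display profile,
// and re-encode, all as one final step.
function srgbToLinear(c) {
  // c is a channel value in the 0..1 range
  return c <= 0.04045 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
}

function linearToSrgb(c) {
  return c <= 0.0031308 ? c * 12.92 : 1.055 * Math.pow(c, 1 / 2.4) - 0.055;
}
```

The two functions are exact inverses, which is what makes a late, lossless-in-principle conversion to device RGB possible.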
Re: [whatwg] Canvas and color colorspaces (was: WebGL and ImageBitmaps)
> On 30 Apr 2016, at 19:07, Rik Cabanier wrote:
>
> It would be ideal if we can specify that the canvas backing store is in
> the device profile.

How would the website know what profile this is? If it's just a boolean setting, then I don't see how it would make it possible to use such canvas correctly, e.g. convert an XYZ color to the canvas' color space.

Is the device profile exposed somewhere in the platform yet? If not, I think it'd be better to leave it hidden to avoid adding more fingerprinting vectors.

Setting the canvas to a website-supplied profile seems OK to me. It'd mean the website already knows how to convert colors to the given colorspace, and the same profile could be passed back by toDataURL().

-- 
Kind regards, Kornel
Re: [whatwg] A mask="" advisory flag for <link rel="icon">
>> - Change to <link rel="icon" mask>, but keep using the theme-color meta
>> for the color

Please don't use meta theme-color. The Financial Times' theme color is "salmon pink" (#fff1e0), but FT's logo must use black letters.

FT's logo is:
http://image.webservices.ft.com/v1/images/raw/fticon:brand-ft?format=jpg&bgcolor=fff1e0&quality=highest&source=example

and for Safari's icon it should be:
http://image.webservices.ft.com/v1/images/raw/fticon:brand-ft?format=svg&source=example

but theme-color makes it look like:
http://image.webservices.ft.com/v1/images/raw/fticon:brand-ft?format=svg&tint=fff1e0,fff1e0&source=example

For this case Safari requires theme-color to be changed to black, but that would make the entire UI of Chrome for Android black, which is also unacceptable.

-- 
Kind regards, Kornel Lesiński
Re: [whatwg] A mask="" advisory flag for <link rel="icon">
>>> The reason for treating the icon as a mask is that we want to enforce
>>> having a monochrome shape, specifically for our pinned tabs feature.
>>
>> The svg <mask> element has a switch for choosing between luminance and
>> alpha masking; I think using alpha masking instead seems like a pretty
>> clear win. It makes the color irrelevant, making it more likely that the
>> plain icon is appropriate to use for a mask as well, and there's no
>> difference in behavior if you're using opaque colors. (No difference in
>> functionality overall, either; you just achieve partial-transparency
>> with alpha rather than color.)

I think it would be a big improvement if Safari only looked at the alpha channel and ignored luminance for the mask.

And as I've suggested before, instead of reading the theme color from the problematic <meta name="theme-color">, Safari could read the theme color from the icon by averaging the colors of the icon's opaque pixels. Instead of 100% black, authors should be advised to make the icon 100% in the theme color they want. It would be easy to author (it'd display essentially as-is if the author used a solid color) and still meet the requirement of enforcing a monochrome image (authors that mixed colors against the advice would get one color that is a blend).

And all this would be achieved without the need for another meta tag, and the mask icon would be the same in other browsers.

-- 
Kind regards, Kornel Lesiński
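The "average the opaque pixels" heuristic suggested above can be sketched in a few lines. This is only an illustration (the function name is made up, not a proposed API); `data` is flat RGBA pixel data in the shape canvas getImageData() returns:

```javascript
// Derive a theme color from an icon by averaging its fully opaque pixels.
// A solid-color icon yields exactly that color; mixed colors blend into
// one, which matches the "enforced monochrome" requirement.
function themeColorFromPixels(data) {
  let r = 0, g = 0, b = 0, count = 0;
  for (let i = 0; i < data.length; i += 4) {
    if (data[i + 3] === 255) { // consider fully opaque pixels only
      r += data[i];
      g += data[i + 1];
      b += data[i + 2];
      count++;
    }
  }
  if (count === 0) return null; // fully transparent icon: no color to derive
  return [Math.round(r / count), Math.round(g / count), Math.round(b / count)];
}
```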
[whatwg] Icon mask and theme color
Apple has released a version of Safari that has a new interpretation of <meta name="theme-color"> and <link rel="icon"> that conflicts with existing usage on the Web.

Safari (on OS X 10.11) uses `theme-color` for the foreground color of favicons of pinned tabs, but other browsers use `theme-color` for background colors. This makes it impossible to have a light theme color that fits Chrome's background and a dark pinned icon color that suits Safari (rdar://21379839).

Additionally, Apple introduced a `mask` attribute on the link element that is merely modifying the link relationship, but in a way that is incompatible with other browsers.

To avoid compatibility problems I suggest specifying a way to define icon masks and colors that doesn't conflict with existing usage on the Web. The new Safari is still only a preview, so I hope Apple will switch to a better solution.

To prevent user agents from using theme-color in conflicting ways I suggest defining theme-color to be a background color:

https://github.com/whatwg/meta-theme-color/issues/10

and adding a new meta to define the color for the favicon specifically.

Additionally, I suggest defining an `icon-mask` link relationship for an icon that the user agent can colorize. This way authors can control whether they want the icon to be reused as a regular icon or not, by using this relationship alone.

-- 
Kind regards, Kornel Lesiński
Re: [whatwg] HTTP/2 push detection and control in JavaScript
> On 20 Feb 2015, at 10:48, Brendan Long wrote:
>
> The obvious question to ask is "why not just poll the server"? The answer
> is that live streaming latency depends (among other things) on how
> quickly you poll. Unless you can perfectly predict when the server will
> have an update available, you need to either poll slightly late
> (introducing latency) or poll significantly more often than the server
> creates updates. Using server push is equivalent to polling infinitely
> fast, while simultaneously reducing load on the server by making fewer
> requests (win/win).

For server push we already have Server-Sent Events:

https://html.spec.whatwg.org/multipage/comms.html#server-sent-events

> I'm not really concerned with how this is solved, but an example would be
> to add to XMLHTTPRequest: […]

XHR is dead. https://fetch.spec.whatwg.org/

-- 
regards, Kornel
Re: [whatwg] Feature-detectable WakeLocks
>> I'd prefer if individual lock types were instances of objects, e.g.
>> navigator.*Lock objects could be instances of a variant of the WakeLock
>> interface:
>>
>>   navigator.screenLock.request();
>>   navigator.screenLock.isHeld();
>>
>>   navigator.cpuLock.request();
>>   navigator.cpuLock.release();
>
> Personally, this doesn't strike me as good API design. It means having a
> bunch of attributes that all use the same class but only differ in name.

Really? I think clearly separating different classes of locks (with a common base class) is much better than conflating them behind a weakly typed, string-driven API. It's like:

  Element.firstChild.getAttribute(…);
  Element.nextSibling.getAttribute(…);

instead of:

  Element.getAttribute("firstChild", …);
  Element.getAttribute("nextSibling", …);

>> Alternatively, if the WakeLock was instantiable (to have a standard way
>> for independent page components to share locks) then these objects could
>> be constructors:
>>
>>   if (navigator.ScreenLock) {
>>     var lock = new navigator.ScreenLock();
>>     …
>>     lock.release();
>>   }
>>
>> (or `new navigator.wakeLocks.Screen()`, etc.)
>
> We don't have any APIs like this today on the Web. It would be weird :)

"Weird" is subjective and a vague criticism. Can you elaborate on what's wrong with that?

> It would just be better to have a constructor on the interface:
> `new WakeLock("screen")` or whatever.

I don't see any benefit in obscuring the types of the objects. A string-driven API doesn't allow simple feature detection, and a single type that conflates all lock types makes extensibility uglier. You won't be able to elegantly add methods that are valid only for some types of locks, e.g. `new WakeLock("cpu").dimScreen()` is nonsense, but would be valid from the perspective of WebIDL and JS prototypes.

-- 
regards, Kornel
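The feature-detection point can be sketched concretely. Note that `navigator.ScreenLock` is the hypothetical constructor from this thread, not a shipped API, so a mock `navigator` object stands in for the browser here:

```javascript
// With one constructor per lock type, feature detection is a plain
// property check. A mock navigator simulates a browser that supports
// the screen lock but not the CPU lock.
const navigator = { ScreenLock: function () {} };

function supportsLock(nav, name) {
  return typeof nav[name] === "function";
}

// By contrast, with `new WakeLock("cpu")` an unsupported string can only
// be discovered by constructing the object and catching an exception.
```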
Re: [whatwg] Proposal: Wake Lock API
> On Monday, August 18, 2014 at 6:24 PM, Kornel Lesiński wrote:
>
>> I think it'd be unfortunate if this API had just one shared lock per
>> browsing context and required components on the page to coordinate
>> locking, but provided no means to do so.
>
> The API allows scripts to check which locks are currently held (as either
> `isHeld()` or `getCurrentLocks()`, for which I just sent a PR).

I don't understand how that helps.

Let's say I have embedded a Slideshare presentation and a YouTube video on my page. I start watching the slides, then start playing the video, then finish watching the slides. When Slideshare finishes and wants to release the lock, it can't learn via this API whether YouTube still wants the lock. When Slideshare started, isHeld was false, but setting it back to that original state would be incorrect. When Slideshare finished, isHeld was true, but that doesn't tell it anything either, since Slideshare itself set it to true.

The only way I see for coordinating the lock between independent components on the page is not via isHeld(), but by defensively re-setting the lock. In my previous example both Slideshare and YouTube would have to watch for 'lost' events (but not via the Netscape-style onlost footgun!) and keep re-requesting the lock soon after it's been released, for as long as they need it.

IMHO that's really ugly. If re-requesting is supposed to be the pattern for maintaining locks properly, then the whole API could be cut down to just events:

  window.addEventListener('beforeScreenLock', function(e) {
      if (stillShowingStuff) e.preventDefault();
  }, false);

The browser would fire the beforeScreenLock event every time the OS is about to turn the screen off. To keep the screen on for another while, the page just needs to prevent the event.

>> This will force authors of libraries and components to create dummy
>> iframes just to have their private lock, and libraries/pages without
>> such a workaround will be messing up each other's locks.
> Currently, iframes are not allowed to have locks - only top-level
> browsing contexts are. This is to avoid things like embedded ads from
> requesting wake locks.

That's a noble goal. However, it may not be effective against ads in practice, because the majority of ads are embedded using <script> elements that run in the top-level browsing context.
[whatwg] Preventing wake lock leaks with DOM nodes
My biggest concern with the WakeLock API is that it's easy to forget (or fail) to release the lock. It's not a problem with the API per se, but a programming problem in general: resource management in non-trivial programs is hard.

WakeLocks are especially problematic in this regard, as a "leaked" lock won't cause any immediate problems for the developer, so this type of bug can easily go unnoticed.

So I think the lifetime of WakeLocks needs to be attached to something visible, to make failure to release the lock immediately obvious.

In the case of the screen lock it's especially easy: the whole purpose of this lock is to keep something visible on screen, so we can require that something to be explicitly connected to the lock.

For example, if I were creating a widget that displays a presentation on the page, I could attach the screen lock to the <canvas> or <video> element that holds the presentation:

  new navigator.ScreenLock(myCanvas);

and if the canvas was removed from the document or hidden in any way, then the browser could turn the screen off as usual, and I wouldn't have to do anything!

It's nearly impossible to forget to remove a visible DOM element from the document — the mistake is likely to be quite obviously visible. If screen lock lifetime was dependent on the visibility of a DOM element, then it would also be very hard to leak the lock without noticing it!

(that's a variant of the "wake-lock:display" CSS proposal, but less explicitly dependent on CSS).

With the CPU lock it's less clear-cut. I think tying it to a notification may be a good idea. Alternatively, perhaps the lock itself could be an element that the author is supposed to insert into the document? ;)

-- 
regards, Kornel
[whatwg] Feature-detectable WakeLocks
WakeLock.request() expecting a string isn't very friendly to feature detection.

I'd prefer if individual lock types were instances of objects, e.g. navigator.*Lock objects could be instances of a variant of the WakeLock interface:

  navigator.screenLock.request();
  navigator.screenLock.isHeld();

  navigator.cpuLock.request();
  navigator.cpuLock.release();

Alternatively, if the WakeLock was instantiable (to have a standard way for independent page components to share locks) then these objects could be constructors:

  if (navigator.ScreenLock) {
      var lock = new navigator.ScreenLock();
      …
      lock.release();
  }

(or `new navigator.wakeLocks.Screen()`, etc.)

Having specific instances for different types of locks could also enable elegant extensibility of the API, e.g.

  var screenLock = new navigator.ScreenLock();
  screenLock.dimScreen(); // completely made-up API

  var cpuLock = new navigator.CpuLock();
  cpuLock.setThreadPriority("low"); // completely made-up API

-- 
regards, Kornel
Re: [whatwg] Proposal: Wake Lock API
I think it'd be unfortunate if this API had just one shared lock per browsing context and required components on the page to coordinate locking, but provided no means to do so. This will force authors of libraries and components to create dummy iframes just to have their private lock, and libraries/pages without such a workaround will be messing up each other's locks.

Having just a single shared DOM0-style event handler navigator.wakeLock.onlost looks especially jarring. I would expect this to be a proper DOM event that can be used with normal addEventListener (please avoid repeating the mistake of matchMedia).

To make some coordination possible, the simplest method could be to keep track of the number of lock requests and releases, like retain/release in Objective-C:

  navigator.wakeLock.request("screen"); // locks
  navigator.wakeLock.request("screen"); // increases lock count
  navigator.wakeLock.release("screen"); // not released yet, but decreases lock count
  navigator.wakeLock.release("screen"); // now released for real

However, as you probably know from Objective-C, perfect balancing of retain/release takes care and discipline. Personally, I wouldn't trust all 3rd-party libraries/widgets/ads to be careful with this. In fact, I expect some "clever" libraries to ruin this with:

  while (navigator.wakeLock.isHeld("screen"))
      navigator.wakeLock.release("screen"); // just release the damn thing in my leaky code!

Therefore, if WakeLock needs to be a purely JS API, I strongly prefer having WakeLock available only as an object instance, but without exposing GC behavior—if it's lost, it's like a missing release call. If devtools ever get monitoring of unhandled errors in Promise objects, they could also warn against lost WakeLock objects—it's the same type of problem, dependent on GC.
I'm assuming that release would work only once on each lock object:

  var lock = new WakeLock("screen");
  lock.release();
  lock.release(); // ignored, so it doesn't unlock any other component's lock

This makes coordination easier: each page component can easily create its own lock independently (without needing to create an iframe to get its own lock), and can't release any other component's lock.

-- 
regards, Kornel
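The semantics described above can be sketched in plain JS. This is a minimal sketch of the proposed behavior (not a real browser API, and only the "screen" lock type is modeled): each lock instance contributes one reference to a shared count, and release() is idempotent per instance:

```javascript
// The lock is "held" while this shared count is > 0.
let screenLockCount = 0;

class WakeLock {
  constructor() {
    this.released = false;
    screenLockCount++; // each instance retains the lock once
  }
  release() {
    if (this.released) return; // a second release() on this instance is ignored
    this.released = true;
    screenLockCount--;
  }
}
```

Because a component can only balance its own instance, a "clever" release-in-a-loop can never drop another component's reference.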
Re: [whatwg] Proposal: toDataURL “image/png” compression control
On 29.05.2014, at 23:19, Glenn Maynard wrote:

> Anyway, this has derailed the thread. We have an API for compression
> already. It already supports a compression level argument for JPEG.
> Having an equivalent argument for PNG is a no-brainer. The only
> difference to JPEG is that it should be described as the "compression
> level" rather than "quality level", since with PNG it has no effect on
> quality, only the file size and time it takes to compress.

I don't think it's a no-brainer. There are several ways it could be interpreted:

1. As zlib's compression level.

However, this has marginal utility, because these days even the maximum level, even on mobile devices, is reasonably fast. A lower level would be useful only for very large images on very slow devices, but UAs can have a good heuristic for ensuring reasonable compression time without any input from the page's author. I expect the exponential increase in computing power to make this setting completely irrelevant by the time it's implemented in most browsers.

2. Enable brute-force search for the best combination of zlib's compression level, memory level and window size.

OptiPNG and pngcrush show that "maximum" settings in zlib don't always give the smallest file, and the best compression is obtained by trying hundreds of combinations of zlib parameters. If browsers choose this approach for a high "compression level", that will be a couple of *orders of magnitude* slower than the first option. If different vendors don't agree on the order of magnitude of time it takes to compress an image, such a parameter could be unusable.

3. Compression parameters in other gzip implementations.

For example, the Zopfli compressor produces files smaller than zlib, but is much, much slower. Instead of a 1-9 scale it takes the "number of iterations" as the compression level.

And it can even use a totally different approach to the compression level: I've modified Zopfli[1] to make it aim for constant processing time on any machine.
Faster machines will just produce smaller files. Browsers could use this approach to ensure every PNG is compressed in < 0.5s or so, or the compression level parameter could be the number of seconds to spend on the compression.

And that's just for lossless PNG. It's possible to encode standard PNG in a *lossy* fashion (http://pngmini.com/lossypng.html), and there are a few ways to do it:

Images can be converted to PNG-8 (vector quantization is a form of lossy compression), and then the compression level could be interpreted as the number of unique colors or the mean square error of the quantized image (the latter option is used by http://pngquant.org). This generally makes files 3-4 times smaller, but has a limit on the maximum quality that can be achieved.

For higher quality it's possible to make truecolor PNG lossy by taking advantage of the fact that PNG filters are predictors. Instead of writing all pixels as they are in the input image, the encoder can replace some pixels with values matching the filters' prediction. This simplifies the data and generally halves the file size (and costs almost no extra CPU time). The threshold used to choose between source and predicted values for pixels acts similarly to JPEG's quality level.

So there are multiple ways such a parameter can be interpreted, and it can result in wildly different visual quality, file size and time taken to compress the image.

-- 
regards, Kornel

[1] https://github.com/pornel/zopfli
Re: [whatwg] Simplified <picture> element draft
On Sat, 04 Jan 2014 06:36:27 -0000, Adam Barth wrote:

> In order for the HTMLPreloadScanner to issue preload requests for
> <picture> elements, the HTMLPreloadScanner would need to be able to
> evaluate arbitrary media requests. That's difficult to do without joining
> the main thread because the media query engine works only on the main
> thread.

The solution I originally suggested was that when the selection algorithm encounters a media query it cannot evaluate yet, it aborts selection, waits until conditions change, and retries selection from the beginning. This means that:

* all images that can be selected by the preloader will be selected, and they'll be selected as soon as it is possible,
* the browser will never load any irrelevant image,
* browsers can optimize when and which MQs match without affecting correctness.

For example, take a <picture> whose first source has a resolution-based media query and whose second has a viewport-based one, inside an <iframe> without layout:

1. If you know the resolution and the first MQ matches, then load src=first immediately. Done!
2. If you don't know the viewport size, then wait until any conditions change (i.e. either the viewport size becomes known OR the resolution changes) and go to step 1.
3. If the second MQ matches, then load src=second immediately (might still happen in the preloader). Done!
4. If you can't evaluate a complex MQ in the preloader, then wait until control goes back to the main thread and go to step 1.

By "wait" here I mean the selection algorithm is deferred for the given picture only, and nothing else is blocked.

The equivalent of it in the current spec would be something like this. Before step 7 in http://picture.responsiveimages.org/#update-source-sets add:

  6b. If child has a media attribute, and its value is a valid media query
      which the UA temporarily cannot evaluate, then exit this
      sub-algorithm and /select an image source/ again after a UA-specific
      delay.

"temporarily cannot evaluate" is completely up to the UA. It may mean unknown sizes in iframes, it may mean non-trivial queries in the preloader, etc.
The "UA-specific delay" could be waiting for any media query in the <picture> to change, or it could simply mean ignoring the picture in the preloader and doing the evaluation properly on the main thread/when layout is calculated, etc.

"Exit this sub-algorithm" will either cause an earlier source that has unambiguously matched to be loaded, or an empty source set will cause the selection algorithm to do nothing.

-- 
regards, Kornel
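The abort-and-retry selection above can be sketched as a pure function. This is an illustration, not spec text; `evaluateMQ` models the UA's media query engine and returns `"unknown"` when the preloader can't evaluate a query yet:

```javascript
// Walk sources in order. On the first source whose query can't be
// evaluated yet, abort the whole selection; the caller retries from the
// beginning once conditions change. Otherwise the first match wins, and
// an exhausted list selects nothing.
function selectSource(sources, evaluateMQ) {
  for (const source of sources) {
    const match = source.media === undefined ? true : evaluateMQ(source.media);
    if (match === "unknown") return { retryLater: true }; // defer this picture only
    if (match === true) return { src: source.src };       // first match wins
  }
  return {}; // empty source set: the algorithm does nothing
}
```

Note that a source *after* an unevaluatable one never gets loaded speculatively, which is what guarantees "the browser will never load any irrelevant image".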
[whatwg] NodeList.forEach/map/filter still doesn't work
Everywhere on the web where NodeList.forEach() is mentioned, everybody agrees that it's something that is expected to work, but doesn't. It's followed by a list of excuses why it doesn't work, as if it were a completely intractable problem that nobody can ever fix in any way whatsoever.

Can we please switch efforts from explaining why it's broken to actually fixing it?

It's sad that I can't use document.querySelectorAll().filter().map().forEach() without patching prototypes myself.

ES6 Array.from(), even with syntactic sugar, is a band-aid. NodeList.forEach() still doesn't work, but should.

I don't think anybody cares for NodeList.forEach/map/filter/etc to be "real" Array functions, so I'd love to see even the simplest fix, like:

  NodeList.prototype.map = function(...whatever) {
      return Array.from(this).map(...whatever);
  };
  NodeList.prototype.forEach = function(...whatever) {
      return Array.from(this).forEach(...whatever);
  };

etc.

-- 
regards, Kornel Lesiński
Re: [whatwg] Styling form controls (Was: Re: Forms-related feedback)
On Wed, 04 Dec 2013 16:12:50 -0000, TJ VanToll wrote:

> The datepicker also shows the problem that using pseudo-elements as
> styling hooks presents. The calendars presented on mobile browsers and
> desktop browsers are radically different. Even if you wanted to
> standardize certain hooks, there is literally nothing in common across
> the implementations.

Maybe instead of coming up with one set of pseudo-elements that's limited to the lowest common denominator, we should have multiple, completely different sets of pseudo-elements for each kind of interface?

  input::calendar.month-view-grid ::first-week-row {...} /* typical desktop style */
  input::calendar.spin-wheel ::month-spinner {...}       /* iOS style */

(or any other syntax with cats/hats/dogs/pseudo-functions, as long as it groups pseudo-elements per kind of calendar UI)

This way developers assuming date pickers are grids with a month view could style specific pseudo-elements for this layout, and mobile browsers could ignore these styles completely.

-- 
regards, Kornel
Re: [whatwg] Simplified <picture> element draft
On Wed, 27 Nov 2013 00:48:56 -0000, Simon Pieters wrote:

> You introduce a proxy that needs to be tested to see that it works in
> different scenarios (e.g. removing an attribute, that events are
> forwarded properly, that it does not affect parts it shouldn't like
> document.images, that the context menu works, etc.). You introduce a (or
> two) new fallback mechanism. You haven't specified that <picture> should
> be able to be drawn on a canvas in 2d (and WebGL?).

Thanks, very good examples. Now I understand (although I wish specifying it "exactly like <img>" would make that easy enough).

-- 
regards, Kornel
Re: [whatwg] Simplified <picture> element draft
> The advantage of the scheme that zcorpan proposed is that there is no
> magic proxy; we just add a capability to <img> to select its source using
> more than just a src attribute. This has better fallback than your design
> and is easier to implement.

I believe that from a testing perspective both approaches are equivalent. The spec I propose *is* only another way to control the src of an image. The only difference is that I don't expose the controlling <img> to scripts. That may make it even simpler, because you can't have odd cases like an author moving/removing the controlling img, or setting values directly on the img that conflict with the picture's definitions.

-- 
regards, Kornel
Re: [whatwg] Simplified <picture> element draft
On 25 November 2013 10:59:15 Yoav Weiss wrote:

> On Mon, Nov 25, 2013 at 11:32 AM, Kornel Lesiński wrote:
>
>> If picture was explicitly controlled by img then websites could start
>> depending on that behavior, and we'd be stuck with it. OTOH picture can
>> have "native" DOM interface and still reuse img for implementation.
>
> I believe these interfaces would be something you'd need to test, so you
> would have testing duplication, even if you save code duplication.

Yes, you need to test the integration point, but you only need to test that assignment of one attribute affects the other. You don't need to repeat tests that test it deeper.

>> I do wonder however if the fallback img should be used as the equivalent
>> of a <source> to save authors a bit of repetition (in the selection
>> algorithm the first step would be "for each source or img child…"), or
>> perhaps be used as a last-resort fallback when no source matches (step 2
>> of the algorithm).
>
> I agree that it would make sense for authors.

Which variant do you think is better?

>> I've specified something like that. I think it can be as simple as a
>> flag that the preload scanner uses internally.
>
> Again, this is an issue with HTMLImageElement itself, not the preload
> scanner. It'd probably require modifications to the <img> section of the
> HTML spec.

I believe it won't be an issue in the approach I've specified - when the fallback img is separate from the controlling image.

Scripts can avoid creating the fallback img at all, because when scripting is enabled they will use the polyfill and can treat all UAs as supporting picture. In that case the fallback img would be like document.write("<img>") ;) Maybe the spec should have authoring guidelines for this?

The controlling image starts with no src, so it won't download anything that wasn't deliberately chosen through picture.

-- 
regards, Kornel
Re: [whatwg] Simplified <picture> element draft
On 25 November 2013 08:00:10 Yoav Weiss wrote:

> It contains some parts that I'm not sure have a consensus around them
> yet:
>
> * It defines <picture> as controlling <img>, where earlier on this list
>   we discussed mostly the opposite (<img> querying its parent <picture>,
>   if one exists)

The controlling image is a great idea. It greatly simplifies the spec, and hopefully implementations as well. I chose not to expose that implementation detail, assuming that one day (when all UAs and crawlers implement it) we will not need explicit fallback any more.

If picture was explicitly controlled by img then websites could start depending on that behavior, and we'd be stuck with it. OTOH picture can have a "native" DOM interface and still reuse img for implementation.

> * It defines <img> as a part of <picture>'s shadow DOM, which we need to
>   see how it fits with having fallback elements (which are necessary in
>   the near future).

I've added a section about the preloader. The img in fallback content should be ignored by the preloader. It's purely for picture-less UAs.

I do wonder however if the fallback img should be used as the equivalent of a <source> to save authors a bit of repetition (in the selection algorithm the first step would be "for each source or img child…"), or perhaps be used as a last-resort fallback when no source matches (step 2 of the algorithm).

> This proposal does contain srcset as a subcomponent, but it's not the
> same srcset as defined in the HTML spec, but a modified version based on
> improvements from the src-N spec (that cover the variable-width images
> use-case).

Indeed. This part of the spec isn't ironed out yet.

> The proposal will also require some changes to <img>. Specifically,
> <img>, when not created by JS, will have to avoid loading of resources
> until the element is added to the DOM, and can see if its direct parent
> is <picture>. If the parent is <picture>, <img> would then query the
> parent (or wait to be "controlled" by its parent); otherwise, it'll load
> its resources as usual.

I've specified something like that.
I think it can be as simple as a flag that the preload scanner uses internally.

I think we don't need to add any runtime behavior changes for this, as scripts constructing <picture> will not insert an explicit fallback node - it makes more sense to rely on the polyfill instead (which will use img with the correct src from the start).

-- 
regards, Kornel
[whatwg] Simplified <picture> element draft
I've written down a proposal for the simplified source selection algorithm:

http://geekhood.net/picture-element.html

This also includes a variant of the idea from the recent "<picture> redux" proposal to use an actual <img> element as the basis for the <picture> element definition.

This draft doesn't include all features of src-N *yet*, but I expect this to be added either via extended srcset syntax or via something like a sizes attribute, once there's consensus on how to approach this.

To simplify implementation even further I've allowed UAs to flatten the fallback DOM to a plaintext string (in case they need to emulate <img> for existing screen readers or accessibility APIs).

I've dropped usemap. It could be added, but I'm not sure if there is need for it.

I've specified very few IDL attributes. This area may need to be extended.

-- 
regards, Kornel
Re: [whatwg] <picture> redux
On Wed, 20 Nov 2013 17:25:07 -0000, Tab Atkins Jr. wrote:

> Simon Pieters wrote up Kornel's earlier approach to a saner, more
> palatable source selection algorithm for <picture> (rather than copying
> <video>/<source>). This approach also has a new wrinkle: <picture>
> *requires* an <img> child, and it's the <img> that still actually
> displays the image. The <picture> element is just a wrapper for the
> <img> + <source> elements, and provides a context for the source
> selection algorithm. This makes testing substantially easier, as we can
> limit ourselves to testing the source selection algorithm, and probably
> makes implementation easier as well.

Can we hide the "controlling" <img> in shadow DOM? And make HTMLPictureElement the interface that proxies relevant properties/events to the internal <img>?

Reuse of <img> is a great idea for simpler implementation and testing, but maybe we don't even need to expose that fact to authors.

--
regards, Kornel
Re: [whatwg] The src-N proposal
we can just add <source> elements rather than complicating microsyntax with attribute-within-attribute and/or extra layers of delimiters and escaping).

Authors already have ways of dealing with the verbosity of HTML and DOM APIs (templating, jQuery, etc.), and we have proposals for reducing repetition with orthogonal features like Media Query Variables, so I think we can afford to start with a somewhat verbose, but sane and straightforward syntax.

> The lesson we learnt from <video>/<source> isn't that the pattern is an
> easy choice. It's that we should avoid it if at all possible. :-)

<video> was undoubtedly painful, but I've looked at the test cases and the media selection algorithm, and I think the pain was caused by video-specific problems and the complexity of MediaElement algorithms and APIs, and is not inherent to the use of elements in HTML in general.

Images don't need to expose an API for buffering, seeking, playback states, etc. Image sources can be evaluated using a simple, stateless, atomic algorithm: basically the same algorithm as you'd use for an attribute, but instead of using a custom attribute parser you read attributes from child nodes.

--
regards, Kornel
Re: [whatwg] The src-N proposal
On Tue, 19 Nov 2013 22:07:33 -0000, Simon Pieters wrote:

> In http://lists.w3.org/Archives/Public/public-respimg/2013Oct/0045.html
> I discuss a problem that a new element would have, namely that it would
> require a new fallback mechanism and a lot of stuff would need to be
> duplicated from img.

Do we need usemap? We can probably drop it. We don't need to replicate lots of legacy features and quirks of <img>. I think the upside is that we can ship with almost no features, and re-add them only as necessary.

For the fallback: <canvas> is an existing example of an element with a fallback DOM, so browser vendors already have to implement (or have implemented) fallback for a <picture>-like element.

I would go further and simplify it by forbidding all interactive (focusable) elements in the fallback DOM. Canvas already forbids interactive elements with some exceptions, but for picture we don't even need those exceptions. This authoring rule can be validated easily, and allows UAs to avoid the real difficulty of handling focus in fallback.

To make <picture> easy to plug into existing ATs I suggest specifying that UAs MAY interpret fallback content as text extracted using the innerText algorithm (which preserves space between elements), with the additional rule that @alt from any <img> in the fallback is extracted as well (so <img alt="old alt"> as well as fancy fallback content will have good accessibility in all UAs). This should be zero extra work for implementors, since that's what they already do when copying a selection to the plain-text clipboard.

With plain text extracted from the fallback it will be possible to reuse accessibility interfaces designed for <img>. When implementations mature we may eventually be able to let authors rely on more structured fallback. In any case we're better off than with strictly-plaintext-forever <img alt>, and the first version of <picture> can be guaranteed to be easily implementable in terms of <img>.

> At this point we could change the name of the wrapping element to
> <picture> and basically have the same syntax as the current <picture>,
> except there would be a required <img> child element.
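The innerText-plus-@alt extraction suggested above can be sketched as a small helper (illustrative only; a real UA would use its existing innerText machinery, and the node-walking below is deliberately simplified):

```javascript
// Extract plain-text fallback: text nodes are kept, and any <img>
// contributes its @alt. This is a sketch, not the innerText algorithm.
function fallbackText(node) {
  if (node.nodeType === 3) return node.textContent;          // text node
  if (node.nodeName === 'IMG') return node.getAttribute('alt') || '';
  return Array.from(node.childNodes).map(fallbackText).join(' ');
}
```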
The polyfill implements <picture> using <img> (http://uniqname.github.io/x-picture/), so that's definitely a way to do a simple implementation. An <img> element will be de-facto required for a while as a fallback, but could it be optional eventually?

I think that even if browsers implement <picture> using <img>, the <img> element itself should be hidden in shadow DOM. If we don't explicitly define <picture> as a wrapper for <img>, then yes, we'll need separate test cases for <picture>, but:

- hopefully plenty of <img> cases can be adapted with little more than find'n'replace;
- we don't need to bring all the legacy baggage of <img>, so a bunch of tests for Netscape'isms can be deleted;
- the image element has weird stuff like the .complete property that can change synchronously. Kill it! With a clean slate we can define only a minimal, quirk-free API that is much easier to deal with;
- test cases are something that can be shared between browser vendors, and the community can help adapt test cases to <picture>, so we can spread the effort.

--
regards, Kornel
Re: [whatwg] The src-N proposal
On Wed, 20 Nov 2013 05:24:21 -0000, Bruno Racineux wrote:

> If your sources and breakpoints are hard-coded in your articles (stored
> in a DB), and you suddenly have to change your site's theme, or add a
> new image at the platform level, or a new resolution? What if one
> breakpoint is no longer relevant? Or what if you change designs with a
> completely new responsive approach? How does an inline syntax help me
> with that case? You can be stuck. That forces you to regenerate all the
> img src(s) of your articles with your new layout and new inline
> breakpoints.

I sympathize with the problem. Unfortunately we have a hard requirement of supporting the preload scanner, which means we absolutely cannot wait for any external file. And since we can't wait for any external file, we can't wait for stylesheets or any reusable centralized definition of breakpoints.

When HTTP/2 Push becomes a standard feature the preload scanner won't be so important any more, and we'll be able to revisit this.

> A centralized css-subset approach does not have such difficult
> problems. Verbosity aside, to me this all screams: RespIMGs has to be a
> CSS-related feature with centralization of custom MQs and srcset(s) in
> the <head>.

With the preload scanner limitation, definitions in the <head> are the best we could possibly do. I have proposed Media Query Variables intended to be used in
Re: [whatwg] The src-N proposal
On Mon, 18 Nov 2013 23:18:37 -0000, Bruno Racineux wrote:

> All I hear from implementors as a whole is that: you don't want to go
> the css imgset or image-set road, you won't use src-templates, and you
> don't want any new macro. Seriously, what is left?

Indeed, the discussions are difficult, but hopefully we're making progress.

> For all it's worth, my outside take on both srcset and src-N has always
> been that it's not DRY enough, and more unnecessary bloat to pages, due
> to the long unnecessary repetition of img-path(s) for each img of
> similar size, repeating the same pattern over and over for image
> galleries, and the lack of a src-template (or regex pattern) approach
> to this problem.

I agree that none of the current proposals is perfect and all have a degree of repetition and verbosity. However, the most terse syntaxes are starting to look like Perl. It's not always the best idea to squeeze every byte out of a syntax. Even if none of the existing proposals is perfect in terms of DRY, I think overall they're good enough to be useful.

I'm not concerned about verbosity, because gzip is excellent at removing the cost of repetition, so on the wire the most verbose and the most terse syntax cost about the same. In terms of memory footprint we're talking about a few attributes or elements that take bytes or single-digit kilobytes… while displaying megabytes of high-DPI RGBA bitmaps.

We should be able to add URL templates or another DRYing method later (especially to <source>, which can take additional attributes easily without complicating the syntax), and such layering/decoupling may actually be a more elegant architecture.

> I would consider src-N more friendly, with perhaps a new element
> dedicated to src-N(s), and proceed to include custom MQs in the head at
> the same time (which is inline css in the head anyway), to at least
> reduce some of its verbosity…

As you know there has been a proposal for Media Query Variables, so it seems quite probable that a similar thing can be added for other properties of responsive images as well.
One way to convince browser vendors that such a syntax is needed is to let them ship the basic version with full URLs; then you'll have proof that URL patterns emerge and authors complain about the verbosity (or not :)

> Either way, it's quite pathetic to watch implementors argue over two
> half-baked, quite verbose solutions, from a distance, after nearly 3
> years thinking of this… Even worse, suggesting to go ahead with
> something incomplete, not knowing what the future completion will
> actually consist of.

The issues and ideas discussed here look a lot like discussions in the RICG years ago, so hopefully we'll eventually come to the same conclusions as the RICG did ;)

--
regards, Kornel
Re: [whatwg] The src-N proposal
On Tue, 19 Nov 2013 01:12:12 -0000, Tab Atkins Jr. wrote:

> > AFAIK it makes it as easy to implement and as safe to use as src-N.
> > Simon, who initially raised concerns about use of <source> in
> > <picture>, found that solution acceptable[2]. I'd love to hear
> > feedback about the simplified, atomic <picture> from other vendors.
>
> The cost there is that <source> is now treated substantially
> differently than <source> in <video>, despite sharing a name.

The substantial difference is that it lacks a JS API exposing network/buffering state, but IMHO that's not a big loss, as those concepts are not as needed for pictures. IMHO the important thing is that on the surface (syntactical level) they're the same: multiple <source> elements where the first one that matches is used.

> Otherwise, though, I'm fine with this as well. The only innovation that
> src-N offers over <picture> is the variable-width images syntax, and
> that can be baked into <picture> as well.

That was exactly my thought. A combination of src-N features with the less contentious syntax would be ideal.

<source> can support a number of attributes, so if there are objections to some features or parts of the src-N syntax, it can be split into multiple attributes on <source> to be introduced gradually later/as needed (e.g. media, srcset, sizes, etc.) without risking the explosive complexity of combined microsyntaxes.

--
regards, Kornel
Re: [whatwg] The src-N proposal
On Mon, 18 Nov 2013 16:47:08 -0000, James Graham wrote:

> On 18/11/13 16:36, matmarquis.com wrote:
> > I recall that some of the more specific resistance was due to the
> > complication involved in implementing and testing existing media
> > elements, but I can't claim to understand precisely what manner of
> > browser-internal complications `source` elements brought to the table.
>
> The fundamental issue is atomicity; setting one or N attributes is an
> atomic operation from the point of view of script; creating N elements
> is not. This creates complexity because the algorithm has to deal with
> the possibility of DOM mutation changing the set of available sources
> before it has selected the correct one. I believe there was a proposal
> that simplified the semantics by ignoring mutations, but I hear it ran
> into problems with animated images, which I haven't understood in
> detail.

I agree that <source> as specified for <video>, and initially for <picture>, was a mess, but that doesn't have to be the case. The complexity was mainly caused by a stateful algorithm exposed to JS, which is not necessary for <picture>.

It *is* possible to have <picture> use N <source> elements atomically. I've specified a simplified selection algorithm[1] that achieves this. It is atomic from the JS perspective. Atomicity is achieved by always scheduling the selection algorithm to run on the next tick (event loop spin) after a mutation. This way JS can perform several mutations in a row without worrying about race conditions.

The algorithm I've specified is also stateless and works correctly with incomplete data (e.g. if a packet boundary happens to fall inside <picture>).

AFAIK it makes it as easy to implement and as safe to use as src-N. Simon, who initially raised concerns about use of <source> in <picture>, found that solution acceptable[2]. I'd love to hear feedback about the simplified, atomic <picture> from other vendors.

[1] https://github.com/ResponsiveImagesCG/picture-element/issues/62#issuecomment-24479164
[2] http://lists.w3.org/Archives/Public/public-html/2013Sep/0185.html

--
regards, Kornel
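The next-tick coalescing described above can be sketched as a small helper (a hypothetical illustration, not the spec text; `defer` stands in for the UA's task queue and is injectable here only to keep the sketch self-contained):

```javascript
// Coalesce any number of DOM mutations into a single run of the
// (stateless) source-selection algorithm on the next event-loop spin.
function createSelectionScheduler(runSelection, defer) {
  let scheduled = false;
  return function onMutation() {
    if (scheduled) return;        // a run is already pending
    scheduled = true;
    defer(() => {
      scheduled = false;
      runSelection();             // sees the final, settled DOM state
    });
  };
}
```

Because selection runs only after script yields, several mutations in a row behave atomically from the script's point of view.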
Re: [whatwg] The src-N proposal
On Sun, 10 Nov 2013 08:20:33 -0000, Adam Barth wrote:

> > This is similar to AppCache vs Alex's ServiceWorkers. AppCache
> > addresses a small set of use cases, probably not enough.
> > ServiceWorkers provides the tools to address a lot of use cases, but
> > isn't directly itself a solution; you use it to build solutions.
> > Another example would be the WebForms2 repetition model vs Rafael's
> > <template>. The repetition model idea solved some specific use cases,
> > but trying to make it solve all use cases would be a hugely
> > complicated endeavour and would be really ugly. <template> provides a
> > tool with which you can build specific solutions, but isn't itself a
> > direct solution.
>
> I basically agree with Ian. Let's address the simple use cases first
> (i.e., device-pixel-ratio switching) and worry about the more complex
> use cases in the future.

If we go down that path I'm afraid we'll end up with a horrible mess of several incomplete client-side and server-side solutions cobbled together with preloader-killing scripts. The closest thing to what Ian is suggesting is implemented with , but due to standardization failure it won't be able to benefit from the image preloader or offer users/UAs the ability to control image selection.

Basically authors will hate us. We've been going in circles for a couple of years now and all we have to offer is an incomplete solution? And browser vendors can't even agree which one of the half-baked solutions it is going to be :(

--
regards, Kornel
Re: [whatwg] The src-N proposal
> * The developer community and the RICG are rallying behind src-N, with
> work on <picture> being discontinued in favor of src-N.

I'd like to clarify that src-N got support from the RICG on the assumption that <picture> has been rejected by browser vendors and has no future. However, many members have expressed that they prefer the <picture> syntax over src-N.

--
regards, Kornel
[whatwg] Image.complete in broken state
The spec states:

> The IDL attribute `complete` must return true if […]
> - The final task that is queued by the networking task source once the
>   resource has been fetched has been queued, but has not yet been run,
>   and the img element is not in the broken state.
> - The img element is completely available.

http://www.whatwg.org/specs/web-apps/current-work/multipage/embedded-content-1.html#dom-img-complete

If I understand correctly, the spec calls for Image.complete to be false when the image is broken, and this doesn't match implementations. At least Firefox, Chrome and Safari set image.complete == true when the image is broken.

Test case:

Having complete == true set on broken images is actually useful: it allows distinguishing between images that haven't been loaded yet and images that have been loaded but failed to decode (.complete == true && .naturalWidth == 0).

--
regards, Kornel
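The three observable states described above can be sketched as a tiny helper (an illustrative function, not part of any spec; it assumes the behavior browsers actually ship, i.e. complete == true for broken images):

```javascript
// Classify an image's state from the two properties discussed above.
// Note: this matches shipped browser behavior, not the spec text.
function imageState(img) {
  if (!img.complete) return 'loading';         // fetch not finished yet
  if (img.naturalWidth === 0) return 'broken'; // finished, failed to decode
  return 'loaded';                             // completely available
}
```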
Re: [whatwg] High-density canvases
On Tue, 10 Sep 2013 21:22:51 +0100, Dean Jackson wrote:

> I think there are two separate things a developer might want:
> - the number of actual pixels that correspond to 1 CSS px without zoom
> - the page zoom
>
> If you merge the two, then an unsuspecting developer might think that
> the user has zoomed in by 2x on an iPhone, and decide to make things
> smaller.

Do you have an example of a page that does make things smaller to counter the zoom? Are you referring to some iPhone-specific workarounds (like position:fixed elements being problematic when zoomed)?

I assumed that sites which don't like being zoomed in would just block zooming via the viewport <meta> tag.

--
regards, Kornel
Re: [whatwg] Proposal: Adding methods like getElementById and getElementsByTagName to DocumentFragments
On Fri, 06 Sep 2013 13:20:01 +0100, Simon Pieters wrote:

> Such a function already exists in the wild btw:
> http://mothereff.in/css-escapes
>
> So the use case is getting an element by id with an "untrusted" id as
> input, in an element or document fragment as opposed to the document?

I wouldn't call it "untrusted". It's needed to correctly find an arbitrary ID. It's not too eccentric to have non-alphanumeric IDs. For example, you need to use `[]` in a form element name to receive multiple values in PHP, and it makes sense for form-generating libraries to use the same name and ID.

I don't understand the "deprecation" of getElementById(). querySelector('#'+CSS.escapeIdent(id)) is significantly worse: less readable, slower (generates garbage strings) and error-prone (unescaped incorrect use is much easier than the correct use). It's like deprecating indexOf() because properly-escaped regular expressions can do the same.

getElementById() is a very well-known API. It's pretty convenient. It cannot be removed from the platform, so every browser already has to implement it, and the cost of exposing it on document fragments should be minimal. Maybe it's not "cool", but keeping it away from document fragments buys nothing and just makes the platform less consistent.

--
regards, Kornel
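To illustrate the escaping burden, here is a rough sketch of the kind of helper that would be needed (a simplified, hypothetical stand-in for the real CSS.escape() algorithm, which additionally handles leading digits, control characters, and other edge cases):

```javascript
// Simplified identifier escaping: backslash-escape anything outside
// the safe identifier characters. NOT the full CSS.escape() algorithm.
function cssEscapeIdent(ident) {
  return ident.replace(/[^a-zA-Z0-9_\u00A0-\uFFFF-]/g, ch => '\\' + ch);
}

// e.g. a PHP-style field name must be escaped before building a selector:
//   fragment.querySelector('#' + cssEscapeIdent('choices[]'))
// versus the direct, mistake-proof:
//   fragment.getElementById('choices[]')
```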
Re: [whatwg] BinaryEncoding for Typed Arrays using window.btoa and window.atob
On Mon, 05 Aug 2013 21:39:22 +0100, Chang Shu wrote:

> I see your point now, Simon. Technically both approaches should work.
> As you said, yours has the limitation that the implementation does not
> know which view to return unless you provide an enum-type parameter
> instead of a boolean to atob.

In that case it'd be better to return an ArrayBuffer, so the user can wrap it in any view type they want (including DataView).

--
regards, Kornel
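The point about returning a raw buffer can be illustrated with standard typed arrays (the binary-returning atob variant itself is hypothetical; the views below are ordinary platform APIs):

```javascript
// If a hypothetical binary atob() returned an ArrayBuffer, the caller
// could choose any view over it, so no enum parameter would be needed.
const buf   = new ArrayBuffer(8);
const bytes = new Uint8Array(buf);   // byte view
const words = new Uint32Array(buf);  // 32-bit view over the same memory
const view  = new DataView(buf);     // explicit-endianness accessor
```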
Re: [whatwg] Blurry lines in 2D Canvas (and SVG)
On Wed, 24 Jul 2013 02:13:19 +0100, Rik Cabanier wrote:

> > It's not intuitive. It's a pretty common pitfall, but it's logical.
> > For 1-pixel lines it could be fixed by allowing authors to specify
> > that a path should be stroked with lines aligned to the inside or
> > outside of the path (which is a useful feature on its own).
>
> Sure, but how can we fix this? It's not very intuitive that I have to
> keep track of the devicePixelRatio (and the current CTM?) to get crisp
> lines.

To what extent does it need to be "fixed"? "Manually" snapping lines to canvas pixels isn't too hard, e.g. subtracting 0.5 from x/y and adding 1 to width/height gives a pixel-aligned rectangle just outside the box. It does get trickier with transforms, indeed :(

Is it enough to snap to canvas pixels? (The future of "HD" canvas is uncertain, so authors may need to resize the canvas to match devicePixelRatio anyway.) Is it enough if there were strokeOutside()/strokeInside() that make untransformed lines pixel-aligned? Or is it necessary to have snapping for odd-width lines that are stroked centered on a path? Do authors expect lines in a canvas with non-integer transforms to be crisp? Should arc() and bezier curves also be snapped? What if you want a line that touches the curve?

> What we need is that artwork 'snaps' to the native pixels while still
> being antialiased.

How should snapping be done? If fill() of a 2x2 rect draws:

XX
XX

how would stroke() look?

.XX.
.XX.

or

..
..

or

...
.X.
...

If you have a path that is 2.5 device pixels wide, is it going to be snapped to a different width depending on whether you draw it at (0, 0) or (0.1, 0)? Would that also make circles into ellipses?

Snapping makes animated slow movement choppy, so authors may also want the ability to disable it for selected paths/drawing operations, or even for each axis separately (e.g. to smoothly animate horizontal movement while the object stays snapped to pixels vertically, etc.)

--
regards, Kornel
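The half-pixel adjustment mentioned above can be written as a tiny helper (illustrative only; it assumes an identity transform and a 1px centered stroke):

```javascript
// Compute a stroke rectangle whose 1px centered stroke lands exactly
// on device pixels, just outside the integer-aligned box [x, y, w, h].
function snapOutside(x, y, w, h) {
  return { x: x - 0.5, y: y - 0.5, w: w + 1, h: h + 1 };
}

// usage (hypothetical): const r = snapOutside(10, 10, 20, 20);
// ctx.strokeRect(r.x, r.y, r.w, r.h);  // crisp instead of blurry
```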
Re: [whatwg] Blurry lines in 2D Canvas (and SVG)
On Wed, 24 Jul 2013 01:18:35 +0100, David Dailey wrote:

> Just affirming what you've said in SVG:
> http://cs.sru.edu/~ddailey/svg/edgeblurs.svg
> The middle rects are crisp, having been merely translated leftward and
> downward by half a pixel. Zooming in from the browser rectifies the
> problem (as expected) after a single tick. I remember folks discussing
> sub-pixel antialiasing quite a bit on the SVG lists circa fall/winter
> 2011. It seemed to cause some troubles for D3. Is that the same issue?

It's not a bug, it's a feature ;) The line is centered around the edge of the box. You haven't specified whether you want the line to be outside or inside the box (or overlapping the left edge of the box but not the right, etc.), so you get a line in the middle, approximated as well as possible.

It's not intuitive. It's a pretty common pitfall, but it's logical. For 1-pixel lines it could be fixed by allowing authors to specify that a path should be stroked with lines aligned to the inside or outside of the path (which is a useful feature on its own).

--
regards, Kornel
Re: [whatwg] Script preloading, ES6 modules
ES6 modules[1] have a script loader API[2]. That API is pretty powerful, to the point that it can emulate other script loaders, load files that are not ES6 modules, and even load text files that aren't JS (intended for compilation of coffeescript-like languages, but it could be abused for anything):

https://gist.github.com/wycats/51c96e3adcdb3a68cbc3#using-existing-libraries-as-modules

There's a very high overlap between module dependencies and
Re: [whatwg] Forcing orientation in content
On Sat, 13 Jul 2013 08:13:03 +0100, Tobie Langel wrote:

> It is not uncommon for mobile experiences to rely on the accelerometer
> as an input mechanism, for example to control page scrolling (e.g.
> Instapaper) or for gameplay. In such cases, auto-rotation of the
> viewport is completely disruptive to the user's experience and needs to
> be inhibited.

Indeed, this ruins accelerometer-based games. It's also slightly problematic in applications using the compass (augmented reality or navigation apps pointing the user towards a direction): auto-rotation misfires when a person rotates themselves while holding the phone in front of them.

Inhibiting auto-rotation may be sufficient, and shouldn't be too annoying. Browsers might even have an option to unlock rotation (e.g. Instapaper shows a rotation-lock switch when you shake the device). I suspect that games designed to be locked to a particular screen orientation will force users to rotate the device to the desired orientation first (e.g. I can imagine racing games refusing to start the race until the user rotates the device to landscape). But maybe that's a good thing?

Since a specific, locked screen orientation is mostly needed in games, and forced rotation is disruptive to other things on the screen (e.g. moving the buttons/address bar to another physical edge of the screen), maybe it should be tied to the Fullscreen API?

element.requestFullscreen({orientation: 'landscape', autorotation: false})

--
regards, Kornel
Re: [whatwg] Script preloading, non-script dependencies
On Tue, 09 Jul 2013 20:39:45 +0100, Ian Hickson wrote:

> Would something like this, based on proposals from a variety of people
> in the past, work for your needs?
>
> 1. Add a "dependencies" attribute to <script> that can point to other
> scripts, to indicate that execution of this script should be delayed
> until all other scripts that are (a) earlier in the tree order and (b)
> identified by this attribute have executed.
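As a sketch of how the proposed attribute might look in markup (hypothetical; the attribute name and ID-based lookup are taken from the quoted proposal, not from any shipped feature):

```html
<!-- jquery.js may download and execute whenever it's ready -->
<script id="jquery" src="jquery.js" async></script>

<!-- plugin.js may download in parallel, but its execution is held
     back until the script identified as "jquery" has executed -->
<script src="plugin.js" async dependencies="jquery"></script>
```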