Re: Shrinking existing libraries as a goal
On Thu, May 17, 2012 at 3:21 PM, Yehuda Katz wyc...@gmail.com wrote:

> I am working on it. I was just getting some feedback on the general idea before I sunk a bunch of time into it.

For what it's worth, I definitely support this idea on a general level. However, as others have pointed out, the devil's in the details, so I'm looking forward to those :) Of course, the ideal proposals are ones that not only shrink existing libraries, but also help people who aren't using libraries at all and instead use the DOM directly.

/ Jonas
Comments, Spec Bugs and Test Case are Always Welcome! [Was: Re: Shrinking existing libraries as a goal]
[ My previous response was accidentally sent before it should have been (delete it)... ]

On 5/17/12 7:03 PM, ext Julian Aubourg wrote:

> To me the biggest [...]

Comments on all of WebApps' specs are always welcome, regardless of where the spec is in the W3C's Recommendation process.

> I've been meaning to do a test suite to help provide guidance to implementors (something I figure would be much more useful than yet another round of specs) but I admit I haven't got to it yet. Dunno how people feel about this, but I think providing test suites that browsers could test against as a way to prevent regressions and inconsistencies could help a lot as a starting point.

The group's PubStatus page http://www.w3.org/2008/webapps/wiki/PubStatus enumerates each spec, and each spec has (or will have): 1) a link to the spec's Bugzilla component; 2) a link to the spec's Test Suite. We have a need for test cases for just about every spec. The test submission process is described in http://www.w3.org/2008/webapps/wiki/Submission. For WebApps' testing-related discussions, please use the group's public-webapps-testsu...@w3.org list.

-Thanks, AB
Re: Shrinking existing libraries as a goal
A related TL;DR observation...

While we may get 5 things that really help shrink the current set of problems, adding APIs inevitably introduces new ones. In the meantime, nothing stands still - lots of specs are introducing lots of new APIs. Today's 'modern browsers' are the ones we'll all be swearing at a year or two from now. New APIs allow people to think about things in new ways. Given new APIs, new ideas will develop (either in popular existing libraries, or even whole new ones). Ideas spawn more ideas - offshoots, competitors, etc. In the long term, changes like the ones being discussed will probably serve more to mitigate libraries' otherwise inevitable continued growth.

More interestingly though, to Tab's point - all of the things that he explained will happen with all of those new APIs too. New ideas will spawn competitors and better APIs that are normalized by libraries, etc. They will compete and evolve until eventually it becomes self-evident that the user community at large still much prefers something other than whatever is actually implemented in the browser. It seems to me that this is inevitable, happens with all software, and is actually kind of a good thing...

I'm not exactly sure what value this observation has, other than to maybe explain why I think that on this front libraries have a few important advantages, and to wonder aloud whether there is somehow a way to change the model/process to incorporate those advantages more directly. In particular, the advantages are about real-world competition and less need to be absolutely, positively, fully universal. The advantages of the competition aspect, I think, cannot be overstated - they play in at virtually every point along the whole lifecycle. For all of the intelligence on the committees and on these lists (and it's a lot), it's actually a pretty small group of people ultimately proposing things for the whole world.

By their very nature, committees (and the vendors who are heavily involved) also have to consider the very fringe cases, and the browser vendors have to enter into things knowing that every change means more potential problems, and that everything has to work without breaking anything existing. Libraries might have a small number of authors, but their user base starts out small too. The fact that it's also the author's choice to opt in to using a library means that libraries are much freer to rev and version, to say 'don't do that, instead do this' for some of the very fringe cases - or even to consciously decide that a given use case is not one they're interested in supporting.

With the standards process, even once we get to vendor implementations, features start out in test builds or require flags to enable. While that's good, it's really more of a test for uniform compliance and a preview for/by a group of mavens. It means that features/APIs cannot actually be practically used in developing real pages/sites, and that is a huge disadvantage that libraries don't generally have. Often it isn't until thousands and thousands of average developers have had significant time to really live with something in the real world (actually delivering product) that it becomes evident that it is overly cumbersome or somehow falls short for what turn out to be unexpectedly common cases.

Finally, the whole point of these committees is to arrive at standards, not to compete. In practice, however, they also commonly resolve differences after the fact (the standard is revised to match what is implemented and now can't change). Libraries are usually the inherent opposite - they want competition first, and standardization only after things have wide consensus. These are the kinds of things that drive innovation and the competition of ideas, which ultimately help define and evolve what the community at large sees as good.
I'm not exactly sure how you would go about changing the model/process to encourage/foster that sort of inverse relationship while simultaneously focusing on standards... tricky. Maybe some of the very smart people on this list have some thoughts?

-Brian
Re: [fullscreen] fullscreenEnabled and the fullscreen enabled flag
Chris Pearce is not on this mailing list. Chris, are you okay with moving the discussion here? Is there anyone else who should be kept in the loop?

On Thu, May 17, 2012 at 11:21 PM, Edward O'Connor eocon...@apple.com wrote:

> Document.fullscreenEnabled is not defined in normative spec prose. It is mentioned twice in the spec: once in the IDL block at the top of §4 API, and finally in the sentence "the fullscreenEnabled attribute must return true if the context object and all ancestor browsing context's documents have their fullscreen enabled flag set, or false otherwise." I expected to find a sentence like (but clearer than) the following somewhere in §3 Model or §4 API: "All documents have a fullscreenEnabled property which, if true, signals that it is possible for elements within the given document to enter fullscreen mode." Also, §3 Model claims that HTML defines under what conditions the fullscreen enabled flag is set, but I found no mention of this flag in whatwg.org/html.

https://www.w3.org/Bugs/Public/show_bug.cgi?id=16709 will make sure all of this is defined. I guess we could add some more informative text at some point.

-- Anne — Opera Software http://annevankesteren.nl/ http://www.opera.com/
Re: Implied Context Parsing (DocumentFragment.innerHTML, or similar) proposal details to be sorted out
Not that I want to start another round of bike-shedding, but there is one clear distinction between innerHTML and createContextualFragment: innerHTML leaves the already-started flag set on parsed script elements, but createContextualFragment does not (or rather, it unsets it after the fragment parsing algorithm has run). See http://html5.org/specs/dom-parsing.html#dom-range-createcontextualfragment

There appears to be a consensus to use document.parse (which is fine with me), so I would like to double-check which behavior we're picking. IMO, the only sane choice is to unset the already-started flag, since doing otherwise implies that script elements parsed by document.parse won't be executed when inserted into a document. While we could give template elements different behavior, I would rather have the same behavior across all 3 APIs (createContextualFragment, parse, and the template element) and let innerHTML be the outlier for legacy reasons. (Note: I intend to fix the bug in WebKit that the already-started flag isn't unset in createContextualFragment.)

- Ryosuke
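To make the distinction above concrete, here is a toy model of the already-started flag, runnable outside a browser (plain objects stand in for the DOM; the function names and the `unsetAlreadyStarted` option are mine, chosen only to mirror the behaviors being compared, not any real API):

```javascript
// Toy model: a script element whose already-started flag is set will NOT
// execute when later inserted into a document; one whose flag is unset will.
function parseFragment(markup, { unsetAlreadyStarted }) {
  // Pretend-parse: every <script> found becomes a node with the flag set,
  // as the fragment parsing algorithm initially leaves it.
  const scripts = (markup.match(/<script>/g) || []).map(() => ({
    alreadyStarted: true
  }));
  if (unsetAlreadyStarted) {
    // createContextualFragment (and, per this thread, document.parse)
    // unset the flag after parsing, so the scripts run on insertion.
    scripts.forEach(s => { s.alreadyStarted = false; });
  }
  return scripts;
}

// Returns how many of the parsed scripts would execute on insertion.
function countExecutedOnInsert(scripts) {
  return scripts.filter(s => !s.alreadyStarted).length;
}
```

With the innerHTML-like behavior (flag left set) no script executes on insertion; with the createContextualFragment-like behavior every script does, which is the "only sane choice" argued for above.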
Re: Shrinking existing libraries as a goal
On May 17, 2012, at 10:58 PM, Jonas Sicking jo...@sicking.cc wrote:

> On Thu, May 17, 2012 at 3:21 PM, Yehuda Katz wyc...@gmail.com wrote:
>> I am working on it. I was just getting some feedback on the general idea before I sunk a bunch of time into it.
>
> For what it's worth, I definitely support this idea too on a general level. However, as others have pointed out, the devil's in the details, so looking forward to those :) Of course, the ideal proposals are ones that not only shrink existing libraries, but also help people that aren't using libraries at all but rather use the DOM directly.

I also agree that providing functionality which can help reduce the size of JS libraries is a good goal (though one of many), and that the merits of specific proposals depend on the details. One aspect of this that can be challenging is finding functionality that will allow a broad range of libraries to shrink, rather than only one or a few.

- Maciej
Proposal: add websocket close codes for server not found and/or too many websockets open
So the WebSocket spec is a little vague on how JS is notified when the targeted WebSocket server is down/nonexistent/etc. Firefox fires an 'error' event when this happens, based on this language in the W3C spec: "if the status code received from the server is not 101 (e.g. it is a redirect), the user agent must fail the websocket connection". Chrome does not call onerror for this, so we have a difference here. The language in the spec isn't really clear on whether it covers the connection-never-happened case.

Both Chrome and Firefox (I haven't tested other browsers/clients) then call close with code=1006, which seems to be the best code available in RFC 6455, but the language there isn't great either: "to indicate that the connection was closed abnormally, e.g., without sending or receiving a Close control frame". There's essentially no mention in either spec of what happens when there never was any connection to the server. It would be useful to be clear about whether onerror should be called here. I'm also wondering if it would be useful to have a dedicated close code for this case ('server not available').

Also: I expect every browser that implements WebSockets will have some limit on the number of sockets it allows to be open at once (to prevent DoS attacks, if nothing else). I'm not sure what the right close code for that is. Ideas? Perhaps we could use a dedicated code for this case too.

Jason Duell
Mozilla
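The ambiguity described above is easy to see from the client side: under RFC 6455, everything abnormal collapses into code 1006. A small sketch of close-code handling (the RFC-assigned codes below are real; the point is that no code distinguishes "server never reachable" or "too many sockets", which is exactly what this proposal asks for):

```javascript
// Map RFC 6455 close codes to human-readable descriptions.
// 1006 is reserved: it is never sent on the wire, only reported locally,
// and it covers BOTH a mid-stream drop and a connection that never opened.
function describeClose(code) {
  switch (code) {
    case 1000: return "normal closure";
    case 1001: return "endpoint going away";
    case 1002: return "protocol error";
    case 1006: return "abnormal closure (no Close frame; possibly never connected)";
    default:   return "unknown or unassigned code " + code;
  }
}
```

A dedicated code for "server not available" would let the `case 1006` branch above stop guessing, which is the gap the proposal targets.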
Re: Shrinking existing libraries as a goal
To me the biggest abomination of all is the XMLHttpRequest object:
- the spec is probably one of the most complex I've seen
- yet, vast portions are left to interpretation or even not specified at all:
  - the local filesystem comes to mind,
  - also, every browser has its own specific way of notifying non-applicative errors (like network errors):
    - a specific status,
    - an unhandleable asynchronously thrown exception,
    - an exception thrown when accessing a field,
    - etc.

And that's just the tip of the iceberg. It's got to a point where the almighty xhr bleeds through abstractions and makes it impossible to design a proper API (at least not one that doesn't leak memory like crazy). Finally, fixing xhr issues always seems to be a low-priority item in browser bug trackers, because there's always some kind of workaround that libraries like jQuery have to put in their code (provided it can be feature-tested, which most of the time it cannot).

I've been meaning to do a test suite to help provide guidance to implementors (something I figure would be much more useful than yet another round of specs) but I admit I haven't got to it yet. Dunno how people feel about this, but I think providing test suites that browsers could test against, as a way to prevent regressions and inconsistencies, could help a lot as a starting point.

2012/5/18 Yehuda Katz wyc...@gmail.com

> I am working on it. I was just getting some feedback on the general idea before I sunk a bunch of time into it. Keep an eye out :D
> Yehuda Katz (ph) 718.877.1325
>
> On Thu, May 17, 2012 at 3:18 PM, Brian Kardell bkard...@gmail.com wrote:
>> Has anyone compiled a more general and easy-to-reference list of the stuff jQuery has to normalize across browsers new and old? For example: ready, event models in general, query selector differences, etc.?
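As one concrete instance of the per-browser xhr workarounds mentioned above, here is a library-style sketch of status normalization. The 1223 case is a well-known IE quirk that libraries of this era papered over; the helper name and the file:// heuristic are my assumptions for illustration, not any library's actual code:

```javascript
// Normalize an XHR status before handing it to user code.
// - IE8/9's ActiveX XHR reports HTTP 204 (No Content) as status 1223.
// - Some browsers report status 0 for local file:// responses that
//   actually succeeded, so a non-empty body is used as a heuristic.
function normalizeStatus(status, responseText) {
  if (status === 1223) return 204;               // IE quirk
  if (status === 0 && responseText) return 200;  // local-file heuristic
  return status;
}
```

The point of the example is the one made above: none of this is feature-testable, so the mapping has to ship in every library forever.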
On May 17, 2012 3:52 PM, Rick Waldron waldron.r...@gmail.com wrote:

> On Thu, May 17, 2012 at 3:21 PM, Brian Kardell bkard...@gmail.com wrote:
>> So, out of curiosity - do you have a list of things? I'm wondering where some efforts fall in all of this - whether they are good or bad on this scale, etc... For example: querySelectorAll - it has a few significant differences from jQuery, both in terms of what it will return (jQuery uses getElementById in the case that someone does #, for example, but querySelectorAll doesn't do that if there are multiple instances of the same id in the tree)
>
> Which is an abomination for developers to deal with, considering that the ID attribute value must be unique amongst all the IDs in the element's home subtree [1]. qSA should've been spec'ed to enforce the definition of an ID by only returning the first match for an ID selector - devs would've learned quickly how that worked; since it doesn't, and since getElementById is faster, jQuery must take on the additional code burden, via a cover API, in order to make a reasonably usable DOM querying interface. jQuery says you're welcome.
>
>> and performance (this example illustrates both - since jQuery is doing the simpler thing in all cases, it is actually able to be faster (though technically not correct))
>
> I'd argue that qSA, in its own contradictory specification, is not correct.

It has been argued in the past - I'm taking no position here, just noting.
For posterity (not you specifically, but for the benefit of those who don't follow so closely): the HTML link also references DOM Core, which has stated for some time that getElementById should return the _first_ element with that ID in the document (implying that there could be more than one) [a]. And despite whatever CSS has said since day one (IDs are unique in a doc) [b], a quick check in your favorite browser will show that CSS doesn't care; it will style all IDs that match. So basically, qSA matches CSS, which does kind of make sense to me... I'd love to see it corrected in CSS too (first element with that ID if there are more than one), but it has been argued that a lot of stuff (more than we'd like to admit) would break, in some very difficult ways.

Previously, this was something the browser APIs just didn't offer at all -- now they offer it, but jQuery has mitigation to do in order to use the new APIs effectively, since they do not have parity. Yes, we're trying to reduce the amount of mitigation that libraries need in order to implement reasonable APIs. This is a multi-view discussion: short and long term.

So can someone name specific items? Would qSA / find be pretty high on that list? Is it better for jQuery (specifically) that we have them in their current state, or worse? Just curious.

TBH, the current state can't get any worse, though I'm sure it will.
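The divergence this thread keeps circling can be shown with a minimal, runnable stub (plain objects stand in for the DOM so this runs anywhere; the stub's two methods mirror the behaviors described above: qSA, like CSS, matches every element with the ID, while getElementById returns only the first). The fast-path `query` helper is a simplified sketch of the jQuery-style cover API, not its actual source:

```javascript
// Two elements share the id "dup" -- invalid HTML, but browsers allow it.
const stubDoc = {
  nodes: [{ id: "dup", tag: "div" }, { id: "dup", tag: "span" }],
  getElementById(id) {
    // DOM Core behavior: first element with that ID.
    return this.nodes.find(n => n.id === id) || null;
  },
  querySelectorAll(selector) {
    // qSA behavior (matching CSS): every element the selector matches.
    const id = selector.slice(1);
    return this.nodes.filter(n => n.id === id);
  }
};

// Library-style fast path: route a bare "#id" selector to the faster,
// first-match-only getElementById instead of querySelectorAll.
function query(selector, doc) {
  const m = /^#([\w-]+)$/.exec(selector);
  if (m) {
    const el = doc.getElementById(m[1]);
    return el ? [el] : [];
  }
  return doc.querySelectorAll(selector);
}
```

So `query("#dup", stubDoc)` yields one element while `stubDoc.querySelectorAll("#dup")` yields two: faster, and arguably more correct given ID uniqueness, but technically diverging from qSA, which is exactly the tension debated above.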