Re: [whatwg] HTML 5 video tag questions
On Mon, 15 Jun 2009 07:27:18 +0200, Tab Atkins Jr. jackalm...@gmail.com wrote:
> On Sun, Jun 14, 2009 at 8:46 PM, jjcogliati-wha...@yahoo.com wrote:
>> I read section 4.8.7 The video element and I have some questions:
>> 1. What happens if the user agent supports the video tag but does not support the particular video codec that the video file has? Should it display the fallback content in that case, and if so, can a video tag be put inside another video tag?
> If the particular codec is not supported, it displays the fallback content instead.

No. If the particular codec is not supported then you get a blank box.

> If you want to offer multiple formats to cater to disparate UAs, give <video> some <source> children rather than @src. It will try each source in turn until it finds one it can play.

Yep.

-- 
Simon Pieters
Opera Software
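The multiple-`<source>` approach Simon confirms can be sketched as follows. This is a hedged illustration; the file names and MIME types are made up, and note (per the rest of this thread) that the fallback content is shown only by UAs that do not support `<video>` at all, not when no codec matches:

```html
<!-- Offer multiple encodings; the UA tries each source in turn
     and plays the first one it can. -->
<video controls>
  <source src="clip.ogv" type="video/ogg">
  <source src="clip.mp4" type="video/mp4">
  <!-- Fallback: rendered only by UAs without <video> support at all. -->
  <p>Your browser does not support HTML5 video.
     <a href="clip.ogv">Download the clip</a> instead.</p>
</video>
```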
Re: [whatwg] HTML 5 video tag questions
On Sun, Jun 14, 2009 at 11:08 PM, Simon Pieters sim...@opera.com wrote:
> On Mon, 15 Jun 2009 07:27:18 +0200, Tab Atkins Jr. jackalm...@gmail.com wrote:
>> On Sun, Jun 14, 2009 at 8:46 PM, jjcogliati-wha...@yahoo.com wrote:
>>> I read section 4.8.7 The video element and I have some questions:
>>> 1. What happens if the user agent supports the video tag but does not support the particular video codec that the video file has? Should it display the fallback content in that case, and if so, can a video tag be put inside another video tag?
>> If the particular codec is not supported, it displays the fallback content instead.
> No. If the particular codec is not supported then you get a blank box.

Hmm.. is that good? What if you want to use an <object> (to use flash or java) or an <img> as fallback?

/ Jonas
Re: [whatwg] HTML 5 video tag questions
On Mon, Jun 15, 2009 at 1:46 PM, jjcogliati-wha...@yahoo.com wrote:
> 1. What happens if the user agent supports the video tag but does not support the particular video codec that the video file has? Should it display the fallback content in that case, and if so, can a video tag be put inside another video tag?

It does not display the fallback if the codec/format is not supported. The fallback is only displayed in a browser if the video element is not supported at all.

> 2. What is the recommended way for website authors to determine what video and audio codecs and containers are supported by a user agent?

Ideally all user agents will have one codec that is supported across all implementations. Failing that, there's a JavaScript API for querying codec support. Look for 'canPlayType'.

Chris
-- 
http://www.bluishcoder.co.nz
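The canPlayType API Chris mentions can be used from script to choose a playable source. A minimal sketch; the helper name and the source list are illustrative, not part of any spec:

```javascript
// Pick the first source whose type the media element claims it may play.
// canPlayType returns "probably", "maybe", or "" (the empty string).
function pickPlayableSource(videoLike, sources) {
  for (const s of sources) {
    if (videoLike.canPlayType(s.type) !== "") {
      return s.src;
    }
  }
  return null; // nothing playable
}

// In a browser you would call it with a real element:
// pickPlayableSource(document.createElement("video"), [...]);
```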
Re: [whatwg] HTML 5 video tag questions
On Mon, Jun 15, 2009 at 5:27 PM, Tab Atkins Jr. jackalm...@gmail.com wrote:
> (That said, I don't think there's anything wrong with nesting videos, it's just unnecessary.)

This won't work, since fallback content is not displayed unless video is not supported.

Chris.
-- 
http://www.bluishcoder.co.nz
Re: [whatwg] HTML 5 video tag questions
On Mon, Jun 15, 2009 at 4:49 AM, Chris Double chris.dou...@double.co.nz wrote:
> On Mon, Jun 15, 2009 at 5:27 PM, Tab Atkins Jr. jackalm...@gmail.com wrote:
>> (That said, I don't think there's anything wrong with nesting videos, it's just unnecessary.)
> This won't work since fallback content is not displayed unless video is not supported.

Dang, I was wrong. I know I remembered some conversations about nested video, but I guess I was just remembering people *asking* about it. Regardless, as noted by others, my <source> suggestion was correct. Provide multiple sources if you're not sure about what format your users can view.

~TJ
Re: [whatwg] [gnu.org #451052] LGPL Question regarding Google's use of FFmpeg in Chromium and Chrome
On Tuesday, 09.06.2009, at 15:37 -0400, Donald R Robertson III via RT wrote:
> Would you mind summarizing the issue? I found a wall of text at the link you provided, that presented a couple of different issues.

My understanding is that Google has used the LGPL-2.1-licensed FFmpeg library to provide h.264 decoding in their closed-source Chrome browser. They, however, seem to have acquired a license from the MPEG LA so as not to violate any patents. Now, does this license preclude them from distributing FFmpeg, possibly according to section 11 of the LGPL 2.1? Also, what if other software uses the FFmpeg library to decode h.264 -- does Google's patent license have any effect on this?

This post on the list by Chris DiBona, Google's Open Source Programs Manager, may give further insights into the debate:
http://lists.whatwg.org/htdig.cgi/whatwg-whatwg.org/2009-June/020035.html

Cheers (and many apologies for my poor legalese parsing skills)
-- 
Nils Dagsson Moskopp
http://dieweltistgarnichtso.net
Re: [whatwg] page refresh and resubmitting POST state
Ian Hickson wrote:
> On Fri, 22 May 2009, Mike Wilson wrote:
>> I can see some usefulness for adding a couple of subjects to the HTML5 spec:
>> - how browsers should handle page refresh, in particular for pages received through POST (= do you want to resubmit?)
> Done.

Nice, thanks.

>> - potentially add constructs to help users avoid the above resubmit question (this could for example be through providing some support for PRG = Post-Redirect-Get, or other)
> On Fri, 22 May 2009, Jonas Sicking wrote:
>> This is already supported. If you use a 302 or 303 redirect in response to a POST this will redirect to a URI that the UA then GETs. Refreshing that page will simply result in a new GET to the second URI.
[snip]
> On Sat, 23 May 2009, Mike Wilson wrote:
>> I was thinking about the resubmit problem in a general context, specifically how browsers could make it possible for web authors to create POSTing pages that avoid giving the dreaded do-you-want-to-resubmit question at all, independent of operation.
> Just do a redirect like Jonas describes, instead of returning the page contents directly. You can even redirect to the same URL.

As I pointed out in a followup to Jonas's mail:
http://lists.whatwg.org/htdig.cgi/whatwg-whatwg.org/2009-May/019937.html
doing PRG with current technology has the drawback of losing the page state. This can be patched back again by adding query params to the URL, but this isn't good for all scenarios (see below).

>> [...] Defining some support in the browser could replace or simplify parts of these solutions.
> I'm sure people are open to suggestions. I wouldn't worry about whether they're in scope for HTML5 or not; if they're not, we can always redirect you to the right group.

The information I've compiled goes outside the subject of this thread, so I'll explore further parts of the state handling problem in a separate mail thread titled "html5 state handling: overview and extensions".
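The Post-Redirect-Get flow Jonas and Ian describe can be sketched as a request handler. This is a hedged sketch: the handler shape, field names, and return value are illustrative, not any particular server framework's API:

```javascript
// Post-Redirect-Get: answer a POST with a 303 See Other so that refreshing
// the resulting page re-issues a harmless GET instead of resubmitting.
function handle(request) {
  if (request.method === "POST") {
    // ... process the submission here ...
    // Redirect back to the same URL (or a result page).
    return { status: 303, headers: { Location: request.url } };
  }
  // GET: render the page normally.
  return { status: 200, headers: {}, body: "<!DOCTYPE html>..." };
}
```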
> On Sun, 24 May 2009, Aryeh Gregor wrote:
>> One workaround is to just stick the info in the query string of the GET. One problem with this is that it makes it easy to trick users into thinking they've just done something alarming: you can link to confirmmove.php?page1=Main_Page&page2=Main_Page_ON_WHEELS, and the user will think they actually just moved the page (the software told them so!). Another problem is that sometimes there's way too much data to fit into a query string. For instance, in MediaWiki you can move a page with all its subpages. There might be hundreds or even thousands of these, and a success/failure message is printed for each, with the reason for failure if appropriate. This might be too long to fit in a GET.
> Just stick the data into the query parameters, including the user's ID, then sign the query parameters, and when serving the page, check the signature and check that the user's ID matches the cookie.

Adding data to the URL makes sense in some scenarios, but not in others. Often the application needs to hold on to state shared by a sequence of pages in the same browsing context, but at the same time does not want this state to be shared with the same set of pages in another browsing context. This rules out cookie-based state, as this is shared by all browsing contexts in the user agent. With current technology a common solution is to add a unique id to the URL that points to a storage area on the server where the full state is stored. The same id is then used on all pages throughout the navigation sequence, and the id could be said to represent the browsing context (ie window or tab), as each browsing context will get a different id, mapping to state for that browsing context.
For this scenario it would be better if the id parameter was not part of the URL, because:
- the id parameter adds no meaning to the URL
- the id parameter maps to an internal and transient data structure on the server and not to an entity
- the above two bullets mean we don't want it in bookmarks
- coming back with an old URL (with an old id) requires handling cases like recreating an expired data structure, or handling conflicts if our id is now allocated to another user

> A similar workaround would be to use cookies. This is nicer than the previous method, but has the potential to break confusingly if the user takes several similar actions at once (e.g., moving a number of pages at once in multiple tabs).

sessionStorage can be used to work around this somewhat, at least in AJAX apps. For server-oriented webapps a solution that doesn't rely on script is preferred. This means the server should be able to transmit browsing_context-scoped state to the client and have it automatically sent back on any following request. Something like browsing_context-oriented cookies. I'll include this in my state handling overview in the new thread.

Best regards
Mike Wilson
[whatwg] html5 state handling: overview and extensions
INTRODUCTION

HTML5 provides a number of constructs to transfer and manage application state. In this post I attempt to classify these constructs in a consistent manner to identify what kinds of state management are taken care of, and what are not. My goal is to discuss the uncovered areas to see if we can address them with suitable additions to HTML5.

DEFINITIONS

State

To simplify this overview I limit myself to only addressing internal state that the application stores to keep track of the user's interaction with it, and that isn't directly accessible by the user. This means, for example, that hidden input fields are included in this overview (as these usually represent some internal state) but editable input fields are not (as these correspond more to page parameters than page state). I know this distinction may not be perfect, and at some point we might want to include the other parts, but I think it is good to start out with these restrictions.

State can be stored in many different ways, and many different things can be regarded as state, so consider the below scenario to understand what I regard as state in this post:
- a user makes some navigation action that makes the browser navigate to, and request, http://host/page1
- when returning the response for page1, the server may include some state (ServerState) to be stored in the browser through some state construct
- during the lifetime of the returned page in the browser, additional state may be produced by script (ScriptState) and stored by some other state construct
- to qualify as state constructs in this overview, the constructs used to store ServerState and ScriptState above should support that the state survives the following actions:
  . navigating away from, and then back again to, the current session history entry in the browsing context, including scenarios where the document objects have been discarded in the meantime
  . page reload/refresh of the current page (this follows from the first point)
(there are many other actions that could be mentioned but the above two are enough for this overview)

State that survives these actions is "real" state in the browser.

Server-controlled state

This is state which is created by the server application and then transmitted to the browser where it is stored, to later be transparently sent back to the server for identification or processing. As it is under server control it should not rely on script execution in the browser, so state constructs need to have a mechanism that both automatically stores the state in the browser, and automatically transfers it back from browser to server when appropriate. Typically the transfer back and forth between server and browser takes place on at least every page request and response.

Script-controlled state

This is state created and stored by script in the browser. The preservation of state only applies to other script reading the data, and any mechanism for transfer back and forth to the server is optional.

Scopes

State can be stored on different scopes, or contexts, to control its reach and lifetime. HTML5 offers the following scopes, where a higher item on the list encloses lower items:
- User agent (an application containing a collection of top-level browsing contexts)
- Browsing context (has session history with a number of Documents)
- Document (corresponds to one page load from server but can be associated with multiple session history entries with different navigational states)
- Session history entry (a single navigational state for a Document)

Apart from the different scopes, state is also kept separated by origin, to not allow different sites to interfere with each other's state. I will just assume origin is in effect for the rest of this post.

User visibility

For different scenarios it may, or may not, be desired to indicate the current state to the user through the browser user interface. This could for example mean being part of the URL for bookmarkability etc. It is an advantage if the application author can choose between state constructs both with and without user interface exposure.

Request type (http method)

Some state constructs are unique to a certain http method. In this overview I list GET and POST methods.

FEATURE TABLES

Below are tables comparing properties of different state constructs. There are many alternatives to how these tables could be organized but I've tried to keep things simple and only add columns for the most important properties for this discussion.

SERVER-CONTROLLED STATE

Scope             Visibility  Request : State construct
----------------  ----------  ------- : ---------------
user agent        invisible   GET     : cookie
user agent        invisible   POST    : cookie
browsing context,
[whatwg] Cue range implementation?
Have addCueRange and removeCueRanges been implemented in any browser yet? I've looked at nightly builds of Firefox and Safari (on Windows, at least) but they don't seem to be there. Has there been any further discussion since the thread last year (Re: [whatwg] re-thinking cue ranges) about whether an event or callback model would be better? I can imagine cue ranges being extremely useful for handling all kinds of timed changes to content: not just annotations or subtitling. We've been working with JavaScript/JSON to implement timed changes to CSS and HTML, relative to a 'time parent' such as a video, as well as 'custom events' such as chapter changes. Cue ranges would make the implementation of this kind of timed presentation much more efficient and straightforward.

Sam Dutton

http://www.bbc.co.uk/ This e-mail (and any attachments) is confidential and may contain personal views which are not the views of the BBC unless specifically stated. If you have received it in error, please delete it from your system. Do not use, copy or disclose the information in any way nor act in reliance on it and notify the sender immediately. Please note that the BBC monitors e-mails sent or received. Further communication will signify your consent to this.
[whatwg] Definitions of DOMTokenList algorithms and element.classList
Step 3 of the algorithm for DOMTokenList.has says: "If the token indicated by token is one of the tokens in the object's underlying string then return true and stop this algorithm." What does "token is one of the tokens" mean? I assume it means that a case-sensitive string comparison of token with each of the tokens in the DOMTokenList yields one match. It might be good to clarify this in the spec. Note that the algorithms for DOMTokenList.add and DOMTokenList.toggle use similar wording.

Should methods of element.classList treat their arguments case-insensitively in quirks mode? I think they should. This should be mentioned in the spec.

-Adam
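Adam's reading, that "one of the tokens" means a case-sensitive comparison against each whitespace-separated token of the underlying string, can be sketched as follows. An illustrative sketch only, not the spec's algorithm text:

```javascript
// Split the underlying string on ASCII whitespace and compare each
// resulting token case-sensitively against the argument.
function tokenListHas(underlying, token) {
  return underlying
    .split(/[ \t\n\f\r]+/)
    .filter((t) => t.length > 0)
    .includes(token);
}
```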
[whatwg] Unifying DOMTokenList with DOM 3 Core's DOMStringList?
DOM 3 Core defines the DOMStringList interface [1], which has some similarities to HTML 5's DOMTokenList interface. In fact, DOMStringList basically provides a subset of DOMTokenList's functionality. But there is a mismatch in the names of the DOMTokenList.has and DOMStringList.contains methods (which seem to have the same purpose). I think DOMTokenList.has should be renamed to DOMTokenList.contains, to match DOMStringList. Perhaps some relationship between these two interfaces should be defined as well (though that would obviously introduce a dependency on DOM 3 Core).

-Adam

[1] http://www.w3.org/TR/2004/REC-DOM-Level-3-Core-20040407/core.html#DOMStringList
[whatwg] Browser Bundled Javascript Repository
Hey Guys,

This is my first time on the list. I searched the archives but I didn't see anything like this, so I apologize if I missed any earlier discussion on something like this.

A while back I came across this two-paragraph blog post titled "Browsers Should Bundle JS Libraries": http://fukamachi.org/wp/2009/03/30/browsers-should-bundle-js-libraries/

The premise is basically that browsers are repeatedly downloading the same javascript frameworks from different domains over and over every day. In the author's own words: "All popular, stable Javascript libraries, all open source. All downloaded tens of millions of times a day, identical code each time."

Below is a summary and expansion of my comments/ideas from the discussion on the above blog article.

A typical solution to the problem, and one that works right now in browsers, is that if you require a javascript library on your website you can point to a publicly available version of that library. If enough sites use this public URI then the browser will continually be using that URI and it will be cached and reused by the browser. This is the idea behind Google's Hosted Libraries: http://code.google.com/apis/ajaxlibs/

There are some arguments against using Google's Hosted Libraries: http://www.derekallard.com/blog/post/jquery-hosted-on-google-and-some-implications-for-developers/

However, I think the author makes a good point. Bundling the JS libraries in the browser seems like it would require very little space, could even be stored in a more efficient representation (compiled bytecode for example), and would prevent an extra HTTP request. The problem then becomes: how does a browser know example.com's jquery.js is the same as other.com's jquery.js? The developer should opt in to telling the browser it wants to use a certain JS library version that the browser may already know about. The way I thought about it was by adding an attribute to the script tag.
In my comments, I used the rel attribute because of developers' familiarity with it in other tags, but it could (and probably should) be an entirely new attribute. The value inside of this attribute would need to be a unique identifier for a possible script available in the browser's repository. The src attribute should still point to a hosted version of the script in case this attribute is unsupported (ignored) or the script is not found in the repository (not bundled).

For example:

  <!-- SHA1 hash as identifier for jquery-1.2.3 -->
  <script rel="A56F2CED6..." src="..."></script>
  <!-- Canonical name as an identifier for a JS lib and version -->
  <script rel="jquery-1.2.3" src="..."></script>

Here the rel attribute's value is a standard identifier for a particular version of the jQuery JS library. The browser could check its repository to see if it has it. If found, no request is needed and it can load its local version. If not found, it can proceed like normal, using the src attribute to download the script.

Pros:
- Future-Proof: Adding a new attribute, or using a currently ignored attribute, on the script tag would make this a safe addition that works fine in older browsers (backwards compatible) and works instantly in supporting browsers.
- Developer Opt-In: Developers that choose not to use this feature could just ignore it.
- Pre-Compiled: By bundling known JS libraries with the browser, the browser could store a more efficient representation of the file, for instance pre-compiled into bytecode or something else browser-specific.
- Fewer HTTP Requests / Cache Checks: If a library is in the repository no request is needed. Cache checks don't need to be performed. Also, for the 100 sites you visit that all send you the equivalent jquery.js, you would now send 0 requests. I think this would be enticing to mobile browsers, which would benefit from this space vs. time tradeoff.
- No 3rd Party Is Gathering Statistics: One of the arguments against using Google's Hosted Libraries is that you send them some data if you are indeed using their scripts and a client downloads from them (Referrer, etc.). Here there is no 3rd party; it's just between the client browser and the domain.
- Standardizing an Identifier for Libraries: Providing a common identifier for libraries would be open for discussion. The best idea I've had would be to provide the SHA1 hash of the desired release of a JavaScript library. This would ensure a common identifier for the same source file across browsers that support the feature. This would be useful for developers as well. A debug tool can indicate to a developer that the script they are using is available in the browser repository with a certain identifier.
- Repository Can Grow Dynamically: Assuming this is a desirable feature that shows some promise, the browser repository can grow dynamically. Browsers can count the number of times they have seen equivalent
Re: [whatwg] Browser Bundled Javascript Repository
> Pros:
> - Pre-Compiled: By bundling known JS Libraries with the browser, the browser could store a more efficient representation of the file. For instance pre-compiled into Bytecode or something else browser specific.

I think something needs to be clarified wrt compile times and the like. In the WebKit project we do a large amount of performance analysis, and except in the most trivial of cases compile time just doesn't show up as being remotely significant in any profiles. Additionally, the way JS works, certain forms of static analysis result in behaviour that cannot reasonably be cached. Finally, the optimised object lookup and function call behaviour employed by JavaScriptCore, V8 and (I *think*) TraceMonkey is not amenable to caching, even within a single browser session, so for modern engines I do not believe caching bytecode or native code is really reasonable -- I suspect the logic required to make this safe would not be significantly cheaper than just compiling anyway.

> - Less HTTP Requests / Cache Checks: If a library is in the repository no request is needed. Cache checks don't need to be performed. Also, for the 100 sites you visit that all send you the equivalent jquery.js you now would send 0 requests. I think this would be enticing to mobile browsers which would benefit from this Space vs. Time tradeoff.

I believe HTTP can specify how long you should wait before validating the cached copy of a resource, so I don't know if this is a real win, but I'm not a networking person so am not entirely sure of this :D

> - Standardizing Identifier For Libraries: Providing a common identifier for libraries would be open for discussion. The best idea I've had would be to provide the SHA1 Hash of the Desired Release of a Javascript Library. This would ensure a common identifier for the same source file across browsers that support the feature. This would be useful for developers as well. A debug tool can indicate to a developer that the script they are using is available in the Browser Repository with a certain identifier.

This isn't a pro -- it's additional work for the standards body.

Cons:
- May Not Grow Fast Enough: If JS Libraries change too quickly the repository won't get used enough.
- May Not Scale: Are there too many JS Libraries, versions, etc. making this unrealistic? Would storage become too large?
- Adds significant spec complexity
- Adds developer complexity: imagine a developer modifies their server's copy of a given script but forgets to update the references to the script; now they get inconsistent behaviour between browsers that support this feature and browsers that don't.

--Oliver
Re: [whatwg] DOMTokenList is unordered but yet requires sorting
On Wed, 10 Jun 2009, João Eiras wrote:
>> Ensuring consistency between browsers, to reduce the likelihood that any particular browser's ordering becomes important and then forcing that browser's ordering (which could be some arbitrary ordering dependent on some particular hash function, say) into the platform de facto. This is similar to what happened to ES property names -- they were supposedly unordered, UAs were allowed to sort them however they liked, and now we are locked in to a particular order.
> I strongly think the order should not be sorted, but should reflect the order of the tokens in the original string which was broken down into tokens. It would also make implementations much simpler and saner, and would spare extra cpu cycles by avoiding the sort operations.

On Tue, 9 Jun 2009, Erik Arvidsson wrote:
> I was about to follow up on this. Requiring sorting, which is O(n log n), for something that can be done in O(n), makes things slower without any real benefit. Like João said, the order should be defined as the order of the class content attribute.

Fair enough. Done.

-- 
Ian Hickson
http://ln.hixie.ch/
"Things that are impossible just take longer."
Re: [whatwg] Unifying DOMTokenList with DOM 3 Core's DOMStringList?
On Mon, 15 Jun 2009, Adam Roben wrote:
> DOM 3 Core defines the DOMStringList interface [1], which has some similarities to HTML 5's DOMTokenList interface. In fact, DOMStringList basically provides a subset of DOMTokenList's functionality. But there is a mismatch in the names of the DOMTokenList.has and DOMStringList.contains methods (which seem to have the same purpose). I think DOMTokenList.has should be renamed to DOMTokenList.contains, to match DOMStringList.

Done.

> Perhaps some relationship between these two interfaces should be defined as well (though that would obviously introduce a dependency on DOM 3 Core).

It didn't seem especially useful to make one inherit from the other, so I haven't done that. Not sure what other relationship would help.

-- 
Ian Hickson
http://ln.hixie.ch/
"Things that are impossible just take longer."
Re: [whatwg] Browser Bundled Javascript Repository
>> Pros:
>> - Pre-Compiled: By bundling known JS Libraries with the browser, the browser could store a more efficient representation of the file. For instance pre-compiled into Bytecode or something else browser specific.
> I think something needs to be clarified wrt compile times and the like. In the WebKit project we do a large amount of performance analysis, and except in the most trivial of cases compile time just doesn't show up as being remotely significant in any profiles. Additionally, the way JS works, certain forms of static analysis result in behaviour that cannot reasonably be cached. Finally, the optimised object lookup and function call behaviour employed by JavaScriptCore, V8 and (I *think*) TraceMonkey is not amenable to caching, even within a single browser session, so for modern engines I do not believe caching bytecode or native code is really reasonable -- I suspect the logic required to make this safe would not be significantly cheaper than just compiling anyway.

I noticed this came up on the WebKit mailing list recently and they said the same thing as you, that compile time was insignificant. Thanks for expanding on this:
https://lists.webkit.org/pipermail/webkit-dev/2009-May/007657.html
Although this response sounded slightly more promising:
https://lists.webkit.org/pipermail/webkit-dev/2009-May/007682.html

>> - Less HTTP Requests / Cache Checks: If a library is in the repository no request is needed. Cache checks don't need to be performed. Also, for the 100 sites you visit that all send you the equivalent jquery.js you now would send 0 requests. I think this would be enticing to mobile browsers which would benefit from this Space vs. Time tradeoff.
> I believe HTTP can specify how long you should wait before validating the cached copy of a resource, so I don't know if this is a real win, but I'm not a networking person so am not entirely sure of this :D

I believe you're correct. HTTP ETags and Expires headers can influence browsers to cache resources for very long times. Unfortunately I don't think that a lot of developers take advantage of these headers (this is a bad argument but worth mentioning), although Yahoo's Y-Slow and Google's Page-Speed extensions have opened many developers' eyes to ways they can improve their site's performance. The real gain, however, would be if you visited 100 different websites that all needed the same script and you wouldn't have to make a single request, or cache a single resource, due to using the script in the repository. I think this sounds better.

>> - Standardizing Identifier For Libraries: Providing a common identifier for libraries would be open for discussion. The best idea I've had would be to provide the SHA1 Hash of the Desired Release of a Javascript Library. This would ensure a common identifier for the same source file across browsers that support the feature. This would be useful for developers as well. A debug tool can indicate to a developer that the script they are using is available in the Browser Repository with a certain identifier.
> This isn't a pro -- it's additional work for the standards body.

You are correct. I'm not too familiar with the process behind the specifications (although I would like to learn). Maybe including this as a Pro was premature, but having a hash value like a SHA1 be the identifier has a number of advantages. Maybe there are better solutions out there that have the same advantages that a SHA1 would provide.

> Cons:
> - May Not Grow Fast Enough: If JS Libraries change too quickly the repository won't get used enough.
> - May Not Scale: Are there too many JS Libraries, versions, etc making this unrealistic? Would storage become too large?
> - Adds significant spec complexity
> - Adds developer complexity, imagine a developer modifies their servers copy of a given script but forgets to update the references to the script, now they get inconsistent behaviour between browsers that support this feature and browsers that don't.
> --Oliver

Significant spec complexity? I'm too inexperienced to know. =( As for the developer scenario, this is similar to modifying any single attribute on a tag and not appropriately modifying the others: changing the src on an <img> and not changing the alt, or changing the href on a <link> and not changing the media. I think this could be easily avoided with validation. If the unique identifier were a SHA1 hash and the referenced script src does not hash to the provided value, then the page would invalidate with an error/warning. However, you raise a good point, and I can't come up with any truly equivalent analogy to any other existing developer-specific problem.

Thanks for the points Oliver. Looks like a few of the Pros may have been eliminated. Do you think this could produce any noticeable improvements?

- Joe
Re: [whatwg] Browser Bundled Javascript Repository
As an alternative,common libraries could get shipped as browser plugins, allowing developers to leverage local URIs such as chrome:// in XUL/mozilla/firefox apps. This would only effectively work if: - all vendors define a same local URI prefix. I do like chrome://. Mozilla dudes were always lightyears ahead in all forms of cross- platform app development with XUL. - all vendors extend their existing plugin architecture to accomodate this URI and referencing from network-delivered pages. - some form of discovery exists, with ability to provide network transport alternative: use chrome URI if exists, use http URI if not Library vendors would then ship their releases as browser plugins, using existing discovery mechanisms, as well as software update mechanisms. -chris On Jun 15, 2009, at 11:55, Oliver Hunt oli...@apple.com wrote: Pros: - Pre-Compiled: By bundling known JS Libraries with the browser, the browser could store a more efficient representation of the file. For instance pre-compiled into Bytecode or something else browser specific. I think something needs to be clarified wrt to compile times and the like. In the WebKit project we do a large amount of performance analysis and except in the most trivial of cases compile time just doesn't show up as being remotely significant in any profiles. Additionally the way JS works, certain forms of static analysis result in behaviour that cannot reasonably be cached. Finally the optimised object lookup and function call behaviour employed by JavaScriptCore, V8 and (i *think*) TraceMonkey is not amenable to caching, even within a single browser session, so for modern engines i do not believe caching bytecode or native is really reasonable -- i suspect the logic required to make this safe would not be significantly cheaper than just compiling anyway. - Less HTTP Requests / Cache Checks: If a library is in the repository no request is needed. Cache checks don't need to be performed. 
Also, for the 100 sites you visit that all send you the equivalent jquery.js, you would now send 0 requests. I think this would be enticing to mobile browsers, which would benefit from this space vs. time tradeoff. I believe HTTP can specify how long you should wait before revalidating the cached copy of a resource, so I don't know if this is a real win, but I'm not a networking person and am not entirely sure of this :D - Standardizing an Identifier For Libraries: Providing a common identifier for libraries would be open for discussion. The best idea I've had would be to provide the SHA1 hash of the desired release of a JavaScript library. This would ensure a common identifier for the same source file across browsers that support the feature. This would be useful for developers as well. A debug tool can indicate to a developer that the script they are using is available in the Browser Repository with a certain identifier. This isn't a pro -- it's additional work for the standards body. Cons: - May Not Grow Fast Enough: If JS libraries change too quickly the repository won't get used enough. - May Not Scale: Are there too many JS libraries, versions, etc., making this unrealistic? Would storage become too large? - Adds significant spec complexity. - Adds developer complexity: imagine a developer modifies their server's copy of a given script but forgets to update the references to the script; now they get inconsistent behaviour between browsers that support this feature and browsers that don't. --Oliver
Re: [whatwg] DOMTokenList is unordered but yet requires sorting
On Jun 15, 2009, at 12:22 PM, Ian Hickson wrote: On Tue, 9 Jun 2009, Erik Arvidsson wrote: I was about to follow up on this. Requiring sorting, which is O(n log n), for something that can be done in O(n) makes things slower without any real benefit. Like João said, the order should be defined as the order of the class content attribute. Fair enough. Done. Since DOMTokenList requires uniqueness, then I suspect it's still O(n log n) even without sorting, not O(n). -- Darin
Re: [whatwg] Browser Bundled Javascript Repository
The common JavaScript libraries should be identified using a urn scheme with a JavaScript namespace, as in <script src="urn:JavaScript:cool-acme-lib:1.0"></script> Chris
Re: [whatwg] H.264-in-video vs plugin APIs
On Mon, Jun 15, 2009 at 12:49 AM, Michael Dale d...@ucsc.edu wrote: I have requested that a few times as well... Some went so far as to even make a mock-up page: http://people.xiph.org/~j/apple/preview/ It would of course be much more ideal if we could get the component into the QuickTime codec lookup system. Is there any criteria or process that has been made publicly available? Are there any guidelines or special request mechanisms that we have missed? Eric or anyone at Apple reading this list: if you have _any_ information as to how a codec component gets into the QuickTime codec lookup system, it would be great if you could inform us. --Michael Dale Silvia Pfeiffer wrote: I'm sorry, but there is quite a bit of frustration from the past hidden in my paragraph. For the last 4 years we have been trying to get XiphQT added to the list of QuickTime components on Apple's external components webpage at http://www.apple.com/quicktime/resources/components.html . We have seen proprietary codecs added one after the other but Xiph codecs continuously being ignored, even though we requested addition multiple times and through different people. I'm sorry to say but that has indeed caused a feeling of being rejected on purpose. We would love to see this situation rectified. Regards, Silvia. On Mon, Jun 15, 2009 at 2:48 AM, Eric Carlson eric.carl...@apple.com wrote: Silvia - On Jun 13, 2009, at 7:02 PM, Silvia Pfeiffer wrote: As for Safari and any other software on the Mac that is using the QuickTime framework, there is XiphQT to provide support. It's a QuickTime component and therefore no different to installing a Flash plugin, thus you can also count Safari as a browser that has support for Ogg Theora/Vorbis, even if I'm sure people from Apple would not like to see it this way. Speaking of misinformation and hyperbole, what makes you say people from Apple want to hide the fact that Safari supports third-party QuickTime codecs?
We *could* have limited WebKit to only support QuickTime's built-in formats, but did not, specifically so customers can add other formats as they choose. We have never tried to hide this; it is ridiculous to imply otherwise. eric I don't even think QuickTime has a codec lookup system. For codecs that are unavailable, it just points me to the page, even if the codec doesn't exist on that page. I remember that either QuickTime 4 or 5 had a codec lookup and download system... Not sure about QuickTime 6. Definitely not QuickTime 7. Maybe QuickTime X will have one again?
Re: [whatwg] DOMTokenList is unordered but yet requires sorting
On Mon, 15 Jun 2009 21:38:05 +0200, Darin Adler da...@apple.com wrote: On Jun 15, 2009, at 12:22 PM, Ian Hickson wrote: On Tue, 9 Jun 2009, Erik Arvidsson wrote: I was about to follow up on this. Requiring sorting, which is O(n log n), for something that can be done in O(n) makes things slower without any real benefit. Like João said, the order should be defined as the order of the class content attribute. Fair enough. Done. Since DOMTokenList requires uniqueness, then I suspect it's still O(n log n) even without sorting, not O(n). -- Darin Oh, I have foreseen that. Is it really necessary to remove duplicates? I imagine DOMTokenList to be similar to what can be achieved with a String.split(), but then it would be just more duplicate functionality.
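The split-only view Simon describes can be sketched in a few lines; the helper name below is invented for illustration. A plain String.split() keeps duplicate tokens, which is exactly what a uniqueness-enforcing DOMTokenList would additionally have to strip:

```javascript
// Hypothetical sketch of a split-only view of a class attribute,
// as described above: String.split() alone preserves duplicates.
function splitTokens(classAttr) {
  // Split on runs of whitespace; drop the empty strings produced by
  // leading/trailing spaces.
  return classAttr.split(/\s+/).filter(function (t) {
    return t !== "";
  });
}

var tokens = splitTokens("  foo bar foo  ");
// The duplicate "foo" survives, unlike in a deduplicated token list.
```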
Re: [whatwg] Browser Bundled Javascript Repository
The common JavaScript libraries should be identified using a urn scheme with a JavaScript namespace, as in <script src="urn:JavaScript:cool-acme-lib:1.0"></script> Chris Chris, this was what I was originally thinking of when I read about tag URIs in the Atom Publishing Protocol (which I think are now defunct). URN Schemes: http://www.w3.org/TR/uri-clarification/ I don't know too much about URN schemes, but I don't think they provide as much benefit as a hash would (ugly as hashes are). In the original email I alluded to canonical or easy-to-use identifiers that would be aliases for the hash: <!-- Canonical name as an identifier for a JS lib and version --> <script rel="jquery-1.2.3" src="..." /> Maybe a URN scheme would be optimal for a canonical name, but I still like the SHA1 hash for verification/validation and dynamic growth. - Joe
Re: [whatwg] Browser Bundled Javascript Repository
URIs are expected to be readable. If we admit that many HTTP URLs aren't readable and there is little we can do about that, maybe we could stretch the URN a bit to allow a hash in it? Chris
Re: [whatwg] Browser Bundled Javascript Repository
The problem here is that this isn't backwards compatible, and thus no one will really be able to use it. You then also get into the question of "how do I get my library into the browser?" After mulling this over with the Google CDN work, I think that using HTTP and the browser mechanisms that we have now gives us a lot without any of these issues. On Mon, Jun 15, 2009 at 12:40 PM, Kristof Zelechovski giecr...@stegny.2a.pl wrote: The common JavaScript libraries should be identified using a urn scheme with a JavaScript namespace, as in <script src="urn:JavaScript:cool-acme-lib:1.0"></script> Chris
Re: [whatwg] Unifying DOMTokenList with DOM 3 Core's DOMStringList?
On Mon, 15 Jun 2009 18:25:03 +0200, Adam Roben aro...@apple.com wrote: DOM 3 Core defines the DOMStringList interface [1], [...] I don't think anybody actually implements this interface though or is planning to. Having said that, I don't really feel strongly about not reusing the name contains. [1] http://www.w3.org/TR/2004/REC-DOM-Level-3-Core-20040407/core.html#DOMStringList -- Anne van Kesteren http://annevankesteren.nl/
Re: [whatwg] Definitions of DOMTokenList algorithms andelement.classList
I would consider it a big advantage to posterity if the descriptions and the algorithms were better formulated and ready to be understood in plain text. For example, regarding 2.8.3 DOMTokenList (see appendix). LEGEND Code samples are in braces; my comments are in brackets, and so are important [*changes*], [-deletions-] and [+insertions+]. I understand this effect cannot be automatically provided by the HTML view. SUMMARY Generic changes: Find index; Replace position; Find token argument; Replace given token; [this is not strictly necessary, any uniformity would do] Insert "the" where appropriate. Specific changes: see below. Cheers, Chris Appendix: * { tokenCount = tokenList . length } --- Returns the number of tokens in the string. { [*token*] = tokenList . item([*position*]) } { [+token =+] tokenList[[*position*]] } --- Returns the token [*at the position given*]. The tokens are sorted alphabetically. [I would not say that a token has an index; an index is not a property of the token.] Returns null if [*the position*] is out of range. { hasToken = tokenList . has(token) } Returns true if the token is present; false otherwise. [ It may be slightly misleading to speak of tokens _in parameters_. The present description means that the corresponding LISP binding would be { (let ((has-token (ask token-list 'has 'token)))) } rather than { (let ((has-token (ask token-list 'has token)))) } Of course, I may be entirely wrong here in that the first snippet is what is intended. ] Throws an { INVALID_CHARACTER_ERR } exception if [+the+] token contains any spaces. [In which case it is not a token at all, so this remark makes no sense.] { tokenList . add(token) } [*Inserts*] [+the+] token [+into the list+], unless it is already present. [Inserts, because the list implementation is sorted.] Throws an { INVALID_CHARACTER_ERR } exception if [+the+] token contains any spaces. [?] { tokenList . remove(token) } Removes [+the+] token if it is present.
Throws an { INVALID_CHARACTER_ERR } exception if [+the+] token contains any spaces. [?] { hasToken = tokenList . toggle(token) } Adds [+the+] token if it is not present, or removes it if it is. [Returns what?] Throws an { INVALID_CHARACTER_ERR } exception if token contains any spaces. [?] The { length } attribute must return the number of unique tokens that result from splitting the underlying string on spaces. This is the length. [Why this Biblical tone?] The [*positions of the supported enumerated tokens within the list*] are the numbers in the range [+from+] zero to length [*−*] 1, unless the length is zero, in which case there are no supported [*enumerated*] properties. The { item([*position*]) } method must split the underlying string on spaces, sort the resulting list of tokens by Unicode code point, remove exact duplicates, and then return the [-indexth-] item in this list [+at the given position+]. If [*the position*] is equal to the number of tokens or greater, then the method must return null. The { has(token) } method must run the following algorithm: 1. If the [+given+] token [-argument-] contains any space characters, then raise an { INVALID_CHARACTER_ERR } exception and stop the algorithm. 2. Otherwise, split the underlying string on spaces to get the list of tokens in the object's underlying string. 3. If the [+given+] token [-indicated by token-] is one of the tokens in the object's underlying string then return true and stop this algorithm. 4. Otherwise, return false. The { add(token) } method must run the following algorithm: 1. If the [+given+] token [-argument-] contains any space characters, then raise an { INVALID_CHARACTER_ERR } exception and stop the algorithm. 2. Otherwise, split the underlying string on spaces to get the list of tokens in the object's underlying string. 3. If the given token is already one of the tokens in the { DOMTokenList } object's underlying string then stop the algorithm. 4.
Otherwise, if the { DOMTokenList } object's underlying string is not the empty string and the last character of that string is not a space character, then append a U+0020 SPACE character to the end of that string. 5. Append the [*characters*] of [+the+] token to the end of the { DOMTokenList } object's underlying string. The { remove(token) } method must run the following algorithm: 1. If the [+given+] token [-argument-] contains any space characters, then raise an { INVALID_CHARACTER_ERR } exception and stop the algorithm. 2. Otherwise, remove the given token from the underlying string. [ You leave two consecutive spaces here. Why do you insist on not allowing an initial space above? ] The { toggle(token)} method must run the following algorithm: [This algorithm is redundant because it is a secondary method that can be implemented in terms of {has}, {add} and {remove}. ] Objects implementing the DOMTokenList interface must stringify to the object's underlying string representation.
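The remark above that toggle() is a secondary method can be illustrated with a small sketch. This is not the spec's algorithm, just a toy token list (all names invented) whose toggle is written purely in terms of has, add and remove:

```javascript
// Minimal sketch (not a spec implementation) showing that toggle()
// needs no algorithm of its own. The token list is modelled as a
// plain space-separated string.
function makeTokenList(initial) {
  var value = initial || "";
  function check(token) {
    if (token === "" || /\s/.test(token)) {
      throw new Error("INVALID_CHARACTER_ERR");
    }
  }
  var list = {
    has: function (token) {
      check(token);
      return value.split(/\s+/).indexOf(token) !== -1;
    },
    add: function (token) {
      check(token);
      if (!list.has(token)) {
        value = value === "" ? token : value + " " + token;
      }
    },
    remove: function (token) {
      check(token);
      value = value.split(/\s+/).filter(function (t) {
        return t !== "" && t !== token;
      }).join(" ");
    },
    // toggle expressed entirely via has/add/remove:
    toggle: function (token) {
      if (list.has(token)) { list.remove(token); return false; }
      list.add(token);
      return true;
    },
    toString: function () { return value; }
  };
  return list;
}
```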
Re: [whatwg] Browser Bundled Javascript Repository
Dion: The problem here is that this isn't backwards compatible, and thus no one will really be able to use it. I thought the original idea was backwards compatible. Maybe not the URN schemes. If the original idea is not, could you point out the issues? Dion: You then also get into the question of "how do I get my library into the browser?" Enough widespread usage of a library is a clear indicator for adoption into a browser bundle. Dynamically growing repositories could optimize per computer for the particular user's browsing habits (assuming developers would mark their scripts with the identifiers). You can have the same problem with "what libraries will Google include in its CDN". Although it may be easier for Google to host just about any library if it already has a CDN set up. Dion: After mulling this over with the Google CDN work, I think that using HTTP and the browser mechanisms that we have now gives us a lot without any of these issues. I was afraid of this. This is a completely valid point. I guess it sounds like too much work for too little gain? - Joe
[whatwg] nostyle consideration
Proposing <nostyle> in the spirit of <noscript>. Examples: 1) Head Usage <nostyle> <meta http-equiv="Refresh" content="0;url=/errors/stylerequired.html"> </nostyle> 2) Body Usage <nostyle> <h2>Warning: Styles required for correct rendering</h2> </nostyle> The Obvious Push Back - Why bother? You can just do this: .nostyle {display: none;} <h2 class="nostyle">Warning: Styles required for correct rendering</h2> And yes, while that is true and for many situations will work fine, there are other cases where it won't, and you can get sloppy or even bad results because of rendering engine paths. For example, because style is not applied until later, you have an issue here: <h2 class="nostyle"><img src="error.gif">Warning: Styles required for correct rendering</h2> The network request happens regardless of the situation, assuming images are on. This of course makes the idea of <h2 style="display:none;"><img src="error.php?style=off">Warning: Styles required for correct rendering</h2> kind of useless. Obviously detecting style availability is no problem using JavaScript: just measure some style region or compute a style. However, in the absence of JavaScript it is actually somewhat of a challenge to detect this case; you have to look for dependent requests as a clue, like some style-only background-image request or something. These corner cases aren't necessarily the main concerns; a serious motivation for this element is also some of the nonsense I am observing with background-image and content-property near-abuse by CSS wonks. It appears that there is a fairly decent-sized camp of "CSS for everything", and this element might help mitigate some problems stemming from this. For example, using the content property can be somewhat troubling if style is removed. Consider what happens if you are putting in field-required indicators input[type=text].required:before {content: " (*) "} or offsite-link markers a[href^="http://"]:after {content: ' ( Offsite Link )';} or any other dynamic insert this way.
In my book effort I am seeing tremendous interest in the design community in such rules. Without style you lose valuable data, and there is no easy recourse to present this situation, at least not one without using JavaScript. At least having warnings via a <nostyle> element would be assistive in informing users that this isn't quite right, and in some cases I might dream up, helpful for accessibility in light of too much CSS abuse. Just an aside: the content property is the CSS cousin of document.write if you think about it: useful but problematic. So given that <noscript> correctly acts in masking content for user agents with scripting on and not for those with it off or unsupported, the <nostyle> element seems like a quite logical solution for the other key client-side technology. Anyway, if this were an acceptable addition, tag syntax would be quite simple: it would only have common attributes, a pretty basic replication of the <noscript> prose in the specification. Though of course this is one element that would require browser changes; no quick simulation with JS. Comments? -Thomas Powell
Re: [whatwg] nostyle consideration
How would you hide the NOSTYLE element for legacy browsers that support STYLE? What about browsers that support an alternative style type and not CSS? (This is academic, I know, but here you have NOSTYLE where you really mean NOCSS :-() Chris
Re: [whatwg] Browser Bundled Javascript Repository
If you build a Firefox plugin, you can put some code on your page that allows users to click to install if the user doesn't already have the plugin installed. If you ship an updated version of your plug-in, users get notified and prompted to install the new one. Similar mechanisms exist on other browsers. But you're right, this is all a lot of end-user intervention: it would be a slightly, err, very painful process of installing a browser plugin, which is currently very much a user opt-in process, and not something very practical. However, the underlying plugin infrastructure could be extended for a more transparent process built specifically to handle browser JavaScript library extensions. I'm just trying to find ways to leverage a lot of what's already there. On the web developer's end, one might consider: instead of adding an attribute to a script tag, the good old <link /> element could be both backward and forward compatible: <link rel="local:extension" type="application/x-javascript" href="ext:ibdom.0.2.js" /> or <link rel="local:extension" type="text/javascript" href="ext:ibdom.0.2.js" /> Instead of a chrome:// prefix, some new protocol to specifically designate a local extension would likely be more appropriate. I'm throwing ext: out there for now.
Interesting thing is, the same scheme could be leveraged for local CSS extensions: <link rel="local:extension" type="text/css" href="ext:ibdom.0.2.css" /> To handle users who don't have the ibdom JavaScript extension installed, developers could add something like this to their document (assuming a decent library which declares a top-level object/namespace): <script type="text/javascript"> if (!window.IBDOM) { var newScript = document.createElement("script"); newScript.setAttribute("type", "text/javascript"); newScript.setAttribute("src", "/path/to/ibdom.0.2.js"); document.getElementsByTagName("head")[0].appendChild(newScript); } </script> -chris P.S.: http://ibdom.sf.net/ On Mon, Jun 15, 2009 at 12:53 PM, Joseph Pecoraro joepec...@gmail.com wrote: Library vendors would then ship their releases as browser plugins, using existing discovery mechanisms, as well as software update mechanisms. -chris This sounds to me as though the user would have to download a browser plugin. I would hope this would be as transparent as possible to the user. Maybe I'm misunderstanding the discovery mechanisms you're talking about. Could you expand on this? I do think that a URI prefix is a neat idea. This could eliminate the need for a new attribute. Is it as backwards-compatible? - Joe -- Chris Holland http://webchattr.com/ - chat rooms done right.
Re: [whatwg] nostyle consideration
On Mon, Jun 15, 2009 at 4:26 PM, Thomas Powell tpow...@gmail.com wrote: Proposing <nostyle> in the spirit of <noscript>. Examples: 1) Head Usage <nostyle> <meta http-equiv="Refresh" content="0;url=/errors/stylerequired.html"> </nostyle> 2) Body Usage <nostyle> <h2>Warning: Styles required for correct rendering</h2> </nostyle> The reason that noscript worked is because (IIRC) it was introduced at the same time as script. All browsers that supported <script> also supported <noscript>. <nostyle> would cause all legacy user agents to render the content even if they supported styles just fine. And yes, while that is true and for many situations will work fine, there are other cases where it won't, and you can get sloppy or even bad results because of rendering engine paths. For example, because style is not applied until later, you have an issue here: <h2 class="nostyle"><img src="error.gif">Warning: Styles required for correct rendering</h2> The network request happens regardless of the situation, assuming images are on. That doesn't seem like a very serious issue. Just don't use images if you care that much. A large percentage of non-CSS browsers are probably text-based anyway. For example, using the content property can be somewhat troubling if style is removed. Consider what happens if you are putting in field-required indicators input[type=text].required:before {content: " (*) "} This should just use HTML5's required attribute instead of a class: http://dev.w3.org/html5/spec/Overview.html#the-required-attribute Conformant browsers should make it clear to the user that the field is required even if styles are disabled.
Those two don't seem like a big deal to me, honestly, even if it were logistically possible to get <nostyle> supported widely enough to be useful. If CSS is necessary for a site to operate, it's probably being misused. If an author is misusing CSS this badly, it's not clear to me why they could be expected to reliably use <nostyle>. The contents of <nostyle> also don't make a difference to almost anyone, so authors who use it won't really understand the purpose it serves and it will probably be misused more often than used.
Re: [whatwg] Browser Bundled Javascript Repository
Chris Holland: But you're right, this is all a lot of end-user intervention: it would be a slightly, err, very painful process of installing a browser plugin, which is currently very much a user opt-in process, and not something very practical. [...] I'm just trying to find ways to leverage a lot of what's already there. Yes, I don't like the idea of requiring users to act. But I see what you were trying to do with reuse. Interesting thing is, the same scheme could be leveraged for local CSS extensions: I thought about this as well, but CSS is far less likely to be duplicated across multiple sites. There are plenty of CSS frameworks, but I feel that none have picked up enough dominance for this kind of optimization to be useful. You do mention what looks like urn schemes and extending this idea to CSS. I was specifically thinking of JavaScript because of its widespread use of libraries/frameworks. Using URN schemes could let this repository idea extend to more than just JavaScript; however, I don't think any other type of resource (CSS, images, etc.) has this unique pattern of the exact same content being served on thousands of different domains. <link rel="local:extension" type="text/css" href="ext:ibdom.0.2.css" /> To handle users who don't have the ibdom JavaScript extension installed, developers could add something like this to their document (assuming a decent library which declares a top-level object/namespace): <script type="text/javascript"> if (!window.IBDOM) { var newScript = document.createElement("script"); newScript.setAttribute("type", "text/javascript"); newScript.setAttribute("src", "/path/to/ibdom.0.2.js"); document.getElementsByTagName("head")[0].appendChild(newScript); } </script> Although the idea is the same (have a fall-back plan if a repository lookup is ignored or fails), I think this is needlessly complex compared to just adding a new attribute on the script tag. By extending the script tag you already have fall-back behavior: just download the script from the src attribute.
If you take the link approach then you're practically requiring fault-tolerant code like you showed above, and that is no fun for web developers. -- The more I think about it, the more I think this might not necessarily be a web standards idea. It is more of a browser optimization; however, it would never take off unless it was standardized. I don't know what to do about this... CDN caching, like Google's Hosted Libraries, is more generic but less optimized. Maybe this is just a special case? - Joe
Re: [whatwg] Unifying DOMTokenList with DOM 3 Core's DOMStringList?
Anne van Kesteren wrote: On Mon, 15 Jun 2009 18:25:03 +0200, Adam Roben aro...@apple.com wrote: DOM 3 Core defines the DOMStringList interface [1], [...] I don't think anybody actually implements this interface though or is planning to. Gecko does. It's used for the styleSheetSets property on documents (which I thought hixie had a draft spec for at some point, and which I seem to recall WebKit implementing also), for some mozItems extension on offline resource stuff, and for the types property of the HTML5 Drag and Drop DataTransfer object. Maybe other things too; I didn't read the mxr results that carefully. -Boris
Re: [whatwg] DOMTokenList is unordered but yet requires sorting
On Mon, Jun 15, 2009 at 12:38, Darin Adler da...@apple.com wrote: Since DOMTokenList requires uniqueness, then I suspect it's still O(n log n) even without sorting, not O(n). That can be done in O(n). -- erik
Re: [whatwg] DOMTokenList is unordered but yet requires sorting
Uniqueness of tokens can be determined in O(n) only* if the tokens are ordered in the source (any order would do) but there is no such requirement, and it cannot be required for compatibility with the content in the wild and because the standard supports inserting new tokens. It is possible to ignore this issue and proceed as if the tokens were ordered. The result would be that remove would fail, or it would run in quadratic time. HTH, Chris * If all possible tokens are predefined and their number is finite and the source is valid, uniqueness can be determined in constant time. This scenario, however, is better served by a bit field than by a token list.
Re: [whatwg] DOMTokenList is unordered but yet requires sorting
On Mon, Jun 15, 2009 at 16:02, Kristof Zelechovski giecr...@stegny.2a.pl wrote: Uniqueness of tokens can be determined in O(n) only* if the tokens are ordered in the source (any order would do), but there is no such requirement, and it cannot be required for compatibility with the content in the wild and because the standard supports inserting new tokens. That is not true. Just use a set/map to keep track of previously seen elements. It is a trivial thing to do. It is possible to ignore this issue and proceed as if the tokens were ordered. The result would be that remove would fail, or it would run in quadratic time. HTH, Chris * If all possible tokens are predefined and their number is finite and the source is valid, uniqueness can be determined in constant time. This scenario, however, is better served by a bit field than by a token list. -- erik
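The set-based approach can be sketched as a single pass; this is an illustrative snippet, not engine code, and it uses the modern Set type (an object used as a lookup table serves the same purpose):

```javascript
// Single pass with a set: unique tokens in source order, O(n)
// expected time, no sorting required.
function uniqueTokens(classAttr) {
  var seen = new Set();
  var out = [];
  classAttr.split(/\s+/).forEach(function (token) {
    if (token !== "" && !seen.has(token)) {
      seen.add(token);
      out.push(token); // first occurrence wins; order preserved
    }
  });
  return out;
}
```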
Re: [whatwg] DOMTokenList is unordered but yet requires sorting
The complexity of using a set/map is logarithmic in the size of the set. Multiply that by the number of steps and you get the total cost. Chris
Re: [whatwg] DOMTokenList is unordered but yet requires sorting
On Mon, Jun 15, 2009 at 7:11 PM, Kristof Zelechovski giecr...@stegny.2a.pl wrote: The complexity of using a set/map is logarithmic in the size of the set. Not if it's implemented as a hash table. Is DOMTokenList really expected to store lists large enough that this asymptotic behavior matters, though?
Re: [whatwg] DOMTokenList is unordered but yet requires sorting
On Mon, Jun 15, 2009 at 16:12, Aryeh Gregor simetrical+...@gmail.com wrote: On Mon, Jun 15, 2009 at 7:11 PM, Kristof Zelechovski giecr...@stegny.2a.pl wrote: The complexity of using a set/map is logarithmic in the size of the set. Not if it's implemented as a hash table. Is DOMTokenList really expected to store lists large enough that this asymptotic behavior matters, though? For WebKit I was initially not planning to use a backing map/set. Class lists should be pretty small, and for small lists it is faster to iterate over the list than to use a hash table. If this ends up being a performance issue, a backing map/set can be used. (WebKit uses a vector internally and does O(n) lookups when computing the style, so I doubt it will be a performance issue.) -- erik
Re: [whatwg] DOMTokenList is unordered but yet requires sorting
The complexity of using a set implemented as a hash table is, in the worst case, quadratic in the number of elements because of hash collisions. Chris
Re: [whatwg] nostyle consideration
On Mon, Jun 15, 2009 at 2:14 PM, Aryeh Gregor simetrical+...@gmail.com wrote: On Mon, Jun 15, 2009 at 4:26 PM, Thomas Powell tpow...@gmail.com wrote: Proposing <nostyle> in the spirit of <noscript>. Examples: 1) Head Usage <nostyle> <meta http-equiv="Refresh" content="0;url=/errors/stylerequired.html"> </nostyle> 2) Body Usage <nostyle> <h2>Warning: Styles required for correct rendering</h2> </nostyle> The reason that noscript worked is because (IIRC) it was introduced at the same time as script. All browsers that supported <script> also supported <noscript>. <nostyle> would cause all legacy user agents to render the content even if they supported styles just fine. Yes, in the absence of our time machine it seems a bit late, doesn't it. And yes, while that is true and for many situations will work fine, there are other cases where it won't, and you can get sloppy or even bad results because of rendering engine paths. For example, because style is not applied until later, you have an issue here: <h2 class="nostyle"><img src="error.gif">Warning: Styles required for correct rendering</h2> The network request happens regardless of the situation, assuming images are on. That doesn't seem like a very serious issue. Just don't use images if you care that much. A large percentage of non-CSS browsers are probably text-based anyway. It isn't, but it hints at what the motivation was from a real-world request (see below). For example, using the content property can be somewhat troubling if style is removed. Consider what happens if you are putting in field-required indicators input[type=text].required:before {content: " (*) "} This should just use HTML5's required attribute instead of a class: http://dev.w3.org/html5/spec/Overview.html#the-required-attribute Agreed that is the case; this is more documenting the usage of designers, not that there isn't an HTML5-appropriate solution. Conformant browsers should make it clear to the user that the field is required even if styles are disabled.
Yes, they should. or for offsite links a[href^="http://"]:after {content: ' ( Offsite Link )';} This is non-essential info, and every browser I've heard of exposes it anyway (e.g., by hovering over the link and looking in the lower left). or any other dynamic insert this way. Do you have any other examples where this is a significant issue? Those two don't seem like a big deal to me, honestly, even if it were logistically possible to get <nostyle> supported widely enough to be useful. Those were just examples of more valid uses of content, actually. Of course, as I mentioned below, people can abuse this property and then it does become a big deal. But dynamically having content jam in all sorts of stuff client-side seems wrong-headed, so I certainly don't suggest codifying bad practices, though mitigating them somehow seems appropriate. If CSS is necessary for a site to operate, it's probably being misused. If an author is misusing CSS this badly, it's not clear to me why they could be expected to reliably use <nostyle>. The contents of <nostyle> also don't make a difference to almost anyone, so authors who use it won't really understand the purpose it serves and it will probably be misused more often than used. You may be quite right. Understand that my purpose in proposing this was mostly due to some contrivances to determine style and no-style support for an effort which is very contingency-concerned. The solution that was adopted, using scripting, server-side logging triggered by image requests from background-image values or their absence, etc., can figure out all cases, but it was a mess, and thus the "why not have a nostyle, wouldn't life be easier" reaction. So from where you sit, yes, it is likely not that important; from having to wrestle with it, I would have loved to have an easy solution.
Anyway, I will say that there is a bit of symmetry in having on/off cases for all the various client-side technologies (img, script, object, etc.), but I see that the off aspect of style could simply be thought of as the markup itself, and that is certainly fine; it has worked for most so far.
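For what it's worth, the script-based detection described in the email above can be sketched in a few lines. This is a minimal sketch under assumptions not in the thread: it presumes the site's stylesheet sets a known marker rule on a probe element, and the probe id, marker value, and redirect URL are all made up for illustration.

```javascript
// Hedged sketch: detecting "styles off" without a nostyle element.
// Assumes the stylesheet contains a marker rule such as
//   #css-probe { position: absolute; }
// (probe id and marker value are illustrative only).
function cssEnabled(getComputedValue) {
  // getComputedValue abstracts getComputedStyle(probe).position so the
  // decision logic can be exercised outside a browser.
  return getComputedValue() === 'absolute';
}

// In a real page (browser only, illustrative):
// const probe = document.getElementById('css-probe');
// if (!cssEnabled(() => getComputedStyle(probe).position)) {
//   location.replace('/errors/stylerequired.html');
// }
```

The probe function is injected rather than hard-coded so the same logic could also log a "styles off" hit server-side instead of redirecting, which is closer to the measurement use case discussed above.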
Re: [whatwg] nostyle consideration
On Mon, 15 Jun 2009 21:26:21 +0100, Thomas Powell tpow...@gmail.com wrote: 1) Head Usage: nostyle meta http-equiv=Refresh content=0;url=/errors/stylerequired.html /nostyle 2) Body Usage: nostyle h2 Warning: Styles required for correct rendering /h2 /nostyle The purpose of this element seems to be to make lack of CSS "not my problem" instead of providing a meaningful alternative. This is not helpful for users without CSS. It only helps authors to discriminate against them, and I'm strongly against it. Comments? noscript is a very poor solution, and nostyle would be too. You should use graceful degradation/progressive enhancement instead (in both cases). -- regards, Kornel Lesinski
[whatwg] embed and object
Two questions. 1) What is the difference between embed and object in the spec? The wording is quite different, but seem to say basically the same thing: plugin data. 2) Under object it says: The object element can represent an external resource, which, depending on the type of the resource, will either be treated as an image, as a nested browsing context, or as an external resource to be processed by a plugin. Why specify that a User Agent will use a plugin to process the data if it is not an image or nested browsing context? Why not leave that decision up to the User Agent? -- Stephen Paul Weber, @singpolyma Please see http://singpolyma.net for how I prefer to be contacted.
Re: [whatwg] Browser Bundled Javascript Repository
On Mon, Jun 15, 2009 at 1:09 PM, Joseph Pecoraro joepec...@gmail.com wrote: Dion: The problem here is that it isn't backwards compatible and thus no-one will really be able to use it. I thought the original idea was backwards compatible. Maybe not the URN schemes. If the original idea is not, could you point out the issues? The URN schemes aren't compatible. The SHA hash idea is do-able, but as Oliver pointed out is impractical: a) devs will forget to update it, b) looks ugly, c) fun things would happen with a SHA collision! ;) Dion: You then also get into the how do I get my library into the browser? Enough widespread usage of a library is a clear indicator for adoption into a browser bundle. Dynamically growing repositories could optimize per computer for the particular user's browsing habits (assuming developers would mark their scripts with the identifiers). You can have the same problem with what libraries Google will include in its CDN. Although it may be easier for Google to host just about any library if it already has a CDN set up. This was a real problem for us. How much is enough? We started to get inundated with requests for people to put libraries up there. Dion: After mulling this over with the Google CDN work, I think that using HTTP and the browser mechanisms that we have now gives us a lot without any of these issues. I was afraid of this. This is a completely valid point. I guess it sounds like too much work for too little gain? I don't want to stop you from working on these ideas. The core problem that we tend to download the same crap all the time is real, and I look forward to seeing people come up with interesting solutions. - Joe
Re: [whatwg] nostyle consideration
On Mon, Jun 15, 2009 at 4:23 PM, Kornel Lesinski kor...@geekhood.net wrote: On Mon, 15 Jun 2009 21:26:21 +0100, Thomas Powell tpow...@gmail.com wrote: 1) Head Usage: nostyle meta http-equiv=Refresh content=0;url=/errors/stylerequired.html /nostyle 2) Body Usage: nostyle h2 Warning: Styles required for correct rendering /h2 /nostyle The purpose of this element seems to be to make lack of CSS "not my problem" instead of providing a meaningful alternative. This is not helpful for users without CSS. It only helps authors to discriminate against them, and I'm strongly against it. There is no intention of that in the proposal; you seem to have eliminated the discussion about dynamic content, which is also discriminatory toward such users, as well as the error-reporting examples. I showed a variety of negative and positive cases. My interest in this tag has in fact grown out of a problem with lack of understanding of users with various capabilities, rather than some particular design or tech agenda. Comments? noscript is a very poor solution, and nostyle would be too. You should use graceful degradation/progressive enhancement instead (in both cases). Couldn't agree more about the architecture; if you read any of my books, particularly my Ajax one, I am a strong proponent of falling back, not locking out, but obviously that choice is philosophical, not technical. A negative lock-out approach can be accomplished whether or not this element exists, though, as you say, it makes it easier for some to treat a class of users badly. I am not sure that markup elements can really force a philosophy of Web design/dev, though they can certainly encourage it, and so I understand the passion for not wanting to enable tech abusers any more than we have to, so point taken, but it actually doesn't fit with my experience. Concerning your opinion of the value of noscript, I have to disagree: for me it has been quite valuable. 
In fact, my main use is simply to show people the reality of people turning things off or not supporting script, rather than letting them cite arbitrary hearsay about the issue. We have customers that have used it just to quantify exactly what you are worried about - down-level user agents or script-off folks. It has really helped me get people on board with seeing the realities of addressing such contingency cases. Log files could certainly do this too, but in the age of script-based Google Analytics that is unlikely for most. Given the blissful ignorance about measuring script use, it would be great to see a noscript tag (or maybe even this dreaded nostyle) being employed for good, because in many cases the discriminatory use of Web tech changes once people see the traffic they fail to serve properly in a measurable manner, rather than as an abstract statement about what they ought to do (at least rational corporate types act that way, if experience is a judge). Anyway, I am sure you can think of a bunch of bad uses of JS, but that, if anything, only proves the point of the need: if a site owner is going to be restrictive, for better or worse, wouldn't it be better to be aware of that choice in a quantifiable way and report the error to your users properly? In short, I view this nostyle element simply as a symmetrical element to other aspects of Web tech: on state/off state, that's it. It can be used for good or ill like most anything, and it actually supports a view of awareness of all rendering cases by its mere availability. Judging by the comments, it would appear that some view the style-off state as being handled just fine with plain markup. -Thomas -- regards, Kornel Lesinski
Re: [whatwg] Browser Bundled Javascript Repository
On Mon, Jun 15, 2009 at 1:09 PM, Joseph Pecoraro joepec...@gmail.com wrote: Dion: The problem here is that it isn't backwards compatible and thus no-one will really be able to use it. I thought the original idea was backwards compatible. Maybe not the URN schemes. If the original idea is not, could you point out the issues? The URN schemes aren't compatible. The SHA hash idea is do-able, but as Oliver pointed out is impractical: a) devs will forget to update it, b) looks ugly, c) fun things would happen with a SHA collision! ;) a) Solved by Validation - I can't think of anything much better than that. =( b) Canonical Listing - This shouldn't be too difficult to distribute from a central source or some convention. c) Hehe, I think I detect a hint of sarcasm. If there is a SHA1 collision then you'd probably make a lot of money! Dion: You then also get into the how do I get my library into the browser? Enough widespread usage of a library is a clear indicator for adoption into a browser bundle. Dynamically growing repositories could optimize per computer for the particular user's browsing habits (assuming developers would mark their scripts with the identifiers). You can have the same problem with what libraries Google will include in its CDN. Although it may be easier for Google to host just about any library if it already has a CDN set up. This was a real problem for us. How much is enough? We started to get inundated with requests for people to put libraries up there. Let the browsers decide. And I can't make any reasonable suggestions without getting real-world data, something I haven't tried to do yet. But yes, this is a good point, something that is extremely flexible / variable. 
I don't want to stop you from working on these ideas. The core problem that we tend to download the same crap all the time is real, and I look forward to seeing people come up with interesting solutions. Thanks for the support. My thoughts are beginning to look like this: - Javascript frameworks are downloaded all the time on many domains. This is a special case. - Those who benefit the most are the ones that can't spare the extra request or large caches. This makes me think mobile browsers would get the biggest benefit. - I think the iPhone had some special HTML syntax for its mobile webpages; maybe they can sneak this in if it proves useful to them. - Joe
Re: [whatwg] Browser Bundled Javascript Repository
2009/6/15 Joseph Pecoraro joepec...@gmail.com On Mon, Jun 15, 2009 at 1:09 PM, Joseph Pecoraro joepec...@gmail.com wrote: Dion: The problem here is that it isn't backwards compatible and thus no-one will really be able to use it. I thought the original idea was backwards compatible. Maybe not the URN schemes. If the original idea is not, could you point out the issues? The URN schemes aren't compatible. The SHA hash idea is do-able, but as Oliver pointed out is impractical: a) devs will forget to update it, b) looks ugly, c) fun things would happen with a SHA collision! ;) a) Solved by Validation - I can't think of anything much better than that. =( b) Canonical Listing - This shouldn't be too difficult to distribute from a central source or some convention. c) Hehe, I think I detect a hint of sarcasm. If there is a SHA1 collision then you'd probably make a lot of money! C is a serious concern. SHA-1 collisions are now 2^51 - http://eprint.iacr.org/2009/259.pdf Dion: You then also get into the how do I get my library into the browser? Enough widespread usage of a library is a clear indicator for adoption into a browser bundle. Dynamically growing repositories could optimize per computer for the particular user's browsing habits (assuming developers would mark their scripts with the identifiers). You can have the same problem with what libraries Google will include in its CDN. Although it may be easier for Google to host just about any library if it already has a CDN set up. This was a real problem for us. How much is enough? We started to get inundated with requests for people to put libraries up there. Let the browsers decide. And I can't make any reasonable suggestions without getting real-world data, something I haven't tried to do yet. But yes, this is a good point, something that is extremely flexible / variable. 
Dion: After mulling this over with the Google CDN work, I think that using HTTP and the browser mechanisms that we have now gives us a lot without any of these issues. I was afraid of this. This is a completely valid point. I guess it sounds like too much work for too little gain? I don't want to stop you from working on these ideas. The core problem that we tend to download the same crap all the time is real, and I look forward to seeing people come up with interesting solutions. Thanks for the support. My thoughts are beginning to look like this: - Javascript frameworks are downloaded all the time on many domains. This is a special case. - Those who benefit the most are the ones that can't spare the extra request or large caches. This makes me think mobile browsers would get the biggest benefit. - I think the iPhone had some special HTML syntax for its mobile webpages; maybe they can sneak this in if it proves useful to them. - Joe
Re: [whatwg] Browser Bundled Javascript Repository
c) fun things would happen with a SHA collision! ;) c) Hehe, I think I detect a hint of sarcasm. If there is a SHA1 collision then you'd probably make a lot of money! C is a serious concern. SHA-1 collisions are now 2^51 - http://eprint.iacr.org/2009/259.pdf This time I didn't detect sarcasm =) I was actually aware of that paper. I saw it on Reddit this past week, and although they complained about the fact that it has not yet been reviewed, I think it could very well be valid. It's been known that SHA1 has been theoretically broken (not the perfect 2**80) for some time now: (2005) http://www.schneier.com/blog/archives/2005/02/sha1_broken.html However, its application in this repository idea is not to be a cryptographically secure hash; it would just be to perform a quick, reliable hash of the contents and produce a unique identifier. There would be no security concerns in the impossibly rare chance that two scripts' hashes collide. Just add some whitespace to the text somewhere! It would even be easy to debug with standard tools such as Firefox's Firebug and Webkit's Web Inspector. Hahaha =) Also, Git and Mercurial (distributed version control systems) have been using SHA1 for the exact same purpose for years. I'm more familiar with Git's use of SHA1, and it uses it everywhere in the internals (file contents, directory listings, commit history). Finally, if anyone here is seriously concerned with SHA1, just move to SHA-256 or SHA-512. With a repository unlikely to grow into the thousands, much less the millions, worrying about the chances of a collision even at 2**51 (2251799813685248 base 10) is bold thinking ;) I'm not attacking anyone here, I'm just clarifying why I think SHA1 is not a bad choice. Collision will always be an issue when an infinite number of things gets reduced to a finite set of values, but the concern is negligible when done right. Cheers - Joe
Re: [whatwg] Browser Bundled Javascript Repository
2009/6/15 Joseph Pecoraro joepec...@gmail.com c) fun things would happen with a SHA collision! ;) c) Hehe, I think I detect a hint of sarcasm. If there is a SHA1 collision then you'd probably make a lot of money! C is a serious concern. SHA-1 collisions are now 2^51 - http://eprint.iacr.org/2009/259.pdf This time I didn't detect sarcasm =) I was actually aware of that paper. I saw it on Reddit this past week, and although they complained about the fact that it has not yet been reviewed, I think it could very well be valid. It's been known that SHA1 has been theoretically broken (not the perfect 2**80) for some time now: (2005) http://www.schneier.com/blog/archives/2005/02/sha1_broken.html However, its application in this repository idea is not to be a cryptographically secure hash; it would just be to perform a quick, reliable hash of the contents and produce a unique identifier. There would be no security concerns in the impossibly rare chance that two scripts' hashes collide. Just add some whitespace to the text somewhere! It would even be easy to debug with standard tools such as Firefox's Firebug and Webkit's Web Inspector. Hahaha =) In the event of a collision there would be huge issues - imagine running someone else's script in your application. Basically XSS - someone could take over your app, steal passwords, do bank transactions on your behalf, etc. Collisions are made easier in plain text than in certs given that your input is not constrained. Also, Git and Mercurial (distributed version control systems) have been using SHA1 for the exact same purpose for years. I'm more familiar with Git's use of SHA1, and it uses it everywhere in the internals (file contents, directory listings, commit history). There have been a number of threads about that :) Finally, if anyone here is seriously concerned with SHA1, just move to SHA-256 or SHA-512. 
With a repository unlikely to grow into the thousands, much less the millions, worrying about the chances of a collision even at 2**51 (2251799813685248 base 10) is bold thinking ;) The chances assuming everything is random are very low. The chances assuming an active attacker, which is the case we're considering here, are not 1/2^51. 2^51 merely represents how much work needs to be done, or, viewed alternately, how close a plausible attack is. I'm not attacking anyone here, I'm just clarifying why I think SHA1 is not a bad choice. Collision will always be an issue when an infinite number of things gets reduced to a finite set of values, but the concern is negligible when done right. Cheers - Joe
Re: [whatwg] nostyle consideration
On Mon, Jun 15, 2009 at 7:28 PM, Thomas Powell tpow...@gmail.com wrote: There is no intention of that in the proposal; you seem to have eliminated the discussion about dynamic content, which is also discriminatory toward such users, as well as the error-reporting examples. I showed a variety of negative and positive cases. My interest in this tag has in fact grown out of a problem with lack of understanding of users with various capabilities, rather than some particular design or tech agenda. For the same reason you shouldn't rely only on JavaScript to provide necessary content, you shouldn't rely on generated content in CSS. If you follow this very basic principle, you obviate the need for nostyle. I encourage you to view the following excerpt from an Eric Meyer presentation, on the perils of relying on CSS to generate content: http://www.vimeo.com/1149007?pg=embed&sec=1149007 The key point is this: if it's important, it should be in the content; it shouldn't be generated. Erik Vorhes
Re: [whatwg] Browser Bundled Javascript Repository
2009/6/15 Ian Fette (イアンフェッティ) ife...@google.com: In the event of a collision there would be huge issues - imagine running someone else's script in your application. Basically XSS - someone could take over your app, steal passwords, do bank transactions on your behalf, etc. Collisions are made easier in plain text than in certs given that your input is not constrained. I think the idea was for browser vendors to select and include these libraries in the browser. So there isn't an obvious (to me) way for an attacker to use hash collisions to create an XSS. That said, I don't think content hashes are the right identifier. Using a sha-1 of a specific jquery version would prevent anyone from ever fixing critical bugs in it. There'd be all this legacy content out there referring to an outdated version. - a
Re: [whatwg] HTML 5 video tag questions
Okay. Thanks. Maybe to make this more clear, section 4.8.7.1 should add a sentence somewhere like: Authors may provide multiple source elements to provide different codecs for different user agents. Thank you. --- On Mon, 6/15/09, Tab Atkins Jr. jackalm...@gmail.com wrote: From: Tab Atkins Jr. jackalm...@gmail.com Subject: Re: [whatwg] HTML 5 video tag questions To: Chris Double chris.dou...@double.co.nz Cc: whatwg@lists.whatwg.org, jjcogliati-wha...@yahoo.com Date: Monday, June 15, 2009, 6:55 AM On Mon, Jun 15, 2009 at 4:49 AM, Chris Double chris.dou...@double.co.nz wrote: On Mon, Jun 15, 2009 at 5:27 PM, Tab Atkins Jr. jackalm...@gmail.com wrote: (That said, I don't think there's anything wrong with nesting videos, it's just unnecessary.) This won't work since fallback content is not displayed unless video is not supported. Dang, I was wrong. I know I remembered some conversations about nested video, but I guess I was just remembering people *asking* about it. Regardless, as noted by others, my source suggestion was correct. Provide multiple sources if you're not sure about what format your users can view. ~TJ
Re: [whatwg] HTML 5 video tag questions
jjcogliati-wha...@yahoo.com wrote: Okay. Thanks. Maybe to make this more clear section 4.8.7.1 should add a sentence somewhere like: Authors may provide multiple source elements to provide different codecs for different user agents. Not just different codecs. Different bitrates, frame rates, or (coded) resolutions are also obvious axes along which browsers would logically want to choose if that's how source is really meant to work. The browser can inspect and decide. --Ben
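The source-selection behavior discussed in this thread — and the canPlayType query mentioned earlier — can be sketched in script. This is a hedged sketch: the firstPlayable helper name is made up, the MIME/codec strings are common illustrative examples, and actual support varies by user agent.

```javascript
// Sketch: return the first candidate a media element reports it may play.
// canPlayType returns "probably", "maybe", or "" (empty string means the
// type is not supported), per the HTML5 media API.
function firstPlayable(media, candidates) {
  for (const c of candidates) {
    if (media.canPlayType(c.type) !== '') return c.src;
  }
  return null; // nothing playable; author-supplied fallback would be needed
}

// In a browser (illustrative):
// const v = document.createElement('video');
// const src = firstPlayable(v, [
//   { src: 'movie.ogv', type: 'video/ogg; codecs="theora, vorbis"' },
//   { src: 'movie.mp4', type: 'video/mp4; codecs="avc1.42E01E, mp4a.40.2"' },
// ]);
```

This mirrors what the UA does internally when it walks a list of source children: try each in turn, and fall back to nothing playable (not to the element's fallback content) when none match.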
Re: [whatwg] Browser Bundled Javascript Repository
In the event of a collision there would be huge issues - imagine running someone else's script in your application. Basically XSS - someone could take over your app, steal passwords, do bank transactions on your behalf, etc. Collisions are made easier in plain text than in certs given that your input is not constrained. Aaron Boodman: I think the idea was for browser vendors to select and include these libraries in the browser. So there isn't an obvious (to me) way for an attacker to use hash collisions to create an XSS. Yes, thanks for clearing that up. That said, I don't think content hashes are the right identifier. Using a sha-1 of a specific jquery version would prevent anyone from ever fixing critical bugs in it. There'd be all this legacy content out there referring to an outdated version. Assuming there is a buggy version of a JS library: 1. It probably shouldn't be used. 2. The browser vendor can (and should) eliminate it from their repository, and the usual fallback of downloading the script from the script's src attribute would take effect. I could see where this could get confusing: JS Library X gets released and tells its users to use SHA1 ABC. Thousands of people download it; later that day the authors fix an issue, update their site, and say to use the new version with SHA1 FDE. Canonical listings would make this easier. - Joe
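The fallback behavior described above amounts to a lookup-with-fallback. A hedged sketch follows; repository and fetchUrl are hypothetical stand-ins for illustration, not real browser APIs, and resolveScript is a made-up name.

```javascript
// Sketch: resolve a script by content hash from a hypothetical bundled
// repository, falling back to the normal network fetch of its src.
function resolveScript(hash, src, repository, fetchUrl) {
  if (repository.has(hash)) {
    return repository.get(hash); // serve the bundled copy; no request made
  }
  return fetchUrl(src); // usual fallback: download from the src attribute
}
```

Under this model, a vendor pulling a buggy version would simply delete its hash from the repository, and pages referencing that hash would silently fall back to the network copy.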