Re: [whatwg] request for clarification: aside, figure
On Tue, 09 Jun 2009 01:57:15 +0100, Ian Hickson i...@hixie.ch wrote: On Sun, 10 May 2009, Bruce Lawson wrote: I don't think the spec is clear enough in defining these two elements from an author's perspective. .. What is the difference between a figure that has no caption and an aside? Both seem to be connected in some way with the main content around them, but can be considered separate/may be moved. .. So if I have a magazine-style pullquote, is that a figure or an aside (or neither)? I have attempted to address this, but it actually turns out HTML5 already has examples of how to do pull quotes in the aside section. I didn't express myself clearly enough. This isn't a problem per se - it's the symptom of a problem. I note that there is an example of how to do pullquotes, but I can't deduce the logic that makes it obvious why one should use an aside rather than a figure; the definition of each seems to allow either to be used thus. For example, in the middle of a fictional interview about markup, I might want to pull out a quote and citation. Do I write

<aside>
 <blockquote>After a sip of sweet sherry, I turn into Mr Last Week</blockquote>
 <cite>Ian Hickson</cite>
</aside>

or

<figure>
 <blockquote>After a sip of sweet sherry, I turn into Mr Last Week</blockquote>
 <legend>Ian Hickson</legend>
</figure>

The former shows correct usage of aside vs figure, though the cite element usage is incorrect; the name should not be marked up. Again, I see no spec-derived reason why it should be aside rather than figure, other than that it happens to be given as an example of one rather than the other. I have no preference; I just seek to eliminate ambiguity. (Given that marking up a name as a citation is common practice, and a validator cannot distinguish between a name and the title of a work, should we widen the definition of cite to match the English-language definition "1. to quote or refer to (a passage, book, or author)"? A different discussion, apologies.)
Re: [whatwg] Asynchronous file upload
On Tue, 09 Jun 2009 03:33:56 +0200, Ian Hickson i...@hixie.ch wrote: On Mon, 11 May 2009, Samuel Santos wrote: I was asked by a client if it was possible to implement something similar to the asynchronous file upload used on gmail using only standard web technologies. Looking at the gmail source code I can see that they use some flash magic. And by reading the HTML5 spec I could not find a way to implement this feature. This is a feature of XHR2, I believe: http://dev.w3.org/2006/webapi/XMLHttpRequest-2/ ...though it will probably rely on the upcoming File API to actually obtain files to upload. And as such is not defined yet at all, but that is pretty much the plan, yes. -- Anne van Kesteren http://annevankesteren.nl/
Re: [whatwg] Annotating structured data that HTML has no semantics for
On Mon, 11 May 2009, Simon Pieters wrote: On Sun, 10 May 2009 12:32:34 +0200, Ian Hickson i...@hixie.ch wrote: Page 3:

<h2>My Cats</h2>
<dl>
 <dt>Schrödinger
 <dd item=com.damowmow.cat>
  <meta property=com.damowmow.name content=Schrödinger>
  <meta property=com.damowmow.age content=9>
  <p property=com.damowmow.desc>Orange male.
 <dt>Erwin
 <dd item=com.damowmow.cat>
  <meta property=com.damowmow.name content="Lord Erwin">
  <meta property=com.damowmow.age content=3>
  <p property=com.damowmow.desc>Siamese color-point.
  <img property=com.damowmow.img alt="" src=/images/erwin.jpeg>
</dl>

Given the microdata solution and this example, there is now a reason other than styling to introduce di, since here you duplicate the dt information in meta:

<dl>
 <di item=com.damowmow.cat>
  <dt property=com.damowmow.name>Schrödinger
  <dd>
   <meta property=com.damowmow.age content=9>
   <p property=com.damowmow.desc>Orange male.
 </di>
 ...

The styling problem is discussed at http://forums.whatwg.org/viewtopic.php?t=47 Yeah, I noticed that. I agree that if it turns out that this is a common authoring pattern (and assuming we can work around the difficulties in adjusting the parser to handle this), we should probably introduce di after all. I intend to wait and see what happens first, though. On Mon, 11 May 2009, Giovanni Gentili wrote: Ian Hickson: USE CASE: Annotate structured data that HTML has no semantics for, and which nobody has annotated before, and may never again, for private use or use in a small self-contained community. (..) SCENARIOS: Among the scenarios, this case should also be considered: * a user (or group of users) wants to annotate items present on a generic web page with additional properties in a certain vocabulary. For example, Joe wants to gather in a blog a series of personal annotations to movies (or other types of items) present on imdb.com. This isn't really a use case, it's a solution. What is the end-user scenario that the author is trying to address?
For example, what kind of software will collect this information? What problem are we solving? a) In the case of properties specified for an element without an ancestor with an item attribute specified, should the corresponding item be the document? (element body with an implicit item attribute). We already have mechanisms for providing name-value pairs for a document; namely, meta name and link rel. b) Do we need to require UAs to offer a standard way to visualize (at least as an option left to the user) the structured information carried in microdata? Not as far as I can tell; what use case would this be for? And copy-and-paste? The spec already requires user agents to include microdata in copy and paste. On Tue, 12 May 2009, Tim Tepaße wrote: (Note the metas in the last example -- since sometimes the information isn't visible, rather than requiring that people put it in and hide it with display:none, which has a rather poor accessibility story, I figured we could just allow meta anywhere, if it has a property= attribute.) That seems to be a solution optimised for extremely invisible metadata but not for metadata which differs from the human-visible data. It handles both -- instead of:

<span itemprop=x>y</span>

...you can do:

<span><meta itemprop=x content=y>z</span>

Imagine as an example the simple act of marking up a number (and ignoring what the number denotes). For human consumption a thousands separator is often used, and the type of separator differs by language, locale and context. Just in my little world I see on a regular basis the point, the comma, the space, the thin space and sometimes the apostrophe. Parsing different representations of numbers would be a chore. The textContent of the element

<span itemprop=com.example.price>&euro;&nbsp;1&thinsp;000&thinsp;000,&mdash;</span>

is clearly unusable, demanding an additional invisible

<meta property=com.example.price content=1000000>

Right.
My irritation lies in the element proliferation, requiring one element/attribute combination for machines and one element/text-content combination for humans. Of course, any sane author would arrange both elements in a close relation, as parent/child or siblings, but there would still be two different elements to maintain, leading to a higher cognitive load. Not just for authors but also for programmers: a fluctuating price would have to be updated on two different elements; tree-walking DOM scripts would have to take meta elements into account. Furthermore, it clashes with the familiar pattern of other elements in HTML. A hyperlink is one element with a machine-readable attribute and human-readable text content. A citation is one element with a machine-readable reference and human-readable text content. The same model is used in meter, progress, time, abbr ... but not in user-defined properties.
Re: [whatwg] Annotating structured data that HTML has no semantics for
Some of the improvement suggestions that I have heard sound interesting, though possibly for the next version of microdata. * Support for specifying a machine-readable value, such as for dates, colors, numbers, etc. I expect we will add support for these based on demand, the same way we added time in the first place. Using dedicated elements for each data type seems like it will eventually bloat the language. For example, what use would a color element or a number element serve? If instead machine-readable values could be added using a generic method, such as an 'itemvalue' or 'propvalue' attribute, each microdata format could define how to interpret the values, be they numbers, dates, body parts, or chemical formulas. I even wonder if it would allow replacing the time element with a standardized microformat, such as: Christmas is going down on <span item=w3c.time itemvalue=12-25-2009>The 25th day of December</span>! I don't really understand how that would be better than dedicated elements. The idea would be to reduce the size of the language. I.e., if a feature isn't heavily used, it might be better expressed as a microdata format. For example, why didn't you add elements for BibTeX or vCard, but instead used microdata? However, it's quite possible that time is going to be commonly used enough that it's worth using an element rather than a microdata format. Another reason is as a test of the microdata feature itself. Microdata is a sort of extension mechanism for HTML 5. In software development, it is common to test your extension system by developing parts of the product using the extension system. This way you both keep the core code small and get a good test bed for your extension system. You have already done this with the predefined vocabularies, and apparently the lack of an ability to define a machine-readable value separate from the human-readable one was not a problem. However, it would seem that the same does not hold true for time.
* Support for tabular data. This would be nice if we can find a way to do it that doesn't put undue burdens on simple implementations. (e.g. I would imagine that while a microdata implementation today can be a few hundred lines total, adding support for the table model could easily double that.) Quite possibly. In both these cases I'm perfectly happy to hold off on adding more features to microdata for now and see if what we have is successful, before we start over-engineering it to cover every imaginable case. / Jonas
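The 'itemvalue' idea discussed above can be sketched as a simple consumer preference rule. This is purely illustrative: 'itemvalue' is only a proposed attribute name, and the plain-object element shape below stands in for a real DOM node so the sketch can run anywhere.

```javascript
// Sketch of the proposed (not specified) generic machine-readable value:
// a consumer prefers the 'itemvalue' attribute, falling back to the
// visible text. 'element' here is a plain object, not a DOM node.
function machineValue(element) {
  var attrs = element.attributes || {};
  return ('itemvalue' in attrs) ? attrs.itemvalue : element.textContent;
}
```

A consumer of the Christmas example above would then see "12-25-2009" rather than "The 25th day of December".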
Re: [whatwg] Annotating structured data that HTML has no semantics for
* Let a COLOR element have a value DOM property that returns a color. * Let a NUMBER element have a value DOM property that returns a number. Actually, the latter use case is one I have bumped into: * The DOM does not provide a numeric value, * JavaScript support for parsing localized properties is poor; you have to reverse-engineer the result of toLocaleString, * VBScript support is better but inconsistent, as it depends on the system locale and not on the document locale as desired. IMHO, Chris
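Chris's point about reverse-engineering toLocaleString can be made concrete. Below is a hedged sketch of the kind of ad-hoc parser script authors end up writing by hand today; the heuristics (last '.' or ',' followed by one or two digits is the decimal separator) are guesses that break for some locales, which is exactly why a machine-readable value would help.

```javascript
// Naive localized-number parser (illustration only, not robust).
function parseLocalizedNumber(s) {
  // Drop whitespace (\s covers no-break and thin spaces in JS regexes)
  // and apostrophes used as grouping separators.
  var t = String(s).replace(/[\s']/g, '');
  // Heuristic: the final '.' or ',' is the decimal separator only when
  // followed by one or two digits; every other '.'/',' is grouping.
  var m = t.match(/^([\d.,]+)[.,](\d{1,2})$/);
  if (m) return parseFloat(m[1].replace(/[.,]/g, '') + '.' + m[2]);
  return parseFloat(t.replace(/[.,]/g, ''));
}
```

For example, "1.234,56" and "1,234.56" both come out as 1234.56, but a string like "1.000" is ambiguous and the heuristic simply guesses grouping.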
Re: [whatwg] Annotating structured data that HTML has no semantics for
The problem of the W3C DTD DDoS does not apply to CURIEs, because software processing RDF does not need to retrieve the referenced resources on a regular basis. Even in the case of DTDs, the problem is that some software does not cache, not that some software tries to access them at all. IMHO, Chris
Re: [whatwg] Two feature-suggestions for HTML5 (forms)
Please don't cross-post to w3 lists and to whatwg lists. 2009/6/8 Asser Nilsson asser.nils...@googlemail.com: Hi! There are two things in HTML5 forms that are often done with active technology like JavaScript that would be very cool if HTML5 could do without scripting: 1. I've seen this at the online dictionary dict.leo.org: they made a JavaScript function so that you can type the word to translate without the text box being focused, and the characters still end up in there. So you don't have to focus the right text box before you type; you can just type, without checking whether the box is focused. I think it would be a very nice feature for HTML5 forms if this worked without scripting... E.g. an option for text fields so that everything typed on the page goes into this field (even if it is not focused). What about setting the autofocus attribute on the page? Keys have various meanings at various points in the page and you should not change that, but you can get the equivalent effect (the user doesn't have to click on the text box). 2. In search boxes JavaScript is often used to send every typed character to the server, and the server returns search suggestions as you type. If this were possible without scripting, with only HTML/CSS, it would be very cool. It is possible to do this with XForms, I guess. That is not exactly declarative, but it has fewer problems than pure JavaScript (if you don't consider that XForms is mainly implemented as JavaScript, of course). Of course, there are things where you need scripting, but such basic features should be possible without scripting, only with HTML/CSS. (Many people deactivate scripting.) Sorry for my bad English, but I really think these two things would be very nice features for coming versions of HTML. Greetings, Asser Nilsson Giovanni
Re: [whatwg] Annotating structured data that HTML has no semantics for
Ian Hickson wrote: I agree entirely. I actually tried to find a workable solution to address this, but unfortunately the only general solutions I could come up with that would allow this were selector-based, and in practice authors are still having trouble understanding how to use Selectors even with CSS. There's also the problem with separating the data from the rules that say how to interpret the data, which would likely lead to more problems than the typos one would get from repeating the itemprop=""s. I am sorry, but I cannot agree on this one. At least simple selectors are well understood and a well-established technique on the web. There is widespread use of them in CSS (so it is very simple to test whether your selector matches the correct set of elements). And the fact that jQuery is *so* successful is based on jQuery's capability to work with selectors in such an easy way – not the other way around. And with a selector-based approach it is far easier to add metadata to existing content than with the microdata proposal. So for authors it would be much easier, I think. It would work like a decentralized microformats approach (btw. it would be easy to map the existing microformats to such a CSS-based metadata format), with the benefit that you can simply map your own classes and ids to global ones like FOAF, DC or hCard. And you could easily use such profiles from other pages, e.g.: someone could mark up the songs on his page the way last.fm does and then simply use a copy of their metadata profile (basically in the same way we use microformats now). The only real problem I see is the unfortunate fact that it is harder for browser implementors to write good copy-and-paste code which preserves all metadata from one source to another.
Best regards Frank -- frank hellenkamp | interface designer solmsstraße 7 | 10961 berlin +49.30.49 78 20 70 | tel +49.173.70 55 781 | mbl +49.3212.100 35 22 | fax jo...@depagecms.net http://www.depagecms.net http://immerdasgleiche.de http://everydayisexactlythesame.net/
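The selector-to-vocabulary mapping idea above can be reduced to a toy that runs without a DOM. Everything here (the rule format, the node shape, the function name) is invented for the sketch; a real implementation would evaluate full Selectors against the document tree rather than bare class names.

```javascript
// Toy CRDF-style extraction restricted to bare ".classname" selectors.
// rules: [{selector: '.fn', property: 'foaf:name'}, ...]
// nodes: [{classes: ['fn'], text: 'Frank'}, ...]
function extract(rules, nodes) {
  var out = {};
  rules.forEach(function (rule) {
    var cls = rule.selector.slice(1); // assumes a ".classname" selector
    nodes.forEach(function (node) {
      if (node.classes.indexOf(cls) !== -1) {
        out[rule.property] = node.text; // last match wins in this toy
      }
    });
  });
  return out;
}
```

The appeal is visible even at this scale: the page keeps its existing class names, and the mapping to global vocabularies lives in one small rule set.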
[whatwg] (no subject)
This is a proposal that I posted to w3.org a year ago, and it didn't really get any debate there, so I'm hoping to provoke some here. I won't go into too much detail, instead linking to the original posts, but I'll give a bit of an overview here... Essentially the proposal is for a static DOM object which has read-only settings exposed to JavaScript (ultimately one day sendable via HTTP to the web server to supersede UserAgent sniffing); the browser would be left with the task of presenting the various options to the user (global, per domain, etc.). JavaScript has allowed web sites and applications increased levels of functionality, but at the same time it has allowed for more possibilities of special effects and multimedia. These are two separate sides of the JavaScript coin, and it would be useful to have the former without being required to witness the latter. One example I frequently use is Google Maps, which runs fine with JavaScript on my low-powered surfing laptop - until you change zoom levels - then it takes over a minute to interpolate to the next zoom level. Whilst this is probably down to bad coding (a simple setTimeout for 2 seconds hence could force the interpolation effect to stop), it's a shame that this one effect brings the entire web application of Google Maps into an unusable state, and personally I don't think that the fact that my laptop doesn't have a GPU should mean I'm relegated to using the NOSCRIPT version of a site, for the sake of one flashy, un-needed effect. AFAIK there is a light version of Google Maps that uses little or no JavaScript, but apart from the transition effect the full version runs fine.
Which brings me back to the proposal: if there were an AllowTransitions boolean that developers could check, then they would know what experience to present the user with:

function Zoom_In() {
    if (window.UserPreferences.AllowTransitions)
        Interpolate_To_Zoom(++zoom);
    else
        Jump_To_Zoom(++zoom);
}

This would still allow me to use the JavaScript map application on my low-powered machine without resorting to a no-script-at-all version. Another aspect is rich content: if I'm surfing whilst listening to MP3s I might not want to be interrupted by sounds or videos playing and might want to turn web sounds off, or maybe I'm watching a movie on the train but don't want my web mail to audibly alert me to a new mail message. Instead I could have a global volume control (or per tab...) in my browser, rather than the current situation where you have to set it for each and every Flash applet on each and every web site. Also I might need roaming profiles: if I'm connected via WiFi I might be happy to have videos playing, but if I'm out in the countryside and I have a limited/expensive GPRS data plan I don't want videos to suck up all my bandwidth and money - if the browser could itself switch between high- and low-bandwidth profiles then this would be a smoother user experience than again having to bookmark a site's full and lite pages separately, or have the site try to second-guess my desires through the capabilities my user agent string suggests. Example properties might be: MaxStreamRate (in Kb per second - with a popup warning the user if they attempt to play something wider), AutoPlayVideo (if false then video content should never start playing without a direct click on a play button), AutoPlayAudio (as above), AudioVolume (0 = mute, 99 = full). Note that the last two do not crosstalk - AutoPlayAudio may still be true if AudioVolume is 0.
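A page consuming the proposed object would need to degrade gracefully in browsers that lack it. A hedged sketch follows: window.UserPreferences and the property names come from the proposal above and are not implemented anywhere, and the helper function is invented for illustration.

```javascript
// Read a setting from the hypothetical window.UserPreferences object,
// falling back to a default when the object (or the property) is absent.
function getPreference(name, fallback) {
  var prefs = (typeof window !== 'undefined' && window.UserPreferences) || {};
  return (name in prefs) ? prefs[name] : fallback;
}

// e.g. default to playing transitions unless the user has opted out:
var useTransitions = getPreference('AllowTransitions', true);
```

With a guard like this, the Zoom_In example above keeps working even before any browser ships the feature.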
At the moment the user is at the whim of site builders as to how to turn features on and off, stop and start audio, control volume or switch between lite and full versions of sites. These sometimes need the user to be logged in and/or cookies to be remembered between sessions, or instead the web host will simply attempt to dictate the version of a site you get depending on your user agent string. Currently most browsers allow images to be turned on and off very easily (albeit usually buried deep in a menu tree). By centralising what it is the user wants to happen, we can make the web a much more pleasant and consistent experience, and one that is ready for users who may literally walk from a high-bandwidth, high-availability connection to a low-bandwidth one whilst surfing - if their PDA or laptop could hook into their connection settings and see that the connection has switched, wouldn't it be great if it could automatically tone down the richness of the web experience without the user having to lift a finger... Revised Proposal http://lists.w3.org/Archives/Public/public-html-comments/2008Jul/.html Original Proposal http://lists.w3.org/Archives/Public/public-html-comments/2008Apr/0003.html Ric Hardacre (MCAD, MCP, HTML, CSS, JS hacker since '95) cyclomedia.co.uk
Re: [whatwg] code attributes
On Thu, Jun 4, 2009 at 6:24 PM, Ian Hickson i...@hixie.ch wrote: On Tue, 28 Apr 2009, Jacob Rask wrote: has there ever been any discussion on adding an attribute to the code element to specify the programming language in the markup? If so, what was the conclusion? I didn't find anything in the list archives. If not, I believe it would be a very good idea. Browsers could for instance have default color coding for different languages, open selected code/text in an editor associated with that language, etc... On Tue, 28 Apr 2009, Nils Dagsson Moskopp wrote: We would need a controlled vocabulary, of course. On Tue, 28 Apr 2009, Michael A. Puls II wrote: http://www.whatwg.org/specs/web-apps/current-work/multipage/text-level-semantics.html#the-code-element For example: <code class=language-python></code> This has been discussed before, but the basic answer is "use the class attribute", at least for now. I would recommend, if this is a common enough problem, that a group of people get together and define a common set of class attribute values for the code element, in the style of a Microformat. That way, we can build a common vocabulary that can then be made more formal in the next version of HTML. Is there a reason you encourage class values rather than microdata here? As I understood it, one of the things microdata was trying to avoid was using the class attribute, since there was concern that it would collide with user values. / Jonas
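The "language-*" class convention in the spec's example is easy to consume. A hedged sketch: only the "language-" prefix comes from the spec example; the function name and the first-match rule are assumptions of this illustration.

```javascript
// Scan a class attribute value for the first token with a "language-"
// prefix and return the part after the prefix, or null if none is found.
function codeLanguage(classAttr) {
  var tokens = String(classAttr).split(/\s+/);
  for (var i = 0; i < tokens.length; i++) {
    if (tokens[i].indexOf('language-') === 0) {
      return tokens[i].slice('language-'.length);
    }
  }
  return null;
}
```

A syntax highlighter could call this with an element's className to pick a grammar, e.g. "python" from class=language-python.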
Re: [whatwg] Asynchronous file upload
On Tue, Jun 9, 2009 at 12:48 AM, Anne van Kesteren ann...@opera.com wrote: On Tue, 09 Jun 2009 03:33:56 +0200, Ian Hickson i...@hixie.ch wrote: On Mon, 11 May 2009, Samuel Santos wrote: I was asked by a client if it was possible to implement something similar to the asynchronous file upload used on gmail using only standard web technologies. Looking at the gmail source code I can see that they use some flash magic. And by reading the HTML5 spec I could not find a way to implement this feature. This is a feature of XHR2, I believe: http://dev.w3.org/2006/webapi/XMLHttpRequest-2/ ...though it will probably rely on the upcoming File API to actually obtain files to upload. And as such is not defined yet at all, but that is pretty much the plan, yes. Does the planned API allow for the composition of multipart encoded posts including binary file parts? So not just sending the binary file data in isolation. Such that the caller can use some File API to obtain references to files, and then stitch together a blob of data to upload, including the file data, that looks just like what would be sent via a Form post, and then to have XHR2 send it. Fyi, the latest release of Gears lets you do that with the combination of Desktop.openFiles(), BlobBuilder, and HttpRequest.send(blob)... of course non-standard. -- Anne van Kesteren http://annevankesteren.nl/
Re: [whatwg] Asynchronous file upload
On Tue, 09 Jun 2009 20:37:04 +0200, Michael Nordman micha...@google.com wrote: Does the planned API allow for the composition of multipart encoded posts including binary file parts? So not just sending the binary file data in isolation. Such that the caller can use some File API to obtain references to files, and then stitch together a blob of data to upload, including the file data, that looks just like what would be sent via a Form post, and then to have XHR2 send it. Fyi, the latest release of Gears lets you do that with the combination of Desktop.openFiles(), BlobBuilder, and HttpRequest.send(blob)... of course non-standard. The discussion for these features should really happen on public-weba...@w3.org. Having said that, the file API is still in the works. For XMLHttpRequest all I expect to change is adding new objects that can be passed to the send() method as argument and define how they are to be serialized and maybe how they affect the Content-Type header. -- Anne van Kesteren http://annevankesteren.nl/
Re: [whatwg] Superset encodings [Re: ISO-8859-* and the C1 control range]
On 3 June 2009, at 23:19, Ian Hickson wrote: On Tue, 14 Apr 2009, Øistein E. Andersen wrote: HTML5 currently contains a table of encoding aliases, [...] GB2312 and GB_2312-80 technically refer to the *character set* GB 2312-80, [...]. GBK, on the other hand, is an encoding. [...] There is a large number of unregistered charset strings, however, and the other mappings in this table are between encodings. Unless x-x-big5 is actually supposed to refer to an encoding distinct from Big5, [this mapping] should be removed. [...] I believe you misunderstand the purpose of this table. The idea is to give a mapping of _labels_ to encodings, not encodings to encodings. I've clarified the text to this effect. You seem to have added "specified by a label" to the phrase, which now reads "an encoding specified by a label given in the first column of the following table", without changing the column heading ("Input encoding") and without defining what a label actually is. The reference to encoding aliasing is also intact, which seems misleading if the table is not supposed to map between encodings. The concept of "misinterpret[ation] for compatibility" seems inappropriate for the mapping from x-x-big5 to Big5 unless the label x-x-big5 is actually supposed to specify an encoding distinct from Big5. It is not at all clear to me what you mean by "label". It might be the MIME charset string with which the HTML document is labelled, but that would require an inordinate number of strings to be specified (e.g., iso-ir-100, latin1 and IBM819 amongst others alongside ISO-8859-1), so this cannot possibly be the intended meaning.
It might be a normalised form of the MIME charset string, using the IANA charset registry to map an alias to its corresponding name (or to the alias qualified as "preferred MIME name" if there is such an entry), but that does not quite seem to work either, since aliases not registered in the IANA charset registry would then not be covered by the aliasing mechanism (e.g., it would cause content labelled as x-sjis to be handled as unaugmented Shift_JIS despite the mapping from Shift_JIS to Windows-31J, since x-sjis does not and cannot figure in the IANA charset registry). I did indeed believe that the table was supposed to map between encodings, and this interpretation still seems to give the correct result in practice for non-CJK encodings (unless, of course, content labelled TIS-620-2533 should actually be interpreted as TIS-620 rather than windows-874). On 9 June 2009, at 10:55, Anne van Kesteren wrote: On Tue, 09 Jun 2009 01:42:57 +0200, Øistein E. Andersen wrote: Shift-JIS and Windows-932 are commonly used names/labels for the encodings that are registered as Shift_JIS and Windows-31J (respectively) in the IANA charset registry. [...] So should HTML5 mention that Windows-932 maps to Windows-31J? (It does not appear in the IANA registry.) That is an interesting question. My (apparently wrong) understanding was that the table was merely supposed to provide mappings between encodings, since such mappings are inappropriate in non-HTML contexts and cannot be added to the IANA registry. It might be useful to include a set of MIME charset strings which cannot be or have not yet been registered (e.g., x-x-big5, x-sjis, windows-932) as well as information on how CJK character sets are implemented in practice, both of which seem to be necessary for compatibility. Such information does not fit comfortably in the current table, though. -- Øistein E. Andersen
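One way to read the clarified spec text is as a flat label table consulted directly, bypassing IANA alias resolution. A hedged sketch of that reading; the three entries are only the labels discussed in this thread (the real table would be much longer), matched case-insensitively as charset labels conventionally are, with unknown labels passed through unchanged.

```javascript
// Hypothetical label-to-encoding table, per the "labels, not encodings"
// reading of the spec. Entries taken from the examples in this thread.
var labelToEncoding = {
  'x-x-big5': 'Big5',
  'shift_jis': 'Windows-31J',
  'tis-620-2533': 'windows-874'
};

function resolveLabel(label) {
  var key = String(label).trim().toLowerCase();
  return labelToEncoding.hasOwnProperty(key) ? labelToEncoding[key] : label;
}
```

Under this reading, Øistein's x-sjis concern is just a missing row: the table only covers labels someone remembered to list, not IANA aliases in general.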
Re: [whatwg] DOMTokenList is unordered but yet requires sorting
On Wed, 13 May 2009, Erik Arvidsson wrote: Section 2.9.3 DOMTokenList says: The DOMTokenList interface represents an interface to an underlying string that consists of an *unordered* set of unique space-separated tokens. Yet, the item method says: The item(index) method must split the underlying string on spaces, *sort the resulting list of tokens by Unicode code point*, remove exact duplicates, and then return the indexth item in this list. If index is equal to or greater than the number of tokens, then the method must return null. What is the reason for requiring the set to be ordered in item? Ensuring consistency between browsers, to reduce the likelihood that any particular browser's ordering becomes important and then forcing that browser's ordering (which could be some arbitrary ordering dependent on some particular hash function, say) into the platform de facto. This is similar to what happened to ES property names -- they were supposedly unordered, UAs were allowed to sort them however they liked, and now we are locked in to a particular order. If we still want to enforce that item returns the items in the sorted order we should change the spec to say that DOMTokenList represents an ordered set instead. The spec doesn't say that DOMTokenList represents an unordered set; it says that the underlying attribute represents an unordered set. On Mon, 18 May 2009, Erik Arvidsson wrote: Simon, I think you have convinced me at least. I therefore think that a better wording in the spec is to say that DOMTokenList acts as a sorted set. I've added a note to this effect. -- Ian Hickson U+1047E)\._.,--,'``.fL http://ln.hixie.ch/ U+263A/, _.. \ _\ ;`._ ,. Things that are impossible just take longer. `._.-(,_..'--(,_..'`-.;.'
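The item() algorithm quoted above is short enough to transcribe directly. A sketch, not any browser's code: it splits on whitespace generally rather than the space characters the spec enumerates, and JavaScript's default sort compares UTF-16 code units, which matches code-point order for the BMP tokens typically found in class attributes.

```javascript
// item(index) per the quoted spec text: split the underlying string,
// sort by code point, remove exact duplicates, then index; indices at or
// past the number of tokens yield null.
function domTokenListItem(underlying, index) {
  var tokens = underlying.split(/\s+/).filter(function (t) { return t.length > 0; });
  tokens.sort();
  var unique = tokens.filter(function (t, i) { return i === 0 || t !== tokens[i - 1]; });
  return index < unique.length ? unique[index] : null;
}
```

So for an underlying string "b a c b", item(0) is "a" regardless of the attribute's original token order, which is exactly the behaviour the follow-ups below object to.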
Re: [whatwg] DOMTokenList is unordered but yet requires sorting
Ensuring consistency between browsers, to reduce the likelihood that any particular browser's ordering becomes important and then forcing that browser's ordering (which could be some arbitrary ordering dependent on some particular hash function, say) into the platform de facto. This is similar to what happened to ES property names -- they were supposedly unordered, UAs were allowed to sort them however they liked, and now we are locked in to a particular order. I strongly think the order should not be sorted, but should reflect the order of the tokens in the original string which was broken down into tokens. It would also make implementations much simpler and saner, and would spare extra CPU cycles by avoiding the sort operations.
Re: [whatwg] A Selector-based metadata proposal (was: Annotating structured data that HTML has no semantics for)
On Thu, 14 May 2009, Eduard Pascual wrote: I have put online a document that describes my idea/proposal for a selector-based solution to metadata. The document can be found at http://herenvardo.googlepages.com/CRDF.pdf Feel free to copy and/or link the file wherever you deem appropriate. Needless to say, feedback and constructive criticism on the proposal is always welcome. (Note: if discussion about this proposal should take place somewhere else, please let me know.) This proposal is very similar to RDF EASE. While I sympathise with the goal of making semantic extraction easier, I feel this approach has several fundamental problems which make it inappropriate for the specific use cases that were brought up and which resulted in the microdata proposal: * It separates (by design) the semantics from the data with those semantics. I think this is a level of indirection too far -- when something is a heading, it should _be_ a heading; it shouldn't be labeled opaquely, with a transformation sheet elsewhere defining that it maps to the heading semantic. * It is even more brittle in the face of copy-and-paste and regular maintenance than, say, namespace prefixes. It is very easy to forget to copy the semantic transformation rules. It is very easy to edit the document such that the selectors no longer match what they used to match. It's not at all obvious from looking at the page that there are semantics there. * It relies on selectors to do something subtle. Authors have a great deal of trouble understanding selectors -- if you watch a typical Web author writing CSS, he will either use just class selectors, or he will write selectors by trial and error until he gets the style he wants. This isn't fatal for CSS because you can see the results right there; for something as subtle as semantic data mining, it is extremely likely that authors will make mistakes that turn their data into garbage, which would make the feature impractical for large-scale use.
I say this despite really wanting Selectors to succeed (disclosure: I'm one of the editors of the Selectors specification and spent years working on its test suite). I think CRDF has a bright future in doing the kind of thing GRDDL does, and in extracting data from pages that were written by authors who did not want to provide semantic data (i.e. screen scraping). It's an interesting way of converting, say, Microformats to RDF. Having said that, I do agree that the repetition that microdata requires in common scenarios with blocks of repeated data is unfortunate. It is worse than the repetition one has just from the basic HTML markup. e.g. this:

<table>
 <tr> <td>Hedral <td>Black
 <tr> <td>Pillar <td>White
</table>

...becomes this:

<table>
 <tr item> <td itemprop=name>Hedral <td itemprop=color>Black
 <tr item> <td itemprop=name>Pillar <td itemprop=color>White
</table>

...or even:

<table>
 <tr item=com.example.cat> <td itemprop=com.example.name>Hedral <td itemprop=com.example.color>Black
 <tr item> <td itemprop=com.example.name>Pillar <td itemprop=com.example.color>White
</table>

...which is far more verbose than ideal. I considered special-casing tables (using <col itemprop> to set itemprop= for all cells in a column), but it would require quite a lot of complexity in processors, since they'd additionally have to implement the table model, and having seen the quality of some of the implementations of metadata extractors used on Web content, I fear that that will be far too much complexity. (I fear even subject= might already be too much.) The simpler we make it, the more reliable it will be. It also wouldn't solve the problem with other patterns, e.g. <dl> (which approaches like CRDF's handle fine). I don't have a good answer for the repetition problem. -- Ian Hickson U+1047E)\._.,--,'``.fL http://ln.hixie.ch/ U+263A/, _.. \ _\ ;`._ ,. Things that are impossible just take longer. `._.-(,_..'--(,_..'`-.;.'
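[Editor's note: to make concrete why each cell must repeat its itemprop=, here is a toy sketch of what a consumer does with the annotations. This is not the spec's actual extraction algorithm; the node shape and function name are invented for illustration.]

```javascript
// Toy microdata-style extraction: walk a tree, start a new item at each
// element with an `item` attribute, and record each `itemprop` value on
// the current item. Every cell carries its own itemprop -- nothing is
// inherited from the column.
function extractItems(node, items = [], current = null) {
  if (node.attrs && "item" in node.attrs) {
    current = {};              // a new top-level item begins here
    items.push(current);
  }
  if (node.attrs && node.attrs.itemprop && current) {
    current[node.attrs.itemprop] = node.text || "";
  }
  for (const child of node.children || []) {
    extractItems(child, items, current);
  }
  return items;
}

// The table from the e-mail, as a toy tree:
const table = { children: [
  { attrs: { item: "" }, children: [
    { attrs: { itemprop: "name" }, text: "Hedral" },
    { attrs: { itemprop: "color" }, text: "Black" } ] },
  { attrs: { item: "" }, children: [
    { attrs: { itemprop: "name" }, text: "Pillar" },
    { attrs: { itemprop: "color" }, text: "White" } ] } ] };

console.log(extractItems(table));
// → [ { name: 'Hedral', color: 'Black' }, { name: 'Pillar', color: 'White' } ]
```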
Re: [whatwg] DOMTokenList is unordered but yet requires sorting
I was about to follow up on this. Requiring sorting, which is O(n log n), for something that can be done in O(n) makes things slower without any real benefit. Like João said, the order should be defined as the order of the class content attribute. On Tue, Jun 9, 2009 at 16:00, João Eiras jo...@opera.com wrote: Ensuring consistency between browsers, to reduce the likelihood that any particular browser's ordering becomes important and then forcing that browser's ordering (which could be some arbitrary ordering dependent on some particular hash function, say) into the platform de facto. This is similar to what happened to ES property names -- they were supposedly unordered, UAs were allowed to sort them however they liked, and now we are locked in to a particular order. I strongly think the order should not be sorted, but should reflect the order of the tokens in the original string that was broken down into tokens. It would also make implementations much simpler and saner, and would spare extra CPU cycles by avoiding the sort operations. -- erik
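[Editor's note: the order-preserving behaviour being argued for is cheap to implement. A minimal sketch (not any browser's actual DOMTokenList code) of splitting a class attribute into an ordered, de-duplicated token list:]

```javascript
// Tokenize an attribute value on HTML whitespace characters, keeping
// tokens in source order and dropping later duplicates -- a single O(n)
// pass, no sort needed.
function orderedTokens(attr) {
  const seen = new Set();
  const out = [];
  for (const tok of attr.split(/[\t\n\f\r ]+/)) {
    if (tok !== "" && !seen.has(tok)) {
      seen.add(tok);
      out.push(tok);
    }
  }
  return out;
}

console.log(orderedTokens(" b a  b c "));  // → [ 'b', 'a', 'c' ]
```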
[whatwg] Limit on number of parallel Workers.
Hi WHATWG! In Chromium, workers are going to have their separate processes, at least for now. So we quickly found that while(true) foo = new Worker(...) quickly consumes the OS resources :-) In fact, this will kill other browsers too, and on some systems the unbounded number of threads will effectively freeze the system beyond the browser. We are thinking about how to reasonably place limits on the resources consumed by a 'sea of workers'. Obviously, one could just limit the maximum number of parallel workers available to a page, domain or browser. But what do you do when the limit is reached? The Worker() constructor could return null or throw an exception. However, that seems to go against the spirit of the spec, since it usually does not deal with resource constraints. So it makes sense to look for the most sensible implementation that tries its best to behave. The current idea is to let pages create as many Worker objects as requested, but not necessarily start them right away, so no resources are allocated except the thin JS wrapper. As workers terminate and their number drops below the limit, more workers from the ready queue can be started. This allows supporting implementation limits without exposing them. This is similar to how a 'sea of XHRs' would behave. The test page at http://www.figushki.com/test/xhr/xhr1.html creates 10,000 async XHR requests to distinct URLs and then waits for all of them to complete. While it's obviously impossible to have 10K http connections in parallel, all XHRs will be completed, given time. Does it sound like a good way to avoid the resource crunch due to a high number of workers? Thanks, Dmitry
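[Editor's note: the ready-queue idea can be sketched independently of any particular browser. The class and method names below are invented, and `startFn` stands in for the platform actually allocating a thread or process:]

```javascript
// Construction always succeeds and is cheap (just the JS wrapper);
// at most `limit` workers actually run. The rest wait in a queue and
// are started as running workers terminate, so the implementation
// limit is never observable as a constructor failure.
class WorkerQueue {
  constructor(limit) {
    this.limit = limit;
    this.running = 0;
    this.pending = [];
  }
  create(startFn) {
    if (this.running < this.limit) {
      this.running++;
      startFn();                   // allocate real resources now
    } else {
      this.pending.push(startFn);  // defer; no OS resources yet
    }
  }
  terminated() {                   // a running worker has exited
    this.running--;
    const next = this.pending.shift();
    if (next) {
      this.running++;
      next();
    }
  }
}

const pool = new WorkerQueue(2);
let started = 0;
for (let i = 0; i < 3; i++) pool.create(() => started++);
console.log(started);  // → 2  (the third worker is queued)
pool.terminated();
console.log(started);  // → 3
```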
Re: [whatwg] Limit on number of parallel Workers.
I believe that it will be difficult to have such a limit, as sites may rely on GC to collect Workers that are no longer running (so the number of running threads is non-deterministic), and in the context of mixed-source content (mash-ups) it will be difficult for any content source to be sure it isn't going to contribute to that limit. Obviously a UA shouldn't crash, but I believe that it is up to the UA to determine how to achieve this -- e.g. an implementation with a 1:1 relationship between workers and processes will have a much lower limit than an implementation with a worker-per-thread model, or an m:n relationship between workers and threads/processes. Limiting the specification simply because one implementation mechanism has certain limits, when there are many alternative implementation models, seems like a bad idea. I believe if there are going to be any worker-related limits, they should realistically be a lower limit on the number of workers rather than an upper. --Oliver On Jun 9, 2009, at 6:13 PM, Dmitry Titov wrote: Hi WHATWG! In Chromium, workers are going to have their separate processes, at least for now. So we quickly found that while(true) foo = new Worker(...) quickly consumes the OS resources :-) [snip]
Re: [whatwg] Limit on number of parallel Workers.
On Tue, Jun 9, 2009 at 6:13 PM, Dmitry Titov dim...@chromium.org wrote: Hi WHATWG! In Chromium, workers are going to have their separate processes, at least for now. So we quickly found that while(true) foo = new Worker(...) quickly consumes the OS resources :-) [snip] Does it sound like a good way to avoid the resource crunch due to a high number of workers? This is the solution that Firefox 3.5 uses. We use a pool of relatively few OS threads (5 or so, iirc). This pool is then scheduled to run worker tasks as they are scheduled. So for example, if you create 1000 worker objects, those 5 threads will take turns executing the initial scripts, one at a time.
If you then send a message using postMessage to 500 of those workers, and the other 500 call setTimeout in their initial script, the same threads will take turns running those 1000 tasks (500 message events and 500 timer callbacks). This is somewhat simplified, and things are a little more complicated due to how we handle synchronous network loads (during which we freeze an OS thread and remove it from the pool), but the above is the basic idea. / Jonas
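[Editor's note: the m:n scheme Jonas describes can be modeled abstractly -- many workers' pending tasks (initial scripts, message events, timer callbacks) share a handful of pool threads. This toy simulation only captures the scheduling order, not real threading; the function and names are invented:]

```javascript
// Round-robin a queue of tasks over `poolSize` simulated threads.
// Each task runs to completion on whichever thread picks it up, so
// 1000 workers never need more than `poolSize` OS threads.
function runPool(tasks, poolSize) {
  const queue = [...tasks];
  const log = [];
  for (let t = 0; queue.length > 0; t = (t + 1) % poolSize) {
    const task = queue.shift();
    log.push(`thread${t}:${task.name}`);
    task.run();
  }
  return log;
}

const mk = name => ({ name, run: () => {} });
console.log(runPool(["init-a", "init-b", "msg-a"].map(mk), 2));
// → [ 'thread0:init-a', 'thread1:init-b', 'thread0:msg-a' ]
```

The starvation concern raised elsewhere in the thread is also visible in this model: a `run()` that never returns would pin its simulated thread forever.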
Re: [whatwg] Limit on number of parallel Workers.
On Tue, Jun 9, 2009 at 6:28 PM, Oliver Hunt oli...@apple.com wrote: I believe that it will be difficult to have such a limit, as sites may rely on GC to collect Workers that are no longer running (so the number of running threads is non-deterministic), and in the context of mixed-source content (mash-ups) it will be difficult for any content source to be sure it isn't going to contribute to that limit. Obviously a UA shouldn't crash, but I believe that it is up to the UA to determine how to achieve this -- e.g. a 1:1 relationship between workers and processes will have a much lower limit than a worker-per-thread model, or an m:n relationship between workers and threads/processes. Limiting the specification simply because one implementation mechanism has certain limits, when there are many alternative implementation models, seems like a bad idea. Where in his email does Dmitry advocate upper limits? I believe if there are going to be any worker-related limits, they should realistically be a lower limit on the number of workers rather than an upper. Perhaps lower limits on how many workers are 'guaranteed' to be available would be good, but that's fairly orthogonal to the original email. What he's proposing is a way to gracefully rate-limit the number of workers rather than having the OS running out of resources rate-limit it. I for one like the proposal and the analogy to what happens when you issue 10,000 XHRs at once. J On Jun 9, 2009, at 6:13 PM, Dmitry Titov wrote: Hi WHATWG! In Chromium, workers are going to have their separate processes, at least for now. [snip]
Re: [whatwg] Limit on number of parallel Workers.
This is a bit of an aside, but section 4.5 of the Web Workers spec no longer makes any guarantees regarding GC of workers. I would expect user agents to make some kind of best effort to detect unreachability in the simplest cases, but supporting MessagePorts and SharedWorkers makes authoritatively determining worker reachability exceedingly difficult except in simpler cases (DedicatedWorkers with no MessagePorts or nested workers, for example). It seems like we should be encouraging developers to call WorkerGlobalScope.close() when they are done with their workers, which in the case below makes the number of running threads less non-deterministic. Back on topic, I believe what Dmitry was suggesting was not that we specify a specific limit in the specification, but rather that we have some sort of general agreement on how a UA might handle limits (what it should do when the limit is reached). His suggestion of delaying the startup of the worker seems like a better solution than other approaches like throwing an exception from the Worker constructor. -atw On Tue, Jun 9, 2009 at 6:28 PM, Oliver Hunt oli...@apple.com wrote: I believe that it will be difficult to have such a limit, as sites may rely on GC to collect Workers that are no longer running [snip]
Re: [whatwg] Limit on number of parallel Workers.
It occurs to me that my statement was a bit stronger than I intended - the spec *does* indeed make guarantees regarding GC of workers, but they are fairly loose and typically tied to the parent Document becoming inactive. -atw On Tue, Jun 9, 2009 at 6:42 PM, Drew Wilson atwil...@google.com wrote: This is a bit of an aside, but section 4.5 of the Web Workers spec no longer makes any guarantees regarding GC of workers. [snip]
Re: [whatwg] Limit on number of parallel Workers.
This is the solution that Firefox 3.5 uses. We use a pool of relatively few OS threads (5 or so, iirc). This pool is then scheduled to run worker tasks as they are scheduled. [snip] / Jonas That's a really good model. Scalable and degrades nicely. The only problem is with very long-running operations where a worker script doesn't return in a timely fashion. If enough of them do that, all the others starve. What does FF do about that, or in practice do you anticipate that not being an issue? WebKit dedicates an OS thread per worker. Chrome goes even further (for now at least) with a process per worker. The 1:1 mapping is probably overkill, as most workers will probably spend most of their life asleep, just waiting for a message.
Re: [whatwg] File package protocol and manifest support?
On Mon, 18 May 2009, Brett Zamir wrote: While this may be too far in the game to bring up, I'd very much be interested (and think others would be too) to have a standard means of representing not only individual files, but also groups of files on the web. This seems reasonable, but I think this is the wrong venue to pursue such a proposal. I recommend proposing this to the IETF. -- Ian Hickson
Re: [whatwg] DOM3 Load and Save for simple parsing/serialization?
On Mon, 18 May 2009, Brett Zamir wrote: Has any thought been given to standardizing on at least a part of DOM Level 3 Load and Save in HTML5? DOM3 Load and Save is already standardised as far as I can tell. I don't see why HTML5 would have to say anything about it. -- Ian Hickson
[whatwg] Feedback
For some reason tonight I decided to check on what's coming up in the new HTML5, and I have some questions, queries and concerns. I hope I'm not being too redundant with other comments:

1) I've used frames in many web pages, and I see this is being dropped. I typically have a selection frame and a result frame, so links clicked on in the 1st frame show up in the second frame. I then never have to worry about managing what's in each frame. For many pages I can likely use a block element like a DIV, but my ISP has size limitations and I have spread my pages onto several sites. I have no problem switching to something else, but I didn't see anything in the specs except opening a new window to accomplish this. If something else is being used, how will this be compatible with older browsers?

2) I am perhaps one of the few I know to use XForms, and I am excited about being able to have similar capabilities in all browsers. The implementation image I saw looked somewhat different and didn't really describe what's new, changed or obsolete. Personally I want the same capabilities as XForms - being able to save locally, to FTP, or to a URL - and this wasn't really identified. I don't mind having to make changes, I just want it to work. Still on XForms, additional functionality I would like, which I think you may have dealt with, is being able to reformat/reorder the data via CSS or a datagrid into a format the user wishes the data to be viewed in. Obviously this may be defined via code, but I'm hoping the WebForms implementation will allow for things such as sortable columns, re-orderable columns, hide/show columns... I don't know if the subject of data binding has ever come up. I like the data binding in IE, but other browsers don't support this ability, so I have to use binding in IE and XForms for Firefox. I would really benefit from being able to use the same code for both. I did notice a Local Storage component, which I hope some consistent client call can be made to POST or sync to a URL...

3) XForms or not, I hope anything displayable can be formatted appropriately using CSS. There seem to be many browser-specific formatting settings; is there a way to consolidate these with this release to eliminate or reduce browser-specific CSS settings?

4) Failing #3, it would then be nice to have some type of IF statement within CSS so additional CSS can be included or excluded for non-compliant browsers... Even down the road, the ability to include/exclude imports based on browser capabilities could benefit many. Unless defined, browser builders will continue to build their own settings. I'm sure this is out of your control, but perhaps an IF isn't. I hate the idea of having to create a different presentation based on the browser, but how does one ever ensure someone's browser is compatible or that the content is displayed appropriately?

5) On the CSS, I'm sure builders/browser developers would love an XML format. If there are no CSS format changes, perhaps this can be identified as a future enhancement/direction. CSS seems to be a real oddball format compared to everything else.

6) I did see some comment about user-defined variables in the FAQ. I see no reason why, if I embed something called MIKE in an HTML file, the CSS attributes shouldn't handle whatever needs to be displayed in whatever format. No CSS = no display, the same as it works now.

I hope I didn't ask too many questions in one email. I also prefer e-mail, as I don't check wikis for responses every day; this is an at-home project, and even at work I would have less time. Thanks for creating this option others haven't. Mike
Re: [whatwg] Limit on number of parallel Workers.
On Tue, Jun 9, 2009 at 7:07 PM, Michael Nordman micha...@google.com wrote: That's a really good model. Scalable and degrades nicely. [snip] WebKit dedicates an OS thread per worker. Chrome goes even further (for now at least) with a process per worker. The 1:1 mapping is probably overkill as most workers will probably spend most of their life asleep just waiting for a message. Indeed, it seems FF has a pretty good solution for this (at least for the non-multiprocess case). 1:1 does not scale well in the case of threads, and especially in the case of processes. Here is a page that can create a variable number of workers to observe the effects: http://figushki.com/test/workers/workers.html - the curious can run it in FF3.5, in Safari 4, or in Chromium with the '--enable-web-workers' flag. Don't click the 'add 1000' button in Safari 4 or Chromium if you are not prepared to kill the unresponsive browser while the whole system gets half-frozen. FF continues to work just fine - well done, guys :-) Dmitry
Re: [whatwg] A new attribute for video and low-power devices
On Mon, 18 May 2009, Benjamin M. Schwartz wrote: As I have mentioned earlier, there are some devices that will be unable to render video faithfully inline, due to the limitations of hardware video accelerators. However, it occurs to me that there are two essentially different uses for video: 1. Important content for the webpage. An example would be the central video on a web page whose purpose is to allow users to view that video. This is currently done principally using Adobe Flash and (to a lesser extent) object tags. 2. Incidental animations. Examples include decorative elements in a web page's interface, animated sidebar advertisements, and other small page elements of this kind. This was historically a popular use for animated GIFs, though Flash has largely overtaken it here as well. In case 1, a browser on a low-powered device may show the video full-screen or in an independent resizable window (to quote the spec). The browser might also show the video at the specified size, but on top of the page, rather than at its correct location in the middle of the rendering stack. However, for case 2, showing the video full-screen or moving it to the top of the rendering stack would clearly be a bad idea, as the video does not contain the content of interest to the user. In this case, if browsers cannot display the video as specified, they should probably fall back to the poster image. With the current tag definition, browsers will have to grow ugly heuristics for this case, based on the video's size, aspect ratio, loop, and controls. To avoid this heuristic hack, I suggest that video gain an additional attribute to indicate which behavior is preferable. A boolean attribute like decorative, incidental, or significant would greatly assist browsers in determining the correct behavior. On Mon, 18 May 2009, Benjamin M. Schwartz wrote: Consider a webpage in which a side-effect of clicking on some scripted button is to trigger a small animation (using video) elsewhere on the page.
If your browser is configured to show video full-screen, this webpage will become nearly unusable, because the small animation will take over the screen every time you click on a button. I wouldn't expect the user agent to automatically switch to fullscreen playback immediately. I would expect the user agent to require the user to invoke full-screen mode manually. I am proposing an additional attribute for video so that the browser will know not to do that. On Mon, 18 May 2009, Simon Pieters wrote: I'm not convinced that an additional attribute would solve the problem: it is likely that some authors would use the attribute incorrectly, because it doesn't have any effect in their primary testing environment. If an author sets the attribute where it shouldn't be set, it effectively makes the video unavailable to users whose UA acts upon the attribute, which seems bad. I think a more effective solution is to give a non-modal message to the user saying This page is trying to play a video. Press the Foo key to play., or similar. On Mon, 18 May 2009, Benjamin M. Schwartz wrote: Then I will attempt to convince you. Suppose the additional attribute is a boolean called decorative, defaulting to false if not present. Authors who are only testing on modern desktops will, as you say, likely ignore this issue. I therefore fully expect that they will never set this attribute. If the attribute is not set, then most browsers should assume that the video may be of some significance, and ensure that the user can play it. I think the risk of authors accidentally setting decorative on critical videos is small. I also think that if a popular mobile browsing platform were to respect this flag, major websites would use it correctly and user experience would be improved. On Mon, 18 May 2009, Aryeh Gregor wrote: Isn't that like saying that authors who are only testing on normal browsers will likely ignore the longdesc= attribute? 
It seems like most authors do just ignore it, but the ones who don't get it wrong far more often than they get it right. In the ~0.1% of images where longdesc= is used, it's misused literally over 99% of the time: http://blog.whatwg.org/the-longdesc-lottery It thus ends up being so useless for users that even if you do provide a good longdesc, no one will actually use it. There's so little signal and so much noise that screenreader users just don't bother checking it, if they even know that it exists. It thus seems like it would be prudent to wait on implementation experience to see if a new attribute is actually needed here. Adding attributes that don't affect most users is a recipe for widespread misuse. In the worst case, browsers might very well refuse to support the attribute because it's come into wide misuse before any browser actually supports it, so