Re: [whatwg] isBrowserOnline vs navigator.onLine
Brad Neuberg wrote: "I just tested navigator.onLine in Firefox and it returned undefined. I used javascript:alert(navigator.onLine). It works in IE." That works for me in Firefox 1.5b2. "Is it supposed to work in Firefox? How does a user move into offline mode in that browser?" File > Work Offline. -- Lachlan Hunt http://lachy.id.au/
Re: [whatwg] Disclosure: Change of employer
Rimantas Liubertas wrote: "microsoft.com and search.msn.com are valid ... nobody-cares-about-web-standards style of reasoning. 'Nobody' used to mean Microsoft (web standards are irrelevant as long as M$ ignores them); now it appears to be Google." Although microsoft.com is valid, they don't really care about standards that much. The site uses an HTML 4.0 Transitional DOCTYPE that triggers quirks mode. -- Lachlan Hunt http://lachy.id.au/
Re: [whatwg] Re: getElementsByClassName
Kornel Lesinski wrote: On Tue, 06 Sep 2005 00:54:56 +0100, Ian Hickson [EMAIL PROTECTED] wrote: "You can just do: if (x) find.push('class1'); if (y) find.push('class2'); document.getElementsByClassName.apply(document, find); ...which seems much better to me than using a string." "It's the first time I've seen the apply method used. I couldn't find it in ECMA262-3 nor in WA1.0. Can you give me a hint as to where it's defined? Why is that better than using a string?" It's a method of Function(). http://developer.mozilla.org/en/docs/Core_JavaScript_1.5_Reference:Objects:Function:apply -- Lachlan Hunt http://lachy.id.au/
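[Editorial aside: a runnable sketch of the apply() technique quoted above. A plain function stands in for document.getElementsByClassName so the snippet runs outside a browser; the stand-in's name and behaviour are illustrative only.]

```javascript
// A plain function in place of document.getElementsByClassName, so the
// apply() mechanics can be demonstrated anywhere.
function listClassNames() {
  // The `arguments` object collects however many class names were passed.
  return Array.prototype.slice.call(arguments).join(", ");
}

// Build the argument list conditionally, then spread it with apply(),
// exactly as in Ian's example.
var find = [];
var x = true, y = true;
if (x) find.push("class1");
if (y) find.push("class2");

var result = listClassNames.apply(null, find);
// result === "class1, class2"
```

apply() takes a `this` value and an array of arguments, which is why it is defined on Function rather than in the DOM or WA1.0 specs.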
[whatwg] Sample DOMTokenString Implementation
Hi, I implemented a sample DOMTokenString() interface tonight [1]. Since String() is immutable in JS, I couldn't implement it as suggested in the current draft. So, instead, I've implemented it like this:

  interface DOMTokenString : DOMString {
    bool           has(in DOMString token);
    DOMTokenString add(in DOMString token);
    DOMTokenString remove(in DOMString token);
  }

The constructor accepts a single string as a parameter: new DOMTokenString(string). The string is split into tokens and stored in a private array within the object: var tokens = string.split(/\s/); That splits it on any whitespace character. The tokens are then rejoined into a string using a single space as the separator. This is similar to the way class works in HTML (at least, in Gecko), i.e. class="foo bar" is equivalent to class=" foo  bar ", and in Gecko, .className returns each token separated by a single space. e.g.

  var s = new DOMTokenString(" foo bar "); // returns "foo bar"

bool has();
* This searches the array for the first index of the specified token and returns true if found, false otherwise. e.g.
  s.has("bar");     // returns true
  s.has("foo bar"); // returns false

DOMTokenString add();
* This function returns a new DOMTokenString() created from the concatenation of the current string, a separator and the new token.
* It does not matter if the same token is already present; the new token is just appended to the end.
* If the token parameter is, itself, a space-separated list, it is (because of the way the new string is constructed) equivalent to adding each token individually. e.g.
  s = s.add("foo");      // returns "foo bar foo"
  s = s.add("baz quux"); // returns "foo bar foo baz quux"

DOMTokenString remove();
* This filters the tokens array, removing all occurrences of matching tokens. The new token array is then joined and returned. e.g.
  s = s.remove("foo");     // returns "bar baz quux"
  s = s.remove("bar baz"); // returns "bar baz quux" (i.e. no match)
  s = s.remove("baz");     // returns "bar quux"

[1] http://lachy.id.au/dev/script/examples/DOM/DOMTokenString.js -- Lachlan Hunt http://lachy.id.au/
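[Editorial aside: a minimal runnable sketch of the DOMTokenString behaviour described above, written as a constructor function since String() itself is immutable. Method semantics follow the message's examples: add() effectively splits a space-separated argument into tokens, while remove() matches its argument as a single exact token; the helper details are illustrative, not the linked implementation.]

```javascript
function DOMTokenString(str) {
  // Split on whitespace runs and drop empty strings, so " foo  bar "
  // normalises to the tokens ["foo", "bar"].
  this.tokens = String(str).split(/\s+/).filter(function (t) {
    return t.length > 0;
  });
}
DOMTokenString.prototype.toString = function () {
  return this.tokens.join(" ");
};
DOMTokenString.prototype.has = function (token) {
  return this.tokens.indexOf(token) !== -1;
};
DOMTokenString.prototype.add = function (token) {
  // Concatenate and re-split: duplicates are kept, and a space-separated
  // argument is equivalent to adding each of its tokens individually.
  return new DOMTokenString(this.toString() + " " + token);
};
DOMTokenString.prototype.remove = function (token) {
  // Drop every occurrence of the exact token. "bar baz" matches nothing,
  // because no single stored token can contain a space.
  return new DOMTokenString(this.tokens.filter(function (t) {
    return t !== token;
  }).join(" "));
};

var s = new DOMTokenString(" foo bar "); // "foo bar"
s = s.add("foo").add("baz quux");        // "foo bar foo baz quux"
s = s.remove("foo");                     // "bar baz quux"
s = s.remove("bar baz");                 // unchanged: no single token is "bar baz"
```

This mirrors each example in the message, including the quirk that add() and remove() treat a space-separated argument differently.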
Re: [whatwg] getElementsByClassName()
Jim Ley wrote: On 9/5/05, Lachlan Hunt [EMAIL PROTECTED] wrote: "No, as already demonstrated, #2 does return matches in some cases." "Surely that's just an implementation bug, rather than indicative of any underlying problem in the spec?" Yes, it was a bug, but I didn't think the spec was very clear on how to handle the issue. "The ElementClassName file: className = className.replace(/^\s*([^\s]*)\s*$/, "$1") doesn't enforce that the class names have no spaces in them, and results in it continuing to test the className attributes with a regexp containing the space. A quick, untested fix would, I think, be: className = className.match(/^\s*(\S+)\s*$/) ? className.replace(/^\s*(\S+)\s*$/, "$1") : "";" That seems to work well. "(also using \S rather than [^\s], but that's purely style of course)" Thanks, I didn't know about that syntax. "I think it is defined in the spec; it's erroneous, and your implementation is just broken as above. I'd quite like it to be defined as 3," Yes, I guess, if it is erroneous, then #3 does make the most sense. "mainly because a DOM binding with optional parameters isn't language independent, and if it's an ECMAScript-tied DOM, then the DOM needs to be a lot more ECMAScript-like." I may not be understanding what you mean, but if optional parameters aren't language independent, shouldn't it be defined in a more language independent way, so that any non-ECMAScript languages can still implement this? -- Lachlan Hunt http://lachy.id.au/
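[Editorial aside: the quoted fix, isolated as a runnable function. The function name is illustrative; the regexes are exactly the ones discussed above. A className argument that is not a single whitespace-free token maps to the empty string instead of surviving with its internal space intact.]

```javascript
// Jim's guarded version: trim a single token, otherwise reject.
function normalizeClassName(className) {
  return className.match(/^\s*(\S+)\s*$/)
    ? className.replace(/^\s*(\S+)\s*$/, "$1")
    : "";
}

normalizeClassName("  foo  "); // "foo"  (single token, trimmed)
normalizeClassName("foo bar"); // ""     (not a single token, rejected)
```

The original replace() left "foo bar" untouched, so the space leaked into the class-matching regexp; the match() guard is what prevents that.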
Re: [whatwg] On tag inference
Henri Sivonen wrote: "What about the interaction of section with head and body? How would you insert the optional tags in this case: <!DOCTYPE html> <title>...</title> <section>...</section> <div>...</div>? My tentative assumption has been: <!DOCTYPE html> <html><head><title>...</title></head><body><section>...</section> <div>...</div></body></html>" That is how I would recommend it be defined. It's not what Firefox does (that's the easiest browser to get the DOM source from), but I don't think the defined behaviour should be affected by the results of current browsers in this case: <!DOCTYPE html> <html><head><title>...</title> <section>...</section></head><body><div>...</div></body></html>. Firefox doesn't even do that; it does this (I've replaced ... with "section" and "div", respectively, and formatted for easier reading):

  <html>
    <head>
      <title>Testing</title>
      <section></section>
    </head>
    <body>
      section
      <div>
        div
      </div>
    </body>
  </html>

In fact, even if you explicitly insert the body start tag, you get some strange results from unknown elements like section. For example, given this document:

  <!DOCTYPE html>
  <title>Testing</title>
  <body>
  <section>section
  <em>emphasis</em>
  <article>article</article>
  <div>div</div>
  </section>

Firefox closes the section element before any known block element, but allows any text nodes, inline elements, and other unknown elements to be nested:

  <html>
    <head>
      <title>Testing</title>
    </head>
    <body>
      <section>
        section
        <em>emphasis</em>
        <article>article</article>
      </section>
      <div>
        div
      </div>
    </body>
  </html>

This is why it should be defined that elements like section imply <body>; however, for backwards compatibility, it should be recommended that the start tags not be omitted in such cases. Even then, it won't always work as intended, e.g. you can't use these: section > div, section > p, ... { /* ... */ } -- Lachlan Hunt http://lachy.id.au/
Re: [whatwg] getElementsByClassName()
Jim Ley wrote: On 9/5/05, Lachlan Hunt [EMAIL PROTECTED] wrote: "I may not be understanding what you mean, but if optional parameters aren't language independent, shouldn't it be defined in a more language independent way, so that any non-ECMAScript languages can still implement this?" "Yes, the DOM currently is language agnostic; however, the optional className parameters aren't compatible with languages which can't do that. So, as defined now, getElementsByClassName would not manage to do that." In that case, should it be redefined as: NodeList getElementsByClassName(in DOMString classNames); where classNames is a string of space-separated class names? That would be just as easy to implement and would work with languages that don't support optional parameters. -- Lachlan Hunt http://lachy.id.au/
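[Editorial aside: a sketch of the single-string signature suggested above. Plain objects stand in for DOM elements so the matching logic can run outside a browser; the function name, element data, and extra `elements` parameter are all illustrative assumptions.]

```javascript
// Split the single classNames argument into tokens and return the
// elements that carry every requested class.
function getElementsByClassNames(elements, classNames) {
  var wanted = classNames.split(/\s+/).filter(function (t) {
    return t.length > 0;
  });
  return elements.filter(function (el) {
    var have = el.className.split(/\s+/);
    // Every requested class must appear among the element's classes.
    return wanted.every(function (t) {
      return have.indexOf(t) !== -1;
    });
  });
}

var els = [
  { id: "A", className: "foo" },
  { id: "B", className: "foo bar" },
  { id: "C", className: "bar" }
];
var matched = getElementsByClassNames(els, "foo bar");
// matched contains only element B
```

With this signature, "foo bar" unambiguously means the intersection of the classes foo and bar, which sidesteps the optional-parameter portability problem.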
Re: [whatwg] Re: getElementsByClassName
Ian Hickson wrote: On Sun, 4 Sep 2005, Lachlan Hunt wrote: "It also includes Element.hasClassName(), Element.addClassName() and Element.removeClassName(), which I think should also be added to WA1." "I envisage somehow making className implement the DOMTokenString interface: http://whatwg.org/specs/web-apps/current-work/#domtokenstring ...so that you would have Element.className.add(), Element.className.has(), etc." Cool, I'll see what I can do about implementing that. I think I may be able to extend the String() object quite easily for that, though I'll have to think about it a little more. "What should each of these function calls return? I've listed the ones that my script currently selects. Are any of them incorrect?

  04. getElementsByClassName(" foo");    | A, B, C, D, E, F, G
  05. getElementsByClassName("foo ");    | A, B, C, D, E, F, G
  06. getElementsByClassName(" foo ");   | A, B, C, D, E, F, G
  07. getElementsByClassName("foo bar"); | E, F"

"Incorrect; none of the above elements are in classes that have a space character in the class name." Fixed. All of those now return none; the other results are unchanged. "It will also solve IMHO unclear case of getElementsByClassName("foo bar") matching "bar foo". It would, as opposed to behavior where space is both separator and part of class name." "What if an element is in the class "foo bar"?" So, you're saying that it's possible that some hypothetical language may define a class attribute with any character as the delimiter, not just white space? So, for example, a language could use semi-colons, like this: <foo class="foo bar;baz">, and thus, for that language, gEBCN("foo bar") would match that? In which case, would it be worth adding a note to the spec stating that implementations should not assume that all languages will use white space delimiters between class names? On Mon, 5 Sep 2005, Lachlan Hunt wrote: "The problem is that white space handling in parameter values isn't currently defined at all..." "The spec now defines this better. Basically, "foo bar" would never match anything in HTML, XHTML, MathML or SVG." Thanks, that's much better. "At the moment I trim any leading and trailing spaces..." "The spec doesn't mention trimming, so, no trimming. :-)" Ok, trimming removed. -- Lachlan Hunt http://lachy.id.au/
Re: [whatwg] Re: getElementsByClassName
Ian Hickson wrote: On Tue, 6 Sep 2005, Lachlan Hunt wrote: "http://whatwg.org/specs/web-apps/current-work/#domtokenstring" "Cool, I'll see what I can do about implementing that. I think I may be able to extend the String() object quite easily for that, though I'll have to think about it a little more." "Let me know how that goes. That interface hasn't really been looked at much yet." I've thought about it some more, and it may be difficult to do with the way add() and remove() are currently defined with no return value. I assume that means you're intending for these functions to modify the string itself. However, in JavaScript a String() is immutable, and all other methods that do modifications actually return a new string rather than modifying the string itself. Then there's also the question of assuming the token delimiter will always be a space. Will there need to be a way to specify what the delimiter is, or is that intended to be dependent upon the language? For example, in HTML .className would return a DOMTokenString delimited by spaces, but in FooBarML it may be semi-colons, commas, or anything else. "In which case, would it be worth adding a note to the spec stating that implementations should not assume that all languages will use white space delimiters between class names?" "Well, it's highly theoretical. It seems such a note might be more confusing than helpful. What do you think?" I think fixing the grammar of this paragraph and adding one more sentence won't be too confusing. Current text:

| The space character (U+0020) is not special in the method's arguments.
| In HTML, XHTML, SVG and MathML it is impossible for an element to
| belong to a class whose name contains a space character, however, and
| so typically the method would return no nodes if one of its arguments
| contained a space.

Suggested text:

| The space character (U+0020) is not special in the method's arguments.
| In HTML, XHTML, SVG and MathML it is impossible for an element to
| belong to a class whose name contains a space character and thus, for
| these languages, the method would return no nodes if one of its
| arguments contained a space. This does not, however, prevent other
| languages from allowing spaces in class names.

-- Lachlan Hunt http://lachy.id.au/
[whatwg] Re: Are the semantic inline elements really useful?
Henri Sivonen wrote: On Aug 28, 2005, at 11:02, Lachlan Hunt wrote: "Although some editors do also provide some semantic options, they're usually limited in their abilities. Some have some semantic block-level elements like headings, paragraphs, lists and maybe blockquote. However, few have semantic inline elements like abbr, cite, code, dfn, kbd, samp, var, q and strong/em (some, like contentEditable, mistakenly use bold and italic options for those). I often have to jump through hoops just to get code in my markup while using Dreamweaver, by using the buttons for b and/or i and then running search and replace to fix up the markup." "Could the user interface difficulties with the semantic inline elements stem at least partly from problems with the semantic inline elements themselves?" I don't think so. I think it stems from the average person who thinks about things presentationally and jumps straight from "what is the content" to "how do I want it to look", and then marks that up. The problem is then compounded by poorly designed authoring tools that encourage such practices. "Consider cite for example. What's it really good for?... ... The scenario that perhaps in the future there will be a need to style the titles of works in a different way (for example bold strike-through fuchsia) seemed ludicrous." Yes, it does seem ludicrous when you immediately think a different style involves such radical changes. However, what if you just want to be able to differentiate citations from emphasis, definitions, and anything else presented in italics by default? For example, your stylesheet might say something like this:

  /* Default UA stylesheet */
  em, cite, dfn, i { font-style: italic; }

  /* Author stylesheet */
  em   { background-color: #EEF; }
  cite { color: gray; }
  dfn  { font-weight: bold; }

"Aside: Now that I looked at the source of the literature list, I noticed that some titles of works were marked up as em. My hypothesis is that after an upgrade Dreamweaver has started using em when pressing command-i. Sigh. See http://mpt.net.nz/archive/2004/05/02/b-and-i" That's another problem with WYSIWYG editors: they attempt to imply semantics based on how the user wants something to look, instead of letting the user specify semantics and determining the presentation from that. "P.S. Using cite and code is relatively easy with OOo Writer/Web but not as easy as pressing command-i." That's a limitation of the editor, and similar to the point I was trying to make when I said above, in the part you quoted: "I often have to jump through hoops just to get code in my markup [...] by using [...] b and/or i and then running search and replace..." -- Lachlan Hunt http://lachy.id.au/
Re: [whatwg] getElementsByClassName()
Anne van Kesteren wrote: Quoting Kornel Lesinski [EMAIL PROTECTED]: "It will also solve IMHO unclear case of getElementsByClassName("foo bar") matching "bar foo". It would, as opposed to behavior where space is both separator and part of class name." "This is not how the CLASS attribute works. "foo bar" means the element has two classes bound to it, foo and bar. With your syntax, getElementsByClassName("bar foo") would also need to match an element with "foo bar" as the value of the CLASS attribute." The problem is that white space handling in parameter values isn't currently defined at all, and I implemented it assuming that each parameter value would contain only one class name. Handling the (currently) erroneous parameter ("foo bar") is basically a form of error recovery, and the fact that it returns anything at all is merely a result of how the regex is constructed using it. Before I can fix the implementation in any way, I need to know how white space should be handled before (" foo"), after ("foo ") and inside ("foo bar") the parameter value. At the moment I trim any leading and trailing spaces in most cases (there's currently a bug that stops it working sometimes), but I don't really handle white space inside very well. ("foo bar") could basically be handled in the following ways, and I need to know which:

1. Equivalent to ("foo", "bar") (or [class~=foo][class~=bar], or .foo.bar in CSS)
2. The way it currently works, i.e. matches "foo bar", not "bar foo"
3. Error, return nothing.

-- Lachlan Hunt http://lachy.id.au/
Re: [whatwg] What exactly is contentEditable for?
Ian Hickson wrote: On Wed, 24 Aug 2005, Lachlan Hunt wrote: "contentEditable is not semantic, it's behavioural and belongs in the DOM interface only, not the markup." "How is it not semantic?" How is it semantic? "It's not behavioural..." It's behavioural because it specifies how content should be entered by the user (i.e. using a WYSIWYG editor) rather than just what kind of content is expected, leaving the editing/input methods up to the UA. It also (currently) requires scripts to be used at all, and scripts are behavioural. -- Lachlan Hunt http://lachy.id.au/
Re: [whatwg] What exactly is contentEditable for?
Ian Hickson wrote: On Mon, 15 Aug 2005, Lachlan Hunt wrote: "How is [contentEditable] any different from a text area form control with a specified accept type of text/html, which would allow a UA to load any external editor (eg. XStandard) or degrade to a regular text area?" "contentEditable is implemented. textarea type=text/html is not." And, as I demonstrated in an earlier e-mail with the widgEditor I linked to, it's not hard for an author to provide a script that converts the textarea to a WYSIWYG editor using the contentEditable DOM interface. It's not much different from the scripts that are being written to add support for other extensions in today's browsers. "That would be a far better option than using contentEditable, which is not only conceptually broken, but *all* implementations of it are so incredibly broken, that trying to standardise it is like dragging a dead horse through mud." "There may be some truth to that, but contentEditable also has other benefits, like integration with the DOM, and the ability to seamlessly integrate with the page. For example, on a wiki, you can be browsing the content, then toggle one area so it is contentEditable, edit it, and submit that, all asynchronously and without having to switch in a textarea or anything like that." That's a reasonable argument for standardising the DOM interface for it, but not for including contentEditable as an attribute in the markup, which is what I'm against the most. Although I am against the use of contentEditable in general, that's mostly because (a) all the implementations of it are broken and (b) the way it was designed is too presentationally oriented for a semantic markup language; it basically suffers from the same design flaws as every other WYSIWYG editor. Using the wiki example, a script could be provided which captures the events for the "edit this" links and dynamically makes the content for that section editable using the contentEditable DOM interface. Scripts would also be used to handle the submission. However, without script, those links should fall back to the way they currently work, which is to load a separate page with the editable markup in a textarea for the user. Additionally, that textarea could have an accept="text/html" attribute, from which (even without JS enabled) the user agent could choose to load an HTML editor for the user (whether that be just providing syntax highlighting in a code view or a WYSIWYG-style editor). Personally, I'd like to see it better integrated with the DOM 2 Range interface, which I'd like to see more widely implemented, so that scripts could work with the nodes a little more easily. -- Lachlan Hunt http://lachy.id.au/
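[Editorial aside: a sketch of the progressive-enhancement flow described above. With script, the "edit this" link toggles its section in place via the contentEditable DOM interface; without script, the link still navigates to the ordinary textarea edit page. The objects and handler wiring below are illustrative stubs, not a real DOM.]

```javascript
// Wire an "edit this" link to toggle a section between browsing and
// editing; the second click submits the edited markup asynchronously.
function enhanceEditLink(link, section, submit) {
  link.onclick = function () {
    if (section.contentEditable === "true") {
      // Finish editing: lock the section and hand the markup back.
      section.contentEditable = "false";
      submit(section.innerHTML);
    } else {
      section.contentEditable = "true";
    }
    return false; // cancel the fallback navigation to the edit page
  };
}
```

In a browser, link and section would be real elements and submit would POST the markup back to the server; with scripting disabled, onclick never fires and the link's href fallback (the textarea edit page) applies unchanged.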
Re: [whatwg] What exactly is contentEditable for?
Olav Junker Kjær wrote: Lachlan Hunt wrote: "How is that any different from a text area form control with a specified accept type of text/html, which would allow a UA to load any external editor (eg. XStandard) or degrade to a regular text area?" "The point of contentEditable is that some areas of a page can be made editable (and editing toggled on and off), while still maintaining the styling and structure of the document. This is really useful for CMSes and other kinds of editors: template editing and so on." I'm not disputing the fact that there is an unfortunate demand for embedded WYSIWYG editing in web-based CMSs; it is the conceptually broken implementation I'm against. "contentEditable is quite clean since you just toggle an attribute." No, it's messy, because it mixes normal document content with user input in a way that does not, by design, degrade very gracefully at all without scripting. "With your proposal, the editable element should toggle between the original content, and a textarea element containing content," Which, from a user's point of view, is how contentEditable is generally implemented by authors within web pages. Take, for example, the example provided on MSDN [1]. That provides both a content-editable region and a textarea. Although they are clearly separated from each other, the concept of switching between the two editing modes is still there. Now, take a look at Cameron Adams' widgEditor [2]: an implementation using script to dynamically replace an ordinary textarea with a content-editable region, with the ability for the user to switch between the two. "now HTML escaped, but still rendered as if it were ordinary content," It had to be escaped because the textarea contains #PCDATA, despite the fact that implementations tend to treat it more like CDATA, with the exception that they still process entity and character references. ""User can edit with plain text editor or UA can load WYSIWYG editor for text/html (or whatever MIME type is specified)" But this considers the editable content as just an arbitrary content type which should be edited in some external editor." It doesn't necessarily have to be an external editor; that aspect is implementation specific. A UA could quite easily replace the text area with a content-editable region, much like the widgEditor script does. Another UA could alternatively load an editor plug-in like XStandard into the page; and another could even, theoretically, launch an application like Dreamweaver. The point is that the markup should not be concerned with the actual implementation details, like contentEditable is. "The point of contentEditable is that the editable content is HTML" The point of the suggested textarea content-type="text/html" was that the editable content is HTML; what's the semantic difference? "and an integrated part of the containing page, which enables much cleaner in-place editing." Perhaps cleaner from a user's point of view, but, IMHO, certainly not cleaner from an author's and markup point of view. However, as I've said above, there's nothing stopping a UA implementing the interface for my suggestion like the content-editable interface. "If you just consider the editable content an arbitrary blob of editable content, you wouldn't e.g. expect styles from the containing document to inherit into the editable HTML, which is a major point of contentEditable." That is conceptually a preview mode, and there's nothing stopping the UA providing such a view with either method. In fact, there are several examples of authors providing script-based previewing. See Jon Hicks' weblog comment system [3] for one. That particular example uses Textile for editing, rather than HTML, but the concept is still the same, and with script enabled, the user sees a preview of the content below as they type. "Also consider that editable areas may contain non-editable islands, which again may contain editable areas. How would that be expressed using TEXTAREA?" That's a usability nightmare; it wouldn't make much sense for part of the content to be editable and other parts not. If you have separate sections to edit, provide separate form fields for each one. "I don't see how it's conceptually broken." Well, firstly, because the whole idea of editing a semantic language like HTML with a very presentationally oriented WYSIWYG system is broken. That applies to all WYSIWYG HTML editors (not just contentEditable), which are not helped by the presence of presentational toolbar functions (eg. the typical bold, italic, font colour and alignment buttons found in a typical editor's toolbar). However, even ignoring the problems of WYSIWYG editing for HTML, contentEditable is still conceptually broken. The attribute is behavioural, not semantic, and has no place within a semantic language. Although, there may be some arguments for retaining/standardising
Re: [whatwg] What exactly is contentEditable for?
Anne van Kesteren wrote: Quoting dolphinling [EMAIL PROTECTED]: "Perhaps I've missed something, but while I've seen lots on what contentEditable does and how it works and how various other things are associated with it, I've never actually seen anything explaining *why* it exists. So... what's it good for?" "Could you be more specific? It basically enables WYSIWYG editing for web pages. (With the freedom that you can restrict certain elements from being edited, et cetera.)" How is that any different from a text area form control with a specified accept type of text/html, which would allow a UA to load any external editor (eg. XStandard) or degrade to a regular text area? e.g.

  <textarea content-type="text/html">
    &lt;p&gt;Markup goes in here. User can edit with plain text editor
    or UA can load WYSIWYG editor for text/html (or whatever MIME type
    is specified)&lt;/p&gt;
  </textarea>

That would be a far better option than using contentEditable, which is not only conceptually broken, but *all* implementations of it are so incredibly broken, that trying to standardise it is like dragging a dead horse through mud. -- Lachlan Hunt http://lachy.id.au/
Re: [whatwg] Pattern Hint
Dean Edwards wrote: fantasai wrote: Dean Edwards wrote: "http://www.whatwg.org/specs/web-forms/current-work/#the-pattern" "That is not enough. I wouldn't put something so complex in a tooltip. It would frighten my users." What could be so complex that it would frighten users when used in a title attribute, yet wouldn't have the same effect when used in some kind of pattern-hint attribute, regardless of how it's displayed to the user? -- Lachlan Hunt http://lachy.id.au/
Re: [whatwg] text/html conformance checkers and CDATA
Anne van Kesteren wrote: Quoting Lachlan Hunt [EMAIL PROTECTED]: "I think conformance checkers should not allow '</' in elements whose content model in HTML 4 was CDATA. Agreed. That is how HTML 4 validators currently work." "And also how no browser works." That's irrelevant in this case. The question is about whether or not it is valid for authors to use '</' within elements containing CDATA, which it is not, regardless of how browsers should actually handle the error. "This point was raised before, by the way: http://listserver.dreamhost.com/pipermail/whatwg-whatwg.org/2005-January/002993.html" For the purpose of error handling, it would be acceptable to define that an erroneous '</' does not close the element, but that doesn't make its use by authors any more valid, and it should be picked up by any decent conformance checker. -- Lachlan Hunt http://lachy.id.au/
Re: [whatwg] text/html conformance checkers and comments
David Håsäther wrote: On 2005-07-26 03:33, Lachlan Hunt wrote: "<! ... >" "The only real use I've ever seen for a null comment declaration is to suppress markup, as in &<!>amp;" Why not just do it properly as &amp;amp;? That way it works for both HTML and XHTML, whereas your version is only valid for HTML. -- Lachlan Hunt http://lachy.id.au/
Re: [whatwg] text/html conformance checkers and CDATA
Henri Sivonen wrote: "Should text/html conformance checkers treat the string '</' as the end of script and style, as in SGML, or should they look for the entire end tag, as in tag soup?" I believe '</' is only valid in script and style elements according to SGML rules when it is the end-tag for the elements, and therefore it must be of the form </script>, </style> or the SHORTTAG NET form </>. However, since SHORTTAG is not supported, only </script> and </style> should be allowed. "I think conformance checkers should not allow '</' in elements whose content model in HTML 4 was CDATA." Agreed. That is how HTML 4 validators currently work. -- Lachlan Hunt http://lachy.id.au/
Re: [whatwg] [WF2] Web Forms 2.0: Repetition and type ID
Ian Hickson wrote: On Fri, 1 Jul 2005, fantasai wrote: "I'd like to suggest that ID attributes use a different syntax than [] to mark repetition placeholders, ..." "Ok, I allowed two other characters to be used in the place of [] as well." In principle, that's a good idea. But do you really expect a typical author to remember that they should, or even be able to, type ⁅ and ⁆ instead of [ and ] for id attributes, considering that they don't appear on most keyboards and they may not even have any fonts with those glyphs available? Personally, I prefer Matthew's idea of using a templateid attribute. -- Lachlan Hunt http://lachy.id.au/
Re: [whatwg] Web Apps 1.0: On-line help
Jim Ley wrote: "adding in a link rel of help would seem a pretty low rent thing to define," There's already a help relationship defined in HTML 4 [1]; it doesn't need to be added. Perhaps its semantics could be refined, with some examples of use given and some possible implementation methods described. [1] http://www.w3.org/TR/html401/types.html#h-6.12 -- Lachlan Hunt http://lachy.id.au/ http://GetFirefox.com/ Rediscover the Web http://GetThunderbird.com/ Reclaim your Inbox
Re: [whatwg] A thought: a href=... method=post
Ian Bicking wrote: "I was just thinking about the recent problems introduced by the Google Web Accelerator following links that have side effects (the typical <a href="form?delete=10">[delete this]</a> stuff)..." So, is this a suggested solution to that problem? "A related extension might be a method attribute on anchor tags. One might expect <a href="form?delete=10" method="POST">[delete this]</a> to do a POST request to form with a request body of delete=10. Or it could do a POST with an empty request body, but unfortunately a large number of web frameworks ignore URL variables in POST requests. The Google Web Accelerator will still be broken (the method attribute wouldn't magically appear on all the many applications out there)," ...which doesn't really solve the problem at all? From what I understand, it's not Google's web accelerator that's broken, but rather the implementations that use links instead of forms and depend on JavaScript for confirmation. Anything that unconditionally depends on JS is broken by design, not the tool that doesn't make use of it. Ideally, if JS is used for confirmation, as in the apps that I've heard are affected, the script should modify the URI in some way to pass additional confirmation information (eg. appending a ...confirmed=1 parameter). In the absence of that confirmation, the server could then send a page with a form requesting confirmation. -- Lachlan Hunt http://lachy.id.au/ http://GetFirefox.com/ Rediscover the Web http://GetThunderbird.com/ Reclaim your Inbox
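[Editorial aside: a sketch of the confirmation scheme suggested above. After the user confirms, script appends a parameter to the destructive link's URI so the server can distinguish a deliberate request from a blind prefetch. The parameter name "confirmed" is this message's own example, not a standard, and the helper name is illustrative.]

```javascript
// Append the confirmation parameter, using "?" or "&" as appropriate
// depending on whether the URI already has a query string.
function confirmHref(href) {
  return href + (href.indexOf("?") === -1 ? "?" : "&") + "confirmed=1";
}

confirmHref("form?delete=10"); // "form?delete=10&confirmed=1"
```

A prefetching tool like the Web Accelerator never runs the confirmation script, so it only ever requests the unconfirmed URI, which the server can answer with a confirmation form instead of performing the deletion.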
Re: [whatwg] text/html flavor conformance checkers and foo /
Henri Sivonen wrote: "What should text/html flavor conformance checkers say about <foo />?" "Silently treat as <foo> as per SGML?" Yes. "Silently treat as <foo> as per the real world?" Intentionally buggy/broken behaviour should not be carried over into conformance checkers. "Report a warning?" Yes. "Report an error?" I don't think it should be an error. A warning like the one the WDG validator issues is appropriate. "What about <foo/>?" Same as <foo />. -- Lachlan Hunt http://lachy.id.au/ http://GetFirefox.com/ Rediscover the Web http://GetThunderbird.com/ Reclaim your Inbox
[whatwg] [WA1] lang and xml:lang
Hi, Web Apps currently states [1]:

# Authors should not use the lang attribute in XML documents. Authors
# should instead use the xml:lang attribute.

Is there any reason for not making that "must not"? The only reason someone would ever have for using lang instead of xml:lang in XHTML is when serving it as text/html, which is strictly forbidden in this version. It should be stated that lang is for HTML only and xml:lang is for X(HT)ML only. I think the heading for the attribute definition should be updated to include xml:lang as well. [1] http://www.whatwg.org/specs/web-apps/current-work/#lang -- Lachlan Hunt http://lachy.id.au/ http://GetFirefox.com/ Rediscover the Web http://GetThunderbird.com/ Reclaim your Inbox
[whatwg] [WA1] The profile Attribute
Hi,

# User agents must ignore all the URIs given in the profile attribute
# that follow a URI that the UA does not recognise. (Otherwise, if a
# name is defined in two profiles, UAs would assign meanings to the
# document differently based on which profiles they supported.)
#
# Note: If a profile's definition changes over time, documents
# that use multiple profiles can change defined meaning over
# time. So as to avoid this problem, authors are encouraged to
# avoid using multiple profiles.

I disagree with those statements for several reasons, but mostly because they are confusing and seem to place unnecessary restrictions on the processing of profiles.

1. There are no reasons there to avoid multiple profiles altogether, only reasons to avoid profiles with conflicting definitions.

2. Forcing a UA to ignore all profiles occurring after one it does not understand places an unnecessary burden on the author to specify profiles in the order in which they are most supported by UAs.

3. That also forces unnecessary restrictions on which profiles a UA may support and process. For example:
* User Agent A implements XFN
* User Agent B implements RelLicence
* A document uses both XFN and RelLicence, and specifies XFN first in the profile attribute.

In that scenario, user agent B unfairly loses out on being able to apply the semantics of the RelLicence profile. Considering that UAs A and B are likely to serve different purposes, there may be little reason for one to implement the other profile, for anything other than as a workaround for this specification. This also defeats the whole purpose of allowing multiple profiles.

4. The Note about a profile's definition changing over time, somehow only affecting documents with multiple profiles, makes no sense. If a document uses any profile and its definition changes, then the semantics of the document are going to change too. It is certainly not a reason to avoid multiple profiles.
I recommend updating the spec with the following points:
* If two profiles define the same name, then the semantic is given by the first known URI specified in the profile attribute.
* UAs may ignore unknown profiles and continue to process any subsequent profiles.
* Authors should avoid multiple profiles with conflicting definitions, because UAs may apply differing semantics depending on the profiles they do and do not know.

Remove the note from the end of the section entirely (or rewrite it), because the reason given does not match the recommendation to avoid multiple profiles, which is confusing.

-- Lachlan Hunt http://lachy.id.au/
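The recommended processing model above can be sketched as a small resolution routine. The profile URIs and term definitions below are illustrative stand-ins, not real profile contents:

```javascript
// Sketch of the proposed rules: unknown profiles are skipped (they do
// not block later profiles), and when two known profiles define the
// same name, the first known URI listed in the profile attribute
// wins. The known-profile table maps URIs to made-up definitions.
const knownProfiles = {
  "http://example.com/profile-a": { foo: "A's foo", bar: "A's bar" },
  "http://example.com/profile-b": { foo: "B's foo", baz: "B's baz" },
};

function resolveTerms(profileAttr, known = knownProfiles) {
  const terms = {};
  for (const uri of profileAttr.trim().split(/\s+/)) {
    const defs = known[uri];
    if (!defs) continue; // unknown profile: ignore it, keep processing
    for (const [name, meaning] of Object.entries(defs)) {
      if (!(name in terms)) terms[name] = meaning; // first known URI wins
    }
  }
  return terms;
}
```

Under this model, a UA that knows only profile B still gets B's semantics even when an unknown profile is listed first, which is exactly the behaviour the current draft forbids.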
Re: [whatwg] [WA1] lang and xml:lang
Ian Hickson wrote: On Sun, 17 Apr 2005, Lachlan Hunt wrote: It should be stated that lang is for HTML only and xml:lang is for X(HT)ML only. Done.

Thank you, but now there's just one more issue.

# If both the xml:lang attribute and the lang attribute are set, user
# agents must use the xml:lang attribute, and the lang attribute must be
# ignored for the purposes of determining the element's language.

Is that the case for both HTML and XHTML documents? It would make more sense if, in the case of both being set, lang was used for text/html documents and xml:lang for XML documents. However, in the case of only one being set, but for the wrong MIME type (e.g. xml:lang set for a text/html document, or lang for an XML document), should UAs be allowed to fall back on it for error recovery?

-- Lachlan Hunt http://lachy.id.au/
Re: [whatwg] [WA1] The profile Attribute
Ian Hickson wrote: On Sun, 17 Apr 2005, Lachlan Hunt wrote: 1. There are no reasons there to avoid multiple profiles altogether, only reasons to avoid profiles with conflicting definitions.

Imagine you use publicly available profiles A and B. A has definitions foo and bar. B has the definition baz. You use foo, bar, and baz extensively in your document. Two months later, the author of profile A updates his profile to include the definition baz, meaning something completely different to the definition from profile B.

Well, I'd say the author of profile A has broken some rules by not keeping the URI for an old version persistent. Profile authors should (hopefully) be smarter than that. Even when XFN was updated from 1.0 to 1.1, a new URI was given to avoid altering the semantics of existing documents in any way. I'd say the chances of the above occurring are slim, and not worth restricting the ability to make use of multiple profiles. The spec could, instead, provide a strong recommendation for profile authors to keep profile versions persistent.

Your document has now radically changed meaning, yet you didn't use profiles that had clashing meanings when you wrote your document.

In which case, I'm sure many authors would be complaining to the profile author about such a change, and I still don't think the spec needs unnecessary restrictions for this small use case.

The only way I can see to avoid this is to use only one profile, since then you can't ever get clashes.

There are other ways I've seen proposed, such as using namespaces: http://www.protogenius.com/rel-schemas/draft-scheid-rel-schemas-00.htm Although that proposal doesn't seem to even make use of the profile attribute, but rather link elements, which would be a big improvement over the profile attribute.

Imagine you use publicly available profiles A and B. A has definitions foo and bar. B has definitions foo and baz. ... Someone uses a browser that supports only profile B.
Now your document will be rendered or processed with completely different semantics, because the UA thinks that by foo you mean B's foo. Your document has now radically changed meaning.

That's a valid use case for avoiding profiles with conflicting definitions, not one against multiple profiles in general.

3. That also forces unnecessary restrictions on which profiles a UA may support and process. For example:
* User Agent A implements XFN
* User Agent B implements RelLicence
* A document uses both XFN and RelLicence, and specifies XFN first in the profile attribute. ...

That's a fair point, but implementing XFN for user agent B might be simply a matter of dereferencing the profile URI, downloading the XMDP description (or whatever we end up specifying should be at the end of profile URIs -- something will eventually be specified) and ignoring the values from that profile.

If it is defined that the resources referenced by the profile attribute should be XMDP (which would be a big improvement over HTML4, which leaves the format explicitly undefined), and UAs were able to download the profile and determine its values, then that would solve a lot of problems.

Changed "changes" to "introduces new definitions", which is what I meant. A profile should never drop values it previously defined, and this will be mentioned in the relevant spec when that gets defined.

A profile version should never introduce, drop or change values and semantics. If values are added, changed, deprecated or removed, a new version with a new URI should be published.

The author can't always know when the profiles he's using will end up with clashes in the future.

They can if profiles remain persistent, and although persistence can never be guaranteed with 100% certainty, such changes are a small use case that's unlikely to occur.

-- Lachlan Hunt http://lachy.id.au/
Re: [whatwg] [WA1] lang and xml:lang
Ian Hickson wrote: On Sun, 17 Apr 2005, Lachlan Hunt wrote:

# If both the xml:lang attribute and the lang attribute are set, user
# agents must use the xml:lang attribute, and the lang attribute must be
# ignored for the purposes of determining the element's language.

Is that the case for both HTML and XHTML documents? Yes.

So, if I have this HTML document:

<!DOCTYPE ...>
<html lang="en" xml:lang="fr">
<title>HTML document</title>
<p>This is an HTML, not an XML, document.

Considering that legacy HTML UAs won't know about the xml:lang attribute, and will only use lang, are you saying that a conforming Web Apps UA should treat the document as French?

It would make more sense if, in the case of both being set, lang was used for text/html documents and xml:lang for XML documents.

The only way you can set xml:lang in an HTML document is via the DOM.

Now I'm confused. If that's the case, then wouldn't the above example be treated as English, regardless of the xml:lang attribute in the source? (in HTML, there are no namespaces). Which is why xml:lang should be completely ignored, as an unknown attribute, in HTML.

I don't think it's worth having special requirements for something that no-one is likely to ever do outside of obscure test cases.

I've seen people use lots of XML syntax in HTML documents, including xmlns and xml:lang attributes, even in one that had an explicit HTML4 DOCTYPE (though I can't remember where), and not just in MS Word generated rubbish. The point is authors do a lot of silly things, and I thought UA behaviour needed to be well defined for as many use cases as possible.

However, in the case of only one being set but for the wrong MIME type (e.g. xml:lang set for a text/html document or lang for an XML document), for error recovery, should UAs be allowed to fall back on it?

If xml:lang="" is set on an element in a text/html document, it'll be {html, 'xml:lang'}, not {xml, 'lang'}, which is what xml:lang really is.

I don't understand how that answers the question.
-- Lachlan Hunt http://lachy.id.au/
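The alternative rule discussed in this thread (prefer the attribute matching the serialisation, fall back on the other one for error recovery) can be stated as a plain function. This is a sketch of the proposal, not of what the draft actually requires, and the function name is illustrative:

```javascript
// Sketch of the proposed alternative: text/html documents prefer
// lang, XML documents prefer xml:lang, and either may fall back on
// the other attribute for error recovery. Empty or missing values are
// treated as "not set". Note the current draft instead always
// prefers xml:lang when both are set.
function elementLanguage(isXML, lang, xmlLang) {
  const preferred = isXML ? xmlLang : lang;
  const fallback = isXML ? lang : xmlLang;
  return preferred || fallback || null;
}
```

Under this rule, the <html lang="en" xml:lang="fr"> example above would be treated as English when served as text/html, matching what legacy HTML UAs do.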
Re: [whatwg] [web-apps] 2.7.8 The i element
Ian Hickson wrote: On Sat, 16 Apr 2005, Lachlan Hunt wrote: Perhaps. It's been argued many times before that i is the most suitable element to use for such purposes; but then again, italics for ship names is merely a typographical convention and the i element is as meaningless as span.

Actually, i in HTML5 is currently defined as having specific semantics: http://whatwg.org/specs/web-apps/current-work/#the-i

So does i now stand for "instance", instead of "italics"?

My favourite book is <a href="urn:isbn:0-735-71245-X">Eric Meyer on CSS</a>.

What if there is no appropriate link, though? I don't know. Or when I can't be bothered to find out what the link is? Then you're just being lazy :-)

Also, there's nothing that distinguishes that a from other a elements. Sure there is:

a[href^="urn:isbn:"] { /* Styles for book titles */ }

Although, that would depend on every book being linked with an ISBN URI if they were all to receive the same styles.

Yet there is something very different about that one -- it's the title of another work. I'd like to be able to style all such titles consistently, so they have to be marked up in some way.

In that case, would you want to differentiate between ordinary titles and real citations? Or is that something that the class attribute could handle, if needed?

Movie titles are similar. I'd like my UA to give me a tooltip containing information from IMDB for every movie title. With user JavaScript I can do this, if there's a way to recognise movie titles.

Then would you want different markup for book titles, movie titles, play titles, song titles, etc? Or would you just expect the script to search IMDB for anything marked up with cite?

-- Lachlan Hunt http://lachy.id.au/
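The user-script idea above can be sketched with a tiny helper. The IMDB "/find?q=" query URL pattern below is an assumption for illustration, not a documented IMDB API:

```javascript
// Hypothetical helper for the user-JavaScript tooltip idea: given the
// text of a marked-up title (e.g. the contents of a cite element),
// build an IMDB search URL to fetch information from. The URL pattern
// is an assumption, not a documented API.
function imdbSearchUrl(title) {
  return "https://www.imdb.com/find?q=" + encodeURIComponent(title);
}
```

A user script could then iterate over document.getElementsByTagName("cite") and attach a tooltip linking to this URL for each title, which is exactly why the script would need some way to tell movie titles apart from other citations.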
Re: [whatwg] HTML5: New link-types regarding guideline 2.4 in WCAG 2.0
Anne van Kesteren wrote: Lachlan Hunt wrote: Could some of these be improved and included within Web Apps? http://lachy.id.au/dev/markup/specs/wclr/

I haven't read it completely, but this sentence sounds incorrect:

# Designates a resource containing user contributed comments. May be
# used in conjunction with feed to designate a syndication format
# resource for comments.

If you are proposing |rel="feed comments"|, that would imply that the link is both about comments and is a feed.

I don't understand the problem. The comments relationship doesn't say it's "about comments", it says it "contains comments". The definitions for comments and feed are:

comments: Designates a resource containing user contributed comments...
feed: Designates a resource used as a syndication format.

With comments and feed together, it should indicate a resource used as a syndication format containing user contributed comments. Perhaps the sentence you cited above could be clarified to reflect this better.

|rel="alternate stylesheet"| was an error from the HTML4 WG (I discussed this with fantasai on IRC) because it actually says that the resource linked to is both an alternate representation of the current page and is a stylesheet. However, it actually is an 'alternate stylesheet' for the current page, as opposed to the default stylesheet linked with |rel="stylesheet"|.

I somewhat agree with this, although it seems that it is just the definition of alternate that is poorly worded. If it were defined more like this, "alternate stylesheet" would be more appropriate:

Designates substitute versions for the document in which the link occurs or, when used in conjunction with another link type, an alternate version of the resource type indicated.

(That definition is not perfect, but I think you'll understand what it's supposed to mean anyway.)

I suggest you fix that ambiguity (and others, if they exist) first.
Also note that we probably don't need |rel="permalink"|, as a link inside an ARTICLE element with a rel value of bookmark probably does that already.

I somewhat disagree that bookmark does this. It's defined as: "...A bookmark is a link to a key entry point within an extended document..." Unless I'm mistaken, a permanent link for the document doesn't really seem to fit that definition.

-- Lachlan Hunt http://lachy.id.au/
Re: [whatwg] [WF2] Conformance Requirements Issues
Ian Hickson wrote: But there are parts of HTML4 that will never be supported by mainstream browsers. Then they won't be compliant to HTML4, or to specs that extend HTML4 (like WF2).

Then why write a spec that no browser will ever be able to be fully compliant with, due to backwards compatibility constraints?

This will be addressed in Web Apps 1 / HTML5. Ok.

Perhaps this bit from section 2.2 Existing Controls can be moved or copied up to the conformance requirements:

| Compliant UAs must follow all the guidelines given in the HTML4
| specification *except those modified by this specification*.

Fair point. Done. I also made it (as you suggested, I think) cover only the forms-related parts.

Yes, that looks good.

-- Lachlan Hunt http://lachy.id.au/
Re: [whatwg] p elements containing other block-level elements
Matthew Thomas wrote: Lachlan Hunt wrote: ... I don't understand what's wrong with the XML error handling. I think it's great, because errors should be caught and handled during the authoring process and by the CMS, which XML essentially forces.

http://diveintomark.org/archives/2004/01/14/thought_experiment

As I said above, errors should be caught and handled during the authoring process and by the CMS. That is clearly just a case of the CMS not doing its job properly, and a broken implementation doesn't mean the language is broken. The nature of XML requires that both the client and the publishing tool enforce well-formedness, not just one or the other. If your CMS isn't up to the job, then you shouldn't even attempt to maintain a well-formed document that accepts input from external sources. I agree with Henri's comment about using ad hoc print statements, rather than a true XML tool that guarantees well-formed output.

-- Lachlan Hunt http://lachy.id.au/
Re: [whatwg] p elements containing other block-level elements
Ian Hickson wrote: blockcode should probably be allowed too, though it doesn't seem to be included in Web Apps. Oh well, that's probably a discussion for another thread anyway, if it hasn't already been discussed (I'll search the archives later). We haven't discussed it yet. I hadn't really thought about it, but given:

<pre><code> ... </code></pre>
<blockcode> ... </blockcode>

To use <pre><code> like blockcode, one would have to style it with:

pre>code:only-child { display: block; }

Although, there is a very small use case that would make <pre><code> unusable for that purpose. E.g. in a marked-up e-mail (or other plain text document), one may use code to mark up code samples. But when there is only one occurrence of code in the whole document, surrounded only by plain text and no other elements, then :only-child would still match it, causing a potentially undesired effect. Though, the chances of that happening are slim and probably not worth worrying about.

...and given that the former would work in all existing UAs and the second wouldn't, and the former has the same semantics as the second, I don't see much of an advantage to the second.

You could introduce blockcode as an XML-only element, but then I guess there's not much stopping me from using xhtml2:blockcode instead. It's a shame no browser actually reads the DTD; this wouldn't be a problem if they did :-(. This is one reason why HTML should be a true SGML application, and why browsers should have been built to conform.

Yeah, well. In the words of Syndrome: "Too late. 15 years too late." hehe. :-) That's one reason why I now consider HTML to be a dying language, only being retained for backwards compatibility where XHTML support is unavailable.

b) We allow it in XML and the DOM but disallow it in the HTML serialisation. Yes, this makes the most sense to me. Cool, it seems we are in agreement then. Wow, really!? This must be a first.
:-) I think we'll probably be stuck with HTML for a very long time -- at least as long as it takes for XML to have a variant created that has well-defined error handling rules other than the author-hostile "abort processing immediately".

I don't understand what's wrong with the XML error handling. I think it's great, because errors should be caught and handled during the authoring process and by the CMS, which XML essentially forces. I don't think user agents should have to gracefully handle errors when it's trivial for authors to fix them. Hopefully, one day CMSs will be built as real XML tools, and people will stop complaining about accidental errors causing a catastrophe.

The history of HTML has shown us that unobvious errors simply don't get fixed, because many authors are too lazy to even bother checking, and many are even lazier about fixing things when they do. I've lost count of the number of posts to www-validator stating something like: "I think the validator should ignore X...", "I don't know how/want to fix it...", "It works, so it is not invalid...", "The validator is wrong/broken." If XML were to allow more graceful error handling, I see nothing but the possibility of history repeating all over again.

I don't think the spec should limit nested content too much because... Agreed.

OMG! Twice in the same e-mail! What are the odds of that? :-)

I've made the spec not restrict the content models per se, just say "this element can contain this category of elements" and made sure the elements are in the right categories.

That seems like the most appropriate way to handle it.

-- Lachlan Hunt http://lachy.id.au/
Re: [whatwg] Image maps: should we drop a coords=?
Ian Hickson wrote: Client-side with a (doesn't work in WinIE6, works in Moz, Opera):

<img usemap="#foo"> (or <object usemap="#foo"></object>)
<map id="foo"> ... <a coords="..." ...></a> </map>

I've never seen that used at all either, most likely because it doesn't work in IE and because every single tutorial I've ever seen only teaches area. While it is definitely a better design than area, it isn't a substantially better design.

How so? Although a might have a slightly less presentational name than area, the semantics of both are identical when used for an image map.

I believe we can take the opportunity to prune the spec without ill effect. I don't see any harm in either keeping it or removing it, but there's not much point to having it either.

Anyone want us to keep a coords=""? No.

One request, though. When this section of the spec gets written, can you provide an example with less presentational abuse than HTML4 does? Using an image map just to provide a navigational toolbar is inappropriate, because the same can be, and has been, achieved with CSS. Image maps should be used to describe the structure of an image and to indicate significant areas within it. The simplest and most often used non-presentational example I've seen is a world map, but perhaps something like highlighting sections of a photo, for which there are close-up pictures available. E.g.:

<img src="/images/park" usemap="#park" alt="...">
<map id="park">
  <area coords="..." shape="rect" href="swings" alt="Swing Set"
        title="Close up photo of the swing set">
  <area coords="..." shape="poly" href="tree" alt="Old Willow Tree"
        title="Close up photo of the old, gnarled willow tree">
</map>

-- Lachlan Hunt http://lachy.id.au/
[whatwg] [WA1] Specifying Character Encoding
In the current draft, for specifying the character encoding [1], it is stated:

| In XHTML, the XML declaration should be used for inline character
| encoding information.
|
| Authors should avoid including inline character encoding information.
| Character encoding information should instead be included at the
| transport level (e.g. using the HTTP Content-Type header).

The second paragraph should only apply to HTML using the meta element, not to XHTML using the XML declaration. For X(HT)ML, according to the Architecture of the World Wide Web, Volume One - Media types for XML [2]:

| In general, a representation provider SHOULD NOT specify the character
| encoding for XML data in protocol headers since the data is
| self-describing.

I think it should also be noted that authors who omit the XML declaration (or include it but don't specify the encoding attribute) *must* use UTF-8 or UTF-16, as described in the XML recommendation.

[1] http://www.whatwg.org/specs/web-apps/current-work/#charset
[2] http://www.w3.org/TR/2004/REC-webarch-20041215/#xml-media-types

-- Lachlan Hunt http://lachy.id.au/
Re: [whatwg] [WA1] Title Element Content Model
Anne van Kesteren wrote: Lachlan Hunt wrote:

| In HTML (as opposed to XHTML), the title element must not contain
| content other than text and entities; user agents must parse the
| element so that entities are recognised and processed, but all other
| markup is interpreted as literal text.

I think that should be changed to state: "... but, for backwards compatibility, all other markup (such as elements and comments) should be interpreted as literal text."

Why? Its content model is #PCDATA.

I know, so for HTML4, current browsers shouldn't interpret markup as plain text and display it in the title bar, but they do. E.g.:

<title><em>Hello</em> World!</title>

will be displayed by current UAs in the title bar as "<em>Hello</em> World!", instead of just "Hello World!". As you can see in the quote above, the current draft makes this incorrect behaviour a requirement by stating that user agents must parse the element so that "[...] all other markup is interpreted as literal text". I am only requesting that that requirement be changed from a *must* to a *should* for backwards compatibility, because that's what current UAs do now, but not what strictly conforming SGML/HTML4 UAs are supposed to do.

-- Lachlan Hunt http://lachy.id.au/
Re: [whatwg] [WA1] Specifying Character Encoding
Anne van Kesteren wrote: Lachlan Hunt wrote:

| In XHTML, the XML declaration should be used for inline character
| encoding information.
|
| Authors should avoid including inline character encoding information.
| Character encoding information should instead be included at the
| transport level (e.g. using the HTTP Content-Type header).

The second paragraph should only apply to HTML using the meta element, not XHTML using the XML declaration.

Why? If people are still using text/xml, for example, you really want them to use the HTTP Content-Type header. Otherwise it's US-ASCII.

I didn't consider text/xml, because the current draft states in the conformance requirements:

| XML documents [...] that are served over the wire (e.g. by HTTP) must
| be sent using an XML MIME type such as application/xml or
| application/xhtml+xml...

I had initially interpreted that as meaning authors must use application/*+xml and must not use text/xml; however, that interpretation may be incorrect. Perhaps it should be explicitly stated that text/xml should not be used, with a reference to the webarch recommendation. In any case, my statement about the second paragraph still stands for XML served as application/*+xml, though it should probably apply to XML served as text/xml too. It is unclear whether or not a document served as text/xml;charset=whatever should include the XML encoding declaration, but probably not, because "Transcoding may make the self-description false..." (as described in webarch).

I think it should also be noted that authors who omit the XML declaration (or include it but don't specify the encoding attribute) *must* use UTF-8 or UTF-16, as described in the XML recommendation.

Where did you read that in the XML specification? Appendix F.1 states [1]:

| Because each XML entity not accompanied by external encoding
| information and not in UTF-8 or UTF-16 encoding must begin with an XML
| encoding declaration...

You can always specify the encoding using the 'charset' parameter.

...although I had forgotten it was acceptable to use an encoding other than UTF-8 or UTF-16 without the XML declaration when accompanied by external encoding information, as well as being somewhat misinformed by this statement in XHTML 1.0 Appendix C [2]:

| Remember, however, that when the XML declaration is not included in a
| document, the document can only use the default character encodings
| UTF-8 or UTF-16.

which fails to mention the condition of external encoding information.

[1] http://www.w3.org/TR/REC-xml/#sec-guessing
[2] http://www.w3.org/TR/xhtml1/#C_1

-- Lachlan Hunt http://lachy.id.au/
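The Appendix F rule quoted above reduces to a small predicate. This is a sketch that ignores byte-order-mark details and encoding aliases, treating the encoding name as a simple label:

```javascript
// Per XML 1.0 Appendix F (as discussed above): an entity may omit the
// XML encoding declaration only if it is accompanied by external
// encoding information (e.g. an HTTP charset parameter) or is encoded
// as UTF-8 or UTF-16. BOM handling and encoding aliases are ignored
// in this sketch.
function needsEncodingDeclaration(encoding, hasExternalInfo) {
  if (hasExternalInfo) return false; // e.g. charset on Content-Type
  return !/^utf-(8|16)$/i.test(encoding);
}
```

This also makes the XHTML 1.0 Appendix C wording's omission visible: an ISO-8859-1 document with no XML declaration is fine as long as external encoding information accompanies it.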
Re: [whatwg] [WA1] Specifying Character Encoding
Anne van Kesteren wrote: Also, the WHATWG shouldn't say anything about the MIME type you MUST use for XML, IMHO.

Agreed, but there's nothing wrong with stating that this version of XHTML must not be served as text/html, and that an XML MIME type must be used instead, without specifying exactly which one. The current wording provides application/xhtml+xml and application/xml as examples only, which I think is acceptable.

-- Lachlan Hunt http://lachy.id.au/
Re: [whatwg] [html5] tags, elements and generated DOM
Henri Sivonen wrote: On Apr 7, 2005, at 09:58, Lachlan Hunt wrote: There's no reason why a full conformance checker couldn't be based on OpenSP.

It would be prudent not to use OpenSP, in order to avoid accidentally allowing SGMLisms that are alien to real-world tag soup.

If I ever get around to writing any form of conformance checker, true SGML validation (most likely using OpenSP) or XML validation (probably using Xerces or another XML parser) is at the top of my list. Personally, I probably wouldn't make use of a full conformance checker too often during my normal publishing process, as I understand semantic documents and most likely wouldn't end up writing non-conformant documents in that regard anyway. However, I do make mistakes and forget to close elements, misspell attributes and tag names, or whatever, in which case an SGML validator catches most of those mistakes for me. Yes, I know there are some things, like conditionally required attributes, that cannot be expressed by a DTD, but that doesn't make _true SGML or XML_ validation any less of a *very useful conformance tool*.

In fact, it would probably be a good idea for them to do so, since then they'll also be real validators too, which is part of the conformance requirements.

I don't think SGML validation is part of the WHATWG conformance requirements.

Considering it seems to be part of the conformance criteria:

| Conformance checkers *must* verify that a document conforms to the
| applicable conformance criteria described in this specification...
|
| The term validation specifically refers to a subset of conformance
| checking...
|
| 1. Criteria that can be expressed in a DTD.

validation is a critical part of conformance checking.

I thought Hixie has specifically said he doesn't bother with DTDs.

Just because his authoring practices may not involve their use doesn't mean many other authors don't make use of them. As a real use case for DTD validation, consider this.
There are increasing calls for CMSs to produce strictly conformant markup. There have been many complaints that such conformance is not enforced, which results in many invalid and non-conformant websites. Users should not be required to check all of these conformance criteria manually before submitting content through a CMS, as experience shows that simply doesn't happen. If CMSs are ever going to enforce strictly conformant code, then DTD validation will be a core component of that process. Why re-invent the wheel when a perfectly suitable and proven method already exists? Experience has shown, with all the lints available, that validation/conformance checking without a DTD is often incorrect, which makes such tools nearly useless for conformance checking. This is why HTML must remain an application of SGML, why the XHTML version *must* be a *valid* application of XML, and why DTDs are so important. The only thing we are waiting for in this field is CMSs that actually do enforce conformance, which we won't have a chance of getting if DTDs (or schemas for XML) are not retained.

-- Lachlan Hunt http://lachy.id.au/
Re: [whatwg] p elements containing other block-level elements
Ian Hickson wrote: (Note that HTML1 was not an SGML application; HTML2 was retrofitted into the SGML world for theoretical reasons, but the real world never really caught up with that theory.)

Yes, I'm aware of what HTML1 was (Martin Bryan explains it well [1], for anyone that doesn't know) and, IMO, it was a very good decision to formalise it as SGML. However, as you say, the real world never caught on and, sadly, probably never will (at least not in any mainstream browser). :-(

In practice, though, the reason is the same as for MathML: the XML parser is a generic parser, the HTML parser is not.

I assume you mean tag-soup parser? :-) Yes, I understand the problem.

We can change content models and add concepts like namespaces to the XML parser easily; we can at best add new elements when it comes to the HTML parser.

Fair enough. I guess this is one reason why XHTML is so good: the mistakes of the past with SGML/HTML won't be repeated, and progress won't be held up so much by buggy browsers. It's just a pity it's not yet supported in IE. I'm also starting to understand why you don't consider HTML an application of SGML, although I still don't like it. :-|

[1] http://www.is-thought.co.uk/book/home.htm

-- Lachlan Hunt http://lachy.id.au/
Re: [whatwg] p elements containing other block-level elements
Ian Hickson wrote: To get truly nested elements, only the XML parser would be an option. The question is whether: a) We don't allow any of this. I don't think progress should be held up any more than it already is by broken browsers, so let's not let a limitation with HTML affect an XHTML implementation. b) We allow it in XML and the DOM but disallow it in the HTML serialisation Yes, this makes the most sense to me. c) We allow it in XML and the DOM and say that authors may do it in HTML but that parsers must (effectively) misunderstand them It makes no sense to allow authors to do something and then force all implementations to remain intentionally broken. d) We allow it in XML and the DOM and say that authors may use the object hack to use it in HTML This is exactly the same as b, except it's encouraging the use of non-semantic hacks. I don't see any other realistic options. I don't like c. I'm reluctant to do a. For me, that leaves b and d. That leaves b as the only valid choice, IMHO. Of b and d I prefer b. That, along with embedding MathML and other XML vocabularies, would be a reason to migrate to XML, if we consider that a good thing. Absolutely! Given the incredibly broken SGML/HTML implementations that will never get any better, migrating to XML is certainly the best way to actually progress into the future. I'm sure no-one wants to be stuck with HTML forever, which is really more of a lost cause when it comes to any real enhancements. The content model for any block element allowed inside paragraphs should be tweaked to not allow paragraphs when it's inside a paragraph, because nested paragraphs don't make sense. Agreed. (Including inside nested tables and lis, I assume? But obviously excluding inside nested blockquotes.) I don't think the spec should limit nested content too much because, as is shown by the <p><blockquote><p>...</p></blockquote></p> example, there are valid reasons to nest paragraphs, and possibly others we haven't thought of. 
Also, as history has shown, HTML4 never thought lists within paragraphs would be needed, though they are now allowed. By placing too much restriction on the content models, we risk locking out legitimate use-cases which we haven't thought of, but which authors may find in the future. I'm not saying we should just allow anything within anything, but we should be careful about being too restrictive. -- Lachlan Hunt http://lachy.id.au/ http://GetFirefox.com/ Rediscover the Web http://GetThunderbird.com/ Reclaim your Inbox
[whatwg] [WF2] Fixing Repetition Template Degradation in IE without Scripting
Hi, I've just done some experiments with the repetition templates, and tried to devise a way to help IE end up with usable submit buttons, rather than useless push buttons. The solution I came up with involves a little (read: extremely evil and dirty) hack with IE's proprietary conditional comments. However, it doesn't quite work as expected, and I thought someone may be able to figure out a way to extend this idea a little more to make it work. For the remove button, this displays the correct button for IE 5 and 6. !--[if IE 1]--button type=remove name=remove value=[player] Remove/button!--![endif]-- !--[if IE]button name=remove value=[player] type=submit Remove/button![endif]-- Note: This: !--[if IE 1]--...!--![endif]-- is a validating version of the so-called downlevel-revealed conditional comment: ![if expression] HTML ![endif] (which should probably be nick-named uplevel-revealed :-)). For the add button, this code works as intended, but is still buggy like remove (as I will explain later): !--[if IE 1]--button type=add name=add value=add Add Player/button!--![endif]-- !--[if IE]button type=submit name=add value=add Add Player/button![endif]-- Ok, the problem with the solution is that IE still sends the name/value pair for both the add and remove buttons regardless of which one was clicked (ie. successful) and sends the button label as the value, rather than the value attribute. This can be seen by looking at the resultant query string from the submission: ...remove=Remove+IE&add=Add+Player+%28IE%29 This seems to work as intended for the add button because the add name/value pair must be detected and used in the server-side script before the remove. So, it ends up adding a field regardless of which button was pressed. The only solution I could think of was to change the buttons to inputs, however the buttons would then be labeled with the text from the value attribute (ie. [player] and add for the remove and add buttons, respectively). 
And changing that value attribute, at least for the remove button, would stop the server-side script from working correctly to remove the correct fields. Lastly, for anyone wondering how this solution would work after IE7 is released, and if IE fixes their button implementation, then the conditional comments can be altered as follows: Change: !--[if IE 1]-- To: !--[if IE 7]!-- -- -- Change: !--[if IE] To: !--[if lt IE 7] However, even without these alterations the IE 5/6 version of the buttons should still work in IE7 anyway. Without the special !-- -- -- pseudo-comment [1], IE7 would end up outputting the -- from the original if IE 1 version. The 3 double-hyphens -- ensure that the entire comment remains valid in SGML. [1] I called it a pseudo-comment because it's not really a full comment in SGML terms, it only looks like one. The real SGML comment is the full thing including: !-- ... -- -- -- -- Lachlan Hunt http://lachy.id.au/ http://GetFirefox.com/ Rediscover the Web http://GetThunderbird.com/ Reclaim your Inbox
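The double-submission quirk described above can be sketched as server-side logic. Below is a minimal, hypothetical Node.js sketch (the parameter names come from the quoted query string; the function and its return values are my own illustration, not code from the thread):

```javascript
// Sketch of the server-side ordering problem: IE 5/6 submit the
// name/value pairs for BOTH buttons, so a handler that checks "add"
// first will add a row no matter which button was actually clicked.
function handleSubmission(queryString) {
  const params = new URLSearchParams(queryString);
  // "add" must be checked before "remove" for the add button to work
  // at all in IE, since both pairs are always present there.
  if (params.has("add")) return "add-row";
  if (params.has("remove")) return "remove-row";
  return "no-op";
}

// The query string quoted in the message: both buttons are "successful",
// so the remove click is misinterpreted as an add.
console.log(handleSubmission("remove=Remove+IE&add=Add+Player+%28IE%29"));
// → "add-row"

// A conforming browser sends only the clicked button's pair:
console.log(handleSubmission("remove=%5Bplayer%5D")); // → "remove-row"
```

This just restates the bug in code: with both pairs present, the add branch always wins, which is why the message says a field gets added regardless of which button was pressed.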
[whatwg] [WA1] Title Element Content Model
Hi, The current draft states [1]: | In HTML (as opposed to XHTML), the title element must not contain | content other than text and entities; user agents must parse the | element so that entities are recognised and processed, but all other | markup is interpreted as literal text. I think that should be changed to state: ... but, for backwards compatibility, all other markup (such as elements and comments) should be interpreted as literal text. I don't think intentionally broken behaviour should ever be a strict requirement, only a strong recommendation for backwards compatibility. Although, are there any valid reasons why this requirement must be retained, even in standards compliant mode? Would many sites break if it were fixed in standards mode? | In XHTML, the title element must not contain any elements. I disagree with this. XHTML 2 has been updated to allow markup within the title element and I think this XHTML should, too. Since we can change the content models for XHTML, I see no reason not to. Here are some use cases I can think of: <title><span xml:lang="pt-BR">Brasil Futebol</span>: Brazil - Football World Champions</title> (Real example I found [2], though I added the language markup, and the primary language appeared to be en). <title>Eric's Archived Thoughts: <em>Really</em> Undoing html.css</title> (Note: Was a real example from meyerweb, but the WP bug that initially allowed it seems to have been fixed. This was also an example of why the requirement for HTML parsers to treat the element as plain text (at least in standards mode) is bad [3]) <title><abbr title="Hypertext Markup Language">HTML</abbr> Tutorial</title> Although current visual browsers may not be able to show things like emphasis or abbr expansions (eg. tooltips) visually in the window's title bar (though, that would probably depend on the OS), non-visual UAs (eg. aural) may still be able to indicate emphasis, expand abbreviations, etc. (eg. 
when speaking it). [1] http://www.whatwg.org/specs/web-apps/current-work/#the-title [2] http://www.the-football.com/brasil_2.html [3] http://meyerweb.com/eric/thoughts/2004/09/15/when-blog-software-attacks/ -- Lachlan Hunt http://lachy.id.au/ http://GetFirefox.com/ Rediscover the Web http://GetThunderbird.com/ Reclaim your Inbox
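If markup were allowed in title, a serialiser targeting legacy HTML parsers could simply flatten it to plain text for backwards compatibility. A minimal sketch, assuming a naive tag-stripping approach (the function name and regular expression are illustrative, not from any spec):

```javascript
// Hypothetical sketch: flatten a marked-up title (as proposed above for
// XHTML) into the text-only form legacy HTML parsers expect. The regex
// is naive and assumes no ">" inside attribute values.
function flattenTitle(markup) {
  return markup
    .replace(/<[^>]*>/g, "")   // drop all element markup
    .replace(/\s+/g, " ")      // collapse whitespace, as Gecko does for class
    .trim();
}

console.log(flattenTitle(
  '<abbr title="Hypertext Markup Language">HTML</abbr> Tutorial'
)); // → "HTML Tutorial"
```

A non-visual UA could instead keep the markup and use it (e.g. expanding the abbr), while legacy serialisation would use only the flattened text.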
Re: [whatwg] p elements containing other block-level elements
Anne van Kesteren wrote: Ian Hickson wrote: <p> ... <ol> <li>...</li> </ol> </p> If OL is an inline element here, sure. Whether or not it is rendered as block or inline within paragraphs can be quite easily handled with CSS. Lists should not be classified as block level or inline level elements within the spec. ol, ul { display: block } li { display: list-item; } p ol, p ul, p li { display: inline } -- Lachlan Hunt http://lachy.id.au/ http://GetFirefox.com/ Rediscover the Web http://GetThunderbird.com/ Reclaim your Inbox
Re: [whatwg] p elements containing other block-level elements
Ian Hickson wrote: On Thu, 7 Apr 2005, Henri Sivonen wrote: The problem with allowing the HTML flavor and XHTML flavor to diverge is that one could no longer use HTML and XHTML serializations interchangeably in apps that do not suffer from the HTML DOM legacy and otherwise could treat the HTML-XHTML distinction as something you deal with on the IO boundary. I use Java XML tools for producing HTML. I use XHTML internally and serialize as HTML. This works great with XHTML 1.0 and HTML 4.01. If the HTML flavor of WHATWG HTML and the XHTML flavor diverge, I'd need to spec that only an HTML-compatible subset of WHATWG XHTML that doesn't nest elements in ways prohibited on the text/html side may be put into an app that outputs text/html. I don't think it's necessary to make HTML and XHTML diverge with relation to the element content models. I think the spec should just provide notes about backwards compatibility for older UAs that won't support such constructs properly; however, they will degrade gracefully. New UAs could be updated to handle <p><ol>...</ol></p> correctly (when an HTML5 doctype is used) as text/html. So, this would produce the following DOM for a current UA: * (any parent element) +- P +- OL (as siblings) But for a new UA, it would produce (just like an XHTML UA will): * +- P +- OL (with OL a child of P) However, I realise that may cause issues with supporting existing HTML4 documents, as it would require further DOCTYPE sniffing (or a proper SGML implementation that reads the DOCTYPE) to produce the correct DOMs in each case, but it might be a solution worth considering. One possible hack is to say that when you serialise this kind of stuff to HTML, you have to wrap the problematic elements in object tags, so that for example this XML: ... <p> <object><ol>...</ol></object> </p> Isn't that just abuse of a somewhat semantic element (representing external content that should be embedded within the document) for a completely non-semantic hack? If it were this, it would be more acceptable: <p> <object type="image/png" data="list.png"><ol>...</ol></object> </p> On the other hand, there already are other big differences between HTML5 and XHTML5 (or whatever we end up calling them). Calling it XHTML5 would be very confusing, as people won't understand that this version is on a track and for a purpose that is different from XHTML2. I'd call it something like (X)HTML Applications 1.0 (maybe it could be shortened to XHTML Apps and HTML Apps 1, or (even shorter) HAppy 1.0). That name would, of course, include web-apps, web-forms and web-controls. For instance, in the XHTML variant you can use embedded MathML. Is this just a case like that? I don't think so. MathML can't be used in HTML because there are no namespaces. Whereas, the only reason <p><ol>...</ol></p> can't be used in HTML is for bugwards compatibility. -- Lachlan Hunt http://lachy.id.au/ http://GetFirefox.com/ Rediscover the Web http://GetThunderbird.com/ Reclaim your Inbox
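The object-wrapping serialisation hack discussed above could look something like this. A naive, hypothetical sketch of the idea (the string-based wrapping rule and the function are my own illustration, not a proposed algorithm, and a real serialiser would operate on the DOM rather than on strings):

```javascript
// Sketch of the proposed hack: when serialising to HTML, wrap block-level
// children of <p> in <object> so legacy parsers keep them inside the
// paragraph instead of implying </p> before them.
function wrapBlockChildren(html) {
  // Naively wrap any <ol>/<ul>/<blockquote> run in an <object>. This does
  // not check that the match is actually inside a <p>; it only shows the
  // transformation itself.
  return html.replace(
    /(<(?:ol|ul|blockquote)\b[\s\S]*?<\/(?:ol|ul|blockquote)>)/g,
    "<object>$1</object>"
  );
}

console.log(wrapBlockChildren("<p>Steps: <ol><li>one</li></ol></p>"));
// → "<p>Steps: <object><ol><li>one</li></ol></object></p>"
```

As the message argues, this abuses object's semantics; the sketch only makes the mechanics of option d concrete.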
Re: [whatwg] [html5] tags, elements and generated DOM
Anne van Kesteren wrote: Lachlan Hunt wrote: HTML5 will most likely stop the pretense of HTML being an SGML application. +1. -1 and the mostly undefined error handling, what about HTML 5 will be so incompatible with SGML as to warrant such a decision? One example: http://lists.whatwg.org/htdig.cgi/whatwg-whatwg.org/2005-January/002993.html http://lists.whatwg.org/htdig.cgi/whatwg-whatwg.org/2005-January/002999.html http://lists.whatwg.org/htdig.cgi/whatwg-whatwg.org/2005-January/003001.html Documents that contain </ within script and style elements, where it does not begin </script> or </style> respectively (or the SHORTTAG version </), are broken. I see no problem with defining error handling for broken documents, but no need to break conformance with SGML in the process. HTML is an application of SGML, regardless of all the broken implementations and documents we currently have, and I don't want to see that changed. -- Lachlan Hunt http://lachy.id.au/ http://GetFirefox.com/ Rediscover the Web http://GetThunderbird.com/ Reclaim your Inbox
Re: [whatwg] [html5] tags, elements and generated DOM
Olav Junker Kjær wrote: Lachlan Hunt wrote: see no problem with defining error handling for broken documents, but no need to break conformance with SGML in the process. HTML is an application of SGML, regardless of all the broken implementations and documents we currently have, and I don't want to see that changed. An innocent question (no flamewar intended): Of course not, I try not to flame. :-) What is the benefit of having HTML defined as an application of SGML? So that it may be processed with SGML tools, and validated with an SGML based validator, and possibly even generated using XSLT. (I know XSLT can generate HTML4, but I don't know if it would be able to do HTML5 or not, even if it did remain an SGML application). Even if it is decided that HTML 5 is not formally an application of SGML, it must at least remain fully compatible with SGML, and thus a conformant HTML 5 document must be a conformant SGML document. XHTML variants of HTML 5 must be conformant XML documents instead, though I noticed that is not the case with square brackets in ID attributes in section 3.7.2 of WF2 (are there no other characters that can be used instead?). So, I guess, there's already no hope of HTML 5 conforming to anything. However, I would like to request that any defined error handling behaviour designed to cope with malformed documents that directly violates SGML be made optional (but recommended), so that a user agent with a conforming SGML parser may still conform to HTML 5. -- Lachlan Hunt http://lachy.id.au/ http://GetFirefox.com/ Rediscover the Web http://GetThunderbird.com/ Reclaim your Inbox
Re: [whatwg] [html5] tags, elements and generated DOM
Anne van Kesteren wrote: Lachlan Hunt wrote: Olav Junker Kjr wrote: Lachlan Hunt wrote: Validators should not be non-conformant simply because they only do their job to validate a document and nothing else. I don't see any reason why such a statement needs to be included at all. I don't see anything about validators. I only read about Conformance checkers. In the note in that section [1]: | Conformance checkers that only perform validation are non-conformant, In fact, now that I've read it again, it seems rather contradictory. Just before the note, it states: | Conformance checkers are exempt from detecting errors that require | interpretation of the author's intent (for example, while a document | is non-conformant if the content of a blockquote element is not a | quote, conformance checkers do not have to check that blockquote | elements only contain quoted material). I would argue that conformance requirements that cannot be expressed by a DTD *are* constraints that require interpretation by the author. Therefore, that section seems to be saying that validators are exempt from checking some things, but are non-conformant for not checking them anyway. -- Lachlan Hunt http://lachy.id.au/ http://GetFirefox.com/ Rediscover the Web http://GetThunderbird.com/ Reclaim your Inbox
Re: [whatwg] [html5] tags, elements and generated DOM
Anne van Kesteren wrote: Lachlan Hunt wrote: | Conformance checkers that only perform validation are non-conformant, So? That doesn't make it a validator. What is a validator, if it is not a form of conformance checker that only performs validation? Or, the other way around, what is a conformance checker that only performs validation, if it is not a validator? A conformance checker might do things validators do too, but that doesn't make it one. I believe such conformance checkers are often called lints and they are usually not true validators, despite what many claim, so you are correct in that a conformance checker may not be a validator. But, from what I understand of the wording in the spec, a validator is a form of conformance checker. Basically, metaphorically speaking, it's like a square is a rectangle, but a rectangle is not always a square. In fact, now that I've read it again, it seems rather contradictory. How? Did I not explain it well enough before? See below. I would argue that conformance requirements that cannot be expressed by a DTD *are* constraints that require interpretation by the author. Not really. Yes, really. Think about: http://annevankesteren.nl/archives/2003/09/invalid-after-validated Exactly, the conformance constraints violated in those examples cannot be expressed in an XML DTD (some can, and are, by the HTML4 DTD though), and require interpretation by the author. This merely illustrates the difference between valid and conformant. Therefore, that section seems to be saying that validators are exempt from checking some things, but are non-conformant for not checking them anyway. 
That is how the spec is contradictory, except s/validators/conformance checkers/ and with some things meaning errors that require interpretation of the author's intent Because, if I am understanding correctly and a validator is a form of conformance checker, a validator cannot check constraints that are not expressed in the DTD and require them to be interpreted by the author. Therefore, validators are exempt from checking such constraints, but are non-conformant for not checking them anyway, as stated in the note. (well done if you are not totally confused by that, I tried to make it as clear as possible :-)) Note that this is about more than just validating and isn't about validators. Yes, but Conformance checkers that only perform validation are, unless I am mistaken, validators. Hixie, can you please clarify what that means, if I am mistaken? -- Lachlan Hunt http://lachy.id.au/ http://GetFirefox.com/ Rediscover the Web http://GetThunderbird.com/ Reclaim your Inbox
Re: [whatwg] [html5] tags, elements and generated DOM
Ian Hickson wrote: On Tue, 5 Apr 2005, Anne van Kesteren wrote: <script type="text/javascript" src="bar"></script> <title>Foo</title> ...? If I am not mistaken: <html><head><script>...</script><title>...</title></head><body></body></html> I believe you are mistaken. A conforming SGML parser will not imply the body element without any content to make it do so. Is there a BODY element in this document (or, is there always a body element?): <style type="text/css"> body { background: lime } </style> ... or this: <title>Bar</title> The body will always be implied, though. Not in a conforming SGML parser, though it seems to be in Mozilla, Opera and IE, as I checked using your DOM viewer [1]. Although Opera seems to have a bug in standards compliant mode (at least, according to the DOM viewer script) because neither the head nor body elements appeared in the DOM using this markup: <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd"> <title>Foo</title> <script type="text/javascript" src="bar"></script> However, if the body element were to be automatically implied regardless, then the same would be true of the tbody element: since both are required elements of html and table, respectively, and both have optional start- and end-tags, the rules for both must be the same. Neither Mozilla nor Opera implies the missing tbody element within <table></table>, although IE does. However, OpenSP does not imply the missing elements in either case. The only documentation I could find that supports this, given the short amount of time I have to look, is this paragraph from section 9.2.3 of Martin Bryan's SGML and HTML Explained [2] that explains how the associated example should be parsed. | The start-tag can be omitted because the absence of this compulsory | first embedded subelement could be implied by the parser from the | content model... As soon as it sees a character other than a | start-tag delimiter (<) it will recognize that the character should be | preceded by [the start tag]. 
(For backwards compatibility with legacy parsers, the head probably won't be.) The head element seems to be implied by Mozilla and IE. Opera and OpenSP correctly don't imply the missing head element. [1] http://www.hixie.ch/tests/adhoc/html/parsing/compat/viewer.html [2] http://www.is-thought.co.uk/book/sgml-9.htm#Omitting -- Lachlan Hunt http://lachy.id.au/ http://GetFirefox.com/ Rediscover the Web http://GetThunderbird.com/ Reclaim your Inbox
Re: [whatwg] [WF2] Objection to autocomplete Attribute
Hallvord Reiar Michaelsen Steen wrote: On 29 Mar 2005 at 11:01, James Graham wrote: Mikko Rantalainen wrote: My bank uses one-shot passwords for web access How does that work? Are you issued a new password every single time you login? How on earth do you remember it if it's always changing? Which seems to be an ideal use-case for the autocomplete attribute... Indeed, I've recently asked one of my banks to add autocomplete=off because there is no point in having the browser ask users if it should remember a once-only password :-) That's why users can select Never for this site (or equivalent), so they're not prompted each time. -- Lachlan Hunt http://lachy.id.au/ http://GetFirefox.com/ Rediscover the Web http://GetThunderbird.com/ Reclaim your Inbox
Re: [whatwg] [WF2] Objection to autocomplete Attribute
Ian Hickson wrote: That's why users can select Never for this site (or equivalent), so they're not prompted each time. Having the site just do it seems like better UI to me. Perhaps, for some users, but I would like to be notified every single time such a decision is made. I find it really bad UI when I get prompted on some sites, but not on others. Hmmm... I wonder if adding this to my user stylesheet could be useful for giving me some notification. [autocomplete] { ... } -- Lachlan Hunt http://lachy.id.au/ http://GetFirefox.com/ Rediscover the Web http://GetThunderbird.com/ Reclaim your Inbox
Re: [whatwg] [WF2] Objection to autocomplete Attribute
Ian Hickson wrote: On Mon, 21 Mar 2005, Matthew Raymond wrote: Actually, now that I think about it, why do we need to have a spec saying that it's not deprecated or that it should be non-trivial to deactivate if the banks are going to blackmail UAs to support it? Because to be useful, specs have to be realistic. Yes, they should be realistic in documenting what markup should and should not be supported, but the spec is crossing the line by dictating what options should and should not be trivially accessible in a user agent. I recommend at least moving that statement to the note at the end of the section, perhaps changing it to something like this: # A UA may allow the user to disable support for this attribute. Support # for the attribute *should* be enabled by default, as there are # significant security implications for the user if support for this # attribute is disabled. (Note: the *should* in the second sentence above has been changed from *must*, for the same reason that specs must not dictate user-hostile behaviour, and to allow for any user agent vendor to correctly decide to disable support by default (as *there are valid reasons* to do so) and not violate this specification as a result.) And the note below could become: # Note: In practice, this attribute is required by many banking # institutions, who insist that UAs with auto-complete features # implement it before supporting them on their Web sites. For this # reason, it has been implemented by most major Web browsers for many # years and it is advised that the ability to disable support should not # be trivially accessible. Although I still recommend leaving out the statement about disabling support and strongly object to the inclusion of autocomplete, it seems I've already been overruled on those requests, so I'm willing to compromise. 
However, I would like to point out that user agents that don't allow the user to override autocomplete are in direct violation of the User Agent Accessibility Guidelines 1.0, Guideline 5 [1]: | Guideline 5. Ensure user control of user interface behavior | ... | Ensure that the user can control the behavior of viewports and user | interface controls, including those that may be manipulated by the | author (e.g., through scripts). Although the remainder of the guideline mainly discusses the viewport, a form field is still a user interface control [2], and thus I believe this guideline applies. In a previous post, Ian Hickson also wrote: Deprecating the feature would indicate that there is a chance the feature will be dropped in a future version, which there isn't. Why isn't there a chance it will be removed? I accept it being included as a way to document what UAs should support, but not as an attribute that authors should ever use; and I hope, if this spec is ever accepted by the W3C or another standards organisation, that it is removed before it becomes anything official. Those of us that often contribute to peer support forums, newsgroups, mailing lists, etc. for authoring HTML have enough difficulty convincing some authors (newbies) not to use other user-hostile extensions, such as disabling IE's image toolbar, Smart Tags (with the proprietary meta element values, though smart tags were never implemented in IE anyway), Google's AutoLink, controlling window sizes, status bars, toolbars, disabling context menus, etc. Do you realise how difficult it is going to become, and thus how much more inaccessible the web will become, if such authors find that this attribute is approved by a standards organisation? It would also make any site using the feature non-conformant, So what? Any site using it now is non-conformant, what difference does it make? which is pointless: the sites are going to use these features regardless, why make people have to violate the spec to do so. 
Then why is the size attribute deprecated now? Sites are going to use it regardless of the ability to specify such details using stylesheets, just like people continue to use font, b, etc, why make people have to violate the spec to do so? The point is: Documents must not use deprecated features. User agents should support deprecated features. That statement, from appendix C, applies to both the size and autocomplete attributes equally, so please deprecate autocomplete. [1] http://www.w3.org/TR/UAAG10/guidelines.html#gl-user-control-ui [2] http://www.w3.org/TR/UAAG10/glossary.html#def-ui-control -- Lachlan Hunt http://lachy.id.au/ http://GetFirefox.com/ Rediscover the Web http://GetThunderbird.com/ Reclaim your Inbox
Re: [whatwg] [WF2] Objection to autocomplete Attribute
Ian Hickson wrote: On Sat, 12 Mar 2005, Lachlan Hunt wrote: I realise I may be a little late with this issue, since WF2 seems to be fairly stable, but nevertheless I would like to note my objection to the inclusion of the autocomplete attribute [1]. The autocomplete attribute is already implemented in user agents. There's nothing we can do about it. I included it in the spec simply so that it is at least defined somewhere, instead of being just something people have to Know About without being documented anywhere. Then, please at least deprecate it. If it's only being defined to help with interoperable implementations, that's fine, but its use should be discouraged as much as possible, therefore it should be deprecated. The fact of the matter is, banks blackmail vendors into supporting this feature. Not much WHATWG can do about this. That's no reason to give in to their blackmail. As well as being deprecated, UAs should also be allowed to let the user deactivate this feature easily, despite what the current draft says about that matter. ie. Remove ... and the ability to disable support should not be trivially accessible from the spec # The off value means that the UA must not remember that field's value. That should also be changed from must not to should not to allow for a user to override this decision. -- Lachlan Hunt http://lachy.id.au/ http://GetFirefox.com/ Rediscover the Web http://GetThunderbird.com/ Reclaim your Inbox