First of all, let me state that I wasn't (and am not) too strongly concerned about the following issues. These were either formal questions, or impromptu thoughts inspired by the dialogue, perhaps not weighed carefully enough when not felt strongly enough. I guess that was in no way clear in my mails. Let me also state that I'm definitely not aiming to argue, but I do disagree with some conclusions.

Ian Hickson wrote:

On Fri, 14 Nov 2008, Pentasis wrote:
1) Just because it makes sense to a human (it doesn't to me), does not mean it makes sense to a machine.

HTML is ultimately meant for human consumption, not machine consumption. Humans write it (sometimes with the help of a machine), humans read it (almost always with the help of a machine). We don't need it to make sense to a machine, we just need the machine to do what we tell it to so that it makes sense to us.



Don't you really consider the machine's role as "central" in this process? HTML is the way (= the *language*) you tell the machine what to do so that it makes sense to human users. You've given a bare definition of a *computer language*, but a computer language is for machine consumption! HTML is for human use (= the author/web developer) but for machine (= the UA) consumption, in the very same way that C++ is for human use (= the programmer) but for machine (= the compiler) consumption, since both are computer languages; that the former is a specialized language and the latter a general-purpose one is in no way relevant from this point of view, since both are computer languages *by definition* (not my own, of course...). Only the machine's output is for human (end user) consumption. How is a human user supposed to consume an HTML document if a machine doesn't consume the HTML _code_? And how is a machine supposed to consume HTML code if it isn't designed with machine constraints (e.g. context-freedom) in mind _first_, and authors' needs in second place? :-)

On Tue, 25 Nov 2008, Calogero Alex Baldacchino wrote:
[...]

Could you give a concrete example? In all the examples I can think of, there is no problem that I can see. For example this:

   <p><b>H</b>ello!</p>

...would be fine in an AT, even if the AT went "bing" as it was saying the first part of the word.



What about <p><b>A</b>fter that....</p>? If the "bing" followed the <b> content (the same way a radio advertisement speaker could read out "Intel Inside" followed by the usual jingle "do dooDOOdooDO"), wouldn't that end up as a difficult-to-understand sound? [for a 'bing' preceding the <b> content, just shifting the tags inside the word causes the same "problem"] Anyway, in a following mail I agreed that an AT might treat such cases as plain text by default, just ignoring "in-word" tags whose semantics may alter speech (though specifying that certain semantics should be applied only to whole words by non-visual UAs wouldn't be an awful idea, I think). Perhaps that wasn't clear.

However, I think that a solution, at least a partial one, can be found for the rendering concern (and I'd push for this to be done anyway, since there are several new elements defined for HTML 5).

Which rendering concern?



The one raised against my (impromptu and abandoned) idea of new semantic elements: backward compatibility with older browsers unaware of such new tags (though it's the very same problem for the new HTML 5 elements).

[...]

Actually other than the validator, user agents ignore the DTD altogether.



[other points like the above]

I've acknowledged in other mails that my assumptions were definitely wrong, and I apologized for that, as far as I remember (did I forget to? If so, I apologize now!). Then the discussion moved towards the suitability of a kind of "foundation style sheet" to handle at least the presentation of new elements, and to hide those whose semantics might be difficult to cope with in older browsers (such as a menu constrained to be a contextual menu: a default CSS wouldn't be enough to cope with that), as a graceful degradation.

[my own personal conclusion, in my humble opinion, was that the result might be unreliable and definitely browser-dependent -- for instance, the IE family seems to accept a 'custom' tag with its 'custom' attributes, creating a 'proper' (as far as possible) HTML element to which styles are correctly applied too, BUT any content inside the unknown tags is extracted and put inside the outer container, as if it were misplaced - a partial solution, though apparently not working in IE8, consists of adding a script that creates an element with the 'custom' tag name by calling document.createElement() before the unknown tags are parsed; but this tells me a "foundation style sheet" is not a (fully) working solution _per_se_, though desirable for consistent cross-browser rendering].
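The script workaround I described can be sketched roughly as follows ("mytag" is an invented element name, purely for illustration; this is the technique as I observed it, not a guaranteed recipe):

```html
<!-- Hypothetical sketch of the workaround described above: calling
     document.createElement() with the unknown tag name *before* the parser
     encounters it makes older IE build a proper element for it, so that
     styles apply and the element's content is no longer hoisted out into
     the outer container. -->
<script>document.createElement('mytag');</script>
<style>mytag { display: block; font-weight: bold; }</style>
<mytag>This content stays inside the element in older IE.</mytag>
```

Without the createElement() call, older IE parses <mytag> as an empty unknown node and spills its text content into the parent, so the style rule has nothing to apply to.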

Let's come to the non-typographical interpretation a today's UA may be capable of, as in your example about Lynx. This can be a very good reason to deem <small> a very good choice. But are we sure that *every* existing user agent can do that? If the answer is yes, we can stop here: <small> is a perfect choice. Better: <small> is all we need, so let's stop bothering each other about this matter. But if the answer is no, we have to face a number of user agents needing an update to understand the new semantics for the <small> tag; and so, if the new semantics can be assumed to be *surely* reliable only with new/updated UAs (that is, with those fully compatible with the HTML 5 specification), that's somewhat like starting from scratch, and consequently there is room for a new, more appropriate element.

All browsers handling <small> is better than some browsers handling <small>, certainly, but some browsers handling <small> is better than no browsers handling a new element. So I don't really agree with your reasoning here.



Well, I guess there are more chances of seeing new elements, whose semantics is felt to be necessary, implemented in new browsers, than of seeing elements that replace other elements with a somewhat close semantics. Thus I have to agree. :-)



On Tue, 25 Nov 2008, Calogero Alex Baldacchino wrote:
I'll start with an example. A few time ago I played around with Opera Voice. [...]

I don't think this browser bug is a good guide for language design.


Well, perhaps it was a plugin version; in any case I've never tried it out again. Still, that tells me that any unspecified behaviour may lead to bugs/bad choices/different choices in different UAs, and that might be a reason to consider a standard definition instead, IMHO.

Let me reverse this approach: what should an assistive user agent do with such a <b>M</b><small>E</small><b>S</b><small>S</small>? [...]

What should an AT do with <em>M</em><strong>e</strong>s<em>s</em>? Why is this any different?


First, the same thing I said above I had agreed to previously, that is, ignoring elements (not their content) whose use (by authors) is not easily bindable to non-visual semantics.

Second, isn't an AT some kind of non-visual UA? Shouldn't the semantics of <em>/<strong>/<b>/<i>/<whatever> be defined so as to cover non-visual behaviours for visual (mis)uses?

Third, anyway (but this is parenthetical), it was one reason I was considering (just on the fly) a kind of new element constrained, both in conformance and in parsing rules, to contain no less than one full word (this was part of what I called a "crazier" - and soon abandoned - "idea").

Here it is me not understanding. I think that any reason to offset some text from the surrounding one can be reduced to the different grade of 'importance' the author gives it, in the same meaning as Smylers used in his mails (that is, not the importance of the content, but the relevance it gets as attention focus - he made the example of the English "small print" idiom, and in another mail clarified that "It's less important in the sense that it isn't the point of what the author wants users to have conveyed to them; it's less important to the message.")

I strongly disagree, and urge you to compare the examples in the spec for <em>, <strong>, <b>, <i>, and <small>, which show very different cases. They are not equivalent. Only <strong> indicates a change in importance.


I'm inclined to disagree as well. For what concerns this subject, I've always used the term "importance" or "important" in a wider sense, as a synonym for "relevance" or "relevant" (which I suppose to be consistent with a linguistic analysis - but linguistics may become a minefield). From this point of view, I deem the use cases for "b" as expressing a different (and perhaps lesser) grade of importance, or a differently 'scoped' importance, than "strong" content (say, "strong" applies to a whole sentence/a whole message or a substantial part of it, while "b" indicates importance in a tighter scope, or something which is important as a reading key, to focus the reader's attention on the core of the message without necessarily expressing that core per se/alone).

For instance, a product name and brand in an advertisement, though suitable to be 'labeled' as "b" content as keywords, represent the only relevant part of the message, that is, the only part a company wants people to remember and wish to buy, while not being the whole core of the message per se (which is "remember product x, wish for product x, buy product x!!"); the rest of the message is a semiotic "trick" to make people remember the name and brand of a product. Furthermore, I really can't see how a keyword is not an important word in its message; OK, perhaps it doesn't (always) add important content, but it clarifies or otherwise focuses attention on something somehow important in the surrounding content (it is, or can be, important to understanding the overall meaning -- how clear would a hardware review be if it never mentioned the reviewed product's name? And the very first time it is mentioned, doesn't it add important content to the prose? Keywords are an important part of a message, and remarking on them is worth it, thus they can be emboldened).

That is, if "strong" offsets a span of text which is important per se, as the core of the message, or as a further message related to the rest but more important, "b" might be thought of as offsetting something which is important with respect to its relationship with the surrounding text (this is the way I interpret it, even with the current definition - for non-'decorative' purposes), expressing a different kind or degree of importance, not just a stylistic offset - which is (visual) presentational matter. Thus I'd consider it consistent to say that "b" offsets some phrasing content between plain text and "strong" content, and "i" offsets some content between plain text and emphasized content, just to trace a boundary for their semantics and try to avoid semantic overlap between close elements in some borderline contexts (visually their semantics overlap, though - I mean, each pair of <em>/<i> and <strong>/<b> is suitable for the very same visual presentation - and that's perhaps unavoidable to some extent, as are misuses). Elements' semantics is for UA consumption in the first place, because if a UA cannot handle it, the elements' content cannot be rightly presented to users.

And if I imagine an AT producing a "bing" before or after an emboldened keyword, I can't help imagining it doing the same for "strong" text, perhaps with a louder or longer or (slightly) different sound (a different voice saying something like "the following sentences are very important, take care" might be an alternative choice, but not for "strong" content surrounded by plain text -- important things can also be spoken about with a somewhat different inflection and speed, but the same goes for certain use cases of <b>/<i>).

[ Maybe this discussion is harmed by a cultural gap leading to different interpretations. For instance, my understanding is that the English concept of emphasis (mainly) covers a (quite noticeable) change in voice inflection causing a change in meaning, e.g. underlining different feelings; in my own language, inflection is one kind of emphasis, but emphasis per se is related to meaning, to relationships between words remarking on some concepts, e.g. a word used outside its context, as figurative, or a pompous term breaking out while discussing something, or an exaggeration, or a repetition of terms remarking a point (e.g. (translated) expressions like "never ever", "ever and ever" and so on, though leading to some speech emphasis, are emphatic per se and are said to bring emphasis into a sentence). Nevertheless, I think the current semantics (as well as the examples) for the <em> tag is quite well defined. ]

Anyway, I'd be tempted to prefer a "pure CSS" solution in most cases, as I think a (sighted) user can always disambiguate the meaning of bold/italicized text not only because of its stylistic offset, but also by means of other characteristics, such as punctuation, the overall meaning, or the presence of uppercase words (like 'WARNING'); whereas I don't really expect a (non-visual) user agent to be capable of coping with all the possible subtleties covered by emboldened/italicized spans of text based solely on their being emboldened/italicized (e.g. a product name might be read out differently in a review and in an advertisement, while a taxonomic name might be pronounced differently the first time it occurs in a scientific paper, but not in other occurrences, though always italicized -- a visual UA can just use italicized/emboldened text and leave any semantic interpretation to the human reader, while for a non-visual one I think aural CSS would be a better solution for fine tuning, but also a good way to mess everything up, and it is not supported by screen readers, perhaps rightly). Stating that <b>/<i> elements represent a somewhat middle value between normal text and <strong>/<em> elements might be a compromise from the point of view of a non-visual UA consuming them (at least as a well-defined, context-free, non-presentational (non-visual) semantics).

As for the quoted part, I really can't figure out anything more important than the 'small print' content in legal agreements and ads, in most cases, since it's the most important part to take care of to avoid bad surprises... I mean, in some cases - if not in most - a kind of stylistic offset is related not to the real importance of the overall message, but to the greater or lesser relevance an author wishes readers to give to some content, as a means to focus their attention one way or another, to mask real importance to some extent. This is basically why I like to think of 'relevance' (for authors' purposes) whenever and wherever I read 'importance', and also why I think the current semantics for <b> denotes 'importance' (as 'relevance') in a different manner than <strong>; but it is something remarkable (when it's not just pure style, of course), as the right keywords, along with <strong> content and/or, perhaps, italicized text, may lead to different interpretations, pointing out something looking somehow obscure or secondary at first glance as being strongly related to all the surrounding prose (e.g. in quoted content).


On Wed, 26 Nov 2008, Calogero Alex Baldacchino wrote:
Now I'll throw in an even crazier idea. [...]

Experience with aural-specific markup has been quite negative, in that people end up using it when they think it's appropriate but it is not, and they end up making the experience significantly worse for screen reader users. Media-specific markup is bad regardless of the medium, it seems.


Well, I called it 'crazy' ( :-P ) and don't want to push it any more. I was just thinking of possible misuses and (mainly) of rare use and scarce support. Didn't I point that out in following mails? Perhaps it wasn't clear anyway.

But I like to think of screen readers (and speech software in general) as a good example of non-visual user agents. A textual UA (like Lynx) may use different colors to represent different styles (bold/italic/font sizes); thus, once the end user is familiar with such a convention, any semantic disambiguation is up to them. But an aural technology must disambiguate any text before reading it, in order to make it meaningful for listeners (maybe that is possible only to a certain extent, but the spoken content must be as close as possible to its meaning to make people understand it). From this point of view, perhaps similar objections might be raised against every semantic element an author might misuse, and particularly against nested <strong>/<em> elements (though I still find their semantics quite well defined in general, and especially for non-visual UAs).

I mean, the semantics of nested <em> is quite perfect for authors' needs, but it can't be just a way to annotate an author's thought; it must be easy to handle for every UA in order to produce a meaningful output for the end user. In a visual presentation, unless specific CSS rules are provided, the same style (= italicized text) might be applied regardless of the nesting depth (or stopping at a certain level), because (human) readers would get the point by means of punctuation (e.g. repeated or alternated exclamation/question marks can suggest a different degree of emphasis and/or a different feeling), text formatting (e.g. uppercase letters standing for a louder voice), and, last but not least, the surrounding content, which gives the context of a sentence. But a UA, like any language transducer, cannot understand contexts; thus it can't easily adjust punctuation or change letter case, nor can it safely apply different styles (such as increasing/decreasing font weight and/or size, and/or underlining words, and so on) without risking a not-very-friendly layout (after a certain depth).

Instead, an AT (which might also make use of punctuation to some extent) might use nesting levels to tune voice pitch (and the like); that is, nested <em> elements provide a scale of emphasis (once the base inflection for a single <em> level is chosen, consecutive levels can be tuned proportionally). But scaling inflection with element depth might result in speech that is too loud, or not easily understandable, if elements are improperly nested. Thus screen reader developers might choose not to support nested <em>, just as they prefer not to support aural CSS: as a conservative approach, they don't trust authors' ability.
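To illustrate the proportional scaling I have in mind, here is a toy sketch (the function name, base values and cap are all hypothetical, not taken from any spec or screen reader):

```javascript
// Hypothetical sketch: derive a voice pitch for nested <em> elements from a
// base inflection, scaling proportionally with nesting depth, but clamping
// the depth so that improperly nested markup cannot push the voice into an
// absurdly loud or unintelligible range.
function emphasisPitch(basePitch, stepPerLevel, nestingDepth, maxDepth) {
  const depth = Math.min(nestingDepth, maxDepth);
  return basePitch + stepPerLevel * depth;
}

// One <em> level raises the pitch by one step...
console.log(emphasisPitch(100, 15, 1, 3)); // 115
// ...while runaway nesting is clamped at the cap.
console.log(emphasisPitch(100, 15, 7, 3)); // 145
```

The clamp is exactly the conservative choice described above: rather than trusting authors to nest sensibly, the UA bounds the effect of depth.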

Moreover, I believe that cross-media/media-neutral elements might require media-specific considerations (especially if cross-UA consistency and standard, predictable behaviours are a goal), the same way an IDL may require language-specific bindings (to solve peculiar problems, or as a guideline for similar languages). I think that where an element's semantics meets the content's meaning, the element's presentation determines the content's understanding. [IMHO]


On Sun, 30 Nov 2008, Calogero Alex Baldacchino wrote:
[...]

Could I possibly encourage you to split your paragraphs into smaller paragraphs?


Oops...... sorry........


In other words, my concern is not whether the current semantics of <b> and <i> is consistent with common uses of italicized and bold text, and with their conventional definitions (human-understandable, but perhaps not machine-friendly), but whether it is well defined (context-free) with respect to a user agent's capability to correctly interpret and present them. Visually that's painless, but non-visually (non-graphically) I quite feel the need for a greater context-freedom (at least binding them to some more precise semantics, against which to scale the semantics of <b> and <i> and make them more context-free).

I have to admit to having no idea what you are talking about here.


I'll try and explain that, as far as I'm able to.

What do <b> and <i> tell a UA? If the UA is visual, about the same as <strong> and <em>, and likewise if the UA is textual (e.g., using a darker color in place of emboldened text). What about an aural one? <em> says "switch to emphasis inflection" or the like, but what about <i> (and analogously <b> vs <strong>)? <i> covers a range of cases potentially leading to quite different voice "tuning", perhaps according to a language's characteristics, perhaps when <i> is used to stress a concept as a non-speech kind of emphasis rather than to mark a taxonomic name (both are use cases for italicized text, since sometimes italic is preferred to bold and sometimes used in conjunction with bold, to create a kind of scale of emphasis or stress over a matter).

So, what is <i> for a (non-visual) UA? Is it something which is sometimes like plain text, sometimes between plain text and <em> content, and some other times like <em> or even more than <em>? And how can a UA understand that by means of <i> alone? It can't, unless <i> had attributes telling about the context, or <i>'s semantics were restricted to one precise context and other elements were created to tell about a specific context; but that might be risky because of possible misuses, so delegating context interpretation to UAs, in part, may be reasonable. But UAs cannot understand contexts because they're (roughly speaking) Turing machines, so they don't understand content and cannot resolve contexts; thus a compromise is needed to help a UA attach an acceptable presentation (when the most proper one is not achievable) to a certain semantics.

At first glance, a compromise might be a convention like,

"normal text" ≤ "i content" ≤ "em content"

and

"normal text" ≤ "b content" ≤ "strong content"

so that any UA might fix a presentation for <em> and <strong> and then tune the presentation for <b> and <i> as a "mean value" (or a value lying) between plain content (as a lower bound) and <strong>/<em> (as an upper bound); for instance, <strong> text might be bolder than <b> text, or preceded by a longer "bing", and <i> content inflected a bit more than normal text and less than <em> content.
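As a toy sketch of that tuning (the function name is invented, and CSS font-weight numbers are used purely as an example scale for a visual UA; an aural UA could apply the same rule to pitch or "bing" duration):

```javascript
// Hypothetical sketch: fix presentation values for plain text (lower bound)
// and for <strong> (upper bound), then tune <b> to a mean value lying
// between the two, per the ordering convention sketched above.
function midValue(plainValue, strongValue) {
  return (plainValue + strongValue) / 2;
}

const plainWeight = 400;  // CSS font-weight of normal text
const strongWeight = 700; // CSS font-weight chosen for <strong>
const bWeight = midValue(plainWeight, strongWeight);
console.log(bWeight); // 550: <b> ends up bolder than plain, lighter than <strong>
```

The point is not the specific numbers, but that once the two bounds are fixed, the presentation of <b>/<i> is derivable rather than left entirely unspecified.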

At the same time, there would be a chance to render a certain content the very same way as normal text or <em>/<strong> text, according to what's considered the better choice for a certain medium dealt with by a certain UA. Perhaps that's what UAs (especially non-visual ones) should do anyway, but any degree of (non-standard) freedom may lead to inconsistent behaviours when passing from one UA to another, and standard behaviours are, or can be, a goal. Thus, I think that spending one more word for the sake of clarity (at least) is always better than not doing so, because I believe precise semantics is needed by UAs more than by authors.

More technically, HTML defines a (specialized) programming language whose transducer is any conforming UA, the same way any document format (from the binary .doc, to the human-readable RTF, to LaTeX, to ODF, to PDF and so on) defines a (specialized) programming language whose transducer is a compatible word processor (from this point of view, a WYSIWYG editor can be thought of as a kind of visual IDE) -- let me point out that this is not my own personal opinion, it's just the current theory of computer languages, and I am and will be fine with its statements at least until someone refutes or supersedes them.

HTML has to deal (not only, but also) with "human" (or "natural") language semantics, but such semantics cannot be the base for HTML's semantics, because, like every computer language, HTML needs context-freedom, while natural languages are strongly context-dependent. It's a fact: human beings are addicted to metaphors, to double meanings, to figures of speech, and we often don't even notice, as when we talk about the "arms" of a chair, which is a catachrestic (unperceived) metaphor; nothing of the sort is remotely reproducible in computer languages today.

In printed/written text we make use of conventions like punctuation, uppercase letters, font size and style, colours, and so on, to reproduce speech conventions, such as voice speed, pitch and volume, which are our primary means of disambiguation. Sometimes print/grammar conventions aren't enough, and neither is voice inflection (e.g. a person may use very similar inflections to express (slightly or quite) different feelings, or no inflection for a mixed metaphor, since it is perceived as normal speech, or he/she may pronounce and write different-meaning words the very same way); yet we're able to understand the meaning most of the time, because we can add further knowledge about a speech's subject beyond what's expressed by the speech itself: we're aware of contexts, computers are not.

[ A classic example may be a sentence like "legs have cats": who owns what? Everyone can answer that "cats" is the "who" and "legs" is the "what"; someone will notice that the sentence does not conform to English grammar rules, yet we understand its meaning, because we can add a wider semantics to each term, we can contextualize it, while a computer cannot; a computer can, at most, find the verb, understand it, then attach the 'owner' semantics to whatever precedes it and the 'owned' semantics to whatever follows it -- I like thinking of natural languages as some kind of multidimensional, cyclic, implicit and generally "non-explicitable" function our brains are capable of handling by means of probabilistic algorithms based on a database of previous experiences coming from each sense and from acquired knowledge - that is, fuzzy logic ]
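The "at most" behaviour I describe could be sketched like this (naiveParse is an invented name; this is a deliberately context-blind toy, not a real parser):

```javascript
// Hypothetical sketch of the context-free parse described above: find the
// verb, tag whatever precedes it as the 'owner' and whatever follows it as
// the 'owned', with no access to real-world knowledge about cats or legs.
function naiveParse(sentence, verb) {
  const index = sentence.indexOf(verb);
  return {
    owner: sentence.slice(0, index).trim(),
    owned: sentence.slice(index + verb.length).trim(),
  };
}

// A human reads "legs have cats" as cats owning legs; the machine cannot.
console.log(naiveParse('legs have cats', 'have'));
// { owner: 'legs', owned: 'cats' } -- the reverse of the human reading
```

The machine's answer is syntactically consistent and semantically wrong, which is exactly the gap between word order and context that the bracketed example points at.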

An HTML-conforming UA is a language transducer, a kind of compiler, thus unaware of contexts and content meaning. HTML elements' semantics should be as close as possible to one specific context (possibly with the help of attributes - whether to create a new element or to add a new attribute is a matter of syntax), so that any UA can attach a proper presentation to an element's content, helping human end users understand its 'meaning'; if that's not reasonably possible, more contexts (close to each other) should be grouped into one semantics, taking care to define (or refer to, when possible) a middle-ground presentation which is an acceptable cross-media compromise, possibly referring to other elements with a somehow close but better-defined ( = more specific) semantics, or even aliasing them. [IMHO]

In other words, an element's semantics should be defined taking care of UA constraints first, and of authors' needs in second place; not because authors' needs are less important - indeed they're so important that there's room even for some (reasonable) redundancy - but because a human being can always take the effort to care about machine constraints, while the opposite is not always true (if possible at all), and won't be true at least until technology provides us with human-level AI.


On Tue, 25 Nov 2008, Pentasis wrote:
Just because HTML5 redefines the element does not mean that the element will suddenly be semantic.

The key is that the way we have defined <b>, <i>, and <small> is roughly in line with what authors do already anyway, as much as other tags are roughly in line with how they are used.


That's a good key, but it solves only half of the problem, the part related to authors' needs; I think another key should be taken into account beside it, answering the question: is an element's semantics something any UA can _easily_ understand and _correctly_ present to end users, without any further knowledge about the element's content and context than what's expressed by the element's semantics itself? I fear that whatever effort is taken to define a "media-neutral" semantics, there is always a chance of a media-dependent answer, especially for phrasing semantics, which deals somehow (or mainly) with content 'classification' and presentation (cross-media, as far as possible), and a wrong presentation may compromise the enjoyment of the content, despite human capabilities to disambiguate contexts.


One way to think of <nav> is "would you want an accessibility tool to skip these links by default?". One way to think of <aside> is "would you want this to be moved to a sidebar?".


On Fri, 14 Nov 2008, Nils Dagsson Moskopp wrote:
The small element represents small print [...]

The b element represents a span of text to be stylistically offset from the normal prose without conveying any extra importance [...]
Both definitions seems rather presentational (contrasting, for example, the new semantic definition for the <i> element) and could also be realized by use of <span> elements.

Consider a speech browser. Does it makes sense to convey small print in a speech context? (Yes, consider radio ads for pharmaceuticals. They speak faster for the small print.) Does it make sense to represent a span of text stylistically offset from the normal prose without conveying importance in a speech browser? (Yes, e.g. there could be a "bing" sound after each word in a <b>, indicating that it is a keyword. I can't think of an example on radio currently, though.)

Media independence is what we're going for here. <font>, for example, isn't media-independent.


On Mon, 24 Nov 2008, Asbjørn Ulsberg wrote:
However, you can only notice this if the words have been distinguished in some way. With <b>, all user-agents can choose to convey to users that those words are special.
They are only special for sighted users, browsing the page with a rather advanced user agent. They are not special to blind users or to users of text-based user agents like Lynx. If you want to express semantics, then use a semantic element.

<b> now _is_ a semantic element. Lynx already uses a different colour for it, for example. What problem do we solve by inventing a new element to do exactly what <b> does today?


Expressing semantics through presentation only is done in print because of the limitations in the printing system. If the print was for a blind person, printed with braille, one could imagine (had it been supported) that letters with a higher weight could be physically warmer than others, or with a more jagged edge so they could stand out.

Right, and we can get that with <b>. No need for a new element.


All right, but that's mainly a (cross-media) presentational semantics (unlike links and inputs, for instance, which mainly describe interaction); thus media-specific considerations might be needed to some extent to improve cross-media consistency, which I think is a goal conveyed by media-independence (otherwise, the same markup might lead to unreliable results, so telling what a certain semantics means for a certain UA, not only for authors, with respect to other, somehow similar elements, is something I'd consider). What changes here is media-neutrality, while the presentational nature of the elements with a redefined semantics is left untouched (and couldn't be otherwise).

Once UAs implemented conventions like, for instance (and only as an example),

'thicker text' (visual) <=> 'darker colour' (textual) <=> 'warmer letters' (braille) <=> 'louder "bing" before content' (aural)

and

'letters size' (visual) <=> 'colour hue or saturation' (textual) <=> 'more or less jagged edges' (braille) <=> 'voice speed and/or volume' (aural)

there wouldn't be any major difference between,

<b> Something </b>

and

<span style="font-size:inherit; font-weight:bold;"><!-- or perhaps hypothetical letter-size, letter-weight with font-* properties derived accordingly for screen media --> Something </span>

other than actual CSS support (that is, once wide support for CSS is provided, enriching and generalizing - perhaps implementation-side - the semantics of certain visual CSS properties, instead of the semantics of certain (born-)visual HTML elements, would be quite the same), and perhaps the former being a good, more expressive 'shortcut' (or alias) for the latter (from the authors' point of view).

Most elements might be reduced to a <div> (or a <span> - only one is needed, 'thanks' to the display property) with proper style and attributes, but a semantics such as "(almost) everything is a div" may not be expressive enough to meet authors' needs. Such a need for expressiveness (given that any lack in CSS support is something possibly subject to change) is perhaps the only good reason to maintain presentational (though media-independent, as far as possible) elements such as <b> and <i>, but also to create newer ones such as <article>/<section> (<div>s with proper styles), <aside> (a floating or otherwise positioned <div>), <nav> (a <div> with an appropriate tabindex, if supported by ATs to order content), for instance. Still, that's a very good reason to have what I'd call a reasonable redundancy. :-)

>
> On Fri, 14 Nov 2008, Pentasis wrote:
>> Not yet maybe, but we could at least try to keep options open for the
>> future.
>
> This doesn't scale -- there are an unbounded set of features that aren't
> in HTML5 currently. We can't add them all. We are focusing on only adding
> those features that we can justify today, as that seems like the most
> sensible cut-off point given that we need a cut-off point.
>

That's a good point; going further might be either unneeded (and might be done as soon as a real and wide need arose, in a bullet-tracing fashion of evolution), or still possible by means of XML extensibility (in XHTML, for instance), or even by means of <div>s or <span>s with a proper not-only-presentational class attribute (if 'everything is a div' lacks expressiveness, 'something is a div classified as @class' might be expressive enough for custom/niche needs). :-)

Best regards, and happy holiday to everyone (if having holidays this period)
Alex

