Re: [whatwg] Adding and removing media source elements

2009-02-23 Thread Ian Hickson
On Tue, 3 Feb 2009, Philip Jägenstedt wrote:
 On Tue, 03 Feb 2009 05:44:07 +0100, Ian Hickson i...@hixie.ch wrote:
  On Tue, 3 Feb 2009, Chris Pearce wrote:
   
   (2) Why don't we invoke load() whenever a media element's src 
   attribute or source children are changed, regardless of 
   networkState? That way changes to the media's src/source other than 
   the first change would have the same effect as the first one, i.e. 
   they'd have an immediate effect, causing load() to be invoked.
  
  Doing this would cause the first file to be downloaded multiple times 
  in a row, leading to excessive network usage.
 
 Surely this can't be the only reason? User agents are free to 
 speculatively keep the current source loading when src/source changes 
 and to stop loading it only if the current media resource does change. 
 That, and caching, should be enough.

It seems rather unclean to require that kind of hack. It would also make 
the actual exact detectable behaviour dependent on a variety of timing and 
race conditions, which I generally try to avoid.

Anyway, the way the spec has been changed now solves this -- dynamic 
additions are used, without needing a reload of the previous sources.
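
A minimal sketch of the dynamic-addition pattern (the file name here is 
hypothetical):

   <video id="v" controls></video>
   <script>
     var v = document.getElementById('v');
     var s = document.createElement('source');
     s.src = 'clip.ogv';   // hypothetical resource
     v.appendChild(s);     // picked up by the resource selection algorithm
                           // without restarting the load of earlier sources
   </script>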


 I have always imagined that the reason for the conditioned load() is to 
 not interrupt playback by fiddling with the DOM or doing something like 
 v.src=v.src (although I'm quite sure that doesn't count as changing the 
 attribute).

Yes, that's the intent.

We can't just rely on waiting for the script to end because the list of 
source elements might not be known right away -- e.g. it might be 
drip-fed by the parser.


 Related, since load() is async it depends on timing whether or not
 
<video id="v"></video>
<script>
 v = document.getElementById('v');
 v.src = 'test';
</script>
 
 causes the source 'test' to be loaded, as the network state may not be
 NETWORK_EMPTY when the src attribute is set.

This is addressed now.


 The same goes for adding source child elements of course.

This too.


On Wed, 4 Feb 2009, Philip Jägenstedt wrote:
 
 I also had this "avoid accidental reloads" theory before, but it doesn't 
 strike me as very reasonable after thinking more about it. Can anyone 
 give an example of a use case where the DOM src attribute or source 
 elements are added/changed accidentally so that it would cause an 
 unwanted reload?

The parser:

   <video>
     <source>
     <!-- network lag inserts a pause here... -->
     <source>

-- 
Ian Hickson   U+1047E)\._.,--,'``.fL
http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'

Re: [whatwg] Adding resourceless media to document causes error event

2009-02-23 Thread Ian Hickson
On Wed, 4 Feb 2009, Chris Pearce wrote:

 My reading of the spec is that if you have a media element with no src 
 attribute or source element children (e.g. <video></video>) and you 
 insert it into a document, then the media load() algorithm will be 
 implicitly invoked, and because the list of potential media resources is 
 empty, that algorithm will immediately fall through to the failure 
 step (step 12), causing an error progress event to be dispatched to the 
 media element.
 
 My question is:
 
 Is it really necessary to invoke the load algorithm when adding a media 
 element with no src/sources to a document? Doing so just causes an error 
 progress event dispatch, and we've not exactly failed to load anything; 
 indeed, we've not even tried to load anything in this case.

The load algorithm now explicitly waits in this case.
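
For illustration, the case being discussed is simply (a sketch):

   <script>
     var v = document.createElement('video'); // no src, no <source> children
     document.body.appendChild(v);
     // The updated load algorithm now waits for a source to appear instead
     // of immediately firing an 'error' event.
   </script>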

-- 
Ian Hickson   U+1047E)\._.,--,'``.fL
http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'


Re: [whatwg] Error in the play() method's algorithm

2009-02-23 Thread Ian Hickson
On Mon, 9 Feb 2009, Robert O'Callahan wrote:

 http://www.whatwg.org/specs/web-apps/current-work/#dom-media-play Step 
 3.3 says "Otherwise, the media element's readyState attribute has the 
 value HAVE_FUTURE_DATA or HAVE_ENOUGH_DATA..."
 
 But this isn't true. It could be HAVE_NOTHING, in fact it will often be 
 HAVE_NOTHING, because if the network state was NETWORK_EMPTY when play() 
 is called, then the load() method will have returned with the readyState 
 set to HAVE_NOTHING.
 
 I suspect the description of play() needs to be updated to account for 
 the change to make load() asynchronous.

I made this change as part of the recent load algorithm fixes.
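
A sketch of why the old wording was wrong (the file name is hypothetical):

   var v = document.getElementById('v');  // assumes a <video id="v"> in the page
   v.src = 'clip.ogv';                    // hypothetical resource; starts load()
   v.play();
   // load() is asynchronous, so readyState here is typically still
   // HAVE_NOTHING, not HAVE_FUTURE_DATA or HAVE_ENOUGH_DATA.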

-- 
Ian Hickson   U+1047E)\._.,--,'``.fL
http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'


Re: [whatwg] Small inconsistencies in video events

2009-02-23 Thread Ian Hickson
On Mon, 9 Feb 2009, Robert O'Callahan wrote:

 I was just writing some tests for various events and noticed that 
 there's a slight weirdness in the events fired for readyState 
 transitions. If readyState changes from HAVE_CURRENT_DATA to 
 HAVE_FUTURE_DATA, the element is then potentially playing, and then 
 readyState changes to HAVE_ENOUGH_DATA, we fire "canplay", "playing", 
 "canplay" again, "canplaythrough" and "playing" again. OTOH if 
 readyState changes from HAVE_CURRENT_DATA directly to HAVE_ENOUGH_DATA 
 and the element is potentially playing, then we fire "canplay", 
 "canplaythrough", and "playing".
 
 I think we should fire the same set of events in the same order whether 
 we transition through HAVE_FUTURE_DATA or not. So, I suggest that a 
 transition from HAVE_FUTURE_DATA to HAVE_ENOUGH_DATA should not fire 
 "canplay" or "playing". Also, a transition from HAVE_CURRENT_DATA to 
 HAVE_ENOUGH_DATA should fire "canplaythrough" after we've handled 
 autoplay and potentially fired "playing".

I've tried to make this much more consistent. You were definitely right 
that there were far too many duplicate and out-of-order events before.
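
A test harness along the lines Robert describes can be as simple as this 
sketch (the element lookup is assumed):

   var v = document.getElementById('v');
   ['canplay', 'playing', 'canplaythrough'].forEach(function (type) {
     v.addEventListener(type, function () { console.log(type); }, false);
   });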

Please let me know if the new spec is still doing silly things.

Thanks,
-- 
Ian Hickson   U+1047E)\._.,--,'``.fL
http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'


Re: [whatwg] Clickjacking and CSRF

2009-02-23 Thread Sigbjørn Vik

On Fri, 20 Feb 2009 19:36:47 +0100, Bil Corry b...@corry.biz wrote:

> Sigbjørn Vik wrote on 2/20/2009 8:46 AM:
>> One proposed way of doing this would be a single header, of the form:
>> x-cross-domain-options: deny=frame,post,auth; AllowSameOrigin;
>> allow=*.opera.com,example.net;
>> This incorporates the idea from the IE team, and extends on it.
>
> Have you taken a look at ABE?
>
> http://hackademix.net/wp-content/uploads/2008/12/abe_rules_03.pdf

I am not quite certain what you are referring to; the document is a ruleset
grammar for expressing what is allowed and disallowed. Do you mean that
clients should be using a URL list, or that servers should be using this
particular grammar to decide which headers to send with their URLs?
For a domain-wide policy file, a document like this might work well, though.

>> For cross-domain resources, this means that a browser would first have
>> to make a request with GET and without authentication tokens to get the
>> x-cross-domain-options settings from the resource. If the settings
>> allow, a second request may be made, if the second request would be
>> different. The results of the last request are handed over to the document.
>
> Have you considered using OPTIONS for the pre-flight request, similar to
> how Access Control for Cross-Site Requests does it?
>
> http://www.w3.org/TR/access-control/#cross-site2

Good point. Trying to use OPTIONS on existing servers might break them, so a
GET might be safer. Then again, OPTIONS shouldn't break anything, GETs might
have side-effects where OPTIONS don't, and an OPTIONS reply typically has a
much smaller payload than a GET reply. In the case of a reply to a pre-flight
request where the user agent has cookies but the server replies that the
contents are the same with or without cookies, an OPTIONS approach would
require two requests, a GET only one. OPTIONS is probably more in the spirit
of HTTP, though.

Either could work; the idea is the same. Which is better would have to be
researched empirically, but OPTIONS might be the better candidate.
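
For concreteness, the OPTIONS variant being weighed here might look like
this on the wire (a sketch only; x-cross-domain-options is the header
proposed upthread, not an existing standard, and the resource path is
hypothetical; no cookies or other authentication tokens are sent on the
pre-flight):

   OPTIONS /resource HTTP/1.1
   Host: example.net

   HTTP/1.1 200 OK
   x-cross-domain-options: deny=frame,post,auth; AllowSameOrigin; allow=*.opera.com,example.net;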

--
Sigbjørn Vik
Quality Assurance
Opera Software




Re: [whatwg] Video playback quality metric

2009-02-23 Thread Ian Hickson
On Mon, 9 Feb 2009, Jeremy Doig wrote:

 Measuring the rate at which the playback buffer is filling/emptying 
 gives a fair indication of network goodput, but there does not appear to 
 be a way to measure just how well the client is playing the video 
 itself. If I have a wimpy machine behind a fat network connection, you 
 may flood me with HD that I just can't play very well. The cpu or video 
 card may just not be able to render the video well. Exposing a metric 
 (e.g. dropped frame count, rendered frame rate) would allow sites to 
 dynamically adjust the video which is being sent to a client [eg: switch 
 the url to a differently encoded file] and thereby optimize the playback 
 experience.

One concern is that there are several possible reasons for playback to be 
poor; the hardware could be simply unable to handle it, but it could also 
be that the system is overloaded. For example, multiple videos could be 
playing at once.

As a user, if I see choppy video, I can try to figure out whether my 
system is loaded, and frankly I'd rather do that than have the Web page 
automatically try to downgrade me...


On Tue, 10 Feb 2009, Philip Jägenstedt wrote:
 
 While I think this kind of thing might be useful, I would be careful 
 about requiring any kind of detailed metrics like dropped frames or 
 effective frame rate to be exposed via the DOM, as getting this information 
 reliably over different devices, platforms and media frameworks would be 
 quite difficult. How about an event which the user agent can optionally 
 fire to indicate that it cannot play at the requested rate due to 
 processing/memory constraints (rather than network)? This would (I 
 think) provide the same functionality but put less burden on 
 implementors.
 
 There is already a placeholder for non-fatal errors in the spec, perhaps 
 this could be included with that in some fashion?

On Tue, 10 Feb 2009, James Graham wrote:
 
 It seems like, in the short term at least, the "worse is better" 
 solution to this problem is for content providers to provide links to 
 resources at different quality levels, and allow users to choose the 
 most appropriate resource based on their internet connection and their 
 computer rather than having the computer try to work it out for them. 
 Assuming that the majority of users use a relatively small number of 
 sites with the resources to provide multiple-quality versions of their 
 videos and use a small number of computing devices with roughly 
 unchanging network conditions (I imagine this scenario applies to the 
 majority of non-technical users), they will quickly learn which versions 
 of the media work best for them on each site. Therefore the burden of this 
 simple approach on end users does not seem to be very high.

On Tue, 10 Feb 2009, Michael A. Puls II wrote:
 
 Flash has "low", "medium" and "high" quality that the user can change 
 (although a lot of sites/players seem to rudely disable that option in 
 the menu for some reason). This helps out a lot and can allow a video to 
 play better. I could imagine an "Auto" option too that automatically 
 switched quality as necessary to get decent playback.
 
 As an event, a site could use it like:
 
 video.onplaybacktooslow = function() {
    this.quality = "low";
    this.setToNativeSize(); // stretched videos use more cpu
 };
 
 Or, something like that.

I'd be interested in seeing what implementors would find easiest to 
expose, once we have more implementation experience. Just an event along 
the lines of "well, I can't keep up with this"? An arbitrary quality 
number where 0 is "this is the worst experience I've ever exposed the 
user to" and 1 is "I'm not even breaking a sweat playing this"? Frames per 
second? Dropped frames per second?

It should be noted that the spec already supports having the _browser_ 
automatically fall back to another stream. The author can include multiple 
streams like this:

   <video>
     <source src="hd.mov">
     <source src="sd.mov">
     <source src="postage-stamp.mov">
   </video>

...and the browser is well within its rights to decide that it can't play 
hd.mov (having downloaded it and examined it) and that it will use sd.mov 
instead. I would be interested in feedback from browser vendors regarding 
whether this is feasible to implement or not -- if it is, can we rely on 
this instead of exposing it to scripts?

I've noted the idea of having an explicit way for scripts to determine 
rendering quality for the v3 media element API. I haven't added anything 
to the spec yet because we're still waiting for the current features to 
get implemented and shipped reliably.

-- 
Ian Hickson   U+1047E)\._.,--,'``.fL
http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'

Re: [whatwg] Author control over media preloading/buffering

2009-02-23 Thread Ian Hickson
On Wed, 11 Feb 2009, Robert O'Callahan wrote:

 When a media element loads, reaches the HAVE_CURRENT_DATA state, but is 
 paused, and 'autoplay' is not set, we have to decide whether to keep 
 downloading data or not. If we expect the user to play the stream, we 
 should keep downloading and buffering data to minimize the chance that 
 buffering will be needed during playback. But if we don't expect the 
 user to play the stream, we should pause the download to conserve 
 resources. The latter is especially important on pages with large 
 numbers of media elements, only one or two of which the user will play.
 
 In general it's hard to see how to make a good guess automatically. If a 
 page has one (non-autoplay) media element on it, it's hard to know 
 whether the user is expected or not expected to play it. For example the 
 user might be expected to play it, but only after they've read some text 
 before the video (so autoplay is not appropriate). I think (but I'm not 
 sure) that authors are likely to be able to make better guesses, so I 
 think it would be useful to provide authors with control over this 
 decision. I think that authors are likely to want this control in the 
 same way they like to be able to preload images.
 
 So, how about adding an "autobuffer" attribute, which instructs the 
 browser that the user will probably play the video and as much data as 
 possible should be pre-downloaded? By default (when the attribute is not 
 present) the browser would be expected to pause the download after 
 reaching HAVE_CURRENT_DATA if the media element is paused and not 
 'autoplay'.

I've added this attribute.
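
For reference, the attribute is just a boolean hint on the element (the 
file name is hypothetical):

   <video src="lecture.ogv" controls autobuffer></video>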


On Thu, 12 Feb 2009, timeless wrote:
 
 if I'm a mobile browser vendor (and I am), and if I expect to use 
 Bluetooth to talk to a cell phone which has high bandwidth costs (and if 
 you're using an n800/n810 tethered to a phone in Canada, this is true), 
 then I'm not sure I really want web pages to specify things quite like 
 this.

I've made it clear that the browser doesn't have to autobuffer even if the 
attribute is present.


On Fri, 13 Feb 2009, timeless wrote:
 
 I've seen a lot of places of late which have multiple videos which are 
 expected to play in some sort of sequence.
 
 Just saying 'autobuffer' for all of them would kill my device, but a 
 suggestion of which ones to buffer in order would be helpful.

On Sat, 14 Feb 2009, Robert O'Callahan wrote:
 
 Perhaps they should use script to add 'autobuffer' to the next video (or 
 just play() it).

That seems to address this use case.
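
A sketch of that script-driven approach for a two-clip sequence (element 
ids are hypothetical):

   var current = document.getElementById('clip1');
   var next = document.getElementById('clip2');
   current.addEventListener('play', function () {
     // Hint that only the next clip should be buffered ahead of time,
     // instead of autobuffering the whole sequence.
     next.setAttribute('autobuffer', 'autobuffer');
   }, false);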

-- 
Ian Hickson   U+1047E)\._.,--,'``.fL
http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'


Re: [whatwg] Video playback quality metric

2009-02-23 Thread Michael A. Puls II

On Mon, 23 Feb 2009 05:57:01 -0500, Ian Hickson i...@hixie.ch wrote:

> On Tue, 10 Feb 2009, Michael A. Puls II wrote:
>> Flash has "low", "medium" and "high" quality that the user can change
>> (although a lot of sites/players seem to rudely disable that option in
>> the menu for some reason). This helps out a lot and can allow a video to
>> play better. I could imagine an "Auto" option too that automatically
>> switched quality as necessary to get decent playback.
>>
>> As an event, a site could use it like:
>>
>> video.onplaybacktooslow = function() {
>>    this.quality = "low";
>>    this.setToNativeSize(); // stretched videos use more cpu
>> };
>>
>> Or, something like that.
>
> I'd be interested in seeing what implementors would find easiest to
> expose, once we have more implementation experience.

O.K. Makes great sense.

> Just an event along the lines of "well, I can't keep up with this"? An
> arbitrary quality number where 0 is "this is the worst experience I've
> ever exposed the user to" and 1 is "I'm not even breaking a sweat playing
> this"? Frames per second? Dropped frames per second?
>
> It should be noted that the spec already supports having the _browser_
> automatically fall back to another stream. The author can include
> multiple streams like this:
>
>    <video>
>      <source src="hd.mov">
>      <source src="sd.mov">
>      <source src="postage-stamp.mov">
>    </video>
>
> ...and the browser is well within its rights to decide that it can't play
> hd.mov (having downloaded it and examined it) and that it will use sd.mov
> instead.

The question is, will the browser automatically switch to sd.mov in some
situations where the user doesn't want it to? I think it's safe to say it
will.

With that said, good defaults with a way for the user to override (and
"remember my answer", etc.) would probably be the best of both worlds. (I
realize that's getting into browser-specific UI/pref stuff, but just
saying.)

--
Michael




Re: [whatwg] Clickjacking and CSRF

2009-02-23 Thread Giorgio Maone

Sigbjørn Vik wrote, On 23/02/2009 11.42:

> On Fri, 20 Feb 2009 19:36:47 +0100, Bil Corry b...@corry.biz wrote:
>> Sigbjørn Vik wrote on 2/20/2009 8:46 AM:
>>> One proposed way of doing this would be a single header, of the form:
>>> x-cross-domain-options: deny=frame,post,auth; AllowSameOrigin;
>>> allow=*.opera.com,example.net;
>>> This incorporates the idea from the IE team, and extends on it.
>>
>> Have you taken a look at ABE?
>>
>> http://hackademix.net/wp-content/uploads/2008/12/abe_rules_03.pdf
>
> I am not quite certain what you are referring to; the document is a
> ruleset grammar for expressing what is allowed and disallowed. Do you
> mean that clients should be using a URL list, or that servers should be
> using this particular grammar to decide which headers to send with
> their URLs?
> For a domain-wide policy file, a document like this might work well,
> though.

ABE is meant to be configured in 3 ways:

  1. With user-provided rules, deployed directly client-side
  2. With community-provided rules, downloaded periodically from a
 trusted repository
  3. As a site-wide policy deployed on the server side in a single
 file, much like crossdomain.xml

See http://hackademix.net/2008/12/20/introducing-abe/ and especially this 
comment about site-provided rules and merging: 
http://hackademix.net/2008/12/20/introducing-abe/#comment-10165
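
For readers who haven't followed the links: an ABE rule names a resource
and then whitelists who may send what to it, roughly like this (paraphrased
from the linked documents; treat the exact grammar as unverified):

   # Allow cross-site GETs to this site, but only same-origin POSTs
   Site www.example.com
   Accept POST from SELF
   Accept GET
   Deny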

--
Giorgio


Re: [whatwg] Clickjacking and CSRF

2009-02-23 Thread Sigbjørn Vik

On Mon, 23 Feb 2009 14:23:40 +0100, Giorgio Maone g.ma...@informaction.com 
wrote:

>> On Fri, 20 Feb 2009 19:36:47 +0100, Bil Corry b...@corry.biz wrote:
>>> Sigbjørn Vik wrote on 2/20/2009 8:46 AM:
>>>> One proposed way of doing this would be a single header, of the form:
>>>> x-cross-domain-options: deny=frame,post,auth; AllowSameOrigin;
>>>> allow=*.opera.com,example.net;
>>>> This incorporates the idea from the IE team, and extends on it.
>>>
>>> Have you taken a look at ABE?
>>>
>>> http://hackademix.net/wp-content/uploads/2008/12/abe_rules_03.pdf
>>
>> I am not quite certain what you are referring to; the document is a
>> ruleset grammar for expressing what is allowed and disallowed. Do you
>> mean that clients should be using a URL list, or that servers should be
>> using this particular grammar to decide which headers to send with
>> their URLs?
>> For a domain-wide policy file, a document like this might work well,
>> though.
>
> ABE is meant to be configured in 3 ways:
>
>    1. With user-provided rules, deployed directly client-side
>    2. With community-provided rules, downloaded periodically from a
>       trusted repository
>    3. As a site-wide policy deployed on the server side in a single
>       file, much like crossdomain.xml
>
> See http://hackademix.net/2008/12/20/introducing-abe/ and especially this
> comment about site-provided rules and merging:
> http://hackademix.net/2008/12/20/introducing-abe/#comment-10165


Yes, a domain-wide policy file might be good to have, but it could not
entirely replace a header settable for a single resource; not all web
authors have access to the server root, so the policy file would have to
come as an addition, or an optional replacement.

If a domain-wide policy file is used, it would make sense to have it in a
format which can be distributed and applied locally, so users can patch web
sites that don't do it themselves. ABE looks like a good candidate for all
of this. A good candidate might also have to be implementable by the
server, so that a server can look at the policy file and determine which
headers to send for any particular resource, including which resources to
send no headers for at all. Presumably ABE would work for that too.

--
Sigbjørn Vik
Quality Assurance
Opera Software




Re: [whatwg] typo in UTF-16LE BOM sniffing rule

2009-02-23 Thread Adam Barth
Thanks.  This is fixed in the latest draft of the content type
sniffing rules, available here:

http://webblaze.cs.berkeley.edu/2009/mime-sniff/mime-sniff.txt

Adam


On Sun, Feb 22, 2009 at 6:15 AM, Dan Winship dan.wins...@gmail.com wrote:
 In the summary table at the end of "2.7.4 Content-Type sniffing: unknown 
 type", the UTF-16LE BOM is incorrectly listed as FF FF instead of FF 
 FE. (The UTF-16BE BOM is correct, and the UTF-16LE one is stated
 correctly earlier in the text):

 FF FF 00 00 FE FF 00 00 text/plain  n/a UTF-16BE BOM
 FF FF 00 00 FF FF 00 00 text/plain  n/a UTF-16LE BOM
                ^^
 FF FF FF 00 EF BB BF 00 text/plain  n/a UTF-8 BOM

 -- Dan
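
In code terms, the corrected rows amount to this check (a sketch; the real
algorithm also applies the mask column to each byte):

   function sniffBOM(b) {
     if (b[0] === 0xFE && b[1] === 0xFF) return 'UTF-16BE';
     if (b[0] === 0xFF && b[1] === 0xFE) return 'UTF-16LE'; // FF FE, not FF FF
     if (b[0] === 0xEF && b[1] === 0xBB && b[2] === 0xBF) return 'UTF-8';
     return null;
   }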



[whatwg] Dates and coordinates in HTML5

2009-02-23 Thread Andy Mabbett

This is a copy of my blog post:

  
http://pigsonthewing.wordpress.com/2009/02/23/dates-and-coordinates-in-html5/

  (aka http://is.gd/kB3k )

Please feel free to comment here, or there.


I'm grateful to Bruce Lawson of Opera for alerting me to discussion of
the time element on this mailing list and encouraging me to participate;
and indebted to him for the engaging discussions which have led me to
the ideas expressed below. So please blame him if you don't like what I
have to say ;-)

I've read up on what prior discussion I can find on the list; but may
have missed some. I'll be happy to have anything I've overlooked pointed
out to me.

I have considerable experience of marking up dates in microformats, both
for forthcoming events on the West Midland Bird Club's diary pages:

  http://www.westmidlandbirdclub.com/diary/

and for historic events, on Wikipedia and Wikimedia Commons.

I've been a staunch and early critic of the accessibility problems
caused by abusing the abbr element for things like machine-readable
dates (as has Bruce). The HTML5 time element has the potential to
resolve that problem, but only if it caters for all the cases in which
microformats are — or could potentially be — used.

It seems to me that there are several outstanding, and overlapping,
issues for time in HTML5, which include use-cases, imprecise dates,
Gregorian vs. non-Gregorian dates and BCE (aka “BC”) dates. First,
though, I should like to make the observation that, while hCalendar
microformats are most commonly used to allow event details to be added
to calendar apps, and that that use case drove their development, they
should not be seen simply as a tool to that end. I see them, and hope
that others do, as a way of adding semantic meaning to mark-up; and
that's how I view the “time” element, too. Once we indicate that the
semantic meaning of a string of text is a date, it's up to other people to
decide what they use that for — let a thousand flowers bloom, as the
adage goes.

Use-cases for machine-readable date mark-up are many: as well as the
aforesaid calendar interactions, they can be used for sorting; for
searching (“find me all the pages about events in 1923” — recent
developments in Yahoo's YQL searching API, which now supports searching
for microformats:

  http://developer.yahoo.net/blog/archives/2009/01/yql_with_microformats.html

have opened up a whole new set of possibilities, which is only just
beginning to be explored). They can be mapped visually on a SIMILE

  http://simile.mit.edu/timeline/

or similar time-line. They can be translated into other languages more
effectively than raw prose; they can be disambiguated (does “5/6/09”
mean “5th June 2009” or “6th May 2009”?); and they can be presented
in the user's preferred format (I might want to see “5th June 2009”;
you might see “June 5, 2009” — such presentational preferences have
generated arguments of little-endian proportions on Wikipedia).
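
A sketch of how the markup removes that ambiguity:

   <time datetime="2009-06-05">5/6/09</time>

The visible text stays in the author's preferred format; the datetime value
pins down which day is meant.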

hCalendar microformats are already used to mark up imprecise dates
(“June 1977”; “2009”). ISO 8601 already supports them. Why not HTML5?
Though care needs to be taken, it's even possible to mark up words like
“today” with a precise date, if that's generated in real time, server-side.
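
For example (assuming <time> accepted the reduced-precision ISO 8601 forms
this post argues for):

   <time datetime="1977-06">June 1977</time>
   <time datetime="2009">2009</time>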

The issue of non-Gregorian (chiefly Julian) dates is a vexing one; and
has already caused problems on Wikipedia. So far as I am aware, there is
no ISO-, RFC- or similar standard for such dates, other than converting
them to Gregorian dates. It is not the job of the HTML5 working group to
solve this problem; but I think the group should recognise that at some
point a solution must be forthcoming. One way to do so would be to allow
something like:

<time schema="[schema-name]" datetime="[value]">[date in plain text]</time>

where the schema defaults to ISO 8601 if not stated, and the whole
element is treated as simply:

[date in plain text]

if the schema is unrecognised; thereby ensuring backwards compatibility.
That way, if a hypothetical ISO- or other standard for Julian dates
emerges in the future, authors may simply start to use it without any
revision to HTML 5 being required.
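
For example, a page could write (the schema name "julian" is hypothetical;
no such standard exists yet, which is exactly the point):

   <time schema="julian" datetime="1642-12-25">Newton's birthday</time>

and a consumer which doesn't recognise the schema would simply see the
plain text "Newton's birthday".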

As for BCE dates, they're already allowed in ISO 8601 (since there was
no year 0, the year 3 BCE is given as -0002 in ISO 8601). I see no
reason why they should be disallowed in time elements in HTML5. We
wouldn't, to take an extreme example, say that “p” can be used for
paragraphs in English but not French, or paragraphs about literature but
not music; so why make an effectively arbitrary limit on the dates which
can be marked up semantically? Surely the use case for marking up a
sortable table of Roman emperors should allow all such emperors, and
not just those who ruled from 1 AD onwards, to be included?
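
For example, using the ISO 8601 convention just described:

   <!-- 44 BCE, the year of Caesar's assassination -->
   <time datetime="-0043">44 BCE</time>
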
Coordinates

Another abuse of ABBR in microformats is for coordinates:

<abbr class="geo" title="52.548;-1.932">Great Barr</abbr>

Bruce and I agree that this could be resolved, and HTML5 usefully
extended, by a “location” element:

<location latitude="52.548"