Re: [whatwg] audio and video: volume and muted as content attributes?

2010-06-01 Thread Silvia Pfeiffer
On Tue, Jun 1, 2010 at 3:53 PM, Philip Jägenstedt phil...@opera.com wrote:
 On Mon, 31 May 2010 19:33:45 +0800, Silvia Pfeiffer
 silviapfeiff...@gmail.com wrote:

 Hi,

 I just came across a curious situation in the spec: IIUC, it seems the
 @volume and @muted attributes are only IDL attributes and not content
 attributes. This means that an author who is creating an audio-visual
 Webpage has to use JavaScript to turn down (or up) the loudness of
 their media elements or mute them rather than just being able to
 specify this through content attributes.

 I've searched the archives and didn't find a discussion or reasons for
 this. Apologies if this has been discussed before.

 I am guessing the reasons for not having them as content attributes is
 that anything that requires muting of audio-visual content is assumed
 to need JavaScript anyway.

 However, if I have multiple videos on a page, all on autoplay, it
 would be nice to turn off the sound of all of them without JavaScript.
 With all the new CSS3 functionality, I can, for example, build a
 spinning cube of video elements that are on autoplay or a marquee of
 videos on autoplay - all of which would require muting the videos to
 be bearable. If we added @muted to the content attributes, it would be
 easy to set the muted state without having to write any JavaScript.
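The scripting workaround described above can be sketched as follows. The muted IDL attribute is the standard HTMLMediaElement API; the helper name and the load-time hookup are illustrative only:

```javascript
// Without a muted content attribute, muting every video on a page
// requires a script like this. (Hypothetical helper; the "muted"
// IDL attribute itself is standard HTMLMediaElement API.)
function muteAll(mediaElements) {
  for (const el of mediaElements) {
    el.muted = true;
  }
  return mediaElements;
}

// In a browser this would be called as, for example:
// document.addEventListener('DOMContentLoaded', () => {
//   muteAll(document.querySelectorAll('video'));
// });
```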

 As for the @volume attribute, I think it would be similarly useful if
 an author could control the loudness at which a video or audio file
 starts playing back, in particular if he/she knows it is actually a
 fairly loud/quiet file.

 I'm curious about other people's opinions.

 Cheers,
 Silvia.


 I think both volume and muted could have some use as content attributes, so
 the question is only whether the additional complexity is worth it for
 implementations and authors. muted is a boolean attribute and would be trivial to support.
 volume, however, is a float and last I checked Opera doesn't reflect [1] any
 other float properties. I wouldn't be surprised if it were a first for
 some other browsers too. Reflecting floats is a little bit annoying (I tried
 when the spec had an aspect attribute for video) because of having to decide
 on an arbitrary precision to which to round. That absence of volume should
 imply 1.0 (and not 0.0 or NaN) could also be a little bit of a nuisance.
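The two nuisances just mentioned (a non-zero default for a missing attribute, and an arbitrary serialization precision) can be sketched roughly like this. The 1.0 fallback and the three-decimal rounding are assumptions chosen for illustration, not anything specified:

```javascript
// Hypothetical sketch of reflecting a float "volume" content attribute.
function reflectedVolume(attrValue) {
  if (attrValue === null) return 1.0;   // absent attribute => full volume (assumed default)
  const v = parseFloat(attrValue);
  if (Number.isNaN(v)) return 1.0;      // unparsable => fall back to default
  return v;
}

// Serializing back to the attribute forces an arbitrary precision choice:
function serializeVolume(v) {
  return String(Math.round(v * 1000) / 1000); // assume three decimal places
}
```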

It might be easier if the content attribute for volume was specified
as a percentage value between 0 and 100. Then it would be an integer
only. I'm not sure if this is possible, but it seems we have other
content attributes with these kinds of values (e.g. width/height).
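The percentage idea could be sketched like this; the mapping to the existing float IDL attribute and the fall-back-to-default handling of invalid values are assumptions, since no such attribute is specified:

```javascript
// Hypothetical: an integer percentage (0-100) content attribute
// mapped onto the existing float volume IDL attribute.
function volumeFromPercentage(attrValue) {
  const pct = parseInt(attrValue, 10);
  if (Number.isNaN(pct) || pct < 0 || pct > 100) {
    return 1.0; // assumed: invalid or out-of-range => default volume
  }
  return pct / 100;
}
```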


 So, I am neither in favor of nor against reflecting volume and muted as
 content attributes. Implementation is quite simple, but doesn't come for
 free unless browsers are already reflecting other float properties.

Mute alone would already be really helpful. I wasn't aware that volume
created such a problem.

 [1]
 http://www.whatwg.org/specs/web-apps/current-work/multipage/urls.html#reflect


Cheers,
Silvia.


Re: [whatwg] audio and video: volume and muted as content attributes?

2010-06-01 Thread Philip Jägenstedt
On Tue, 01 Jun 2010 14:17:03 +0800, Silvia Pfeiffer  
silviapfeiff...@gmail.com wrote:


On Tue, Jun 1, 2010 at 3:53 PM, Philip Jägenstedt phil...@opera.com  
wrote:

On Mon, 31 May 2010 19:33:45 +0800, Silvia Pfeiffer
silviapfeiff...@gmail.com wrote:


Hi,

I just came across a curious situation in the spec: IIUC, it seems the
@volume and @muted attributes are only IDL attributes and not content
attributes. This means that an author who is creating an audio-visual
Webpage has to use JavaScript to turn down (or up) the loudness of
their media elements or mute them rather than just being able to
specify this through content attributes.

I've searched the archives and didn't find a discussion or reasons for
this. Apologies if this has been discussed before.

I am guessing the reasons for not having them as content attributes is
that anything that requires muting of audio-visual content is assumed
to need JavaScript anyway.

However, if I have multiple videos on a page, all on autoplay, it
would be nice to turn off the sound of all of them without JavaScript.
With all the new CSS3 functionality, I can, for example, build a
spinning cube of video elements that are on autoplay or a marquee of
videos on autoplay - all of which would require muting the videos to
be bearable. If we added @muted to the content attributes, it would be
easy to set the muted state without having to write any JavaScript.

As for the @volume attribute, I think it would be similarly useful if
an author could control the loudness at which a video or audio file
starts playing back, in particular if he/she knows it is actually a
fairly loud/quiet file.

I'm curious about other people's opinions.

Cheers,
Silvia.



I think both volume and muted could have some use as content attributes, so
the question is only whether the additional complexity is worth it for
implementations and authors. muted is a boolean attribute and would be trivial
to support. volume, however, is a float and last I checked Opera doesn't
reflect [1] any other float properties. I wouldn't be surprised if it were a
first for some other browsers too. Reflecting floats is a little bit annoying
(I tried when the spec had an aspect attribute for video) because of having to
decide on an arbitrary precision to which to round. That absence of volume
should imply 1.0 (and not 0.0 or NaN) could also be a little bit of a nuisance.


It might be easier if the content attribute for volume was specified
as a percentage value between 0 and 100. Then it would be an integer
only. I'm not sure if this is possible, but it seems we have other
content attributes with these kinds of values (e.g. width/height).


This would make volume even more special, as a float that reflects as an  
integer percentage. Just using the existing definition for reflecting a  
float would be simpler.



So, I am neither in favor of nor against reflecting volume and muted as
content attributes. Implementation is quite simple, but doesn't come for
free unless browsers are already reflecting other float properties.


Mute alone would already be really helpful. I wasn't aware that volume
created such a problem.


I'd be fine with reflecting muted if many people think it would be useful.  
I'm not the one to make that judgment though.


Volume isn't a huge problem, just not as trivial as one might suspect.
Another thing to consider is that it is currently impossible to set volume
to a value outside the range [0,1] via the DOM API. With a content
attribute, volume=-1 and volume=1.1 would need to be handled too. I'd
prefer such values to be ignored rather than clamped.
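The two candidate behaviours for out-of-range values can be contrasted in a small sketch (helper names are hypothetical):

```javascript
// Clamping: out-of-range values are forced into [0,1].
function clampVolume(v) {
  return Math.min(1, Math.max(0, v)); // -1 => 0, 1.1 => 1
}

// Ignoring: out-of-range values are treated as if the attribute
// were absent, falling back to an assumed default of 1.0.
function ignoreOutOfRange(v, fallback = 1.0) {
  return (v >= 0 && v <= 1) ? v : fallback;
}
```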



[1]
http://www.whatwg.org/specs/web-apps/current-work/multipage/urls.html#reflect



Cheers,
Silvia.



--
Philip Jägenstedt
Core Developer
Opera Software


[whatwg] WebSockets: UDP

2010-06-01 Thread Erik Möller
The use case I'd like to address in this post is Real-time client/server  
games.


The majority of the on-line games of today use a client/server model over
UDP and we should try to give game developers the tools they require to
create browser based games. For many simpler games a TCP based protocol is
exactly what's needed, but for most real-time games a UDP based protocol is
a requirement. Games typically send small updates to their servers at 20-30Hz
over UDP and can, with the help of entity interpolation and, if required,
entity extrapolation, cope well with intermittent packet loss. When packet
loss occurs in a TCP based protocol the entire stream of data is held up
until the packet is resent, meaning a game would have to revert to entity
extrapolation, possibly over several seconds, leading to an unacceptable
gameplay experience.


It seems to me the WebSocket interface can be easily modified to cope with  
UDP sockets (a wsd: scheme perhaps?) and it sounds like a good idea to  
leverage the work already done for WebSockets in terms of interface and  
framing.


The most important distinction between ws: and wsd: is that messages sent
by send() in wsd: need not be acknowledged by the peer nor be resent. To
keep the interface the same to the largest possible extent I'd suggest
implementing a simple reliable 3-way handshake over UDP, keep-alive
messages (and timeouts) and reliable close frames. If these are
implemented right, the interface in its entirety could be kept. Only one
new readonly attribute, long maxMessageSize, would need to be introduced to
describe the min path MTU (perhaps only valid once in connected mode, or
perhaps set to 0 or 576 initially and updated once in connected mode). This
attribute could also be useful to expose in ws: and wss:, but in that case
it would be set to the internal limit of the browser / server.


The actual content of the handshake for wsd: can be vastly simplified  
compared to that of ws: as there's no need to be http compliant. It could  
contain just a magic identifier and length encoded strings for origin,  
location and protocol.
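A toy sketch of such a simplified handshake payload follows. The magic identifier and the 2-byte big-endian length prefix are invented for illustration; no wsd: format is specified anywhere:

```javascript
const MAGIC = "WSD1"; // hypothetical magic identifier

// Encode: magic identifier, then length-prefixed strings for
// origin, location and protocol (2-byte big-endian lengths, assumed).
function encodeHandshake(origin, location, protocol) {
  const parts = [MAGIC];
  for (const s of [origin, location, protocol]) {
    const len = s.length;
    parts.push(String.fromCharCode(len >> 8, len & 0xff), s);
  }
  return parts.join("");
}

// Decode: validate the magic identifier, then read the three fields back.
function decodeHandshake(buf) {
  if (!buf.startsWith(MAGIC)) return null;
  let pos = MAGIC.length;
  const fields = [];
  for (let i = 0; i < 3; i++) {
    const len = (buf.charCodeAt(pos) << 8) | buf.charCodeAt(pos + 1);
    pos += 2;
    fields.push(buf.slice(pos, pos + len));
    pos += len;
  }
  return { origin: fields[0], location: fields[1], protocol: fields[2] };
}
```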


To minimize the work needed on the spec the data framing of wsd: can be  
kept identical to that of ws:, though I'd expect game developers would  
choose whatever the binary framing will be once the spec is done.


I'd be very interested to hear people's opinions on this.

--
Erik Möller
Core Developer
Opera Software


Re: [whatwg] audio and video: volume and muted as content attributes?

2010-06-01 Thread Bjartur Thorlacius
On 5/31/10, Silvia Pfeiffer silviapfeiff...@gmail.com wrote:
 On Tue, Jun 1, 2010 at 6:48 AM, bjartur svartma...@gmail.com wrote:
I just came across a curious situation in the spec: IIUC, it seems the
@volume and @muted attributes are only IDL attributes and not content
attributes. This means that an author who is creating an audio-visual
Webpage has to use JavaScript to turn down (or up) the loudness of
their media elements or mute them rather than just being able to
specify this through content attributes.
If you want to control the volume for the user after the page loads
then yes, you'll need JavaScript.
I've searched the archives and didn't find a discussion or reasons for
this. Apologies if this has been discussed before.

I am guessing the reasons for not having them as content attributes is
that anything that requires muting of audio-visual content is assumed
 to need JavaScript anyway.

 Exactly.

However, if I have multiple videos on a page, all on autoplay, it
would be nice to turn off the sound of all of them without JavaScript.
With all the new CSS3 functionality, I can, for example, build a
spinning cube of video elements that are on autoplay or a marquee of
videos on autoplay - all of which would require muting the videos to
be bearable. If we added @muted to the content attributes, it would be
easy to set the muted state without having to write any JavaScript.

 If you need the audio to be muted you should use CSS. If you need to
 control volume dynamically you need scripting.

 I am not aware of a CSS property for media elements that lets you
 control the muted state. Can you link me to a specification?
Well, http://www.w3.org/TR/CSS2/aural.html defines volume and
play-during. Play-during can stop, autoplay and repeat sounds.
It's not obvious to me how this will apply to elements that represent
audiovisual content, but volume: silent; unambiguously mutes content.
Decorating audio (such as background music in games or videos)
seems to be even more easily styled for some reason. Multiple
soundtracks can be muxed and assigned different loudness.
Also @media aural {display: none;} can be used on audio elements,
but I haven't read the specs properly so I don't know if that would hide
a video element when inside of an @media aural clause.

CSS 3 aural is still to be done, so more capabilities may be suggested.

 Well, you have a point. That can be done by increasing the volume
 of the soundtrack itself, metadata (like embedded volume metadata in
 MPEG files) and should be possible in CSS. Adding it to HTML as well
 seems redundant.

 Are you saying that a Web author needs to edit the media resource in
 order to change the default volume setting for the resource? I think
 that's a bit of a stretch. Also, if you have a pointer to how this can
 be done in CSS, that would be highly appreciated.
Not necessarily, just pointing out that it would be a good idea to fix the
soundtrack if it's broken. CSS is perfect for these kinds of things, so I
recommend extending that rather than HTML.

-- 
kv,
  - Bjartur


Re: [whatwg] WebSockets: UDP

2010-06-01 Thread Kornel Lesinski
On 1 Jun 2010, at 11:12, Erik Möller wrote:

 The use case I'd like to address in this post is Real-time client/server 
 games.
 
 The majority of the on-line games of today use a client/server model over UDP 
 and we should try to give game developers the tools they require to create 
 browser based games. For many simpler games a TCP based protocol is exactly 
 what's needed but for most real-time games a UDP based protocol is a 
 requirement. Games typically send small updates to their servers at 20-30Hz over 
 UDP and can, with the help of entity interpolation and, if required, entity 
 extrapolation, cope well with intermittent packet loss. When packet loss 
 occurs in a TCP based protocol the entire stream of data is held up until the 
 packet is resent, meaning a game would have to revert to entity extrapolation 
 possibly over several seconds, leading to an unacceptable gameplay experience.
 
 It seems to me the WebSocket interface can be easily modified to cope with 
 UDP sockets (a wsd: scheme perhaps?) and it sounds like a good idea to 
 leverage the work already done for WebSockets in terms of interface and 
 framing.
 
 The most important distinction between ws: and wsd: is that messages sent by 
 send() in wsd: need not be acknowledged by the peer nor be resent. To keep 
 the interface the same to the largest possible extent I'd suggest 
 implementing a simple reliable 3-way handshake over UDP, keep-alive messages 
 (and timeouts) and reliable close frames. If these are implemented right the 
 interface in its entirety could be kept. Only one new readonly attribute 
 long maxMessageSize could be introduced to describe the min path MTU (perhaps 
 only valid once in connected mode, or perhaps set to 0 or 576 initially and 
 updated once in connected mode). This attribute could also be useful to 
 expose in ws: and wss: but in that case be set to the internal limit of the 
 browser / server.

SCTP would be ideal for this. It's connection-oriented, but supports 
multistreaming (can deliver messages out of order, without head of line 
blocking).

http://en.wikipedia.org/wiki/Stream_Control_Transmission_Protocol

-- 
regards, Kornel



Re: [whatwg] WebSockets: UDP

2010-06-01 Thread Philip Taylor
On Tue, Jun 1, 2010 at 11:12 AM, Erik Möller emol...@opera.com wrote:
 The use case I'd like to address in this post is Real-time client/server
 games.

 The majority of the on-line games of today use a client/server model over
 UDP and we should try to give game developers the tools they require to
 create browser based games. For many simpler games a TCP based protocol is
 exactly what's needed but for most real-time games a UDP based protocol is a
 requirement. [...]

 It seems to me the WebSocket interface can be easily modified to cope with
 UDP sockets [...]

As far as I'm aware, games use UDP because they can't use TCP (since
packet loss shouldn't stall the entire stream) and there's no
alternative but UDP. (And also because peer-to-peer usually requires
NAT punchthrough, which is much more reliable with UDP than with TCP).
They don't use UDP because it's a good match for their requirements,
it's just the only choice that doesn't make their requirements
impossible.

There are lots of features that seem very commonly desired in games: a
mixture of reliable and unreliable and reliable-but-unordered channels
(movement updates can be safely dropped but chat messages must never
be), automatic fragmentation of large messages, automatic aggregation
of small messages, flow control to avoid overloading the network,
compression, etc. And there's lots of libraries that build on top of
UDP to implement protocols halfway towards TCP in order to provide
those features:
http://msdn.microsoft.com/en-us/library/bb153248(VS.85).aspx,
http://opentnl.sourceforge.net/doxydocs/fundamentals.html,
http://www.jenkinssoftware.com/raknet/manual/introduction.html,
http://enet.bespin.org/Features.html, etc.

UDP sockets seem like a pretty inadequate solution for the use case of
realtime games - everyone would have to write their own higher-level
networking libraries (probably poorly and incompatibly) in JS to
provide the features that they really want. Browsers would lose the
ability to provide much security, e.g. flow control to prevent
intentional/accidental DOS attacks on the user's network, since they
would be too far removed from the application level to understand what
they should buffer or drop or notify the application about.

I think it'd be much more useful to provide a level of abstraction
similar to those game networking libraries - at least the ability to
send reliable and unreliable sequenced and unreliable unsequenced
messages over the same connection, with automatic
aggregation/fragmentation so you don't have to care about packet
sizes, and dynamic flow control for reliable messages and maybe some
static rate limit for unreliable messages. The API shouldn't expose
details of UDP (you could implement exactly the same API over TCP,
with better reliability but worse latency, or over any other protocols
that become well supported in the network).
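The automatic aggregation/fragmentation such a library-level API would hide from authors can be sketched as follows; the chunk header fields and the reassembly strategy are invented for illustration:

```javascript
// Split a message into chunks no larger than maxPayload, tagging each
// with an index and total so the receiver can reassemble.
function fragment(message, maxPayload) {
  const chunks = [];
  const total = Math.max(1, Math.ceil(message.length / maxPayload));
  for (let i = 0; i < total; i++) {
    chunks.push({
      index: i,
      total: total,
      payload: message.slice(i * maxPayload, (i + 1) * maxPayload),
    });
  }
  return chunks;
}

// Reassemble, tolerating out-of-order arrival by sorting on the index.
function reassemble(chunks) {
  return chunks
    .slice()
    .sort((a, b) => a.index - b.index)
    .map(c => c.payload)
    .join("");
}
```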

-- 
Philip Taylor
exc...@gmail.com


Re: [whatwg] audio and video: volume and muted as content attributes?

2010-06-01 Thread Silvia Pfeiffer
On Tue, Jun 1, 2010 at 9:09 PM, Bjartur Thorlacius svartma...@gmail.com wrote:
 On 5/31/10, Silvia Pfeiffer silviapfeiff...@gmail.com wrote:
 On Tue, Jun 1, 2010 at 6:48 AM, bjartur svartma...@gmail.com wrote:
I just came across a curious situation in the spec: IIUC, it seems the
@volume and @muted attributes are only IDL attributes and not content
attributes. This means that an author who is creating an audio-visual
Webpage has to use JavaScript to turn down (or up) the loudness of
their media elements or mute them rather than just being able to
specify this through content attributes.
If you want to control the volume for the user after the page loads
then yes, you'll need JavaScript.
I've searched the archives and didn't find a discussion or reasons for
this. Apologies if this has been discussed before.

I am guessing the reasons for not having them as content attributes is
that anything that requires muting of audio-visual content is assumed
 to need JavaScript anyway.

 Exactly.

However, if I have multiple videos on a page, all on autoplay, it
would be nice to turn off the sound of all of them without JavaScript.
With all the new CSS3 functionality, I can, for example, build a
spinning cube of video elements that are on autoplay or a marquee of
videos on autoplay - all of which would require muting the videos to
be bearable. If we added @muted to the content attributes, it would be
easy to set the muted state without having to write any JavaScript.

 If you need the audio to be muted you should use CSS. If you need to
 control volume dynamically you need scripting.

 I am not aware of a CSS property for media elements that lets you
 control the muted state. Can you link me to a specification?

 Well, http://www.w3.org/TR/CSS2/aural.html defines volume and
 play-during.

Interesting.

 Play-during can stop, autoplay and repeat sounds.
 It's not obvious to me how this will apply to elements that represent
 audiovisual content but volume: silent; unambiguously mutes content.
 Decorating audio (such as background music in games or videos)
 seems to be even more easily styled for some reason. Multiple
 soundtracks can be muxed and assigned different loudness.
 Also @media aural {display: none;} can be used on audio elements,
 but I haven't read the specs properly so I don't know if that would hide
 a video element when inside of an @media aural clause.

 CSS 3 aural is still to be done so more capabilities may be suggested.

 Well, you have a point. That can be done by increasing the volume
 of the soundtrack itself, metadata (like embedded volume metadata in
 MPEG files) and should be possible in CSS. Adding it to HTML as well
 seems redundant.

 Are you saying that a Web author needs to edit the media resource in
 order to change the default volume setting for the resource? I think
 that's a bit of a stretch. Also, if you have a pointer to how this can
 be done in CSS, that would be highly appreciated.
 Not necessarily, just pointing out that it would be a good idea to fix the
 soundtrack if it's broken. CSS is perfect for these kinds of things, so I
 recommend extending that rather than HTML.

Has there been any discussion about implementing support for CSS2
aural in Web browsers? Until such a time - and in fact independently
of that - I still think turning the existing volume and muted IDL
attributes into content attributes would be a nice and simple
solution. Introducing a whole CSS aural control section will take a lot
longer IMHO. Also, it won't hurt to have both - we do that for width
and height, too.

Cheers,
Silvia.


Re: [whatwg] audio and video: volume and muted as content attributes?

2010-06-01 Thread Lachlan Hunt

On 2010-06-01 13:09, Bjartur Thorlacius wrote:

On 5/31/10, Silvia Pfeiffersilviapfeiff...@gmail.com  wrote:

I am not aware of a CSS property for media elements that lets you
control the muted state. Can you link me to a specification?


Well, http://www.w3.org/TR/CSS2/aural.html defines volume and
play-during. Play-during can stop, autoplay and repeat sounds.
It's not obvious to me how this will apply to elements that represent
audiovisual content but volume: silent; unambiguously mutes content.


Those properties were designed for aural browsers using speech synthesis 
to read the content of a page, not to control multimedia in a page 
itself.  Also, attempting to hijack those properties for use with 
multimedia content could create difficulties as you would have to define 
how the HTMLMediaElement's volume and muted properties interact with 
those CSS properties, if at all.


--
Lachlan Hunt - Opera Software
http://lachy.id.au/
http://www.opera.com/


Re: [whatwg] WebSockets: UDP

2010-06-01 Thread Erik Möller
On Tue, 01 Jun 2010 13:34:51 +0200, Philip Taylor  
excors+wha...@gmail.com wrote:



On Tue, Jun 1, 2010 at 11:12 AM, Erik Möller emol...@opera.com wrote:

The use case I'd like to address in this post is Real-time client/server
games.

The majority of the on-line games of today use a client/server model over
UDP and we should try to give game developers the tools they require to
create browser based games. For many simpler games a TCP based protocol is
exactly what's needed but for most real-time games a UDP based protocol is a
requirement. [...]

It seems to me the WebSocket interface can be easily modified to cope with
UDP sockets [...]


As far as I'm aware, games use UDP because they can't use TCP (since
packet loss shouldn't stall the entire stream) and there's no
alternative but UDP. (And also because peer-to-peer usually requires
NAT punchthrough, which is much more reliable with UDP than with TCP).
They don't use UDP because it's a good match for their requirements,
it's just the only choice that doesn't make their requirements
impossible.

There are lots of features that seem very commonly desired in games: a
mixture of reliable and unreliable and reliable-but-unordered channels
(movement updates can be safely dropped but chat messages must never
be), automatic fragmentation of large messages, automatic aggregation
of small messages, flow control to avoid overloading the network,
compression, etc. And there's lots of libraries that build on top of
UDP to implement protocols halfway towards TCP in order to provide
those features:
http://msdn.microsoft.com/en-us/library/bb153248(VS.85).aspx,
http://opentnl.sourceforge.net/doxydocs/fundamentals.html,
http://www.jenkinssoftware.com/raknet/manual/introduction.html,
http://enet.bespin.org/Features.html, etc.

UDP sockets seem like a pretty inadequate solution for the use case of
realtime games - everyone would have to write their own higher-level
networking libraries (probably poorly and incompatibly) in JS to
provide the features that they really want. Browsers would lose the
ability to provide much security, e.g. flow control to prevent
intentional/accidental DOS attacks on the user's network, since they
would be too far removed from the application level to understand what
they should buffer or drop or notify the application about.

I think it'd be much more useful to provide a level of abstraction
similar to those game networking libraries - at least the ability to
send reliable and unreliable sequenced and unreliable unsequenced
messages over the same connection, with automatic
aggregation/fragmentation so you don't have to care about packet
sizes, and dynamic flow control for reliable messages and maybe some
static rate limit for unreliable messages. The API shouldn't expose
details of UDP (you could implement exactly the same API over TCP,
with better reliability but worse latency, or over any other protocols
that become well supported in the network).



I've never heard any gamedevs complain about how poorly UDP matches their
needs, so I'm not so sure about that, but you may be right that it would be
better to have a higher level abstraction. If we are indeed targeting the
game development community we should ask for their feedback rather than
guessing what they prefer. I will grep my LinkedIn account for game-devs
tonight and see if I can gather some feedback.


I suspect they prefer to be empowered with UDP rather than boxed into a  
high level protocol that doesn't fit their needs but I may be wrong.  
Those who have the knowledge, time and desire to implement their own  
reliable channels/flow control/security over UDP would be free to do so,  
those who couldn't care less can always use ws: or wss: for their reliable  
traffic and just use UDP where necessary.


So the question to the gamedevs will be as follows (please make suggestions
for changes and I'll do an email round tonight):


If browser and server vendors agree on and standardize a socket based  
network interface to be used for real-time games running in the browsers,  
at what level would you prefer the interface to be?
(Note that an interface for communicating reliably via TCP and TLS is
already implemented.)

- A low-level interface similar to a plain UDP socket
- A medium-level interface allowing for reliable and unreliable channels,  
automatically compressed data, flow control, data priority etc

- A high-level interface with ghosted entities

Oh, and I guess we should continue this discussion on the HyBi list... my  
fault for not posting there in the first place.


--
Erik Möller
Core Developer
Opera Software


Re: [whatwg] audio and video: volume and muted as content attributes?

2010-06-01 Thread Boris Zbarsky

On 6/1/10 7:09 AM, Bjartur Thorlacius wrote:

Also @media aural {display: none;} can be used on audio elements
but I haven't read the specs properly so I don't know if that would hide
a video element when inside of an @media aural clause.


You seem to be somewhat confused about the way media are used in CSS.

A medium is a property of the way the entire document is being 
presented.  Typical values one runs into with desktop web browsers are 
"screen" and "print".  The spec you link to is for the "aural" and 
"speech" media.

So in particular, rules inside @media aural {} will get ignored in all 
desktop browsers.  (Your example has a declaration directly inside 
@media, which is just a parse error, but I assume you meant putting an 
actual rule that assigns display:none to a particular element in the 
@media rule).


But more to the point, since the aural properties only apply to aural 
and speech media it would require a pretty major CSS spec change to 
make them mean anything in the screen medium, which is what you're 
proposing.


-Boris



Re: [whatwg] WebSockets: UDP

2010-06-01 Thread Mike Belshe
On Tue, Jun 1, 2010 at 4:24 AM, Kornel Lesinski kor...@geekhood.net wrote:

 On 1 Jun 2010, at 11:12, Erik Möller wrote:

  The use case I'd like to address in this post is Real-time client/server
 games.
 
  The majority of the on-line games of today use a client/server model over
 UDP and we should try to give game developers the tools they require to
 create browser based games. For many simpler games a TCP based protocol is
 exactly what's needed but for most real-time games a UDP based protocol is a
 requirement. Games typically send small updates to their servers at 20-30Hz
 over UDP and can, with the help of entity interpolation and, if required,
 entity extrapolation, cope well with intermittent packet loss. When packet
 loss occurs in a TCP based protocol the entire stream of data is held up
 until the packet is resent, meaning a game would have to revert to entity
 extrapolation, possibly over several seconds, leading to an unacceptable
 gameplay experience.
 
  It seems to me the WebSocket interface can be easily modified to cope
 with UDP sockets (a wsd: scheme perhaps?) and it sounds like a good idea to
 leverage the work already done for WebSockets in terms of interface and
 framing.
 
  The most important distinction between ws: and wsd: is that messages sent
 by send() in wsd: need not be acknowledged by the peer nor be resent. To
 keep the interface the same to the largest possible extent I'd suggest
 implementing a simple reliable 3-way handshake over UDP, keep-alive messages
 (and timeouts) and reliable close frames. If these are implemented right the
 interface in its entirety could be kept. Only one new readonly attribute
 long maxMessageSize could be introduced to describe the min path MTU
 (perhaps only valid once in connected mode, or perhaps set to 0 or 576
 initially and updated once in connected mode). This attribute could also be
 useful to expose in ws: and wss: but in that case be set to the internal
 limit of the browser / server.

 SCTP would be ideal for this. It's connection-oriented, but supports
 multistreaming (can deliver messages out of order, without head of line
 blocking).

 http://en.wikipedia.org/wiki/Stream_Control_Transmission_Protocol


FYI:   SCTP is effectively non-deployable on the internet today due to NAT.

+1 on finding ways to enable UDP.  It's a key missing component to the web
platform.

Mike


Re: [whatwg] WebSockets: UDP

2010-06-01 Thread John Tamplin
On Tue, Jun 1, 2010 at 11:34 AM, Mike Belshe m...@belshe.com wrote:

 FYI:   SCTP is effectively non-deployable on the internet today due to NAT.

 +1 on finding ways to enable UDP.  It's a key missing component to the web
 platform.


But there is so much infrastructure that would have to be enabled to use UDP
from a web app.  How would proxies be handled?  Even if specs were written
and implementations available, how many years would it be before corporate
proxies/firewalls supported WebSocket over UDP?

I am all for finding a way to get datagram communication from a web app, but
I think it will take a long time and shouldn't hold up current WebSocket
work.

-- 
John A. Tamplin
Software Engineer (GWT), Google


Re: [whatwg] WebSockets: UDP

2010-06-01 Thread Mike Belshe
On Tue, Jun 1, 2010 at 8:52 AM, John Tamplin j...@google.com wrote:

 On Tue, Jun 1, 2010 at 11:34 AM, Mike Belshe m...@belshe.com wrote:

 FYI:   SCTP is effectively non-deployable on the internet today due to
 NAT.

 +1 on finding ways to enable UDP.  It's a key missing component to the web
 platform.


 But there is so much infrastructure that would have to be enabled to use
 UDP from a web app.  How would proxies be handled?  Even if specs were
 written and implementations available, how many years would it be before
 corporate proxies/firewalls supported WebSocket over UDP?


Agree - nobody said it would be trivial.  There are so many games
successfully doing it today that it is clearly viable.  For games in
particular, they have had to document to their users how to configure their
home routers, and that has been successful too.  If you talk with game
writers - there is a class of games where UDP is just better (e.g. those
communicating real-time, interactive position and other info).  If we can
enable that through the web platform, that is good.



 I am all for finding a way to get datagram communication from a web app,
 but I think it will take a long time and shouldn't hold up current WebSocket
 work.


Agree - no need to stall existing work.

Mike



 --
 John A. Tamplin
 Software Engineer (GWT), Google



Re: [whatwg] WebSockets: UDP

2010-06-01 Thread Erik Möller

On Tue, 01 Jun 2010 18:45:51 +0200, Mike Belshe m...@belshe.com wrote:


On Tue, Jun 1, 2010 at 8:52 AM, John Tamplin j...@google.com wrote:
[...]

Agree - nobody said it would be trivial.  There are so many games
successfully doing it today that it is clearly viable. [...]  If we can
enable that through the web platform, that is good.

[...]

Agree - no need to stall existing work.

Mike

I don't think proxies and firewalls are going to be a major problem; as
Mike said, the myriad of UDP games out there seem to do just fine in the
real world. Sure, there will be corporate firewalls and proxies blocking
employees from fragging their colleagues when the boss is in a meeting,
but I guess they're partly put there to prevent just that, so we probably
shouldn't try to combat it.
If we were talking about peer-to-peer UDP it'd be a whole new ballgame,
but that's why I specifically said the use case was for client/server
games. I don't think we should attempt peer-to-peer before WebSockets is
all done and shipped.


I fully agree any discussions on UDP (or another protocol) shouldn't stall
the existing work, but right now there seems to be very little activity
anyway.


--
Erik Möller
Core Developer
Opera Software


Re: [whatwg] WebSockets: UDP

2010-06-01 Thread Philip Taylor
On Tue, Jun 1, 2010 at 2:00 PM, Erik Möller emol...@opera.com wrote:
 [...]
 I've never heard any gamedevs complain how poorly UDP matches their needs so
 I'm not so sure about that, but you may be right it would be better to have
 a higher level abstraction. If we are indeed targeting the game developing
 community we should ask for their feedback rather than guessing what they
 prefer. I will grep my linked-in account for game-devs tonight and see if I
 can gather some feedback.

More feedback is certainly good, though I think the libraries I
mentioned (DirectPlay/OpenTNL/RakNet/ENet (there's probably more)) are
useful as an indicator of common real needs (as opposed to edge-case
or merely perceived needs) - they've been used by quite a few games
and they seem to have largely converged on a core set of features, so
that's better than just guessing.

I guess many commercial games write their own instead of reusing
third-party libraries, and I guess they often reimplement very similar
concepts to these, but it would be good to have more reliable
information about that.

 I suspect they prefer to be empowered with UDP rather than boxed into a
 high level protocol that doesn't fit their needs but I may be wrong.

If you put it like that, I don't see why anybody would not want to be
empowered :-)

But that's not the choice, since they could never really have UDP -
the protocol will perhaps have to be Origin-based, connection-oriented
(to exchange Origin information etc), with complex packet headers so
you can't trick it into talking to a DNS server, with rate limiting in
the browser to prevent DOS attacks, restricted to client-server (no
peer-to-peer since you probably can't run a socket server in the
browser), etc.
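
The in-browser rate limiting mentioned here is commonly done with a token bucket. A minimal sketch, with invented parameters (one token per datagram, refilled at a fixed rate); explicit timestamps are passed in to keep the example deterministic:

```python
# Illustrative token-bucket rate limiter of the kind a browser could
# apply to outgoing datagrams; rate and burst values are made up.
class TokenBucket:
    def __init__(self, rate_per_s, burst, now=0.0):
        self.rate = rate_per_s      # tokens added per second
        self.burst = burst          # maximum tokens held at once
        self.tokens = float(burst)
        self.last = now

    def allow(self, now):
        # refill proportionally to elapsed time, capped at the burst size
        self.tokens = min(self.burst,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:      # spend one token per datagram
            self.tokens -= 1.0
            return True
        return False                # over the limit: drop or queue
```

In a real implementation `now` would come from a monotonic clock, and a rejected datagram would likely surface as an error or silent drop to the page.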

Once you've got all that, a simple UDP-socket-like API might not be
the most natural or efficient way to implement a higher-level
partially-reliable protocol - the application couldn't cooperate with
the low-level network buffering to prioritise certain messages, it
couldn't use the packet headers that have already been added on top of
UDP, it would have to send acks from a script callback which may add
some latency after a packet is received from the network, etc. So I
think there's some tradeoffs and it's not a question of one low-level
protocol vs one strictly more restrictive higher-level protocol.

 So the question to the gamedevs will be, and please make suggestions for
 changes and I'll do an email round tonight:

 If browser and server vendors agree on and standardize a socket based
 network interface to be used for real-time games running in the browsers, at
 what level would you prefer the interface to be?
 (Note that an interface for communicating reliably via TCP and TLS are
 already implemented.)
 - A low-level interface similar to a plain UDP socket
 - A medium-level interface allowing for reliable and unreliable channels,
 automatically compressed data, flow control, data priority etc
 - A high-level interface with ghosted entities

That first option sounds like you're offering something very much like
a plain UDP socket (and I guess anyone who's willing to write their
own high-level wrapper (which is only hundreds or thousands of lines
of code and not a big deal for a complex game) would prefer that since
they want as much power as possible), but (as above) I think that's
misleading - it's really a UDP interface on top of a protocol that has
some quite different characteristics to UDP. So I think the question
should be clearer that the protocol will necessarily include various
features and restrictions on top of UDP, and the choice is whether it
includes the minimal set of features needed for security and hides
them behind a UDP-like interface or whether it includes higher-level
features and exposes them in a higher-level interface.

-- 
Philip Taylor
exc...@gmail.com


Re: [whatwg] WebSockets: UDP

2010-06-01 Thread Erik Möller
On Tue, 01 Jun 2010 21:14:33 +0200, Philip Taylor  
excors+wha...@gmail.com wrote:



More feedback is certainly good, though I think the libraries I
mentioned (DirectPlay/OpenTNL/RakNet/ENet (there's probably more)) are
useful as an indicator of common real needs (as opposed to edge-case
or merely perceived needs) - they've been used by quite a few games
and they seem to have largely converged on a core set of features, so
that's better than just guessing.

I guess many commercial games write their own instead of reusing
third-party libraries, and I guess they often reimplement very similar
concepts to these, but it would be good to have more reliable
information about that.



I was hoping to avoid looking at what the interfaces of a high- vs
low-level option would look like this early in the discussions, but
perhaps we need to do just that: look at Torque, RakNet etc. and find a
least common denominator, and see what the reactions would be to such an
interface. Game companies are pretty restrictive about what they discuss,
but I think I know enough game devs to at least get some good feedback on
what would be required to make it work well with their engine/game.


I suspect they prefer to be empowered with UDP rather than boxed into a
high level protocol that doesn't fit their needs but I may be wrong.


If you put it like that, I don't see why anybody would not want to be
empowered :-)


Yeah, I wouldn't put it like that when asking :) I'm really not trying to
sell my view; I'd just like to see real browser gaming in the not too
distant future.




But that's not the choice, since they could never really have UDP -
the protocol will perhaps have to be Origin-based, connection-oriented
(to exchange Origin information etc), with complex packet headers so
you can't trick it into talking to a DNS server, with rate limiting in
the browser to prevent DOS attacks, restricted to client-server (no
peer-to-peer since you probably can't run a socket server in the
browser), etc.

[...]


That first option sounds like you're offering something very much like
a plain UDP socket (and I guess anyone who's willing to write their
own high-level wrapper (which is only hundreds or thousands of lines
of code and not a big deal for a complex game) would prefer that since
they want as much power as possible), but (as above) I think that's
misleading - it's really a UDP interface on top of a protocol that has
some quite different characteristics to UDP. So I think the question
should be clearer that the protocol will necessarily include various
features and restrictions on top of UDP, and the choice is whether it
includes the minimal set of features needed for security and hides
them behind a UDP-like interface or whether it includes higher-level
features and exposes them in a higher-level interface.


So, what would the minimal set of limitations be to make a UDP WebSocket  
browser-safe?


-No listen sockets
-No multicast
-Reliable handshake with origin info
-Automatic keep-alives
-Reliable close handshake
-Socket is bound to one address for the duration of its lifetime
-Sockets open sequentially (like current DOS protection in WebSockets)
-Cap on number of open sockets per server and total per user agent
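
A rough sketch of how a user agent might track two of these limitations (sequential socket opening and the per-server cap); the class name and limit are invented for illustration, not taken from any spec:

```python
# Hypothetical user-agent-side bookkeeping: only one handshake in
# flight per server at a time, and a cap on open sockets per server.
class UdpSocketManager:
    MAX_PER_SERVER = 8              # illustrative cap, not from a spec

    def __init__(self):
        self.open_counts = {}       # server -> number of open sockets
        self.connecting = set()     # servers with a handshake in flight

    def may_open(self, server):
        if server in self.connecting:
            return False            # sequential opens: one at a time
        if self.open_counts.get(server, 0) >= self.MAX_PER_SERVER:
            return False            # per-server cap reached
        self.connecting.add(server)
        return True

    def opened(self, server):       # handshake succeeded
        self.connecting.discard(server)
        self.open_counts[server] = self.open_counts.get(server, 0) + 1

    def closed(self, server):       # close handshake completed
        self.open_counts[server] -= 1
```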


--
Erik Möller
Core Developer
Opera Software


Re: [whatwg] audio and video: volume and muted as content attributes?

2010-06-01 Thread Bjartur Thorlacius
Bjartur Thorlacius svartma...@gmail.com wrote:
 Play-during can stop, autoplay and repeat sounds.
 It's not obvious to me how this will apply to elements that represent
 audiovisual content but volume: silent; unambiguously mutes content.
 Decorating audio (such as background music in games or videos)
 seem to be even more easily styled for some reason. Multiple
 soundtracks can be muxed and assigned different loudness.
 Also @media aural {display: none;} can be used on audio elements
 but I haven't read the specs properly so I don't know if that would hide
 a video element when inside of an @media aural clause.

 CSS 3 aural has still to be done so more capabilities may be suggested.

Has there been any discussion about implementing support for CSS2
aural in Web browsers? Until such a time - and in fact independently
of that - I still think turning the existing volume and muted IDL
attributes into content attributes would be a nice and simple
solution. Introducing a whole CSS aural control section will take lots
longer IMHO. Also, it won't hurt to have both - we do that for width
and height, too.
It seems much more The Right Way(tm) to do such things in CSS.
Browsers don't have to conform to the whole aural specification nor
the speech module of CSS 3. I think CSS 3 will have separate speech
and aural modules, which would solve the problem entirely. Note also
that CSS 2 aural allows styling of cues. As there's a workaround
implementation time isn't the number one priority.
There's a need for the capability, so saying that it shouldn't be
implemented because of a lack of discussion seems weird.
Lachlan Hunt lachlan.h...@lachy.id.au wrote:
On 2010-06-01 13:09, Bjartur Thorlacius wrote:
 On 5/31/10, Silvia Pfeiffer silviapfeiff...@gmail.com  wrote:
 I am not aware of a CSS property for media elements that lets you
 control the muted state. Can you link me to a specification?

 Well, http://www.w3.org/TR/CSS2/aural.html defines volume and
 play-during. Play-during can stop, autoplay and repeat sounds.
 It's not obvious to me how this will apply to elements that represent
 audiovisual content but volume: silent; unambiguously mutes content.

Those properties were designed for aural browsers using speech synthesis 
to read the content of a page, not to control multimedia in a page 
itself.
Well, sounds are to speech/text as images are to (written) text.
You can float both paragraphs and images because to CSS they're just boxes.
I don't see a reason not to allow authors to control the volume of sound
if they can do so with speech.

As for play-during it's so general that it might be included in interactive
visual media as well.
Also, attempting to hijack those properties for use with 
multimedia content could create difficulties as you would have to define 
how the HTMLMediaElement's volume and muted properties interact with 
those CSS properties, if at all.
How's it done for other visual/behavioral content attributes in HTML?
Align, color and in fact most of the attributes of font have similar
problems.


Re: [whatwg] WebSockets: UDP

2010-06-01 Thread James Salsman
I agree UDP sockets are a legitimate, useful option, with applications
far beyond games.  In most cases TCP is fine, but adaptive bit-rate
vocoders, for example, can use packet loss as an adaptation parameter,
and choose only to retransmit some of the more essential packets in
cases of congestion.  I am not suggesting that javascript applications
should implement adaptive bit-rate vocoding (until a fast
cross-platform javascript signal processing library is developed, if
then), but there are reasons that a web application might want to send
both reliable and unreliable traffic, almost all of them having to do
with adapting to bandwidth constraints.
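
The kind of loss-driven adaptation described above reduces to a simple policy: back off when the observed loss rate is high, probe upward when it is low. An illustrative sketch with made-up thresholds and step sizes:

```python
# Illustrative sketch (not from the thread): adapting a send bitrate to
# an observed packet-loss rate, the sort of policy an adaptive vocoder
# applies. Thresholds and step sizes are invented for the example.
def adapt_bitrate(current_kbps, sent, acked,
                  floor_kbps=8, ceil_kbps=64):
    loss = 1.0 - (acked / sent) if sent else 0.0
    if loss > 0.05:                  # heavy loss: halve the rate
        return max(floor_kbps, current_kbps // 2)
    if loss < 0.01:                  # clean channel: probe upward
        return min(ceil_kbps, current_kbps + 4)
    return current_kbps              # moderate loss: hold steady
```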

On Tue, Jun 1, 2010 at 8:52 AM, John Tamplin j...@google.com wrote:

... How would proxies be handled?

UDP is supposed to never be buffered, not even by proxies. Proxies are
supposed to simply forward UDP without logging.  Lots of them don't
forward any UDP, and a lot of them probably log the traffic.

 Even if specs were written and implementations available, how many years
 would it be before corporate proxies/firewalls supported WebSocket over UDP?

Maybe UDP adoption would follow adoption of SIP and RTP.  Has anyone
measured the current rate of UDP transmission availability from the
typical corporate client host?

On Tue, Jun 1, 2010 at 1:02 PM, Erik Möller emol...@opera.com wrote:

 what would the minimal set of limitations be to make a UDP WebSocket 
 browser-safe?

 -No listen sockets

For outgoing-from-client UDP, client-initiated TCP streams for
incoming responses and packet acknowledgment may be maximally
NAT-safe.

 -No multicast

People will eventually ask for it, but forwarding to it through
servers known to be free from NATs is preferable.

 -Reliable handshake with origin info

Nothing about UDP is reliable; you just send packets and hope they get there.

 -Automatic keep-alives

You mean on the incoming-to-client TCP channel in the opposite
direction from the UDP traffic?

 -Reliable close handshake

Can we use REST/HTTP/HTTPS persistent connections for this?

 -Socket is bound to one address for the duration of its lifetime

That sounds reasonable, but clients do change IP addresses now and
then, so maybe there should be some anticipation of this possibility?

 -Sockets open sequentially (like current DOS protection in WebSockets)

Do you mean their sequence numbers should be strictly increasing
incrementally until they roll over?

 -Cap on number of open sockets per server and total per user agent

There was some discussion that people rarely check for the error
condition when such caps are exhausted, so I'm not sure whether that
should be the same as the system cap, or some fraction, or dozens, or
a developer-configurable parameter.


Re: [whatwg] [hybi] WebSockets: UDP

2010-06-01 Thread Mark Frohnmayer
On Tue, Jun 1, 2010 at 1:02 PM, Erik Möller emol...@opera.com wrote:

 I was hoping to be able to avoid looking at what the interfaces of a high vs
 low level option would look like this early on in the discussions, but
 perhaps we need to do just that; look at Torque, RakNet etc and find a least
 common denominator and see what the reactions would be to such an interface.
 Game companies are pretty restrictive about what they discuss, but I think I
 know enough game devs to at least get some good feedback on what would be
 required to make it work well with their engine/game.

Glad to see this discussion rolling!  For what it's worth, the Torque
Sockets design effort was to take a stab at answering this question --
what is the least-common-denominator webby API/protocol that's
sufficiently useful to be a common foundation for real time games.  I
did the first stab at porting OpenTNL (now tnl2) atop it; from my
reading of the RTP protocol that should easily layer as well, but it
would be worth getting the perspective of some other high-level
network stack folks (RakNet, etc).


 I suspect they prefer to be empowered with UDP rather than boxed into
 a
 high level protocol that doesn't fit their needs but I may be wrong.

 If you put it like that, I don't see why anybody would not want to be
 empowered :-)

 Yeah I wouldn't put it like that when asking :) I'm really not trying to
 sell my view, I just like to see real browser gaming in a not too distant
 future.

Hmm... given the number of different approaches to higher-level game
networking, I'd hate to see a high-level straitjacket where a
well-conceived low level API could easily support all of the existing
solutions out there.  The more complex the API the larger the attack
surface at the trusted level and the more difficulty in getting
existing stakeholders (game and browser makers) on board.


 So, what would the minimal set of limitations be to make a UDP WebSocket
 browser-safe?

 -No listen sockets

Only feedback here would be I think p2p should be looked at in this
pass -- many client/server game instances are peers from the
perspective of the hosting service (XBox Live, Quake, Half-Life,
Battle.net) -- forcing all game traffic to pass through the hosting
domain is a severe constraint.  My question -- what does a webby p2p
solution look like regarding Origin restrictions, etc?

 -No multicast

 -Reliable handshake with origin info
 -Automatic keep-alives
 -Reliable close handshake

While we're at it, I'd add key exchange, encryption and client puzzles
to the handshake, to reduce connection-depletion and CPU-depletion
attacks.  The protocol you seem to be aiming for isn't UDP -- rather,
it's more like a connected, unreliable packet stream between hosts.

 -Socket is bound to one address for the duration of its lifetime
 -Sockets open sequentially (like current DOS protection in WebSockets)
 -Cap on number of open sockets per server and total per user agent

A single UDP socket can host multiple connections (indexed by packet
source address), so even a modest limit on actual number of sockets
wouldn't be a big impediment.

I'd also advocate for packet delivery notification to be a part of the
API -- with a known success or failure status on packet delivery many
of the higher level data transmission policies become trivial, and
should be essentially zero overhead at the protocol level.  Without
notification the higher level code has to do manual packet
acknowledgement as Phil mentioned.
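
Delivery notification of the kind proposed here amounts to the sender keeping a map of unacknowledged sequence numbers and firing a status callback on ack or timeout. A hypothetical sketch (class and method names invented; the actual wire send is elided):

```python
# Hypothetical sender-side delivery notification: each outgoing packet
# gets a sequence number, and an ack (or timeout) fires a status
# callback so higher-level code learns whether delivery succeeded.
class NotifyingSender:
    def __init__(self, on_status):
        self.next_seq = 0
        self.pending = {}           # seq -> payload awaiting ack
        self.on_status = on_status  # called with (seq, delivered: bool)

    def send(self, payload):
        seq = self.next_seq
        self.next_seq += 1
        self.pending[seq] = payload  # wire send of (seq, payload) elided
        return seq

    def handle_ack(self, seq):       # peer acknowledged this packet
        if self.pending.pop(seq, None) is not None:
            self.on_status(seq, True)

    def handle_timeout(self, seq):   # gave up waiting for an ack
        if self.pending.pop(seq, None) is not None:
            self.on_status(seq, False)
```

With this in place, policies like "retransmit only essential packets" become a few lines in the status callback rather than a hand-rolled ack protocol.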

Regards,
Mark


Re: [whatwg] [hybi] WebSockets: UDP

2010-06-01 Thread Scott Hess
On Tue, Jun 1, 2010 at 4:07 PM, Mark Frohnmayer
mark.frohnma...@gmail.com wrote:
 On Tue, Jun 1, 2010 at 1:02 PM, Erik Möller emol...@opera.com wrote:
 So, what would the minimal set of limitations be to make a UDP WebSocket
 browser-safe?

 -No listen sockets

 Only feedback here would be I think p2p should be looked at in this
 pass -- many client/server game instances are peers from the
 perspective of the hosting service (XBox Live, Quake, Half-Life,
 Battle.net) -- forcing all game traffic to pass through the hosting
 domain is a severe constraint.  My question -- what does a webby p2p
 solution look like regarding Origin restrictions, etc?

Unix domain sockets allow you to pass file descriptors between
processes.  It might be interesting to pass a WebSocket endpoint
across a WebSocket.  If the clients can punch through NATs, it becomes
a direct peer-to-peer connection, otherwise it gets proxied through
the server.  Probably makes implementations excessively complicated,
though.  UDP-style would be easier (no need to worry about data
received by the server after it initiates pushing the endpoint to the
other client - just drop it on the floor).

-scott


Re: [whatwg] [hybi] WebSockets: UDP

2010-06-01 Thread Mark Frohnmayer
On Tue, Jun 1, 2010 at 4:35 PM,  l.w...@surrey.ac.uk wrote:
 On 2 Jun 2010, at 00:07, Mark Frohnmayer wrote:
 A single UDP socket can host multiple connections (indexed by packet
 source address), so even a modest limit on actual number of sockets
 wouldn't be a big impediment.

 Um, NAT?

You would want to index by the NAT'd address.  In the case of peer
introduction and connection a third party is needed to provide the
external send-to address.

Is that what you were asking?

Regards,
Mark


Re: [whatwg] [hybi] WebSockets: UDP

2010-06-01 Thread Ben Garney
On Tue, Jun 1, 2010 at 5:12 PM, Mark Frohnmayer
mark.frohnma...@gmail.comwrote:

 On Tue, Jun 1, 2010 at 4:35 PM,  l.w...@surrey.ac.uk wrote:
  On 2 Jun 2010, at 00:07, Mark Frohnmayer wrote:
  A single UDP socket can host multiple connections (indexed by packet
  source address), so even a modest limit on actual number of sockets
  wouldn't be a big impediment.
 
  Um, NAT?

 You would want to index by the NAT'd address.  In the case of peer
 introduction and connection a third party is needed to provide the
 external send-to address.


In some cases you need to use UPnP to get through the NAT, but in
general the 3rd-party connection facilitator will help. UPnP is mostly
needed so that clients can _host_, which is not the goal here.

If we assume a public, carefully set up UDP host, then nearly anyone can
connect if UDP is allowed at all. No NAT is required in this case. And I
think this is the common case, since we are not trying to run service hosts
in the browser at this time.

If you have any sort of connection identifier (typically port will be
different even if IP is not), then you can multiplex by that.
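
The multiplexing described above is essentially a dictionary keyed by each datagram's source (ip, port); a minimal sketch (names invented):

```python
# Sketch of one UDP socket serving many connections, demultiplexed by
# the (ip, port) pair each datagram arrives from.
class Demux:
    def __init__(self):
        self.connections = {}    # (ip, port) -> received payloads

    def on_datagram(self, payload, source):
        # first datagram from a new source implicitly opens a connection
        conn = self.connections.setdefault(source, [])
        conn.append(payload)
        return conn
```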

(Also, hi! This is my first post. I'm Ben Garney, I worked at PushButton
Labs on Flash game technology (www.pushbuttonengine.com,
www.pushbuttonlabs.com). Naturally, seeing browser capabilities expand
either by plugin or native capabilities is exciting. Before I worked at PBL
I worked with Mark at GarageGames on networking technology, among other
things.)

Ben


[whatwg] ISSUE-86, Re: hixie: Remove the HTML-to-Atom mapping definition from the W3C version of the spec. (whatwg r5100)

2010-06-01 Thread Julian Reschke

Hi Ian,

thanks for the removal.

I notice that you kept the text in the WHATWG version of the spec.

Various problems have been reported with respect to the mapping, notably

  http://www.w3.org/Bugs/Public/show_bug.cgi?id=7806

and

  http://www.w3.org/Bugs/Public/show_bug.cgi?id=9546

and in the Working Group discussions around

  http://www.w3.org/html/wg/tracker/issues/86

Please consider them raised (and still open) as per the WHATWG issue 
tracking rules.


Best regards, Julian


On 02.06.2010 06:20, poot wrote:

hixie: Remove the HTML-to-Atom mapping definition from the W3C version
of the spec. (whatwg r5100)

http://dev.w3.org/cvsweb/html5/spec/Overview.html?r1=1.4095r2=1.4096f=h
http://html5.org/tools/web-apps-tracker?from=5099to=5100

===
RCS file: /sources/public/html5/spec/Overview.html,v
retrieving revision 1.4095
retrieving revision 1.4096
diff -u -d -r1.4095 -r1.4096
--- Overview.html   1 Jun 2010 04:26:11 -   1.4095
+++ Overview.html   2 Jun 2010 04:19:31 -   1.4096
@@ -287,7 +287,7 @@

 h1HTML5/h1
 h2 class=no-num no-toc 
id=a-vocabulary-and-associated-apis-for-html-and-xhtmlA vocabulary and associated APIs for HTML 
and XHTML/h2
-h2 class=no-num no-toc id=editor-s-draft-1-june-2010Editor's Draft 1 June 
2010/h2
+h2 class=no-num no-toc id=editor-s-draft-2-june-2010Editor's Draft 2 June 
2010/h2
 dldtLatest Published Version:/dt
  dda 
href=http://www.w3.org/TR/html5/;http://www.w3.org/TR/html5//a/dd
  dtLatest Editor's Draft:/dt
@@ -390,7 +390,7 @@
Group/a  is the W3C working group responsible for this
specification's progress along the W3C Recommendation
track.
-  This specification is the 1 June 2010 Editor's Draft.
+  This specification is the 2 June 2010 Editor's Draft.
/p!-- UNDER NO CIRCUMSTANCES IS THE PRECEDING PARAGRAPH TO BE REMOVED OR EDITED WITHOUT TALKING TO IAN 
FIRST --!-- relationship to other work (required) --pThe contents of this specification are also 
part ofa href=http://www.whatwg.org/specs/web-apps/current-work/multipage/;a
specification/a  published by thea 
href=http://www.whatwg.org/;WHATWG/a, which is available under a
license that permits reuse of the specification text./p!-- UNDER NO CIRCUMSTANCES IS THE FOLLOWING 
PARAGRAPH TO BE REMOVED OR EDITED WITHOUT TALKING TO IAN FIRST --!-- required patent boilerplate 
--pThis document was produced by a group operating under thea 
href=http://www.w3.org/Consortium/Patent-Policy-20040205/;5
@@ -867,9 +867,7 @@
  ol
   lia href=#selectorsspan 
class=secno4.14.1/spanCase-sensitivity/a/li
   lia href=#pseudo-classesspan 
class=secno4.14.2/spanPseudo-classes/a/ol/li
-lia href=#converting-html-to-other-formatsspan 
class=secno4.15/spanConverting HTML to other formats/a
-ol
-lia href=#atomspan class=secno4.15.1/spanAtom/a/ol/ol/li
+lia href=#converting-html-to-other-formatsspan class=secno4.15/spanConverting HTML 
to other formats/a/ol/li
   lia href=#browsersspan class=secno5/spanLoading Web pages/a
ol
 lia href=#windowsspan class=secno5.1/spanBrowsing contexts/a
@@ -40034,457 +40032,6 @@
h3 id=converting-html-to-other-formatsspan class=secno4.15/spanConverting HTML to other formats/h3p 
class=XXX annotationbStatus:/biLast call for comments/i/p


-h4 id=atomspan class=secno4.15.1/spanAtom/h4p class=XXX annotationbStatus:/biLast call for 
comments./ispana href=http://www.w3.org/html/wg/tracker/issues/86;ISSUE-86/a  (atom-id-stability) blocks progress to Last Call/span/p
-
-pGiven acodea href=#documentDocument/a/code  var 
title=source/var, a user
-  agent may run the following algorithm todfn id=extracting-atom 
title=extracting
-  Atomextract an Atom feed/dfn. This is not the only algorithm
-  that can be used for this purpose; for instance, a user agent might
-  instead use the hAtom algorithm.a href=#refsHATOM[HATOM]/a/p
-
-ollipIf thecodea href=#documentDocument/a/code  var 
title=source/var  does
-   not contain anycodea href=#the-article-elementarticle/a/code  
elements, then return nothing
-   and abort these steps. This algorithm can only be used with
-   documents that contain distinct articles./p
-
-lipLetvar title=R/var  be an emptya href=#xml-documents title=XML
-   documentsXML/a  codea href=#documentDocument/a/code  object whosea 
href=#the-document-s-address title=the document's addressaddress/a  is user-agent
-   defined./li
-
-lipAppend acode title=feed/code  element in the
-a href=#atom-namespaceAtom namespace/a  tovar title=R/var./li
-
-li
-
-pFor eachcodea href=#metameta/a/code  element with acode title=attr-meta-namea href=#attr-meta-namename/a/code  attribute and acode title=attr-meta-contenta 
href=#attr-meta-contentcontent/a/code  attribute and whosecode title=attr-meta-namea href=#attr-meta-namename/a/code  attribute's value iscode title=meta-authora 
href=#meta-authorauthor/a/code, run the following substeps:/p
-
-ollipAppend ancode