Re: [whatwg] JavaScript function for closing tags

2017-10-17 Thread Silvia Pfeiffer
We could specify that WebVTT cues of type metadata should contain valid
JSON - that would make sense to me.

Cues of type captions or subtitles should get parsed by the addCue()
function of the TextTrack API - but not all browsers implement this yet.
Would be worth registering bugs on browsers.
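
For reference, a minimal sketch of that path, assuming a browser that
supports addTextTrack(), VTTCue and getCueAsHTML() (per the caveat above,
not all browsers do), with `video` being the media element:

  var track = video.addTextTrack('captions', 'English captions', 'en');
  var cue = new VTTCue(4.0, 7.25, '<c.foo>This is a cue.'); // unclosed tag
  track.addCue(cue);
  // the WebVTT parser implies the missing end tag when building the fragment
  var fragment = cue.getCueAsHTML();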

Cheers,
Silvia.


On 18 Oct. 2017 2:51 am, "Michael A. Peters"  wrote:

> On 10/16/2017 10:08 AM, Roger Hågensen wrote:
>
>> On 2017-10-14 10:13, Michael A. Peters wrote:
>>
>>> I use the TextTrack API but its documentation does not specify that it
>>> closes open tags within a cue; in fact I'm fairly certain it doesn't
>>> because some people use it for JSON and other non-tag-related
>>> content.
>>>
>> Looking at https://www.html5rocks.com/en/tutorials/track/basics/
>> it seems JSON can be used, no idea if content type is different or not
>> for that.
>>
>>> Some errors using the tracks in XML were solved by the innerHTML trick
>>> where I create a separate html document, append the cue, and then grab
>>> the innerHTML but that doesn't always work to close tags when html
>>> entities are part of the cue string.
>>>
>>
>> Mixing XML and HTML is not a good idea. Would it not be easier to have
>> the server send out proper XML instead of HTML? Valid XML is also valid
>> HTML (the reverse is not always true).
>>
>
> I agree, but that is what I was using an html document for - when using JS
> innerHTML it has closing tags, so the only issue would be tags that html
> itself does not close (e.g. br), but those are not applicable to a WebVTT
> cue - which is only supposed to support a very small number of tags, all
> of which have closing tags.
>
> The problem is WebVTT does not require tags be closed in a cue, e.g.
>
> 04:05.000 --> 04:07.250
> <c.foo>This is a cue.
>
> That's allowed in WebVTT
>
> I convert c.foo into
>
> <span class="foo">This is a cue.
>
> and when I add that to the html document and use innerHTML it then has the
> closing </span> on it.
>
> While it seems to work with some html entities, it breaks with others like
> &nbsp;
>
> So for now I have to just make sure all my WebVTT tags are closed and not
> use the hack that adds closing tags - but since WebVTT cues do not have to
> have closing tags, yet the cues need to work in XML documents, I think a
> built-in parser in JS that can add missing closing tags would be a good
> thing.
>
>
>> And if XML and HTML are giving you issues then use JSON instead.
>> I did not see JSON mentioned in the W3C spec though.
>>
>
> I think the JSON in WebVTT cues is not in the spec but some are using it.
>
> Basically the TextTrack API seems to allow almost any string; it really has
> to, as WebVTT is not static and the spec changes. I wouldn't mind JSON being
> added to WebVTT as it would be a handy way to encode metadata about the
> media but that's another topic.
>
> A built-in JS HTML parser may also be of benefit in preventing code
> injection, e.g. stripping out tags from a WebVTT cue that a website does
> not allow.
>
> The TextTrack API doesn't filter out things like script or other tags that
> aren't part of WebVTT which means any site that allows users to upload
> WebVTT files is creating a potential code injection vulnerability.
>
> Server-side code should filter it on upload, but it would be nice to
> *someday* be able to pass a string through a native JS filter much the same
> way we can with htmltidy server-side and remove all but white-listed tags
> and attributes and get back a cleaned string with all tags closed.
>
> It looks like Google has a library that does that but it isn't intended
> for client-side JS and may not be fast enough for things like phones to
> process time-sensitive cues (I don't know).
>
> I might be wrong but it looked like the google library I found was
> intended for server-side Node.js use.
>
>
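
A minimal sketch of the kind of client-side whitelist filter described
above, built on the same createHTMLDocument trick (the allowed-tag list
and the function name are illustrative only, not an existing API):

  var ALLOWED = { C: 1, I: 1, B: 1, U: 1, RUBY: 1, RT: 1, V: 1, LANG: 1 };

  function filterCue(cueText) {
    var doc = document.implementation.createHTMLDocument('filter');
    doc.body.innerHTML = cueText;
    var els = doc.body.querySelectorAll('*');
    for (var i = els.length - 1; i >= 0; i--) {
      var el = els[i];
      if (!ALLOWED[el.tagName]) {
        // unwrap the disallowed element (e.g. script), keeping its children
        while (el.firstChild) el.parentNode.insertBefore(el.firstChild, el);
        el.parentNode.removeChild(el);
      }
    }
    return doc.body.innerHTML; // serializer emits all closing tags
  }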


Re: [whatwg] JavaScript function for closing tags

2017-10-14 Thread Silvia Pfeiffer
Hi Michael,

It seems to me that the TextTrack API is made for this use case.
Why does it not work for you?

Cheers,
Silvia.


On Sat, Oct 14, 2017 at 4:36 PM, Michael A. Peters
 wrote:
> There does not seem to be a JavaScript API for closing open tags.
>
> This is problematic when dealing with WebVTT which does not require tags be
> closed.
>
> Where it is the biggest problem is when the document is being served as
> application/xhtml+xml
>
> I tried the following hack which seemed to be working:
>
> cleandoc = document.implementation.createHTMLDocument("FuBar");
> cleanbody = document.createElementNS("http://www.w3.org/1999/xhtml",
> "body");
> cleandoc.documentElement.appendChild(cleanbody);
>
>
> Then I could do the following with a WebVTT cue:
>
> cleanbody.innerHTML = string;
> return (cleanbody.innerHTML);
>
> That *mostly* works but seems to sometimes fail when string contains
> entities, such as &nbsp;
>
> What happens is it returns an empty string.
>
> Given that WebVTT is part of HTML5, and browser-native html5 audio players
> don't support caption tracks, forcing us to write our own implementations if
> we want captions with audio, it sure would be nice if there were a pure
> JavaScript way to just add closing tags to a string, because there is no
> guarantee that a valid WebVTT cue has closed tags, which are required for
> XHTML sent as XML.
>
> Seems to me that a JS native function to add missing closing tags would have
> more application than just WebVTT cues.
>
> I looked for a jQuery filter that does it, but could not find one.
>
> It also could be of benefit in emulating document.write() as many of
> Google's tools *still* require document.write() despite the issues with
> document.write() and XML having been known for 15+ years now.
>
> Any chance of getting a parser into JavaScript that at least would be
> capable of closing open tags in a string passed to it?
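
A sketch of one workaround, for what it's worth: DOMParser in
'text/html' mode uses the HTML parser even inside a document served as
application/xhtml+xml, so it tolerates entities like &nbsp; and implies
the missing end tags (assuming DOMParser text/html support, present in
all major browsers by now):

  function closeTags(cueText) {
    var doc = new DOMParser().parseFromString(cueText, 'text/html');
    // parsed forgivingly as HTML; serializing emits explicit closing tags
    return doc.body.innerHTML;
  }

The result still needs to be inserted with DOM methods on the XHTML side,
but the open-tag and entity failures described above should no longer occur.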


Re: [whatwg] metadata

2017-04-23 Thread Silvia Pfeiffer
On Mon, Apr 24, 2017 at 5:04 AM, Kevin Marks  wrote:
> On Sun, Apr 23, 2017 at 5:58 PM, Andy Valencia
>  wrote:
>> === Dynamic versus static metadata
>>
>> Pretty much all audio formats have at least one metadata format.  While
>> some apparently can embed them at time points, this is not used by any
>> players I can find.  The Icecast/Streamcast "metastream" format is the
>> only technique I've ever encountered.  The industry is quickly shifting
>> to the so-called "Shoutcast v2" format due to:
>> https://forums.developer.apple.com/thread/66586
>>
>> Metadata formats as applied to static information are, of course, of
>> great interest.  Any dynamic technique should fit into the existing
>> approach.
>
> There are lots of models for dynamic metadata - look at soundcloud
> comments at times, youtube captions and overlays, Historically there
> have been chapter list markers in MPEG, QuickTime and mpeg4 (m4a, m4v)
> files too.

A different method is used on the Web for dynamic metadata: TextTracks
have been standardised to expose such time-aligned metadata. I don't
think this is the core of the discussion here though.
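
A minimal sketch of that mechanism - a hidden metadata track whose cue
text carries JSON (the payload fields and the nowPlaying element are made
up for illustration):

  var track = audio.addTextTrack('metadata');
  track.mode = 'hidden'; // fire cue events without rendering anything

  var cue = new VTTCue(30, 60,
      JSON.stringify({ title: 'Song 2', artist: 'Blur' }));
  cue.onenter = function () {
    var meta = JSON.parse(this.text);
    nowPlaying.textContent = meta.artist + ' - ' + meta.title;
  };
  track.addCue(cue);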

Cheers,
Silvia.


Re: [whatwg] Removing mediagroup/MediaController from HTML

2015-10-03 Thread Silvia Pfeiffer
On Fri, Oct 2, 2015 at 10:27 AM, Domenic Denicola  wrote:
> From: whatwg [mailto:whatwg-boun...@lists.whatwg.org] On Behalf Of
>
>> is removal really the right thing to do, given that we have an
>> implementation?
>
> I agree this is a problematic question. I opened 
> https://github.com/whatwg/html/issues/209 for the more general issue but am 
> happy to have the discussion here since that hasn't gotten many replies. Do 
> check out the examples listed there though. E.g. Blink is in similar 
> situations with  and HTML imports.
>
> The web seems to end up with a lot of APIs like this, where the spec ends up 
> just being documentation for a single-vendor implementation. I don't really 
> know what to do in these cases. If our goal in writing these specs is to 
> produce an interoperable web platform, then such features seem like they 
> shouldn't be part of the platform.


There is also a question about the why of the current state: is it
just a single-vendor implementation because nobody at the other
vendors has gotten around to implementing it, or is it because they
fundamentally object to implementing it? If there are objections, then
it's reasonable to consider removing the feature. Otherwise, it would
be premature to remove it IMHO.

Silvia.


Re: [whatwg] Persistent and temporary storage

2015-03-14 Thread Silvia Pfeiffer
On 15 Mar 2015 03:35, Glenn Maynard gl...@zewt.org wrote:

 On Fri, Mar 13, 2015 at 3:13 PM, Silvia Pfeiffer 
silviapfeiff...@gmail.com wrote:

 On 14 Mar 2015 05:49, Tab Atkins Jr. jackalm...@gmail.com wrote:
  Users install a relatively small number of apps, and the uninstall
  flow (which deletes their storage) is also trivial.  Users visit a
  relatively large number of web-pages (and even more distinct origins,
  due to iframes and ads), and we don't have any good notion of
  uninstall yet on the web; the existing flows for deleting storage
  are terrible.

 First you need a notion of install.


 Not having to install web pages is a feature, not a bug.  In fact, it's
one of the defining features of the platform.


Sure, but you can't uninstall something that hasn't first been installed.

Silvia.


Re: [whatwg] Persistent and temporary storage

2015-03-13 Thread Silvia Pfeiffer
On 14 Mar 2015 05:49, Tab Atkins Jr. jackalm...@gmail.com wrote:

 On Fri, Mar 13, 2015 at 6:58 AM, Janusz Majnert j.majn...@samsung.com
wrote:
  On 13.03.2015 13:50, Anne van Kesteren wrote:
  A big gap with native is dependable storage for applications. I
  started sketching the problem space on this wiki page:
 
 https://wiki.whatwg.org/wiki/Storage
 
  Feedback I got is that having some kind of allotted quota is useful
  for applications. That way they know how much they can put away.
  However, this clashes a bit with offering something that is
  competitive with native.
 
  We can't really ask the user to divide up their storage. And yet when
  the user asks an application to store e.g. a whole bunch of music
  offline we don't really want the user agent to get in the way if the
  user already granted persistence.
 
  The real question is why having a quota is useful? Native apps are not
  controlled when it comes to storing data and nobody complains.

 Users install a relatively small number of apps, and the uninstall
 flow (which deletes their storage) is also trivial.  Users visit a
 relatively large number of web-pages (and even more distinct origins,
 due to iframes and ads), and we don't have any good notion of
 uninstall yet on the web; the existing flows for deleting storage
 are terrible.

First you need a notion of install. On Android KitKat, open browser
tabs are listed in the same way as open apps, which is a first step. Should
bookmarks and desktop icons be unified in a second step to indicate
installation? Then, closing the tab of a non-bookmarked app would indicate
the ability to remove local storage (implicit uninstall, but still following
typical browser caching strategies). Removing the bookmark/desktop icon
would then indicate explicit uninstall.

Cheers,
Silvia.

  I think the proper solution would be not to restrict the available
  space, but provide a GUI for users to:
  * see how much space an app uses (if it exceeds some preset amount)
  * inspect the files in platform's file explorer

 Yeah, some improved UI flows along these lines would be hugely helpful
 for this kind of thing.

 ~TJ
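
For what it's worth, a sketch of the shape this space later took with the
Storage API (navigator.storage is promise-based and postdates this
thread):

  // ask the UA to make this origin's storage persistent (it may prompt)
  navigator.storage.persist().then(function (granted) {
    console.log(granted ? 'will survive eviction' : 'best-effort only');
  });

  // see the allotted quota and current usage, in bytes
  navigator.storage.estimate().then(function (e) {
    console.log('using ' + e.usage + ' of ' + e.quota);
  });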


Re: [whatwg] How to expose caption tracks without TextTrackCues

2014-11-03 Thread Silvia Pfeiffer
On Tue, Nov 4, 2014 at 3:56 AM, Brendan Long s...@brendanlong.com wrote:

 On 10/27/2014 08:43 PM, Silvia Pfeiffer wrote:
 On Tue, Oct 28, 2014 at 2:41 AM, Philip Jägenstedt phil...@opera.com wrote:
 On Sun, Oct 26, 2014 at 8:28 AM, Silvia Pfeiffer
 silviapfeiff...@gmail.com wrote:
 On Thu, Oct 23, 2014 at 2:01 AM, Philip Jägenstedt phil...@opera.com 
 wrote:
 On Sun, Oct 12, 2014 at 11:45 AM, Silvia Pfeiffer
 silviapfeiff...@gmail.com wrote:
 Using the VideoTrack interface it would list them as a kind=captions
 and would thus also be able to be activated by JavaScript. The
  downside would be that if you have N video tracks and m caption tracks in
 the media file, you'd have to expose NxM videoTracks in the interface.
 VideoTrackList can have at most one video track selected at a time, so
 representing this as a VideoTrack would require some additional
 tweaking to the model.
 The captions video track is one that has video and captions rendered
 together, so you only need the one video track active. If you want to
 turn off captions, you merely activate a different video track which
 is one without captions.

 There is no change to the model necessary - in fact, it fits perfectly
 to what the spec is currently describing without any change.
 Ah, right! Unless I'm misunderstanding again, your suggestion is to
 expose extra video tracks with kind captions or subtitles, requiring
 no spec change at all. That sounds good to me.
 Yes, that was my suggestion for dealing with UA rendered tracks.

 Doesn't this still leave us with the issue: if you have N video tracks
 and m caption tracks in
 the media file, you'd have to expose NxM videoTracks in the interface?

Right, that was the original concern. But how realistic is the
situation of n video tracks and m caption tracks with n being larger
than 2 or 3 without a change of the audio track anyway?

 We would also need to consider:

   * How do you label this combined video and text track?

That's not specific to the approach that we pick and will always need
to be decided. Note that label isn't something that needs to be unique
to a track, so you could just use the same label for all burnt-in
video tracks and identify them to be different only in the language.

   * What is the track's id?

This would need to be unique, but I think it will be easy to come up
with a scheme that works. Something like video_[n]_[captiontrackid]
could work.

   * How do you present this to users in a way that isn't confusing?

No different to presenting caption tracks.

   * What if the video track's kind isn't main? For example, what if we
 have a sign language track and we also want to display captions?
 What is the generated track's kind?

How would that work? Are you saying we're not displaying the main
video, but only displaying the sign language track? Is that realistic
and something anybody would actually do?

   * The language attribute could also have conflicts.

How so?

   * I think it might also be possible to create files where the video
 track and text track are different lengths, so we'd need to figure
 out what to do when one of them ends.

The timeline of a video is well defined in the spec - I don't think we
need to do more than what is already defined.

Silvia.


Re: [whatwg] How to expose caption tracks without TextTrackCues

2014-11-03 Thread Silvia Pfeiffer
On Tue, Nov 4, 2014 at 10:24 AM, Brendan Long s...@brendanlong.com wrote:

 On 11/03/2014 04:20 PM, Silvia Pfeiffer wrote:
 On Tue, Nov 4, 2014 at 3:56 AM, Brendan Long s...@brendanlong.com wrote:
 Right, that was the original concern. But how realistic is the
 situation of n video tracks and m caption tracks with n being larger
 than 2 or 3 without a change of the audio track anyway?
 I think the situation gets confusing at N=2. See below.

 We would also need to consider:

   * How do you label this combined video and text track?
 That's not specific to the approach that we pick and will always need
 to be decided. Note that label isn't something that needs to be unique
 to a track, so you could just use the same label for all burnt-in
 video tracks and identify them to be different only in the language.
 But the video and the text track might both have their own label in the
 underlying media file. Presumably we'd want to preserve both.

   * What is the track's id?
 This would need to be unique, but I think it will be easy to come up
 with a scheme that works. Something like video_[n]_[captiontrackid]
 could work.
 This sounds much more complicated and likely to cause problems for
 JavaScript developers than just indicating that a text track has cues
 that can't be represented in JavaScript.

   * How do you present this to users in a way that isn't confusing?
 No different to presenting caption tracks.
 I think VideoTracks with kind=caption are confusing too, and we should
 avoid creating more situations where we need to do that.

 Even when we only have one video, it's confusing that captions could
 exist in multiple places.

   * What if the video track's kind isn't main? For example, what if we
 have a sign language track and we also want to display captions?
 What is the generated track's kind?
 How would that work? Are you saying we're not displaying the main
 video, but only displaying the sign language track? Is that realistic
 and something anybody would actually do?
 It's possible, so the spec should handle it. Maybe it doesn't matter though?

   * The language attribute could also have conflicts.
 How so?
 The underlying streams could have their own metadata, and it could
 conflict. I'm not sure if it would ever be reasonable to author a file
 like that, but it would be trivial to create. At the very least, we'd
 need language to say which takes precedence if the two streams have
 conflicting metadata.

   * I think it might also be possible to create files where the video
 track and text track are different lengths, so we'd need to figure
 out what to do when one of them ends.
 The timeline of a video is well defined in the spec - I don't think we
 need to do more than what is already defined.
 What I mean is that this could be confusing for users. Say I'm watching
 a video with two video streams (main camera angle, secondary camera
 angle) and two captions tracks (for sports for example). If I'm watching
 the secondary camera angle and looking at one of the captions tracks,
 but then the secondary camera angle goes away, my player is now forced
 to randomly select one of the caption tracks combined with the primary
 video, because it's not obvious which one corresponds with the captions
 I was reading before.

 In fact, if I was making a video player for my website where multiple
 people give commentary on baseball games with multiple camera angles, I
 would probably create my own controls that parse the video track ids and
 separates them back into video and text tracks so that I could have
 offer separate video and text controls, since combining them just makes
 the UI more complicated.

That's what I meant with multiple video tracks: if you have several
that require different captions, then you're in a world of hurt in any
case and this has nothing to do with whether you're representing the
non-cue-exposed caption tracks as UARendered or as a video track.


 So, what's the advantage of combining video and captions, rather than
 just indicating that a text track can't be represented as TextTrackCues?

One important advantage: there's no need to change the spec.

If we change the spec, we still have to work through all the issues
that you listed above and find a solution.

Silvia.


Re: [whatwg] How to expose caption tracks without TextTrackCues

2014-10-27 Thread Silvia Pfeiffer
On Tue, Oct 28, 2014 at 2:41 AM, Philip Jägenstedt phil...@opera.com wrote:
 On Sun, Oct 26, 2014 at 8:28 AM, Silvia Pfeiffer
 silviapfeiff...@gmail.com wrote:

 On Thu, Oct 23, 2014 at 2:01 AM, Philip Jägenstedt phil...@opera.com wrote:
  On Sun, Oct 12, 2014 at 11:45 AM, Silvia Pfeiffer
  silviapfeiff...@gmail.com wrote:
 
  Hi all,
 
  In the Inband Text Tracks Community Group we've recently had a
  discussion about a proposal by HbbTV. I'd like to bring it up here to
  get some opinions on how to resolve the issue.
 
  (The discussion thread is at
  http://lists.w3.org/Archives/Public/public-inbandtracks/2014Sep/0008.html
  , but let me summarize it here, because it's a bit spread out.)
 
  The proposed use case is as follows:
  * there are MPEG-2 files that have an audio, a video and several caption 
  tracks
  * the caption tracks are not in WebVTT format but in formats that
  existing Digital TV receivers are already capable of decoding and
  displaying (e.g. CEA708, DVB-T, DVB-S, TTML)
  * there is no intention to standardize a TextTrackCue format for those
  other formats (statements are: there are too many formats to deal
  with, a set-top-box won't need access to cues)
 
  The request was to expose such caption tracks as textTracks:
  interface HTMLMediaElement : HTMLElement {
  ...
readonly attribute TextTrackList textTracks;
  ...
  }
 
  Then, the TextTrack interface would list them as a kind=captions,
  but without any cues, since they're not exposed. This then allows
  turning the caption tracks on/off via JavaScript. However, for
  JavaScript it is indistinguishable from a text track that has no
  captions. So the suggestion was to introduce a new kind=UARendered.
 
 
  My suggestion was to instead treat such tracks as burnt-in video
  tracks (by combination with the main video track):
  interface HTMLMediaElement : HTMLElement {
  ...
 
  readonly attribute VideoTrackList videoTracks;
  ...
  }
 
  Using the VideoTrack interface it would list them as a kind=captions
  and would thus also be able to be activated by JavaScript. The
  downside would be that if you have N video tracks and m caption tracks in
  the media file, you'd have to expose NxM videoTracks in the interface.
 
 
  So, given this, should we introduce a kind=UARendered or expose such
  tracks as videoTracks or is there another solution that we're
  overlooking?
 
  VideoTrackList can have at most one video track selected at a time, so
  representing this as a VideoTrack would require some additional
  tweaking to the model.

 The captions video track is one that has video and captions rendered
 together, so you only need the one video track active. If you want to
 turn off captions, you merely activate a different video track which
 is one without captions.

 There is no change to the model necessary - in fact, it fits perfectly
 to what the spec is currently describing without any change.

 Ah, right! Unless I'm misunderstanding again, your suggestion is to
 expose extra video tracks with kind captions or subtitles, requiring
 no spec change at all. That sounds good to me.


Yes, that was my suggestion for dealing with UA rendered tracks.

Cheers,
Silvia.


Re: [whatwg] How to expose caption tracks without TextTrackCues

2014-10-26 Thread Silvia Pfeiffer
On Thu, Oct 23, 2014 at 2:01 AM, Philip Jägenstedt phil...@opera.com wrote:
 On Sun, Oct 12, 2014 at 11:45 AM, Silvia Pfeiffer
 silviapfeiff...@gmail.com wrote:

 Hi all,

 In the Inband Text Tracks Community Group we've recently had a
 discussion about a proposal by HbbTV. I'd like to bring it up here to
 get some opinions on how to resolve the issue.

 (The discussion thread is at
 http://lists.w3.org/Archives/Public/public-inbandtracks/2014Sep/0008.html
 , but let me summarize it here, because it's a bit spread out.)

 The proposed use case is as follows:
 * there are MPEG-2 files that have an audio, a video and several caption 
 tracks
 * the caption tracks are not in WebVTT format but in formats that
 existing Digital TV receivers are already capable of decoding and
 displaying (e.g. CEA708, DVB-T, DVB-S, TTML)
 * there is no intention to standardize a TextTrackCue format for those
 other formats (statements are: there are too many formats to deal
 with, a set-top-box won't need access to cues)

 The request was to expose such caption tracks as textTracks:
 interface HTMLMediaElement : HTMLElement {
 ...
   readonly attribute TextTrackList textTracks;
 ...
 }

 Then, the TextTrack interface would list them as a kind=captions,
 but without any cues, since they're not exposed. This then allows
 turning the caption tracks on/off via JavaScript. However, for
 JavaScript it is indistinguishable from a text track that has no
 captions. So the suggestion was to introduce a new kind=UARendered.


 My suggestion was to instead treat such tracks as burnt-in video
 tracks (by combination with the main video track):
 interface HTMLMediaElement : HTMLElement {
 ...

 readonly attribute VideoTrackList videoTracks;
 ...
 }

 Using the VideoTrack interface it would list them as a kind=captions
 and would thus also be able to be activated by JavaScript. The
  downside would be that if you have N video tracks and m caption tracks in
 the media file, you'd have to expose NxM videoTracks in the interface.


 So, given this, should we introduce a kind=UARendered or expose such
  tracks as videoTracks or is there another solution that we're
 overlooking?

 VideoTrackList can have at most one video track selected at a time, so
 representing this as a VideoTrack would require some additional
 tweaking to the model.

The captions video track is one that has video and captions rendered
together, so you only need the one video track active. If you want to
turn off captions, you merely activate a different video track which
is one without captions.

There is no change to the model necessary - in fact, it fits perfectly
to what the spec is currently describing without any change.
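
A sketch of what toggling would look like under this model, using only
the existing VideoTrackList API (the kinds and languages here are as
assumed by the proposal, not from a real file):

  function showBurntInCaptions(video, lang) {
    for (var i = 0; i < video.videoTracks.length; i++) {
      var t = video.videoTracks[i];
      // select the captioned rendering for the wanted language,
      // or fall back to the plain main video track
      t.selected = lang ? (t.kind === 'captions' && t.language === lang)
                        : (t.kind === 'main');
    }
  }

  showBurntInCaptions(video, 'en'); // captions on
  showBurntInCaptions(video, null); // captions off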


 A separate text track kind seems better, but wouldn't it still be
 useful to distinguish between captions and subtitles even if the
 underlying data is unavailable?

As stated, the proposal was to introduce kind=UARendered and that
would introduce a change to the spec.

Regards,
Silvia.


Re: [whatwg] How to expose caption tracks without TextTrackCues

2014-10-26 Thread Silvia Pfeiffer
On Thu, Oct 23, 2014 at 2:33 AM, Bob Lund b.l...@cablelabs.com wrote:


 On 10/22/14, 9:01 AM, Philip Jägenstedt phil...@opera.com wrote:

On Sun, Oct 12, 2014 at 11:45 AM, Silvia Pfeiffer
silviapfeiff...@gmail.com wrote:

 Hi all,

 In the Inband Text Tracks Community Group we've recently had a
 discussion about a proposal by HbbTV. I'd like to bring it up here to
 get some opinions on how to resolve the issue.

 (The discussion thread is at

http://lists.w3.org/Archives/Public/public-inbandtracks/2014Sep/0008.html
 , but let me summarize it here, because it's a bit spread out.)

 The proposed use case is as follows:
 * there are MPEG-2 files that have an audio, a video and several
caption tracks
 * the caption tracks are not in WebVTT format but in formats that
 existing Digital TV receivers are already capable of decoding and
 displaying (e.g. CEA708, DVB-T, DVB-S, TTML)
 * there is no intention to standardize a TextTrackCue format for those
 other formats (statements are: there are too many formats to deal
 with, a set-top-box won't need access to cues)

 The request was to expose such caption tracks as textTracks:
 interface HTMLMediaElement : HTMLElement {
 ...
   readonly attribute TextTrackList textTracks;
 ...
 }

 Then, the TextTrack interface would list them as a kind=captions,
 but without any cues, since they're not exposed. This then allows
 turning the caption tracks on/off via JavaScript. However, for
 JavaScript it is indistinguishable from a text track that has no
 captions. So the suggestion was to introduce a new kind=UARendered.


 My suggestion was to instead treat such tracks as burnt-in video
 tracks (by combination with the main video track):
 interface HTMLMediaElement : HTMLElement {
 ...

 readonly attribute VideoTrackList videoTracks;
 ...
 }

 Using the VideoTrack interface it would list them as a kind=captions
 and would thus also be able to be activated by JavaScript. The
  downside would be that if you have N video tracks and m caption tracks in
 the media file, you'd have to expose NxM videoTracks in the interface.


 So, given this, should we introduce a kind=UARendered or expose such
  tracks as videoTracks or is there another solution that we're
 overlooking?

VideoTrackList can have at most one video track selected at a time, so
representing this as a VideoTrack would require some additional
tweaking to the model.

A separate text track kind seems better, but wouldn't it still be
useful to distinguish between captions and subtitles even if the
underlying data is unavailable?

  This issue was clarified here [1]. TextTrack.mode would be set
  "uarendered". TextTrack.kind would still reflect "captions" or "subtitles".

OK, right that's another approach and probably better than introducing
a different kind.

 [1]
 http://lists.w3.org/Archives/Public/public-whatwg-archive/2014Oct/0154.html


Philip



Re: [whatwg] Gapless playback problems with web audio standards

2014-10-25 Thread Silvia Pfeiffer
Have you tried Media Source Extensions?

Best Regards,
Silvia.
On 26 Oct 2014 00:30, David Kendal m...@dpk.io wrote:

 Hi,

 http://w3.org/mid/10b10a1d-8b84-4015-8d49-a45b87e4b...@dpk.io

 dpk
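
A rough sketch of the MSE route, assuming two ArrayBuffers of compatible
audio segments (buf1, buf2 and the MIME type are placeholders):

  var ms = new MediaSource();
  audio.src = URL.createObjectURL(ms);
  ms.addEventListener('sourceopen', function () {
    var sb = ms.addSourceBuffer('audio/mp4; codecs="mp4a.40.2"');
    sb.appendBuffer(buf1);
    sb.addEventListener('updateend', function next() {
      sb.removeEventListener('updateend', next);
      // butt the second track up against the end of the first
      sb.timestampOffset = sb.buffered.end(0);
      sb.appendBuffer(buf2);
    });
  });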




[whatwg] How to expose caption tracks without TextTrackCues

2014-10-12 Thread Silvia Pfeiffer
Hi all,

In the Inband Text Tracks Community Group we've recently had a
discussion about a proposal by HbbTV. I'd like to bring it up here to
get some opinions on how to resolve the issue.

(The discussion thread is at
http://lists.w3.org/Archives/Public/public-inbandtracks/2014Sep/0008.html
, but let me summarize it here, because it's a bit spread out.)

The proposed use case is as follows:
* there are MPEG-2 files that have an audio, a video and several caption tracks
* the caption tracks are not in WebVTT format but in formats that
existing Digital TV receivers are already capable of decoding and
displaying (e.g. CEA708, DVB-T, DVB-S, TTML)
* there is no intention to standardize a TextTrackCue format for those
other formats (statements are: there are too many formats to deal
with, a set-top-box won't need access to cues)

The request was to expose such caption tracks as textTracks:
interface HTMLMediaElement : HTMLElement {
...
  readonly attribute TextTrackList textTracks;
...
}

Then, the TextTrack interface would list them as a kind=captions,
but without any cues, since they're not exposed. This then allows
turning the caption tracks on/off via JavaScript. However, for
JavaScript it is indistinguishable from a text track that has no
captions. So the suggestion was to introduce a new kind=UARendered.


My suggestion was to instead treat such tracks as burnt-in video
tracks (by combination with the main video track):
interface HTMLMediaElement : HTMLElement {
...

readonly attribute VideoTrackList videoTracks;
...
}

Using the VideoTrack interface it would list them as a kind=captions
and would thus also be able to be activated by JavaScript. The
downside would be that if you have N video tracks and m caption tracks in
the media file, you'd have to expose NxM videoTracks in the interface.


So, given this, should we introduce a kind=UARendered or expose such
tracks as videoTracks or is there another solution that we're
overlooking?

Silvia.
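
To make the distinguishability problem concrete, a sketch of what script
sees for such a cue-less track today (assuming an inband caption track
exposed without cues):

  var t = video.textTracks[0];
  t.kind;             // "captions"
  t.mode = 'showing'; // asks the UA to render it
  t.cues;             // empty list - identical to a track with no captions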


Re: [whatwg] Adding a property to navigator for getting device model

2014-10-03 Thread Silvia Pfeiffer
On 3 Oct 2014 14:25, eberhard speer jr. ses...@ducis.net wrote:

 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1

 Hi,

 Maybe you missed my initial message : I am a contributor [IPMC] to the
 Apache DeviceMap Project. [http://incubator.apache.org/devicemap/]

 DeviceMap does expert UA-sniffing both client and server-side.
 Apache Cordova on the other hand is a mobile development framework which
 uses HTML5/CSS/Javascript and via plug-ins allows access to device
 features not or not consistently accessible [yet] via HTML5. The whole
 is packaged in a wrapper targeting a specific platform [iOS, Android,...].
 So, no 'UA-sniffing' going on there to format content, but HTML5/CSS and
 javascript running inside a container which via the abstraction of a
 WebView and plug-ins allows access to things like accelerometer,
 camera, address-book etc.

 If you are interested in the intricacies of UA-sniffing, client and
 server-side, the use-cases, the esoterica etc I'll gladly contribute
 what I can.


Please explain more. We're keen to learn about your experience.

Silvia.


Re: [whatwg] Adding a property to navigator for getting device model

2014-10-02 Thread Silvia Pfeiffer
On 3 Oct 2014 04:45, Mounir Lamouri mou...@lamouri.fr wrote:

  On Fri, 3 Oct 2014, at 04:39, Jonas Sicking wrote:
   On Thu, Oct 2, 2014 at 3:57 AM, Mounir Lamouri mou...@lamouri.fr wrote:
    On Wed, 1 Oct 2014, at 19:43, Jonas Sicking wrote:
     On Wed, Oct 1, 2014 at 2:27 AM, Mounir Lamouri mou...@lamouri.fr wrote:
      On Wed, 1 Oct 2014, at 15:01, Jonas Sicking wrote:
       On Tue, Sep 30, 2014 at 4:40 AM, Mounir Lamouri mou...@lamouri.fr wrote:
        On Wed, 24 Sep 2014, at 11:54, Jonas Sicking wrote:
        Thoughts?

       Do you have any data that makes you think that those websites would
       stop using UA sniffing but start using navigator.deviceModel if they
       had that property available?

      I know that the Cordova module for exposing this information is one
      of the most popular Cordova modules, so that's a pretty good
      indication. But I don't have data directly from websites.

      When you were pointing that websites currently do UA sniffing, is it
      on the client side or the server side?

     I'd imagine UA sniffing happens more often on the server side, though
     I suspect it varies with the reason why people do it.

     But the Cordova API is client side, so there's definitely desire to
     have it there too.

    Isn't Cordova experience feedback a bit out of scope if usually
    developers do UA sniffing on the server side? It seems that such a
    feature would mostly benefit web sites that already entirely live on
    the client side and might be more inclined to do feature detection.

   If feature detection covered all the use cases, then why would the
   Cordova module be so popular?

 I would love to know actually. Silvia, do you have any insights?

I've not used it - maybe others have some insights.

Best Regards,
Silvia.


Re: [whatwg] Adding a property to navigator for getting device model

2014-10-01 Thread Silvia Pfeiffer
On Wed, Oct 1, 2014 at 7:43 PM, Jonas Sicking jo...@sicking.cc wrote:
 On Wed, Oct 1, 2014 at 2:27 AM, Mounir Lamouri mou...@lamouri.fr wrote:
 On Wed, 1 Oct 2014, at 15:01, Jonas Sicking wrote:
 On Tue, Sep 30, 2014 at 4:40 AM, Mounir Lamouri mou...@lamouri.fr
 wrote:
  On Wed, 24 Sep 2014, at 11:54, Jonas Sicking wrote:
  Thoughts?
 
  Do you have any data that makes you think that those websites would stop
  using UA sniffing but start using navigator.deviceModel if they had that
  property available?

 I know that the Cordova module for exposing this information is one of
 the most popular Cordova modules, so that's a pretty good indication.
 But I don't have data directly from websites.

 When you were pointing that websites currently do UA sniffing, is it on
 the client side or the server side?

 I'd imagine UA sniffing happens more often on the server side, though
 I suspect it varies with the reason why people do it.

 But the Cordova API is client side, so there's definitely desire to
 have it there too.

I was under the impression that we are mostly talking client-side so
that JavaScript can adapt the choice of features to what is available
in the browser. Server-side information is merely a side effect
(which, of course, is exploited for marketing and other purposes, but
not something we can really avoid).

Silvia.
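
For reference, a sketch of the Cordova module in question (the device
plugin; device.model and device.platform are from its documented API):

  document.addEventListener('deviceready', function () {
    // populated by cordova-plugin-device once the native layer is ready
    console.log(device.model);    // e.g. "Nexus One"
    console.log(device.platform); // e.g. "Android"
  }, false);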


Re: [whatwg] Adding a property to navigator for getting device model

2014-09-24 Thread Silvia Pfeiffer
On 24 Sep 2014 20:40, James Graham ja...@hoppipolla.co.uk wrote:

 On 24/09/14 02:54, Jonas Sicking wrote:

  In the meantime, I'd like to add a property to window.navigator to
  enable websites to get the same information from there as is already
  available in the UA string. That would at least help with the parsing
  problem.
 
  And if means that we could more quickly move the device model out of
  the UA string, then it also helps with the UA-string keying thing.

 It's not entirely clear this won't just leave us with the device string
 in two places, and unable to remove either of them. Do we have any
 evidence that the sites using UA detection will all change their code in
 relatively short order, or become unimportant enough that we are able to
 break them?

Why don't we provide a better structure and not just a random string? For
example: deviceID, browserID, renderingEngineVersion... Not sure what else
would be useful to group actions that the developer needs to take. Haven't
looked in detail.

Silvia.


Re: [whatwg] (no subject)

2014-09-13 Thread Silvia Pfeiffer
On Sun, Sep 14, 2014 at 12:17 PM,  javascr...@riseup.net wrote:


 On 9/12/14, Silvia Pfeiffer silviapfeiff...@gmail.com wrote:

 What I'd you're a long way away from any medical help?


 What?

s/I'd/if/

(sorry - mobile keyboard)


 In my mind this is part of the larger drive of the web of things (IoT
 applied to the web) and needs device APIs. This might not be the right
 group to discuss it in though.


 Where is the best place to define the APIs for devices to track, monitor,
 and surveil us?

 Perhaps the W3C is the best place. It is funded by the very corporations
 that are making such monitoring devices and with developer relations experts
  to tell you how. These corporations are backed by philanthropists, such as
  William Gates III, who opposes climate change, whistleblowers, and
 overpopulation.

 Sure, Microsoft might've backdoored stuff for the NSA for the past 10 years,
 and Apple might share your info to the NSA (they'll get it anyway). And
 Google and the CIA might want info for MindMeld (TM) or Recorded Future,
 which they openly fund (links below).

 He who pays the piper calls the tune.

 You don't have anything to hide, right?

 Or maybe the question of how or where to best to engineer this or that
 new gadget is best answered by first asking how to prevent such engineering
 from being used by a top-down, efficient system.

 The system is working. That is the problem.


 http://www.rawstory.com/rs/2014/09/10/cant-wait-for-the-apple-watch-beware-your-fitness-data-may-be-sold-or-used-against-you/

 http://rt.com/usa/microsoft-nsa-snowden-leak-971/

  Google & CIA funded MindMeld
 https://en.wikipedia.org/wiki/Recorded_Future

 CIA funded MindMeld
 http://techcrunch.com/2014/07/17/expect-labs-lands-in-q-tel-investment-will-help-u-s-intelligence-integrate-its-mindmeld-technology/


If you don't want to give your data to anyone, don't. Nobody is
forcing you to share your medical data over the Internet.

I don't see that stopping the world moving forward though. Given that
it is happening anyway, I'd rather go with an open API than
proprietary ones where we don't know what is happening.

Silvia.


Re: [whatwg] Web API for Health Sensors

2014-09-12 Thread Silvia Pfeiffer
I believe a way to directly read health data into web apps with a browser
api (JavaScript) is an interesting idea. You could then have a webrtc video
conference with your doctor and he could read out your pulse and other
health data directly from your device live and give you an opinion. Seeing
as health sensors are increasingly part of devices that have a web browser
built in, it would be useful to get to that data from a browser.

Cheers,
Silvia.
 On 12 Sep 2014 22:09, Erik Reppen erik.rep...@gmail.com wrote:

 I'm not sure I understand the problem. Lack of a formal web API doesn't
 block these devices from exposing data through web services, does it?

 Coming up with a web service or JSON standard for the data would make sense
 but that's more of an industry-specific concern that would best be dealt
 with by people working within that sector. I don't see how this would be in
 scope for whatwg.

 On Fri, Sep 12, 2014 at 5:45 AM, Arpita Bahuguna a@samsung.com
 wrote:

  Hello all,
 
 
 
  Some of us were pondering over the need of having Web API(s) for health
 and
  similar other sensors. With the growing presence of such sensors on
  smart-watches and such, we believe a Web interface for retrieving data
 from
  such sensors is required (especially for Web Apps).
 
 
 
  Towards that end, we would like to know whether any work has been done
  towards creating a Web API for Health Sensors (and the like).
 
  Currently, the health sensor data is available only for native apps.
 
 
 
  Would appreciate the community's opinion on the same. Also, if such a
  standard is already under development, could someone kindly point us in
 the
  right direction?
 
 
 
  Regards,
 
  Arpita Bahuguna
 
 



Re: [whatwg] Web API for Health Sensors

2014-09-12 Thread Silvia Pfeiffer
Browsers have been dealing with private personal data for a while now, that
includes video camera & microphone input, geolocation and more. Health data
isn't so different in that respect. There are mechanisms to deal with
privacy already in the browser. But indeed: a spec would need to consider
such issues.

Best Regards,
Silvia.
On 13 Sep 2014 08:42, delfin del...@segonquart.net wrote:

  Hi All:


 Use and transmission of private/personal Health data, as other sensitive 
 personal data, is ruled by law and regional regulations in some -- or in most 
 of the -- developed countries.

 Please, take this aspect in consideration.


- I would not recommend to read health data within a browser.
- JSON transferred data, as I understood, might be 'seen' by a
semi-experienced user with, for example, the web inspector's tools a
desktop browser has. Not exactly, but nearly.
- Not to mention more sophisticated public methods of to collect this
JSON/JSONP data.
- One might use an existent API or develop a new one for this purpose.
The data of an unknown user is viewable by third-parties.

 A standard's development should take these scenarios into consideration.

1. Laws and regulations in countries/govs referring the use and
transfer of private/sensitive data.
2. Open-sourceness and distribution via a web browser.

 Best -
 -- Delfin Ramirez

 +34 633 589231

 del...@segonquart.net

 twitter: delfinramirez

 IRC: segonquart

 Skype: segonquart

 http://segonquart.net, http://delfiramirez.info



Re: [whatwg] Web API for Health Sensors

2014-09-12 Thread Silvia Pfeiffer
Search for webrtc.

Best Regards,
Silvia.
On 13 Sep 2014 09:57, delfin del...@segonquart.net wrote:

  hello all:

  please, might you point me somewhere I can find part of these solutions
  (i.e. video/audio encryption)? That would be really helpful to me.

 br


 ---

 Delfin Ramirez
 +34 633 589231
  del...@segonquart.net

 twitter: delfinramirez

  IRC: segonquart Skype: segonquart

 http://segonquart.net

 http://delfiramirez.info

  On 2014-09-13 00:52, Silvia Pfeiffer wrote:

 Browsers have been dealing with private personal data for a while now,
  that includes video camera & microphone input, geolocation and more. Health
 data isn't so different in that respect. There are mechanisms to deal with
 privacy already in the browser. But indeed: a spec would need to consider
 such issues.

 Best Regards,
 Silvia.
 On 13 Sep 2014 08:42, delfin del...@segonquart.net wrote:

  Hi All:


 Use and transmission of private/personal Health data, as other sensitive 
 personal data, is ruled by law and regional regulations in some -- or in 
 most of the -- developed countries.

 Please, take this aspect in consideration.


- I would not recommend to read health data within a browser.
- JSON transferred data, as I understood, might be 'seen' by a
semi-experienced user with, for example, the web inspector's tools a
desktop browser has. Not exactly, but nearly.
- Not to mention more sophisticated public methods of to collect this
JSON/JSONP data.
- One might use an existent API or develop a new one for this
purpose. The data of an unknown user is viewable by third-parties.

  A standard's development should take these scenarios into consideration.

1. Laws and regulations in countries/govs referring the use and
transfer of private/sensitive data.
2. Open-sourceness and distribution via a web browser.

 Best -
 -- Delfin Ramirez

 +34 633 589231

 del...@segonquart.net

 twitter: delfinramirez

 IRC: segonquart

 Skype: segonquart

 http://segonquart.net, http://delfiramirez.info




Re: [whatwg] Web API for Health Sensors

2014-09-12 Thread Silvia Pfeiffer
What I'd you're a long way away from any medical help?

In my mind this is part of the larger drive of the web of things (IoT
applied to the web) and needs device APIs. This might not be the right
group to discuss it in though.

Best Regards,
Silvia.
On 13 Sep 2014 10:53, Erik Reppen erik.rep...@gmail.com wrote:

 That's a stronger argument than I would have thought of (and mind you I'm
 just a lurker without anything in the way of influence so don't let me
 shoot you down or anything). But audio/video capture is a general
 media/communication thing.

 To me it's like the difference between geolocation and having an API on
 top of geolocation that tells you how close you are to a hospital. You have
 the tools for that. Why the desire for a specific API for something that,
 IMO,  would really benefit more from consideration by subject matter
 experts and devs within that field than general web technology nerds?

 On Fri, Sep 12, 2014 at 5:52 PM, Silvia Pfeiffer 
 silviapfeiff...@gmail.com wrote:

 Browsers have been dealing with private personal data for a while now,
  that includes video camera & microphone input, geolocation and more. Health
 data isn't so different in that respect. There are mechanisms to deal with
 privacy already in the browser. But indeed: a spec would need to consider
 such issues.

 Best Regards,
 Silvia.
 On 13 Sep 2014 08:42, delfin del...@segonquart.net wrote:

  Hi All:


 Use and transmission of private/personal Health data, as other sensitive 
 personal data, is ruled by law and regional regulations in some -- or in 
 most of the -- developed countries.

 Please, take this aspect in consideration.


- I would not recommend to read health data within a browser.
- JSON transferred data, as I understood, might be 'seen' by a
semi-experienced user with, for example, the web inspector's tools a
desktop browser has. Not exactly, but nearly.
- Not to mention more sophisticated public methods of to collect
this JSON/JSONP data.
- One might use an existent API or develop a new one for this
purpose. The data of an unknown user is viewable by third-parties.

  A standard's development should take these scenarios into consideration.

1. Laws and regulations in countries/govs referring the use and
transfer of private/sensitive data.
2. Open-sourceness and distribution via a web browser.

 Best -
 -- Delfin Ramirez

 +34 633 589231

 del...@segonquart.net

 twitter: delfinramirez

 IRC: segonquart

 Skype: segonquart

 http://segonquart.net, http://delfiramirez.info





Re: [whatwg] Canvas-Only Document Type

2014-07-07 Thread Silvia Pfeiffer
Has anyone considered the accessibility implications of this? IIUC
accessibility for canvas is provided through extra DOM elements. So,
this would defeat that purpose.

Silvia.

On Tue, Jul 8, 2014 at 8:39 AM, Brian M. Blakely
anewpage.me...@gmail.com wrote:
 Hi Ashley,

 With the budding of Canvas 2D and WebGL UI frameworks, I believe that, in a 
 couple years' time, the role of CSS in the cases I described will diminish 
 drastically. A lot of this was kind of waiting for Apple to give the OK 
 before people began committing their hearts to WebGL.


 On Jul 7, 2014, at 5:17 PM, Ashley Gullen ash...@scirra.com wrote:

 Having developed a major HTML5 game engine, and given this appears to be 
 aimed at a gaming use case, I feel qualified to offer my opinion: I'm not 
 sure this is a good idea.

 Despite being 99% canvas and javascript, we use CSS to implement some useful 
 scaling modes (like letterbox fullscreen). We also use the DOM for many 
 useful features, such as form controls, divs, Twitter or Facebook buttons 
 and so on, which are positioned over the canvas. In particular text inputs 
 are useful for things like name entry or logins even for games, and are 
 typically difficult and error-prone to reimplement in only canvas and 
 javascript.

 Is there any evidence that such a mode would actually improve performance? 
 Are there benchmarks indicating the existence of a DOM, even if inert, harms 
 performance in any way?

 Ashley Gullen
 Scirra.com



 On 7 July 2014 21:35, Brian Blakely anewpage.me...@gmail.com wrote:
 Floating a concept for a document mode which eschews CSS and the DOM
 to enable a more jank-free Canvas surface.

 Depending on how this allows for optimization, might be used well for
 games, VR, wearables, and ultra-portable or high-performance apps.
 Probably most beneficial to memory usage and first paint time.  Would
 appreciate if some vendor engineers who might be reading could chime
 in on this point.

 Strawman:

  Document only contains <!doctype canvas-[2d|3d]> and <script> elements.
  Everything else is ignored.  document object is gone.

 A Canvas drawing surface consumes the entire viewport.  It always has
 an opaque backing store, same as specifying getContext('2d', { alpha:
 false }).

 UA provides:
 * A host object representing surface's CanvasRenderingContext2D or
 WebGLRenderingContext (depending on specified doctype).

 * In lieu of DOM, an API for creating offscreen canvases (actually,
 this abstraction should probably exist anyway).  This might live on
 the Context host obj, which may open a beneficial performance
 relationship between onscreen canvas and offscreen children.



Re: [whatwg] HTTP status code from JavaScript

2014-05-25 Thread Silvia Pfeiffer
Hi Michael,

On Sun, May 25, 2014 at 3:10 PM, Michael Heuberger
michael.heuber...@binarykitchen.com wrote:
 Hi David

 Interesting. Yes and no, I agree with some. See my comments below:

 On 25/05/14 06:53, David Bruant wrote:
  On 23/05/2014 10:04, Michael Heuberger wrote:
 - Display a beautiful 404 page and hide parts of the navigation
 - Reveal navigation history to give users a better usability
 experience
 during 404s
 - And many more …
 I agree with those entirely but couldn’t they also be achieved by
 including the correct scripts on the 404 page issued from the server?
 No, it is a single page app. All the HTML templates are on the client
 side and loaded once during page load. And everything happens
 dynamically. In other words: You load everything once, then there is no
 further interaction with the server unless it's a specific query for
 data or alters data in the database.
 single page app usually means that no interaction in a given page
  will trigger a navigation (as long as there is JavaScript; a good SPA
  will fall back to using links and forms if there is a problem with JS).
 It does not mean that all HTML templates are on the client nor that
 you serve the exact same thing for every URL be they 200s or 404s.
  That's definitely an option, but it's one you impose upon yourself.

 I understand. Okay we could debate about the definition of SPA. For
 fallbacks and special cases I deliver special files from the server when
  JS is disabled. But I think neither the definition of SPA nor its
  special cases are the main issue here.

 Look at Angular, their templates reside on the client side. For
 production, a grunt task can compress all files into one single, huge JS
 file that is served to the client, then for any subsequent pages no more
 resources are loaded from the server. It is a widely used practice.

If you're creating your JS file on the server and pulling in all
resources then, surely, you can find out already at that time whether a
piece is missing and can't be loaded? That's not a client-side issue,
but something that you'll need to deal with on the server.

Silvia.

 Also I mentioned earlier, PhoneGap is getting more popular and exactly
 uses the architecture I have described.

  Again, I cannot emphasize enough how cool it would be to obtain the HTTP status
 code from JavaScript! It would save SPAs and PhoneGap projects some
 bandwidth.

 Serving different content based on different URLs (and status)
 actually does make a lot of sense when you want your user to see the
 proper content within the first HTTP round-trip (which saves
 bandwidth). If you always serve generic content and figure it all out
 on the client side, then either you always need a second request to
 get the specific content or you're always sending useless data during
 the first generic response which is also wasted bandwidth.

 Good point. From that point of view I agree but you forgot one thing:
 The user experience. We want mobile apps to be very responsive below
 300ms. Hence the two requests. The first one ensures the SPA to be
 loaded and the UI to be initialized. You'll see some animation, a text
 saying Fetching data whatever. Then the second request retrieves the
 specific content.

 This is better than letting the user wait about 700ms until the user
 sees something on the screen.

 On this topic, I recommend watching [1] which introduces the idea of
 critical rendering path. Given your focus on performance and
 preventing wasted bandwidth, I think you'll be interested.

 Thanks for the link but I am Deaf and do not understand what they talk
 on YouTube :(

 Furthermore you can convert a whole single page app into an iPhone app
 with PhoneGap. All the HTML resides in the app, not on the server.
 That's a very different approach and a good reason why JavaScript has
 the right to know if the HTTP request resulted into a 200 or a 404.
 If all the HTML resides in the app, not on the server, then it wasn't
 served via HTTP, so there is no 200 or 404 to inform about (since no
 HTTP request occured).

 Ah, well spotted. PhoneGap comes with two options:
 a) You can choose to reside the whole HTML in the app or
 b) have it served from the server during the first HTTP request.

  Option a) saves bandwidth but you cannot update pages easily (unlike option b).

 Option a) wouldn't need to know if it's a 200 or 404, you are right.
 Still, option b) needs to know the status code.

 Let me ask you another question:
 Is there a good reason NOT to give JavaScript a chance to find out the
 HTTP status code of the current page?

 Cheers
 Michael

 --

 Binary Kitchen
 Michael Heuberger
 4c Dunbar Road
 Mt Eden
 Auckland 1024
 (New Zealand)

 Mobile (text only) ...  +64 21 261 89 81
 Email   mich...@binarykitchen.com
 Website ..  http://www.binarykitchen.com



Re: [whatwg] HTTP status code from JavaScript

2014-05-25 Thread Silvia Pfeiffer
You might want to review http://wiki.whatwg.org/wiki/FAQ .

In particular: 
http://wiki.whatwg.org/wiki/FAQ#Is_there_a_process_for_adding_new_features_to_a_specification.3F

HTH,
Silvia.


On Mon, May 26, 2014 at 10:02 AM, Michael Heuberger
michael.heuber...@binarykitchen.com wrote:
 Hi Jasper

 On 26/05/14 08:09, Jasper St. Pierre wrote:


 * It is a redundancy. The browser already knows the status code, just
 not JavaScript.
 That argument can equally well be used the other way round: it's a
  redundancy to expose in JS something that can be easily exposed by the
 server.
 I understand your perspective but you cannot compare two entirely
 different things. Don't forget that most modern web apps are 99% driven
 by JavaScript. If the server returns a 404, JavaScript is still unable
 to read the initial HTTP status code. Think about it :)

 The web server sends you back a response. It first sends the response code,
 then the response headers, then the response body.

 If you can alter the response code from the server, why can't you alter the
 response body?

 I know that my dear :)

 Whatever we alter on the server, Javascript on the client-side is still
 unable to read the HTTP status code.

 I already mentioned in earlier emails that altering the response body is
 a redundancy. The information is already in the header.


 * Adding inline JS script slows down the page load.
 In that case, use a meta tag:

  <meta name="http-status" content="404">

 Then in JS:

  var status = parseInt(document.querySelector(
      'meta[name="http-status"]').getAttribute('content'));
  Should this pattern become pervasive, it might make sense to
  standardize it and expose it in JS. Frankly, though, it's the first
  time I hear of such a request.
 That would work but is an overhead, a redundancy. Why add another meta
 tag if the status code is already in the HTTP header??

 Yes, it's interesting why nobody has suggested this before. There is
 always a first time. Probably I am the first to ask for this feature
  because I've been working heavily with SPAs and node.js in recent
  years.

 Really, it would be awesome if JavaScript could read the HTTP status code!

 Yes, ideally the initial request to the server would be accessible to the
 script, including the response code, response headers, and so on
 (document.initialRequest returns an XMLHttpRequest-like object that's
 already completed?)

 I have never seen document.initialRequest before.

 At the same time, in order to deploy to sites without this feature, you'd
 need to be able to modify the response body accordingly as well. I think it
 makes sense to simply do that too.

 What server are you using here? Does it have a way of configuring it to
 modify the response codes for certain requests, but not the response body?

 nginx + node.js with ExpressJS
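
 For reference, here is a minimal sketch of the workaround under
 discussion - the server stamps the status code into the page so that
 client-side JS can recover it. This assumes Express 4.x; all names are
 illustrative, not something from the thread:

   // server: render the 404 shell with the status embedded in a meta tag
   app.use(function (req, res) {
     res.status(404).send(
       '<!DOCTYPE html>' +
       '<meta name="http-status" content="404">' +
       '<script src="/app.js"></script>');
   });

   // client (app.js): read back the status that HTTP already carried
   var meta = document.querySelector('meta[name=http-status]');
   var status = meta ? parseInt(meta.getAttribute('content'), 10) : 200;
   if (status === 404) {
     // show the single page app's 404 view
   }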

 Michael

 --

 Binary Kitchen
 Michael Heuberger
 4c Dunbar Road
 Mt Eden
 Auckland 1024
 (New Zealand)

 Mobile (text only) ...  +64 21 261 89 81
 Email   mich...@binarykitchen.com
 Website ..  http://www.binarykitchen.com



Re: [whatwg] HTTP status code from JavaScript

2014-05-23 Thread Silvia Pfeiffer
I had to deal with this on a script-created IMG element the other day. I
used onerror to deal with it.
For XMLHttpRequest you can use the status field.
Why is that not enough?
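
A minimal sketch of those two techniques (URLs illustrative):

  var xhr = new XMLHttpRequest();
  xhr.open('GET', '/data.json');
  xhr.onload = function () {
    if (xhr.status === 404) {
      // the resource is missing - handle it in script
    }
  };
  xhr.send();

  var img = document.createElement('img');
  img.onerror = function () {
    // fires for 404s (and other load failures) on the image URL
  };
  img.src = 'thumb.png';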

Silvia.
 On 23 May 2014 18:06, Michael Heuberger 
michael.heuber...@binarykitchen.com wrote:

 Good points Mat

 In theory you have good points but in the real world it is more
 complicated than that. See my comments below:

 On 23/05/14 19:49, Mat Carey wrote:
  - Notify the administrator about a 404 by email with a response back to
  the server
  But the server already knows about the 404, JS shouldn’t be needed/used
 to re-inform the server of the status it’s already sent.

 Nowadays you can access other entities directly, e.g. a Riak database
 server which returns a 404 if the ID in the query does not exist - this
 can be a raw HTTP request, without any app logic in between.

 ... or you have a cloud with multiple servers but only one of them is
 responsible for error reporting.

 It is just an example. I could list more use cases where the feature is
 really needed.

  - Display a beautiful 404 page and hide parts of the navigation
  - Reveal navigation history to give users a better usability experience
  during 404s
  - And many more …
  I agree with those entirely but couldn’t they also be achieved by
 including the correct scripts on the 404 page issued from the server?

 No, it is a single page app. All the HTML templates are on the client
 side and loaded once during page load. And everything happens
 dynamically. In other words: You load everything once, then there is no
 further interaction with the server unless it's a specific query for
 data or alters data in the database.

 Furthermore you can convert a whole single page app into an iPhone app
 with PhoneGap. All the HTML resides in the app, not on the server.
 That's a very different approach and a good reason why JavaScript has
 the right to know if the HTTP request resulted in a 200 or a 404.

 Cheers
 Michael

 
  (I’m not against the original suggestion, I just don’t think these
 particular use-cases demand a new feature)
 
  Mat Carey
  07952258096
 
  On 23 May 2014, at 07:52, Michael Heuberger 
 michael.heuber...@binarykitchen.com wrote:
 
  Hi Julian
 
  Yes, with AJAX requests I meant using XMLHTTPRequest.
 
   If the initial page load yields a 404, will there be any scripts to
  execute at all?
  Oh yes, absolutely. Have you ever written a single page app? There is
   lots of logic to execute when a 404 occurs. I could list plenty of use
   cases and functions that make sense. Here are some examples:
  - Notify the administrator about a 404 by email with a response back to
  the server
  - Display a beautiful 404 page and hide parts of the navigation
  - Reveal navigation history to give users a better usability experience
  during 404s
  - And many more ...
 
  All these above examples run on JavaScript. Because there is currently
   no way for JavaScript to determine if the page load yielded a 404, a
   subsequent request, namely an XMLHttpRequest one, is often added. In my
   professional opinion, a bad solution.
 
  Again, I strongly believe that this would be a huge improvement and
  avoids unnecessary network traffic.
 
  Cheers
  Michael
 
  --
 
  Binary Kitchen
  Michael Heuberger
  4c Dunbar Road
  Mt Eden
  Auckland 1024
  (New Zealand)
 
  Mobile (text only) ...  +64 21 261 89 81
  Email   mich...@binarykitchen.com
  Website ..  http://www.binarykitchen.com
 

 --

 Binary Kitchen
 Michael Heuberger
 4c Dunbar Road
 Mt Eden
 Auckland 1024
 (New Zealand)

 Mobile (text only) ...  +64 21 261 89 81
 Email   mich...@binarykitchen.com
 Website ..  http://www.binarykitchen.com




Re: [whatwg] Question on HTML5 media element, the seeking algorithm and the seeked event

2013-12-15 Thread Silvia Pfeiffer
On Sat, Dec 7, 2013 at 6:31 AM, Ian Hickson i...@hixie.ch wrote:
 On Fri, 8 Nov 2013, Andres Gomez wrote:

 because of my recent work related to a bug and test in WebKit, I've
 gotten to deal with the HTML5 media element's seeking algorithm and
 seeked event.

 During my analysis I was unable, following the specs preview, to
 determine whether on specific conditions, after a seek request a
 seeked event would be received.

 The cases could be more complicated but, trying to put it simply, in the
 seeking algorithm of the specs:
 http://dev.w3.org/html5/spec-preview/media-elements.html#seeking

 I recommend using the WHATWG version of the spec, since that's the version
 that I edit in response to comments here. While the W3C version does
 subsequently adopt many of those changes as well, the two versions are
 unfortunately not identical.

 In the WHATWG version of the HTML standard, the algorithm you cite above
 is found here:

http://whatwg.org/html#dom-media-seek

 I mention this because this is in fact one of the algorithms that, for
 reasons I am not familiar with, is in fact different in the W3C version.

That would be because Andres was looking at the HTML5.0 version of
22nd August rather than the more up-to-date HTML5.1 version:
http://www.w3.org/html/wg/drafts/html/master/embedded-content-0.html#seeking

AFAICT these are identical.

This is just a note of clarification to assure that the W3C version
has the same algorithm - I don't want to go into why there are several
versions of the HTML spec around.


 We can read on the step 7, that, on certain conditions:

 ... If there are no ranges given in the seekable attribute then set the
 seeking IDL attribute to false and abort these steps.

 (In the WHATWG version today, this is step 8.)


 Hence, we won't walk the following steps, including the last ones, 13
 and 14, which would fire the timeupdate and seeked events.

 However, reading the events summary for the seeked one:
 http://dev.w3.org/html5/spec-preview/media-elements.html#event-media-seeked

http://whatwg.org/html#event-media-seeked

 We can read that it is Fired when...:
 The seeking IDL attribute changed to false.

 ... which is what happens in the mentioned step 7 of the seeking
 algorithm.

 Hm, yes, the non-normative text in the event summary table here was a bit
 overly simplistic.

 I've tried to make the summary table more precise for this event. Let me
 know if you think it's still not clear enough!

HTML5.1 at W3C will pick up this change, too.

Cheers,
Silvia.


 Because of this, I'm unsure whether a seeked (and timeupdate ?)
 event should be fired when the conditions in step 7 happen.

 Per the spec, no. The section with the event summary table explicitly says
 This section is non-normative, and, even ignoring that, none of the
 statements in that section are normative -- none of them use the word
 must. Contrast this to the algorithm, which is introduced by the
 requirement that says the user agent must run the following steps.

 See also:

http://whatwg.org/html#conformance-requirements
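
 To make the consequence concrete, a sketch (element and time values
 illustrative) of script that would never see the event in the
 empty-seekable case:

   var video = document.querySelector('video');
   video.addEventListener('seeked', function () {
     console.log('seek finished at ' + video.currentTime);
   });
   // If video.seekable has no ranges (e.g. no media data is available
   // yet), the seek algorithm aborts early: seeking flips back to
   // false, but no 'seeked' event is fired.
   if (video.seekable.length > 0) {
     video.currentTime = 10;
   }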

 Thanks,
 --
 Ian Hickson   U+1047E)\._.,--,'``.fL
 http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
 Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'


Re: [whatwg] imgset responsive imgs proposition (Re: The src-N proposal)

2013-11-13 Thread Silvia Pfeiffer
On 13 Nov 2013 11:33, Jirka Kosek ji...@kosek.cz wrote:

 On 13.11.2013 2:56, Christian Biesinger wrote:
  For a bit more presentation, and while we're inventing new syntax
  anyway, how about this:
 
  <style>
  @media (min-width: 480px) {
.artdirected { content: replaced url(attr(src-small)); }
 ...
  </style>
  ...
  <img class=artdirected src=foo.jpg src-small=foo-small.jpg
  src-medium=foo-medium.jpg src-big=foo-big.jpg>

  Do you expect that there will be just a predefined set of src-* attributes
  or that users can define as many of them as they want and use an arbitrary
  identifier after src-?

  If the latter is the case, then validation and content completion in HTML
  source editors will be a nightmare.

No different to data-* .

Silvia.


 Jirka

 --
 --
   Jirka Kosek  e-mail: ji...@kosek.cz  http://xmlguru.cz
 --
Professional XML consulting and training services
   DocBook customization, custom XSLT/XSL-FO document processing
 --
  OASIS DocBook TC member, W3C Invited Expert, ISO JTC1/SC34 rep.
 --
 Bringing you XML Prague conferencehttp://xmlprague.cz
 --


Re: [whatwg] imgset responsive imgs proposition (Re: The src-N proposal)

2013-11-12 Thread Silvia Pfeiffer
On Wed, Nov 13, 2013 at 9:56 AM, Christian Biesinger
cbiesin...@google.com wrote:
 On Tue, Nov 12, 2013 at 3:06 PM, Markus Ernst derer...@gmx.ch wrote:
 What I don't like about CSS approaches is the fact that changing the source
 of an image is fundamentally different from changing a dimension or color of
 an element. This is not presentational in the same way. Having to
 reference content images in the CSS in order to change their sources is an
 authoring nightmare.

 For a bit more presentation, and while we're inventing new syntax
 anyway, how about this:

 <style>
 @media (min-width: 480px) {
   .artdirected { content: replaced url(attr(src-small)); }
 }
 @media (min-width: 600px) {
   .artdirected { content: replaced url(attr(src-medium)); }
 }
 @media (min-width: 800px) {
   .artdirected { content: replaced url(attr(src-big)); }
 }
 </style>
 ...
 <img class=artdirected src=foo.jpg src-small=foo-small.jpg
 src-medium=foo-medium.jpg src-big=foo-big.jpg>

I quite like this if we can make it work. It keeps the references to
the images in the img tag, but makes use of media queries for the
different constraints. It looks clean and is easy to understand.

Silvia.


Re: [whatwg] Reconsidering how we deal with text track cues

2013-09-03 Thread Silvia Pfeiffer
On Wed, Sep 4, 2013 at 7:38 AM, Ian Hickson i...@hixie.ch wrote:
 On Mon, 17 Jun 2013, Silvia Pfeiffer wrote:
 On Thu, Jun 13, 2013 at 3:08 AM, Ian Hickson i...@hixie.ch wrote:
  On Wed, 12 Jun 2013, Silvia Pfeiffer wrote:
 
  As we continue to evolve the functionality of text tracks, we will
  introduce more complex other structured content into cues and we will
  want browsers to parse and interpret them.
 
  I think it's a mistake to try to solve problems before they exist. We
  don't know exactly what we'll be adding in the future, so we don't
  know what we'll need yet.

 I'm preparing to start specifying how to render chapters. There's
 already been mention of need for a thumbnail image in chapters.

 I'll also have to specify how to render descriptions. Since the target
 audience are blind and vision-impaired users, there will be a rendering
 algorithm that includes speech synthesis.

 This is a problem I have to deal with now.

 I don't think the problems you describe here require any changes to the
 API or to the format, but maybe I'm missing something. (Images for
 chapters would, I guess, if you're not using the images from the video
 file, but why wouldn't you use those actual images?)

It's possible that the chapter track author wants to use other images.
The images in DVD chapters aren't always from the video either. Also,
if you have an audio-only src in the video element, but you want to
provide images for chapter navigation, then you'd provide them in the
WebVTT file, probably as dataURLs. I've seen that done before.
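
A hypothetical sketch of that approach - a metadata-style chapter cue
carrying a thumbnail as a data URL (field names invented for
illustration):

  WEBVTT

  chapter-1
  00:00:00.000 --> 00:05:00.000
  {"title": "Opening", "thumbnail": "data:image/png;base64,iVBORw..."}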


  For example, I expect that once we have support for speech synthesis
  in browsers [1], cues of kind descriptions will be voiced by speech
  synthesis, and eventually we want to influence that speech synthesis
  with markup (possibly a subpart of SSML [2] or some other simpler
  markup that influences prosody).
 
  I think it's highly unlikely that we'll actually ever want that, but
  if we ever do, then we should fix the problem then.

 Rendering description cues with speech synthesis is 100% something that
 is coming. Richer markup of description cues is then just the logical
 next step - it won't be required now, but is certainly on the roadmap.
 How likely it will be to be SSML is unclear - I'd much prefer a simpler
 markup for WebVTT, too.

 I'm not even remotely convinced that speech synthesis in description cues
 needs any markup, let alone markup more elaborate than VTT already has.

That would be good. I've been told though that synthesised speech does
not convey enough prosodic and emotional content and that markup would
help with that (see
http://en.wikipedia.org/wiki/Speech_synthesis#Prosodics_and_emotional_content).
Not a top priority though.


  What we have done with WebVTT is actually two-fold:
  1. we have created a file format that serializes arbitrary content
  that is time-synchronized with a media element.
  2. and we have created a simple caption/subtitle cue format.
 
  That both are called WebVTT is the cause of a lot of confusion and not
  a good design approach.
 
  I think it's a mistake to view these as distinct. It's just one format.
  But as you're that spec's editor, that's your choice. :-)

 We've actually done more - we also have a chapter and a metadata cue format:
 http://dev.w3.org/html5/webvtt/#dfn-webvtt-cue

 WebVTT chapter title text is syntactically a subset of WebVTT cue
 text, and WebVTT cue text is syntactically a subset of WebVTT metadata
 text. Conformance checkers, when validating WebVTT files, may offer to
 restrict all cues to only having WebVTT chapter title text or WebVTT
 cue text as their cue payload; WebVTT metadata text cues are only
 useful for scripted applications (using the metadata text track
 kind).

 They are already hierarchically defined upon each other (already when
 you were the editor).

 They just aren't represented in objects in this way.

 I don't think the way you're viewing it is the right way to view it. IMHO
 there's just one format, it just can be used in various ways, just like
 HTML can be used for applications and documents and games, etc. Just like
 how HTML sometimes requires a title and sometimes does not, based on
 context, there are different contextual constraints on authoring VTT. But
 it's still just one format.

I've followed that view (you might have seen such a message on the W3C
list). I'm expecting it will lead to interesting consequences, such as
hyperlinks and dataURLs becoming part of WebVTT cue markup. But it is
likely the easier route.

Cheers,
Silvia.


Re: [whatwg] Should video controls generate click events?

2013-08-21 Thread Silvia Pfeiffer
On Wed, Aug 21, 2013 at 8:59 PM, Simon Pieters sim...@opera.com wrote:

 The problem was this: if you want to do something when a user clicks on a
 video but not when the user interacts with the native controls, you're
 basically out of luck.


No, you can do as you do below: you can define an onclick handler and do
something. As long as you don't do something that the native controls are
already taking care of, such as play() and pause(). If you are indeed
trying to influence the play/pause state of the video element both from
script and native controls, you need to be more creative with your event
handlers and use onclick, onpause and onplay event handlers, and
carefully manage which ones cancel out which other ones.
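
One possible shape of that bookkeeping (a sketch only, with illustrative
names):

  var video = document.querySelector('video');
  var toggledFromScript = false;
  video.addEventListener('click', function () {
    toggledFromScript = true;
    if (video.paused) video.play(); else video.pause();
  });
  video.addEventListener('pause', function () {
    if (!toggledFromScript) {
      // the pause came from the native controls, not our click handler
    }
    toggledFromScript = false;
  });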


<video controls onclick="if (paused) play(); else pause()"
 src="foobar"></video>

 If the user clicks on the video's rendering area (i.e. outside the
 controls), this works as intended. However, if the user clicks on the
 native play/pause button, the video plays and then immediately pauses
 again. The change fixes this.


It also means that in the case of Firefox or in the case of Android Chrome,
where the native controls cover the full video with an overlay button when
not on autoplay, you cannot get any onclick events on the video element at
all.

Silvia.


Re: [whatwg] Should video controls generate click events?

2013-08-20 Thread Silvia Pfeiffer
On Wed, Aug 21, 2013 at 7:52 AM, Edward O'Connor eocon...@apple.com wrote:

 Hi,

  [W]e do want users to be able to bring up the native controls via a
  context menu and be able to use them regardless of what the page
  does in its event handlers. So, I request that the spec be explicit
  that interacting with the video controls does not cause the normal
  script-visible events to be fired.
 […]
  I've made the spec say this is a valid (and recommended)
  implementation strategy.
 
  The change http://html5.org/r/8134 looks good to me, thanks!

 I don't see why <video controls> should be any different than, say,
 <button> here.
 element, I'm able to capture events and prevent the descendent element
 from seeing them.

 A UI which allows users to activate a control regardless of what the
 page does in its event handlers is a general feature not specific to
 media elements—and may be worth considering—but we shouldn't make a
 one-off exception to the basic model of DOM events just for video.



The paragraph added in http://html5.org/r/8134 should probably be
restricted to the case where the default video controls have been enabled
by the user (e.g. through the context menu) rather than by the Web page. It
would indeed be bad if the Web page author, who is using the default
controls through the video element's @controls attribute, could not rely on
the events firing.

 IMHO, the example that Philip provided in
 http://people.opera.com/~philipj/click.html is not a
 realistic example of something a JS dev would do.

Silvia.


Re: [whatwg] Should video controls generate click events?

2013-08-20 Thread Silvia Pfeiffer
On Wed, Aug 21, 2013 at 8:32 AM, Tab Atkins Jr. jackalm...@gmail.com wrote:

 On Tue, Aug 20, 2013 at 3:28 PM, Silvia Pfeiffer
 silviapfeiff...@gmail.com wrote:
  IMHO, the example that Philip provided in
  http://people.opera.com/~philipj/click.html is not a
  realistic example of something a JS dev would do.

 Um, why not?  Clicking on the video to play/pause is a useful
 behavior, which things like the Youtube player do.  Since video
 elements don't generally do this, it seems reasonable that an author
 could do pretty much exactly what Philip shows in his demo.


YouTube has their own controls for this, so Philip's example does not apply.

What I'm saying is that the idea that the JS developer controls pause/play
as well as exposes video controls is a far-fetched example.

Silvia.


Re: [whatwg] Should video controls generate click events?

2013-08-20 Thread Silvia Pfeiffer
On Wed, Aug 21, 2013 at 8:57 AM, Bob Lund b.l...@cablelabs.com wrote:



 On 8/20/13 4:46 PM, Silvia Pfeiffer silviapfeiff...@gmail.com wrote:

 On Wed, Aug 21, 2013 at 8:32 AM, Tab Atkins Jr.
  jackalm...@gmail.com wrote:
 
  On Tue, Aug 20, 2013 at 3:28 PM, Silvia Pfeiffer
  silviapfeiff...@gmail.com wrote:
   IMHO, the example that Philip provided in
   http://people.opera.com/~philipj/click.html is not a
   realistic example of something a JS dev would do.
 
  Um, why not?  Clicking on the video to play/pause is a useful
  behavior, which things like the Youtube player do.  Since video
  elements don't generally do this, it seems reasonable that an author
  could do pretty much exactly what Philip shows in his demo.
 
 
 YouTube has their own controls for this, so Philip's example does not
 apply.
 
 What I'm saying is that the idea that the JS developer controls pause/play
 as well as exposes video controls is a far-fetched example.

 What about a Web page that uses JS to control pause/play/etc based on
 external messages, say from a WebSocket? The sender in this case acts as a
 remote control.


The patch applies only to the case where the user interacts with
browser-provided controls on the video element. In your case, the JS dev
would probably not use the @controls attribute on the video element, since
the playback controls come from the remote.

Silvia.


Re: [whatwg] Should video controls generate click events?

2013-08-20 Thread Silvia Pfeiffer
On Wed, Aug 21, 2013 at 9:11 AM, Tab Atkins Jr. jackalm...@gmail.com wrote:

 On Tue, Aug 20, 2013 at 3:46 PM, Silvia Pfeiffer
 silviapfeiff...@gmail.com wrote:
  On Wed, Aug 21, 2013 at 8:32 AM, Tab Atkins Jr. jackalm...@gmail.com
  wrote:
 
  On Tue, Aug 20, 2013 at 3:28 PM, Silvia Pfeiffer
  silviapfeiff...@gmail.com wrote:
   IMHO, the example that Philip provided in
   http://people.opera.com/~philipj/click.html is not a
   realistic example of something a JS dev would do.
 
  Um, why not?  Clicking on the video to play/pause is a useful
  behavior, which things like the Youtube player do.  Since video
  elements don't generally do this, it seems reasonable that an author
  could do pretty much exactly what Philip shows in his demo.
 
 
  YouTube has their own controls for this, so Philip's example does not
 apply.
 
  What I'm saying is that the idea that the JS developer controls
 pause/play
  as well as exposes video controls is a far-fetched example.

 Yes, Youtube has their own controls.  They have long-standing branding
 that makes it worthwhile for them to roll their own.

 Why would I want to roll my own, though, when all I want is to add
 click-to-play/pause?  That seems like a lot of difficult make-work.


Indeed. As a JS dev you make a choice: either you roll your own, or you
don't.

If you roll your own, you write the JS to handle the clicks from the
controls and do video.pause() and video.play() yourself.

If you don't roll your own, you write <video controls> and you expect the
browser to handle pausing/playing. You don't do what Philip's demo (
http://people.opera.com/~philipj/click.html) does: handle pause and play
toggling in JS. Because the browser already does that for you.

This is why I am saying: Philip's example is not a typical use case. It
only happens when the developer made the choice to roll their own, but the
user activates the default controls (e.g. through the context menu) as
well. This can't happen on YouTube, because YouTube hides away the context
menu on the video element. It may happen elsewhere (though I've just tried
videoplayer.js and sublime video player and jwplayer and all of them have
their video controls on top of the browser-provided ones, so you can't even
get to them). It's this far-fetched use case that the patch is addressing.

However, the patch has a wider implication: namely that the User agent will
suppress all user interaction events from the browser-provided video
controls. I.e. if the user clicks on the play button, no click event is
raised on the video element and the elements that the video element is in.
That's what Edward is objecting to - and I agree.

My suggestion was therefore to limit the patch to only apply when something
that the JS developer did not prepare for happens: namely the user
activates the browser-provided video controls (through the context menu).

Hope that clarifies my position.

Cheers,
Silvia.


Re: [whatwg] Forms-related feedback

2013-07-29 Thread Silvia Pfeiffer
On Tue, Jul 30, 2013 at 9:59 AM, Tab Atkins Jr. jackalm...@gmail.com wrote:
 On Mon, Jul 29, 2013 at 4:50 PM, Jonas Sicking jo...@sicking.cc wrote:
 Ian, has *any* implementer expressed a preference for implementing a
 picker which allows selecting date+time+timezone? I.e. one that
 returns UTC dates?

 The Gmail time-picker (when you indicate that you want to set
 timezones), gives you three things next to each other, for date, time,
 and timezone.

 These are all right next to each other - whether they're separate or
 combined widgets isn't observable unless you check out the code.

The iCal one allows for selecting a specific time zone and a
floating one (which is what is currently called local):
http://www.macobserver.com/tmo/article/Understanding_iCal_Time_Zones

I actually think we need to distinguish between local and floating time zones.

When using datetime-local, I would actually expect that the browser
picks the local timezone as the default time zone, so doesn't expose a
timezone entry in the UI. The value that is returned by the form,
however, actually has a timezone.

In contrast, when the app doesn't want a timezone, the developer
should probably use something like datetime-floating. Then, it's clear
that the time zone is actually left off of the returned value, too.
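
A sketch of the distinction (datetime-floating is hypothetical, as
suggested above):

  <!-- zone defaults to local in the UI; under this proposal the
       submitted value would still carry that timezone -->
  <input type="datetime-local" name="start">

  <!-- hypothetical: no timezone in the UI or in the submitted value -->
  <input type="datetime-floating" name="start">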

Silvia.


Re: [whatwg] Forms-related feedback

2013-07-29 Thread Silvia Pfeiffer
On Tue, Jul 30, 2013 at 10:43 AM, Ian Hickson i...@hixie.ch wrote:
 On Tue, 30 Jul 2013, Silvia Pfeiffer wrote:

 I actually think we need to distinguish between local and floating time
 zones.

 When using datetime-local, I would actually expect that the browser
 picks the local timezone as the default time zone, so doesn't expose a
 timezone entry in the UI. The value that is returned by the form,
 however, actually has a timezone.

 That's type=datetime (the time zone is always UTC).


Hmm... what does a JS dev use, then, when they want to require a user
to pick a timezone other than UTC?

Silvia.


Re: [whatwg] Proposal: createImageBitmap should return a Promise instead of using a callback

2013-07-18 Thread Silvia Pfeiffer
Promises are new to browsers and people who have used them before have
raised issues about the extra resources they require. It may be a
non-issue in the browser, but it's still something we should be wary
of.

Would it be possible for the first browser that implements this to
have both implementations (callback and Promise objects) and use the
below code or something a little more complex to see how much overhead
is introduced by the Promise object and whether it is in fact
negligible both from a memory and execution time POV?
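
A rough sketch of such a measurement (this assumes an experimental build
exposing both flavours side by side; the callback signature here is
hypothetical):

  var t0 = performance.now();
  createImageBitmap(image, 0, 0, 40, 40).then(function (bitmap) {
    console.log('promise flavour: ' + (performance.now() - t0) + ' ms');
  });

  var t1 = performance.now();
  createImageBitmap(image, 0, 0, 40, 40, function (bitmap) {
    console.log('callback flavour: ' + (performance.now() - t1) + ' ms');
  });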

Silvia.


On Thu, Jul 18, 2013 at 8:54 AM, Ian Hickson i...@hixie.ch wrote:
 On Thu, 18 Jul 2013, Silvia Pfeiffer wrote:

 In this case you did remove the non-promise based approach - presumably
 because it has not been implemented in browsers yet, which is fair
 enough for browsers.

 Right.


 However, for JS developers it means that if they want to use this
 function, they now have to move to introduce a Promise model in their
 libraries.

 Not really. You don't have to use the promise API for anything other than
 a callback if you don't want to.

 As in, if your code uses the style that the HTML spec used to have for the
 createImageBitmap() example:

var sprites = {};
function loadMySprites(loadedCallback) {
  var image = new Image();
  image.src = 'mysprites.png';
  image.onload = function () {
// ... do something to fill in sprites, and then call loadedCallback
  };
}

function runDemo() {
  var canvas = document.querySelector('canvas#demo');
  var context = canvas.getContext('2d');
  context.drawImage(sprites.tree, 30, 10);
  context.drawImage(sprites.snake, 70, 10);
}

loadMySprites(runDemo);

 ...then you can still do this with promises:

var sprites = {};
function loadMySprites(loadedCallback) {
  var image = new Image();
  image.src = 'mysprites.png';
  image.onload = function () {
// only the comment from the snippet above is different here:
Promise.every(
  createImageBitmap(image,  0,  0, 40, 40).then(function (image) { sprites.woman = image }),
  createImageBitmap(image, 40,  0, 40, 40).then(function (image) { sprites.man   = image }),
  createImageBitmap(image, 80,  0, 40, 40).then(function (image) { sprites.tree  = image }),
  createImageBitmap(image,  0, 40, 40, 40).then(function (image) { sprites.hut   = image }),
  createImageBitmap(image, 40, 40, 40, 40).then(function (image) { sprites.apple = image }),
  createImageBitmap(image, 80, 40, 40, 40).then(function (image) { sprites.snake = image })
).then(loadedCallback);
  };
}

function runDemo() {
  var canvas = document.querySelector('canvas#demo');
  var context = canvas.getContext('2d');
  context.drawImage(sprites.tree, 30, 10);
  context.drawImage(sprites.snake, 70, 10);
}

loadMySprites(runDemo);

 The promises are very localised, just to the code that uses them. But
 then when you want to use them everywhere, you can do so easily too,
 just slowly extending them out as you want to. And when two parts of
 the codebase that use promises touch, suddenly the code that glues
 them together gets simpler, since you can use promise utility methods
 instead of rolling your own synchronisation.


 I'm just dubious whether they are ready for that yet (in fact, I have
 heard that devs are not ready yet).

 Ready for what?


 At the same time, I think we should follow a clear pattern for
 introducing a Promise based API, which the .create() approach would
 provide.

 I don't understand what that means.


 I guess I'm asking for JS dev input here...

 Promises are just regular callbacks, with the synchronisation done by the
 browser (or shim library) rather than by author code. I don't really
 understand the problem here.

 --
 Ian Hickson   U+1047E)\._.,--,'``.fL
 http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
 Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'


Re: [whatwg] Proposal: createImageBitmap should return a Promise instead of using a callback

2013-07-17 Thread Silvia Pfeiffer
On 18 Jul 2013 07:08, Ian Hickson i...@hixie.ch wrote:

 On Wed, 19 Jun 2013, Justin Novosad wrote:
 
  I was about to launch the implementation of window.createImageBitmap in
  Blink, and I received feedback on the blink-dev mailing list that the
  Promise API is the wave of the future for asynchronous JS, and that
  the new createImageBitmap method should use Promises.
 
  Current spec:
 
http://www.whatwg.org/specs/web-apps/current-work/multipage/timers.html#images
 
  The proposal is to change the ImageBitmapFactories IDL to something like
  this:
 
  [NoInterfaceObject]
  interface ImageBitmapFactories {
    Promise createImageBitmap(ImageBitmapSource image, optional long sx,
        long sy, long sw, long sh);
  };
 
  The value of the promise would resolve to an ImageBitmap object.

 Done.


 On Thu, 20 Jun 2013, Anne van Kesteren wrote:
 
  I think something like
 
  interface ImageBitmap {
    static Promise create(ImageBitmapSource image, optional long sx,
        long sy, long sw, long sh);
  };
 
  would be much nicer.

 Why?


 On Thu, 20 Jun 2013, Justin Novosad wrote:
 
  I agree it would be nicer, but it seems less consistent with other
  existing APIs.

 Indeed.


 On Thu, 20 Jun 2013, Tab Atkins Jr. wrote:
 
  There's really no consistency here anyway, and the Interface.create()
  idiom is pretty easy and nice.

 There are basically two styles:

  - constructors (new Date(), new Function(), etc)
  - factory methods on the parent object (document.createElement(),
implementation.createDocument(), context.createLinearGradient(), etc)

Do we have a strategy for moving to Promises for all sync factory methods
across the API?

I.e. are we keeping existing .createXxx() methods and adding .create() for
Promise-based API (which seems to be the way of the future?) Or are we at
the same time as introducing Promises also deprecating the existing factory
methods?

I'm asking because it seems like a big change of programming pattern and
not everyone may be ready to move on from the old one yet (read: this is
next generation technology), so would it be better to keep both interfaces
around for a while?

Silvia.

 I don't think we have anything that uses the interface.create() pattern.
 URL.createObjectURL() is the closest, and it's not a factory.

 The constructor pattern is obviously better where possible, but in this
 case it's not, since it has to be async (hence Promises).

 --
 Ian Hickson   U+1047E)\._.,--,'``.fL
 http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
 Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'


Re: [whatwg] Proposal: createImageBitmap should return a Promise instead of using a callback

2013-07-17 Thread Silvia Pfeiffer
On 18 Jul 2013 07:57, Ian Hickson i...@hixie.ch wrote:

 On Thu, 18 Jul 2013, Silvia Pfeiffer wrote:
  
   There are basically two styles:
  
- constructors (new Date(), new Function(), etc)
- factory methods on the parent object (document.createElement(),
  implementation.createDocument(), context.createLinearGradient(),
etc)
 
  Do we have a strategy for moving to Promises for all sync factory
  methods across the API?

 Using Promises vs the issue of the factory method names are two orthogonal
 issues.

 We can't change old APIs to use Promises (and indeed in most cases they're
 not needed, e.g. all those I cite above). If you don't need a promise, you
 should really just use a constructor.

Sorry, I meant it for the case of *async* factory methods - i.e. methods like
createImageBitmap(). We will of course continue to need constructors.

  I'm asking because it seems like a big change of programming pattern and
  not everyone may be ready to move on from the old one yet (read: this is
  next generation technology), so would it be better to keep both
  interfaces around for a while?

 We can never remove functionality. I don't think it's ever good to have
 duplicate functionality. But in this case I think this is a non-issue.


In this case you did remove the non-promise based approach - presumably
because it has not been implemented in browsers yet, which is fair enough
for browsers. However, for JS developers it means that if they want to use
this function, they now have to move to introduce a Promise model in their
libraries. I'm just dubious whether they are ready for that yet (in fact, I
have heard that devs are not ready yet).

At the same time, I think we should follow a clear pattern for introducing
a Promise based API, which the .create() approach would provide.

I guess I'm asking for JS dev input here...

Silvia.


Re: [whatwg] Proposal: createImageBitmap should return a Promise instead of using a callback

2013-07-17 Thread Silvia Pfeiffer
On Thu, Jul 18, 2013 at 10:00 AM, Ian Hickson i...@hixie.ch wrote:
 On Wed, 17 Jul 2013, Justin Novosad wrote:
 On Wed, Jul 17, 2013 at 6:54 PM, Ian Hickson i...@hixie.ch wrote:
  On Thu, 18 Jul 2013, Silvia Pfeiffer wrote:
   At the same time, I think we should follow a clear pattern for
   introducing a Promise based API, which the .create() approach would
   provide.
 
  I don't understand what that means.

 I think the concern is about the case where we end up with legacy
 callback Factory methods that co-exist new with Promise-based flavors of
 the factory methods. There's no technical obstacle to having the two
 co-exist with the same name, it's just an overload.

Yes, that's my concern.


 I guess I don't understand what methods we're talking about here. Can we
 be more concrete? I am very much in favour of not having redundant APIs,
 not having lots of different kinds of APIs. But I'm not aware of this
 problem existing here. We have constructors and synchronous factory
 methods, have had for over a decade, and we're slowly adding constructors
 where it makes sense and not adding new synchronous factory methods. But
 in the case of ImageData, we need an asynchronous factory. This is unusual
 in the Web; mostly we have instead returned incomplete objects. In this
 case, the whole point of the API is to avoid this. This means we need a
 callback mechanism; Promises are a good, non-invasive way to do this.

We have the same issues with WebRTC, which already has a callback
based API, but there is a suggestion to replace/augment with a Promise
based API, so I just wanted to understand the motivation, potential
complications and implications.

One issue is the change in API paradigm that we use. People have got
used to wrapping the callback API with a Promise style API when they
need it. Now they have to do both: wrap the browser API for a Promise
style API and wrap it for a callback style API.

It may well be that Promises are the right way to go for
createImageBitmap(), but we are blazing a new trail here and need to
be careful about the implications. For example, here is an interesting
discussion thread with a statement that node.js originally used
Promises, but moved away from them for several reasons, not least
because they created a 20% performance degradation with v8:
https://github.com/gladiusjs/gladius-core/issues/127#issuecomment-5212272
.

I just want us to be clear about the situations in which we should
take this step and where we should avoid it.

Silvia.


Re: [whatwg] Proposal: createImageBitmap should return a Promise instead of using a callback

2013-07-17 Thread Silvia Pfeiffer
On Thu, Jul 18, 2013 at 10:39 AM, Ian Hickson i...@hixie.ch wrote:
 On Thu, 18 Jul 2013, Silvia Pfeiffer wrote:

 We have the same issues with WebRTC, which already has a callback based
 API, but there is a suggestion to replace/augment with a Promise based
 API, so I just wanted to understand the motivation, potential
 complications and implications.

 WebRTC's constructor is synchronous, no?

There are many callbacks in WebRTC and getUserMedia that are
asynchronous and the discussion revolves around where it would be
useful to change to a Promise-based model.
Related thread ends here:
http://lists.w3.org/Archives/Public/public-webrtc/2013Jul/0170.html -
no decisions or concrete proposals have been made, but the discussion
exists.

 I don't understand the relevance to createImageBitmap().

Sorry, I side-tracked this thread to a more fundamental discussion
about when it's appropriate to introduce Promises in the Web
platform and this was the first function that got changed to follow
this new pattern, AFAIK.

Silvia.


Re: [whatwg] Reconsidering how we deal with text track cues

2013-06-17 Thread Silvia Pfeiffer
On Thu, Jun 13, 2013 at 3:08 AM, Ian Hickson i...@hixie.ch wrote:
 On Wed, 12 Jun 2013, Silvia Pfeiffer wrote:

 As we continue to evolve the functionality of text tracks, we will
 introduce more complex other structured content into cues and we will
 want browsers to parse and interpret them.

 I think it's a mistake to try to solve problems before they exist. We
 don't know exactly what we'll be adding in the future, so we don't know
 what we'll need yet.

I'm preparing to start specifying how to render chapters. There's
already been mention of need for a thumbnail image in chapters.

I'll also have to specify how to render descriptions. Since the
target audience are blind and vision-impaired users, there will be a
rendering algorithm that includes speech synthesis.

This is a problem I have to deal with now.


 For example, I expect that once we have support for speech synthesis in
 browsers [1], cues of kind descriptions will be voiced by speech
 synthesis, and eventually we want to influence that speech synthesis
 with markup (possibly a subpart of SSML [2] or some other simpler markup
 that influences prosody).

 I think it's highly unlikely that we'll actually ever want that, but if we
 ever do, then we should fix the problem then.

Rendering description cues with speech synthesis is 100% something
that is coming. Richer markup of description cues is then just the
logical next step - it won't be required now, but is certainly on the
roadmap. How likely it will be to be SSML is unclear - I'd much prefer
a simpler markup for WebVTT, too.


 All of these new cue settings would end up as new attributes on the
 WebVTTCue object. This is a dangerous design path that we have taken.

 This is wrong on two points. One, there's nothing forcing a text track
 format to only generate one kind of object -- just like HTML generates
 different objects for different elements, WebVTT could generate different
 objects for different cues.

Indeed, that's what I believe will be necessary.

 Two, it's not dangerous to have an object with
 lots of fields.

Why then do we distinguish between an HTMLMediaElement, a
HTMLVideoElement and a HTMLAudioElement? What reasons make us create
new objects?


 What we have done with WebVTT is actually two-fold:
 1. we have created a file format that serializes arbitrary content
 that is time-synchronized with a media element.
 2. and we have created a simple caption/subtitle cue format.

 That both are called WebVTT is the cause of a lot of confusion and not
 a good design approach.

 I think it's a mistake to view these as distinct. It's just one format.
 But as you're that spec's editor, that's your choice. :-)

We've actually done more - we also have a chapter and a metadata cue format:
http://dev.w3.org/html5/webvtt/#dfn-webvtt-cue

WebVTT chapter title text is syntactically a subset of WebVTT cue
text, and WebVTT cue text is syntactically a subset of WebVTT metadata
text. Conformance checkers, when validating WebVTT files, may offer to
restrict all cues to only having WebVTT chapter title text or WebVTT
cue text as their cue payload; WebVTT metadata text cues are only
useful for scripted applications (using the metadata text track
kind).

They are already hierarchically defined upon each other (already when
you were the editor).

They just aren't represented in objects in this way.


 Firstly, there are consequences on the WebVTT spec.

 I suggest we rename WebVTTCue [1] to VTTCaptionCue and allow such cues
 only on tracks of kind={caption, subtitle}.

 I don't think that makes any sense. Any WebVTT file can be used for any
 kind of track. These are orthogonal contexts.

Yes, there are two different things at play: the format of the cue and
the interpretation of the cue format in the browser. The second one is
driven by the kind.

However, WebVTT files are authored with a certain usage target in
mind. If I author a caption file, I'd not expect it to work when
interpreted as a chapter track or a description track.

It is possible to interpret a caption cue on any kind of track, but
then it needs to follow the parsing and rendering approach of cues on
that kind of track. Hooking these different parsing and rendering
algorithms up to the WebVTTCue object and dynamically applying them
depending on the kind of track is a lot of magic to be hidden in an
object. Normally every object that we have in HTML has a single
rendering approach and doesn't change depending on an attribute
setting of a member object.

Thus, I suggest that a cue coming from a WebVTT file on a kind=chapter
track will be interpreted as a ChapterCue, on a kind=captions track as
a VTTCaptionsCue, and on a kind=metadata track as a MetadataCue. The
cue as authored in WebVTT could, however, contain anything.
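
A sketch of what that split could look like to scripts (the constructor
names are hypothetical, per this proposal):

  var chapters = video.addTextTrack('chapters');
  chapters.addCue(new ChapterCue(0, 300, 'Opening'));        // not WebVTTCue

  var meta = video.addTextTrack('metadata');
  meta.addCue(new MetadataCue(0, 300, '{"any": "payload"}'));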


 It would be like having a different DOM for an HTML file in an iframe
 and in a top-level browsing context.

Contrast that to applying a different parsing and rendering algorithm
of the iframe depending

Re: [whatwg] Reconsidering how we deal with text track cues

2013-06-17 Thread Silvia Pfeiffer
On Tue, Jun 18, 2013 at 12:57 PM, Brendan Long s...@brendanlong.com wrote:
 On 06/17/2013 12:41 AM, Silvia Pfeiffer wrote:
 Why VTTCaptionCue and not just HTMLCue? It seems like any cue that can
 be rendered needs to be able to provide its content as HTML, and once we
 have that, the browser shouldn't care where we got that HTML from.
 That could indeed be a different way to approach caption cues.
 However, authoring caption text on video with only the formatting
 markup that a caption may need and limiting HTML functionality to
 features that captions need was one of the motivations for creating
 WebVTT.

 I don't think it's necessary to use the same language for authoring as
 display though. Since we already have rules for rendering HTML, and
 WebVTT seems to be a subset of HTML (with some special CSS rules, and
 some shorthand tags), I think the easiest way to handle it would be to
 translate WebVTT cues into HTML+CSS, then rely on the existing rendering
 engine.

That's exactly what the WebVTT rendering algorithm does and HTMLCue is
not necessary for this.

However, you should be able to author WebVTT cues in JavaScript -
that's what WebVTTCue was created for.

HTH,
Silvia.


[whatwg] Reconsidering how we deal with text track cues

2013-06-11 Thread Silvia Pfeiffer
Hi all,

The model in which we have looked at text tracks (track element of
media elements) thus far has some issues that I would like to point
out in this email and I would like to suggest a new way to look at
tracks. This will result in changes to the HTML and WebVTT specs and
has an influence on others specifying text track cue formats, so I am
sharing this information widely.

Current situation
=
Text tracks provide lists of timed cues for media elements, i.e. they
have a start time, an end time, and some content that is to be
interpreted in sync with the media element's timeline.

WebVTT is the file format that we chose to define as a serialisation
for the cues (just like audio files serialize audio samples/frames and
video files serialize video frames).

The means in which we currently parse WebVTT files into JS objects has
us create objects of type WebVTTCue. These objects contain information
about any kind of cue that could be included in a WebVTT file -
captions, subtitles, descriptions, chapters, metadata and whatnot.

The WebVTTCue object looks like this:

enum AutoKeyword { "auto" };
[Constructor(double startTime, double endTime, DOMString text)]
interface WebVTTCue : TextTrackCue {
   attribute DOMString vertical;
   attribute boolean snapToLines;
   attribute (long or AutoKeyword) line;
   attribute long position;
   attribute long size;
   attribute DOMString align;
   attribute DOMString text;
  DocumentFragment getCueAsHTML();
};

There are attributes in the WebVTTCue object that relate only to cues
of kind captions and subtitles (vertical, snapToLines etc). For cues
of other kinds, the only relevant attribute right now is the text
attribute.

This works for now, because cues of kind descriptions and chapters are
only regarded as plain text, and the structure of the content of cues
of kind metadata is not parsed by the browser. So, for cues of kind
descriptions, chapters and metadata, that .text attribute is
sufficient.
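
To illustrate, a short sketch of the current single-object model
(assuming a media element named video): every kind of cue goes through
WebVTTCue, and for these kinds only .text matters:

  var track = video.addTextTrack('metadata');
  track.addCue(new WebVTTCue(0, 10, '{"lat": -33.87, "lng": 151.21}'));

  track.oncuechange = function () {
    for (var i = 0; i < track.activeCues.length; i++) {
      var data = JSON.parse(track.activeCues[i].text);
      // ... use the metadata ...
    }
  };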


The consequence
===
As we continue to evolve the functionality of text tracks, we will
introduce more complex other structured content into cues and we will
want browsers to parse and interpret them.

For example, I expect that once we have support for speech synthesis
in browsers [1], cues of kind descriptions will be voiced by speech
synthesis, and eventually we want to influence that speech synthesis
with markup (possibly a subpart of SSML [2] or some other simpler
markup that influences prosody).

Since we have set ourselves up for parsing all cue content that comes
out of WebVTT files into WebVTTCue objects, we now have to expand the
WebVTTCue object with attributes for speech synthesis, e.g. I can
imagine cue settings for descriptions to contain a field called
channelMask that specifies which audio channels a particular cue should
be rendered into, with values being center, left, right.

Another example is that eventually somebody may want to introduce
ThumbnailCues that contain data URLs for images and may have a
transparency cue setting. Or somebody wants to formalize
MidrollAdCues that contain data URLs for short video ads and may have
a skippableAfterSecs cue setting.

All of these new cue settings would end up as new attributes on the
WebVTTCue object. This is a dangerous design path that we have taken.

[1] https://dvcs.w3.org/hg/speech-api/raw-file/tip/speechapi.html#tts-section
[2] http://www.w3.org/TR/speech-synthesis/#S3.2


Problem analysis

What we have done by restricting ourselves to a single WebVTTCue
object to represent all types of cues that come from a WebVTT file is
to ignore that WebVTT is just a serialisation format for cues, but
that cues are the ones that provide the different types of timed
content to the browser. The browser should not have to care about the
serialisation format. But it should care about the different types of
content that a track cue could contain.

For example, it is possible that a WebVTT caption cue (one with all
the markup and cue settings) can be provided to the browser through a
WebM file or through a MPEG file or in fact (gasp!) through a TTML
file. Such a cue should always end up in a WebVTTCue object (will need
a better name) and not in an object that is specific to the
serialisation format.

What we have done with WebVTT is actually two-fold:
1. we have created a file format that serializes arbitrary content
that is time-synchronized with a media element.
2. and we have created a simple caption/subtitle cue format.

That both are called WebVTT is the cause of a lot of confusion and
not a good design approach.


The solution
===
We thus need to distinguish between cue formats in the browser and not
between serialisation formats (we don't distinguish between different
image formats or audio formats in the browser either - we just handle
audio samples or image pixels).

Once a WebVTT file is parsed into a list of cues, 

Re: [whatwg] Pull requests for HTML5 spec?

2013-05-14 Thread Silvia Pfeiffer
You can make pull requests for the master branch for
https://github.com/w3c/html , which will end up in the HTML5.1 spec
[1]. Patches to the CR branch end up in the HTML5.0 spec [2] if
accepted. For anything that's more than editorial, register a bug on
https://www.w3.org/Bugs/Public/enter_bug.cgi?product=HTML%20WG .

If you want to contribute to the WHATWG spec, you should register a
bug on https://www.w3.org/Bugs/Public/describecomponents.cgi?product=WHATWG
. WHATWG patches eventually get cherry-picked into the W3C spec, too,
unless there is strong opposition in the HTML WG.

HTH.

Cheers,
Silvia.

[1] http://www.w3.org/html/wg/drafts/html/master/single-page.html
[2] http://www.w3.org/TR/html5/

On Tue, May 14, 2013 at 4:50 PM, Michael Day mike...@yeslogic.com wrote:
 Hi,

 There are various branches and versions of the W3C and WHAT-WG HTML
 specifications hosted on Github.

 Is there any standard procedure in place for pull requests, if you have
 editorial changes to suggest?

 Or is there a better way to track these kinds of changes?

 Cheers,

 Michael

 --
 Prince: Print with CSS!
 http://www.princexml.com


Re: [whatwg] Forced subtitles

2013-04-15 Thread Silvia Pfeiffer
Hi Jonathan,

All of what you're saying is indeed how it would work.

The existing means of publishing subtitles and the developer choice to add a
@default attribute on a track that a developer wants activated by default
are not affected by a new @kind=forced. You can still do all the things that
you're talking about. @kind=forced kicks in only if neither the developer
nor the user has made any explicit choice about which track to activate.
Also, Eric's suggestion to activate the forced subtitle track that matches
the language of the video seems to meet your suggestion.

Silvia.



On Tue, Apr 16, 2013 at 7:25 AM, Jonathan Garbee jonat...@garbee.me wrote:

 I think it should be up to the developer to chose the default subtitles
 they want. What if someone is trying to localize a page, and the default
 subtitle is always English when they want Spanish? If not specified, the
 default subtitle should be the specified language of the page or video.
 But, if a specific subtitle language is told to the browser it should use
 that.


 On Thu, Apr 11, 2013 at 7:35 PM, Eric Carlson eric.carl...@apple.com
 wrote:

 
  On Apr 11, 2013, at 3:54 PM, Silvia Pfeiffer silviapfeiff...@gmail.com
  wrote:
 
   I think Eric is right - we need a new @kind=forced or
  @kind=forcedSubtitles value on track elements, because they behave
  differently from the subtitle kind:
   * are not listed in a track menu
   * are turned on by browser when no other subtitle or caption track is
 on
   * multiple forced subtitles tracks can be on at the same time (see
  discussion at https://www.w3.org/Bugs/Public/show_bug.cgi?id=21667 )
  
    I only wonder how the browser is meant to identify for which language it
   needs to turn on the forced subtitles. If it should depend on the language
   of the audio track of the video rather than the browser's default language
   setting, maybe it will need to be left to the server to pick which tracks
   to list and all forced tracks are on, no matter what? Did you have any
   ideas on this, Eric?
  
I believe it should be the language of the video's primary audio track,
  because forced subtitles are enabled in a situation where the user can
  presumably understand the dialog being spoken in the track's language and
  has not indicated a preference for captions or subtitles.
 
  eric
 
 
  
   On Fri, Apr 12, 2013 at 4:08 AM, Eric Carlson eric.carl...@apple.com
  wrote:
  
 In working with real-world content with in-band subtitle tracks, I have
   realized that the spec doesn't accommodate forced subtitles. Forced
   subtitles are used when a video has dialog or text in a language that is
   different from the main language. For example in the Lord of the Rings,
   dialog in Elvish is subtitled so those of us that don't speak Elvish can
   understand.

 This is only an issue for users that do not already have
   subtitles/captions enabled, because standard caption/subtitle tracks are
   expected to mix the translations into the other captions in the track. In
   other words, if I enable an English caption track I will get English
   captions for the dialog spoken in English and the dialog spoken in Elvish.
   However, users that do not typically have subtitles enabled also need to
   have the Elvish dialog translated, so subtitle providers typically provide a
   second subtitle track with *only* the forced subtitles.
  
  UAs are expected to automatically enable a forced-only subtitle track
   when no other caption/subtitle track is visible and there is a forced-only
   track in the same language of the primary audio track. This means that when
   I watch a version of LOTR that has been dubbed into French and I do not
   have a subtitle or caption track enabled, the UA will automatically show
   French forced subtitles if they are available.

 Because forced subtitles are meant to be enabled automatically by the
   UA, it is essential that the UA is able to differentiate between normal
   and forced subtitles. It is also important because forced subtitles are
   not typically listed in the caption menu, again because the captions in
   them are also in the normal subtitles/captions.

 I therefore propose that we add a new @kind value for forced subtitles.
   Forced is a widely used term in the industry, so I think forced is the
   appropriate value.
  
   eric
  
  
  
 
 



Re: [whatwg] Forced subtitles

2013-04-11 Thread Silvia Pfeiffer
I think Eric is right - we need a new @kind=forced or
@kind=forcedSubtitles value on track elements, because they behave
differently from the subtitle kind:
* are not listed in a track menu
* are turned on by browser when no other subtitle or caption track is on
* multiple forced subtitles tracks can be on at the same time (see
discussion at https://www.w3.org/Bugs/Public/show_bug.cgi?id=21667 )

I only wonder how the browser is meant to identify for which language it
needs to turn on the forced subtitles. If it should depend on the language
of the audio track of the video rather than the browser's default language
setting, maybe it will need to be left to the server to pick which tracks
to list and all forced tracks are on, no matter what? Did you have any
ideas on this, Eric?
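
For concreteness, a sketch of the proposed markup (the kind value per
Eric's proposal; file names illustrative):

  <video src="lotr-french-dub.webm" controls>
    <track kind="subtitles" srclang="fr" src="fr.vtt" label="Français">
    <!-- not listed in the caption menu; enabled by the UA
         when no other subtitle/caption track is on -->
    <track kind="forced" srclang="fr" src="fr-forced.vtt">
  </video>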

Silvia.

On Fri, Apr 12, 2013 at 4:08 AM, Eric Carlson eric.carl...@apple.com wrote:


  In working with real-world content with in-band subtitle tracks, I have
 realized that the spec doesn't accommodate forced subtitles. Forced
 subtitles are used when a video has dialog or text in a language that is
 different from the main language. For example in the Lord of the Rings,
 dialog in Elvish is subtitled so those of us that don't speak Elvish can
 understand.

  This is only an issue for users that do not already have
 subtitles/captions enabled, because standard caption/subtitle tracks are
 expected to mix the translations into the other captions in the track. In
 other words, if I enable an English caption track I will get English
 captions for the dialog spoken in English and the dialog spoken in Elvish.
 However, users that do not typically have subtitles enabled also need to
 have the Elvish dialog translated so subtitle providers typically provide a
 second subtitle track with *only* the forced subtitles.

   UAs are expected to automatically enable a forced-only subtitle track
 when no other caption/subtitle track is visible and there is a forced-only
 track in the same language of the primary audio track. This means that when
 I watch a version of LOTR that has been dubbed into French and I do not
 have a subtitle or caption track enabled, the UA will automatically show
 French forced subtitles if they are available.

  Because forced subtitles are meant to be enabled automatically by the UA,
 it is essential that the UA is able to differentiate between normal and
 forced subtitles. It is also important because forced subtitles are not
 typically listed in the caption menu, again because the captions in them
 are also in the normal subtitles/captions.

  I therefore propose that we add a new @kind value for forced subtitles.
 Forced is a widely used term in the industry, so I think forced is the
 appropriate value.

 eric





Re: [whatwg] Proposal: channel attribute on HTMLMediaElement

2013-04-07 Thread Silvia Pfeiffer
These all sound like important use cases. What I am reading is that there
is a need to manage mute-state overrides, playback state based on
browser state, and latency (some sort of urgency measure). Is
there a reason that you have munged these into a single attribute? I am
asking because I can see them as being independent dimensions, e.g. you
could require low latency and mute state override together.

Silvia.



On Sat, Apr 6, 2013 at 12:50 AM, Mounir Lamouri mou...@lamouri.fr wrote:

 Hi,

 As Wes suggested recently [1], we need a way for content to be able
 to ask for its media to be played in the background. This is
 particularly useful on Mobile when applications could have their audio
 shut down when they are in the background. However, we can imagine that
 someone listening to a web music player might want to keep that music
 stream playing when the browser is no longer in the foreground. Also, it
 will help browsers to know when to keep media playing or pause them when
 a tab is put in the background.

 However, that problem is a sub-class of a larger problem about assigning
 media elements to a specific channel. This is a problem Firefox OS and
 Windows 8 have tackled recently with proprietary extensions to
 HTMLMediaElement [2][3]. This is a feature other platforms have, like
 Android [4] or PulseAudio (GNU/Linux) [5].

 Based on the prior work, Paul Adenot and I tried to figure out the use
 cases that would apply on the web.

 Our proposal is to add a channel attribute on HTMLMediaElement. That
 attribute would give information about the type of channel to use and
 thus help the UA to know if the channel should be muted or not based on
 the current context. In addition, depending on the type of channel, the
 UA could decide whether or not to create a low-latency channel.

 There is an open question regarding the default behaviour. Our proposal
 makes the default behaviour to only play the media when the website is
 visible, but this is not the common default behaviour today and such a
 change might break some websites. Depending on how critical this
 backwards-compatibility issue is, we could add a 'Default' state that
 would have an undefined behaviour and do whatever compatibility requires.

 The proposal is the following:

 The channel attribute gives a hint about the type of channel the author
 is expecting the UA to use. It is an enumerated attribute that uses the
 following keywords and states:

 Keyword: media
 State: Foreground Media
 Fallback: none
 Description: To be used for a media element that plays basic media
 such as an audio or video stream that should be paused when the page is
 put in the background.

 Keyword: background-media
 State: Background Media
 Fallback: Foreground Media
 Description: To be used for a media element that plays basic media
 such as an audio or video stream that should not be paused when the page
 is put in the background. Music, podcast or radio players are expected
 to use this state.

 Keyword: effects
 State: Effects
 Fallback: Foreground Media
 Description: To be used for a media element that creates short, quick
 effects such as button clicks or game effects. It is intended to be used
 for effects that are heard while using the page, not for
 notifications. When in this state, the media should use a low-latency
 channel.

 Keyword: notification
 State: Notification
 Fallback: effects
 Description: Like the Effects state, this is intended for short
 and quick media that needs to catch the user's attention whether the
 page is currently visible or not. For example, this state could be used
 for a sound notification when there is a new email in the user's inbox.

 Keyword: communication
 State: Communication
 Fallback: Background Media
 Description: To be used for a media element that transmits real-time
 communication such as phone calls or VOIP. When in this state, the media
 should use a low-latency channel.

 Keyword: alarm
 State: Alarm
 Fallback: Notification
 Description: To be used for a media element that needs to be
 played even if the device is currently muted. A typical use case would be
 an alarm clock.

 The UA might not support some channels or not allow a specific media
 element to use some channels, in which case the fallback state should be
 used.

 The missing value default is the Foreground Media state.
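
 For concreteness, markup under this proposal might look like the
 following - my sketch only; the channel attribute was never
 standardized:

   <audio src="album.ogg" channel="background-media" controls></audio>
   <audio src="new-mail.wav" channel="notification"></audio>
   <audio src="wake-up.ogg" channel="alarm"></audio>

 A UA that does not support a requested channel would fall back to the
 state listed above, ultimately to Foreground Media.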

 [1]
 http://lists.whatwg.org/pipermail/whatwg-whatwg.org/2013-March/039202.html
 [2] https://wiki.mozilla.org/WebAPI/AudioChannels
 [3] http://msdn.microsoft.com/en-us/library/windows/apps/hh767375
 [4]
 https://developer.android.com/reference/android/media/AudioManager.html
 (see STREAM_* constants)
 [5]

 http://www.freedesktop.org/wiki/Software/PulseAudio/Documentation/Developer/Clients/ApplicationProperties#line-58
 (see PA_PROP_MEDIA_ROLE)

 Thanks,
 --
 Mounir



Re: [whatwg] Hide placeholder on input controls on focus

2013-03-22 Thread Silvia Pfeiffer
On 23 Mar 2013 10:00, Tim Streater t...@clothears.org.uk wrote:

 On 22 Mar 2013 at 22:32, Glenn Maynard gl...@zewt.org wrote:

  I start typing after I read the placeholder.  Hiding placeholder text
just
  because I focused the input is wrong; I may not have read it yet.

 You shouldn't start typing until you *have* read it. Simples.

When you tab onto it, you likely haven't seen it yet. Whereas when you
click on it, you likely have.

Silvia.


Re: [whatwg] use of article to markup comments

2013-02-17 Thread Silvia Pfeiffer
On Mon, Feb 18, 2013 at 12:19 AM, Nils Dagsson Moskopp 
n...@dieweltistgarnichtso.net wrote:

 Bruce Lawson bru...@opera.com schrieb am Sat, 26 Jan 2013 13:30:18
 -:

  In short, why should the spec suggest any specific method of marking
  up comments?

 As someone who is interested in semantics and tired of scraping
 content and applying scrappy heuristics: If it is clear that an
 article within an article represents a comment, one can easily:



article in article could be a comment. Or it could be something else
entirely. Your heuristic may work in many cases, but certainly not in all.
If we really wanted to be sure where to find the semantic concept of a
comment, we should introduce a meaningful element for it such as comment.
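
As a sketch, the heuristic in question is essentially one line of
JavaScript - my illustration, and exactly as fragile as described:

  // nested articles as candidate comments; nesting alone proves nothing
  const candidateComments = document.querySelectorAll('article article');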

Silvia.


Re: [whatwg] Is main now an official HTML5 element?

2013-02-13 Thread Silvia Pfeiffer
 I will try to determine what topic
 belongs to which group in the future.

The simple answer is: have your discussion wherever you feel comfortable
having it. Even if the specs differ, in the end what matters is what browsers
implement.

If a discussion about a topic is more appropriate in a different forum (and
that could also be the W3C CSS WG, a W3C Community Group, an IETF list or
another list), people will soon enough tell you.

HTH.
Silvia.


Re: [whatwg] A plea to Hixie to adopt main

2012-11-28 Thread Silvia Pfeiffer
On Wed, Nov 28, 2012 at 4:26 PM, Ian Hickson i...@hixie.ch wrote:


 All of this has already been discussed on this mailing list, so this is
 not new information. I would please refer you to the earlier messages on
 this topic. In general, unless there is substantial new information,
 please don't keep posting on a thread in this mailing list -- the list is
 high-traffic enough without us covering old ground (which is unlikely to
 result in a different outcome if nothing has changed).


I think there are misunderstandings here - in particular about how blind
users perceive a Web page. I was trying to be helpful. It seems the cards
have fallen and we will just need to continue authoring and preaching
@role=main. Sorry about the noise.

Regards,
Silvia.


Re: [whatwg] A plea to Hixie to adopt main

2012-11-27 Thread Silvia Pfeiffer
On Sat, Nov 17, 2012 at 11:01 AM, Ian Hickson i...@hixie.ch wrote:
[..]


 On Sat, 10 Nov 2012, Maciej Stachowiak wrote:
 
  I personally think main would be useful. I don't think it has a huge
  benefit, but it has modest benefits, like aside, header, footer
  and section. I also think the implementation costs are low. The
  reasons I think it has some benefits:
 
  - Even though heuristics (such as the scooby-doo algorithm or even
  guesses based on role or class, or the layout) will always be necessary
  in some cases, it's still good to have a simple and relatively
  trustworthy marker of the main content. This is useful both for
  accessibility purposes and for other browser features that want to find
  the main content. In many cases, we have found that even when semantics
  can be heuristically inferred, having an explicit marker is still
  useful. For example, you can usually guess that some text is an address,
  but we still have a microformat that helps identify such data
  unambiguously.

 But we already have this. The main content is whatever content isn't
 marked up as not being main content (anything not marked up with header,
 aside, nav, etc).


I tried to validate that claim. It's not really possible with today's Web
pages, since they haven't moved to making use of these elements, but I made
some educated guesses as to where they would be used sensibly on a normal
Web page. I have applied this to Google search results, Facebook user
pages, YouTube video pages, and Wikipedia articles as examples of some of
the most used content on the Web. You can see my results at
http://blog.gingertech.net/2012/11/28/the-use-cases-for-a-main-element-in-html/.

I believe that none of the heuristic approaches work 100% of the time. In
particular Scooby-Doo doesn't work, because there are far too many
layout-only elements that cannot be captured with header, aside etc., and
for which we cannot clearly say that the first top-level such element
should be regarded as the main content element.
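
A minimal JavaScript sketch of this style of heuristic - my own
illustration, not a specified algorithm - that takes whatever is left
after removing the explicitly not-main sections:

  // crude "what remains is main" heuristic; real pages defeat this easily
  function scoobyDoo(doc) {
    const notMain = 'header, nav, aside, footer';
    const rest = [...doc.body.children].filter(el => !el.matches(notMain));
    return rest.length === 1 ? rest[0] : doc.body;
  }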

Hope that bit of research helps.

Regards,
Silvia.


Re: [whatwg] A plea to Hixie to adopt main

2012-11-27 Thread Silvia Pfeiffer
On Wed, Nov 28, 2012 at 10:35 AM, Ian Hickson i...@hixie.ch wrote:

 On Wed, 28 Nov 2012, Silvia Pfeiffer wrote:
  
   But we already have this. The main content is whatever content isn't
   marked up as not being main content (anything not marked up with
   header, aside, nav, etc).
 
  I tried to validate that claim. It's not really possible with today's
  Web pages, since they haven't moved to making use of these elements, but
  I made some educated guesses as to where they would be used sensibly on
  a normal Web page. I have applied this to Google search results,
  Facebook user pages, YouTube video pages, and Wikipedia articles as
  examples of some of the most used content on the Web. You can see my
  results at
 
 http://blog.gingertech.net/2012/11/28/the-use-cases-for-a-main-element-in-html/
 .

 I think you're massively over-complicating what needs to be authored here.

 For example, with a Google search, just mark up everything up to the
 id=main in a header, mark up the id=top_nav as a nav, and mark up
 the id=foot in a footer, and what's left is the main content. (Note
 that this does _not_ map to what the authors would have marked up using
 main, as determined by looking at what they marked up with id=main --
 that contains the navigation.)


Agreed. But what the authors have marked up as @id=main is not relevant for
this discussion - it's not what main tries to achieve. We need to look at
what is marked up with @role=main and that's a completely different element.

Google actually placed a @role=main on a different element, namely the one
that encapsulates all the search results. This is where the main should
be and that excludes all the columns on the right and left of the search
results.



 I don't know what you'd mark up on youtube.com because I don't know what
 is the main content there.


The feedback that I get from blind users is: the main content is the video
and I can't find it.


 But marking up all the children of
 id=body-container as header except the id=page-container child, and
 marking the id=guide-container and first class=branded-page-v2-
 secondary-col as aside, would make the stream in the middle be the
 first main content, which is probably what the page author intended.


Actually, no - see above.


 This again doesn't match what the author would likely use for main --
 id=content -- which contains the second of those asides currently.


No, the element with @id=content is also not the one that main should
contain - see above.


 Wikipedia already has role=main on the appropriate element, and all the
 stuff that isn't main (except the appeal) comes after that, so they're
 fine either way, even without ARIA. Their divs map pretty directly to
 the elements in HTML so that the algorithm I describe above would surface
 the main content fine.


Agreed - it's the simple case.


 I believe that none of the heuristic approaches work 100% of the time.

 Sure. Nor would main.


When authored correctly or corrected after feedback from blind users, it
will work - and it will always work better than the heuristic approach. In
addition, there is a large enough blind community in the world to make sure
that where it doesn't work they will speak up to get it fixed. For that
reason, main is a hint that is always better than any heuristic approach,
in particular as accessibility tools do not support any heuristic
approaches.

Regards,
Silvia.


Re: [whatwg] Feature Request: Media Elements as Targets for Links

2012-11-25 Thread Silvia Pfeiffer
Can you provide an example markup and an example URL that you think will
solve your use case?

I'm asking because we don't use the name attribute any more in HTML5,
because we have the id attribute on all elements. Thus, it is always
possible to hyperlink directly to a video element using a hash on a URL and
the value of the id element. But I still wonder what you think is missing.
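
For example - my markup, illustrating the id-based linking described
above:

  <video id="talk" src="talk.webm" controls></video>
  <a href="#talk">Jump to the video element on this page</a>
  <a href="talk.webm#t=120">Open the resource two minutes in</a>

The second link is a media fragment URI, which navigates away from the
page - the limitation Nils describes below.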

Regards,
Silvia.

On Sun, Nov 25, 2012 at 7:19 AM, Nils Dagsson Moskopp 
n...@dieweltistgarnichtso.net wrote:

 Excuse me if I am doing something wrong by submitting this by mail. I
 am doing this for the first time, trying to fill in the template given
 at http://wiki.whatwg.org/wiki/Problem_Solving as well as I could.

 Use Case Description:

   Linking to specific fragments of media is possible via media fragment
   URIs [1]. However, it is not possible to apply a link to embedded
   media declaratively, for example to link to a specific point in time
   for a media element on a page.

   [1] http://www.w3.org/TR/media-frags/

 - Current Limitations:

   Linking to media using media fragment URIs changes browsing context.

 - Current Usage and Workarounds:

   1. metavid (Videos of United States Congress) uses JavaScript, even
   though they have CMML transcripts and SRT.

   2. I have a podcast ”Warum nicht?“ generated by a software called
   redokast. Annotations need JavaScript: Click on the timestamps.
   http://warumnicht.dieweltistgarnichtso.net/wn-15.html
   https://github.com/erlehmann/redokast

 - Benefits: Declarative markup would make referring to timed
   annotations easier. Referring to a specific point in time in a podcast
   on the same comments, for example, could be possible.

 Proposed Solutions:

 - My Solution:
   Give HTML media elements a name attribute. Make them valid targets for
   links with a target attribute.

   - Processing Model:
 Processing for media elements and the a element needs to change.

 
 http://www.whatwg.org/specs/web-apps/current-work/multipage/links.html#attr-hyperlink-target
 
 Change “The target attribute, if present, must be a valid browsing
 context name or keyword. It gives the name of the browsing context
 that will be used.” to “The target attribute, if present, must be a
 valid browsing context name or keyword or the name of a media
 element in the current browsing context. It gives the name of the
 browsing context or media element that will be used.”

 
 http://www.whatwg.org/specs/web-apps/current-work/multipage/links.html#following-hyperlinks-0
 
 Append after “If the user indicated a specific browsing context when
 following the hyperlink, or if the user agent is configured to
 follow hyperlinks by navigating a particular browsing context, then
 that must be the browsing context that is navigated.” the paragraph
 “If the user indicated a media element on the current page when
 following the hyperlink, then change the currentSrc attribute of
 the media element to the absolute URL given by the href attribute
 relative to the URL given by the currentSrc of the media element.”.

 (I am unsure about relative URIs. Would we need to change only the
 media fragment, and not re-run the initialization steps? What about
 the media formats given by source elements?)

   - Limitations
 (No idea.)

   - Implementation:
 (I am not a very clever guy. Someone would need to fill this in.)

   - Adoption:
 Users could easily link to parts of media resources on a page. The
 solution would be backwards compatible for existing UAs that are
 able to process media fragment URIs as long as absolute URIs are
 used. A JavaScript polyfill could be used while not all UAs support
 this feature. Consumers of web pages could easily see what a
 discussion about a media resource refers to.

 --
 Nils Dagsson Moskopp // erlehmann
 http://dieweltistgarnichtso.net



Re: [whatwg] A plea to Hixie to adopt main

2012-11-15 Thread Silvia Pfeiffer
On Fri, Nov 16, 2012 at 1:45 AM, Tim Leverett ...@gmail.com wrote:

  Con: Adding a main element adds redundancy to the [role=main]
 attribute.
  I don't see why this is a con, if main is mapped to role=main in the
 browser it means that authors won't have to. Also adding
 aside/article/footer etc adds redundancy to the matching ARIA roles.

 Redundancy tends to be a source of error if they get out of sync. If one
 browser supports [role=main] and another supports main, both would be
 needed to provide compatibility. Obviously this is a bit contrived, as
 browsers supporting main would likely also support [role=main], but
 older versions would not support main . Going forward, this would mean
 that authors wanting to use main would have to use main role=main for
 backwards compatibility.



Actually, that's a good point. I would add this: if main or an
element with @role=main exists on the page, there is no need to run the
Scooby-Doo algorithm and that element can just be chosen as the main
element.

Silvia.


Re: [whatwg] A plea to Hixie to adopt main

2012-11-15 Thread Silvia Pfeiffer
On Fri, Nov 16, 2012 at 12:21 PM, Eitan Adler li...@eitanadler.com wrote:

 On 15 November 2012 19:20, Silvia Pfeiffer silviapfeiff...@gmail.com
 wrote:
  On Fri, Nov 16, 2012 at 1:45 AM, Tim Leverett ...@gmail.com wrote:
 
   Con: Adding a main element adds redundancy to the [role=main]
  attribute.
   I don't see why this is a con, if main is mapped to role=main in the
  browser it means that authors won't have to. Also adding
  aside/article/footer etc adds redundancy to the matching ARIA roles.
 
  Redundancy tends to be a source of error if they get out of sync. If one
  browser supports [role=main] and another supports main, both would
 be
  needed to provide compatibility. Obviously this is a bit contrived, as
  browsers supporting main would likely also support [role=main], but
  older versions would not support main . Going forward, this would mean
  that authors wanting to use main would have to use main role=main
 for
  backwards compatibility.
 
 
 
  Actually, there's a good point: I would actually add this: if main or
 an
  element with @role=main exist on the page, there is no need to run the
  Scooby-Doo algorithm and that element can just be chosen as the main
  element.

 What if both exist but are different elements?


Good question. I'd likely choose main over @role=main.
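
Combined with the earlier point, the lookup order amounts to something
like this sketch - my illustration only:

  // explicit main wins, then @role=main, then heuristics
  function mainElement(doc) {
    return doc.querySelector('main') ||
           doc.querySelector('[role=main]') ||
           scoobyDoo(doc);  // heuristic fallback, as sketched earlier
  }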

Silvia.


Re: [whatwg] A plea to Hixie to adopt main

2012-11-14 Thread Silvia Pfeiffer
Apologies for misunderstanding - the smiley led me to believe it may not
have been a real concern. I did answer in good faith though, so back to the
concern.

You are absolutely correct that an algorithmic approach would still be
necessary to resolve the situation when main is not provided in the same
way that browsers create a body tag when it's not provided or a head
etc. Scooby-Doo seems both simple enough and appropriate for this.


 I'm sure a lot of other people had to solve this problem as well and have
 done so in their own special way. Explicit author markup would make such
a
 task so much easier.

 I was disagreeing with that point because there's no way to implicitly
trust the author, in the same way that search engines can't trust meta
 name=keywords /

Are you fundamentally distrusting the author in all semantic markup? Why
then did we introduce article, header, nav, aside, footer etc
when we can't trust the author to put the correct content in there? I don't
really see the difference.

Cheers,
Silvia.


On Wed, Nov 14, 2012 at 5:21 PM, Tim Leverett ...@gmail.com wrote:

  Hope you're not just trolling

 I was just trying to make the point that an algorithmic approach to
 finding the main content of a document would still be necessary with or
 without the main element.

 ☺



 On Tue, Nov 13, 2012 at 7:03 PM, Silvia Pfeiffer 
 silviapfeiff...@gmail.com wrote:

 On Wed, Nov 14, 2012 at 4:25 AM, Tim Leverett ...@gmail.com wrote:

  Explicit author markup would make such a task so much easier.

 Only if every author marked up their code correctly. If some authors use
 incorrect markup, then an algorithm would still be necessary for
 determining if each usage was correct.


 Hope you're not just trolling.

 From a browser perspective, if there is one main element and it sits
 within body, that would be sufficiently correct.

 Whether it's semantically correct for a particular application, that's
 not something the HTML spec should or could deal with. We don't protect
 people from putting the wrong text in tags - not in microdata, not in
 article or anywhere else. An application may care - or they may trust the
 author and if the author cares enough, they will fix up their markup if it
 doesn't achieve the right goal.

 But I'm sure you were just trolling... ;-)

 Cheers,
 Silvia.





Re: [whatwg] A plea to Hixie to adopt main

2012-11-14 Thread Silvia Pfeiffer
On Thu, Nov 15, 2012 at 12:17 PM, Tim Leverett ...@gmail.com wrote:


  Are you fundamentally distrusting the author in all semantic markup?

 In some circumstances, yes. Most of the work I've done so far has been in
 environments where programmers write code, and editors write content.
 Typically the content is via a CMS. If the editor is good but the
 programmer is not, the content is still worth having even if it's surrounded
 by rubbish markup. From a data analytics and processing standpoint, there's
 no reason to discard good content just because it's held in bad code in the
 same way that there's no reason to accept bad content just because it's well
 formatted.


  Why then did we introduce article, header, nav, aside, footer
 etc when we can't trust the author to put the correct content in there? I
 don't really see the difference.

 Steve made a good point about user agents being able to tune into semantic
 elements for assistive tech. But I would guess (with no data to support my
 claim) that most programmers *want* to do things the right way.


I agree.


 I find that, semantically, most of the time most websites are mostly
 correct. Headings tend to be in h# elements, paragraphs tend to be in p
 elements, etc. Heuristic analysis of content can take advantage of semantic
 markup by giving it a heavier weight than, say...a div element, but that
 doesn't mean the heuristics are any less complex.


Are you saying we should not introduce a main element because where there
is no main element we may need to come up with a complex heuristic to
determine where it should have been?

Silvia.


Re: [whatwg] A plea to Hixie to adopt main

2012-11-13 Thread Silvia Pfeiffer
By random chance, I just stumbled across this GitHub tool:
https://github.com/visualrevenue/reporter

It provides another heuristic approach - different from Scooby-Doo - to
determining what is the main content on a page. This is from a journalist's
point of view and it uses a scoring and evaluation algorithm.

I'm sure a lot of other people had to solve this problem as well and have
done so in their own special way. Explicit author markup would make such a
task so much easier.

Regards,
Silvia.


On Tue, Nov 13, 2012 at 11:53 AM, Silvia Pfeiffer silviapfeiff...@gmail.com
 wrote:

 On Tue, Nov 13, 2012 at 5:26 AM, Jens O. Meiert j...@meiert.com wrote:

 Should main be optional or required?


 I’d deem an optional main to be nonsense because it suggests
 documents are inherently without goal, or focus.

 I’d deem a required main to be nonsense because we already have an
 (implied) body element, and because element proliferation doesn’t
 work in anyone’s favor.


 I can imagine it to become required, if we mean by that that the
 browsers will need to parse a page and either find a main element or
 determine heuristically with the Scooby-Doo algorithm which part of the
 page is actually the main part and then add that to its DOM. Since we have
 the Scooby-Doo algorithm, we have a means to stay backwards compatible.


 That body essentially means main always seemed reasonable to me.
 There are plenty of options for authors to add styling hooks if they
 need any, including div role=main.


 You are correct - there is no need for this for styling. However, main
 is actually not for styling, but to provide a direct markup of the
 *semantically* main piece of content on the page. A Scooby-Doo algorithm
 can only heuristically determine what that is - with main the Web Dev
 gets an actual vehicle to point their finger explicitly rather than
 implicitly saying in a hand-wavy manner that it's what remains if you take
 away all this other stuff (that is: if we're lucky and that other stuff
 has actually been marked up).

 Silvia.



Re: [whatwg] A plea to Hixie to adopt main

2012-11-13 Thread Silvia Pfeiffer
On Wed, Nov 14, 2012 at 4:25 AM, Tim Leverett ...@gmail.com wrote:

  Explicit author markup would make such a task so much easier.

 Only if every author marked up their code correctly. If some authors use
 incorrect markup, then an algorithm would still be necessary for
 determining if each usage was correct.


Hope you're not just trolling.

From a browser perspective, if there is one main element and it sits
within body, that would be sufficiently correct.

Whether it's semantically correct for a particular application, that's not
something the HTML spec should or could deal with. We don't protect people
from putting the wrong text in tags - not in microdata, not in article or
anywhere else. An application may care - or they may trust the author and
if the author cares enough, they will fix up their markup if it doesn't
achieve the right goal.

But I'm sure you were just trolling... ;-)

Cheers,
Silvia.


Re: [whatwg] A plea to Hixie to adopt main

2012-11-12 Thread Silvia Pfeiffer
On Tue, Nov 13, 2012 at 5:26 AM, Jens O. Meiert j...@meiert.com wrote:

 Should main be optional or required?


 I’d deem an optional main to be nonsense because it suggests
 documents are inherently without goal, or focus.

 I’d deem a required main to be nonsense because we already have an
 (implied) body element, and because element proliferation doesn’t
 work in anyone’s favor.


I can imagine it to become required, if we mean by that that the browsers
will need to parse a page and either find a main element or determine
heuristically with the Scooby-Doo algorithm which part of the page is
actually the main part and then add that to its DOM. Since we have the
Scooby-Doo algorithm, we have a means to stay backwards compatible.


That body essentially means main always seemed reasonable to me.
 There are plenty of options for authors to add styling hooks if they
 need any, including div role=main.


You are correct - there is no need for this for styling. However, main is
actually not for styling, but to provide a direct markup of the
*semantically* main piece of content on the page. A Scooby-Doo algorithm
can only heuristically determine what that is - with main the Web Dev
gets an actual vehicle to point their finger explicitly rather than
implicitly saying in a hand-wavy manner that it's what remains if you take
away all this other stuff (that is: if we're lucky and that other stuff
has actually been marked up).

Silvia.


Re: [whatwg] Sortable Tables

2012-11-07 Thread Silvia Pfeiffer
On Wed, Nov 7, 2012 at 8:37 PM, Jirka Kosek ji...@kosek.cz wrote:

 On 6.11.2012 23:18, Silvia Pfeiffer wrote:

  * data-type: date, number, text etc which determines the comparison
  function used in sort

 It would be very difficult to support sorting on dates and numbers, as in
 HTML they are usually presented formatted for a specific locale. So an
 additional attribute should be added to td/th which can hold a sort key
 that will override the cell contents, something like

 <td sortas="2012-11-07">11. listopadu 2012</td>



Agreed. My example was very crude and simple and worked fine for our
purposes, but something more generic (and internationalized) needs more
functionality like this.

Silvia.


Re: [whatwg] A plea to Hixie to adopt main

2012-11-07 Thread Silvia Pfeiffer
On Thu, Nov 8, 2012 at 3:00 AM, Markus Ernst derer...@gmx.ch wrote:

 Am 07.11.2012 15:48 schrieb Jukka K. Korpela:

  I suppose that the heuristics would include recognizing a div element
 to which class main has been assigned. Then one could argue that
 main is not needed, as authors can keep using div class=main, as
 millions of pages use.


 I doubt that this is useable for that kind of heuristics anyway - as there
 is no standard for this, main as a class name may indicate the main
 contents, but also a main container to center the whole page. Also,
 non-english speaking coders may use their own language words as id or class
 names.


Agreed.

Looking at existing uses of div class=main to analyse whether we need a
main element really doesn't make sense to me. I firmly believe that
class=main is mostly used for CSS purposes and not for semantic (and thus
accessibility) purposes.

Instead, we should be looking at pages that use xxx role=main or more
traditionally in older Web pages use a skip to main link as the use cases
for a main element. Sometimes that may co-incide with div class=main,
but not in general.

Therefore, I don't actually think that the introduction in Steve's
document is making a good case for the existence of the element with this
sentence:
"The main element formalises the common practice
(https://dvcs.w3.org/hg/html-extensions/raw-file/tip/maincontent/index.html#common)
of identification of the main content section of a document using
id values such as 'content' and 'main'."

I'd suggest explaining that there is currently no explicit means of
identifying with 100% accuracy what part of a Web page is the single most
important part. Instead we have a solution only for accessibility purposes
with the @role=main ARIA attribute, or more traditionally by providing a
skip to main link on the top of the page. If there was a main element
that semantically identified the important part of a Web page, that would
improve accessibility, but also enable for example search engines to give
that part of a Web page a higher importance.

On that latter part: I am always annoyed when a search engine gives me
links to a particular topic that I was searching for which is only
mentioned in a side bar as some related information. It would be possible
to exclude such content if there was a main element. The argument that
article and aside etc. will do away with such problems relies on
authors actually making use of these elements. I have yet to see that happen
- in fact I have seen people that started using these elements move away from
them again, since they don't seem to have any obvious advantage. main on
the other hand has a very real advantage - immediately for accessibility -
and it's easier to put a single main element on a page than to introduce a
whole swag of new elements. It's the simplicity of that single element that
will make it immediately usable by everyone, will reduce the probability of
authoring error, and thus make it reliable for search engines and other
semantic uses.

Regards,
Silvia.


Re: [whatwg] Sortable Tables

2012-11-06 Thread Silvia Pfeiffer
On Wed, Nov 7, 2012 at 6:55 AM, Boris Zbarsky bzbar...@mit.edu wrote:

 On 11/6/12 11:39 AM, Ojan Vafai wrote:

 This is a use-case that I absolutely think it makes sense to address.


 Agreed.  Not that I can commit to implementing, necessarily, but I do
 think this is a common want.


Great to hear browser interest! It's something I've had to implement for
basically all Web apps I've been involved with developing, so am really
keen to get browsers to take this over.

Not quite new, but a good requirements analysis can be had from the list of
JS solutions here:
http://tympanus.net/codrops/2009/10/03/33-javascript-solutions-for-sorting-tables/

In our apps, we'd typically not associate sortable with the table, but with a
column header.
Typical classes/attributes we'd add to a table header cell:
* sortable class: boolean
* data-direction: ascending/descending
* data-type: date, number, text etc which determines the comparison
function used in sort
* data-sort-prio: numeric indicating sorting priority

Also, a sortable table's header needed some indication of the sortability,
so some default CSS like this:
th.sortable:after { content: " ▲▼"; }
th.sortable.current[data-direction=ascending]:after { content: " ▼"; }
th.sortable.current[data-direction=descending]:after { content: " ▲"; }
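
A minimal sketch - my own, not code from the apps mentioned - of sorting
rows by a clicked header using the attributes listed above
(data-sort-prio and multi-column sorting are left out for brevity):

  // sort tbody rows by the clicked th, honouring data-type and direction
  function sortByColumn(table, th) {
    const col = [...th.parentNode.children].indexOf(th);
    const dir = th.dataset.direction === 'descending' ? -1 : 1;
    const compare = {
      number: (a, b) => parseFloat(a) - parseFloat(b),
      date:   (a, b) => new Date(a) - new Date(b),
      text:   (a, b) => a.localeCompare(b),
    }[th.dataset.type || 'text'];
    const tbody = table.tBodies[0];
    [...tbody.rows]
      .sort((r1, r2) => dir * compare(r1.cells[col].textContent,
                                      r2.cells[col].textContent))
      .forEach(r => tbody.appendChild(r));
  }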

HTH...

Cheers,
Silvia.


Re: [whatwg] video feedback

2012-10-02 Thread Silvia Pfeiffer
On Wed, Oct 3, 2012 at 6:41 AM, Jer Noble jer.no...@apple.com wrote:
 On Sep 17, 2012, at 12:43 PM, Ian Hickson i...@hixie.ch wrote:

 On Mon, 9 Jul 2012, adam k wrote:

 i have a 25fps video, h264, with a burned in timecode.  it seems to be
 off by 1 frame when i compare the burned in timecode to the calculated
 timecode.  i'm using rob coenen's test app at
 http://www.massive-interactive.nl/html5_video/smpte_test_universal.html
 to load my own video.

 what's the process here to report issues?  please let me know whatever
 formal or informal steps are required and i'll gladly follow them.

 Depends on the browser. Which browser?


 i'm aware that crooked framerates (i.e. the notorious 29.97) were not
 supported when frame accuracy was implemented.  in my tests, 29.97DF
 timecodes were incorrect by 1 to 3 frames at any given point.

 will there ever be support for crooked framerate accuracy?  i would be
 more than happy to contribute whatever i can to help test it and make it
 possible.  can someone comment on this?

 This is a Quality of Implementation issue, basically. I believe there's
 nothing inherently in the API that would make accuracy to such timecodes
 impossible.

 TLDR; for precise navigation, you need to use a rational time class, rather
 than a float value.

 The nature of floating point math makes precise frame navigation difficult, 
 if not impossible.  Rob's test is especially hairy, given that each frame has 
 a timing bound of [startTime, endTime), and his test attempts to navigate 
 directly to the startTime of a given frame, a value which gives approximately 
 zero room for error.

 I'm most familiar with MPEG containers, but I believe the following is also 
 true of the WebM container: times are represented by a rational number, 
 timeValue / timeScale, where both numerator and denominator are unsigned 
 integers.


FYI: the Ogg container also uses rational numbers to represent time.


  To seek to a particular media time, we must convert a floating-point time
 value into this rational time format (e.g. when calculating the 4th frame's
 start time, from 3 * 1/29.97 to 3 * 1001/30000).  If there is a
 floating-point error in the wrong direction (e.g., as above, a numerator of
 3002 vs 3003), the end result will not be the frame's startTime, but one
 timeScale before it.

 We've fixed some frame accuracy bugs in WebKit (and Chromium) by carefully 
 rounding the incoming floating point time value, taking into account the 
 media's time scale, and rounding to the nearest 1/timeScale value.  This 
 fixes Rob's precision test, but at the expense of precision. (I.e. in a 30 
 fps movie, currentTime = 0.99 / 30 will navigate to the second frame, 
 not the first, due to rounding, which is technically incorrect.)
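
A sketch of that rounding step - my own illustration, assuming the
media's time scale is known to the seeking code:

  // snap an incoming float time to the nearest 1/timeScale tick
  function snapToTimeScale(seconds, timeScale) {
    return Math.round(seconds * timeScale) / timeScale;
  }
  // e.g. for 29.97 fps material with a time scale of 30000:
  // snapToTimeScale(3 * 1001 / 30000, 30000) === 3003 / 30000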

 This is a common problem, and Apple media frameworks (for example) therefore 
 provide rational time classes which provide enough accuracy for precise 
 navigation (e.g. QTTime, CMTime). Using a floating point number to represent 
 time with any precision is not generally accepted as good practice when these 
 rational time classes are available.

 -Jer


Re: [whatwg] New URL Standard

2012-09-25 Thread Silvia Pfeiffer
On Tue, Sep 25, 2012 at 9:48 PM, Robin Berjon ro...@w3.org wrote:
 On 25/09/2012 01:07 , Glenn Maynard wrote:

 On Mon, Sep 24, 2012 at 12:30 PM, Tab Atkins Jr.
 jackalm...@gmail.com wrote:

 I suggest just making it a map from String to [String].  You probably
 want a little bit of magic - if the setter receives an array, replace
 the current value with it; anything else, stringify then wrap in an
 array and replace the current value.  The getter should return an
 empty array for non-existing params.  You should be able to set .query
 itself with an object, which empties out the map and then runs the
 setter over all the items.  Bam, every single method is now obsolete.


 When should this API guarantee that it round-trips URLs cleanly (aside
 from
 quoting differences)?  For example, maintaining order in a=1&b=2&a=1,
 and
 representing things like a=1&b (no '=') and a&b (no key at all).


 And round-tripping using ; as the separator instead of &. I mention this
 because I've seen actual production code (more than once) that relied on
 this. I have no idea how common it is though. I'm guessing not too much, but
 probably some since it was in HTML 4.01:

 http://www.w3.org/TR/html401/appendix/notes.html#h-B.2.2

 Of course another option is to just not parse that into key-value pairs in
 the first place.

I have also seen key-value pairs separated both by & and by ;, but
not in real life in quite some time. See also the discussion here:
[1]. For media fragment URIs we chose to only recommend use of & [2]
(see section 5.1: & is the only primary separator for name-value
pairs, but some server-side languages also treat ; as a separator).
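
As an illustration only - a liberal parser accepting both separators,
the legacy behaviour discussed above:

  // split on '&' or ';', keeping order and duplicate keys
  function parsePairs(query) {
    return query.split(/[&;]/).filter(Boolean).map(pair => {
      const i = pair.indexOf('=');
      return i === -1 ? [pair, null] : [pair.slice(0, i), pair.slice(i + 1)];
    });
  }
  // parsePairs('a=1;b=2&a=1') -> [['a','1'], ['b','2'], ['a','1']]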

Cheers,
Silvia.

[1] https://discussion.dreamhost.com/thread-134179.html
[2] http://www.w3.org/TR/media-frags/


[whatwg] exposing metadata on replaced elements (was: video feedback)

2012-09-17 Thread Silvia Pfeiffer
Hi Ralph, all,

 On Mon, 11 Jun 2012, Ralph Giles wrote:

 Recently, we've been considering adding a 'tags' or 'metadata' attribute
 to HTML media elements in Firefox, to allow webcontent access to
 metadata from the playing media resource. In particular we're interested
 in tag data like creator, title, date, and so on.

 My recollection is that this has been discussed a number of times in the
 past, but there was never suffient motivation to support the interface.
 Our particular motivation here is webapps that present a media file
 library. While it's certainly possible to parse the tag data out
 directly with javascript, it's more convenient if the HTML media element
 does so, and the underlying platform decoder libraries usually provide
 this data already.

 My recommendation would be to develop a specification for this (or use the
 one(s) already available for this purpose), and in that specification
 define how it is added to HTMLMediaElement, much as you suggest:

 partial interface HTMLMediaElement {
   [...]
 };

 (I don't have the bandwidth to define how to extract this kind of thing
 from each video format. Even trying to define what little the spec already
 says has required hours of reading obscure specifications and that's
 without even testing to see if those specs match reality.)

I think we have a more generic problem than just for media elements.

None of our elements that pull in external resources (including img,
object/embed, video, audio, track) expose metadata to the Web page and
the Web developer is required to implement a XHR to get this kind of
information.

While we have somewhat of a proposal for exposing metadata to media
elements through the W3C media annotations WG [1] with the
getMediaProperty() function, we have no such thing for any of the
other resource types. Also, it is not clear that this function is
appropriate.

I suggest we need to develop a generic solution for this problem.

Cheers,
Silvia.

[1] http://www.w3.org/TR/2011/WD-mediaont-api-1.0-2022/#async-api


Re: [whatwg] Problem in the Section 4 Elements of HTML = 4.4 Sections = 4.4.2 The Section element

2012-09-13 Thread Silvia Pfeiffer
On Fri, Sep 14, 2012 at 4:15 AM, Tab Atkins Jr. jackalm...@gmail.com wrote:
 On Thu, Sep 13, 2012 at 11:09 AM, Jukka K. Korpela jkorp...@cs.tut.fi wrote:
 And I suppose by weird formation you mean the example that starts with

 <!DOCTYPE Html>
 <Html
  ><Head
    ><Title
      >Graduation Ceremony Summer 2022</Title
  ></Head
    ><Body
      ><H1


 can anybody tell me if this is known/on purpose


 My guess is that it's an accidental result of some software used to maintain
 the document. It's not incorrect, just odd, because it deviates from the
 coding style used otherwise, both in the use of spaces between tag close (>)
 and in casing (capitalized tag names).

 No, it's intentional.  Hixie purposely varied his style across
 examples, to show that certain variances in the syntax were allowed
 and perfectly fine.

Might be worth a note in this instance to stop people from wondering?

Silvia.


Re: [whatwg] Missing alt attribute name bikeshedding (was Re: alt= and the meta name=generator exception)

2012-08-05 Thread Silvia Pfeiffer
On Mon, Aug 6, 2012 at 10:31 AM, Maciej Stachowiak m...@apple.com wrote:

 On Aug 1, 2012, at 12:56 AM, Ian Hickson i...@hixie.ch wrote:


 We briefly brainstormed some ideas on #whatwg earlier tonight, and one
 name in particular that I think could work is the absurdly long

   <img src=... generator-unable-to-provide-required-alt="">

 This has several key characteristics that I think are good:

 - it's long, so people aren't going to want to type it out
 - it's long, so it will stick out in copy-and-paste scenarios
 - it's emminently searchable (long unique term) and so will likely lead
   to good documentation if it's adopted
 - the generator part implies that it's for use by generators, and may
   discourage authors from using it
 - the unable and required parts make it obvious that using this
   attribute is an act of last resort

 Here's a review of other proposed names and a few new ideas:

 noalt
 Pro: brief
 Con: not very explanatory, so perhaps more likely to be misused

 relaxed [suggested by Ted]
 Pro: correctly conveys relaxed validation
 Con: not clear what is relaxed or why

 incomplete [suggested by Laura]
 Pro: correctly conveys that a non-decorative content image is incomplete 
 without a textual equivalent
 Con: not clear what is incomplete or why

 unknown
 Pro: correctly conveys the reason for omitting alt, i.e. that the name is 
 unknown to the generator
 Con: might not be clear that it is not for human authors

 unknown-to-generator
 Pro: correctly conveys intended generator use
 Con: not totally clear what it is that is unknown

 I don't have a strong opinion, but I think 
 generator-unable-to-provide-required-alt might be long to the point of 
 silliness.

I'd think it should at least mention alt. Shorter would e.g. be
unable-to-provide-alt.

Names are difficult to get right. ;-)

Cheers,
Silvia.


Re: [whatwg] Comments about the track element

2012-07-26 Thread Silvia Pfeiffer
Hi Cyril,

On Thu, Jul 26, 2012 at 10:03 PM, Cyril Concolato
cyril.concol...@telecom-paristech.fr wrote:
 What do you mean here by positioning issues? SVG handles the positioning
 within its viewbox and what I propose is to define the size and position of
 this viewbox in the parent coordinate system, i.e. with respect to the
 video. I don't see what else is needed? or do you mean when SVG is
 transported in cue, how do you use the cue settings?

There is the SVG viewbox and there is the video viewbox. It is not
immediately clear how they relate to each other. What I meant was: how
to position the SVG viewbox within the boundaries of the video
viewbox. It could fully cover it, but it may not need to. For example
in your example with the clock, it could be positioned by coordinates
of the video, e.g. left: 70%, top:30% or something like it. Then the
SVG can be much smaller and it is possible to overlay other elements,
too.

 Do you mean that you would like to have some signaling in the WebVTT file
 (for instance in the header) to indicate the type of the cue payload? I
 think that'll be interesting.

Yes, we have a proposal for a metadata field in the WebVTT header to
signify the kind.
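
Such a header field might look like this - my sketch; the field name and
syntax are assumptions, not agreed markup:

  WEBVTT
  Kind: metadata

  00:00:00.000 --> 00:03:00.000
  <svg xmlns="http://www.w3.org/2000/svg" width="100" height="100">
    <circle cx="50" cy="50" r="40" fill="blue"/>
  </svg>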

 Otherwise, it'll be interesting to have a type
 selector in the validator.

That can work, too, of course.


 TTML in WebVTT probably doesn't make sense. But SVG's timing model can
  be applied within the timeframe of a cue, so that does make sense.

 Maybe, yes. It might make sense if your cue has a long duration, otherwise
 the overhead of loading an SVG document for each cue might be too big. But
 in general, since you can structure an SVG document with a frame-based
 structure (see this cartoon for instance:
 http://perso.telecom-paristech.fr/~concolat/SVG/flash10.svg), I don't see
 the added value of WebVTT to carry SVG.

Indeed, for this kind of use case, putting SVG in WebVTT makes no sense.

You could, however, put SVG in WebVTT e.g. to provide overlay graphics
that are non-moving or are in a loop for a certain duration of the
video. E.g. an animated character (like your Rhino) could be rendered
in a loop on top of a video for the first 3 minutes of the video.

 How would you specify this with TTML? It would run into the same
 problems, wouldn't it?

 I think so, the problems would be similar. But again, TTML can also express
 frame-based animations, why should you add the WebVTT layer?

I don't want to take this discussion off track, but it is news to me
that TTML can express frame-based animations.
I indeed wouldn't mingle WebVTT and TTML layering since they satisfy
the same use cases.


 What would your preferred markup for
 http://perso.telecom-paristech.fr/~concolat/html5_tests/svg.vtt be ?
 How would you avoid the duplication?

 For instance, you would want to be able to construct the SVG document
 progressively, to have only one document that you modify by adding more
 data. One way to do it would be to have the first cue contain the beginning
 of the document and the following cues contain more data, but since
 modifying the document after its load is tricky, this would require
 concatenating all previous cue texts and then parsing that as a new document
 (ugly!). I'd like to have the parsing step done under the hood by the
 browser, as it usually do.

How does the browser support constructing SVG progressively right now?
If there is an SVG-internal solution, that should be used. In this
case, @mediagroup synchronization would again make the most sense. Or
you just do everything in SVG.


 If you try my example here
 (http://perso.telecom-paristech.fr/~concolat/html5_tests/getcueasSVG.html),
 you'll see that changing the playback speed (even to 0.1) does not guarantee
 synchronization either. By the time the JS has processed the content, it's
 already too late. It might be an implementation issue but it's symptomatic
 of the stacking, that's why I think we should leverage the native parsing,
 synchronization and support for SVG rendering (not through JS). The clock
 might be a (not so) extreme case, but I don't think I'm trying to do very
 fancy things here, just trying to reproduce existing technologies
 (proprietary or not) with existing web standards.

Sure.

 I'm not sure. Having to repeatedly parse WebVTT cues and draw the SVG
 image makes this particularly slow. Have you tried to paint the SVG
 just once on the video and using TextTrackCues just to change the
 transform value using JavaScript? Upon a cuechange event, you re-draw
 the SVG.

 I could give it a try if I have some time but I'm not really sure I
 understand what you're suggesting. Do you mean using addCue? Could you give
 an example? Are you suggesting something similar to the example in the spec
 with

 var sounds = sfx.addTextTrack('metadata');

No, not really. What I meant was to draw the blue handle on top of the
video not through cues, but directly in the browser. Then, the WebVTT
file only delivers the corresponding position changes for 

Re: [whatwg] Comments about the track element

2012-07-25 Thread Silvia Pfeiffer
On Wed, Jul 25, 2012 at 11:45 PM, Cyril Concolato
cyril.concol...@telecom-paristech.fr wrote:
 Right now it is fully defined how data in a TextTrack (of the defined
 kinds) is displayed on top of the video. As this is as yet unclear for
 SVG resources,

 I wouldn't say it's unclear, I'd say it needs to be specified ;) meaning
 that it probably doesn't require much specification. I was thinking that we
 could use the CSS box of the video element to position the SVG, as if the
 SVG was put in a div.

Let's work on this basis and see where we get. There's also
positioning issues etc. so it's not as simple as just putting the SVG
in a cue.


 I would suggest using the @metadata track kind for now
 and providing the SVG as markup in a TextTrackCue (either from WebVTT
 cues

 I've tried this option but I'm facing several problems (Tested with Chrome
 Version 22.0.1216.0 canary).

 The first problem is how to embed SVG in a cue? Should the '<', '>' and
 other characters be escaped or not? According to Anne's validator,

So, I assume you created WebVTT files. (You don't have to - you can
directly use the TextTrack API.)

Anne's validator validates the WebVTT rules for caption and subtitle
kinds. For metadata kinds, there should be no parsing of the cues in
browsers. A validator can only decide whether to parse the cues
according to captions/subtitles, or chapters, or metadata
rules if the WebVTT file has such an indicator. I've asked for such
information to be included in WebVTT, but we don't currently have such
markup/metadata.

 they
 should be.

Actually, for @kind=metadata you don't need to escape '<' or '>'.

 But if I use them, then the parsing of the escaped string returns
 'empty document'
 (http://perso.telecom-paristech.fr/~concolat/html5_tests/getcueasSVG-escaped.html).

Which parsing? Anne's validator? Have you tried Chrome directly?
http://perso.telecom-paristech.fr/~concolat/html5_tests/svg-escaped.vtt
does look very ugly.

 However, if I don't escape them, the parsing doesn't fail and returns an SVG
 document
 (http://perso.telecom-paristech.fr/~concolat/html5_tests/getcueasSVG.html).

cue.text is the SVG code? That's what we want, right?
(http://perso.telecom-paristech.fr/~concolat/html5_tests/svg.vtt looks
much nicer)

 In any case, I think embedding the SVG in WEBVTT does not really make sense.

Why not?

 An other problem is in terms of design. SVG has a timing model (similar to
 TTML), WebVTT another. For instance, SVG can express things like repetitions
 of animations that WebVTT cannot. Are you saying that TTML should be carried
 in a WebVTT file?

TTML in WebVTT probably doesn't make sense. But SVG's timing model can
be applied within the timeframe of a cue, so that does make sense.

How would you specify this with TTML? It would run into the same
problems, wouldn't it?

 Similarly, in terms of design, embedding SVG in cues requires repeating a
 lot of SVG content at each cue (see
 http://perso.telecom-paristech.fr/~concolat/html5_tests/svg.vtt), as this
 approach requires parsing an entire document at each cue. You could probably
 envisage overlapping cues but that would require a lot of overhead.
 Leveraging the progressive loading of SVG cannot be done this way either.
 In general, I think it would make sense to leverage the browsers' support
 for SVG and not stack different technologies.

Sure, it should use existing SVG support. I'm not so sure I agree with
not stacking - that depends.
What would your preferred markup for
http://perso.telecom-paristech.fr/~concolat/html5_tests/svg.vtt be ?
How would you avoid the duplication?

 Another problem is that I don't know if it's possible to display the SVG
 content in a layer between the video and the UI controls. Currently, I
 display the SVG on top of the video element, therefore the UI controls are
 not accessible for clicks. Having to embed my own UI controls for that is a
 bit of a pain. And, semantically, when reading the spec, 'metadata' tracks
 say "Not displayed by the user agent." so I think this might be a bit
 confusing for users/authors.

All publishers that want the same controls in all browsers make their
own controls anyway. If you make a library for SVG display on top of a
video, you can also make one for the controls (or use one of the many
existing ones).

 The third problem is performance-wise. In my example, the blue line (in
 SVG), when synchronized with the video, should be aligned with the moving
 (white-gray) edge of the pie. As you can see, this is not the case. Only 4-5
 cuechange events seems to be processed properly. I noticed the same problem
 with 'timeupdate' events. Also, I've noticed that even though my WebVTT file
 is designed to have only one active cue at a time, for some cuechange
 events, there are 2. This might be an implementation issue but this might be
 a problem of reentrant code (the cuechange callback being called while it's
 not finished), but in general, I'm not sure it's a good idea to go through
 

Re: [whatwg] Comments about the track element

2012-07-25 Thread Silvia Pfeiffer
On Wed, Jul 25, 2012 at 11:51 PM, Cyril Concolato
cyril.concol...@telecom-paristech.fr wrote:
 Hi Silvia,

 On 7/25/2012 3:42 PM, Silvia Pfeiffer wrote:

 On Wed, Jul 25, 2012 at 10:55 PM, Henri Sivonenhsivo...@iki.fi  wrote:

 On Wed, Jul 25, 2012 at 11:24 AM, Silvia Pfeiffer
 silviapfeiff...@gmail.com  wrote:

 But you can use cue.text and parse it as a SVG fragment.

 That would be RSS all over again. :-(

 To some extent. If we are very clear about what will be in the cues
 and that it will always be just SVG, we could just create a
 @kind=svg.

 The SVG WG resolved to write a document describing how to store SVG content
 into streaming packets (or cues), whether full documents or document
 fragments, for progressive loading of content. I'll be the editor of this
 document. You can already have an idea of what would be in this document
 looking at http://www.w3.org/Graphics/SVG/WG/wiki/SVGStreaming. I plan to
 talk about cue content and associated signaling timing, random access point
 ...

I think the @mediagroup approach is indeed better, in particular for
SVG content that should synchronize frame-by-frame.

Silvia.


Re: [whatwg] Comments about the track element

2012-07-24 Thread Silvia Pfeiffer
Expanding a bit on what Anne said...

On Tue, Jul 24, 2012 at 11:18 PM, Cyril Concolato
cyril.concol...@telecom-paristech.fr wrote:
 Dear WhatWG,

 During the ongoing SVG F2F meeting, the SVG WG discussed the use case of
 displaying SVG graphics on top of a video, in a synchronous manner.

 The SVG WG believes that for such use case, it is necessary to indicate to
 the browser that the SVG and video content should stay synchronized (no
 matter what happens to the video playback), and to let the browser handle
 the synchronization internally. The SVG WG resolved to include such
 indication as part of the Web Animation specification, for instance using
 the HTML mediagroup attribute or the MediaController API.

 However, the SVG WG thinks it would also be interesting to leverage the
 native UI controls of the video element to select (or deactivate) the
 display of the SVG content on top of the video, in a similar manner to a
 subtitle track. Obviously, the HTML 5 track element would be a suitable
 option for that. However, currently it only allows text tracks. So, the SVG
 WG would like HTML to allow the track element's URL to identify an SVG
 resource, and in that case the track kind would be 'graphics'. There would
 be a need to define how the graphics are displayed on top of the video,

Right now it is fully defined how data in a TextTrack (of the defined
kinds) is displayed on top of the video. As this is as yet unclear for
SVG resources, I would suggest using the @metadata track kind for now
and providing the SVG as markup in a TextTrackCue (either from WebVTT
cues or from JavaScript calls to addTextTrack()). The
timing/synchronization is provided by the TextTrackCue, which seems to
be all you are asking for right now. The rendering is then done by
JavaScript, which seems the most flexible approach. You can even use
getCueAsHTML() to simply hand the SVG to HTML for rendering.

@metadata tracks can be part of the native video controls and the
@label on the text track would provide a description of what it is,
e.g. "Graphic overlays".
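
A rough sketch of what I mean (untested; assumes the SVG fragment sits
directly in the cue text and an overlay div is positioned over the video):

  <video src="video.webm" controls>
    <track kind="metadata" label="Graphic overlays" src="overlays.vtt">
  </video>
  <div id="overlay"></div>

  var track = document.querySelector('track').track;
  track.mode = 'hidden';  // enable the track without native rendering
                          // (or the equivalent numeric constant, depending
                          // on the spec draft the browser implements)
  var overlay = document.getElementById('overlay');
  track.addEventListener('cuechange', function () {
    var cue = this.activeCues[0];
    // hand the SVG fragment to the HTML parser for rendering
    if (cue) overlay.innerHTML = cue.text;
  });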

 for
 instance reusing the viewport/viewbox negotiation phase. There would also be
 a need to make a more generic Track API or to replace the TextTrack API by
 the SVG API when the track is of kind 'graphics'.

I don't understand this requirement. What API needs are there aside
from the synchronization? Trying to replicate SVG APIs through the
TextTrack API seems like a repetition of the API and thus fragile.


Regards,
Silvia.


Re: [whatwg] Why does CanvasRenderingContext2D.drawImage not draw a video's poster?

2012-07-21 Thread Silvia Pfeiffer
On Thu, Jul 19, 2012 at 2:46 AM, Charles Pritchard ch...@jumis.com wrote:
 On 7/17/2012 11:06 PM, Silvia Pfeiffer wrote:

 On Wed, Jul 18, 2012 at 6:57 AM, Charles Pritchard ch...@jumis.com
 wrote:

 On Jul 17, 2012, at 9:04 PM, Mark Callow callow_m...@hicorp.co.jp
 wrote:

 On 18/07/2012 00:17, Silvia Pfeiffer wrote:

 I think this is simply an idea that hasn't been raised before. I like
 it. Though even then sometimes there may be nothing when there is no
 explicit poster and preload is set to none.

 The language gives me the impression that drawing nothing was a
 deliberate choice, in particular because later on it says:


 We don't have events based on poster, so we don't know whether or not
 it's been loaded. Poster is meant for the video implementation. We use other
 events to know if video is playing.

 So as a coder, I can just do an attribute check to see if poster exists,
 then load it into an image tag. It's a normal part of working with Canvas.
 We always follow onload events.

 IIUC, that still excludes the case where there is no @poster attribute
 set on video, @preload is set to none, and the browser loads the
 first frame as the poster. It would make sense in this case to hand
 that poster to the canvas. And it would make it easier if the
 explicitly set @poster attribute were used, too, so you don't have
 to do that by hand.


 We need more data if we're going to try that, and I'm still rather timid on
 the idea though it would be nice if img would load the first frame of
 video (and .gif for that matter).

 We really don't know what the browser is going to show if it's not showing a
 poster. It could show an arbitrary frame, it could show some kind of frame
 with a blur or opacity change, it could add on various controls.

This is all theoretical. In practice, all browsers show a black frame,
since no other frame is actually available.


 I'm not opposed to the idea, but I'm failing to see the benefit.

The advantage clearly is that if you have a canvas that is copying
data out of the video, it includes the poster without having to write
custom code for it. The poster is an integral part of the video (it's
not distinguishable by the user whether it is a separate picture or a
frame from the video), so I don't see why it should need custom
handling.

 Still, if
 there's going to be one, we're going to need an onposterloaded event.

Why? The loadedmetadata event provides a sufficiently stable situation:
either the poster image or a video frame is then loaded (if @preload is
not none) or it's black (if @poster is not set and @preload is none).
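
i.e. a sketch like the following would then be reliable (note: per the
current spec text drawImage still draws nothing at HAVE_METADATA - that
is exactly what this proposal would change):

  var canvas = document.querySelector('canvas');
  var video = document.querySelector('video');
  video.addEventListener('loadedmetadata', function () {
    // would paint the poster, the first frame, or black - never nothing
    canvas.getContext('2d').drawImage(video, 0, 0,
                                      canvas.width, canvas.height);
  });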

I don't follow the objections.

Cheers,
Silvia.


Re: [whatwg] Why does CanvasRenderingContext2D.drawImage not draw a video's poster?

2012-07-18 Thread Silvia Pfeiffer
On Wed, Jul 18, 2012 at 6:57 AM, Charles Pritchard ch...@jumis.com wrote:
 On Jul 17, 2012, at 9:04 PM, Mark Callow callow_m...@hicorp.co.jp wrote:

 On 18/07/2012 00:17, Silvia Pfeiffer wrote:
 I think this is simply an idea that hasn't been raised before. I like
 it. Though even then sometimes there may be nothing when there is no
 explicit poster and preload is set to none.
 The language gives me the impression that drawing nothing was a
 deliberate choice, in particular because later on it says:


 We don't have events based on poster, so we don't know whether or not it's 
 been loaded. Poster is meant for the video implementation. We use other 
 events to know if video is playing.

 So as a coder, I can just do an attribute check to see if poster exists, then 
 load it into an image tag. It's a normal part of working with Canvas. We 
 always follow onload events.

IIUC, that still excludes the case where there is no @poster attribute
set on video, @preload is set to none, and the browser loads the
first frame as the poster. It would make sense in this case to hand
that poster to the canvas. And it would make it easier if the
explicitly set @poster attribute were used, too, so you don't have
to do that by hand.

Silvia.


Re: [whatwg] Why does CanvasRenderingContext2D.drawImage not draw a video's poster?

2012-07-17 Thread Silvia Pfeiffer
I think this is simply an idea that hasn't been raised before. I like
it. Though even then sometimes there may be nothing when there is no
explicit poster and preload is set to none.

Regards,
Silvia.

On Tue, Jul 17, 2012 at 9:58 AM, Mark Callow callow_m...@hicorp.co.jp wrote:
 The spec. for CanvasRenderingContext2D.drawImage says draw nothing when
 a video element's readyState is HAVE_NOTHING or HAVE_METADATA. I was
 wondering why this was chosen vs. drawing the poster. A search in the
 list archive didn't turn up any discussion or explanation.

 Regards

 -Mark

 --
 NOTE: This electronic mail message may contain confidential and
 privileged information from HI Corporation. If you are not the intended
 recipient, any disclosure, photocopying, distribution or use of the
 contents of the received information is prohibited. If you have received
 this e-mail in error, please notify the sender immediately and
 permanently delete this message and all related copies.



Re: [whatwg] frame accuracy breaking case for 25fps / status of 29.97fps

2012-07-12 Thread Silvia Pfeiffer
On Mon, Jul 9, 2012 at 8:17 PM, Odin Hørthe Omdal odi...@opera.com wrote:
 On Mon, 09 Jul 2012 18:46:20 +0200, adam k li...@inconduit.com wrote:

 I have a 25fps video, h264, with a burned-in timecode. It seems to be off
 by 1 frame when I compare the burned-in timecode to the calculated timecode.
 I'm using Rob Coenen's test app at
 http://www.massive-interactive.nl/html5_video/smpte_test_universal.html to
 load my own video.

 What's the process here to report issues? Please let me know whatever
 formal or informal steps are required and I'll gladly follow them.


 Well, it works beautifully on that website you reference. What do you think
 is actually wrong? I'm not so sure the spec is the first and best place to
 go to find the error(?)

Indeed: which browser are you actually using to see the 1 frame offset?

Cheers,
Silvia.


Re: [whatwg] Proposal for HTML5: Motion sensing input device (Kinect, SoftKinetic, Asus Xtion)

2012-06-27 Thread Silvia Pfeiffer
On Wed, Jun 27, 2012 at 1:56 PM, Robert O'Callahan rob...@ocallahan.org wrote:
 On Tue, Jun 26, 2012 at 8:22 AM, Tab Atkins Jr. jackalm...@gmail.com wrote:

 The ability to capture sound and video from the user's devices and
 manipulate it in the page is already being exposed by the getUserMedia
 function.  Theoretically, a Kinect can provide this information.

 More advanced functionality like Kinect's depth information probably
 needs more study and experience before we start thinking about adding
 it to the language itself.


 If we were going to support anything like this, I think the best approach
 would be to have a new track type that getUserMedia can return in a
 MediaStream, containing depth buffer data.

I agree.

Experimentation with this in a non-live manner is already possible by
using a @kind=metadata track and putting the Kinect's depth
information into a WebVTT file to use in parallel with the video.

WebM has further defined how to encapsulate WebVTT into a WebM text
track [1], so you could even put this information into a video file.
I believe the same is possible with MPEG [2].

The exact format for how the Kinect's depth information is delivered
as a timed metadata track would need to be specified before it could
turn into its own @kind track type and be delivered live.
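
For example (entirely illustrative - no such payload format is
specified anywhere), a metadata cue could carry the depth samples as
JSON:

  WEBVTT

  00:00.000 --> 00:00.040
  {"sensor":"kinect","depth":[1200,1187,1190]}

and a script could read them via:

  track.addEventListener('cuechange', function () {
    var cue = this.activeCues[0];
    if (cue) {
      var depth = JSON.parse(cue.text).depth;
      // hand the depth samples to a renderer here
    }
  });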


Cheers,
Silvia.
[1] http://wiki.webmproject.org/webm-metadata/temporal-metadata/webvtt-in-webm
[2] http://html5.cablelabs.com/tracks/media-container-mapping.html


Re: [whatwg] make video always focusable and interactive content

2012-06-20 Thread Silvia Pfeiffer
On Thu, Jun 21, 2012 at 9:09 AM, Chris Double chris.dou...@double.co.nz wrote:
 On Wed, Jun 20, 2012 at 5:47 PM, Silvia Pfeiffer
 silviapfeiff...@gmail.com wrote:
 They are in Opera. The spec allows it.

 Yes, thankfully one browser has video keyboard interaction.


 I just tested Firefox and the keys work. See Media Shortcuts here:

 https://support.mozilla.org/en-US/kb/keyboard-shortcuts-perform-firefox-tasks-quickly

 I could tab to the video, a focus ring appears around it, and use the
 keys in that list. Did that not work when you tried it?

You're right - it works when you use Firefox normally. It was when I
used it with VoiceOver on the Mac. But that's indeed a screenreader
issue. Sorry for the confusion.

Cheers,
Silvia.


[whatwg] make video always focusable and interactive content

2012-06-19 Thread Silvia Pfeiffer
Hi all,

I recently experimented with keyboard accessibility of media elements.

I found that browsers don't provide a default tabfocus on media
elements nor do they provide keyboard interactivity. I had to put
explicit @tabindex attributes onto the media elements to allow them to
at least receive focus. This is particularly irritating in a
screenreader.

As the video is specified right now, it is not a tabfocusable element
[1] and only interactive [2] when it has controls. This is sufficient
for audio elements, which have no visual representation without
controls, but isn't right for video, which always renders at least a
poster (or a black area). Also, if there are controls specified, they
should actually be tabfocusable.

Even video without controls should allow keyboard focus and should
provide for default keyboard interaction: at minimum it should allow
for ENTER and/or SPACE to toggle play/pause - and clicking on it
should work, too. Potentially it should have up/down arrows to change
the volume and left/right arrows to seek back/forward by e.g. 10sec.
As it's currently specified, browsers cannot provide such interaction
when there are no controls, since the element is not generally
specified as an interactive element [2].
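
Today authors have to emulate this by hand, roughly like this sketch:

  var video = document.querySelector('video');
  video.setAttribute('tabindex', '0');  // make it focusable at all
  video.addEventListener('keydown', function (e) {
    if (e.keyCode === 13 || e.keyCode === 32) {  // ENTER or SPACE
      video.paused ? video.play() : video.pause();
      e.preventDefault();
    }
  });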

[1] 
http://www.whatwg.org/specs/web-apps/current-work/multipage/editing.html#focusable
[2] 
http://www.whatwg.org/specs/web-apps/current-work/multipage/elements.html#interactive-content-0

There is also a bug in the W3C bug tracker for this:
https://www.w3.org/Bugs/Public/show_bug.cgi?id=17463

Cheers,
Silvia.


Re: [whatwg] make video always focusable and interactive content

2012-06-19 Thread Silvia Pfeiffer
On Wed, Jun 20, 2012 at 2:51 PM, Simon Pieters sim...@opera.com wrote:
 On Wed, 20 Jun 2012 05:43:20 +0200, Silvia Pfeiffer
 silviapfeiff...@gmail.com wrote:

 Hi all,

 I recently experimented with keyboard accessibility of media elements.

 I found that browsers don't provide a default tabfocus on media
 elements nor do they provide keyboard interactivity. I had to put
 explicit @tabindex attributes onto the media elements to allow them to
 at least receive focus. This is particularly irritating in a
 screenreader.

 As the video is specified right now, it is not a tabfocusable element
 [1] and only interactive [2] when it has controls. This is sufficient
 for audio elements, which have no visual representation without
 controls, but isn't right for video, which always renders at least a
 poster (or a black area). Also, if there are controls specified, they
 should actually be tabfocusable.


 They are in Opera. The spec allows it.

Yes, thankfully one browser has video keyboard interaction.

Video is not listed in the tabfocusable list, though. How does the
spec allow/encourage that?


 Even video without controls should allow keyboard focus and should
 provide for default keyboard interaction: at minimum it should allow
 for ENTER and/or SPACE to toggle play/pause - and clicking on it
 should work, too.


 Why? Video without controls is expected to have author-provided controls.
 Trying to squeeze in hard-to-discover invisible browser-provided controls in
 that case would likely just confuse users and make authors curse browsers
 and try to preventDefault() and tabindex=-1 their video elements (or switch
 back to Flash) so that their own controls is what their users interact with.

Hmm... I guess so. The problem that I have is that it's not guaranteed
that there are accessible controls when there is no @controls
attribute. That means that screen readers don't even see the image,
nor would they provide access to the context menu, through which
"play" is usually possible. But maybe that's a bug in the
screenreaders rather than the spec - they should always treat video as
an interactive element.


 Potentially it should have up/down arrows to change
 the volume and left/right arrows to seek back/forward by e.g. 10sec.
 As it's currently specified, browser cannot provide such interaction
 when there are no controls, since the element is not generally
 specified as an interactive element [2].


 It can, actually. "Interactive content" is just a category for the purpose
 of the content model; it doesn't have implications like the above. (For
 instance, if you have a video without a controls attribute, and the user
 enables the controls from the context menu, the element still isn't
 "interactive content" but it shows controls.)

That's a browser-specific hack, though, and not quite in the spirit of
the spec, is it?

Maybe the answer is, in general: it's an implementation issue. However,
the spec doesn't really encourage such
implementations/interpretations. The spec should then say something
like: if there is a screenreader running or a context menu available
that provides for controls, then the element is also regarded as
interactive content.

Thanks,
Silvia.


Re: [whatwg] metadata attribute for media

2012-06-11 Thread Silvia Pfeiffer
On Tue, Jun 12, 2012 at 7:53 AM, Ralph Giles gi...@mozilla.com wrote:
 Recently, we've been considering adding a 'tags' or 'metadata' attribute
 to HTML media elements in Firefox, to allow webcontent access to
 metadata from the playing media resource. In particular we're interested
 in tag data like creator, title, date, and so on.

 My recollection is that this has been discussed a number of times in the
 past, but there was never suffient motivation to support the interface.
 Our particular motivation here is webapps that present a media file
 library. While it's certainly possible to parse the tag data out
 directly with javascript, it's more convenient if the HTML media element
 does so, and the underlying platform decoder libraries usually provide
 this data already.

 As such I wanted to raise the issue here and get design feedback and
 levels of interest for other user agents.

 Here's a first idea:

 partial interface HTMLMediaElement {
  readonly attribute object tags;
 };

 Accessing media.tags provides an object with key: value data, for example:

 {
  'title': 'My Movie',
  'creator': 'This User',
  'date': '2012-06-18',
  'license': 'http://creativecommons.org/licenses/by-nc-sa/'
 }

 The keys may need to be filtered, since the files can contain things
 like base64-encoded cover art, which makes the object prohibitively
 large. The keys may need to be mapped to some standard scheme (e.g.
 Dublin Core) since vocabularies vary from format to format.

 This is nice because it's easy to access, can be simply enumerated,
 and is extensible. Which will be helpful if it gets added to img for EXIF data.


Did you know that the W3C media annotations WG has specified such an
API? See http://www.w3.org/TR/2011/WD-mediaont-api-1.0-2022/#api-description
. Essentially, their suggestion is to add the following IDL functions:

void getMediaProperty (DOMString[] propertyNames, PropertyCallback
successCallback, ErrorCallback errorCallback, optional DOMString
fragment, optional DOMString sourceFormat, optional DOMString
language);

void getOriginalMetadata (DOMString sourceFormat, MetadataCallback
successCallback, ErrorCallback errorCallback);


I actually think their API is too complicated and prefer your simple
approach. Returning a JSON object also allows hierarchical tags to be
returned in a structured way, which is nice. You lose the
normalisation that the W3C media ann WG has worked on across different
media resources, but that normalisation can always be done on top of
the JSON objects that your API returns.
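
E.g. a media library page could then do something like this (sketch -
the tags attribute is your proposal and not implemented anywhere yet):

  var video = document.querySelector('video');
  video.addEventListener('loadedmetadata', function () {
    var tags = video.tags;  // proposed attribute
    if (tags && tags.title) {
      document.getElementById('heading').textContent = tags.title;
    }
  });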


Cheers,
Silvia.


Re: [whatwg] tabindexscope

2012-06-07 Thread Silvia Pfeiffer
On Thu, Jun 7, 2012 at 9:57 AM, Ian Hickson i...@hixie.ch wrote:
 On Mon, 30 Jan 2012, Tab Atkins Jr. wrote:
 On Mon, Jan 30, 2012 at 1:54 PM, Ian Hickson i...@hixie.ch wrote:
  On Tue, 8 Nov 2011, Ojan Vafai wrote:
  We keep running into the use case where the physical position matters
  for the tab order. The problem with just setting tabIndex (or CSS3
  tab-index) is that it takes the thing out of the natural order.
 
  This problem comes up in a lot of places (e.g. absolute positioning).
  It's recently come up for CSS flexboxes, e.g. if you set flex-order or a
  reverse flow, then the tabindex still being in document order is often
  not what the author wants
  (https://bugs.webkit.org/show_bug.cgi?id=62664).
 
  <button tabindex=0>A</button>
  <div tabindex=2 tabindexscope>
  <button tabindex=2>C</button>
  <button tabindex=1>B</button>
  </div>
  <button tabindex=1>D</button>
 
  The order for the tabbing would be A-D-B-C.
 
  The spec says that the order when you omit tabindex (or set it to 0)
  should follow platform conventions. If the platform convention is to make
  the tab order follow the visual position, then that's what the browser
  should do.
 
  Surely that would be better than having authors manage local regions for
  tabindex, especially since the positioning depends on the CSS level, not
  the HTML level, and thus trying to manage the tabindex in the HTML would
  be a layering violation anyway.

 If you are attempting to match the tab order to the position of an
 element, you are correct.  In this situation, the tab order of the
 group itself should be controlled by the 'nav-index' property
 alongside the positioning code.

 However, *within* a group of controls, the relative order may need to
 be scoped without reference to CSS.  This can happen because the group
 is being positioned with CSS (and thus the appropriate tab-index is
 unpredictable), because the group may be generated into multiple pages
 with different tab-index'd items elsewhere in the page, or just
 because the dev would like to write their tab-indexes without having
 to renumber everything every time they move the HTML around in the
 page.

 Scoping a tab-index is thus a property that can appropriately belong
 to the HTML level, just as much as tab-index itself does.

 Can you give some examples of real-world pages where the tabindex
 attribute has been used (with difficulty due to the lack of scoping),
 where nav-index is not the right solution, and where the UA following
 platform conventions for tab order doesn't or wouldn't end up in a good
 UI, that show that this feature would be useful? I'm having trouble
 picturing it, and the frequent references above to positioning and other
 CSS layout features is confusing me.

I think I have an example.

Imagine a video player with special functionality and a special tab
order defined (e.g. the current YouTube HTML5 player has that, because
the logical visual layout of the control elements is different from the
DOM order). On the video's home page you can pre-define the tab order
and make sure it fits with the page. But when it's embedded in another
Web page, that special tab order potentially conflicts with the
page's tab order, since the embed code can't really know what index
number to start with - it knows nothing about the page
into which it is embedded. I believe nav-index would have the same
problem, but a tabindexscope would solve the issue.

The same should actually be true for any other web widget with a
custom tab order.

Cheers,
Silvia.


Re: [whatwg] tabindexscope

2012-06-07 Thread Silvia Pfeiffer
On Fri, Jun 8, 2012 at 6:10 AM, Ian Hickson i...@hixie.ch wrote:
 On Thu, 7 Jun 2012, Silvia Pfeiffer wrote:
 
  Can you give some examples of real-world pages where the tabindex
  attribute has been used (with difficulty due to the lack of scoping),
  where nav-index is not the right solution, and where the UA following
  platform conventions for tab order doesn't or wouldn't end up in a
  good UI, that show that this feature would be useful? I'm having
  trouble picturing it, and the frequent references above to positioning
  and other CSS layout features is confusing me.

 Imagine a video player with special functionality and a special tab
 order defined (e.g. the current YouTube HTML5 player has that, because
 the logical visual layout of the control elements is different from the
 DOM order). On the video's home page you can pre-define the tab order
 and make sure it fits with the page. But when it's embedded in another
 Web page, that special tab order potentially conflicts with the
 page's tab order, since the embed code can't really know what index
 number to start with - it knows nothing about the page into
 which it is embedded. I believe nav-index would have the same problem,
 but a tabindexscope would solve the issue.

 I don't think this is really a good use case for three reasons:

 - You describe the intended tab order as being the visual order, which is,
 per spec, the order the UA should be using in the first place if that's
 what the platform does, not the DOM order;

I haven't seen a spec that says how browsers should implement the
default tab order - is there one? Typically it has been implemented as
DOM order, with Web devs given the ability to override this using
@tabindex. As long as there is no spec requiring browsers to implement
the default tab order as the visual order (given absolutely placed
elements, floating elements etc.), I don't think we can make any
assumptions about it.


 - Typically a video player like this would be embedded using an iframe,
 which introduces a new tab order scope anyway;

Yes, that solves this issue oftentimes. But what happens when you want
to provide developers an HTML fragment without an iframe for cut and
paste, and it requires tabindex fixes? It's pretty annoying for the Web
dev to have to go through that snippet and manually adapt it to their
website. Assuming it comes from a content management system, you'd need
to include a parameter for the embedding that adapts the snippet's
tabindex attribute values when including it, with a dependency on the
Web page on which it renders. It would be pretty fragile.


 - Widgets in general will in the future be designed in self-contained
 components, which should IMHO be defined as a tab order scope -- we don't
 need an attribute in HTML to support that.

How would that work? Is there a spec somewhere?

Cheers,
Silvia.


Re: [whatwg] tabindexscope

2012-06-07 Thread Silvia Pfeiffer
On Fri, Jun 8, 2012 at 11:49 AM, Silvia Pfeiffer
silviapfeiff...@gmail.com wrote:
 On Fri, Jun 8, 2012 at 6:10 AM, Ian Hickson i...@hixie.ch wrote:
 On Thu, 7 Jun 2012, Silvia Pfeiffer wrote:
 
  Can you give some examples of real-world pages where the tabindex
  attribute has been used (with difficulty due to the lack of scoping),
  where nav-index is not the right solution, and where the UA following
  platform conventions for tab order doesn't or wouldn't end up in a
  good UI, that show that this feature would be useful? I'm having
  trouble picturing it, and the frequent references above to positioning
  and other CSS layout features is confusing me.

 Imagine a video player with special functionality and a special tab
 order defined (e.g. the current YouTube HTML5 player has that, because
 the logical visual layout of the control elements is different from the
 DOM order). On the video's home page you can pre-define the tab order
 and make sure it fits with the page. But when it's embedded in another
 Web page, that special tab order potentially conflicts with the
 page's tab order, since the embed code can't really know what index
 number to start with - it knows nothing about the page into
 which it is embedded. I believe nav-index would have the same problem,
 but a tabindexscope would solve the issue.

 I don't think this is really a good use case for three reasons:

 - You describe the intended tab order as being the visual order, which is,
 per spec, the order the UA should be using in the first place if that's
 what the platform does, not the DOM order;

 I haven't seen a spec that says how browsers should implement the
 default tab order - is there one? Typically it has been implemented as
 DOM order, with Web devs given the ability to override this using
 @tabindex. As long as there is no spec requiring browsers to implement
 the default tab order as the visual order (given absolutely placed
 elements, floating elements etc.), I don't think we can make any
 assumptions about it.


 - Typically a video player like this would be embedded using an iframe,
 which introduces a new tab order scope anyway;

 Yes, that solves this issue oftentimes. But what happens when you want
 to provide developers an HTML fragment without an iframe for cut and
 paste, and it requires tabindex fixes? It's pretty annoying for the Web
 dev to have to go through that snippet and manually adapt it to their
 website. Assuming it comes from a content management system, you'd need
 to include a parameter for the embedding that adapts the snippet's
 tabindex attribute values when including it, with a dependency on the
 Web page on which it renders. It would be pretty fragile.


 - Widgets in general will in the future be designed in self-contained
 components, which should IMHO be defined as a tab order scope -- we don't
 need an attribute in HTML to support that.

 How would that work? Is there a spec somewhere?

 Cheers,
 Silvia.

Actually, I'm just thinking it through for the content management use
case (in particular here the YouTube case). I don't think I can solve
this without a @tabindexscope.

Assuming the video player is in a Web page and has a custom tab order
defined where the first element starts at value n and the others
successively have values n+1, n+2, etc., this will still dominate all
other elements on the page that come before it. I can't even change
that n value dynamically for the page, because the player overall
needs to sit as part of the @tabindex=0 order of the page. Otherwise
it will always be addressed first.

The problem is that @tabindex does two things: it changes the relative
order, and it prioritizes elements over those that have an implicit
tab order. It's the need for explicit order combined with the side
effect of prioritization that motivates the use case for
@tabindexscope.
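
To illustrate with a snippet (hypothetical - @tabindexscope does not
exist today):

  <a tabindex="1" href="#top">page link</a>

  <!-- pasted player snippet: its custom order stays self-contained -->
  <div tabindexscope>
    <button tabindex="1">play</button>
    <button tabindex="2">volume</button>
  </div>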

Cheers,
Silvia.


Re: [whatwg] sources in video by quality as well as codec

2012-06-06 Thread Silvia Pfeiffer
I believe right now there are two proposals under discussion that are
trying to address the adaptive streaming issues:
https://dvcs.w3.org/hg/audio/raw-file/tip/streams/StreamProcessing.html
and
http://dvcs.w3.org/hg/html-media/raw-file/tip/media-source/media-source.html

I believe both are still somewhat at the experimental level and need
harmonization, but they are both being worked on at the W3C.

HTH.

Cheers,
Silvia.

On Wed, Jun 6, 2012 at 8:23 AM, Charles Pritchard ch...@jumis.com wrote:
 On Jun 5, 2012, at 2:54 PM, Ian Hickson i...@hixie.ch wrote:

 On Tue, 21 Feb 2012, Rodger Combs wrote:

 I propose that source add a quality, bitrate, or filesize attribute to
 allow the UA to decide between multiple streams by choosing the maximum
 quality file that it can download within a reasonable amount of time
 (e.g. it will download faster than it will play) or based on a user
 preference (e.g. prefer SD quality, or always use HD when provided). It
 should also be possible to retrieve a list of the sources the UA can
 play in JS, and switch between them by user action (either a JS call for
 a custom UI or a dropdown in the builtin UI), loading the new file and
 switching to it with minimal skipping. This way, a site like YouTube,
 which presents several files in various bitrates and codecs, can allow
 the user to choose to use a higher quality without having to force an
 src attribute on the video, and a mobile UA that roams from 3G to WiFi
 or moves close to a base station can increase the quality of its stream.
 I think it fits in well with the purpose of the source element. This is
 certainly open for modification, but I think it's a good concept in
 essence.

 If this is for a site like YouTube, I think an adaptive network channel
 would be a more effective solution (i.e. one where the download adapts in
 real time to changing network conditions, with the endpoints negotiating
 with each other regarding what to transmit).


 I'd like to see strawman proposals for resource description markup.

 Presently, magnet+BitTorrent is the only mature and implemented tech in this 
 field that I've found with wide support. And it's not even meant for adaptive 
 streaming.

 I know that markup for subtitles happened in this group. I'd like to see an 
 effort for markup for resources, with the same experimental atmosphere.

 The hope being that we can copy and paste some kind of text markup which 
 describes various endpoints and metadata sufficient for streaming strategies 
 for media.

 -Charles


Re: [whatwg] Correcting some misconceptions about Responsive Images

2012-05-18 Thread Silvia Pfeiffer
On Fri, May 18, 2012 at 6:01 PM, Bruce Lawson bru...@opera.com wrote:
 On Fri, 18 May 2012 01:16:52 +0100, Tab Atkins Jr. jackalm...@gmail.com
 wrote:

  I believe the CG rules

 would not allow an employee of a W3C Member company to be a free agent
 though.


 It appears not. I tried to join the responsive images CG as just me, as I'm
 interested but not representing Opera, and I don't like to give people the
 impression that my interest in or support of any suggestion means Opera will
 implement it next Thursday. But I couldn't; I had to join as an Opera rep,
 and get permission internally. That's time-consuming and process-laden.

I believe that is the case for all participation in the W3C.
Unfortunately, I was not able to join one WG as a private invited
expert and a different one as a Google representative (in my role as
a Google contractor). I think it is a problem that the W3C doesn't
allow you to have multiple different forms of representation per
person. I even tried with different email addresses and that wasn't
allowed either.

Silvia.


Re: [whatwg] Features for responsive Web design

2012-05-17 Thread Silvia Pfeiffer
On Thu, May 17, 2012 at 3:26 PM, Maciej Stachowiak m...@apple.com wrote:

 On May 16, 2012, at 9:39 PM, Silvia Pfeiffer silviapfeiff...@gmail.com 
 wrote:

 On Wed, May 16, 2012 at 10:55 PM, Matthew Wilcox m...@matthewwilcox.com 
 wrote:
 Chalk me up as another making that mistake. Properties on elements
 usually describe a property of the element, not a property of
 something else (like the viewport).

 If it does indeed rely on a rendering issue (like the size of the
 viewport), wouldn't it make more sense to make it a CSS feature? I
 think that would be less confusing and would better follow the
 established separation of markup in HTML and styling/rendering in CSS.

 CSS can handle this fine for presentational images (such as CSS background
 images). But it's not obvious that it's the best way to influence selection
 of content images referenced from img. Content images are meaningful, not
 just stylistic, so their variants are meaningful too (even if the choice of
 variant is influenced by presentational issues).

Hmm... I'm not actually talking about having the images specified in CSS.
I don't actually have a suggestion for how that would look - it seems
the list of resources needs to be given in HTML, but the selection
between them should be done in CSS.

Not sure this helps ... but since we're brainstorming...

Silvia.


Re: [whatwg] Features for responsive Web design

2012-05-17 Thread Silvia Pfeiffer
On Thu, May 17, 2012 at 7:54 PM, Kornel Lesiński kor...@geekhood.net wrote:
 On Thu, 17 May 2012 02:29:11 +0100, Jacob Mather jmat...@itsmajax.com
 wrote:

 As I said, I understand that it is a hard problem, but the question
 is, is it the correct problem.

 There are plenty of reasons not to do it, but is there any actual
 reason that the approach is less correct than the current proposals?


 Yes, trading off latency to save bandwidth is definitely an incorrect
 approach. Bandwidth will keep increasing much faster than latency decreases
 (and there are hard physical limits to decreasing latency, while bandwidth
 could go up to infinity).

 On high-latency high-bandwidth connections (satellite, 3G/4G) it may already
 be cheaper to download all versions of all images than to wait for CSS to be
 able to select the right ones to load. A solution that requires page layout
 for image loading is a step backwards for performance.


Maybe the metrics that we are suggesting for resources can help:
https://www.w3.org/Bugs/Public/show_bug.cgi?id=12399 .
When measuring bytesReceived, downloadTime, and networkWaitTime for
resources loaded, it is possible to track available bandwidth and
latency and thus find out what type of network connection one is on.
This may help with making decisions?
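
A rough sketch of how that could feed such a decision (the metric names
are from the proposal; the metrics accessor and thresholds are
illustrative):

  var m = video.metrics;  // hypothetical accessor for the proposed metrics
  // bytes * 8 / milliseconds = kbit/s
  var kbps = (m.bytesReceived * 8) / m.downloadTime;
  var latencyMs = m.networkWaitTime;
  if (kbps > 2000 && latencyMs < 100) {
    // fast connection: fetch the high-resolution variants
  }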

Silvia.


Re: [whatwg] Features for responsive Web design

2012-05-16 Thread Silvia Pfeiffer
On Wed, May 16, 2012 at 10:55 PM, Matthew Wilcox m...@matthewwilcox.com wrote:
 Chalk me up as another making that mistake. Properties on elements
 usually describe a property of the element, not a property of
 something else (like the viewport).

If it does indeed rely on a rendering issue (like the size of the
viewport), wouldn't it make more sense to make it a CSS feature? I
think that would be less confusing and would better follow the
established separation of markup in HTML and styling/rendering in CSS.

Just my 2c...

Silvia.


Re: [whatwg] So if media-queries aren't for determining the media to be used what are they for?

2012-05-15 Thread Silvia Pfeiffer
On Wed, May 16, 2012 at 8:37 AM, Odin Hørthe Omdal odi...@opera.com wrote:
 Andy Davies dajdav...@gmail.com wrote:

 Looking at the srcset proposal it appears to be recreating aspects of
 media queries in a terse, less obvious form...

 We've already got media queries so surely we should be using them to
 determine which image should be used, and if media queries don't have
 the features we need then we should be extending them...


 Ah! What a truly great question, so simple.

 The answer is: no, it is not media queries, although they look like it. A
 big problem is that it's so easy to explain it by saying "it's just like
 media-query max-width", rather than finding the words to illustrate that
 they are totally different.

 The *limited effect* also feels similar, which doesn't help the case at
 all.

 So, even though I have a rather bad track record of explaining
 anything, I'll try again:

 Media queries come from the client side. They allow the author of a web
 page to tell exactly how she wants to lay out her design based on the
 different queries. The browser *HAS* to follow these queries. And also,
 I don't think (please correct me if wrong) that media queries can be
 subset to only the stuff that's really meaningful to do at prefetch-time.

 The srcset proposal, on the other hand, consists purely of HINTS to the
 browser engine about the resources. They are only declarative hints that
 can be leveraged in a secret sauce way (like Bruce said in another mail)
 to always optimize image fetching and other features. If you make a new
 kind of browser (like e.g. Opera Mini) it can have its own heuristics
 that make sense *for that single browser* without asking _anyone_,
 without relying on web authors doing the correct thing, changing
 anything, or even announcing to anyone what they are doing. It's opening
 up the way for innovation, good algorithms and smart uses in the future.


 That's the basic difference, totally different. :-)

If that's the case, would it make sense to get rid of the @media
attribute on source elements in video and replace it with @srcset?

Silvia.


Re: [whatwg] So if media-queries aren't for determining the media to be used what are they for?

2012-05-15 Thread Silvia Pfeiffer
On Wed, May 16, 2012 at 9:20 AM, Odin Hørthe Omdal odi...@opera.com wrote:
 Silvia Pfeiffer silviapfeiff...@gmail.com wrote Wed, 16 May 2012 00:57:48
 +0200

 Media queries come from the client side. They allow the author of a web
 page to tell exactly how she wants to lay out her design based on the
 different queries. The browser *HAS* to follow these queries. And also,
 I don't think (please correct me if wrong) that media queries can be
 subset to only the stuff that's really meaningful to do at prefetch-time.

 The srcset proposal, on the other hand, consists purely of HINTS to the
 browser engine about the resources. They are only declarative hints that
 can be leveraged in a secret sauce way (like Bruce said in another mail)
 to always optimize image fetching and other features. If you make a new
 kind of browser (like e.g. Opera Mini) it can have its own heuristics
 that make sense *for that single browser* without asking _anyone_,
 without relying on web authors doing the correct thing, changing
 anything, or even announcing to anyone what they are doing. It's opening
 up the way for innovation, good algorithms and smart uses in the future.

 That's the basic difference, totally different.


 If that's the case, would it make sense to get rid of the @media
 attribute on source elements in video and replace it with @srcset?


 Video is at least a bit different in that you don't expect it to be fully
 loaded and prefetched at such an early stage as img. But I've been thinking
 about that since I read something like "we already have media queries in
 source for video, but it's not really implemented and used yet".

Some browsers support @media in video for min/max width and height
specifications. But I believe the use case is more like the one we are
trying to solve with @srcset than a traditional media queries use
case.
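
e.g. markup along these lines (illustrative file names):

  <video controls>
    <source src="video-small.webm" media="(max-width: 480px)">
    <source src="video-large.webm">
  </video>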


 I'm not sure. What do you think? As far as I've seen, you're highly
 knowledgeable about video. Why do we have media queries on the video
 element? Do we have a use case page?

Hehe, thanks. :-) But media queries were in video before I arrived,
so I missed the whole discussion around it and how it got there. Some
of the browsers that implement support for it should speak up.


 Doing the same as whatever img ends up doing
 might be a good fit if the use cases are similar enough. Would be nice to be
 consistent if that makes sense.

I'm not 100% sure I grok the difference between media queries and
@srcset. I threw this question into the mix to see the reaction -
maybe we need both? What would that even mean?

In addition, I wonder about the adaptive streaming case, where byte
ranges from different files are dynamically switched to during
playback because of bandwidth changes. For video, the solution seems to
be: use a manifest file in your @src (such as DASH) and rely on the
browser to pick between the files. Or you use JavaScript:
http://dvcs.w3.org/hg/html-media/raw-file/tip/media-source/media-source.html
. An attribute like @srcset would allow listing the alternative files
directly in the HTML file. That may be preferable?

More questions than answers right now, but we should think
consistently between audio, video and images.

Cheers,
Silvia.


Re: [whatwg] Considering a lang- attribute prefix for machine translation and intelligibility

2012-05-02 Thread Silvia Pfeiffer
On Thu, May 3, 2012 at 2:59 AM, Charles Pritchard ch...@jumis.com wrote:
 There has been some discussion on the w3c/whatwg mailing lists about how far
 we can mark up content with linguistic tags, such as marking word and/or
 sentence boundaries.

 In my authoring of web apps, I often write a short manual into a hidden div,
 so that the vocabulary of my application can be processed by translation
 services such as Google translate. Having content in the DOM seems the most
 appropriate way to handle translation.

 I'd like the group to consider the costs/benefits/alternatives to a lang-
 attribute.
 Such as <span lang-role="sentence">This is a sentence.</span>

 The data- and aria- attributes have worked out well. We may want to make
 room for one more.

 Such a structure could be used to markup typical subject/object/verb and
 clause sections; it could also be used to markup poetic texts as well as
 defined meanings of content.

 http://www.omegawiki.org/Expression:orange
 This is an <span lang-meaning="DefinedMeaning:orange_(5821)">orange</span>.
 Now this, this is <span
 lang-meaning="DefinedMeaning:orange_(5822)">orange</span>.

 In most cases there's no need to define sentence boundary, meaning or
 otherwise. But, it'd sure be nice to have the ability to do so in a standard
 manner.

 I'd recommend role, meaning and prosody/pronunciation as the primary
 targets. Character markup may be something to consider as it's come up in
 SVG (rotate) and in CSS before. Doing a span for each character is not
 practical, so we'd want a shorthand much as SVG has shorthand for rotate.

 -Charles

Hi Charles,

In one of my companies, we've successfully used span, @class and
@data-xxx attributes to support linguistic markup. See
http://www.eopas.org/transcripts/70 for an example (you will need to
agree to a research license checkbox to link through).

Here's a markup excerpt:

<div class="051-004_w morphemes tier">
<span>
<table class="word">
<tbody><tr>
<td colspan="1">
<span class="concordance" data-addr="/p4/w1" data-language-code="erk"
data-search="Maarik" data-type="word">
Maarik
</span>
</td></tr><tr>
<td class="morpheme">
<span class="concordance" data-addr="/p4/w1/m1"
data-language-code="erk" data-search="maarik" data-type="morpheme">
maarik
</span>
</td>
</tr>
<tr>
<td class="gloss">mister</td>
</tr>
</tbody></table>
</span>

It supports multiple levels of linguistic semantic markup:
* phrase
* word
* morpheme
* gloss

If you wanted to make a standard for what levels should be marked up
in which way for linguistic data, you'd first have to get the
linguistic researchers to agree on the required feature set. Then you
could standardise e.g. data-lang-xxx attributes - or even make up new
linguistic-xxx attributes.
http://www.whatwg.org/specs/web-apps/current-work/#extensibility
describes how to do that.
describes how to do that.

Hope this helps.

Cheers,
Silvia.


Re: [whatwg] On implementing videos with multiple tracks in HTML5

2012-04-30 Thread Silvia Pfeiffer
On Tue, May 1, 2012 at 2:27 PM, Ian Hickson i...@hixie.ch wrote:
 On Fri, 20 Aug 2010, Silvia Pfeiffer wrote:

 Three issues I have taken out of this discussion that I think are still
 open to discuss and potentially define in the spec:

 * How to expose in-band extra audio and video tracks from a multi-track
 media resource to the Web browser? I am particularly thinking here about
 the use cases Lachlan mentioned: offering stereo and surround sound
 alternatives, audio descriptions, audio commentaries or multiple
 languages, and would like to add sign language tracks to this list. This
 is important to solve now, since it will allow the use of audio
 descriptions and sign language, two important accessibility
 requirements.

 I think this is now resolved. Let me know if there's still something open
 here.

Ha, yes! 21 months later and it's indeed solved through the same
mechanism that synchronisation of multiple audio/video tracks is
solved.


 * How to associate and expose such extra audio and video tracks that are
 provided out-of-band to the Web browser? This is probably a next-version
 issue since it's rather difficult to implement in the browser. It
 improves on meeting accessibility needs, but it doesn't stand in the way
 of providing audio descriptions and sign language - just makes it easier
 to use them.

 I'm not sure what you mean here.

It was the difference between in-band tracks and separate files. Also
solved by now.


 * Whether to include a multiplexed download functionality in browsers
 for media resources, where the browser would do the multiplexing of the
 active media resource with all the active text, audio and video tracks?
 This could be a context menu functionality, so is probably not so much a
 need to include in the HTML5 spec, but it's something that browsers can
 consider to provide. And since muxing isn't quite as difficult a
 functionality as e.g. decoding video, it could actually be fairly cheap
 to implement.

 I agree that this seems out of scope for the spec.

Thread closed. :-)

Cheers,
Silvia.


Re: [whatwg] Encoding Sniffing

2012-04-21 Thread Silvia Pfeiffer
On Sat, Apr 21, 2012 at 8:21 PM, Anne van Kesteren ann...@opera.com wrote:
 Hey,

 This morning I looked into what it would take to define Encoding Sniffing.
 http://wiki.whatwg.org/wiki/Encoding#Sniffing has links as to what I looked
 at (minus Opera internal). As far as I can tell Gecko has the most
 comprehensive approach and should not be too hard to define (though writing
 it all out correctly and clearly will be some work).

 I have some questions though:

 1) Is this something we want to define and eventually implement the same
 way?
 2) Does this need to apply outside HTML? For JavaScript it is forbidden per
 the HTML standard at the moment. CSS and XML do not allow it either. Is it
 used for decoding text/plain at the moment?

We've had some discussion on the usefulness of this for WebVTT - mostly
just in relation to HTML, though I am sure that standalone video
players that decode WebVTT would find it useful, too.

Cheers,
Silvia.

 3) Is there a limit to how many bytes we should look at?

 Thanks,


 --
 Anne van Kesteren
 http://annevankesteren.nl/


Re: [whatwg] sources in video by quality as well as codec

2012-02-23 Thread Silvia Pfeiffer
I'd be curious what you think about the proposal at
http://wiki.whatwg.org/wiki/Video_Metrics which is being addressed
through bugs https://www.w3.org/Bugs/Public/show_bug.cgi?id=14970 and
https://www.w3.org/Bugs/Public/show_bug.cgi?id=12399 .

Regards,
Silvia.

On Wed, Feb 22, 2012 at 11:06 AM, Rodger Combs rodger.co...@gmail.com wrote:
 I propose that source add a quality, bitrate, or filesize attribute to 
 allow the UA to decide between multiple streams by choosing the maximum 
 quality file that it can download within a reasonable amount of time (e.g. it 
 will download faster than it will play) or based on a user preference (e.g. 
 prefer SD quality, or always use HD when provided). It should also be 
 possible to retrieve a list of the sources the UA can play in JS, and 
 switch between them by user action (either a JS call for a custom UI or a 
 dropdown in the builtin UI), loading the new file and switching to it with 
 minimal skipping. This way, a site like YouTube, which presents several files 
 in various bitrates and codecs, can allow the user to choose to use a higher 
 quality without having to force an src attribute on the video, and a mobile 
 UA that roams from 3G to WiFi or moves close to a base station can increase 
 the quality of its stream. I think it fits in well with the purpose of the 
 source element. This is certainly open for modification, but I think it's a 
 good concept in essence.


Re: [whatwg] sources in video by quality as well as codec

2012-02-23 Thread Silvia Pfeiffer
quality, bitrate and filesize can all be calculated from those metrics,
and it can all be done automatically. So, if you provide a list of
source elements and those metrics are exposed by the browser
through the IDL, the switching that you're asking for becomes
possible.
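
A sketch of the switching logic that enables (illustrative names;
assumes available bandwidth has already been estimated from the
metrics):

  // pick the highest-bitrate source we can sustain
  function pickSource(sources, availableKbps) {
    var best = null;
    sources.forEach(function (s) {
      if (s.kbps <= availableKbps && (!best || s.kbps > best.kbps)) {
        best = s;
      }
    });
    // if nothing fits, fall back to the lowest-bitrate file
    return best || sources.reduce(function (a, b) {
      return a.kbps < b.kbps ? a : b;
    });
  }

  var measuredKbps = 1500;  // e.g. bytesReceived * 8 / downloadTime
  video.src = pickSource(
    [{src: 'video-sd.webm', kbps: 800}, {src: 'video-hd.webm', kbps: 3000}],
    measuredKbps).src;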

On Fri, Feb 24, 2012 at 4:22 PM, Rodger Combs rodger.co...@gmail.com wrote:
 While they're useful, I don't see how those bugs add the functions I 
 proposed. Am I missing something, or are you just asking for input on a 
 related topic? If so, I think those seem pretty nice.

 On Feb 23, 2012, at 11:09 PM, Silvia Pfeiffer wrote:

 I'd be curious what you think about the proposal at
 http://wiki.whatwg.org/wiki/Video_Metrics which is being addressed
 through bugs https://www.w3.org/Bugs/Public/show_bug.cgi?id=14970 and
 https://www.w3.org/Bugs/Public/show_bug.cgi?id=12399 .

 Regards,
 Silvia.

 On Wed, Feb 22, 2012 at 11:06 AM, Rodger Combs rodger.co...@gmail.com 
 wrote:
 I propose that source add a quality, bitrate, or filesize attribute to 
 allow the UA to decide between multiple streams by choosing the maximum 
 quality file that it can download within a reasonable amount of time (e.g. 
 it will download faster than it will play) or based on a user preference 
 (e.g. prefer SD quality, or always use HD when provided). It should also be 
 possible to retrieve a list of the sources the UA can play in JS, and 
 switch between them by user action (either a JS call for a custom UI or a 
 dropdown in the builtin UI), loading the new file and switching to it with 
 minimal skipping. This way, a site like YouTube, which presents several 
 files in various bitrates and codecs, can allow the user to choose to use a 
 higher quality without having to force an src attribute on the video, and a 
 mobile UA that roams from 3G to WiFi or moves close to a base station can 
 increase the quality of its stream. I think it fits in well with the 
 purpose of the source element. This is certainly open for modification, but 
 I think it's a good concept in essence.



Re: [whatwg] [html5] r6895 - [ac] (0) Tweak hidden='''s definition a bit to be more consistent with likely us [...]

2012-01-24 Thread Silvia Pfeiffer
Could we add video to the list of aria-describedby elements that may
link to hidden text?

i.e. change
"...being referenced from the images that they describe"
to
"...being referenced from the images or videos that they describe"?

IMO that would resolve several of the accessibility issues for the
video element, including poster-alt.

Cheers,
Silvia.


On Sat, Jan 14, 2012 at 9:11 AM,  wha...@whatwg.org wrote:
 Author: ianh
 Date: 2012-01-13 14:11:10 -0800 (Fri, 13 Jan 2012)
 New Revision: 6895

 Modified:
   complete.html
   index
   source
 Log:
 [ac] (0) Tweak hidden='''s definition a bit to be more consistent with likely 
 usage scenarios.
 Affected topics: HTML

 Modified: complete.html
 ===
 --- complete.html       2012-01-13 01:42:12 UTC (rev 6894)
 +++ complete.html       2012-01-13 22:11:10 UTC (rev 6895)
 @@ -70442,9 +70442,11 @@

   <p>All <a href=#html-elements>HTML elements</a> may have the <code
 title=attr-hidden><a href=#the-hidden-attribute>hidden</a></code> content
 attribute set. The <code title=attr-hidden><a
 href=#the-hidden-attribute>hidden</a></code> attribute is a <a
 href=#boolean-attribute>boolean
   attribute</a>. When specified on an element, it indicates that
 -  the element is not yet, or is no longer, relevant. <span class=impl>User
 agents should not render elements that have the
 -  <code title=attr-hidden><a href=#the-hidden-attribute>hidden</a></code>
 attribute
 -  specified.</span></p>
 +  the element is not yet, or is no longer, directly relevant to the
 +  page's current state, or that it is being used to declare content to
 +  be reused by other parts of the page as opposed to being directly
 +  accessed by the user. <span class=impl>User agents should not
 +  render elements that have the <code title=attr-hidden><a
 href=#the-hidden-attribute>hidden</a></code> attribute specified.</span></p>

   <div class=example>

 @@ -70485,9 +70487,15 @@
   <!-- for example, <a hidden href=#content>Skip to content</a> would be
 inappropriate. -->
   <!-- (but only add that example if you first add some more good valid
 examples -->

 -  <p>Elements that are not <code title=attr-hidden><a
 href=#the-hidden-attribute>hidden</a></code>
 -  should not link to or refer to elements that are <code
 title=attr-hidden><a href=#the-hidden-attribute>hidden</a></code>.</p>
 +  <p>Elements that are not themselves <code title=attr-hidden><a
 href=#the-hidden-attribute>hidden</a></code> must not <a
 href=#hyperlink>hyperlink</a> to
 +  elements that are <code title=attr-hidden><a
 href=#the-hidden-attribute>hidden</a></code>. The <code title=for>for</code>
 attributes of <code><a href=#the-label-element>label</a></code> and
 +  <code><a href=#the-output-element>output</a></code> elements that are not
 themselves <code title=attr-hidden><a
 href=#the-hidden-attribute>hidden</a></code> must similarly not refer to
 +  elements that are <code title=attr-hidden><a
 href=#the-hidden-attribute>hidden</a></code>. In both
 +  cases, such references would cause user confusion.</p>

 +  <p>Elements and scripts may, however, refer to elements that are
 +  <code title=attr-hidden><a href=#the-hidden-attribute>hidden</a></code> in
 other contexts.</p>
 +
   <div class=example>

    <p>For example, it would be incorrect to use the <code
 title=attr-hyperlink-href><a href=#attr-hyperlink-href>href</a></code>
 attribute to link to a
 @@ -70495,12 +70503,17 @@
    attribute. If the content is not applicable or relevant, then there
    is no reason to link to it.</p>

 -   <p>It would similarly be incorrect to use the ARIA <code
 title=attr-aria-describedby>aria-describedby</code> attribute to
 -   refer to descriptions that are themselves <code title=attr-hidden><a
 href=#the-hidden-attribute>hidden</a></code>. Hiding a section means that it
 -   is not applicable or relevant to anyone at the current time, so
 -   clearly it cannot be a valid description of content the user can
 -   interact with.</p>
 +   <p>It would be fine, however, to use the ARIA <code
 title=attr-aria-describedby>aria-describedby</code> attribute to
 +   refer to descriptions that are themselves <code title=attr-hidden><a
 href=#the-hidden-attribute>hidden</a></code>. While hiding the descriptions
 +   implies that they are not useful alone, they could be written in
 +   such a way that they are useful in the specific context of being
 +   referenced from the images that they describe.</p>

 +   <p>Similarly, a <code><a href=#the-canvas-element>canvas</a></code>
 element with the <code title=attr-hidden><a
 href=#the-hidden-attribute>hidden</a></code> attribute could be used by a
 +   scripted graphics engine as an off-screen buffer, and a form
 +   control could refer to a hidden <code><a
 href=#the-form-element>form</a></code> element using its
 +   <code title=attr-fae-form><a href=#attr-fae-form>form</a></code>
 attribute.</p>
 +
   </div>

   <p>Elements in a section hidden by the <code title=attr-hidden><a
 href=#the-hidden-attribute>hidden</a></code> attribute are still active,

 Modified: index
 ===
 --- index       2012-01-13 
