Re: [whatwg] Timestamp from video source in order to sync (e.g. expose OGG timestamp to javascript)

2010-05-24 Thread Robert O'Callahan
On Mon, May 24, 2010 at 5:54 PM, Philip Jägenstedt phil...@opera.com wrote:

 So from this I gather that either:

 1. initialTime is always 0

 or

 2. duration is not the duration of resource, but the time at the end.


I wouldn't say that. If you can seek backwards to before the initial time,
then clearly 'duration' really is still the duration, you just didn't start
at the beginning. Same goes even if you can't seek backwards; e.g. this
live stream is an hour long and you have started 20 minutes into it.

 This seems to be what is already in the spec. Instead of guessing what
 everyone means, here's what I'd want:

 1. let currentTime always start at 0, regardless of what the timestamps or
 other metadata of the media resource says.

 2. let currentTime always end at duration.

 3. expose an offset from 0 in startTime or a renamed attribute for cases
 like live streaming so that the client can e.g. sync slides.

 The difference from what the spec says is that the concept of earliest
 possible position is dropped.


I think the current spec allows you to seek backwards from the starting
point. So would my proposal. Would yours? Would you allow 'seekable' to
contain negative times? I think it's slightly simpler to allow currentTime
to start at a non-zero position than to allow negative times and to support
the offset in your point 3.
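
To make the comparison concrete (a sketch only; neither shape is shipped
API, and startTimeOffset is just an illustrative name):

  // Shape A: currentTime starts at a non-zero position.
  video.currentTime;                          // e.g. 1200 when joining a
                                              // live stream 20 minutes in

  // Shape B: currentTime starts at 0, plus an explicit offset attribute.
  video.currentTime;                          // 0 when joining
  video.currentTime + video.startTimeOffset;  // 1200, the stream position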

I also think your point 3 would be slightly harder to spec. I'm not sure
what you'd say.

Rob
-- 
He was pierced for our transgressions, he was crushed for our iniquities;
the punishment that brought us peace was upon him, and by his wounds we are
healed. We all, like sheep, have gone astray, each of us has turned to his
own way; and the LORD has laid on him the iniquity of us all. [Isaiah
53:5-6]


Re: [whatwg] Canvas and Image problems

2010-05-24 Thread Schalk Neethling
Hi Marius,

 

That is actually a pretty good idea. You can still have your other code run
at document ready, but then do the drawing to canvas once the image is
ready. Best of both ;)

 

Thanks,

Schalk

 

From: Marius Gundersen [mailto:gunder...@gmail.com] 
Sent: Monday, May 24, 2010 4:49 AM
To: Schalk Neethling
Cc: whatwg@lists.whatwg.org
Subject: Re: [whatwg] Canvas and Image problems

 

You could also add a listener to the image to check that it actually loads:

 

$(document).ready(function() {
  var image = $('#cat').get(0);
  image.onload = function(e) {
    var cv = $('#img_container').get(0);
    var ctx = cv.getContext('2d');
    ctx.drawImage(image, 0, 0);
  };
});
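
One caveat worth adding (a sketch, assuming the same markup as above): if
the image is already in the cache, its load event may have fired before the
handler was attached, so it can be worth checking image.complete too:

  if (image.complete) {
    var ctx = $('#img_container').get(0).getContext('2d');
    ctx.drawImage(image, 0, 0);
  }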

On Sun, May 23, 2010 at 10:30 PM, Schalk Neethling
sch...@ossreleasefeed.com wrote:

Jip, using $(window).load() works perfectly.

 



Re: [whatwg] Canvas and Image problems

2010-05-24 Thread Simon Pieters
On Mon, 24 May 2010 04:48:32 +0200, Marius Gundersen gunder...@gmail.com  
wrote:


 You could also add a listener to the image to check that it actually
 loads:


If you want to paint the first frame of a video, listen for the loadeddata  
event.
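
For instance (a minimal sketch of that suggestion):

  var video = document.querySelector('video');
  var ctx = document.querySelector('canvas').getContext('2d');
  video.addEventListener('loadeddata', function() {
    // The frame at the current playback position is now available.
    ctx.drawImage(video, 0, 0);
  }, false);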


--
Simon Pieters
Opera Software


Re: [whatwg] Timestamp from video source in order to sync (e.g. expose OGG timestamp to javascript)

2010-05-24 Thread Philip Jägenstedt
On Mon, 24 May 2010 08:14:47 +0200, Robert O'Callahan  
rob...@ocallahan.org wrote:


 On Mon, May 24, 2010 at 5:54 PM, Philip Jägenstedt phil...@opera.com wrote:

 So from this I gather that either:

 1. initialTime is always 0

 or

 2. duration is not the duration of resource, but the time at the end.

 I wouldn't say that. If you can seek backwards to before the initial time,
 then clearly 'duration' really is still the duration, you just didn't start
 at the beginning. Same goes even if you can't seek backwards; e.g. this
 live stream is an hour long and you have started 20 minutes into it.


Oh, so the idea is that the earlier data might actually be seekable, it's  
just that the UA seeks to an offset, much like with media fragments? The  
exception might be live streaming, where the duration is +Inf anyway.



 This seems to be what is already in the spec. Instead of guessing what
 everyone means, here's what I'd want:

 1. let currentTime always start at 0, regardless of what the timestamps or
 other metadata of the media resource says.

 2. let currentTime always end at duration.

 3. expose an offset from 0 in startTime or a renamed attribute for cases
 like live streaming so that the client can e.g. sync slides.

 The difference from what the spec says is that the concept of earliest
 possible position is dropped.

 I think the current spec allows you to seek backwards from the starting
 point. So would my proposal. Would yours? Would you allow 'seekable' to
 contain negative times? I think it's slightly simpler to allow currentTime
 to start at a non-zero position than to allow negative times and to support
 the offset in your point 3.

 I also think your point 3 would be slightly harder to spec. I'm not sure
 what you'd say.

 Rob


I don't think the current spec allows you to seek to before the earliest  
possible position, pretty much by definition.


These are the cases I know of where an offset of some kind may be relevant:

* live streaming.

* server-applied media fragments where the offset of the fragment is given  
in a header of the resource.


For live streaming, I'm not sure the current spec has a problem, if
browsers would implement the startTime property. For resources which
themselves claim an offset, I think we should let them start at 0 anyway
and let people who really want a weird timeline fix it themselves.


--
Philip Jägenstedt
Core Developer
Opera Software


Re: [whatwg] Timestamp from video source in order to sync (e.g. expose OGG timestamp to javascript)

2010-05-24 Thread Robert O'Callahan
On Mon, May 24, 2010 at 10:13 PM, Philip Jägenstedt phil...@opera.com wrote:

 Oh, so the idea is that the earlier data might actually be seekable, it's
 just that the UA seeks to an offset, much like with media fragments? The
 exception might be live streaming, where the duration is +Inf anyway.


Yes.

 I don't think the current spec allows you to seek to before the earliest
 possible position, pretty much by definition.

 These are the cases I know of where an offset of some kind may be relevant:

 * live streaming.

 * server-applied media fragments where the offset of the fragment is given
 in a header of the resource.

 For live streaming, I'm not sure the current spec has a problem, if
 browsers would implement the startTime property.


But you just said you want to get rid of startTime regardless of anything
else!

 For resources which themselves claim an offset, I think we should let them
 start at 0 anyway and let people who really want a weird timeline fix it
 themselves.


That means they basically won't work with most players, which won't be
expecting to deal with negative seekable times or the weird timeline.

Rob
-- 
He was pierced for our transgressions, he was crushed for our iniquities;
the punishment that brought us peace was upon him, and by his wounds we are
healed. We all, like sheep, have gone astray, each of us has turned to his
own way; and the LORD has laid on him the iniquity of us all. [Isaiah
53:5-6]


Re: [whatwg] Timestamp from video source in order to sync (e.g. expose OGG timestamp to javascript)

2010-05-24 Thread Philip Jägenstedt
On Mon, 24 May 2010 12:33:56 +0200, Robert O'Callahan  
rob...@ocallahan.org wrote:


 On Mon, May 24, 2010 at 10:13 PM, Philip Jägenstedt phil...@opera.com wrote:

 Oh, so the idea is that the earlier data might actually be seekable, it's
 just that the UA seeks to an offset, much like with media fragments? The
 exception might be live streaming, where the duration is +Inf anyway.

 Yes.

 I don't think the current spec allows you to seek to before the earliest
 possible position, pretty much by definition.

 These are the cases I know of where an offset of some kind may be relevant:

 * live streaming.

 * server-applied media fragments where the offset of the fragment is given
 in a header of the resource.

 For live streaming, I'm not sure the current spec has a problem, if
 browsers would implement the startTime property.

 But you just said you want to get rid of startTime regardless of anything
 else!

 For resources which themselves claim an offset, I think we should let them
 start at 0 anyway and let people who really want a weird timeline fix it
 themselves.

 That means they basically won't work with most players, which won't be
 expecting to deal with negative seekable times or the weird timeline.

 Rob


I think we both agree but aren't understanding each other very well, or
I'm not thinking very clearly. People will write players assuming that
currentTime starts at 0 and ends at duration. If this is not the case they
will break, so an API that violates this assumption, even if only for a few
resources, isn't very nice. That's the case now, where currentTime actually
runs from startTime to startTime+duration (I think). Therefore, I want to
get rid of startTime as it is now (regardless of anything else). I don't
know if any current browser lets startTime be anything but 0.
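
Concretely (a sketch of the kind of author code I mean; progressBar is
just some page element):

  // Typical author code: assumes currentTime runs from 0 to duration.
  var fraction = video.currentTime / video.duration;
  progressBar.style.width = (fraction * 100) + '%';
  // If currentTime actually runs from startTime to startTime + duration,
  // this quietly breaks for the rare resources where startTime != 0.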


Unless I'm missing some detail, that would mean that the current spec  
*does* have a problem even for live streaming, so I must take that back.  
To avoid confusion, it might be best to introduce a new attribute to solve  
the problem rather than re-use startTime.


--
Philip Jägenstedt
Core Developer
Opera Software


Re: [whatwg] Timestamp from video source in order to sync (e.g. expose OGG timestamp to javascript)

2010-05-24 Thread Silvia Pfeiffer
On Mon, May 24, 2010 at 4:14 PM, Robert O'Callahan rob...@ocallahan.org wrote:
 On Mon, May 24, 2010 at 5:54 PM, Philip Jägenstedt phil...@opera.com wrote:

 So from this I gather that either:

 1. initialTime is always 0

 or

 2. duration is not the duration of resource, but the time at the end.

 I wouldn't say that. If you can seek backwards to before the initial time,
 then clearly 'duration' really is still the duration, you just didn't start
 at the beginning. Same goes even if you can't seek backwards; e.g. this
 live stream is an hour long and you have started 20 minutes into it.

 This seems to be what is already in the spec. Instead of guessing what
 everyone means, here's what I'd want:

 1. let currentTime always start at 0, regardless of what the timestamps or
 other metadata of the media resource says.

 2. let currentTime always end at duration.

 3. expose an offset from 0 in startTime or a renamed attribute for cases
 like live streaming so that the client can e.g. sync slides.

 The difference from what the spec says is that the concept of earliest
 possible position is dropped.

 I think the current spec allows you to seek backwards from the starting
 point. So would my proposal. Would yours? Would you allow 'seekable' to
 contain negative times? I think it's slightly simpler to allow currentTime
 to start at a non-zero position than to allow negative times and to support
 the offset in your point 3.

 I also think your point 3 would be slightly harder to spec. I'm not sure
 what you'd say.

 Rob


I am utterly confused now. I think we need a picture. So, let me give
this a shot.

This is the streaming video resource:

(1)------(2)---#---(3)------(4)---------(5)

(1) is when the video started getting transmitted
(2) is where the UA joined in and started playing back from
(3) is up to where the UA has played back
(4) is up to where the UA has data buffered
(5) is when the video will end (which is most probably not known)
Let's further say the video started streaming on 1st January 2010 at 10am.

The video's timeline is:
(1) = 0 sec
(2) = t1 sec with t1 >= 0
(3) = t2 sec with t2 >= t1
(4) = t3 sec with t3 >= t2
(5) = t4 sec with t4 >= t3

I am assuming what is displayed in the video player is exactly this
video's timeline, i.e. t1 at (2), t2 at (3), and t4 at (5). Now, is the
position (1) not visible in the video player? Or is it visible, with
playback simply starting at an offset in the 'controls'? In the latter
case, we can jump back to the beginning in the interface; in the former
case, we can't, except maybe with media fragment URIs. But I quite like
the representation from 0 with an actual playback start of t1.

Here's how I've understood it would work with the attributes:
* currentTime is the video's timeline as described, so since we are at
offset (3), currentTime = t2.
* initialTime = t1, namely the offset at which video playback started.
* dateTime = 2010-01-01T10:00:00.000
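
Under that mapping, syncing slides could look something like this (a
sketch; initialTime and dateTime are the proposed attributes above, not
shipped API, and showSlideFor() is a hypothetical helper):

  var video = document.querySelector('video');
  var streamStart = new Date('2010-01-01T10:00:00.000Z'); // from dateTime
  video.addEventListener('timeupdate', function() {
    // currentTime is already on the resource's own timeline (t2), so the
    // wall-clock position is the stream start plus currentTime:
    showSlideFor(new Date(streamStart.getTime() + video.currentTime * 1000));
  }, false);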


Incidentally, the current concept of startTime has had me utterly
confused. I wonder if it meant that seeking to a time before t1 wasn't
possible. I don't know why such a concept would be necessary unless a
live stream wouldn't be seekable before the current time. But maybe
that is much more easily represented by seekable.


Cheers,
Silvia.


Re: [whatwg] Timestamp from video source in order to sync (e.g. expose OGG timestamp to javascript)

2010-05-24 Thread Robert O'Callahan
On Mon, May 24, 2010 at 11:29 PM, Silvia Pfeiffer silviapfeiff...@gmail.com
 wrote:

 Here's how I've understood it would work with the attributes:
 * currentTime is the video's timeline as described, so since we are at
 offset (3), currentTime = t2.
 * initialTime = t1, namely the offset at which video playback started.
 * dateTime = 2010-01-01T10:00:00.000


That is exactly what I was suggesting.

Rob
-- 
He was pierced for our transgressions, he was crushed for our iniquities;
the punishment that brought us peace was upon him, and by his wounds we are
healed. We all, like sheep, have gone astray, each of us has turned to his
own way; and the LORD has laid on him the iniquity of us all. [Isaiah
53:5-6]


Re: [whatwg] Timestamp from video source in order to sync (e.g. expose OGG timestamp to javascript)

2010-05-24 Thread David Singer
I think it rather important that the format define where you are in time,
precisely so that temporal fragments, or syncing with other material, can work.

For most video-on-demand, the program starts at zero and runs to its duration.
But for 'streaming', knowing 'where you are' in a stream depends on a lot of
things. The 3GPP HTTP streaming solution explicitly anchors the timeline, so
that two players playing the same program at the same point in it will see the
same time, no matter when they tuned in.

On May 18, 2010, at 2:46, Silvia Pfeiffer wrote:

 On Tue, May 18, 2010 at 7:28 PM, Robert O'Callahan rob...@ocallahan.org 
 wrote:
 On Tue, May 18, 2010 at 8:23 PM, Odin Omdal Hørthe odin.om...@gmail.com
 wrote:
 
 Justin Dolske's idea looks rather nice:
 This seems like a somewhat unfortunate thing for the spec, I bet
 everyone's going to get it wrong because it won't be common. :( I can't
 help but wonder if it would be better to have a startTimeOffset property,
 so that .currentTime et al. still have a timeline starting from 0, and if
 you want the real time you'd use .currentTime + .startTimeOffset.

 I'd also suspect we'll want the default video controls to normalize
 everything to 0 (.currentTime - .startTime), since it would be really
 confusing otherwise.
 
 
 That's exactly what I've advocated before. I lost the argument, but I forget
 why, probably because I didn't understand the reasons.
 
 
 To be honest, it doesn't make much sense to display the wrong time
 in a player. If a video stream starts at 10:30am and goes for 30 min,
 then a person joining the stream 10 min in should see a time of 10min
 - or better even 10:40am - which is in sync with what others see that
 joined at the start. It would be rather confusing if the same position
 in a video would be linked by one person as at offset 10min while
 another would say at offset 0min. And since the W3C Media Fragments
 WG is defining temporal addressing, such diverging pointers will even
 end up in a URL and how should that be interpreted then?
 
 Cheers,
 Silvia.

David Singer
Multimedia and Software Standards, Apple Inc.



Re: [whatwg] On the subtitle format for HTML5

2010-05-24 Thread Tab Atkins Jr.
2010/5/23 Silvia Pfeiffer silviapfeiff...@gmail.com:
 I just came across this thread
 http://forum.doom9.org/showthread.php?p=1397067 and found it a most
 interesting read!
 Particularly the comment of jiifurusu.

 It seems the subtitling community is developing a replacement format
 for ASS with advanced features beyond what WebSRT has. Wouldn't that
 show there is a need for an exchange format with advanced features?

Not necessarily. It means that people want certain advanced features.
It doesn't mean that those are necessary, or that the people
developing those advanced features are aware of existing work they can
build on, like the entire web stack.  We can do a lot with a very
simple format that covers all the *necessary* use-cases and can be
easily implemented by simple devices, and then expose extra
functionality via the web stack's technologies like CSS for the more
important devices (that is, anything that can implement the web).

This does presuppose a particular segmentation of device
needs/priorities, but it's a segmentation that I believe makes the
most sense for a modern format, given the reality and increasing
pervasiveness of web-based video.


 That new format seems to try and cater for high-end needs and lower
 end needs. If we have to develop a new non-HTML-like format, wouldn't
 it make sense to coordinate with those guys? In particular if the
 community that we are trying to build upon by reusing SRT is actually
 against extending SRT?

Based on that thread, the main argument that community has against
extending SRT is that it won't be compatible with current authoring
tools.  Their advice appears to be to instead adopt a new format being
created which will also be incompatible with current authoring tools,
though, so I don't know if I can trust their instincts too much.  ^_^

(Not saying anything in particular against ASS or AS6, as I haven't
looked at them in any sort of detail, but they do similarly appear to
be more complex than we want for the same reasons that everything else
has been too complex - they build in things that are potentially
desirable but not necessary, and which can be done through existing
web-stack technology equally well.)

~TJ


Re: [whatwg] Image resize API proposal

2010-05-24 Thread David Levin
Thanks for all the feedback.

We've gotten into a lot of details about this proposal for image resizing
(without hanging the UI), so I'd like to step back to a summary of the
current state:

   1. We've presented several use cases which demonstrate many websites
   which would benefit from this feature. In fact, twice when this has come up,
   there have been web developers who mentioned a need for this. This mirrors
   my experience in which I've talked to at least 4 different teams that would
   use this functionality now if it were available.
   2. We've presented a canvas solution that would utilize workers, but as
   Mike Shaver (
   http://lists.whatwg.org/pipermail/whatwg-whatwg.org/2010-March/025590.html)
   indicated, the different trade-offs among UAs support the idea of a
   specialized API.
   3. We explored 7 possible approaches for a more specialized API and then
   presented the one which seemed best (as well as mentioned the other 6 and
   the reasons that they were not our primary choice).
   4. We've discussed the leading alternate proposal, an optimized canvas
   (plus JS to read the EXIF information and then get the bits out of
   canvas), but there are several issues with this proposal, including:

   - that not all browsers will have an implementation using the GPU that
   allows web sites to use this and not hang the UI
   - that even if it was implemented everywhere, this solution involves
   readback from the GPU which, as Chris mentioned, is generally evil and
   should be avoided at all costs.

At this point, it seems clear that the image resizing scenario is worth
solving due to the number of places that it will benefit, and that the API
presented at the beginning of this thread solves it in a manner that allows
the UA to optimize the operation.

dave

PS The proposed API is narrow (but it is just one API so perhaps that is
good). In essence, that was part of the intent (following the guidance of
the specialized API). For example, this API doesn't allow for getting an
image for a PDF or Word document. Also, the API makes it hard to pull the
thumbnail right out of the JPEG when applicable (and this is a really nice
optimization to avoid doing lots of unnecessary slow I/O). If folks have
ideas about how to fix this, it would be interesting to hear.


[whatwg] Questions about the progress element

2010-05-24 Thread Mounir Lamouri
Hi,

I'm wondering why the value and max IDL attributes have to reflect the
content attributes with zero as a default value instead of reflecting the
internal values used to calculate the position. Wouldn't it be easier to
know the internal value and max values by using the IDL attributes?

In addition, couldn't the position IDL attribute return a double
instead of a float? As we are dividing two floats, the precision may be
helpful.
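
To illustrate the discrepancy (a sketch, assuming the reflection behavior
described above rather than any particular implementation):

  var p = document.createElement('progress');
  p.setAttribute('value', '0.25');  // no max content attribute
  // With plain content-attribute reflection and a zero default:
  p.max;       // 0, even though the position is computed against an
               // internal default maximum of 1.0
  p.position;  // 0.25, i.e. current value / internal maximum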

Thanks,
--
Mounir


Re: [whatwg] Image resize API proposal

2010-05-24 Thread Aryeh Gregor
On Mon, May 24, 2010 at 1:21 PM, David Levin le...@google.com wrote:
 We've discussed the leading alternate proposal optimized canvas (plus js to
 read the exif information) and then get the bits out of canvas, but there
 are several issues with this proposal including

 that not all browsers will have an implementation using the gpu that allows
 web sites to use this and not hang the UI

This is a nonissue.  There's no point in speccing one feature to work
around the fact that browsers haven't implemented another -- it makes
more sense to just get the browsers to implement the latter feature,
making the former moot.  Browsers look like they're moving toward GPU
acceleration for everything now, and that has many more benefits, so
we should assume that by the time they'd implement this API, they'll
already be GPU-accelerated.

 that even if it was implemented everywhere, this solution involves readback
 from the GPU which, as Chris mentioned, is generally evil and should be
 avoided at all costs.

This I'm not qualified to comment on, though.  To the best of my
knowledge, GPUs are magical boxes that make things go faster via pixie
dust.  ;)


[whatwg] Installable web apps

2010-05-24 Thread Aaron Boodman
This has come up before, but since Google has officially announced the
project at IO, and Mozilla has voiced interest in the idea on their
blog, I felt like it might be good to revisit.

Google would like to make today's web apps installable in Chrome.
From a user's point of view, installing a web app would:

- Give it a permanent access point in the browser with a big juicy icon
- Allow the browser to treat a web app as a conceptual unit (eg give
it special presentation, show how much storage it uses)
- Add some light integration with the OS
- (optionally) Pre-grant some permissions that would otherwise have to
be requested one-at-a-time (eg geolocation, notifications)
- (optionally) Grant access to some APIs that would otherwise be
inaccessible (eg system clipboard, permanent storage)

There is some more background on our thinking at these two URLs:

http://code.google.com/chrome/apps/
http://code.google.com/chrome/apps/docs
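
For concreteness, a manifest in the spirit of those docs might look
roughly like this (an illustrative sketch only, not the normative format;
see the docs above for the real thing):

  {
    "name": "Example App",
    "version": "1",
    "app": {
      "launch": { "web_url": "http://app.example.com/" }
    },
    "icons": { "128": "icon_128.png" },
    "permissions": ["geolocation", "notifications"]
  }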

We have started implementing this using Chrome's current extension
system. However, we'd like it if installation could eventually work in
other browsers. Is there any interest from other vendors in
collaborating on the design of such a system?

Thanks,

- a


Re: [whatwg] Image resize API proposal

2010-05-24 Thread David Levin
On Mon, May 24, 2010 at 1:40 PM, Aryeh Gregor simetrical+...@gmail.com wrote:

 On Mon, May 24, 2010 at 1:21 PM, David Levin le...@google.com wrote:
  We've discussed the leading alternate proposal optimized canvas (plus js
 to
  read the exif information) and then get the bits out of canvas, but there
  are several issues with this proposal including
 
  that not all browsers will have an implementation using the gpu that
 allows
  web sites to use this and not hang the UI

 This is a nonissue.  There's no point in speccing one feature to work
 around the fact that browsers haven't implemented another -- it makes
 more sense to just get the browsers to implement the latter feature,
 making the former moot.  Browsers look like they're moving toward GPU
 acceleration for everything now, and that has many more benefits, so
 we should assume that by the time they'd implement this API, they'll
 already be GPU-accelerated.

  that even if it was implemented everywhere, this solution involves
 readback
  from the GPU which, as Chris mentioned, is generally evil and should be
  avoided at all costs.

 This I'm not qualified to comment on, though.  To the best of my
 knowledge, GPUs are magical boxes that make things go faster via pixie
 dust.  ;)


Thanks for your opinion. :)

Chris is qualified, as are other people I've spoken to who have said the
same thing, so using the GPU is not pixie dust in this particular scenario,
even though folks would like to believe it is.


Re: [whatwg] Installable web apps

2010-05-24 Thread Dion Almaer
I think that unifying as much as possible would be a win. We could either:

a) each browser has their own formats, and someone generates tools to spit
them all out
b) come to some agreement on at least a base set (with vendor goodies added)

As a developer, I want to create one app and send that one app to multiple
web stores and have it work in multiple browser clients.

I have been working on taking the Chrome format from
http://code.google.com/chrome/apps/docs/developers_guide.html and showing it
with the other formats out there. Some are webby, some are mobiley, but
interesting to view:

http://developer.android.com/guide/topics/manifest/manifest-intro.html
http://developer.palm.com/index.php?option=com_content&view=article&id=1748&Itemid=43
http://library.forum.nokia.com/index.jsp?topic=/Web_Developers_Library/GUID-BBA0299B-81B6-4508-8D5B-5627206CBF7B.html
http://www.w3.org/TR/widgets/#configuration-document0

There are many differences in the goals of these formats, but there are some
large overlapping areas too.

I have been writing about some thoughts on app stores too (
http://almaer.com/blog/what-should-the-future-of-web-app-stores-be,
http://almaer.com/blog/chrome-web-store-the-opportunity-for-breaking-out-of-silos-with-experience
).

I am sure people may have strong issues around permissions and the like. We
have been spending time on permissions ourselves and it would be nice to
have more options than full up-front permissions in a string array.

Excited to watch this play out.

Cheers,

Dion


On Mon, May 24, 2010 at 1:45 PM, Aaron Boodman a...@google.com wrote:

 This has come up before, but since Google has officially announced the
 project at IO, and Mozilla has voiced interest in the idea on their
 blog, I felt like it might be good to revisit.

 Google would like to make today's web apps installable in Chrome.
 From a user's point of view, installing a web app would:

 - Give it a permanent access point in the browser with a big juicy icon
 - Allow the browser to treat a web app as a conceptual unit (eg give
 it special presentation, show how much storage it uses)
 - Add some light integration with the OS
 - (optionally) Pre-grant some permissions that would otherwise have to
 be requested one-at-a-time (eg geolocation, notifications)
 - (optionally) Grant access to some APIs that would otherwise be
 inaccessible (eg system clipboard, permanent storage)

 There is some more background on our thinking at these two URLs:

 http://code.google.com/chrome/apps/
 http://code.google.com/chrome/apps/docs

 We have started implementing this using Chrome's current extension
 system. However, we'd like it if installation could eventually work in
 other browsers. Is there any interest from other vendors in
 collaborating on the design of such a system?

 Thanks,

 - a



Re: [whatwg] On the subtitle format for HTML5

2010-05-24 Thread Silvia Pfeiffer
2010/5/25 Tab Atkins Jr. jackalm...@gmail.com:
 2010/5/23 Silvia Pfeiffer silviapfeiff...@gmail.com:
 I just came across this thread
 http://forum.doom9.org/showthread.php?p=1397067 and found it a most
 interesting read!
 Particularly the comment of jiifurusu .

 It seems the subtitling community is developing a replacement format
 for ASS with advanced features beyond what WebSRT has. Wouldn't that
 show there is a need for an exchange format with advanced features?

 Not necessarily.  It means that people want certain advanced features.
  It doesn't mean that those are necessary, or that the people
 developing those advanced features are aware of existing work they can
 build on, like the entire web stack.  We can do a lot with a very
 simple format that covers all the *necessary* use-cases and can be
 easily implemented by simple devices, and then expose extra
 functionality via the web stack's technologies like CSS for the more
 important devices (that is, anything that can implement the web).

 This does presuppose a particular segmentation of device
 needs/priorities, but it's a segmentation that I believe makes the
 most sense for a modern format, given the reality and increasing
 pervasiveness of web-based video.


 That new format seems to try and cater for high-end needs and lower
 end needs. If we have to develop a new non-HTML-like format, wouldn't
 it make sense to coordinate with those guys? In particular if the
 community that we are trying to build upon by reusing SRT is actually
 against extending SRT?

 Based on that thread, the main argument that community has against
 extending SRT is that it won't be compatible with current authoring
 tools.  Their advice appears to be to instead adopt a new format being
 created which will also be incompatible with current authoring tools,
 though, so I don't know if I can trust their instincts too much.  ^_^

 (Not saying anything in particular against ASS or AS6, as I haven't
 looked at them in any sort of detail, but they do similarly appear to
 be more complex than we want for the same reasons that everything else
 has been too complex - they build in things that are potentially
 desirable but not necessary, and which can be done through existing
 web-stack technology equally well.)

The complexity argument cuts both ways: the subtitling community will
similarly argue that if we require all of HTML as a format for high-end
subtitling, we are bringing too much complexity into the subtitling
world.

I think we have to be careful not to make a short-sighted decision
right now based on what we think is the 80% use case, which in the future
may turn into more of a 40% use case as the high-end features -
things such as animations in subtitles, SVG images in subtitles,
hyperlinks in subtitles, transparent overlay images in subtitles, etc.
- become much more common, because the world's technology has moved
on and subtitles are much more common. I look in particular at what is
already possible on YouTube with subtitle-like technology such as
annotations and even the overlay ads. I know that much of this is not
for accessibility, but why would we not think beyond accessibility for
something as important as timed text for video?

Cheers,
Silvia.


[whatwg] A standard for adaptive HTTP streaming for media resources

2010-05-24 Thread Silvia Pfeiffer
Hi all,

I would like to raise an issue that has come up multiple times before,
but hasn't ever really been addressed properly.

We've in the past talked about how there is a need to adapt the
bitrate version of an audio or video resource that is being delivered
to a user agent based on the available bandwidth on the network, the
available CPU cycles, and possibly other conditions.

It has been discussed to do this using @media queries and providing
links to alternative versions of a media resource through the
source element inside it. But this is a very inflexible solution,
since the side conditions for choosing a bitrate version may change
over time and what is good at the beginning of video playback may not
be good 2 minutes later (in particular if you're on a mobile device
driving through town).

Further, we have discussed the need for supporting a live streaming
approach such as RTP/RTSP - but RTP/RTSP has its own non-Web issues
that will make it difficult to make it part of a Web application
framework - in particular it requires a custom server and won't just
work with an HTTP server.

In recent times, vendors have indeed started moving away from custom
protocols and custom servers and have moved towards more intelligence
in the UA and special approaches to streaming over HTTP.

Microsoft developed Smooth Streaming [1], Apple developed HTTP Live
Streaming [2] and Adobe recently launched HTTP Dynamic Streaming
[3]. (Also see a comparison at [4].) Just as these vendors are working on
it for MPEG files, so are some people for Ogg. I'm not aware of anyone
looking at it for WebM yet.

Standards bodies haven't held back either. The 3GPP organisation have
defined 3GPP adaptive HTTP Streaming (AHS) in their March 2010 release
9 of  3GPP [5]. Now, MPEG has started consolidating approaches for
adaptive bitrate streaming over HTTP for MPEG file formats [6].

Adaptive bitrate streaming over HTTP is the correct approach towards
solving the double issues of adapting to dynamic bandwidth
availability, and of providing a live streaming approach that is
reliable.

Right now, no standard exists that has been proven to work in a
format-independent way. This is particularly an issue for HTML5, where
we want at least support for MPEG4, Ogg Theora/Vorbis, and WebM.

I know that it is not difficult to solve this issue in a
format-independent way, which is why solutions are jumping up
everywhere. They are, however, not compatible and create a messy
environment where people have to install solutions for multiple
different approaches to make sure they are covered for different
platforms, different devices, and different formats. It's a clear
situation where a new standard is necessary.

The standard basically needs to provide three different things:
* authoring of content in a specific way
* description of the alternative files on the server and their
features for the UA to download and use for switching (see the sketch below)
* a means to easily switch mid-way between these alternative files
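
For the middle bullet, purely as an illustration of the kind of
description meant (this is not an existing format, just a sketch, shown
as a JavaScript object):

  var manifest = {
    duration: 3600,                // seconds; Infinity for live
    segmentDuration: 10,           // seconds per switchable chunk
    alternatives: [
      { url: 'talk_200k.webm', type: 'video/webm', bitrate: 200000 },
      { url: 'talk_700k.webm', type: 'video/webm', bitrate: 700000 },
      { url: 'talk_1500k.ogv', type: 'video/ogg', bitrate: 1500000 }
    ]
  };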

I am personally not sure which is the right forum to create the new
standard in, but I know that we have a need for it in HTML5.

Would it be possible / the right way to start something like this as
part of the Web applications work at WHATWG?
(Incidentally, I've brought this up in W3C before and not got any
replies, so I'm not sure W3C would be a better place for this work.
Maybe IETF? But then, why not here...)

What do people think?

Cheers,
Silvia.


[1] http://www.iis.net/download/SmoothStreaming
[2] http://tools.ietf.org/html/draft-pantos-http-live-streaming
[3] 
http://www.adobe.com/devnet/flashmediaserver/articles/dynstream_on_demand.html
[4] http://learn.iis.net/page.aspx/792/adaptive-streaming-comparison
[5] 
https://labs.ericsson.com/apis/streaming-media/documentation/3gpp-adaptive-http-streaming-ahs
[6] 
http://multimediacommunication.blogspot.com/2010/05/http-streaming-of-mpeg-media.html


Re: [whatwg] A standard for adaptive HTTP streaming for media resources

2010-05-24 Thread Chris Holland




 * authoring of content in a specific way
 * description of the alternative files on the server and their
 features for the UA to download and use for switching
 * a means to easily switch mid-way between these alternative files


I don't have something decent to offer for the first and last bullets,
but I'd like to throw in something for the middle bullet:


The HTTP protocol is vastly underutilized today when it comes to URIs
and the various Accept* headers.


Today developers might embed an image in a document as chris.png. Web
daemons know to find that resource and serve it; in this sense,
chris.png is a resource locator.


Technically one might reference the image as a resource identifier
named chris. The user's browser may send image/gif as the only
value of an Accept header, signaling the following to the server: "I'm
supposed to download an image of chris here, but I only support gif,
so don't bother sending me a .png". In a perhaps more useful scenario
the user agent may tell the server "don't bother sending me an image,
I'm a screen reader, do you have anything my user could listen to?".
In this sense, the document's author didn't have to code against or
account for every possible context out there; the author merely puts
a reference to a higher-level representation that should remain
forward-compatible with evolving servers and user-agents.


By passing a list of accepted MIME types, the Accept HTTP header
provides this ability to serve context-aware resources, which starts
to feel like a contender for catering to your middle bullet.


To that end, new MIME types could be defined to encapsulate media
type/bitrate combinations.


Or the Accept header might remain confined to media types, and
acceptable bitrate information might get encapsulated into a new
header, such as X-Accept-Bitrate.
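
As a sketch of how a client might signal this (X-Accept-Bitrate being the
hypothetical header just proposed, not an existing one):

  var xhr = new XMLHttpRequest();
  xhr.open('GET', '/media/chris', true);
  xhr.setRequestHeader('Accept', 'video/webm, video/ogg');
  // Hypothetical header from the proposal above, in bits per second:
  xhr.setRequestHeader('X-Accept-Bitrate', '500000');
  xhr.onreadystatechange = function() {
    if (xhr.readyState === 4 && xhr.status === 200) {
      // Hand the negotiated representation to the player.
    }
  };
  xhr.send();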


If you combined the above approach with existing standards for HTTP
byte range requests, there may be a mechanism there to cater to your
3rd bullet as well: when network conditions deteriorate, the client
could interrupt the current stream and issue a new request to the
server starting where it left off. Although this likely wouldn't work,
because a byte range request would mean nothing across files of two
different sizes; for playback media, time codes would be needed to
define the range.


-chris

