Re: [whatwg] video/audio feedback

2009-05-14 Thread Jonas Sicking
On Tue, May 12, 2009 at 9:29 PM, David Singer sin...@apple.com wrote:
 At 12:09  +1000 13/05/09, Silvia Pfeiffer wrote:

 On Wed, May 13, 2009 at 5:01 AM, Jonas Sicking jo...@sicking.cc wrote:

  On Sun, May 10, 2009 at 6:56 PM, David Singer sin...@apple.com wrote:

  At 14:09  +1000 9/05/09, Silvia Pfeiffer wrote:

  Of course none of the
  discussion will inherently disallow seeking - scripts will always be
  able to do the seeking. But the user may not find it easy to do
  seeking to a section that is not accessible through the displayed
  timeline, which can be both a good and a bad thing.

  How easy a particular user interface is to use for various tasks is (I hope)
  not our worry...

  I'm not sure I agree. If the spec provides a feature set that no one
  is able to create a useful UI for, then there definitely might be a
  problem with the spec.

  I still have not received any comments on my previous assertion that
  there are essentially two separate use cases here. One for bringing
  attention to a specific point in a larger context, one for showing
  only a smaller range of a video.

 Just to confirm: yes, there are two separate use cases. (I was under
 the impression that the discussion had brought that out).

 Yes, that's fine.  I think it's clear that we could have a 'verb' in the
 fragment ('focus-on', 'select', etc.) to indicate that.  I think it's also
 clear that no matter what verb is used, the entire resource is 'available'
 to the UA, that scripts can (if they wish) navigate anywhere in the entire
 resource, and that UAs can optimize the interface for the given verb, but
 the interface can still permit access to the entire resource.

Personally I'm pretty un-opinionated on these details.

Whether setting a range in the fragment results in .currentTime
spanning from 0 to length-of-range, or from start-of-range to
end-of-range, seems like only a question of which API is the most
author-friendly. Script can always remove the range-fragment if it
wants to use a .currentTime outside of the range.
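Jonas's point about scripts can be sketched outside the browser; this is a hypothetical illustration only (the helper name is invented, and stripping the fragment from .src before reloading is one way an author might do it):

```javascript
// Hypothetical helper illustrating "script can always remove the
// range-fragment": strip a media fragment (e.g. "#t=10,20" or
// "#time=10s-20s") from a URL so .currentTime can address the full resource.
function stripMediaFragment(url) {
  const hash = url.indexOf("#");
  return hash === -1 ? url : url.slice(0, hash);
}

// In a browser one might then do (shown for context, not runnable here):
//   video.src = stripMediaFragment(video.src);
//   video.load();
//   video.currentTime = 5; // outside the original 10s-20s range
```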

The only argument that I can think of either way is that it might be
hard to create a decent UI for the situation when a range is
specified, but .currentTime is set to outside that range using script.

But again, I don't have much of an opinion.

/ Jonas


Re: [whatwg] video/audio feedback

2009-05-12 Thread Jonas Sicking
On Sun, May 10, 2009 at 6:56 PM, David Singer sin...@apple.com wrote:
 At 14:09  +1000 9/05/09, Silvia Pfeiffer wrote:
  Of course none of the
 discussion will inherently disallow seeking - scripts will always be
 able to do the seeking. But the user may not find it easy to do
 seeking to a section that is not accessible through the displayed
 timeline, which can be both a good and a bad thing.

 How easy a particular user interface is to use for various tasks is (I hope)
 not our worry...

I'm not sure I agree. If the spec provides a feature set that no one
is able to create a useful UI for, then there definitely might be a
problem with the spec.

I still have not received any comments on my previous assertion that
there are essentially two separate use cases here. One for bringing
attention to a specific point in a larger context, one for showing
only a smaller range of a video.

I do think both of these can be addressed using fragment identifiers,
but I do think we should treat them separately.

The fact that one of the bigger video sites today, YouTube, has
support for displaying a given point in a larger context lends at
least some credibility to the idea that people actually want to do this.

/ Jonas


Re: [whatwg] video/audio feedback

2009-05-12 Thread Silvia Pfeiffer
On Wed, May 13, 2009 at 5:01 AM, Jonas Sicking jo...@sicking.cc wrote:
 On Sun, May 10, 2009 at 6:56 PM, David Singer sin...@apple.com wrote:
 At 14:09  +1000 9/05/09, Silvia Pfeiffer wrote:
  Of course none of the
 discussion will inherently disallow seeking - scripts will always be
 able to do the seeking. But the user may not find it easy to do
 seeking to a section that is not accessible through the displayed
 timeline, which can be both a good and a bad thing.

 How easy a particular user interface is to use for various tasks is (I hope)
 not our worry...

 I'm not sure I agree. If the spec provides a feature set that no one
 is able to create a useful UI for, then there definitely might be a
 problem with the spec.

 I still have not received any comments on my previous assertion that
 there are essentially two separate use cases here. One for bringing
 attention to a specific point in a larger context, one for showing
 only a smaller range of a video.

Just to confirm: yes, there are two separate use cases. (I was under
the impression that the discussion had brought that out).

The question is how to distinguish them and which component should
make the choice.


 I do think both of these can be addressed using fragment identifiers,
 but I do think we should treat them separately.

They could be addressed using fragment identifiers, but then it's up
to the UA to decide when to display what, and different UA
implementations will become inconsistent.

The challenge is to find a means of treating them separately in a
predefined way.

Thus the idea of using the 'fragment' for attention (in YouTube style)
and the 'query' for focus on the now-restricted new resource.

We're having that discussion now in the media fragment WG.

Cheers,
Silvia.

 The fact that one of the bigger video sites today, YouTube, has
 support for displaying a given point in a larger context lends at
 least some credibility to the idea that people actually want to do this.

 / Jonas



Re: [whatwg] video/audio feedback

2009-05-12 Thread David Singer

At 12:09  +1000 13/05/09, Silvia Pfeiffer wrote:

On Wed, May 13, 2009 at 5:01 AM, Jonas Sicking jo...@sicking.cc wrote:

 On Sun, May 10, 2009 at 6:56 PM, David Singer sin...@apple.com wrote:

 At 14:09  +1000 9/05/09, Silvia Pfeiffer wrote:

  Of course none of the
 discussion will inherently disallow seeking - scripts will always be
 able to do the seeking. But the user may not find it easy to do
 seeking to a section that is not accessible through the displayed
 timeline, which can be both a good and a bad thing.


 How easy a particular user interface is to use for various tasks
 is (I hope) not our worry...


 I'm not sure I agree. If the spec provides a feature set that no one
 is able to create a useful UI for, then there definitely might be a
 problem with the spec.

 I still have not received any comments on my previous assertion that
 there are essentially two separate use cases here. One for bringing
 attention to a specific point in a larger context, one for showing
 only a smaller range of a video.


Just to confirm: yes, there are two separate use cases. (I was under
the impression that the discussion had brought that out).


Yes, that's fine.  I think it's clear that we could have a 'verb' in
the fragment ('focus-on', 'select', etc.) to indicate that.  I think
it's also clear that no matter what verb is used, the entire resource
is 'available' to the UA, that scripts can (if they wish) navigate
anywhere in the entire resource, and that UAs can optimize the
interface for the given verb, but the interface can still permit
access to the entire resource.

--
David Singer
Multimedia Standards, Apple Inc.


Re: [whatwg] video/audio feedback

2009-05-10 Thread David Singer

At 14:09  +1000 9/05/09, Silvia Pfeiffer wrote:
  you might try loading, say, the one-page version of the HTML5 spec.
 from the WhatWG site...it takes quite a while.  Happily Ian also
 provides a multi-page, but this is not always the case.


That just confirms the problem and it's obviously worse with video. :-)



 The reason I want clarity is that this has ramifications.  For example, if a
 UA is asked to play a video with a fragment indication #time=10s-20s, and
 then a script seeks to 5s, does the user see the video at the 5s point of
 the total resource, or 15s?  I think it has to be 5s.


I agree, it has to be 5s. The discussion was about what timeline is
displayed and what can the user easily access through seeking through
the displayed timeline. A script can access any time of course. But a
user is restricted by what the user interface offers.


Sure.  I think we are probably in agreement.  Logically, the UA is 
dealing with the whole resource -- which is why it's 5s in this case. 
The UA is also responsible for focusing the user on the fragment, and 
(implicitly) for optimizing the network for what the user is focusing 
on.


For example, some UAs would essentially invoke the same code if the
user immediately did a seek to a time, if the javascript did a seek
to a time, or if the initial URI had a fragment indicator starting at
a time.  In all three cases, the UA tries to start at that time as
best it can, optimizing network access to do that.



  But we can optimize for the fragment without disallowing the seeking.

What do you mean by optimize for the fragment?


I mean, the UA can get support from the server for time-based access,
helping to optimize the network access for the fragment to be
presented, while at the same time allowing seeking outside that
fragment.



 Of course none of the
discussion will inherently disallow seeking - scripts will always be
able to do the seeking. But the user may not find it easy to do
seeking to a section that is not accessible through the displayed
timeline, which can be both a good and a bad thing.


How easy a particular user interface is to use for various tasks is 
(I hope) not our worry...

--
David Singer
Multimedia Standards, Apple Inc.


Re: [whatwg] video/audio feedback

2009-05-08 Thread Silvia Pfeiffer
On Fri, May 8, 2009 at 9:43 AM, David Singer sin...@apple.com wrote:
 At 8:45  +1000 8/05/09, Silvia Pfeiffer wrote:

 On Fri, May 8, 2009 at 5:04 AM, David Singer sin...@apple.com wrote:

   At 8:39  +0200 5/05/09, Křištof Želechovski wrote:

  If the author wants to show only a sample of a resource and not the
 full
  resource, I think she does it on purpose.  It is not clear why it is
 vital
  for the viewer to have an _obvious_ way to view the whole resource
  instead;
  if it were the case, the author would provide for this.
  IMHO,
  Chris

  It depends critically on what you think the semantics of the fragment
 are.
  In HTML (the best analogy I can think of), the web page is not trimmed
 or
  edited in any way -- you are merely directed to one section of it.

 There are critical differences between HTML and video, such that this
 analogy has never worked well.

 could you elaborate?

At the risk of repeating myself ...

HTML is text and therefore whether you download a snippet only or the
full page and then do an offset does not make much of a difference.
Even for a long page.

In contrast, downloading a snippet of video compared to the full video
will make a huge difference, in particular for long-form video.

So, the difference is that in HTML the user agent will always have the
context available within its download buffer, while for video this may
not be the case.

This admittedly technical difference also has an influence on the user
interface.

If you have all the context available in the user agent, it is easy to
just grab a scroll-bar and jump around in the full content manually to
look for things. This is not possible in the video case without many
further download actions, which will each incur a network delay. This
difference opens the door to giving user agents a choice in display:
either provide the full context, or just the fragment focus.

Thus, while comparing media fragments to HTML fragments is a simple
way to introduce the concept - and I use it, too, to explain to my
less technical peers - it doesn't really help for detailed
specifications.

Regards,
Silvia.


Re: [whatwg] video/audio feedback

2009-05-08 Thread David Singer

At 23:46  +1000 8/05/09, Silvia Pfeiffer wrote:

On Fri, May 8, 2009 at 9:43 AM, David Singer sin...@apple.com wrote:

 At 8:45  +1000 8/05/09, Silvia Pfeiffer wrote:


 On Fri, May 8, 2009 at 5:04 AM, David Singer sin...@apple.com wrote:


   At 8:39  +0200 5/05/09, Křištof Želechovski wrote:


  If the author wants to show only a sample of a resource and not the
 full
  resource, I think she does it on purpose.  It is not clear why it is
 vital
  for the viewer to have an _obvious_ way to view the whole resource
  instead;
  if it were the case, the author would provide for this.
  IMHO,
  Chris


  It depends critically on what you think the semantics of the fragment
 are.
  In HTML (the best analogy I can think of), the web page is not trimmed
 or
  edited in any way -- you are merely directed to one section of it.


 There are critical differences between HTML and video, such that this
 analogy has never worked well.


 could you elaborate?


At the risk of repeating myself ...

HTML is text and therefore whether you download a snippet only or the
full page and then do an offset does not make much of a difference.
Even for a long page.


you might try loading, say, the one-page version of the HTML5 spec.
from the WhatWG site...it takes quite a while.  Happily Ian also
provides a multi-page, but this is not always the case.




In contrast, downloading a snippet of video compared to the full video
will make a huge difference, in particular for long-form video.


there are short and long pages and videos.

But we're talking about a point of principle here, which should be
informed by the practical, for sure, but not dominated by it.


The reason I want clarity is that this has ramifications.  For
example, if a UA is asked to play a video with a fragment indication
#time=10s-20s, and then a script seeks to 5s, does the user see the
video at the 5s point of the total resource, or 15s?  I think it has
to be 5s.




So, the difference is that in HTML the user agent will always have the
context available within its download buffer, while for video this may
not be the case.


I'm sorry, I am lost.  We could quite easily extend HTTP to allow for
anchor-based retrieval of HTML (i.e. convert a 'please start at
anchor X' into a pair of byte-range responses, for the global
material, and then the document from that anchor onwards).




This admittedly technical difference also has an influence on the user
interface.

If you have all the context available in the user agent, it is easy to
just grab a scroll-bar and jump around in the full content manually to
look for things. This is not possible in the video case without many
further download actions, which will each incur a network delay. This
difference opens the door to enable user agents with a choice in
display to either provide the full context, or just the fragment
focus.


But we can optimize for the fragment without disallowing the seeking.


--
David Singer
Multimedia Standards, Apple Inc.


Re: [whatwg] video/audio feedback

2009-05-08 Thread Silvia Pfeiffer
On Sat, May 9, 2009 at 2:25 AM, David Singer sin...@apple.com wrote:
 At 23:46  +1000 8/05/09, Silvia Pfeiffer wrote:

 On Fri, May 8, 2009 at 9:43 AM, David Singer sin...@apple.com wrote:

  At 8:45  +1000 8/05/09, Silvia Pfeiffer wrote:

  On Fri, May 8, 2009 at 5:04 AM, David Singer sin...@apple.com wrote:

   At 8:39  +0200 5/05/09, Křištof Želechovski wrote:

  If the author wants to show only a sample of a resource and not the
  full
  resource, I think she does it on purpose.  It is not clear why it is
  vital
  for the viewer to have an _obvious_ way to view the whole resource
  instead;
  if it were the case, the author would provide for this.
  IMHO,
  Chris

  It depends critically on what you think the semantics of the fragment
  are.
  In HTML (the best analogy I can think of), the web page is not trimmed
  or
  edited in any way -- you are merely directed to one section of it.

  There are critical differences between HTML and video, such that this
  analogy has never worked well.

  could you elaborate?

 At the risk of repeating myself ...

 HTML is text and therefore whether you download a snippet only or the
 full page and then do an offset does not make much of a difference.
 Even for a long page.

 you might try loading, say, the one-page version of the HTML5 spec. from the
 WhatWG site...it takes quite a while.  Happily Ian also provides a
 multi-page, but this is not always the case.

That just confirms the problem and it's obviously worse with video. :-)


 The reason I want clarity is that this has ramifications.  For example, if a
 UA is asked to play a video with a fragment indication #time=10s-20s, and
 then a script seeks to 5s, does the user see the video at the 5s point of
 the total resource, or 15s?  I think it has to be 5s.

I agree, it has to be 5s. The discussion was about what timeline is
displayed and what can the user easily access through seeking through
the displayed timeline. A script can access any time of course. But a
user is restricted by what the user interface offers.


 So, the difference is that in HTML the user agent will always have the
 context available within its download buffer, while for video this may
 not be the case.

 I'm sorry, I am lost.  We could quite easily extend HTTP to allow for
 anchor-based retrieval of HTML (i.e. convert a 'please start at anchor X'
 into a pair of byte-range responses, for the global material, and then the
 document from that anchor onwards).

Yes, but that's not the way it currently works and it is not a
proposal currently under discussion.


 This admittedly technical difference also has an influence on the user
 interface.

 If you have all the context available in the user agent, it is easy to
 just grab a scroll-bar and jump around in the full content manually to
 look for things. This is not possible in the video case without many
 further download actions, which will each incur a network delay. This
 difference opens the door to enable user agents with a choice in
 display to either provide the full context, or just the fragment
 focus.

 But we can optimize for the fragment without disallowing the seeking.

What do you mean by optimize for the fragment? Of course none of the
discussion will inherently disallow seeking - scripts will always be
able to do the seeking. But the user may not find it easy to do
seeking to a section that is not accessible through the displayed
timeline, which can be both a good and a bad thing.


Cheers,
Silvia.


Re: [whatwg] video/audio feedback

2009-05-05 Thread Křištof Želechovski
If the author wants to show only a sample of a resource and not the full
resource, I think she does it on purpose.  It is not clear why it is vital
for the viewer to have an _obvious_ way to view the whole resource instead;
if it were the case, the author would provide for this.
IMHO,
Chris




Re: [whatwg] video/audio feedback

2009-05-04 Thread Jonas Sicking
On Thu, Apr 30, 2009 at 6:42 PM, Silvia Pfeiffer
silviapfeiff...@gmail.com wrote:
 On Fri, May 1, 2009 at 2:25 AM, David Singer sin...@apple.com wrote:
 At 23:15  +1000 30/04/09, Silvia Pfeiffer wrote:

   On Thu, 30 Apr 2009, Silvia Pfeiffer wrote:

   On Wed, 8 Apr 2009, Silvia Pfeiffer wrote:
  
   Note that in the Media Fragment working group even the specification
    of "http://www.example.com/t.mov#time=10s-20s" may mean that only
 the
   requested 10s clip is delivered, especially if all the involved
   instances in the exchange understand media fragment URIs.
  
   That doesn't seem possible since fragments aren't sent to the server.

  The current WD of the Media Fragments WG
  http://www.w3.org/2008/WebVideo/Fragments/WD-media-fragments-reqs/
  specifies that a URL that looks like this
  http://www.w3.org/2008/WebVideo/Fragments/media/fragf2f.mp4#t=12,21
  is to be resolved on the server through the following basic process:

   1. UA chops off the fragment and turns it into an HTTP GET request with
   a newly introduced time range header
  e.g.
  GET /2008/WebVideo/Fragments/media/fragf2f.mp4 HTTP/1.1
  Host: www.w3.org
  Accept: video/*
  Range: seconds=12-21

  2. The server slices the multimedia resource by mapping the seconds to
  bytes and extracting a playable resource (potentially fixing container
  headers). The server will then reply with the closest inclusive range
  in a 206 HTTP response:
  e.g.
  HTTP/1.1 206 Partial Content
  Accept-Ranges: bytes, seconds
  Content-Length: 3571437
  Content-Type: video/mp4
  Content-Range: seconds 11.85-21.16

  That seems quite reasonable, assuming the UA is allowed to seek to other
  parts of the video also.
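The two-step resolution quoted above can be sketched as a small function; this is an illustration of the draft mechanism only — the "Range: seconds=..." header syntax comes from the WD quoted in the message, and the function name is invented:

```javascript
// Sketch of step 1 from the Media Fragments draft discussed above:
// turn a media-fragment URI into the request line and time-range header.
// Function name is invented; the header syntax follows the quoted WD.
function fragmentToRangeRequest(uri) {
  const url = new URL(uri);
  const match = url.hash.match(/^#t=([\d.]+),([\d.]+)$/);
  if (!match) return null; // no temporal fragment to translate
  return {
    requestLine: `GET ${url.pathname} HTTP/1.1`,
    headers: {
      Host: url.host,
      Accept: "video/*",
      Range: `seconds=${match[1]}-${match[2]}`,
    },
  };
}
```

Step 2 (the server mapping seconds to bytes and answering 206) is shown in the quoted exchange itself.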


   On Thu, 9 Apr 2009, Jonas Sicking wrote:
  
   If we look at how fragment identifiers work in web pages today, a
   link such as
  
   http://example.com/page.html#target
  
   this displays the 'target' part of the page, but lets the user
 scroll
   to anywhere in the resource. This feels to me like it maps fairly
   well to
  
   http://example.com/video.ogg#t=5s
  
   displaying the selected frame, but displaying a timeline for the
 full
   video and allowing the user to directly go to any position.
  
   Agreed. This is how the spec works now.

  This is also how we did it with Ogg and temporal URIs, but this is not
  the way in which the standard for media fragment URIs will work.

  It sounds like it is. I don't understand the difference.

 Because media fragment URIs will not deliver the full resource like an
 HTML page does, but will instead only provide the segment that is
 specified by the temporal region.
 http://example.com/video.ogg#t=5s  only retrieves the video from 5s to
 the end, not from start to end.

 So you cannot scroll to the beginning of the video without another
 retrieval action:

 which is fine.  I don't see the problem;  given a fragment we
 a) focus the user's attention on that fragment
 b) attempt to optimize network traffic to display that fragment as quickly
 as possible

 Neither of these stop
 c) the user from casting his attention elsewhere
 d) more network transactions being done to support this


 re c):
 It depends on how the UA displays it. If the UA displays the 5s offset
 as the beginning of the video, then the user cannot easily jump to 0s
 offset. I thought this was the whole purpose of the discussion:
 whether we should encourage UAs to display just the addressed segment
 in the timeline (which makes sense for a 5sec extract from a 2 hour
 video) or whether we encourage UAs to display the timeline of the full
 resource only. I only tried to clarify the differences for the UA and
 what the user gets, supporting an earlier suggestion that UAs may want
 to have a means for switching between full timeline and segment
 timeline display. Ultimately, it's a UA problem and not a HTML5
 problem.

I think there are two use cases:

1. Wanting to start the user at a particular point in a video, while
still showing the user that the full video is there. For example in a
political speech you may want to start off at a particularly
interesting part, while still allowing the viewer to rewind to any
part of the speech in order to gain more context if so desired.
This is very similar to how web pages work today if you include a
fragment identifier. The UI accounts for the full page, but the page
is scrolled to a particular part.

2. Wanting to only show a small part of a longer video. For example in
the video containing a movie, it would be possible to link to a
particular scene, with a given start and end time.


The danger of only doing 2, even if it's somehow possible for the user
to switch the UI to display the full range of the movie, is that
unless the UI is extremely obvious, most users are not going to see
it.

Or to put it another way: I think there is a use case both for linking
to a specific point in a video file and for pointing to a range in
it. Probably even a combination of the two where you 

Re: [whatwg] video/audio feedback

2009-05-01 Thread David Singer

At 11:42  +1000 1/05/09, Silvia Pfeiffer wrote:

re c):
It depends on how the UA displays it. If the UA displays the 5s offset
as the beginning of the video, then the user cannot easily jump to 0s
offset. I thought this was the whole purpose of the discussion:
whether we should encourage UAs to display just the addressed segment
in the timeline (which makes sense for a 5sec extract from a 2 hour
video) or whether we encourage UAs to display the timeline of the full
resource only.


I think we came to a slightly more abstract conclusion, that the UA 
focuses the user's initial attention on the indicated fragment.


[And we are silent about how it does that, and also about how easy it 
is to look elsewhere.]



I only tried to clarify the differences for the UA and
what the user gets, supporting an earlier suggestion that UAs may want
to have a means for switching between full timeline and segment
timeline display. Ultimately, it's a UA problem and not a HTML5
problem.


Exactly, agreed.
--
David Singer
Multimedia Standards, Apple Inc.


[whatwg] video/audio feedback

2009-04-30 Thread Ian Hickson
On Fri, 10 Apr 2009, Robert O'Callahan wrote:

 Media element state changes, such as readyState changes, trigger 
 asynchronous events. When the event handler actually runs, the element 
 state might have already changed again. For example, it's quite possible 
 for readyState to change to HAVE_ENOUGH_DATA, a canplaythrough event is 
 scheduled, then the readyState changes to HAVE_CURRENT_DATA, then the 
 canplaythrough event handler runs and may be surprised to find that the 
 state is not HAVE_ENOUGH_DATA.

Yeah. Not sure what to do about this.
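One defensive pattern for authors (a sketch, not anything the spec mandates) is for the handler to re-check the element's live readyState rather than trusting the event that scheduled it; a mock object stands in for the media element so the logic is testable outside a browser:

```javascript
// Sketch of the race described above: by the time the queued
// "canplaythrough" handler runs, readyState may have dropped again,
// so a defensive handler re-checks the live state.
const HAVE_ENOUGH_DATA = 4; // numeric constant from the HTML5 media API

function onCanPlayThrough(media) {
  if (media.readyState < HAVE_ENOUGH_DATA) {
    return "state changed before handler ran"; // the surprising case
  }
  return "safe to assume canplaythrough";
}
```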


 A related surprise is that although a media element delays the document 
 load event until the readyState reaches HAVE_CURRENT_DATA, it is 
 possible for a loadeddata event handler to actually run after the 
 document load event handler.

That's true, because the media element's events are all fired on the 
element's own task source, and are therefore not guaranteed to be ordered 
with respect to the DOM manipulation task source (which is used for the 
document-wide 'load' event).

The reason for this is that we don't want to have to guarantee the order 
of events between two video elements, so they can't use the same task 
source, but we _do_ want to make sure that events from a particular media 
element are ordered with respect to each other.

Again, I'm not sure what to do about this.


 An obvious approach to avoid these surprises is to arrange for the state 
 changes to not be reflected in the element until the associated event 
 actually fires. That's a problem if you apply it generally, though. If 
 you delay changes to the 'currentTime' attribute until the associated 
 timeupdate event runs, either 'currentTime' does not reflect what is 
 actually being displayed or your video playback depends on timely JS 
 event execution --- both of those options are unacceptable. And allowing 
 'currentTime' to advance while the readyState is still at 
 HAVE_CURRENT_DATA seems like it could be confusing too.

Indeed.


On Thu, 9 Apr 2009, Boris Zbarsky wrote:
 
 For what it's worth, there are similar situations elsewhere.  For 
 example, the currently proposed spec for stylesheet load events says 
 those fire asynchronously, so it looks to me like they could fire after 
 onload.

Actually the way this is defined they will always fire before the main 
load event (the events are both fired on the same task source, so their 
ordering is defined).


On Sat, 18 Apr 2009, Biju wrote:

 from https://bugzilla.mozilla.org/show_bug.cgi?id=480376
 
  It's not too uncommon for videos to have no audio track. It would be 
  really nice if the video controls could indicate this, so that users 
  know why there's no sound (is something broken? is my volume too low? 
  wtf?).
 
  Unfortunately this info isn't available through the media element API, 
  so this would need to be added to the HTML5 spec. The simplest way to 
  expose this would be as |readonly boolean hasAudio|. Is the media 
  backend capable of determining this?
 
 we need a hasAudio JS-only property for the video element

The notes in the spec for the next version of the API mention:

* hasAudio, hasVideo, hasCaptions, etc


On Sat, 18 Apr 2009, Biju wrote:

 if a video element is already playing/loaded video from URL
 http://mysite.com/a.ogg
 and if we want to play another file http://mysite.com/b.ogg
 we should do the following JS code
 
  v = $('v1');
  v.src = "http://mysite.com/b.ogg";
  v.load();
  v.play();
 
 Why can't it be as simple as
 
v = $('v1');
v.play("http://mysite.com/b.ogg");
 
 Similarly for load

v = $('v1');
v.load("http://mysite.com/b.ogg");

Is saving two lines really that big of a deal?


On Mon, 20 Apr 2009, Philip Jägenstedt wrote:
 
 Since static markup uses the src attribute it needs to be supported via 
 the DOM, so adding a parameter to load/play would only mean that there 
 are several ways to do the same thing. I'm not sure replacing an already 
 playing media resource is an important enough use case to make such a 
 change to an already complex spec.

Indeed.


On Mon, 20 Apr 2009, Biju wrote:
 
 I did not mean to remove option for assigning to .src property.

That's the problem. It would increase the size of the API.


 This will make web developers' work easier, i.e., the JS code will 
 become 1/3 the size most of the time for the same operation.

If saving two lines in this case is that much of a big deal, I recommend 
writing a wrapper function that takes an ID and a URL and finds the 
relevant video element, updates the src=, and reloads it. Then it's just 
one line of code. Problem solved. :-)
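A sketch of such a wrapper, with the element passed in directly so the logic stays testable outside a browser (in a page one would look it up by ID first, e.g. via document.getElementById; the function name is invented):

```javascript
// Sketch of the wrapper Ian suggests: update src=, reload, and play.
// The element is a parameter here so a mock can stand in for a real
// <video> element when testing outside a browser.
function playUrl(video, url) {
  video.src = url;
  video.load();
  video.play();
  return video.src; // handy for confirming what was set
}
```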


On Mon, 20 Apr 2009, Biju wrote:
 
 I am sorry if I am missing something; how does adding it make the spec 
 complex?

Anything we add makes the spec more complex. Two API members is more than 
one API member.


 So remaining logic is only
 
 HTMLVideoElement.prototype.newPlay =
 function newPlay(url){
   if(arguments.length) this.src = url;
   this.load();
   this.play();
 };

Re: [whatwg] video/audio feedback

2009-04-30 Thread Silvia Pfeiffer
 On Thu, 30 Apr 2009, Silvia Pfeiffer wrote:
  On Wed, 8 Apr 2009, Silvia Pfeiffer wrote:
 
  Note that in the Media Fragment working group even the specification
  of "http://www.example.com/t.mov#time=10s-20s" may mean that only the
  requested 10s clip is delivered, especially if all the involved
  instances in the exchange understand media fragment URIs.
 
  That doesn't seem possible since fragments aren't sent to the server.

 The current WD of the Media Fragments WG
 http://www.w3.org/2008/WebVideo/Fragments/WD-media-fragments-reqs/
 specifies that a URL that looks like this
 http://www.w3.org/2008/WebVideo/Fragments/media/fragf2f.mp4#t=12,21
 is to be resolved on the server through the following basic process:

 1. UA chops off the fragment and turns it into an HTTP GET request with
 a newly introduced time range header
 e.g.
 GET /2008/WebVideo/Fragments/media/fragf2f.mp4 HTTP/1.1
 Host: www.w3.org
 Accept: video/*
 Range: seconds=12-21

 2. The server slices the multimedia resource by mapping the seconds to
 bytes and extracting a playable resource (potentially fixing container
 headers). The server will then reply with the closest inclusive range
 in a 206 HTTP response:
 e.g.
 HTTP/1.1 206 Partial Content
 Accept-Ranges: bytes, seconds
 Content-Length: 3571437
 Content-Type: video/mp4
 Content-Range: seconds 11.85-21.16

 That seems quite reasonable, assuming the UA is allowed to seek to other
 parts of the video also.


  On Thu, 9 Apr 2009, Jonas Sicking wrote:
 
  If we look at how fragment identifiers work in web pages today, a
  link such as
 
  http://example.com/page.html#target
 
  this displays the 'target' part of the page, but lets the user scroll
  to anywhere in the resource. This feels to me like it maps fairly
  well to
 
  http://example.com/video.ogg#t=5s
 
  displaying the selected frame, but displaying a timeline for the full
  video and allowing the user to directly go to any position.
 
  Agreed. This is how the spec works now.

 This is also how we did it with Ogg and temporal URIs, but this is not
 the way in which the standard for media fragment URIs will work.

 It sounds like it is. I don't understand the difference.

Because media fragment URIs will not deliver the full resource like an
HTML page does, but will instead only provide the segment that is
specified by the temporal region.
http://example.com/video.ogg#t=5s  only retrieves the video from 5s to
the end, not from start to end.

So you cannot scroll to the beginning of the video without another
retrieval action:
i.e. assuming we display the full video timeline for a video
src="http://example.com/video.ogg#t=5s" element, and then the user
clicks on the beginning of the video, a
http://example.com/video.ogg#t=0s request would be sent.

The difference is the need for the additional retrieval action, which
would not be necessary if the full resource were downloaded immediately
for http://example.com/video.ogg#t=5s. But that's not how media
fragments work, so I tried pointing this out.
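The seek behaviour described here can be modelled with a small sketch (purely illustrative; `needs_new_request` is a made-up name, not part of any spec or API):

```python
def needs_new_request(retrieved, target):
    """Return the (start, end) range to fetch if `target` (in seconds)
    falls outside the already-retrieved fragment, else None.
    Illustrative model of the extra retrieval action only."""
    start, end = retrieved
    if start <= target <= end:
        return None  # inside the delivered fragment: seek locally
    return (target, end)  # outside it: a fresh fetch is needed

# #t=5s delivered only seconds 5..120; clicking the start of the
# timeline therefore triggers a new request, while seeking to 60s does not
assert needs_new_request((5, 120), 0) == (0, 120)
assert needs_new_request((5, 120), 60) is None
```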

Cheers,
Silvia.


Re: [whatwg] video/audio feedback

2009-04-30 Thread David Singer

At 23:15  +1000 30/04/09, Silvia Pfeiffer wrote:

Because media fragment URIs will not deliver the full resource like an
HTML page does, but will instead only provide the segment that is
specified by the temporal region.
http://example.com/video.ogg#t=5s  only retrieves the video from 5s to
the end, not from start to end.

So you cannot scroll to the beginning of the video without another
retrieval action:


which is fine.  I don't see the problem;  given a fragment we
a) focus the user's attention on that fragment
b) attempt to optimize network traffic to display that fragment as 
quickly as possible


Neither of these stops
c) the user from casting his attention elsewhere
d) more network transactions being done to support this





--
David Singer
Multimedia Standards, Apple Inc.


Re: [whatwg] video/audio feedback

2009-04-30 Thread Ian Hickson
On Thu, 30 Apr 2009, Silvia Pfeiffer wrote:
 
 Because media fragment URIs will not deliver the full resource like an
 HTML page does, but will instead only provide the segment that is 
 specified with the temporal region. http://example.com/video.ogg#t=5s 
 only retrieves the video from 5s to the end, not from start to end.
 
 So you cannot scroll to the beginning of the video without another 
 retrieval action: i.e. assuming we display the full video timeline 
 for a video src="http://example.com/video.ogg#t=5s" element, and 
 then the user clicks on the beginning of the video, a 
 http://example.com/video.ogg#t=0s request would be sent.
 
 The difference is the need for the additional retrieval action, which, 
 if the full resource was immediately downloaded for 
 http://example.com/video.ogg#t=5s would not be necessary. But that's not 
 how media fragments work, so I tried pointing this out.

It's generally understood that videos wouldn't be downloaded all at once 
anyway; UAs are expected to download the bits they want to cache, jumping 
in and out of the resources as the user seeks, etc. So this doesn't seem 
like a major difference to me.

-- 
Ian Hickson   U+1047E)\._.,--,'``.fL
http://ln.hixie.ch/   U+263A/,   _.. \   _\  ;`._ ,.
Things that are impossible just take longer.   `._.-(,_..'--(,_..'`-.;.'