Re: [whatwg] HTML5 video: frame accuracy / SMPTE

2011-01-18 Thread Rob Coenen
I'm really happy to see that Chromium has landed a fix for frame-accurate
seeking, making SMPTE timecode compliant operations with HTML5 video
possible.
The fix for Firefox is underway (
https://bugzilla.mozilla.org/show_bug.cgi?id=626273 ) and I have filed bugs
at both Webkit/Safari ( https://bugs.webkit.org/show_bug.cgi?id=52697) and
Microsoft Internet Explorer 9 (
https://connect.microsoft.com/IE/feedback/details/636755 )

BTW I tried with Opera 11, but it would only allow me to seek to full
seconds, not frames?

-Rob


On Thu, Jan 13, 2011 at 7:45 AM, Philip Jägenstedt wrote:

> On Thu, 13 Jan 2011 01:03:03 +0100, Aryeh Gregor wrote:
>
>> On Wed, Jan 12, 2011 at 3:42 AM, Philip Jägenstedt wrote:
>>
>>> * add HTMLMediaElement.seek(t, [exact]), where exact defaults to false if
>>> missing
>>>
>>
>> Boolean parameters are evil, since it's impossible to guess what they
>> do from reading the code.  Make it a two-value enum instead.  The
>> second argument could be extended to a bitfield later if desired, too.
>>
>
> WFM
>
>
> --
> Philip Jägenstedt
> Core Developer
> Opera Software
>
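
A quick illustration of the quoted API discussion (HTMLMediaElement.seek() is
only a proposal in this thread, not a shipped API, and both call shapes below
are hypothetical):

    // Boolean flag: impossible to guess what the second argument means
    // at the call site.
    video.seek(12.48, true);

    // Two-value enum (a string constant): self-describing, and the second
    // argument could later grow into a bitfield or dictionary if desired.
    video.seek(12.48, "exact");        // frame-accurate seek
    video.seek(12.48, "approximate");  // fast seek to the nearest keyframe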


Re: [whatwg] Questions regarding microdata implementations.

2011-01-18 Thread Emiliano Martinez Luque
Thank you for the reply, it took some time going through the algorithm
and I should have looked there first. But it still does not explain
what an implementation should do with the results already found before
encountering the loop and failing. I'll take it that this is up to the
application dealing with the data.

Other than that, I took itemscope to represent "within the scope of an
item", my mistake. (And you are absolutely right on saving a list of
references to elements with Ids).

Well, you have clarified all my doubts and I have to say I agree
completely with the open approach expressed in your replies and in the
specs. Again, thank you for this great spec and for the reply.

-
Emiliano Martínez Luque
http://www.metonymie.com


Re: [whatwg] Limiting the amount of downloaded but not watched video

2011-01-18 Thread Glenn Maynard
On Tue, Jan 18, 2011 at 7:32 PM, David Singer  wrote:

> I'm sorry, perhaps that was a shorthand.
>
> In RTSP-controlled RTP, there is a tight relationship between the play
> point, and play state, the protocol state (delivering data or paused) and
> the data delivered (it is delivered in precisely real-time, and played and
> discarded shortly after playing).  The server delivers very little more data
> than is actually watched.
>
> In HTTP, however, the entire resource is offered to the client, and there
> is no protocol to convey play/paused back to the server, and the typical
> behavior when offered a resource in HTTP is to make a simple binary decision
> to either load it (all) or not load it (at all).  So, by providing a media
> resource over HTTP, the server should kinda be expecting this 'download'
> behavior.
>

The only practical server-side problem I can think of is that capping the
prebuffer may result in keeping HTTP connections open longer; rather than
keeping the connection open just long enough to download the video, it's
likely to be kept open for the entire duration of playback.  That's something
to think carefully about, and it influences implementations (e.g. when the cap
is reached, close the connection if the video is paused), but it doesn't seem
like a showstopper.
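
A rough sketch of the kind of connection handling described above, written as
JS-flavored pseudocode purely for illustration (none of these names exist;
this logic would live inside the UA, not in page script):

    // Called whenever more media data arrives on the connection.
    function onDownloadProgress(media, connection) {
      const bufferedAhead = media.bufferedEnd - media.currentTime; // seconds
      if (bufferedAhead < PREBUFFER_CAP_SECONDS) return;  // keep filling

      if (media.paused) {
        // Don't hold the socket open while paused; reopen later with an
        // HTTP Range request when the buffer needs topping up.
        connection.close();
      } else {
        // Keep the connection, but stop reading from the socket until the
        // buffer drains back below the cap.
        connection.suspendReads();
      }
    }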

> Not only that, but if my client downloads as much as possible as soon as
> possible and caches as much as possible, and yours downloads as little as
> possible as late as possible, you may get brownie points from the server
> owner, but I get brownie points from my local user -- the person I want to
> please if I am a browser vendor.  There is every incentive to be resilient
> and 'burn' bandwidth to achieve a better user experience.
>
> Servers are at liberty to apply a 'throttle' to the supply, of course
> ("download as fast as you like at first, but after a while I'll only supply
> at roughly the media rate").  They can suggest that the client be a little
> less aggressive in buffering, but it's easily ignored and the incentive is
> to ignore it.
>
> So I tend to return to "if you want more tightly-coupled behavior, use a
> more tightly-coupled protocol"...
>

Browser vendors always have incentives to benefit the user at the expense of
servers.  Parallel HTTP connections are the most obvious example: although
you could make pages load faster by opening 20 parallel connections to a
server, as I recall most browsers are around 6, which is out of spec but a
reasonable value that improves the user experience without hammering
servers.  Some browsers are less civil and open far more, of course; but
enough browser vendors seem reasonable about this sort of thing, even when
the incentive is to open the floodgates, for this to be useful.

So if the suggestion is that a "maximumPrebuffer" setting wouldn't be
implemented because not doing so makes the browser look better to the user,
at least on first impression I'm not so sure.  I think that's only true if
it's impossible to implement capped prebuffering reliably, but I don't think
anyone's made that argument.

-- 
Glenn Maynard


Re: [whatwg] Limiting the amount of downloaded but not watched video

2011-01-18 Thread Robert O'Callahan
On Wed, Jan 19, 2011 at 1:35 PM, Andy Berkheimer  wrote:

> As an example, I believe Chrome's current implementation _does_ stall
> the HTTP connection (stop reading from the socket interface but keep
> it open) after some amount of readahead - a magic hardcoded constant.
> We've run into issues there - their browser readahead buffer is too
> small and is causing a lot of underruns.
>

For the record, Firefox does this too, but only when we run out of storage
on the client.

Rob
-- 
"Now the Bereans were of more noble character than the Thessalonians, for
they received the message with great eagerness and examined the Scriptures
every day to see if what Paul said was true." [Acts 17:11]


Re: [whatwg] Limiting the amount of downloaded but not watched video

2011-01-18 Thread Andy Berkheimer
On Tue, Jan 18, 2011 at 5:11 PM, Zachary Ozer  wrote:
> I've heard from some people that they're a bit lost, so I wanted to
> take a moment to summarize.
>
> We have two competing interests here:
>  * Viewers want a smooth playback experience regardless of their
> bandwidth or device. Some viewers may also want to limit the amount
> they download because they're paying for bandwidth. Additionally,
> devices may have memory limitations in terms of how much they're able
> to buffer.
>  * Content providers are worried about bandwidth costs. While they
> want a great experience for viewers, a lot of people click play and
> then watch just a small fraction of their video.

In the case where the viewer does not have enough bandwidth to stream
the video in realtime, there are two basic options for the experience:
- buffer the majority of the video (per Glenn and Boris' discussion)
- switch to a lower bitrate that can be streamed in realtime

This thread has focused primarily on the first option, and this is an
experience that we see quite a bit.  This is the option favored
amongst enthusiasts and power users, and also makes sense when a
viewer has made a purchase with an expectation of quality.  And
there's always the possibility that the user does not have enough
bandwidth for even the lowest available bitrate.

But the second option is the experience that the majority of our viewers expect.

The ideal interface would have a reasonable default behavior but give
an application the ability to implement either experience depending on
user preference (or lack thereof), viewing context, etc.  More on that
below.

> Currently, there's no way to stop / limit the browser from buffering -
> once you hit play, you start downloading and don't stop until the
> resource is completely loaded. This is largely the same as Flash, save
> the fact that some browsers don't respect the preload attribute. (Side
> note: I also haven't found a browser that stops loading the resource
> even if you destroy the video tag.)

As an example, I believe Chrome's current implementation _does_ stall
the HTTP connection (stop reading from the socket interface but keep
it open) after some amount of readahead - a magic hardcoded constant.
We've run into issues there - their browser readahead buffer is too
small and is causing a lot of underruns.

> There have been a few suggestions for how to deal with this, but most
> have revolved around using downloadBufferTarget - a settable property
> that determines how much video to buffer ahead in seconds. Originally,
> it was suggested that the content producers should have control over
> this, but most seem to favor the client retaining some control since
> they are the most likely to be in low bandwidth situations.
> (Publishers who want strict bandwidth control could use a more
> advanced server and communication layer ala YouTube).

The advanced layer you speak of is naive server-side throttling with
no feedback from the client, and a few tricks to kill the current
progressive download and open a new one for out-of-buffer seeks.  This
has a lot of bad behaviors that we'd like to fix - which brings us to the
crux of the issue.

No matter how much data you pass between client and server, there's
always some useful playback state that the client knows and the server
does not - or the server's view of the state is stale.  This is
particularly true if there's an HTTP proxy between the user agent and
the server.  Any behavior that could be implemented through an
advanced server/communication layer can be achieved in a simpler, more
robust fashion with a solid buffer management implementation that
provides "advanced" control through javascript and attributes.
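
As an illustration of that point, everything needed for buffer management is
already observable on the client with no server round-trip
(downloadBufferTarget below is the attribute proposed in this thread, not an
existing one):

    const video = document.querySelector('video');

    // Seconds of media buffered past the playhead -- state that only the
    // client can see instantly, especially behind an HTTP proxy.
    function bufferedAheadSeconds(v) {
      const b = v.buffered;
      for (let i = 0; i < b.length; i++) {
        if (b.start(i) <= v.currentTime && v.currentTime <= b.end(i)) {
          return b.end(i) - v.currentTime;
        }
      }
      return 0;
    }

    console.log(video.paused, video.currentTime, bufferedAheadSeconds(video));
    video.downloadBufferTarget = 30;  // proposed: aim for ~30s of readahead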


> The simplest enhancement would be to honor the downloadBufferTarget
> only when readyState=HAVE_ENOUGH_DATA and playback is paused, as this
> would imply that there is not a low bandwidth situation.
>
> As an enhancement to this, the browser could always respect the
> downloadBufferTarget until the buffer underruns
> (networkState=NETWORK_LOADING and readyState=HAVE_CURRENT_DATA). At
> this point, the browser could either:
>  * Ignore downloadBufferTarget and load as fast as possible
>  * Double the size of downloadBufferTarget
>
> As a further enhancement, the browser could store these values per
> site so that they are not recalculated on each playback. Finally, if
> there is a playback with no underruns, the browser would reduce
> downloadBufferTarget by some factor to ensure that it is not over
> buffering.
>
> Separately, there has been some discussion about how much buffer needs
> to be retained / when the buffer should be cleared. (I think this
> should be moved off to a separate discussion.)
>
> ==
>
> Personally, I really like the idea of allowing the content provider to
> specify a downloadBufferTarget, but allowing the browser to override
> this based on historical data / current network conditions. I'm not
> sure how much work each of the propose

Re: [whatwg] Limiting the amount of downloaded but not watched video

2011-01-18 Thread David Singer

On Jan 18, 2011, at 16:16, Glenn Maynard wrote:

> On Tue, Jan 18, 2011 at 6:54 PM, David Singer  wrote:
> 
>> I feel like we are asking this question at the wrong protocol level.
>> 
>> If you use the HTML5 video tag, you indicate the resource and the protocol
>> used to get it, in a URL.
>> 
>> If you indicate a download protocol, you can hardly be surprised if, well,
>> download happens.
>> 
> 
> HTTP isn't a "download protocol"--I'm not really sure what that means--it's
> a transfer protocol.  Nothing about HTTP prevents capping prebuffering, and
> nothing about an HTTP URL implies that the entire resource needs to be
> downloaded.
> 
> This setting is relevant for any protocol, whether it's a generic one like
> HTTP or one designed for streaming video.  This isn't a discussion about
> HTTP; it's one about an interface to control prebuffering, regardless of the
> underlying protocol.  I haven't seen it suggested that any difficulty of
> doing this is due to HTTP.  If you think it is, it'd be helpful to explain
> further.


I'm sorry, perhaps that was a shorthand.

In RTSP-controlled RTP, there is a tight relationship between the play point, 
and play state, the protocol state (delivering data or paused) and the data 
delivered (it is delivered in precisely real-time, and played and discarded 
shortly after playing).  The server delivers very little more data than is 
actually watched.

In HTTP, however, the entire resource is offered to the client, and there is no 
protocol to convey play/paused back to the server, and the typical behavior 
when offered a resource in HTTP is to make a simple binary decision to either 
load it (all) or not load it (at all).  So, by providing a media resource over 
HTTP, the server should kinda be expecting this 'download' behavior.

Not only that, but if my client downloads as much as possible as soon as 
possible and caches as much as possible, and yours downloads as little as 
possible as late as possible, you may get brownie points from the server owner, 
but I get brownie points from my local user -- the person I want to please if I 
am a browser vendor.  There is every incentive to be resilient and 'burn' 
bandwidth to achieve a better user experience.

Servers are at liberty to apply a 'throttle' to the supply, of course 
("download as fast as you like at first, but after a while I'll only supply at 
roughly the media rate").  They can suggest that the client be a little less 
aggressive in buffering, but it's easily ignored and the incentive is to ignore 
it.

So I tend to return to "if you want more tightly-coupled behavior, use a more 
tightly-coupled protocol"...

David Singer
Multimedia and Software Standards, Apple Inc.



Re: [whatwg] Limiting the amount of downloaded but not watched video

2011-01-18 Thread Glenn Maynard
On Tue, Jan 18, 2011 at 6:54 PM, David Singer  wrote:

> I feel like we are asking this question at the wrong protocol level.
>
> If you use the HTML5 video tag, you indicate the resource and the protocol
> used to get it, in a URL.
>
> If you indicate a download protocol, you can hardly be surprised if, well,
> download happens.
>

HTTP isn't a "download protocol"--I'm not really sure what that means--it's
a transfer protocol.  Nothing about HTTP prevents capping prebuffering, and
nothing about an HTTP URL implies that the entire resource needs to be
downloaded.

This setting is relevant for any protocol, whether it's a generic one like
HTTP or one designed for streaming video.  This isn't a discussion about
HTTP; it's one about an interface to control prebuffering, regardless of the
underlying protocol.  I haven't seen it suggested that any difficulty of
doing this is due to HTTP.  If you think it is, it'd be helpful to explain
further.

-- 
Glenn Maynard


Re: [whatwg] Limiting the amount of downloaded but not watched video

2011-01-18 Thread David Singer

On Jan 18, 2011, at 5:40, Boris Zbarsky wrote:

> On 1/18/11 6:09 AM, Glenn Maynard wrote:
>> I'm confused--how is the required buffer size a function of the length of
>> the video?  Once the buffer is large enough to smooth out network
>> fluctuations, either you have the bandwidth to stream the video or you
>> don't; the length of the video doesn't enter into it.
> 
> The point is that many users _don't_ have enough bandwidth to stream the 
> video.  At that point, the size of the buffer that puts you in 
> HAVE_ENOUGH_DATA depends on the length of the video.
> 

It certainly used to be true that many users of Apple's trailers site would 
deliberately choose higher quality (and hence bandwidth) trailers than they 
could download in realtime.

David Singer
Multimedia and Software Standards, Apple Inc.



Re: [whatwg] Limiting the amount of downloaded but not watched video

2011-01-18 Thread David Singer
I feel like we are asking this question at the wrong protocol level.

If you use the HTML5 video tag, you indicate the resource and the protocol used 
to get it, in a URL.

If you indicate a download protocol, you can hardly be surprised if, well, 
download happens.

If you want a more tightly coupled supply/consume protocol, then use one.  As 
long as it's implemented by client and server, you're on.

Note that the current move of the web towards download in general and HTTP in 
particular is due in no small part to the fact that getting more tightly 
coupled protocols -- actually, any protocol other than HTTP -- out of content 
servers, across firewalls, through NATs, and into clients is...still a 
nightmare.  So, we've been given a strong incentive by all those to use HTTP.  
It's sad that some of them are not happy with that result, but it's going to be 
hard to change now.

David Singer
Multimedia and Software Standards, Apple Inc.



Re: [whatwg] Limiting the amount of downloaded but not watched video

2011-01-18 Thread Robert O'Callahan
On Wed, Jan 19, 2011 at 6:11 AM, Zachary Ozer wrote:

> (Side note: I also haven't found a browser that stops loading the resource
> even if you destroy the video tag.)
>

Setting the source URI to "" should stop the download.

Personally I think having browsers honor dynamic changes to the preload
attribute is the best way to solve this.
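
A quick sketch of the teardown pattern being suggested (treat it as a common
workaround rather than a guarantee; behavior varied across 2011-era browsers):

    const video = document.getElementById('player');  // hypothetical id
    video.pause();
    video.removeAttribute('src');   // or video.src = "", as suggested above
    video.load();                   // re-runs resource selection, abandoning
                                    // the in-flight download
    document.body.removeChild(video);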

Rob
-- 
"Now the Bereans were of more noble character than the Thessalonians, for
they received the message with great eagerness and examined the Scriptures
every day to see if what Paul said was true." [Acts 17:11]


Re: [whatwg] Limiting the amount of downloaded but not watched video

2011-01-18 Thread Glenn Maynard
On Tue, Jan 18, 2011 at 5:00 PM, Boris Zbarsky  wrote:

> On 1/18/11 4:37 PM, Glenn Maynard wrote:
>
>> If you don't have enough bandwidth, then the necessary buffer size is
>> effectively the entire video[1]
>>
>
> No, it's really not.  Your footnote is, of course, correct.
>
> If my bandwidth is such that I can download the video in 2 hours, and it's
> one hour long, then letting me start playing after 1.5 hours of downloading
> seems perfectly safe to me, if the download speed is stable enough (has a 2x
> margin of safety).
>

I'd tend--both as a user and as a web developer--to err the other way, and
always download the whole video if the connection isn't fast enough to
reliably stream it.  The failure mode otherwise is very bad: the user's
movie underruns two hours in, possibly resulting in a box of popcorn being
hurled angrily at the TV.  Either way, it's a judgement call based on user
experience priorities, so web pages should have some influence over the
decision.

Note that we're not actually describing the "maximum prebuffer" value (the
topic of the thread), but rather the "minimum prebuffer", eg. Flash's
bufferTime value from Zachary's mail.  In both of the above cases you'd
probably want the maximumPrebuffer value to be unlimited.

-- 
Glenn Maynard


Re: [whatwg] Limiting the amount of downloaded but not watched video

2011-01-18 Thread Boris Zbarsky

On 1/18/11 4:37 PM, Glenn Maynard wrote:

> If you don't have enough bandwidth, then the necessary buffer size is
> effectively the entire video[1]


No, it's really not.  Your footnote is, of course, correct.

If my bandwidth is such that I can download the video in 2 hours, and 
it's one hour long, then letting me start playing after 1.5 hours of 
downloading seems perfectly safe to me, if the download speed is stable 
enough (has a 2x margin of safety).


If the download speed is not stable enough, then it doesn't matter 
whether on average you can stream the video, because the outliers will 
still kill you.



> Mikko seems to suggest that it's the
> entire video times some multiplier, where that multiplier can be
> discovered by binary searching.


The multiplier should just be a function of the ratio of the stream 
bitrate and the available download bandwidth if those are both constant.
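
For that constant-rate case the relationship is simple enough to write down
(a small sketch; the function name is made up):

    // Fraction of the file that must be buffered before playback starts,
    // assuming both the stream bitrate and the download bandwidth are
    // constant.  If bandwidth >= bitrate, no prebuffer is needed at all.
    function minPrebufferFraction(bitrate, bandwidth) {
      return bandwidth >= bitrate ? 0 : 1 - bandwidth / bitrate;
    }

    // The example above: a 1-hour video that takes 2 hours to download
    // (bandwidth is half the bitrate) needs at least half the file, i.e.
    // one hour of downloading, before starting; waiting 1.5 hours adds
    // the 2x safety margin.
    minPrebufferFraction(2, 1);  // 0.5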


But note that available download bandwidth is non-constant.  A number of 
cable services around here, at least, seem to do something where they 
give you N bytes per second for the first K bytes of a download 
(presumably that's a single connection, but who knows how they define 
it) and N/2 or N/3 or N/10 bytes per second for the rest of the 
download.  The connection is sold as an N/2 or N/3 or N/10 connection, 
not a connection that can actually produce N bytes per second.  But 
nevertheless, the average download rate will differ depending on the 
file size, and hence the multiplier is different for different file sizes.


Given that I think the random speedup bit is per-connection, I doubt 
that this can be discovered with some sort of binary search, though. 
Agreed on that.


-Boris


Re: [whatwg] Control over selection direction

2011-01-18 Thread Ojan Vafai
On Sun, Jan 16, 2011 at 2:44 PM, Aryeh Gregor wrote:

> If we just have a boolean, it's unambiguous: the properties are all
> logically separate.  We don't want to emulate the DOM selection API,
> IMO -- it's ridiculously complex for minimal functionality gain, even
> accounting for the fact that it has to deal with nodes as well as
> offsets.  Save authors the pain, if they only have to deal with text
> fields.
>

I agree. If we could change the DOM apis to just have start/end use a
bool/enum for the direction it would certainly be simpler for web developers
than what we have today.

Ojan


Re: [whatwg] Limiting the amount of downloaded but not watched video

2011-01-18 Thread Glenn Maynard
On Tue, Jan 18, 2011 at 8:40 AM, Boris Zbarsky  wrote:

> On 1/18/11 6:09 AM, Glenn Maynard wrote:
>
>> I'm confused--how is the required buffer size a function of the length of
>> the video?  Once the buffer is large enough to smooth out network
>> fluctuations, either you have the bandwidth to stream the video or you
>> don't; the length of the video doesn't enter into it.
>>
>
> The point is that many users _don't_ have enough bandwidth to stream the
> video.  At that point, the size of the buffer that puts you in
> HAVE_ENOUGH_DATA depends on the length of the video.
>

If you don't have enough bandwidth, then the necessary buffer size is
effectively the entire video[1].  Mikko seems to suggest that it's the
entire video times some multiplier, where that multiplier can be discovered
by binary searching.  This doesn't make sense to me:

> static time period in seconds. This is required because a 5 second
> buffer could be enough for a 20 second clip but a 2 minute buffer could
> be required for one hour video. In both cases, the actual available





[1] (Of course, it's more precisely the size of the video minus a function
of the video size, bitrate and user bandwidth--the amount of data you can
leave unbuffered at the end and have it finish while you're watching.  I
point this out because someone else will if I don't, but I don't think it's
relevant to any buffer size algorithm: it's hard to determine, and if you
get it wrong for a long video you have a very annoyed user with his movie
interrupted two hours in.)

-- 
Glenn Maynard


[whatwg] Stop control for video [was: Limiting the amount of downloaded but not watched video]

2011-01-18 Thread Markus Ernst

On 18.01.2011 18:11, Zachary Ozer wrote:

> Currently, there's no way to stop / limit the browser from buffering -
> once you hit play, you start downloading and don't stop until the
> resource is completely loaded. This is largely the same as Flash, save
> the fact that some browsers don't respect the preload attribute. (Side
> note: I also haven't found a browser that stops loading the resource
> even if you destroy the video tag.)


There has been a version of JWplayer with a stop control (as I suggested 
earlier in this thread for the user aspect of the topic). I set up a 
demo page:


http://www.markusernst.ch/stuff_for_the_world/jwplayertest.html

If you click the stop button, playback stops and the status bar
disappears. If you click the play button again, playback starts from
the beginning, and the status bar shows the download continuing from
the point where the stop button was clicked. So this player seems
to interrupt the download and resume it when playback is restarted.


This looks like an intuitive and sensible behaviour to me, which could
both improve user experience and save server bandwidth. It could be
implemented independently of, and in addition to, the points discussed
in the original thread.


Re: [whatwg] Limiting the amount of downloaded but not watched video

2011-01-18 Thread Boris Zbarsky

On 1/18/11 2:01 PM, Zachary Ozer wrote:

> On Tue, Jan 18, 2011 at 6:46 PM, Boris Zbarsky wrote:
>
>> On 1/18/11 12:11 PM, Zachary Ozer wrote:
>>
>>> (Side
>>> note: I also haven't found a browser that stops loading the resource
>>> even if you destroy the video tag.)
>>
>> "destroy" in what sense?  You verified in a debugger that it had been
>> garbage collected?
>
> I'm doing document.body.removeChild. Is there a better way to do it?


Not really, no.  Once you remove it, even if there is nothing 
referencing it anymore (which may or may not be the case depending on 
what other code is running on the page), it still won't be destroyed 
until the next time garbage collection happens.  In the case of garbage 
collectors that can do different levels of collection (generational, 
etc), it won't be destroyed until whatever level is needed to destroy it 
happens.


In general, depending on finalizers to release resources (which is 
what's happening here) is not really a workable setup.  Maybe we need an 
API to explicitly release the data on an audio/video tag?


-Boris


Re: [whatwg] Limiting the amount of downloaded but not watched video

2011-01-18 Thread Zachary Ozer
On Tue, Jan 18, 2011 at 6:46 PM, Boris Zbarsky  wrote:
> On 1/18/11 12:11 PM, Zachary Ozer wrote:
>>
>> (Side
>> note: I also haven't found a browser that stops loading the resource
>> even if you destroy the video tag.)
>
> "destroy" in what sense?  You verified in a debugger that it had been
> garbage collected?

I'm doing document.body.removeChild. Is there a better way to do it?

Best,

Zach


Re: [whatwg] Limiting the amount of downloaded but not watched video

2011-01-18 Thread Boris Zbarsky

On 1/18/11 12:11 PM, Zachary Ozer wrote:

> (Side
> note: I also haven't found a browser that stops loading the resource
> even if you destroy the video tag.)


"destroy" in what sense?  You verified in a debugger that it had been 
garbage collected?


-Boris


Re: [whatwg] Questions regarding microdata implementations.

2011-01-18 Thread Tab Atkins Jr.
Hey, Emiliano!  I'm going to snip your actual questions, as they're rather long.

> 1) The specification does not define any mechanism for an application
> using the microdata to deal with possible misuses of data
> vocabularies.

The spec completely specifies how to extract the data.  What
applications do with the data afterwards is out-of-scope for HTML.  It
may be useful for an application to accept and keep around all the
data that was extracted, even if it knows the vocabulary and sees
unknown properties (for example, this can help with
forward-compatibility, if that makes sense for the application; it
could also allow custom extensions, if that makes sense for the
application).  It may just throw away all the data it extracted that
it doesn't recognize.

Both of these, and any other behavior, are perfectly fine, and it's up
to the application to decide what's most useful.


> 2) The specs specify item types should be identified by URLs. It is
> not completely clear (or at least not clear to me) whether they
> represent the string of the URL as a URI for unambiguously
> representing the item type, a URL for a document that defines that
> item type or both. which is the case?

The former, though, since it's a URL, it can certainly play the role
of the latter as well.


> 3) The specification states that itemref references a node within the
> html tree, referencing it by it's id. However it specifies nothing
> regarding how the referenced node should be marked up. Since, the
> nodes referenced may exist before the itemrefs, an application
> discovering microdata may have to do multiple passes through the html
> tree to extract this information. I would like to know, if any thought
> has been given to using itemscope within the referenced node, ie:
>
> 
>        value of a1
>        value of a2
> 
>
> 
>        value of b1
>        
> 

Using @itemscope changes the meaning - it implies that the element
forms an independent (though possibly nested) Microdata item.

You don't necessarily need to make multiple passes through the
document to resolve all the itemrefs, though.  For example, you could
keep a stack of #ids, and associate each @itemprop you find with the
current stack.  When you're done extracting everything, you can
resolve the @itemrefs by just filtering your list of @itemprops by ids
in their stack.


> 4) What is the intended behaviour of an application when encountering
> a loop within the itemref references? ie:

This is described in the spec.
I don't want to squint at the algorithm again to find out exactly
what happens, but the algo keeps track of things it's seen before, and
cuts off recursion if an @itemref results in a loop.
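
For concreteness, a rough JS sketch of that crawl (not the spec algorithm
verbatim; it just shows the loop cutoff):

    // Collect the elements carrying properties for one item, following
    // itemref and cutting off any reference loops.
    function collectPropertyElements(item) {
      const memory = new Set([item]);          // elements already visited
      const pending = [...item.children];
      for (const id of (item.getAttribute('itemref') || '').split(/\s+/)) {
        const referenced = id && document.getElementById(id);
        if (referenced) pending.push(referenced);
      }

      const props = [];
      while (pending.length) {
        const current = pending.pop();
        if (memory.has(current)) continue;     // seen before: a loop, cut it off
        memory.add(current);
        if (!current.hasAttribute('itemscope')) {
          pending.push(...current.children);   // don't descend into nested items
        }
        if (current.hasAttribute('itemprop')) props.push(current);
      }
      return props;
    }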

> 5) The specification states:
>
> "The itemref attribute, if specified, must have a value that is an
> unordered set of unique space-separated tokens that are
> case-sensitive, consisting of IDs of elements in the same home
> subtree."
>
> (5.2.2 of http://www.whatwg.org/specs/web-apps/current-work/#microdata)
>
> I would like to know if there has been any thoughts given to
> referencing fragments on an outside document. For example, a document
> with URL http://www.personaldata.com/me.html might contain the
> following fragment:

That's more complex than appeared necessary for any of the (fairly
extensive) use-cases that were considered when Microdata was written.

Vocabularies can certainly define that some of their properties take
urls which are intended to point to more data, but that doesn't affect
the Microdata data extraction algorithm itself, which only cares about
the single page it was run on.

~TJ


Re: [whatwg] Limiting the amount of downloaded but not watched video

2011-01-18 Thread Ryosuke Niwa
On Tue, Jan 18, 2011 at 9:11 AM, Zachary Ozer wrote:
>
> Currently, there's no way to stop / limit the browser from buffering -
> once you hit play, you start downloading and don't stop until the
> resource is completely loaded. This is largely the same as Flash, save
> the fact that some browsers don't respect the preload attribute. (Side
> note: I also haven't found a browser that stops loading the resource
> even if you destroy the video tag.)
>

This sounds like a UI issue each browser vendor can address.

- Ryosuke


Re: [whatwg] Limiting the amount of downloaded but not watched video

2011-01-18 Thread Zachary Ozer
I've heard from some people that they're a bit lost, so I wanted to
take a moment to summarize.

We have two competing interests here:
 * Viewers want a smooth playback experience regardless of their
bandwidth or device. Some viewers may also want to limit the amount
they download because they're paying for bandwidth. Additionally,
devices may have memory limitations in terms of how much they're able
to buffer.
 * Content providers are worried about bandwidth costs. While they
want a great experience for viewers, a lot of people click play and
then watch just a small fraction of their video.

Currently, there's no way to stop / limit the browser from buffering -
once you hit play, you start downloading and don't stop until the
resource is completely loaded. This is largely the same as Flash, save
the fact that some browsers don't respect the preload attribute. (Side
note: I also haven't found a browser that stops loading the resource
even if you destroy the video tag.)

There have been a few suggestions for how to deal with this, but most
have revolved around using downloadBufferTarget - a settable property
that determines how much video to buffer ahead in seconds. Originally,
it was suggested that the content producers should have control over
this, but most seem to favor the client retaining some control since
they are the most likely to be in low bandwidth situations.
(Publishers who want strict bandwidth control could use a more
advanced server and communication layer ala YouTube).

The simplest enhancement would be to honor the downloadBufferTarget
only when readyState=HAVE_ENOUGH_DATA and playback is paused, as this
would imply that there is not a low bandwidth situation.

As an enhancement to this, the browser could always respect the
downloadBufferTarget until the buffer underruns
(networkState=NETWORK_LOADING and readyState=HAVE_CURRENT_DATA). At
this point, the browser could either:
 * Ignore downloadBufferTarget and load as fast as possible
 * Double the size of downloadBufferTarget

As a further enhancement, the browser could store these values per
site so that they are not recalculated on each playback. Finally, if
there is a playback with no underruns, the browser would reduce
downloadBufferTarget by some factor to ensure that it is not over
buffering.
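
Expressed in terms of the existing media state constants, that policy looks
roughly like this (downloadBufferTarget is the proposed attribute, not a real
one, and a page can't actually drive UA buffering this way today):

    // Honor the buffer target only when we clearly aren't bandwidth-starved:
    // plenty of data already buffered, and playback is paused.
    function shouldThrottleDownload(video) {
      return video.readyState === HTMLMediaElement.HAVE_ENOUGH_DATA &&
             video.paused;
    }

    // Underrun (networkState is NETWORK_LOADING but only the current frame
    // is available): either drop the target entirely or back off by
    // doubling it.
    function onBufferUnderrun(video) {
      video.downloadBufferTarget *= 2;
    }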

Separately, there has been some discussion about how much buffer needs
to be retained / when the buffer should be cleared. (I think this
should be moved off to a separate discussion.)

==

Personally, I really like the idea of allowing the content provider to
specify a downloadBufferTarget, but allowing the browser to override
this based on historical data / current network conditions. I'm not
sure how much work each of the proposed solutions would be, but I think
that respecting downloadBufferTarget until the buffer underruns, and
then downloading as fast as possible, would be fairly straightforward
and a big improvement on what's available today.

Best,

Zach
--
Zachary Ozer
Developer, LongTail Video

w: longtailvideo.com • e: z...@longtailvideo.com • p: 212.244.0140 •
f: 212.656.1335
JW Player  |  Bits on the Run  |  AdSolution



On Tue, Jan 18, 2011 at 1:40 PM, Boris Zbarsky  wrote:
> On 1/18/11 6:09 AM, Glenn Maynard wrote:
>>
>> I'm confused--how is the required buffer size a function of the length of
>> the video?  Once the buffer is large enough to smooth out network
>> fluctuations, either you have the bandwidth to stream the video or you
>> don't; the length of the video doesn't enter into it.
>
> The point is that many users _don't_ have enough bandwidth to stream the
> video.  At that point, the size of the buffer that puts you in
> HAVE_ENOUGH_DATA depends on the length of the video.
>
> -Boris
>
>


Re: [whatwg] Limiting the amount of downloaded but not watched video

2011-01-18 Thread Boris Zbarsky

On 1/18/11 6:09 AM, Glenn Maynard wrote:

> I'm confused--how is the required buffer size a function of the length of
> the video?  Once the buffer is large enough to smooth out network
> fluctuations, either you have the bandwidth to stream the video or you
> don't; the length of the video doesn't enter into it.


The point is that many users _don't_ have enough bandwidth to stream the 
video.  At that point, the size of the buffer that puts you in 
HAVE_ENOUGH_DATA depends on the length of the video.


-Boris



Re: [whatwg] Limiting the amount of downloaded but not watched video

2011-01-18 Thread Glenn Maynard
On Mon, Jan 17, 2011 at 5:01 PM, Zachary Ozer wrote:

> > I assume you're comparing to the bandwidth usage of flash? Does flash
> > allow developers to control how the media is downloaded on the client?
> > What mechanisms does it provide? Maybe we can do something similar?
>
> There are a bunch:
>
> * backBufferLength : Number - [read-only] The number of seconds of
> previously displayed data that currently cached for rewinding and
> playback.
>
> * backBufferTime : Number - Specifies how much previously displayed
> data Flash Player tries to cache for rewinding and playback, in
> seconds.
>
> * bufferLength : Number - [read-only] The number of seconds of data
> currently in the buffer.
>
> * bufferTime : Number - Specifies how long to buffer messages before
> starting to display the stream.
>
> * bufferTimeMax : Number - Specifies a maximum buffer length for live
> streaming content, in seconds.
>
> * bytesLoaded : uint - [read-only] The number of bytes of data that
> have been loaded into the application.
>
> * bytesTotal : uint - [read-only] The total size in bytes of the file
> being loaded into the application.
>

Note that this list doesn't actually seem to include a cap on the amount of
time to buffer ahead; bufferTimeMax appears to be for live streaming only
(e.g. videoconferencing and webcams) to control sync, not static videos like
YouTube.

-- 
Glenn Maynard


Re: [whatwg] Limiting the amount of downloaded but not watched video

2011-01-18 Thread Glenn Maynard
On Tue, Jan 18, 2011 at 5:46 AM, Mikko Rantalainen <
mikko.rantalai...@peda.net> wrote:

> This way the UA would (slowly?) converge to correct downloadBufferTarget
> for any site for any given network connection. If the full length of the
> video clip is known, then downloadBufferTarget should probably be more
> along the lines of multiplier of full video clip length instead of
> static time period in seconds. This is required because a 5 second
> buffer could be enough for a 20 second clip but a 2 minute buffer could
> be required for one hour video. In both cases, the actual available
>

I'm confused--how is the required buffer size a function of the length of
the video?  Once the buffer is large enough to smooth out network
fluctuations, either you have the bandwidth to stream the video or you
don't; the length of the video doesn't enter into it.

-- 
Glenn Maynard


Re: [whatwg] Limiting the amount of downloaded but not watched video

2011-01-18 Thread Mikko Rantalainen
2011-01-17 23:32 EEST: Silvia Pfeiffer:
> On Mon, Jan 17, 2011 at 10:15 PM, Chris Pearce  wrote:
>> Perhaps we should only honour the downloadBufferTarget (or whatever measure
>> we use) when the media is in readyState HAVE_ENOUGH_DATA, i.e. if we're
>> downloading at a rate greater than what we require to playback in real time?
> 
> Hmm... it's certainly a necessary condition, but is it sufficient?
> 
> Probably if we ever end up in a buffering state (i.e.
> networkState=NETWORK_LOADING and readyState=HAVE_CURRENT_DATA or less)
> then we should increase the downloadBufferTarget or completely drop
> it, since we weren't able to get data from the network fast enough to
> continue feeding the decoding buffer. Even if after that we return to
> readyState=HAVE_ENOUGH_DATA, it's probably just a matter of time
> before we again have to go into buffering state.
> 
> Maybe it's more correct to say that we honour the downloadBufferTarget
> only when the readyState is *always* HAVE_ENOUGH_DATA during playback?

I think that downloadBufferTarget (seconds to prebuffer) should not be
content author specifiable. A sensible behavior would be

1. Set downloadBufferTarget to UA defined default (e.g. 5 seconds)
2. In case of a buffer underrun, double the downloadBufferTarget and store
this as the new default for the site (e.g. domain name)

This way the UA would (slowly?) converge to correct downloadBufferTarget
for any site for any given network connection. If the full length of the
video clip is known, then downloadBufferTarget should probably be more
along the lines of multiplier of full video clip length instead of
static time period in seconds. This is required because a 5 second
buffer could be enough for a 20 second clip, but a 2 minute buffer could
be required for a one hour video. In both cases the actual available
network bandwidth is slower than the required bandwidth by some ratio,
rather than by a static time period as it would be for simple delays. The
default buffer should be pretty small, to keep the startup delay short
for high bandwidth users and to reduce the wasted server bandwidth in
case the user does not view the video clip to the end. The buffer
doubling is effectively a binary search for the correct buffer size, and
storing it per site is required because bandwidth to different services
can vary a lot.

The above logic could be appended with another rule:

3. In case of successful video clip playback (no buffer underrun during
a full playback of a video clip), multiply downloadBufferTarget by 0.95
and store this as the new default for this site.

This would cause occasional buffer underruns in the long run (about a 5%
chance of a visible buffer underrun for a random video clip), but any
underrun would always double the buffer. Having this additional rule
would allow decreasing the downloadBufferTarget in case the network
bandwidth has been improved. It could make sense to save this multiplier
per site as well and tune this multiplier towards 1.0 in the long run
for any given site.
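
A sketch of that convergence, written as plain JS purely for readability (a
real implementation would live inside the UA rather than in page script, and
localStorage here just stands in for whatever per-site store the UA uses):

    function loadTarget(site) {
      return Number(localStorage.getItem('bufferTarget:' + site)) || 5; // rule 1: 5s default
    }
    function saveTarget(site, seconds) {
      localStorage.setItem('bufferTarget:' + site, String(seconds));
    }

    function onBufferUnderrun(site) {
      saveTarget(site, loadTarget(site) * 2);     // rule 2: double on underrun
    }
    function onCleanPlayback(site) {
      saveTarget(site, loadTarget(site) * 0.95);  // rule 3: decay slowly
    }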

PS. It could make sense to save these preferences per {site, connection
method} tuple in case one often uses e.g. 100 Mbps LAN connection and a
3G mobile data connection. Both cases should converge to different
downloadBufferTarget values for any given site (e.g. youtube).

-- 
Mikko


Re: [whatwg] Limiting the amount of downloaded but not watched video

2011-01-18 Thread Silvia Pfeiffer
On Tue, Jan 18, 2011 at 1:30 AM, Boris Zbarsky  wrote:
> On 1/17/11 6:04 PM, Boris Zbarsky wrote:
>>
>>  From a user's perspective (which is what I'm speaking as here), it
>> doesn't matter what the technology is. The point is that there is
>> prevalent UI out there right now where pausing a moving will keep
>> buffering it up and then you can watch it later. This is just as true
>> for 2-hour movies as it is for 2-minute ones, last I checked.
>>
>> So one question is whether this is a UI that we want to support, given
>> existing user familiarity with it. If so, there are separate questions
>> about how to support it, of course.
>
> I checked with some other users who aren't me, as a sanity check, and while
> all of them expected pausing a movie to buffer far enough to be able to play
> through when unpaused, none of them really expected the whole movie to
> buffer.  So it might in fact make the most sense to stick to buffering when
> paused until we're in the playthrough state and then stop, and have some
> other UI for making the movie available offline.

I think that's indeed one obvious improvement, i.e. when going to the
paused state, stop buffering once readyState=HAVE_ENOUGH_DATA (i.e. we
have reached canplaythrough state).

However, again, I don't think that's sufficient, because we will also
buffer during playback, and it is possible that we buffer fast enough
to have buffered e.g. the whole of a 10 min video by the time we hit
pause after 1 min and stop watching. That's far beyond canplaythrough,
and it's 9 min worth of wasted video download bandwidth. This is
where the suggested downloadBufferTarget would make sense. It would
basically specify how much more to download beyond HAVE_ENOUGH_DATA
before pausing the download.

Cheers,
Silvia.