On Mar 27, 2011, at 8:01 PM, Ian Hickson wrote:

> 
> It's been brought to my attention that there are aspects of the 
> MediaController design that are hard to implement; in particular around 
> the ability to synchronise or desynchronise media while it is playing 
> back.
> 
> To help with this, I propose to put in some blocks on the API on the short 
> term so that things that are hard to implement today will simply throw 
> exceptions or otherwise fail in detectable and predictable ways.
> 
> However, to do that I need a better idea of what exactly is hard to 
> implement.
> 
> It would be helpful if you could describe exactly what is easy and what is 
> hard (that is, glitchy or simply unsupported by common media frameworks) 
> in terms of media synchronisation, in particular along the following axes:
> 

Hi Ian,

Below is Eric's and my feedback on the difficulty of implementing this 
proposal in Apple's port of WebKit:

> * multiple in-band tracks vs multiple independent files

Playing in-band tracks from a single element will always be more efficient than 
playing multiple independent files or tracks, because the media engine can 
optimize its I/O and decoding pipelines at the lowest level.  
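
To make the two patterns concrete, here is a minimal sketch (TypeScript; the 
file names are hypothetical, and the 'mediagroup' content attribute is the one 
from the proposal):

    // Case 1: one element, multiple in-band tracks. The media engine can
    // service this from a single I/O and decoding pipeline.
    const single = document.createElement('video');
    single.src = 'movie-with-inband-tracks.mov';   // hypothetical file

    // Case 2: independent files grouped together. Each element gets its
    // own pipeline, which the engine then has to keep in lockstep.
    const main = document.createElement('video');
    main.src = 'movie.mov';                        // hypothetical file
    main.setAttribute('mediagroup', 'group1');     // proposed attribute

    const commentary = document.createElement('video');
    commentary.src = 'commentary.mov';             // hypothetical file
    commentary.setAttribute('mediagroup', 'group1');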

> * playing tracks synchronised at different offsets

However, if the in-band tracks are to be played at different time offsets, or 
at different rates, playback becomes just as inefficient as playing independent 
files.  To implement this, we would have to open two instances of the movie, 
enable different tracks on each, and then play the two instances in sync.
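
In script terms, that fallback is roughly the following sketch (hypothetical 
file name, arbitrary half-second offset); the per-instance track enabling 
itself happens inside the media framework, so there is no stable web API to 
show for that step:

    // Two instances of the same resource, so each can be decoded at its
    // own offset; the engine cannot share one pipeline between them.
    const a = document.createElement('video');
    a.src = 'movie.mov';
    const b = document.createElement('video');
    b.src = 'movie.mov';

    const OFFSET = 0.5;  // seconds; arbitrary example offset

    a.addEventListener('playing', () => {
      b.currentTime = Math.max(0, a.currentTime - OFFSET);
      b.play();
    });
    a.play();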

> * playing tracks at different rates

In addition to the limitation listed above, efficient playback of tracks at 
different rates will require all tracks to be played in the same direction.  
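
If the API were to allow per-track rates anyway, a user agent could at least 
fail predictably, in the spirit of the blocks Ian proposes; a hypothetical 
sketch:

    // Hypothetical check: all rates in a synced group must share a
    // direction (all non-negative or all negative) to play efficiently.
    function assertSameDirection(rates: number[]): void {
      const forward = rates[0] >= 0;
      if (!rates.every((r) => (r >= 0) === forward)) {
        throw new DOMException('mixed playback directions in one group',
                               'NotSupportedError');
      }
    }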

> * changing any of the above while media is playing vs when it is stopped

Modifying the media groups while the media is playing is probably impossible to 
do without stalling.  The media engine may have thrown out unneeded data from 
disabled tracks and may have to rebuffer that data, even in the case of in-band 
tracks.
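
From script, such a stall would surface through the ordinary buffering events; 
a sketch, assuming the audioTracks API from the related in-band tracks 
proposal:

    const video = document.querySelector('video')!;

    // If enabling a track forces the engine to re-buffer discarded data,
    // the element stalls and fires 'waiting' until playback can resume.
    video.addEventListener('waiting', () => {
      console.log('stalled: re-buffering after a track change');
    });

    // audioTracks is proposed, not in current DOM typings, hence the cast.
    (video as any).audioTracks[1].enabled = true;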

> * adding or removing tracks while media is playing vs when it is stopped

As above.

> * changing overall playback rate while a synced set of media is playing

This is possible to do efficiently.
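
For example, a single rate change on the shared controller (assuming the 
proposed MediaController interface) retimes every slaved element at once:

    // 'controller' comes from the proposal and is not in current DOM
    // typings, hence the cast; one assignment covers the whole group.
    const master = document.querySelector('video') as HTMLVideoElement & {
      controller?: { playbackRate: number };
    };
    if (master.controller) {
      master.controller.playbackRate = 2.0;  // whole group at 2x speed
    }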

> Based on this I can then limit the API accordingly.
> 
> (Any other feedback you may have on this proposed API is of course also 
> very welcome.)

From a user's point of view, your proposal seems more complicated than the 
basic use cases merit.  For example, attempting to fix the synchronization of 
improperly authored media with micro-adjustments of the playback rate isn't 
likely to be very successful or accurate.  The metronome case, while an 
interesting experiment, would be better served through something like the 
proposed Audio API.  

Slaving multiple media elements' playback rate and current time to a single 
master media element, as in Silvia and Eric's proposal, seems to meet the needs 
of the broadest use cases.  If independent playback rates become necessary 
later, that support can be added in a future revision.
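
A minimal sketch of that master/slave behavior, using only events and 
attributes that already exist on HTMLMediaElement (the 0.25 s drift threshold 
is an arbitrary choice):

    function slaveTo(master: HTMLMediaElement, slave: HTMLMediaElement): void {
      // Mirror the master's transport controls...
      master.addEventListener('play', () => { slave.play(); });
      master.addEventListener('pause', () => { slave.pause(); });
      master.addEventListener('ratechange', () => {
        slave.playbackRate = master.playbackRate;
      });
      master.addEventListener('seeked', () => {
        slave.currentTime = master.currentTime;
      });
      // ...and nudge the clock only when drift becomes noticeable.
      master.addEventListener('timeupdate', () => {
        if (Math.abs(slave.currentTime - master.currentTime) > 0.25) {
          slave.currentTime = master.currentTime;
        }
      });
    }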

-Jer
