On Thu, Aug 9, 2012 at 7:28 AM, Randell Jesup <[email protected]> wrote:

> Use cases:
> 1) realtime mediastream (from getUserMedia() or PeerConnection) as input
>     Perhaps using MediaStream Processing to modify the media before
> playback
> 2) non-realtime mediastream (from captureStreamUntilEnded/etc) as input
>     For example, done as <video> -> MediaStream -> MediaStream Processing
> to filter/effects -> <video>
>
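
To make #2 concrete, the pipeline is roughly the sketch below. This is
written against the proposed APIs - captureStreamUntilEnded() and direct
stream-to-element assignment - so the names may differ from what we ship
(they may be moz-prefixed for now):

  var source = document.getElementById("source-video");
  var output = document.getElementById("output-video");

  // Capture the source element's playback as a MediaStream that ends
  // when the media ends.
  var stream = source.captureStreamUntilEnded();

  // A MediaStream Processing node applying filters/effects would sit
  // here, between capture and playback.

  // Play the (possibly processed) stream in the output element, whose
  // controls are the ones the user sees.
  output.srcObject = stream;
  output.play();
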
> For #1, on pause you want to throw away all data and resume when
> unpaused.  Technically, there's some advantage to dropping data at the
> output (since that means no delay on restart), but if user-noticeable
> delays can't accumulate in the MediaStream graph (via a MediaStream
> Processing graph doing 'ducking', for example), then the drop can happen
> at the output, at the input to the MediaStream, or in the code feeding
> the MediaStream.  If there is significant delay, you really don't want
> to show seconds-old video for a while after resuming and then jump
> forward, and you'd prefer not to wait a second to resume - but that
> might be OK.
>
> I know we had discussion on this; we need to decide on the right way/place
> to handle it and do it.  (Item 2 below *might* make this moot.)
>

With the patches in bug 779715 it becomes easy to do whatever you want
here. If you want your WebRTC stream to never be blocked by a downstream
consumer being paused, just introduce a TrackUnionStream which takes your
SourceMediaStream(s) as input and doesn't propagate blocking from output to
input, and only expose that TrackUnionStream to consumers.
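
In script-visible terms the effect is something like the sketch below.
TrackUnionStream lives inside the MediaStreamGraph and isn't exposed to
content, so createTrackUnion() here is a hypothetical stand-in for
whatever we end up exposing:

  // Hypothetical stand-in for an internal TrackUnionStream: it takes
  // the source as input and does not propagate blocking from its
  // output back to that input.
  var source = webrtcStream;               // backed by a SourceMediaStream
  var union = createTrackUnion([source]);  // hypothetical name

  // Expose only `union`; a consumer that pauses `union` can no longer
  // block `source` itself, which keeps consuming realtime data.
  outputVideo.srcObject = union;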

> For #2, you could react the same way, but use cases like "Playback video
> with effects" would very much want the visible element's video controls
> to control the source video element - pause would pause the source,
> resume would resume it, seek would seek it, etc.  Pause is the simplest
> case; the others would require more complex proxying of controls.
>
> If we define events on MediaStreams/Tracks that bubble up the graph,
> then a video element could bubble up a Pause, Seek, etc. request to the
> source, which could react appropriately (realtime elements would just
> start throwing away data; video_elem.captureStreamUntilEnded() nodes
> might just apply those commands to the video element).  Note that Seek
> in particular might be problematic unless the element can push *down*
> the graph updates to the current time and start/end times, so the
> playback element knows what to show in the UI.
>
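
For concreteness, from script that might look like the following. None
of this is specced; dispatchUpstream() and onupstreamevent are made-up
names for the mechanism you describe:

  // Sink side: the visible element's controls raise requests that
  // travel up the stream graph toward the source.
  outputVideo.addEventListener("pause", function () {
    outputStream.dispatchUpstream({ type: "pause" });  // hypothetical
  });
  outputVideo.addEventListener("seeking", function () {
    outputStream.dispatchUpstream({ type: "seek",
                                    time: outputVideo.currentTime });
  });

  // Source side: a capture node applies the requests to its element;
  // a realtime source would instead just start/stop discarding data.
  sourceStream.onupstreamevent = function (e) {  // hypothetical
    if (e.type === "pause") sourceVideo.pause();
    if (e.type === "seek") sourceVideo.currentTime = e.time;
  };
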
> There are some interesting cases to work out with composite and
> fanned-out MediaStreamTracks/MediaStreams...
>
> Maybe this is all too complex, or maybe it can all be dumped into the
> application-layer JS (though I don't think the app has that level of
> control/interaction with the media element UI).  Or we can decide we
> don't care, and if you use MediaStreams at all you get no media element
> UI for this type of thing - all UI must be in the app.  (This may
> confuse users, since they don't know how you internally plumbed your
> page and streams.)
>

An alternative would be to implement the HTML5 MediaController API and have
authors use that to keep the output media element and the source media
element in sync.
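
If we implemented it, authors would write something like this (per the
HTML5 spec as it stands; MediaController has little implementation
support so far):

  // Slave both elements to one MediaController so that play/pause/seek
  // on the controller affect source and output together.
  var controller = new MediaController();
  sourceVideo.controller = controller;
  outputVideo.controller = controller;

  controller.pause();           // pauses both elements
  controller.currentTime = 30;  // seeks both in lockstep
  controller.play();            // resumes both

The same grouping can be done declaratively by giving both video
elements the same mediagroup="" attribute.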

Rob
-- 
“You have heard that it was said, ‘Love your neighbor and hate your enemy.’
But I tell you, love your enemies and pray for those who persecute you,
that you may be children of your Father in heaven. ... If you love those
who love you, what reward will you get? Are not even the tax collectors
doing that? And if you greet only your own people, what are you doing more
than others?” [Matthew 5:43-47]