Randell Jesup wrote:
So, I think what should be happening here is that a MediaStream (at least one with a realtime source) should throw away data at the output if the consumer doesn't consume it.
The way this was originally designed, a MediaInput to a ProcessedMediaStream has blockInput and blockOutput flags. If the output of the ProcessedMediaStream is blocked (e.g., because it's been paused), then any active MediaInput with blockInput set is blocked.
In the other direction, if the MediaStream feeding a MediaInput is blocked and that MediaInput has its blockOutput flag set, then the output of the ProcessedMediaStream is blocked.
Both of these things propagate the blocking status through the graph. I don't know how this interferes with our plan to add cycles to the graph, since apparently we have to do that to support the Web Audio API.
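To make the two rules concrete, here's a minimal sketch (TypeScript-ish; only MediaInput, ProcessedMediaStream, blockInput and blockOutput come from the proposal, the class and method names are made up):

interface StreamNode {
  blocked: boolean;            // whether this stream is currently blocked
}

interface MediaInput {
  source: StreamNode;          // the MediaStream feeding this input
  active: boolean;
  blockInput: boolean;
  blockOutput: boolean;
}

class ProcessedMediaStreamModel implements StreamNode {
  blocked = false;
  inputs: MediaInput[] = [];

  // Rule 1: if this stream's output is blocked (e.g. it has been paused),
  // block every active input whose blockInput flag is set.
  propagateToInputs(): void {
    if (!this.blocked) return;
    for (const input of this.inputs) {
      if (input.active && input.blockInput) {
        input.source.blocked = true;
      }
    }
  }

  // Rule 2: if any input's source stream is blocked and that input has
  // blockOutput set, this stream's output becomes blocked as well.
  propagateFromInputs(): void {
    if (this.inputs.some((i) => i.blockOutput && i.source.blocked)) {
      this.blocked = true;
    }
  }
}

In a full graph these rules would presumably be re-applied until a fixed point is reached, which is exactly the part that cycles would make tricky.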
I've always viewed MediaStreams made up of tracks from other MediaStreams as just a special case of ProcessedMediaStream. If it were an actual ProcessedMediaStream, you could do everything you want to do here with these blocking flags, e.g. unset blockInput on the input feeding PeerConnection/local preview. Perhaps it makes sense to have defaults that depend on the type of source (since we don't have the full ProcessedMediaStream API implemented yet).
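Roughly what I mean, assuming the proposed-but-unimplemented API (createProcessedStream and MediaInputHandle are made-up stand-ins; only the blockInput/blockOutput flags are from the proposal):

interface MediaInputHandle {
  blockInput: boolean;
  blockOutput: boolean;
}

declare function createProcessedStream(source: MediaStream): {
  stream: MediaStream;
  input: MediaInputHandle;
};

function wireUpPreview(camera: MediaStream, preview: HTMLVideoElement): void {
  const processed = createProcessedStream(camera);
  // Unset blockInput so that blocking/pausing the preview or PeerConnection
  // consumer does not propagate back and block the realtime camera source;
  // the unconsumed output is simply discarded instead.
  processed.input.blockInput = false;
  preview.srcObject = processed.stream;
}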
I worry about rules like "a MediaStream should throw away data at the output if the consumer doesn't consume it", because they require some threshold for how long "not consuming" has to last before data is dropped. Either the threshold is so aggressive that you throw away data when you really don't want to (e.g., an archival recording where you don't care about processing latency), or it's so loose that you add latency where you don't want it (because you aren't sure yet whether you should be throwing data away). I think explicit flags are better.
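For illustration only, this is the kind of implicit policy I'm objecting to (MAX_UNCONSUMED_MS and maybeDropOutput are made up):

const MAX_UNCONSUMED_MS = 500; // no single value is right for every consumer

function maybeDropOutput(unconsumedMs: number, dropOldest: () => void): void {
  // Too small a threshold drops data an archival recorder wanted to keep;
  // too large a threshold adds latency a realtime consumer didn't want.
  if (unconsumedMs > MAX_UNCONSUMED_MS) {
    dropOldest();
  }
}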

