I've been trying to model the new NetStream architecture.
In doing so, questions arose very early in the process, so
here they are.
Consider the following model:
            +------------+
            |   input    |
            +------------+
            |   bytes    |
            +------------+
                  |
                  v
              ( parser )
                  |
        +---------+---------+
        |                   |
        v                   v
+----------------+  +----------------+  +------------+
| video_buffer   |  | audio_buffer   |  | playhead   |
+----------------+  +----------------+  +------------+
| encoded_frames |  | encoded_frames |  | cur_time   |
+----------------+  +----------------+  | play_state |
                                        +------------+
In this model, the (parser) process would run in its own thread and
keep the video/audio buffers filled. The buffers would need a concept
of 'timestamp' to answer ActionScript-exposed queries like "how many
seconds of media do we have in the buffer?" and to accept requests
from ActionScript like "buffer this many seconds of frames before
starting playback".
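
To make that concrete, here's a minimal C++ sketch of what such a
timestamp-aware buffer could look like. EncodedFrame and MediaBuffer
are just placeholder names of mine, nothing in the tree yet:

#include <cstdint>
#include <deque>
#include <mutex>
#include <vector>

// One encoded frame as it comes out of the parser, tagged with its
// presentation timestamp in milliseconds.
struct EncodedFrame {
    std::uint64_t timestampMs;       // presentation time of this frame
    std::vector<std::uint8_t> data;  // raw encoded payload
};

// A thread-safe FIFO of encoded frames: the parser thread pushes,
// the decoding/playhead side pops, and both sides can ask how many
// seconds of media are currently buffered.
class MediaBuffer {
public:
    void push(EncodedFrame frame) {
        std::lock_guard<std::mutex> lock(mutex_);
        frames_.push_back(std::move(frame));
    }

    // Answers "how many seconds of media do we have in the buffer?"
    // from the distance between the first and last timestamps.
    double bufferedSeconds() const {
        std::lock_guard<std::mutex> lock(mutex_);
        if (frames_.size() < 2) return 0.0;
        return (frames_.back().timestampMs
                - frames_.front().timestampMs) / 1000.0;
    }

    // "Buffer this many seconds of frames before starting playback":
    // the playhead polls this until it returns true.
    bool reachedBufferTime(double seconds) const {
        return bufferedSeconds() >= seconds;
    }

private:
    mutable std::mutex mutex_;
    std::deque<EncodedFrame> frames_;
};

The parser thread would then just push() into the appropriate buffer
as frames come out of the demuxer, and the playhead would hold
playback until reachedBufferTime() turns true for whatever value
ActionScript set as the buffer time.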
I can see that the FLV format allows this, in that timestamps are
associated with encoded frames. But is this also true for other
kinds of formats? What about MPEG, for instance? And Ogg?
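
For reference, what makes the per-frame association trivial in FLV is
that every tag carries its own timestamp right in its 11-byte header;
roughly (field widths per the FLV spec, the struct itself is just
illustrative):

#include <cstdint>

// The 11-byte FLV tag header; every audio/video tag carries its own
// millisecond timestamp, so frames and timestamps pair up 1:1.
struct FlvTagHeader {
    std::uint8_t  tagType;      // 8 = audio, 9 = video, 18 = script data
    std::uint32_t dataSize;     // 24 bits: payload size in bytes
    std::uint32_t timestampMs;  // 24 bits + 8-bit extension, milliseconds
    std::uint32_t streamId;     // 24 bits, always 0
};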
I'll add more elements to the diagram as feedback flows in...
--strk;
() ASCII Ribbon Campaign
/\ Keep it simple!