Hi,

  I am consuming a multi-program transport stream with several video
streams and decoding them simultaneously. This works well.

I am currently doing it all on a single thread.
Each AVPacket returned by av_read_frame() is checked for its
stream_index and passed to the *corresponding* decoder.
Hence, I have one AVCodecContext per decoded elementary stream. Each such
AVCodecContext handles exactly one elementary stream, calling
avcodec_decode_video2() etc.
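The dispatch logic above can be modeled roughly as follows (placeholder Packet/Decoder structs stand in for the real AVPacket/AVCodecContext, and decode() stands in for avcodec_decode_video2() — names here are illustrative only):

```c
#include <assert.h>
#include <stddef.h>

/* Placeholder types standing in for AVPacket / AVCodecContext;
 * the names and fields are illustrative, not the real structs. */
typedef struct Packet { int stream_index; } Packet;
typedef struct Decoder { int frames_decoded; } Decoder;

/* Hypothetical stand-in for avcodec_decode_video2(). */
static void decode(Decoder *dec, const Packet *pkt) {
    (void)pkt;
    dec->frames_decoded++;
}

/* Single-threaded dispatch: route each packet to the decoder
 * that owns its elementary stream. */
static void dispatch(Decoder *decoders, size_t n_streams, const Packet *pkt) {
    if (pkt->stream_index >= 0 && (size_t)pkt->stream_index < n_streams)
        decode(&decoders[pkt->stream_index], pkt);
}
```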

With the current single-threaded design, the next packet isn't read until
the previous one has been fully decoded.
I'd like to move to a multi-threaded design where each AVCodecContext
lives on its own thread with a dedicated concurrent SPSC AVPacket queue,
and the master thread calls av_read_frame() and pushes each coded packet
into the relevant queue (Actor Model / Erlang style).
Note that each elementary stream is always decoded by the same single
thread.
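A minimal sketch of that design, with a mutex/condition-variable bounded queue standing in for a real lock-free SPSC queue, and a plain Pkt struct standing in for AVPacket. (In real code, each packet handed across threads would have to own its data, e.g. via av_dup_packet() or av_packet_clone(), since the demuxer may reuse internal buffers; that detail is elided here.)

```c
#include <pthread.h>
#include <stdbool.h>

#define QCAP 64

/* Placeholder for AVPacket; real code would queue owning copies. */
typedef struct Pkt { int stream_index; } Pkt;

/* Bounded single-producer/single-consumer queue, locked for simplicity. */
typedef struct Queue {
    Pkt buf[QCAP];
    int head, tail, count;
    bool done;
    pthread_mutex_t mu;
    pthread_cond_t nonempty, nonfull;
} Queue;

static void q_init(Queue *q) {
    q->head = q->tail = q->count = 0;
    q->done = false;
    pthread_mutex_init(&q->mu, NULL);
    pthread_cond_init(&q->nonempty, NULL);
    pthread_cond_init(&q->nonfull, NULL);
}

static void q_push(Queue *q, Pkt p) {
    pthread_mutex_lock(&q->mu);
    while (q->count == QCAP)
        pthread_cond_wait(&q->nonfull, &q->mu);
    q->buf[q->tail] = p;
    q->tail = (q->tail + 1) % QCAP;
    q->count++;
    pthread_cond_signal(&q->nonempty);
    pthread_mutex_unlock(&q->mu);
}

/* Returns false once the queue is drained and closed. */
static bool q_pop(Queue *q, Pkt *out) {
    pthread_mutex_lock(&q->mu);
    while (q->count == 0 && !q->done)
        pthread_cond_wait(&q->nonempty, &q->mu);
    if (q->count == 0) { pthread_mutex_unlock(&q->mu); return false; }
    *out = q->buf[q->head];
    q->head = (q->head + 1) % QCAP;
    q->count--;
    pthread_cond_signal(&q->nonfull);
    pthread_mutex_unlock(&q->mu);
    return true;
}

static void q_close(Queue *q) {
    pthread_mutex_lock(&q->mu);
    q->done = true;
    pthread_cond_broadcast(&q->nonempty);
    pthread_mutex_unlock(&q->mu);
}

/* One worker per elementary stream; each would own one AVCodecContext
 * and call avcodec_decode_video2() on every popped packet. */
typedef struct Worker { Queue q; long decoded; } Worker;

static void *worker_main(void *arg) {
    Worker *w = arg;
    Pkt p;
    while (q_pop(&w->q, &p))
        w->decoded++;  /* decode would happen here */
    return NULL;
}
```

The master thread would replace its direct decode call with a q_push() into the worker that owns the packet's stream_index, and q_close() each queue at EOF.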

Before I refactor my code to do this, I'd like to know if there is anything
on the avlib side *preventing* me from implementing this approach.

   - An AVPacket points to both internal and external data. Is any of
   that data shared between elementary streams?
   - What should I beware of?

Please advise,
Thanks,
Adi
_______________________________________________
Libav-user mailing list
[email protected]
http://ffmpeg.org/mailman/listinfo/libav-user
