Hi
I am using the FFmpeg 1.2 libraries to build an application that reads and 
decodes frames for processing. It handles MPEG-2/H.264 from UDP or from a file.

My application is multi-threaded: one thread reads and decodes frames, then 
pushes the AVFrames onto a queue (I memcpy the AVFrame struct pointed to by 
the AVFrame*, not the underlying frame data).
Another thread loops, pops AVFrame pointers off the queue, and runs the 
processing that I have to do.
This works most of the time, but I think the frames are sometimes getting 
garbled.

My question is: what is the proper way to build an AVFrame queue? What I would 
rather not do is memcpy the actual image data of each frame.

It looks to me like the data pointers in the decoded frames are recycled 
about every 10 frames, i.e. FFmpeg is internally reusing the same buffers.

Do I have to use reference counting, or write my own AVFrame 
allocation/freeing routines?

Any help is appreciated.

Thanks



_______________________________________________
Libav-user mailing list
[email protected]
http://ffmpeg.org/mailman/listinfo/libav-user