On May 15, 2013, at 9:02 AM, Joe Flowers <[email protected]> wrote:

> Hi Brad, 
> 
> Very interesting comments. I am more worried about the speed hit of putting 
> it in a wrapper. We're in a real-time audio processing situation. Your 
> experience is most appreciated.

"Real-time": there indeed is the rub. I have had to solve two real-time video / 
audio network streaming use-cases over the past year, one in particular where 
minimum latency was absolutely imperative (<= .5 sec). What I found was that 
repeated memory allocation during frame processing resulted in my biggest hit 
to latency. So eliminate that where possible. For example, in my recent audio 
capture-encode-streaming use case I have a scenario where it is theoretically 
possible for the sample format, sample rate, and number of samples per buffer 
to change midstream during processing. I haven't encountered that, but 
according to the capture API it is possible. Should that happen, it would 
require the source data buffer, destination data buffer, and resampling context 
to also be recreated. So I create once, then check every frame to see if 
anything has changed -- if it has, reallocate, otherwise, reuse. That was a 
performance tweak I made from allocating every frame, after I got the
  thing working properly. 
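Roughly, the pattern looks like the sketch below -- just an illustration, 
assuming libswresample (swr_alloc_set_opts, av_samples_alloc_array_and_samples); 
the ResampleState struct and ensure_resampler() are made-up names, not anything 
from my actual code: 

#include <libavutil/channel_layout.h>
#include <libavutil/mathematics.h>
#include <libavutil/mem.h>
#include <libavutil/samplefmt.h>
#include <libswresample/swresample.h>

/* Cached conversion state -- names here are just for illustration. */
typedef struct ResampleState {
    SwrContext *swr;
    uint8_t   **out_data;        /* destination buffer */
    int         out_linesize;
    enum AVSampleFormat in_fmt;  /* last-seen input parameters */
    int         in_rate;
    int64_t     in_layout;
    int         in_nb_samples;
} ResampleState;

/* Rebuild the resampler and destination buffer only when the input
 * parameters have actually changed; otherwise reuse what we have. */
static int ensure_resampler(ResampleState *st,
                            enum AVSampleFormat in_fmt, int in_rate,
                            int64_t in_layout, int in_nb_samples,
                            enum AVSampleFormat out_fmt, int out_rate,
                            int64_t out_layout)
{
    if (st->swr &&
        in_fmt == st->in_fmt && in_rate == st->in_rate &&
        in_layout == st->in_layout && in_nb_samples == st->in_nb_samples)
        return 0;                                  /* nothing changed: reuse */

    swr_free(&st->swr);                            /* tear down old state */
    if (st->out_data) {
        av_freep(&st->out_data[0]);
        av_freep(&st->out_data);
    }

    st->swr = swr_alloc_set_opts(NULL,
                                 out_layout, out_fmt, out_rate,
                                 in_layout,  in_fmt,  in_rate,
                                 0, NULL);
    if (!st->swr || swr_init(st->swr) < 0)
        return -1;

    int out_nb = av_rescale_rnd(in_nb_samples, out_rate, in_rate, AV_ROUND_UP);
    if (av_samples_alloc_array_and_samples(&st->out_data, &st->out_linesize,
            av_get_channel_layout_nb_channels(out_layout),
            out_nb, out_fmt, 0) < 0)
        return -1;

    st->in_fmt        = in_fmt;
    st->in_rate       = in_rate;
    st->in_layout     = in_layout;
    st->in_nb_samples = in_nb_samples;
    return 0;
}

With that in place the per-frame path is just a call to ensure_resampler() 
followed by swr_convert(), and in the normal case no allocation happens at all. 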

The other tip, as far as locking goes: the best locking design is one where 
locking isn't needed at all. Again, I don't know what you can pull off in your 
design, but if you can avoid locks entirely just by managing multiple instances 
of things, it is worth considering. In other words, don't send all the cars 
down the same lane -- add another lane to your freeway. In real-time 
processing, speed is the prime directive, so it is usually fine to trade a 
little memory for throughput. 
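
If it helps, here is the kind of thing I mean, in pthread terms -- purely a 
sketch, and Worker / worker_main / start_workers are made-up names: each thread 
owns its own SwrContext (and buffers), so nothing is shared and nothing needs a 
mutex. 

#include <pthread.h>
#include <libswresample/swresample.h>

/* Each worker owns its own resampler -- its own "lane" -- so there is
 * nothing shared between threads and nothing to lock. */
typedef struct Worker {
    pthread_t   thread;
    SwrContext *swr;   /* private to this thread, never shared */
    /* per-thread input/output buffers would live here too */
} Worker;

static void *worker_main(void *arg)
{
    Worker *w = (Worker *)arg;
    /* process frames using only w->swr and w's own buffers;
     * no mutex needed because no other thread touches them */
    (void)w;
    return NULL;
}

static void start_workers(Worker *workers, int n)
{
    int i;
    for (i = 0; i < n; i++) {
        workers[i].swr = swr_alloc();   /* a little extra memory per lane */
        pthread_create(&workers[i].thread, NULL, worker_main, &workers[i]);
    }
}

The cost is one context and one set of buffers per thread, but that is exactly 
the memory-for-throughput trade I'm talking about. 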

Good luck...

Brad