I've tried a few ways of allocating buffers for use with libavformat.

I currently decode into one AVFrame and then copy the data into my own allocated buffers. The problem is that I'm spending 40% of my app's time just copying.
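
To make it concrete, here is roughly what that copy step looks like (just a sketch; my_planes, my_stride and the 4:2:0 assumption stand in for my own buffer class):

#include <string.h>
#include <libavcodec/avcodec.h>

/* Copy a decoded 8-bit 4:2:0 frame into my own planes, row by row.
 * my_planes/my_stride come from my buffer class (names are placeholders). */
static void copy_to_my_buffers(const AVFrame *src, uint8_t *my_planes[3],
                               const int my_stride[3], int width, int height)
{
    for (int p = 0; p < 3; p++) {
        int w = (p == 0) ? width  : width  / 2;   /* chroma planes are half size */
        int h = (p == 0) ? height : height / 2;
        for (int y = 0; y < h; y++)
            memcpy(my_planes[p] + y * my_stride[p],
                   src->data[p] + y * src->linesize[p], w);
    }
}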

My own buffer class has methods to create raw pixel data, and I'd rather decode straight into that. My buffers are aligned, and I round every line (stride) up to a 64-bit boundary.
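
For reference, the stride rounding is nothing fancy - each line is just padded out to the next 8-byte (64-bit) boundary, something like:

/* Round a line length up to the next 64-bit (8-byte) boundary. */
static int my_round_stride(int bytes_per_line)
{
    return (bytes_per_line + 7) & ~7;
}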

Looking through avcodec_default_get_buffer, I see such a complex set of allocations and alignment logic that it seems hard to duplicate. There also seems to be a whole 'internal buffer' that gets created, presumably for the codec to use. When I did a rough experiment putting my own pixel planes into the AVFrame, the codec did seem to have trouble with the missing internal buffer.

But when I look at, for instance, the MythTV code that uses libavformat (their get_avf_buffer function), its get_buffer callback just sticks MythTV's own stride and plane values into the AVFrame - that's what I'd like to do. They literally seem to just set:
pic->data[0-3]
pic->linesize[0-3]
pic->type = FF_BUFFER_TYPE_USER;
pic->age = 256 * 256 * 256 * 64;
And of course, it seems to work. Are they doing something else that I'm missing? Why don't they need to create the internal buffer for the codec?
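
To show what I have in mind, here's a rough sketch of a callback in that style. MyBuffer, my_acquire_buffer() and my_release_buffer() are placeholders for my own buffer class, not anything in libavcodec, and I'm not claiming this is exactly what MythTV does:

#include <libavcodec/avcodec.h>

typedef struct MyBuffer {        /* stand-in for my real buffer class */
    uint8_t *planes[4];          /* NULL for planes the format doesn't use */
    int      stride[4];
} MyBuffer;

MyBuffer *my_acquire_buffer(int width, int height);   /* my allocator */
void      my_release_buffer(MyBuffer *buf);           /* hand back to my pool */

static int my_get_buffer(struct AVCodecContext *c, AVFrame *pic)
{
    MyBuffer *buf = my_acquire_buffer(c->width, c->height);
    if (!buf)
        return -1;

    for (int i = 0; i < 4; i++) {
        pic->data[i]     = buf->planes[i];
        pic->linesize[i] = buf->stride[i];
    }
    pic->opaque = buf;                      /* so my release callback can find it */
    pic->type   = FF_BUFFER_TYPE_USER;      /* don't let libavcodec free it */
    pic->age    = 256 * 256 * 256 * 64;     /* the same magic "very old" value */
    return 0;
}

static void my_release(struct AVCodecContext *c, AVFrame *pic)
{
    my_release_buffer(pic->opaque);
    for (int i = 0; i < 4; i++)
        pic->data[i] = NULL;
}

/* hooked up with: ctx->get_buffer = my_get_buffer; ctx->release_buffer = my_release; */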

My other option is to use avcodec_default_get_buffer and then set pic->type = FF_BUFFER_TYPE_USER; so that the frame doesn't get deleted by libavformat. Is that a better option?
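
For that second option, I mean something like this (again just a sketch of the idea, and I'm not sure it's the intended use of the API):

static int my_get_buffer2(struct AVCodecContext *c, AVFrame *pic)
{
    int ret = avcodec_default_get_buffer(c, pic);   /* let libavcodec allocate */
    if (ret < 0)
        return ret;
    pic->type = FF_BUFFER_TYPE_USER;   /* mark it user-owned so it isn't deleted */
    return 0;
}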

Thanks,

Bruce Wheaton

(Resend from address I signed up with. Apologies if it comes through twice)
