Would someone please mind pointing me to, or offering a short
explanation of, the lifecycle of an AVFrame?
I have an existing frame class in my app that wraps pixel buffers,
PBOs, FBOs and some platform-native (Mac) frame types, so I figured
adding an AVFrame would be fine, but I ran into some trouble.
First off, I tried to use my own pixel buffer: I held an AVFrame and
filled in its data and linesize members from the memory I had
allocated, and I added a get_buffer callback that did nothing. Of
course, that didn't work, since, I find, the codec allocates internal
buffers and also re-uses some of the frames it has produced.
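For what it's worth, here is roughly what a get_buffer-style callback has to do to be usable at all: it must actually point the frame's data/linesize members at valid planar memory before returning, which is why a do-nothing callback fails. This is only a stand-in sketch with mock types (MyFrame, MyBuffer and my_get_buffer are made up for illustration, not the real libav API):

```c
#include <stdint.h>

/* Stand-in for AVFrame: the real struct has data[] and linesize[]
 * arrays that the codec writes decoded planes through. */
typedef struct MyFrame {
    uint8_t *data[4];
    int      linesize[4];
    void    *opaque;   /* slot left for the app's own use */
} MyFrame;

/* A caller-owned pixel buffer we would like decoding to land in. */
typedef struct MyBuffer {
    uint8_t *planes[4];
    int      strides[4];
} MyBuffer;

/* The minimum a get_buffer-style callback must do: hand the frame
 * real memory. Returning without setting data/linesize (the
 * "do nothing" version) leaves the codec writing through NULL. */
static int my_get_buffer(MyFrame *frame, MyBuffer *buf)
{
    for (int i = 0; i < 4; i++) {
        frame->data[i]     = buf->planes[i];
        frame->linesize[i] = buf->strides[i];
    }
    return 0; /* 0 = success, following libavcodec's convention */
}
```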
So lifecycle becomes important. I want the AVFrame to be created using
libav functions, then I'll update my frame to point at it. But then I
need to hold onto that frame for an indefinite amount of time. How
will I know when the codec is finished with it, for instance?
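In case it helps frame the question: the usual answer to "how do I know when the codec is finished" is some form of reference counting, where the frame's storage stays alive until every holder (codec and app alike) has released it. A toy model of that contract, with made-up names (RefFrame, frame_ref, frame_unref are not libav calls):

```c
#include <stdlib.h>

/* Toy model of a reference-counted frame: the storage lives until
 * the last holder (codec or app) releases its reference. */
typedef struct RefFrame {
    int   refcount;
    void *pixels;
} RefFrame;

static RefFrame *frame_alloc(size_t size)
{
    RefFrame *f = malloc(sizeof(*f));
    f->refcount = 1;              /* the allocator holds one reference */
    f->pixels   = malloc(size);
    return f;
}

static void frame_ref(RefFrame *f) { f->refcount++; }

/* Returns 1 if this release actually freed the frame,
 * 0 if other holders remain. */
static int frame_unref(RefFrame *f)
{
    if (--f->refcount > 0)
        return 0;
    free(f->pixels);
    free(f);
    return 1;
}
```

Under this model, "holding onto a frame for an indefinite amount of time" just means taking your own reference and dropping it when you are done; the codec doing the same on its side is what makes the lifetimes compose.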
If someone could fill in the blanks, that would help:
An AVFrame's buffers are allocated when _
An AVFrame's buffers are/might be de-allocated when _
unless it's a reference frame, then _
I'm thinking that a common trick might be to store the holding frame's
pointer in a member of the AVFrame? Is that what people do? Then, in
the callback, I could follow that pointer back to the holder frame to
set anything needed (and do some locking, in my case).
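A sketch of that pattern as I understand it: the codec-facing frame carries an opaque back-pointer to the wrapping holder, and the callback follows it and takes the holder's lock before touching shared state. The types and callback name here (HolderFrame, on_release) are illustrative, not libav API:

```c
#include <pthread.h>
#include <stddef.h>

/* Stand-in frame with an opaque slot for the app's back-pointer,
 * mirroring AVFrame's opaque member. */
typedef struct Frame {
    void *opaque;
} Frame;

/* The app-side wrapper that owns the frame and its lock. */
typedef struct HolderFrame {
    pthread_mutex_t lock;
    Frame           frame;
    int             in_use;
} HolderFrame;

static void holder_init(HolderFrame *h)
{
    pthread_mutex_init(&h->lock, NULL);
    h->frame.opaque = h;   /* back-pointer from frame to holder */
    h->in_use = 1;
}

/* What a release callback would do: recover the holder from the
 * opaque pointer, lock it, and update whatever state is needed. */
static void on_release(Frame *f)
{
    HolderFrame *h = f->opaque;
    pthread_mutex_lock(&h->lock);
    h->in_use = 0;
    pthread_mutex_unlock(&h->lock);
}
```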
Thanks,
Bruce Wheaton
_______________________________________________
libav-user mailing list
[email protected]
https://lists.mplayerhq.hu/mailman/listinfo/libav-user