On sextidi 6 fructidor, year CCXXIV, Perette Barella wrote:
> My assumption has been that since AVStream->codec exists and was filled in
> by avformat_new_stream and other functions, and *that there was an
> association between an AVStream and its AVCodec* so that all these
> different components worked together.
Multimedia formats contain information about codecs that is sometimes needed for decoding, for example the resolution or the sample rate. A structure is needed to carry that information from the demuxer to the decoder, and from the encoder to the muxer.

In the past, the AVCodecContext structure itself was used for that, and the AVStream structure contained an instance of it for that purpose. Naturally, people started using the AVCodecContext in AVStream to do the actual decoding. It progressively became apparent that this was a bad idea: the AVCodecContext structure is complex, some fields are only used by certain codecs, some fields change during encoding or decoding, and so on.

Thus, a new structure was introduced, AVCodecParameters, containing just the fields that are universally useful for carrying the information from the demuxers to the decoders. It cannot be used for decoding or encoding itself. There are utility functions to copy the relevant fields from AVCodecParameters to AVCodecContext and back; see the sketch at the end of this message.

> And it would
> explain why it takes so much code to do anything with lav as opposed to
> gstreamer or other libraries.

The FFmpeg libraries are lower level: they give you more control, at the cost of more pedestrian code. Also, note that "gstreamer or other libraries" usually use the FFmpeg libraries internally for most of their work.
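For illustration, here is a minimal sketch of setting up a decoder from AVStream->codecpar with avcodec_parameters_to_context(), rather than from the deprecated AVStream->codec. The file name is a placeholder and error handling is reduced to bare return statements; this is an outline of the usual flow, not code from the original message.

/* Sketch: open a file, pick the best video stream, and build a real
 * AVCodecContext for decoding from the stream's AVCodecParameters. */
#include <libavformat/avformat.h>
#include <libavcodec/avcodec.h>

int main(void)
{
    AVFormatContext *fmt = NULL;

    if (avformat_open_input(&fmt, "input.mkv", NULL, NULL) < 0) /* placeholder name */
        return 1;
    if (avformat_find_stream_info(fmt, NULL) < 0)
        return 1;

    /* The demuxer has filled st->codecpar for each stream. */
    int idx = av_find_best_stream(fmt, AVMEDIA_TYPE_VIDEO, -1, -1, NULL, 0);
    if (idx < 0)
        return 1;
    AVStream *st = fmt->streams[idx];

    /* The parameters only carry information; decoding needs an actual
     * AVCodecContext, allocated separately. */
    const AVCodec *dec = avcodec_find_decoder(st->codecpar->codec_id);
    AVCodecContext *ctx = avcodec_alloc_context3(dec);
    if (!dec || !ctx)
        return 1;

    /* Copy the relevant fields from AVCodecParameters to AVCodecContext. */
    if (avcodec_parameters_to_context(ctx, st->codecpar) < 0)
        return 1;

    if (avcodec_open2(ctx, dec, NULL) < 0)
        return 1;

    /* ... av_read_frame() / avcodec_send_packet() / avcodec_receive_frame() ... */

    avcodec_free_context(&ctx);
    avformat_close_input(&fmt);
    return 0;
}

On the muxing side, avcodec_parameters_from_context() does the copy in the other direction, from the encoder's AVCodecContext into the output stream's codecpar.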
