Hello, it seems that the FFmpeg Doxygen API docs have very little information about handling subtitles compared to video/audio, so it would be nice to see some code examples or a tutorial covering the full cycle: preparing subtitles, encoding them, muxing them into a container, then demuxing, decoding, and presenting them on screen to the user.
If I understand correctly, it should be enough to fill the AVSubtitleRect.text field with the desired text, put that rect into AVSubtitle.rects, fill in the other fields, pass the AVSubtitle to avcodec_encode_subtitle(), and put the resulting "subtitle_out" buffer into an AVPacket. Is that right?

After decoding, I should get an AVSubtitle struct back whose AVSubtitleRects contain text and/or a bitmap. Is a subtitle always rendered to a bitmap? What format is the bitmap in? How can I draw it on the screen? Do I always need the libass library for that?

To make the questions concrete, below are two rough sketches of what I have in mind, one for the encode side and one for the decode/render side. Corrections are very welcome.
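Encode side, a minimal sketch only. I am assuming a text-based encoder such as AV_CODEC_ID_SUBRIP already opened in enc_ctx, an output AVFormatContext oc with a subtitle stream st, and that the encoder wants an ASS-style rect (whether it takes .text or .ass is exactly the part I am unsure about). Timestamp handling is also my guess: AVSubtitle.pts seems to be in AV_TIME_BASE units while the packet timestamps use the stream time base.

    #include <string.h>
    #include <libavcodec/avcodec.h>
    #include <libavformat/avformat.h>
    #include <libavutil/mathematics.h>

    /* Sketch: encode one subtitle event and mux it. enc_ctx, st and oc are
     * assumed to be opened/configured by the caller. */
    static int encode_one_sub(AVCodecContext *enc_ctx, AVStream *st,
                              AVFormatContext *oc, int64_t pts_ms, int duration_ms)
    {
        AVSubtitle sub;
        AVSubtitleRect rect;
        AVSubtitleRect *rects[1] = { &rect };
        uint8_t buf[64 * 1024];                /* arbitrary output buffer size */
        int size;

        memset(&sub,  0, sizeof(sub));
        memset(&rect, 0, sizeof(rect));

        rect.type = SUBTITLE_ASS;              /* or SUBTITLE_TEXT with rect.text? */
        rect.ass  = "Dialogue: 0,0:00:00.00,0:00:05.00,Default,,0,0,0,,Hello world";

        sub.format             = 1;            /* 1 = text, 0 = bitmap, as I read the headers */
        sub.start_display_time = 0;            /* ms, relative to the packet pts */
        sub.end_display_time   = duration_ms;  /* ms */
        sub.num_rects          = 1;
        sub.rects              = rects;
        sub.pts                = pts_ms * 1000;  /* AVSubtitle.pts is in AV_TIME_BASE (us) units? */

        size = avcodec_encode_subtitle(enc_ctx, buf, sizeof(buf), &sub);
        if (size < 0)
            return size;                       /* encoding failed */

        AVPacket pkt;
        av_init_packet(&pkt);                  /* deprecated in newer FFmpeg; use av_packet_alloc() there */
        pkt.data         = buf;
        pkt.size         = size;
        pkt.pts          = av_rescale_q(pts_ms, (AVRational){1, 1000}, st->time_base);
        pkt.dts          = pkt.pts;
        pkt.duration     = av_rescale_q(duration_ms, (AVRational){1, 1000}, st->time_base);
        pkt.stream_index = st->index;
        return av_interleaved_write_frame(oc, &pkt);
    }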

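And the decode/render side, again only a sketch of my understanding. For bitmap codecs (dvdsub, dvbsub, pgssub) I assume each rect carries a palettized image: data[0] holds 8-bit palette indices with stride linesize[0], and data[1] holds nb_colors 32-bit color entries (I am not sure of the exact channel order, and older FFmpeg versions expose this as r->pict.data instead of r->data). blit_pixel() is a hypothetical drawing helper standing in for whatever the UI toolkit provides; it is not an FFmpeg function.

    #include <stdint.h>
    #include <libavcodec/avcodec.h>

    /* Hypothetical drawing helper provided by the application / UI toolkit. */
    extern void blit_pixel(int x, int y, uint32_t color);

    /* Sketch: decode one demuxed subtitle packet and either blit the bitmap
     * rects or hand text/ASS rects to a text renderer. */
    void handle_subtitle_packet(AVCodecContext *dec_ctx, AVPacket *pkt)
    {
        AVSubtitle sub;
        int got_sub = 0;

        if (avcodec_decode_subtitle2(dec_ctx, &sub, &got_sub, pkt) < 0 || !got_sub)
            return;

        for (unsigned i = 0; i < sub.num_rects; i++) {
            AVSubtitleRect *r = sub.rects[i];

            if (r->type == SUBTITLE_BITMAP) {
                const uint8_t  *idx = r->data[0];                    /* w*h palette indices */
                const uint32_t *pal = (const uint32_t *)r->data[1];  /* r->nb_colors entries */
                for (int y = 0; y < r->h; y++)
                    for (int x = 0; x < r->w; x++)
                        blit_pixel(r->x + x, r->y + y,
                                   pal[idx[y * r->linesize[0] + x]]);
            } else if (r->type == SUBTITLE_TEXT && r->text) {
                /* plain UTF-8 text: draw it with any text renderer */
            } else if (r->type == SUBTITLE_ASS && r->ass) {
                /* ASS markup: this is where libass would come in, I guess */
            }
        }
        avsubtitle_free(&sub);
    }

If this is roughly right, then I suppose libass is only needed for SUBTITLE_ASS rects and not for bitmap subtitles, but confirmation would be great.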