I am trying to mux together an existing mpeg-ts video stream and a synthesised 
KLV data stream, in the same vein as the muxing.c example in the ffmpeg 
documentation.  I want to read the incoming ts file, identify the video frames, 
and insert a KLV packet after each video frame has been written to the output 
file (which would then contain two streams - VIDEO and DATA).
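For reference, this is roughly how I imagine the setup side would look.  The 
variable names are my own placeholders, the choice of AV_CODEC_ID_SMPTE_KLV for 
the second stream is an assumption on my part, and error checking is stripped 
out to keep the sketch short:

    #include <libavformat/avformat.h>
    #include <libavcodec/avcodec.h>

    static AVFormatContext *ifmt_ctx = NULL, *ofmt_ctx = NULL;
    static AVStream *out_video = NULL, *out_klv = NULL;

    /* Open the input TS and create an mpeg-ts output carrying a copy of
     * the video stream plus a second, KLV data stream. */
    static int open_streams(const char *in_name, const char *out_name)
    {
        int vidx;

        avformat_open_input(&ifmt_ctx, in_name, NULL, NULL);
        avformat_find_stream_info(ifmt_ctx, NULL);

        avformat_alloc_output_context2(&ofmt_ctx, NULL, "mpegts", out_name);

        /* video: copy the codec parameters straight across */
        vidx = av_find_best_stream(ifmt_ctx, AVMEDIA_TYPE_VIDEO, -1, -1, NULL, 0);
        out_video = avformat_new_stream(ofmt_ctx, NULL);
        avcodec_parameters_copy(out_video->codecpar,
                                ifmt_ctx->streams[vidx]->codecpar);

        /* data: a second stream flagged as KLV metadata */
        out_klv = avformat_new_stream(ofmt_ctx, NULL);
        out_klv->codecpar->codec_type = AVMEDIA_TYPE_DATA;
        out_klv->codecpar->codec_id   = AV_CODEC_ID_SMPTE_KLV;

        avio_open(&ofmt_ctx->pb, out_name, AVIO_FLAG_WRITE);
        return avformat_write_header(ofmt_ctx, NULL);
    }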
Although the general structure of the muxing example seems clear enough, I 
cannot see how to fit a KLV packet into the concept of an AVFrame in order to 
compose my own write_data_frame to parallel the write_audio_frame already in 
the muxing example.
Does anyone have any example code that would help me understand how to 
populate the mpeg-ts data stream? (Note that the actual KLV packet contents 
are not an issue - I have formatters for that; my problem is how to use the 
AV architecture to put the packets into the second output stream.)

The packets do not require dts/pts stamps - asynchronous behaviour is more than 
adequate.
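This is the kind of write_data_frame I am hoping is possible - i.e. the KLV 
bytes never touch an AVFrame or an encoder and go straight into an AVPacket on 
the data stream.  write_klv_packet is just a placeholder name, and the 
timestamp handling is a guess (I suspect the mpegts muxer still wants a usable 
dts, so here I reuse the timestamp of the video packet just written):

    #include <string.h>
    #include <libavformat/avformat.h>
    #include <libavutil/mathematics.h>

    /* Push one KLV blob from my formatter into the data stream. */
    static int write_klv_packet(AVFormatContext *ofmt, AVStream *out_klv,
                                const uint8_t *klv_buf, int klv_len,
                                int64_t video_pts, AVRational video_tb)
    {
        AVPacket *pkt = av_packet_alloc();
        int ret;

        if (!pkt || av_new_packet(pkt, klv_len) < 0) {
            av_packet_free(&pkt);
            return AVERROR(ENOMEM);
        }

        memcpy(pkt->data, klv_buf, klv_len);
        pkt->stream_index = out_klv->index;

        /* reuse the preceding video timestamp, rescaled into the
         * data stream's time base */
        pkt->pts = pkt->dts =
            av_rescale_q(video_pts, video_tb, out_klv->time_base);

        /* the muxer takes ownership of the packet's reference */
        ret = av_interleaved_write_frame(ofmt, pkt);
        av_packet_free(&pkt);
        return ret;
    }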

Any help welcome!

