I have looked at several (up-to-date) examples, and am still confused about
how/when to set PTS.

My app is fairly simple: I am capturing audio and video in real time. The
two streams - audio and video - come from separate threads and are fed
through a queue to another thread that encodes them and writes them out to
a file. Each acquired "frame" (one video frame, or one block of audio
samples) is timestamped at capture time, before being queued for
encoding/writing. (NOTE: these timestamps come from the system nanosecond
clock.)
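
For concreteness, this is roughly what I mean by "timestamped at capture
time" (the struct and queue call are placeholders for my own code, and I am
assuming a CLOCK_MONOTONIC-style nanosecond clock):

    #include <stdint.h>
    #include <time.h>

    typedef struct {
        int64_t ts_ns;   /* capture time from the system nanosecond clock */
        /* ... video pixels or audio samples ... */
    } captured_frame;

    static int64_t now_ns(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return (int64_t)ts.tv_sec * 1000000000LL + ts.tv_nsec;
    }

    static void on_capture(captured_frame *f)
    {
        f->ts_ns = now_ns();             /* stamp before queueing */
        /* queue_push(encode_queue, f);     hand off to the encode/write thread */
    }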

For each audio block, the timestamps are fairly consistent - roughly the
previous timestamp plus the block duration (num_samples / sample_rate),
albeit perhaps with some latency. For video, frames are acquired
asynchronously (not at a fixed frame rate) due to the hardware - but again,
each frame is still timestamped at acquisition time.
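
Just to spell out the arithmetic I mean for the audio blocks (the numbers
are only an example):

    /* With, say, 1024-sample blocks at 48 kHz, each block spans
       1024 / 48000 s, i.e. ~21.33 ms, so consecutive capture timestamps
       should be about 21333333 ns apart (plus any capture latency). */
    static int64_t expected_next_ts_ns(int64_t prev_ts_ns,
                                       long num_samples, long sample_rate)
    {
        return prev_ts_ns + num_samples * 1000000000LL / sample_rate;
    }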

My confusion is partly about when to write out each frame/block, and partly
about what to set the PTS to for each.

In the muxing example
(https://www.ffmpeg.org/doxygen/1.2/doc_2examples_2muxing_8c-example.html),
each frame is generated on the fly, with the PTS set to the previous PTS
plus a fixed increment (the expected time between video frames, based on a
defined, fixed frame rate).
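
One thing I have been considering (I am not at all sure it is right) is to
rescale each captured nanosecond timestamp, relative to the first one, into
the codec's time_base rather than adding a fixed increment - roughly like
this (first_ts_ns, codec_ctx, etc. are placeholders for my own variables):

    #include <libavutil/mathematics.h>
    #include <libavutil/rational.h>

    static int64_t pts_from_capture_time(int64_t ts_ns, int64_t first_ts_ns,
                                         AVRational time_base)
    {
        static const AVRational ns_base = { 1, 1000000000 };
        return av_rescale_q(ts_ns - first_ts_ns, ns_base, time_base);
    }

    /* frame->pts = pts_from_capture_time(f->ts_ns, first_ts_ns,
                                          codec_ctx->time_base);
       ...encode...
       av_packet_rescale_ts(pkt, codec_ctx->time_base, stream->time_base);
       av_interleaved_write_frame(fmt_ctx, pkt); */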

So what should I set the PTS to for each stream? And do I just encode and
write each frame/block as I get it?

TIA

ken