On 11/13/2013 7:40 PM, Ulf Magnusson wrote:
Hi,

I'm adding a movie recording feature to an emulator using libav. I've
got video (x264), audio (Android VisualON AAC), and muxing (mp4)
working, but since I make no special efforts to keep audio and video
synchronized, they desynchronize after a few minutes. My questions are
as follows:

1. I believe I will need to derive PTS values for the audio and video
streams, diff them, and compensate by e.g. changing the audio
resampling rate when they drift apart. For audio, the PTS values from
the AVStream seem reliable, but not for video (they seem to lag
behind, perhaps due to buffering (?)). Is it safe to simply use the
PTS values I write into the AVFrame for video frames? Is there some
other field I could use?

2. Why does ffmpeg.c assign DTS values to PTS values in some locations, e.g.
ist->next_pts = ist->pts = av_rescale_q(pkt->dts, ist->st->time_base,
AV_TIME_BASE_Q); ?

3. Is there some simpler solution specific to x264/AAC/mp4?

Thanks,
Ulf
_______________________________________________
Libav-user mailing list
[email protected]
http://ffmpeg.org/mailman/listinfo/libav-user

Hello,
Mostly, there is no guarantee as to how the hardware will deliver samples up front. The variables are:

A: One stream will arrive first with samples available.
B: One stream will have timestamps earlier than the other, and it may be either stream from (A).

I buffer both channels until I have overlap.

The general rule is: 'write audio first'.

The routine I run is this:

After compressing the raw samples, I have two FIFO queues, one for audio and one for video. This is what I typically run for constant video frame rates.


1) Is there an audio packet? Yes: go to 2.

2) Is there a video packet? Yes: go to 3.

3) Is this the first time you have both channels? Yes: go to 4; no: go to 5.

4) Consider dropping samples. Do you have a lot of audio without video, or a lot of video before any audio?

5) Write all audio packets up to the earliest video frame time, then write that one video frame to the container.

6) Repeat: go to 1.


Short answer...

In general, you need to buffer both streams and mux their packets in ascending timestamp order, writing them to your container as soon as the two streams line up within the range of your capture interval, and dropping the early media until they do line up. Don't mess with the timestamps.

have fun!
