On 2022-07-13 06:00 pm, Anton Khirnov wrote:
Quoting Gyan Doshi (2022-07-11 08:46:48)

On 2022-07-11 12:21 am, Anton Khirnov wrote:
Quoting Gyan Doshi (2022-07-10 20:02:38)
On 2022-07-10 10:46 pm, Anton Khirnov wrote:
Quoting Gyan Doshi (2022-07-08 05:56:21)
On 2022-07-07 03:11 pm, Anton Khirnov wrote:
Quoting Gyan Doshi (2022-07-04 18:29:12)
This is a per-file input option that adjusts an input's timestamps
with reference to another input, so that emitted packet timestamps
account for the difference between the start times of the two inputs.

Typical use case is to sync two or more live inputs such as from capture
devices. Both the target and reference input source timestamps should be
based on the same clock source.

If not all inputs have timestamps, the wallclock time at reception of
each input shall be used. FFmpeg must have been compiled with
thread support for this last case.
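
For illustration, a minimal C sketch of the adjustment being described,
not the patch itself: the target input's packet timestamps are shifted
by the difference between the two inputs' start times, so both land on
a common timeline. The function and parameter names are hypothetical,
and the sign convention depends on how the rest of the pipeline
normalizes per-input start times.

#include <libavformat/avformat.h>
#include <libavutil/avutil.h>
#include <libavutil/mathematics.h>

/* Hypothetical sketch: shift a target input's packet timestamps by the
 * difference between the target's and the reference's start times (both
 * assumed to be in microseconds, i.e. AV_TIME_BASE units), rescaled to
 * the packet's time base. Sign convention is illustrative only. */
static void apply_sync_offset(AVPacket *pkt, AVRational pkt_tb,
                              int64_t target_start_us, int64_t ref_start_us)
{
    int64_t delta = av_rescale_q(target_start_us - ref_start_us,
                                 AV_TIME_BASE_Q, pkt_tb);

    if (pkt->pts != AV_NOPTS_VALUE)
        pkt->pts += delta;
    if (pkt->dts != AV_NOPTS_VALUE)
        pkt->dts += delta;
}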
I'm wondering if simply using the other input's InputFile.ts_offset
wouldn't achieve the same effect with much less complexity.
That's what I initially did. But since the code can also use two other
sources for start times (start_time_realtime, first_pkt_wallclock),
those intervals may not exactly match the difference between the
fmctx->start_times, so I use a generic calculation.
In what cases is it better to use either of those two other sources?

As per the commit message, the timestamps of both inputs are supposed to
come from the same clock. Then it seems to me that offsetting each of
those streams by different amounts would break synchronization rather
than improve it.
The first preference, when available, stores the epoch time closest to
the time of capture; that would eliminate some jitter.
The second preference is fmctx->start_time. The third, the reception
wallclock, is a fallback and will likely lead to the worst sync.
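
As a rough sketch of that preference order (start_time_realtime and
start_time are real AVFormatContext fields; the rest is hypothetical):

#include <libavformat/avformat.h>

/* Illustrative only. Preference order for an input's start time:
 * 1) start_time_realtime: epoch time closest to capture, least jitter;
 * 2) the demuxer's start_time;
 * 3) wallclock at reception of the first packet: the fallback, which
 *    will likely give the worst sync.
 * Both inputs must be compared using the same source; mixing an epoch
 * time with a stream-relative one would make the delta meaningless. */
static int64_t input_start_time(const AVFormatContext *fmctx,
                                int64_t first_pkt_wallclock)
{
    if (fmctx->start_time_realtime != AV_NOPTS_VALUE)
        return fmctx->start_time_realtime;
    if (fmctx->start_time != AV_NOPTS_VALUE)
        return fmctx->start_time;
    return first_pkt_wallclock;
}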
You did not answer my question.
If both streams use the same clock, then how does offsetting them by
different amounts improve sync?
Because the clocks can differ at different stages of stream conveyance,
i.e. capture -> encode -> network relay -> ffmpeg reception.
As long as both inputs use the same clock at a given stage, they represent
the same sync relation, but with some jitter added at each stage.
Why would you send the streams separately rather than synchronizing them
before network transmission?

Because they may originate from separate machines, e.g. a video teleconference with multiple participants on the LAN, with each stream conveyed with the NTP time of its start.

Regards,
Gyan