The input is a stream you're reading in real time; the concept of frames you get 
when decoding a format that's framed for network streaming doesn't apply in the 
same way.

> The aim is to have a consistent method to correlate where the sender and
> receiver are once the audio stream has been stopped.

Isn't that a pretty consistent method in itself? They're both at the end. If 
synchronization throughout is important, maybe you could have separate 
processes for recording and streaming, and the sender and receiver processes 
could refer to an external clock.
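As a rough illustration of what "refer to an external clock" could look like: each side stamps the moment it observes the stream stopping with its wall clock, and the stamps are compared afterwards. This is a minimal sketch, assuming both hosts keep their clocks aligned with NTP; `mark_event` and the log lists are hypothetical names, not anything from ffmpeg.

```python
import time

def mark_event(label: str, log: list) -> None:
    # Record a wall-clock timestamp in nanoseconds for a named event.
    # Correlating across machines assumes both hosts are NTP-synced.
    log.append((label, time.time_ns()))

# Hypothetical example: sender and receiver each log the moment they
# observe "stream stopped"; the logs are then compared offline.
sender_log, receiver_log = [], []
mark_event("stream_stopped", sender_log)
mark_event("stream_stopped", receiver_log)

offset_ns = receiver_log[0][1] - sender_log[0][1]
print(f"receiver logged the stop {offset_ns} ns after the sender")
```

In a real setup the two calls would run in separate processes on separate machines, so the usable resolution is bounded by how tightly NTP keeps the clocks together, not by `time_ns` itself.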

_______________________________________________
ffmpeg-user mailing list
[email protected]
https://ffmpeg.org/mailman/listinfo/ffmpeg-user
