The input is a stream you're reading in real time, so the concept of frames doesn't apply the same way it does when you decode a format that is framed for network streaming.
> The aim is to have a consistent method to correlate where the sender and
> receiver are once the audio stream has been stopped.
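One way to make frame counts comparable on both ends (a suggestion, not something proposed in this thread) is ffmpeg's asetnsamples filter, which repacks the audio into frames of a fixed sample count before ashowinfo reports on them. A sketch, reusing the input and URL from the sender command quoted elsewhere in this thread:

```shell
# Hypothetical sketch: force every audio frame to contain exactly 1024 samples
# so the frame numbers printed by 'ashowinfo' mean the same thing on both ends.
# Requires a live PulseAudio source and a listening receiver; not runnable
# without audio hardware.
ffmpeg -re -f pulse -ac 1 -i default \
       -af 'asetnsamples=n=1024,ashowinfo' \
       -f rtsp -rtsp_transport tcp rtsp://127.0.0.1:1234
```

The same filter would have to be applied on the receiving side for the counts to line up, since the decoder is otherwise free to emit frames of whatever size the packets dictate.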
True. Having multiple -f options in the output section of the ffmpeg sender command line is unnecessary and has been adjusted since originally posted. Only the last one takes effect, but it doesn't trigger an error.
On Mon, Dec 9, 2019 at 20:04, klongwood3 wrote:
> Sending:
> ffmpeg -re -f pulse -ac 1 -i default -f s16le -af 'ashowinfo' -f rtsp
> -rtsp_transport tcp rtsp://127.0.0.1:1234 -loglevel debug -report
This command line looks invalid: there are two output formats specified. Carl Eugen
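A corrected sender line with only one output format might look like the following sketch (the PulseAudio input, URL, and logging flags are taken from the quoted command; only the stray -f s16le is dropped):

```shell
# Single output format (-f rtsp); '-f s16le' removed since only the last
# -f would take effect anyway. Requires a live PulseAudio source and a
# listening RTSP receiver.
ffmpeg -re -f pulse -ac 1 -i default -af 'ashowinfo' \
       -f rtsp -rtsp_transport tcp rtsp://127.0.0.1:1234 \
       -loglevel debug -report
```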
After discussing with support in the #ffmpeg IRC channel, this approach doesn't seem reliable or consistent enough to keep track of the frame count on the sender and the receiver. A "frame" in ffmpeg on the decoding side is just "however many samples the decoder decoded from one packet". The thing
I am streaming audio between two Linux machines using ffmpeg and ffplay. The sender uses RTSP over TCP to transport audio from a USB microphone. The receiver (listener) plays the received audio with ffplay. Sending: Receiving (listening): Using 'ashowinfo', I am
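The receiving command itself is not quoted in this chunk. A minimal hypothetical sketch, assuming the receiver listens on the same URL the sender pushes to, might be:

```shell
# Hypothetical receiver sketch (the original command was not quoted here):
# act as the RTSP listener over TCP and play whatever arrives.
ffplay -rtsp_flags listen -rtsp_transport tcp rtsp://127.0.0.1:1234
```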