Hi,

I know GStreamer a little (I have already written a local audio/video player
for our target), and I am studying the possibility of using Farsight for
our streaming solution.
Could you explain to me how synchronisation between audio and video is
done?
In fact, there are two independent pipelines:
udp | rtp | jitter-buffer | audio-depayloader | audio-decoder | audio-sink
and
udp | rtp | jitter-buffer | video-depayloader | video-decoder | video-sink
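To be concrete, here is a sketch of what the two pipelines could look like as gst-launch command lines. The element names (gstrtpjitterbuffer, the AMR/H.263 depayloaders, decoders and sinks) and the port numbers are only assumptions for illustration; the actual elements depend on the codecs used.

```shell
# Audio branch (element names and ports are illustrative, not our real setup)
gst-launch udpsrc port=5002 caps="application/x-rtp,media=audio" \
    ! gstrtpjitterbuffer ! rtpamrdepay ! amrnbdec ! alsasink

# Video branch, running as a completely separate pipeline
gst-launch udpsrc port=5004 caps="application/x-rtp,media=video" \
    ! gstrtpjitterbuffer ! rtph263pdepay ! ffdec_h263 ! xvimagesink
```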

Imagine the video pipeline goes into a rebuffering state. How does the
audio pipeline know it has to stop playing audio? How will it know it can
restart playing (once enough video has been buffered)?

Thanks,
Matthieu LAURENT

_______________________________________________
Farsight-devel mailing list
Farsight-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/farsight-devel
