> I see that for H.264 streams, openRTSP defaults to the H264VideoFileSink,
> which is based on FileSink, which is based on MediaSink.
> I don't want to write the video out to a file; I want the video
> exposed as a live stream to the rest of my application. To me, it
> seems like I need to write my own "sink," but I'm not sure which
> class would be best to inherit from (MediaSink?)
Yes, "MediaSink" would be the class to inherit from. However, a
simpler solution is not to modify "openRTSP" at all. Instead, use the
"-v" option to make "openRTSP" write its output to 'stdout', and then
pipe this to your application.
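For H.264, the "-v" output is a raw byte stream in which each NAL unit is
preceded by a start code (00 00 01 or 00 00 00 01), so the receiving
application can split it back into NAL units by scanning for those start
codes. A minimal sketch of that parsing core - the function name is my own,
not part of the LIVE555 API:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Split an H.264 Annex-B byte stream (each NAL unit preceded by a
// 3- or 4-byte start code) into individual NAL unit payloads, with
// the start codes stripped.
std::vector<std::vector<uint8_t>> splitNalUnits(const std::vector<uint8_t>& buf) {
    std::vector<std::vector<uint8_t>> nals;
    std::size_t i = 0;
    std::size_t payloadStart = 0;
    bool inNal = false;
    const std::size_t n = buf.size();
    while (i + 2 < n) {
        if (buf[i] == 0 && buf[i + 1] == 0 && buf[i + 2] == 1) {
            // Found a 3-byte start code; a 4-byte start code is a
            // leading zero byte followed by the same pattern.
            std::size_t scPos = (i > 0 && buf[i - 1] == 0) ? i - 1 : i;
            if (inNal) {
                // The previous NAL unit ends where this start code begins.
                nals.emplace_back(buf.begin() + payloadStart, buf.begin() + scPos);
            }
            payloadStart = i + 3;
            inNal = true;
            i += 3;
        } else {
            ++i;
        }
    }
    if (inNal) {
        // Last NAL unit runs to the end of the buffer.
        nals.emplace_back(buf.begin() + payloadStart, buf.end());
    }
    return nals;
}
```

In a real application you would feed this from 'stdin' (the other end of the
pipe), accumulating bytes until each start code arrives.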
> My other question is more general; I see that a single RTSP server
> can have multiple sessions, and each session can be composed of
> multiple subsessions. So I'm wondering what the best (easiest?) way
> to structure my media streams would be. I'm going to have several
> H.264 streams, audio, MJPEG, and possibly MPEG4, and I'm wondering
> if each should get its own session, or if I should combine audio and
> video into the same session. Will I have AV sync issues if each
> stream is in its own session?
If you want to stream audio and video together, then both
"ServerMediaSubsession"s should be in a single "ServerMediaSession".
An RTSP client then requests a single stream, which will contain both
audio and video.
However, if you have several video streams, then they should usually
be in separate "ServerMediaSession"s (because an RTSP client will
rarely want to receive more than one video stream at the same time).
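For concreteness, a sketch of that layout, using the file-based demo
subsession classes from the "testOnDemandRTSPServer" test program. The
stream names, file names, and port number here are placeholders:

```cpp
#include "liveMedia.hh"
#include "BasicUsageEnvironment.hh"

int main() {
  // Standard LIVE555 setup, as in the "testOnDemandRTSPServer" demo:
  TaskScheduler* scheduler = BasicTaskScheduler::createNew();
  UsageEnvironment* env = BasicUsageEnvironment::createNew(*scheduler);
  RTSPServer* rtspServer = RTSPServer::createNew(*env, 8554);
  if (rtspServer == NULL) return 1;

  // One "ServerMediaSession" holding BOTH an audio and a video
  // "ServerMediaSubsession": a client that requests
  // "rtsp://<server>:8554/avStream" receives a single stream
  // containing both tracks.
  ServerMediaSession* avSession = ServerMediaSession::createNew(
      *env, "avStream", "avStream", "combined audio+video");
  avSession->addSubsession(
      H264VideoFileServerMediaSubsession::createNew(*env, "video.264", False));
  avSession->addSubsession(
      ADTSAudioFileServerMediaSubsession::createNew(*env, "audio.aac", False));
  rtspServer->addServerMediaSession(avSession);

  // A second, independent video stream gets its own
  // "ServerMediaSession", with its own stream name:
  ServerMediaSession* cam2 = ServerMediaSession::createNew(
      *env, "camera2", "camera2", "second H.264 stream");
  cam2->addSubsession(
      H264VideoFileServerMediaSubsession::createNew(*env, "video2.264", False));
  rtspServer->addServerMediaSession(cam2);

  env->taskScheduler().doEventLoop(); // does not return
  return 0;
}
```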
--
Ross Finlayson
Live Networks, Inc.
http://www.live555.com/
_______________________________________________
live-devel mailing list
[email protected]
http://lists.live555.com/mailman/listinfo/live-devel