Hey all, I know some parts of this have been covered in (many) other posts. I've spent some time reading through them, so apologies if I've missed the crucial post, but I believe this question is unique.
I have a live camera source providing individual frames, which I'm encoding with the x264 library. I've set up an RTSP server using a custom "OnDemandServerMediaSubsession" and a custom FramedSource (using DeviceSource as a template). To give an idea of how it chains together, I've provided "createNewStreamSource" and "createNewRTPSink" in the footer; AFAIK they follow the guidance in the FAQ.

I can watch this stream in VLC over the RTSP server using either the discrete or the normal framer class, provided the x264 library emits Annex-B NAL units (with the 00000001 start code) AND I do not try to separate the NALs into individual calls to "doGetNextFrame()" - each frame is delivered as a single concatenated block of data that may contain several NALs. However, there are problems with using either framer:

- If I use the H264VideoStreamFramer class, the fMaxSize variable counts down until a frame is truncated, and the truncation is visible in VLC as a broken frame. This problem is similar to http://lists.live555.com/pipermail/live-devel/2010-July/012357.html where the advice is to use a discrete framer.

- If I use the H264VideoStreamDiscreteFramer class, I get warnings about a start code being present; looking at the code, this means saveCopyOfSPS and saveCopyOfPPS are never called. It does play in VLC - I'm just concerned about the implications of never calling these functions. If I remove the start code (and provide only the remaining data block), VLC won't display anything, and its message log says "waiting for SPS/PPS". That is true whether or not I split the NALs into individual "doGetNextFrame()" calls, although in that case live555 seems happy and doesn't output any warnings.

- I've seen hints at writing your own framer class, but it's unclear why, and what I would need to achieve in doing so.
Thanks in advance & do appreciate all help,

James

--

FramedSource* FramedServerMediaSubsession::createNewStreamSource(
        unsigned clientSessionId, unsigned& estBitrate) {
    estBitrate = 500;

    // Create the video source:
    DeviceParameters p;
    RTPFrameLoader* frameSource = RTPFrameLoader::createNew(envir(), p);

    // The encoder outputs to the RTPFrameLoader:
    encoder = new H264Encoder(frameSource, 640, 480, 3);

    // The encoder listens in for raw camera frames:
    Camera* cam = CameraFactory::getInstance()->getCamera(CameraFactory::FAKE);
    cam->registerFrameListener(encoder);
    encoder->go();

    // Create a framer for the Video Elementary Stream:
    return H264VideoStreamDiscreteFramer::createNew(envir(), frameSource);
}

RTPSink* FramedServerMediaSubsession::createNewRTPSink(
        Groupsock* rtpGroupsock,
        unsigned char rtpPayloadTypeIfDynamic,
        FramedSource* inputSource) {
    return H264VideoRTPSink::createNew(envir(), rtpGroupsock, rtpPayloadTypeIfDynamic);
}
_______________________________________________ live-devel mailing list [email protected] http://lists.live555.com/mailman/listinfo/live-devel
