Update: getting closer!!!

I have rewritten my framer based on the H263plusVideoFramer instead of
trying to subclass MPEGVideoStreamParser.  Now I can see that my
framer is parsing through the NAL units and that the H264VideoRTPSink is
actually sending some packets out.  The problem is that the file is
being parsed too quickly, and it seems as though the ethernet port
can't keep up.  I'm pretty sure this has something to do with the way I
am parsing through the file: I kept getting stuck in the parser,
so for now I just call parseNALUnit directly from within
parseStartSequence.  Now that I'm sure I can actually parse through my
file and see all the NAL units, it's time to go back and figure out why I
was getting stuck.

I've attached my framer, parser, and test program if anybody is
interested in taking a look.  (pardon the mess, haven't had time to
clean it up yet)


On Tue, 2008-06-24 at 10:21 -0400, Mike Gilorma wrote:
> All, 
> 
> I have been working on creating an h264 framer for about a week and feel
> like I have headed in the wrong direction, so it's time for a fresh
> start.  Would H263plusVideoFramer be a good starting point?  
> 
> I have been searching "h.264 site:lists.live555.com" and it seems that
> there are people out there that have gotten their framers working and
> have not gone on to share any info after that.  My goal is to create a
> framer for discrete NAL units and to create a testH264VideoStreamer.cpp
> program for everyone to have access to.
> 
> thanks,
> Mike
> 
> 
> _______________________________________________
> live-devel mailing list
> live-devel@lists.live555.com
> http://lists.live555.com/mailman/listinfo/live-devel
/**********
This library is free software; you can redistribute it and/or modify it under
the terms of the GNU Lesser General Public License as published by the
Free Software Foundation; either version 2.1 of the License, or (at your
option) any later version. (See <http://www.gnu.org/copyleft/lesser.html>.)

This library is distributed in the hope that it will be useful, but WITHOUT
ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
FOR A PARTICULAR PURPOSE.  See the GNU Lesser General Public License for
more details.

You should have received a copy of the GNU Lesser General Public License
along with this library; if not, write to the Free Software Foundation, Inc.,
59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
**********/
// "liveMedia"
// Copyright (c) 1996-2007 Live Networks, Inc.  All rights reserved.
// Any source that feeds into a "H264VideoRTPSink" must be of this class.
// This is a virtual base class; subclasses must implement the
// "currentNALUnitEndsAccessUnit()" virtual function.
// Implementation


#include <iostream>
#include "H264VideoStreamFramer.hh"
#include "H264VideoStreamParser.hh"

#include <string.h>
#include <GroupsockHelper.hh>

static int check = 0; // debug: counts NAL units delivered so far
///////////////////////////////////////////////////////////////////////////////
////////// h264VideoStreamFramer implementation //////////
//public///////////////////////////////////////////////////////////////////////
H264VideoStreamFramer* H264VideoStreamFramer::createNew(
                                                         UsageEnvironment& env,
                                                         FramedSource* inputSource)
{
   // TODO: add source type checking here?
   std::cout << "H264VideoStreamFramer: in createNew" << std::endl;
   return new H264VideoStreamFramer(env, inputSource);
}


///////////////////////////////////////////////////////////////////////////////
H264VideoStreamFramer::H264VideoStreamFramer(
                              UsageEnvironment& env,
                              FramedSource* inputSource,
                              Boolean createParser)
                              : FramedFilter(env, inputSource),
                                fFrameRate(0.0), // until we learn otherwise 
                                fPictureEndMarker(False)
{
   // Use the current wallclock time as the base 'presentation time':
   gettimeofday(&fPresentationTimeBase, NULL);
    std::cout << "H264VideoStreamFramer: going to create H264VideoStreamParser" << std::endl;
   fParser = createParser ? new H264VideoStreamParser(this, inputSource) : NULL;
}

///////////////////////////////////////////////////////////////////////////////
H264VideoStreamFramer::~H264VideoStreamFramer()
{
   delete fParser;
}


///////////////////////////////////////////////////////////////////////////////
#ifdef DEBUG
static struct timeval firstPT;
#endif


///////////////////////////////////////////////////////////////////////////////
void H264VideoStreamFramer::doGetNextFrame()
{
//   std::cout << "H264VideoStreamFramer: in doGetNextFrame" << std::endl;
  fParser->registerReadInterest(fTo, fMaxSize);
  continueReadProcessing();
}


///////////////////////////////////////////////////////////////////////////////
Boolean H264VideoStreamFramer::isH264VideoStreamFramer() const
{
  return True;
}

///////////////////////////////////////////////////////////////////////////////
Boolean H264VideoStreamFramer::currentNALUnitEndsAccessUnit() 
{
  return True;
}


///////////////////////////////////////////////////////////////////////////////
void H264VideoStreamFramer::continueReadProcessing(
                                   void* clientData,
                                   unsigned char* /*ptr*/, unsigned /*size*/,
                                   struct timeval /*presentationTime*/)
{
   H264VideoStreamFramer* framer = (H264VideoStreamFramer*)clientData;
   framer->continueReadProcessing();
}

///////////////////////////////////////////////////////////////////////////////
void H264VideoStreamFramer::continueReadProcessing()
{
//     std::cout << "H264VideoStreamFramer: in continueReadProcessing" << std::endl;
   u_int64_t frameDuration = 0;  // in ms.  The parser doesn't report durations
                                 // yet, so this stays 0 (see note below).

   unsigned acquiredFrameSize = fParser->parse();

   if (acquiredFrameSize > 0) {
      std::cout << "continueReadProcessing, acquiredFrameSize: " << acquiredFrameSize << std::endl;
      check++;
      std::cout << "NAL # " << check << std::endl;
      // We were able to acquire a frame from the input.
      // It has already been copied to the reader's space.
      fFrameSize = acquiredFrameSize;
//    fNumTruncatedBytes = fParser->numTruncatedBytes(); // not needed so far

      fFrameRate = frameDuration == 0 ? 0.0 : 1000./(long)frameDuration;

      // Compute "fPresentationTime"
      if (acquiredFrameSize == 5) // first frame (FIXME: fragile magic-number test)
         fPresentationTime = fPresentationTimeBase;
      else 
         fPresentationTime.tv_usec += (long) frameDuration*1000;

      while (fPresentationTime.tv_usec >= 1000000) {
         fPresentationTime.tv_usec -= 1000000;
         ++fPresentationTime.tv_sec;
      }

      // Compute "fDurationInMicroseconds".
      // NOTE: while frameDuration is 0, this is 0 too, so the downstream sink
      // pulls frames as fast as it can - probably why the file is consumed
      // faster than the network can send it.
      fDurationInMicroseconds = (unsigned) (frameDuration*1000);

      // Call our own 'after getting' function.  Because we're not a 'leaf'
      // source, we can call this directly, without risking infinite recursion.
      afterGetting(this);
   } else {
      // We were unable to parse a complete frame from the input, because:
      // - we had to read more data from the source stream, or
      // - the source stream has ended.
   }
}
/**********
This library is free software; you can redistribute it and/or modify it under
the terms of the GNU Lesser General Public License as published by the
Free Software Foundation; either version 2.1 of the License, or (at your
option) any later version. (See <http://www.gnu.org/copyleft/lesser.html>.)

This library is distributed in the hope that it will be useful, but WITHOUT
ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
FOR A PARTICULAR PURPOSE.  See the GNU Lesser General Public License for
more details.

You should have received a copy of the GNU Lesser General Public License
along with this library; if not, write to the Free Software Foundation, Inc.,
59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
**********/
// "liveMedia"
// Copyright (c) 1996-2007 Live Networks, Inc.  All rights reserved.
// An abstract parser for H264 video streams
// Implementation

#include "H264VideoStreamFramer.hh"
#include "H264VideoStreamParser.hh"
#include <iostream>

H264VideoStreamParser
::H264VideoStreamParser(H264VideoStreamFramer* usingSource,
			FramedSource* inputSource)
  : StreamParser(inputSource, FramedSource::handleClosure, usingSource,
		 &H264VideoStreamFramer::continueReadProcessing, usingSource),
  fUsingSource(usingSource) {
}

H264VideoStreamParser::~H264VideoStreamParser() {
}

void H264VideoStreamParser::restoreSavedParserState() {
  StreamParser::restoreSavedParserState();
  fTo = fSavedTo;
  fNumTruncatedBytes = fSavedNumTruncatedBytes;
}

void H264VideoStreamParser::setParseState(H264ParseState parseState) {
//     std::cout << "H264VideoStreamPARSER: in setParseState: " << parseState << std::endl;
  fSavedTo = fTo;
  fSavedNumTruncatedBytes = fNumTruncatedBytes;
  fCurrentParseState = parseState;
  saveParserState();
}

unsigned H264VideoStreamParser::getParseState() {
    return fCurrentParseState;
}

void H264VideoStreamParser::registerReadInterest(unsigned char* to,
						 unsigned maxSize) {
//     std::cout << "Parser max size??: " << maxSize << std::endl;

  fStartOfFrame = fTo = fSavedTo = to;
  fLimit = to + maxSize;
  fNumTruncatedBytes = fSavedNumTruncatedBytes = 0;
}

unsigned H264VideoStreamParser::parse() {
    
  try {
//     std::cout << "H264VideoStreamPARSER: parse : " << fCurrentParseState << std::endl;
    switch (fCurrentParseState) {
    case PARSING_START_SEQUENCE: {
        return parseStartSequence();
//         return 0;
    }
    case PARSING_NAL_UNIT: {
        return parseNALUnit();
//         return 0;
    }
    default: {
      return 0; // shouldn't happen
    }
    }
  } catch (int /*e*/) {
#ifdef DEBUG
    fprintf(stderr, "H264VideoStreamParser::parse() EXCEPTION (This is normal behavior - *not* an error)\n");
#endif
    return 0;  // the parsing got interrupted
  }
}

unsigned H264VideoStreamParser::parseStartSequence()
{
//     std::cout << "H264VideoStreamPARSER: parseStartSequence" << std::endl;
    // Scan forward until we see a 4-byte start code (0x00000001):
    u_int32_t test = test4Bytes();
    while (test != 0x00000001)
    {
        skipBytes(1);
        test = test4Bytes();
    }

    // Workaround: stay in PARSING_START_SEQUENCE and call parseNALUnit()
    // directly, instead of setting PARSING_NAL_UNIT and returning
    // (setParseState(PARSING_NAL_UNIT) kept getting the parser stuck).
    setParseState(PARSING_START_SEQUENCE);

    // Skip over the start code itself:
    skipBytes(4);

    return parseNALUnit();
}

unsigned H264VideoStreamParser::parseNALUnit()
{
//     std::cout << "H264VideoStreamPARSER: parseNALUnit" << std::endl;
    // Copy bytes to the client's buffer until the next start code (0x00000001)
    // or end of stream.  Note: this assumes 4-byte start codes only; a stream
    // that separates NAL units with 3-byte start codes (0x000001) would not be
    // split correctly.
    u_int32_t test = test4Bytes();
    int numBytes = 0;
    while (test != 0x00000001)
    {
        saveByte(get1Byte());
        numBytes++;
        test = test4Bytes();
    }

//     std::cout << "just read this many bytes: " << numBytes << std::endl;

    return curFrameSize();
}

/**********
This library is free software; you can redistribute it and/or modify it under
the terms of the GNU Lesser General Public License as published by the
Free Software Foundation; either version 2.1 of the License, or (at your
option) any later version. (See <http://www.gnu.org/copyleft/lesser.html>.)

This library is distributed in the hope that it will be useful, but WITHOUT
ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
FOR A PARTICULAR PURPOSE.  See the GNU Lesser General Public License for
more details.

You should have received a copy of the GNU Lesser General Public License
along with this library; if not, write to the Free Software Foundation, Inc.,
59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
**********/
// Copyright (c) 1996-2007, Live Networks, Inc.  All rights reserved
// A test program that reads an H.264 Video Elementary Stream file,
// and streams it using RTP
// main program
#include <iostream>
#include "liveMedia.hh"
#include "BasicUsageEnvironment.hh"
#include "GroupsockHelper.hh"

UsageEnvironment* env;
char const* inputFileName = "test.h264";
H264VideoStreamFramer* videoSource;
RTPSink* videoSink;

void play(); // forward

int main(int argc, char** argv) {
  // Begin by setting up our usage environment:
  TaskScheduler* scheduler = BasicTaskScheduler::createNew();
  env = BasicUsageEnvironment::createNew(*scheduler);

  // Create 'groupsocks' for RTP and RTCP:
  struct in_addr destinationAddress;
  destinationAddress.s_addr = chooseRandomIPv4SSMAddress(*env);
  // Note: This is a multicast address.  If you wish instead to stream
  // using unicast, then you should use the "testOnDemandRTSPServer"
  // test program - not this test program - as a model.

  const unsigned short rtpPortNum = 18888;
  const unsigned short rtcpPortNum = rtpPortNum+1;
  const unsigned char ttl = 255;

  const Port rtpPort(rtpPortNum);
  const Port rtcpPort(rtcpPortNum);

  Groupsock rtpGroupsock(*env, destinationAddress, rtpPort, ttl);
  rtpGroupsock.multicastSendOnly(); // we're a SSM source
  Groupsock rtcpGroupsock(*env, destinationAddress, rtcpPort, ttl);
  rtcpGroupsock.multicastSendOnly(); // we're a SSM source

  // Create an 'H.264 Video RTP' sink from the RTP 'groupsock':
  videoSink = H264VideoRTPSink::createNew(*env, &rtpGroupsock, 96, 0x42, "h264");

  // Create (and start) a 'RTCP instance' for this RTP sink:
  const unsigned estimatedSessionBandwidth = 500; // in kbps; for RTCP b/w share
  const unsigned maxCNAMElen = 100;
  unsigned char CNAME[maxCNAMElen+1];
  gethostname((char*)CNAME, maxCNAMElen);
  CNAME[maxCNAMElen] = '\0'; // just in case
  RTCPInstance* rtcp
  = RTCPInstance::createNew(*env, &rtcpGroupsock,
			    estimatedSessionBandwidth, CNAME,
			    videoSink, NULL /* we're a server */,
			    True /* we're a SSM source */);
  // Note: This starts RTCP running automatically

  RTSPServer* rtspServer = RTSPServer::createNew(*env, 8554);
  if (rtspServer == NULL) {
    *env << "Failed to create RTSP server: " << env->getResultMsg() << "\n";
    exit(1);
  }
  ServerMediaSession* sms
    = ServerMediaSession::createNew(*env, "testStream", inputFileName,
		   "Session streamed by \"testMPEG4VideoStreamer\"",
					   True /*SSM*/);
  sms->addSubsession(PassiveServerMediaSubsession::createNew(*videoSink, rtcp));
  rtspServer->addServerMediaSession(sms);

  char* url = rtspServer->rtspURL(sms);
  *env << "Play this stream using the URL \"" << url << "\"\n";
  delete[] url;

  // Start the streaming:
  *env << "Beginning streaming...\n";
  play();

  env->taskScheduler().doEventLoop(); // does not return

  return 0; // only to prevent compiler warning
}

void afterPlaying(void* /*clientData*/) {
  *env << "...done reading from file\n";

  Medium::close(videoSource);
  // Note that this also closes the input file that this source read from.

  exit(0); // for now, stop after one pass
  // To loop the file instead, remove the exit() above and start again:
  // play();
}

void play() {
  // Open the input file as a 'byte-stream file source':
  ByteStreamFileSource* fileSource
    = ByteStreamFileSource::createNew(*env, inputFileName);
  if (fileSource == NULL) {
    *env << "Unable to open file \"" << inputFileName
	 << "\" as a byte-stream file source\n";
    exit(1);
  }
  
  FramedSource* videoES = fileSource;

  // Create a framer for the Video Elementary Stream:
  videoSource = H264VideoStreamFramer::createNew(*env, videoES);
  
  // Finally, start playing:
  *env << "Beginning to read from file...\n";
  videoSink->startPlaying(*videoSource, afterPlaying, videoSink);
}