As per the ffmpeg tutorial code, for code like this:

while (av_read_frame_proc(pFormatCtx, &packet) >= 0)
{
    if (packet.stream_index == videoStream)
    {
        // Decode video frame
        avcodec_decode_video2_proc(pCodecCtx, pFrame, &frameFinished, &packet);
        if (frameFinished)
        {
            sws_scale_proc(sws_ctx, (uint8_t const * const *)pFrame->data,
                           pFrame->linesize, 0, pCodecCtx->height,
                           pFrameRGB->data, pFrameRGB->linesize);
            // ...
        }
    }
}

As a general question: if I'm receiving real-time webcam frames, or reading a remote video file over a very fast network, and frames arrive faster than the video's framerate, I'd need to buffer them in my own datastructure. But if my datastructure is small, won't I lose a lot of frames that didn't get buffered? I'm asking because I read somewhere that ffmpeg buffers a video internally. How does that buffering happen, and how would it be any different from a buffer I implement myself? The chance that I'd lose frames is very real, isn't it? Is there anything I could read up on, or some source code I could look at, about this? (A rough sketch of the kind of buffer I have in mind is below.)
Or does all this depend on the streaming protocol being used?
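
For context, this is roughly what I mean by "my own datastructure": a small fixed-size ring of decoded frames where, once the ring is full, the oldest frame is thrown away to make room for the new one. This is just a sketch; the names (FrameRing, ring_push, ring_pop) are mine for illustration and not ffmpeg API, and the locking you'd need for a separate display thread is omitted:

#include <stdlib.h>
#include <libavutil/frame.h>

/* A small fixed-size ring buffer of decoded frames (my own structure,
 * not part of ffmpeg). When it is full, pushing a new frame drops the
 * oldest one -- this is exactly where frames would get lost. */
typedef struct FrameRing {
    AVFrame **slots;
    int capacity;
    int head;      /* index of the oldest frame            */
    int count;     /* frames currently buffered            */
    long dropped;  /* how many frames were thrown away     */
} FrameRing;

static FrameRing *ring_alloc(int capacity)
{
    FrameRing *r = calloc(1, sizeof(*r));
    if (!r) return NULL;
    r->slots = calloc(capacity, sizeof(*r->slots));
    if (!r->slots) { free(r); return NULL; }
    r->capacity = capacity;
    return r;
}

/* Takes ownership of frame. Drops the oldest buffered frame if full. */
static void ring_push(FrameRing *r, AVFrame *frame)
{
    if (r->count == r->capacity) {   /* buffer full: a frame is lost here */
        av_frame_free(&r->slots[r->head]);
        r->head = (r->head + 1) % r->capacity;
        r->count--;
        r->dropped++;
    }
    int tail = (r->head + r->count) % r->capacity;
    r->slots[tail] = frame;
    r->count++;
}

/* Returns the oldest frame (caller frees it), or NULL if empty. */
static AVFrame *ring_pop(FrameRing *r)
{
    if (r->count == 0) return NULL;
    AVFrame *frame = r->slots[r->head];
    r->slots[r->head] = NULL;
    r->head = (r->head + 1) % r->capacity;
    r->count--;
    return frame;
}

In the decode loop above, after frameFinished I'd clone the decoded frame (e.g. with av_frame_clone(), since pFrame is reused on the next iteration) and ring_push() it, while a display thread pops at the video's framerate. The dropped counter is where I'd expect to see frames lost if the ring is too small.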

--
Navin

