On 05.02.2017 21:15, Blake Senftner wrote:
I’m in a somewhat similar situation as you, working on a video player,
using a more C approach versus C++ classes for all the data and logic. I
write a hybrid, with classes, but minimally. I prefer C.  And I’m using
wxWidgets with OpenGL, so there is very little hiding/encapsulation of
my data handling.

  * In your “Init AV backend” I have additional calls to:
      o avdevice_register_all();   seems to be necessary for USB streams
      o avfilter_register_all();    you will want an avfilter graph to
        filter the stream of failed frames and similar error recovery
      o avformat_network_init();   if you’re playing IP streams, you’ll
        want this
  * When you call sws_getContext(), pass in
    videoCodecContext->pix_fmt as the 3rd (source pixel format)
    parameter. If that value is left unset, sws does not know the
    source format and cannot reliably return pixels in your desired
    format.
  * When using avcodec_send_packet() one needs to use
    avcodec_receive_frame(), as that is the API design. Consider
    those two routines the in and out of a single algorithm.
  * Also, due to buffering, you will want avcodec_receive_frame() called
    in a while loop, to drain the buffered frames after av_read_frame()
    has indicated the stream EOF'ed or terminated.
  * It also looks like you’re not using AVFilter yet, which I have found
    is critical to stable playback - and it replaces your use of
    sws_context… (with its own use of sws, but with a lot more logic
    around it)
  * Regardless of whether you use AVFilter or sws directly, your pixel
    data is not guaranteed to be a single contiguous pixel buffer
    with all codecs. Your logic needs to look at each frame's
    linesize[0] (in the case of RGB pixels) and compare that to your
    calculated number of expected bytes per row. If they match, you
    have a contiguous buffer for each pixel line. If they do not
    match, your logic needs to loop over the pixel buffer, copying
    the pixels from each row start, because each row can end with
    padding bytes that are not image pixels.
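
The send/receive pairing and the drain loop from the points above can be sketched roughly like this (dec_ctx, pkt and frame are placeholder names for objects set up elsewhere; error handling is minimal):

```c
#include <libavcodec/avcodec.h>

/* Feed one packet in, pull every available frame out.
 * Passing pkt == NULL enters drain mode after av_read_frame()
 * has reported EOF, flushing the decoder's buffered frames. */
static int decode_packet(AVCodecContext *dec_ctx, AVPacket *pkt,
                         AVFrame *frame)
{
    int ret = avcodec_send_packet(dec_ctx, pkt);
    if (ret < 0)
        return ret;

    for (;;) {
        ret = avcodec_receive_frame(dec_ctx, frame);
        if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
            return 0;   /* needs more input, or fully drained */
        if (ret < 0)
            return ret; /* a real decoding error */

        /* ... use frame here, e.g. hand it to your filter graph ... */

        av_frame_unref(frame);
    }
}
```

Note that one packet can produce zero frames (the decoder is still buffering) or several frames, which is why the receive call sits in a loop rather than being paired one-to-one with the send.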

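The linesize check in the last point can be sketched in plain C (the buffer names are illustrative; in FFmpeg the stride comes from frame->linesize[0]):

```c
#include <string.h>
#include <stdint.h>
#include <assert.h>

/* Copy an RGB24 image out of a possibly padded source buffer.
 * src_linesize is the number of bytes per row in the source
 * (frame->linesize[0] in FFmpeg), which may exceed width * 3. */
static void copy_rgb24(uint8_t *dst, const uint8_t *src,
                       int width, int height, int src_linesize)
{
    int row_bytes = width * 3;  /* useful bytes per row */

    if (src_linesize == row_bytes) {
        /* No padding: the whole image is one contiguous block. */
        memcpy(dst, src, (size_t)row_bytes * height);
    } else {
        /* Padded rows: copy row by row, skipping the trailing
         * padding bytes at the end of each source row. */
        for (int y = 0; y < height; y++)
            memcpy(dst + (size_t)y * row_bytes,
                   src + (size_t)y * src_linesize,
                   (size_t)row_bytes);
    }
}
```

The same pattern applies to each plane of planar formats (YUV420P etc.), using the corresponding linesize[i] for every plane.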

I don’t know when you started working on your player, but from reading
your comments it looks like you are just a few beats behind my figuring all
this out. I can’t share my code directly with you, but keep posting. The
advice from others looks correct, but as you point out, you’re working
in C, and C++ encapsulation is just hiding details you want to see.

Sincerely,
-Blake Senftner
Computer Scientist



Hello Blake,
hello all.

thank you for sharing your insights, for sure very helpful. Just AVFilter is again not very intuitive: when to use the buffer source, when to use the sink, and how to achieve any color conversion or scaling!

I got my player working; in particular, the error came from an uninitialized src frame (video frame) and the issue that the packet's data and linesize were not filled.

Currently I am not using the new API (avcodec_send_packet() and avcodec_receive_frame()), because I didn't get my output working. The decoding itself did work almost independently, using a while loop around avcodec_receive_frame(), which seems to do most of the parsing itself. Even though I used av_image_fill_arrays() to get the information of the frame into the videoFrame for sws_scale(), I simply could not get the video shown yet.

Anyhow, it's working for me now. But to be honest, I am using deprecated features...

I also noticed that ffplay is still using avcodec_decode_video2(), and it seems no one is using the new API yet. Sadly.

If anyone else has suggestions or hints on how to set up the packet or AVFrame data and linesize, or how to correctly set up the new decoding API, please let me know.

As I would really like not to rely on deprecated features, even though they seem to be the easiest solution.

Kind regards

Jan
_______________________________________________
Libav-user mailing list
[email protected]
http://ffmpeg.org/mailman/listinfo/libav-user