On 09/04/2017 07:55 PM, Mark Thompson wrote:
On 04/09/17 18:01, Jorge Ramirez wrote:
On 09/04/2017 06:31 PM, Mark Thompson wrote:
On 04/09/17 17:00, Jorge Ramirez wrote:
On 09/03/2017 08:23 PM, Mark Thompson wrote:
On 03/09/17 17:54, Jorge Ramirez wrote:
On 09/03/2017 02:27 AM, Mark Thompson wrote:
+/* in ffmpeg there is a single thread that could be queueing/dequeuing buffers, so a
+ * timeout is required when retrieving a frame in case the driver has not received
+ * enough input to start generating output.
+ *
+ * once decoding starts, the timeout should not be hit.
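(For reference, the timed retrieval described here would look roughly like the
following; this is a sketch only, not the patch code, and needs <poll.h>,
<sys/ioctl.h>, <errno.h>, <linux/videodev2.h> and libavutil/error.h:

    /* Wait up to timeout_ms for a capture buffer, then dequeue it.
     * Returns 0 on success, AVERROR(EAGAIN) if the timeout expires. */
    static int dequeue_frame_timed(int fd, struct v4l2_buffer *buf, int timeout_ms)
    {
        struct pollfd pfd = { .fd = fd, .events = POLLIN };
        int ret = poll(&pfd, 1, timeout_ms);

        if (ret == 0)
            return AVERROR(EAGAIN);        /* driver has not produced output yet */
        if (ret < 0)
            return AVERROR(errno);
        if (ioctl(fd, VIDIOC_DQBUF, buf) < 0)
            return AVERROR(errno);
        return 0;
    }
)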
This seems like it could introduce a significant delay on startup for no good 
reason.  Can you instead just queue packets until either you run out of input 
buffers or a nonblocking dequeue succeeds?

(I might need to think more about how the semantics of this work.)
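Roughly like this, as a sketch only (the helper names here are invented, not
the patch API):

    /* Startup: keep feeding input and only try nonblocking dequeues. */
    while (packet_available(s) && output_queue_has_free_buffer(s)) {
        enqueue_next_packet(s);                  /* VIDIOC_QBUF on the OUTPUT queue */
        if (dequeue_frame(s, frame, /*timeout=*/0) == 0)
            return 0;                            /* first decoded frame is out */
    }
    return AVERROR(EAGAIN);                      /* no frame yet: request more input */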

if the decoder needs 4 blocks, the delay is 200ms; if it is 10 blocks, that is 
500ms, which doesn't seem too significant? when I test I barely notice the 
difference with respect to using the h264 codec (or any of the others, in fact)

the best solution would be to be able to block until the capture queue has 
frames ready, but for that we would need another thread feeding input 
independently on the other queue... does ffmpeg allow for this? separate 
threads for input and output?
Since the API is nonblocking, you can just return EAGAIN from receive_frame if 
there are any free buffers (to request more input).  You would then only block 
waiting for output if there is no more input (end of stream) or there aren't 
any free buffers (so no more input could be accepted).  Ideally there would 
then be no timeouts at all except in error cases.
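In outline (a sketch only, with invented helper names and context type):

    static int v4l2m2m_receive_frame(AVCodecContext *avctx, AVFrame *frame)
    {
        V4L2DecodeContext *s = avctx->priv_data;  /* hypothetical context type */

        /* While the driver can still accept input, prefer requesting it:
         * returning EAGAIN makes lavc call send_packet() again. */
        if (!s->draining && output_queue_has_free_buffer(s)) {
            if (dequeue_frame(s, frame, /*timeout=*/0) == 0)
                return 0;                        /* a frame was already waiting */
            return AVERROR(EAGAIN);
        }

        /* No free input buffers, or end of stream: block for output. */
        return dequeue_frame(s, frame, /*timeout=*/-1);
    }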
sure, can do that as well, not a problem.

the encoding API doesn't seem to allow for this though: once it retrieves a 
valid packet it appears to keep on reading packets without sending further 
frames (this causes the capture queue to block forever)

is this intentional or is it a bug?
The encode API should be identical to the decode API with frames/packets 
swapped (see docs in avcodec.h).

If you have an lavc-using program which calls receive_packet() repeatedly after 
it has returned EAGAIN and never calls send_packet() then that program is wrong.
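(The corresponding caller loop, per the avcodec.h docs, with error handling
elided:

    ret = avcodec_send_frame(enc, frame);        /* frame == NULL enters draining */
    if (ret < 0)
        goto error;

    while ((ret = avcodec_receive_packet(enc, &pkt)) == 0) {
        /* ... consume pkt ... */
        av_packet_unref(&pkt);
    }
    if (ret == AVERROR(EAGAIN)) {
        /* encoder wants another frame: send_frame() must be callable now */
    } else if (ret != AVERROR_EOF) {
        goto error;                              /* a real error */
    }
)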
thanks for the prompt answer.

yes, I am just using the ffmpeg binary to encode a stream; however once a valid 
encoded packet is returned, send_frame is never called again unless I return 
EAGAIN from v4l2m2m_receive_packet. But I can't return EAGAIN on receive_packet 
while I am blocked waiting for data which will never arrive (since send_frame 
is not executing)

seems to me like a bug in ffmpeg, but I don't like to question baseline code 
with obvious bugs (this seems too simple to be real)

anyway, looking at the function do_video_out(), the code seems strange but it 
explains why my encoding blocks unless I time out and return EAGAIN.


         ret = avcodec_send_frame(enc, in_picture);
         if (ret < 0)
             goto error;

         while (1) {
             ret = avcodec_receive_packet(enc, &pkt);
             update_benchmark("encode_video %d.%d", ost->file_index, ost->index);
             if (ret == AVERROR(EAGAIN))
                 break;
             if (ret < 0)
                 goto error;

             if (debug_ts) {
                 av_log(NULL, AV_LOG_INFO, "encoder -> type:video "
                        "pkt_pts:%s pkt_pts_time:%s pkt_dts:%s pkt_dts_time:%s\n",
                        av_ts2str(pkt.pts), av_ts2timestr(pkt.pts, &enc->time_base),
                        av_ts2str(pkt.dts), av_ts2timestr(pkt.dts, &enc->time_base));
             }

             if (pkt.pts == AV_NOPTS_VALUE && !(enc->codec->capabilities & AV_CODEC_CAP_DELAY))
                 pkt.pts = ost->sync_opts;

             av_packet_rescale_ts(&pkt, enc->time_base, ost->mux_timebase);

             if (debug_ts) {
                 av_log(NULL, AV_LOG_INFO, "encoder -> type:video "
                     "pkt_pts:%s pkt_pts_time:%s pkt_dts:%s pkt_dts_time:%s\n",
                     av_ts2str(pkt.pts), av_ts2timestr(pkt.pts, &ost->mux_timebase),
                     av_ts2str(pkt.dts), av_ts2timestr(pkt.dts, &ost->mux_timebase));
             }

             frame_size = pkt.size;
             output_packet(of, &pkt, ost, 0);

             /* if two pass, output log */
             if (ost->logfile && enc->stats_out) {
                 fprintf(ost->logfile, "%s", enc->stats_out);
             }
         }
     }


so if I queue 20 frames in the output queue and then allow frames to be 
dequeued, all of them are dequeued at once and then the code just blocks 
waiting for more input...



does the above look ok to you?
Yes: send_frame() is always callable once, and then receive_packet() is 
callable repeatedly until it returns EAGAIN; once it does, send_frame() is 
necessarily callable again.
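For example, a valid trace is:

    send_frame(F1)    -> 0
    receive_packet()  -> AVERROR(EAGAIN)     (encoder needs more input)
    send_frame(F2)    -> 0
    receive_packet()  -> 0 (packet P1)
    receive_packet()  -> AVERROR(EAGAIN)     (send_frame() is callable again)
    send_frame(F3)    -> 0
    ...
    send_frame(NULL)  -> 0                   (start draining)
    receive_packet()  -> 0 ... -> AVERROR_EOF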

Can you offer a sequence of valid returns from the lavc API which would break 
it?  (Ignoring what the implementation behind it actually is for a moment.)


yes, just the code I have in the patch I am preparing for v9, for instance, where I remove the timeout.
https://github.com/ldts/FFmpeg/blob/patches/v9/libavcodec/v4l2_m2m_enc.c#L267

encoding just blocks: after having queued all input buffers it dequeues the first packet, but then it just keeps on trying to do that forever, since we block waiting for output to be ready and never send another frame.

decoding works as expected.
https://github.com/ldts/FFmpeg/blob/patches/v9/libavcodec/v4l2_m2m_dec.c#L148
