On 08/29/2017 10:56 AM, wm4 wrote:
On Mon, 28 Aug 2017 23:36:08 +0200
Jorge Ramirez <jorge.ramirez-or...@linaro.org> wrote:

On 08/28/2017 09:53 PM, wm4 wrote:
On Mon, 28 Aug 2017 21:24:26 +0200
Jorge Ramirez <jorge.ramirez-or...@linaro.org> wrote:
On 08/28/2017 02:16 PM, Jorge Ramirez wrote:
On 08/28/2017 12:47 PM, wm4 wrote:
I guess that instead of polling for the AVBufferRef to be unreferenced,
I can associate a sync (i.e. a semaphore) with each buffer, take it on
release, and post the semaphore when the AVBufferRef is unreferenced.
That is actually pretty clean in terms of CPU usage.
That would just freeze an API user calling avcodec_close(), when it
keeps around decoded AVFrames for later use.
Yes, I understand, but it does avoid using the CPU to poll for the
buffer release (an incremental improvement).

But yes, I take the message: even though this proposal might suffice
for simple video players (my tests), it is not good enough for other
users that need the decoded frames for post-processing.

Is this a blocker to upstreaming, or could I continue working on it
with the encoder/decoder flagged as EXPERIMENTAL? The current situation
at least keeps video players happy.
I'd say yes this is a blocker. We usually try to avoid committing
half-finished code, because it often means it will be never finished.
Hi, I forgot to say earlier: thanks for all the review over the past
couple of days (it has been of much help).

On the half-finished matter, the issue I face is that the current
code doesn't cover the use case where _all_ the processed frames have to
be kept available indefinitely (this is why I thought that perhaps
setting .capabilities to AV_CODEC_CAP_EXPERIMENTAL could be an option to
upstream and get more exposure to other users).

I do plan to continue supporting the v4l2 ffmpeg integration (mmapped
filters, DRM and so on). Having invested this long, I do want to see this
through; and since I can't guarantee that some "force majeure" won't
happen, I think the sooner the code I have been working on gets
exposure, the sooner we will start seeing contributions.

Anyhow, the current code does support the typical use case of most video
players, so it would benefit a considerable number of users.

Does it have to be all or nothing at this point, or could we flag the
v4l2 m2m codecs as experimental?
You could just copy the frames before returning them to the user to
avoid breaking refcounting.

Thinking again about this, I'd rather not do that (it would impact performance too much). Hendrik gave me some pointers yesterday in line with what you said as well. I implemented reference counting, delegating the closing of _some_ resources needed to keep the buffers alive.

Closing the codec now doesn't wait or leave dangling buffers.

The AVBufferRef free callback now looks like this:

static void free_v4l2buf_cb(void *opaque, uint8_t *unused)
{
    V4L2Buffer *avbuf = opaque;
    V4L2m2mContext *s = container_of(avbuf->context, V4L2m2mContext, capture);

    atomic_fetch_sub_explicit(&s->refcount, 1, memory_order_acq_rel);

    if (s->reinit) {
        if (!atomic_load(&s->refcount))
            sem_post(&s->refsync);
        return;
    }

    if (avbuf->context->streamon) {
        avbuf->context->ops.enqueue(avbuf);
        return;
    }

    if (!atomic_load(&s->refcount))
        avpriv_v4l2m2m_end(s);
}

The only case where I can't get away without waiting for the AVBufferRefs to be released is when re-initializing the frame dimensions (i.e., resolution or format changes) _during_ streaming, since I need to release _all_ hardware buffers and queue them again.

will this be acceptable?
I have just tested these changes and they work as expected.


Just wondering: if the AVBufferRefs must live forever (i.e., after the
codecs have been closed), what do other codecs that dequeue from a limited
number of re-usable hardware-allocated buffers do?
Do they use the CPU to allocate and copy the data from those buffers to
the heap?
Like I wrote before: hwaccels use AVHWFramesContext, which was made
more or less for this situation. If you want FD support later (for
something like zero-copy transcoding or playback), AVHWFramesContext
will probably be mandatory anyway. But I guess it's a big change for
someone not familiar with the codebase.
Yes, I had a look and it does not seem an easy change to integrate.

Still, I'd like to make sure we are talking about the same requirement,
because if AVHWFramesContext works around the issue [1], I can probably
do the same with a few more lines of code (including FD support for
v4l2, which is pretty straightforward).

[1] When:
a) there is a limited number of buffers allocated by the hardware and
b) these buffers are mapped to the process address space and
c) the application can choose to keep _all_ decoded buffers for post
processing,

then there is no option other than copying each of the processed buffers
to newly allocated areas on the heap (there can be no magic here, since
the hardware buffers are limited in number and have to be reused).
The semantics of AVHWFramesContext are such that if the user keeps
enough AVFrames references to exhaust the frame pool, trying to continue
decoding will result in an error. The whole point is to make the
limited and fixed buffer allocation visible to the API user.

Makes sense, although I haven't found such an interface. In my view, the user should be able to register an observer to receive async events from codecs (be these from the hardware or from codec state machines); could you point me to where that is? The way I understand ffmpeg, everything seems to work synchronously, with no room for events like this (so the user would only be notified of an error after trying to get a frame, for instance).



We are also thinking about adding an API that lets the decoder
communicate to the user how many references are required (inherently,
and due to codec reference frame requirements).

That is an interface I would welcome as well.


I had a look at AVHWFramesContext and it seems to me that, under the
transfer-frames semantics, it performs some sort of memcpy in/out
(something I could do on every capture buffer dequeue if that is the
requirement). I could be wrong, so I would appreciate a clarification
if that reading is incorrect.
The AVFrames in the context pool would be opaque frames (using FDs),
and there are entry points for temporary and permanent mapping of
opaque frames, which in your code would call mmap.

thanks. I'll have a look at this.


Notice that I do insist on continuing to use V4L2_MEMORY_MMAP (instead
of switching to V4L2_MEMORY_USERPTR) because it is easy to export the
buffers as DMABUFs (~30 lines of code) and then pass these as FDs (which
could be associated with short-lived AVBufferRefs for DRM).
No idea what's the difference between those.

If you want to support direct FD/dmabuf export, adapting to
AVHWFramesContext now would probably be easier in total. Especially
because of the implied API change. But I'd wait for Mark Thompson's
comments on that before making any big changes. AFAIK he posted a
proposal patch for a DRM AVHWFramesContext too.

Why not just add an FD to the AVBufferRef and let the user decide whether to use it?


But manually "nesting" AVBufferRefs to make any underlying state
refcounted would also work.
I think so. Context release now looks like this (it raises an ERROR to
the user but will not lock or poll):

void avpriv_v4l2_context_release(V4L2Context *ctx)
{
    struct timespec timeout = { 0, 0 };
    int i, ret;

    if (!ctx->buffers)
        return;

    timeout.tv_sec = av_gettime() / 1000000 + 10;

    /* wait until all buffers owned by the user are returned */
    for (i = 0; i < ctx->num_buffers; i++) {
        for (;;) {
            ret = sem_timedwait(&ctx->buffers[i].delete_sync, &timeout);
            if (!ret)
                break;
            if (errno == EINTR)
                continue;
            if (errno == ETIMEDOUT)
                av_log(ctx->log_ctx, AV_LOG_ERROR,
                       "AVBufferRef nbr %d in use, bad things might happen\n", i);
Undefined behavior after timeout? How is that reasonable in any way?

It is not! Yes, please forget about the above...

_______________________________________________
ffmpeg-devel mailing list
ffmpeg-devel@ffmpeg.org
http://ffmpeg.org/mailman/listinfo/ffmpeg-devel
