Re: [PATCH] media: vb2: Allow reqbufs(0) with "in use" MMAP buffers

2018-11-13 Thread Nicolas Dufresne
On Wednesday, November 14, 2018 at 00:27 +0200, Sakari Ailus wrote:
> Hi Philipp,
> 
> On Tue, Nov 13, 2018 at 04:06:21PM +0100, Philipp Zabel wrote:
> > From: John Sheu 
> > 
> > Videobuf2 presently does not allow VIDIOC_REQBUFS to destroy outstanding
> > buffers if the queue is of type V4L2_MEMORY_MMAP, and if the buffers are
> > considered "in use".  This is different behavior from that of other memory
> > types and prevents us from deallocating buffers in the following two cases:
> > 
> > 1) There are outstanding mmap()ed views on the buffer. However even if
> >we put the buffer in reqbufs(0), there will be remaining references,
> >due to vma .open/close() adjusting vb2 buffer refcount appropriately.
> >This means that the buffer will be in fact freed only when the last
> >mmap()ed view is unmapped.
> > 
> > 2) Buffer has been exported as a DMABUF. Refcount of the vb2 buffer
> >is managed properly by VB2 DMABUF ops, i.e. incremented on DMABUF
> >get and decremented on DMABUF release. This means that the buffer
> >will be alive until all importers release it.
> > 
> > Considering both cases above, there does not seem to be any need to
> > prevent reqbufs(0) operation, because buffer lifetime is already
> > properly managed by both mmap() and DMABUF code paths. Let's remove it
> > and allow userspace to free the queue (and potentially allocate a new
> > one) even though old buffers might still be in processing.
> > 
> > To let userspace know that the kernel now supports orphaning buffers
> > that are still in use, add a new V4L2_BUF_CAP_SUPPORTS_ORPHANED_BUFS
> > capability flag to be set by reqbufs and create_bufs.
> > 
> > Signed-off-by: John Sheu 
> > Reviewed-by: Pawel Osciak 
> > Reviewed-by: Tomasz Figa 
> > Signed-off-by: Tomasz Figa 
> > [p.za...@pengutronix.de: moved __vb2_queue_cancel out of the mmap_lock
> >  and added V4L2_BUF_CAP_SUPPORTS_ORPHANED_BUFS]
> > Signed-off-by: Philipp Zabel 
> 
> This lets the user allocate lots of mmap'ed buffers that are pinned in
> physical memory. Considering that we don't really have a proper mechanism
> to limit that anyway,

It's currently limited to 32 buffers. That's no worse than DRM dumb
buffers, which will let you allocate as much as you want.

> 
> Acked-by: Sakari Ailus 
> 
> That said, the patch must be accompanied by the documentation change in
> Documentation/media/uapi/v4l/vidioc-reqbufs.rst .
> 
> I wonder what Hans thinks.
> 
> > ---
> >  .../media/common/videobuf2/videobuf2-core.c   | 26 +--
> >  .../media/common/videobuf2/videobuf2-v4l2.c   |  2 +-
> >  include/uapi/linux/videodev2.h|  1 +
> >  3 files changed, 3 insertions(+), 26 deletions(-)
> > 
> > diff --git a/drivers/media/common/videobuf2/videobuf2-core.c 
> > b/drivers/media/common/videobuf2/videobuf2-core.c
> > index 975ff5669f72..608459450c1e 100644
> > --- a/drivers/media/common/videobuf2/videobuf2-core.c
> > +++ b/drivers/media/common/videobuf2/videobuf2-core.c
> > @@ -553,20 +553,6 @@ bool vb2_buffer_in_use(struct vb2_queue *q, struct 
> > vb2_buffer *vb)
> >  }
> >  EXPORT_SYMBOL(vb2_buffer_in_use);
> >  
> > -/*
> > - * __buffers_in_use() - return true if any buffers on the queue are in use and
> > - * the queue cannot be freed (by the means of REQBUFS(0)) call
> > - */
> > -static bool __buffers_in_use(struct vb2_queue *q)
> > -{
> > -   unsigned int buffer;
> > -   for (buffer = 0; buffer < q->num_buffers; ++buffer) {
> > -   if (vb2_buffer_in_use(q, q->bufs[buffer]))
> > -   return true;
> > -   }
> > -   return false;
> > -}
> > -
> >  void vb2_core_querybuf(struct vb2_queue *q, unsigned int index, void *pb)
> >  {
> > call_void_bufop(q, fill_user_buffer, q->bufs[index], pb);
> > @@ -674,23 +660,13 @@ int vb2_core_reqbufs(struct vb2_queue *q, enum 
> > vb2_memory memory,
> >  
> > if (*count == 0 || q->num_buffers != 0 ||
> > (q->memory != VB2_MEMORY_UNKNOWN && q->memory != memory)) {
> > -   /*
> > -* We already have buffers allocated, so first check if they
> > -* are not in use and can be freed.
> > -*/
> > -   mutex_lock(&q->mmap_lock);
> > -   if (q->memory == VB2_MEMORY_MMAP && __buffers_in_use(q)) {
> > -   mutex_unlock(&q->mmap_lock);
> > -   dprintk(1, "memory in use, cannot free\n");
> > -   return -EBUSY;
> > -   }
> > -
> > /*
> >  * Call queue_cancel to clean up any buffers in the
> >  * QUEUED state which is possible if buffers were prepared or
> >  * queued without ever calling STREAMON.
> >  */
> > __vb2_queue_cancel(q);
> > +   mutex_lock(&q->mmap_lock);
> > ret = __vb2_queue_free(q, q->num_buffers);
> > mutex_unlock(&q->mmap_lock);
> > if (ret)
> > diff --git a/drivers/media/common/videobuf2/videobuf2-v4l2.c 
> > 

Re: [RFP] Which V4L2 ioctls could be replaced by better versions?

2018-11-09 Thread Nicolas Dufresne
On Thursday, November 8, 2018 at 16:45 +0900, Tomasz Figa wrote:
> > In this patch we should consider a way to tell userspace that this has
> > been opted in; otherwise existing userspace will have to keep using
> > sub-optimal copy-based reclaiming in order to ensure that renegotiation
> > can work on older kernels too. At worst someone could probably do trial
> > and error (reqbufs(1)/mmap/reqbufs(0)) but on CMA with large buffers
> > this introduces extra startup time.
> 
> Would such REQBUFS dance be really needed? Couldn't one simply try
> reqbufs(0) when it's really needed and if it fails then do the copy,
> otherwise just proceed normally?

In a simple program, maybe. In modularized code, where the consumer of
these buffers (the one that is forced to make a copy) does not know the
origin of the DMABuf, it's a bit more complicated.

In GStreamer, as an example, the producer is a plugin called
libgstvideo4linux2.so, while the common consumer would be libgstkms.so.
They don't know about each other. The pipeline would be described as:

  v4l2src ! kmssink

GStreamer does not have an explicit reclaiming mechanism. No one knew
about V4L2's restrictions when this was designed; DMABuf didn't exist and
GStreamer didn't have OMX support.

What we ended up crafting, as a stopgap, is that when the upstream
element (v4l2src) queries a new allocation from downstream (kmssink), we
always reclaim any old buffers by copying them. kmssink holds on to a
buffer because we can't remove the scanout buffer from the display. This
is slow and inefficient, and also totally unneeded if the dmabuf
originates from another kernel subsystem (like DRM).

So what I'd like to be able to do, to support this in a more optimal
and generic way, is to mark the buffers that need reclaiming before
letting them go. But for that, I would need a flag somewhere to tell me
that the kernel allows this.

You've got the context; maybe the conclusion is that I should simply do
a kernel version check, though I'm sure a lot of people will backport
this, which means that check won't work so well.

Let me know. I understand adding more API is not fun, but as nothing is
ever versioned in the linux-media world, it's really hard to detect
and use new behaviour while supporting what everyone currently runs on
their systems.

I would probably try to find a way to implement your suggestion, and
then introduce a flag in the query itself, but I would need to think
about it a little more. It's not as simple as it looks, unfortunately.
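For what it's worth, the detection I have in mind could be as simple as testing a capability bit returned by reqbufs. A sketch, assuming a flag along the lines of the V4L2_BUF_CAP_SUPPORTS_ORPHANED_BUFS proposed in the other thread (the numeric value below is an assumption mirroring what such a uAPI addition would look like, not a released API):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical sketch: decide at runtime whether reqbufs(0) may orphan
 * in-use MMAP buffers.  The capabilities field would be filled in by the
 * kernel on VIDIOC_REQBUFS; the flag value below is an assumption, not
 * released uAPI. */
#define V4L2_BUF_CAP_SUPPORTS_ORPHANED_BUFS (1u << 4)

static int can_orphan_buffers(uint32_t reqbufs_capabilities)
{
	/* Older kernels leave the (then-reserved) field zeroed, so this
	 * safely falls back to the copy-based reclaiming path. */
	return (reqbufs_capabilities & V4L2_BUF_CAP_SUPPORTS_ORPHANED_BUFS) != 0;
}
```

Because reserved fields are zeroed by older kernels, this degrades gracefully without a version check, backported or not.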

Nicolas


signature.asc
Description: This is a digitally signed message part


Re: [RFC v2 1/4] media: Introduce helpers to fill pixel format structs

2018-11-02 Thread Nicolas Dufresne
On Friday, November 2, 2018 at 12:52 -0300, Ezequiel Garcia wrote:
> Add two new API helpers, v4l2_fill_pixfmt and v4l2_fill_pixfmt_mp,
> to be used by drivers to calculate plane sizes and bytes per lines.
> 
> Note that driver-specific padding and alignment are not yet
> taken into account.
> 
> Signed-off-by: Ezequiel Garcia 
> ---
>  drivers/media/v4l2-core/Makefile  |  2 +-
>  drivers/media/v4l2-core/v4l2-common.c | 66 +++
>  drivers/media/v4l2-core/v4l2-fourcc.c | 93 +++
>  include/media/v4l2-common.h   |  5 ++
>  include/media/v4l2-fourcc.h   | 53 +++
>  5 files changed, 218 insertions(+), 1 deletion(-)
>  create mode 100644 drivers/media/v4l2-core/v4l2-fourcc.c
>  create mode 100644 include/media/v4l2-fourcc.h
> 
> diff --git a/drivers/media/v4l2-core/Makefile 
> b/drivers/media/v4l2-core/Makefile
> index 9ee57e1efefe..bc23c3407c17 100644
> --- a/drivers/media/v4l2-core/Makefile
> +++ b/drivers/media/v4l2-core/Makefile
> @@ -7,7 +7,7 @@ tuner-objs:=  tuner-core.o
>  
>  videodev-objs:=  v4l2-dev.o v4l2-ioctl.o v4l2-device.o v4l2-fh.o 
> \
>   v4l2-event.o v4l2-ctrls.o v4l2-subdev.o v4l2-clk.o \
> - v4l2-async.o
> + v4l2-async.o v4l2-fourcc.o
>  ifeq ($(CONFIG_COMPAT),y)
>videodev-objs += v4l2-compat-ioctl32.o
>  endif
> diff --git a/drivers/media/v4l2-core/v4l2-common.c 
> b/drivers/media/v4l2-core/v4l2-common.c
> index 50763fb42a1b..97bb51d15188 100644
> --- a/drivers/media/v4l2-core/v4l2-common.c
> +++ b/drivers/media/v4l2-core/v4l2-common.c
> @@ -61,6 +61,7 @@
>  #include 
>  #include 
>  #include 
> +#include 
>  
>  #include 
>  
> @@ -455,3 +456,68 @@ int v4l2_s_parm_cap(struct video_device *vdev,
>   return ret;
>  }
>  EXPORT_SYMBOL_GPL(v4l2_s_parm_cap);
> +
> +void v4l2_fill_pixfmt_mp(struct v4l2_pix_format_mplane *pixfmt, int 
> pixelformat, int width, int height)

My first impression is that this code expects width/height to be
aligned to the sub-sampling. Hence, I don't think it works for odd
width/height. It would be nice to document this constraint, especially
since it will lead to under-allocation due to the lack of rounding up.
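To make the under-allocation concrete, here is a small arithmetic sketch (illustrative helpers only, assuming a 4:2:0 layout such as NV12 with hsub = vsub = 2; these are not the proposed kernel helpers):

```c
#include <assert.h>
#include <stddef.h>

/* Chroma plane bytes as computed in the patch above: one expression with
 * plain truncating division. */
static size_t chroma_size_truncated(size_t width, size_t height, size_t cpp,
				    size_t hsub, size_t vsub)
{
	return width * height * cpp / (hsub * vsub);
}

/* Same quantity, but rounding each subsampled dimension up first, which
 * is what odd resolutions need to avoid under-allocation. */
static size_t chroma_size_rounded(size_t width, size_t height, size_t cpp,
				  size_t hsub, size_t vsub)
{
	return ((width + hsub - 1) / hsub) *
	       ((height + vsub - 1) / vsub) * cpp;
}
```

For an aligned 640x480 frame both agree, but for 641x481 the truncating form allocates 154160 bytes for the NV12 chroma plane where the rounded form needs 154722, so a driver writing ceil-sized lines would overrun the buffer.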

> +{
> + const struct v4l2_format_info *info;
> + struct v4l2_plane_pix_format *plane;
> + int i;
> +
> + info = v4l2_format_info(pixelformat);
> + if (!info)
> + return;
> +
> + pixfmt->width = width;
> + pixfmt->height = height;
> + pixfmt->pixelformat = pixelformat;
> +
> + if (info->has_contiguous_planes) {
> + pixfmt->num_planes = 1;
> > + plane = &pixfmt->plane_fmt[0];
> + plane->bytesperline = info->is_compressed ?
> + 0 : width * info->cpp[0];
> + plane->sizeimage = info->header_size;
> + for (i = 0; i < info->num_planes; i++) {
> + unsigned int hsub = (i == 0) ? 1 : info->hsub;
> + unsigned int vsub = (i == 0) ? 1 : info->vsub;
> +
> + plane->sizeimage += width * height * info->cpp[i] / 
> (hsub * vsub);
> + }
> + } else {
> + pixfmt->num_planes = info->num_planes;
> + for (i = 0; i < info->num_planes; i++) {
> + unsigned int hsub = (i == 0) ? 1 : info->hsub;
> + unsigned int vsub = (i == 0) ? 1 : info->vsub;
> +
> > + plane = &pixfmt->plane_fmt[i];
> + plane->bytesperline = width * info->cpp[i] / hsub;
> + plane->sizeimage = width * height * info->cpp[i] / 
> (hsub * vsub);
> + }
> + }
> +}
> +EXPORT_SYMBOL_GPL(v4l2_fill_pixfmt_mp);
> +
> +void v4l2_fill_pixfmt(struct v4l2_pix_format *pixfmt, int pixelformat, int 
> width, int height)
> +{
> + const struct v4l2_format_info *info;
> + char name[32];
> + int i;
> +
> + pixfmt->width = width;
> + pixfmt->height = height;
> + pixfmt->pixelformat = pixelformat;
> +
> + info = v4l2_format_info(pixelformat);
> + if (!info)
> + return;
> + pixfmt->bytesperline = info->is_compressed ? 0 : width * info->cpp[0];
> +
> + pixfmt->sizeimage = info->header_size;
> + for (i = 0; i < info->num_planes; i++) {
> + unsigned int hsub = (i == 0) ? 1 : info->hsub;
> + unsigned int vsub = (i == 0) ? 1 : info->vsub;
> +
> + pixfmt->sizeimage += width * height * info->cpp[i] / (hsub * 
> vsub);
> + }
> +}
> +EXPORT_SYMBOL_GPL(v4l2_fill_pixfmt);
> diff --git a/drivers/media/v4l2-core/v4l2-fourcc.c 
> b/drivers/media/v4l2-core/v4l2-fourcc.c
> new file mode 100644
> index ..4e8a15525b58
> --- /dev/null
> +++ b/drivers/media/v4l2-core/v4l2-fourcc.c
> @@ -0,0 +1,93 @@
> +/*
> + * Copyright (c) 2018 Collabora, Ltd.
> + *
> + * Based on drm-fourcc:
> + * Copyright (c) 2016 Laurent Pinchart 
> + *
> + * Permission to use, copy, modify, 

Re: [RFP] Which V4L2 ioctls could be replaced by better versions?

2018-10-27 Thread Nicolas Dufresne
On Monday, October 22, 2018 at 12:37 +0900, Tomasz Figa wrote:
> Hi Philipp,
> 
> On Mon, Oct 22, 2018 at 1:28 AM Philipp Zabel  wrote:
> > 
> > On Wed, Oct 03, 2018 at 05:24:39PM +0900, Tomasz Figa wrote:
> > [...]
> > > > Yes, but that would fall into a complete redesign, I guess. The buffer
> > > > allocation scheme is very inflexible. You can't have buffers of two
> > > > dimensions allocated at the same time for the same queue. Worse, you
> > > > cannot keep even one buffer as your scanout buffer while reallocating
> > > > new buffers; this is not permitted by the framework (in software). As a
> > > > side effect, there is no way to optimize resolution changes: you
> > > > even have to copy your scanout buffer on the CPU to free it in order
> > > > to proceed. Resolution changes are thus painfully slow, by design.
> > 
> > [...]
> > > Also, I fail to understand the scanout issue. If one exports a vb2
> > > buffer to a DMA-buf and import it to the scanout engine, it can keep
> > > scanning out from it as long as it want, because the DMA-buf will hold
> > > a reference on the buffer, even if it's removed from the vb2 queue.
> > 
> > REQBUFS 0 fails if the vb2 buffer is still in use, including from dmabuf
> > attachments: vb2_buffer_in_use checks the num_users memop. The refcount
> > returned by num_users is shared between the vmarea handler and dmabuf ops,
> > so any dmabuf attachment counts towards in_use.
> 
> Ah, right. I've managed to completely forget about it, since we have a
> downstream patch that we attempted to upstream earlier [1], but didn't
> have a chance to follow up on the comments and there wasn't much
> interest in it in general.
> 
> [1] https://lore.kernel.org/patchwork/patch/607853/
> 
> Perhaps it would be worth reviving?

In this patch we should consider a way to tell userspace that this has
been opted in; otherwise existing userspace will have to keep using
sub-optimal copy-based reclaiming in order to ensure that renegotiation
can work on older kernels too. At worst someone could probably do trial
and error (reqbufs(1)/mmap/reqbufs(0)) but on CMA with large buffers
this introduces extra startup time.

> 
> Best regards,
> Tomasz




Re: [PATCH 2/2] vicodec: Implement spec-compliant stop command

2018-10-19 Thread Nicolas Dufresne
On Friday, October 19, 2018 at 07:35 -0400, Nicolas Dufresne wrote:
> On Friday, October 19, 2018 at 09:28 +0200, Hans Verkuil wrote:
> > On 10/18/2018 06:08 PM, Ezequiel Garcia wrote:
> > > Set up a statically-allocated, dummy buffer to
> > > be used as flush buffer, which signals
> > > a encoding (or decoding) stop.
> > > 
> > > When the flush buffer is queued to the OUTPUT queue,
> > > the driver will send an V4L2_EVENT_EOS event, and
> > > mark the CAPTURE buffer with V4L2_BUF_FLAG_LAST.
> > 
> > I'm confused. What is the current driver doing wrong? It is already
> > setting the LAST flag AFAIK. I don't see why a dummy buffer is
> > needed.
> 
> I'm not sure of this patch either. It seems to trigger the legacy
> "empty payload" buffer case. Driver should mark the last buffer, and
> then following poll should return EPIPE. Maybe it's the later that
> isn't respected ?

Sorry, I sent this too fast. The following poll should not block,
and DQBUF should return EPIPE.

In GStreamer we currently ignore the LAST flag and wait for EPIPE. The
reason is that not all drivers can set the LAST flag. The Exynos firmware
tells you it's done later, and we don't want to introduce any latency in
the driver. The LAST flag isn't that useful in fact, but it can be used
with RTP to set the marker bit.

In previous discussions, using a buffer with payload 0 was not liked.
There might be codecs where an empty buffer is valid, who knows?
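For reference, the rule the GStreamer side follows can be reduced to pure logic over each capture-side dequeue result. A sketch (the V4L2_BUF_FLAG_LAST value mirrors videodev2.h; the actual VIDIOC_DQBUF loop and device plumbing are elided):

```c
#include <assert.h>
#include <errno.h>

/* Sketch of the capture-side drain rule described above: the drain is
 * complete either when DQBUF fails with EPIPE (after the last buffer was
 * dequeued) or when a dequeued buffer carries V4L2_BUF_FLAG_LAST.  Flag
 * value mirrors videodev2.h; real code would loop on VIDIOC_DQBUF. */
#define V4L2_BUF_FLAG_LAST 0x00100000u

static int drain_done(int dqbuf_ret, int dqbuf_errno, unsigned int buf_flags)
{
	if (dqbuf_ret < 0)
		return dqbuf_errno == EPIPE;	/* spec-compliant end marker */
	return (buf_flags & V4L2_BUF_FLAG_LAST) != 0;
}
```

Waiting for EPIPE works even on drivers that never set the LAST flag, which is exactly why GStreamer relies on it.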

> 
> > 
> > Regards,
> > 
> > Hans
> > 
> > > 
> > > With this change, it's possible to run a pipeline to completion:
> > > 
> > > gst-launch-1.0 videotestsrc num-buffers=10 ! v4l2fwhtenc !
> > > v4l2fwhtdec ! fakevideosink
> > > 
> > > Signed-off-by: Ezequiel Garcia 
> > > ---
> > >  drivers/media/platform/vicodec/vicodec-core.c | 80 ++-----
> > > 
> > >  1 file changed, 44 insertions(+), 36 deletions(-)
> > > 
> > > diff --git a/drivers/media/platform/vicodec/vicodec-core.c
> > > b/drivers/media/platform/vicodec/vicodec-core.c
> > > index a2c487b4b80d..4ed4dae10e30 100644
> > > --- a/drivers/media/platform/vicodec/vicodec-core.c
> > > +++ b/drivers/media/platform/vicodec/vicodec-core.c
> > > @@ -113,7 +113,7 @@ struct vicodec_ctx {
> > >   struct v4l2_ctrl_handler hdl;
> > >  
> > >   struct vb2_v4l2_buffer *last_src_buf;
> > > - struct vb2_v4l2_buffer *last_dst_buf;
> > > + struct vb2_v4l2_buffer  flush_buf;
> > >  
> > >   /* Source and destination queue data */
> > >   struct vicodec_q_data   q_data[2];
> > > @@ -220,6 +220,7 @@ static void device_run(void *priv)
> > >   struct vicodec_dev *dev = ctx->dev;
> > >   struct vb2_v4l2_buffer *src_buf, *dst_buf;
> > >   struct vicodec_q_data *q_out;
> > > + bool flushing;
> > >   u32 state;
> > >  
> > >   src_buf = v4l2_m2m_next_src_buf(ctx->fh.m2m_ctx);
> > > @@ -227,26 +228,36 @@ static void device_run(void *priv)
> > >   q_out = get_q_data(ctx, V4L2_BUF_TYPE_VIDEO_OUTPUT);
> > >  
> > >   state = VB2_BUF_STATE_DONE;
> > > - if (device_process(ctx, src_buf, dst_buf))
> > > +
> > > + flushing = (src_buf == &ctx->flush_buf);
> > > + if (!flushing && device_process(ctx, src_buf, dst_buf))
> > >   state = VB2_BUF_STATE_ERROR;
> > > - ctx->last_dst_buf = dst_buf;
> > >  
> > >   spin_lock(ctx->lock);
> > > - if (!ctx->comp_has_next_frame && src_buf == ctx->last_src_buf) {
> > > + if (!flushing) {
> > > + if (!ctx->comp_has_next_frame && src_buf == ctx->last_src_buf) {
> > > + dst_buf->flags |= V4L2_BUF_FLAG_LAST;
> > > + v4l2_event_queue_fh(&ctx->fh, &eos_event);
> > > + }
> > > +
> > > + if (ctx->is_enc) {
> > > + src_buf->sequence = q_out->sequence++;
> > > + src_buf = v4l2_m2m_src_buf_remove(ctx->fh.m2m_ctx);
> > > + v4l2_m2m_buf_done(src_buf, state);
> > > + } else if (vb2_get_plane_payload(&src_buf->vb2_buf, 0)
> > > + == ctx->cur_buf_offset) {
> > > + src_buf->sequence = q_out->sequence++;
> > > + src_buf = v4l2_m2m_src_buf_remove(ctx->fh.m2m_ctx);
> 

Re: [PATCH 2/2] vicodec: Implement spec-compliant stop command

2018-10-19 Thread Nicolas Dufresne
On Friday, October 19, 2018 at 09:28 +0200, Hans Verkuil wrote:
> On 10/18/2018 06:08 PM, Ezequiel Garcia wrote:
> > Set up a statically-allocated, dummy buffer to
> > be used as flush buffer, which signals
> > a encoding (or decoding) stop.
> > 
> > When the flush buffer is queued to the OUTPUT queue,
> > the driver will send an V4L2_EVENT_EOS event, and
> > mark the CAPTURE buffer with V4L2_BUF_FLAG_LAST.
> 
> I'm confused. What is the current driver doing wrong? It is already
> setting the LAST flag AFAIK. I don't see why a dummy buffer is
> needed.

I'm not sure of this patch either. It seems to trigger the legacy
"empty payload" buffer case. Driver should mark the last buffer, and
then following poll should return EPIPE. Maybe it's the later that
isn't respected ?

> 
> Regards,
> 
>   Hans
> 
> > 
> > With this change, it's possible to run a pipeline to completion:
> > 
> > gst-launch-1.0 videotestsrc num-buffers=10 ! v4l2fwhtenc !
> > v4l2fwhtdec ! fakevideosink
> > 
> > Signed-off-by: Ezequiel Garcia 
> > ---
> >  drivers/media/platform/vicodec/vicodec-core.c | 80 ++-
> > 
> >  1 file changed, 44 insertions(+), 36 deletions(-)
> > 
> > diff --git a/drivers/media/platform/vicodec/vicodec-core.c
> > b/drivers/media/platform/vicodec/vicodec-core.c
> > index a2c487b4b80d..4ed4dae10e30 100644
> > --- a/drivers/media/platform/vicodec/vicodec-core.c
> > +++ b/drivers/media/platform/vicodec/vicodec-core.c
> > @@ -113,7 +113,7 @@ struct vicodec_ctx {
> > struct v4l2_ctrl_handler hdl;
> >  
> > struct vb2_v4l2_buffer *last_src_buf;
> > -   struct vb2_v4l2_buffer *last_dst_buf;
> > +   struct vb2_v4l2_buffer  flush_buf;
> >  
> > /* Source and destination queue data */
> > struct vicodec_q_data   q_data[2];
> > @@ -220,6 +220,7 @@ static void device_run(void *priv)
> > struct vicodec_dev *dev = ctx->dev;
> > struct vb2_v4l2_buffer *src_buf, *dst_buf;
> > struct vicodec_q_data *q_out;
> > +   bool flushing;
> > u32 state;
> >  
> > src_buf = v4l2_m2m_next_src_buf(ctx->fh.m2m_ctx);
> > @@ -227,26 +228,36 @@ static void device_run(void *priv)
> > q_out = get_q_data(ctx, V4L2_BUF_TYPE_VIDEO_OUTPUT);
> >  
> > state = VB2_BUF_STATE_DONE;
> > -   if (device_process(ctx, src_buf, dst_buf))
> > +
> > +   flushing = (src_buf == &ctx->flush_buf);
> > +   if (!flushing && device_process(ctx, src_buf, dst_buf))
> > state = VB2_BUF_STATE_ERROR;
> > -   ctx->last_dst_buf = dst_buf;
> >  
> > spin_lock(ctx->lock);
> > -   if (!ctx->comp_has_next_frame && src_buf == ctx->last_src_buf) {
> > +   if (!flushing) {
> > +   if (!ctx->comp_has_next_frame && src_buf == ctx->last_src_buf) {
> > +   dst_buf->flags |= V4L2_BUF_FLAG_LAST;
> > +   v4l2_event_queue_fh(&ctx->fh, &eos_event);
> > +   }
> > +
> > +   if (ctx->is_enc) {
> > +   src_buf->sequence = q_out->sequence++;
> > +   src_buf = v4l2_m2m_src_buf_remove(ctx->fh.m2m_ctx);
> > +   v4l2_m2m_buf_done(src_buf, state);
> > +   } else if (vb2_get_plane_payload(&src_buf->vb2_buf, 0)
> > +   == ctx->cur_buf_offset) {
> > +   src_buf->sequence = q_out->sequence++;
> > +   src_buf = v4l2_m2m_src_buf_remove(ctx->fh.m2m_ctx);
> > +   v4l2_m2m_buf_done(src_buf, state);
> > +   ctx->cur_buf_offset = 0;
> > +   ctx->comp_has_next_frame = false;
> > +   }
> > +   } else {
> > +   src_buf = v4l2_m2m_src_buf_remove(ctx->fh.m2m_ctx);
> > +   vb2_set_plane_payload(&dst_buf->vb2_buf, 0, 0);
> > dst_buf->flags |= V4L2_BUF_FLAG_LAST;
> > v4l2_event_queue_fh(&ctx->fh, &eos_event);
> > }
> > -   if (ctx->is_enc) {
> > -   src_buf->sequence = q_out->sequence++;
> > -   src_buf = v4l2_m2m_src_buf_remove(ctx->fh.m2m_ctx);
> > -   v4l2_m2m_buf_done(src_buf, state);
> > -   } else if (vb2_get_plane_payload(&src_buf->vb2_buf, 0) == ctx->cur_buf_offset) {
> > -   src_buf->sequence = q_out->sequence++;
> > -   src_buf = v4l2_m2m_src_buf_remove(ctx->fh.m2m_ctx);
> > -   v4l2_m2m_buf_done(src_buf, state);
> > -   ctx->cur_buf_offset = 0;
> > -   ctx->comp_has_next_frame = false;
> > -   }
> > v4l2_m2m_buf_done(dst_buf, state);
> > ctx->comp_size = 0;
> > ctx->comp_magic_cnt = 0;
> > @@ -293,6 +304,8 @@ static int job_ready(void *priv)
> > src_buf = v4l2_m2m_next_src_buf(ctx->fh.m2m_ctx);
> > if (!src_buf)
> > return 0;
> > +   if (src_buf == &ctx->flush_buf)
> > +   return 1;
> > p_out = vb2_plane_vaddr(&src_buf->vb2_buf, 0);
> > sz = vb2_get_plane_payload(&src_buf->vb2_buf, 0);
> > p = p_out + ctx->cur_buf_offset;
> > @@ -770,21 +783,6 @@ static int vidioc_s_fmt_vid_out(struct file
> > *file, void *priv,
> > return ret;
> >  }
> >  
> > -static void 

Re: [RFC] Informal meeting during ELCE to discuss userspace support for stateless codecs

2018-10-09 Thread Nicolas Dufresne
On Monday, October 8, 2018 at 13:53 +0200, Hans Verkuil wrote:
> Hi all,
> 
> I would like to meet up somewhere during the ELCE to discuss userspace support
> for stateless (and perhaps stateful as well?) codecs.
> 
> It is also planned as a topic during the summit, but I would prefer to prepare
> for that in advance, esp. since I myself do not have any experience writing
> userspace SW for such devices.
> 
> Nicolas, it would be really great if you can participate in this meeting
> since you probably have the most experience with this by far.
> 
> Looking through the ELCE program I found two timeslots that are likely to work
> for most of us (because the topics in the program appear to be boring for us
> media types!):
> 
> Tuesday from 10:50-15:50
> 
> or:
> 
> Monday from 15:45 onward

Both work for me.

> 
> My guess is that we need 2-3 hours or so. Hard to predict.
> 
> The basic question that I would like to have answered is what the userspace
> component should look like? libv4l-like plugin or a library that userspace can
> link with? Do we want more general support for stateful codecs as well that 
> deals
> with resolution changes and the more complex parts of the codec API?
> 
> I've mailed this directly to those that I expect are most interested in this,
> but if someone want to join in let me know.
> 
> I want to keep the group small though, so you need to bring relevant 
> experience
> to the table.
> 
> Regards,
> 
>   Hans



Re: [RFC] V4L2_PIX_FMT_MJPEG vs V4L2_PIX_FMT_JPEG

2018-10-01 Thread Nicolas Dufresne
Hello Hans,

On Monday, October 1, 2018 at 10:43 +0200, Hans Verkuil wrote:
> It turns out that we have both JPEG and Motion-JPEG pixel formats defined.
> 
> Furthermore, some drivers support one, some the other and some both.
> 
> These pixelformats both mean the same.
> 
> I propose that we settle on JPEG (since it seems to be used most often) and
> add JPEG support to those drivers that currently only use MJPEG.

Thanks for looking into this. As per the GStreamer code, I see 3 aliases
for JPEG: V4L2_PIX_FMT_MJPEG/JPEG/PJPG. I don't know the context; this
code was written before I knew GStreamer existed. It's possible there is
a subtle difference, I have never looked at it, but clearly all our JPEG
decoders handle these as being the same.

https://cgit.freedesktop.org/gstreamer/gst-plugins-good/tree/sys/v4l2/gstv4l2object.c#n956
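A minimal sketch of what that aliasing looks like on the userspace side (the fourcc macro and pixel format values are reconstructed from videodev2.h; the single caps media type is the point):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Sketch of the aliasing: all three fourccs end up treated as plain JPEG.
 * The fourcc macro and values mirror videodev2.h; PJPG is the UVC
 * "progressive JPEG" variant GStreamer also maps to image/jpeg. */
#define v4l2_fourcc(a, b, c, d) \
	((uint32_t)(a) | ((uint32_t)(b) << 8) | \
	 ((uint32_t)(c) << 16) | ((uint32_t)(d) << 24))

#define V4L2_PIX_FMT_JPEG  v4l2_fourcc('J', 'P', 'E', 'G')
#define V4L2_PIX_FMT_MJPEG v4l2_fourcc('M', 'J', 'P', 'G')
#define V4L2_PIX_FMT_PJPG  v4l2_fourcc('P', 'J', 'P', 'G')

static const char *fourcc_to_media_type(uint32_t fcc)
{
	switch (fcc) {
	case V4L2_PIX_FMT_JPEG:
	case V4L2_PIX_FMT_MJPEG:
	case V4L2_PIX_FMT_PJPG:
		return "image/jpeg";	/* one caps type for all aliases */
	default:
		return NULL;		/* not a JPEG variant */
	}
}
```

Settling on one fourcc in the kernel would let this collapse to a single case without breaking existing userspace that already folds them together.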

> 
> We also need to update the V4L2_PIX_FMT_JPEG documentation since it just says
> TBD:
> 
> https://www.linuxtv.org/downloads/v4l-dvb-apis-new/uapi/v4l/pixfmt-compressed.html
> 
> $ git grep -l V4L2_PIX_FMT_MJPEG
> drivers/media/pci/meye/meye.c
> drivers/media/pci/solo6x10/solo6x10-v4l2-enc.c
> drivers/media/platform/sti/delta/delta-cfg.h
> drivers/media/platform/sti/delta/delta-mjpeg-dec.c
> drivers/media/usb/cpia2/cpia2_v4l.c
> drivers/media/usb/go7007/go7007-driver.c
> drivers/media/usb/go7007/go7007-fw.c
> drivers/media/usb/go7007/go7007-v4l2.c
> drivers/media/usb/s2255/s2255drv.c
> drivers/media/usb/uvc/uvc_driver.c
> drivers/staging/media/zoran/zoran_driver.c
> drivers/staging/vc04_services/bcm2835-camera/bcm2835-camera.c
> drivers/usb/gadget/function/uvc_v4l2.c
> 
> It looks like s2255 and cpia2 support both already, so that would leave
> 8 drivers that need to be modified, uvc being the most important of the
> lot.
> 
> Any comments?
> 
> Regards,
> 
>   Hans



Re: [RFP] Which V4L2 ioctls could be replaced by better versions?

2018-09-20 Thread Nicolas Dufresne
On Thursday, September 20, 2018 at 16:42 +0200, Hans Verkuil wrote:
> Some parts of the V4L2 API are awkward to use and I think it would be
> a good idea to look at possible candidates for that.
> 
> Examples are the ioctls that use struct v4l2_buffer: the multiplanar support 
> is
> really horrible, and writing code to support both single and multiplanar is 
> hard.
> We are also running out of fields and the timeval isn't y2038 compliant.
> 
> A proof-of-concept is here:
> 
> https://git.linuxtv.org/hverkuil/media_tree.git/commit/?h=v4l2-buffer=a95549df06d9900f3559afdbb9da06bd4b22d1f3
> 
> It's a bit old, but it gives a good impression of what I have in mind.
> 
> Another candidate is 
> VIDIOC_SUBDEV_ENUM_FRAME_INTERVAL/VIDIOC_ENUM_FRAMEINTERVALS:
> expressing frame intervals as a fraction is really awkward and so is the fact
> that the subdev and 'normal' ioctls are not the same.
> 
> Would using nanoseconds or something along those lines for intervals be 
> better?

This one is not a good idea, because you cannot represent well-known
rates used a lot in the field, like 60000/1001, also known as 59.94 Hz.
You could end up with drift issues.

For me, what is the most difficult with this API is the fact that it
uses frame interval (duration) instead of frame rate.
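To make the drift concern concrete, here is a small arithmetic sketch (plain integer math, no V4L2 API involved):

```c
#include <assert.h>
#include <stdint.h>

/* Illustration of the drift concern: 59.94 Hz is exactly 60000/1001
 * frames per second.  Storing the frame interval as integer nanoseconds
 * truncates 1/3 ns per frame, and the error accumulates the longer the
 * stream runs. */
static int64_t drift_ns(int64_t frames)
{
	const int64_t num = 1001, den = 60000;	/* interval = 1001/60000 s */
	int64_t interval_ns = num * 1000000000LL / den;	/* 16683333 ns, truncated */
	int64_t exact_ns = frames * num * 1000000000LL / den;

	return exact_ns - frames * interval_ns;
}
```

Over roughly one hour of 59.94 Hz video (215784 frames) the truncated interval already drifts about 72 microseconds, and unlike an exact fraction the error never stops growing.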

> 
> I have similar concerns with VIDIOC_SUBDEV_ENUM_FRAME_SIZE where there is no
> stepwise option, making it different from VIDIOC_ENUM_FRAMESIZES. But it 
> should
> be possible to extend VIDIOC_SUBDEV_ENUM_FRAME_SIZE with stepwise support, I
> think.

One of the things to fix, maybe it's doable now, is the differentiation
between allocation size and display size. Pretty much all video capture
code assumes this is the display size and ignores the selection API. This
should be documented explicitly.

In fact, the display/allocation distinction isn't very nice, as both
pieces of information overlap in the same structures. As an example, you
call S_FMT with the display size you want, and it returns you an
allocation size (which, yes, can be smaller, because we always round to
the middle).

> 
> Do we have more ioctls that could use a refresh? S/G/TRY_FMT perhaps, again in
> order to improve single vs multiplanar handling.

Yes, but that would fall into a complete redesign, I guess. The buffer
allocation scheme is very inflexible. You can't have buffers of two
dimensions allocated at the same time for the same queue. Worse, you
cannot keep even one buffer as your scanout buffer while reallocating
new buffers; this is not permitted by the framework (in software). As a
side effect, there is no way to optimize resolution changes: you even
have to copy your scanout buffer on the CPU to free it in order to
proceed. Resolution changes are thus painfully slow, by design.

You also cannot switch from internal buffers to imported buffers
easily (in some cases, like an encoder, you cannot do that without
flushing the encoder state).

> 
> It is not the intention to come to a full design, it's more to test the waters
> so to speak.
> 
> Regards,
> 
>   Hans




Re: [PATCH v3 0/5] Fix OV5640 exposure & gain

2018-09-14 Thread Nicolas Dufresne
Interesting, I just hit this problem yesterday. Same module as Steve,  
with MIPI CSI-2 OV5640 (on Sabre Lite).

Tested-By: Nicolas Dufresne 

On Tuesday, September 11, 2018 at 15:48 +0200, Hugues Fruchet wrote:
> This patch serie fixes some problems around exposure & gain in OV5640
> driver.
> 
> The 4th patch about autocontrols requires also a fix in v4l2-ctrls.c:
> 
> https://www.mail-archive.com/linux-media@vger.kernel.org/msg133164.html
> 
> Here is the test procedure used for exposure & gain controls check:
> 1) Preview in background
> $> gst-launch-1.0 v4l2src ! "video/x-raw, width=640, height=480" !
> queue ! waylandsink -e &
> 2) Check gain & exposure values
> $> v4l2-ctl --all | grep -e exposure -e gain | grep "(int)"
>exposure (int): min=0 max=65535 step=1
> default=0 value=330 flags=inactive, volatile
>gain (int): min=0 max=1023 step=1
> default=0 value=19 flags=inactive, volatile
> 3) Put finger in front of camera and check that gain/exposure values
> are changing:
> $> v4l2-ctl --all | grep -e exposure -e gain | grep "(int)"
>exposure (int): min=0 max=65535 step=1
> default=0 value=660 flags=inactive, volatile
>gain (int): min=0 max=1023 step=1
> default=0 value=37 flags=inactive, volatile
> 4) switch to manual mode, image exposition must not change
> $> v4l2-ctl --set-ctrl=gain_automatic=0
> $> v4l2-ctl --set-ctrl=auto_exposure=1
> Note the "1" for manual exposure.
> 
> 5) Check current gain/exposure values:
> $> v4l2-ctl --all | grep -e exposure -e gain | grep "(int)"
>exposure (int): min=0 max=65535 step=1
> default=0 value=330
>gain (int): min=0 max=1023 step=1
> default=0 value=20
> 
> 6) Put finger behind camera and check that gain/exposure values are
> NOT changing:
> $> v4l2-ctl --all | grep -e exposure -e gain | grep "(int)"
>exposure (int): min=0 max=65535 step=1
> default=0 value=330
>gain (int): min=0 max=1023 step=1
> default=0 value=20
> 7) Update exposure, check that it is well changed on display and that
> same value is returned:
> $> v4l2-ctl --set-ctrl=exposure=100
> $> v4l2-ctl --get-ctrl=exposure
> exposure: 100
> 
> 8) Update gain; check that the change is visible on the display and
> that the same value is read back:
> $> v4l2-ctl --set-ctrl=gain=10
> $> v4l2-ctl --get-ctrl=gain
> gain: 10
> 
> 9) Switch back to auto gain/exposure; verify that the image is correct
> and that the values returned are correct:
> $> v4l2-ctl --set-ctrl=gain_automatic=1
> $> v4l2-ctl --set-ctrl=auto_exposure=0
> $> v4l2-ctl --all | grep -e exposure -e gain | grep "(int)"
>exposure (int): min=0 max=65535 step=1
> default=0 value=330 flags=inactive, volatile
>gain (int): min=0 max=1023 step=1
> default=0 value=22 flags=inactive, volatile
> Note the "0" for auto exposure.
> 
> ===
> = history =
> ===
> version 3:
>   - Change patch 5/5 by removing set_mode() orig_mode parameter as
> per Jacopo's suggestion:
> https://www.spinics.net/lists/linux-media/msg139457.html
> 
> version 2:
>   - Fix patch 3/5 commit comment and rename binning function as per
> Jacopo's suggestion:
> 
> https://www.mail-archive.com/linux-media@vger.kernel.org/msg133272.html
> 
> Hugues Fruchet (5):
>   media: ov5640: fix exposure regression
>   media: ov5640: fix auto gain & exposure when changing mode
>   media: ov5640: fix wrong binning value in exposure calculation
>   media: ov5640: fix auto controls values when switching to manual
> mode
>   media: ov5640: fix restore of last mode set
> 
>  drivers/media/i2c/ov5640.c | 128 ++-
> --
>  1 file changed, 73 insertions(+), 55 deletions(-)
> 


signature.asc
Description: This is a digitally signed message part


Re: [PATCH 2/2] vicodec: set state->info before calling the encode/decode funcs

2018-09-10 Thread Nicolas Dufresne
Le lundi 10 septembre 2018 à 12:37 -0300, Ezequiel Garcia a écrit :
> On Mon, 2018-09-10 at 17:00 +0200, Hans Verkuil wrote:
> > From: Hans Verkuil 
> > 
> > state->info was NULL since I completely forgot to set state->info.
> > Oops.
> > 
> > Reported-by: Ezequiel Garcia 
> > Signed-off-by: Hans Verkuil 
> 
> For both patches:
> 
> Tested-by: Ezequiel Garcia 
> 
> With these changes, this GStreamer pipeline no longer
> crashes:
> 
> gst-launch-1.0 -v videotestsrc num-buffers=30 ! video/x-
> raw,width=1280,height=720 ! v4l2fwhtenc capture-io-mode=mmap output-
> io-mode=mmap ! v4l2fwhtdec
> capture-io-mode=mmap output-io-mode=mmap ! fakesink
> 
> A few things:
> 
>   * You now need to mark "[PATCH] vicodec: fix sparse warning" as
> invalid.
>   * v4l2fwhtenc/v4l2fwhtdec elements are not upstream yet.
>   * Gstreamer doesn't end properly; and it seems to negotiate

Is the driver missing the CMD_STOP implementation? (draining flow)

> different sizes for encoded and decoded unless explicitly set.
> 
> Thanks!
> 
> >  drivers/media/platform/vicodec/vicodec-core.c | 11 +++
> >  1 file changed, 7 insertions(+), 4 deletions(-)
> > 
> > diff --git a/drivers/media/platform/vicodec/vicodec-core.c
> > b/drivers/media/platform/vicodec/vicodec-core.c
> > index fdd77441a47b..5d42a8414283 100644
> > --- a/drivers/media/platform/vicodec/vicodec-core.c
> > +++ b/drivers/media/platform/vicodec/vicodec-core.c
> > @@ -176,12 +176,15 @@ static int device_process(struct vicodec_ctx
> > *ctx,
> > }
> >  
> > if (ctx->is_enc) {
> > -   unsigned int size = v4l2_fwht_encode(state, p_in,
> > p_out);
> > -
> > -   vb2_set_plane_payload(_vb->vb2_buf, 0, size);
> > +   state->info = q_out->info;
> > +   ret = v4l2_fwht_encode(state, p_in, p_out);
> > +   if (ret < 0)
> > +   return ret;
> > +   vb2_set_plane_payload(_vb->vb2_buf, 0, ret);
> > } else {
> > +   state->info = q_cap->info;
> > ret = v4l2_fwht_decode(state, p_in, p_out);
> > -   if (ret)
> > +   if (ret < 0)
> > return ret;
> > vb2_set_plane_payload(_vb->vb2_buf, 0, q_cap-
> > >sizeimage);
> > }
> 
> 




Re: [RFP] Stateless Codec Userspace Support

2018-09-07 Thread Nicolas Dufresne
Le vendredi 07 septembre 2018 à 16:34 +0200, Hans Verkuil a écrit :
> Support for stateless codecs and Request API will hopefully be merged
> for
> 4.20, and the next step is to discuss how to organize the userspace
> support.
> 
> Hopefully by the time the media summit starts we'll have some better
> ideas
> of what we want in this area.
> 
> Some userspace support is available from bootlin for the cedrus
> driver:
> 
>   - v4l2-request-test, that has a bunch of sample frames for various
> codecs and will rely solely on the kernel request api (and DRM
> for
> the display part) to test and bringup a particular driver
> https://github.com/bootlin/v4l2-request-test
> 
>   - libva-v4l2-request, that is a libva implementation using the
> request API
> https://github.com/bootlin/libva-v4l2-request
> 
> But this is more geared towards testing and less a 'proper'
> implementation.

Considering that libva is largely supported across media players,
browsers and media frameworks, the VA driver approach seems like the
most promising solution for short-term usage. This way, we can share
the userspace code across various codecs and also across the V4L2 and
DRM subsystems.

That being said, a lot of userspace will need modification. Indeed, VA
does expose some of the DRM details for the zero-copy path (like DMABuf
export). We can emulate this support, or simply enhance VA with its own
V4L2-specific bits. It's too early to tell, and I'm also not deep
enough into the VA driver interface to give guidelines.

Another thing that most userspace relies on is the presence of VPP
functions. I noticed some assembly code that does detiling in that
libva driver; I bet this is related to not having enabled some sort of
HW VPP yet on the Allwinner SoC. Overall, this does not seem like a
problem: the m2m interface is well suited for that, and a VA driver can
make use of it. What will be needed is a better way to figure out what
these VPPs can do, things like CSC, deinterlacing, scaling, rotation,
etc. Just like in any other library, we need to be able to announce
which "functions" are supported.

Putting my GStreamer hat back on, I'd very much like to have native
support for these stateless CODECs, as this would give a bit more
flexibility, but this isn't something that one can write in a day
(especially if it happens in my spare time). Still, I'm looking into
this in order to come up with a library, a bit like the existing
GStreamer bitstream parser library, that could handle reference picture
management and lost frame concealment (a currently missing feature in
gstreamer-vaapi).

I think the most straightforward place to add direct support (without
the VA abstraction) would be FFmpeg. If I understand correctly, they
already share most of the decoder layer needed between their software
decoders and the VAAPI one.

One place that hasn't been mentioned, but seems rather important, would
be Android. Implementing a generic OMX component for the Android OMX
stack would get quite some traction, as the CODEC integration work
would become largely kernel work. Having that handy would help convince
vendors that the V4L2 framework is worth it. Making the OMX stack in
Android as vendor-agnostic as possible also helps the Android folks in
eventually getting rid of OMX; the OMX specification is mostly
abandoned, with no one left to review new extensions.

> 
> I don't know yet how much time to reserve for this discussion. It's a
> bit too early for that. I would expect an hour minimum, likely more.
> 
> Regards,
> 
>   Hans




Re: [RFC] Request API and V4L2 capabilities

2018-08-23 Thread Nicolas Dufresne
Le jeudi 23 août 2018 à 16:31 +0200, Hans Verkuil a écrit :
> > I propose adding these capabilities:
> > 
> > #define V4L2_BUF_CAP_HAS_REQUESTS 0x0001
> > #define V4L2_BUF_CAP_REQUIRES_REQUESTS0x0002
> > #define V4L2_BUF_CAP_HAS_MMAP 0x0100
> > #define V4L2_BUF_CAP_HAS_USERPTR  0x0200
> > #define V4L2_BUF_CAP_HAS_DMABUF   0x0400
> 
> I substituted SUPPORTS for HAS and dropped the REQUIRES_REQUESTS capability.
> As Tomasz mentioned, technically (at least for stateless codecs) you could
> handle just one frame at a time without using requests. It's very inefficient,
> but it would work.

I thought the request was providing a data structure to refer back to
the frames, so that each codec driver doesn't have to implement one. Do
you mean that the framework will implicitly create requests in that
mode, or simply that there is no such helper?




Re: [RFC] Request API and V4L2 capabilities

2018-08-23 Thread Nicolas Dufresne
Le jeudi 23 août 2018 à 10:05 +0200, Paul Kocialkowski a écrit :
> Hi,
> 
> On Wed, 2018-08-22 at 14:33 -0300, Ezequiel Garcia wrote:
> > On Wed, 2018-08-22 at 16:10 +0200, Paul Kocialkowski wrote:
> > > Hi,
> > > 
> > > On Tue, 2018-08-21 at 17:52 +0900, Tomasz Figa wrote:
> > > > Hi Hans, Paul,
> > > > 
> > > > On Mon, Aug 6, 2018 at 6:29 PM Paul Kocialkowski
> > > >  wrote:
> > > > > 
> > > > > On Mon, 2018-08-06 at 11:23 +0200, Hans Verkuil wrote:
> > > > > > On 08/06/2018 11:13 AM, Paul Kocialkowski wrote:
> > > > > > > Hi,
> > > > > > > 
> > > > > > > On Mon, 2018-08-06 at 10:32 +0200, Hans Verkuil wrote:
> > > > > > > > On 08/06/2018 10:16 AM, Paul Kocialkowski wrote:
> > > > > > > > > On Sat, 2018-08-04 at 15:50 +0200, Hans Verkuil wrote:
> > > > > > > > > > Regarding point 3: I think this should be documented next 
> > > > > > > > > > to the pixel format. I.e.
> > > > > > > > > > the MPEG-2 Slice format used by the stateless cedrus codec 
> > > > > > > > > > requires the request API
> > > > > > > > > > and that two MPEG-2 controls (slice params and quantization 
> > > > > > > > > > matrices) must be present
> > > > > > > > > > in each request.
> > > > > > > > > > 
> > > > > > > > > > I am not sure a control flag (e.g. 
> > > > > > > > > > V4L2_CTRL_FLAG_REQUIRED_IN_REQ) is needed here.
> > > > > > > > > > It's really implied by the fact that you use a stateless 
> > > > > > > > > > codec. It doesn't help
> > > > > > > > > > generic applications like v4l2-ctl or qv4l2 either since in 
> > > > > > > > > > order to support
> > > > > > > > > > stateless codecs they will have to know about the details 
> > > > > > > > > > of these controls anyway.
> > > > > > > > > > 
> > > > > > > > > > So I am inclined to say that it is not necessary to expose 
> > > > > > > > > > this information in
> > > > > > > > > > the API, but it has to be documented together with the 
> > > > > > > > > > pixel format documentation.
> > > > > > > > > 
> > > > > > > > > I think this is affected by considerations about codec 
> > > > > > > > > profile/level
> > > > > > > > > support. More specifically, some controls will only be 
> > > > > > > > > required for
> > > > > > > > > supporting advanced codec profiles/levels, so they can only be
> > > > > > > > > explicitly marked with appropriate flags by the driver when 
> > > > > > > > > the target
> > > > > > > > > profile/level is known. And I don't think it would be sane 
> > > > > > > > > for userspace
> > > > > > > > > to explicitly set what profile/level it's aiming at. As a 
> > > > > > > > > result, I
> > > > > > > > > don't think we can explicitly mark controls as required or 
> > > > > > > > > optional.
> > > > 
> > > > I'm not sure this is entirely true. The hardware may need to be
> > > > explicitly told what profile the video is. It may even not be the
> > > > hardware, but the driver itself too, given that the profile may imply
> > > > the CAPTURE pixel format, e.g. for VP9 profiles:
> > > > 
> > > > profile 0
> > > > color depth: 8 bit/sample, chroma subsampling: 4:2:0
> > > > profile 1
> > > > color depth: 8 bit, chroma subsampling: 4:2:0, 4:2:2, 4:4:4
> > > > profile 2
> > > > color depth: 10–12 bit, chroma subsampling: 4:2:0
> > > > profile 3
> > > > color depth: 10–12 bit, chroma subsampling: 4:2:0, 4:2:2, 4:4:4
> > > > 
> > > > (reference: https://en.wikipedia.org/wiki/VP9#Profiles)
> > > 
> > > I think it would be fair to expect userspace to select the right
> > > destination format (and maybe have the driver error if there's a
> > > mismatch with the meta-data) instead of having the driver somewhat
> > > expose what format should be used.
> > > 
> > > But maybe this would be an API violation, since all the enumerated
> > > formats are probably supposed to be selectable?
> > > 
> > > We could also look at it the other way round and consider that selecting
> > > an exposed format is always legit, but that it implies passing a
> > > bitstream that matches it or the driver will error (because of an
> > > invalid bitstream passed, not because of a "wrong" selected format).
> > > 
> > 
> > The API requires the user to negotiate via TRY_FMT/S_FMT. The driver
> > usually does not return an error on invalid formats, and simply returns
> > a format it can work with. I think the kernel-user contract has to
> > guarantee that if the driver accepted a given format, it won't fail to
> > encode or decode.
> 
> Well, the issue here is that in order to correctly enumerate the
> formats, the driver needs to be aware of:
> 1. in what destination format the bitstream data is decoded to;

This is covered by the stateful specification patch if I remember
correctly. So the driver, if it's multi-format, will first return all
possible formats and, later on, will return the proper subset. So let's
take an encoder: after setting the capture format, the enumeration of
the raw formats could then be limited to what is supported for this
output. For an H264 encoder, what could also affect this 

Re: [RFC] Request API and V4L2 capabilities

2018-08-22 Thread Nicolas Dufresne
Le mercredi 22 août 2018 à 16:52 +0200, Paul Kocialkowski a écrit :
> Hi,
> 
> On Wed, 2018-08-15 at 09:57 -0400, Nicolas Dufresne wrote:
> > Le lundi 06 août 2018 à 10:16 +0200, Paul Kocialkowski a écrit :
> > > Hi Hans and all,
> > > 
> > > On Sat, 2018-08-04 at 15:50 +0200, Hans Verkuil wrote:
> > > > Hi all,
> > > > 
> > > > While the Request API patch series addresses all the core API issues, 
> > > > there
> > > > are some high-level considerations as well:
> > > > 
> > > > 1) How can the application tell that the Request API is supported and 
> > > > for
> > > >which buffer types (capture/output) and pixel formats?
> > > > 
> > > > 2) How can the application tell if the Request API is required as 
> > > > opposed to being
> > > >optional?
> > > > 
> > > > 3) Some controls may be required in each request, how to let userspace 
> > > > know this?
> > > >Is it even necessary to inform userspace?
> > > > 
> > > > 4) (For bonus points): How to let the application know which streaming 
> > > > I/O modes
> > > >are available? That's never been possible before, but it would be 
> > > > very nice
> > > >indeed if that's made explicit.
> > > 
> > > Thanks for bringing up these considerations and questions, which perhaps
> > > cover the last missing bits for streamlined use of the request API by
> > > userspace. I would suggest another item, related to 3):
> > > 
> > > 5) How can applications tell whether the driver supports a specific
> > > codec profile/level, not only for encoding but also for decoding? It's
> > > common for low-end embedded hardware to not support the most advanced
> > > profiles (e.g. H264 high profile).
> > 
> > Hi Paul, after some discussion with Philip, he sent a proposal patch
> > that enables profile/level extended CID support to decoders too. The
> > control is made read-only, the point is not really the CID get/set but
> > that the controls allow enumerating the supported values. This seems
> > quite straightforward and easy to use.
> > 
This enumeration is already provided this way by some of the existing
stateful encoders.
> 
> Sounds great, thanks for looking into it! I looked for the patch in the
> list, but couldn't find it off-hand. Do you have a link to it? I would
> like to bind it to the Cedrus VPU driver eventually.

I believe it is that one:
https://patchwork.kernel.org/patch/10494379/



> 
> Cheers,
> 
> Paul
> 
> > > > Since the Request API associates data with frame buffers it makes sense 
> > > > to expose
> > > > this as a new capability field in struct v4l2_requestbuffers and struct 
> > > > v4l2_create_buffers.
> > > > 
> > > > The first struct has 2 reserved fields, the second has 8, so it's not a 
> > > > problem to
> > > > take one for a capability field. Both structs also have a buffer type, 
> > > > so we know
> > > > if this is requested for a capture or output buffer type. The pixel 
> > > > format is known
> > > > in the driver, so HAS/REQUIRES_REQUESTS can be set based on that. I 
> > > > doubt we'll have
> > > > drivers where the request caps would actually depend on the pixel 
> > > > format, but it
> > > > theoretically possible. For both ioctls you can call them with count=0 
> > > > at the start
> > > > of the application. REQBUFS has of course the side-effect of deleting 
> > > > all buffers,
> > > > but at the start of your application you don't have any yet. 
> > > > CREATE_BUFS has no
> > > > side-effects.
> > > 
> > > My initial thoughts on this point were to have flags exposed in
> > > v4l2_capability, but now that you're saying it, it does make sense for
> > > the flag to be associated with a buffer rather than the global device.
> > > 
> > > In addition, I've heard of cases (IIRC it was some Rockchip platforms)
> > > where the platform has both stateless and stateful VPUs (I think it was
> > > stateless up to H264 and stateful for H265). This would allow supporting
> > > these two hardware blocks under the same video device (if that makes
> > > sense anyway). And even if there's no immediate need, it's always good
> > > to have this level of granularity (with little drawbacks).
>

Re: [RFC] Request API and V4L2 capabilities

2018-08-15 Thread Nicolas Dufresne
Le mercredi 15 août 2018 à 09:11 -0300, Mauro Carvalho Chehab a écrit :
> Em Sat, 4 Aug 2018 15:50:04 +0200
> Hans Verkuil  escreveu:
> 
> > Hi all,
> > 
> > While the Request API patch series addresses all the core API
> > issues, there
> > are some high-level considerations as well:
> > 
> > 1) How can the application tell that the Request API is supported
> > and for
> >which buffer types (capture/output) and pixel formats?
> > 
> > 2) How can the application tell if the Request API is required as
> > opposed to being
> >optional?
> 
> Huh? Why would it be mandatory?
> 
> > 
> > 3) Some controls may be required in each request, how to let
> > userspace know this?
> >Is it even necessary to inform userspace?
> 
> Again, why would it need to have a set of mandatory controls for
> requests
> to work? If this is really required,  it should have a way to send
> such
> list to userspace.
> 
> > 
> > 4) (For bonus points): How to let the application know which
> > streaming I/O modes
> >are available? That's never been possible before, but it would
> > be very nice
> >indeed if that's made explicit.
> > 
> > Since the Request API associates data with frame buffers it makes
> > sense to expose
> > this as a new capability field in struct v4l2_requestbuffers and
> > struct v4l2_create_buffers.
> > 
> > The first struct has 2 reserved fields, the second has 8, so it's
> > not a problem to
> > take one for a capability field. Both structs also have a buffer
> > type, so we know
> > if this is requested for a capture or output buffer type. The pixel
> > format is known
> > in the driver, so HAS/REQUIRES_REQUESTS can be set based on that. I
> > doubt we'll have
> > drivers where the request caps would actually depend on the pixel
> > format, but it
> > theoretically possible. For both ioctls you can call them with
> > count=0 at the start
> > of the application. REQBUFS has of course the side-effect of
> > deleting all buffers,
> > but at the start of your application you don't have any yet.
> > CREATE_BUFS has no
> > side-effects.
> > 
> > I propose adding these capabilities:
> > 
> > #define V4L2_BUF_CAP_HAS_REQUESTS   0x0001
> 
> I'm OK with that.
> 
> > #define V4L2_BUF_CAP_REQUIRES_REQUESTS  0x0002
> 
> But I'm not ok with breaking even more userspace support by forcing 
> requests.

This is not breaking userspace, not with regard to stateless CODECs.
Stateless CODECs use a set of new pixel formats specifically designed
for driving an accelerator rather than a full CODEC.

The controls are needed to provide state to the accelerator, so the
accelerator knows what to do. And because of the nature of CODECs,
queuing multiple buffers is strictly needed. Without requests, there is
no way to figure out which CID changes go with which picture.

There is no way an existing userspace will break, as there is no way it
can support these drivers: a) the formats aren't defined yet, and b)
the CIDs didn't exist.

> 
> > #define V4L2_BUF_CAP_HAS_MMAP   0x0100
> > #define V4L2_BUF_CAP_HAS_USERPTR0x0200
> > #define V4L2_BUF_CAP_HAS_DMABUF 0x0400
> 
> Those sounds ok to me too.
> 
> > 
> > If REQUIRES_REQUESTS is set, then HAS_REQUESTS is also set.
> > 
> > At this time I think that REQUIRES_REQUESTS would only need to be
> > set for the
> > output queue of stateless codecs.
> 
> Same as before: I don't see the need of support a request-only
> driver.
> 
> > 
> > If capabilities is 0, then it's from an old kernel and all you know
> > is that
> > requests are certainly not supported, and that MMAP is supported.
> > Whether USERPTR
> > or DMABUF are supported isn't known in that case (just try it :-)
> > ).
> > 
> > Strictly speaking we do not need these HAS_MMAP/USERPTR/DMABUF
> > caps, but it is very
> > easy to add if we create a new capability field anyway, and it has
> > always annoyed
> > the hell out of me that we didn't have a good way to let userspace
> > know what
> > streaming I/O modes we support. And with vb2 it's easy to
> > implement.
> 
> Yeah, that sounds a bonus to me too.
> 
> > Regarding point 3: I think this should be documented next to the
> > pixel format. I.e.
> > the MPEG-2 Slice format used by the stateless cedrus codec requires
> > the request API
> > and that two MPEG-2 controls (slice params and quantization
> > matrices) must be present
> > in each request.
> 
> Makes sense to document with the pixel format...
> 
> > I am not sure a control flag (e.g. V4L2_CTRL_FLAG_REQUIRED_IN_REQ)
> > is needed here.
> 
> but it sounds worth to also have a flag.
> 
> > It's really implied by the fact that you use a stateless codec. It
> > doesn't help
> > generic applications like v4l2-ctl or qv4l2 either since in order
> > to support
> > stateless codecs they will have to know about the details of these
> > controls anyway.
> 
> Yeah, but they could skip enumerating those ioctls if they see one
> marked with
> V4L2_CTRL_FLAG_REQUIRED_IN_REQ 

Re: [RFC] Request API and V4L2 capabilities

2018-08-15 Thread Nicolas Dufresne
Le samedi 04 août 2018 à 15:50 +0200, Hans Verkuil a écrit :
> Hi all,
> 
> While the Request API patch series addresses all the core API issues, there
> are some high-level considerations as well:
> 
> 1) How can the application tell that the Request API is supported and for
>which buffer types (capture/output) and pixel formats?
> 
> 2) How can the application tell if the Request API is required as opposed to 
> being
>optional?
> 
> 3) Some controls may be required in each request, how to let userspace know 
> this?
>Is it even necessary to inform userspace?

For stateless codecs, there is a very strict set of controls that must
be supported and filled; the data format pretty much dictates this.

For complex cameras and video-transformation m2m devices, there is
indeed a gap. Duplicating the formats for this case does not seem like
the right approach.

> 
> 4) (For bonus points): How to let the application know which streaming I/O 
> modes
>are available? That's never been possible before, but it would be very nice
>indeed if that's made explicit.

In GStreamer, we call REQBUFS(type, count=0) for each type we support.
This call should never fail unless the type is not supported. We build
a list of supported I/O modes this way. It's also a no-op, because we
haven't allocated any buffers yet.

> 
> Since the Request API associates data with frame buffers it makes sense to 
> expose
> this as a new capability field in struct v4l2_requestbuffers and struct 
> v4l2_create_buffers.
> 
> The first struct has 2 reserved fields, the second has 8, so it's not a 
> problem to
> take one for a capability field. Both structs also have a buffer type, so we 
> know
> if this is requested for a capture or output buffer type. The pixel format is 
> known
> in the driver, so HAS/REQUIRES_REQUESTS can be set based on that. I doubt 
> we'll have
> drivers where the request caps would actually depend on the pixel format, but 
> it
> theoretically possible. For both ioctls you can call them with count=0 at the 
> start
> of the application. REQBUFS has of course the side-effect of deleting all 
> buffers,
> but at the start of your application you don't have any yet. CREATE_BUFS has 
> no
> side-effects.
> 
> I propose adding these capabilities:
> 
> #define V4L2_BUF_CAP_HAS_REQUESTS 0x0001
> #define V4L2_BUF_CAP_REQUIRES_REQUESTS0x0002
> #define V4L2_BUF_CAP_HAS_MMAP 0x0100
> #define V4L2_BUF_CAP_HAS_USERPTR  0x0200
> #define V4L2_BUF_CAP_HAS_DMABUF   0x0400

Looks similar to the bit map we create inside GStreamer using the
described technique. Though we also add HAS_CREATE_BUFS to the lot.

My main concern is that in userspace like GStreamer, the difficulty is
to sort the drivers we support from the ones we don't. So if we don't
support requests yet, we would like to detect this early. As CODECs
don't really have an initial format, I believe that before S_FMT any
kind of call to REQBUFS might fail at the moment, so detection would
come very late.

Though, be aware this is a totally artificial issue in the short term,
since stateless CODECs use dedicated formats.

> 
> If REQUIRES_REQUESTS is set, then HAS_REQUESTS is also set.
> 
> At this time I think that REQUIRES_REQUESTS would only need to be set for the
> output queue of stateless codecs.
> 
> If capabilities is 0, then it's from an old kernel and all you know is that
> requests are certainly not supported, and that MMAP is supported. Whether 
> USERPTR
> or DMABUF are supported isn't known in that case (just try it :-) ).

Just a clarification: the documentation is pretty clear that MMAP is
supported if the device capabilities have STREAMING set.

> 
> Strictly speaking we do not need these HAS_MMAP/USERPTR/DMABUF caps, but it 
> is very
> easy to add if we create a new capability field anyway, and it has always 
> annoyed
> the hell out of me that we didn't have a good way to let userspace know what
> streaming I/O modes we support. And with vb2 it's easy to implement.
> 
> Regarding point 3: I think this should be documented next to the pixel 
> format. I.e.
> the MPEG-2 Slice format used by the stateless cedrus codec requires the 
> request API
> and that two MPEG-2 controls (slice params and quantization matrices) must be 
> present
> in each request.
> 
> I am not sure a control flag (e.g. V4L2_CTRL_FLAG_REQUIRED_IN_REQ) is needed 
> here.
> It's really implied by the fact that you use a stateless codec. It doesn't 
> help
> generic applications like v4l2-ctl or qv4l2 either since in order to support
> stateless codecs they will have to know about the details of these controls 
> anyway.

Right, I don't think this is needed in the short term, as we target
only stateless CODECs. But this is an important use case for, let's
say, requests to cameras. When we get there, we will need a mechanism
to list all the controls that can be included in a request, and also
all the ones that must be present (if any).

> 
> 

Re: [RFC] Request API and V4L2 capabilities

2018-08-15 Thread Nicolas Dufresne
Le lundi 06 août 2018 à 10:16 +0200, Paul Kocialkowski a écrit :
> Hi Hans and all,
> 
> On Sat, 2018-08-04 at 15:50 +0200, Hans Verkuil wrote:
> > Hi all,
> > 
> > While the Request API patch series addresses all the core API issues, there
> > are some high-level considerations as well:
> > 
> > 1) How can the application tell that the Request API is supported and for
> >which buffer types (capture/output) and pixel formats?
> > 
> > 2) How can the application tell if the Request API is required as opposed 
> > to being
> >optional?
> > 
> > 3) Some controls may be required in each request, how to let userspace know 
> > this?
> >Is it even necessary to inform userspace?
> > 
> > 4) (For bonus points): How to let the application know which streaming I/O 
> > modes
> >are available? That's never been possible before, but it would be very 
> > nice
> >indeed if that's made explicit.
> 
> Thanks for bringing up these considerations and questions, which perhaps
> cover the last missing bits for streamlined use of the request API by
> userspace. I would suggest another item, related to 3):
> 
> 5) How can applications tell whether the driver supports a specific
> codec profile/level, not only for encoding but also for decoding? It's
> common for low-end embedded hardware to not support the most advanced
> profiles (e.g. H264 high profile).

Hi Paul, after some discussion with Philip, he sent a proposal patch
that enables profile/level extended CID support to decoders too. The
control is made read-only, the point is not really the CID get/set but
that the controls allow enumerating the supported values. This seems
quite straightforward and easy to use.

This enumeration is already provided this way by some of the existing
stateful encoders.

> 
> > Since the Request API associates data with frame buffers it makes sense to 
> > expose
> > this as a new capability field in struct v4l2_requestbuffers and struct 
> > v4l2_create_buffers.
> > 
> > The first struct has 2 reserved fields, the second has 8, so it's not a 
> > problem to
> > take one for a capability field. Both structs also have a buffer type, so 
> > we know
> > if this is requested for a capture or output buffer type. The pixel format 
> > is known
> > in the driver, so HAS/REQUIRES_REQUESTS can be set based on that. I doubt 
> > we'll have
> > drivers where the request caps would actually depend on the pixel format, 
> > but it
> > theoretically possible. For both ioctls you can call them with count=0 at 
> > the start
> > of the application. REQBUFS has of course the side-effect of deleting all 
> > buffers,
> > but at the start of your application you don't have any yet. CREATE_BUFS 
> > has no
> > side-effects.
> 
> My initial thoughts on this point were to have flags exposed in
> v4l2_capability, but now that you're saying it, it does make sense for
> the flag to be associated with a buffer rather than the global device.
> 
> In addition, I've heard of cases (IIRC it was some Rockchip platforms)
> where the platform has both stateless and stateful VPUs (I think it was
> stateless up to H264 and stateful for H265). This would allow supporting
> these two hardware blocks under the same video device (if that makes
> sense anyway). And even if there's no immediate need, it's always good
> to have this level of granularity (with little drawbacks).
> 
> > I propose adding these capabilities:
> > 
> > #define V4L2_BUF_CAP_HAS_REQUESTS   0x0001
> > #define V4L2_BUF_CAP_REQUIRES_REQUESTS  0x0002
> > #define V4L2_BUF_CAP_HAS_MMAP   0x0100
> > #define V4L2_BUF_CAP_HAS_USERPTR0x0200
> > #define V4L2_BUF_CAP_HAS_DMABUF 0x0400
> > 
> > If REQUIRES_REQUESTS is set, then HAS_REQUESTS is also set.
> > 
> > At this time I think that REQUIRES_REQUESTS would only need to be set for 
> > the
> > output queue of stateless codecs.
> > 
> > If capabilities is 0, then it's from an old kernel and all you know is that
> > requests are certainly not supported, and that MMAP is supported. Whether 
> > USERPTR
> > or DMABUF are supported isn't known in that case (just try it :-) ).
> 
> Sounds good to me!
> 
> > Strictly speaking we do not need these HAS_MMAP/USERPTR/DMABUF caps, but it 
> > is very
> > easy to add if we create a new capability field anyway, and it has always 
> > annoyed
> > the hell out of me that we didn't have a good way to let userspace know what
> > streaming I/O modes we support. And with vb2 it's easy to implement.
> 
> I totally agree here, it would be very nice to take the occasion to
> expose to userspace what I/O modes are available. The current try-and-
> see approach works, but this feels much better indeed.
> 
> > Regarding point 3: I think this should be documented next to the pixel 
> > format. I.e.
> > the MPEG-2 Slice format used by the stateless cedrus codec requires the 
> > request API
> > and that two MPEG-2 controls (slice params and quantization matrices) must 
> 

Re: [PATCH v2 1/2] uvcvideo: rename UVC_QUIRK_INFO to UVC_INFO_QUIRK

2018-08-04 Thread Nicolas Dufresne
On Friday, August 3, 2018 at 13:36 +0200, Guennadi Liakhovetski wrote:
> This macro defines "information about quirks," not "quirks for
> information."

That does not sound better to me. It's "quirk's information" vs.
"information about quirks", and I prefer the first one. In terms of C
namespacing the original is also better: the namespace is UVC_QUIRK,
and the detail is INFO.

If we were to apply your logic, you'd rename driver_info to
info_driver, because it's information about the driver.

> 
> Signed-off-by: Guennadi Liakhovetski 
> ---
>  drivers/media/usb/uvc/uvc_driver.c | 18 +-
>  1 file changed, 9 insertions(+), 9 deletions(-)
> 
> diff --git a/drivers/media/usb/uvc/uvc_driver.c
> b/drivers/media/usb/uvc/uvc_driver.c
> index d46dc43..699984b 100644
> --- a/drivers/media/usb/uvc/uvc_driver.c
> +++ b/drivers/media/usb/uvc/uvc_driver.c
> @@ -2344,7 +2344,7 @@ static int uvc_clock_param_set(const char *val,
> const struct kernel_param *kp)
>   .quirks = UVC_QUIRK_FORCE_Y8,
>  };
>  
> -#define UVC_QUIRK_INFO(q) (kernel_ulong_t)&(struct
> uvc_device_info){.quirks = q}
> +#define UVC_INFO_QUIRK(q) (kernel_ulong_t)&(struct
> uvc_device_info){.quirks = q}
>  
>  /*
>   * The Logitech cameras listed below have their interface class set
> to
> @@ -2453,7 +2453,7 @@ static int uvc_clock_param_set(const char *val,
> const struct kernel_param *kp)
> .bInterfaceClass  = USB_CLASS_VIDEO,
> .bInterfaceSubClass   = 1,
> .bInterfaceProtocol   = 0,
> -   .driver_info  =
> UVC_QUIRK_INFO(UVC_QUIRK_RESTORE_CTRLS_ON_INIT) },
> +   .driver_info  =
> UVC_INFO_QUIRK(UVC_QUIRK_RESTORE_CTRLS_ON_INIT) },
>   /* Chicony CNF7129 (Asus EEE 100HE) */
>   { .match_flags  = USB_DEVICE_ID_MATCH_DEVICE
>   | USB_DEVICE_ID_MATCH_INT_INFO,
> @@ -2462,7 +2462,7 @@ static int uvc_clock_param_set(const char *val,
> const struct kernel_param *kp)
> .bInterfaceClass  = USB_CLASS_VIDEO,
> .bInterfaceSubClass   = 1,
> .bInterfaceProtocol   = 0,
> -   .driver_info  =
> UVC_QUIRK_INFO(UVC_QUIRK_RESTRICT_FRAME_RATE) },
> +   .driver_info  =
> UVC_INFO_QUIRK(UVC_QUIRK_RESTRICT_FRAME_RATE) },
>   /* Alcor Micro AU3820 (Future Boy PC USB Webcam) */
>   { .match_flags  = USB_DEVICE_ID_MATCH_DEVICE
>   | USB_DEVICE_ID_MATCH_INT_INFO,
> @@ -2525,7 +2525,7 @@ static int uvc_clock_param_set(const char *val,
> const struct kernel_param *kp)
> .bInterfaceClass  = USB_CLASS_VIDEO,
> .bInterfaceSubClass   = 1,
> .bInterfaceProtocol   = 0,
> -   .driver_info  =
> UVC_QUIRK_INFO(UVC_QUIRK_PROBE_MINMAX
> +   .driver_info  =
> UVC_INFO_QUIRK(UVC_QUIRK_PROBE_MINMAX
>   | UVC_QUIRK_BUILTIN_ISIGHT) },
>   /* Apple Built-In iSight via iBridge */
>   { .match_flags  = USB_DEVICE_ID_MATCH_DEVICE
> @@ -2607,7 +2607,7 @@ static int uvc_clock_param_set(const char *val,
> const struct kernel_param *kp)
> .bInterfaceClass  = USB_CLASS_VIDEO,
> .bInterfaceSubClass   = 1,
> .bInterfaceProtocol   = 0,
> -   .driver_info  =
> UVC_QUIRK_INFO(UVC_QUIRK_PROBE_MINMAX
> +   .driver_info  =
> UVC_INFO_QUIRK(UVC_QUIRK_PROBE_MINMAX
>   | UVC_QUIRK_PROBE_DEF) },
>   /* IMC Networks (Medion Akoya) */
>   { .match_flags  = USB_DEVICE_ID_MATCH_DEVICE
> @@ -2707,7 +2707,7 @@ static int uvc_clock_param_set(const char *val,
> const struct kernel_param *kp)
> .bInterfaceClass  = USB_CLASS_VIDEO,
> .bInterfaceSubClass   = 1,
> .bInterfaceProtocol   = 0,
> -   .driver_info  =
> UVC_QUIRK_INFO(UVC_QUIRK_PROBE_MINMAX
> +   .driver_info  =
> UVC_INFO_QUIRK(UVC_QUIRK_PROBE_MINMAX
>   | UVC_QUIRK_PROBE_EXTRAFIELDS)
> },
>   /* Aveo Technology USB 2.0 Camera (Tasco USB Microscope) */
>   { .match_flags  = USB_DEVICE_ID_MATCH_DEVICE
> @@ -2725,7 +2725,7 @@ static int uvc_clock_param_set(const char *val,
> const struct kernel_param *kp)
> .bInterfaceClass  = USB_CLASS_VIDEO,
> .bInterfaceSubClass   = 1,
> .bInterfaceProtocol   = 0,
> -   .driver_info  =
> UVC_QUIRK_INFO(UVC_QUIRK_PROBE_EXTRAFIELDS) },
> +   .driver_info  =
> UVC_INFO_QUIRK(UVC_QUIRK_PROBE_EXTRAFIELDS) },
>   /* Manta MM-353 Plako */
>   { .match_flags  = USB_DEVICE_ID_MATCH_DEVICE
>   | USB_DEVICE_ID_MATCH_INT_INFO,
> @@ -2771,7 +2771,7 @@ static int uvc_clock_param_set(const char *val,
> const struct kernel_param *kp)
> .bInterfaceClass  = USB_CLASS_VIDEO,
> .bInterfaceSubClass   = 1,
> .bInterfaceProtocol   = 0,
> -   .driver_info  =
> UVC_QUIRK_INFO(UVC_QUIRK_STATUS_INTERVAL) },
> +  

Re: Video capturing

2018-07-05 Thread Nicolas Dufresne
On Thursday, July 5, 2018 at 16:35 +0300, Oleh Kravchenko wrote:
> Hello Nicolas,
> 
> On 05.07.18 15:57, Nicolas Dufresne wrote:
> > 
> > 
> > On Thu, Jul 5, 2018 at 05:28, Oleh Kravchenko <o...@kaa.org.ua> wrote:
> > 
> > Hello!
> > 
> > Yesterday I tried to capture video from old game console (PAL)
> > and
> > got an image like this
> > https://www.kaa.org.ua/images/EvromediaUSBFullHybridFullHD/mpla
> > yer_nes.png
> > 
> > 
> > Can you describe how this image was captured ? Can you give some
> > details about your tv tuner? Do you also use GStreamer on RPi?
> 
> I have those TV tuners:
> AVerTV Hybrid Express Slim HC81R
> Evromedia USB Full Hybrid Full HD
> Astrometa T2hybrid
> 
> Here examples with mplayer and mpv:
> mplayer tv:///1 -tv
> width=720:height=576:adevice=hw.2:alsa=1:amode=0:forceaudio=1:immedia
> temode=0:norm=pal
> mpv tv:///1 --tv-width=720 --tv-height=576 --tv-adevice=hw.2
> --tv-alsa --tv-amode=0 --tv-forceaudio=yes --tv-immediatemode=no
> --tv-norm=pal

And do you get the same with GStreamer?

gst-launch-1.0 v4l2src device=/dev/video1 norm=PAL ! videoconvert ! 
autovideosink

> 
> I didn't use GStreamer on RPi, because in my case RPi is a source of
> video signal for TV tuner.
> 
> > 
> > 
> > 
> > I tried different TV norms, but no success.
> > At the same time that video console works fine with my TV!
> > My TV tuners works fine with Nokia N900 (PAL, NTSC), Raspberry
> > Pi
> > (PAL),
> > PlayStation 3 (PAL).
> > 
> > Any idea what it can be?
> > 
> > PS:
> > By the way, is allowed to send screenshots and photos as
> > attachments in
> > this mail list?
> > 
> > -- 
> > Best regards,
> > Oleh Kravchenko
> > 
> > 
> 
> 


Re: [PATCH 16/16] media: imx: add mem2mem device

2018-06-22 Thread Nicolas Dufresne
On Friday, June 22, 2018 at 17:52 +0200, Philipp Zabel wrote:
> Add a single imx-media mem2mem video device that uses the IPU IC PP
> (image converter post processing) task for scaling and colorspace
> conversion.
> On i.MX6Q/DL SoCs with two IPUs currently only the first IPU is used.
> 
> The hardware only supports writing to destination buffers up to
> 1024x1024 pixels in a single pass, so the mem2mem video device is
> limited to this resolution. After fixing the tiling code it should
> be possible to extend this to arbitrary sizes by rendering multiple
> tiles per frame.
> 
> Signed-off-by: Philipp Zabel 

Tested-by: Nicolas Dufresne 

> ---
>  drivers/staging/media/imx/Kconfig |   1 +
>  drivers/staging/media/imx/Makefile|   1 +
>  drivers/staging/media/imx/imx-media-dev.c |  11 +
>  drivers/staging/media/imx/imx-media-mem2mem.c | 953
> ++
>  drivers/staging/media/imx/imx-media.h |  10 +
>  5 files changed, 976 insertions(+)
>  create mode 100644 drivers/staging/media/imx/imx-media-mem2mem.c
> 
> diff --git a/drivers/staging/media/imx/Kconfig
> b/drivers/staging/media/imx/Kconfig
> index bfc17de56b17..07013cb3cb66 100644
> --- a/drivers/staging/media/imx/Kconfig
> +++ b/drivers/staging/media/imx/Kconfig
> @@ -6,6 +6,7 @@ config VIDEO_IMX_MEDIA
>   depends on HAS_DMA
>   select VIDEOBUF2_DMA_CONTIG
>   select V4L2_FWNODE
> + select V4L2_MEM2MEM_DEV
>   ---help---
> Say yes here to enable support for video4linux media
> controller
> driver for the i.MX5/6 SOC.
> diff --git a/drivers/staging/media/imx/Makefile
> b/drivers/staging/media/imx/Makefile
> index 698a4210316e..f2e722d0fa19 100644
> --- a/drivers/staging/media/imx/Makefile
> +++ b/drivers/staging/media/imx/Makefile
> @@ -6,6 +6,7 @@ imx-media-ic-objs := imx-ic-common.o imx-ic-prp.o
> imx-ic-prpencvf.o
>  obj-$(CONFIG_VIDEO_IMX_MEDIA) += imx-media.o
>  obj-$(CONFIG_VIDEO_IMX_MEDIA) += imx-media-common.o
>  obj-$(CONFIG_VIDEO_IMX_MEDIA) += imx-media-capture.o
> +obj-$(CONFIG_VIDEO_IMX_MEDIA) += imx-media-mem2mem.o
>  obj-$(CONFIG_VIDEO_IMX_MEDIA) += imx-media-vdic.o
>  obj-$(CONFIG_VIDEO_IMX_MEDIA) += imx-media-ic.o
>  
> diff --git a/drivers/staging/media/imx/imx-media-dev.c
> b/drivers/staging/media/imx/imx-media-dev.c
> index 289d775c4820..7a9aabcae3ee 100644
> --- a/drivers/staging/media/imx/imx-media-dev.c
> +++ b/drivers/staging/media/imx/imx-media-dev.c
> @@ -359,6 +359,17 @@ static int imx_media_probe_complete(struct
> v4l2_async_notifier *notifier)
>   goto unlock;
>  
>   ret = v4l2_device_register_subdev_nodes(>v4l2_dev);
> + if (ret)
> + goto unlock;
> +
> + /* TODO: check whether we have IC subdevices first */
> + imxmd->m2m_vdev = imx_media_mem2mem_device_init(imxmd);
> + if (IS_ERR(imxmd->m2m_vdev)) {
> + ret = PTR_ERR(imxmd->m2m_vdev);
> + goto unlock;
> + }
> +
> + ret = imx_media_mem2mem_device_register(imxmd->m2m_vdev);
>  unlock:
>   mutex_unlock(>mutex);
>   if (ret)
> diff --git a/drivers/staging/media/imx/imx-media-mem2mem.c
> b/drivers/staging/media/imx/imx-media-mem2mem.c
> new file mode 100644
> index ..8830f77f0407
> --- /dev/null
> +++ b/drivers/staging/media/imx/imx-media-mem2mem.c
> @@ -0,0 +1,953 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * i.MX IPUv3 mem2mem Scaler/CSC driver
> + *
> + * Copyright (C) 2011 Pengutronix, Sascha Hauer
> + * Copyright (C) 2018 Pengutronix, Philipp Zabel
> + *
> + * This program is free software; you can redistribute it and/or
> modify
> + * it under the terms of the GNU General Public License as published
> by
> + * the Free Software Foundation; either version 2 of the License, or
> + * (at your option) any later version.
> + */
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +
> +#include "imx-media.h"
> +
> +#define MIN_W 16
> +#define MIN_H 16
> +#define MAX_W 4096
> +#define MAX_H 4096
> +
> +#define fh_to_ctx(__fh)  container_of(__fh, struct
> mem2mem_ctx, fh)
> +
> +enum {
> + V4L2_M2M_SRC = 0,
> + V4L2_M2M_DST = 1,
> +};
> +
> +struct mem2mem_priv {
> + struct imx_media_video_dev vdev;
> +
> + struct v4l2_m2m_dev   *m2m_dev;
> + struct device *dev;
> +
> + struct imx_media_dev  *md;
> +
> + struct mutex  mutex;   /* mem2mem device mutex
> */
> +
> + atom

Re: Software-only image processing for Intel "complex" cameras

2018-06-20 Thread Nicolas Dufresne
On Wednesday, June 20, 2018 at 22:38 +0200, Pavel Machek wrote:
> Hi!
> 
> On Nokia N900, I have similar problems as Intel IPU3 hardware.
> 
> Meeting notes say that pure software implementation is not fast
> enough, but that it may be useful for debugging. It would be also
> useful for me on N900, and probably useful for processing "raw"
> images
> from digital cameras.
> 
> There is sensor part, and memory-to-memory part, right? What is
> the format of data from the sensor part? What operations would be
> expensive on the CPU? If we did everthing on the CPU, what would be
> maximum resolution where we could still manage it in real time?

The IPU3 sensor produces a vendor-specific form of Bayer. If we manage
to implement support for this format, it would likely be done in
software. I don't think anyone can answer your other questions, as no
one has ever implemented this and hence measured its performance.

> 
> Would it be possible to get access to machine with IPU3, or would
> there be someone willing to test libv4l2 patches?
> 
> Thanks and best regards,
> 
>   
> Pavel

signature.asc
Description: This is a digitally signed message part


Re: [RFC 0/2] Memory-to-memory media controller topology

2018-06-16 Thread Nicolas Dufresne
On Friday, June 15, 2018 at 17:05 -0300, Ezequiel Garcia wrote:
> > Will the end result have "device node name /dev/..." on both entity
> > 1
> > and 6 ? 
> 
> No. There is just one devnode /dev/videoX, which accepts
> both CAPTURE and OUTPUT directions.

My question is more whether the dev node path will be provided
somehow, because it's not displayed in this topology.

Nicolas



Re: [RFC 0/2] Memory-to-memory media controller topology

2018-06-12 Thread Nicolas Dufresne
On Tuesday, June 12, 2018 at 07:48 -0300, Ezequiel Garcia wrote:
> As discussed on IRC, memory-to-memory need to be modeled
> properly in order to be supported by the media controller
> framework, and thus to support the Request API.
> 
> This RFC is a first draft on the memory-to-memory
> media controller topology.
> 
> The topology looks like this:
> 
> Device topology
> - entity 1: input (1 pad, 1 link)
> type Node subtype Unknown flags 0
>   pad0: Source
>   -> "proc":1 [ENABLED,IMMUTABLE]
> 
> - entity 3: proc (2 pads, 2 links)
> type Node subtype Unknown flags 0
>   pad0: Source
>   -> "output":0 [ENABLED,IMMUTABLE]
>   pad1: Sink
>   <- "input":0 [ENABLED,IMMUTABLE]
> 
> - entity 6: output (1 pad, 1 link)
> type Node subtype Unknown flags 0
>   pad0: Sink
>   <- "proc":0 [ENABLED,IMMUTABLE]

Will the end result have "device node name /dev/..." on both entity 1
and 6? I was told that in the long run, one should be able to map a
device (/dev/mediaN) to its nodes (/dev/video*), so that if we keep
going this way, all the media devices can be enumerated from the media
node rather than from a mix of media nodes and orphaned video nodes.
> 
> The first commit introduces a register/unregister API,
> that creates/destroys all the entities and pads needed,
> and links them.
> 
> The second commit uses this API to support the vim2m driver.
> 
> Notes
> -
> 
> * A new device node type is introduced VFL_TYPE_MEM2MEM,
>   this is mostly done so the video4linux core doesn't
>   try to register other media controller entities.
> 
> * Also, a new media entity type is introduced. Memory-to-memory
>   devices have a multi-entity description and so can't
>   be simply embedded in other structs, or cast from other structs.
> 
> Ezequiel Garcia (1):
>   media: add helpers for memory-to-memory media controller
> 
> Hans Verkuil (1):
>   vim2m: add media device
> 
>  drivers/media/platform/vim2m.c |  41 ++-
>  drivers/media/v4l2-core/v4l2-dev.c |  23 ++--
>  drivers/media/v4l2-core/v4l2-mem2mem.c | 157
> +
>  include/media/media-entity.h   |   4 +
>  include/media/v4l2-dev.h   |   2 +
>  include/media/v4l2-mem2mem.h   |   5 +
>  include/uapi/linux/media.h |   2 +
>  7 files changed, 222 insertions(+), 12 deletions(-)
> 
> -- 
> 2.17.1
> 
> 



Re: Bug: media device controller node not removed when uvc device is unplugged

2018-06-07 Thread Nicolas Dufresne
On Thursday, June 7, 2018 at 14:07 +0200, Torleiv Sundre wrote:
> Hi,
> 
> Every time I plug in a UVC camera, a media controller node is created at 
> /dev/media.
> 
> In Ubuntu 17.10, running kernel 4.13.0-43, the media controller device 
> node is removed when the UVC camera is unplugged.
> 
> In Ubuntu 18.10, running kernel 4.15.0-22, the media controller device 
> node is not removed. For every time I plug the device, a new device node 
> with incremented minor number is created, leaving me with a growing list 
> of media controller device nodes. If I repeat for long enough, I get the 
> following error:
> "media: could not get a free minor"
> I also tried building a kernel from mainline, with the same result.
> 
> I'm running on x86_64.

I also observe this on 4.17.

> 
> Torleiv Sundre


Re: [PATCH 3/6] media: videodev2.h: Add macro V4L2_FIELD_IS_SEQUENTIAL

2018-05-25 Thread Nicolas Dufresne
On Friday, May 25, 2018 at 21:14 -0400, Nicolas Dufresne wrote:
> On Friday, May 25, 2018 at 17:19 -0700, Steve Longerbeam wrote:
> > 
> > On 05/25/2018 05:10 PM, Nicolas Dufresne wrote:
> > > (in text this time, sorry)
> > > 
> > > On Friday, May 25, 2018 at 16:53 -0700, Steve Longerbeam wrote:
> > > > Add a macro that returns true if the given field type is
> > > > 'sequential',
> > > > that is, the data is transmitted, or exists in memory, as all top
> > > > field
> > > > lines followed by all bottom field lines, or vice-versa.
> > > > 
> > > > Signed-off-by: Steve Longerbeam <steve_longerb...@mentor.com>
> > > > ---
> > > >   include/uapi/linux/videodev2.h | 4 
> > > >   1 file changed, 4 insertions(+)
> > > > 
> > > > diff --git a/include/uapi/linux/videodev2.h
> > > > b/include/uapi/linux/videodev2.h
> > > > index 600877b..408ee96 100644
> > > > --- a/include/uapi/linux/videodev2.h
> > > > +++ b/include/uapi/linux/videodev2.h
> > > > @@ -126,6 +126,10 @@ enum v4l2_field {
> > > >  (field) == V4L2_FIELD_INTERLACED_BT ||\
> > > >  (field) == V4L2_FIELD_SEQ_TB ||\
> > > >  (field) == V4L2_FIELD_SEQ_BT)
> > > > +#define V4L2_FIELD_IS_SEQUENTIAL(field) \
> > > > +   ((field) == V4L2_FIELD_SEQ_TB ||\
> > > > +(field) == V4L2_FIELD_SEQ_BT ||\
> > > > +(field) == V4L2_FIELD_ALTERNATE)
> > > 
> > > No, ALTERNATE has no place here; in alternate mode each buffer has
> > > only one field.
> > 
> > Then I misunderstand what is meant by "alternate". The name implies
> > to me that a source sends top or bottom field alternately, or top/bottom
> > fields exist in memory buffers alternately, but with no information about
> > which field came first. In other words, "alternate" is either seq-tb or 
> > seq-bt,
> > without any info about field order.
> > 
> > If it is just one field in a memory buffer, why isn't it called
> > V4L2_FIELD_TOP_OR_BOTTOM, e.g. we don't know which?
> 
> I don't see why this could be better then ALTERNATE, were buffers are
> only top or bottom fields alternatively. And even if there was another
> possible name, this is public API.
> 
> V4L2_FIELD_ALTERNATE is a mode, that will only be used with
> v4l2_pix_format or v4l2_pix_format_mplane. I should never bet set on
> the v4l2_buffer.field, instead the driver indicates the parity of the
> field by setting V42_FIELD_TOP/BOTTOM on the v4l2_buffer returned by
> DQBUF. This is a very different mode of operation compared to
> sequential, hence why I believe it is wrong to make it part of the new
> helper. So far, it's the only field value that has this asymmetric
> usage and meaning.

I should have put in some references. Here is the explanation of the
modes, with a temporal representation of the fields. Small note: in
ALTERNATE mode, bottom and top fields will likely not share the same
timestamp; it is a mode used to achieve lower latency.

https://linuxtv.org/downloads/v4l-dvb-apis/uapi/v4l/field-order.html#c.v4l2_field

And in this section, you'll see a paragraph that explain the field
values when running in ALTERNATE mode.

https://linuxtv.org/downloads/v4l-dvb-apis/uapi/v4l/buffer.html#c.v4l2_buffer

> 
> > 
> > Steve
> > 


Re: [PATCH 3/6] media: videodev2.h: Add macro V4L2_FIELD_IS_SEQUENTIAL

2018-05-25 Thread Nicolas Dufresne
On Friday, May 25, 2018 at 17:19 -0700, Steve Longerbeam wrote:
> 
> On 05/25/2018 05:10 PM, Nicolas Dufresne wrote:
> > (in text this time, sorry)
> > 
> > On Friday, May 25, 2018 at 16:53 -0700, Steve Longerbeam wrote:
> > > Add a macro that returns true if the given field type is
> > > 'sequential',
> > > that is, the data is transmitted, or exists in memory, as all top
> > > field
> > > lines followed by all bottom field lines, or vice-versa.
> > > 
> > > Signed-off-by: Steve Longerbeam <steve_longerb...@mentor.com>
> > > ---
> > >   include/uapi/linux/videodev2.h | 4 
> > >   1 file changed, 4 insertions(+)
> > > 
> > > diff --git a/include/uapi/linux/videodev2.h
> > > b/include/uapi/linux/videodev2.h
> > > index 600877b..408ee96 100644
> > > --- a/include/uapi/linux/videodev2.h
> > > +++ b/include/uapi/linux/videodev2.h
> > > @@ -126,6 +126,10 @@ enum v4l2_field {
> > >(field) == V4L2_FIELD_INTERLACED_BT ||\
> > >(field) == V4L2_FIELD_SEQ_TB ||\
> > >(field) == V4L2_FIELD_SEQ_BT)
> > > +#define V4L2_FIELD_IS_SEQUENTIAL(field) \
> > > + ((field) == V4L2_FIELD_SEQ_TB ||\
> > > +  (field) == V4L2_FIELD_SEQ_BT ||\
> > > +  (field) == V4L2_FIELD_ALTERNATE)
> > 
> > No, ALTERNATE has no place here; in alternate mode each buffer has
> > only one field.
> 
> Then I misunderstand what is meant by "alternate". The name implies
> to me that a source sends top or bottom field alternately, or top/bottom
> fields exist in memory buffers alternately, but with no information about
> which field came first. In other words, "alternate" is either seq-tb or 
> seq-bt,
> without any info about field order.
> 
> If it is just one field in a memory buffer, why isn't it called
> V4L2_FIELD_TOP_OR_BOTTOM, e.g. we don't know which?

I don't see why this would be better than ALTERNATE, where buffers
contain only top or bottom fields alternately. And even if there was
another possible name, this is public API.

V4L2_FIELD_ALTERNATE is a mode that will only be used with
v4l2_pix_format or v4l2_pix_format_mplane. It should never be set on
v4l2_buffer.field; instead the driver indicates the parity of the
field by setting V4L2_FIELD_TOP/BOTTOM on the v4l2_buffer returned by
DQBUF. This is a very different mode of operation compared to
sequential, hence why I believe it is wrong to make it part of the new
helper. So far, it's the only field value that has this asymmetric
usage and meaning.

> 
> Steve
> 


Re: [PATCH 3/6] media: videodev2.h: Add macro V4L2_FIELD_IS_SEQUENTIAL

2018-05-25 Thread Nicolas Dufresne
(in text this time, sorry)

On Friday, May 25, 2018 at 16:53 -0700, Steve Longerbeam wrote:
> Add a macro that returns true if the given field type is
> 'sequential',
> that is, the data is transmitted, or exists in memory, as all top
> field
> lines followed by all bottom field lines, or vice-versa.
> 
> Signed-off-by: Steve Longerbeam 
> ---
>  include/uapi/linux/videodev2.h | 4 
>  1 file changed, 4 insertions(+)
> 
> diff --git a/include/uapi/linux/videodev2.h
> b/include/uapi/linux/videodev2.h
> index 600877b..408ee96 100644
> --- a/include/uapi/linux/videodev2.h
> +++ b/include/uapi/linux/videodev2.h
> @@ -126,6 +126,10 @@ enum v4l2_field {
>(field) == V4L2_FIELD_INTERLACED_BT ||\
>(field) == V4L2_FIELD_SEQ_TB ||\
>(field) == V4L2_FIELD_SEQ_BT)
> +#define V4L2_FIELD_IS_SEQUENTIAL(field) \
> + ((field) == V4L2_FIELD_SEQ_TB ||\
> +  (field) == V4L2_FIELD_SEQ_BT ||\
> +  (field) == V4L2_FIELD_ALTERNATE)

No, ALTERNATE has no place here; in alternate mode each buffer has
only one field.

>  #define V4L2_FIELD_HAS_T_OR_B(field) \
>   ((field) == V4L2_FIELD_BOTTOM ||\
>(field) == V4L2_FIELD_TOP ||\


Re: [ANN] Meeting to discuss improvements to support MC-based cameras on generic apps

2018-05-18 Thread Nicolas Dufresne
On Friday, May 18, 2018 at 16:37 +0100, Dave Stevenson wrote:
> On 18 May 2018 at 16:05, Mauro Carvalho Chehab
>  wrote:
> > On Fri, 18 May 2018 15:27:24 +0300
> 
> 
> > > 
> > > > There, instead of an USB camera, the hardware is equipped with a
> > > > MC-based ISP, connected to its camera. Currently, despite having
> > > > a Kernel driver for it, the camera doesn't work with any
> > > > userspace application.
> > > > 
> > > > I'm also aware of other projects that are considering the usage of
> > > > mc-based devices for non-dedicated hardware.
> > > 
> > > What are those projects ?
> > 
> > Well, cheap ARM-based hardware like RPi3 already has this issue: they
> > have an ISP (or some GPU firmware meant to emulate an ISP). While
> > those hardware could have multiple sensors, typically they have just
> > one.
> 
> Slight hijack, but a closely linked issue for the Pi.
> The way I understand the issue of V4L2 / MC on Pi is a more
> fundamental mismatch in architecture. Please correct me if I'm wrong
> here.
> 
> The Pi CSI2 receiver peripheral always writes the incoming data to
> SDRAM, and the ISP is then a memory to memory device.

This is the same for IPU3, and some new ARM ISPs work like this too.
IPU3, though, is fixed: you simply enable / disable / configure the
ISP based on the stats/metadata. Basically, it's not a single device
but really two separate things, where the ISP could be used without a
sensor. (Hope this makes sense; it needs to be taken into
consideration.)

> 
> V4L2 subdevices are not dma controllers and therefore have no buffers
> allocated to them. So to support the full complexity of the pipeline
> in V4L2 requires that something somewhere would have to be dequeuing
> the buffers from the CSI receiver V4L2 device and queuing them to the
> input of a (theoretical) ISP M2M V4L2 device, and returning them once
> processed. The application only cares about the output of the ISP M2M
> device.
> 
> So I guess my question is whether there is a sane mechanism to remove
> that buffer allocation and handling from the app? Without it we are
> pretty much forced to hide bigger blobs of functionality to even
> vaguely fit in with V4L2.
> 
> I'm at the point where it shouldn't be a huge amount of work to create
> at least a basic ISP V4L2 M2M device, but I'm not planning on doing it
> if it pushes the above buffer handling onto the app because it simply
> won't get used beyond demo apps. The likes of Cheese, Scratch, etc,
> just won't do it.

Well, it would have to be a media controller device running in M2M
mode, as it is not 1:1 in terms of number of inputs to number of
outputs.

> 
> To avoid ambiguity, the Pi has a hardware ISP block. There are other
> SoCs that use either GPU code or a DSP to implement their ISP.

That's a good point, something that needs to be kept in mind.

> 
>   Dave



Re: [ANN] Meeting to discuss improvements to support MC-based cameras on generic apps

2018-05-18 Thread Nicolas Dufresne
On Friday, May 18, 2018 at 15:38 +0300, Laurent Pinchart wrote:
> > Before libv4l, media support for a given device were limited to a few
> > apps that knew how to decode the format. There were even cases were a
> > proprietary app were required, as no open source decoders were available.
> > 
> > From my PoV, the biggest gain with libv4l is that the same group of
> > maintainers can ensure that the entire solution (Kernel driver and
> > low level userspace support) will provide everything required for an
> > open source app to work with it.
> > 
> > I'm not sure how we would keep enforcing it if the pipeline setting
> > and control propagation logic for an specific hardware will be
> > delegated to PipeWire. It seems easier to keep doing it on a libv4l
> > (version 2) and let PipeWire to use it.
> 
> I believe we need to first study pipewire in more details. I have no personal 
> opinion yet as I haven't had time to investigate it. That being said, I don't 
> think that libv4l with closed-source plugins would be much better than a 
> closed-source pipewire plugin. What main concern once we provide a userspace 
> camera stack API is that vendors might implement that API in a closed-source 
> component that calls to a kernel driver implementing a custom API, with all 
> knowledge about the camera located in the closed-source component. I'm not 
> sure how to prevent that, my best proposal would be to make V4L2 so useful 
> that vendors wouldn't even think about a different solution (possibly coupled 
> by the pressure put by platform vendors such as Google who mandate upstream 
> kernel drivers for Chrome OS, but that's still limited as even when it comes 
> to Google there's no such pressure on the Android side).

If there are proprietary plugins, then I don't think it will make any
difference where this is implemented. The difference is the feature
set we expose. 3A is per device, but multiple streams with per-request
controls are also possible. PipeWire gives a central place to manage
this, while giving multiple processes access to the camera streams. I
think in the end, what fits better would be something like the Android
Camera HAL2. But we could encourage OSS by maintaining a base
implementation that covers all the V4L2 aspects, leaving only the 3A
part of the work to be done. Maybe we need to come up with an
abstraction that does not prevent multi-stream, but only requires 3A
per vendor (saying per vendor, as some of this could be open-sourced
by third parties).

just thinking out loud now ;-P
Nicolas

P.S. Do we have the Intel / IPU3 folks in the loop? This is likely
the most pressing HW, as it's shipping on many laptops now.



Re: [ANN] Meeting to discuss improvements to support MC-based cameras on generic apps

2018-05-18 Thread Nicolas Dufresne
On Friday, May 18, 2018 at 11:15 +0300, Laurent Pinchart wrote:
> > I need to clarify a little bit why we disabled libv4l2 in
> > GStreamer, as it's not only for performance reasons; there are a
> > couple of major issues in the libv4l2 implementation that get in
> > the way. Just a short list:
> > 
> > 
> 
> Do you see any point in that list that couldn't be fixed in libv4l?

Sure, most of it is features being added to the kernel uAPI but not
added to the emulation layer. But apart from that, libv4l will only
offer the legacy use cases; we need to think about how generic
userspace will be able to access these cameras and leverage the
per-request controls, multi-stream, etc. features. This is mostly what
the Android Camera HAL2 does (and it does it well), but it won't try
to ensure this stays Open Source in any way. I would not mind if the
Android Camera HAL2 led the way, with a facilitator (something that
does 90% of the work if you have a proper Open Source driver) leading
the way in getting more OSS drivers submitted.

> >- Crash when CREATE_BUFS is being used

This is a side effect of CREATE_BUFS being passed through;
implementing emulation for this should be straightforward.

> >- Crash in the jpeg decoder (when frames are corrupted)

A minimalist framing parser would detect just enough of this and
would fix it.

> >- Apps exporting DMABuf need to be aware of the emulation, otherwise the
> >  DMABufs exported are in the original format

libv4l2 can return ENOTTY to expbufs calls in 

> >- RW emulation only initializes the queue on first read (causing
> >  userspace poll() to fail)

This is not fixable in libv4l2; the only place it could be fixed is by
moving this emulation into videobuf2. That would assume someone does
care about RW (even though it could be a nicer uAPI when dealing with
muxed or byte-stream types of data).

> >- Signature of v4l2_mmap does not match mmap() (minor)
> >- The colorimetry does not seem emulated when conversion

This one is probably tricky, especially if the converter plugin API is
considered stable. Maybe just resetting everything to DEFAULT would
work?

> >- Sub-optimal locking (at least deadlocks were fixed)

This needs more investigation, really, and proper measurement.



Re: [ANN] Meeting to discuss improvements to support MC-based cameras on generic apps

2018-05-17 Thread Nicolas Dufresne
On Thursday, May 17, 2018 at 16:07 -0300, Mauro Carvalho Chehab wrote:
> Hi all,
> 
> The goal of this e-mail is to schedule a meeting in order to discuss
> improvements at the media subsystem in order to support complex
> camera
> hardware by usual apps.
> 
> The main focus here is to allow supporting devices with MC-based
> hardware connected to a camera.
> 
> In short, my proposal is to meet with the interested parties on
> solving
> this issue during the Open Source Summit in Japan, e. g. between
> June, 19-22, in Tokyo.
> 
> I'd like to know who is interested on joining us for such meeting,
> and to hear a proposal of themes for discussions.
> 
> I'm enclosing a detailed description of the problem, in order to
> allow the interested parties to be at the same page.

It's unlikely I'll be able to attend this meeting, but I'd like to
provide some initial input on this. Find inline some clarification on
why libv4l2 is disabled by default in Gst, as it's not just
performance.

A major aspect that is totally absent from this mail is PipeWire. With
the advent of sandboxed applications, there is a need to control access
to cameras through a daemon. The same daemon is also used to control
access to screen capture on Wayland (instead of letting any random
application capture your screen, like on X11). The effort is led by the
desktop team at RedHat (folks CCed). PipeWire already has native V4L2
support and is integrated in GStreamer in a way that it can totally
replace the V4L2 capture component there. PipeWire is plugin-based, so
more types of camera support (including proprietary ones) can be added.
Remote daemons can also provide streams, as is the case for compositors
and screen casting. An extra benefit is that you can have multiple
applications reading frames from the same camera. It also allows
sandboxed applications (which do not have access to /dev) to use the
cameras. PipeWire is much more than that, but let's focus on that.

This is the direction we are heading on the "generic" / desktop Linux.
Porting Firefox and Chrome is obviously planned, as these beasts are
clear candidates for being sandboxed and require the screen-sharing
feature for WebRTC.

In this context, proprietary or HW-specific algorithms could be
implemented in userspace as PipeWire plugins, and applications will
then automatically be able to enumerate and use them. I'm not saying
the libv4l2 stuff is not needed short term, but it's just a short-term
thing in my opinion.

> 
> Regards,
> Mauro
> 
> ---
> 
> 
> 1. Introduction
> ===
> 
> 1.1 V4L2 Kernel aspects
> ---
> 
> The media subsystem supports two types of devices:
> 
> - "traditional" media hardware, supported via V4L2 API. On such
> hardware, 
>   opening a single device node (usually /dev/video0) is enough to
> control
>   the entire device. We call it as devnode-based devices.
> 
> - Media-controller based devices. On those devices, there are several
>   /dev/video? nodes and several /dev/v4l2-subdev? nodes, plus a media
>   controller device node (usually /dev/media0).
>   We call it as mc-based devices. Controlling the hardware require
>   opening the media device (/dev/media0), setup the pipeline and
> adjust
>   the sub-devices via /dev/v4l2-subdev?. Only streaming is controlled
>   by /dev/video?.
> 
> All "standard" media applications, including open source ones
> (Camorama,
> Cheese, Xawtv, Firefox, Chromium, ...) and closed source ones
> (Skype, 
> Chrome, ...) supports devnode-based devices.
> 
> Support for mc-based devices currently require an specialized
> application 
> in order to prepare the device for its usage (setup pipelines, adjust
> hardware controls, etc). Once pipeline is set, the streaming goes via
> /dev/video?, although usually some /dev/v4l2-subdev? devnodes should
> also
> be opened, in order to implement algorithms designed to make video
> quality
> reasonable. On such devices, it is not uncommon that the device used
> by the
> application to be a random number (on OMAP3 driver, typically, is
> either
> /dev/video4 or /dev/video6).
> 
> One example of such hardware is at the OMAP3-based hardware:
> 
>   http://www.infradead.org/~mchehab/mc-next-gen/omap3-igepv2-with
> -tvp5150.png
> 
> On the picture, there's a graph with the hardware blocks in
> blue/dark/blue
> and the corresponding devnode interfaces in yellow.
> 
> The mc-based approach was taken when support for Nokia N9/N900
> cameras 
> was added (with has OMAP3 SoC). It is required because the camera
> hardware
> on SoC comes with a media processor (ISP), with does a lot more than
> just
> capturing, allowing complex algorithms to enhance image quality in
> runtime.
> Those algorithms are known as 3A - an acronym for 3 other acronyms:
> 
>   - AE (Auto Exposure);
>   - AF (Auto Focus);
>   - AWB (Auto White Balance).
> 
> Setting a camera with such ISPs are harder because the pipelines to
> be
> set actually depends the requirements for those 3A 

Re: [PATCH 26/28] venus: implementing multi-stream support

2018-05-02 Thread Nicolas Dufresne
Le mercredi 02 mai 2018 à 13:10 +0530, Vikash Garodia a écrit :
> Hello Stanimir,
> 
> On 2018-04-24 18:14, Stanimir Varbanov wrote:
> > This is implementing a multi-stream decoder support. The multi
> > stream gives an option to use the secondary decoder output
> > with different raw format (or the same in case of crop).
> > 
> > Signed-off-by: Stanimir Varbanov 
> > ---
> >  drivers/media/platform/qcom/venus/core.h|   1 +
> >  drivers/media/platform/qcom/venus/helpers.c | 204 
> > +++-
> >  drivers/media/platform/qcom/venus/helpers.h |   6 +
> >  drivers/media/platform/qcom/venus/vdec.c|  91 -
> >  drivers/media/platform/qcom/venus/venc.c|   1 +
> >  5 files changed, 299 insertions(+), 4 deletions(-)
> > 
> > diff --git a/drivers/media/platform/qcom/venus/core.h
> > b/drivers/media/platform/qcom/venus/core.h
> > index 4d6c05f156c4..85e66e2dd672 100644
> > --- a/drivers/media/platform/qcom/venus/core.h
> > +++ b/drivers/media/platform/qcom/venus/core.h
> > @@ -259,6 +259,7 @@ struct venus_inst {
> > struct list_head list;
> > struct mutex lock;
> > struct venus_core *core;
> > +   struct list_head dpbbufs;
> > struct list_head internalbufs;
> > struct list_head registeredbufs;
> > struct list_head delayed_process;
> > diff --git a/drivers/media/platform/qcom/venus/helpers.c
> > b/drivers/media/platform/qcom/venus/helpers.c
> > index ed569705ecac..87dcf9973e6f 100644
> > --- a/drivers/media/platform/qcom/venus/helpers.c
> > +++ b/drivers/media/platform/qcom/venus/helpers.c
> > @@ -85,6 +85,112 @@ bool venus_helper_check_codec(struct venus_inst
> > *inst, u32 v4l2_pixfmt)
> >  }
> >  EXPORT_SYMBOL_GPL(venus_helper_check_codec);
> > 
> > +static int venus_helper_queue_dpb_bufs(struct venus_inst *inst)
> > +{
> > +   struct intbuf *buf;
> > +   int ret = 0;
> > +
> > +   if (list_empty(&inst->dpbbufs))
> > +   return 0;
> > +
> > +   list_for_each_entry(buf, &inst->dpbbufs, list) {
> > +   struct hfi_frame_data fdata;
> > +
> > +   memset(&fdata, 0, sizeof(fdata));
> > +   fdata.alloc_len = buf->size;
> > +   fdata.device_addr = buf->da;
> > +   fdata.buffer_type = buf->type;
> > +
> > +   ret = hfi_session_process_buf(inst, &fdata);
> > +   if (ret)
> > +   goto fail;
> > +   }
> > +
> > +fail:
> > +   return ret;
> > +}
> > +
> > +int venus_helper_free_dpb_bufs(struct venus_inst *inst)
> > +{
> > +   struct intbuf *buf, *n;
> > +
> > +   if (list_empty(&inst->dpbbufs))
> > +   return 0;
> > +
> > +   list_for_each_entry_safe(buf, n, &inst->dpbbufs, list) {
> > +   list_del_init(>list);
> > +   dma_free_attrs(inst->core->dev, buf->size, buf->va, buf->da,
> > +  buf->attrs);
> > +  buf->attrs);
> > +   kfree(buf);
> > +   }
> > +
> > +   INIT_LIST_HEAD(&inst->dpbbufs);
> > +
> > +   return 0;
> > +}
> > +EXPORT_SYMBOL_GPL(venus_helper_free_dpb_bufs);
> > +
> > +int venus_helper_alloc_dpb_bufs(struct venus_inst *inst)
> > +{
> > +   struct venus_core *core = inst->core;
> > +   struct device *dev = core->dev;
> > +   enum hfi_version ver = core->res->hfi_version;
> > +   struct hfi_buffer_requirements bufreq;
> > +   u32 buftype = inst->dpb_buftype;
> > +   unsigned int dpb_size = 0;
> > +   struct intbuf *buf;
> > +   unsigned int i;
> > +   u32 count;
> > +   int ret;
> > +
> > +   /* no need to allocate dpb buffers */
> > +   if (!inst->dpb_fmt)
> > +   return 0;
> > +
> > +   if (inst->dpb_buftype == HFI_BUFFER_OUTPUT)
> > +   dpb_size = inst->output_buf_size;
> > +   else if (inst->dpb_buftype == HFI_BUFFER_OUTPUT2)
> > +   dpb_size = inst->output2_buf_size;
> > +
> > +   if (!dpb_size)
> > +   return 0;
> > +
> > +   ret = venus_helper_get_bufreq(inst, buftype, &bufreq);
> > +   if (ret)
> > +   return ret;
> > +
> > +   count = HFI_BUFREQ_COUNT_MIN(&bufreq, ver);
> > +
> > +   for (i = 0; i < count; i++) {
> > +   buf = kzalloc(sizeof(*buf), GFP_KERNEL);
> > +   if (!buf) {
> > +   ret = -ENOMEM;
> > +   goto fail;
> > +   }
> > +
> > +   buf->type = buftype;
> > +   buf->size = dpb_size;
> > +   buf->attrs = DMA_ATTR_WRITE_COMBINE |
> > +DMA_ATTR_NO_KERNEL_MAPPING;
> > +   buf->va = dma_alloc_attrs(dev, buf->size, &buf->da, GFP_KERNEL,
> > + buf->attrs);
> > +   if (!buf->va) {
> > +   kfree(buf);
> > +   ret = -ENOMEM;
> > +   goto fail;
> > +   }
> > +
> > +   list_add_tail(&buf->list, &inst->dpbbufs);
> > +   }
> > +
> > +   return 0;
> > +
> > +fail:
> > +   venus_helper_free_dpb_bufs(inst);
> > +   return ret;
> > +}
> > +EXPORT_SYMBOL_GPL(venus_helper_alloc_dpb_bufs);
> > +
> >  static int intbufs_set_buffer(struct venus_inst *inst, u32 type)
> >  {
> > struct venus_core *core = 

Re: Webcams not recognized on a Dell Latitude 5285 laptop

2018-04-01 Thread Nicolas Dufresne
This laptop embeds one of these new "complex" cameras from Intel. They
requires IPU3 driver. Though, unlike traditional webcam, you need
special userspace to use it (there is no embedded firmware to manage
focus, whitebalance, etc, userspace code need to read the stats and
manage that). As of now, there is no good plan on how to support this
in userspace.

I have seen a lot of Intel engineers speak about a GStreamer element,
icamsrc, which would support this camera, but I haven't seen any public
source code, so I can only assume it's all closed source. It's also of
limited use considering that browsers don't use GStreamer.

If I was to propose a plan, this should be integrated into the PipeWire
daemon, which is an upcoming userspace daemon that multiplexes, notably,
cameras across multiple processes (a bit like PulseAudio for audio).
It's also the foreseen solution for sandboxed applications, which cannot
directly access anything in /dev and would otherwise reserve a resource
like a camera forever.

On the Linux desktop, this is a very bad launch; it will likely take
some time before your camera works out of the box (if ever).

Nicolas

Le dimanche 01 avril 2018 à 18:17 +0200, Frédéric Parrenin a écrit :
> Dear Sakari et al.,
> 
> The acpi tables are apparently too big for the mailing lists.
> So I put the file here:
> https://mycore.core-cloud.net/index.php/s/DwTOb8TJJZYJtNe
> 
> Any information on what is going on with the webcams will be
> appreciated.
> 
> Thanks,
> 
> Frédéric
> 
> > 
> > Or drivers. And a bit more than that actually. Assuming this is
> > IPU3, that
> > is. If that's the case, the short answer is there's no trivial way
> > to
> > support webcam-like functionality using this device. The ACPI
> > tables would
> > tell more details.
> > 
> > Could you send me the ACPI tables, i.e. the file produced by the
> > command:
> > 
> > acpidump -o acpidump.bin
> > 
> > In Debian acpidump is included in acpica-tools package.
> > 
> > Thank you.
> > 


Re: [RFC] Request API

2018-03-22 Thread Nicolas Dufresne
Le jeudi 22 mars 2018 à 18:22 +0100, Hans Verkuil a écrit :
> On 03/22/2018 05:36 PM, Nicolas Dufresne wrote:
> > Le jeudi 22 mars 2018 à 15:18 +0100, Hans Verkuil a écrit :
> > > RFC Request API
> > > ---
> > > 
> > > This document proposes the public API for handling requests.
> > > 
> > > There has been some confusion about how to do this, so this summarizes the
> > > current approach based on conversations with the various stakeholders 
> > > today
> > > (Sakari, Alexandre Courbot, Tomasz Figa and myself).
> > > 
> > > The goal is to finalize this so the Request API patch series work can
> > > continue.
> > > 
> > > 1) Additions to the media API
> > > 
> > >Allocate an empty request object:
> > > 
> > >#define MEDIA_IOC_REQUEST_ALLOC _IOW('|', 0x05, __s32 *)
> > 
> > I see this is MEDIA_IOC namespace, I thought that there was an opening
> > for m2m (codec) to not have to expose a media node. Is this still the
> > case ?
> 
> Allocating requests will have to be done via the media device and codecs will
> therefor register a media device as well.
> 
> However, it is an open question if we want to have what is basically a 
> shortcut
> V4L2 ioctl like VIDIOC_REQUEST_ALLOC so applications that deal with stateless
> codecs do not have to open the media device just to allocate a request.

CODEC drivers don't have any use for the media device. So to me it's
important not to impose on userspace having to open and manage two
devices. The presence of a media object in the kernel should not imply
exposing such a device in /dev.

> 
> I guess that whether or not you want that depends on how open you are for
> practical considerations in an API.
> 
> I've asked Alexandre to add this V4L2 ioctl as a final patch in the series
> and we can decide later on whether or not to accept it.
> 
> Sorry, I wanted to mention this in the RFC as a note at the end, but I forgot.
> 
> > 
> > > 
> > >This will return a file descriptor representing the request or an error
> > >if it can't allocate the request.
> > > 
> > >If the pointer argument is NULL, then this will just return 0 (if this 
> > > ioctl
> > >is implemented) or -ENOTTY otherwise. This can be used to test whether 
> > > this
> > >ioctl is supported or not without actually having to allocate a 
> > > request.
> > > 
> > > 2) Operations on the request fd
> > > 
> > >You can queue (aka submit) or reinit a request by calling these ioctls 
> > > on the request fd:
> > > 
> > >#define MEDIA_REQUEST_IOC_QUEUE   _IO('|',  128)
> > >#define MEDIA_REQUEST_IOC_REINIT  _IO('|',  129)
> > > 
> > >Note: the original proposal from Alexandre used IOC_SUBMIT instead of
> > >IOC_QUEUE. I have a slight preference for QUEUE since that implies 
> > > that the
> > >request end up in a queue of requests. That's less obvious with 
> > > SUBMIT. I
> > >have no strong opinion on this, though.
> > > 
> > >With REINIT you reset the state of the request as if you had just 
> > > allocated
> > >it. You cannot REINIT a request if the request is queued but not yet 
> > > completed.
> > >It will return -EBUSY in that case.
> > > 
> > >Calling QUEUE if the request is already queued or completed will 
> > > return -EBUSY
> > >as well. Or would -EPERM be better? I'm open to suggestions. Either 
> > > error code
> > >will work, I think.
> > > 
> > >You can poll the request fd to wait for it to complete. A request is 
> > > complete
> > >if all the associated buffers are available for dequeuing and all the 
> > > associated
> > >controls (such as controls containing e.g. statistics) are updated 
> > > with their
> > >final values.
> > > 
> > >To free a request you close the request fd. Note that it may still be 
> > > in
> > >use internally, so this has to be refcounted.
> > > 
> > >Requests only contain the changes since the previously queued request 
> > > or
> > >since the current hardware state if no other requests are queued.
> > > 
> > > 3) To associate a v4l2 buffer with a request the 'reserved' field in 
> > > struct
> > >v4l2_buffer is used to store the request fd. Buffers won't be 
> > > 'prepared'

Re: [RFC] Request API

2018-03-22 Thread Nicolas Dufresne
Le jeudi 22 mars 2018 à 15:18 +0100, Hans Verkuil a écrit :
> RFC Request API
> ---
> 
> This document proposes the public API for handling requests.
> 
> There has been some confusion about how to do this, so this summarizes the
> current approach based on conversations with the various stakeholders today
> (Sakari, Alexandre Courbot, Tomasz Figa and myself).
> 
> The goal is to finalize this so the Request API patch series work can
> continue.
> 
> 1) Additions to the media API
> 
>Allocate an empty request object:
> 
>#define MEDIA_IOC_REQUEST_ALLOC _IOW('|', 0x05, __s32 *)

I see this is MEDIA_IOC namespace, I thought that there was an opening
for m2m (codec) to not have to expose a media node. Is this still the
case ?

> 
>This will return a file descriptor representing the request or an error
>if it can't allocate the request.
> 
>If the pointer argument is NULL, then this will just return 0 (if this 
> ioctl
>is implemented) or -ENOTTY otherwise. This can be used to test whether this
>ioctl is supported or not without actually having to allocate a request.
> 
> 2) Operations on the request fd
> 
>You can queue (aka submit) or reinit a request by calling these ioctls on 
> the request fd:
> 
>#define MEDIA_REQUEST_IOC_QUEUE   _IO('|',  128)
>#define MEDIA_REQUEST_IOC_REINIT  _IO('|',  129)
> 
>Note: the original proposal from Alexandre used IOC_SUBMIT instead of
>IOC_QUEUE. I have a slight preference for QUEUE since that implies that the
>request end up in a queue of requests. That's less obvious with SUBMIT. I
>have no strong opinion on this, though.
> 
>With REINIT you reset the state of the request as if you had just allocated
>it. You cannot REINIT a request if the request is queued but not yet 
> completed.
>It will return -EBUSY in that case.
> 
>Calling QUEUE if the request is already queued or completed will return 
> -EBUSY
>as well. Or would -EPERM be better? I'm open to suggestions. Either error 
> code
>will work, I think.
> 
>You can poll the request fd to wait for it to complete. A request is 
> complete
>if all the associated buffers are available for dequeuing and all the 
> associated
>controls (such as controls containing e.g. statistics) are updated with 
> their
>final values.
> 
>To free a request you close the request fd. Note that it may still be in
>use internally, so this has to be refcounted.
> 
>Requests only contain the changes since the previously queued request or
>since the current hardware state if no other requests are queued.
> 
> 3) To associate a v4l2 buffer with a request the 'reserved' field in struct
>v4l2_buffer is used to store the request fd. Buffers won't be 'prepared'
>until the request is queued since the request may contain information that
>is needed to prepare the buffer.
> 
>Queuing a buffer without a request after a buffer with a request is 
> equivalent
>to queuing a request containing just that buffer and nothing else. I.e. it 
> will
>just use whatever values the hardware has at the time of processing.
> 
> 4) To associate v4l2 controls with a request we take the first of the
>'reserved[2]' array elements in struct v4l2_ext_controls and use it to 
> store
>the request fd.
> 
>When querying a control value from a request it will return the newest
>value in the list of pending requests, or the current hardware value if
>is not set in any of the pending requests.
> 
>Setting controls without specifying a request fd will just act like it does
>today: the hardware is immediately updated. This can cause race conditions
>if the same control is also specified in a queued request: it is not 
> defined
>which will be set first. It is therefor not a good idea to set the same
>control directly as well as set it as part of a request.
> 
> Notes:
> 
> - Earlier versions of this API had a TRY command as well to validate the
>   request. I'm not sure that is useful so I dropped it, but it can easily
>   be added if there is a good use-case for it. Traditionally within V4L the
>   TRY ioctl will also update wrong values to something that works, but that
>   is not the intention here as far as I understand it. So the validation
>   step can also be done when the request is queued and, if it fails, it will
>   just return an error.

I think it's worth understanding that this would mimic the DRM atomic
interface. The reason atomic operations can be tried like this is that
it's not possible to generically represent all the constraints. So this
would only be useful if we do have this issue.
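To illustrate the DRM-atomic analogy in isolation, here is a conceptual, made-up model (not libdrm code) where one validation path serves both a TEST_ONLY-style check and the real commit; the even-width rule merely stands in for hardware constraints that cannot be expressed generically to userspace:

```c
/* Conceptual sketch of why an atomic-style API can offer a
 * TRY/TEST_ONLY step: the same validation runs for both the check
 * and the commit, and only the commit applies the state. */
struct fake_state { int width; int applied; };

static int validate(const struct fake_state *s)
{
	/* Stand-in for constraints the uAPI cannot express generically */
	return (s->width > 0 && s->width % 2 == 0) ? 0 : -1;
}

static int commit(struct fake_state *s, int test_only)
{
	if (validate(s))
		return -1;	/* rejected, nothing applied */
	if (!test_only)
		s->applied = 1;	/* commit: apply the validated state */
	return 0;
}
```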

> 
> - If due to performance reasons we will have to allocate/queue/reinit multiple
>   requests with a single ioctl, then we will have to add new ioctls to the
>   media device. At this moment in time it is not clear that this is really
>   needed and it certainly isn't needed for the stateless 

Re: uvcvideo: Unknown video format,00000032-0002-0010-8000-00aa00389b71

2018-03-21 Thread Nicolas Dufresne
Le mercredi 21 mars 2018 à 10:55 +0200, Laurent Pinchart a écrit :
> Hi Nicolas,
> 
> On Wednesday, 21 March 2018 05:38:59 EET Nicolas Dufresne wrote:
> > Le mardi 20 mars 2018 à 20:04 +0200, Laurent Pinchart a écrit :
> > > On Tuesday, 20 March 2018 19:45:51 EET Nicolas Dufresne wrote:
> > > > Le mardi 20 mars 2018 à 13:20 +0100, Paul Menzel a écrit :
> > > > > Dear Linux folks,
> > > > > 
> > > > > 
> > > > > On the Dell XPS 13 9370, Linux 4.16-rc6 outputs the messages below.
> > > > > 
> > > > > ```
> > > > > [2.338094] calling  uvc_init+0x0/0x1000 [uvcvideo] @ 295
> > > > > [2.338569] calling  iTCO_wdt_init_module+0x0/0x1000 [iTCO_wdt] @
> > > > > 280
> > > > > [2.338570] iTCO_wdt: Intel TCO WatchDog Timer Driver v1.11
> > > > > [2.338713] iTCO_wdt: Found a Intel PCH TCO device (Version=4,
> > > > > TCOBASE=0x0400)
> > > > > [2.338755] uvcvideo: Found UVC 1.00 device Integrated_Webcam_HD
> > > > > (0bda:58f4)
> > > > > [2.338827] iTCO_wdt: initialized. heartbeat=30 sec (nowayout=0)
> > > > > [2.338851] initcall iTCO_wdt_init_module+0x0/0x1000 [iTCO_wdt]
> > > > > returned 0 after 271 usecs
> > > > > [2.340669] uvcvideo 1-5:1.0: Entity type for entity Extension 4
> > > > > was
> > > > > not initialized!
> > > > > [2.340670] uvcvideo 1-5:1.0: Entity type for entity Extension 7
> > > > > was
> > > > > not initialized!
> > > > > [2.340672] uvcvideo 1-5:1.0: Entity type for entity Processing 2
> > > > > was
> > > > > not initialized!
> > > > > [2.340673] uvcvideo 1-5:1.0: Entity type for entity Camera 1 was
> > > > > not
> > > > > initialized!
> > > > > [2.340736] input: Integrated_Webcam_HD: Integrate as
> > > > > /devices/pci:00/:00:14.0/usb1/1-5/1-5:1.0/input/input9
> > > > > [2.341447] uvcvideo: Unknown video format
> > > > > 0032-0002-0010-8000-00aa00389b71
> > > > 
> > > > While the 0002 is suspicious, this is pretty close to a color format.
> > > > I've recently come across of similar format using D3DFORMAT instead of
> > > > GUID. According to the vendor*, this camera module includes an infrared
> > > > camera (340x340), so I suspect this is to specify the format it
> > > > outputs. A good guess to start with would be that this is
> > > > D3DFMT_X8L8V8U8 (0x32).
> > > 
> > > Isn't 0x32 D3DFMT_L8, not D3DFMT_X8L8V8U8 ?
> > 
> > You are right, sorry about that, I totally miss-translate. It felt
> > weird. This is much more likely yes. So maybe it's the same mapping
> > (but with the -2- instead) as what I added for the HoloLense
> > Camera.
> > 
> > > > To test it, you could map this
> > > > V4L2_PIX_FMT_YUV32/xRGB and see if the driver is happy with the buffer
> > > > size.
> > > 
> > > VideoStreaming Interface Descriptor:
> > > bLength30
> > > bDescriptorType36
> > > bDescriptorSubtype  5 (FRAME_UNCOMPRESSED)
> > > bFrameIndex 1
> > > bmCapabilities   0x00
> > > 
> > >   Still image unsupported
> > > 
> > > wWidth340
> > > wHeight   340
> > > dwMinBitRate 55488000
> > > dwMaxBitRate 55488000
> > > dwMaxVideoFrameBufferSize  115600
> > > dwDefaultFrameInterval 16
> > > bFrameIntervalType  1
> > > dwFrameInterval( 0)16
> > > 
> > > 340*340 is 115600, so this should be a 8-bit format.
> > 
> > Indeed, that matches.
> > 
> > > > Then render it to make sure it looks some image of some sort. A
> > > > new format will need to be defined as this format is in the wrong
> > > > order, and is ambiguous (it may mean AYUV or xYUV). I'm not sure if we
> > > > need specific formats to differentiate infrared data from YUV images,
> > > > need to be discussed.
> > > 
> > > If the format is indeed D3DFMT_L8, it should map to V4L

Re: uvcvideo: Unknown video format,00000032-0002-0010-8000-00aa00389b71

2018-03-20 Thread Nicolas Dufresne
Le mardi 20 mars 2018 à 20:04 +0200, Laurent Pinchart a écrit :
> Hi Nicolas,
> 
> On Tuesday, 20 March 2018 19:45:51 EET Nicolas Dufresne wrote:
> > Le mardi 20 mars 2018 à 13:20 +0100, Paul Menzel a écrit :
> > > Dear Linux folks,
> > > 
> > > 
> > > On the Dell XPS 13 9370, Linux 4.16-rc6 outputs the messages below.
> > > 
> > > ```
> > > [2.338094] calling  uvc_init+0x0/0x1000 [uvcvideo] @ 295
> > > [2.338569] calling  iTCO_wdt_init_module+0x0/0x1000 [iTCO_wdt] @ 280
> > > [2.338570] iTCO_wdt: Intel TCO WatchDog Timer Driver v1.11
> > > [2.338713] iTCO_wdt: Found a Intel PCH TCO device (Version=4,
> > > TCOBASE=0x0400)
> > > [2.338755] uvcvideo: Found UVC 1.00 device Integrated_Webcam_HD
> > > (0bda:58f4)
> > > [2.338827] iTCO_wdt: initialized. heartbeat=30 sec (nowayout=0)
> > > [2.338851] initcall iTCO_wdt_init_module+0x0/0x1000 [iTCO_wdt]
> > > returned 0 after 271 usecs
> > > [2.340669] uvcvideo 1-5:1.0: Entity type for entity Extension 4 was
> > > not initialized!
> > > [2.340670] uvcvideo 1-5:1.0: Entity type for entity Extension 7 was
> > > not initialized!
> > > [2.340672] uvcvideo 1-5:1.0: Entity type for entity Processing 2 was
> > > not initialized!
> > > [2.340673] uvcvideo 1-5:1.0: Entity type for entity Camera 1 was not
> > > initialized!
> > > [2.340736] input: Integrated_Webcam_HD: Integrate as
> > > /devices/pci:00/:00:14.0/usb1/1-5/1-5:1.0/input/input9
> > > [2.341447] uvcvideo: Unknown video format
> > > 0032-0002-0010-8000-00aa00389b71
> > 
> > While the 0002 is suspicious, this is pretty close to a color format.
> > I've recently come across of similar format using D3DFORMAT instead of
> > GUID. According to the vendor*, this camera module includes an infrared
> > camera (340x340), so I suspect this is to specify the format it
> > outputs. A good guess to start with would be that this is
> > D3DFMT_X8L8V8U8 (0x32).
> 
> Isn't 0x32 D3DFMT_L8, not D3DFMT_X8L8V8U8 ?

You are right, sorry about that, I totally mis-translated. It felt
weird. This is much more likely, yes. So maybe it's the same mapping
(but with the -2- instead) as what I added for the HoloLens
camera.
> 
> > To test it, you could map this
> > V4L2_PIX_FMT_YUV32/xRGB and see if the driver is happy with the buffer
> > size.
> 
> VideoStreaming Interface Descriptor:
> bLength30
> bDescriptorType36
> bDescriptorSubtype  5 (FRAME_UNCOMPRESSED)
> bFrameIndex 1
> bmCapabilities   0x00
>   Still image unsupported
> wWidth340
> wHeight   340
> dwMinBitRate 55488000
> dwMaxBitRate 55488000
> dwMaxVideoFrameBufferSize  115600
> dwDefaultFrameInterval 16
> bFrameIntervalType  1
> dwFrameInterval( 0)16
> 
> 340*340 is 115600, so this should be a 8-bit format.

Indeed, that matches.

> 
> > Then render it to make sure it looks some image of some sort. A
> > new format will need to be defined as this format is in the wrong
> > order, and is ambiguous (it may mean AYUV or xYUV). I'm not sure if we
> > need specific formats to differentiate infrared data from YUV images,
> > need to be discussed.
> 
> If the format is indeed D3DFMT_L8, it should map to V4L2_PIX_FMT_GREY (8-bit 
> luminance). I suspect the camera transmits a depth map though.

I wonder if we should think of a way to tell userspace this is infrared
data rather than black and white?

> 
> > *https://dustinweb.azureedge.net/media/338953/xps-13-9370.pdf
> > 
> > > [2.341450] uvcvideo: Found UVC 1.00 device Integrated_Webcam_HD
> > > (0bda:58f4)
> > > [2.343371] uvcvideo: Unable to create debugfs 1-2 directory.
> > > [2.343420] uvcvideo 1-5:1.2: Entity type for entity Extension 10 was
> > > not initialized!
> > > [2.343422] uvcvideo 1-5:1.2: Entity type for entity Extension 12 was
> > > not initialized!
> > > [2.343423] uvcvideo 1-5:1.2: Entity type for entity Processing 9 was
> > > not initialized!
> > > [2.343424] uvcvideo 1-5:1.2: Entity type for entity Camera 11 was
> > > not initialized!
> > > [2.343472] input: Integrated_Webcam_HD: Integrate as
> > > /devices/pci:00/:00:14.0/usb1/1-5/1-5:1.2/input/input10
> > > [2.343496] usbcore: registered new interface driver uvcvideo
> > > [2.343496] USB Video Class driver (1.1.1)
> > > [2.343501] initcall uvc_init+0x0/0x1000 [uvcvideo] returned 0 after
> > > 5275 usecs
> > > ```
> > > 
> > > Please tell me, what I can do to improve the situation.
> 
> 


Re: uvcvideo: Unknown video format,00000032-0002-0010-8000-00aa00389b71

2018-03-20 Thread Nicolas Dufresne
Le mardi 20 mars 2018 à 13:20 +0100, Paul Menzel a écrit :
> Dear Linux folks,
> 
> 
> On the Dell XPS 13 9370, Linux 4.16-rc6 outputs the messages below.
> 
> ```
> [2.338094] calling  uvc_init+0x0/0x1000 [uvcvideo] @ 295
> [2.338569] calling  iTCO_wdt_init_module+0x0/0x1000 [iTCO_wdt] @ 280
> [2.338570] iTCO_wdt: Intel TCO WatchDog Timer Driver v1.11
> [2.338713] iTCO_wdt: Found a Intel PCH TCO device (Version=4,
> TCOBASE=0x0400)
> [2.338755] uvcvideo: Found UVC 1.00 device Integrated_Webcam_HD 
> (0bda:58f4)
> [2.338827] iTCO_wdt: initialized. heartbeat=30 sec (nowayout=0)
> [2.338851] initcall iTCO_wdt_init_module+0x0/0x1000 [iTCO_wdt] 
> returned 0 after 271 usecs
> [2.340669] uvcvideo 1-5:1.0: Entity type for entity Extension 4 was 
> not initialized!
> [2.340670] uvcvideo 1-5:1.0: Entity type for entity Extension 7 was 
> not initialized!
> [2.340672] uvcvideo 1-5:1.0: Entity type for entity Processing 2 was 
> not initialized!
> [2.340673] uvcvideo 1-5:1.0: Entity type for entity Camera 1 was not
> initialized!
> [2.340736] input: Integrated_Webcam_HD: Integrate as
> /devices/pci:00/:00:14.0/usb1/1-5/1-5:1.0/input/input9
> [2.341447] uvcvideo: Unknown video format
> 0032-0002-0010-8000-00aa00389b71

While the 0002 is suspicious, this is pretty close to a color format
GUID. I've recently come across a similar format using a D3DFORMAT code
instead of a GUID. According to the vendor*, this camera module includes
an infrared camera (340x340), so I suspect this is to specify the format
it outputs. A good guess to start with would be that this is
D3DFMT_X8L8V8U8 (0x32). To test it, you could map this to
V4L2_PIX_FMT_YUV32/xRGB and see if the driver is happy with the buffer
size. Then render it to make sure it looks like an image of some sort.
A new format will need to be defined, as this format is in the wrong
order and is ambiguous (it may mean AYUV or xYUV). I'm not sure if we
need specific formats to differentiate infrared data from YUV images;
that needs to be discussed.

*https://dustinweb.azureedge.net/media/338953/xps-13-9370.pdf
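The arithmetic in this thread can be double-checked, assuming (as guessed above) that the first little-endian DWORD of the UVC GUID carries a D3DFORMAT code: 0x32 decodes from `00000032-...`, and an 8bpp 340x340 frame matches the descriptor's dwMaxVideoFrameBufferSize of 115600:

```c
/* Extract the first GUID field (stored little endian in UVC
 * descriptors) under the assumption that it holds a D3DFORMAT code. */
#include <stdint.h>

static uint32_t guid_d3dfmt(const uint8_t guid[16])
{
	return (uint32_t)guid[0] | (uint32_t)guid[1] << 8 |
	       (uint32_t)guid[2] << 16 | (uint32_t)guid[3] << 24;
}
```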

> [2.341450] uvcvideo: Found UVC 1.00 device Integrated_Webcam_HD 
> (0bda:58f4)
> [2.343371] uvcvideo: Unable to create debugfs 1-2 directory.
> [2.343420] uvcvideo 1-5:1.2: Entity type for entity Extension 10 was 
> not initialized!
> [2.343422] uvcvideo 1-5:1.2: Entity type for entity Extension 12 was 
> not initialized!
> [2.343423] uvcvideo 1-5:1.2: Entity type for entity Processing 9 was 
> not initialized!
> [2.343424] uvcvideo 1-5:1.2: Entity type for entity Camera 11 was 
> not initialized!
> [2.343472] input: Integrated_Webcam_HD: Integrate as
> /devices/pci:00/:00:14.0/usb1/1-5/1-5:1.2/input/input10
> [2.343496] usbcore: registered new interface driver uvcvideo
> [2.343496] USB Video Class driver (1.1.1)
> [2.343501] initcall uvc_init+0x0/0x1000 [uvcvideo] returned 0 after 
> 5275 usecs
> ```
> 
> Please tell me, what I can do to improve the situation.
> 
> 
> Kind regards,
> 
> Paul
> 



Re: [PATCH v2 1/3] staging: xm2mvscale: Driver support for Xilinx M2M Video Scaler

2018-03-19 Thread Nicolas Dufresne
Le mardi 20 mars 2018 à 00:46 +, Rohit Athavale a écrit :
> Hi Hans,
> 
> Thanks for taking the time to take a look at this.
> 
> > This should definitely use the V4L2 API. I guess it could be added
> > to staging/media with a big fat TODO that this should be converted
> > to
> > the V4L2 mem2mem framework.
> > 
> > But it makes no sense to re-invent the V4L2 streaming API :-)
> > 
> > drivers/media/platform/mx2_emmaprp.c does something similar to
> > this.
> > It's a little bit outdated (not using the latest m2m helper
> > functions)
> > but it is a good starting point.
> 
> I looked at the mx2_emmaprp.c and the Samsung G-Scaler M2M driver.
> IMHO, the main difference between
> the Hardware registers/capabilities is that mx2_emmaprp driver or the
> gsc driver, have one scaling "channel"
> if we might call it. Whereas the HW/IP I have in mind has 4-8 scaling
> channels.
> 
> By a scaling channel, I mean an entity of the HW or IP, that can take
> the following parameters :
>  - Input height, stride , width, color format, input Y and Cb/Cr
> physically contiguous memory pointers 
>  - Output height, stride, width, color format, output Y and Cb/Cr
> physically contiguous  memory pointers
> 
> Based on the above parameters, when the above are provided and the IP
> is started, we get an interrupt on completion.
> I'm sure you are familiar with this model. However, in the case of
> this IP, there could be 4-8 such channels and a single interrupt
> on the completion of the all 4-8 scaling operations.
> 
> In this IP, we are trying to have 4-8 input sources being scaled by
> this single piece of hardware, by time multiplexing.
> 
> An example use case is :
> 
> Four applications (sources) will feed (enqueue) 4 input buffers to
> the scaler, the scaler driver will synchronize the programming of
> these buffers, when the number of buffers received  by the driver
> meets our batch size (say a batch size of 4), it will kick start the
> IP. The four applications  will poll on the fd, upon receiving an
> interrupt from the hardware the poll will unblock. And all four
> applications can dequeue their respective buffers and display them on
> a sink.

You should think of a better scheduling model; it will be really hard
to design userspace that collaborates in order to optimize the IP
usage. I think a better approach would be to keep queueing while the IP
is busy. These queues can then be sorted and prioritized.

> 
> But each "channel" can be set to do accept its own individual input
> and output formats. When I went through :
> https://www.kernel.org/doc/html/v4.14/media/uapi/v4l/open.html#multip
> le-opens
> 
> It appears, once an application has invoked VIDIOC_REQBUFS or
> VIDIOC_CREATE_BUFS, other applications cannot VIDIOC_S_FMT on them.
> However to maximize the available number of channels, it would be
> necessary to allow several applications to be able to 
> perform VIDIOC_S_FMT on the device node in the case of this hardware
> as different channels can be expected to deal with different scaling
> operations.

This does not apply to M2M devices. Each time userspace opens an M2M
device, it gets a different instance (unless there are no more
resources available). What drivers like the Samsung FIMC, GSCALER, MFC,
etc. do is limit the number of instances (open calls) to the number of
streams they can handle in parallel. They don't seem to share an IRQ
when doing batches, though.

> 
> One option is to create a logical /dev/videoX node for each such
> channel, and have a parent driver perform the interrupt handling,
> batch size setting and other such common functionalities. Is there a
> way to allow multiple applications talk to the same video device
> node/file handle without creating logical video nodes for each
> channel ?

FIMC used to expose a node per instance and it was terribly hard to
use. I don't think this is a good idea.

> 
> Please let me know if the description of HW is not clear. I will look
> forward to hear comments from you.
> 
> > 
> > So for this series:
> > 
> > Nacked-by: Hans Verkuil 
> > 
> > If this was added to drivers/staging/media instead and with an
> > updated
> > TODO, then we can accept it, but we need to see some real effort
> > afterwards
> > to switch this to the right API. Otherwise it will be removed again
> > after a few kernel cycles.
> > 
> 
> Many thanks for providing a pathway to get this into
> drivers/staging/media
> 
> I will drop this series, and re-send with the driver being placed in
> drivers/staging/media.
> I'll add some references to this conversation, so a new reviewer gets
> some context of what
> was discussed. In the meanwhile I will look into re-writing this to
> utilize the M2M V4L2 API.
> 
> > Regards,
> > 
> > Hans
> 
> 
> Best Regards,
> Rohit
> 
> 
> > -Original Message-
> > From: Hans Verkuil [mailto:hverk...@xs4all.nl]
> > Sent: Friday, March 09, 2018 3:58 AM
> > To: Greg KH ; Rohit 

Re: [PATCH] media: vb2: unify calling of set_page_dirty_lock

2018-03-13 Thread Nicolas Dufresne
Le mardi 13 mars 2018 à 21:09 -0400, Nicolas Dufresne a écrit :
> > I've looked into this again. I have hit the same issue but with CPU
> > to
> > DRM, using DMABuf allocated from DRM Dumb buffers. In that case,
> > using
> > DMA_BUF_IOCTL_SYNC fixes the issues.
> > 
> > This raises a lot of question around the model used in V4L2. As you
> > mention, prepare/finish are missing in dma-vmalloc mem_ops. I'll
> > give
> > a
> > try implementing that, it should cover my initial use case, but
> > then
> > I
> > believe it will fail if my pipeline is:
> > 
> >UVC -> in plane CPU modification -> DRM
> > 
> > Because we don't implement begin/end_cpu_access on our exported
> > DMABuf.
> > It should also fail for the following use case:
> > 
> >UVC (importer) -> DRM
> > 
> > UVC driver won't call the remote dmabuf being/end_cpu_access
> > method.
> > This one is difficult because UVC driver and vivid don't seem to be
> > aware of being an importer, exported or simply exporting to CPU
> > (through mmap). I believe what we have now pretty much assumes the
> > what
> > we export as vmalloc is to be used by CPU only. Also, the usual
> > direction used by prepare/finish ops won't work for drivers like
> > vivid
> > and UVC that write into the buffers using the cpu.
> > 
> > To be continued ...
> 
> While I was writing that, I was already outdated, as of now, we only
> have one ops, called sync. This implements the to_cpu direction only.

Replying to myself again, obviously looking at the old videobuf code
can only get one confused.

Nicolas


Re: [PATCH] media: vb2: unify calling of set_page_dirty_lock

2018-03-13 Thread Nicolas Dufresne
Le mardi 13 mars 2018 à 20:44 -0400, Nicolas Dufresne a écrit :
> Le mercredi 18 octobre 2017 à 11:34 +0300, Stanimir Varbanov a écrit
> :
> > 
> > On 10/17/2017 05:19 PM, Nicolas Dufresne wrote:
> > > Le mardi 17 octobre 2017 à 13:14 +0300, Sakari Ailus a écrit :
> > > > On Sun, Oct 15, 2017 at 07:09:24PM -0400, Nicolas Dufresne
> > > > wrote:
> > > > > Le dimanche 15 octobre 2017 à 23:40 +0300, Sakari Ailus a
> > > > > écrit
> > > > > :
> > > > > > Hi Nicolas,
> > > > > > 
> > > > > > On Tue, Oct 10, 2017 at 11:40:10AM -0400, Nicolas Dufresne
> > > > > > wrote:
> > > > > > > Le mardi 29 août 2017 à 14:26 +0300, Stanimir Varbanov a
> > > > > > > écrit :
> > > > > > > > Currently videobuf2-dma-sg checks for dma direction for
> > > > > > > > every single page and videobuf2-dc lacks any dma
> > > > > > > > direction
> > > > > > > > checks and calls set_page_dirty_lock unconditionally.
> > > > > > > > 
> > > > > > > > Thus unify and align the invocations of
> > > > > > > > set_page_dirty_lock
> > > > > > > > for videobuf2-dc, videobuf2-sg  memory allocators with
> > > > > > > > videobuf2-vmalloc, i.e. the pattern used in vmalloc has
> > > > > > > > been
> > > > > > > > copied to dc and dma-sg.
> > > > > > > 
> > > > > > > Just before we go too far in "doing like vmalloc", I
> > > > > > > would
> > > > > > > like to
> > > > > > > share this small video that display coherency issues when
> > > > > > > rendering
> > > > > > > vmalloc backed DMABuf over various KMS/DRM driver. I can
> > > > > > > reproduce
> > > > > > > this
> > > > > > > easily with Intel and MSM display drivers using UVC or
> > > > > > > Vivid as
> > > > > > > source.
> > > > > > > 
> > > > > > > The following is an HDMI capture of the following
> > > > > > > GStreamer
> > > > > > > pipeline
> > > > > > > running on Dragonboard 410c.
> > > > > > > 
> > > > > > > gst-launch-1.0 -v v4l2src device=/dev/video2 !
> > > > > > > video/x-
> > > > > > > raw,format=NV16,width=1280,height=720 ! kmssink
> > > > > > > https://people.collabora.com/~nicolas/vmalloc-issue.m
> > > > > > > ov
> > > > > > > 
> > > > > > > Feedback on this issue would be more then welcome. It's
> > > > > > > not
> > > > > > > clear
> > > > > > > to me
> > > > > > > who's bug is this (v4l2, drm or iommu). The software is
> > > > > > > unlikely to
> > > > > > > be
> > > > > > > blamed as this same pipeline works fine with non-vmalloc
> > > > > > > based
> > > > > > > sources.
> > > > > > 
> > > > > > Could you elaborate this a little bit more? Which Intel CPU
> > > > > > do you
> > > > > > have
> > > > > > there?
> > > > > 
> > > > > I have tested with Skylake and Ivy Bridge and on Dragonboard
> > > > > 410c
> > > > > (Qualcomm APQ8016 SoC) (same visual artefact)
> > > > 
> > > > I presume kmssink draws on the display. Which GPU did you use?
> > > 
> > > In order, GPU will be Iris Pro 580, Intel® Ivybridge Mobile and
> > > an
> > > Adreno (3x ?). Why does it matter ? I'm pretty sure the GPU is
> > > not
> > > used
> > > on the DB410c for this use case.
> > 
> > Nicolas, for me this looks like a problem in v4l2. In the case of
> > vivid
> > the stats overlay (where the coherency issues are observed, and
> > most
> > probably the issue will be observed on the whole image but
> > fortunately
> > it is a static image pattern) are filled by the CPU but I cannot
> > see
> > where the cache is flushed. Also I'm wondering why .finish method
> > is
> > missing for dma-vmalloc mem_ops.
> > 
> > To be sure that the problem is in vmalloc v4l2 allocator, could you
> > change the allocator to dma-contig, there is a module param for
> > that
> > called 'allocators'.
> 
> I've looked into this again. I have hit the same issue but with CPU
> to
> DRM, using DMABuf allocated from DRM Dumb buffers. In that case,
> using
> DMA_BUF_IOCTL_SYNC fixes the issues.
> 
> This raises a lot of question around the model used in V4L2. As you
> mention, prepare/finish are missing in dma-vmalloc mem_ops. I'll give
> a
> try implementing that, it should cover my initial use case, but then
> I
> believe it will fail if my pipeline is:
> 
>   UVC -> in plane CPU modification -> DRM
> 
> Because we don't implement begin/end_cpu_access on our exported
> DMABuf.
> It should also fail for the following use case:
> 
>   UVC (importer) -> DRM
> 
> UVC driver won't call the remote dmabuf being/end_cpu_access method.
> This one is difficult because UVC driver and vivid don't seem to be
> aware of being an importer, exported or simply exporting to CPU
> (through mmap). I believe what we have now pretty much assumes the
> what
> we export as vmalloc is to be used by CPU only. Also, the usual
> direction used by prepare/finish ops won't work for drivers like
> vivid
> and UVC that write into the buffers using the cpu.
> 
> To be continued ...

While I was writing that, I was already outdated: as of now, we only
have one op, called sync, and it implements the to_cpu direction only.

> 
> Nicolas
> 
> 


Re: [PATCH] media: vb2: unify calling of set_page_dirty_lock

2018-03-13 Thread Nicolas Dufresne
Le mercredi 18 octobre 2017 à 11:34 +0300, Stanimir Varbanov a écrit :
> 
> On 10/17/2017 05:19 PM, Nicolas Dufresne wrote:
> > Le mardi 17 octobre 2017 à 13:14 +0300, Sakari Ailus a écrit :
> > > On Sun, Oct 15, 2017 at 07:09:24PM -0400, Nicolas Dufresne wrote:
> > > > Le dimanche 15 octobre 2017 à 23:40 +0300, Sakari Ailus a écrit
> > > > :
> > > > > Hi Nicolas,
> > > > > 
> > > > > On Tue, Oct 10, 2017 at 11:40:10AM -0400, Nicolas Dufresne
> > > > > wrote:
> > > > > > Le mardi 29 août 2017 à 14:26 +0300, Stanimir Varbanov a
> > > > > > écrit :
> > > > > > > Currently videobuf2-dma-sg checks for dma direction for
> > > > > > > every single page and videobuf2-dc lacks any dma
> > > > > > > direction
> > > > > > > checks and calls set_page_dirty_lock unconditionally.
> > > > > > > 
> > > > > > > Thus unify and align the invocations of
> > > > > > > set_page_dirty_lock
> > > > > > > for videobuf2-dc, videobuf2-sg  memory allocators with
> > > > > > > videobuf2-vmalloc, i.e. the pattern used in vmalloc has
> > > > > > > been
> > > > > > > copied to dc and dma-sg.
> > > > > > 
> > > > > > Just before we go too far in "doing like vmalloc", I would
> > > > > > like to
> > > > > > share this small video that display coherency issues when
> > > > > > rendering
> > > > > > vmalloc backed DMABuf over various KMS/DRM driver. I can
> > > > > > reproduce
> > > > > > this
> > > > > > easily with Intel and MSM display drivers using UVC or
> > > > > > Vivid as
> > > > > > source.
> > > > > > 
> > > > > > The following is an HDMI capture of the following GStreamer
> > > > > > pipeline
> > > > > > running on Dragonboard 410c.
> > > > > > 
> > > > > > gst-launch-1.0 -v v4l2src device=/dev/video2 ! video/x-
> > > > > > raw,format=NV16,width=1280,height=720 ! kmssink
> > > > > > https://people.collabora.com/~nicolas/vmalloc-issue.mov
> > > > > > 
> > > > > > Feedback on this issue would be more then welcome. It's not
> > > > > > clear
> > > > > > to me
> > > > > > who's bug is this (v4l2, drm or iommu). The software is
> > > > > > unlikely to
> > > > > > be
> > > > > > blamed as this same pipeline works fine with non-vmalloc
> > > > > > based
> > > > > > sources.
> > > > > 
> > > > > Could you elaborate this a little bit more? Which Intel CPU
> > > > > do you
> > > > > have
> > > > > there?
> > > > 
> > > > I have tested with Skylake and Ivy Bridge and on Dragonboard
> > > > 410c
> > > > (Qualcomm APQ8016 SoC) (same visual artefact)
> > > 
> > > I presume kmssink draws on the display. Which GPU did you use?
> > 
> > In order, GPU will be Iris Pro 580, Intel® Ivybridge Mobile and an
> > Adreno (3x ?). Why does it matter ? I'm pretty sure the GPU is not
> > used
> > on the DB410c for this use case.
> 
> Nicolas, for me this looks like a problem in v4l2. In the case of
> vivid
> the stats overlay (where the coherency issues are observed, and most
> probably the issue will be observed on the whole image but
> fortunately
> it is a static image pattern) are filled by the CPU but I cannot see
> where the cache is flushed. Also I'm wondering why .finish method is
> missing for dma-vmalloc mem_ops.
> 
> To be sure that the problem is in vmalloc v4l2 allocator, could you
> change the allocator to dma-contig, there is a module param for that
> called 'allocators'.

I've looked into this again. I have hit the same issue but with CPU to
DRM, using DMABuf allocated from DRM Dumb buffers. In that case, using
DMA_BUF_IOCTL_SYNC fixes the issues.

This raises a lot of questions around the model used in V4L2. As you
mention, prepare/finish are missing in the dma-vmalloc mem_ops. I'll
give implementing that a try; it should cover my initial use case, but
then I believe it will fail if my pipeline is:

  UVC -> in plane CPU modification -> DRM

Because we don't implement begin/end_cpu_access on our exported DMABuf.
It should also fail for the following use case:

  UVC (importer) -> DRM

UVC driver won't call the remote dmabuf begin/end_cpu_access methods.
This one is difficult because the UVC and vivid drivers don't seem to
be aware of whether they are importing, exporting, or simply exporting
to the CPU (through mmap). I believe what we have now pretty much
assumes that what we export as vmalloc is to be used by the CPU only.
Also, the usual direction used by the prepare/finish ops won't work for
drivers like vivid and UVC that write into the buffers using the CPU.

To be continued ...

Nicolas




Re: [PATCH v8 07/13] [media] vb2: mark codec drivers as unordered

2018-03-09 Thread Nicolas Dufresne
Le vendredi 09 mars 2018 à 14:49 -0300, Gustavo Padovan a écrit :
> From: Gustavo Padovan 
> 
> In preparation to have full support to explicit fence we are
> marking codec as non-ordered preventively. It is easier and safer from an

The usage of "codec" is so-so ...

> uAPI point of view to move from unordered to ordered than the opposite.
> 
> Signed-off-by: Gustavo Padovan 
> ---
>  drivers/media/platform/coda/coda-common.c  | 1 +
>  drivers/media/platform/exynos-gsc/gsc-m2m.c| 1 +
>  drivers/media/platform/exynos4-is/fimc-m2m.c   | 1 +
>  drivers/media/platform/m2m-deinterlace.c   | 1 +

... these three are not codecs. Did you just mark all M2M drivers as
unordered?

>  drivers/media/platform/mtk-jpeg/mtk_jpeg_core.c| 1 +
>  drivers/media/platform/mtk-mdp/mtk_mdp_m2m.c   | 1 +
>  drivers/media/platform/mtk-vcodec/mtk_vcodec_dec.c | 1 +
>  drivers/media/platform/mtk-vcodec/mtk_vcodec_enc.c | 1 +
>  drivers/media/platform/mx2_emmaprp.c   | 1 +
>  drivers/media/platform/qcom/venus/vdec.c   | 1 +
>  drivers/media/platform/qcom/venus/venc.c   | 1 +
>  drivers/media/platform/rcar_fdp1.c | 1 +
>  drivers/media/platform/rcar_jpu.c  | 1 +
>  drivers/media/platform/rockchip/rga/rga-buf.c  | 1 +
>  drivers/media/platform/s5p-g2d/g2d.c   | 1 +

If this 2D blitter driver picks input buffers in random order, we have
a serious problem.

>  drivers/media/platform/s5p-jpeg/jpeg-core.c| 1 +
>  drivers/media/platform/s5p-mfc/s5p_mfc_dec.c   | 1 +
>  drivers/media/platform/s5p-mfc/s5p_mfc_enc.c   | 1 +
>  drivers/media/platform/sh_veu.c| 1 +
>  drivers/media/platform/sti/bdisp/bdisp-v4l2.c  | 1 +
>  drivers/media/platform/ti-vpe/vpe.c| 1 +
>  drivers/media/platform/vim2m.c | 1 +
>  22 files changed, 22 insertions(+)
> 
> diff --git a/drivers/media/platform/coda/coda-common.c 
> b/drivers/media/platform/coda/coda-common.c
> index 04e35d70ce2e..6deb29fe6eb7 100644
> --- a/drivers/media/platform/coda/coda-common.c
> +++ b/drivers/media/platform/coda/coda-common.c
> @@ -1649,6 +1649,7 @@ static const struct vb2_ops coda_qops = {
>   .stop_streaming = coda_stop_streaming,
>   .wait_prepare   = vb2_ops_wait_prepare,
>   .wait_finish= vb2_ops_wait_finish,
> + .is_unordered   = vb2_ops_set_unordered,
>  };
>  
>  static int coda_s_ctrl(struct v4l2_ctrl *ctrl)
> diff --git a/drivers/media/platform/exynos-gsc/gsc-m2m.c 
> b/drivers/media/platform/exynos-gsc/gsc-m2m.c
> index e9ff27949a91..10c3e4659d38 100644
> --- a/drivers/media/platform/exynos-gsc/gsc-m2m.c
> +++ b/drivers/media/platform/exynos-gsc/gsc-m2m.c
> @@ -286,6 +286,7 @@ static const struct vb2_ops gsc_m2m_qops = {
>   .wait_finish = vb2_ops_wait_finish,
>   .stop_streaming  = gsc_m2m_stop_streaming,
>   .start_streaming = gsc_m2m_start_streaming,
> + .is_unordered= vb2_ops_set_unordered,
>  };
>  
>  static int gsc_m2m_querycap(struct file *file, void *fh,
> diff --git a/drivers/media/platform/exynos4-is/fimc-m2m.c 
> b/drivers/media/platform/exynos4-is/fimc-m2m.c
> index a19f8b164a47..dfc487a582c0 100644
> --- a/drivers/media/platform/exynos4-is/fimc-m2m.c
> +++ b/drivers/media/platform/exynos4-is/fimc-m2m.c
> @@ -227,6 +227,7 @@ static const struct vb2_ops fimc_qops = {
>   .wait_finish = vb2_ops_wait_finish,
>   .stop_streaming  = stop_streaming,
>   .start_streaming = start_streaming,
> + .is_unordered= vb2_ops_set_unordered,
>  };
>  
>  /*
> diff --git a/drivers/media/platform/m2m-deinterlace.c 
> b/drivers/media/platform/m2m-deinterlace.c
> index 1e4195144f39..35a0f45d2a51 100644
> --- a/drivers/media/platform/m2m-deinterlace.c
> +++ b/drivers/media/platform/m2m-deinterlace.c
> @@ -856,6 +856,7 @@ static const struct vb2_ops deinterlace_qops = {
>   .queue_setup = deinterlace_queue_setup,
>   .buf_prepare = deinterlace_buf_prepare,
>   .buf_queue   = deinterlace_buf_queue,
> + .is_unordered= vb2_ops_set_unordered,
>  };
>  
>  static int queue_init(void *priv, struct vb2_queue *src_vq,
> diff --git a/drivers/media/platform/mtk-jpeg/mtk_jpeg_core.c 
> b/drivers/media/platform/mtk-jpeg/mtk_jpeg_core.c
> index 226f90886484..34a4b5b2e1b5 100644
> --- a/drivers/media/platform/mtk-jpeg/mtk_jpeg_core.c
> +++ b/drivers/media/platform/mtk-jpeg/mtk_jpeg_core.c
> @@ -764,6 +764,7 @@ static const struct vb2_ops mtk_jpeg_qops = {
>   .wait_finish= vb2_ops_wait_finish,
>   .start_streaming= mtk_jpeg_start_streaming,
>   .stop_streaming = mtk_jpeg_stop_streaming,
> + .is_unordered   = vb2_ops_set_unordered,
>  };
>  
>  static void mtk_jpeg_set_dec_src(struct mtk_jpeg_ctx *ctx,
> diff --git a/drivers/media/platform/mtk-mdp/mtk_mdp_m2m.c 
> 

Re: [Patch v8 12/12] Documention: v4l: Documentation for HEVC CIDs

2018-02-27 Thread Nicolas Dufresne
Le vendredi 02 février 2018 à 17:55 +0530, Smitha T Murthy a écrit :
> Added V4l2 controls for HEVC encoder
> 
> Signed-off-by: Smitha T Murthy 
> ---
>  Documentation/media/uapi/v4l/extended-controls.rst | 410
> +
>  1 file changed, 410 insertions(+)
> 
> diff --git a/Documentation/media/uapi/v4l/extended-controls.rst
> b/Documentation/media/uapi/v4l/extended-controls.rst
> index dfe49ae..cb0a64a 100644
> --- a/Documentation/media/uapi/v4l/extended-controls.rst
> +++ b/Documentation/media/uapi/v4l/extended-controls.rst
> @@ -1960,6 +1960,416 @@ enum v4l2_vp8_golden_frame_sel -
>  1, 2 and 3 corresponding to encoder profiles 0, 1, 2 and 3.
>  
>  
> +High Efficiency Video Coding (HEVC/H.265) Control Reference
> +---
> +
> +The HEVC/H.265 controls include controls for encoding parameters of
> HEVC/H.265
> +video codec.
> +
> +
> +.. _hevc-control-id:
> +
> +HEVC/H.265 Control IDs
> +^^
> +
> +``V4L2_CID_MPEG_VIDEO_HEVC_MIN_QP (integer)``
> +Minimum quantization parameter for HEVC.
> +Valid range: from 0 to 51.
> +
> +``V4L2_CID_MPEG_VIDEO_HEVC_MAX_QP (integer)``
> +Maximum quantization parameter for HEVC.
> +Valid range: from 0 to 51.
> +
> +``V4L2_CID_MPEG_VIDEO_HEVC_I_FRAME_QP (integer)``
> +Quantization parameter for an I frame for HEVC.
> +Valid range: [V4L2_CID_MPEG_VIDEO_HEVC_MIN_QP,
> +V4L2_CID_MPEG_VIDEO_HEVC_MAX_QP].
> +
> +``V4L2_CID_MPEG_VIDEO_HEVC_P_FRAME_QP (integer)``
> +Quantization parameter for a P frame for HEVC.
> +Valid range: [V4L2_CID_MPEG_VIDEO_HEVC_MIN_QP,
> +V4L2_CID_MPEG_VIDEO_HEVC_MAX_QP].
> +
> +``V4L2_CID_MPEG_VIDEO_HEVC_B_FRAME_QP (integer)``
> +Quantization parameter for a B frame for HEVC.
> +Valid range: [V4L2_CID_MPEG_VIDEO_HEVC_MIN_QP,
> +V4L2_CID_MPEG_VIDEO_HEVC_MAX_QP].
> +
> +``V4L2_CID_MPEG_VIDEO_HEVC_HIER_QP (boolean)``
> +HIERARCHICAL_QP allows the host to specify the quantization
> parameter
> +values for each temporal layer through HIERARCHICAL_QP_LAYER.
> This is
> +valid only if HIERARCHICAL_CODING_LAYER is greater than 1.
> Setting the
> +control value to 1 enables setting of the QP values for the
> layers.
> +
> +.. _v4l2-hevc-hier-coding-type:
> +
> +``V4L2_CID_MPEG_VIDEO_HEVC_HIER_CODING_TYPE``
> +(enum)
> +
> +enum v4l2_mpeg_video_hevc_hier_coding_type -
> +Selects the hierarchical coding type for encoding. Possible
> values are:
> +
> +.. raw:: latex
> +
> +\begin{adjustbox}{width=\columnwidth}
> +
> +.. tabularcolumns:: |p{11.0cm}|p{10.0cm}|
> +
> +.. flat-table::
> +:header-rows:  0
> +:stub-columns: 0
> +
> +* - ``V4L2_MPEG_VIDEO_HEVC_HIERARCHICAL_CODING_B``
> +  - Use the B frame for hierarchical coding.
> +* - ``V4L2_MPEG_VIDEO_HEVC_HIERARCHICAL_CODING_P``
> +  - Use the P frame for hierarchical coding.
> +
> +.. raw:: latex
> +
> +\end{adjustbox}
> +
> +
> +``V4L2_CID_MPEG_VIDEO_HEVC_HIER_CODING_LAYER (integer)``
> +Selects the hierarchical coding layer. In normal encoding
> +(non-hierarchial coding), it should be zero. Possible values are
> [0, 6].
> +0 indicates HIERARCHICAL CODING LAYER 0, 1 indicates
> HIERARCHICAL CODING
> +LAYER 1 and so on.
> +
> +``V4L2_CID_MPEG_VIDEO_HEVC_HIER_CODING_L0_QP (integer)``
> +Indicates quantization parameter for hierarchical coding layer
> 0.
> +Valid range: [V4L2_CID_MPEG_VIDEO_HEVC_MIN_QP,
> +V4L2_CID_MPEG_VIDEO_HEVC_MAX_QP].
> +
> +``V4L2_CID_MPEG_VIDEO_HEVC_HIER_CODING_L1_QP (integer)``
> +Indicates quantization parameter for hierarchical coding layer
> 1.
> +Valid range: [V4L2_CID_MPEG_VIDEO_HEVC_MIN_QP,
> +V4L2_CID_MPEG_VIDEO_HEVC_MAX_QP].
> +
> +``V4L2_CID_MPEG_VIDEO_HEVC_HIER_CODING_L2_QP (integer)``
> +Indicates quantization parameter for hierarchical coding layer
> 2.
> +Valid range: [V4L2_CID_MPEG_VIDEO_HEVC_MIN_QP,
> +V4L2_CID_MPEG_VIDEO_HEVC_MAX_QP].
> +
> +``V4L2_CID_MPEG_VIDEO_HEVC_HIER_CODING_L3_QP (integer)``
> +Indicates quantization parameter for hierarchical coding layer
> 3.
> +Valid range: [V4L2_CID_MPEG_VIDEO_HEVC_MIN_QP,
> +V4L2_CID_MPEG_VIDEO_HEVC_MAX_QP].
> +
> +``V4L2_CID_MPEG_VIDEO_HEVC_HIER_CODING_L4_QP (integer)``
> +Indicates quantization parameter for hierarchical coding layer
> 4.
> +Valid range: [V4L2_CID_MPEG_VIDEO_HEVC_MIN_QP,
> +V4L2_CID_MPEG_VIDEO_HEVC_MAX_QP].
> +
> +``V4L2_CID_MPEG_VIDEO_HEVC_HIER_CODING_L5_QP (integer)``
> +Indicates quantization parameter for hierarchical coding layer
> 5.
> +Valid range: [V4L2_CID_MPEG_VIDEO_HEVC_MIN_QP,
> +V4L2_CID_MPEG_VIDEO_HEVC_MAX_QP].
> +
> +``V4L2_CID_MPEG_VIDEO_HEVC_HIER_CODING_L6_QP (integer)``
> +Indicates quantization parameter for hierarchical coding layer
> 6.
> +Valid range: [V4L2_CID_MPEG_VIDEO_HEVC_MIN_QP,
> +V4L2_CID_MPEG_VIDEO_HEVC_MAX_QP].
> +
> +.. _v4l2-hevc-profile:
> +
> 


Re: [PATCH v2 0/3] Initial driver support for Xilinx M2M Video Scaler

2018-02-22 Thread Nicolas Dufresne
Le mercredi 21 février 2018 à 14:43 -0800, Rohit Athavale a écrit :
> This patch series has three commits :
>  - Driver support for the Xilinx M2M Video Scaler IP
>  - TODO document
>  - DT binding doc
> 
> Changes in HW register map is expected as the IP undergoes changes.
> This is a first attempt at the driver as an early prototype.
> 
> This is a M2M Video Scaler IP that uses polyphases filters to perform
> video scaling. The driver will be used by an application like a
> gstreamer plugin.

I'm hoping you know all this already, but just in case: rebasing your
driver on the videobuf2-v4l2.h interface would make it automatically
supported by GStreamer, and is likely a better proposal for upstreaming.

There are already a few drivers that could be used as inspiration.

./drivers/media/platform/vim2m.c: Which demonstrate the API
./drivers/media/platform/exynos4-is/: Exynos4 imaging functions
./drivers/media/platform/exynos-gsc/: Exynos4 scaler (and more)
./drivers/media/platform/mtk-mdp/mtk_mdp_m2m.c: MediaTek CSC/Scale
./drivers/media/platform/s5p-g2d/g2d.c: A 2D blitter iirc ?
. . .

I don't know them all; I have developed the GStreamer code with the
Exynos4/5 platform, but also had success reports on IMX6 (not upstream
yet, apparently). With the framework, you'll gain DMABuf support with
very little code.

> 
> Change Log:
> 
> v2 
>  - Cc'ing linux-media mailing list as suggested by Dan Carpenter.
>Dan wanted to see if someone from linux-media can review the 
>driver interface in xm2m_vscale.c to see if it makes sense.
>  - Another question would be the right place to keep the driver,
>in drivers/staging/media or drivers/staging/ 
>  - Dropped empty mmap_open, mmap_close ops.
>  - Removed incorrect DMA_SHARED_BUFFER select from Kconfig
> v1 - Initial version
> 
> 
> Rohit Athavale (3):
>   staging: xm2mvscale: Driver support for Xilinx M2M Video Scaler
>   staging: xm2mvscale: Add TODO for the driver
>   Documentation: devicetree: bindings: Add DT binding doc for xm2mvsc
> driver
> 
>  drivers/staging/Kconfig|   2 +
>  drivers/staging/Makefile   |   1 +
>  .../devicetree/bindings/xm2mvscaler.txt|  25 +
>  drivers/staging/xm2mvscale/Kconfig |  11 +
>  drivers/staging/xm2mvscale/Makefile|   3 +
>  drivers/staging/xm2mvscale/TODO|  18 +
>  drivers/staging/xm2mvscale/ioctl_xm2mvsc.h | 134 +++
>  drivers/staging/xm2mvscale/scaler_hw_xm2m.c| 945
> +
>  drivers/staging/xm2mvscale/scaler_hw_xm2m.h| 152 
>  drivers/staging/xm2mvscale/xm2m_vscale.c   | 768
> +
>  drivers/staging/xm2mvscale/xvm2mvsc_hw_regs.h  | 204 +
>  11 files changed, 2263 insertions(+)
>  create mode 100644
> drivers/staging/xm2mvscale/Documentation/devicetree/bindings/xm2mvsca
> ler.txt
>  create mode 100644 drivers/staging/xm2mvscale/Kconfig
>  create mode 100644 drivers/staging/xm2mvscale/Makefile
>  create mode 100644 drivers/staging/xm2mvscale/TODO
>  create mode 100644 drivers/staging/xm2mvscale/ioctl_xm2mvsc.h
>  create mode 100644 drivers/staging/xm2mvscale/scaler_hw_xm2m.c
>  create mode 100644 drivers/staging/xm2mvscale/scaler_hw_xm2m.h
>  create mode 100644 drivers/staging/xm2mvscale/xm2m_vscale.c
>  create mode 100644 drivers/staging/xm2mvscale/xvm2mvsc_hw_regs.h
> 


Re: [PATCH v3 3/9] uapi: media: New fourcc codes needed by Xilinx Video IP

2018-02-16 Thread Nicolas Dufresne
On Wednesday, February 14, 2018 at 22:42 -0800, Satish Kumar Nagireddy wrote:
> From: Jeffrey Mouroux 
> 
> The Xilinx Video Framebuffer DMA IP supports video memory formats
> that are not represented in the current V4L2 fourcc library. This
> patch adds those missing fourcc codes. This includes both new
> 8-bit and 10-bit pixel formats.

As Hyon spotted, this is missing documentation in
Documentation/media/uapi/v4l/. There are also some comments here that
can be improved, see below:

> 
> Signed-off-by: Satish Kumar Nagireddy 
> ---
>  include/uapi/linux/videodev2.h | 11 +++
>  1 file changed, 11 insertions(+)
> 
> diff --git a/include/uapi/linux/videodev2.h b/include/uapi/linux/videodev2.h
> index 9827189..9fa4313c 100644
> --- a/include/uapi/linux/videodev2.h
> +++ b/include/uapi/linux/videodev2.h
> @@ -509,7 +509,10 @@ struct v4l2_pix_format {
>  #define V4L2_PIX_FMT_XBGR32  v4l2_fourcc('X', 'R', '2', '4') /* 32  
> BGRX-8-8-8-8  */
>  #define V4L2_PIX_FMT_RGB32   v4l2_fourcc('R', 'G', 'B', '4') /* 32  
> RGB-8-8-8-8   */
>  #define V4L2_PIX_FMT_ARGB32  v4l2_fourcc('B', 'A', '2', '4') /* 32  
> ARGB-8-8-8-8  */
> +#define V4L2_PIX_FMT_BGRA32  v4l2_fourcc('A', 'B', 'G', 'R') /* 32  
> ABGR-8-8-8-8  */
>  #define V4L2_PIX_FMT_XRGB32  v4l2_fourcc('B', 'X', '2', '4') /* 32  
> XRGB-8-8-8-8  */
> +#define V4L2_PIX_FMT_BGRX32  v4l2_fourcc('X', 'B', 'G', 'R') /* 32  
> XBGR-8-8-8-8 */
> +#define V4L2_PIX_FMT_XBGR30  v4l2_fourcc('R', 'X', '3', '0') /* 32  
> XBGR-2-10-10-10 */
> 
>  /* Grey formats */
>  #define V4L2_PIX_FMT_GREYv4l2_fourcc('G', 'R', 'E', 'Y') /*  8  
> Greyscale */
> @@ -537,12 +540,16 @@ struct v4l2_pix_format {
>  #define V4L2_PIX_FMT_VYUYv4l2_fourcc('V', 'Y', 'U', 'Y') /* 16  YUV 
> 4:2:2 */
>  #define V4L2_PIX_FMT_Y41Pv4l2_fourcc('Y', '4', '1', 'P') /* 12  YUV 
> 4:1:1 */
>  #define V4L2_PIX_FMT_YUV444  v4l2_fourcc('Y', '4', '4', '4') /* 16   
>  */
> +#define V4L2_PIX_FMT_XVUY32  v4l2_fourcc('X', 'V', '3', '2') /* 32  XVUY 
> 8:8:8:8 */
> +#define V4L2_PIX_FMT_AVUY32  v4l2_fourcc('A', 'V', '3', '2') /* 32  AVUY 
> 8:8:8:8 */
> +#define V4L2_PIX_FMT_VUY24   v4l2_fourcc('V', 'U', '2', '4') /* 24  VUY 
> 8:8:8 */

If you read the convention, X:Y:Z is used to illustrate the chroma
sub-sampling. The three formats above spell out the number of bits per
component, so they should have comments like:
  XVUY-8-8-8-8
  AVUY-8-8-8-8
  VUY-8-8-8

>  #define V4L2_PIX_FMT_YUV555  v4l2_fourcc('Y', 'U', 'V', 'O') /* 16  
> YUV-5-5-5 */
>  #define V4L2_PIX_FMT_YUV565  v4l2_fourcc('Y', 'U', 'V', 'P') /* 16  
> YUV-5-6-5 */
>  #define V4L2_PIX_FMT_YUV32   v4l2_fourcc('Y', 'U', 'V', '4') /* 32  
> YUV-8-8-8-8   */
>  #define V4L2_PIX_FMT_HI240   v4l2_fourcc('H', 'I', '2', '4') /*  8  8-bit 
> color   */
>  #define V4L2_PIX_FMT_HM12v4l2_fourcc('H', 'M', '1', '2') /*  8  YUV 
> 4:2:0 16x16 macroblocks */
>  #define V4L2_PIX_FMT_M420v4l2_fourcc('M', '4', '2', '0') /* 12  YUV 
> 4:2:0 2 lines y, 1 line uv interleaved */
> +#define V4L2_PIX_FMT_XVUY10  v4l2_fourcc('X', 'Y', '1', '0') /* 32  XVUY 
> 2-10-10-10 */
> 
>  /* two planes -- one Y, one Cr + Cb interleaved  */
>  #define V4L2_PIX_FMT_NV12v4l2_fourcc('N', 'V', '1', '2') /* 12  Y/CbCr 
> 4:2:0  */
> @@ -551,6 +558,8 @@ struct v4l2_pix_format {
>  #define V4L2_PIX_FMT_NV61v4l2_fourcc('N', 'V', '6', '1') /* 16  Y/CrCb 
> 4:2:2  */
>  #define V4L2_PIX_FMT_NV24v4l2_fourcc('N', 'V', '2', '4') /* 24  Y/CbCr 
> 4:4:4  */
>  #define V4L2_PIX_FMT_NV42v4l2_fourcc('N', 'V', '4', '2') /* 24  Y/CrCb 
> 4:4:4  */
> +#define V4L2_PIX_FMT_XV20v4l2_fourcc('X', 'V', '2', '0') /* 32  XY/UV 
> 4:2:2 10-bit */
> +#define V4L2_PIX_FMT_XV15v4l2_fourcc('X', 'V', '1', '5') /* 32  XY/UV 
> 4:2:0 10-bit */
> 
>  /* two non contiguous planes - one Y, one Cr + Cb interleaved  */
>  #define V4L2_PIX_FMT_NV12M   v4l2_fourcc('N', 'M', '1', '2') /* 12  Y/CbCr 
> 4:2:0  */
> @@ -558,6 +567,8 @@ struct v4l2_pix_format {
>  #define V4L2_PIX_FMT_NV16M   v4l2_fourcc('N', 'M', '1', '6') /* 16  Y/CbCr 
> 4:2:2  */
>  #define V4L2_PIX_FMT_NV61M   v4l2_fourcc('N', 'M', '6', '1') /* 16  Y/CrCb 
> 4:2:2  */
>  #define V4L2_PIX_FMT_NV12MT  v4l2_fourcc('T', 'M', '1', '2') /* 12  Y/CbCr 
> 4:2:0 64x32 macroblocks */
> +#define V4L2_PIX_FMT_XV20M   v4l2_fourcc('X', 'M', '2', '0') /* 32  XY/UV 
> 4:2:2 10-bit */
> +#define V4L2_PIX_FMT_XV15M   v4l2_fourcc('X', 'M', '1', '5') /* 32  XY/UV 
> 4:2:0 10-bit */
>  #define V4L2_PIX_FMT_NV12MT_16X16 v4l2_fourcc('V', 'M', '1', '2') /* 12  
> Y/CbCr 4:2:0 16x16 macroblocks */
> 
>  /* three planes - Y Cb, Cr */
> --
> 2.7.4
> 

Re: [RFC PATCH 6/8] v4l2: document the request API interface

2018-01-26 Thread Nicolas Dufresne
On Friday, January 26, 2018 at 12:40 -0800, Randy Dunlap wrote:
> > +Request API
> > +===
> > +
> > +The Request API has been designed to allow V4L2 to deal with
> > requirements of
> > +modern IPs (stateless codecs, MIPI cameras, ...) and APIs (Android
> > Codec v2).
> 
> Hi,
> Just a quick question:  What are IPs?
> 
> Not Internet Protocols. Hopefully not Intellectual Properties.
> Maybe Intelligent Processors?

It stands for "Intellectual Property". Used like this, I believe it's
a bit of slang. It's also slightly pejorative, as we often assume that
self-contained "IP cores" are proprietary black boxes. But considering
all the security issues that have been found in these black boxes'
firmware, it's a fair criticism.

Nicolas

p.s. I'd propose to rephrase this in later version


Re: iMX6q/coda encoder failures with ffmpeg/gstreamer m2m encoders

2018-01-12 Thread Nicolas Dufresne
On Tuesday, December 19, 2017 at 13:38 +0100, Neil Armstrong wrote:
> > 
> > The coda driver does not allow S_FMT anymore, as soon as the
> > buffers are
> > allocated with REQBUFS:
> > 
> > https://bugzilla.gnome.org/show_bug.cgi?id=791338
> > 
> > regards
> > Philipp
> > 
> 
> Thanks Philipp,
> 
> It solves the gstreamer encoding.

Just to let you know that a fix, though slightly different, was merged
into the master branch. Let us know if you have any further issues.

regards,
Nicolas


Re: MT9M131 on I.MX6DL CSI color issue

2018-01-12 Thread Nicolas Dufresne
On Friday, January 12, 2018 at 10:58 +0100, Anatolij Gustschin wrote:
> On Fri, 12 Jan 2018 01:16:03 +0100
> Florian Boor florian.b...@kernelconcepts.de wrote:
> ...
> > Basically it works pretty well apart from the really strange
> > colors. I guess its
> > some YUV vs. RGB issue or similar. Here [1] is an example generated
> > with the
> > following command.
> > 
> > gst-launch v4l2src device=/dev/video4 num-buffers=1 ! jpegenc !
> > filesink
> > location=capture1.jpeg
> > 
> > Apart from the colors everything is fine.
> > I'm pretty sure I have not seen such an effect before - what might
> > be wrong here?
> 
> You need conversion to RGB before JPEG encoding. Try with
> 
>  gst-launch v4l2src device=/dev/video4 num-buffers=1 ! \
> videoparse format=5 width=1280 height=1024 framerate=25/1
> ! \
> jpegenc ! filesink location=capture1.jpeg
> 
> For "format" codes see gst-inspect-1.0 videoparse.

A properly written driver should never permit this.

Nicolas


Re: [PATCH v7 0/6] V4L2 Explicit Synchronization

2018-01-10 Thread Nicolas Dufresne
On Wednesday, January 10, 2018 at 14:07 -0200, Gustavo Padovan wrote:
> v7 bring a fix for a crash when not using fences and a uAPI fix.
> I've done a bit more of testing on it and also measured some
> performance. On a intel laptop a DRM<->V4L2 pipeline with fences is
> runnning twice as faster than the same pipeline with no fences.

What do you mean by twice as fast here?

Nicolas

signature.asc
Description: This is a digitally signed message part


Re: [RFC PATCH 0/9] media: base request API support

2017-12-15 Thread Nicolas Dufresne
On Friday, December 15, 2017 at 16:56 +0900, Alexandre Courbot wrote:
> Here is a new attempt at the request API, following the UAPI we agreed on in
> Prague. Hopefully this can be used as the basis to move forward.
> 
> This series only introduces the very basics of how requests work: allocate a
> request, queue buffers to it, queue the request itself, wait for it to 
> complete,
> reuse it. It does *not* yet use Hans' work with controls setting. I have
> preferred to submit it this way for now as it allows us to concentrate on the
> basic request/buffer flow, which was harder to get properly than I initially
> thought. I still have a gut feeling that it can be improved, with less 
> back-and-
> forth into drivers.
> 
> Plugging in controls support should not be too hard a task (basically just 
> apply
> the saved controls when the request starts), and I am looking at it now.
> 
> The resulting vim2m driver can be successfully used with requests, and my 
> tests
> so far have been successful.
> 
> There are still some rougher edges:
> 
> * locking is currently quite coarse-grained
> * too many #ifdef CONFIG_MEDIA_CONTROLLER in the code, as the request API
>   depends on it - I plan to craft the headers so that it becomes unnecessary.
>   As it is, some of the code will probably not even compile if
>   CONFIG_MEDIA_CONTROLLER is not set

Would it be possible to explain the relation between requests and the
media controller? Why couldn't requests be created from video devices?

> 
> But all in all I think the request flow should be clear and easy to review, 
> and
> the possibility of custom queue and entity support implementations should give
> us the flexibility we need to support more specific use-cases (I expect the
> generic implementations to be sufficient most of the time though).
> 
> A very simple test program exercising this API is available here (don't forget
> to adapt the /dev/media0 hardcoding):
> https://gist.github.com/Gnurou/dbc3776ed97ea7d4ce6041ea15eb0438
> 
> Looking forward to your feedback and comments!
> 
> Alexandre Courbot (8):
>   media: add request API core and UAPI
>   media: request: add generic queue
>   media: request: add generic entity ops
>   media: vb2: add support for requests
>   media: vb2: add support for requests in QBUF ioctl
>   media: v4l2-mem2mem: add request support
>   media: vim2m: add media device
>   media: vim2m: add request support
> 
> Hans Verkuil (1):
>   videodev2.h: Add request field to v4l2_buffer
> 
>  drivers/media/Makefile|   4 +-
>  drivers/media/media-device.c  |   6 +
>  drivers/media/media-request-entity-generic.c  |  56 
>  drivers/media/media-request-queue-generic.c   | 150 ++
>  drivers/media/media-request.c | 390 
> ++
>  drivers/media/platform/vim2m.c|  46 +++
>  drivers/media/usb/cpia2/cpia2_v4l.c   |   2 +-
>  drivers/media/v4l2-core/v4l2-compat-ioctl32.c |   7 +-
>  drivers/media/v4l2-core/v4l2-ioctl.c  |  99 ++-
>  drivers/media/v4l2-core/v4l2-mem2mem.c|  34 +++
>  drivers/media/v4l2-core/videobuf2-core.c  |  59 +++-
>  drivers/media/v4l2-core/videobuf2-v4l2.c  |  32 ++-
>  include/media/media-device.h  |   3 +
>  include/media/media-entity.h  |   6 +
>  include/media/media-request.h | 282 +++
>  include/media/v4l2-mem2mem.h  |  19 ++
>  include/media/videobuf2-core.h|  25 +-
>  include/media/videobuf2-v4l2.h|   2 +
>  include/uapi/linux/media.h|  11 +
>  include/uapi/linux/videodev2.h|   3 +-
>  20 files changed, 1216 insertions(+), 20 deletions(-)
>  create mode 100644 drivers/media/media-request-entity-generic.c
>  create mode 100644 drivers/media/media-request-queue-generic.c
>  create mode 100644 drivers/media/media-request.c
>  create mode 100644 include/media/media-request.h
> 



Re: [RFC PATCH 0/9] media: base request API support

2017-12-15 Thread Nicolas Dufresne
On Friday, December 15, 2017 at 16:56 +0900, Alexandre Courbot wrote:
> Here is a new attempt at the request API, following the UAPI we agreed on in
> Prague. Hopefully this can be used as the basis to move forward.
> 
> This series only introduces the very basics of how requests work: allocate a
> request, queue buffers to it, queue the request itself, wait for it to 
> complete,
> reuse it. It does *not* yet use Hans' work with controls setting. I have
> preferred to submit it this way for now as it allows us to concentrate on the
> basic request/buffer flow, which was harder to get properly than I initially
> thought. I still have a gut feeling that it can be improved, with less 
> back-and-
> forth into drivers.
> 
> Plugging in controls support should not be too hard a task (basically just 
> apply
> the saved controls when the request starts), and I am looking at it now.
> 
> The resulting vim2m driver can be successfully used with requests, and my 
> tests
> so far have been successful.
> 
> There are still some rougher edges:
> 
> * locking is currently quite coarse-grained
> * too many #ifdef CONFIG_MEDIA_CONTROLLER in the code, as the request API
>   depends on it - I plan to craft the headers so that it becomes unnecessary.
>   As it is, some of the code will probably not even compile if
>   CONFIG_MEDIA_CONTROLLER is not set
> 
> But all in all I think the request flow should be clear and easy to review, 
> and
> the possibility of custom queue and entity support implementations should give
> us the flexibility we need to support more specific use-cases (I expect the
> generic implementations to be sufficient most of the time though).
> 
> A very simple test program exercising this API is available here (don't forget
> to adapt the /dev/media0 hardcoding):
> https://gist.github.com/Gnurou/dbc3776ed97ea7d4ce6041ea15eb0438

It looks like the example uses Hans' control work you just mentioned.
Notably, it uses the v4l2_ext_controls ctrls.request field.

> 
> Looking forward to your feedback and comments!
> 
> Alexandre Courbot (8):
>   media: add request API core and UAPI
>   media: request: add generic queue
>   media: request: add generic entity ops
>   media: vb2: add support for requests
>   media: vb2: add support for requests in QBUF ioctl
>   media: v4l2-mem2mem: add request support
>   media: vim2m: add media device
>   media: vim2m: add request support
> 
> Hans Verkuil (1):
>   videodev2.h: Add request field to v4l2_buffer
> 
>  drivers/media/Makefile|   4 +-
>  drivers/media/media-device.c  |   6 +
>  drivers/media/media-request-entity-generic.c  |  56 
>  drivers/media/media-request-queue-generic.c   | 150 ++
>  drivers/media/media-request.c | 390 
> ++
>  drivers/media/platform/vim2m.c|  46 +++
>  drivers/media/usb/cpia2/cpia2_v4l.c   |   2 +-
>  drivers/media/v4l2-core/v4l2-compat-ioctl32.c |   7 +-
>  drivers/media/v4l2-core/v4l2-ioctl.c  |  99 ++-
>  drivers/media/v4l2-core/v4l2-mem2mem.c|  34 +++
>  drivers/media/v4l2-core/videobuf2-core.c  |  59 +++-
>  drivers/media/v4l2-core/videobuf2-v4l2.c  |  32 ++-
>  include/media/media-device.h  |   3 +
>  include/media/media-entity.h  |   6 +
>  include/media/media-request.h | 282 +++
>  include/media/v4l2-mem2mem.h  |  19 ++
>  include/media/videobuf2-core.h|  25 +-
>  include/media/videobuf2-v4l2.h|   2 +
>  include/uapi/linux/media.h|  11 +
>  include/uapi/linux/videodev2.h|   3 +-
>  20 files changed, 1216 insertions(+), 20 deletions(-)
>  create mode 100644 drivers/media/media-request-entity-generic.c
>  create mode 100644 drivers/media/media-request-queue-generic.c
>  create mode 100644 drivers/media/media-request.c
>  create mode 100644 include/media/media-request.h
> 



Re: Kernel Oopses from v4l_enum_fmt

2017-12-13 Thread Nicolas Dufresne
On Wednesday, December 13, 2017 at 22:33 +0100, Oleksandr Ostrenko wrote:
> Dear all,
> 
> There is an issue in v4l_enum_fmt leading to kernel panic under
> certain 
> circumstance. It happens while I try to capture video from my TV
> tuner.
> 
> When I connect this USB TV tuner (WinTV HVR-1900) it gets recognized 
> just fine. However, whenever I try to capture a video from the
> device, 
> it hangs the terminal and I end up with a lot of "Unknown
> pixelformat 
> 0x" errors from v4l_enum_fmt in dmesg that eventually lead
> to 
> kernel panic on a machine with Linux Mint. On another machine with 
> openSUSE it does not hang but just keeps producing the error message 
> below until I stop the video acquisition. I have already tried
> several 
> kernel versions (4.4, 4.8, 4.14) and two different distributions
> (Mint, 
> openSUSE) but to no avail.
> 
> Can somebody give me a hint on debugging this issue?
> Below are sample outputs of lsusb and dmesg.
> 
> Thanks,
> Oleksandr
> 
> lsusb
> 
> Bus 001 Device 002: ID 8087:8001 Intel Corp.
> Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
> Bus 003 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
> Bus 002 Device 002: ID 8087:0a2a Intel Corp.
> Bus 002 Device 005: ID 2040:7300 Hauppauge
> Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
> 
> Relevant dmesg
> 
> [  515.920080] usb 2-3: new high-speed USB device number 4 using
> xhci_hcd
> [  516.072041] usb 2-3: New USB device found, idVendor=2040,
> idProduct=7300
> [  516.072045] usb 2-3: New USB device strings: Mfr=1, Product=2, 
> SerialNumber=3
> [  516.072047] usb 2-3: Product: WinTV
> [  516.072049] usb 2-3: Manufacturer: Hauppauge
> [  516.072051] usb 2-3: SerialNumber: 7300-00-F04BADA0
> [  516.072474] pvrusb2: Hardware description: WinTV HVR-1900 Model
> 73xxx
> [  517.089290] pvrusb2: Device microcontroller firmware (re)loaded;
> it 
> should now reset and reconnect.
> [  517.121228] usb 2-3: USB disconnect, device number 4
> [  517.121436] pvrusb2: Device being rendered inoperable
> [  518.908091] usb 2-3: new high-speed USB device number 5 using
> xhci_hcd
> [  519.065592] usb 2-3: New USB device found, idVendor=2040,
> idProduct=7300
> [  519.065597] usb 2-3: New USB device strings: Mfr=1, Product=2, 
> SerialNumber=3
> [  519.065600] usb 2-3: Product: WinTV
> [  519.065602] usb 2-3: Manufacturer: Hauppauge
> [  519.065605] usb 2-3: SerialNumber: 7300-00-F04BADA0
> [  519.066862] pvrusb2: Hardware description: WinTV HVR-1900 Model
> 73xxx
> [  519.098815] pvrusb2: Binding ir_rx_z8f0811_haup to i2c address
> 0x71.
> [  519.098872] pvrusb2: Binding ir_tx_z8f0811_haup to i2c address
> 0x70.
> [  519.131651] cx25840 6-0044: cx25843-24 found @ 0x88 (pvrusb2_a)
> [  519.133234] lirc_dev: IR Remote Control driver registered, major
> 241
> [  519.134192] lirc_zilog: module is from the staging directory, the 
> quality is unknown, you have been warned.
> [  519.134194] lirc_zilog: module is from the staging directory, the 
> quality is unknown, you have been warned.
> [  519.134564] Zilog/Hauppauge IR driver initializing
> [  519.135628] probing IR Rx on pvrusb2_a (i2c-6)
> [  519.135674] probe of IR Rx on pvrusb2_a (i2c-6) done. Waiting on
> IR Tx.
> [  519.135678] i2c i2c-6: probe of IR Rx on pvrusb2_a (i2c-6) done
> [  519.135706] probing IR Tx on pvrusb2_a (i2c-6)
> [  519.135728] i2c i2c-6: Direct firmware load for haup-ir-
> blaster.bin 
> failed with error -2
> [  519.135730] i2c i2c-6: firmware haup-ir-blaster.bin not available
> (-2)
> [  519.135799] i2c i2c-6: lirc_dev: driver lirc_zilog registered at 
> minor = 0
> [  519.135800] i2c i2c-6: IR unit on pvrusb2_a (i2c-6) registered as 
> lirc0 and ready
> [  519.135802] i2c i2c-6: probe of IR Tx on pvrusb2_a (i2c-6) done
> [  519.135826] initialization complete
> [  519.140759] pvrusb2: Attached sub-driver cx25840
> [  519.147644] tuner: 6-0042: Tuner -1 found with type(s) Radio TV.
> [  519.147667] pvrusb2: Attached sub-driver tuner
> [  521.029446] cx25840 6-0044: loaded v4l-cx25840.fw firmware (14264
> bytes)
> [  521.124582] tveeprom: Hauppauge model 73219, rev D1E9, serial#
> 4031491488
> [  521.124586] tveeprom: MAC address is 00:0d:fe:4b:ad:a0
> [  521.124588] tveeprom: tuner model is Philips 18271_8295 (idx 149, 
> type 54)
> [  521.124591] tveeprom: TV standards PAL(B/G) PAL(I) SECAM(L/L') 
> PAL(D/D1/K) ATSC/DVB Digital (eeprom 0xf4)
> [  521.124593] tveeprom: audio processor is CX25843 (idx 37)
> [  521.124594] tveeprom: decoder processor is CX25843 (idx 30)
> [  521.124596] tveeprom: has radio, has IR receiver, has IR
> transmitter
> [  521.124606] pvrusb2: Supported video standard(s) reported
> available 
> in hardware: PAL-B/B1/D/D1/G/H/I/K;SECAM-B/D/G/H/K/K
> [  521.124617] pvrusb2: Device initialization completed successfully.
> [  521.124811] pvrusb2: registered device video0 [mpeg]
> [  521.124819] dvbdev: DVB: registering new adapter (pvrusb2-dvb)
> [  523.039178] cx25840 6-0044: loaded 

Re: [PATCH v4 3/5] staging: Introduce NVIDIA Tegra video decoder driver

2017-12-10 Thread Nicolas Dufresne
On Sunday, December 10, 2017 at 21:56 +0300, Dmitry Osipenko wrote:
> > I've CC-ed Maxime and Giulio as well: they are looking into adding support 
> > for
> > the stateless allwinner codec based on this code as well. There may well be
> > opportunities for you to work together, esp. on the userspace side. Note 
> > that
> > Rockchip has the same issue, they too have a stateless HW codec.
> 
> IIUC, we will have to define video decoder parameters in V4L API and then 
> make a
> V4L driver / userspace prototype (ffmpeg for example) that will use the 
> requests
> API for video decoding in order to upstream the requests API. Does it sound 
> good?

Chromium/Chrome already have support for that type of decoder in their
tree. In theory, it should just work.

Nicolas


Re: [Patch v6 05/12] [media] videodev2.h: Add v4l2 definition for HEVC

2017-12-08 Thread Nicolas Dufresne
On Friday, December 8, 2017 at 14:38 +0530, Smitha T Murthy wrote:
> Add V4L2 definition for HEVC compressed format
> 
> Signed-off-by: Smitha T Murthy 
> Reviewed-by: Andrzej Hajda 
> Reviewed-by: Stanimir Varbanov 
> Acked-by: Hans Verkuil 
> ---
>  include/uapi/linux/videodev2.h | 1 +
>  1 file changed, 1 insertion(+)
> 
> diff --git a/include/uapi/linux/videodev2.h b/include/uapi/linux/videodev2.h
> index 185d6a0..bd9b5d5 100644
> --- a/include/uapi/linux/videodev2.h
> +++ b/include/uapi/linux/videodev2.h
> @@ -634,6 +634,7 @@ struct v4l2_pix_format {
>  #define V4L2_PIX_FMT_VC1_ANNEX_L v4l2_fourcc('V', 'C', '1', 'L') /* SMPTE 
> 421M Annex L compliant stream */
>  #define V4L2_PIX_FMT_VP8  v4l2_fourcc('V', 'P', '8', '0') /* VP8 */
>  #define V4L2_PIX_FMT_VP9  v4l2_fourcc('V', 'P', '9', '0') /* VP9 */
> +#define V4L2_PIX_FMT_HEVC v4l2_fourcc('H', 'E', 'V', 'C') /* HEVC aka 
> H.265 */

Wouldn't it be more consistent to call it V4L2_PIX_FMT_H265, as we
used H264 for the previous generation, or is there a formal rationale?
Also, this is a byte-stream, right? With start codes?

>  
>  /*  Vendor-specific formats   */
>  #define V4L2_PIX_FMT_CPIA1v4l2_fourcc('C', 'P', 'I', 'A') /* cpia1 YUV */



Re: [linux-sunxi] Cedrus driver

2017-11-16 Thread Nicolas Dufresne
On Thursday, November 16, 2017 at 12:02 +0100, Maxime Ripard wrote:
> Assuming that the request API is in, we'd need to:
>   - Finish the MPEG4 support
>   - Work on more useful codecs (H264 comes to my mind)

For which we will have to review the tables and make sure they match
the spec (the easy part). But as an example, that branch uses a table
that merges the MPEG-4 VOP and VOP Short Header. We need to make sure
this does not pose problems, or split it up. On top of that, the ST
and Rockchip teams should give some help and sync these tables on
their side. We also need to consider decoders like the Tegra 2. For
H264, they don't need frame parsing, just the PPS/SPS data (which
might simply be parsed in the driver, like CODA does?). There are
other modes of operation, especially H264/HEVC low latency, where the
decoder will be similar but will accept and process slices right away,
without waiting for the full frame.

We also need some documentation, to be able to tell the GStreamer and
FFmpeg teams how to detect and handle these decoders. I doubt the
proposed libv4l2 approach will be used by these two projects, since
they already have their own parsers and would rather not parse twice.
As an example, we need to document that V4L2_PIX_FMT_MPEG2_FRAME
implies using the Request API and specific CIDs. We should probably
also ping the Chrome devs, who probably have a couple of pending
branches around this.

regards,
Nicolas






[PATCH v2] uvc: Add D3DFMT_L8 support

2017-11-06 Thread Nicolas Dufresne
The Microsoft HoloLens UVC sensor uses D3DFMT instead of FOURCC when
exposing formats. This adds support for D3DFMT_L8 as exposed by
the Acer Windows Mixed Reality Headset.

Signed-off-by: Nicolas Dufresne <nicolas.dufre...@collabora.com>
---
 drivers/media/usb/uvc/uvc_driver.c | 5 +
 drivers/media/usb/uvc/uvcvideo.h   | 5 +
 2 files changed, 10 insertions(+)

diff --git a/drivers/media/usb/uvc/uvc_driver.c 
b/drivers/media/usb/uvc/uvc_driver.c
index 6d22b22cb35b..113130b6b2d6 100644
--- a/drivers/media/usb/uvc/uvc_driver.c
+++ b/drivers/media/usb/uvc/uvc_driver.c
@@ -94,6 +94,11 @@ static struct uvc_format_desc uvc_fmts[] = {
.fcc= V4L2_PIX_FMT_GREY,
},
{
+   .name   = "Greyscale 8-bit (D3DFMT_L8)",
+   .guid   = UVC_GUID_FORMAT_D3DFMT_L8,
+   .fcc= V4L2_PIX_FMT_GREY,
+   },
+   {
.name   = "Greyscale 10-bit (Y10 )",
.guid   = UVC_GUID_FORMAT_Y10,
.fcc= V4L2_PIX_FMT_Y10,
diff --git a/drivers/media/usb/uvc/uvcvideo.h b/drivers/media/usb/uvc/uvcvideo.h
index 34c7ee6cc9e5..fbc1f433ff05 100644
--- a/drivers/media/usb/uvc/uvcvideo.h
+++ b/drivers/media/usb/uvc/uvcvideo.h
@@ -153,6 +153,11 @@
{ 'I',  'N',  'V',  'I', 0xdb, 0x57, 0x49, 0x5e, \
 0x8e, 0x3f, 0xf4, 0x79, 0x53, 0x2b, 0x94, 0x6f}
 
+#define UVC_GUID_FORMAT_D3DFMT_L8 \
+   {0x32, 0x00, 0x00, 0x00, 0x00, 0x00, 0x10, 0x00, \
+0x80, 0x00, 0x00, 0xaa, 0x00, 0x38, 0x9b, 0x71}
+
+
 /* 
  * Driver specific constants.
  */
-- 
2.13.6



Re: [PATCH v2] media: s5p-mfc: Add support for V4L2_MEMORY_DMABUF type

2017-11-06 Thread Nicolas Dufresne
On Monday, November 6, 2017 at 10:28 +0100, Marek Szyprowski wrote:
> Hi Nicolas,
> 
> On 2017-11-03 14:45, Nicolas Dufresne wrote:
> > On Friday, November 3, 2017 at 09:11 +0100, Marek Szyprowski wrote:
> > > MFC driver supports only MMAP operation mode mainly due to the hardware
> > > restrictions of the addresses of the DMA buffers (MFC v5 hardware can
> > > access buffers only in 128MiB memory region starting from the base address
> > > of its firmware). When IOMMU is available, this requirement is easily
> > > fulfilled even for the buffers located anywhere in the memory - typically
> > > by mapping them in the DMA address space as close as possible to the
> > > firmware. Later hardware revisions don't have this limitations at all.
> > > 
> > > The second limitation of the MFC hardware related to the memory buffers
> > > is constant buffer address. Once the hardware has been initialized for
> > > operation on given buffer set, the addresses of the buffers cannot be
> > > changed.
> > > 
> > > With the above assumptions, a limited support for USERPTR and DMABUF
> > > operation modes can be added. The main requirement is to have all buffers
> > > known when starting hardware. This has been achieved by postponing
> > > hardware initialization once all the DMABUF or USERPTR buffers have been
> > > queued for the first time. Once then, buffers cannot be modified to point
> > > to other memory area.
> > 
> > I am concerned about enabling this support with existing userspace.
> > Userspace application will be left with some driver with this
> > limitation and other drivers without. I think it is harmful to enable
> > that feature without providing userspace the ability to know.
> > 
> > This is specially conflicting with let's say UVC driver providing
> > buffers, since UVC driver implementing CREATE_BUFS ioctl. So even if
> > userspace start making an effort to maintain the same DMABuf for the
> > same buffer index, if a new buffer is created, we won't be able to use
> > it.
> 
> Sorry, but I don't get this as an 'issue'. The typical use scenario is to
> configure buffer queue and start streaming. Basically ReqBufs, stream on and
> a sequence of bufq/bufdq. This is handled without any problem by MFC driver
> with this patch. After releasing a queue with reqbufs(0), one can use
> different set of buffers.

In real life, you often have the capture code decorrelated from the
encoder code. At least, that's the case in GStreamer. The encoder has
no information about how many buffers were pre-allocated by, let's
say, the capture driver. As a side effect, we cannot make sure the
importation queue is of the same size (number of buffers), especially
with the UVC driver, which allows allocating more buffers at run-time.
This is used in real life to compensate for the additional latency
that gets added when a pipeline topology is changed (at run-time). The
only workaround I had in mind would be to always prepare the queue
with the maximum (32) and use it as a cache size, but for now, this
is not how the deployed userspace works, unfortunately.

Your reqbufs(0) technique causes a break in the stream (probably a new
keyframe), as you are forced to STREAMOFF. This is often unwanted, and
it may create a time discontinuity if the allocation takes time.

> 
> What is the point of changing the buffers during the streaming? IMHO it was
> one of the biggest pathology of the V4L2 USERPTR API that the buffers 
> were in
> fact 'created' on the first queue operation. By creating I mean creating all
> the kernel all needed structures and mappings between the real memory (user
> ptr value) and the buffer index. The result of this was userspace, which 
> don't
> use buffer indices and always queues buffers with index = 0, what means that
> kernel has to reacquire direct access to each buffer every single frame. 
> That
> is highly inefficient approach. DMABUF operation mode inherited this 
> drawback.

This is in fact an implementation detail of the caching in the kernel
framework. There is nothing that prevents the framework from
maintaining a validation cache that isn't bound to the queue size.
DMABUF simply makes buffer identification easier and safer. I bet it
is difficult and will stay like this, so it should at least be
documented.

I am completely aware of the inefficiency of the GStreamer behaviour,
though it remains much faster in many cases than copying raw frames
with the CPU. You can complain as much as you want about what this
userspace is doing, but it has been working better than nothing for
many users. It might not be like this forever; someone could optimize
this by signalling the missing information or with the 32 buffe

[PATCH] uvc: Add D3DFMT_L8 support

2017-11-03 Thread Nicolas Dufresne
Microsoft HoloLens UVC sensors use D3DFMT instead of FOURCC when
exposing formats. This adds support for D3DFMT_L8 as exposed by
the Acer Windows Mixed Reality Headset.

Signed-off-by: Nicolas Dufresne <nicolas.dufre...@collabora.com>
---
 drivers/media/usb/uvc/uvc_driver.c | 5 +++++
 drivers/media/usb/uvc/uvcvideo.h   | 5 +++++
 2 files changed, 10 insertions(+)

diff --git a/drivers/media/usb/uvc/uvc_driver.c 
b/drivers/media/usb/uvc/uvc_driver.c
index 6d22b22cb35b..56f70851f88b 100644
--- a/drivers/media/usb/uvc/uvc_driver.c
+++ b/drivers/media/usb/uvc/uvc_driver.c
@@ -203,6 +203,11 @@ static struct uvc_format_desc uvc_fmts[] = {
.guid   = UVC_GUID_FORMAT_INZI,
.fcc= V4L2_PIX_FMT_INZI,
},
+   {
+   .name   = "Greyscale 8-bit (D3DFMT_L8)",
+   .guid   = UVC_GUID_FORMAT_D3DFMT_L8,
+   .fcc= V4L2_PIX_FMT_GREY,
+   },
 };
 
 /* 
diff --git a/drivers/media/usb/uvc/uvcvideo.h b/drivers/media/usb/uvc/uvcvideo.h
index 34c7ee6cc9e5..fbc1f433ff05 100644
--- a/drivers/media/usb/uvc/uvcvideo.h
+++ b/drivers/media/usb/uvc/uvcvideo.h
@@ -153,6 +153,11 @@
{ 'I',  'N',  'V',  'I', 0xdb, 0x57, 0x49, 0x5e, \
 0x8e, 0x3f, 0xf4, 0x79, 0x53, 0x2b, 0x94, 0x6f}
 
+#define UVC_GUID_FORMAT_D3DFMT_L8 \
+   {0x32, 0x00, 0x00, 0x00, 0x00, 0x00, 0x10, 0x00, \
+0x80, 0x00, 0x00, 0xaa, 0x00, 0x38, 0x9b, 0x71}
+
+
 /* 
  * Driver specific constants.
  */
-- 
2.13.6
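For reference, the GUID added in the patch above follows the standard D3DFMT/MEDIASUBTYPE pattern: the 32-bit format code stored little-endian in the first four bytes (D3DFMT_L8 is 50, i.e. 0x32), followed by the fixed suffix 00 00 10 00 80 00 00 aa 00 38 9b 71. A hedged sketch constructing such a GUID (`d3dfmt_guid` is a made-up helper, not part of the driver):

```c
#include <stdint.h>
#include <string.h>

/* Build a D3DFMT-style GUID: little-endian format code + fixed suffix.
 * Assumes the usual MEDIASUBTYPE GUID convention. */
static void d3dfmt_guid(uint32_t code, uint8_t guid[16])
{
	static const uint8_t suffix[12] = {
		0x00, 0x00, 0x10, 0x00,
		0x80, 0x00, 0x00, 0xaa, 0x00, 0x38, 0x9b, 0x71,
	};

	guid[0] = code & 0xff;		/* little-endian format code */
	guid[1] = (code >> 8) & 0xff;
	guid[2] = (code >> 16) & 0xff;
	guid[3] = (code >> 24) & 0xff;
	memcpy(&guid[4], suffix, sizeof(suffix));
}
```

Plugging in code 50 reproduces byte-for-byte the `UVC_GUID_FORMAT_D3DFMT_L8` initializer added to uvcvideo.h above.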



Re: [PATCH v2] media: s5p-mfc: Add support for V4L2_MEMORY_DMABUF type

2017-11-03 Thread Nicolas Dufresne
Le vendredi 03 novembre 2017 à 09:11 +0100, Marek Szyprowski a écrit :
> MFC driver supports only MMAP operation mode mainly due to the hardware
> restrictions of the addresses of the DMA buffers (MFC v5 hardware can
> access buffers only in 128MiB memory region starting from the base address
> of its firmware). When IOMMU is available, this requirement is easily
> fulfilled even for the buffers located anywhere in the memory - typically
> by mapping them in the DMA address space as close as possible to the
> firmware. Later hardware revisions don't have this limitations at all.
> 
> The second limitation of the MFC hardware related to the memory buffers
> is constant buffer address. Once the hardware has been initialized for
> operation on given buffer set, the addresses of the buffers cannot be
> changed.
> 
> With the above assumptions, a limited support for USERPTR and DMABUF
> operation modes can be added. The main requirement is to have all buffers
> known when starting hardware. This has been achieved by postponing
> hardware initialization once all the DMABUF or USERPTR buffers have been
> queued for the first time. Once then, buffers cannot be modified to point
> to other memory area.

I am concerned about enabling this support with existing userspace.
Userspace applications will be left with some drivers having this
limitation and other drivers without it. I think it is harmful to
enable that feature without giving userspace the ability to know.

This is especially conflicting with, let's say, the UVC driver
providing buffers, since the UVC driver implements the CREATE_BUFS
ioctl. So even if userspace starts making an effort to maintain the
same DMABuf for the same buffer index, if a new buffer is created, we
won't be able to use it.

> 
> This patch also removes unconditional USERPTR operation mode from encoder
> video node, because it doesn't work with v5 MFC hardware without IOMMU
> being enabled.
> 
> In case of MFC v5 a bidirectional queue flag has to be enabled as a
> workaround of the strange hardware behavior - MFC performs a few writes
> to source data during the operation.

Do you have more information about this? It is quite terrible, since
if you enable buffer importation, the buffer might be multiplexed
across multiple encoder instances. That is another way this feature can
be harmful to existing userspace.

> 
> Signed-off-by: Seung-Woo Kim 
> [mszyprow: adapted to v4.14 code base, rewrote and extended commit message,
>  added checks for changing buffer addresses, added bidirectional queue
>  flags and comments]
> Signed-off-by: Marek Szyprowski 
> ---
> v2:
> - fixed copy/paste bug, which broke encoding support (thanks to Marian
>   Mihailescu for reporting it)
> - added checks for changing buffers DMA addresses
> - added bidirectional queue flags
> 
> v1:
> - initial version
> ---
>  drivers/media/platform/s5p-mfc/s5p_mfc.c |  23 +-
>  drivers/media/platform/s5p-mfc/s5p_mfc_dec.c | 111 
> +++
>  drivers/media/platform/s5p-mfc/s5p_mfc_enc.c |  64 +++
>  3 files changed, 147 insertions(+), 51 deletions(-)
> 
> diff --git a/drivers/media/platform/s5p-mfc/s5p_mfc.c 
> b/drivers/media/platform/s5p-mfc/s5p_mfc.c
> index 1839a86cc2a5..f1ab8d198158 100644
> --- a/drivers/media/platform/s5p-mfc/s5p_mfc.c
> +++ b/drivers/media/platform/s5p-mfc/s5p_mfc.c
> @@ -754,6 +754,7 @@ static int s5p_mfc_open(struct file *file)
>   struct s5p_mfc_dev *dev = video_drvdata(file);
>   struct s5p_mfc_ctx *ctx = NULL;
>   struct vb2_queue *q;
> + unsigned int io_modes;
>   int ret = 0;
>  
>   mfc_debug_enter();
> @@ -839,16 +840,25 @@ static int s5p_mfc_open(struct file *file)
>   if (ret)
>   goto err_init_hw;
>   }
> +
> + io_modes = VB2_MMAP;
> + if (exynos_is_iommu_available(&dev->plat_dev->dev) || !IS_TWOPORT(dev))
> + io_modes |= VB2_USERPTR | VB2_DMABUF;
> +
>   /* Init videobuf2 queue for CAPTURE */
> q = &ctx->vq_dst;
>   q->type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE;
> + q->io_modes = io_modes;
> + /*
> +  * Destination buffers are always bidirectional, they are used as
> +  * reference data, which requires READ access
> +  */
> + q->bidirectional = true;
> q->drv_priv = &ctx->fh;
> q->lock = &dev->mfc_mutex;
>   if (vdev == dev->vfd_dec) {
> - q->io_modes = VB2_MMAP;
>   q->ops = get_dec_queue_ops();
>   } else if (vdev == dev->vfd_enc) {
> - q->io_modes = VB2_MMAP | VB2_USERPTR;
>   q->ops = get_enc_queue_ops();
>   } else {
>   ret = -ENOENT;
> @@ -869,13 +879,18 @@ static int s5p_mfc_open(struct file *file)
>   /* Init videobuf2 queue for OUTPUT */
> q = &ctx->vq_src;
>   q->type = V4L2_BUF_TYPE_VIDEO_OUTPUT_MPLANE;
> + q->io_modes = io_modes;
> + /*
> +  * MFC v5 performs write operations on source data, so make queue
> 

Re: [PATCH] media: vb2: unify calling of set_page_dirty_lock

2017-10-17 Thread Nicolas Dufresne
Le mardi 17 octobre 2017 à 13:14 +0300, Sakari Ailus a écrit :
> On Sun, Oct 15, 2017 at 07:09:24PM -0400, Nicolas Dufresne wrote:
> > Le dimanche 15 octobre 2017 à 23:40 +0300, Sakari Ailus a écrit :
> > > Hi Nicolas,
> > > 
> > > On Tue, Oct 10, 2017 at 11:40:10AM -0400, Nicolas Dufresne wrote:
> > > > Le mardi 29 août 2017 à 14:26 +0300, Stanimir Varbanov a écrit :
> > > > > Currently videobuf2-dma-sg checks for dma direction for
> > > > > every single page and videobuf2-dc lacks any dma direction
> > > > > checks and calls set_page_dirty_lock unconditionally.
> > > > > 
> > > > > Thus unify and align the invocations of set_page_dirty_lock
> > > > > for videobuf2-dc, videobuf2-sg  memory allocators with
> > > > > videobuf2-vmalloc, i.e. the pattern used in vmalloc has been
> > > > > copied to dc and dma-sg.
> > > > 
> > > > Just before we go too far in "doing like vmalloc", I would like to
> > > > share this small video that display coherency issues when rendering
> > > > vmalloc backed DMABuf over various KMS/DRM driver. I can reproduce
> > > > this
> > > > easily with Intel and MSM display drivers using UVC or Vivid as
> > > > source.
> > > > 
> > > > The following is an HDMI capture of the following GStreamer
> > > > pipeline
> > > > running on Dragonboard 410c.
> > > > 
> > > > gst-launch-1.0 -v v4l2src device=/dev/video2 ! video/x-
> > > > raw,format=NV16,width=1280,height=720 ! kmssink
> > > > https://people.collabora.com/~nicolas/vmalloc-issue.mov
> > > > 
> > > > Feedback on this issue would be more then welcome. It's not clear
> > > > to me
> > > > who's bug is this (v4l2, drm or iommu). The software is unlikely to
> > > > be
> > > > blamed as this same pipeline works fine with non-vmalloc based
> > > > sources.
> > > 
> > > Could you elaborate this a little bit more? Which Intel CPU do you
> > > have
> > > there?
> > 
> > I have tested with Skylake and Ivy Bridge and on Dragonboard 410c
> > (Qualcomm APQ8016 SoC) (same visual artefact)
> 
> I presume kmssink draws on the display. Which GPU did you use?

In order, the GPUs are Iris Pro 580, Intel Ivy Bridge Mobile and an
Adreno (3x?). Why does it matter? I'm pretty sure the GPU is not used
on the DB410c for this use case.

regards,
Nicolas



Re: [PATCH] media: vb2: unify calling of set_page_dirty_lock

2017-10-15 Thread Nicolas Dufresne
Le dimanche 15 octobre 2017 à 23:40 +0300, Sakari Ailus a écrit :
> Hi Nicolas,
> 
> On Tue, Oct 10, 2017 at 11:40:10AM -0400, Nicolas Dufresne wrote:
> > Le mardi 29 août 2017 à 14:26 +0300, Stanimir Varbanov a écrit :
> > > Currently videobuf2-dma-sg checks for dma direction for
> > > every single page and videobuf2-dc lacks any dma direction
> > > checks and calls set_page_dirty_lock unconditionally.
> > > 
> > > Thus unify and align the invocations of set_page_dirty_lock
> > > for videobuf2-dc, videobuf2-sg  memory allocators with
> > > videobuf2-vmalloc, i.e. the pattern used in vmalloc has been
> > > copied to dc and dma-sg.
> > 
> > Just before we go too far in "doing like vmalloc", I would like to
> > share this small video that display coherency issues when rendering
> > vmalloc backed DMABuf over various KMS/DRM driver. I can reproduce
> > this
> > easily with Intel and MSM display drivers using UVC or Vivid as
> > source.
> > 
> > The following is an HDMI capture of the following GStreamer
> > pipeline
> > running on Dragonboard 410c.
> > 
> > gst-launch-1.0 -v v4l2src device=/dev/video2 ! video/x-
> > raw,format=NV16,width=1280,height=720 ! kmssink
> > https://people.collabora.com/~nicolas/vmalloc-issue.mov
> > 
> > Feedback on this issue would be more then welcome. It's not clear
> > to me
> > who's bug is this (v4l2, drm or iommu). The software is unlikely to
> > be
> > blamed as this same pipeline works fine with non-vmalloc based
> > sources.
> 
> Could you elaborate this a little bit more? Which Intel CPU do you
> have
> there?

I have tested with Skylake and Ivy Bridge and on Dragonboard 410c
(Qualcomm APQ8016 SoC) (same visual artefact)

> 
> Where are the buffers allocated for this GStreamer pipeline, is it
> v4l2src
> or another element or somewhere else?

This is from V4L2 capture driver, exported as DMABuf, drivers are UVC
and VIVID, both are using the vmalloc allocator.

Nicolas


Re: [PATCH] venus: reimplement decoder stop command

2017-10-13 Thread Nicolas Dufresne
Thanks, is the encoder stop command going to be implemented too?

Le vendredi 13 octobre 2017 à 17:13 +0300, Stanimir Varbanov a écrit :
> This addresses the wrong behavior of decoder stop command by
> rewriting it. These new implementation enqueue an empty buffer
> on the decoder input buffer queue to signal end-of-stream. The
> client should stop queuing buffers on the V4L2 Output queue
> and continue queuing/dequeuing buffers on Capture queue. This
> process will continue until the client receives a buffer with
> V4L2_BUF_FLAG_LAST flag raised, which means that this is last
> decoded buffer with data.
> 
> Signed-off-by: Stanimir Varbanov <stanimir.varba...@linaro.org>

Tested-By: Nicolas Dufresne <nicolas.dufre...@collabora.com>

> ---
>  drivers/media/platform/qcom/venus/core.h|  2 --
>  drivers/media/platform/qcom/venus/helpers.c |  7 --
>  drivers/media/platform/qcom/venus/hfi.c |  1 +
>  drivers/media/platform/qcom/venus/vdec.c| 34
> +++--
>  4 files changed, 24 insertions(+), 20 deletions(-)
> 
> diff --git a/drivers/media/platform/qcom/venus/core.h
> b/drivers/media/platform/qcom/venus/core.h
> index cba092bcb76d..a0fe80df0cbd 100644
> --- a/drivers/media/platform/qcom/venus/core.h
> +++ b/drivers/media/platform/qcom/venus/core.h
> @@ -194,7 +194,6 @@ struct venus_buffer {
>   * @fh:   a holder of v4l file handle structure
>   * @streamon_cap: stream on flag for capture queue
>   * @streamon_out: stream on flag for output queue
> - * @cmd_stop:a flag to signal encoder/decoder commands
>   * @width:   current capture width
>   * @height:  current capture height
>   * @out_width:   current output width
> @@ -258,7 +257,6 @@ struct venus_inst {
>   } controls;
>   struct v4l2_fh fh;
>   unsigned int streamon_cap, streamon_out;
> - bool cmd_stop;
>   u32 width;
>   u32 height;
>   u32 out_width;
> diff --git a/drivers/media/platform/qcom/venus/helpers.c
> b/drivers/media/platform/qcom/venus/helpers.c
> index cac429be5609..6a85dd10ecd4 100644
> --- a/drivers/media/platform/qcom/venus/helpers.c
> +++ b/drivers/media/platform/qcom/venus/helpers.c
> @@ -626,13 +626,6 @@ void venus_helper_vb2_buf_queue(struct
> vb2_buffer *vb)
>  
>   mutex_lock(&inst->lock);
>  
> - if (inst->cmd_stop) {
> - vbuf->flags |= V4L2_BUF_FLAG_LAST;
> - v4l2_m2m_buf_done(vbuf, VB2_BUF_STATE_DONE);
> - inst->cmd_stop = false;
> - goto unlock;
> - }
> -
>   v4l2_m2m_buf_queue(m2m_ctx, vbuf);
>  
>   if (!(inst->streamon_out & inst->streamon_cap))
> diff --git a/drivers/media/platform/qcom/venus/hfi.c
> b/drivers/media/platform/qcom/venus/hfi.c
> index c09490876516..ba29fd4d4984 100644
> --- a/drivers/media/platform/qcom/venus/hfi.c
> +++ b/drivers/media/platform/qcom/venus/hfi.c
> @@ -484,6 +484,7 @@ int hfi_session_process_buf(struct venus_inst
> *inst, struct hfi_frame_data *fd)
>  
>   return -EINVAL;
>  }
> +EXPORT_SYMBOL_GPL(hfi_session_process_buf);
>  
>  irqreturn_t hfi_isr_thread(int irq, void *dev_id)
>  {
> diff --git a/drivers/media/platform/qcom/venus/vdec.c
> b/drivers/media/platform/qcom/venus/vdec.c
> index da611a5eb670..c9e9576bb08a 100644
> --- a/drivers/media/platform/qcom/venus/vdec.c
> +++ b/drivers/media/platform/qcom/venus/vdec.c
> @@ -469,8 +469,14 @@ static int vdec_subscribe_event(struct v4l2_fh
> *fh,
>  static int
>  vdec_try_decoder_cmd(struct file *file, void *fh, struct
> v4l2_decoder_cmd *cmd)
>  {
> - if (cmd->cmd != V4L2_DEC_CMD_STOP)
> + switch (cmd->cmd) {
> + case V4L2_DEC_CMD_STOP:
> + if (cmd->flags & V4L2_DEC_CMD_STOP_TO_BLACK)
> + return -EINVAL;
> + break;
> + default:
>   return -EINVAL;
> + }
>  
>   return 0;
>  }
> @@ -479,6 +485,7 @@ static int
>  vdec_decoder_cmd(struct file *file, void *fh, struct
> v4l2_decoder_cmd *cmd)
>  {
>   struct venus_inst *inst = to_inst(file);
> + struct hfi_frame_data fdata = {0};
>   int ret;
>  
>   ret = vdec_try_decoder_cmd(file, fh, cmd);
> @@ -486,12 +493,23 @@ vdec_decoder_cmd(struct file *file, void *fh,
> struct v4l2_decoder_cmd *cmd)
>   return ret;
>  
>   mutex_lock(&inst->lock);
> - inst->cmd_stop = true;
> - mutex_unlock(&inst->lock);
>  
> - hfi_session_flush(inst);
> + /*
> +  * Implement V4L2_DEC_CMD_STOP by enqueue an empty buffer on
> decoder
> +  * input to signal EOS.
> +  */
> + if (!(inst->streamon_out & inst->streamon_cap))
> 

Re: [PATCH v3 1/2] staging: Introduce NVIDIA Tegra20 video decoder driver

2017-10-11 Thread Nicolas Dufresne
Le mercredi 11 octobre 2017 à 23:08 +0300, Dmitry Osipenko a écrit :
> diff --git a/drivers/staging/tegra-vde/TODO b/drivers/staging/tegra-
> vde/TODO
> new file mode 100644
> index ..e98bbc7b3c19
> --- /dev/null
> +++ b/drivers/staging/tegra-vde/TODO
> @@ -0,0 +1,5 @@
> +TODO:
> +   - Figure out how generic V4L2 API could be utilized by this
> driver,
> + implement it.
> +

That is a very interesting effort; I think it's the first time someone
is proposing an upstream driver for a Tegra platform. When I look at
tegra_vde_h264_decoder_ctx, it looks like the only thing that the HW is
not parsing is the media headers (PPS/SPS). Is that correct?

I wonder how acceptable it would be to parse these inside the driver.
It is no more complex than parsing an EDID. If that was possible,
wrapping this driver as a v4l2 mem2mem should be rather simple. As a
side effect, you'll automatically get some userspace working, notably
GStreamer and FFmpeg.

In case even parsing the headers is too much from a kernel point of
view, then I think you should have a look at the following effort. It's
a proposal based on the yet-to-be-merged Request API. Hugues also
proposes a libv4l2 adapter that makes the driver look like a normal
v4l2 m2m, hiding all the userspace parsing and table filling. This,
though, is a long-term plan to integrate state-less or parser-less
encoders into linux-media. It seems rather overkill for a state-full
driver that requires parsed headers like PPS/SPS.

https://lwn.net/Articles/720797/

regards,
Nicolas



Re: [PATCH] media: vb2: unify calling of set_page_dirty_lock

2017-10-10 Thread Nicolas Dufresne
Le mardi 29 août 2017 à 14:26 +0300, Stanimir Varbanov a écrit :
> Currently videobuf2-dma-sg checks for dma direction for
> every single page and videobuf2-dc lacks any dma direction
> checks and calls set_page_dirty_lock unconditionally.
> 
> Thus unify and align the invocations of set_page_dirty_lock
> for videobuf2-dc, videobuf2-sg  memory allocators with
> videobuf2-vmalloc, i.e. the pattern used in vmalloc has been
> copied to dc and dma-sg.

Just before we go too far in "doing like vmalloc", I would like to
share this small video that displays coherency issues when rendering
vmalloc-backed DMABufs over various KMS/DRM drivers. I can reproduce
this easily with the Intel and MSM display drivers using UVC or Vivid
as source.

The following is an HDMI capture of the following GStreamer pipeline
running on a Dragonboard 410c.

gst-launch-1.0 -v v4l2src device=/dev/video2 ! 
video/x-raw,format=NV16,width=1280,height=720 ! kmssink
https://people.collabora.com/~nicolas/vmalloc-issue.mov

Feedback on this issue would be more than welcome. It's not clear to me
whose bug this is (v4l2, drm or iommu). The software is unlikely to be
blamed as this same pipeline works fine with non-vmalloc-based sources.

regards,
Nicolas

> 
> Suggested-by: Marek Szyprowski 
> Signed-off-by: Stanimir Varbanov 
> ---
>  drivers/media/v4l2-core/videobuf2-dma-contig.c | 6 --
>  drivers/media/v4l2-core/videobuf2-dma-sg.c | 7 +++
>  2 files changed, 7 insertions(+), 6 deletions(-)
> 
> diff --git a/drivers/media/v4l2-core/videobuf2-dma-contig.c 
> b/drivers/media/v4l2-core/videobuf2-dma-contig.c
> index 9f389f36566d..696e24f9128d 100644
> --- a/drivers/media/v4l2-core/videobuf2-dma-contig.c
> +++ b/drivers/media/v4l2-core/videobuf2-dma-contig.c
> @@ -434,8 +434,10 @@ static void vb2_dc_put_userptr(void *buf_priv)
>   pages = frame_vector_pages(buf->vec);
>   /* sgt should exist only if vector contains pages... */
>   BUG_ON(IS_ERR(pages));
> - for (i = 0; i < frame_vector_count(buf->vec); i++)
> - set_page_dirty_lock(pages[i]);
> + if (buf->dma_dir == DMA_FROM_DEVICE ||
> + buf->dma_dir == DMA_BIDIRECTIONAL)
> + for (i = 0; i < frame_vector_count(buf->vec); i++)
> + set_page_dirty_lock(pages[i]);
>   sg_free_table(sgt);
>   kfree(sgt);
>   }
> diff --git a/drivers/media/v4l2-core/videobuf2-dma-sg.c 
> b/drivers/media/v4l2-core/videobuf2-dma-sg.c
> index 6808231a6bdc..753ed3138dcc 100644
> --- a/drivers/media/v4l2-core/videobuf2-dma-sg.c
> +++ b/drivers/media/v4l2-core/videobuf2-dma-sg.c
> @@ -292,11 +292,10 @@ static void vb2_dma_sg_put_userptr(void *buf_priv)
>   if (buf->vaddr)
>   vm_unmap_ram(buf->vaddr, buf->num_pages);
>   sg_free_table(buf->dma_sgt);
> - while (--i >= 0) {
> - if (buf->dma_dir == DMA_FROM_DEVICE ||
> - buf->dma_dir == DMA_BIDIRECTIONAL)
> + if (buf->dma_dir == DMA_FROM_DEVICE ||
> + buf->dma_dir == DMA_BIDIRECTIONAL)
> + while (--i >= 0)
>   set_page_dirty_lock(buf->pages[i]);
> - }
>   vb2_destroy_framevec(buf->vec);
>   kfree(buf);
>  }



Re: [PATCH 2/2] media: venus: venc: fix bytesused v4l2_plane field

2017-10-09 Thread Nicolas Dufresne
I confirm this works properly now. This was tested with GStreamer with
the following command:

  gst-launch-1.0 videotestsrc ! v4l2vp8enc ! v4l2vp8dec ! kmssink

And the following patch, which is work in progress to implement
data_offset.

  
https://gitlab.collabora.com/nicolas/gst-plugins-good/commit/aaedee9a37e396657568a70fc0110375e14fb4fa

Le lundi 09 octobre 2017 à 15:24 +0300, Stanimir Varbanov a écrit :
> This fixes wrongly filled bytesused field of v4l2_plane structure
> by include data_offset in the plane, Also fill data_offset and
> bytesused for capture type of buffers only.
> 
> Signed-off-by: Stanimir Varbanov <stanimir.varba...@linaro.org>

Tested-by: Nicolas Dufresne <nicolas.dufre...@collabora.com>

> ---
>  drivers/media/platform/qcom/venus/venc.c | 9 +
>  1 file changed, 5 insertions(+), 4 deletions(-)
> 
> diff --git a/drivers/media/platform/qcom/venus/venc.c 
> b/drivers/media/platform/qcom/venus/venc.c
> index 6f123a387cf9..9445ad492966 100644
> --- a/drivers/media/platform/qcom/venus/venc.c
> +++ b/drivers/media/platform/qcom/venus/venc.c
> @@ -963,15 +963,16 @@ static void venc_buf_done(struct venus_inst *inst, 
> unsigned int buf_type,
>   if (!vbuf)
>   return;
>  
> - vb = &vbuf->vb2_buf;
> - vb->planes[0].bytesused = bytesused;
> - vb->planes[0].data_offset = data_offset;
> -
>   vbuf->flags = flags;
>  
>   if (type == V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE) {
> + vb = &vbuf->vb2_buf;
> + vb2_set_plane_payload(vb, 0, bytesused + data_offset);
> + vb->planes[0].data_offset = data_offset;
>   vb->timestamp = timestamp_us * NSEC_PER_USEC;
>   vbuf->sequence = inst->sequence_cap++;
> +
> + WARN_ON(vb2_get_plane_payload(vb, 0) > vb2_plane_size(vb, 0));
>   } else {
>   vbuf->sequence = inst->sequence_out++;
>   }


Re: platform: coda: how to use firmware-imx binary releases? / how to use VDOA on imx6?

2017-10-05 Thread Nicolas Dufresne
Le jeudi 05 octobre 2017 à 13:54 +0200, Martin Kepplinger a écrit :
> > This message is most likely just a result of the VDOA not supporting 
> > the
> > selected capture format. In vdoa_context_configure, you can see that 
> > the
> > VDOA only writes YUYV or NV12.
> 
> ok. I'll have to look into it, and just in case you see a problem on 
> first sight:
> this is what coda says with debug level 1, when doing
> 
> gst-launch-1.0 playbin uri=file:///data/test2_hd480.h264 
> video-sink=fbdevsink

A bit unrelated, but kmssink remains a better choice here.

Nicolas


Re: Memory freeing when dmabuf fds are exported with VIDIOC_EXPBUF

2017-10-03 Thread Nicolas Dufresne
I'd like to revive this discussion.

Le lundi 01 août 2016 à 12:56 +0200, Hans Verkuil a écrit :
> > 
> > Hans, Marek, any opinion on this ?
> 
> What is the use-case for this? What you are doing here is to either free all
> existing buffers or reallocate buffers. We can decide to rely on refcounting,
> but then you would create a second set of buffers (when re-allocating) or
> leave a lot of unfreed memory behind. That's pretty hard on the memory usage.
> 
> I think the EBUSY is there to protect the user against him/herself: i.e. don't
> call this unless you know all refs are closed.
> 
> Given the typical large buffersizes we're talking about, I think that EBUSY
> makes sense.

This is a userspace hell for the use case of seamless resolution
change. Let's say I'm rendering buffers from a V4L2 camera on my
display through a KMS driver. While I'm streaming, the KMS driver will
hold on to the last frame. This is required when your display is
sourcing data directly from your DMABuf, because the KMS renders are
not synchronized with the V4L2 camera (you could have a 24fps camera
over a 60fps display).

When it's time to change the resolution, the fact that we can't let go
of the DMABuf means that we need to reclaim the memory from KMS first.
We can't just take it back: we need to allocate a new buffer, copy the
buffer data using the CPU, set up the DMABuf reference, use that new
buffer for redraw and then release the old one.

This operation is extremely slow, since it requires an allocation and a
CPU copy of the data. This is only needed because V4L2 is trying to
prevent over-allocation. In this case, userspace is only holding on to
one of the frames, which is far from the dramatic memory waste being
described here.
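Assuming the orphaned-buffer support from the patch at the top of this thread, userspace could avoid the copy path entirely by checking the capabilities word returned by reqbufs. A sketch: the `SIM_` macro mirrors the upstream V4L2_BUF_CAP_SUPPORTS_ORPHANED_BUFS value, but treat the exact bit as an assumption here.

```c
#include <stdint.h>

/* Assumed to match the upstream videodev2.h definition (bit 4). */
#define SIM_V4L2_BUF_CAP_SUPPORTS_ORPHANED_BUFS (1u << 4)

/* Returns 1 when buffers can simply be orphaned with reqbufs(0) while
 * the display still holds a DMABUF reference, 0 when the slow
 * allocate-copy-release fallback described above is needed. The
 * argument is the capabilities field filled in by VIDIOC_REQBUFS. */
static int can_orphan_buffers(uint32_t reqbufs_capabilities)
{
	return !!(reqbufs_capabilities &
		  SIM_V4L2_BUF_CAP_SUPPORTS_ORPHANED_BUFS);
}
```

On kernels predating the flag the capabilities field reads back as zero, so this check conservatively selects the copy fallback there.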

regards,
Nicolas


Re: [PATCH v3 01/15] [media] v4l: Document explicit synchronization behaviour

2017-09-07 Thread Nicolas Dufresne
Le jeudi 07 septembre 2017 à 15:42 -0300, Gustavo Padovan a écrit :
> From: Gustavo Padovan 
> 
> Add section to VIDIOC_QBUF about it
> 
> v2:
>   - mention that fences are files (Hans)
>   - rework for the new API
> 
> Signed-off-by: Gustavo Padovan 
> ---
>  Documentation/media/uapi/v4l/vidioc-qbuf.rst | 31 
> 
>  1 file changed, 31 insertions(+)
> 
> diff --git a/Documentation/media/uapi/v4l/vidioc-qbuf.rst 
> b/Documentation/media/uapi/v4l/vidioc-qbuf.rst
> index 1f3612637200..fae0b1431672 100644
> --- a/Documentation/media/uapi/v4l/vidioc-qbuf.rst
> +++ b/Documentation/media/uapi/v4l/vidioc-qbuf.rst
> @@ -117,6 +117,37 @@ immediately with an ``EAGAIN`` error code when no buffer 
> is available.
>  The struct :c:type:`v4l2_buffer` structure is specified in
>  :ref:`buffer`.
>  
> +Explicit Synchronization
> +
> +
> +Explicit Synchronization allows us to control the synchronization of
> +shared buffers from userspace by passing fences to the kernel and/or
> +receiving them from it. Fences passed to the kernel are named in-fences and
> +the kernel should wait them to signal before using the buffer, i.e., queueing
> +it to the driver. On the other side, the kernel can create out-fences for the
> +buffers it queues to the drivers, out-fences signal when the driver is
> +finished with buffer, that is the buffer is ready. The fence are represented
> +by file and passed as file descriptor to userspace.

I think the API works to deliver the fence FD userspace, though for the
userspace I maintain (GStreamer) it's often the case that the buffer is
unusable without the associated timestamp.

Let's consider the capture to display case (V4L2 -> DRM). As soon as
you add audio capture to the loop, GStreamer will need to start dealing
with synchronization. We can't just blindly give that buffer to the
display, we need to make sure this buffer makes it on time, in a way
that it is synchronized with the audio. To deal with synchronization,
we need to be able to correlate the video image capture time with the
audio capture time.

The problem is that this timestamp is only delivered when DQBUF
succeeds, which is after the fence has signalled. This makes the fences
completely unusable for that purpose. In general, to achieve very low
latency and still have synchronization, we'll probably need the
timestamp to be delivered somehow before the image transfer has
completed. Obviously, this is only possible if we have a timestamp with
the flag V4L2_BUF_FLAG_TSTAMP_SRC_SOE.

On another note, for m2m drivers that behave as color converters and
scalers, with ordered queues, userspace already knows the timestamp, so
using the proposed API and passing the buffer immediately with its
fence is really going to help.

For encoded (compressed) data, similar issues will be found with the
bytesused member of struct v4l2_buffer. We'll need to know the size of
the encoded data before we can pass it to another driver. I'm not sure
how relevant fences are for this type of data.

> +
> +The in-fences are communicated to the kernel at the ``VIDIOC_QBUF`` ioctl
> +using the ``V4L2_BUF_FLAG_IN_FENCE`` buffer
> +flags and the `fence_fd` field. If an in-fence needs to be passed to the 
> kernel,
> +`fence_fd` should be set to the fence file descriptor number and the
> +``V4L2_BUF_FLAG_IN_FENCE`` should be set as well. Failure to set both will
> +cause ``VIDIOC_QBUF`` to return with error.
> +
> +To get a out-fence back from V4L2 the ``V4L2_BUF_FLAG_OUT_FENCE`` flag should
> +be set to notify it that the next queued buffer should have a fence attached 
> to
> +it. That means the out-fence may not be associated with the buffer in the
> +current ``VIDIOC_QBUF`` ioctl call because the ordering in which videobuf2 
> core
> +queues the buffers to the drivers can't be guaranteed. To become aware of the
> +of the next queued buffer and the out-fence attached to it the

"of the" is repeated twice.

> +``V4L2_EVENT_BUF_QUEUED`` event should be used. It will trigger an event
> +for every buffer queued to the V4L2 driver.
> +
> +At streamoff the out-fences will either signal normally if the drivers wait
> +for the operations on the buffers to finish or signal with error if the
> +driver cancel the pending operations.
>  
>  Return Value
>  


Re: DRM Format Modifiers in v4l2

2017-08-31 Thread Nicolas Dufresne
Le jeudi 31 août 2017 à 17:28 +0300, Laurent Pinchart a écrit :
> > e.g. if I have two devices which support MODIFIER_FOO, I could attempt
> > to share a buffer between them which uses MODIFIER_FOO without
> > necessarily knowing exactly what it is/does.
> 
> Userspace could certainly set modifiers blindly, but the point of modifiers 
> is 
> to generate side effects benefitial to the use case at hand (for instance by 
> optimizing the memory access pattern). To use them meaningfully userspace 
> would need to have at least an idea of the side effects they generate.

Generic userspace will basically pick some random combination. To allow
generically picking the optimal configuration we could indeed rely on
the application's knowledge, but we could also enhance the spec so that
the order in the enumeration becomes meaningful.
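If the spec did make enumeration order express preference, the generic selection described above reduces to taking the first modifier common to both devices. A sketch under that assumption (names are made up; real code would use the `DRM_FORMAT_MOD_*` values from drm_fourcc.h):

```c
#include <stdint.h>

#define SIM_MOD_INVALID ((uint64_t)-1)

/* Walk the producer's modifier list in its (assumed preference) order
 * and return the first entry the consumer also supports. Userspace
 * needs no knowledge of what any modifier means. */
static uint64_t pick_modifier(const uint64_t *producer, int np,
			      const uint64_t *consumer, int nc)
{
	int i, j;

	for (i = 0; i < np; i++)
		for (j = 0; j < nc; j++)
			if (producer[i] == consumer[j])
				return producer[i];
	return SIM_MOD_INVALID;	/* no common modifier */
}
```

The point of the ordering proposal is exactly that this blind intersection then lands on the optimal shared choice, instead of a random one.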

regards,
Nicolas



Re: [PATCH 2/2] media:imx274 V4l2 driver for Sony imx274 CMOS sensor

2017-08-28 Thread Nicolas Dufresne
Le lundi 28 août 2017 à 08:15 -0700, Soren Brinkmann a écrit :
> From: Leon Luo 
> 
> The imx274 is a Sony CMOS image sensor that has 1/2.5 image size.
> It supports up to 3840x2160 (4K) 60fps, 1080p 120fps. The interface
> is 4-lane MIPI running at 1.44Gbps each.
> 
> This driver has been tested on Xilinx ZCU102 platform with a Leopard
> LI-IMX274MIPI-FMC camera board.
> 
> Support for the following features:
> -Resolutions: 3840x2160, 1920x1080, 1280x720
> -Frame rate: 3840x2160 : 5 – 60fps
> 1920x1080 : 5 – 120fps
> 1280x720 : 5 – 120fps
> -Exposure time: 16 – (frame interval) micro-seconds
> -Gain: 1x - 180x
> -VFLIP: enable/disable
> -Test pattern: 12 test patterns
> 
> Signed-off-by: Leon Luo 
> Tested-by: Sören Brinkmann 
> ---
>  drivers/media/i2c/Kconfig  |   16 +-
>  drivers/media/i2c/Makefile |1 +
>  drivers/media/i2c/imx274.c | 1843 
> 
>  3 files changed, 1850 insertions(+), 10 deletions(-)
>  create mode 100644 drivers/media/i2c/imx274.c
> 
> diff --git a/drivers/media/i2c/Kconfig b/drivers/media/i2c/Kconfig
> index 94153895fcd4..4e8b64575b2a 100644
> --- a/drivers/media/i2c/Kconfig
> +++ b/drivers/media/i2c/Kconfig
> @@ -547,16 +547,12 @@ config VIDEO_APTINA_PLL
>  config VIDEO_SMIAPP_PLL
>   tristate
>  
> -config VIDEO_OV2640
> - tristate "OmniVision OV2640 sensor support"
> - depends on VIDEO_V4L2 && I2C
> - depends on MEDIA_CAMERA_SUPPORT
> - help
> -   This is a Video4Linux2 sensor-level driver for the OmniVision
> -   OV2640 camera.
> -
> -   To compile this driver as a module, choose M here: the
> -   module will be called ov2640.

Is this removal of another sensor intentional?

> +config VIDEO_IMX274
> + tristate "Sony IMX274 sensor support"
> + depends on I2C && VIDEO_V4L2 && VIDEO_V4L2_SUBDEV_API
> + ---help---
> +   This is a V4L2 sensor-level driver for the Sony IMX274
> +   CMOS image sensor.
>  
>  config VIDEO_OV2659
>   tristate "OmniVision OV2659 sensor support"
> diff --git a/drivers/media/i2c/Makefile b/drivers/media/i2c/Makefile
> index c843c181dfb9..f8d57e453936 100644
> --- a/drivers/media/i2c/Makefile
> +++ b/drivers/media/i2c/Makefile
> @@ -92,5 +92,6 @@ obj-$(CONFIG_VIDEO_IR_I2C)  += ir-kbd-i2c.o
>  obj-$(CONFIG_VIDEO_ML86V7667)+= ml86v7667.o
>  obj-$(CONFIG_VIDEO_OV2659)   += ov2659.o
>  obj-$(CONFIG_VIDEO_TC358743) += tc358743.o
> +obj-$(CONFIG_VIDEO_IMX274)   += imx274.o
>  
>  obj-$(CONFIG_SDR_MAX2175) += max2175.o
> diff --git a/drivers/media/i2c/imx274.c b/drivers/media/i2c/imx274.c
> new file mode 100644
> index ..8b0a1316eadf
> --- /dev/null
> +++ b/drivers/media/i2c/imx274.c
> @@ -0,0 +1,1843 @@
> +/*
> + * imx274.c - IMX274 CMOS Image Sensor driver
> + *
> + * Copyright (C) 2017, Leopard Imaging, Inc.
> + *
> + * Leon Luo 
> + * Edwin Zou 
> + *
> + * This program is free software; you can redistribute it and/or modify it
> + * under the terms and conditions of the GNU General Public License,
> + * version 2, as published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope it will be useful, but WITHOUT
> + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
> + * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
> + * more details.
> + *
> + * You should have received a copy of the GNU General Public License
> + * along with this program.  If not, see <http://www.gnu.org/licenses/>.
> + */
> +
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +
> +static int debug;
> +module_param(debug, int, 0644);
> +MODULE_PARM_DESC(debug, "Debug level (0-2)");
> +
> +/*
> + * See "SHR, SVR Setting" in datasheet
> + */
> +#define IMX274_DEFAULT_FRAME_LENGTH  (4550)
> +#define IMX274_MAX_FRAME_LENGTH  (0x000f)
> +
> +/*
> + * See "Frame Rate Adjustment" in datasheet
> + */
> +#define IMX274_PIXCLK_CONST1 (7200)
> +#define IMX274_PIXCLK_CONST2 (100)
> +
> +/*
> + * The input gain is shifted by IMX274_GAIN_SHIFT to get
> + * decimal number. The real gain is
> + * (float)input_gain_value / (1 << IMX274_GAIN_SHIFT)
> + */
> +#define IMX274_GAIN_SHIFT(8)
> +#define IMX274_GAIN_SHIFT_MASK   ((1 << IMX274_GAIN_SHIFT) - 1)
> +
> +/*
> + * See "Analog Gain" and "Digital Gain" in datasheet
> + * min gain is 1X
> + * max gain is calculated based on IMX274_GAIN_REG_MAX
> + */
> +#define IMX274_GAIN_REG_MAX  

Re: DRM Format Modifiers in v4l2

2017-08-28 Thread Nicolas Dufresne
On Thursday, 24 August 2017 at 13:26 +0100, Brian Starkey wrote:
> > What I mean was: an application can use the modifier to give buffers from
> > one device to another without needing to understand it.
> > 
> > But a generic video capture application that processes the video itself
> > cannot be expected to know about the modifiers. It's a custom HW specific
> > format that you only use between two HW devices or with software written
> > for that hardware.
> > 
> 
> Yes, makes sense.
> 
> > > 
> > > However, in DRM the API lets you get the supported formats for each
> > > modifier as-well-as the modifier list itself. I'm not sure how exactly
> > > to provide that in a control.
> > 
> > We have support for a 'menu' of 64 bit integers: 
> > V4L2_CTRL_TYPE_INTEGER_MENU.
> > You use VIDIOC_QUERYMENU to enumerate the available modifiers.
> > 
> > So enumerating these modifiers would work out-of-the-box.
> 
> Right. So I guess the supported set of formats could be somehow
> enumerated in the menu item string. In DRM the pairs are (modifier +
> bitmask) where bits represent formats in the supported formats list
> (commit db1689aa61bd in drm-next). Printing a hex representation of
> the bitmask would be functional but I concede not very pretty.

The problem is that the list of modifiers depends on the format
selected. Having to call S_FMT to obtain this list is quite
inefficient.

Also, be aware that the DRM_FORMAT_MOD_SAMSUNG_64_32_TILE modifier has been
implemented in V4L2 as a dedicated format (V4L2_PIX_FMT_NV12MT). I think
another one made it in the same way recently, something from Mediatek if
I remember correctly. Though, unlike the Intel one, the same modifier does
not produce different results depending on the hardware revision.

regards,
Nicolas



signature.asc
Description: This is a digitally signed message part


Re: [PATCH v3] [media] v4l2: Add support for go2001 PCI codec driver

2017-07-25 Thread Nicolas Dufresne
On Tuesday, 25 July 2017 at 19:40 +0200, Thierry Escande wrote:
> This patch adds support for the go2001 PCI codec driver. This
> hardware
> is present on ChromeOS based devices like the Acer ChromeBox and
> Acer/LG
> ChromeBase 24 devices. This chipset comes on a mini PCI-E card with
> Google as PCI vendor ID (0x1ae0).
> 
> This driver comes from the ChromeOS v3.18 kernel tree and has been
> modified to support vb2_buffer restructuring introduced in Linux
> v4.4.
> 
> The go2001 firmware files can be found in the build tree of the
> Google
> Chromium OS open source project.
> 
> This driver is originally developed by:
>  Pawel Osciak 
>  Ville-Mikko Rautio 
>  henryhsu 
>  Wu-Cheng Li 
> 
> Signed-off-by: Thierry Escande 
> ---
>  drivers/media/pci/Kconfig|2 +
>  drivers/media/pci/Makefile   |1 +
>  drivers/media/pci/go2001/Kconfig |   11 +
>  drivers/media/pci/go2001/Makefile|2 +
>  drivers/media/pci/go2001/go2001.h|  331 
>  drivers/media/pci/go2001/go2001_driver.c | 2525 ++
>  drivers/media/pci/go2001/go2001_hw.c | 1362 
>  drivers/media/pci/go2001/go2001_hw.h |   55 +
>  drivers/media/pci/go2001/go2001_proto.h  |  359 +
>  9 files changed, 4648 insertions(+)
>  create mode 100644 drivers/media/pci/go2001/Kconfig
>  create mode 100644 drivers/media/pci/go2001/Makefile
>  create mode 100644 drivers/media/pci/go2001/go2001.h
>  create mode 100644 drivers/media/pci/go2001/go2001_driver.c
>  create mode 100644 drivers/media/pci/go2001/go2001_hw.c
>  create mode 100644 drivers/media/pci/go2001/go2001_hw.h
>  create mode 100644 drivers/media/pci/go2001/go2001_proto.h
> 
> diff --git a/drivers/media/pci/Kconfig b/drivers/media/pci/Kconfig
> index da28e68..837681e 100644
> --- a/drivers/media/pci/Kconfig
> +++ b/drivers/media/pci/Kconfig
> @@ -54,5 +54,7 @@ source "drivers/media/pci/smipcie/Kconfig"
>  source "drivers/media/pci/netup_unidvb/Kconfig"
>  endif
>  
> +source "drivers/media/pci/go2001/Kconfig"
> +
>  endif #MEDIA_PCI_SUPPORT
>  endif #PCI
> diff --git a/drivers/media/pci/Makefile b/drivers/media/pci/Makefile
> index a7e8af0..58639b7 100644
> --- a/drivers/media/pci/Makefile
> +++ b/drivers/media/pci/Makefile
> @@ -32,3 +32,4 @@ obj-$(CONFIG_STA2X11_VIP) += sta2x11/
>  obj-$(CONFIG_VIDEO_SOLO6X10) += solo6x10/
>  obj-$(CONFIG_VIDEO_COBALT) += cobalt/
>  obj-$(CONFIG_VIDEO_TW5864) += tw5864/
> +obj-$(CONFIG_VIDEO_GO2001) += go2001/
> diff --git a/drivers/media/pci/go2001/Kconfig
> b/drivers/media/pci/go2001/Kconfig
> new file mode 100644
> index 000..c7b5149
> --- /dev/null
> +++ b/drivers/media/pci/go2001/Kconfig
> @@ -0,0 +1,11 @@
> +config VIDEO_GO2001
> + tristate "GO2001 codec driver"
> + depends on VIDEO_V4L2 && PCI
> + select VIDEOBUF2_DMA_SG
> + ---help---
> +   This driver supports the GO2001 PCI hardware codec. This codec
> +   is present on ChromeOS based devices like the Acer ChromeBox,
> +   Acer ChromeBase 24, and LG ChromeBase.
> +
> +   To compile this driver as a module, choose M here: the
> +   module will be called go2001.
> diff --git a/drivers/media/pci/go2001/Makefile
> b/drivers/media/pci/go2001/Makefile
> new file mode 100644
> index 000..20bad18
> --- /dev/null
> +++ b/drivers/media/pci/go2001/Makefile
> @@ -0,0 +1,2 @@
> +go2001-objs  := go2001_driver.o go2001_hw.o
> +obj-$(CONFIG_VIDEO_GO2001) += go2001.o
> diff --git a/drivers/media/pci/go2001/go2001.h
> b/drivers/media/pci/go2001/go2001.h
> new file mode 100644
> index 000..0e5ccfd
> --- /dev/null
> +++ b/drivers/media/pci/go2001/go2001.h
> @@ -0,0 +1,331 @@
> +/*
> + *  go2001 - GO2001 codec driver.
> + *
> + *  Author : Pawel Osciak 
> + *
> + *  Copyright (C) 2017 Google, Inc.
> + *
> + *  This program is free software; you can redistribute it and/or
> modify
> + *  it under the terms of the GNU General Public License as
> published by
> + *  the Free Software Foundation; either version 2 of the License,
> or
> + *  (at your option) any later version.
> + *
> + *  This program is distributed in the hope that it will be useful,
> + *  but WITHOUT ANY WARRANTY; without even the implied warranty of
> + *  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> + *  GNU General Public License for more details.
> + *
> + *  You should have received a copy of the GNU General Public
> License
> + *  along with this program.  If not, see <http://www.gnu.org/licenses/>.
> + */
> +#ifndef _MEDIA_PCI_GO2001_GO2001_H_
> +#define _MEDIA_PCI_GO2001_GO2001_H_
> +
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +
> +#include 
> +#include 
> +#include 
> +#include 
> +
> +#include "go2001_proto.h"
> +
> +struct go2001_msg {
> + struct list_head list_entry;
> + 

Re: [PATCH v2 2/6] [media] rockchip/rga: v4l2 m2m support

2017-07-17 Thread Nicolas Dufresne
On Monday, 17 July 2017 at 05:37 +0300, Laurent Pinchart wrote:
> Hi Nicolas,
> 
> On Saturday 15 Jul 2017 12:49:13 Personnel wrote:
> 
> You might want to fix your mailer to use your name :-)
> 
> > On Saturday, 15 July 2017 at 12:42 +0300, Laurent Pinchart wrote:
> > > On Saturday 15 Jul 2017 14:58:36 Jacob Chen wrote:
> > > > Rockchip RGA is a separate 2D raster graphic acceleration unit. It
> > > > accelerates 2D graphics operations, such as point/line drawing, image
> > > > scaling, rotation, BitBLT, alpha blending and image blur/sharpness.
> > > > 
> > > > The driver is mostly based on the s5p-g2d v4l2 m2m driver.
> > > > And supports various operations from the rendering pipeline.
> > > > 
> > > >  - copy
> > > >  - fast solid color fill
> > > >  - rotation
> > > >  - flip
> > > >  - alpha blending
> > > 
> > > I notice that you don't support the drawing operations. How do you plan to
> > > support them later through the V4L2 M2M API? I hate stating the obvious,
> > > but wouldn't the DRM API be a better fit for a graphics accelerator?
> > 
> > It could fit, maybe, but it really lacks some framework. Also, DRM is
> > not really meant for M2M operation, and it's also not great for multi-
> > process.
> 
> GPUs on embedded devices are mem-to-mem, and they're definitely shared 
> between 
> multiple processes :-)
> 
> > Until recently, there were competing drivers for Exynos, implemented in
> > both V4L2 and DRM for similar rationales; all the DRM ones are
> > being deprecated/removed.
> > 
> > I think 2D blitters in V4L2 are fine, but they terribly lack something
> > to differentiate them from converters/scalers when looking up the HW
> > list. It could be as simple as a capability flag, if I may suggest. For
> > reference, the 2D blitter on IMX6 has been used to implement a live
> > video mixer in GStreamer.
> > 
> > https://bugzilla.gnome.org/show_bug.cgi?id=772766
> 
> If we decide that 2D blitters should be supported by V4L2 (and I'm open to 
> get 
> convinced about that), we really need to define a proper API before merging a 
> bunch of drivers that will implement things in slightly different ways, 
> otherwise the future will be very painful.

Arguably, Jacob is not proposing anything new, as at least one other
driver has been merged.

> 
> Among the issues that need to be solved are
> 
> - stateful vs. stateless operation (as mentioned by Jacob in this mail 
> thread), a.k.a. the request API

Would it be possible to expand on that thought? To me, the Request API
could enable more use cases but is not strictly required.

> 
> - exposing capabilities to userspace (a single capability flag would be 
> enough 
> only if all blitters expose the same API, which I'm not sure we can assume)

I am just rethinking this. With this patch series, Jacob is trying to
generalize the blit operation controls (they still need a name; "blend
mode" does not work). We can easily make a recommendation to set the
default operation to a copy operation (drivers always support that).
This way, the node will behave like a converter (scaler, colorspace
converter, rotator, etc.). By checking the presence of that control, we
can clearly and quickly figure out what this node is about. The
capability remains a nice idea, but is probably optional.

I totally agree we should document the behaviours and the rationale for
picking a certain default. The control should maybe become a "menu"
too, so each driver can cherry-pick the blit operations it supports
(using an int with min/max requires userspace trial and error; we
already made that mistake for encoder profiles and levels).

> 
> - single input (a.k.a. in-place blitters as you mentioned below) vs. multiple 
> inputs

I do think the second is something you can build on top of the first by
cascading (what we do in the referenced GStreamer element). So far this
is applicable to Exynos, IMX6 and now Rockchip (probably more). The
"optimal" form for the second case seems like something that will be
implemented using a much lower-level kernel interface, like a GPU
programming interface (e.g. the proprietary Adreno C2D API), or through
multiple nodes (multiple inputs and outputs). It seems like the cut
between high-end and low-end.

> 
> - API for 2D-accelerated operations other than blitting (filling, point and 
> line drawing, ...)

I doubt such hardware exists in a form that is not bound to the GPU. I'm
not ignoring your point; there is a clear overlap between how we
integrate GPUs and having this in V4L2.

> 
> > > Additionally, V4L2 M2M has one source and one destination. How do you
> > > implement alpha blending in that case, which by definition requires at
> > > least two sources ?
> > 
> > This type of HW only does in-place blits. When using such a node, the
> > buffer queued on the V4L2_CAPTURE side contains the destination image,
> > and the buffer queued on the V4L2_OUTPUT side is the source image.
> > 
> > > > The code in rga-hw.c is used to configure regs accroding to operations.
> > > > 
> > > > The code in 

Re: [PATCH v6 1/3] [media] v4l: add parsed MPEG-2 support

2017-07-08 Thread Nicolas Dufresne
On Saturday, 8 July 2017 at 13:16 +0800, ayaka wrote:
> 
> On 07/08/2017 02:33 AM, Nicolas Dufresne wrote:
> > On Saturday, 8 July 2017 at 01:29 +0800, ayaka wrote:
> > > On 07/04/2017 05:29 PM, Hugues FRUCHET wrote:
> > > > Hi Randy,
> > > > Thanks for review, and sorry for late reply, answers inline.
> > > > BR,
> > > > Hugues.
> > > > 
> > > > On 06/11/2017 01:41 PM, ayaka wrote:
> > > > > On 04/28/2017 09:25 PM, Hugues Fruchet wrote:
> > > > > > Add "parsed MPEG-2" pixel format & related controls
> > > > > > needed by stateless video decoders.
> > > > > > In order to decode the video bitstream chunk provided
> > > > > > by the user on the output queue, stateless decoders also
> > > > > > require some extra data resulting from parsing this video
> > > > > > bitstream chunk.
> > > > > > Those parsed extra data have to be set by the user through
> > > > > > the control framework, using the dedicated MPEG video extended
> > > > > > controls introduced in this patchset.
> > > > > 
> > > > > I have compared those v4l2 controls with the registers of the rockchip
> > > > > video IP.
> > > > > 
> > > > > Most of them are met, but only lacks of sw_init_qp.
> > > > 
> > > > In case of MPEG-1/2, this register seems forced to 1, please double
> > > > check the on2 headers parsing library related to MPEG2. Nevertheless, I
> > > > see this hardware register used with VP8/H264.
> > > 
> > > Yes, it is forced to be 1. We can skip this field for MPEG1/2
> > > > Hence, no need to put this field on MPEG-2 interface, but should come
> > > > with VP8/H264.
> > > > 
> > > > > Here is the full translation table of the registers of the rockchip
> > > > > video IP.
> > > > > 
> > > > > q_scale_type                 sw_qscale_type
> > > > > concealment_motion_vectors   sw_con_mv_e
> > > > > intra_dc_precision           sw_intra_dc_prec
> > > > > intra_vlc_format             sw_intra_vlc_tab
> > > > > frame_pred_frame_dct         sw_frame_pred_dct
> > > > > alternate_scan               sw_alt_scan_flag_e
> > > > > f_code                       sw_fcode_bwd_ver
> > > > >                              sw_fcode_bwd_hor
> > > > >                              sw_fcode_fwd_ver
> > > > >                              sw_fcode_fwd_hor
> > > > > full_pel_forward_vector      sw_mv_accuracy_fwd
> > > > > full_pel_backward_vector     sw_mv_accuracy_bwd
> > > > > 
> > > > > 
> > > > > I also saw you added two formats for parsed MPEG-2/MPEG-1; I would
> > > > > not recommend doing that.
> > > > 
> > > > We need to differentiate MPEG-1/MPEG-2, not all the fields are
> > > > applicable depending on version.
> > > 
> > > Usually an MPEG-2 decoder can support MPEG-1; as far as I know, the
> > > byte stream syntax of the two is the same.
> > > > > That is what Google does, because for a few video formats and some
> > > > > hardware, they just require offsets into the original video byte
> > > > > stream.
> > > > 
> > > > I don't understand your comment; perhaps you have something as a
> > > > basis for discussion?
> > > 
> > > I mean
> > > 
> > > V4L2_PIX_FMT_MPEG2_PARSED / V4L2_PIX_FMT_MPEG1_PARSED: I wonder whether
> > > you want to use the new format to inform userspace that this device is
> > > a stateless video decoder, as Google defined something like
> > > V4L2_PIX_FMT_H264_SLICE. I think the driver registering some controls
> > > is enough for userspace to detect whether it is a stateless device. Or
>

Re: [PATCH v6 1/3] [media] v4l: add parsed MPEG-2 support

2017-07-07 Thread Nicolas Dufresne
On Saturday, 8 July 2017 at 01:29 +0800, ayaka wrote:
> 
> On 07/04/2017 05:29 PM, Hugues FRUCHET wrote:
> > Hi Randy,
> > Thanks for review, and sorry for late reply, answers inline.
> > BR,
> > Hugues.
> > 
> > On 06/11/2017 01:41 PM, ayaka wrote:
> > > 
> > > On 04/28/2017 09:25 PM, Hugues Fruchet wrote:
> > > > Add "parsed MPEG-2" pixel format & related controls
> > > > needed by stateless video decoders.
> > > > In order to decode the video bitstream chunk provided
> > > > by the user on the output queue, stateless decoders also
> > > > require some extra data resulting from parsing this video
> > > > bitstream chunk.
> > > > Those parsed extra data have to be set by the user through
> > > > the control framework, using the dedicated MPEG video extended
> > > > controls introduced in this patchset.
> > > 
> > > I have compared those v4l2 controls with the registers of the rockchip
> > > video IP.
> > > 
> > > Most of them are met, but only lacks of sw_init_qp.
> > 
> > In case of MPEG-1/2, this register seems forced to 1, please double
> > check the on2 headers parsing library related to MPEG2. Nevertheless, I
> > see this hardware register used with VP8/H264.
> 
> Yes, it is forced to be 1. We can skip this field for MPEG1/2
> > 
> > Hence, no need to put this field on MPEG-2 interface, but should come
> > with VP8/H264.
> > 
> > > 
> > > Here is the full translation table of the registers of the rockchip
> > > video IP.
> > > 
> > > q_scale_type                 sw_qscale_type
> > > concealment_motion_vectors   sw_con_mv_e
> > > intra_dc_precision           sw_intra_dc_prec
> > > intra_vlc_format             sw_intra_vlc_tab
> > > frame_pred_frame_dct         sw_frame_pred_dct
> > > alternate_scan               sw_alt_scan_flag_e
> > > f_code                       sw_fcode_bwd_ver
> > >                              sw_fcode_bwd_hor
> > >                              sw_fcode_fwd_ver
> > >                              sw_fcode_fwd_hor
> > > full_pel_forward_vector      sw_mv_accuracy_fwd
> > > full_pel_backward_vector     sw_mv_accuracy_bwd
> > > 
> > > 
> > > I also saw you added two formats for parsed MPEG-2/MPEG-1; I would
> > > not recommend doing that.
> > 
> > We need to differentiate MPEG-1/MPEG-2, not all the fields are
> > applicable depending on version.
> 
> Usually an MPEG-2 decoder can support MPEG-1; as far as I know, the byte
> stream syntax of the two is the same.
> > > That is what Google does, because for a few video formats and some
> > > hardware, they just require offsets into the original video byte stream.
> > 
> > I don't understand your comment; perhaps you have something as a basis
> > for discussion?
> 
> I mean
> 
> V4L2_PIX_FMT_MPEG2_PARSED / V4L2_PIX_FMT_MPEG1_PARSED: I wonder whether you
> want to use the new format to inform userspace that this device is a
> stateless video decoder, as Google defined something like
> V4L2_PIX_FMT_H264_SLICE. I think the driver registering some controls is
> enough for userspace to detect whether it is a stateless device. Or it
> will increase the work of userspace (I mean GStreamer).

Just a note that SLICE has nothing to do with PARSED here. You could
have an H264 decoder that is stateless and supports handling slices
rather than full frames (e.g. V4L2_PIX_FMT_H264_SLICE_PARSED could be
valid).

I would not worry too much about Gst, as we will likely use this device
through libv4l2 here, and hence will only notice the "emulated"
V4L2_PIX_FMT_MPEG2 and ignore the _PARSED variant. And without libv4l2,
we'd just ignore this driver completely. I doubt we will implement
per-device parsing inside Gst itself if it's already done in an external
library for us. libv4l2 might need some fixing, but hopefully it's not
beyond repair.

> 
> > Offsets from the beginning of the original video bitstream are supported
> > within the proposed interface; see the v4l2_mpeg_video_mpeg2_pic_hd->offset field.
> > 
> > > > Signed-off-by: Hugues Fruchet
> > > > ---
> > > >    Documentation/media/uapi/v4l/extended-controls.rst | 363 
> > > > +
> > > >    Documentation/media/uapi/v4l/pixfmt-013.rst|  10 +
> > > >    Documentation/media/uapi/v4l/vidioc-queryctrl.rst  |  38 ++-
> > > >    Documentation/media/videodev2.h.rst.exceptions |   6 +
> > > >    drivers/media/v4l2-core/v4l2-ctrls.c   |  53 +++
> > > >    drivers/media/v4l2-core/v4l2-ioctl.c   |   2 +
> > > >    include/uapi/linux/v4l2-controls.h |  94 ++
> > > >    include/uapi/linux/videodev2.h |   8 +
> > > >    8 files changed, 572 insertions(+), 2 deletions(-)
> > > > 
> > > > diff --git 

Re: [PATCH 1/5] [media] rockchip/rga: v4l2 m2m support

2017-06-27 Thread Nicolas Dufresne
On Tuesday, 27 June 2017 at 23:11 +0800, Jacob Chen wrote:
> Hi Nicolas.
> 
> 2017-06-26 23:49 GMT+08:00 Nicolas Dufresne <nico...@ndufresne.ca>:
> > 
> > On Monday, 26 June 2017 at 22:51 +0800, Jacob Chen wrote:
> > > Rockchip RGA is a separate 2D raster graphic acceleration unit.
> > > It
> > > accelerates 2D graphics operations, such as point/line drawing,
> > > image
> > > scaling, rotation, BitBLT, alpha blending and image
> > > blur/sharpness.
> > > 
> > > The driver is mostly based on the s5p-g2d v4l2 m2m driver.
> > > And supports various operations from the rendering pipeline.
> > >  - copy
> > >  - fast solid color fill
> > >  - rotation
> > >  - flip
> > >  - alpha blending
> > > 
> > > The code in rga-hw.c is used to configure regs according to
> > > operations.
> > > 
> > > The code in rga-buf.c is used to create a private MMU table for
> > > RGA. The tables are stored in a list, and removed when the buffer
> > > is cleaned up.
> > > 
> > > Signed-off-by: Jacob Chen <jacob-c...@iotwrt.com>
> > > ---
> > >  drivers/media/platform/Kconfig|  11 +
> > >  drivers/media/platform/Makefile   |   2 +
> > >  drivers/media/platform/rockchip-rga/Makefile  |   3 +
> > >  drivers/media/platform/rockchip-rga/rga-buf.c | 176 +
> > >  drivers/media/platform/rockchip-rga/rga-hw.c  | 456 
> > >  drivers/media/platform/rockchip-rga/rga-hw.h  | 434 
> > >  drivers/media/platform/rockchip-rga/rga.c | 979 ++
> > >  drivers/media/platform/rockchip-rga/rga.h | 133 
> > >  8 files changed, 2194 insertions(+)
> > >  create mode 100644 drivers/media/platform/rockchip-rga/Makefile
> > >  create mode 100644 drivers/media/platform/rockchip-rga/rga-buf.c
> > >  create mode 100644 drivers/media/platform/rockchip-rga/rga-hw.c
> > >  create mode 100644 drivers/media/platform/rockchip-rga/rga-hw.h
> > >  create mode 100644 drivers/media/platform/rockchip-rga/rga.c
> > >  create mode 100644 drivers/media/platform/rockchip-rga/rga.h
> > > 
> > > diff --git a/drivers/media/platform/Kconfig
> > > b/drivers/media/platform/Kconfig
> > > index c9106e1..8199bcf 100644
> > > --- a/drivers/media/platform/Kconfig
> > > +++ b/drivers/media/platform/Kconfig
> > > @@ -411,6 +411,17 @@ config VIDEO_RENESAS_VSP1
> > > To compile this driver as a module, choose M here: the
> > > module
> > > will be called vsp1.
> > > 
> > > +config VIDEO_ROCKCHIP_RGA
> > > + tristate "Rockchip Raster 2d Graphic Acceleration Unit"
> > > + depends on VIDEO_DEV && VIDEO_V4L2 && HAS_DMA
> > > + depends on ARCH_ROCKCHIP || COMPILE_TEST
> > > + select VIDEOBUF2_DMA_SG
> > > + select V4L2_MEM2MEM_DEV
> > > + default n
> > > + ---help---
> > > +   This is a v4l2 driver for Rockchip SOC RGA2
> > > +   2d graphics accelerator.
> > > +
> > >  config VIDEO_TI_VPE
> > >   tristate "TI VPE (Video Processing Engine) driver"
> > >   depends on VIDEO_DEV && VIDEO_V4L2
> > > diff --git a/drivers/media/platform/Makefile
> > > b/drivers/media/platform/Makefile
> > > index 349ddf6..3bf096f 100644
> > > --- a/drivers/media/platform/Makefile
> > > +++ b/drivers/media/platform/Makefile
> > > @@ -54,6 +54,8 @@ obj-$(CONFIG_VIDEO_RENESAS_FDP1)+=
> > > rcar_fdp1.o
> > >  obj-$(CONFIG_VIDEO_RENESAS_JPU)  += rcar_jpu.o
> > >  obj-$(CONFIG_VIDEO_RENESAS_VSP1) += vsp1/
> > > 
> > > +obj-$(CONFIG_VIDEO_ROCKCHIP_RGA) += rockchip-rga/
> > > +
> > >  obj-y+= omap/
> > > 
> > >  obj-$(CONFIG_VIDEO_AM437X_VPFE)  += am437x/
> > > diff --git a/drivers/media/platform/rockchip-rga/Makefile
> > > b/drivers/media/platform/rockchip-rga/Makefile
> > > new file mode 100644
> > > index 000..92fe254
> > > --- /dev/null
> > > +++ b/drivers/media/platform/rockchip-rga/Makefile
> > > @@ -0,0 +1,3 @@
> > > +rockchip-rga-objs := rga.o rga-hw.o rga-buf.o
> > > +
> > > +obj-$(CONFIG_VIDEO_ROCKCHIP_RGA) += rockchip-rga.o
> > > diff --git a/drivers/media/platform/rockchip-rga/rga-buf.c
> > > b/drivers/media/platform

Re: [PATCH 1/5] [media] rockchip/rga: v4l2 m2m support

2017-06-26 Thread Nicolas Dufresne
On Monday, 26 June 2017 at 22:51 +0800, Jacob Chen wrote:
> Rockchip RGA is a separate 2D raster graphic acceleration unit. It
> accelerates 2D graphics operations, such as point/line drawing, image
> scaling, rotation, BitBLT, alpha blending and image blur/sharpness.
> 
> The driver is mostly based on the s5p-g2d v4l2 m2m driver.
> And supports various operations from the rendering pipeline.
>  - copy
>  - fast solid color fill
>  - rotation
>  - flip
>  - alpha blending
> 
> The code in rga-hw.c is used to configure regs according to
> operations.
> 
> The code in rga-buf.c is used to create a private MMU table for RGA.
> The tables are stored in a list, and removed when the buffer is
> cleaned up.
> 
> Signed-off-by: Jacob Chen 
> ---
>  drivers/media/platform/Kconfig|  11 +
>  drivers/media/platform/Makefile   |   2 +
>  drivers/media/platform/rockchip-rga/Makefile  |   3 +
>  drivers/media/platform/rockchip-rga/rga-buf.c | 176 +
>  drivers/media/platform/rockchip-rga/rga-hw.c  | 456 
>  drivers/media/platform/rockchip-rga/rga-hw.h  | 434 
>  drivers/media/platform/rockchip-rga/rga.c | 979 ++
>  drivers/media/platform/rockchip-rga/rga.h | 133 
>  8 files changed, 2194 insertions(+)
>  create mode 100644 drivers/media/platform/rockchip-rga/Makefile
>  create mode 100644 drivers/media/platform/rockchip-rga/rga-buf.c
>  create mode 100644 drivers/media/platform/rockchip-rga/rga-hw.c
>  create mode 100644 drivers/media/platform/rockchip-rga/rga-hw.h
>  create mode 100644 drivers/media/platform/rockchip-rga/rga.c
>  create mode 100644 drivers/media/platform/rockchip-rga/rga.h
> 
> diff --git a/drivers/media/platform/Kconfig
> b/drivers/media/platform/Kconfig
> index c9106e1..8199bcf 100644
> --- a/drivers/media/platform/Kconfig
> +++ b/drivers/media/platform/Kconfig
> @@ -411,6 +411,17 @@ config VIDEO_RENESAS_VSP1
>     To compile this driver as a module, choose M here: the
> module
>     will be called vsp1.
>  
> +config VIDEO_ROCKCHIP_RGA
> + tristate "Rockchip Raster 2d Graphic Acceleration Unit"
> + depends on VIDEO_DEV && VIDEO_V4L2 && HAS_DMA
> + depends on ARCH_ROCKCHIP || COMPILE_TEST
> + select VIDEOBUF2_DMA_SG
> + select V4L2_MEM2MEM_DEV
> + default n
> + ---help---
> +   This is a v4l2 driver for Rockchip SOC RGA2
> +   2d graphics accelerator.
> +
>  config VIDEO_TI_VPE
>   tristate "TI VPE (Video Processing Engine) driver"
>   depends on VIDEO_DEV && VIDEO_V4L2
> diff --git a/drivers/media/platform/Makefile
> b/drivers/media/platform/Makefile
> index 349ddf6..3bf096f 100644
> --- a/drivers/media/platform/Makefile
> +++ b/drivers/media/platform/Makefile
> @@ -54,6 +54,8 @@ obj-$(CONFIG_VIDEO_RENESAS_FDP1)+=
> rcar_fdp1.o
>  obj-$(CONFIG_VIDEO_RENESAS_JPU)  += rcar_jpu.o
>  obj-$(CONFIG_VIDEO_RENESAS_VSP1) += vsp1/
>  
> +obj-$(CONFIG_VIDEO_ROCKCHIP_RGA) += rockchip-rga/
> +
>  obj-y+= omap/
>  
>  obj-$(CONFIG_VIDEO_AM437X_VPFE)  += am437x/
> diff --git a/drivers/media/platform/rockchip-rga/Makefile
> b/drivers/media/platform/rockchip-rga/Makefile
> new file mode 100644
> index 000..92fe254
> --- /dev/null
> +++ b/drivers/media/platform/rockchip-rga/Makefile
> @@ -0,0 +1,3 @@
> +rockchip-rga-objs := rga.o rga-hw.o rga-buf.o
> +
> +obj-$(CONFIG_VIDEO_ROCKCHIP_RGA) += rockchip-rga.o
> diff --git a/drivers/media/platform/rockchip-rga/rga-buf.c
> b/drivers/media/platform/rockchip-rga/rga-buf.c
> new file mode 100644
> index 000..8582092
> --- /dev/null
> +++ b/drivers/media/platform/rockchip-rga/rga-buf.c
> @@ -0,0 +1,176 @@
> +/*
> + * Copyright (C) Fuzhou Rockchip Electronics Co.Ltd
> + * Author: Jacob Chen 
> + *
> + * This software is licensed under the terms of the GNU General
> Public
> + * License version 2, as published by the Free Software Foundation,
> and
> + * may be copied, distributed, and modified under those terms.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> + * GNU General Public License for more details.
> + */
> +
> +#include 
> +
> +#include 
> +#include 
> +#include 
> +#include 
> +#include 
> +
> +#include "rga-hw.h"
> +#include "rga.h"
> +
> +static int
> +rga_queue_setup(struct vb2_queue *vq,
> +unsigned int *nbuffers, unsigned int *nplanes,
> +unsigned int sizes[], struct device *alloc_devs[])
> +{
> + struct rga_ctx *ctx = vb2_get_drv_priv(vq);
> + struct rga_frame *f = rga_get_frame(ctx, vq->type);
> +
> + if (IS_ERR(f))
> + return PTR_ERR(f);
> +
> + sizes[0] = f->size;
> + *nplanes = 1;
> +
> + if (*nbuffers == 0)
> + *nbuffers = 1;
> +
> + return 0;
> +}
> +
> +static int rga_buf_prepare(struct 

Re: [PATCH 08/12] [media] vb2: add 'ordered' property to queues

2017-06-16 Thread Nicolas Dufresne
On Friday, 16 June 2017 at 16:39 +0900, Gustavo Padovan wrote:
> > From: Gustavo Padovan 
> 
> For explicit synchronization (and soon for HAL3/Request API) we need
> the v4l2-driver to guarantee the ordering in which the buffers were queued
> by userspace. This is already true for many drivers, but we never had
> the need to say it.

Phrased this way, that sounds like a statement that an m2m decoder
handling B-frames will just never be supported. I think decoders are a
very important use case for explicit synchronization.

What I believe happens with decoders is simply that the allocation
order (the order in which empty buffers are retrieved from the queue)
will be different from the actual presentation order. Also, multiple
buffers end up being filled at the same time. Some firmware may only
report the new order at the last minute, making the fence indeed
useless, but those are firmware details and the information can be
known earlier. Also, this information would be known by userspace in
the up-coming case of stateless decoders (see STM patches and Rockchip
comments [0]), because it is available while parsing the bitstream. For
this last scenario, the fact that the ordering is not the same would
disable the fences even though userspace knows which fences to wait for
first. Those drivers would need to set "ordered" to 0, which would be
counter-intuitive.

I think this use case is too important to just ignore it. I would
expect that we at least have a todo with something sensible as a plan
to cover this.

> 
> > Signed-off-by: Gustavo Padovan 
> ---
>  include/media/videobuf2-core.h | 4 
>  1 file changed, 4 insertions(+)
> 
> diff --git a/include/media/videobuf2-core.h b/include/media/videobuf2-core.h
> index aa43e43..a8b800e 100644
> --- a/include/media/videobuf2-core.h
> +++ b/include/media/videobuf2-core.h
> @@ -491,6 +491,9 @@ struct vb2_buf_ops {
>   * @last_buffer_dequeued: used in poll() and DQBUF to immediately return if 
> the
> >   * last decoded buffer was already dequeued. Set for capture queues
> >   * when a buffer with the V4L2_BUF_FLAG_LAST is dequeued.
> + * @ordered: if the driver can guarantee that the queue will be ordered or 
> not.
> > + * The default is not ordered unless the driver sets this flag. It
> > + * is mandatory for using explicit fences.
> >   * @fileio:file io emulator internal data, used only if emulator 
> > is active
> >   * @threadio:  thread io internal data, used only if thread is active
>   */
> @@ -541,6 +544,7 @@ struct vb2_queue {
> > >   unsigned intis_output:1;
> > >   unsigned intcopy_timestamp:1;
> > >   unsigned intlast_buffer_dequeued:1;
> > > + unsigned intordered:1;
>  
> > >   struct vb2_fileio_data  *fileio;
> > >   struct vb2_threadio_data*threadio;

signature.asc
Description: This is a digitally signed message part


Re: [RFC 00/10] V4L2 explicit synchronization support

2017-06-09 Thread Nicolas Dufresne
On Monday, April 3, 2017 at 15:46 -0400, Javier Martinez Canillas
wrote:
> > The problem is that adding implicit fences changed the behavior of
> > the ioctls, causing gstreamer to wait forever for buffers to be ready.
> > 
> 
> The problem was related to trying to make user-space unaware of the implicit
> fences support, and so it tried to QBUF a buffer that had already a pending
> fence. A workaround was to block the second QBUF ioctl if the buffer had a
> pending fence, but this caused the mentioned deadlock since GStreamer wasn't
> expecting the QBUF ioctl to block.

That QBUF may block isn't a problem per se, but modern userspace
applications, not just GStreamer, need "cancellable" operations. This
is achieved by avoiding blocking calls that cannot be interrupted.
What is usually done is to poll the device FD to determine when it is
safe to call QBUF in a way that it will return in a finite amount of
time. With implicit fences this could not work, since the driver is
not yet aware of the fence, and hence cannot use it to delay the poll
operation. That said, it's not clear why it couldn't wait
asynchronously the way this RFC does with explicit fences.

In the current RFC, the fences are signalled through a callback, and
QBUF is split in half. So waiting on the fence is done elsewhere, and
the qbuf operation completes on the fence callback thread.

Nicolas 



Re: [PATCH v2 3/3] libv4l-codecparsers: add GStreamer mpeg2 parser

2017-05-01 Thread Nicolas Dufresne
On Friday, April 28, 2017 at 17:02 +0200, Hugues Fruchet wrote:
> Add the mpeg2 codecparser backend glue which will
> call the GStreamer parsing functions.
> 
> Signed-off-by: Hugues Fruchet 
> ---
>  configure.ac|  21 ++
>  lib/libv4l-codecparsers/Makefile.am |  14 +-
>  lib/libv4l-codecparsers/libv4l-cparsers-mpeg2.c | 375
> 
>  lib/libv4l-codecparsers/libv4l-cparsers.c   |   4 +
>  4 files changed, 413 insertions(+), 1 deletion(-)
>  create mode 100644 lib/libv4l-codecparsers/libv4l-cparsers-mpeg2.c
> 
> diff --git a/configure.ac b/configure.ac
> index 9ce7392..ce43f18 100644
> --- a/configure.ac
> +++ b/configure.ac
> @@ -273,6 +273,25 @@ fi
>  
>  AC_SUBST([JPEG_LIBS])
>  
> +# Check for GStreamer codecparsers
> +
> +gst_codecparsers_pkgconfig=false
> +PKG_CHECK_MODULES([GST], [gstreamer-1.0 >= 1.8.0],
> [gst_pkgconfig=true], [gst_pkgconfig=false])
> +if test "x$gst_pkgconfig" = "xfalse"; then
> +   AC_MSG_WARN(GStreamer library is not available)
> +else
> +   PKG_CHECK_MODULES([GST_BASE], [gstreamer-base-1.0 >= 1.8.0],
> [gst_base_pkgconfig=true], [gst_base_pkgconfig=false])
> +   if test "x$gst_base_pkgconfig" = "xfalse"; then
> +  AC_MSG_WARN(GStreamer base library is not available)
> +   else
> +  PKG_CHECK_MODULES(GST_CODEC_PARSERS, [gstreamer-codecparsers-
> 1.0 >= 1.8.0], [gst_codecparsers_pkgconfig=true], 

You should only check for the codecparser library; the rest are
dependencies which will be pulled in automatically by pkg-config. If
for some reason you needed multiple libs that don't depend on each
other, notice the S in PKG_CHECK_MODULES: you can do a single
invocation.
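A minimal sketch of that single check, adapted from the hunk above: only the parser library is tested, and pkg-config pulls in gstreamer-1.0 and gstreamer-base-1.0 as its dependencies. The exact action-if-not-found handling is an assumption here.

```m4
# Hypothetical simplification of the check above: a single invocation
# for the codecparser library only.
PKG_CHECK_MODULES([GST_CODEC_PARSERS],
                  [gstreamer-codecparsers-1.0 >= 1.8.0],
                  [gst_codecparsers_pkgconfig=true],
                  [gst_codecparsers_pkgconfig=false
                   AC_MSG_WARN([GStreamer codecparser library is not available])])
AM_CONDITIONAL([HAVE_GST_CODEC_PARSERS],
               [test x$gst_codecparsers_pkgconfig = xtrue])
```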

> [gst_codecparsers_pkgconfig=false])
> +  if test "x$gst_codecparsers_pkgconfig" = "xfalse"; then
> + AC_MSG_WARN(GStreamer codecparser library is not available)
> +  fi
> +   fi
> +fi
> +AM_CONDITIONAL([HAVE_GST_CODEC_PARSERS], [test
> x$gst_codecparsers_pkgconfig = xtrue])
> +
>  # Check for pthread
>  
>  AS_IF([test x$enable_shared != xno],
> @@ -477,6 +496,7 @@ AM_COND_IF([WITH_V4L2_CTL_LIBV4L],
> [USE_V4L2_CTL="yes"], [USE_V4L2_CTL="no"])
>  AM_COND_IF([WITH_V4L2_CTL_STREAM_TO], [USE_V4L2_CTL="yes"],
> [USE_V4L2_CTL="no"])
>  AM_COND_IF([WITH_V4L2_COMPLIANCE_LIBV4L],
> [USE_V4L2_COMPLIANCE="yes"], [USE_V4L2_COMPLIANCE="no"])
>  AS_IF([test "x$alsa_pkgconfig" = "xtrue"], [USE_ALSA="yes"],
> [USE_ALSA="no"])
> +AS_IF([test "x$gst_codecparsers_pkgconfig" = "xtrue"],
> [USE_GST_CODECPARSERS="yes"], [USE_GST_CODECPARSERS="no"])
>  
>  AC_OUTPUT
>  
> @@ -497,6 +517,7 @@ compile time options summary
>  pthread : $have_pthread
>  QT version  : $QT_VERSION
>  ALSA support: $USE_ALSA
> +GST codecparsers: $USE_GST_CODECPARSERS
>  
>  build dynamic libs  : $enable_shared
>  build static libs   : $enable_static
> diff --git a/lib/libv4l-codecparsers/Makefile.am b/lib/libv4l-
> codecparsers/Makefile.am
> index a9d6c8b..61f4730 100644
> --- a/lib/libv4l-codecparsers/Makefile.am
> +++ b/lib/libv4l-codecparsers/Makefile.am
> @@ -1,9 +1,21 @@
>  if WITH_V4L_PLUGINS
> +if HAVE_GST_CODEC_PARSERS
> +
>  libv4l2plugin_LTLIBRARIES = libv4l-codecparsers.la
> -endif
>  
>  libv4l_codecparsers_la_SOURCES = libv4l-cparsers.c libv4l-cparsers.h
>  
>  libv4l_codecparsers_la_CPPFLAGS = $(CFLAG_VISIBILITY)
> -I$(top_srcdir)/lib/libv4l2/ -I$(top_srcdir)/lib/libv4lconvert/
>  libv4l_codecparsers_la_LDFLAGS = -avoid-version -module -shared
> -export-dynamic -lpthread
>  libv4l_codecparsers_la_LIBADD = ../libv4l2/libv4l2.la
> +
> +# GStreamer codecparsers library
> +libv4l_codecparsers_la_CFLAGS = $(GST_CFLAGS) -DGST_USE_UNSTABLE_API
> +libv4l_codecparsers_la_LDFLAGS += $(GST_LIB_LDFLAGS)
> +libv4l_codecparsers_la_LIBADD += $(GLIB_LIBS) $(GST_LIBS)
> $(GST_BASE_LIBS) $(GST_CODEC_PARSERS_LIBS) $(NULL)
> +
> +# MPEG-2 parser back-end
> +libv4l_codecparsers_la_SOURCES += libv4l-cparsers-mpeg2.c
> +
> +endif
> +endif
> diff --git a/lib/libv4l-codecparsers/libv4l-cparsers-mpeg2.c
> b/lib/libv4l-codecparsers/libv4l-cparsers-mpeg2.c
> new file mode 100644
> index 000..3456b73
> --- /dev/null
> +++ b/lib/libv4l-codecparsers/libv4l-cparsers-mpeg2.c
> @@ -0,0 +1,375 @@
> +/*
> + * libv4l-cparsers-mpeg2.c
> + *
> + * Copyright (C) STMicroelectronics SA 2017
> + * Authors: Hugues Fruchet 
> + *  Tifaine Inguere 
> + *  for STMicroelectronics.
> + *
> + * This program is free software; you can redistribute it and/or
> modify
> + * it under the terms of the GNU Lesser General Public License as
> published by
> + * the Free Software Foundation; either version 2.1 of the License,
> or
> + * (at your option) any later version.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR 

Re: [RFC 0/4] Exynos DRM: add Picture Processor extension

2017-04-26 Thread Nicolas Dufresne
On Wednesday, April 26, 2017 at 21:31 +0200, Tobias Jakobi wrote:
> I'm pretty sure you have misread Marek's description of the patchset.
> The picture processor API should replaced/deprecate the IPP API that is
> currently implemented in the Exynos DRM.
> 
> In particular this affects the following files:
> - drivers/gpu/drm/exynos/exynos_drm_ipp.{c,h}
> - drivers/gpu/drm/exynos/exynos_drm_fimc.{c,h}
> - drivers/gpu/drm/exynos/exynos_drm_gsc.{c,h}
> - drivers/gpu/drm/exynos/exynos_drm_rotator.{c,h}
> 
> I know only two places where the IPP API is actually used. Tizen and my
> experimental mpv backend.

Sorry for the noise then.

regards,
Nicolas



Re: [RFC 0/4] Exynos DRM: add Picture Processor extension

2017-04-26 Thread Nicolas Dufresne
On Wednesday, April 26, 2017 at 18:52 +0200, Tobias Jakobi wrote:
> Hello again,
> 
> 
> Nicolas Dufresne wrote:
> > Le mercredi 26 avril 2017 à 01:21 +0300, Sakari Ailus a écrit :
> > > Hi Marek,
> > > 
> > > On Thu, Apr 20, 2017 at 01:23:09PM +0200, Marek Szyprowski wrote:
> > > > Hi Laurent,
> > > > 
> > > > On 2017-04-20 12:25, Laurent Pinchart wrote:
> > > > > Hi Marek,
> > > > > 
> > > > > (CC'ing Sakari Ailus)
> > > > > 
> > > > > Thank you for the patches.
> > > > > 
> > > > > On Thursday 20 Apr 2017 11:13:36 Marek Szyprowski wrote:
> > > > > > Dear all,
> > > > > > 
> > > > > > This is an updated proposal for extending EXYNOS DRM API with 
> > > > > > generic
> > > > > > support for hardware modules, which can be used for processing 
> > > > > > image data
> > > > > > from the one memory buffer to another. Typical memory-to-memory 
> > > > > > operations
> > > > > > are: rotation, scaling, colour space conversion or mix of them. 
> > > > > > This is a
> > > > > > follow-up of my previous proposal "[RFC 0/2] New feature: 
> > > > > > Framebuffer
> > > > > > processors", which has been rejected as "not really needed in the 
> > > > > > DRM
> > > > > > core":
> > > > > > http://www.mail-archive.com/dri-devel@lists.freedesktop.org/msg146286.html
> > > > > > 
> > > > > > In this proposal I moved all the code to Exynos DRM driver, so now 
> > > > > > this
> > > > > > will be specific only to Exynos DRM. I've also changed the name from
> > > > > > framebuffer processor (fbproc) to picture processor (pp) to avoid 
> > > > > > confusion
> > > > > > with fbdev API.
> > > > > > 
> > > > > > Here is a bit more information what picture processors are:
> > > > > > 
> > > > > > Embedded SoCs are known to have a number of hardware blocks, which 
> > > > > > perform
> > > > > > such operations. They can be used in paralel to the main GPU module 
> > > > > > to
> > > > > > offload CPU from processing grapics or video data. One of example 
> > > > > > use of
> > > > > > such modules is implementing video overlay, which usually requires 
> > > > > > color
> > > > > > space conversion from NV12 (or similar) to RGB32 color space and 
> > > > > > scaling to
> > > > > > target window size.
> > > > > > 
> > > > > > The proposed API is heavily inspired by atomic KMS approach - it is 
> > > > > > also
> > > > > > based on DRM objects and their properties. A new DRM object is 
> > > > > > introduced:
> > > > > > picture processor (called pp for convenience). Such objects have a 
> > > > > > set of
> > > > > > standard DRM properties, which describes the operation to be 
> > > > > > performed by
> > > > > > respective hardware module. In typical case those properties are a 
> > > > > > source
> > > > > > fb id and rectangle (x, y, width, height) and destination fb id and
> > > > > > rectangle. Optionally a rotation property can be also specified if
> > > > > > supported by the given hardware. To perform an operation on image 
> > > > > > data,
> > > > > > userspace provides a set of properties and their values for given 
> > > > > > fbproc
> > > > > > object in a similar way as object and properties are provided for
> > > > > > performing atomic page flip / mode setting.
> > > > > > 
> > > > > > The proposed API consists of the 3 new ioctls:
> > > > > > - DRM_IOCTL_EXYNOS_PP_GET_RESOURCES: to enumerate all available 
> > > > > > picture
> > > > > >   processors,
> > > > > > - DRM_IOCTL_EXYNOS_PP_GET: to query capabilities of given picture
> > > > > >   processor,
> > > > > > - DRM_IOCTL_EXYNOS_PP_COMMIT: to perform operation described by 
> > > > > > given
> > > >

Re: [RFC 0/4] Exynos DRM: add Picture Processor extension

2017-04-26 Thread Nicolas Dufresne
On Wednesday, April 26, 2017 at 01:21 +0300, Sakari Ailus wrote:
> Hi Marek,
> 
> On Thu, Apr 20, 2017 at 01:23:09PM +0200, Marek Szyprowski wrote:
> > Hi Laurent,
> > 
> > On 2017-04-20 12:25, Laurent Pinchart wrote:
> > > Hi Marek,
> > > 
> > > (CC'ing Sakari Ailus)
> > > 
> > > Thank you for the patches.
> > > 
> > > On Thursday 20 Apr 2017 11:13:36 Marek Szyprowski wrote:
> > > > Dear all,
> > > > 
> > > > This is an updated proposal for extending EXYNOS DRM API with generic
> > > > support for hardware modules, which can be used for processing image 
> > > > data
> > > > from the one memory buffer to another. Typical memory-to-memory 
> > > > operations
> > > > are: rotation, scaling, colour space conversion or mix of them. This is 
> > > > a
> > > > follow-up of my previous proposal "[RFC 0/2] New feature: Framebuffer
> > > > processors", which has been rejected as "not really needed in the DRM
> > > > core":
> > > > http://www.mail-archive.com/dri-devel@lists.freedesktop.org/msg146286.html
> > > > 
> > > > In this proposal I moved all the code to Exynos DRM driver, so now this
> > > > will be specific only to Exynos DRM. I've also changed the name from
> > > > framebuffer processor (fbproc) to picture processor (pp) to avoid 
> > > > confusion
> > > > with fbdev API.
> > > > 
> > > > Here is a bit more information what picture processors are:
> > > > 
> > > > Embedded SoCs are known to have a number of hardware blocks, which 
> > > > perform
> > > > such operations. They can be used in paralel to the main GPU module to
> > > > offload CPU from processing grapics or video data. One of example use of
> > > > such modules is implementing video overlay, which usually requires color
> > > > space conversion from NV12 (or similar) to RGB32 color space and 
> > > > scaling to
> > > > target window size.
> > > > 
> > > > The proposed API is heavily inspired by atomic KMS approach - it is also
> > > > based on DRM objects and their properties. A new DRM object is 
> > > > introduced:
> > > > picture processor (called pp for convenience). Such objects have a set 
> > > > of
> > > > standard DRM properties, which describes the operation to be performed 
> > > > by
> > > > respective hardware module. In typical case those properties are a 
> > > > source
> > > > fb id and rectangle (x, y, width, height) and destination fb id and
> > > > rectangle. Optionally a rotation property can be also specified if
> > > > supported by the given hardware. To perform an operation on image data,
> > > > userspace provides a set of properties and their values for given fbproc
> > > > object in a similar way as object and properties are provided for
> > > > performing atomic page flip / mode setting.
> > > > 
> > > > The proposed API consists of the 3 new ioctls:
> > > > - DRM_IOCTL_EXYNOS_PP_GET_RESOURCES: to enumerate all available picture
> > > >   processors,
> > > > - DRM_IOCTL_EXYNOS_PP_GET: to query capabilities of given picture
> > > >   processor,
> > > > - DRM_IOCTL_EXYNOS_PP_COMMIT: to perform operation described by given
> > > >   property set.
> > > > 
> > > > The proposed API is extensible. Drivers can attach their own, custom
> > > > properties to add support for more advanced picture processing (for 
> > > > example
> > > > blending).
> > > > 
> > > > This proposal aims to replace Exynos DRM IPP (Image Post Processing)
> > > > subsystem. IPP API is over-engineered in general, but not really 
> > > > extensible
> > > > on the other side. It is also buggy, with significant design flaws - the
> > > > biggest issue is the fact that the API covers memory-2-memory picture
> > > > operations together with CRTC writeback and duplicating features, which
> > > > belongs to video plane. Comparing with IPP subsystem, the PP framework 
> > > > is
> > > > smaller (1807 vs 778 lines) and allows driver simplification (Exynos
> > > > rotator driver smaller by over 200 lines).

Just a side note: we have written code in GStreamer using the Exynos 4
FIMC IPP driver. I don't know how many deployments, if any, still
exist (Exynos 4 is relatively old now), but userspace for the FIMC
driver does exist. We use it for colour transformation (from tiled to
linear) and scaling. The FIMC driver is in fact quite stable in the
upstream kernel today. The GScaler V4L2 M2M driver on Exynos 5 is
largely based on it and has received some maintenance to work properly
in GStreamer. Unlike this DRM API, you can reuse the same userspace
code across multiple platforms (which we do already). We have also
integrated this driver in Chromium in the past (not upstream though).

I am well aware that the blitter driver has not received much
attention though. But again, V4L2 offers a generic interface to
userspace applications. Fixing this driver could enable work like this
one:

https://bugzilla.gnome.org/show_bug.cgi?id=772766

This work-in-progress feature is a generic hardware-accelerated video
mixer. It has been tested with the i.MX6 v4l2 m2m blitter driver

Re: support autofocus / autogain in libv4l2

2017-04-25 Thread Nicolas Dufresne
On Tuesday, April 25, 2017 at 13:30 +0200, Pali Rohár wrote:
> Pinos (renamed from PulseVideo)
> 
> https://blogs.gnome.org/uraeus/2015/06/30/introducing-pulse-video/
> https://cgit.freedesktop.org/~wtay/pinos/
> 
> But from git history it looks like it is probably dead now...

This is also incorrect. See the "work" branch. It is still a one-man
show, with the code being aggressively refactored. I suspect this will
be the case until the "form" is considered acceptable.

Nicolas



Re: support autofocus / autogain in libv4l2

2017-04-25 Thread Nicolas Dufresne
On Tuesday, April 25, 2017 at 10:05 +0200, Pavel Machek wrote:
> Well, fd's are hard, because application can do fork() and now
> interesting stuff happens. Threads are tricky, because now you have
> locking etc.
> 
> libv4l2 is designed to be LD_PRELOADED. That is not really feasible
> with "complex" library.

That is incorrect. The library proposes an API where you simply
replace certain low-level calls, like ioctl -> v4l2_ioctl, open ->
v4l2_open(). You have to do that explicitly in your existing code. It
does not abstract the API itself, unlike libdrm.

Nicolas



Re: support autofocus / autogain in libv4l2

2017-04-25 Thread Nicolas Dufresne
On Tuesday, April 25, 2017 at 10:08 +0200, Pali Rohár wrote:
> On Tuesday 25 April 2017 10:05:38 Pavel Machek wrote:
> > > > It would be nice if more than one application could be
> > > > accessing the
> > > > camera at the same time... (I.e. something graphical running
> > > > preview
> > > > then using command line tool to grab a picture.) This one is
> > > > definitely not solveable inside a library...
> > > 
> > > Someone once suggested to have something like pulseaudio for V4L.
> > > For such usage, a server would be interesting. Yet, I would code
> > > it
> > > in a way that applications using libv4l will talk with such
> > > daemon
> > > in a transparent way.
> > 
> > Yes, we need something like pulseaudio for V4L. And yes, we should
> > make it transparent for applications using libv4l.
> 
> IIRC there is already some effort in writing such "video" server
> which
> would support accessing more application into webcam video, like
> pulseaudio server for accessing more applications to microphone
> input.
> 

Because references are nice:

https://blogs.gnome.org/uraeus/2015/06/30/introducing-pulse-video/
https://gstconf.ubicast.tv/videos/camera-sharing-and-sandboxing-with-pinos/

And why the internals are not going to be implemented using GStreamer in the 
end:
https://gstconf.ubicast.tv/videos/keep-calm-and-refactor-about-the-essence-of-gstreamer/

regards,
Nicolas



Re: [PATCH v7 5/9] media: venus: vdec: add video decoder files

2017-03-27 Thread Nicolas Dufresne
On Monday, March 27, 2017 at 10:45 +0200, Hans Verkuil wrote:
> > > timestamp and sequence are only set for CAPTURE, not OUTPUT. Is
> > > that correct?
> > 
> > Correct. I can add sequence for the OUTPUT queue too, but I have no
> > idea how that sequence is used by userspace.
> 
> You set V4L2_BUF_FLAG_TIMESTAMP_COPY, so you have to copy the
> timestamp from the output buffer
> to the capture buffer, if that makes sense for this codec. If not,
> then you shouldn't use that
> V4L2_BUF_FLAG and just generate new timestamps whenever a capture
> buffer is ready.
> 
> For sequence numbering just give the output queue its own sequence
> counter.

Btw, GStreamer and Chromium only support TIMESTAMP_COPY, and will most
likely leak frames if you craft timestamps.

Nicolas



Re: [PATCH v7 5/9] media: venus: vdec: add video decoder files

2017-03-26 Thread Nicolas Dufresne
On Sunday, March 26, 2017 at 00:30 +0200, Stanimir Varbanov wrote:
> > > +vb->planes[0].data_offset = data_offset;
> > > +vb->timestamp = timestamp_us * NSEC_PER_USEC;
> > > +vbuf->sequence = inst->sequence++;
> > 
> > timestamp and sequence are only set for CAPTURE, not OUTPUT. Is
> > that correct?
> 
> Correct. I can add sequence for the OUTPUT queue too, but I have no idea 
> how that sequence is used by userspace.

Neither GStreamer nor Chromium seems to use it. What does that number
mean for an m2m driver? Does it really mean something?

Nicolas



Re: [PATCH v7 5/9] media: venus: vdec: add video decoder files

2017-03-24 Thread Nicolas Dufresne
On Friday, March 24, 2017 at 15:41 +0100, Hans Verkuil wrote:
> > +static const struct venus_format vdec_formats[] = {
> > + {
> > + .pixfmt = V4L2_PIX_FMT_NV12,
> > + .num_planes = 1,
> > + .type = V4L2_BUF_TYPE_VIDEO_CAPTURE_MPLANE,
> 
> Just curious: is NV12 the only uncompressed format supported by the
> hardware?
> Or just the only one that is implemented here?

The downstream kernel [0] from Qualcomm has:

{
	.name = "UBWC YCbCr Semiplanar 4:2:0",
	.description = "UBWC Y/CbCr 4:2:0",
	.fourcc = V4L2_PIX_FMT_NV12_UBWC,
	.num_planes = 2,
	.get_frame_size = get_frame_size_nv12_ubwc,
	.type = CAPTURE_PORT,
},
{
	.name = "UBWC YCbCr Semiplanar 4:2:0 10bit",
	.description = "UBWC Y/CbCr 4:2:0 10bit",
	.fourcc = V4L2_PIX_FMT_NV12_TP10_UBWC,
	.num_planes = 2,
	.get_frame_size = get_frame_size_nv12_ubwc_10bit,
	.type = CAPTURE_PORT,
},

I have no idea what UBWC stands for. The performance in NV12 is more
than decent from my testing. Though, there is no 10-bit variant.

regards,
Nicolas

[0] https://android.googlesource.com/kernel/msm/+/android-7.1.0_r0.2/drivers/media/platform/msm/vidc/msm_vdec.c#695



Re: [PATCH v5 00/39] i.MX Media Driver

2017-03-22 Thread Nicolas Dufresne
On Tuesday, March 21, 2017 at 11:36, Russell King - ARM Linux wrote:
>     warn: v4l2-test-formats.cpp(1187): S_PARM is
> supported for buftype 2, but not ENUM_FRAMEINTERVALS
>     warn: v4l2-test-formats.cpp(1194): S_PARM is
> supported but doesn't report V4L2_CAP_TIMEPERFRAME

For encoders, the framerate value is used as a plain numerical value
to implement bitrate control, so in most cases any interval is
accepted. Though, it would be cleaner to just implement the
enumeration; it's quite simple when you support everything.

Nicolas



Re: [PATCH v5 00/39] i.MX Media Driver

2017-03-19 Thread Nicolas Dufresne
On Sunday, March 19, 2017 at 00:54, Russell King - ARM Linux
wrote:
> > 
> > In practice, I have the impression there is a fair reason why
> > framerate
> > enumeration isn't implemented (considering there is only 1 valid
> > rate).
> 
> That's actually completely incorrect.
> 
> With the capture device interfacing directly with CSI, it's possible
> _today_ to select:
> 
> * the CSI sink pad's resolution
> * the CSI sink pad's resolution with the width and/or height halved
> * the CSI sink pad's frame rate
> * the CSI sink pad's frame rate divided by the frame drop factor
> 
> To put it another way, these are possible:
> 
> # v4l2-ctl -d /dev/video10 --list-formats-ext
> ioctl: VIDIOC_ENUM_FMT
>     Index   : 0
>     Type    : Video Capture
>     Pixel Format: 'RGGB'
>     Name    : 8-bit Bayer RGRG/GBGB
>     Size: Discrete 816x616
>     Interval: Discrete 0.040s (25.000 fps)
>     Interval: Discrete 0.048s (20.833 fps)
>     Interval: Discrete 0.050s (20.000 fps)
>     Interval: Discrete 0.053s (18.750 fps)
>     Interval: Discrete 0.060s (16.667 fps)
>     Interval: Discrete 0.067s (15.000 fps)
>     Interval: Discrete 0.080s (12.500 fps)
>     Interval: Discrete 0.100s (10.000 fps)
>     Interval: Discrete 0.120s (8.333 fps)
>     Interval: Discrete 0.160s (6.250 fps)
>     Interval: Discrete 0.200s (5.000 fps)
>     Interval: Discrete 0.240s (4.167 fps)
>     Size: Discrete 408x616
> 
>     Size: Discrete 816x308
> 
>     Size: Discrete 408x308
> 
> 
> These don't become possible as a result of implementing the enums,
> they're all already requestable through /dev/video10.

OK, that wasn't clear. So basically video9 is a front-end to video10,
and it does not proxy the enumerations. I understand this is what you
are now fixing. And this has to be fixed, because I can imagine cases
where the front-end could support only a subset of the sub-device. So
having userspace enumerate on another device (and having to find this
device by walking the tree) is unlikely to work in all scenarios.

regards,
Nicolas

p.s. This is why caps negotiation is annoyingly complex in GStreamer,
especially since there is no shortcut: you connect pads, and they
figure out what format they will use between each other.



Re: [PATCH v5 00/39] i.MX Media Driver

2017-03-19 Thread Nicolas Dufresne
On Sunday, March 19, 2017 at 14:21, Russell King - ARM Linux
wrote:
> > Can it be a point of failure?
> 
> There's a good reason why I dumped a full debug log using
> GST_DEBUG=*:9,
> analysed it for the cause of the failure, and tried several different
> pipelines, including the standard bayer2rgb plugin.
> 
> Please don't blame this on random stuff after analysis of the logs
> _and_
> reading the appropriate plugin code has shown where the problem is. 
> I
> know gstreamer can be very complex, but it's very possible to analyse
> the cause of problems and pin them down with detailed logs in
> conjunction
> with the source code.

I read your GStreamer analysis, and it was all correct.

Nicolas



Re: [PATCH v5 00/39] i.MX Media Driver

2017-03-19 Thread Nicolas Dufresne
On Sunday, March 19, 2017 at 09:55, Russell King - ARM Linux
wrote:
> 2) would it also make sense to allow gstreamer's v4l2src to try
> setting
>    a these parameters, and only fail if it's unable to set it?  IOW,
> if
>    I use:
> 
> gst-launch-1.0 v4l2src device=/dev/video10 ! \
> video/x-bayer,format=RGGB,framerate=20/1 ! ...
> 
>    where G_PARM says its currently configured for 25fps, but a S_PARM
>    with 20fps would actually succeed.

In the current design, v4l2src "probes" all possible formats, caches
this, and uses this information for negotiation. So after the caps
have been probed, there will be no TRY_FMT or anything like it
happening until it's too late. You have spotted a bug though: it
should be reading back the parm structure to validate (and probably
produce a not-negotiated error here).

Recently, especially for the i.MX work done by Pengutronix, there were
contributions to enhance this probing to support capabilities that are
not enumerable (e.g. interlacing, colorimetry) using TRY_FMT. There is
no TRY_PARM in the API to implement a similar fallback. Also, those
ended up creating a massive disaster for slow cameras: we now have UVC
cameras that take 6 s or more to start. I have no other choice but to
rewrite that now. We will negotiate the non-enumerable caps at the
last minute with TRY_FMT (when the subset is at its smallest). This
will by accident add support for this camera interface, but that
wasn't the goal. It would still fail with applications that enumerate
the possible resolutions and framerates and let you select them with a
drop-down (like Cheese). In general, I can only conclude that making
everything that matters enumerable is the only working way to go for
generic userspace.

Nicolas



Re: [PATCH v5 00/39] i.MX Media Driver

2017-03-18 Thread Nicolas Dufresne
On Saturday, March 18, 2017 at 20:43, Russell King - ARM Linux
wrote:
> On Sat, Mar 18, 2017 at 12:58:27PM -0700, Steve Longerbeam wrote:
> > Can you share your gstreamer pipeline? For now, until
> > VIDIOC_ENUM_FRAMESIZES is implemented, try a pipeline that
> > does not attempt to specify a frame rate. I use the attached
> > script for testing, which works for me.
> 
> It's nothing more than
> 
>   gst-launch-1.0 -v v4l2src !  ! xvimagesink
> 
> in my case, the conversions are bayer2rgbneon.  However, this only
> shows
> you the frame rate negotiated on the pads (which is actually good
> enough
> to show the issue.)
> 
> How I stumbled across though this was when I was trying to encode:
> 
>  gst-launch-1.0 v4l2src device=/dev/video9 ! bayer2rgbneon ! \
> videoconvert ! x264enc speed-preset=1 ! avimux ! \
> filesink location=test.avi
> 
> I noticed that vlc would always say it was playing the resulting AVI
> at 30fps.

In practice, I have the impression there is a fair reason why
framerate enumeration isn't implemented (considering there is only one
valid rate). Along with the norm fallback, GStreamer could also
consider the currently set framerate as returned by VIDIOC_G_PARM. At
the same time, implementing that enumeration should be
straightforward, and would make a large amount of existing userspace
work.

regards,
Nicolas



Re: media / v4l2-mc: wishlist for complex cameras (was Re: [PATCH v4 14/36] [media] v4l2-mc: add a function to inherit controls from a pipeline)

2017-03-15 Thread Nicolas Dufresne
On Wednesday, March 15, 2017 at 11:50 +0100, Philippe De Muyter wrote:
> > I would say: camorama, xawtv3, zbar, google talk, skype. If it runs
> > with those, it will likely run with any other application.
> > 
> 
> I would like to add the 'v4l2src' plugin of gstreamer, and on the
> imx6 its

While it would be nice if somehow you got v4l2src to work (in some
legacy/emulation mode through libv4l2), the longer-term plan is to
implement a smart bin that handles several v4l2src elements and can do
the required interactions, so we can expose a similar level of control
as found in the Android Camera HAL3, and maybe go even further,
assuming userspace can change the media tree at run-time. We might be
a long way from there, especially since some of the features depend on
how much the hardware can do. Just being able to figure out how to
build the MC tree dynamically seems really hard when thinking of a
generic mechanism. Also, the Request API will be needed.

I think for this one we'll need some userspace driver that enables the
features (not hides them), and that's what I'd be looking for from
libv4l2 in this regard.

> imx-specific counterpart 'imxv4l2videosrc' from the gstreamer-imx
> package
> at https://github.com/Freescale/gstreamer-imx, and 'v4l2-ctl'.

This one is specific to IMX hardware using the vendor driver. You can
probably ignore it.

Nicolas



Re: [RFC PATCH 00/12] Ion cleanup in preparation for moving out of staging

2017-03-14 Thread Nicolas Dufresne
Le mardi 14 mars 2017 à 15:47 +0100, Benjamin Gaignard a écrit :
> Should we use /dev/ion/$heap instead of /dev/ion_$heap ?
> I think it would be easier for users to look into one directory rather
> than in the whole of /dev to find the heaps
> 
> > is that we don't have to worry about a limit of 32 possible
> > heaps per system (32-bit heap id allocation field). But dealing
> > with an ioctl seems easier than names. Userspace might be less
> > likely to hardcode random id numbers vs. names as well.
> 
> In the future I think that the heap type will be replaced by a "get caps"
> ioctl which will
> describe heap capabilities. At least that is my understanding of the
> kernel part
> of the "unix memory allocator" project

I think what we really need from a userspace point of view is the
ability to find a compatible heap for a set of drivers, and this
without specific knowledge of the drivers.

Nicolas



Re: [PATCH v5 15/39] [media] v4l2: add a frame interval error event

2017-03-14 Thread Nicolas Dufresne
Le lundi 13 mars 2017 à 10:45 +, Russell King - ARM Linux a écrit :
> On Mon, Mar 13, 2017 at 11:02:34AM +0100, Hans Verkuil wrote:
> > On 03/11/2017 07:14 PM, Steve Longerbeam wrote:
> > > The event must be user visible, otherwise the user has no indication of
> > > the error, and can't correct it by restarting the stream.
> > 
> > In that case the driver can detect this and call vb2_queue_error. It's
> > what it is there for.
> > 
> > The event doesn't help you since only this driver has this issue. So nobody
> > will watch this event, unless it is software specifically written for this SoC.
> > 
> > Much better to call vb2_queue_error to signal a fatal error (which this
> > apparently is) since there are more drivers that do this, and vivid supports
> > triggering this condition as well.
> 
> So today, I can fiddle around with the IMX219 registers to help gain
> an understanding of how this sensor works.  Several of the registers
> (such as the PLL setup [*]) require me to disable streaming on the
> sensor while changing them.
> 
> This is something I've done many times while testing various ideas,
> and is my primary way of figuring out and testing such things.
> 
> Whenever I resume streaming (provided I've let the sensor stop
> streaming at a frame boundary) it resumes as if nothing happened.  If I
> stop the sensor mid-frame, then I get the rolling issue that Steve
> reports, but once the top of the frame becomes aligned with the top of
> the capture, everything then becomes stable again as if nothing happened.
> 
> The side effect of what you're proposing is that when I disable streaming
> at the sensor by poking at its registers, rather than the capture just
> stopping, an error is going to be delivered to gstreamer, and gstreamer
> is going to exit, taking the entire capture process down.

Indeed, there is no recovery attempt in the GStreamer code, and it's
hard for higher-level programs to handle this. Nothing prevents us from
adding something, of course, but the errors are really non-specific, so
it would be a fairly blind mechanism. In the cases that have been
tested, this situation was never met; usually the error is triggered by
a USB camera being unplugged, a driver failure, or even a firmware
crash. Most of the time, this is not recoverable.

My main concern here, based on what I'm reading, is that this driver is
not even able to notice immediately that a produced frame was corrupted
(because it's out of sync). From a usability perspective, this is
really bad. Can't the driver derive a clock from some irq and
calculate, for each frame, whether the timing was correct? And if not,
mark the buffer with V4L2_BUF_FLAG_ERROR?

> 
> This severely restricts the ability to be able to develop and test
> sensor drivers.
> 
> So, I strongly disagree with you.
> 
> Loss of capture frames is not necessarily a fatal error - as I have been
> saying repeatedly.  In Steve's case, there's some unknown interaction
> between the source and iMX6 hardware that is causing the instability,
> but that is simply not true of other sources, and I oppose any idea that
> we should cripple the iMX6 side of the capture based upon just one
> hardware combination where this is a problem.

Indeed, this happens all the time with slow USB ports and UVC devices.
There, though, the driver is well aware of it and marks the buffers
with V4L2_BUF_FLAG_ERROR.

> 
> Steve suggested that the problem could be in the iMX6 CSI block - and I
> note comparing Steve's code with the code in FSL's repository that there
> are some changes that are missing in Steve's code to do with the CCIR656
> sync code setup, particularly for >8 bit.  The progressive CCIR656 8-bit
> setup looks pretty similar though - but I think what needs to be asked
> is whether the same problem is visible using the FSL/NXP vendor kernel.
> 
> 
> * - the PLL setup is something that requires research at the moment.
> Sony's official position (even to their customers) is that they do not
> supply the necessary information, instead they expect customers to tell
> them the capture settings they want, and Sony will throw the values into
> a spreadsheet, and they'll supply the register settings back to the
> customer.  Hence, the only way to proceed with a generic driver for
> this sensor is to experiment, and experimenting requires the ability to
> pause the stream at the sensor while making changes.  Take this away,
> and we're stuck with the tables-of-register-settings-for-set-of-fixed-
> capture-settings approach.  I've made a lot of progress away from this
> which is all down to the flexibility afforded by _not_ killing the
> capture process.
> 


