It seems that there are too many misunderstandings, or maybe we're just
talking about the same thing in different ways.

So, instead of answering again, let's restart this discussion in a
different way.

One of the requirements that was discussed a lot, both on the mailing
lists and at the Media Controller meetings we had (or, at least, at the
ones I participated in), is:

        "A pure V4L2 userspace application, knowing about video device
         nodes only, can still use the driver. Not all advanced features 
         will be available."

This is easier said than done. Also, a simple phrase like that can be
understood in different ways.

The solution for this problem is to create a compliance profile that
drivers need to implement. We should define such a profile, change
the existing drivers to properly implement it, and enforce it for
newly submitted drivers.

Btw, I think we should also work on profiles for other kinds of hardware
as well, but the point is that, as some things can now be implemented
using two different APIs, we need to define the minimal requirements
for the V4L2 implementation.


For me, the above requirement means that, at least, the following features
need to be present (some rough userspace sketches exercising them follow
the list):

1) The media driver should properly detect the existing hardware and
should expose the available sensors for capture via the V4L2 API.

For hardware development kits, it should be possible to specify the
hardware sensor(s) at runtime, via some tool in the v4l-utils tree
(or in another tree hosted at linuxtv.org, or one clearly indicated in
the kernel Documentation files), or via a modprobe parameter.

2) Different sensors present in the hardware may be exposed either
via S_INPUT or, if they're completely independent, via two different
device nodes;

3) The active sensor should expose basic controls to adjust color, brightness,
aperture time and exposure time, if the hardware directly supports them;

4) The driver should implement the streaming ioctls and/or the read() method;

5) It should be possible to configure the frame rate, if the sensor supports it;

6) It should be possible to configure the crop area, if the sensor supports it;

7) It should be possible to configure the format, standard and resolution.

...
(the above list is not exhaustive. It is just a few obvious things that are
clear to me - I'm almost sure that I've forgotten something).
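
To make the discussion more concrete, here is a rough sketch, in C, of what
such a "pure V4L2 userspace application" would do for items 1, 2, 4 and 7:
query the capabilities, enumerate and select an input, negotiate a format
and set up streaming I/O. The device node (/dev/video0), the 640x480 YUYV
format and the buffer count are just assumptions for the sake of the example,
and error handling is reduced to the bare minimum:

#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

int main(void)
{
        int fd = open("/dev/video0", O_RDWR);   /* assumed device node */
        struct v4l2_capability cap;
        struct v4l2_input input;
        struct v4l2_format fmt;
        struct v4l2_requestbuffers req;
        int index = 0;

        /* item 1: the node must exist and answer VIDIOC_QUERYCAP */
        if (fd < 0 || ioctl(fd, VIDIOC_QUERYCAP, &cap) < 0)
                return 1;
        printf("driver %s, card %s\n", cap.driver, cap.card);

        /* item 2: enumerate the inputs (sensors) and select the first one */
        memset(&input, 0, sizeof(input));
        while (ioctl(fd, VIDIOC_ENUMINPUT, &input) == 0) {
                printf("input %u: %s\n", input.index, input.name);
                input.index++;
        }
        ioctl(fd, VIDIOC_S_INPUT, &index);

        /* item 7: negotiate format and resolution (values are assumptions;
         * a real test would enumerate them with VIDIOC_ENUM_FMT) */
        memset(&fmt, 0, sizeof(fmt));
        fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        fmt.fmt.pix.width = 640;
        fmt.fmt.pix.height = 480;
        fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_YUYV;
        fmt.fmt.pix.field = V4L2_FIELD_ANY;
        if (ioctl(fd, VIDIOC_S_FMT, &fmt) < 0)
                return 1;

        /* item 4: streaming I/O with memory-mapped buffers */
        memset(&req, 0, sizeof(req));
        req.count = 4;
        req.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        req.memory = V4L2_MEMORY_MMAP;
        if (ioctl(fd, VIDIOC_REQBUFS, &req) < 0)
                return 1;
        /* ... mmap(), VIDIOC_QBUF, VIDIOC_STREAMON, VIDIOC_DQBUF ... */

        close(fd);
        return 0;
}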
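
For item 3, the controls should be reachable through the standard control
API. Something like the helper below; the two control IDs mentioned at the
end are just common examples, and which ones actually apply depends on the
sensor:

#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* Set a control only if the driver advertises it via VIDIOC_QUERYCTRL */
static void set_ctrl(int fd, unsigned int cid, int value)
{
        struct v4l2_queryctrl query;
        struct v4l2_control ctrl;

        memset(&query, 0, sizeof(query));
        query.id = cid;
        if (ioctl(fd, VIDIOC_QUERYCTRL, &query) < 0 ||
            (query.flags & V4L2_CTRL_FLAG_DISABLED))
                return;                 /* control not supported */

        memset(&ctrl, 0, sizeof(ctrl));
        ctrl.id = cid;
        ctrl.value = value;
        ioctl(fd, VIDIOC_S_CTRL, &ctrl);
}

/* e.g. set_ctrl(fd, V4L2_CID_BRIGHTNESS, 128);
 *      set_ctrl(fd, V4L2_CID_EXPOSURE, 100); */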
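
For item 5, the frame rate should be configurable via VIDIOC_S_PARM, along
these lines (30 fps is only an example value; the driver is free to round
it, so a compliance test should read the result back with VIDIOC_G_PARM):

#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* Request a frame rate on an already-open capture node */
static int set_frame_rate(int fd, unsigned int fps)
{
        struct v4l2_streamparm parm;

        memset(&parm, 0, sizeof(parm));
        parm.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        parm.parm.capture.timeperframe.numerator = 1;
        parm.parm.capture.timeperframe.denominator = fps;
        return ioctl(fd, VIDIOC_S_PARM, &parm);
}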
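
For item 6, the cropping limits should come from VIDIOC_CROPCAP and the
rectangle should be settable with VIDIOC_S_CROP. A rough sketch, simply
reusing the default rectangle reported by the driver:

#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* Query the cropping limits and set the crop rectangle to the default one;
 * a real application would pick its own rectangle within "bounds" */
static int set_default_crop(int fd)
{
        struct v4l2_cropcap cropcap;
        struct v4l2_crop crop;

        memset(&cropcap, 0, sizeof(cropcap));
        cropcap.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        if (ioctl(fd, VIDIOC_CROPCAP, &cropcap) < 0)
                return -1;              /* cropping not supported */

        memset(&crop, 0, sizeof(crop));
        crop.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        crop.c = cropcap.defrect;
        return ioctl(fd, VIDIOC_S_CROP, &crop);
}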

We'll also end up having some optional requirements, like the DV timings
ioctls, that also need to be covered by the SoC hardware profile.
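
For instance, a device that implements this optional part of the profile
should at least answer VIDIOC_G_DV_TIMINGS, along the lines of the sketch
below (printing the width/height is just for illustration):

#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* Read back the DV timings currently configured/detected on the device */
static int print_dv_timings(int fd)
{
        struct v4l2_dv_timings timings;

        memset(&timings, 0, sizeof(timings));
        if (ioctl(fd, VIDIOC_G_DV_TIMINGS, &timings) < 0)
                return -1;              /* not supported by this device */
        if (timings.type == V4L2_DV_BT_656_1120)
                printf("%ux%u\n", timings.bt.width, timings.bt.height);
        return 0;
}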

In practice, the above requirements should be converted into a list of features
and ioctls that need to be implemented by every SoC driver that implements
a capture or output video streaming device.

My suggestion is that we should start the discussions by filling in the macro
requirements. Once we agree on those, we can make a list of the V4L and MC
ioctls and convert them into a per-ioctl series of requirements.

Regards,
Mauro

