Re: [ANN v2] Complex Camera Workshop - Tokyo - Jun, 19

2018-06-07 Thread Tomasz Figa
[+CC Ricky]

On Mon, Jun 4, 2018 at 10:33 PM Mauro Carvalho Chehab
 wrote:
>
> Hi all,
>
> I have hopefully consolidated all the comments I received on the past
> announcement regarding the complex camera workshop we're planning to
> hold in Tokyo, just before the Open Source Summit in Japan.
>
> The main focus of the workshop is how to support devices with MC-based
> hardware connected to a camera.
>
> I'm enclosing a detailed description of the problem, in order to
> get all the interested parties on the same page.
>
> We need to work towards an agenda for the meeting.
>
> From my side, I think we should have at least the following topics on
> the agenda:
>
> - a quick review of what's currently in libv4l2;
> - a presentation of the PipeWire solution;
> - a discussion of the requirements for the new solution;
> - a discussion about how we'll address it - who will do what.
>
> Comments? Suggestions?
>
> Is anyone else planning to attend, either physically or via
> Google Hangouts?
>
> Tomasz,
>
> Is there any limit on the number of people who could join us
> via Google Hangouts?
>
>
> Regards,
> Mauro
>
> ---
>
> 1. Introduction
> ===============
>
> 1.1 V4L2 Kernel aspects
> -----------------------
>
> The media subsystem supports two types of devices:
>
> - "traditional" media hardware, supported via the V4L2 API. On such hardware,
>   opening a single device node (usually /dev/video0) is enough to control
>   the entire device. We call these devnode-based devices.
>   An application may sometimes need to use multiple video nodes with
>   devnode-based drivers to capture multiple streams in parallel
>   (when the hardware allows it, of course). That's quite common for
>   analog TV devices, where both /dev/video0 and /dev/vbi0 are opened
>   at the same time.
>
> - Media-controller based devices. On those devices, there are typically
>   several /dev/video? nodes and several /dev/v4l2-subdev? nodes, plus
>   a media controller device node (usually /dev/media0).
>   We call these mc-based devices. Controlling the hardware requires
>   opening the media device (/dev/media0), setting up the pipeline and
>   adjusting the sub-devices via /dev/v4l2-subdev?. Only streaming is
>   controlled by /dev/video?.
>
> In other words, both configuration and streaming go through the video
> device node on devnode-based drivers, while video device nodes are used
> only for streaming on mc-based drivers.
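(For readers less familiar with the MC API: below is a minimal sketch of
what "setting up the pipeline" means in practice. The entity and pad
numbers are invented placeholders; a real application would first discover
them via MEDIA_IOC_ENUM_ENTITIES and MEDIA_IOC_ENUM_LINKS.)

#include <fcntl.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/media.h>

/* Enable one link of the pipeline through the media controller node. */
int enable_link(void)
{
        struct media_link_desc link;
        int ret, fd = open("/dev/media0", O_RDWR);

        if (fd < 0)
                return -1;

        memset(&link, 0, sizeof(link));
        link.source.entity = 16;        /* e.g. the sensor subdevice */
        link.source.index = 0;          /* its source pad */
        link.sink.entity = 5;           /* e.g. the CSI receiver */
        link.sink.index = 0;            /* its sink pad */
        link.flags = MEDIA_LNK_FL_ENABLED;

        ret = ioctl(fd, MEDIA_IOC_SETUP_LINK, &link);
        close(fd);
        return ret;
}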
>
> "Standard" media applications, including open source ones (Camorama,
> Cheese, Xawtv, Firefox, Chromium, ...) and closed source ones (Skype,
> Chrome, ...), support devnode-based devices[1]. Also, when just one
> media device is connected, the streaming/control device is typically
> /dev/video0.
>
> [1] It should be noted that closed-source applications tend to have
> various bugs that prevent them from working properly on many devnode-based
> devices. Due to that, some additional blocks were required in libv4l to
> support some of them. Skype is a good example, as we had to include a
> software scaler in libv4l to make it happy. So, in practice, not everything
> works smoothly with closed-source applications even on devnode-based
> drivers. A few such adjustments were also made to some drivers and/or
> libv4l, in order to fulfill some open-source app requirements.
>
> Support for mc-based devices currently requires a specialized application
> to prepare the device for usage (set up pipelines, adjust hardware
> controls, etc). Once the pipeline is set, streaming goes via /dev/video?,
> although usually some /dev/v4l2-subdev? devnodes must also be opened in
> order to run the algorithms needed to keep video quality reasonable. On
> such devices, it is not uncommon for the video node used by the
> application to have an arbitrary number (with the OMAP3 driver it is
> typically either /dev/video4 or /dev/video6).
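(To illustrate why hardcoding /dev/video0 fails here, this is roughly what
an application has to do to locate a usable capture node - a simplified
sketch; a real program would also match on the driver and card names, and
the bound of 10 nodes is arbitrary.)

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* Probe /dev/video0../dev/video9 and report which nodes support video
   capture - on mc-based drivers that may be video4, video6, etc. */
void probe_capture_nodes(void)
{
        struct v4l2_capability cap;
        char path[32];
        int i, fd;

        for (i = 0; i < 10; i++) {
                snprintf(path, sizeof(path), "/dev/video%d", i);
                fd = open(path, O_RDWR);
                if (fd < 0)
                        continue;
                if (ioctl(fd, VIDIOC_QUERYCAP, &cap) == 0 &&
                    (cap.device_caps & V4L2_CAP_VIDEO_CAPTURE))
                        printf("%s: capture node (%s)\n", path,
                               (char *)cap.card);
                close(fd);
        }
}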
>
> One example of such hardware is the OMAP3-based board shown here:
>
> 
> http://www.infradead.org/~mchehab/mc-next-gen/omap3-igepv2-with-tvp5150.png
>
> In the picture, there's a graph with the hardware blocks in blue/dark blue
> and the corresponding devnode interfaces in yellow.
>
> The mc-based approach was taken when support for the Nokia N9/N900 cameras
> was added (they use an OMAP3 SoC). It is required because the camera hardware
> on the SoC comes with a media processor (ISP), which does a lot more than
> just capturing, allowing complex algorithms to enhance image quality at
> runtime. Those algorithms are known as 3A - an acronym for 3 other acronyms:
>
> - AE (Auto Exposure);
> - AF (Auto Focus);
> - AWB (Auto White Balance).
>
> The main reason that drove the MC design is that the 3A algorithms (that is
> the 3A control loop, and sometimes part of the image processing itself) often
> need to run, at least partially, on the CPU. As a kernel-space implementation
> wasn't possible, we needed a lower-level UAPI.
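(To make "3A" a bit less abstract: the simplest conceivable AWB is a
gray-world loop running on the CPU. A toy version follows - purely
illustrative; production algorithms work on ISP statistics windows and
apply temporal damping rather than averaging raw frames.)

/* Toy gray-world AWB step: derive per-channel gains so that the mean
   red and blue levels match the mean green level. */
void grayworld_awb(const unsigned char *rgb, int npixels,
                   float *r_gain, float *b_gain)
{
        unsigned long long r = 0, g = 0, b = 0;
        int i;

        for (i = 0; i < npixels; i++) {
                r += rgb[3 * i + 0];
                g += rgb[3 * i + 1];
                b += rgb[3 * i + 2];
        }
        *r_gain = r ? (float)g / (float)r : 1.0f;
        *b_gain = b ? (float)g / (float)b : 1.0f;
}

The resulting gains are then written back to the sensor/ISP via V4L2
controls - that feedback loop is exactly the part which has to live in
userspace.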
>
> Setting a camera with such ISPs are

cron job: media_tree daily build: OK

2018-06-07 Thread Hans Verkuil
This message is generated daily by a cron job that builds media_tree for
the kernels and architectures in the list below.

Results of the daily build of media_tree:

date:                   Fri Jun  8 05:00:18 CEST 2018
media-tree git hash:    f2809d20b9250c675fca8268a0f6274277cca7ff
media_build git hash:   464ef972618cc9f845f07c1a4e8957ce2270cf91
v4l-utils git hash:     c3b46c2c53d7d815a53c902cfb2ddd96c3732c5b
gcc version:            i686-linux-gcc (GCC) 8.1.0
sparse version:         0.5.2
smatch version:         0.5.1
host hardware:          x86_64
host os:                4.16.0-1-amd64

linux-git-arm-at91: OK
linux-git-arm-davinci: OK
linux-git-arm-multi: OK
linux-git-arm-pxa: OK
linux-git-arm-stm32: OK
linux-git-arm64: OK
linux-git-i686: OK
linux-git-mips: OK
linux-git-powerpc64: OK
linux-git-sh: OK
linux-git-x86_64: OK
Check COMPILE_TEST: OK
linux-2.6.36.4-i686: OK
linux-2.6.36.4-x86_64: OK
linux-2.6.37.6-i686: OK
linux-2.6.37.6-x86_64: OK
linux-2.6.38.8-i686: OK
linux-2.6.38.8-x86_64: OK
linux-2.6.39.4-i686: OK
linux-2.6.39.4-x86_64: OK
linux-3.0.101-i686: OK
linux-3.0.101-x86_64: OK
linux-3.1.10-i686: OK
linux-3.1.10-x86_64: OK
linux-3.2.101-i686: OK
linux-3.2.101-x86_64: OK
linux-3.3.8-i686: OK
linux-3.3.8-x86_64: OK
linux-3.4.113-i686: OK
linux-3.4.113-x86_64: OK
linux-3.5.7-i686: OK
linux-3.5.7-x86_64: OK
linux-3.6.11-i686: OK
linux-3.6.11-x86_64: OK
linux-3.7.10-i686: OK
linux-3.7.10-x86_64: OK
linux-3.8.13-i686: OK
linux-3.8.13-x86_64: OK
linux-3.9.11-i686: OK
linux-3.9.11-x86_64: OK
linux-3.10.108-i686: OK
linux-3.10.108-x86_64: OK
linux-3.11.10-i686: OK
linux-3.11.10-x86_64: OK
linux-3.12.74-i686: OK
linux-3.12.74-x86_64: OK
linux-3.13.11-i686: OK
linux-3.13.11-x86_64: OK
linux-3.14.79-i686: OK
linux-3.14.79-x86_64: OK
linux-3.15.10-i686: OK
linux-3.15.10-x86_64: OK
linux-3.16.56-i686: OK
linux-3.16.56-x86_64: OK
linux-3.17.8-i686: OK
linux-3.17.8-x86_64: OK
linux-3.18.102-i686: OK
linux-3.18.102-x86_64: OK
linux-3.19.8-i686: OK
linux-3.19.8-x86_64: OK
linux-4.0.9-i686: OK
linux-4.0.9-x86_64: OK
linux-4.1.51-i686: OK
linux-4.1.51-x86_64: OK
linux-4.2.8-i686: OK
linux-4.2.8-x86_64: OK
linux-4.3.6-i686: OK
linux-4.3.6-x86_64: OK
linux-4.4.109-i686: OK
linux-4.4.109-x86_64: OK
linux-4.5.7-i686: OK
linux-4.5.7-x86_64: OK
linux-4.6.7-i686: OK
linux-4.6.7-x86_64: OK
linux-4.7.10-i686: OK
linux-4.7.10-x86_64: OK
linux-4.8.17-i686: OK
linux-4.8.17-x86_64: OK
linux-4.9.91-i686: OK
linux-4.9.91-x86_64: OK
linux-4.10.17-i686: OK
linux-4.10.17-x86_64: OK
linux-4.11.12-i686: OK
linux-4.11.12-x86_64: OK
linux-4.12.14-i686: OK
linux-4.12.14-x86_64: OK
linux-4.13.16-i686: OK
linux-4.13.16-x86_64: OK
linux-4.14.42-i686: OK
linux-4.14.42-x86_64: OK
linux-4.15.14-i686: OK
linux-4.15.14-x86_64: OK
linux-4.16.8-i686: OK
linux-4.16.8-x86_64: OK
linux-4.17-i686: OK
linux-4.17-x86_64: OK
apps: OK
spec-git: OK
sparse: WARNINGS

Detailed results are available here:

http://www.xs4all.nl/~hverkuil/logs/Friday.log

Full logs are available here:

http://www.xs4all.nl/~hverkuil/logs/Friday.tar.bz2

The Media Infrastructure API from this daily build is here:

http://www.xs4all.nl/~hverkuil/spec/index.html


Re: "media: ov5640: Add horizontal and vertical totals" regression issue on i.MX6QDL

2018-06-07 Thread Maxime Ripard
On Thu, Jun 07, 2018 at 08:02:28PM +0530, Jagan Teki wrote:
> Hi,
> 
> The ov5640 camera breaks with the commit below on the i.MX6QDL platform.
> 
> commit 476dec012f4c6545b0b7599cd9adba2ed819ad3b
> Author: Maxime Ripard 
> Date:   Mon Apr 16 08:36:55 2018 -0400
> 
> media: ov5640: Add horizontal and vertical totals
> 
> All the initialization arrays are changing the horizontal and vertical
> totals for some value.
> 
> In order to clean up the driver, and since we're going to need that value
> later on, let's introduce in the ov5640_mode_info structure the horizontal
> and vertical total sizes, and move these out of the bytes array.
> 
> Signed-off-by: Maxime Ripard 
> Signed-off-by: Sakari Ailus 
> Signed-off-by: Mauro Carvalho Chehab 
> 
> We have reproduced it as shown below [1], together with the ipu1_csi0
> pipeline. I haven't debugged it further; please let us know how to proceed.
> 
> media-ctl --links "'ov5640 2-003c':0->'imx6-mipi-csi2':0[1]"
> media-ctl --links "'imx6-mipi-csi2':1->'ipu1_csi0_mux':0[1]"
> media-ctl --links "'ipu1_csi0_mux':2->'ipu1_csi0':0[1]"
> media-ctl --links "'ipu1_csi0':2->'ipu1_csi0 capture':0[1]"
> 
> media-ctl --set-v4l2 "'ov5640 2-003c':0[fmt:UYVY2X8/640x480 field:none]"
> media-ctl --set-v4l2 "'imx6-mipi-csi2':1[fmt:UYVY2X8/640x480 field:none]"
> media-ctl --set-v4l2 "'ipu1_csi0_mux':2[fmt:UYVY2X8/640x480 field:none]"
> media-ctl --set-v4l2 "'ipu1_csi0':0[fmt:AYUV32/640x480 field:none]"
> media-ctl --set-v4l2 "'ipu1_csi0':2[fmt:AYUV32/640x480 field:none]"
> 
> [1] https://lkml.org/lkml/2018/5/31/543

Yeah, this has already been reported as part of this series:
https://www.mail-archive.com/linux-media@vger.kernel.org/msg131655.html

and some suggestions have been made here:
https://www.mail-archive.com/linux-media@vger.kernel.org/msg132570.html

Feel free to help debug this.

Maxime

-- 
Maxime Ripard, Bootlin (formerly Free Electrons)
Embedded Linux and Kernel engineering
https://bootlin.com


Re: Bug: media device controller node not removed when uvc device is unplugged

2018-06-07 Thread Nicolas Dufresne
On Thursday 07 June 2018 at 14:07 +0200, Torleiv Sundre wrote:
> Hi,
> 
> Every time I plug in a UVC camera, a media controller node is created at 
> /dev/media.
> 
> In Ubuntu 17.10, running kernel 4.13.0-43, the media controller device 
> node is removed when the UVC camera is unplugged.
> 
> In Ubuntu 18.10, running kernel 4.15.0-22, the media controller device 
> node is not removed. Each time I plug in the device, a new device node
> with an incremented minor number is created, leaving me with a growing list
> of media controller device nodes. If I repeat this for long enough, I get the
> following error:
> "media: could not get a free minor"
> I also tried building a kernel from mainline, with the same result.
> 
> I'm running on x86_64.

I also observe this on 4.17.

> 
> Torleiv Sundre


"media: ov5640: Add horizontal and vertical totals" regression issue on i.MX6QDL

2018-06-07 Thread Jagan Teki
Hi,

The ov5640 camera breaks with the commit below on the i.MX6QDL platform.

commit 476dec012f4c6545b0b7599cd9adba2ed819ad3b
Author: Maxime Ripard 
Date:   Mon Apr 16 08:36:55 2018 -0400

media: ov5640: Add horizontal and vertical totals

All the initialization arrays are changing the horizontal and vertical
totals for some value.

In order to clean up the driver, and since we're going to need that value
later on, let's introduce in the ov5640_mode_info structure the horizontal
and vertical total sizes, and move these out of the bytes array.

Signed-off-by: Maxime Ripard 
Signed-off-by: Sakari Ailus 
Signed-off-by: Mauro Carvalho Chehab 

We have reproduced it as shown below [1], together with the ipu1_csi0
pipeline. I haven't debugged it further; please let us know how to proceed.

media-ctl --links "'ov5640 2-003c':0->'imx6-mipi-csi2':0[1]"
media-ctl --links "'imx6-mipi-csi2':1->'ipu1_csi0_mux':0[1]"
media-ctl --links "'ipu1_csi0_mux':2->'ipu1_csi0':0[1]"
media-ctl --links "'ipu1_csi0':2->'ipu1_csi0 capture':0[1]"

media-ctl --set-v4l2 "'ov5640 2-003c':0[fmt:UYVY2X8/640x480 field:none]"
media-ctl --set-v4l2 "'imx6-mipi-csi2':1[fmt:UYVY2X8/640x480 field:none]"
media-ctl --set-v4l2 "'ipu1_csi0_mux':2[fmt:UYVY2X8/640x480 field:none]"
media-ctl --set-v4l2 "'ipu1_csi0':0[fmt:AYUV32/640x480 field:none]"
media-ctl --set-v4l2 "'ipu1_csi0':2[fmt:AYUV32/640x480 field:none]"

[1] https://lkml.org/lkml/2018/5/31/543

Jagan.

-- 
Jagan Teki
Senior Linux Kernel Engineer | Amarula Solutions
U-Boot, Linux | Upstream Maintainer
Hyderabad, India.


[PATCH, libv4l]: Make libv4l2 usable on devices with complex pipeline

2018-06-07 Thread Pavel Machek
Hi!

> > We may do some magic to do v4l2_open_complex() in v4l2_open(), but I
> > believe that should be a separate step.
> 
> In order to avoid breaking the ABI for existing apps, v4l2_open() should
> internally call v4l2_open_complex() (or whatever we call it at the new
> API design).

Ok. Here's the updated patch. open_complex() now takes an fd. Any other
issues?

Best regards,
Pavel

diff --git a/lib/include/libv4l2.h b/lib/include/libv4l2.h
index ea1870d..a0ec0a9 100644
--- a/lib/include/libv4l2.h
+++ b/lib/include/libv4l2.h
@@ -58,6 +58,10 @@ LIBV4L_PUBLIC extern FILE *v4l2_log_file;
invalid memory address will not lead to failure with errno being EFAULT,
as it would with a real ioctl, but will cause libv4l2 to break, and you
get to keep both pieces.
+
+   You can open complex pipelines by passing a ".cv" file with a pipeline
+   description to v4l2_open(). libv4l2 will open all the required
+   devices automatically in that case.
 */
 
 LIBV4L_PUBLIC int v4l2_open(const char *file, int oflag, ...);
diff --git a/lib/libv4l2/libv4l2-priv.h b/lib/libv4l2/libv4l2-priv.h
index 1924c91..1ee697a 100644
--- a/lib/libv4l2/libv4l2-priv.h
+++ b/lib/libv4l2/libv4l2-priv.h
@@ -104,6 +104,7 @@ struct v4l2_dev_info {
void *plugin_library;
void *dev_ops_priv;
const struct libv4l_dev_ops *dev_ops;
+   struct v4l2_controls_map *map;
 };
 
 /* From v4l2-plugin.c */
@@ -130,4 +131,20 @@ static inline void v4l2_plugin_cleanup(void *plugin_lib, void *plugin_priv,
 extern const char *v4l2_ioctls[];
 void v4l2_log_ioctl(unsigned long int request, void *arg, int result);
 
+
+struct v4l2_control_map {
+   unsigned long control;
+   int fd;
+};
+
+struct v4l2_controls_map {
+   int main_fd;
+   int num_fds;
+   int num_controls;
+   struct v4l2_control_map map[];
+};
+
+int v4l2_open_pipeline(struct v4l2_controls_map *map, int v4l2_flags);
+LIBV4L_PUBLIC int v4l2_get_fd_for_control(int fd, unsigned long control);
+
 #endif
diff --git a/lib/libv4l2/libv4l2.c b/lib/libv4l2/libv4l2.c
index 2db25d1..ac430f0 100644
--- a/lib/libv4l2/libv4l2.c
+++ b/lib/libv4l2/libv4l2.c
@@ -70,6 +70,8 @@
 #include 
 #include 
 #include 
+#include 
+
 #include "libv4l2.h"
 #include "libv4l2-priv.h"
 #include "libv4l-plugin.h"
@@ -618,6 +620,8 @@ static void v4l2_update_fps(int index, struct v4l2_streamparm *parm)
devices[index].fps = 0;
 }
 
+static int v4l2_open_complex(int fd, int v4l2_flags);
+
 int v4l2_open(const char *file, int oflag, ...)
 {
int fd;
@@ -641,6 +645,21 @@ int v4l2_open(const char *file, int oflag, ...)
if (fd == -1)
return fd;
 
+   int len = strlen(file);
+   char *end = ".cv";
+   int len2 = strlen(end);
+   if ((len > len2) && (!strcmp(file + len - len2, end))) {
+   /* .cv extension */
+   struct stat sb;
+
+   if (fstat(fd, &sb) == 0) {
+   if ((sb.st_mode & S_IFMT) == S_IFREG) {
+   return v4l2_open_complex(fd, 0);
+   }
+   }
+   
+   }
+
if (v4l2_fd_open(fd, 0) == -1) {
int saved_err = errno;
 
@@ -787,6 +806,8 @@ no_capture:
if (index >= devices_used)
devices_used = index + 1;
 
+   devices[index].map = NULL;
+
/* Note we always tell v4lconvert to optimize src fmt selection for
   our default fps, the only exception is the app explicitly selecting
   a frame rate using the S_PARM ioctl after a S_FMT */
@@ -1056,12 +1077,47 @@ static int v4l2_s_fmt(int index, struct v4l2_format *dest_fmt)
return 0;
 }
 
+int v4l2_get_fd_for_control(int fd, unsigned long control)
+{
+   int index = v4l2_get_index(fd);
+   struct v4l2_controls_map *map;
+   int lo = 0;
+   int hi;
+
+   if (index < 0)
+   return fd;
+
+   map = devices[index].map;
+   if (!map)
+   return fd;
+   hi = map->num_controls;
+
+   while (lo < hi) {
+   int i = (lo + hi) / 2;
+   if (map->map[i].control == control) {
+   return map->map[i].fd;
+   }
+   if (map->map[i].control > control) {
+   hi = i;
+   continue;
+   }
+   if (map->map[i].control < control) {
+   lo = i+1;
+   continue;
+   }
+   printf("Bad: impossible condition in binary search\n");
+   exit(1);
+   }
+   return fd;
+}
+
 int v4l2_ioctl(int fd, unsigned long int request, ...)
 {
void *arg;
va_list ap;
int result, index, saved_err;
-   int is_capture_request = 0, stream_needs_locking = 0;
+   int is_capture_request = 0, stream_needs_locking = 0,
+       is_subdev_request = 0;
 
va_sta
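
From the application side, the intended usage of the new path would look
roughly like this (a sketch based on my reading of the patch;
"omap3-cam.cv" is a made-up file name, and the .cv pipeline-description
format itself comes from the rest of the series):

#include <fcntl.h>
#include <libv4l2.h>

/* With the patch applied, passing a ".cv" pipeline description to
   v4l2_open() makes libv4l2 open and configure every device in the
   pipeline; the returned fd is then used like a plain video node. */
int open_camera(void)
{
        int fd = v4l2_open("/usr/share/libv4l2/omap3-cam.cv", O_RDWR);

        if (fd < 0)
                return -1;
        /* v4l2_ioctl(fd, VIDIOC_S_FMT, ...), v4l2_read(fd, ...) etc.
           now behave as they do for a devnode-based device. */
        return fd;
}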

Bug: media device controller node not removed when uvc device is unplugged

2018-06-07 Thread Torleiv Sundre

Hi,

Every time I plug in a UVC camera, a media controller node is created at
/dev/media.

In Ubuntu 17.10, running kernel 4.13.0-43, the media controller device
node is removed when the UVC camera is unplugged.

In Ubuntu 18.10, running kernel 4.15.0-22, the media controller device
node is not removed. Each time I plug in the device, a new device node
with an incremented minor number is created, leaving me with a growing list
of media controller device nodes. If I repeat this for long enough, I get the
following error:
"media: could not get a free minor"
I also tried building a kernel from mainline, with the same result.

I'm running on x86_64.

Torleiv Sundre


Re: [ANN v2] Complex Camera Workshop - Tokyo - Jun, 19

2018-06-07 Thread Alexandre Courbot
On Mon, Jun 4, 2018 at 10:33 PM Mauro Carvalho Chehab
 wrote:
>
> [snip]

Re: [ANN v2] Complex Camera Workshop - Tokyo - Jun, 19

2018-06-07 Thread Mauro Carvalho Chehab
On Thu, 7 Jun 2018 16:47:50 +0900
Tomasz Figa  wrote:

> On Thu, Jun 7, 2018 at 1:26 AM Mauro Carvalho Chehab
>  wrote:
> >
> > On Wed, 6 Jun 2018 13:19:39 +0900
> > Tomasz Figa  wrote:
> >  
> > > On Mon, Jun 4, 2018 at 10:33 PM Mauro Carvalho Chehab
> > >  wrote:  
> [snip]
> > > > 3.2 libv4l2 support for 3A algorithms
> > > > =====================================
> > > >
> > > > The 3A algorithm handling is highly dependent on the hardware. The
> > > > idea here is to allow libv4l to have a set of 3A algorithms that
> > > > will be specific to certain mc-based hardware.
> > > >
> > > > One requirement, if we want vendor stacks to use our solution, is that
> > > > it should allow external closed-source algorithms to run as well.
> > > >
> > > > The 3A library API must be standardized, to allow the closed-source
> > > > vendor implementation to be replaced by an open-source implementation
> > > > should someone have the time and energy (and qualifications) to write
> > > > one.
> > > >
> > > > Sandboxed execution of the 3A library must be possible as closed-source
> > > > can't always be blindly trusted. This includes the ability to wrap the
> > > > library in a daemon, should the platform's multimedia stack so wish,
> > > > and to avoid any direct access to the kernel devices by the 3A library
> > > > itself (all accesses should be marshaled by the camera stack).
> > > >
> > > > Please note that this daemon is *not* a camera daemon that would
> > > > communicate with the V4L2 driver through a custom back channel.
> > > >
> > > > The decision to run the 3A library in a sandboxed process or to call
> > > > it directly from the camera stack should be left to the camera stack
> > > > and to the platform integrator, and should not be visible by the 3A
> > > > library.
> > > >
> > > > The 3A library must be usable on major Linux-based camera stacks (the
> > > > Android and Chrome OS camera HALs are certainly important targets,
> > > > more can be added) unmodified, which will allow usage of the vendor
> > > > binary provided for Chrome OS or Android on regular Linux systems.  
> > >
> > > This is quite an interesting idea and it would be really useful if it
> > > could be done. I'm kind of worried, though, about Android in
> > > particular, since the execution environment in Android differs
> > > significantly from regular Linux distributions (including Chrome OS,
> > > which is not so far from such), namely:
> > > - different libc (bionic) and dynamic linker - I guess this could be
> > > solved by static linking?  
> >
> > Static linking is one possible solution. IMHO, we should try to make it
> > use just a C library (if possible) and be sure that it will also compile
> > with bionic/ulibc in order to make it easier to be used by Android and
> > other embedded distros.
> >  
> > > - dedicated toolchains - perhaps not much of a problem if the per-arch
> > > ABI is the same?  
> >
> > Depending on the library dependencies, we could likely make it work with more
> > than one toolchain. I guess acconfig works with Android, right?
> > If so, it could auto-adjust to the different toolchains everywhere.  
> 
> That works for open source libraries obviously. I was thinking more
> about the closed source 3A libraries coming from Android, since we
> can't recompile them.

Ah! It probably makes sense to place them in some sandboxed environment.
If we do that, it probably makes sense to have them running in
a sort of daemon with a sockets-based API.

If we're willing to do that, it doesn't really matter how the 3A
library was implemented. It can even be in Java. All that matters is
having a way to plug the library into it. A config file could provide
such a link, telling which 3A library should be used (and, eventually,
what commands should be used to start/stop the daemon).
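
Purely as an illustration (the format and paths are hypothetical, nothing
like this exists yet), such a config file could be as simple as:

# /etc/libv4l2/3a.conf - hypothetical example
[omap3-isp]
3a-library   = /usr/lib/camera/libvendor-3a.so
run-mode     = daemon
daemon-start = /usr/bin/cam3a-daemon --socket /run/cam3a.sock
daemon-stop  = /usr/bin/cam3a-daemon --quit

The camera stack would then only need to know how to talk to the daemon's
socket; whether the library behind it is open source, closed source or
even Java becomes irrelevant.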

Thanks,
Mauro


Re: [ANN v2] Complex Camera Workshop - Tokyo - Jun, 19

2018-06-07 Thread Tomasz Figa
On Thu, Jun 7, 2018 at 1:26 AM Mauro Carvalho Chehab
 wrote:
>
> On Wed, 6 Jun 2018 13:19:39 +0900
> Tomasz Figa  wrote:
>
> > On Mon, Jun 4, 2018 at 10:33 PM Mauro Carvalho Chehab
> >  wrote:
[snip]
> > > 3.2 libv4l2 support for 3A algorithms
> > > =====================================
> > >
> > > The 3A algorithm handling is highly dependent on the hardware. The
> > > idea here is to allow libv4l to have a set of 3A algorithms that
> > > will be specific to certain mc-based hardware.
> > >
> > > One requirement, if we want vendor stacks to use our solution, is that
> > > it should allow external closed-source algorithms to run as well.
> > >
> > > The 3A library API must be standardized, to allow the closed-source
> > > vendor implementation to be replaced by an open-source implementation
> > > should someone have the time and energy (and qualifications) to write
> > > one.
> > >
> > > Sandboxed execution of the 3A library must be possible as closed-source
> > > can't always be blindly trusted. This includes the ability to wrap the
> > > library in a daemon, should the platform's multimedia stack so wish,
> > > and to avoid any direct access to the kernel devices by the 3A library
> > > itself (all accesses should be marshaled by the camera stack).
> > >
> > > Please note that this daemon is *not* a camera daemon that would
> > > communicate with the V4L2 driver through a custom back channel.
> > >
> > > The decision to run the 3A library in a sandboxed process or to call
> > > it directly from the camera stack should be left to the camera stack
> > > and to the platform integrator, and should not be visible by the 3A
> > > library.
> > >
> > > The 3A library must be usable on major Linux-based camera stacks (the
> > > Android and Chrome OS camera HALs are certainly important targets,
> > > more can be added) unmodified, which will allow usage of the vendor
> > > binary provided for Chrome OS or Android on regular Linux systems.
> >
> > This is quite an interesting idea and it would be really useful if it
> > could be done. I'm kind of worried, though, about Android in
> > particular, since the execution environment in Android differs
> > significantly from regular Linux distributions (including Chrome OS,
> > which is not so far from such), namely:
> > - different libc (bionic) and dynamic linker - I guess this could be
> > solved by static linking?
>
> Static linking is one possible solution. IMHO, we should try to make it
> use just a C library (if possible) and be sure that it will also compile
> with bionic/ulibc in order to make it easier to be used by Android and
> other embedded distros.
>
> > - dedicated toolchains - perhaps not much of a problem if the per-arch
> > ABI is the same?
>
> Depending on the library dependencies, we could likely make it work with more
> than one toolchain. I guess acconfig works with Android, right?
> If so, it could auto-adjust to the different toolchains everywhere.

That works for open source libraries obviously. I was thinking more
about the closed source 3A libraries coming from Android, since we
can't recompile them.

Best regards,
Tomasz


Re: [PATCH v2 04/10] media: imx: interweave only for sequential input/interlaced output fields

2018-06-07 Thread Krzysztof Hałasa
Steve Longerbeam  writes:

> One final note, it is incorrect to assign 'seq-tb' to a PAL signal according
> to this new understanding. Because according to various sites (for example
> [1]), both standard definition NTSC and PAL are bottom field dominant,
> so 'seq-tb' is not correct for PAL.

Actually this isn't the case:

- real PAL (= analog) is (was) interlaced, so you could choose any
  "dominant field" and it would work fine (stuff originating as film
  movies being an exception).

- the general idea at the time was that NTSC-style digital video was
  bottom-first while PAL-style was top-first.

- for example, most (practically all?) commercial TV-style interlaced
  PAL DVDs were top-first (movies were usually progressive).

- standard TV (DVB 576i) uses (used) top-first as well.

- this seems to be confirmed by e.g. ITU-R REC-BR.469-7-2002 (annex 1).
  However, this suggests that field 1 is the top one and 2 is bottom
  (and not first and second in terms of time).

- if field 1 = top and 2 = bottom indeed, then we're back at BT.656 and
  my original idea that the SAV/EAV codes directly mark the so-called
  odd/even lines and not some magic, standard-dependent lines of first
  and second fields. This needs to be verified.
  I think the ADV7180 forces the SAV/EAV codes (one can't set them
  depending on PAL/NTSC etc), so we could assume it does it right.

- a notable exception to the PAL = top-first rule was DV and similar stuff
  (the cassette camcorder things, including Digital8, miniDV, and
  probably derivatives). DV (including PAL) used bottom-first
  universally.

I think we should stick to the default PAL=TB, NTSC=BT rule, though the user
should be able to override it (e.g. when working with a PAL DV camcorder
connected with an analog cable - not very likely, I guess).
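
For the override, a sketch of what the application side could look like
with the existing V4L2 API (assuming the driver lets S_FMT change the
field order, which varies between drivers):

#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* Request bottom-field-first sequential capture, e.g. for material from
   a PAL DV camcorder. Drivers may adjust .field, so verify the result. */
int set_field_order_seq_bt(int fd)
{
        struct v4l2_format fmt;

        memset(&fmt, 0, sizeof(fmt));
        fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        if (ioctl(fd, VIDIOC_G_FMT, &fmt) < 0)
                return -1;
        fmt.fmt.pix.field = V4L2_FIELD_SEQ_BT;
        if (ioctl(fd, VIDIOC_S_FMT, &fmt) < 0)
                return -1;
        return fmt.fmt.pix.field == V4L2_FIELD_SEQ_BT ? 0 : -1;
}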
-- 
Krzysztof Halasa

Industrial Research Institute for Automation and Measurements PIAP
Al. Jerozolimskie 202, 02-486 Warsaw, Poland


Re: [RFC, libv4l]: Make libv4l2 usable on devices with complex pipeline

2018-06-07 Thread Pavel Machek
Hi!

> I guess that could give some basic camera functionality on OMAP3-like 
> hardware.

Yeah, and that is the goal.

> For most of the current generation of imaging subsystems (e.g. Intel
> IPU3, Rockchip RKISP1) it's not enough. The reason is that there is
> more to be handled by userspace than just setting controls:
>  - configuring pixel formats, resolutions, crops, etc. through the
> whole pipeline - I guess that could be preconfigured per use case
> inside the configuration file, though,
>  - forwarding buffers between capture and processing pipelines, i.e.
> DQBUF raw frame from CSI2 video node and QBUF to ISP video node,
>  - handling metadata CAPTURE and OUTPUT buffers controlling the 3A
> feedback loop - this might be optional if all we need is just ability
> to capture some frames, but required for getting good quality,
>  - actually mapping legacy controls into the above metadata,

I just wanted to add a few things:

It seems IPU3 and RKISP1 are really similar to what I have on the
N900. Forwarding frames between parts of the processing pipeline is not
necessary, but the other parts are there.

There are also two points where you can gather the image data, either
(almost) raw GRBG10 data from the sensor, or scaled YUV data ready for
display. [And how to display that data without CPU involvement is
another, rather big, topic.]

Anyway, legacy applications expect simple webcams with bad pictures,
low resolution, and no AF support. And we should be able to provide
them with just that.

Best regards,

Pavel
-- 
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) 
http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html

