Re: [ANN v2] Complex Camera Workshop - Tokyo - Jun, 19

2018-06-18 Thread Paul Elder
Hi Tomasz,

On June 18, 2018 6:00:47 PM GMT+09:00, Tomasz Figa  wrote:
>Hi Paul,
>
>On Mon, Jun 18, 2018 at 5:42 PM Paul Elder
> wrote:
>>
>>
>>
>> Hello all,
>>
>> On June 4, 2018 10:33:03 PM GMT+09:00, Mauro Carvalho Chehab
> wrote:
>> >Hi all,
>> >
>> >I have hopefully consolidated all the comments I received on the
>> >previous announcement regarding the complex camera workshop we're
>> >planning to hold in Tokyo, just before the Open Source Summit in Japan.
>> >
>> >The main focus of the workshop is to enable support for devices with
>> >MC-based hardware connected to a camera.
>> >
>> >I'm enclosing a detailed description of the problem, in order to
>> >get the interested parties on the same page.
>> >
>> >We need to work towards an agenda for the meeting.
>> >
>> >From my side, I think we should have at least the following topics
>> >on the agenda:
>> >
>> >- a quick review of what's currently in libv4l2;
>> >- a presentation about the PipeWire solution;
>> >- a discussion about the requirements for the new solution;
>> >- a discussion about how we'll address it - who will do what.
>> >
>> >Comments? Suggestions?
>> >
>> >Is anyone else planning to attend, either physically or via
>> >Google Hangouts?
>> >
>> My name is Paul Elder. I am a university student studying computer
>science, and I am interested in complex camera support in Linux.
>>
>> If it's not too late, could I join this meeting as well please, as I
>am in Tokyo?
>
>Done. You should have received 3 further emails with necessary
>invitations.

Thank you.
I have only received two: the visitor pre-registration notice (来訪者事前登録のご案内), and the Google invitation.

Paul

-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.


Re: [ANN v2] Complex Camera Workshop - Tokyo - Jun, 19

2018-06-18 Thread Tomasz Figa
Hi Paul,

On Mon, Jun 18, 2018 at 5:42 PM Paul Elder  wrote:
>
>
>
> Hello all,
>
> On June 4, 2018 10:33:03 PM GMT+09:00, Mauro Carvalho Chehab 
>  wrote:
> >Hi all,
> >
> >I have hopefully consolidated all the comments I received on the
> >previous announcement regarding the complex camera workshop we're
> >planning to hold in Tokyo, just before the Open Source Summit in Japan.
> >
> >The main focus of the workshop is to enable support for devices with
> >MC-based hardware connected to a camera.
> >
> >I'm enclosing a detailed description of the problem, in order to
> >get the interested parties on the same page.
> >
> >We need to work towards an agenda for the meeting.
> >
> >From my side, I think we should have at least the following topics on
> >the agenda:
> >
> >- a quick review of what's currently in libv4l2;
> >- a presentation about the PipeWire solution;
> >- a discussion about the requirements for the new solution;
> >- a discussion about how we'll address it - who will do what.
> >
> >Comments? Suggestions?
> >
> >Is anyone else planning to attend, either physically or via
> >Google Hangouts?
> >
> My name is Paul Elder. I am a university student studying computer science, 
> and I am interested in complex camera support in Linux.
>
> If it's not too late, could I join this meeting as well please, as I am in 
> Tokyo?

Done. You should have received 3 further emails with necessary invitations.

Best regards,
Tomasz


Re: [ANN v2] Complex Camera Workshop - Tokyo - Jun, 19

2018-06-18 Thread Laurent Pinchart
Hello,

On Monday, 18 June 2018 11:42:37 EEST Paul Elder wrote:
> On June 4, 2018 10:33:03 PM GMT+09:00, Mauro Carvalho Chehab  wrote:
> > Hi all,
> >
> > I have hopefully consolidated all the comments I received on the previous
> > announcement regarding the complex camera workshop we're planning to hold
> > in Tokyo, just before the Open Source Summit in Japan.
> >
> > The main focus of the workshop is to enable support for devices with
> > MC-based hardware connected to a camera.
> >
> > I'm enclosing a detailed description of the problem, in order to
> > get the interested parties on the same page.
> >
> > We need to work towards an agenda for the meeting.
> >
> > From my side, I think we should have at least the following topics on
> > the agenda:
> >
> > - a quick review of what's currently in libv4l2;
> > - a presentation about the PipeWire solution;
> > - a discussion about the requirements for the new solution;
> > - a discussion about how we'll address it - who will do what.
> >
> > Comments? Suggestions?
> >
> > Is anyone else planning to attend, either physically or via
> > Google Hangouts?
> 
> My name is Paul Elder. I am a university student studying computer science,
> and I am interested in complex camera support in Linux.
> 
> If it's not too late, could I join this meeting as well please, as I am in
> Tokyo?

For the record, Paul is working with Kieran and me on V4L2 (and UVC in 
particular).

-- 
Regards,

Laurent Pinchart





Re: [ANN v2] Complex Camera Workshop - Tokyo - Jun, 19

2018-06-07 Thread Tomasz Figa
[+CC Ricky]

On Mon, Jun 4, 2018 at 10:33 PM Mauro Carvalho Chehab
 wrote:
>
> Hi all,
>
> I have hopefully consolidated all the comments I received on the previous
> announcement regarding the complex camera workshop we're planning to hold
> in Tokyo, just before the Open Source Summit in Japan.
>
> The main focus of the workshop is to enable support for devices with MC-based
> hardware connected to a camera.
>
> I'm enclosing a detailed description of the problem, in order to
> get the interested parties on the same page.
>
> We need to work towards an agenda for the meeting.
>
> From my side, I think we should have at least the following topics on
> the agenda:
>
> - a quick review of what's currently in libv4l2;
> - a presentation about the PipeWire solution;
> - a discussion about the requirements for the new solution;
> - a discussion about how we'll address it - who will do what.
>
> Comments? Suggestions?
>
> Is anyone else planning to attend, either physically or via
> Google Hangouts?
>
> Tomasz,
>
> Do you have any limit on the number of people who could join us
> via Google Hangouts?
>
>
> Regards,
> Mauro
>
> ---
>
> 1. Introduction
> ===
>
> 1.1 V4L2 Kernel aspects
> ---
>
> The media subsystem supports two types of devices:
>
> - "traditional" media hardware, supported via V4L2 API. On such hardware,
>   opening a single device node (usually /dev/video0) is enough to control
>   the entire device. We call it as devnode-based devices.
>   An application sometimes may need to use multiple video nodes with
>   devnode-based drivers to capture multiple streams in parallel
>   (when the hardware allows it of course). That's quite common for
>   Analog TV devices, where both /dev/video0 and /dev/vbi0 are opened
>   at the same time.
>
> - Media-controller based devices. On those devices, there are typically
>   several /dev/video? nodes and several /dev/v4l2-subdev? nodes, plus
>   a media controller device node (usually /dev/media0).
>   We call these mc-based devices. Controlling the hardware requires
>   opening the media device (/dev/media0), setting up the pipeline and
>   adjusting the sub-devices via /dev/v4l2-subdev?. Only streaming is
>   controlled via /dev/video?.
>
> In other words, both configuration and streaming go through the video
> device node on devnode-based drivers, while video device nodes are used
> only for streaming on mc-based drivers.
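>
> Just as an illustration (the entity names, formats and link below are
> made up, not taken from any real board), a minimal pipeline setup on an
> mc-based device could be done from userspace with the media-ctl tool
> along these lines:
>
>   # enable the link from the sensor to the ISP input (illustrative names)
>   media-ctl -d /dev/media0 -l '"sensor 1-0010":0 -> "ISP input":0 [1]'
>   # propagate a matching format along the pipeline
>   media-ctl -d /dev/media0 -V '"sensor 1-0010":0 [fmt:SGRBG10_1X10/1280x720]'
>   media-ctl -d /dev/media0 -V '"ISP input":0 [fmt:SGRBG10_1X10/1280x720]'
>   # after that, streaming itself happens on the corresponding /dev/video? node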
>
> With devnode-based drivers, "standard" media applications, including open
> source ones (Camorama, Cheese, Xawtv, Firefox, Chromium, ...) and closed
> source ones (Skype, Chrome, ...) support devnode-based devices[1]. Also,
> when just one media device is connected, the streaming/control device
> is typically /dev/video0.
>
> [1] It should be noted that closed-source applications tend to have
> various bugs that prevent them from working properly on many devnode-based
> devices. Due to that, some additional blocks were required in libv4l to
> support some of them. Skype is a good example, as we had to include a
> software scaler in libv4l to make it happy. So, in practice, not everything
> works smoothly with closed-source applications even on devnode-based drivers.
> A few such adjustments were also made to some drivers and/or libv4l, in
> order to fulfill some open-source app requirements.
>
> Support for mc-based devices currently requires a specialized application
> in order to prepare the device for use (set up pipelines, adjust
> hardware controls, etc.). Once the pipeline is set, streaming goes via
> /dev/video?, although usually some /dev/v4l2-subdev? devnodes must also
> be opened, in order to implement the algorithms needed to make video quality
> reasonable. On such devices, it is not uncommon for the video node used by
> the application to have a seemingly arbitrary number (with the OMAP3 driver
> it is typically either /dev/video4 or /dev/video6).
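>
> As a rough sketch (this is not existing libv4l code, just an illustration
> of the MC API an application would use), one can at least discover which
> entities expose a devnode by enumerating the media graph:
>
>   /* Enumerate the entities of a media device and print their char devices. */
>   #include <fcntl.h>
>   #include <stdio.h>
>   #include <string.h>
>   #include <sys/ioctl.h>
>   #include <unistd.h>
>   #include <linux/media.h>
>
>   int main(void)
>   {
>       struct media_entity_desc ent;
>       int fd = open("/dev/media0", O_RDWR);
>
>       if (fd < 0)
>           return 1;
>
>       memset(&ent, 0, sizeof(ent));
>       ent.id = MEDIA_ENT_ID_FLAG_NEXT;      /* start from the first entity */
>       while (ioctl(fd, MEDIA_IOC_ENUM_ENTITIES, &ent) == 0) {
>           printf("entity %u: %s (char dev %u:%u)\n",
>                  ent.id, ent.name, ent.dev.major, ent.dev.minor);
>           ent.id |= MEDIA_ENT_ID_FLAG_NEXT; /* ask for the next entity */
>       }
>
>       close(fd);
>       return 0;
>   }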
>
> One example of such hardware is the OMAP3-based hardware:
>
> 
> http://www.infradead.org/~mchehab/mc-next-gen/omap3-igepv2-with-tvp5150.png
>
> In the picture, there's a graph with the hardware blocks in blue/dark blue
> and the corresponding devnode interfaces in yellow.
>
> The mc-based approach was taken when support for the Nokia N9/N900 cameras
> (which use the OMAP3 SoC) was added. It is required because the camera
> hardware on the SoC comes with a media processor (ISP), which does a lot
> more than just capturing, allowing complex algorithms to enhance image
> quality at runtime. Those algorithms are known as 3A - an acronym for 3
> other acronyms:
>
> - AE (Auto Exposure);
> - AF (Auto Focus);
> - AWB (Auto White Balance).
>
> The main reason that drove the MC design is that the 3A algorithms (that is
> the 3A control loop, and sometimes part of the image processing itself) often
> need to run, at least partially, on the CPU. As a kernel-space implementation
> wasn't possible, we needed a lower-level UAPI.
>
> Setting a camera with such ISPs 

Re: [ANN v2] Complex Camera Workshop - Tokyo - Jun, 19

2018-06-07 Thread Alexandre Courbot
On Mon, Jun 4, 2018 at 10:33 PM Mauro Carvalho Chehab
 wrote:
>
> Hi all,
>
> I have hopefully consolidated all the comments I received on the previous
> announcement regarding the complex camera workshop we're planning to hold
> in Tokyo, just before the Open Source Summit in Japan.
>
> The main focus of the workshop is to enable support for devices with MC-based
> hardware connected to a camera.
>
> I'm enclosing a detailed description of the problem, in order to
> get the interested parties on the same page.
>
> We need to work towards an agenda for the meeting.
>
> From my side, I think we should have at least the following topics on
> the agenda:
>
> - a quick review of what's currently in libv4l2;
> - a presentation about the PipeWire solution;
> - a discussion about the requirements for the new solution;
> - a discussion about how we'll address it - who will do what.
>
> Comments? Suggestions?
>
> Is anyone else planning to attend, either physically or via
> Google Hangouts?
>
> Tomasz,
>
> Do you have any limit on the number of people who could join us
> via Google Hangouts?
>
>
> Regards,
> Mauro
>
> ---
>
> 1. Introduction
> ===
>
> 1.1 V4L2 Kernel aspects
> ---
>
> The media subsystem supports two types of devices:
>
> - "traditional" media hardware, supported via V4L2 API. On such hardware,
>   opening a single device node (usually /dev/video0) is enough to control
>   the entire device. We call it as devnode-based devices.
>   An application sometimes may need to use multiple video nodes with
>   devnode-based drivers to capture multiple streams in parallel
>   (when the hardware allows it of course). That's quite common for
>   Analog TV devices, where both /dev/video0 and /dev/vbi0 are opened
>   at the same time.
>
> - Media-controller based devices. On those devices, there are typically
>   several /dev/video? nodes and several /dev/v4l2-subdev? nodes, plus
>   a media controller device node (usually /dev/media0).
>   We call these mc-based devices. Controlling the hardware requires
>   opening the media device (/dev/media0), setting up the pipeline and
>   adjusting the sub-devices via /dev/v4l2-subdev?. Only streaming is
>   controlled via /dev/video?.
>
> In other words, both configuration and streaming go through the video
> device node on devnode-based drivers, while video device nodes are used
> only for streaming on mc-based drivers.
>
> With devnode-based drivers, "standard" media applications, including open
> source ones (Camorama, Cheese, Xawtv, Firefox, Chromium, ...) and closed
> source ones (Skype, Chrome, ...) support devnode-based devices[1]. Also,
> when just one media device is connected, the streaming/control device
> is typically /dev/video0.
>
> [1] It should be noted that closed-source applications tend to have
> various bugs that prevent them from working properly on many devnode-based
> devices. Due to that, some additional blocks were required in libv4l to
> support some of them. Skype is a good example, as we had to include a
> software scaler in libv4l to make it happy. So, in practice, not everything
> works smoothly with closed-source applications even on devnode-based drivers.
> A few such adjustments were also made to some drivers and/or libv4l, in
> order to fulfill some open-source app requirements.
>
> Support for mc-based devices currently requires a specialized application
> in order to prepare the device for use (set up pipelines, adjust
> hardware controls, etc.). Once the pipeline is set, streaming goes via
> /dev/video?, although usually some /dev/v4l2-subdev? devnodes must also
> be opened, in order to implement the algorithms needed to make video quality
> reasonable. On such devices, it is not uncommon for the video node used by
> the application to have a seemingly arbitrary number (with the OMAP3 driver
> it is typically either /dev/video4 or /dev/video6).
>
> One example of such hardware is the OMAP3-based hardware:
>
> 
> http://www.infradead.org/~mchehab/mc-next-gen/omap3-igepv2-with-tvp5150.png
>
> In the picture, there's a graph with the hardware blocks in blue/dark blue
> and the corresponding devnode interfaces in yellow.
>
> The mc-based approach was taken when support for the Nokia N9/N900 cameras
> (which use the OMAP3 SoC) was added. It is required because the camera
> hardware on the SoC comes with a media processor (ISP), which does a lot
> more than just capturing, allowing complex algorithms to enhance image
> quality at runtime. Those algorithms are known as 3A - an acronym for 3
> other acronyms:
>
> - AE (Auto Exposure);
> - AF (Auto Focus);
> - AWB (Auto White Balance).
>
> The main reason that drove the MC design is that the 3A algorithms (that is
> the 3A control loop, and sometimes part of the image processing itself) often
> need to run, at least partially, on the CPU. As a kernel-space implementation
> wasn't possible, we needed a lower-level UAPI.
>
> Setting a camera with such ISPs is harder 

Re: [ANN v2] Complex Camera Workshop - Tokyo - Jun, 19

2018-06-07 Thread Mauro Carvalho Chehab
Em Thu, 7 Jun 2018 16:47:50 +0900
Tomasz Figa  escreveu:

> On Thu, Jun 7, 2018 at 1:26 AM Mauro Carvalho Chehab
>  wrote:
> >
> > Em Wed, 6 Jun 2018 13:19:39 +0900
> > Tomasz Figa  escreveu:
> >  
> > > On Mon, Jun 4, 2018 at 10:33 PM Mauro Carvalho Chehab
> > >  wrote:  
> [snip]
> > > > 3.2 libv4l2 support for 3A algorithms
> > > > =
> > > >
> > > > The 3A algorithm handling is highly dependent on the hardware. The
> > > > idea here is to allow libv4l to have a set of 3A algorithms that
> > > > will be specific to certain mc-based hardware.
> > > >
> > > > One requirement, if we want vendor stacks to use our solution, is that
> > > > it should allow external closed-source algorithms to run as well.
> > > >
> > > > The 3A library API must be standardized, to allow the closed-source
> > > > vendor implementation to be replaced by an open-source implementation
> > > > should someone have the time and energy (and qualifications) to write
> > > > one.
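> > > >
> > > > As a purely hypothetical illustration (no such header exists yet, all
> > > > names below are invented), the standardized entry points could look
> > > > roughly like this:
> > > >
> > > >   /* v4l2_3a.h - hypothetical standardized 3A library interface */
> > > >   struct v4l2_3a_stats;   /* ISP statistics, hardware-specific layout */
> > > >   struct v4l2_3a_params;  /* new ISP parameters computed by the library */
> > > >
> > > >   struct v4l2_3a_ops {
> > > >       int  (*init)(void **ctx, const char *tuning_file);
> > > >       int  (*process)(void *ctx, const struct v4l2_3a_stats *stats,
> > > >                       struct v4l2_3a_params *params);
> > > >       void (*close)(void *ctx);
> > > >   };
> > > >
> > > >   /* The only symbol the camera stack would resolve from the library. */
> > > >   const struct v4l2_3a_ops *v4l2_3a_get_ops(void);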
> > > >
> > > > Sandboxed execution of the 3A library must be possible, as closed-source
> > > > code can't always be blindly trusted. This includes the ability to wrap
> > > > the library in a daemon, should the platform's multimedia stack wish,
> > > > and to avoid any direct access to the kernel devices by the 3A library
> > > > itself (all accesses should be marshaled by the camera stack).
> > > >
> > > > Please note that this daemon is *not* a camera daemon that would
> > > > communicate with the V4L2 driver through a custom back channel.
> > > >
> > > > The decision to run the 3A library in a sandboxed process or to call
> > > > it directly from the camera stack should be left to the camera stack
> > > > and to the platform integrator, and should not be visible to the 3A
> > > > library.
> > > >
> > > > The 3A library must be usable on major Linux-based camera stacks (the
> > > > Android and Chrome OS camera HALs are certainly important targets,
> > > > more can be added) unmodified, which will allow usage of the vendor
> > > > binary provided for Chrome OS or Android on regular Linux systems.  
> > >
> > > This is quite an interesting idea and it would be really useful if it
> > > could be done. I'm kind of worried, though, about Android in
> > > particular, since the execution environment in Android differs
> > > significantly from a regular Linux distributions (including Chrome OS,
> > > which is not so far from such), namely:
> > > - different libc (bionic) and dynamic linker - I guess this could be
> > > solved by static linking?  
> >
> > Static linking is one possible solution. IMHO, we should try to make it
> > use just a C library (if possible) and be sure that it will also compile
> > with bionic/uClibc in order to make it easier to use with Android and
> > other embedded distros.
> >  
> > > - dedicated toolchains - perhaps not much of a problem if the per-arch
> > > ABI is the same?  
> >
> > Depending on the library dependencies, we could likely make it work with
> > more than one toolchain. I guess acconfig works with Android, right?
> > If so, it could auto-adjust to the different toolchains everywhere.
> 
> That works for open source libraries obviously. I was thinking more
> about the closed source 3A libraries coming from Android, since we
> can't recompile them.

Ah! It probably makes sense to place them in some sandboxed environment.
If we're using that, it probably makes sense to have them running
in a sort of daemon with a sockets-based API.

If we're willing to do that, it doesn't really matter how the 3A
library was implemented. It can even be in Java. All that matters is to
have a way to plug the library into it. A config file could provide such
a link, telling which 3A library should be used (and, eventually, which
commands should be used to start/stop the daemon).
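
Just to make the idea concrete, a purely hypothetical sketch of such a
config file (nothing like this exists today; every name below is made up):

  # /etc/libv4l2/3a.conf - which 3A library to load and how to run it
  [ipu3]
  3a-library   = /usr/lib/libv4l2-3a-ipu3.so
  execution    = daemon                    # sandboxed, talks over a socket
  daemon-start = /usr/libexec/v4l2-3a-daemon --socket /run/v4l2-3a.sock
  daemon-stop  = /usr/libexec/v4l2-3a-daemon --stop

  [omap3-isp]
  3a-library   = /usr/lib/libv4l2-3a-omap3.so
  execution    = in-process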

Thanks,
Mauro


Re: [ANN v2] Complex Camera Workshop - Tokyo - Jun, 19

2018-06-07 Thread Tomasz Figa
On Thu, Jun 7, 2018 at 1:26 AM Mauro Carvalho Chehab
 wrote:
>
> Em Wed, 6 Jun 2018 13:19:39 +0900
> Tomasz Figa  escreveu:
>
> > On Mon, Jun 4, 2018 at 10:33 PM Mauro Carvalho Chehab
> >  wrote:
[snip]
> > > 3.2 libv4l2 support for 3A algorithms
> > > =
> > >
> > > The 3A algorithm handling is highly dependent on the hardware. The
> > > idea here is to allow libv4l to have a set of 3A algorithms that
> > > will be specific to certain mc-based hardware.
> > >
> > > One requirement, if we want vendor stacks to use our solution, is that
> > > it should allow external closed-source algorithms to run as well.
> > >
> > > The 3A library API must be standardized, to allow the closed-source
> > > vendor implementation to be replaced by an open-source implementation
> > > should someone have the time and energy (and qualifications) to write
> > > one.
> > >
> > > Sandboxed execution of the 3A library must be possible, as closed-source
> > > code can't always be blindly trusted. This includes the ability to wrap
> > > the library in a daemon, should the platform's multimedia stack wish,
> > > and to avoid any direct access to the kernel devices by the 3A library
> > > itself (all accesses should be marshaled by the camera stack).
> > >
> > > Please note that this daemon is *not* a camera daemon that would
> > > communicate with the V4L2 driver through a custom back channel.
> > >
> > > The decision to run the 3A library in a sandboxed process or to call
> > > it directly from the camera stack should be left to the camera stack
> > > and to the platform integrator, and should not be visible to the 3A
> > > library.
> > >
> > > The 3A library must be usable on major Linux-based camera stacks (the
> > > Android and Chrome OS camera HALs are certainly important targets,
> > > more can be added) unmodified, which will allow usage of the vendor
> > > binary provided for Chrome OS or Android on regular Linux systems.
> >
> > This is quite an interesting idea and it would be really useful if it
> > could be done. I'm kind of worried, though, about Android in
> > particular, since the execution environment in Android differs
> > significantly from a regular Linux distributions (including Chrome OS,
> > which is not so far from such), namely:
> > - different libc (bionic) and dynamic linker - I guess this could be
> > solved by static linking?
>
> Static linking is one possible solution. IMHO, we should try to make it
> use just a C library (if possible) and be sure that it will also compile
> with bionic/uClibc in order to make it easier to use with Android and
> other embedded distros.
>
> > - dedicated toolchains - perhaps not much of a problem if the per-arch
> > ABI is the same?
>
> Depending on the library dependencies, we could likely make it work with
> more than one toolchain. I guess acconfig works with Android, right?
> If so, it could auto-adjust to the different toolchains everywhere.

That works for open source libraries obviously. I was thinking more
about the closed source 3A libraries coming from Android, since we
can't recompile them.

Best regards,
Tomasz


Re: [ANN v2] Complex Camera Workshop - Tokyo - Jun, 19

2018-06-06 Thread Mauro Carvalho Chehab
Em Wed, 6 Jun 2018 13:19:39 +0900
Tomasz Figa  escreveu:

> On Mon, Jun 4, 2018 at 10:33 PM Mauro Carvalho Chehab
>  wrote:
> >
> > Hi all,
> >
> > I have hopefully consolidated all the comments I received on the previous
> > announcement regarding the complex camera workshop we're planning to hold
> > in Tokyo, just before the Open Source Summit in Japan.
> >
> > The main focus of the workshop is to enable support for devices with MC-based
> > hardware connected to a camera.
> >
> > I'm enclosing a detailed description of the problem, in order to
> > get the interested parties on the same page.
> >
> > We need to work towards an agenda for the meeting.
> >
> > From my side, I think we should have at least the following topics on
> > the agenda:
> >
> > - a quick review of what's currently in libv4l2;
> > - a presentation about the PipeWire solution;
> > - a discussion about the requirements for the new solution;
> > - a discussion about how we'll address it - who will do what.
> 
> I believe Intel's Jian Xu would be able to give us a brief
> introduction to the IPU3 hardware architecture and possibly to upcoming
> hardware generations as well.

That would be great!

> My experience with existing generations of ISPs from other vendors is
> that the main principles of operation are very similar to the model
> represented by IPU3 and very different from the OMAP3 example
> mentioned by Mauro below. I have commented further on it below.
> 
> >
> > Comments? Suggestions?
> >
> > Is anyone else planning to attend, either physically or via
> > Google Hangouts?
> >
> > Tomasz,
> >
> > Do you have any limit on the number of people who could join us
> > via Google Hangouts?
> >  
> 
> Technically, Hangouts should be able to work with really huge
> multi-party conferences. There is obviously some limitation on the client
> side, since thumbnails of participants need to be decoded in real
> time, so even if the resolution is low, if the client is very slow,
> there might be some really bad frame drops happening on the client side.
> 
> However, I often have meetings with around 8 parties and it tends to
> work fine. We can also disable video for all participants who don't
> need to present anything at the moment, and the problem would go away
> completely.

Ok, good!

> > Regards,
> > Mauro
> >
> > ---
> >
> > 1. Introduction
> > ===
> >
> > 1.1 V4L2 Kernel aspects
> > ---
> >
> > The media subsystem supports two types of devices:
> >
> > - "traditional" media hardware, supported via V4L2 API. On such hardware,
> >   opening a single device node (usually /dev/video0) is enough to control
> >   the entire device. We call it as devnode-based devices.
> >   An application sometimes may need to use multiple video nodes with
> >   devnode-based drivers to capture multiple streams in parallel
> >   (when the hardware allows it of course). That's quite common for
> >   Analog TV devices, where both /dev/video0 and /dev/vbi0 are opened
> >   at the same time.
> >
> > - Media-controller based devices. On those devices, there are typically
> >   several /dev/video? nodes and several /dev/v4l2-subdev? nodes, plus
> >   a media controller device node (usually /dev/media0).
> >   We call these mc-based devices. Controlling the hardware requires
> >   opening the media device (/dev/media0), setting up the pipeline and
> >   adjusting the sub-devices via /dev/v4l2-subdev?. Only streaming is
> >   controlled via /dev/video?.
> >
> > In other words, both configuration and streaming go through the video
> > device node on devnode-based drivers, while video device nodes are used
> > only for streaming on mc-based drivers.
> >
> > With devnode-based drivers, "standard" media applications, including open
> > source ones (Camorama, Cheese, Xawtv, Firefox, Chromium, ...) and closed
> > source ones (Skype, Chrome, ...) support devnode-based devices[1]. Also,
> > when just one media device is connected, the streaming/control device
> > is typically /dev/video0.
> >
> > [1] It should be noted that closed-source applications tend to have
> > various bugs that prevent them from working properly on many devnode-based
> > devices. Due to that, some additional blocks were required in libv4l to
> > support some of them. Skype is a good example, as we had to include a
> > software scaler in libv4l to make it happy. So, in practice, not everything
> > works smoothly with closed-source applications even on devnode-based drivers.
> > A few such adjustments were also made to some drivers and/or libv4l, in
> > order to fulfill some open-source app requirements.
> >
> > Support for mc-based devices currently requires a specialized application
> > in order to prepare the device for use (set up pipelines, adjust
> > hardware controls, etc.). Once the pipeline is set, streaming goes via
> > /dev/video?, although usually some /dev/v4l2-subdev? devnodes must also
> > be opened, in order to implement algorithms designed to make video 

Re: [ANN v2] Complex Camera Workshop - Tokyo - Jun, 19

2018-06-06 Thread Javier Martinez Canillas
[adding Wim Taymans and Mario Limonciello to CC, who said that they may
also join via Hangouts]

On Wed, Jun 6, 2018 at 6:19 AM, Tomasz Figa  wrote:
> On Mon, Jun 4, 2018 at 10:33 PM Mauro Carvalho Chehab
>  wrote:
>>
>> Hi all,
>>
>> I have hopefully consolidated all the comments I received on the previous
>> announcement regarding the complex camera workshop we're planning to hold
>> in Tokyo, just before the Open Source Summit in Japan.
>>
>> The main focus of the workshop is to enable support for devices with MC-based
>> hardware connected to a camera.
>>
>> I'm enclosing a detailed description of the problem, in order to
>> get the interested parties on the same page.
>>
>> We need to work towards an agenda for the meeting.
>>
>> From my side, I think we should have at least the following topics on
>> the agenda:
>>
>> - a quick review of what's currently in libv4l2;
>> - a presentation about the PipeWire solution;

Wim mentioned that he could do this.

>> - a discussion about the requirements for the new solution;
>> - a discussion about how we'll address it - who will do what.
>
> I believe Intel's Jian Xu would be able to give us a brief
> introduction to the IPU3 hardware architecture and possibly to upcoming
> hardware generations as well.
>
> My experience with existing generations of ISPs from other vendors is
> that the main principles of operation are very similar to the model
> represented by IPU3 and very different from the OMAP3 example
> mentioned by Mauro below. I have commented further on it below.
>
>>
>> Comments? Suggestions?
>>
>> Is anyone else planning to attend, either physically or via
>> Google Hangouts?
>>
>> Tomasz,
>>
>> Do you have any limit on the number of people who could join us
>> via Google Hangouts?
>>
>
> Technically, Hangouts should be able to work with really huge
> multi-party conferences. There is obviously some limitation on the client
> side, since thumbnails of participants need to be decoded in real
> time, so even if the resolution is low, if the client is very slow,
> there might be some really bad frame drops happening on the client side.
>
> However, I often have meetings with around 8 parties and it tends to
> work fine. We can also disable video for all participants who don't
> need to present anything at the moment, and the problem would go away
> completely.
>
>>
>> Regards,
>> Mauro
>>
>> ---
>>
>> 1. Introduction
>> ===
>>
>> 1.1 V4L2 Kernel aspects
>> ---
>>
>> The media subsystem supports two types of devices:
>>
>> - "traditional" media hardware, supported via V4L2 API. On such hardware,
>>   opening a single device node (usually /dev/video0) is enough to control
>>   the entire device. We call it as devnode-based devices.
>>   An application sometimes may need to use multiple video nodes with
>>   devnode-based drivers to capture multiple streams in parallel
>>   (when the hardware allows it of course). That's quite common for
>>   Analog TV devices, where both /dev/video0 and /dev/vbi0 are opened
>>   at the same time.
>>
>> - Media-controller based devices. On those devices, there are typically
>>   several /dev/video? nodes and several /dev/v4l2-subdev? nodes, plus
>>   a media controller device node (usually /dev/media0).
>>   We call these mc-based devices. Controlling the hardware requires
>>   opening the media device (/dev/media0), setting up the pipeline and
>>   adjusting the sub-devices via /dev/v4l2-subdev?. Only streaming is
>>   controlled via /dev/video?.
>>
>> In other words, both configuration and streaming go through the video
>> device node on devnode-based drivers, while video device nodes are used
>> only for streaming on mc-based drivers.
>>
>> With devnode-based drivers, "standard" media applications, including open
>> source ones (Camorama, Cheese, Xawtv, Firefox, Chromium, ...) and closed
>> source ones (Skype, Chrome, ...) support devnode-based devices[1]. Also,
>> when just one media device is connected, the streaming/control device
>> is typically /dev/video0.
>>
>> [1] It should be noted that closed-source applications tend to have
>> various bugs that prevent them from working properly on many devnode-based
>> devices. Due to that, some additional blocks were required in libv4l to
>> support some of them. Skype is a good example, as we had to include a
>> software scaler in libv4l to make it happy. So, in practice, not everything
>> works smoothly with closed-source applications even on devnode-based drivers.
>> A few such adjustments were also made to some drivers and/or libv4l, in
>> order to fulfill some open-source app requirements.
>>
>> Support for mc-based devices currently requires a specialized application
>> in order to prepare the device for use (set up pipelines, adjust
>> hardware controls, etc.). Once the pipeline is set, streaming goes via
>> /dev/video?, although usually some /dev/v4l2-subdev? devnodes must also
>> be opened, in order to implement algorithms designed to make video 

Re: [ANN v2] Complex Camera Workshop - Tokyo - Jun, 19

2018-06-05 Thread Tomasz Figa
On Mon, Jun 4, 2018 at 10:33 PM Mauro Carvalho Chehab
 wrote:
>
> Hi all,
>
> I have hopefully consolidated all the comments I received on the previous
> announcement regarding the complex camera workshop we're planning to hold
> in Tokyo, just before the Open Source Summit in Japan.
>
> The main focus of the workshop is to enable support for devices with MC-based
> hardware connected to a camera.
>
> I'm enclosing a detailed description of the problem, in order to
> get the interested parties on the same page.
>
> We need to work towards an agenda for the meeting.
>
> From my side, I think we should have at least the following topics on
> the agenda:
>
> - a quick review of what's currently in libv4l2;
> - a presentation about the PipeWire solution;
> - a discussion about the requirements for the new solution;
> - a discussion about how we'll address it - who will do what.

I believe Intel's Jian Xu would be able to give us a brief
introduction to the IPU3 hardware architecture and possibly to upcoming
hardware generations as well.

My experience with existing generations of ISPs from other vendors is
that the main principles of operation are very similar to the model
represented by IPU3 and very different from the OMAP3 example
mentioned by Mauro below. I have commented further on it below.

>
> Comments? Suggestions?
>
> Is anyone else planning to attend, either physically or via
> Google Hangouts?
>
> Tomasz,
>
> Do you have any limit on the number of people who could join us
> via Google Hangouts?
>

Technically, Hangouts should be able to work with really huge
multi-party conferences. There is obviously some limitation on the client
side, since thumbnails of participants need to be decoded in real
time, so even if the resolution is low, if the client is very slow,
there might be some really bad frame drops happening on the client side.

However, I often have meetings with around 8 parties and it tends to
work fine. We can also disable video for all participants who don't
need to present anything at the moment, and the problem would go away
completely.

>
> Regards,
> Mauro
>
> ---
>
> 1. Introduction
> ===
>
> 1.1 V4L2 Kernel aspects
> ---
>
> The media subsystem supports two types of devices:
>
> - "traditional" media hardware, supported via V4L2 API. On such hardware,
>   opening a single device node (usually /dev/video0) is enough to control
>   the entire device. We call it as devnode-based devices.
>   An application sometimes may need to use multiple video nodes with
>   devnode-based drivers to capture multiple streams in parallel
>   (when the hardware allows it of course). That's quite common for
>   Analog TV devices, where both /dev/video0 and /dev/vbi0 are opened
>   at the same time.
>
> - Media-controller based devices. On those devices, there are typically
>   several /dev/video? nodes and several /dev/v4l2-subdev? nodes, plus
>   a media controller device node (usually /dev/media0).
>   We call these mc-based devices. Controlling the hardware requires
>   opening the media device (/dev/media0), setting up the pipeline and
>   adjusting the sub-devices via /dev/v4l2-subdev?. Only streaming is
>   controlled via /dev/video?.
>
> In other words, both configuration and streaming go through the video
> device node on devnode-based drivers, while video device nodes are used
> only for streaming on mc-based drivers.
>
> With devnode-based drivers, "standard" media applications, including open
> source ones (Camorama, Cheese, Xawtv, Firefox, Chromium, ...) and closed
> source ones (Skype, Chrome, ...) support devnode-based devices[1]. Also,
> when just one media device is connected, the streaming/control device
> is typically /dev/video0.
>
> [1] It should be noted that closed-source applications tend to have
> various bugs that prevent them from working properly on many devnode-based
> devices. Due to that, some additional blocks were required in libv4l to
> support some of them. Skype is a good example, as we had to include a
> software scaler in libv4l to make it happy. So, in practice, not everything
> works smoothly with closed-source applications even on devnode-based drivers.
> A few such adjustments were also made to some drivers and/or libv4l, in
> order to fulfill some open-source app requirements.
>
> Support for mc-based devices currently requires a specialized application
> in order to prepare the device for use (set up pipelines, adjust
> hardware controls, etc.). Once the pipeline is set, streaming goes via
> /dev/video?, although usually some /dev/v4l2-subdev? devnodes must also
> be opened, in order to implement the algorithms needed to make video quality
> reasonable.

To further complicate the problem, on many modern imaging subsystems
(Intel IPU3, Rockchip RKISP1), there is more than one video output
(CAPTURE device), for example:
1) a full resolution capture stream and
2) a downscaled preview stream.

Moreover, many ISPs also produce per-frame metadata 

Re: [ANN v2] Complex Camera Workshop - Tokyo - Jun, 19

2018-06-05 Thread jacopo mondi
Hi Mauro,

On Mon, Jun 04, 2018 at 10:33:03AM -0300, Mauro Carvalho Chehab wrote:
> Hi all,


[snip]

> 4.1 Physical Attendees
> ==
>
> Tomasz Figa 
> Mauro Carvalho Chehab 
> Kieran Bingham 
> Laurent Pinchart 
> Niklas Söderlund 
> Zheng, Jian Xu Zheng 
>
> Anyone else?

Sorry, I did not list myself in my reply to the previous email.

As I'll be in Tokyo for OSS, I would like to join you for the
meeting.

Thanks
   j

