Re: [GIT PATCHES FOR 3.1] s5p-fimc and noon010pc30 driver updates

2011-08-17 Thread Mauro Carvalho Chehab
Em 17-08-2011 00:57, Laurent Pinchart escreveu:
 Hi Mauro,
 
 On Wednesday 17 August 2011 00:36:44 Mauro Carvalho Chehab wrote:
 Em 16-08-2011 08:44, Laurent Pinchart escreveu:
 On Tuesday 16 August 2011 17:30:47 Mauro Carvalho Chehab wrote:
 Em 16-08-2011 01:57, Laurent Pinchart escreveu:
 My point is that the ISP driver developer can't know in advance which
 sensor will be used in systems that don't exist yet.

 As soon as such hardware is designed, someone will know. It is a
 trivial patch to associate new hardware with a hardware
 profile in the platform data.

 Platform data must contain hardware descriptions only, not policies.
 This is even clearer now that ARM is moving away from board code to
 the Device Tree.

 Again, a cell phone with one front camera and one rear camera has two
 sensor inputs only. This is not a policy. It is a hardware constraint.
 The driver should allow setting the pipeline for both sensors via
 S_INPUT, otherwise a V4L2-only userspace application won't work.

 It is as simple as that.
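
 To make the expectation concrete, this is all a pure-V4L2 application
 does today (a minimal sketch; the device fd and input name are
 hypothetical, error handling omitted):

  #include <string.h>
  #include <sys/ioctl.h>
  #include <linux/videodev2.h>

  int select_input(int fd, const char *name)
  {
          struct v4l2_input input;
          int index;

          /* Walk the input list until the wanted sensor shows up. */
          memset(&input, 0, sizeof(input));
          while (ioctl(fd, VIDIOC_ENUMINPUT, &input) == 0) {
                  if (!strcmp((char *)input.name, name))
                          break;
                  input.index++;
          }

          /* Route the capture pipeline to that sensor. */
          index = input.index;
          return ioctl(fd, VIDIOC_S_INPUT, &index);
  }

  /* e.g. select_input(fd, "rear-sensor"); */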

 When capturing from the main sensor on the OMAP3 ISP you need to capture
 raw data to memory on a video node, feed it back to the hardware through
 another video node, and finally capture it on a third video node. A
 V4L2-only userspace application won't work. That's how the hardware is,
 we can't do much about that.

 The raw data conversion is one of the functions that libv4l should do. So,
 if you've already submitted the patches for libv4l to do the hardware
 loopback trick, or added a function there to convert the raw data into a
 common format, that should be ok. Otherwise, we have a problem that needs
 to be fixed.
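
 For reference, the software side of that conversion is already wrapped
 by libv4lconvert; roughly like this (a sketch, error handling omitted):

  #include <string.h>
  #include <sys/ioctl.h>
  #include <linux/videodev2.h>
  #include <libv4lconvert.h>

  void convert_frame(int fd, unsigned char *raw, int raw_size,
                     unsigned char *out, int out_size)
  {
          struct v4lconvert_data *cvt = v4lconvert_create(fd);
          struct v4l2_format src, dst;

          /* src: the raw Bayer format the driver delivers. */
          memset(&src, 0, sizeof(src));
          src.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
          ioctl(fd, VIDIOC_G_FMT, &src);

          /* dst: same frame size, but a common YUV format. */
          dst = src;
          dst.fmt.pix.pixelformat = V4L2_PIX_FMT_YUV420;
          v4lconvert_try_format(cvt, &dst, &src);

          v4lconvert_convert(cvt, &src, &dst, raw, raw_size, out, out_size);
          v4lconvert_destroy(cvt);
  }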
 
 That's not what this is about. Bayer to YUV conversion needs to happen in 
 hardware in this case for performance reasons. The hardware will also perform 
 scaling on YUV, as well as many image processing tasks (white balance, defect 
 pixel correction, gamma correction, noise filtering, ...).
 
 Also, in most cases, probing a sensor is as trivial as reading a
 sensor ID during device probe. This applies, for example, to all
 Omnivision sensors.

 We do things like that all the time in the PC world, as nobody knows
 what webcam someone will plug into his PC.

 Sorry, but that's not related. You simply can't decide in an embedded
 ISP driver how to deal with sensor controls, as the system will be
 used in too wide a variety of applications and hardware
 configurations. All controls need to be exposed, period.

 We're not talking about controls. We're talking about providing the
 needed V4L2 support to allow a userspace application to access the
 hardware sensor.

 OK, so we're discussing S_INPUT. Let's discuss controls later :-)

 I never saw embedded hardware that allows physically changing the
 sensor.

 Beagleboard + pluggable sensor board.

 Development systems like beagleboard, pandaboard, Exynos SMDK, etc,
 aren't embedded hardware. They're development kits.

 People create end-user products based on those kits. That makes them
 first-class embedded hardware like any other.

 No doubt they should be supported, but it doesn't make sense to create
 tons of input pipelines to be used for S_INPUT for each different type
 of possible sensor. Somehow, userspace needs to tell the driver which
 sensor is attached to the hardware, or the driver should support
 auto-detecting it.

 We're not creating tons of input pipelines. Look at
 http://www.ideasonboard.org/media/omap3isp.ps , every video node (in
 yellow) has its purpose.

 Not sure if I understood it well. The subdevs 8-11 are the sensors, right?
 
 et8ek8 and vs6555 are the sensors. ad5820 is the lens controller and adp1653 
 the flash controller. All other subdevs (green blocks) are part of the ISP.
 
 In other words, I see 2 options for that:
1) add hardware auto-detection at the sensor logic. At driver probe,
 try to probe all sensors, if it is a hardware development kit;

 We've worked quite hard to remove I2C device probing from the kernel,
 let's not add it back.

 We do I2C probing on several drivers. It is there for devices where
 the cards entry is not enough to identify the hardware. For example,
 two different devices with the same USB ID generally use that.
 If the hardware information is not enough, there's nothing wrong
 with doing that.
 
 Except that probing might destroy the hardware in the general case. We can 
 only probe I2C devices on a bus that we know will not contain any other 
 sensitive devices.
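
 For completeness, this is the kind of ID check meant above, done at
 probe time (a sketch; the 0x0a/0x0b registers are the classic
 Omnivision PID/VER pair, other sensor families use other addresses):

  #include <linux/i2c.h>

  static int sensor_detect(struct i2c_client *client)
  {
          int pid = i2c_smbus_read_byte_data(client, 0x0a);
          int ver = i2c_smbus_read_byte_data(client, 0x0b);

          if (pid < 0 || ver < 0)
                  return -ENODEV;        /* nothing answered */

          dev_info(&client->dev, "found sensor ID %02x%02x\n", pid, ver);
          return pid == 0x76 ? 0 : -ENODEV;   /* e.g. OV76xx family */
  }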
 
2) add one new parameter at the driver: sensors. If the hardware
 is one of those kits, this parameter will allow the developer to specify
 the used sensors. It is the same logic as we do with userspace TV and
 grabber cards without eeprom or any other way to auto-detect the
 hardware.

 This will likely be done through the Device Tree.

 I don't mind much about the way it is implemented, but it should be
 in a place where people can find it and use it.
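
 Something as simple as a module parameter would do for the kits (a
 sketch; the parameter name and sensor names are hypothetical):

  #include <linux/module.h>

  static char *sensors[2];
  static int nsensors;
  module_param_array(sensors, charp, &nsensors, 0444);
  MODULE_PARM_DESC(sensors, "Sensors fitted on the development kit board");

  /* At probe time, the driver matches each name against its table of
   * supported sensors and registers only those subdevices, e.g.:
   *
   *    modprobe some-isp sensors=noon010pc30,ov5640
   */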

 Devices that require 

Re: [GIT PATCHES FOR 3.1] s5p-fimc and noon010pc30 driver updates

2011-08-17 Thread Sakari Ailus
Hi Mauro,

On Tue, Aug 16, 2011 at 03:36:44PM -0700, Mauro Carvalho Chehab wrote:
[clip]
  For instance, when switching between a bw and a color sensor you will
  need to reconfigure the whole pipeline to select the right gamma table,
  white balance parameters, color conversion matrix, ... That's not
  something we want to hardcode in the kernel. This needs to be done from
  userspace.
 
 This is something that, if it is not written somewhere, no userspace
 application except those developed by the hardware vendor will ever work.
 
 I don't see any code for that either in the kernel or in libv4l. Am I missing
 something?

There actually is. The plugin interface patches went into libv4l 0.9.0.
That's just an interface and it doesn't yet have support for any embedded
devices.
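
For those who haven't seen it: a plugin registers a table of hooks that
libv4l2 tries before its default handling, so it can intercept ioctls on
the video node and translate them to the MC/subdev APIs behind the
scenes. The shape is roughly this (an approximation from memory; see
libv4l-plugin.h in the libv4l sources for the authoritative layout):

  /* Approximate shape of the libv4l2 plugin hook table. */
  struct libv4l_dev_ops {
          void *(*init)(int fd);   /* claim the device, allocate state */
          void (*close)(void *dev_ops_priv);
          int (*ioctl)(void *dev_ops_priv, int fd,
                       unsigned long cmd, void *arg);
          /* read/write/mmap hooks omitted here */
  };

An embedded plugin would implement the ioctl hook and map e.g. S_INPUT
onto media entity and link setup.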

-- 
Sakari Ailus
sakari.ai...@iki.fi


Re: [GIT PATCHES FOR 3.1] s5p-fimc and noon010pc30 driver updates

2011-08-17 Thread Mauro Carvalho Chehab
Em 17-08-2011 05:37, Ivan T. Ivanov escreveu:
 
 Hi everybody, 
 
 On Wed, 2011-08-17 at 05:25 -0700, Mauro Carvalho Chehab wrote:
 Em 17-08-2011 00:57, Laurent Pinchart escreveu:
 Hi Mauro,

 On Wednesday 17 August 2011 00:36:44 Mauro Carvalho Chehab wrote:
 Em 16-08-2011 08:44, Laurent Pinchart escreveu:
 On Tuesday 16 August 2011 17:30:47 Mauro Carvalho Chehab wrote:
 Em 16-08-2011 01:57, Laurent Pinchart escreveu:
 No. S_INPUT shouldn't be used to select between sensors. The hardware
 pipeline is more complex than just that. We can't make it all fit in the
 S_INPUT API.

 For instance, when switching between a bw and a color sensor you will
 need to reconfigure the whole pipeline to select the right gamma table,
 white balance parameters, color conversion matrix, ... That's not
 something we want to hardcode in the kernel. This needs to be done from
 userspace.

 This is something that, if it is not written somewhere, no userspace
 application except those developed by the hardware vendor will ever work.

 I don't see any code for that either in the kernel or in libv4l. Am I missing
 something?

 Code for that needs to be written in libv4l. It's not there yet as I don't 
 think we have any hardware for this particular example at the moment :-)

 As no pure V4L2 application would set the pipelines as you've said, and
 no libv4l code exists yet, 
 
 
 Actually there is such code for the OMAP3 ISP driver. Plug-in support in
 libv4l has been extended a little bit [1], and a plug-in which handles
 requests for the regular video device nodes /dev/video0 and /dev/video1
 and translates them to the MC and sub-device APIs has been posted here [2],
 but it is still not merged.
 
 Regards, 
 iivanov
 
 [1] http://www.spinics.net/lists/linux-media/msg35570.html
 [2] http://www.spinics.net/lists/linux-media/msg32539.html

Ah, ok. So, it is just the usual delay of having some features merged.

Hans G.,

FYI. Please review the OMAP3 MC-aware patches for libv4l when you have
some time for that.

Thanks!
Mauro




Re: [GIT PATCHES FOR 3.1] s5p-fimc and noon010pc30 driver updates

2011-08-16 Thread Laurent Pinchart
Hi Mauro,

On Tuesday 16 August 2011 02:13:00 Mauro Carvalho Chehab wrote:
 Em 15-08-2011 09:30, Laurent Pinchart escreveu:
  On Tuesday 09 August 2011 22:05:47 Mauro Carvalho Chehab wrote:
  Em 29-07-2011 05:36, Laurent Pinchart escreveu:
  On Friday 29 July 2011 06:02:54 Mauro Carvalho Chehab wrote:
  Em 28-07-2011 19:57, Sylwester Nawrocki escreveu:
  On 07/28/2011 03:20 PM, Mauro Carvalho Chehab wrote:
  Accumulating sub-dev controls at the video node is the right thing
  to do.
  
  An MC-aware application will need to handle that, but that
  doesn't sound hard. All such an application would need to do is
  to first probe the subdev controls and, when parsing the videodev
  controls, not register controls with duplicated IDs, or mark
  them with some special attribute.
  
  IMHO it's not a big issue in general. Still, both subdev and the host
  device may support the same control ID. And then even though the control
  IDs are the same on the subdev and the host, they could mean physically
  different controls (since when registering a subdev at the host
  driver the host's controls take precedence and duplicate subdev
  controls are skipped).
  
  True, but, except for specific use cases, the host control is enough.
  
  Not for embedded devices. In many cases the control won't even be
  implemented by the host. If your system has two sensors connected to
  the same host, they will both expose an exposure time control. Which
  one do you configure through the video node ? The two sensors will
  likely have different bounds for the same control, how do you report
  that ?
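
An application can see this directly by querying the same control on each
sensor subdev; a single inherited video-node control could only report one
range. A sketch (node paths and numbers hypothetical):

  #include <stdio.h>
  #include <string.h>
  #include <sys/ioctl.h>
  #include <linux/videodev2.h>

  /* Print the exposure range one node reports; call it once per
   * sensor subdev, e.g. /dev/v4l-subdev8 and /dev/v4l-subdev9. */
  static void show_exposure_range(int fd)
  {
          struct v4l2_queryctrl q;

          memset(&q, 0, sizeof(q));
          q.id = V4L2_CID_EXPOSURE;
          if (ioctl(fd, VIDIOC_QUERYCTRL, &q) == 0)
                  printf("%s: %d..%d step %d\n", (char *)q.name,
                         q.minimum, q.maximum, q.step);
  }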
  
  If the device has two sensors that are mutually exclusive, they should
  be mapped as two different inputs. The control shown should be the one
  used by the currently active input.
  
  If the sensors aren't mutually exclusive, then two different video nodes
  will be shown in userspace.
  
  It's more complex than that. The OMAP3 ISP driver exposes 7 video nodes
  regardless of the number of sensors. Sensors can be mutually exclusive or
  not, depending on the board. S_INPUT has its use cases, but is less
  useful on embedded hardware.
 
 Sorry, but exposing a video node that can't be used doesn't make sense.

All those 7 video nodes can be used, depending on userspace's needs.

  Those controls are also quite useless for generic V4L2 applications,
  which will want auto-exposure anyway. This needs to be implemented in
  userspace in libv4l.
  
  Several webcams export exposure controls. Why shouldn't those controls
  be exposed to userspace anymore?
  
  Ok, if the hardware won't support 3A algorithm, libv4l will implement
  it, eventually using an extra hardware-aware code to get the best
  performance for that specific device, but this doesn't mean that the
  user should always use it.
  
  Btw, the 3A algorithm is one of the things I don't like on my cell
  phone: while it works most of the time, sometimes I want to disable it
  and manually adjust, as it produces dark images, when there's a very
  bright light somewhere on the image background. Manually adjusting the
  exposure time and aperture is something relevant for some users.
  
  It is, but on embedded devices that usually requires the application to
  be hardware-aware. Exposure time limits depend on blanking, which in
  turn influences the frame rate along with the pixel clock (often
  configurable as well). Programming those settings wrong can exceed the
  available ISP bandwidth.
  
  The world unfortunately stopped being simple some time ago :-)
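
To put rough numbers on that coupling (illustrative values only, not
taken from any real sensor):

  /* Frame timing as the ISP sees it. */
  const double pix_clk = 96e6;              /* pixel clock, Hz */
  const double width = 2592, hblank = 408;  /* pixels per line */
  const double height = 1944, vblank = 56;  /* lines per frame */

  double line_t  = (width + hblank) / pix_clk;  /* 31.25 us */
  double frame_t = line_t * (height + vblank);  /* 62.5 ms  */
  double fps     = 1.0 / frame_t;               /* 16 fps   */

  /* Exposure is bounded by roughly one frame time; exposing longer
   * means adding vertical blanking, which lowers the frame rate. */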
  
  Also there might be some preference at user space, at which stage of
  the pipeline to apply some controls. This is where the subdev API
  helps, and plain video node API does not.
  
  Again, this is for specific use cases. In such cases, what is expected
  is that the more generic control will be exported via V4L2 API.
  
  Thus it's a bit hard to imagine that we could do something like
  optionally not to inherit controls as the subdev/MC API is
  optional.
  
  :)
  
  This was actually implemented. There are some cases in the ivtv/cx18
  drivers where both the bridge and a subdev provide the same control
  (audio volume, for example). The idea is to allow the bridge driver
  to touch the subdev control without exposing it to userspace,
  since the desire was that the bridge driver itself would expose such
  a control, using a logic that combines changing the subdev and the
  bridge registers for volume.
  
  This seems like hard-coding a policy in the driver ;) Then there is no
  way (it might not be worth the effort though) to play with the volume
  level at both devices, e.g. to obtain optimal S/N ratio.
  
  In general, playing with just one control is enough. Andy had a
  different opinion when this issue was discussed, and he thinks that
  playing with both is better. At the end, this is a developer's
  decision, depending on how much information (and bug reports) he had.
  
  ivtv/cx18 is a completely 

Re: [GIT PATCHES FOR 3.1] s5p-fimc and noon010pc30 driver updates

2011-08-16 Thread Laurent Pinchart
Hi Mauro,

On Tuesday 16 August 2011 02:21:09 Mauro Carvalho Chehab wrote:
 Em 15-08-2011 05:45, Laurent Pinchart escreveu:
  After having it there properly working and tested independently, we may
  consider patches removing V4L2 interfaces that were obsoleted in favor
  of using the libv4l implementation, of course using the Kernel way of
  deprecating interfaces. But doing it before having it, doesn't make any
  sense.
  
  Once again we're not trying to remove controls, but expose them
  differently.
 
 See the comments at the patch series, starting from the patches that
 remove support for S_INPUT. I'm aware that the controls will be exposed
 by both MC and V4L2 API, although having ways to expose/hide controls via
 V4L2 API makes patch review way more complicated than it used to be
 before the MC patches.

The MC API doesn't expose controls. Controls are still exposed by the V4L2 API 
only, but V4L2 can then expose them on subdev nodes in addition to video 
nodes.
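
The exact same code works on both kinds of nodes, since subdev nodes
accept the standard control ioctls; a sketch (device paths hypothetical):

  #include <fcntl.h>
  #include <unistd.h>
  #include <sys/ioctl.h>
  #include <linux/videodev2.h>

  static int set_gain(const char *path, int value)
  {
          struct v4l2_control ctrl = { .id = V4L2_CID_GAIN, .value = value };
          int fd = open(path, O_RDWR);
          int ret = ioctl(fd, VIDIOC_S_CTRL, &ctrl);

          close(fd);
          return ret;
  }

  /* set_gain("/dev/v4l-subdev8", 16);   sensor subdev              */
  /* set_gain("/dev/video0", 16);        video node, if inherited   */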

-- 
Regards,

Laurent Pinchart


Re: [GIT PATCHES FOR 3.1] s5p-fimc and noon010pc30 driver updates

2011-08-16 Thread Mauro Carvalho Chehab
Em 16-08-2011 01:59, Laurent Pinchart escreveu:
 Hi Mauro,
 
 On Tuesday 16 August 2011 02:21:09 Mauro Carvalho Chehab wrote:
 Em 15-08-2011 05:45, Laurent Pinchart escreveu:
 After having it there properly working and tested independently, we may
 consider patches removing V4L2 interfaces that were obsoleted in favor
 of using the libv4l implementation, of course using the Kernel way of
 deprecating interfaces. But doing it before having it, doesn't make any
 sense.

 Once again we're not trying to remove controls, but expose them
 differently.

 See the comments at the patch series, starting from the patches that
 remove support for S_INPUT. I'm aware that the controls will be exposed
 by both MC and V4L2 API, although having ways to expose/hide controls via
 V4L2 API makes patch review way more complicated than it used to be
 before the MC patches.
 
 The MC API doesn't expose controls. Controls are still exposed by the
 V4L2 API only, but V4L2 can then expose them on subdev nodes in addition
 to video nodes.

To be clear, when I'm talking about MC, I also mean the subdev node API,
as a pure V4L2 API application doesn't know anything about them.

Regards,
Mauro


Re: [GIT PATCHES FOR 3.1] s5p-fimc and noon010pc30 driver updates

2011-08-16 Thread Mauro Carvalho Chehab
Em 16-08-2011 01:57, Laurent Pinchart escreveu:
 Hi Mauro,
 
 My point is that the ISP driver developer can't know in advance which
 sensor will be used in systems that don't exist yet.

 As soon as such hardware is designed, someone will know. It is a
 trivial patch to associate new hardware with a hardware profile in the
 platform data.
 
 Platform data must contain hardware descriptions only, not policies. This is 
 even clearer now that ARM is moving away from board code to the Device Tree.

Again, a cell phone with one front camera and one rear camera has two
sensor inputs only. This is not a policy. It is a hardware constraint.
The driver should allow setting the pipeline for both sensors via S_INPUT,
otherwise a V4L2-only userspace application won't work.

It is as simple as that.

 Also, in most cases, probing a sensor is as trivial as reading a sensor
 ID during device probe. This applies, for example, to all Omnivision
 sensors.

 We do things like that all the time in the PC world, as nobody knows what
 webcam someone will plug into his PC.
 
 Sorry, but that's not related. You simply can't decide in an embedded ISP
 driver how to deal with sensor controls, as the system will be used in too
 wide a variety of applications and hardware configurations. All controls
 need to be exposed, period.

We're not talking about controls. We're talking about providing the needed
V4L2 support to allow a userspace application to access the hardware
sensor.

 I never saw embedded hardware that allows physically changing the
 sensor.

 Beagleboard + pluggable sensor board.

 Development systems like beagleboard, pandaboard, Exynos SMDK, etc, aren't
 embedded hardware. They're development kits.
 
 People create end-user products based on those kits. That makes them
 first-class embedded hardware like any other.

No doubt they should be supported, but it doesn't make sense to create tons
of input pipelines to be used for S_INPUT for each different type of possible
sensor. Somehow, userspace needs to tell the driver which sensor is attached
to the hardware, or the driver should support auto-detecting it.

In other words, I see 2 options for that:
1) add hardware auto-detection at the sensor logic. At driver probe,
try to probe all sensors, if it is a hardware development kit;
2) add one new parameter at the driver: sensors. If the hardware
is one of those kits, this parameter will allow the developer to specify
the used sensors. It is the same logic as we do with userspace TV and
grabber cards without eeprom or any other way to auto-detect the hardware.

 I don't mind if, for those kits, the developer that is playing with it has
 to pass a mode parameter and/or run some open hardware-aware small
 application that makes the driver select the sensor type he is using, but,
 if the hardware is, instead, an N9 or a Galaxy Tab (or whatever embedded
 hardware), the driver should expose just the sensors that exist on such
 hardware. It shouldn't ever be allowed to change it from userspace, using
 whatever API, on those devices.
 
 Who talked about changing sensors from userspace on those systems ? Platform 
 data (regardless of whether it comes from board code, device tree, or 
 something else) will contain a hardware description, and the kernel will 
 create the right devices and load the right drivers. The issue we're 
 discussing is how to expose controls for those devices to userspace, and that 
 needs to be done through subdev nodes.

The issue that is under discussion is the removal of S_INPUT from the samsung
driver, and the comments at the patches that talk about removing V4L2 API
support in favor of using an MC-only API for some fundamental things.

For example, with a patch like this one, only one sensor will be supported
without the MC API (either the front or back sensor on a multi-sensor camera):

http://git.infradead.org/users/kmpark/linux-2.6-samsung/commit/47751733a322a241927f9238b8ab1441389c9c41

Look at the comment on this patch also:

http://git.infradead.org/users/kmpark/linux-2.6-samsung/commit/c6fb462c38be60a45d16a29a9e56c886ee0aa08c

What is called API compatibility mode is not clear, but it gave me the
impression that the idea is to expose the controls only via subdev nodes.

That is the rationale for these discussions: the V4L2 API is not being
deprecated in favor of the MC API; e.g., controls shouldn't be hidden from
the V4L2 API without a good reason.

 Even if you did, fine image quality tuning requires accessing pretty
 much all controls individually anyway.

 The same is also true for non-embedded hardware. The only situation
 where V4L2 API is not enough is when there are two controls of the same
 type active. For example, 2 active volume controls, one at the audio
 demod, and another at the bridge. There may be some cases where you
 can do the same thing at the sensor or at a DSP block. This is where MC
 API gives an improvement, by allowing changing both, 

Re: [GIT PATCHES FOR 3.1] s5p-fimc and noon010pc30 driver updates

2011-08-16 Thread Laurent Pinchart
Hi Mauro,

On Tuesday 16 August 2011 17:30:47 Mauro Carvalho Chehab wrote:
 Em 16-08-2011 01:57, Laurent Pinchart escreveu:
  My point is that the ISP driver developer can't know in advance which
  sensor will be used in systems that don't exist yet.
  
  As soon as such hardware is designed, someone will know. It is a
  trivial patch to associate new hardware with a hardware profile in the
  platform data.
  
  Platform data must contain hardware descriptions only, not policies. This
  is even clearer now that ARM is moving away from board code to the
  Device Tree.
 
 Again, a cell phone with one front camera and one rear camera has two
 sensor inputs only. This is not a policy. It is a hardware constraint.
 The driver should allow setting the pipeline for both sensors via S_INPUT,
 otherwise a V4L2-only userspace application won't work.
 
 It is as simple as that.

When capturing from the main sensor on the OMAP3 ISP you need to capture raw 
data to memory on a video node, feed it back to the hardware through another 
video node, and finally capture it on a third video node. A V4L2-only 
userspace application won't work. That's how the hardware is, we can't do much 
about that.

  Also, in most cases, probing a sensor is as trivial as reading a sensor
  ID during device probe. This applies, for example, to all Omnivision
  sensors.
  
  We do things like that all the time in the PC world, as nobody knows what
  webcam someone will plug into his PC.
  
  Sorry, but that's not related. You simply can't decide in an embedded ISP
  driver how to deal with sensor controls, as the system will be used in
  too wide a variety of applications and hardware configurations. All
  controls need to be exposed, period.
 
 We're not talking about controls. We're talking about providing the needed
 V4L2 support to allow a userspace application to access the hardware
 sensor.

OK, so we're discussing S_INPUT. Let's discuss controls later :-)

  I never saw embedded hardware that allows physically changing the
  sensor.
  
  Beagleboard + pluggable sensor board.
  
  Development systems like beagleboard, pandaboard, Exynos SMDK, etc,
  aren't embedded hardware. They're development kits.
  
  People create end-user products based on those kits. That makes them
  first-class embedded hardware like any other.
 
 No doubt they should be supported, but it doesn't make sense to create tons
 of input pipelines to be used for S_INPUT for each different type of
 possible sensor. Somehow, userspace needs to tell the driver which sensor
 is attached to the hardware, or the driver should support auto-detecting
 it.

We're not creating tons of input pipelines. Look at 
http://www.ideasonboard.org/media/omap3isp.ps , every video node (in yellow) 
has its purpose.

 In other words, I see 2 options for that:
   1) add hardware auto-detection at the sensor logic. At driver probe,
 try to probe all sensors, if it is a hardware development kit;

We've worked quite hard to remove I2C device probing from the kernel, let's 
not add it back.

   2) add one new parameter at the driver: sensors. If the hardware
 is one of those kits, this parameter will allow the developer to specify
 the used sensors. It is the same logic as we do with userspace TV and
 grabber cards without eeprom or any other way to auto-detect the hardware.

This will likely be done through the Device Tree.

  I don't mind if, for those kits, the developer that is playing with it
  has to pass a mode parameter and/or run some open hardware-aware small
  application that makes the driver select the sensor type he is
  using, but, if the hardware is, instead, an N9 or a Galaxy Tab (or
  whatever embedded hardware), the driver should expose just the sensors
  that exist on such hardware. It shouldn't ever be allowed to change it
  from userspace, using whatever API, on those devices.
  
  Who talked about changing sensors from userspace on those systems ?
  Platform data (regardless of whether it comes from board code, device
  tree, or something else) will contain a hardware description, and the
  kernel will create the right devices and load the right drivers. The
  issue we're discussing is how to expose controls for those devices to
  userspace, and that needs to be done through subdev nodes.
 
 The issue that is under discussion is the removal of S_INPUT from the
 samsung driver, and the comments at the patches that talk about removing
 V4L2 API support in favor of using an MC-only API for some fundamental
 things.
 
 For example, with a patch like this one, only one sensor will be supported
 without the MC API (either the front or back sensor on a multi-sensor
 camera):
 http://git.infradead.org/users/kmpark/linux-2.6-samsung/commit/47751733a322a241927f9238b8ab1441389c9c41
 
 Look at the comment on this patch also:
   
 http://git.infradead.org/users/kmpark/linux-2.6-samsung/commit/c6fb462c38be60a45d16a29a9e56c886ee0aa08c
 
 What is called API 

Re: [GIT PATCHES FOR 3.1] s5p-fimc and noon010pc30 driver updates

2011-08-16 Thread Sylwester Nawrocki
Hello,

On 08/16/2011 05:30 PM, Mauro Carvalho Chehab wrote:
 Em 16-08-2011 01:57, Laurent Pinchart escreveu:
 Hi Mauro,
 
 My point is that the ISP driver developer can't know in advance which
 sensor will be used in systems that don't exist yet.

 As soon as such hardware is designed, someone will know. It is a
 trivial patch to associate new hardware with a hardware profile in the
 platform data.

My question here would be whether we want to have a kernel patch fixing up
the configuration for each H/W case or rather a user space script that will
set up a data processing chain ?


 Platform data must contain hardware descriptions only, not policies. This is
 even clearer now that ARM is moving away from board code to the Device Tree.
 
 Again, a cell phone with one front camera and one rear camera has two
 sensor inputs only. This is not a policy. It is a hardware constraint.

There is a policy because you have to tweak the device configuration for a
specific board and sensor. It seems impossible to have fully functional
drivers in the mainline with such a simplistic approach.


 The driver should allow setting the pipeline for both sensors via S_INPUT,
 otherwise a V4L2-only userspace application won't work.
 
 It is as simple as that.
 
 Also, in most cases, probing a sensor is as trivial as reading a sensor
 ID during device probe. This applies, for example, to all Omnivision
 sensors.

 We do things like that all the time in the PC world, as nobody knows what
 webcam someone will plug into his PC.

 Sorry, but that's not related. You simply can't decide in an embedded ISP
 driver how to deal with sensor controls, as the system will be used in too
 wide a variety of applications and hardware configurations. All controls
 need to be exposed, period.
 
 We're not talking about controls. We're talking about providing the needed
 V4L2 support to allow a userspace application to access the hardware
 sensor.

Which involves the controls as well.

 
 I never saw embedded hardware that allows physically changing the
 sensor.

 Beagleboard + pluggable sensor board.

 Development systems like beagleboard, pandaboard, Exynos SMDK, etc, aren't
 embedded hardware. They're development kits.

 People create end-user products based on those kits. That makes them
 first-class embedded hardware like any other.
 
 No doubt they should be supported, but it doesn't make sense to create tons
 of input pipelines to be used for S_INPUT for each different type of possible
 sensor. Somehow, userspace needs to tell the driver which sensor is attached
 to the hardware, or the driver should support auto-detecting it.
 
 In other words, I see 2 options for that:
   1) add hardware auto-detection at the sensor logic. At driver probe,
 try to probe all sensors, if it is a hardware development kit;

Isn't it better to let the application discover what type of sensor
the kernel exposes and apply a pre-defined ISP configuration for it?
If I am not mistaken, this is what happens currently in OMAP3 ISP systems.
Should the base parameters for the I2C client registration be taken from
platform_data/the flattened device tree?

   2) add one new parameter at the driver: sensors. If the hardware
 is one of those kits, this parameter will allow the developer to specify
 the used sensors. It is the same logic as we do with userspace TV and
 grabber cards without eeprom or any other way to auto-detect the hardware.

Sorry, this does not look sane to me.. What is the driver supposed to do
with such a list of sensors ? Hard code the pipeline configurations for them ?

 
 I don't mind if, for those kits, the developer that is playing with it has
 to pass a mode parameter and/or run some open hardware-aware small
 application that makes the driver select the sensor type he is using, but,
 if the hardware is, instead, an N9 or a Galaxy Tab (or whatever embedded
 hardware), the driver should expose just the sensors that exist on such
 hardware. It shouldn't ever be allowed to change it from userspace, using
 whatever API, on those devices.

 Who talked about changing sensors from userspace on those systems ? Platform
 data (regardless of whether it comes from board code, device tree, or
 something else) will contain a hardware description, and the kernel will
 create the right devices and load the right drivers. The issue we're
 discussing is how to expose controls for those devices to userspace, and that
 needs to be done through subdev nodes.
 
 The issue that is under discussion is the removal of S_INPUT from the samsung
 driver, and the comments at the patches that talk about removing V4L2 API
 support in favor of using an MC-only API for some fundamental things.
 
 For example, with a patch like this one, only one sensor will be supported
 without the MC API (either the front or back sensor on a multi-sensor camera):
   
 

Re: [GIT PATCHES FOR 3.1] s5p-fimc and noon010pc30 driver updates

2011-08-15 Thread Laurent Pinchart
Hi Mauro,

On Tuesday 09 August 2011 22:05:47 Mauro Carvalho Chehab wrote:
 Em 29-07-2011 05:36, Laurent Pinchart escreveu:
  On Friday 29 July 2011 06:02:54 Mauro Carvalho Chehab wrote:
  Em 28-07-2011 19:57, Sylwester Nawrocki escreveu:
  On 07/28/2011 03:20 PM, Mauro Carvalho Chehab wrote:
  Accumulating sub-dev controls at the video node is the right thing to
  do.
  
  An MC-aware application will need to handle that, but that
  doesn't sound hard. All such an application would need to do is to
  first probe the subdev controls and, when parsing the videodev
  controls, not register controls with duplicated IDs, or mark them
  with some special attribute.
  
  IMHO it's not a big issue in general. Still, both subdev and the host
  device may support the same control ID. And then even though the control
  IDs are the same on the subdev and the host, they could mean physically
  different controls (since when registering a subdev at the host driver
  the host's controls take precedence and duplicate subdev controls are
  skipped).
  
  True, but, except for specific use cases, the host control is enough.
  
  Not for embedded devices. In many cases the control won't even be
  implemented by the host. If your system has two sensors connected to the
  same host, they will both expose an exposure time control. Which one do
  you configure through the video node ? The two sensors will likely have
  different bounds for the same control, how do you report that ?
 
 If the device has two sensors that are mutually exclusive, they should be
 mapped as two different inputs. The control shown should be the one used
 by the currently active input.
 
 If the sensors aren't mutually exclusive, then two different video nodes
 will be shown in userspace.

It's more complex than that. The OMAP3 ISP driver exposes 7 video nodes 
regardless of the number of sensors. Sensors can be mutually exclusive or not, 
depending on the board. S_INPUT has its use cases, but is less useful on 
embedded hardware.

  Those controls are also quite useless for generic V4L2 applications,
  which will want auto-exposure anyway. This needs to be implemented in
  userspace in libv4l.
 
 Several webcams export exposure controls. Why shouldn't those controls be
 exposed to userspace anymore?
 
 Ok, if the hardware won't support 3A algorithm, libv4l will implement it,
 eventually using an extra hardware-aware code to get the best performance
 for that specific device, but this doesn't mean that the user should always
 use it.

 Btw, the 3A algorithm is one of the things I don't like on my cell phone:
 while it works most of the time, sometimes I want to disable it and
 manually adjust, as it produces dark images, when there's a very bright
 light somewhere on the image background. Manually adjusting the exposure
 time and aperture is something relevant for some users.

It is, but on embedded devices that usually requires the application to be
hardware-aware. Exposure time limits depend on blanking, which in turn
influences the frame rate along with the pixel clock (often configurable as
well). Programming those settings wrong can exceed the available ISP
bandwidth.

The world unfortunately stopped being simple some time ago :-)

  Also there might be some preference at user space, at which stage of
  the pipeline to apply some controls. This is where the subdev API
  helps, and plain video node API does not.
  
  Again, this is for specific use cases. In such cases, what is expected is
  that the more generic control will be exported via V4L2 API.
  
  Thus it's a bit hard to imagine that we could do something like
  optionally not to inherit controls as the subdev/MC API is
  optional.
  
  :)
  
  This was actually implemented. There are some cases in the ivtv/cx18
  drivers where both the bridge and a subdev provide the same control
  (audio volume, for example). The idea is to allow the bridge driver
  to touch the subdev control without exposing it to userspace,
  since the desire was that the bridge driver itself would expose such
  a control, using a logic that combines changing the subdev and the
  bridge registers for volume.
  
  This seems like hard-coding a policy in the driver ;) Then there is no
  way (it might not be worth the effort though) to play with the volume
  level at both devices, e.g. to obtain optimal S/N ratio.
  
  In general, playing with just one control is enough. Andy had a
  different opinion when this issue was discussed, and he thinks that
  playing with both is better. At the end, this is a developer's decision,
  depending on how much information (and bug reports) he had.
  
  ivtv/cx18 is a completely different use case, where hardware
  configurations are known, and experiments possible to find out which
  control(s) to use and how. In this case you can't know in advance what
  the sensor and host drivers will be used for.
 
 Why not?

My point is that the ISP driver developer can't know in advance which 

Re: [GIT PATCHES FOR 3.1] s5p-fimc and noon010pc30 driver updates

2011-08-15 Thread Laurent Pinchart
Hi Mauro,

On Wednesday 10 August 2011 02:22:00 Mauro Carvalho Chehab wrote:
 Em 09-08-2011 20:18, Sakari Ailus escreveu:
  On Tue, Aug 09, 2011 at 05:05:47PM -0300, Mauro Carvalho Chehab wrote:
  Em 29-07-2011 05:36, Laurent Pinchart escreveu:
  On Friday 29 July 2011 06:02:54 Mauro Carvalho Chehab wrote:
  Em 28-07-2011 19:57, Sylwester Nawrocki escreveu:
  On 07/28/2011 03:20 PM, Mauro Carvalho Chehab wrote:

[snip]

  Those controls are also quite useless for generic V4L2 applications,
  which will want auto-exposure anyway. This needs to be implemented in
  userspace in libv4l.
  
  Several webcams export exposure controls. Why shouldn't those controls
  be exposed to userspace anymore?
  
  This is not a webcam,
 
 I know, but the analogy is still valid.
 
  it is a software-controlled high-end digital camera on
  a mobile device. The difference is that the functionality offered by the
  hardware is at a much lower level compared to a webcam; the result is
  that more detailed control is required, but also much more flexibility
  and performance are gained.
 
 I see. What I failed to see is why to remove control from userspace. If the
 hardware is more powerful, I expect to see more controls exported, and not
 removing the V4L2 functionality from the driver.

We're not trying to remove the controls. We expose them differently. That's a 
big difference :-)

  Ok, if the hardware won't support 3A algorithm, libv4l will implement
  it, eventually using an extra hardware-aware code to get the best
  performance for that specific device, but this doesn't mean that the
  user should always use it.
  
  Why not? What would be the alternative?
 
 The user may want or need to disable the 3A algo and control some hardware
 parameters directly, for example, to take an over-exposed
 picture, to create some neat effects, or to get some image whose
 aperture/exposure time is out of the 3A range.
 
  Btw, the 3A algorithm is one of the things I don't like on my cell
  phone: while it works most of the time, sometimes I want to disable it
  and manually adjust, as it produces dark images, when there's a very
  bright light somewhere on the image background. Manually adjusting the
  exposure time and aperture is something relevant for some users.
  
  You do want the 3A algorithms even if you use manual white balance. What
  the automatic white balance algorithm produces is (among other things)
  gamma tables, rgb-to-rgb conversion matrices and lens shading correction
  tables. I doubt any end user, even if it was you, would like to fiddle
  with such large tables directly when capturing photos.
 
 There are some hacks for several professional and amateur cameras that
 replace the existing 3A algorithms by... NONE. The idea is to get the raw
 data directly from the sensor, and use some software like Gimp or
 Photoshop to do lens correction, temperature correction, white
 balance, etc., at post-processing. The advantage of such type of usage is
 that the photographer can fine-tune the generated image to produce what he
 wants, using more sophisticated (and not real-time) algorithms.
 
 [1] for example, one of such firmwares, that I use on my Canon Digital
 Camera is available at:
 http://chdk.wikia.com/wiki/CHDK

That's something you can very easily do with

http://git.ideasonboard.org/?p=media-ctl.git;a=summary

to configure the pipeline and

http://git.ideasonboard.org/?p=yavta.git;a=summary

to set controls and capture video. The latter uses standard V4L2 ioctls
only, even to set controls on subdevs.
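
Under the hood, the pipeline configuration that media-ctl performs is a
handful of MC ioctls; roughly (a sketch, entity and pad numbers
hypothetical, error handling omitted):

  #include <fcntl.h>
  #include <string.h>
  #include <unistd.h>
  #include <sys/ioctl.h>
  #include <linux/media.h>

  static int enable_link(__u32 src_entity, __u16 src_pad,
                         __u32 sink_entity, __u16 sink_pad)
  {
          int fd = open("/dev/media0", O_RDWR);
          struct media_link_desc link;
          int ret;

          /* Identify the link by its endpoints and enable it. */
          memset(&link, 0, sizeof(link));
          link.source.entity = src_entity;
          link.source.index = src_pad;
          link.sink.entity = sink_entity;
          link.sink.index = sink_pad;
          link.flags = MEDIA_LNK_FL_ENABLED;

          ret = ioctl(fd, MEDIA_IOC_SETUP_LINK, &link);
          close(fd);
          return ret;
  }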

  The size of this configuration could
  easily be around 10 kB. A higher level control is required; colour
  temperature, for instance. And its implementation involves the same
  automatic white balance algorithm.
  
  You must know your hardware very, very well to use the aforementioned low
  level controls and in such a case you have no reason not to use the MC
  interface to configure the device either. Configuring the device image
  pipe using MC is actually a number of magnitudes less complex, and I say
  it's been quite an achievement that we have such an interface which
  makes it so effortless to do.
 
 For sure, such controls, which expose the 3A correction algorithm at
 the DSP level, shouldn't be exposed via the V4L2 interface, but things like
 disabling 3A and manually controlling the sensor (aperture, exposure,
 analog zoom, etc.) make sense to export.

Drivers can't export 3A enable/disable controls, as 3A is implemented in
userspace. All manual controls are exported on subdev nodes, there's no issue
with that. Any application (including the 3A implementation in libv4l) can
use them.

  Also there might be some preference at user space, at which stage of
  the pipeline to apply some controls. This is where the subdev API
  helps, and plain video node API does not.
  
  Again, this is for specific usecases. On such cases, what is expected
  is that the more generic control will be 

Re: [GIT PATCHES FOR 3.1] s5p-fimc and noon010pc30 driver updates

2011-08-15 Thread Mauro Carvalho Chehab
Em 15-08-2011 09:30, Laurent Pinchart escreveu:
 Hi Mauro,
 
 On Tuesday 09 August 2011 22:05:47 Mauro Carvalho Chehab wrote:
 Em 29-07-2011 05:36, Laurent Pinchart escreveu:
 On Friday 29 July 2011 06:02:54 Mauro Carvalho Chehab wrote:
 Em 28-07-2011 19:57, Sylwester Nawrocki escreveu:
 On 07/28/2011 03:20 PM, Mauro Carvalho Chehab wrote:
 Accumulating sub-dev controls at the video node is the right thing to
 do.

 An MC-aware application will need to handle that, but that
 doesn't sound hard. All such an application would need to do is to
 first probe the subdev controls and, when parsing the videodev
 controls, not register controls with duplicated IDs, or mark them
 with some special attribute.

 IMHO it's not a big issue in general. Still, both subdev and the host
 device may support the same control ID. And then even though the control
 IDs are the same on the subdev and the host, they could mean physically
 different controls (since when registering a subdev at the host driver
 the host's controls take precedence and duplicate subdev controls are
 skipped).

 True, but, except for specific use cases, the host control is enough.

 Not for embedded devices. In many cases the control won't even be
 implemented by the host. If your system has two sensors connected to the
 same host, they will both expose an exposure time control. Which one do
 you configure through the video node ? The two sensors will likely have
 different bounds for the same control, how do you report that ?

 If the device has two sensors that are mutually exclusive, they should be
 mapped as two different inputs. The control shown should be the one used
 by the currently active input.

 If the sensors aren't mutually exclusive, then two different video nodes
 will be shown in userspace.
 
 It's more complex than that. The OMAP3 ISP driver exposes 7 video nodes
 regardless of the number of sensors. Sensors can be mutually exclusive or
 not, depending on the board. S_INPUT has its use cases, but is less useful
 on embedded hardware.

Sorry, but exposing a video node that can't be used doesn't make sense.

 Those controls are also quite useless for generic V4L2 applications,
 which will want auto-exposure anyway. This needs to be implemented in
 userspace in libv4l.

 Several webcams export exposure controls. Why shouldn't those controls be
 exposed to userspace anymore?

 Ok, if the hardware won't support 3A algorithm, libv4l will implement it,
 eventually using an extra hardware-aware code to get the best performance
 for that specific device, but this doesn't mean that the user should always
 use it.

 Btw, the 3A algorithm is one of the things I don't like on my cell phone:
 while it works most of the time, sometimes I want to disable it and
 manually adjust, as it produces dark images, when there's a very bright
 light somewhere on the image background. Manually adjusting the exposure
 time and aperture is something relevant for some users.
 
 It is, but on embedded devices that usually requires the application to be
 hardware-aware. Exposure time limits depend on blanking, which in turn
 influences the frame rate along with the pixel clock (often configurable
 as well). Programming those settings wrong can exceed the available ISP
 bandwidth.
 
 The world unfortunately stopped being simple some time ago :-)
 
 Also there might be some preference at user space, at which stage of
 the pipeline to apply some controls. This is where the subdev API
 helps, and plain video node API does not.

 Again, this is for specific use cases. In such cases, what is expected is
 that the more generic control will be exported via V4L2 API.

 Thus it's a bit hard to imagine that we could do something like
 optionally not to inherit controls as the subdev/MC API is
 optional.

 :)

 This was actually implemented. There are some cases in the ivtv/cx18
 drivers where both the bridge and a subdev provide the same control
 (audio volume, for example). The idea is to allow the bridge driver
 to touch the subdev control without exposing it to userspace,
 since the desire was that the bridge driver itself would expose such
 a control, using a logic that combines changing the subdev and the
 bridge registers for volume.

 This seems like hard-coding a policy in the driver ;) Then there is no
 way (it might not be worth the effort though) to play with the volume
 level at both devices, e.g. to obtain optimal S/N ratio.

 In general, playing with just one control is enough. Andy had a
 different opinion when this issue was discussed, and he thinks that
 playing with both is better. At the end, this is a developer's decision,
 depending on how much information (and bug reports) he had.

 ivtv/cx18 is a completely different use case, where hardware
 configurations are known, and experiments possible to find out which
 control(s) to use and how. In this case you can't know in advance what
 the sensor and host drivers will be used for.

 Why not?
 
 My point is 

Re: [GIT PATCHES FOR 3.1] s5p-fimc and noon010pc30 driver updates

2011-08-15 Thread Mauro Carvalho Chehab
Em 15-08-2011 05:45, Laurent Pinchart escreveu:
 Hi Mauro,

 After having it there properly working and tested independently, we may
 consider patches removing V4L2 interfaces that were obsoleted in favor of
 using the libv4l implementation, of course using the Kernel way of
 deprecating interfaces. But doing it before having it, doesn't make any
 sense.
 
 Once again we're not trying to remove controls, but expose them differently. 

See the comments at the patch series, starting from the patches that remove
support for S_INPUT. I'm aware that the controls will be exposed by both
MC and V4L2 API, although having ways to expose/hide controls via V4L2 API
makes patch review way more complicated than it used to be before the MC
patches.

Regards,
Mauro


Re: [GIT PATCHES FOR 3.1] s5p-fimc and noon010pc30 driver updates

2011-08-10 Thread Mauro Carvalho Chehab
Em 10-08-2011 05:41, Sylwester Nawrocki escreveu:
 Why not? I never saw embedded hardware that allows physically changing
 the sensor.
 
 I understood Laurent's statement that you can have the same ISP driver
 deployed on multiple boards fitted with various sensors. Hence the multiple
 configurations that cannot be known in advance,

True, but such kind of dependency should be solved either at config time or
at probe time. It doesn't make any sense to show that hardware is present
when it is not. This applies to both V4L and MC APIs (and also to sysfs).

 If the V4L2 API is not enough, implementing it in libv4l won't solve the
 problem, as userspace apps will use the V4L2 API to request it.

 There are two kind of applications: specialised and generic. The generic
 ones may rely on restrictive policies put in place by a libv4l plugin
 whereas the specialised applications need to access the device's features
 directly to get the most out of it.

 A submitted upstream driver should be capable of working with the existing
 tools/userspace.

 Currently, there are no such libv4l plugins (or, at least, I failed to see
 a merged plugin there for N9, S5P, etc). Let's not upstream new drivers or
 remove functionality from already existing drivers based on something that
 has yet to be developed.

 After having it there properly working and tested independently, we may
 consider patches removing V4L2 interfaces that were obsoleted in favor of
 using the libv4l implementation, of course using the Kernel way of
 deprecating interfaces. But doing it before having it doesn't make any
 sense.

 Let's not put the the cart before the horse.
 
 That's a good point. My long term plan was to deprecate and remove
 duplicated ioctls at the driver _once_ support for the regular V4L2
 interface on top of the MC/subdev API is added at the v4l2 libraries. But
 this will happen after I create an initial.. *cough* OpenMAX IL for the
 driver. Which is not what the Tigers like best..

Ok.
 
 --
 Regards,
 Sylwester



Re: [GIT PATCHES FOR 3.1] s5p-fimc and noon010pc30 driver updates

2011-08-09 Thread Mauro Carvalho Chehab
Em 29-07-2011 05:36, Laurent Pinchart escreveu:
 Hi Mauro,
 
 On Friday 29 July 2011 06:02:54 Mauro Carvalho Chehab wrote:
 Em 28-07-2011 19:57, Sylwester Nawrocki escreveu:
 On 07/28/2011 03:20 PM, Mauro Carvalho Chehab wrote:
 Accumulating sub-dev controls at the video node is the right thing to
 do.

 An MC-aware application will need to handle that, but that doesn't
 sound hard. All such an application would need to do is to first
 probe the subdev controls and, when parsing the videodev controls, not
 register controls with duplicated IDs, or mark them with some
 special attribute.
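
A sketch of that dedup logic (node paths hypothetical, error handling
omitted):

  #include <fcntl.h>
  #include <string.h>
  #include <unistd.h>
  #include <sys/ioctl.h>
  #include <linux/videodev2.h>

  static unsigned int seen[256];
  static int nseen;

  static int already_seen(unsigned int id)
  {
          int i;

          for (i = 0; i < nseen; i++)
                  if (seen[i] == id)
                          return 1;
          return 0;
  }

  /* Enumerate every control a node exposes; remember the IDs so that
   * controls met again on a later node are not registered twice. */
  static void scan_node(const char *path)
  {
          struct v4l2_queryctrl q;
          int fd = open(path, O_RDWR);

          memset(&q, 0, sizeof(q));
          q.id = V4L2_CTRL_FLAG_NEXT_CTRL;
          while (ioctl(fd, VIDIOC_QUERYCTRL, &q) == 0) {
                  if (!already_seen(q.id) && nseen < 256)
                          seen[nseen++] = q.id;  /* register this one */
                  q.id |= V4L2_CTRL_FLAG_NEXT_CTRL;
          }
          close(fd);
  }

  /* Probe subdev controls first, then the video node:
   *   scan_node("/dev/v4l-subdev8");
   *   scan_node("/dev/video0");
   */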

 IMHO it's not a big issue in general. Still, both subdev and the host
 device may support the same control ID. And then even though the control
 IDs are the same on the subdev and the host, they could mean physically
 different controls (since when registering a subdev at the host driver the
 host's controls take precedence and duplicate subdev controls are skipped).

 True, but, except for specific use cases, the host control is enough.
 
 Not for embedded devices. In many cases the control won't even be
 implemented by the host. If your system has two sensors connected to the
 same host, they will both expose an exposure time control. Which one do you
 configure through the video node ? The two sensors will likely have
 different bounds for the same control, how do you report that ?

If the device has two sensors that are mutually exclusive, they should be
mapped as two different inputs. The control shown should be the one used by
the currently active input.

If the sensors aren't mutually exclusive, then two different video nodes will
be shown in userspace.
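
For completeness, that is roughly all a V4L2-only application has to do to
pick one of the two sensors in the mutually exclusive case. A sketch with
error handling trimmed; the device path and the input name "rear" are
examples only:

#include <fcntl.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* Enumerate the inputs of a capture node and switch to the one
 * whose name matches. Returns 0 on success, -1 otherwise. */
int select_input(const char *name)
{
        struct v4l2_input input;
        int fd = open("/dev/video0", O_RDWR);
        int i;

        if (fd < 0)
                return -1;
        for (i = 0; ; i++) {
                memset(&input, 0, sizeof(input));
                input.index = i;
                if (ioctl(fd, VIDIOC_ENUMINPUT, &input) < 0)
                        return -1;      /* no more inputs */
                if (!strcmp((char *)input.name, name))
                        return ioctl(fd, VIDIOC_S_INPUT, &i);
        }
}

/* e.g. select_input("rear"); */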

 Those controls are also quite useless for generic V4L2 applications, which 
 will want auto-exposure anyway. This needs to be implemented in userspace in 
 libv4l.

Several webcams export exposure controls. Why shouldn't those controls be
exposed to userspace anymore?

Ok, if the hardware doesn't support a 3A algorithm, libv4l will implement it,
possibly using extra hardware-aware code to get the best performance for that
specific device, but this doesn't mean that the user should always use it.

Btw, the 3A algorithm is one of the things I don't like on my cell phone: while
it works most of the time, sometimes I want to disable it and manually adjust,
as it produces dark images when there's a very bright light somewhere on the
image background. Manually adjusting the exposure time and aperture is
something relevant for some users.
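
In control terms, that is just two VIDIOC_S_CTRL calls. A sketch; whether a
given driver exposes exactly these camera-class controls is device-dependent:

#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* Turn auto-exposure off and set a manual exposure time on an
 * already-open video (or subdev) node. */
int set_manual_exposure(int fd, int time_100us)
{
        struct v4l2_control ctrl;

        ctrl.id = V4L2_CID_EXPOSURE_AUTO;
        ctrl.value = V4L2_EXPOSURE_MANUAL;
        if (ioctl(fd, VIDIOC_S_CTRL, &ctrl) < 0)
                return -1;

        ctrl.id = V4L2_CID_EXPOSURE_ABSOLUTE;
        ctrl.value = time_100us;        /* units of 100 us, per the spec */
        return ioctl(fd, VIDIOC_S_CTRL, &ctrl);
}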

 Also there might be some preference at user space, at which stage of the
 pipeline to apply some controls. This is where the subdev API helps, and
 plain video node API does not.

 Again, this is for specific usecases. In such cases, what is expected is
 that the more generic control will be exported via the V4L2 API.

 Thus it's a bit hard to imagine that we could do something like
 optionally not to inherit controls as the subdev/MC API is optional.
 :)

 This was actually implemented. There are some cases at ivtv/cx18 driver
 where both the bridge and a subdev provides the same control (audio
 volume, for example). The idea is to allow the bridge driver to touch
 at the subdev control without exposing it to userspace, since the
 desire was that the bridge driver itself would expose such control,
 using a logic that combines changing the subdev and the bridge
 registers for volume.

  This seems like hard-coding a policy in the driver ;) Then there is no way
  (it might not be worth the effort though) to play with volume level at
  both devices, e.g. to obtain optimal S/N ratio.

 In general, playing with just one control is enough. Andy had a different
 opinion when this issue was discussed, and he thinks that playing with
 both is better. At the end, this is a developer's decision, depending on
 how much information (and bug reports) he had.
 
 ivtv/cx18 is a completely different use case, where hardware configurations 
 are known, and experiments possible to find out which control(s) to use and 
 how. In this case you can't know in advance what the sensor and host drivers 
 will be used for.

Why not? I never saw an embedded hardware that allows physically changing the
sensor.

 Even if you did, fine image quality tuning requires 
 accessing pretty much all controls individually anyway.

The same is also true for non-embedded hardware. The only situation where the
V4L2 API is not enough is when there are two controls of the same type active.
For example, 2 active volume controls, one at the audio demod, and another at
the bridge. There may be some cases where you can do the same thing at the
sensor or at a DSP block. This is where the MC API gives an improvement, by
allowing changing both, instead of just one of the controls.
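
With subdev device nodes this is straightforward, because they accept the same
control ioctls as the video node. A sketch; the device paths and the volume
example are illustrative only:

#include <fcntl.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* Set the same control at two points in the pipeline: the bridge
 * (through its video node) and a subdev such as the audio demod
 * (through its own node). */
void set_volume_both(int value)
{
        struct v4l2_control ctrl = {
                .id = V4L2_CID_AUDIO_VOLUME,
                .value = value,
        };
        int bridge = open("/dev/video0", O_RDWR);
        int demod = open("/dev/v4l-subdev0", O_RDWR);

        ioctl(bridge, VIDIOC_S_CTRL, &ctrl);
        ioctl(demod, VIDIOC_S_CTRL, &ctrl);
}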

 This is a hack...sorry, just joking ;-) Seriously, I think the
 situation with the userspace subdevs is a bit different. Because with one
 API we directly expose some functionality for applications, with the other
 we code it in the kernel, to make the devices appear uniform at user space.

Re: [GIT PATCHES FOR 3.1] s5p-fimc and noon010pc30 driver updates

2011-08-09 Thread Mauro Carvalho Chehab
Em 09-08-2011 20:18, Sakari Ailus escreveu:
 Hi Mauro,
 
 On Tue, Aug 09, 2011 at 05:05:47PM -0300, Mauro Carvalho Chehab wrote:
 Em 29-07-2011 05:36, Laurent Pinchart escreveu:
 Hi Mauro,

 On Friday 29 July 2011 06:02:54 Mauro Carvalho Chehab wrote:
 Em 28-07-2011 19:57, Sylwester Nawrocki escreveu:
 On 07/28/2011 03:20 PM, Mauro Carvalho Chehab wrote:
 Accumulating sub-dev controls at the video node is the right thing to
 do.

 An MC-aware application will need to handle that, but that doesn't
 sound hard. All such an application would need to do is first
 probe the subdev controls and, when parsing the videodev controls, not
 register controls with duplicated IDs, or mark them with some
 special attribute.

 IMHO it's not a big issue in general. Still, both subdev and the host
 device may support same control id. And then even though the control ids
 are same on the subdev and the host they could mean physically different
 controls (since when registering a subdev at the host driver the host's
 controls take precedence and doubling subdev controls are skipped).

 True, but, except for specific usecases, the host control is enough.

 Not for embedded devices. In many cases the control won't even be implemented
 by the host. If your system has two sensors connected to the same host, they
 will both expose an exposure time control. Which one do you configure through
 the video node? The two sensors will likely have different bounds for the
 same control, how do you report that?

 If the device has two sensors that are mutually exclusive, they should be
 mapped as two different inputs. The control shown should be the one used
 by the currently active input.
 
 Video nodes should represent a dma engine rather than an image source.

True. Image sources are represented, on V4L2, by inputs.

 You
 could use the same hardware to process data from memory to memory while
 streaming video from a sensor to memory at the same time. This is a quite
 typical use in embedded systems.
 
 Usually, on boards where two sensors are connected to an ISP, the dependencies
 are not as straightforward as the sensors being fully independent or requiring
 exclusive access. Typically some of the hardware blocks are shared between
 the two and can be used by only one sensor at a time, so you may not get full
 functionality from both at the same time. And you need to be able to choose
 which one uses that hardware block. This is exactly what the Media controller
 interface models perfectly.

I see.
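
Picking which sensor gets the shared block then boils down to enabling one
link on the media device node. A sketch; the entity and pad numbers are made
up and would normally come from MEDIA_IOC_ENUM_ENTITIES / MEDIA_IOC_ENUM_LINKS:

#include <string.h>
#include <sys/ioctl.h>
#include <linux/media.h>

/* Route the shared processing block to sensor A by enabling the
 * link between them. Entity/pad indices are illustrative. */
int route_to_sensor_a(int media_fd)
{
        struct media_link_desc link;

        memset(&link, 0, sizeof(link));
        link.source.entity = 1;         /* sensor A */
        link.source.index = 0;          /* its source pad */
        link.sink.entity = 4;           /* shared block */
        link.sink.index = 0;            /* its sink pad */
        link.flags = MEDIA_LNK_FL_ENABLED;

        return ioctl(media_fd, MEDIA_IOC_SETUP_LINK, &link);
}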

 See the FIMC Media controller graph AND Sylwester's explanation of it; a few
 links are actually missing from the graph.
 
 URL:http://www.spinics.net/lists/linux-media/msg35504.html
 URL:http://wstaw.org/m/2011/05/26/fimc_graph__.png
 
 Cc Hans.
 
 If the sensors aren't mutually exclusive, then two different video nodes will
 be shown in userspace.

  Those controls are also quite useless for generic V4L2 applications, which
  will want auto-exposure anyway. This needs to be implemented in userspace
  in libv4l.

  Several webcams export exposure controls. Why shouldn't those controls be
  exposed to userspace anymore?
 
 This is not a webcam,

I know, but the analogy is still valid.

 it is a software controlled high end digital camera on
 a mobile device. The difference is that the functionality offered by the
 hardware is at a much lower level compared to a webcam; the result is that
 more detailed control is required, but also much more flexibility and
 performance is gained.

I see. What I fail to see is why controls should be removed from userspace. If
the hardware is more powerful, I expect to see more controls exported, not the
V4L2 functionality removed from the driver.

 Ok, if the hardware doesn't support a 3A algorithm, libv4l will implement it,
 possibly using extra hardware-aware code to get the best performance for that
 specific device, but this doesn't mean that the user should always use it.
 
 Why not? What would be the alternative?

The user may want or need to disable the 3A algo and control some hardware
parameters directly, for example, to take an over-exposed picture, to create
some neat effects, or to get an image whose aperture/exposure time is out of
the 3A range.

 Btw, the 3A algorithm is one of the things I don't like on my cell phone: while
 it works most of the time, sometimes I want to disable it and manually adjust,
 as it produces dark images when there's a very bright light somewhere on the
 image background. Manually adjusting the exposure time and aperture is
 something relevant for some users.
 
 You do want the 3A algorithms even if you use manual white balance. What the
 automatic white balance algorithm produces is (among other things) gamma
 tables, rgb-to-rgb conversion matrices and lens shading correction tables. I
 doubt any end user, even if it was you, would like to fiddle with such large
 tables directly when capturing photos. 

There are some hacks for 

Re: [GIT PATCHES FOR 3.1] s5p-fimc and noon010pc30 driver updates

2011-07-29 Thread Sakari Ailus
On Wed, Jul 27, 2011 at 11:55:55PM -0300, Mauro Carvalho Chehab wrote:
 Hi Sylwester,
 
 Em 27-07-2011 13:35, Sylwester Nawrocki escreveu:
  Hi Mauro,
  
  The following changes since commit f0a21151140da01c71de636f482f2eddec2840cc:
  
Merge tag 'v3.0' into staging/for_v3.1 (2011-07-22 13:33:14 -0300)
  
  are available in the git repository at:
  
git://git.infradead.org/users/kmpark/linux-2.6-samsung fimc-for-mauro
  
  Sylwester Nawrocki (28):
s5p-fimc: Add support for runtime PM in the mem-to-mem driver
s5p-fimc: Add media entity initialization
s5p-fimc: Remove registration of video nodes from probe()
s5p-fimc: Remove sclk_cam clock handling
s5p-fimc: Limit number of available inputs to one
s5p-fimc: Remove sensor management code from FIMC capture driver
s5p-fimc: Remove v4l2_device from video capture and m2m driver
s5p-fimc: Add the media device driver
s5p-fimc: Conversion to use struct v4l2_fh
s5p-fimc: Conversion to the control framework
s5p-fimc: Add media operations in the capture entity driver
s5p-fimc: Add PM helper function for streaming control
s5p-fimc: Correct color format enumeration
s5p-fimc: Convert to use media pipeline operations
s5p-fimc: Add subdev for the FIMC processing block
s5p-fimc: Add support for camera capture in JPEG format
s5p-fimc: Add v4l2_device notification support for single frame 
  capture
s5p-fimc: Use consistent names for the buffer list functions
s5p-fimc: Add runtime PM support in the camera capture driver
s5p-fimc: Correct crop offset alignment on exynos4
s5p-fimc: Remove single-planar capability flags
noon010pc30: Do not ignore errors in initial controls setup
noon010pc30: Convert to the pad level ops
noon010pc30: Clean up the s_power callback
noon010pc30: Remove g_chip_ident operation handler
s5p-csis: Handle all available power supplies
s5p-csis: Rework of the system suspend/resume helpers
s5p-csis: Enable v4l subdev device node
 
 From the last time you've submitted a similar set of patches:
 
  Why? The proper way to select an input is via S_INPUT. The driver 
  may also
  optionally allow changing it via the media device, but it should 
  not be
  a mandatory requirement, as the media device API is optional.
  
  The problem I'm trying to solve here is sharing the sensors and mipi-csi 
  receivers between multiple FIMC H/W instances. Previously the driver 
  supported attaching a sensor to only one selected FIMC at compile time. You 
  could, for instance, specify all sensors as the selected FIMC's platform 
  data and then use S_INPUT to choose between them. The sensor could not be 
  used together with any other FIMC. But this is desired due to different 
  capabilities of the FIMC IP instances. And now, instead of hardcoding a 
  sensor assignment to a particular video node, the sensors are bound to the 
  media device. The media device driver takes the list of sensors and 
  attaches them one by one to subsequent FIMC instances when it is 
  initializing. Each sensor has a link to each FIMC but only one of them is 
  active by default. That said, a user application can use a selected camera by 
  opening the corresponding video node. Which camera is at which node can be 
  queried with G_INPUT.
  
  I could try to implement the previous S_INPUT behaviour, but IMHO this 
  would lead to considerable and unnecessary driver code complication due to 
  supporting overlapping APIs
 
 From this current pull request:
 
 From c6fb462c38be60a45d16a29a9e56c886ee0aa08c Mon Sep 17 00:00:00 2001
 From: Sylwester Nawrocki s.nawro...@samsung.com
 Date: Fri, 10 Jun 2011 20:36:51 +0200
 Subject: s5p-fimc: Conversion to the control framework
 Cc: Linux Media Mailing List linux-media@vger.kernel.org
 
 Make the driver inherit sensor controls when video node only API
 compatibility mode is enabled. The control framework allows to
 properly handle sensor controls through controls inheritance when
 pipeline is re-configured at media device level.
 
 ...
 -   .vidioc_queryctrl   = fimc_vidioc_queryctrl,
 -   .vidioc_g_ctrl  = fimc_vidioc_g_ctrl,
 -   .vidioc_s_ctrl  = fimc_cap_s_ctrl,
 ...
 
 I'll need to take some time to review this patchset. So, it will likely
 miss the bus for 3.1.
 
 While the code inside this patch looked ok, your comments scared me ;)
 
 In summary: The V4L2 API is not a legacy API that needs a compatibility
 mode. Removing controls like VIDIOC_S_INPUT, VIDIOC_*CTRL, etc in
 favor of the media controller API is wrong. This specific patch itself seems
 ok, but it is easy to lose the big picture on a series of 28 patches
 with about 4000 lines changed.

I remember this was discussed along with the last pull request on the
patches. I don't remember seeing your reply in the thread 

Re: [GIT PATCHES FOR 3.1] s5p-fimc and noon010pc30 driver updates

2011-07-29 Thread Laurent Pinchart
Hi Mauro,

On Friday 29 July 2011 06:02:54 Mauro Carvalho Chehab wrote:
 Em 28-07-2011 19:57, Sylwester Nawrocki escreveu:
  On 07/28/2011 03:20 PM, Mauro Carvalho Chehab wrote:
  Accumulating sub-dev controls at the video node is the right thing to
  do.
  
  An MC-aware application will need to handle that, but that doesn't
  sound hard. All such an application would need to do is first
  probe the subdev controls and, when parsing the videodev controls, not
  register controls with duplicated IDs, or mark them with some
  special attribute.
  
  IMHO it's not a big issue in general. Still, both subdev and the host
  device may support same control id. And then even though the control ids
  are same on the subdev and the host they could mean physically different
  controls (since when registering a subdev at the host driver the host's
  controls take precedence and doubling subdev controls are skipped).
 
 True, but, except for specific usecases, the host control is enough.

Not for embedded devices. In many cases the control won't even be implemented 
by the host. If your system has two sensors connected to the same host, they 
will both expose an exposure time control. Which one do you configure through 
the video node? The two sensors will likely have different bounds for the 
same control, how do you report that?

Those controls are also quite useless for generic V4L2 applications, which 
will want auto-exposure anyway. This needs to be implemented in userspace in 
libv4l.

  Also there might be some preference at user space, at which stage of the
  pipeline to apply some controls. This is where the subdev API helps, and
  plain video node API does not.
 
 Again, this is for specific usecases. In such cases, what is expected is
 that the more generic control will be exported via the V4L2 API.
 
  Thus it's a bit hard to imagine that we could do something like
  optionally not to inherit controls as the subdev/MC API is optional.
  :)
  
  This was actually implemented. There are some cases at ivtv/cx18 driver
  where both the bridge and a subdev provides the same control (audio
  volume, for example). The idea is to allow the bridge driver to touch
  at the subdev control without exposing it to userspace, since the
  desire was that the bridge driver itself would expose such control,
  using a logic that combines changing the subdev and the bridge
  registers for volume.
  
  This seems like hard-coding a policy in the driver ;) Then there is no way
  (it might not be worth the effort though) to play with volume level at
  both devices, e.g. to obtain optimal S/N ratio.
 
 In general, playing with just one control is enough. Andy had a different
 opinion when this issue was discussed, and he thinks that playing with
 both is better. At the end, this is a developer's decision, depending on
 how much information (and bug reports) he had.

ivtv/cx18 is a completely different use case, where hardware configurations 
are known, and experiments possible to find out which control(s) to use and 
how. In this case you can't know in advance what the sensor and host drivers 
will be used for. Even if you did, fine image quality tuning requires 
accessing pretty much all controls individually anyway.

  This is a hack...sorry, just joking ;-) Seriously, I think the
  situation with the userspace subdevs is a bit different. Because with one
  API we directly expose some functionality for applications, with other
  we code it in the kernel, to make the devices appear uniform at user
  space.
 
 Not sure if I understood you. V4L2 exports driver functionality to
 userspace in a uniform way. The MC API is for special applications that might
 need to access some internal functions on embedded devices.
 
 Of course, there are some cases where it doesn't make sense to export a
 subdev control via V4L2 API.
 
  Also, the sensor subdev can be configured in the video node driver as
  well as through the subdev device node. Both APIs can do the same
  thing but in order to let the subdev API work as expected the video
  node driver must be forbidden to configure the subdev.
  
  Why? For the sensor, a V4L2 API call will look just like a bridge driver
  call. The subdev will need a mutex anyway, as two MC applications may
  be opening it simultaneously. I can't see why it should forbid changing
  the control from the bridge driver call.
  
  Please do not forget there might be more than one subdev to configure and
  that the bridge itself is also a subdev (which exposes a scaler
  interface, for instance). A situation pretty much like in Figure 4.4 [1]
  (after the scaler there is also a video node to configure, but we may
  assume that pixel resolution at the scaler pad 1 is same as at the video
  node). Assuming the format and crop configuration flow is from sensor to
  host scaler direction, if we have tried to configure _all_ subdevs when
  the last stage of the pipeline is configured (i.e. video node) the whole

Re: [GIT PATCHES FOR 3.1] s5p-fimc and noon010pc30 driver updates

2011-07-28 Thread Sylwester Nawrocki
Hi Mauro,

On 07/28/2011 04:55 AM, Mauro Carvalho Chehab wrote:
 Hi Sylwester,
 
 Em 27-07-2011 13:35, Sylwester Nawrocki escreveu:
 Hi Mauro,

 The following changes since commit f0a21151140da01c71de636f482f2eddec2840cc:

   Merge tag 'v3.0' into staging/for_v3.1 (2011-07-22 13:33:14 -0300)

 are available in the git repository at:

   git://git.infradead.org/users/kmpark/linux-2.6-samsung fimc-for-mauro

 Sylwester Nawrocki (28):
   s5p-fimc: Add support for runtime PM in the mem-to-mem driver
   s5p-fimc: Add media entity initialization
   s5p-fimc: Remove registration of video nodes from probe()
   s5p-fimc: Remove sclk_cam clock handling
   s5p-fimc: Limit number of available inputs to one
   s5p-fimc: Remove sensor management code from FIMC capture driver
   s5p-fimc: Remove v4l2_device from video capture and m2m driver
   s5p-fimc: Add the media device driver
   s5p-fimc: Conversion to use struct v4l2_fh
   s5p-fimc: Conversion to the control framework
   s5p-fimc: Add media operations in the capture entity driver
   s5p-fimc: Add PM helper function for streaming control
   s5p-fimc: Correct color format enumeration
   s5p-fimc: Convert to use media pipeline operations
   s5p-fimc: Add subdev for the FIMC processing block
   s5p-fimc: Add support for camera capture in JPEG format
   s5p-fimc: Add v4l2_device notification support for single frame capture
   s5p-fimc: Use consistent names for the buffer list functions
   s5p-fimc: Add runtime PM support in the camera capture driver
   s5p-fimc: Correct crop offset alignment on exynos4
   s5p-fimc: Remove single-planar capability flags
   noon010pc30: Do not ignore errors in initial controls setup
   noon010pc30: Convert to the pad level ops
   noon010pc30: Clean up the s_power callback
   noon010pc30: Remove g_chip_ident operation handler
   s5p-csis: Handle all available power supplies
   s5p-csis: Rework of the system suspend/resume helpers
   s5p-csis: Enable v4l subdev device node
 
 From the last time you've submitted a similar set of patches:
 
 Why? The proper way to select an input is via S_INPUT. The driver 
 may also
 optionally allow changing it via the media device, but it should 
 not be
 a mandatory requirement, as the media device API is optional.

 The problem I'm trying to solve here is sharing the sensors and
 mipi-csi receivers between multiple FIMC H/W instances. Previously
 the driver supported attaching a sensor to only one selected FIMC
 at compile time. You could, for instance, specify all sensors as
 the selected FIMC's platform data and then use S_INPUT to choose
 between them. The sensor could not be used together with any other
 FIMC. But this is desired due to different capabilities of the FIMC
 IP instances. And now, instead of hardcoding a sensor assignment to a
 particular video node, the sensors are bound to the media device.
 The media device driver takes the list of sensors and attaches them
 one by one to subsequent FIMC instances when it is initializing. Each
 sensor has a link to each FIMC but only one of them is active by 
 default. That said, a user application can use a selected camera by 
 opening the corresponding video node. Which camera is at which node
 can be queried with G_INPUT.

 I could try to implement the previous S_INPUT behaviour, but IMHO this
 would lead to considerable and unnecessary driver code complication due
 to supporting overlapping APIs
 
 From this current pull request:
 
 From c6fb462c38be60a45d16a29a9e56c886ee0aa08c Mon Sep 17 00:00:00 2001
 From: Sylwester Nawrocki s.nawro...@samsung.com
 Date: Fri, 10 Jun 2011 20:36:51 +0200
 Subject: s5p-fimc: Conversion to the control framework
 Cc: Linux Media Mailing List linux-media@vger.kernel.org
 
 Make the driver inherit sensor controls when video node only API
 compatibility mode is enabled. The control framework allows to
 properly handle sensor controls through controls inheritance when
 pipeline is re-configured at media device level.
 
 ...
 -   .vidioc_queryctrl   = fimc_vidioc_queryctrl,
 -   .vidioc_g_ctrl  = fimc_vidioc_g_ctrl,
 -   .vidioc_s_ctrl  = fimc_cap_s_ctrl,
 ...
 
 I'll need to take some time to review this patchset. So, it will likely
 miss the bus for 3.1.

OK, thanks for your time!

 
 While the code inside this patch looked ok, your comments scared me ;)

I didn't mean to scare you, sorry ;)

 
 In summary: The V4L2 API is not a legacy API that needs a compatibility
 mode. Removing controls like VIDIOC_S_INPUT, VIDIOC_*CTRL, etc in
 favor of the media controller API is wrong. This specific patch itself seems

Yes, it's the second time you say the MC API is only optional ;) I should
have formulated the summary from the other point of view. I wrote this
in the context of the two compatibility modes: V4L2 and MC API. Perhaps not
too fortunate wording.

 ok, but it is easy to 

Re: [GIT PATCHES FOR 3.1] s5p-fimc and noon010pc30 driver updates

2011-07-28 Thread Mauro Carvalho Chehab
Em 28-07-2011 07:09, Sylwester Nawrocki escreveu:
 Hi Mauro,
 
 On 07/28/2011 04:55 AM, Mauro Carvalho Chehab wrote:
 Hi Sylwester,

 Em 27-07-2011 13:35, Sylwester Nawrocki escreveu:
 Hi Mauro,
 In summary: The V4L2 API is not a legacy API that needs a compatibility
 mode. Removing controls like VIDIOC_S_INPUT, VIDIOC_*CTRL, etc in
 favor of the media controller API is wrong. This specific patch itself seems
 
 Yes, it's the second time you say the MC API is only optional ;) I should
 have formulated the summary from the other point of view. I wrote this
 in the context of the two compatibility modes: V4L2 and MC API. Perhaps not
 too fortunate wording.

A clear patch description helps reviewers understand what the patch is doing,
why, and how. Sometimes, I write a one-line patch together with 30+ lines of
patch description. Especially on tricky patches, please be verbose in the
descriptions.

 ok, but it is easy to lose the big picture on a series of 28 patches
 with about 4000 lines changed.
 
 Yes, I agree. You really have to look carefully at the final result too.
 
 When it comes to controls, as you might know, I didn't remove any
 functionality. Although the ioctl handlers are gotten rid of in the driver,
 they are handled in the control framework.

Yes, I noticed, but, on a complex driver with several subdevs, it is not that
simple to check where the controls are actually created, or if they require an
MC API call to create them or not. Especially on patch series made by the
manufacturer, I generally don't spend much time fully understanding the driver
logic, as I assume that the developers are doing their best to make the driver
work. My main concern is to be sure that the driver will be doing the right
thing, in the light of the V4L2 API. The MC API made my work harder, as now I
need to check also if, for the device to work, it needs some MC API calls.

So, I have now 2 alternatives left:
  a) to test with a device; 
  b) to fully understand the driver's logic.

Both are very time-consuming, but (a) is quicker and safer, of course provided
that I don't need to dig into several trees to get patches because not
everything is upstream yet.

This also means that I'll need the (complex) patches for devices with MC
several weeks before the next merge window, to give me some time to open a
window for testing.

 The media controller API is meant to be used only by specific applications
 that might add some extra features to the driver. So, it is an optional
 API. In all cases where both API's can do the same thing, the proper way 
 is to use the V4L2 API only, and not the media controller API.
 
 Yes I continually keep that in mind. But there are some corner cases when
 things are not so obvious, e.g. normally a video node inherits controls
 from a sensor subdev. So all the controls are available at the video node.
 
 But when using the subdev API, the right thing to do at the video node is not
 to inherit the sensor's controls. You could of course accumulate all controls
 at the video node at all times, such that the same (sensor subdev's) controls
 are available at /dev/video* and /dev/v4l-subdev*. But this is confusing to an
 MC API aware application, which could wrongly assume that there are more
 controls at the host device than there really are.

Accumulating sub-dev controls at the video node is the right thing to do.

An MC-aware application will need to handle that, but that doesn't sound hard.
All such an application would need to do is first probe the subdev controls
and, when parsing the videodev controls, not register controls with duplicated
IDs, or mark them with some special attribute.
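
The dedup step could look like the sketch below, using the standard control
enumeration; seen_on_subdev() and register_control() are hypothetical helpers
standing in for the application's own bookkeeping:

#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* Hypothetical helpers: a set of control ids collected while
 * enumerating the subdev nodes first, and the app's registration. */
extern int seen_on_subdev(__u32 id);
extern void register_control(const struct v4l2_queryctrl *qc);

/* Enumerate a video node's controls, skipping ids already found
 * on a subdev node. */
void enum_video_node_controls(int fd)
{
        struct v4l2_queryctrl qc;

        memset(&qc, 0, sizeof(qc));
        qc.id = V4L2_CTRL_FLAG_NEXT_CTRL;
        while (ioctl(fd, VIDIOC_QUERYCTRL, &qc) == 0) {
                if (!(qc.flags & V4L2_CTRL_FLAG_DISABLED) &&
                    !seen_on_subdev(qc.id))
                        register_control(&qc);
                qc.id |= V4L2_CTRL_FLAG_NEXT_CTRL;
        }
}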

 Thus it's a bit hard to imagine that we could do something like optionally
 not to inherit controls as the subdev/MC API is optional. :)

This was actually implemented. There are some cases at the ivtv/cx18 driver
where both the bridge and a subdev provide the same control (audio volume, for
example). The idea is to allow the bridge driver to touch the subdev control
without exposing it to userspace, since the desire was that the bridge driver
itself would expose such a control, using a logic that combines changing the
subdev and the bridge registers for volume.

 Also, the sensor subdev can be configured in the video node driver as well as
 through the subdev device node. Both APIs can do the same thing but in order
 to let the subdev API work as expected the video node driver must be forbidden
 to configure the subdev.

Why? For the sensor, a V4L2 API call will look just like a bridge driver call.
The subdev will need a mutex anyway, as two MC applications may be opening it
simultaneously. I can't see why it should forbid changing the control from the
bridge driver call.

 There is a conflict there that in order to use 
 'optional' API the 'main' API behaviour must be affected

It is optional from the userspace perspective. A V4L2-only application should
be able to work with all drivers. 

Re: [GIT PATCHES FOR 3.1] s5p-fimc and noon010pc30 driver updates

2011-07-28 Thread Sylwester Nawrocki
On 07/28/2011 03:20 PM, Mauro Carvalho Chehab wrote:
 In summary: The V4L2 API is not a legacy API that needs a compatibility
 mode. Removing controls like VIDIOC_S_INPUT, VIDIOC_*CTRL, etc in
 favor of the media controller API is wrong. This specific patch itself seems

 Yes, it's the second time you say the MC API is only optional ;) I should
 have formulated the summary from the other point of view. I wrote this
 in the context of the two compatibility modes: V4L2 and MC API. Perhaps not
 too fortunate wording.
 
 A clear patch description helps reviewers understand what the patch is doing,
 why, and how. Sometimes, I write a one-line patch together with 30+ lines of
 patch description. Especially on tricky patches, please be verbose in the
 descriptions.
 
 ok, but it is easy to lose the big picture on a series of 28 patches
 with about 4000 lines changed.

 Yes, I agree. You really have to look carefully at the final result too.

 When it comes to controls, as you might know, I didn't remove any
 functionality. Although the ioctl handlers are gotten rid of in the driver,
 they are handled in the control framework.
 
 Yes, I noticed, but, on a complex driver with several subdevs, it is not that
 simple to check where the controls are actually created, or if they require an
 MC API call to create them or not. Especially on patch series made by the

Sure, I fully understand.

 manufacturer, I generally don't spend much time fully understanding the
 driver logic, as I assume that the developers are doing their best to make
 the driver work. My main concern is to be sure that the driver will be doing
 the right thing, in the light of the V4L2 API. The MC API made my work
 harder, as now I need to check also if, for the device to work, it needs
 some MC API calls.
 
 So, I have now 2 alternatives left:
a) to test with a device;
b) to fully understand the driver's logic.
 
 Both are very time-consuming, but (a) is quicker and safer, of course
 provided that I don't need to dig into several trees to get patches because
 not everything is upstream yet.
 
 This also means that I'll need the (complex) patches for devices with MC
 several weeks before the next merge window, to give me some time to open a
 window for testing.
 
 The media controller API is meant to be used only by specific applications
 that might add some extra features to the driver. So, it is an optional
 API. In all cases where both API's can do the same thing, the proper way
 is to use the V4L2 API only, and not the media controller API.

 Yes I continually keep that in mind. But there are some corner cases when
 things are not so obvious, e.g. normally a video node inherits controls
 from a sensor subdev. So all the controls are available at the video node.

 But when using the subdev API, the right thing to do at the video node is not
 to inherit the sensor's controls. You could of course accumulate all controls
 at the video node at all times, such that the same (sensor subdev's) controls
 are available at /dev/video* and /dev/v4l-subdev*. But this is confusing to an
 MC API aware application, which could wrongly assume that there are more
 controls at the host device than there really are.
 
 Accumulating sub-dev controls at the video node is the right thing to do.
 
 An MC-aware application will need to handle that, but that doesn't sound
 hard. All such an application would need to do is first probe the subdev
 controls and, when parsing the videodev controls, not register controls with
 duplicated IDs, or mark them with some special attribute.

IMHO it's not a big issue in general. Still, both the subdev and the host
device may support the same control id. And then, even though the control ids
are the same on the subdev and the host, they could mean physically different
controls (since, when registering a subdev at the host driver, the host's
controls take precedence and doubled subdev controls are skipped).
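
(For reference, that precedence is what the control framework's handler merge
already gives you. A kernel-side sketch, where struct host_device and the
names "host" and "sensor_sd" are illustrative only:)

#include <media/v4l2-ctrls.h>
#include <media/v4l2-subdev.h>

/* Inherit a sensor subdev's controls into the capture node's
 * handler. Controls whose id already exists in host->ctrl_handler
 * are skipped, so the host's controls take precedence. */
static int inherit_sensor_controls(struct host_device *host,
                                   struct v4l2_subdev *sensor_sd)
{
        return v4l2_ctrl_add_handler(&host->ctrl_handler,
                                     sensor_sd->ctrl_handler);
}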

Also, there might be some preference at user space as to the stage of the
pipeline at which to apply some controls. This is where the subdev API helps,
and the plain video node API does not.

 
 Thus it's a bit hard to imagine that we could do something like optionally
 not to inherit controls as the subdev/MC API is optional. :)
 
 This was actually implemented. There are some cases at the ivtv/cx18 driver
 where both the bridge and a subdev provide the same control (audio volume,
 for example). The idea is to allow the bridge driver to touch the subdev
 control without exposing it to userspace, since the desire was that the
 bridge driver itself would expose such a control, using a logic that combines
 changing the subdev and the bridge registers for volume.

This seems like hard-coding a policy in the driver ;) Then there is no way (it
might not be worth the effort though) to play with the volume level at both
devices, e.g. to obtain an optimal S/N ratio. This is a hack...sorry, just
joking ;-) Seriously, I think the

Re: [GIT PATCHES FOR 3.1] s5p-fimc and noon010pc30 driver updates

2011-07-28 Thread Mauro Carvalho Chehab
Em 28-07-2011 19:57, Sylwester Nawrocki escreveu:
 On 07/28/2011 03:20 PM, Mauro Carvalho Chehab wrote:

 Accumulating sub-dev controls at the video node is the right thing to do.

 An MC-aware application will need to handle that, but that doesn't sound
 hard. All such an application would need to do is first probe the subdev
 controls and, when parsing the videodev controls, not register controls with
 duplicated IDs, or mark them with some special attribute.
 
 IMHO it's not a big issue in general. Still, both the subdev and the host
 device may support the same control id. And then, even though the control ids
 are the same on the subdev and the host, they could mean physically different
 controls (since, when registering a subdev at the host driver, the host's
 controls take precedence and doubled subdev controls are skipped).

True, but, except for specific usecases, the host control is enough.

 Also, there might be some preference at user space as to the stage of the
 pipeline at which to apply some controls. This is where the subdev API helps,
 and the plain video node API does not.

Again, this is for specific usecases. In such cases, what is expected is that
the more generic control will be exported via the V4L2 API.


 Thus it's a bit hard to imagine that we could do something like optionally
 not to inherit controls as the subdev/MC API is optional. :)

 This was actually implemented. There are some cases at ivtv/cx18 driver 
 where both
 the bridge and a subdev provides the same control (audio volume, for 
 example). The
 idea is to allow the bridge driver to touch at the subdev control without 
 exposing
 it to userspace, since the desire was that the bridge driver itself would 
 expose
 such control, using a logic that combines changing the subdev and the bridge 
 registers
 for volume.
 
 This seems like hard-coding a policy in the driver ;) Then there is no way
 (it might not be worth the effort though) to play with the volume level at
 both devices, e.g. to obtain an optimal S/N ratio.

In general, playing with just one control is enough. Andy had a different
opinion when this issue was discussed, and he thinks that playing with both is
better. At the end, this is a developer's decision, depending on how much
information (and bug reports) he had.

 This is a hack...sorry, just joking ;-) Seriously, I think the
 situation with the userspace subdevs is a bit different. Because with one API
 we directly expose some functionality for applications, with the other we
 code it in the kernel, to make the devices appear uniform at user space.

Not sure if I understood you. V4L2 exports driver functionality to userspace
in a uniform way. The MC API is for special applications that might need to
access some internal functions on embedded devices.

Of course, there are some cases where it doesn't make sense to export a subdev
control via the V4L2 API.

 Also, the sensor subdev can be configured in the video node driver as well 
 as
 through the subdev device node. Both APIs can do the same thing but in order
 to let the subdev API work as expected the video node driver must be 
 forbidden
 to configure the subdev.

 Why? For the sensor, a V4L2 API call will look just like a bridge driver 
 call.
 The subdev will need a mutex anyway, as two MC applications may be opening it
 simultaneously. I can't see why it should forbid changing the control from 
 the
 bridge driver call.
 
 Please do not forget there might be more than one subdev to configure and that
 the bridge itself is also a subdev (which exposes a scaler interface, for
 instance). A situation pretty much like in Figure 4.4 [1] (after the scaler
 there is also a video node to configure, but we may assume that the pixel
 resolution at scaler pad 1 is the same as at the video node). Assuming the
 format and crop configuration flow is in the sensor to host scaler direction,
 if we had tried to configure _all_ subdevs when the last stage of the pipeline
 is configured (i.e. the video node), the whole scaler and crop/composition
 configuration would have been destroyed at that time. And there is more to
 configure than VIDIOC_S_FMT can do.

Think from the user's perspective: all the user wants is to see video at a
given resolution. S_FMT (and a few other VIDIOC_* calls) have everything that
the user wants: the desired resolution, framerate and format.
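
(From that generic application's point of view, the sketch below is
essentially the whole story; the numbers are arbitrary examples:)

#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* All a generic capture application asks for: resolution and pixel
 * format via S_FMT, frame rate via S_PARM. */
int setup_capture(int fd)
{
        struct v4l2_format fmt;
        struct v4l2_streamparm parm;

        memset(&fmt, 0, sizeof(fmt));
        fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        fmt.fmt.pix.width = 640;
        fmt.fmt.pix.height = 480;
        fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_YUYV;
        if (ioctl(fd, VIDIOC_S_FMT, &fmt) < 0)
                return -1;

        memset(&parm, 0, sizeof(parm));
        parm.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        parm.parm.capture.timeperframe.numerator = 1;
        parm.parm.capture.timeperframe.denominator = 30; /* 30 fps */
        return ioctl(fd, VIDIOC_S_PARM, &parm);
}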

Specialized applications indeed need more, in order to get the best images for
certain types of usages. So, MC is there.

Such applications will probably need to know exactly what the sensor is, what
its bugs are, how it is connected, what the DSP blocks in the path are, how the
DSP algorithms are implemented, etc., in order to obtain the perfect image.

Even on embedded devices like smartphones and tablets, I predict that both
types of applications will be developed and used: people may use a generic
application like flash player, and a specialized application provided by
the manufacturer. Users can even 

Re: [GIT PATCHES FOR 3.1] s5p-fimc and noon010pc30 driver updates

2011-07-27 Thread Mauro Carvalho Chehab
Hi Sylwester,

Em 27-07-2011 13:35, Sylwester Nawrocki escreveu:
 Hi Mauro,
 
 The following changes since commit f0a21151140da01c71de636f482f2eddec2840cc:
 
   Merge tag 'v3.0' into staging/for_v3.1 (2011-07-22 13:33:14 -0300)
 
 are available in the git repository at:
 
   git://git.infradead.org/users/kmpark/linux-2.6-samsung fimc-for-mauro
 
 Sylwester Nawrocki (28):
   s5p-fimc: Add support for runtime PM in the mem-to-mem driver
   s5p-fimc: Add media entity initialization
   s5p-fimc: Remove registration of video nodes from probe()
   s5p-fimc: Remove sclk_cam clock handling
   s5p-fimc: Limit number of available inputs to one
   s5p-fimc: Remove sensor management code from FIMC capture driver
   s5p-fimc: Remove v4l2_device from video capture and m2m driver
   s5p-fimc: Add the media device driver
   s5p-fimc: Conversion to use struct v4l2_fh
   s5p-fimc: Conversion to the control framework
   s5p-fimc: Add media operations in the capture entity driver
   s5p-fimc: Add PM helper function for streaming control
   s5p-fimc: Correct color format enumeration
   s5p-fimc: Convert to use media pipeline operations
   s5p-fimc: Add subdev for the FIMC processing block
   s5p-fimc: Add support for camera capture in JPEG format
   s5p-fimc: Add v4l2_device notification support for single frame capture
   s5p-fimc: Use consistent names for the buffer list functions
   s5p-fimc: Add runtime PM support in the camera capture driver
   s5p-fimc: Correct crop offset alignment on exynos4
   s5p-fimc: Remove single-planar capability flags
   noon010pc30: Do not ignore errors in initial controls setup
   noon010pc30: Convert to the pad level ops
   noon010pc30: Clean up the s_power callback
   noon010pc30: Remove g_chip_ident operation handler
   s5p-csis: Handle all available power supplies
   s5p-csis: Rework of the system suspend/resume helpers
   s5p-csis: Enable v4l subdev device node

From the last time you've submitted a similar set of patches:

 Why? The proper way to select an input is via S_INPUT. The driver 
 may also
 optionally allow changing it via the media device, but it should 
 not be
 a mandatory requirement, as the media device API is optional.
 
 The problem I'm trying to solve here is sharing the sensors and mipi-csi 
 receivers between multiple FIMC H/W instances. Previously the driver 
 supported attaching a sensor to only one selected FIMC at compile time. You 
 could, for instance, specify all sensors as the selected FIMC's platform data 
 and then use S_INPUT to choose between them. The sensor could not be used 
 together with any other FIMC. But this is desired due to different 
 capabilities of the FIMC IP instances. And now, instead of hardcoding a 
 sensor assignment to a particular video node, the sensors are bound to the
 media device. The media device driver takes the list of sensors and attaches
 them one by one to subsequent FIMC instances when it is initializing. Each
 sensor has a link to each FIMC but only one of them is active by default. That
 said, a user application can use a selected camera by opening the
 corresponding video node. Which camera is at which node can be queried with
 G_INPUT.
 
 I could try to implement the previous S_INPUT behaviour, but IMHO this would 
 lead to considerable and unnecessary driver code complication due to 
 supporting overlapping APIs

From this current pull request:

From c6fb462c38be60a45d16a29a9e56c886ee0aa08c Mon Sep 17 00:00:00 2001
From: Sylwester Nawrocki s.nawro...@samsung.com
Date: Fri, 10 Jun 2011 20:36:51 +0200
Subject: s5p-fimc: Conversion to the control framework
Cc: Linux Media Mailing List linux-media@vger.kernel.org

Make the driver inherit sensor controls when video node only API
compatibility mode is enabled. The control framework allows to
properly handle sensor controls through controls inheritance when
pipeline is re-configured at media device level.

...
-   .vidioc_queryctrl   = fimc_vidioc_queryctrl,
-   .vidioc_g_ctrl  = fimc_vidioc_g_ctrl,
-   .vidioc_s_ctrl  = fimc_cap_s_ctrl,
...

I'll need to take some time to review this patchset. So, it will likely
miss the bus for 3.1.

While the code inside this patch looked ok, your comments scared me ;)

In summary: The V4L2 API is not a legacy API that needs a compatibility
mode. Removing controls like VIDIOC_S_INPUT, VIDIOC_*CTRL, etc in
favor of the media controller API is wrong. This specific patch itself seems
ok, but it is easy to lose the big picture on a series of 28 patches
with about 4000 lines changed.

The media controller API is meant to be used only by specific applications
that might add some extra features to the driver. So, it is an optional
API. In all cases where both API's can do the same thing, the proper way 
is to use the V4L2 API only, and not the media controller API.

So, my current