Re: pxa ccic driver based on soc_camera and videobuf

2011-05-18 Thread Guennadi Liakhovetski
Hi Kassey

On Wed, 18 May 2011, Kassey Lee wrote:

 hi, Guennadi, Hans:
 
   I just converted the Marvell CCIC driver from ccic_cafe to
 soc_camera + videobuf, and made it stable and robust.

Nice!

   would you accept the soc_camera + videobuf driver into the latest kernel?

My understanding is that since videobuf2 is really an improved videobuf, 
the latter will be deprecated and removed once all existing drivers are 
converted, so there is no point in developing new drivers with videobuf. 
That said, the conversion is not very difficult, so please either do it 
yourself (preferred ;)), or post your driver as is and we'll help you 
convert it.
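
For orientation, the videobuf2 side of such a conversion largely amounts to
filling in a struct vb2_queue; everything named ccic_* or pcdev below is a
placeholder and the exact op signatures vary between kernel versions, so
treat this as a sketch only:

    struct vb2_queue *q = &pcdev->vb2_queue;    /* pcdev: driver state */

    q->type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    q->io_modes = VB2_MMAP | VB2_USERPTR;
    q->drv_priv = pcdev;
    q->buf_struct_size = sizeof(struct ccic_vb2_buffer);
    q->ops = &ccic_vb2_ops;        /* queue_setup/buf_prepare/buf_queue/... */
    q->mem_ops = &vb2_dma_contig_memops;

    return vb2_queue_init(q);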

Thanks
Guennadi
---
Guennadi Liakhovetski, Ph.D.
Freelance Open-Source Software Developer
http://www.open-technology.de/


Re: [PATCH RFC v2] tea575x: convert to control framework

2011-05-18 Thread Hans Verkuil
On Tuesday, May 17, 2011 23:45:07 Ondrej Zary wrote:
 Convert tea575x-tuner to use the new V4L2 control framework.
 
 Signed-off-by: Ondrej Zary li...@rainbow-software.org
 
 --- linux-2.6.39-rc2-/include/sound/tea575x-tuner.h	2011-05-13 19:39:23.0 +0200
 +++ linux-2.6.39-rc2/include/sound/tea575x-tuner.h	2011-05-17 22:35:19.0 +0200
 @@ -23,8 +23,7 @@
   */
  
  #include <linux/videodev2.h>
 -#include <media/v4l2-dev.h>
 -#include <media/v4l2-ioctl.h>
 +#include <media/v4l2-ctrls.h>
  
  #define TEA575X_FMIF 10700
  
 @@ -54,6 +53,8 @@ struct snd_tea575x {
   void *private_data;
   u8 card[32];
   u8 bus_info[32];
 + struct v4l2_ctrl_handler ctrl_handler;
 + void (*ext_init)(struct snd_tea575x *tea);
  };
  
  int snd_tea575x_init(struct snd_tea575x *tea);
 --- linux-2.6.39-rc2-/sound/i2c/other/tea575x-tuner.c	2011-05-13 19:39:23.0 +0200
 +++ linux-2.6.39-rc2/sound/i2c/other/tea575x-tuner.c	2011-05-17 23:32:07.0 +0200
 @@ -22,11 +22,11 @@
  
  #include <asm/io.h>
  #include <linux/delay.h>
 -#include <linux/interrupt.h>
  #include <linux/init.h>
  #include <linux/slab.h>
  #include <linux/version.h>
 -#include <sound/core.h>
 +#include <media/v4l2-dev.h>
 +#include <media/v4l2-ioctl.h>
  #include <sound/tea575x-tuner.h>
 
  MODULE_AUTHOR("Jaroslav Kysela <pe...@perex.cz>");
 @@ -62,17 +62,6 @@ module_param(radio_nr, int, 0);
  #define TEA575X_BIT_DUMMY	(1<<15)		/* buffer */
  #define TEA575X_BIT_FREQ_MASK	0x7fff
  
 -static struct v4l2_queryctrl radio_qctrl[] = {
 - {
 - .id= V4L2_CID_AUDIO_MUTE,
 - .name  = "Mute",
 - .minimum   = 0,
 - .maximum   = 1,
 - .default_value = 1,
 - .type  = V4L2_CTRL_TYPE_BOOLEAN,
 - }
 -};
 -
  /*
   * lowlevel part
   */
 @@ -266,47 +255,19 @@ static int vidioc_s_audio(struct file *f
   return 0;
  }
  
 -static int vidioc_queryctrl(struct file *file, void *priv,
 - struct v4l2_queryctrl *qc)
 -{
 - int i;
 -
 - for (i = 0; i < ARRAY_SIZE(radio_qctrl); i++) {
 - if (qc->id && qc->id == radio_qctrl[i].id) {
 - memcpy(qc, &(radio_qctrl[i]),
 - sizeof(*qc));
 - return 0;
 - }
 - }
 - return -EINVAL;
 -}
 -
 -static int vidioc_g_ctrl(struct file *file, void *priv,
 - struct v4l2_control *ctrl)
 -{
 - struct snd_tea575x *tea = video_drvdata(file);
 -
 - switch (ctrl->id) {
 - case V4L2_CID_AUDIO_MUTE:
 - ctrl->value = tea->mute;
 - return 0;
 - }
 - return -EINVAL;
 -}
 -
 -static int vidioc_s_ctrl(struct file *file, void *priv,
 - struct v4l2_control *ctrl)
 +static int tea575x_s_ctrl(struct v4l2_ctrl *ctrl)
  {
 - struct snd_tea575x *tea = video_drvdata(file);
 + struct snd_tea575x *tea = container_of(ctrl->handler, struct snd_tea575x, ctrl_handler);
 
   switch (ctrl->id) {
   case V4L2_CID_AUDIO_MUTE:
 - if (tea->mute != ctrl->value) {
 - tea->mute = ctrl->value;
 + if (tea->mute != ctrl->val) {

This test should be removed. s_ctrl is only called when the value actually
changes, and also during handler_setup. With this test the setup call will
actually fail to mute the device.

 + tea->mute = ctrl->val;
   snd_tea575x_set_freq(tea);
   }
   return 0;
   }
 +
   return -EINVAL;
  }
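
For reference, with that test dropped the handler from the patch above
reduces to roughly the following (assembled from the quoted code, untested):

    static int tea575x_s_ctrl(struct v4l2_ctrl *ctrl)
    {
            struct snd_tea575x *tea = container_of(ctrl->handler,
                                            struct snd_tea575x, ctrl_handler);

            switch (ctrl->id) {
            case V4L2_CID_AUDIO_MUTE:
                    /* s_ctrl only runs on a real change or from
                     * v4l2_ctrl_handler_setup(), so apply unconditionally */
                    tea->mute = ctrl->val;
                    snd_tea575x_set_freq(tea);
                    return 0;
            }

            return -EINVAL;
    }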
  
 @@ -355,9 +316,6 @@ static const struct v4l2_ioctl_ops tea57
   .vidioc_s_input = vidioc_s_input,
   .vidioc_g_frequency = vidioc_g_frequency,
   .vidioc_s_frequency = vidioc_s_frequency,
 - .vidioc_queryctrl   = vidioc_queryctrl,
 - .vidioc_g_ctrl  = vidioc_g_ctrl,
 - .vidioc_s_ctrl  = vidioc_s_ctrl,
  };
  
  static struct video_device tea575x_radio = {
 @@ -367,6 +325,10 @@ static struct video_device tea575x_radio
   .release= video_device_release,
  };
  
 +static const struct v4l2_ctrl_ops tea575x_ctrl_ops = {
 + .s_ctrl = tea575x_s_ctrl,
 +};
 +
  /*
   * initialize all the tea575x chips
   */
 @@ -384,29 +346,39 @@ int snd_tea575x_init(struct snd_tea575x
   tea->in_use = 0;
   tea->val = TEA575X_BIT_BAND_FM | TEA575X_BIT_SEARCH_10_40;
   tea->freq = 90500 * 16;	/* 90.5Mhz default */
 + snd_tea575x_set_freq(tea);
  
   tea575x_radio_inst = video_device_alloc();
 - if (tea575x_radio_inst == NULL) {
 + if (!tea575x_radio_inst) {
   printk(KERN_ERR "tea575x-tuner: not enough memory\n");
   return -ENOMEM;
   }
  
   memcpy(tea575x_radio_inst, &tea575x_radio, sizeof(tea575x_radio));
 + video_set_drvdata(tea575x_radio_inst, tea);
  
 - strcpy(tea575x_radio.name, tea->tea5759 ?
 -	"TEA5759 radio "

Re: [PATCH RFC v2] radio-sf16fmr2: convert to generic TEA575x interface

2011-05-18 Thread Hans Verkuil
On Tuesday, May 17, 2011 22:05:26 Ondrej Zary wrote:
 On Tuesday 17 May 2011 21:33:14 Hans Verkuil wrote:
  Hi Ondrej!
 
  On Sunday, May 15, 2011 23:26:33 Hans Verkuil wrote:
   On Sunday, May 15, 2011 22:18:21 Ondrej Zary wrote:
Thanks, it's much simpler with the new control framework.
Do the negative volume control values make sense? The TC9154A chip can
attenuate the volume from 0 to -68dB in 2dB steps.
  
   It does make sense, but I think I would offset the values so they start
   at 0. Mostly because there might be some old apps that set the volume to
   0 when they want to mute, which in this case is full volume.
  
   I am not aware of any driver where a volume of 0 isn't the same as the
   lowest volume possible, so in this particular case I would apply an
   offset.
  
   I will have to do a closer review tomorrow or the day after. I think
   there are a few subtleties that I need to look at. Ping me if you haven't
   heard from me by Wednesday. I would really like to get these drivers up
   to spec now that I have someone who can test them, and once that's done I
   hope that I never have to look at them again :-) (Unlikely, but one can
   dream...)
 
  OK, I looked at it a bit more and it needs to be changed a little bit. The
  problem is that the VOLUME control is added after snd_tea575x_init, i.e.
  after the video_register_device call. The video_register_device call should
  be the last thing done before the init sequence returns. There may be
  applications (dbus/hal) that open devices as soon as they appear, so doing
  more initialization after the video node is registered is not a good idea
  (many older V4L drivers make this mistake).
 
  Perhaps creating a snd_tea575x_register function doing just the
  registration may be a good idea. Or a callback before doing the
  video_register_device.
 
 OK, I'll reorder the lines in snd_tea575x_init function and add a callback 
 that radio-sf16fmr2 can use.
 
 Also upgraded my card with TC9154AP chip so I can actually test the volume 
 control code (and it was broken in my previous patch...). The left and right 
 channels can be separately controlled - is there a way to provide separate 
 left and right volume controls? Or do I need to fake up a balance control?

A fake balance control would be the way to go. There are other drivers that
do it like that.
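
Roughly, the driver would register V4L2_CID_AUDIO_VOLUME and
V4L2_CID_AUDIO_BALANCE with v4l2_ctrl_new_std(), put them in one cluster so
both values are visible in s_ctrl, and map the pair onto the TC9154A's
separate left/right attenuators. A sketch (everything except the standard
control IDs is a made-up name):

    /* volume and balance are clustered, so both ->val fields are current */
    static int fmr2_s_ctrl(struct v4l2_ctrl *ctrl)
    {
            struct fmr2 *fmr2 = container_of(ctrl->handler, struct fmr2, hdl);
            int vol = fmr2->volume->val;    /* 0..68: 2 dB steps, 68 = 0 dB */
            int bal = fmr2->balance->val;   /* -68..68, 0 = centered */
            int left  = clamp(vol + min(bal, 0), 0, 68);
            int right = clamp(vol - max(bal, 0), 0, 68);

            /* tc9154a_set_att() is a hypothetical helper writing both channels */
            return tc9154a_set_att(fmr2, 68 - left, 68 - right);
    }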

  Another thing: the tea-mute field shouldn't be needed anymore. And the
  'mute on init' bit in snd_tea575x_init can be removed as well since that
  is automatically performed by v4l2_ctrl_handler_setup.
 
 Thought about this too but the snd_tea575x_write() and snd_tea575x_read() 
 functions need to know the mute status. And these functions are also used to 
 detect the tuner presence before initializing the controls. I don't see any 
 elegant solution.

What typically is done is that the mute v4l2_ctrl pointer is stored and
dereferenced to get the value. But in a simple case like this backing up
the value works just as well.
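
Concretely, storing the pointer means something like this in
snd_tea575x_write()/read() (mute_ctrl is an assumed new field; the NULL
check covers tuner detection, which runs before the control exists):

    bool mute = tea->mute_ctrl ? tea->mute_ctrl->val : true;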

Regards,

Hans

  In addition, the .ioctl field in tea575x_fops can be replaced by
  .unlocked_ioctl. The whole exclusive open stuff and the in_use field can be
  removed. The only thing needed is a struct mutex in struct snd_tea575x,
  initialize it and set tea575x_radio_inst-lock to the mutex. This will
  serialize all access safely.
 
 I'll do this as a separate patch later.
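
For reference, the deferred change boils down to something like this
(tea->mutex would be a new member of struct snd_tea575x):

    /* in snd_tea575x_init(), before registering the video device */
    mutex_init(&tea->mutex);
    tea575x_radio_inst->lock = &tea->mutex;  /* the V4L2 core serializes fops */

    /* and in tea575x_fops, replacing .ioctl */
    .unlocked_ioctl = video_ioctl2,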
 
  To do this really right you should add struct v4l2_device to struct
  snd_tea575x (the radio-sf16fmr2 driver has one, so you can use that as an
  example). With that in place you can also add support for 'priority'
  handling. I'd say see what you can do, and if it takes too much time then
  mail me the tea575x code and the radio-sf16frm2 code and I'll finish it.
 
  Regards,
 
  Hans
 
 
 
 


Re: Audio Video synchronization for data received from a HDMI receiver chip

2011-05-18 Thread Hans Verkuil
On Wednesday, May 18, 2011 06:10:43 Bhupesh SHARMA wrote:
 Hi,
 
 (adding alsa mailing list in cc)
 
  On Thursday, May 12, 2011 18:59:33 Charlie X. Liu wrote:
   Which HDMI receiver chip?
  
  Indeed, that's my question as well :-)
 
 We use the Sil 9135 receiver chip, which is provided by Silicon Image.
 Please see details here: 
 http://www.siliconimage.com/products/product.aspx?pid=109
  
   Anyway, this question comes up regularly. V4L2 provides timestamps for
   each frame, so that's no problem. But my understanding is that ALSA does
   not give you timestamps, so if there are processing delays between audio
   and video, then you have no way of knowing. The obvious solution is to
   talk to the ALSA people to see if some sort of timestamping is possible,
   but nobody has done that.
 
 I am aware of the time stamping feature provided by V4L2, but I am also
 not sure whether the same feature is supported by ALSA. I have included
 alsa-mailing list also in copy of this mail. Let's see if we can get
 some sort of confirmation on this from them.
  
   This is either because everyone that needs it hacks around it instead of
   trying to really solve it, or because it is never a problem in practice.
 
 What would be the proper solution, in your opinion, to solve this issue?
 Do we require an Audio-Video Bridge kind of utility/mechanism?

I don't believe so. All you need is reliable time stamping for your audio
and video streams. That's enough for userspace to detect AV sync issues.
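
For the V4L2 half the timestamp is already delivered with every dequeued
buffer; a minimal userspace sketch:

    #include <stdio.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <linux/videodev2.h>

    /* fd is an open, streaming V4L2 capture device */
    static void dump_frame_timestamp(int fd)
    {
            struct v4l2_buffer buf;

            memset(&buf, 0, sizeof(buf));
            buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
            buf.memory = V4L2_MEMORY_MMAP;

            if (ioctl(fd, VIDIOC_DQBUF, &buf) == 0) {
                    /* compare buf.timestamp against the audio clock to spot drift */
                    printf("frame %u captured at %ld.%06ld\n", buf.index,
                           (long)buf.timestamp.tv_sec,
                           (long)buf.timestamp.tv_usec);
                    ioctl(fd, VIDIOC_QBUF, &buf);   /* hand the buffer back */
            }
    }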

Regards,

Hans

 
 Regards,
 Bhupesh
 
  
  
   -Original Message-
   From: linux-media-ow...@vger.kernel.org
   [mailto:linux-media-ow...@vger.kernel.org] On Behalf Of Bhupesh
  SHARMA
   Sent: Wednesday, May 11, 2011 10:49 PM
   To: linux-media@vger.kernel.org
   Cc: Laurent Pinchart; Guennadi Liakhovetski; Hans Verkuil
   Subject: Audio Video synchronization for data received from a HDMI
  receiver
   chip
  
   Hi Linux media folks,
  
    We are considering putting an advanced HDMI receiver chip on our SoC,
    to allow reception of HDMI audio and video. The chip receives HDMI data
    from a host like a set-top box or DVD player. It provides a video data
    interface and an SPDIF/I2S audio data interface.
  
   We plan to support the HDMI video using the V4L2 framework and the
  HDMI
   audio using ALSA framework.
  
    Now, what intrigues us is how audio-video synchronization will be
    maintained. Will a separate bridging entity be required to ensure it,
    or can this be left to a user space application like mplayer or
    gstreamer?
   
    Also, is there an existing interface between the V4L2 and ALSA
    frameworks that can be used in our design?
  
   Regards,
   Bhupesh
   ST Microelectronics
  
  
  
  
  
  --
  regards
  Shiraz Hashim
 
 


Re: [RFC] Standardize YUV support in the fbdev API

2011-05-18 Thread Hans Verkuil
On Wednesday, May 18, 2011 00:44:26 Felipe Contreras wrote:
 On Wed, May 18, 2011 at 1:07 AM, Laurent Pinchart
 laurent.pinch...@ideasonboard.com wrote:
   I need to implement support for a YUV frame buffer in an fbdev driver. As
   the fbdev API doesn't support this out of the box, I've spent a couple of
   days reading fbdev (and KMS) code and thinking about how we could cleanly
   add YUV support to the API. I'd like to share my findings and thoughts,
   and hopefully receive some comments back.
  
   The terms 'format', 'pixel format', 'frame buffer format' and 'data format'
   will be used interchangeably in this e-mail. They all refer to the way
   pixels are stored in memory, including both the representation of a pixel
   as integer values and the layout of those integer values in memory.
 
 This is a great proposal. It was about time!
 
  The third solution has my preference. Comments and feedback will be
  appreciated. I will then work on a proof of concept and submit patches.
 
 I also would prefer the third solution. I don't think there's much
 difference from the user-space point of view, and a new ioctl would be
 cleaner. Also the v4l2 fourcc's should do.

I agree with this.

We might want to take the opportunity to fix this section of the V4L2 Spec:

http://www.xs4all.nl/~hverkuil/spec/media.html#pixfmt-rgb

There are two tables, 2.6 and 2.7. But 2.6 is almost certainly wrong and should
be removed. I suspect many if not all V4L2 drivers are badly broken for
big-endian systems and report the wrong pixel formats.

Officially the pixel formats reflect the contents of the memory. But everything
is swapped on a big endian system, so you are supposed to report a different
pix format. I can't remember seeing any driver do that. Some have built-in
swapping, though, and turn that on if needed.
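
Purely as an illustration of the point (not taken from any particular
driver): for RGB565 the spec already defines a byte-swapped fourcc, so a
driver whose hardware writes the words swapped on a big-endian CPU would
have to report that variant instead of V4L2_PIX_FMT_RGB565:

    #include <linux/videodev2.h>
    #include <asm/byteorder.h>

    /* hypothetical helper used in a driver's g_fmt/try_fmt path */
    static __u32 drv_rgb565_fourcc(void)
    {
    #ifdef __BIG_ENDIAN
            return V4L2_PIX_FMT_RGB565X;    /* bytes land swapped in memory */
    #else
            return V4L2_PIX_FMT_RGB565;
    #endif
    }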

I really need to run some tests, but I've been telling myself this for years
now :-(

Regards,

Hans


Re: [RFC v4] V4L2 API for flash devices

2011-05-18 Thread Laurent Pinchart
Hi Sakari,

On Tuesday 17 May 2011 22:34:36 Sakari Ailus wrote:
 Sylwester Nawrocki wrote:
  On 05/09/2011 12:11 AM, Sakari Ailus wrote:
  Sylwester Nawrocki wrote:
  On 05/07/2011 02:46 PM, Hans Verkuil wrote:
  On Thursday, May 05, 2011 20:49:21 Sakari Ailus wrote:
  Hi,
  
  This is a fourth proposal for an interface for controlling flash
  devices on the V4L2/v4l2_subdev APIs.
  
  I want to thank everyone who have participated to the development of
  the flash interface.
  
  Comments and questions are very, very welcome as always.
  
  
  Changes since v3 [12]
  =
  
  - V4L2_CID_FLASH_STROBE changed to button control,
  V4L2_CID_FLASH_STROBE_STOP button control added,
  V4L2_CID_FLASH_STROBE_STATUS boolean control added.
  
  - No reason to say xenon flashes can't use V4L2_CID_FLASH_STROBE.
  
  - V4L2_CID_FLASH_STROBE_WHENCE changed to V4L2_CID_FLASH_STROBE_MODE.
  
  - V4L2_CID_TORCH_INTENSITY renamed to V4L2_CID_FLASH_TORCH_INTENSITY
  and V4L2_CID_INDICATOR_INTENSITY renamed to
  V4L2_CID_FLASH_INDICATOR_INTENSITY.
  
  - Moved V4L2_CID_FLASH_EXTERNAL_STROBE_MODE under Possible future
  extensions.
  
  [snip]
  
  3. Sensor metadata on frames
  
  
  It'd be useful to be able to read back sensor metadata. If the flash
  is strobed (on sensor hardware) while streaming, it's difficult to
  know otherwise which frame in the stream has been exposed with
  flash.
  
  I wonder if it would make sense to have a V4L2_BUF_FLAG_FLASH buffer
  flag?
  That way userspace can tell if that particular frame was taken with
  flash.
  
   This looks more like a workaround for the problem than a good long-term
   solution. It might be tempting to use the buffer flags, which seem to be
   more or less intended for buffer control.
   I'd much rather see a buffer flag used to indicate whether an additional
   plane of (meta)data is carried by the buffer.
   There seem to be many more parameters than a single flag indicating
   whether the frame has been exposed with flash that need to be carried
   over to user space.
  But then we might need some standard format of the meta data, perhaps
  control id/value pairs and possibly a per plane configurable memory
  type.
  
  There are multiple possible approaches for this.
  
  For sensors where metadata is register-value pairs, that is, essentially
  V4L2 control values, I think this should be parsed by the sensor driver.
  The ISP (camera bridge) driver does receive the data so it'd have to
  ask for help from the sensor driver.
  
   I am inclined to let the ISP drivers parse the data, but on the other hand
   it might be difficult to access the same DMA buffers in kernel _and_ user
   space.
 
 This is just about mapping the buffer to both kernel and user spaces. If
 the ISP has an iommu the kernel mapping might already exist if it comes
 from vmalloc().

And we're also trying to get rid of that mapping to facilitate cache 
management. Any API we create for metadata parsing will need to take potential 
cache-related performance issues into account at the design stage.

   As discussed previously, using V4L2 control events probably shouldn't be
  the way to go, but I think it's obvious that this is _somehow_ bound to
  controls, at least control ids.
  
  Also as Sakari indicated some sensors adopt custom meta data formats
  so maybe we need to introduce standard fourcc like IDs for meta data
  formats? I am not sure whether it is possible to create common
  description of an image meta data that fits all H/W.
  
  I'm not sure either since I know of only one example. That example, i.e.
  register-value pairs, should be something that I'd assume _some_ other
  hardware uses as well, but there could exist also hardware which
  doesn't. This solution might not work on that hardware.
  
  Of course it's hard to find a silver bullet for a hardware we do not know
  ;)
  
  If there is metadata which does not associate to V4L2 controls (or
  ioctls), either existing or not yet existing, then that probably should
  be parsed by the driver. On the other hand, I can't think of metadata
  that wouldn't fall under this right now. :-)
  
   Some metadata are arrays whose length is specific to a given attribute;
   I wonder how to support that with v4l2 controls?
 
 Is the metadata something which really isn't associated to any V4L2
 control? Are we now talking about a sensor which is more complex than a
 regular raw bayer sensor?
 
  Do you know more examples of sensor produced metadata than SMIA++?
  
  The only metadata I've had a bit experience with was regular EXIF tags
  which could be retrieved from ISP through I2C bus.
 
 That obviously won't map to V4L2 controls.
 
 This should very likely be just passed to user space as-is as
 different... plane?
 
 In some cases it's time critical to pass data to user space that
 otherwise could be associated with a video buffer. I wonder if this case
 is time critical or not.
 
  

Re: [RFC v4] V4L2 API for flash devices

2011-05-18 Thread Sakari Ailus
Laurent Pinchart wrote:
 Hi Sakari,

Hi Laurent,

 On Tuesday 17 May 2011 22:34:36 Sakari Ailus wrote:
 Sylwester Nawrocki wrote:
 On 05/09/2011 12:11 AM, Sakari Ailus wrote:
 Sylwester Nawrocki wrote:
 On 05/07/2011 02:46 PM, Hans Verkuil wrote:
 On Thursday, May 05, 2011 20:49:21 Sakari Ailus wrote:
 Hi,

 This is a fourth proposal for an interface for controlling flash
 devices on the V4L2/v4l2_subdev APIs.

 I want to thank everyone who have participated to the development of
 the flash interface.

 Comments and questions are very, very welcome as always.


 Changes since v3 [12]
 =

 - V4L2_CID_FLASH_STROBE changed to button control,
 V4L2_CID_FLASH_STROBE_STOP button control added,
 V4L2_CID_FLASH_STROBE_STATUS boolean control added.

 - No reason to say xenon flashes can't use V4L2_CID_FLASH_STROBE.

 - V4L2_CID_FLASH_STROBE_WHENCE changed to V4L2_CID_FLASH_STROBE_MODE.

 - V4L2_CID_TORCH_INTENSITY renamed to V4L2_CID_FLASH_TORCH_INTENSITY
 and V4L2_CID_INDICATOR_INTENSITY renamed to
 V4L2_CID_FLASH_INDICATOR_INTENSITY.

 - Moved V4L2_CID_FLASH_EXTERNAL_STROBE_MODE under Possible future
 extensions.

 [snip]

 3. Sensor metadata on frames
 

 It'd be useful to be able to read back sensor metadata. If the flash
 is strobed (on sensor hardware) while streaming, it's difficult to
 know otherwise which frame in the stream has been exposed with
 flash.

 I wonder if it would make sense to have a V4L2_BUF_FLAG_FLASH buffer
 flag?
 That way userspace can tell if that particular frame was taken with
 flash.

  This looks more like a workaround for the problem than a good long-term
  solution. It might be tempting to use the buffer flags, which seem to be
  more or less intended for buffer control.
  I'd much rather see a buffer flag used to indicate whether an additional
  plane of (meta)data is carried by the buffer.
  There seem to be many more parameters than a single flag indicating
  whether the frame has been exposed with flash that need to be carried
  over to user space.
 But then we might need some standard format of the meta data, perhaps
 control id/value pairs and possibly a per plane configurable memory
 type.

 There are multiple possible approaches for this.

 For sensors where metadata is register-value pairs, that is, essentially
 V4L2 control values, I think this should be parsed by the sensor driver.
 The ISP (camera bridge) driver does receive the data so it'd have to
 ask for help from the sensor driver.

  I am inclined to let the ISP drivers parse the data, but on the other hand
  it might be difficult to access the same DMA buffers in kernel _and_ user
  space.

 This is just about mapping the buffer to both kernel and user spaces. If
 the ISP has an iommu the kernel mapping might already exist if it comes
 from vmalloc().
 
  And we're also trying to get rid of that mapping to facilitate cache
  management. Any API we create for metadata parsing will need to take
  potential cache-related performance issues into account at the design stage.

In this case, it's not necessary to map this memory to user space at all
so the kernel mapping would be the only one.

-- 
Sakari Ailus
sakari.ai...@maxwell.research.nokia.com


Re: [RFC v2 3/3] adp1653: Add driver for LED flash controller

2011-05-18 Thread Laurent Pinchart
Hi Sakari,

On Tuesday 17 May 2011 17:14:04 Sakari Ailus wrote:
 This patch adds the driver for the adp1653 LED flash controller. This
 controller supports a high power led in flash and torch modes and an
 indicator light, sometimes also called privacy light.
 
 The adp1653 is used on the Nokia N900.

[snip]

 diff --git a/drivers/media/video/adp1653.c b/drivers/media/video/adp1653.c
 new file mode 100644
 index 000..1679707
 --- /dev/null
 +++ b/drivers/media/video/adp1653.c

[snip]

 +static int adp1653_get_fault(struct adp1653_flash *flash)
 +{
 + struct i2c_client *client = v4l2_get_subdevdata(&flash->subdev);
 + int fault;
 + int rval;
 +
 + fault = i2c_smbus_read_byte_data(client, ADP1653_REG_FAULT);
 + if (IS_ERR_VALUE(fault))
 + return fault;
 +
 + flash->fault |= fault;
 +
 + if (!flash->fault)
 + return 0;
 +
 + /* Clear faults. */
 + rval = i2c_smbus_write_byte_data(client, ADP1653_REG_OUT_SEL, 0);
 + if (IS_ERR_VALUE(rval))
 + return rval;

Should the faults be cleared right away, instead of when the user reads the 
faults control ?

 + flash->led_mode->val = V4L2_FLASH_LED_MODE_NONE;

Does the hardware switch back to none mode when a fault occurs ? The 
datasheet just states that the ADP1653 is disabled. Does that mean 
temporarily disabled until the faults are cleared ? If so you should update 
the registers to turn the LED off.

 + return flash->fault;
 +}

[snip]

 +static int adp1653_get_ctrl(struct v4l2_ctrl *ctrl)
 +{
 + struct adp1653_flash *flash =
 + container_of(ctrl->handler, struct adp1653_flash, ctrls);
 +
 + adp1653_get_fault(flash);
 + if (IS_ERR_VALUE(flash->fault))

Shouldn't you check the adp1653_get_fault() return value instead ?

 + return flash->fault;
 +
 + ctrl->cur.val = 0;
 +
 + if (flash->fault & ADP1653_REG_FAULT_FLT_SCP)
 + ctrl->cur.val |= V4L2_FLASH_FAULT_SHORT_CIRCUIT;
 + if (flash->fault & ADP1653_REG_FAULT_FLT_OT)
 + ctrl->cur.val |= V4L2_FLASH_FAULT_OVER_TEMPERATURE;
 + if (flash->fault & ADP1653_REG_FAULT_FLT_TMR)
 + ctrl->cur.val |= V4L2_FLASH_FAULT_TIMEOUT;
 + if (flash->fault & ADP1653_REG_FAULT_FLT_OV)
 + ctrl->cur.val |= V4L2_FLASH_FAULT_OVER_VOLTAGE;
 +
 + flash->fault = 0;
 +
 + return 0;
 +}
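
That is, something along these lines (sketch of the suggested check only):

    int rval = adp1653_get_fault(flash);

    if (IS_ERR_VALUE(rval))
            return rval;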

[snip]

 +static int
 +adp1653_registered(struct v4l2_subdev *subdev)
 +{
 + struct adp1653_flash *flash = to_adp1653_flash(subdev);
 +
 + return adp1653_init_controls(flash);

Can't this be moved to adp1653_probe() ? You could then get rid of the 
registered callback.

 +}
 +
 +static int
 +adp1653_init_device(struct adp1653_flash *flash)
 +{
 + struct i2c_client *client = v4l2_get_subdevdata(&flash->subdev);
 + int rval;
 +
 + /* Clear FAULT register by writing zero to OUT_SEL */
 + rval = i2c_smbus_write_byte_data(client, ADP1653_REG_OUT_SEL, 0);
 + if (rval < 0) {
 + dev_err(&client->dev, "failed writing fault register\n");
 + return -EIO;
 + }
 +
 + /* Read FAULT register */
 + rval = i2c_smbus_read_byte_data(client, ADP1653_REG_FAULT);
 + if (rval < 0) {
 + dev_err(&client->dev, "failed reading fault register\n");
 + return -EIO;
 + }
 +
 + if ((rval & 0x0f) != 0) {
 + dev_err(&client->dev, "device fault\n");

Same comment as last time :-)

 + return -EIO;
 + }
 +
 + mutex_lock(&flash->ctrls.lock);
 + rval = adp1653_update_hw(flash);
 + mutex_unlock(&flash->ctrls.lock);
 + if (rval) {
 + dev_err(&client->dev,
 + "adp1653_update_hw failed at %s\n", __func__);
 + return -EIO;
 + }
 +
 + return 0;
 +}

[snip]

-- 
Regards,

Laurent Pinchart


Re: [PATCH 1/2] mt9p031: Add mt9p031 sensor driver.

2011-05-18 Thread Laurent Pinchart
Hi Chris,

On Wednesday 18 May 2011 07:09:44 Chris Rodley wrote:
 
 Trying out the new patch.. I get this error:
 
 root@beagle:~# modprobe iommu2
 [   24.691375] omap-iommu omap-iommu.0: isp registered
 root@beagle:~# modprobe omap3_isp
 [   26.923065] Linux video capture interface: v2.00
 [   27.027679] omap3isp omap3isp: Revision 2.0 found
 [   27.032684] omap-iommu omap-iommu.0: isp: version 1.1
 [   27.333648] mt9p031 2-0048: Detected a MT9P031 chip ID 1801
 [   27.427459] kernel BUG at drivers/media/video/omap3isp/isp.c:1494!

What kernel and OMAP3 ISP driver are you using ? The BUG_ON statement is at 
line 1492 in mainline. Bugs in error paths were present in older versions, 
make sure you use the latest one.

 [   27.434082] Unable to handle kernel NULL pointer dereference at virtual
 address  [   27.442657] pgd = c7724000
 [   27.445526] [] *pgd=87700831, *pte=, *ppte=
 [   27.452178] Internal error: Oops: 817 [#1]
 [   27.456512] last sysfs file:
 /sys/devices/platform/omap3isp/video4linux/v4l-subdev7/dev [   27.464965]
 Modules linked in: mt9p031 omap3_isp(+) v4l2_common videodev iovmm iommu2
 iommu [   27.473815] CPU: 0Not tainted  (2.6.39 #17)
 [   27.481536] PC is at __bug+0x1c/0x28
 [   27.485321] LR is at __bug+0x18/0x28
 [   27.489105] pc : [c003dc98]lr : [c003dc94]psr: 2013
 [   27.489105] sp : c76c1e08  ip :   fp : c050ff48
 [   27.501220] r10: c051bd40  r9 : c050ff48  r8 : c7788000
 [   27.506744] r7 :   r6 : c7788348  r5 :   r4 : c7788000
 [   27.513610] r3 :   r2 : c76c1dfc  r1 : c0473627  r0 : 004c
 [   27.520507] Flags: nzCv  IRQs on  FIQs on  Mode SVC_32  ISA ARM  Segment
 user [   27.528045] Control: 10c5387d  Table: 87724019  DAC: 0015
 [   27.534118] Process modprobe (pid: 269, stack limit = 0xc76c02f0)
 [   27.540557] Stack: (0xc76c1e08 to 0xc76c2000)
 [   27.545166] 1e00:    bf0404d4 c7788000 
 c773f800 bf040dd4 [   27.553802] 1e20:  c7788000 c740c598 
 c778e1a8 c778ef18 c050ff7c c050ff48 [   27.562438] 1e40: c050ff7c bf053a28
 bf053a28  0025 001c  c02336fc [   27.571075] 1e60:
 c02336e8 c0232608  c050ff48 c050ff7c bf053a28  c0232724
 [   27.579711] 1e80: bf053a28 c76c1e90 c02326c4 c0231888 c740c578 c747d230
 bf053a28 bf053a28 [   27.588348] 1ea0: c763f980 c0540880  c0231f24
 bf051331 bf051336 0033 bf053a28 [   27.596984] 1ec0:  0001
 bf05b000  001c c0232c4c  bf053c68 [   27.605621] 1ee0:
  0001 bf05b000  001c c0036510 bf05b000 
 [   27.614257] 1f00: 0001  bf053c68  0001 c7703ec0
 0001 c0094028 [   27.622924] 1f20: bf053c74  c00919c8 c03a4998
 bf053dbc 00b71148 c765ae00 c88b7000 [   27.631561] 1f40: 00021d91 c88ce794
 c88ce627 c88d62cc c7665000 00014ddc 00017a1c  [   27.640197] 1f60:
  0023 0024 0018 001c 000f  c04cab04
 [   27.648834] 1f80:  00b71008 00b71148  0080 c003b124
 c76c  [   27.657470] 1fa0: bec2dc84 c003afa0 00b71008 00b71148
 4009d000 00021d91 00b71148 0001a6d8 [   27.666107] 1fc0: 00b71008 00b71148
  0080 00b71040 bec2dc84  bec2dc84 [   27.674743] 1fe0:
 00b712a8 bec2d984 b4ec 40208084 6010 4009d000 0a5c 
 [   27.683502] [c003dc98] (__bug+0x1c/0x28) from [bf0404d4]
 (omap3isp_put+0x30/0x90 [omap3_isp]) [   27.692962] [bf0404d4]
 (omap3isp_put+0x30/0x90 [omap3_isp]) from [bf040dd4]
 (isp_probe+0x7dc/0x958 [omap3_isp]) [   27.704040] [bf040dd4]
 (isp_probe+0x7dc/0x958 [omap3_isp]) from [c02336fc]
 (platform_drv_probe+0x14/0x18) [   27.714538] [c02336fc]
 (platform_drv_probe+0x14/0x18) from [c0232608]
 (driver_probe_device+0xc8/0x184) [   27.724731] [c0232608]
 (driver_probe_device+0xc8/0x184) from [c0232724]
 (__driver_attach+0x60/0x84) [   27.734649] [c0232724]
 (__driver_attach+0x60/0x84) from [c0231888] (bus_for_each_dev+0x4c/0x78)
 [   27.744201] [c0231888] (bus_for_each_dev+0x4c/0x78) from [c0231f24]
 (bus_add_driver+0xac/0x224) [   27.753784] [c0231f24]
 (bus_add_driver+0xac/0x224) from [c0232c4c] (driver_register+0xa8/0x12c)
 [   27.763336] [c0232c4c] (driver_register+0xa8/0x12c) from [c0036510]
 (do_one_initcall+0x90/0x160) [   27.773010] [c0036510]
 (do_one_initcall+0x90/0x160) from [c0094028]
 (sys_init_module+0x16e0/0x1854) [   27.782928] [c0094028]
 (sys_init_module+0x16e0/0x1854) from [c003afa0]
 (ret_fast_syscall+0x0/0x30) [   27.792755] Code: e59f0010 e1a01003
 eb0d66a1 e3a03000 (e5833000) [   27.799285] ---[ end trace
 e4f3fc044db258d3 ]---

-- 
Regards,

Laurent Pinchart


Re: [PATCH 2/2] OMAP3BEAGLE: Add support for mt9p031 sensor (LI-5M03 module).

2011-05-18 Thread Laurent Pinchart
Hi Russell,

On Wednesday 18 May 2011 01:08:21 Russell King - ARM Linux wrote:
 On Tue, May 17, 2011 at 11:28:48AM +0200, Javier Martin wrote:
   +#include "devices.h"
   +#include "../../../drivers/media/video/omap3isp/isp.h"
   +#include "../../../drivers/media/video/omap3isp/ispreg.h"
 
 This suggests that there's something very wrong with what's going on;
 it suggests that you're trying to access driver internals which should
 be handled via some better means.  And it looks like it's this:

  @@ -654,6 +715,62 @@ static void __init beagle_opp_init(void)
  
  return;
   
   }
  
  +extern struct platform_device omap3isp_device;
  +
  +static int beagle_cam_set_xclk(struct v4l2_subdev *subdev, int hz)
  +{
   +   struct isp_device *isp = platform_get_drvdata(&omap3isp_device);
   +   int ret;
   +
   +   ret = isp->platform_cb.set_xclk(isp, hz, MT9P031_XCLK);
   +   return 0;
  +}
 
 That really needs fixing in a different way.

I plan to look into whether I can expose the OMAP3 ISP clocks through the 
Linux clock framework.
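
If that works out, a sensor driver could then simply use the generic clock
API instead of calling back into board code; a sketch, with a made-up clock
name and client being the sensor's i2c_client:

    struct clk *xclk;

    xclk = clk_get(&client->dev, "cam_xclka");    /* hypothetical clock name */
    if (IS_ERR(xclk))
            return PTR_ERR(xclk);

    clk_set_rate(xclk, 54000000);
    clk_enable(xclk);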

-- 
Regards,

Laurent Pinchart


Re: [PATCH 1/2] mt9p031: Add mt9p031 sensor driver.

2011-05-18 Thread javier Martin
Hi Laurent,
I've already fixed almost every issue you pointed out.
However, I still have got some doubts that I hope you can clarify.

On 17 May 2011 13:33, Laurent Pinchart
laurent.pinch...@ideasonboard.com wrote:
 Hi Javier,

 Thanks for the patch.

 On Tuesday 17 May 2011 11:28:47 Javier Martin wrote:
 It has been tested in beagleboard xM, using LI-5M03 module.

 Signed-off-by: Javier Martin javier.mar...@vista-silicon.com

 +
 +static int mt9p031_power_on(struct mt9p031 *mt9p031)
 +{
 + int ret;
 +
  + if (mt9p031->pdata->set_xclk)
  + mt9p031->pdata->set_xclk(&mt9p031->subdev, 5400);
  + /* turn on VDD_IO */
  + ret = regulator_enable(mt9p031->reg_2v8);
  + if (ret) {
  + pr_err("Failed to enable 2.8v regulator: %d\n", ret);
 + return -1;
 + }

I would enable the regulator first. As a general rule, chips should be powered
up before their I/Os are actively driven.

You need to restore registers here, otherwise all controls set by the user
will not be applied to the device.

It's my mistake. This driver uses two regulators: 1.8 V and 2.8 V. The
2.8 V regulator powers the analog part and the I/O, whereas the 1.8 V one
powers the core. What I should have done was keep the 1.8 V regulator
always powered on, so that the register configuration is not lost, and
switch the 2.8 V regulator on and off as needed, since it does not affect
the register values. However, I messed it all up.

Ideally I would power the 1.8 V regulator on and off too. However, as you
wisely pointed out, the registers would have to be restored in that case.
How am I supposed to keep track of the register values? Are there any
helper functions I can use for that purpose, or must I create a custom
register cache? Do you know of any driver that uses this technique?
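
Putting the two review comments together, the power-on path would look
roughly like this (the 54 MHz value comes from the discussion below and the
register-restore helper is only a placeholder):

    static int mt9p031_power_on(struct mt9p031 *mt9p031)
    {
            int ret;

            /* power rails first ... */
            ret = regulator_enable(mt9p031->reg_2v8);
            if (ret < 0) {
                    pr_err("Failed to enable 2.8v regulator: %d\n", ret);
                    return ret;
            }

            /* ... then start driving the external clock */
            if (mt9p031->pdata->set_xclk)
                    mt9p031->pdata->set_xclk(&mt9p031->subdev, 54000000);

            /* finally restore the register/control state lost while off */
            return mt9p031_restore_registers(mt9p031);  /* hypothetical helper */
    }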

 [snip]
 +static int mt9p031_set_params(struct i2c_client *client,
 +                           struct v4l2_rect *rect, u16 xskip, u16 yskip)

 set_params should apply the parameters, not change them. They should have
 already been validated by the callers.

The mt9p031_set_params() function is used by mt9p031_set_crop() and
mt9p031_set_format(); as you have correctly stated, these functions
shouldn't apply the parameters but only change them.
I've checked the mt9v032 driver and it is as you said. The question is,
where should these parameters get applied then?

 +static int mt9p031_registered(struct v4l2_subdev *sd)
 +{
 +     struct mt9p031 *mt9p031;
 +     mt9p031 = container_of(sd, struct mt9p031, subdev);
 +
 +     mt9p031_power_off(mt9p031);

 What's that for ?

 +     return 0;
 +}

Since the mt9p031_power_off() and mt9p031_power_on() functions
disable/enable the 2.8 V regulator which powers the I/O, it must be powered
on during probe, and after registering it can be safely powered off.


 You have a set_xclk callback to board code, so I assume the chip can be driven
 by one of the OMAP3 ISP XCLK signals. To call back to the OMAP3 ISP from board
 code, you need to get hold of the OMAP3 ISP device pointer. Your next patch
 exports omap3isp_device, but I'm not sure that's the way to go. One option is

 struct isp_device *isp = v4l2_dev_to_isp_device(subdev->v4l2_dev);

 but that requires the subdev to be registered before the function can be
 called. In that case you would need to move the probe code to the registered
 subdev internal function.


Yes, I tried using that function but it didn't work because the subdev
hadn't been registered yet.

 A clean solution is needed in the long run, preferably not involving board
 code at all. It would be nice if the OMAP3 ISP driver could export XCLKA/XCLKB
 as generic clock objects.

So, what should I do in order to submit the driver to mainline?
 Do you want me to move the probe code to the registered callback?

Thank you.

-- 
Javier Martin
Vista Silicon S.L.
CDTUC - FASE C - Oficina S-345
Avda de los Castros s/n
39005- Santander. Cantabria. Spain
+34 942 25 32 60
www.vista-silicon.com


Re: [PATCH 1/2] mt9p031: Add mt9p031 sensor driver.

2011-05-18 Thread javier Martin
Hi Ivan,

On 17 May 2011 19:14, Ivan Nazarenko ivan.nazare...@gmail.com wrote:
 Javier,

 I have been using the Aptina patch (https://github.com/Aptina/BeagleBoard-xM)
 on the BeagleBoard while waiting for linux-media to solve this mt9p031 issue.
 Now that you have something working, I would like to try it - but I would
 like to know what clock rate you actually drove the sensor at.

 Reviewing your patch, I suppose it is 54 MHz, so you would be achieving some
 10 full 5MPix frames/s from the sensor. Is that correct? (The Aptina patch
 delivers less than 4 fps.)


Yes, you are right. Although the clock rate is set to 54 MHz, I have
measured 57 MHz with my oscilloscope.


-- 
Javier Martin
Vista Silicon S.L.
CDTUC - FASE C - Oficina S-345
Avda de los Castros s/n
39005- Santander. Cantabria. Spain
+34 942 25 32 60
www.vista-silicon.com


Re: [PATCH 2/2] mt9m111: fix pixel clock

2011-05-18 Thread Guennadi Liakhovetski
Hi Teresa

I've verified the mt9v022 patch - finally I have noticed the difference, 
it does look correct! As for this one, how about this version of your 
patch:

diff --git a/drivers/media/video/mt9m111.c b/drivers/media/video/mt9m111.c
index 53fa2a7..ebebed9 100644
--- a/drivers/media/video/mt9m111.c
+++ b/drivers/media/video/mt9m111.c
@@ -315,10 +315,20 @@ static int mt9m111_setup_rect(struct i2c_client *client,
 static int mt9m111_setup_pixfmt(struct i2c_client *client, u16 outfmt)
 {
int ret;
+   u16 mask = MT9M111_OUTFMT_PROCESSED_BAYER | MT9M111_OUTFMT_RGB |
+   MT9M111_OUTFMT_BYPASS_IFP | MT9M111_OUTFMT_SWAP_RGB_EVEN |
+   MT9M111_OUTFMT_RGB565 | MT9M111_OUTFMT_RGB555 |
+   MT9M111_OUTFMT_SWAP_YCbCr_Cb_Cr |
+   MT9M111_OUTFMT_SWAP_YCbCr_C_Y;
 
-   ret = reg_write(OUTPUT_FORMAT_CTRL2_A, outfmt);
+   ret = reg_read(OUTPUT_FORMAT_CTRL2_A);
+   if (ret >= 0)
+   ret = reg_write(OUTPUT_FORMAT_CTRL2_A, (ret & ~mask) | outfmt);
    if (!ret)
-   ret = reg_write(OUTPUT_FORMAT_CTRL2_B, outfmt);
+   ret = reg_read(OUTPUT_FORMAT_CTRL2_B);
+   if (ret >= 0)
+   ret = reg_write(OUTPUT_FORMAT_CTRL2_B, (ret & ~mask) | outfmt);
+
return ret;
 }
 

It reduces the number of I2C accesses and avoids writing some not 
necessarily desired value to the register. Does this look ok to you? Could 
you maybe test - I've got no mt9m111 cameras.

Thanks
Guennadi

On Wed, 6 Apr 2011, Teresa Gámez wrote:

 This camera driver supports only rising edge, which is the default
 setting of the device. The function mt9m111_setup_pixfmt() overwrites
 this setting. So the driver actually uses falling edge.
 This patch corrects that.
 
 Signed-off-by: Teresa Gámez t.ga...@phytec.de
 ---
  drivers/media/video/mt9m111.c |   14 --
  1 files changed, 12 insertions(+), 2 deletions(-)
 
 diff --git a/drivers/media/video/mt9m111.c b/drivers/media/video/mt9m111.c
 index 53fa2a7..4040a96 100644
 --- a/drivers/media/video/mt9m111.c
 +++ b/drivers/media/video/mt9m111.c
 @@ -315,10 +315,20 @@ static int mt9m111_setup_rect(struct i2c_client *client,
  static int mt9m111_setup_pixfmt(struct i2c_client *client, u16 outfmt)
  {
   int ret;
 + u16 mask = MT9M111_OUTFMT_PROCESSED_BAYER | MT9M111_OUTFMT_RGB |
 + MT9M111_OUTFMT_BYPASS_IFP | MT9M111_OUTFMT_SWAP_RGB_EVEN |
 + MT9M111_OUTFMT_RGB565 | MT9M111_OUTFMT_RGB555 |
 + MT9M111_OUTFMT_SWAP_YCbCr_Cb_Cr |
 + MT9M111_OUTFMT_SWAP_YCbCr_C_Y;
  
 - ret = reg_write(OUTPUT_FORMAT_CTRL2_A, outfmt);
 + ret = reg_clear(OUTPUT_FORMAT_CTRL2_A, mask);
   if (!ret)
 - ret = reg_write(OUTPUT_FORMAT_CTRL2_B, outfmt);
 + ret = reg_set(OUTPUT_FORMAT_CTRL2_A, outfmt);
 + if (!ret)
 + ret = reg_clear(OUTPUT_FORMAT_CTRL2_B, mask);
 + if (!ret)
 + ret = reg_set(OUTPUT_FORMAT_CTRL2_B, outfmt);
 +
   return ret;
  }
  
 -- 
 1.7.0.4
 
 

---
Guennadi Liakhovetski, Ph.D.
Freelance Open-Source Software Developer
http://www.open-technology.de/


Re: [RFC v2 3/3] adp1653: Add driver for LED flash controller

2011-05-18 Thread Sakari Ailus
On Wed, May 18, 2011 at 09:31:24AM +0200, Laurent Pinchart wrote:
 Hi Sakari,

Hi Laurent,

 On Tuesday 17 May 2011 17:14:04 Sakari Ailus wrote:
  This patch adds the driver for the adp1653 LED flash controller. This
  controller supports a high power led in flash and torch modes and an
  indicator light, sometimes also called privacy light.
  
  The adp1653 is used on the Nokia N900.
 
 [snip]
 
  diff --git a/drivers/media/video/adp1653.c b/drivers/media/video/adp1653.c
  new file mode 100644
  index 000..1679707
  --- /dev/null
  +++ b/drivers/media/video/adp1653.c
 
 [snip]
 
  +static int adp1653_get_fault(struct adp1653_flash *flash)
  +{
  +   struct i2c_client *client = v4l2_get_subdevdata(&flash->subdev);
  +   int fault;
  +   int rval;
  +
  +   fault = i2c_smbus_read_byte_data(client, ADP1653_REG_FAULT);
  +   if (IS_ERR_VALUE(fault))
  +   return fault;
  +
  +   flash->fault |= fault;
  +
  +   if (!flash->fault)
  +   return 0;
  +
  +   /* Clear faults. */
  +   rval = i2c_smbus_write_byte_data(client, ADP1653_REG_OUT_SEL, 0);
  +   if (IS_ERR_VALUE(rval))
  +   return rval;
 
 Should the faults be cleared right away, instead of when the user reads the 
 faults control ?
 
  +   flash->led_mode->val = V4L2_FLASH_LED_MODE_NONE;
 
 Does the hardware switch back to none mode when a fault occurs ? The 
 datasheet just states that the ADP1653 is disabled. Does that mean 
 temporarily disabled until the faults are cleared ? If so you should update 
 the registers to turn the LED off.

My understanding is that this is temporary until the faults are cleared.
However, this is difficult to find out since the faults don't just occur
that easily.

OUT_SEL register controls the current to LEDs so this turned everything off.
The indicator should still remain on, I guess, since it's not affected by
the faults, except possibly the over-temperature one.

  +   return flash->fault;
  +}
 
 [snip]
 
  +static int adp1653_get_ctrl(struct v4l2_ctrl *ctrl)
  +{
  +   struct adp1653_flash *flash =
  +   container_of(ctrl->handler, struct adp1653_flash, ctrls);
  +
  +   adp1653_get_fault(flash);
  +   if (IS_ERR_VALUE(flash->fault))
 
 Shouldn't you check the adp1653_get_fault() return value instead ?

Yes. Good catch. I've tried to simulate this code a bit but as always, error
handling tends not to be one of the best parts of the driver, especially
those errors that generally do not happen. :-I

  +   return flash->fault;
  +
  +   ctrl->cur.val = 0;
  +
  +   if (flash->fault & ADP1653_REG_FAULT_FLT_SCP)
  +   ctrl->cur.val |= V4L2_FLASH_FAULT_SHORT_CIRCUIT;
  +   if (flash->fault & ADP1653_REG_FAULT_FLT_OT)
  +   ctrl->cur.val |= V4L2_FLASH_FAULT_OVER_TEMPERATURE;
  +   if (flash->fault & ADP1653_REG_FAULT_FLT_TMR)
  +   ctrl->cur.val |= V4L2_FLASH_FAULT_TIMEOUT;
  +   if (flash->fault & ADP1653_REG_FAULT_FLT_OV)
  +   ctrl->cur.val |= V4L2_FLASH_FAULT_OVER_VOLTAGE;
  +
  +   flash->fault = 0;
  +
  +   return 0;
  +}
 
 [snip]
 
  +static int
  +adp1653_registered(struct v4l2_subdev *subdev)
  +{
  +   struct adp1653_flash *flash = to_adp1653_flash(subdev);
  +
  +   return adp1653_init_controls(flash);
 
 Can't this be moved to adp1653_probe() ? You could then get rid of the 
 registered callback.

Good point. I'll look into this.

  +}
  +
  +static int
  +adp1653_init_device(struct adp1653_flash *flash)
  +{
  +   struct i2c_client *client = v4l2_get_subdevdata(&flash->subdev);
  +   int rval;
  +
  +   /* Clear FAULT register by writing zero to OUT_SEL */
  +   rval = i2c_smbus_write_byte_data(client, ADP1653_REG_OUT_SEL, 0);
  +   if (rval < 0) {
  +   dev_err(&client->dev, "failed writing fault register\n");
  +   return -EIO;
  +   }
  +
  +   /* Read FAULT register */
  +   rval = i2c_smbus_read_byte_data(client, ADP1653_REG_FAULT);
  +   if (rval < 0) {
  +   dev_err(&client->dev, "failed reading fault register\n");
  +   return -EIO;
  +   }
  +
  +   if ((rval & 0x0f) != 0) {
  +   dev_err(&client->dev, "device fault\n");
 
 Same comment as last time :-)

Ouch. I'll try to get the fix done for the next time. :-)

Many thanks for the comments.

Regards,

-- 
Sakari Ailus
sakari dot ailus at iki dot fi


Re: [PATCH 0/2] V4L: Extended crop/compose API

2011-05-18 Thread Sylwester Nawrocki
Hi Laurent, Hans,

On 05/16/2011 09:21 AM, Laurent Pinchart wrote:
 On Saturday 14 May 2011 12:50:32 Hans Verkuil wrote:
 On Friday, May 13, 2011 14:43:08 Laurent Pinchart wrote:
 On Saturday 07 May 2011 13:52:25 Hans Verkuil wrote:
 On Thursday, May 05, 2011 11:39:54 Tomasz Stanislawski wrote:
 
 [snip]

...
 
  All cropcap fields except pixel aspect are supported in the new API. I
  noticed that there was discussion about pixel aspect and I am not
  convinced that it should be a part of the cropping API. Please refer
  to the post:
  http://lists-archives.org/video4linux/22837-need-vidioc_cropcap-clarification.html

 Yeah, what are we going to do about that? I agree that it does not
 belong here. But we can't sweep it under the carpet either.

 The pixel aspect ratio is typically a property of the current input or
 output video format (either a DV preset or a PAL/NTSC STD). For DV
 presets we could add it to struct v4l2_dv_enum_presets and we could do
 the same for STD formats by adding it to struct v4l2_standard.

 Cropping and composing doesn't modify the pixel aspect ratio, so I agree
 it doesn't belong there.

 This would fail for sensors, though, since there the chosen sensor
 framesize is set through S_FMT. (This never quite made sense to me,
 though, from a V4L2 API perspective). I'm not sure whether we can
 always assume 1:1 pixel ratio for sensors. Does anyone know?

  Most of the time I suppose so, but I wouldn't be surprised if some exotic
 sensors had non-square pixel aspect ratios.

 Would it make sense to add the pixel ratio to struct v4l2_frmsizeenum?

 And what about adding a VIDIOC_G/S_FRAMESIZE to select a sensor resolution?

 This would typically be one of the discrete framesizes supported by a
 sensor through binning/skipping. If there is also a scaler on the sensor,
 then that is controlled through S_FMT. For video it is S_FMT that controls
 the scaler (together with S_CROP at the moment), but the source resolution
 is set through S_STD/S_DV_PRESET/S_DV_TIMINGS. It always felt very
 inconsistent to me that there is no equivalent for sensors, even though
 you can enumerate all the available framesizes (just as you can with
 ENUMSTD and ENUM_DV_PRESETS).
 
 Let's take one step back here.
 
  We started with the V4L2 device node API to control (more or less) simple
  devices. Devices became more complex, and we created a new MC API (along
  with the subdev pad-level API) to configure complex pipelines. The V4L2
  device node API still lives, and we want to enhance it to configure medium
  complexity devices.
 
 Before going much further, I think we need to define what a medium complexity 
 device is and where we put the boundary between devices that can be 
 configured 
 with the V4L2 device node API, and devices that require the MC API.
 
 I believe this shouldn't be too difficult. What we need to do is create a 
 simple virtual pipeline that supports cropping, scaling and composing, and 
 map 
 the V4L2 device node API to that pipeline configuration. Devices that map to 
 that pipeline could then use the V4L2 device node API only, with clearly 
 defined semantics.

The need to define clearly what's possible to cover with the V4L2 device node
API sounds reasonable. Nevertheless I like Hans' proposal of new 
VIDIOC_G/S_FRAMESIZE ioctls unifying the video capture and tv/dv APIs.
It extends the V4L2 API with the capability to handle scaling, which might
well be available in video pipelines that do not want to use the MC API at all.

 [snip]
 
    * resolution of an image combined with support for VIDIOC_S_MULTISELECTION
      allows to pass a triple format/crop/compose sizes in a single ioctl

 I don't believe S_MULTISELECTION will solve anything. Specific
 use-cases perhaps, but not the general problem of setting up a
 pipeline. I feel another brainstorm session coming to solve that. We
 never came to a solution for it in Warsaw.

 Pipelines are configured on subdev nodes, not on video nodes, so I'm also
 unsure whether multiselection support would really be useful.

 If we decide to implement multiselection support, we might as well use
 that only. We would need a multiselection target bitmask, so selection
  targets should all be < 32.

 Thinking some more about it, does it make sense to set both crop and
 compose on a single video device node (not talking about mem-to-mem,
 where you use the type to multiplex input/output devices on the same
 node) ? If so, what would the use cases be ?

I can't think of any; one either uses crop or compose.

  Should all devices support the selection API, or only the simple ones
  that don't expose a user-configurable pipeline to userspace through the
  MC API? The proposed API looks good to me, but before acking it I'd
  like to clarify how (if at all) this will interact with subdev pad-level
  configuration on devices that support that. My current feeling is that
  video device nodes for such devices should only be used for video
  streaming. All 

Re: [RFC v2 3/3] adp1653: Add driver for LED flash controller

2011-05-18 Thread Laurent Pinchart
Hi Sakari,

On Wednesday 18 May 2011 13:29:56 Sakari Ailus wrote:
 On Wed, May 18, 2011 at 09:31:24AM +0200, Laurent Pinchart wrote:
  On Tuesday 17 May 2011 17:14:04 Sakari Ailus wrote:
   This patch adds the driver for the adp1653 LED flash controller. This
   controller supports a high power led in flash and torch modes and an
   indicator light, sometimes also called privacy light.
   
   The adp1653 is used on the Nokia N900.
  
  [snip]
  
   diff --git a/drivers/media/video/adp1653.c
   b/drivers/media/video/adp1653.c new file mode 100644
   index 000..1679707
   --- /dev/null
   +++ b/drivers/media/video/adp1653.c
  
  [snip]
  
   +static int adp1653_get_fault(struct adp1653_flash *flash)
   +{
    + struct i2c_client *client = v4l2_get_subdevdata(&flash->subdev);
   + int fault;
   + int rval;
   +
   + fault = i2c_smbus_read_byte_data(client, ADP1653_REG_FAULT);
   + if (IS_ERR_VALUE(fault))
   + return fault;
   +
    + flash->fault |= fault;
    +
    + if (!flash->fault)
   + return 0;
   +
   + /* Clear faults. */
   + rval = i2c_smbus_write_byte_data(client, ADP1653_REG_OUT_SEL, 0);
   + if (IS_ERR_VALUE(rval))
   + return rval;
  
  Should the faults be cleared right away, instead of when the user reads
  the faults control ?
  
    + flash->led_mode->val = V4L2_FLASH_LED_MODE_NONE;
  
  Does the hardware switch back to none mode when a fault occurs ? The
  datasheet just states that the ADP1653 is disabled. Does that mean
  temporarily disabled until the faults are cleared ? If so you should
  update the registers to turn the LED off.
 
 My understanding is that this is temporary until the faults are cleared.
 However, this is difficult to find out since the faults don't just occur
 that easily.
 
 OUT_SEL register controls the current to LEDs so this turned everything
 off. The indicator should still remain on, I guess, since it's not
 affected by the faults, except possibly the over-temperature one.

OK, I'm fine with the code then.

    + return flash->fault;
   +}
  
  [snip]
  
   +static int adp1653_get_ctrl(struct v4l2_ctrl *ctrl)
   +{
    + struct adp1653_flash *flash =
    + container_of(ctrl->handler, struct adp1653_flash, ctrls);
    +
    + adp1653_get_fault(flash);
    + if (IS_ERR_VALUE(flash->fault))
  
  Shouldn't you check the adp1653_get_fault() return value instead ?
 
 Yes. Good catch. I've tried to simulate this code a bit but as always,
 error handling tends not to be one of the best parts of the driver,
 especially those errors that generally do not happen. :-I
 
    + return flash->fault;
    +
    + ctrl->cur.val = 0;
    +
    + if (flash->fault & ADP1653_REG_FAULT_FLT_SCP)
    + ctrl->cur.val |= V4L2_FLASH_FAULT_SHORT_CIRCUIT;
    + if (flash->fault & ADP1653_REG_FAULT_FLT_OT)
    + ctrl->cur.val |= V4L2_FLASH_FAULT_OVER_TEMPERATURE;
    + if (flash->fault & ADP1653_REG_FAULT_FLT_TMR)
    + ctrl->cur.val |= V4L2_FLASH_FAULT_TIMEOUT;
    + if (flash->fault & ADP1653_REG_FAULT_FLT_OV)
    + ctrl->cur.val |= V4L2_FLASH_FAULT_OVER_VOLTAGE;
    +
    + flash->fault = 0;
   +
   + return 0;
   +}
  
  [snip]
  
   +static int
   +adp1653_registered(struct v4l2_subdev *subdev)
   +{
   + struct adp1653_flash *flash = to_adp1653_flash(subdev);
   +
   + return adp1653_init_controls(flash);
  
  Can't this be moved to adp1653_probe() ? You could then get rid of the
  registered callback.
 
 Good point. I'll look into this.
 
   +}
   +
   +static int
   +adp1653_init_device(struct adp1653_flash *flash)
   +{
    + struct i2c_client *client = v4l2_get_subdevdata(&flash->subdev);
   + int rval;
   +
   + /* Clear FAULT register by writing zero to OUT_SEL */
   + rval = i2c_smbus_write_byte_data(client, ADP1653_REG_OUT_SEL, 0);
    + if (rval < 0) {
    + dev_err(&client->dev, "failed writing fault register\n");
   + return -EIO;
   + }
   +
   + /* Read FAULT register */
   + rval = i2c_smbus_read_byte_data(client, ADP1653_REG_FAULT);
    + if (rval < 0) {
    + dev_err(&client->dev, "failed reading fault register\n");
   + return -EIO;
   + }
   +
    + if ((rval & 0x0f) != 0) {
    + dev_err(&client->dev, "device fault\n");
  
  Same comment as last time :-)
 
 Ouch. I'll try to get the fix done for the next time. :-)

-- 
Regards,

Laurent Pinchart


Re: [PATCH 0/2] V4L: Extended crop/compose API

2011-05-18 Thread Sylwester Nawrocki
On 05/18/2011 02:31 PM, Hans Verkuil wrote:
 On Wednesday, May 18, 2011 14:06:21 Sylwester Nawrocki wrote:
 On 05/16/2011 09:21 AM, Laurent Pinchart wrote:
  On Saturday 14 May 2011 12:50:32 Hans Verkuil wrote:
  On Friday, May 13, 2011 14:43:08 Laurent Pinchart wrote:
  On Saturday 07 May 2011 13:52:25 Hans Verkuil wrote:
  On Thursday, May 05, 2011 11:39:54 Tomasz Stanislawski wrote:
 
 
  [snip]
 ...
 
  * resolution of an image combined with support for
 
  VIDIOC_S_MULTISELECTION
 
 
 
  allows to pass a triple format/crop/compose sizes in a single
 
  ioctl
 
 
 
  I don't believe S_MULTISELECTION will solve anything. Specific
 
  use-cases perhaps, but not the general problem of setting up a
 
  pipeline. I feel another brainstorm session coming to solve that. We
 
  never came to a solution for it in Warsaw.
 
 
 
  Pipelines are configured on subdev nodes, not on video nodes, so I'm also
 
  unsure whether multiselection support would really be useful.
 
 
 
  If we decide to implement multiselection support, we might as well use
 
  that only. We would need a multiselection target bitmask, so selection
 
  targets should all be  32.
 
 
 
  Thinking some more about it, does it make sense to set both crop and
 
  compose on a single video device node (not talking about mem-to-mem,
 
  where you use the type to multiplex input/output devices on the same
 
  node) ? If so, what would the use cases be ?
 

 
 I can't think of any; one either uses crop or compose.
 
 
 I can: you crop in the video receiver and compose it into a larger buffer.
 
 Actually quite a desirable feature.

Yes, right. Don't know why I imagined something different.
And we need it in Samsung capture interfaces as well. The H/W
is capable of cropping and composing with the camera interface as a data source,
just as it is done with memory buffers.
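
For reference, a rough userspace sketch of that use case under the selection API being proposed in this thread; the VIDIOC_S_SELECTION ioctl and the V4L2_SEL_TGT_CROP / V4L2_SEL_TGT_COMPOSE target names are assumptions here, not settled API:

#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* Crop a 640x480 window from the receiver and compose it at an
 * offset inside a larger capture buffer. */
static int setup_crop_compose(int fd)
{
	struct v4l2_selection sel = {
		.type = V4L2_BUF_TYPE_VIDEO_CAPTURE,
		.target = V4L2_SEL_TGT_CROP,		/* assumed name */
		.r = { .left = 128, .top = 96, .width = 640, .height = 480 },
	};

	if (ioctl(fd, VIDIOC_S_SELECTION, &sel) < 0)	/* assumed ioctl */
		return -1;

	sel.target = V4L2_SEL_TGT_COMPOSE;		/* assumed name */
	sel.r = (struct v4l2_rect){ .left = 320, .top = 240,
				    .width = 640, .height = 480 };
	return ioctl(fd, VIDIOC_S_SELECTION, &sel);
}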


Regards,
-- 
Sylwester Nawrocki
Samsung Poland R&D Center


Re: [PATCH/RFC 1/4] V4L: add three new ioctl()s for multi-size videobuffer management

2011-05-18 Thread Laurent Pinchart
On Friday 13 May 2011 09:45:51 Guennadi Liakhovetski wrote:
 I've found some more time to get back to this. Let me try to recap, what
 has been discussed. I've looked through all replies again (thanks to
 all!), so, I'll present a summary. Any mistakes and misinterpretations are
 mine;) If I misunderstand someone or forget anything - please, shout!
 
 On Fri, 1 Apr 2011, Guennadi Liakhovetski wrote:
  A possibility to preallocate and initialise buffers of different sizes
  in V4L2 is required for an efficient implementation of a snapshot mode.
  This patch adds three new ioctl()s: VIDIOC_CREATE_BUFS,
  VIDIOC_DESTROY_BUFS, and VIDIOC_SUBMIT_BUF and defines respective data
  structures.
  
  Signed-off-by: Guennadi Liakhovetski g.liakhovet...@gmx.de
  ---
  
   drivers/media/video/v4l2-compat-ioctl32.c |3 ++
   drivers/media/video/v4l2-ioctl.c  |   43
   + include/linux/videodev2.h
   |   24  include/media/v4l2-ioctl.h|   
   3 ++
   4 files changed, 73 insertions(+), 0 deletions(-)
  
  diff --git a/drivers/media/video/v4l2-compat-ioctl32.c
  b/drivers/media/video/v4l2-compat-ioctl32.c index 7c26947..d71b289
  100644
  --- a/drivers/media/video/v4l2-compat-ioctl32.c
  +++ b/drivers/media/video/v4l2-compat-ioctl32.c
  @@ -922,6 +922,9 @@ long v4l2_compat_ioctl32(struct file *file, unsigned
  int cmd, unsigned long arg)
  
  case VIDIOC_DQEVENT:
  case VIDIOC_SUBSCRIBE_EVENT:
  
  case VIDIOC_UNSUBSCRIBE_EVENT:
  +   case VIDIOC_CREATE_BUFS:
  +   case VIDIOC_DESTROY_BUFS:
  
  +   case VIDIOC_SUBMIT_BUF:
  ret = do_video_ioctl(file, cmd, arg);
  break;
  
  diff --git a/drivers/media/video/v4l2-ioctl.c
  b/drivers/media/video/v4l2-ioctl.c index a01ed39..b80a211 100644
  --- a/drivers/media/video/v4l2-ioctl.c
  +++ b/drivers/media/video/v4l2-ioctl.c
  @@ -259,6 +259,9 @@ static const char *v4l2_ioctls[] = {
  
  [_IOC_NR(VIDIOC_DQEVENT)]  = VIDIOC_DQEVENT,
  [_IOC_NR(VIDIOC_SUBSCRIBE_EVENT)]  = VIDIOC_SUBSCRIBE_EVENT,
  [_IOC_NR(VIDIOC_UNSUBSCRIBE_EVENT)] = VIDIOC_UNSUBSCRIBE_EVENT,
  
  +   [_IOC_NR(VIDIOC_CREATE_BUFS)]  = VIDIOC_CREATE_BUFS,
  +   [_IOC_NR(VIDIOC_DESTROY_BUFS)] = VIDIOC_DESTROY_BUFS,
  +   [_IOC_NR(VIDIOC_SUBMIT_BUF)]   = VIDIOC_SUBMIT_BUF,
  
   };
   #define V4L2_IOCTLS ARRAY_SIZE(v4l2_ioctls)
  
  @@ -2184,6 +2187,46 @@ static long __video_do_ioctl(struct file *file,
  
  dbgarg(cmd, type=0x%8.8x, sub-type);
  break;
  
  }
  
  +   case VIDIOC_CREATE_BUFS:
  +   {
  +   struct v4l2_create_buffers *create = arg;
  +
  +   if (!ops-vidioc_create_bufs)
  +   break;
  +   ret = check_fmt(ops, create-format.type);
  +   if (ret)
  +   break;
  +
  +   if (create-size)
  +   CLEAR_AFTER_FIELD(create, count);
  +
  +   ret = ops-vidioc_create_bufs(file, fh, create);
  +
  +   dbgarg(cmd, count=%d\n, create-count);
  +   break;
  +   }
  +   case VIDIOC_DESTROY_BUFS:
  +   {
  +   struct v4l2_buffer_span *span = arg;
  +
  +   if (!ops-vidioc_destroy_bufs)
  +   break;
  +
  +   ret = ops-vidioc_destroy_bufs(file, fh, span);
  +
  +   dbgarg(cmd, count=%d, span-count);
  +   break;
  +   }
  +   case VIDIOC_SUBMIT_BUF:
  +   {
  +   unsigned int *i = arg;
  +
  +   if (!ops-vidioc_submit_buf)
  +   break;
  +   ret = ops-vidioc_submit_buf(file, fh, *i);
  +   dbgarg(cmd, index=%d, *i);
  +   break;
  +   }
  
  default:
  {
  
  bool valid_prio = true;
  
  diff --git a/include/linux/videodev2.h b/include/linux/videodev2.h
  index aa6c393..b6ef46e 100644
  --- a/include/linux/videodev2.h
  +++ b/include/linux/videodev2.h
  @@ -1847,6 +1847,26 @@ struct v4l2_dbg_chip_ident {
  
  __u32 revision;/* chip revision, chip specific */
   
   } __attribute__ ((packed));
  
  +/* VIDIOC_DESTROY_BUFS */
  +struct v4l2_buffer_span {
  +   __u32   index;  /* output: buffers index...index + 
  count - 1 have been
  created */ +__u32   count;
  +   __u32   reserved[2];
  +};
  +
  +/* struct v4l2_createbuffers::flags */
   +#define V4L2_BUFFER_FLAG_NO_CACHE_INVALIDATE   (1 << 0)
 
 1. An additional flag FLAG_NO_CACHE_FLUSH is needed for output devices.

Shouldn't it be NO_CACHE_CLEAN ?

 2. Both these flags should not be passed with CREATE, but with SUBMIT
 (which will be renamed to PREPARE or something similar). It should be
 possible to prepare the same buffer with different cacheing attributes
 during a running operation. Shall these flags be added to values, taken by
 struct v4l2_buffer::flags, since that is the struct, that will be used as
 the argument for the new version of the SUBMIT ioctl()?

Do you have a 

Re: [PATCH/RFC 1/4] V4L: add three new ioctl()s for multi-size videobuffer management

2011-05-18 Thread Laurent Pinchart
On Tuesday 17 May 2011 07:52:28 Sakari Ailus wrote:
 Guennadi Liakhovetski wrote:
  Hi Sakari
 
 Hi Guennadi,
 
 [clip]
 
   bool valid_prio = true;
  
  diff --git a/include/linux/videodev2.h b/include/linux/videodev2.h
  index aa6c393..b6ef46e 100644
  --- a/include/linux/videodev2.h
  +++ b/include/linux/videodev2.h
  @@ -1847,6 +1847,26 @@ struct v4l2_dbg_chip_ident {
  
   __u32 revision;/* chip revision, chip specific */
   
   } __attribute__ ((packed));
  
  +/* VIDIOC_DESTROY_BUFS */
  +struct v4l2_buffer_span {
  +__u32   index;  /* output: buffers 
  index...index + count - 1 have
  been created */ +__u32   count;
  +__u32   reserved[2];
  +};
  +
  +/* struct v4l2_createbuffers::flags */
   +#define V4L2_BUFFER_FLAG_NO_CACHE_INVALIDATE   (1 << 0)
  
  1. An additional flag FLAG_NO_CACHE_FLUSH is needed for output devices.
  
  Should this be called FLAG_NO_CACHE_CLEAN?
  
  Invalidate == Make contents of the cache invalid
  
  Clean == Make sure no dirty stuff resides in the cache
  
  and mark it clean?...
  
  Flush == invalidate + clean
  
  Maybe you meant first clean, then invalidate?
  
  In principle, I think, yes, CLEAN would define it more strictly.
 
 Yes; I'd prefer that. :-)
 
  It occurred to me to wonder if two flags are needed for this, but I
  think the answer is yes, since there can be memory-to-memory devices
  which are both OUTPUT and CAPTURE.
  
  2. Both these flags should not be passed with CREATE, but with SUBMIT
  (which will be renamed to PREPARE or something similar). It should be
  possible to prepare the same buffer with different cacheing attributes
  during a running operation. Shall these flags be added to values, taken
  by struct v4l2_buffer::flags, since that is the struct, that will be
  used as the argument for the new version of the SUBMIT ioctl()?
  
  +
  +/* VIDIOC_CREATE_BUFS */
  +struct v4l2_create_buffers {
  +__u32   index;  /* output: buffers 
  index...index + count - 1 
have
  been created */ +__u32   count;
  +__u32   flags;  /* V4L2_BUFFER_FLAG_* */
  +enum v4l2_memorymemory;
  +__u32   size;   /* Explicit size, e.g., 
  for compressed streams 
*/
  +struct v4l2_format  format; /* type is used 
  always, the rest 
if
  size == 0 */ +};
  
   1. Care must be taken to keep index <= V4L2_MAX_FRAME
  
  This will make allocating new ranges of buffers impossible if the
  existing buffer indices are scattered within the range.
  
  What about making it possible to pass an array of buffer indices to the
  user, just like VIDIOC_S_EXT_CTRLS does? I'm not sure if this would be
  perfect, but it would avoid the problem of requiring continuous ranges
  of buffer ids.
  
  struct v4l2_create_buffers {
  
 __u32   *index;
 __u32   count;
 __u32   flags;
 enum v4l2_memorymemory;
 __u32   size;
 struct v4l2_format  format;
  
  };
  
  Index would be a pointer to an array of buffer indices and its length
  would be count.
  
  I don't understand this. We do _not_ want to allow holes in indices. For
  now we decide to not implement DESTROY at all. In this case indices just
  increment contiguously.
  
  The next stage is to implement DESTROY, but only in strict reverse order
  - without holes and in the same ranges, as buffers have been CREATEd
  before. So, I really don't understand why we need arrays, sorry.
 
 Well, now that we're defining a second interface to make new buffer
 objects, I just thought it should be made as future-proof as we can.

I second that. I don't like rushing new APIs to find out we need something 
else after 6 months.

 But even with single index, it's always possible to issue the ioctl more
 than once and achieve the same result as if there was an array of indices.
 
  What would be the reason to disallow creating holes in the index range? I
  don't see much reason from the application or implementation point of view,
  especially as we're limited to such low numbers anyway.
 
 Speaking of which; perhaps I'm bringing this up rather late, but should
 we define the API to allow larger numbers than VIDEO_MAX_FRAME? 32 isn't
 all that much after all --- this might become a limiting factor later on
 when there are devices with huge amounts of memory.
 
 Allowing CREATE_BUF to do that right now would be possible since
 applications using it are new users and can be expected to be using it
 properly. :-)

-- 
Regards,

Laurent Pinchart


Re: Codec controls question

2011-05-18 Thread Laurent Pinchart
Hi Kamil,

On Tuesday 17 May 2011 18:23:19 Kamil Debski wrote:
 Hi,
 
 Some time ago we were discussing the set of controls that should be
 implemented for codec support.
 
 I remember that the result of this discussion was that the controls should
 be as integrated as possible. This included the V4L2_CID_MPEG_LEVEL and
 all controls related to the quantization parameter.
 The problem with such approach is that the levels are different for MPEG4,
 H264 and H263. Same for quantization parameter - it ranges from 1 to 31
 for MPEG4/H263 and from 0 to 51 for H264.
 
 Having single controls for the more than one codec seemed as a good
 solution. Unfortunately I don't see a good option to implement it,
 especially with the control framework. My idea was to have the min/max
 values for QP set in the S_FMT call on the CAPTURE. For MPEG_LEVEL it
 would be checked in the S_CTRL callback and if it did not fit the chosen
 format it failed.
 
 So I see three solutions to this problem and I wanted to ask about your
 opinion.
 
 1) Have a separate controls whenever the range or valid value range
 differs.
 
 This is the simplest and in my opinion the best solution I can think of.
 This way we'll have different set of controls if the valid values are
 different (e.g. V4L2_CID_MPEG_MPEG4_LEVEL, V4L2_CID_MPEG_H264_LEVEL).
 User can set the controls at any time. The only con of this approach is
 having more controls.
 
 2) Permit the user to set the control only after running S_FMT on the
 CAPTURE. This approach would enable us to keep less controls, but would
 require to set the min/max values for controls in the S_FMT. This could be
 done by adding controls in S_FMT or by manipulating their range and
 disabling unused controls. In case of MPEG_LEVEL it would require s_ctrl
 callback to check whether the requested level is valid for the chosen
 codec.
 
 This would be somehow against the spec, but if we allow the codec
 interface to have some differences this would be ok.
 
 3) Let the user set the controls whenever and check them during the
 STREAMON call.
 
 The controls could be set anytime, and the control range supplied to the
 control framework would cover values possible for all supported codecs.
 
 This approach is more difficult than first approach. It is worse in case of
 user space than the second approach - the user is unaware of any mistakes
 until the STREAMON call. The argument for this approach is the possibility
 to have a few controls less.
 
 So I would like to hear a comment about the above propositions. Personally
 I would opt for the first solution.

I think the question boils down to whether we want to support controls that 
have different valid ranges depending on formats, or even other controls. I 
think the issue isn't specific to codec controls.

-- 
Regards,

Laurent Pinchart


[PATCH 0/5] soc-camera for .40: adapt to changed and new mediabus pixel codes

2011-05-18 Thread Guennadi Liakhovetski
With the integration of the Media Controller and related drivers, pixel
codes changed their values and new ones have been added. soc-camera's uses
of those codes have to be adjusted now too.

Thanks
Guennadi
---
Guennadi Liakhovetski, Ph.D.
Freelance Open-Source Software Developer
http://www.open-technology.de/


[PATCH 1/5] V4L: soc-camera: avoid huge arrays, caused by changed format codes

2011-05-18 Thread Guennadi Liakhovetski
Recently mediabus pixel format codes have become a part of the user-
space API, at which time their values have also been changed from
contiguous numbers running from 0 to sparse numbers with values
around 0x1000, 0x2000, 0x3000... This made them unsuitable for use
as array indices. This patch switches the soc-camera internal format
look-ups to not depend on the values of those macros.

Signed-off-by: Guennadi Liakhovetski g.liakhovet...@gmx.de
---

Yes, this violates the coding style; I've chosen to do that to minimize
the patch and simplify its verification. We can reformat it in the future
if desired.
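
The lookup that replaces direct array indexing presumably reduces to a linear search over the new table; a sketch of what the helper this series relies on might look like (treat the exact signature as an assumption):

const struct soc_mbus_pixelfmt *soc_mbus_find_fmtdesc(
		enum v4l2_mbus_pixelcode code,
		const struct soc_mbus_lookup *lookup, int n)
{
	int i;

	/* Sparse V4L2_MBUS_FMT_* values: walk the table instead of indexing. */
	for (i = 0; i < n; i++)
		if (lookup[i].code == code)
			return &lookup[i].fmt;

	return NULL;
}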

 drivers/media/video/soc_mediabus.c |   89 ++-
 include/media/soc_mediabus.h   |   14 ++
 2 files changed, 80 insertions(+), 23 deletions(-)

diff --git a/drivers/media/video/soc_mediabus.c 
b/drivers/media/video/soc_mediabus.c
index ed77aa0..505b586 100644
--- a/drivers/media/video/soc_mediabus.c
+++ b/drivers/media/video/soc_mediabus.c
@@ -15,121 +15,152 @@
 #include media/v4l2-mediabus.h
 #include media/soc_mediabus.h
 
-#define MBUS_IDX(f) (V4L2_MBUS_FMT_ ## f - V4L2_MBUS_FMT_FIXED - 1)
-
-static const struct soc_mbus_pixelfmt mbus_fmt[] = {
-   [MBUS_IDX(YUYV8_2X8)] = {
+static const struct soc_mbus_lookup mbus_fmt[] = {
+{
+   .code = V4L2_MBUS_FMT_YUYV8_2X8,
+   .fmt = {
.fourcc = V4L2_PIX_FMT_YUYV,
.name   = YUYV,
.bits_per_sample= 8,
.packing= SOC_MBUS_PACKING_2X8_PADHI,
.order  = SOC_MBUS_ORDER_LE,
},
-   [MBUS_IDX(YVYU8_2X8)] = {
+}, {
+   .code = V4L2_MBUS_FMT_YVYU8_2X8,
+   .fmt = {
.fourcc = V4L2_PIX_FMT_YVYU,
.name   = YVYU,
.bits_per_sample= 8,
.packing= SOC_MBUS_PACKING_2X8_PADHI,
.order  = SOC_MBUS_ORDER_LE,
},
-   [MBUS_IDX(UYVY8_2X8)] = {
+}, {
+   .code = V4L2_MBUS_FMT_UYVY8_2X8,
+   .fmt = {
.fourcc = V4L2_PIX_FMT_UYVY,
.name   = UYVY,
.bits_per_sample= 8,
.packing= SOC_MBUS_PACKING_2X8_PADHI,
.order  = SOC_MBUS_ORDER_LE,
},
-   [MBUS_IDX(VYUY8_2X8)] = {
+}, {
+   .code = V4L2_MBUS_FMT_VYUY8_2X8,
+   .fmt = {
.fourcc = V4L2_PIX_FMT_VYUY,
.name   = VYUY,
.bits_per_sample= 8,
.packing= SOC_MBUS_PACKING_2X8_PADHI,
.order  = SOC_MBUS_ORDER_LE,
},
-   [MBUS_IDX(RGB555_2X8_PADHI_LE)] = {
+}, {
+   .code = V4L2_MBUS_FMT_RGB555_2X8_PADHI_LE,
+   .fmt = {
.fourcc = V4L2_PIX_FMT_RGB555,
.name   = RGB555,
.bits_per_sample= 8,
.packing= SOC_MBUS_PACKING_2X8_PADHI,
.order  = SOC_MBUS_ORDER_LE,
},
-   [MBUS_IDX(RGB555_2X8_PADHI_BE)] = {
+}, {
+   .code = V4L2_MBUS_FMT_RGB555_2X8_PADHI_BE,
+   .fmt = {
.fourcc = V4L2_PIX_FMT_RGB555X,
.name   = RGB555X,
.bits_per_sample= 8,
.packing= SOC_MBUS_PACKING_2X8_PADHI,
.order  = SOC_MBUS_ORDER_LE,
},
-   [MBUS_IDX(RGB565_2X8_LE)] = {
+}, {
+   .code = V4L2_MBUS_FMT_RGB565_2X8_LE,
+   .fmt = {
.fourcc = V4L2_PIX_FMT_RGB565,
.name   = RGB565,
.bits_per_sample= 8,
.packing= SOC_MBUS_PACKING_2X8_PADHI,
.order  = SOC_MBUS_ORDER_LE,
},
-   [MBUS_IDX(RGB565_2X8_BE)] = {
+}, {
+   .code = V4L2_MBUS_FMT_RGB565_2X8_BE,
+   .fmt = {
.fourcc = V4L2_PIX_FMT_RGB565X,
.name   = RGB565X,
.bits_per_sample= 8,
.packing= SOC_MBUS_PACKING_2X8_PADHI,
.order  = SOC_MBUS_ORDER_LE,
},
-   [MBUS_IDX(SBGGR8_1X8)] = {
+}, {
+   .code = V4L2_MBUS_FMT_SBGGR8_1X8,
+   .fmt = {
.fourcc = V4L2_PIX_FMT_SBGGR8,
.name   = Bayer 8 BGGR,
.bits_per_sample= 8,
.packing= SOC_MBUS_PACKING_NONE,
.order  = SOC_MBUS_ORDER_LE,
},
-   [MBUS_IDX(SBGGR10_1X10)] = {
+}, {
+   .code = V4L2_MBUS_FMT_SBGGR10_1X10,
+   .fmt = {
  

[PATCH 4/5] V4L: soc-camera: add more format look-up entries

2011-05-18 Thread Guennadi Liakhovetski
Add new look-up entries for all mediabus codes, for which respective
fourcc codes exist.

Signed-off-by: Guennadi Liakhovetski g.liakhovet...@gmx.de
---
 drivers/media/video/soc_mediabus.c |  144 
 1 files changed, 144 insertions(+), 0 deletions(-)

diff --git a/drivers/media/video/soc_mediabus.c 
b/drivers/media/video/soc_mediabus.c
index 5090ec5..3a5bf01 100644
--- a/drivers/media/video/soc_mediabus.c
+++ b/drivers/media/video/soc_mediabus.c
@@ -160,6 +160,150 @@ static const struct soc_mbus_lookup mbus_fmt[] = {
.packing= SOC_MBUS_PACKING_2X8_PADLO,
.order  = SOC_MBUS_ORDER_BE,
},
+}, {
+   .code = V4L2_MBUS_FMT_RGB444_2X8_PADHI_BE,
+   .fmt = {
+   .fourcc = V4L2_PIX_FMT_RGB444,
+   .name   = RGB444,
+   .bits_per_sample= 8,
+   .packing= SOC_MBUS_PACKING_2X8_PADHI,
+   .order  = SOC_MBUS_ORDER_BE,
+   },
+}, {
+   .code = V4L2_MBUS_FMT_YUYV8_1_5X8,
+   .fmt = {
+   .fourcc = V4L2_PIX_FMT_YUV420,
+   .name   = YUYV 4:2:0,
+   .bits_per_sample= 8,
+   .packing= SOC_MBUS_PACKING_1_5X8,
+   .order  = SOC_MBUS_ORDER_LE,
+   },
+}, {
+   .code = V4L2_MBUS_FMT_YVYU8_1_5X8,
+   .fmt = {
+   .fourcc = V4L2_PIX_FMT_YVU420,
+   .name   = YVYU 4:2:0,
+   .bits_per_sample= 8,
+   .packing= SOC_MBUS_PACKING_1_5X8,
+   .order  = SOC_MBUS_ORDER_LE,
+   },
+}, {
+   .code = V4L2_MBUS_FMT_UYVY8_1X16,
+   .fmt = {
+   .fourcc = V4L2_PIX_FMT_UYVY,
+   .name   = UYVY 16bit,
+   .bits_per_sample= 16,
+   .packing= SOC_MBUS_PACKING_EXTEND16,
+   .order  = SOC_MBUS_ORDER_LE,
+   },
+}, {
+   .code = V4L2_MBUS_FMT_VYUY8_1X16,
+   .fmt = {
+   .fourcc = V4L2_PIX_FMT_VYUY,
+   .name   = VYUY 16bit,
+   .bits_per_sample= 16,
+   .packing= SOC_MBUS_PACKING_EXTEND16,
+   .order  = SOC_MBUS_ORDER_LE,
+   },
+}, {
+   .code = V4L2_MBUS_FMT_YUYV8_1X16,
+   .fmt = {
+   .fourcc = V4L2_PIX_FMT_YUYV,
+   .name   = YUYV 16bit,
+   .bits_per_sample= 16,
+   .packing= SOC_MBUS_PACKING_EXTEND16,
+   .order  = SOC_MBUS_ORDER_LE,
+   },
+}, {
+   .code = V4L2_MBUS_FMT_YVYU8_1X16,
+   .fmt = {
+   .fourcc = V4L2_PIX_FMT_YVYU,
+   .name   = YVYU 16bit,
+   .bits_per_sample= 16,
+   .packing= SOC_MBUS_PACKING_EXTEND16,
+   .order  = SOC_MBUS_ORDER_LE,
+   },
+}, {
+   .code = V4L2_MBUS_FMT_SGRBG8_1X8,
+   .fmt = {
+   .fourcc = V4L2_PIX_FMT_SGRBG8,
+   .name   = Bayer 8 GRBG,
+   .bits_per_sample= 8,
+   .packing= SOC_MBUS_PACKING_NONE,
+   .order  = SOC_MBUS_ORDER_LE,
+   },
+}, {
+   .code = V4L2_MBUS_FMT_SGRBG10_DPCM8_1X8,
+   .fmt = {
+   .fourcc = V4L2_PIX_FMT_SGRBG10DPCM8,
+   .name   = Bayer 10 BGGR DPCM 8,
+   .bits_per_sample= 8,
+   .packing= SOC_MBUS_PACKING_NONE,
+   .order  = SOC_MBUS_ORDER_LE,
+   },
+}, {
+   .code = V4L2_MBUS_FMT_SGBRG10_1X10,
+   .fmt = {
+   .fourcc = V4L2_PIX_FMT_SGBRG10,
+   .name   = Bayer 10 GBRG,
+   .bits_per_sample= 10,
+   .packing= SOC_MBUS_PACKING_EXTEND16,
+   .order  = SOC_MBUS_ORDER_LE,
+   },
+}, {
+   .code = V4L2_MBUS_FMT_SGRBG10_1X10,
+   .fmt = {
+   .fourcc = V4L2_PIX_FMT_SGRBG10,
+   .name   = Bayer 10 GRBG,
+   .bits_per_sample= 10,
+   .packing= SOC_MBUS_PACKING_EXTEND16,
+   .order  = SOC_MBUS_ORDER_LE,
+   },
+}, {
+   .code = V4L2_MBUS_FMT_SRGGB10_1X10,
+   .fmt = {
+   .fourcc = V4L2_PIX_FMT_SRGGB10,
+   .name   = Bayer 10 RGGB,
+   .bits_per_sample   

[PATCH 2/5] V4L: omap1-camera: fix huge lookup array

2011-05-18 Thread Guennadi Liakhovetski
Since V4L2_MBUS_FMT_* codes have become large and sparse, they cannot
be used as array indices anymore.

Signed-off-by: Guennadi Liakhovetski g.liakhovet...@gmx.de
---
 drivers/media/video/omap1_camera.c |   41 ++-
 1 files changed, 30 insertions(+), 11 deletions(-)

diff --git a/drivers/media/video/omap1_camera.c 
b/drivers/media/video/omap1_camera.c
index 5954b93..fe577a9 100644
--- a/drivers/media/video/omap1_camera.c
+++ b/drivers/media/video/omap1_camera.c
@@ -990,63 +990,80 @@ static void omap1_cam_remove_device(struct 
soc_camera_device *icd)
 }
 
 /* Duplicate standard formats based on host capability of byte swapping */
-static const struct soc_mbus_pixelfmt omap1_cam_formats[] = {
-   [V4L2_MBUS_FMT_UYVY8_2X8] = {
+static const struct soc_mbus_lookup omap1_cam_formats[] = {
+{
+   .code = V4L2_MBUS_FMT_UYVY8_2X8,
+   .fmt = {
.fourcc = V4L2_PIX_FMT_YUYV,
.name   = YUYV,
.bits_per_sample= 8,
.packing= SOC_MBUS_PACKING_2X8_PADHI,
.order  = SOC_MBUS_ORDER_BE,
},
-   [V4L2_MBUS_FMT_VYUY8_2X8] = {
+}, {
+   .code = V4L2_MBUS_FMT_VYUY8_2X8,
+   .fmt = {
.fourcc = V4L2_PIX_FMT_YVYU,
.name   = YVYU,
.bits_per_sample= 8,
.packing= SOC_MBUS_PACKING_2X8_PADHI,
.order  = SOC_MBUS_ORDER_BE,
},
-   [V4L2_MBUS_FMT_YUYV8_2X8] = {
+}, {
+   .code = V4L2_MBUS_FMT_YUYV8_2X8,
+   .fmt = {
.fourcc = V4L2_PIX_FMT_UYVY,
.name   = UYVY,
.bits_per_sample= 8,
.packing= SOC_MBUS_PACKING_2X8_PADHI,
.order  = SOC_MBUS_ORDER_BE,
},
-   [V4L2_MBUS_FMT_YVYU8_2X8] = {
+}, {
+   .code = V4L2_MBUS_FMT_YVYU8_2X8,
+   .fmt = {
.fourcc = V4L2_PIX_FMT_VYUY,
.name   = VYUY,
.bits_per_sample= 8,
.packing= SOC_MBUS_PACKING_2X8_PADHI,
.order  = SOC_MBUS_ORDER_BE,
},
-   [V4L2_MBUS_FMT_RGB555_2X8_PADHI_BE] = {
+}, {
+   .code = V4L2_MBUS_FMT_RGB555_2X8_PADHI_BE,
+   .fmt = {
.fourcc = V4L2_PIX_FMT_RGB555,
.name   = RGB555,
.bits_per_sample= 8,
.packing= SOC_MBUS_PACKING_2X8_PADHI,
.order  = SOC_MBUS_ORDER_BE,
},
-   [V4L2_MBUS_FMT_RGB555_2X8_PADHI_LE] = {
+}, {
+   .code = V4L2_MBUS_FMT_RGB555_2X8_PADHI_LE,
+   .fmt = {
.fourcc = V4L2_PIX_FMT_RGB555X,
.name   = RGB555X,
.bits_per_sample= 8,
.packing= SOC_MBUS_PACKING_2X8_PADHI,
.order  = SOC_MBUS_ORDER_BE,
},
-   [V4L2_MBUS_FMT_RGB565_2X8_BE] = {
+}, {
+   .code = V4L2_MBUS_FMT_RGB565_2X8_BE,
+   .fmt = {
.fourcc = V4L2_PIX_FMT_RGB565,
.name   = RGB565,
.bits_per_sample= 8,
.packing= SOC_MBUS_PACKING_2X8_PADHI,
.order  = SOC_MBUS_ORDER_BE,
},
-   [V4L2_MBUS_FMT_RGB565_2X8_LE] = {
+}, {
+   .code = V4L2_MBUS_FMT_RGB565_2X8_LE,
+   .fmt = {
.fourcc = V4L2_PIX_FMT_RGB565X,
.name   = RGB565X,
.bits_per_sample= 8,
.packing= SOC_MBUS_PACKING_2X8_PADHI,
.order  = SOC_MBUS_ORDER_BE,
},
+},
 };
 
 static int omap1_cam_get_formats(struct soc_camera_device *icd,
@@ -1085,12 +1102,14 @@ static int omap1_cam_get_formats(struct 
soc_camera_device *icd,
case V4L2_MBUS_FMT_RGB565_2X8_LE:
formats++;
if (xlate) {
-   xlate-host_fmt = omap1_cam_formats[code];
+   xlate-host_fmt = soc_mbus_find_fmtdesc(code,
+   omap1_cam_formats,
+   ARRAY_SIZE(omap1_cam_formats));
xlate-code = code;
xlate++;
dev_dbg(dev,
%s: providing format %s as byte swapped code 
#%d\n,
-   __func__, omap1_cam_formats[code].name, code);
+   __func__, xlate-host_fmt-name, code);
}
default:
if 

Re: [PATCH 1/2] mt9p031: Add mt9p031 sensor driver.

2011-05-18 Thread Laurent Pinchart
Hi Javier,

On Wednesday 18 May 2011 11:10:03 javier Martin wrote:
 Hi Laurent,
 I've already fixed almost every issue you pointed out.
 However, I still have got some doubts that I hope you can clarify.
 
 On 17 May 2011 13:33, Laurent Pinchart wrote:
  On Tuesday 17 May 2011 11:28:47 Javier Martin wrote:
  It has been tested in beagleboard xM, using LI-5M03 module.
  
  Signed-off-by: Javier Martin javier.mar...@vista-silicon.com
  
  +
  +static int mt9p031_power_on(struct mt9p031 *mt9p031)
  +{
  + int ret;
  +
  + if (mt9p031-pdata-set_xclk)
  + mt9p031-pdata-set_xclk(mt9p031-subdev, 5400);
  + /* turn on VDD_IO */
  + ret = regulator_enable(mt9p031-reg_2v8);
  + if (ret) {
  + pr_err(Failed to enable 2.8v regulator: %d\n, ret);
  + return -1;
  + }
 
 I would enable the regulator first. As a general rule, chips should be
 powered up before their I/Os are actively driven.
 
 You need to restore registers here, otherwise all controls set by the user
 will not be applied to the device.
 
 It's my mistake. This driver uses two regulators: 1.8 V and 2.8 V.
 The 2.8 V regulator powers the analog part and I/O, whereas the 1.8 V
 one powers the core. What I failed to do was keep the 1.8 V regulator
 always powered on, so that the register configuration was not lost, and
 power the 2.8 V regulator on and off as needed, since it does not affect
 register values. However, I messed it all up.
 
 Ideally I would have to power the 1.8 V regulator on and off too. However,
 as you wisely pointed out, the registers would have to be restored in that
 case. How am I supposed to keep track of register values? Are there any
 helper functions I can use for that purpose, or must I create a custom
 register cache? Do you know of any driver that uses this technique?

You can either keep track of register contents manually, or use the control 
framework to restore the value of all controls by calling 
v4l2_ctrl_handler_setup(). That's what the mt9v032 driver does in its power on 
handler. You might have to restore a couple of registers manually if they're 
not handled through controls.
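
A rough sketch of that pattern, assuming mt9p031 field names (reg_1v8, reg_2v8, ctrls) that are not necessarily those of the posted patch:

static int mt9p031_power_on(struct mt9p031 *mt9p031)
{
	int ret;

	/* Core supply first, then the analog/I/O supply. */
	ret = regulator_enable(mt9p031->reg_1v8);
	if (ret < 0)
		return ret;

	ret = regulator_enable(mt9p031->reg_2v8);
	if (ret < 0) {
		regulator_disable(mt9p031->reg_1v8);
		return ret;
	}

	/* Re-apply every control value known to the handler, so user
	 * settings survive the power cycle. */
	return v4l2_ctrl_handler_setup(&mt9p031->ctrls);
}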

  [snip]
  
  +static int mt9p031_set_params(struct i2c_client *client,
  +   struct v4l2_rect *rect, u16 xskip, u16
  yskip)
  
  set_params should apply the parameters, not change them. They should have
  already been validated by the callers.
 
 The mt9p031_set_params() function is used by mt9p031_set_crop() and
 mt9p031_set_format(). As you have correctly stated, these functions
 shouldn't apply parameters but only change them.
 I've checked the mt9v032 driver and it is as you said. The question is,
 where should these parameters get applied then?

The mt9v032 driver applies those parameters at stream on time. The downside is 
that it doesn't support resolution or crop changes at runtime. While changing 
resolution at runtime might not make much sense in most cases, changing the 
crop rectangle (providing its size stays the same) is a useful feature.

You can handle resolution changes at runtime, or you can choose to only set 
the resolution at stream on time. In both cases, the mt9p031_set_params() 
function should only apply parameters to the device, not change their value on 
the driver side. That was the purpose of my original comment.
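
A sketch of the stream-on approach, with the set_crop()/set_fmt() results only stored in the driver and written to the sensor here; the crop/xskip/yskip field names are assumptions:

static int mt9p031_s_stream(struct v4l2_subdev *subdev, int enable)
{
	struct mt9p031 *mt9p031 = container_of(subdev, struct mt9p031, subdev);
	struct i2c_client *client = v4l2_get_subdevdata(subdev);

	if (!enable)
		return 0;	/* stream-off handling omitted */

	/* Apply the crop rectangle and skip factors that set_crop()/set_fmt()
	 * validated and stored earlier; nothing is modified driver-side. */
	return mt9p031_set_params(client, &mt9p031->crop,
				  mt9p031->xskip, mt9p031->yskip);
}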

  +static int mt9p031_registered(struct v4l2_subdev *sd)
  +{
  + struct mt9p031 *mt9p031;
  + mt9p031 = container_of(sd, struct mt9p031, subdev);
  +
  + mt9p031_power_off(mt9p031);
  
  What's that for ?
  
  + return 0;
  +}
 
 Since the mt9p031_power_off() and mt9p031_power_on() functions
 disable/enable the 2.8 V regulator which powers the I/O, it must be powered
 on during probe; after registering, it can be safely powered off.

Can't you just power it off at the end of your probe function, instead of 
doing it here ?

  You have a set_xclk callback to board code, so I assume the chip can be
  driven by one of the OMAP3 ISP XCLK signals. To call back to the OMAP3
  ISP from board code, you need to get hold of the OMAP3 ISP device
  pointer. Your next patch exports omap3isp_device, but I'm not sure
  that's the way to go. One option is
  
   struct isp_device *isp = v4l2_dev_to_isp_device(subdev->v4l2_dev);
  
  but that requires the subdev to be registered before the function can be
  called. In that case you would need to move the probe code to the
  registered subdev internal function.
 
 Yes, I tried using that function but it didn't work because the subdev
 hadn't been registered yet.
 
  A clean solution is needed in the long run, preferably not involving
  board code at all. It would be nice if the OMAP3 ISP driver could export
  XCLKA/XCLKB as generic clock objects.
 
 So, what should I do in order to submit the driver to mainline?
 Do you want me to move the probe code to registered callback?

You should move the part that powers the device up and reads registers to the 
registered callback. The board code should then use 

RE: Codec controls question

2011-05-18 Thread Kamil Debski
 From: Laurent Pinchart [mailto:laurent.pinch...@ideasonboard.com]
 Sent: 18 May 2011 16:10
 Subject: Re: Codec controls question
 
 On Tuesday 17 May 2011 18:23:19 Kamil Debski wrote:
  Hi,

Hi,

 
  Some time ago we were discussing the set of controls that should be
  implemented for codec support.
 
  I remember that the result of this discussion was that the controls should
  be as integrated as possible. This included the V4L2_CID_MPEG_LEVEL and
  all controls related to the quantization parameter.
  The problem with such approach is that the levels are different for MPEG4,
  H264 and H263. Same for quantization parameter - it ranges from 1 to 31
  for MPEG4/H263 and from 0 to 51 for H264.
 
  Having single controls for the more than one codec seemed as a good
  solution. Unfortunately I don't see a good option to implement it,
  especially with the control framework. My idea was to have the min/max
  values for QP set in the S_FMT call on the CAPTURE. For MPEG_LEVEL it
  would be checked in the S_CTRL callback and if it did not fit the chosen
  format it failed.
 
  So I see three solutions to this problem and I wanted to ask about your
  opinion.
 
  1) Have a separate controls whenever the range or valid value range
  differs.
 
  This is the simplest and in my opinion the best solution I can think of.
  This way we'll have different set of controls if the valid values are
  different (e.g. V4L2_CID_MPEG_MPEG4_LEVEL, V4L2_CID_MPEG_H264_LEVEL).
  User can set the controls at any time. The only con of this approach is
  having more controls.
 
  2) Permit the user to set the control only after running S_FMT on the
  CAPTURE. This approach would enable us to keep less controls, but would
  require to set the min/max values for controls in the S_FMT. This could be
  done by adding controls in S_FMT or by manipulating their range and
  disabling unused controls. In case of MPEG_LEVEL it would require s_ctrl
  callback to check whether the requested level is valid for the chosen
  codec.
 
  This would be somehow against the spec, but if we allow the codec
  interface to have some differences this would be ok.
 
  3) Let the user set the controls whenever and check them during the
  STREAMON call.
 
  The controls could be set anytime, and the control range supplied to the
  control framework would cover values possible for all supported codecs.
 
  This approach is more difficult than first approach. It is worse in case
 of
  user space than the second approach - the user is unaware of any mistakes
  until the STREAMON call. The argument for this approach is the possibility
  to have a few controls less.
 
  So I would like to hear a comment about the above propositions. Personally
  I would opt for the first solution.
 
 I think the question boils down to whether we want to support controls that
 have different valid ranges depending on formats, or even other controls. I
 think the issue isn't specific to codec controls.
 

So what is your opinion on this? If there are more controls whose valid
range could depend on other controls or the chosen format, then it might be
worth implementing such functionality. If there are only a few such controls,
then it might be better to just have separate controls (with the codec
controls, only *_MPEG_LEVEL and the quantization parameter related controls
would have a different valid range depending on the format).

--
Kamil Debski
Linux Platform Group
Samsung Poland R&D Center



Re: [PATCH/RFC 1/4] V4L: add three new ioctl()s for multi-size videobuffer management

2011-05-18 Thread Guennadi Liakhovetski
On Wed, 18 May 2011, Laurent Pinchart wrote:

 On Tuesday 17 May 2011 07:52:28 Sakari Ailus wrote:
  Guennadi Liakhovetski wrote:

[snip]

   What about making it possible to pass an array of buffer indices to the
   user, just like VIDIOC_S_EXT_CTRLS does? I'm not sure if this would be
   perfect, but it would avoid the problem of requiring continuous ranges
   of buffer ids.
   
   struct v4l2_create_buffers {
   
__u32   *index;
__u32   count;
__u32   flags;
enum v4l2_memorymemory;
__u32   size;
struct v4l2_format  format;
   
   };
   
   Index would be a pointer to an array of buffer indices and its length
   would be count.
   
   I don't understand this. We do _not_ want to allow holes in indices. For
   now we decide to not implement DESTROY at all. In this case indices just
   increment contiguously.
   
   The next stage is to implement DESTROY, but only in strict reverse order
   - without holes and in the same ranges, as buffers have been CREATEd
   before. So, I really don't understand why we need arrays, sorry.
  
  Well, now that we're defining a second interface to make new buffer
  objects, I just thought it should be made as future-proof as we can.
 
 I second that. I don't like rushing new APIs to find out we need something 
 else after 6 months.

Ok, so, we pass an array of size count from user-space with CREATE. The
kernel fills it with as many buffer entries as it has allocated. But
currently drivers are also allowed to allocate more buffers than
user-space has requested. What do we do in such a case?

Thanks
Guennadi

  But even with single index, it's always possible to issue the ioctl more
  than once and achieve the same result as if there was an array of indices.
  
   What would be the reason to disallow creating holes in the index range? I
   don't see much reason from the application or implementation point of view,
   especially as we're limited to such low numbers anyway.
  
  Speaking of which; perhaps I'm bringing this up rather late, but should
  we define the API to allow larger numbers than VIDEO_MAX_FRAME? 32 isn't
  all that much after all --- this might become a limiting factor later on
  when there are devices with huge amounts of memory.
  
  Allowing CREATE_BUF to do that right now would be possible since
  applications using it are new users and can be expected to be using it
  properly. :-)
 
 -- 
 Regards,
 
 Laurent Pinchart
 

---
Guennadi Liakhovetski, Ph.D.
Freelance Open-Source Software Developer
http://www.open-technology.de/


Re: [PATCH/RFC 1/4] V4L: add three new ioctl()s for multi-size videobuffer management

2011-05-18 Thread Guennadi Liakhovetski
On Wed, 18 May 2011, Laurent Pinchart wrote:

 On Friday 13 May 2011 09:45:51 Guennadi Liakhovetski wrote:

[snip]

  2. Both these flags should not be passed with CREATE, but with SUBMIT
  (which will be renamed to PREPARE or something similar). It should be
  possible to prepare the same buffer with different cacheing attributes
  during a running operation. Shall these flags be added to values, taken by
  struct v4l2_buffer::flags, since that is the struct, that will be used as
  the argument for the new version of the SUBMIT ioctl()?
 
 Do you have a use case for per-submit cache handling ?

This was Sakari's idea, I think, so ask him ;) But I think he suggested a
case where not all buffers have to be processed in user-space. In
principle, I can imagine such a case too: e.g., you pass most of the buffers
directly to output / further processing, and only analyse some of them in
software, perhaps for WB, exposure, etc.

   +
   +/* VIDIOC_CREATE_BUFS */
   +struct v4l2_create_buffers {
   + __u32   index;  /* output: buffers 
   index...index + count - 1 have 
 been
   created */ +  __u32   count;
   + __u32   flags;  /* V4L2_BUFFER_FLAG_* */
   + enum v4l2_memorymemory;
   + __u32   size;   /* Explicit size, e.g., for 
   compressed streams */
   + struct v4l2_format  format; /* type is used always, the 
   rest if 
 size
   == 0 */ +};
  
  1. Care must be taken to keep index <= V4L2_MAX_FRAME
 
 Does that requirement still make sense ?

Don't know, again, not my idea. videobuf2-core still uses it. At one
location it seems to be pretty arbitrary, at another it is the size of an
array in struct vb2_fileio_data, but maybe we could allocate that one
dynamically too. So, do I understand correctly that this would affect our
case if we CREATE our buffers and then the user decides to do
read() / write() on them?

  2. A reserved field is needed.
  
   +
   
/*

 *   I O C T L   C O D E S   F O R   V I D E O   D E V I C E S
 *
   
   @@ -1937,6 +1957,10 @@ struct v4l2_dbg_chip_ident {
   
#define  VIDIOC_SUBSCRIBE_EVENT   _IOW('V', 90, struct
v4l2_event_subscription) #define VIDIOC_UNSUBSCRIBE_EVENT _IOW('V', 91,
struct v4l2_event_subscription)
   
   +#define VIDIOC_CREATE_BUFS   _IOWR('V', 92, struct 
   v4l2_create_buffers)
   +#define VIDIOC_DESTROY_BUFS  _IOWR('V', 93, struct v4l2_buffer_span)
   +#define VIDIOC_SUBMIT_BUF _IOW('V', 94, int)
  
  This has become the hottest point for discussion.
  
  1. VIDIOC_CREATE_BUFS: should the REQBUFS and CREATE/DESTROY APIs be
  allowed to be mixed? REQBUFS is compulsory, CREATE/DESTROY will be
  optional. But shall applications be allowed to mix them? No consensus has
  been reached. This will also depend on whether DESTROY will be implemented
  at all (see below).
 
 Would mixing them help in any known use case ? The API and implementation 
 would be cleaner if we didn't allow mixing them.

It would help us avoid designing immature APIs and still have the
functionality we need ;)

  2. VIDIOC_DESTROY_BUFS: has been discussed a lot
  
  (a) shall it be allowed to create holes in indices? agreement was: not at
  this stage, but in the future this might be needed.
  
  (b) ioctl() argument: shall it take a span or an array of indices? I don't
  think arrays make any sense here: on CREATE you always get contiguous
  index sequences, and you are only allowed to DESTROY the same index sets.
  
  (c) shall it be implemented at all, now that we don't know, how to handle
  holes, or shall we just continue using REQBUFS(0) or close() to release
  all buffers at once? Not implementing DESTROY now has the disadvantage,
  that if you allocate 2 buffer sets of sizes A (small) and B (big), and
  then don't need B any more, but instead need C != B (big), you cannot do
  this. But this is just one of hypothetical use-cases. No consensus
  reached.
 
 We could go with (c) as a first step, but it might be temporary only. I feel
 a bit uneasy implementing an API that we know will be temporary.

It shouldn't be temporary. CREATE and PREPARE should stay. It's just 
DESTROY that we're not certain about yet.

Thanks
Guennadi
---
Guennadi Liakhovetski, Ph.D.
Freelance Open-Source Software Developer
http://www.open-technology.de/


RE: Codec controls question

2011-05-18 Thread Hans Verkuil
 From: Laurent Pinchart [mailto:laurent.pinch...@ideasonboard.com]
 Sent: 18 May 2011 16:10
 Subject: Re: Codec controls question

 On Tuesday 17 May 2011 18:23:19 Kamil Debski wrote:
  Hi,

 Hi,

 
  Some time ago we were discussing the set of controls that should be
  implemented for codec support.
 
  I remember that the result of this discussion was that the controls
 should
  be as integrated as possible. This included the V4L2_CID_MPEG_LEVEL
 and
  all controls related to the quantization parameter.
  The problem with such approach is that the levels are different for
 MPEG4,
  H264 and H263. Same for quantization parameter - it ranges from 1 to
 31
  for MPEG4/H263 and from 0 to 51 for H264.
 
  Having single controls for the more than one codec seemed as a good
  solution. Unfortunately I don't see a good option to implement it,
  especially with the control framework. My idea was to have the min/max
  values for QP set in the S_FMT call on the CAPTURE. For MPEG_LEVEL it
  would be checked in the S_CTRL callback and if it did not fit the
 chosen
  format it failed.
 
  So I see three solutions to this problem and I wanted to ask about
 your
  opinion.
 
  1) Have a separate controls whenever the range or valid value range
  differs.
 
  This is the simplest and in my opinion the best solution I can think
 of.
  This way we'll have different set of controls if the valid values are
  different (e.g. V4L2_CID_MPEG_MPEG4_LEVEL, V4L2_CID_MPEG_H264_LEVEL).
  User can set the controls at any time. The only con of this approach
 is
  having more controls.
 
  2) Permit the user to set the control only after running S_FMT on the
  CAPTURE. This approach would enable us to keep less controls, but
 would
  require to set the min/max values for controls in the S_FMT. This
 could be
  done by adding controls in S_FMT or by manipulating their range and
  disabling unused controls. In case of MPEG_LEVEL it would require
 s_ctrl
  callback to check whether the requested level is valid for the chosen
  codec.
 
  This would be somehow against the spec, but if we allow the codec
  interface to have some differences this would be ok.
 
  3) Let the user set the controls whenever and check them during the
  STREAMON call.
 
  The controls could be set anytime, and the control range supplied to
 the
  control framework would cover values possible for all supported
 codecs.
 
  This approach is more difficult than first approach. It is worse in
 case
 of
  user space than the second approach - the user is unaware of any
 mistakes
  until the STREAMON call. The argument for this approach is the
 possibility
  to have a few controls less.
 
  So I would like to hear a comment about the above propositions.
 Personally
  I would opt for the first solution.

 I think the question boils down to whether we want to support controls
 that
 have different valid ranges depending on formats, or even other
 controls. I
  think the issue isn't specific to codec controls.


 So what is your opinion on this? If there are more controls whose valid
 range could depend on other controls or the chosen format, then it might be
 worth implementing such functionality. If there are only a few such
 controls, then it might be better to just have separate controls (with the
 codec controls, only *_MPEG_LEVEL and the quantization parameter related
 controls would have a different valid range depending on the format).

I have experimented with control events to change ranges, and while it can
technically be done, it is in practice a bit of a mess. Personally I think
it is just easier to have separate controls.
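
As an illustration, registering one control per codec with its own range is straightforward with the control framework; the control IDs below are the ones suggested in this thread and are hypothetical, and ctx / codec_ctrl_ops are placeholders:

v4l2_ctrl_handler_init(&ctx->ctrls, 4);

/* Quantization parameter: 1..31 for MPEG4/H263, 0..51 for H264. */
v4l2_ctrl_new_std(&ctx->ctrls, &codec_ctrl_ops,
		  V4L2_CID_MPEG_MPEG4_QP, 1, 31, 1, 10);
v4l2_ctrl_new_std(&ctx->ctrls, &codec_ctrl_ops,
		  V4L2_CID_MPEG_H264_QP, 0, 51, 1, 26);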

We are going to have similar problems if different video inputs are
controlled by different i2c devices with different (but partially
overlapping) controls. So switching an input also changes the controls. I
have experimented with this while working on control events and it became
very messy indeed. I won't do this for the first version of control
events.

One subtle but real problem with changing control ranges on the fly is
that it makes it next to impossible to save all control values to a file
and restore them later. That is a desirable feature that AFAIK is actually
in use already.
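
Such a save-to-file tool typically just walks the control list and stores id/value pairs, with no knowledge of format-dependent ranges; a rough userspace sketch (a real tool would use VIDIOC_G_EXT_CTRLS for non-user-class controls):

#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

static void save_controls(int fd, FILE *out)
{
	struct v4l2_queryctrl qc = { .id = V4L2_CTRL_FLAG_NEXT_CTRL };

	while (ioctl(fd, VIDIOC_QUERYCTRL, &qc) == 0) {
		struct v4l2_control ctrl = { .id = qc.id };

		if (qc.type != V4L2_CTRL_TYPE_CTRL_CLASS &&
		    !(qc.flags & V4L2_CTRL_FLAG_DISABLED) &&
		    ioctl(fd, VIDIOC_G_CTRL, &ctrl) == 0)
			fprintf(out, "%u %d\n", ctrl.id, ctrl.value);

		qc.id |= V4L2_CTRL_FLAG_NEXT_CTRL;
	}
}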

Regards,

Hans



Re: Codec controls question

2011-05-18 Thread Sylwester Nawrocki
Hi Hans,

On 05/18/2011 05:22 PM, Hans Verkuil wrote:
 From: Laurent Pinchart [mailto:laurent.pinch...@ideasonboard.com]
 Sent: 18 May 2011 16:10
 Subject: Re: Codec controls question
 On Tuesday 17 May 2011 18:23:19 Kamil Debski wrote:
 Hi,
 Hi,


 Some time ago we were discussing the set of controls that should be
 implemented for codec support.

 I remember that the result of this discussion was that the controls
 should
 be as integrated as possible. This included the V4L2_CID_MPEG_LEVEL
 and
 all controls related to the quantization parameter.
 The problem with such approach is that the levels are different for
 MPEG4,
 H264 and H263. Same for quantization parameter - it ranges from 1 to
 31
 for MPEG4/H263 and from 0 to 51 for H264.

 Having single controls for the more than one codec seemed as a good
 solution. Unfortunately I don't see a good option to implement it,
 especially with the control framework. My idea was to have the min/max
 values for QP set in the S_FMT call on the CAPTURE. For MPEG_LEVEL it
 would be checked in the S_CTRL callback and if it did not fit the
 chosen
 format it failed.

 So I see three solutions to this problem and I wanted to ask about
 your
 opinion.

 1) Have a separate controls whenever the range or valid value range
 differs.

 This is the simplest and in my opinion the best solution I can think
 of.
 This way we'll have different set of controls if the valid values are
 different (e.g. V4L2_CID_MPEG_MPEG4_LEVEL, V4L2_CID_MPEG_H264_LEVEL).
 User can set the controls at any time. The only con of this approach
 is
 having more controls.

 2) Permit the user to set the control only after running S_FMT on the
 CAPTURE. This approach would enable us to keep less controls, but
 would
 require to set the min/max values for controls in the S_FMT. This
 could be
 done by adding controls in S_FMT or by manipulating their range and
 disabling unused controls. In case of MPEG_LEVEL it would require
 s_ctrl
 callback to check whether the requested level is valid for the chosen
 codec.

 This would be somehow against the spec, but if we allow the codec
 interface to have some differences this would be ok.

 3) Let the user set the controls whenever and check them during the
 STREAMON call.

 The controls could be set anytime, and the control range supplied to
 the
 control framework would cover values possible for all supported
 codecs.

 This approach is more difficult than first approach. It is worse in
 case
 of
 user space than the second approach - the user is unaware of any
 mistakes
 until the STREAMON call. The argument for this approach is the
 possibility
 to have a few controls less.

 So I would like to hear a comment about the above propositions.
 Personally
 I would opt for the first solution.

 I think the question boils down to whether we want to support controls
 that
 have different valid ranges depending on formats, or even other
 controls. I
  think the issue isn't specific to codec controls.


 So what is your opinion on this? If there are more controls where the
 valid
 range could depend on other controls or the chosen format then it might be
 worth
 implementing such functionality. If there would be only a few such
 controls then
 it might be better to just have separate controls (with the codec controls
 - only
 *_MPEG_LEVEL and quantization parameter related controls would have
 different
 valid range depending on the format).
 
 I have experimented with control events to change ranges and while it can
 be done technically it is in practice a bit of a mess. I think personally
 it is just easier to have separate controls.
 
 We are going to have similar problems if different video inputs are
 controlled by different i2c devices with different (but partially
 overlapping) controls. So switching an input also changes the controls. I
 have experimented with this while working on control events and it became
 very messy indeed. I won't do this for the first version of control
 events.
 
 One subtle but real problem with changing control ranges on the fly is
 that it makes it next to impossible to save all control values to a file
 and restore them later. That is a desirable feature that AFAIK is actually
 in use already.

What are your views on creating controls in the subdev s_power operation?
Some sensors/ISPs have control ranges dependent on a firmware revision.
So before creating the controls, the min/max/step values need to be read from
them over I2C. We chose to postpone enabling the ISP's power until a
corresponding video (or subdev) device node is opened. And thus controls are
not created during driver probing, because there is not enough information to
do this.

I don't see a possibility for applications to access the controls before they
are created, as this happens during the first device (either video or subdev)
open(). And they are destroyed only in the video/subdev device release().

Do you see any potential issues with this scheme?

Regards,
-- 

Re: Codec controls question

2011-05-18 Thread Hans Verkuil
 Hi Hans,

 On 05/18/2011 05:22 PM, Hans Verkuil wrote:
 From: Laurent Pinchart [mailto:laurent.pinch...@ideasonboard.com]
 Sent: 18 May 2011 16:10
 Subject: Re: Codec controls question
 On Tuesday 17 May 2011 18:23:19 Kamil Debski wrote:
 Hi,
 Hi,


 Some time ago we were discussing the set of controls that should be
 implemented for codec support.

 I remember that the result of this discussion was that the controls
 should
 be as integrated as possible. This included the V4L2_CID_MPEG_LEVEL
 and
 all controls related to the quantization parameter.
 The problem with such approach is that the levels are different for
 MPEG4,
 H264 and H263. Same for quantization parameter - it ranges from 1 to
 31
 for MPEG4/H263 and from 0 to 51 for H264.

 Having single controls for the more than one codec seemed as a good
 solution. Unfortunately I don't see a good option to implement it,
 especially with the control framework. My idea was to have the
 min/max
 values for QP set in the S_FMT call on the CAPTURE. For MPEG_LEVEL it
 would be checked in the S_CTRL callback and if it did not fit the
 chosen
 format it failed.

 So I see three solutions to this problem and I wanted to ask about
 your
 opinion.

 1) Have a separate controls whenever the range or valid value range
 differs.

 This is the simplest and in my opinion the best solution I can think
 of.
 This way we'll have different set of controls if the valid values are
 different (e.g. V4L2_CID_MPEG_MPEG4_LEVEL, V4L2_CID_MPEG_H264_LEVEL).
 User can set the controls at any time. The only con of this approach
 is
 having more controls.

 2) Permit the user to set the control only after running S_FMT on the
 CAPTURE. This approach would enable us to keep less controls, but
 would
 require to set the min/max values for controls in the S_FMT. This
 could be
 done by adding controls in S_FMT or by manipulating their range and
 disabling unused controls. In case of MPEG_LEVEL it would require
 s_ctrl
 callback to check whether the requested level is valid for the chosen
 codec.

 This would be somehow against the spec, but if we allow the codec
 interface to have some differences this would be ok.

 3) Let the user set the controls whenever and check them during the
 STREAMON call.

 The controls could be set anytime, and the control range supplied to
 the
 control framework would cover values possible for all supported
 codecs.

 This approach is more difficult than first approach. It is worse in
 case
 of
 user space than the second approach - the user is unaware of any
 mistakes
 until the STREAMON call. The argument for this approach is the
 possibility
 to have a few controls less.

 So I would like to hear a comment about the above propositions.
 Personally
 I would opt for the first solution.

 I think the question boils down to whether we want to support controls
 that
 have different valid ranges depending on formats, or even other
 controls. I
  think the issue isn't specific to codec controls.


 So what is your opinion on this? If there are more controls where the
 valid
 range could depend on other controls or the chosen format then it might
 be
 worth
 implementing such functionality. If there would be only a few such
 controls then
 it might be better to just have separate controls (with the codec
 controls
 - only
 *_MPEG_LEVEL and quantization parameter related controls would have
 different
 valid range depending on the format).

 I have experimented with control events to change ranges and while it
 can
 be done technically it is in practice a bit of a mess. I think
 personally
 it is just easier to have separate controls.

 We are going to have similar problems if different video inputs are
 controlled by different i2c devices with different (but partially
 overlapping) controls. So switching an input also changes the controls.
 I
 have experimented with this while working on control events and it
 became
 very messy indeed. I won't do this for the first version of control
 events.

 One subtle but real problem with changing control ranges on the fly is
 that it makes it next to impossible to save all control values to a file
 and restore them later. That is a desirable feature that AFAIK is
 actually
 in use already.

 What are your views on creating controls in the subdev s_power operation?
 Some sensors/ISPs have control ranges dependent on a firmware revision.
 So before creating the controls, the min/max/step values need to be read
 from them over I2C. We chose to postpone enabling the ISP's power until a
 corresponding video (or subdev) device node is opened. And thus controls are
 not created during driver probing, because there is not enough information
 to do this.

 I don't see a possibility for applications to access the controls before
 they are created, as this happens during the first device (either video or
 subdev) open(). And they are destroyed only in the video/subdev device
 release().

 Do you see any potential issues with this 

Re: Codec controls question

2011-05-18 Thread Laurent Pinchart
Hi Sylwester,

On Wednesday 18 May 2011 17:57:37 Sylwester Nawrocki wrote:
 On 05/18/2011 05:22 PM, Hans Verkuil wrote:
  
  I have experimented with control events to change ranges and while it can
  be done technically it is in practice a bit of a mess. I think personally
  it is just easier to have separate controls.
  
  We are going to have similar problems if different video inputs are
  controlled by different i2c devices with different (but partially
  overlapping) controls. So switching an input also changes the controls. I
  have experimented with this while working on control events and it became
  very messy indeed. I won't do this for the first version of control
  events.
  
  One subtle but real problem with changing control ranges on the fly is
  that it makes it next to impossible to save all control values to a file
  and restore them later. That is a desirable feature that AFAIK is
  actually in use already.
 
 What are your views on creating controls in the subdev s_power operation?
 Some sensors/ISPs have control ranges dependent on a firmware revision.
 So before creating the controls, the min/max/step values need to be read
 from them over I2C. We chose to postpone enabling the ISP's power until a
 corresponding video (or subdev) device node is opened. And thus controls
 are not created during driver probing, because there is not enough
 information to do this.

You can power the device up during probe, read the hardware/firmware version, 
power it down and create/initialize controls depending on the retrieved 
information.
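
A sketch of that probe ordering; all sensor_* names are placeholders, not an existing driver's API:

static int sensor_probe(struct i2c_client *client,
			const struct i2c_device_id *id)
{
	struct sensor *sensor;
	int fw_rev, ret;

	sensor = kzalloc(sizeof(*sensor), GFP_KERNEL);
	if (!sensor)
		return -ENOMEM;

	/* Power up just long enough to identify the firmware. */
	ret = sensor_power_on(sensor);
	if (ret < 0)
		goto err_free;

	fw_rev = sensor_read_firmware_revision(sensor);
	sensor_power_off(sensor);
	if (fw_rev < 0) {
		ret = fw_rev;
		goto err_free;
	}

	/* Control min/max/step can now be derived from fw_rev, so the
	 * handler exists before any video/subdev node is opened. */
	ret = sensor_init_controls(sensor, fw_rev);
	if (ret < 0)
		goto err_free;

	return 0;

err_free:
	kfree(sensor);
	return ret;
}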

 I don't see a possibility for applications to access the controls before
 they are created, as this happens during the first device (either video or
 subdev) open(). And they are destroyed only in the video/subdev device
 release().

 Do you see any potential issues with this scheme?

-- 
Regards,

Laurent Pinchart


[PATCH] omap3: isp: fix compiler warning

2011-05-18 Thread Sanjeev Premi
This patch fixes this compiler warning:
  drivers/media/video/omap3isp/isp.c: In function 'isp_isr_dbg':
  drivers/media/video/omap3isp/isp.c:392:2: warning: zero-length
   gnu_printf format string

Since printk() is used in the next few statements, the same is used
here as well.

Signed-off-by: Sanjeev Premi pr...@ti.com
Cc: laurent.pinch...@ideasonboard.com
---

 Actually, the full block can be converted to dev_dbg()
 as well, but I am not sure about the original intent
 of the mix.

 Based on comments, I can resubmit with all prints
 converted to dev_dbg().



 drivers/media/video/omap3isp/isp.c |2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/drivers/media/video/omap3isp/isp.c b/drivers/media/video/omap3isp/isp.c
index 503bd79..1d38d96 100644
--- a/drivers/media/video/omap3isp/isp.c
+++ b/drivers/media/video/omap3isp/isp.c
@@ -387,7 +387,7 @@ static inline void isp_isr_dbg(struct isp_device *isp, u32 irqstatus)
 	};
 	int i;
 
-	dev_dbg(isp->dev, "");
+	printk(KERN_DEBUG "%s:\n", dev_driver_string(isp->dev));
 
 	for (i = 0; i < ARRAY_SIZE(name); i++) {
 		if ((1 << i) & irqstatus)
-- 
1.7.2.2



RE: Codec controls question

2011-05-18 Thread Kamil Debski

 -Original Message-
 From: Hans Verkuil [mailto:hverk...@xs4all.nl]
 Sent: 18 May 2011 17:23
 To: Kamil Debski
 Cc: 'Laurent Pinchart'; linux-media@vger.kernel.org; hansv...@cisco.com;
 Marek Szyprowski
 Subject: RE: Codec controls question
 
  From: Laurent Pinchart [mailto:laurent.pinch...@ideasonboard.com]
  Sent: 18 May 2011 16:10
  Subject: Re: Codec controls question
 
  On Tuesday 17 May 2011 18:23:19 Kamil Debski wrote:
   Hi,
 
  Hi,
 
  
   Some time ago we were discussing the set of controls that should be
   implemented for codec support.

   I remember that the result of this discussion was that the controls
   should be as integrated as possible. This included the V4L2_CID_MPEG_LEVEL
   and all controls related to the quantization parameter. The problem with
   such an approach is that the levels are different for MPEG4, H264 and
   H263. The same goes for the quantization parameter - it ranges from 1 to
   31 for MPEG4/H263 and from 0 to 51 for H264.

   Having single controls for more than one codec seemed like a good
   solution. Unfortunately I don't see a good option to implement it,
   especially with the control framework. My idea was to have the min/max
   values for QP set in the S_FMT call on the CAPTURE. For MPEG_LEVEL it
   would be checked in the S_CTRL callback, and if it did not fit the chosen
   format it failed.

   So I see three solutions to this problem and I wanted to ask for your
   opinion.

   1) Have separate controls whenever the range of valid values differs.

   This is the simplest and in my opinion the best solution I can think of.
   This way we'll have a different set of controls if the valid values are
   different (e.g. V4L2_CID_MPEG_MPEG4_LEVEL, V4L2_CID_MPEG_H264_LEVEL).
   The user can set the controls at any time. The only con of this approach
   is having more controls.

   2) Permit the user to set the control only after running S_FMT on the
   CAPTURE. This approach would enable us to keep fewer controls, but would
   require setting the min/max values for controls in S_FMT. This could be
   done by adding controls in S_FMT or by manipulating their range and
   disabling unused controls. In the case of MPEG_LEVEL it would require the
   s_ctrl callback to check whether the requested level is valid for the
   chosen codec.

   This would be somewhat against the spec, but if we allow the codec
   interface to have some differences this would be ok.

   3) Let the user set the controls at any time and check them during the
   STREAMON call.

   The controls could be set anytime, and the control range supplied to the
   control framework would cover the values possible for all supported
   codecs.

   This approach is more difficult than the first one. It is worse from a
   user-space point of view than the second approach - the user is unaware
   of any mistakes until the STREAMON call. The argument for this approach
   is the possibility to have a few controls less.

   So I would like to hear comments about the above propositions. Personally
   I would opt for the first solution.
 
  I think the question boils down to whether we want to support controls
  that have different valid ranges depending on formats, or even on other
  controls. I think the issue isn't specific to codec controls.
 
 
  So what is your opinion on this? If there are more controls whose valid
  range could depend on other controls or the chosen format, then it might
  be worth implementing such functionality. If there were only a few such
  controls, it might be better to just have separate controls (with the
  codec controls, only *_MPEG_LEVEL and the quantization parameter related
  controls would have a different valid range depending on the format).
 
 I have experimented with control events to change ranges and while it can
 be done technically it is in practice a bit of a mess. I think personally
 it is just easier to have separate controls.
 
 We are going to have similar problems if different video inputs are
 controlled by different i2c devices with different (but partially
 overlapping) controls. So switching an input also changes the controls. I
 have experimented with this while working on control events and it became
 very messy indeed. I won't do this for the first version of control
 events.
 
 One subtle but real problem with changing control ranges on the fly is
 that it makes it next to impossible to save all control values to a file
 and restore them later. That is a desirable feature that AFAIK is actually
 in use already.
 

Thanks for your comment. I will go for implementing separate controls as I also
find having dynamic ranges a bit messy.
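
As a rough illustration of that choice, the per-codec controls could simply be registered with their codec-specific ranges; only v4l2_ctrl_new_std() is existing API here, the control IDs and other names below are placeholders for this sketch:

#include <linux/videodev2.h>
#include <media/v4l2-ctrls.h>

/* Placeholder IDs in the private control range, for illustration only. */
#define V4L2_CID_FOO_H264_QP	(V4L2_CID_PRIVATE_BASE + 0)
#define V4L2_CID_FOO_MPEG4_QP	(V4L2_CID_PRIVATE_BASE + 1)

/* Per-codec QP controls with fixed, codec-specific ranges; their validity
 * never depends on the format selected later. */
static int foo_codec_init_ctrls(struct v4l2_ctrl_handler *hdl,
				const struct v4l2_ctrl_ops *ops)
{
	/* H.264 quantization parameter: 0..51 */
	v4l2_ctrl_new_std(hdl, ops, V4L2_CID_FOO_H264_QP, 0, 51, 1, 26);
	/* MPEG-4/H.263 quantization parameter: 1..31 */
	v4l2_ctrl_new_std(hdl, ops, V4L2_CID_FOO_MPEG4_QP, 1, 31, 1, 5);

	return hdl->error;
}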

Best regards,
-- 
Kamil Debski
Linux Platform Group
Samsung Poland RD Center




Re: Codec controls question

2011-05-18 Thread Sylwester Nawrocki
Hi Laurent,

On 05/18/2011 06:03 PM, Laurent Pinchart wrote:
 On Wednesday 18 May 2011 17:57:37 Sylwester Nawrocki wrote:
 On 05/18/2011 05:22 PM, Hans Verkuil wrote:

 I have experimented with control events to change ranges and while it can
 be done technically it is in practice a bit of a mess. I think personally
 it is just easier to have separate controls.

 We are going to have similar problems if different video inputs are
 controlled by different i2c devices with different (but partially
 overlapping) controls. So switching an input also changes the controls. I
 have experimented with this while working on control events and it became
 very messy indeed. I won't do this for the first version of control
 events.

 One subtle but real problem with changing control ranges on the fly is
 that it makes it next to impossible to save all control values to a file
 and restore them later. That is a desirable feature that AFAIK is
 actually in use already.

 What are your views on creating controls in the subdev s_power operation?
 Some sensors/ISPs have control ranges dependent on a firmware revision.
 So before creating the controls the min/max/step values need to be read
 from them over I2C. We chose to postpone enabling the ISP's power until a
 corresponding video (or subdev) device node is opened. And thus controls
 are not created during driver probing, because there is not enough
 information to do this.
 
 You can power the device up during probe, read the hardware/firmware version, 
 power it down and create/initialize controls depending on the retrieved 
 information.

Yes, I suppose this is what all drivers should normally do. But if, for
example, there are two sensors registered during media device initialization
and it takes about 100 ms and 600 ms respectively to initialize them, then
with the driver built into the kernel the system boot time would increase by
700 ms. If the whole driver is compiled as an LKM this could be acceptable,
though.

I'm still not convinced; the most straightforward method would be to power up
the sensor in probe(), but then there is that unfortunate delay.

 
 I don't see a possibility for the applications to be able to access the
 controls before they are created, as this happens during the first device
 (either video or subdev) open(). And they are destroyed only in the
 video/subdev device release().

 Do you see any potential issues with this scheme ?
 

Thanks,
-- 
Sylwester Nawrocki
Samsung Poland RD Center


Re: [PATCH 0/2] V4L: Extended crop/compose API

2011-05-18 Thread Tomasz Stanislawski

Laurent Pinchart wrote:

On Saturday 14 May 2011 12:50:32 Hans Verkuil wrote:

On Friday, May 13, 2011 14:43:08 Laurent Pinchart wrote:

On Saturday 07 May 2011 13:52:25 Hans Verkuil wrote:

On Thursday, May 05, 2011 11:39:54 Tomasz Stanislawski wrote:

Hi Laurent and Hans,
I am very sorry for replying so late. I was really busy during the last
week. Thank you very much for your interest and comments :).


[snip]


 -

 Name

VIDIOC_S_SELECTION - set cropping rectangle on an input of a device

[snip]

 v4l2_selection::r is filled with suggested coordinates. The coordinates
 are expressed in driver-dependent units.

 What kind of driver-dependent units do you think will be used in
 practice?


In the case of sensors, units could be subpixels. For analog TV inputs,
units may be milliseconds. For printers/scanners, units may be centimeters.

I think that there are many other examples out there.


Applications set V4L2_SEL_TRY flag in v4l2_selection::flags to
prevent a driver from applying selection configuration to hardware.

I mentioned this before but I am very much opposed to this flag. It is
inconsistent with the TRY_FMT and TRY_EXT_CTRLS ioctls. Note that in
video_ioctl2 it should be just one function with 'try' boolean
argument. It has always been a mistake that the try and set functions
were separated in the driver code IMHO.

I know that the subdev ioctls do not have a try version, but it is not
quite the same in that those try functions actually store the try
information.

That's exactly why the subdevs pad-level API has a try flag instead of a
try operation, and that's why g/s_selection on subdevs will be done with
a try flag.

As for the video device node API, I won't oppose a TRY ioctl, as long as
we can guarantee there will never be dependencies between the selection
rectangles and other parameters (or between the different selection
rectangles). If the crop rectangle depends on the compose rectangle for
instance, how can you implement a TRY ioctl to try a crop rectangle for
a specific compose rectangle, without modifying the active compose
rectangle first ?

In that case the TRY would adjust the crop so that it works with the
current compose rectangle.


And how do you try both crop and compose settings without modifying the active 
configuration ? That's not possible, and I think it's a bad API limitation.


VIDIOC_TRY_MULTISELECTION ?




But this is just one more example of our lack of proper support for pipeline
setup. It doesn't really matter whether this is at the subdev level or at
the driver level, both have the same problems.

This requires a brainstorm session to work out.


This is something we need to look into more carefully. I am slowly
becoming convinced that we need some sort of transaction-based
configuration for pipelines.

This RFC is about video device node configuration, not pipelines. For
pipelines I think we'll need a transaction-based API. For video device
nodes, I'm still unsure. As stated above, if we have multiple parameters
that depend on each other, how do we let the user try them without
changing the active configuration ?

Cropping/scaling/composing IS a pipeline. Until recently the V4L2 device
node API was sufficient to setup the trivial pipelines that most V4L2
consumer devices have. But with the more modern devices it starts to show
its limitations.


It's still a simple pipeline, and I think we should aim at making the V4L2 
device node API useful to configure this kind of pipeline. The selection API 
is a superset of the crop API, applications should be able to use it to 
replace the crop API in the long term.



The whole 'try' concept we had for a long time needs to be re-examined.


I agree.


As you remember, I was never satisfied with the subdev 'try' approach
either, but I never could come up with a better alternative.


I've noticed that there are two different meanings of the TRY flag:
a) checking if a proposed configuration is applicable for a driver
b) storing the proposed configuration in some form of temporary buffer

Ad. a. This is a real TRY command. The state of both the hardware and the
driver does not change after a TRY call.


Ad. b. This should not be called a TRY flag, because the internal state of
the driver changes. It should be named something like a SHADOW flag. It
would indicate that the change is applied only to the shadow configuration.


I propose to rename the TRY flag for subdevs to a SHADOW flag. I think it
would express the difference between the TRY meanings in the video node and
subdev contexts much more clearly.


Therefore an ioctl VIDIOC_TRY_SELECTION is probably a better and more
convenient way of testing whether a configuration is applicable.
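
For illustration, a TRY-style check from the application side could look like the sketch below; the structure layout and the ioctl follow this RFC thread and are placeholders, not a merged API:

#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* RFC-style selection structure and ioctl, placeholders for this sketch. */
struct v4l2_selection_rfc {
	__u32		 type;		/* buffer type */
	__u32		 target;	/* which rectangle (e.g. active crop) */
	__u32		 flags;
	struct v4l2_rect r;
	__u32		 reserved[9];
};
#define VIDIOC_TRY_SELECTION_RFC \
	_IOWR('V', 0 /* placeholder number */, struct v4l2_selection_rfc)

static int try_crop(int fd, unsigned int width, unsigned int height)
{
	struct v4l2_selection_rfc sel = {
		.type = V4L2_BUF_TYPE_VIDEO_CAPTURE,
		.r = { .left = 0, .top = 0, .width = width, .height = height },
	};

	/* The driver may adjust sel.r; the active configuration stays intact. */
	if (ioctl(fd, VIDIOC_TRY_SELECTION_RFC, &sel) < 0) {
		perror("VIDIOC_TRY_SELECTION");
		return -1;
	}
	printf("driver would accept %ux%u\n",
	       (unsigned)sel.r.width, (unsigned)sel.r.height);
	return 0;
}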



Regardless of how such a scheme should work, one thing that I believe
is missing in the format ioctls (both on the video and subdev level)
is a similar concept like the flags in this proposal. It would be
quite useful if you could indicate that the desired WxH has 

Re: [PATCH/RFC 1/4] V4L: add three new ioctl()s for multi-size videobuffer management

2011-05-18 Thread Sakari Ailus
Hi,

Guennadi Liakhovetski wrote:
 On Wed, 18 May 2011, Laurent Pinchart wrote:
 
 On Friday 13 May 2011 09:45:51 Guennadi Liakhovetski wrote:
 
 [snip]
 
 2. Both these flags should not be passed with CREATE, but with SUBMIT
 (which will be renamed to PREPARE or something similar). It should be
 possible to prepare the same buffer with different caching attributes
 during a running operation. Shall these flags be added to the values taken
 by struct v4l2_buffer::flags, since that is the struct that will be used
 as the argument for the new version of the SUBMIT ioctl()?

 Do you have a use case for per-submit cache handling ?
 
 This was Sakari's idea, I think, so ask him ;) But I think he suggested a
 case when not all buffers have to be processed in user space. In
 principle, I can imagine such a case too. E.g., if you pass most of the
 buffers directly to output / further processing, and only analyse some of
 them in software, perhaps for some WB, exposure, etc.

Yes; I think I mentioned this. It's possible that some rather
CPU-intensive processing is only done on the CPU for every nth image,
for example. In this case, flushing the cache for images that won't be
touched by the CPU is not necessary.
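
To make that concrete, a per-buffer cache hint could be passed at PREPARE time roughly as sketched below; every identifier here is a placeholder, none of this is existing V4L2 API:

#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* Made-up names for this sketch: a PREPARE-style ioctl taking v4l2_buffer
 * and a flag asking the kernel to skip cache maintenance for this buffer. */
#define VIDIOC_PREPARE_BUF_SKETCH \
	_IOWR('V', 0 /* placeholder number */, struct v4l2_buffer)
#define V4L2_BUF_FLAG_NO_CACHE_SYNC_SKETCH	0x1000	/* placeholder value */

static void prepare_frame(int fd, unsigned int i)
{
	struct v4l2_buffer buf = {
		.index  = i,
		.type   = V4L2_BUF_TYPE_VIDEO_CAPTURE,
		.memory = V4L2_MEMORY_MMAP,
	};

	/* Only every 4th frame is inspected by the CPU (e.g. for AWB/AE);
	 * cache maintenance can be skipped for the rest. */
	if (i % 4)
		buf.flags |= V4L2_BUF_FLAG_NO_CACHE_SYNC_SKETCH;

	ioctl(fd, VIDIOC_PREPARE_BUF_SKETCH, &buf);
}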

 +
 +/* VIDIOC_CREATE_BUFS */
 +struct v4l2_create_buffers {
 +	__u32			index;	/* output: buffers index...index + count - 1 have been created */
 +	__u32			count;
 +	__u32			flags;	/* V4L2_BUFFER_FLAG_* */
 +	enum v4l2_memory	memory;
 +	__u32			size;	/* Explicit size, e.g., for compressed streams */
 +	struct v4l2_format	format;	/* type is used always, the rest if size == 0 */
 +};

 1. Care must be taken to keep index <= V4L2_MAX_FRAME

 Does that requirement still make sense ?
 
 Don't know, again, not my idea. videobuf2-core still uses it. At one 
 location it seems to be pretty arbitrary, at another it is the size of an 
 array in struct vb2_fileio_data, but maybe we could allocate that one 
 dynamically too. So, do I understand it right, that this would affect our 
 case, if we would CREATE our buffers and then the user would decide to do 
 read() / write() on them?

My issue with this number, as I gave it a little more thought, is that
it is actually quite low. The devices will have more memory in the
future and 32 might become a real limitation. I think it would be wise
to define the API so that the number of simultaneous buffers allocated
on a device is not limited by this number.

Kind regards,

-- 
Sakari Ailus
sakari.ai...@maxwell.research.nokia.com


[cron job] v4l-dvb daily build: ERRORS

2011-05-18 Thread Hans Verkuil
This message is generated daily by a cron job that builds v4l-dvb for
the kernels and architectures in the list below.

Results of the daily build of v4l-dvb:

date:          Wed May 18 19:00:32 CEST 2011
git hash:      f9b51477fe540fb4c65a05027fdd6f2ecce4db3b
gcc version:   i686-linux-gcc (GCC) 4.5.1
host hardware: x86_64
host os:       2.6.32.5

linux-git-armv5: OK
linux-git-armv5-davinci: OK
linux-git-armv5-ixp: OK
linux-git-armv5-omap2: OK
linux-git-i686: OK
linux-git-m32r: OK
linux-git-mips: WARNINGS
linux-git-powerpc64: OK
linux-git-x86_64: OK
linux-2.6.31.12-i686: ERRORS
linux-2.6.32.6-i686: WARNINGS
linux-2.6.33-i686: WARNINGS
linux-2.6.34-i686: WARNINGS
linux-2.6.35.3-i686: WARNINGS
linux-2.6.36-i686: WARNINGS
linux-2.6.37-i686: WARNINGS
linux-2.6.38.2-i686: WARNINGS
linux-2.6.31.12-x86_64: ERRORS
linux-2.6.32.6-x86_64: WARNINGS
linux-2.6.33-x86_64: WARNINGS
linux-2.6.34-x86_64: WARNINGS
linux-2.6.35.3-x86_64: WARNINGS
linux-2.6.36-x86_64: WARNINGS
linux-2.6.37-x86_64: WARNINGS
linux-2.6.38.2-x86_64: WARNINGS
spec-git: OK
sparse: ERRORS

Detailed results are available here:

http://www.xs4all.nl/~hverkuil/logs/Wednesday.log

Full logs are available here:

http://www.xs4all.nl/~hverkuil/logs/Wednesday.tar.bz2

The V4L-DVB specification from this daily build is here:

http://www.xs4all.nl/~hverkuil/spec/media.html


Re: [libdvben50221] [PATCH] Assign same resource_id in open_session_response when resource non-existent

2011-05-18 Thread Tomer Barletz
On Tue, May 17, 2011 at 8:46 AM, Brice DUBOST bra...@braice.net wrote:
 On 18/01/2011 15:42, Tomer Barletz wrote:
 Attached is a patch for a bug in the lookup_callback function, where in
 the case of a non-existent resource, the connected_resource_id is not
 initialized and is then used in the open_session_response call of the
 session layer.


 Hello

 Can you explain what kind of bug it fixes ?

 Thanks


The standard states that in case the module can't provide the
requested resource, it should reply with the same resource id - this
is the only line that was added.
Also, since the caller of this function might use the returned
variable, this variable must be initialized.
The attached patch solves both bugs.

Tomer


[PATCH v3] tea575x: convert to control framework

2011-05-18 Thread Ondrej Zary
Convert tea575x-tuner to use the new V4L2 control framework. Also add
ext_init() callback that can be used by a card driver for additional
initialization right before registering the video device (for SF16-FMR2).

Signed-off-by: Ondrej Zary li...@rainbow-software.org

--- linux-2.6.39-rc2-/include/sound/tea575x-tuner.h 2011-05-13 
19:39:23.0 +0200
+++ linux-2.6.39-rc2/include/sound/tea575x-tuner.h  2011-05-17 
22:35:19.0 +0200
@@ -23,8 +23,7 @@
  */
 
 #include linux/videodev2.h
-#include media/v4l2-dev.h
-#include media/v4l2-ioctl.h
+#include media/v4l2-ctrls.h
 
 #define TEA575X_FMIF   10700
 
@@ -54,6 +53,8 @@ struct snd_tea575x {
void *private_data;
u8 card[32];
u8 bus_info[32];
+   struct v4l2_ctrl_handler ctrl_handler;
+   void (*ext_init)(struct snd_tea575x *tea);
 };
 
 int snd_tea575x_init(struct snd_tea575x *tea);
--- linux-2.6.39-rc2-/sound/i2c/other/tea575x-tuner.c   2011-05-13 
19:39:23.0 +0200
+++ linux-2.6.39-rc2/sound/i2c/other/tea575x-tuner.c2011-05-18 
21:33:05.0 +0200
@@ -22,11 +22,11 @@
 
 #include asm/io.h
 #include linux/delay.h
-#include linux/interrupt.h
 #include linux/init.h
 #include linux/slab.h
 #include linux/version.h
-#include sound/core.h
+#include media/v4l2-dev.h
+#include media/v4l2-ioctl.h
 #include sound/tea575x-tuner.h
 
 MODULE_AUTHOR(Jaroslav Kysela pe...@perex.cz);
@@ -62,17 +62,6 @@ module_param(radio_nr, int, 0);
 #define TEA575X_BIT_DUMMY  (115) /* buffer */
 #define TEA575X_BIT_FREQ_MASK  0x7fff
 
-static struct v4l2_queryctrl radio_qctrl[] = {
-   {
-   .id= V4L2_CID_AUDIO_MUTE,
-   .name  = Mute,
-   .minimum   = 0,
-   .maximum   = 1,
-   .default_value = 1,
-   .type  = V4L2_CTRL_TYPE_BOOLEAN,
-   }
-};
-
 /*
  * lowlevel part
  */
@@ -266,47 +255,17 @@ static int vidioc_s_audio(struct file *f
return 0;
 }
 
-static int vidioc_queryctrl(struct file *file, void *priv,
-   struct v4l2_queryctrl *qc)
+static int tea575x_s_ctrl(struct v4l2_ctrl *ctrl)
 {
-   int i;
-
-   for (i = 0; i  ARRAY_SIZE(radio_qctrl); i++) {
-   if (qc-id  qc-id == radio_qctrl[i].id) {
-   memcpy(qc, (radio_qctrl[i]),
-   sizeof(*qc));
-   return 0;
-   }
-   }
-   return -EINVAL;
-}
-
-static int vidioc_g_ctrl(struct file *file, void *priv,
-   struct v4l2_control *ctrl)
-{
-   struct snd_tea575x *tea = video_drvdata(file);
+   struct snd_tea575x *tea = container_of(ctrl-handler, struct 
snd_tea575x, ctrl_handler);
 
switch (ctrl-id) {
case V4L2_CID_AUDIO_MUTE:
-   ctrl-value = tea-mute;
+   tea-mute = ctrl-val;
+   snd_tea575x_set_freq(tea);
return 0;
}
-   return -EINVAL;
-}
-
-static int vidioc_s_ctrl(struct file *file, void *priv,
-   struct v4l2_control *ctrl)
-{
-   struct snd_tea575x *tea = video_drvdata(file);
 
-   switch (ctrl-id) {
-   case V4L2_CID_AUDIO_MUTE:
-   if (tea-mute != ctrl-value) {
-   tea-mute = ctrl-value;
-   snd_tea575x_set_freq(tea);
-   }
-   return 0;
-   }
return -EINVAL;
 }
 
@@ -355,9 +314,6 @@ static const struct v4l2_ioctl_ops tea57
.vidioc_s_input = vidioc_s_input,
.vidioc_g_frequency = vidioc_g_frequency,
.vidioc_s_frequency = vidioc_s_frequency,
-   .vidioc_queryctrl   = vidioc_queryctrl,
-   .vidioc_g_ctrl  = vidioc_g_ctrl,
-   .vidioc_s_ctrl  = vidioc_s_ctrl,
 };
 
 static struct video_device tea575x_radio = {
@@ -367,6 +323,10 @@ static struct video_device tea575x_radio
.release= video_device_release,
 };
 
+static const struct v4l2_ctrl_ops tea575x_ctrl_ops = {
+   .s_ctrl = tea575x_s_ctrl,
+};
+
 /*
  * initialize all the tea575x chips
  */
@@ -384,31 +344,43 @@ int snd_tea575x_init(struct snd_tea575x
tea-in_use = 0;
tea-val = TEA575X_BIT_BAND_FM | TEA575X_BIT_SEARCH_10_40;
tea-freq = 90500 * 16; /* 90.5Mhz default */
+   snd_tea575x_set_freq(tea);
 
tea575x_radio_inst = video_device_alloc();
-   if (tea575x_radio_inst == NULL) {
+   if (!tea575x_radio_inst) {
printk(KERN_ERR tea575x-tuner: not enough memory\n);
return -ENOMEM;
}
 
memcpy(tea575x_radio_inst, tea575x_radio, sizeof(tea575x_radio));
-
-   strcpy(tea575x_radio.name, tea-tea5759 ?
-  TEA5759 radio : TEA5757 radio);
-
video_set_drvdata(tea575x_radio_inst, tea);
 
-   retval = video_register_device(tea575x_radio_inst,
-   

Re: Summary of the V4L2 discussions during LDS - was: Re: Embedded Linux memory management interest group list

2011-05-18 Thread Sakari Ailus
Hans Verkuil wrote:
 Note that many video receivers cannot stall. You can't tell them to wait until
 the last buffer finished processing. This is different from some/most? 
 sensors.

Not even image sensors. They just output the frame data; if the receiver
runs out of buffers the data is just lost. And if any part of the frame
is lost, there's no use for other parts of it either. But that's
something the receiver must handle, i.e. discard the data and increment
frame number (field_count in v4l2_buffer).

The interfaces used by image sensors, be they parallel or serial, do not
provide means to inform the sensor that the receiver has run out of
buffer space. These interfaces are just unidirectional.
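
A rough sketch of the receiver-side handling described above, written against the videobuf2 API of the time; the foo_* names are hypothetical:

#include <media/videobuf2-core.h>

/* Hypothetical bridge-driver state, for illustration only. */
struct foo_dev {
	unsigned int sequence;		/* counts every frame, dropped or not */
	struct vb2_buffer *active;	/* buffer being written to, or NULL */
};

/* Called from the frame-end interrupt: if the receiver had no buffer, the
 * data is simply lost and only the sequence number advances. */
static void foo_frame_done(struct foo_dev *dev)
{
	struct vb2_buffer *vb = dev->active;

	dev->sequence++;

	if (!vb)			/* ran out of buffers: frame dropped */
		return;

	vb->v4l2_buf.sequence = dev->sequence;
	vb2_buffer_done(vb, VB2_BUF_STATE_DONE);
	dev->active = NULL;		/* the next queued buffer is set up elsewhere */
}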

Regards,

-- 
Sakari Ailus
sakari.ai...@maxwell.research.nokia.com


Re: [PATCH/RFC 1/4] V4L: add three new ioctl()s for multi-size videobuffer management

2011-05-18 Thread Sakari Ailus
Hi Guennadi and Laurent,

Guennadi Liakhovetski wrote:
 On Wed, 18 May 2011, Laurent Pinchart wrote:
 
 On Tuesday 17 May 2011 07:52:28 Sakari Ailus wrote:
 Guennadi Liakhovetski wrote:
 
 [snip]
 
 What about making it possible to pass an array of buffer indices to the
 user, just like VIDIOC_S_EXT_CTRLS does? I'm not sure if this would be
 perfect, but it would avoid the problem of requiring continuous ranges
 of buffer ids.

 struct v4l2_create_buffers {
 	__u32			*index;
 	__u32			count;
 	__u32			flags;
 	enum v4l2_memory	memory;
 	__u32			size;
 	struct v4l2_format	format;
 };

 Index would be a pointer to an array of buffer indices and its length
 would be count.

 I don't understand this. We do _not_ want to allow holes in indices. For
 now we decide to not implement DESTROY at all. In this case indices just
 increment contiguously.

 The next stage is to implement DESTROY, but only in strict reverse order
 - without holes and in the same ranges, as buffers have been CREATEd
 before. So, I really don't understand why we need arrays, sorry.

 Well, now that we're defining a second interface to make new buffer
 objects, I just thought it should be made as future-proof as we can.

 I second that. I don't like rushing new APIs to find out we need something 
 else after 6 months.
 
 Ok, so, we pass an array of size count from user space with CREATE. The
 kernel fills it with as many buffer entries as it has allocated. But
 currently drivers are also allowed to allocate more buffers than
 user space has requested. What do we do in such a case?

That's a good point.

But even if there were no array, shouldn't the user be allowed to create
the buffers using a number of separate CREATE_BUF calls? The result
would still be the same n buffers as with a single call allocating the n
buffers at once.
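
From the application side that would look roughly like the sketch below, reusing the field names from the RFC earlier in this thread; the structure and ioctl number are placeholders, not settled API:

#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* RFC-style structure, field names as proposed earlier in this thread. */
struct v4l2_create_buffers_rfc {
	__u32			index;	/* output: first created buffer */
	__u32			count;
	__u32			flags;
	enum v4l2_memory	memory;
	__u32			size;
	struct v4l2_format	format;
};
#define VIDIOC_CREATE_BUFS_RFC \
	_IOWR('V', 0 /* placeholder number */, struct v4l2_create_buffers_rfc)

static void create_in_two_steps(int fd)
{
	struct v4l2_create_buffers_rfc req;

	memset(&req, 0, sizeof(req));
	req.count = 4;
	req.memory = V4L2_MEMORY_MMAP;
	req.format.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;

	/* First call: buffers req.index .. req.index + 3 are created. */
	ioctl(fd, VIDIOC_CREATE_BUFS_RFC, &req);

	/* Second call: two more buffers, indices continue from the first set. */
	req.count = 2;
	ioctl(fd, VIDIOC_CREATE_BUFS_RFC, &req);
}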

Also, consider the (hopefully!) forthcoming DMA buffer management API
patches. It looks like those buffers will be referred to by file
handles. To associate several DMA buffer objects with V4L2 buffers at
once, there would have to be an array of those objects.

URL:http://www.spinics.net/lists/linux-media/msg32448.html

(See the links, too!)

Thus, I would think that CREATE_BUF can be used to create buffers but
not to enforce how many of them are required by a device on a single
CREATE_BUF call.

I don't have a good answer for the stated problem, but these ones
crossed my mind:

- Have a new ioctl to tell the minimum number of buffers to make
streaming possible.

- Add a field for the minimum number of buffers to CREATE_BUF.

- Use the old REQBUFS to tell the number. It didn't do much other work
in the past either, right?

Regards,

-- 
Sakari Ailus
sakari.ai...@maxwell.research.nokia.com


Re: [PATCH v3] tea575x: convert to control framework

2011-05-18 Thread Hans Verkuil
On Wednesday, May 18, 2011 21:36:27 Ondrej Zary wrote:
 Convert tea575x-tuner to use the new V4L2 control framework. Also add
 ext_init() callback that can be used by a card driver for additional
 initialization right before registering the video device (for SF16-FMR2).

Just one comment left, see below...

 
 Signed-off-by: Ondrej Zary li...@rainbow-software.org
 
 --- linux-2.6.39-rc2-/include/sound/tea575x-tuner.h   2011-05-13 
 19:39:23.0 +0200
 +++ linux-2.6.39-rc2/include/sound/tea575x-tuner.h2011-05-17 
 22:35:19.0 +0200
 @@ -23,8 +23,7 @@
   */
  
  #include linux/videodev2.h
 -#include media/v4l2-dev.h
 -#include media/v4l2-ioctl.h
 +#include media/v4l2-ctrls.h
  
  #define TEA575X_FMIF 10700
  
 @@ -54,6 +53,8 @@ struct snd_tea575x {
   void *private_data;
   u8 card[32];
   u8 bus_info[32];
 + struct v4l2_ctrl_handler ctrl_handler;
 + void (*ext_init)(struct snd_tea575x *tea);
  };
  
  int snd_tea575x_init(struct snd_tea575x *tea);
 --- linux-2.6.39-rc2-/sound/i2c/other/tea575x-tuner.c 2011-05-13 
 19:39:23.0 +0200
 +++ linux-2.6.39-rc2/sound/i2c/other/tea575x-tuner.c  2011-05-18 
 21:33:05.0 +0200
 @@ -22,11 +22,11 @@
  
  #include asm/io.h
  #include linux/delay.h
 -#include linux/interrupt.h
  #include linux/init.h
  #include linux/slab.h
  #include linux/version.h
 -#include sound/core.h
 +#include media/v4l2-dev.h
 +#include media/v4l2-ioctl.h
  #include sound/tea575x-tuner.h
  
  MODULE_AUTHOR(Jaroslav Kysela pe...@perex.cz);
 @@ -62,17 +62,6 @@ module_param(radio_nr, int, 0);
  #define TEA575X_BIT_DUMMY(115) /* buffer */
  #define TEA575X_BIT_FREQ_MASK0x7fff
  
 -static struct v4l2_queryctrl radio_qctrl[] = {
 - {
 - .id= V4L2_CID_AUDIO_MUTE,
 - .name  = Mute,
 - .minimum   = 0,
 - .maximum   = 1,
 - .default_value = 1,
 - .type  = V4L2_CTRL_TYPE_BOOLEAN,
 - }
 -};
 -
  /*
   * lowlevel part
   */
 @@ -266,47 +255,17 @@ static int vidioc_s_audio(struct file *f
   return 0;
  }
  
 -static int vidioc_queryctrl(struct file *file, void *priv,
 - struct v4l2_queryctrl *qc)
 +static int tea575x_s_ctrl(struct v4l2_ctrl *ctrl)
  {
 - int i;
 -
 - for (i = 0; i  ARRAY_SIZE(radio_qctrl); i++) {
 - if (qc-id  qc-id == radio_qctrl[i].id) {
 - memcpy(qc, (radio_qctrl[i]),
 - sizeof(*qc));
 - return 0;
 - }
 - }
 - return -EINVAL;
 -}
 -
 -static int vidioc_g_ctrl(struct file *file, void *priv,
 - struct v4l2_control *ctrl)
 -{
 - struct snd_tea575x *tea = video_drvdata(file);
 + struct snd_tea575x *tea = container_of(ctrl-handler, struct 
 snd_tea575x, ctrl_handler);
  
   switch (ctrl-id) {
   case V4L2_CID_AUDIO_MUTE:
 - ctrl-value = tea-mute;
 + tea-mute = ctrl-val;
 + snd_tea575x_set_freq(tea);
   return 0;
   }
 - return -EINVAL;
 -}
 -
 -static int vidioc_s_ctrl(struct file *file, void *priv,
 - struct v4l2_control *ctrl)
 -{
 - struct snd_tea575x *tea = video_drvdata(file);
  
 - switch (ctrl-id) {
 - case V4L2_CID_AUDIO_MUTE:
 - if (tea-mute != ctrl-value) {
 - tea-mute = ctrl-value;
 - snd_tea575x_set_freq(tea);
 - }
 - return 0;
 - }
   return -EINVAL;
  }
  
 @@ -355,9 +314,6 @@ static const struct v4l2_ioctl_ops tea57
   .vidioc_s_input = vidioc_s_input,
   .vidioc_g_frequency = vidioc_g_frequency,
   .vidioc_s_frequency = vidioc_s_frequency,
 - .vidioc_queryctrl   = vidioc_queryctrl,
 - .vidioc_g_ctrl  = vidioc_g_ctrl,
 - .vidioc_s_ctrl  = vidioc_s_ctrl,
  };
  
  static struct video_device tea575x_radio = {
 @@ -367,6 +323,10 @@ static struct video_device tea575x_radio
   .release= video_device_release,
  };
  
 +static const struct v4l2_ctrl_ops tea575x_ctrl_ops = {
 + .s_ctrl = tea575x_s_ctrl,
 +};
 +
  /*
   * initialize all the tea575x chips
   */
 @@ -384,31 +344,43 @@ int snd_tea575x_init(struct snd_tea575x
   tea-in_use = 0;
   tea-val = TEA575X_BIT_BAND_FM | TEA575X_BIT_SEARCH_10_40;
   tea-freq = 90500 * 16; /* 90.5Mhz default */
 + snd_tea575x_set_freq(tea);
  
   tea575x_radio_inst = video_device_alloc();

video_device_alloc allocates the struct video_device. It would be easier
to just embed that struct in the snd_tea575x struct. It prevents having to
allocate memory twice and the test below can be removed entirely.

The .release field above can be set to video_device_release_empty, which
is the appropriate release callback for embedded video_device structs.

 - if (tea575x_radio_inst == NULL) {
 + if 

[PATCH v4] tea575x: convert to control framework

2011-05-18 Thread Ondrej Zary
Convert tea575x-tuner to use the new V4L2 control framework. Also add
ext_init() callback that can be used by a card driver for additional
initialization right before registering the video device (for SF16-FMR2).

Also embed struct video_device to struct snd_tea575x to simplify the code.

Signed-off-by: Ondrej Zary li...@rainbow-software.org

--- linux-2.6.39-rc2-/include/sound/tea575x-tuner.h 2011-05-13 
19:39:23.0 +0200
+++ linux-2.6.39-rc2/include/sound/tea575x-tuner.h  2011-05-18 
23:14:48.0 +0200
@@ -23,8 +23,8 @@
  */
 
 #include linux/videodev2.h
+#include media/v4l2-ctrls.h
 #include media/v4l2-dev.h
-#include media/v4l2-ioctl.h
 
 #define TEA575X_FMIF   10700
 
@@ -42,7 +42,8 @@ struct snd_tea575x_ops {
 };
 
 struct snd_tea575x {
-   struct video_device *vd;/* video device */
+   struct video_device vd; /* video device */
+   bool initialized;
bool tea5759;   /* 5759 chip is present */
bool mute;  /* Device is muted? */
bool stereo;/* receiving stereo */
@@ -54,6 +55,8 @@ struct snd_tea575x {
void *private_data;
u8 card[32];
u8 bus_info[32];
+   struct v4l2_ctrl_handler ctrl_handler;
+   void (*ext_init)(struct snd_tea575x *tea);
 };
 
 int snd_tea575x_init(struct snd_tea575x *tea);
--- linux-2.6.39-rc2-/sound/i2c/other/tea575x-tuner.c   2011-05-13 
19:39:23.0 +0200
+++ linux-2.6.39-rc2/sound/i2c/other/tea575x-tuner.c2011-05-18 
23:22:44.0 +0200
@@ -22,11 +22,11 @@
 
 #include asm/io.h
 #include linux/delay.h
-#include linux/interrupt.h
 #include linux/init.h
 #include linux/slab.h
 #include linux/version.h
-#include sound/core.h
+#include media/v4l2-dev.h
+#include media/v4l2-ioctl.h
 #include sound/tea575x-tuner.h
 
 MODULE_AUTHOR(Jaroslav Kysela pe...@perex.cz);
@@ -62,17 +62,6 @@ module_param(radio_nr, int, 0);
 #define TEA575X_BIT_DUMMY  (115) /* buffer */
 #define TEA575X_BIT_FREQ_MASK  0x7fff
 
-static struct v4l2_queryctrl radio_qctrl[] = {
-   {
-   .id= V4L2_CID_AUDIO_MUTE,
-   .name  = Mute,
-   .minimum   = 0,
-   .maximum   = 1,
-   .default_value = 1,
-   .type  = V4L2_CTRL_TYPE_BOOLEAN,
-   }
-};
-
 /*
  * lowlevel part
  */
@@ -266,47 +255,17 @@ static int vidioc_s_audio(struct file *f
return 0;
 }
 
-static int vidioc_queryctrl(struct file *file, void *priv,
-   struct v4l2_queryctrl *qc)
-{
-   int i;
-
-   for (i = 0; i  ARRAY_SIZE(radio_qctrl); i++) {
-   if (qc-id  qc-id == radio_qctrl[i].id) {
-   memcpy(qc, (radio_qctrl[i]),
-   sizeof(*qc));
-   return 0;
-   }
-   }
-   return -EINVAL;
-}
-
-static int vidioc_g_ctrl(struct file *file, void *priv,
-   struct v4l2_control *ctrl)
+static int tea575x_s_ctrl(struct v4l2_ctrl *ctrl)
 {
-   struct snd_tea575x *tea = video_drvdata(file);
+   struct snd_tea575x *tea = container_of(ctrl-handler, struct 
snd_tea575x, ctrl_handler);
 
switch (ctrl-id) {
case V4L2_CID_AUDIO_MUTE:
-   ctrl-value = tea-mute;
+   tea-mute = ctrl-val;
+   snd_tea575x_set_freq(tea);
return 0;
}
-   return -EINVAL;
-}
-
-static int vidioc_s_ctrl(struct file *file, void *priv,
-   struct v4l2_control *ctrl)
-{
-   struct snd_tea575x *tea = video_drvdata(file);
 
-   switch (ctrl-id) {
-   case V4L2_CID_AUDIO_MUTE:
-   if (tea-mute != ctrl-value) {
-   tea-mute = ctrl-value;
-   snd_tea575x_set_freq(tea);
-   }
-   return 0;
-   }
return -EINVAL;
 }
 
@@ -355,16 +314,17 @@ static const struct v4l2_ioctl_ops tea57
.vidioc_s_input = vidioc_s_input,
.vidioc_g_frequency = vidioc_g_frequency,
.vidioc_s_frequency = vidioc_s_frequency,
-   .vidioc_queryctrl   = vidioc_queryctrl,
-   .vidioc_g_ctrl  = vidioc_g_ctrl,
-   .vidioc_s_ctrl  = vidioc_s_ctrl,
 };
 
 static struct video_device tea575x_radio = {
.name   = tea575x-tuner,
.fops   = tea575x_fops,
.ioctl_ops  = tea575x_ioctl_ops,
-   .release= video_device_release,
+   .release= video_device_release_empty,
+};
+
+static const struct v4l2_ctrl_ops tea575x_ctrl_ops = {
+   .s_ctrl = tea575x_s_ctrl,
 };
 
 /*
@@ -373,7 +333,6 @@ static struct video_device tea575x_radio
 int snd_tea575x_init(struct snd_tea575x *tea)
 {
int retval;
-   struct video_device *tea575x_radio_inst;
 
tea-mute = 1;
 
@@ -384,39 +343,44 @@ int snd_tea575x_init(struct snd_tea575x

[PATCH v4] radio-sf16fmr2: convert to generic TEA575x interface

2011-05-18 Thread Ondrej Zary
Convert radio-sf16fmr2 to use generic TEA575x implementation. Most of the
driver code goes away as SF16-FMR2 is basically just a TEA5757 tuner
connected to ISA bus.
The card can optionally be equipped with PT2254A volume control (equivalent
of TC9154AP) - the volume setting is completely reworked (with balance control
added) and tested.

Signed-off-by: Ondrej Zary li...@rainbow-software.org

--- linux-2.6.39-rc2-/sound/pci/Kconfig 2011-05-15 18:50:18.0 +0200
+++ linux-2.6.39-rc2/sound/pci/Kconfig  2011-05-17 23:35:30.0 +0200
@@ -565,8 +565,8 @@ config SND_FM801_TEA575X_BOOL
 
 config SND_TEA575X
tristate
-   depends on SND_FM801_TEA575X_BOOL || SND_ES1968_RADIO
-   default SND_FM801 || SND_ES1968
+   depends on SND_FM801_TEA575X_BOOL || SND_ES1968_RADIO || RADIO_SF16FMR2
+   default SND_FM801 || SND_ES1968 || RADIO_SF16FMR2
 
 source sound/pci/hda/Kconfig
 
--- linux-2.6.39-rc2-/drivers/media/radio/radio-sf16fmr2.c  2011-04-06 
03:30:43.0 +0200
+++ linux-2.6.39-rc2/drivers/media/radio/radio-sf16fmr2.c   2011-05-18 
22:41:07.0 +0200
@@ -1,441 +1,204 @@
-/* SF16FMR2 radio driver for Linux radio support
- * heavily based on fmi driver...
- * (c) 2000-2002 Ziglio Frediano, fredd...@angelfire.com
+/* SF16-FMR2 radio driver for Linux
+ * Copyright (c) 2011 Ondrej Zary
  *
- * Notes on the hardware
- *
- *  Frequency control is done digitally -- ie out(port,encodefreq(95.8));
- *  No volume control - only mute/unmute - you have to use line volume
- *
- *  For read stereo/mono you must wait 0.1 sec after set frequency and
- *  card unmuted so I set frequency on unmute
- *  Signal handling seem to work only on autoscanning (not implemented)
- *
- *  Converted to V4L2 API by Mauro Carvalho Chehab mche...@infradead.org
+ * Original driver was (c) 2000-2002 Ziglio Frediano, fredd...@angelfire.com
+ * but almost nothing remained here after conversion to generic TEA575x
+ * implementation
  */
 
+#include linux/delay.h
 #include linux/module.h  /* Modules  */
 #include linux/init.h/* Initdata */
 #include linux/ioport.h  /* request_region   */
-#include linux/delay.h   /* udelay   */
-#include linux/videodev2.h   /* kernel radio structs */
-#include linux/mutex.h
-#include linux/version.h  /* for KERNEL_VERSION MACRO */
 #include linux/io.h  /* outb, outb_p */
-#include media/v4l2-device.h
-#include media/v4l2-ioctl.h
+#include sound/tea575x-tuner.h
 
-MODULE_AUTHOR(Ziglio Frediano, fredd...@angelfire.com);
-MODULE_DESCRIPTION(A driver for the SF16FMR2 radio.);
+MODULE_AUTHOR(Ondrej Zary);
+MODULE_DESCRIPTION(MediaForte SF16-FMR2 FM radio card driver);
 MODULE_LICENSE(GPL);
 
-static int io = 0x384;
-static int radio_nr = -1;
-
-module_param(io, int, 0);
-MODULE_PARM_DESC(io, I/O address of the SF16FMR2 card (should be 0x384, if do 
not work try 0x284));
-module_param(radio_nr, int, 0);
-
-#define RADIO_VERSION KERNEL_VERSION(0,0,2)
-
-#define AUD_VOL_INDEX 1
-
-#undef DEBUG
-//#define DEBUG 1
-
-#ifdef DEBUG
-# define  debug_print(s) printk s
-#else
-# define  debug_print(s)
-#endif
-
-/* this should be static vars for module size */
-struct fmr2
-{
-   struct v4l2_device v4l2_dev;
-   struct video_device vdev;
-   struct mutex lock;
+struct fmr2 {
int io;
-   int curvol; /* 0-15 */
-   int mute;
-   int stereo; /* card is producing stereo audio */
-   unsigned long curfreq; /* freq in kHz */
-   int card_type;
+   struct snd_tea575x tea;
+   struct v4l2_ctrl *volume;
+   struct v4l2_ctrl *balance;
 };
 
+/* the port is hardwired so no need to support multiple cards */
+#define FMR2_PORT  0x384
 static struct fmr2 fmr2_card;
 
-/* hw precision is 12.5 kHz
- * It is only useful to give freq in interval of 200 (=0.0125Mhz),
- * other bits will be truncated
- */
-#define RSF16_ENCODE(x)((x) / 200 + 856)
-#define RSF16_MINFREQ (87 * 16000)
-#define RSF16_MAXFREQ (108 * 16000)
-
-static inline void wait(int n, int io)
-{
-   for (; n; --n)
-   inb(io);
-}
-
-static void outbits(int bits, unsigned int data, int nWait, int io)
-{
-   int bit;
-
-   for (; --bits = 0;) {
-   bit = (data  bits)  1;
-   outb(bit, io);
-   wait(nWait, io);
-   outb(bit | 2, io);
-   wait(nWait, io);
-   outb(bit, io);
-   wait(nWait, io);
-   }
-}
-
-static inline void fmr2_mute(int io)
-{
-   outb(0x00, io);
-   wait(4, io);
-}
-
-static inline void fmr2_unmute(int io)
-{
-   outb(0x04, io);
-   wait(4, io);
-}
-
-static inline int fmr2_stereo_mode(int io)
-{
-   int n = inb(io);
-
-   outb(6, io);
-   inb(io);
-   n = ((n  3)  1) ^ 1;
-   debug_print((KERN_DEBUG stereo: %d\n, n));
-   return n;
-}
-
-static int 

Re: [RFC v4] V4L2 API for flash devices

2011-05-18 Thread Sylwester Nawrocki
Hi Sakari,

On 05/17/2011 10:34 PM, Sakari Ailus wrote:
 Sylwester Nawrocki wrote:
 On 05/09/2011 12:11 AM, Sakari Ailus wrote:
 Sylwester Nawrocki wrote:
 On 05/07/2011 02:46 PM, Hans Verkuil wrote:
 On Thursday, May 05, 2011 20:49:21 Sakari Ailus wrote:
 Hi,

 This is a fourth proposal for an interface for controlling flash devices
 on the V4L2/v4l2_subdev APIs.
...
 3. Sensor metadata on frames
 

 It'd be useful to be able to read back sensor metadata. If the flash is
 strobed (on sensor hardware) while streaming, it's difficult to know
 otherwise which frame in the stream has been exposed with flash.

 I wonder if it would make sense to have a V4L2_BUF_FLAG_FLASH buffer
 flag?
 That way userspace can tell if that particular frame was taken with
 flash.

 This looks more like a workaround for the problem rather than a good
 long-term solution. It might be tempting to use the buffer flags, which
 seem to be more or less intended for buffer control.
 I'd much rather see a buffer flag used to indicate whether an additional
 plane of (meta)data is carried by the buffer.
 There seem to be many more parameters than a single flag indicating
 whether the frame has been exposed with flash or not that need to be
 carried over to user space.
 But then we might need some standard format for the metadata, perhaps
 control id/value pairs and possibly a per-plane configurable memory
 type.

 There are multiple possible approaches for this.

 For sensors where metadata is register-value pairs, that is, essentially
 V4L2 control values, I think this should be parsed by the sensor driver.
 The ISP (camera bridge) driver does receive the data so it'd have to
 ask for help from the sensor driver.

 I am inclined to let the ISP drivers parse the data, but on the other hand
 it might be difficult to access the same DMA buffers in kernel _and_ user space.
 
 This is just about mapping the buffer to both kernel and user spaces. If
 the ISP has an iommu the kernel mapping might already exist if it comes
 from vmalloc().

Yes, I know. I was thinking of the possibly different mapping attributes
required for kernel and user space, and the problems on ARM with that.
Also, as the metadata is supposed to occupy only a small part of a frame
buffer, perhaps only one page or so could be mapped in kernel space.
I'm referring here mainly to the SMIA++ method of storing metadata.

In the case of the sensors I used to work with, it wouldn't be necessary to
touch the main frame buffers as the metadata is transmitted out of band.

 
 As discussed previously, using V4L2 control events shouldn't probably be
 the way to go, but I think it's obvious that this is _somehow_ bound to
 controls, at least control ids.

 Also, as Sakari indicated, some sensors adopt custom metadata formats,
 so maybe we need to introduce standard fourcc-like IDs for metadata
 formats? I am not sure whether it is possible to create a common
 description of image metadata that fits all H/W.

 I'm not sure either since I know of only one example. That example, i.e.
 register-value pairs, should be something that I'd assume _some_ other
 hardware uses as well, but there could exist also hardware which
 doesn't. This solution might not work on that hardware.

 Of course it's hard to find a silver bullet for hardware we do not know ;)


 If there is metadata which does not associate to V4L2 controls (or
 ioctls), either existing or not yet existing, then that probably should
 be parsed by the driver. On the other hand, I can't think of metadata
 that wouldn't fall under this right now. :-)

 Some metadata are arrays whose length is specific to a given attribute;
 I wonder how to support that with v4l2 controls?
 
 Is the metadata something which really isn't associated to any V4L2
 control? Are we now talking about a sensor which is more complex than a
 regular raw bayer sensor?

I referred to tags defined in the EXIF standard; as you may know, every tag
is basically an array of a specific data type. For most tag types the array
length is 1, though.
A similar problem is solved by the V4L extended string controls API.

And yes, these kinds of tags are mostly produced by more powerful ISPs,
with, for instance, 3A or distortion corrections implemented in their firmware.
So this is a somewhat different situation from a raw sensor with the OMAP3 ISP.

 
 Do you know more examples of sensor produced metadata than SMIA++?

 The only metadata I've had a bit experience with was regular EXIF tags
 which could be retrieved from ISP through I2C bus.
 
 That obviously won't map to V4L2 controls.
 
 This should very likely be just passed to user space as-is as
 different... plane?

Yes, but I am trying to assess whether it's possible to create some
generic tag data structure, so that such a plane contains an array of such
data structures and an application would know how to handle it,
independently of the underlying H/W.
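
Purely as a thought experiment, such a generic record might look like the sketch below; nothing like this exists in V4L2 and every name here is made up:

#include <linux/types.h>

/* Hypothetical self-describing metadata record: the metadata plane would
 * carry an array of these, so an application can walk it without
 * hardware-specific knowledge. */
struct foo_meta_tag {
	__u32 id;	/* tag identifier, e.g. a control ID or an EXIF tag */
	__u32 type;	/* element type (u8, u16, u32, string, ...) */
	__u32 count;	/* number of elements that follow */
	__u8  data[];	/* 'count' elements, padded to a 4-byte boundary */
};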

 
 In some cases it's time critical to pass data to user space that
 otherwise 

em28xx: 240p video modes not working

2011-05-18 Thread David Korth
Hi,

I'm having a problem connecting devices that output 240p (non-interlaced 
NTSC) on composite video. Specifically, connecting either a Sega Genesis or 
Nintendo N64 to my WinTV HVR-950 results in any V4L2 program hanging while 
playing back the stream. The program starts responding again the instant the 
240p device is removed.

Is it possible to get 240p input working with the em28xx driver?

Thanks,
- David Korth

