Re: libv4l release: 0.5.97: the whitebalance release!

2009-04-16 Thread Gilles Gigan
Hans,
I have tested libv4lconvert with a PCI Hauppauge HVR1300 DVB-T and
found that v4lconvert_create() returns NULL. The problem comes from
the shm_open calls in v4lcontrol_create() in libv4lcontrol.c (lines
187 and 190). libv4lconvert constructs the shared memory name based on
the video device's name, and in this case the video device's name
(literally "Hauppauge WinTV-HVR1300 DVB-T/H") contains a slash, which
makes both calls to shm_open() fail. I can put together a quick patch
to replace '/' with '-' or whitespace if you want.
Gilles
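
A minimal sketch of the sanitization Gilles suggests (hypothetical helper
name, not the actual patch from this thread):

/* shm_open() rejects names containing '/' beyond the leading one, so
 * replace any slash in the card name before building the shm name. */
static void sanitize_shm_name(char *name)
{
	for (; *name; name++)
		if (*name == '/')
			*name = '-';
}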


On Wed, Apr 15, 2009 at 10:36 PM, Hans de Goede j.w.r.dego...@hhs.nl wrote:
 Hi All,

 As the version number shows, this is a beta release of the 0.6.x series.
 The big change here is the addition of video processing to libv4l.
 Currently this only does whitebalance and normalizing (which turns out
 to be useless for most cams), but the basic framework for doing video
 processing, and being able to control it through fake v4l2 controls using
 for example v4l2ucp, is there.

 Currently only whitebalancing is enabled, and only on Pixart (pac) webcams
 (which benefit tremendously from this). To test this with other webcams
 (after installing this release) do:

 export LIBV4LCONTROL_CONTROLS=15
 LD_PRELOAD=/usr/lib/libv4l/v4l2convert.so v4l2ucp

 Notice the whitebalance and normalize checkboxes in v4l2ucp,
 as well as low and high limits for normalize.
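
 For reference, 15 is a 4-bit mask matching the four fake controls named
 above; a hypothetical sketch of the layout (the real values live in
 libv4lcontrol.c, not here):

 /* Assumed bit assignments for LIBV4LCONTROL_CONTROLS; illustration only. */
 #define CTRL_WHITEBALANCE    (1 << 0)
 #define CTRL_NORMALIZE       (1 << 1)
 #define CTRL_NORM_LOW_BOUND  (1 << 2)
 #define CTRL_NORM_HIGH_BOUND (1 << 3)
 /* 15 == all four bits set */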

 Now start your favorite webcam viewing app and play around with the
 2 checkboxes. Note normalize seems to be useless in most cases. If
 whitebalancing makes a *strongly noticeable* difference for your webcam,
 please mail me info about your cam (the USB ID); then I can add it to
 the list of cams which will have the whitebalancing algorithm (and the v4l2
 control to enable/disable it) enabled by default.

 Unfortunately doing video processing can be quite expensive; for example,
 whitebalancing is quite hard to do in YUV space, so doing white balancing
 with the pac7302 with an app which wants YUV changes the flow from
 pixart-jpeg -> yuv420 -> rotate90
 to:
 pixart-jpeg -> rgb24 -> whitebalance -> yuv420 -> rotate90
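
 For illustration, a gray-world whitebalance sketch on packed RGB24 (not
 libv4l's actual algorithm, just the kind of per-channel scaling meant by
 the whitebalance step above):

 #include <stdint.h>

 static void whitebalance_rgb24(uint8_t *buf, int width, int height)
 {
 	long long sum[3] = { 0, 0, 0 };
 	long npix = (long)width * height, i;
 	double avg, gain[3];
 	int c, v;

 	/* average each channel, then scale so all three match the mean */
 	for (i = 0; i < npix * 3; i++)
 		sum[i % 3] += buf[i];
 	avg = (sum[0] + sum[1] + sum[2]) / 3.0;
 	for (c = 0; c < 3; c++)
 		gain[c] = sum[c] ? avg / sum[c] : 1.0;

 	for (i = 0; i < npix * 3; i++) {
 		v = (int)(buf[i] * gain[i % 3] + 0.5);
 		buf[i] = v > 255 ? 255 : v;
 	}
 }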

 This is not a problem for cams which deliver (compressed) raw bayer,
 as bayer is rgb too, so I've implemented a version of the whitebalancing
 algorithm which operates directly on bayer data, so for bayer cams
 (like the pac207) it goes from:
 bayer -> yuv
 to:
 bayer -> whitebalance -> yuv
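
 The bayer variant only differs in how the component is picked per pixel;
 a sketch assuming a GBRG layout (real cams vary, gain[] indexed R=0, G=1,
 B=2, and again not the actual libv4l code):

 static void whitebalance_bayer_gbrg(uint8_t *buf, int width, int height,
 				    const double gain[3])
 {
 	int x, y, v, c;

 	for (y = 0; y < height; y++)
 		for (x = 0; x < width; x++) {
 			/* GBRG: even rows G B G B..., odd rows R G R G... */
 			c = (y & 1) ? ((x & 1) ? 1 : 0)
 				    : ((x & 1) ? 2 : 1);
 			v = (int)(buf[y * width + x] * gain[c] + 0.5);
 			buf[y * width + x] = v > 255 ? 255 : v;
 		}
 }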

 For the near future I plan to change the code so that the analyse phase
 (which does not get done every frame) creates per-component lookup tables.
 This will make it easier to stack multiple effects in one pass without
 special-casing it, as the current normalize+whitebalance-in-one-pass code
 does. Then we can add for example gamma correction with a negligible
 performance impact (when already doing white balancing, that is).
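
 A rough sketch of that lookup-table idea (illustrative, not the future
 libv4l code): the analyse phase folds all active effects into one
 256-entry table per component, and the per-frame pass is a single lookup
 per byte.

 #include <math.h>
 #include <stdint.h>

 /* Fold whitebalance gain and gamma into one table per component. */
 static void build_lut(uint8_t lut[256], double gain, double gamma)
 {
 	int i, v;

 	for (i = 0; i < 256; i++) {
 		v = (int)(pow(i / 255.0, 1.0 / gamma) * gain * 255.0 + 0.5);
 		lut[i] = v > 255 ? 255 : (v < 0 ? 0 : v);
 	}
 }

 /* Per-frame: one pass, however many effects are stacked in the tables. */
 static void apply_luts_rgb24(uint8_t *buf, long npix, uint8_t lut[3][256])
 {
 	long i;

 	for (i = 0; i < npix * 3; i++)
 		buf[i] = lut[i % 3][buf[i]];
 }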


 libv4l-0.5.97
 -
 * As the version number shows, this is a beta release of the 0.6.x series.
  The big change here is the addition of video processing to libv4l.
  Currently this only does whitebalance and normalizing (which turns out
  to be useless for most cams), but the basic framework for doing video
  processing, and being able to control it through fake v4l2 controls using
  for example v4l2ucp, is there.
  The initial version of this code was written by 3 of my computer science
  students: Elmar Kleijn, Sjoerd Piepenbrink and Radjnies Bhansingh
 * Currently whitebalancing gets enabled based on USB IDs, and it only gets
  enabled for Pixart webcams. You can force it on with other
  webcams by setting the environment variable LIBV4LCONTROL_CONTROLS; this
  sets a bitmask enabling certain v4l2 controls which control the video
  processing. Set it to 15 to enable both whitebalancing and normalize. You
  can then change the settings using a v4l2 control panel like v4l2ucp
 * Only report / allow supported destination formats in enum_fmt / try_fmt /
  g_fmt / s_fmt when processing, rotating or flipping.
 * Some applications / libs (*cough* gstreamer *cough*) will not work
  correctly with planar YUV formats when the width is not a multiple of 8,
  so crop widths which are not a multiple of 8 to the nearest multiple of 8
  when converting to planar YUV (a cropping sketch follows this list)
 * Add dependency generation to libv4l by: Gilles Gigan
 gilles.gi...@gmail.com
 * Add support to use orientation from VIDIOC_ENUMINPUT by:
  Adam Baker li...@baker-net.org.uk
 * sn9c20x cams have occasional bad jpeg frames, drop these to avoid the
  flickering effect they cause, by: Brian Johnson brij...@gmail.com
 * adjust libv4l's upside down cam detection to also work with devices
  which have the usb interface as parent instead of the usb device
 * fix libv4l upside down detection for the new v4l minor numbering scheme
 * fix reading outside of the source memory when doing yuv420->rgb conversion
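
 As referenced in the planar YUV entry above, the crop amounts to rounding
 the width down; a minimal sketch (not the actual libv4lconvert code):

 /* Round the destination width down to a multiple of 8 before planar
  * YUV conversion, as the changelog entry above describes. */
 static int crop_width_for_planar_yuv(int width)
 {
 	return width & ~7;
 }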


 Get it here:
 http://people.atrpms.net/~hdegoede/libv4l-0.5.97.tar.gz

 Regards,

 Hans




Re: [linux-dvb] DVB-T USB dib0700 device recommendations?

2009-04-16 Thread covert covert

 That's weird. So the USB controller on the Nova-TD and the host controller on
 the SB700 are incompatible?


I tried a few different USB tuners with an SB700 based motherboard
until I found out the drivers were not up to scratch for the USB on
the SB700 and caused a lot of "dvb-usb: bulk message failed" errors.


Re: libv4l release: 0.5.97: the whitebalance release!

2009-04-16 Thread Gilles Gigan
Hans,
The patch fixes the problem.

Gilles

On Thu, Apr 16, 2009 at 7:25 PM, Hans de Goede hdego...@redhat.com wrote:


 On 04/16/2009 08:16 AM, Gilles Gigan wrote:

 Hans,
 I have tested libv4lconvert with a PCI Hauppauge HVR1300 DVB-T and
 found that v4lconvert_create() returns NULL. The problem comes from
 the shm_open calls in v4lcontrol_create() in libv4lcontrol.c (lines
 187 and 190). libv4lconvert constructs the shared memory name based on
 the video device's name, and in this case the video device's name
 (literally "Hauppauge WinTV-HVR1300 DVB-T/H") contains a slash, which
 makes both calls to shm_open() fail. I can put together a quick patch
 to replace '/' with '-' or whitespace if you want.
 Gilles


 Hi,

 Thanks for reporting this! Can you please test the attached patch to see if
 it
 fixes this?

 Thanks,

 Hans



 On Wed, Apr 15, 2009 at 10:36 PM, Hans de Goede j.w.r.dego...@hhs.nl
  wrote:

 Hi All,

 As the version number shows, this is a beta release of the 0.6.x series.
 The big change here is the addition of video processing to libv4l.
 Currently this only does whitebalance and normalizing (which turns out
 to be useless for most cams), but the basic framework for doing video
 processing, and being able to control it through fake v4l2 controls using
 for example v4l2ucp, is there.

 Currently only whitebalancing is enabled, and only on Pixart (pac) webcams
 (which benefit tremendously from this). To test this with other webcams
 (after installing this release) do:

 export LIBV4LCONTROL_CONTROLS=15
 LD_PRELOAD=/usr/lib/libv4l/v4l2convert.so v4l2ucp

 Notice the whitebalance and normalize checkboxes in v4l2ucp,
 as well as low and high limits for normalize.

 Now start your favorite webcam viewing app and play around with the
 2 checkboxes. Note normalize seems to be useless in most cases. If
 whitebalancing makes a *strongly noticeable* difference for your webcam,
 please mail me info about your cam (the USB ID); then I can add it to
 the list of cams which will have the whitebalancing algorithm (and the v4l2
 control to enable/disable it) enabled by default.

 Unfortunately doing video processing can be quite expensive; for example,
 whitebalancing is quite hard to do in YUV space, so doing white balancing
 with the pac7302 with an app which wants YUV changes the flow from
 pixart-jpeg -> yuv420 -> rotate90
 to:
 pixart-jpeg -> rgb24 -> whitebalance -> yuv420 -> rotate90

 This is not a problem for cams which deliver (compressed) raw bayer,
 as bayer is rgb too, so I've implemented a version of the whitebalancing
 algorithm which operates directly on bayer data, so for bayer cams
 (like the pac207) it goes from:
 bayer -> yuv
 to:
 bayer -> whitebalance -> yuv

 For the near future I plan to change the code so that the analyse phase
 (which does not get done every frame) creates per-component lookup tables.
 This will make it easier to stack multiple effects in one pass without
 special-casing it, as the current normalize+whitebalance-in-one-pass code
 does. Then we can add for example gamma correction with a negligible
 performance impact (when already doing white balancing, that is).


 libv4l-0.5.97
 -
 * As the version number shows, this is a beta release of the 0.6.x series.
  The big change here is the addition of video processing to libv4l.
  Currently this only does whitebalance and normalizing (which turns out
  to be useless for most cams), but the basic framework for doing video
  processing, and being able to control it through fake v4l2 controls using
  for example v4l2ucp, is there.
  The initial version of this code was written by 3 of my computer science
  students: Elmar Kleijn, Sjoerd Piepenbrink and Radjnies Bhansingh
 * Currently whitebalancing gets enabled based on USB IDs, and it only gets
  enabled for Pixart webcams. You can force it on with other
  webcams by setting the environment variable LIBV4LCONTROL_CONTROLS; this
  sets a bitmask enabling certain v4l2 controls which control the video
  processing. Set it to 15 to enable both whitebalancing and normalize. You
  can then change the settings using a v4l2 control panel like v4l2ucp
 * Only report / allow supported destination formats in enum_fmt / try_fmt /
  g_fmt / s_fmt when processing, rotating or flipping.
 * Some applications / libs (*cough* gstreamer *cough*) will not work
  correctly with planar YUV formats when the width is not a multiple of 8,
  so crop widths which are not a multiple of 8 to the nearest multiple of 8
  when converting to planar YUV
 * Add dependency generation to libv4l by: Gilles Gigan
 gilles.gi...@gmail.com
 * Add support to use orientation from VIDIOC_ENUMINPUT by:
  Adam Baker li...@baker-net.org.uk
 * sn9c20x cams have occasional bad jpeg frames, drop these to avoid the
  flickering effect they cause, by: Brian Johnson brij...@gmail.com
 * adjust libv4l's upside down cam detection to also work with devices
  which have the usb interface as parent instead 

Re: [PATCH 5/5] soc-camera: Convert to a platform driver

2009-04-16 Thread Dongsoo, Nathaniel Kim
Hello Guennadi,

On Thu, Apr 16, 2009 at 5:58 PM, Guennadi Liakhovetski
g.liakhovet...@gmx.de wrote:
 On Thu, 16 Apr 2009, Dongsoo, Nathaniel Kim wrote:

 Hello Guennadi,


 Reviewing your patch, I've become curious about one thing.
 I think your soc camera subsystem covers multiple camera
 devices (sensors) on one target board, but if that is true I'm afraid
 I'm confused about how to handle them properly.
 Because according to your patch, video_dev_create() takes the camera
 device as a parameter, and it seems to create a device node for each
 camera device.

 This patch is a preparatory step for the v4l2-(sub)dev conversion. With it,
 yes (I think), a video device will be created for every camera registered
 on the platform level, but only the one(s) that probed successfully will
 actually work; the others will return -ENODEV on open().

 It means, if I have one camera host and several camera devices, there
 should be several device nodes for the camera devices, but they cannot be
 used at the same time, because a typical camera host (camera interface)
 can handle only one camera device at a time. But multiple device nodes
 mean we can open and handle them at the same time.

 How about registering the camera host device as the v4l2 device and
 making each camera device an input which could be handled using the
 VIDIOC_S_INPUT/G_INPUT API?

 There are also cases when you have several cameras simultaneously (think
 for example about stereo vision), even though we don't have any such cases
 just yet.

I think there are some specific camera interfaces for stereo cameras,
like the stereo camera controller chip from Epson.

But in the case of a camera interface which can handle only one single
camera at a time, I strongly believe that we should use only one
device node for the camera.
I mean the device node should be the camera interface, not the sensor
device. If you are using a stereo camera controller chip, you can make
that work with a couple of device nodes, like /dev/video0 and /dev/video1.



 Actually, I'm working on S3C64xx camera interface driver with soc
 camera subsystem,

 Looking forward to it! :-)

 and I'm facing that issue right now because I've got
 dual camera on my target board.

 Good, I think there has also been a similar design based on a pxa270 SoC.
 How are cameras switched in your case? You probably have some additional
 hardware logic to switch between them, right? So, you need some code to
 control that. I think you should even be able to do this automatically in
 your platform code using power hooks from the struct soc_camera_link. You
 could fail to power on a camera if another camera is currently active. In
 fact, I have to add a return code test to the call to icl->power(icl, 1)
 in soc_camera_open(); I'll do this for the final v4l2-dev version. Would
 this work for you, or do you have other requirements? In that case, can
 you describe your use-case in more detail: should both cameras be open by
 applications simultaneously (looks like not), do you need more explicit
 switching control than just "first open switches"? That shouldn't be the
 case, since you can even create a separate task that does nothing but
 keep the required camera device open.


Yes, exactly right. My H/W is designed to share the data pins and the mclk
and pclk pins between both cameras,
and they have to work mutually exclusively.
For now I'm working on s3c64xx with the soc camera subsystem, so there is
no way to make dual camera control work with VIDIOC_S_INPUT/VIDIOC_G_INPUT.
But the prior version of my driver was made to control the dual camera with
those S_INPUT/G_INPUT APIs.
Actually, with a single device node and switching cameras with S_INPUT and
G_INPUT, there is no way to mis-control the dual camera,
because both cameras work mutually exclusively.

To make it easier, you can take a look at the presentation I gave at
CELF ELC 2009 in San Francisco:

http://tree.celinuxforum.org/CelfPubWiki/ELC2009Presentations?action=AttachFile&do=get&target=Framework_for_digital_camera_in_linux-in_detail.ppt

I think it is a more decent way to control a dual camera. There is no need
to check whether the sensor is available or not this way. Just use
G_INPUT to check the currently active sensor and do S_INPUT to switch to
the other one.
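
A minimal user-space sketch of the S_INPUT/G_INPUT switching described
above (device path and input index are examples; error handling trimmed):

#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

int main(void)
{
	int fd = open("/dev/video0", O_RDWR);
	int input;

	ioctl(fd, VIDIOC_G_INPUT, &input);   /* which sensor is active? */
	printf("active input: %d\n", input);

	input = 1;                           /* switch to the second camera */
	if (ioctl(fd, VIDIOC_S_INPUT, &input) < 0)
		perror("VIDIOC_S_INPUT");
	return 0;
}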
Cheers,

Nate


 I hope you will consider this concept, and I also want to know your opinion.

 Thanks
 Guennadi
 ---
 Guennadi Liakhovetski, Ph.D.
 Freelance Open-Source Software Developer




-- 

DongSoo, Nathaniel Kim
Engineer
Mobile S/W Platform Lab.
Digital Media & Communications R&D Centre
Samsung Electronics CO., LTD.
e-mail : dongsoo@gmail.com
  dongsoo45@samsung.com



Re: libv4l release: 0.5.97: the whitebalance release!

2009-04-16 Thread Hans de Goede



On 04/16/2009 08:16 AM, Gilles Gigan wrote:

Hans,
I have tested libv4lconvert with a PCI Hauppauge HVR1300 DVB-T and
found that v4lconvert_create() returns NULL. The problem comes from
the shm_open calls in v4lcontrol_create() in libv4lcontrol.c (lines
187 and 190). libv4lconvert constructs the shared memory name based on
the video device's name, and in this case the video device's name
(literally "Hauppauge WinTV-HVR1300 DVB-T/H") contains a slash, which
makes both calls to shm_open() fail. I can put together a quick patch
to replace '/' with '-' or whitespace if you want.
Gilles



Hi,

Thanks for reporting this! Can you please test the attached patch to see if it
fixes this?

Thanks,

Hans




On Wed, Apr 15, 2009 at 10:36 PM, Hans de Goede j.w.r.dego...@hhs.nl  wrote:

Hi All,

As the version number shows, this is a beta release of the 0.6.x series.
The big change here is the addition of video processing to libv4l.
Currently this only does whitebalance and normalizing (which turns out
to be useless for most cams), but the basic framework for doing video
processing, and being able to control it through fake v4l2 controls using
for example v4l2ucp, is there.

Currently only whitebalancing is enabled, and only on Pixart (pac) webcams
(which benefit tremendously from this). To test this with other webcams
(after installing this release) do:

export LIBV4LCONTROL_CONTROLS=15
LD_PRELOAD=/usr/lib/libv4l/v4l2convert.so v4l2ucp

Notice the whitebalance and normalize checkboxes in v4l2ucp,
as well as low and high limits for normalize.

Now start your favorite webcam viewing app and play around with the
2 checkboxes. Note normalize seems to be useless in most cases. If
whitebalancing makes a *strongly noticeable* difference for your webcam,
please mail me info about your cam (the USB ID); then I can add it to
the list of cams which will have the whitebalancing algorithm (and the v4l2
control to enable/disable it) enabled by default.

Unfortunately doing video processing can be quite expensive; for example,
whitebalancing is quite hard to do in YUV space, so doing white balancing
with the pac7302 with an app which wants YUV changes the flow from
pixart-jpeg -> yuv420 -> rotate90
to:
pixart-jpeg -> rgb24 -> whitebalance -> yuv420 -> rotate90

This is not a problem for cams which deliver (compressed) raw bayer,
as bayer is rgb too, so I've implemented a version of the whitebalancing
algorithm which operates directly on bayer data, so for bayer cams
(like the pac207) it goes from:
bayer -> yuv
to:
bayer -> whitebalance -> yuv

For the near future I plan to change the code so that the analyse phase
(which does not get done every frame) creates per-component lookup tables.
This will make it easier to stack multiple effects in one pass without
special-casing it, as the current normalize+whitebalance-in-one-pass code
does. Then we can add for example gamma correction with a negligible
performance impact (when already doing white balancing, that is).


libv4l-0.5.97
-
* As the version number shows, this is a beta release of the 0.6.x series.
  The big change here is the addition of video processing to libv4l.
  Currently this only does whitebalance and normalizing (which turns out
  to be useless for most cams), but the basic framework for doing video
  processing, and being able to control it through fake v4l2 controls using
  for example v4l2ucp, is there.
  The initial version of this code was written by 3 of my computer science
  students: Elmar Kleijn, Sjoerd Piepenbrink and Radjnies Bhansingh
* Currently whitebalancing gets enabled based on USB IDs, and it only gets
  enabled for Pixart webcams. You can force it on with other
  webcams by setting the environment variable LIBV4LCONTROL_CONTROLS; this
  sets a bitmask enabling certain v4l2 controls which control the video
  processing. Set it to 15 to enable both whitebalancing and normalize. You
  can then change the settings using a v4l2 control panel like v4l2ucp
* Only report / allow supported destination formats in enum_fmt / try_fmt /
  g_fmt / s_fmt when processing, rotating or flipping.
* Some applications / libs (*cough* gstreamer *cough*) will not work
  correctly with planar YUV formats when the width is not a multiple of 8,
  so crop widths which are not a multiple of 8 to the nearest multiple of 8
  when converting to planar YUV
* Add dependency generation to libv4l by: Gilles Gigan
gilles.gi...@gmail.com
* Add support to use orientation from VIDIOC_ENUMINPUT by:
  Adam Baker li...@baker-net.org.uk
* sn9c20x cams have occasional bad jpeg frames, drop these to avoid the
  flickering effect they cause, by: Brian Johnson brij...@gmail.com
* adjust libv4l's upside down cam detection to also work with devices
  which have the usb interface as parent instead of the usb device
* fix libv4l upside down detection for the new v4l minor numbering scheme
* fix reading outside of the source memory when doing yuv420->rgb conversion


Get it here:

Re: [PATCH 5/5] soc-camera: Convert to a platform driver

2009-04-16 Thread Guennadi Liakhovetski
On Thu, 16 Apr 2009, Dongsoo, Nathaniel Kim wrote:

 Hello Guennadi,
 
 
 Reviewing your patch, I've become curious about one thing.
 I think your soc camera subsystem covers multiple camera
 devices (sensors) on one target board, but if that is true I'm afraid
 I'm confused about how to handle them properly.
 Because according to your patch, video_dev_create() takes the camera
 device as a parameter, and it seems to create a device node for each
 camera device.

This patch is a preparatory step for the v4l2-(sub)dev conversion. With it,
yes (I think), a video device will be created for every camera registered
on the platform level, but only the one(s) that probed successfully will
actually work; the others will return -ENODEV on open().

 It means, if I have one camera host and several camera devices, there
 should be several device nodes for the camera devices, but they cannot be
 used at the same time, because a typical camera host (camera interface)
 can handle only one camera device at a time. But multiple device nodes
 mean we can open and handle them at the same time.

 How about registering the camera host device as the v4l2 device and
 making each camera device an input which could be handled using the
 VIDIOC_S_INPUT/G_INPUT API?

There are also cases when you have several cameras simultaneously (think
for example about stereo vision), even though we don't have any such cases
just yet.

 Actually, I'm working on S3C64xx camera interface driver with soc
 camera subsystem,

Looking forward to it! :-)

 and I'm facing that issue right now because I've got
 dual camera on my target board.

Good, I think there has also been a similar design based on a pxa270 SoC.
How are cameras switched in your case? You probably have some additional
hardware logic to switch between them, right? So, you need some code to
control that. I think you should even be able to do this automatically in
your platform code using power hooks from the struct soc_camera_link. You
could fail to power on a camera if another camera is currently active. In
fact, I have to add a return code test to the call to icl->power(icl, 1)
in soc_camera_open(); I'll do this for the final v4l2-dev version. Would
this work for you, or do you have other requirements? In that case, can
you describe your use-case in more detail: should both cameras be open by
applications simultaneously (looks like not), do you need more explicit
switching control than just "first open switches"? That shouldn't be the
case, since you can even create a separate task that does nothing but
keep the required camera device open.
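
A board-code sketch of that idea (the hook signature is assumed from the
icl->power(icl, 1) call above; names are illustrative):

/* Refuse to power one camera while the other holds the shared pins. */
static int active_cam = -1;	/* camera id currently powered, -1 = none */

static int board_camera_power(struct soc_camera_link *icl, int on)
{
	int id = icl->bus_id;	/* assume each camera gets a distinct id */

	if (on) {
		if (active_cam >= 0 && active_cam != id)
			return -EBUSY;	/* other camera is currently active */
		active_cam = id;
		/* ... enable this sensor's power / GPIOs ... */
	} else {
		active_cam = -1;
		/* ... power the sensor down ... */
	}
	return 0;
}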

 I hope you will consider this concept, and I also want to know your opinion.

Thanks
Guennadi
---
Guennadi Liakhovetski, Ph.D.
Freelance Open-Source Software Developer


Re: [PATCH 5/5] soc-camera: Convert to a platform driver

2009-04-16 Thread Guennadi Liakhovetski
On Thu, 16 Apr 2009, Dongsoo, Nathaniel Kim wrote:

 Hello Guennadi,
 
 On Thu, Apr 16, 2009 at 5:58 PM, Guennadi Liakhovetski
 g.liakhovet...@gmx.de wrote:
  On Thu, 16 Apr 2009, Dongsoo, Nathaniel Kim wrote:
 
  Hello Guennadi,
 
 
  Reviewing your patch, I've become curious about one thing.
  I think your soc camera subsystem covers multiple camera
  devices (sensors) on one target board, but if that is true I'm afraid
  I'm confused about how to handle them properly.
  Because according to your patch, video_dev_create() takes the camera
  device as a parameter, and it seems to create a device node for each
  camera device.
 
  This patch is a preparatory step for the v4l2-(sub)dev conversion. With it,
  yes (I think), a video device will be created for every camera registered
  on the platform level, but only the one(s) that probed successfully will
  actually work; the others will return -ENODEV on open().
 
  It means, if I have one camera host and several camera devices, there
  should be several device nodes for the camera devices, but they cannot be
  used at the same time, because a typical camera host (camera interface)
  can handle only one camera device at a time. But multiple device nodes
  mean we can open and handle them at the same time.

  How about registering the camera host device as the v4l2 device and
  making each camera device an input which could be handled using the
  VIDIOC_S_INPUT/G_INPUT API?
 
  There are also cases, when you have several cameras simultaneously (think
  for example about stereo vision), even though we don't have any such cases
  just yet.
 
 I think there are some specific camera interfaces for stereo cameras,
 like the stereo camera controller chip from Epson.

 But in the case of a camera interface which can handle only one single
 camera at a time, I strongly believe that we should use only one
 device node for the camera.
 I mean the device node should be the camera interface, not the sensor
 device. If you are using a stereo camera controller chip, you can make
 that work with a couple of device nodes, like /dev/video0 and /dev/video1.

There are also some generic CMOS camera sensors that support a stereo mode,
e.g., the mt9v022. In this case you would do the actual stereo processing
in host software, I think. The sensors just provide some synchronisation
possibilities. And you would need both sensors in user-space over video0
and video1. Also, the i.MX31 datasheet says the (single) camera interface
can handle up to two cameras (simultaneously); however, I haven't found any
details on how this could be supported in software, but I didn't look hard
either, because I didn't need it until now.

  Actually, I'm working on S3C64xx camera interface driver with soc
  camera subsystem,
 
  Looking forward to it! :-)
 
  and I'm facing that issue right now because I've got
  dual camera on my target board.
 
  Good, I think there has also been a similar design based on a pxa270 SoC.
  How are cameras switched in your case? You probably have some additional
  hardware logic to switch between them, right? So, you need some code to
  control that. I think you should even be able to do this automatically in
  your platform code using power hooks from the struct soc_camera_link. You
  could fail to power on a camera if another camera is currently active. In
  fact, I have to add a return code test to the call to icl->power(icl, 1)
  in soc_camera_open(); I'll do this for the final v4l2-dev version. Would
  this work for you, or do you have other requirements? In that case, can
  you describe your use-case in more detail: should both cameras be open by
  applications simultaneously (looks like not), do you need more explicit
  switching control than just "first open switches"? That shouldn't be the
  case, since you can even create a separate task that does nothing but
  keep the required camera device open.
 
 
 Yes, exactly right. My H/W is designed to share the data pins and the mclk
 and pclk pins between both cameras,
 and they have to work mutually exclusively.
 For now I'm working on s3c64xx with the soc camera subsystem, so there is
 no way to make dual camera control work with VIDIOC_S_INPUT/VIDIOC_G_INPUT.
 But the prior version of my driver was made to control the dual camera with
 those S_INPUT/G_INPUT APIs.
 Actually, with a single device node and switching cameras with S_INPUT and
 G_INPUT, there is no way to mis-control the dual camera,
 because both cameras work mutually exclusively.
 
 To make it easier, you can take a look at the presentation I gave at
 CELF ELC 2009 in San Francisco:

 http://tree.celinuxforum.org/CelfPubWiki/ELC2009Presentations?action=AttachFile&do=get&target=Framework_for_digital_camera_in_linux-in_detail.ppt
 
 I think it is a more decent way to control a dual camera. There is no need
 to check whether the sensor is available or not this way. Just use
 G_INPUT to check the currently active sensor and do S_INPUT to switch to
 the other one.

I understand your idea, but I don't see any 

Re: [PATCH 5/5] soc-camera: Convert to a platform driver

2009-04-16 Thread Dongsoo, Nathaniel Kim
Hi Guennadi,

On Thu, Apr 16, 2009 at 7:30 PM, Guennadi Liakhovetski
g.liakhovet...@gmx.de wrote:
 On Thu, 16 Apr 2009, Dongsoo, Nathaniel Kim wrote:

 Hello Guennadi,

 On Thu, Apr 16, 2009 at 5:58 PM, Guennadi Liakhovetski
 g.liakhovet...@gmx.de wrote:
  On Thu, 16 Apr 2009, Dongsoo, Nathaniel Kim wrote:
 
  Hello Guennadi,
 
 
  Reviewing your patch, I've become curious about one thing.
  I think your soc camera subsystem covers multiple camera
  devices (sensors) on one target board, but if that is true I'm afraid
  I'm confused about how to handle them properly.
  Because according to your patch, video_dev_create() takes the camera
  device as a parameter, and it seems to create a device node for each
  camera device.
 
  This patch is a preparatory step for the v4l2-(sub)dev conversion. With it,
  yes (I think), a video device will be created for every camera registered
  on the platform level, but only the one(s) that probed successfully will
  actually work; the others will return -ENODEV on open().
 
  It means, if I have one camera host and several camera devices, there
  should be several device nodes for the camera devices, but they cannot be
  used at the same time, because a typical camera host (camera interface)
  can handle only one camera device at a time. But multiple device nodes
  mean we can open and handle them at the same time.

  How about registering the camera host device as the v4l2 device and
  making each camera device an input which could be handled using the
  VIDIOC_S_INPUT/G_INPUT API?
 
  There are also cases, when you have several cameras simultaneously (think
  for example about stereo vision), even though we don't have any such cases
  just yet.

 I think there are some specific camera interfaces for stereo cameras,
 like the stereo camera controller chip from Epson.

 But in the case of a camera interface which can handle only one single
 camera at a time, I strongly believe that we should use only one
 device node for the camera.
 I mean the device node should be the camera interface, not the sensor
 device. If you are using a stereo camera controller chip, you can make
 that work with a couple of device nodes, like /dev/video0 and /dev/video1.

 There are also some generic CMOS camera sensors that support a stereo mode,
 e.g., the mt9v022. In this case you would do the actual stereo processing
 in host software, I think. The sensors just provide some synchronisation
 possibilities. And you would need both sensors in user-space over video0
 and video1. Also, the i.MX31 datasheet says the (single) camera interface
 can handle up to two cameras (simultaneously); however, I haven't found any
 details on how this could be supported in software, but I didn't look hard
 either, because I didn't need it until now.

Oh, interesting. I should look for the mt9v022 datasheet.
BTW, in the OMAP3 user manual you can also see that two cameras can be
opened at once (with different clocks and so on), but it says also that
only one camera's data can be handled by the ISP in the OMAP.
I think the Freescale CPU case could be the same. (Sorry, I'm not sure.)


  Actually, I'm working on S3C64xx camera interface driver with soc
  camera subsystem,
 
  Looking forward to it! :-)
 
  and I'm facing that issue right now because I've got
  dual camera on my target board.
 
  Good, I think there has also been a similar design based on a pxa270 SoC.
  How are cameras switched in your case? You probably have some additional
  hardware logic to switch between them, right? So, you need some code to
  control that. I think you should even be able to do this automatically in
  your platform code using power hooks from the struct soc_camera_link. You
  could fail to power on a camera if another camera is currently active. In
  fact, I have to add a return code test to the call to icl->power(icl, 1)
  in soc_camera_open(); I'll do this for the final v4l2-dev version. Would
  this work for you, or do you have other requirements? In that case, can
  you describe your use-case in more detail: should both cameras be open by
  applications simultaneously (looks like not), do you need more explicit
  switching control than just "first open switches"? That shouldn't be the
  case, since you can even create a separate task that does nothing but
  keep the required camera device open.
 

 Yes, exactly right. My H/W is designed to share the data pins and the mclk
 and pclk pins between both cameras,
 and they have to work mutually exclusively.
 For now I'm working on s3c64xx with the soc camera subsystem, so there is
 no way to make dual camera control work with VIDIOC_S_INPUT/VIDIOC_G_INPUT.
 But the prior version of my driver was made to control the dual camera with
 those S_INPUT/G_INPUT APIs.
 Actually, with a single device node and switching cameras with S_INPUT and
 G_INPUT, there is no way to mis-control the dual camera,
 because both cameras work mutually exclusively.

 To make it easier, you can take a look at the presentation I gave at
 CELF ELC 2009 in San Francisco.
 Here it is the 

Re: [PATCH 5/5] soc-camera: Convert to a platform driver

2009-04-16 Thread Guennadi Liakhovetski
On Thu, 16 Apr 2009, Dongsoo, Nathaniel Kim wrote:

 My concern is all about the logical thing: why can't we open a device
 node even if it is not opened by any other process?

The answer is of course "because the other node is currently active", but
I can understand the sort of confusion that the user might have: we have
two independent device nodes, but only one of them can be active at any
given time. So, in a way you're right, this might not be very intuitive.

 I have been working on dual cameras with Linux for a few years, and
 everybody I work with wants the camera device node not to fail to open
 in the first place. Actually, I'm a mobile phone developer and I've
 seen so many exceptional cases in the field with dual camera
 applications. From all my experience, my conclusion is:
 "Don't confuse the user with device-open failures." I want you
 to know that no offence is meant; I just want to make it better.

Sure, I appreciate your opinion and respect your experience, but let's 
have a look at the current concept:

1. the platform has N cameras on camera interface X
2. soc_camera.c finds the matching interface X and creates M (<= N) nodes
for all successfully probed devices.
3. in the beginning, as long as no device is open, all cameras are powered
down / inactive.
4. you then open() one of them, it gets powered on / activated; the others
become inaccessible as long as one is used.
5. this way switching is easy - you're sure that when no device is open,
all cameras are powered down, so you can safely select any of them.
6. module reference-counting is easy too - every open() of a device node
increments the use-count

With your proposed approach:

1. the platform has N cameras on camera interface X.
2. as long as at least one camera probed successfully for interface X, you 
create a videoX device and count inputs for it - successfully probed 
cameras.
3. you open videoX, one default camera gets activated immediately - not 
all applications issue S_INPUT, so, there has to be a default.
4. if an S_INPUT is issued, you have to verify whether any camera is
currently active / capturing; if none - switch to the requested one, if
one is active - return -EBUSY.
5. reference-counting and guaranteeing consistency are more difficult, as
is handling camera driver loading / unloading.

So, I would say, your approach adds complexity and asymmetry. Can it be
that one camera client has several inputs itself, e.g., a decoder? In any
case, I wouldn't do this now; if we do decide in favour of your approach,
then only after the v4l2-device transition, please.

 But the mt9v022 case, I should need some research.

Ok.

Thanks
Guennadi
---
Guennadi Liakhovetski, Ph.D.
Freelance Open-Source Software Developer


Re: [linux-dvb] DVB-T USB dib0700 device recommendations?

2009-04-16 Thread hermann pitton
Hi,

On Thursday, 2009-04-16 at 18:14 +1000, covert covert wrote:
 
  That's weird. So the USB controller on the Nova-TD and the host controller
  on the SB700 are incompatible?
 
 
 I tried a few different USB tuners with an SB700 based motherboard
 until I found out the drivers were not up to scratch for the USB on
 the SB700 and caused a lot of "dvb-usb: bulk message failed" errors.

Does somebody know if the problem is still there even with this message

ehci_hcd 0000:00:12.2: applying AMD SB600/SB700 USB freeze workaround
ehci_hcd 0000:00:13.2: applying AMD SB600/SB700 USB freeze workaround

visible in dmesg, caused by this patch?

http://lkml.org/lkml/2008/12/3/287

Thanks,
Hermann




Re: [PATCH 5/5] soc-camera: Convert to a platform driver

2009-04-16 Thread Hans Verkuil

 On Thu, 16 Apr 2009, Dongsoo, Nathaniel Kim wrote:

 My concern is all about the logical thing: why can't we open a device
 node even if it is not opened by any other process?

 The answer is of course "because the other node is currently active", but
 I can understand the sort of confusion that the user might have: we have
 two independent device nodes, but only one of them can be active at any
 given time. So, in a way you're right, this might not be very intuitive.

 I have been working on dual cameras with Linux for a few years, and
 everybody I work with wants the camera device node not to fail to open
 in the first place. Actually, I'm a mobile phone developer and I've
 seen so many exceptional cases in the field with dual camera
 applications. From all my experience, my conclusion is:
 "Don't confuse the user with device-open failures." I want you
 to know that no offence is meant; I just want to make it better.

 Sure, I appreciate your opinion and respect your experience, but let's
 have a look at the current concept:

 1. the platform has N cameras on camera interface X
 2. soc_camera.c finds the matching interface X and creates M (<= N) nodes
 for all successfully probed devices.
 3. in the beginning, as long as no device is open, all cameras are powered
 down / inactive.
 4. you then open() one of them, it gets powered on / activated; the others
 become inaccessible as long as one is used.
 5. this way switching is easy - you're sure that when no device is open,
 all cameras are powered down, so you can safely select any of them.
 6. module reference-counting is easy too - every open() of a device node
 increments the use-count

 With your proposed approach:

 1. the platform has N cameras on camera interface X.
 2. as long as at least one camera probed successfully for interface X, you
 create a videoX device and count inputs for it - successfully probed
 cameras.
 3. you open videoX, one default camera gets activated immediately - not
 all applications issue S_INPUT, so, there has to be a default.
 4. if an S_INPUT is issued, you have to verify whether any camera is
 currently active / capturing; if none - switch to the requested one, if
 one is active - return -EBUSY.
 5. reference-counting and guaranteeing consistency are more difficult, as
 is handling camera driver loading / unloading.

 So, I would say, your approach adds complexity and asymmetry. Can it be
 that one camera client has several inputs itself, e.g., a decoder? In any
 case, I wouldn't do this now; if we do decide in favour of your approach,
 then only after the v4l2-device transition, please.

If you have mutually exclusive sources, then those should be implemented
as one device with multiple inputs. There is really no difference between
a TV capture driver that selects between a tuner and S-Video input, and a
camera driver that selects between multiple cameras.

A completely different question is whether soc-camera should be used at
all for this. The RFC Nate posted today said that this implementation was
based around the S3C64XX SoC. The limitation of allowing only one camera
at a time is a limitation of the hardware implementation, not of the SoC
as far as I could tell.

Given the fact that the SoC also supports codecs and other fun stuff, I
really wonder whether there shouldn't be a proper driver for that SoC that
supports all those features. Similar to what TI is doing for their davinci
platform. It is my understanding that soc-camera is really meant as a
simple framework around a sensor device, and not as a full-featured
implementation for codecs, previews, etc. Please correct me if I'm wrong.

Regards,

  Hans

 But the mt9v022 case, I should need some research.

 Ok.

 Thanks
 Guennadi
 ---
 Guennadi Liakhovetski, Ph.D.
 Freelance Open-Source Software Developer



-- 
Hans Verkuil - video4linux developer - sponsored by TANDBERG



Re: [PATCH 5/5] soc-camera: Convert to a platform driver

2009-04-16 Thread Dongsoo, Nathaniel Kim
Hello Guennadi,

On Thu, Apr 16, 2009 at 9:06 PM, Guennadi Liakhovetski
g.liakhovet...@gmx.de wrote:
 On Thu, 16 Apr 2009, Dongsoo, Nathaniel Kim wrote:

 My concern is all about the logical thing: why can't we open a device
 node even if it is not opened by any other process?

 The answer is of course "because the other node is currently active", but
 I can understand the sort of confusion that the user might have: we have
 two independent device nodes, but only one of them can be active at any
 given time. So, in a way you're right, this might not be very intuitive.

 I have been working on dual cameras with Linux for a few years, and
 everybody I work with wants the camera device node not to fail to open
 in the first place. Actually, I'm a mobile phone developer and I've
 seen so many exceptional cases in the field with dual camera
 applications. From all my experience, my conclusion is:
 "Don't confuse the user with device-open failures." I want you
 to know that no offence is meant; I just want to make it better.

 Sure, I appreciate your opinion and respect your experience, but let's
 have a look at the current concept:

 1. the platform has N cameras on camera interface X
 2. soc_camera.c finds the matching interface X and creates M (<= N) nodes
 for all successfully probed devices.
 3. in the beginning, as long as no device is open, all cameras are powered
 down / inactive.
 4. you then open() one of them, it gets powered on / activated; the others
 become inaccessible as long as one is used.
 5. this way switching is easy - you're sure that when no device is open,
 all cameras are powered down, so you can safely select any of them.
 6. module reference-counting is easy too - every open() of a device node
 increments the use-count


Honestly, it is not that bad, but consider multiple processes trying to
access the camera devices: if process A has already opened video0 and
process B tries to open video1, process B faces an error even though it
checked whether video1 was already open and verified that it was not.


 With your proposed approach:

 1. the platform has N cameras on camera interface X.
 2. as long as at least one camera probed successfully for interface X, you
 create a videoX device and count inputs for it - successfully probed
 cameras.
 3. you open videoX, one default camera gets activated immediately - not
 all applications issue S_INPUT, so, there has to be a default.
 4. if an S_INPUT is issued, you have to verify whether any camera is
 currently active / capturing; if none - switch to the requested one, if
 one is active - return -EBUSY.
 5. reference-counting and guaranteeing consistency are more difficult, as
 is handling camera driver loading / unloading.

Oops, I forgot to say that we would need to require legacy v4l2 applications
to use VIDIOC_S_INPUT after opening the device.
And every S_INPUT issued should come after a G_INPUT, like every "set"
API in v4l2.



 So, I would say, your approach adds complexity and asymmetry. Can it be
 that one camera client has several inputs itself, e.g., a decoder? In any
 case, I wouldn't do this now; if we do decide in favour of your approach,
 then only after the v4l2-device transition, please.


Of course. I didn't mean to disturb your transition job. Please do
your priority job first.

And about the "camera client with several inputs" question: I will say that
almost every 3G UMTS phone has a dual camera on it, and we can consider
that every 3G UMTS smart phone could have a dual camera on it with an soc
camera solution.
BTW, thank you for this conversation. It was a pleasure to discuss
this issue with you.
Cheers,

Nate

 But the mt9v022 case, I should need some research.

 Ok.

 Thanks
 Guennadi
 ---
 Guennadi Liakhovetski, Ph.D.
 Freelance Open-Source Software Developer




-- 

DongSoo, Nathaniel Kim
Engineer
Mobile S/W Platform Lab.
Digital Media & Communications R&D Centre
Samsung Electronics CO., LTD.
e-mail : dongsoo@gmail.com
  dongsoo45@samsung.com



Re: [PATCH 5/5] soc-camera: Convert to a platform driver

2009-04-16 Thread Guennadi Liakhovetski
On Thu, 16 Apr 2009, Dongsoo, Nathaniel Kim wrote:

 Hello Guennadi,
 
 On Thu, Apr 16, 2009 at 9:06 PM, Guennadi Liakhovetski
 g.liakhovet...@gmx.de wrote:
  3. you open videoX, one default camera gets activated immediately - not
  all applications issue S_INPUT, so, there has to be a default.
  4. if an S_INPUT is issued, you have to verify whether any camera is
  currently active / capturing; if none - switch to the requested one, if
  one is active - return -EBUSY.
  5. reference-counting and guaranteeing consistency are more difficult, as
  is handling camera driver loading / unloading.
 
 Oops, I forgot to say that we would need to require legacy v4l2 applications
 to use VIDIOC_S_INPUT after opening the device.
 And every S_INPUT issued should come after a G_INPUT, like every "set"
 API in v4l2.

Hm? Does the API require it? If not, I don't think we should enforce it.
And what do you mean by "legacy v4l2 applications" - which applications
are not legacy?

  So, I would say, your approach adds complexity and asymmetry. Can it be
  that one camera client has several inputs itself, e.g., a decoder? In any
  case, I wouldn't do this now; if we do decide in favour of your approach,
  then only after the v4l2-device transition, please.
 
 
 Of course. I didn't mean to disturb your transition job. Please do
 your priority job first.
 
 And about the "camera client with several inputs" question: I will say that
 almost every 3G UMTS phone has a dual camera on it, and we can consider
 that every 3G UMTS smart phone could have a dual camera on it with an soc
 camera solution.

No, sorry, this wasn't my question. By "client" I meant one camera or
decoder or whatever chip connects to a camera host. I.e., if we have a
single chip with several inputs, that should logically be handled with the
S_INPUT ioctl; this would further add to the confusion of using different
inputs on one video device to switch between chips or between inputs /
functions on one chip.

Thanks
Guennadi
---
Guennadi Liakhovetski, Ph.D.
Freelance Open-Source Software Developer


Re: [PATCH 5/5] soc-camera: Convert to a platform driver

2009-04-16 Thread Guennadi Liakhovetski
On Thu, 16 Apr 2009, Hans Verkuil wrote:

 If you have mutually exclusive sources, then those should be implemented
 as one device with multiple inputs. There is really no difference between
 a TV capture driver that selects between a tuner and S-Video input, and a
 camera driver that selects between multiple cameras.
 
 A completely different question is whether soc-camera should be used at
 all for this. The RFC Nate posted today said that this implementation was
 based around the S3C64XX SoC. The limitation of allowing only one camera
 at a time is a limitation of the hardware implementation, not of the SoC
 as far as I could tell.

This is the opposite of how I understood it. The S3C6400 only has one set
of camera interface signals, so it is supposed to only handle one camera
(at a time). As for mutual exclusivity - this is not enforced by the
soc-camera framework; rather, it is a limitation of the hardware - SoC and
implementation. The implementor wants to prohibit access to the inactive
camera, and that's where the conflict arises. The framework would then
have to treat a solution with one host and multiple cameras differently
depending on the board implementation: if they are not mutually exclusive,
map them to multiple video devices; if they are - map them to multiple
inputs on one video device...

 Given the fact that the SoC also supports codecs and other fun stuff, I
 really wonder whether there shouldn't be a proper driver for that SoC that
 supports all those features. Similar to what TI is doing for their davinci
 platform. It is my understanding that soc-camera is really meant as a
 simple framework around a sensor device, and not as a full-featured
 implementation for codecs, previews, etc. Please correct me if I'm wrong.

Having briefly looked at s3c6400, its video interface doesn't seem to be 
more advanced than, for instance, that of the PXA270 SoC. Ok, maybe only 
the preview path is missing on PXA.

The soc-camera framework has been designed as a standard framework between
SoCs and video data sources, with the primary goal of allowing driver reuse.
The functionality that it implements is what was required at that time,
plus what has been added since then. Yes, it does impose a couple of
simplifications on the current V4L2 API. So, of course, a decision has to
be made whether or not to use it in every specific case.

Thanks
Guennadi
---
Guennadi Liakhovetski, Ph.D.
Freelance Open-Source Software Developer


Re: [PATCH 5/5] soc-camera: Convert to a platform driver

2009-04-16 Thread Guennadi Liakhovetski
On Thu, 16 Apr 2009, Dongsoo Kim wrote:

   And about the "camera client with several inputs" question: I will say
   that almost every 3G UMTS phone has a dual camera on it, and we can
   consider that every 3G UMTS smart phone could have a dual camera on it
   with an soc camera solution.
  
  No, sorry, this wasn't my question. By "client" I meant one camera or
  decoder or whatever chip connects to a camera host. I.e., if we have a
  single chip with several inputs, that should logically be handled with the
  S_INPUT ioctl; this would further add to the confusion of using different
  inputs on one video device to switch between chips or between inputs /
  functions on one chip.
 
 Yes, exactly. It was "single chip with several inputs" that I intended to
 say, but I still don't get what confusion you mean. Sorry ;-)
 Cheers,

Wow, so on those phones a dual camera is a single (CMOS) controller with
two sensors / lenses / filters?... Cool, do you have an example of such a
camera to look for on the net? Preferably with a datasheet available.

The confusion I meant is that in this case switching between inputs
sometimes switches you to another controller and sometimes to another
function within the same controller...

Thanks
Guennadi
---
Guennadi Liakhovetski, Ph.D.
Freelance Open-Source Software Developer


Re: [PATCH 5/5] soc-camera: Convert to a platform driver

2009-04-16 Thread Dongsoo Kim


On 2009-04-16, at 11:56 PM, Guennadi Liakhovetski wrote:


On Thu, 16 Apr 2009, Dongsoo Kim wrote:

 And about the "camera client with several inputs" question: I will say
 that almost every 3G UMTS phone has a dual camera on it, and we can
 consider that every 3G UMTS smart phone could have a dual camera on it
 with an soc camera solution.

No, sorry, this wasn't my question. By "client" I meant one camera or
decoder or whatever chip connects to a camera host. I.e., if we have a
single chip with several inputs, that should logically be handled with the
S_INPUT ioctl; this would further add to the confusion of using different
inputs on one video device to switch between chips or between inputs /
functions on one chip.

 Yes, exactly. It was "single chip with several inputs" that I intended to
 say, but I still don't get what confusion you mean. Sorry ;-)

Wow, so on those phones a dual camera is a single (CMOS) controller with
two sensors / lenses / filters?... Cool, do you have an example of such a
camera to look for on the net? Preferably with a datasheet available.



Oops, sorry, I didn't mean that.
I just meant one single camera interface on the application processor with
two camera modules (sensor, lens, ISP) connected. Sorry, I explained it
badly.

I considered this a single camera interface with several inputs.

The confusion I meant is that in this case switching between inputs
sometimes switches you to another controller and sometimes to another
function within the same controller...


I think we don't need to worry about that if we can query the camera
inputs clearly.

Cheers,

Nate



Thanks
Guennadi
---
Guennadi Liakhovetski, Ph.D.
Freelance Open-Source Software Developer




Re: Some questions about mr97310 controls (continuing previous thread on mr97310a.c)

2009-04-16 Thread Thomas Kaiser

Hello Theodore

My answers/comments inline.

On 04/16/2009 01:59 AM, Theodore Kilgore wrote:



Thomas,

A few questions in the text below.


On Thu, 5 Mar 2009, Thomas Kaiser wrote:


Hello Theodore

kilg...@banach.math.auburn.edu wrote:



On Wed, 4 Mar 2009, Thomas Kaiser wrote:
As to the actual contents of the header, as you describe things,

0. Do you have any idea how to account for the discrepancy between


 From a USB snoop:
FF FF 00 FF 96 64 xx 00 xx xx xx xx xx xx 00 00

and

In Linux the header looks like this:

FF FF 00 FF 96 64 xx 00 xx xx xx xx xx xx F0 00


(I am referring to the 00 00 as opposed to F0 00)? Or could this have 
happened somehow just because these were not two identical sessions?


In case I did not answer this one, I suspect it was probably different 
sessions. I can think of no other explanation which makes sense to me.




I don't remember what the differences were. The first is from Windoz
(usbsnoop) and the second is from Linux.





1. xx: don't know, but the value changes between 0x00 and 0x07


as I said, this signifies the image format, qua compression algorithm 
in use, or if 00 then no compression.


On the PAC207, the compression can be controlled with a register
called "Compression Balance size". So, I guess, depending on the value
set in the register, this value in the header will show what
compression level is set.


One of my questions:

Just how does it work to set the "Compression Balance size"? Is this
some kind of special command sequence? Are we able to set this to
whatever we want?


It looks like it. One can set a value from 0x0 to 0xff in the Compression
Balance size register (reg 0x4a).
In the pac207 Linux driver, this register is set to 0xff to turn off the
compression. When we use compression, 0x88 is set (I think the same
value as in Windoz). Hans did play with this register and found out
that the compression changes with different values.
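
As an illustration, such a write might be shaped like this (an
editorial sketch only; write_reg() stands in for whatever
single-register write helper the pac207 driver really uses, it is not
a real API):

#include <stdint.h>

/* Sketch: toggle PAC207 compression via the Compression Balance
 * size register (0x4a), using the values discussed above. */
#define PAC207_REG_COMP_BALANCE 0x4a

typedef void (*write_reg_fn)(uint8_t reg, uint8_t val);

static void pac207_set_compression(write_reg_fn write_reg, int enable)
{
	/* 0xff turns compression off; 0x88 matches the Windows default. */
	write_reg(PAC207_REG_COMP_BALANCE, enable ? 0x88 : 0xff);
}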


Hans, could you explain a bit more about what you found out?


2. xx: this is the actual pixel clock


So there is a control setting for this?


Yes, in the PAC207, register 2. (12 MHz divided by the value set).




3. xx: this is changing according to light conditions, from 0x03 (dark) to
0xfc (bright) (center)
4. xx: this is changing according to light conditions, from 0x03 (dark) to
0xfc (bright) (edge)
5. xx: set value Digital Gain of Red
6. xx: set value Digital Gain of Green
7. xx: set value Digital Gain of Blue
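
As an aside, taken together these notes suggest the shape of a SOF
parser. Here is a minimal editorial sketch (the struct and function
names are invented, not code from the pac207 driver; the pixel clock
uses the 12 MHz / register-value rule mentioned above):

#include <stdint.h>

/* Fields of the 16-byte PAC207 SOF header as described in this thread. */
struct pac207_sof {
	uint8_t  format;      /* byte 6: 0x00..0x07, compression-related */
	uint32_t pclk_hz;     /* byte 8: 12 MHz divided by this value    */
	uint8_t  lum_center;  /* byte 9: 0x03 (dark) .. 0xfc (bright)    */
	uint8_t  lum_edge;    /* byte 10: same scale, edge of the frame  */
	uint8_t  gain_r, gain_g, gain_b; /* bytes 11-13: digital gains   */
};

static int pac207_parse_sof(const uint8_t h[16], struct pac207_sof *sof)
{
	/* The fixed marker is FF FF 00 FF 96 64. */
	if (h[0] != 0xff || h[1] != 0xff || h[2] != 0x00 ||
	    h[3] != 0xff || h[4] != 0x96 || h[5] != 0x64)
		return -1;
	sof->format     = h[6];
	sof->pclk_hz    = h[8] ? 12000000u / h[8] : 0; /* avoid div by 0 */
	sof->lum_center = h[9];
	sof->lum_edge   = h[10];
	sof->gain_r     = h[11];
	sof->gain_g     = h[12];
	sof->gain_b     = h[13];
	return 0;
}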



Varying some old questions: Precisely what is meant by the value of
Digital Gain for XX where XX is one of Red, Green, or Blue? On what
scale is this measured? Is it some kind of standardized scale? Or is it
something which is camera-specific? Also, what does set mean in this
context? This last in view of the fact that this is data which the
camera provides for our presumed information, not something which we
are sending to the camera?


If I recall correctly, I just saw that these fields in the header have
the same value which I set in the digital gain of Red/Green/Blue
registers. Therefore, I called it the set value. But I don't remember
if a change of these registers had any impact on the picture.


The range for these registers is from 0x0 to 0xff but as I don't know 
what they do, I don't know any more :-(


Thomas


Re: [PATCH 0/5] soc-camera: convert to platform device

2009-04-16 Thread Robert Jarzmik
Robert Jarzmik robert.jarz...@free.fr writes:

 I need to make some additional tests with I2C loading/unloading, but
 otherwise it works perfectly for the (soc_camera / pxa_camera /
 mt9m111) combination.

Guennadi,

I did some testing, and there is something I don't understand in the new
device model.
This is the test case I'm considering:
 - I unload i2c-pxa, pxa-camera, mt9m111, soc-camera modules
 - I load pxa-camera, mt9m111, soc-camera modules
 - I then load i2c-pxa
=> the mt9m111 is not detected
 - I unload and reload mt9m111 and pxa_camera
=> not any better
 - I unload soc_camera, mt9m111, pxa_camera and reload
=> this time the video device is detected

What I'm getting at is that if soc_camera is loaded before the i2c host
driver, no camera will get any chance to work. Is that normal,
considering the new driver model?
I was naively thinking that there would be a rescan when control became
available for a sensor.

Cheers.

--
Robert


Re: RFC on proposed patches to mr97310a.c for gspca and v4l

2009-04-16 Thread Theodore Kilgore



On Thu, 16 Apr 2009, Kyle Guinn wrote:


On Wednesday 04 March 2009 02:41:05 Thomas Kaiser wrote:

Hello Theodore

kilg...@banach.math.auburn.edu wrote:

Also, after the byte indicator for the compression algorithm there are
some more bytes, and these almost definitely contain information which
could be valuable while doing image processing on the output. If they
are already kept and passed out of the module over to libv4lconvert,
then it would be very easy to do something with those bytes if it is
ever figured out precisely what they mean. But if it is not done now it
would have to be done then and would cause even more trouble.


I sent it already in private mail to you. Here is the observation I made
for the PAC207 SOF some years ago:

 From usb snoop.
FF FF 00 FF 96 64 xx 00 xx xx xx xx xx xx 00 00
1. xx: looks like random value
2. xx: changed from 0x03 to 0x0b
3. xx: changed from 0x06 to 0x49
4. xx: changed from 0x07 to 0x55
5. xx: static 0x96
6. xx: static 0x80
7. xx: static 0xa0

And I did play in Linux and could identify some fields :-) .
In Linux the header looks like this:

FF FF 00 FF 96 64 xx 00 xx xx xx xx xx xx F0 00
1. xx: don't know, but the value changes between 0x00 and 0x07
2. xx: this is the actual pixel clock
3. xx: this is changing according to light conditions, from 0x03 (dark) to
0xfc (bright) (center)
4. xx: this is changing according to light conditions, from 0x03 (dark) to
0xfc (bright) (edge)
5. xx: set value Digital Gain of Red
6. xx: set value Digital Gain of Green
7. xx: set value Digital Gain of Blue

Thomas


I've been looking through the frame headers sent by the MR97310A (the Aiptek
PenCam VGA+, 08ca:0111).  Here are my observations.

FF FF 00 FF 96 6x x0 xx xx xx xx xx

In binary that looks something like this:

11111111 11111111 00000000 11111111
10010110 011001aa a101bbbb bbbbbbbb
[third row of the bit layout, carrying the rest of B plus the C and D
fields, was lost in archiving]

All of the values look to be MSbit first.  A looks like a 3-bit frame sequence
number that seems to start with 1 and increments for each frame.
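
As an editorial aside, the solid part of that layout is easy to check
in code. This sketch (the helper name is invented) pulls the 3-bit A
field out of a captured 12-byte header:

#include <stdint.h>

/* Per the tentative layout above, A spans the low two bits of byte 5
 * (011001aa) and the top bit of byte 6 (a101....). Sketch only. */
static unsigned int mr97310a_frame_seq(const uint8_t hdr[12])
{
	return ((hdr[5] & 0x03) << 1) | (hdr[6] >> 7);
}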


Hmmm. This I never noticed. What you are saying is that the two bytes 6x
and x0 are variable? You are sure about that? What I have previously
experienced is that the first is always 64 with these cameras, and the
second one indicates the absence of compression (in which case it is 0,
which of course only arises for still cameras), or if there is data
compression then it is not zero. I have never seen this byte change
during a session with a camera. Here is a little table of what I have
previously witnessed, and perhaps you can suggest what to do in order to
see what I have been missing:


Camera          that byte   compression   solved, or not   streaming
Aiptek          00          no            N/A              no
Aiptek          50          yes           yes              both
the Sakar cam   00          no            N/A              no
ditto           50          yes           yes              both
Argus QuikClix  20          yes           no               doesn't work
Argus DC1620    50          yes           yes              doesn't work
CIF cameras     00          no            N/A              no
ditto           50          yes           yes              no
ditto           d0          yes           no               yes

Other strange facts are

-- that the Sakar camera, the Argus QuikClix, and the 
DC1620 all share the same USB ID of 0x93a:0x010f and yet only one of them 
will stream with the existing driver. The other two go through the 
motions, but the isoc packets do not actually get sent, so there is no 
image coming out. I do not understand the reason for this; I have been 
trying to figure it out and it is rather weird. I should add that, yes, 
those two cameras were said to be capable of streaming when I bought them. 
Could it be a problem of age? I don't expect that, but maybe.


-- the CIF cameras all share the USB id of 0x93a:0x010e (I bought several 
of them) and they all are using a different compression algorithm while 
streaming from what they use if running as still cameras in compressed 
mode. This leads to the question whether it is possible to set the 
compression algorithm during the initialization sequence, so that the 
camera also uses the 0x50 mode while streaming, because we already know 
how to use that mode.


But I have never seen the 0x64 0xX0 bytes used to count the frames. Could 
you tell me how to repeat that? It certainly would knock down the validity 
of the above table wouldn't it?


B, C, and D

might be brightness and contrast; minimum and maximum values for these vary
with the image size.

For 640x480, 320x240, and 160x120:
 dark scene (all black):
   B:  near 0
   C:  0x000
   D:  0xC60

 bright scene (all white):
   B:  usually 0xC15C
   C:  0xC60
   D:  0x000

For 352x288 and 176x144:
 dark scene (all black):
   B:  near 0
   C:  0x000
   D:  0xB5B

 bright scene (all white):
   B:  usually 0xB0FE
   C:  0xB53
   D:  0x007

B increases with increasing brightness.  C increases with more white 

Re: [PATCH 0/5] soc-camera: convert to platform device

2009-04-16 Thread Guennadi Liakhovetski
On Thu, 16 Apr 2009, Robert Jarzmik wrote:

 Robert Jarzmik robert.jarz...@free.fr writes:
 
  I need to make some additional tests with I2C loading/unloading, but
  otherwise it works perfectly for the (soc_camera / pxa_camera /
  mt9m111) combination.
 
 Guennadi,
 
 I did some testing, and there is something I don't understand in the
 new device model.
 This is the test case I'm considering:
  - I unload i2c-pxa, pxa-camera, mt9m111, soc-camera modules
  - I load pxa-camera, mt9m111, soc-camera modules
  - I then load i2c-pxa
 => the mt9m111 is not detected

correct

  - I unload and reload mt9m111 and pxa_camera
 => not any better

Actually, I think, in this case it should be found again, as long as you 
reload pxa-camera while i2c-pxa is already loaded.

  - I unload soc_camera, mt9m111, pxa_camera and reload
 => this time the video device is detected

 What I'm getting at is that if soc_camera is loaded before the i2c host
 driver, no camera will get any chance to work. Is that normal,
 considering the new driver model?
 I was naively thinking that there would be a rescan when control became
 available for a sensor.

Yes, unfortunately, it is normal :-( On the one hand, we shouldn't really
spend _too_ much time on this intermediate version, because, as I said, it
is just a preparatory step for v4l2-subdev. We just have to make sure it
doesn't introduce any significant regressions and doesn't crash too often.
OTOH, this is also how it is with v4l2-subdev. With it you first must have
the i2c-adapter driver loaded. Then, when a match between a camera host
and a camera client (sensor) platform device is detected, it is reported
to the v4l2-subdev core, which loads the respective camera i2c driver. If
you then unload the camera-host and i2c-adapter drivers and load the
camera-host driver first, it fails to get the adapter; and if you then
load the adapter, nothing else happens. To reprobe you have to unload and
reload the camera host driver.

Thanks
Guennadi
---
Guennadi Liakhovetski, Ph.D.
Freelance Open-Source Software Developer


[cron job] v4l-dvb daily build 2.6.22 and up: ERRORS, 2.6.16-2.6.21: ERRORS

2009-04-16 Thread Hans Verkuil
This message is generated daily by a cron job that builds v4l-dvb for
the kernels and architectures in the list below.

Results of the daily build of v4l-dvb:

date:        Thu Apr 16 19:00:03 CEST 2009
path:        http://www.linuxtv.org/hg/v4l-dvb
changeset:   11516:6ce311bdeee0
gcc version: gcc (GCC) 4.3.1
hardware:    x86_64
host os:     2.6.26

linux-2.6.22.19-armv5: OK
linux-2.6.23.12-armv5: OK
linux-2.6.24.7-armv5: OK
linux-2.6.25.11-armv5: OK
linux-2.6.26-armv5: OK
linux-2.6.27-armv5: OK
linux-2.6.28-armv5: OK
linux-2.6.29.1-armv5: OK
linux-2.6.30-rc1-armv5: OK
linux-2.6.27-armv5-ixp: OK
linux-2.6.28-armv5-ixp: OK
linux-2.6.29.1-armv5-ixp: OK
linux-2.6.30-rc1-armv5-ixp: WARNINGS
linux-2.6.28-armv5-omap2: OK
linux-2.6.29.1-armv5-omap2: OK
linux-2.6.30-rc1-armv5-omap2: WARNINGS
linux-2.6.22.19-i686: WARNINGS
linux-2.6.23.12-i686: ERRORS
linux-2.6.24.7-i686: OK
linux-2.6.25.11-i686: OK
linux-2.6.26-i686: OK
linux-2.6.27-i686: OK
linux-2.6.28-i686: OK
linux-2.6.29.1-i686: OK
linux-2.6.30-rc1-i686: WARNINGS
linux-2.6.23.12-m32r: OK
linux-2.6.24.7-m32r: OK
linux-2.6.25.11-m32r: OK
linux-2.6.26-m32r: OK
linux-2.6.27-m32r: OK
linux-2.6.28-m32r: OK
linux-2.6.29.1-m32r: OK
linux-2.6.30-rc1-m32r: OK
linux-2.6.22.19-mips: OK
linux-2.6.26-mips: OK
linux-2.6.27-mips: OK
linux-2.6.28-mips: OK
linux-2.6.29.1-mips: OK
linux-2.6.30-rc1-mips: WARNINGS
linux-2.6.27-powerpc64: OK
linux-2.6.28-powerpc64: OK
linux-2.6.29.1-powerpc64: OK
linux-2.6.30-rc1-powerpc64: WARNINGS
linux-2.6.22.19-x86_64: WARNINGS
linux-2.6.23.12-x86_64: ERRORS
linux-2.6.24.7-x86_64: OK
linux-2.6.25.11-x86_64: OK
linux-2.6.26-x86_64: OK
linux-2.6.27-x86_64: OK
linux-2.6.28-x86_64: OK
linux-2.6.29.1-x86_64: OK
linux-2.6.30-rc1-x86_64: WARNINGS
fw/apps: OK
sparse (linux-2.6.29.1): OK
sparse (linux-2.6.30-rc1): OK
linux-2.6.16.61-i686: ERRORS
linux-2.6.17.14-i686: ERRORS
linux-2.6.18.8-i686: ERRORS
linux-2.6.19.5-i686: WARNINGS
linux-2.6.20.21-i686: ERRORS
linux-2.6.21.7-i686: ERRORS
linux-2.6.16.61-x86_64: ERRORS
linux-2.6.17.14-x86_64: ERRORS
linux-2.6.18.8-x86_64: ERRORS
linux-2.6.19.5-x86_64: WARNINGS
linux-2.6.20.21-x86_64: ERRORS
linux-2.6.21.7-x86_64: ERRORS

Detailed results are available here:

http://www.xs4all.nl/~hverkuil/logs/Thursday.log

Full logs are available here:

http://www.xs4all.nl/~hverkuil/logs/Thursday.tar.bz2

The V4L2 specification from this daily build is here:

http://www.xs4all.nl/~hverkuil/spec/v4l2.html

The DVB API specification from this daily build is here:

http://www.xs4all.nl/~hverkuil/spec/dvbapi.pdf



Re: [PATCH 0/5] soc-camera: convert to platform device

2009-04-16 Thread Robert Jarzmik
Guennadi Liakhovetski g.liakhovet...@gmx.de writes:

  - I unload and reload mt9m111 and pxa_camera
 => not any better

 Actually, I think, in this case it should be found again, as long as you 
 reload pxa-camera while i2c-pxa is already loaded.
Damn, you're right. I cross-checked, and reloading pxa_camera rescans the
sensor.

 What I'm getting at is that if soc_camera is loaded before the i2c host
 driver, no camera will get any chance to work. Is that normal,
 considering the new driver model?
 I was naively thinking that there would be a rescan when control became
 available for a sensor.

 Yes, unfortunately, it is normal :-( On the one hand, we shouldn't really 
 spend _too_ much time on this intermediate version, because, as I said, it 
 is just a preparatory step for v4l2-subdev. We just have to make sure it 
 doesn't introduce any significant regressions and doesn't crash too often. 
OK. So from my side everything is OK (leaving aside my nitpicking in
mioa701.c and mt9m111.c).

 OTOH, this is also how it is with v4l2-subdev. With it you first must have 
 the i2c-adapter driver loaded. Then, when a match between a camera host 
 and a camera client (sensor) platform device is detected, it is reported 
 to the v4l2-subdev core, which loads the respective camera i2c driver.
OK, why not.

 If you then unload the camera-host and i2c-adapter drivers and load the
 camera-host driver first, it fails to get the adapter; and if you then
 load the adapter, nothing else happens. To reprobe you have to unload
 and reload the camera host driver.

So be it. I'm sure we'll go through it once more in the v4l2-subdev
transition, so I'll set aside any objection I could mutter :)

Cheers.

--
Robert


soc-camera to v4l2-subdev conversion

2009-04-16 Thread Guennadi Liakhovetski
Hi Hans,

I have so far partially converted a couple of example setups, namely the 
i.MX31-based pcm037/pcm970 and PXA270-based pcm027/pcm990 boards.

Partially means that I use v4l2_i2c_new_subdev() to register new cameras
and v4l2_device_register() to register hosts, and I use some core and
video operations, but there are still quite a few extra bonds tying
camera drivers to the soc-camera core that have to be broken. The current
diff is at http://download.open-technology.de/testing/20090416-4.gitdiff,
although you probably don't want to look at it :-)

A couple of minor general remarks first:

Shouldn't v4l2_device_call_until_err() return an error if the call is 
unimplemented?

There's no counterpart to v4l2_i2c_new_subdev() in the API, so one is 
supposed to call i2c_unregister_device() directly?
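
For concreteness, the asymmetry looks something like this (a sketch
assuming the current v4l2_i2c_new_subdev() signature and the usual
i2c-subdev idiom; the sensor name and address are placeholders):

#include <linux/i2c.h>
#include <media/v4l2-common.h>
#include <media/v4l2-subdev.h>

/* Creation goes through a v4l2 helper, but teardown is manual. */
static void subdev_lifetime_example(struct i2c_adapter *adapter)
{
	struct v4l2_subdev *sd =
		v4l2_i2c_new_subdev(adapter, "mt9m111", "mt9m111", 0x48);

	if (sd) {
		/* ... use the subdev ... */
		struct i2c_client *client = v4l2_get_subdevdata(sd);
		i2c_unregister_device(client); /* no v4l2_i2c_* counterpart */
	}
}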

We'll have to extend v4l2_subdev_video_ops with [gs]_crop.

Now I'm thinking about how best to break those remaining ties in 
soc-camera. The remaining bindings that have to be torn are in 
struct soc_camera_device. Mostly these are:

1. current geometry and geometry limits - as seen on the camera host -
camera client interface. I think these are common to all video devices,
so maybe we could put them meaningfully in a struct video_data,
accessible for both v4l2 subdevices and devices - one per subdevice?

2. current exposure and gain. There are of course other video parameters
similar to these, like gamma, saturation, hue... Actually, these are only
needed in the sensor driver; the only reason why I keep them globally
available is to reply to V4L2_CID_GAIN and V4L2_CID_EXPOSURE G_CTRL
requests. So, if I pass these down to the sensor drivers just like all
other control requests, they can be removed from soc_camera_device.

3. format negotiation. This is a pretty important part of the soc-camera 
framework. Currently, sensor drivers provide a list of supported pixel 
formats, based on it camera host drivers build translation tables and 
calculate user pixel formats. I'd like to preserve this functionality in 
some form. I think, we could make an optional common data block, which, if 
available, can be used also for the format negotiation and conversion. If 
it is not available, I could just pass format requests one-to-one down to 
sensor drivers.

Maybe a more universal approach would be to just keep synthetic formats
in each camera host driver. Then, on any format request, first just
request it from the sensor, trying to pass it one-to-one to the user. If
this doesn't work, look through the possible conversion table: if the
requested format is found among the output formats, try to request all
input formats that can be converted to it, one by one, from the sensor
(see the sketch below). Hm...
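
A rough sketch of that negotiation idea (an editorial illustration
only; the table and helper names are invented, not soc-camera API):

#include <stdbool.h>
#include <stddef.h>

/* Hypothetical conversion table entry: the host can turn 'input' from
 * the sensor into 'output' for the user. */
struct conv_entry { unsigned int input, output; };

/* Assumed callback: returns true if the sensor accepts this fourcc. */
typedef bool (*try_fmt_fn)(unsigned int fourcc);

static bool negotiate_fmt(unsigned int requested,
			  const struct conv_entry *tbl, size_t n,
			  try_fmt_fn sensor_try, unsigned int *sensor_fmt)
{
	size_t i;

	/* First try to pass the user's format one-to-one to the sensor. */
	if (sensor_try(requested)) {
		*sensor_fmt = requested;
		return true;
	}
	/* Otherwise, for each row that outputs the requested format, try
	 * the corresponding input format on the sensor, one by one. */
	for (i = 0; i < n; i++) {
		if (tbl[i].output == requested && sensor_try(tbl[i].input)) {
			*sensor_fmt = tbl[i].input;
			return true;
		}
	}
	return false;
}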

4. bus parameter negotiation. Also an important thing. Should do the same: 
if available - use it, if not - use platform-provided defaults.

I think I'll just finalise this partial conversion and we can commit it,
because if I keep it locally for too long I'll be getting multiple merge
conflicts, since this conversion also touches platform code... Then, when
the first step is in the tree, we can work on breaking the remaining
bonds.

Ideas? Comments?

Thanks
Guennadi
---
Guennadi Liakhovetski, Ph.D.
Freelance Open-Source Software Developer


Re: [RFC] Making Samsung S3C64XX camera interface driver in SoC camera subsystem

2009-04-16 Thread Guennadi Liakhovetski
On Thu, 16 Apr 2009, Dongsoo, Nathaniel Kim wrote:

 Hello,
 
 I'm planning to make a new camera interface driver for the S3C64XX from
 Samsung. Even if it already has a driver, it seems it needs to be
 re-designed for some reasons. If you are interested, take a look at the
 following repository
 (http://git.kernel.org/?p=linux/kernel/git/eyryu_ap/samsung-ap-2.6.24.git;a=summary)
 drivers/media/video/s3c_* files
 
 Before beginning to implement a new driver for that, I need to clarify
 some of features about how to implement in driver.
 
 Please take a look at the diagram on page 610 of the following user
 manual of the s3c6400.
 http://www.ebv.com/fileadmin/products/Products/Samsung/S3C6400/S3C6400X_UserManual_rev1-0_2008-02_661558um.pdf
 
 It seems to have a couple of paths for camera data, named codec and
 preview, and they could be used at the same time.
 That means there is no problem making those two paths into
 independent device nodes like /dev/video0 and /dev/video1.
 
 But there is a size limit when using both paths at the same time. I
 mean, if you are using the preview path and the camera sensor is running
 at 1280*720 resolution (which seems to be the max resolution that can be
 handled by the preview path), the codec path can't use a resolution
 bigger than 1280*720 at the same time, because the camera sensor can't
 produce different resolutions at once.
 
 And we also face a big problem when making a dual camera system with
 the s3c64xx. A dual camera on a single camera interface has some
 restrictions on using the clock and data path, because they have to be
 shared between both cameras.
 I propose to handle them with VIDIOC_S_INPUT and VIDIOC_G_INPUT. With
 those, we can handle a dual camera on a single camera interface in a
 decent way (see the sketch below).
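
For illustration, such switching could look like this from userspace
(an editorial sketch; the device path and input index are examples
only, not anything from the s3c64xx driver):

#include <fcntl.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* Select camera A (input 0) or camera B (input 1) on one video node. */
static int select_camera(const char *dev, int index)
{
	int fd = open(dev, O_RDWR);
	struct v4l2_input inp;

	if (fd < 0)
		return -1;
	memset(&inp, 0, sizeof(inp));
	inp.index = index;
	/* Confirm the input exists, then switch to it. */
	if (ioctl(fd, VIDIOC_ENUMINPUT, &inp) == 0)
		ioctl(fd, VIDIOC_S_INPUT, &index);
	return fd;
}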
 
 But the thing is that there is a problem using a dual camera with the
 preview and codec paths of the s3c64xx. Even though we have separate
 preview and codec device nodes, we can't open them concurrently when the
 user attempts to open each camera sensor separately, like camera A on
 the preview node and camera B on the codec node, because both camera
 sensors share the same data path and clock source, and the s3c64xx
 camera interface can only handle one camera at a time.
 
 So, what I am concerned with is how to make an elegant driver which has
 two device nodes handling multiple sensors as input devices.
 It sounds complicated, but I'm asking for your help with any opinion
 about designing this driver. Any opinion on these issues will be
 greatly helpful to me.

Ok, now I understand your comments on my soc-camera thread better. Now,
what about making one (or more) video device with the
V4L2_CAP_VIDEO_CAPTURE type and one with V4L2_CAP_VIDEO_OUTPUT? Then you
can use your capture-type devices to switch between cameras and to
configure input, and your output device to configure preview. Then you
can use soc-camera to control your capture devices (if you want to, of
course) and implement an output device directly. It should be a much
simpler device, because it will not be communicating with the cameras
and will only modify various preview parameters.

Thanks
Guennadi
---
Guennadi Liakhovetski, Ph.D.
Freelance Open-Source Software Developer


Re: Some questions about mr97310 controls (continuing previous thread on mr97310a.c)

2009-04-16 Thread Theodore Kilgore



On Thu, 16 Apr 2009, Thomas Kaiser wrote:


Hello Theodore

My answers/comments are inline.


Mine, too. I will also cut out some currently non-interesting parts, in 
the interest of saving space.





On 04/16/2009 01:59 AM, Theodore Kilgore wrote:



Thomas,

A few questions in the text below.


On Thu, 5 Mar 2009, Thomas Kaiser wrote:


Hello Theodore

kilg...@banach.math.auburn.edu wrote:



 From usb snoop.
FF FF 00 FF 96 64 xx 00 xx xx xx xx xx xx 00 00

and

In Linux the header looks like this:

FF FF 00 FF 96 64 xx 00 xx xx xx xx xx xx F0 00





1. xx: don't know, but the value changes between 0x00 and 0x07


as I said, this signifies the image format, qua compression algorithm in 
use, or if 00 then no compression.


On the PAC207, the compression can be controlled with a register called 
Compression Balance size. So, I guess, depending on the value set in the 
register this value in the header will show what compression level is set.


One of my questions:

Just how does it work to set the Compression Balance size? Is this some 
kind of special command sequence? Are we able to set this to whatever we 
want?


It looks like it. One can set a value from 0x0 to 0xff in the Compression
Balance size register (reg 0x4a).
In the pac207 Linux driver, this register is set to 0xff to turn off the
compression. When we use compression, 0x88 is set (I think the same value
as in Windoz). Hans did play with this register and found out that the
compression changes with different values.


I wonder how this relates to the mr97310a. There is no such register 
present, that I can see.




Hans, could you explain a bit more about what you found out?


(Yes, please.)


2. xx: this is the actual pixel clock


So there is a control setting for this?


Yes, in the PAC207, register 2. (12 MHz divided by the value set).


Again, I wonder how this might translate for the mr97310a ...

The following is pretty much the same, it seems.


3. xx: this is changing according to light conditions, from 0x03 (dark) to
0xfc (bright) (center)
4. xx: this is changing according to light conditions, from 0x03 (dark) to
0xfc (bright) (edge)
5. xx: set value Digital Gain of Red
6. xx: set value Digital Gain of Green
7. xx: set value Digital Gain of Blue



Varying some old questions: Precisely what is meant by the value of
Digital Gain for XX where XX is one of Red, Green, or Blue? On what scale
is this measured? Is it some kind of standardized scale? Or is it
something which is camera-specific? Also, what does set mean in this
context? This last in view of the fact that this is data which the camera
provides for our presumed information, not something which we are sending
to the camera?


If I recall correctly, I just saw that these fields in the header have the
same value which I set in the digital gain of Red/Green/Blue registers.
Therefore, I called it the set value. But I don't remember if a change of
these registers had any impact on the picture.


Hmmm. My experience is that these settings depend purely on the frame, and 
whether the camera is pointed at something bright or something dark, that 
kind of thing. Thus my idea was to try to use the information, somehow, in 
a constructive way. It never occurred to me, actually, that it is possible 
to set these things by issuing commands to a camera. But what do I know?




The range for these registers is from 0x0 to 0xff but as I don't know what 
they do, I don't know any more :-(


Yes, that I can understand.

Theodore Kilgore


Re: RFC on proposed patches to mr97310a.c for gspca and v4l

2009-04-16 Thread Kyle Guinn
On Thursday 16 April 2009 13:22:11 Theodore Kilgore wrote:
 On Thu, 16 Apr 2009, Kyle Guinn wrote:
  On Wednesday 04 March 2009 02:41:05 Thomas Kaiser wrote:
  Hello Theodore
 
  kilg...@banach.math.auburn.edu wrote:
  Also, after the byte indicator for the compression algorithm there are
  some more bytes, and these almost definitely contain information which
  could be valuable while doing image processing on the output. If they
  are already kept and passed out of the module over to libv4lconvert,
  then it would be very easy to do something with those bytes if it is
  ever figured out precisely what they mean. But if it is not done now it
  would have to be done then and would cause even more trouble.
 
  I sent it already in private mail to you. Here is the observation I made
  for the PAC207 SOF some years ago:
 
   From usb snoop.
  FF FF 00 FF 96 64 xx 00 xx xx xx xx xx xx 00 00
  1. xx: looks like random value
  2. xx: changed from 0x03 to 0x0b
  3. xx: changed from 0x06 to 0x49
  4. xx: changed from 0x07 to 0x55
  5. xx: static 0x96
  6. xx: static 0x80
  7. xx: static 0xa0
 
  And I did play in Linux and could identify some fields :-) .
  In Linux the header looks like this:
 
  FF FF 00 FF 96 64 xx 00 xx xx xx xx xx xx F0 00
  1. xx: don't know, but the value changes between 0x00 and 0x07
  2. xx: this is the actual pixel clock
  3. xx: this is changing according to light conditions, from 0x03 (dark) to
  0xfc (bright) (center)
  4. xx: this is changing according to light conditions, from 0x03 (dark) to
  0xfc (bright) (edge)
  5. xx: set value Digital Gain of Red
  6. xx: set value Digital Gain of Green
  7. xx: set value Digital Gain of Blue
 
  Thomas
 
  I've been looking through the frame headers sent by the MR97310A (the
  Aiptek PenCam VGA+, 08ca:0111).  Here are my observations.
 
  FF FF 00 FF 96 6x x0 xx xx xx xx xx
 
  In binary that looks something like this:
 
  11111111 11111111 00000000 11111111
  10010110 011001aa a101bbbb bbbbbbbb
  [third row of the bit layout, carrying the rest of B plus the C and D
  fields, was lost in archiving]
 
  All of the values look to be MSbit first.  A looks like a 3-bit frame
  sequence number that seems to start with 1 and increments for each frame.

 Hmmm. This I never noticed. What you are saying is that the two bytes 6x
 and x0 are variable? You are sure about that? What I have previously
 experienced is that the first is always 64 with these cameras, and the
 second one indicates the absence of compression (in which case it is 0,
 which of course only arises for still cameras), or if there is data
 compression then it is not zero. I have never seen this byte change
 during a session with a camera. Here is a little table of what I have
 previously witnessed, and perhaps you can suggest what to do in order to
 see what I have been missing:

 Camera          that byte   compression   solved, or not   streaming
 Aiptek          00          no            N/A              no
 Aiptek          50          yes           yes              both
 the Sakar cam   00          no            N/A              no
 ditto           50          yes           yes              both
 Argus QuikClix  20          yes           no               doesn't work
 Argus DC1620    50          yes           yes              doesn't work
 CIF cameras     00          no            N/A              no
 ditto           50          yes           yes              no
 ditto           d0          yes           no               yes

 Other strange facts are

 -- that the Sakar camera, the Argus QuikClix, and the
 DC1620 all share the same USB ID of 0x93a:0x010f and yet only one of them
 will stream with the existing driver. The other two go through the
 motions, but the isoc packets do not actually get sent, so there is no
 image coming out. I do not understand the reason for this; I have been
 trying to figure it out and it is rather weird. I should add that, yes,
 those two cameras were said to be capable of streaming when I bought them.
 Could it be a problem of age? I don't expect that, but maybe.

 -- the CIF cameras all share the USB id of 0x93a:0x010e (I bought several
 of them) and they all are using a different compression algorithm while
 streaming from what they use if running as still cameras in compressed
 mode. This leads to the question whether it is possible to set the
 compression algorithm during the initialization sequence, so that the
 camera also uses the 0x50 mode while streaming, because we already know
 how to use that mode.

 But I have never seen the 0x64 0xX0 bytes used to count the frames. Could
 you tell me how to repeat that? It certainly would knock down the validity
 of the above table wouldn't it?


I've modified libv4l to print out the 12-byte header before it skips over it.  
Then when I fire up mplayer it prints out each header as each frame is 
received.  The framerate is only about 5 fps so there isn't a ton of data to 
parse through.  When I point the camera into a light I 

Re: [REVIEW] v4l2 loopback

2009-04-16 Thread Mauro Carvalho Chehab
On Tue, 14 Apr 2009 16:04:50 +0200
Antonio Ospite osp...@studenti.unina.it wrote:

 On Tue, 14 Apr 2009 15:53:00 +0300
 vas...@gmail.com wrote:
 
  On Tue, Apr 14, 2009 at 3:12 PM, Mauro Carvalho Chehab
  mche...@infradead.org wrote:
 
   The issue I see is that the V4L drivers are meant to support real
   devices. This driver is a loopback for some userspace driver. I don't
   dispute its value for testing purposes or other random usage, but I
   can't see why this should be in the upstream kernel.

   So, I'm considering adding it to the v4l-dvb tree, but as an
   out-of-tree driver only. For this to happen we'll probably need a few
   adjustments to the v4l build.
  
   Cheers,
   Mauro
  
  
  Mauro,
  
  ok, let it be an out-of-tree driver; this is also good as I do not have
  to adapt the driver to each new kernel. But I want to argue a little
  about inclusion of the driver into the upstream kernel.
  
   The main reason for inclusion in the kernel is ease of use; as I
  understand it, installing the out-of-tree driver for some kernel needs
  a download of the whole v4l-dvb tree (am I right?).
  
   Loopback gives one the opportunity to do many fun things with video
  streams, and when it takes just one step to begin using it, the chances
  that someone will do something useful with the driver are higher.
 
 
 I, as a target user of vloopback, agree that having it in mainline
 would be really handy. Consider that with a stable vloopback solution,
 with device detection and parameter setting, we could really make PTP
 digicams useful as webcams[1]; right now this is tricky and very
 uncomfortable across kernel updates.

This is, in fact, a good reason why we shouldn't add it upstream: instead
of adding a proper V4L interface to PTP and other similar stuff, people
could just do some userspace hack with an in-kernel loopback (or even
worse: work against the Open Source community by writing binary-only
drivers), and use the loopback to make it work with existing applications
(ok, there are other ways to provide such things, but we shouldn't make
it even easier).

I can see the value of a video loopback for development and tests, but those
people could easily download some tree with the video loopback driver and use
it.

   Awareness that there is such a thing as a loopback also matters. If
  the driver is in the upstream tree, more people will see it, and there
  are more chances that more people will participate in making the
  loopback better.

I'm afraid not. The contributions we generally receive on other drivers
from developers that don't participate in the v4l-dvb community are
generally just API fixups and new board additions. In fact, the people
that can help with this driver will already be developing using the
v4l-dvb tree, so I doubt you'll get more contributions by having it in
the kernel.

   vivi is an upstream driver :-)
 
 
 Even vivi can be seen as a particular case of a vloopback device, can't
 it?

Vivi is just a driver skeleton. It could eventually be removed from upstream,
without any real damage. 

Yet, it is the easiest way for a video app developer to test their driver.

Also, vivi is very useful for testing newer core improvements before
actually damag^Wchanging the internal APIs in the real drivers. I used it
with this objective during the video_ioctl2() callback changes, during
the videobuf split into a core and a helper module, and in other similar
situations.

On the other hand, it is dubious that a distro would provide a kernel
with this module enabled. So, even with it in the kernel tree, for you to
use it you'd need to download the kernel and compile it by hand, or use
v4l-dvb (it is the same case as the DVB dummy frontend, for example).

Cheers,
Mauro