Re: [DVB Digital Devices Cine CT V6] status support

2012-02-26 Thread Lars Hanisch

Hi,

On 25.02.2012 20:35, Martin Herrman wrote:

On 10 January 2012 09:12, Martin Herrman
martin.herr...@gmail.com wrote:


2012/1/9 Thomas Kaiser linux-...@kaiser-linux.li:


Hello Martin

I use the DD Cine CT V6 with DVB-C. It works without problems.
I got the driver before Oliver integrated it into his tree. Therefore I have
not compiled Oliver's tree yet.

At the moment I run the card on Ubuntu 11.10 with kernel 3.0.0-14.

Hope this helps.

Thomas


Hi Thomas,

that is very good news, thanks a lot for the confirmation. Time to
order one myself!

Regards,

Martin


So... a couple of weeks later the card arrived, and I have some time to
play with it.

Note that I'm running latest stable Ubuntu 64-bit with kernel 3.0.0-16-generic.


 Since you are using Ubuntu, you can find a nearly up-to-date DKMS package of
linux-media with the patches of Oliver Endriss, called linux-media-dkms, at
 https://launchpad.net/~yavdr/+archive/main

 With this my Cine-C/T with a ddbridge runs without any problems.

Regards,
Lars.



First I tried the drivers from
http://linuxtv.org/hg/~endriss/media_build_experimental/. In that
case, dmesg output is:

[   11.728370] WARNING: You are using an experimental version of the
media stack.
[   11.728372]  As the driver is backported to an older kernel, it doesn't offer
[   11.728373]  enough quality for its usage in production.
[   11.728373]  Use it with care.
[   11.728374] Latest git patches (needed if you report a bug to
linux-media@vger.kernel.org):
[   11.728375]  59b30294e14fa6a370fdd2bc2921cca1f977ef16 Merge branch
'v4l_for_linus' into staging/for_v3.4
[   11.728376]  72565224609a23a60d10fcdf42f87a2fa8f7b16d [media]
cxd2820r: sleep on DVB-T/T2 delivery system switch
[   11.728377]  46de20a78ae4b122b79fc02633e9a6c3d539ecad [media]
anysee: fix CI init
[   11.728852] ddbridge: disagrees about version of symbol cxd2099_attach
[   11.728856] ddbridge: Unknown symbol cxd2099_attach (err -22)

So I started to try the build instructions found here:

http://linuxtv.org/wiki/index.php/How_to_Obtain,_Build_and_Install_V4L-DVB_Device_Drivers

And after compile, install and a reboot, dmesg output is:

(..)
[   11.592959] Adding 976892k swap on /dev/sdb2.  Priority:-2
extents:1 across:976892k
[   11.628781] WARNING: You are using an experimental version of the
media stack.
[   11.628784]  As the driver is backported to an older kernel, it doesn't offer
[   11.628785]  enough quality for its usage in production.
[   11.628785]  Use it with care.
[   11.628786] Latest git patches (needed if you report a bug to
linux-media@vger.kernel.org):
[   11.628787]  a3db60bcf7671cc011ab4f848cbc40ff7ab52c1e [media]
xc5000: declare firmware configuration structures as static const
[   11.628788]  6fab81dfdc7b48c2e30ab05e9b30afb0c418bbbe [media]
xc5000: drivers should specify chip revision rather than firmware
[   11.628790]  ddea427fb3e64d817d4432e5efd2abbfc4ddb02e [media]
xc5000: remove static dependencies on xc5000 created by previous
changesets
[   11.629238] Digital Devices PCIE bridge driver, Copyright (C)
2010-11 Digital Devices GmbH
[   11.629298] DDBridge :03:00.0: PCI INT A -> GSI 18 (level, low) -> IRQ 18
[   11.629306] DDBridge driver detected: Digital Devices PCIe bridge
[   11.629331] HW 00010007 FW 00010003
[   11.632593] cfg80211: Calling CRDA to update world regulatory domain
[   11.643411] rt2800pci :05:01.0: PCI INT A -> GSI 19 (level,
low) -> IRQ 19
(..)
[   11.781023] cfg80211: (5735000 KHz - 5835000 KHz @ 4 KHz),
(300 mBi, 2000 mBm)
[   11.844516] skipping empty audio interface (v1)
[   11.844528] snd-usb-audio: probe of 1-3:1.0 failed with error -5
[   11.844540] skipping empty audio interface (v1)
[   11.844546] snd-usb-audio: probe of 1-3:1.1 failed with error -5
[   11.845406] Linux media interface: v0.10
[   11.868177] Linux video capture interface: v2.00
[   11.868181] WARNING: You are using an experimental version of the
media stack.
[   11.868182]  As the driver is backported to an older kernel, it doesn't offer
[   11.868183]  enough quality for its usage in production.
[   11.868184]  Use it with care.
[   11.868184] Latest git patches (needed if you report a bug to
linux-media@vger.kernel.org):
[   11.868185]  a3db60bcf7671cc011ab4f848cbc40ff7ab52c1e [media]
xc5000: declare firmware configuration structures as static const
[   11.868187]  6fab81dfdc7b48c2e30ab05e9b30afb0c418bbbe [media]
xc5000: drivers should specify chip revision rather than firmware
[   11.868188]  ddea427fb3e64d817d4432e5efd2abbfc4ddb02e [media]
xc5000: remove static dependencies on xc5000 created by previous
changesets
[   12.110903] EXT4-fs (md1): re-mounted. Opts: errors=remount-ro,user_xattr
[   12.213875] usbcore: registered new interface driver snd-usb-audio
[   12.213906] uvcvideo: Found UVC 1.00 device <unnamed> (046d:0990)
[   12.229795] input: UVC Camera (046d:0990) as
/devices/pci:00/:00:1a.7/usb1/1-3/1-3:1.0/input/input6
[   12.229904] usbcore: registered new 

Re: [PATCH][libv4l] Bytes per Line

2012-02-26 Thread Robert Abel
A patch for the bayer -> rgb as well as the bayer -> yuv conversion is attached.
Basically, everywhere width was assumed to be the offset to the
neighboring pixel below, step is now used, for compatibility with images
where width != bytesperline.
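For illustration only (not part of the patch): with padded lines, the byte
offset between vertically adjacent pixels is bytesperline, here called step,
rather than width. A minimal sketch, with purely illustrative names:

#include <stdint.h>

/* Return the pixel directly below (x, y) in an 8-bit Bayer buffer.
 * 'step' is bytesperline; when lines are padded it is larger than
 * 'width', so indexing rows with 'width' would read the wrong pixel. */
static uint8_t pixel_below(const uint8_t *bayer, int x, int y, int step)
{
	return bayer[(y + 1) * step + x];
}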

Signed-off-by: Robert Abel a...@uni-bielefeld.de
diff -Naur a/lib/libv4lconvert/bayer.c b/lib/libv4lconvert/bayer.c
--- a/lib/libv4lconvert/bayer.c 2012-02-15 11:03:46.792279638 +0100
+++ b/lib/libv4lconvert/bayer.c 2012-02-20 20:17:36.741026768 +0100
@@ -44,7 +44,7 @@
 /* inspired by OpenCV's Bayer decoding */
 static void v4lconvert_border_bayer_line_to_bgr24(
const unsigned char *bayer, const unsigned char *adjacent_bayer,
-   unsigned char *bgr, int width, int start_with_green, int 
blue_line)
+   unsigned char *bgr, int width, const int start_with_green, 
const int blue_line)
 {
int t0, t1;
 
@@ -164,11 +164,11 @@
 
 /* From libdc1394, which on turn was based on OpenCV's Bayer decoding */
 static void bayer_to_rgbbgr24(const unsigned char *bayer,
-   unsigned char *bgr, int width, int height, unsigned int pixfmt,
+   unsigned char *bgr, int width, int height, const unsigned int 
step, unsigned int pixfmt,
int start_with_green, int blue_line)
 {
/* render the first line */
-   v4lconvert_border_bayer_line_to_bgr24(bayer, bayer + width, bgr, width,
+   v4lconvert_border_bayer_line_to_bgr24(bayer, bayer + step, bgr, width,
start_with_green, blue_line);
bgr += width * 3;
 
@@ -179,139 +179,141 @@
const unsigned char *bayer_end = bayer + (width - 2);
 
if (start_with_green) {
-   /* OpenCV has a bug in the next line, which was
-  t0 = (bayer[0] + bayer[width * 2] + 1) >> 1; */
-   t0 = (bayer[1] + bayer[width * 2 + 1] + 1) >> 1;
+
+   t0 = (bayer[1] + bayer[step * 2 + 1] + 1) >> 1;
/* Write first pixel */
-   t1 = (bayer[0] + bayer[width * 2] + bayer[width + 1] + 
1) / 3;
+   t1 = (bayer[0] + bayer[step * 2] + bayer[step + 1] + 1) 
/ 3;
if (blue_line) {
*bgr++ = t0;
*bgr++ = t1;
-   *bgr++ = bayer[width];
+   *bgr++ = bayer[step];
} else {
-   *bgr++ = bayer[width];
+   *bgr++ = bayer[step];
*bgr++ = t1;
*bgr++ = t0;
}
 
/* Write second pixel */
-   t1 = (bayer[width] + bayer[width + 2] + 1) >> 1;
+   t1 = (bayer[step] + bayer[step + 2] + 1) >> 1;
if (blue_line) {
*bgr++ = t0;
-   *bgr++ = bayer[width + 1];
+   *bgr++ = bayer[step + 1];
*bgr++ = t1;
} else {
*bgr++ = t1;
-   *bgr++ = bayer[width + 1];
+   *bgr++ = bayer[step + 1];
*bgr++ = t0;
}
bayer++;
} else {
/* Write first pixel */
-   t0 = (bayer[0] + bayer[width * 2] + 1) >> 1;
+   t0 = (bayer[0] + bayer[step * 2] + 1) >> 1;
if (blue_line) {
*bgr++ = t0;
-   *bgr++ = bayer[width];
-   *bgr++ = bayer[width + 1];
+   *bgr++ = bayer[step];
+   *bgr++ = bayer[step + 1];
} else {
-   *bgr++ = bayer[width + 1];
-   *bgr++ = bayer[width];
+   *bgr++ = bayer[step + 1];
+   *bgr++ = bayer[step];
*bgr++ = t0;
}
}
 
if (blue_line) {
		for (; bayer <= bayer_end - 2; bayer += 2) {
-   t0 = (bayer[0] + bayer[2] + bayer[width * 2] +
-   bayer[width * 2 + 2] + 2) >> 2;
-   t1 = (bayer[1] + bayer[width] + bayer[width + 2] +
-   bayer[width * 2 + 1] + 2) >> 2;
+   t0 = (bayer[0] + bayer[2] + bayer[step * 2] +
+   bayer[step * 2 + 2] + 2) >> 2;
+   t1 = (bayer[1] + bayer[step] + bayer[step + 2] +
+   

Problem with HVR4000 since 3.3-rc..

2012-02-26 Thread Robert Gadsdon
My HVR4000 has been working correctly with kernel versions up to 3.2.7, 
but with 3.3-rc2/3/4/5 I get:


"kobject_add_internal failed for dvb with -EEXIST, don't try to
register things with the same name in the same directory."


... errors, repeated, and /dev/dvb/... does not exist.

Is this a known problem, or do I need to change my configuration in some 
way, to accommodate the 3.3 kernel changes?


Thanks..

Robert Gadsdon.

--
.
Robert Gadsdon
email: rhgadsdonatgmail.com
.



RE: i.mx35 live video

2012-02-26 Thread Alex Gershgorin

Thanks Guennadi for your quick response,

Hi Alex
 
 Hi Guennadi,

 We would like to use the i.MX35 processor in a new project.
 An important element of the project is to obtain live video from the camera
 and show it on a display.
 For these purposes, we want to use a mainline Linux kernel which supports all
 the necessary drivers for the implementation of this task.
 As I understand it, soc_camera does not currently support the userptr method;
 in that case, how can I configure the video pipeline in user space
 to get live video on the display without the intervention of the processor?

soc-camera does support USERPTR, also the mx3_camera driver claims to
support it.

I was basing this on the soc-camera.txt document.

The soc-camera subsystem provides a unified API between camera host drivers and
camera sensor drivers. It implements a V4L2 interface to the user, currently
only the mmap method is supported.

In any case, I am glad that this is supported :-)

What do you think, is it possible to implement video streaming without the
intervention of the processor?

Regards,

Alex Gershgorin 
 
  
 


 


 


Re: [PATCH 01/11] v4l: Add driver for Micron MT9M032 camera sensor

2012-02-26 Thread Fabio Estevam
On Sun, Feb 26, 2012 at 12:27 AM, Laurent Pinchart
laurent.pinch...@ideasonboard.com wrote:

 +static int __init mt9m032_init(void)
 +{
 +       int rval;
 +
 +       rval = i2c_add_driver(&mt9m032_i2c_driver);
 +       if (rval)
 +               pr_err("%s: failed registering " MT9M032_NAME "\n", __func__);
 +
 +       return rval;
 +}
 +
 +static void mt9m032_exit(void)
 +{
 +       i2c_del_driver(&mt9m032_i2c_driver);
 +}
 +
 +module_init(mt9m032_init);
 +module_exit(mt9m032_exit);

module_i2c_driver could be used here instead.
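For reference, a minimal sketch of that suggestion, assuming the
mt9m032_i2c_driver structure defined earlier in the patch; it would replace
the hand-written init/exit pair above:

/* Register/unregister the I2C driver at module load/unload. */
module_i2c_driver(mt9m032_i2c_driver);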

 +
 +MODULE_AUTHOR("Martin Hostettler");

E-mail address missing.

Regards,

Fabio Estevam


Re: [PATCH 08/11] mt9m032: Compute PLL parameters at runtime

2012-02-26 Thread Laurent Pinchart
Hi,

On Sunday 26 February 2012 04:27:34 Laurent Pinchart wrote:
 Remove the PLL parameters from platform data and pass the external clock
 and desired internal clock frequencies instead. The PLL parameters are
 now computed at runtime.

My bad, this was supposed to be squashed with patch 11/11. I'll resend the 
whole set.

 Signed-off-by: Laurent Pinchart laurent.pinch...@ideasonboard.com
 ---
  drivers/media/video/mt9m032.c |   16 ++--
  include/media/mt9m032.h   |4 +---
  2 files changed, 7 insertions(+), 13 deletions(-)
 
 diff --git a/drivers/media/video/mt9m032.c b/drivers/media/video/mt9m032.c
 index 7b458d9..b636ad4 100644
 --- a/drivers/media/video/mt9m032.c
 +++ b/drivers/media/video/mt9m032.c
 @@ -221,21 +221,17 @@ static int mt9m032_setup_pll(struct mt9m032 *sensor)
   struct mt9m032_platform_data* pdata = sensor->pdata;
   u16 reg_pll1;
   unsigned int pre_div;
 + unsigned int pll_out_div;
 + unsigned int pll_mul;
   int res, ret;
 
 - /* TODO: also support other pre-div values */
 - if (pdata->pll_pre_div != 6) {
 - dev_warn(to_dev(sensor),
 - "Unsupported PLL pre-divisor value %u, using default 6\n",
 - pdata->pll_pre_div);
 - }
   pre_div = 6;
 
 - sensor->pix_clock = pdata->ext_clock * pdata->pll_mul /
 - (pre_div * pdata->pll_out_div);
 + sensor->pix_clock = pdata->ext_clock * pll_mul /
 + (pre_div * pll_out_div);
 
 - reg_pll1 = ((pdata->pll_out_div - 1) & MT9M032_PLL_CONFIG1_OUTDIV_MASK)
 -| pdata->pll_mul << MT9M032_PLL_CONFIG1_MUL_SHIFT;
 + reg_pll1 = ((pll_out_div - 1) & MT9M032_PLL_CONFIG1_OUTDIV_MASK)
 +  | (pll_mul << MT9M032_PLL_CONFIG1_MUL_SHIFT);
 
   ret = mt9m032_write_reg(client, MT9M032_PLL_CONFIG1, reg_pll1);
   if (!ret)
 diff --git a/include/media/mt9m032.h b/include/media/mt9m032.h
 index 94cefc5..4e84840 100644
 --- a/include/media/mt9m032.h
 +++ b/include/media/mt9m032.h
 @@ -29,9 +29,7 @@
 
  struct mt9m032_platform_data {
   u32 ext_clock;
 - u32 pll_pre_div;
 - u32 pll_mul;
 - u32 pll_out_div;
 + u32 int_clock;
   int invert_pixclock;
 
  };
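As a quick numeric illustration of the pix_clock formula used above (the
values below are purely hypothetical, not taken from the driver or any board):

/* pix_clock = ext_clock * pll_mul / (pre_div * pll_out_div)
 * e.g. ext_clock = 24000000 Hz, pll_mul = 50, pre_div = 6, pll_out_div = 2
 *      => 24000000 * 50 / (6 * 2) = 100000000 Hz, i.e. a 100 MHz pixel clock. */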
-- 
Regards,

Laurent Pinchart


Re: [PATCH 01/11] v4l: Add driver for Micron MT9M032 camera sensor

2012-02-26 Thread Laurent Pinchart
Hi Fabio,

Thanks for the review.

On Sunday 26 February 2012 11:16:19 Fabio Estevam wrote:
 On Sun, Feb 26, 2012 at 12:27 AM, Laurent Pinchart wrote:
  +static int __init mt9m032_init(void)
  +{
  +   int rval;
  +
  +   rval = i2c_add_driver(&mt9m032_i2c_driver);
  +   if (rval)
  +   pr_err("%s: failed registering " MT9M032_NAME "\n",
  __func__); +
  +   return rval;
  +}
  +
  +static void mt9m032_exit(void)
  +{
  +   i2c_del_driver(&mt9m032_i2c_driver);
  +}
  +
  +module_init(mt9m032_init);
  +module_exit(mt9m032_exit);
 
 module_i2c_driver could be used here instead.

That's fixed by patch 4/11. As explained in the cover letter, patch 01/11 is 
the original driver as submitted by Martin. I've decided not to change it to 
make review easier. I can then squash some of the other patches onto this one 
when pushing the set upstream. 
 
  +
  +MODULE_AUTHOR("Martin Hostettler");
 
 E-mail address missing.

Good point. Martin, can I add your e-mail address here ?

-- 
Regards,

Laurent Pinchart


RE: i.mx35 live video

2012-02-26 Thread Guennadi Liakhovetski
On Sun, 26 Feb 2012, Alex Gershgorin wrote:

 
 Thanks Guennadi for your quick response ,  
 
 Hi Alex
  
  Hi Guennadi,
 
  We would like to use I.MX35 processor in new project.
  An important element of the project is to obtain life video from the camera 
  and display it on display.
  For these purposes, we want to use mainline Linux kernel which supports all 
  the necessary drivers for the implementation of this task.
  As I understand that soc_camera is not currently supported userptr method, 
  in which case how I can configure the video pipeline in user space
  to get the live video on display, without the intervention of the processor.
 
 soc-camera does support USERPTR, also the mx3_camera driver claims to
 support it.
 
 I based on soc-camera.txt document.

Yeah, I really have to update it...

 The soc-camera subsystem provides a unified API between camera host drivers 
 and
 camera sensor drivers. It implements a V4L2 interface to the user, currently
 only the mmap method is supported.
 
 In any case, I glad that this supported :-) 
 
 What do you think it is possible to implement video streaming without 
 the intervention of the processor?

It might be difficult to completely eliminate the CPU; at the very least
you need to queue and dequeue buffers to and from the V4L driver. To avoid
even that, in principle, you could try to use only one buffer, but I don't
think the current version of the mx3_camera driver would be very happy
about that. You could take 2 buffers and use panning; then you'd just have
to queue and dequeue buffers and pan the display. But in any case,
you probably will have to process buffers. Your most important
advantage is that you won't have to copy data, you only have to move
pointers around.
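To make the queue/dequeue part concrete, a minimal, heavily simplified
USERPTR streaming loop might look roughly like the sketch below (error
handling omitted; buffer pointers and sizes are placeholders, not values
from the mx3_camera driver):

#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* Queue two user-allocated buffers, start streaming, then keep swapping
 * them: dequeue a filled buffer, use it, and queue it again. */
static void stream_userptr(int fd, void *buffers[2], unsigned int length)
{
	struct v4l2_requestbuffers req;
	struct v4l2_buffer buf;
	enum v4l2_buf_type type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
	int i;

	memset(&req, 0, sizeof(req));
	req.count = 2;
	req.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
	req.memory = V4L2_MEMORY_USERPTR;
	ioctl(fd, VIDIOC_REQBUFS, &req);

	for (i = 0; i < 2; i++) {
		memset(&buf, 0, sizeof(buf));
		buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
		buf.memory = V4L2_MEMORY_USERPTR;
		buf.index = i;
		buf.m.userptr = (unsigned long)buffers[i];
		buf.length = length;
		ioctl(fd, VIDIOC_QBUF, &buf);
	}

	ioctl(fd, VIDIOC_STREAMON, &type);

	for (;;) {
		memset(&buf, 0, sizeof(buf));
		buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
		buf.memory = V4L2_MEMORY_USERPTR;
		ioctl(fd, VIDIOC_DQBUF, &buf);	/* blocks until a frame is ready */
		/* ... display or pan to buffers[buf.index] here ... */
		ioctl(fd, VIDIOC_QBUF, &buf);	/* hand the buffer back to capture */
	}
}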

Thanks
Guennadi
---
Guennadi Liakhovetski, Ph.D.
Freelance Open-Source Software Developer
http://www.open-technology.de/


Re: Video Capture Issue

2012-02-26 Thread Sriram V
Hi,
  When I take a dump of the buffer pointed to by the DATA MEM
PING ADDRESS, it always shows 0x55.
  Even if I write 0x00 to the address, I notice that it quickly
changes back to 0x55.
  Under what conditions could this happen? What am I missing here?

  I do notice that the OMAP4 ISS is tested to work with the OV5640 (YUV422
frames) and the OV5650 (raw data).
  When you say 422 frames only, do you mean 422 8-bit mode?

  I haven't tried RAW12, which my device gives. Do I have to update only
the Data Format Selection register of the ISS for RAW12?

  Please advise.


On Thu, Feb 23, 2012 at 11:24 PM, Sriram V vshrir...@gmail.com wrote:
 Hi,
  1) A hexdump of the captured file shows 0x55 at all locations.
      Is there any buffer location I need to check?
  2) I have tried with the devel branch.
  3) Changing the polarities doesn't help either.
  4) The sensor is giving out YUV422 8-bit mode.
      Will 0x52001074 = 0x0A1E (UYVY format) bypass the ISP
       and dump directly into memory?

 On 2/23/12, Aguirre, Sergio saagui...@ti.com wrote:
 Hi Sriram,

 On Thu, Feb 23, 2012 at 11:25 AM, Sriram V vshrir...@gmail.com wrote:
 Hi,
  1) I am trying to get an HDMI to CSI bridge chip working with the OMAP4 ISS.
      The issue is the captured frames are completely green in color.

 Sounds like the buffer is all zeroes, can you confirm?

  2) The Chip is configured to output VGA Color bar sequence with
 YUV422-8Bit and
       uses datalane 0 only.
  3) The Format on OMAP4 ISS  is UYVY (Register 0x52001074 = 0x0A1E)
  I am trying to directly dump the data into memory without ISP processing.


  Please advise.

 Just to be clear on your environment, which branch/commitID are you based
 on?

 Regards,
 Sergio


 --
 Regards,
 Sriram



 --
 Regards,
 Sriram



-- 
Regards,
Sriram


Re: [RFC] Frame format descriptors

2012-02-26 Thread Sylwester Nawrocki
Hi Sakari,

thank you for the RFC. Nice work!

On 02/25/2012 04:49 AM, Sakari Ailus wrote:
 Hi all,
 
 We've been talking some time about frame format desciptors. I don't mean just
 image data --- there can be metadata and image data which cannot be
 currently described using struct v4l2_mbus_framefmt, such as JPEG images and
 snapshots. I thought it was about the time to write an RFC.
 
 I think we should have additional ways to describe the frame format; a part
 of the frame is already described by struct v4l2_mbus_framefmt, which only
 describes image data.
 
 
 Background
 ==
 
 I want to first begin by listing known use cases. There are a number of
 variations of these use cases that it would be nice to support. It depends
 not only on the sensor but also on the receiver driver, i.e. how it is able
 to handle the data it receives.
 
 1. Sensor metadata. Sensors produce interesting kinds of metadata. Typically
 the metadata format is very hardware specific. It is known the metadata can
 consist e.g. register values or floating point numbers describing sensor
 state. The metadata may have certain length or it can span a few lines at
 the beginning or the end of the frame, or both.
 
 2. JPEG images. JPEG images are produced by some sensors either separately
 or combined with the regular image data frame.
 
 3. Interleaved YUV and JPEG data. Separating the two may only be done in
 software, so the driver has no option but to consider both as blobs.
 
 4. Regular image data frames. Described by struct v4l2_mbus_framefmt.
 
 5. Multi-format images. See the end of the message for more information.
 
 Some busses such as the CSI-2 are able to transport some of this on separate
 channels. This provides logical separation of different parts of the frame
 while still sharing the same physical bus. Some sensors are known to send

AFAICS data on separate channels is mostly considered as separate streams,
like JPEG, MPEG or audio. Probably more often parts of the same stream are
just carried with a different Data Type, or not even that.

 the metadata on the same channel as the regular image data frame.
 
 I currently don't know of cases where the frame format could be
 significantly changed, with the exception that the sensor may either produce
 YUV, JPEG or both of the two. Changing the frame format is best done by
 other means than referring to the frame format itself: there hardly is
 reason to inform the user about the frame format, at least currently.

Not quite so. User space is usually interested in where each kind of data can
be found in memory. Either we provide this information through a fourcc or in
some other way. Snapshots are currently virtually unsupported in V4L2.
For instance, consider a use case where the camera produces a data frame which
consists of a JPEG-compressed frame with 3000 x 2000 pixel resolution and a
320x240 pixel YUYV frame. The JPEG data is padded so the YUYV data starts at
a specific offset within the container frame, known to the sensor.
Something like:

+-+
| |
| |
|  JPEG  3000 x 2000  |
| |
| |
+~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~+
| JPEG buffer padding |
| |
+-+ YUYV data offset
| |
|  YUYV 320 x 240 |
| |
+-+ Thumbnail data offset
| |
|YUV or JPEG thumbnail|
| 96 x 72 |
+-+

There is additionally a third plane there, that contains the thumbnail image 
data.

So user space wants to know all the offsets and sizes in order to interpret 
what's in a v4l2 buffer.

This information could be provided to user space in several ways, using:

- controls,
- private ioctl at sensor subdev,
- additional v4l2 buffer plane containing all required data with some
  pre-defined layout,
- ... 
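Purely as an illustration of the kind of information user space needs here
(not a proposed API, just a hypothetical description matching the figure
above):

/* Hypothetical layout description for the container frame sketched above:
 * byte offsets and sizes of each part within a single v4l2 buffer. */
struct frame_part {
	unsigned int offset;	/* byte offset into the buffer */
	unsigned int max_size;	/* space reserved for this part */
	unsigned int width;	/* width in pixels, if applicable */
	unsigned int height;	/* height in pixels, if applicable */
	unsigned int fourcc;	/* e.g. JPEG or YUYV */
};

struct frame_layout {
	struct frame_part main;		/* JPEG, 3000 x 2000 */
	struct frame_part preview;	/* YUYV, 320 x 240 */
	struct frame_part thumbnail;	/* YUV or JPEG, 96 x 72 */
};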


 Most of the time it's possible to use the hardware to separate the different
 parts of the buffer e.g. into separate memory areas or into separate planes
 of a multi-plane buffer, but not quite always (the case we don't care
 about).

I'm wondering if there is any sensor/bridge pair in mainline that is capable
of storing data into separate memory regions?

 This leads me to think we need two relatively independent things: a way to
 describe the frame format, and a way to provide the non-image part of the
 frame to user space.
 
 
 Frame format descriptor
 ===
 
 The frame format descriptor describes the layout of the frame, not only the
 image data but also other parts of it. What struct v4l2_mbus_framefmt
 describes is part of it. Changes to v4l2_mbus_framefmt affect the frame
 format descriptor rather than the other way around.
 
 enum {
   V4L2_SUBDEV_FRAME_FORMAT_TYPE_CSI2,
   V4L2_SUBDEV_FRAME_FORMAT_TYPE_CCP2,
   V4L2_SUBDEV_FRAME_FORMAT_TYPE_PARALLEL,
 };
 
 struct v4l2_subdev_frame_format {
   int type;
   struct 

Re: [PATCH/RFC][DRAFT] V4L: Add camera auto focus controls

2012-02-26 Thread Sylwester Nawrocki
Hi,

On 01/16/2012 10:33 PM, Sylwester Nawrocki wrote:
 diff --git a/include/linux/videodev2.h b/include/linux/videodev2.h
 index 012a296..0808b12 100644
 --- a/include/linux/videodev2.h
 +++ b/include/linux/videodev2.h
 @@ -1662,6 +1662,34 @@ enum  v4l2_exposure_auto_type {
   #define V4L2_CID_IRIS_ABSOLUTE  
 (V4L2_CID_CAMERA_CLASS_BASE+17)
   #define V4L2_CID_IRIS_RELATIVE  
 (V4L2_CID_CAMERA_CLASS_BASE+18)

 +#define V4L2_CID_AUTO_FOCUS_START    (V4L2_CID_CAMERA_CLASS_BASE+19)
 +#define V4L2_CID_AUTO_FOCUS_STOP (V4L2_CID_CAMERA_CLASS_BASE+20)
 +#define V4L2_CID_AUTO_FOCUS_STATUS   (V4L2_CID_CAMERA_CLASS_BASE+21)
 +enum v4l2_auto_focus_status {
 + V4L2_AUTO_FOCUS_STATUS_IDLE = 0,
 + V4L2_AUTO_FOCUS_STATUS_BUSY = 1,
 + V4L2_AUTO_FOCUS_STATUS_SUCCESS  = 2,
 + V4L2_AUTO_FOCUS_STATUS_FAIL = 3,
 +};
 +
 +#define V4L2_CID_AUTO_FOCUS_DISTANCE (V4L2_CID_CAMERA_CLASS_BASE+22)
 +enum v4l2_auto_focus_distance {
 + V4L2_AUTO_FOCUS_DISTANCE_NORMAL = 0,
 + V4L2_AUTO_FOCUS_DISTANCE_MACRO  = 1,
 + V4L2_AUTO_FOCUS_DISTANCE_INFINITY   = 2,
 +};
 +
 +#define V4L2_CID_AUTO_FOCUS_SELECTION
 (V4L2_CID_CAMERA_CLASS_BASE+23)
 +enum v4l2_auto_focus_selection {
 + V4L2_AUTO_FOCUS_SELECTION_NORMAL= 0,
 + V4L2_AUTO_FOCUS_SELECTION_SPOT  = 1,
 + V4L2_AUTO_FOCUS_SELECTION_RECTANGLE = 2,
 +};

I'd like to ask your advice; I've found the two controls above
rather painful to use. After changing V4L2_CID_AUTO_FOCUS_SELECTION to

#define V4L2_CID_AUTO_FOCUS_AREA    (V4L2_CID_CAMERA_CLASS_BASE+23)
enum v4l2_auto_focus_selection {
V4L2_AUTO_FOCUS_SELECTION_ALL   = 0,
V4L2_AUTO_FOCUS_SELECTION_SPOT  = 1,
V4L2_AUTO_FOCUS_SELECTION_RECTANGLE = 2,
};

I tried to use them with the M-5MOLS sensor driver, where there is only
one register for setting the following automatic focus modes:

NORMAL AUTO (single-shot),
MACRO,
INFINITY,
SPOT,
FACE_DETECTION

The issue is that when V4L2_CID_AUTO_FOCUS_AREA is set to for example
V4L2_AUTO_FOCUS_SELECTION_SPOT, none of the menu entries of
V4L2_CID_AUTO_FOCUS_DISTANCE is valid.

So it would really be better to use a single control for the automatic focus
mode. A private control could handle that. But there will be more than
one sensor driver needing such a control, so I thought about an
additional header, e.g. samsung_camera.h in include/linux/, that would
define the required control IDs and menus in the camera class private ID
range.

What do you think about it ?
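Just to illustrate the idea, such a header could define a single menu control
roughly like the sketch below; all names and values are hypothetical, not a
concrete proposal:

/* Hypothetical single "AF mode" menu control in a private ID range. */
#define V4L2_CID_PRIVATE_AUTO_FOCUS_MODE	(V4L2_CID_CAMERA_CLASS_BASE + 0x1001)

enum private_auto_focus_mode {
	PRIVATE_AUTO_FOCUS_MODE_NORMAL		= 0,	/* single-shot */
	PRIVATE_AUTO_FOCUS_MODE_MACRO		= 1,
	PRIVATE_AUTO_FOCUS_MODE_INFINITY	= 2,
	PRIVATE_AUTO_FOCUS_MODE_SPOT		= 3,
	PRIVATE_AUTO_FOCUS_MODE_FACE_DETECTION	= 4,
};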


 +#define V4L2_CID_AUTO_FOCUS_X_POSITION   
 (V4L2_CID_CAMERA_CLASS_BASE+24)
 +#define V4L2_CID_AUTO_FOCUS_Y_POSITION   
 (V4L2_CID_CAMERA_CLASS_BASE+25)
...

--

Regards,
Sylwester


Re: [PATCH/RFC][DRAFT] V4L: Add camera auto focus controls

2012-02-26 Thread Sylwester Nawrocki
On 02/26/2012 05:57 PM, Sylwester Nawrocki wrote:
 rather painful in use. After changing V4L2_CID_AUTO_FOCUS_SELECTION to
 
 #define V4L2_CID_AUTO_FOCUS_AREA  (V4L2_CID_CAMERA_CLASS_BASE+23)

Oops, of course each occurrence of SELECTION below should be replaced with
AREA. Sorry for the confusion.

 enum v4l2_auto_focus_selection {
   V4L2_AUTO_FOCUS_SELECTION_ALL   = 0,
   V4L2_AUTO_FOCUS_SELECTION_SPOT  = 1,
   V4L2_AUTO_FOCUS_SELECTION_RECTANGLE = 2,
 };
 
 I tried use them with the M-5MOLS sensor driver where there is only
 one register for setting following automatic focus modes:
 
 NORMAL AUTO (single-shot),
 MACRO,
 INFINITY,
 SPOT,
 FACE_DETECTION
 
 The issue is that when V4L2_CID_AUTO_FOCUS_AREA is set to for example
 V4L2_AUTO_FOCUS_SELECTION_SPOT, none of the menu entries of
 V4L2_CID_AUTO_FOCUS_DISTANCE is valid.
 
 So it would really be better to use single control for automatic focus
 mode. A private control could handle that. But there will be more than
 one sensor driver needing such a control, so I thought about an
 additional header, e.g. samsung_camera.h in include/linux/ that would
 define reguired control IDs and menus in the camera class private id
 range.
 
 What do you think about it ?
 
 
 +#define V4L2_CID_AUTO_FOCUS_X_POSITION  
 (V4L2_CID_CAMERA_CLASS_BASE+24)
 +#define V4L2_CID_AUTO_FOCUS_Y_POSITION  
 (V4L2_CID_CAMERA_CLASS_BASE+25)
 ...
 
 --
 
 Regards,
 Sylwester



Re: DVB nGene CI : TS Discontinuities issues

2012-02-26 Thread Anssi Hannula
Hello,

On 13.05.2011 14:54, Ralph Metzler wrote:
 Below my test code. You just need to adjust the device name.
 
 I had it running for an hour and had no discontinuities (except at
 restarts, might have to look into buffer flushing).
 I tested it with nGene and Octopus boards on an Asus ION2 board and on a
 Marvell Kirkwood based ARM board.

Should your test code (quoted below) work with e.g. Octopus DDBridge on
vanilla 3.2.6 kernel, without any additional initialization needed
through ca0 or so?

When I try it here like that, the reader thread simply blocks
indefinitely on the first read, while the writer thread continues to
write packets into the device.
Am I missing something, or is this a bug?

 Btw., what hardware exactly are you using? 
 Which DVB card version, CI type, motherboard chipset?

I'm not sure what you need, exactly, but here's the relevant section
of the kernel log. Motherboard chipset is Intel X58. Feel free to ask
for anything else.

[ 1333.801243] Digital Devices PCIE bridge driver, Copyright (C) 2010-11
Digital Devices GmbH
[ 1333.801302] DDBridge :08:00.0: PCI INT A -> GSI 32 (level, low) -> IRQ 32
[ 1333.801314] DDBridge driver detected: Digital Devices Octopus DVB adapter
[ 1333.801357] HW 00010004 FW 00010001
[ 1333.802371] Port 0 (TAB 1): DUAL DVB-C/T
[ 1333.802819] Port 1 (TAB 2): CI
[ 1333.803785] Port 2 (TAB 3): DUAL DVB-C/T
[ 1333.804369] Port 3 (TAB 4): NO MODULE
[ 1333.805176] DVB: registering new adapter (DDBridge)
[ 1333.824506] drxk: detected a drx-3913k, spin A3, xtal 27.000 MHz
[ 1334.313799] DRXK driver version 0.9.4300
[ 1337.120786] DVB: registering adapter 0 frontend 0 (DRXK DVB-C)...
[ 1337.120996] DVB: registering adapter 0 frontend 0 (DRXK DVB-T)...
[ 1337.121165] DVB: registering new adapter (DDBridge)
[ 1337.151565] drxk: detected a drx-3913k, spin A3, xtal 27.000 MHz
[ 1337.653400] DRXK driver version 0.9.4300
[ 1340.467888] DVB: registering adapter 1 frontend 0 (DRXK DVB-C)...
[ 1340.468097] DVB: registering adapter 1 frontend 0 (DRXK DVB-T)...
[ 1340.468203] DVB: registering new adapter (DDBridge)
[ 1340.477045] Attached CXD2099AR at 40
[ 1340.477502] DVB: registering new adapter (DDBridge)
[ 1340.498717] drxk: detected a drx-3913k, spin A3, xtal 27.000 MHz
[ 1340.978018] DRXK driver version 0.9.4300
[ 1343.784964] DVB: registering adapter 3 frontend 0 (DRXK DVB-C)...
[ 1343.785168] DVB: registering adapter 3 frontend 0 (DRXK DVB-T)...
[ 1343.785322] DVB: registering new adapter (DDBridge)
[ 1343.805712] drxk: detected a drx-3913k, spin A3, xtal 27.000 MHz
[ 1344.295293] DRXK driver version 0.9.4300
[ 1347.062278] DVB: registering adapter 4 frontend 0 (DRXK DVB-C)...
[ 1347.062490] DVB: registering adapter 4 frontend 0 (DRXK DVB-T)...
[ 1347.816555] dvb_ca adapter 2: DVB CAM detected and initialised
successfully


 Regards,
 Ralph
 
 
 
  #include <stdio.h>
  #include <ctype.h>
  #include <string.h>
  #include <unistd.h>
  #include <sys/types.h>
  #include <sys/stat.h>
  #include <stdint.h>
  #include <stdlib.h>
  #include <fcntl.h>
  #include <sys/ioctl.h>
  #include <pthread.h>
 
 uint8_t fill[188]={0x47, 0x1f, 0xff, 0x10,

 0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,

 0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,

 0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,

 0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,

 0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,

 0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,

 0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,

 0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,

 0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,

 0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,

 0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff,
  0xff,0xff,0xff,0xff,0xff,0xff,0xff,0xff };
 
 uint8_t ts[188]={0x47, 0x0a, 0xaa, 0x00 };
 
 
 void proc_buf(uint8_t *buf, uint32_t *d)
 {
   uint32_t c;
 
   if (buf[1]==0x1f && buf[2]==0xff) {
   //printf("fill\n");
   return;
   }
   if (buf[1]==0x9f && buf[2]==0xff) {
   //printf("fill\n");
   return;
   }
   if (buf[1]!=0x0a || buf[2]!=0xaa)
   return;
   c=(buf[4]<<24)|(buf[5]<<16)|(buf[6]<<8)|buf[7];
   if (c!=*d) {
   printf("CONT ERROR %08x %08x\n", c, *d);
   *d=c;
   } else {
   if (memcmp(ts+8, buf+8, 180))
   printf("error\n");
   if (!(c0x))
   printf("R %d\n", c);
   }
   (*d)++;
 }
 
 void *get_ts(void *a)
 {
   uint8_t buf[188*1024];
   int len, off;
 
   int fdi=open("/dev/dvb/adapter4/sec0", O_RDONLY);
   

RE: i.mx35 live video

2012-02-26 Thread Alex Gershgorin



 Thanks Guennadi for your quick response ,  
 
 Hi Alex
  
  Hi Guennadi,
 
  We would like to use I.MX35 processor in new project.
  An important element of the project is to obtain life video from the camera 
  and display it on display.
  For these purposes, we want to use mainline Linux kernel which supports all 
  the necessary drivers for the implementation of this task.
  As I understand that soc_camera is not currently supported userptr method, 
  in which case how I can configure the video pipeline in user space
  to get the live video on display, without the intervention of the processor.
 
 soc-camera does support USERPTR, also the mx3_camera driver claims to
 support it.
 
 I based on soc-camera.txt document.

 Yeah, I really have to update it...

 The soc-camera subsystem provides a unified API between camera host drivers 
 and
 camera sensor drivers. It implements a V4L2 interface to the user, currently
 only the mmap method is supported.
 
 In any case, I glad that this supported :-) 
 
 What do you think it is possible to implement video streaming without 
 the intervention of the processor?

It might be difficult to completely eliminate the CPU, at the very least 
you need to queue and dequeue buffers to and from the V4L driver. To avoid 
even that, in principle, you could try to use only one buffer, but I don't 
think the current version of the mx3_camera driver would be very happy 
about that. You could take 2 buffers and use panning, then you'd just have 
to send queue and dequeue buffers and pan the display. But in any case, 
you probably will have to process buffers, but your most important 
advantage is, that you won't have to copy data, you only have to move 
pointers around.

The method that you describe is exactly what I had in mind.
It would be more correct to say it is "minimum CPU intervention" rather than
"without CPU intervention".
As far as I understand, I can mmap() the frame buffer device and
pass that pointer directly to the mx3_camera driver using the USERPTR method,
then queue and dequeue buffers to and from the mx3_camera driver.
What is not clear to me is whether it is possible to pass the same frame
buffer pointer to mx3_camera if the driver is using two buffers.

Thanks,
Alex Gershgorin



 

 


Re: [PATCH v3 08/33] v4l: Add subdev selections documentation: svg and dia files

2012-02-26 Thread Sakari Ailus
Hi Laurent,

Laurent Pinchart wrote:
 Hi Sakari,
 
 Thanks for the patch.
 
 On Monday 20 February 2012 03:56:47 Sakari Ailus wrote:
 Add svg and dia files for V4L2 subdev selections documentation.

 Signed-off-by: Sakari Ailus sakari.ai...@iki.fi
 
 The diagrams look fine, although a bit complex. They could be simplified by
 merging the identical rectangles (for instance moving the sink crop selection
 label to the dotted blue rectangle, and removing the plain blue rectangle).
 I'm not sure if that would really be more readable though; it's up to you.

I made that change, and I indeed think it improves readability. Now the
documentation has the same number of rectangles as there really are.

Cheers,

-- 
Sakari Ailus
sakari.ai...@iki.fi


cron job: media_tree daily build: WARNINGS

2012-02-26 Thread Hans Verkuil
This message is generated daily by a cron job that builds media_tree for
the kernels and architectures in the list below.

Results of the daily build of media_tree:

date:          Sun Feb 26 19:00:18 CET 2012
git hash:      a3db60bcf7671cc011ab4f848cbc40ff7ab52c1e
gcc version:   i686-linux-gcc (GCC) 4.6.2
host hardware: x86_64
host os:       3.1-2.slh.1-amd64

linux-git-arm-eabi-enoxys: WARNINGS
linux-git-arm-eabi-omap: WARNINGS
linux-git-armv5-ixp: WARNINGS
linux-git-i686: WARNINGS
linux-git-m32r: WARNINGS
linux-git-mips: WARNINGS
linux-git-powerpc64: WARNINGS
linux-git-x86_64: WARNINGS
linux-2.6.31.12-i686: WARNINGS
linux-2.6.32.6-i686: WARNINGS
linux-2.6.33-i686: WARNINGS
linux-2.6.34-i686: WARNINGS
linux-2.6.35.3-i686: WARNINGS
linux-2.6.36-i686: WARNINGS
linux-2.6.37-i686: WARNINGS
linux-2.6.38.2-i686: WARNINGS
linux-2.6.39.1-i686: WARNINGS
linux-3.0-i686: WARNINGS
linux-3.1-i686: WARNINGS
linux-3.2.1-i686: WARNINGS
linux-3.3-rc1-i686: WARNINGS
linux-2.6.31.12-x86_64: WARNINGS
linux-2.6.32.6-x86_64: WARNINGS
linux-2.6.33-x86_64: WARNINGS
linux-2.6.34-x86_64: WARNINGS
linux-2.6.35.3-x86_64: WARNINGS
linux-2.6.36-x86_64: WARNINGS
linux-2.6.37-x86_64: WARNINGS
linux-2.6.38.2-x86_64: WARNINGS
linux-2.6.39.1-x86_64: WARNINGS
linux-3.0-x86_64: WARNINGS
linux-3.1-x86_64: WARNINGS
linux-3.2.1-x86_64: WARNINGS
linux-3.3-rc1-x86_64: WARNINGS
apps: WARNINGS
spec-git: WARNINGS
sparse: ERRORS

Detailed results are available here:

http://www.xs4all.nl/~hverkuil/logs/Sunday.log

Full logs are available here:

http://www.xs4all.nl/~hverkuil/logs/Sunday.tar.bz2

The V4L-DVB specification from this daily build is here:

http://www.xs4all.nl/~hverkuil/spec/media.html


RE: i.mx35 live video

2012-02-26 Thread Guennadi Liakhovetski
On Sun, 26 Feb 2012, Alex Gershgorin wrote:

  Thanks Guennadi for your quick response ,  
  
  Hi Alex
   
   Hi Guennadi,
  
   We would like to use I.MX35 processor in new project.
   An important element of the project is to obtain life video from the 
   camera and display it on display.
   For these purposes, we want to use mainline Linux kernel which supports 
   all the necessary drivers for the implementation of this task.
   As I understand that soc_camera is not currently supported userptr 
   method, in which case how I can configure the video pipeline in user space
   to get the live video on display, without the intervention of the 
   processor.
  
  soc-camera does support USERPTR, also the mx3_camera driver claims to
  support it.
  
  I based on soc-camera.txt document.
 
  Yeah, I really have to update it...
 
  The soc-camera subsystem provides a unified API between camera host drivers 
  and
  camera sensor drivers. It implements a V4L2 interface to the user, currently
  only the mmap method is supported.
  
  In any case, I glad that this supported :-) 
  
  What do you think it is possible to implement video streaming without 
  the intervention of the processor?
 
 It might be difficult to completely eliminate the CPU, at the very least 
 you need to queue and dequeue buffers to and from the V4L driver. To avoid 
 even that, in principle, you could try to use only one buffer, but I don't 
 think the current version of the mx3_camera driver would be very happy 
 about that. You could take 2 buffers and use panning, then you'd just have 
 to send queue and dequeue buffers and pan the display. But in any case, 
 you probably will have to process buffers, but your most important 
 advantage is, that you won't have to copy data, you only have to move 
 pointers around.
 
 The method that you describe is exactly what I had in mind.
 It would be more correct to say it is minimum CPU intervention and not 
 without CPU intervention. 

 As far I understand, I can implement MMAP method for frame buffer device 
 and pass this pointer directly to mx3_camera driver with use USERPTR 
 method, then send queue and dequeue buffers to mx3_camera driver.
 What is not clear, if it is possible to pass the same pointer of frame 
 buffer in mx3_camera, if the driver is using two buffers?

Sorry, I really don't know for sure. It should work, but I don't think I
tested this myself, nor do I remember anybody reporting having tested this
mode. So, you can either try to search the mailing list archives, or just test
it. Begin with a simpler mode - USERPTR with separately allocated buffers
and copying them manually to the framebuffer, then try to switch to just
one buffer in this same mode, then switch to direct framebuffer memory.

Thanks
Guennadi
---
Guennadi Liakhovetski, Ph.D.
Freelance Open-Source Software Developer
http://www.open-technology.de/


Re: [PATCH] tea575x: fix HW seek

2012-02-26 Thread Ondrej Zary
On Friday 24 February 2012 10:00:01 Hans Verkuil wrote:
 On Wednesday, February 22, 2012 09:35:28 Ondrej Zary wrote:
  On Tuesday 21 February 2012, Hans Verkuil wrote:
   On Saturday, February 18, 2012 17:45:45 Ondrej Zary wrote:
Fix HW seek in TEA575x to work properly:
 - a delay must be present after search start and before first
register read or the seek does weird things
 - when the search stops, the new frequency is not available
immediately, we must wait until it appears in the register
(fortunately, we can clear the frequency bits when starting the
search as it starts at the frequency currently set, not from the
value written)
 - sometimes, seek remains on the current frequency (or moves only a
little), so repeat it until it moves by at least 50 kHz
   
Signed-off-by: Ondrej Zary li...@rainbow-software.org
   
--- a/sound/i2c/other/tea575x-tuner.c
+++ b/sound/i2c/other/tea575x-tuner.c
@@ -89,7 +89,7 @@ static void snd_tea575x_write(struct snd_tea575x
*tea, unsigned int val) tea->ops->set_pins(tea, 0);
 }
   
-static unsigned int snd_tea575x_read(struct snd_tea575x *tea)
+static u32 snd_tea575x_read(struct snd_tea575x *tea)
 {
u16 l, rdata;
u32 data = 0;
@@ -120,6 +120,27 @@ static unsigned int snd_tea575x_read(struct
snd_tea575x *tea) return data;
 }
   
+static void snd_tea575x_get_freq(struct snd_tea575x *tea)
+{
+   u32 freq = snd_tea575x_read(tea) & TEA575X_BIT_FREQ_MASK;
+
+   if (freq == 0) {
+   tea->freq = 0;
  
   Wouldn't it be better to return -EBUSY in this case? VIDIOC_G_FREQUENCY
   should not return frequencies outside the valid frequency range. In
   this case returning -EBUSY seems to make more sense to me.
 
  The device returns zero frequency when the scan fails to find a
  frequency. This is not an error, just an indication that nothing is
  tuned. So maybe we can return some bogus frequency in vidioc_g_frequency
  (like FREQ_LO) in this case (don't know if -EBUSY will break anything).
  But HW seek should get the real one (i.e. zero when it's there).

 How about the following patch? vidioc_g_frequency just returns the last set
 frequency and the hw_seek restores the original frequency if it can't find
 another channel.

Seems to work. That's probably the right thing to do.

 Also note that the check for < 50 kHz in hw_seek actually checked for < 500
 kHz. I've fixed that, but I can't test it.

Thanks. It finds more stations now. To improve reliability, an additional
check should be added - the seek sometimes stops at the same station, just a
bit more than 50 kHz from the original frequency, often in the wrong direction.
Something like this:

--- a/sound/i2c/other/tea575x-tuner.c
+++ b/sound/i2c/other/tea575x-tuner.c
@@ -280,8 +280,13 @@ static int vidioc_s_hw_freq_seek(struct file *file, void 
*fh,
}
if (freq == 0) /* shouldn't happen */
break;
-   /* if we moved by less than 50 kHz, continue seeking */
-   if (abs(tea->freq - freq) < 16 * 50) {
+   /*
+* if we moved by less than 50 kHz, or in the wrong
+* direction, continue seeking
+*/
+   if (abs(tea->freq - freq) < 16 * 50 ||
+   (a->seek_upward && freq < tea->freq) ||
+   (!a->seek_upward && freq > tea->freq)) {
snd_tea575x_write(tea, tea->val);
continue;
}


 Do you also know what happens at the boundaries of the frequency range?
 Does it wrap around, or do you get a timeout?

No wraparound, it times out so the original frequency is restored. I wonder 
if -ETIMEDOUT is correct here.

 Regards,

   Hans

 diff --git a/sound/i2c/other/tea575x-tuner.c
 b/sound/i2c/other/tea575x-tuner.c index 474bb81..1bdf1f3 100644
 --- a/sound/i2c/other/tea575x-tuner.c
 +++ b/sound/i2c/other/tea575x-tuner.c
 @@ -120,14 +120,12 @@ static u32 snd_tea575x_read(struct snd_tea575x *tea)
   return data;
  }

 -static void snd_tea575x_get_freq(struct snd_tea575x *tea)
 +static u32 snd_tea575x_get_freq(struct snd_tea575x *tea)
  {
   u32 freq = snd_tea575x_read(tea) & TEA575X_BIT_FREQ_MASK;

 - if (freq == 0) {
 - tea->freq = 0;
 - return;
 - }
 + if (freq == 0)
 + return freq;

   /* freq *= 12.5 */
   freq *= 125;
 @@ -138,7 +136,7 @@ static void snd_tea575x_get_freq(struct snd_tea575x
 *tea) else
   freq -= TEA575X_FMIF;

 - tea->freq = clamp(freq * 16, FREQ_LO, FREQ_HI); /* from kHz */
 + return clamp(freq * 16, FREQ_LO, FREQ_HI); /* from kHz */
  }

  static void snd_tea575x_set_freq(struct snd_tea575x *tea)
 @@ -224,8 +222,6 @@ static int vidioc_g_frequency(struct file 

Re: i.mx35 live video

2012-02-26 Thread Sylwester Nawrocki
Hi,

On 02/26/2012 09:58 PM, Guennadi Liakhovetski wrote:
 It might be difficult to completely eliminate the CPU, at the very least
 you need to queue and dequeue buffers to and from the V4L driver. To avoid
 even that, in principle, you could try to use only one buffer, but I don't
 think the current version of the mx3_camera driver would be very happy
 about that. You could take 2 buffers and use panning, then you'd just have
 to send queue and dequeue buffers and pan the display. But in any case,
 you probably will have to process buffers, but your most important
 advantage is, that you won't have to copy data, you only have to move
 pointers around.

 The method that you describe is exactly what I had in mind.
 It would be more correct to say it is minimum CPU intervention and not 
 without CPU intervention.
 
 As far I understand, I can implement MMAP method for frame buffer device
 and pass this pointer directly to mx3_camera driver with use USERPTR
 method, then send queue and dequeue buffers to mx3_camera driver.
 What is not clear, if it is possible to pass the same pointer of frame
 buffer in mx3_camera, if the driver is using two buffers?

It should work when you request 2 USERPTR buffers and assign the same address
(frame buffer start) to both of them. I've seen setups like this working with
videobuf2-based drivers. However, it's a really poor configuration; to avoid
tearing you could just set the framebuffer virtual window size to contain at
least two screen windows and, for the second buffer, use the framebuffer start
address with a proper offset as the USERPTR address. Then you could just add
display panning to display every frame.
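A rough sketch of that setup, assuming a classic fbdev framebuffer whose
virtual y resolution already holds two screens (error handling omitted, all
names illustrative): the two USERPTR buffers point at the two halves of the
framebuffer, and FBIOPAN_DISPLAY selects which half is shown.

#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/fb.h>

/* Map the framebuffer and derive two capture buffer pointers, one per
 * screen-sized half of the virtual framebuffer. */
static void *fb_buffers(int fb_fd, void *buffers[2], unsigned int *frame_size)
{
	struct fb_var_screeninfo var;
	struct fb_fix_screeninfo fix;
	void *base;

	ioctl(fb_fd, FBIOGET_VSCREENINFO, &var);
	ioctl(fb_fd, FBIOGET_FSCREENINFO, &fix);

	/* Assumes yres_virtual was configured to hold two screens. */
	*frame_size = fix.line_length * var.yres;
	base = mmap(NULL, fix.line_length * var.yres_virtual,
		    PROT_READ | PROT_WRITE, MAP_SHARED, fb_fd, 0);

	buffers[0] = base;				/* first screen  */
	buffers[1] = (char *)base + *frame_size;	/* second screen */
	return base;
}

/* Pan the display to the half that was just filled by capture. */
static void fb_show(int fb_fd, int index)
{
	struct fb_var_screeninfo var;

	ioctl(fb_fd, FBIOGET_VSCREENINFO, &var);
	var.yoffset = index ? var.yres : 0;
	ioctl(fb_fd, FBIOPAN_DISPLAY, &var);
}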

--

Regards,
Sylwester

 Sorry, I really don't know for sure. It should work, but I don't think I
 tested thid myself nor I remember anybody reporting having tested this
 mode. So, you can either try to search mailing list archives, or just test
 it. Begin with a simpler mode - USERPTR with separately allocated buffers
 and copying them manually to the framebuffer, then try to switch to just
 one buffer in this same mode, then switch to direct framebuffer memory.
 
 Thanks
 Guennadi
 ---
 Guennadi Liakhovetski, Ph.D.
 Freelance Open-Source Software Developer
 http://www.open-technology.de/



Re: [PATCH v3 09/33] v4l: Add subdev selections documentation

2012-02-26 Thread Sakari Ailus
Hi Laurent,

Many thanks for the comments!

Laurent Pinchart wrote:
 On Monday 20 February 2012 03:56:48 Sakari Ailus wrote:
 Add documentation for V4L2 subdev selection API. This changes also
 experimental V4L2 subdev API so that scaling now works through selection API
 only.

 Signed-off-by: Sakari Ailus sakari.ai...@iki.fi
 
 
 diff --git a/Documentation/DocBook/media/v4l/dev-subdev.xml
 b/Documentation/DocBook/media/v4l/dev-subdev.xml index 0916a73..9d5e7da
 100644
 --- a/Documentation/DocBook/media/v4l/dev-subdev.xml
 +++ b/Documentation/DocBook/media/v4l/dev-subdev.xml
 
 [snip]
 
 +  paraScaling operation changes the size of the image by scaling
 +  it to new dimensions. Some sub-devices support it. The scaled
 +  size (width and height) is represented by v4l2-rect;. In the
 +  case of scaling, top and left will always be zero. Scaling is
 +  configured using sub-subdev-g-selection; and
 +  constantV4L2_SUBDEV_SEL_COMPOSE_ACTIVE/constant selection
 +  target on the sink pad of the subdev. The scaling is performed
 +  related to the width and height of the crop rectangle on the
 +  subdev's sink pad./para
 
 I'm not sure if that would be very clear for readers who are not yet familiar 
 with the API. What about the following text instead ?
 
 The scaling operation changes the size of the image by scaling it to new 
 dimensions. The scaling ratio isn't specified explicitly, but is implied from 
 the original and scaled image sizes. Both sizes are represented by v4l2-
 rect;.
 
 Scaling support is optional. When supported by a subdev, the crop rectangle 
 on 
 the subdev's sink pad is scaled to the size configured using VIDIOC-SUBDEV-G-
 SELECTION; and constantV4L2_SUBDEV_SEL_COMPOSE_ACTIVE/constant selection 
 target on the same pad. If the subdev supports scaling but no composing, the 
 top and left values are not used and must always be set to zero.
 
 (note that sub-subdev-g-selection; has been replaced with VIDIOC-SUBDEV-G-
 SELECTION;)
 
 I would also move this text after the sink pad crop description to follow the 
 order in which operations are applied by subdevs.

I'm fine with that change, so I did it. However, I won't replace
sub-subdev-g-selection; with VIDIOC-SUBDEV-G-SELECTION; simply because
it won't work:

dev-subdev.xml:310: parser error : Entity 'VIDIOC-SUBDEV-G-SELECTION'
not defined
  size configured using VIDIOC-SUBDEV-G-SELECTION; and

It's beyond me why not; similar references are being used elsewhere with
otherwise equivalent definitions. Perhaps the name is just too long?
That's the only difference I could think of: xmlto typically segfaults
on errors so I wouldn't be surprised of something so simple.

 +  paraAs for pad formats, drivers store try and active
 +  rectangles for the selection targets of ACTIVE type xref
 +  linkend=v4l2-subdev-selection-targets./xref/para
 +
 +  paraOn sink pads, cropping is applied relatively to the
 +  current pad format. The pad format represents the image size as
 +  received by the sub-device from the previous block in the
 +  pipeline, and the crop rectangle represents the sub-image that
 +  will be transmitted further inside the sub-device for
 +  processing./para
 +
 +  paraOn source pads, cropping is similar to sink pads, with the
 +  exception that the source size from which the cropping is
 +  performed, is the COMPOSE rectangle on the sink pad. In both
 +  sink and source pads, the crop rectangle must be entirely
 +  containted inside the source image size for the crop
 +  operation./para
 +
 +  paraThe drivers should always use the closest possible
 +  rectangle the user requests on all selection targets, unless
 +  specificly told otherwisexref
 +  linkend=v4l2-subdev-selection-flags./xref/para
 +/section
 +
 +section
 +  titleTypes of selection targets/title
 +
 +  section
 +titleACTIVE targets/title
 +
 +paraACTIVE targets reflect the actual hardware configuration
 +at any point of time./para
 +  /section
 +
 +  section
 +titleBOUNDS targets/title
 +
 +paraBOUNDS targets is the smallest rectangle within which
 +contains all valid ACTIVE rectangles.
 
 s/within which/that/ ?

Ack.

 It may not be possible
 +to set the ACTIVE rectangle as large as the BOUNDS rectangle,
 +however./para
 
 What about
 
 The BOUNDS rectangle might not itself be a valid ACTIVE rectangle when all 
 possible ACTIVE pixels do not form a rectangular shape (e.g. cross-shaped or 
 round sensors).

There are cases where the active size is limited, even if it's
rectangular. I can add the above case there, sure, if you think such
devices exist --- I've never heard of nor seen them. Some sensors are
documented to be cross-shaped but the only thing separating these from
the rest is that the manufacturer doesn't guarantee the quality of the
pixels in the corners. At least on those I've seen. You 

Re: DVB nGene CI : TS Discontinuities issues

2012-02-26 Thread Ralph Metzler
Anssi Hannula writes:
   I had it running for an hour and had no discontinuities (except at
   restarts, might have to look into buffer flushing).
   I tested it with nGene and Octopus boards on an Asus ION2 board and on a
   Marvell Kirkwood based ARM board.
  
  Should your test code (quoted below) work with e.g. Octopus DDBridge on
  vanilla 3.2.6 kernel, without any additional initialization needed
  through ca0 or so?
  
  When I try it here like that, the reader thread simply blocks
  indefinitely on the first read, while the writer thread continues to
  write packets into the device.
  Am I missing something, or is this a bug?


Yes, it should work as it is. 
I assume you adjusted the adapter numbers of course.



Regards,
Ralph


Re: [PATCH 01/11] v4l: Add driver for Micron MT9M032 camera sensor

2012-02-26 Thread Laurent Pinchart
On Sunday 26 February 2012 15:28:29 Laurent Pinchart wrote:
 On Sunday 26 February 2012 11:16:19 Fabio Estevam wrote:
  On Sun, Feb 26, 2012 at 12:27 AM, Laurent Pinchart wrote:
   +static int __init mt9m032_init(void)
   +{
   +   int rval;
   +
   +   rval = i2c_add_driver(&mt9m032_i2c_driver);
   +   if (rval)
   +   pr_err("%s: failed registering " MT9M032_NAME "\n",
   __func__); +
   +   return rval;
   +}
   +
   +static void mt9m032_exit(void)
   +{
   +   i2c_del_driver(&mt9m032_i2c_driver);
   +}
   +
   +module_init(mt9m032_init);
   +module_exit(mt9m032_exit);
  
  module_i2c_driver could be used here instead.
 
 That's fixed by patch 4/11. As explained in the cover letter, patch 01/11 is
 the original driver as submitted by Martin. I've decided not to change it
 to make review easier. I can then squash some of the other patches onto
 this one when pushing the set upstream.
 
   +
   +MODULE_AUTHOR("Martin Hostettler");
  
  E-mail address missing.
 
 Good point. Martin, can I add your e-mail address here ?

$ find drivers/ -type f -name \*.c -exec grep MODULE_AUTHOR {} \; \
| awk '/@/ { print "email" } ! /@/ { print "name" }' \
| sort | uniq -c
   2304 email
   1511 name

I guess the e-mail address isn't mandatory :-)

Martin, I can keep your name there (with or without e-mail address) or put 
mine (with an e-mail address). You will of course be the author of the git 
commit (even if I end up squashing several of my other patches onto this one). 
I can also optionally put my name and e-mail address at the beginning of the 
file as a contact person if you don't want to be bothered. It's your call 
really, just tell me what you prefer.

-- 
Regards,

Laurent Pinchart


Re: [RFC/PATCH 1/6] V4L: Add V4L2_MBUS_FMT_VYUY_JPEG_I1_1X8 media bus format

2012-02-26 Thread Sakari Ailus
Hi Sylwester,

Sylwester Nawrocki wrote:
 On 02/17/2012 07:15 PM, Sakari Ailus wrote:
 On Fri, Feb 17, 2012 at 03:26:29PM +0100, Sylwester Nawrocki wrote:
 On 02/16/2012 08:46 PM, Sakari Ailus wrote:
 On Thu, Feb 16, 2012 at 07:23:54PM +0100, Sylwester Nawrocki wrote:
 This patch adds media bus pixel code for the interleaved JPEG/YUYV image
 format used by S5C73MX Samsung cameras. The interleaved image data is
 transferred on MIPI-CSI2 bus as User Defined Byte-based Data.

 Signed-off-by: Sylwester Nawrocki <s.nawro...@samsung.com>
 Signed-off-by: Kyungmin Park <kyungmin.p...@samsung.com>
 ---
   include/linux/v4l2-mediabus.h |3 +++
   1 files changed, 3 insertions(+), 0 deletions(-)

 diff --git a/include/linux/v4l2-mediabus.h b/include/linux/v4l2-mediabus.h
 index 5ea7f75..c2f0e4e 100644
 --- a/include/linux/v4l2-mediabus.h
 +++ b/include/linux/v4l2-mediabus.h
 @@ -92,6 +92,9 @@ enum v4l2_mbus_pixelcode {

   /* JPEG compressed formats - next is 0x4002 */
   V4L2_MBUS_FMT_JPEG_1X8 = 0x4001,
 +
 + /* Interleaved JPEG and YUV formats - next is 0x4102 */
 + V4L2_MBUS_FMT_VYUY_JPEG_I1_1X8 = 0x4101,
   };

 Thanks for the patch. Just a tiny comment:

 I'd go with a new hardware-specific buffer range, e.g. 0x5000.

 Sure, that makes more sense. But I guess you mean format not buffer 
 range ?

 Yeah, a format that begins a new range.

 Guennadi also proposed an interesting idea: a pass-through format. Does
 your format have dimensions that the driver would use for something or is
 that just a blob?

 It's just a blob for the drivers, dimensions may be needed for subdevs to
 compute overall size of data for example. But the host driver, in case of
 Samsung devices, basically just needs to know the total size of frame data.

 I'm afraid the host would have to additionally configure subdevs in the data
 pipeline in case of hardware-specific format, when we have used a single
 binary blob media bus format identifier. For example MIPI-CSI2 data format
 would have to be set up along the pipeline. There might be more attributes
 in the future like this. Not sure if we want to go that path ?

 I'll try and see how it would look like with a single pass-through format.
 Probably using g/s_mbus_config operations ?

 I think we could use the framesize control to tell the size of the frame, or
 however it is done for jpeg blobs.
 
 Yes, we could add a standard framesize control to the Image Source class but
 it will solve only part of the problem. Nevertheless it might be worth having
 it. It could be used by applications to configure subdevs directly, while the
 host drivers could use e.g. an s/g_frame_config op for that.

(I think we could continue this discussion in the context of the RFC.)

 The issue I see in the pass-through mode is that the user would have no
 information whatsoever what he's getting. This would be perhaps fixed by
 adding the frame format descriptor: it could contain information how to
 handle the data. (Just thinking out loud. :))
 
 Do you mean a user space application by user ?

Yeah.

 I'd like to clearly separate blob media bus pixel codes and hardware-specific
 blob fourccs. If we don't want to change fundamental assumptions of V4L2
 we likely need separate fourccs for each weird format.

 I can imagine pass-through media bus pixel code but a transparent fourcc
 sounds like a higher abstraction. :)

I agree... how about this:

We currently provide the media bus pixel code to the CSI-2 receivers, but
most of the time it's not necessary for them to know what the pixel code
exactly is: the receiver doesn't do anything with the data but write it to
memory. Bits compressed, bits uncompressed and the compression method are
enough --- if uncompression is desired. Even the pixel order isn't always
needed.

What might make sense is to provide a generic table with pixel code
related information, such as bits compressed and uncompressed, pixel
order, compression method and default 4CC.

Custom formats would only be present in this table without individual
CSI-2 receiver drivers having to know about them. Same goes with 4CC's.
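
A rough sketch of what a single entry in such a table might look like (all
names and fields below are illustrative only, not an existing API):

/* Per-pixel-code properties a CSI-2 receiver driver could look up
 * instead of having to know each media bus code individually. */
struct mbus_pixelcode_info {
	u32 code;		/* V4L2_MBUS_FMT_* media bus code */
	u8 bits_per_sample;	/* bits per sample on the bus (compressed) */
	u8 uncompressed_bits;	/* bits per sample after uncompression */
	u8 compression;		/* compression method, e.g. none or DPCM */
	u8 pixel_order;		/* e.g. Bayer order, where meaningful */
	u32 default_fourcc;	/* default V4L2 pixel format (4CC) */
};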

Regards,

-- 
Sakari Ailus
sakari.ai...@iki.fi


cxd2099 CI on DDBridge not working (was: Re: DVB nGene CI : TS Discontinuities issues)

2012-02-26 Thread Anssi Hannula
27.02.2012 00:14, Ralph Metzler wrote:
 Anssi Hannula writes:
I had it running for an hour and had no discontinuities (except at
restarts, might have to look into buffer flushing).
I tested it with nGene and Octopus boards on an Asus ION2 board and on a
Marvell Kirkwood based ARM board.
   
   Should your test code (quoted below) work with e.g. Octopus DDBridge on
   vanilla 3.2.6 kernel, without any additional initialization needed
   through ca0 or so?
   
   When I try it here like that, the reader thread simply blocks
   indefinitely on the first read, while the writer thread continues to
   write packets into the device.
   Am I missing something, or is this a bug?
 
 
 Yes, it should work as it is. 
 I assume you adjusted the adapter numbers of course.

I did. Do you have any idea on what could be the cause of the issue or
any debugging tips?

I have also tried to do actual decrypting with the CI. As expected, the
same thing happened, i.e. data was written but no data was read (CAM in
ca0 also responds properly to VDR).

-- 
Anssi Hannula


[PATCH][trivial] media, DiB0090: remove redundant ';' from dib0090_fw_identify()

2012-02-26 Thread Jesper Juhl
One semi-colon is enough.

Signed-off-by: Jesper Juhl <j...@chaosbits.net>
---
 drivers/media/dvb/frontends/dib0090.c |2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/media/dvb/frontends/dib0090.c 
b/drivers/media/dvb/frontends/dib0090.c
index 224d81e..d9fe60b 100644
--- a/drivers/media/dvb/frontends/dib0090.c
+++ b/drivers/media/dvb/frontends/dib0090.c
@@ -519,7 +519,7 @@ static int dib0090_fw_identify(struct dvb_frontend *fe)
return 0;
 
 identification_error:
-   return -EIO;;
+   return -EIO;
 }
 
 static void dib0090_reset_digital(struct dvb_frontend *fe, const struct 
dib0090_config *cfg)
-- 
1.7.9.2


-- 
Jesper Juhl j...@chaosbits.net   http://www.chaosbits.net/
Don't top-post http://www.catb.org/jargon/html/T/top-post.html
Plain text mails only, please.



Re: [PATCH v3 26/33] omap3isp: Default link validation for ccp2, csi2, preview and resizer

2012-02-26 Thread Laurent Pinchart
Hi Sakari,

On Saturday 25 February 2012 03:34:36 Sakari Ailus wrote:
 On Wed, Feb 22, 2012 at 12:01:26PM +0100, Laurent Pinchart wrote:
  On Monday 20 February 2012 03:57:05 Sakari Ailus wrote:
   Use default link validation for ccp2, csi2, preview and resizer. On
   ccp2, csi2 and ccdc we also collect information on external subdevs as
   one may be connected to those entities.
   
   The CCDC link validation still must be done separately.
   
   Also set pipe->external correctly as we go

[snip]

   @@ -1999,6 +1999,27 @@ static int ccdc_set_format(struct v4l2_subdev *sd,
   					struct v4l2_subdev_fh *fh,
   	return 0;
    }
   
    +static int ccdc_link_validate(struct v4l2_subdev *sd,
    +			      struct media_link *link,
    +			      struct v4l2_subdev_format *source_fmt,
    +			      struct v4l2_subdev_format *sink_fmt)
    +{
    +	struct isp_ccdc_device *ccdc = v4l2_get_subdevdata(sd);
    +	struct isp_pipeline *pipe = to_isp_pipeline(&ccdc->subdev.entity);
    +	int rval;
    +
    +	/* We've got a parallel sensor here. */
    +	if (ccdc->input == CCDC_INPUT_PARALLEL) {
    +		pipe->external =
    +			media_entity_to_v4l2_subdev(link->source->entity);
    +		rval = omap3isp_get_external_info(pipe, link);
    +		if (rval < 0)
    +			return 0;
    +	}
  
  Pending my comments on 25/33, this wouldn't be needed in this patch, and
  could be squashed with 27/33.
 
 If I moved this code out of pipeline validation, I'd have to walk the graph
 one additional time. Do you think it's worth it, or are there easier ways to
 find the external entity connected to a pipeline?

If I understand you correctly, the problem is that 
omap3isp_get_external_info() can only be called when the external entity has 
been located, and the CCDC link validation operation would be called before 
that. Is that correct ?

One option would be to locate the external entity before validating the link. 
When the validation pipeline walk operation gets to the CCDC entity, it would 
first follow the link, check if the connected entity is external (and in that 
case store it in pipe->external and call omap3isp_get_external_info()), and 
then only call the CCDC link validation operation.

The other option is to leave the code as-is :-) Or rather modify it slightly: 
assigning the entity to pipe->external and calling 
omap3isp_get_external_info() should be done in ispvideo.c at pipeline 
validation time.

   +
   + return 0;
   +}
   +
   
/*

 * ccdc_init_formats - Initialize formats on all pads
 * @sd: ISP CCDC V4L2 subdevice
   

-- 
Regards,

Laurent Pinchart


Re: OMAP CCDC with sensors that are always on...

2012-02-26 Thread Laurent Pinchart
Hi Chris,

On Saturday 25 February 2012 01:48:02 Sakari Ailus wrote:
 On Fri, Feb 17, 2012 at 05:32:31PM -0600, Chris Whittenburg wrote:
  I fixed my sensor to respect a run signal from the omap, so that now
  it only sends data when the ccdc is expecting it.
  
  This fixed my problem, and now I can capture the 640x1440 frames.
  
  At least the first one...
  
  Subsequent frames are always full of 0x55, like the ISP didn't write
  anything into them.
  
  I still get the VD0 interrupts, and I checked that WEN in the
  CCDC_SYN_MODE register is set, and that the EXWEN bit is clear.
  
  I'm using the command:
  yavta -c2 -p -F --skip 0 -f Y8 -s 640x1440 /dev/video2
  
  Here are my register settings:
  
  [ 6534.029907] omap3isp omap3isp: -CCDC Register
  dump- [ 6534.029907] omap3isp omap3isp: ###CCDC
  PCR=0x
  [ 6534.029937] omap3isp omap3isp: ###CCDC SYN_MODE=0x00030f00
  [ 6534.029937] omap3isp omap3isp: ###CCDC HD_VD_WID=0x
  [ 6534.029937] omap3isp omap3isp: ###CCDC PIX_LINES=0x
  [ 6534.029968] omap3isp omap3isp: ###CCDC HORZ_INFO=0x027f
  [ 6534.029968] omap3isp omap3isp: ###CCDC VERT_START=0x
  [ 6534.029968] omap3isp omap3isp: ###CCDC VERT_LINES=0x059f
  [ 6534.029998] omap3isp omap3isp: ###CCDC CULLING=0x00ff
  [ 6534.029998] omap3isp omap3isp: ###CCDC HSIZE_OFF=0x0280
  [ 6534.029998] omap3isp omap3isp: ###CCDC SDOFST=0x
  [ 6534.030029] omap3isp omap3isp: ###CCDC SDR_ADDR=0x1000
  [ 6534.030029] omap3isp omap3isp: ###CCDC CLAMP=0x0010
  [ 6534.030029] omap3isp omap3isp: ###CCDC DCSUB=0x
  [ 6534.030059] omap3isp omap3isp: ###CCDC COLPTN=0xbb11bb11
  [ 6534.030059] omap3isp omap3isp: ###CCDC BLKCMP=0x
  [ 6534.030059] omap3isp omap3isp: ###CCDC FPC=0x
  [ 6534.030090] omap3isp omap3isp: ###CCDC FPC_ADDR=0x
  [ 6534.030090] omap3isp omap3isp: ###CCDC VDINT=0x059e03c0
  [ 6534.030090] omap3isp omap3isp: ###CCDC ALAW=0x
  [ 6534.030120] omap3isp omap3isp: ###CCDC REC656IF=0x
  [ 6534.030120] omap3isp omap3isp: ###CCDC CFG=0x8000
  [ 6534.030120] omap3isp omap3isp: ###CCDC FMTCFG=0xe000
  [ 6534.030151] omap3isp omap3isp: ###CCDC FMT_HORZ=0x0280
  [ 6534.030151] omap3isp omap3isp: ###CCDC FMT_VERT=0x05a0
  [ 6534.030151] omap3isp omap3isp: ###CCDC PRGEVEN0=0x
  [ 6534.030181] omap3isp omap3isp: ###CCDC PRGEVEN1=0x
  [ 6534.030181] omap3isp omap3isp: ###CCDC PRGODD0=0x
  [ 6534.030181] omap3isp omap3isp: ###CCDC PRGODD1=0x
  [ 6534.030212] omap3isp omap3isp: ###CCDC VP_OUT=0x0b3e2800
  [ 6534.030212] omap3isp omap3isp: ###CCDC LSC_CONFIG=0x6600
  [ 6534.030212] omap3isp omap3isp: ###CCDC LSC_INITIAL=0x
  [ 6534.030242] omap3isp omap3isp: ###CCDC LSC_TABLE_BASE=0x
  [ 6534.030242] omap3isp omap3isp: ###CCDC LSC_TABLE_OFFSET=0x
  [ 6534.030242] omap3isp omap3isp:
  
  
  Output frame 0 is always good, while output frame 1 is 0x.
  
  I believe my sensor is respecting the clocks required before and after
  the frame.
  
  Could the ISP driver be writing my data to some unexpected location
  rather than to the v4l2 buffer?
  
  Is there a way to determine if the CCDC is writing to memory or not?
 
 How long vertical blanking do you have? It shouldn't have an effect, though.

It definitely can :-) If vertical blanking isn't long enough, the CCDC will 
start processing the next frame before the driver gets time to update the 
hardware with the pointer to the next buffer. The first frame will then be 
overwritten.

 Is the polarity of the hs/vs signals correct in platform data?
 
  On Wed, Feb 15, 2012 at 11:29 AM, Chris Whittenburg
  
  whittenb...@gmail.com wrote:
   Maybe this is more of a OMAP specific question, but I'm using a
   beagleboard-xm with a custom image sensor on a 3.0.17 kernel.
   
   Everything configures ok with:
   
   media-ctl -r
    media-ctl -l 'xrtcam 2-0048:0->OMAP3 ISP CCDC:0[1]'
    media-ctl -l 'OMAP3 ISP CCDC:1->OMAP3 ISP CCDC output:0[1]'
   media-ctl -f 'xrtcam 2-0048:0 [Y8 640x1440]'
   media-ctl -f 'OMAP3 ISP CCDC:1 [Y8 640x1440]'
   media-ctl -e 'OMAP3 ISP CCDC output'
   
   root@beagleboard:~# ./setup.sh
   Resetting all links to inactive
    Setting up link 16:0 -> 5:0 [1]
    Setting up link 5:1 -> 6:0 [1]
   Setting up format Y8 640x1440 on pad irtcam 2-0048/0
   Format set: Y8 640x1440
   Setting up format Y8 640x1440 on pad OMAP3 ISP CCDC/0
   Format set: Y8 640x1440
   Setting up format Y8 640x1440 on pad OMAP3 ISP CCDC/1
   Format set: Y8 640x1440
   /dev/video2
   
   But when I go to capture, with:
   yavta -c2 -p -F --skip 0 -f Y8 -s 640x1440 /dev/video2
   
   I don't seem to get any interrupts.  Actually I get some HS_VS_IRQ
    after I launch yavta, but before I press return at the "Press enter to
    start capture" prompt.  After that, I don't believe I am getting any
   interrupts.
   
   The one problem I 

Re: OMAP CCDC with sensors that are always on...

2012-02-26 Thread Sakari Ailus
Hi Laurent,

Laurent Pinchart wrote:
 Hi Chris,
 
 On Saturday 25 February 2012 01:48:02 Sakari Ailus wrote:
 On Fri, Feb 17, 2012 at 05:32:31PM -0600, Chris Whittenburg wrote:
 I fixed my sensor to respect a run signal from the omap, so that now
 it only sends data when the ccdc is expecting it.

 This fixed my problem, and now I can capture the 640x1440 frames.

 At least the first one...

 Subsequent frames are always full of 0x55, like the ISP didn't write
 anything into them.

 I still get the VD0 interrupts, and I checked that WEN in the
 CCDC_SYN_MODE register is set, and that the EXWEN bit is clear.

 I'm using the command:
 yavta -c2 -p -F --skip 0 -f Y8 -s 640x1440 /dev/video2

 Here are my register settings:

 [ 6534.029907] omap3isp omap3isp: -CCDC Register
 dump- [ 6534.029907] omap3isp omap3isp: ###CCDC
 PCR=0x
 [ 6534.029937] omap3isp omap3isp: ###CCDC SYN_MODE=0x00030f00
 [ 6534.029937] omap3isp omap3isp: ###CCDC HD_VD_WID=0x
 [ 6534.029937] omap3isp omap3isp: ###CCDC PIX_LINES=0x
 [ 6534.029968] omap3isp omap3isp: ###CCDC HORZ_INFO=0x027f
 [ 6534.029968] omap3isp omap3isp: ###CCDC VERT_START=0x
 [ 6534.029968] omap3isp omap3isp: ###CCDC VERT_LINES=0x059f
 [ 6534.029998] omap3isp omap3isp: ###CCDC CULLING=0x00ff
 [ 6534.029998] omap3isp omap3isp: ###CCDC HSIZE_OFF=0x0280
 [ 6534.029998] omap3isp omap3isp: ###CCDC SDOFST=0x
 [ 6534.030029] omap3isp omap3isp: ###CCDC SDR_ADDR=0x1000
 [ 6534.030029] omap3isp omap3isp: ###CCDC CLAMP=0x0010
 [ 6534.030029] omap3isp omap3isp: ###CCDC DCSUB=0x
 [ 6534.030059] omap3isp omap3isp: ###CCDC COLPTN=0xbb11bb11
 [ 6534.030059] omap3isp omap3isp: ###CCDC BLKCMP=0x
 [ 6534.030059] omap3isp omap3isp: ###CCDC FPC=0x
 [ 6534.030090] omap3isp omap3isp: ###CCDC FPC_ADDR=0x
 [ 6534.030090] omap3isp omap3isp: ###CCDC VDINT=0x059e03c0
 [ 6534.030090] omap3isp omap3isp: ###CCDC ALAW=0x
 [ 6534.030120] omap3isp omap3isp: ###CCDC REC656IF=0x
 [ 6534.030120] omap3isp omap3isp: ###CCDC CFG=0x8000
 [ 6534.030120] omap3isp omap3isp: ###CCDC FMTCFG=0xe000
 [ 6534.030151] omap3isp omap3isp: ###CCDC FMT_HORZ=0x0280
 [ 6534.030151] omap3isp omap3isp: ###CCDC FMT_VERT=0x05a0
 [ 6534.030151] omap3isp omap3isp: ###CCDC PRGEVEN0=0x
 [ 6534.030181] omap3isp omap3isp: ###CCDC PRGEVEN1=0x
 [ 6534.030181] omap3isp omap3isp: ###CCDC PRGODD0=0x
 [ 6534.030181] omap3isp omap3isp: ###CCDC PRGODD1=0x
 [ 6534.030212] omap3isp omap3isp: ###CCDC VP_OUT=0x0b3e2800
 [ 6534.030212] omap3isp omap3isp: ###CCDC LSC_CONFIG=0x6600
 [ 6534.030212] omap3isp omap3isp: ###CCDC LSC_INITIAL=0x
 [ 6534.030242] omap3isp omap3isp: ###CCDC LSC_TABLE_BASE=0x
 [ 6534.030242] omap3isp omap3isp: ###CCDC LSC_TABLE_OFFSET=0x
 [ 6534.030242] omap3isp omap3isp:
 

 Output frame 0 is always good, while output frame 1 is 0x.

 I believe my sensor is respecting the clocks required before and after
 the frame.

 Could the ISP driver be writing my data to some unexpected location
 rather than to the v4l2 buffer?

 Is there a way to determine if the CCDC is writing to memory or not?

 How long vertical blanking do you have? It shouldn't have an effect, though.
 
 It definitely can :-) If vertical blanking isn't long enough, the CCDC will 
 start processing the next frame before the driver gets time to update the 
 hardware with the pointer to the next buffer. The first frame will then be 
 overwritten.

Sure, but in that case no buffers should be dequeued from the driver
either --- as they should always be marked faulty since reprogramming
the CCDC isn't possible.

Regards,

-- 
Sakari Ailus
sakari.ai...@iki.fi


Re: [PATCH v3 26/33] omap3isp: Default link validation for ccp2, csi2, preview and resizer

2012-02-26 Thread Sakari Ailus
Hi Laurent,

Laurent Pinchart wrote:
 Hi Sakari,
 
 On Saturday 25 February 2012 03:34:36 Sakari Ailus wrote:
 On Wed, Feb 22, 2012 at 12:01:26PM +0100, Laurent Pinchart wrote:
 On Monday 20 February 2012 03:57:05 Sakari Ailus wrote:
 Use default link validation for ccp2, csi2, preview and resizer. On
 ccp2, csi2 and ccdc we also collect information on external subdevs as
 one may be connected to those entities.

 The CCDC link validation still must be done separately.

 Also set pipe->external correctly as we go
 
 [snip]
 
  @@ -1999,6 +1999,27 @@ static int ccdc_set_format(struct v4l2_subdev *sd,
  					struct v4l2_subdev_fh *fh,
  	return 0;
   }

  +static int ccdc_link_validate(struct v4l2_subdev *sd,
  +			      struct media_link *link,
  +			      struct v4l2_subdev_format *source_fmt,
  +			      struct v4l2_subdev_format *sink_fmt)
  +{
  +	struct isp_ccdc_device *ccdc = v4l2_get_subdevdata(sd);
  +	struct isp_pipeline *pipe = to_isp_pipeline(&ccdc->subdev.entity);
  +	int rval;
  +
  +	/* We've got a parallel sensor here. */
  +	if (ccdc->input == CCDC_INPUT_PARALLEL) {
  +		pipe->external =
  +			media_entity_to_v4l2_subdev(link->source->entity);
  +		rval = omap3isp_get_external_info(pipe, link);
  +		if (rval < 0)
  +			return 0;
  +	}

 Pending my comments on 25/33, this wouldn't be needed in this patch, and
 could be squashed with 27/33.

 If I moved this code out of pipeline validation, I'd have to walk the graph
 one additional time. Do you think it's worth it, or are there easier ways to
 find the external entity connected to a pipeline?
 
 If I understand you correctly, the problem is that 
 omap3isp_get_external_info() can only be called when the external entity has 
 been located, and the CCDC link validation operation would be called before 
 that. Is that correct ?
 
 One option would be to locate the external entity before validating the link. 
 When the validation pipeline walk operation gets to the CCDC entity, it would 
 first follow the link, check if the connected entity is external (and in that 
 case store it in pipe->external and call omap3isp_get_external_info()), and 
 then only call the CCDC link validation operation.
 
 The other option is to leave the code as-is :-) Or rather modify it slightly: 
 assigning the entity to pipe->external and calling 
 omap3isp_get_external_info() should be done in ispvideo.c at pipeline 
 validation time.

I've modified it so that the entities which are part of the pipe will be
discovered by media_entity_pipeline_start() and stored in struct
media_pipeline.entities (as a bitmask).

It's trivial to figure out the external entity from that one in the ISP
driver.

I did it this way since I assume pretty much every single driver supporting
any non-linear data path must do the same. It's also almost no work to do
this in the above function, compared to a relatively significant headache in
the ISP driver.

I'll resend the patchset once I've gotten your reply on my selections
documentation changes. :-)

Cheers,

-- 
Sakari Ailus
sakari.ai...@iki.fi


Re: OMAP CCDC with sensors that are always on...

2012-02-26 Thread Laurent Pinchart
Hi Sakari,

On Monday 27 February 2012 01:35:14 Sakari Ailus wrote:
 Laurent Pinchart wrote:
  On Saturday 25 February 2012 01:48:02 Sakari Ailus wrote:
  On Fri, Feb 17, 2012 at 05:32:31PM -0600, Chris Whittenburg wrote:
  I fixed my sensor to respect a run signal from the omap, so that now
  it only sends data when the ccdc is expecting it.
  
  This fixed my problem, and now I can capture the 640x1440 frames.
  
  At least the first one...
  
  Subsequent frames are always full of 0x55, like the ISP didn't write
  anything into them.
  
  I still get the VD0 interrupts, and I checked that WEN in the
  CCDC_SYN_MODE register is set, and that the EXWEN bit is clear.
  
  I'm using the command:
  yavta -c2 -p -F --skip 0 -f Y8 -s 640x1440 /dev/video2
  
  Here are my register settings:
  
  [ 6534.029907] omap3isp omap3isp: -CCDC Register
  dump- [ 6534.029907] omap3isp omap3isp: ###CCDC
  PCR=0x
  [ 6534.029937] omap3isp omap3isp: ###CCDC SYN_MODE=0x00030f00
  [ 6534.029937] omap3isp omap3isp: ###CCDC HD_VD_WID=0x
  [ 6534.029937] omap3isp omap3isp: ###CCDC PIX_LINES=0x
  [ 6534.029968] omap3isp omap3isp: ###CCDC HORZ_INFO=0x027f
  [ 6534.029968] omap3isp omap3isp: ###CCDC VERT_START=0x
  [ 6534.029968] omap3isp omap3isp: ###CCDC VERT_LINES=0x059f
  [ 6534.029998] omap3isp omap3isp: ###CCDC CULLING=0x00ff
  [ 6534.029998] omap3isp omap3isp: ###CCDC HSIZE_OFF=0x0280
  [ 6534.029998] omap3isp omap3isp: ###CCDC SDOFST=0x
  [ 6534.030029] omap3isp omap3isp: ###CCDC SDR_ADDR=0x1000
  [ 6534.030029] omap3isp omap3isp: ###CCDC CLAMP=0x0010
  [ 6534.030029] omap3isp omap3isp: ###CCDC DCSUB=0x
  [ 6534.030059] omap3isp omap3isp: ###CCDC COLPTN=0xbb11bb11
  [ 6534.030059] omap3isp omap3isp: ###CCDC BLKCMP=0x
  [ 6534.030059] omap3isp omap3isp: ###CCDC FPC=0x
  [ 6534.030090] omap3isp omap3isp: ###CCDC FPC_ADDR=0x
  [ 6534.030090] omap3isp omap3isp: ###CCDC VDINT=0x059e03c0
  [ 6534.030090] omap3isp omap3isp: ###CCDC ALAW=0x
  [ 6534.030120] omap3isp omap3isp: ###CCDC REC656IF=0x
  [ 6534.030120] omap3isp omap3isp: ###CCDC CFG=0x8000
  [ 6534.030120] omap3isp omap3isp: ###CCDC FMTCFG=0xe000
  [ 6534.030151] omap3isp omap3isp: ###CCDC FMT_HORZ=0x0280
  [ 6534.030151] omap3isp omap3isp: ###CCDC FMT_VERT=0x05a0
  [ 6534.030151] omap3isp omap3isp: ###CCDC PRGEVEN0=0x
  [ 6534.030181] omap3isp omap3isp: ###CCDC PRGEVEN1=0x
  [ 6534.030181] omap3isp omap3isp: ###CCDC PRGODD0=0x
  [ 6534.030181] omap3isp omap3isp: ###CCDC PRGODD1=0x
  [ 6534.030212] omap3isp omap3isp: ###CCDC VP_OUT=0x0b3e2800
  [ 6534.030212] omap3isp omap3isp: ###CCDC LSC_CONFIG=0x6600
  [ 6534.030212] omap3isp omap3isp: ###CCDC LSC_INITIAL=0x
  [ 6534.030242] omap3isp omap3isp: ###CCDC LSC_TABLE_BASE=0x
  [ 6534.030242] omap3isp omap3isp: ###CCDC LSC_TABLE_OFFSET=0x
  [ 6534.030242] omap3isp omap3isp:
  
  
  Output frame 0 is always good, while output frame 1 is 0x.
  
  I believe my sensor is respecting the clocks required before and after
  the frame.
  
  Could the ISP driver be writing my data to some unexpected location
  rather than to the v4l2 buffer?
  
  Is there a way to determine if the CCDC is writing to memory or not?
  
  How long vertical blanking do you have? It shouldn't have an effect,
  though.
  It definitely can :-) If vertical blanking isn't long enough, the CCDC
  will start processing the next frame before the driver gets time to update
  the hardware with the pointer to the next buffer. The first frame will
  then be overwritten.
 
 Sure, but in that case no buffers should be dequeued from the driver
 either --- as they should always be marked faulty since reprogramming
 the CCDC isn't possible.

Does the driver detect that ?

-- 
Regards,

Laurent Pinchart


Re: OMAP CCDC with sensors that are always on...

2012-02-26 Thread Sakari Ailus
Hi Laurent,

Laurent Pinchart wrote:
 On Monday 27 February 2012 01:35:14 Sakari Ailus wrote:
 Laurent Pinchart wrote:
 On Saturday 25 February 2012 01:48:02 Sakari Ailus wrote:
 On Fri, Feb 17, 2012 at 05:32:31PM -0600, Chris Whittenburg wrote:
 I fixed my sensor to respect a run signal from the omap, so that now
 it only sends data when the ccdc is expecting it.

 This fixed my problem, and now I can capture the 640x1440 frames.

 At least the first one...

 Subsequent frames are always full of 0x55, like the ISP didn't write
 anything into them.

 I still get the VD0 interrupts, and I checked that WEN in the
 CCDC_SYN_MODE register is set, and that the EXWEN bit is clear.

 I'm using the command:
 yavta -c2 -p -F --skip 0 -f Y8 -s 640x1440 /dev/video2

 Here are my register settings:

 [ 6534.029907] omap3isp omap3isp: -CCDC Register
 dump- [ 6534.029907] omap3isp omap3isp: ###CCDC
 PCR=0x
 [ 6534.029937] omap3isp omap3isp: ###CCDC SYN_MODE=0x00030f00
 [ 6534.029937] omap3isp omap3isp: ###CCDC HD_VD_WID=0x
 [ 6534.029937] omap3isp omap3isp: ###CCDC PIX_LINES=0x
 [ 6534.029968] omap3isp omap3isp: ###CCDC HORZ_INFO=0x027f
 [ 6534.029968] omap3isp omap3isp: ###CCDC VERT_START=0x
 [ 6534.029968] omap3isp omap3isp: ###CCDC VERT_LINES=0x059f
 [ 6534.029998] omap3isp omap3isp: ###CCDC CULLING=0x00ff
 [ 6534.029998] omap3isp omap3isp: ###CCDC HSIZE_OFF=0x0280
 [ 6534.029998] omap3isp omap3isp: ###CCDC SDOFST=0x
 [ 6534.030029] omap3isp omap3isp: ###CCDC SDR_ADDR=0x1000
 [ 6534.030029] omap3isp omap3isp: ###CCDC CLAMP=0x0010
 [ 6534.030029] omap3isp omap3isp: ###CCDC DCSUB=0x
 [ 6534.030059] omap3isp omap3isp: ###CCDC COLPTN=0xbb11bb11
 [ 6534.030059] omap3isp omap3isp: ###CCDC BLKCMP=0x
 [ 6534.030059] omap3isp omap3isp: ###CCDC FPC=0x
 [ 6534.030090] omap3isp omap3isp: ###CCDC FPC_ADDR=0x
 [ 6534.030090] omap3isp omap3isp: ###CCDC VDINT=0x059e03c0
 [ 6534.030090] omap3isp omap3isp: ###CCDC ALAW=0x
 [ 6534.030120] omap3isp omap3isp: ###CCDC REC656IF=0x
 [ 6534.030120] omap3isp omap3isp: ###CCDC CFG=0x8000
 [ 6534.030120] omap3isp omap3isp: ###CCDC FMTCFG=0xe000
 [ 6534.030151] omap3isp omap3isp: ###CCDC FMT_HORZ=0x0280
 [ 6534.030151] omap3isp omap3isp: ###CCDC FMT_VERT=0x05a0
 [ 6534.030151] omap3isp omap3isp: ###CCDC PRGEVEN0=0x
 [ 6534.030181] omap3isp omap3isp: ###CCDC PRGEVEN1=0x
 [ 6534.030181] omap3isp omap3isp: ###CCDC PRGODD0=0x
 [ 6534.030181] omap3isp omap3isp: ###CCDC PRGODD1=0x
 [ 6534.030212] omap3isp omap3isp: ###CCDC VP_OUT=0x0b3e2800
 [ 6534.030212] omap3isp omap3isp: ###CCDC LSC_CONFIG=0x6600
 [ 6534.030212] omap3isp omap3isp: ###CCDC LSC_INITIAL=0x
 [ 6534.030242] omap3isp omap3isp: ###CCDC LSC_TABLE_BASE=0x
 [ 6534.030242] omap3isp omap3isp: ###CCDC LSC_TABLE_OFFSET=0x
 [ 6534.030242] omap3isp omap3isp:
 

 Output frame 0 is always good, while output frame 1 is 0x.

 I believe my sensor is respecting the clocks required before and after
 the frame.

 Could the ISP driver be writing my data to some unexpected location
 rather than to the v4l2 buffer?

 Is there a way to determine if the CCDC is writing to memory or not?

 How long vertical blanking do you have? It shouldn't have an effect,
 though.
 It definitely can :-) If vertical blanking isn't long enough, the CCDC
 will start processing the next frame before the driver gets time to update
 the hardware with the pointer to the next buffer. The first frame will
 then be overwritten.

 Sure, but in that case no buffers should be dequeued from the driver
 either --- as they should always be marked faulty since reprogramming
 the CCDC isn't possible.
 
 Does the driver detect that ?

It does. The CCDC is disabled in the VD1 interrupt, which should arrive well
before VD0. The CCDC continues to process the frame until the end of it, and
once it becomes idle, a new buffer address is programmed. At least as
far as I remember and how the code looks to me this late in the
evening. ;)

Chris: are you capturing at CCDC output video node?

It might also make sense to check whether VD1 and VD0 interrupts arrive
as expected. (I.e. VD1 first, then VD0 on each frame.)
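
A quick way to check that might be a temporary debug print in the omap3isp
IRQ handler, along these lines (hypothetical snippet, the bit names are my
assumption based on ispreg.h):

	if (irqstatus & IRQ0STATUS_CCDC_VD1_IRQ)
		dev_dbg(isp->dev, "CCDC VD1\n");
	if (irqstatus & IRQ0STATUS_CCDC_VD0_IRQ)
		dev_dbg(isp->dev, "CCDC VD0\n");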

Regards,

-- 
Sakari Ailus
sakari.ai...@iki.fi


Re: [PATCH v3 04/33] v4l: VIDIOC_SUBDEV_S_SELECTION and VIDIOC_SUBDEV_G_SELECTION IOCTLs

2012-02-26 Thread Laurent Pinchart
Hi Sakari,

On Thursday 23 February 2012 08:01:23 Sakari Ailus wrote:
 Laurent Pinchart wrote:
  [snip]
  
  +/* active cropping area */
  +#define V4L2_SUBDEV_SEL_TGT_CROP_ACTIVE   0x
  +/* cropping bounds */
  +#define V4L2_SUBDEV_SEL_TGT_CROP_BOUNDS   0x0002
  +/* current composing area */
  +#define V4L2_SUBDEV_SEL_TGT_COMPOSE_ACTIVE0x0100
  +/* composing bounds */
  
  I'm not sure if ACTIVE is a good name here. It sounds confusing as we
  already have V4L2_SUBDEV_FORMAT_ACTIVE.
 
 We are using V4L2_SEL_TGT_COMPOSE_ACTIVE on V4L2 nodes already --- the
 name I'm using here just mirrors the naming on V4L2 device nodes. If I
 choose a different name here, some of that analogy is lost.
 
 That said, I'm not against changing this but the equivalent change
 should then be made on V4L2 selection API for consistency.

I'm not against changing the V4L2 selection API either :-) Just think about 
developers talking about "try crop active" or "active crop bounds". Even 
worse, will "active crop" refer to the active target or the active "which" ? 
That will be very confusing.

-- 
Regards,

Laurent Pinchart


Re: [PATCH 3/3] Firmware for AF9035/AF9033 driver

2012-02-26 Thread Daniel Glöckner
On Wed, Feb 22, 2012 at 11:22:02PM +0100, Hans-Frieder Vogt wrote:
 0040: Firmware_CODELENGTH bytes

Some time ago I analyzed the firmware of the AF9035.
The firmware download command inside the on-chip ROM expects chunks
with a 7 byte header:

Byte 0: MCS 51 core
There are two inside the AF9035 (1=Link and 2=OFDM) with
separate address spaces
Byte 1-2: Big endian destination address
Byte 3-4: Big endian number of data bytes following the header
Byte 5-6: Big endian header checksum, apparently ignored by the chip
Calculated as ~(h[0]*256+h[1]+h[2]*256+h[3]+h[4]*256)

This might help locate the firmware inside the Windows drivers.
The Windows drivers often contain two copies of the same firmware.
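
For illustration only, a small sketch of building such a chunk header from
the layout and checksum formula above (not taken from any driver source):

#include <stdint.h>

/* Fill in the 7-byte firmware chunk header for one MCS 51 core. */
static void af9035_fw_chunk_header(uint8_t h[7], uint8_t core,
				   uint16_t addr, uint16_t len)
{
	uint16_t csum;

	h[0] = core;		/* 1 = Link, 2 = OFDM */
	h[1] = addr >> 8;	/* big endian destination address */
	h[2] = addr & 0xff;
	h[3] = len >> 8;	/* big endian payload length */
	h[4] = len & 0xff;
	/* Header checksum, apparently ignored by the chip. */
	csum = ~(h[0] * 256 + h[1] + h[2] * 256 + h[3] + h[4] * 256);
	h[5] = csum >> 8;
	h[6] = csum & 0xff;
}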

  Daniel


Re: [PATCH v3 06/33] v4l: Check pad number in get try pointer functions

2012-02-26 Thread Laurent Pinchart
Hi Sakari,

On Thursday 23 February 2012 07:57:54 Sakari Ailus wrote:
 Laurent Pinchart wrote:
  On Monday 20 February 2012 03:56:45 Sakari Ailus wrote:
  Unify functions to get try pointers and validate the pad number accessed
  by
  the user.
  
  Signed-off-by: Sakari Ailus <sakari.ai...@iki.fi>
  ---
  
   include/media/v4l2-subdev.h |   31 ++-
   1 files changed, 14 insertions(+), 17 deletions(-)
  
  diff --git a/include/media/v4l2-subdev.h b/include/media/v4l2-subdev.h
  index bcaf6b8..d48dae5 100644
  --- a/include/media/v4l2-subdev.h
  +++ b/include/media/v4l2-subdev.h
  @@ -565,23 +565,20 @@ struct v4l2_subdev_fh {
  
 container_of(fh, struct v4l2_subdev_fh, vfh)
   
   #if defined(CONFIG_VIDEO_V4L2_SUBDEV_API)
  
   -static inline struct v4l2_mbus_framefmt *
   -v4l2_subdev_get_try_format(struct v4l2_subdev_fh *fh, unsigned int pad)
   -{
   -	return &fh->pad[pad].try_fmt;
   -}
   -
   -static inline struct v4l2_rect *
   -v4l2_subdev_get_try_crop(struct v4l2_subdev_fh *fh, unsigned int pad)
   -{
   -	return &fh->pad[pad].try_crop;
   -}
   -
   -static inline struct v4l2_rect *
   -v4l2_subdev_get_try_compose(struct v4l2_subdev_fh *fh, unsigned int pad)
   -{
   -	return &fh->pad[pad].try_compose;
   -}
   +#define __V4L2_SUBDEV_MK_GET_TRY(rtype, fun_name, field_name)		\
   +	static inline struct rtype *					\
   +	v4l2_subdev_get_try_##fun_name(struct v4l2_subdev_fh *fh,	\
   +				       unsigned int pad)		\
   +	{								\
   +		if (unlikely(pad >= vdev_to_v4l2_subdev(		\
   +				     fh->vfh.vdev)->entity.num_pads))	\
   +			return NULL;					\
   +		return &fh->pad[pad].field_name;			\
   +	}
   +
   +__V4L2_SUBDEV_MK_GET_TRY(v4l2_mbus_framefmt, format, try_fmt)
   +__V4L2_SUBDEV_MK_GET_TRY(v4l2_rect, crop, try_crop)
   +__V4L2_SUBDEV_MK_GET_TRY(v4l2_rect, compose, try_compose)
  
   #endif
   
   extern const struct v4l2_file_operations v4l2_subdev_fops;
  
  I'm not sure if this is a good idea. Drivers usually access the active and
  try formats/rectangles through a single function that checks the which
  argument and returns the active format/rectangle from the driver-specific
  device structure, or calls v4l2_subdev_get_try_*. The pad number should
  be checked for both active and try formats/rectangles, as both can result
  in accessing a wrong memory location.
  
  Furthermore, only in-kernel access to the active/try formats/rectangles
  needs to be checked, as the pad argument to subdev ioctls is already
  checked in v4l2-subdev.c. If your goal is to catch buggy kernel code
  here, a BUG_ON might be more suitable (although accessing the NULL
  pointer would result in an oops anyway).
 
 This was basically the reason for the memory corruption issue I had some
 time ago with the driver. The drivers (typically, I guess) need to
 access this data also to validate the following selection rectangles
 inside the subdev.
 
 The active rectangles are also the driver's own property, so it's up to the
 driver to access them properly. In principle the same goes for the
 try rectangles, but the fact still is that this patch would have caught
 the bad accesses right at the time they were made. I feel it's just too
 easy to give the function a faulty pad number --- see the SMIA++ driver
 for an example.
 
 I'd prefer to keep this change, and also I'm fine with doing BUG()
 instead of returning NULL.

I think I would prefer a BUG() as well. I'm OK with keeping the check. If 
drivers were bug-free this wouldn't be needed at all of course :-)

-- 
Regards,

Laurent Pinchart


Re: [PATCH v3 04/33] v4l: VIDIOC_SUBDEV_S_SELECTION and VIDIOC_SUBDEV_G_SELECTION IOCTLs

2012-02-26 Thread Sakari Ailus
Hi Laurent,

On Mon, Feb 27, 2012 at 01:22:34AM +0100, Laurent Pinchart wrote:
 Hi Sakari,
 
 On Thursday 23 February 2012 08:01:23 Sakari Ailus wrote:
  Laurent Pinchart wrote:
   [snip]
   
   +/* active cropping area */
   +#define V4L2_SUBDEV_SEL_TGT_CROP_ACTIVE 0x
   +/* cropping bounds */
   +#define V4L2_SUBDEV_SEL_TGT_CROP_BOUNDS 0x0002
   +/* current composing area */
   +#define V4L2_SUBDEV_SEL_TGT_COMPOSE_ACTIVE  0x0100
   +/* composing bounds */
   
   I'm not sure if ACTIVE is a good name here. It sounds confusing as we
   already have V4L2_SUBDEV_FORMAT_ACTIVE.
  
  We are using V4L2_SEL_TGT_COMPOSE_ACTIVE on V4L2 nodes already --- the
  name I'm using here just mirrors the naming on V4L2 device nodes. If I
  choose a different name here, some of that analogy is lost.
  
  That said, I'm not against changing this but the equivalent change
  should then be made on V4L2 selection API for consistency.
 
 I'm not against changing the V4L2 selection API either :-) Just think about 
 developers talking about "try crop active" or "active crop bounds". Even 
 worse, will "active crop" refer to the active target or the active "which" ? 
 That will be very confusing.

I think I understand your concern. An easy solution would be to rename
active targets to something else, but what would that be exactly?

Also, I can't currently think of a use for non-active rectangles with which
== try, as they're not (typically) changeable. I guess this doesn't matter in
resolving the issue.

Current?
Effective?
Real?
Brisk?

Cheers,

-- 
Sakari Ailus
e-mail: sakari.ai...@iki.fi jabber/XMPP/Gmail: sai...@retiisi.org.uk


Re: [PATCH v3 33/33] rm680: Add camera init

2012-02-26 Thread Laurent Pinchart
Hi Sakari,

Thanks for the patch.

On Monday 20 February 2012 03:57:12 Sakari Ailus wrote:
 From: Sakari Ailus <sakari.ai...@maxwell.research.nokia.com>
 
 This currently introduces an extra file to the arch/arm/mach-omap2
 directory: board-rm680-camera.c. Keeping the device tree in mind, the
 context of the file could be represented as static data with one exception:
 the external clock to the sensor.
 
 This external clock is provided by the OMAP 3 SoC and required by the
 sensor. The issue is that the clock originates from the ISP and not from
 PRCM block as the other clocks and thus is not supported by the clock
 framework. Otherwise the sensor driver could just clk_get() and clk_enable()
 it, just like the regulators and gpios.
 
 Signed-off-by: Sakari Ailus <sakari.ai...@maxwell.research.nokia.com>
 ---
  arch/arm/mach-omap2/Makefile |3 +-
  arch/arm/mach-omap2/board-rm680-camera.c |  375 +++
  arch/arm/mach-omap2/board-rm680.c|   38 +++
  3 files changed, 415 insertions(+), 1 deletions(-)
  create mode 100644 arch/arm/mach-omap2/board-rm680-camera.c
 

[snip]

 diff --git a/arch/arm/mach-omap2/board-rm680-camera.c
 b/arch/arm/mach-omap2/board-rm680-camera.c new file mode 100644
 index 000..5059821
 --- /dev/null
 +++ b/arch/arm/mach-omap2/board-rm680-camera.c

[snip]

 +#include <asm/mach-types.h>
 +#include <plat/omap-pm.h>
 +
 +#include <media/omap3isp.h>
 +#include <media/smiapp.h>
 +
 +#include "../../../drivers/media/video/omap3isp/isp.h"

Do we still need the private OMAP3 ISP header ? You can move the ISP_XCLK_* 
macros to the public header (and maybe rename them to OMAP3ISP_XCLK_*).

 +#include devices.h
 +
 +#define SEC_CAMERA_RESET_GPIO97
 +
 +#define RM680_PRI_SENSOR 1
 +#define RM680_PRI_LENS   2
 +#define RM680_SEC_SENSOR 3
 +#define MAIN_CAMERA_XCLK ISP_XCLK_A
 +#define SEC_CAMERA_XCLK  ISP_XCLK_B
 +
 +/*
 + *
 + * Main Camera Module EXTCLK
 + * Used by the sensor and the actuator driver.
 + *
 + */
 +static struct camera_xclk {
 + u32 hz;
 + u32 lock;
 + u8 xclksel;
 +} cameras_xclk;
 +
 +static DEFINE_MUTEX(lock_xclk);
 +
 +static int rm680_update_xclk(struct v4l2_subdev *subdev, u32 hz, u32 which,
 +  u8 xclksel)
 +{
 + struct isp_device *isp = v4l2_dev_to_isp_device(subdev->v4l2_dev);
 + int ret;
 +
 + mutex_lock(&lock_xclk);
 +
 + if (which == RM680_SEC_SENSOR) {
 + if (cameras_xclk.xclksel == MAIN_CAMERA_XCLK) {
 + ret = -EBUSY;
 + goto done;
 + }
 + } else {
 + if (cameras_xclk.xclksel == SEC_CAMERA_XCLK) {
 + ret = -EBUSY;
 + goto done;
 + }
 + }
 +
 + if (hz) {   /* Turn on */
 + cameras_xclk.lock |= which;
 + if (cameras_xclk.hz == 0) {
 + isp->platform_cb.set_xclk(isp, hz, xclksel);
 + cameras_xclk.hz = hz;
 + cameras_xclk.xclksel = xclksel;
 + }
 + } else {/* Turn off */
 + cameras_xclk.lock &= ~which;
 + if (cameras_xclk.lock == 0) {
 + isp->platform_cb.set_xclk(isp, 0, xclksel);
 + cameras_xclk.hz = 0;
 + cameras_xclk.xclksel = 0;
 + }
 + }
 +
 + ret = cameras_xclk.hz;
 +
 +done:
 + mutex_unlock(&lock_xclk);
 + return ret;
 +}

I don't like this, but we can't do much better until the generic struct clk is 
available :-) However, in addition to handling the ISP clocks, the above code 
also prevents the two sensors from being used at the same time. This won't be 
handled by the clock framework and will need to be implemented somewhere else. 
Shouldn't we already split the two functions ?

 +
 +/*
 + *
 + * Main Camera Sensor
 + *
 + */
 +
 +static int rm680_main_camera_set_xclk(struct v4l2_subdev *sd, int hz)
 +{
 + return rm680_update_xclk(sd, hz, RM680_PRI_SENSOR, MAIN_CAMERA_XCLK);
 +}
 +
 +static struct smiapp_flash_strobe_parms rm680_main_camera_strobe_setup = {
 + .mode   = 0x0c,
 + .strobe_width_high_us   = 10,
 + .strobe_delay   = 0,
 + .stobe_start_point  = 0,
 + .trigger= 0,
 +};
 +
 +static struct smiapp_platform_data rm696_main_camera_platform_data = {
 + .i2c_addr_dfl   = SMIAPP_DFL_I2C_ADDR,
 + .i2c_addr_alt   = SMIAPP_ALT_I2C_ADDR,
 + .nvm_size   = 16 * 64,
 + .ext_clk= (9.6 * 1000 * 1000),

Parenthesis are not needed.

 + .lanes  = 2,
 + /* bit rate / ddr / lanes */
 + .op_sys_clock   = (s64 []){ 79680 / 2 / 2,
 + 84000 / 2 / 2,
 + 199680 / 2 / 2, 0 },
 + .csi_signalling_mode= SMIAPP_CSI_SIGNALLING_MODE_CSI2,
 +