RE: [RFC PATCH] adding support for setting bus parameters in sub device

2009-06-30 Thread Karicheri, Muralidharan
 data9-data15 for MT9T031
 data11-data15 for MT9P031
But then you could set the host bus width accordingly, for example:
MT9T031 MSB connected to data 9 : Host Buswidth = 10
MT9T031 MSB connected to data 15: Host Buswidth = 16

How does the driver know which host bus line the MSB of the sensor data bus is 
connected to? This has to be configured in our hardware register. Without this, 
I need to hardcode it, which doesn't seem right.

Murali Karicheri
Software Design Engineer
Texas Instruments Inc.
Germantown, MD 20874
Phone : 301-515-3736
email: m-kariche...@ti.com

-Original Message-
From: Jean-Philippe François [mailto:jp.franc...@cynove.com]
Sent: Monday, June 29, 2009 12:23 PM
To: Karicheri, Muralidharan
Cc: Hans Verkuil; davinci-linux-open-sou...@linux.davincidsp.com; linux-
me...@vger.kernel.org
Subject: Re: [RFC PATCH] adding support for setting bus parameters in sub
device

Karicheri, Muralidharan wrote:
 Hans,

 Had hit the send by mistake. Please ignore my previous reply...

 snip
 It seems very unlikely to me that someone would ever choose to e.g. zero
 one or more MSB pins instead of the LSB pins in a case like this.


 No case in my experience...

 snip
 Or do you have examples where that actually happens?

 DM365 can work with 10 bit or 12 bit sensors. The DM365 CCDC that receives
the image data needs to know which data lines have valid data. This is done
by specifying the MSB position. There is a CCDC register to configure this
information. Ideally, you could connect the MSB of the sensor to the following
lines on the host bus:-

 data9-data15 for MT9T031
 data11-data15 for MT9P031
But then you could set the host bus width accordingly, for example:
MT9T031 MSB connected to data 9 : Host Buswidth = 10
MT9T031 MSB connected to data 15: Host Buswidth = 16

Since the host/sensor info are in the same structure, we can have
several subdev with different host buswidth.

Jean-Philippe François



 So it makes sense to specify this choice in the structure. I have not
added this earlier since we wanted to use the structure only for sub device
configuration. It has changed since then.

 I am also not sure if s_bus() is required since this will get set in the
platform data which could then be passed to the sub device using the new
api while loading it. When would host driver call s_bus()?


 If this never happens, then there is also no need for such a bitfield.

 I think I want to actually see someone using this before we add a field
 like that.

 Regards,

   Hans



 Murali Karicheri
 Software Design Engineer
 Texas Instruments Inc.
 Germantown, MD 20874
 email: m-kariche...@ti.com

 -Original Message-
 From: Hans Verkuil [mailto:hverk...@xs4all.nl]
 Sent: Monday, June 29, 2009 5:26 AM
 To: Karicheri, Muralidharan
 Cc: linux-media@vger.kernel.org; davinci-linux-open-
 sou...@linux.davincidsp.com
 Subject: Re: [RFC PATCH] adding support for setting bus parameters in
 sub
 device

 Hi Murali,

 From: Muralidharan Karicheri a0868...@gt516km11.gt.design.ti.com

 This patch adds support for setting bus parameters such as bus type
 (Raw Bayer or Raw YUV image data bus), bus width (example 10 bit raw
 image data bus, 10 bit BT.656 etc.), and polarities (vsync, hsync,
 field
 etc) in sub device. This allows bridge driver to configure the sub
 device
 bus for a specific set of bus parameters through s_bus() function
call.
 This also can be used to define platform specific bus parameters for
 host and sub-devices.

 Reviewed by: Hans Verkuil hverk...@xs4all.nl
 Signed-off-by: Murali Karicheri m-kariche...@ti.com
 ---
 Applies to v4l-dvb repository

  include/media/v4l2-subdev.h |   40 ++++++++++++++++++++++++++++++++++++++++
  1 files changed, 40 insertions(+), 0 deletions(-)

 diff --git a/include/media/v4l2-subdev.h b/include/media/v4l2-subdev.h
 index 1785608..2f5ec98 100644
 --- a/include/media/v4l2-subdev.h
 +++ b/include/media/v4l2-subdev.h
 @@ -37,6 +37,43 @@ struct v4l2_decode_vbi_line {
 	u32 type;	/* VBI service type (V4L2_SLICED_*). 0 if no service found */
  };

 +/*
 + * Some sub-devices are connected to the host/bridge device through a bus that
 + * carries the clock, vsync, hsync and data. Some interfaces such as BT.656
 + * carry the sync embedded in the data whereas others have separate lines
 + * carrying the sync signals. The structure below is used to define bus
 + * configuration parameters for host as well as sub-device.
 + */
 +enum v4l2_subdev_bus_type {
 +   /* Raw YUV image data bus */
 +   V4L2_SUBDEV_BUS_RAW_YUV,
 +   /* Raw Bayer image data bus */
 +   V4L2_SUBDEV_BUS_RAW_BAYER
 +};
 +
 +struct v4l2_bus_settings {
 +   /* yuv or bayer image data bus */
 +   enum v4l2_subdev_bus_type type;
 +   /* subdev bus width */
 +   u8 subdev_width;
 +   /* host bus width */
 +   u8 host_width;
 +   /* embedded sync, set this when sync is embedded in the data stream */
 +   unsigned embedded_sync:1;

Re: [RFC PATCH] adding support for setting bus parameters in sub device

2009-06-30 Thread Jean-Philippe François

Karicheri, Muralidharan wrote:

data9-data15 for MT9T031
data11-data15 for MT9P031

But then you could set the host bus width accordingly, for example:
MT9T031 MSB connected to data 9 : Host Buswidth = 10
MT9T031 MSB connected to data 15: Host Buswidth = 16


How does the driver know which MSB of the sensor data line is connected to the 
host bus? This is to be configured in our hardware register. Without this, I 
need to hardcode it which doesn't seem right.

I know you don't want to hardcode this; it is just not clear to me why 3 
parameters are needed. Let's take two different hardware setups: MT9T031 
with MSB connected to video pin 9 and to video pin 11:


struct v4l2_bus_settings mt9t031_msb9 = {
	.subdev_width = 10,
	.host_width = ???,
	.host_msb = 9,
};

struct v4l2_bus_settings mt9t031_msb11 = {
	.subdev_width = 10,
	.host_width = ???,
	.host_msb = 11,
};

I can't see the use of those three parameters, and the mix of widths and 
bit numbers is error prone.

Can you explain the meaning of host_width, or provide a realistic example 
where these 3 settings are useful?





Murali Karicheri
Software Design Engineer
Texas Instruments Inc.
Germantown, MD 20874
Phone : 301-515-3736
email: m-kariche...@ti.com


-Original Message-
From: Jean-Philippe François [mailto:jp.franc...@cynove.com]
Sent: Monday, June 29, 2009 12:23 PM
To: Karicheri, Muralidharan
Cc: Hans Verkuil; davinci-linux-open-sou...@linux.davincidsp.com; linux-
me...@vger.kernel.org
Subject: Re: [RFC PATCH] adding support for setting bus parameters in sub
device

Karicheri, Muralidharan wrote:

Hans,

Had hit the send by mistake. Please ignore my previous reply...

snip

It seems very unlikely to me that someone would ever choose to e.g. zero
one or more MSB pins instead of the LSB pins in a case like this.


No case in my experience...

snip

Or do you have examples where that actually happens?


DM365 can work with 10 bit or 12 bit sensors. The DM365 CCDC that receives
the image data needs to know which data lines have valid data. This is done
by specifying the MSB position. There is a CCDC register to configure this
information. Ideally, you could connect the MSB of the sensor to the following
lines on the host bus:-

data9-data15 for MT9T031
data11-data15 for MT9P031

But then you could set the host bus width accordingly, for example:
MT9T031 MSB connected to data 9 : Host Buswidth = 10
MT9T031 MSB connected to data 15: Host Buswidth = 16

Since the host/sensor info are in the same structure, we can have
several subdev with different host buswidth.

Jean-Philippe François



So it makes sense to specify this choice in the structure. I have not
added this earlier since we wanted to use the structure only for sub device
configuration. It has changed since then.

I am also not sure if s_bus() is required since this will get set in the
platform data which could then be passed to the sub device using the new
api while loading it. When would host driver call s_bus()?

If this never happens, then there is also no need for such a bitfield.

I think I want to actually see someone using this before we add a field
like that.

Regards,

  Hans



Murali Karicheri
Software Design Engineer
Texas Instruments Inc.
Germantown, MD 20874
email: m-kariche...@ti.com


-Original Message-
From: Hans Verkuil [mailto:hverk...@xs4all.nl]
Sent: Monday, June 29, 2009 5:26 AM
To: Karicheri, Muralidharan
Cc: linux-media@vger.kernel.org; davinci-linux-open-
sou...@linux.davincidsp.com
Subject: Re: [RFC PATCH] adding support for setting bus parameters in sub
device

Hi Murali,


From: Muralidharan Karicheri a0868...@gt516km11.gt.design.ti.com

This patch adds support for setting bus parameters such as bus type
(Raw Bayer or Raw YUV image data bus), bus width (example 10 bit raw
image data bus, 10 bit BT.656 etc.), and polarities (vsync, hsync, field
etc) in sub device. This allows bridge driver to configure the sub device
bus for a specific set of bus parameters through s_bus() function call.
This also can be used to define platform specific bus parameters for
host and sub-devices.

Reviewed by: Hans Verkuil hverk...@xs4all.nl
Signed-off-by: Murali Karicheri m-kariche...@ti.com
---
Applies to v4l-dvb repository

 include/media/v4l2-subdev.h |   40 ++++++++++++++++++++++++++++++++++++++++
 1 files changed, 40 insertions(+), 0 deletions(-)

diff --git a/include/media/v4l2-subdev.h b/include/media/v4l2-subdev.h
index 1785608..2f5ec98 100644
--- a/include/media/v4l2-subdev.h
+++ b/include/media/v4l2-subdev.h
@@ -37,6 +37,43 @@ struct v4l2_decode_vbi_line {
	u32 type;	/* VBI service type (V4L2_SLICED_*). 0 if no service found */
 };

+/*
+ * Some sub-devices are connected to the host/bridge device through a bus that
+ * carries the clock, vsync, hsync and data. Some interfaces such as BT.656
+ * carries the sync embedded in the data where as others have separate line
RE: [RFC PATCH] adding support for setting bus parameters in sub device

2009-06-29 Thread Karicheri, Muralidharan
Hans,

When connecting a sensor like mt9t031 to a SoC like DM355, DM6446 etc., the 
driver also needs to know which host bus line the MSB of the sensor data bus 
is connected to. For example, on DM365, we have the following connection:-

data 9 (MSB) of the sensor is connected to data 11 of the host bus. For a 10 
bit sensor this means the lower 2 bits of the received data are zeros, and 
for a 12 bit sensor they carry valid data.

So I suggest including another field for this. 

unsigned host_msb:5; (8 - 15) ??


Murali Karicheri
Software Design Engineer
Texas Instruments Inc.
Germantown, MD 20874
email: m-kariche...@ti.com

-Original Message-
From: Hans Verkuil [mailto:hverk...@xs4all.nl]
Sent: Monday, June 29, 2009 5:26 AM
To: Karicheri, Muralidharan
Cc: linux-media@vger.kernel.org; davinci-linux-open-
sou...@linux.davincidsp.com
Subject: Re: [RFC PATCH] adding support for setting bus parameters in sub
device

Hi Murali,

 From: Muralidharan Karicheri a0868...@gt516km11.gt.design.ti.com

 This patch adds support for setting bus parameters such as bus type
 (Raw Bayer or Raw YUV image data bus), bus width (example 10 bit raw
 image data bus, 10 bit BT.656 etc.), and polarities (vsync, hsync, field
 etc) in sub device. This allows bridge driver to configure the sub device
 bus for a specific set of bus parameters through s_bus() function call.
 This also can be used to define platform specific bus parameters for
 host and sub-devices.

 Reviewed by: Hans Verkuil hverk...@xs4all.nl
 Signed-off-by: Murali Karicheri m-kariche...@ti.com
 ---
 Applies to v4l-dvb repository

  include/media/v4l2-subdev.h |   40 ++++++++++++++++++++++++++++++++++++++++
  1 files changed, 40 insertions(+), 0 deletions(-)

 diff --git a/include/media/v4l2-subdev.h b/include/media/v4l2-subdev.h
 index 1785608..2f5ec98 100644
 --- a/include/media/v4l2-subdev.h
 +++ b/include/media/v4l2-subdev.h
 @@ -37,6 +37,43 @@ struct v4l2_decode_vbi_line {
 	u32 type;	/* VBI service type (V4L2_SLICED_*). 0 if no service found */
  };

 +/*
 + * Some sub-devices are connected to the host/bridge device through a bus that
 + * carries the clock, vsync, hsync and data. Some interfaces such as BT.656
 + * carry the sync embedded in the data whereas others have separate lines
 + * carrying the sync signals. The structure below is used to define bus
 + * configuration parameters for host as well as sub-device.
 + */
 +enum v4l2_subdev_bus_type {
 +/* Raw YUV image data bus */
 +V4L2_SUBDEV_BUS_RAW_YUV,
 +/* Raw Bayer image data bus */
 +V4L2_SUBDEV_BUS_RAW_BAYER
 +};
 +
 +struct v4l2_bus_settings {
 +/* yuv or bayer image data bus */
 +enum v4l2_subdev_bus_type type;
 +/* subdev bus width */
 +u8 subdev_width;
 +/* host bus width */
 +u8 host_width;
 +/* embedded sync, set this when sync is embedded in the data stream */
 +unsigned embedded_sync:1;
 +/* master or slave */
 +unsigned host_is_master:1;
 +/* 0 - active low, 1 - active high */
 +unsigned pol_vsync:1;
 +/* 0 - active low, 1 - active high */
 +unsigned pol_hsync:1;
 +/* 0 - low to high , 1 - high to low */
 +unsigned pol_field:1;
 +/* 0 - sample at falling edge , 1 - sample at rising edge */
 +unsigned pol_pclock:1;
 +/* 0 - active low , 1 - active high */
 +unsigned pol_data:1;
 +};

I've been thinking about this for a while and I think this struct should
be extended with the host bus parameters as well:

struct v4l2_bus_settings {
	/* yuv or bayer image data bus */
	enum v4l2_bus_type type;
	/* embedded sync, set this when sync is embedded in the data stream */
	unsigned embedded_sync:1;
	/* master or slave */
	unsigned host_is_master:1;

	/* bus width */
	unsigned sd_width:8;
	/* 0 - active low, 1 - active high */
	unsigned sd_pol_vsync:1;
	/* 0 - active low, 1 - active high */
	unsigned sd_pol_hsync:1;
	/* 0 - low to high, 1 - high to low */
	unsigned sd_pol_field:1;
	/* 0 - sample at falling edge, 1 - sample at rising edge */
	unsigned sd_edge_pclock:1;
	/* 0 - active low, 1 - active high */
	unsigned sd_pol_data:1;

	/* host bus width */
	unsigned host_width:8;
	/* 0 - active low, 1 - active high */
	unsigned host_pol_vsync:1;
	/* 0 - active low, 1 - active high */
	unsigned host_pol_hsync:1;
	/* 0 - low to high, 1 - high to low */
	unsigned host_pol_field:1;
	/* 0 - sample at falling edge, 1 - sample at rising edge */
	unsigned host_edge_pclock:1;
	/* 0 - active low, 1 - active high */
	unsigned host_pol_data:1;
};

It makes sense since you need to setup both ends of the bus, and having
both ends defined in the same struct keeps everything together. I have
thought about having separate host and subdev structs, but part of the bus
description is always common (bus type, master/slave, embedded/separate

RE: [RFC PATCH] adding support for setting bus parameters in sub device

2009-06-29 Thread Hans Verkuil

 Hans,

 When connecting a sensor like mt9t031 to a SoC like DM355, DM6446 etc., the
 driver also needs to know which host bus line the MSB of the sensor data bus
 is connected to. For example, on DM365, we have the following connection:-

 data 9 (MSB) of the sensor is connected to data 11 of the host bus. For a 10
 bit sensor this means the lower 2 bits of the received data are zeros, and
 for a 12 bit sensor they carry valid data.

 So I suggest including another field for this.

 unsigned host_msb:5; (8 - 15) ??

It seems very unlikely to me that someone would ever choose to e.g. zero
one or more MSB pins instead of the LSB pins in a case like this.

Or do you have examples where that actually happens?

If this never happens, then there is also no need for such a bitfield.

I think I want to actually see someone using this before we add a field
like that.

Regards,

   Hans




 Murali Karicheri
 Software Design Engineer
 Texas Instruments Inc.
 Germantown, MD 20874
 email: m-kariche...@ti.com

-Original Message-
From: Hans Verkuil [mailto:hverk...@xs4all.nl]
Sent: Monday, June 29, 2009 5:26 AM
To: Karicheri, Muralidharan
Cc: linux-media@vger.kernel.org; davinci-linux-open-
sou...@linux.davincidsp.com
Subject: Re: [RFC PATCH] adding support for setting bus parameters in sub
device

Hi Murali,

 From: Muralidharan Karicheri a0868...@gt516km11.gt.design.ti.com

 This patch adds support for setting bus parameters such as bus type
 (Raw Bayer or Raw YUV image data bus), bus width (example 10 bit raw
 image data bus, 10 bit BT.656 etc.), and polarities (vsync, hsync,
 field
 etc) in sub device. This allows bridge driver to configure the sub
 device
 bus for a specific set of bus parameters through s_bus() function call.
 This also can be used to define platform specific bus parameters for
 host and sub-devices.

 Reviewed by: Hans Verkuil hverk...@xs4all.nl
 Signed-off-by: Murali Karicheri m-kariche...@ti.com
 ---
 Applies to v4l-dvb repository

  include/media/v4l2-subdev.h |   40 ++++++++++++++++++++++++++++++++++++++++
  1 files changed, 40 insertions(+), 0 deletions(-)

 diff --git a/include/media/v4l2-subdev.h b/include/media/v4l2-subdev.h
 index 1785608..2f5ec98 100644
 --- a/include/media/v4l2-subdev.h
 +++ b/include/media/v4l2-subdev.h
 @@ -37,6 +37,43 @@ struct v4l2_decode_vbi_line {
 	u32 type;	/* VBI service type (V4L2_SLICED_*). 0 if no service found */
  };

 +/*
 + * Some sub-devices are connected to the host/bridge device through a bus that
 + * carries the clock, vsync, hsync and data. Some interfaces such as BT.656
 + * carry the sync embedded in the data whereas others have separate lines
 + * carrying the sync signals. The structure below is used to define bus
 + * configuration parameters for host as well as sub-device.
 + */
 +enum v4l2_subdev_bus_type {
 +   /* Raw YUV image data bus */
 +   V4L2_SUBDEV_BUS_RAW_YUV,
 +   /* Raw Bayer image data bus */
 +   V4L2_SUBDEV_BUS_RAW_BAYER
 +};
 +
 +struct v4l2_bus_settings {
 +   /* yuv or bayer image data bus */
 +   enum v4l2_subdev_bus_type type;
 +   /* subdev bus width */
 +   u8 subdev_width;
 +   /* host bus width */
 +   u8 host_width;
 +   /* embedded sync, set this when sync is embedded in the data stream */
 +   unsigned embedded_sync:1;
 +   /* master or slave */
 +   unsigned host_is_master:1;
 +   /* 0 - active low, 1 - active high */
 +   unsigned pol_vsync:1;
 +   /* 0 - active low, 1 - active high */
 +   unsigned pol_hsync:1;
 +   /* 0 - low to high , 1 - high to low */
 +   unsigned pol_field:1;
 +   /* 0 - sample at falling edge , 1 - sample at rising edge */
 +   unsigned pol_pclock:1;
 +   /* 0 - active low , 1 - active high */
 +   unsigned pol_data:1;
 +};

I've been thinking about this for a while and I think this struct should
be extended with the host bus parameters as well:

struct v4l2_bus_settings {
  /* yuv or bayer image data bus */
  enum v4l2_bus_type type;
  /* embedded sync, set this when sync is embedded in the data stream */
  unsigned embedded_sync:1;
  /* master or slave */
  unsigned host_is_master:1;

  /* bus width */
  unsigned sd_width:8;
  /* 0 - active low, 1 - active high */
  unsigned sd_pol_vsync:1;
  /* 0 - active low, 1 - active high */
  unsigned sd_pol_hsync:1;
  /* 0 - low to high, 1 - high to low */
  unsigned sd_pol_field:1;
  /* 0 - sample at falling edge, 1 - sample at rising edge */
  unsigned sd_edge_pclock:1;
  /* 0 - active low, 1 - active high */
  unsigned sd_pol_data:1;

  /* host bus width */
  unsigned host_width:8;
  /* 0 - active low, 1 - active high */
  unsigned host_pol_vsync:1;
  /* 0 - active low, 1 - active high */
  unsigned host_pol_hsync:1;
  /* 0 - low to high, 1 - high to low */
  unsigned host_pol_field:1;
  /* 0 - sample at falling edge, 1 - sample at rising edge */
  unsigned host_edge_pclock:1;
  /* 0 - active low, 1 - active high

RE: [RFC PATCH] adding support for setting bus parameters in sub device

2009-06-29 Thread Karicheri, Muralidharan

It seems very unlikely to me that someone would ever choose to e.g. zero
one or more MSB pins instead of the LSB pins in a case like this.

No case in my experience...
Or do you have examples where that actually happens?

DM365 can work with 10 bit or 12 bit sensors. The DM365 CCDC that receives the 
image data needs to know which data lines have valid data. This is done by 
specifying the MSB position. Ideally, you could connect the MSB of the sensor to 
the following lines on the host bus:
9-15 for MT9T031
11-15 for MT9P031


If this never happens, then there is also no need for such a bitfield.

I think I want to actually see someone using this before we add a field
like that.

Regards,

   Hans




 Murali Karicheri
 Software Design Engineer
 Texas Instruments Inc.
 Germantown, MD 20874
 email: m-kariche...@ti.com

-Original Message-
From: Hans Verkuil [mailto:hverk...@xs4all.nl]
Sent: Monday, June 29, 2009 5:26 AM
To: Karicheri, Muralidharan
Cc: linux-media@vger.kernel.org; davinci-linux-open-
sou...@linux.davincidsp.com
Subject: Re: [RFC PATCH] adding support for setting bus parameters in sub
device

Hi Murali,

 From: Muralidharan Karicheri a0868...@gt516km11.gt.design.ti.com

 This patch adds support for setting bus parameters such as bus type
 (Raw Bayer or Raw YUV image data bus), bus width (example 10 bit raw
 image data bus, 10 bit BT.656 etc.), and polarities (vsync, hsync,
 field
 etc) in sub device. This allows bridge driver to configure the sub
 device
 bus for a specific set of bus parameters through s_bus() function call.
 This also can be used to define platform specific bus parameters for
 host and sub-devices.

 Reviewed by: Hans Verkuil hverk...@xs4all.nl
 Signed-off-by: Murali Karicheri m-kariche...@ti.com
 ---
 Applies to v4l-dvb repository

  include/media/v4l2-subdev.h |   40 ++++++++++++++++++++++++++++++++++++++++
  1 files changed, 40 insertions(+), 0 deletions(-)

 diff --git a/include/media/v4l2-subdev.h b/include/media/v4l2-subdev.h
 index 1785608..2f5ec98 100644
 --- a/include/media/v4l2-subdev.h
 +++ b/include/media/v4l2-subdev.h
 @@ -37,6 +37,43 @@ struct v4l2_decode_vbi_line {
 	u32 type;	/* VBI service type (V4L2_SLICED_*). 0 if no service found */
  };

 +/*
 + * Some sub-devices are connected to the host/bridge device through a bus that
 + * carries the clock, vsync, hsync and data. Some interfaces such as BT.656
 + * carry the sync embedded in the data whereas others have separate lines
 + * carrying the sync signals. The structure below is used to define bus
 + * configuration parameters for host as well as sub-device.
 + */
 +enum v4l2_subdev_bus_type {
 +  /* Raw YUV image data bus */
 +  V4L2_SUBDEV_BUS_RAW_YUV,
 +  /* Raw Bayer image data bus */
 +  V4L2_SUBDEV_BUS_RAW_BAYER
 +};
 +
 +struct v4l2_bus_settings {
 +  /* yuv or bayer image data bus */
 +  enum v4l2_subdev_bus_type type;
 +  /* subdev bus width */
 +  u8 subdev_width;
 +  /* host bus width */
 +  u8 host_width;
 +  /* embedded sync, set this when sync is embedded in the data stream */
 +  unsigned embedded_sync:1;
 +  /* master or slave */
 +  unsigned host_is_master:1;
 +  /* 0 - active low, 1 - active high */
 +  unsigned pol_vsync:1;
 +  /* 0 - active low, 1 - active high */
 +  unsigned pol_hsync:1;
 +  /* 0 - low to high , 1 - high to low */
 +  unsigned pol_field:1;
 +  /* 0 - sample at falling edge , 1 - sample at rising edge */
 +  unsigned pol_pclock:1;
 +  /* 0 - active low , 1 - active high */
 +  unsigned pol_data:1;
 +};

I've been thinking about this for a while and I think this struct should
be extended with the host bus parameters as well:

struct v4l2_bus_settings {
 /* yuv or bayer image data bus */
 enum v4l2_bus_type type;
 /* embedded sync, set this when sync is embedded in the data stream
*/
 unsigned embedded_sync:1;
 /* master or slave */
 unsigned host_is_master:1;

 /* bus width */
 unsigned sd_width:8;
 /* 0 - active low, 1 - active high */
 unsigned sd_pol_vsync:1;
 /* 0 - active low, 1 - active high */
 unsigned sd_pol_hsync:1;
 /* 0 - low to high, 1 - high to low */
 unsigned sd_pol_field:1;
 /* 0 - sample at falling edge, 1 - sample at rising edge */
 unsigned sd_edge_pclock:1;
 /* 0 - active low, 1 - active high */
 unsigned sd_pol_data:1;

 /* host bus width */
 unsigned host_width:8;
 /* 0 - active low, 1 - active high */
 unsigned host_pol_vsync:1;
 /* 0 - active low, 1 - active high */
 unsigned host_pol_hsync:1;
 /* 0 - low to high, 1 - high to low */
 unsigned host_pol_field:1;
 /* 0 - sample at falling edge, 1 - sample at rising edge */
 unsigned host_edge_pclock:1;
 /* 0 - active low, 1 - active high */
 unsigned host_pol_data:1;
};

It makes sense since you need to setup both ends of the bus, and having
both ends defined in the same struct keeps everything together. I have
thought about having separate host and subdev

RE: [PATCH] adding support for setting bus parameters in sub device

2009-06-25 Thread Karicheri, Muralidharan
Hi,

Is this ready to get merged or still require discussion before merge?

Murali Karicheri
Software Design Engineer
Texas Instruments Inc.
Germantown, MD 20874
email: m-kariche...@ti.com

-Original Message-
From: Karicheri, Muralidharan
Sent: Wednesday, June 17, 2009 5:17 PM
To: linux-media@vger.kernel.org
Cc: davinci-linux-open-sou...@linux.davincidsp.com; Muralidharan Karicheri;
Karicheri, Muralidharan
Subject: [PATCH] adding support for setting bus parameters in sub device

From: Muralidharan Karicheri a0868...@gt516km11.gt.design.ti.com

This patch adds support for setting bus parameters such as bus type
(Raw Bayer or Raw YUV image data bus), bus width (example 10 bit raw
image data bus, 10 bit BT.656 etc.), and polarities (vsync, hsync, field
etc) in sub device. This allows bridge driver to configure the sub device
bus for a specific set of bus parameters through s_bus() function call.
This also can be used to define platform specific bus parameters for
host and sub-devices.

Reviewed by: Hans Verkuil hverk...@xs4all.nl
Signed-off-by: Murali Karicheri m-kariche...@ti.com
---
Applies to v4l-dvb repository

 include/media/v4l2-subdev.h |   40 ++++++++++++++++++++++++++++++++++++++++
 1 files changed, 40 insertions(+), 0 deletions(-)

diff --git a/include/media/v4l2-subdev.h b/include/media/v4l2-subdev.h
index 1785608..8532b91 100644
--- a/include/media/v4l2-subdev.h
+++ b/include/media/v4l2-subdev.h
@@ -37,6 +37,43 @@ struct v4l2_decode_vbi_line {
	u32 type;	/* VBI service type (V4L2_SLICED_*). 0 if no service found */
 };

+/*
+ * Some sub-devices are connected to the host/bridge device through a bus that
+ * carries the clock, vsync, hsync and data. Some interfaces such as BT.656
+ * carry the sync embedded in the data whereas others have separate lines
+ * carrying the sync signals. The structure below is used to define bus
+ * configuration parameters for host as well as sub-device.
+ */
+enum v4l2_bus_type {
+  /* Raw YUV image data bus */
+  V4L2_BUS_RAW_YUV,
+  /* Raw Bayer image data bus */
+  V4L2_BUS_RAW_BAYER
+};
+
+struct v4l2_bus_settings {
+  /* yuv or bayer image data bus */
+  enum v4l2_bus_type type;
+  /* subdev bus width */
+  u8 subdev_width;
+  /* host bus width */
+  u8 host_width;
+  /* embedded sync, set this when sync is embedded in the data stream */
+  unsigned embedded_sync:1;
+  /* master or slave */
+  unsigned host_is_master:1;
+  /* 0 - active low, 1 - active high */
+  unsigned pol_vsync:1;
+  /* 0 - active low, 1 - active high */
+  unsigned pol_hsync:1;
+  /* 0 - low to high , 1 - high to low */
+  unsigned pol_field:1;
+  /* 0 - active low , 1 - active high */
+  unsigned pol_data:1;
+  /* 0 - sample at falling edge , 1 - sample at rising edge */
+  unsigned edge_pclock:1;
+};
+
 /* Sub-devices are devices that are connected somehow to the main bridge
   device. These devices are usually audio/video muxers/encoders/decoders or
   sensors and webcam controllers.
@@ -199,6 +236,8 @@ struct v4l2_subdev_audio_ops {

s_routing: see s_routing in audio_ops, except this version is for video
   devices.
+
+   s_bus: set bus parameters in sub device to configure the bus
  */
 struct v4l2_subdev_video_ops {
	int (*s_routing)(struct v4l2_subdev *sd, u32 input, u32 output, u32 config);
@@ -219,6 +258,7 @@ struct v4l2_subdev_video_ops {
	int (*s_parm)(struct v4l2_subdev *sd, struct v4l2_streamparm *param);
	int (*enum_framesizes)(struct v4l2_subdev *sd, struct v4l2_frmsizeenum *fsize);
	int (*enum_frameintervals)(struct v4l2_subdev *sd, struct v4l2_frmivalenum *fival);
+	int (*s_bus)(struct v4l2_subdev *sd, const struct v4l2_bus_settings *bus);
 };

 struct v4l2_subdev_ops {
--
1.6.0.4

--
To unsubscribe from this list: send the line unsubscribe linux-media in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: [PATCH] adding support for setting bus parameters in sub device

2009-06-18 Thread Magnus Damm
On Wed, Jun 17, 2009 at 5:33 PM, Hans Verkuil hverk...@xs4all.nl wrote:
 I think automatic negotiation is a good thing if it is implemented
 correctly.

 Actually, i think modelling software after hardware is a good thing
 and from that perspective the soc_camera was (and still is) a very
 good fit for our on-chip SoC. Apart from host/sensor separation, the
 main benefits in my mind are autonegotiation and separate
 configuration for camera sensor, capture interface and board.

 I don't mind doing the same outside soc_camera, and I agree with Hans
 that in some cases it's nice to hard code and skip the magic
 negotiation. I'm however pretty sure the soc_camera allows hard coding
 though, so in that case you get the best of two worlds.

 It is my strong opinion that while autonegotiation is easy to use, it is
 not a wise choice to make. Filling in a single struct with the bus
 settings to use for each board-subdev combination (usually there is only
 one) is simple, straight-forward and unambiguous. And I really don't see
 why that should take much time at all. And I consider it a very good point
 that the programmer is forced to think about this for a bit.

I agree that it's good to force the programmer to think. In this case
I assume you are talking about the board support engineer or at least
the person writing software to attach a camera sensor with capture
hardware.

You are not against letting drivers export their capabilities at least?
I'd like to see drivers that export capabilities describing which signals
are supported and which states are valid. For instance, the SuperH CEU
driver supports both active high and active low HSYNC and VSYNC signals.
I'd like to make sure that the driver writers are forced to think and
export a bitmap of capabilities describing all valid pin states, a little
bit in the same way that i2c drivers use a functionality() callback to
export a bitmap of capabilities. Whether the assignment of the pin states
is then automatic or hard coded, I don't care.

Cheers,

/ magnus


Re: [PATCH] adding support for setting bus parameters in sub device

2009-06-17 Thread Magnus Damm
On Mon, Jun 15, 2009 at 12:33 AM, Guennadi Liakhovetski g.liakhovet...@gmx.de wrote:
 On Fri, 12 Jun 2009, Hans Verkuil wrote:

 On Friday 12 June 2009 14:59:03 Guennadi Liakhovetski wrote:
  On Fri, 12 Jun 2009, Hans Verkuil wrote:
 
 1. it is very unusual that the board designer has to mandate what signal
 polarity has to be used - only when there's additional logic between the
 capture device and the host. So, we shouldn't overload all boards with
 this information. Board-code authors will be grateful to us!
  
   I talked to my colleague who actually designs boards like that about what
   he would prefer. His opinion is that he wants to set this himself, rather
   than leave it as the result of a software negotiation. It simplifies
   verification and debugging the hardware, and in addition there may be
   cases where subtle timing differences between e.g. sampling on a falling
   edge vs rising edge can actually become an important factor, particularly
   on high frequencies.

Let me guess, your coworker is a hardware designer? Letting hardware
people do hardware design is usually a good idea, but I have yet to see
good software written by hardware people. =)

  I'd say this is different. You're talking about cases where you _want_ to
  be able to configure it explicitly, I am saying you do not have to _force_
  all to do this. Now, this selection only makes sense if both are
  configurable, right? In this case, e.g., pxa270 driver does support
  platform-specified preference. So, if both the host and the client can
  configure either polarity in the software you _can_ still specify the
  preferred one in platform data and it will be used.
 
  I think, the ability to specify inverters and the preferred polarity
  should cover all possible cases.

 In my opinion you should always want to set this explicitly. This is not
 something you want to leave to chance. Say you autoconfigure this. Now
 someone either changes the autoconf algorithm, or a previously undocumented
 register was discovered for the i2c device and it can suddenly configure the
 polarity of some signal that was previously thought to be fixed, or something
 else happens causing a different polarity to be negotiated.

 TBH, the argumentation like someone changes the autoconf algorithm or
 previously undocumented register is discovered doesn't convince me. In
 any case, I am adding authors, maintainers and major contributors to
 various soc-camera host drivers to CC and asking them to express their
 opinion on this matter. I will not add anything else here to avoid any
 unfair competition:-) they will have to go a couple emails back in this
 thread to better understand what is being discussed here.

I think automatic negotiation is a good thing if it is implemented correctly.

Actually, I think modelling software after hardware is a good thing
and from that perspective the soc_camera was (and still is) a very
good fit for our on-chip SoC. Apart from host/sensor separation, the
main benefits in my mind are autonegotiation and separate
configuration for camera sensor, capture interface and board.

I don't mind doing the same outside soc_camera, and I agree with Hans
that in some cases it's nice to hard code and skip the magic
negotiation. I'm pretty sure the soc_camera allows hard coding, however,
so in that case you get the best of both worlds.

Cheers,

/ magnus


Re: [PATCH] adding support for setting bus parameters in sub device

2009-06-17 Thread Hans Verkuil

 On Mon, Jun 15, 2009 at 12:33 AM, Guennadi Liakhovetski
 g.liakhovet...@gmx.de wrote:
 On Fri, 12 Jun 2009, Hans Verkuil wrote:

 On Friday 12 June 2009 14:59:03 Guennadi Liakhovetski wrote:
  On Fri, 12 Jun 2009, Hans Verkuil wrote:
 
1. it is very unusual that the board designer has to mandate what
 signal
polarity has to be used - only when there's additional logic
 between the
capture device and the host. So, we shouldn't overload all boards
 with
this information. Board-code authors will be grateful to us!
  
   I talked to my colleague who actually designs boards like that
 about what
   he would prefer. His opinion is that he wants to set this himself,
 rather
   than leave it as the result of a software negotiation. It
 simplifies
   verification and debugging the hardware, and in addition there may
 be
   cases where subtle timing differences between e.g. sampling on a
 falling
   edge vs rising edge can actually become an important factor,
 particularly
   on high frequencies.

 Let me guess, your coworker is a hardware designer? Letting hardware
 people do hardware design is usually a good idea, but I have yet to see
 good software written by hardware people. =)

I agree. That's why I'm doing the software part :-)

  I'd say this is different. You're talking about cases where you
 _want_ to
  be able to configure it explicitly, I am saying you do not have to
 _force_
  all to do this. Now, this selection only makes sense if both are
  configurable, right? In this case, e.g., pxa270 driver does support
  platform-specified preference. So, if both the host and the client
 can
  configure either polarity in the software you _can_ still specify the
  preferred one in platform data and it will be used.
 
  I think, the ability to specify inverters and the preferred polarity
  should cover all possible cases.

 In my opinion you should always want to set this explicitly. This is
 not
 something you want to leave to chance. Say you autoconfigure this. Now
 someone either changes the autoconf algorithm, or a previously
 undocumented
 register was discovered for the i2c device and it can suddenly
 configure the
 polarity of some signal that was previously thought to be fixed, or
 something
 else happens causing a different polarity to be negotiated.

 TBH, the argumentation like someone changes the autoconf algorithm or
 previously undocumented register is discovered doesn't convince me. In
 any case, I am adding authors, maintainers and major contributors to
 various soc-camera host drivers to CC and asking them to express their
 opinion on this matter. I will not add anything else here to avoid any
 unfair competition:-) they will have to go a couple emails back in
 this
 thread to better understand what is being discussed here.

 I think automatic negotiation is a good thing if it is implemented
 correctly.

 Actually, I think modelling software after hardware is a good thing
 and from that perspective the soc_camera was (and still is) a very
 good fit for our on-chip SoC. Apart from host/sensor separation, the
 main benefits in my mind are autonegotiation and separate
 configuration for camera sensor, capture interface and board.

 I don't mind doing the same outside soc_camera, and I agree with Hans
 that in some cases it's nice to hard code and skip the magic
 negotiation. I'm pretty sure the soc_camera allows hard coding, however,
 so in that case you get the best of both worlds.

It is my strong opinion that while autonegotiation is easy to use, it is
not a wise choice to make. Filling in a single struct with the bus
settings to use for each board-subdev combination (usually there is only
one) is simple, straightforward and unambiguous. And I really don't see
why that should take much time at all. And I consider it a very good point
that the programmer is forced to think about this for a bit.

Regards,

Hans

-- 
Hans Verkuil - video4linux developer - sponsored by TANDBERG
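[Editorial illustration: the "single struct per board-subdev combination" Hans argues for would look something like the following. The struct mirrors the v4l2_bus_settings proposal from this thread, reproduced locally so the sketch compiles on its own; the board name and the values are made up.]

```c
#include <assert.h>

/* Local copy of the proposed v4l2_bus_settings so this sketch is
 * self-contained; the real definition is in the RFC patch in this thread. */
enum v4l2_bus_type { V4L2_BUS_RAW_YUV, V4L2_BUS_RAW_BAYER };

struct v4l2_bus_settings {
	enum v4l2_bus_type type;
	unsigned char subdev_width;
	unsigned char host_width;
	unsigned embedded_sync:1;
	unsigned host_is_master:1;
	unsigned pol_vsync:1;
	unsigned pol_hsync:1;
	unsigned pol_field:1;
	unsigned pol_pclock:1;
	unsigned pol_data:1;
};

/* Hypothetical board code: one explicit, verified setting per
 * board-subdev combination, with no negotiation involved. */
const struct v4l2_bus_settings evm_mt9t031_bus = {
	.type          = V4L2_BUS_RAW_BAYER,
	.subdev_width  = 10,	/* MT9T031 drives 10 data lines */
	.host_width    = 16,	/* sensor MSB wired to host data15 */
	.host_is_master = 1,
	.pol_vsync     = 1,	/* active high */
	.pol_hsync     = 1,	/* active high */
	.pol_pclock    = 1,	/* sample on rising edge */
};
```

The point of this style is that the settings the hardware was verified with are visible in the source, rather than being the output of a negotiation algorithm.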



Re: [PATCH] adding support for setting bus parameters in sub device

2009-06-17 Thread Guennadi Liakhovetski
On Wed, 17 Jun 2009, Hans Verkuil wrote:

 It is my strong opinion that while autonegotiation is easy to use, it is
 not a wise choice to make. Filling in a single struct with the bus
 settings to use for each board-subdev combination (usually there is only
 one) is simple, straight-forward and unambiguous. And I really don't see
 why that should take much time at all. And I consider it a very good point
 that the programmer is forced to think about this for a bit.

Ok, my opinion is that we should keep autonegotiation, but if you like, 
we can print a BIG-FAT-WARNING if both polarities are supported and no 
platform preference is set.
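[Editorial illustration: the fallback Guennadi describes, sketched with hypothetical names. A platform preference wins when one is given; otherwise, if both polarities are possible, a warning is printed before a default is picked.]

```c
#include <assert.h>
#include <stdio.h>

#define POL_ACTIVE_LOW   (1u << 0)
#define POL_ACTIVE_HIGH  (1u << 1)

/* Negotiate one sync polarity from the host and subdev capability masks.
 * platform_pref is 0 when the board code expressed no preference.
 * Returns the chosen polarity flag, or 0 if the two are incompatible. */
unsigned negotiate_polarity(unsigned host_caps, unsigned subdev_caps,
			    unsigned platform_pref)
{
	unsigned common = host_caps & subdev_caps;

	if (platform_pref & common)
		return platform_pref;
	if ((common & POL_ACTIVE_HIGH) && (common & POL_ACTIVE_LOW))
		fprintf(stderr, "WARNING: both polarities supported, "
			"no platform preference set\n");
	if (common & POL_ACTIVE_HIGH)
		return POL_ACTIVE_HIGH;
	return common & POL_ACTIVE_LOW;
}
```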

I think we've heard all opinions, unless someone would like to add 
something? Would it be fair to ask Mauro to make a decision? Or we can 
just count votes (which I would obviously prefer), but I'll accept Mauro's 
decision too.

Thanks
Guennadi
---
Guennadi Liakhovetski, Ph.D.
Freelance Open-Source Software Developer
http://www.open-technology.de/


Re: [PATCH] adding support for setting bus parameters in sub device

2009-06-17 Thread Hans Verkuil

 On Wed, 17 Jun 2009, Hans Verkuil wrote:

 It is my strong opinion that while autonegotiation is easy to use, it is
 not a wise choice to make. Filling in a single struct with the bus
 settings to use for each board-subdev combination (usually there is only
 one) is simple, straight-forward and unambiguous. And I really don't see
 why that should take much time at all. And I consider it a very good
 point
 that the programmer is forced to think about this for a bit.

 Ok, my opinion is, that we should keep autonegotiation, but if you like,
 we can print a BIG-FAT-WARNING if both polarities are supported and no
 platform preference is set.

I'd rather see a message stating which bus settings were chosen. That way
if something fails in the future you can compare which bus settings were
chosen in the past with the new bus settings and see if something changed
there.

 I think, we've heard all opinions, unless someone would like to add
 something? Would it be fair to ask Mauro to make a decision? Or we can
 just count votes (which I would obviously prefer),

Obviously :-) Since the only non-soc driver that needs this right now is
tvp514x I'm pretty sure you'll get the most votes :-)

But this is something that should be decided on technical merits, and not
on what is easier for converting soc-camera. I'm not saying that is your
only or main reason for wanting to keep autonegotiation, but it no doubt
plays a role (perfectly understandable, BTW).

Just note that it is exactly my experiences with dm646x and with closely
working with the hardware team that made me realize the dangers of
autonegotiation. A year ago I would have agreed with you, but now I feel
very strongly that it is the wrong approach. Usually I would accept this
code, even if I thought it was not the optimal solution, in the interest
of finishing the conversion quickly. But I fear that if this goes in, then
it will be next to impossible to change in the future.

It simply boils down to this for me: I want to see unambiguous and
explicit bus settings in the code so the reader can safely assume that the
hardware was verified and/or certified for those settings. Even if you
just picked some settings because you didn't have access to the preferred
bus settings that the hardware manufacturer did his verification or
certification with, then that will still show which settings you used to
do your own testing. That's very important information to have in the
code.

Assuming that any autonegotiation code will always return the same result
is in my opinion wishful thinking. Look at the problems we have in
removing autoprobing from i2c: I'm pretty sure someone at the time also
thought that autoprobing would never cause a problem.

 but I'll accept Mauro's
 decision too.

That's fine by me as well.

Regards,

Hans

-- 
Hans Verkuil - video4linux developer - sponsored by TANDBERG



Re: [PATCH] adding support for setting bus parameters in sub device

2009-06-17 Thread Trent Piepho
On Wed, 17 Jun 2009, Guennadi Liakhovetski wrote:
 On Wed, 17 Jun 2009, Hans Verkuil wrote:
  It is my strong opinion that while autonegotiation is easy to use, it is
  not a wise choice to make. Filling in a single struct with the bus
  settings to use for each board-subdev combination (usually there is only
  one) is simple, straight-forward and unambiguous. And I really don't see
  why that should take much time at all. And I consider it a very good point
  that the programmer is forced to think about this for a bit.

 Ok, my opinion is, that we should keep autonegotiation, but if you like,
 we can print a BIG-FAT-WARNING if both polarities are supported and no
 platform preference is set.

 I think, we've heard all opinions, unless someone would like to add
 something? Would it be fair to ask Mauro to make a decision? Or we can
 just count votes (which I would obviously prefer), but I'll accept Mauro's
 decision too.

There is a similar situation in the networking code, where there is a
driver for a PHY and another driver for a MAC, much like a sensor and
bridge.  phylib will find the common subset of the supported modes between
the MAC and the detected PHY and use that to configure aneg advertisement
settings.
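[Editorial illustration: the phylib scheme Trent refers to boils down to a bitwise AND of the two supported-mode masks before the advertisement is programmed. Flag and function names below are made up for the sketch, not the kernel's real SUPPORTED_* bits.]

```c
#include <assert.h>

/* Illustrative link-mode flags, loosely modelled on phylib's
 * supported/advertising bitmaps (not the real kernel macros). */
#define MODE_10_FULL    (1u << 0)
#define MODE_100_FULL   (1u << 1)
#define MODE_1000_FULL  (1u << 2)

/* Restrict what the PHY advertises for autonegotiation to modes the
 * MAC can actually handle -- the "common subset" Trent mentions. */
unsigned sanitize_advertisement(unsigned phy_supported,
				unsigned mac_supported)
{
	return phy_supported & mac_supported;
}
```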


[RFC PATCH] adding support for setting bus parameters in sub device

2009-06-17 Thread m-karicheri2
From: Muralidharan Karicheri a0868...@gt516km11.gt.design.ti.com

This patch adds support for setting bus parameters such as bus type
(Raw Bayer or Raw YUV image data bus), bus width (for example 10-bit raw
image data bus, 10-bit BT.656, etc.), and polarities (vsync, hsync, field,
etc.) in the sub-device. This allows the bridge driver to configure the
sub-device bus for a specific set of bus parameters through the s_bus()
function call. This can also be used to define platform-specific bus
parameters for host and sub-devices.

Reviewed by: Hans Verkuil hverk...@xs4all.nl
Signed-off-by: Murali Karicheri m-kariche...@ti.com
---
Applies to v4l-dvb repository

 include/media/v4l2-subdev.h |   40 
 1 files changed, 40 insertions(+), 0 deletions(-)

diff --git a/include/media/v4l2-subdev.h b/include/media/v4l2-subdev.h
index 1785608..2f5ec98 100644
--- a/include/media/v4l2-subdev.h
+++ b/include/media/v4l2-subdev.h
@@ -37,6 +37,43 @@ struct v4l2_decode_vbi_line {
u32 type;   /* VBI service type (V4L2_SLICED_*). 0 if no 
service found */
 };
 
+/*
+ * Some sub-devices are connected to the host/bridge device through a bus that
+ * carries the clock, vsync, hsync and data. Some interfaces such as BT.656
+ * carry the sync embedded in the data, whereas others have separate lines
+ * carrying the sync signals. The structure below is used to define bus 
+ * configuration parameters for host as well as sub-device
+ */
+enum v4l2_subdev_bus_type {
+   /* Raw YUV image data bus */
+   V4L2_SUBDEV_BUS_RAW_YUV,
+   /* Raw Bayer image data bus */
+   V4L2_SUBDEV_BUS_RAW_BAYER
+};
+
+struct v4l2_bus_settings {
+   /* yuv or bayer image data bus */
+   enum v4l2_subdev_bus_type type;
+   /* subdev bus width */
+   u8 subdev_width;
+   /* host bus width */
+   u8 host_width;
+   /* embedded sync, set this when sync is embedded in the data stream */
+   unsigned embedded_sync:1;
+   /* master or slave */
+   unsigned host_is_master:1;
+   /* 0 - active low, 1 - active high */
+   unsigned pol_vsync:1;
+   /* 0 - active low, 1 - active high */
+   unsigned pol_hsync:1;
+   /* 0 - low to high , 1 - high to low */
+   unsigned pol_field:1;
+   /* 0 - sample at falling edge , 1 - sample at rising edge */
+   unsigned pol_pclock:1;
+   /* 0 - active low , 1 - active high */
+   unsigned pol_data:1;
+};
+
 /* Sub-devices are devices that are connected somehow to the main bridge
device. These devices are usually audio/video muxers/encoders/decoders or
sensors and webcam controllers.
@@ -199,6 +236,8 @@ struct v4l2_subdev_audio_ops {
 
s_routing: see s_routing in audio_ops, except this version is for video
devices.
+
+   s_bus: set bus parameters in sub device to configure the bus 
  */
 struct v4l2_subdev_video_ops {
int (*s_routing)(struct v4l2_subdev *sd, u32 input, u32 output, u32 
config);
@@ -219,6 +258,7 @@ struct v4l2_subdev_video_ops {
int (*s_parm)(struct v4l2_subdev *sd, struct v4l2_streamparm *param);
int (*enum_framesizes)(struct v4l2_subdev *sd, struct v4l2_frmsizeenum 
*fsize);
int (*enum_frameintervals)(struct v4l2_subdev *sd, struct 
v4l2_frmivalenum *fival);
+   int (*s_bus)(struct v4l2_subdev *sd, const struct v4l2_bus_settings 
*bus);
 };
 
 struct v4l2_subdev_ops {
-- 
1.6.0.4
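[Editorial illustration: how a bridge driver might use the proposed s_bus() op. The kernel types are reproduced minimally so the snippet stands alone; the dummy subdev op and the sensor values are purely illustrative, and in the kernel the call would go through the usual subdev-call machinery rather than a raw ops pointer.]

```c
#include <assert.h>
#include <stddef.h>

/* Minimal stand-ins for the types proposed in the patch above. */
enum v4l2_subdev_bus_type { V4L2_SUBDEV_BUS_RAW_YUV, V4L2_SUBDEV_BUS_RAW_BAYER };

struct v4l2_bus_settings {
	enum v4l2_subdev_bus_type type;
	unsigned char subdev_width;
	unsigned char host_width;
	unsigned embedded_sync:1;
	unsigned host_is_master:1;
	unsigned pol_vsync:1;
	unsigned pol_hsync:1;
	unsigned pol_field:1;
	unsigned pol_pclock:1;
	unsigned pol_data:1;
};

struct v4l2_subdev;	/* opaque here */

struct v4l2_subdev_video_ops {
	int (*s_bus)(struct v4l2_subdev *sd,
		     const struct v4l2_bus_settings *bus);
};

/* Dummy subdev op for demonstration: record what was requested. */
struct v4l2_bus_settings last_bus;

int dummy_s_bus(struct v4l2_subdev *sd, const struct v4l2_bus_settings *bus)
{
	(void)sd;
	last_bus = *bus;
	return 0;
}

const struct v4l2_subdev_video_ops dummy_ops = { .s_bus = dummy_s_bus };

/* Board-specific settings, e.g. an MT9T031 with its MSB on host data15. */
const struct v4l2_bus_settings mt9t031_bus = {
	.type           = V4L2_SUBDEV_BUS_RAW_BAYER,
	.subdev_width   = 10,
	.host_width     = 16,
	.host_is_master = 1,
	.pol_pclock     = 1,
};

/* Bridge-side sketch: push the board's bus parameters to the sensor. */
int bridge_configure_bus(struct v4l2_subdev *sd,
			 const struct v4l2_subdev_video_ops *ops,
			 const struct v4l2_bus_settings *bus)
{
	if (ops == NULL || ops->s_bus == NULL)
		return -1;	/* kernel code would return an errno here */
	return ops->s_bus(sd, bus);
}
```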



RE: [RFC PATCH] adding support for setting bus parameters in sub device

2009-06-17 Thread Karicheri, Muralidharan
Hans,

Let me know if this has all changes that you are expecting. This is just for 
review. I will send the final one against the latest v4l-dvb kernel tree.

Murali Karicheri
Software Design Engineer
Texas Instruments Inc.
Germantown, MD 20874
email: m-kariche...@ti.com

-Original Message-
From: Karicheri, Muralidharan
Sent: Wednesday, June 17, 2009 11:44 AM
To: linux-media@vger.kernel.org
Cc: davinci-linux-open-sou...@linux.davincidsp.com; Muralidharan Karicheri;
Karicheri, Muralidharan
Subject: [RFC PATCH] adding support for setting bus parameters in sub
device

From: Muralidharan Karicheri a0868...@gt516km11.gt.design.ti.com

This patch adds support for setting bus parameters such as bus type
(Raw Bayer or Raw YUV image data bus), bus width (example 10 bit raw
image data bus, 10 bit BT.656 etc.), and polarities (vsync, hsync, field
etc) in sub device. This allows bridge driver to configure the sub device
bus for a specific set of bus parameters through s_bus() function call.
This also can be used to define platform specific bus parameters for
host and sub-devices.

Reviewed by: Hans Verkuil hverk...@xs4all.nl
Signed-off-by: Murali Karicheri m-kariche...@ti.com
---
Applies to v4l-dvb repository

 include/media/v4l2-subdev.h |   40 
 1 files changed, 40 insertions(+), 0 deletions(-)

diff --git a/include/media/v4l2-subdev.h b/include/media/v4l2-subdev.h
index 1785608..2f5ec98 100644
--- a/include/media/v4l2-subdev.h
+++ b/include/media/v4l2-subdev.h
@@ -37,6 +37,43 @@ struct v4l2_decode_vbi_line {
   u32 type;   /* VBI service type (V4L2_SLICED_*). 0 if no
service found */
 };

+/*
+ * Some sub-devices are connected to the host/bridge device through a bus that
+ * carries the clock, vsync, hsync and data. Some interfaces such as BT.656
+ * carry the sync embedded in the data, whereas others have separate lines
+ * carrying the sync signals. The structure below is used to define bus
+ * configuration parameters for host as well as sub-device
+ */
+enum v4l2_subdev_bus_type {
+  /* Raw YUV image data bus */
+  V4L2_SUBDEV_BUS_RAW_YUV,
+  /* Raw Bayer image data bus */
+  V4L2_SUBDEV_BUS_RAW_BAYER
+};
+
+struct v4l2_bus_settings {
+  /* yuv or bayer image data bus */
+  enum v4l2_subdev_bus_type type;
+  /* subdev bus width */
+  u8 subdev_width;
+  /* host bus width */
+  u8 host_width;
+  /* embedded sync, set this when sync is embedded in the data stream */
+  unsigned embedded_sync:1;
+  /* master or slave */
+  unsigned host_is_master:1;
+  /* 0 - active low, 1 - active high */
+  unsigned pol_vsync:1;
+  /* 0 - active low, 1 - active high */
+  unsigned pol_hsync:1;
+  /* 0 - low to high , 1 - high to low */
+  unsigned pol_field:1;
+  /* 0 - sample at falling edge , 1 - sample at rising edge */
+  unsigned pol_pclock:1;
+  /* 0 - active low , 1 - active high */
+  unsigned pol_data:1;
+};
+
 /* Sub-devices are devices that are connected somehow to the main bridge
device. These devices are usually audio/video muxers/encoders/decoders or
sensors and webcam controllers.
@@ -199,6 +236,8 @@ struct v4l2_subdev_audio_ops {

s_routing: see s_routing in audio_ops, except this version is for video
   devices.
+
+   s_bus: set bus parameters in sub device to configure the bus
  */
 struct v4l2_subdev_video_ops {
   int (*s_routing)(struct v4l2_subdev *sd, u32 input, u32 output, u32
config);
@@ -219,6 +258,7 @@ struct v4l2_subdev_video_ops {
   int (*s_parm)(struct v4l2_subdev *sd, struct v4l2_streamparm *param);
   int (*enum_framesizes)(struct v4l2_subdev *sd, struct
v4l2_frmsizeenum *fsize);
   int (*enum_frameintervals)(struct v4l2_subdev *sd, struct
v4l2_frmivalenum *fival);
+  int (*s_bus)(struct v4l2_subdev *sd, const struct v4l2_bus_settings
*bus);
 };

 struct v4l2_subdev_ops {
--
1.6.0.4



Re: [RFC PATCH] adding support for setting bus parameters in sub device

2009-06-17 Thread Hans Verkuil
On Wednesday 17 June 2009 17:43:42 m-kariche...@ti.com wrote:
 From: Muralidharan Karicheri a0868...@gt516km11.gt.design.ti.com

 This patch adds support for setting bus parameters such as bus type
 (Raw Bayer or Raw YUV image data bus), bus width (example 10 bit raw
 image data bus, 10 bit BT.656 etc.), and polarities (vsync, hsync, field
 etc) in sub device. This allows bridge driver to configure the sub device
 bus for a specific set of bus parameters through s_bus() function call.
 This also can be used to define platform specific bus parameters for
 host and sub-devices.

 Reviewed by: Hans Verkuil hverk...@xs4all.nl
 Signed-off-by: Murali Karicheri m-kariche...@ti.com
 ---
 Applies to v4l-dvb repository

  include/media/v4l2-subdev.h |   40 
  1 files changed, 40 insertions(+), 0 deletions(-)

 diff --git a/include/media/v4l2-subdev.h b/include/media/v4l2-subdev.h
 index 1785608..2f5ec98 100644
 --- a/include/media/v4l2-subdev.h
 +++ b/include/media/v4l2-subdev.h
 @@ -37,6 +37,43 @@ struct v4l2_decode_vbi_line {
   u32 type;   /* VBI service type (V4L2_SLICED_*). 0 if no service found */
  };

 +/*
 + * Some sub-devices are connected to the host/bridge device through a bus that
 + * carries the clock, vsync, hsync and data. Some interfaces such as BT.656
 + * carries the sync embedded in the data where as others have separate line
 + * carrying the sync signals. The structure below is used to define bus
 + * configuration parameters for host as well as sub-device
 + */
 +enum v4l2_subdev_bus_type {
 + /* Raw YUV image data bus */
 + V4L2_SUBDEV_BUS_RAW_YUV,
 + /* Raw Bayer image data bus */
 + V4L2_SUBDEV_BUS_RAW_BAYER
 +};

Remove the _subdev prefix from the enum above.

 +
 +struct v4l2_bus_settings {
 + /* yuv or bayer image data bus */
 + enum v4l2_subdev_bus_type type;
 + /* subdev bus width */
 + u8 subdev_width;
 + /* host bus width */
 + u8 host_width;
 + /* embedded sync, set this when sync is embedded in the data stream */
 + unsigned embedded_sync:1;
 + /* master or slave */
 + unsigned host_is_master:1;
 + /* 0 - active low, 1 - active high */
 + unsigned pol_vsync:1;
 + /* 0 - active low, 1 - active high */
 + unsigned pol_hsync:1;
 + /* 0 - low to high , 1 - high to low */
 + unsigned pol_field:1;
 + /* 0 - sample at falling edge , 1 - sample at rising edge */
 + unsigned pol_pclock:1;

I'm not sure whether the pol_ prefix is correct here. Perhaps edge_pclock is 
a more appropriate name.

Regards,

Hans

 + /* 0 - active low , 1 - active high */
 + unsigned pol_data:1;
 +};
 +
  /* Sub-devices are devices that are connected somehow to the main bridge
 device. These devices are usually audio/video
 muxers/encoders/decoders or sensors and webcam controllers.
 @@ -199,6 +236,8 @@ struct v4l2_subdev_audio_ops {

 s_routing: see s_routing in audio_ops, except this version is for
 video devices.
 +
 +   s_bus: set bus parameters in sub device to configure the bus
   */
  struct v4l2_subdev_video_ops {
   int (*s_routing)(struct v4l2_subdev *sd, u32 input, u32 output, u32 config);
 @@ -219,6 +258,7 @@ struct v4l2_subdev_video_ops {
   int (*s_parm)(struct v4l2_subdev *sd, struct v4l2_streamparm *param);
   int (*enum_framesizes)(struct v4l2_subdev *sd, struct v4l2_frmsizeenum *fsize);
   int (*enum_frameintervals)(struct v4l2_subdev *sd, struct v4l2_frmivalenum *fival);
 + int (*s_bus)(struct v4l2_subdev *sd, const struct v4l2_bus_settings *bus);
  };

  struct v4l2_subdev_ops {



-- 
Hans Verkuil - video4linux developer - sponsored by TANDBERG Telecom


[PATCH] adding support for setting bus parameters in sub device

2009-06-17 Thread m-karicheri2
From: Muralidharan Karicheri a0868...@gt516km11.gt.design.ti.com

This patch adds support for setting bus parameters such as bus type
(Raw Bayer or Raw YUV image data bus), bus width (for example 10-bit raw
image data bus, 10-bit BT.656, etc.), and polarities (vsync, hsync, field,
etc.) in the sub-device. This allows the bridge driver to configure the
sub-device bus for a specific set of bus parameters through the s_bus()
function call. This can also be used to define platform-specific bus
parameters for host and sub-devices.

Reviewed by: Hans Verkuil hverk...@xs4all.nl
Signed-off-by: Murali Karicheri m-kariche...@ti.com
---
Applies to v4l-dvb repository

 include/media/v4l2-subdev.h |   40 
 1 files changed, 40 insertions(+), 0 deletions(-)

diff --git a/include/media/v4l2-subdev.h b/include/media/v4l2-subdev.h
index 1785608..8532b91 100644
--- a/include/media/v4l2-subdev.h
+++ b/include/media/v4l2-subdev.h
@@ -37,6 +37,43 @@ struct v4l2_decode_vbi_line {
u32 type;   /* VBI service type (V4L2_SLICED_*). 0 if no 
service found */
 };
 
+/*
+ * Some sub-devices are connected to the host/bridge device through a bus that
+ * carries the clock, vsync, hsync and data. Some interfaces such as BT.656
+ * carry the sync embedded in the data, whereas others have separate lines
+ * carrying the sync signals. The structure below is used to define bus
+ * configuration parameters for host as well as sub-device
+ */
+enum v4l2_bus_type {
+   /* Raw YUV image data bus */
+   V4L2_BUS_RAW_YUV,
+   /* Raw Bayer image data bus */
+   V4L2_BUS_RAW_BAYER
+};
+
+struct v4l2_bus_settings {
+   /* yuv or bayer image data bus */
+   enum v4l2_bus_type type;
+   /* subdev bus width */
+   u8 subdev_width;
+   /* host bus width */
+   u8 host_width;
+   /* embedded sync, set this when sync is embedded in the data stream */
+   unsigned embedded_sync:1;
+   /* master or slave */
+   unsigned host_is_master:1;
+   /* 0 - active low, 1 - active high */
+   unsigned pol_vsync:1;
+   /* 0 - active low, 1 - active high */
+   unsigned pol_hsync:1;
+   /* 0 - low to high , 1 - high to low */
+   unsigned pol_field:1;
+   /* 0 - active low , 1 - active high */
+   unsigned pol_data:1;
+   /* 0 - sample at falling edge , 1 - sample at rising edge */
+   unsigned edge_pclock:1;
+};
+
 /* Sub-devices are devices that are connected somehow to the main bridge
device. These devices are usually audio/video muxers/encoders/decoders or
sensors and webcam controllers.
@@ -199,6 +236,8 @@ struct v4l2_subdev_audio_ops {
 
s_routing: see s_routing in audio_ops, except this version is for video
devices.
+
+   s_bus: set bus parameters in sub device to configure the bus
  */
 struct v4l2_subdev_video_ops {
int (*s_routing)(struct v4l2_subdev *sd, u32 input, u32 output, u32 
config);
@@ -219,6 +258,7 @@ struct v4l2_subdev_video_ops {
int (*s_parm)(struct v4l2_subdev *sd, struct v4l2_streamparm *param);
int (*enum_framesizes)(struct v4l2_subdev *sd, struct v4l2_frmsizeenum 
*fsize);
int (*enum_frameintervals)(struct v4l2_subdev *sd, struct 
v4l2_frmivalenum *fival);
+   int (*s_bus)(struct v4l2_subdev *sd, const struct v4l2_bus_settings 
*bus);
 };
 
 struct v4l2_subdev_ops {
-- 
1.6.0.4
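[Editorial illustration: on the receiving end, a sensor driver's handler for the new op might simply validate the requested parameters against what the chip can do. The types are reproduced locally with this final version's naming (v4l2_bus_type, edge_pclock); the MT9T031-style limits are illustrative assumptions, not taken from a real driver.]

```c
#include <assert.h>
#include <errno.h>

/* Local copies of the proposed types so the sketch compiles on its own. */
enum v4l2_bus_type { V4L2_BUS_RAW_YUV, V4L2_BUS_RAW_BAYER };

struct v4l2_bus_settings {
	enum v4l2_bus_type type;
	unsigned char subdev_width;
	unsigned char host_width;
	unsigned embedded_sync:1;
	unsigned host_is_master:1;
	unsigned pol_vsync:1;
	unsigned pol_hsync:1;
	unsigned pol_field:1;
	unsigned pol_data:1;
	unsigned edge_pclock:1;
};

/* Sensor-side sketch for a 10-bit raw Bayer sensor: reject anything the
 * chip cannot produce, accept the rest. */
int mt9t031_s_bus(const struct v4l2_bus_settings *bus)
{
	if (bus->type != V4L2_BUS_RAW_BAYER)
		return -EINVAL;		/* chip only outputs raw Bayer */
	if (bus->subdev_width != 10)
		return -EINVAL;		/* 10 data lines, fixed */
	if (bus->embedded_sync)
		return -EINVAL;		/* separate hsync/vsync only */
	/* a real driver would now program polarity registers, etc. */
	return 0;
}

/* Example requests, for demonstration. */
const struct v4l2_bus_settings good_bus = {
	.type = V4L2_BUS_RAW_BAYER, .subdev_width = 10, .host_width = 16,
};
const struct v4l2_bus_settings bad_bus = {
	.type = V4L2_BUS_RAW_YUV, .subdev_width = 10, .host_width = 16,
};
```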



Re: [PATCH] adding support for setting bus parameters in sub device

2009-06-14 Thread Guennadi Liakhovetski
On Fri, 12 Jun 2009, Hans Verkuil wrote:

 On Friday 12 June 2009 14:59:03 Guennadi Liakhovetski wrote:
  On Fri, 12 Jun 2009, Hans Verkuil wrote:
  
1. it is very unusual that the board designer has to mandate what signal
polarity has to be used - only when there's additional logic between the
capture device and the host. So, we shouldn't overload all boards with
this information. Board-code authors will be grateful to us!
   
   I talked to my colleague who actually designs boards like that about what
   he would prefer. His opinion is that he wants to set this himself, rather
   than leave it as the result of a software negotiation. It simplifies
   verification and debugging the hardware, and in addition there may be
   cases where subtle timing differences between e.g. sampling on a falling
   edge vs rising edge can actually become an important factor, particularly
   on high frequencies.
  
  I'd say this is different. You're talking about cases where you _want_ to 
  be able to configure it explicitly, I am saying you do not have to _force_ 
  all to do this. Now, this selection only makes sense if both are 
  configurable, right? In this case, e.g., pxa270 driver does support 
  platform-specified preference. So, if both the host and the client can 
  configure either polarity in the software you _can_ still specify the 
  preferred one in platform data and it will be used.
  
  I think, the ability to specify inverters and the preferred polarity 
  should cover all possible cases.
 
 In my opinion you should always want to set this explicitly. This is not
 something you want to leave to chance. Say you autoconfigure this. Now
 someone either changes the autoconf algorithm, or a previously undocumented
 register was discovered for the i2c device and it can suddenly configure the
 polarity of some signal that was previously thought to be fixed, or something
 else happens causing a different polarity to be negotiated.

TBH, the argumentation like someone changes the autoconf algorithm or 
previously undocumented register is discovered doesn't convince me. In 
any case, I am adding authors, maintainers and major contributors to 
various soc-camera host drivers to CC and asking them to express their 
opinion on this matter. I will not add anything else here to avoid any 
unfair competition:-) they will have to go a couple emails back in this 
thread to better understand what is being discussed here.

 Suddenly the board
 doesn't work because it was never verified or tested with that different
 polarity. Or worse: it glitches only 0.001% of the time. That's going to be a
 nasty bug to find.
 
 You generally verify your board with specific bus settings, and that's what
 should also be configured explicitly. Sure, it is nice not to have to think
 about this. The problem is that I believe that you *have* to think about it.
 
 The longer I think about this, the more convinced I am that relying on
 autoconfiguration is a bad design.

Thanks
Guennadi
---
Guennadi Liakhovetski, Ph.D.
Freelance Open-Source Software Developer
http://www.open-technology.de/


Re: [PATCH] adding support for setting bus parameters in sub device

2009-06-14 Thread Hans Verkuil
On Sunday 14 June 2009 17:33:19 Guennadi Liakhovetski wrote:
 On Fri, 12 Jun 2009, Hans Verkuil wrote:
 
  On Friday 12 June 2009 14:59:03 Guennadi Liakhovetski wrote:
   On Fri, 12 Jun 2009, Hans Verkuil wrote:
   
 1. it is very unusual that the board designer has to mandate what 
 signal
 polarity has to be used - only when there's additional logic between 
 the
 capture device and the host. So, we shouldn't overload all boards with
 this information. Board-code authors will be grateful to us!

I talked to my colleague who actually designs boards like that about 
what
he would prefer. His opinion is that he wants to set this himself, 
rather
than leave it as the result of a software negotiation. It simplifies
verification and debugging the hardware, and in addition there may be
cases where subtle timing differences between e.g. sampling on a falling
edge vs rising edge can actually become an important factor, 
particularly
on high frequencies.
   
   I'd say this is different. You're talking about cases where you _want_ to 
   be able to configure it explicitly, I am saying you do not have to 
   _force_ 
   all to do this. Now, this selection only makes sense if both are 
   configurable, right? In this case, e.g., pxa270 driver does support 
   platform-specified preference. So, if both the host and the client can 
   configure either polarity in the software you _can_ still specify the 
   preferred one in platform data and it will be used.
   
   I think, the ability to specify inverters and the preferred polarity 
   should cover all possible cases.
  
  In my opinion you should always want to set this explicitly. This is not
  something you want to leave to chance. Say you autoconfigure this. Now
  someone either changes the autoconf algorithm, or a previously undocumented
  register was discovered for the i2c device and it can suddenly configure the
  polarity of some signal that was previously thought to be fixed, or 
  something
  else happens causing a different polarity to be negotiated.
 
 TBH, the argumentation like someone changes the autoconf algorithm or 
 previously undocumented register is discovered doesn't convince me.

The point I'm making here is that since the autoconf part is done in software
it *can* be changed. And while just looking at the code there is no reason why
choosing a positive vs. negative polarity makes any difference if both host
and i2c device support it, from a hardware standpoint it *can* make a
difference.

In practice you verify and certify your hardware using specific bus settings.
An autoconf algorithm just obfuscates those settings. And relying on it to
always return the same settings in the future seems also wishful thinking.

 In  
 any case, I am adding authors, maintainers and major contributors to 
 various soc-camera host drivers to CC and asking them to express their 
 opinion on this matter. I will not add anything else here to avoid any 
 unfair competition:-) they will have to go a couple emails back in this 
 thread to better understand what is being discussed here.

It will definitely be interesting to see what others think.

Regards,

Hans

 
  Suddenly the board
  doesn't work because it was never verified or tested with that different
  polarity. Or worse: it glitches only 0.001% of the time. That's going to be 
  a
  nasty bug to find.
  
  You generally verify your board with specific bus settings, and that's what
  should also be configured explicitly. Sure, it is nice not to have to think
  about this. The problem is that I believe that you *have* to think about it.
  
  The longer I think about this, the more convinced I am that relying on
  autoconfiguration is a bad design.
 
 Thanks
 Guennadi
 ---
 Guennadi Liakhovetski, Ph.D.
 Freelance Open-Source Software Developer
 http://www.open-technology.de/
 
 



-- 
Hans Verkuil - video4linux developer - sponsored by TANDBERG Telecom


Re: [PATCH] adding support for setting bus parameters in sub device

2009-06-14 Thread Guennadi Liakhovetski
On Sun, 14 Jun 2009, Hans Verkuil wrote:

 The point I'm making here is that since the autoconf part is done in software
 it *can* be changed. And while just looking at the code there is no reason why
 choosing a positive vs. negative polarity makes any difference if both host
 and i2c device support it, from a hardware standpoint it *can* make a
 difference.
 
 In practice you verify and certify your hardware using specific bus settings.
 An autoconf algorithm just obfuscates those settings. And relying on it to
 always return the same settings in the future seems also wishful thinking.

Ok, I think now I get it. Your real concern is only the case when both 
parties can be configured in software for either polarity. And whereas we 
think (ok, make it: I think) this means both configurations should work, 
in practice only one of them is guaranteed to. And you think having an 
optional board preference flag is not enough; it should be mandatory.

I see your point now. I am still not positive this case alone is enough to 
force all boards to specify all polarities. How about: we use 
autonegotiation where there's only one valid configuration. If both are 
possible and no preference is set - ok, we can decide. Either we 
complain loudly in the log and try our luck, or we complain and fail. 
Let's see:

          hs hi  hs lo  vs hi  vs lo  pclk rise  pclk fall  d hi  d lo  master  slave

mt9v022     x      x      x      x        x          x        x     -      x      x
mt9m001     x      -      x      -        -          x        x     -      x      -
mt9m111     x      -      x      -        x          -        x     -      x      -
mt9t031     x      -      x      -        x          x        x     -      x      -
ov772x      x      -      x      -        x          -        x     -      x      -
tw9910      x      -      x      -        x          -        x     -      x      -

(hs = hsync, vs = vsync, pclk = pixel clock, d = data) So, as you see, 
this free choice does not occur so often.

  In  
  any case, I am adding authors, maintainers and major contributors to 
  various soc-camera host drivers to CC and asking them to express their 
  opinion on this matter. I will not add anything else here to avoid any 
  unfair competition:-) they will have to go a couple emails back in this 
  thread to better understand what is being discussed here.
 
 It will definitely be interesting to see what others think.

Thanks
Guennadi
---
Guennadi Liakhovetski, Ph.D.
Freelance Open-Source Software Developer
http://www.open-technology.de/


Re: [PATCH] adding support for setting bus parameters in sub device

2009-06-14 Thread Robert Jarzmik
Let's begin the maintainers' party.

 A board designer knows what the host supports, knows what the sensor 
 supports, and knows if he added any inverters on the board, and based on 
 all that information he can just setup these parameters for the sensor 
 chip. Settings that are fixed on the sensor chip he can just ignore, he 
 only need to specify those settings that the sensor really needs.
I don't think that's true, Hans.
A lot of mainline kernel boards have been written by passionate people who had
no access to the board layout (for pxa, that includes corgi, tosa, hx4700, mioa701,
all palm series, ...)

For these people, having an autonegotiation algorithm is one less thing to
bother about.

  In my opinion you should always want to set this explicitly. This is not
  something you want to leave to chance. Say you autoconfigure this. Now
  someone either changes the autoconf algorithm, or a previously undocumented
  register was discovered for the i2c device and it can suddenly configure the
  polarity of some signal that was previously thought to be fixed, or 
  something
  else happens causing a different polarity to be negotiated.
If you're afraid of side effects, you can force the polarity in board code with
the current framework.

If we reduce the current autonegotiation code to polarity (forget bus width,
...) :
 - if board coder sets unique polarities, they'll be chosen (1)
 - if board coder doesn't set them, the autonegociation algorithm will choose
   (2)

What you want to do is to force all board developers to explicitly set
polarities, i.e., to only use subset (1) of the current negotiation algorithm.
I see no technical point motivating this. The existing algorithm is richer.

Personally, I would consider reducing the soc_camera framework to (1) instead of
(1)+(2) a regrettable regression. As one of the board maintainers having no
access to my board's design, I find the current framework a help.

I still don't clearly understand why (2) should be deleted from the current
framework. As I said, "the board designer knows the polarities" doesn't hold in
our community, where boards are developed without prior knowledge of the layout.

So Hans, why do you want to delete (2) ?

--
Robert


Re: [PATCH] adding support for setting bus parameters in sub device

2009-06-12 Thread Guennadi Liakhovetski
On Wed, 10 Jun 2009, Hans Verkuil wrote:

 On Wednesday 10 June 2009 23:30:55 Guennadi Liakhovetski wrote:
  On Wed, 10 Jun 2009, Hans Verkuil wrote:
   My view of this would be that the board specification specifies the
   sensor (and possibly other chips) that are on the board. And to me it
   makes sense that that also supplies the bus settings. I agree that it
   is not complex code, but I think it is also unnecessary code. Why
   negotiate if you can just set it?
 
  Why force all platforms to set it if the driver is perfectly capable of
  doing this itself? As I said - this is not a platform-specific feature, it's
  chip-specific. What good would it make to have all platforms using
  mt9t031 to specify, that yes, the chip can use both falling and rising
  pclk edge, but only active high vsync and hsync?
 
 ???
 
 You will just tell the chip what to use. So you set 'use falling edge' and 
 either set 'active high vsync/hsync' or just leave that out since you know 
 the mt9t031 has that fixed. You don't specify in the platform data what the 
 chip can support, that's not relevant. You know what the host expects and 
 you pass that information on to the chip.
 
 A board designer knows what the host supports, knows what the sensor 
 supports, and knows if he added any inverters on the board, and based on 
 all that information he can just setup these parameters for the sensor 
 chip. Settings that are fixed on the sensor chip he can just ignore, he 
 only need to specify those settings that the sensor really needs.

I'd like to have this resolved somehow (preferably my way of course:-)), 
here once again (plus some new) my main arguments:

1. it is very unusual that the board designer has to mandate what signal 
polarity has to be used - only when there's additional logic between the 
capture device and the host. So, we shouldn't overload all boards with 
this information. Board-code authors will be grateful to us!

2. what if you do have an inverter between the two? You'd have to tell the 
sensor to use active high, and the host to use active low, i.e., you need 
two sets of flags.

3. all soc-camera boards rely on this autonegotiation. Do we really want 
(and have) to add this useless information back to them? Back - because, 
yes, we've been there, we've done that before, but then we switched to the 
current autonegotiation, which we are perfectly happy with so far (anyone 
dare to object?:-)).

4. the autonegotiation code is simple and small, so I really don't see a 
reason to hardcode something that we can perfectly autoconfigure.

Thanks
Guennadi
---
Guennadi Liakhovetski, Ph.D.
Freelance Open-Source Software Developer
http://www.open-technology.de/


Re: [PATCH] adding support for setting bus parameters in sub device

2009-06-12 Thread Hans Verkuil

 On Wed, 10 Jun 2009, Hans Verkuil wrote:

 On Wednesday 10 June 2009 23:30:55 Guennadi Liakhovetski wrote:
  On Wed, 10 Jun 2009, Hans Verkuil wrote:
   My view of this would be that the board specification specifies the
   sensor (and possibly other chips) that are on the board. And to me
 it
   makes sense that that also supplies the bus settings. I agree that
 it
   is not complex code, but I think it is also unnecessary code. Why
   negotiate if you can just set it?
 
  Why force all platforms to set it if the driver is perfectly capable of
  doing this itself? As I said - this is not a platform-specific feature, it's
  chip-specific. What good would it make to have all platforms using
  mt9t031 to specify, that yes, the chip can use both falling and rising
  pclk edge, but only active high vsync and hsync?

 ???

 You will just tell the chip what to use. So you set 'use falling edge'
 and
 either set 'active high vsync/hsync' or just leave that out since you
 know
 the mt9t031 has that fixed. You don't specify in the platform data what
 the
 chip can support, that's not relevant. You know what the host expects
 and
 you pass that information on to the chip.

 A board designer knows what the host supports, knows what the sensor
 supports, and knows if he added any inverters on the board, and based on
 all that information he can just setup these parameters for the sensor
 chip. Settings that are fixed on the sensor chip he can just ignore, he
 only need to specify those settings that the sensor really needs.

 I'd like to have this resolved somehow (preferably my way of course:-)),
 here once again (plus some new) my main arguments:

 1. it is very unusual that the board designer has to mandate what signal
 polarity has to be used - only when there's additional logic between the
 capture device and the host. So, we shouldn't overload all boards with
 this information. Board-code authors will be grateful to us!

I talked to my colleague who actually designs boards like that about what
he would prefer. His opinion is that he wants to set this himself, rather
than leave it as the result of a software negotiation. It simplifies
verification and debugging the hardware, and in addition there may be
cases where subtle timing differences between e.g. sampling on a falling
edge vs rising edge can actually become an important factor, particularly
on high frequencies.

 2. what if you do have an inverter between the two? You'd have to tell the
 sensor to use active high, and the host to use active low, i.e., you need
 two sets of flags.

You obviously need to set this for both the host and for the sub-device.
The s_bus op in v4l2_subdev is meant to set the sub-device. Setting this
up for the host is a platform/board issue.


 3. all soc-camera boards rely on this autonegotiation. Do we really want
 (and have) to add this useless information back to them? Back - because,
 yes, we've been there we've done that before, but then we switched to the
 current autonegotiation, which we are perfectly happy with so far (anyone
 dares to object?:-)).

Well, I do :-) This is not useless information and particularly at high
clock frequencies (e.g. 74.25 MHz or higher) you do want to have full
control over this. Remember that this API is not only meant for sensor
devices, but also for HDTV video receivers.

 4. the autonegotiation code is simple and small, so I really don't see a
 reason to hardcode something that we can perfectly autoconfigure.

The fact that that code exists doesn't mean that we also have to use it.
While I am not a hardware engineer myself, I do work closely together with
them. And having seen some of the tricky things that can go wrong with
timings I think there is a very good case against autoconfiguring these
things. For simple designs where the timings aren't critical I am sure
that autoconfiguring works fine, but when you get to HD quality video
streaming then it does become an issue. And this will become ever more
important in the future as the pixel clock frequencies keep increasing.

Regards,

  Hans

-- 
Hans Verkuil - video4linux developer - sponsored by TANDBERG



Re: [PATCH] adding support for setting bus parameters in sub device

2009-06-12 Thread Guennadi Liakhovetski
On Fri, 12 Jun 2009, Hans Verkuil wrote:

  1. it is very unusual that the board designer has to mandate what signal
  polarity has to be used - only when there's additional logic between the
  capture device and the host. So, we shouldn't overload all boards with
  this information. Board-code authors will be grateful to us!
 
 I talked to my colleague who actually designs boards like that about what
 he would prefer. His opinion is that he wants to set this himself, rather
 than leave it as the result of a software negotiation. It simplifies
 verification and debugging the hardware, and in addition there may be
 cases where subtle timing differences between e.g. sampling on a falling
 edge vs rising edge can actually become an important factor, particularly
 on high frequencies.

I'd say this is different. You're talking about cases where you _want_ to 
be able to configure it explicitly, I am saying you do not have to _force_ 
all to do this. Now, this selection only makes sense if both are 
configurable, right? In this case, e.g., pxa270 driver does support 
platform-specified preference. So, if both the host and the client can 
configure either polarity in the software you _can_ still specify the 
preferred one in platform data and it will be used.

I think, the ability to specify inverters and the preferred polarity 
should cover all possible cases.

Thanks
Guennadi
---
Guennadi Liakhovetski, Ph.D.
Freelance Open-Source Software Developer
http://www.open-technology.de/


Re: [PATCH] adding support for setting bus parameters in sub device

2009-06-12 Thread Hans Verkuil
On Friday 12 June 2009 14:59:03 Guennadi Liakhovetski wrote:
 On Fri, 12 Jun 2009, Hans Verkuil wrote:
 
   1. it is very unusual that the board designer has to mandate what signal
   polarity has to be used - only when there's additional logic between the
   capture device and the host. So, we shouldn't overload all boards with
   this information. Board-code authors will be grateful to us!
  
  I talked to my colleague who actually designs boards like that about what
  he would prefer. His opinion is that he wants to set this himself, rather
  than leave it as the result of a software negotiation. It simplifies
  verification and debugging the hardware, and in addition there may be
  cases where subtle timing differences between e.g. sampling on a falling
  edge vs rising edge can actually become an important factor, particularly
  on high frequencies.
 
 I'd say this is different. You're talking about cases where you _want_ to 
 be able to configure it explicitly, I am saying you do not have to _force_ 
 all to do this. Now, this selection only makes sense if both are 
 configurable, right? In this case, e.g., pxa270 driver does support 
 platform-specified preference. So, if both the host and the client can 
 configure either polarity in the software you _can_ still specify the 
 preferred one in platform data and it will be used.
 
 I think, the ability to specify inverters and the preferred polarity 
 should cover all possible cases.

In my opinion you should always want to set this explicitly. This is not
something you want to leave to chance. Say you autoconfigure this. Now
someone either changes the autoconf algorithm, or a previously undocumented
register was discovered for the i2c device and it can suddenly configure the
polarity of some signal that was previously thought to be fixed, or something
else happens causing a different polarity to be negotiated. Suddenly the board
doesn't work because it was never verified or tested with that different
polarity. Or worse: it glitches only 0.001% of the time. That's going to be a
nasty bug to find.

You generally verify your board with specific bus settings, and that's what
should also be configured explicitly. Sure, it is nice not to have to think
about this. The problem is that I believe that you *have* to think about it.

The longer I think about this, the more convinced I am that relying on
autoconfiguration is a bad design.

Regards,

Hans

-- 
Hans Verkuil - video4linux developer - sponsored by TANDBERG Telecom


Re: mt9t031 (was RE: [PATCH] adding support for setting bus parameters in sub device)

2009-06-11 Thread Hans Verkuil

 Karicheri, Muralidharan a écrit :

 We need
 streaming capability in the driver. This is how our driver works
 with the mt9t031 sensor:

                raw-bus (10 bit)
 vpfe-capture ------------------ mt9t031 driver
       |                              |
       V                              V
     VPFE                          MT9T031

 VPFE hardware has internal timing and DMA controller to
 copy data frame by frame from the sensor output to SDRAM.
 The PCLK form the sensor is used to generate the internal
 timing.
 So, what is missing in the driver apart from the ability to specify
 a frame-rate?

 [MK] Does the mt9t031 output one frame (snapshot) like in a camera, or
 can it output frames continuously along with PCLK, HSYNC and VSYNC
 signals like in a video streaming device. VPFE capture can accept frames
 continuously from the sensor synchronized to PCLK, HSYNC and VSYNC and
 output frames to application using QBUF/DQBUF. In our implementation, we
 have timing parameters for the sensor to do streaming at various
 resolutions and fps. So application calls S_STD to set these timings. I
 am not sure if this is an acceptable way of implementing it. Any
 comments?

 PCLK, HSYNC, VSYNC are generated by the CMOS sensor. I don't think you
 can set the timings. Depending on sensor settings, pixel clock speed etc
 .., the frame rate will vary.

 You could perhaps play with the CMOS sensor registers so that when
 setting a standard, the driver somehow sets the various exposure
 parameters and PLL settings to get a specified framerate.

 This will vary with each sensor and each platform (because of
 pixelclock). More over, chances are that it will be conflicting with
 other control.

 For example if you set a fixed gain and autoexpo, some sensors will see
 a drop in fps under low light conditions. I think this kind of
 arbitration should be left to userspace.

 Unless the sensor supports a specific standard, I don't think the driver
 should try to make behind-the-scenes modifications to camera sensor
 registers in response to an S_STD ioctl.

The S_STD call is hopelessly inadequate to deal with these types of
devices. What is needed is a new call that allows you to set the exact
timings you want: frame width/height, back/front porch, h/vsync width,
pixelclock. It is my opinion that the use of S_STD should be limited to
standard definition type inputs, and not used for other devices like
sensors or HD video.

Proposals for such a new ioctl are welcome :-)

Regards,

 Hans


 JP François


 Thanks

 Murali

 Thanks
 Guennadi
 ---
 Guennadi Liakhovetski, Ph.D.
 Freelance Open-Source Software Developer
 http://www.open-technology.de/

 ___
 Davinci-linux-open-source mailing list
 davinci-linux-open-sou...@linux.davincidsp.com
 http://linux.davincidsp.com/mailman/listinfo/davinci-linux-open-source



 ___
 Davinci-linux-open-source mailing list
 davinci-linux-open-sou...@linux.davincidsp.com
 http://linux.davincidsp.com/mailman/listinfo/davinci-linux-open-source




-- 
Hans Verkuil - video4linux developer - sponsored by TANDBERG



Re: mt9t031 (was RE: [PATCH] adding support for setting bus parameters in sub device)

2009-06-11 Thread Hans de Goede



On 06/11/2009 11:33 AM, Hans Verkuil wrote:


On 06/11/2009 10:35 AM, Hans Verkuil wrote:


snip (a lot)


Hmm,

Why would we want the *application* to set things like this *at all* ?
with sensors hsync and vsync and other timing are something between
the bridge and the sensor; actually, in my experience the correct
hsync / vsync timings to program the sensor to are very much bridge
specific. So IMHO this should not be exposed to userspace at all.

All userspace should be able to control is the resolution and the
framerate. Although controlling the framerate in many cases also
means controlling the maximum exposure time. So in many cases
one cannot even control the framerate. (Asking for 30 fps in an
artificially illuminated room will get you a very dark, useless
picture, with most sensors). Yes this means that with cams with
use autoexposure (which is something which we really want where ever
possible), the framerate can and will change while streaming.


I think we have three possible use cases here:

- old-style standard definition video: use S_STD



Ack


- webcam-like devices: a combination of S_FMT and S_PARM I think? Correct
me if I'm wrong. S_STD is useless for this, right?



Ack


- video streaming devices like the davinci videoports where you can hook
up HDTV receivers or FPGAs: here you definitely need a new API to setup
the streaming parameters, and you want to be able to do that from the
application as well. Actually, sensors are also hooked up to these devices
in practice. And there you also want to be able to setup these parameters.
You will see this mostly (only?) on embedded platforms.



I agree we need an in kernel API for this, but why expose it to 
userspace, as you say this will only happen on embedded systems, 
shouldn't the info then go in a board_info file / struct ?


Regards,

Hans


Re: mt9t031 (was RE: [PATCH] adding support for setting bus parameters in sub device)

2009-06-11 Thread Hans Verkuil



 On 06/11/2009 11:33 AM, Hans Verkuil wrote:

 On 06/11/2009 10:35 AM, Hans Verkuil wrote:

 snip (a lot)

 Hmm,

 Why would we want the *application* to set things like this *at all* ?
 with sensors hsync and vsync and other timing are something between
 the bridge and the sensor; actually, in my experience the correct
 hsync / vsync timings to program the sensor to are very much bridge
 specific. So IMHO this should not be exposed to userspace at all.

 All userspace should be able to control is the resolution and the
 framerate. Although controlling the framerate in many cases also
 means controlling the maximum exposure time. So in many cases
 one cannot even control the framerate. (Asking for 30 fps in an
 artificially illuminated room will get you a very dark, useless
 picture, with most sensors). Yes, this means that with cams which
 use autoexposure (which is something we really want wherever
 possible), the framerate can and will change while streaming.

 I think we have three possible use cases here:

 - old-style standard definition video: use S_STD


 Ack

 - webcam-like devices: a combination of S_FMT and S_PARM I think?
 Correct
 me if I'm wrong. S_STD is useless for this, right?


 Ack

 - video streaming devices like the davinci videoports where you can hook
 up HDTV receivers or FPGAs: here you definitely need a new API to setup
 the streaming parameters, and you want to be able to do that from the
 application as well. Actually, sensors are also hooked up to these
 devices
 in practice. And there you also want to be able to setup these
 parameters.
 You will see this mostly (only?) on embedded platforms.


 I agree we need an in kernel API for this, but why expose it to
 userspace, as you say this will only happen on embedded systems,
 shouldn't the info then go in a board_info file / struct ?

These timings are not fixed. E.g. a 720p60 video stream has different
timings compared to a 1080p60 stream. So you have to be able to switch
from userspace. It's like PAL and NTSC, but then many times worse :-)

Regards,

 Hans

-- 
Hans Verkuil - video4linux developer - sponsored by TANDBERG



RE: [PATCH] adding support for setting bus parameters in sub device

2009-06-11 Thread Karicheri, Muralidharan


 Why force all platforms to set it if the driver is perfectly capable of
 doing this itself? As I said - this is not a platform-specific feature, it's
 chip-specific. What good would it make to have all platforms using
 mt9t031 to specify, that yes, the chip can use both falling and rising
 pclk edge, but only active high vsync and hsync?

???

You will just tell the chip what to use. So you set 'use falling edge' and
either set 'active high vsync/hsync' or just leave that out since you know
the mt9t031 has that fixed. You don't specify in the platform data what the
chip can support, that's not relevant. You know what the host expects and
you pass that information on to the chip.

A board designer knows what the host supports, knows what the sensor
supports, and knows if he added any inverters on the board, and based on
all that information he can just setup these parameters for the sensor
chip. Settings that are fixed on the sensor chip he can just ignore, he
only need to specify those settings that the sensor really needs.

[MK] I agree with Hans. Looking at my experience with various platforms, this is 
true. We have the following decoders connected to the VPFE bus working in our 
internal release kernel. In all these cases, the bus parameters are fixed given 
a board and the platform. Given below is the list of boards, the decoders 
connected, and the buses available:

DM6446 - Bayer RGB bus (10 bit, connected to MSB), VSYNC, HSYNC, PCLK
         (MT9T001)
       - YUV bus (embedded sync / separate sync), PCLK, VSYNC, HSYNC, Field
         (TVP5146) - 8/10 bit data bus
DM355  - same as above
       - VPFE also supports a YUV bus with a 16 bit bus (8 bit Y and 8 bit
         Cb/Cr), but no decoders tested with this interface.
DM365  - Bayer RGB bus, same as above
       - YUV bus, similar to that in DM355
       - TVP7002, connected through a 16 bit YUV bus with embedded sync
         (BT.1120)

  BTW, looking at the code there doesn't seem to be a bus type (probably
  only Bayer is used), correct? How is the datawidth negotiation done? Is
  the largest available width chosen? I assume that the datawidth is
  generally fixed by the host? I don't quite see how that can be
  negotiated since what is chosen there is linked to how the video data
  is transferred to memory. E.g., choosing a bus width of 8 or 10 bits can
  be the difference between transferring 8 bit or 16 bit data (with 6
  bits padding) when the image is transferred to memory. If I would
  implement negotiation I would probably limit it to those one-bit
  settings and not include bus type and width.

 Well, this is indeed hairy. And hardware developers compete in creativity
 making us invent ever newer algorithms :-( I think, so far _practically_
 I have only worked with the following setups:

 8 bit parallel with syncs and clocks. Data can be either Bayer or YUV.
 This is most usual in my practice so far.

 10 bit parallel (PXA270) with syncs and clocks bus with an 8 bit sensor
 connected to most significant lanes... This is achieved by providing an
 additional I2C controlled switch...

 10 bit parallel with a 10 bit sensor, data 0-extended to 16 bit, raw
 Bayer.

 15 bit parallel bus (i.MX31) with 8 bit sensor connected to least
 significant lanes, because i.MX31 is smart enough to use them correctly.

 ATM this is a part of soc-camera pixel format negotiation code, you can
 look at various .set_bus_param() methods in sensor drivers. E.g., look in
 mt9m001 and mt9v022 for drivers, using external bus-width switches...

 Now, I do not know how standard these various data packing methods on the
 bus are, I think, we will have to give it a good thought when designing
 an API, comments from developers familiar with various setups are much
 appreciated.

I think we should not attempt to be too fancy with this part of the API.
Something like the pixel format API in the v4l2 spec which is basically
just a random number with an associated format specification and no attempt
at high-level format descriptions. In this case a simple enum might
be enough. I'm not even sure whether we should specify the bus width but
instead have it implicit in the bus type.

[MK] VPFE at least needs to know whether the YUV bus has embedded or separate 
sync lines, as well as whether it is 10 bit or 8 bit. Similarly it needs 16 bit 
for interfacing with TVP7002. For Raw Bayer, it needs to know the size of valid 
data on the bus and where the MSB of the sensor is connected (this can be 
specified in the platform data of the bridge driver, and is not applicable for 
the sensor device). So the data width information could either be explicit in 
the bus type or could be specified in a width parameter. Both work for our driver.
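To make the MSB-wiring rule above concrete, here is a small user-space C sketch 
(the structure and names are hypothetical, not the actual DM365 CCDC register 
API) of how a bridge driver could derive the data shift from the bus width and 
the host data line the sensor MSB is wired to:

```c
#include <assert.h>

/* Hypothetical wiring description: 'width' is the number of valid
 * sensor data bits, 'host_msb' is the host data line the sensor's
 * MSB is connected to (e.g. data15 or data9). */
struct bus_wiring {
	unsigned int width;    /* e.g. 10 for MT9T031, 12 for MT9P031 */
	unsigned int host_msb; /* e.g. 15 or 9 */
};

/* Right-shift needed to right-justify a sample: an MT9T031 (10 bit)
 * with its MSB on data15 occupies lines 15..6, so shift = 6; with
 * its MSB on data9 it occupies lines 9..0, so shift = 0. */
static unsigned int ccdc_data_shift(const struct bus_wiring *w)
{
	return w->host_msb - w->width + 1;
}
```

This is exactly the information the hardware register mentioned above encodes; 
without it the driver would have to hard-code the shift.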

What about the embedded sync bool I have added in the latest patch? This can be 
removed too if it is explicit in the bus type. But my preference is to keep 
them as per my latest patch. I am also not sure whether a query capability would 
really be useful versus defining them per 

RE: mt9t031 (was RE: [PATCH] adding support for setting bus parameters in sub device)

2009-06-11 Thread Karicheri, Muralidharan
Hi,

In our experience, I have the following cases where we need to set the device to 
capture/display at a certain resolution and fps.

Capture side
-
1) IP Netcam applications. If the solution uses Micron sensors such as 
MT9T001/031/P031 etc., the application will need to stream at various 
resolutions ranging from VGA, SVGA to UXGA or 480p, 576p to 1080p. In these 
cases what we have implemented is a table of timings looked up by the STD 
string. Though this is not a standard, it helps in achieving streaming at the 
desired frame rate. The exposure is adjusted to get the desired frame rate and 
video quality. The VPFE has modules to fine tune the image quality.

2) The TVP7002 decoder chip supports the following analog video standards:
SDTV(YPbPr component) - 480i/576i
EDTV (YPbPr component) - 480p/576p
HDTV (YPbPr component) - 720p@50/60, 1080i@50/60, 1080p@50/60
PC graphics (RGB Component) - VGA to UXGA

3) TVP5146/TVP5147 - supports NTSC/PAL standards from SDTV

4) Camera applications that do preview and take snapshots. We don't support this 
in Linux, but have solutions based on other OSes.

Display side

1) VPBE (video processing back end) can support NTSC/PAL timing signals directly 
from the SoC.

2) By connecting a daughter card that does voltage translation to the digital 
LCD port of the VPBE, it can support PC graphics timings. Examples are the Logic 
PD and Avnet LCD kits that can be connected using these daughter cards.

3) The digital LCD port of the VPBE can generate BT.656/BT.1120 timings, so you 
could connect an encoder chip such as the THS8200 to generate 720p/1080 YPbPr 
component outputs. This port can support any encoder chip that accepts YUV, 
RGB666 or RGB888 data along with timing signals and outputs PC graphics, YPbPr 
component, or standard NTSC/PAL signals.

As you can see, S_STD can be used only for 3) on the capture side and 1) on the 
display side, since it doesn't cover all the above timings and is therefore of 
limited use. So we need an API that can do the following...

Query available timing settings from the encoder/decoder/sensors. Since the full 
timing details are not relevant to the application domain, they can be defined 
in the driver, which exposes only the following as part of the query and uses 
them to look up the correct timing:

1) resolution (VGA, 720p, 1080p, 576p etc)
2) frame rate 

Set the timing by specifying the above parameters.
Detect the signal for capture, similar to QUERYSTD??
Get the current timing...

Were VIDIOC_S_PARM & G_PARM added for this purpose? Maybe they need to be 
enhanced to add the resolution as well, or a new set of APIs added...
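As a sketch of the query-plus-set idea proposed above (all names here are 
invented for illustration; this is not an existing V4L2 API), the driver could 
keep the full timing details internally and expose only resolution and frame 
rate to the application:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical preset table: the driver keeps the detailed timings
 * (blanking, sync widths, etc.) internally and exposes only what the
 * application cares about, as argued above. */
struct timing_preset {
	const char *name;           /* "720p", "1080p", "VGA", ... */
	unsigned int width, height, fps;
};

static const struct timing_preset presets[] = {
	{ "480p",   720,  480, 60 },
	{ "576p",   720,  576, 50 },
	{ "720p",  1280,  720, 60 },
	{ "1080p", 1920, 1080, 30 },
};

/* Look up a preset by the (resolution, fps) pair the application asks
 * for; NULL means the encoder/decoder does not support that timing. */
static const struct timing_preset *find_preset(unsigned int width,
					       unsigned int height,
					       unsigned int fps)
{
	size_t i;

	for (i = 0; i < sizeof(presets) / sizeof(presets[0]); i++)
		if (presets[i].width == width &&
		    presets[i].height == height && presets[i].fps == fps)
			return &presets[i];
	return NULL;
}
```

A "set timing" call would then just pass the matched preset down to the 
sub device, and a QUERYSTD-like call would walk the same table in reverse.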


Murali Karicheri
Software Design Engineer
Texas Instruments Inc.
email: m-kariche...@ti.com

-Original Message-
From: linux-media-ow...@vger.kernel.org [mailto:linux-media-
ow...@vger.kernel.org] On Behalf Of Hans Verkuil
Sent: Thursday, June 11, 2009 6:40 AM
To: Hans de Goede
Cc: Jean-Philippe François; Karicheri, Muralidharan; davinci-linux-open-
sou...@linux.davincidsp.com; Muralidharan Karicheri; Guennadi Liakhovetski;
linux-media@vger.kernel.org
Subject: Re: mt9t031 (was RE: [PATCH] adding support for setting bus
parameters in sub device)




 On 06/11/2009 11:33 AM, Hans Verkuil wrote:

 On 06/11/2009 10:35 AM, Hans Verkuil wrote:

 snip (a lot)

 Hmm,

 Why would we want the *application* to set things like this *at all* ?
 with sensors hsync and vsync and other timing are something between
 the bridge and the sensor, actually in my experience the correct
 hsync / vsync timings to program the sensor to are very much bridge
 specific. So IMHO this should not be exposed to userspace at all.

 All userspace should be able to control is the resolution and the
 framerate. Although controlling the framerate in many cases also
 means controlling the maximum exposure time. So in many cases
 one cannot even control the framerate. (Asking for 30 fps in an
 artificially illuminated room will get you a very dark, useless
 picture, with most sensors). Yes this means that with cams which
 use autoexposure (which is something we really want wherever
 possible), the framerate can and will change while streaming.

 I think we have three possible use cases here:

 - old-style standard definition video: use S_STD


 Ack

 - webcam-like devices: a combination of S_FMT and S_PARM I think?
 Correct
 me if I'm wrong. S_STD is useless for this, right?


 Ack

 - video streaming devices like the davinci videoports where you can hook
 up HDTV receivers or FPGAs: here you definitely need a new API to setup
 the streaming parameters, and you want to be able to do that from the
 application as well. Actually, sensors are also hooked up to these
 devices
 in practice. And there you also want to be able to setup these
 parameters.
 You will see this mostly (only?) on embedded platforms.


 I agree we need an in kernel API for this, but why expose it to
 userspace

RE: mt9t031 (was RE: [PATCH] adding support for setting bus parameters in sub device)

2009-06-11 Thread Karicheri, Muralidharan


 - video streaming devices like the davinci videoports where you can hook
 up HDTV receivers or FPGAs: here you definitely need a new API to setup
 the streaming parameters, and you want to be able to do that from the
 application as well. Actually, sensors are also hooked up to these
devices
 in practice. And there you also want to be able to setup these parameters.
 You will see this mostly (only?) on embedded platforms.


I agree we need an in kernel API for this, but why expose it to
userspace, as you say this will only happen on embedded systems,
shouldn't the info then go in a board_info file / struct ?

No, we still need a way for the application to set these timings at the device. 
For example, it needs to tell a TVP7002 device to scan at 720p/1080p, similar to 
S_STD. From the user's perspective, it is just like S_STD. See my email for the 
details...

Regards,

Hans

--
To unsubscribe from this list: send the line unsubscribe linux-media in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


RE: mt9t031 (was RE: [PATCH] adding support for setting bus parameters in sub device)

2009-06-11 Thread Karicheri, Muralidharan
On Wed, 10 Jun 2009, Karicheri, Muralidharan wrote:

 So how do I know what frame-rate I get? Sensor output frame rate depends
 on the resolution of the frame, blanking, exposure time etc.

This is not supported.

I am still not clear. You had said in an earlier email that it can support 
streaming. That means the application can stream frames from the capture device.
I know you don't have support for setting a specific frame rate, but it must be 
outputting frames at some rate, right?

Here is my usecase.

open capture device,
set resolutions (say VGA) for capture (S_FMT ???)
request buffers for streaming & mmap & QUERYBUF
start streaming (STREAMON)
DQBUF/QBUF in a loop - get VGA buffers at some fps.
STREAMOFF
close device

Is this possible with the mt9t031 driver currently in the tree? This requires 
the sensor device to output frames continuously on the bus using 
PCLK/HSYNC/VSYNC timing to the bridge device connected to the bus. Can you give 
a use case like the above that you are using? I just want to estimate how much 
effort is required to add this support in the mt9t031 driver.

Thanks

Murali

Thanks
Guennadi
---
Guennadi Liakhovetski, Ph.D.
Freelance Open-Source Software Developer
http://www.open-technology.de/



RE: mt9t031 (was RE: [PATCH] adding support for setting bus parameters in sub device)

2009-06-11 Thread Guennadi Liakhovetski
On Thu, 11 Jun 2009, Karicheri, Muralidharan wrote:

 On Wed, 10 Jun 2009, Karicheri, Muralidharan wrote:
 
  So how do I know what frame-rate I get? Sensor output frame rate depends
  on the resolution of the frame, blanking, exposure time etc.
 
 This is not supported.
 
 I am still not clear. You had said in an earlier email that it can 
 support streaming. That means application can stream frames from the 
 capture device.
 I know you don't have support for setting a specific frame rate, but it 
 must be outputting frame at some rate right?

I am sorry, I do not know how I can explain myself clearer.

Yes, you can stream video with mt9t031.

No, you neither get the framerate measured by the driver nor can you set a 
specific framerate. Frames are produced as fast as it goes, depending on 
clock settings, frame size, black areas, autoexposure.

Thanks
Guennadi

 
 Here is my usecase.
 
 open capture device,
 set resolutions (say VGA) for capture (S_FMT ???)
 request buffers for streaming & mmap & QUERYBUF
 start streaming (STREAMON)
 DQBUF/QBUF in a loop - get VGA buffers at some fps.
 STREAMOFF
 close device
 
 Is this possible with mt9t031 available currently in the tree? This requires 
 sensor device output frames continuously on the bus using PCLK/HSYNC/VSYNC 
 timing to the bridge device connected to the bus. Can you give a use case 
 like above that you are using. I just want to estimate how much effort is 
 required to add this support in the mt9t031 driver.
 
 Thanks
 
 Murali
 
 Thanks
 Guennadi
 ---
 Guennadi Liakhovetski, Ph.D.
 Freelance Open-Source Software Developer
 http://www.open-technology.de/
 

---
Guennadi Liakhovetski, Ph.D.
Freelance Open-Source Software Developer
http://www.open-technology.de/


RE: mt9t031 (was RE: [PATCH] adding support for setting bus parameters in sub device)

2009-06-11 Thread Karicheri, Muralidharan

I am sorry, I do not know how I can explain myself clearer.

Thanks for helping me to understand better :)
Yes, you can stream video with mt9t031.

No, you neither get the framerate measured by the driver nor can you set a
specific framerate. Frames are produced as fast as it goes, depending on
clock settings, frame size, black areas, autoexposure.

Ok. It is now clear to me. 

Thanks for all your help.

Thanks
Guennadi


 Here is my usecase.

 open capture device,
 set resolutions (say VGA) for capture (S_FMT ???)
 request buffers for streaming & mmap & QUERYBUF
 start streaming (STREAMON)
 DQBUF/QBUF in a loop - get VGA buffers at some fps.
 STREAMOFF
 close device

 Is this possible with mt9t031 available currently in the tree? This
requires sensor device output frames continuously on the bus using
PCLK/HSYNC/VSYNC timing to the bridge device connected to the bus. Can you
give a use case like above that you are using. I just want to estimate how
much effort is required to add this support in the mt9t031 driver.

 Thanks

 Murali

 Thanks
 Guennadi
 ---
 Guennadi Liakhovetski, Ph.D.
 Freelance Open-Source Software Developer
 http://www.open-technology.de/


---
Guennadi Liakhovetski, Ph.D.
Freelance Open-Source Software Developer
http://www.open-technology.de/



[RFC PATCH] adding support for setting bus parameters in sub device

2009-06-10 Thread m-karicheri2
From: Muralidharan Karicheri a0868...@gt516km11.gt.design.ti.com

This patch adds support for setting bus parameters such as bus type
(Raw Bayer or Raw YUV image data bus), bus width (example 10 bit raw
image data bus, 10 bit BT.656 etc.), and polarities (vsync, hsync, field
etc.) in a sub device. This allows the bridge driver to configure the sub device
interface for a specific set of bus parameters through the s_bus() function call.

Reviewed By Hans Verkuil.
Signed-off-by: Muralidharan Karicheri m-kariche...@ti.com
---
Applies to v4l-dvb repository

 include/media/v4l2-subdev.h |   36 
 1 files changed, 36 insertions(+), 0 deletions(-)

diff --git a/include/media/v4l2-subdev.h b/include/media/v4l2-subdev.h
index 1785608..8e719c4 100644
--- a/include/media/v4l2-subdev.h
+++ b/include/media/v4l2-subdev.h
@@ -37,6 +37,39 @@ struct v4l2_decode_vbi_line {
u32 type;   /* VBI service type (V4L2_SLICED_*). 0 if no 
service found */
 };
 
+/*
+ * Some sub-devices are connected to the bridge device through a bus that
+ * carries the clock, vsync, hsync and data. Some interfaces such as BT.656
+ * carry the sync embedded in the data, whereas others have separate lines
+ * carrying the sync signals. The structure below is used by the bridge driver
+ * to set the desired bus parameters in the sub device to work with it.
+ */
+enum v4l2_subdev_bus_type {
+   /* Raw YUV image data bus */
+   V4L2_SUBDEV_BUS_RAW_YUV,
+   /* Raw Bayer image data bus */
+   V4L2_SUBDEV_BUS_RAW_BAYER
+};
+
+struct v4l2_subdev_bus {
+   /* yuv or bayer image data bus */
+   enum v4l2_subdev_bus_type type;
+   /* bus width */
+   u8 width;
+   /* embedded sync, set this when sync is embedded in the data stream */
+   unsigned embedded_sync:1;
+   /* 0 - active low, 1 - active high */
+   unsigned pol_vsync:1;
+   /* 0 - active low, 1 - active high */
+   unsigned pol_hsync:1;
+   /* 0 - low to high , 1 - high to low */
+   unsigned pol_field:1;
+   /* 0 - sample at falling edge , 1 - sample at rising edge */
+   unsigned pol_pclock:1;
+   /* 0 - active low , 1 - active high */
+   unsigned pol_data:1;
+};
+
 /* Sub-devices are devices that are connected somehow to the main bridge
device. These devices are usually audio/video muxers/encoders/decoders or
sensors and webcam controllers.
@@ -199,6 +232,8 @@ struct v4l2_subdev_audio_ops {
 
s_routing: see s_routing in audio_ops, except this version is for video
devices.
+
+   s_bus: set bus parameters in sub device to configure the interface
  */
 struct v4l2_subdev_video_ops {
int (*s_routing)(struct v4l2_subdev *sd, u32 input, u32 output, u32 
config);
@@ -219,6 +254,7 @@ struct v4l2_subdev_video_ops {
int (*s_parm)(struct v4l2_subdev *sd, struct v4l2_streamparm *param);
int (*enum_framesizes)(struct v4l2_subdev *sd, struct v4l2_frmsizeenum 
*fsize);
int (*enum_frameintervals)(struct v4l2_subdev *sd, struct 
v4l2_frmivalenum *fival);
+   int (*s_bus)(struct v4l2_subdev *sd, const struct v4l2_subdev_bus *bus);
 };
 
 struct v4l2_subdev_ops {
-- 
1.6.0.4
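To illustrate how a bridge driver would use the s_bus() call with the structure 
from this patch, here is a user-space mock (the "sensor" below is purely 
illustrative, not the real mt9t031 driver; u8 is replaced by unsigned char so 
the sketch compiles outside the kernel):

```c
#include <assert.h>

/* Mirrors the v4l2_subdev_bus layout from the patch above. */
enum v4l2_subdev_bus_type {
	V4L2_SUBDEV_BUS_RAW_YUV,
	V4L2_SUBDEV_BUS_RAW_BAYER
};

struct v4l2_subdev_bus {
	enum v4l2_subdev_bus_type type;
	unsigned char width;
	unsigned embedded_sync:1;
	unsigned pol_vsync:1;
	unsigned pol_hsync:1;
	unsigned pol_field:1;
	unsigned pol_pclock:1;
	unsigned pol_data:1;
};

/* Mock 10-bit raw Bayer sensor with separate syncs: rejects any bus
 * configuration it cannot drive, as a real s_bus() implementation
 * would (returning -EINVAL in the kernel). */
static int mock_sensor_s_bus(const struct v4l2_subdev_bus *bus)
{
	if (bus->type != V4L2_SUBDEV_BUS_RAW_BAYER)
		return -1;
	if (bus->width != 10 || bus->embedded_sync)
		return -1;
	return 0;
}
```

The bridge driver fills the structure with what its interface needs and calls 
the op; a non-zero return tells it the sub device cannot match that setup.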



Re: [PATCH] adding support for setting bus parameters in sub device

2009-06-10 Thread Guennadi Liakhovetski
On Tue, 9 Jun 2009, m-kariche...@ti.com wrote:

 From: Muralidharan Karicheri a0868...@gt516km11.gt.design.ti.com
 
 This patch adds support for setting bus parameters such as bus type
 (BT.656, BT.1120 etc), width (example 10 bit raw image data bus)
 and polarities (vsync, hsync, field etc) in sub device. This allows
 bridge driver to configure the sub device for a specific set of bus
 parameters through s_bus() function call.

Yes, this is required, but this is not enough. Firstly, you're missing at 
least one more flag - master or slave. Secondly, it is not enough to 
provide a s_bus function. Many hosts and sensors can configure one of 
several alternate configurations - they can select signal polarities, data 
widths, master / slave role, etc. Whereas others have some or all of these 
parameters fixed. That's why we have a query method in soc-camera, which 
delivers all supported configurations, and then the host can select some 
mutually acceptable subset. No, just returning an error code is not 
enough.

So, you would need to request supported flags, the sensor would return a 
bitmask, and the host would then issue a s_bus call with a selected subset 
and configure itself. And then you realise, that one bit per parameter is 
not enough - you need a bit for both, e.g., vsync active low and active 
high.
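The two-step scheme described here (query a bitmask of supported options, then 
issue s_bus with a chosen subset) can be modelled in a few lines of user-space 
C. The flag names below are invented for the sketch; the real equivalents are 
the SOCAM_* flags in include/media/soc_camera.h:

```c
#include <assert.h>

/* One bit per option, with *both* polarities represented, as argued
 * above. A chip with a fixed polarity advertises only one bit of a
 * pair; a configurable chip advertises both. */
#define BUS_HSYNC_ACTIVE_HIGH  (1u << 0)
#define BUS_HSYNC_ACTIVE_LOW   (1u << 1)
#define BUS_VSYNC_ACTIVE_HIGH  (1u << 2)
#define BUS_VSYNC_ACTIVE_LOW   (1u << 3)
#define BUS_MASTER             (1u << 4)
#define BUS_SLAVE              (1u << 5)

/* The two sides are compatible only if every parameter group still has
 * at least one option left after intersecting the two bitmasks,
 * loosely following soc_camera_bus_param_compatible(). */
static int bus_params_compatible(unsigned int sensor, unsigned int host)
{
	static const unsigned int groups[] = {
		BUS_HSYNC_ACTIVE_HIGH | BUS_HSYNC_ACTIVE_LOW,
		BUS_VSYNC_ACTIVE_HIGH | BUS_VSYNC_ACTIVE_LOW,
		BUS_MASTER | BUS_SLAVE,
	};
	unsigned int common = sensor & host;
	unsigned int i;

	for (i = 0; i < sizeof(groups) / sizeof(groups[0]); i++)
		if (!(common & groups[i]))
			return 0;
	return 1;
}
```

The host would then pick one bit from each group of the common subset and pass 
the result to s_bus.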

Have a look at 
include/media/soc_camera.h::soc_camera_bus_param_compatible() and macros 
defined there, then you might want to have a look at 
drivers/media/video/pxa_camera.c::pxa_camera_set_bus_param() to see how 
the whole machinery works. And you also want inverter flags, see 
drivers/media/video/soc_camera.c::soc_camera_apply_sensor_flags().

So, this is a step in the right direction, but it doesn't seem final yet.

Thanks
Guennadi

 
 Reviewed By Hans Verkuil.
 Signed-off-by: Muralidharan Karicheri m-kariche...@ti.com
 ---
 Applies to v4l-dvb repository
 
  include/media/v4l2-subdev.h |   36 
  1 files changed, 36 insertions(+), 0 deletions(-)
 
 diff --git a/include/media/v4l2-subdev.h b/include/media/v4l2-subdev.h
 index 1785608..c1cfb3b 100644
 --- a/include/media/v4l2-subdev.h
 +++ b/include/media/v4l2-subdev.h
 @@ -37,6 +37,41 @@ struct v4l2_decode_vbi_line {
   u32 type;   /* VBI service type (V4L2_SLICED_*). 0 if no 
 service found */
  };
  
 +/*
 + * Some sub-devices are connected to the bridge device through a bus that
 + * carries * the clock, vsync, hsync and data. Some interfaces such as BT.656
 + * carries the sync embedded in the data where as others have separate line
 + * carrying the sync signals. The structure below is used by bridge driver to
 + * set the desired bus parameters in the sub device to work with it.
 + */
 +enum v4l2_subdev_bus_type {
 + /* BT.656 interface. Embedded sync */
 + V4L2_SUBDEV_BUS_BT_656,
 + /* BT.1120 interface. Embedded sync */
 + V4L2_SUBDEV_BUS_BT_1120,
 + /* 8 bit muxed YCbCr bus, separate sync and field signals */
 + V4L2_SUBDEV_BUS_YCBCR_8,
 + /* 16 bit YCbCr bus, separate sync and field signals */
 + V4L2_SUBDEV_BUS_YCBCR_16,
 + /* Raw Bayer image data bus , 8 - 16 bit wide, sync signals */
 + V4L2_SUBDEV_BUS_RAW_BAYER
 +};
 +
 +struct v4l2_subdev_bus   {
 + enum v4l2_subdev_bus_type type;
 + u8 width;
 + /* 0 - active low, 1 - active high */
 + unsigned pol_vsync:1;
 + /* 0 - active low, 1 - active high */
 + unsigned pol_hsync:1;
 + /* 0 - low to high , 1 - high to low */
 + unsigned pol_field:1;
 + /* 0 - sample at falling edge , 1 - sample at rising edge */
 + unsigned pol_pclock:1;
 + /* 0 - active low , 1 - active high */
 + unsigned pol_data:1;
 +};
 +
  /* Sub-devices are devices that are connected somehow to the main bridge
 device. These devices are usually audio/video muxers/encoders/decoders or
 sensors and webcam controllers.
 @@ -109,6 +144,7 @@ struct v4l2_subdev_core_ops {
   int (*querymenu)(struct v4l2_subdev *sd, struct v4l2_querymenu *qm);
   int (*s_std)(struct v4l2_subdev *sd, v4l2_std_id norm);
   long (*ioctl)(struct v4l2_subdev *sd, unsigned int cmd, void *arg);
 + int (*s_bus)(struct v4l2_subdev *sd, struct v4l2_subdev_bus *bus);
  #ifdef CONFIG_VIDEO_ADV_DEBUG
   int (*g_register)(struct v4l2_subdev *sd, struct v4l2_dbg_register 
 *reg);
   int (*s_register)(struct v4l2_subdev *sd, struct v4l2_dbg_register 
 *reg);
 -- 
 1.6.0.4
 
 

---
Guennadi Liakhovetski, Ph.D.
Freelance Open-Source Software Developer
http://www.open-technology.de/


Re: [PATCH] adding support for setting bus parameters in sub device

2009-06-10 Thread Guennadi Liakhovetski
On Wed, 10 Jun 2009, Hans Verkuil wrote:

 On Wednesday 10 June 2009 20:32:25 Guennadi Liakhovetski wrote:
  On Tue, 9 Jun 2009, m-kariche...@ti.com wrote:
   From: Muralidharan Karicheri a0868...@gt516km11.gt.design.ti.com
  
   This patch adds support for setting bus parameters such as bus type
   (BT.656, BT.1120 etc), width (example 10 bit raw image data bus)
   and polarities (vsync, hsync, field etc) in sub device. This allows
   bridge driver to configure the sub device for a specific set of bus
   parameters through s_bus() function call.
 
  Yes, this is required, but this is not enough. Firstly, you're missing at
  least one more flag - master or slave. Secondly, it is not enough to
  provide a s_bus function. Many hosts and sensors can configure one of
  several alternate configurations - they can select signal polarities,
  data widths, master / slave role, etc. Whereas others have some or all of
  these parameters fixed. That's why we have a query method in soc-camera,
  which delivers all supported configurations, and then the host can select
  some mutually acceptable subset. No, just returning an error code is not
  enough.
 
 Why would you want to query this? I would expect this to be fixed settings: 
 something that is determined by the architecture. Actually, I would expect 
 this to be part of the platform_data in many cases and setup when the i2c 
 driver is initialized and not touched afterwards.
 
 If we want to negotiate these bus settings, then we indeed need something 
 more. But it seems unnecessarily complex to me to implement autonegotiation 
 when this is in practice a fixed setting determined by how the sensor is 
 hooked up to the host.

On the platform level I have so far seen two options: signal connected 
directly or via an inverter. For that you need platform data, yes. But 
otherwise - why? say, if both your sensor and your host can select hsync 
polarity in software - what should platform tell about it? This knowledge 
belongs in the respective drivers - they know, that they can configure 
arbitrary polarity and they advertise that. Another sensor, that is 
statically active high - what does platform have to do with it? The driver 
knows perfectly, that it can only do one polarity, and it negotiates that. 
Earlier we also had these flags configured in platform code, but then we 
realised that this wasn't correct. This information and configuration 
methods are chip-specific, not platform-specific.
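The board-level inverter case mentioned here is the one piece that does belong 
in platform data. A minimal sketch (single-bit flags and invented names for 
brevity; the real code, soc_camera_apply_sensor_flags(), works on paired 
active-high/active-low bits):

```c
#include <assert.h>

#define FLAG_HSYNC_ACTIVE_HIGH (1u << 0)
#define FLAG_VSYNC_ACTIVE_HIGH (1u << 1)

/* The platform only records which signals pass through a board-level
 * inverter; everything else the two drivers negotiate themselves.
 * An inverter flips the polarity the sensor must actually drive. */
static unsigned int apply_board_inverters(unsigned int negotiated,
					  unsigned int inverter_mask)
{
	return negotiated ^ inverter_mask;
}
```

So if the host negotiated an active-high vsync but the board inverts that line, 
the sensor is programmed for active-low, with no chip-specific knowledge in the 
platform code.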

And the negotiation code is not that complex - just copy respective 
soc-camera functions, later the originals will be removed.

Thanks
Guennadi
---
Guennadi Liakhovetski, Ph.D.
Freelance Open-Source Software Developer
http://www.open-technology.de/


RE: [PATCH] adding support for setting bus parameters in sub device

2009-06-10 Thread Karicheri, Muralidharan
Guennadi,

Thanks for responding. I acknowledge I need to add
master & slave information in the structure. Querying
the capabilities from the sub device is a good feature.
I will look into your references and let you know if I
have any question.

I will send an updated patch based on this.

BTW, I have a question about the mt9t031.c driver. Could
I use this driver to stream VGA frames from the sensor?
Is it possible to configure the driver to stream at a
specific fps? We have a version of the driver used internally
and it can do capture and stream of Bayer RGB data at VGA,
480p, 576p and 720p resolutions. I have started integrating
your driver with my vpfe capture driver and it wasn't very
clear to me. It looks like the driver calculates the timing parameters
based on the width and height of the capture area. We need
streaming capability in the driver. This is how our driver works
with the mt9t031 sensor:
                 raw-bus (10 bit)
vpfe-capture  <---------------- mt9t031 driver
      |                               |
      V                               V
    VPFE                           MT9T031

VPFE hardware has internal timing and DMA controller to
copy data frame by frame from the sensor output to SDRAM.
The PCLK from the sensor is used to generate the internal
timing.

Thanks.

Murali
email: m-kariche...@ti.com

-Original Message-
From: Guennadi Liakhovetski [mailto:g.liakhovet...@gmx.de]
Sent: Wednesday, June 10, 2009 2:32 PM
To: Karicheri, Muralidharan
Cc: linux-media@vger.kernel.org; davinci-linux-open-
sou...@linux.davincidsp.com; Muralidharan Karicheri
Subject: Re: [PATCH] adding support for setting bus parameters in sub
device

On Tue, 9 Jun 2009, m-kariche...@ti.com wrote:

 From: Muralidharan Karicheri a0868...@gt516km11.gt.design.ti.com

 This patch adds support for setting bus parameters such as bus type
 (BT.656, BT.1120 etc), width (example 10 bit raw image data bus)
 and polarities (vsync, hsync, field etc) in sub device. This allows
 bridge driver to configure the sub device for a specific set of bus
 parameters through s_bus() function call.

Yes, this is required, but this is not enough. Firstly, you're missing at
least one more flag - master or slave. Secondly, it is not enough to
provide a s_bus function. Many hosts and sensors can configure one of
several alternate configurations - they can select signal polarities, data
widths, master / slave role, etc. Whereas others have some or all of these
parameters fixed. That's why we have a query method in soc-camera, which
delivers all supported configurations, and then the host can select some
mutually acceptable subset. No, just returning an error code is not
enough.

So, you would need to request supported flags, the sensor would return a
bitmask, and the host would then issue a s_bus call with a selected subset
and configure itself. And then you realise, that one bit per parameter is
not enough - you need a bit for both, e.g., vsync active low and active
high.

Have a look at
include/media/soc_camera.h::soc_camera_bus_param_compatible() and macros
defined there, then you might want to have a look at
drivers/media/video/pxa_camera.c::pxa_camera_set_bus_param() to see how
the whole machinery works. And you also want inverter flags, see
drivers/media/video/soc_camera.c::soc_camera_apply_sensor_flags().

So, this is a step in the right direction, but it doesn't seem final yet.

Thanks
Guennadi


 Reviewed By Hans Verkuil.
 Signed-off-by: Muralidharan Karicheri m-kariche...@ti.com
 ---
 Applies to v4l-dvb repository

  include/media/v4l2-subdev.h |   36 
  1 files changed, 36 insertions(+), 0 deletions(-)

 diff --git a/include/media/v4l2-subdev.h b/include/media/v4l2-subdev.h
 index 1785608..c1cfb3b 100644
 --- a/include/media/v4l2-subdev.h
 +++ b/include/media/v4l2-subdev.h
 @@ -37,6 +37,41 @@ struct v4l2_decode_vbi_line {
  u32 type;   /* VBI service type (V4L2_SLICED_*). 0 if no
service found */
  };

 +/*
 + * Some sub-devices are connected to the bridge device through a bus that
 + * carries the clock, vsync, hsync and data. Some interfaces such as BT.656
 + * carry the sync embedded in the data, whereas others have separate lines
 + * carrying the sync signals. The structure below is used by the bridge
 + * driver to set the desired bus parameters in the sub device to work with it.
 + */
 +enum v4l2_subdev_bus_type {
 +/* BT.656 interface. Embedded sync */
 +V4L2_SUBDEV_BUS_BT_656,
 +/* BT.1120 interface. Embedded sync */
 +V4L2_SUBDEV_BUS_BT_1120,
 +/* 8 bit muxed YCbCr bus, separate sync and field signals */
 +V4L2_SUBDEV_BUS_YCBCR_8,
 +/* 16 bit YCbCr bus, separate sync and field signals */
 +V4L2_SUBDEV_BUS_YCBCR_16,
 +/* Raw Bayer image data bus , 8 - 16 bit wide, sync signals */
 +V4L2_SUBDEV_BUS_RAW_BAYER
 +};
 +
 +struct v4l2_subdev_bus  {
 +enum v4l2_subdev_bus_type type;
 +u8

Re: [PATCH] adding support for setting bus parameters in sub device

2009-06-10 Thread Hans Verkuil
On Wednesday 10 June 2009 21:59:13 Guennadi Liakhovetski wrote:
 On Wed, 10 Jun 2009, Hans Verkuil wrote:
  On Wednesday 10 June 2009 20:32:25 Guennadi Liakhovetski wrote:
   On Tue, 9 Jun 2009, m-kariche...@ti.com wrote:
From: Muralidharan Karicheri a0868...@gt516km11.gt.design.ti.com
   
This patch adds support for setting bus parameters such as bus type
(BT.656, BT.1120 etc), width (example 10 bit raw image data bus)
and polarities (vsync, hsync, field etc) in sub device. This allows
bridge driver to configure the sub device for a specific set of bus
parameters through s_bus() function call.
  
   Yes, this is required, but this is not enough. Firstly, you're
   missing at least one more flag - master or slave. Secondly, it is not
   enough to provide a s_bus function. Many hosts and sensors can
   configure one of several alternate configurations - they can select
   signal polarities, data widths, master / slave role, etc. Whereas
   others have some or all of these parameters fixed. That's why we have
   a query method in soc-camera, which delivers all supported
   configurations, and then the host can select some mutually acceptable
   subset. No, just returning an error code is not enough.
 
  Why would you want to query this? I would expect this to be fixed
  settings: something that is determined by the architecture. Actually, I
  would expect this to be part of the platform_data in many cases and
  setup when the i2c driver is initialized and not touched afterwards.
 
  If we want to negotiate these bus settings, then we indeed need
  something more. But it seems unnecessarily complex to me to implement
  autonegotiation when this is in practice a fixed setting determined by
  how the sensor is hooked up to the host.

 On the platform level I have so far seen two options: signal connected
 directly or via an inverter. For that you need platform data, yes. But
 otherwise - why? say, if both your sensor and your host can select hsync
 polarity in software - what should platform tell about it? This knowledge
 belongs in the respective drivers - they know, that they can configure
 arbitrary polarity and they advertise that. Another sensor, that is
 statically active high - what does platform have to do with it? The
 driver knows perfectly, that it can only do one polarity, and it
 negotiates that. Earlier we also had this flags configured in platform
 code, but then we realised that this wasn't correct. This information and
 configuration methods are chip-specific, not platform-specific.

 And the negotiation code is not that complex - just copy respective
 soc-camera functions, later the originals will be removed.

My view of this would be that the board specification specifies the sensor 
(and possibly other chips) that are on the board. And to me it makes sense 
that that also supplies the bus settings. I agree that it is not complex 
code, but I think it is also unnecessary code. Why negotiate if you can 
just set it?

BTW, looking at the code there doesn't seem to be a bus type (probably only 
Bayer is used), correct? How is the datawidth negotiation done? Is the 
largest available width chosen? I assume that the datawidth is generally 
fixed by the host? I don't quite see how that can be negotiated since what 
is chosen there is linked to how the video data is transferred to memory. 
E.g., choosing a bus width of 8 or 10 bits can be the difference between 
transferring 8 bit or 16 bit data (with 6 bits of padding) when the image is 
transferred to memory. If I were to implement negotiation, I would probably 
limit it to those one-bit settings and not include bus type and width.
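The padding arithmetic Hans describes can be illustrated as follows (user-space sketch, not part of any driver):

```c
#include <assert.h>
#include <stdint.h>

/* A 10-bit bus sample stored LSB-aligned in a 16-bit memory word
 * leaves 6 padding bits; an 8-bit bus transfer fits a single byte. */
static uint16_t store_10bit(uint16_t sample)
{
	return sample & 0x03FF;	/* 10 valid bits, upper 6 are padding */
}

static uint8_t store_8bit(uint16_t sample)
{
	return (uint8_t)(sample >> 2);	/* keep the 8 MSBs of a 10-bit sample */
}
```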

Regards,

Hans


 Thanks
 Guennadi
 ---
 Guennadi Liakhovetski, Ph.D.
 Freelance Open-Source Software Developer
 http://www.open-technology.de/
 --
 To unsubscribe from this list: send the line unsubscribe linux-media in
 the body of a message to majord...@vger.kernel.org
 More majordomo info at  http://vger.kernel.org/majordomo-info.html



-- 
Hans Verkuil - video4linux developer - sponsored by TANDBERG Telecom


RE: mt9t031 (was RE: [PATCH] adding support for setting bus parameters in sub device)

2009-06-10 Thread Karicheri, Muralidharan


 We need
 streaming capability in the driver. This is how our driver works
 with mt9t031 sensor
             raw-bus (10 bit)
  vpfe-capture  <----  mt9t031 driver
       |                   |
       V                   V
     VPFE               MT9T031
 
  VPFE hardware has an internal timing and DMA controller to
  copy data frame by frame from the sensor output to SDRAM.
  The PCLK from the sensor is used to generate the internal
  timing.

So, what is missing in the driver apart from the ability to specify
a frame-rate?

[MK] Does the mt9t031 output one frame (snapshot) like in a camera, or can it 
output frames continuously along with PCLK, Hsync and Vsync signals like in a 
video streaming device? VPFE capture can accept frames continuously from the 
sensor synchronized to PCLK, HSYNC and VSYNC and output frames to the 
application using QBUF/DQBUF. In our implementation, we have timing parameters 
for the sensor to do streaming at various resolutions and fps. So the 
application calls S_STD to set these timings. I am not sure if this is an 
acceptable way of implementing it. Any comments?

Thanks

Murali

Thanks
Guennadi
---
Guennadi Liakhovetski, Ph.D.
Freelance Open-Source Software Developer
http://www.open-technology.de/



Re: [PATCH] adding support for setting bus parameters in sub device

2009-06-10 Thread Guennadi Liakhovetski
On Wed, 10 Jun 2009, Hans Verkuil wrote:

 My view of this would be that the board specification specifies the sensor 
 (and possibly other chips) that are on the board. And to me it makes sense 
 that that also supplies the bus settings. I agree that it is not complex 
 code, but I think it is also unnecessary code. Why negotiate if you can 
 just set it?

Why force all platforms to set it if the driver is perfectly capable of 
doing this itself? As I said - this is not a platform-specific feature, it's 
chip-specific. What good would it do to have all platforms using mt9t031 
specify that yes, the chip can use both falling and rising pclk edges, 
but only active-high vsync and hsync?
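The negotiation Guennadi has in mind amounts to intersecting capability masks. A minimal sketch, with hypothetical CAP_* flags standing in for the soc-camera bus flags:

```c
#include <assert.h>

/* Hypothetical capability flags, in the spirit of the soc-camera
 * bus-parameter flags (names invented for illustration). */
#define CAP_PCLK_RISING		(1u << 0)
#define CAP_PCLK_FALLING	(1u << 1)
#define CAP_HSYNC_HIGH		(1u << 2)
#define CAP_HSYNC_LOW		(1u << 3)

/* mt9t031 per the text above: both pclk edges, only active-high syncs */
static const unsigned int mt9t031_caps =
	CAP_PCLK_RISING | CAP_PCLK_FALLING | CAP_HSYNC_HIGH;

/* Negotiation: the usable settings are the intersection of what the
 * host and the sensor each advertise; an empty result for a signal
 * means the two chips cannot be connected directly. */
static unsigned int negotiate(unsigned int host_caps,
			      unsigned int sensor_caps)
{
	return host_caps & sensor_caps;
}
```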

 BTW, looking at the code there doesn't seem to be a bus type (probably only 
 Bayer is used), correct? How is the datawidth negotiation done? Is the 
 largest available width chosen? I assume that the datawidth is generally 
 fixed by the host? I don't quite see how that can be negotiated since what 
 is chosen there is linked to how the video data is transferred to memory. 
 E.g., choosing a bus width of 8 or 10 bits can be the difference between 
 transferring 8 bit or 16 bit data (with 6 bits of padding) when the image is 
 transferred to memory. If I were to implement negotiation, I would probably 
 limit it to those one-bit settings and not include bus type and width.

Well, this is indeed hairy. And hardware developers compete in creativity, 
making us invent ever new algorithms :-( I think so far _practically_ I 
have only worked with the following setups:

8 bit parallel with syncs and clocks. Data can be either Bayer or YUV. 
This is most usual in my practice so far.

10 bit parallel bus (PXA270) with syncs and clocks, with an 8 bit sensor 
connected to the most significant lanes... This is achieved by providing an 
additional I2C-controlled switch...

10 bit parallel with a 10 bit sensor, data 0-extended to 16 bit, raw 
Bayer.

15 bit parallel bus (i.MX31) with an 8 bit sensor connected to the least 
significant lanes, because i.MX31 is smart enough to use them correctly.

ATM this is a part of soc-camera pixel format negotiation code, you can 
look at various .set_bus_param() methods in sensor drivers. E.g., look in 
mt9m001 and mt9v022 for drivers, using external bus-width switches...

Now, I do not know how standard these various data packing methods on the 
bus are; I think we will have to give it a good thought when designing an 
API. Comments from developers familiar with various setups are much 
appreciated.
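The two lane hookups listed above (an 8 bit sensor on the most vs. least significant lanes of a wider bus) boil down to a shift. An illustrative sketch, not driver code:

```c
#include <assert.h>
#include <stdint.h>

/* 8-bit sensor hooked to the most significant lanes of a 10-bit bus
 * (the PXA270-style setup): the sample appears shifted left by 2. */
static uint16_t msb_aligned(uint8_t sample)
{
	return (uint16_t)(sample << 2);	/* occupies lanes 2..9 */
}

/* Same sensor on the least significant lanes (the i.MX31-style setup):
 * the sample is used as-is. */
static uint16_t lsb_aligned(uint8_t sample)
{
	return sample;			/* occupies lanes 0..7 */
}
```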

Thanks
Guennadi
---
Guennadi Liakhovetski, Ph.D.
Freelance Open-Source Software Developer
http://www.open-technology.de/


RE: mt9t031 (was RE: [PATCH] adding support for setting bus parameters in sub device)

2009-06-10 Thread Guennadi Liakhovetski
On Wed, 10 Jun 2009, Karicheri, Muralidharan wrote:

 
 
  We need
  streaming capability in the driver. This is how our driver works
  with mt9t031 sensor
              raw-bus (10 bit)
   vpfe-capture  <----  mt9t031 driver
        |                   |
        V                   V
      VPFE               MT9T031
  
   VPFE hardware has an internal timing and DMA controller to
   copy data frame by frame from the sensor output to SDRAM.
   The PCLK from the sensor is used to generate the internal
   timing.
 
 So, what is missing in the driver apart from the ability to specify
 a frame-rate?
 
 [MK] Does the mt9t031 output one frame (snapshot) like in a camera, or 
 can it output frames continuously along with PCLK, Hsync and Vsync 
 signals like in a video streaming device? VPFE capture can accept frames 
 continuously from the sensor synchronized to PCLK, HSYNC and VSYNC and 
 output frames to the application using QBUF/DQBUF. In our implementation, 
 we have timing parameters for the sensor to do streaming at various 
 resolutions and fps. So the application calls S_STD to set these timings. 
 I am not sure if this is an acceptable way of implementing it. Any 
 comments?

Yes, it is streaming.

Thanks
Guennadi
---
Guennadi Liakhovetski, Ph.D.
Freelance Open-Source Software Developer
http://www.open-technology.de/


Re: [PATCH] adding support for setting bus parameters in sub device

2009-06-10 Thread Guennadi Liakhovetski
On Wed, 10 Jun 2009, Hans Verkuil wrote:

 On Wednesday 10 June 2009 23:30:55 Guennadi Liakhovetski wrote:
  On Wed, 10 Jun 2009, Hans Verkuil wrote:
   My view of this would be that the board specification specifies the
   sensor (and possibly other chips) that are on the board. And to me it
   makes sense that that also supplies the bus settings. I agree that it
   is not complex code, but I think it is also unnecessary code. Why
   negotiate if you can just set it?
 
  Why force all platforms to set it if the driver is perfectly capable of
  doing this itself? As I said - this is not a platform-specific feature, it's
  chip-specific. What good would it do to have all platforms using
  mt9t031 specify that yes, the chip can use both falling and rising
  pclk edges, but only active-high vsync and hsync?
 
 ???
 
 You will just tell the chip what to use. So you set 'use falling edge' and 
 either set 'active high vsync/hsync' or just leave that out since you know 
 the mt9t031 has that fixed. You don't specify in the platform data what the 
 chip can support, that's not relevant. You know what the host expects and 
 you pass that information on to the chip.
 
 A board designer knows what the host supports,

No, he doesn't have to. That's not board specific, that's SoC specific.

 knows what the sensor supports,

Ditto, this is sensor-specific, not board-specific.

 and knows if he added any inverters on the board, and based on 
 all that information he can just setup these parameters for the sensor 
 chip. Settings that are fixed on the sensor chip he can just ignore, he 
 only need to specify those settings that the sensor really needs.

Of all the boards that I know of that use soc-camera, only one (supposedly) 
had an inverter on one line, and even that one is not in the mainline. So, 
in the present soc-camera code not a single board has to bother with 
that. And now you want to add _all_ those polarity and master/slave flags 
to _all_ of them? Let me try again:

you have an HSYNC output on the sensor. Its capabilities are known to the 
sensor driver

you have an HSYNC input on the SoC. Its capabilities are known to the 
SoC-specific camera host driver

these two lines are routed on the board to connect either directly or over 
some logic, hopefully, not more complex than an inverter. Now, this is 
_the_ _only_ bit of information, that is specific to the board.
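Guennadi's point reduces to a single board-specific flag. A hypothetical sketch (the flag name is invented for illustration):

```c
#include <assert.h>

/* The only board-specific datum in this model: is the line routed
 * through an inverter on the board? (hypothetical flag) */
#define BOARD_HSYNC_INVERTED	(1u << 0)

/* Apply the board's inversion to the polarity the two drivers agreed
 * on between themselves; 1 means active high. */
static unsigned int effective_hsync_pol(unsigned int agreed_pol,
					unsigned int board_flags)
{
	return (board_flags & BOARD_HSYNC_INVERTED) ? !agreed_pol : agreed_pol;
}
```

Everything else (what polarities the sensor and the SoC each support) stays inside the respective drivers.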

Thanks
Guennadi
---
Guennadi Liakhovetski, Ph.D.
Freelance Open-Source Software Developer
http://www.open-technology.de/


RE: mt9t031 (was RE: [PATCH] adding support for setting bus parameters in sub device)

2009-06-10 Thread Guennadi Liakhovetski
On Wed, 10 Jun 2009, Karicheri, Muralidharan wrote:

 So how do I know what frame-rate I get? Sensor output frame rate depends 
 on the resolution of the frame, blanking, exposure time etc.

This is not supported.

Thanks
Guennadi
---
Guennadi Liakhovetski, Ph.D.
Freelance Open-Source Software Developer
http://www.open-technology.de/


[PATCH] adding support for setting bus parameters in sub device

2009-06-09 Thread m-karicheri2
From: Muralidharan Karicheri a0868...@gt516km11.gt.design.ti.com

This patch adds support for setting bus parameters such as bus type
(BT.656, BT.1120 etc.), width (for example a 10 bit raw image data bus)
and polarities (vsync, hsync, field etc.) in a sub device. This allows
the bridge driver to configure the sub device for a specific set of bus
parameters through the s_bus() function call.

Reviewed-by: Hans Verkuil
Signed-off-by: Muralidharan Karicheri m-kariche...@ti.com
---
Applies to v4l-dvb repository

 include/media/v4l2-subdev.h |   36 ++++++++++++++++++++++++++++++++++++
 1 file changed, 36 insertions(+), 0 deletions(-)

diff --git a/include/media/v4l2-subdev.h b/include/media/v4l2-subdev.h
index 1785608..c1cfb3b 100644
--- a/include/media/v4l2-subdev.h
+++ b/include/media/v4l2-subdev.h
@@ -37,6 +37,41 @@ struct v4l2_decode_vbi_line {
u32 type;   /* VBI service type (V4L2_SLICED_*). 0 if no 
service found */
 };
 
+/*
+ * Some sub-devices are connected to the bridge device through a bus that
+ * carries the clock, vsync, hsync and data. Some interfaces such as BT.656
+ * carry the sync embedded in the data, whereas others have separate lines
+ * carrying the sync signals. The structure below is used by the bridge
+ * driver to set the desired bus parameters in the sub device to work with it.
+ */
+enum v4l2_subdev_bus_type {
+   /* BT.656 interface. Embedded sync */
+   V4L2_SUBDEV_BUS_BT_656,
+   /* BT.1120 interface. Embedded sync */
+   V4L2_SUBDEV_BUS_BT_1120,
+   /* 8 bit muxed YCbCr bus, separate sync and field signals */
+   V4L2_SUBDEV_BUS_YCBCR_8,
+   /* 16 bit YCbCr bus, separate sync and field signals */
+   V4L2_SUBDEV_BUS_YCBCR_16,
+   /* Raw Bayer image data bus, 8-16 bits wide, separate sync signals */
+   V4L2_SUBDEV_BUS_RAW_BAYER
+};
+
+struct v4l2_subdev_bus {
+   enum v4l2_subdev_bus_type type;
+   u8 width;
+   /* 0 - active low, 1 - active high */
+   unsigned pol_vsync:1;
+   /* 0 - active low, 1 - active high */
+   unsigned pol_hsync:1;
+   /* 0 - low to high , 1 - high to low */
+   unsigned pol_field:1;
+   /* 0 - sample at falling edge , 1 - sample at rising edge */
+   unsigned pol_pclock:1;
+   /* 0 - active low , 1 - active high */
+   unsigned pol_data:1;
+};
+
 /* Sub-devices are devices that are connected somehow to the main bridge
device. These devices are usually audio/video muxers/encoders/decoders or
sensors and webcam controllers.
@@ -109,6 +144,7 @@ struct v4l2_subdev_core_ops {
int (*querymenu)(struct v4l2_subdev *sd, struct v4l2_querymenu *qm);
int (*s_std)(struct v4l2_subdev *sd, v4l2_std_id norm);
long (*ioctl)(struct v4l2_subdev *sd, unsigned int cmd, void *arg);
+   int (*s_bus)(struct v4l2_subdev *sd, struct v4l2_subdev_bus *bus);
 #ifdef CONFIG_VIDEO_ADV_DEBUG
int (*g_register)(struct v4l2_subdev *sd, struct v4l2_dbg_register 
*reg);
int (*s_register)(struct v4l2_subdev *sd, struct v4l2_dbg_register 
*reg);
-- 
1.6.0.4
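For illustration, here is how a bridge driver might exercise the proposed s_bus() op. This is a self-contained user-space mock of the structures from the patch, with a hypothetical sensor-side check standing in for a real driver; it is not kernel code:

```c
#include <assert.h>

/* User-space mirror of the enum/struct proposed in the patch above */
enum v4l2_subdev_bus_type {
	V4L2_SUBDEV_BUS_BT_656,
	V4L2_SUBDEV_BUS_BT_1120,
	V4L2_SUBDEV_BUS_YCBCR_8,
	V4L2_SUBDEV_BUS_YCBCR_16,
	V4L2_SUBDEV_BUS_RAW_BAYER
};

struct v4l2_subdev_bus {
	enum v4l2_subdev_bus_type type;
	unsigned char width;
	unsigned pol_vsync:1;
	unsigned pol_hsync:1;
	unsigned pol_field:1;
	unsigned pol_pclock:1;
	unsigned pol_data:1;
};

/* Hypothetical sensor-side s_bus(): accept only 10-bit raw Bayer,
 * reject anything else (this would be -EINVAL in the kernel). */
static int mt9t031_s_bus(const struct v4l2_subdev_bus *bus)
{
	if (bus->type != V4L2_SUBDEV_BUS_RAW_BAYER || bus->width != 10)
		return -1;
	return 0;
}
```

A bridge driver such as vpfe-capture would fill the struct from its own requirements (and any board-level inverter flags) and call the op once at setup time.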
