RE: Mem2Mem V4L2 devices [RFC]

2009-10-05 Thread Hiremath, Vaibhav

 -----Original Message-----
 From: linux-media-ow...@vger.kernel.org [mailto:linux-media-
 ow...@vger.kernel.org] On Behalf Of Ivan T. Ivanov
 Sent: Friday, October 02, 2009 9:55 PM
 To: Marek Szyprowski
 Cc: linux-media@vger.kernel.org; kyungmin.p...@samsung.com; Tomasz
 Fujak; Pawel Osciak
 Subject: Re: Mem2Mem V4L2 devices [RFC]
 
 
 Hi Marek,
 
 
 On Fri, 2009-10-02 at 13:45 +0200, Marek Szyprowski wrote:
  Hello,
 
snip

  image format and size, while the existing v4l2 ioctls would only
 refer
  to the output buffer. Frankly speaking, we don't like this idea.
 
 I think it is not unusual for one video device to declare that it can
 support input and output operation at the same time.
 
 Let's take a resizer device as an example. It can always inform the
 user space application that
 
 struct v4l2_capability.capabilities ==
   (V4L2_CAP_VIDEO_CAPTURE | V4L2_CAP_VIDEO_OUTPUT)
 
 The user can issue an S_FMT ioctl supplying
 
 struct v4l2_format.type = V4L2_BUF_TYPE_VIDEO_CAPTURE
 .pix  = width x height
 
 which will instruct this device to prepare its output for this
 resolution. After that the user can issue an S_FMT ioctl supplying
 
 struct v4l2_format.type = V4L2_BUF_TYPE_VIDEO_OUTPUT
 .pix  = width x height
 
 Using only these ioctls should be enough for the device driver
 to know the required down/up scale factor.
 
 Regarding color space, struct v4l2_pix_format has a 'pixelformat'
 field which can be used to define the content of the input and output
 buffers. So using only existing ioctls the user can have a working
 resizer device.
 
 Also please note that there is VIDIOC_S_CROP, which can add the
 flexibility of cropping on input or output.
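
A minimal user-space sketch of the configuration sequence described above
(hypothetical resolutions and pixel format; error handling trimmed):

#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* Configure a hypothetical one-node resizer: 1280x960 in, 640x480 out. */
static int configure_resizer(int fd)
{
	struct v4l2_format fmt;

	memset(&fmt, 0, sizeof(fmt));
	fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;	/* the device's output */
	fmt.fmt.pix.width = 640;
	fmt.fmt.pix.height = 480;
	fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_YUYV;
	if (ioctl(fd, VIDIOC_S_FMT, &fmt) < 0)
		return -1;

	fmt.type = V4L2_BUF_TYPE_VIDEO_OUTPUT;	/* the source image */
	fmt.fmt.pix.width = 1280;
	fmt.fmt.pix.height = 960;
	if (ioctl(fd, VIDIOC_S_FMT, &fmt) < 0)
		return -1;

	/* The driver can now derive the required 2:1 downscale factor. */
	return 0;
}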
 
[Hiremath, Vaibhav] I think this makes more sense in a capture pipeline, for 
example,

Sensor/decoder -> previewer -> resizer -> /dev/videoX


 The last thing which should be done is to QBUF 2 buffers and call
 STREAMON.
 
[Hiremath, Vaibhav] IMO, this implementation is not a streaming model; we are 
trying to force mem-to-mem into streaming. We would have to put some 
constraints - 

- The driver will always treat index 0 as input, irrespective of the number of 
buffers queued.
- Or, the application should not queue more than 2 buffers.
- Multi-channel use-case
- Multi-channel use-case

I think we have to have 2 device nodes which are capable of streaming multiple 
buffers, both of them queuing buffers. The constraint would be that the buffers 
must be mapped one-to-one.

A user layer library would play a major role here in supporting the 
multi-channel feature. I think we need to do some more investigation on this.

Thanks,
Vaibhav

 I think this will simplify buffer synchronization a lot.
 
 iivanov
 
 
 
  2. Input and output in the same video node would not be compatible
 with
  the upcoming media controller, with which we will get an ability
 to
  arrange devices into a custom pipeline. Piping together two
 separate
  input-output nodes to create a new mem2mem device would be
 difficult and
  unintuitive. And that's not even considering multi-output devices.
 
  My idea is to get back to the 2 video nodes per device approach
 and
  introduce a new ioctl for matching input and output instances of
 the
  same device. When such an ioctl could be called is another
 question. I
  like the idea of restricting such a call to be issued after
 opening
  video nodes and before using them. Using this ioctl, a user
 application
  would be able to match output instance to an input one, by
 matching
  their corresponding file descriptors.
 
  What do you think of such a solution?
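
For illustration only, such a matching call might look roughly like the
sketch below. VIDIOC_MATCH_OUTPUT is an invented name in a private ioctl
slot; no such ioctl exists in V4L2 today:

#include <fcntl.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* Hypothetical ioctl number, for this sketch only (private range). */
#define VIDIOC_MATCH_OUTPUT _IOW('V', BASE_VIDIOC_PRIVATE + 0, int)

static int open_mem2mem_pair(void)
{
	int in_fd = open("/dev/video0", O_RDWR);	/* input node */
	int out_fd = open("/dev/video1", O_RDWR);	/* output node */

	/* Tell the driver that these two open instances form one
	 * transaction context, before any buffers are queued. */
	if (ioctl(in_fd, VIDIOC_MATCH_OUTPUT, &out_fd) < 0)
		return -1;
	return in_fd;
}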
 
  Best regards
  --
  Marek Szyprowski
  Samsung Poland R&D Center
 
 


Re: pxa_camera + mt9m111: Failed to configure for format 50323234

2009-10-05 Thread Stefan Herbrechtsmeier

Antonio Ospite wrote:

On Sun, 4 Oct 2009 00:31:24 +0200 (CEST)
Guennadi Liakhovetski g.liakhovet...@gmx.de wrote:

  

On Sat, 3 Oct 2009, Antonio Ospite wrote:



[...]
  

Anyways your patch works, but the picture is now shifted, see:
http://people.openezx.org/ao2/a780-pxa-camera-mt9m111-shifted.jpg

Is this because of the new cropping code?
  
Hm, it shouldn't be. Does it always look like this - is it reproducible? What 
program are you using? What about other geometry configurations? Have you 
ever seen this with previous kernel versions? New cropping - neither 
mplayer nor gstreamer use cropping normally. This seems more like an HSYNC 
problem to me. Double-check the platform data? Is it a mioa701 or some custom 
board?





It seemed to be reproducible yesterday, but I can't get it today, maybe
it happens in low battery conditions. I am using capture-example.c from
v4l2-apps. Never seen before. I am testing this on a Motorola A780,
the soc-camera platform code is not in mainline yet.
  

Only for your information. Maybe it helps to reproduce the error.

I have had the same problem with my own ov9655 driver on a pxa platform 
since I updated to kernel 2.6.30 and added crop support. On every first open 
of the camera after a system reset the image looks like yours. If I use the 
camera the next time without changing the resolution, everything is OK. Only 
during the first open is the resolution of the camera changed, and the set_fmt 
function in the ov9655 driver is called twice. I use the camera with my own 
program and it doesn't use crop.


Regards,
   Stefan


Re: [PATCH] pac_common: redesign function for finding Start Of Frame

2009-10-05 Thread Hans de Goede

Hi,

Good one,

Acked-by: Hans de Goede hdego...@redhat.com

Jean-Francois, can you please add this patch to your tree?

Thanks,

Hans


On 10/04/2009 10:55 PM, Németh Márton wrote:

From: Márton Németh nm...@freemail.hu

The original implementation of pac_find_sof() does not always find
the Start Of Frame (SOF) marker. Replace it with a state machine
based design.

The change was tested with Labtec Webcam 2200.

Signed-off-by: Márton Németh nm...@freemail.hu
---
--- linux-2.6.32-rc1.orig/drivers/media/video/gspca/pac_common.h
2009-09-10 00:13:59.000000000 +0200
+++ linux-2.6.32-rc1/drivers/media/video/gspca/pac_common.h 2009-10-04 
21:49:19.000000000 +0200
@@ -33,6 +33,45 @@
  static const unsigned char pac_sof_marker[5] =
{ 0xff, 0xff, 0x00, 0xff, 0x96 };

+/*
+   The following state machine finds the SOF marker sequence
+   0xff, 0xff, 0x00, 0xff, 0x96 in a byte stream:
+
+     state 0 (START): 0xff -> state 1,  otherwise -> state 0
+     state 1:         0xff -> state 2,  otherwise -> state 0
+     state 2:         0x00 -> state 3,  0xff -> state 2 (stay),
+                      otherwise -> state 0
+     state 3:         0xff -> state 4,  otherwise -> state 0
+     state 4:         0x96 -> FOUND,    0xff -> state 2,
+                      otherwise -> state 0
+*/
+
  static unsigned char *pac_find_sof(struct gspca_dev *gspca_dev,
		unsigned char *m, int len)
  {
@@ -41,17 +80,54 @@ static unsigned char *pac_find_sof(struc
 
 	/* Search for the SOF marker (fixed part) in the header */
 	for (i = 0; i < len; i++) {
-		if (m[i] == pac_sof_marker[sd->sof_read]) {
-			sd->sof_read++;
-			if (sd->sof_read == sizeof(pac_sof_marker)) {
+		switch (sd->sof_read) {
+		case 0:
+			if (m[i] == 0xff)
+				sd->sof_read = 1;
+			break;
+		case 1:
+			if (m[i] == 0xff)
+				sd->sof_read = 2;
+			else
+				sd->sof_read = 0;
+			break;
+		case 2:
+			switch (m[i]) {
+			case 0x00:
+				sd->sof_read = 3;
+				break;
+			case 0xff:
+				/* stay in this state */
+				break;
+			default:
+				sd->sof_read = 0;
+			}
+			break;
+		case 3:
+			if (m[i] == 0xff)
+				sd->sof_read = 4;
+			else
+				sd->sof_read = 0;
+			break;
+		case 4:
+			switch (m[i]) {
+			case 0x96:
+				/* Pattern found */
 				PDEBUG(D_FRAM,
 					"SOF found, bytes to analyze: %u."
 					" Frame starts at byte #%u",
 					len, i + 1);
 				sd->sof_read = 0;
 				return m + i + 1;
+				break;
+			case 0xff:
+				sd->sof_read = 2;
+				break;
+			default:
+				sd->sof_read = 0;
 			}
-		} else {
+			break;
+		default:
 			sd->sof_read = 0;
 		}
 	}

dib3000mb dvb-t with kernel 2.6.32-rc3 does not work

2009-10-05 Thread Mario Bachmann
Hi there, 

with kernel 2.6.30.8 my TwinhanDTV USB-Ter USB1.1 / Magic Box I
worked. 

Now with kernel 2.6.32-rc3 (and 2.6.31.1) the modules seem to be
loaded fine, but tzap/kaffeine/mplayer cannot tune to a channel:

dmesg says:
dvb-usb: found a 'TwinhanDTV USB-Ter USB1.1 / Magic Box I / HAMA USB1.1 DVB-T 
device' in warm state.
dvb-usb: will use the device's hardware PID filter (table count: 16).
DVB: registering new adapter (TwinhanDTV USB-Ter USB1.1 / Magic Box I / HAMA 
USB1.1 DVB-T device)
DVB: registering adapter 0 frontend 0 (DiBcom 3000M-B DVB-T)...
dibusb: This device has the Thomson Cable onboard. Which is default.
input: IR-receiver inside an USB DVB receiver as 
/devices/pci0000:00/0000:00:04.0/usb4/4-2/input/input5
dvb-usb: schedule remote query interval to 150 msecs.
dvb-usb: TwinhanDTV USB-Ter USB1.1 / Magic Box I / HAMA USB1.1 DVB-T device 
successfully initialized and connected.
usbcore: registered new interface driver dvb_usb_dibusb_mb

grep DVB .config says (no changes between 2.6.30.8 and 2.6.32-rc3):
CONFIG_DVB_CORE=m
CONFIG_DVB_MAX_ADAPTERS=8
CONFIG_DVB_CAPTURE_DRIVERS=y
CONFIG_DVB_USB=m
CONFIG_DVB_USB_DIBUSB_MB=m
CONFIG_DVB_DIB3000MB=m
CONFIG_DVB_PLL=m

lsmod |grep dvb
dvb_usb_dibusb_mb  16715  0 
dvb_usb_dibusb_common 3559  1 dvb_usb_dibusb_mb
dvb_pll 8604  1 dvb_usb_dibusb_mb
dib3000mb  10969  1 dvb_usb_dibusb_mb
dvb_usb13737  2 dvb_usb_dibusb_mb,dvb_usb_dibusb_common
dvb_core   85727  1 dvb_usb

tzap arte -r
using '/dev/dvb/adapter0/frontend0' and '/dev/dvb/adapter0/demux0'
reading channels from file '/home/grafrotz/.tzap/channels.conf'
tuning to 602000000 Hz
video pid 0x00c9, audio pid 0x00ca
status 00 | signal 0000 | snr 0000 | ber 0000001f | unc 00000000 | 
status 00 | signal 0000 | snr 0000 | ber 0000001f | unc 00000000 | 
status 04 | signal 0000 | snr 0000 | ber 0000001f | unc 00000000 | 
status 04 | signal 0000 | snr 0000 | ber 0000001f | unc 00000000 | 
status 04 | signal 0000 | snr 0000 | ber 0000001f | unc 00000000 | 
status 04 | signal 00b2 | snr 0000 | ber 0000001f | unc 00000000 | 
status 04 | signal 0000 | snr 0000 | ber 0000001f | unc 00000000 | 
status 04 | signal 0000 | snr 0000 | ber 0000001f | unc 00000000 | 
status 04 | signal 0000 | snr 0000 | ber 0000001f | unc 00000000 | 
status 04 | signal 0000 | snr 0000 | ber 0000001f | unc 00000000 | 
status 04 | signal 0000 | snr 0000 | ber 0000001f | unc 00000000 | 

and so on. The signal values are zero or near zero, but when I boot the old 
kernel 2.6.30.8, it can tune without problems. 

kaffeine DVB says:
Using DVB device 0:0 DiBcom 3000M-B DVB-T
tuning DVB-T to 602000000 Hz
inv:2 bw:0 fecH:2 fecL:9 mod:1 tm:1 gi:3 hier:0


Not able to lock to the signal on the given frequency
Frontend closed
Tuning delay: 2611 ms

mplayer dvb://arte   says:
MPlayer SVN-r29699-4.4.1 (C) 2000-2009 MPlayer Team

Playing dvb://arte.
dvb_tune Freq: 602000000
Not able to lock to the signal on the given frequency, timeout: 30
dvb_tune, TUNING FAILED
ERROR, COULDN'T SET CHANNEL  13: Could not open 'dvb://arte'.


Exiting... (end of file reached)


Greetings
Mario


[PATCH] fix use-after-free Oops, resulting from a driver-core API change

2009-10-05 Thread Guennadi Liakhovetski
Commit b4028437876866aba4747a655ede00f892089e14 has again broken the re-use of 
device objects across device_register() / device_unregister() cycles. Fix 
soc-camera by zeroing the struct after device_unregister().

Signed-off-by: Guennadi Liakhovetski g.liakhovet...@gmx.de
---
diff --git a/drivers/media/video/soc_camera.c b/drivers/media/video/soc_camera.c
index 59aa7a3..36e617b 100644
--- a/drivers/media/video/soc_camera.c
+++ b/drivers/media/video/soc_camera.c
@@ -1160,13 +1160,15 @@ void soc_camera_host_unregister(struct soc_camera_host *ici)
 		if (icd->iface == ici->nr) {
 			/* The bus->remove will be called */
 			device_unregister(&icd->dev);
-			/* Not before device_unregister(), .remove
-			 * needs parent to call ici->ops->remove() */
-			icd->dev.parent = NULL;
-
-			/* If the host module is loaded again, device_register()
-			 * would complain "already initialised" */
-			memset(&icd->dev.kobj, 0, sizeof(icd->dev.kobj));
+			/*
+			 * Not before device_unregister(), .remove
+			 * needs parent to call ici->ops->remove().
+			 * If the host module is loaded again, device_register()
+			 * would complain "already initialised", since 2.6.32
+			 * this is also needed to prevent use-after-free of the
+			 * device private data.
+			 */
+			memset(&icd->dev, 0, sizeof(icd->dev));
 		}
 	}
 


Re: [REVIEW] ivtv, ir-kbd-i2c: Explicit IR support for the AVerTV M116 for newer kernels

2009-10-05 Thread Jean Delvare
On Sun, 04 Oct 2009 21:54:37 -0400, Andy Walls wrote:
 On Mon, 2009-10-05 at 01:23 +0300, Aleksandr V. Piskunov wrote:
  So the solution(?) I found was to decrease the udelay in
  ivtv_i2c_algo_template from 10 to 5. Guess it just doubles the frequency
  of ivtv i2c bus or something like that. Problem went away, IR controller
  is now working as expected.
 
 That's a long standing error in the ivtv driver.  It ran the I2C bus at
 1/(2*10 usec) = 50 kHz instead of the standard 100 kHz.
 
 Technically any I2C device should be able to handle clock rates down to
 about DC IIRC; so there must be a bug in the IR microcontroller
 implementation.
 
 Also the CX23416 errantly marks its PCI register space as cacheable
 which is probably wrong (see lspci output).  This may also be
 interfering with proper I2C operation with i2c_algo_bit depending on the
 PCI bridges in your system.
 
  
  So question is:
  1) Is it ok to decrease udelay for this board?
 
 Sure, I think.  It would actually run the ivtv I2C bus at the nominal
 clock rate specified by the I2C specification.

FWIW, 100 kHz isn't the nominal I2C clock rate, but the maximum clock
rate for normal I2C. It is perfectly valid to run I2C buses at lower
clock frequencies. I don't even think there is a minimum for I2C (but
there is a minimum of 10 kHz for SMBus.)

But of course different hardware implementations may not fully cover
the standard I2C or SMBus frequency range, and it is possible that a TV
adapter manufacturer designed its hardware to run the I2C bus at a
fixed frequency and we have to use that frequency to make the adapter
happy.

 I never had any reason to change it, as I feared causing regressions in
 many well tested boards.

This is a possibility, indeed. But for obvious performance reasons, I'd
rather use 100 kHz as the default, and let boards override it with a
lower frequency of their choice if needed. This provides an
easy improvement path, where each board can be tested separately and its
I2C bus frequency bumped from 50 kHz to 100 kHz after some good testing.

Some boards might even support fast I2C, up to 400 kHz but limited to
250 kHz by the i2c-algo-bit implementation.

-- 
Jean Delvare


Re: [REVIEW] ivtv, ir-kbd-i2c: Explicit IR support for the AVerTV M116 for newer kernels

2009-10-05 Thread Aleksandr V. Piskunov
  Basically, during the I2C operation that reads the scancode, the controller
  seems to stop processing input from the IR sensor, resulting in a loss of
  keypresses.
  
  So the solution(?) I found was to decrease the udelay in
  ivtv_i2c_algo_template from 10 to 5. Guess it just doubles the frequency
  of ivtv i2c bus or something like that. Problem went away, IR controller
  is now working as expected.
 
 That's a long standing error in the ivtv driver.  It ran the I2C bus at
 1/(2*10 usec) = 50 kHz instead of the standard 100 kHz.
 
 Technically any I2C device should be able to handle clock rates down to
 about DC IIRC; so there must be a bug in the IR microcontroller
 implementation.
 
 Also the CX23416 errantly marks its PCI register space as cacheable
 which is probably wrong (see lspci output).  This may also be
 interfering with proper I2C operation with i2c_algo_bit depending on the
 PCI bridges in your system.
 
  
  So question is:
  1) Is it ok to decrease udelay for this board?
 
 Sure, I think.  It would actually run the ivtv I2C bus at the nominal
 clock rate specified by the I2C specification.
 
 I never had any reason to change it, as I feared causing regressions in
 many well tested boards.
 
 
  2) If yes, how to do it right?
 
 Try:
 
 # modprobe ivtv newi2c=1
 
 to see if that works first. 
 

udelay=10, newi2c=0  => BAD
udelay=10, newi2c=1  => BAD
udelay=5,  newi2c=0  => OK
udelay=5,  newi2c=1  => BAD


newi2c=1 also throws some log messages, not sure if it's ok or not.

Oct  5 11:41:16 moon kernel: [45430.916449] ivtv: Start initialization, version 
1.4.1
Oct  5 11:41:16 moon kernel: [45430.916618] ivtv0: Initializing card 0
Oct  5 11:41:16 moon kernel: [45430.916628] ivtv0: Autodetected AVerTV MCE 116 
Plus card (cx23416 based)
Oct  5 11:41:16 moon kernel: [45430.918887] ivtv 0000:03:06.0: PCI INT A -> GSI 
20 (level, low) -> IRQ 20
Oct  5 11:41:16 moon kernel: [45430.919229] ivtv0:  i2c: i2c init
Oct  5 11:41:16 moon kernel: [45430.919234] ivtv0:  i2c: setting scl and sda to 
1
Oct  5 11:41:16 moon kernel: [45430.937745] cx25840 0-0044: cx25843-23 found @ 
0x88 (ivtv i2c driver #0)
Oct  5 11:41:16 moon kernel: [45430.949145] ivtv0:  i2c: Slave did not ack
Oct  5 11:41:16 moon kernel: [45430.951628] ivtv0:  i2c: Slave did not ack
Oct  5 11:41:16 moon kernel: [45430.954191] ivtv0:  i2c: Slave did not ack
Oct  5 11:41:16 moon kernel: [45430.956724] ivtv0:  i2c: Slave did not ack
Oct  5 11:41:16 moon kernel: [45430.959211] ivtv0:  i2c: Slave did not ack
Oct  5 11:41:16 moon kernel: [45430.961749] ivtv0:  i2c: Slave did not ack
Oct  5 11:41:16 moon kernel: [45430.964236] ivtv0:  i2c: Slave did not ack
Oct  5 11:41:16 moon kernel: [45430.966722] ivtv0:  i2c: Slave did not ack
Oct  5 11:41:16 moon kernel: [45430.966786] ivtv0:  i2c: i2c write to 43 failed
Oct  5 11:41:16 moon kernel: [45430.971106] tuner 0-0061: chip found @ 0xc2 
(ivtv i2c driver #0)
Oct  5 11:41:16 moon kernel: [45430.974404] wm8739 0-001a: chip found @ 0x34 
(ivtv i2c driver #0)
Oct  5 11:41:16 moon kernel: [45430.986328] ivtv0:  i2c: Slave did not ack
Oct  5 11:41:16 moon kernel: [45430.988871] ivtv0:  i2c: Slave did not ack
Oct  5 11:41:16 moon kernel: [45430.991355] ivtv0:  i2c: Slave did not ack
Oct  5 11:41:16 moon kernel: [45430.993904] ivtv0:  i2c: Slave did not ack
Oct  5 11:41:16 moon kernel: [45430.996427] ivtv0:  i2c: Slave did not ack
Oct  5 11:41:16 moon kernel: [45430.998938] ivtv0:  i2c: Slave did not ack
Oct  5 11:41:16 moon kernel: [45431.001477] ivtv0:  i2c: Slave did not ack
Oct  5 11:41:16 moon kernel: [45431.003968] ivtv0:  i2c: Slave did not ack
Oct  5 11:41:16 moon kernel: [45431.004053] ivtv0:  i2c: i2c write to 18 failed
Oct  5 11:41:16 moon kernel: [45431.011333] ivtv0:  i2c: Slave did not ack
Oct  5 11:41:16 moon kernel: [45431.013883] ivtv0:  i2c: Slave did not ack
Oct  5 11:41:16 moon kernel: [45431.016418] ivtv0:  i2c: Slave did not ack
Oct  5 11:41:16 moon kernel: [45431.018911] ivtv0:  i2c: Slave did not ack
Oct  5 11:41:16 moon kernel: [45431.021463] ivtv0:  i2c: Slave did not ack
Oct  5 11:41:16 moon kernel: [45431.023937] ivtv0:  i2c: Slave did not ack
Oct  5 11:41:16 moon kernel: [45431.026478] ivtv0:  i2c: Slave did not ack
Oct  5 11:41:16 moon kernel: [45431.028998] ivtv0:  i2c: Slave did not ack
Oct  5 11:41:16 moon kernel: [45431.029063] ivtv0:  i2c: i2c write to 71 failed
Oct  5 11:41:16 moon kernel: [45431.031468] ivtv0:  i2c: Slave did not ack




Re: [REVIEW] ivtv, ir-kbd-i2c: Explicit IR support for the AVerTV M116 for newer kernels

2009-10-05 Thread Jean Delvare
On Mon, 5 Oct 2009 11:50:31 +0300, Aleksandr V. Piskunov wrote:
  Try:
  
  # modprobe ivtv newi2c=1
  
  to see if that works first. 
  
 
 udelay=10, newi2c=0  => BAD
 udelay=10, newi2c=1  => BAD
 udelay=5,  newi2c=0  => OK
 udelay=5,  newi2c=1  => BAD

The udelay value is only used by i2c-algo-bit, not newi2c, so the last
test was not needed.

 newi2c=1 also throws some log messages, not sure if it's ok or not.
 
 Oct  5 11:41:16 moon kernel: [45430.916449] ivtv: Start initialization, 
 version 1.4.1
 Oct  5 11:41:16 moon kernel: [45430.916618] ivtv0: Initializing card 0
 Oct  5 11:41:16 moon kernel: [45430.916628] ivtv0: Autodetected AVerTV MCE 
 116 Plus card (cx23416 based)
 Oct  5 11:41:16 moon kernel: [45430.918887] ivtv 0000:03:06.0: PCI INT A -> 
 GSI 20 (level, low) -> IRQ 20
 Oct  5 11:41:16 moon kernel: [45430.919229] ivtv0:  i2c: i2c init
 Oct  5 11:41:16 moon kernel: [45430.919234] ivtv0:  i2c: setting scl and sda 
 to 1
 Oct  5 11:41:16 moon kernel: [45430.937745] cx25840 0-0044: cx25843-23 found 
 @ 0x88 (ivtv i2c driver #0)
 Oct  5 11:41:16 moon kernel: [45430.949145] ivtv0:  i2c: Slave did not ack
 Oct  5 11:41:16 moon kernel: [45430.951628] ivtv0:  i2c: Slave did not ack
 Oct  5 11:41:16 moon kernel: [45430.954191] ivtv0:  i2c: Slave did not ack
 Oct  5 11:41:16 moon kernel: [45430.956724] ivtv0:  i2c: Slave did not ack
 Oct  5 11:41:16 moon kernel: [45430.959211] ivtv0:  i2c: Slave did not ack
 Oct  5 11:41:16 moon kernel: [45430.961749] ivtv0:  i2c: Slave did not ack
 Oct  5 11:41:16 moon kernel: [45430.964236] ivtv0:  i2c: Slave did not ack
 Oct  5 11:41:16 moon kernel: [45430.966722] ivtv0:  i2c: Slave did not ack
 Oct  5 11:41:16 moon kernel: [45430.966786] ivtv0:  i2c: i2c write to 43 
 failed
 Oct  5 11:41:16 moon kernel: [45430.971106] tuner 0-0061: chip found @ 0xc2 
 (ivtv i2c driver #0)
 Oct  5 11:41:16 moon kernel: [45430.974404] wm8739 0-001a: chip found @ 0x34 
 (ivtv i2c driver #0)
 Oct  5 11:41:16 moon kernel: [45430.986328] ivtv0:  i2c: Slave did not ack
 Oct  5 11:41:16 moon kernel: [45430.988871] ivtv0:  i2c: Slave did not ack
 Oct  5 11:41:16 moon kernel: [45430.991355] ivtv0:  i2c: Slave did not ack
 Oct  5 11:41:16 moon kernel: [45430.993904] ivtv0:  i2c: Slave did not ack
 Oct  5 11:41:16 moon kernel: [45430.996427] ivtv0:  i2c: Slave did not ack
 Oct  5 11:41:16 moon kernel: [45430.998938] ivtv0:  i2c: Slave did not ack
 Oct  5 11:41:16 moon kernel: [45431.001477] ivtv0:  i2c: Slave did not ack
 Oct  5 11:41:16 moon kernel: [45431.003968] ivtv0:  i2c: Slave did not ack
 Oct  5 11:41:16 moon kernel: [45431.004053] ivtv0:  i2c: i2c write to 18 
 failed
 Oct  5 11:41:16 moon kernel: [45431.011333] ivtv0:  i2c: Slave did not ack
 Oct  5 11:41:16 moon kernel: [45431.013883] ivtv0:  i2c: Slave did not ack
 Oct  5 11:41:16 moon kernel: [45431.016418] ivtv0:  i2c: Slave did not ack
 Oct  5 11:41:16 moon kernel: [45431.018911] ivtv0:  i2c: Slave did not ack
 Oct  5 11:41:16 moon kernel: [45431.021463] ivtv0:  i2c: Slave did not ack
 Oct  5 11:41:16 moon kernel: [45431.023937] ivtv0:  i2c: Slave did not ack
 Oct  5 11:41:16 moon kernel: [45431.026478] ivtv0:  i2c: Slave did not ack
 Oct  5 11:41:16 moon kernel: [45431.028998] ivtv0:  i2c: Slave did not ack
 Oct  5 11:41:16 moon kernel: [45431.029063] ivtv0:  i2c: i2c write to 71 
 failed
 Oct  5 11:41:16 moon kernel: [45431.031468] ivtv0:  i2c: Slave did not ack
 

That would be I2C probe attempts such as the ones done by ir-kbd-i2c.
Nothing to be afraid of.

-- 
Jean Delvare


Re: [REVIEW] ivtv, ir-kbd-i2c: Explicit IR support for the AVerTV M116 for newer kernels

2009-10-05 Thread Aleksandr V. Piskunov
On Mon, Oct 05, 2009 at 11:04:02AM +0200, Jean Delvare wrote:
 On Mon, 5 Oct 2009 11:50:31 +0300, Aleksandr V. Piskunov wrote:
   Try:
   
   # modprobe ivtv newi2c=1
   
   to see if that works first. 
   
  
  udelay=10, newi2c=0  => BAD
  udelay=10, newi2c=1  => BAD
  udelay=5,  newi2c=0  => OK
  udelay=5,  newi2c=1  => BAD
 
 The udelay value is only used by i2c-algo-bit, not newi2c, so the last
 test was not needed.
 

Yup, I also tried udelay=4; the IR controller handles it without problems,
though the cx25840 and xc2028 don't seem to like the 125 kHz frequency,
refusing to communicate. The xc2028 even stopped responding, requiring a cold
reboot.

So for the M116 board, the most stable combination seems to be a 100 kHz i2c bus
and a 150 ms polling delay (up from the 100 ms default). With this combination
I can quickly press 1234567890 on the remote and the driver gets the combination
without any losses.


Re: [REVIEW] ivtv, ir-kbd-i2c: Explicit IR support for the AVerTV M116 for newer kernels

2009-10-05 Thread Andy Walls
On Mon, 2009-10-05 at 10:29 +0200, Jean Delvare wrote:
 On Sun, 04 Oct 2009 21:54:37 -0400, Andy Walls wrote:
  On Mon, 2009-10-05 at 01:23 +0300, Aleksandr V. Piskunov wrote:

   
   So question is:
   1) Is it ok to decrease udelay for this board?
  
  Sure, I think.  It would actually run the ivtv I2C bus at the nominal
  clock rate specified by the I2C specification.
 
 FWIW, 100 kHz isn't the nominal I2C clock rate, but the maximum clock
 rate for normal I2C. It is perfectly valid to run I2C buses as lower
 clock frequencies. I don't even think there is a minimum for I2C (but
 there is a minimum of 10 kHz for SMBus.)

Ah, thanks.  I was too lazy to go read my copy of the spec.


 But of course different hardware implementations may not fully cover
 the standard I2C or SMBus frequency range, and it is possible that a TV
 adapter manufacturer designed its hardware to run the I2C bus at a
 fixed frequency and we have to use that frequency to make the adapter
 happy.

This is very plausible for a microcontroller implementation of an I2C
slave, which is the case here.


  I never had any reason to change it, as I feared causing regressions in
  many well tested boards.
 
 This is a possibility, indeed. But for obvious performance reasons, I'd
 rather use 100 kHz as the default, and let boards override it with a
 lower frequency of their choice if needed. Obviously this provides an
 easy improvement path, where each board can be tested separately and
 I2C bus frequency bumped from 50 kHz to 100 kHz after some good testing.
 
 Some boards might even support fast I2C, up to 400 kHz but limited to
 250 kHz by the i2c-algo-bit implementation.

I can add a module option to ivtv for I2C clock rate.  It may take a few
days.
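
A rough sketch of what such an option could look like (the parameter name is
invented here; ivtv would really derive the bus speed from the udelay value it
hands to i2c-algo-bit):

#include <linux/module.h>

/* Hypothetical module parameter: I2C bus clock in kHz, default 100. */
static int i2c_clock_khz = 100;
module_param(i2c_clock_khz, int, 0444);
MODULE_PARM_DESC(i2c_clock_khz, "I2C bus clock in kHz (default: 100)");

/*
 * i2c-algo-bit holds each SCL half-period for 'udelay' microseconds,
 * so f = 1 / (2 * udelay): 5 us -> 100 kHz, 10 us -> 50 kHz.
 */
static int ivtv_i2c_udelay(void)
{
	return 500 / i2c_clock_khz;
}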

Regards,
Andy



writing to dvr0 for playback

2009-10-05 Thread phil

Hi,

I'm currently trying to replay a transport stream from a file, having 
read through the v3 API docs and this mailing list I'm fairly certain I 
have a good understanding of how to do this. I am however using the 
test_dvr_play test program from the dvb-apps suite rather than writing 
my own code, I have the latest version of dvb-apps from hg as of today.


The dvb hardware which I'm using is a Hauppauge Nova-T usb stick version
3, so that's the DIB7070p tuner, and I'm using it with the 2.6.31 kernel
from kernel.org. I've got that working perfectly fine, I can watch tv,
stream to disk and stream a multiplex to disk without issue. The problem
is that if I try to open dvr0 for writing then I get an Error 22
(Invalid Argument). I've looked through the list archives and I've found
similar issues before with no resolution, with this
http://www.linuxtv.org/pipermail/linux-dvb/2008-June/026661.html being
the most recent and most comprehensive I think. This error happens if I
try to cat a ts file into dvr0 or if I run test_dvr_play as follows:

# DVR=/dev/dvb/adapter0/dvr0  DEMUX=/dev/dvb/adapter0/demux0 \
./test_dvr_play /srv/nfs/dave.ts 0x191 0x192
Playing '/srv/nfs/dave.ts', video PID 0x0191, audio PID 0x0192
Failed to open '/dev/dvb/adapter0/dvr0': 22 Invalid argument

I've looked into the test_dvr_play source and it is trying to open dvr0
for writing:

if ((dvrfd = open(dvrdev, O_WRONLY)) == -1) {


Now I've looked into the driver code and this appears to be an issue in
drivers/media/dvb/dvb-core/dmxdev.c, specifically in the dvb_dvr_open
routine. From following the code through I've determined that it's
failing because it can't get a frontend (i.e. dvbdemux->frontend_list is
empty) when it calls get_fe (line 169 of dmxdev.c) in the following
section of code:


	if ((file->f_flags & O_ACCMODE) == O_WRONLY) {
		dmxdev->dvr_orig_fe = dmxdev->demux->frontend;

		if (!dmxdev->demux->write) {
			mutex_unlock(&dmxdev->mutex);
			return -EOPNOTSUPP;
		}

		front = get_fe(dmxdev->demux, DMX_MEMORY_FE);

		if (!front) {
			mutex_unlock(&dmxdev->mutex);
			return -EINVAL;
		}
		dmxdev->demux->disconnect_frontend(dmxdev->demux);
		dmxdev->demux->connect_frontend(dmxdev->demux, front);
	}


I'm now wondering if anyone could shed some light on why it's failing
here and specifically why if I'm trying to avoid using the frontend by
writing in my own TS, it would fail on account of not being able to get
a frontend. Should test_dvr_play be setting up a frontend first before
attempting to open dvr0?
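
For what it's worth, get_fe() only succeeds if the demux driver has put a
memory frontend on that frontend_list. A hedged sketch of what that
registration looks like in dvb-core terms (whether the demux driver behind
the DIB7070p actually does this is exactly the open question):

#include "demux.h"	/* dvb-core demux interface, for illustration */

/* Sketch only: a demux driver registers a DMX_MEMORY_FE so that
 * dvb_dvr_open(O_WRONLY) can switch the demux input to memory. */
static struct dmx_frontend mem_fe = {
	.source = DMX_MEMORY_FE,
};

static int register_memory_frontend(struct dmx_demux *demux)
{
	return demux->add_frontend(demux, &mem_fe);
}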


Thanks,

Phil




Re: TM6010 driver and firmware

2009-10-05 Thread Mauro Carvalho Chehab
Hi Dênis,

On Sat, 3 Oct 2009 10:02:26 -0300
Dênis Goes denish...@gmail.com wrote:

 Hi People...
 
 I'm a programmer and I want to help with the development of the tm6010 driver,
 to finish the driver and use my PixelView 405 USB card.
 
 What is the status of the tm6010 driver? How do I obtain the correct
 tridvid.sys file? I have here 7 file versions from many driver versions, but
 none has the correct md5sum.

Probably it will use the v2.7 firmware, or v3.6 (if it has an xc3028L). Those 
firmwares are available via the Documentation/video4linux/extract_xc3028.pl 
script. The instructions for using it are in the comments at the top of the 
script file.

The driver is in the staging directory of the mercurial tree. It compiles fine, 
but it generates some OOPSes when you try to use it. That may be related to the 
i2c conversion or to the buffer filling routines.

Feel free to contribute. While I want to finish the driver, due to some higher
priority tasks on my large TODO list it is unlikely that I'll have time
to do it soon, unfortunately.



Cheers,
Mauro


Re: [PULL] http://mercurial.intuxication.org/hg/v4l-dvb-commits

2009-10-05 Thread Mauro Carvalho Chehab
On Wed, 23 Sep 2009 20:47:17 +0300
Igor M. Liplianin liplia...@me.by wrote:

 Mauro,
 
 Please pull from http://mercurial.intuxication.org/hg/v4l-dvb-commits
 
 for the following 2 changesets:
 
 01/02: Add support for TBS-likes remotes
 http://mercurial.intuxication.org/hg/v4l-dvb-commits?cmd=changeset;node=c4e209d7decc

+   { 0x1a, KEY_SHUFFLE},   /* snapshot */

Snapshot should use KEY_CAMERA instead. Please see the API reference at:
http://linuxtv.org/downloads/v4l-dvb-apis/ch17s01.html
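
I.e., presumably the entry would become something like:

+   { 0x1a, KEY_CAMERA},   /* snapshot */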

 02/02: Add support for TeVii remotes
 http://mercurial.intuxication.org/hg/v4l-dvb-commits?cmd=changeset;node=471f55ec066a

Some keys here also seem weird to my eyes:

+   { 0x41, KEY_AB},
+   { 0x46, KEY_F1},
+   { 0x47, KEY_F2},
+   { 0x5e, KEY_F3},
+   { 0x5c, KEY_F4},
+   { 0x52, KEY_F5},
+   { 0x5a, KEY_F6},

Do you have keys labeled AB and F1..F6 on the remote?

Also, I don't like using KEY_POWER for power. Some Linux distros turn the
computer off with this keycode. It is better to use KEY_POWER2 instead, and let
the userspace apps (or lirc) properly associate it with something useful, like
closing the media application, instead of turning the computer off.
 
 
  drivers/media/common/ir-keymaps.c |   99 
 +-
  drivers/media/video/cx88/cx88-input.c |   26 
  include/media/ir-common.h |2 
  3 files changed, 124 insertions(+), 3 deletions(-)
 
 Thanks,
 Igor
 




Cheers,
Mauro


[PATCH] v4l2_subdev: rename tuner s_standby operation to core s_power

2009-10-05 Thread Laurent Pinchart
Upcoming I2C v4l2_subdev drivers need a way to control the subdevice
power state from the core. This use case is already partially covered by
the tuner s_standby operation, but no way to explicitly come back from
the standby state is available.

Rename the tuner s_standby operation to core s_power, and fix tuner
drivers accordingly. The tuner core will call s_power(0) instead of
s_standby(). No explicit call to s_power(1) is required for tuners as
they are supposed to wake up from standby automatically.

Signed-off-by: Laurent Pinchart laurent.pinch...@ideasonboard.com
---
 drivers/media/video/au0828/au0828-video.c   |2 +-
 drivers/media/video/cx231xx/cx231xx-video.c |2 +-
 drivers/media/video/cx23885/cx23885-core.c  |2 +-
 drivers/media/video/cx23885/cx23885-dvb.c   |2 +-
 drivers/media/video/cx88/cx88-cards.c   |2 +-
 drivers/media/video/cx88/cx88-dvb.c |2 +-
 drivers/media/video/cx88/cx88-video.c   |2 +-
 drivers/media/video/em28xx/em28xx-cards.c   |2 +-
 drivers/media/video/em28xx/em28xx-video.c   |2 +-
 drivers/media/video/saa7134/saa7134-core.c  |2 +-
 drivers/media/video/saa7134/saa7134-video.c |2 +-
 drivers/media/video/tuner-core.c|9 ++---
 include/media/v4l2-subdev.h |7 ---
 13 files changed, 21 insertions(+), 17 deletions(-)

diff --git a/drivers/media/video/au0828/au0828-video.c 
b/drivers/media/video/au0828/au0828-video.c
index 51527d7..1485aee 100644
--- a/drivers/media/video/au0828/au0828-video.c
+++ b/drivers/media/video/au0828/au0828-video.c
@@ -830,7 +830,7 @@ static int au0828_v4l2_close(struct file *filp)
au0828_uninit_isoc(dev);
 
/* Save some power by putting tuner to sleep */
-	v4l2_device_call_all(&dev->v4l2_dev, 0, tuner, s_standby);
+	v4l2_device_call_all(&dev->v4l2_dev, 0, core, s_power, 0);
 
/* When close the device, set the usb intf0 into alt0 to free
   USB bandwidth */
diff --git a/drivers/media/video/cx231xx/cx231xx-video.c 
b/drivers/media/video/cx231xx/cx231xx-video.c
index 609bae6..1d57972 100644
--- a/drivers/media/video/cx231xx/cx231xx-video.c
+++ b/drivers/media/video/cx231xx/cx231xx-video.c
@@ -2106,7 +2106,7 @@ static int cx231xx_v4l2_close(struct file *filp)
}
 
/* Save some power by putting tuner to sleep */
-   call_all(dev, tuner, s_standby);
+   call_all(dev, core, s_power, 0);
 
/* do this before setting alternate! */
cx231xx_uninit_isoc(dev);
diff --git a/drivers/media/video/cx23885/cx23885-core.c 
b/drivers/media/video/cx23885/cx23885-core.c
index bf7bb1c..c46bae2 100644
--- a/drivers/media/video/cx23885/cx23885-core.c
+++ b/drivers/media/video/cx23885/cx23885-core.c
@@ -875,7 +875,7 @@ static int cx23885_dev_setup(struct cx23885_dev *dev)
	cx23885_i2c_register(&dev->i2c_bus[1]);
	cx23885_i2c_register(&dev->i2c_bus[2]);
cx23885_card_setup(dev);
-   call_all(dev, tuner, s_standby);
+   call_all(dev, core, s_power, 0);
cx23885_ir_init(dev);
 
if (cx23885_boards[dev-board].porta == CX23885_ANALOG_VIDEO) {
diff --git a/drivers/media/video/cx23885/cx23885-dvb.c 
b/drivers/media/video/cx23885/cx23885-dvb.c
index 86ac529..a003a3c 100644
--- a/drivers/media/video/cx23885/cx23885-dvb.c
+++ b/drivers/media/video/cx23885/cx23885-dvb.c
@@ -848,7 +848,7 @@ static int dvb_register(struct cx23885_tsport *port)
	fe0->dvb.frontend->callback = cx23885_tuner_callback;
 
/* Put the analog decoder in standby to keep it quiet */
-   call_all(dev, tuner, s_standby);
+   call_all(dev, core, s_power, 0);
 
	if (fe0->dvb.frontend->ops.analog_ops.standby)
		fe0->dvb.frontend->ops.analog_ops.standby(fe0->dvb.frontend);
diff --git a/drivers/media/video/cx88/cx88-cards.c 
b/drivers/media/video/cx88/cx88-cards.c
index 3946530..9e1656c 100644
--- a/drivers/media/video/cx88/cx88-cards.c
+++ b/drivers/media/video/cx88/cx88-cards.c
@@ -3213,7 +3213,7 @@ static void cx88_card_setup(struct cx88_core *core)
			ctl.fname);
		call_all(core, tuner, s_config, &xc2028_cfg);
}
-   call_all(core, tuner, s_standby);
+   call_all(core, core, s_power, 0);
 }
 
 /* -- */
diff --git a/drivers/media/video/cx88/cx88-dvb.c 
b/drivers/media/video/cx88/cx88-dvb.c
index e237b50..dd2769b 100644
--- a/drivers/media/video/cx88/cx88-dvb.c
+++ b/drivers/media/video/cx88/cx88-dvb.c
@@ -1170,7 +1170,7 @@ static int dvb_register(struct cx8802_dev *dev)
	fe1->dvb.frontend->ops.ts_bus_ctrl = cx88_dvb_bus_ctrl;
 
/* Put the analog decoder in standby to keep it quiet */
-   call_all(core, tuner, s_standby);
+   call_all(core, core, s_power, 0);
 
/* register everything */
return 

Re: dib3000mb dvb-t with kernel 2.6.32-rc3 does not work

2009-10-05 Thread Patrick Boettcher

Hi Mario,

On Mon, 5 Oct 2009, Mario Bachmann wrote:

with kernel 2.6.30.8 my TwinhanDTV USB-Ter USB1.1 / Magic Box I
worked.

Now with kernel 2.6.32-rc3 (and 2.6.31.1) the modules seem to be
loaded fine, but tzap/kaffeine/mplayer cannot tune to a channel:

dmesg says:
dvb-usb: found a 'TwinhanDTV USB-Ter USB1.1 / Magic Box I / HAMA USB1.1 DVB-T 
device' in warm state.
dvb-usb: will use the device's hardware PID filter (table count: 16).
DVB: registering new adapter (TwinhanDTV USB-Ter USB1.1 / Magic Box I / HAMA 
USB1.1 DVB-T device)
DVB: registering adapter 0 frontend 0 (DiBcom 3000M-B DVB-T)...
dibusb: This device has the Thomson Cable onboard. Which is default.
input: IR-receiver inside an USB DVB receiver as 
/devices/pci0000:00/0000:00:04.0/usb4/4-2/input/input5
dvb-usb: schedule remote query interval to 150 msecs.
dvb-usb: TwinhanDTV USB-Ter USB1.1 / Magic Box I / HAMA USB1.1 DVB-T device 
successfully initialized and connected.
usbcore: registered new interface driver dvb_usb_dibusb_mb

[..]
and so on. The signal values are zero or near zero, but when I boot the old 
kernel 2.6.30.8, it can tune without problems.


In a personal email to me you said that the differences between 
dibusb-common.c in 2.6.30.8 and 2.6.32-rc3 are the main cause of the 
problem.


Is it possible for you to find out which exact change is causing the trouble?

With the v4l-dvb hg repository it is possible to get each intermediate 
version of this file. Afaics, there are only 3 modifications in the 
timeframe we are talking about.


best regards,

--

Patrick Boettcher - Kernel Labs
http://www.kernellabs.com/


Re: [REVIEW] ivtv, ir-kbd-i2c: Explicit IR support for the AVerTV M116 for newer kernels

2009-10-05 Thread Devin Heitmueller
On Mon, Oct 5, 2009 at 6:02 AM, Aleksandr V. Piskunov
aleksandr.v.pisku...@gmail.com wrote:
 Yup, also tried udelay=4, IR controller handles it without problems,
 though cx25840 and xc2028 doesn't seem to like the 125 KHz frequency,
 refusing to communicate. xc2028 even stopped responding, requiring a cold
 reboot.

The maximum i2c clock rate for the xc3028 is 100 kHz.  Nobody should ever
be running it at anything higher.

Devin

-- 
Devin J. Heitmueller - Kernel Labs
http://www.kernellabs.com


RE: Mem2Mem V4L2 devices [RFC]

2009-10-05 Thread Marek Szyprowski
Hello,

On Friday, October 02, 2009 6:25 PM Ivan T. Ivanov wrote:

 On Fri, 2009-10-02 at 13:45 +0200, Marek Szyprowski wrote:
  Hello,
 
  During the V4L2 mini-summit and the Media Controller RFC discussion on
  Linux Plumbers 2009 Conference a mem2mem video device has been mentioned
  a few times (usually in a context of a 'resizer device' which might be a
  part of Camera interface pipeline or work as a standalone device). We
  are doing a research how our custom video/multimedia drivers can fit
  into the V4L2 framework. Most of our multimedia devices work in mem2mem
  mode.
 
  I did a quick research and I found that currently in the V4L2 framework
  there is no device that processes video data in a memory-to-memory
  model. In terms of V4L2 framework such device would be both video sink
  and source at the same time. The main problem is how the video nodes
  (/dev/videoX) should be assigned to such a device.
 
  The simplest way of implementing mem2mem device in v4l2 framework would
  use two video nodes (one for input and one for output). Such an idea has
  been already suggested on V4L2 mini-summit. Each DMA engine (either
  input or output) that is available in the hardware should get its own
  video node. In this approach an application can write() source image to
  for example /dev/video0 and then read the processed output from for
  example /dev/video1. Source and destination format/params/other custom
  settings also can be easily set for either source or destination node.
  Besides a single image, user applications can also process video streams
  by calling stream_on(), qbuf() + dqbuf(), stream_off() simultaneously on
  both video nodes.
 
  This approach has a limitation however. As user applications would have
  to open 2 different file descriptors to perform the processing of a
  single image, the v4l2 driver would need to match read() calls done on
  one file descriptor with write() calls from the another. The same thing
  would happen with buffers enqueued with qbuf(). In practice, this would
  result in a driver that allows only one instance of /dev/video0 as well
  as /dev/video1 opened. Otherwise, it would not be possible to track
  which opened /dev/video0 instance matches which /dev/video1 one.
 
  The real limitation of this approach is the fact, that it is hardly
  possible to implement multi-instance support and application
  multiplexing on a video device. In a typical embedded system, in
  contrast to most video-source-only or video-sink-only devices, a mem2mem
  device is very often used by more than one application at a time. Be it
  either simple one-shot single video frame processing or stream
  processing. Just consider that the 'resizer' module might be used in
  many applications for scaling bitmaps (xserver video subsystem,
  gstreamer, jpeglib, etc) only.
 
  At the first glance one might think that implementing multi-instance
  support should be done in a userspace daemon instead of mem2mem drivers.
  However I have run into problems designing such a user space daemon.
  Usually, video buffers are passed to v4l2 device as a user pointer or
  are mmaped directly from the device. The main issue that cannot be
  easily resolved is passing video buffers from the client application to
  the daemon. The daemon would queue a request on the device and return
  results back to the client application after a transaction is finished.
  Passing userspace pointers between an application and the daemon cannot
  be done, as they are two different processes. Mmap-type buffers are
  similar in this aspect - at least 2 buffer copy operations are required
  (from client application to device input buffers mmaped in daemon's
  memory and then from device output buffers to client application).
  Buffer copying and process context switches add both latency and
  additional cpu workload. In our custom drivers for mem2mem multimedia
  devices we implemented a queue shared between all instances of an opened
  mem2mem device. Each instance is assigned to an open device file
  descriptor. The queue is serviced in the device context, thus maximizing
  the device throughput. This is achieved by scheduling the next
  transaction in the driver (kernel) context. This may not even require a
  context switch at all.
 
  Do you have any ideas how would this solution fit into the current v4l2
  design?
 
  Another solution that came into my mind that would not suffer from this
  limitation is to use the same video node for both writing input buffers
  and reading output buffers (or queuing both input and output buffers).
  Such a design causes more problems with the current v4l2 design however:
 
  1. How to set different color space or size for input and output buffer
  each? It could be solved by adding a set of ioctls to get/set source
  image format and size, while the existing v4l2 ioctls would only refer
  to the output buffer. Frankly speaking, we don't like this idea.
 
 I think that is not 

RE: Mem2Mem V4L2 devices [RFC]

2009-10-05 Thread Marek Szyprowski
Hello,

On Monday, October 05, 2009 7:43 AM Hiremath, Vaibhav wrote:

 In terms of V4L2 framework such device would be both video
  sink
  and source at the same time. The main problem is how the video nodes
  (/dev/videoX) should be assigned to such a device.
 
  The simplest way of implementing mem2mem device in v4l2 framework
  would
  use two video nodes (one for input and one for output). Such an idea
  has
  been already suggested on V4L2 mini-summit.
 [Hiremath, Vaibhav] We discussed 2 options during summit,
 
 1) Only one video device node, and configuring parameters using 
 V4L2_BUF_TYPE_VIDEO_CAPTURE for input
 parameter and V4L2_BUF_TYPE_VIDEO_OUTPUT for output parameter.
 
 2) 2 separate video device node, one with V4L2_BUF_TYPE_VIDEO_CAPTURE and 
 another with
 V4L2_BUF_TYPE_VIDEO_OUTPUT, as mentioned by you.
 
 The obvious and preferred option would be 2, because with option 1 we would
 not be able to achieve real streaming. And again we would have to put a
 constraint on the application for a fixed input buffer index.

What do you mean by "real streaming"?

 
  This approach has a limitation however. As user applications would
  have
  to open 2 different file descriptors to perform the processing of a
  single image, the v4l2 driver would need to match read() calls done
  on
  one file descriptor with write() calls from the another. The same
  thing
  would happen with buffers enqueued with qbuf(). In practice, this
  would
  result in a driver that allows only one instance of /dev/video0 as
  well
  as /dev/video1 opened. Otherwise, it would not be possible to track
  which opened /dev/video0 instance matches which /dev/video1 one.
 
 [Hiremath, Vaibhav] Please note that we must put one limitation on the
 application: the buffers in both the video nodes are mapped one-to-one. This
 means that,
 
 Video0 (input)    Video1 (output)
 Index-0   ==  index-0
 Index-1   ==  index-1
 Index-2   ==  index-2
 
 Do you see any other option to this? I think this constraint is obvious from
 the application point of view during streaming.

This is correct. Every application should queue a corresponding output buffer 
for each queued input buffer.
NOTE that this whole discussion is about how to make it possible to have 2 
different applications running at the same time, each of them
queuing their own input and output buffers. It will look something like this:

Video0 (input)  Video1 (output)
App1, Index-0   == App1, index-0
App2, Index-0   == App2, index-0
App1, Index-1   == App1, index-1
App2, Index-1   == App2, index-1
App1, Index-2   == App1, index-2
App2, Index-2   == App2, index-2

Note that the absolute order of the queue/dequeue might be different, but each 
application should get the right output buffer,
which corresponds to the queued input buffer.

 [Hiremath, Vaibhav] Initially I thought of having a separate queue in the 
 driver which tries to make maximum
 use of the underlying hardware. The application just queues the buffers and 
 calls streamon; the driver
 internally queues them in its own queue and issues a resize operation (in this 
 case) for all the queued
 buffers, releasing them one-by-one to the application. We have a similar 
 implementation internally, but not with
 the standard V4L2 framework; it uses custom IOCTLs for everything.

This is similar to what we have currently, however we want to move all our 
custom drivers into the generic kernel frameworks.

 But when we decided to provide a user space library with the media controller, 
 I thought of moving this
 burden to the application layer. The application library will create an 
 interface, queue the buffers, and call streamon
 for all the buffers queued.
 
 Do you see any loopholes here? Am I missing any use-case scenario?

How do you want to pass buffers from your client applications through the user 
space library to the video nodes?

  Such a design causes more problems with the current v4l2 design
  however:
 
  1. How to set different color space or size for input and output
  buffer
  each? It could be solved by adding a set of ioctls to get/set source
  image format and size, while the existing v4l2 ioctls would only
  refer
  to the output buffer. Frankly speaking, we don't like this idea.
 
  2. Input and output in the same video node would not be compatible
  with
  the upcoming media controller, with which we will get an ability to
  arrange devices into a custom pipeline. Piping together two separate
  input-output nodes to create a new mem2mem device would be difficult
  and
  unintuitive. And that's not even considering multi-output devices.
 
 [Hiremath, Vaibhav] Irrespective of the 2 options I mentioned before, the 
 media controller will come
 into the picture, either for custom parameter configuration or for 
 creating/deleting links.
 
 We are only discussing buffer queue/de-queue and input/output parameter 
 configuration, and this has
 to happen 

RE: Mem2Mem V4L2 devices [RFC]

2009-10-05 Thread Marek Szyprowski
Hello,

On Monday, October 05, 2009 7:59 AM Hiremath, Vaibhav wrote:

 -----Original Message-----
 From: linux-media-ow...@vger.kernel.org 
 [mailto:linux-media-ow...@vger.kernel.org] On Behalf Of
 Hiremath, Vaibhav
 Sent: Monday, October 05, 2009 7:59 AM
 To: Ivan T. Ivanov; Marek Szyprowski
 Cc: linux-media@vger.kernel.org; kyungmin.p...@samsung.com; Tomasz Fujak; 
 Pawel Osciak
 Subject: RE: Mem2Mem V4L2 devices [RFC]
 
 
  -----Original Message-----
  From: linux-media-ow...@vger.kernel.org [mailto:linux-media-
  ow...@vger.kernel.org] On Behalf Of Ivan T. Ivanov
  Sent: Friday, October 02, 2009 9:55 PM
  To: Marek Szyprowski
  Cc: linux-media@vger.kernel.org; kyungmin.p...@samsung.com; Tomasz
  Fujak; Pawel Osciak
  Subject: Re: Mem2Mem V4L2 devices [RFC]
 
 
  Hi Marek,
 
 
  On Fri, 2009-10-02 at 13:45 +0200, Marek Szyprowski wrote:
   Hello,
  
 snip
 
   image format and size, while the existing v4l2 ioctls would only
  refer
   to the output buffer. Frankly speaking, we don't like this idea.
 
  I think it is not unusual for one video device to declare that it can
  support input and output operation at the same time.
 
  Let's take a resizer device as an example. It can always inform the
  user space application that
 
  struct v4l2_capability.capabilities ==
  (V4L2_CAP_VIDEO_CAPTURE | V4L2_CAP_VIDEO_OUTPUT)
 
  The user can issue an S_FMT ioctl supplying
 
  struct v4l2_format.type = V4L2_BUF_TYPE_VIDEO_CAPTURE
.pix  = width x height
 
  which will instruct this device to prepare its output for this
  resolution. After that the user can issue an S_FMT ioctl supplying
 
  struct v4l2_format.type = V4L2_BUF_TYPE_VIDEO_OUTPUT
.pix  = width x height
 
  Using only these ioctls should be enough for the device driver
  to know the required down/up scale factor.
 
  Regarding color space, struct v4l2_pix_format has a 'pixelformat'
  field which can be used to define the content of the input and output
  buffers. So using only existing ioctls the user can have a working
  resizer device.
 
  Also please note that there is VIDIOC_S_CROP, which can add the
  flexibility of cropping on input or output.
 
 [Hiremath, Vaibhav] I think this makes more sense in a capture pipeline, for 
 example,
 
 Sensor/decoder -> previewer -> resizer -> /dev/videoX
 

I don't get this. In a strictly capture pipeline we will get one video node 
anyway. 

However the question is how we should support a bit more complicated pipeline.

Just consider a resizer module and the pipeline:

sensor/decoder ->[bus]-> previewer -> [memory] -> resizer -> [memory]

([bus] means some kind of internal bus that is completely independent of 
the system memory)

Mapping to video nodes is not so trivial. In fact this pipeline consists of 2 
independent (sub)pipelines connected by a user space
application:

sensor/decoder ->[bus]-> previewer -> [memory] ->[user application]-> [memory] -> 
resizer -> [memory]

For further analysis it should be cut into 2 separate pipelines: 

a. sensor/decoder ->[bus]-> previewer -> [memory]
b. [memory] -> resizer -> [memory]

Again, mapping the first subpipeline is trivial:

sensor/decoder ->[bus]-> previewer -> /dev/video0

But the last can be mapped either as:

/dev/video1 -> resizer -> /dev/video1
(one video node approach)

or

/dev/video1 -> resizer -> /dev/video2
(2 video nodes approach).


So at the end the pipeline would look like this:

sensor/decoder ->[bus]-> previewer -> /dev/video0 ->[user application]-> 
/dev/video1 -> resizer -> /dev/video2

or 

sensor/decoder ->[bus]-> previewer -> /dev/video0 ->[user application]-> 
/dev/video1 -> resizer -> /dev/video1

   The last thing which should be done is to QBUF 2 buffers and call
   STREAMON.
 
 [Hiremath, Vaibhav] IMO, this implementation is not a streaming model; we are 
 trying to force mem-to-mem
 into streaming.

Why does this not fit streaming? I see no problems with streaming over a mem2mem 
device with only one video node. You just queue input
and output buffers (they are distinguished by the 'type' parameter) on the same 
video node.
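
A minimal user-space sketch of that single-node queuing (MMAP buffers assumed,
error handling omitted):

#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* Queue one input/output buffer pair on a single mem2mem video node. */
static void queue_pair(int fd, int index)
{
	struct v4l2_buffer buf;

	memset(&buf, 0, sizeof(buf));
	buf.type = V4L2_BUF_TYPE_VIDEO_OUTPUT;	/* source frame */
	buf.memory = V4L2_MEMORY_MMAP;
	buf.index = index;
	ioctl(fd, VIDIOC_QBUF, &buf);

	buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;	/* processed result */
	ioctl(fd, VIDIOC_QBUF, &buf);
}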

  We would have to put some constraints -
 
    - The driver will always treat index 0 as input, irrespective of the number 
  of buffers queued.
    - Or, the application should not queue more than 2 buffers.
    - Multi-channel use-case
 
  I think we have to have 2 device nodes which are capable of streaming 
  multiple buffers, both of them
  queuing buffers.

In the one video node approach there can be 2 buffer queues in one video node, for 
input and output respectively.

  The constraint would be that the buffers must be mapped one-to-one.

Right, each queued input buffer must have a corresponding output buffer.

Best regards
--
Marek Szyprowski
Samsung Poland R&D Center




RE: Mem2Mem V4L2 devices [RFC]

2009-10-05 Thread Ivan T. Ivanov

Hi, 

On Mon, 2009-10-05 at 15:54 +0200, Marek Szyprowski wrote:
 Hello,
 
 On Friday, October 02, 2009 6:25 PM Ivan T. Ivanov wrote:
 
  On Fri, 2009-10-02 at 13:45 +0200, Marek Szyprowski wrote:
   Hello,
  
   During the V4L2 mini-summit and the Media Controller RFC discussion on
   Linux Plumbers 2009 Conference a mem2mem video device has been mentioned
   a few times (usually in a context of a 'resizer device' which might be a
   part of Camera interface pipeline or work as a standalone device). We
   are doing a research how our custom video/multimedia drivers can fit
   into the V4L2 framework. Most of our multimedia devices work in mem2mem
   mode.
  
   I did a quick research and I found that currently in the V4L2 framework
   there is no device that processes video data in a memory-to-memory
   model. In terms of V4L2 framework such device would be both video sink
   and source at the same time. The main problem is how the video nodes
   (/dev/videoX) should be assigned to such a device.
  
   The simplest way of implementing mem2mem device in v4l2 framework would
   use two video nodes (one for input and one for output). Such an idea has
   been already suggested on V4L2 mini-summit. Each DMA engine (either
   input or output) that is available in the hardware should get its own
   video node. In this approach an application can write() source image to
   for example /dev/video0 and then read the processed output from for
   example /dev/video1. Source and destination format/params/other custom
   settings also can be easily set for either source or destination node.
   Besides a single image, user applications can also process video streams
   by calling stream_on(), qbuf() + dqbuf(), stream_off() simultaneously on
   both video nodes.
  
   This approach has a limitation however. As user applications would have
   to open 2 different file descriptors to perform the processing of a
   single image, the v4l2 driver would need to match read() calls done on
   one file descriptor with write() calls from the another. The same thing
   would happen with buffers enqueued with qbuf(). In practice, this would
   result in a driver that allows only one instance of /dev/video0 as well
   as /dev/video1 opened. Otherwise, it would not be possible to track
   which opened /dev/video0 instance matches which /dev/video1 one.
  
   The real limitation of this approach is the fact, that it is hardly
   possible to implement multi-instance support and application
   multiplexing on a video device. In a typical embedded system, in
   contrast to most video-source-only or video-sink-only devices, a mem2mem
   device is very often used by more than one application at a time. Be it
   either simple one-shot single video frame processing or stream
   processing. Just consider that the 'resizer' module might be used in
   many applications for scaling bitmaps (xserver video subsystem,
   gstreamer, jpeglib, etc) only.
  
   At the first glance one might think that implementing multi-instance
   support should be done in a userspace daemon instead of mem2mem drivers.
   However I have run into problems designing such a user space daemon.
   Usually, video buffers are passed to v4l2 device as a user pointer or
   are mmaped directly from the device. The main issue that cannot be
   easily resolved is passing video buffers from the client application to
   the daemon. The daemon would queue a request on the device and return
   results back to the client application after a transaction is finished.
   Passing userspace pointers between an application and the daemon cannot
   be done, as they are two different processes. Mmap-type buffers are
   similar in this aspect - at least 2 buffer copy operations are required
   (from client application to device input buffers mmaped in daemon's
   memory and then from device output buffers to client application).
   Buffer copying and process context switches add both latency and
   additional cpu workload. In our custom drivers for mem2mem multimedia
   devices we implemented a queue shared between all instances of an opened
   mem2mem device. Each instance is assigned to an open device file
   descriptor. The queue is serviced in the device context, thus maximizing
   the device throughput. This is achieved by scheduling the next
   transaction in the driver (kernel) context. This may not even require a
   context switch at all.
  
   Do you have any ideas how would this solution fit into the current v4l2
   design?
  
   Another solution that came into my mind that would not suffer from this
   limitation is to use the same video node for both writing input buffers
   and reading output buffers (or queuing both input and output buffers).
   Such a design causes more problems with the current v4l2 design however:
  
   1. How to set different color space or size for input and output buffer
   each? It could be solved by adding a set of ioctls to get/set source
   image format 

Re: dib3000mb dvb-t with kernel 2.6.32-rc3 do not work

2009-10-05 Thread Mario Bachmann
Am Mon, 5 Oct 2009 15:50:13 +0200 (CEST)
schrieb Patrick Boettcher pboettc...@kernellabs.com:

 Hi Mario,
 
 On Mon, 5 Oct 2009, Mario Bachmann wrote:
  with kernel 2.6.30.8 my TwinhanDTV USB-Ter USB1.1 / Magic Box I
  worked.
 
  Now with kernel 2.6.32-rc3 (and 2.6.31.1) the modules seems to be
  loaded fine, but tzap/kaffeine/mplayer can not tune to a channel:
 
  dmesg says:
  dvb-usb: found a 'TwinhanDTV USB-Ter USB1.1 / Magic Box I / HAMA
  USB1.1 DVB-T device' in warm state. dvb-usb: will use the device's
  hardware PID filter (table count: 16). DVB: registering new adapter
  (TwinhanDTV USB-Ter USB1.1 / Magic Box I / HAMA USB1.1 DVB-T
  device) DVB: registering adapter 0 frontend 0 (DiBcom 3000M-B
  DVB-T)... dibusb: This device has the Thomson Cable onboard. Which
  is default. input: IR-receiver inside an USB DVB receiver
  as /devices/pci:00/:00:04.0/usb4/4-2/input/input5 dvb-usb:
  schedule remote query interval to 150 msecs. dvb-usb: TwinhanDTV
  USB-Ter USB1.1 / Magic Box I / HAMA USB1.1 DVB-T device
  successfully initialized and connected. usbcore: registered new
  interface driver dvb_usb_dibusb_mb
 
  [..]
  and so on. The signal values are zero or near zero, but when I boot
  the old kernel 2.6.30.8, it can tune without problems.
 
 In a personal email to me you are saying that the differences between 
 dibusb-common.c in 2.6.30.8 and 2.6.32-rc3 are the main cause for the 
 problem.
 
 Is it possible for you to find out which exact change is causing the
 trouble?
 
 With the v4l-dvb hg repository it is possible to get each intermediate 
 version of this file. AFAICS, there are only 3 modifications in the 
 timeframe we are talking about.
 
 best regards,
 
 --
 
 Patrick Boettcher - Kernel Labs
 http://www.kernellabs.com/

I think the cause must be here:
/usr/src/linux-2.6.32-rc3/drivers/media/dvb/dvb-usb/dibusb-common.c
line 136 to line 146

I changed this whole section to the version from 2.6.30.8:

if (i+1 < num && (msg[i+1].flags & I2C_M_RD)) {
	if (dibusb_i2c_msg(d, msg[i].addr,
			   msg[i].buf, msg[i].len,
			   msg[i+1].buf, msg[i+1].len) < 0)
		break;
	i++;
} else
	if (dibusb_i2c_msg(d, msg[i].addr,
			   msg[i].buf, msg[i].len, NULL, 0) < 0)
		break;

and it works again. My posted part is inside the 
for (i = 0; i < num; i++) { ... } section!

Mario


Re: [PATCH 1/2] SH: add support for the RJ54N1CB0C camera for the kfr2r09 platform

2009-10-05 Thread Guennadi Liakhovetski
On Mon, 5 Oct 2009, Paul Mundt wrote:

 On Sat, Oct 03, 2009 at 01:21:30PM +0200, Guennadi Liakhovetski wrote:
  Signed-off-by: Guennadi Liakhovetski g.liakhovet...@gmx.de
  ---
  arch/sh/boards/mach-kfr2r09/setup.c |  139 +++
  1 files changed, 139 insertions(+), 0 deletions(-)
 
  
 This seems to depend on the RJ54N1CB0C driver, so I'll queue this up
 after that has been merged in the v4l tree. If it's available on a topic
 branch upstream that isn't going to be rebased, then I can pull that in,
 but this is not so critical either way.

It actually shouldn't depend on the driver patch. The driver has no 
headers, so... I haven't verified, but it should work either way. OTOH, 
waiting for the driver patch is certainly a safe bet:-)

Thanks
Guennadi
---
Guennadi Liakhovetski, Ph.D.
Freelance Open-Source Software Developer
http://www.open-technology.de/


tvcard Leadtek WinFast PxDVR3200 H not working

2009-10-05 Thread Chifan Cosmin
I need to use the analog part. The Mandriva team on Bugzilla tried to help me, 
but in the end they suggested looking for help from you; the problem and trials are 
found here:
https://qa.mandriva.com/show_bug.cgi?id=54131
Can you help me please? I don't want to go back to Windows...
http://pigulici.110mb.com/


  


[PULL] soc-camera fixes and a new driver for 2.6.32

2009-10-05 Thread Guennadi Liakhovetski
Hi Mauro,

As agreed upon, I regenerated my tree, which now includes 4 fixes and a 
new sensor driver. All marked Priority: high.

Please pull from http://linuxtv.org/hg/~gliakhovetski/v4l-dvb

for the following 6 changesets:

01/06: sh_mobile_ceu: add soft reset function
http://linuxtv.org/hg/~gliakhovetski/v4l-dvb?cmd=changeset;node=74c7deed99ab

02/06: sh_mobile_ceu_camera: add VBP error support
http://linuxtv.org/hg/~gliakhovetski/v4l-dvb?cmd=changeset;node=cb1a46850d59

03/06: sh_mobile_ceu_camera: fix cropping for scaling clients
http://linuxtv.org/hg/~gliakhovetski/v4l-dvb?cmd=changeset;node=1ec5d4b2baf9

04/06: soc-camera: add a new driver for the RJ54N1CB0C camera sensor from Sharp
http://linuxtv.org/hg/~gliakhovetski/v4l-dvb?cmd=changeset;node=3694d2ae959a

05/06: pxa_camera: fix camera pixel format configuration
http://linuxtv.org/hg/~gliakhovetski/v4l-dvb?cmd=changeset;node=74be1809c9f1

06/06: fix use-after-free Oops, resulting from a driver-core API change
http://linuxtv.org/hg/~gliakhovetski/v4l-dvb?cmd=changeset;node=a0605260c650


 b/linux/drivers/media/video/rj54n1cb0c.c         | 1219 +++
 linux/drivers/media/video/Kconfig                |    6 
 linux/drivers/media/video/Makefile               |    1 
 linux/drivers/media/video/pxa_camera.c           |    4 
 linux/drivers/media/video/sh_mobile_ceu_camera.c |   87 +
 linux/drivers/media/video/soc_camera.c           |   16 
 linux/include/media/v4l2-chip-ident.h            |    3 
 7 files changed, 1312 insertions(+), 24 deletions(-)

Thanks
Guennadi
---
Guennadi Liakhovetski, Ph.D.
Freelance Open-Source Software Developer
http://www.open-technology.de/


Re: [PATCH 1/2] soc-camera: add a new driver for the RJ54N1CB0C camera sensor from Sharp

2009-10-05 Thread Guennadi Liakhovetski
Hello Morimoto-san

On Mon, 5 Oct 2009, Kuninori Morimoto wrote:

 Dear Guennadi
 
  diff --git a/drivers/media/video/Makefile b/drivers/media/video/Makefile
  index e706cee..2851e5e 100644
  --- a/drivers/media/video/Makefile
  +++ b/drivers/media/video/Makefile
  @@ -79,6 +79,7 @@ obj-$(CONFIG_SOC_CAMERA_MT9V022)  += mt9v022.o
   obj-$(CONFIG_SOC_CAMERA_OV772X)+= ov772x.o
   obj-$(CONFIG_SOC_CAMERA_OV9640)+= ov9640.o
   obj-$(CONFIG_SOC_CAMERA_TW9910)+= tw9910.o
  +obj-$(CONFIG_SOC_CAMERA_RJ54N1)+= rj54n1cb0c.o
 
 alphabet order wrong ?
 'R' is earlier than 'T' ?

Thanks, I forgot they were ordered:-) Fixed in the final version.

Regards
Guennadi
---
Guennadi Liakhovetski, Ph.D.
Freelance Open-Source Software Developer
http://www.open-technology.de/


RE: Mem2Mem V4L2 devices [RFC]

2009-10-05 Thread Hiremath, Vaibhav

 -Original Message-
 From: Marek Szyprowski [mailto:m.szyprow...@samsung.com]
 Sent: Monday, October 05, 2009 7:26 PM
 To: Hiremath, Vaibhav; linux-media@vger.kernel.org
 Cc: kyungmin.p...@samsung.com; Tomasz Fujak; Pawel Osciak; Marek
 Szyprowski
 Subject: RE: Mem2Mem V4L2 devices [RFC]
 
 Hello,
 
 On Monday, October 05, 2009 7:43 AM Hiremath, Vaibhav wrote:
 
  In terms of V4L2 framework such device would be both video
   sink
   and source at the same time. The main problem is how the video
 nodes
   (/dev/videoX) should be assigned to such a device.
  
   The simplest way of implementing mem2mem device in v4l2
 framework
   would
   use two video nodes (one for input and one for output). Such an
 idea
   has
   been already suggested on V4L2 mini-summit.
  [Hiremath, Vaibhav] We discussed 2 options during summit,
 
  1) Only one video device node, and configuring parameters using
 V4L2_BUF_TYPE_VIDEO_CAPTURE for input
  parameter and V4L2_BUF_TYPE_VIDEO_OUTPUT for output parameter.
 
  2) 2 separate video device node, one with
 V4L2_BUF_TYPE_VIDEO_CAPTURE and another with
  V4L2_BUF_TYPE_VIDEO_OUTPUT, as mentioned by you.
 
  The obvious and preferred option would be 2, because with option 1
 we could not able to achieve real
  streaming. And again we have to put constraint on application for
 fixed input buffer index.
 
 What do you mean by real streaming?
 
[Hiremath, Vaibhav] I meant that after streamon there will be just a sequence of 
queuing and de-queuing of buffers. With a single node of operation, how do we 
decide which is the input buffer and which one is the output? We have to assume, 
or put a constraint on the application, that the 0th index will always be the 
input, irrespective of the number of buffers requested. 

In a normal scenario (for example with codecs), the application will open the 
device once and start pumping buffers; the driver should queue the buffers as 
and when they come.

 
   This approach has a limitation however. As user applications
 would
   have
   to open 2 different file descriptors to perform the processing
 of a
   single image, the v4l2 driver would need to match read() calls
 done
   on
   one file descriptor with write() calls from the another. The
 same
   thing
   would happen with buffers enqueued with qbuf(). In practice,
 this
   would
   result in a driver that allows only one instance of /dev/video0
 as
   well
   as /dev/video1 opened. Otherwise, it would not be possible to
 track
   which opened /dev/video0 instance matches which /dev/video1 one.
  
  [Hiremath, Vaibhav] Please note that we must put one limitation to
 application that, the buffers in
  both the video nodes are mapped one-to-one. This means that,
 
  Video0 (input)    Video1 (output)
  Index-0 <==> index-0
  Index-1 <==> index-1
  Index-2 <==> index-2
 
  Do you see any other option to this? I think this constraint is
 obvious from application point of view
  in during streaming.
 
 This is correct. Every application should queue a corresponding
 output buffer for each queued input buffer.
  NOTE that this whole discussion is about how to make it possible to have
  2 different applications running at the same time, each of them
  queuing their own input and output buffers. It will look somehow
  like this:
 
  Video0 (input)    Video1 (output)
  App1, Index-0 <==> App1, index-0
  App2, Index-0 <==> App2, index-0
  App1, Index-1 <==> App1, index-1
  App2, Index-1 <==> App2, index-1
  App1, Index-2 <==> App1, index-2
  App2, Index-2 <==> App2, index-2
 
 Note, that the absolute order of the queue/dequeue might be
 different, but each application should get the right output buffer,
 which corresponds to the queued input buffer.
 
[Hiremath, Vaibhav] We have to create separate queues for every device open 
call. It would be difficult/complex for the driver to maintain a special queue 
for requests from a number of applications.
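
As an editor's illustration of what per-open queues plus a shared scheduler 
could look like inside a driver, here is a minimal kernel-side sketch (all 
names are invented for this example; a real driver would build on videobuf 
and the v4l2 core):

#include <linux/list.h>
#include <linux/spinlock.h>

struct m2m_ctx {
	struct list_head in_q;	/* queued source (OUTPUT) buffers */
	struct list_head out_q;	/* queued result (CAPTURE) buffers */
	struct list_head job;	/* link in the device job list; must be
				   INIT_LIST_HEAD()ed at open() time */
};

struct m2m_dev {
	struct list_head jobs;		/* contexts with a job pending */
	struct m2m_ctx *running;	/* context owning the hardware now */
	spinlock_t lock;
};

/* Called after QBUF: once a context holds at least one input and one
 * output buffer it becomes a schedulable job; the hardware is kicked
 * from here or from the previous job's completion interrupt. */
static void m2m_try_schedule(struct m2m_dev *dev, struct m2m_ctx *ctx)
{
	unsigned long flags;

	spin_lock_irqsave(&dev->lock, flags);
	if (!list_empty(&ctx->in_q) && !list_empty(&ctx->out_q) &&
	    list_empty(&ctx->job))
		list_add_tail(&ctx->job, &dev->jobs);

	if (!dev->running && !list_empty(&dev->jobs)) {
		dev->running = list_first_entry(&dev->jobs,
						struct m2m_ctx, job);
		/* program the hardware for dev->running here */
	}
	spin_unlock_irqrestore(&dev->lock, flags);
}

The point is that fairness between applications is decided at job granularity 
in the driver (kernel) context, so no user-space daemon is needed.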

  [Hiremath, Vaibhav] Initially I thought of having separate queue
 in driver which tries to make maximum
  usage of underneath hardware. Application just will queue the
 buffers and call streamon, driver
  internally queues it in his own queue and issues a resize
 operation (in this case) for all the queued
  buffers, releasing one-by-one to application. We have similar
 implementation internally, but not with
  standard V4L2 framework, it uses custom IOCTL's for everything.
 
 This is similar to what we have currently, however we want to move
 all our custom drivers into the generic kernel frameworks.
 
  But when we decided to provide User Space library with media
 controller, I thought of moving this
  burden to application layer. Application library will create an
 interface and queue and call streamon
  for all the buffers queued.
 
  Do you see any loopholes here? Am I missing any use-case scenario?
 
 How do you want to pass buffers from your client 

[cron job] v4l-dvb daily build 2.6.22 and up: ERRORS, 2.6.16-2.6.21: ERRORS

2009-10-05 Thread Hans Verkuil
This message is generated daily by a cron job that builds v4l-dvb for
the kernels and architectures in the list below.

Results of the daily build of v4l-dvb:

date:Mon Oct  5 19:00:08 CEST 2009
path:http://www.linuxtv.org/hg/v4l-dvb
changeset:   13046:c7aa399e5dac
gcc version: gcc (GCC) 4.3.1
hardware:x86_64
host os: 2.6.26

linux-2.6.22.19-armv5: OK
linux-2.6.23.12-armv5: OK
linux-2.6.24.7-armv5: OK
linux-2.6.25.11-armv5: OK
linux-2.6.26-armv5: OK
linux-2.6.27-armv5: OK
linux-2.6.28-armv5: OK
linux-2.6.29.1-armv5: OK
linux-2.6.30-armv5: OK
linux-2.6.31-armv5: OK
linux-2.6.32-rc3-armv5: ERRORS
linux-2.6.32-rc3-armv5-davinci: ERRORS
linux-2.6.27-armv5-ixp: ERRORS
linux-2.6.28-armv5-ixp: ERRORS
linux-2.6.29.1-armv5-ixp: ERRORS
linux-2.6.30-armv5-ixp: ERRORS
linux-2.6.31-armv5-ixp: ERRORS
linux-2.6.32-rc3-armv5-ixp: ERRORS
linux-2.6.28-armv5-omap2: OK
linux-2.6.29.1-armv5-omap2: OK
linux-2.6.30-armv5-omap2: OK
linux-2.6.31-armv5-omap2: ERRORS
linux-2.6.32-rc3-armv5-omap2: ERRORS
linux-2.6.22.19-i686: ERRORS
linux-2.6.23.12-i686: ERRORS
linux-2.6.24.7-i686: ERRORS
linux-2.6.25.11-i686: ERRORS
linux-2.6.26-i686: OK
linux-2.6.27-i686: OK
linux-2.6.28-i686: OK
linux-2.6.29.1-i686: WARNINGS
linux-2.6.30-i686: WARNINGS
linux-2.6.31-i686: WARNINGS
linux-2.6.32-rc3-i686: ERRORS
linux-2.6.23.12-m32r: OK
linux-2.6.24.7-m32r: OK
linux-2.6.25.11-m32r: OK
linux-2.6.26-m32r: OK
linux-2.6.27-m32r: OK
linux-2.6.28-m32r: OK
linux-2.6.29.1-m32r: OK
linux-2.6.30-m32r: OK
linux-2.6.31-m32r: OK
linux-2.6.32-rc3-m32r: ERRORS
linux-2.6.30-mips: WARNINGS
linux-2.6.31-mips: OK
linux-2.6.32-rc3-mips: ERRORS
linux-2.6.27-powerpc64: ERRORS
linux-2.6.28-powerpc64: ERRORS
linux-2.6.29.1-powerpc64: ERRORS
linux-2.6.30-powerpc64: ERRORS
linux-2.6.31-powerpc64: ERRORS
linux-2.6.32-rc3-powerpc64: ERRORS
linux-2.6.22.19-x86_64: ERRORS
linux-2.6.23.12-x86_64: ERRORS
linux-2.6.24.7-x86_64: ERRORS
linux-2.6.25.11-x86_64: ERRORS
linux-2.6.26-x86_64: OK
linux-2.6.27-x86_64: OK
linux-2.6.28-x86_64: OK
linux-2.6.29.1-x86_64: WARNINGS
linux-2.6.30-x86_64: WARNINGS
linux-2.6.31-x86_64: WARNINGS
linux-2.6.32-rc3-x86_64: ERRORS
sparse (linux-2.6.31): OK
sparse (linux-2.6.32-rc3): OK
linux-2.6.16.61-i686: ERRORS
linux-2.6.17.14-i686: ERRORS
linux-2.6.18.8-i686: ERRORS
linux-2.6.19.5-i686: ERRORS
linux-2.6.20.21-i686: ERRORS
linux-2.6.21.7-i686: ERRORS
linux-2.6.16.61-x86_64: ERRORS
linux-2.6.17.14-x86_64: ERRORS
linux-2.6.18.8-x86_64: ERRORS
linux-2.6.19.5-x86_64: ERRORS
linux-2.6.20.21-x86_64: ERRORS
linux-2.6.21.7-x86_64: ERRORS

Detailed results are available here:

http://www.xs4all.nl/~hverkuil/logs/Monday.log

Full logs are available here:

http://www.xs4all.nl/~hverkuil/logs/Monday.tar.bz2

The V4L2 specification failed to build, but the last compiled spec is here:

http://www.xs4all.nl/~hverkuil/spec/v4l2.html

The DVB API specification from this daily build is here:

http://www.xs4all.nl/~hverkuil/spec/dvbapi.pdf



RE: Mem2Mem V4L2 devices [RFC]

2009-10-05 Thread Hiremath, Vaibhav
 -Original Message-
 From: Marek Szyprowski [mailto:m.szyprow...@samsung.com]
 Sent: Monday, October 05, 2009 7:26 PM
 To: Hiremath, Vaibhav; 'Ivan T. Ivanov'; linux-media@vger.kernel.org
 Cc: kyungmin.p...@samsung.com; Tomasz Fujak; Pawel Osciak; Marek
 Szyprowski
 Subject: RE: Mem2Mem V4L2 devices [RFC]
 
 Hello,
 
 On Monday, October 05, 2009 7:59 AM Hiremath, Vaibhav wrote:
 
  -Original Message-
  From: linux-media-ow...@vger.kernel.org [mailto:linux-media-
 ow...@vger.kernel.org] On Behalf Of
  Hiremath, Vaibhav
  Sent: Monday, October 05, 2009 7:59 AM
  To: Ivan T. Ivanov; Marek Szyprowski
  Cc: linux-media@vger.kernel.org; kyungmin.p...@samsung.com; Tomasz
 Fujak; Pawel Osciak
  Subject: RE: Mem2Mem V4L2 devices [RFC]
 
 
   -Original Message-
   From: linux-media-ow...@vger.kernel.org [mailto:linux-media-
   ow...@vger.kernel.org] On Behalf Of Ivan T. Ivanov
   Sent: Friday, October 02, 2009 9:55 PM
   To: Marek Szyprowski
   Cc: linux-media@vger.kernel.org; kyungmin.p...@samsung.com;
 Tomasz
   Fujak; Pawel Osciak
   Subject: Re: Mem2Mem V4L2 devices [RFC]
  
  
   Hi Marek,
  
  
   On Fri, 2009-10-02 at 13:45 +0200, Marek Szyprowski wrote:
Hello,
   
  snip
 
image format and size, while the existing v4l2 ioctls would
 only
   refer
to the output buffer. Frankly speaking, we don't like this
 idea.
  
   I think that is not unusual one video device to define that it
 can
   support at the same time input and output operation.
  
   Lets take as example resizer device. it is always possible that
 it
   inform user space application that
  
   struct v4l2_capability.capabilities ==
 (V4L2_CAP_VIDEO_CAPTURE | V4L2_CAP_VIDEO_OUTPUT)
  
   User can issue S_FMT ioctl supplying
  
   struct v4l2_format.type = V4L2_BUF_TYPE_VIDEO_CAPTURE
   .pix  = width x height
  
   which will instruct this device to prepare its output for this
   resolution. after that user can issue S_FMT ioctl supplying
  
   struct v4l2_format.type = V4L2_BUF_TYPE_VIDEO_OUTPUT
   .pix  = width x height
  
   using only these ioctls should be enough to device driver
   to know down/up scale factor required.
  
   regarding color space struct v4l2_pix_format have field
   'pixelformat'
   which can be used to define input and output buffers content.
   so using only existing ioctl's user can have working resizer
 device.
  
   also please note that there is VIDIOC_S_CROP which can add
   additional
   flexibility of adding cropping on input or output.
  
  [Hiremath, Vaibhav] I think this makes more sense in capture
 pipeline, for example,
 
  Sensor/decoder - previewer - resizer - /dev/videoX
 
 
 I don't get this. In strictly capture pipeline we will get one video
 node anyway.
 
 However the question is how we should support a bit more complicated
 pipeline.
 
 Just consider a resizer module and the pipeline:
 
 sensor/decoder -[bus]- previewer - [memory] - resizer - [memory]
 
[Hiremath, Vaibhav] For me this is not a single pipeline; it has two separate 
links - 

1) sensor/decoder -[bus]- previewer - [memory]

2) [memory] - resizer - [memory]


 ([bus] means some kind of internal bus that is completely
 interdependent from the system memory)
 
 Mapping to video nodes is not so trivial. In fact this pipeline
 consist of 2 independent (sub)pipelines connected by user space
 application:
 
 sensor/decoder -[bus]- previewer - [memory] -[user application]-
 [memory] - resizer - [memory]
 
 For further analysis it should be cut into 2 separate pipelines:
 
 a. sensor/decoder -[bus]- previewer - [memory]
 b. [memory] - resizer - [memory]
 
[Hiremath, Vaibhav] Correct, but I wouldn't call them sub-pipelines. The 
application is linking them, so from the driver's point of view they are 
completely separate.

 Again, mapping the first subpipeline is trivial:
 
 sensor/decoder -[bus]- previewer - /dev/video0
 
[Hiremath, Vaibhav] Correct, it is a separate streaming device.

 But the last, can be mapped either as:
 
 /dev/video1 - resizer - /dev/video1
 (one video node approach)
 
[Hiremath, Vaibhav] Please go through my last response, where I mentioned the 
buffer queuing constraints of this approach.

 or
 
 /dev/video1 - resizer - /dev/video2
 (2 video nodes approach).
 
 
 So at the end the pipeline would look like this:
 
 sensor/decoder -[bus]- previewer - /dev/video0 -[user
 application]- /dev/video1 - resizer - /dev/video2
 
 or
 
 sensor/decoder -[bus]- previewer - /dev/video0 -[user
 application]- /dev/video1 - resizer - /dev/video1
 
   last thing which should be done is to QBUF 2 buffers and call
   STREAMON.
  
  [Hiremath, Vaibhav] IMO, this implementation is not streaming
 model, we are trying to fit mem-to-mem
  forcefully to streaming.
 
 Why this does not fit streaming? I see no problems with streaming
 over mem2mem device with only one video node. You just queue input
 and output buffers (they are distinguished by 'type' parameter) on
 the same 

Re: [REVIEW] ivtv, ir-kbd-i2c: Explicit IR support for the AVerTV M116 for newer kernels

2009-10-05 Thread Oldrich Jedlicka
On Monday 05 of October 2009 at 00:23:47, Aleksandr V. Piskunov wrote:
 On Sat, Oct 03, 2009 at 11:44:20AM -0400, Andy Walls wrote:
  Aleksandr and Jean,
 
  Zdrastvoitye  Bonjour,
 
  To support the AVerMedia M166's IR microcontroller in ivtv and
  ir-kbd-i2c with the new i2c binding model, I have added 3 changesets in
 
  http://linuxtv.org/hg/~awalls/ivtv

 Now for the last step towards decent support of the M116 remote.

 I spent hours banging my head against the wall; the controller just doesn't
 give stable keypresses, it skips a lot of them. Increasing the polling interval
 from the default 100 ms to 400-500 ms helps a bit, but it only masks the
 problem. Decreasing the polling interval below 50 ms makes it skip virtually
 90% of keypresses.

 Basically, during the I2C operation that reads the scancode, the controller
 seems to stop processing input from the IR sensor, resulting in a loss of
 keypresses.

Hi Aleksandr,

Just a side note. If your M166 has the same remote control chip as my CardBus 
Plus/Hybrid (I2C address 0x40), then I have to say it is very fragile. From 
my experience it doesn't like probing (an empty read), and when reading the value 
it doesn't like interruptions (you have to write the address and read 
immediately). So I wouldn't be surprised if it doesn't work under some other 
conditions.

Regards,
Oldrich.


 So the solution(?) I found was to decrease the udelay in
 ivtv_i2c_algo_template from 10 to 5. Guess it just doubles the frequency
 of ivtv i2c bus or something like that. Problem went away, IR controller
 is now working as expected.

 So question is:
 1) Is it ok to decrease udelay for this board?
 2) If yes, how to do it right?
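
For context, the change being discussed amounts to something like the following 
sketch (an editor's illustration: the callback names and the timeout value are 
assumptions; only the udelay field is the actual subject of the question):

#include <linux/i2c-algo-bit.h>

/* callbacks provided elsewhere by the driver (names assumed here) */
static void ivtv_setscl(void *data, int state);
static void ivtv_setsda(void *data, int state);
static int ivtv_getscl(void *data);
static int ivtv_getsda(void *data);

/* Sketch of the bit-banging algo template in ivtv-i2c.c */
static struct i2c_algo_bit_data ivtv_i2c_algo_template = {
	.setsda  = ivtv_setsda,
	.setscl  = ivtv_setscl,
	.getsda  = ivtv_getsda,
	.getscl  = ivtv_getscl,
	.udelay  = 5,	/* was 10: halves the SCL half-period, roughly
			   doubling the I2C bus frequency */
	.timeout = 200,
};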




RE: Mem2Mem V4L2 devices [RFC]

2009-10-05 Thread Ivan T. Ivanov

Hi Vaibhav,

On Mon, 2009-10-05 at 23:57 +0530, Hiremath, Vaibhav wrote:
  -Original Message-
  From: Marek Szyprowski [mailto:m.szyprow...@samsung.com]
  Sent: Monday, October 05, 2009 7:26 PM
  To: Hiremath, Vaibhav; 'Ivan T. Ivanov'; linux-media@vger.kernel.org
  Cc: kyungmin.p...@samsung.com; Tomasz Fujak; Pawel Osciak; Marek
  Szyprowski
  Subject: RE: Mem2Mem V4L2 devices [RFC]
  
  Hello,
  
  On Monday, October 05, 2009 7:59 AM Hiremath, Vaibhav wrote:
  
   -Original Message-
   From: linux-media-ow...@vger.kernel.org [mailto:linux-media-
  ow...@vger.kernel.org] On Behalf Of
   Hiremath, Vaibhav
   Sent: Monday, October 05, 2009 7:59 AM
   To: Ivan T. Ivanov; Marek Szyprowski
   Cc: linux-media@vger.kernel.org; kyungmin.p...@samsung.com; Tomasz
  Fujak; Pawel Osciak
   Subject: RE: Mem2Mem V4L2 devices [RFC]
  
  
-Original Message-
From: linux-media-ow...@vger.kernel.org [mailto:linux-media-
ow...@vger.kernel.org] On Behalf Of Ivan T. Ivanov
Sent: Friday, October 02, 2009 9:55 PM
To: Marek Szyprowski
Cc: linux-media@vger.kernel.org; kyungmin.p...@samsung.com;
  Tomasz
Fujak; Pawel Osciak
Subject: Re: Mem2Mem V4L2 devices [RFC]
   
   
Hi Marek,
   
   
On Fri, 2009-10-02 at 13:45 +0200, Marek Szyprowski wrote:
 Hello,

   snip
  
 image format and size, while the existing v4l2 ioctls would
  only
refer
 to the output buffer. Frankly speaking, we don't like this
  idea.
   
I think that is not unusual one video device to define that it
  can
support at the same time input and output operation.
   
Lets take as example resizer device. it is always possible that
  it
inform user space application that
   
struct v4l2_capability.capabilities ==
(V4L2_CAP_VIDEO_CAPTURE | V4L2_CAP_VIDEO_OUTPUT)
   
User can issue S_FMT ioctl supplying
   
struct v4l2_format.type = V4L2_BUF_TYPE_VIDEO_CAPTURE
  .pix  = width x height
   
which will instruct this device to prepare its output for this
resolution. after that user can issue S_FMT ioctl supplying
   
struct v4l2_format.type = V4L2_BUF_TYPE_VIDEO_OUTPUT
  .pix  = width x height
   
using only these ioctls should be enough to device driver
to know down/up scale factor required.
   
regarding color space struct v4l2_pix_format have field
'pixelformat'
which can be used to define input and output buffers content.
so using only existing ioctl's user can have working resizer
  device.
   
also please note that there is VIDIOC_S_CROP which can add
additional
flexibility of adding cropping on input or output.
   
   [Hiremath, Vaibhav] I think this makes more sense in capture
  pipeline, for example,
  
   Sensor/decoder - previewer - resizer - /dev/videoX
  
  
  I don't get this. In strictly capture pipeline we will get one video
  node anyway.
  
  However the question is how we should support a bit more complicated
  pipeline.
  
  Just consider a resizer module and the pipeline:
  
  sensor/decoder -[bus]- previewer - [memory] - resizer - [memory]
  
 [Hiremath, Vaibhav] For me this is not single pipeline, it has two separate 
 links - 
 
 1) sensor/decoder -[bus]- previewer - [memory]
 
 2) [memory] - resizer - [memory]
 
 
  ([bus] means some kind of internal bus that is completely
  interdependent from the system memory)
  
  Mapping to video nodes is not so trivial. In fact this pipeline
  consist of 2 independent (sub)pipelines connected by user space
  application:
  
  sensor/decoder -[bus]- previewer - [memory] -[user application]-
  [memory] - resizer - [memory]
  
  For further analysis it should be cut into 2 separate pipelines:
  
  a. sensor/decoder -[bus]- previewer - [memory]
  b. [memory] - resizer - [memory]
  
 [Hiremath, Vaibhav] Correct, I wouldn't call them as sub-pipeline. 
 Application is linking them, so from driver point of view they are completely 
 separate.
 
  Again, mapping the first subpipeline is trivial:
  
  sensor/decoder -[bus]- previewer - /dev/video0
  
 [Hiremath, Vaibhav] Correct, it is separate streaming device.
 
  But the last, can be mapped either as:
  
  /dev/video1 - resizer - /dev/video1
  (one video node approach)
  
 [Hiremath, Vaibhav] Please go through my last response where I have mentioned 
 about buffer queuing constraints with this approach.
 
  or
  
  /dev/video1 - resizer - /dev/video2
  (2 video nodes approach).
  
  
  So at the end the pipeline would look like this:
  
  sensor/decoder -[bus]- previewer - /dev/video0 -[user
  application]- /dev/video1 - resizer - /dev/video2
  
  or
  
  sensor/decoder -[bus]- previewer - /dev/video0 -[user
  application]- /dev/video1 - resizer - /dev/video1
  
last thing which should be done is to QBUF 2 buffers and call
STREAMON.
   
   [Hiremath, Vaibhav] IMO, this implementation is not streaming
  model, we are trying to fit mem-to-mem
 

RE: Mem2Mem V4L2 devices [RFC]

2009-10-05 Thread Hiremath, Vaibhav

 -Original Message-
 From: Ivan T. Ivanov [mailto:iiva...@mm-sol.com]
 Sent: Tuesday, October 06, 2009 12:27 AM
 To: Hiremath, Vaibhav
 Cc: Marek Szyprowski; linux-media@vger.kernel.org;
 kyungmin.p...@samsung.com; Tomasz Fujak; Pawel Osciak
 Subject: RE: Mem2Mem V4L2 devices [RFC]
 
 
snip
 last thing which should be done is to QBUF 2 buffers and
 call
 STREAMON.

[Hiremath, Vaibhav] IMO, this implementation is not streaming
   model, we are trying to fit mem-to-mem
forcefully to streaming.
  
   Why this does not fit streaming? I see no problems with
 streaming
   over mem2mem device with only one video node. You just queue
 input
   and output buffers (they are distinguished by 'type' parameter)
 on
   the same video node.
  
  [Hiremath, Vaibhav] Do we create separate queue of buffers based
 on type? I think we don't.
 
  App1    App2    App3   ...   AppN
    |       |       |           |
    -----------------------------
                 |
            /dev/video0
                 |
           Resizer Driver
 
  why not? they can be per file handler input/output queue. and we
  can do time sharing use of resizer driver like Marek suggests.
 
[Hiremath, Vaibhav] Ivan,
File-handle-based queues and buffer-type-based queues are two different things. 

Yes, we definitely have to create separate queues for each file handle to 
support multiple channels. But my question was about the buffer types, CAPTURE 
and OUTPUT.

Thanks,
Vaibhav

 
 
  Everyone will be doing streamon, and in normal use case every
 application must be getting buffers from another module (another
 driver, codecs, DSP, etc...) in multiple streams, 0, 1,2,3,4N
 
  Every application will start streaming with (mostly) fixed scaling
 factor which mostly never changes. This one video node approach is
 possible only with constraint that, the application will always
 queue only 2 buffers with one CAPTURE and one with OUTPUT type.
 
 i don't see how 2 device node approach can help with this case.
 even in normal video capture device you should stop streaming
 when change buffer sizes.
 
  He has to wait till first/second gets finished, you can't queue
 multiple buffers (input and output) simultaneously.
 
 actually this should be possible.
 
 iivanov
 
 
  I do agree here with you that we need to investigate on whether we
 really have such use-case. Does it make sense to put such constraint
 on application? What is the impact? Again in case of down-scaling,
 application may want to use same buffer as input, which is easily
 possible with single node approach.
 
  Thanks,
  Vaibhav
 
We have to put some constraints -
   
- Driver will treat index 0 as input always,
 irrespective of
   number of buffers queued.
- Or, application should not queue more that 2 buffers.
- Multi-channel use-case
   
I think we have to have 2 device nodes which are capable of
   streaming multiple buffers, both are
queuing the buffers.
  
   In one video node approach there can be 2 buffer queues in one
 video
   node, for input and output respectively.
  
The constraint would be the buffers must be mapped one-to-one.
  
   Right, each queued input buffer must have corresponding output
   buffer.
  
   Best regards
   --
   Marek Szyprowski
    Samsung Poland R&D Center
  
  
 
 



RE: Mem2Mem V4L2 devices [RFC]

2009-10-05 Thread Ivan T. Ivanov
On Tue, 2009-10-06 at 00:31 +0530, Hiremath, Vaibhav wrote:
  -Original Message-
  From: Ivan T. Ivanov [mailto:iiva...@mm-sol.com]
  Sent: Tuesday, October 06, 2009 12:27 AM
  To: Hiremath, Vaibhav
  Cc: Marek Szyprowski; linux-media@vger.kernel.org;
  kyungmin.p...@samsung.com; Tomasz Fujak; Pawel Osciak
  Subject: RE: Mem2Mem V4L2 devices [RFC]
  
  
 snip
  last thing which should be done is to QBUF 2 buffers and
  call
  STREAMON.
 
 [Hiremath, Vaibhav] IMO, this implementation is not streaming
model, we are trying to fit mem-to-mem
 forcefully to streaming.
   
Why this does not fit streaming? I see no problems with
  streaming
over mem2mem device with only one video node. You just queue
  input
and output buffers (they are distinguished by 'type' parameter)
  on
the same video node.
   
   [Hiremath, Vaibhav] Do we create separate queue of buffers based
  on type? I think we don't.
  
   App1    App2    App3   ...   AppN
     |       |       |           |
     -----------------------------
                  |
             /dev/video0
                  |
            Resizer Driver
  
   why not? they can be per file handler input/output queue. and we
   can do time sharing use of resizer driver like Marek suggests.
  
 [Hiremath, Vaibhav] Ivan,
 File handle based queue and buffer type based queue are two different terms. 

really? ;)

 
 Yes, definitely we have to create separate queues for each file handle to 
 support multiple channels. But my question was for buffer type, CAPTURE and 
 OUTPUT.
 

Let me see. Your concern is that for very big frames (1X Mpix), managing
separate buffers for input and output will be a waste of space
for operations like downscaling. I know that such operations can be
done in-place ;). But what about up-scaling? This should also 
be possible, but only with some very dirty hacks.
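
For reference, in the single-node model "in-place" would mean queuing the same 
user memory on both queues, roughly like this editor's sketch (whether 
overlapping source and destination is legal depends entirely on the hardware):

#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

/* Queue one user buffer as both source (OUTPUT) and destination
 * (CAPTURE). Plausible for in-place downscaling if the hardware
 * copes; for upscaling the output would overrun the input, hence
 * the "dirty hacks" above. */
static void queue_in_place(int fd, void *frame, size_t len)
{
	struct v4l2_buffer b;

	memset(&b, 0, sizeof(b));
	b.memory = V4L2_MEMORY_USERPTR;
	b.index = 0;
	b.m.userptr = (unsigned long)frame;
	b.length = len;

	b.type = V4L2_BUF_TYPE_VIDEO_OUTPUT;	/* source */
	ioctl(fd, VIDIOC_QBUF, &b);

	b.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;	/* destination */
	ioctl(fd, VIDIOC_QBUF, &b);
}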

iivanov

 Thanks,
 Vaibhav
 
  
  
   Everyone will be doing streamon, and in normal use case every
  application must be getting buffers from another module (another
  driver, codecs, DSP, etc...) in multiple streams, 0, 1,2,3,4N
  
   Every application will start streaming with (mostly) fixed scaling
  factor which mostly never changes. This one video node approach is
  possible only with constraint that, the application will always
  queue only 2 buffers with one CAPTURE and one with OUTPUT type.
  
  i don't see how 2 device node approach can help with this case.
  even in normal video capture device you should stop streaming
  when change buffer sizes.
  
   He has to wait till first/second gets finished, you can't queue
  multiple buffers (input and output) simultaneously.
  
  actually this should be possible.
  
  iivanov
  
  
   I do agree here with you that we need to investigate on whether we
  really have such use-case. Does it make sense to put such constraint
  on application? What is the impact? Again in case of down-scaling,
  application may want to use same buffer as input, which is easily
  possible with single node approach.
  
   Thanks,
   Vaibhav
  
 We have to put some constraints -

   - Driver will treat index 0 as input always,
  irrespective of
number of buffers queued.
   - Or, application should not queue more that 2 buffers.
   - Multi-channel use-case

 I think we have to have 2 device nodes which are capable of
streaming multiple buffers, both are
 queuing the buffers.
   
In one video node approach there can be 2 buffer queues in one
  video
node, for input and output respectively.
   
 The constraint would be the buffers must be mapped one-to-one.
   
Right, each queued input buffer must have corresponding output
buffer.
   
Best regards
--
Marek Szyprowski
 Samsung Poland R&D Center
   
   
  
  
 



RE: Mem2Mem V4L2 devices [RFC]

2009-10-05 Thread Hiremath, Vaibhav


Thanks,
Vaibhav Hiremath
Platform Support Products
Texas Instruments Inc
Ph: +91-80-25099927

 -Original Message-
 From: Ivan T. Ivanov [mailto:iiva...@mm-sol.com]
 Sent: Tuesday, October 06, 2009 12:39 AM
 To: Hiremath, Vaibhav
 Cc: Marek Szyprowski; linux-media@vger.kernel.org;
 kyungmin.p...@samsung.com; Tomasz Fujak; Pawel Osciak
 Subject: RE: Mem2Mem V4L2 devices [RFC]
 
 On Tue, 2009-10-06 at 00:31 +0530, Hiremath, Vaibhav wrote:
   -Original Message-
   From: Ivan T. Ivanov [mailto:iiva...@mm-sol.com]
   Sent: Tuesday, October 06, 2009 12:27 AM
   To: Hiremath, Vaibhav
   Cc: Marek Szyprowski; linux-media@vger.kernel.org;
   kyungmin.p...@samsung.com; Tomasz Fujak; Pawel Osciak
   Subject: RE: Mem2Mem V4L2 devices [RFC]
  
  
  snip
   last thing which should be done is to QBUF 2 buffers and
   call
   STREAMON.
  
  [Hiremath, Vaibhav] IMO, this implementation is not
 streaming
 model, we are trying to fit mem-to-mem
  forcefully to streaming.

 Why this does not fit streaming? I see no problems with
   streaming
 over mem2mem device with only one video node. You just queue
   input
 and output buffers (they are distinguished by 'type'
 parameter)
   on
 the same video node.

[Hiremath, Vaibhav] Do we create separate queue of buffers
 based
   on type? I think we don't.
   
    App1    App2    App3   ...   AppN
      |       |       |           |
      -----------------------------
                   |
              /dev/video0
                   |
             Resizer Driver
  
why not? they can be per file handler input/output queue. and
 we
can do time sharing use of resizer driver like Marek suggests.
  
  [Hiremath, Vaibhav] Ivan,
  File handle based queue and buffer type based queue are two
 different terms.
 
 really? ;)
 
 
  Yes, definitely we have to create separate queues for each file
 handle to support multiple channels. But my question was for buffer
 type, CAPTURE and OUTPUT.
 
 
 let me see. you concern is that for very big frames 1X Mpix,
 managing
 separate buffers for input and output will be waste of space
 for operations like downs calling. i know that such operations can
 be
 done in-place ;). but what about up-scaling. this also should
 be possible, but with some very dirty hacks.
 
[Hiremath, Vaibhav] Dirty hacks??? 
I think for upscaling we have to have 2 separate buffers; I do not see any 
other options here.

Thanks,
Vaibhav

 iivanov
 
  Thanks,
  Vaibhav
 
  
   
Everyone will be doing streamon, and in normal use case every
   application must be getting buffers from another module (another
   driver, codecs, DSP, etc...) in multiple streams, 0,
 1,2,3,4N
   
Every application will start streaming with (mostly) fixed
 scaling
   factor which mostly never changes. This one video node approach
 is
   possible only with constraint that, the application will always
   queue only 2 buffers with one CAPTURE and one with OUTPUT type.
  
   i don't see how 2 device node approach can help with this case.
   even in normal video capture device you should stop streaming
   when change buffer sizes.
  
He has to wait till first/second gets finished, you can't
 queue
   multiple buffers (input and output) simultaneously.
  
   actually this should be possible.
  
   iivanov
  
   
I do agree here with you that we need to investigate on
 whether we
   really have such use-case. Does it make sense to put such
 constraint
   on application? What is the impact? Again in case of down-
 scaling,
   application may want to use same buffer as input, which is
 easily
   possible with single node approach.
   
Thanks,
Vaibhav
   
  We have to put some constraints -
 
  - Driver will treat index 0 as input always,
   irrespective of
 number of buffers queued.
  - Or, application should not queue more that 2 buffers.
  - Multi-channel use-case
 
  I think we have to have 2 device nodes which are capable
 of
 streaming multiple buffers, both are
  queuing the buffers.

 In one video node approach there can be 2 buffer queues in
 one
   video
 node, for input and output respectively.

  The constraint would be the buffers must be mapped one-to-
 one.

 Right, each queued input buffer must have corresponding
 output
 buffer.

 Best regards
 --
 Marek Szyprowski
  Samsung Poland R&D Center


   
  
 
 



RE: Mem2Mem V4L2 devices [RFC]

2009-10-05 Thread Karicheri, Muralidharan



 1. How to set different color space or size for input and output buffer
 each? It could be solved by adding a set of ioctls to get/set source
 image format and size, while the existing v4l2 ioctls would only refer
 to the output buffer. Frankly speaking, we don't like this idea.

I think that is not unusual one video device to define that it can
support at the same time input and output operation.

Lets take as example resizer device. it is always possible that it
inform user space application that

struct v4l2_capability.capabilities ==
   (V4L2_CAP_VIDEO_CAPTURE | V4L2_CAP_VIDEO_OUTPUT)

User can issue S_FMT ioctl supplying

struct v4l2_format.type = V4L2_BUF_TYPE_VIDEO_CAPTURE
 .pix  = width x height

which will instruct this device to prepare its output for this
resolution. after that user can issue S_FMT ioctl supplying

struct v4l2_format.type = V4L2_BUF_TYPE_VIDEO_OUTPUT
 .pix  = width x height

using only these ioctls should be enough to device driver
to know down/up scale factor required.

regarding color space struct v4l2_pix_format have field 'pixelformat'
which can be used to define input and output buffers content.
so using only existing ioctl's user can have working resizer device.

also please note that there is VIDIOC_S_CROP which can add additional
flexibility of adding cropping on input or output.

last thing which should be done is to QBUF 2 buffers and call STREAMON.

i think this will simplify a lot buffer synchronization.


Ivan,

There is another use case where two resizer hardware units work on the same 
input frame and give two different output frames of different resolutions. 
How do we handle this using the one-video-device approach you
just described here?

Murali


RE: Mem2Mem V4L2 devices [RFC]

2009-10-05 Thread Ivan T. Ivanov
Hi, 


On Mon, 2009-10-05 at 15:02 -0500, Karicheri, Muralidharan wrote:
 
 
  1. How to set different color space or size for input and output buffer
  each? It could be solved by adding a set of ioctls to get/set source
  image format and size, while the existing v4l2 ioctls would only refer
  to the output buffer. Frankly speaking, we don't like this idea.
 
 I think that is not unusual one video device to define that it can
 support at the same time input and output operation.
 
 Lets take as example resizer device. it is always possible that it
 inform user space application that
 
 struct v4l2_capability.capabilities ==
  (V4L2_CAP_VIDEO_CAPTURE | V4L2_CAP_VIDEO_OUTPUT)
 
 User can issue S_FMT ioctl supplying
 
 struct v4l2_format.type = V4L2_BUF_TYPE_VIDEO_CAPTURE
.pix  = width x height
 
 which will instruct this device to prepare its output for this
 resolution. after that user can issue S_FMT ioctl supplying
 
 struct v4l2_format.type = V4L2_BUF_TYPE_VIDEO_OUTPUT
.pix  = width x height
 
 using only these ioctls should be enough to device driver
 to know down/up scale factor required.
 
 regarding color space struct v4l2_pix_format have field 'pixelformat'
 which can be used to define input and output buffers content.
 so using only existing ioctl's user can have working resizer device.
 
 also please note that there is VIDIOC_S_CROP which can add additional
 flexibility of adding cropping on input or output.
 
 last thing which should be done is to QBUF 2 buffers and call STREAMON.
 
 i think this will simplify a lot buffer synchronization.
 
 
 Ivan,
 
 There is another use case where there are two Resizer hardware working on the 
 same input frame and give two different output frames of different 
 resolution. How do we handle this using the one video device approach you
 just described here?

 What is the difference?
 
- You can have only one resizer device driver which hides the fact that 
  there are actually 2 hardware resizers; operations will just be
  faster ;).

- Or there are two device drivers (nodes) with similar characteristics.

In both cases the input buffer can be the same. 

iivanov



 
 Murali



RE: Mem2Mem V4L2 devices [RFC]

2009-10-05 Thread Karicheri, Muralidharan


 Ivan,

 There is another use case where there are two Resizer hardware working on
the same input frame and give two different output frames of different
resolution. How do we handle this using the one video device approach you
 just described here?

 what is the difference?

- you can have only one resizer device driver which will hide that
  they are actually 2 hardware resizers. just operations will be
  faster ;).


In your implementation as mentioned above, there will be one queue for the 
OUTPUT buffer type and another queue for the CAPTURE buffer type, right?
So if we have two resizer outputs, then we would need two queues of the CAPTURE 
buffer type. When the application calls QBUF on the node, which queue will be used 
for the buffer? This makes me believe we need two capture nodes and one 
output node for this driver. 

- they are two device drivers (nodes) with similar characteristics.

in both cases input buffer can be the same.

iivanov




 Murali




Re: [REVIEW] ivtv, ir-kbd-i2c: Explicit IR support for the AVerTV M116 for newer kernels

2009-10-05 Thread Jean Delvare
Hi Andy,

On Sun, 04 Oct 2009 16:11:32 -0400, Andy Walls wrote:
 On Sun, 2009-10-04 at 10:54 +0200, Jean Delvare wrote:
  On Sat, 03 Oct 2009 11:44:20 -0400, Andy Walls wrote:
/* This array should match the IVTV_HW_ defines */
   @@ -126,7 +131,8 @@
 wm8739,
 vp27smpx,
 m52790,
   - NULL
   + NULL,
   + NULL/* IVTV_HW_EM78P153S_IR_RX_AVER */
};

/* This array should match the IVTV_HW_ defines */
   @@ -151,9 +157,38 @@
 vp27smpx,
 m52790,
 gpio,
   + ir_rx_em78p153s_aver,
  
  This exceeds the maximum length for I2C client names (19 chars max.) So
  your patch won't work. I could make the name field slightly larger (say
  23 chars) if really needed, but I'd rather have you simply use shorter
  names.
 
 I'll use shorter names.  I was trying to maintain some uniqueness.
 The bridge driver has the knowledge of the exact chip and nothing else
 does unless the bridge exposes it somehow.  It seemed like a good way to
 expose the knowledge.

The knowledge is already exposed through the platform data attached to
the instantiated i2c device (.ir_codes, .internal_get_key_func, .type
and .name). The i2c client name isn't used by the ir-kbd-i2c driver to
do anything useful.

   +static int ivtv_i2c_new_ir(struct i2c_adapter *adap, u32 hw, const char 
   *type,
   +u8 addr)
   +{
   + struct i2c_board_info info;
   + unsigned short addr_list[2] = { addr, I2C_CLIENT_END };
   +
   + memset(&info, 0, sizeof(struct i2c_board_info));
   + strlcpy(info.type, type, I2C_NAME_SIZE);
   +
   + /* Our default information for ir-kbd-i2c.c to use */
   + switch (hw) {
   + case IVTV_HW_EM78P153S_IR_RX_AVER:
   + info.platform_data = (void *)&em78p153s_aver_ir_init_data;
  
  Useless cast. You never need to cast to void *.
 
 The compiler gripes because the const gets discarded; Mauro asked me
 to fix it in cx18 previously.  I could have cast it to the proper type,
 but then it wouldn't have fit in 80 columns.
 
 (void *) wasn't useless, it kept gcc, checkpatch, Mauro and me happy.
 Now I guess I'll have to break the assignment to be over two lines. :(

Ah, good point, I had missed it. Well basically this means that you're
not supposed to pass const data structures as platform data. So maybe
you'd rather follow the approach used in the saa7134 and em28xx drivers.

   --- a/linux/drivers/media/video/ir-kbd-i2c.c  Sat Oct 03 11:23:00 
   2009 -0400
   +++ b/linux/drivers/media/video/ir-kbd-i2c.c  Sat Oct 03 11:27:19 
   2009 -0400
   @@ -730,6 +730,7 @@
 { ir_video, 0 },
 /* IR device specific entries should be added here */
 { ir_rx_z8f0811_haup, 0 },
   + { ir_rx_em78p153s_aver, 0 },
 { }
};

  
  I think we need to discuss this. I don't really see the point of adding
  new entries if the ir-kbd-i2c driver doesn't do anything about it. This
  makes device probing slower with no benefit. As long as you pass device
  information with all the details, the ir-kbd-i2c driver won't care
  about the device name.
 
 I thought a matching name was required for ir-kbd-i2c to bind to the IR
 controller device.  I personally don't like the ir_video name as it is
 a bit too generic, but then again I don't know where that name is
 visible outside the kernel.  My plan was to have rather specific names,
 so LIRC in the future could know automatically how to handle some of
 these devices without the user trying to guess what an ir_video device
 was, as that name supplied no information to LIRC or the user.

The name is visible in sysfs as the client's name attribute. But no
user-space application should rely on this. If a user-space application
should use a name string, that should be the _input_ name and not the
i2c client name. For the simple reason that IR devices don't have to be
I2C devices.

The i2c device name is merely used for device-driver matching. For this
purpose, ir_video works just fine. As I said before, there is a point
in defining other names if it allows the ir-kbd-i2c driver to make
decisions by itself, or if you envision moving support for a specific
device to a separate driver at some point. If not then you're making
things more complex with zero benefit.

I'd like to add that, IMHO, LIRC shouldn't have to care about this at
all. The name should be purely informative. I have experimented in the
past with user-space trying to do device-specific handling based on a
name string. This is what we did in libsensors 2. This ended up being a
totally unmaintainable mess, where each new kernel had to be followed
by updated user-space. This was a pain and you really shouldn't go in
this direction. For libsensors 3, we've defined a clean sysfs
interface, which describes the functionality of each supported device,
so the library doesn't do any name-driven processing. Very easy to
maintain.

So if you want a piece of advice: either handle all device-specific
things in the kernel, or in user-space, but don't do half in the kernel
and half in user-space.

Re: tm6010 status

2009-10-05 Thread matthieu castet

Hi,

Dênis Goes wrote:

Hi Matthieu...
I made the same answer yesterday... I want to help in development so I can use my
PixelView 405 USB.

Do you have the correct tridvid.sys file to extract the firmware from?


No, I took the firmware (for the tuner) from somewhere on the internet.

Some time ago I did some USB sniffing on Windows for my HVR900H, studied 
the Linux driver and started
some analysis [2].

I found some strange things on the i2c bus [1]. Then I figured out what should be 
done to make
the digital part work.
But because of lack of time and motivation (like everybody ;) ), I stopped 
working on this.

Matthieu

[1] 
http://www.mail-archive.com/linux-media@vger.kernel.org/msg00987.html


[2]
== i2c ==
0x1f zl
0xa0 eeprom
0xa2 ??? (0xff)
0xa4
0xa6-0xac ??? (0xff)
0xae 
0xc2 (tuner)

== gpio ==
0 (WP eeprom ?? )
1 ZL RESET
2 tuner_reset_gpio
4 input sel ???
5 (led green)
7 (led blue)
== eeprom format ==
0x0-0x3 : magic ???
0x4-0x15 : GetDescriptor device
0xc VID
0xE PID
0x10 DID
0x12 iManufacturer
0x13 Product string
0x14 SerialNumber

0x40 string size (10 03) ???
0x42-0x4f (Product string @32)
0x94 string size (16 03)
0x96-0xa9 (SerialNumber @64)
0x16 string size (02 03)
0x18- (iManufacturer @16)

0x60 : iConfiguration index ???
0x6a string size (0a 03)
0x6c (iConfiguration @48)

where is mac address and rev ???


RE: ISP OMAP3 camera support ov7690

2009-10-05 Thread Aguirre Rodriguez, Sergio Alberto
Hi Michael, 

 -Original Message-
 From: linux-omap-ow...@vger.kernel.org 
 [mailto:linux-omap-ow...@vger.kernel.org] On Behalf Of michael
 Sent: Sunday, October 04, 2009 7:29 PM
 To: Nishanth Menon
 Cc: linux-o...@vger.kernel.org; linux-media@vger.kernel.org
 Subject: Re: ISP OMAP3 camera support ov7690
 
 Hi,
 
 cc: linux-media
 
 Nishanth Menon wrote:
  michael said the following on 10/03/2009 06:13 PM:
  I'm writing a driver to support the ov7690 camera and I have some
  question about the meaning of:
 
  - datalane configuration
  CSI2 Data lanes - each CSI2 lane is a differential pair. 
 And, at least 1
  clock and data lane is used in devices.
 
 Sorry, can you explain a little bit more? I have the camera connected to
 cam_hs and cam_vs and the data is 8-bit. I use the isp init structure.
 The SCCB bus works great and I can send configuration to it, but I don't
 receive any interrupt from the ISP; it seems that it doesn't see the
 transaction:
 
 The ISPCCDC: ###CCDC SYN_MODE=0x31704 seems ok.
 
 
 static struct isp_interface_config ov7690_if_config = {
 .ccdc_par_ser   = ISP_CSIA,
 .dataline_shift = 0x0,
 .hsvs_syncdetect= ISPCTRL_SYNC_DETECT_VSFALL,

Can you try with ISPCTRL_SYNC_DETECT_VSRISE ?

 .strobe = 0x0,
 .prestrobe  = 0x0,
 .shutter= 0x0,
 .wenlog = ISPCCDC_CFG_WENLOG_AND,
 .wait_hs_vs = 0x4,
 .raw_fmt_in = ISPCCDC_INPUT_FMT_GR_BG,
 .u.csi.crc  = 0x0,
 .u.csi.mode = 0x0,
 .u.csi.edge = 0x0,
 .u.csi.signalling   = 0x0,
 .u.csi.strobe_clock_inv = 0x0,
 .u.csi.vs_edge  = 0x0,
 .u.csi.channel  = 0x0,
 .u.csi.vpclk= 0x1,
 .u.csi.data_start   = 0x0,
 .u.csi.data_size= 0x0,
 .u.csi.format   = V4L2_PIX_FMT_YUYV,
 };
 
 and I don't know the meaning of
 
 lanecfg.clk.pol = OV7690_CSI2_CLOCK_POLARITY;
 lanecfg.clk.pos = OV7690_CSI2_CLOCK_LANE;
 lanecfg.data[0].pol = OV7690_CSI2_DATA0_POLARITY;
 lanecfg.data[0].pos = OV7690_CSI2_DATA0_LANE;
 lanecfg.data[1].pol = OV7690_CSI2_DATA1_POLARITY;
 lanecfg.data[1].pos = OV7690_CSI2_DATA1_LANE;
 lanecfg.data[2].pol = 0;
 lanecfg.data[2].pos = 0;
 lanecfg.data[3].pol = 0;
 lanecfg.data[3].pos = 0;
 

These are the physical connection details:

- The .pol field stands for the differential pair polarity
  (i.e. the order in which the negative and positive connections
  are plugged in to the CSI2 ComplexIO module).

- The .pos field tells in which of the 4 physically available
  positions your clock or data lane is located.
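
For example (illustrative values only -- the actual positions and polarities
depend entirely on how the sensor is routed on your board, and 0 meaning
"pair not swapped" for .pol is my assumption):

/* hypothetical wiring: clock lane on physical position 1,
 * one data lane on physical position 2, neither pair swapped */
lanecfg.clk.pol = 0;
lanecfg.clk.pos = 1;
lanecfg.data[0].pol = 0;
lanecfg.data[0].pos = 2;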

Regards,
Sergio

  - phyconfiguration
  PHY - Physical timing configurations. BTW, if it is camera specific
  you could get a lot of inputs from [1].
 
 OK, I will ask them.
 
  
  Regards,
  Nishanth Menon
  
  Ref:
  [1] http://vger.kernel.org/vger-lists.html#linux-media
 
 Michael


Mem2Mem V4L2 devices [RFC] - Can we enhance the V4L2 API?

2009-10-05 Thread Karicheri, Muralidharan
Hi,

Are we constrained to use the QBUF/DQBUF/STREAMON/STREAMOFF model for this 
specific device type (memory to memory)? What about adding new ioctls for 
this device type that could simplify the implementation? As we have seen in 
the discussion, this is not a streaming device, but rather a 
transaction/conversion device which operates on a given frame to produce a 
desired output frame. Each transaction may have its own configuration 
context, which is applied to the hardware before starting the operation. 
This is unlike a streaming device, where most of the configuration is done 
before streaming starts, and the changes made during streaming are controls 
like brightness, contrast, gain etc. With a streaming device, the frames 
received by the application are synchronized to an input source's timing, or 
the application outputs frames based on a display timing. Also, a streaming 
driver usually maintains a single IO instance, whereas a memory-to-memory 
device's hardware needs to switch contexts between operations. So we might 
need a different approach than the capture/output device model.

Just a thought, to see if others think the same way. Once we know we are free 
to enhance the API to support this new device type, I am sure there will be 
better ideas for implementing it.
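
For illustration only, here is a rough sketch of what such a
transaction-style interface could look like. Nothing below exists in V4L2
today; the structure, the ioctl name and the private ioctl number are all
invented for the example:

#include <linux/videodev2.h>

/* hypothetical: one self-contained conversion request, carrying its
 * full configuration context plus the source and destination buffers */
struct v4l2_mem2mem_transaction {
        struct v4l2_pix_format  src_fmt;  /* input frame format */
        struct v4l2_pix_format  dst_fmt;  /* desired output format */
        struct v4l2_buffer      src_buf;  /* filled input buffer */
        struct v4l2_buffer      dst_buf;  /* empty output buffer */
};

/* hypothetical private ioctl; BASE_VIDIOC_PRIVATE keeps it out of the
 * standard number space */
#define VIDIOC_M2M_CONVERT \
        _IOWR('V', BASE_VIDIOC_PRIVATE, struct v4l2_mem2mem_transaction)

The driver would then apply src_fmt/dst_fmt to the hardware, run the
conversion and complete dst_buf, switching hardware context per call
instead of per stream.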

Murali Karicheri
Software Design Engineer
Texas Instruments Inc.
Germantown, MD 20874
email: m-kariche...@ti.com

-Original Message-
From: Ivan T. Ivanov [mailto:iiva...@mm-sol.com]
Sent: Monday, October 05, 2009 4:14 PM
To: Karicheri, Muralidharan
Cc: Marek Szyprowski; linux-media@vger.kernel.org;
kyungmin.p...@samsung.com; Tomasz Fujak; Pawel Osciak
Subject: RE: Mem2Mem V4L2 devices [RFC]

Hi,


On Mon, 2009-10-05 at 15:02 -0500, Karicheri, Muralidharan wrote:

 
  1. How to set different color space or size for input and output
buffer
  each? It could be solved by adding a set of ioctls to get/set source
  image format and size, while the existing v4l2 ioctls would only refer
  to the output buffer. Frankly speaking, we don't like this idea.
 
 I think that is not unusual one video device to define that it can
 support at the same time input and output operation.
 
 Lets take as example resizer device. it is always possible that it
 inform user space application that
 
 struct v4l2_capability.capabilities ==
 (V4L2_CAP_VIDEO_CAPTURE | V4L2_CAP_VIDEO_OUTPUT)
 
 User can issue S_FMT ioctl supplying
 
 struct v4l2_format.type = V4L2_BUF_TYPE_VIDEO_CAPTURE
   .pix  = width x height
 
 which will instruct this device to prepare its output for this
 resolution. after that user can issue S_FMT ioctl supplying
 
 struct v4l2_format.type = V4L2_BUF_TYPE_VIDEO_OUTPUT
   .pix  = width x height
 
 using only these ioctls should be enough to device driver
 to know down/up scale factor required.
 
 regarding color space struct v4l2_pix_format have field 'pixelformat'
 which can be used to define input and output buffers content.
 so using only existing ioctl's user can have working resizer device.
 
 also please note that there is VIDIOC_S_CROP which can add additional
 flexibility of adding cropping on input or output.
 
 last thing which should be done is to QBUF 2 buffers and call STREAMON.
 
 i think this will simplify a lot buffer synchronization.
 

 Ivan,

 There is another use case where two resizer hardware blocks work on
 the same input frame and give two different output frames of different
 resolutions. How do we handle this using the one-video-device approach
 you just described here?

What is the difference?

- You can have only one resizer device driver, which hides the fact
  that there are actually 2 hardware resizers; operations will just be
  faster ;).

- Or they are two device drivers (nodes) with similar characteristics.

In both cases the input buffer can be the same.
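
For what it's worth, a minimal user-space sketch of the single-node resizer
flow quoted above might look like this. The device path, resolutions and
pixel format are placeholders, and the REQBUFS/QUERYBUF/mmap setup and all
error handling are omitted:

#include <fcntl.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

static void resize_one_frame(struct v4l2_buffer *src_buf,
                             struct v4l2_buffer *dst_buf)
{
        int fd = open("/dev/video0", O_RDWR);  /* placeholder node */
        struct v4l2_format fmt;
        int type;

        /* source frame: what the application feeds in */
        memset(&fmt, 0, sizeof(fmt));
        fmt.type = V4L2_BUF_TYPE_VIDEO_OUTPUT;
        fmt.fmt.pix.width = 1280;
        fmt.fmt.pix.height = 720;
        fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_YUYV;
        ioctl(fd, VIDIOC_S_FMT, &fmt);

        /* destination frame: the resized result; together the two S_FMT
         * calls give the driver the required down/up scale factor */
        fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        fmt.fmt.pix.width = 640;
        fmt.fmt.pix.height = 480;
        ioctl(fd, VIDIOC_S_FMT, &fmt);

        /* queue one input and one output buffer, then start */
        ioctl(fd, VIDIOC_QBUF, src_buf);
        ioctl(fd, VIDIOC_QBUF, dst_buf);
        type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        ioctl(fd, VIDIOC_STREAMON, &type);
}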

iivanov




 Murali




Re: ISP OMAP3 camera support ov7690

2009-10-05 Thread michael

Hi,

Aguirre Rodriguez, Sergio Alberto wrote:
Hi Michael, 


-Original Message-
From: linux-omap-ow...@vger.kernel.org 
[mailto:linux-omap-ow...@vger.kernel.org] On Behalf Of michael

Sent: Sunday, October 04, 2009 7:29 PM
To: Nishanth Menon
Cc: linux-o...@vger.kernel.org; linux-media@vger.kernel.org
Subject: Re: ISP OMAP3 camera support ov7690

Hi,

cc: linux-media

Nishanth Menon wrote:

michael said the following on 10/03/2009 06:13 PM:

I'm writing a driver to support the ov7690 camera and I have some
questions about the meaning of:

- datalane configuration
CSI2 Data lanes - each CSI2 lane is a differential pair.
And, at least 1 clock and data lane is used in devices.

Sorry, can you explain a little bit more? I have the camera
connected to cam_hs and cam_vs and the data is 8 bit. I use the
isp init structure. The SCCB bus works great and I can send
configuration to it, but I don't receive any interrupt from the ISP;
it seems that it doesn't see the transaction:

The ISPCCDC: ###CCDC SYN_MODE=0x31704 seems ok.


static struct isp_interface_config ov7690_if_config = {
.ccdc_par_ser   = ISP_CSIA,
.dataline_shift = 0x0,
.hsvs_syncdetect= ISPCTRL_SYNC_DETECT_VSFALL,


Can you try with ISPCTRL_SYNC_DETECT_VSRISE ?


I will just try to invert the polarity; I have no problem sharing the code
with the community.
The documentation is missing info about the 4 clocks and 8 clocks before and
after the frame.
I have the limitation that the camera is always on, so in the POWER_DOWN
condition I reset it to put the output and signals into a high-Z state. I
think that is correct.
Now I will try to take a look at the logic, because the interrupt I receive
must be normal when I pull out the signal connection. I can send the patch
set to the linux-media mailing list to share the code; maybe someone else is
working on the same device. Do you think that is a good idea? There is a lot
of work to do to set up the various controls, and I don't have perfect
knowledge of the OMAP part. I try to read the code, but I suspect that I can
basically have two problems:

- a missed register setting (so I don't enable the signal output correctly),
- or something regarding the timing.

The documentation is lacking and incomplete about timing and settings, and
I'm under NDA, but the code is of course GPL and I intend to submit it for
review, or to help people like me who are working on the same issue.




.strobe = 0x0,
.prestrobe  = 0x0,
.shutter= 0x0,
.wenlog = ISPCCDC_CFG_WENLOG_AND,
.wait_hs_vs = 0x4,
.raw_fmt_in = ISPCCDC_INPUT_FMT_GR_BG,
.u.csi.crc  = 0x0,
.u.csi.mode = 0x0,
.u.csi.edge = 0x0,
.u.csi.signalling   = 0x0,
.u.csi.strobe_clock_inv = 0x0,
.u.csi.vs_edge  = 0x0,
.u.csi.channel  = 0x0,
.u.csi.vpclk= 0x1,
.u.csi.data_start   = 0x0,
.u.csi.data_size= 0x0,
.u.csi.format   = V4L2_PIX_FMT_YUYV,
};

and I don't know the meaning of

lanecfg.clk.pol = OV7690_CSI2_CLOCK_POLARITY;
lanecfg.clk.pos = OV7690_CSI2_CLOCK_LANE;
lanecfg.data[0].pol = OV7690_CSI2_DATA0_POLARITY;
lanecfg.data[0].pos = OV7690_CSI2_DATA0_LANE;
lanecfg.data[1].pol = OV7690_CSI2_DATA1_POLARITY;
lanecfg.data[1].pos = OV7690_CSI2_DATA1_LANE;
lanecfg.data[2].pol = 0;
lanecfg.data[2].pos = 0;
lanecfg.data[3].pol = 0;
lanecfg.data[3].pos = 0;



These are the physical connection details:

- The .pol field stands for the differential pair polarity
  (i.e. the order in which the negative and positive connections
  are plugged in to the CSI2 ComplexIO module).

- The .pos field tells in which of the 4 physically available
  positions your clock or data lane is located.


OK, so if I don't receive the clock because of wrong routing, I can't read
the data, but I can still receive interrupts for vsync falling or rising
transitions. Is that correct?



Regards,
Sergio


Thanks for your time.
Regards, Michael



- phyconfiguration
PHY - Physical timing configurations. BTW, if it is camera specific
you could get a lot of inputs from [1].

OK, I will ask them.


Regards,
Nishanth Menon

Ref:
[1] http://vger.kernel.org/vger-lists.html#linux-media


Michael