Re: New media framework user space usage
Hello,

I successfully got images out reliably. I took the Nokia tree and mixed your mt9t031 and Laurent's mt9t001 code together. It's just a bare-bones driver that captures images at the default size. It's not cleaned up yet, but I'm posting what I have.

Greetings, Bastian

2010/12/6 Guennadi Liakhovetski g.liakhovet...@gmx.de:
> On Thu, 4 Nov 2010, Laurent Pinchart wrote:
> > Hi Bastian,
>
> Hi Bastian, all
>
> Has anyone been successful getting the mt9p031 to work with the omap3 ISP driver? If so - can I have the code? Or even if it never worked - could you post the latest version of your driver and platform bindings?
>
> Thanks
> Guennadi
>
> > On Tuesday 02 November 2010 11:31:28 Bastian Hecht wrote:
> > > Am I the first guy needing a 12-bit bus?
> >
> > Yes you are :-) You will need to implement 12-bit support in the ISP driver, or start by hacking the sensor driver to report a 10-bit format (2 bits will be lost, but you should still be able to capture an image).
> >
> > > Isn't that an officially supported procedure, to drop the least significant bits? You gave me the ISP configuration
> > >
> > >   .bus = { .parallel = { .data_lane_shift = 1, ...
> > >
> > > that instructs the ISP to use 10 of the 12 bits.
> >
> > If you don't need the full 12 bits, sure, that should work.
> >
> > > Second thing is, the yavta app now gets stuck while dequeuing a buffer.
> > >
> > >   strace ./yavta -f SGRBG10 -s 2592x1944 -n 4 --capture=4 --skip 3 -F /dev/video2
> > >   ...
> > >   ioctl(3, VIDIOC_QBUF, 0xbec111cc) = 0
> > >   ioctl(3, VIDIOC_QBUF, 0xbec111cc) = 0
> > >   ioctl(3, VIDIOC_QBUF, 0xbec111cc) = 0
> > >   ioctl(3, VIDIOC_QBUF, 0xbec111cc) = 0
> > >   ioctl(3, VIDIOC_STREAMON, 0xbec11154) = 0
> > >   ioctl(3, VIDIOC_DQBUF
> > >
> > > strace gets stuck in the middle of this line. Somehow the ISP_ENABLE_IRQ register was reset at some point that is unclear to me. When I turn it on again manually, yavta succeeds in reading the frames.
> >
> > That's weird. Let me know if you can reproduce the problem.
> >
> > > Unfortunately the image consists of black pixels only.
> > > We found out that the 2.8V voltage regulator got broken in the course of development - the 1.8V logic still worked but the ADC did not... But what the heck - I was never that close :)
> >
> > --
> > Regards,
> > Laurent Pinchart

--
To unsubscribe from this list: send the line "unsubscribe linux-media" in the body of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html

---
Guennadi Liakhovetski, Ph.D.
Freelance Open-Source Software Developer
http://www.open-technology.de/

/*
 * arch/arm/mach-omap2/board-bastix.c
 *
 * Copyright (C) 2010 Bastian Hecht hec...@gmail.com
 *
 * based on
 *
 * Copyright (C) 2008 Nokia Corporation
 *
 * Contact: Sakari Ailus sakari.ai...@nokia.com
 *          Tuukka Toivonen tuukka.o.toivo...@nokia.com
 *
 * This program is free software; you can redistribute it and/or
 * modify it under the terms of the GNU General Public License
 * version 2 as published by the Free Software Foundation.
 *
 * This program is distributed in the hope that it will be useful, but
 * WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
 * General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License
 * along with this program; if not, write to the Free Software
 * Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA
 * 02110-1301 USA
 */

#include <linux/i2c.h>
#include <linux/i2c/twl.h>
#include <linux/delay.h>
#include <linux/mm.h>
#include <linux/platform_device.h>
#include <linux/videodev2.h>

#include <asm/gpio.h>
#include <plat/control.h>

#include "../../../drivers/media/video/isp/isp.h"
#include "../../../drivers/media/video/isp/ispreg.h"
#include <media/mt9t001.h>

#include "devices.h"

#define GPIO_DIR_OUTPUT 0

/* IGEP CAM BUS NUM */
#define BASTIX_CAM_I2C_BUS_NUM 2

static int __init cam_init(void)
{
	return 0;
}

static int bastix_configure_interface(struct v4l2_subdev *subdev,
				      int width, int height)
{
	struct isp_device *isp = v4l2_dev_to_isp_device(subdev->v4l2_dev);
	/* isp_set_pixel_clock(isp, 0); */
	return 0;
}

static struct mt9t001_platform_data bastix_mt9p031_platform_data = {
	.clk_pol = 0,
};

static struct i2c_board_info bastix_camera_i2c_devices[] = {
	{
		I2C_BOARD_INFO(MT9P031_NAME, MT9P031_I2C_ADDR),
		.platform_data = &bastix_mt9p031_platform_data,
	},
};

static struct v4l2_subdev_i2c_board_info bastix_camera_mt9p031[] = {
	{
		.board_info = &bastix_camera_i2c_devices[0],
		.i2c_adapter_id = BASTIX_CAM_I2C_BUS_NUM,
	},
	{ NULL, 0, },
};

static struct isp_v4l2_subdevs_group bastix_camera_subdevs[] = {
	{
		.subdevs = bastix_camera_mt9p031,
		.interface = ISP_INTERFACE_PARALLEL,
		.bus = {
			.parallel = {
				.data_lane_shift = 1,
				.clk_pol = 0,
				.bridge
Re: New media framework user space usage
Hello,

2010/12/7 Guennadi Liakhovetski g.liakhovet...@gmx.de:
> Hi Bastian
>
> On Tue, 7 Dec 2010, Bastian Hecht wrote:
> > Hello, I successfully got images out reliably. I took the Nokia tree and mixed your mt9t031 and Laurent's mt9t001 code together. It's just a bare-bones driver that captures images at the default size. It's not cleaned up yet, but I'm posting what I have.
>
> Thanks! Will have a look and give them a try. If you like, I can try to help you clean up the patches and mainline them - at least to Laurent's tree ;)

That would be superb, as we are not continuing our mt9p031 development - we are trying an Omnivision chip instead. On the other hand, I have received so much support here that I want the driver code to be in open hands and not in the bin. Sung Hee is developing on the mt9p031, too. We should try to mainline the mt9p031 code.

cheers, Bastian

> Thanks
> Guennadi

---
Guennadi Liakhovetski, Ph.D.
Freelance Open-Source Software Developer
http://www.open-technology.de/
Re: New media framework user space usage
On Thu, 4 Nov 2010, Laurent Pinchart wrote:
> Hi Bastian,

Hi Bastian, all

Has anyone been successful getting the mt9p031 to work with the omap3 ISP driver? If so - can I have the code? Or even if it never worked - could you post the latest version of your driver and platform bindings?

Thanks
Guennadi

> On Tuesday 02 November 2010 11:31:28 Bastian Hecht wrote:
> > Am I the first guy needing a 12-bit bus?
>
> Yes you are :-) You will need to implement 12-bit support in the ISP driver, or start by hacking the sensor driver to report a 10-bit format (2 bits will be lost, but you should still be able to capture an image).
>
> > Isn't that an officially supported procedure, to drop the least significant bits? You gave me the ISP configuration
> >
> >   .bus = { .parallel = { .data_lane_shift = 1, ...
> >
> > that instructs the ISP to use 10 of the 12 bits.
>
> If you don't need the full 12 bits, sure, that should work.
>
> > Second thing is, the yavta app now gets stuck while dequeuing a buffer.
> >
> >   strace ./yavta -f SGRBG10 -s 2592x1944 -n 4 --capture=4 --skip 3 -F /dev/video2
> >   ...
> >   ioctl(3, VIDIOC_QBUF, 0xbec111cc) = 0
> >   ioctl(3, VIDIOC_QBUF, 0xbec111cc) = 0
> >   ioctl(3, VIDIOC_QBUF, 0xbec111cc) = 0
> >   ioctl(3, VIDIOC_QBUF, 0xbec111cc) = 0
> >   ioctl(3, VIDIOC_STREAMON, 0xbec11154) = 0
> >   ioctl(3, VIDIOC_DQBUF
> >
> > strace gets stuck in the middle of this line. Somehow the ISP_ENABLE_IRQ register was reset at some point that is unclear to me. When I turn it on again manually, yavta succeeds in reading the frames.
>
> That's weird. Let me know if you can reproduce the problem.
>
> > Unfortunately the image consists of black pixels only. We found out that the 2.8V voltage regulator got broken in the course of development - the 1.8V logic still worked but the ADC did not... But what the heck - I was never that close :)
>
> --
> Regards,
> Laurent Pinchart

---
Guennadi Liakhovetski, Ph.D.
Freelance Open-Source Software Developer
http://www.open-technology.de/
Re: New media framework user space usage
Hi Bastian,

On Tuesday 02 November 2010 11:31:28 Bastian Hecht wrote:
> Am I the first guy needing a 12-bit bus?

Yes you are :-) You will need to implement 12-bit support in the ISP driver, or start by hacking the sensor driver to report a 10-bit format (2 bits will be lost, but you should still be able to capture an image).

> Isn't that an officially supported procedure, to drop the least significant bits? You gave me the ISP configuration
>
>   .bus = { .parallel = { .data_lane_shift = 1, ...
>
> that instructs the ISP to use 10 of the 12 bits.

If you don't need the full 12 bits, sure, that should work.

> Second thing is, the yavta app now gets stuck while dequeuing a buffer.
>
>   strace ./yavta -f SGRBG10 -s 2592x1944 -n 4 --capture=4 --skip 3 -F /dev/video2
>   ...
>   ioctl(3, VIDIOC_QBUF, 0xbec111cc) = 0
>   ioctl(3, VIDIOC_QBUF, 0xbec111cc) = 0
>   ioctl(3, VIDIOC_QBUF, 0xbec111cc) = 0
>   ioctl(3, VIDIOC_QBUF, 0xbec111cc) = 0
>   ioctl(3, VIDIOC_STREAMON, 0xbec11154) = 0
>   ioctl(3, VIDIOC_DQBUF
>
> strace gets stuck in the middle of this line. Somehow the ISP_ENABLE_IRQ register was reset at some point that is unclear to me. When I turn it on again manually, yavta succeeds in reading the frames.

That's weird. Let me know if you can reproduce the problem.

> Unfortunately the image consists of black pixels only. We found out that the 2.8V voltage regulator got broken in the course of development - the 1.8V logic still worked but the ADC did not... But what the heck - I was never that close :)

--
Regards,
Laurent Pinchart
Re: New media framework user space usage
Hello Laurent,

> > Am I the first guy needing a 12-bit bus?
>
> Yes you are :-) You will need to implement 12-bit support in the ISP driver, or start by hacking the sensor driver to report a 10-bit format (2 bits will be lost, but you should still be able to capture an image).

Isn't that an officially supported procedure, to drop the least significant bits? You gave me the ISP configuration

  .bus = { .parallel = { .data_lane_shift = 1, ...

that instructs the ISP to use 10 of the 12 bits.

Second thing is, the yavta app now gets stuck while dequeuing a buffer.

  strace ./yavta -f SGRBG10 -s 2592x1944 -n 4 --capture=4 --skip 3 -F /dev/video2
  ...
  ioctl(3, VIDIOC_QBUF, 0xbec111cc) = 0
  ioctl(3, VIDIOC_QBUF, 0xbec111cc) = 0
  ioctl(3, VIDIOC_QBUF, 0xbec111cc) = 0
  ioctl(3, VIDIOC_QBUF, 0xbec111cc) = 0
  ioctl(3, VIDIOC_STREAMON, 0xbec11154) = 0
  ioctl(3, VIDIOC_DQBUF

strace gets stuck in the middle of this line. Somehow the ISP_ENABLE_IRQ register was reset at some point that is unclear to me. When I turn it on again manually, yavta succeeds in reading the frames. Unfortunately the image consists of black pixels only. We found out that the 2.8V voltage regulator got broken in the course of development - the 1.8V logic still worked but the ADC did not... But what the heck - I was never that close :)

bye, Bastian
Re: New media framework user space usage
On 11/2/2010 3:31 AM, Bastian Hecht wrote:
> Hello Laurent,
>
> > > Am I the first guy needing a 12-bit bus?
> >
> > Yes you are :-) You will need to implement 12-bit support in the ISP driver, or start by hacking the sensor driver to report a 10-bit format (2 bits will be lost, but you should still be able to capture an image).
>
> Isn't that an officially supported procedure, to drop the least significant bits? You gave me the ISP configuration
>
>   .bus = { .parallel = { .data_lane_shift = 1, ...
>
> that instructs the ISP to use 10 of the 12 bits.

I suspect what Laurent means is that there's no way to send out 12-bit raw data to memory without ISP code changes. You can connect up a 12-bit sensor and just decimate to 10 bits, but that's not the same as the ISP driver supporting the 12-bit data paths that are possible in hardware.

That is, the OMAP3 ISP _is_ capable of writing out raw data to memory at 12 bits per pixel, but the current ISP code hardcodes the data lane shift value in the ISP configuration instead of making it depend on the format - you'd want GRBG12 to set the data lane shift to 0, and GRBG10 or UYVY (enum names approximate) to set the data lane shift to 1. We had a hack deep in the ISP code for a while that did this, but it was hardcoded for the MT9P031, and we abandoned the idea pretty quickly.

Eino-Ville Talvala
Stanford University
Re: New media framework user space usage
> To clarify this: the number of pixels in an image sensor is typically simply the number of independent photosites - so the 5-MP MT9P031 sensor will give you a raw image with 5 million 12-bit values in it (not 5x3 million, or 5x4 million, just 5 million). Each photosite is covered by a single color filter, so each 12-bit raw value represents a single color channel, and it is the only color channel measured at that pixel.
>
> Which color channel is recorded for each pixel depends on the arrangement of the color filters. The most common arrangement is the Bayer pattern, which you wrote:
>
>   G R G R G R G R
>   B G B G B G B G
>   G R G R G R G R
>   B G B G B G B G
>
> So the top-left pixel in the sensor is covered by a green filter, the one to the right of it is covered by a red filter, and the one below it by a blue filter. The pattern tiles across the whole sensor in this fashion. (Note that which color is top-leftmost does vary between sensors, but the basic repeating tile is the same - two greens for each red and blue, diagonally arranged.)
>
> To convert this 5-million-pixel raw image into a 5-million-pixel RGB image, you have to demosaic the image - come up with the missing two color values for each pixel. Suffice it to say that there are lots of ways to do this, of varying levels of complexity and quality. The OMAP3 ISP preview pipe runs such a method in hardware, to give you a 3-channel YUV 4:2:2 output from a raw sensor image, with 5 million Y values, 2.5 million U values, and 2.5 million V values. There is a 3x3 color conversion matrix inside the preview pipeline that converts from the sensor's RGB space to a standard RGB space (at least if you set up the matrix right), and then a second matrix to go from that RGB space to YUV. The number of bits per channel also gets reduced from 10 to 8 using a gamma lookup table.
>
> So if you ask the ISP for raw data, you get 5 million 16-bit values total (of which only the lower 10 or 12 bits are valid). If you ask it for YUV data, you'll get 10 million 8-bit values.
>
> Hope that clarifies, and doesn't further confuse things.

OK, sure! Somehow I was stuck on the idea that you can get only one pixel from each quadruple, but as you said, you can look at the neighbourhood of each raw pixel with a kernel matrix. Another step towards a clearer understanding of the matter, thank you.

So, I followed the stuck ioctl in the code until I saw that the ISP simply waits for an image to complete. As the signals seem to come out of the chip correctly, I will double-check my mux settings and investigate the ISP_IRQ0STATUS register to see if interrupts are generated at all. The reference manual states on page 1503 that this register is located at 0x480B C010 in physical memory. Instead of polluting the kernel code, I tried to use inw() to read the register from userspace:

  unsigned int a;
  a = inw(0xC010480B); // and I tried
  a = inw(0x480BC010);

Both tries gave me segfaults. Any idea why that does not work? For now I have put the debug message in the kernel code.

bye, Bastian

> Eino-Ville Talvala
> Stanford University
Re: New media framework user space usage
Hi Bastian,

On Friday 29 October 2010 16:06:18 Bastian Hecht wrote:
> Hello Laurent, sorry I am flooding a bit here, but now I have reached a point where I am really stuck. In get_fmt_pad I set the following format:
>
>   *format = mt9p031->format;
>
> which is defined as
>
>   mt9p031->format.code = V4L2_MBUS_FMT_SGRBG10_1X10;
>   mt9p031->format.width = MT9P031_MAX_WIDTH;
>   mt9p031->format.height = MT9P031_MAX_HEIGHT;
>   mt9p031->format.field = V4L2_FIELD_NONE;
>   mt9p031->format.colorspace = V4L2_COLORSPACE_SRGB;
>
> I found the different formats in include/linux/v4l2-mediabus.h. I have 12 data bit channels, but there is no enum for that (like V4L2_MBUS_FMT_SGRBG10_1X12). Am I the first guy needing a 12-bit bus?

Yes you are :-) You will need to implement 12-bit support in the ISP driver, or start by hacking the sensor driver to report a 10-bit format (2 bits will be lost, but you should still be able to capture an image).

> Second thing is, the yavta app now gets stuck while dequeuing a buffer.
>
>   strace ./yavta -f SGRBG10 -s 2592x1944 -n 4 --capture=4 --skip 3 -F /dev/video2
>   ...
>   ioctl(3, VIDIOC_QBUF, 0xbec111cc) = 0
>   ioctl(3, VIDIOC_QBUF, 0xbec111cc) = 0
>   ioctl(3, VIDIOC_QBUF, 0xbec111cc) = 0
>   ioctl(3, VIDIOC_QBUF, 0xbec111cc) = 0
>   ioctl(3, VIDIOC_STREAMON, 0xbec11154) = 0
>   ioctl(3, VIDIOC_DQBUF
>
> strace gets stuck in the middle of this line.

--
Regards,
Laurent Pinchart
Re: New media framework user space usage
Hi Bastian,

On Monday 01 November 2010 12:10:55 Bastian Hecht wrote:
> > To clarify this: the number of pixels in an image sensor is typically simply the number of independent photosites - so the 5-MP MT9P031 sensor will give you a raw image with 5 million 12-bit values in it (not 5x3 million, or 5x4 million, just 5 million). Each photosite is covered by a single color filter, so each 12-bit raw value represents a single color channel, and it is the only color channel measured at that pixel.
> >
> > Which color channel is recorded for each pixel depends on the arrangement of the color filters. The most common arrangement is the Bayer pattern, which you wrote:
> >
> >   G R G R G R G R
> >   B G B G B G B G
> >   G R G R G R G R
> >   B G B G B G B G
> >
> > So the top-left pixel in the sensor is covered by a green filter, the one to the right of it is covered by a red filter, and the one below it by a blue filter. The pattern tiles across the whole sensor in this fashion. (Note that which color is top-leftmost does vary between sensors, but the basic repeating tile is the same - two greens for each red and blue, diagonally arranged.)
> >
> > To convert this 5-million-pixel raw image into a 5-million-pixel RGB image, you have to demosaic the image - come up with the missing two color values for each pixel. Suffice it to say that there are lots of ways to do this, of varying levels of complexity and quality. The OMAP3 ISP preview pipe runs such a method in hardware, to give you a 3-channel YUV 4:2:2 output from a raw sensor image, with 5 million Y values, 2.5 million U values, and 2.5 million V values. There is a 3x3 color conversion matrix inside the preview pipeline that converts from the sensor's RGB space to a standard RGB space (at least if you set up the matrix right), and then a second matrix to go from that RGB space to YUV. The number of bits per channel also gets reduced from 10 to 8 using a gamma lookup table.
> >
> > So if you ask the ISP for raw data, you get 5 million 16-bit values total (of which only the lower 10 or 12 bits are valid). If you ask it for YUV data, you'll get 10 million 8-bit values.
> >
> > Hope that clarifies, and doesn't further confuse things.
>
> OK, sure! Somehow I was stuck on the idea that you can get only one pixel from each quadruple, but as you said, you can look at the neighbourhood of each raw pixel with a kernel matrix. Another step towards a clearer understanding of the matter, thank you.
>
> So, I followed the stuck ioctl in the code until I saw that the ISP simply waits for an image to complete. As the signals seem to come out of the chip correctly, I will double-check my mux settings and investigate the ISP_IRQ0STATUS register to see if interrupts are generated at all.

Try to capture raw data first (at the CCDC output). If the ISP driver waits endlessly for the frame to arrive, it probably means that the sensor outputs fewer columns/lines than the ISP expects.

> The reference manual states on page 1503 that this register is located at 0x480B C010 in physical memory. Instead of polluting the kernel code, I tried to use inw() to read the register from userspace:
>
>   unsigned int a;
>   a = inw(0xC010480B); // and I tried
>   a = inw(0x480BC010);
>
> Both tries gave me segfaults. Any idea why that does not work?

You can't read physical memory like that. Userspace applications can only access their virtual memory space. You could mmap() /dev/mem, but it's not worth the effort.

> For now I have put the debug message in the kernel code.

That's a better solution.

--
Regards,
Laurent Pinchart
Re: New media framework user space usage
Hello Eino-Ville,

> Most of the ISP can't handle more than 10-bit input - unless you're streaming raw sensor data straight to memory, you'll have to use the bridge lane shifter to decimate the input. In the new framework, I don't know how that's done, unfortunately.

Thank you for pointing me to it. Now I have read about it in the technical reference manual too (http://focus.ti.com/lit/ug/spruf98k/spruf98k.pdf). On page 1392 it mentions the possibility of reducing the precision from 12 to 10 bits. It turns out Laurent already sent me the right configuration in a side note to a former post of mine. On page 1574 I found another related register, CCDC_FMTCFG: here you can select which 10 of the 12 bits you want to keep.

I looked up the code flow for the ISP framework and post it here for reference:

  static struct isp_v4l2_subdevs_group bastix_camera_subdevs[] = {
  	{
  		.subdevs = bastix_camera_mt9p031,
  		.interface = ISP_INTERFACE_PARALLEL,
  		.bus = {
  			.parallel = {
  				.data_lane_shift = 1,
  				.clk_pol = 1,
  				.bridge = ISPCTRL_PAR_BRIDGE_DISABLE,
  			}
  		},
  	},
  	{ NULL, 0, },
  };

  static struct isp_platform_data bastix_isp_platform_data = {
  	.subdevs = bastix_camera_subdevs,
  };

  ...
  omap3isp_device.dev.platform_data = &bastix_isp_platform_data;

The config is handled in isp.c here:

  void isp_configure_bridge(struct isp_device *isp,
  			  enum ccdc_input_entity input,
  			  const struct isp_parallel_platform_data *pdata)
  {
  	...
  	switch (input) {
  	case CCDC_INPUT_PARALLEL:
  		ispctrl_val |= ISPCTRL_PAR_SER_CLK_SEL_PARALLEL;
  		ispctrl_val |= pdata->data_lane_shift << ISPCTRL_SHIFT_SHIFT;
  		ispctrl_val |= pdata->clk_pol << ISPCTRL_PAR_CLK_POL_SHIFT;
  		ispctrl_val |= pdata->bridge << ISPCTRL_PAR_BRIDGE_SHIFT;
  		break;
  	...
  	}

> Also, technically, the mt9p031 output colorspace is not sRGB, although I'm not sure how close it is. It's its own sensor-specific space, determined by the color filters on it, and you'll want to calibrate for it at some point.

The output format of the sensor is

  R  Gr
  Gb B

The same colorspace is given as an example in spruf98k on page 1409. I am still confused about the semantics of "1 pixel" there. Is it the quadruple of Bayer values, or each component? Or does it depend on the context? Does the sensor send 5 MP of data to the ISP, or 5 MP x 4 Bayer values? Does the 12-bit width belong to each Bayer value? In the sensor you read from right to left; I don't know if the ISP doc means reading left to right. And so on and so on...

> Good luck,

As you can see, I need and appreciate it :)

About the freezing ioctl: I discovered that I have a clocking issue. I will solve it on Monday and see if it works better and has an impact on the ISP driver.

> Eino-Ville Talvala
> Stanford University

cheers, Bastian
Re: New media framework user space usage
Hello Laurent,

> With the media-ctl and yavta test applications, just run
>
>   ./media-ctl -r -l '"mt9t001 3-005d":0->"OMAP3 ISP CCDC":0[1], "OMAP3 ISP CCDC":1->"OMAP3 ISP CCDC output":0[1]'
>   ./media-ctl -f '"mt9t001 3-005d":0[SGRBG10 1024x768], "OMAP3 ISP CCDC":1[SGRBG10 1024x768]'
>   ./yavta -f SGRBG10 -s 1024x768 -n 4 --capture=4 --skip 3 -F $(./media-ctl -e "OMAP3 ISP CCDC output")
>
> Replace all occurrences of 1024x768 with your sensor's native resolution, and "mt9t001 3-005d" with the sensor subdev name.

I did as you said and everything works fine until I use yavta:

  Video format set: width: 2952 height: 1944 buffer size: 11508480
  Video format: BA10 (30314142) 2952x1944
  4 buffers requested.
  length: 11508480 offset: 0
  Buffer 0 mapped at address 0x4016d000.
  length: 11508480 offset: 11509760
  Buffer 1 mapped at address 0x40c67000.
  length: 11508480 offset: 23019520
  Buffer 2 mapped at address 0x41761000.
  length: 11508480 offset: 34529280
  Buffer 3 mapped at address 0x4225b000.
  Unable to start streaming: 22

This is in

  ret = ioctl(dev->fd, enable ? VIDIOC_STREAMON : VIDIOC_STREAMOFF, &type);

errno 22 is: Invalid argument. Any ideas where to look next?

Thanks, Bastian

> --
> Regards,
> Laurent Pinchart
Re: New media framework user space usage
Hi,

> I did as you said and everything works fine until I use yavta:
>
>   Video format set: width: 2952 height: 1944 buffer size: 11508480
>   Video format: BA10 (30314142) 2952x1944

Oops, I had a typo... 2952 becomes 2592.

>   4 buffers requested.
>   length: 11508480 offset: 0
>   Buffer 0 mapped at address 0x4016d000.
>   length: 11508480 offset: 11509760
>   Buffer 1 mapped at address 0x40c67000.
>   length: 11508480 offset: 23019520
>   Buffer 2 mapped at address 0x41761000.
>   length: 11508480 offset: 34529280
>   Buffer 3 mapped at address 0x4225b000.
>   Unable to start streaming: 22
>
> This is in
>
>   ret = ioctl(dev->fd, enable ? VIDIOC_STREAMON : VIDIOC_STREAMOFF, &type);
>
> errno 22 is: Invalid argument

Now it becomes

  Unable to start streaming: 32
  : Broken pipe

I will check if the video format of the sensor chip is SGRBG10 by default.

cheers, Bastian

> Any ideas where to look next?
>
> Thanks, Bastian
Re: New media framework user space usage
Hello,

> Now it becomes
>
>   Unable to start streaming: 32
>   : Broken pipe

I saw that my stub mt9p031_get_format gets called. Thanks a lot. So I have reached the point where I can fill my driver with life.

> I will check if the video format of the sensor chip is SGRBG10 by default.

I guess this is GRBG. What does the S stand for? The datasheet says:

  "Pixels are output in a Bayer pattern format consisting of four 'colors' - GreenR, GreenB, Red, and Blue (Gr, Gb, R, B) - representing three filter colors. When no mirror modes are enabled, the first row output alternates between Gr and R pixels, and the second row output alternates between B and Gb pixels. The Gr and Gb pixels have the same color filter, but they are treated as separate colors by the data path and analog signal chain."

ciao, Bastian
Re: New media framework user space usage
Hello Laurent,

sorry I am flooding a bit here, but now I have reached a point where I am really stuck. In get_fmt_pad I set the following format:

  *format = mt9p031->format;

which is defined as

  mt9p031->format.code = V4L2_MBUS_FMT_SGRBG10_1X10;
  mt9p031->format.width = MT9P031_MAX_WIDTH;
  mt9p031->format.height = MT9P031_MAX_HEIGHT;
  mt9p031->format.field = V4L2_FIELD_NONE;
  mt9p031->format.colorspace = V4L2_COLORSPACE_SRGB;

I found the different formats in include/linux/v4l2-mediabus.h. I have 12 data bit channels, but there is no enum for that (like V4L2_MBUS_FMT_SGRBG10_1X12). Am I the first guy needing a 12-bit bus?

Second thing is, the yavta app now gets stuck while dequeuing a buffer.

  strace ./yavta -f SGRBG10 -s 2592x1944 -n 4 --capture=4 --skip 3 -F /dev/video2
  ...
  ioctl(3, VIDIOC_QBUF, 0xbec111cc) = 0
  ioctl(3, VIDIOC_QBUF, 0xbec111cc) = 0
  ioctl(3, VIDIOC_QBUF, 0xbec111cc) = 0
  ioctl(3, VIDIOC_QBUF, 0xbec111cc) = 0
  ioctl(3, VIDIOC_STREAMON, 0xbec11154) = 0
  ioctl(3, VIDIOC_DQBUF

strace gets stuck in the middle of this line.

cheers, Bastian
Re: New media framework user space usage
On 10/29/2010 7:06 AM, Bastian Hecht wrote:
> Hello Laurent, sorry I am flooding a bit here, but now I have reached a point where I am really stuck. In get_fmt_pad I set the following format:
>
>   *format = mt9p031->format;
>
> which is defined as
>
>   mt9p031->format.code = V4L2_MBUS_FMT_SGRBG10_1X10;
>   mt9p031->format.width = MT9P031_MAX_WIDTH;
>   mt9p031->format.height = MT9P031_MAX_HEIGHT;
>   mt9p031->format.field = V4L2_FIELD_NONE;
>   mt9p031->format.colorspace = V4L2_COLORSPACE_SRGB;
>
> I found the different formats in include/linux/v4l2-mediabus.h. I have 12 data bit channels, but there is no enum for that (like V4L2_MBUS_FMT_SGRBG10_1X12). Am I the first guy needing a 12-bit bus?

Most of the ISP can't handle more than 10-bit input - unless you're streaming raw sensor data straight to memory, you'll have to use the bridge lane shifter to decimate the input. In the new framework, I don't know how that's done, unfortunately.

Also, technically, the mt9p031 output colorspace is not sRGB, although I'm not sure how close it is. It's its own sensor-specific space, determined by the color filters on it, and you'll want to calibrate for it at some point.

Good luck,

Eino-Ville Talvala
Stanford University
New media framework user space usage
Hello Laurent,

my mt9p031 camera project for the omap3530 ISP has come to the point where the ISP registered video[0-6], media0 and v4l-subdev[0-7]. As far as I can see from the names...

cat /sys/class/video4linux/video*/names
OMAP3 ISP CCP2 input
OMAP3 ISP CSI2a output
OMAP3 ISP CCDC output
OMAP3 ISP preview input
OMAP3 ISP preview output
OMAP3 ISP resizer input
OMAP3 ISP resizer output

cat /sys/class/video4linux/v4l-subdev*/names
OMAP3 ISP CCP2
OMAP3 ISP CSI2a
OMAP3 ISP CCDC
OMAP3 ISP preview
OMAP3 ISP resizer
OMAP3 ISP AEWB
OMAP3 ISP AF
OMAP3 ISP histogram

... I want to read /dev/video2 (CCDC). When I try out a little test program from the V4L2 doc, this line fails:

	ioctl(fd, VIDIOC_G_STD, &std_id)

So far I have adapted your mt9t001 driver and merged it with Guennadi's mt9p031. It contains a lot of stubs that I want to fill out once I succeed in getting them called inside the kernel.

I looked at your presentation for the media controller and wonder if I have to set up a pipeline myself before I can read /dev/video2 (http://linuxtv.org/downloads/presentations/summit_jun_2010/20100614-v4l2_summit-media.pdf). I failed at the point where I wanted to try out the little snippet on page 17, as I don't have definitions of MEDIA_IOC_ENUM_ENTITIES. Are there userspace headers available somewhere?
int fd;
int ret;

fd = open("/dev/media0", O_RDWR);
while (1) {
	struct media_user_entity entity;
	struct media_user_links links;

	ret = ioctl(fd, MEDIA_IOC_ENUM_ENTITIES, &entity);
	if (ret < 0)
		break;
	while (1) {
		ret = ioctl(fd, MEDIA_IOC_ENUM_LINKS, &links);
		if (ret < 0)
			break;
	}
}

Thanks for the help,

 Bastian

APPENDIX A: dmesg

[  103.356018] Linux media interface: v0.10
[  103.356048] device class 'media': registering
[  103.442230] Linux video capture interface: v2.00
[  103.442260] device class 'video4linux': registering
[  103.814239] bus: 'i2c': add driver mt9p031
[  103.894622] bus: 'platform': add driver omap3isp
[  103.933959] address of isp_platform_data in boardconfig: bf065074
[  103.940155] Registering platform device 'omap3isp'. Parent at platform
[  103.940185] device: 'omap3isp': device_add
[  103.940246] bus: 'platform': add device omap3isp
[  103.940490] bus: 'platform': driver_probe_device: matched device omap3isp with driver omap3isp
[  103.940490] bus: 'platform': really_probe: probing driver omap3isp with device omap3isp
[  103.940551] address of isp_platform_data bf065074
[  103.954467] omap3isp omap3isp: Revision 2.0 found
[  103.962738] omap-iommu omap-iommu.0: isp: version 1.1
[  103.969879] omap3isp omap3isp: hist: DMA channel = 0
[  103.970001] omap3isp omap3isp: isp_set_xclk(): cam_xclka set to 576 Hz
[  103.972229] omap3isp omap3isp: -ISP Register dump--
[  103.972259] omap3isp omap3isp: ###ISP SYSCONFIG=0x0001
[  103.972259] omap3isp omap3isp: ###ISP SYSSTATUS=0x0001
[  103.972290] omap3isp omap3isp: ###ISP IRQ0ENABLE=0x
[  103.972290] omap3isp omap3isp: ###ISP IRQ0STATUS=0x
[  103.972320] omap3isp omap3isp: ###ISP TCTRL_GRESET_LENGTH=0x
[  103.972320] omap3isp omap3isp: ###ISP TCTRL_PSTRB_REPLAY=0x
[  103.972351] omap3isp omap3isp: ###ISP CTRL=0x00200200
[  103.972351] omap3isp omap3isp: ###ISP TCTRL_CTRL=0x001e
[  103.972381] omap3isp omap3isp: ###ISP TCTRL_FRAME=0x
[  103.972381] omap3isp omap3isp: ###ISP TCTRL_PSTRB_DELAY=0x
[  103.972412] omap3isp omap3isp: ###ISP TCTRL_STRB_DELAY=0x
[  103.972442] omap3isp omap3isp: ###ISP TCTRL_SHUT_DELAY=0x
[  103.972442] omap3isp omap3isp: ###ISP TCTRL_PSTRB_LENGTH=0x
[  103.972473] omap3isp omap3isp: ###ISP TCTRL_STRB_LENGTH=0x
[  103.972473] omap3isp omap3isp: ###ISP TCTRL_SHUT_LENGTH=0x
[  103.972503] omap3isp omap3isp: ###SBL PCR=0x
[  103.972503] omap3isp omap3isp: ###SBL SDR_REQ_EXP=0x
[  103.972534] omap3isp omap3isp:
[  103.974700] device: 'media0': device_add
[  103.975128] device: 'v4l-subdev0': device_add
[  103.975524] device: 'video0': device_add
[  103.975799] device: 'v4l-subdev1': device_add
[  103.976104] device: 'video1': device_add
[  103.976409] device: 'v4l-subdev2': device_add
[  103.976684] device: 'video2': device_add
[  103.976959] device: 'v4l-subdev3': device_add
[  103.977294] device: 'video3': device_add
[  103.977600] device: 'video4': device_add
[  103.977905] device: 'v4l-subdev4': device_add
[  103.978210] device: 'video5': device_add
[  103.978485] device: 'video6': device_add
[  103.978759] device: 'v4l-subdev5': device_add
[  103.979156] device: 'v4l-subdev6': device_add
[  103.979461] device: 'v4l-subdev7': device_add
[  104.752685] device: '2-005d': device_add
[  104.752777] bus: 'i2c': add device 2-005d
[  104.753051] bus: 'i2c': driver_probe_device: matched device 2-005d with driver mt9p031
[  104.753082] bus: 'i2c': really_probe: probing driver mt9p031 with device 2-005d
[  104.769897] mt9p031 2-005d: Detected a MT9P031
Re: New media framework user space usage
Hi,

after reading the thread "controls, subdevs, and media framework" (http://www.spinics.net/lists/linux-media/msg24474.html) I guess I double-posted something here :S

But what I still don't understand is how configuring the camera works. You say that the subdevs (my camera sensor) are configured directly. Two things make me wonder: how does the ISP get informed about the change, and why don't I see my camera in the subdev name list I posted? All subdevs are from the ISP.

My camera already receives a clock, the i2c connection works, and my oscilloscope shows that the sensor is putting out data on the parallel bus pins. But unfortunately I am a complete v4l2 newbie. I read through the v4l2 docs now, but the first example already didn't work because of the new framework. Can you point me to a way to read /dev/video2?

Thank you very much,

 Bastian

2010/10/28 Bastian Hecht hec...@googlemail.com:
> Hello Laurent, my mt9p031 camera project for the omap3530 isp has come to
> the point where the ISP registered video[0-6], media0 and v4l-subdev[0-7].
> As far as I can see from the names...
[snip]
Re: New media framework user space usage
Hi Bastian,

On Thursday 28 October 2010 16:38:01 Bastian Hecht wrote:
> Hello Laurent, my mt9p031 camera project for the omap3530 isp has come to
> the point where the ISP registered video[0-6], media0 and v4l-subdev[0-7].
> As far as I can see from the names...
>
> cat /sys/class/video4linux/video*/names
> OMAP3 ISP CCP2 input
> OMAP3 ISP CSI2a output
> OMAP3 ISP CCDC output
> OMAP3 ISP preview input
> OMAP3 ISP preview output
> OMAP3 ISP resizer input
> OMAP3 ISP resizer output
>
> cat /sys/class/video4linux/v4l-subdev*/names
> OMAP3 ISP CCP2
> OMAP3 ISP CSI2a
> OMAP3 ISP CCDC
> OMAP3 ISP preview
> OMAP3 ISP resizer
> OMAP3 ISP AEWB
> OMAP3 ISP AF
> OMAP3 ISP histogram

That's nice, but you seem to be missing a sensor sub-device. See below.

> ... I want to read /dev/video2 (CCDC). When I try out a little test
> program from the V4L2 doc, this line fails:
>
>	ioctl(fd, VIDIOC_G_STD, &std_id)

The VIDIOC_G_STD ioctl isn't implemented. Just skip that.

> So far I adapted your mt9t001 driver and merged it with Guennadi's
> mt9p031. It contains a lot of stubs that I want to fill out when I
> succeed to make them called inside the kernel.
>
> I looked at your presentation for the media controller and wonder if I
> have to set up a pipeline by myself before I can read /dev/video2
> (http://linuxtv.org/downloads/presentations/summit_jun_2010/20100614-v4l2_summit-media.pdf).
> I failed at the point where I wanted to try out the little snippet on
> page 17 as I don't have definitions of the MEDIA_IOC_ENUM_ENTITIES. Are
> there somewhere userspace headers available?

Yes, in include/linux/media.h.

> int fd;
> fd = open("/dev/media0", O_RDWR);
> while (1) {
>	struct media_user_entity entity;
>	struct media_user_links links;
>
>	ret = ioctl(fd, MEDIA_IOC_ENUM_ENTITIES, &entity);
>	if (ret < 0)
>		break;
>	while (1) {
>		ret = ioctl(fd, MEDIA_IOC_ENUM_LINKS, &links);
>		if (ret < 0)
>			break;
>	}
> }

The structure names have changed; you should now use media_entity and media_links instead of media_user_entity and media_user_links.
You can have a look at http://git.ideasonboard.org/?p=media-ctl.git;a=summary (new-api branch) to see how links are configured.

[snip]

static int mt9p031_probe(struct i2c_client *client,
			 const struct i2c_device_id *did)
{
	struct mt9p031 *mt9p031;
	struct i2c_adapter *adapter = to_i2c_adapter(client->dev.parent);
	int ret;

	if (!i2c_check_functionality(adapter, I2C_FUNC_SMBUS_WORD_DATA)) {
		dev_warn(&adapter->dev,
			 "I2C-Adapter doesn't support I2C_FUNC_SMBUS_WORD\n");
		return -EIO;
	}

	mt9p031 = kzalloc(sizeof(struct mt9p031), GFP_KERNEL);
	if (!mt9p031)
		return -ENOMEM;

	v4l2_i2c_subdev_init(&mt9p031->subdev, client, &mt9p031_subdev_ops);

Add

	mt9p031->subdev.flags |= V4L2_SUBDEV_FL_HAS_DEVNODE;

here to create a subdev node for the sensor.

[snip]

--
Regards,

Laurent Pinchart
Re: New media framework user space usage
Hi Bastian,

On Thursday 28 October 2010 17:16:10 Bastian Hecht wrote:
> after reading the thread "controls, subdevs, and media framework"
> (http://www.spinics.net/lists/linux-media/msg24474.html) I guess I
> double-posted something here :S
>
> But what I still don't understand is how configuring the camera works.
> You say that the subdevs (my camera sensor) are configured directly.
> 2 things make me wonder. How does the ISP get informed about the change

The ISP doesn't need to know about sensor parameters except for image formats. Formats need to be set on both ends of every link, so the ISP will get informed when you set up the sensor -> CCDC link.

> and why don't I see my camera in the subdev name list I posted. All
> subdevs are from the ISP.

See my answer to your previous e-mail for that.

> My camera already receives a clock, the i2c connection works and my
> oscilloscope shows that the sensor is throwing out data on the parallel
> bus pins. But unfortunately I am a complete v4l2 newbie. I read through
> the v4l2 docs now but the first example already didn't work because of
> the new framework. Can you point me to a way to read /dev/video2?

With the media-ctl and yavta test applications, just run

./media-ctl -r -l '"mt9t001 3-005d":0->"OMAP3 ISP CCDC":0[1], "OMAP3 ISP CCDC":1->"OMAP3 ISP CCDC output":0[1]'
./media-ctl -f '"mt9t001 3-005d":0 [SGRBG10 1024x768], "OMAP3 ISP CCDC":1 [SGRBG10 1024x768]'
./yavta -f SGRBG10 -s 1024x768 -n 4 --capture=4 --skip 3 -F $(./media-ctl -e "OMAP3 ISP CCDC output")

Replace all occurrences of 1024x768 with your sensor's native resolution, and "mt9t001 3-005d" with the sensor subdev name.

--
Regards,

Laurent Pinchart