Re: [PATCH 3/3] [media] tvp5150: Migrate to media-controller framework and add video format detection
On Monday, October 03, 2011 04:17:06 Mauro Carvalho Chehab wrote:

On 02-10-2011 18:18, Javier Martinez Canillas wrote:

On Sun, Oct 2, 2011 at 6:30 PM, Sakari Ailus sakari.ai...@iki.fi wrote:

Hi Javier,

Thanks for the patch! It's very interesting to see a driver for a video decoder using the MC interface. Before this we've had just image sensors.

Hello Sakari,

Thanks for your comments.

Javier Martinez Canillas wrote:

+		/* use the standard status register */
+		std_status = tvp5150_read(sd, TVP5150_STATUS_REG_5);
+	else
+		/* use the standard register itself */
+		std_status = std;

Braces would be nice here.

Ok.

+	switch (std_status & VIDEO_STD_MASK) {
+	case VIDEO_STD_NTSC_MJ_BIT:
+	case VIDEO_STD_NTSC_MJ_BIT_AS:
+		return STD_NTSC_MJ;
+
+	case VIDEO_STD_PAL_BDGHIN_BIT:
+	case VIDEO_STD_PAL_BDGHIN_BIT_AS:
+		return STD_PAL_BDGHIN;
+
+	default:
+		return STD_INVALID;
+	}
+
+	return STD_INVALID;

This return won't do anything.

Yes, will clean this.

@@ -704,19 +812,19 @@ static int tvp5150_set_std(struct v4l2_subdev *sd, v4l2_std_id std)

 	if (std == V4L2_STD_ALL) {
 		fmt = 0;	/* Autodetect mode */
 	} else if (std & V4L2_STD_NTSC_443) {
-		fmt = 0xa;
+		fmt = VIDEO_STD_NTSC_4_43_BIT;
 	} else if (std & V4L2_STD_PAL_M) {
-		fmt = 0x6;
+		fmt = VIDEO_STD_PAL_M_BIT;
 	} else if (std & (V4L2_STD_PAL_N | V4L2_STD_PAL_Nc)) {
-		fmt = 0x8;
+		fmt = VIDEO_STD_PAL_COMBINATION_N_BIT;
 	} else {
 		/* Then, test against generic ones */
 		if (std & V4L2_STD_NTSC)
-			fmt = 0x2;
+			fmt = VIDEO_STD_NTSC_MJ_BIT;
 		else if (std & V4L2_STD_PAL)
-			fmt = 0x4;
+			fmt = VIDEO_STD_PAL_BDGHIN_BIT;
 		else if (std & V4L2_STD_SECAM)
-			fmt = 0xc;
+			fmt = VIDEO_STD_SECAM_BIT;
	}

Excellent! Less magic numbers...
+static struct v4l2_mbus_framefmt *
+__tvp5150_get_pad_format(struct tvp5150 *tvp5150, struct v4l2_subdev_fh *fh,
+			 unsigned int pad, enum v4l2_subdev_format_whence which)
+{
+	switch (which) {
+	case V4L2_SUBDEV_FORMAT_TRY:
+		return v4l2_subdev_get_try_format(fh, pad);
+	case V4L2_SUBDEV_FORMAT_ACTIVE:
+		return &tvp5150->format;
+	default:
+		return NULL;

Hmm. This will never happen, but is returning NULL the right thing to do? An easy alternative is to just replace this with an if, since which may only have either of the two values.

Ok, I'll clean it up. I was being a bit paranoid there :)

+static int tvp5150_set_pad_format(struct v4l2_subdev *subdev,
+				  struct v4l2_subdev_fh *fh,
+				  struct v4l2_subdev_format *format)
+{
+	struct tvp5150 *tvp5150 = to_tvp5150(subdev);
+	tvp5150->std_idx = STD_INVALID;

The above assignment will always be overwritten immediately.

Yes, since tvp515x_query_current_std() already returns STD_INVALID on error, the assignment is not needed. Will change that.

+	tvp5150->std_idx = tvp515x_query_current_std(subdev);
+	if (tvp5150->std_idx == STD_INVALID) {
+		v4l2_err(subdev, "Unable to query std\n");
+		return 0;

Isn't this an error?

Yes, I'll change it to report the error to the caller.

+ * tvp515x_mbus_fmt_cap() - V4L2 decoder interface handler for try/s/g_mbus_fmt

The name of the function is different.

Yes, I'll change that.

static const struct v4l2_subdev_video_ops tvp5150_video_ops = {
	.s_routing = tvp5150_s_routing,
+	.s_stream = tvp515x_s_stream,
+	.enum_mbus_fmt = tvp515x_enum_mbus_fmt,
+	.g_mbus_fmt = tvp515x_mbus_fmt,
+	.try_mbus_fmt = tvp515x_mbus_fmt,
+	.s_mbus_fmt = tvp515x_mbus_fmt,
+	.g_parm = tvp515x_g_parm,
+	.s_parm = tvp515x_s_parm,
+	.s_std_output = tvp5150_s_std,

Do we really need both video and pad format ops?

Good question, I don't know. Can this device be used as a standalone v4l2 device? Or is it supposed to always be a part of a video streaming pipeline as a sub-device with a source pad?
Sorry if my questions are silly but, as I stated before, I'm a newbie with v4l2 and MCF.

The tvp5150 driver is used on some em28xx devices. It is nice to add auto-detection code to the driver, but converting it to the media bus should be done with enough care to not break support for the existing devices. So in other words, the tvp5150 driver needs both pad and non-pad ops. Eventually all non-pad variants in subdev drivers should be replaced by the pad variants.
Re: [PATCH 3/3] [media] tvp5150: Migrate to media-controller framework and add video format detection
On Mon, Oct 3, 2011 at 8:30 AM, Hans Verkuil hverk...@xs4all.nl wrote:

On Monday, October 03, 2011 04:17:06 Mauro Carvalho Chehab wrote:

On 02-10-2011 18:18, Javier Martinez Canillas wrote:

Yes, I'll change that.

static const struct v4l2_subdev_video_ops tvp5150_video_ops = {
	.s_routing = tvp5150_s_routing,
+	.s_stream = tvp515x_s_stream,
+	.enum_mbus_fmt = tvp515x_enum_mbus_fmt,
+	.g_mbus_fmt = tvp515x_mbus_fmt,
+	.try_mbus_fmt = tvp515x_mbus_fmt,
+	.s_mbus_fmt = tvp515x_mbus_fmt,
+	.g_parm = tvp515x_g_parm,
+	.s_parm = tvp515x_s_parm,
+	.s_std_output = tvp5150_s_std,

Do we really need both video and pad format ops?

Good question, I don't know. Can this device be used as a standalone v4l2 device? Or is it supposed to always be a part of a video streaming pipeline as a sub-device with a source pad? Sorry if my questions are silly but, as I stated before, I'm a newbie with v4l2 and MCF.

The tvp5150 driver is used on some em28xx devices. It is nice to add auto-detection code to the driver, but converting it to the media bus should be done with enough care to not break support for the existing devices. So in other words, the tvp5150 driver needs both pad and non-pad ops. Eventually all non-pad variants in subdev drivers should be replaced by the pad variants so you don't have duplication of ops. But that will take a lot more work.

Great, that was a doubt I had, thanks for the clarification.

In the specific code of standards auto-detection, a few drivers currently support this feature. They're (or should be) coded to do this: If V4L2_STD_ALL is used, the driver should autodetect the video standard of the currently tuned channel.

Actually, this is optional. As per the spec: "When the standard set is ambiguous drivers may return EINVAL or choose any of the requested standards." Nor does the spec say anything about doing an autodetect when STD_ALL is passed in. Most drivers will just set the std to PAL or NTSC in this case. If you want to autodetect, then use QUERYSTD.
Applications cannot rely on drivers to handle V4L2_STD_ALL the way you say.

The detected standard can be returned to userspace via VIDIOC_G_STD.

No! G_STD always returns the current *selected* standard. Only QUERYSTD returns the detected standard.

If otherwise, another standard mask is sent to the driver via VIDIOC_S_STD, the expected behavior is that the driver should select the standards detector to conform with the desired mask. If an unsupported configuration is requested, the driver should return the mask it actually used at the return of the VIDIOC_S_STD call.

S_STD is a write-only ioctl, so the mask isn't updated.

For example, if V4L2_STD_NTSC_M_JP is used, the driver should disable the auto-detector, and use NTSC/M with the Japanese audio standard. Both S_STD and G_STD will return V4L2_STD_NTSC_M_JP.

If V4L2_STD_MN is used and the driver can auto-detect between all those formats, the driver should detect whether the standard is PAL or NTSC and select between PAL/M or NTSC/M (and the corresponding audio standards). If an unsupported mask like V4L2_STD_PAL_J | V4L2_STD_NTSC_M_JP is used, the driver should return a valid combination to S_STD (for example, returning V4L2_STD_PAL_J).

In any case, on V4L2_G_STD, if the driver can't detect what the standard is, it should just return the current detection mask to userspace (instead of returning something like STD_INVALID).

G_STD must always return the currently selected standard, never the detected standard. That's QUERYSTD. When the driver is first loaded it must pre-select a standard (usually in the probe function), either hardcoded (NTSC or PAL), or by doing an initial autodetect. But the standard should always be set to something. This allows you to start streaming immediately.

Regards, Hans

I hope that helps, Mauro.

Thanks Mauro and Hans for your comments.
I plan to work on the autodetect code and the issues called out by Sakari and resubmit the patch. Can you point me to a driver that got auto-detection right, so I can use it as a reference?

Best regards,
--
Javier Martínez Canillas
(+34) 682 39 81 69
Barcelona, Spain
--
To unsubscribe from this list: send the line "unsubscribe linux-media" in the body of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
[PATCH] media: vb2: fix incorrect return value
This patch fixes an incorrect return value. Errors should be returned as negative numbers.

Reported-by: Tomasz Stanislawski t.stanisl...@samsung.com
Signed-off-by: Marek Szyprowski m.szyprow...@samsung.com
---
 drivers/media/video/videobuf2-core.c | 2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/drivers/media/video/videobuf2-core.c b/drivers/media/video/videobuf2-core.c
index 6687ac3..3f5c7a3 100644
--- a/drivers/media/video/videobuf2-core.c
+++ b/drivers/media/video/videobuf2-core.c
@@ -751,7 +751,7 @@ static int __qbuf_userptr(struct vb2_buffer *vb, struct v4l2_buffer *b)
 		/* Check if the provided plane buffer is large enough */
 		if (planes[plane].length < q->plane_sizes[plane]) {
-			ret = EINVAL;
+			ret = -EINVAL;
 			goto err;
 		}
--
1.7.1.569.g6f426
Re: [PATCH v2 2/2] v4l: Add v4l2 subdev driver for S5K6AAFX sensor
Hi Sylwester and Sakari,

On Sunday 02 October 2011 09:20:11 Sakari Ailus wrote:

On Sat, Oct 01, 2011 at 11:17:27AM +0200, Sylwester Nawrocki wrote:

On 09/27/2011 10:55 PM, Sakari Ailus wrote:

Sylwester Nawrocki wrote:

On 09/25/2011 12:08 PM, Sakari Ailus wrote:

On Fri, Sep 23, 2011 at 12:12:58PM +0200, Sylwester Nawrocki wrote:

On 09/23/2011 12:02 AM, Sakari Ailus wrote:

Hi Sylwester,

On Wed, Sep 21, 2011 at 07:45:07PM +0200, Sylwester Nawrocki wrote:

[snip]

I'm not sure if this makes it easier for the user space. The user space must know about such a thing and also which parameters it applies to. I don't

Yes, I agree on this entirely.

think this conforms to V4L2 either, but I might have misunderstood something.

If there is anything in the code not conforming to V4L2, please indicate it clearly.

Ok, what the driver currently implements does conform to V4L2. But getting an advantage (which I'd like to argue might be irrelevant) from using different presets would be challenging to implement while obeying the V4L2 spec.

That's why we probably need to extend the V4L2 spec :-)

Snapshot mode is currently ill-defined. At the hardware level, the name is used to describe different concepts depending on the sensor. This can cover a wide range of features, from enabling external trigger to using different register sets. Some of those features (such as register sets) are even not restricted to preview/snapshot/capture modes and can be used for other purposes. At the application level, capturing a snapshot image usually involves much more than just selecting a snapshot mode, especially when ISPs (either on the sensor side or on the host side) are involved.

Snapshot mode will be discussed in Prague. Given the complexity of the issue we won't solve it there, but I hope to at least gather a list of requirements and use cases (and if possible a list that everybody agrees on :-)).
It would be nice if every person interested in the subject could prepare such a list beforehand, as we will have little time to discuss the topic.

Anyway, I've taken a closer look at what I need in the single user configuration set data structure and reworked the driver quite extensively. Should post that in the coming week, unless some unexpected disasters occur ;)

Do you see any problem in defining a real still capture interface in V4L? It's probably just a small set of new controls, a new capability, plus the multi-size buffer queue Guennadi has been working on. Some devices will require explicitly switching between preview and capture mode, and it may make a difference whether they are programmed in advance or on demand.

For this specific use case, it's probably just that. If we take other use cases into account, it isn't. I don't think we should rush in API support for snapshot capture mode in V4L2 without a proper understanding of all (or most) use cases.

I don't think V4L2 should have a still capture interface. Still capture is just one use case, as are viewfinder and video. V4L2 deals with frames, formats and parameters that are all generic and use case independent. Instead of use cases, we have independent configurable settings, and that's the way I think it should stay. If your hardware requires switching mode to still before taking a still image, then the driver should expose this functionality as such. I'd be

Yeah, of course this should work. But I don't quite see how I would expose the still/normal switch control with the existing API. Aren't you going to blackball this as just another 'use case'? :)

That's fine because it's implemented by the sensor already. My point is that we should show this fact in the V4L2 API as little as possible.

I agree with Sakari here. We need a high level snapshot capture API in userspace, something similar to (and not necessarily separate from) GstPhotography.
The userspace components will of course need proper support from V4L2, but that doesn't mean the whole snapshot capture API must be implemented in V4L2. I see this more like a collection of V4L2 extensions (such as the multi-size buffer queue) that can be used to support snapshot capture in userspace.

I remember Guennadi had sensors which provide something called snapshot mode. A single boolean control to turn this on or off would suffice --- snapshot mode is something that's going to be discussed at the Multimedia summit, if my memory serves me right. This could be one option for this sensor as well, but the implementation might not be quite optimal since you'd still need to switch the configuration.

In fact a preview/still mode control seems to be the minimum needed to expose the full functionality of the S5K6AAFX device in the V4L2 API. But I am not really interested in still capture with this device ATM. The current driver is more than enough :)

really wary of e.g. exposing register configuration flipping
Re: [PATCH 3/3] [media] tvp5150: Migrate to media-controller framework and add video format detection
Hi Hans,

On Monday 03 October 2011 08:30:25 Hans Verkuil wrote:

On Monday, October 03, 2011 04:17:06 Mauro Carvalho Chehab wrote:

On 02-10-2011 18:18, Javier Martinez Canillas wrote:

On Sun, Oct 2, 2011 at 6:30 PM, Sakari Ailus wrote:

[snip]

static const struct v4l2_subdev_video_ops tvp5150_video_ops = {
	.s_routing = tvp5150_s_routing,
+	.s_stream = tvp515x_s_stream,
+	.enum_mbus_fmt = tvp515x_enum_mbus_fmt,
+	.g_mbus_fmt = tvp515x_mbus_fmt,
+	.try_mbus_fmt = tvp515x_mbus_fmt,
+	.s_mbus_fmt = tvp515x_mbus_fmt,
+	.g_parm = tvp515x_g_parm,
+	.s_parm = tvp515x_s_parm,
+	.s_std_output = tvp5150_s_std,

Do we really need both video and pad format ops?

Good question, I don't know. Can this device be used as a standalone v4l2 device? Or is it supposed to always be a part of a video streaming pipeline as a sub-device with a source pad? Sorry if my questions are silly but, as I stated before, I'm a newbie with v4l2 and MCF.

The tvp5150 driver is used on some em28xx devices. It is nice to add auto-detection code to the driver, but converting it to the media bus should be done with enough care to not break support for the existing devices. So in other words, the tvp5150 driver needs both pad and non-pad ops. Eventually all non-pad variants in subdev drivers should be replaced by the pad variants so you don't have duplication of ops. But that will take a lot more work.

What about replacing direct calls to non-pad operations with core V4L2 functions that would use the subdev non-pad operation if available, and emulate it with the pad operation otherwise? I think this would ease the transition, as subdev drivers could be ported to pad operations without worrying about the bridges that use them, and bridge drivers could be switched to the new wrappers with a simple search and replace.

Also, as I've argued with Laurent before, the expected behavior is that the standards format selection should be done via the video node, and not via the media controller node.
The V4L2 API has enough support for all you need to do with the video decoder, so there's no excuse to duplicate it with any other API.

This is relevant for bridge drivers, not for subdev drivers. The media controller API is there not to replace V4L2, but to complement it where needed.

That will be a nice discussion during the workshop :-)

I don't think we disagree on that, but we probably disagree on what it means :-)

In the specific code of standards auto-detection, a few drivers currently support this feature. They're (or should be) coded to do this: If V4L2_STD_ALL is used, the driver should autodetect the video standard of the currently tuned channel.

Actually, this is optional. As per the spec: "When the standard set is ambiguous drivers may return EINVAL or choose any of the requested standards." Nor does the spec say anything about doing an autodetect when STD_ALL is passed in. Most drivers will just set the std to PAL or NTSC in this case. If you want to autodetect, then use QUERYSTD.

Applications cannot rely on drivers to handle V4L2_STD_ALL the way you say.

The detected standard can be returned to userspace via VIDIOC_G_STD.

No! G_STD always returns the current *selected* standard. Only QUERYSTD returns the detected standard.

If otherwise, another standard mask is sent to the driver via VIDIOC_S_STD, the expected behavior is that the driver should select the standards detector to conform with the desired mask. If an unsupported configuration is requested, the driver should return the mask it actually used at the return of the VIDIOC_S_STD call.

S_STD is a write-only ioctl, so the mask isn't updated.

For example, if V4L2_STD_NTSC_M_JP is used, the driver should disable the auto-detector, and use NTSC/M with the Japanese audio standard. Both S_STD and G_STD will return V4L2_STD_NTSC_M_JP.
If V4L2_STD_MN is used and the driver can auto-detect between all those formats, the driver should detect whether the standard is PAL or NTSC and select between PAL/M or NTSC/M (and the corresponding audio standards). If an unsupported mask like V4L2_STD_PAL_J | V4L2_STD_NTSC_M_JP is used, the driver should return a valid combination to S_STD (for example, returning V4L2_STD_PAL_J).

In any case, on V4L2_G_STD, if the driver can't detect what the standard is, it should just return the current detection mask to userspace (instead of returning something like STD_INVALID).

G_STD must always return the currently selected standard, never the detected standard. That's QUERYSTD. When the driver is first loaded it must pre-select a standard (usually in the probe function), either hardcoded (NTSC or PAL), or by doing an initial autodetect. But the standard should always be set to something. This allows you to start streaming immediately.

--
Regards,
Re: Problems tuning PAL-D with a Hauppauge HVR-1110 (TDA18271 tuner) - workaround hack included
On Friday 30 September 2011, Malcolm Priestley tvbox...@gmail.com wrote:

On 28/09/11 13:50, Simon Farnsworth wrote:

(note - the CC list is everyone over 50% certainty from get_maintainer.pl)

I'm having problems getting a Hauppauge HVR-1110 card to successfully tune PAL-D at an 85.250 MHz vision frequency; by experimentation, I've determined that the tda18271 is tuning to a frequency 1.25 MHz lower than the vision frequency I've requested, so the following workaround fixes it for me.

Are you sure the transmitter concerned doesn't have a VSB filter for an adjacent DVB-T digital transmitter?

The transmitter concerned is a test pattern generator - it has no filters applied to its output. The intended customer for this device is in China, hence the use of PAL-D.

--
Simon Farnsworth
Software Engineer
ONELAN Limited
http://www.onelan.com/
Re: Problems tuning PAL-D with a Hauppauge HVR-1110 (TDA18271 tuner) - workaround hack included
On Friday 30 September 2011, Andy Walls awa...@md.metrocast.net wrote:

Steven Toth st...@kernellabs.com wrote:

The TDA18271 driver on linux DOES NOT use the same I/F's that the windows driver uses. Reason? Mike decided to follow the data sheet and NOT use the Hauppauge-specific IFs.

If you have one of the latest HVR1600's with that analog tuner, does PAL-D work with it without an offset?

I don't have a current model HVR-1600 to hand - if I get hold of one, I will test it.

--
Simon Farnsworth
Software Engineer
ONELAN Limited
http://www.onelan.com/
Re: [PATCH 3/3] [media] tvp5150: Migrate to media-controller framework and add video format detection
On Mon, Oct 3, 2011 at 10:39 AM, Laurent Pinchart laurent.pinch...@ideasonboard.com wrote:

Hi Hans,

On Monday 03 October 2011 08:30:25 Hans Verkuil wrote:

On Monday, October 03, 2011 04:17:06 Mauro Carvalho Chehab wrote:

On 02-10-2011 18:18, Javier Martinez Canillas wrote:

On Sun, Oct 2, 2011 at 6:30 PM, Sakari Ailus wrote:

[snip]

static const struct v4l2_subdev_video_ops tvp5150_video_ops = {
	.s_routing = tvp5150_s_routing,
+	.s_stream = tvp515x_s_stream,
+	.enum_mbus_fmt = tvp515x_enum_mbus_fmt,
+	.g_mbus_fmt = tvp515x_mbus_fmt,
+	.try_mbus_fmt = tvp515x_mbus_fmt,
+	.s_mbus_fmt = tvp515x_mbus_fmt,
+	.g_parm = tvp515x_g_parm,
+	.s_parm = tvp515x_s_parm,
+	.s_std_output = tvp5150_s_std,

Do we really need both video and pad format ops?

Good question, I don't know. Can this device be used as a standalone v4l2 device? Or is it supposed to always be a part of a video streaming pipeline as a sub-device with a source pad? Sorry if my questions are silly but, as I stated before, I'm a newbie with v4l2 and MCF.

The tvp5150 driver is used on some em28xx devices. It is nice to add auto-detection code to the driver, but converting it to the media bus should be done with enough care to not break support for the existing devices. So in other words, the tvp5150 driver needs both pad and non-pad ops. Eventually all non-pad variants in subdev drivers should be replaced by the pad variants so you don't have duplication of ops. But that will take a lot more work.

What about replacing direct calls to non-pad operations with core V4L2 functions that would use the subdev non-pad operation if available, and emulate it with the pad operation otherwise? I think this would ease the transition, as subdev drivers could be ported to pad operations without worrying about the bridges that use them, and bridge drivers could be switched to the new wrappers with a simple search and replace.

Ok, that is a good solution. I'll do that.
Implement V4L2 core operations as wrappers of the subdev pad operations.

Also, as I've argued with Laurent before, the expected behavior is that the standards format selection should be done via the video node, and not via the media controller node. The V4L2 API has enough support for all you need to do with the video decoder, so there's no excuse to duplicate it with any other API.

This is relevant for bridge drivers, not for subdev drivers. The media controller API is there not to replace V4L2, but to complement it where needed.

That will be a nice discussion during the workshop :-)

I don't think we disagree on that, but we probably disagree on what it means :-)

--
Regards,
Laurent Pinchart

Laurent, I have a few questions about MCF and the OMAP3ISP driver, if you are so kind to answer.

1- User-space programs that are not MCF aware negotiate the format with the V4L2 device (i.e. the OMAP3 ISP CCDC output), which is a sink pad. But the real format is driven by the analog video format in the source pad (i.e. the tvp5151). I modified the ISP driver to get the data format from the source pad and set the format for each pad on the pipeline accordingly, but I've read in the documentation [1] that it is not correct to propagate a data format from source pads to sink pads; the correct thing is to do it from sink to source. So, in this case, does an administrator have to externally configure the format for each pad and guarantee a coherent format on the whole pipeline? Or does a way exist to do this automatically? i.e. the output entity on the pipeline promotes the capabilities of the source pad, so applications can select a data format and this format gets propagated all over the pipeline from the sink pad to the source?

[1]: http://linuxtv.org/downloads/v4l-dvb-apis/subdev.html

2- If the application wants a different format than the default provided by the tvp5151 (i.e. 720x576 for PAL), where do I have to crop the image?
I thought this could be done using the CCDC, copying fewer lines to memory, or the RESIZER if the application wants a bigger image. What is the best approach for this?

3- When using embedded sync, the CCDC doesn't have an external vertical sync signal, so we have to manually configure when we want the VD0 interrupt to be raised. This works for progressive frames, since each frame has the same size, but in the case of interlaced video, sub-frames have different sizes (i.e. 313 and 312 vertical lines for PAL). What I did is to reconfigure the CCDC in the VD1 interrupt handler, but I think this is more a hack than a clean solution. What do you think is the best approach to solve this?

Best regards,
--
Javier Martínez Canillas
(+34) 682 39 81 69
Barcelona, Spain
Re: [PATCH 3/3] [media] tvp5150: Migrate to media-controller framework and add video format detection
On Sun, Oct 02, 2011 at 11:18:29PM +0200, Javier Martinez Canillas wrote:

On Sun, Oct 2, 2011 at 6:30 PM, Sakari Ailus sakari.ai...@iki.fi wrote:

Hi Javier,

Thanks for the patch! It's very interesting to see a driver for a video decoder using the MC interface. Before this we've had just image sensors.

Hello Sakari,

Thanks for your comments.

Hi Javier,

You're welcome. You also got very good comments from others.

Javier Martinez Canillas wrote:

+		/* use the standard status register */
+		std_status = tvp5150_read(sd, TVP5150_STATUS_REG_5);
+	else
+		/* use the standard register itself */
+		std_status = std;

Braces would be nice here.

Ok.

+	switch (std_status & VIDEO_STD_MASK) {
+	case VIDEO_STD_NTSC_MJ_BIT:
+	case VIDEO_STD_NTSC_MJ_BIT_AS:
+		return STD_NTSC_MJ;
+
+	case VIDEO_STD_PAL_BDGHIN_BIT:
+	case VIDEO_STD_PAL_BDGHIN_BIT_AS:
+		return STD_PAL_BDGHIN;
+
+	default:
+		return STD_INVALID;
+	}
+
+	return STD_INVALID;

This return won't do anything.

Yes, will clean this.

@@ -704,19 +812,19 @@ static int tvp5150_set_std(struct v4l2_subdev *sd, v4l2_std_id std)

 	if (std == V4L2_STD_ALL) {
 		fmt = 0;	/* Autodetect mode */
 	} else if (std & V4L2_STD_NTSC_443) {
-		fmt = 0xa;
+		fmt = VIDEO_STD_NTSC_4_43_BIT;
 	} else if (std & V4L2_STD_PAL_M) {
-		fmt = 0x6;
+		fmt = VIDEO_STD_PAL_M_BIT;
 	} else if (std & (V4L2_STD_PAL_N | V4L2_STD_PAL_Nc)) {
-		fmt = 0x8;
+		fmt = VIDEO_STD_PAL_COMBINATION_N_BIT;
 	} else {
 		/* Then, test against generic ones */
 		if (std & V4L2_STD_NTSC)
-			fmt = 0x2;
+			fmt = VIDEO_STD_NTSC_MJ_BIT;
 		else if (std & V4L2_STD_PAL)
-			fmt = 0x4;
+			fmt = VIDEO_STD_PAL_BDGHIN_BIT;
 		else if (std & V4L2_STD_SECAM)
-			fmt = 0xc;
+			fmt = VIDEO_STD_SECAM_BIT;
	}

Excellent! Less magic numbers...
+static struct v4l2_mbus_framefmt *
+__tvp5150_get_pad_format(struct tvp5150 *tvp5150, struct v4l2_subdev_fh *fh,
+			 unsigned int pad, enum v4l2_subdev_format_whence which)
+{
+	switch (which) {
+	case V4L2_SUBDEV_FORMAT_TRY:
+		return v4l2_subdev_get_try_format(fh, pad);
+	case V4L2_SUBDEV_FORMAT_ACTIVE:
+		return &tvp5150->format;
+	default:
+		return NULL;

Hmm. This will never happen, but is returning NULL the right thing to do? An easy alternative is to just replace this with an if, since which may only have either of the two values.

Ok, I'll clean it up. I was being a bit paranoid there :)

+static int tvp5150_set_pad_format(struct v4l2_subdev *subdev,
+				  struct v4l2_subdev_fh *fh,
+				  struct v4l2_subdev_format *format)
+{
+	struct tvp5150 *tvp5150 = to_tvp5150(subdev);
+	tvp5150->std_idx = STD_INVALID;

The above assignment will always be overwritten immediately.

Yes, since tvp515x_query_current_std() already returns STD_INVALID on error, the assignment is not needed. Will change that.

+	tvp5150->std_idx = tvp515x_query_current_std(subdev);
+	if (tvp5150->std_idx == STD_INVALID) {
+		v4l2_err(subdev, "Unable to query std\n");
+		return 0;

Isn't this an error?

Yes, I'll change it to report the error to the caller.

Thinking about this again, the error likely shouldn't be returned to the user. URL:http://hverkuil.home.xs4all.nl/spec/media.html#vidioc-subdev-g-fmt

Nonetheless, something should definitely be returned to the user. It might be best to leave it unchanged.

+ * tvp515x_mbus_fmt_cap() - V4L2 decoder interface handler for try/s/g_mbus_fmt

The name of the function is different.

Yes, I'll change that.
static const struct v4l2_subdev_video_ops tvp5150_video_ops = {
	.s_routing = tvp5150_s_routing,
+	.s_stream = tvp515x_s_stream,
+	.enum_mbus_fmt = tvp515x_enum_mbus_fmt,
+	.g_mbus_fmt = tvp515x_mbus_fmt,
+	.try_mbus_fmt = tvp515x_mbus_fmt,
+	.s_mbus_fmt = tvp515x_mbus_fmt,
+	.g_parm = tvp515x_g_parm,
+	.s_parm = tvp515x_s_parm,
+	.s_std_output = tvp5150_s_std,

Do we really need both video and pad format ops?

Good question, I don't know. Can this device be used as a standalone v4l2 device? Or is it supposed to always be a part of a video streaming pipeline as a sub-device with a source pad? Sorry if my questions are silly but, as I stated before, I'm a newbie with v4l2 and MCF.

You got good comments from others on this. I agree with Laurent, the right thing to do is to implement
Re: [PATCH 6/9] V4L: soc-camera: prepare hooks for Media Controller wrapper
Hi Guennadi,

Thanks for the patch. It's very nice to see you working on that :-) I'm not a soc-camera expert, so my review is by no means extensive.

On Thursday 29 September 2011 18:18:54 Guennadi Liakhovetski wrote:

[snip]

diff --git a/drivers/media/video/soc_camera.c b/drivers/media/video/soc_camera.c
index 2905a88..790c14c 100644
--- a/drivers/media/video/soc_camera.c
+++ b/drivers/media/video/soc_camera.c

[snip]

@@ -1361,9 +1402,11 @@ void soc_camera_host_unregister(struct soc_camera_host *ici)

 		if (icd->iface == ici->nr && to_soc_camera_control(icd))
 			soc_camera_remove(icd);

-	mutex_unlock(&list_lock);
+	soc_camera_mc_unregister(ici);

 	v4l2_device_unregister(&ici->v4l2_dev);
+
+	mutex_unlock(&list_lock);
 }
 EXPORT_SYMBOL(soc_camera_host_unregister);

Do soc_camera_mc_unregister() and v4l2_device_unregister() need to be protected by the mutex?

@@ -1443,7 +1486,6 @@ static int video_dev_create(struct soc_camera_device *icd)

 	strlcpy(vdev->name, ici->drv_name, sizeof(vdev->name));

-	vdev->parent = icd->pdev;
 	vdev->current_norm = V4L2_STD_UNKNOWN;
 	vdev->fops = soc_camera_fops;
 	vdev->ioctl_ops = soc_camera_ioctl_ops;
@@ -1451,6 +1493,8 @@ static int video_dev_create(struct soc_camera_device *icd)
 	vdev->tvnorms = V4L2_STD_UNKNOWN;
 	vdev->ctrl_handler = &icd->ctrl_handler;
 	vdev->lock = &icd->video_lock;
+	vdev->v4l2_dev = &ici->v4l2_dev;
+	video_set_drvdata(vdev, icd);

 	icd->vdev = vdev;

This is an important change, maybe you can move it to a patch of its own.

diff --git a/include/media/soc_camera.h b/include/media/soc_camera.h
index d60bad4..0a21ff1 100644
--- a/include/media/soc_camera.h
+++ b/include/media/soc_camera.h

[snip]

@@ -63,6 +65,18 @@ struct soc_camera_host {
 	void *priv;
 	const char *drv_name;
 	struct soc_camera_host_ops *ops;
+#if defined(CONFIG_MEDIA_CONTROLLER)
+	struct media_device mdev;
+	struct v4l2_subdev bus_sd;
+	struct media_pad bus_pads[2];
+	struct media_pad vdev_pads[1];
+#endif

Those fields are not used in this patch. Don't they belong to the next one?
+}; + +enum soc_camera_target { + SOCAM_TARGET_PIPELINE, + SOCAM_TARGET_HOST_IN, + SOCAM_TARGET_HOST_OUT, }; struct soc_camera_host_ops { [snip] diff --git a/include/media/soc_entity.h b/include/media/soc_entity.h new file mode 100644 index 000..e461f5e --- /dev/null +++ b/include/media/soc_entity.h @@ -0,0 +1,19 @@ +/* + * soc-camera Media Controller wrapper + * + * Copyright (C) 2011, Guennadi Liakhovetski g.liakhovet...@gmx.de + * + * This program is free software; you can redistribute it and/or modify + * it under the terms of the GNU General Public License version 2 as + * published by the Free Software Foundation. + */ + +#ifndef SOC_ENTITY_H +#define SOC_ENTITY_H + +#define soc_camera_mc_install(x) 0 +#define soc_camera_mc_free(x) do {} while (0) +#define soc_camera_mc_register(x) do {} while (0) +#define soc_camera_mc_unregister(x) do {} while (0) Doesn't this (and the corresponding changes to drivers/media/video/soc_camera.c) belong to the next patch ? + +#endif -- Regards, Laurent Pinchart -- To unsubscribe from this list: send the line unsubscribe linux-media in the body of a message to majord...@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
Re: [PATCH 0/3] [media] tvp5150: Migrate to media-controller framework and add video format detection
On 2011-10-01 10:39, Enrico wrote: On Sat, Oct 1, 2011 at 5:55 PM, Javier Martinez Canillas martinez.jav...@gmail.com wrote: We hack a few bits of the ISP CCDC driver to support ITU-R BT656 interlaced data with embedded syncs video format and ported the tvp5150 driver to the MCF so it can be detected as a sub-device and be part of the OMAP ISP image processing pipeline (as a source pad). That was already posted on the list [1], there was some discussion but i don't know what's the status/plan to get it into mainline. And, as you can see in [2], don't expect many comments :D [1]: http://www.spinics.net/lists/linux-media/msg37710.html [2]: http://www.spinics.net/lists/linux-media/msg37116.html Even if it does detect the signal shape (NTSC, PAL), doesn't one still need to [externally] configure the pads for this shape? Yes, that is why I wanted to do the auto-detection for the tvp5151, so we only have to manually configure the ISP components (or any other hardware video processing pipeline entities, sorry for my OMAP-specific comments). Laurent was not very happy [3] about changing video formats out of the driver control, so this should be discussed more. [3]: http://www.spinics.net/lists/linux-omap/msg56983.html I didn't know that the physical connection affected the video output format, I thought that it was only a physical medium to carry the same information, sorry if my comments are silly but I'm really newbie with video in general. I think you got it right, i haven't tested it but the output format shouldn't be affected by the video source( if it stays pal/ntsc of course). Maybe you will get only a different active video area so only cropping will be affected. It's not so much the video output [shape], rather that the input source can be selected and there does not seem to be a way to do that currently using the MC framework. 
I was thinking perhaps to have the driver expose 3 different output pads, with the one you choose to link up telling the driver how to configure the input. -- Gary Thomas | Consulting for the MLB Associates | Embedded world
Re: [PATCH 7/9] V4L: soc-camera: add a Media Controller wrapper
Hi Guennadi, Thanks for the patch. On Thursday 29 September 2011 18:18:55 Guennadi Liakhovetski wrote: This wrapper adds a Media Controller implementation to soc-camera drivers. To really benefit from it individual host drivers should implement support for values of enum soc_camera_target other than SOCAM_TARGET_PIPELINE in their .set_fmt() and .try_fmt() methods. [snip] diff --git a/drivers/media/video/soc_entity.c b/drivers/media/video/soc_entity.c new file mode 100644 index 000..3a04700 --- /dev/null +++ b/drivers/media/video/soc_entity.c @@ -0,0 +1,284 @@ [snip] +static int bus_sd_pad_g_fmt(struct v4l2_subdev *sd, struct v4l2_subdev_fh *fh, + struct v4l2_subdev_format *sd_fmt) +{ + struct soc_camera_device *icd = v4l2_get_subdevdata(sd); + struct v4l2_mbus_framefmt *f = sd_fmt-format; + + if (sd_fmt-which == V4L2_SUBDEV_FORMAT_TRY) { + sd_fmt-format = *v4l2_subdev_get_try_format(fh, sd_fmt-pad); + return 0; + } + + if (sd_fmt-pad == SOC_HOST_BUS_PAD_SINK) { + f-width= icd-host_input_width; + f-height = icd-host_input_height; + } else { + f-width= icd-user_width; + f-height = icd-user_height; + } + f-field= icd-field; + f-code = icd-current_fmt-code; + f-colorspace = icd-colorspace; Can soc-camera hosts perform format conversion ? If so you will likely need to store the mbus code for the input and output separately, possibly in v4l2_mbus_format fields. You could then simplify the [gs]_fmt functions by implementing similar to the __*_get_format functions in the OMAP3 ISP driver. + return 0; +} + +static int bus_sd_pad_s_fmt(struct v4l2_subdev *sd, struct v4l2_subdev_fh *fh, + struct v4l2_subdev_format *sd_fmt) +{ + struct soc_camera_device *icd = v4l2_get_subdevdata(sd); + struct v4l2_mbus_framefmt *mf = sd_fmt-format; + struct v4l2_format vf = { + .type = V4L2_BUF_TYPE_VIDEO_CAPTURE, + }; + enum soc_camera_target tgt = sd_fmt-pad == SOC_HOST_BUS_PAD_SINK ? 
+ SOCAM_TARGET_HOST_IN : SOCAM_TARGET_HOST_OUT; + int ret; + + se_mbus_to_v4l2(icd, mf, vf); + + if (sd_fmt-which == V4L2_SUBDEV_FORMAT_TRY) { + struct v4l2_mbus_framefmt *try_fmt = + v4l2_subdev_get_try_format(fh, sd_fmt-pad); + ret = soc_camera_try_fmt(icd, vf, tgt); + if (!ret) { + se_v4l2_to_mbus(icd, vf, try_fmt); + sd_fmt-format = *try_fmt; + } + return ret; + } + + ret = soc_camera_set_fmt(icd, vf, tgt); + if (!ret) + se_v4l2_to_mbus(icd, vf, sd_fmt-format); + + return ret; +} + +static int bus_sd_pad_enum_mbus_code(struct v4l2_subdev *sd, + struct v4l2_subdev_fh *fh, + struct v4l2_subdev_mbus_code_enum *ce) +{ + struct soc_camera_device *icd = v4l2_get_subdevdata(sd); + + if (ce-index = icd-num_user_formats) + return -EINVAL; + + ce-code = icd-user_formats[ce-index].code; + return 0; +} + +static const struct v4l2_subdev_pad_ops se_bus_sd_pad_ops = { + .get_fmt= bus_sd_pad_g_fmt, + .set_fmt= bus_sd_pad_s_fmt, + .enum_mbus_code = bus_sd_pad_enum_mbus_code, +}; + +static const struct v4l2_subdev_ops se_bus_sd_ops = { + .pad= se_bus_sd_pad_ops, +}; + +static const struct media_entity_operations se_bus_me_ops = { +}; + +static const struct media_entity_operations se_vdev_me_ops = { +}; NULL operations are allowed, you don't have to use an empty structure. + +int soc_camera_mc_streamon(struct soc_camera_device *icd) +{ + struct soc_camera_host *ici = to_soc_camera_host(icd-parent); + struct v4l2_subdev *bus_sd = ici-bus_sd; + struct media_entity *bus_me = bus_sd-entity; + struct v4l2_subdev *sd = soc_camera_to_subdev(icd); + struct v4l2_mbus_framefmt mf; + int ret = v4l2_subdev_call(sd, video, g_mbus_fmt, mf); + if (WARN_ON(ret 0)) + return ret; + if (icd-host_input_width != mf.width || + icd-host_input_height != mf.height || + icd-current_fmt-code != mf.code) + return -EINVAL; Shouldn't you also check that the source pad format matches the video node format ? 
+ + media_entity_pipeline_start(bus_me, ici-pipe); + return 0; +} + +void soc_camera_mc_streamoff(struct soc_camera_device *icd) +{ + struct soc_camera_host *ici = to_soc_camera_host(icd-parent); + struct v4l2_subdev *bus_sd = ici-bus_sd; + struct media_entity *bus_me = bus_sd-entity; + media_entity_pipeline_stop(bus_me); +} + +int soc_camera_mc_install(struct soc_camera_device *icd) +{ + struct
Re: [PATCH 8/9] V4L: mt9t112: add pad level operations
Hi Guennadi, Thanks for the patch. On Thursday 29 September 2011 18:18:56 Guennadi Liakhovetski wrote: On Media Controller enabled systems this patch allows the user to communicate with the driver directly over /dev/v4l-subdev* device nodes using VIDIOC_SUBDEV_* ioctl()s. Signed-off-by: Guennadi Liakhovetski g.liakhovet...@gmx.de [snip] diff --git a/drivers/media/video/mt9t112.c b/drivers/media/video/mt9t112.c index 32114a3..bb95ad1 100644 --- a/drivers/media/video/mt9t112.c +++ b/drivers/media/video/mt9t112.c [snip] @@ -739,8 +741,7 @@ static int mt9t112_init_camera(const struct i2c_client *client) static int mt9t112_g_chip_ident(struct v4l2_subdev *sd, struct v4l2_dbg_chip_ident *id) { - struct i2c_client *client = v4l2_get_subdevdata(sd); - struct mt9t112_priv *priv = to_mt9t112(client); + struct mt9t112_priv *priv = container_of(sd, struct mt9t112_priv, subdev); What about modifying to_mt9t112() to take a subdev pointer, or possibly creating a sd_to_mt9t112() ? id->ident = priv->model; id->revision = 0; [snip] @@ -1018,14 +1015,67 @@ static struct v4l2_subdev_video_ops [snip] +static int mt9t112_set_fmt(struct v4l2_subdev *sd, struct v4l2_subdev_fh *fh, + struct v4l2_subdev_format *sd_fmt) +{ + struct v4l2_mbus_framefmt *mf; + + if (sd_fmt->which == V4L2_SUBDEV_FORMAT_ACTIVE) + return mt9t112_s_fmt(sd, &sd_fmt->format); + + mf = v4l2_subdev_get_try_format(fh, sd_fmt->pad); + *mf = sd_fmt->format; + return mt9t112_try_fmt(sd, mf); I think the code would be clea[nr]er if you split mt9t112_s_fmt() into try and set, and called try unconditionally in mt9t112_set_fmt(). +} + +struct v4l2_subdev_pad_ops mt9t112_subdev_pad_ops = { + .enum_mbus_code = mt9t112_enum_mbus_code, + .get_fmt = mt9t112_get_fmt, + .set_fmt = mt9t112_set_fmt, Having both mt9t112_[gs]_fmt and mt9t112_[gs]et_fmt looks confusing to me. What about renaming the latter mt9t112_[gs]et_pad_fmt ?
+}; + static struct v4l2_subdev_ops mt9t112_subdev_ops = { .core = mt9t112_subdev_core_ops, .video = mt9t112_subdev_video_ops, + .pad = mt9t112_subdev_pad_ops, }; +static int mt9t112_open(struct v4l2_subdev *sd, struct v4l2_subdev_fh *fh) +{ + struct v4l2_mbus_framefmt *mf = v4l2_subdev_get_try_format(fh, 0); + return mf ? mt9t112_try_fmt(sd, mf) : 0; Can mf be NULL ? +} -- Regards, Laurent Pinchart
Re: [PATCH 0/9] Media Controller for soc-camera
Hi Guennadi, Thanks for the patches. I'm glad to see soc-camera adopting the MC API :-) On Thursday 29 September 2011 18:18:48 Guennadi Liakhovetski wrote: This is the first attempt at extending soc-camera with Media Controller / pad-level APIs. Yes, I know, that Laurent wasn't quite happy with V4L: add convenience macros to the subdevice / Media Controller API, maybe we'll remove it eventually, but so far my patches use it, so, I kept it for now. I'm fine with keeping it to allow the other patches to be reviewed already, but I still think we should drop it. The general idea has been described in http://article.gmane.org/gmane.linux.drivers.video-input-infrastructure/380 83 In short: soc-camera implements a media controller device and two entities per camera host (bridge) instance, linked statically to each other and to the client. The host driver gets a chance to implement local only configuration, as opposed to the standard soc-camera way of propagating the configuration up the pipeline to the client (sensor / decoder) driver. An example implementation is provided for sh_mobile_ceu_camera and two sensor drivers. The whole machinery gets activated if the soc-camera core finds a client driver, that implements pad operations. In that case both the standard (V4L2) and the new (MC) ways of addressing the driver become available. I.e., it is possible to run both standard V4L2 applications and MC-aware ones. Of course, applies on top of git://linuxtv.org/gliakhovetski/v4l-dvb.git for-3.2 Deepthy: this is what I told you about in http://article.gmane.org/gmane.linux.ports.arm.omap/64847 it just took me a bit longer, than I thought. -- Regards, Laurent Pinchart -- To unsubscribe from this list: send the line unsubscribe linux-media in the body of a message to majord...@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
Re: [PATCH 3/3] [media] tvp5150: Migrate to media-controller framework and add video format detection
Hi Javier, On Monday 03 October 2011 11:53:44 Javier Martinez Canillas wrote: [snip] Laurent, I have a few questions about MCF and the OMAP3ISP driver if you are so kind to answer. 1- User-space programs that are not MCF aware negotiate the format with the V4L2 device (i.e: OMAP3 ISP CCDC output), which is a sink pad. But the real format is driven by the analog video format in the source pad (i.e: tvp5151). That's not different from existing systems using digital sensors, where the format is driven by the sensor. I modified the ISP driver to get the data format from the source pad and set the format for each pad on the pipeline accordingly but I've read from the documentation [1] that is not correct to propagate a data format from source pads to sink pads, that the correct thing is to do it from sink to source. So, in this case an administrator has to externally configure the format for each pad and to guarantee a coherent format on the whole pipeline?. That's correct (except you don't need to be an administrator to do so :-)). Or does exist a way to do this automatic?. i.e: The output entity on the pipeline promotes the capabilities of the source pad so applications can select a data format and this format gets propagated all over the pipeline from the sink pad to the source? It can be automated in userspace (through a libv4l plugin for instance), but it's really not the kernel's job to do so. [1]: http://linuxtv.org/downloads/v4l-dvb-apis/subdev.html 2- If the application want a different format that the default provided by the tvp5151, (i.e: 720x576 for PAL), where do I have to crop the image? I thought this can be made using the CCDC, copying less lines to memory or the RESIZER if the application wants a bigger image. What is the best approach for this? Cropping can be done in the resizer, and I will soon post patches that add cropping support in the preview engine (although that will be useless for the TVP5151, as the preview engine doesn't support YUV data). 
The CCDC supports cropping too, but that's not implemented in the driver yet. 3- When using embedded sync, CCDC doesn't have an external vertical sync signal, so we have to manually configure when we want the VD0 interrupt to raise. This works for progressive frames, since each frame has the same size but in the case of interlaced video, sub-frames have different sizes (i.e: 313 and 312 vertical lines for PAL). What I did is to reconfigure the CCDC on the VD1 interrupt handler, but I think this is more a hack than a clean solution. What do you think is the best approach to solve this? I *really* wish the CCDC had an end of frame interrupt :-( I'm not sure if there's a non-hackish solution to this. -- Regards, Laurent Pinchart -- To unsubscribe from this list: send the line unsubscribe linux-media in the body of a message to majord...@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
Re: [PATCH] pctv452e: hm.. tidy bogus code up
Hi Igor, On 30.09.2011 22:58, Igor M. Liplianin wrote: Currently, usb_register calls two times with cloned structures, but for different driver names. Let's remove it. Signed-off-by: Igor M. Liplianinliplia...@me.by Well spotted... The cloned struct should have been removed a long time ago. The final version of the patch I submitted for the tt-connect S2-3600 did not contain it anymore: http://www.linuxtv.org/pipermail/linux-dvb/2008-March/024233.html Acked-by: André Weideammandre.weidem...@web.de Regards André
Re: [PATCH] pctv452e: hm.. tidy bogus code up
On 03.10.2011 14:30, André Weidemann wrote: Hi Igor, On 30.09.2011 22:58, Igor M. Liplianin wrote: Currently, usb_register calls two times with cloned structures, but for different driver names. Let's remove it. Signed-off-by: Igor M. Liplianinliplia...@me.by Well spotted... The cloned struct should have been removed a long time ago. The final version of the patch I submitted for the tt-connect S2-3600 did not contain it anymore: http://www.linuxtv.org/pipermail/linux-dvb/2008-March/024233.html Acked-by: André Weideammandre.weidem...@web.de This should read: Acked-by: André Weidemannandre.weidem...@web.de ;-) Regards André
Re: Smart card reader support for Anysee DVB devices
2011/8/29 Antti Palosaari cr...@iki.fi: On 08/29/2011 05:44 PM, István Váradi wrote: Hi, 2011/8/17 Antti Palosaaricr...@iki.fi: On 08/15/2011 02:14 PM, Antti Palosaari wrote: On 08/15/2011 02:51 AM, Antti Palosaari wrote: Biggest problem I see whole thing is poor application support. OpenCT is rather legacy but there is no good alternative. All this kind of serial drivers seems to be OpenCT currently. I wonder if it is possible to make virtual CCID device since CCID seems to be unfortunately the only interface SmartCard guys currently care. I studied scenario and looks like it is possible to implement way like, register virtual USB HCI (virtual motherboard USB controller) then register virtual PC/SC device to that which hooks all calls to HW via Anysee driver. Some glue surely needed for emulate PC/SC. I think there is not any such driver yet. Anyhow, there is virtual USB HCI driver currently in staging which can be used as example, or even use it to register virtual device. That kind of functionality surely needs more talking... It maybe that smartcard guys care only for CCID, but wouldn't it be an overkill to implement an emulation of that for the driver? It can be done, of course, but I think it would be much more complicated than the current one. Is it really necessary to put such complexity into the kernel? In my opinion, this should be handled in user-space. Only De facto serial smartcard protocol is so called Phoenix/Smartmouse, implementing new protocol is totally dead idea. It will never got any support. There is already such drivers, at least Infinity Unlimited USB Phoenix driver (iuu_phoenix.c). It uses USB-serial driver framework and some small emulation for Phoenix protocol. Look that driver to see which kind of complexity it adds. Anysee have *just* same situation. I helped write the iuu_phoenix.c driver. 
With regards to The character device supports two ioctl's (see anysee_sc), one for detecting the presence of a card, the other one for resetting the card and querying the ATR. The iuu_phoenix.c driver uses standard phoenix/smartmouse reset and atr controls. (i.e. with DCD, DTR, RTS, CTS lines etc) As the result the iuu_phoenix.c driver works out of the box with oscam. It might be a good idea to use a similar interface for your driver. The result would be that your driver would work out of the box with oscam as well as other user space programs that read smart cards. The problem would be if you wished to support smart card program capabilities, the Phoenix/Smartmouse interface does not support that. If I add programmer functionallity to the iuu_phoenix driver, I would probably add an IOCTL for it. Kind Regards James -- To unsubscribe from this list: send the line unsubscribe linux-media in the body of a message to majord...@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
Re: [DVB] CXD2099 - Question about the CAM clock
Dear Oliver, I've done some tests with the CAM reader from Digital Devices based on the Sony CXD2099 chip, and I noticed some issues with some CAMs: * SMIT CAM: working fine * ASTON CAM: working fine, except that it's crashing quite regularly * NEOTION CAM: no stream going out, but access to the CAM menu is ok When looking at the CXD2099 driver code, I noticed the CAM clock (fMCLKI) is fixed at 9MHz, using the 27MHz onboard oscillator and the integer divider set to 3 (as MCLKI_FREQ=2). I was wondering if some CAMs were not able to work correctly at such a high clock frequency. So, I've tried to enable the NCO (numerically controlled oscillator) in order to set up a lower frequency for the CAM clock, but I wasn't successful; it looks like the frequency must be around 9MHz or I can't get any stream. Do you know a way to decrease this CAM clock frequency to do some testing? Best regards, Sebastien. Weird that the frequency would pose a problem for those CAMs. The CI spec [1] explains that the minimum byte transfer clock period must be 111ns. This gives us a frequency of ~9MHz. Anyway, wouldn't it be wiser to base MCLKI on TICLK ? -- Issa [1] http://www.dvb.org/technology/standards/En50221.V1.pdf
RE: [DVB] CXD2099 - Question about the CAM clock
-Original Message- From: Issa Gorissen [mailto:flo...@usa.net] Sent: lundi 3 octobre 2011 15:18 To: o.endr...@gmx.de; Sébastien RAILLARD Cc: Linux Media Mailing List Subject: Re: [DVB] CXD2099 - Question about the CAM clock Dear Oliver, Ive done some tests with the CAM reader from Digital Devices based on Sony CXD2099 chip and I noticed some issues with some CAM: * SMIT CAM: working fine * ASTON CAM : working fine, except that it's crashing quite regularly * NEOTION CAM : no stream going out but access to the CAM menu is ok When looking at the CXD2099 driver code, I noticed the CAM clock (fMCLKI) is fixed at 9MHz using the 27MHz onboard oscillator and using the integer divider set to 3 (as MCLKI_FREQ=2). I was wondering if some CAM were not able to work correctly at such high clock frequency. So, I've tried to enable the NCO (numeric controlled oscillator) in order to setup a lower frequency for the CAM clock, but I wasn't successful, it's looking like the frequency must be around the 9MHz or I can't get any stream. Do you know a way to decrease this CAM clock frequency to do some testing? Best regards, Sebastien. Weird that the frequency would pose a problem for those CAMs. The CI spec [1] explains that the minimum byte transfer clock period must be 111ns. This gives us a frequency of ~9MHz. You're totally right about the maximum clock frequency specified in the norm, but I had confirmation from CAM manufacturers that their CAM may not work correctly up to this maximum frequency. Usually, the CAM clock is coming from the input TS stream and I don't think there is for now a DVB-S2 transponder having a 72mbps bitrate (so a 9MHz for parallel CAM clocking). Anyway, wouldn't it be wiser to base MCLKI on TICLK ? I've tried to use mode C instead of mode D, and I have the same problem, so I guess TICLK is around 72MHz. It could be a good idea to use TICLK, but I don't know the value and if the clock is constant or only active during data transmission. 
Did you manage to enable and use the NCO of the CXD2099 (instead of the integer divider) ? -- Issa [1] http://www.dvb.org/technology/standards/En50221.V1.pdf -- To unsubscribe from this list: send the line unsubscribe linux-media in the body of a message to majord...@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
RE: [DVB] CXD2099 - Question about the CAM clock
-Original Message- From: Issa Gorissen [mailto:flo...@usa.net] Sent: lundi 3 octobre 2011 15:59 To: o.endr...@gmx.de; Sébastien RAILLARD Cc: 'Linux Media Mailing List' Subject: RE: [DVB] CXD2099 - Question about the CAM clock Dear Oliver, Ive done some tests with the CAM reader from Digital Devices based on Sony CXD2099 chip and I noticed some issues with some CAM: * SMIT CAM: working fine * ASTON CAM : working fine, except that it's crashing quite regularly * NEOTION CAM : no stream going out but access to the CAM menu is ok When looking at the CXD2099 driver code, I noticed the CAM clock (fMCLKI) is fixed at 9MHz using the 27MHz onboard oscillator and using the integer divider set to 3 (as MCLKI_FREQ=2). I was wondering if some CAM were not able to work correctly at such high clock frequency. So, I've tried to enable the NCO (numeric controlled oscillator) in order to setup a lower frequency for the CAM clock, but I wasn't successful, it's looking like the frequency must be around the 9MHz or I can't get any stream. Do you know a way to decrease this CAM clock frequency to do some testing? Best regards, Sebastien. Weird that the frequency would pose a problem for those CAMs. The CI spec [1] explains that the minimum byte transfer clock period must be 111ns. This gives us a frequency of ~9MHz. You're totally right about the maximum clock frequency specified in the norm, but I had confirmation from CAM manufacturers that their CAM may not work correctly up to this maximum frequency. Usually, the CAM clock is coming from the input TS stream and I don't think there is for now a DVB-S2 transponder having a 72mbps bitrate (so a 9MHz for parallel CAM clocking). Anyway, wouldn't it be wiser to base MCLKI on TICLK ? I've tried to use mode C instead of mode D, and I have the same problem, so I guess TICLK is around 72MHz. It could be a good idea to use TICLK, but I don't know the value and if the clock is constant or only active during data transmission. 
Did you manage to enable and use the NCO of the CXD2099 (instead of the integer divider) ? No, but if your output to the CAM is slower than what comes from the ngene chip, you will lose bytes, no ? The real bandwidth of my transponder is 62mbps, so I've room to decrease the CAM clock. I did more tests with the NCO, and I've strange results: * Using MCLKI=0x5553 = fMCLKI= 8,99903 = Not working, a lot of TS errors * Using MCLKI=0x5554 = fMCLKI= 8,99945 = Working fine * Using MCLKI=0x = fMCLKI= 8,99986 = Not working, a lot of TS errors It's strange that changing very slightly the clock make so much errors! -- To unsubscribe from this list: send the line unsubscribe linux-media in the body of a message to majord...@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
Bttv and composite audio
So, I was corrected by IRC #v4l user iive, who told me there is no such thing as composite audio; what happens is that composite video often comes paired with RCA audio connectors. Thanks for the clarification. Now the problem may be in switching the audio inputs, because the TV-tuner audio works fine. Also, when I switch to composite I still hear the TV-tuner audio. iive also mentioned the issue might be about how to multiplex the audio inputs and some GPIO. Is this some sort of bttv driver bug? Or can it be sorted out with a modprobe bttv parameter? If there is any other info I can include, please tell me. Thanks.
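For the record, bttv does expose module parameters for exactly this kind of audio-mux/GPIO override (`card`, `gpiomask`, `audiomux`), though the correct values are entirely board-specific. A hypothetical /etc/modprobe.d fragment would look like this — the numbers below are placeholders, and the real ones have to come from bttv's CARDLIST/Insmod-options documentation or from probing the board:

```shell
# Placeholder values only -- the card number and GPIO masks depend on
# the exact board; wrong values can mute or mis-route audio entirely.
options bttv card=XX gpiomask=0x0f audiomux=0,1,2,3,4,5
```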
Re: [PATCH 7/9] V4L: soc-camera: add a Media Controller wrapper
Hi Laurent Thanks for the reviews! On Mon, 3 Oct 2011, Laurent Pinchart wrote: Hi Guennadi, Thanks for the patch. On Thursday 29 September 2011 18:18:55 Guennadi Liakhovetski wrote: This wrapper adds a Media Controller implementation to soc-camera drivers. To really benefit from it individual host drivers should implement support for values of enum soc_camera_target other than SOCAM_TARGET_PIPELINE in their .set_fmt() and .try_fmt() methods. [snip] diff --git a/drivers/media/video/soc_entity.c b/drivers/media/video/soc_entity.c new file mode 100644 index 000..3a04700 --- /dev/null +++ b/drivers/media/video/soc_entity.c @@ -0,0 +1,284 @@ [snip] +static int bus_sd_pad_g_fmt(struct v4l2_subdev *sd, struct v4l2_subdev_fh *fh, + struct v4l2_subdev_format *sd_fmt) +{ + struct soc_camera_device *icd = v4l2_get_subdevdata(sd); + struct v4l2_mbus_framefmt *f = sd_fmt-format; + + if (sd_fmt-which == V4L2_SUBDEV_FORMAT_TRY) { + sd_fmt-format = *v4l2_subdev_get_try_format(fh, sd_fmt-pad); + return 0; + } + + if (sd_fmt-pad == SOC_HOST_BUS_PAD_SINK) { + f-width= icd-host_input_width; + f-height = icd-host_input_height; + } else { + f-width= icd-user_width; + f-height = icd-user_height; + } + f-field= icd-field; + f-code = icd-current_fmt-code; + f-colorspace = icd-colorspace; Can soc-camera hosts perform format conversion ? If so you will likely need to store the mbus code for the input and output separately, possibly in v4l2_mbus_format fields. You could then simplify the [gs]_fmt functions by implementing similar to the __*_get_format functions in the OMAP3 ISP driver. They can, yes. But, under soc-camera conversions are performed between mediabus codes and fourcc formats. Upon pipeline construction (probing) a table of format conversions is built, where hosts generate one or more translation entries for all client formats, that they support. 
The only example of a more complex translations so far is MIPI CSI-2, but even there we have decided to identify CSI-2 formats using the same media-bus codes, as what you get between the CSI-2 block and the DMA engine. For the only CSI-2 capable soc-camera host so far - the CEU driver - this is also a very natural representation, because there the CSI-2 block is indeed an additional pipeline stage, uniquely translating CSI-2 to media-bus codes, that are then fed to the CEU parallel port. + return 0; +} + +static int bus_sd_pad_s_fmt(struct v4l2_subdev *sd, struct v4l2_subdev_fh *fh, + struct v4l2_subdev_format *sd_fmt) +{ + struct soc_camera_device *icd = v4l2_get_subdevdata(sd); + struct v4l2_mbus_framefmt *mf = sd_fmt-format; + struct v4l2_format vf = { + .type = V4L2_BUF_TYPE_VIDEO_CAPTURE, + }; + enum soc_camera_target tgt = sd_fmt-pad == SOC_HOST_BUS_PAD_SINK ? + SOCAM_TARGET_HOST_IN : SOCAM_TARGET_HOST_OUT; + int ret; + + se_mbus_to_v4l2(icd, mf, vf); + + if (sd_fmt-which == V4L2_SUBDEV_FORMAT_TRY) { + struct v4l2_mbus_framefmt *try_fmt = + v4l2_subdev_get_try_format(fh, sd_fmt-pad); + ret = soc_camera_try_fmt(icd, vf, tgt); + if (!ret) { + se_v4l2_to_mbus(icd, vf, try_fmt); + sd_fmt-format = *try_fmt; + } + return ret; + } + + ret = soc_camera_set_fmt(icd, vf, tgt); + if (!ret) + se_v4l2_to_mbus(icd, vf, sd_fmt-format); + + return ret; +} + +static int bus_sd_pad_enum_mbus_code(struct v4l2_subdev *sd, +struct v4l2_subdev_fh *fh, +struct v4l2_subdev_mbus_code_enum *ce) +{ + struct soc_camera_device *icd = v4l2_get_subdevdata(sd); + + if (ce-index = icd-num_user_formats) + return -EINVAL; + + ce-code = icd-user_formats[ce-index].code; + return 0; +} + +static const struct v4l2_subdev_pad_ops se_bus_sd_pad_ops = { + .get_fmt= bus_sd_pad_g_fmt, + .set_fmt= bus_sd_pad_s_fmt, + .enum_mbus_code = bus_sd_pad_enum_mbus_code, +}; + +static const struct v4l2_subdev_ops se_bus_sd_ops = { + .pad= se_bus_sd_pad_ops, +}; + +static const struct media_entity_operations 
se_bus_me_ops = { +}; + +static const struct media_entity_operations se_vdev_me_ops = { +}; NULL operations are allowed, you don't have to use an empty structure. Ok + +int soc_camera_mc_streamon(struct soc_camera_device *icd) +{ + struct soc_camera_host *ici = to_soc_camera_host(icd-parent); + struct v4l2_subdev *bus_sd = ici-bus_sd; + struct media_entity *bus_me = bus_sd-entity; + struct v4l2_subdev *sd =
cron job: media_tree daily build: WARNINGS
This message is generated daily by a cron job that builds media_tree for the kernels and architectures in the list below.

Results of the daily build of media_tree:

date: Mon Oct 3 19:00:14 CEST 2011
git hash: 2f4cf2c3a971c4d5154def8ef9ce4811d702852d
gcc version: i686-linux-gcc (GCC) 4.6.1
host hardware: x86_64
host os: 3.0-4.slh.7-amd64

linux-git-armv5: WARNINGS
linux-git-armv5-davinci: WARNINGS
linux-git-armv5-ixp: WARNINGS
linux-git-armv5-omap2: WARNINGS
linux-git-i686: WARNINGS
linux-git-m32r: OK
linux-git-mips: WARNINGS
linux-git-powerpc64: WARNINGS
linux-git-x86_64: WARNINGS
linux-2.6.31.12-i686: WARNINGS
linux-2.6.32.6-i686: WARNINGS
linux-2.6.33-i686: WARNINGS
linux-2.6.34-i686: WARNINGS
linux-2.6.35.3-i686: WARNINGS
linux-2.6.36-i686: WARNINGS
linux-2.6.37-i686: WARNINGS
linux-2.6.38.2-i686: WARNINGS
linux-2.6.39.1-i686: WARNINGS
linux-3.0-i686: WARNINGS
linux-3.1-rc1-i686: WARNINGS
linux-2.6.31.12-x86_64: WARNINGS
linux-2.6.32.6-x86_64: WARNINGS
linux-2.6.33-x86_64: WARNINGS
linux-2.6.34-x86_64: WARNINGS
linux-2.6.35.3-x86_64: WARNINGS
linux-2.6.36-x86_64: WARNINGS
linux-2.6.37-x86_64: WARNINGS
linux-2.6.38.2-x86_64: WARNINGS
linux-2.6.39.1-x86_64: WARNINGS
linux-3.0-x86_64: WARNINGS
linux-3.1-rc1-x86_64: WARNINGS
spec-git: WARNINGS
sparse: ERRORS

Detailed results are available here: http://www.xs4all.nl/~hverkuil/logs/Monday.log
Full logs are available here: http://www.xs4all.nl/~hverkuil/logs/Monday.tar.bz2
The V4L-DVB specification from this daily build is here: http://www.xs4all.nl/~hverkuil/spec/media.html
Re: Smart card reader support for Anysee DVB devices
Antti Palosaari cr...@iki.fi writes: If you would like to help me then you can find out the correct device name and what's needed for that. I mainly see the following possibilities: * /dev/ttyAnyseeN * /dev/ttyDVBN * /dev/adapterN/serial You should probably include the TTY maintainer in that discussion. I assume this device won't really be a TTY device? Then it probably shouldn't be named like one. Bjørn
[PATCH 1/2] [media] saa7115: Fix standards detection
There are several bugs in the saa7115 standards detection. After the fix, the driver is returning the proper standards, as tested with 3 different broadcast sources: On an invalid channel (without any TV signal): [ 4394.931630] saa7115 15-0021: Status byte 2 (0x1f)=0xe0 [ 4394.931635] saa7115 15-0021: detected std mask = 00ff With a PAL/M signal: [ 4410.836855] saa7115 15-0021: Status byte 2 (0x1f)=0xb1 [ 4410.837727] saa7115 15-0021: Status byte 1 (0x1e)=0x82 [ 4410.837731] saa7115 15-0021: detected std mask = 0900 With a NTSC/M signal: [ 4422.383893] saa7115 15-0021: Status byte 2 (0x1f)=0xb1 [ 4422.384768] saa7115 15-0021: Status byte 1 (0x1e)=0x81 [ 4422.384772] saa7115 15-0021: detected std mask = b000 Tests were done with a WinTV PVR USB2 Model 29xx card. Signed-off-by: Mauro Carvalho Chehab mche...@redhat.com --- drivers/media/video/saa7115.c | 47 +++- 1 files changed, 32 insertions(+), 15 deletions(-) diff --git a/drivers/media/video/saa7115.c b/drivers/media/video/saa7115.c index cee98ea..86627a8 100644 --- a/drivers/media/video/saa7115.c +++ b/drivers/media/video/saa7115.c @@ -1344,35 +1344,52 @@ static int saa711x_g_vbi_data(struct v4l2_subdev *sd, struct v4l2_sliced_vbi_dat static int saa711x_querystd(struct v4l2_subdev *sd, v4l2_std_id *std) { struct saa711x_state *state = to_state(sd); - int reg1e; + int reg1f, reg1e; - *std = V4L2_STD_ALL; - if (state->ident != V4L2_IDENT_SAA7115) { - int reg1f = saa711x_read(sd, R_1F_STATUS_BYTE_2_VD_DEC); - - if (reg1f & 0x20) - *std = V4L2_STD_525_60; - else - *std = V4L2_STD_625_50; - - return 0; + reg1f = saa711x_read(sd, R_1F_STATUS_BYTE_2_VD_DEC); + v4l2_dbg(1, debug, sd, "Status byte 2 (0x1f)=0x%02x\n", reg1f); + if (reg1f & 0x40) { + /* horizontal/vertical not locked */ + *std = V4L2_STD_ALL; + goto ret; } + if (reg1f & 0x20) + *std = V4L2_STD_525_60; + else + *std = V4L2_STD_625_50; + + if (state->ident != V4L2_IDENT_SAA7115) + goto ret; reg1e = saa711x_read(sd, R_1E_STATUS_BYTE_1_VD_DEC); switch (reg1e & 0x03) { case 1: - *std = V4L2_STD_NTSC; + *std &= V4L2_STD_NTSC; break; case 2: - *std = V4L2_STD_PAL; + /* + * V4L2_STD_PAL just covers the european PAL standards. + * This is wrong, as the device could also be using an + * other PAL standard. + */ + *std &= V4L2_STD_PAL | V4L2_STD_PAL_N | V4L2_STD_PAL_Nc | + V4L2_STD_PAL_M | V4L2_STD_PAL_60; break; case 3: - *std = V4L2_STD_SECAM; + *std &= V4L2_STD_SECAM; break; default: + /* Can't detect anything */ break; } + + v4l2_dbg(1, debug, sd, "Status byte 1 (0x1e)=0x%02x\n", reg1e); + +ret: + v4l2_dbg(1, debug, sd, "detected std mask = %08Lx\n", *std); + return 0; } -- 1.7.6.4
[PATCH 2/2] [media] pvrusb2: implement VIDIOC_QUERYSTD
Signed-off-by: Mauro Carvalho Chehab mche...@redhat.com --- drivers/media/video/pvrusb2/pvrusb2-hdw.c |7 +++ drivers/media/video/pvrusb2/pvrusb2-hdw.h |3 +++ drivers/media/video/pvrusb2/pvrusb2-v4l2.c |7 +++ 3 files changed, 17 insertions(+), 0 deletions(-) diff --git a/drivers/media/video/pvrusb2/pvrusb2-hdw.c b/drivers/media/video/pvrusb2/pvrusb2-hdw.c index e98d382..5a6f24d 100644 --- a/drivers/media/video/pvrusb2/pvrusb2-hdw.c +++ b/drivers/media/video/pvrusb2/pvrusb2-hdw.c @@ -2993,6 +2993,13 @@ static void pvr2_subdev_set_control(struct pvr2_hdw *hdw, int id, pvr2_subdev_set_control(hdw, id, #lab, (hdw)->lab##_val); \ } +int pvr2_hdw_get_detected_std(struct pvr2_hdw *hdw, v4l2_std_id *std) +{ + v4l2_device_call_all(&hdw->v4l2_dev, 0, + video, querystd, std); + return 0; +} + /* Execute whatever commands are required to update the state of all the sub-devices so that they match our current control values. */ static void pvr2_subdev_update(struct pvr2_hdw *hdw) diff --git a/drivers/media/video/pvrusb2/pvrusb2-hdw.h b/drivers/media/video/pvrusb2/pvrusb2-hdw.h index d7753ae..6654658 100644 --- a/drivers/media/video/pvrusb2/pvrusb2-hdw.h +++ b/drivers/media/video/pvrusb2/pvrusb2-hdw.h @@ -214,6 +214,9 @@ struct pvr2_stream *pvr2_hdw_get_video_stream(struct pvr2_hdw *); int pvr2_hdw_get_stdenum_value(struct pvr2_hdw *hdw, struct v4l2_standard *std, unsigned int idx); +/* Get the detected video standard */ +int pvr2_hdw_get_detected_std(struct pvr2_hdw *hdw, v4l2_std_id *std); + /* Enable / disable retrieval of CPU firmware or prom contents. This must be enabled before pvr2_hdw_cpufw_get() will function. 
Note that doing this may prevent the device from running (and leaving this mode may diff --git a/drivers/media/video/pvrusb2/pvrusb2-v4l2.c b/drivers/media/video/pvrusb2/pvrusb2-v4l2.c index e27f8ab..0d029da 100644 --- a/drivers/media/video/pvrusb2/pvrusb2-v4l2.c +++ b/drivers/media/video/pvrusb2/pvrusb2-v4l2.c @@ -227,6 +227,13 @@ static long pvr2_v4l2_do_ioctl(struct file *file, unsigned int cmd, void *arg) break; } + case VIDIOC_QUERYSTD: + { + v4l2_std_id *std = arg; + ret = pvr2_hdw_get_detected_std(hdw, std); + break; + } + case VIDIOC_G_STD: { int val = 0; -- 1.7.6.4
Re: [PATCH 3/3] [media] tvp5150: Migrate to media-controller framework and add video format detection
Em 03-10-2011 03:30, Hans Verkuil escreveu: On Monday, October 03, 2011 04:17:06 Mauro Carvalho Chehab wrote: Em 02-10-2011 18:18, Javier Martinez Canillas escreveu: On Sun, Oct 2, 2011 at 6:30 PM, Sakari Ailus sakari.ai...@iki.fi wrote: Hi Javier, Thanks for the patch! It's very interesting to see a driver for a video decoder using the MC interface. Before this we've had just image sensors. Hello Sakari, Thanks for your comments. Javier Martinez Canillas wrote: + /* use the standard status register */ + std_status = tvp5150_read(sd, TVP5150_STATUS_REG_5); + else + /* use the standard register itself */ + std_status = std; Braces would be nice here. Ok. + switch (std_status & VIDEO_STD_MASK) { + case VIDEO_STD_NTSC_MJ_BIT: + case VIDEO_STD_NTSC_MJ_BIT_AS: + return STD_NTSC_MJ; + + case VIDEO_STD_PAL_BDGHIN_BIT: + case VIDEO_STD_PAL_BDGHIN_BIT_AS: + return STD_PAL_BDGHIN; + + default: + return STD_INVALID; + } + + return STD_INVALID; This return won't do anything. Yes, will clean this. @@ -704,19 +812,19 @@ static int tvp5150_set_std(struct v4l2_subdev *sd, v4l2_std_id std) if (std == V4L2_STD_ALL) { fmt = 0; /* Autodetect mode */ } else if (std & V4L2_STD_NTSC_443) { - fmt = 0xa; + fmt = VIDEO_STD_NTSC_4_43_BIT; } else if (std & V4L2_STD_PAL_M) { - fmt = 0x6; + fmt = VIDEO_STD_PAL_M_BIT; } else if (std & (V4L2_STD_PAL_N | V4L2_STD_PAL_Nc)) { - fmt = 0x8; + fmt = VIDEO_STD_PAL_COMBINATION_N_BIT; } else { /* Then, test against generic ones */ if (std & V4L2_STD_NTSC) - fmt = 0x2; + fmt = VIDEO_STD_NTSC_MJ_BIT; else if (std & V4L2_STD_PAL) - fmt = 0x4; + fmt = VIDEO_STD_PAL_BDGHIN_BIT; else if (std & V4L2_STD_SECAM) - fmt = 0xc; + fmt = VIDEO_STD_SECAM_BIT; } Excellent! Less magic numbers... 
+static struct v4l2_mbus_framefmt * +__tvp5150_get_pad_format(struct tvp5150 *tvp5150, struct v4l2_subdev_fh *fh, + unsigned int pad, enum v4l2_subdev_format_whence which) +{ + switch (which) { + case V4L2_SUBDEV_FORMAT_TRY: + return v4l2_subdev_get_try_format(fh, pad); + case V4L2_SUBDEV_FORMAT_ACTIVE: + return &tvp5150->format; + default: + return NULL; Hmm. This will never happen, but is returning NULL the right thing to do? An easy alternative is to just replace this with if (which may only have either of the two values). Ok I'll clean up, I was being a bit paranoid there :) + +static int tvp5150_set_pad_format(struct v4l2_subdev *subdev, + struct v4l2_subdev_fh *fh, + struct v4l2_subdev_format *format) +{ + struct tvp5150 *tvp5150 = to_tvp5150(subdev); + tvp5150->std_idx = STD_INVALID; The above assignment will always be overwritten immediately. Yes, since tvp515x_query_current_std() already returns STD_INVALID on error the assignment is not needed. Will change that. + tvp5150->std_idx = tvp515x_query_current_std(subdev); + if (tvp5150->std_idx == STD_INVALID) { + v4l2_err(subdev, "Unable to query std\n"); + return 0; Isn't this an error? Yes, I'll change to report the error to the caller. + * tvp515x_mbus_fmt_cap() - V4L2 decoder interface handler for try/s/g_mbus_fmt The name of the function is different. Yes, I'll change that. static const struct v4l2_subdev_video_ops tvp5150_video_ops = { .s_routing = tvp5150_s_routing, + .s_stream = tvp515x_s_stream, + .enum_mbus_fmt = tvp515x_enum_mbus_fmt, + .g_mbus_fmt = tvp515x_mbus_fmt, + .try_mbus_fmt = tvp515x_mbus_fmt, + .s_mbus_fmt = tvp515x_mbus_fmt, + .g_parm = tvp515x_g_parm, + .s_parm = tvp515x_s_parm, + .s_std_output = tvp5150_s_std, Do we really need both video and pad format ops? Good question, I don't know. Can this device be used as a standalone v4l2 device? Or is it supposed to always be a part of a video streaming pipeline as a sub-device with a source pad? 
Sorry if my questions are silly but as I stated before, I'm a newbie with v4l2 and MCF. The tvp5150 driver is used on some em28xx devices. It is nice to add auto-detection code to the driver, but converting it to the media bus should be done with enough care to not break support for the existing devices. So in other words, the tvp5150 driver needs both pad and non-pad ops. Eventually all non-pad variants in subdev drivers should be replaced by the pad variants so you don't have duplication of ops. But that will take a lot more work. Also, as
Re: [PATCH 3/3] [media] tvp5150: Migrate to media-controller framework and add video format detection
Em 03-10-2011 04:11, Javier Martinez Canillas escreveu: On Mon, Oct 3, 2011 at 8:30 AM, Hans Verkuil hverk...@xs4all.nl wrote: On Monday, October 03, 2011 04:17:06 Mauro Carvalho Chehab wrote: Em 02-10-2011 18:18, Javier Martinez Canillas escreveu: Yes, I'll change that. static const struct v4l2_subdev_video_ops tvp5150_video_ops = { .s_routing = tvp5150_s_routing, + .s_stream = tvp515x_s_stream, + .enum_mbus_fmt = tvp515x_enum_mbus_fmt, + .g_mbus_fmt = tvp515x_mbus_fmt, + .try_mbus_fmt = tvp515x_mbus_fmt, + .s_mbus_fmt = tvp515x_mbus_fmt, + .g_parm = tvp515x_g_parm, + .s_parm = tvp515x_s_parm, + .s_std_output = tvp5150_s_std, Do we really need both video and pad format ops? Good question, I don't know. Can this device be used as a standalone v4l2 device? Or is it supposed to always be a part of a video streaming pipeline as a sub-device with a source pad? Sorry if my questions are silly but as I stated before, I'm a newbie with v4l2 and MCF. The tvp5150 driver is used on some em28xx devices. It is nice to add auto-detection code to the driver, but converting it to the media bus should be done with enough care to not break support for the existing devices. So in other words, the tvp5150 driver needs both pad and non-pad ops. Eventually all non-pad variants in subdev drivers should be replaced by the pad variants so you don't have duplication of ops. But that will take a lot more work. Why do it at the pad level? It makes no sense. After selecting the pipeline/input, all pads that handle analog TV formats should be changed at the same time: video decoder, audio decoder, video enhancer/filters, etc. Great, that was a doubt I had, thanks for the clarification. In the specific case of standards auto-detection, a few drivers currently support this feature. What they do (or should do) is: If V4L2_STD_ALL is used, the driver should autodetect the video standard of the currently tuned channel. Actually, this is optional. 
As per the spec: "When the standard set is ambiguous drivers may return EINVAL or choose any of the requested standards." Nor does the spec say anything about doing an autodetect when STD_ALL is passed in. Most drivers will just set the std to PAL or NTSC in this case. If you want to autodetect, then use QUERYSTD. Applications cannot rely on drivers to handle V4L2_STD_ALL the way you say. The detected standard can be returned to userspace via VIDIOC_G_STD. No! G_STD always returns the current *selected* standard. Only QUERYSTD returns the detected standard. If, instead, another standard mask is sent to the driver via VIDIOC_S_STD, the expected behavior is that the driver should configure the standards detector to conform to the desired mask. If an unsupported configuration is requested, the driver should return the mask it actually used at the return of the VIDIOC_S_STD call. S_STD is a write-only ioctl, so the mask isn't updated. For example, if V4L2_STD_NTSC_M_JP is used, the driver should disable the auto-detector, and use NTSC/M with the Japanese audio standard. Both S_STD and G_STD will return V4L2_STD_NTSC_M_JP. If V4L2_STD_MN is used and the driver can auto-detect between all those formats, the driver should detect whether the standard is PAL or NTSC and select between PAL/M or NTSC/M (and the corresponding audio standards). If an unsupported mask like V4L2_STD_PAL_J | V4L2_STD_NTSC_M_JP is used, the driver should return a valid combination to S_STD (for example, returning V4L2_STD_PAL_J). In any case, on VIDIOC_G_STD, if the driver can't detect what the standard is, it should just return the current detection mask to userspace (instead of returning something like STD_INVALID). G_STD must always return the currently selected standard, never the detected standard. That's QUERYSTD. When the driver is first loaded it must pre-select a standard (usually in the probe function), either hardcoded (NTSC or PAL), or by doing an initial autodetect. 
But the standard should always be set to something. This allows you to start streaming immediately. Regards, Hans I hope that helps, Mauro. Thanks Mauro and Hans for your comments. I plan to work on the autodetect code and the issues called out by Sakari and resubmit the patch. Can you point me to a driver that gets auto-detection right, so I can use it as a reference? The saa7115 driver implements it right. I've reviewed its code and tested it with a real device. Regards, Mauro
RE: request information
Checking the CARDLIST.saa7134 ( http://www.mjmwired.net/kernel/Documentation/video4linux/CARDLIST.saa7134 ), it sounds like it (Device [1043:8188]) is not in the CARDLIST yet. Then, you may check with ASUSTeK and see which one in the CARDLIST is closest to it. Like: 78 - ASUSTeK P7131 Dual [1043:4862] 112 - ASUSTeK P7131 Hybrid [1043:4876] 146 - ASUSTeK P7131 Analog .. .. 174 - Asus Europa Hybrid OEM [1043:4847] -----Original Message----- From: linux-media-ow...@vger.kernel.org [mailto:linux-media-ow...@vger.kernel.org] On Behalf Of LD Sent: Sunday, October 02, 2011 7:47 AM To: linux-media@vger.kernel.org Subject: request information I would like to know which firmware and drivers are helpful to install and set up this type of card: Multimedia controller [0480]: Philips Semiconductors SAA7131/SAA7133/SAA7135 Video Broadcast Decoder [1131:7133] (rev d0) Subsystem: ASUSTeK Computer Inc. Device [1043:8188] Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx- Status: Cap+ 66MHz- UDF- FastB2B+ ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx- Latency: 64 (21000ns min, 8000ns max) Interrupt: pin A routed to IRQ 23 Region 0: Memory at dbedb800 (32-bit, non-prefetchable) [size=2K] Capabilities: access denied Kernel driver in use: saa7134 Kernel modules: saa7134 Thank you for the answer LD
Re: [PATCH 3/3] [media] tvp5150: Migrate to media-controller framework and add video format detection
Em 03-10-2011 06:53, Javier Martinez Canillas escreveu: On Mon, Oct 3, 2011 at 10:39 AM, Laurent Pinchart laurent.pinch...@ideasonboard.com wrote: Hi Hans, On Monday 03 October 2011 08:30:25 Hans Verkuil wrote: On Monday, October 03, 2011 04:17:06 Mauro Carvalho Chehab wrote: Em 02-10-2011 18:18, Javier Martinez Canillas escreveu: On Sun, Oct 2, 2011 at 6:30 PM, Sakari Ailus wrote: [snip] static const struct v4l2_subdev_video_ops tvp5150_video_ops = { .s_routing = tvp5150_s_routing, + .s_stream = tvp515x_s_stream, + .enum_mbus_fmt = tvp515x_enum_mbus_fmt, + .g_mbus_fmt = tvp515x_mbus_fmt, + .try_mbus_fmt = tvp515x_mbus_fmt, + .s_mbus_fmt = tvp515x_mbus_fmt, + .g_parm = tvp515x_g_parm, + .s_parm = tvp515x_s_parm, + .s_std_output = tvp5150_s_std, Do we really need both video and pad format ops? Good question, I don't know. Can this device be used as a standalone v4l2 device? Or is it supposed to always be a part of a video streaming pipeline as a sub-device with a source pad? Sorry if my questions are silly but as I stated before, I'm a newbie with v4l2 and MCF. The tvp5150 driver is used on some em28xx devices. It is nice to add auto-detection code to the driver, but converting it to the media bus should be done with enough care to not break support for the existing devices. So in other words, the tvp5150 driver needs both pad and non-pad ops. Eventually all non-pad variants in subdev drivers should be replaced by the pad variants so you don't have duplication of ops. But that will take a lot more work. What about replacing direct calls to non-pad operations with core V4L2 functions that would use the subdev non-pad operation if available, and emulate it with the pad operation otherwise? I think this would ease the transition, as subdev drivers could be ported to pad operations without worrying about the bridges that use them, and bridge drivers could be switched to the new wrappers with a simple search and replace. Ok, that is a good solution. I'll do that. 
Implement V4L2 core operations as wrappers of the subdev pad operations. As I said, I can't see _any_ reason why setting a format would be needed at pad level. Patches shouldn't increase driver/core and userspace complexity for nothing. Regards, Mauro
Re: [PATCH 1/2] [media] saa7115: Fix standards detection
On Monday, October 03, 2011 20:47:36 Mauro Carvalho Chehab wrote: There are several bugs in the saa7115 standards detection. After the fix, the driver is returning the proper standards, as tested with 3 different broadcast sources: On an invalid channel (without any TV signal): [ 4394.931630] saa7115 15-0021: Status byte 2 (0x1f)=0xe0 [ 4394.931635] saa7115 15-0021: detected std mask = 00ff With a PAL/M signal: [ 4410.836855] saa7115 15-0021: Status byte 2 (0x1f)=0xb1 [ 4410.837727] saa7115 15-0021: Status byte 1 (0x1e)=0x82 [ 4410.837731] saa7115 15-0021: detected std mask = 0900 With a NTSC/M signal: [ 4422.383893] saa7115 15-0021: Status byte 2 (0x1f)=0xb1 [ 4422.384768] saa7115 15-0021: Status byte 1 (0x1e)=0x81 [ 4422.384772] saa7115 15-0021: detected std mask = b000 Tests were done with a WinTV PVR USB2 Model 29xx card. Signed-off-by: Mauro Carvalho Chehab mche...@redhat.com Reviewed-by: Hans Verkuil hans.verk...@cisco.com Looks good! Regards, Hans --- drivers/media/video/saa7115.c | 47 +++- 1 files changed, 32 insertions(+), 15 deletions(-) diff --git a/drivers/media/video/saa7115.c b/drivers/media/video/saa7115.c index cee98ea..86627a8 100644 --- a/drivers/media/video/saa7115.c +++ b/drivers/media/video/saa7115.c @@ -1344,35 +1344,52 @@ static int saa711x_g_vbi_data(struct v4l2_subdev *sd, struct v4l2_sliced_vbi_dat static int saa711x_querystd(struct v4l2_subdev *sd, v4l2_std_id *std) { struct saa711x_state *state = to_state(sd); - int reg1e; + int reg1f, reg1e; - *std = V4L2_STD_ALL; - if (state->ident != V4L2_IDENT_SAA7115) { - int reg1f = saa711x_read(sd, R_1F_STATUS_BYTE_2_VD_DEC); - - if (reg1f & 0x20) - *std = V4L2_STD_525_60; - else - *std = V4L2_STD_625_50; - - return 0; + reg1f = saa711x_read(sd, R_1F_STATUS_BYTE_2_VD_DEC); + v4l2_dbg(1, debug, sd, "Status byte 2 (0x1f)=0x%02x\n", reg1f); + if (reg1f & 0x40) { + /* horizontal/vertical not locked */ + *std = V4L2_STD_ALL; + goto ret; } + if (reg1f & 0x20) + *std = V4L2_STD_525_60; + else + *std = 
V4L2_STD_625_50; + + if (state->ident != V4L2_IDENT_SAA7115) + goto ret; reg1e = saa711x_read(sd, R_1E_STATUS_BYTE_1_VD_DEC); switch (reg1e & 0x03) { case 1: - *std = V4L2_STD_NTSC; + *std &= V4L2_STD_NTSC; break; case 2: - *std = V4L2_STD_PAL; + /* + * V4L2_STD_PAL just covers the european PAL standards. + * This is wrong, as the device could also be using an + * other PAL standard. + */ + *std &= V4L2_STD_PAL | V4L2_STD_PAL_N | V4L2_STD_PAL_Nc | + V4L2_STD_PAL_M | V4L2_STD_PAL_60; break; case 3: - *std = V4L2_STD_SECAM; + *std &= V4L2_STD_SECAM; break; default: + /* Can't detect anything */ break; } + + v4l2_dbg(1, debug, sd, "Status byte 1 (0x1e)=0x%02x\n", reg1e); + +ret: + v4l2_dbg(1, debug, sd, "detected std mask = %08Lx\n", *std); + return 0; }
Re: [PATCH 2/2] [media] pvrusb2: implement VIDIOC_QUERYSTD
Acked-by: Mike Isely is...@pobox.com -Mike On Mon, 3 Oct 2011, Mauro Carvalho Chehab wrote: Signed-off-by: Mauro Carvalho Chehab mche...@redhat.com --- drivers/media/video/pvrusb2/pvrusb2-hdw.c |7 +++ drivers/media/video/pvrusb2/pvrusb2-hdw.h |3 +++ drivers/media/video/pvrusb2/pvrusb2-v4l2.c |7 +++ 3 files changed, 17 insertions(+), 0 deletions(-) diff --git a/drivers/media/video/pvrusb2/pvrusb2-hdw.c b/drivers/media/video/pvrusb2/pvrusb2-hdw.c index e98d382..5a6f24d 100644 --- a/drivers/media/video/pvrusb2/pvrusb2-hdw.c +++ b/drivers/media/video/pvrusb2/pvrusb2-hdw.c @@ -2993,6 +2993,13 @@ static void pvr2_subdev_set_control(struct pvr2_hdw *hdw, int id, pvr2_subdev_set_control(hdw, id, #lab, (hdw)->lab##_val); \ } +int pvr2_hdw_get_detected_std(struct pvr2_hdw *hdw, v4l2_std_id *std) +{ + v4l2_device_call_all(&hdw->v4l2_dev, 0, + video, querystd, std); + return 0; +} + /* Execute whatever commands are required to update the state of all the sub-devices so that they match our current control values. */ static void pvr2_subdev_update(struct pvr2_hdw *hdw) diff --git a/drivers/media/video/pvrusb2/pvrusb2-hdw.h b/drivers/media/video/pvrusb2/pvrusb2-hdw.h index d7753ae..6654658 100644 --- a/drivers/media/video/pvrusb2/pvrusb2-hdw.h +++ b/drivers/media/video/pvrusb2/pvrusb2-hdw.h @@ -214,6 +214,9 @@ struct pvr2_stream *pvr2_hdw_get_video_stream(struct pvr2_hdw *); int pvr2_hdw_get_stdenum_value(struct pvr2_hdw *hdw, struct v4l2_standard *std, unsigned int idx); +/* Get the detected video standard */ +int pvr2_hdw_get_detected_std(struct pvr2_hdw *hdw, v4l2_std_id *std); + /* Enable / disable retrieval of CPU firmware or prom contents. This must be enabled before pvr2_hdw_cpufw_get() will function. 
Note that doing this may prevent the device from running (and leaving this mode may diff --git a/drivers/media/video/pvrusb2/pvrusb2-v4l2.c b/drivers/media/video/pvrusb2/pvrusb2-v4l2.c index e27f8ab..0d029da 100644 --- a/drivers/media/video/pvrusb2/pvrusb2-v4l2.c +++ b/drivers/media/video/pvrusb2/pvrusb2-v4l2.c @@ -227,6 +227,13 @@ static long pvr2_v4l2_do_ioctl(struct file *file, unsigned int cmd, void *arg) break; } + case VIDIOC_QUERYSTD: + { + v4l2_std_id *std = arg; + ret = pvr2_hdw_get_detected_std(hdw, std); + break; + } + case VIDIOC_G_STD: { int val = 0; -- Mike Isely isely @ isely (dot) net PGP: 03 54 43 4D 75 E5 CC 92 71 16 01 E2 B5 F5 C1 E8
Re: [PATCH 7/9] V4L: soc-camera: add a Media Controller wrapper
Hi Guennadi, On Monday 03 October 2011 17:29:23 Guennadi Liakhovetski wrote: Hi Laurent Thanks for the reviews! You're welcome. On Mon, 3 Oct 2011, Laurent Pinchart wrote: On Thursday 29 September 2011 18:18:55 Guennadi Liakhovetski wrote: This wrapper adds a Media Controller implementation to soc-camera drivers. To really benefit from it, individual host drivers should implement support for values of enum soc_camera_target other than SOCAM_TARGET_PIPELINE in their .set_fmt() and .try_fmt() methods. [snip] diff --git a/drivers/media/video/soc_entity.c b/drivers/media/video/soc_entity.c new file mode 100644 index 000..3a04700 --- /dev/null +++ b/drivers/media/video/soc_entity.c @@ -0,0 +1,284 @@ [snip] +static int bus_sd_pad_g_fmt(struct v4l2_subdev *sd, struct v4l2_subdev_fh *fh, + struct v4l2_subdev_format *sd_fmt) +{ + struct soc_camera_device *icd = v4l2_get_subdevdata(sd); + struct v4l2_mbus_framefmt *f = &sd_fmt->format; + + if (sd_fmt->which == V4L2_SUBDEV_FORMAT_TRY) { + sd_fmt->format = *v4l2_subdev_get_try_format(fh, sd_fmt->pad); + return 0; + } + + if (sd_fmt->pad == SOC_HOST_BUS_PAD_SINK) { + f->width = icd->host_input_width; + f->height = icd->host_input_height; + } else { + f->width = icd->user_width; + f->height = icd->user_height; + } + f->field = icd->field; + f->code = icd->current_fmt->code; + f->colorspace = icd->colorspace; Can soc-camera hosts perform format conversion? If so you will likely need to store the mbus code for the input and output separately, possibly in v4l2_mbus_framefmt fields. You could then simplify the [gs]_fmt functions by implementing them similarly to the __*_get_format functions in the OMAP3 ISP driver. They can, yes. But, under soc-camera, conversions are performed between mediabus codes and fourcc formats. Upon pipeline construction (probing) a table of format conversions is built, where hosts generate one or more translation entries for all client formats that they support. 
The only example of a more complex translation so far is MIPI CSI-2, but even there we have decided to identify CSI-2 formats using the same media-bus codes, as what you get between the CSI-2 block and the DMA engine. For the only CSI-2 capable soc-camera host so far - the CEU driver - this is also a very natural representation, because there the CSI-2 block is indeed an additional pipeline stage, uniquely translating CSI-2 to media-bus codes, that are then fed to the CEU parallel port. How does that work with the MC API then? If the bridge can, let's say, convert between raw bayer and YUV, shouldn't the format at the bridge input be raw bayer and at the bridge output YUV? + return 0; +} + +static int bus_sd_pad_s_fmt(struct v4l2_subdev *sd, struct v4l2_subdev_fh *fh, + struct v4l2_subdev_format *sd_fmt) +{ + struct soc_camera_device *icd = v4l2_get_subdevdata(sd); + struct v4l2_mbus_framefmt *mf = &sd_fmt->format; + struct v4l2_format vf = { + .type = V4L2_BUF_TYPE_VIDEO_CAPTURE, + }; + enum soc_camera_target tgt = sd_fmt->pad == SOC_HOST_BUS_PAD_SINK ? 
+ SOCAM_TARGET_HOST_IN : SOCAM_TARGET_HOST_OUT; + int ret; + + se_mbus_to_v4l2(icd, mf, &vf); + + if (sd_fmt->which == V4L2_SUBDEV_FORMAT_TRY) { + struct v4l2_mbus_framefmt *try_fmt = + v4l2_subdev_get_try_format(fh, sd_fmt->pad); + ret = soc_camera_try_fmt(icd, &vf, tgt); + if (!ret) { + se_v4l2_to_mbus(icd, &vf, try_fmt); + sd_fmt->format = *try_fmt; + } + return ret; + } + + ret = soc_camera_set_fmt(icd, &vf, tgt); + if (!ret) + se_v4l2_to_mbus(icd, &vf, &sd_fmt->format); + + return ret; +} + +static int bus_sd_pad_enum_mbus_code(struct v4l2_subdev *sd, + struct v4l2_subdev_fh *fh, + struct v4l2_subdev_mbus_code_enum *ce) +{ + struct soc_camera_device *icd = v4l2_get_subdevdata(sd); + + if (ce->index >= icd->num_user_formats) + return -EINVAL; + + ce->code = icd->user_formats[ce->index].code; + return 0; +} + +static const struct v4l2_subdev_pad_ops se_bus_sd_pad_ops = { + .get_fmt = bus_sd_pad_g_fmt, + .set_fmt = bus_sd_pad_s_fmt, + .enum_mbus_code = bus_sd_pad_enum_mbus_code, +}; + +static const struct v4l2_subdev_ops se_bus_sd_ops = { + .pad = &se_bus_sd_pad_ops, +}; + +static const struct media_entity_operations se_bus_me_ops = { +}; + +static const struct media_entity_operations se_vdev_me_ops = { +}; NULL operations are allowed, you don't have to use an empty structure. Ok + +int
Re: [PATCH 3/3] [media] tvp5150: Migrate to media-controller framework and add video format detection
Em 03-10-2011 18:44, Laurent Pinchart escreveu: Hi Mauro, On Monday 03 October 2011 21:16:45 Mauro Carvalho Chehab wrote: Em 03-10-2011 08:53, Laurent Pinchart escreveu: On Monday 03 October 2011 11:53:44 Javier Martinez Canillas wrote: [snip] Laurent, I have a few questions about MCF and the OMAP3ISP driver if you are so kind to answer. 1- User-space programs that are not MCF aware negotiate the format with the V4L2 device (i.e: OMAP3 ISP CCDC output), which is a sink pad. But the real format is driven by the analog video format in the source pad (i.e: tvp5151). That's not different from existing systems using digital sensors, where the format is driven by the sensor. I modified the ISP driver to get the data format from the source pad and set the format for each pad on the pipeline accordingly but I've read from the documentation [1] that is not correct to propagate a data format from source pads to sink pads, that the correct thing is to do it from sink to source. So, in this case an administrator has to externally configure the format for each pad and to guarantee a coherent format on the whole pipeline?. That's correct (except you don't need to be an administrator to do so :-)). NACK. Double NACK :-D When userspace sends a VIDIOC_S_STD ioctl to the sink node, the subdevs that are handling the video/audio standard should be changed, in order to obey the V4L2 ioctl. This is what happens with all other drivers since the beginning of the V4L1 API. There's no reason to change it, and such change would be a regression. The same could have been told for the format API: When userspace sends a VIDIOC_S_FMT ioctl to the sink node, the subdevs that are handling the video format should be changed, in order to obey the V4L2 ioctl. This is what happens with all other drivers since the beginning of the V4L1 API. There's no reason to change it, and such change would be a regression. But we've introduced a pad-level format API. 
I don't see any reason to treat standard differently. Neither do I. The pad-level API should not replace the V4L2 API for standard, for controls and/or for format settings. Or does exist a way to do this automatic?. i.e: The output entity on the pipeline promotes the capabilities of the source pad so applications can select a data format and this format gets propagated all over the pipeline from the sink pad to the source? It can be automated in userspace (through a libv4l plugin for instance), but it's really not the kernel's job to do so. It is a kernel job to handle VIDIOC_S_STD, and not a task to be left to any userspace plugin. And VIDIOC_S_FMT is handled by userspace for the OMAP3 ISP today. Why are standards different ? All v4l media devices have sub-devices with either tv decoders or sensors connected into a sink. The sink implements the /dev/video? node. When an ioctl is sent to the v4l node, the sensors/decoders are controlled to implement whatever is requested: video standards, formats etc. Changing it would be a major regression. If OMAP3 is not doing the right thing, it should be fixed, and not the opposite. The MC/subdev API is there to fill the blanks, e. g. to handle cases where the same function could be implemented on two different places of the pipeline, e. g. when both the sensor and the bridge could do scaling, and userspace wants to explicitly use one, or the other, but it were never meant to replace the V4L2 functionality. [1]: http://linuxtv.org/downloads/v4l-dvb-apis/subdev.html 2- If the application want a different format that the default provided by the tvp5151, (i.e: 720x576 for PAL), where do I have to crop the image? I thought this can be made using the CCDC, copying less lines to memory or the RESIZER if the application wants a bigger image. What is the best approach for this? Not sure if I understood your question, but maybe you're mixing two different concepts here. 
If the application wants a different image resolution, it will use S_FMT. In this case, what userspace expects is that the driver will scale, if supported, or return -EINVAL otherwise. With the OMAP3 ISP, which is I believe what Javier was asking about, the application will set the format on the OMAP3 ISP resizer input and output pads to configure scaling. The V4L2 API doesn't tell where a function like scaling will be implemented. So, it is fine to implement it at the tvp5151 or at the omap3 resizer, when a V4L2 call is sent. I'm OK if you want to offer the possibility of doing scaling on the other parts of the pipeline, as a bonus, via the MC/subdev API, but the absolute minimum requirement is to implement it via the V4L2 API. Regards, Mauro -- To unsubscribe from this list: send the line "unsubscribe linux-media" in the body of a message to majord...@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
RE: [DVB] CXD2099 - Question about the CAM clock
-----Original Message----- From: linux-media-ow...@vger.kernel.org [mailto:linux-media-ow...@vger.kernel.org] On Behalf Of Sébastien RAILLARD (COEXSI) Sent: Monday, 3 October 2011 16:46 To: 'Issa Gorissen'; o.endr...@gmx.de Cc: 'Linux Media Mailing List' Subject: RE: [DVB] CXD2099 - Question about the CAM clock -----Original Message----- From: Issa Gorissen [mailto:flo...@usa.net] Sent: Monday, 3 October 2011 15:59 To: o.endr...@gmx.de; Sébastien RAILLARD Cc: 'Linux Media Mailing List' Subject: RE: [DVB] CXD2099 - Question about the CAM clock Dear Oliver, I've done some tests with the CAM reader from Digital Devices based on the Sony CXD2099 chip, and I noticed some issues with some CAMs:
* SMIT CAM: working fine
* ASTON CAM: working fine, except that it's crashing quite regularly
* NEOTION CAM: no stream going out, but access to the CAM menu is OK
When looking at the CXD2099 driver code, I noticed the CAM clock (fMCLKI) is fixed at 9 MHz, using the 27 MHz onboard oscillator with the integer divider set to 3 (as MCLKI_FREQ=2). I was wondering if some CAMs were not able to work correctly at such a high clock frequency. So, I've tried to enable the NCO (numerically controlled oscillator) in order to set up a lower frequency for the CAM clock, but I wasn't successful; it looks like the frequency must be around 9 MHz or I can't get any stream. Do you know a way to decrease this CAM clock frequency to do some testing? Best regards, Sebastien. Weird that the frequency would pose a problem for those CAMs. The CI spec [1] explains that the minimum byte transfer clock period must be 111 ns. This gives us a frequency of ~9 MHz. You're totally right about the maximum clock frequency specified in the norm, but I had confirmation from CAM manufacturers that their CAMs may not work correctly up to this maximum frequency. Usually, the CAM clock is coming from the input TS stream, and I don't think there is for now a DVB-S2 transponder having a 72 Mbps bitrate (so 9 MHz for parallel CAM clocking).
Anyway, wouldn't it be wiser to base MCLKI on TICLK? I've tried to use mode C instead of mode D, and I have the same problem, so I guess TICLK is around 72 MHz. It could be a good idea to use TICLK, but I don't know its value, nor whether the clock is constant or only active during data transmission. Did you manage to enable and use the NCO of the CXD2099 (instead of the integer divider)? No, but if your output to the CAM is slower than what comes from the ngene chip, you will lose bytes, no? The real bandwidth of my transponder is 62 Mbps, so I have room to decrease the CAM clock. I did more tests with the NCO, and I got strange results:
* Using MCLKI=0x5553 => fMCLKI = 8.99903 MHz => not working, a lot of TS errors
* Using MCLKI=0x5554 => fMCLKI = 8.99945 MHz => working fine
* Using MCLKI=0x => fMCLKI = 8.99986 MHz => not working, a lot of TS errors
It's strange that changing the clock so slightly makes so many errors! I managed to find a series of values that work correctly for MCLKI: MCLKI = 0x5554 - i * 0x0c. In my case I can go down to 0x5338 before having TS errors.
Re: About the patch I sent.
Em 03-10-2011 19:06, Marco Diego Aurélio Mesquita escreveu: Hi! I'd really like my patch[1] accepted. Is there anything I can do about it? [1] http://patchwork.linuxtv.org/patch/6850/ Hans, Could you please ack or nack this patch? Thanks! Mauro
About the patch I sent.
Hi! I'd really like my patch[1] accepted. Is there anything I can do about it? [1] http://patchwork.linuxtv.org/patch/6850/
[BUG] ir-mce_kbd-decoder keyboard repeat issue
I originally fixed some bugs on the original lirc_mod_mce driver that Jon Davies was hosting, but found myself without a working keyboard after upgrading to Ubuntu Natty with the new ir-core. I just quickly backported the ir-mce_kbd-decoder driver that Jarod Wilson posted a while back for my Ubuntu box running 2.6.38. http://git.linuxtv.org/media_tree.git/history/refs/heads/staging/for_v3.1:/drivers/media/rc/ir-mce_kbd-decoder.c After porting, I discovered the keyboard repeat logic is broken. The keyboard repeat delay or interval isn't working properly, which makes typing on the keyboard nearly impossible. The repeat delay isn't being respected, so you often get double characters when typing if you hold a key just a bit too long (over 100ms; the repeat delay is usually 250ms). When you hold a key down to get repeats, however, the repeat is very slow. The two changes I made were:
1. Keep track of the last scancode and only report the event to the input subsystem if it has changed. (Not sure if this is actually necessary, as input.c might sort it all out.)
2. Don't use the rc_dev timeout for the key up timeout. The timeout for the IR receiver has nothing to do with the rate at which the MCE keyboard sends key press events, so this seems to be invalid. In fact, the timeout from my rc device was 100 ns (1ms), which meant a key up event was occurring almost immediately. From my testing, the keyboard will send events every 100ms, so I made the timeout 150ms and everything seems to work great.
Here's the patch. It's not clean/final (and is missing a couple irrelevant lines for my Ubuntu module), but I wanted to get some feedback to validate this.
@@ -44,6 +45,7 @@
 #define MCIR2_KEYBOARD_HEADER	0x4
 #define MCIR2_MOUSE_HEADER	0x1
 #define MCIR2_MASK_KEYS_START	0xe0
+#define MCIR2_RX_TIMEOUT_MS	150
 
 enum mce_kbd_mode {
 	MCIR2_MODE_KEYBOARD,
@@ -121,6 +123,9 @@
 	int i;
 	unsigned char maskcode;
 
+	if (mce_kbd->last_scancode == 0)
+		return;
+
 	IR_dprintk(2, "timer callback clearing all keys\n");
 
 	for (i = 0; i < 7; i++) {
@@ -322,13 +327,28 @@
 	case MCIR2_KEYBOARD_NBITS:
 		scancode = data->body & 0xffffff;
 		IR_dprintk(1, "keyboard data 0x%08x\n", data->body);
-		if (dev->timeout)
+		IR_dprintk(1, "keyboard timeout %d us\n", dev->timeout);
+
+		/* The IR device timeout has nothing to do with the keyboard
+		 * timing, so I'm not sure why we would use that here. From
+		 * observation, the keyboard seems to send events at most once
+		 * every 100ms. Let's be optimistic and time out after
+		 * MCIR2_RX_TIMEOUT_MS (150ms) only if we don't receive a
+		 * valid keyup. */
+		/*if (dev->timeout)
 			delay = usecs_to_jiffies(dev->timeout / 1000);
 		else
-			delay = msecs_to_jiffies(100);
+			delay = msecs_to_jiffies(100);*/
+		delay = msecs_to_jiffies(MCIR2_RX_TIMEOUT_MS);
+
 		mod_timer(&data->rx_timeout, jiffies + delay);
 
-		/* Pass data to keyboard buffer parser */
-		ir_mce_kbd_process_keyboard_data(data->idev, scancode);
+		/* Only process keypress data if it has changed, to allow
+		 * kernel keyboard repeat logic to work */
+		if (scancode != data->last_scancode) {
+			/* Pass data to keyboard buffer parser */
+			ir_mce_kbd_process_keyboard_data(data->idev, scancode);
+			data->last_scancode = scancode;
+		}
 		break;
 	case MCIR2_MOUSE_NBITS:
 		scancode = data->body & 0x1fffff;
@@ -356,7 +376,7 @@
Re: [PATCH 3/3] [media] tvp5150: Migrate to media-controller framework and add video format detection
Hello, Reading the last emails I understand that there still isn't a consensus on how this has to be done: whether it has to be implemented at the video device node level or at the sub-device level, and whether it has to be done in kernel or user space. On Mon, Oct 3, 2011 at 11:56 PM, Mauro Carvalho Chehab mche...@infradead.org wrote: Em 03-10-2011 18:44, Laurent Pinchart escreveu: Hi Mauro, On Monday 03 October 2011 21:16:45 Mauro Carvalho Chehab wrote: Em 03-10-2011 08:53, Laurent Pinchart escreveu: On Monday 03 October 2011 11:53:44 Javier Martinez Canillas wrote: [snip] Laurent, I have a few questions about MCF and the OMAP3ISP driver, if you would be so kind as to answer. 1- User-space programs that are not MCF aware negotiate the format with the V4L2 device (i.e: OMAP3 ISP CCDC output), which is a sink pad. But the real format is driven by the analog video format in the source pad (i.e: tvp5151). That's not different from existing systems using digital sensors, where the format is driven by the sensor. I modified the ISP driver to get the data format from the source pad and set the format for each pad on the pipeline accordingly, but I've read from the documentation [1] that it is not correct to propagate a data format from source pads to sink pads, that the correct thing is to do it from sink to source. So, in this case an administrator has to externally configure the format for each pad and guarantee a coherent format on the whole pipeline? That's correct (except you don't need to be an administrator to do so :-)). NACK. Double NACK :-D When userspace sends a VIDIOC_S_STD ioctl to the sink node, the subdevs that are handling the video/audio standard should be changed, in order to obey the V4L2 ioctl. This is what happens with all other drivers since the beginning of the V4L1 API. There's no reason to change it, and such a change would be a regression.
The same could have been said of the format API: When userspace sends a VIDIOC_S_FMT ioctl to the sink node, the subdevs that are handling the video format should be changed, in order to obey the V4L2 ioctl. This is what happens with all other drivers since the beginning of the V4L1 API. There's no reason to change it, and such a change would be a regression. But we've introduced a pad-level format API. I don't see any reason to treat standard differently. Neither do I. The pad-level API should not replace the V4L2 API for standard, for controls and/or for format settings. Or does a way exist to do this automatically? i.e.: the output entity on the pipeline promotes the capabilities of the source pad so applications can select a data format, and this format gets propagated all over the pipeline from the sink pad to the source? It can be automated in userspace (through a libv4l plugin for instance), but it's really not the kernel's job to do so. It is a kernel job to handle VIDIOC_S_STD, and not a task to be left to any userspace plugin. And VIDIOC_S_FMT is handled by userspace for the OMAP3 ISP today. Why are standards different? All v4l media devices have sub-devices with either tv decoders or sensors connected into a sink. The sink implements the /dev/video? node. When an ioctl is sent to the v4l node, the sensors/decoders are controlled to implement whatever is requested: video standards, formats etc. Changing it would be a major regression. If OMAP3 is not doing the right thing, it should be fixed, and not the opposite. That is the approach we took: we hacked the isp v4l2 device driver (ispvideo) to bypass the ioctls to the sub-device that has the source pad (tvp5151 in our case, but it could be a sensor as well).
So, for example, the VIDIOC_S_STD ioctl handler looks like this (I'm posting a simplified version of the code, just to give an idea):

static int isp_video_s_std(struct file *file, void *fh, v4l2_std_id *norm)
{
	struct isp_video *video = video_drvdata(file);
	struct v4l2_subdev *sink_subdev;
	struct v4l2_subdev *source_subdev;
	struct media_pad *sink_pad, *source_pad;

	sink_subdev = isp_video_remote_subdev(video, NULL);
	sink_pad = &sink_subdev->entity.pads[0];
	source_pad = media_entity_remote_source(sink_pad);
	source_subdev = media_entity_to_v4l2_subdev(source_pad->entity);

	return v4l2_subdev_call(source_subdev, core, s_std, *norm);
}

So applications interact with the /dev/video? node via V4L2 ioctls whose handlers call the sub-dev functions. Is that what you propose? The MC/subdev API is there to fill the blanks, e.g. to handle cases where the same function could be implemented in two different places of the pipeline, e.g. when both the sensor and the bridge could do scaling and userspace wants to explicitly use one or the other, but it was never meant to replace the V4L2 functionality. [1]: http://linuxtv.org/downloads/v4l-dvb-apis/subdev.html 2- If the application wants a different format than the default provided by the tvp5151 (i.e. 720x576 for PAL), where do I have to crop the
Re: [PATCH 3/3] [media] tvp5150: Migrate to media-controller framework and add video format detection
Em 03-10-2011 19:37, Javier Martinez Canillas escreveu: Hello, Reading the last emails I understand that there still isn't a consensus on how this has to be done. True. Whether it has to be implemented at the video device node level or at the sub-device level, and whether it has to be done in kernel or user space. For now, I propose you just add/improve the auto-detection on the existing callbacks. We need to reach a consensus before working at the pad level. On Mon, Oct 3, 2011 at 11:56 PM, Mauro Carvalho Chehab mche...@infradead.org wrote: Em 03-10-2011 18:44, Laurent Pinchart escreveu: Hi Mauro, On Monday 03 October 2011 21:16:45 Mauro Carvalho Chehab wrote: Em 03-10-2011 08:53, Laurent Pinchart escreveu: On Monday 03 October 2011 11:53:44 Javier Martinez Canillas wrote: [snip] Laurent, I have a few questions about MCF and the OMAP3ISP driver, if you would be so kind as to answer. 1- User-space programs that are not MCF aware negotiate the format with the V4L2 device (i.e: OMAP3 ISP CCDC output), which is a sink pad. But the real format is driven by the analog video format in the source pad (i.e: tvp5151). That's not different from existing systems using digital sensors, where the format is driven by the sensor. I modified the ISP driver to get the data format from the source pad and set the format for each pad on the pipeline accordingly, but I've read from the documentation [1] that it is not correct to propagate a data format from source pads to sink pads, that the correct thing is to do it from sink to source. So, in this case an administrator has to externally configure the format for each pad and guarantee a coherent format on the whole pipeline? That's correct (except you don't need to be an administrator to do so :-)). NACK. Double NACK :-D When userspace sends a VIDIOC_S_STD ioctl to the sink node, the subdevs that are handling the video/audio standard should be changed, in order to obey the V4L2 ioctl.
This is what happens with all other drivers since the beginning of the V4L1 API. There's no reason to change it, and such a change would be a regression. The same could have been said of the format API: When userspace sends a VIDIOC_S_FMT ioctl to the sink node, the subdevs that are handling the video format should be changed, in order to obey the V4L2 ioctl. This is what happens with all other drivers since the beginning of the V4L1 API. There's no reason to change it, and such a change would be a regression. But we've introduced a pad-level format API. I don't see any reason to treat standard differently. Neither do I. The pad-level API should not replace the V4L2 API for standard, for controls and/or for format settings. Or does a way exist to do this automatically? i.e.: the output entity on the pipeline promotes the capabilities of the source pad so applications can select a data format, and this format gets propagated all over the pipeline from the sink pad to the source? It can be automated in userspace (through a libv4l plugin for instance), but it's really not the kernel's job to do so. It is a kernel job to handle VIDIOC_S_STD, and not a task to be left to any userspace plugin. And VIDIOC_S_FMT is handled by userspace for the OMAP3 ISP today. Why are standards different? All v4l media devices have sub-devices with either tv decoders or sensors connected into a sink. The sink implements the /dev/video? node. When an ioctl is sent to the v4l node, the sensors/decoders are controlled to implement whatever is requested: video standards, formats etc. Changing it would be a major regression. If OMAP3 is not doing the right thing, it should be fixed, and not the opposite. That is the approach we took: we hacked the isp v4l2 device driver (ispvideo) to bypass the ioctls to the sub-device that has the source pad (tvp5151 in our case, but it could be a sensor as well).
So, for example, the VIDIOC_S_STD ioctl handler looks like this (I'm posting a simplified version of the code, just to give an idea):

static int isp_video_s_std(struct file *file, void *fh, v4l2_std_id *norm)
{
	struct isp_video *video = video_drvdata(file);
	struct v4l2_subdev *sink_subdev;
	struct v4l2_subdev *source_subdev;
	struct media_pad *sink_pad, *source_pad;

	sink_subdev = isp_video_remote_subdev(video, NULL);
	sink_pad = &sink_subdev->entity.pads[0];
	source_pad = media_entity_remote_source(sink_pad);
	source_subdev = media_entity_to_v4l2_subdev(source_pad->entity);

	return v4l2_subdev_call(source_subdev, core, s_std, *norm);
}

So applications interact with the /dev/video? node via V4L2 ioctls whose handlers call the sub-dev functions. Is that what you propose? Something like that. For example:

static int vidioc_s_std(struct file *file, void *priv, v4l2_std_id *norm)
{
	/* Do some sanity test/resolution adjustments, etc */
	v4l2_device_call_all(&dev->v4l2_dev, 0, core, s_std, dev->norm);
	return 0;
}

It should be noticed that: 1)