Hi Lane,

On Tuesday 23 November 2010 23:29:10 Lane Brooks wrote:
> Laurent,
> 
> Things in general are working with our camera implementation using the
> OMAP ISP module. There is, however, a lingering issue that I now need to
> work out regarding the fact that most user space applications do not
> work with our camera because of the new media framework.
> 
> Currently, the only way we have to use our camera is through a custom
> user space application we wrote (that makes heavy use of the media-ctl
> user space application for setting up the media links). What I am hoping
> for, though, is the ability to setup the media links from the media-ctl
> application and then have a typical V4L2 user space application use the
> OMAP ISP resizer output device node as usual.
> 
> Here is what I would like to do:
> 
> 1. Setup the links to the resizer using a command line app (like
> media-ctl).
> 2. Point a typical V4L2 application (like gstreamer or ffmpeg)
> to read from the resizer output device node and have it negotiate the
> format using traditional V4L2 ioctls (VIDIOC_G_FMT/VIDIOC_S_FMT).
> 
> If the links are setup to the resizer, then it seems that user space
> applications should be able to talk to the resizer output (/dev/video3)
> like a traditional V4L2 device and need not worry about the new media
> framework. It even seems possible for the resizer to allow the final
> link format to be adjusted so that the user space application can
> actually adjust the resizer subdev output format across the range of
> valid resizer options based on the format of the resizer input pad. If
> the resizer output device node worked this way, then our camera would
> work with all the existing V4L2 applications with the simple caveat that
> the user has to run a separate setup application first.
> 
> The resizer output device node does not currently behave this way, and I
> am not sure why. These are the reasons that I can think of as to why:
> 1. It has not been implemented this way yet.
> 2. I am doing something incorrectly with the media-ctl application.
> 3. It is not intended to work this way (by the new media framework design
> principles).
> 4. It cannot work this way because of some reason I am not considering.
> 
> I haven't looked at the resizer code yet, but if the answer is 1, then I
> will take a look at implementing it as I described. Otherwise, let me know.

It's probably a combination of 1 and "it cannot work this way because of 
reasons I can't remember at 1AM" :-)

The ISP video device nodes implementation doesn't initialize vfh->format when 
the device node is opened. I think this should be fixed by querying the 
connected subdevice for its current format. Of course there could be no 
connected subdevice when the video device node is opened, in which case the 
format can't be initialized. Pure V4L2 applications must not try to use the 
video device nodes before the pipeline is initialized.

Regarding adjusting the format at the output of the connected subdevice when 
the video device node format is set, that might be possible to implement, but 
we would run into several issues. One of them is that applications currently 
can open the video device nodes, set the format and request buffers without 
influencing the ISP at all. The format set on the video device node will be 
checked against the format on the connected pad at streamon time. This allows 
preallocating buffers for snapshot capture to lower snapshot latency. Making 
set_format configure the connected subdev directly would break this.

-- 
Regards,

Laurent Pinchart