> To accomplish this noble goal we need a new V4L API that has not one but
> two components. The kernel portion sits in kernel space (maybe) and
> sucks the data out of the camera. The userspace portion postprocesses
> the raw data and provides a standard interface to user apps.
It really belongs in a support library.
> As it stands now, V4L does not provide any means to move the conversion
> into userspace. Many cameras even have different "native" formats in
> different modes and on different image sizes. IBM cameras, for example,
> usually stream something resembling YUV but not exactly that. The
> conversion library will need to know the most intimate details about the
> datastream being processed.
Ultimately that is the right thing to do. It works well with ALSA, for
example. Scanning the apps I have here, all but one of them supports YUV420.
_______________________________________________
[EMAIL PROTECTED]
To unsubscribe, use the last form field at:
http://lists.sourceforge.net/lists/listinfo/linux-usb-devel