On Mon, 6 Dec 2010 22:49:48 +0100
Hans Verkuil <hverk...@xs4all.nl> wrote:

> On Monday, December 06, 2010 22:18:47 Antonio Ospite wrote:

[...]
> > Now the hard part begins, here's a loose TODO-list:
> >   - Discuss the "fragmentation problem":
> >      * the webcam kernel driver and the libusb backend of libfreenect
> >        are not going to conflict with each other in practice, but code
> >        duplication could be avoided to some degree; we could start
> >        listing the advantages and disadvantages of a v4l2 backend as
> >        opposed to a libusb backend for video data in libfreenect (don't
> >        think in terms of userspace/kernelspace for now).
> >      * Would exposing the accelerometer as an input device make sense
> >        too?
> 
> How do other accelerometer drivers do this?
>

As an input device, of course; the question was more about libfreenect
than about Linux...
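
On the Linux side I imagine something along these lines, just as a
rough sketch of the usual input API usage (all the kinect_accel_*
names are made up, and the axis ranges are placeholder guesses):

#include <linux/device.h>
#include <linux/input.h>
#include <linux/types.h>

/* Sketch: report accelerometer samples through the input subsystem,
 * the way other accelerometer drivers do. */
static struct input_dev *kinect_accel_setup(struct device *parent)
{
	struct input_dev *idev = input_allocate_device();

	if (!idev)
		return NULL;

	idev->name = "Kinect Accelerometer";
	idev->dev.parent = parent;

	/* Three absolute axes; the ranges below are placeholders. */
	set_bit(EV_ABS, idev->evbit);
	input_set_abs_params(idev, ABS_X, -512, 511, 4, 0);
	input_set_abs_params(idev, ABS_Y, -512, 511, 4, 0);
	input_set_abs_params(idev, ABS_Z, -512, 511, 4, 0);

	if (input_register_device(idev)) {
		input_free_device(idev);
		return NULL;
	}

	return idev;
}

/* Called for each sample read from the device. */
static void kinect_accel_report(struct input_dev *idev, s16 x, s16 y, s16 z)
{
	input_report_abs(idev, ABS_X, x);
	input_report_abs(idev, ABS_Y, y);
	input_report_abs(idev, ABS_Z, z);
	input_sync(idev);
}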

> >        The only reason for that is to use the data in already
> >        existing applications. And what about led and motor?
> 
> We are talking about LED(s?) on the webcam and the motor controlling the 
> webcam?
> That is typically also handled via v4l2, usually by the control API.
>

I have to check whether the control API fits this case: the LED (only
one) and the motor are on a separate USB device; the Kinect sensor
appears as a hub with several distinct devices (camera,
motor/LED/accelerometer, audio) plugged into it.
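
If it does fit, I imagine a couple of private controls, roughly like
this (the control IDs and the kinect_set_*() helpers are hypothetical,
they would wrap the actual USB control transfers to the motor/LED
device):

#include <linux/errno.h>
#include <linux/videodev2.h>
#include <media/v4l2-ioctl.h>

/* Hypothetical helpers issuing the USB requests on the motor/LED device. */
static int kinect_set_led(int mode);
static int kinect_set_tilt(int angle);

#define KINECT_CID_LED  (V4L2_CID_PRIVATE_BASE + 0)
#define KINECT_CID_TILT (V4L2_CID_PRIVATE_BASE + 1)

static int kinect_s_ctrl(struct file *file, void *fh,
			 struct v4l2_control *ctrl)
{
	switch (ctrl->id) {
	case KINECT_CID_LED:
		return kinect_set_led(ctrl->value);  /* e.g. 0=off, 1=green */
	case KINECT_CID_TILT:
		return kinect_set_tilt(ctrl->value); /* tilt angle in degrees */
	default:
		return -EINVAL;
	}
}

static const struct v4l2_ioctl_ops kinect_ioctl_ops = {
	.vidioc_s_ctrl = kinect_s_ctrl,
	/* ... queryctrl/g_ctrl and the usual streaming ioctls ... */
};

The open question remains that these controls would live on the video
node while the hardware sits on a sibling USB device.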

[...]
> >   - Decide if we want two separate video nodes, or a
> >     combined RGB-D data stream coming from a single video device node.
> >     (I haven't even looked at the synchronization logic yet).
> 
> My gut feeling is that a combined RGB-D stream is only feasible if the two
> streams as received from the hardware are completely in sync. If they are in
> sync, then it would probably simplify the driver logic to output a combined
> planar RGB+D format (one plane of RGB and one of D). Otherwise two nodes are
> probably better.
>

Agreed.
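
To make the combined-format idea a bit more concrete, here is a rough
userspace-view sketch of what a single planar RGB+D buffer could look
like; the layout (an RGB24 plane followed by a 16-bit-per-sample depth
plane holding the 11 valid bits) is just an assumption for
illustration, not a format proposal:

#include <stddef.h>
#include <stdint.h>

/* One combined frame: RGB plane followed by a depth plane. */
struct rgbd_frame {
	uint32_t width;
	uint32_t height;
	uint8_t *data;	/* both planes, contiguous */
};

static inline uint8_t *rgb_plane(const struct rgbd_frame *f)
{
	return f->data;
}

static inline uint16_t *depth_plane(const struct rgbd_frame *f)
{
	/* The depth plane starts right after the 3-bytes-per-pixel RGB plane. */
	return (uint16_t *)(f->data + (size_t)f->width * f->height * 3);
}

static inline size_t rgbd_frame_size(uint32_t width, uint32_t height)
{
	return (size_t)width * height * 3 +	/* RGB plane */
	       (size_t)width * height * 2;	/* depth plane */
}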

> >   - If a combined format would be chosen, settle on a format usable also
> >     by future RGB-D devices.
> 
> In general the video format should match what the hardware supplies. Any
> format conversions should take place in libv4lconvert. Format conversions do
> not belong in kernel space, that's much better done in userspace.
>

OK, so if I wanted to visualize the depth data in a generic v4l2
application, then libv4lconvert should provide conversion routines to
some 2D image format, like the "RGB heat map" in libfreenect; the
question then becomes:
    Is it OK to have depth data, which strictly speaking is not video
    data, coming out of a video device node?
Are there any other examples of such "abuses" in kernel drivers right
now?
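
For the record, this is the kind of conversion routine I have in mind,
as a sketch only: it maps the 11-bit depth values onto a simple color
ramp so a generic application can display the frame. The real
libfreenect heat map and the actual libv4lconvert entry points look
different; this just illustrates the idea:

#include <stdint.h>

/* Map 11-bit depth samples (0..2047, in the low bits of a u16) to
 * RGB24 pixels, with closer objects rendered brighter. */
static void depth11_to_rgb24(const uint16_t *src, uint8_t *dst,
			     unsigned int width, unsigned int height)
{
	unsigned int i;

	for (i = 0; i < width * height; i++) {
		unsigned int d = src[i] & 0x7ff;  /* 11 valid bits */
		uint8_t v = 255 - (d >> 3);	  /* near = bright */

		*dst++ = v;		/* R */
		*dst++ = v >> 1;	/* G */
		*dst++ = 0;		/* B: crude red/orange ramp */
	}
}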

> Hope this helps. Sounds like this is a cool device :-)
> 

Yeah, I played a little bit with the accelerometer and image
stabilization (a very rudimentary version of it), and it is fun:
http://blip.tv/file/get/Ao2-KinectImageStabilization247.webm

Regards,
   Antonio

-- 
Antonio Ospite
http://ao2.it

PGP public key ID: 0x4553B001

A: Because it messes up the order in which people normally read text.
   See http://en.wikipedia.org/wiki/Posting_style
Q: Why is top-posting such a bad thing?
