On Wed, Jul 31, 2013 at 5:04 PM, Johannes Scholz
<[email protected]> wrote:
> True, for sure. My first thought was to do it that way: grab the object, then 
> move and rotate it directly by the hand's position and rotation. I could not 
> implement that in the first demo, because there is no GRAB gesture available 
> (at least I have not found one yet), and I lost track of the hand's fingers 
> when they were oriented perpendicularly above the Leap. I was also 
> disappointed by the accuracy of the hand's rotation.

I have tried again with the visualizer tool and I can get reasonably
stable hand rotation - as long as the fingers are visible. They obviously
use a Kalman filter or something similar to keep the rotation smooth and
running even when the fingers disappear for a moment. However, tracking a
closed hand (a "grabbed object") is hopeless - they apparently track the
fingers and estimate the rest of the hand from that.
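I have no idea what filter they actually use, but a minimal sketch of the kind of smoothing that would produce this behaviour is an exponential blend: fold each new palm-normal measurement into a running estimate, and simply coast on the old estimate when a frame drops out. (The palm-normal representation and the alpha value here are my assumptions for illustration, not anything from their SDK.)

```python
import math

def normalize(v):
    """Return v scaled to unit length."""
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def smooth_normal(prev, measured, alpha=0.3):
    """Blend the previous estimate toward the new measurement.

    If the measurement is missing (fingers lost for a frame),
    coast on the previous estimate instead of jumping.
    """
    if measured is None:
        return prev
    blended = tuple(p + alpha * (m - p) for p, m in zip(prev, measured))
    return normalize(blended)

# Simulated palm-normal samples, with a one-frame dropout (None).
frames = [(0.0, -1.0, 0.0), (0.1, -0.99, 0.0), None, (0.2, -0.98, 0.0)]
estimate = frames[0]
for f in frames[1:]:
    estimate = smooth_normal(estimate, f)
print(estimate)
```

The dropout frame leaves the estimate untouched, so the rotation keeps "going" briefly even without finger data - consistent with what the visualizer shows.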

What a pain :( I went through the SDK, and the API seems designed more
around simulating mouse and 2D multitouch interaction (they even let you
define virtual 2D "screens" on which you want to interact with things)
than around actual 3D work. If they instead exposed the device as two
synchronized cameras, one could at least do some custom development on
it. As it is, one is stuck with their SDK :(

Regards,

Jan
_______________________________________________
osg-users mailing list
[email protected]
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org
