Hi Robert!

Thank you so much for taking the time to break down the approach for me.  
Right now our API gives us the left-hand and right-hand positions in world 
coordinates.

So with this I have managed to get the hands moving around on the screen using 
a callback on the hand node. 

Similar to how you described it, I use one method to pull the data from the 
API each frame and apply its world-coordinate position to the hand's node 
using a PositionAttitudeTransform and setPosition(). That works great!

Now, I am still very confused by the individual parts of the hand model's 
transforms.  I have not made the model myself; I am using a model from Leap 
Motion's SDK.  The model, as mentioned, is a hand.  I tried to attach the 
model to this post, but I can't attach .3ds files.  It's available with Leap 
Motion's SDK, or via the Unity store as well I think, if you want to look at 
the model and how they have structured it.

The naming convention is something that Leap Motion has implemented.  What I 
am unsure of is how to "wire" the right callbacks to each finger part.

If I run a node traversal on the hand model, I get a large list of 
MatrixTransforms.  The only geometry it finds is a nails_fix and a HandsReal 
(again, Leap Motion's naming conventions).

This makes me think: OK, I can move the matrices of the hand around, but the 
geometry always stays the same, because what is telling it to look different?  
Then I thought maybe the model was not right.  I'm not sure.

Also, how do I attach a callback to different sub-nodes of the hand model 
without traversing through it?  How do I access sub-nodes without knowing 
what they are?

Also, I am using C++ to establish the scene graph:


Code:

// Assumes #include <osgViewer/Viewer> and <osg/Camera>, with using-
// declarations for osg and osgViewer; `desktop` (window rectangle) and
// `root` (scene root node) are set up earlier in the program.
ref_ptr<Viewer> viewer = new Viewer();
ref_ptr<GraphicsContext::Traits> traits;
ref_ptr<GraphicsContext> gc;
ref_ptr<Camera> cam;

viewer->setUpViewInWindow(0, 0, desktop.right - desktop.left,
                          desktop.bottom - desktop.top);

// Clone the traits of the context created by setUpViewInWindow().
traits = new GraphicsContext::Traits(
    *(viewer->getCamera()->getGraphicsContext()->getTraits()));

traits->alpha            = 8;   // bit count ('true' would request only 1 bit)
traits->doubleBuffer     = true;
traits->blue             = 8;
traits->red              = 8;
traits->green            = 8;
traits->depth            = 24;
traits->windowDecoration = false;

gc  = GraphicsContext::createGraphicsContext(traits.get());
cam = new Camera(*(viewer->getCamera()));

cam->setGraphicsContext(gc.get());
cam->setViewMatrixAsLookAt(Vec3d(0, -100, 0), Vec3d(0, 0, 0),
                           Vec3d(0, 0, 1));
cam->setClearColor(osg::Vec4(0.0, 0.0, 0.0, 0.0));
cam->setClearMask(GL_DEPTH_BUFFER_BIT | GL_COLOR_BUFFER_BIT);

viewer->setCamera(cam.get());
viewer->setSceneData(root);

viewer->realize();

while (!viewer->done())
{
    viewer->frame();
}

return 0;




What I still can't understand is why, even though I can read and access the 
hand model's nodes cast to osg::MatrixTransform, and even move them around 
with our in-house API (when I attach a visual to them, it moves around the 
screen), the model itself never changes.

------------------
Read this topic online here:
http://forum.openscenegraph.org/viewtopic.php?p=61061#61061





_______________________________________________
osg-users mailing list
osg-users@lists.openscenegraph.org
http://lists.openscenegraph.org/listinfo.cgi/osg-users-openscenegraph.org
