Hi,

Zach Deedler wrote:
> Let's just start over from scratch.  We'll call it osgchar...
> 
> Just kidding.  All of this stuff is awesome.  I have to thank whoever was
> involved with creating cal3d, rbody, and collada.
> 
> Putting all APIs aside for a moment: how do we implement hardware skinning?
> Or is someone already halfway there?
> 
> I am trying to figure out how to apply page 416 of the Orange Book to do
> this.  I am still a novice as far as shaders go, but I understand the
> concept.  How do we get the keyframe vertices into the shader?  Do we load
> the model with its default stance, and then create a couple of uniforms to
> pass in the two keyframes?  I guess I'm just not sure how you get the two
> keyframes you want to blend into the shader.

You may want to read this first:
http://developer.nvidia.com/object/skinning.html
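
To answer the keyframe question directly: with the usual matrix-palette
approach you do not send keyframe vertices to the shader at all. You
upload the bind-pose mesh once, blend the keyframes on the CPU into one
matrix per bone each frame, and pass those matrices in as a uniform
array; per-vertex bone indices and weights go in as vertex attributes.
A rough, untested sketch of how that could look in OSG (the names, the
attribute slots and the palette size of 30 are just my own choices):

#include <osg/Geometry>
#include <osg/Program>
#include <osg/Shader>
#include <osg/StateSet>
#include <osg/Uniform>

// Vertex shader: blends the bind-pose vertex by up to four bones.
// The keyframes never reach the GPU - only the blended per-frame
// bone matrices do.
static const char* skinningVert =
    "uniform mat4 boneMatrices[30];\n"
    "attribute vec4 boneIndex;\n"
    "attribute vec4 boneWeight;\n"
    "void main()\n"
    "{\n"
    "    vec4 p = vec4(0.0);\n"
    "    for (int i = 0; i < 4; ++i)\n"
    "        p += boneWeight[i] *\n"
    "             (boneMatrices[int(boneIndex[i])] * gl_Vertex);\n"
    "    gl_Position = gl_ModelViewProjectionMatrix * p;\n"
    "}\n";

void setupSkinning(osg::Geometry* geom,
                   osg::Vec4Array* boneIndices,  // 4 bone ids per vertex
                   osg::Vec4Array* boneWeights)  // 4 weights per vertex
{
    osg::Program* program = new osg::Program;
    program->addShader(new osg::Shader(osg::Shader::VERTEX, skinningVert));
    program->addBindAttribLocation("boneIndex", 6);
    program->addBindAttribLocation("boneWeight", 7);

    geom->setVertexAttribArray(6, boneIndices);
    geom->setVertexAttribBinding(6, osg::Geometry::BIND_PER_VERTEX);
    geom->setVertexAttribArray(7, boneWeights);
    geom->setVertexAttribBinding(7, osg::Geometry::BIND_PER_VERTEX);

    osg::StateSet* ss = geom->getOrCreateStateSet();
    ss->setAttributeAndModes(program, osg::StateAttribute::ON);
    ss->addUniform(new osg::Uniform(osg::Uniform::FLOAT_MAT4,
                                    "boneMatrices", 30));
}

// Each frame, after blending the keyframes on the CPU:
//   osg::Uniform* u = ss->getUniform("boneMatrices");
//   for (unsigned i = 0; i < numBones; ++i)
//       u->setElement(i, osg::Matrixf(bone[i].skinningMatrix));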

> I know there are a whole lot of gotchas that you guys know about that I'm
> not aware of.  Like, do you do full-body morphing or submesh morphing?

Usually full body. Skinning the body part by part and then stitching
the parts together is a mess. VRlab's (the one in Lausanne, not the one
Anders Backman is from) original code used this approach, and it was
incredibly complex and slow. It was done for historical reasons: the
original VRlab animations were made on segmented, non-skinned bodies,
because in the '90s the SGI hardware in use didn't have enough power to
deform a whole mesh in real time. To be able to reuse the production
pipeline and the models, the new code kept this style of data. It was
used until 2001-2002, when the need for faster rendering led to the
approach being phased out in favor of deforming the whole mesh at once.

> What
> do we do when you have a guy walking, and then he waves while walking?
> 
> I need:
> Walk, run, crawl, standing, idle, kneel
> Wave, wave frantic, and various other gestures
> Man, woman, dog, deer, rabbit, bicyclist
> 

Animation blending is usually done using weights or priorities. You
simply calculate the two poses corresponding to the two frames from the
different animations and, for each joint, take a weighted sum of the
transformations. It can get pretty complex once you start considering
that each animation can have a different scope (e.g. waving should
affect only the arm and not the torso), or when you need to integrate
procedural animation - what do you do when you have two animations
influencing the same arm or leg? If you just blend them together, you
will get a mess - it will be neither of the two and probably not what
you wanted.
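
To make the weighted sum concrete, here is an untested sketch with a
hypothetical Pose type (one local transform per joint): translations
are interpolated linearly, rotations via quaternion slerp, and a
per-joint mask scales the weight so that e.g. a wave only touches the
arm joints:

#include <osg/Quat>
#include <osg/Vec3>
#include <vector>

// Hypothetical pose representation: one local transform per joint.
struct JointXform
{
    osg::Vec3 translation;
    osg::Quat rotation;
};

typedef std::vector<JointXform> Pose;

// Blend pose 'b' over pose 'a'.  'weight' is the global blend factor
// and 'mask' scales it per joint (0 = keep 'a', 1 = full blend), which
// is how a wave can affect the arm without dragging the torso along.
Pose blendPoses(const Pose& a, const Pose& b,
                float weight, const std::vector<float>& mask)
{
    Pose out(a.size());
    for (size_t i = 0; i < a.size(); ++i)
    {
        float w = weight * mask[i];
        out[i].translation = a[i].translation * (1.0f - w)
                           + b[i].translation * w;
        out[i].rotation.slerp(w, a[i].rotation, b[i].rotation);
    }
    return out;
}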

You may want to read this thesis by Zhiyong Huang:
http://vrlab.epfl.ch/Publications/theses/Z_Huang_Thesis.pdf

It is ten-year-old work, but it pretty much summarizes how the
animation works. He focused heavily on motion capture and animation
from mocap data, but the techniques are still exactly the same. Look
also for papers by the people around Daniel and Nadia Thalmann - they
have done a lot of the early work on character animation, whether
keyframed, procedural, or driven by real-time motion capture.

Regards,

Jan

