Hi Shuvro, I believe there are actually many possible directions for implementing what I mentioned in my previous message.
1) Auto-calibrate the Kinect(s); see the available software here: http://graphics.cs.msu.ru/en/science/research/calibration/cpp

2a) Reconstruct the scene from the Kinect using http://vimeo.com/21096739 (diff here: http://www.pasteall.org/20117/diff ), following either the Kreilos approach or perhaps the http://www.ros.org/wiki/openni/Contests/ROS%203D/RGBD-6D-SLAM approach.

2b) Fit generic figures to the captured data (the generic figures could be taken from MakeHuman, I think).

3) Estimate motion. See additionally: http://www.ros.org/wiki/openni/Contests/ROS%203D/Skeleton%20Tracker%20Teleoperation%20Package%20for%20Mobile%20Robot

So there is quite a lot to do. But unlike your proposal, this does not rely on the patent-pending approach implemented in the NITE-based code ("Extraction of skeletons from 3D maps", United States Patent Application 20110052006), and it potentially provides much more reliable motion-data capture.

As for Blender and NITE, you may look here: http://www.youtube.com/watch?v=UxIcwuo5Rts
It has already been tested with the http://www.brekel.com/ tool, which is basically a wrapper around NITE.

I am just adding some information to think about, regarding whether the way you describe is really worth pursuing.

Regards,
Sergey
_______________________________________________
Bf-committers mailing list
[email protected]
http://lists.blender.org/mailman/listinfo/bf-committers
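To make step 2a more concrete: once the intrinsics from step 1 are known, each Kinect depth image can be back-projected into a 3D point cloud via the standard pinhole model. A minimal NumPy sketch, assuming made-up example intrinsics (fx, fy, cx, cy are hypothetical values here; the real ones come from the calibration step):

```python
import numpy as np

def backproject_depth(depth, fx, fy, cx, cy):
    """Convert a depth image (in metres) into an (N, 3) point cloud
    using the pinhole camera model from calibration."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    # Standard pinhole back-projection: x = (u - cx) * z / fx, etc.
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

# Example: a synthetic flat wall 2 m in front of the camera,
# with placeholder intrinsics (NOT real calibration output).
depth = np.full((480, 640), 2.0)
pts = backproject_depth(depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
```

A full reconstruction (as in the RGBD-6D-SLAM link) would then register clouds from successive frames or from multiple Kinects into one model; this sketch only covers the per-frame projection.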
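For step 2b, the rigid core of fitting a generic figure to captured data can be sketched with the Kabsch algorithm: given corresponding points on the template (e.g. a MakeHuman mesh) and in the scan, it recovers the optimal rotation and translation. This is only the rigid part under the assumption that correspondences are known; a real fit would also need non-rigid deformation and correspondence search:

```python
import numpy as np

def rigid_align(src, dst):
    """Kabsch algorithm: find R, t minimising ||R @ src_i + t - dst_i||
    over corresponding (N, 3) point sets."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection (det = -1) in the solution.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

# Synthetic check: transform a random template and recover the transform.
rng = np.random.default_rng(0)
template = rng.standard_normal((100, 3))
angle = np.pi / 6
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([0.5, -0.2, 1.0])
captured = template @ R_true.T + t_true
R_est, t_est = rigid_align(template, captured)
```

In practice this would sit inside an ICP-style loop that alternates correspondence search and alignment; the point here is just that the rigid fitting step is a small, well-understood computation.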
