I have already talked to a few people about this, because I have the
Kinect part working. The depth image, and getting smooth coordinates from
decent gesture recognition, is not the problem. A bit trickier is the
inter-process communication. I did it via a throttled HTTP stream that is
sent from one application (the Kinect reader) to a second one (the
display). A crude version (sans gesture recognition, but with process
communication) is available on my Dropbox:

http://dl.dropbox.com/u/30662912/interprocess_comm.m4v

The video shows the hand position (x, y) and z depth (color) transferred
into a client window. The mouse handler, or whatever consumes the input,
would then need to read those values from the stream.
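The email does not show the stream itself, so here is a rough sketch of the two pieces the display side would need: framing one hand sample per line, and throttling the sender. This is Python rather than Processing, the line-per-frame JSON format is my assumption (not the actual protocol used in the Dropbox demo), and the HTTP transport is omitted; only the framing and throttle logic are illustrated.

```python
import json
import time

def encode_hand_frame(x, y, z):
    """Pack one hand sample (screen x, y and Kinect z depth) as one
    JSON line. One-frame-per-line is an assumed format, not the
    protocol from the demo."""
    return json.dumps({"x": x, "y": y, "z": z}) + "\n"

def decode_hand_frame(line):
    """Parse one line of the stream back into an (x, y, z) tuple,
    as the display client would before moving the model/cursor."""
    d = json.loads(line)
    return d["x"], d["y"], d["z"]

def throttled(frames, max_fps=30.0):
    """Yield frames no faster than max_fps by sleeping away the
    remainder of each frame budget -- a crude throttle so the
    sender does not flood the display client."""
    budget = 1.0 / max_fps
    last = 0.0
    for f in frames:
        wait = budget - (time.monotonic() - last)
        if wait > 0:
            time.sleep(wait)
        last = time.monotonic()
        yield f
```

A real sender would wrap `throttled(...)` around the Kinect samples and write each encoded line to the HTTP response; the client reads the stream line by line and calls `decode_hand_frame` on each.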

 

It seems to me useful not for bona fide model building, but for public
exhibit displays of molecules, and that is what I intend to use the Kinect
for. Resolution is not an issue, and smoothing and gesture recognition are
also done. The most difficult part was getting the OpenNI drivers to work
with the Xbox Kinect, with the usual 32/64-bit hassles etc. (I am not using
the Kinect for Windows, which came out later, nor the MS Kinect SDK.) I use
Processing for the Kinect control.

 

So if somebody is seriously interested in making the basic display-program
part (Coot? PyMOL? other?) work, we can try to crowdsource it, or ask the
IUCr for some money for IYCr2014 purposes.

 

BR

From: Mailing list for users of COOT Crystallographic Software
[mailto:[email protected]] On Behalf Of Sebastiano Pasqualato
Sent: Monday, May 21, 2012 5:01 AM
To: [email protected]
Subject: [ot]: will we move structures with our hands?

 

 

Hi guys,

wouldn't it be nice to see this working with COOT?

 

http://www.theverge.com/2012/5/21/3033634/leap-3d-motion-control-system-video

 

ciao,

s


-- 
Sebastiano Pasqualato, PhD
Crystallography Unit

Department of Experimental Oncology

European Institute of Oncology

IFOM-IEO Campus

via Adamello, 16

20139 - Milano
Italy


tel +39 02 9437 5167
fax +39 02 9437 5990