On 06.04.2010, at 12:16, Stefan Kainbacher, Neon Golden wrote:

> Hello again, and sorry for the delay.
> 
> Basically the system works fine, but there are some things to consider, e.g.
> how much of the object you get, depth, etc., at least with our calibration
> tool (steps 1-3). We then also load the matrices from XML and project them
> onto the scene.
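> 
> Roughly, the loading step looks like this (a simplified example, not our
> actual code: it uses OpenCV's cv::FileStorage with placeholder node names,
> while the real tool is Objective-C and its XML layout may differ):
> 
>   #include <opencv2/core/core.hpp>
>   #include <string>
> 
>   // Read the modelview / projection matrices back from the calibration XML
>   // before handing them to the renderer.
>   bool loadCalibration(const std::string& path,
>                        cv::Mat& modelview, cv::Mat& projection)
>   {
>       cv::FileStorage fs(path, cv::FileStorage::READ);
>       if (!fs.isOpened())
>           return false;
>       fs["modelview"] >> modelview;    // placeholder node names
>       fs["projection"] >> projection;
>       return modelview.rows == 4 && modelview.cols == 4
>           && projection.rows == 4 && projection.cols == 4;
>   }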
> 
> I can't tell you too much about your questions, as you are using a camera
> calibration, but as far as I understand the problems (with our object): the
> front projector, which covers the whole front side and captures the depth of
> the object, works really well. The other projectors, which are positioned
> closer to the object and cover a smaller area of it, are not that good. It is
> also a matter of resolution: one pixel of error in the calibration already
> makes a big difference for the calculation ...
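> 
> To put a very rough number on that, here is a back-of-the-envelope
> calculation with purely illustrative values (not our actual setup):
> 
>   #include <cmath>
>   #include <cstdio>
> 
>   int main()
>   {
>       const double widthPx = 1024.0;  // illustrative projector resolution
>       const double fovDeg  = 30.0;    // illustrative horizontal throw angle
>       const double distM   = 3.0;     // illustrative throw distance
>       const double pi      = 3.14159265358979;
>       // width of the projected image on the surface, then one pixel of it
>       const double imageM  = 2.0 * distM * std::tan(fovDeg * pi / 360.0);
>       std::printf("one pixel covers roughly %.1f mm\n",
>                   imageM / widthPx * 1000.0);   // ~1.6 mm here
>       return 0;
>   }
> 
> And the pose computed from only a few calibration points amplifies that
> error for geometry away from those points.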
> 
> best, stefan
> 
> 
> On 30.03.2010, at 14:41, luca palmili wrote:
> 
>> Hi Stefan,
>> thank you for your reply. Your project is very impressive, good luck with
>> your installations! :)
>> 
>> My approach is this:
>> My application takes the camera input using OpenCV at a resolution of WxH
>> and puts it as a background texture in an OpenGL window.
>> In the OpenGL window I click on 4 non-coplanar points of the cube in the
>> scene.
>> I use the cvPOSIT function to get the translation, rotation and OpenGL
>> projection matrices.
>> I hard-code these matrices in a GLSL patch in QC, which renders a cube into
>> a viewer of WxH resolution.
>> The viewer output is then projected by a projector (with resolution WxH)
>> onto the real scene.
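>> 
>> For reference, this is roughly what the cvPOSIT step and the conversion of
>> its pose into a column-major OpenGL modelview matrix look like (a simplified
>> sketch, not my actual code; the focal length, point containers and function
>> names are placeholders):
>> 
>>   #include <opencv/cv.h>
>>   #include <vector>
>> 
>>   // cvPOSIT returns a row-major 3x3 rotation and a translation in OpenCV's
>>   // camera frame (+Z forward, Y down). OpenGL looks down -Z with Y up, so
>>   // the second and third rows are negated and the result is stored
>>   // column-major.
>>   static void positPoseToModelview(const float R[9], const float t[3],
>>                                    double mv[16])
>>   {
>>       for (int row = 0; row < 3; ++row) {
>>           const double sign = (row == 0) ? 1.0 : -1.0;
>>           for (int col = 0; col < 3; ++col)
>>               mv[col * 4 + row] = sign * R[row * 3 + col];
>>           mv[12 + row] = sign * t[row];
>>       }
>>       mv[3] = mv[7] = mv[11] = 0.0;
>>       mv[15] = 1.0;
>>   }
>> 
>>   // modelPts: the 4 non-coplanar object points; imagePts: the matching
>>   // clicks, given relative to the image centre (POSIT assumes the principal
>>   // point is at 0,0); focalPx: focal length in pixels.
>>   void estimatePose(std::vector<CvPoint3D32f>& modelPts,
>>                     std::vector<CvPoint2D32f>& imagePts,
>>                     double focalPx, double mv[16])
>>   {
>>       CvPOSITObject* obj =
>>           cvCreatePOSITObject(&modelPts[0], (int)modelPts.size());
>>       float R[9], t[3];
>>       CvTermCriteria crit =
>>           cvTermCriteria(CV_TERMCRIT_EPS | CV_TERMCRIT_ITER, 100, 1e-5);
>>       cvPOSIT(obj, &imagePts[0], focalPx, crit, R, t);
>>       cvReleasePOSITObject(&obj);
>>       positPoseToModelview(R, t, mv);  // hard-coded into the GLSL patch later
>>   }
>> 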
>> The result is quite good, but there are some errors: the two cubes don't
>> match exactly.
>> My camera is positioned above the projector, so the extrinsic parameters
>> refer to the camera's point of view. Even if the camera and projector lenses
>> are close together, how can the camera extrinsic parameters also work well
>> for the projector?
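>> 
>> (To make the question concrete: if the projector were treated as a second
>> camera with a fixed rigid offset T_proj_cam from the real camera - a
>> transform I do not have and which is purely assumed here - the pose
>> estimated from the camera image would have to be re-expressed in the
>> projector frame before rendering; a rough sketch of what I mean follows.)
>> 
>>   #include <opencv2/core/core.hpp>
>> 
>>   // Build a 4x4 homogeneous transform from a 3x3 rotation and a 3x1
>>   // translation (both CV_64F).
>>   cv::Mat rigid(const cv::Mat& R, const cv::Mat& t)
>>   {
>>       cv::Mat T = cv::Mat::eye(4, 4, CV_64F);
>>       R.copyTo(T(cv::Rect(0, 0, 3, 3)));
>>       t.copyTo(T(cv::Rect(3, 0, 1, 3)));
>>       return T;
>>   }
>> 
>>   // Pose of the object as seen by the projector, from its pose estimated in
>>   // the camera frame and the fixed camera->projector offset.
>>   cv::Mat objectPoseInProjector(const cv::Mat& T_cam_obj,
>>                                 const cv::Mat& T_proj_cam)
>>   {
>>       return T_proj_cam * T_cam_obj;
>>   }
>> 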
>> How did you find the camera extrinsic parameters? Did you use the POSIT
>> algorithm and OpenCV's camera calibration methods?
>> Thank you for any kind of help!
>> Luke
>> 
>> 
>> We implemented the paper in Quartz Composer.
>> First we wrote an application in Objective-C/OpenGL to do the calibration,
>> and then we implemented it in QC with GLSL:
>> http://www.facebook.com/video/video.php?v=1387742499852
>> The physical model and the 3D model have to match as exactly as possible.
>> The higher the resolution, the better the result.
>> It also depends on the angle between the projector and the object ... the
>> more depth you have, the better it works ...
>> We just opened our first installation based on it, using a Matrox TripleHead
>> and 3 projectors ...
>> http://www.facebook.com/album.php?aid=196408&id=66111817237 
>> best, stefan
>> 
> 
