Hi Deqiang,

Here you can find a brief explanation of the relation between the IR image and
the depth image: http://www.metrilus.de/range-imaging/structured-light/

If you want to detect corners, why don't you use the RGB image? It comes with a
good calibration to your depth image, and this is all done in OpenNI.

Regards,
Thomas

From: Deqiang Xiao [mailto:[email protected]]
Sent: Tuesday, 8 January 2013 16:18
To: Kilgus, Thomas
Cc: [email protected]
Subject: Re: [mitk-users] Camera intrinsic parameters in 
mitkToFDistanceImageToPointSetFilter.cpp of ToFProcessing module

Dear Thomas,

I have another question about the calibration between the depth image and the
RGB image. Why not do the calibration directly between the IR image and the RGB
image instead of depth-RGB? As you know, it is difficult to find corner points
in a depth image, and it seems that the depth image is produced from the IR
image. If not, what is the relationship between the depth image and the IR
image?

Regards,
Deqiang

2013/1/8 Kilgus, Thomas <[email protected]>
Ok thanks anyway! Let me know if you find out anything regarding this issue.

Regards
Thomas

From: Deqiang Xiao [mailto:[email protected]]
Sent: Tuesday, 8 January 2013 14:58
To: Kilgus, Thomas
Cc: [email protected]<mailto:[email protected]>
Subject: Re: [mitk-users] Camera intrinsic parameters in 
mitkToFDistanceImageToPointSetFilter.cpp of ToFProcessing module

Dear Thomas,

Thank you very much for your quick reply! I will do an experiment with the
default parameters as you suggested. Regarding the third question, I have to
say that I have never done any experiments on this and I may be wrong, so I'm
sorry I cannot offer any information to support my opinion. Thank you very
much!

Regards,
Deqiang

2013/1/8 Kilgus, Thomas <[email protected]>
Hi Deqiang,

To 1): The current default parameters in the mitkToFDistanceImageToXXXFilter
are the parameters of the PMD CamCube camera. You are absolutely right that
they do not fit the Kinect, so you have to adapt them. In my experience, the
Kinect comes with an accurate default calibration. You could just use freely
available parameters, such as those from Nicolas Burrus:
http://nicolas.burrus.name/index.php/Research/KinectCalibration. Attention: we
use the stereo calibration from OpenNI. That means the distance image is
already in the RGB image space, and you have to use the RGB intrinsics (not the
IR intrinsics!). We are working on a default calibration file for all available
cameras, which would solve this problem. Hopefully, we will manage this feature
for the next release.
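To make the role of the RGB intrinsics concrete, here is a minimal sketch (not
MITK code) of the standard pinhole back-projection, assuming the distance image
has already been mapped into RGB image space by OpenNI; the intrinsic values
below are illustrative placeholders, not a real calibration:

```python
def backproject(u, v, z, fx, fy, cx, cy):
    """Pinhole camera model: pixel (u, v) at depth z [m] -> 3D point (x, y, z).

    fx, fy: focal lengths in pixels; cx, cy: principal point.
    With the OpenNI-registered distance image, these should be the
    RGB intrinsics, not the IR intrinsics.
    """
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return (x, y, z)

# Example: the principal point back-projects onto the optical axis.
point = backproject(u=320, v=240, z=1.0, fx=525.0, fy=525.0, cx=320.0, cy=240.0)
# -> (0.0, 0.0, 1.0)
```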

To 2): Yes, we use the stereo calibration from OpenNI. Pixel indices should
then be the same, however, only for a default Kinect calibration. Your personal
Kinect device can deviate slightly. This should not be a big deal, but it
depends on your application and your desired accuracy.

To 3): In principle, you need camera intrinsics to perform a 3D reconstruction
of Time-of-Flight data. I am currently trying to figure out whether this is
also true for the Kinect in combination with OpenNI. Do you have any
information (e.g. a URL) to support your point? I did an experiment where I
measured a Lego phantom with the Kinect and compared it to ground truth. My
results are always better when I use the RGB intrinsics to reconstruct the 3D
surface. Could you tell me how exactly you would perform the 3D reconstruction
assuming you use OpenNI? Looking at the book "Hacking the Kinect", the
reconstruction is performed without any intrinsics (see chapter 7, "Moving from
depth map to point cloud"); however, they are not using OpenNI. I have not yet
found a good source that says whether OpenNI needs this conversion or not. I
just get better results when I use the intrinsics. Any help of yours is
appreciated.

(see for "Hacking the Kinect" 
http://books.google.de/books?id=IftkzqRjbO4C&pg=PA130&lpg=PA130&dq=kinect+depth+image+to+point+cloud+pinhole+camera+model&source=bl&ots=XuWzUbeny6&sig=3kAaHm5_Z02Yj3Ft7PybAJauYOM&hl=en&sa=X&ei=pue4UMKtLof1sgar-YCgBQ&ved=0CEgQ6AEwBA#v=onepage&q&f=true)
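As a small illustration of why the choice of intrinsics matters at all: the
lateral (x, y) coordinates of a back-projected point scale inversely with the
focal length, so reconstructing with the wrong focal length shifts every 3D
point. A sketch with hypothetical focal lengths (not the real Kinect values):

```python
# Hypothetical numbers to show the effect; not real calibration values.
u, cx, z = 400.0, 320.0, 2.0      # pixel column, principal point, depth [m]
x_a = (u - cx) * z / 525.0        # lateral x with one assumed focal length
x_b = (u - cx) * z / 580.0        # same pixel with a different focal length
diff_cm = abs(x_a - x_b) * 100.0  # lateral error from using the wrong fx
# diff_cm comes out to roughly 2.9 cm at 2 m depth, i.e. the kind of
# error one would expect when mixing up IR and RGB intrinsics.
```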

Regards,
Thomas


Thomas Kilgus
German Cancer Research Center (DKFZ)
Div. Medical and Biological Informatics
Junior group: Computer-assisted Interventions (E131)
Im Neuenheimer Feld 280
69120 Heidelberg, Germany
Phone: +49(0) 6221-42-3545


From: Deqiang Xiao [mailto:[email protected]]
Sent: Tuesday, 8 January 2013 14:04
To: [email protected]<mailto:[email protected]>
Subject: [mitk-users] Camera intrinsic parameters in 
mitkToFDistanceImageToPointSetFilter.cpp of ToFProcessing module

Dear all,

My current work is based on the MITK ToFUtil plug-in with the Kinect for
Windows. I have read most of the related code in ToFUtil, and I have three
questions about this plug-in:

1) Where do the camera intrinsic parameters in
mitkToFDistanceImageToPointSetFilter.cpp / mitkToFDistanceImageToSurfaceFilter.cpp
(..\MITK\Modules\ToFProcessing\) come from? If these camera intrinsic
parameters do not fit the Kinect, is a calibration of the Kinect's IR camera
needed or not?

2) Have the RGB image and the IR image been calibrated in ToFUtil? (If they
have, two pixels with the same coordinate index in the RGB image and the IR
image correspond to the same point in physical space.)

3) Why are the camera intrinsic parameters used in the 3D reconstruction from
the depth image? (In my opinion, a 3D reconstruction based on a depth image
does not need camera intrinsic parameters.)

Any help is appreciated!

Regards,
Deqiang



_______________________________________________
mitk-users mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/mitk-users
