Hi guys. OK, so I'm trying to go from screen space (i.e. the final UV position in a camera's rendered image) back to the world-space location selected on the image.
If my understanding is correct, it is simply as follows:

1) Get the screen-space coordinate: the centre of the image is (0, 0) and the extremes run from -1 to 1. The z-value is the depth.
2) Use the camera's inverse projection matrix to turn this into camera space.
3) Use the camera's world matrix to get a ray from the camera towards the scene (in world space).
4) Use this vector and the camera position to create a ray, and intersect it with the geometry.
5) Job done :)

The problems arise in step 2. I have tried creating both a 3-vector and a 4-vector to represent the screen-space point. The x/y values are the screen coordinates, and I have put different values of z in to see how that affects things, but it did not. (Does the z-value run from 0 to 1 between the near and far clipping planes?) I then tried a 4-vector with 1 for the w-value, but this made no difference either. As I increased the z-value I expected the resulting camera-space z-value to scale accordingly.

I feel that I'm missing something fundamental in how Maya applies or uses projection matrices. Should the w-value in screen space always be 1? Changing the near clipping plane seems to change the transforms, so that must be telling me something useful!

This is the basis of the test code that I've been using. There is just a single default camera in the scene, moved along the z-axis; I did this so that the resulting camera-space vectors can be validated simply by inspection. I tried test screen-space coordinates of:

0, 0, 0, 1 -> expect (0, 0, 0) in camera space
0, 0, 1, 1 -> expect (0, 0, z) in camera space, with z being non-zero (it was always still zero!)

The x/y values that come out are always the same no matter what screen-space z-value I use, but how do I get a valid z-value? I need it to create a suitable test ray ...
sl = om.MSelectionList()
sl.add("cameraShape1")
dpathCameraShape = om.MDagPath()
sl.getDagPath(0, dpathCameraShape)
dpathCameraShape.extendToShape()

cam = om.MFnCamera(dpathCameraShape.node())
matProj = floatMMatrixToMMatrix_(cam.projectionMatrix())
matInvProj = matProj.inverse()

matCam = dpathCameraShape.inclusiveMatrix()
matInvCam = matCam.inverse()

printMatrix_(matCam, "matCam")
printMatrix_(matProj, "matProj")

# Use test points at known locations in screen space.
point = om.MFloatPoint(0, 0, 0, 1)
vect = om.MVector(point)
result = vect * matInvProj

print4Vec_(point, "Test Point In Screen Space:")
print4Vec_(result, "Test Point In Camera Space:")

If anyone could point me in the right direction, that would be great! I just know it's going to be something bloody obvious, but I cannot see the wood for the trees :)

Best Regards,
Simon.
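P.S. One likely reason the camera-space z never changes: if I recall the API correctly, multiplying an om.MVector by a matrix treats the operand as a direction (w = 0), so the matrix's fourth row never contributes and there is no perspective divide. Here is a small plain-Python illustration (no Maya API; the hand-built OpenGL-style inverse projection matrix, the fov/near/far numbers, and the helper names are all mine, purely for illustration):

```python
import math

def inv_perspective(fov_deg, aspect, near, far):
    # Closed-form inverse of an OpenGL-style projection matrix
    # (column-vector convention; NDC z runs from -1 at near to 1 at far).
    f = 1.0 / math.tan(math.radians(fov_deg) / 2.0)
    return [
        [aspect / f, 0.0, 0.0, 0.0],
        [0.0, 1.0 / f, 0.0, 0.0],
        [0.0, 0.0, 0.0, -1.0],
        [0.0, 0.0, (near - far) / (2.0 * far * near), (far + near) / (2.0 * far * near)],
    ]

def mat_vec(m, v):
    # Column-vector multiply: result[r] = sum over c of m[r][c] * v[c].
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

inv = inv_perspective(90.0, 1.0, 1.0, 100.0)

# Treated as a DIRECTION (w = 0, effectively what MVector * matrix does):
# camera-space z comes out 0 for every NDC depth -- the reported symptom.
dir_zs = [mat_vec(inv, [0.0, 0.0, ndc_z, 0.0])[2] for ndc_z in (-1.0, 0.0, 1.0)]
print(dir_zs)  # [0.0, 0.0, 0.0]

# Treated as a POINT (w = 1), followed by the perspective divide:
def point_z(ndc_z):
    x, y, z, w = mat_vec(inv, [0.0, 0.0, ndc_z, 1.0])
    return z / w

print(point_z(-1.0), point_z(1.0))  # approximately -1.0 and -100.0
```

With near = 1 and far = 100 the point version lands on the near and far planes as expected, while the direction version is stuck at zero no matter what depth goes in.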
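P.P.S. For reference, steps 1-4 above can be validated outside Maya in plain Python. Everything below (the OpenGL-style column-vector projection matrix, the `unproject` helper, the camera placed at z = 5) is my own sketch, not Maya API code; note that Maya multiplies row vectors on the left (point * matrix), so its matrices are the transpose of the column-vector form used here, and I believe its NDC z follows the OpenGL -1..1 convention rather than 0..1, though that is worth verifying in your scene:

```python
import math

def perspective(fov_deg, aspect, near, far):
    # OpenGL-style projection (column-vector convention):
    # camera space -> NDC, with z running -1 (near) .. 1 (far).
    f = 1.0 / math.tan(math.radians(fov_deg) / 2.0)
    return [
        [f / aspect, 0.0, 0.0, 0.0],
        [0.0, f, 0.0, 0.0],
        [0.0, 0.0, (far + near) / (near - far), 2.0 * far * near / (near - far)],
        [0.0, 0.0, -1.0, 0.0],
    ]

def mat_vec(m, v):
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def invert4(m):
    # Gauss-Jordan elimination on the augmented matrix [m | I].
    a = [row[:] + [1.0 if i == j else 0.0 for j in range(4)] for i, row in enumerate(m)]
    for col in range(4):
        piv = max(range(col, 4), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        d = a[col][col]
        a[col] = [x / d for x in a[col]]
        for r in range(4):
            if r != col:
                fac = a[r][col]
                a[r] = [x - fac * y for x, y in zip(a[r], a[col])]
    return [row[4:] for row in a]

def unproject(ndc, inv_proj, cam_world):
    # Step 2: inverse projection applied to a homogeneous POINT (w = 1),
    # followed by the perspective divide -- the crucial part.
    x, y, z, w = mat_vec(inv_proj, [ndc[0], ndc[1], ndc[2], 1.0])
    cam_pt = [x / w, y / w, z / w, 1.0]
    # Step 3: the camera's world (inclusive) matrix maps camera space to world space.
    wx, wy, wz, _ = mat_vec(cam_world, cam_pt)
    return (wx, wy, wz)

# Default-style camera translated to (0, 0, 5), looking down -z.
cam_world = [[1.0, 0.0, 0.0, 0.0],
             [0.0, 1.0, 0.0, 0.0],
             [0.0, 0.0, 1.0, 5.0],
             [0.0, 0.0, 0.0, 1.0]]
inv_proj = invert4(perspective(90.0, 1.0, 1.0, 100.0))

near_pt = unproject((0.0, 0.0, -1.0), inv_proj, cam_world)  # on the near plane
far_pt = unproject((0.0, 0.0, 1.0), inv_proj, cam_world)    # on the far plane

# Step 4: the test ray runs from the eye through the unprojected point.
eye = (0.0, 0.0, 5.0)
ray_dir = tuple(p - e for p, e in zip(far_pt, eye))
```

With near = 1 and far = 100 the two unprojected points land at world z = 4 and z = -95, i.e. exactly near and far units in front of the camera sitting at z = 5, which can be checked by inspection just like the test scene described above.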