I took another look at this; thanks again to Ivan for covering this topic
in so much detail:

# assumes the Camera node is named 'Camera1' (substitute your node name)
cameranode = nuke.toNode('Camera1')

mtx = nuke.math.Matrix4()
k = cameranode['world_matrix']

# copy the 16 values of the world_matrix knob into the Matrix4
for y in range(k.height()):
    for x in range(k.width()):
        mtx[x + (y * k.width())] = k.value(x, y)

# invert the camera's world matrix
inverse_mtx = mtx.inverse()

# transform a world-space point (w=1) into camera space
point = nuke.math.Vector4(55.1387, 13.7498, 7.90089, 1)
transformedPoint = inverse_mtx.transform(point)
print transformedPoint
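
For the depth-from-camera goal in the quoted message below, the same
approach extends naturally: build the Axis world matrix the same way, get
its world-space position, then read the z component in camera space. A
minimal sketch under the same conventions, assuming an Axis node named
'Axis1' (hypothetical name):

# build the Axis world matrix with the same copy loop as the camera
axisnode = nuke.toNode('Axis1')
a = axisnode['world_matrix']

amtx = nuke.math.Matrix4()
for y in range(a.height()):
    for x in range(a.width()):
        amtx[x + (y * a.width())] = a.value(x, y)

# world position of the Axis = its world matrix applied to the origin
axis_world = amtx.transform(nuke.math.Vector4(0, 0, 0, 1))

# z in camera space, negated because Nuke cameras look down -Z,
# so points in front of the camera have negative z
depth = -inverse_mtx.transform(axis_world).z
print depth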


On 13 March 2013 14:38, Michael Garrett <michaeld...@gmail.com> wrote:

> I've had a shot at this using nuke.math, but it doesn't match the result
> of doing it with expressions (or with C44Matrix, love that plugin!) on an
> rgb pixel, which I know is right, so I'm doing something wrong.
>
> Ultimately I want the z component of the Axis position in camera
> coordinates (depth from camera, equivalent to deep z units).
>
> All I need to do is invert the camera matrix, then multiply the Axis
> world matrix by that, but the result does not match the "pixel math"
> version.
>
> Using Ivan's Nukepedia tutorial as a guide, I stored the Axis and Cam
> world matrix knobs as objects, inverted the Cam matrix object using the
> matrix inverse function, and multiplied the two together. But the final
> position is not correct.
>
> Any ideas on what I'm doing wrong, or is there a more elegant way to do
> this? I want this to evaluate live, like expressions, similar to Jose's
> live reconcile3d example that used a snap3d function; that seems the way
> to go.
>
> Thanks,
> Michael
>
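
On the live-evaluation point: one way to get this updating like an
expression (a sketch, not tested here) is to wrap the lookup in a function
and call it from a knob expression via the TCL python command. The
function name, node names, and default arguments below are all
hypothetical:

# hypothetical helper, e.g. defined in menu.py so expressions can reach it
def axis_depth_from_cam(cam_name='Camera1', axis_name='Axis1'):
    def knob_to_matrix(node):
        # same world_matrix copy loop as above
        k = node['world_matrix']
        m = nuke.math.Matrix4()
        for y in range(k.height()):
            for x in range(k.width()):
                m[x + (y * k.width())] = k.value(x, y)
        return m

    cam = nuke.toNode(cam_name)
    axis = nuke.toNode(axis_name)
    axis_world = knob_to_matrix(axis).transform(nuke.math.Vector4(0, 0, 0, 1))
    # negative z = distance in front of the camera
    return -knob_to_matrix(cam).inverse().transform(axis_world).z

A float knob could then carry the expression [python {axis_depth_from_cam()}]
so it re-evaluates as the camera or Axis animates, though I haven't
verified how reliably Nuke tracks that dependency.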