Hi guys,

I am trying to solve the cameras for a set of still images along with the
LIDAR data that was acquired. Ideally I would like to solve each still
camera position, lock the solve to the LIDAR data in terms of scale and
accuracy, and then triangulate the positions of multiple HDR panos.

I first created 6+ user tracks across a few frames at the beginning of the
sequence and gave each of them the XYZ coordinates read from the
corresponding LIDAR points (which are cached in PFTrack), then flagged them
as solved. I then either auto-tracked along with them and solved, or
auto-tracked without them and solved the still camera positions; either way
the solve fails outright or comes out completely wrong.
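One thing I still need to double-check is that the coordinates I typed in
are internally consistent, since a single typo would be enough to throw the
whole solve off. A quick sketch of what I mean (plain Python outside
PFTrack; the track names and numbers are just placeholders, not my real
data):

# Quick consistency check on the LIDAR coordinates typed into the user
# tracks -- the track names and values below are placeholders.
import itertools
import numpy as np

survey = {
    "userTrack01": (12.41, 3.07, -5.88),   # metres, read from the LIDAR cache
    "userTrack02": (14.96, 3.11, -6.02),
    "userTrack03": (13.20, 1.55, -4.71),
}

for (a, pa), (b, pb) in itertools.combinations(survey.items(), 2):
    d = np.linalg.norm(np.subtract(pa, pb))
    print(f"{a} <-> {b}: {d:.3f} m")
# Any pair that is wildly different from the distance measured directly in
# the LIDAR viewer points to a typo or a unit/axis mix-up in that track.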

There is a section in the documentation about using 3D survey points for
the camera solve, but it seems to be tied to 3D geometry/point positions
that were already generated from an existing still camera solve, whereas I
want to integrate external 3D survey points such as the LIDAR...
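As a fallback I'm wondering if I could solve the stills unconstrained,
export the solved 3D positions of those user tracks, work out the
scale/rotation/translation onto the LIDAR coordinates outside PFTrack, and
then apply that same transform to the cameras. Something along these lines
(an Umeyama-style sketch in plain NumPy; the exported arrays are assumed,
this is not a PFTrack API):

# Similarity alignment (Umeyama/Procrustes style): given the solved 3D
# positions of my user tracks (arbitrary scale/orientation) and the matching
# LIDAR coordinates, recover scale, rotation and translation.
import numpy as np

def similarity_transform(src, dst):
    """Return s, R, t so that dst ~= s * R @ src_point + t for each point."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    U, S, Vt = np.linalg.svd(src_c.T @ dst_c)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflection
    D = np.diag([1.0, 1.0, d])
    R = Vt.T @ D @ U.T
    s = np.trace(np.diag(S) @ D) / (src_c ** 2).sum()
    t = dst.mean(axis=0) - s * R @ src.mean(axis=0)
    return s, R, t

# solved: Nx3 user-track positions from the free solve; lidar: matching Nx3 points
# s, R, t = similarity_transform(solved, lidar)
# camera_pos_world = s * R @ camera_pos_solve + t   # same transform per camera

Not ideal, since the LIDAR then only fixes scale and orientation rather
than constraining the solve itself, but it might at least get everything
into the right coordinate space.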

Maybe I didn't create enough user tracks across enough frames, or are there
specific steps I should take?

Btw, if anyone knows how to bring an 8 GB, 170-million-point LIDAR data set
into Nuke, I am all ears.
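The only route I've thought of so far is to decimate the cloud heavily and
convert it to something ReadGeo will accept, e.g. a vertex-only .obj,
though I'm not sure how well Nuke copes with that. A rough sketch, assuming
the LIDAR is exported as an ASCII "x y z" text file (paths and step size
are placeholders):

# Keep every Nth point and write a vertex-only .obj. 170M points decimated
# by 200 is roughly 850k points, which the 3D viewer has a chance of handling.
STEP = 200                                   # keep 1 point in 200
src = "lidar_export.xyz"                     # placeholder path
dst = "lidar_decimated.obj"                  # placeholder path

with open(src) as fin, open(dst, "w") as fout:
    for i, line in enumerate(fin):
        if i % STEP:
            continue
        parts = line.split()
        if len(parts) < 3:
            continue                         # skip headers / blank lines
        x, y, z = parts[:3]
        fout.write(f"v {x} {y} {z}\n")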

Thanks!