I have determined what was wrong with my usage. The third argument to the Python function cv.cvCalibrateCamera2 must be a one-dimensional matrix of integers (e.g. of type cv.CV_32SC1). The cv.cvmSet function interprets its value argument as a "double", regardless of whether it started out as an "int". If cvmSet is used to populate a matrix of type cv.CV_32SC1, the double's bit pattern appears to be reinterpreted as an integer at the binary level, rather than the fractional part simply being truncated, so the stored counts are garbage. When populating a matrix of integers from Python, one must therefore use array notation rather than cvmSet.
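A minimal sketch of the suspected failure mode, using only the standard library (no OpenCV required): it reinterprets the raw bytes of the double 15.0 as 32-bit signed integers, which is roughly what would happen if cvmSet's "double" value were written bit-for-bit into a CV_32SC1 cell instead of being converted. The exact words produced depend on which half of the double lands in the cell; this is an illustration, not a trace of the actual library internals.

    import struct

    # Pack the double 15.0 into its 8-byte IEEE-754 representation
    # (cvmSet receives the value promoted to "double" like this).
    raw = struct.pack('<d', 15.0)

    # Reinterpret those same bytes as two 32-bit signed integers,
    # as if they had been stored directly into a CV_32SC1 matrix.
    lo, hi = struct.unpack('<ii', raw)

    print(lo, hi)     # neither word equals 15

    # A proper conversion would truncate the fractional part instead:
    print(int(15.0))  # 15

Neither reinterpreted word bears any obvious relation to 15, which matches the garbage point counts observed when cvmSet was used.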
This works:

    point_counts[3] = 15

while this does not:

    cv.cvmSet(point_counts, 3, 0, 15)

A correct call to cv.cvCalibrateCamera2 would look like this:

    intrinsic, distortion = cv.cvCalibrateCamera2(
        object_points_matrix,  # Object points: a 3xN matrix, listing all the points in all the images
        corners_matrix,        # Image points: a 2xN matrix, listing all the 2D points in all the images
        point_counts,          # A vector (i.e. 1xM matrix) of point counts per image
        frame_size             # Image size, type CvSize
    )

--
cvCalibrateCamera2 broken in python-opencv
https://bugs.launchpad.net/bugs/188539