On 21.09.21 at 18:23, T. Modes wrote:
> kfj wrote on Tuesday, 21 September 2021 at 11:28:33 UTC+2:
>> - The cropping area of the two circular fisheye images is not placed
>> correctly: parts of the image which show the inside of the lens are
>> inside the cropping area, and some content is cut off so much that the
>> output has black areas. The cropping area is too small and off-center.
> This assistant may need some fine-tuning for a specific model, as
> already written. The dual lens assistant was meant as a starting point,
> not as a final product.
I hope my observations will help improve it.
>> - The assistant produces two separate hfov values for the two images,
>> but both cameras in the device have identical hfov, so the second
>> i-line should have v=0. The only things that differ are d and e, and
>> y, p, r for the second image.
> I doubt that the two lenses really have identical hfov - at least at a
> pixel scale. So the idea was to take the deviation between the two
> lenses into account.
Producing hfov values which are 20 degrees apart is definitely wrong. I
suspect that the assistant relies on too few control points. If you want
to stick with separate v values for the two i-lines, my suggestion would
be to try something like this:
start out with the same v for both i-lines (so v=0 in the second one)
and optimize for y,p,r,d,e in the second image. This gives you a good
base to start from to detect pixel-level differences in v for the two
cameras, because with the initial result you can generate more and
better CPs. Once you have these, you can decouple v and see if your
optimization gets any better. I tried that, and with 27 CPs and v
decoupled, the difference the optimizer produced was a mere 0.003 degrees.
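In PTO terms, the two-stage setup described above might look like the sketch below; the image sizes, filenames, canvas size and the 190-degree hfov are assumed for illustration, not taken from an actual project:

```
# stage 1: hfov of image 1 linked to image 0, optimize geometry only
p f2 w5376 h2688 v360 n"TIFF"
i w3648 h2736 f2 v190 r0 p0 y0 d0 e0 n"front.jpg"
i w3648 h2736 f2 v=0 r0 p0 y180 d0 e0 n"back.jpg"
v y1 p1 r1 d1 e1
v
```

For the second stage, once more and better control points exist, replacing v=0 with a literal value and adding v1 to the optimizer line decouples the two hfovs.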
Typically, the overlap in this type of device is small and in an area
where the lens does not 'behave well', so hoping to get the lens
characteristic right by optimizing based on (too few) CPs in this
overlap region is likely to produce suboptimal results, especially if
the initial guess which cpfind needs to find good CPs is not very
accurate. I am convinced that assuming equal FOV is a better initial
guess, at least as a starting point.
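To put a number on how thin that overlap is: for back-to-back fisheyes, only the part of each lens's view beyond the 180-degree hemisphere is shared at the seam. A minimal sketch, assuming a typical ~190-degree dual-lens device (the thread does not name the exact model):

```python
def seam_overlap(hfov_deg):
    """Angular width of the shared band at each seam for back-to-back
    fisheyes: each lens extends (hfov - 180) / 2 degrees past the
    equator on both sides, so the two views share a band of hfov - 180
    degrees in total.  190 degrees is an assumed, typical value.
    """
    return hfov_deg - 180.0

print(seam_overlap(190.0))  # -> 10.0 degrees of overlap per seam
```

All control points the assistant can use have to come from that narrow band, which is why their number and quality matter so much.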
> Some models will have more deviation, others are similar.
I'd be curious to see measurements from others working with these
devices. I'd assume that the differences you get from device to device
result from slight differences in how the individual cameras are mounted
in the device, and are reflected in the outcome of the optimization for
y,p,r,d and e, while the hfov should come out very similar, and widely
different values are most likely wrong.
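A quick way to catch such wrong results would be to compare the literal v values of the i-lines after optimization. A sketch, assuming plain PTO text as input; the 1-degree threshold is my own choice, not a Hugin convention:

```python
import re

def hfov_values(pto_text):
    """Collect the literal hfov (v) values from the i-lines of a PTO
    script.  Linked values such as v=0 are skipped, since they share
    another image's value by definition."""
    hfovs = []
    for line in pto_text.splitlines():
        if line.startswith("i "):
            m = re.search(r" v([0-9.]+)", line)
            if m:
                hfovs.append(float(m.group(1)))
    return hfovs

def hfov_plausible(pto_text, max_diff_deg=1.0):
    """True when all per-image hfovs agree within max_diff_deg."""
    hfovs = hfov_values(pto_text)
    return not hfovs or max(hfovs) - min(hfovs) <= max_diff_deg
```

With values 20 degrees apart this returns False, flagging the result as suspect; a sub-degree spread, as in the 0.003-degree measurement above, passes.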
I hope to get more sample images in the near future, also from a
different device, and I've also asked for samples with the camera yawed
by 90 degrees which would help greatly in working out the fov issue.
I'll post again when I have the data.
Kay
--
A list of frequently asked questions is available at:
http://wiki.panotools.org/Hugin_FAQ
---
You received this message because you are subscribed to the Google Groups "hugin and other free panoramic software" group.
To unsubscribe from this group and stop receiving emails from it, send an email
to [email protected].
To view this discussion on the web visit
https://groups.google.com/d/msgid/hugin-ptx/4362c06d-344e-efdb-4e53-35f78529717d%40yahoo.com.