I worry that this sidetrack in the discussion may be keeping someone who 
knows more about the answers I'm looking for from replying to this thread.

I really want a better understanding of why results differ so much on the 
same optimization problem among versions and builds of pano13 (and maybe of 
hugin).

I half remember seeing a user option somewhere for how hard the optimizer 
tries before stopping.  But now I can't find it in the UI or the code.  Am 
I remembering something that wasn't there?  (I have worked on a LOT of 
different optimizers in different projects in recent years.)  Or am I 
failing to find something that ought to be obvious?  (I've been known to do 
that too.)

What makes the 2021 official version give so much worse results than the 
2020 official version?

To the very limited extent that I understand lev-mar stopping rules, it 
might stop because it hit:
A: A limit on the number of iterations
B: A limit on the number of function evaluations
C: A limit on the rate of change of the parameter values
D: A limit on the rate of change of the total error
?: There seem to be several other possibilities.

The code seems to report which of those it hit back to the caller.  I 
haven't yet figured out how to get any of that reported to the user.
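
For reference, if libpano13's lmdif.c keeps the classic MINPACK convention 
(I have not verified that against the current source), the stopping reason 
comes back through the "info" output parameter, and a tiny wrapper could 
surface it.  Notably, if I read MINPACK right, plain lmdif has no separate 
iteration cap, only maxfev, so my condition A may not even exist in this 
code.  The helper name below is mine:

    // Hypothetical wrapper: translate a MINPACK-style lmdif "info" result
    // into a readable stopping reason.  The codes below are the documented
    // MINPACK ones; libpano13's lmdif.c is a MINPACK translation, but check
    // the actual source before trusting the mapping.
    #include <cstdio>

    static const char *lmdif_stop_reason(int info)
    {
        switch (info) {
        case 0: return "improper input parameters";
        case 1: return "ftol reached: relative reduction of sum of squares is tiny (my D)";
        case 2: return "xtol reached: relative change of the parameters is tiny (my C)";
        case 3: return "both the ftol and xtol conditions hold (C and D)";
        case 4: return "gtol reached: residuals nearly orthogonal to Jacobian columns";
        case 5: return "maxfev reached: function evaluation limit (my B)";
        case 6: return "ftol is too small: no further reduction possible";
        case 7: return "xtol is too small: no further parameter improvement possible";
        case 8: return "gtol is too small: orthogonality is at machine precision";
        default: return "unknown info code";
        }
    }

    // At the call site, after lmdif() returns:
    //     fprintf(stderr, "optimizer stopped: %s\n", lmdif_stop_reason(info));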

Use of sparse lev-mar, combined with the choice of where the 
finite-difference approximation of partial derivatives is done, greatly 
affects the number of function evaluations.
So, comparing otherwise identical versions with and without that build 
option (as I have), and guessing that the stopping condition is B, you 
would expect the build with sparse lev-mar to do more iterations for the 
same number of function evaluations (as long as there are three or more 
images), so sparse should reach a better final answer.  As long as there 
are four or more images, you would expect sparse to take less time per 
iteration.  Above some number of images (much harder to predict, and 
depending on a bunch of other factors), the speedup per iteration should 
exceed the increase in iteration count, so sparse should take less total 
time in addition to producing a better result.
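
To make those thresholds concrete, here is a toy cost model.  Everything 
in it is my assumption, not a measurement of libpano13: p free parameters 
per image, each image overlapping only its neighbors, and the sparse build 
re-evaluating only the residuals touched by a perturbed parameter:

    // Toy per-iteration cost model, dense vs. sparse finite-difference
    // Jacobian, measured in "single-image evaluations".  All numbers are
    // hypothetical; this illustrates why sparse should win as the image
    // count grows, not how libpano13 actually behaves.
    #include <algorithm>
    #include <cstdio>
    #include <initializer_list>

    int main()
    {
        const int p = 4; // assumed free parameters per image (hypothetical)
        for (int n : {2, 3, 4, 8, 16}) { // n = number of images
            // Dense FD: a baseline evaluation plus one full evaluation per
            // parameter, and each full evaluation touches all n images.
            double dense = (n * p + 1.0) * n;
            // Sparse FD (assumed): each of the n*p perturbations touches
            // at most the perturbed image and two neighbors.
            double sparse = n + n * p * std::min(3, n);
            std::printf("n=%2d  dense=%6.0f  sparse=%6.0f  dense/sparse=%.2f\n",
                        n, dense, sparse, dense / sparse);
        }
        return 0;
    }

With these made-up numbers the per-iteration crossover lands around four 
images, which is at least consistent with the hand-waving above.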

In fact, doing that comparison with the current repository contents of 
hugin and libpano13 (not your fork of them) gives results semi-consistent 
with all of that.  Sparse is much faster.  Sparse gives a much smaller max 
error, but a slightly worse average error.  Since it is a least-squares 
problem, a much better max is a better indicator of a better solution to 
the optimization problem than the opposite change in the average.  But the 
absurdly large number of CPs weighs against that conclusion.
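
To illustrate with made-up numbers: CP error sets {9, 1, 1, 1} and 
{4, 3, 3, 3} have averages 3.0 and 3.25 and maxima 9 and 4, but sums of 
squares of 84 and 43.  The second set has the worse average yet barely 
half the least-squares objective, so by the optimizer's own measure it is 
clearly the better solution.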
Anyway, I think that result supports the guess that the stopping condition 
is B.  For all the other stopping conditions, I think the difference in 
results with/without sparse should be smaller (with the timing difference 
more favorable to sparse).
But I feel like I'm missing other important factors.  Other differences 
among the various versions don't reasonably fit any guess at the stopping 
condition or other cause.

As a development aid, there ought to be a way to adjust the individual 
stopping conditions, primarily to raise the limit on B, in order to 
understand the other differences.  I think that would also be a good 
expert-user feature for some situations (and I thought I remembered it 
being there).
I am going down too many coding sidetracks already.  But maybe I should 
sidetrack into finding a way to fit that user option in.
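
As a stopgap during development, something as small as an 
environment-variable override at the optimizer call site would do.  A 
hypothetical sketch (the variable name PANO_MAXFEV and the call site are 
made up for illustration):

    // Hypothetical development hook: raise the evaluation limit (stopping
    // condition B) from the environment, with no UI work.
    #include <cstdlib>

    static int max_function_evals(int default_maxfev)
    {
        if (const char *s = std::getenv("PANO_MAXFEV")) {
            int v = std::atoi(s);
            if (v > 0)
                return v; // user override wins
        }
        return default_maxfev;
    }

    // ...then wherever the optimizer is invoked (schematic):
    //     int maxfev = max_function_evals(DEFAULT_MAXFEV);
    //     lmdif(..., maxfev, ...);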

Another BIG sidetrack I'll likely take: during high school in the '70s, I 
invented a way to compute partial derivatives other than either finite 
differences or analytical derivatives.  It gives more correct partial 
derivatives than finite differences but doesn't have the potential 
complexity explosion of analytical derivatives.  After using it in a couple 
of work situations years later, I took a job on a team that was already 
using the exact same method, and later interacted with other teams (within 
a big employer) that had also independently come up with it.  (Despite 
that, I've never found a description of it online and don't know what it is 
called.)  On a basic timing level, for N partial derivatives, you do N+1 
times as much work during one evaluation, instead of N+1 times as many 
function evaluations as for finite differences.  Depending on other 
factors, the total time might range from twice as long as finite 
differences down to a small fraction of it.  Usually it is done for 
accuracy, not time.  I don't think pano13 needs the accuracy improvement. 
But taking advantage of several images per lens in hugin would make my 
method take significantly less time than finite differences.  If I do 
that, I should remember to kludge the counter of function evaluations to 
pretend it is doing N+1 times as many as it actually is, both to keep the 
stopping condition reasonable and to keep the result accuracy comparable.

I first wrote it in APL and later in C.  But it is really ugly code in C, 
and I won't do that again now that there is a choice.  The only decent 
language for it is C++.  It is really annoying that libpano13 is coded in 
C.  (I don't still have any of the code, and only the APL version ever 
belonged to me.)
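
For concreteness, here is a minimal sketch of what such a scheme looks 
like in C++ with operator overloading.  This formulation circulates under 
the name "forward-mode automatic differentiation" with "dual numbers"; it 
may be the same method or only a close cousin of the one I used:

    // Minimal dual-number sketch: every quantity carries its derivative
    // along, so one evaluation of f yields both f and an exact df/dx, with
    // no finite-difference step error and no symbolic expression blow-up.
    // For N inputs, the member d becomes an N-vector, and one pass costs
    // roughly (N+1)x the scalar work -- the trade-off described above.
    #include <cmath>
    #include <cstdio>

    struct Dual {
        double v; // value
        double d; // derivative with respect to the chosen input
    };

    static Dual operator+(Dual a, Dual b) { return {a.v + b.v, a.d + b.d}; }
    static Dual operator*(Dual a, Dual b) { return {a.v * b.v, a.d * b.v + a.v * b.d}; }
    static Dual sin(Dual a) { return {std::sin(a.v), std::cos(a.v) * a.d}; }

    int main()
    {
        Dual x{0.5, 1.0};    // seed dx/dx = 1
        Dual f = x * sin(x); // f(x) = x*sin(x)
        std::printf("f=%.6f  df/dx=%.6f\n", f.v, f.d);
        // Analytically: f' = sin(x) + x*cos(x) = 0.918217 at x = 0.5.
        return 0;
    }

Extending every overloaded operation (sin, cos, sqrt, etc.) is mechanical, 
which is exactly why this is tolerable in C++ and miserable in plain C.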

On Tuesday, August 9, 2022 at 2:14:37 AM UTC-4 Florian Königstein wrote:

>
> At the moment I'd like to illustrate the rotation / translation ambiguity 
> with the following image that I got from this link:
>
> https://www2.eecs.berkeley.edu/Research/Projects/CS/vision/classes/cs280_s99/notes/lec9.ps
>

For two images, that seems to say the results will be "wrong" but it won't 
matter: the "wrong" parameters fed into stitching will give the same 
output as the right parameters.
For a collection of many images, either gridded or scattered in 2d, I think 
the problem should not occur.
For a big collection of small images in 1d, I haven't worked out the math, 
but I expect the results would get ugly.  In the best case it should look 
like a badly chosen projection for the final image, but it could be worse.
So I think all this tells us that, for a big collection of small images in 
1d, automatic methods aren't going to figure this out and the user needs 
to inject some knowledge.  For the "no translation" case, specifying no 
translation is important, I think, only for this "big collection of small 
images in 1d" case (not more generally).  I don't believe the "no rotation" 
case (moving the tripod along a horizontal mural) really is "no rotation". 
But it certainly would benefit from very tight bounds on the amount of 
rotation.
 
