On Mon 26-Jan-2009 at 02:16 -0800, Pedro Venda wrote:

> My main question is: given that we have more than one way of getting
> control points across all images, does it really help to have lots and
> lots of (good) control points? I mean, given a good result from a
> certain group of control points, is it reasonable to assume that if we
> can produce a number of extra good control points, then a further
> iteration will yield a better fit?
It depends on how many parameters you are optimising. If you have your
lens calibrated and you are only optimising positions, then you only
'need' two good points per image pair.

> I have read on this list that after fine tuning, the cp distance is
> actually their correlation (not sure if their values become equivalent
> or if this is a tweak to help the ignorant chap trying to make a
> decent panorama).

In the Control Points table, the correlation is placed in the Distance
column. The numbers stay there until you optimise, at which point they
become pixel distances again.

> The issue with this is that if I manually check one control point with
> a distance of 0.31 (after fine tuning, before optimising), it seems
> fine - it matches the exact same spot in both images.

The correlation is just a guide to how similar the computer thinks the
points are; machines are very often wrong about this sort of thing.

> This leads me to propose a couple of suggestions that would make the
> GUI easier to use:
>
> - Offer to discard points in the low correlation value range (<=0.900).
>   This option could appear on the dialog shown just after a fine-tuning
>   run, or could be enabled in the control point list dialog after fine
>   tuning.

The interface isn't great, but you can do all of this with the current
Control Points table:

http://wiki.panotools.org/Hugin_Control_Points_table

> Because I shoot hand held, I have lots of overlap, and that does not
> always help. So I end up taking out a few images - generally 20-40% of
> them - to improve fitting, seam and stitching quality.
>
> My suggestion was about iteratively optimising the fit with a single
> image taken out, in an attempt to minimise error. In other words:
>
> 0. optimise, store maximum and RMS error values; set n=1;
> 1. remove image #n, optimise, store error values;
> 2. put image #n back, set n+=1, go back to step 1;
> 3. compare all obtained error values; suggest to the user that if
>    image #m is removed from the project, errors are minimised to X
>    and Y, against the original values x and y;

This would work in the case that there are surplus images. But surely
you would rather remove images because they are blurry, or have unwanted
people/cars in them, than rely on a control point distance - which may
or may not matter when the images get stitched?

--
Bruno

--~--~---------~--~----~------------~-------~--~----~
You received this message because you are subscribed to the Google Groups
"hugin and other free panoramic software" group.
A list of frequently asked questions is available at:
http://wiki.panotools.org/Hugin_FAQ
To post to this group, send email to [email protected]
To unsubscribe from this group, send email to
[email protected]
For more options, visit this group at
http://groups.google.com/group/hugin-ptx
-~----------~----~----~----~------~----~------~--~---
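P.S. The leave-one-out loop in steps 0-3 of the quoted proposal could be
scripted outside the GUI. A minimal sketch, assuming a hypothetical
`optimise` callback (a stand-in for a real optimiser run, e.g. invoking
autooptimiser on a temporary project file - not Hugin's actual API) that
takes a list of images and returns (max_error, rms_error) for that subset:

```python
# Sketch of the proposed leave-one-out search (hypothetical helper,
# not part of Hugin). `optimise` is any callable mapping a list of
# images to (max_error, rms_error) for the optimised fit.

def best_image_to_remove(images, optimise):
    """Try removing each image in turn; return the index whose removal
    minimises RMS error, the errors after that removal, and the
    baseline errors for the full set (index is None if nothing helps)."""
    baseline = optimise(images)               # step 0: all images
    best_idx, best_err = None, baseline
    for n in range(len(images)):              # steps 1-2: drop image #n
        subset = images[:n] + images[n + 1:]
        err = optimise(subset)
        if err[1] < best_err[1]:              # step 3: compare RMS errors
            best_idx, best_err = n, err
    return best_idx, best_err, baseline
```

Note that each candidate removal costs a full optimiser run, so this is
n+1 optimisations for n images - cheap enough for a script, but slow to
do by hand in the GUI.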
