This question isn't specifically related to Gimp, but I thought some folks on this list would have the appropriate expertise to answer it. So here goes...

I'm working on software that involves recognizing a certain pattern in extremely blurry images. Thus far, standard deconvolution techniques haven't adequately cleaned up the blur. So instead, I thought I might try it the other way around -- that is, blur the pattern I'm looking for the same way the camera does, and then compare the blurred pattern against the actual image.
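Concretely, I was picturing something like a matched filter: blur the ideal pattern with a candidate kernel, then cross-correlate the result against the image and look at the peak. A minimal sketch of that idea (assuming NumPy/SciPy; the function and variable names here are just mine for illustration):

import numpy as np
from scipy.signal import fftconvolve

def match_score(pattern, kernel, image):
    # Blur the ideal pattern with a candidate kernel, then
    # cross-correlate it against the image. Cross-correlation is
    # convolution with the template flipped along both axes.
    template = fftconvolve(pattern, kernel, mode='same')
    template -= template.mean()
    corr = fftconvolve(image - image.mean(),
                       template[::-1, ::-1], mode='same')
    ij = np.unravel_index(np.argmax(corr), corr.shape)
    return corr[ij], ij   # peak strength and its location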

So basically, I have a series of specimen images taken with a particular camera. For each one, I can precisely define what the image is supposed to look like. So how would I empirically determine the point-spread function or convolution kernel that would best translate my model images into the specimen images?
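For instance, is something like the following a reasonable starting point? It's a naive regularized least-squares fit in the frequency domain: if observed ~= model convolved with k, then the kernel spectrum is roughly F(observed) * conj(F(model)) / (|F(model)|^2 + eps). Written in NumPy; the eps value and kernel size are arbitrary guesses on my part:

import numpy as np

def estimate_kernel(model, observed, eps=1e-3, size=31):
    # Least-squares kernel in the frequency domain:
    #   K ~= F(observed) * conj(F(model)) / (|F(model)|^2 + eps)
    # where eps damps frequencies at which the model has
    # little energy.
    X = np.fft.fft2(model)
    Y = np.fft.fft2(observed)
    K = Y * np.conj(X) / (np.abs(X) ** 2 + eps)
    k = np.fft.fftshift(np.real(np.fft.ifft2(K)))
    # Crop a small window around the center and renormalize,
    # since the true PSF should be compact.
    cy, cx = k.shape[0] // 2, k.shape[1] // 2
    h = size // 2
    k = k[cy - h:cy + h + 1, cx - h:cx + h + 1]
    return k / k.sum()

With several model/specimen pairs I'd presumably average the numerator and denominator over all pairs before dividing, to beat down the noise.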

Or am I asking the wrong question entirely?

Thanks in advance.
