Caveman wrote:
Toralf Lund wrote:
And no matter which way you look at it, you cannot extend the bandwidth. Which is why I say interpolation doesn't change the resolution.
Depends on what your definition of resolution is. If you define it as the size of the smallest details that can be recorded, then you're right. But the film guys would say that you're talking about resolving power, not resolution. If you define it as the number of pixels (per inch), then it doesn't make sense.
Well. I originally used the term loosely, or without thinking, but I was trying to talk about the actual resolution of the data as opposed to the number of pixels in the file. And of course, when we talk about resolution of scans, we always mean *optical* resolution, or the level of detail in the data captured. "Interpolated resolution" is not a term you use if you're professional about this...
Mathematically, interpolation is the estimation of a function based on certain known function values.
Yes I know, what we have here again is a problem of language abuse.
What is important to know is that the ideal reconstruction (or "interpolation" that guesses 100% right) is done with a sum of weighted sinc functions (something like a sum of weighted sin(x)/x terms, one for each pixel in the image).
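That "sum of weighted sinc terms, one per pixel" can be written down in a few lines. A minimal 1-D sketch in Python/NumPy, assuming unit sample spacing (the function name and test signal are just made up for illustration):

```python
import numpy as np

def sinc_interpolate(samples, t, dt=1.0):
    """Whittaker-Shannon reconstruction: a weighted sum of sinc terms,
    one per known sample, evaluated at (possibly fractional) positions t."""
    n = np.arange(len(samples))
    # np.sinc(x) = sin(pi*x)/(pi*x), the ideal low-pass kernel
    return np.sum(samples[None, :] * np.sinc((t[:, None] / dt) - n), axis=1)

# A band-limited test signal sampled at 32 integer positions...
n = np.arange(32)
x = np.sin(2 * np.pi * 0.1 * n)

# ...reconstructed at 4x density
t = np.arange(0, 31, 0.25)
y = sinc_interpolate(x, t)
```

Note that at the original integer positions the sinc kernel is exactly 0 or 1, so the known samples come back unchanged; the expensive part is that every output value touches every input pixel, which is why nobody ships this as the default kernel.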
Or a bit more complicated. Or maybe this is an approximation, too. What you'd need to use is probably a Fourier series, which is a sum involving sin and cos, but not exactly the way you describe it (Argh. I don't remember the exact expression. It's been way too long since I did tels. But it's something like a weighted sum of (cos(nx) + sin(nx)) for n=1, ...)
That would be a Fourier series, I think. As I thought I should have said after I sent my last mail, yes, you do indeed get a signal filtered through a low-pass filter, if that's how you choose to see it, and you should be able to give a 100% accurate representation of that signal using a Fourier series. But seeing the traditional interpolation algorithms as attempts at approximating that series is stretching it way too far, I think. Interpolation is a much more practical or ad-hoc process.
Since this takes ages to compute, you use approximations of this function in the form of "bicubic", "bilinear" or "nearest". They compute much faster but are not perfect, i.e. they give some slightly wrong guesses (artifacts).
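To make the trade-off concrete, here is a hypothetical 1-D example of the two cheapest kernels mentioned above (real scanners work in 2-D, but the idea is the same; the scanline values are invented for illustration):

```python
import numpy as np

# A short 5-pixel "scanline", upsampled 4x with two cheap kernels
scanline = np.array([0.0, 1.0, 0.0, 1.0, 0.0])
pos = np.linspace(0, 4, 17)  # 4x as many sample positions

# "nearest": each new pixel copies the closest original one (blocky)
nearest = scanline[np.round(pos).astype(int)]

# "bilinear" (1-D case): straight lines between known pixels (smeared)
linear = np.interp(pos, np.arange(5), scanline)
```

Neither kernel invents detail beyond the original bandwidth; they just make different (slightly wrong) guesses between the known pixels, which is exactly why the artifacts look different (blockiness versus blur).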
OK, OK. You can in a way see these as approximations of the Fourier, since you *know* that's the right function. Unless you actually want to do something else than representing the "filtered" data; you could be trying to estimate the *unfiltered* version, too. At any rate, I'd say it's all about choosing a polynomial function based on a limited region of the picture because you know that generally gives you a decent estimate - not because you see it (mathematically) as a simplification or approximation of a specific known function.
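That "polynomial function based on a limited region" view can be sketched too. A common cubic kernel (Catmull-Rom, which is one of the fits used under the "bicubic" label; the function below is an illustrative 1-D version, not any particular product's implementation) estimates a value between two pixels from only the 4 nearest samples:

```python
def catmull_rom(p0, p1, p2, p3, t):
    """Cubic polynomial estimate between samples p1 and p2 (t in [0, 1]),
    built from the 4 nearest samples -- a local polynomial guess,
    not a truncated Fourier series."""
    return 0.5 * ((2 * p1)
                  + (-p0 + p2) * t
                  + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t * t
                  + (-p0 + 3 * p1 - 3 * p2 + p3) * t * t * t)
```

At t=0 it returns p1 and at t=1 it returns p2, so the known pixels are preserved; everything in between is the "decent estimate from a limited region" described above.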
Now if you explore some really professional image processing software (like the kind used for satellite imagery), you'll notice that the "sinc" method is available too. Recommended, of course, to be used only with parallel computing clusters.
Hmmm. Maybe that one is a serious attempt at something Fourier-like...
cheers !

