Igor PDML-StR wrote:
Larry,
I just wanted to comment on your statement that
"it would in theory be possible to extract depth/distance information
from the image..."
I haven't thought about it too deeply, but my intuition tells me that
you would be missing too much information to be able to reconstitute
the original image. Let me try to explain what I mean. While it is not
a complete analogy, you can think of the image that you get on the
sensor as a Fourier spectrum of the original object.
I didn't say that you could reconstitute the original image. I said
that you could extract depth information. For example, the amount of
blur would tell you that an object is at +20% or -20% of the focus
distance. It won't tell you which, but you'd know it was one or the
other.
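A quick way to see that sign ambiguity is to invert a thin-lens blur
model; the following sketch does that in Python, with a made-up focal
length, f-number, and focus distance chosen purely for illustration:

# Thin-lens model: the blur-circle diameter for an object at distance
# s, with the lens of focal length f and f-number N focused at s_f, is
#   c = A * (f / (s_f - f)) * |s - s_f| / s    (A = aperture diameter).
# Inverting it gives two distances for any measured blur: one nearer
# than the focus plane and one farther.

def distances_for_coc(c, s_focus, f, N):
    """Return the (nearer, farther) object distances that both produce
    a blur circle of diameter c (thin-lens model, meters throughout)."""
    A = f / N                                   # aperture diameter
    r = c / (A * f / (s_focus - f))             # normalized blur
    return s_focus / (1 + r), s_focus / (1 - r)

near, far = distances_for_coc(c=100e-6, s_focus=2.0, f=0.050, N=2.8)
print(f"a 100 um blur circle: object at {near:.2f} m or {far:.2f} m")

For a 50 mm lens at f/2.8 focused at 2 m, a 100 micron blur circle is
consistent with an object at about 1.64 m or about 2.56 m: the blur
gives you the magnitude of the defocus but not its sign.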
You can reconstitute the original object if you have ALL the Fourier
harmonics. For that, you'd need to collect the light scattered from the
object in all directions. But because of the finite size of the sensor,
you are catching only a very limited subset of those harmonics.
So you can "reconstitute" something, but how well it reproduces the
original "will depend".
I hope this helps,
Igor
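Igor's band-limiting point is easy to demonstrate numerically. Here is
a one-dimensional numpy sketch; the signal shape and the 64-harmonic
cutoff are arbitrary illustrative choices:

import numpy as np

# Reconstitute a sharp-edged 1-D "object" from only its lowest Fourier
# harmonics; the discarded high harmonics are gone for good.
n = 1024
x = np.linspace(0.0, 1.0, n, endpoint=False)
signal = ((x > 0.30) & (x < 0.35)).astype(float)   # sharp-edged object
spectrum = np.fft.rfft(signal)
spectrum[64:] = 0.0                  # keep only the first 64 harmonics
reconstruction = np.fft.irfft(spectrum, n)
err = np.max(np.abs(reconstruction - signal))
print(f"max reconstruction error with 64 harmonics: {err:.2f}")

The reconstruction still shows roughly where the object is, but the
sharp edges are smeared and ringing appears: something is recovered,
and how well it matches the original "will depend" on the cutoff.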
I was discussing this with someone a couple of days later, and he found
this article on the subject:
http://web.media.mit.edu/~bandy/refocus/PG07refocus.pdf
Also, my goal wasn't to reconstitute the original image. My goal was to
exaggerate the effects of depth of field. If you have, for example, a
36 MP image, but your final image is only 6 MP (my print in the PDML
show was only 3 MP), then you have 6 raw pixels per final image pixel,
which is about 2.4x the linear resolution, so you are able (in theory)
to detect blur at less than half the final circle of confusion. Once
you do this, you could then increase the blur by some amount, so that
the processed image has effectively much less depth of field than a
normally processed image would have.
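A rough sketch of that exaggeration step, using scipy; the
Laplacian-based sharpness map and the linear blend are assumptions for
illustration, not a calibrated blur estimator:

import numpy as np
from scipy import ndimage

def exaggerate_dof(img, extra_sigma=4.0):
    """Estimate where a 2-D grayscale float image (0..1) is already
    soft, then blur those regions further, so the result appears to
    have much shallower depth of field than the original."""
    # Local high-frequency energy as a crude stand-in for "in focus".
    hf = np.abs(ndimage.laplace(img))
    sharpness = ndimage.gaussian_filter(hf, 8)    # smooth the map
    sharpness /= sharpness.max() + 1e-12          # normalize to 0..1
    blurred = ndimage.gaussian_filter(img, extra_sigma)
    # Keep sharp regions, push already-soft regions toward heavy blur.
    return sharpness * img + (1 - sharpness) * blurred

Working at the full 36 MP resolution and only downsampling afterward is
what lets the sharpness map pick up blur smaller than the final image's
circle of confusion.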
It did occur to me later that, for my goal of detecting which photos
are well focused and correctly focused (or meet other definitions of
"sharp"), one could apply a high-pass filter (edge detection), much
like focus peaking in live view. I'd love a filter like this for
Lightroom, to help me more easily determine which of many photos was
focused closest to what I want.
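Something like this Sobel-based score would do as a first cut; it
mimics what focus peaking highlights, and the 99th-percentile summary
is an arbitrary illustrative choice:

import numpy as np
from scipy import ndimage

def sharpness_score(img):
    """Focus-peaking-style metric: high-pass the image (Sobel
    gradients) and summarize the strongest edges. Higher scores mean
    crisper detail. img is a 2-D grayscale float array."""
    gx = ndimage.sobel(img, axis=1)
    gy = ndimage.sobel(img, axis=0)
    edges = np.hypot(gx, gy)
    return np.percentile(edges, 99)

# Rank a burst of similar frames from sharpest to softest:
# ranked = sorted(frames, key=sharpness_score, reverse=True)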
On Mon, 29 Feb 2016 12:28:23 -0800, Larry Colen wrote:
If you look at the depth of field equations, they are based on whether
the circle of confusion is smaller than the size of a pixel. When you
look at an image on the web (generally about 2 MP or below), the
effective DoF is a lot greater than if you pixel-peep the raw file
(16-24 MP or more), because the smaller files can tolerate a much
larger circle of confusion than the raw image.
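You can see this directly by plugging different circles of confusion
into the standard thin-lens DoF equations; the sensor geometry and CoC
values below are rough illustrative assumptions (a 36x24 mm sensor has
a pixel pitch of roughly 20 um at 2 MP and 6 um at 24 MP):

def dof_limits(s, f, N, c):
    """Near/far limits of acceptable focus (thin-lens DoF equations)
    for focus distance s, focal length f, f-number N, and circle of
    confusion c, all in meters."""
    H = f * f / (N * c) + f                     # hyperfocal distance
    near = s * (H - f) / (H + s - 2 * f)
    far = s * (H - f) / (H - s) if s < H else float("inf")
    return near, far

for label, c in [("2 MP web view", 20e-6), ("24 MP raw", 6e-6)]:
    near, far = dof_limits(s=3.0, f=0.050, N=2.8, c=c)
    print(f"{label}: in focus from {near:.2f} m to {far:.2f} m")

For the same 50 mm f/2.8 shot focused at 3 m, the web-sized CoC yields
a noticeably deeper zone of acceptable focus than the pixel-level CoC.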
It seems to me that, with sensors of such high resolution that they are
diffraction-limited at f/5.6 or so, it would be possible for image
processing software to detect the increasing circle of confusion (at 2
or 3 pixels) with a lot more accuracy than our eyes can, and it could
therefore enhance the effects of depth of field more accurately than
just applying a blur to the background of an image.
Likewise, given a good model of the lens's blur (bokeh), it might also
be possible to mathematically increase the depth of field.
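In the simplest case that is deconvolution. A plain Wiener filter is
the textbook version; this numpy sketch takes the lens blur kernel
(PSF) and signal-to-noise ratio as caller-supplied assumptions:

import numpy as np

def wiener_deconvolve(img, psf, snr=100.0):
    """Sharpen defocus blur given a model of the lens's blur kernel.
    Plain Wiener deconvolution in the Fourier domain; assumes psf is a
    small 2-D array, no larger than the 2-D float image img."""
    # Pad the PSF to image size and shift its center to the origin.
    kernel = np.zeros_like(img)
    kernel[:psf.shape[0], :psf.shape[1]] = psf
    kernel = np.roll(kernel,
                     (-(psf.shape[0] // 2), -(psf.shape[1] // 2)),
                     axis=(0, 1))
    H = np.fft.fft2(kernel)
    # Wiener filter: H* / (|H|^2 + 1/SNR)
    G = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)
    return np.real(np.fft.ifft2(np.fft.fft2(img) * G))

In practice the PSF varies with subject distance and across the frame,
which is exactly why a good model of the lens matters here.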
A corollary to this is that it would in theory be possible to extract
depth/distance information from the image (though it might be hard to
tell the difference between something being 2 times the focal distance
and 1/2 the focal distance).
Are there any hard-core signal/image processing nerds on the list who
know anything about work being done on this? It wasn't too long ago
that it would take the sort of processing power that only Los Alamos or
the NSA had to do this, but desktop computers are probably now running
something like 2005 "Craymarks", particularly with GPUs.
--
Larry Colen [email protected] (postbox on min4est) http://red4est.com/lrc