This whole idea makes sense to me only if the camera has a large
focal-length-to-aperture ratio (a high f-number) such that the sensor image
is entirely in focus anyway. For instance, consider a corner of a sensor
getting image data: even if it knew the camera-to-object range for each
pixel (useless info?), the camera and software cannot know whether pixel X's
color values should differ only slightly (a blur gradient?) or contrast
sharply (a focused edge?) from those of an adjacent pixel Y.
 
It's easy to see how software can use focused data to locate the pixels
making up highly contrasting edges and blur them (witness the blur effects
in Premiere and Photoshop), but can it generate highly contrasting edge
pixels from within a large gradient area (sharpening would not qualify here)?
 
So if the idea is to offer a user-selectable bokeh derived from a
well-focused image, that would be understandable to me, but if the idea is
for the camera to put into focus what comes from the lens originally as
out-of-focus, then I wonder.
 
Lee
 
From: [email protected] [mailto:[email protected]]
On Behalf Of Edward Martin III
Sent: Friday, August 05, 2011 12:26 PM
To: [email protected]
Subject: Re: [AP]HDR
 
  
Lee writes "Am I being too suspicious?"

I had a similar response when I learned of it.

It seems to me that this is a camera that simply captures one
additional dimension and that is depth-from-lens. Then, it's simply a
matter of having software to interpret that data.

Don't get me wrong -- that IS a very clever thing to do. I've used
ultrasonics to build topo maps before -- and building an HD topo map
at the same time you're shooting HD video at, say, 30 fps, is no
cakewalk. The old Polaroid cameras used basically the same ultrasonic
ranging technology, and you can get more sophisticated ultrasound tech
nowadays.

Unless I'm misunderstanding something, this is one of those solutions
to a problem whose solution set happens to include "pull focus right
the first time."

It also seems useful if you've got elements you're shooting that present
a depth-of-field challenge, such as a character walking toward the
camera through a complex setting (say, trees with hanging vines).
Trying to maintain focus with a shallow depth of field in that
situation can be... tricky. I guess a device such as this, which lets
one fiddle endlessly in post customizing one's DoF, could help fix such
things.

Assuming post is cheaper than production.

But I could be wrong. It could be something far more sophisticated.

So, yeah, seems like you and I had the same response. 8)

Cheers,

Edward





