Hi, I brought up this shift function with APS sensors/FF lenses
a while back. I don't see why it wouldn't work, but you haven't
addressed the additional burden of needing an FF viewfinder for
it to be workable, plus some sort of viewfinder indication to
show what portion of the viewfinder image the sensor is capturing...

As for whether it's a viable product or not, it's all about cost.
If it could be done for only slightly more than plain ol' APS,
it might be viable. The other thing is that your good idea about
covering the whole frame only works with stationary subjects,
while the shift function doesn't have that limitation. The
stationary-subject requirement makes it very little better than
the pan-and-stitch techniques already possible with any digital
camera...

JCO

-----Original Message-----
From: Glen [mailto:[EMAIL PROTECTED]]
Sent: Thursday, September 22, 2005 3:18 AM
To: [email protected]
Subject: Sensors That Shift?


Since the current sensors are smaller than 24 x 36 mm, could a camera be 
built in such a way that it could shift the sensor up and down and from 
side to side? This would serve much the same purpose as a shift lens. Of 
course, you would have to use lenses made to cover the full 24 x 36 
format; the current DA lenses wouldn't work with a movable sensor.

I think this might be a cool feature for some people, especially for those 
who photograph architecture.
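
As a rough illustration of the travel involved (the exact sensor 
dimensions are an assumption; I'm using ~23.5 x 15.7 mm for our 
current 6 MP sensors):

    # Rough sketch: how far an APS-C sensor could shift inside the
    # full 24x36mm image circle. Sensor size is an assumption.
    FRAME_W, FRAME_H = 36.0, 24.0      # full 35mm frame, in mm
    SENSOR_W, SENSOR_H = 23.5, 15.7    # assumed APS-C sensor, in mm

    max_shift_x = (FRAME_W - SENSOR_W) / 2   # 6.25 mm left or right
    max_shift_y = (FRAME_H - SENSOR_H) / 2   # 4.15 mm up or down

    print(f"Horizontal shift: +/- {max_shift_x:.2f} mm")
    print(f"Vertical shift:   +/- {max_shift_y:.2f} mm")

So only a few millimeters of travel in each direction would be needed.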

There is also one other potential use for a movable sensor. When 
photographing stationary subjects from a tripod, the sensor could be 
used to take more than one image of the scene, shifted a sub-pixel 
distance vertically and horizontally between exposures. Let's say that 
instead of a single image, we capture 9 images, arranged in a grid 
pattern centered on what would have been the normal single image. We 
then use this grid of 9 images to build a higher-resolution image than 
a single capture would have produced. This should be a way to quadruple 
the number of effective pixels while using the same sensor. 
Unfortunately, it would only work where the camera and subject stayed 
stationary with respect to each other. Still, I bet a lot of 
photographers would benefit from such a boost in resolution under those 
circumstances. You could also develop some enhanced noise-reduction 
techniques by analyzing the extra exposures and tossing out any pixels 
that seem abnormally bright.
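
To make the interleaving step concrete, here is a toy sketch in 
Python/NumPy (illustration only -- a real implementation would need 
careful registration, and I'm assuming captures indexed by their shift 
in 1/n-pixel steps):

    import numpy as np

    # Toy sketch: interleave an n x n grid of captures, each shifted
    # by 1/n of a pixel, into one finer sampling grid.
    def interleave(captures, n):
        h, w = captures[0][0].shape
        fine = np.zeros((h * n, w * n), dtype=captures[0][0].dtype)
        for dy in range(n):
            for dx in range(n):
                # The capture shifted (dy/n, dx/n) pixels supplies
                # every n-th sample of the fine grid, offset (dy, dx).
                fine[dy::n, dx::n] = captures[dy][dx]
        return fine

    # The extra exposures also enable outlier rejection: a per-pixel
    # median across repeated frames discards abnormally bright pixels.
    def reject_hot_pixels(frames):
        return np.median(np.stack(frames), axis=0)

With n = 3 this is the nine-image grid described above. Nine captures 
give nine times the nominal samples, though the real gain in detail 
would be smaller, since the lens and the sensor's anti-alias filter 
limit what each shifted sample adds.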

Would this idea of shifting the sensor in sub-pixel amounts actually help 
yield higher resolutions? My intuition tells me that it would give images 
with higher effective resolution than the single images we currently have, 
but perhaps not quite as nice as a sensor with a truly quadrupled pixel 
count. However, it should be a lot cheaper to build than a sensor with a 
quadrupled pixel count.  ;)

Of course, the sub-pixel shift was thought up as a way to boost effective 
resolution while maintaining full compatibility with DA lenses. If the 
sub-pixel shift idea doesn't work, I have a second idea for increasing 
the total resolution of the captured image. Once again, it only works for 
stationary subjects. For each image, make 4 captures: shift the sensor to 
the top-left corner of the 24x36mm frame for the 1st capture, then to the 
top-right corner, then the bottom-right, and finally the bottom-left. 
This way, you have covered the full 24x36mm frame in 4 overlapping tiles. 
Then seamlessly stitch these tiles together in software to create a 
single full-frame image with much more than our normal 6.1 megapixels. Of 
course, you lose compatibility with DA lenses and would have to use 
lenses designed to cover the full 24x36mm frame. I'm not sure exactly how 
many extra megapixels you would gain from extending the effective sensor 
coverage to 24x36 mm; my rough estimate is below, but maybe someone on 
the list can check it?
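
Here's my rough estimate (again assuming our sensors are about 
23.5 x 15.7 mm; pixel count scales with sensor area):

    # Back-of-the-envelope: megapixels from tiling out to 24 x 36 mm.
    # Sensor size and pixel count are assumptions (~23.5 x 15.7 mm, 6.1 MP).
    sensor_area = 23.5 * 15.7          # mm^2
    frame_area  = 36.0 * 24.0          # mm^2

    scale = frame_area / sensor_area            # about 2.3x the area
    print(f"Stitched frame: ~{6.1 * scale:.1f} MP")   # about 14.3 MP

Note also that the four corner tiles would overlap heavily (two sensor 
widths span 47 mm against the 36 mm frame), which ought to make the 
stitching easier.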


take care,
Glen

