On Mon, Apr 11, 2011 at 1:19 PM, Larry Colen <[email protected]> wrote:

> I suspect that part of it has to do with the fact that light isn't composed 
> of R,G,B photons, it's just that our eyes are composed of RGB cones:
> Cone type       Name    Range           Peak wavelength
> S               β       400–500 nm      420–440 nm  violet–green, peaking in low violet
> M               γ       450–630 nm      534–555 nm  blue–red, peaking in green
> L               ρ       500–700 nm      564–580 nm  green–red, peaking in yellow-orange
>
> If a 600 nm (orange) photon hits our eyes, the M and L cones are activated, or in
> the RGB parlance the red and green sensor sites. I'm not sure which of your
> sensors it would trigger, or in what proportion.
>
> Could you explain your sensor idea to me in terms of photon wavelengths?  I 
> got this far and am not clever enough to work it out.

Well, a normal sensor divides visible light into rgb, no doubt with some
overlap, and usually in an rggb pattern. Taking whatever bands they
use, or somewhat altered ones if that works better, why not let each
sensor site receive more light?

xyz would cause confusion, and we can't use abc or pqr since each of
those shares a letter with rgb. Call my suggestion ijk:

i is r+g
j gets all of rgb; it is just white
k is g+b

Use an ijjk pattern where rggb would be.
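One nice property of the scheme: with idealized, noise-free band sums, the
original rgb values fall out of the ijk readings by simple differences. A
minimal sketch in Python (the function name and the clean, non-overlapping
bands are my assumptions, not part of the proposal; real demosaicing with
overlapping passbands and noise would be messier):

```python
def ijk_to_rgb(i, j, k):
    """Invert i = r+g, j = r+g+b, k = g+b (idealized band sums)."""
    r = j - k          # (r+g+b) - (g+b)
    b = j - i          # (r+g+b) - (r+g)
    g = i + k - j      # (r+g) + (g+b) - (r+g+b)
    return r, g, b

# Example: a patch with r=2, g=5, b=3 gives readings i=7, j=10, k=8
print(ijk_to_rgb(7, 10, 8))   # -> (2, 5, 3)
```

So no information is lost at a given site pattern; the trade is that the
differences amplify noise relative to reading r, g, b directly.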

If r, g and b have equal amplitudes, each 2x2 block passes
2 + 3 + 3 + 2 = 10 units of light instead of rggb's 4, i.e. 5/2 as
much. Even if you just use the middle jj's for monochrome, those two
pixels alone collect 6 units, 3/2 the light of the whole rggb block,
at half the resolution.
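A back-of-envelope check of that arithmetic, assuming each of r, g and b
contributes one unit of light per pixel whose filter passes its band (an
idealization; real filter transmissions and overlaps would change the
exact ratios):

```python
# One unit of light per band per pixel that passes it.
r = g = b = 1

rggb = r + g + g + b                        # standard Bayer 2x2 block
ijjk = (r + g) + 2 * (r + g + b) + (g + b)  # i + j + j + k
jj = 2 * (r + g + b)                        # just the two white pixels

print(rggb, ijjk, ijjk / rggb)  # 4 10 2.5
print(jj, jj / rggb)            # 6 1.5
```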

-- 
PDML Pentax-Discuss Mail List
[email protected]
http://pdml.net/mailman/listinfo/pdml_pdml.net
to UNSUBSCRIBE from the PDML, please visit the link directly above and follow 
the directions.
