Yes and no. Much of an image is the result of image processing in our
retinas and brains. When we get to the edge where grain is barely
perceptible or where individual pixels are barely perceptible the brain
weighs in. At the edge, at the same resolution, same color rendering and
same contrast, a random grain pattern in a photographic print is perceived a
bit differently than a regular, pixelated pattern from a sensor. To my eye
(and I can only speak for my eye) the apparent "sharpness" and "naturalness"
of the random grain is better. I attribute this to my brain being able to
interpolate "true" edges better from random input than from a regularized
pattern that causes my eye to follow a false path until the jump to the next
pixel. This is despite the variable mixing of pixel colors at the "digital"
edge.
I probably haven't explained my perception well, but I tried.
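Perhaps a toy sketch makes it more concrete (this is purely my own illustration, with made-up numbers, not a measurement): render a slightly slanted edge once with a single sample per pixel on a regular grid, and once by averaging many randomly placed "grains" into each pixel. The regular grid gives one long, perfectly aligned staircase the eye can lock onto; the random grain breaks that regularity up.

```python
# Toy sketch only: compare a slanted edge sampled on a regular pixel grid
# with the same edge sampled by many randomly placed "grains".  All numbers
# are made up for illustration.
import numpy as np

rng = np.random.default_rng(0)
size = 64  # toy image is size x size pixels

def scene(x, y):
    """Ideal scene: 1 to the right of a slightly slanted edge, else 0."""
    return (x > 0.5 * size + 0.1 * y).astype(float)

# Regular grid: one sample at each pixel centre -> a hard, aligned staircase.
xs, ys = np.meshgrid(np.arange(size), np.arange(size))
regular = scene(xs + 0.5, ys + 0.5)

# Random "grain": many jittered point samples, averaged within each pixel,
# so the edge position is carried by local grain density rather than by a
# single aligned jump.
n = 20 * size * size
gx = rng.uniform(0, size, n)
gy = rng.uniform(0, size, n)
vals = scene(gx, gy)
hits, _, _ = np.histogram2d(gy, gx, bins=size,
                            range=[[0, size], [0, size]], weights=vals)
cnts, _, _ = np.histogram2d(gy, gx, bins=size,
                            range=[[0, size], [0, size]])
grainy = hits / np.maximum(cnts, 1)

# Along one row: 'regular' flips from 0 to 1 in a single aligned step, while
# 'grainy' takes an in-between value where the edge crosses a pixel and is
# noisy elsewhere, so there is no long, regular run for the eye to follow.
row = 32
print("regular:", regular[row, 31:40])
print("grainy :", np.round(grainy[row, 31:40], 2))
```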
Regards,
Bob...
From: "Steve Jolly" <[EMAIL PROTECTED]>
mike wilson wrote:
"My personal testing (to be published soon) shows that a 5MP sensor
with dedicated optics can match the performance of a 35mm negative
of superb quality coupled to a lens of equally superb quality."
To do what, though? Produce prints up to a certain size, arguably.
You can do that with an even smaller sensor, although the print size
will change. The statement is meaningless.
I don't understand the importance of print size. If the two systems
capture the same amount of information about a scene, then they will
produce prints of the same quality at any size. I don't think the
statement is meaningless.
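For a rough sense of scale, here is my own back-of-envelope (the lp/mm figures are assumptions for illustration, not measurements): treat the 36 x 24 mm frame as limited by the resolving power of the whole film-plus-lens system, and allow two pixels per line pair (the Nyquist minimum).

```python
# Rough back-of-envelope: how many pixels a 36 x 24 mm frame of 35mm film
# is "worth" at various assumed system resolving powers.  The lp/mm values
# are illustrative assumptions, not measurements.
FRAME_W_MM, FRAME_H_MM = 36, 24

for lp_per_mm in (30, 40, 60):       # assumed film+lens resolving power
    px_per_mm = 2 * lp_per_mm        # 2 pixels per line pair (Nyquist)
    w_px = FRAME_W_MM * px_per_mm
    h_px = FRAME_H_MM * px_per_mm
    print(f"{lp_per_mm:>3} lp/mm -> {w_px} x {h_px} "
          f"= {w_px * h_px / 1e6:.1f} MP equivalent")

# ~30-40 lp/mm of real-world system resolution works out to roughly 3-6 MP,
# which is why a 5MP sensor with good optics can be in the same ballpark;
# at 60 lp/mm the film frame pulls clearly ahead.
```

On those assumed figures the comparison hinges entirely on how much resolution the film-and-lens combination actually delivers in practice, which is presumably what the testing is meant to establish.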