On 12 May, Keith Packard wrote:
> 
> Around 10 o'clock on May 12, [EMAIL PROTECTED] wrote:
> 
>> Isn't it less misleading to say partial coverage is indistinguishable
>> from a transmissivity less than one?
> 
> Or partial coverage is indistinguishable from partial opacity, which is 
> more-or-less what alpha measures.
> 

It is that "more-or-less" that is the source of some of our problems.  

The following is an optical model that could be implemented in software
and that might be at least partially accelerable by present hardware. (I
wonder to what extent this is how the "gamma correction" is done in the
current hardware.)

I will start with a monochrome world and add color later.  First, a
definition from the photographic imaging world:

                                1
Optical Density (OD) = log10 ( --- )
                                T

and

T = transmissivity = fraction of light transmitted = 1 - alpha

There is no "more-or-less" here, although when encoded into integers as
required by computers there will be some approximation errors.
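As a concrete check, here is a minimal Python sketch of the definition
above (the function names are mine, purely illustrative):

```python
import math

def od_from_transmissivity(t):
    """Optical density of a layer that transmits fraction t of the light."""
    return math.log10(1.0 / t)

def transmissivity_from_od(od):
    """Inverse mapping: fraction of light a layer of the given OD passes."""
    return 10.0 ** -od

# A layer passing half the light has OD ~0.301; its alpha (opacity) is 1 - T.
t = 0.5
od = od_from_transmissivity(t)       # ~0.30103
alpha = 1.0 - t                      # 0.5
```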

If you perform compositing operations in OD you find that

Composite( ODi, ODj) = ODi + ODj

which is exactly what happens when I superimpose layers of transparent
material with known OD.  If I superimpose four layers, with OD of
0.3, 0.3, 0.4, and 0.5 the result is an OD of 1.5.  Black stencils have
an OD of Dmax.  Working in OD is computationally simple and corresponds
to the photographic transparency model.
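The four-layer example can be verified directly (a quick Python check;
nothing here beyond the arithmetic above):

```python
import math

layers_od = [0.3, 0.3, 0.4, 0.5]

# Compositing in OD is plain addition ...
total_od = sum(layers_od)                      # 1.5

# ... which agrees with physically stacking the layers: transmissivities
# multiply, and the log of the product is the sum of the logs.
total_t = 1.0
for od in layers_od:
    total_t *= 10.0 ** -od
```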

The problems with working in OD are:

1) It does not correspond to the voltages needed by CRTs.  If you
 are working in greyscale and have a spare LUT this is a minor problem.
 You just put the OD -> voltage function into the LUT.  If not, there
 must be a conversion in software.
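For the greyscale case, such a LUT is easy to build.  Here is a sketch
assuming 8-bit OD codes in steps of 0.01 and a simple gamma-2.2 power-law
display; a real monitor would need measured calibration curves:

```python
GAMMA = 2.2      # assumed display power law; real monitors need calibration
OD_STEP = 0.01   # 8-bit codes 0..255 then span OD 0.00 .. 2.55

lut = []
for code in range(256):
    luminance = 10.0 ** -(code * OD_STEP)          # fraction of full white
    lut.append(round(255 * luminance ** (1.0 / GAMMA)))

# lut[0] (OD 0.0, clear) drives the gun at full output; lut[255] is near black.
```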

2) It is a subtractive model and most computer users are accustomed to
 additive models.  With the OD approach, I start with a pure white image
 at maximum brightness and start subtracting things.  When I superimpose
 a pure red and a pure green transparency I get black.  In the additive
 world you get yellow.  People can learn the difference, but existing
 software would also need to be changed.

 Since there is always a mapping between additive and subtractive
 models this could be dealt with in the interface to the programs.
 Provide both additive and subtractive interfaces, then use the
 subtractive model internally.

 You also want to add a kind of special stencil to provide for the
 common overlay operation in additive systems.  In a subtractive system
 I cannot superimpose an opaque image layer. Opaque layers yield black.
 But I can stencil a hole (e.g. OD of 0.0) and superimpose the image
 onto the resulting white background.
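 That stencil-hole overlay can be sketched per pixel (Python; `DMAX` and
 the list-of-floats pixel representation are just illustrative):

```python
DMAX = 2.55   # assumed maximum representable OD

def overlay(background_od, image_od, stencil):
    """Superimpose image_od on background_od, OD pixel by OD pixel.

    Where stencil is True the background is first punched clear
    (OD 0.0), so the image lands on white instead of driving the
    result toward black.
    """
    out = []
    for bg, img, hole in zip(background_od, image_od, stencil):
        if hole:
            bg = 0.0                     # stencil: cut a clear hole
        out.append(min(bg + img, DMAX))  # saturating add
    return out

bg      = [1.0, 1.0]
image   = [0.4, 0.4]
stencil = [True, False]
result  = overlay(bg, image, stencil)    # [0.4, 1.4]
```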

 The real world cannot create negative OD, but the concept of a negative
 OD is mathematically well behaved and potentially very interesting as a
 computer operation.  It is a weird kind of amplifying layer, perhaps
 similar to a bleaching operation.  This is not really needed.

3) The color model would need careful definition.  In the OD world there
 is no fourth alpha channel.  Each pixel is defined in terms of an OD
 for that color band.  (I could define four color bands and get really
 exotic if I wanted.  I am aware of some color films that have four
 color sensitive layers and get enhanced color resolution as a result.
 This could be an interesting experiment for computers.)

 As with 1) above, you need to convert from the color band OD into
 RGB on both input and output.  The output must convert OD into voltages
 for the red, green, and blue guns. The input must convert from the
 typical RGBA into OD. This aspect can be hidden by the interface layer.
 The translation between RGBA and optical density is a matter of simple
 definition, but might involve cross terms that make it more than a
 simple LUT operation.
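 One possible definition, ignoring cross terms: take each band's
 transmissivity to be the pixel's appearance over white, i.e.
 T = 1 - a*(1 - c) for component c and alpha a.  A Python sketch (the
 clamping ceiling `max_od` and the function name are my assumptions):

```python
import math

def rgba_to_od(r, g, b, a, max_od=2.55):
    """Per-band OD for an RGBA pixel (components and alpha in 0.0 .. 1.0).

    Simplification: bands are treated independently (no cross terms);
    each band's transmissivity is what the pixel would show over white.
    """
    ods = []
    for c in (r, g, b):
        t = 1.0 - a * (1.0 - c)          # band transmissivity over white
        od = max_od if t <= 0.0 else min(math.log10(1.0 / t), max_od)
        ods.append(od)
    return tuple(ods)

# Pure red: passes the red band freely, blocks green and blue entirely.
red = rgba_to_od(1.0, 0.0, 0.0, 1.0)     # (0.0, 2.55, 2.55)
```

 Superimposing this red with the corresponding pure green saturates every
 band, i.e. black, exactly the subtractive behaviour described in 2).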

4) You need to think about resolution and range.  An 8-bit resolution
 for positive OD could provide an OD range of 0.0 to 2.55.  This is not
 too bad for the typical office PC.  The actual lighting and luminance
 of typical PC monitors can be mapped onto that range and look fairly
 realistic.  If you measure the human eye looking at film on a lightbox
 or slide projector in a dimly lit room, most people can see OD
 differences down to somewhere in the range of 0.001 to 0.005. (The eye
 becomes rather non-linear in OD sensitivity near white and near black.
 Those numbers are for tones from dark grey to light grey.)  The upper
 bound for usable OD range is around 3.5.  The typical human eye has
 problems seeing larger OD ranges.  So a 12-bit encoding covers the
 typical human range.

 Using an 8-bit resolution opens up lots of potential with hardware
 accelerators.  Using a 10-, 12-, or 16-bit resolution shifts you into
 the domain of computer processing, but modern vector processors can
 still provide noticeable acceleration for the 16-bit encodings. Some
 bright soul can probably devise a 10-bit encoding that converts into
 efficient 32-bit arithmetic.  Compositing OD merely needs saturating
 addition operations, so it can be very fast. Adding amplification
 layers introduces only saturating subtraction.
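 With the 8-bit encoding (step 0.01), the whole compositing core reduces
 to saturating byte arithmetic.  A Python sketch; a SIMD or GPU version
 would do the same thing per channel:

```python
DMAX_CODE = 255   # 8-bit OD codes in steps of 0.01: OD 0.00 .. 2.55

def composite_od8(dst, src):
    """Superimpose a layer: saturating add of OD codes."""
    return [min(d + s, DMAX_CODE) for d, s in zip(dst, src)]

def amplify_od8(dst, src):
    """The hypothetical amplifying layer: saturating subtract toward clear."""
    return [max(d - s, 0) for d, s in zip(dst, src)]

# OD 0.30 + OD 0.40 composites to OD 0.70; dense layers clip at Dmax.
row = composite_od8([30, 200], [40, 100])    # [70, 255]
```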

R Horn

_______________________________________________
Render mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/render