Daniel Rogers ([EMAIL PROTECTED]) wrote:
> [...].  This is 
> specifically because of the overloaded nature of alpha here.  Alpha is 
> being used as transparency but (correctly) is mathematically treated as 
> the coverage.
[...]
> This is why I suggested earlier the separation between transparency and 
> coverage.  Any drawing operation would have to consider whether it is 
> adding transparency or coverage or both at every pixel (a pixel could be 
> partially covered by a transparent effect).

Sorry, but I don't believe this distinction would make sense.

From my point of view, "transparency/opacity" and "coverage" are two
models to explain what happens when talking about alpha. I do know that
the original Porter-Duff paper based its conclusions on the coverage
model; however, the transparency analogy comes closer to what happens
when GIMP builds its projection of the image.
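
To illustrate why the two readings coexist so easily: the usual
premultiplied-alpha "over" operator gives the very same arithmetic
under both models. A minimal sketch (my own, not GIMP's actual
compositing code):

  /* Porter-Duff "over" on premultiplied-alpha pixels.  Whether you
   * read 'a' as "fraction of the pixel covered" or as "opacity of
   * the paint", the formula is the same. */
  typedef struct { float r, g, b, a; } Pixel;  /* premultiplied, in [0,1] */

  static Pixel
  composite_over (Pixel src, Pixel dst)
  {
    float k = 1.0f - src.a;  /* what the source leaves visible */
    Pixel out;

    out.r = src.r + k * dst.r;
    out.g = src.g + k * dst.g;
    out.b = src.b + k * dst.b;
    out.a = src.a + k * dst.a;
    return out;
  }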

For "proper" coverage handling you'd have to store the information
*what* part of the pixel has been covered. Better leave that to a vector
based implementation. The coverage model also fails to model a flat area
of e.g. 50% opacity (a surface with a small hole in each pixel...).

> This would mean that 
> instead of an alpha channel and a layer mask, we should have a coverage 
> channel and a transparency channel.  (giving way to RGBCT colorspaces). 
>  In this sort of situation, the full measurement of the pixel includes 
> all five numbers, and any algorithm that affects pixels would have to take 
> into account all five numbers (just as any operation now must account 
> for all four existing pixel measurement numbers).  Incidentally, alpha, 
> in the way it has been used, would be C*T.
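
For concreteness, the proposed pixel layout as I read it would look
something like this (the names are hypothetical, and I take T to store
the opacity of the paint, since otherwise alpha = C*T would not hold):

  /* Hypothetical RGBCT pixel: C = fraction of the pixel covered,
   * T = opacity of the covering paint (all values in [0,1]). */
  typedef struct { float r, g, b, c, t; } PixelRGBCT;

  /* The conventional alpha collapses the two channels into one:
   * e.g. c = 0.5, t = 0.8  ->  alpha = 0.4 */
  static float
  effective_alpha (PixelRGBCT p)
  {
    return p.c * p.t;
  }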

I fail to see what the win would be in this situation. All algorithms
would have to be revised, and I really doubt that this separation would
make them simpler. Take blur: it is fine to blur the opacity channel,
but blurring the coverage channel does not make sense, because the
result no longer fits the model of partially covered pixels. What
should we do then? And how would we present such unexpected results to
the user?
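
To make the problem concrete: a blur is just a per-channel weighted
average, e.g. the little box blur sketched below (not GIMP code).
Averaging opacities is meaningful; averaging coverage fractions yields
numbers that claim partial geometric coverage without saying *which*
part of the pixel is covered, so the model breaks.

  /* 1-D box blur over one channel.  Fine for an opacity channel; for
   * a coverage channel the averaged fractions lose their geometric
   * meaning (0.5 of *what* part of the pixel?). */
  static void
  box_blur (const float *in, float *out, int n)
  {
    for (int i = 0; i < n; i++)
      {
        float sum = in[i];
        int   cnt = 1;

        if (i > 0)     { sum += in[i - 1]; cnt++; }
        if (i < n - 1) { sum += in[i + 1]; cnt++; }
        out[i] = sum / cnt;
      }
  }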

And what would justify the added memory requirements, more complicated
algorithms, and users staring blankly at the screen trying to figure
out what is going on?

That said, I could see some use for additional channels in the image.
Normal vectors or glossiness would be exciting, especially for
generating textures. It also would be cool to have spot-color channels
in the image, so that e.g. distort plugins would distort the image and
the spot-color information together and you wouldn't have to apply the
same plugin multiple times in exactly the same manner on multiple
drawables. It would be nice if these things were possible.

Bye,
        Simon
-- 
      [EMAIL PROTECTED]       http://www.home.unix-ag.org/simon/