On May 17, Keith Packard wrote:
 > I respectfully disagree.  Sub-pixel coverage areas are very visible along
 > the smooth edges of objects; coverage discrepancies on the order of 1/4 of
 > a pixel are visually jarring, while discrepancies down around 1/10 of a
 > pixel are discernible on close visual inspection.  They appear as 
 > discontinuities in otherwise smooth surfaces.

Actually Keith, I agree with you completely. While muddling through
the implementation I've been quite surprised to see how nasty things
can look when I make minor miscalculations. I wasn't very clear with
what I was trying to get across. Let me try again.

 > The goal of the algorithm is to reproduce alpha values for cumulative 
 > rendering processes to within a few parts of the true alpha value; this 
 > ensures clean edges for figures even when some pixels are rendered by many 
 > trapezoids.
 > 
 > I think we could use some quantitative measurement about how often such 
 > pixels will occur.  I think we can analyse this:
 > 
 >  1   They can only occur at the upper left corner of a trapezoid;
 >      those are the only pixels with more than two alpha values used
 >      in their computation.

Absolutely. Thanks for getting this started.

Keith, the analysis that follows is good, and alone would be good
enough to convince me to stick with the current algorithm. But, I'll
go one better and show that these negative alpha values pose no
problems whatsoever.

We've been getting hung up a bit on negative alpha values. They seem
odd since they are obviously wrong, (alpha, like area, is non-negative
by definition). But actually, the error that causes negative alpha is
no greater than the error in many other alpha calculations that we
accept just fine.

Let's look at the bounds of error in the current algorithm for various
pixels covered by a single trapezoid:

1) Fully covered pixels have no error.

2) Fully vacant pixels have no error.

3) Pixels which have alpha calculations with one term have a maximum
   error of 1/2. These include:

        The pixel containing the lower-right corner.
        Non-corner pixels along the bottom edge.
        Non-corner pixels along the right edge.

4) Pixels which have alpha calculations with two terms have a maximum
   error of 1. These include:

        The pixel containing the lower-left corner.
        The pixel containing the upper-right corner.
        Non-corner pixels along the left edge.
        Non-corner pixels along the top edge.

5) Pixels which have alpha calculations with four terms have a maximum
   error of 2. This is only:

        The pixel containing the upper-left corner.
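
The scaling of these bounds can be illustrated with a toy model, (the
names and the unit quantization step here are my own assumptions, not
the actual Render code): if each term of a pixel's coverage sum is
rounded independently to the nearest unit, an n-term sum can be off by
at most n/2.

```python
# Toy model of per-term rounding error (hypothetical names, not the
# actual implementation): each term in a pixel's alpha sum is rounded
# independently, so the worst-case error grows linearly with the
# number of terms.

def quantize(x, step=1.0):
    """Round x to the nearest multiple of step; error is at most step/2."""
    return round(x / step) * step

def alpha_error_bound(n_terms, step=1.0):
    """Worst-case error of a sum of n independently quantized terms."""
    return n_terms * step / 2.0

# Matches the cases above: one term -> 1/2, two terms -> 1, four -> 2.
```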

So, the upper-left corner pixel is special in that it has more error
than any other pixel in the trapezoid. However, it is not necessarily
the only pixel that can have a negative alpha.

More importantly, however, an alpha value that is negative is no worse
than other non-negative alpha values that can have errors just as
large or larger.

The reason negative alpha values are so disturbing is that the proof
of the "sum to 1" aspect of the algorithm relies on the negative
errors within the negative alpha values canceling out positive errors
within other alpha values. So, truncating these values to zero feels
like the remainder of the pixel might fill too quickly.

But, in fact, the negative alpha values cause no problems:

1) When a negative alpha value is computed it is truncated to 0,
   (which actually reduces the error for this particular alpha
   value[*]).

2) When the remaining regions of the same pixel are computed, they are
   "unaware" of whether there might be a negative alpha value
   somewhere within the pixel. And independent of that, they will have
   some amount of error.

A separate question entirely is whether the error from many different
trapezoids can accumulate and cause problems within a single pixel. Of
course it can, (whether involving negative alphas or not). The most
important case here is when the pixel is fully covered. In that case,
the algorithm was envisioned to have zero error, but with negative
alpha values truncated to zero, the claim is simply that it has
non-negative error, (which is just as good in this case since the
result can be truncated to 1, giving zero error once again).

If the pixel is not fully covered, the only solution to any
accumulated error problems is to increase alpha depth.

-Carl

[*] In fact, truncating negative alpha values to 0 is somewhat
arbitrary. We could just as well give some positive alpha value in
these cases, (e.g. one half of the alpha of the minimum area that could
result in a negative alpha). But this likely wouldn't be worth it.

-- 
Carl Worth                                        
USC Information Sciences Institute                 [EMAIL PROTECTED]
3811 N. Fairfax Dr. #200, Arlington VA 22203              703-812-3725
_______________________________________________
Render mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/render
