On May 16, Keith Packard wrote:
 > No, that wasn't the problem.  The basic problem is that alphas are only
 > an approximation of area and the results of arithmetic operations on
 > approximations are worse approximations.  In particular:
 > 
 > So, now the obvious question is what to do about the problem.
 > 
 > I don't believe this algorithm is "fixable" in any reasonable way; it's a
 > fundamental limitation in computing similar areas in different ways, which
 > is the central idea in this algorithm.

Keith,

Thanks for the analysis. I agree that there's no good way to ensure
the "sum to 1" invariant while also guaranteeing non-negative alpha
with this algorithm.

 > I'm not at all averse to developing a completely different
 > algorithm

I'd be happy to accept, (and implement), one as well. But I'm not
coming up with much yet. The dropout problem with the pixel-division
approach seems unpleasant to me.

 > The other obvious solution is to live with the limitations of the current 
 > algorithm, truncating the negative alpha values to zero.

I've been thinking about this, and it might be the best approach.

Keith just demonstrated that the maximum error from a single trapezoid
alpha computation is 2 alpha units. As the alpha depth gets higher,
these 2 units become less significant.

But, it's also possible that errors from many trapezoid computations
accumulate within a single pixel, (Keith described a worst case in
which a half-covered pixel contributes a full-pixel alpha). I've been
trying to come up with a tessellation that approaches that case. I can
actually do it, but not, for example, with the covered and uncovered
portions of the pixel separated by a simple line. Instead, I can cover
all of the pixel while leaving tiny holes scattered throughout.
Approximating that bizarre coverage with an alpha value of 1 feels
perfectly acceptable to me. Have I missed any more reasonable cases?

On the other side, (truncating errors resulting in negative alphas to
zero), there is a potential practical problem. In the worst case, a
large portion of a pixel could be covered by many small trapezoids but
result in a combined alpha value of zero. It's easy to come up with a
tessellation that exposes this problem: for example, the apex of a
triangle fan used to draw "pie slices". In order to make the
trapezoids small enough to expose problems here, the triangle fan
would have to be made of a very large number of triangles, (which
might become impractical on its own), or the center of the fan would
have to be close to the edge of the pixel, (which would reduce the
pixel coverage and hence, the total error here). So perhaps that makes
this an error case we can stomach?

Can anyone think of other reasonable tessellations that cause
problems?  Keith, have you found noticeable errors in practice by
truncating negative alphas to zero? You did discover this problem
after all -- how did you find it?

Of course, it would be more satisfying to have an algorithm without
problems like this, so keep the algorithmic ideas coming.

-Carl

-- 
Carl Worth                                        
USC Information Sciences Institute                 [EMAIL PROTECTED]
3811 N. Fairfax Dr. #200, Arlington VA 22203              703-812-3725
_______________________________________________
Render mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/render