On May 11, at around 14:00, Jeremy Fitzhardinge wrote:
> Does this allow the framebuffer alpha channel to be used for subsequent
> composition (for example, hardware which blends graphics with an
> underlying video surface)?
Depending on what was in the image, this would work fine; the final alpha
value would represent coverage of that pixel, so you'd even get the
"right" results with further compositing.
> Or would you need two destination alpha channels to support that correctly
> (one for coverage and one for translucency)?
That's another option. Render already permits each Picture to have its
own alpha channel, so the underlying Pixmap/Window can end up with an
arbitrary number of associated alpha channels.
> It seems to me that the blending operator needed to make an anti-aliased
> common edge join correctly will be different from the one needed to do
> translucency
Not usually; Porter & Duff's description of alpha coverage is of a
unification between translucency and partial pixel coverage. At the pixel
level, partial coverage is indistinguishable from translucency. Where this
doesn't quite hold is when you "know" the geometric relationships at the
sub-pixel level and are doing incremental rendering; that's what the
Disjoint and Conjoint operators are for. These operators give us another
solution to the problem, but at the cost of an alpha channel for each
destination.
> The other way of working this is to simply only perform AA on trapezoid
> edges which are not adjoining other traps in the same polygon.
You "can't" do that -- pixels at the corners end up with the wrong
coverage values:
Here are two trapezoids:
   +----+
  /      \
 /   A    \
X----------+
 \     B    \
  +----------+
Now here's a picture of the pixel containing X:
+------------/-----+
|           /      |
|          /  A    |
|         /        |
|        X============
|         \        |
|          \  B    |
|           \      |
+------------\-----+
If you aren't anti-aliasing the edges between A and B, then this pixel
is drawn by only one of A or B. The trapezoids are rendered independently,
so the actual coverage of this pixel can't be computed.
This may seem like a "minor error" which could be ignored, but the effect
is quite visible on the screen, leading to "lumpy" looking edges.
> Does this mean a two-pass composition? Composite the traps together,
> then composite the result with the framebuffer?
Yes, that would be the logical effect. Whether this step was actually
required would have to be determined by examining the trapezoids. I
believe it could be done by clipping trapezoids so they don't overlap,
drawing those portions, and drawing the overlapping portions separately.
In the typical tessellation case, that could mean splitting along a set
of scanlines at the horizontal trapezoid boundaries.
Note that the CPU and the graphics accelerator could work in tandem during
this operation; as the CPU finished with a section of the result, it could
pass the alpha plane to the graphics card and start the compositing
operation while building another set of alpha data. Or, the individual
trapezoids could be passed to the card for it to composite into a
temporary mask and then onto the destination.
Such acceleration optimizations are fodder for future development; the
goal right now is to just make the spec work.
The other option is to go with a more global representation of the
tessellated figure, like the one libart uses, where the figure is
represented by two horizontal lines and *lots* of diagonal/vertical ones.
This would allow us to tessellate figures without needing any artificial
internal breaks, but it would probably require more work to accelerate
in hardware.
Keith Packard        XFree86 Core Team        HP Cambridge Research Lab
_______________________________________________
Render mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/render