Keith Packard <[EMAIL PROTECTED]> writes:
> > The invariant that needs to be preserved is that the alpha values for a
> > transformation of a solid image should be identical to the clip region
> > transformed as a polygon then rendered via the polygon rendering
> > rules.
>
> Please remember that the transformed pixels form a virtual source image and
> don't represent the final rendered data. Rendering is always constrained
> by the mask operand, whether implicit (for polygons), or explicit (for
> Compose and the Glyph operations). If you want AA edges, draw trapezoids.
It's not just AA edges I want, it's sharply AA'ed edges....
> The polygon rules shouldn't apply here -- 'nearest neighbor' will have
> alpha values of either 0 or 1 depending on whether the nearest pixel falls
> within the image or without. Similarly, bi-linear interpolated data
> should average the alpha values of the surrounding pixels. By defining
> the pixels outside of the image as transparent, you average the interior
> alpha values with transparency yielding a nice alpha-blended edge. Any
> other definition makes a lot less sense, and is really expensive to boot.
Neither nearest neighbour nor bilinear interpolation produces good
results for downscaling. A filtered transformation consists
of two parts:
A) Interpolating the source image
B) Resampling the result
If you use a delta function for B) you will get aliasing, no
matter how carefully you do A).
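A tiny 1-D sketch of why step B) matters (hypothetical helpers in Python for brevity; nothing here is Render or gdk-pixbuf code):

```python
# Downscaling a high-frequency 1-D signal by 2x.
# "Point sampling" uses a delta function for step B) and aliases;
# a box filter averages the samples that land under each output pixel.

def downscale_point(signal, factor):
    # Delta-function resampling: keep one sample per output pixel.
    return [signal[i * factor] for i in range(len(signal) // factor)]

def downscale_box(signal, factor):
    # Box-filter resampling: average the samples under each output pixel.
    return [sum(signal[i * factor:(i + 1) * factor]) / factor
            for i in range(len(signal) // factor)]

checkerboard = [0, 1] * 8            # highest-frequency signal possible

print(downscale_point(checkerboard, 2))  # [0, 0, 0, 0, 0, 0, 0, 0] -- detail aliased away
print(downscale_box(checkerboard, 2))    # [0.5, 0.5, ...] -- detail averaged instead
```

However carefully you interpolate in A), point sampling in B) throws the interpolated values between sample points away, which is where the aliasing comes from.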
The expense isn't *that* bad because only the edge pixels need fancy
processing; you get increased complexity in special-casing the edge
pixels, but the bulk of the pixels can be handled in a
straightforward fashion.
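As a toy illustration of that special casing (1-D, solid source, with a made-up scaled_alpha helper -- again, not gdk-pixbuf or Render code):

```python
# Coverage-based downscale of a solid 1-D image: interior output pixels
# are fully covered by the source (alpha stays 1.0); only the pixel
# straddling the source boundary needs the fractional-coverage math.

def scaled_alpha(src_len, scale, out_len):
    # The scaled source occupies [0, src_len * scale) in output space.
    right = src_len * scale
    alphas = []
    for i in range(out_len):
        # Fraction of output pixel [i, i+1) covered by the source.
        covered = max(0.0, min(i + 1.0, right) - i)
        alphas.append(covered)
    return alphas

# 4 source pixels scaled by 3/4 cover [0, 3): boundary falls exactly on
# a pixel edge, so the result has a hard edge, no blending.
print(scaled_alpha(4, 0.75, 4))   # [1.0, 1.0, 1.0, 0.0]

# Scale by 0.6 covers [0, 2.4): exactly one fractional edge pixel.
print(scaled_alpha(4, 0.6, 4))    # [1.0, 1.0, ~0.4, 0.0]
```

The 3/4 case is the 1-D analogue of the 100x100 -> 75x75 square below; the interior pixels never see anything but alpha 1.0.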
[....]
> > Edge conditions are definitely the hard part of the specification; consider
> > a straight scale of a solid rectangle from 100x100 pixels by a ratio
> > of 75/100. The desired result is obvious ... a 75x75 pixel solid square
> > with hard edges.
>
> Both nearest neighbor and bilinear interpolation will yield this result.
>
> > Then get the final values by something like:
>
> > (SOURCE_transformed IN SOURCE_boundary) IN (MASK_transformed IN MASK_boundary) OP DEST
>
> This seems excessively complicated; I don't see the utility of
> anti-aliasing the edges of a nearest-neighbor resampling operation.
I don't think nearest neighbour is very interesting... it certainly
isn't useful for desktop-icon-type situations, and 3D hardware typically
does something fancier these days as well.
I haven't thought much about the case of separate source and mask;
I've mostly considered gdk-pixbuf's scaling operations, so the
above may not be quite right for the general case.
I would certainly *like* it to be as simple as you propose,
and it has the nice feature that you can pad source images with
transparent pixels without changing the result. But I've never
been able to convince myself that would produce satisfactory
results for gdk-pixbuf.
Guess I need some sample code and screenshots....
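Something like this toy version of the expression I quoted above, reduced to single alpha values (all names are placeholders rather than Render API, and I've taken OP to be "over" for concreteness):

```python
# Porter-Duff "IN" multiplies alphas; "over" is the usual source-over
# blend.  The composite mirrors the quoted expression:
#   (SOURCE IN SOURCE_boundary) IN (MASK IN MASK_boundary) OP DEST

def IN(a, b):
    # Porter-Duff IN on plain alpha values.
    return a * b

def over(src, dst):
    # Porter-Duff over on plain alpha values.
    return src + dst * (1.0 - src)

def composite(src_a, src_boundary, mask_a, mask_boundary, dst_a):
    effective = IN(IN(src_a, src_boundary), IN(mask_a, mask_boundary))
    return over(effective, dst_a)

# A fully opaque source clipped by a half-covered boundary pixel lands
# on an empty destination at half strength.
print(composite(1.0, 0.5, 1.0, 1.0, 0.0))  # 0.5
```

The boundary terms are where the edge anti-aliasing would come from: interior pixels have boundary coverage 1.0 and are untouched, so only the edge pixels pay for the extra multiplies.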
What hardware does is also very relevant here; I think the model I'm
proposing above is somewhat similar to transforming a polygon then
applying an alpha-texture to the transformed polygon.
Regards,
Owen
_______________________________________________
Render mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/render