Around 18 o'clock on Sep 12, Owen Taylor wrote:
> It's not just AA edges I want, it's sharply AA'ed edges....
I'm not sure I understand. There are two separate pieces to this puzzle,
the first is how the virtual source pixels are generated and the second is
how those pixels are applied to the destination.
Sampling theory says we use a filter to resample the original source
pixels into the virtual source pixels. Current hardware says you get two
choices for filtering; nearest neighbor and bilinear interpolation.
Neither of those gives decent results when scaling by more than a factor of
2 in either direction (up or down). That's fine; we can specify additional
filters that are based on real signal-processing theory, but there's no
hardware that will accelerate them.
So we have a software implementation of a 2D DSP function. The projective
transformation will make that particular filter quite an adventure;
essentially you have to take pixel locations in the source, transform them
forward to the virtual source and compute filter coefficients from the 2D
filter shape in that space.
The final question about filtering is how to handle the edge pixels. There
are several traditional answers -- reflecting the image across the border,
or filling the exterior with a constant color. Given that we have
transparency, it seems like the most obvious solution is to fill the
missing pixels with transparent values; in our pre-multiplied world, the
results are reasonable, and, also importantly, depend upon the kind of
filter involved.
Use a nearest neighbor filter and you'll be assured that all of the
pixel values in the virtual source come from the actual source; no
interpolated pixel values are used anywhere.
Use a bilinear interpolation, and the edge pixels will be smoothly graded
from the actual source edge pixel value towards transparent, including the
alpha values so that things will blend nicely when applied to the
destination.
I think things will also work right with other filters; I'll have to poke
at the equations a bit to make sure, but it seems to fit intuitively.
> Neither nearest neighbour or bilinear interpolation produces good
> results for downscaling. A filtered transformation consists
> of two parts:
Yeah, they both suck when scaling by more than a factor of two. That's
because when downsampling, you really want to look at more than two pixels
in each dimension. The problem is that hardware generally only supports
these two operations. OpenGL provides mipmapping to "fix" bilinear
interpolation at the expense of a lot of memory. I'm avoiding that
because I think our images are more dynamic than GL textures.
Adding new filters will be supported, and perhaps the sample
implementation should provide something better than bilinear interpolation.
> A) Interpolating the source image
> B) Resampling the result
Yeah, sample up to the LCM and then filter down to the final image. We've
got a slight twist in that the scaling factor changes for each pixel in
each dimension, but the basic idea is the same.
> If you use a delta function for B) you will get aliasing, no
> matter how carefully you do A).
Yeah, using a box filter always sucks. Why is it that hardware never
bothers to provide anything better?
> The expense isn't *that* bad because only edge pixels need fancy
> processing; you get increased complexity in special casing the edge
> pixels, but the bulk of the pixels can be handled in a
> straightforward fashion.
No, edge pixels and central pixels are filtered the same. The only
distinction for edge pixels is how the samples beyond the edge are
synthesized before being applied to the filter. With a larger low-pass
filter, even computing pixels far from the edge will involve filling in
the missing data.
> But I've never been able to convince myself that would produce
> satisfactory results for gdk-pixbuf.
>
> Guess I need some sample code and screenshots....
Did I mention I've got this working? The code looks like:
    /* transform the destination coordinate back into the source */
    v.vector[0] = IntToxFixed (op->x);
    v.vector[1] = IntToxFixed (op->y);
    v.vector[2] = xFixed1;
    if (!PictureTransformPoint (op->transform, &v))
        return 0;
    /* XXX bilinear interpolation only */
    rtot = gtot = btot = atot = 0;
    y = xFixedToInt (v.vector[1]);
    yerr = xFixed1 - xFixedFrac (v.vector[1]);
    while (y <= xFixedToInt (xFixedCeil (v.vector[1])))
    {
        CARD32 lrtot = 0, lgtot = 0, lbtot = 0, latot = 0;

        x = xFixedToInt (v.vector[0]);
        xerr = xFixed1 - xFixedFrac (v.vector[0]);
        while (x <= xFixedToInt (xFixedCeil (v.vector[0])))
        {
            if (POINT_IN_REGION (0, op->clip, x, y, &box))
            {
                /* fetch the source pixel at (x, y) */
                line = op[1].line;
                op[1].offset = (x + op[1].start_x) * op[1].bpp;
                op[1].line = line + y * op[1].stride;
                bits = (*op[1].fetch) (&op[1]);
                op[1].line = line;
                {
                    /* split into a, r, g, b and accumulate each
                     * channel, weighted by the x coefficient */
                    Splita (bits);
                    lrtot += r * xerr;
                    lgtot += g * xerr;
                    lbtot += b * xerr;
                    latot += a * xerr;
                    n++;
                }
            }
            x++;
            xerr = xFixed1 - xerr;
        }
        /* accumulate the row totals, weighted by the y coefficient */
        rtot += (lrtot >> 10) * yerr;
        gtot += (lgtot >> 10) * yerr;
        btot += (lbtot >> 10) * yerr;
        atot += (latot >> 10) * yerr;
        y++;
        yerr = xFixed1 - yerr;
    }
    /* scale back down and clamp */
    if ((atot >>= 22) > 0xff) atot = 0xff;
    if ((rtot >>= 22) > 0xff) rtot = 0xff;
    if ((gtot >>= 22) > 0xff) gtot = 0xff;
    if ((btot >>= 22) > 0xff) btot = 0xff;
Note that this code is "a bit" inefficient :-)
A sample image is at:
http://keithp.com/~keithp/download/transform.png
I think the key is to specify a more correct filter and generate some
sample code based on that, so that we can separate the discussion of
filters from that of edge conditions. The sample above is careful
to avoid resampling the image very far, which makes the results pretty
successful even with simple bilinear interpolation.
> What hardware does is also very relevant here; I think the model I'm
> proposing above is somewhat similar to transforming a polygon then
> applying an alpha-texture to the transformed polygon.
Yes, we need to explore what the hardware does at the edges of the
textures and come up with a spec which has a mode which matches that.
Keith Packard XFree86 Core Team HP Cambridge Research Lab
_______________________________________________
Render mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/render