Keith Packard <[EMAIL PROTECTED]> writes:
[...]
> Some questions:
>
> + How should I specify filters? I'd like to avoid round trips,
> so using atoms seems like a bad idea. Is there any good reason
> to not just use a simple enumeration of the obvious common
> filter types? A query operation would permit apps to find out
> which filters were supported, a default filter would allow
> applications which didn't care to avoid even that round trip.
If you want any support for hardware acceleration, the filter
specification should be intent-based rather than precisely
specified (e.g. none, fast-but-filtered, best-tradeoff, best-looking).
> + What about expose events? Because the source image now forms
> an arbitrary quadrilateral in the destination, missing pieces
> from the source don't form nice clean rectangles in the dest.
>
> I could compute the actual expose region and send the whole mess
> off to the application. That could be a few rectangles, but
> presumably most apps would never do something that stupid.
>
> I could implicitly disable graphics expose events for non-identity
> transformations.
I don't actually see the difference between sending an "exact area"
and a bounding rectangle once you consider subpixels, except that
the app has to draw a bit less.
But to my knowledge, the only valid use for graphics expose events is
scrolling a single drawable. So, I think you could very easily
disable graphics expose events for non-identity transformations.
Why make things hard for yourself?
> Separately, I'm thinking here of changing the existing Render
> semantics to specify that pixels beyond the border of the source
> or mask are transparent. This gives a nice clean semantic for
> the edges of these transformed sources. The current semantics
> call for clipping to the source; clipping to the source is
> equivalent to pretending that the source is transparent for
> the Over operator.
Edge conditions are definitely the hard part of the specification; consider
a straight scale of a solid rectangle from 100x100 pixels by a ratio
of 75/100. The desired result is obvious ... a 75x75 pixel solid square
with hard edges.
That's not what you'll get with the model of an infinite source with
out-of-bounds areas transparent and a filtered transformation; you'll
get fuzzy edges.
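A toy 1-D illustration of why (this sketch is mine, not from the thread):
with the infinite-transparent-outside model, a 2-tap box filter sampled
near the boundary of a solid source averages in transparent exterior
pixels, so the edge alpha drops below 1:

```python
def box_filtered_alpha(x, source_width):
    """Alpha after a 2-tap box filter over an infinite plane that is
    opaque (alpha = 1) inside [0, source_width) and transparent outside."""
    taps = [x - 0.5, x + 0.5]                     # 2-tap box filter
    inside = [1.0 if 0 <= t < source_width else 0.0 for t in taps]
    return sum(inside) / len(taps)

# Interior samples stay fully opaque, but samples straddling the edge
# get averaged with the transparent exterior:
print(box_filtered_alpha(50.0, 100))   # 1.0  (interior: hard)
print(box_filtered_alpha(99.8, 100))   # 0.5  (edge: fuzzy)
```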
The invariant that needs to be preserved is that the alpha values for a
transformation of a solid image should be identical to the clip region
transformed as a polygon then rendered via the polygon rendering
rules.
I think based on that, the right rule is that you get the transformed
source values by doing a partial convolution with the filter matrix
(I don't know the technical term), where

    sum_ij (A[ij] S(xi,yj)) / sum_ij (A[ij])

becomes

    sum_ij (A[ij] P(xi,yj) S(xi,yj)) / sum_ij (A[ij] P(xi,yj))

with

    P(xi,yj) = 1 if (xi,yj) is in the source
               0 otherwise.
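A direct transcription of that rule (my sketch; A is the filter weight
matrix, S the source, P the in-bounds indicator):

```python
def partial_convolution(A, S, P):
    """Filtered sample sum_ij(A[ij]*P(i,j)*S(i,j)) / sum_ij(A[ij]*P(i,j)).
    A: dict mapping tap (i, j) -> filter weight
    S: dict mapping tap (i, j) -> source value (defined only in-bounds)
    P: predicate, True when tap (i, j) lies inside the source.
    Renormalizing over only the in-bounds taps keeps a solid source
    solid right up to its edge."""
    num = sum(a * S[t] for t, a in A.items() if P(*t))
    den = sum(a for t, a in A.items() if P(*t))
    return num / den if den else 0.0   # all taps outside: transparent

# 2x2 bilinear-style kernel straddling the right edge of a solid source:
A = {(0, 0): 0.25, (1, 0): 0.25, (0, 1): 0.25, (1, 1): 0.25}
S = {(0, 0): 1.0, (0, 1): 1.0}             # taps with i == 1 fall outside
inside = lambda i, j: i == 0
print(partial_convolution(A, S, inside))   # 1.0, not 0.5: the edge stays hard
```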
Then get the final values by something like:

    (SOURCE_transformed IN SOURCE_boundary) IN (MASK_transformed IN MASK_boundary) OP DEST

(where SOURCE_boundary is the polygon transformation of the source region
as above).
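Sketched on scalar alpha channels (my notation, not from the thread;
Render's IN multiplies by the mask alpha, and OVER here stands in for OP
as the usual premultiplied operator):

```python
def IN(alpha_a, alpha_b):
    """Render IN on alpha channels: pointwise product."""
    return alpha_a * alpha_b

def OVER(src_premul, src_alpha, dst):
    """Premultiplied OVER: dst' = src + (1 - src_alpha) * dst."""
    return src_premul + (1.0 - src_alpha) * dst

# A boundary pixel half covered by the transformed source polygon:
# solid source (alpha 1), boundary polygon coverage 0.5, no mask.
src_a = IN(1.0, 0.5)                  # SOURCE_transformed IN SOURCE_boundary
eff_a = IN(src_a, IN(1.0, 1.0))       # ... IN (MASK_transformed IN MASK_boundary)
print(OVER(eff_a * 1.0, eff_a, 0.0))  # 0.5: alpha matches polygon coverage
```

which is exactly the invariant above: the result alpha equals what the
polygon rendering rules would produce for the transformed boundary.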
I'm sure there must be existing practice in this area that can be adopted.
> + Existing accelerated drivers will all need to check for
> the presence of a transformation operator in the source
> or mask pictures and fall back to software rendering until
> acceleration is added. Can I do this easily in XAA?
Isn't this trivial? I think you just return FALSE out of
SetupForCPUToScreenAlphaTexture
(But you might want to add an extra flag for the capability, to avoid
breaking the ABI for existing drivers.)
Regards,
Owen
_______________________________________________
Render mailing list
[EMAIL PROTECTED]
http://XFree86.Org/mailman/listinfo/render