This is kind of a long post.  You may want to skim or skip it if
you aren't into LibXMI development or 2D graphics.

        The pixel pipe portion of LibXMI seems to be up and running fairly
well now.  Texture mapping (and tiling) is working, as well as the alpha
blending that people have been wanting for a while now in GGI.  It ain't
too fast just yet (unaccelerated), but it works properly.  I've got types
made up for every kind of blending/testing operation I can think of:

* ROP256
* Colorkey (transparency)
* Texture filter (pixel multiply, bilinear, anisotropic, etc.)
* Alpha blend
* Alpha test
* Stencil mask
* Z-buffer update/test/mask passthru (Here you go Christoph)
* Multitexture blending with an arbitrary number of stages

        Not all of these are implemented yet |->.  But I think I got
everything important in the world of 2D (except more powerful clipping,
see below) accounted for.  If anyone knows of any good 2D drawing
abstractions/blendmodes/buffer update/check/whatever in any other 2D
graphics APIs which I've neglected to snarf up for use in XMI, please let
me know.  I want the base namespace for the blend system well fleshed out,
so that target implementors have a good number of acceleration hooks to
implement.  There's no reason we can't cover pretty much everything out
there.  Suggestions on which blend ops I should tackle next in the pixel
stubs are welcome - I've got no preference now that I've implemented those
blend types which I was being paid to implement, so I might as well work
on what would be useful to the most people.

        Using the blend stage objects is easy - you simply allocate and
set up some simple structs in an array, and then you hook that array into
your miGC struct, just as you do with the array of miPixels for the
styles.  Then when you go to use your miGC to draw something, the target
environment can easily parse the list of blend stages and accelerate them,
singly or in combination for e.g. multitexturing.  See the blend.c demo
for examples.  Caching of the target-dependent "compiled" rendering pipe
info on the target side will be useful here at some point.
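        To make the "array of stages hooked into the miGC" idea concrete,
here is a minimal sketch.  None of these type or field names come from the
real LibXMI headers (see blend.c for the actual API); they are stand-ins
that only illustrate how a target might walk the stage list:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical stage kinds -- stand-ins for the real blend type enum. */
typedef enum {
    MI_BLEND_ROP256,
    MI_BLEND_COLORKEY,
    MI_BLEND_ALPHA,
    MI_BLEND_ALPHATEST
} miBlendOp;

/* One stage in the pipe: an operation plus an op-specific parameter
   (e.g. the key color for colorkeying, the constant alpha for blending). */
typedef struct {
    miBlendOp op;
    unsigned int arg;
} miBlendStage;

/* Stand-in for the real miGC: the stage array is hooked in just like
   the array of miPixels for the styles. */
typedef struct {
    miBlendStage *stages;
    int nstages;
} miFakeGC;

/* A target implementation can walk the list and decide which stages
   it can accelerate, singly or in combination (e.g. multitexturing). */
static int count_accelerable(const miFakeGC *gc)
{
    int n = 0;
    for (int i = 0; i < gc->nstages; i++)
        if (gc->stages[i].op == MI_BLEND_ALPHA ||
            gc->stages[i].op == MI_BLEND_COLORKEY)
            n++;
    return n;
}
```

A real target would replace count_accelerable() with its hook dispatch and
cache the resulting "compiled" pipe, as mentioned above.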

        This abstraction scheme will allow for very fine-grained
optimization of the acceleration of 2D pixel blending in the targets, but
the back-end "intraface" I'll need to export to create a "blend" target
set will be a strange thing for sure.  Ideas are welcome here.  I think
I'm going to use Glide as my reference target implementation for
acceleration of blending in LibXMI.  I'd like to use OpenGL, but we don't
have an OpenGL target system for LibGGI just yet so that seems a bit
premature.  Glide is simpler and easier to work with, and it has a nice
LibGGI target to build on.

        The miPixmap objects are supposed to be "fast visuals" - opaque
references to pixel-typed rectangular regions bound to a ggi_visual and/or
a ggi_directbuffer which are much more lightweight than a ggi_visual
struct and thus lend themselves well to such applications as image
caching, sprites/pointers, typed buffers and large tree/graph data
structures.  Most of the ideas below were prototyped by Thomas Mittelstadt
in a non-XMI context, in particular the staged performance fallback stuff.  

        Copying of data with source and/or destination pixmaps is (well,
will be) automatically accelerated through a staged-fallback process:

* If the two pixmaps have different GT_* types, we ignore any DirectBuffers,
ggiCrossBlit() between the visuals, and pray.

* If we have compatible GT_ types but no DirectBuffers, we can use
ggi[Get,Put]Box() to do a quicker copy with no colorspace conversion.  
The retrieved box data makes a nice cache object, too.  You are tied to
using the same visual to ggiPutBox() that you used to ggiGetBox() for any
particular box, which may or may not be a problem.

* If we have DirectBuffers available, we can accelerate per-pixel
operations much more efficiently than with ggi[Get,Put]Pixel(), and we
have the quite useful ability to store differently-typed pixmaps in the
same visual, irrespective of the visual's type.  We have to do our own
crossblitting if we play these types of games, but that is OK since some
hardware can accelerate surface-surface blits with colorspace conversion.
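        The three-tier fallback above boils down to a small decision.
Here is a sketch of just that decision; the fake_pixmap fields are
invented for illustration (the real miPixmap is opaque, and the real code
would query the graphtype and DirectBuffers via LibGGI), but the tier
ordering follows the list above:

```c
#include <assert.h>
#include <stdbool.h>

/* Which copy path the staged fallback would choose. */
typedef enum {
    COPY_CROSSBLIT,     /* ggiCrossBlit() between visuals, and pray  */
    COPY_GETPUTBOX,     /* ggi[Get,Put]Box(), no colorspace convert  */
    COPY_DIRECTBUFFER   /* touch the pixels directly -- fastest path */
} copy_path;

/* Hypothetical stand-in for an miPixmap; not the real struct. */
typedef struct {
    unsigned int gt;    /* stand-in for the visual's GT_* graphtype */
    bool has_db;        /* is a DirectBuffer available?             */
} fake_pixmap;

static copy_path
pick_copy_path(const fake_pixmap *src, const fake_pixmap *dst)
{
    if (src->gt != dst->gt)             /* tier 1: incompatible types */
        return COPY_CROSSBLIT;
    if (!src->has_db || !dst->has_db)   /* tier 2: no direct access   */
        return COPY_GETPUTBOX;
    return COPY_DIRECTBUFFER;           /* tier 3: both mapped        */
}
```

The point of keeping the decision in one place is that a target can
override individual tiers (e.g. hardware surface-surface blits with
colorspace conversion) without disturbing the fallbacks below it.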

        This is going to be the first time(?) that we've had an extension
implement an accelerated target which will make substantial use of the
ggi_resource system in order to interact with the underlying functionality
of the base LibGGI target.  Proper resource locking (neglected by many,
including myself, all over the GGI code) suddenly becomes vital.  This
should be interesting... it will be a good thing, though, for the exercise
it gives to those corners of the LibGGI API which often don't get out in
the sun and play enough |->.

        I'm considering adding the following abstractions (back) into
LibXMI: Cursor handling and multiple cliprects.  These abstractions are
present in X but were absent in the original LibXMI codebase because XMI
was supposed to be "just a drawing library".  Well, I think there's a case
for bringing them back now.  The cursor support is kind of a wart, since
we all want a good general-purpose cursor/sprite API in the long term.  
In the short term however, it would be awfully nice to have something that
worked.  Also, if the goal of XMI is truly to be as compatible with the X
API as reasonably possible, the cursor stuff would bring us closer to that
goal (Ideally, the X target implementation would be a direct functional
passthrough).  And at least the cursor pixmap will be far more flexibly
typed and accelerated than the original X cursor drek.  The GUI people
(OpenAmulet?) can sure use cursor support too.  I think it's a pretty wart
|->.  Multiple cliprects are also supported under X, and since there are a
variety of ways to accelerate them depending on the target environment, I
want this one back too.

        And finally, a tough architectural question: should we hide the
miPaintedSet type inside the stubs code, instead of exposing it and passing
miPaintedSets around like footballs at the application level as we do now?
An miPaintedSet is a collection of span sets, and was nominally intended
to be a convenient cache object for prerendered spans in X.  You
prerendered a bunch of spans, cached them offscreen in video memory or in
system mem if necessary, and quickly rendered them on the server side in
whatever video mode the server was in.  

        AFAIK we mostly handle it the same way X does.  However, since we
became a LibGGI extension, span-based rasterization is now nothing more
than a convenient internal rendering technique for the XMI stubs library.
If we use a target which directly accelerates whole triangles or polygons,
we won't need to deal with the individual spans anymore.  This would be
nice, because almost every XMI drawing function now takes an miPaintedSet
pointer as an argument.  This would also make possible the removal of the
miCopyPaintedSetToVisual() function.
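        For those who haven't hit this yet, here is the shape of the
change in miniature.  Everything below is a hypothetical mock (tiny
stand-in types, not the real LibXMI API); it only shows the ownership
shift -- the span set moving from the application into the stub:

```c
#include <assert.h>

typedef struct { int nspans; } fakePaintedSet;   /* stand-in span set */
typedef struct { int pixels_drawn; } fakeVisual; /* stand-in target   */

/* Today (roughly): the application owns the span set, passes it to
   every drawing call, then copies it out itself. */
static void draw_into_set(fakePaintedSet *ps)
{
    ps->nspans = 3;   /* pretend we rasterized three spans */
}

static void copy_set_to_visual(const fakePaintedSet *ps, fakeVisual *v)
{
    v->pixels_drawn += ps->nspans;
}

/* Proposed: the stub owns the span set internally, so the application
   never sees it.  A target that accelerates whole triangles/polygons
   could skip the span stage entirely inside this function. */
static void draw_direct(fakeVisual *v)
{
    fakePaintedSet ps = { 0 };   /* internal, never exposed */
    draw_into_set(&ps);
    copy_set_to_visual(&ps, v);
}
```

Under this scheme the equivalent of miCopyPaintedSetToVisual() disappears
from the public API, which is exactly the rewrite cost described above.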

        However, this is a fairly radical departure from the X roots XMI
comes from.  I think it is a worthy change, but it will require every XMI
function call to be rewritten to remove the old miPaintedSet references
and other associated applications code.  I'd like to get the pain over
with soon, while there's not yet too much code which depends on XMI.  Any
objections from those using XMI?  I won't be doing this tomorrow or
anytime soon, but be prepared for this if you don't speak up.

******

        Wow, that was much more than I had intended to write.  I never
would have thought 2D graphics could be so complex and interesting to code,
but it really is.  Not as much as 3D, but right now that is a good thing
|->.  If you can, please download and test LibXMI and let me know if you
find any bugs.  I know that there are some stubborn segfaults lurking in
edge cases, some of which I suspect were present in the old X consortium
code itself.  If you can manage to make one of the segfaults reappear
reliably, that would be very cool.  In any case, any and all feedback is
appreciated.

Jon

---
'Cloning and the reprogramming of DNA is the first serious step in 
becoming one with God.'
        - Scientist G. Richard Seed

