Hello all,

KGI is progressing very nicely, and the ggi target for kgi is getting very
usable (quake-ggi is a lot of fun). So naturally I started looking at all
the new ggi extensions for getting the most out of the hardware, such as
libgalloc, libbuf and libblt. Unfortunately, IMHO, the interface as it
stands right now is simply too complicated to implement for KGI.

The greatest issue, it seems to me, is how much libgalloc is exposed to.
Suppose you want to use overlays. You'd use libovl to do that. Now libovl
will go and negotiate that with libgalloc. This means that libgalloc needs
to understand overlays just as well as libovl does. So libgalloc becomes
very complicated, since it needs to understand everything about the
graphics card. This is simply unnecessary: libovl should be the only
library that understands overlays, and it should be libovl that does the
resource checking. What if you have more than one interface for using
overlays? Well, they can both be capable of checking the overlay resource;
as wasteful as that may seem, I seriously doubt we'll ever have two
different interfaces for accessing overlays anyway (this is just a
practical note).

So what is used commonly enough that we need a central resource manager
for it? Video RAM. In my opinion, libgalloc should only manage video RAM,
along with transferring data between video RAM and system RAM (i.e.
virtualizing the framebuffer). At one point I had hopes of virtualizing
this directly in kgi, but there are too many obstacles to overcome and the
benefits are not that obvious. Right now, libgalloc seems like the best
place for this.
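
In other words, the whole public face of libgalloc could be about this
small (the names below are made up to illustrate the scope, they are not
actual libgalloc calls):

  /* hypothetical sketch of a video-RAM-only libgalloc */
  typedef struct galloc_block galloc_block;  /* opaque handle to a VRAM region */

  galloc_block *gallocAlloc(ggi_visual_t vis, size_t size);      /* reserve VRAM */
  void          gallocFree(ggi_visual_t vis, galloc_block *blk); /* release it */

  /* move data between video RAM and system RAM, so a block can be
     spilled out and restored later (the "virtual framebuffer" part) */
  int gallocUpload(ggi_visual_t vis, galloc_block *blk,
                   const void *sysram, size_t len);
  int gallocDownload(ggi_visual_t vis, galloc_block *blk,
                     void *sysram, size_t len);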

The next issue is libbuf. Ancillary buffers by themselves are perfectly
useless. They are only useful if you have an interface to draw to them.
Hence there is no point in exposing this interface; extensions that need
buffers can get their own buffers as they need them. Yes, there should be
some sharing, but the point is that this sharing should be *target
specific* and hence not exposed at all (it would be done at the libgalloc
level).
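
To illustrate, an extension that needs a z-buffer would just ask its
target-side code for one, and whatever sharing happens stays hidden behind
that call (names made up again, reusing the galloc_block handle from the
sketch above):

  /* inside a 3d extension's target code -- hypothetical */
  static galloc_block *zbuf;

  int ext3dSetup(ggi_visual_t vis, int width, int height)
  {
      /* the target decides whether to hand out a fresh buffer or to
         share an existing one; the extension never sees that policy */
      zbuf = targetGetZBuffer(vis, width, height);
      return (zbuf != NULL) ? 0 : -1;
  }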

Well, there is an interface, batchops: you can get a z-buffer and then the
next rectangle will write to the z-buffer as well. But this is not really
very useful. First of all, if you get a z-buffer, in the vast majority of
cases that's because you want to do 3d rendering, so the fact that
rectangles are now z-buffered is of little consequence. And there is of
course the issue that no GPU can actually use any of the ancillary buffers
for 2d primitives, so this would require adding knowledge of the whole 3d
pipeline to ggi targets just so that rectangles can be drawn z-buffered (a
software implementation is not really reasonable, as that would require a
complete flush of the 3d pipeline to write to the z-buffer consistently).
But is it really such a big deal if rectangles don't write to the
z-buffer? I for one think it is perfectly consistent if the standard
libggi primitives only draw to the color buffer. Maybe extensions could
use batchops to do their own rendering? The thing is, I would expect
extensions to have their own, much less general, way of dealing with the
issue of too many different drawing operations. For example, forcing
ggiMesa to use batchops would certainly not make the implementation any
simpler, as Mesa already has all the functionality needed to deal with all
the different vertex formats.

There's also another issue with batchops (this one might be a bit
flammable, so don't take it too seriously): in the war of batch processing
vs. immediate mode, immediate processing is the clear winner. PEX is dead.
DirectX got rid of retained mode. OpenGL's display lists are now only very
rarely used. Experience seems to show that immediate processing is the way
to go.
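
Just to make the contrast concrete (the batch* names below are made up;
ggiDrawBox is the real libggi call):

  void drawExample(ggi_visual_t vis, void *zbuf, int x, int y, int w, int h)
  {
      /* batched style: attach state once, then every later primitive
         is implicitly affected by it */
      batchAttachZBuffer(vis, zbuf);
      batchDrawBox(vis, x, y, w, h);  /* now z-buffered as well */

      /* immediate style: ggiDrawBox() stays color-only, and any
         z-buffered rendering is left to the 3d extension (Mesa) */
      ggiDrawBox(vis, x, y, w, h);
  }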

The common theme here is that I don't think it is necessary to create
complete, perfect generic interfaces. There are very good, specific
interfaces out there. For example, there is no need to expose z-buffer
acquisition, because the only 3d interface we should aim at implementing
is OpenGL.

So what would I like to see? In terms of user interfaces, we have a nice
base in libggi. We need a good 2d interface and a good 3d one. ggiMesa
fulfils the second requirement completely. As for the 2d interface,
we could either resurrect libggi2d or look at things like OpenML.
Finally, the only resource management would be the video RAM management
provided by libgalloc.

By now, I've probably insulted a bunch of people and started a flame war.
The point I'm trying to make isn't that the current design is bad; it's
just a bit too generic, and from a purely practical perspective I don't
think it is possible to fully implement it. What I'm proposing is a
simplification that would make the interface implementable (by which I
mean I would go and implement it for KGI) while not sacrificing too much
flexibility.

-Filip

