On Sat, 7 Dec 2002, Filip Spacek wrote:

> The greatest issue seems to me how much libgalloc exposes to the exterior.
> Suppose you want to use overlays. You'd use libovl to do that. Now libovl
> will go and negotiate that with libgalloc. This means that libgalloc needs
> to understand overlays as well as libovl. So libgalloc becomes very
> complicated, since it needs to understand everything about the graphics
> card. This is simply unnecessary, i.e. libovl should be the only one that
> understands overlays; it should be libovl that does the resource checking.
> What if you have more than one interface for using overlays? Well, they
> can both be capable of checking the overlay resource, as wasteful as it
> may seem. I seriously doubt we'll ever have two different interfaces for
> accessing overlays (this is just a practical note).

It's actually quite funny that you chose this as your example. I was talking
to neiljp of fresco, and he may have a proposal for exactly that instance --
an extension for GGI targets that are window-system based, to allow access to
things like setting the title bar, resizing, and... pointers. If we did this,
we would have both LibOvl and this other extension potentially contending for
any available overlays. I expect there will be MANY extensions using the same
types of resources, because there are likely to be many "simplified"
extensions or extensions mimicking other APIs.

> So what is so commonly used for us to need some central resource manager?
> Video RAM. In my opinion, libgalloc should only manage video RAM along
> with transferring between video RAM and system RAM (i.e. virtualize the
> framebuffer). At one point I had hopes of virtualizing this directly in
> kgi, but there are too many obstacles to overcome and the benefits are not
> that obvious. Right now, libgalloc seems like the best place for this.

LibGAlloc doesn't just allocate VRAM or other forms of RAM, it formalizes the
capabilities that RAM has. For example, say you had an application that
wanted to use an ALPHA pixel channel capable of doing "top/bottom" style
blending, but your graphics card only supports "over/under" style blending.
LibGAlloc would be able to tell you this was the case.

> The next issue is libbuf. Ancillary buffers by themselves are perfectly
> useless. They are only useful if you have an interface to draw to them.

Umm... LibBuf provides such an interface, so what's the point here?

> Well, there is an interface, batchops. So you can get a z-buffer and then
> the next rectangle will write to the z-buffer as well. But this is not
> really something very useful. First of all, if you get a z-buffer, that's
> in the vast majority of cases because you want to draw 3d operations, so
> the fact that rectangles are now z-buffered is of little consequence. And
> there is of course the issue that no GPU can actually use any of the
> ancillary buffers for 2d primitives, so this would require adding the
> whole of the 3d pipeline knowledge to ggi targets just so that rectangles
> can be drawn z-buffered.

There are numerous misconceptions here. First off, the most useful primitives
in LibBuf are putbox and crossblit, not rectangles. Both of these operations
are 2D, and most GPUs can tackle them in hardware without the 3D engine (a
rough sketch of the kind of 2D path I mean is included below). Batchops are
in fact not even used yet, and aren't used at all in LibBuf -- that's LibBlt.
LibBuf is mainly for use by 2D games and appliances using spiffy-looking menu
systems.

> mode. OpenGL's display lists are now only very rarely used. Experience
> seems to show that immediate processing is the way to go.

For 3D games, sure. I wouldn't expect LibBuf to be used in 3D games.

> By now, I've probably insulted a bunch of people and started a flame war.

No, not at all. You just haven't managed to change my mind, that's all :-)

> The point I'm trying to make isn't that the current design is bad, it's
> just a bit too generic, and from a purely practical perspective I don't
> think it is possible to fully implement it. What I'm proposing is a
> simplification that would make the interface implementable (by which I
> mean I would go and implement it for KGI) while not sacrificing too much
> flexibility.

Implement what you think will be a good interface for KGI to provide. I'll be
happy to implement support for it in LibGAlloc.
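
For concreteness, here is the rough shape of the 2D path I mean, written as
plain LibGGI calls rather than anything LibBuf-specific. Treat it as a sketch
only: the sizes and the off-screen memory visual are just what I'd reach for
first to illustrate putbox (upload pixels from system RAM) and crossblit
(copy/convert between visuals), which is the kind of work most 2D engines can
accelerate.

  /* Sketch: a simple putbox + crossblit path using plain LibGGI.
   * Build roughly as: gcc -o sketch sketch.c -lggi
   */
  #include <ggi/ggi.h>

  int main(void)
  {
          ggi_visual_t screen, back;
          unsigned char tile[32 * 32 * 4];   /* pixel data to upload;
                                                contents don't matter here */

          if (ggiInit() != 0)
                  return 1;

          screen = ggiOpen(NULL);                   /* default target */
          back   = ggiOpen("display-memory", NULL); /* off-screen scratch */
          if (!screen || !back)
                  return 1;

          ggiSetSimpleMode(screen, 640, 480, GGI_AUTO, GT_AUTO);
          ggiSetSimpleMode(back,   640, 480, GGI_AUTO, GT_AUTO);

          /* putbox: push a block of pixels from system RAM into a visual */
          ggiPutBox(back, 0, 0, 32, 32, tile);

          /* crossblit: copy between visuals, converting pixel format on
           * the way -- still a purely 2D operation */
          ggiCrossBlit(back, 0, 0, 640, 480, screen, 0, 0);

          ggiFlush(screen);

          ggiClose(back);
          ggiClose(screen);
          ggiExit();
          return 0;
  }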

Alpha and Z buffer/channel negotiation from KGI would be a great motivator
for me.

-- Brian
