On Tue, 26 Oct 1999, Jon M. Taylor wrote:
> Remember, the way to eat an elephant is one bite at a time |->.
> That is why I do not think that we should jump into designing a
> sophisticated memory management subsystem in KGIcon right now. We should
> take it nice and easy until we have a few drivers which can do basic
> textured triangle drawing to play with. Just MHO of course.
There is definitely something to be said for keeping things
simple. At the same time though, it helps you not to go off
on a tangent if you have an overview of the territory ahead. So,
to put everyone on the same page: I discussed this with Andy
several months back, and we went back and forth about how complex
certain parts of the management system should be. What we mostly
agreed on was the following:
1) Most of the memory management would take place in userspace
LibGGI, in a chipset sub-library that had prior knowledge about
a certain chipset, family of chipsets, or group of chipsets that
have the same general model. There may be certain information about
the card's capabilities, beyond what is already available in KGI,
passed from the kernel driver, but only to the extent that it
reduces the number of sublibs necessary without making them
more complex than they need to be or adding cruft to the KGI driver.
2) There must be a way to select what kind of textures/sprites/etc
you want when they get in each other's way, and this must take a
form which is comfortable to work with in C programs. A very
simplified example: if you have to trade N textures in order to get
M sprites, with a maximum of P sprites and Q textures, then it
should be relatively easy to write a piece of code that gets the
best combination for your program regardless of what the values
N, M, P, Q are (and without knowing N, M, P, Q, because there will
actually be many more than four variables in the equation).
3) The number and properties of the available textures/sprites/etc
can also be affected by the mode selected for the main framebuffer.
This means that the system has to work before SetMode and in
conjunction with CheckMode, but that it also might allow you to
rearrange the textures/sprites/etc after SetMode without changing
the main mode.
4) Instead of limiting the "management" to the video memory,
all chipset "features" should be dealt with in the same system,
so we don't have to go back and write a separate library for
features that don't use video memory. For example, if a card
implements a pass-through window for an image from a TV tuner
card in some modes, even though it requires no video memory to
do so, that feature should be negotiated along with the
textures/sprites/etc. This is mainly because the availability
or properties of such features can be closely tied to the mode
or to the other available features, and integrating a separate
system to deal with them later would be very complex.
5) Actually implementing a texture-cache system would be
doing too much, since it would be impossible to make a cache
system that met everyone's needs. But the basic primitives
(moving texture data to/from main memory) might be something
to consider implementing. It would also be advisable to leave
room for more advanced, but still "primitive", memory management
functions, e.g. putting something in unswappable memory, or
flagging something to be managed by a hardware texture cache
manager.
I don't know where Andy, Steffen and Marcus have taken things,
but it seems to me the two places that need action are:
1) Refining a "comprehensive feature negotiation API", which
basically consists of a linked list of "feature" structures which
allow specification of a wide array of features and details about
them and a SetMode/CheckMode equivalent that returns enough
information about what went wrong to allow a developer to
get the most out of the chipset. Basically you submit
a laundry list, sorted in the order of importance, to the
LibGGI feature extension that looks like this:
A) I want a mouse pointer sprite
B) I want 640x480x16, 2 frames, on the main framebuffer
C) I want five 256x256x8 textures
Suppose the mouse pointer didn't work in 16bpp. The sublib takes
each feature in order and does a best effort on them, returning:
A) There's a 64x64x2 mouse pointer available
B) The best I can do is 640x480x8, 2 frames
C) There are twenty 256x256x8 textures available.
But if you were to submit:
A) I want 640x480x16, 2 frames, on the main framebuffer
B) I want a mouse pointer sprite
C) I want five 256x256x8 textures
You'd get:
A) You can get 640x480x16, 2 frames.
B) Sorry, no mouse pointer is available
C) You can have four 256x256x8 textures.
Since there is no way to avoid this process being tedious,
designs that make the API easy to use are helpful. For example,
a flag that says whether a requested item had to be modified
would spare the application developer from comparing each detail
of the requested feature against the list that was returned. A
separate library of convenience functions for commonly wanted
configurations would also be good.
To my knowledge, no solid, detailed proposal for the general
form of the negotiation system has yet been put forward.
2) A review of different types of chipsets to group them
into families that can be managed by the same sublibs.
For example, all the old SVGA chipsets with a single mouse
sprite, a hardware cursor in textmode, and a BLT engine that
can rip an arbitrarily sized 1, 2, 4, or 8 bit texture from
any location in the video RAM (but no 3D engine) could
probably be handled by the same generic-features-kgi-svga2dblt
extension sublib and have a standardized user/kernel
metalanguage, without introducing any cruft in the KGI drivers
for those chipsets.
--
Brian S. Julin