Hi Alex,
> Thanks for that very useful information (can you copy and paste that
> into freesci.sgml by any chance?? :) ).

That might be a good idea. I'll try to remember to do that later, when I
get a chance.

> When I say object-based driver, I mean vector-based. Are there no plans
> to allow this for something like DirectX that is designed for it? Are
> you aware of other graphics libraries for other platforms that are
> vector based?

OpenGL can be made to work this way. Essentially, such an "object-based"
driver would (probably) have to hook up to our widget layer directly,
bypassing the gfx_operations layer. This means that it would be
inherently more complex; questions also arise as to how partial screen
updates (e.g. for animations) could be done. I'm not sure if there is a
uniform way to do this, except for rasterizing the data, of course...

> > No, drivers are supposed to provide native line-drawing functions. That
> > is, the driver takes a line and draws it to whichever interpretation the
> > underlying hardware has of a "line". X11, for example, provides
> > accelerated line drawing operations, but still produces a line.
>
> So does that mean I have to write my own line drawing function if
> DirectX doesn't provide one for drawing on pixmaps? Or can I use the
> functions in gfx_support.c without a problem?

My statement above was too strong, you're right. Drivers may provide
native line-drawing functions, but they may also fall back to the pixmap
operations provided by gfx_support.c.

> In regards to getting DirectX to read FreeSCI pixmap data, can you
> please give me the exact details of what form the data comes in? I have
> two options for creating a DirectX texture from FreeSCI pixmap data:
[...]
> If that's no good I can describe the image format to DirectX, copy the
> image data over, and hope it works. The info I need for that is the
> pixel format for the image data. That is, 24-bit RGB, or 32-bit with
> alpha channel, etc...

Heh...
actually, that works the other way 'round, too ;-) When creating your
driver's gfx_mode_t object (drv->mode), you may pass values for these
whichever way you want (the idea is that FreeSCI's graphics subsystem
adapts to the target device, which may be 8 bpp palettized or any n-byte
RGB (n from [1..4])). If DirectX doesn't tell you a preferred screen
depth/color layout scheme, just use the best you can get (i.e. RGBA
using 8 bpp for every channel), and set the masks and shift values
accordingly. In that case, whichever channel ends up in the most
significant byte should get shift value zero and mask 0xff000000, and
shift value 24 and mask 0x000000ff go to the channel in the least
significant byte; the other channels fall in between accordingly. In
order to minimize translation overhead, it would, of course, be
preferable to set these to whatever the underlying graphics card can eat
natively.

It seems to me that DirectX tries to abstract from many of the details
our graphics system already takes care of... I guess looking into an
object-based layer might be worthwhile (also for an OpenGL port, though
there doesn't appear to be anyone interested in doing that ATM). Most of
the work for that would probably be in figuring out which parts of
kgraphics.c and kmenu.c call the gfx_operations layer directly, and
separating the gfx_ops layer accordingly (or putting another layer on
top, or something). Other than that, our factory functions for widgets
would have to be adapted to accommodate these "high-level" graphics
drivers. An interesting challenge, actually; I wish I had the time to
think more about it...

llap,
Christoph
