I had a thought... Might be a random brain fart, or it might just use far too much logic, or whatever...
Most interfaces I've seen that use DMA work the same way: you allocate a block of memory, add commands to it, wait for the card to go idle, and then tell the card to go. With hard drives you do pretty much the same, except that modern ones (OK, anything since the early 90s, perhaps, in the SCSI world) can queue commands. So why not the graphics card as well? Instead of telling the card about a block of commands to execute straight away, why not simply tell it to enqueue a block of commands for execution? The card then saves the address of the block in an SRAM block (a circular buffer), and the graphics engine itself simply loops through block after block until there are no more to execute.

But wait, there's more... Prioritise blocks. Have n+1 priorities of command block, with one reserved as the highest. This could be used to execute something NOW, without waiting (or perhaps to execute as the next block, if the logic to return is too great). You could then optionally have multiple levels of priority for command blocks...

And more still... Tag blocks for execution at certain times. Why interrupt the system processor(s) to say "hey, retrace is on us" or "we're at line x of the display"? Allow blocks to be enqueued for execution at the start of vblank, or at line xxx (not to interrupt, but to start execution of). The way I see it, any interrupt saved is a bonus; having the card able to do as much of its own scheduling as possible has got to save system CPU.

(Telling the system when a memory block is free is still an engineering exercise... You could put something like a condition variable/mutex in it, or a simple flag...)

OK. Flame time :)

Hamish.
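To make the idea concrete, here is a minimal sketch in C of what the enqueue side and the card's scheduler loop might look like. Everything here is invented for illustration (the structure names, ring depth, trigger encoding, and `done_flag` completion scheme are all assumptions, not a real card's interface): one circular buffer per priority level, priority 0 reserved as the "execute NOW" level, and each block tagged with a trigger (immediate, vblank, or a scanline).

```c
/* Hypothetical sketch of the proposed queued-command-block scheme.
 * All names and sizes are invented for illustration. */
#include <stdint.h>

#define NUM_PRIORITIES 4            /* priority 0 = reserved "NOW" level */
#define RING_DEPTH     8            /* descriptors per priority ring     */

enum trigger { TRIG_IMMEDIATE, TRIG_VBLANK, TRIG_SCANLINE };

struct cmd_block {
    uint32_t      dma_addr;         /* bus address of the command block   */
    uint32_t      len;              /* length in bytes                    */
    enum trigger  when;             /* when the engine may start it       */
    uint32_t      scanline;         /* valid if when == TRIG_SCANLINE     */
    volatile int *done_flag;        /* card sets *done_flag when finished
                                       (the "simple flag" option above)   */
};

/* One circular buffer per priority level, as the card's SRAM would hold. */
struct ring {
    struct cmd_block slot[RING_DEPTH];
    unsigned head, tail;            /* head = next block to execute       */
};

static struct ring queues[NUM_PRIORITIES];

/* Host side: enqueue a block; returns 0 on success, -1 if ring is full. */
int enqueue_block(unsigned prio, struct cmd_block b)
{
    struct ring *r = &queues[prio];
    unsigned next = (r->tail + 1) % RING_DEPTH;
    if (next == r->head)
        return -1;                  /* ring full: caller must back off    */
    r->slot[r->tail] = b;
    r->tail = next;
    return 0;
}

/* Card side: the graphics engine's scheduler loop. Drain the highest-
 * priority ring whose head block's trigger is satisfied; return 0 if
 * nothing is runnable right now. No CPU interrupt needed to start a
 * vblank- or scanline-tagged block. */
int pick_next(struct cmd_block *out, uint32_t current_line, int in_vblank)
{
    for (unsigned p = 0; p < NUM_PRIORITIES; p++) {
        struct ring *r = &queues[p];
        if (r->head == r->tail)
            continue;               /* this ring is empty                 */
        struct cmd_block *b = &r->slot[r->head];
        if (b->when == TRIG_VBLANK && !in_vblank)
            continue;               /* wait for vblank; try lower prio    */
        if (b->when == TRIG_SCANLINE && current_line < b->scanline)
            continue;
        *out = *b;
        r->head = (r->head + 1) % RING_DEPTH;
        return 1;
    }
    return 0;
}
```

With this shape, an "execute NOW" block is just an enqueue at priority 0, and a vblank-tagged block simply sits at its ring head until the engine's own timing logic lets it through, with no interrupt round-trip to the host.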
_______________________________________________ Open-graphics mailing list [email protected] http://lists.duskglow.com/mailman/listinfo/open-graphics List service provided by Duskglow Consulting, LLC (www.duskglow.com)
