Hamish wrote:
I had a thought... Might be a random brain fart, or it might just use far too much logic, or whatever...

Most interfaces I've seen that use DMA work like this: you allocate a block of memory, add commands to it, wait for the card to go idle, and then tell the card to go for it... Now with hard drives you do pretty much the same, except that modern ones (OK, anything since the early 90's perhaps, in the SCSI world) can queue commands.
So why not with the graphics card as well?

Instead of telling the card about a block of commands to execute straight away, why not simply tell it to enqueue a block of commands for execution? The card then saves the address of the block in an SRAM block (circular buffer). The graphics engine itself then simply loops through block after block until there are no more to execute.
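To make the idea concrete, here's a minimal sketch of that SRAM circular buffer of command-block addresses, with the driver pushing on one end and the graphics engine popping on the other. All names, the ring depth, and the use of 32-bit bus addresses are my assumptions, not anything the card actually defines:

```c
#include <stdint.h>

#define RING_SLOTS 16  /* hypothetical SRAM ring depth */

/* Ring of physical addresses of queued command blocks. */
typedef struct {
    uint32_t slot[RING_SLOTS];
    unsigned head;   /* next block the graphics engine will fetch */
    unsigned tail;   /* next free slot the driver will fill */
} cmd_ring;

/* Driver side: enqueue a block address; returns 0 if the ring is full. */
static int ring_push(cmd_ring *r, uint32_t block_addr)
{
    unsigned next = (r->tail + 1) % RING_SLOTS;
    if (next == r->head)
        return 0;            /* full: one slot stays empty so full != empty */
    r->slot[r->tail] = block_addr;
    r->tail = next;
    return 1;
}

/* Card side: fetch the next block address, or 0 if nothing is queued. */
static uint32_t ring_pop(cmd_ring *r)
{
    uint32_t addr;
    if (r->head == r->tail)
        return 0;            /* idle: no more blocks to execute */
    addr = r->slot[r->head];
    r->head = (r->head + 1) % RING_SLOTS;
    return addr;
}
```

The engine just calls ring_pop in a loop until it returns 0, which is exactly the "loop through block after block until there are no more" behaviour.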

There are also two common methods which might be better suited:

        Exchange buffering

        Linked list

With exchange buffering, there are two fixed buffers and the GPU would know the addresses of both of them. With this application IIUC they would need to be variable length and be terminated by an end instruction. So, an empty buffer or a buffer not yet filled would need to start with the end instruction. This would require an interrupt at the end of each buffer to reset the buffer just read.
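A rough sketch of the exchange-buffering scheme, assuming a hypothetical OP_END opcode as the terminator; the buffer sizes, names, and the simulated "interrupt" reset are all my own illustration:

```c
#include <stdint.h>

#define BUF_WORDS 64
#define OP_END    0u   /* hypothetical "end of buffer" instruction */

/* Two fixed buffers at addresses the GPU already knows. An empty (or
 * not-yet-filled) buffer starts with OP_END so the engine skips it. */
static uint32_t bufs[2][BUF_WORDS] = { { OP_END }, { OP_END } };

/* Driver side: fill one buffer with commands and terminate it. */
static void fill_buffer(uint32_t *buf, const uint32_t *cmds, unsigned n)
{
    unsigned i;
    for (i = 0; i < n && i < BUF_WORDS - 1; i++)
        buf[i] = cmds[i];
    buf[i] = OP_END;
}

/* Card side: consume commands up to OP_END, then reset the buffer --
 * in hardware the reset would happen in the end-of-buffer interrupt
 * handler. Returns the number of commands executed. */
static unsigned drain_buffer(uint32_t *buf)
{
    unsigned n = 0;
    while (buf[n] != OP_END)
        n++;
    buf[0] = OP_END;   /* mark buffer empty again */
    return n;
}
```

The driver would alternate between bufs[0] and bufs[1], refilling whichever one the interrupt just reported as drained.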

A linked list is quite similar except that the header of the current buffer contains the address of the next buffer. With a linked list, an interrupt would only be required if the DMA runs out of buffers to read.
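And the linked-list variant might look like this; here `next` is an ordinary pointer for illustration, though in hardware it would be a bus address in the buffer header, and the struct layout is purely my assumption:

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical layout: each buffer starts with a header whose `next`
 * field points to the following buffer (NULL = end of the chain). */
typedef struct cmd_buf {
    struct cmd_buf *next;   /* in hardware: a bus address, 0 = none */
    unsigned        ncmds;
    uint32_t        cmds[8];
} cmd_buf;

/* Walk the chain the way the DMA engine would; the interrupt fires
 * only when the engine runs out of buffers. Returns total commands. */
static unsigned run_chain(const cmd_buf *b, int *out_of_buffers_irq)
{
    unsigned total = 0;
    while (b) {
        total += b->ncmds;   /* execute this buffer's commands */
        b = b->next;
    }
    *out_of_buffers_irq = 1; /* no next buffer: raise the interrupt */
    return total;
}
```

The win over exchange buffering is visible here: the driver can keep appending buffers to the tail of the chain, and the CPU is only interrupted once the whole chain is drained rather than once per buffer.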

--
JRT
_______________________________________________
Open-graphics mailing list
[email protected]
http://lists.duskglow.com/mailman/listinfo/open-graphics
List service provided by Duskglow Consulting, LLC (www.duskglow.com)