On Thu, 24 Feb 2005 17:57:44 -0500, Daniel Phillips <[EMAIL PROTECTED]> wrote:
> On Thursday 24 February 2005 09:46, Timothy Miller wrote:
> > The memory controller will be designed with a set of "ports", which
> > are connections to agents that need access to memory. It then
> > prioritizes requests and accesses memory as necessary. In fact,
> > there'll be four memory controllers working somewhat independently.
>
> Is the memory arranged so that each four adjacent memory words are
> divided between the four chips? Given that memory requests may be
> completed in a fairly unpredictable order, it seems out-of-order
> fragment processing would be an advantage, but I can see how that
> might be a bit ambitious. So that implies we will have a queue of
> what, three or four entries between texture address generation and
> color combining?
It's not THAT ambitious, I don't think. Most of the memory controller is
temporary storage; replicating the state machine won't be that big a
deal. The chips are 32-bit, and being DDR, we transfer 64 bits per clock
(rising and falling edges), so each adjacent pair of words will be on
one chip, the next pair on the next chip, and so on.

> > Anyhow, each port will have its own very small cache. This way,
> > accessing the same pixel twice does not require a re-read of memory.
> > (Although the two texture units will share a port, I think I want to
> > give each one its own cache.)
>
> Could you clarify please: each port has its own cache? And did you
> mean "share each port" rather than "share a port"?

There is really only one texture unit. It just happens to have two sets
of registers that it will process.

> > Caching with low overhead can result in non-coherency. Therefore,
> > certain things like bitblts will need to be able to specify that the
> > read caches are to be flushed before reading starts (and other
> > similar things).
>
> Ugh, that sounds like a big mess. If the blit is a read then the read
> cache does not need to be flushed, except we need to worry about
> ordering between pixels written to the frame buffer and bitblt reads
> from the frame buffer. Handling framebuffer blits separately from the
> rendering pipeline is a huge race.
>
> WritePixels has to go through the pipeline in many cases anyway, so
> this isn't a problem. We only have to worry about reads and raw
> writes. The easy solution is to flush the whole pipeline, just for
> the case of a raw blit to or from the framebuffer, don't you think?
> Otherwise, we could keep track of the bounds of the current trapezoid
> and avoid flushing in most cases, but that is a lot more work.

Mind you, I've dealt with this all before, so it's really a solved
problem. But the basic idea is: "What do you do when you write to an
address that's represented in a read cache?"
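To illustrate that question, here's a minimal C sketch of a per-port read
cache with an explicit flush. All the names and the direct-mapped layout
are my own assumptions for the example, not the actual hardware design;
it just shows how a raw write that bypasses the cache leaves a stale line
behind until the cache is flushed:

```c
#include <stdint.h>
#include <stdbool.h>
#include <string.h>

#define CACHE_LINES 8  /* "very small cache", size chosen arbitrarily */

struct read_cache {
    bool     valid[CACHE_LINES];
    uint32_t tag[CACHE_LINES];
    uint64_t data[CACHE_LINES];
};

/* Invalidate every line: the next read of any address misses and
 * refetches from memory. This is the "flush before reading" step. */
static void cache_flush(struct read_cache *c)
{
    memset(c->valid, 0, sizeof c->valid);
}

/* Read through the cache; `mem` stands in for the DDR array. */
static uint64_t cache_read(struct read_cache *c, const uint64_t *mem,
                           uint32_t addr)
{
    unsigned line = addr % CACHE_LINES;
    if (!(c->valid[line] && c->tag[line] == addr)) {
        c->data[line]  = mem[addr];   /* miss: fetch from memory */
        c->tag[line]   = addr;
        c->valid[line] = true;
    }
    return c->data[line];   /* hit: may be stale after a raw write */
}
```

A raw write straight to `mem[]` never updates `c->data[]`, so a
subsequent `cache_read` of the same address returns the old value; a
`cache_flush` before the read sequence avoids that.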
Simple answer: whenever you're going to read something you may have
written to, flush the read caches.

_______________________________________________
Open-graphics mailing list
[email protected]
http://lists.duskglow.com/mailman/listinfo/open-graphics
List service provided by Duskglow Consulting, LLC (www.duskglow.com)
