On 2008-08-19, Timothy Normand Miller wrote:
> I wouldn't read more than 8 or 16 words at a time anyway.

Good.

> A full cache line is fetched for every cache miss.  So for random
> access, lots of time is wasted.  But sequential is more common.  It's
> best to pick a cache size that's a good compromise.

Does this mean the read requests from PCI will always be aligned when
the cache is on?  Can we use the cache for all memory- and engine-to-PCI
operations?  That would simplify the code quite a bit.
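Timothy's point about line fetches can be sketched numerically.  This is
only an illustration -- the line and word sizes below are made up, not
taken from the actual OGP cache design:

```c
#include <assert.h>

/* Illustrative numbers only: a 16-byte cache line servicing 4-byte
 * word reads.  Not the real OGP cache parameters. */
#define LINE_BYTES 16
#define WORD_BYTES 4

/* Bytes fetched from memory to service n consecutive word reads:
 * sequential words share lines, so we pay one line fetch per line. */
static unsigned bytes_fetched_sequential(unsigned n_words)
{
    unsigned lines = (n_words * WORD_BYTES + LINE_BYTES - 1) / LINE_BYTES;
    return lines * LINE_BYTES;
}

/* Worst-case random access: every read misses and pulls a full line,
 * most of which is wasted. */
static unsigned bytes_fetched_random(unsigned n_words)
{
    return n_words * LINE_BYTES;
}
```

With these numbers, 8 sequential word reads fetch 32 bytes (2 lines),
while 8 random reads can fetch 128 -- which is why a compromise line
size matters when sequential access dominates but isn't guaranteed.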

> >> > Another goofy thing:  It seems tricky at best to unroll the
> >> > transfer-loop for target write.  The reason is that we only know the
> >> > number of queued commands, but what we need is the number of queued
> >> > write-data commands.  Any idea?
> >>
> >> That is tricky, and we may have no good answer for that.
> >>
> >> I suggest we do nothing about it right now.  We should get a working
> >> revision out, then we can go back later and see if we can do anything
> >> clever with the CPU design.
> >
> > Okay, we don't unroll, but we can still avoid sending an address for
> > each write request.
> 
> You can, although the check to decide whether or not to send an
> address would probably take longer than just sending an address.  It
> depends on what you're doing.
>
> For instance: you get an address and some writes, then nothing more
> arrives, so you go off and do some VGA business.  When another write
> comes in without an address, you'll have to reissue the correct
> address (where PCI left off), because you may have changed the
> address register to access memory for what VGA needs to do.

My idea was to enter an inner loop once we have a write, and exit to the
top level on the first non-write.
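A minimal C sketch of that control flow, combined with the
address-reissue rule from the quoted paragraph.  All names here
(`burst_writes`, `dispatch`, the command queue) are hypothetical
stand-ins for the real command stream, not the actual microcode:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical command kinds; the real queue is hardware state. */
typedef enum { CMD_WRITE, CMD_OTHER } cmd_kind;

typedef struct {
    cmd_kind kind;
    unsigned data;           /* payload for writes */
} cmd;

static unsigned mem[64];
static unsigned cur_addr;
static bool     addr_valid;  /* true while cur_addr is where PCI left off */
static unsigned addr_issues; /* how often we had to (re)send an address */

static void issue_address(unsigned a)
{
    cur_addr = a;
    addr_valid = true;
    addr_issues++;
}

/* Inner loop: once the first write arrives, burst through consecutive
 * writes without re-sending an address; return on the first non-write. */
static int burst_writes(const cmd *q, int i, int n)
{
    while (i < n && q[i].kind == CMD_WRITE) {
        mem[cur_addr++] = q[i].data;
        i++;
    }
    return i;  /* exit to the top level here */
}

/* Top level: any non-write ("VGA business") may move the address
 * register, so the next write burst must reissue it. */
static void dispatch(const cmd *q, int n, unsigned pci_addr)
{
    int i = 0;
    unsigned next = pci_addr;
    while (i < n) {
        if (q[i].kind == CMD_WRITE) {
            if (!addr_valid)
                issue_address(next);  /* reissue where PCI left off */
            i = burst_writes(q, i, n);
            next = cur_addr;
        } else {
            addr_valid = false;       /* VGA work may clobber cur_addr */
            i++;
        }
    }
}
```

The payoff is one address issue per burst rather than one per write,
without any per-write check inside the inner loop itself.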
_______________________________________________
Open-graphics mailing list
[email protected]
http://lists.duskglow.com/mailman/listinfo/open-graphics
List service provided by Duskglow Consulting, LLC (www.duskglow.com)
