On Dec 11, 2007 5:10 PM, Petter Urkedal <[EMAIL PROTECTED]> wrote:
> On 2007-12-10, Timothy Normand Miller wrote:
> > Here's a new XP10 bridge design.  This one expects fifos to be
> > attached to it, one for commands (address, write data, read count),
> > and the other for returning read data.  This doesn't account for HQ,
> > but logically, HQ goes between this and the physical bridge.  We'll
> > have to work that out.
>
> I'm thinking about how to integrate the HQ I/O into your xp10_bridge,
> and I think it's fairly straightforward except for how HQ deals with
> return data from across the bridge.

I've been thinking about it a bit too, and I retract my earlier
suggestion that HQ should speak bridge protocol, per se.  In
particular, you make the critical point that we need a read return
queue because HQ can't and shouldn't try to catch the return data in
time.

>
> The idea is to have a writable port with two mode bits.
>
>   * HQ can at any time read FIFO counts and a status register composed
>     of selected internal signals from the bridge, as long as no side
>     effect (queuing) is involved.

Yes.

>   * One bit decides whether HQ intercepts data_in/deq_in and
>     bridge_ad_out.  In that case HQ can dequeue the read end of the
>     command FIFO and write to bridge_ad_out and bridge_cmd, where
>     bridge_cmd is decided by the port address.

We may want to have the address decode drop commands into one fifo
that is either read by HQ or drained straight into the one connected
to the bridge.
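To make that concrete, here's a toy software model of the command path.  All the names here are made up for illustration, not actual RTL signals; it just shows the mode-bit routing, not timing or clock domains.

```python
from collections import deque

class CommandRouter:
    """Toy model: the address decode always enqueues into a staging
    fifo; a mode bit (hq_intercept, hypothetical name) selects whether
    HQ drains it or it flows straight into the bridge command fifo."""

    def __init__(self):
        self.staging = deque()      # filled by the address decode
        self.bridge_cmd = deque()   # read by the physical bridge
        self.hq_intercept = False   # mode bit, written by host/HQ

    def decode_enq(self, cmd):
        """Address decode drops a command into the staging fifo."""
        self.staging.append(cmd)

    def tick(self):
        """Each cycle: if HQ is not intercepting, one staged command
        drains straight into the bridge command fifo."""
        if not self.hq_intercept and self.staging:
            self.bridge_cmd.append(self.staging.popleft())

    def hq_deq(self):
        """HQ may only dequeue once it has claimed the path."""
        if self.hq_intercept and self.staging:
            return self.staging.popleft()
        return None
```

The point of the single staging fifo is that the decode logic never needs to know who is consuming; only the drain side is muxed.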

>   * The other bit decides whether HQ intercepts bridge_ad_in and
>     data_out/enq_out, in a similar fashion, except:

Yeah, we need to work out all the control bits.  Some should be under
control of HQ, others under control of the host.

> One issue remaining is that HQ cannot accept data from across the bridge
> on every cycle.  Solutions I can think of are a) add a new FIFO, b)
> reuse the return FIFO if possible, c) let the I/O unit override memory
> operations in HQ and write directly to its BRAM, or d) only allow HQ to
> do short reads using a tiny return FIFO.

There's a fifo that connects the bridge read return path to the
address decode.  Logically, we'd want to intercept it at the end that
connects to the decode.  The problem is that that end sits in the
wrong clock domain, so we solve it with an extra fifo.  When HQ is
enabled, it uses one fifo to get data from RAM and the other to return
it to PCI.  When HQ is disabled, one fifo just dumps straight into the
other.
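Same kind of toy sketch for the read-return side, again with made-up names and no modeling of the actual clock-domain crossing; it only shows the enabled/disabled dataflow:

```python
from collections import deque

class ReadReturnPath:
    """Toy model of the two-fifo read-return path.  fifo_a sits at the
    bridge clock-domain boundary and receives read returns from RAM;
    fifo_b feeds the PCI side.  HQ disabled: fifo_a drains straight
    into fifo_b.  HQ enabled: HQ pops from fifo_a, does its work, and
    pushes results into fifo_b."""

    def __init__(self):
        self.fifo_a = deque()       # bridge/RAM side
        self.fifo_b = deque()       # PCI side
        self.hq_enabled = False

    def bridge_return(self, word):
        """Read data arriving from across the bridge."""
        self.fifo_a.append(word)

    def tick(self):
        """Pass-through path, active only while HQ is disabled."""
        if not self.hq_enabled and self.fifo_a:
            self.fifo_b.append(self.fifo_a.popleft())

    def hq_pop(self):
        """HQ consumes return data at its own pace."""
        if self.hq_enabled and self.fifo_a:
            return self.fifo_a.popleft()
        return None

    def hq_push(self, word):
        """HQ returns (possibly processed) data toward PCI."""
        if self.hq_enabled:
            self.fifo_b.append(word)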

I prefer the extra fifo option.  If we get tight on resources, we may
have to change our minds.  A fifo of width 64 and depth 16 needs one
16x1 LUT RAM per bit of width, i.e. 64 LUTs for the queue RAM, plus
some number of others for the pointer management, etc.  Conceivably,
multiplexing the inputs and outputs of a fifo could require as much
extra logic as another independent fifo.  It's only when we use block
ram fifos that sharing becomes really compelling.
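For the record, here's the back-of-envelope storage estimate, assuming each 4-input LUT can serve as a 16x1 distributed RAM (the usual FPGA arrangement; exact numbers depend on the part, and pointer/flag logic is extra):

```python
def lutram_fifo_luts(width, depth):
    """Rough storage-LUT count for a distributed-RAM fifo: each LUT
    holds a 16-deep, 1-bit memory, so the queue RAM alone costs
    width * ceil(depth / 16) LUTs."""
    return width * -(-depth // 16)  # ceil division

print(lutram_fifo_luts(64, 16))  # -> 64: queue RAM for a 64-wide, 16-deep fifo
```

So two independent shallow fifos cost roughly twice the storage LUTs, which is what the "could a shared fifo's muxing cost as much as a second fifo" comparison above is weighing.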

>
> Ad c.  HQ will still be able to use registers and access I/O ports.  In
> the worst case, it will just run in a loop polling the status register
> until the BRAM is available again.
>
> Ad d.  Single-word reads would be easy to implement, but only feasible
> if these are really unimportant.
>
> Those were my thoughts; I might have missed a more compelling solution.
> Also, I haven't considered how this will work with PCI master/target
> modes.  Can we get away with only the two FIFOs connecting the bridge/HQ
> to the PCI controller?

We're going to have to see how this adds up.  The simpler solution is
to just add more fifos, so if we can, let's, and get something that
works.  Keeping that in mind, however, we do need to minimize the
logic for the ASIC, so once we have our heads fully wrapped around
this, we can go back and see what can be shared.


-- 
Timothy Normand Miller
http://www.cse.ohio-state.edu/~millerti
Open Graphics Project
_______________________________________________
Open-graphics mailing list
[email protected]
http://lists.duskglow.com/mailman/listinfo/open-graphics
List service provided by Duskglow Consulting, LLC (www.duskglow.com)
