I see that the bridge and cache are in parallel like you're describing.
The culprit seems to be this line:

configs/example/fs.py:
    test_sys.bridge.filter_ranges_a = [AddrRange(0, Addr.max)]

where the bridge is explicitly told not to let anything through from
the IO side to the memory side. That should be fairly straightforward
to poke a hole in for the necessary ranges. The corresponding line for
the other direction (below) raises another question: what happens if
the bridge doesn't filter an address that something else also wants to
respond to? The bridge isn't set to ignore the APIC messages that
implement IPIs between CPUs, but those seem to travel between CPUs
rather than out into the IO system. Are we just getting lucky? The same
concern would seem to apply to any other memory-side object outside the
address range 0 to mem_size.

configs/example/fs.py:
    test_sys.bridge.filter_ranges_b = [AddrRange(mem_size)]
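
Poking the hole might look something like this (a sketch only; the
interrupt range below is a placeholder for wherever the APIC messages
actually live, not something I've verified):

    # Keep filtering IO-to-memory traffic everywhere *except* the
    # interrupt message range, so APIC messages can cross the bridge.
    int_base = 0xFEE00000  # placeholder base for the interrupt range
    int_size = 0x1000      # placeholder size
    test_sys.bridge.filter_ranges_a = [
        AddrRange(0, int_base - 1),
        AddrRange(int_base + int_size, Addr.max),
    ]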

Gabe

Steve Reinhardt wrote:
> I believe the I/O cache is normally paired with a bridge that lets
> things flow in the other direction.  It's really just designed to
> handle accesses to cacheable space from devices on the I/O bus without
> requiring each device to have a cache.  It's possible we've never had
> a situation before where I/O devices issue accesses to uncacheable
> non-memory locations on the CPU side of the I/O cache, in which case I
> would not be terribly surprised if that didn't quite work.
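>
> Roughly the arrangement in configs/example/fs.py, for reference (a
> sketch from memory; the IOCache parameters and port names here are
> approximations, not verbatim from the config):
>
>     # The bridge carries CPU/memory-side accesses out to device space;
>     # the IO cache carries device DMA from the IO bus into cacheable
>     # memory, keeping it coherent with the CPU caches.
>     test_sys.iocache = IOCache(addr_range=mem_size)
>     test_sys.iocache.cpu_side = test_sys.iobus.port
>     test_sys.iocache.mem_side = test_sys.membus.port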
>
> Steve
>
> On Mon, Nov 22, 2010 at 11:59 AM, Gabe Black
> <gbl...@eecs.umich.edu> wrote:
>
>     The cache claims to support all addresses on the CPU side (or so the
>     comments say), but no addresses on the memory side. Messages from the
>     IO interrupt controller reach the IO bus but then have nowhere to go,
>     since the IO cache hides the fact that the CPU's interrupt controller
>     wants to receive messages in that address range. I also don't know
>     whether the cache can handle messages passing through that originate
>     on the memory side, but I didn't look into that.
>
>     Gabe
>
>     Ali Saidi wrote:
>     > Something has to maintain I/O coherency, and that something looks
>     > a whole lot like a couple-line cache. Why is having a cache there
>     > an issue? The messages should pass right through it.
>     >
>     > Ali
>     >
>     >
>     >
>     > On Nov 22, 2010, at 4:42 AM, Gabe Black wrote:
>     >
>     >
>     >> Hmm. It looks like this IO cache is only added when there are
>     >> caches in the system (a fix for some coherency issue? I sort of
>     >> remember that discussion), and it doesn't propagate to the IO bus
>     >> the fact that the CPU's local APIC wants to receive interrupt
>     >> messages passed over the memory system. I don't know the
>     >> intricacies of why the IO cache was necessary, or what problems
>     >> passing requests back up through the cache might cause, but this
>     >> is a serious issue for x86 and any other ISA that wants to move
>     >> to a message-based interrupt scheme. I suppose the interrupt
>     >> objects could be connected all the way out onto the IO bus itself,
>     >> bypassing that cache, but I'm not sure how realistic that is.
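>     >>
>     >> If we did wire them straight to the IO bus, I'd picture something
>     >> like this (the port and attribute names are guesses for
>     >> illustration, not the real config API):
>     >>
>     >>     # Attach each CPU's local APIC directly to the IO bus so
>     >>     # interrupt messages never pass through the IO cache.
>     >>     for cpu in test_sys.cpu:
>     >>         cpu.interrupts.pio = test_sys.iobus.port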
>     >>
>     >> Gabe Black wrote:
>     >>
>     >>>    For anybody waiting for an x86 FS regression (yes, I know,
>     >>> you can all hardly wait, but don't let this spoil your
>     >>> Thanksgiving) I'm getting closer to having it working, but I've
>     >>> discovered some issues with the mechanisms behind the --caches
>     >>> flag with fs.py and x86. I'm surprised I never thought to try it
>     >>> before. It also brings up some questions about where the table
>     >>> walkers should be hooked up in x86 and ARM. Currently it's after
>     >>> the L1, if any, but before the L2, if any, which seems wrong to
>     >>> me. Also, caches don't seem to propagate requests upwards to the
>     >>> CPUs, which may or may not be an issue. I'm still looking into
>     >>> that.
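>     >>>
>     >>> For what it's worth, the hookup I'm describing looks roughly like
>     >>> this (attribute names are illustrative guesses, not the actual
>     >>> config code):
>     >>>
>     >>>     # Walker ports sit on the bus between the L1s and the L2,
>     >>>     # so page table walks skip the L1 but can hit in the L2.
>     >>>     cpu.itb.walker.port = tol2bus.port
>     >>>     cpu.dtb.walker.port = tol2bus.port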
>     >>>
>     >>> Gabe

_______________________________________________
m5-dev mailing list
m5-dev@m5sim.org
http://m5sim.org/mailman/listinfo/m5-dev
