Hi Mike,

If you look at the code at the bottom of example/fs.py you'll see what is going on. The bridge is set up to allow accesses to memory-mapped devices to flow through it to the device (and the return path goes back through it). The cache is set up not to allow CPU requests to flow through it, but to present cacheable memory to the I/O devices for DMA.
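Roughly, the wiring at the bottom of that config looks like the sketch below, assuming a system that already has an iobus and a membus. This is from memory, so the class and port names (IOCache, Bridge, side_a/side_b, cpu_side/mem_side) and the parameter values are assumptions that may differ in your snapshot:

    from m5.objects import *

    # Bridge: lets CPU-side (membus) accesses to memory-mapped devices flow
    # out to the iobus; responses come back along the same path.
    system.bridge = Bridge(delay='50ns')
    system.bridge.side_a = system.iobus.port
    system.bridge.side_b = system.membus.port

    # I/O cache: faces the iobus, so device DMA reads/writes become coherent
    # transactions on the membus. CPU requests don't travel through it.
    system.iocache = IOCache(addr_range='512MB')
    system.iocache.cpu_side = system.iobus.port
    system.iocache.mem_side = system.membus.port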

Ali


On Wed, 31 Aug 2011 10:56:46 -0600, "Michael Levenhagen" <[email protected]> wrote:
I'm digging into the Python code that configures a full system, and it
appears to me that the IOCache and Bridge are connected in parallel
between the IOBus and MemBus.
Is this correct? If so, it's not obvious to me how read/write requests
move between the IOBus and MemBus, given that there appear to be two
connections between the two buses.

Mike


On Aug 30, 2011, at 9:51 AM, Ali Saidi wrote:

Hi Mike,

The I/O devices in gem5 are cache coherent. So if you leave them attached to the iobus, their reads/writes will snoop the caches. If you want it closer you can move it up the hierarchy, but yes, you'll need a cache to participate in the coherence protocol for you. By default, if your device is below the L2 cache it will not allocate data in any of the higher-level caches (it's possible for a dirty block in the I/O cache to migrate to another cache if a read happens before it's written back), but by default data will end up in the I/O cache and be written back to memory. If you want some other behavior you'll need to hack the cache models or attach the NIC as I showed below, where data would allocate into the L2.

Ali


On Tue, 30 Aug 2011 08:55:56 -0600, Michael Levenhagen <[email protected]> wrote:
This does help. It looks like my configuration is invalid. I'm trying
to model a cache-coherent NIC, so I attached a DmaDevice directly to
the Bus that the CPU caches and PhysicalMemory are connected to.
If I understand correctly, I need to place a Cache between the NIC's DmaDevice and the Bus.

Mike

On Aug 29, 2011, at 6:21 PM, Ali Saidi wrote:

Hi Michael,

The short answer is that it should work. M5 (gem5) was created out of a desire to do multi-system simulation of TCP/IP, and one of the experiments we did early on was cache placement of DMA data from the NIC. By default, DmaDevice::dmaRead() and DmaDevice::dmaWrite() should issue standard cacheable requests. The read/write requests that the device is doing are no different than the read/write requests that a CPU is doing. All DMA devices do this in a sense, as the IOCache is just a small cache that makes DMA operations coherent with the rest of memory (otherwise the devices wouldn't work, because they couldn't read dirty data (e.g., a descriptor ring) out of the caches). There are several possibilities. (1) There might be a bug in the code you're using; a year is a pretty long time with our code base. (2) You might have created some topology that isn't supported. (3) If your cache models don't support the is_top_level flag there could definitely be an issue with the cache trying to hand ownership to the device (which it shouldn't) in the case of full-block reads to dirty data. A topology like the following should work with the current code base:

  CPU              NIC
 |  |              |
 I  D        IOC2/Bridge2
 |  |              |
 -------------------------
           |
           L2
           |
mem ---------------
         |
       IOC1/Bridge1--------------
                     |     |
                  Other  Devices...


You can see the bridge/cache pairs in our configuration files. The ports are set up to only allow accesses in one direction. The I/O cache participates in the coherence protocol and converts the read/write transactions into the appropriate coherence operations, and the bridge provides memory-mapped I/O access to the peripheral.
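Concretely, the IOC2/Bridge2 pair in the diagram might be wired up along the lines below. This is only a rough sketch: the bus name (toL2Bus), the NIC port names (dma/pio), and the IOCache/Bridge parameters are assumptions that depend on your version:

    from m5.objects import *

    # Coherent DMA path: NIC dma port -> small I/O cache -> bus above the L2,
    # so DMA traffic stays coherent and data can allocate near the L2.
    system.nic_iocache = IOCache(addr_range='512MB')
    system.nic.dma = system.nic_iocache.cpu_side
    system.nic_iocache.mem_side = system.toL2Bus.port

    # Memory-mapped (PIO) path: bus above the L2 -> bridge -> NIC registers,
    # so the CPU can still reach the device's register space.
    system.nic_bridge = Bridge(delay='50ns')
    system.nic_bridge.side_a = system.toL2Bus.port
    system.nic_bridge.side_b = system.nic.pio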

Hope that helps,

Ali


On Aug 29, 2011, at 4:01 PM, Michael Levenhagen wrote:

Do the cache models in M5 support Cache Injection?

I've created a simulation where I have a memory-mapped NIC that uses a DmaDevice to read and write physical memory (note the memory is not restricted to UNCACHEABLE). I've followed WriteReq packets from the DmaDevice to the Cache, and it appears that the data in these packets is not injected into the cache even if the cache block is present. I've hacked handleSnoop() so it injects write requests into the cache if the cache block is present. This hack has allowed me to run MPI apps such as osu_bw and osu_latency, but I'm running into a problem with another app that looks like a memory consistency issue. I'd consider my use of the DmaDevice to be pretty standard, so I'm stumped why I'm having a problem unless using it to access CACHEABLE memory is not allowed.
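For concreteness, the attachment described above amounts to something like the sketch below; the NIC class and all names here are hypothetical placeholders rather than the actual configuration:

    from m5.objects import *

    # Hypothetical sketch of the setup described above: a memory-mapped,
    # DMA-capable NIC attached directly to the bus that the CPU caches and
    # PhysicalMemory sit on. Per the replies above, the fix is to place a
    # small cache between the dma port and the bus instead.
    system.nic = CoherentNic()           # placeholder for the custom NIC model
    system.nic.pio = system.membus.port  # CPU access to the NIC's registers
    system.nic.dma = system.membus.port  # DMA reads/writes go straight onto the bus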

Note that I'm using a snapshot of M5 from last fall.

Michael Levenhagen
Sandia National Labs.
_______________________________________________
gem5-dev mailing list
[email protected]
http://m5sim.org/mailman/listinfo/gem5-dev
