After digging into the issue, I found out that Ruby actually does not cache
memory requests within the memory-mapped device address range
(0xC0000000-0xFFFFFFFF) and instead sends them to the pio port. The problem is
that when I run gem5 with Ruby, some memory requests whose addresses do not
belong to the reserved device memory region still reach the iobus and the PCI
configspace after going through Ruby! However, no memory request outside the
reserved device region reaches the PCI configspace when I use the classic
memory system.
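To show what I mean by the range check, here is an illustrative sketch (not
the actual gem5/Ruby code; AddrRange and isPhysMemAddress are placeholder
names here):

#include <cstdint>
#include <vector>

// Simplified stand-in for an address range; gem5 has its own AddrRange
// class, this is only for illustration.
struct AddrRange {
    uint64_t start, end;                       // [start, end), in bytes
    bool contains(uint64_t a) const { return a >= start && a < end; }
};

// Hypothetical range check: a request whose address falls inside one of the
// configured physical-memory ranges should go to the Ruby caches/directory;
// anything else (e.g. the 0xC0000000-0xFFFFFFFF device window) is expected
// to be treated as uncacheable and forwarded to the pio port.
bool isPhysMemAddress(const std::vector<AddrRange> &memRanges, uint64_t addr)
{
    for (const auto &r : memRanges)
        if (r.contains(addr))
            return true;
    return false;
}

// The surprising behavior described above is that some addresses which
// should pass this check (ordinary memory addresses) still end up on the
// pio/iobus path when Ruby is in use.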
Below are the gem5 command line and output. The address of the request that
reaches the PCI configspace is 0x13f21e9c0. I've added 1GB of extra memory to
the Ruby memory size, based on this post:
https://www.mail-archive.com/[email protected]/msg11106.html, to be able to
use Ruby with a memory size larger than 3GB.
command line: ./gem5.opt --debug-flags=PciConfigAll configs/example/fs.py
--mem-size=4096MB --num-cpus=1 --cpu-type=timing --ruby -r 1
**** REAL SIMULATION ****
2379481334283000: system.pc.pciconfig: read va=0x13f21e9c0 size=16
panic: invalid access size(?) for PCI configspace!
@ tick 2379481334283000
[read:build/X86/dev/pciconfigall.cc, line 72]
Memory Usage: 5253512 KBytes
Program aborted at cycle 2379481334283000
Aborted (core dumped)
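As far as I can tell, the panic is the default branch of a switch on the
access size: PCI configspace reads are only expected at 1-, 2-, 4-, or 8-byte
granularity, and the stray request arrives with size=16. A rough paraphrase of
that check (not the actual pciconfigall.cc code):

#include <cinttypes>
#include <cstdint>
#include <cstdio>
#include <cstdlib>

// Rough paraphrase of the check that fires above (the real code is in
// dev/pciconfigall.cc); accesses of any other width fall through to the
// panic branch.
void readPciConfig(uint64_t addr, unsigned size)
{
    switch (size) {
      case sizeof(uint8_t):
      case sizeof(uint16_t):
      case sizeof(uint32_t):
      case sizeof(uint64_t):
        // an unclaimed config read would normally just return all ones
        printf("read va=0x%" PRIx64 " size=%u -> all ones\n", addr, size);
        break;
      default:
        fprintf(stderr, "panic: invalid access size(?) for PCI configspace!\n");
        abort();
    }
}

int main()
{
    readPciConfig(0x13f21e9c0ULL, 16);   // reproduces the failing case
    return 0;
}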
Any idea what is going on here, and possible fixes?
Thank you,
Mohammad
On Thursday, January 22, 2015 10:28 AM, Mohammad Alian via gem5-dev
<[email protected]> wrote:
Hello,
How can I force a request to be uncacheable when using the Ruby memory system?
"req->setFlags(Request::UNCACHEABLE)" works for the classic memory system, but
it has no effect on the request when using Ruby.
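For reference, this is the kind of usage I mean in the classic case (a minimal
sketch; buildUncacheableRead is a made-up helper, and the Request constructor
arguments depend on the gem5 version, so check src/mem/request.hh in your
tree):

#include "mem/request.hh"   // gem5's Request class and flag definitions

// Hypothetical helper (not real gem5 code): build a request and mark it
// uncacheable before it is handed to the memory system.
RequestPtr
buildUncacheableRead(Addr paddr, unsigned size, MasterID masterId)
{
    RequestPtr req = new Request(paddr, size, 0, masterId);
    req->setFlags(Request::UNCACHEABLE);   // honored by classic, ignored by Ruby
    return req;
}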
Thank you,
Mohammad
_______________________________________________
gem5-dev mailing list
[email protected]
http://m5sim.org/mailman/listinfo/gem5-dev