Hello all,
This e-mail is a long and detailed one; apologies in advance.

I am trying to run an FS simulation on the ARM architecture using the HMC model and a 
custom image file that contains a customised gcc. I have been struggling with several 
issues for a while; I managed to solve some of them thanks to Erfan Azarkhish and Andreas 
Hansson, but I have hit a wall again. It would be great if someone could offer any ideas.

With a freshly cloned and built gem5, it is not possible to run the example 
configuration script configs/example/hmctest.py (by Abdul Mutaal) out of the 
box; it fails with the following error:
$ ./build/ARM/gem5.opt configs/example/hmctest.py --mem-type=HMC_2500_x32 
--mode=RANDOM  --arch=same
...
gem5.opt: build/ARM/mem/dram_ctrl.cc:684: void DRAMCtrl::processRespondEvent(): 
Assertion `(dram_pkt->rankRef.pwrState == PWR_ACT) || 
(dram_pkt->rankRef.pwrState == PWR_IDLE)' failed.
...
Andreas suggested commenting out this assertion for the moment (a patch is coming 
soon):
$ hg diff src/mem/dram_ctrl.cc
diff -r cd7f3a1dbf55 src/mem/dram_ctrl.cc
--- a/src/mem/dram_ctrl.cc    Wed Nov 09 14:27:40 2016 -0600
+++ b/src/mem/dram_ctrl.cc    Wed Jan 25 15:46:13 2017 +0000
@@ -680,8 +680,8 @@

     // at this moment should be either ACT or IDLE depending on
     // if PRE has occurred to close all banks
-    assert((dram_pkt->rankRef.pwrState == PWR_ACT) ||
-           (dram_pkt->rankRef.pwrState == PWR_IDLE));
+    // assert((dram_pkt->rankRef.pwrState == PWR_ACT) ||
+    //        (dram_pkt->rankRef.pwrState == PWR_IDLE));

     // track if this is the last packet before idling
     // and that there are no outstanding commands to this rank
After this change one can run the example (which runs in SE mode):
$ ./build/ARM/gem5.opt configs/example/hmctest.py --mem-type=HMC_2500_x32 
--mode=RANDOM  --arch=same
...
Global frequency set at 1000000000000 ticks per second
warn: DRAM device capacity (8 Mbytes) does not match the address range assigned 
(256 Mbytes)
...
warn: DRAM device capacity (8 Mbytes) does not match the address range assigned 
(256 Mbytes)
warn: Cache line size is neither 16, 32, 64 nor 128 bytes.
info: Entering event queue @ 0.  Starting simulation...
Done!
However, even with this change it is not possible to use configs/example/fs.py 
for FS mode (I had to drop --arch and --mode, since no such options exist 
in fs.py or Options.py).
$  ./build/ARM/gem5.opt configs/example/fs.py --mem-type=HMC_2500_x32
...
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "src/python/m5/main.py", line 400, in main
    exec filecode in scope
  File "configs/example/fs.py", line 341, in <module>
    test_sys = build_test_system(np)
  File "configs/example/fs.py", line 232, in build_test_system
    MemConfig.config_mem(options, test_sys)
  File "configs/common/MemConfig.py", line 156, in config_mem
    HMChost = HMC.config_host_hmc(options, system)
  File "configs/common/HMC.py", line 261, in config_host_hmc
    for i in xrange(system.hmc_host.num_serial_links)]
AttributeError: Values instance has no attribute 'ser_ranges'
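As far as I can tell, the AttributeError simply means that fs.py never registers the 
command-line option that HMC.py later reads as options.ser_ranges. As a minimal sketch 
of what is going on (the option name, type, and default below are assumptions inferred 
from the traceback, not taken from the real HMC.py or Options.py):

```python
# Sketch: optparse (which these gem5 configs use) stores an option named
# --ser-ranges under options.ser_ranges. If no script ever calls
# add_option for it, reading the attribute raises exactly the
# AttributeError above. All names here are assumptions for illustration.
import optparse

parser = optparse.OptionParser()
parser.add_option("--ser-ranges", type="string", default="",
                  help="address ranges served by the HMC serial links (assumed)")

options, _ = parser.parse_args(["--ser-ranges", "0:256MB"])
print(options.ser_ranges)
```

So presumably fs.py would need the HMC-specific options registered before 
MemConfig.config_mem() is called, which hmctest.py does but fs.py does not.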
After getting this error I created a configuration script, hmcfs.py, to use FS mode 
with the HMC model (a link to the script is at the bottom of this e-mail).
Running this script gave the following error at first:
$ ./build/ARM/gem5.opt configs/example/hmcfs.py --mem-type=HMC_2500_x32 
--mode=RANDOM --arch=same --mem-channels=16 --cpu-type=timing 
--disk-image=images/disks/aarch64-ubuntu-trusty-headless.img --machine-type 
VExpress_EMM64

fatal: Port <orphan LinuxArmSystem>.system_port is already connected to <orphan 
LinuxArmSystem>.membus.slave[1], cannot connect <orphan 
LinuxArmSystem>.hmc_dev.xbar3.slave[1]
The X86 architecture gave a similar error:
$ ./build/X86/gem5.opt configs/example/hmcfs.py --mem-type=HMC_2500_x32  
--arch=same --mem-channels=16 --cpu-type=timing 
--disk-image=images/disks/linux.img

fatal: Port <orphan LinuxX86System>.system_port is already connected to <orphan 
LinuxX86System>.membus.slave[1], cannot connect <orphan 
LinuxX86System>.hmc_dev.xbar3.slave[1]
In configs/common/HMC.py, line 375 ("system.system_port = 
system.hmc_dev.xbar[3].slave") tries to connect system_port to 
hmc_dev.xbar3.slave[1], but by this point configs/common/FSConfig.py has 
already connected system_port to membus.slave. Applying the following 
changes (which could no doubt be done more cleanly) allowed me to progress.
$ hg diff configs/common/FSConfig.py
diff -r 97eebddaae84 configs/common/FSConfig.py
--- a/configs/common/FSConfig.py    Wed Nov 09 14:27:40 2016 -0600
+++ b/configs/common/FSConfig.py    Wed Jan 25 16:16:29 2017 +0000
@@ -390,8 +390,8 @@
     self.terminal = Terminal()
     self.vncserver = VncServer()

-    if not ruby:
-        self.system_port = self.membus.slave
+    # if not ruby:
+    #     self.system_port = self.membus.slave

     if ruby:
         fatal("You're trying to use Ruby on ARM, which is not working " \
@@ -489,7 +489,7 @@
     # connect the io bus
     x86_sys.pc.attachIO(x86_sys.iobus)

-    x86_sys.system_port = x86_sys.membus.slave
+    # x86_sys.system_port = x86_sys.membus.slave

 def connectX86RubySystem(x86_sys):
     # North Bridge
After the changes above, I get the following errors:
$ ./build/ARM/gem5.opt configs/example/hmcfs.py --mem-type=HMC_2500_x32 
--arch=same --mem-channels=16 --cpu-type=timing 
--disk-image=images/disks/aarch64-ubuntu-trusty-headless.img --machine-type 
VExpress_EMM64
...
Global frequency set at 1000000000000 ticks per second
warn: DRAM device capacity (8 Mbytes) does not match the address range assigned 
(256 Mbytes)
...
warn: DRAM device capacity (8 Mbytes) does not match the address range assigned 
(256 Mbytes)
fatal condition addrMap.insert(m->getAddrRange(), m) == addrMap.end() occurred: 
Memory address range for system.realview.nvmem is overlapping
 @ tick 0
[PhysicalMemory:build/ARM/mem/physical.cc, line 97]
Memory Usage: 137888 KBytes
I suspected a problem with the address ranges; I was using the same lines 
provided in configs/example/hmctest.py, so I removed line 373 from my hmcfs.py 
file (a link to the file is at the bottom). With that line removed, another 
error occurred:
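For what it is worth, my guess is that the nvmem overlap arises because the HMC 
ranges in my script start at address 0, which on VExpress_EMM64 collides with the 
realview.nvmem flash region, since DRAM on that platform starts at 2 GB. A minimal 
sketch of what I believe the memory ranges would need to look like (the base address 
and this being the right place to set it are my assumptions, not something I have 
verified):

```python
# Sketch (gem5 config fragment, not standalone-runnable): on
# VExpress_EMM64 the DRAM window is believed to start at 0x80000000
# (2 GB), so ranges handed to the memory/HMC configuration should start
# there rather than at 0, or they overlap the realview.nvmem region.
from m5.objects import AddrRange

mem_base = 0x80000000  # assumed VExpress_EMM64 DRAM base
system.mem_ranges = [AddrRange(mem_base, size=options.mem_size)]
```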
$ ./build/ARM/gem5.opt configs/example/hmcfs.py --mem-type=HMC_2500_x32 
--arch=same --mem-channels=16 --cpu-type=timing 
--disk-image=images/disks/aarch64-ubuntu-trusty-headless.img --machine-type 
VExpress_EMM64
...
Global frequency set at 1000000000000 ticks per second
warn: DRAM device capacity (8 Mbytes) does not match the address range assigned 
(512 Mbytes)
...
warn: Highest ARM exception-level set to AArch32 but bootloader is for AArch64. 
Assuming you wanted these to match.
Listening for system connection on port 5900
Listening for system connection on port 3456
0: system.remote_gdb.listener: listening for remote gdb #0 on port 7000
warn: ClockedObject: More than one power state change request encountered 
within the same simulation tick
fatal: Unable to find destination for addr 0x10 on system.hmc_dev.xbar3
 @ tick 0
[findPort:build/ARM/mem/xbar.cc, line 359]
Memory Usage: 863116 KBytes
Program aborted at tick 0
--- BEGIN LIBC BACKTRACE ---
...
Meanwhile, X86 gives the following (both with the line kept and with it removed):
$ ./build/X86/gem5.opt configs/example/hmcfs.py --mem-type=HMC_2500_x32 
--arch=same --mem-channels=16 --cpu-type=timing 
--disk-image=images/disks/linux.img
...
Global frequency set at 1000000000000 ticks per second
warn: DRAM device capacity (8 Mbytes) does not match the address range assigned 
(512 Mbytes)
...
warn: Reading current count from inactive timer.
warn: ClockedObject: More than one power state change request encountered 
within the same simulation tick
**** REAL SIMULATION ****
info: Entering event queue @ 0.  Starting simulation...
gem5.opt: build/X86/arch/x86/pagetable_walker.cc:635: bool 
X86ISA::Walker::WalkerState::recvPacket(PacketPtr): Assertion 
`!delayedResponse' failed.
Program aborted at tick 110000
--- BEGIN LIBC BACKTRACE ---
...
Basically, I am stuck at this point. Could anyone point out my mistakes or 
suggest how to solve this problem?
The link to the configuration script I used: https://paste.debian.net/910584/

Thank you and kind regards

Serhat Gesoglu
The University of Manchester
_______________________________________________
gem5-users mailing list
[email protected]
http://m5sim.org/cgi-bin/mailman/listinfo/gem5-users
