Hi Gabe,

I am extremely sorry that I didn't notice your replies and requests until now; I was on vacation, and hundreds of emails had stacked up in the meantime.
Here is how I produced the checkpoints. First, I have some shell command lines that automatically generate scripts recognizable by gem5. The following is an example script for the SPEC CPU2006 benchmark "bwaves", in which "tbind" essentially uses sched_setaffinity to bind a program to a core:

#!/bin/sh
cd /bwaves_dir
/sbin/m5 checkpoint 10000000000
/sbin/m5 checkpoint 20000000000
/sbin/m5 checkpoint 30000000000
/sbin/m5 exit 30000000000
/sbin/m5 resetstats
./tbind 0 ./bwaves.base 2
echo 'Done: D'
/sbin/m5 exit
/sbin/m5 exit

Then I execute "build/X86_FS/gem5.opt -d bwaves configs/example/fs.py -n 1 --script=bwaves.script". That is how I got the checkpoints. Please let me know if you need any other information to accurately reproduce the problem.

Thanks,
Leonard

On Thu, Jul 7, 2011 at 4:39 AM, Gabe Black <[email protected]> wrote:
> Hello again. I'd like to reproduce your problem as accurately as
> possible, so could you please tell me how you're taking the checkpoint?
> It probably doesn't matter exactly how the checkpoint is taken, but it
> wouldn't hurt to get rid of that variable.
>
> Gabe
>
> On 07/05/11 20:17, Gabe Black wrote:
> > Yes, I think so. That's a very important device, since it's actually the
> > interrupt controller attached to each CPU. Without it, they won't be
> > able to send IPIs to each other or receive interrupts from devices. Our
> > implementation of x86 is different from the other ISAs, since I set it up
> > to send interrupts through the memory system instead of using the
> > Platform object. Most likely either the Interrupts object in x86 needs
> > to be extended to explicitly support switching CPUs, other parts of the
> > system need to be extended to hook it back up after a switch, or both,
> > probably both. I'll take a look when I get a chance.
> >
> > Gabe
> >
> > On 07/05/11 19:15, 冠男陳 wrote:
> >> Hi Gabe,
> >> I got the same problem.
> >> I use "m5/build/X86_FS/m5.debug configs/example/fs.py -n 1 --timing"
> >> to boot up the full system and set a checkpoint.
> >> But I can't restore it with "m5/build/X86_FS/m5.debug
> >> configs/example/fs.py -n 1 --timing -r 0".
> >>
> >> I used gdb to break at the panic line, m5/build/X86_FS/dev/io_device.cc:74:
> >> ===========================================
> >> 70 void
> >> 71 PioDevice::init()
> >> 72 {
> >> 73     if (!pioPort)
> >> 74         panic("Pio port not connected to anything!");
> >> 75     pioPort->sendStatusChange(Port::RangeChange);
> >> 76 }
> >> ===========================================
> >> and printed name():
> >> ===========================================
> >> (gdb) p name()
> >> $1 = {static npos = 18446744073709551615, _M_dataplus =
> >>     {<std::allocator<char>> = {<__gnu_cxx::new_allocator<char>> =
> >>     {<No data fields>}, <No data fields>},
> >>     _M_p = 0x222cc58 "system.switch_cpus.interrupts"}}
> >> ===========================================
> >> Does this mean that system.switch_cpus.interrupts is not connected to
> >> anything?
> >>
> >> Thank you,
> >> Mark Chen
> >>
> >> 2011/7/3 Gabe Black <[email protected]>:
> >>> Hi. What commands would I have to run to reproduce this using
> >>> unmodified gem5? Do you have any files I would need, like the
> >>> checkpoint file?
> >>>
> >>> What seems to be happening is that in the new system, one of the
> >>> devices (I can't tell which one) in your simulation isn't attached to
> >>> anything, and it's upset about that. To start looking into it yourself,
> >>> you could modify that panic to print the name of the device that port
> >>> is associated with. All SimObjects have a function called name() which
> >>> (not surprisingly) returns their name. Once you know which device it
> >>> is, you can look into why it's not getting hooked up when resuming
> >>> from the checkpoint.
> >>>
> >>> Gabe
> >>>
> >>> On 07/01/11 12:43, Sage wrote:
> >>>
> >>> Hi everyone,
> >>>
> >>> I have a problem with using "--checkpoint-restore" and "--timing
> >>> --caches" together in X86_FS mode. For instance, if I type
> >>> "build/X86_FS/gem5.opt configs/example/fs.py --checkpoint-restore=1
> >>> --timing --caches", I get the following errors:
> >>>
> >>> ===================================
> >>> panic: Pio port not connected to anything!
> >>>  @ cycle 0
> >>> [init:build/X86_FS/dev/io_device.cc, line 74]
> >>> Memory Usage: 3370604 KBytes
> >>> Program aborted at cycle 0
> >>> ===================================
> >>>
> >>> But if I use the flags separately, e.g. "build/X86_FS/gem5.opt
> >>> configs/example/fs.py --checkpoint-restore=1" or "build/X86_FS/gem5.opt
> >>> configs/example/fs.py --timing --caches", everything is OK.
> >>>
> >>> However, I do need to resume from a checkpoint first and then attach a
> >>> memory hierarchy. How can I solve this problem?
> >>>
> >>> I would greatly appreciate your help!
> >>>
> >>> Thanks,
> >>> Leonard
> >>>
> >>> --
> >>> Give our ability to our work, but our genius to our life!
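[Editor's note: Gabe's suggestion above, making the panic self-identifying, would amount to a one-line change to the PioDevice::init() shown in the gdb excerpt, roughly as sketched below. This is illustrative only; it assumes gem5's panic macro accepts cprintf-style format arguments (which it does in this era of the code base) and that name() is visible at that point.]

```cpp
void
PioDevice::init()
{
    if (!pioPort)
        // Include name() so the failing device identifies itself,
        // e.g. "Pio port of system.switch_cpus.interrupts not connected ..."
        panic("Pio port of %s not connected to anything!", name());
    pioPort->sendStatusChange(Port::RangeChange);
}
```

With that change, the panic message would point directly at system.switch_cpus.interrupts, matching what Mark found by printing name() in gdb.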
--
Give our ability to our work, but our genius to our life!
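[Editor's note: tbind's source is not shown in this thread, so the following is only a guess at its mechanics based on Leonard's description: a minimal Linux sketch of pinning to a core with sched_setaffinity before exec'ing the workload. All names here are hypothetical.]

```cpp
#include <sched.h>     // sched_setaffinity, cpu_set_t (Linux-specific)
#include <unistd.h>    // execvp

// Pin the calling process to a single core. The affinity mask survives
// exec, so a wrapper can bind itself and then exec the benchmark.
// Returns true on success.
bool bind_to_core(int core)
{
    cpu_set_t mask;
    CPU_ZERO(&mask);
    CPU_SET(core, &mask);
    // pid 0 means "the calling process"
    return sched_setaffinity(0, sizeof(mask), &mask) == 0;
}

// A tbind-style wrapper ("./tbind 0 ./bwaves.base 2") would then do
// something like:
//   if (bind_to_core(atoi(argv[1])))
//       execvp(argv[2], &argv[2]);   // only returns on failure
```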
_______________________________________________
gem5-users mailing list
[email protected]
http://m5sim.org/cgi-bin/mailman/listinfo/gem5-users
