Hi Andrew, 

You should be able to recompile gem5 with USE_CHECKER=1 on the build
command line; that build will include the checker and run it when you
restore to the O3 CPU.
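If it helps, the rebuild would look something like this (a sketch, not a verified recipe: it assumes the ARM fast target from your run command and that USE_CHECKER is accepted as a scons build variable in your revision):

```shell
# Rebuild the ARM fast binary with the checker CPU compiled in.
# USE_CHECKER=1 is passed as a scons build variable on the command line.
scons build/ARM/gem5.fast USE_CHECKER=1
```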

Thanks, 

Ali 

On 01.03.2012 14:02, Andrew Cebulski wrote:

> Hi Ali,
>
> Okay, thanks, I'll try out the checker CPU. Is this the best resource
> available on how to use the Checker CPU? -- http://gem5.org/Checker [3]
>
> Also, my run restoring the O3 CPU from my checkpoint has the same result:
>
> Detailed CPU (checkpoint restore): system.cpu.committedInsts = 646985567
> system.cpu.fetch.Insts = 648951747
>
> Thanks,
>
> Andrew
>
> On Thu, Mar 1, 2012 at 2:40 PM, Ali Saidi <[email protected] [4]> wrote:
> 
>> Hi Andrew,
>>
>> My first guess is that this CPU takes a different code path or makes
>> different scheduler decisions, which lengthens execution. Another
>> possibility is that the O3 CPU as configured by the arm-detailed
>> configuration has some issue; that is possible but not very likely.
>> You could try restoring from the checkpoint and running with the
>> checker CPU. This creates a lightweight, atomic-like CPU that sits
>> next to the O3 core and verifies its execution, which might tell you
>> whether there is a bug in the O3 model.
>>
>> Thanks,
>>
>> Ali
>>
>> On 01.03.2012 13:04, Andrew Cebulski wrote:
>> 
>>> Hi,
>>>
>>> I'm experiencing some problems that I currently attribute to restoring
>>> from a checkpoint and then switching to an arm_detailed CPU
>>> (O3_ARM_v7a_3). I first noticed the problem because the committed
>>> instruction counts for a benchmark I'm running do not line up between
>>> different CPUs (they differ by roughly 170M instructions). The stats
>>> below are reset right before running the benchmark, then dumped
>>> afterwards:
>>>
>>> Atomic CPU (no checkpoint restore): system.cpu.numInsts = 476085242
>>> Detailed CPU (no checkpoint restore): system.cpu.committedInsts = 476128320
>>> system.cpu.fetch.Insts = 478463491
>>> Arm_detailed CPU (checkpoint restore): system.switch_cpus_1.committedInsts = 646468886
>>> system.switch_cpus_1.fetch.Insts = 660969371
>>> Arm_detailed CPU (no checkpoint restore): system.cpu.committedInsts = 476107801
>>> system.cpu.fetch.Insts = 491814681
>>>
>>> I included both the committed and fetched instruction counts to see
>>> whether the problem is fetched instructions being counted as committed
>>> even when they are not (i.e. instructions not getting squashed). The
>>> stats above suggest that is not the case, since the arm_detailed run
>>> without a checkpoint shows roughly the same gap between fetched and
>>> committed instructions. I noticed that the switched-in arm_detailed
>>> CPU, when restoring from a checkpoint, lacks both an icache and a
>>> dcache as children, but I read in a previous post that they are
>>> connected to fetch/iew respectively, so this is probably not the
>>> issue. I assume it's just not shown explicitly in the config.ini
>>> file...
>>>
>>> I'm running a test right now to see whether switching to a regular
>>> DerivO3CPU has the same issue. Regardless of its results, does anyone
>>> have any idea why I'm seeing roughly 170M more committed instructions
>>> in the arm_detailed CPU run when I restore from a checkpoint? I've
>>> attached my config file from the arm_detailed-with-checkpoint run for
>>> reference.
>>>
>>> Here's the run command for when I use a checkpoint:
>>>
>>> build/ARM/gem5.fast -d [dir] configs/example/fs.py -b [benchmark] -r 1
>>> --checkpoint-dir=[chkpt-dir] --caches -s
>>>
>>> Lastly, I'm running off of revision 8813 from 2/3/12. Let me know if
>>> you need any more info (i.e. stats).
>>>
>>> Thanks,
>>>
>>> Andrew
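For comparing counts like the ones quoted above, filtering each run's stats.txt and diffing the results is a quick sanity check. A sketch (the two-entry sample file here is fabricated purely to show the pattern; in practice, point grep at the stats.txt in each run's -d output directory):

```shell
# Filter the instruction-count lines out of a gem5 stats.txt so two runs
# can be compared side by side. Fabricated sample stands in for a real
# stats file; gem5 stats are whitespace-separated name/value lines.
printf '%s\n' \
  'system.switch_cpus_1.committedInsts 646468886 # committed insts' \
  'system.switch_cpus_1.fetch.Insts 660969371 # fetched insts' \
  'system.clk_domain.clock 1000 # clock period' > stats_sample.txt
grep -E 'committedInsts|fetch\.Insts' stats_sample.txt
```

Running the same grep over the checkpoint and no-checkpoint stats files makes the 170M gap easy to isolate to specific counters.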
>> 
>> _______________________________________________
>> gem5-users mailing list
>> [email protected] [1]
>> http://m5sim.org/cgi-bin/mailman/listinfo/gem5-users [2]




Links:
------
[1] mailto:[email protected]
[2] http://m5sim.org/cgi-bin/mailman/listinfo/gem5-users
[3] http://gem5.org/Checker
[4] mailto:[email protected]