Hi Shervin,

On 07/25/2013 09:36 PM, shervin hajiamini wrote:
I want to dump the statistics periodically based on a fixed number of instructions. I have already followed this post:

http://www.mail-archive.com/[email protected]/msg07544.html

and this is the code I use in Simulation.py for periodic dumping (this code works in the gem5 master version):

testsys.cpu[0].scheduleInstStop(0, #instructions_upto_ROI, "dump statistics")
event = m5.simulate()
if event.getCause() == "dump statistics":
    m5.stats.dump()
    m5.stats.reset()
else:
    pass

In general, I'd strongly recommend against just ignoring unhandled events. gem5 can return exit events during normal simulations, like when sending it SIGINT (ctrl+c), in which case you should stop simulating. There are likely other cases as well.
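To make the point concrete, here is a minimal sketch of defensive exit-event handling. The real loop would call m5.simulate() and m5.stats.dump()/m5.stats.reset(); a stub event stream stands in here so the control flow is self-contained, and the cause strings are illustrative.

```python
class ExitEvent(object):
    """Stand-in for the object m5.simulate() returns."""
    def __init__(self, cause):
        self._cause = cause
    def getCause(self):
        return self._cause

def run_sampling(simulate, max_samples=100):
    """Dump stats on our own events; stop on anything unexpected."""
    dumps = 0
    for _ in range(max_samples):
        cause = simulate().getCause()
        if cause == "dump statistics":
            dumps += 1  # in gem5: m5.stats.dump(); m5.stats.reset()
        else:
            # Unknown cause (e.g. a SIGINT-triggered exit or the workload
            # finishing): stop simulating instead of silently ignoring it.
            return dumps, cause
    return dumps, "sample limit reached"

# Toy run: two stat dumps, then the workload exits.
events = iter([ExitEvent("dump statistics"),
               ExitEvent("dump statistics"),
               ExitEvent("target exited")])
print(run_sampling(lambda: next(events)))  # -> (2, 'target exited')
```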

According to this code, gem5 dumps the stats up to the ROI and then dumps stats periodically inside the ROI. However, this code only works for one core. I have tried using the following code in the case of multiple cores:
for i in xrange(np):
    testsys.cpu[i].scheduleInstStop(0, #instructions_upto_ROI, "dump statistics")
    event = m5.simulate()
    if event.getCause() == "dump statistics":
        m5.stats.dump()
        m5.stats.reset()
    else:
        pass

The code above won't work as intended: since each m5.simulate() call runs all CPUs, every core ends up executing roughly np*instructions_upto_ROI instructions (assuming the cores have roughly equal IPC).

It seems like you're trying to define your ROI as the point when all threads have executed exactly N instructions. You /could/ implement that by scheduling an instruction stop after N instructions in all CPUs and then simulating the system. After m5.simulate() returns a "dump statistics" event, you'd have to stop that CPU temporarily until all other CPUs have reached the ROI. AFAIK, there is currently no reliable way of temporarily suspending execution of one CPU from Python. There are a few (suspend|activate|halt|deallocate)Context calls that are part of the CPU API, but they aren't exported to Python and are completely undocumented. I /think/ calling suspendContext would suspend the thread temporarily and activateContext would reactivate the context, but I haven't tried it myself. However, I'd argue that this way of defining the ROI is broken since suspending application threads can cause deadlocks.
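For what it's worth, the bookkeeping for such an "all threads reach N instructions" ROI would look something like the sketch below. suspendContext/activateContext appear only in comments because, as noted above, they are not exported to Python; the stub ExitEvent (with a made-up getCpuId() accessor) stands in for what m5.simulate() would return.

```python
class ExitEvent(object):
    def __init__(self, cause, cpu_id):
        self._cause, self._cpu_id = cause, cpu_id
    def getCause(self):
        return self._cause
    def getCpuId(self):  # hypothetical accessor, for this sketch only
        return self._cpu_id

def wait_for_all_cpus(simulate, np):
    """Simulate until every CPU has reported its "roi" stop."""
    in_roi = set()
    while len(in_roi) < np:
        event = simulate()
        if event.getCause() != "roi":
            raise RuntimeError("unexpected exit: %s" % event.getCause())
        in_roi.add(event.getCpuId())
        # Real gem5 would need something like
        #   testsys.cpu[cpu].suspendContext(...)
        # here to park the CPU that just arrived...
    # ...and a matching activateContext(...) on every CPU once all of
    # them are in the ROI (with the deadlock caveat mentioned above).
    return sorted(in_roi)

# Toy run: three CPUs arrive at the ROI in an arbitrary order.
events = iter([ExitEvent("roi", 2), ExitEvent("roi", 0), ExitEvent("roi", 1)])
print(wait_for_all_cpus(lambda: next(events), 3))  # -> [0, 1, 2]
```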

Another possibility is to define the ROI as the point when one of the threads has executed N instructions. Just do something like this:

for i in xrange(np):
    testsys.cpu[i].scheduleInstStop(0, #instructions_upto_ROI, "roi")

event = m5.simulate()
if event.getCause() == "roi":
    do stuff
else:
    fail/warn/exit
...

Note that you need to give this exit event a different name from the event you use for periodic dumping, since you'll have to ignore any such event that arrives after you have started your periodic dumping.
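The point about distinct names can be sketched as follows: once periodic dumping has started, a late "roi" stop from another CPU must be recognised and skipped rather than counted as a sample. The stub ExitEvent stands in for m5.simulate()'s return value, and the event names match the ones used above.

```python
class ExitEvent(object):
    def __init__(self, cause):
        self._cause = cause
    def getCause(self):
        return self._cause

def sample_in_roi(simulate, num_samples):
    """Collect num_samples periodic dumps, ignoring stray "roi" stops."""
    samples = 0
    while samples < num_samples:
        cause = simulate().getCause()
        if cause == "dump statistics":
            samples += 1  # in gem5: m5.stats.dump(); m5.stats.reset()
            # (a real loop would also schedule the next instruction stop here)
        elif cause == "roi":
            pass          # another CPU reaching the ROI: safe to ignore
        else:
            break         # anything else: stop simulating
    return samples

# Toy run: a stale "roi" event arrives between two samples.
events = iter([ExitEvent("dump statistics"),
               ExitEvent("roi"),
               ExitEvent("dump statistics")])
print(sample_in_roi(lambda: next(events), 2))  # -> 2
```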

A third possibility is of course to define the ROI as the sum of the number of instructions executed in the system. See my reply [1] to your previous mail for more details.

for i in xrange(np):
    for n in range(#periods):
        testsys.cpu[i].scheduleInstStop(0, #instructions_per_period, "dump statistics")
        event = m5.simulate()
        if event.getCause() == "dump statistics":
            m5.stats.dump()
            m5.stats.reset()
        else:
            pass
where 'np' is the number of cores (the -n command-line option). With this code, the simulation goes into an infinite loop and never dumps the stats periodically for any of the cores.

The problem in this code snippet is similar to the previous one: you seem to assume that m5.simulate() only simulates one core. In order to sample accurately, you need to do something similar to the case where you want to reach the ROI. You should probably trigger a sample on the total number of instructions executed in the system rather than per CPU, but that really depends on what you are trying to accomplish.
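As a back-of-the-envelope illustration (not the scheme from [1]): if the np cores have roughly equal IPC, a per-CPU stop every P/np instructions corresponds to roughly P instructions executed system-wide, so dumping on every such event approximates system-wide sampling. All names here are illustrative.

```python
def per_cpu_stop_interval(total_period, np):
    """Instructions per CPU between samples, assuming equal IPC.

    Each of the np CPUs hits its stop once per sample period, so np
    stops together account for roughly total_period instructions.
    """
    if total_period % np:
        raise ValueError("pick a period divisible by the core count")
    return total_period // np

# e.g. sampling a 4-core system every 1,000,000 total instructions:
print(per_cpu_stop_interval(1000000, 4))  # -> 250000
```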

//Andreas

[1] http://www.mail-archive.com/[email protected]/msg08308.html
_______________________________________________
gem5-users mailing list
[email protected]
http://m5sim.org/cgi-bin/mailman/listinfo/gem5-users