Hi everyone,
For a while now I've been trying to implement SMARTS-like simulation
within M5. I'm almost there now but am stuck on one particular part.
For SMARTS simulation we repeatedly switch CPUs: Atomic for
fast-forwarding and functional warming, then O3 for measurement, then
back again. Each time I switch, I also change the memory mode to suit
the CPU that is about to run.
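For context, the loop follows the usual switching pattern from
configs/common/Simulation.py, roughly as sketched below (the interval
lengths and the system/CPU variable names are just placeholders for
whatever the real script defines):

import m5

# Sketch only: fast_intvl, detail_intvl, system, atomic_cpu and o3_cpu
# stand in for whatever the real configuration sets up.
while True:
    # Fast-forward / functional warming on the Atomic CPU.
    exit_event = m5.simulate(fast_intvl)
    if exit_event.getCause() != "simulate() limit reached":
        break
    # Drain, switch to timing memory mode and swap in O3 for measurement.
    m5.changeToTiming(system)
    m5.switchCpus([(atomic_cpu, o3_cpu)])
    m5.resume(system)
    exit_event = m5.simulate(detail_intvl)
    if exit_event.getCause() != "simulate() limit reached":
        break
    # Switch back to atomic mode for the next fast-forward phase.
    m5.changeToAtomic(system)
    m5.switchCpus([(o3_cpu, atomic_cpu)])
    m5.resume(system)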
This works fine except for one corner case. If O3 is told to drain
whilst waiting for an instruction cache access, it may finish draining
and be switched out before the response comes back. Atomic is then
switched in and receives the timing callback for that outstanding
request, which it can't handle, so it fails.
I've tried various schemes to address this, apart from simply ignoring
the timing callback in Atomic: that doesn't seem like the right place to
sort it out, since it isn't Atomic's fault that it's getting a timing
callback, but rather the result of something that happened before the
switch. However, I can't find a way to fix this cleanly.
Could someone help me out and point me in the right direction?
I've tried preventing O3 from draining while fetch is waiting on an
instruction cache access, but that isn't always easy to detect, since
the request can be squashed, leaving no trace of it.
I've also tried preventing the port from signaling that it has drained
until its event queue is empty. However, this has the knock-on effect
that the queue is sometimes not empty when switching from Atomic back to
O3, and since Atomic doesn't have a drain function, it just simulates
forever.
Is either of these the right way to solve this problem, and if so, which
path should I continue to pursue? Or is there a third option that I
haven't thought of?
Cheers
Tim
--
Timothy M. Jones
http://homepages.inf.ed.ac.uk/tjones1
The University of Edinburgh is a charitable body, registered in
Scotland, with registration number SC005336.