Thanks Korey, I see what you're saying. I guess I would have to
implement the code for walking the memory hierarchy to call drain again.
The problem is that some of these objects already have drain functions
that mistakenly report that draining is done, so they will just keep
saying so on a later pass. I'm thinking here of the SimpleTimingPort
that is giving me difficulties. Basically, I've got to wait for that
object's event queue to become empty, which I've tried to fold into the
current drain function, but that causes the problems with
AtomicSimpleCPU I described.
Cheers
Tim
On 06/07/2010 15:04, Korey Sewell wrote:
I had a similar issue a while back that I never fully solved, but I had
some thoughts on it that may be relevant here.
In short, waiting for the CPU to drain isn't quite enough, because of
the timing-mode intricacies you witnessed that affect simulation.
But what I hypothesize might work is that *after* you drain the CPU,
you directly wait for the memory system to drain. If I'm not mistaken,
the various queues throughout the caches/memory system have drain
functions that you can call. It might work to walk the memory hierarchy
and call drain once again after the CPU drain; that would signify a
clean switch. (And if that functionality isn't there, some code to walk
the memory hierarchy and ensure all the relevant queues are empty would
seem to be the right thing to do.)
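The walk Korey describes could be sketched roughly like this. This is a toy model, not real M5 code: `Drainable`, `drainHierarchy`, and the one-event-retired-per-pass behaviour are all illustrative assumptions.

```cpp
#include <cassert>
#include <vector>

// Hypothetical stand-in for an M5 SimObject with a drain interface.
// Real M5 objects signal completion asynchronously via a drain event;
// here drain() simply reports whether the object has fully drained.
struct Drainable {
    int pendingEvents;            // e.g. outstanding queued packets

    bool drain() {                // returns true once nothing is pending
        if (pendingEvents > 0)
            --pendingEvents;      // model one event retiring per pass
        return pendingEvents == 0;
    }
};

// Walk the (flattened) memory hierarchy and keep re-calling drain()
// until every object reports empty -- the "clean switch" condition.
bool drainHierarchy(std::vector<Drainable> &objs, int maxPasses = 100)
{
    for (int pass = 0; pass < maxPasses; ++pass) {
        bool allDrained = true;
        for (auto &o : objs)
            if (!o.drain())
                allDrained = false;
        if (allDrained)
            return true;          // safe to switch CPUs now
    }
    return false;                 // gave up: something never drained
}
```

In the real hierarchy you would visit each object's ports and queues rather than a flat vector, but the repeated-polling structure is the point: keep calling drain until every object agrees it is empty.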
On Tue, Jul 6, 2010 at 2:22 PM, Timothy M Jones <tjon...@inf.ed.ac.uk> wrote:
Hi everyone,
For a while now I've been trying to implement SMARTS-like simulation
within M5. I'm almost there now but am stuck on one particular part.
For SMARTS simulation we repeatedly switch CPUs between Atomic (for
fast-forwarding and functional warming), then O3 (for measurements)
and back again. When switching, I change the memory mode to suit
the CPU I'm next using.
This works fine except for a corner case. If O3 gets told to drain
whilst waiting for an instruction cache access, it may finish draining
and be switched out before the request is returned. Then Atomic is
switched in, gets a timing callback, and fails.
I've tried various schemes to address this (apart from simply ignoring
the timing callback in Atomic, since that doesn't seem like the right
place to sort this out; after all, it's not Atomic's fault it's getting
a timing callback, but a problem with something that happened
beforehand). However, I can't find a solution that fixes this easily.
Could someone help me out and point me in the best direction to go?
I've tried preventing O3 from draining if fetch is waiting on an
instruction cache access, but that's not always possible to spot,
since the request can be squashed, leaving no trace of it.
I've tried preventing the port from signaling that it's drained
until its event queue is empty. However, this has the knock-on
effect that it is sometimes not empty when switching from Atomic to
O3. Since Atomic doesn't have a drain function, it simulates forever.
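To make that second scheme concrete, here is a toy model of a port that withholds its drained signal until its queue is empty. The names are hypothetical; this is not the real SimpleTimingPort interface.

```cpp
#include <cassert>
#include <queue>

// Hypothetical sketch of the second scheme: a port that only admits
// to being drained once its send queue is empty.  Illustrative only,
// not the actual SimpleTimingPort API.
struct TimingPortModel {
    std::queue<int> sendQueue;    // packets still waiting to be sent

    bool drained() const {        // withhold the signal while busy
        return sendQueue.empty();
    }

    void retireOne() {            // one queued packet completes
        if (!sendQueue.empty())
            sendQueue.pop();
    }
};
```

The knock-on effect then follows directly: if `sendQueue` is still non-empty when switching from Atomic to O3, and Atomic has no drain function of its own to flush it, nothing ever empties the queue and simulation never terminates.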
Are either of these solutions the best way to solve this problem and
if so, which is the path I should continue to pursue? Or, is there
a third option that I haven't thought about?
Cheers
Tim
--
Timothy M. Jones
http://homepages.inf.ed.ac.uk/tjones1
The University of Edinburgh is a charitable body, registered in
Scotland, with registration number SC005336.
_______________________________________________
m5-dev mailing list
m5-dev@m5sim.org
http://m5sim.org/mailman/listinfo/m5-dev
--
- Korey