Hi Joe,

I downloaded and compiled stream (on Alpha Tru64, not using the Linux
cross-compiler, though that shouldn't matter).  I also put in a quick
hack to make the main memory latency a random value between 20K and
40K ticks (see patch below).  I ran stream like this:

build/ALPHA_SE/m5.opt configs/example/se.py -c ~/stream/stream -d --caches

and it ran to completion successfully.  I also ran with
--trace-flags=Bus,MemoryAccess long enough to verify that there were
responses coming back out of order.  So simply returning responses out
of order to the cache does not seem to be sufficient to trigger your
assertion failure.  I'm not sure what the problem is, then.  If you're
running with different arguments, please let me know.
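
For reference, the trace flag just gets added to the same m5.opt command
line, i.e. something like:

build/ALPHA_SE/m5.opt --trace-flags=Bus,MemoryAccess configs/example/se.py -c ~/stream/stream -d --caches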

Is there a particular reason you're adding a whole new object that
doesn't derive from PhysicalMemory but still uses PhysicalMemory to
maintain storage?  Would it be possible for you to make
PhysicalMemory::calculateLatency() a virtual function, then implement
your model by deriving from PhysicalMemory and just overriding that
function?  That seems like it could eliminate a lot of the redundant code
you presumably have now, which might be the source of the problem.
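
To make that concrete, here's a rough sketch of the kind of subclass I have
in mind.  The names (RandomLatencyMemory, minLat, maxLat) are made up, I've
left out the constructor/params plumbing, and it assumes calculateLatency()
has been made virtual in PhysicalMemory:

#include "base/random.hh"
#include "mem/physical.hh"

// Derive from PhysicalMemory and override only the latency calculation;
// storage, ports, and the timing protocol are all inherited unchanged.
class RandomLatencyMemory : public PhysicalMemory
{
  private:
    Random randomGen;      // same Random type as in the patch below
    Tick minLat, maxLat;   // bounds for the randomized latency (illustrative)

  public:
    virtual Tick
    calculateLatency(PacketPtr pkt)
    {
        // Uniformly random latency in [minLat, maxLat]; nothing else about
        // the memory model changes.
        return minLat + randomGen.random((Tick)0, maxLat - minLat);
    }
};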

Steve

% hg diff
diff -r 0f92b8eeaec5 src/mem/physical.cc
--- a/src/mem/physical.cc       Thu Nov 08 23:50:10 2007 -0800
+++ b/src/mem/physical.cc       Sat Nov 10 08:13:29 2007 -0800
@@ -112,7 +112,9 @@ Tick
 Tick
 PhysicalMemory::calculateLatency(PacketPtr pkt)
 {
-    return lat;
+    Tick l = 20000*lat + randomGen.random((Tick)0, (Tick)20000*lat);
+    DPRINTF(MemoryAccess, "latency %d\n", l);
+    return l;
 }


diff -r 0f92b8eeaec5 src/mem/physical.hh
--- a/src/mem/physical.hh       Thu Nov 08 23:50:10 2007 -0800
+++ b/src/mem/physical.hh       Sat Nov 10 08:13:29 2007 -0800
@@ -37,6 +37,7 @@
 #include <map>
 #include <string>

+#include "base/random.hh"
 #include "base/range.hh"
 #include "mem/mem_object.hh"
 #include "mem/packet.hh"
@@ -149,6 +150,8 @@ class PhysicalMemory : public MemObject
     std::vector<MemoryPort*> ports;
     typedef std::vector<MemoryPort*>::iterator PortIterator;

+    Random randomGen;
+
   public:
     Addr new_page();
     uint64_t size() { return params()->range.size(); }


On 11/10/07, Joe Gross <[EMAIL PROTECTED]> wrote:
> Currently I take the standard se.py file and all its settings and just
> replace PhysicalMemory with my own simulator, leaving everything else alone.
> I also test using the stream benchmark
> (http://www.cs.virginia.edu/stream/ref.html), cross-compiled with the
> -O3 flag.
> I'm able to reproduce the error by using only the following code in
> recvTiming():
>
> bool nr = pkt->needsResponse();
> memory->doAtomicAccess(pkt);
> if (nr)
> {
> memory->ports[memory->lastPortIndex]->doSendTiming(pkt,
>     curTick + 18750 + rand()%400);
> }
> else
> {
>     delete pkt->req;
>     delete pkt;
> }
> return true;
>
> Interestingly, it seems to work when I use 1875, but breaks with 18750.
> Since the default CPU is 1 THz and the memory system is ~400/800 MHz,
> latencies can be up to 200000 CPU cycles.
>
> Joe
>
> Steve Reinhardt wrote:
> > OK, I'll take a look... I just hacked a randomized latency into
> > PhysicalMemory to try and reproduce the problem but it's still working
> > for me.  I'll run some longer tests to see if I can reproduce it.  If
> > not, would you be willing to send me your code so I can see if I can
> > track it down?
> >
> > Also, what kind of memory hierarchy are you using (uni- vs
> > multiprocessor, # of cache levels, etc.)?
> >
> > Steve
> >
> > On 11/9/07, Joe Gross <[EMAIL PROTECTED]> wrote:
> >
> >> Same problem even when doAtomicAccess() is performed before delaying the
> >> return of the packet (thus ensuring that this is performed in order). I
> >> checked to see if the packets were being returned in order with any
> >> configuration of my memory simulator and they frequently aren't (some
> >> ranks/banks see much more traffic). So it seems the problem is not the
> >> doAtomicAccess() itself but rather sending the packets back in a
> >> different order, and somehow this is different from b3, where the same
> >> reordering worked fine.
> >>
> >> Joe
> >>
> >>
_______________________________________________
m5-users mailing list
m5-users@m5sim.org
http://m5sim.org/cgi-bin/mailman/listinfo/m5-users
