[m5-dev] Cron m5test@zizzer /z/m5/regression/do-regression quick

2011-02-08 Thread Cron Daemon
* build/ALPHA_SE_MESI_CMP_directory/tests/fast/quick/00.hello/alpha/tru64/simple-timing-ruby-MESI_CMP_directory FAILED!
* build/ALPHA_SE_MESI_CMP_directory/tests/fast/quick/00.hello/alpha/linux/simple-timing-ruby-MESI_CMP_directory FAILED!
* build/ALPHA_SE_MOESI_hammer/tests/fast/quick/60.rubytest/alpha/linux/rubytest-ruby-MOESI_hammer FAILED!
* build/ALPHA_SE_MESI_CMP_directory/tests/fast/quick/50.memtest/alpha/linux/memtest-ruby-MESI_CMP_directory FAILED!
* build/ALPHA_SE_MOESI_hammer/tests/fast/quick/00.hello/alpha/tru64/simple-timing-ruby-MOESI_hammer FAILED!
* build/ALPHA_SE_MOESI_hammer/tests/fast/quick/00.hello/alpha/linux/simple-timing-ruby-MOESI_hammer FAILED!
* build/ALPHA_SE_MOESI_CMP_directory/tests/fast/quick/00.hello/alpha/linux/simple-timing-ruby-MOESI_CMP_directory FAILED!
* build/ALPHA_SE_MOESI_CMP_directory/tests/fast/quick/00.hello/alpha/tru64/simple-timing-ruby-MOESI_CMP_directory FAILED!
* build/ALPHA_SE_MOESI_CMP_token/tests/fast/quick/60.rubytest/alpha/linux/rubytest-ruby-MOESI_CMP_token FAILED!
* build/ALPHA_SE_MOESI_CMP_token/tests/fast/quick/00.hello/alpha/tru64/simple-timing-ruby-MOESI_CMP_token FAILED!
* build/ALPHA_SE_MOESI_CMP_token/tests/fast/quick/00.hello/alpha/linux/simple-timing-ruby-MOESI_CMP_token FAILED!
* build/ALPHA_SE_MOESI_hammer/tests/fast/quick/50.memtest/alpha/linux/memtest-ruby-MOESI_hammer FAILED!
* build/ALPHA_SE_MOESI_CMP_token/tests/fast/quick/50.memtest/alpha/linux/memtest-ruby-MOESI_CMP_token FAILED!
* build/ALPHA_SE_MOESI_CMP_directory/tests/fast/quick/50.memtest/alpha/linux/memtest-ruby-MOESI_CMP_directory FAILED!
scons: *** Source `tests/quick/01.hello-2T-smt/ref/alpha/linux/o3-timing/stats.txt' not found, needed by target `build/ALPHA_SE/tests/fast/quick/01.hello-2T-smt/alpha/linux/o3-timing/status'.
* build/ALPHA_SE/tests/fast/quick/60.rubytest/alpha/linux/rubytest-ruby passed.
* build/ALPHA_SE/tests/fast/quick/00.hello/alpha/linux/simple-timing-ruby passed.
* build/ALPHA_SE/tests/fast/quick/00.hello/alpha/tru64/o3-timing passed.
* build/ALPHA_SE/tests/fast/quick/30.eio-mp/alpha/eio/simple-atomic-mp passed.
* build/ALPHA_SE/tests/fast/quick/30.eio-mp/alpha/eio/simple-timing-mp passed.
* build/ALPHA_SE_MESI_CMP_directory/tests/fast/quick/60.rubytest/alpha/linux/rubytest-ruby-MESI_CMP_directory passed.
* build/ALPHA_SE/tests/fast/quick/00.hello/alpha/tru64/simple-atomic passed.
* build/ALPHA_SE/tests/fast/quick/00.hello/alpha/tru64/simple-timing passed.
* build/ALPHA_SE/tests/fast/quick/20.eio-short/alpha/eio/simple-atomic passed.
* build/ALPHA_SE/tests/fast/quick/00.hello/alpha/linux/simple-atomic passed.
* build/ALPHA_SE/tests/fast/quick/00.hello/alpha/tru64/simple-timing-ruby passed.
* build/ALPHA_SE/tests/fast/quick/20.eio-short/alpha/eio/simple-timing passed.
* build/ALPHA_SE/tests/fast/quick/00.hello/alpha/linux/simple-timing passed.
* build/ALPHA_SE/tests/fast/quick/00.hello/alpha/linux/o3-timing passed.
* build/ALPHA_SE/tests/fast/quick/00.hello/alpha/linux/inorder-timing passed.
* build/ALPHA_SE_MOESI_CMP_directory/tests/fast/quick/60.rubytest/alpha/linux/rubytest-ruby-MOESI_CMP_directory passed.
* build/ALPHA_SE/tests/fast/quick/50.memtest/alpha/linux/memtest-ruby passed.
* build/ALPHA_FS/tests/fast/quick/10.linux-boot/alpha/linux/tsunami-simple-timing passed.
* build/ALPHA_FS/tests/fast/quick/10.linux-boot/alpha/linux/tsunami-simple-timing-dual passed.
* build/ALPHA_FS/tests/fast/quick/10.linux-boot/alpha/linux/tsunami-simple-atomic-dual passed.
* build/ALPHA_FS/tests/fast/quick/80.netperf-stream/alpha/linux/twosys-tsunami-simple-atomic passed.
* build/ALPHA_FS/tests/fast/quick/10.linux-boot/alpha/linux/tsunami-simple-atomic passed.
* build/ALPHA_SE/tests/fast/quick/50.memtest/alpha/linux/memtest passed.
* build/MIPS_SE/tests/fast/quick/00.hello/mips/linux/simple-atomic passed.
* build/MIPS_SE/tests/fast/quick/00.hello/mips/linux/simple-timing-ruby passed.
* build/MIPS_SE/tests/fast/quick/00.hello/mips/linux/simple-timing passed.
* build/MIPS_SE/tests/fast/quick/00.hello/mips/linux/o3-timing passed.
* build/MIPS_SE/tests/fast/quick/00.hello/mips/linux/inorder-timing passed.
* build/POWER_SE/tests/fast/quick/00.hello/power/linux/simple-atomic passed.
* build/POWER_SE/tests/fast/quick/00.hello/power/linux/o3-timing passed.
* build/SPARC_SE/tests/fast/quick/00.hello/sparc/linux/simple-timing passed.
* build/SPARC_SE/tests/fast/quick/02.insttest/sparc/linux/simple-atomic passed.
* build/SPARC_SE/tests/fast/quick/40.m5threads-test-atomic/sparc/linux/simple-timing-mp passed.
* build/SPARC_SE/tests/fast/quick/00.hello/sparc/linux/simple-timing-ruby passed.
* build/SPARC_SE/tests/fast/quick/40.m5threads-test-atomic/sparc/linux/simple-atomic-mp passed.

Re: [m5-dev] changeset in m5: Ruby: Fixes MESI CMP directory protocol

2011-02-08 Thread Nilay Vaish
I found the error in MESI CMP: in the L2 cache controller, the TBE pointer
was not being set to NULL when the TBE was deallocated.
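
For illustration, a minimal C++ sketch of the failure mode (the type and names are hypothetical stand-ins, not the actual SLICC-generated code):

#include <cstddef>
#include <cstdint>

// Hypothetical stand-in for the generated TBE bookkeeping.
struct TBE { uint64_t address; };

TBE *m_tbe = new TBE;   // set when a TBE is allocated

void deallocateTBE()
{
    delete m_tbe;       // storage goes back to the allocator here
    m_tbe = NULL;       // the missing step: without this, a stale
                        // pointer survives and a later read (e.g. by
                        // the tracing code) dereferences freed memory
}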


--
Nilay

On Mon, 7 Feb 2011, Arkaprava Basu wrote:


Nilay,

 If the same test completes with a larger threshold, then it is certainly a
case of a false positive and certainly NOT a deadlock (though it may be a
case of starvation). If it were actually a deadlock, it would simply have
reported the deadlock after some more simulation time.
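
A hedged sketch of why the threshold check cannot tell the two apart (names are illustrative, not gem5's actual members): it measures only the time since the last forward progress, so a starved-but-live request trips it exactly like a dead one.

#include <cstdint>
#include <cstdio>

// True for a real deadlock, but equally for a request that is merely
// starved; raising the threshold only filters out the slow-but-live case.
bool possibleDeadlock(uint64_t curCycle, uint64_t lastProgressCycle,
                      uint64_t threshold)
{
    return curCycle - lastProgressCycle >= threshold;
}

int main()
{
    // Starvation scenario: last progress at cycle 0, checked at cycle
    // 2M. A 1M threshold flags it even though the request would
    // eventually complete; a 5M threshold lets the same run pass.
    std::printf("1M threshold fires: %d\n",
                possibleDeadlock(2000000, 0, 1000000));
    std::printf("5M threshold fires: %d\n",
                possibleDeadlock(2000000, 0, 5000000));
    return 0;
}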


On extending stall and wait to other protocols, you are absolutely correct.
Many of the starvation issues (and thus perceived deadlocks) show up due to
unfairness in handling coherence requests. After the protocol trace
segmentation issue is solved, I can get MESI_CMP_directory to use stall and
wait.
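
A rough C++ analogue of what stall and wait buys (a sketch under assumed names; the real mechanism is a SLICC primitive): blocked requests are parked per address and replayed in arrival order, instead of being recycled behind newer requests.

#include <cstdint>
#include <deque>
#include <map>
#include <utility>

typedef uint64_t Addr;
struct Request { Addr addr; int id; };

// Requests that cannot be serviced yet, parked per address in FIFO order.
std::map<Addr, std::deque<Request> > stallMap;

void stallAndWait(const Request &req)
{
    stallMap[req.addr].push_back(req);  // preserves arrival order
}

// When the blocking transaction for 'addr' completes, the parked
// requests are replayed oldest-first, so a newer request for the same
// address can never starve an older one.
std::deque<Request> wakeUp(Addr addr)
{
    std::deque<Request> parked;
    std::swap(parked, stallMap[addr]);
    stallMap.erase(addr);
    return parked;
}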


I fully agree with Brad's argument about bumping up the threshold for the
testers. Having a large threshold (i.e. 5 M) does not hurt much: it takes a
bit more simulation time to report a deadlock, but if there is an actual
deadlock it will still be reported. So I would vote to stick with Brad's
threshold number in the patch.


Thanks
Arka


On 02/07/2011 12:39 PM, Nilay Vaish wrote:

Brad,

I think 5,000,000 is a lot. IIRC, a million worked the last time I tested
the protocol. We can check the patch in, though I am of the view that we
should let it remain as is until we can generate the protocol trace and make
sure that this is not an actual deadlock. I first need to find the reason
for the segmentation fault, which occurs only when the trace is being
collected.


Another issue is that we need to extend stall and wait to the other
protocols as well. This, I believe, may help in reducing such deadlock
instances. While working on MESI CMP, I often saw earlier requests remain
unfulfilled because of later requests for the same address.


--
Nilay




[m5-dev] changeset in m5: MESI CMP: Unset TBE pointer in L2 cache controller

2011-02-08 Thread Nilay Vaish
changeset 9c245e375e05 in /z/repo/m5
details: http://repo.m5sim.org/m5?cmd=changeset;node=9c245e375e05
description:
MESI CMP: Unset TBE pointer in L2 cache controller
The TBE pointer in the MESI CMP implementation was not being set to NULL
when the TBE is deallocated. This resulted in a segmentation fault when
testing the protocol with the ProtocolTrace flag switched on.

diffstat:

 src/mem/protocol/MESI_CMP_directory-L2cache.sm |  1 +
 1 files changed, 1 insertions(+), 0 deletions(-)

diffs (11 lines):

diff -r cb7d946d4211 -r 9c245e375e05 src/mem/protocol/MESI_CMP_directory-L2cache.sm
--- a/src/mem/protocol/MESI_CMP_directory-L2cache.sm    Mon Feb 07 19:23:13 2011 -0800
+++ b/src/mem/protocol/MESI_CMP_directory-L2cache.sm    Tue Feb 08 07:47:02 2011 -0600
@@ -593,6 +593,7 @@
 
   action(s_deallocateTBE, "s", desc="Deallocate external TBE") {
     L2_TBEs.deallocate(address);
+    unset_tbe();
   }
 
   action(jj_popL1RequestQueue, "\j", desc="Pop incoming L1 request queue") {


[m5-dev] Missing _ in ruby_fs.py

2011-02-08 Thread Nilay Vaish

Hi Brad, did you miss out on the '_' in _dma_devices?

--
Nilay


diff -r 6f5299ff8260 -r 00ad807ed2ca configs/example/ruby_fs.py
--- a/configs/example/ruby_fs.py    Sun Feb 06 22:14:18 2011 -0800
+++ b/configs/example/ruby_fs.py    Sun Feb 06 22:14:18 2011 -0800
@@ -109,12 +109,19 @@
 
 CPUClass.clock = options.clock
 
-system = makeLinuxAlphaRubySystem(test_mem_mode, bm[0])
-
-system.ruby = Ruby.create_system(options,
-                                 system,
-                                 system.piobus,
-                                 system._dma_devices)
+if buildEnv['TARGET_ISA'] == "alpha":
+    system = makeLinuxAlphaRubySystem(test_mem_mode, bm[0])
+    system.ruby = Ruby.create_system(options,
+                                     system,
+                                     system.piobus,
+                                     system.dma_devices)
+elif buildEnv['TARGET_ISA'] == "x86":
+    system = makeLinuxX86System(test_mem_mode, options.num_cpus, bm[0], True)
+    system.ruby = Ruby.create_system(options,
+                                     system,
+                                     system.piobus)
+else:
+    fatal("incapable of building non-alpha or non-x86 full system!")
 
 system.cpu = [CPUClass(cpu_id=i) for i in xrange(options.num_cpus)]


Re: [m5-dev] Cron m5test@zizzer /z/m5/regression/do-regression quick

2011-02-08 Thread Beckmann, Brad
Hi Gabe,

Since you successfully updated the tests I can't run (ARM_FS), I can take
care of the remaining errors (i.e. the ruby protocol tests). I have a few
minor fixes I want to check in that I need to run the regression tester
against anyway.

Brad


 -Original Message-
 From: m5-dev-boun...@m5sim.org [mailto:m5-dev-boun...@m5sim.org]
 On Behalf Of Gabe Black
 Sent: Tuesday, February 08, 2011 12:15 AM
 To: M5 Developer List
 Subject: Re: [m5-dev] Cron m5test@zizzer /z/m5/regression/do-regression quick
 
 Hmm. I didn't realize all the build targets for ruby protocols had their own
 separate regressions. I'll have to run those too.
 
 Gabe
 
 On 02/08/11 00:17, Cron Daemon wrote:
 * build/ALPHA_SE_MESI_CMP_directory/tests/fast/quick/00.hello/alpha/tru64/simple-timing-ruby-MESI_CMP_directory FAILED!
 * build/ALPHA_SE_MESI_CMP_directory/tests/fast/quick/00.hello/alpha/linux/simple-timing-ruby-MESI_CMP_directory FAILED!
 * build/ALPHA_SE_MOESI_hammer/tests/fast/quick/60.rubytest/alpha/linux/rubytest-ruby-MOESI_hammer FAILED!
 * build/ALPHA_SE_MESI_CMP_directory/tests/fast/quick/50.memtest/alpha/linux/memtest-ruby-MESI_CMP_directory FAILED!
 * build/ALPHA_SE_MOESI_hammer/tests/fast/quick/00.hello/alpha/tru64/simple-timing-ruby-MOESI_hammer FAILED!
 * build/ALPHA_SE_MOESI_hammer/tests/fast/quick/00.hello/alpha/linux/simple-timing-ruby-MOESI_hammer FAILED!
 * build/ALPHA_SE_MOESI_CMP_directory/tests/fast/quick/00.hello/alpha/linux/simple-timing-ruby-MOESI_CMP_directory FAILED!
 * build/ALPHA_SE_MOESI_CMP_directory/tests/fast/quick/00.hello/alpha/tru64/simple-timing-ruby-MOESI_CMP_directory FAILED!
 * build/ALPHA_SE_MOESI_CMP_token/tests/fast/quick/60.rubytest/alpha/linux/rubytest-ruby-MOESI_CMP_token FAILED!
 * build/ALPHA_SE_MOESI_CMP_token/tests/fast/quick/00.hello/alpha/tru64/simple-timing-ruby-MOESI_CMP_token FAILED!
 * build/ALPHA_SE_MOESI_CMP_token/tests/fast/quick/00.hello/alpha/linux/simple-timing-ruby-MOESI_CMP_token FAILED!
 * build/ALPHA_SE_MOESI_hammer/tests/fast/quick/50.memtest/alpha/linux/memtest-ruby-MOESI_hammer FAILED!
 * build/ALPHA_SE_MOESI_CMP_token/tests/fast/quick/50.memtest/alpha/linux/memtest-ruby-MOESI_CMP_token FAILED!
 * build/ALPHA_SE_MOESI_CMP_directory/tests/fast/quick/50.memtest/alpha/linux/memtest-ruby-MOESI_CMP_directory FAILED!
 scons: *** Source `tests/quick/01.hello-2T-smt/ref/alpha/linux/o3-timing/stats.txt' not found, needed by target `build/ALPHA_SE/tests/fast/quick/01.hello-2T-smt/alpha/linux/o3-timing/status'.
 * build/ALPHA_SE/tests/fast/quick/60.rubytest/alpha/linux/rubytest-ruby passed.
 * build/ALPHA_SE/tests/fast/quick/00.hello/alpha/linux/simple-timing-ruby passed.
 * build/ALPHA_SE/tests/fast/quick/00.hello/alpha/tru64/o3-timing passed.
 * build/ALPHA_SE/tests/fast/quick/30.eio-mp/alpha/eio/simple-atomic-mp passed.
 * build/ALPHA_SE/tests/fast/quick/30.eio-mp/alpha/eio/simple-timing-mp passed.
 * build/ALPHA_SE_MESI_CMP_directory/tests/fast/quick/60.rubytest/alpha/linux/rubytest-ruby-MESI_CMP_directory passed.
 * build/ALPHA_SE/tests/fast/quick/00.hello/alpha/tru64/simple-atomic passed.
 * build/ALPHA_SE/tests/fast/quick/00.hello/alpha/tru64/simple-timing passed.
 * build/ALPHA_SE/tests/fast/quick/20.eio-short/alpha/eio/simple-atomic passed.
 * build/ALPHA_SE/tests/fast/quick/00.hello/alpha/linux/simple-atomic passed.
 * build/ALPHA_SE/tests/fast/quick/00.hello/alpha/tru64/simple-timing-ruby passed.
 * build/ALPHA_SE/tests/fast/quick/20.eio-short/alpha/eio/simple-timing passed.
 * build/ALPHA_SE/tests/fast/quick/00.hello/alpha/linux/simple-timing passed.
 * build/ALPHA_SE/tests/fast/quick/00.hello/alpha/linux/o3-timing passed.
 * build/ALPHA_SE/tests/fast/quick/00.hello/alpha/linux/inorder-timing passed.
 * build/ALPHA_SE_MOESI_CMP_directory/tests/fast/quick/60.rubytest/alpha/linux/rubytest-ruby-MOESI_CMP_directory passed.
 * build/ALPHA_SE/tests/fast/quick/50.memtest/alpha/linux/memtest-ruby passed.
 * build/ALPHA_FS/tests/fast/quick/10.linux-boot/alpha/linux/tsunami-simple-timing passed.
 * build/ALPHA_FS/tests/fast/quick/10.linux-boot/alpha/linux/tsunami-simple-timing-dual passed.
 * build/ALPHA_FS/tests/fast/quick/10.linux-boot/alpha/linux/tsunami-simple-atomic-dual passed.
 * build/ALPHA_FS/tests/fast/quick/80.netperf-stream/alpha/linux/twosys-tsunami-simple-atomic passed.
 * build/ALPHA_FS/tests/fast/quick/10.linux-boot/alpha/linux/tsunami-simple-atomic passed.
 * build/ALPHA_SE/tests/fast/quick/50.memtest/alpha/linux/memtest passed.
 * build/MIPS_SE/tests/fast/quick/00.hello/mips/linux/simple-atomic passed.
 * build/MIPS_SE/tests/fast/quick/00.hello/mips/linux/simple-timing-ruby passed.
 * build/MIPS_SE/tests/fast/quick/00.hello/mips/linux/simple-timing
 

Re: [m5-dev] Missing _ in ruby_fs.py

2011-02-08 Thread Beckmann, Brad
Ah, yes I did.  This actually reminds me that I need to fix how dma devices are
connected within Ruby for x86_FS.  I'll push a patch that fixes these issues
soon.

Brad


 -Original Message-
 From: m5-dev-boun...@m5sim.org [mailto:m5-dev-boun...@m5sim.org]
 On Behalf Of Nilay Vaish
 Sent: Tuesday, February 08, 2011 9:54 AM
 To: m5-dev@m5sim.org
 Subject: [m5-dev] Missing _ in ruby_fs.py
 
 Hi Brad, did you miss out on the '_' in _dma_devices?
 
 --
 Nilay
 
 
 diff -r 6f5299ff8260 -r 00ad807ed2ca configs/example/ruby_fs.py
 --- a/configs/example/ruby_fs.py    Sun Feb 06 22:14:18 2011 -0800
 +++ b/configs/example/ruby_fs.py    Sun Feb 06 22:14:18 2011 -0800
 @@ -109,12 +109,19 @@
 
  CPUClass.clock = options.clock
 
 -system = makeLinuxAlphaRubySystem(test_mem_mode, bm[0])
 -
 -system.ruby = Ruby.create_system(options,
 -                                 system,
 -                                 system.piobus,
 -                                 system._dma_devices)
 +if buildEnv['TARGET_ISA'] == "alpha":
 +    system = makeLinuxAlphaRubySystem(test_mem_mode, bm[0])
 +    system.ruby = Ruby.create_system(options,
 +                                     system,
 +                                     system.piobus,
 +                                     system.dma_devices)
 +elif buildEnv['TARGET_ISA'] == "x86":
 +    system = makeLinuxX86System(test_mem_mode, options.num_cpus, bm[0], True)
 +    system.ruby = Ruby.create_system(options,
 +                                     system,
 +                                     system.piobus)
 +else:
 +    fatal("incapable of building non-alpha or non-x86 full system!")
 
  system.cpu = [CPUClass(cpu_id=i) for i in xrange(options.num_cpus)]


Re: [m5-dev] changeset in m5: Ruby: Fixes MESI CMP directory protocol

2011-02-08 Thread Beckmann, Brad
Hi Korey,

Just to clarify, the deadlock threshold in the sequencer is different from the
deadlock threshold in the mem tester.  The sequencer's deadlock mechanism
detects whether any particular request takes longer than the threshold, while
the mem tester's deadlock threshold just ensures that a particular cpu sees at
least one request complete within that window.  I don't think we want to
degrade the deadlock checker to just a warning.  While in this particular case
the deadlock turned out to be just a performance issue, in my experience the
vast majority of potential deadlock detections turn out to be real bugs.
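
Sketched in C++ for illustration (the structure and names here are assumptions, not gem5's actual classes), the two checks differ roughly like this:

#include <cstdint>
#include <vector>

// Sequencer-style check: the age of every outstanding request.
bool sequencerDeadlock(const std::vector<uint64_t> &issueCycles,
                       uint64_t curCycle, uint64_t threshold)
{
    for (uint64_t issued : issueCycles)
        if (curCycle - issued >= threshold)
            return true;        // any single stale request trips it
    return false;
}

// Tester-style check: per-cpu forward progress. Any completed request
// resets the counter, so only a cpu making no progress at all trips it.
bool testerDeadlock(uint64_t noResponseCycles, uint64_t threshold)
{
    return noResponseCycles >= threshold;
}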

Later today I'll check in a patch that increases the ruby mem test deadlock
threshold.

Brad


From: m5-dev-boun...@m5sim.org [mailto:m5-dev-boun...@m5sim.org] On Behalf Of 
Korey Sewell
Sent: Monday, February 07, 2011 2:27 PM
To: M5 Developer List
Subject: Re: [m5-dev] changeset in m5: Ruby: Fixes MESI CMP directory protocol

Another followup on this is that the deadlock_threshold parameter doesn't
propagate to the MemTester CPU.

So when I'm testing 64 CPUs, memtester.cc still has this code:
if (!tickEvent.scheduled())
    schedule(tickEvent, curTick() + ticks(1));

if (++noResponseCycles >= 500000) {
    if (issueDmas) {
        cerr << "DMA tester ";
    }
    cerr << name() << ": deadlocked at cycle " << curTick() << endl;
    fatal("");
}


That hardcoded 500000 is not a great number (as people have said) because as
your topologies/memory hierarchies change, the max # of cycles that you have
to wait for a response can also change, right?

Increasing that # by hand is an arduous thing to do, so maybe that # should
come from a parameter, and maybe we should also warn that a deadlock is
possible after some inordinate wait time.

The fix should be just to warn about a long wait after an inordinate
period... something like this, I think:

if (++noResponseCycles % 500000 == 0) {
    warn("cpu X has waited for %i cycles", noResponseCycles);
}
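
A sketch of the parameterized version (the class and member names here are assumptions, not the actual MemTest interface):

#include <cstdint>
#include <cstdio>

class DeadlockWatchdog
{
    const uint64_t warnCycles;  // would come from a config parameter
    uint64_t noResponseCycles;

  public:
    explicit DeadlockWatchdog(uint64_t threshold)
        : warnCycles(threshold), noResponseCycles(0) {}

    // Called once per cycle with no response: warn periodically instead
    // of fataling on a hardcoded count.
    void tickWithoutResponse(const char *name)
    {
        if (++noResponseCycles % warnCycles == 0)
            std::fprintf(stderr, "%s has waited for %llu cycles\n", name,
                         (unsigned long long)noResponseCycles);
    }

    // Any completed request resets the counter.
    void noteResponse() { noResponseCycles = 0; }
};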


Lastly, should the memtester really send out a memory access on every tick? The
actual injection rate could be much higher than the rate at which we resolve
contention.

Maybe we should consider having X many outstanding requests per CPU as a more
realistic measure that can stress the system but not make the noResponseCycles
stat grow to such a high number.
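
A sketch of that throttling idea (names illustrative): only issue while the in-flight count is below a cap, so the injection rate adapts to how fast the memory system actually retires requests.

#include <cstdint>

struct IssueThrottle
{
    uint64_t outstanding;           // requests currently in flight
    const uint64_t maxOutstanding;  // the per-cpu cap being proposed

    explicit IssueThrottle(uint64_t cap)
        : outstanding(0), maxOutstanding(cap) {}

    bool canIssue() const { return outstanding < maxOutstanding; }
    void onIssue()        { ++outstanding; }    // request sent
    void onResponse()     { --outstanding; }    // request retired
};
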
On Mon, Feb 7, 2011 at 1:27 PM, Beckmann, Brad brad.beckm...@amd.com wrote:
Yep, if I increase the deadlock threshold to 5 million cycles, the deadlock
warning is not encountered.  However, I don't think that we should increase the
default deadlock threshold by an order of magnitude.  Instead, let's just
increase the threshold for the mem tester.  How about I check in the following
small patch?

Brad


diff --git a/configs/example/ruby_mem_test.py b/configs/example/ruby_mem_test.py
--- a/configs/example/ruby_mem_test.py
+++ b/configs/example/ruby_mem_test.py
@@ -135,6 +135,12 @@
     cpu.test = system.ruby.cpu_ruby_ports[i].port
     cpu.functional = system.funcmem.port
 
+    #
+    # Since the memtester is incredibly bursty, increase the deadlock
+    # threshold to 5 million cycles
+    #
+    system.ruby.cpu_ruby_ports[i].deadlock_threshold = 5000000
+
 for (i, dma) in enumerate(dmas):
     #
     # Tie the dma memtester ports to the correct functional port
diff --git a/tests/configs/memtest-ruby.py b/tests/configs/memtest-ruby.py
--- a/tests/configs/memtest-ruby.py
+++ b/tests/configs/memtest-ruby.py
@@ -96,6 +96,12 @@
     #
     cpus[i].test = ruby_port.port
     cpus[i].functional = system.funcmem.port
+
+    #
+    # Since the memtester is incredibly bursty, increase the deadlock
+    # threshold to 5 million cycles
+    #
+    ruby_port.deadlock_threshold = 5000000

 # ---
 # run simulation



 -Original Message-
 From: m5-dev-boun...@m5sim.org [mailto:m5-dev-boun...@m5sim.org]
 On Behalf Of Nilay Vaish
 On Behalf Of Nilay Vaish
 Sent: Monday, February 07, 2011 9:12 AM
 To: M5 Developer List
 Subject: Re: [m5-dev] changeset in m5: Ruby: Fixes MESI CMP directory protocol

 Brad, I also see the protocol getting into a deadlock. I tried to get a
 trace, but I get a segmentation fault (yes, the segmentation fault only
 occurs when the trace flag ProtocolTrace is supplied). It seems to me that
 memory is getting corrupted somewhere, because the fault occurs in malloc
 itself.

 It could be that the protocol is actually not in a deadlock. Both Arka and
 I had increased the deadlock threshold while testing the protocol. I will
 try with an increased threshold later in the day.

 One more thing: the Orion 2.0 code that was committed last night makes use
 of printf(). It did not compile cleanly for me. I had to change it to
 fatal() and include the header file base/misc.hh.

 --
 Nilay

 On Mon, 7 Feb 2011, Beckmann, Brad wrote:

  

[m5-dev] changeset in m5: config: fixed minor bug connecting dma devices ...

2011-02-08 Thread Brad Beckmann
changeset bb6411d45356 in /z/repo/m5
details: http://repo.m5sim.org/m5?cmd=changeset;node=bb6411d45356
description:
config: fixed minor bug connecting dma devices to ruby

diffstat:

 configs/common/FSConfig.py |   3 +++
 configs/example/ruby_fs.py |  12 +---
 2 files changed, 8 insertions(+), 7 deletions(-)

diffs (41 lines):

diff -r 9c245e375e05 -r bb6411d45356 configs/common/FSConfig.py
--- a/configs/common/FSConfig.py    Tue Feb 08 07:47:02 2011 -0600
+++ b/configs/common/FSConfig.py    Tue Feb 08 15:52:44 2011 -0800
@@ -334,6 +334,9 @@
     # Create and connect the busses required by each memory system
     if Ruby:
         connectX86RubySystem(self)
+        # add the ide to the list of dma devices that later need to attach to
+        # dma controllers
+        self._dma_devices = [self.pc.south_bridge.ide]
     else:
         connectX86ClassicSystem(self)
 
diff -r 9c245e375e05 -r bb6411d45356 configs/example/ruby_fs.py
--- a/configs/example/ruby_fs.py    Tue Feb 08 07:47:02 2011 -0600
+++ b/configs/example/ruby_fs.py    Tue Feb 08 15:52:44 2011 -0800
@@ -111,19 +111,17 @@
 
 if buildEnv['TARGET_ISA'] == "alpha":
     system = makeLinuxAlphaRubySystem(test_mem_mode, bm[0])
-    system.ruby = Ruby.create_system(options,
-                                     system,
-                                     system.piobus,
-                                     system.dma_devices)
 elif buildEnv['TARGET_ISA'] == "x86":
     system = makeLinuxX86System(test_mem_mode, options.num_cpus, bm[0], True)
     setWorkCountOptions(system, options)
-    system.ruby = Ruby.create_system(options,
-                                     system,
-                                     system.piobus)
 else:
     fatal("incapable of building non-alpha or non-x86 full system!")
 
+system.ruby = Ruby.create_system(options,
+                                 system,
+                                 system.piobus,
+                                 system._dma_devices)
+
 system.cpu = [CPUClass(cpu_id=i) for i in xrange(options.num_cpus)]
 
 for (i, cpu) in enumerate(system.cpu):


[m5-dev] changeset in m5: memtest: due to contention increase, increased ...

2011-02-08 Thread Brad Beckmann
changeset 685719afafe6 in /z/repo/m5
details: http://repo.m5sim.org/m5?cmd=changeset;node=685719afafe6
description:
memtest: due to contention increase, increased deadlock threshold

diffstat:

 configs/example/ruby_mem_test.py |  6 ++
 tests/configs/memtest-ruby.py|  6 ++
 2 files changed, 12 insertions(+), 0 deletions(-)

diffs (32 lines):

diff -r bb6411d45356 -r 685719afafe6 configs/example/ruby_mem_test.py
--- a/configs/example/ruby_mem_test.py  Tue Feb 08 15:52:44 2011 -0800
+++ b/configs/example/ruby_mem_test.py  Tue Feb 08 15:53:33 2011 -0800
@@ -135,6 +135,12 @@
     cpu.test = system.ruby.cpu_ruby_ports[i].port
     cpu.functional = system.funcmem.port
 
+    #
+    # Since the memtester is incredibly bursty, increase the deadlock
+    # threshold to 5 million cycles
+    #
+    system.ruby.cpu_ruby_ports[i].deadlock_threshold = 5000000
+
 for (i, dma) in enumerate(dmas):
     #
     # Tie the dma memtester ports to the correct functional port
diff -r bb6411d45356 -r 685719afafe6 tests/configs/memtest-ruby.py
--- a/tests/configs/memtest-ruby.py Tue Feb 08 15:52:44 2011 -0800
+++ b/tests/configs/memtest-ruby.py Tue Feb 08 15:53:33 2011 -0800
@@ -96,6 +96,12 @@
     #
     cpus[i].test = ruby_port.port
     cpus[i].functional = system.funcmem.port
+
+    #
+    # Since the memtester is incredibly bursty, increase the deadlock
+    # threshold to 1 million cycles
+    #
+    ruby_port.deadlock_threshold = 1000000
 
 # ---
 # run simulation