[gem5-dev] Cron m5test@zizzer /z/m5/regression/do-regression quick

2014-07-29 Thread Cron Daemon via gem5-dev
* build/ALPHA/tests/opt/quick/se/60.rubytest/alpha/linux/rubytest-ruby passed.
* build/ALPHA/tests/opt/quick/se/00.hello/alpha/linux/simple-atomic passed.
* build/ALPHA/tests/opt/quick/se/30.eio-mp/alpha/eio/simple-atomic-mp passed.
* build/ALPHA/tests/opt/quick/se/00.hello/alpha/linux/inorder-timing passed.
* build/ALPHA/tests/opt/quick/se/00.hello/alpha/linux/simple-timing passed.
* build/ALPHA/tests/opt/quick/fs/10.linux-boot/alpha/linux/tsunami-simple-atomic-dual passed.
* build/ALPHA_MESI_Two_Level/tests/opt/quick/se/00.hello/alpha/linux/simple-timing-ruby-MESI_Two_Level passed.
* build/ALPHA_MESI_Two_Level/tests/opt/quick/se/00.hello/alpha/tru64/simple-timing-ruby-MESI_Two_Level passed.
* build/ALPHA_MESI_Two_Level/tests/opt/quick/se/60.rubytest/alpha/linux/rubytest-ruby-MESI_Two_Level passed.
* build/ALPHA/tests/opt/quick/se/00.hello/alpha/tru64/simple-atomic passed.
* build/ALPHA/tests/opt/quick/fs/10.linux-boot/alpha/linux/tsunami-simple-timing passed.
* build/ALPHA/tests/opt/quick/se/30.eio-mp/alpha/eio/simple-timing-mp passed.
* build/ALPHA/tests/opt/quick/fs/10.linux-boot/alpha/linux/tsunami-simple-timing-dual passed.
* build/ALPHA_MOESI_hammer/tests/opt/quick/se/60.rubytest/alpha/linux/rubytest-ruby-MOESI_hammer passed.
* build/ALPHA_MOESI_hammer/tests/opt/quick/se/00.hello/alpha/linux/simple-timing-ruby-MOESI_hammer passed.
* build/ALPHA/tests/opt/quick/se/00.hello/alpha/linux/simple-timing-ruby passed.
* build/ALPHA_MOESI_hammer/tests/opt/quick/se/00.hello/alpha/tru64/simple-timing-ruby-MOESI_hammer passed.
* build/ALPHA/tests/opt/quick/se/20.eio-short/alpha/eio/simple-atomic passed.
* build/ALPHA/tests/opt/quick/se/20.eio-short/alpha/eio/simple-timing passed.
* build/ALPHA/tests/opt/quick/se/00.hello/alpha/linux/minor-timing passed.
* build/ALPHA/tests/opt/quick/se/00.hello/alpha/tru64/simple-timing passed.
* build/ALPHA/tests/opt/quick/se/00.hello/alpha/tru64/minor-timing passed.
* build/ALPHA_MOESI_CMP_directory/tests/opt/quick/se/00.hello/alpha/tru64/simple-timing-ruby-MOESI_CMP_directory passed.
* build/ALPHA_MOESI_CMP_directory/tests/opt/quick/se/00.hello/alpha/linux/simple-timing-ruby-MOESI_CMP_directory passed.
* build/ALPHA_MOESI_CMP_directory/tests/opt/quick/se/60.rubytest/alpha/linux/rubytest-ruby-MOESI_CMP_directory passed.
* build/ALPHA_MESI_Two_Level/tests/opt/quick/se/50.memtest/alpha/linux/memtest-ruby-MESI_Two_Level passed.
* build/ALPHA/tests/opt/quick/se/01.hello-2T-smt/alpha/linux/o3-timing passed.
* build/ALPHA/tests/opt/quick/se/50.memtest/alpha/linux/memtest-ruby passed.
* build/ALPHA/tests/opt/quick/se/00.hello/alpha/tru64/o3-timing passed.
* build/ALPHA/tests/opt/quick/se/00.hello/alpha/tru64/simple-timing-ruby passed.
* build/ALPHA/tests/opt/quick/se/00.hello/alpha/linux/o3-timing passed.
* build/ALPHA/tests/opt/quick/fs/10.linux-boot/alpha/linux/tsunami-simple-atomic passed.
* build/ALPHA_MOESI_hammer/tests/opt/quick/se/50.memtest/alpha/linux/memtest-ruby-MOESI_hammer passed.
* build/ALPHA_MOESI_CMP_token/tests/opt/quick/se/60.rubytest/alpha/linux/rubytest-ruby-MOESI_CMP_token passed.
* build/ALPHA_MOESI_CMP_token/tests/opt/quick/se/00.hello/alpha/linux/simple-timing-ruby-MOESI_CMP_token passed.
* build/ALPHA_MOESI_CMP_directory/tests/opt/quick/se/50.memtest/alpha/linux/memtest-ruby-MOESI_CMP_directory passed.
* build/ALPHA_MOESI_CMP_token/tests/opt/quick/se/00.hello/alpha/tru64/simple-timing-ruby-MOESI_CMP_token passed.
* build/MIPS/tests/opt/quick/se/00.hello/mips/linux/simple-timing passed.
* build/MIPS/tests/opt/quick/se/00.hello/mips/linux/inorder-timing passed.
* build/MIPS/tests/opt/quick/se/00.hello/mips/linux/simple-atomic passed.
* build/NULL/tests/opt/quick/se/70.tgen/null/none/tgen-simple-mem passed.
* build/NULL/tests/opt/quick/se/50.memtest/null/none/memtest passed.
* build/NULL/tests/opt/quick/se/70.tgen/null/none/tgen-dram-ctrl passed.
* build/MIPS/tests/opt/quick/se/00.hello/mips/linux/o3-timing passed.
* build/MIPS/tests/opt/quick/se/00.hello/mips/linux/simple-timing-ruby passed.
* build/POWER/tests/opt/quick/se/00.hello/power/linux/o3-timing passed.
* build/POWER/tests/opt/quick/se/00.hello/power/linux/simple-atomic passed.
* build/ALPHA_MOESI_CMP_token/tests/opt/quick/se/50.memtest/alpha/linux/memtest-ruby-MOESI_CMP_token passed.
* build/SPARC/tests/opt/quick/se/00.hello/sparc/linux/simple-timing passed.
* build/SPARC/tests/opt/quick/se/02.insttest/sparc/linux/simple-timing passed.
* build/SPARC/tests/opt/quick/se/02.insttest/sparc/linux/inorder-timing passed.
* build/ALPHA/tests/opt/quick/fs/80.netperf-stream/alpha/linux/twosys-tsunami-simple-atomic passed.
* build/SPARC/tests/opt/quick/se/00.hello/sparc/linux/simple-atomic passed.

Re: [gem5-dev] Review Request 2316: mem: add tCCD to DRAM controller

2014-07-29 Thread Amin Farmahini via gem5-dev


 On July 28, 2014, 12:34 a.m., Andreas Hansson wrote:
  Thanks for the input Amin. One high-level question: what is the main aim of 
  the patch? Until now we have tried to keep the timing constraints to a 
  minimum (without sacrificing fidelity). In general you could argue that 
  tBURST can be used, rather than adding tCCD and then taking the max. Do you 
  envision any use-cases where this is not the case?

Yes, generally you can use tBURST without sacrificing accuracy, since usually 
tBURST = tCCD. 
But in DDR4, tBURST can be smaller than tCCD. For example, in the 
DDR4-2400-17-17-17 model in gem5 (Samsung K4A4G085WD), tCCD_S is 4 cycles 
(3.33ns) and tCCD_L is 6 cycles (5.0ns). So the average tCCD is 4.16ns, whereas 
tBURST is 3.33ns. Also, we have internally developed some DRAM architectures 
where tBURST is smaller than tCCD. In those cases, taking the max increases 
accuracy. 
Anyway, I don't mind at all if you would like to ignore tCCD to keep the 
timing constraints to a minimum. That makes sense.
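The arithmetic behind the figures above can be sanity-checked in a few lines of Python. The DDR4-2400 clock period (tCK = 1/1.2 GHz, about 0.833 ns) is my assumption for this sketch, not something stated in the post:

```python
# Sanity check of the quoted DDR4-2400 timing figures.
# Assumption: tCK = 1 / 1.2 GHz ~= 0.833 ns (DDR4-2400 transfers at 2400 MT/s,
# so the clock runs at 1200 MHz). This value is not taken from the post itself.
tCK = 1.0 / 1.2            # clock period in ns

tCCD_S = 4 * tCK           # short (cross-bank-group) CAS-to-CAS delay, ~3.33 ns
tCCD_L = 6 * tCK           # long (same-bank-group) CAS-to-CAS delay, ~5.0 ns
avg_tCCD = (tCCD_S + tCCD_L) / 2.0   # ~4.17 ns, matching the ~4.16 ns quoted
tBURST = 4 * tCK           # burst duration, ~3.33 ns

# The point of the email: tBURST alone under-constrains the command stream,
# because the average tCCD exceeds tBURST in this DDR4 configuration.
print(round(tCCD_S, 2), round(tCCD_L, 2), round(avg_tCCD, 2), round(tBURST, 2))
```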


- Amin


---
This is an automatically generated e-mail. To reply, visit:
http://reviews.gem5.org/r/2316/#review5228
---


On July 23, 2014, 10:16 p.m., Amin Farmahini wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 http://reviews.gem5.org/r/2316/
 ---
 
 (Updated July 23, 2014, 10:16 p.m.)
 
 
 Review request for Default.
 
 
 Repository: gem5
 
 
 Description
 ---
 
 This patch adds support for tCCD to the DRAM controller.
 After changeset 10211: e084db2b1527 (Merge DRAM latency calculation and bank 
 state update), the DRAM latency calculation has changed, and that changeset 
 provides a rather simple way to incorporate the tCCD parameter into latency 
 calculations.
 
 
 Diffs
 -
 
   src/mem/DRAMCtrl.py UNKNOWN 
   src/mem/dram_ctrl.hh UNKNOWN 
   src/mem/dram_ctrl.cc UNKNOWN 
 
 Diff: http://reviews.gem5.org/r/2316/diff/
 
 
 Testing
 ---
 
 None
 
 
 Thanks,
 
 Amin Farmahini
 


___
gem5-dev mailing list
gem5-dev@gem5.org
http://m5sim.org/mailman/listinfo/gem5-dev


Re: [gem5-dev] Review Request 2316: mem: add tCCD to DRAM controller

2014-07-29 Thread Andreas Hansson via gem5-dev


 On July 28, 2014, 12:34 a.m., Andreas Hansson wrote:
  Thanks for the input Amin. One high-level question: what is the main aim of 
  the patch? Until now we have tried to keep the timing constraints to a 
  minimum (without sacrificing fidelity). In general you could argue that 
  tBURST can be used, rather than adding tCCD and then taking the max. Do you 
  envision any use-cases where this is not the case?
 
 Amin Farmahini wrote:
 Yes, generally you can use tBURST without sacrificing accuracy since 
 usually tBURST = tCCD. 
 But in DDR4 tBURST could be smaller than tCCD. For example, in 
 DDR4-2400-17-17-17 model in gem5, Samsung K4A4G085WD, tCCD_S is 4 cycles 
 (3.33ns) and tCCD_L is 6 cycles (5.0ns). So the average tCCD is 4.16ns, 
 whereas tBURST is 3.33ns. Also, we have internally developed some DRAM 
 architectures where tBURST is smaller than tCCD. So in those cases taking the 
 max increases accuracy. 
 Anyways, I don't mind at all if you would like to ignore tCCD to keep the 
 timing constraints to a minimum. That makes sense.

We have some patches adding bank groups, and that should address the issue of 
_S and _L. The argument I was making regarding your patch is that the user 
could rather do the max(tCCD, tBURST) in advance and set tBURST to this value. 
Agreed?
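Andreas's suggestion amounts to folding the constraint into the existing parameter: precompute max(tCCD, tBURST) and hand that value to the controller as tBURST. A minimal sketch (the helper function is hypothetical, for illustration only, not gem5 code):

```python
# Hypothetical helper illustrating the suggestion: rather than adding a tCCD
# parameter and taking the max inside the controller, the user computes the
# effective burst constraint up front and assigns it to tBURST in the config.
def effective_tburst(tburst_ns, tccd_ns):
    """Return the value to configure as tBURST when tCCD may exceed it."""
    return max(tburst_ns, tccd_ns)

# Using the DDR4 figures from the thread: tBURST = 3.33 ns, avg tCCD = 4.16 ns.
print(effective_tburst(3.33, 4.16))  # 4.16
```

With this approach, the controller's timing model stays unchanged; only the configured tBURST value absorbs the tCCD constraint.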


- Andreas


---
This is an automatically generated e-mail. To reply, visit:
http://reviews.gem5.org/r/2316/#review5228
---


On July 23, 2014, 10:16 p.m., Amin Farmahini wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 http://reviews.gem5.org/r/2316/
 ---
 
 (Updated July 23, 2014, 10:16 p.m.)
 
 
 Review request for Default.
 
 
 Repository: gem5
 
 
 Description
 ---
 
 This patch adds support for tCCD to the DRAM controller.
 After changeset 10211: e084db2b1527 (Merge DRAM latency calculation and bank 
 state update), the DRAM latency calculation has changed, and that changeset 
 provides a rather simple way to incorporate the tCCD parameter into latency 
 calculations.
 
 
 Diffs
 -
 
   src/mem/DRAMCtrl.py UNKNOWN 
   src/mem/dram_ctrl.hh UNKNOWN 
   src/mem/dram_ctrl.cc UNKNOWN 
 
 Diff: http://reviews.gem5.org/r/2316/diff/
 
 
 Testing
 ---
 
 None
 
 
 Thanks,
 
 Amin Farmahini
 




Re: [gem5-dev] Review Request 2316: mem: add tCCD to DRAM controller

2014-07-29 Thread Amin Farmahini via gem5-dev


 On July 28, 2014, 12:34 a.m., Andreas Hansson wrote:
  Thanks for the input Amin. One high-level question: what is the main aim of 
  the patch? Until now we have tried to keep the timing constraints to a 
  minimum (without sacrificing fidelity). In general you could argue that 
  tBURST can be used, rather than adding tCCD and then taking the max. Do you 
  envision any use-cases where this is not the case?
 
 Amin Farmahini wrote:
 Yes, generally you can use tBURST without sacrificing accuracy since 
 usually tBURST = tCCD. 
 But in DDR4 tBURST could be smaller than tCCD. For example, in 
 DDR4-2400-17-17-17 model in gem5, Samsung K4A4G085WD, tCCD_S is 4 cycles 
 (3.33ns) and tCCD_L is 6 cycles (5.0ns). So the average tCCD is 4.16ns, 
 whereas tBURST is 3.33ns. Also, we have internally developed some DRAM 
 architectures where tBURST is smaller than tCCD. So in those cases taking the 
 max increases accuracy. 
 Anyways, I don't mind at all if you would like to ignore tCCD to keep the 
 timing constraints to a minimum. That makes sense.
 
 Andreas Hansson wrote:
 We have some patches adding bank groups, and that should address the 
 issue of _S and _L. The argument I was making regarding your patch is that 
 the user could rather do the max(tCCD, tBURST) in advance and set tBURST to 
 this value. Agreed?

Agree :).


- Amin


---
This is an automatically generated e-mail. To reply, visit:
http://reviews.gem5.org/r/2316/#review5228
---


On July 23, 2014, 10:16 p.m., Amin Farmahini wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 http://reviews.gem5.org/r/2316/
 ---
 
 (Updated July 23, 2014, 10:16 p.m.)
 
 
 Review request for Default.
 
 
 Repository: gem5
 
 
 Description
 ---
 
 This patch adds support for tCCD to the DRAM controller.
 After changeset 10211: e084db2b1527 (Merge DRAM latency calculation and bank 
 state update), the DRAM latency calculation has changed, and that changeset 
 provides a rather simple way to incorporate the tCCD parameter into latency 
 calculations.
 
 
 Diffs
 -
 
   src/mem/DRAMCtrl.py UNKNOWN 
   src/mem/dram_ctrl.hh UNKNOWN 
   src/mem/dram_ctrl.cc UNKNOWN 
 
 Diff: http://reviews.gem5.org/r/2316/diff/
 
 
 Testing
 ---
 
 None
 
 
 Thanks,
 
 Amin Farmahini
 




Re: [gem5-dev] Review Request 2313: Making KvmCPU model usable in SE mode.

2014-07-29 Thread Alexandru Dutu via gem5-dev


 On July 14, 2014, 4:42 p.m., Andreas Sandberg wrote:
  I have two high-level comments that need to be fixed:
  
  * This should be four different commits (configuration, CPUID updates, 
  segment initialization, kvm, process).
  * Write a proper commit message (see http://www.m5sim.org/Commit_Access). 
  Specifically, include a short one-line summary and a longer description of 
  what the patch does and why.
  
  I'm also a bit unsure about the design of this patch. From RB2312, it 
  /seems/ like the useArchPT flag is supposed to enable page tables in SE 
  emulation mode; however, in this patch you seem to overload it to mean 
  use magic syscall redirection. Could you clean that up a bit?
  
  Could you redesign the syscall and fault handler to use the memory-mapped 
  m5ops interface instead? That way, the code in the kvm CPU would be 
  much cleaner and you'd get rid of the magic ports. As a side effect, we 
  could potentially scrap the SE-specific bits of the ISA (this is definitely 
  something I'd like to see long term).
 

Thank you for your review! If I break this into four different commits, the 
simulator might not compile after each individual commit, only after all of 
them are applied. That does not sound acceptable to me.


 On July 14, 2014, 4:42 p.m., Andreas Sandberg wrote:
  src/arch/x86/system.hh, line 95
  http://reviews.gem5.org/r/2313/diff/1/?file=40380#file40380line95
 
  Shouldn't this be in process.hh since it is SE specific?

These get used in places where it does not make sense to include process.hh 
but where it does make sense to include system.hh (e.g. mem/ and cpu/).


 On July 14, 2014, 4:42 p.m., Andreas Sandberg wrote:
  src/arch/x86/system.hh, line 101
  http://reviews.gem5.org/r/2313/diff/1/?file=40380#file40380line101
 
  SE-specific?

Same as above.


 On July 14, 2014, 4:42 p.m., Andreas Sandberg wrote:
  src/cpu/kvm/x86_cpu.cc, line 1409
  http://reviews.gem5.org/r/2313/diff/1/?file=40385#file40385line1409
 
  Is this really needed here?

Actually, this is not needed anymore. I went through three ways of exiting 
KVM for syscall emulation and re-entering to continue execution; this was 
left over from the previous two.


 On July 14, 2014, 4:42 p.m., Andreas Sandberg wrote:
  configs/example/se.py, line 66
  http://reviews.gem5.org/r/2313/diff/1/?file=40376#file40376line66
 
  You need to make more checks here as this patch only adds SE support(?) 
  for X86.

This only checks whether KvmCPU is in use; I will add more checks at line 200.
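For illustration, the kind of configuration check being discussed might look like the sketch below. The function name and structure are hypothetical, not code from the patch; it only encodes the constraint that KVM support in SE mode is x86-only:

```python
# Hypothetical sketch of an se.py-style sanity check for KVM-based CPUs.
# In the real config, cpu_type would come from the command-line options and
# target_isa from the build environment; here they are plain arguments.
def kvm_se_allowed(cpu_type, target_isa):
    """Return True if this CPU model may be used in this SE-mode build."""
    if 'Kvm' not in cpu_type:
        return True             # non-KVM CPU models are unaffected
    return target_isa == 'x86'  # the patch adds SE-mode KVM support for x86 only
```

A caller would then abort configuration (e.g. with a fatal error) when the check fails, instead of crashing later inside the KVM CPU model.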


- Alexandru


---
This is an automatically generated e-mail. To reply, visit:
http://reviews.gem5.org/r/2313/#review5187
---


On July 11, 2014, 4:01 p.m., Alexandru Dutu wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 http://reviews.gem5.org/r/2313/
 ---
 
 (Updated July 11, 2014, 4:01 p.m.)
 
 
 Review request for Default.
 
 
 Repository: gem5
 
 
 Description
 ---
 
 Changeset 10254:661c482f7d58
 ---
 Making KvmCPU model usable in SE mode.
 
 
 Diffs
 -
 
   configs/example/se.py c625a3c51bac050879457e666dd83299a36d761b 
   src/arch/x86/cpuid.cc c625a3c51bac050879457e666dd83299a36d761b 
   src/arch/x86/process.cc c625a3c51bac050879457e666dd83299a36d761b 
   src/arch/x86/regs/misc.hh c625a3c51bac050879457e666dd83299a36d761b 
   src/arch/x86/system.hh c625a3c51bac050879457e666dd83299a36d761b 
   src/arch/x86/system.cc c625a3c51bac050879457e666dd83299a36d761b 
   src/cpu/kvm/base.hh c625a3c51bac050879457e666dd83299a36d761b 
   src/cpu/kvm/base.cc c625a3c51bac050879457e666dd83299a36d761b 
   src/cpu/kvm/x86_cpu.hh c625a3c51bac050879457e666dd83299a36d761b 
   src/cpu/kvm/x86_cpu.cc c625a3c51bac050879457e666dd83299a36d761b 
 
 Diff: http://reviews.gem5.org/r/2313/diff/
 
 
 Testing
 ---
 
 Quick regressions passed.
 
 
 Thanks,
 
 Alexandru Dutu
 

