[gem5-dev] changeset in gem5: mem: Split the hit_latency into tag_latency and data_latency

2016-11-30 Thread Sophiane Senni
changeset f15f02d8c79e in /z/repo/gem5
details: http://repo.gem5.org/gem5?cmd=changeset;node=f15f02d8c79e
description:
mem: Split the hit_latency into tag_latency and data_latency

If the cache access mode is parallel, i.e. "sequential_access" parameter
is set to "False", tags and data are accessed in parallel. Therefore,
the hit_latency is the maximum latency between tag_latency and
data_latency. On the other hand, if the cache access mode is
sequential, i.e. "sequential_access" parameter is set to "True",
tags and data are accessed sequentially. Therefore, the hit_latency
is the sum of tag_latency plus data_latency.
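
In other words (a minimal illustrative sketch, not code from the patch; plain
integers stand in for gem5's Cycles type):

#include <algorithm>

unsigned hitLatency(unsigned tagLatency, unsigned dataLatency,
                    bool sequentialAccess)
{
    // Sequential: the tag lookup must finish before the data array is read.
    if (sequentialAccess)
        return tagLatency + dataLatency;
    // Parallel: both arrays are probed at once, so the slower one dominates.
    return std::max(tagLatency, dataLatency);
}

In the config diffs below, both new parameters are simply set to the old
hit_latency value, so in parallel mode the resulting hit latency is unchanged.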

Signed-off-by: Jason Lowe-Power 

diffstat:

 configs/common/Caches.py  |  12 
 configs/common/O3_ARM_v7a.py  |  12 
 configs/example/arm/devices.py|  15 ++-
 configs/example/memcheck.py   |   5 +++--
 configs/example/memtest.py|   5 +++--
 configs/learning_gem5/part1/caches.py |   6 --
 src/mem/cache/Cache.py|   3 ++-
 src/mem/cache/base.cc |   7 ---
 src/mem/cache/base.hh |   6 ++
 src/mem/cache/tags/Tags.py|  15 ++-
 src/mem/cache/tags/base.cc|   6 +-
 src/mem/cache/tags/base.hh|   8 +++-
 src/mem/cache/tags/base_set_assoc.hh  |  17 -
 src/mem/cache/tags/fa_lru.cc  |  13 -
 src/mem/cache/tags/fa_lru.hh  |   1 +
 15 files changed, 95 insertions(+), 36 deletions(-)

diffs (truncated from 401 to 300 lines):

diff -r b0853929e223 -r f15f02d8c79e configs/common/Caches.py
--- a/configs/common/Caches.py  Wed Nov 30 17:10:27 2016 -0500
+++ b/configs/common/Caches.py  Wed Nov 30 17:10:27 2016 -0500
@@ -48,7 +48,8 @@
 
 class L1Cache(Cache):
 assoc = 2
-hit_latency = 2
+tag_latency = 2
+data_latency = 2
 response_latency = 2
 mshrs = 4
 tgts_per_mshr = 20
@@ -63,7 +64,8 @@
 
 class L2Cache(Cache):
 assoc = 8
-hit_latency = 20
+tag_latency = 20
+data_latency = 20
 response_latency = 20
 mshrs = 20
 tgts_per_mshr = 12
@@ -71,7 +73,8 @@
 
 class IOCache(Cache):
 assoc = 8
-hit_latency = 50
+tag_latency = 50
+data_latency = 50
 response_latency = 50
 mshrs = 20
 size = '1kB'
@@ -79,7 +82,8 @@
 
 class PageTableWalkerCache(Cache):
 assoc = 2
-hit_latency = 2
+tag_latency = 2
+data_latency = 2
 response_latency = 2
 mshrs = 10
 size = '1kB'
diff -r b0853929e223 -r f15f02d8c79e configs/common/O3_ARM_v7a.py
--- a/configs/common/O3_ARM_v7a.py  Wed Nov 30 17:10:27 2016 -0500
+++ b/configs/common/O3_ARM_v7a.py  Wed Nov 30 17:10:27 2016 -0500
@@ -147,7 +147,8 @@
 
 # Instruction Cache
 class O3_ARM_v7a_ICache(Cache):
-hit_latency = 1
+tag_latency = 1
+data_latency = 1
 response_latency = 1
 mshrs = 2
 tgts_per_mshr = 8
@@ -159,7 +160,8 @@
 
 # Data Cache
 class O3_ARM_v7a_DCache(Cache):
-hit_latency = 2
+tag_latency = 2
+data_latency = 2
 response_latency = 2
 mshrs = 6
 tgts_per_mshr = 8
@@ -172,7 +174,8 @@
 # TLB Cache
 # Use a cache as a L2 TLB
 class O3_ARM_v7aWalkCache(Cache):
-hit_latency = 4
+tag_latency = 4
+data_latency = 4
 response_latency = 4
 mshrs = 6
 tgts_per_mshr = 8
@@ -185,7 +188,8 @@
 
 # L2 Cache
 class O3_ARM_v7aL2(Cache):
-hit_latency = 12
+tag_latency = 12
+data_latency = 12
 response_latency = 12
 mshrs = 16
 tgts_per_mshr = 8
diff -r b0853929e223 -r f15f02d8c79e configs/example/arm/devices.py
--- a/configs/example/arm/devices.py  Wed Nov 30 17:10:27 2016 -0500
+++ b/configs/example/arm/devices.py  Wed Nov 30 17:10:27 2016 -0500
@@ -45,7 +45,8 @@
 from common import CpuConfig
 
 class L1I(L1_ICache):
-hit_latency = 1
+tag_latency = 1
+data_latency = 1
 response_latency = 1
 mshrs = 4
 tgts_per_mshr = 8
@@ -54,7 +55,8 @@
 
 
 class L1D(L1_DCache):
-hit_latency = 2
+tag_latency = 2
+data_latency = 2
 response_latency = 1
 mshrs = 16
 tgts_per_mshr = 16
@@ -64,7 +66,8 @@
 
 
 class WalkCache(PageTableWalkerCache):
-hit_latency = 4
+tag_latency = 4
+data_latency = 4
 response_latency = 4
 mshrs = 6
 tgts_per_mshr = 8
@@ -74,7 +77,8 @@
 
 
 class L2(L2Cache):
-hit_latency = 12
+tag_latency = 12
+data_latency = 12
 response_latency = 5
 mshrs = 32
 tgts_per_mshr = 8
@@ -87,7 +91,8 @@
 class L3(Cache):
 size = '16MB'
 assoc = 16
-hit_latency = 20
+tag_latency = 20
+data_latency = 20
 response_latency = 20
 mshrs = 20
 tgts_per_mshr = 12
diff -r b0853929e223 -r f15f02d8c79e configs/example/memcheck.py
--- a/configs/example/memcheck.py   Wed Nov 30 17:10:27 2016 -0500
+++ b/configs/example/memcheck.py   Wed Nov 30 

Re: [gem5-dev] Review Request 3502: mem: Split the hit_latency into tag_latency and data_latency

2016-11-17 Thread Sophiane SENNI


> On Oct. 27, 2016, 12:11 p.m., Andreas Hansson wrote:
> > thanks for getting this in shape

Hi all,

Could someone commit this patch, please?
Thanks


- Sophiane


---
This is an automatically generated e-mail. To reply, visit:
http://reviews.gem5.org/r/3502/#review8995
---


On Oct. 27, 2016, 11:25 a.m., Sophiane SENNI wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> http://reviews.gem5.org/r/3502/
> ---
> 
> (Updated Oct. 27, 2016, 11:25 a.m.)
> 
> 
> Review request for Default.
> 
> 
> Repository: gem5
> 
> 
> Description
> ---
> 
> Changeset 11688:1a792798e845
> ---
> mem: Split the hit_latency into tag_latency and data_latency
> 
> If the cache access mode is parallel, i.e. "sequential_access" parameter
> is set to "False", tags and data are accessed in parallel. Therefore,
> the hit_latency is the maximum latency between tag_latency and
> data_latency. On the other hand, if the cache access mode is
> sequential, i.e. "sequential_access" parameter is set to "True",
> tags and data are accessed sequentially. Therefore, the hit_latency
> is the sum of tag_latency plus data_latency.
> 
> 
> Diffs
> -
> 
>   configs/common/Caches.py 4aac82f10951 
>   configs/common/O3_ARM_v7a.py 4aac82f10951 
>   configs/example/arm/devices.py 4aac82f10951 
>   configs/learning_gem5/part1/caches.py 4aac82f10951 
>   src/mem/cache/Cache.py 4aac82f10951 
>   src/mem/cache/base.hh 4aac82f10951 
>   src/mem/cache/base.cc 4aac82f10951 
>   src/mem/cache/tags/Tags.py 4aac82f10951 
>   src/mem/cache/tags/base.hh 4aac82f10951 
>   src/mem/cache/tags/base.cc 4aac82f10951 
>   src/mem/cache/tags/base_set_assoc.hh 4aac82f10951 
>   src/mem/cache/tags/fa_lru.hh 4aac82f10951 
>   src/mem/cache/tags/fa_lru.cc 4aac82f10951 
> 
> Diff: http://reviews.gem5.org/r/3502/diff/
> 
> 
> Testing
> ---
> 
> Tested using --debug-flags=Cache
> 
> 
> Thanks,
> 
> Sophiane SENNI
> 
>

___
gem5-dev mailing list
gem5-dev@gem5.org
http://m5sim.org/mailman/listinfo/gem5-dev


Re: [gem5-dev] Review Request 3502: mem: Split the hit_latency into tag_latency and data_latency

2016-10-27 Thread Sophiane SENNI

---
This is an automatically generated e-mail. To reply, visit:
http://reviews.gem5.org/r/3502/
---

(Updated Oct. 27, 2016, 11:25 a.m.)


Review request for Default.


Repository: gem5


Description (updated)
---

Changeset 11688:1a792798e845
---
mem: Split the hit_latency into tag_latency and data_latency

If the cache access mode is parallel, i.e. "sequential_access" parameter
is set to "False", tags and data are accessed in parallel. Therefore,
the hit_latency is the maximum latency between tag_latency and
data_latency. On the other hand, if the cache access mode is
sequential, i.e. "sequential_access" parameter is set to "True",
tags and data are accessed sequentially. Therefore, the hit_latency
is the sum of tag_latency plus data_latency.


Diffs (updated)
-

  configs/common/Caches.py 4aac82f10951 
  configs/common/O3_ARM_v7a.py 4aac82f10951 
  configs/example/arm/devices.py 4aac82f10951 
  configs/learning_gem5/part1/caches.py 4aac82f10951 
  src/mem/cache/Cache.py 4aac82f10951 
  src/mem/cache/base.hh 4aac82f10951 
  src/mem/cache/base.cc 4aac82f10951 
  src/mem/cache/tags/Tags.py 4aac82f10951 
  src/mem/cache/tags/base.hh 4aac82f10951 
  src/mem/cache/tags/base.cc 4aac82f10951 
  src/mem/cache/tags/base_set_assoc.hh 4aac82f10951 
  src/mem/cache/tags/fa_lru.hh 4aac82f10951 
  src/mem/cache/tags/fa_lru.cc 4aac82f10951 

Diff: http://reviews.gem5.org/r/3502/diff/


Testing
---

Tested using --debug-flags=Cache


Thanks,

Sophiane SENNI

___
gem5-dev mailing list
gem5-dev@gem5.org
http://m5sim.org/mailman/listinfo/gem5-dev


Re: [gem5-dev] Review Request 3502: mem: Split the hit_latency into tag_latency and data_latency

2016-10-25 Thread Sophiane SENNI

---
This is an automatically generated e-mail. To reply, visit:
http://reviews.gem5.org/r/3502/
---

(Updated Oct. 25, 2016, 9:18 a.m.)


Review request for Default.


Repository: gem5


Description (updated)
---

Changeset 11688:74be5cba513a
---
mem: Split the hit_latency into tag_latency and data_latency

If the cache access mode is parallel, i.e. "sequential_access" parameter
is set to "False", tags and data are accessed in parallel. Therefore,
the hit_latency is the maximum latency between tag_latency and
data_latency. On the other hand, if the cache access mode is
sequential, i.e. "sequential_access" parameter is set to "True",
tags and data are accessed sequentially. Therefore, the hit_latency
is the sum of tag_latency plus data_latency.


Diffs (updated)
-

  configs/common/Caches.py 4aac82f10951 
  configs/common/O3_ARM_v7a.py 4aac82f10951 
  configs/example/arm/devices.py 4aac82f10951 
  configs/learning_gem5/part1/caches.py 4aac82f10951 
  src/mem/cache/Cache.py 4aac82f10951 
  src/mem/cache/base.hh 4aac82f10951 
  src/mem/cache/base.cc 4aac82f10951 
  src/mem/cache/tags/Tags.py 4aac82f10951 
  src/mem/cache/tags/base.hh 4aac82f10951 
  src/mem/cache/tags/base.cc 4aac82f10951 
  src/mem/cache/tags/base_set_assoc.hh 4aac82f10951 
  src/mem/cache/tags/fa_lru.hh 4aac82f10951 
  src/mem/cache/tags/fa_lru.cc 4aac82f10951 

Diff: http://reviews.gem5.org/r/3502/diff/


Testing
---

Tested using --debug-flags=Cache


Thanks,

Sophiane SENNI

___
gem5-dev mailing list
gem5-dev@gem5.org
http://m5sim.org/mailman/listinfo/gem5-dev


Re: [gem5-dev] Review Request 3502: mem: Split the hit_latency into tag_latency and data_latency

2016-10-24 Thread Sophiane SENNI


> On Oct. 21, 2016, 1:29 p.m., Pierre-Yves Péneau wrote:
> > Hi,
> > 
> > Someone can commit this patch ? I don't have right access on the 
> > repository, either Sophiane.
> > Thank you.
> 
> Jason Lowe-Power wrote:
> Sorry we've been so slow on this patch. A couple of questions before I 
> commit.
> 
> 1. Are all of Andreas H.'s comments resolved? I'd like to see a "Ship It" 
> from him.
> 2. You need to make sure the regressions are passing. I understand that 
> our regression testing is poor, but I know that the learning_gem5 regression 
> is failing because of this patch. The file 
> configs/learning_gem5/part1/caches.py needs to be updated. There are likely 
> other files that need to be updated as well (configs/example/arm/devices.py 
> comes to mind, there may be others).
> 
> Pierre-Yves Péneau wrote:
> 1. Sophiane answered Andreas H.'s issues but he did not respond (quote: 
> "Please go ahead with the patch as is"). I assume it's OK even without a 
> "Ship It" from him.
> 2. Regression tests have been done. Failures are due to missing CPU2000 
> benchmarks. The review will be updated soon.

The regression tests passed, except the ones that require proprietary binaries.


- Sophiane


---
This is an automatically generated e-mail. To reply, visit:
http://reviews.gem5.org/r/3502/#review8960
---


On Oct. 24, 2016, 2:56 p.m., Sophiane SENNI wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> http://reviews.gem5.org/r/3502/
> ---
> 
> (Updated Oct. 24, 2016, 2:56 p.m.)
> 
> 
> Review request for Default.
> 
> 
> Repository: gem5
> 
> 
> Description
> ---
> 
> Changeset 11688:9dba209f1590
> ---
> mem: Split the hit_latency into tag_latency and data_latency
> 
> If the cache access mode is parallel, i.e. "sequential_access" parameter
> is set to "False", tags and data are accessed in parallel. Therefore,
> the hit_latency is the maximum latency between tag_latency and
> data_latency. On the other hand, if the cache access mode is
> sequential, i.e. "sequential_access" parameter is set to "True",
> tags and data are accessed sequentially. Therefore, the hit_latency
> is the sum of tag_latency plus data_latency.
> 
> 
> Diffs
> -
> 
>   src/mem/cache/tags/base.cc 4aac82f10951 
>   src/mem/cache/tags/base_set_assoc.hh 4aac82f10951 
>   src/mem/cache/tags/fa_lru.cc 4aac82f10951 
>   configs/common/Caches.py 4aac82f10951 
>   configs/common/O3_ARM_v7a.py 4aac82f10951 
>   configs/example/arm/devices.py 4aac82f10951 
>   configs/learning_gem5/part1/caches.py 4aac82f10951 
>   src/mem/cache/Cache.py 4aac82f10951 
>   src/mem/cache/base.hh 4aac82f10951 
>   src/mem/cache/base.cc 4aac82f10951 
>   src/mem/cache/tags/Tags.py 4aac82f10951 
>   src/mem/cache/tags/base.hh 4aac82f10951 
> 
> Diff: http://reviews.gem5.org/r/3502/diff/
> 
> 
> Testing
> ---
> 
> Tested using --debug-flags=Cache
> 
> 
> Thanks,
> 
> Sophiane SENNI
> 
>

___
gem5-dev mailing list
gem5-dev@gem5.org
http://m5sim.org/mailman/listinfo/gem5-dev


Re: [gem5-dev] Review Request 3502: mem: Split the hit_latency into tag_latency and data_latency

2016-10-24 Thread Sophiane SENNI

---
This is an automatically generated e-mail. To reply, visit:
http://reviews.gem5.org/r/3502/
---

(Updated Oct. 24, 2016, 2:56 p.m.)


Review request for Default.


Repository: gem5


Description (updated)
---

Changeset 11688:9dba209f1590
---
mem: Split the hit_latency into tag_latency and data_latency

If the cache access mode is parallel, i.e. "sequential_access" parameter
is set to "False", tags and data are accessed in parallel. Therefore,
the hit_latency is the maximum latency between tag_latency and
data_latency. On the other hand, if the cache access mode is
sequential, i.e. "sequential_access" parameter is set to "True",
tags and data are accessed sequentially. Therefore, the hit_latency
is the sum of tag_latency plus data_latency.


Diffs (updated)
-

  src/mem/cache/tags/base.cc 4aac82f10951 
  src/mem/cache/tags/base_set_assoc.hh 4aac82f10951 
  src/mem/cache/tags/fa_lru.cc 4aac82f10951 
  configs/common/Caches.py 4aac82f10951 
  configs/common/O3_ARM_v7a.py 4aac82f10951 
  configs/example/arm/devices.py 4aac82f10951 
  configs/learning_gem5/part1/caches.py 4aac82f10951 
  src/mem/cache/Cache.py 4aac82f10951 
  src/mem/cache/base.hh 4aac82f10951 
  src/mem/cache/base.cc 4aac82f10951 
  src/mem/cache/tags/Tags.py 4aac82f10951 
  src/mem/cache/tags/base.hh 4aac82f10951 

Diff: http://reviews.gem5.org/r/3502/diff/


Testing
---

Tested using --debug-flags=Cache


Thanks,

Sophiane SENNI

___
gem5-dev mailing list
gem5-dev@gem5.org
http://m5sim.org/mailman/listinfo/gem5-dev


Re: [gem5-dev] Review Request 3502: mem: Split the hit_latency into tag_latency and data_latency

2016-08-26 Thread Sophiane SENNI


> On July 27, 2016, 4:52 p.m., Andreas Hansson wrote:
> > src/mem/cache/tags/base_set_assoc.hh, line 228
> > <http://reviews.gem5.org/r/3502/diff/12/?file=57342#file57342line228>
> >
> > Could you add a comment here?
> > 
> > It seems to me this code is not right, as it checks if the data is 
> > technically written now, but we only need the data at time T.
> > 
> > Should we not rather add the dataLatency to the blk->whenReady and then 
> > do the plus or max operation?
> 
> Sophiane SENNI wrote:
> I actually don't really understand what this code represents, which was 
> already present before applying the patch. Because it seems to show a case 
> where the cache latency is greater than accessLatency, when the lat variable 
> is updated as follows:
> lat = cache->ticksToCycles(blk->whenReady - curTick())
> Can this situation actually occur ?
> 
> Andreas Hansson wrote:
> blk->whenReady represents the fact that the block is technically not 
> available yet. Due to how we do timing modelling we annotate the block when 
> it arrives, but have to remember when it is _actually_ available. Thus, 
> anything we do here should add on top of the blk->whenReady. Same for fa_lru

OK. So if I understood correctly, we actually need to apply the accessLatency
on top of the blk->whenReady. Hence, the correct code would be as follows:

if (blk->whenReady > curTick() &&
    cache->ticksToCycles(blk->whenReady - curTick()) > accessLatency) {
    lat = cache->ticksToCycles(blk->whenReady - curTick()) + accessLatency;
}

Does this change make more sense?
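
For illustration, a free-standing C++ sketch of the idea stated above, namely
charging the access latency on top of whatever time remains until the block
becomes ready. The names, plain integer types and stand-alone form are
assumptions for illustration only, not the patch nor gem5's actual interfaces:

#include <cstdint>

// whenReadyTick and curTick are absolute simulation ticks; accessLatency is
// expressed in cycles.
uint64_t hitLatencyCycles(uint64_t whenReadyTick, uint64_t curTick,
                          uint64_t ticksPerCycle, uint64_t accessLatency)
{
    // Cycles still to wait before the block's data is actually usable.
    uint64_t waitCycles = whenReadyTick > curTick ?
        (whenReadyTick - curTick) / ticksPerCycle : 0;
    // The tag/data access is then charged on top of that remaining wait.
    return waitCycles + accessLatency;
}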


- Sophiane


---
This is an automatically generated e-mail. To reply, visit:
http://reviews.gem5.org/r/3502/#review8552
---


On July 28, 2016, 10:31 a.m., Sophiane SENNI wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> http://reviews.gem5.org/r/3502/
> ---
> 
> (Updated July 28, 2016, 10:31 a.m.)
> 
> 
> Review request for Default.
> 
> 
> Repository: gem5
> 
> 
> Description
> ---
> 
> Changeset 11536:1a3a96d435ed
> ---
> mem: Split the hit_latency into tag_latency and data_latency
> 
> If the cache access mode is parallel, i.e. "sequential_access" parameter 
> is set to "False", tags and data are accessed in parallel. Therefore,
> the hit_latency is the maximum latency between tag_latency and
> data_latency. On the other hand, if the cache access mode is
> sequential, i.e. "sequential_access" parameter is set to "True", 
> tags and data are accessed sequentially. Therefore, the hit_latency
> is the sum of tag_latency plus data_latency.
> 
> 
> Diffs
> -
> 
>   configs/common/Caches.py 4aac82f109517217e6bfb3812689280e7a8fa842 
>   configs/common/O3_ARM_v7a.py 4aac82f109517217e6bfb3812689280e7a8fa842 
>   src/mem/cache/Cache.py 4aac82f109517217e6bfb3812689280e7a8fa842 
>   src/mem/cache/base.hh 4aac82f109517217e6bfb3812689280e7a8fa842 
>   src/mem/cache/base.cc 4aac82f109517217e6bfb3812689280e7a8fa842 
>   src/mem/cache/tags/Tags.py 4aac82f109517217e6bfb3812689280e7a8fa842 
>   src/mem/cache/tags/base.hh 4aac82f109517217e6bfb3812689280e7a8fa842 
>   src/mem/cache/tags/base.cc 4aac82f109517217e6bfb3812689280e7a8fa842 
>   src/mem/cache/tags/base_set_assoc.hh 
> 4aac82f109517217e6bfb3812689280e7a8fa842 
>   src/mem/cache/tags/fa_lru.cc 4aac82f109517217e6bfb3812689280e7a8fa842 
> 
> Diff: http://reviews.gem5.org/r/3502/diff/
> 
> 
> Testing
> ---
> 
> Tested using --debug-flags=Cache
> 
> 
> Thanks,
> 
> Sophiane SENNI
> 
>

___
gem5-dev mailing list
gem5-dev@gem5.org
http://m5sim.org/mailman/listinfo/gem5-dev


Re: [gem5-dev] Review Request 3502: mem: Split the hit_latency into tag_latency and data_latency

2016-07-29 Thread Sophiane SENNI


> On July 25, 2016, 1:18 p.m., Nikos Nikoleris wrote:
> > Ship It!
> 
> Sophiane SENNI wrote:
> How can I commit the patch ? I am not sure I have the commit access ?
> 
> Nikos Nikoleris wrote:
> You can't commit it yourself, one of the maintainers will have to commit 
> it for you.
> 
> Jason Lowe-Power wrote:
> Hi Sophiane,
> 
> Have you run all of the regression tests? Are there changes to the stats 
> (I would expect so)? Have you checked to make sure the stat changes make 
> intuitive sense? Thanks!
> 
> Also, it would be nice to see Andreas H., or Steve, or someone who has 
> been modifying the cache code lately to take a quick look at this patch 
> before it's committed.
> 
> Sophiane SENNI wrote:
> Hi Jason,
> 
> I did not run any regression test. I only checked (with 
> --debug-flags=Cache) if the cache access latencies were correct for:
> - hits and misses
> - parallel and sequential accesses
> Regarding the stats, I only checked if the "sim_seconds" changes were 
> intuitive when modifying:
> - tag_latency and data_latency
> - cache access mode (i.e. sequential_access = True or False)
> 
> Jason Lowe-Power wrote:
> Please run all of the regression tests that you can (e.g., the ones that 
> don't require proprietary binaries). I know there are some other config files 
> that will need to be changed (e.g., the learning_gem5 scripts), and there may 
> be others.
> 
> Also, I expect this will require another patch to update the stats, too. 
> But you don't have to post that one on reviewboard :).
> 
> Sophiane SENNI wrote:
> I ran the following command: "scons build/ARM/tests/opt/quick/se"
> The output seems to be the same as when compiling gem5 with the command 
> "scons build/ARM/gem5.opt". Is it normal ?
> I expected to get outputs like the following one: "* 
> build/ARM/tests/opt/quick/se/00.hello/arm/linux/minor-timing: passed."
> 
> Sophiane SENNI wrote:
> I successfully launched the regression tests. You are right. Some config 
> files need to be changed. For instance, the "O3_ARM_v7a.py" file where the 
> "hit_latency" has to be replaced by the two new parameters "tag_latency" and 
> "data_latency". I will try to make all necessary changes according to the 
> regression tests results.
> 
> Sophiane SENNI wrote:
> Jason,
> 
> Below are the results of the regression tests I ran:
> 
> * build/ARM/tests/opt/quick/se/00.hello/arm/linux/minor-timing: 
> passed.
> * build/ARM/tests/opt/quick/se/00.hello/arm/linux/o3-timing: passed.
> * build/ARM/tests/opt/quick/se/00.hello/arm/linux/o3-timing-checker: 
> passed.
> * build/ARM/tests/opt/quick/se/00.hello/arm/linux/simple-atomic: 
> passed.
> * 
> build/ARM/tests/opt/quick/se/00.hello/arm/linux/simple-atomic-dummychecker: 
> passed.
> * build/ARM/tests/opt/quick/se/00.hello/arm/linux/simple-timing: 
> passed.
> * 
> build/ARM/tests/opt/quick/se/03.learning-gem5/arm/linux/learning-gem5-p1-simple:
>  passed.
> * 
> build/ARM/tests/opt/quick/se/03.learning-gem5/arm/linux/learning-gem5-p1-two-level:
>  FAILED!
> 
> The last test failed with the following error (Do you know how to resolve 
> it ?):
> 
> File 
> "/home/senni/Partage/shared_folder_gem5/gem5/gem5_from_mercurial/gem5/tests/testing/../../configs/learning_gem5/part1/two_level.py",
>  line 93, in 
> system.cpu.icache = L1ICache(opts)
> NameError: name 'L1ICache' is not defined
> 
> 
> For the fs mode regression tests, it fails with the following error:
> 
> File 
> "/home/senni/Partage/shared_folder_gem5/gem5/gem5_from_mercurial/gem5/configs/common/SysPaths.py",
>  line 69, in system
> raise IOError, "Can't find a path to system files."
> IOError: Can't find a path to system files.
> 
> I have this error even if the $M5_PATH variable is correctly set.
> 
> Jason Lowe-Power wrote:
> I don't know why you're seeing that error for the FS tests. I would 
> suggest adding some print statements to the python config scripts. 
> Additionally, it may be helpful to run gem5 without the regression scripts to 
> track down these issues.
> 
> For learning_gem5... It should be working. I just ran the test on the 
> head and it passed. I imagine you need to modify 
> learning_gem5/part1/caches.py to add the tag/data latencies.

Ok for the FS tests.
For learning_gem5, that is odd. I replaced the hit_latencies by tag/data

Re: [gem5-dev] Review Request 3502: mem: Split the hit_latency into tag_latency and data_latency

2016-07-29 Thread Sophiane SENNI


> On July 25, 2016, 1:18 p.m., Nikos Nikoleris wrote:
> > Ship It!
> 
> Sophiane SENNI wrote:
> How can I commit the patch ? I am not sure I have the commit access ?
> 
> Nikos Nikoleris wrote:
> You can't commit it yourself, one of the maintainers will have to commit 
> it for you.
> 
> Jason Lowe-Power wrote:
> Hi Sophiane,
> 
> Have you run all of the regression tests? Are there changes to the stats 
> (I would expect so)? Have you checked to make sure the stat changes make 
> intuitive sense? Thanks!
> 
> Also, it would be nice to see Andreas H., or Steve, or someone who has 
> been modifying the cache code lately to take a quick look at this patch 
> before it's committed.
> 
> Sophiane SENNI wrote:
> Hi Jason,
> 
> I did not run any regression test. I only checked (with 
> --debug-flags=Cache) if the cache access latencies were correct for:
> - hits and misses
> - parallel and sequential accesses
> Regarding the stats, I only checked if the "sim_seconds" changes were 
> intuitive when modifying:
> - tag_latency and data_latency
> - cache access mode (i.e. sequential_access = True or False)
> 
> Jason Lowe-Power wrote:
> Please run all of the regression tests that you can (e.g., the ones that 
> don't require proprietary binaries). I know there are some other config files 
> that will need to be changed (e.g., the learning_gem5 scripts), and there may 
> be others.
> 
> Also, I expect this will require another patch to update the stats, too. 
> But you don't have to post that one on reviewboard :).
> 
> Sophiane SENNI wrote:
> I ran the following command: "scons build/ARM/tests/opt/quick/se"
> The output seems to be the same as when compiling gem5 with the command 
> "scons build/ARM/gem5.opt". Is it normal ?
> I expected to get outputs like the following one: "* 
> build/ARM/tests/opt/quick/se/00.hello/arm/linux/minor-timing: passed."
> 
> Sophiane SENNI wrote:
> I successfully launched the regression tests. You are right. Some config 
> files need to be changed. For instance, the "O3_ARM_v7a.py" file where the 
> "hit_latency" has to be replaced by the two new parameters "tag_latency" and 
> "data_latency". I will try to make all necessary changes according to the 
> regression tests results.

Jason,

Below are the results of the regression tests I ran:

* build/ARM/tests/opt/quick/se/00.hello/arm/linux/minor-timing: passed.
* build/ARM/tests/opt/quick/se/00.hello/arm/linux/o3-timing: passed.
* build/ARM/tests/opt/quick/se/00.hello/arm/linux/o3-timing-checker: passed.
* build/ARM/tests/opt/quick/se/00.hello/arm/linux/simple-atomic: passed.
* 
build/ARM/tests/opt/quick/se/00.hello/arm/linux/simple-atomic-dummychecker: 
passed.
* build/ARM/tests/opt/quick/se/00.hello/arm/linux/simple-timing: passed.
* 
build/ARM/tests/opt/quick/se/03.learning-gem5/arm/linux/learning-gem5-p1-simple:
 passed.
* 
build/ARM/tests/opt/quick/se/03.learning-gem5/arm/linux/learning-gem5-p1-two-level:
 FAILED!

The last test failed with the following error (Do you know how to resolve it ?):

File 
"/home/senni/Partage/shared_folder_gem5/gem5/gem5_from_mercurial/gem5/tests/testing/../../configs/learning_gem5/part1/two_level.py",
 line 93, in 
system.cpu.icache = L1ICache(opts)
NameError: name 'L1ICache' is not defined


For the fs mode regression tests, it fails with the following error:

File 
"/home/senni/Partage/shared_folder_gem5/gem5/gem5_from_mercurial/gem5/configs/common/SysPaths.py",
 line 69, in system
raise IOError, "Can't find a path to system files."
IOError: Can't find a path to system files.

I have this error even if the $M5_PATH variable is correctly set.


- Sophiane


---
This is an automatically generated e-mail. To reply, visit:
http://reviews.gem5.org/r/3502/#review8525
---


On July 28, 2016, 10:31 a.m., Sophiane SENNI wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> http://reviews.gem5.org/r/3502/
> ---
> 
> (Updated July 28, 2016, 10:31 a.m.)
> 
> 
> Review request for Default.
> 
> 
> Repository: gem5
> 
> 
> Description
> ---
> 
> Changeset 11536:1a3a96d435ed
> ---
> mem: Split the hit_latency into tag_latency and data_latency
> 
> If the cache access mode is parallel, i.e. "sequential_access" parameter 
> is set to "False

Re: [gem5-dev] Review Request 3502: mem: Split the hit_latency into tag_latency and data_latency

2016-07-28 Thread Sophiane SENNI

---
This is an automatically generated e-mail. To reply, visit:
http://reviews.gem5.org/r/3502/
---

(Updated July 28, 2016, 10:31 a.m.)


Review request for Default.


Repository: gem5


Description
---

Changeset 11536:1a3a96d435ed
---
mem: Split the hit_latency into tag_latency and data_latency

If the cache access mode is parallel, i.e. "sequential_access" parameter 
is set to "False", tags and data are accessed in parallel. Therefore,
the hit_latency is the maximum latency between tag_latency and
data_latency. On the other hand, if the cache access mode is
sequential, i.e. "sequential_access" parameter is set to "True", 
tags and data are accessed sequentially. Therefore, the hit_latency
is the sum of tag_latency plus data_latency.


Diffs (updated)
-

  configs/common/Caches.py 4aac82f109517217e6bfb3812689280e7a8fa842 
  configs/common/O3_ARM_v7a.py 4aac82f109517217e6bfb3812689280e7a8fa842 
  src/mem/cache/Cache.py 4aac82f109517217e6bfb3812689280e7a8fa842 
  src/mem/cache/base.hh 4aac82f109517217e6bfb3812689280e7a8fa842 
  src/mem/cache/base.cc 4aac82f109517217e6bfb3812689280e7a8fa842 
  src/mem/cache/tags/Tags.py 4aac82f109517217e6bfb3812689280e7a8fa842 
  src/mem/cache/tags/base.hh 4aac82f109517217e6bfb3812689280e7a8fa842 
  src/mem/cache/tags/base.cc 4aac82f109517217e6bfb3812689280e7a8fa842 
  src/mem/cache/tags/base_set_assoc.hh 4aac82f109517217e6bfb3812689280e7a8fa842 
  src/mem/cache/tags/fa_lru.cc 4aac82f109517217e6bfb3812689280e7a8fa842 

Diff: http://reviews.gem5.org/r/3502/diff/


Testing
---

Tested using --debug-flags=Cache


Thanks,

Sophiane SENNI

___
gem5-dev mailing list
gem5-dev@gem5.org
http://m5sim.org/mailman/listinfo/gem5-dev


Re: [gem5-dev] Review Request 3502: mem: Split the hit_latency into tag_latency and data_latency

2016-07-28 Thread Sophiane SENNI


> On July 27, 2016, 4:52 p.m., Andreas Hansson wrote:
> > src/mem/cache/tags/Tags.py, line 57
> > <http://reviews.gem5.org/r/3502/diff/12/?file=57339#file57339line57>
> >
> > Why would the tags care about the data latency?

This is required to initialize the "accessLatency" variable in tags/base.cc.
If I am right, the "accessLatency" variable declared in tags/base.hh is used to
account for the cache latency on every access (i.e. covering both the tag and
the data accesses). Previously, "accessLatency" was always assigned the
"hit_latency" value. With this patch, the "accessLatency" value is initialized
according to the cache access mode (parallel or sequential).


> On July 27, 2016, 4:52 p.m., Andreas Hansson wrote:
> > src/mem/cache/tags/base.hh, line 75
> > <http://reviews.gem5.org/r/3502/diff/12/?file=57340#file57340line75>
> >
> > Seems odd that the tags need to track this. Is this still the best 
> > division? Perhaps explain why it's here.

This can be removed. We actually do not need it since data latency is already 
included in accessLatency.


> On July 27, 2016, 4:52 p.m., Andreas Hansson wrote:
> > src/mem/cache/tags/base_set_assoc.hh, line 228
> > <http://reviews.gem5.org/r/3502/diff/12/?file=57342#file57342line228>
> >
> > Could you add a comment here?
> > 
> > It seems to me this code is not right, as it checks if the data is 
> > technically written now, but we only need the data at time T.
> > 
> > Should we not rather add the dataLatency to the blk->whenReady and then 
> > do the plus or max operation?

I actually don't really understand what this code represents; it was already
present before applying the patch. It seems to cover a case where the cache
latency is greater than accessLatency, since the lat variable is updated as
follows:
lat = cache->ticksToCycles(blk->whenReady - curTick())
Can this situation actually occur?


> On July 27, 2016, 4:52 p.m., Andreas Hansson wrote:
> > src/mem/cache/tags/fa_lru.cc, line 188
> > <http://reviews.gem5.org/r/3502/diff/12/?file=57343#file57343line188>
> >
> > here we don't care about blk->whenReady?

If we care about blk->whenReady in base_set_assoc.hh, I assume we also have to
care about it here.


- Sophiane


-------
This is an automatically generated e-mail. To reply, visit:
http://reviews.gem5.org/r/3502/#review8552
---


On July 25, 2016, 1:16 p.m., Sophiane SENNI wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> http://reviews.gem5.org/r/3502/
> ---
> 
> (Updated July 25, 2016, 1:16 p.m.)
> 
> 
> Review request for Default.
> 
> 
> Repository: gem5
> 
> 
> Description
> ---
> 
> Changeset 11536:1a3a96d435ed
> ---
> mem: Split the hit_latency into tag_latency and data_latency
> 
> If the cache access mode is parallel, i.e. "sequential_access" parameter 
> is set to "False", tags and data are accessed in parallel. Therefore,
> the hit_latency is the maximum latency between tag_latency and
> data_latency. On the other hand, if the cache access mode is
> sequential, i.e. "sequential_access" parameter is set to "True", 
> tags and data are accessed sequentially. Therefore, the hit_latency
> is the sum of tag_latency plus data_latency.
> 
> 
> Diffs
> -
> 
>   src/mem/cache/tags/fa_lru.cc 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
>   configs/common/Caches.py 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
>   src/mem/cache/Cache.py 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
>   src/mem/cache/base.hh 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
>   src/mem/cache/base.cc 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
>   src/mem/cache/tags/Tags.py 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
>   src/mem/cache/tags/base.hh 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
>   src/mem/cache/tags/base.cc 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
>   src/mem/cache/tags/base_set_assoc.hh 
> 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
> 
> Diff: http://reviews.gem5.org/r/3502/diff/
> 
> 
> Testing
> ---
> 
> Tested using --debug-flags=Cache
> 
> 
> Thanks,
> 
> Sophiane SENNI
> 
>

___
gem5-dev mailing list
gem5-dev@gem5.org
http://m5sim.org/mailman/listinfo/gem5-dev


Re: [gem5-dev] Review Request 3502: mem: Split the hit_latency into tag_latency and data_latency

2016-07-27 Thread Sophiane SENNI


> On July 25, 2016, 1:18 p.m., Nikos Nikoleris wrote:
> > Ship It!
> 
> Sophiane SENNI wrote:
> How can I commit the patch ? I am not sure I have the commit access ?
> 
> Nikos Nikoleris wrote:
> You can't commit it yourself, one of the maintainers will have to commit 
> it for you.
> 
> Jason Lowe-Power wrote:
> Hi Sophiane,
> 
> Have you run all of the regression tests? Are there changes to the stats 
> (I would expect so)? Have you checked to make sure the stat changes make 
> intuitive sense? Thanks!
> 
> Also, it would be nice to see Andreas H., or Steve, or someone who has 
> been modifying the cache code lately to take a quick look at this patch 
> before it's committed.
> 
> Sophiane SENNI wrote:
> Hi Jason,
> 
> I did not run any regression test. I only checked (with 
> --debug-flags=Cache) if the cache access latencies were correct for:
> - hits and misses
> - parallel and sequential accesses
> Regarding the stats, I only checked if the "sim_seconds" changes were 
> intuitive when modifying:
> - tag_latency and data_latency
> - cache access mode (i.e. sequential_access = True or False)
> 
> Jason Lowe-Power wrote:
> Please run all of the regression tests that you can (e.g., the ones that 
> don't require proprietary binaries). I know there are some other config files 
> that will need to be changed (e.g., the learning_gem5 scripts), and there may 
> be others.
> 
> Also, I expect this will require another patch to update the stats, too. 
> But you don't have to post that one on reviewboard :).
> 
> Sophiane SENNI wrote:
> I ran the following command: "scons build/ARM/tests/opt/quick/se"
> The output seems to be the same as when compiling gem5 with the command 
> "scons build/ARM/gem5.opt". Is it normal ?
> I expected to get outputs like the following one: "* 
> build/ARM/tests/opt/quick/se/00.hello/arm/linux/minor-timing: passed."

I successfully launched the regression tests. You are right. Some config files
need to be changed. For instance, in the "O3_ARM_v7a.py" file, "hit_latency"
has to be replaced by the two new parameters "tag_latency" and "data_latency".
I will try to make all necessary changes according to the regression test
results.


- Sophiane


-------
This is an automatically generated e-mail. To reply, visit:
http://reviews.gem5.org/r/3502/#review8525
---


On July 25, 2016, 1:16 p.m., Sophiane SENNI wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> http://reviews.gem5.org/r/3502/
> ---
> 
> (Updated July 25, 2016, 1:16 p.m.)
> 
> 
> Review request for Default.
> 
> 
> Repository: gem5
> 
> 
> Description
> ---
> 
> Changeset 11536:1a3a96d435ed
> ---
> mem: Split the hit_latency into tag_latency and data_latency
> 
> If the cache access mode is parallel, i.e. "sequential_access" parameter 
> is set to "False", tags and data are accessed in parallel. Therefore,
> the hit_latency is the maximum latency between tag_latency and
> data_latency. On the other hand, if the cache access mode is
> sequential, i.e. "sequential_access" parameter is set to "True", 
> tags and data are accessed sequentially. Therefore, the hit_latency
> is the sum of tag_latency plus data_latency.
> 
> 
> Diffs
> -
> 
>   src/mem/cache/tags/fa_lru.cc 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
>   configs/common/Caches.py 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
>   src/mem/cache/Cache.py 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
>   src/mem/cache/base.hh 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
>   src/mem/cache/base.cc 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
>   src/mem/cache/tags/Tags.py 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
>   src/mem/cache/tags/base.hh 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
>   src/mem/cache/tags/base.cc 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
>   src/mem/cache/tags/base_set_assoc.hh 
> 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
> 
> Diff: http://reviews.gem5.org/r/3502/diff/
> 
> 
> Testing
> ---
> 
> Tested using --debug-flags=Cache
> 
> 
> Thanks,
> 
> Sophiane SENNI
> 
>

___
gem5-dev mailing list
gem5-dev@gem5.org
http://m5sim.org/mailman/listinfo/gem5-dev


Re: [gem5-dev] Review Request 3502: mem: Split the hit_latency into tag_latency and data_latency

2016-07-27 Thread Sophiane SENNI


> On July 25, 2016, 1:18 p.m., Nikos Nikoleris wrote:
> > Ship It!
> 
> Sophiane SENNI wrote:
> How can I commit the patch ? I am not sure I have the commit access ?
> 
> Nikos Nikoleris wrote:
> You can't commit it yourself, one of the maintainers will have to commit 
> it for you.
> 
> Jason Lowe-Power wrote:
> Hi Sophiane,
> 
> Have you run all of the regression tests? Are there changes to the stats 
> (I would expect so)? Have you checked to make sure the stat changes make 
> intuitive sense? Thanks!
> 
> Also, it would be nice to see Andreas H., or Steve, or someone who has 
> been modifying the cache code lately to take a quick look at this patch 
> before it's committed.
> 
> Sophiane SENNI wrote:
> Hi Jason,
> 
> I did not run any regression test. I only checked (with 
> --debug-flags=Cache) if the cache access latencies were correct for:
> - hits and misses
> - parallel and sequential accesses
> Regarding the stats, I only checked if the "sim_seconds" changes were 
> intuitive when modifying:
> - tag_latency and data_latency
> - cache access mode (i.e. sequential_access = True or False)
> 
> Jason Lowe-Power wrote:
> Please run all of the regression tests that you can (e.g., the ones that 
> don't require proprietary binaries). I know there are some other config files 
> that will need to be changed (e.g., the learning_gem5 scripts), and there may 
> be others.
> 
> Also, I expect this will require another patch to update the stats, too. 
> But you don't have to post that one on reviewboard :).

I ran the following command: "scons build/ARM/tests/opt/quick/se"
The output seems to be the same as when compiling gem5 with the command "scons 
build/ARM/gem5.opt". Is it normal ?
I expected to get outputs like the following one: "* 
build/ARM/tests/opt/quick/se/00.hello/arm/linux/minor-timing: passed."


- Sophiane


---
This is an automatically generated e-mail. To reply, visit:
http://reviews.gem5.org/r/3502/#review8525
---


On July 25, 2016, 1:16 p.m., Sophiane SENNI wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> http://reviews.gem5.org/r/3502/
> ---
> 
> (Updated July 25, 2016, 1:16 p.m.)
> 
> 
> Review request for Default.
> 
> 
> Repository: gem5
> 
> 
> Description
> ---
> 
> Changeset 11536:1a3a96d435ed
> ---
> mem: Split the hit_latency into tag_latency and data_latency
> 
> If the cache access mode is parallel, i.e. "sequential_access" parameter 
> is set to "False", tags and data are accessed in parallel. Therefore,
> the hit_latency is the maximum latency between tag_latency and
> data_latency. On the other hand, if the cache access mode is
> sequential, i.e. "sequential_access" parameter is set to "True", 
> tags and data are accessed sequentially. Therefore, the hit_latency
> is the sum of tag_latency plus data_latency.
> 
> 
> Diffs
> -
> 
>   src/mem/cache/tags/fa_lru.cc 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
>   configs/common/Caches.py 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
>   src/mem/cache/Cache.py 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
>   src/mem/cache/base.hh 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
>   src/mem/cache/base.cc 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
>   src/mem/cache/tags/Tags.py 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
>   src/mem/cache/tags/base.hh 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
>   src/mem/cache/tags/base.cc 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
>   src/mem/cache/tags/base_set_assoc.hh 
> 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
> 
> Diff: http://reviews.gem5.org/r/3502/diff/
> 
> 
> Testing
> ---
> 
> Tested using --debug-flags=Cache
> 
> 
> Thanks,
> 
> Sophiane SENNI
> 
>

___
gem5-dev mailing list
gem5-dev@gem5.org
http://m5sim.org/mailman/listinfo/gem5-dev


Re: [gem5-dev] Review Request 3502: mem: Split the hit_latency into tag_latency and data_latency

2016-07-26 Thread Sophiane SENNI


> On July 25, 2016, 1:18 p.m., Nikos Nikoleris wrote:
> > Ship It!
> 
> Sophiane SENNI wrote:
> How can I commit the patch ? I am not sure I have the commit access ?
> 
> Nikos Nikoleris wrote:
> You can't commit it yourself, one of the maintainers will have to commit 
> it for you.
> 
> Jason Lowe-Power wrote:
> Hi Sophiane,
> 
> Have you run all of the regression tests? Are there changes to the stats 
> (I would expect so)? Have you checked to make sure the stat changes make 
> intuitive sense? Thanks!
> 
> Also, it would be nice to see Andreas H., or Steve, or someone who has 
> been modifying the cache code lately to take a quick look at this patch 
> before it's committed.

Hi Jason,

I did not run any regression test. I only checked (with --debug-flags=Cache) if 
the cache access latencies were correct for:
- hits and misses
- parallel and sequential accesses
Regarding the stats, I only checked if the "sim_seconds" changes were intuitive 
when modifying:
- tag_latency and data_latency
- cache access mode (i.e. sequential_access = True or False)


- Sophiane


---
This is an automatically generated e-mail. To reply, visit:
http://reviews.gem5.org/r/3502/#review8525
---


On July 25, 2016, 1:16 p.m., Sophiane SENNI wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> http://reviews.gem5.org/r/3502/
> ---
> 
> (Updated July 25, 2016, 1:16 p.m.)
> 
> 
> Review request for Default.
> 
> 
> Repository: gem5
> 
> 
> Description
> ---
> 
> Changeset 11536:1a3a96d435ed
> ---
> mem: Split the hit_latency into tag_latency and data_latency
> 
> If the cache access mode is parallel, i.e. "sequential_access" parameter 
> is set to "False", tags and data are accessed in parallel. Therefore,
> the hit_latency is the maximum latency between tag_latency and
> data_latency. On the other hand, if the cache access mode is
> sequential, i.e. "sequential_access" parameter is set to "True", 
> tags and data are accessed sequentially. Therefore, the hit_latency
> is the sum of tag_latency plus data_latency.
> 
> 
> Diffs
> -
> 
>   src/mem/cache/tags/fa_lru.cc 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
>   configs/common/Caches.py 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
>   src/mem/cache/Cache.py 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
>   src/mem/cache/base.hh 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
>   src/mem/cache/base.cc 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
>   src/mem/cache/tags/Tags.py 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
>   src/mem/cache/tags/base.hh 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
>   src/mem/cache/tags/base.cc 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
>   src/mem/cache/tags/base_set_assoc.hh 
> 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
> 
> Diff: http://reviews.gem5.org/r/3502/diff/
> 
> 
> Testing
> ---
> 
> Tested using --debug-flags=Cache
> 
> 
> Thanks,
> 
> Sophiane SENNI
> 
>

___
gem5-dev mailing list
gem5-dev@gem5.org
http://m5sim.org/mailman/listinfo/gem5-dev


Re: [gem5-dev] Review Request 3502: mem: Split the hit_latency into tag_latency and data_latency

2016-07-25 Thread Sophiane SENNI


> On July 25, 2016, 1:18 p.m., Nikos Nikoleris wrote:
> > Ship It!

How can I commit the patch? I am not sure I have commit access.


- Sophiane


---
This is an automatically generated e-mail. To reply, visit:
http://reviews.gem5.org/r/3502/#review8525
---


On July 25, 2016, 1:16 p.m., Sophiane SENNI wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> http://reviews.gem5.org/r/3502/
> ---
> 
> (Updated July 25, 2016, 1:16 p.m.)
> 
> 
> Review request for Default.
> 
> 
> Repository: gem5
> 
> 
> Description
> ---
> 
> Changeset 11536:1a3a96d435ed
> ---
> mem: Split the hit_latency into tag_latency and data_latency
> 
> If the cache access mode is parallel, i.e. "sequential_access" parameter 
> is set to "False", tags and data are accessed in parallel. Therefore,
> the hit_latency is the maximum latency between tag_latency and
> data_latency. On the other hand, if the cache access mode is
> sequential, i.e. "sequential_access" parameter is set to "True", 
> tags and data are accessed sequentially. Therefore, the hit_latency
> is the sum of tag_latency plus data_latency.
> 
> 
> Diffs
> -
> 
>   src/mem/cache/tags/fa_lru.cc 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
>   configs/common/Caches.py 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
>   src/mem/cache/Cache.py 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
>   src/mem/cache/base.hh 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
>   src/mem/cache/base.cc 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
>   src/mem/cache/tags/Tags.py 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
>   src/mem/cache/tags/base.hh 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
>   src/mem/cache/tags/base.cc 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
>   src/mem/cache/tags/base_set_assoc.hh 
> 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
> 
> Diff: http://reviews.gem5.org/r/3502/diff/
> 
> 
> Testing
> ---
> 
> Tested using --debug-flags=Cache
> 
> 
> Thanks,
> 
> Sophiane SENNI
> 
>

___
gem5-dev mailing list
gem5-dev@gem5.org
http://m5sim.org/mailman/listinfo/gem5-dev


Re: [gem5-dev] Review Request 3502: mem: Split the hit_latency into tag_latency and data_latency

2016-07-25 Thread Sophiane SENNI


> On July 23, 2016, 2:05 p.m., Nikos Nikoleris wrote:
> > src/mem/cache/tags/Tags.py, line 68
> > <http://reviews.gem5.org/r/3502/diff/11/?file=57299#file57299line68>
> >
> > This stale argument causes the problems you're facing; please remove it,
> > as I don't think it is needed

You are right. This change resolved the problem. Thanks Nikos.


- Sophiane


---
This is an automatically generated e-mail. To reply, visit:
http://reviews.gem5.org/r/3502/#review8519
---


On July 25, 2016, 1:16 p.m., Sophiane SENNI wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> http://reviews.gem5.org/r/3502/
> ---
> 
> (Updated July 25, 2016, 1:16 p.m.)
> 
> 
> Review request for Default.
> 
> 
> Repository: gem5
> 
> 
> Description
> ---
> 
> Changeset 11536:1a3a96d435ed
> ---
> mem: Split the hit_latency into tag_latency and data_latency
> 
> If the cache access mode is parallel, i.e. "sequential_access" parameter 
> is set to "False", tags and data are accessed in parallel. Therefore,
> the hit_latency is the maximum latency between tag_latency and
> data_latency. On the other hand, if the cache access mode is
> sequential, i.e. "sequential_access" parameter is set to "True", 
> tags and data are accessed sequentially. Therefore, the hit_latency
> is the sum of tag_latency plus data_latency.
> 
> 
> Diffs
> -
> 
>   src/mem/cache/tags/fa_lru.cc 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
>   configs/common/Caches.py 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
>   src/mem/cache/Cache.py 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
>   src/mem/cache/base.hh 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
>   src/mem/cache/base.cc 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
>   src/mem/cache/tags/Tags.py 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
>   src/mem/cache/tags/base.hh 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
>   src/mem/cache/tags/base.cc 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
>   src/mem/cache/tags/base_set_assoc.hh 
> 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
> 
> Diff: http://reviews.gem5.org/r/3502/diff/
> 
> 
> Testing
> ---
> 
> Tested using --debug-flags=Cache
> 
> 
> Thanks,
> 
> Sophiane SENNI
> 
>

___
gem5-dev mailing list
gem5-dev@gem5.org
http://m5sim.org/mailman/listinfo/gem5-dev


Re: [gem5-dev] Review Request 3502: mem: Split the hit_latency into tag_latency and data_latency

2016-07-25 Thread Sophiane SENNI

---
This is an automatically generated e-mail. To reply, visit:
http://reviews.gem5.org/r/3502/
---

(Updated July 25, 2016, 1:16 p.m.)


Review request for Default.


Repository: gem5


Description
---

Changeset 11536:1a3a96d435ed
---
mem: Split the hit_latency into tag_latency and data_latency

If the cache access mode is parallel, i.e. "sequential_access" parameter 
is set to "False", tags and data are accessed in parallel. Therefore,
the hit_latency is the maximum latency between tag_latency and
data_latency. On the other hand, if the cache access mode is
sequential, i.e. "sequential_access" parameter is set to "True", 
tags and data are accessed sequentially. Therefore, the hit_latency
is the sum of tag_latency plus data_latency.


Diffs (updated)
-

  src/mem/cache/tags/fa_lru.cc 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
  configs/common/Caches.py 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
  src/mem/cache/Cache.py 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
  src/mem/cache/base.hh 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
  src/mem/cache/base.cc 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
  src/mem/cache/tags/Tags.py 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
  src/mem/cache/tags/base.hh 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
  src/mem/cache/tags/base.cc 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
  src/mem/cache/tags/base_set_assoc.hh 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 

Diff: http://reviews.gem5.org/r/3502/diff/


Testing
---

Tested using --debug-flags=Cache


Thanks,

Sophiane SENNI

___
gem5-dev mailing list
gem5-dev@gem5.org
http://m5sim.org/mailman/listinfo/gem5-dev


Re: [gem5-dev] Review Request 3502: mem: Split the hit_latency into tag_latency and data_latency

2016-07-20 Thread Sophiane SENNI

---
This is an automatically generated e-mail. To reply, visit:
http://reviews.gem5.org/r/3502/
---

(Updated July 20, 2016, 1:32 p.m.)


Review request for Default.


Repository: gem5


Description
---

Changeset 11536:1a3a96d435ed
---
mem: Split the hit_latency into tag_latency and data_latency

If the cache access mode is parallel, i.e. "sequential_access" parameter 
is set to "False", tags and data are accessed in parallel. Therefore,
the hit_latency is the maximum latency between tag_latency and
data_latency. On the other hand, if the cache access mode is
sequential, i.e. "sequential_access" parameter is set to "True", 
tags and data are accessed sequentially. Therefore, the hit_latency
is the sum of tag_latency plus data_latency.


Diffs (updated)
-

  configs/common/Caches.py 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
  src/mem/cache/Cache.py 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
  src/mem/cache/base.hh 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
  src/mem/cache/base.cc 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
  src/mem/cache/tags/Tags.py 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
  src/mem/cache/tags/base.hh 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
  src/mem/cache/tags/base.cc 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
  src/mem/cache/tags/base_set_assoc.hh 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
  src/mem/cache/tags/fa_lru.cc 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 

Diff: http://reviews.gem5.org/r/3502/diff/


Testing
---

Tested using --debug-flags=Cache


Thanks,

Sophiane SENNI

___
gem5-dev mailing list
gem5-dev@gem5.org
http://m5sim.org/mailman/listinfo/gem5-dev


Re: [gem5-dev] Review Request 3502: mem: Split the hit_latency into tag_latency and data_latency

2016-07-20 Thread Sophiane SENNI


> On June 27, 2016, 5 p.m., Nikos Nikoleris wrote:
> > src/mem/cache/tags/base_set_assoc.hh, line 218
> > <http://reviews.gem5.org/r/3502/diff/8/?file=56390#file56390line218>
> >
> > Would it make sense to move most of this code in the constructor? The 
> > flag sequentialAccess and the condition lookupLatency >= dataLantency 
> > shouldn't change during the simulation.
> 
> Sophiane SENNI wrote:
> I think this code should be left here. Because the total access latency 
> now depends on both tag_latency and data_latency, the value of the 'lat' 
> parameter specifying the access latency is not correct when the accessBlock() 
> function is called. As a result, the 'lat' value should be updated inside the 
> function depending on if it is a sequential or parallel access.
> 
> Sophiane SENNI wrote:
> Regarding your suggestion, I agree with you that it would be better to 
> move the code in the constructor since as you mentioned the flag 
> sequentialAccess and the access latency value should not change during the 
> simulation. I will see how I can modify the patch accordingly.
> 
> Nikos Nikoleris wrote:
> Thanks for doing this! I think you could create a new variable in the 
> base class (BaseTags) and use that as the latency on every access. In any 
> case, in the code the latency is not dependent on the replacement policy. 
> Mind though that if the access is a miss the latency should always be 
> tagLatency, even when we've enabled the parallel access. We could also move 
> the sequentialAccess variable to BaseCache although I am not sure how 
> applicable a parallel lookup to the tag and the data array is for a fully 
> associative cache.
> 
> Nikos Nikoleris wrote:
> Sophiane, how are things progressing? It would be really great to get 
> done with this and commit it.
> 
> Sophiane SENNI wrote:
> Hi Nikos,
> 
> I was and I am still quite busy with some other work I have to finish. 
> But I will try to publish the new version of the patch as soon as possible. 
> Sorry for the delay.

Nikos,

For the new patch, I use the "accessLatency" variable in the base class
(BaseTags) as the latency on every access. I tried to initialize this variable
in the constructor according to the access mode as follows:

BaseTags::BaseTags(const Params *p)
    : ClockedObject(p), blkSize(p->block_size), size(p->size),
      lookupLatency(p->tag_latency), dataLatency(p->data_latency),
      accessLatency(p->sequential_access ?
                    // If sequential access, sum tag lookup and data access
                    // latencies
                    (p->tag_latency + p->data_latency) :
                    // If parallel access, take the max latency between
                    // tag lookup and data access
                    (p->tag_latency >= p->data_latency ?
                     p->tag_latency : p->data_latency)),
      cache(nullptr), warmupBound(0),
      warmedUp(false), numBlocks(0)
{
}

However, when checking by simulation, the value of the "sequential_access"
parameter is always taken as "False", even if this parameter is set to "True"
in configs/common/Caches.py.

Do you have a solution to correctly initialize the "accessLatency" variable in 
the initialization list ?

Thanks.


- Sophiane


---
This is an automatically generated e-mail. To reply, visit:
http://reviews.gem5.org/r/3502/#review8441
---


On June 28, 2016, 1:06 p.m., Sophiane SENNI wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> http://reviews.gem5.org/r/3502/
> ---
> 
> (Updated June 28, 2016, 1:06 p.m.)
> 
> 
> Review request for Default.
> 
> 
> Repository: gem5
> 
> 
> Description
> ---
> 
> Changeset 11536:1a3a96d435ed
> ---
> mem: Split the hit_latency into tag_latency and data_latency
> 
> If the cache access mode is parallel, i.e. "sequential_access" parameter 
> is set to "False", tags and data are accessed in parallel. Therefore,
> the hit_latency is the maximum latency between tag_latency and
> data_latency. On the other hand, if the cache access mode is
> sequential, i.e. "sequential_access" parameter is set to "True", 
> tags and data are accessed sequentially. Therefore, the hit_latency
> is the sum of tag_latency plus data_latency.
> 
> 
> Diffs
> -
> 
>   configs/common/Caches.py 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
>   src/mem/cache/C

Re: [gem5-dev] Review Request 3502: mem: Split the hit_latency into tag_latency and data_latency

2016-07-18 Thread Sophiane SENNI


> On June 27, 2016, 5 p.m., Nikos Nikoleris wrote:
> > src/mem/cache/tags/base_set_assoc.hh, line 218
> > <http://reviews.gem5.org/r/3502/diff/8/?file=56390#file56390line218>
> >
> > Would it make sense to move most of this code in the constructor? The 
> > flag sequentialAccess and the condition lookupLatency >= dataLantency 
> > shouldn't change during the simulation.
> 
> Sophiane SENNI wrote:
> I think this code should be left here. Because the total access latency 
> now depends on both tag_latency and data_latency, the value of the 'lat' 
> parameter specifying the access latency is not correct when the accessBlock() 
> function is called. As a result, the 'lat' value should be updated inside the 
> function depending on whether the access is sequential or parallel.
> 
> Sophiane SENNI wrote:
> Regarding your suggestion, I agree with you that it would be better to 
> move the code into the constructor since, as you mentioned, the flag 
> sequentialAccess and the access latency value should not change during the 
> simulation. I will see how I can modify the patch accordingly.
> 
> Nikos Nikoleris wrote:
> Thanks for doing this! I think you could create a new variable in the 
> base class (BaseTags) and use that as the latency on every access. In any 
> case, in the code the latency is not dependent on the replacement policy. 
> Mind though that if the access is a miss the latency should always be 
> tagLatency, even when we've enabled the parallel access. We could also move 
> the sequentialAccess variable to BaseCache although I am not sure how 
> applicable a parallel lookup to the tag and the data array is for a fully 
> associative cache.
> 
> Nikos Nikoleris wrote:
> Sophiane, how are things progressing? It would be really great to get 
> done with this and commit it.

Hi Nikos,

I have been, and still am, quite busy with some other work I have to finish, but I 
will try to publish the new version of the patch as soon as possible. Sorry for 
the delay.


- Sophiane


---
This is an automatically generated e-mail. To reply, visit:
http://reviews.gem5.org/r/3502/#review8441
---


On June 28, 2016, 1:06 p.m., Sophiane SENNI wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> http://reviews.gem5.org/r/3502/
> ---
> 
> (Updated June 28, 2016, 1:06 p.m.)
> 
> 
> Review request for Default.
> 
> 
> Repository: gem5
> 
> 
> Description
> ---
> 
> Changeset 11536:1a3a96d435ed
> ---
> mem: Split the hit_latency into tag_latency and data_latency
> 
> If the cache access mode is parallel, i.e. "sequential_access" parameter 
> is set to "False", tags and data are accessed in parallel. Therefore,
> the hit_latency is the maximum latency between tag_latency and
> data_latency. On the other hand, if the cache access mode is
> sequential, i.e. "sequential_access" parameter is set to "True", 
> tags and data are accessed sequentially. Therefore, the hit_latency
> is the sum of tag_latency plus data_latency.
> 
> 
> Diffs
> -
> 
>   configs/common/Caches.py 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
>   src/mem/cache/Cache.py 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
>   src/mem/cache/base.hh 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
>   src/mem/cache/tags/fa_lru.cc 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
>   src/mem/cache/tags/fa_lru.hh 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
>   src/mem/cache/tags/base_set_assoc.hh 
> 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
>   src/mem/cache/tags/base.cc 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
>   src/mem/cache/tags/base.hh 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
>   src/mem/cache/tags/Tags.py 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
>   src/mem/cache/base.cc 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
> 
> Diff: http://reviews.gem5.org/r/3502/diff/
> 
> 
> Testing
> ---
> 
> Tested using --Debug-flags=Cache
> 
> 
> Thanks,
> 
> Sophiane SENNI
> 
>

___
gem5-dev mailing list
gem5-dev@gem5.org
http://m5sim.org/mailman/listinfo/gem5-dev


Re: [gem5-dev] Review Request 3502: mem: Split the hit_latency into tag_latency and data_latency

2016-06-30 Thread Sophiane SENNI


> On June 27, 2016, 5 p.m., Nikos Nikoleris wrote:
> > src/mem/cache/tags/base_set_assoc.hh, line 218
> > <http://reviews.gem5.org/r/3502/diff/8/?file=56390#file56390line218>
> >
> > Would it make sense to move most of this code in the constructor? The 
> > flag sequentialAccess and the condition lookupLatency >= dataLantency 
> > shouldn't change during the simulation.
> 
> Sophiane SENNI wrote:
> I think this code should be left here. Because the total access latency 
> now depends on both tag_latency and data_latency, the value of the 'lat' 
> parameter specifying the access latency is not correct when the accessBlock() 
> function is called. As a result, the 'lat' value should be updated inside the 
> function depending on whether the access is sequential or parallel.

Regarding your suggestion, I agree with you that it would be better to move 
the code into the constructor since, as you mentioned, the flag sequentialAccess 
and the access latency value should not change during the simulation. I will 
see how I can modify the patch accordingly.


- Sophiane


---
This is an automatically generated e-mail. To reply, visit:
http://reviews.gem5.org/r/3502/#review8441
---


On June 28, 2016, 1:06 p.m., Sophiane SENNI wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> http://reviews.gem5.org/r/3502/
> ---
> 
> (Updated June 28, 2016, 1:06 p.m.)
> 
> 
> Review request for Default.
> 
> 
> Repository: gem5
> 
> 
> Description
> ---
> 
> Changeset 11536:1a3a96d435ed
> ---
> mem: Split the hit_latency into tag_latency and data_latency
> 
> If the cache access mode is parallel, i.e. "sequential_access" parameter 
> is set to "False", tags and data are accessed in parallel. Therefore,
> the hit_latency is the maximum latency between tag_latency and
> data_latency. On the other hand, if the cache access mode is
> sequential, i.e. "sequential_access" parameter is set to "True", 
> tags and data are accessed sequentially. Therefore, the hit_latency
> is the sum of tag_latency plus data_latency.
> 
> 
> Diffs
> -
> 
>   configs/common/Caches.py 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
>   src/mem/cache/Cache.py 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
>   src/mem/cache/base.hh 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
>   src/mem/cache/tags/fa_lru.cc 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
>   src/mem/cache/tags/fa_lru.hh 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
>   src/mem/cache/tags/base_set_assoc.hh 
> 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
>   src/mem/cache/tags/base.cc 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
>   src/mem/cache/tags/base.hh 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
>   src/mem/cache/tags/Tags.py 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
>   src/mem/cache/base.cc 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
> 
> Diff: http://reviews.gem5.org/r/3502/diff/
> 
> 
> Testing
> ---
> 
> Tested using --Debug-flags=Cache
> 
> 
> Thanks,
> 
> Sophiane SENNI
> 
>

___
gem5-dev mailing list
gem5-dev@gem5.org
http://m5sim.org/mailman/listinfo/gem5-dev


Re: [gem5-dev] Review Request 3502: mem: Split the hit_latency into tag_latency and data_latency

2016-06-28 Thread Sophiane SENNI

---
This is an automatically generated e-mail. To reply, visit:
http://reviews.gem5.org/r/3502/
---

(Updated June 28, 2016, 1:06 p.m.)


Review request for Default.


Repository: gem5


Description (updated)
---

Changeset 11536:1a3a96d435ed
---
mem: Split the hit_latency into tag_latency and data_latency

If the cache access mode is parallel, i.e. "sequential_access" parameter 
is set to "False", tags and data are accessed in parallel. Therefore,
the hit_latency is the maximum latency between tag_latency and
data_latency. On the other hand, if the cache access mode is
sequential, i.e. "sequential_access" parameter is set to "True", 
tags and data are accessed sequentially. Therefore, the hit_latency
is the sum of tag_latency plus data_latency.


Diffs (updated)
-

  configs/common/Caches.py 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
  src/mem/cache/Cache.py 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
  src/mem/cache/base.hh 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
  src/mem/cache/tags/fa_lru.cc 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
  src/mem/cache/tags/fa_lru.hh 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
  src/mem/cache/tags/base_set_assoc.hh 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
  src/mem/cache/tags/base.cc 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
  src/mem/cache/tags/base.hh 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
  src/mem/cache/tags/Tags.py 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
  src/mem/cache/base.cc 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 

Diff: http://reviews.gem5.org/r/3502/diff/


Testing
---

Tested using --Debug-flags=Cache


Thanks,

Sophiane SENNI

___
gem5-dev mailing list
gem5-dev@gem5.org
http://m5sim.org/mailman/listinfo/gem5-dev


Re: [gem5-dev] Review Request 3502: mem: Split the hit_latency into tag_latency and data_latency

2016-06-28 Thread Sophiane SENNI

---
This is an automatically generated e-mail. To reply, visit:
http://reviews.gem5.org/r/3502/
---

(Updated June 28, 2016, 9:57 a.m.)


Review request for Default.


Summary (updated)
-

mem: Split the hit_latency into tag_latency and data_latency


Repository: gem5


Description (updated)
---

Changeset 11536:1a3a96d435ed
---
mem: Split the hit_latency into tag_latency and data_latency

Sum the two latency values for sequential access. Otherwise, take the max.


Diffs (updated)
-

  src/mem/cache/tags/base_set_assoc.hh 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
  src/mem/cache/tags/fa_lru.hh 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
  src/mem/cache/base.cc 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
  src/mem/cache/base.hh 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
  src/mem/cache/Cache.py 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
  configs/common/Caches.py 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
  src/mem/cache/tags/fa_lru.cc 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
  src/mem/cache/tags/Tags.py 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
  src/mem/cache/tags/base.hh 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
  src/mem/cache/tags/base.cc 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 

Diff: http://reviews.gem5.org/r/3502/diff/


Testing
---

Tested using --Debug-flags=Cache


Thanks,

Sophiane SENNI

___
gem5-dev mailing list
gem5-dev@gem5.org
http://m5sim.org/mailman/listinfo/gem5-dev


Re: [gem5-dev] Review Request 3502: cache: Split the hit latency into tag lookup latency and data access latency

2016-06-28 Thread Sophiane SENNI


> On June 27, 2016, 5 p.m., Nikos Nikoleris wrote:
> > src/mem/cache/tags/base_set_assoc.hh, line 218
> > <http://reviews.gem5.org/r/3502/diff/8/?file=56390#file56390line218>
> >
> > Would it make sense to move most of this code in the constructor? The 
> > flag sequentialAccess and the condition lookupLatency >= dataLantency 
> > shouldn't change during the simulation.

I think this code should be left here. Because the total access latency now 
depends on both tag_latency and data_latency, the value of the 'lat' parameter 
specifying the access latency is not correct when the accessBlock() function is 
called. As a result, the 'lat' value should be updated inside the function 
depending on whether the access is sequential or parallel.
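
For illustration only (plain Python rather than the gem5 C++; the names mirror 
the lookupLatency, dataLatency and sequentialAccess members discussed above), the 
value the function has to produce on a hit is:

    # Sketch of the per-hit latency selection described above.
    def hit_latency(tag_latency, data_latency, sequential_access):
        if sequential_access:
            # Tags are looked up first, then the data array is read.
            return tag_latency + data_latency
        # Parallel lookup: bounded by the slower of the two arrays.
        return max(tag_latency, data_latency)

    assert hit_latency(2, 20, sequential_access=True) == 22
    assert hit_latency(2, 20, sequential_access=False) == 20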


> On June 27, 2016, 5 p.m., Nikos Nikoleris wrote:
> > src/mem/cache/tags/fa_lru.cc, line 212
> > <http://reviews.gem5.org/r/3502/diff/8/?file=56392#file56392line212>
> >
> > Same here

Same here


- Sophiane


---
This is an automatically generated e-mail. To reply, visit:
http://reviews.gem5.org/r/3502/#review8441
---


On June 20, 2016, 3:07 p.m., Sophiane SENNI wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> http://reviews.gem5.org/r/3502/
> ---
> 
> (Updated June 20, 2016, 3:07 p.m.)
> 
> 
> Review request for Default.
> 
> 
> Repository: gem5
> 
> 
> Description
> ---
> 
> Changeset 11536:1a3a96d435ed
> ---
> cache: Split the hit latency into tag lookup latency and data access latency
> 
> If the cache access mode is parallel ("sequential_access" parameter set to 
> "False"), tags and data are accessed in parallel. Therefore, the hit latency 
> is the maximum latency between tag lookup latency and data access latency. On 
> the other hand, if the cache access mode is sequential ("sequential_access" 
> parameter set to "True"), tags and data are accessed sequentially. Therefore, 
> the hit latency is the sum of tag lookup latency plus data access latency.
> 
> 
> Diffs
> -
> 
>   configs/common/Caches.py 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
>   src/mem/cache/Cache.py 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
>   src/mem/cache/base.hh 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
>   src/mem/cache/base.cc 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
>   src/mem/cache/tags/Tags.py 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
>   src/mem/cache/tags/base.hh 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
>   src/mem/cache/tags/base.cc 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
>   src/mem/cache/tags/base_set_assoc.hh 
> 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
>   src/mem/cache/tags/fa_lru.hh 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
>   src/mem/cache/tags/fa_lru.cc 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
> 
> Diff: http://reviews.gem5.org/r/3502/diff/
> 
> 
> Testing
> ---
> 
> Tested using --Debug-flags=Cache
> 
> 
> Thanks,
> 
> Sophiane SENNI
> 
>

___
gem5-dev mailing list
gem5-dev@gem5.org
http://m5sim.org/mailman/listinfo/gem5-dev


Re: [gem5-dev] Review Request 3502: cache: Split the hit latency into tag lookup latency and data access latency

2016-06-20 Thread Sophiane SENNI


> On June 17, 2016, 7:57 a.m., Pierre-Yves Péneau wrote:
> > I don't like the variable names; I think they are confusing, especially in 
> > the Python part, which is the user-facing part. "lookup_latency" does not 
> > clearly refer to the tag lookup action, and "ram_latency" is also not very 
> > clear. Maybe something like "tag_latency" and "line_latency" would be 
> > better? I think the two parts of a cache are well identified in this example.
> 
> Sophiane SENNI wrote:
> Hi Pierre-Yves,
> 
> I agree with you that the variable names in the Python part should not 
> be confusing for users. I reused the names from a previous discussion with 
> Andreas H.
> We need feedback from other users to see which names would be best. In 
> Cache, there are tag arrays and data arrays, so maybe "tag_latency" and 
> "data_line_latency" could be a solution.
> Any feedback from other gem5 users would be useful.
> 
> Sophiane
> 
> Radhika Jagtap wrote:
> Thanks for bringing this up. I vote for 'tag_latency' (or 
> 'tag_lookup_latency') and 'data_latency'.
> 
> If I understand correctly the patch has an impact on timing/stats only if 
> sequential access is set to True and in that case only affects the hit 
> latency. The timing on the miss path and allocation of mshr (mshr entry, mshr 
> target, write buffer entry, ask for mem-side bus access) still uses the 
> forwardLatency value. The forwardLatency used to be 'hit_latency' (at one 
> point not so far in the past everything was 'hit_latency' anyway!). But this 
> change makes a distinction between tag and data access and it is logical to 
> make forward latency equal to tag_latency. If you also had this analysis in 
> mind, please could you add a comment for forwardLatency somewhere?

The distinction between tag and data access also raises the question of the 
fillLatency value. Does the sequential access mode have to be taken into account 
on a fill? In this patch, the fillLatency is assumed to always be equal to 
data_latency.
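
To make the assumption explicit (a sketch of the mapping described above and in 
Radhika's comment, not code from the patch):

    # Assumed mapping onto the existing BaseCache latencies:
    tag_latency, data_latency = 2, 20
    forward_latency = tag_latency    # miss path, MSHR/write-buffer allocation
    fill_latency = data_latency      # writing the fetched block into the data array
    # sequential_access is intentionally not applied on a fill here.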


- Sophiane


---
This is an automatically generated e-mail. To reply, visit:
http://reviews.gem5.org/r/3502/#review8419
---


On June 20, 2016, 3:07 p.m., Sophiane SENNI wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> http://reviews.gem5.org/r/3502/
> ---
> 
> (Updated June 20, 2016, 3:07 p.m.)
> 
> 
> Review request for Default.
> 
> 
> Repository: gem5
> 
> 
> Description
> ---
> 
> Changeset 11536:1a3a96d435ed
> ---
> cache: Split the hit latency into tag lookup latency and data access latency
> 
> If the cache access mode is parallel ("sequential_access" parameter set to 
> "False"), tags and data are accessed in parallel. Therefore, the hit latency 
> is the maximum latency between tag lookup latency and data access latency. On 
> the other hand, if the cache access mode is sequential ("sequential_access" 
> parameter set to "True"), tags and data are accessed sequentially. Therefore, 
> the hit latency is the sum of tag lookup latency plus data access latency.
> 
> 
> Diffs
> -
> 
>   configs/common/Caches.py 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
>   src/mem/cache/Cache.py 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
>   src/mem/cache/base.hh 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
>   src/mem/cache/base.cc 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
>   src/mem/cache/tags/Tags.py 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
>   src/mem/cache/tags/base.hh 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
>   src/mem/cache/tags/base.cc 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
>   src/mem/cache/tags/base_set_assoc.hh 
> 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
>   src/mem/cache/tags/fa_lru.hh 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
>   src/mem/cache/tags/fa_lru.cc 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
> 
> Diff: http://reviews.gem5.org/r/3502/diff/
> 
> 
> Testing
> ---
> 
> Tested using --Debug-flags=Cache
> 
> 
> Thanks,
> 
> Sophiane SENNI
> 
>

___
gem5-dev mailing list
gem5-dev@gem5.org
http://m5sim.org/mailman/listinfo/gem5-dev


Re: [gem5-dev] Review Request 3502: cache: Split the hit latency into tag lookup latency and data access latency

2016-06-20 Thread Sophiane SENNI

---
This is an automatically generated e-mail. To reply, visit:
http://reviews.gem5.org/r/3502/
---

(Updated June 20, 2016, 3:07 p.m.)


Review request for Default.


Changes
---

New variable names in Caches.py: tag_latency and data_latency


Summary (updated)
-

cache: Split the hit latency into tag lookup latency and data access latency


Repository: gem5


Description (updated)
---

Changeset 11536:1a3a96d435ed
---
cache: Split the hit latency into tag lookup latency and data access latency

If the cache access mode is parallel ("sequential_access" parameter set to 
"False"), tags and data are accessed in parallel. Therefore, the hit latency is 
the maximum latency between tag lookup latency and data access latency. On the 
other hand, if the cache access mode is sequential ("sequential_access" 
parameter set to "True"), tags and data are accessed sequentially. Therefore, 
the hit latency is the sum of tag lookup latency plus data access latency.


Diffs (updated)
-

  configs/common/Caches.py 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
  src/mem/cache/Cache.py 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
  src/mem/cache/base.hh 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
  src/mem/cache/base.cc 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
  src/mem/cache/tags/Tags.py 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
  src/mem/cache/tags/base.hh 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
  src/mem/cache/tags/base.cc 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
  src/mem/cache/tags/base_set_assoc.hh 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
  src/mem/cache/tags/fa_lru.hh 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 
  src/mem/cache/tags/fa_lru.cc 80e79ae636ca6b021cbf7aa985b5fd56cb5b2708 

Diff: http://reviews.gem5.org/r/3502/diff/


Testing
---

Tested using --Debug-flags=Cache


Thanks,

Sophiane SENNI

___
gem5-dev mailing list
gem5-dev@gem5.org
http://m5sim.org/mailman/listinfo/gem5-dev


Re: [gem5-dev] Review Request 3502: cache: Split the hit latency into tag lookup latency and RAM access latency

2016-06-17 Thread Sophiane SENNI


> On June 17, 2016, 7:57 a.m., Pierre-Yves Péneau wrote:
> > I don't like the variable names; I think they are confusing, especially in 
> > the Python part, which is the user-facing part. "lookup_latency" does not 
> > clearly refer to the tag lookup action, and "ram_latency" is also not very 
> > clear. Maybe something like "tag_latency" and "line_latency" would be 
> > better? I think the two parts of a cache are well identified in this example.

Hi Pierre-Yves,

I agree with you that the variable names in the Python part should not be 
confusing for users. I reused the names from a previous discussion with Andreas 
H.
We need feedback from other users to see which names would be best. In 
Cache, there are tag arrays and data arrays, so maybe "tag_latency" and 
"data_line_latency" could be a solution.
Any feedback from other gem5 users would be useful.

Sophiane


- Sophiane


---
This is an automatically generated e-mail. To reply, visit:
http://reviews.gem5.org/r/3502/#review8419
---


On June 16, 2016, 6:55 p.m., Sophiane SENNI wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> http://reviews.gem5.org/r/3502/
> ---
> 
> (Updated June 16, 2016, 6:55 p.m.)
> 
> 
> Review request for Default.
> 
> 
> Repository: gem5
> 
> 
> Description
> ---
> 
> Changeset 11536:1a3a96d435ed
> ---
> cache: Split the hit latency into tag lookup latency and RAM access latency
> 
> If the cache access mode is parallel ("sequential_access" parameter set to 
> "False"), tags and RAMs are accessed in parallel. Therefore, the hit latency 
> is the maximum latency between tag lookup latency and RAM access latency. On 
> the other hand, if the cache access mode is sequential ("sequential_access" 
> parameter set to "True"), tags and RAM are accessed sequentially. Therefore, 
> the hit latency is the sum of tag lookup latency plus RAM access latency.
> 
> 
> Diffs
> -
> 
>   src/mem/cache/tags/fa_lru.hh 80e79ae636ca 
>   src/mem/cache/tags/base.cc 80e79ae636ca 
>   src/mem/cache/tags/Tags.py 80e79ae636ca 
>   src/mem/cache/tags/fa_lru.cc 80e79ae636ca 
>   src/mem/cache/tags/base_set_assoc.hh 80e79ae636ca 
>   src/mem/cache/tags/base.hh 80e79ae636ca 
>   configs/common/Caches.py 80e79ae636ca 
>   src/mem/cache/Cache.py 80e79ae636ca 
>   src/mem/cache/base.hh 80e79ae636ca 
>   src/mem/cache/base.cc 80e79ae636ca 
> 
> Diff: http://reviews.gem5.org/r/3502/diff/
> 
> 
> Testing
> ---
> 
> Tested using --Debug-flags=Cache
> 
> 
> Thanks,
> 
> Sophiane SENNI
> 
>

___
gem5-dev mailing list
gem5-dev@gem5.org
http://m5sim.org/mailman/listinfo/gem5-dev


Re: [gem5-dev] Review Request 3502: cache: Split the hit latency into tag lookup latency and RAM access latency

2016-06-16 Thread Sophiane SENNI

---
This is an automatically generated e-mail. To reply, visit:
http://reviews.gem5.org/r/3502/
---

(Updated June 16, 2016, 6:55 p.m.)


Review request for Default.


Repository: gem5


Description (updated)
---

Changeset 11536:1a3a96d435ed
---
cache: Split the hit latency into tag lookup latency and RAM access latency

If the cache access mode is parallel ("sequential_access" parameter set to 
"False"), tags and RAMs are accessed in parallel. Therefore, the hit latency is 
the maximum latency between tag lookup latency and RAM access latency. On the 
other hand, if the cache access mode is sequential ("sequential_access" 
parameter set to "True"), tags and RAM are accessed sequentially. Therefore, 
the hit latency is the sum of tag lookup latency plus RAM access latency.


Diffs (updated)
-

  src/mem/cache/tags/fa_lru.hh 80e79ae636ca 
  src/mem/cache/tags/base.cc 80e79ae636ca 
  src/mem/cache/tags/Tags.py 80e79ae636ca 
  src/mem/cache/tags/fa_lru.cc 80e79ae636ca 
  src/mem/cache/tags/base_set_assoc.hh 80e79ae636ca 
  src/mem/cache/tags/base.hh 80e79ae636ca 
  configs/common/Caches.py 80e79ae636ca 
  src/mem/cache/Cache.py 80e79ae636ca 
  src/mem/cache/base.hh 80e79ae636ca 
  src/mem/cache/base.cc 80e79ae636ca 

Diff: http://reviews.gem5.org/r/3502/diff/


Testing
---

Tested using --Debug-flags=Cache


Thanks,

Sophiane SENNI

___
gem5-dev mailing list
gem5-dev@gem5.org
http://m5sim.org/mailman/listinfo/gem5-dev


Re: [gem5-dev] Review Request 3502: cache: Split the hit latency into tag lookup latency and RAM access latency

2016-06-16 Thread Sophiane SENNI


> On June 16, 2016, 2:37 p.m., Jason Lowe-Power wrote:
> > Hi Sophiane,
> > 
> > Thanks for the contribution. It looks like some of the patch doesn't apply 
> > cleanly in reviewboard. Did you use the hg postreview extension? It may 
> > also help to use the "-o" option on the extension.
> > 
> > Cheers!
> 
> Sophiane SENNI wrote:
> Hi Jason,
> 
> If I use the hg postreview extension (with the following command: hg 
> postreview -o -u -e 3502), the whole patch does not apply cleanly.
> 
> Jason Lowe-Power wrote:
> Make sure you're applying your patch on top of the most recent version of 
> gem5 (NOT gem5-stable). "hg incoming" and "hg pull" may be helpful.
> 
> For instance, I believe BaseCache.py was renamed Cache.py in the last few 
> months (I don't remember exactly when).
> 
> Sophiane SENNI wrote:
> For the currently posted patch, I used the command "hg diff -g" and then 
> posted the patch manually through the reviewboard GUI. But this method does 
> not work properly either. As you noticed, some of the patch doesn't apply 
> cleanly.

You are right, the patch was applied to gem5-stable. I will apply it to the 
most recent version. 
Thanks.


- Sophiane


---
This is an automatically generated e-mail. To reply, visit:
http://reviews.gem5.org/r/3502/#review8413
---


On June 16, 2016, 3:27 p.m., Sophiane SENNI wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> http://reviews.gem5.org/r/3502/
> ---
> 
> (Updated June 16, 2016, 3:27 p.m.)
> 
> 
> Review request for Default.
> 
> 
> Repository: gem5
> 
> 
> Description
> ---
> 
> Changeset 10875:dd94e2606640
> ---
> cache: Split the hit latency into tag lookup latency and RAM access latency
> 
> If the cache access mode is parallel ("sequential_access" parameter set to 
> "False"), tags and RAMs are accessed in parallel. Therefore, the hit latency 
> is the maximum latency between tag lookup latency and RAM access latency. On 
> the other hand, if the cache access mode is sequential ("sequential_access" 
> parameter set to "True"), tags and RAM are accessed sequentially. Therefore, 
> the hit latency is the sum of tag lookup latency plus RAM access latency.
> 
> 
> Diffs
> -
> 
>   src/mem/cache/tags/fa_lru.cc UNKNOWN 
>   src/mem/cache/tags/fa_lru.hh UNKNOWN 
>   src/mem/cache/tags/base_set_assoc.hh UNKNOWN 
>   src/mem/cache/tags/base.cc UNKNOWN 
>   src/mem/cache/tags/base.hh UNKNOWN 
>   src/mem/cache/tags/Tags.py UNKNOWN 
>   src/mem/cache/base.hh UNKNOWN 
>   src/mem/cache/base.cc UNKNOWN 
>   src/mem/cache/BaseCache.py UNKNOWN 
>   configs/common/Caches.py UNKNOWN 
> 
> Diff: http://reviews.gem5.org/r/3502/diff/
> 
> 
> Testing
> ---
> 
> Tested using --Debug-flags=Cache
> 
> 
> Thanks,
> 
> Sophiane SENNI
> 
>

___
gem5-dev mailing list
gem5-dev@gem5.org
http://m5sim.org/mailman/listinfo/gem5-dev


Re: [gem5-dev] Review Request 3502: cache: Split the hit latency into tag lookup latency and RAM access latency

2016-06-16 Thread Sophiane SENNI


> On June 16, 2016, 2:37 p.m., Jason Lowe-Power wrote:
> > Hi Sophiane,
> > 
> > Thanks for the contribution. It looks like some of the patch doesn't apply 
> > cleanly in reviewboard. Did you use the hg postreview extension? It may 
> > also help to use the "-o" option on the extension.
> > 
> > Cheers!
> 
> Sophiane SENNI wrote:
> Hi Jason,
> 
> If I use the hg postreview extension (with the following command: hg 
> postreview -o -u -e 3502), the whole patch does not apply cleanly.
> 
> Jason Lowe-Power wrote:
> Make sure you're applying your patch on top of the most recent version of 
> gem5 (NOT gem5-stable). "hg incoming" and "hg pull" may be helpful.
> 
> For instance, I believe BaseCache.py was renamed Cache.py in the last few 
> months (I don't remember exactly when).

For the currently posted patch, I used the command "hg diff -g" and then posted 
the patch manually through the reviewboard GUI. But this method does not work 
properly either. As you noticed, some of the patch doesn't apply cleanly.


- Sophiane


---
This is an automatically generated e-mail. To reply, visit:
http://reviews.gem5.org/r/3502/#review8413
---


On June 16, 2016, 3:27 p.m., Sophiane SENNI wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> http://reviews.gem5.org/r/3502/
> ---
> 
> (Updated June 16, 2016, 3:27 p.m.)
> 
> 
> Review request for Default.
> 
> 
> Repository: gem5
> 
> 
> Description
> ---
> 
> Changeset 10875:dd94e2606640
> ---
> cache: Split the hit latency into tag lookup latency and RAM access latency
> 
> If the cache access mode is parallel ("sequential_access" parameter set to 
> "False"), tags and RAMs are accessed in parallel. Therefore, the hit latency 
> is the maximum latency between tag lookup latency and RAM access latency. On 
> the other hand, if the cache access mode is sequential ("sequential_access" 
> parameter set to "True"), tags and RAM are accessed sequentially. Therefore, 
> the hit latency is the sum of tag lookup latency plus RAM access latency.
> 
> 
> Diffs
> -
> 
>   src/mem/cache/tags/fa_lru.cc UNKNOWN 
>   src/mem/cache/tags/fa_lru.hh UNKNOWN 
>   src/mem/cache/tags/base_set_assoc.hh UNKNOWN 
>   src/mem/cache/tags/base.cc UNKNOWN 
>   src/mem/cache/tags/base.hh UNKNOWN 
>   src/mem/cache/tags/Tags.py UNKNOWN 
>   src/mem/cache/base.hh UNKNOWN 
>   src/mem/cache/base.cc UNKNOWN 
>   src/mem/cache/BaseCache.py UNKNOWN 
>   configs/common/Caches.py UNKNOWN 
> 
> Diff: http://reviews.gem5.org/r/3502/diff/
> 
> 
> Testing
> ---
> 
> Tested using --Debug-flags=Cache
> 
> 
> Thanks,
> 
> Sophiane SENNI
> 
>

___
gem5-dev mailing list
gem5-dev@gem5.org
http://m5sim.org/mailman/listinfo/gem5-dev


Re: [gem5-dev] Review Request 3502: cache: Split the hit latency into tag lookup latency and RAM access latency

2016-06-16 Thread Sophiane SENNI

---
This is an automatically generated e-mail. To reply, visit:
http://reviews.gem5.org/r/3502/
---

(Updated June 16, 2016, 3:27 p.m.)


Review request for Default.


Repository: gem5


Description
---

Changeset 10875:dd94e2606640
---
cache: Split the hit latency into tag lookup latency and RAM access latency

If the cache access mode is parallel ("sequential_access" parameter set to 
"False"), tags and RAMs are accessed in parallel. Therefore, the hit latency is 
the maximum latency between tag lookup latency and RAM access latency. On the 
other hand, if the cache access mode is sequential ("sequential_access" 
parameter set to "True"), tags and RAM are accessed sequentially. Therefore, 
the hit latency is the sum of tag lookup latency plus RAM access latency.


Diffs (updated)
-

  src/mem/cache/tags/fa_lru.cc UNKNOWN 
  src/mem/cache/tags/fa_lru.hh UNKNOWN 
  src/mem/cache/tags/base_set_assoc.hh UNKNOWN 
  src/mem/cache/tags/base.cc UNKNOWN 
  src/mem/cache/tags/base.hh UNKNOWN 
  src/mem/cache/tags/Tags.py UNKNOWN 
  src/mem/cache/base.hh UNKNOWN 
  src/mem/cache/base.cc UNKNOWN 
  src/mem/cache/BaseCache.py UNKNOWN 
  configs/common/Caches.py UNKNOWN 

Diff: http://reviews.gem5.org/r/3502/diff/


Testing
---

Tested using --Debug-flags=Cache


Thanks,

Sophiane SENNI

___
gem5-dev mailing list
gem5-dev@gem5.org
http://m5sim.org/mailman/listinfo/gem5-dev


Re: [gem5-dev] Review Request 3502: cache: Split the hit latency into tag lookup latency and RAM access latency

2016-06-16 Thread Sophiane SENNI


> On June 16, 2016, 2:37 p.m., Jason Lowe-Power wrote:
> > Hi Sophiane,
> > 
> > Thanks for the contribution. It looks like some of the patch doesn't apply 
> > cleanly in reviewboard. Did you use the hg postreview extension? It may 
> > also help to use the "-o" option on the extension.
> > 
> > Cheers!

Hi Jason,

If I use the hg postreview extension (with the following command: hg postreview 
-o -u -e 3502), the whole patch does not apply cleanly.


- Sophiane


---
This is an automatically generated e-mail. To reply, visit:
http://reviews.gem5.org/r/3502/#review8413
-------


On June 16, 2016, 3:15 p.m., Sophiane SENNI wrote:
> 
> ---
> This is an automatically generated e-mail. To reply, visit:
> http://reviews.gem5.org/r/3502/
> ---
> 
> (Updated June 16, 2016, 3:15 p.m.)
> 
> 
> Review request for Default.
> 
> 
> Repository: gem5
> 
> 
> Description
> ---
> 
> Changeset 10875:dd94e2606640
> ---
> cache: Split the hit latency into tag lookup latency and RAM access latency
> 
> If the cache access mode is parallel ("sequential_access" parameter set to 
> "False"), tags and RAMs are accessed in parallel. Therefore, the hit latency 
> is the maximum latency between tag lookup latency and RAM access latency. On 
> the other hand, if the cache access mode is sequential ("sequential_access" 
> parameter set to "True"), tags and RAM are accessed sequentially. Therefore, 
> the hit latency is the sum of tag lookup latency plus RAM access latency.
> 
> 
> Diffs
> -
> 
>   configs/common/Caches.py 629fe6e6c781ec542bcd1cfda0217dfc51c4826b 
>   src/mem/cache/BaseCache.py 629fe6e6c781ec542bcd1cfda0217dfc51c4826b 
>   src/mem/cache/base.hh 629fe6e6c781ec542bcd1cfda0217dfc51c4826b 
>   src/mem/cache/base.cc 629fe6e6c781ec542bcd1cfda0217dfc51c4826b 
>   src/mem/cache/tags/Tags.py 629fe6e6c781ec542bcd1cfda0217dfc51c4826b 
>   src/mem/cache/tags/base.hh 629fe6e6c781ec542bcd1cfda0217dfc51c4826b 
>   src/mem/cache/tags/base.cc 629fe6e6c781ec542bcd1cfda0217dfc51c4826b 
>   src/mem/cache/tags/base_set_assoc.hh 
> 629fe6e6c781ec542bcd1cfda0217dfc51c4826b 
>   src/mem/cache/tags/fa_lru.hh 629fe6e6c781ec542bcd1cfda0217dfc51c4826b 
>   src/mem/cache/tags/fa_lru.cc 629fe6e6c781ec542bcd1cfda0217dfc51c4826b 
> 
> Diff: http://reviews.gem5.org/r/3502/diff/
> 
> 
> Testing
> ---
> 
> Tested using --Debug-flags=Cache
> 
> 
> Thanks,
> 
> Sophiane SENNI
> 
>

___
gem5-dev mailing list
gem5-dev@gem5.org
http://m5sim.org/mailman/listinfo/gem5-dev


Re: [gem5-dev] Review Request 3502: cache: Split the hit latency into tag lookup latency and RAM access latency

2016-06-16 Thread Sophiane SENNI

---
This is an automatically generated e-mail. To reply, visit:
http://reviews.gem5.org/r/3502/
---

(Updated June 16, 2016, 3:15 p.m.)


Review request for Default.


Repository: gem5


Description
---

Changeset 10875:dd94e2606640
---
cache: Split the hit latency into tag lookup latency and RAM access latency

If the cache access mode is parallel ("sequential_access" parameter set to 
"False"), tags and RAMs are accessed in parallel. Therefore, the hit latency is 
the maximum latency between tag lookup latency and RAM access latency. On the 
other hand, if the cache access mode is sequential ("sequential_access" 
parameter set to "True"), tags and RAM are accessed sequentially. Therefore, 
the hit latency is the sum of tag lookup latency plus RAM access latency.


Diffs (updated)
-

  configs/common/Caches.py 629fe6e6c781ec542bcd1cfda0217dfc51c4826b 
  src/mem/cache/BaseCache.py 629fe6e6c781ec542bcd1cfda0217dfc51c4826b 
  src/mem/cache/base.hh 629fe6e6c781ec542bcd1cfda0217dfc51c4826b 
  src/mem/cache/base.cc 629fe6e6c781ec542bcd1cfda0217dfc51c4826b 
  src/mem/cache/tags/Tags.py 629fe6e6c781ec542bcd1cfda0217dfc51c4826b 
  src/mem/cache/tags/base.hh 629fe6e6c781ec542bcd1cfda0217dfc51c4826b 
  src/mem/cache/tags/base.cc 629fe6e6c781ec542bcd1cfda0217dfc51c4826b 
  src/mem/cache/tags/base_set_assoc.hh 629fe6e6c781ec542bcd1cfda0217dfc51c4826b 
  src/mem/cache/tags/fa_lru.hh 629fe6e6c781ec542bcd1cfda0217dfc51c4826b 
  src/mem/cache/tags/fa_lru.cc 629fe6e6c781ec542bcd1cfda0217dfc51c4826b 

Diff: http://reviews.gem5.org/r/3502/diff/


Testing
---

Tested using --Debug-flags=Cache


Thanks,

Sophiane SENNI

___
gem5-dev mailing list
gem5-dev@gem5.org
http://m5sim.org/mailman/listinfo/gem5-dev


Re: [gem5-dev] Review Request 3502: cache: Split the hit latency into tag lookup latency and RAM access latency

2016-06-16 Thread Sophiane SENNI

---
This is an automatically generated e-mail. To reply, visit:
http://reviews.gem5.org/r/3502/
---

(Updated June 16, 2016, 3:14 p.m.)


Review request for Default.


Repository: gem5


Description (updated)
---

Changeset 10875:dd94e2606640
---
cache: Split the hit latency into tag lookup latency and RAM access latency

If the cache access mode is parallel ("sequential_access" parameter set to 
"False"), tags and RAMs are accessed in parallel. Therefore, the hit latency is 
the maximum latency between tag lookup latency and RAM access latency. On the 
other hand, if the cache access mode is sequential ("sequential_access" 
parameter set to "True"), tags and RAM are accessed sequentially. Therefore, 
the hit latency is the sum of tag lookup latency plus RAM access latency.


Diffs (updated)
-

  configs/common/Caches.py 629fe6e6c781 
  src/mem/cache/BaseCache.py 629fe6e6c781 
  src/mem/cache/base.hh 629fe6e6c781 
  src/mem/cache/base.cc 629fe6e6c781 
  src/mem/cache/tags/Tags.py 629fe6e6c781 
  src/mem/cache/tags/base.hh 629fe6e6c781 
  src/mem/cache/tags/base.cc 629fe6e6c781 
  src/mem/cache/tags/base_set_assoc.hh 629fe6e6c781 
  src/mem/cache/tags/fa_lru.hh 629fe6e6c781 
  src/mem/cache/tags/fa_lru.cc 629fe6e6c781 

Diff: http://reviews.gem5.org/r/3502/diff/


Testing
---

Tested using --Debug-flags=Cache


Thanks,

Sophiane SENNI

___
gem5-dev mailing list
gem5-dev@gem5.org
http://m5sim.org/mailman/listinfo/gem5-dev


Re: [gem5-dev] Review Request 3502: cache: Split the hit latency into tag lookup latency and RAM access latency

2016-06-16 Thread Sophiane SENNI

---
This is an automatically generated e-mail. To reply, visit:
http://reviews.gem5.org/r/3502/
---

(Updated June 16, 2016, 1:37 p.m.)


Review request for Default.


Repository: gem5


Description
---

Changeset 10875:b498767cb7d8
---
cache: Split the hit latency into tag lookup latency and RAM access latency

If the cache access mode is parallel ("sequential_access" parameter set to 
"False"), tags and RAMs are accessed in parallel. Therefore, the hit latency is 
the maximum latency between tag lookup latency and RAM access latency. On the 
other hand, if the cache access mode is sequential ("sequential_access" 
parameter set to "True"), tags and RAM are accessed sequentially. Therefore, 
the hit latency is the sum of tag lookup latency plus RAM access latency.


Diffs (updated)
-

  configs/common/Caches.py UNKNOWN 
  src/mem/cache/BaseCache.py UNKNOWN 
  src/mem/cache/base.hh UNKNOWN 
  src/mem/cache/base.cc UNKNOWN 
  src/mem/cache/tags/Tags.py UNKNOWN 
  src/mem/cache/tags/base.hh UNKNOWN 
  src/mem/cache/tags/base.cc UNKNOWN 
  src/mem/cache/tags/base_set_assoc.hh UNKNOWN 
  src/mem/cache/tags/fa_lru.hh UNKNOWN 
  src/mem/cache/tags/fa_lru.cc UNKNOWN 

Diff: http://reviews.gem5.org/r/3502/diff/


Testing
---

Tested using --Debug-flags=Cache


Thanks,

Sophiane SENNI

___
gem5-dev mailing list
gem5-dev@gem5.org
http://m5sim.org/mailman/listinfo/gem5-dev


Re: [gem5-dev] Review Request 3502: cache: Split the hit latency into tag lookup latency and RAM access latency

2016-06-15 Thread Sophiane SENNI

---
This is an automatically generated e-mail. To reply, visit:
http://reviews.gem5.org/r/3502/
---

(Updated June 15, 2016, 2:43 p.m.)


Review request for Default.


Repository: gem5


Description (updated)
---

Changeset 10875:b498767cb7d8
---
cache: Split the hit latency into tag lookup latency and RAM access latency

If the cache access mode is parallel ("sequential_access" parameter set to 
"False"), tags and RAMs are accessed in parallel. Therefore, the hit latency is 
the maximum latency between tag lookup latency and RAM access latency. On the 
other hand, if the cache access mode is sequential ("sequential_access" 
parameter set to "True"), tags and RAM are accessed sequentially. Therefore, 
the hit latency is the sum of tag lookup latency plus RAM access latency.


Diffs
-

  configs/common/Caches.py 629fe6e6c781 
  src/mem/cache/tags/base_set_assoc.hh 629fe6e6c781 
  src/mem/cache/tags/fa_lru.hh 629fe6e6c781 
  src/mem/cache/tags/fa_lru.cc 629fe6e6c781 
  src/mem/cache/BaseCache.py 629fe6e6c781 
  src/mem/cache/base.hh 629fe6e6c781 
  src/mem/cache/base.cc 629fe6e6c781 
  src/mem/cache/tags/Tags.py 629fe6e6c781 
  src/mem/cache/tags/base.hh 629fe6e6c781 
  src/mem/cache/tags/base.cc 629fe6e6c781 

Diff: http://reviews.gem5.org/r/3502/diff/


Testing
---

Tested using --Debug-flags=Cache


Thanks,

Sophiane SENNI

___
gem5-dev mailing list
gem5-dev@gem5.org
http://m5sim.org/mailman/listinfo/gem5-dev


Re: [gem5-dev] Review Request 3502: cache: Split the hit latency into tag lookup latency and RAM access latency

2016-06-15 Thread Sophiane SENNI

---
This is an automatically generated e-mail. To reply, visit:
http://reviews.gem5.org/r/3502/
---

(Updated June 15, 2016, 2:34 p.m.)


Review request for Default.


Summary (updated)
-

cache: Split the hit latency into tag lookup latency and RAM access latency


Repository: gem5


Description (updated)
---

Changeset 10875:b498767cb7d8
---
cache: Split the hit latency into tag lookup latency and RAM access latency


Diffs (updated)
-

  configs/common/Caches.py 629fe6e6c781 
  src/mem/cache/tags/base_set_assoc.hh 629fe6e6c781 
  src/mem/cache/tags/fa_lru.hh 629fe6e6c781 
  src/mem/cache/tags/fa_lru.cc 629fe6e6c781 
  src/mem/cache/BaseCache.py 629fe6e6c781 
  src/mem/cache/base.hh 629fe6e6c781 
  src/mem/cache/base.cc 629fe6e6c781 
  src/mem/cache/tags/Tags.py 629fe6e6c781 
  src/mem/cache/tags/base.hh 629fe6e6c781 
  src/mem/cache/tags/base.cc 629fe6e6c781 

Diff: http://reviews.gem5.org/r/3502/diff/


Testing
---

Tested using --Debug-flags=Cache


Thanks,

Sophiane SENNI

___
gem5-dev mailing list
gem5-dev@gem5.org
http://m5sim.org/mailman/listinfo/gem5-dev


[gem5-dev] Review Request 3502: Split the hit latency into tag lookup latency and RAM access latency

2016-06-15 Thread Sophiane SENNI

---
This is an automatically generated e-mail. To reply, visit:
http://reviews.gem5.org/r/3502/
---

Review request for Default.


Repository: gem5


Description
---

Split the hit latency into tag lookup latency and RAM access latency.

If the cache access mode is parallel ("sequential_access" parameter set to 
"False"), tags and RAMs are accessed in parallel. Therefore, the hit latency is 
the maximum latency between tag lookup latency and RAM access latency. On the 
other hand, if the cache access mode is sequential ("sequential_access" 
parameter set to "True"), tags and RAM are accessed sequentially. Therefore, 
the hit latency is the sum of tag lookup latency plus RAM access latency.


Diffs
-

  configs/common/Caches.py 629fe6e6c781 
  src/mem/cache/BaseCache.py 629fe6e6c781 
  src/mem/cache/base.hh 629fe6e6c781 
  src/mem/cache/base.cc 629fe6e6c781 
  src/mem/cache/tags/Tags.py 629fe6e6c781 
  src/mem/cache/tags/base.hh 629fe6e6c781 
  src/mem/cache/tags/base.cc 629fe6e6c781 
  src/mem/cache/tags/base_set_assoc.hh 629fe6e6c781 
  src/mem/cache/tags/fa_lru.cc 629fe6e6c781 
  src/mem/cache/tags/fa_lru.hh 629fe6e6c781 

Diff: http://reviews.gem5.org/r/3502/diff/


Testing
---

Tested using --Debug-flags=Cache


Thanks,

Sophiane SENNI

___
gem5-dev mailing list
gem5-dev@gem5.org
http://m5sim.org/mailman/listinfo/gem5-dev


Re: [gem5-dev] Review Request 2109: Different SimpleDRAM latency for read and write access

2014-05-20 Thread Sophiane SENNI via gem5-dev


 On May 20, 2014, 8:21 a.m., Andreas Hansson wrote:
  With the last patches that went in, the DRAM controller now has e.g. tWR 
  added to the constraints. Does that provide enough detail, or is there 
  still a need for turning tCL into tWL and tRL? If so, it would be good to 
  see an updated patch.

Thanks Andreas. As far as I am concerned, this provides enough detail to 
differentiate read and write operations.


- Sophiane


---
This is an automatically generated e-mail. To reply, visit:
http://reviews.gem5.org/r/2109/#review5106
---


On Dec. 6, 2013, 11:09 a.m., Sophiane SENNI wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 http://reviews.gem5.org/r/2109/
 ---
 
 (Updated Dec. 6, 2013, 11:09 a.m.)
 
 
 Review request for Default.
 
 
 Repository: gem5
 
 
 Description
 ---
 
 This patch allows specifying different SimpleDRAM latencies for read and write 
 access. (In the code, the tCL parameter is for read latency and tCL_write is for 
 write latency).
 
 Any feedback is welcomed^^
 
 
 Diffs
 -
 
   src/mem/SimpleDRAM.py 5e8970397ab7 
   src/mem/simple_dram.hh 5e8970397ab7 
   src/mem/simple_dram.cc 5e8970397ab7 
 
 Diff: http://reviews.gem5.org/r/2109/diff/
 
 
 Testing
 ---
 
 
 Thanks,
 
 Sophiane SENNI
 


___
gem5-dev mailing list
gem5-dev@gem5.org
http://m5sim.org/mailman/listinfo/gem5-dev


Re: [gem5-dev] Review Request 2072: Different cache latency for read and write access

2014-01-09 Thread Sophiane SENNI

---
This is an automatically generated e-mail. To reply, visit:
http://reviews.gem5.org/r/2072/
---

(Updated Jan. 9, 2014, 10:06 a.m.)


Review request for Default.


Changes
---

Consider the cache write latency for writes coming from the memory side (cache fill)


Repository: gem5


Description
---

This patch allows specifying different cache latency for read and write access. 
(In the code, the hit_latency parameter is actually the read_latency)


Diffs (updated)
-

  src/mem/cache/BaseCache.py 6a043adb1e8d 
  src/mem/cache/base.hh 6a043adb1e8d 
  src/mem/cache/base.cc 6a043adb1e8d 
  src/mem/cache/cache_impl.hh 6a043adb1e8d 
  src/mem/cache/tags/lru.cc 6a043adb1e8d 

Diff: http://reviews.gem5.org/r/2072/diff/


Testing
---

I used --debug-flags command to check read hit latency and write hit latency 
for Dcache and Icache. I checked the time between a request sent by the cpu and 
the response sent by the cache memory.


Thanks,

Sophiane SENNI

___
gem5-dev mailing list
gem5-dev@gem5.org
http://m5sim.org/mailman/listinfo/gem5-dev


[gem5-dev] Review Request 2109: Different SimpleDRAM latency for read and write access

2013-12-06 Thread Sophiane SENNI

---
This is an automatically generated e-mail. To reply, visit:
http://reviews.gem5.org/r/2109/
---

Review request for Default.


Summary (updated)
-

Different SimpleDRAM latency for read and write access


Repository: gem5


Description (updated)
---

This patch allows specifying different SimpleDRAM latencies for read and write 
access. (In the code, the tCL parameter is for read latency and tCL_write is for 
write latency).

Any feedback is welcomed^^


Diffs (updated)
-

  src/mem/SimpleDRAM.py 5e8970397ab7 
  src/mem/simple_dram.hh 5e8970397ab7 
  src/mem/simple_dram.cc 5e8970397ab7 

Diff: http://reviews.gem5.org/r/2109/diff/


Testing
---


Thanks,

Sophiane SENNI

___
gem5-dev mailing list
gem5-dev@gem5.org
http://m5sim.org/mailman/listinfo/gem5-dev


Re: [gem5-dev] Review Request 2072: Different cache latency for read and write access

2013-11-20 Thread Sophiane SENNI


 On Nov. 13, 2013, 2:24 p.m., Mihai Lefter wrote:
  Currently I was also looking into this issue (adding an extra write latency 
  actually). As far as I can see this patch considers a different write 
  latency only for write operations that come from the CPU side, i.e., on the 
  cpuSidePort of the cache. For the patch to be complete, it should consider 
  also a different cache write latency when writes come from the memory side, 
  i.e., on the memSidePort of the cache.
  
  Some changes in the following functions may be required (in 
  src/mem/cache/cache_impl.hh):
  
  00858 Cache<TagStore>::recvTimingResp(PacketPtr pkt)
  ...
  00939 // If critical word (no offset) return first word time.
  00940 // responseLatency is the latency of the return path
  00941 // from lower level caches/memory to an upper level cache or
  00942 // the core.
  00943 completion_time = clockEdge(responseLatency) +
  00944     (transfer_offset ? pkt->busLastWordDelay :
  00945      pkt->busFirstWordDelay);
  ...
  01211 Cache<TagStore>::handleFill(PacketPtr pkt, BlkType *blk,
  01212     PacketList &writebacks)
  ...
  01270 blk->whenReady = clockEdge() + responseLatency * clockPeriod() +
  01271     pkt->busLastWordDelay;
  
  Apparently, in the recvTimingResp() function it seems that completion_time 
  is only used for stats, while in handleFill() the actual write (a cache 
  fill from the memory side) into the cache takes place. Note that the fill 
  takes multiple bus cycles; thus, the write latency should take this into 
  account.
  
  As a last thing, maybe a new parameter should be added, i.e., 
  writeResponseLatency.
 
 
 Sophiane SENNI wrote:
 Thank you very much for your post. I will try to complete this patch and 
 post an update.

I am wondering whether the latency for write operations coming from the 
memSidePort (like a cache fill) is not actually the response_latency. Or is the 
response_latency something different?

Ali, Andreas, could you confirm whether response_latency is the cache fill 
latency or not?

Thanks


- Sophiane


---
This is an automatically generated e-mail. To reply, visit:
http://reviews.gem5.org/r/2072/#review4805
---


On Nov. 4, 2013, 9:44 a.m., Sophiane SENNI wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 http://reviews.gem5.org/r/2072/
 ---
 
 (Updated Nov. 4, 2013, 9:44 a.m.)
 
 
 Review request for Default.
 
 
 Repository: gem5
 
 
 Description
 ---
 
 This patch allows specifying different cache latency for read and write 
 access. (In the code, the hit_latency parameter is actually the read_latency)
 
 
 Diffs
 -
 
   src/mem/cache/BaseCache.py 07352f119e48 
   src/mem/cache/base.hh 07352f119e48 
   src/mem/cache/base.cc 07352f119e48 
   src/mem/cache/cache_impl.hh 07352f119e48 
   src/mem/cache/tags/lru.cc 07352f119e48 
 
 Diff: http://reviews.gem5.org/r/2072/diff/
 
 
 Testing
 ---
 
 I used --debug-flags command to check read hit latency and write hit latency 
 for Dcache and Icache. I checked the time between a request sent by the cpu 
 and the response sent by the cache memory.
 
 
 Thanks,
 
 Sophiane SENNI
 


___
gem5-dev mailing list
gem5-dev@gem5.org
http://m5sim.org/mailman/listinfo/gem5-dev


Re: [gem5-dev] Review Request 2072: Different cache latency for read and write access

2013-11-16 Thread Sophiane SENNI


 On Nov. 13, 2013, 2:24 p.m., Mihai Lefter wrote:
  Currently I was also looking into this issue (adding an extra write latency 
  actually). As far as I can see this patch considers a different write 
  latency only for write operations that come from the CPU side, i.e., on the 
  cpuSidePort of the cache. For the patch to be complete, it should consider 
  also a different cache write latency when writes come from the memory side, 
  i.e., on the memSidePort of the cache.
  
  Some changes in the following functions may be required (in 
  src/mem/cache/cache_impl.hh):
  
  00858 Cache<TagStore>::recvTimingResp(PacketPtr pkt)
  ...
  00939 // If critical word (no offset) return first word time.
  00940 // responseLatency is the latency of the return path
  00941 // from lower level caches/memory to an upper level cache or
  00942 // the core.
  00943 completion_time = clockEdge(responseLatency) +
  00944     (transfer_offset ? pkt->busLastWordDelay :
  00945      pkt->busFirstWordDelay);
  ...
  01211 Cache<TagStore>::handleFill(PacketPtr pkt, BlkType *blk,
  01212     PacketList &writebacks)
  ...
  01270 blk->whenReady = clockEdge() + responseLatency * clockPeriod() +
  01271     pkt->busLastWordDelay;
  
  Apparently, in the recvTimingResp() function it seems that completion_time 
  is only used for stats, while in handleFill() the actual write (a cache 
  fill from the memory side) into the cache takes place. Note that the fill 
  takes multiple bus cycles; thus, the write latency should take this into 
  account.
  
  As a last thing, maybe a new parameter should be added, i.e., 
  writeResponseLatency.
 

Thank you very much for your post. I will try to complete this patch and post 
an update.


- Sophiane


---
This is an automatically generated e-mail. To reply, visit:
http://reviews.gem5.org/r/2072/#review4805
---


On Nov. 4, 2013, 9:44 a.m., Sophiane SENNI wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 http://reviews.gem5.org/r/2072/
 ---
 
 (Updated Nov. 4, 2013, 9:44 a.m.)
 
 
 Review request for Default.
 
 
 Repository: gem5
 
 
 Description
 ---
 
 This patch allows specifying different cache latency for read and write 
 access. (In the code, the hit_latency parameter is actually the read_latency)
 
 
 Diffs
 -
 
   src/mem/cache/BaseCache.py 07352f119e48 
   src/mem/cache/base.hh 07352f119e48 
   src/mem/cache/base.cc 07352f119e48 
   src/mem/cache/cache_impl.hh 07352f119e48 
   src/mem/cache/tags/lru.cc 07352f119e48 
 
 Diff: http://reviews.gem5.org/r/2072/diff/
 
 
 Testing
 ---
 
 I used --debug-flags command to check read hit latency and write hit latency 
 for Dcache and Icache. I checked the time between a request sent by the cpu 
 and the response sent by the cache memory.
 
 
 Thanks,
 
 Sophiane SENNI
 


___
gem5-dev mailing list
gem5-dev@gem5.org
http://m5sim.org/mailman/listinfo/gem5-dev


Re: [gem5-dev] Review Request 2072: Different cache latency for read and write access

2013-11-04 Thread Sophiane SENNI

---
This is an automatically generated e-mail. To reply, visit:
http://reviews.gem5.org/r/2072/
---

(Updated Nov. 4, 2013, 9:44 a.m.)


Review request for Default.


Changes
---

A mistake was made in the LRU::accessBlock function: the lat parameter has to be 
passed by reference (not by value). Sorry...


Repository: gem5


Description
---

This patch allows specifying different cache latency for read and write access. 
(In the code, the hit_latency parameter is actually the read_latency)


Diffs (updated)
-

  src/mem/cache/BaseCache.py 07352f119e48 
  src/mem/cache/base.hh 07352f119e48 
  src/mem/cache/base.cc 07352f119e48 
  src/mem/cache/cache_impl.hh 07352f119e48 
  src/mem/cache/tags/lru.cc 07352f119e48 

Diff: http://reviews.gem5.org/r/2072/diff/


Testing
---

I used --debug-flags command to check read hit latency and write hit latency 
for Dcache and Icache. I checked the time between a request sent by the cpu and 
the response sent by the cache memory.


Thanks,

Sophiane SENNI

___
gem5-dev mailing list
gem5-dev@gem5.org
http://m5sim.org/mailman/listinfo/gem5-dev