Re: [m5-dev] Review Request: Changing how CacheMemory interfaces with SLICC

2011-01-13 Thread Brad Beckmann

---
This is an automatically generated e-mail. To reply, visit:
http://reviews.m5sim.org/r/358/#review727
---


I can't fully appreciate these latest changes without also seeing the updated 
.sm files, but overall this looks very much in line with what we have been 
discussing over the past few days.  I just have one basic question on why we 
need to overload doTransition, getState, and setState.  Why can't we just 
have one version?  Also, a possibly related idea: can you insert asserts 
(!= NULL) for the tbe and cache_entry pointers before they are dereferenced 
in the generated code?


src/mem/slicc/ast/FuncCallExprAST.py
http://reviews.m5sim.org/r/358/#comment991

Why do we need four versions of doTransition?  Can't we just pass in 
NULL/OOD for the tbe and cache_entry?



src/mem/slicc/symbols/StateMachine.py
http://reviews.m5sim.org/r/358/#comment990

After you make your change, why would getState ever be called without the 
address, tbe, and cache_entry?  Wouldn't you just pass in NULL/OOD if the tbe 
or cache_entry didn't exist?



src/mem/slicc/symbols/StateMachine.py
http://reviews.m5sim.org/r/358/#comment989

After you make this change, why would setState ever be called without 
passing the address, tbe, and cache_entry?


- Brad


On 2011-01-12 22:45:22, Nilay Vaish wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 http://reviews.m5sim.org/r/358/
 ---
 
 (Updated 2011-01-12 22:45:22)
 
 
 Review request for Default.
 
 
 Summary
 ---
 
 The purpose of this patch is to change the way CacheMemory interfaces with 
 coherence protocols. Currently, whenever a cache controller (defined in the 
 protocol under consideration) needs to carry out any operation on a cache 
 block, it looks up the tag hash map and figures out whether or not the block 
 exists in the cache. If it does exist, the operation is carried out (which 
 requires another lookup). Profiling of different protocols shows that 
 multiple such lookups take place over a single clock cycle, and that the tag 
 lookup takes anywhere from 10% to 20% of the simulation time. This patch is 
 posted in order to reduce this time. The CacheMemory class will now have a 
 function that returns a pointer to a cache block entry instead of a 
 reference (though the function that returns the reference has been retained 
 for now). Functions have been introduced for setting/unsetting the pointer 
 and checking its validity. Similar changes have been carried out for 
 Transaction Buffer entries as well.
 
 Note that changes are required to both SLICC and the protocol files. This 
 patch changes only SLICC, and committing it alone will, I believe, 
 ___break___ all the protocols. I am working on patching the protocols as 
 well. This patch is being posted to get feedback from other developers.
 
 
 Diffs
 -
 
   src/mem/protocol/RubySlicc_Types.sm c6bc8fe81e79 
   src/mem/ruby/slicc_interface/AbstractCacheEntry.hh c6bc8fe81e79 
   src/mem/ruby/slicc_interface/AbstractCacheEntry.cc c6bc8fe81e79 
   src/mem/ruby/system/CacheMemory.hh c6bc8fe81e79 
   src/mem/ruby/system/CacheMemory.cc c6bc8fe81e79 
   src/mem/ruby/system/TBETable.hh c6bc8fe81e79 
   src/mem/slicc/ast/ActionDeclAST.py c6bc8fe81e79 
   src/mem/slicc/ast/FormalParamAST.py c6bc8fe81e79 
   src/mem/slicc/ast/FuncCallExprAST.py c6bc8fe81e79 
   src/mem/slicc/ast/IfStatementAST.py c6bc8fe81e79 
   src/mem/slicc/ast/InPortDeclAST.py c6bc8fe81e79 
   src/mem/slicc/ast/IsValidPtrExprAST.py PRE-CREATION 
   src/mem/slicc/ast/LocalVariableAST.py PRE-CREATION 
   src/mem/slicc/ast/MethodCallExprAST.py c6bc8fe81e79 
   src/mem/slicc/ast/StaticCastAST.py c6bc8fe81e79 
   src/mem/slicc/ast/TypeDeclAST.py c6bc8fe81e79 
   src/mem/slicc/ast/__init__.py c6bc8fe81e79 
   src/mem/slicc/parser.py c6bc8fe81e79 
   src/mem/slicc/symbols/Func.py c6bc8fe81e79 
   src/mem/slicc/symbols/StateMachine.py c6bc8fe81e79 
 
 Diff: http://reviews.m5sim.org/r/358/diff
 
 
 Testing
 ---
 
 I have tested these changes using the MOESI_CMP_directory and MI protocols.
 
 
 Thanks,
 
 Nilay
 


___
m5-dev mailing list
m5-dev@m5sim.org
http://m5sim.org/mailman/listinfo/m5-dev


Re: [m5-dev] Review Request: Changing how CacheMemory interfaces with SLICC

2011-01-13 Thread Nilay Vaish


 On 2011-01-13 08:33:49, Brad Beckmann wrote:
  I can't fully appreciate these latest changes without also seeing the 
  updated .sm files, but overall this looks very much in line with what we 
  have been discussing over the past few days.  I just have one basic 
  question on why we need to overload doTransition, getState, and setState.  
  Why can't we just have one version?  Also, a possibly related idea: can 
  you insert asserts (!= NULL) for the tbe and cache_entry pointers before 
  they are dereferenced in the generated code?

Because the Entry and/or TBE types might not be defined at all for particular
controllers. The same holds true for getState() and setState(). Also,
getState() and setState() do not require the address if the tbe and
cache_entry have been properly set.
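Brad's assert suggestion can be illustrated with a small C++ sketch. The struct and function names below are hypothetical stand-ins, not the actual SLICC-generated code; the point is only that a generated accessor could assert non-NULL before dereferencing either pointer.

```cpp
#include <cassert>
#include <cstddef>

// Hypothetical, simplified stand-ins for the SLICC-generated types; the
// real code emitted by StateMachine.py differs in detail.
struct CacheEntry { int state; };
struct TBE { int state; };

// Sketch of a generated state accessor that dereferences the entry
// pointers only after checking them, per Brad's suggestion.
int getStateFromEntry(CacheEntry* cache_entry, TBE* tbe)
{
    if (tbe != NULL) {
        return tbe->state;          // prefer the TBE when one exists
    }
    assert(cache_entry != NULL);    // fail loudly instead of segfaulting
    return cache_entry->state;
}
```

With asserts like these, a missing set_cache_entry()/set_tbe() call fails at the accessor with a clear message rather than crashing inside an action.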


 On 2011-01-13 08:33:49, Brad Beckmann wrote:
  src/mem/slicc/ast/FuncCallExprAST.py, line 127
  http://reviews.m5sim.org/r/358/diff/9/?file=9549#file9549line127
 
  Why do we need four versions of doTransition?  Can't we just pass in 
  NULL/OOD for the tbe and cache_entry?

Because the Entry type might not be defined at all for that particular
controller. The same holds true for getState() and setState(). Also,
getState() and setState() do not require the address if the tbe and
cache_entry have been properly set.


- Nilay


---
This is an automatically generated e-mail. To reply, visit:
http://reviews.m5sim.org/r/358/#review727
---





Re: [m5-dev] Review Request: Changing how CacheMemory interfaces with SLICC

2011-01-13 Thread Nilay Vaish


 On 2011-01-13 08:33:49, Brad Beckmann wrote:
  I can't fully appreciate these latest changes without also seeing the 
  updated .sm files, but overall this looks very much in line with what we 
  have been discussing over the past few days.  I just have one basic 
  question on why we need to overload doTransition, getState, and setState.  
  Why can't we just have one version?  Also, a possibly related idea: can 
  you insert asserts (!= NULL) for the tbe and cache_entry pointers before 
  they are dereferenced in the generated code?
 
 Nilay Vaish wrote:
 Because the Entry and/or TBE types might not be defined at all for particular
 controllers. The same holds true for getState() and setState(). Also,
 getState() and setState() do not require the address if the tbe and
 cache_entry have been properly set.

After reading your comment, I realized that there is a __huge__ mistake in 
this patch, which will not show up unless I test the generated code. I will 
have to change some parts again.


- Nilay


---
This is an automatically generated e-mail. To reply, visit:
http://reviews.m5sim.org/r/358/#review727
---





Re: [m5-dev] Review Request: Changing how CacheMemory interfaces with SLICC

2011-01-12 Thread Nilay

On Tue, January 11, 2011 7:24 pm, Lisa Hsu wrote:
 Hi Nilay,

 I've been talking with Brad here at work about some of these things so I
 will finally jump into the conversation via email.  First, great job on 
 this - this has clearly been a substantial amount of work.  I'm impressed.

 I've got some comments below.

 I'm not sure why having the trigger function implicitly set the cache
 entries and tbe entries would require two calls to isTagPresent() if local
 variables are involved.  Is there some reason why something like the code
 below would not be possible?

 local_var := getL1ICacheEntry(in_msg.LineAddress)
 if (is_valid(local_var))
   trigger(Event, in_msg.LineAddress, local_var);



Are you sure you would call the above piece of code __implicit__ setting of
the cache and tbe entry variables? In this case, the local variable has been
__explicitly__ passed in the call to the trigger function.

To me, 'X is implicit' means that the programmer does not need to write 'X'
in the protocol file for the compiler. For example, currently the trigger
function implicitly passes the state of the address as a parameter.

Such code is possible; my only concern is that once the variable is set,
it cannot be used again on the left-hand side of the assignment operator.

Entry local_var := getL1ICacheEntry(in_msg.LineAddress)
/* Do something */
local_var := getL1DCacheEntry(in_msg.LineAddress)

This SLICC code will not generate correct C++ code, since we assume that a
pointer variable can only be used in its dereferenced form, except when
passed as a parameter in a function call.
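The restriction above can be seen in plain C++. If SLICC emits every use of a pointer variable in dereferenced form, a second `:=` assignment becomes a copy through the pointer rather than a rebinding; `Entry` and the helper below are hypothetical stand-ins used only to illustrate the point.

```cpp
#include <cassert>

struct Entry { int data; };

// If every use of local_var is emitted dereferenced, the second SLICC
// assignment "local_var := getL1DCacheEntry(...)" would generate a copy
// through the pointer (*p = *q), not a rebinding (p = q).
void demoDereferencedAssign(Entry* p, Entry* q)
{
    *p = *q;  // overwrites the entry p points at; p itself is unchanged
}
```

So after the second assignment, local_var would still point at the first cache entry, and that entry's contents would have been silently overwritten.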

 I don't see how having the entries passed to the trigger function is
 functionally different from having to explicitly set things using two
 function calls - as long as there is a local variable involved to carry
 the value across some in_port logic, it seems much cleaner and more
 consistent to eliminate the need to explicitly call two functions for
 every in_port.

 I just saw Brad's latest email go by with pseudocode and I think that is
 both clean and flexible/functional - in certain cases you can explicitly
 make the getCacheEntry call you want, depending on your protocol, in the
 call to the trigger function, and in others you can set entries beforehand
 into local variables and do arbitrary logic, then pass whatever entry you
 want to the trigger function.

 Lisa



--
Nilay



Re: [m5-dev] Review Request: Changing how CacheMemory interfaces with SLICC

2011-01-12 Thread Nilay Vaish

Hi Brad,

As I understand from your mail below, you would like to write

ENTRY L1I_cache_entry = getL1ICacheEntry(in_msg.LineAddress);
trigger(Event, address, L1I_cache_entry);

instead of

set_cache_entry(getL1ICacheEntry(in_msg.LineAddress));
trigger(Event, address);


In other words, local variables should be used in inports instead of 
__implicitly__ declared variables, and these local variables will be 
__explicitly__ passed as arguments in the call to the trigger function.



Seems fair to me. Thanks for the clarification!

--
Nilay


On Tue, 11 Jan 2011, Beckmann, Brad wrote:


Hi Nilay,

Overall, I believe we are in more agreement with each other than maybe 
you think.  I'm glad you included pseudo code in your latest email. 
That is a great idea.  I think part of our problem is we are 
comprehending our textual descriptions in different ways.


Below are my  responses:



Absolutely, we still need the ability to allocate or deallocate
entries within actions.  I'm not advocating to completely eliminate
the set/unset cache and tbe entry functions.  I just want to avoid
including those calls in the inports.  I'm confused why the mandatory
queue is different from other queues.  They all trigger events in the same 
way.

if (L1IcacheMemory.isTagPresent(in_msg.LineAddress)) {
    // The tag matches for the L1, so the L1 asks the L2 for it.
    trigger(mandatory_request_type_to_event(in_msg.Type),
            in_msg.LineAddress);
}

Brad, the mandatory queue is just an example where an inport may perform a tag
lookup before the cache and transaction buffer entries have been set. Above is
an excerpt from the file MOESI_CMP_directory-L1cache.sm. Before trigger() is
called, isTagPresent() is called. This means the tag lookup is being performed
before the cache or transaction buffer entries have been set. Suppose the tag
was present in the L1Icache; then in the trigger() call, we will perform the
lookup again.

Similarly, there is an inport in the Hammer protocol implementation where
getCacheEntry() is called before a call to trigger(). Now, why should we use
getCacheEntry() in the inport but cache_entry in the action?



The reason is, as you pointed out, we ideally want to call getCacheEntry 
once.  I believe your suggestion to use local variables in the input 
ports gets us there.  Below is what I'm envisioning for the MOESI_hammer 
mandatory queue in_port logic (at least the IFETCH half of the logic):


ENTRY getL1ICacheEntry(Address addr) {
  assert(is_valid(L1DcacheMemory.lookup(addr)) == FALSE);
  assert(is_valid(L2cacheMemory.lookup(addr)) == FALSE);
  return L1IcacheMemory.lookup(addr);
}

ENTRY getL1DCacheEntry(Address addr) {
  assert(is_valid(L1IcacheMemory.lookup(addr)) == FALSE);
  assert(is_valid(L2cacheMemory.lookup(addr)) == FALSE);
  return L1DcacheMemory.lookup(addr);
}

ENTRY getL2CacheEntry(Address addr) {
  assert(is_valid(L1IcacheMemory.lookup(addr)) == FALSE);
  assert(is_valid(L1DcacheMemory.lookup(addr)) == FALSE);
  return L2cacheMemory.lookup(addr);
}

in_port(mandatoryQueue_in, CacheMsg, mandatoryQueue, desc=..., rank=0) {
  if (mandatoryQueue_in.isReady()) {
    peek(mandatoryQueue_in, CacheMsg, block_on=LineAddress) {

      // Set the local entry variables
      ENTRY L1I_cache_entry = getL1ICacheEntry(in_msg.LineAddress);
      ENTRY L1D_cache_entry = getL1DCacheEntry(in_msg.LineAddress);
      TBE_Entry tbe_entry = getTBE(in_msg.LineAddress);

      // Check for data access to blocks in I-cache and ifetchs to blocks
      // in D-cache

      if (in_msg.Type == CacheRequestType:IFETCH) {
        // ** INSTRUCTION ACCESS ***

        // Check to see if it is in the OTHER L1
        if (is_valid(L1D_cache_entry)) {
          // The block is in the wrong L1, try to write it to the L2
          if (L2cacheMemory.cacheAvail(in_msg.LineAddress)) {
            trigger(Event:L1_to_L2, in_msg.LineAddress, L1D_cache_entry,
                    tbe_entry);
          } else {
            replace_addr = L2cacheMemory.cacheProbe(in_msg.LineAddress);
            replace_cache_entry = getL2CacheEntry(replace_addr);
            replace_tbe_entry = getTBE(replace_addr);
            trigger(Event:L2_Replacement, replace_addr, replace_cache_entry,
                    replace_tbe_entry);
          }
        }

        if (is_valid(L1I_cache_entry)) {
          // The tag matches for the L1, so the L1 fetches the line.
          // We know it can't be in the L2 due to exclusion.
          trigger(mandatory_request_type_to_event(in_msg.Type),
                  in_msg.LineAddress, L1I_cache_entry, tbe_entry);
        } else {
          if (L1IcacheMemory.cacheAvail(in_msg.LineAddress)) {
            // L1 doesn't have the line, but we have space for it in the L1
            ENTRY L2_cache_entry = getL2CacheEntry(in_msg.LineAddress);
            if (is_valid(L2_cache_entry)) {
              // L2 has it (maybe not with the right permissions)
              trigger(Event:Trigger_L2_to_L1I, in_msg.LineAddress,
                      L2_cache_entry, tbe_entry);
            } else {

Re: [m5-dev] Review Request: Changing how CacheMemory interfaces with SLICC

2011-01-12 Thread Beckmann, Brad
 
 Are you sure you would call the above piece of code as __implicit__ setting
 of cache and tbe entry variables? In this case, the local variable has been
 __explicitly__ passed in the call to the trigger function.
 
 To me 'X is implicit' means that the programmer does not need to write 'X'
 in the protocol file for the compiler. For example, currently trigger function
 implicitly passes the state of the address as a parameter.
 
 Such code is possible, my only concern is that once the variable is set, it
 cannot be used again on the left hand side of the assignment operator.
 
 Entry local_var := getL1ICacheEntry(in_msg.LineAddress)
 /* Do something */
 local_var := getL1DCacheEntry(in_msg.LineAddress)
 
 This SLICC code will not generate correct C++ code, since we assume that a
 pointer variable can only be used in its dereferenced form, except when
 passed on as a parameter in a function call.
 

Yeah, I think we were confusing each other before because 'implicit' meant 
different things to each of us.  When I said the cache entry is implicitly 
passed, I meant relative to the actions, not the trigger function.  As you 
mentioned, the input state is an implicit parameter to the trigger function, 
but the address is an explicit parameter to the trigger function and an 
implicit parameter to the actions.  You were thinking the former and we were 
thinking the latter.  Now I think we are on the same page.

Actually I was thinking that we only dereference the cache_entry pointer when 
we reference a member of the class.  I haven't thought this all the way 
through, but is that possible?  Then such an assignment would work.
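Brad's alternative can likewise be sketched in C++: emit the dereference only where a member is accessed, so a bare assignment rebinds the pointer itself. Again, the names below are hypothetical illustrations, not actual generated code.

```cpp
#include <cassert>

struct Entry { int data; };

// Dereference only at member accesses (cache_entry->data); a bare
// assignment to the variable then rebinds the pointer rather than
// copying through it.
int readData(Entry* cache_entry)
{
    return cache_entry->data;
}
```

Under this scheme, `local_var := getL1DCacheEntry(...)` could legally rebind local_var to a different entry without disturbing the entry it previously pointed at.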

Brad




Re: [m5-dev] Review Request: Changing how CacheMemory interfaces with SLICC

2011-01-12 Thread Nilay Vaish

---
This is an automatically generated e-mail. To reply, visit:
http://reviews.m5sim.org/r/358/
---

(Updated 2011-01-12 22:37:49.117675)


Review request for Default.


Summary
---

The purpose of this patch is to change the way CacheMemory interfaces with 
coherence protocols. Currently, whenever a cache controller (defined in the 
protocol under consideration) needs to carry out any operation on a cache 
block, it looks up the tag hash map and figures out whether or not the block 
exists in the cache. If it does exist, the operation is carried out (which 
requires another lookup). Profiling of different protocols shows that multiple 
such lookups take place over a single clock cycle, and that the tag lookup 
takes anywhere from 10% to 20% of the simulation time. This patch is posted in 
order to reduce this time. The CacheMemory class will now have a function that 
returns a pointer to a cache block entry instead of a reference (though the 
function that returns the reference has been retained for now). Functions have 
been introduced for setting/unsetting the pointer and checking its validity. 
Similar changes have been carried out for Transaction Buffer entries as well.

Note that changes are required to both SLICC and the protocol files. This 
patch changes only SLICC, and committing it alone will, I believe, 
___break___ all the protocols. I am working on patching the protocols as 
well. This patch is being posted to get feedback from other developers.
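As a rough illustration of the interface change described in the summary, here is a hypothetical, much-simplified C++ model of a pointer-returning lookup; the real CacheMemory class is considerably more involved, and the names below are stand-ins.

```cpp
#include <cstddef>
#include <cstdint>
#include <map>

// Simplified stand-ins for a cache entry and CacheMemory's tag index.
struct AbstractCacheEntry { int state; };

struct SimpleCacheMemory {
    std::map<uint64_t, AbstractCacheEntry> m_tag_index;

    // Returning a pointer (NULL on a miss) lets one hash-map probe answer
    // both "is the block present?" and "give me the entry", instead of an
    // isTagPresent() probe followed by a reference-returning lookup.
    AbstractCacheEntry* lookup(uint64_t addr) {
        std::map<uint64_t, AbstractCacheEntry>::iterator it =
            m_tag_index.find(addr);
        if (it == m_tag_index.end())
            return NULL;
        return &it->second;
    }
};
```

A controller can then cache the returned pointer for the rest of the cycle rather than repeating the tag lookup in every action.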


Diffs (updated)
-

  src/mem/protocol/MOESI_CMP_directory-L1cache.sm bc8758d0e4ca 
  src/mem/protocol/MOESI_CMP_directory-L2cache.sm bc8758d0e4ca 
  src/mem/protocol/MOESI_CMP_directory-dir.sm bc8758d0e4ca 
  src/mem/protocol/MOESI_CMP_directory-dma.sm bc8758d0e4ca 

Diff: http://reviews.m5sim.org/r/358/diff


Testing
---

I have tested these changes using the MOESI_CMP_directory and MI protocols.


Thanks,

Nilay



Re: [m5-dev] Review Request: Changing how CacheMemory interfaces with SLICC

2011-01-12 Thread Nilay Vaish

---
This is an automatically generated e-mail. To reply, visit:
http://reviews.m5sim.org/r/358/
---

(Updated 2011-01-12 22:42:07.876214)


Review request for Default.


Summary
---

The purpose of this patch is to change the way CacheMemory interfaces with 
coherence protocols. Currently, whenever a cache controller (defined in the 
protocol under consideration) needs to carry out any operation on a cache 
block, it looks up the tag hash map and figures out whether or not the block 
exists in the cache. If it does exist, the operation is carried out (which 
requires another lookup). Profiling of different protocols shows that multiple 
such lookups take place over a single clock cycle, and that the tag lookup 
takes anywhere from 10% to 20% of the simulation time. This patch is posted in 
order to reduce this time. The CacheMemory class will now have a function that 
returns a pointer to a cache block entry instead of a reference (though the 
function that returns the reference has been retained for now). Functions have 
been introduced for setting/unsetting the pointer and checking its validity. 
Similar changes have been carried out for Transaction Buffer entries as well.

Note that changes are required to both SLICC and the protocol files. This 
patch changes only SLICC, and committing it alone will, I believe, 
___break___ all the protocols. I am working on patching the protocols as 
well. This patch is being posted to get feedback from other developers.


Diffs (updated)
-

  src/mem/protocol/MOESI_CMP_directory-L1cache.sm c6bc8fe81e79 
  src/mem/protocol/MOESI_CMP_directory-L2cache.sm c6bc8fe81e79 
  src/mem/protocol/MOESI_CMP_directory-dir.sm c6bc8fe81e79 
  src/mem/protocol/MOESI_CMP_directory-dma.sm c6bc8fe81e79 

Diff: http://reviews.m5sim.org/r/358/diff


Testing
---

I have tested these changes using the MOESI_CMP_directory and MI protocols.


Thanks,

Nilay



Re: [m5-dev] Review Request: Changing how CacheMemory interfaces with SLICC

2011-01-12 Thread Nilay Vaish

---
This is an automatically generated e-mail. To reply, visit:
http://reviews.m5sim.org/r/358/
---

(Updated 2011-01-12 22:45:22.150420)


Review request for Default.


Summary
---

The purpose of this patch is to change the way CacheMemory interfaces with 
coherence protocols. Currently, whenever a cache controller (defined in the 
protocol under consideration) needs to carry out any operation on a cache 
block, it looks up the tag hash map and figures out whether or not the block 
exists in the cache. If it does exist, the operation is carried out (which 
requires another lookup). Profiling of different protocols shows that multiple 
such lookups take place over a single clock cycle, and that the tag lookup 
takes anywhere from 10% to 20% of the simulation time. This patch is posted in 
order to reduce this time. The CacheMemory class will now have a function that 
returns a pointer to a cache block entry instead of a reference (though the 
function that returns the reference has been retained for now). Functions have 
been introduced for setting/unsetting the pointer and checking its validity. 
Similar changes have been carried out for Transaction Buffer entries as well.

Note that changes are required to both SLICC and the protocol files. This 
patch changes only SLICC, and committing it alone will, I believe, 
___break___ all the protocols. I am working on patching the protocols as 
well. This patch is being posted to get feedback from other developers.


Diffs (updated)
-

  src/mem/protocol/RubySlicc_Types.sm c6bc8fe81e79 
  src/mem/ruby/slicc_interface/AbstractCacheEntry.hh c6bc8fe81e79 
  src/mem/ruby/slicc_interface/AbstractCacheEntry.cc c6bc8fe81e79 
  src/mem/ruby/system/CacheMemory.hh c6bc8fe81e79 
  src/mem/ruby/system/CacheMemory.cc c6bc8fe81e79 
  src/mem/ruby/system/TBETable.hh c6bc8fe81e79 
  src/mem/slicc/ast/ActionDeclAST.py c6bc8fe81e79 
  src/mem/slicc/ast/FormalParamAST.py c6bc8fe81e79 
  src/mem/slicc/ast/FuncCallExprAST.py c6bc8fe81e79 
  src/mem/slicc/ast/IfStatementAST.py c6bc8fe81e79 
  src/mem/slicc/ast/InPortDeclAST.py c6bc8fe81e79 
  src/mem/slicc/ast/IsValidPtrExprAST.py PRE-CREATION 
  src/mem/slicc/ast/LocalVariableAST.py PRE-CREATION 
  src/mem/slicc/ast/MethodCallExprAST.py c6bc8fe81e79 
  src/mem/slicc/ast/StaticCastAST.py c6bc8fe81e79 
  src/mem/slicc/ast/TypeDeclAST.py c6bc8fe81e79 
  src/mem/slicc/ast/__init__.py c6bc8fe81e79 
  src/mem/slicc/parser.py c6bc8fe81e79 
  src/mem/slicc/symbols/Func.py c6bc8fe81e79 
  src/mem/slicc/symbols/StateMachine.py c6bc8fe81e79 

Diff: http://reviews.m5sim.org/r/358/diff


Testing
---

I have tested these changes using the MOESI_CMP_directory and MI protocols.


Thanks,

Nilay



Re: [m5-dev] Review Request: Changing how CacheMemory interfaces with SLICC

2011-01-11 Thread Beckmann, Brad
Hi Nilay,

Sure, using a local variable to further reduce the calls to getCacheEntry is a 
great idea.  I think that is orthogonal to the suggestion I was making.  I just 
want the ability to directly set the cache_entry and tbe_entry variables in the 
trigger function.  That way the address, cache_entry, and tbe_entry variables 
are dealt with consistently and it avoids adding the separate calls to 
set_cache_entry() and set_tbe() in the inports.

Brad


-Original Message-
From: Nilay Vaish [mailto:ni...@cs.wisc.edu] 
Sent: Friday, January 07, 2011 11:40 AM
To: Beckmann, Brad
Cc: Default
Subject: RE: Review Request: Changing how CacheMemory interfaces with SLICC

Brad, my comments are inline.


On Fri, 7 Jan 2011, Beckmann, Brad wrote:

 Hi Nilay,



 Unfortunately I can't provide you an example of a protocol where 
 getCacheEntry behaves in a different manner, but they do exist.  I 
 reviewed your most recent patch updates and I don't think what we're 
 asking for is much different than what you have on reviewboard right 
 now.  Basically, all we need to do is add back in the capability for 
 the programmer to write their own getCacheEntry function in the .sm file.
 I know that I initially asked you to automatically generate those 
 functions, and I still think that is useful for most protocols, but 
 Lisa made me realize that we need customized getCacheEntry functions as well.
 Also we may want to change the name of generated getCacheEntry 
 function to getExclusiveCacheEntry so that one realizes the exclusive 
 assumption made by the function.



 Other than that, the only other change I suggest is to allow the trigger 
 function to directly set the implicit cache_entry and tbe_entry 
 variables.  Below is example of what I'm envisioning:


[Nilay] If we do things in this way, then any in_port in which cache/tbe 
entries are accessed before the trigger function would still make calls 
to isCacheTagPresent().



 Currently in MOESI_CMP_directory-L1cache.sm:



 in_port(useTimerTable_in, Address, useTimerTable) {
   if (useTimerTable_in.isReady()) {
     set_cache_entry(getCacheEntry(useTimerTable.readyAddress()));
     set_tbe(TBEs[useTimerTable.readyAddress()]);
     trigger(Event:Use_Timeout, useTimerTable.readyAddress());
   }
 }

 Replace that with the following:

 in_port(useTimerTable_in, Address, useTimerTable) {
   if (useTimerTable_in.isReady()) {
     trigger(Event:Use_Timeout, useTimerTable.readyAddress(),
             getExclusiveCacheEntry(useTimerTable.readyAddress()),
             TBEs[useTimerTable.readyAddress()]);
   }
 }


[Nilay] Instead of passing cache and tb entries as arguments, we can 
create local variables in the trigger function using the address argument.



 Please let me know if you have any questions.



 Thanks...you're almost done.  :)



 Brad






Thanks
Nilay


___
m5-dev mailing list
m5-dev@m5sim.org
http://m5sim.org/mailman/listinfo/m5-dev


Re: [m5-dev] Review Request: Changing how CacheMemory interfaces with SLICC

2011-01-11 Thread Nilay Vaish

On Tue, 11 Jan 2011, Beckmann, Brad wrote:


Hi Nilay,

Sure, using a local variable to further reduce the calls to 
getCacheEntry is a great idea.  I think that is orthogonal to the 
suggestion I was making.  I just want the ability to directly set the 
cache_entry and tbe_entry variables in the trigger function.  That way 
the address, cache_entry, and tbe_entry variables are dealt with 
consistently and it avoids adding the separate calls to 
set_cache_entry() and set_tbe() in the inports.


Firstly, we have to set cache and transaction buffer entry variables 
whenever we do allocation or deallocation of entries. This means these 
calls cannot be completely avoided. Secondly, while processing events from 
the mandatory queue (as it is called in the current implementations), if 
these variables are not set, we will have to revert to the earlier 
approach. This would double the number of times cache entry lookups are 
performed as the trigger function will perform the lookup again. This 
would also mean that both the approaches for looking up cache entry in the 
cache will have to exist simultaneously.
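The double-lookup cost described here can be sketched in C++ with hypothetical stand-ins for CacheMemory and the generated in_port/trigger path (illustrative names only, not the actual Ruby classes):

```cpp
#include <cassert>
#include <cstdint>
#include <unordered_map>

struct Entry { int state = 0; };

struct CacheMemory {
    std::unordered_map<uint64_t, Entry> tags;
    int lookups = 0;                       // count tag-hash lookups
    Entry* lookup(uint64_t addr) {         // pointer-returning lookup
        ++lookups;
        auto it = tags.find(addr);
        return it == tags.end() ? nullptr : &it->second;
    }
};

// Old style: the in_port tests presence, then the trigger path looks up again.
int countLookupsOldStyle(CacheMemory& cache, uint64_t addr) {
    cache.lookups = 0;
    if (cache.lookup(addr) != nullptr) {   // in_port presence check
        Entry* e = cache.lookup(addr);     // trigger re-performs the lookup
        (void)e;
    }
    return cache.lookups;                  // 2 lookups on a hit
}

// New style: the in_port looks up once and hands the pointer to trigger().
int countLookupsNewStyle(CacheMemory& cache, uint64_t addr) {
    cache.lookups = 0;
    Entry* e = cache.lookup(addr);         // single lookup
    if (e != nullptr) { (void)e; }         // pointer passed through
    return cache.lookups;                  // 1 lookup either way
}
```

Under this sketch, a hit costs two hash lookups in the old scheme and one in the new one, which is the doubling Nilay refers to.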


Another concern is the implementation of getCacheEntry(). If this function 
has to return a pointer to a cache entry, we would have to provide support 
for local variables which internally SLICC would assume to be pointer 
variables.


In my opinion, we should maintain one method for looking up cache entries. 
My own experience informs me that it is not difficult to incorporate calls 
to set/unset_cache_entry () in already existing protocol implementations. 
For implementing new protocols, I think the greater obstacle will be in 
implementing the protocol correctly and not in using entry variables 
correctly. If we document this change lucidly, there is no reason to 
believe a SLICC programmer will be unduly burdened by it.


Assuming that this change does introduce some complexity in programming 
with SLICC, does that complexity outweigh the performance improvements?


--
Nilay


Re: [m5-dev] Review Request: Changing how CacheMemory interfaces with SLICC

2011-01-11 Thread Beckmann, Brad
  Sure, using a local variable to further reduce the calls to
  getCacheEntry is a great idea.  I think that is orthogonal to the
  suggestion I was making.  I just want the ability to directly set the
  cache_entry and tbe_entry variables in the trigger function.  That way
  the address, cache_entry, and tbe_entry variables are dealt with
  consistently and it avoids adding the separate calls to
  set_cache_entry() and set_tbe() in the inports.
 
 Firstly, we have to set cache and transaction buffer entry variables whenever
 we do allocation or deallocation of entries. This means these calls cannot be
 completely avoided. Secondly, while processing events from the mandatory
 queue (as it is called in the current implementations), if these variables are
 not set, we will have to revert to the earlier approach. This would double the
 number of times cache entry lookups are performed as the trigger function
 will perform the lookup again. This would also mean that both the
 approaches for looking up cache entry in the cache will have to exist
 simultaneously.
 

Absolutely, we still need the ability to allocate or deallocate entries within 
actions.  I'm not advocating to completely eliminate the set/unset cache and 
tbe entry functions.  I just want to avoid including those calls in the 
inports.  I'm confused why the mandatory queue is different than other queues.  
They all trigger events in the same way.  Maybe I should point out that I'm 
assuming that getCacheEntry can return a NULL pointer and thus that can be 
passed into the trigger call when no cache or tbe entry exists.


 Another concern is the implementation of getCacheEntry(). If this function has
 to return a pointer to a cache entry, we would have to provide support for
 local variables which internally SLICC would assume to be pointer variables.
 

Within SLICC understanding that certain variables are actually pointers is a 
little bit of a nuisance, but there already exists examples where we make that 
distinction.  For instance, look at the if para.pointer conditionals in 
StateMachine.py.  We just have to treat cache and tbe entries in the same 
fashion.
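As a rough illustration of that distinction, the C++ that SLICC generates for an action whose formal parameter is marked as a pointer might look like the following sketch (hypothetical names; the real generator in StateMachine.py emits different scaffolding, and the assert is the NULL-check suggested in the review):

```cpp
#include <cassert>
#include <string>

struct Entry { std::string state; };

// Generated body for an action: every use of the SLICC variable
// `cache_entry` is emitted as the dereference `(*cache_entry)`.
void generatedSetState(Entry* cache_entry, const std::string& s) {
    assert(cache_entry != nullptr);        // guard before dereferencing
    (*cache_entry).state = s;              // dereferenced use site
}
```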


 In my opinion, we should maintain one method for looking up cache entries.
 My own experience informs me that it is not difficult to incorporate calls to
 set/unset_cache_entry () in already existing protocol implementations.
 For implementing new protocols, I think the greater obstacle will be in
 implementing the protocol correctly and not in using entry variables
 correctly. If we document this change lucidly, there is no reason to believe a
 SLICC programmer will be unduly burdened by this change.
 
 Assuming that this change does introduce some complexity in programming
 with SLICC, does that complexity outweigh the performance improvements?
 

My position is we can leverage SLICC as an intermediate language and achieve 
the performance benefits of your change without significantly impacting the 
programmability.  I agree that we need the set/unset_cache_entry calls in the 
allocate and deallocate actions.  I see no problem with that.  I just want to 
treat these new implicit cache and tbe entry variables like the existing 
implicit variable address.  Therefore I want to pass them into the trigger 
operation like the address variable.  I also want just one method for looking 
up cache entries.  I believe the only difference is that I would like to set 
the cache and tbe entries in the trigger function, as well as allowing them to 
be set in the actions.  

I hope that clarifies at least what I'm envisioning.  I appreciate your 
feedback on this and I want to reiterate that I think your change is really 
close to being done.  If you still feel like I'm missing something, I would be 
happy to chat with you over the phone.

Brad





Re: [m5-dev] Review Request: Changing how CacheMemory interfaces with SLICC

2011-01-11 Thread Nilay Vaish

Brad, my comments are inline.


On Tue, 11 Jan 2011, Beckmann, Brad wrote:


Sure, using a local variable to further reduce the calls to
getCacheEntry is a great idea.  I think that is orthogonal to the
suggestion I was making.  I just want the ability to directly set the
cache_entry and tbe_entry variables in the trigger function.  That way
the address, cache_entry, and tbe_entry variables are dealt with
consistently and it avoids adding the separate calls to
set_cache_entry() and set_tbe() in the inports.


Firstly, we have to set cache and transaction buffer entry variables whenever
we do allocation or deallocation of entries. This means these calls cannot be
completely avoided. Secondly, while processing events from the mandatory
queue (as it is called in the current implementations), if these variables are
not set, we will have to revert to the earlier approach. This would double the
number of times cache entry lookups are performed as the trigger function
will perform the lookup again. This would also mean that both the
approaches for looking up cache entry in the cache will have to exist
simultaneously.



Absolutely, we still need the ability to allocate or deallocate entries 
within actions.  I'm not advocating to completely eliminate the 
set/unset cache and tbe entry functions.  I just want to avoid including 
those calls in the inports.  I'm confused why the mandatory queue is 
different than other queues.  They all trigger events in the same way.


if (L1IcacheMemory.isTagPresent(in_msg.LineAddress)) {
  // The tag matches for the L1, so the L1 asks the L2 for it.
  trigger(mandatory_request_type_to_event(in_msg.Type), in_msg.LineAddress);
}

Brad, the mandatory queue is just an example where an inport may perform a tag 
lookup before the cache and transaction buffer entries have been set. Above is 
an excerpt from the file MOESI_CMP_directory-L1cache.sm. Before trigger() is 
called, isTagPresent() is called, which means a tag lookup is performed before 
the cache or transaction buffer entries have been set. Suppose the tag was 
present in the L1Icache; then, in the trigger() call, we will perform the 
lookup again.


Similarly, there is an inport in the Hammer's protocol implementation 
where getCacheEntry() is called before a call to trigger(). Now, why 
should we use getCacheEntry() in the inport and cache entry in the action?


In MESI_CMP_directory, getState() is called from an inport. This means we 
cannot have an implementation of getState() that makes use of the cache entry 
variable, since it would not have been set. Having different implementations 
for setState() and getState() simply does not make sense to me, so in my 
opinion setState() will also not use the cache entry. These two functions (I 
just went through the profile output for MOESI_hammer) account for about half 
of the calls to isTagPresent(), the function that we are trying to get rid of.


Maybe I should point out that I'm assuming that getCacheEntry can return 
a NULL pointer and thus that can be passed into the trigger call when no 
cache or tbe entry exists.


You are correct that getCacheEntry() would have to return a NULL, another 
thing which I believe we earlier preferred to avoid. As an aside, we cannot 
use the keyword NULL since it is already in use, so we will have to rename 
NULL as something else.


On second thought, I think it may not be necessary for getCacheEntry() to 
use the keyword.
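One way this could work without the .sm programmer ever spelling NULL: in the generated C++, is_valid()/is_invalid() can reduce to plain pointer null-tests, as in this sketch (AbstractCacheEntry is a stand-in type here, and the mapping is an assumption rather than the committed implementation):

```cpp
#include <cassert>

struct AbstractCacheEntry {};

// SLICC's validity tests expressed as null-checks on the entry pointer.
inline bool is_valid(const AbstractCacheEntry* p)   { return p != nullptr; }
inline bool is_invalid(const AbstractCacheEntry* p) { return p == nullptr; }
```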






Another concern is the implementation of getCacheEntry(). If this function has
to return a pointer to a cache entry, we would have to provide support for
local variables which internally SLICC would assume to be pointer variables.



Within SLICC understanding that certain variables are actually pointers 
is a little bit of a nuisance, but there already exists examples where 
we make that distinction.  For instance, look at the if para.pointer 
conditionals in StateMachine.py.  We just have to treat cache and tbe 
entries in the same fashion.


I know it is possible to declare pointers in SLICC, CacheMemory being the 
foremost example. But we only allow declaration of pointers; we tacitly 
assume that whenever such variables are used, they are used in dereferenced 
form. Now, in the case of getCacheEntry(), I do not see this happening. Below 
is the current implementation of getCacheEntry() from 
MOESI_CMP_directory-L1cache.sm.


Entry getCacheEntry(Address addr), return_by_ref=yes {
  if (L1DcacheMemory.isTagPresent(addr)) {
return static_cast(Entry, L1DcacheMemory[addr]);
  } else {
return static_cast(Entry, L1IcacheMemory[addr]);
  }
}

I would probably convert it to something like this:

AbstractCacheEntry getCacheEntry(Address addr) {
  AbstractCacheEntry * cache_entry = L1DcacheMemory.lookup(addr);
  if(is_valid(cache_entry)) return cache_entry;
  return L1IcacheMemory.lookup(addr);
}

Now, if we assume that cache_entry will always be used in its dereferenced 
form, then we will face a problem in returning cache entries from a function 
like the one above.

Re: [m5-dev] Review Request: Changing how CacheMemory interfaces with SLICC

2011-01-11 Thread Beckmann, Brad
Hi Nilay,

Overall, I believe we are in more agreement with each other than maybe you 
think.  I'm glad you included pseudo code in your latest email.  That is a 
great idea.  I think part of our problem is we are comprehending our textual 
descriptions in different ways.

Below are my responses:
 
 
  Absolutely, we still need the ability to allocate or deallocate
  entries within actions.  I'm not advocating to completely eliminate
  the set/unset cache and tbe entry functions.  I just want to avoid
  including those calls in the inports.  I'm confused why the mandatory
  queue is different than other queues.  They all trigger events in the same
 way.
 
 if (L1IcacheMemory.isTagPresent(in_msg.LineAddress)) {
   // The tag matches for the L1, so the L1 asks the L2 for it.
   trigger(mandatory_request_type_to_event(in_msg.Type), in_msg.LineAddress);
 }
 
 Brad, the mandatory queue is just an example where an inport may perform a tag
 lookup before the cache and transaction buffer entries have been set. Above is
 an excerpt from the file MOESI_CMP_directory-L1cache.sm. Before trigger() is
 called, isTagPresent() is called, which means a tag lookup is performed before
 the cache or transaction buffer entries have been set. Suppose the tag was
 present in the L1Icache; then, in the trigger() call, we will perform the
 lookup again.
 
 Similarly, there is an inport in the Hammer's protocol implementation where
 getCacheEntry() is called before a call to trigger(). Now, why should we use
 getCacheEntry() in the inport and cache entry in the action?
 

The reason is, as you pointed out, we ideally want to call getCacheEntry once.  
I believe your suggestion to use local variables in the input ports gets us 
there.  Below is what I'm envisioning for the MOESI_hammer mandatory queue 
in_port logic (at least the IFETCH half of the logic):

ENTRY getL1ICacheEntry(Address addr) {
  assert(is_valid(L1DcacheMemory.lookup(addr)) == false);
  assert(is_valid(L2cacheMemory.lookup(addr)) == false);
  return L1IcacheMemory.lookup(addr);
}

ENTRY getL1DCacheEntry(Address addr) {
  assert(is_valid(L1IcacheMemory.lookup(addr)) == false);
  assert(is_valid(L2cacheMemory.lookup(addr)) == false);
  return L1DcacheMemory.lookup(addr);
}

ENTRY getL2CacheEntry(Address addr) {
  assert(is_valid(L1IcacheMemory.lookup(addr)) == false);
  assert(is_valid(L1DcacheMemory.lookup(addr)) == false);
  return L2cacheMemory.lookup(addr);
}

  in_port(mandatoryQueue_in, CacheMsg, mandatoryQueue, desc=..., rank=0) {
    if (mandatoryQueue_in.isReady()) {
      peek(mandatoryQueue_in, CacheMsg, block_on=LineAddress) {

        // Set the local entry variables
        ENTRY L1I_cache_entry = getL1ICacheEntry(in_msg.LineAddress);
        ENTRY L1D_cache_entry = getL1DCacheEntry(in_msg.LineAddress);
        TBE_Entry tbe_entry = getTBE(in_msg.LineAddress);

        // Check for data access to blocks in I-cache
        // and ifetches to blocks in D-cache

        if (in_msg.Type == CacheRequestType:IFETCH) {
          // ** INSTRUCTION ACCESS ***

          // Check to see if it is in the OTHER L1
          if (is_valid(L1D_cache_entry)) {
            // The block is in the wrong L1, try to write it to the L2
            if (L2cacheMemory.cacheAvail(in_msg.LineAddress)) {
              trigger(Event:L1_to_L2, in_msg.LineAddress,
                      L1D_cache_entry, tbe_entry);
            } else {
              replace_addr = L2cacheMemory.cacheProbe(in_msg.LineAddress);
              replace_cache_entry = getL2CacheEntry(replace_addr);
              replace_tbe_entry = getTBE(replace_addr);
              trigger(Event:L2_Replacement, replace_addr,
                      replace_cache_entry, replace_tbe_entry);
            }
          }

          if (is_valid(L1I_cache_entry)) {
            // The tag matches for the L1, so the L1 fetches the line.
            // We know it can't be in the L2 due to exclusion.
            trigger(mandatory_request_type_to_event(in_msg.Type),
                    in_msg.LineAddress, L1I_cache_entry, tbe_entry);
          } else {
            if (L1IcacheMemory.cacheAvail(in_msg.LineAddress)) {
              // L1 doesn't have the line, but we have space for it in the L1
              ENTRY L2_cache_entry = getL2CacheEntry(in_msg.LineAddress);
              if (is_valid(L2_cache_entry)) {
                // L2 has it (maybe not with the right permissions)
                trigger(Event:Trigger_L2_to_L1I, in_msg.LineAddress,
                        L2_cache_entry, tbe_entry);
              } else {
                // We have room, the L2 doesn't have it,
                // so the L1 fetches the line
                trigger(mandatory_request_type_to_event(in_msg.Type),
                        in_msg.LineAddress, L1I_cache_entry, tbe_entry);
                // you could also say here:
                // trigger(mandatory_request_type_to_event(in_msg.Type),
                //         in_msg.LineAddress, OOD, OOD);
              }
            } else {
              // No room in the L1, so we need to make room
              if (L2cacheMemory.cacheAvail(L1IcacheMemory.cacheProbe(in_msg.LineAddress))) {

Re: [m5-dev] Review Request: Changing how CacheMemory interfaces with SLICC

2011-01-11 Thread Lisa Hsu
Hi Nilay,

I've been talking with Brad here at work about some of these things so I
will finally jump into the conversation via email.  First, great job on this
- this has clearly been a substantial amount of work.  I'm impressed.

I've got some comments below.

On Tue, Jan 11, 2011 at 3:46 PM, Nilay Vaish ni...@cs.wisc.edu wrote:

 Brad, my comments are inline.


 On Tue, 11 Jan 2011, Beckmann, Brad wrote:

  Sure, using a local variable to further reduce the calls to
 getCacheEntry is a great idea.  I think that is orthogonal to the
 suggestion I was making.  I just want the ability to directly set the
 cache_entry and tbe_entry variables in the trigger function.  That way
 the address, cache_entry, and tbe_entry variables are dealt with
 consistently and it avoids adding the separate calls to
  set_cache_entry() and set_tbe() in the inports.


 Firstly, we have to set cache and transaction buffer entry variables
 whenever
 we do allocation or deallocation of entries. This means these calls
 cannot be
 completely avoided. Secondly, while processing events from the mandatory
 queue (as it is called in the current implementations), if these
 variables are
 not set, we will have to revert to the earlier approach. This would
 double the
 number of times cache entry lookups are performed as the trigger function
 will perform the lookup again. This would also mean that both the
 approaches for looking up cache entry in the cache will have to exist
 simultaneously.


 Absolutely, we still need the ability to allocate or deallocate entries
 within actions.  I'm not advocating to completely eliminate the set/unset
 cache and tbe entry functions.  I just want to avoid including those calls
 in the inports.  I'm confused why the mandatory queue is different than
 other queues.  They all trigger events in the same way.


 if (L1IcacheMemory.isTagPresent(in_msg.LineAddress)) {
   // The tag matches for the L1, so the L1 asks the L2 for it.
  trigger(mandatory_request_type_to_event(in_msg.Type), in_msg.LineAddress);
 }

 Brad, the mandatory queue is just an example where an inport may perform a tag
 lookup before the cache and transaction buffer entries have been set. Above is
 an excerpt from the file MOESI_CMP_directory-L1cache.sm. Before trigger() is
 called, isTagPresent() is called, which means a tag lookup is performed before
 the cache or transaction buffer entries have been set. Suppose the tag was
 present in the L1Icache; then, in the trigger() call, we will perform the
 lookup again.

 Similarly, there is an inport in the Hammer's protocol implementation where
 getCacheEntry() is called before a call to trigger(). Now, why should we use
 getCacheEntry() in the inport and cache entry in the action?


I'm not sure why having the trigger function implicitly set the cache
entries and tbe entries would require two calls to isTagPresent() if local
variables are involved.  Is there some reason why something like the code
below would not be possible?

local_var := getL1ICacheEntry(in_msg.LineAddress)
if (is_valid(local_var))
  trigger(Event, in_msg.LineAddress, local_var);


I don't see how having the entries passed to the trigger function is 
functionally different from having to explicitly set things using two 
function calls - as long as there is a local variable involved to carry the 
value across some in_port logic, it seems much cleaner and more consistent to 
eliminate the need to explicitly call two functions for every in_port.

I just saw Brad's latest email go by with pseudocode and I think that is
both clean and flexible/functional - in certain cases you can explicitly
make the getCacheEntry call you want, depending on your protocol, in the
call to the trigger function, and in others you can set entries beforehand
into local variables and do arbitrary logic, then pass whatever entry you
want to the trigger function.

Lisa


Re: [m5-dev] Review Request: Changing how CacheMemory interfaces with SLICC

2011-01-07 Thread Beckmann, Brad
Hi Nilay,



Unfortunately I can't provide you an example of a protocol where getCacheEntry 
behaves in a different manner, but they do exist.  I reviewed your most recent 
patch updates and I don't think what we're asking for is much different than 
what you have on reviewboard right now.  Basically, all we need to do is add 
back in the capability for the programmer to write their own getCacheEntry 
function in the .sm file.  I know that I initially asked you to automatically 
generate those functions, and I still think that is useful for most protocols, 
but Lisa made me realize that we need customized getCacheEntry functions as 
well.  Also we may want to change the name of generated getCacheEntry function 
to getExclusiveCacheEntry so that one realizes the exclusive assumption made by 
the function.



Other than that, the only other change I suggest is to allow the trigger 
function to directly set the implicit cache_entry and tbe_entry variables.  
Below is example of what I'm envisioning:



Currently in MOESI_CMP_directory-L1cache.sm:



in_port(useTimerTable_in, Address, useTimerTable) {
  if (useTimerTable_in.isReady()) {
    set_cache_entry(getCacheEntry(useTimerTable.readyAddress()));
    set_tbe(TBEs[useTimerTable.readyAddress()]);
    trigger(Event:Use_Timeout, useTimerTable.readyAddress());
  }
}

Replace that with the following:

in_port(useTimerTable_in, Address, useTimerTable) {
  if (useTimerTable_in.isReady()) {
    trigger(Event:Use_Timeout, useTimerTable.readyAddress(),
            getExclusiveCacheEntry(useTimerTable.readyAddress()),
            TBEs[useTimerTable.readyAddress()]);
  }
}



Please let me know if you have any questions.



Thanks...you're almost done.  :)



Brad





-Original Message-
From: Nilay Vaish [mailto:ni...@cs.wisc.edu]
Sent: Thursday, January 06, 2011 6:32 AM
To: Beckmann, Brad
Cc: Default
Subject: RE: Review Request: Changing how CacheMemory interfaces with SLICC



Can you give me an example of a protocol where getCacheEntry() behaves in a 
different manner?



--

Nilay



On Wed, 5 Jan 2011, Beckmann, Brad wrote:



 Hi Nilay,

 Lisa Hsu (another member of the lab here at AMD) and I were discussing
 these changes a bit more and there was one particular idea that came
 out of our conversation that I wanted to relay to you.  Basically, we
 were thinking about how these changes will impact the flexibility of
 SLICC and we concluded that it is important to allow one to craft
 custom getCacheEntry functions for each protocol.  I know that I
 initially was hoping to generate these functions, but I now don't
 think that is possible without restricting what protocols can be
 supported by SLICC.  Instead we can use these customized getCacheEntry
 functions to pass the cache entry to the actions via the trigger
 function.  For those controllers that manage multiple cache memories,
 it is up to the programmer to understand what the cache entry pointer
 points to.  That should eliminate the need to have multiple
 *cacheMemory_entry variables in the .sm files.  Instead there is just
 the cache_entry variable that is set either by the trigger function
 call or set_cache_entry.

 Does that make sense to you?

 Brad






Re: [m5-dev] Review Request: Changing how CacheMemory interfaces with SLICC

2011-01-07 Thread Nilay Vaish

Brad, my comments are inline.


On Fri, 7 Jan 2011, Beckmann, Brad wrote:


Hi Nilay,



Unfortunately I can't provide you an example of a protocol where 
getCacheEntry behaves in a different manner, but they do exist.  I 
reviewed your most recent patch updates and I don't think what we're 
asking for is much different than what you have on reviewboard right 
now.  Basically, all we need to do is add back in the capability for the 
programmer to write their own getCacheEntry function in the .sm file. 
I know that I initially asked you to automatically generate those 
functions, and I still think that is useful for most protocols, but Lisa 
made me realize that we need customized getCacheEntry functions as well. 
Also we may want to change the name of generated getCacheEntry function 
to getExclusiveCacheEntry so that one realizes the exclusive assumption 
made by the function.




Other than that, the only other change I suggest is to allow the trigger 
function to directly set the implicit cache_entry and tbe_entry 
variables.  Below is example of what I'm envisioning:




[Nilay] If we do things in this way, then any in_port, in which cache / tb 
entries are accessed before the trigger function, would still make calls 
to isCacheTagPresent().





Currently in MOESI_CMP_directory-L1cache.sm:



in_port(useTimerTable_in, Address, useTimerTable) {
  if (useTimerTable_in.isReady()) {
    set_cache_entry(getCacheEntry(useTimerTable.readyAddress()));
    set_tbe(TBEs[useTimerTable.readyAddress()]);
    trigger(Event:Use_Timeout, useTimerTable.readyAddress());
  }
}

Replace that with the following:

in_port(useTimerTable_in, Address, useTimerTable) {
  if (useTimerTable_in.isReady()) {
    trigger(Event:Use_Timeout, useTimerTable.readyAddress(),
            getExclusiveCacheEntry(useTimerTable.readyAddress()),
            TBEs[useTimerTable.readyAddress()]);
  }
}



[Nilay] Instead of passing cache and tb entries as arguments, we can 
create local variables in the trigger function using the address argument.





Please let me know if you have any questions.



Thanks...you're almost done.  :)



Brad







Thanks
Nilay


Re: [m5-dev] Review Request: Changing how CacheMemory interfaces with SLICC

2011-01-06 Thread Nilay Vaish
Can you give me an example of a protocol where getCacheEntry() behaves in 
a different manner?


--
Nilay

On Wed, 5 Jan 2011, Beckmann, Brad wrote:


Hi Nilay,

Lisa Hsu (another member of the lab here at AMD) and I were discussing 
these changes a bit more and there was one particular idea that came out 
of our conversation that I wanted to relay to you.  Basically, we were 
thinking about how these changes will impact the flexibility of SLICC 
and we concluded that it is important to allow one to craft custom 
getCacheEntry functions for each protocol.  I know initially I was 
hoping to generate these functions, but I now don't think that is 
possible without restricting what protocols can be supported by SLICC. 
Instead we can use these customized getCacheEntry functions to pass the 
cache entry to the actions via the trigger function.  For those 
controllers that manage multiple cache memories, it is up to the 
programmer to understand what the cache entry pointer points to.  That 
should eliminate the need to have multiple *cacheMemory_entry variables 
in the .sm files.  Instead there is just the cache_entry variable that 
is set either by the trigger function call or set_cache_entry.


Does that make sense to you?

Brad




Re: [m5-dev] Review Request: Changing how CacheMemory interfaces with SLICC

2011-01-06 Thread Nilay Vaish

---
This is an automatically generated e-mail. To reply, visit:
http://reviews.m5sim.org/r/358/
---

(Updated 2011-01-06 07:29:17.470364)


Review request for Default.


Changes
---

This revision has following changes from the previous version -

1. The most important change is that we now formally assume that if multiple 
cache memories are under the control of a single controller, then for any given
address, at most one of them can hold a cache entry for it. This simplifies
many things. We are now required to work with only one cache entry pointer.
This pointer is set to the cache entry for the address being passed to the
trigger function. The checks that have to be carried out for code generation 
also get simplified.

2. is_invalid() has been introduced; it can be used for testing whether a given
entry (cache or transaction buffer) is NULL.

3. Earlier, *_ptr variables were being added on declaration of methods (which 
make use of TBE and/or cache entry), actions, and in-ports. This is no longer
done.

4. In FuncCallExprAST.py, a check has been added for expressions that are
   passed as function parameters. If an expression has type TBE or
   AbstractCacheEntry, then all occurrences of '*' are removed from the
   expression. This is under the assumption that the code from which the
   function call is made holds pointers to the variables being passed. Since
   SLICC will replace any occurrence of these variables with a dereferenced
   version, the '*' must be removed before the variable is passed to the
   function.


Summary
---

The purpose of this patch is to change the way CacheMemory interfaces with 
coherence protocols. Currently, whenever a cache controller (defined in the 
protocol under consideration) needs to carry out any operation on a cache 
block, it looks up the tag hash map and figures out whether or not the block 
exists in the cache. In case it does exist, the operation is carried out (which 
requires another lookup). Over a single clock cycle, multiple such lookups take 
place as observed through profiling of different protocols. It was seen that 
the tag lookup takes anything from 10% to 20% of the simulation time. In order 
to reduce this time, this patch is being posted. The CacheMemory class now will 
have a function that will return pointer to a cache block entry, instead of a 
reference (though the function that returns the reference has been retained 
currently). Functions have been introduced for setting/unsetting the pointer 
and for checking its validity. Similar changes have been carried out for Transaction Buffer 
entries as well.
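A minimal C++ sketch of the pointer-returning interface described above, under the assumption that lookup() returns nullptr on a miss (names mirror the description, not necessarily the committed code, and the real CacheMemory has many more members):

```cpp
#include <cstdint>
#include <unordered_map>

struct AbstractCacheEntry { int permissions = 0; };

class CacheMemory {
    std::unordered_map<uint64_t, AbstractCacheEntry> m_tags;
  public:
    // Pointer-returning lookup: nullptr signals "block not present",
    // replacing the reference-returning accessor plus isTagPresent().
    AbstractCacheEntry* lookup(uint64_t line_addr) {
        auto it = m_tags.find(line_addr);
        return it == m_tags.end() ? nullptr : &it->second;
    }
    AbstractCacheEntry* allocate(uint64_t line_addr) {
        return &m_tags[line_addr];         // default-construct the entry
    }
    void deallocate(uint64_t line_addr) { m_tags.erase(line_addr); }
};
```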

Note that changes are required to both SLICC and the protocol files. This patch 
carries out changes to SLICC and committing this patch alone, I believe, will 
___break___ all the protocols. I am working on patching the protocols as well. 
This patch is being posted to get feedback from other developers. 


Diffs (updated)
-

  src/mem/ruby/slicc_interface/AbstractCacheEntry.hh UNKNOWN 
  src/mem/ruby/slicc_interface/AbstractCacheEntry.cc UNKNOWN 
  src/mem/ruby/system/CacheMemory.hh UNKNOWN 
  src/mem/ruby/system/CacheMemory.cc UNKNOWN 
  src/mem/ruby/system/TBETable.hh UNKNOWN 
  src/mem/slicc/ast/ActionDeclAST.py UNKNOWN 
  src/mem/slicc/ast/FormalParamAST.py UNKNOWN 
  src/mem/slicc/ast/FuncCallExprAST.py UNKNOWN 
  src/mem/slicc/ast/InPortDeclAST.py UNKNOWN 
  src/mem/slicc/ast/MethodCallExprAST.py UNKNOWN 
  src/mem/slicc/ast/TypeDeclAST.py UNKNOWN 
  src/mem/slicc/ast/__init__.py UNKNOWN 
  src/mem/slicc/parser.py UNKNOWN 
  src/mem/slicc/symbols/StateMachine.py UNKNOWN 
  src/mem/protocol/RubySlicc_Types.sm UNKNOWN 

Diff: http://reviews.m5sim.org/r/358/diff


Testing
---

I have tested these changes using the MOESI_CMP_directory and MI protocols.


Thanks,

Nilay



Re: [m5-dev] Review Request: Changing how CacheMemory interfaces with SLICC

2011-01-05 Thread Beckmann, Brad
Hi Nilay,

Lisa Hsu (another member of the lab here at AMD) and I were discussing these 
changes a bit more and there was one particular idea that came out of our 
conversation that I wanted to relay to you.  Basically, we were thinking about 
how these changes will impact the flexibility of SLICC, and we concluded that it 
is important to allow one to craft custom getCacheEntry functions for each 
protocol.  I know initially I was hoping to generate these functions, but I now 
don’t think that is possible without restricting which protocols SLICC can 
support.  Instead we can use these customized getCacheEntry functions to pass 
the cache entry to the actions via the trigger function.  For those controllers 
that manage multiple cache memories, it is up to the programmer to understand 
what the cache entry pointer points to.  That should eliminate the need to have 
multiple *cacheMemory_entry variables in the .sm files.  Instead there is just 
the cache_entry variable, which is set either by the trigger function call or 
by set_cache_entry.
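A rough Python sketch of that idea (all names hypothetical, not the generated code): each protocol writes its own getCacheEntry, and the trigger call threads whatever entry it returns through to the actions:

```python
def getCacheEntry(addr, l1_cache, l2_cache):
    """Protocol-specific policy; here: prefer the L1 entry, else L2, else None."""
    entry = l1_cache.get(addr)
    if entry is None:
        entry = l2_cache.get(addr)
    return entry

def trigger(event, addr, cache_entry, tbe):
    """Stand-in for the generated transition entry point: the entry the
    protocol handed to trigger() is the one every action sees."""
    # in real generated code, actions would run here with cache_entry in scope
    return {"event": event, "addr": addr,
            "cache_entry": cache_entry, "tbe": tbe}

# Toy cache contents: address -> entry
l1 = {0x80: "L1-entry"}
l2 = {0x40: "L2-entry"}

result = trigger("Load", 0x40, getCacheEntry(0x40, l1, l2), None)
```

For a controller managing both an L1 and an L2, the programmer's getCacheEntry encodes which memory's entry is handed to the actions, so the actions themselves only ever deal with one cache_entry variable.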

Does that make sense to you?

Brad


From: Nilay Vaish [mailto:ni...@cs.wisc.edu]
Sent: Tuesday, January 04, 2011 9:43 AM
To: Nilay Vaish; Default; Beckmann, Brad
Subject: Re: Review Request: Changing how CacheMemory interfaces with SLICC

This is an automatically generated e-mail. To reply, visit: 
http://reviews.m5sim.org/r/358/



On January 3rd, 2011, 3:31 p.m., Brad Beckmann wrote:

Hi Nilay,



First, I must say this is an impressive amount of work.  You definitely got a 
lot done over holiday break. :)



Overall, this set of patches is definitely close, but I want to see if we can 
take them a step forward.  Also I have a few suggestions that may make things 
easier.  Finally, I have a bunch of minor questions/suggestions on individual 
lines, but I’ll hold off on those until you can respond to my higher-level 
questions.



The main thing I would like to see improved is not having to differentiate 
between “entry” and “entry_ptr” in the .sm files.  Am I correct that the only 
functions in the .sm files that are passed an “entry_ptr” are “is_valid_ptr”, 
“getCacheEntry”, and “set_cache_entry”?  If so, it seems that all three 
functions are generated with unique python code, either in an AST file or 
StateMachine.py.  Therefore, could we just pass these functions “entry” and 
rely on the underneath python code to generate the correct references?  This 
would make things more readable, “is_valid_ptr()” becomes “is_valid”, and it 
doesn’t require the slicc programmer to understand which functions take an 
entry pointer versus the entry itself.  If we can’t make such a change, I worry 
about how much extra complexity this change pushes on the slicc programmer.
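One way to picture the helpers under discussion (a sketch under assumed names, not SLICC's actual generated code): the implicit cache_entry variable is a pointer that set_cache_entry/unset_cache_entry manage and is_valid tests, so .sm code never has to spell out pointer syntax:

```python
class Scope:
    """Models the implicit cache_entry local held by the generated code."""
    def __init__(self):
        self.cache_entry = None  # NULL until a set_cache_entry call

def set_cache_entry(scope, entry):
    # corresponds to set_cache_entry(...) in a .sm action
    scope.cache_entry = entry

def unset_cache_entry(scope):
    # corresponds to unset_cache_entry() after deallocation
    scope.cache_entry = None

def is_valid(entry):
    # what "is_valid_ptr" boils down to: a non-NULL pointer check
    return entry is not None

scope = Scope()
assert not is_valid(scope.cache_entry)   # nothing allocated yet
set_cache_entry(scope, {"state": "E"})   # e.g. after allocating a block
```

Under this reading, renaming is_valid_ptr to is_valid costs nothing at the generated-code level, since the pointer-ness lives entirely in the helpers.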



Also another suggestion to make things more readable, please replace the name 
L1IcacheMemory_entry with L1I_entry.  Do the same for L1D_entry and L2_entry.  
That will shorten many of your lines.



So am I correct that hammer’s simultaneous usage of valid L1 and L2 cache 
entries in certain transitions is the only reason that within all actions, the 
getCacheEntry calls take multiple cache entries?  If so, I think it would be 
fairly trivial to use a tbe entry as an intermediary between the L1 and L2 for 
those particular hammer transitions.  That way only one cache entry is valid at 
any particular time, and we can simply use the variable cache_entry in the 
actions.  That should clean things up a lot.



By the way, once you check in these patches, the MESI_CMP_directory protocol 
will be deprecated, correct?  If so, make sure you include a patch that removes 
it from the regression tester.



Brad





 The main thing I would like to see improved is not having to differentiate
 between “entry” and “entry_ptr” in the .sm files.  Am I correct
 that the only functions in the .sm files that are passed an
 “entry_ptr” are “is_valid_ptr”, “getCacheEntry”, and
 “set_cache_entry”?  If so, it seems that all three functions are
 generated with unique python code, either in an AST file or
 StateMachine.py.  Therefore, could we just pass these functions
 “entry” and rely on the underneath python code to generate the correct
 references?  This would make things more readable, “is_valid_ptr()”
 becomes “is_valid”, and it doesn’t require the slicc programmer to
 understand which functions take an entry pointer versus the entry itself.
 If we can’t make such a change, I worry about how much extra complexity
 this change pushes on the slicc programmer.



There are functions that are passed cache entry and transaction buffer entry as 
arguments. Currently, I assume that these arguments are passed using pointers.





 Also another suggestion to make things more readable, please replace the
 name L1IcacheMemory_entry with L1I_entry.  Do the same for L1D_entry and
 L2_entry.  That will shorten many of your lines.



The names of the cache entry variables are 

Re: [m5-dev] Review Request: Changing how CacheMemory interfaces with SLICC

2011-01-04 Thread Nilay Vaish


 On 2011-01-03 15:31:20, Brad Beckmann wrote:
  Hi Nilay,

First, I must say this is an impressive amount of work.  You definitely got a 
lot done over holiday break. :)

Overall, this set of patches is definitely close, but I want to see if we can 
take them a step forward.  Also I have a few suggestions that may make things 
easier.  Finally, I have a bunch of minor questions/suggestions on individual 
lines, but I’ll hold off on those until you can respond to my higher-level 
questions.

The main thing I would like to see improved is not having to differentiate 
between “entry” and “entry_ptr” in the .sm files.  Am I correct that the only 
functions in the .sm files that are passed an “entry_ptr” are “is_valid_ptr”, 
“getCacheEntry”, and “set_cache_entry”?  If so, it seems that all three 
functions are generated with unique python code, either in an AST file or 
StateMachine.py.  Therefore, could we just pass these functions “entry” and 
rely on the underneath python code to generate the correct references?  This 
would make things more readable, “is_valid_ptr()” becomes “is_valid”, and it 
doesn’t require the slicc programmer to understand which functions take an 
entry pointer versus the entry itself.  If we can’t make such a change, I worry 
about how much extra complexity this change pushes on the slicc programmer.

Also another suggestion to make things more readable, please replace the name 
L1IcacheMemory_entry with L1I_entry.  Do the same for L1D_entry and L2_entry.  
That will shorten many of your lines.

So am I correct that hammer’s simultaneous usage of valid L1 and L2 cache 
entries in certain transitions is the only reason that within all actions, the 
getCacheEntry calls take multiple cache entries?  If so, I think it would be 
fairly trivial to use a tbe entry as an intermediary between the L1 and L2 for 
those particular hammer transitions.  That way only one cache entry is valid at 
any particular time, and we can simply use the variable cache_entry in the 
actions.  That should clean things up a lot.

By the way, once you check in these patches, the MESI_CMP_directory protocol 
will be deprecated, correct?  If so, make sure you include a patch that removes 
it from the regression tester.

Brad

 The main thing I would like to see improved is not having to differentiate
 between “entry” and “entry_ptr” in the .sm files.  Am I correct
 that the only functions in the .sm files that are passed an
 “entry_ptr” are “is_valid_ptr”, “getCacheEntry”, and
 “set_cache_entry”?  If so, it seems that all three functions are
 generated with unique python code, either in an AST file or
 StateMachine.py.  Therefore, could we just pass these functions
 “entry” and rely on the underneath python code to generate the correct
 references?  This would make things more readable, “is_valid_ptr()”
 becomes “is_valid”, and it doesn’t require the slicc programmer to
 understand which functions take an entry pointer versus the entry itself. 
 If we can’t make such a change, I worry about how much extra complexity
 this change pushes on the slicc programmer.

There are functions that are passed cache entry and transaction buffer entry as 
arguments. Currently, I assume that these arguments are passed using pointers.

 
 Also another suggestion to make things more readable, please replace the
 name L1IcacheMemory_entry with L1I_entry.  Do the same for L1D_entry and
 L2_entry.  That will shorten many of your lines.

The names of the cache entry variables are currently tied with the names of the 
cache memory variables belonging to the machine. If the name of the cache 
memory variable is A, then the corresponding cache entry variable is named 
A_entry.
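The derivation is purely mechanical; in Python terms (illustrative only, not the actual StateMachine.py code):

```python
def entry_var_name(cache_memory_var):
    # SLICC ties the entry variable's name to the cache memory
    # variable's name: A -> A_entry
    return cache_memory_var + "_entry"

names = [entry_var_name(v)
         for v in ("L1IcacheMemory", "L1DcacheMemory", "L2cacheMemory")]
```

So shortening L1IcacheMemory_entry to L1I_entry would mean either renaming the cache memory variables themselves or decoupling the two names in the generator.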

 So am I correct that hammer’s simultaneous usage of valid L1 and L2
 cache entries in certain transitions is the only reason that within all
 actions, the getCacheEntry calls take multiple cache entries?  If so, I
 think it would be fairly trivial to use a tbe entry as an intermediary
 between the L1 and L2 for those particular hammer transitions.  That way
 only one cache entry is valid at any particular time, and we can simply
 use the variable cache_entry in the actions.  That should clean things up
 a lot.

Oops! I should have thought of that before making all those changes. But can we 
assume that there would always be only one valid cache entry pointer at any 
given time? If that's true, I would probably revert to the previous version of 
the patch. This should also resolve the naming issue.

 By the way, once you check in these patches, the MESI_CMP_directory
 protocol will be deprecated, correct?  If so, make sure you include a
 patch that removes it from the regression tester.

I have a patch for the protocol, but I need to discuss it. Do you think it is 
possible that a protocol is not actually deadlocked, but the random tester 
reports a deadlock because the underlying memory system is taking too much 
time? The patch works for 1, 2, 

Re: [m5-dev] Review Request: Changing how CacheMemory interfaces with SLICC

2011-01-04 Thread Arkaprava Basu

Hi Nilay,

   On the deadlock issue with MESI_CMP_directory:
   Yes, this can happen, as ruby_tester and the Sequencer only report 
*possible* deadlocks. With a higher number of processors there is more 
contention (and thus latency), and a deadlock can be reported mistakenly. I 
generally look at the protocol trace to figure out whether there is 
actually any deadlock. You can also try doubling the Sequencer 
deadlock threshold and see if the problem goes away. If it's a true 
deadlock, it will break again.


On a related note, as Brad has pointed out, MESI_CMP_directory has 
its share of issues. Recently one of Prof. Sarita Adve's students 
e-mailed us (Multifacet) about 6 bugs he found while model checking 
MESI_CMP_directory (including a major one). I took some time to look at 
them, and it seems like MESI_CMP_directory is now fixed (hopefully).  The 
modified protocol is now passing 1M checks with 16 processors with 
multiple random seeds.  I can coordinate with you locally on this, if 
you want.


Thanks
Arka

On 01/04/2011 11:43 AM, Nilay Vaish wrote:



On 2011-01-03 15:31:20, Brad Beckmann wrote:

Hi Nilay,

First, I must say this is an impressive amount of work.  You definitely got a 
lot done over holiday break. :)

Overall, this set of patches is definitely close, but I want to see if we can 
take them a step forward.  Also I have a few suggestions that may make things 
easier.  Finally, I have a bunch of minor questions/suggestions on individual 
lines, but I’ll hold off on those until you can respond to my higher-level 
questions.

The main thing I would like to see improved is not having to differentiate 
between “entry” and “entry_ptr” in the .sm files.  Am I correct that the only 
functions in the .sm files that are passed an “entry_ptr” are “is_valid_ptr”, 
“getCacheEntry”, and “set_cache_entry”?  If so, it seems that all three 
functions are generated with unique python code, either in an AST file or 
StateMachine.py.  Therefore, could we just pass these functions “entry” and 
rely on the underneath python code to generate the correct references?  This 
would make things more readable, “is_valid_ptr()” becomes “is_valid”, and it 
doesn’t require the slicc programmer to understand which functions take an 
entry pointer versus the entry itself.  If we can’t make such a change, I worry 
about how much extra complexity this change pushes on the slicc programmer.

Also another suggestion to make things more readable, please replace the name 
L1IcacheMemory_entry with L1I_entry.  Do the same for L1D_entry and L2_entry.  
That will shorten many of your lines.

So am I correct that hammer’s simultaneous usage of valid L1 and L2 cache 
entries in certain transitions is the only reason that within all actions, the 
getCacheEntry calls take multiple cache entries?  If so, I think it would be 
fairly trivial to use a tbe entry as an intermediary between the L1 and L2 for 
those particular hammer transitions.  That way only one cache entry is valid at 
any particular time, and we can simply use the variable cache_entry in the 
actions.  That should clean things up a lot.

By the way, once you check in these patches, the MESI_CMP_directory protocol 
will be deprecated, correct?  If so, make sure you include a patch that removes 
it from the regression tester.

Brad


The main thing I would like to see improved is not having to differentiate
between “entry” and “entry_ptr” in the .sm files.  Am I correct
that the only functions in the .sm files that are passed an
“entry_ptr” are “is_valid_ptr”, “getCacheEntry”, and
“set_cache_entry”?  If so, it seems that all three functions are
generated with unique python code, either in an AST file or
StateMachine.py.  Therefore, could we just pass these functions
“entry” and rely on the underneath python code to generate the correct
references?  This would make things more readable, “is_valid_ptr()”
becomes “is_valid”, and it doesn’t require the slicc programmer to
understand which functions take an entry pointer versus the entry itself.
If we can’t make such a change, I worry about how much extra complexity
this change pushes on the slicc programmer.

There are functions that are passed cache entry and transaction buffer entry as 
arguments. Currently, I assume that these arguments are passed using pointers.


Also another suggestion to make things more readable, please replace the
name L1IcacheMemory_entry with L1I_entry.  Do the same for L1D_entry and
L2_entry.  That will shorten many of your lines.

The names of the cache entry variables are currently tied with the names of the 
cache memory variables belonging to the machine. If the name of the cache 
memory variable is A, then the corresponding cache entry variable is named 
A_entry.


So am I correct that hammer’s simultaneous usage of valid L1 and L2
cache entries in certain transitions is the only reason that within all
actions, the getCacheEntry calls take 

Re: [m5-dev] Review Request: Changing how CacheMemory interfaces with SLICC

2011-01-04 Thread Beckmann, Brad
Hi Nilay,

My responses are below:

 The main thing I would like to see improved is not having to differentiate
 between “entry” and “entry_ptr” in the .sm files.  Am I correct
 that the only functions in the .sm files that are passed an
 “entry_ptr” are “is_valid_ptr”, “getCacheEntry”, and
 “set_cache_entry”?  If so, it seems that all three functions are
 generated with unique python code, either in an AST file or
 StateMachine.py.  Therefore, could we just pass these functions
 “entry” and rely on the underneath python code to generate the correct
 references?  This would make things more readable, “is_valid_ptr()”
 becomes “is_valid”, and it doesn’t require the slicc programmer to
 understand which functions take an entry pointer versus the entry itself. 
 If we can’t make such a change, I worry about how much extra complexity
 this change pushes on the slicc programmer.

There are functions that are passed cache entry and transaction buffer entry as 
arguments. Currently, I assume that these arguments are passed using pointers.

[BB] So does that mean that the cache entry is always passed in as a pointer?  
If so, can one just use cache_entry for all function calls and remove any use 
of cache_entry_ptr in the .sm files?  That is essentially what I would like 
to see.  

 
 Also another suggestion to make things more readable, please replace the
 name L1IcacheMemory_entry with L1I_entry.  Do the same for L1D_entry and
 L2_entry.  That will shorten many of your lines.

The names of the cache entry variables are currently tied with the names of the 
cache memory variables belonging to the machine. If the name of the cache 
memory variable is A, then the corresponding cache entry variable is named 
A_entry.

[BB] Ah, I see.  Ok then let's just keep them the way they are for now.  We can 
deal with shortening the names later.

 So am I correct that hammer’s simultaneous usage of valid L1 and L2
 cache entries in certain transitions is the only reason that within all
 actions, the getCacheEntry calls take multiple cache entries?  If so, I
 think it would be fairly trivial to use a tbe entry as an intermediary
 between the L1 and L2 for those particular hammer transitions.  That way
 only one cache entry is valid at any particular time, and we can simply
 use the variable cache_entry in the actions.  That should clean things up
 a lot.

Oops! I should have thought of that before making all those changes. But can we 
assume that there would always be only one valid cache entry pointer at any 
given time? If that's true, I would probably revert to the previous version of 
the patch. This should also resolve the naming issue.

[BB] I wouldn't have expected you to realize that.  It is one of those things 
that isn't completely obvious without spending a lot of time developing 
protocols.  Yes, I think it is easiest for you to just revert to the previous 
version of the patch and just modify the hammer protocol to use a tbe entry as 
an intermediary.  We've always had an unofficial rule that a controller can 
only manage multiple caches if those caches are exclusive with respect to each 
other.  For the most part, that rule has been followed by all the protocols I'm 
familiar with.  I think your change just makes that an official policy.
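The suggested hammer fix can be sketched as follows (a toy model with hypothetical helper names): park the block in a TBE while moving it between caches, so at most one cache entry is valid at any time:

```python
def l2_to_l1_via_tbe(l1_cache, l2_cache, tbe_table, addr):
    """Move a block L2 -> L1 without both cache entries being valid at once."""
    # Step 1: deallocate the L2 entry, stashing its contents in a TBE.
    tbe_table[addr] = l2_cache.pop(addr)
    # At this point only the TBE holds the block: no cache entry is valid,
    # so actions can keep using a single cache_entry variable.
    # Step 2: allocate the L1 entry from the TBE and free the TBE.
    l1_cache[addr] = tbe_table.pop(addr)

l1, tbes = {}, {}
l2 = {0x40: {"state": "M", "data": b"\x00" * 64}}
l2_to_l1_via_tbe(l1, l2, tbes, 0x40)
```

This mirrors the "caches managed by one controller are exclusive with respect to each other" rule: the TBE acts as the intermediary during the one transition where both levels would otherwise be touched.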

 By the way, once you check in these patches, the MESI_CMP_directory
 protocol will be deprecated, correct?  If so, make sure you include a
 patch that removes it from the regression tester.

I have a patch for the protocol, but I need to discuss it. Do you think it is 
possible that a protocol is not actually deadlocked, but the random tester 
reports a deadlock because the underlying memory system is taking too much 
time? The patch works 
for 1, 2, and 4 processors for 10,000,000 loads. I have tested these processor 
configurations with 40 different seed values. But for 8 processors, random 
tester outputs some thing like this --

panic: Possible Deadlock detected. Aborting!
version: 6 request.paddr: 12779 m_writeRequestTable: 15 current time: 369500011 
issue_time: 368993771 difference: 506240
 @ cycle 369500011
[wakeup:build/ALPHA_SE_MESI_CMP_directory/mem/ruby/system/Sequencer.cc, line 
123]

[BB] Yes, the current version of MESI_CMP_directory is broken in many places.  
Arka just told me that he recently fixed many of those problems.  I suggest 
getting his fixes and working from there.

Brad


Re: [m5-dev] Review Request: Changing how CacheMemory interfaces with SLICC

2011-01-04 Thread Nilay Vaish

Brad

Is there a reason why each action name follows the pattern "<combination of 
several letters>_<action performed by the action>"? The letters used are 
not abbreviations of the action performed. Can we use any combination?


Thanks
Nilay

On Tue, 4 Jan 2011, Beckmann, Brad wrote:


Hi Nilay,

My responses are below:


The main thing I would like to see improved is not having to differentiate
between “entry” and “entry_ptr” in the .sm files.  Am I correct
that the only functions in the .sm files that are passed an
“entry_ptr” are “is_valid_ptr”, “getCacheEntry”, and
“set_cache_entry”?  If so, it seems that all three functions are
generated with unique python code, either in an AST file or
StateMachine.py.  Therefore, could we just pass these functions
“entry” and rely on the underneath python code to generate the correct
references?  This would make things more readable, “is_valid_ptr()”
becomes “is_valid”, and it doesn’t require the slicc programmer to
understand which functions take an entry pointer versus the entry itself.
If we can’t make such a change, I worry about how much extra complexity
this change pushes on the slicc programmer.


There are functions that are passed cache entry and transaction buffer entry as 
arguments. Currently, I assume that these arguments are passed using pointers.

[BB] So does that mean that the cache entry is always passed in as a pointer?  If so, can one just 
use cache_entry for all function calls and remove any use of 
cache_entry_ptr in the .sm files?  That is essentially what I would like to see.



Also another suggestion to make things more readable, please replace the
name L1IcacheMemory_entry with L1I_entry.  Do the same for L1D_entry and
L2_entry.  That will shorten many of your lines.


The names of the cache entry variables are currently tied with the names of the 
cache memory variables belonging to the machine. If the name of the cache 
memory variable is A, then the corresponding cache entry variable is named 
A_entry.

[BB] Ah, I see.  Ok then let's just keep them the way they are for now.  We can 
deal with shortening the names later.


So am I correct that hammer’s simultaneous usage of valid L1 and L2
cache entries in certain transitions is the only reason that within all
actions, the getCacheEntry calls take multiple cache entries?  If so, I
think it would be fairly trivial to use a tbe entry as an intermediary
between the L1 and L2 for those particular hammer transitions.  That way
only one cache entry is valid at any particular time, and we can simply
use the variable cache_entry in the actions.  That should clean things up
a lot.


Oops! I should have thought of that before making all those changes. But can we 
assume that there would always be only one valid cache entry pointer at any 
given time? If that's true, I would probably revert to the previous version of 
the patch. This should also resolve the naming issue.

[BB] I wouldn't have expected you to realize that.  It is one of those things 
that isn't completely obvious without spending a lot of time developing 
protocols.  Yes, I think it is easiest for you to just revert to the previous 
version of the patch and just modify the hammer protocol to use a tbe entry as 
an intermediary.  We've always had an unofficial rule that a controller can 
only manage multiple caches if those caches are exclusive with respect to each 
other.  For the most part, that rule has been followed by all the protocols I'm 
familiar with.  I think your change just makes that an official policy.


By the way, once you check in these patches, the MESI_CMP_directory
protocol will be deprecated, correct?  If so, make sure you include a
patch that removes it from the regression tester.


I have a patch for the protocol, but I need to discuss it. Do you think it is 
possible that a protocol is not actually deadlocked, but the random tester 
reports a deadlock because the underlying memory system is taking too much 
time? The patch works 
for 1, 2, and 4 processors for 10,000,000 loads. I have tested these processor 
configurations with 40 different seed values. But for 8 processors, random 
tester outputs some thing like this --

panic: Possible Deadlock detected. Aborting!
version: 6 request.paddr: 12779 m_writeRequestTable: 15 current time: 369500011 
issue_time: 368993771 difference: 506240
@ cycle 369500011
[wakeup:build/ALPHA_SE_MESI_CMP_directory/mem/ruby/system/Sequencer.cc, line 
123]

[BB] Yes, the current version of MESI_CMP_directory is broken in many places.  
Arka just told me that he recently fixed many of those problems.  I suggest 
getting his fixes and working from there.

Brad


Re: [m5-dev] Review Request: Changing how CacheMemory interfaces with SLICC

2011-01-04 Thread Beckmann, Brad
Hi Nilay,

At one point in time, the combination of several letters at the beginning of 
the action name corresponded to the shorthand name for the action.  The 
shorthand name is the letter or letter combination that appears in the HTML 
tables.  SLICC may have once enforced that the combination of letters matched 
the HTML shorthand name, but I don't believe it does right now.  Therefore the 
letters are just a guide to match actions with their associated shorthand name.  
And yes, you can use any combination.

Brad


-Original Message-
From: Nilay Vaish [mailto:ni...@cs.wisc.edu] 
Sent: Tuesday, January 04, 2011 12:03 PM
To: Beckmann, Brad
Cc: Default
Subject: RE: Review Request: Changing how CacheMemory interfaces with SLICC

Brad

Is there a reason why each action name follows the pattern "<combination of 
several letters>_<action performed by the action>"? The letters used are not 
abbreviations of the action performed. Can we use any combination?

Thanks
Nilay

On Tue, 4 Jan 2011, Beckmann, Brad wrote:

 Hi Nilay,

 My responses are below:

 The main thing I would like to see improved is not having to 
 differentiate between “entry” and “entry_ptr” in the .sm 
 files.  Am I correct that the only functions in the .sm files that 
 are passed an “entry_ptr” are “is_valid_ptr”, 
 “getCacheEntry”, and “set_cache_entry”?  If so, it seems that 
 all three functions are generated with unique python code, either in 
 an AST file or StateMachine.py.  Therefore, could we just pass these 
 functions “entry” and rely on the underneath python code to 
 generate the correct references?  This would make things more 
 readable, “is_valid_ptr()” becomes “is_valid”, and it 
 doesn’t require the slicc programmer to understand which functions take an 
 entry pointer versus the entry itself.
 If we can’t make such a change, I worry about how much extra 
 complexity this change pushes on the slicc programmer.

 There are functions that are passed cache entry and transaction buffer entry 
 as arguments. Currently, I assume that these arguments are passed using 
 pointers.

 [BB] So does that mean that the cache entry is always passed in as a pointer? 
  If so, can one just use cache_entry for all function calls and remove any 
 use of cache_entry_ptr in the .sm files?  That is essentially what I would 
 like to see.


 Also another suggestion to make things more readable, please replace 
 the name L1IcacheMemory_entry with L1I_entry.  Do the same for 
 L1D_entry and L2_entry.  That will shorten many of your lines.

 The names of the cache entry variables are currently tied with the names of 
 the cache memory variables belonging to the machine. If the name of the cache 
 memory variable is A, then the corresponding cache entry variable is named 
 A_entry.

 [BB] Ah, I see.  Ok then let's just keep them the way they are for now.  We 
 can deal with shortening the names later.

 So am I correct that hammer’s simultaneous usage of valid L1 and L2 
 cache entries in certain transitions is the only reason that within 
 all actions, the getCacheEntry calls take multiple cache entries?  If 
 so, I think it would be fairly trivial to use a tbe entry as an 
 intermediary between the L1 and L2 for those particular hammer 
 transitions.  That way only one cache entry is valid at any 
 particular time, and we can simply use the variable cache_entry in 
 the actions.  That should clean things up a lot.

 Oops! I should have thought of that before making all those changes. But can 
 we assume that there would always be only one valid cache entry pointer at any 
 given time? If that's true, I would probably revert to the previous version of 
 the patch. This should also resolve the naming issue.

 [BB] I wouldn't have expected you to realize that.  It is one of those things 
 that isn't completely obvious without spending a lot of time developing 
 protocols.  Yes, I think it is easiest for you to just revert to the previous 
 version of the patch and just modify the hammer protocol to use a tbe entry 
 as an intermediary.  We've always had an unofficial rule that a controller 
 can only manage multiple caches if those caches are exclusive with respect to 
 each other.  For the most part, that rule has been followed by all the 
 protocols I'm familiar with.  I think your change just makes that an official 
 policy.

 By the way, once you check in these patches, the MESI_CMP_directory 
 protocol will be deprecated, correct?  If so, make sure you include a 
 patch that removes it from the regression tester.

 I have a patch for the protocol, but I need to discuss it. Do you 
 think it is possible that a protocol is not actually deadlocked, but the 
 random tester reports a deadlock because the underlying memory system is 
 taking too much time? The patch works for 1, 2, and 4 processors for 10,000,000 
 loads. I have tested these processor configurations with 40 different 
 seed values. But for 8 processors, random 

Re: [m5-dev] Review Request: Changing how CacheMemory interfaces with SLICC

2011-01-03 Thread Steve Reinhardt

---
This is an automatically generated e-mail. To reply, visit:
http://reviews.m5sim.org/r/358/#review585
---



src/mem/ruby/slicc_interface/AbstractCacheEntry.hh
http://reviews.m5sim.org/r/358/#comment823

It looks like you're also changing the lock flag from being a separate 
table to a cache entry field... this seems reasonable, but also seems 
independent of the other changes.  I don't know that it's worth separating it 
into a separate changeset, but there should be some mention of this in the 
commit message, I think.



src/mem/slicc/ast/ActionDeclAST.py
http://reviews.m5sim.org/r/358/#comment822

The expression '%s' % foo is a pretty unconventional way to force foo to 
be a string :-).  If foo is already a string (which might be the case here) you 
don't need it at all.  If foo is not a string, the conventional way to force a 
conversion is str(foo).  I see there are a number of places where you do this.
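For illustration, the two spellings behave identically for non-strings, and the formatting version is simply redundant when the value is already a string:

```python
code = 0x40
# Unconventional: formatting solely to coerce a value to a string.
via_format = "%s" % code
# Conventional and clearer: explicit conversion.
via_str = str(code)

name = "cache_entry"
# Already a string: no conversion needed at all.
same = "%s" % name
```

Both via_format and via_str yield "64" here; str() states the intent directly, which is why it is the idiomatic choice in generator code like StateMachine.py.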


- Steve


On 2010-12-31 17:50:59, Nilay Vaish wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 http://reviews.m5sim.org/r/358/
 ---
 
 (Updated 2010-12-31 17:50:59)
 
 
 Review request for Default.
 
 
 Summary
 ---
 
 The purpose of this patch is to change the way CacheMemory interfaces with 
 coherence protocols. Currently, whenever a cache controller (defined in the 
 protocol under consideration) needs to carry out any operation on a cache 
 block, it looks up the tag hash map and figures out whether or not the block 
 exists in the cache. If it does exist, the operation is carried out (which 
 requires another lookup). Profiling of different protocols shows that 
 multiple such lookups take place over a single clock cycle, and that the tag 
 lookup takes anywhere from 10% to 20% of the simulation time. This patch 
 aims to reduce that time. The CacheMemory class will now have a function 
 that returns a pointer to a cache block entry instead of a reference (though 
 the function that returns the reference has been retained for now). 
 Functions have been introduced for setting/unsetting this pointer and 
 checking its validity. Similar changes have been carried out for Transaction 
 Buffer entries as well.
 
 Note that changes are required to both SLICC and the protocol files. This 
 patch changes only SLICC; committing it alone, I believe, will ___break___ 
 all the protocols. I am working on patching the protocols as well. This 
 patch is being posted to get feedback from other developers.
 
 
 Diffs
 -
 
   src/mem/protocol/RubySlicc_Types.sm UNKNOWN 
   src/mem/ruby/slicc_interface/AbstractCacheEntry.hh UNKNOWN 
   src/mem/ruby/slicc_interface/AbstractCacheEntry.cc UNKNOWN 
   src/mem/ruby/system/CacheMemory.hh UNKNOWN 
   src/mem/ruby/system/CacheMemory.cc UNKNOWN 
   src/mem/ruby/system/TBETable.hh UNKNOWN 
   src/mem/slicc/ast/ActionDeclAST.py UNKNOWN 
   src/mem/slicc/ast/FormalParamAST.py UNKNOWN 
   src/mem/slicc/ast/FuncCallExprAST.py UNKNOWN 
   src/mem/slicc/ast/FuncDeclAST.py UNKNOWN 
   src/mem/slicc/ast/InPortDeclAST.py UNKNOWN 
   src/mem/slicc/ast/IsValidPtrExprAST.py PRE-CREATION 
   src/mem/slicc/ast/MethodCallExprAST.py UNKNOWN 
   src/mem/slicc/ast/StaticCastAST.py UNKNOWN 
   src/mem/slicc/ast/TypeDeclAST.py UNKNOWN 
   src/mem/slicc/ast/__init__.py UNKNOWN 
   src/mem/slicc/parser.py UNKNOWN 
   src/mem/slicc/symbols/StateMachine.py UNKNOWN 
   src/mem/slicc/symbols/SymbolTable.py UNKNOWN 
 
 Diff: http://reviews.m5sim.org/r/358/diff
 
 
 Testing
 ---
 
 I have tested these changes using the MOESI_CMP_directory and MI protocols.
 
 
 Thanks,
 
 Nilay
 


___
m5-dev mailing list
m5-dev@m5sim.org
http://m5sim.org/mailman/listinfo/m5-dev


Re: [m5-dev] Review Request: Changing how CacheMemory interfaces with SLICC

2011-01-03 Thread Nilay Vaish

---
This is an automatically generated e-mail. To reply, visit:
http://reviews.m5sim.org/r/358/
---

(Updated 2011-01-03 14:18:13.801375)


Review request for Default.


Changes
---

I have made the changes that Steve suggested for converting an expression into 
a string.


Summary
---

The purpose of this patch is to change the way CacheMemory interfaces with 
coherence protocols. Currently, whenever a cache controller (defined in the 
protocol under consideration) needs to carry out any operation on a cache 
block, it looks up the tag hash map and figures out whether or not the block 
exists in the cache. If it does exist, the operation is carried out (which 
requires another lookup). Profiling of different protocols shows that multiple 
such lookups take place over a single clock cycle, and that the tag lookup 
takes anywhere from 10% to 20% of the simulation time. This patch aims to 
reduce that time. The CacheMemory class will now have a function that returns 
a pointer to a cache block entry instead of a reference (though the function 
that returns the reference has been retained for now). Functions have been 
introduced for setting/unsetting this pointer and checking its validity. 
Similar changes have been carried out for Transaction Buffer entries as well.

Note that changes are required to both SLICC and the protocol files. This 
patch changes only SLICC; committing it alone, I believe, will ___break___ 
all the protocols. I am working on patching the protocols as well. This patch 
is being posted to get feedback from other developers.


Diffs (updated)
-

  src/mem/protocol/RubySlicc_Types.sm UNKNOWN 
  src/mem/ruby/slicc_interface/AbstractCacheEntry.hh UNKNOWN 
  src/mem/ruby/slicc_interface/AbstractCacheEntry.cc UNKNOWN 
  src/mem/ruby/system/CacheMemory.hh UNKNOWN 
  src/mem/ruby/system/CacheMemory.cc UNKNOWN 
  src/mem/ruby/system/TBETable.hh UNKNOWN 
  src/mem/slicc/ast/ActionDeclAST.py UNKNOWN 
  src/mem/slicc/ast/FormalParamAST.py UNKNOWN 
  src/mem/slicc/ast/FuncCallExprAST.py UNKNOWN 
  src/mem/slicc/ast/InPortDeclAST.py UNKNOWN 
  src/mem/slicc/ast/MethodCallExprAST.py UNKNOWN 
  src/mem/slicc/ast/TypeDeclAST.py UNKNOWN 
  src/mem/slicc/ast/__init__.py UNKNOWN 
  src/mem/slicc/parser.py UNKNOWN 
  src/mem/slicc/symbols/StateMachine.py UNKNOWN 

Diff: http://reviews.m5sim.org/r/358/diff


Testing
---

I have tested these changes using the MOESI_CMP_directory and MI protocols.


Thanks,

Nilay



Re: [m5-dev] Review Request: Changing how CacheMemory interfaces with SLICC

2011-01-03 Thread Brad Beckmann

---
This is an automatically generated e-mail. To reply, visit:
http://reviews.m5sim.org/r/358/#review596
---


Hi Nilay,

First, I must say this is an impressive amount of work.  You definitely got a 
lot done over holiday break. :)

Overall, this set of patches is definitely close, but I want to see if we can 
take them a step forward.  Also I have a few suggestions that may make things 
easier.  Finally, I have a bunch of minor questions/suggestions on individual 
lines, but I’ll hold off on those until you can respond to my higher-level 
questions.

The main thing I would like to see improved is not having to differentiate 
between “entry” and “entry_ptr” in the .sm files.  Am I correct that the only 
functions in the .sm files that are passed an “entry_ptr” are “is_valid_ptr”, 
“getCacheEntry”, and “set_cache_entry”?  If so, it seems that all three 
functions are generated with unique python code, either in an AST file or 
StateMachine.py.  Therefore, could we just pass these functions “entry” and 
rely on the underneath python code to generate the correct references?  This 
would make things more readable, “is_valid_ptr()” becomes “is_valid”, and it 
doesn’t require the slicc programmer to understand which functions take an 
entry pointer versus the entry itself.  If we can’t make such a change, I worry 
about how much extra complexity this change pushes on the slicc programmer.

Also another suggestion to make things more readable, please replace the name 
L1IcacheMemory_entry with L1I_entry.  Do the same for L1D_entry and L2_entry.  
That will shorten many of your lines.

So am I correct that hammer’s simultaneous usage of valid L1 and L2 cache 
entries in certain transitions is the only reason that within all actions, the 
getCacheEntry calls take multiple cache entries?  If so, I think it would be 
fairly trivial to use a tbe entry as an intermediary between the L1 and L2 for 
those particular hammer transitions.  That way only one cache entry is valid at 
any particular time, and we can simply use the variable cache_entry in the 
actions.  That should clean things up a lot.

By the way, once you check in these patches, the MESI_CMP_directory protocol 
will be deprecated, correct?  If so, make sure you include a patch that removes 
it from the regression tester.

Brad



- Brad


On 2011-01-03 14:18:13, Nilay Vaish wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 http://reviews.m5sim.org/r/358/
 ---
 
 (Updated 2011-01-03 14:18:13)
 
 
 Review request for Default.
 
 
 Summary
 ---
 
 The purpose of this patch is to change the way CacheMemory interfaces with 
 coherence protocols. Currently, whenever a cache controller (defined in the 
 protocol under consideration) needs to carry out any operation on a cache 
 block, it looks up the tag hash map and figures out whether or not the block 
 exists in the cache. If it does exist, the operation is carried out (which 
 requires another lookup). Profiling of different protocols shows that 
 multiple such lookups take place over a single clock cycle, and that the tag 
 lookup takes anywhere from 10% to 20% of the simulation time. This patch 
 aims to reduce that time. The CacheMemory class will now have a function 
 that returns a pointer to a cache block entry instead of a reference (though 
 the function that returns the reference has been retained for now). 
 Functions have been introduced for setting/unsetting this pointer and 
 checking its validity. Similar changes have been carried out for Transaction 
 Buffer entries as well.
 
 Note that changes are required to both SLICC and the protocol files. This 
 patch changes only SLICC; committing it alone, I believe, will ___break___ 
 all the protocols. I am working on patching the protocols as well. This 
 patch is being posted to get feedback from other developers.
 
 
 Diffs
 -
 
   src/mem/protocol/RubySlicc_Types.sm UNKNOWN 
   src/mem/ruby/slicc_interface/AbstractCacheEntry.hh UNKNOWN 
   src/mem/ruby/slicc_interface/AbstractCacheEntry.cc UNKNOWN 
   src/mem/ruby/system/CacheMemory.hh UNKNOWN 
   src/mem/ruby/system/CacheMemory.cc UNKNOWN 
   src/mem/ruby/system/TBETable.hh UNKNOWN 
   src/mem/slicc/ast/ActionDeclAST.py UNKNOWN 
   src/mem/slicc/ast/FormalParamAST.py UNKNOWN 
   src/mem/slicc/ast/FuncCallExprAST.py UNKNOWN 
   src/mem/slicc/ast/InPortDeclAST.py UNKNOWN 
   src/mem/slicc/ast/MethodCallExprAST.py UNKNOWN 
   src/mem/slicc/ast/TypeDeclAST.py UNKNOWN 
   src/mem/slicc/ast/__init__.py UNKNOWN 
   src/mem/slicc/parser.py UNKNOWN 
   src/mem/slicc/symbols/StateMachine.py UNKNOWN 
 
 Diff: http://reviews.m5sim.org/r/358/diff
 
 
 Testing
 ---
 
 I have tested these 

Re: [m5-dev] Review Request: Changing how CacheMemory interfaces with SLICC

2010-12-31 Thread Nilay Vaish

---
This is an automatically generated e-mail. To reply, visit:
http://reviews.m5sim.org/r/358/
---

(Updated 2010-12-31 17:50:59.779517)


Review request for Default.


Changes
---

1. The lookup_ptr() function has been removed from the CacheMemory class. 
Instead, the lookup() function has been changed so that it returns a pointer 
to an AbstractCacheEntry instead of a reference.

2. The manner in which the declaration of a CacheEntry structure in the 
protocol file is detected has been changed. Earlier, detection was carried out 
using the name of the structure, i.e., the structure had to be named 'Entry' 
for the compiler to assume that it is the CacheEntry structure for that 
particular 
machine under consideration. Now, the check is made on the interface supplied 
with the structure declaration. It should be 'AbstractCacheEntry'. I think an 
assumption has been made that this would happen only once in the entire file. 
An error is raised if it happens more than once, though the error condition has 
not yet been tested.

3. While making changes to the MOESI Hammer protocol, I realized that the 
previously made changes to SLICC would not suffice. Earlier, the changes had 
been made under the assumption that at any given point of time, only one cache 
entry would be updated. But in the Hammer protocol, two cache entries can be 
processed in a single step. One of these would belong to L1 and the other to 
L2. I think the Hammer protocol could not have been supported with just one 
cache entry variable being passed to the trigger function. So now the number 
of cache entries passed to the trigger function, and subsequently to actions, 
is the same as the number of cache memories in the machine under 
consideration. This is two for MOESI_CMP_* and three for MOESI_hammer. Most 
of the changes made since the previous version of the diff are there to 
support this.


Summary
---

The purpose of this patch is to change the way CacheMemory interfaces with 
coherence protocols. Currently, whenever a cache controller (defined in the 
protocol under consideration) needs to carry out any operation on a cache 
block, it looks up the tag hash map and figures out whether or not the block 
exists in the cache. If it does exist, the operation is carried out (which 
requires another lookup). Profiling of different protocols shows that multiple 
such lookups take place over a single clock cycle, and that the tag lookup 
takes anywhere from 10% to 20% of the simulation time. This patch aims to 
reduce that time. The CacheMemory class will now have a function that returns 
a pointer to a cache block entry instead of a reference (though the function 
that returns the reference has been retained for now). Functions have been 
introduced for setting/unsetting this pointer and checking its validity. 
Similar changes have been carried out for Transaction Buffer entries as well.

Note that changes are required to both SLICC and the protocol files. This 
patch changes only SLICC; committing it alone, I believe, will ___break___ 
all the protocols. I am working on patching the protocols as well. This patch 
is being posted to get feedback from other developers.


Diffs (updated)
-

  src/mem/protocol/RubySlicc_Types.sm UNKNOWN 
  src/mem/ruby/slicc_interface/AbstractCacheEntry.hh UNKNOWN 
  src/mem/ruby/slicc_interface/AbstractCacheEntry.cc UNKNOWN 
  src/mem/ruby/system/CacheMemory.hh UNKNOWN 
  src/mem/ruby/system/CacheMemory.cc UNKNOWN 
  src/mem/ruby/system/TBETable.hh UNKNOWN 
  src/mem/slicc/ast/ActionDeclAST.py UNKNOWN 
  src/mem/slicc/ast/FormalParamAST.py UNKNOWN 
  src/mem/slicc/ast/FuncCallExprAST.py UNKNOWN 
  src/mem/slicc/ast/FuncDeclAST.py UNKNOWN 
  src/mem/slicc/ast/InPortDeclAST.py UNKNOWN 
  src/mem/slicc/ast/IsValidPtrExprAST.py PRE-CREATION 
  src/mem/slicc/ast/MethodCallExprAST.py UNKNOWN 
  src/mem/slicc/ast/StaticCastAST.py UNKNOWN 
  src/mem/slicc/ast/TypeDeclAST.py UNKNOWN 
  src/mem/slicc/ast/__init__.py UNKNOWN 
  src/mem/slicc/parser.py UNKNOWN 
  src/mem/slicc/symbols/StateMachine.py UNKNOWN 
  src/mem/slicc/symbols/SymbolTable.py UNKNOWN 

Diff: http://reviews.m5sim.org/r/358/diff


Testing
---

I have tested these changes using the MOESI_CMP_directory and MI protocols.


Thanks,

Nilay



Re: [m5-dev] Review Request: Changing how CacheMemory interfaces with SLICC

2010-12-24 Thread Nilay Vaish

---
This is an automatically generated e-mail. To reply, visit:
http://reviews.m5sim.org/r/358/
---

(Updated 2010-12-24 09:04:02.816498)


Review request for Default.


Changes
---

The functions is_valid_cache_entry() and is_valid_tbe() have been removed from 
the Controller class. Instead, these have been replaced with a single unified 
call, is_valid_ptr(), which takes the pointer variable as its argument. This 
new call is handled in the same fashion as check_allocate().

Functions changePermission() and getPermission() have been removed from the 
CacheMemory class.


Summary
---

The purpose of this patch is to change the way CacheMemory interfaces with 
coherence protocols. Currently, whenever a cache controller (defined in the 
protocol under consideration) needs to carry out any operation on a cache 
block, it looks up the tag hash map and figures out whether or not the block 
exists in the cache. If it does exist, the operation is carried out (which 
requires another lookup). Profiling of different protocols shows that multiple 
such lookups take place over a single clock cycle, and that the tag lookup 
takes anywhere from 10% to 20% of the simulation time. This patch aims to 
reduce that time. The CacheMemory class will now have a function that returns 
a pointer to a cache block entry instead of a reference (though the function 
that returns the reference has been retained for now). Functions have been 
introduced for setting/unsetting this pointer and checking its validity. 
Similar changes have been carried out for Transaction Buffer entries as well.

Note that changes are required to both SLICC and the protocol files. This 
patch changes only SLICC; committing it alone, I believe, will ___break___ 
all the protocols. I am working on patching the protocols as well. This patch 
is being posted to get feedback from other developers.


Diffs (updated)
-

  src/mem/protocol/RubySlicc_Types.sm UNKNOWN 
  src/mem/ruby/slicc_interface/AbstractCacheEntry.hh UNKNOWN 
  src/mem/ruby/slicc_interface/AbstractCacheEntry.cc UNKNOWN 
  src/mem/ruby/system/CacheMemory.hh UNKNOWN 
  src/mem/ruby/system/CacheMemory.cc UNKNOWN 
  src/mem/ruby/system/TBETable.hh UNKNOWN 
  src/mem/slicc/ast/ActionDeclAST.py UNKNOWN 
  src/mem/slicc/ast/FormalParamAST.py UNKNOWN 
  src/mem/slicc/ast/FuncCallExprAST.py UNKNOWN 
  src/mem/slicc/ast/FuncDeclAST.py UNKNOWN 
  src/mem/slicc/ast/InPortDeclAST.py UNKNOWN 
  src/mem/slicc/ast/IsValidPtrExprAST.py PRE-CREATION 
  src/mem/slicc/ast/MethodCallExprAST.py UNKNOWN 
  src/mem/slicc/ast/StaticCastAST.py UNKNOWN 
  src/mem/slicc/ast/TypeDeclAST.py UNKNOWN 
  src/mem/slicc/ast/__init__.py UNKNOWN 
  src/mem/slicc/parser.py UNKNOWN 
  src/mem/slicc/symbols/StateMachine.py UNKNOWN 

Diff: http://reviews.m5sim.org/r/358/diff


Testing (updated)
---

I have tested these changes using the MOESI_CMP_directory and MI protocols.


Thanks,

Nilay



Re: [m5-dev] Review Request: Changing how CacheMemory interfaces with SLICC

2010-12-22 Thread Brad Beckmann

---
This is an automatically generated e-mail. To reply, visit:
http://reviews.m5sim.org/r/358/#review570
---


Hi Nilay,

Could you please include the changes to the MOESI_CMP_directory protocol in 
this patch?  It is difficult to understand the ramifications of these changes 
without seeing their impact to the .sm files.  Also this is a very tricky patch 
and the holiday break is approaching, so please be patient.  I will try to get 
you my feedback as soon as I can.

Brad

- Brad


On 2010-12-22 14:21:09, Nilay Vaish wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 http://reviews.m5sim.org/r/358/
 ---
 
 (Updated 2010-12-22 14:21:09)
 
 
 Review request for Default.
 
 
 Summary
 ---
 
 The purpose of this patch is to change the way CacheMemory interfaces with 
 coherence protocols. Currently, whenever a cache controller (defined in the 
 protocol under consideration) needs to carry out any operation on a cache 
 block, it looks up the tag hash map and figures out whether or not the block 
 exists in the cache. If it does exist, the operation is carried out (which 
 requires another lookup). Profiling of different protocols shows that 
 multiple such lookups take place over a single clock cycle, and that the tag 
 lookup takes anywhere from 10% to 20% of the simulation time. This patch 
 aims to reduce that time. The CacheMemory class will now have a function 
 that returns a pointer to a cache block entry instead of a reference (though 
 the function that returns the reference has been retained for now). 
 Functions have been introduced for setting/unsetting this pointer and 
 checking its validity. Similar changes have been carried out for Transaction 
 Buffer entries as well.
 
 Note that changes are required to both SLICC and the protocol files. This 
 patch changes only SLICC; committing it alone, I believe, will ___break___ 
 all the protocols. I am working on patching the protocols as well. This 
 patch is being posted to get feedback from other developers.
 
 
 Diffs
 -
 
   src/mem/slicc/symbols/StateMachine.py 998b217dcae7 
   src/mem/slicc/parser.py 998b217dcae7 
   src/mem/slicc/symbols/Func.py 998b217dcae7 
   src/mem/slicc/ast/StaticCastAST.py 998b217dcae7 
   src/mem/slicc/ast/TypeDeclAST.py 998b217dcae7 
   src/mem/slicc/ast/MethodCallExprAST.py 998b217dcae7 
   src/mem/slicc/ast/InPortDeclAST.py 998b217dcae7 
   src/mem/slicc/ast/FuncDeclAST.py 998b217dcae7 
   src/mem/slicc/ast/FuncCallExprAST.py 998b217dcae7 
   src/mem/slicc/ast/FormalParamAST.py 998b217dcae7 
   src/mem/protocol/RubySlicc_Types.sm 998b217dcae7 
   src/mem/ruby/slicc_interface/AbstractCacheEntry.hh 998b217dcae7 
   src/mem/ruby/slicc_interface/AbstractCacheEntry.cc 998b217dcae7 
   src/mem/ruby/system/CacheMemory.hh 998b217dcae7 
   src/mem/ruby/system/CacheMemory.cc 998b217dcae7 
   src/mem/ruby/system/TBETable.hh 998b217dcae7 
   src/mem/slicc/ast/ActionDeclAST.py 998b217dcae7 
 
 Diff: http://reviews.m5sim.org/r/358/diff
 
 
 Testing
 ---
 
 I have tested these changes using the MOESI_CMP_directory protocol.
 
 
 Thanks,
 
 Nilay
 




Re: [m5-dev] Review Request: Changing how CacheMemory interfaces with SLICC

2010-12-22 Thread Nilay Vaish


 On 2010-12-22 14:35:28, Brad Beckmann wrote:
  Hi Nilay,

Could you please include the changes to the MOESI_CMP_directory protocol in 
this patch?  It is difficult to understand the ramifications of these changes 
without seeing their impact to the .sm files.  Also this is a very tricky patch 
and the holiday break is approaching, so please be patient.  I will try to get 
you my feedback as soon as I can.

Brad

I am trying to add the MOESI_CMP_directory protocol patch separately, but 
apparently Review Board is not able to apply that patch cleanly.
Take your time to review all the changes.

--
Nilay


- Nilay


---
This is an automatically generated e-mail. To reply, visit:
http://reviews.m5sim.org/r/358/#review570
---


On 2010-12-22 14:21:09, Nilay Vaish wrote:
 
 ---
 This is an automatically generated e-mail. To reply, visit:
 http://reviews.m5sim.org/r/358/
 ---
 
 (Updated 2010-12-22 14:21:09)
 
 
 Review request for Default.
 
 
 Summary
 ---
 
 The purpose of this patch is to change the way CacheMemory interfaces with 
 coherence protocols. Currently, whenever a cache controller (defined in the 
 protocol under consideration) needs to carry out any operation on a cache 
 block, it looks up the tag hash map and figures out whether or not the block 
 exists in the cache. If it does exist, the operation is carried out (which 
 requires another lookup). Profiling of different protocols shows that 
 multiple such lookups take place over a single clock cycle, and that the tag 
 lookup takes anywhere from 10% to 20% of the simulation time. This patch 
 aims to reduce that time. The CacheMemory class will now have a function 
 that returns a pointer to a cache block entry instead of a reference (though 
 the function that returns the reference has been retained for now). 
 Functions have been introduced for setting/unsetting this pointer and 
 checking its validity. Similar changes have been carried out for Transaction 
 Buffer entries as well.
 
 Note that changes are required to both SLICC and the protocol files. This 
 patch changes only SLICC; committing it alone, I believe, will ___break___ 
 all the protocols. I am working on patching the protocols as well. This 
 patch is being posted to get feedback from other developers.
 
 
 Diffs
 -
 
   src/mem/slicc/symbols/StateMachine.py 998b217dcae7 
   src/mem/slicc/parser.py 998b217dcae7 
   src/mem/slicc/symbols/Func.py 998b217dcae7 
   src/mem/slicc/ast/StaticCastAST.py 998b217dcae7 
   src/mem/slicc/ast/TypeDeclAST.py 998b217dcae7 
   src/mem/slicc/ast/MethodCallExprAST.py 998b217dcae7 
   src/mem/slicc/ast/InPortDeclAST.py 998b217dcae7 
   src/mem/slicc/ast/FuncDeclAST.py 998b217dcae7 
   src/mem/slicc/ast/FuncCallExprAST.py 998b217dcae7 
   src/mem/slicc/ast/FormalParamAST.py 998b217dcae7 
   src/mem/protocol/RubySlicc_Types.sm 998b217dcae7 
   src/mem/ruby/slicc_interface/AbstractCacheEntry.hh 998b217dcae7 
   src/mem/ruby/slicc_interface/AbstractCacheEntry.cc 998b217dcae7 
   src/mem/ruby/system/CacheMemory.hh 998b217dcae7 
   src/mem/ruby/system/CacheMemory.cc 998b217dcae7 
   src/mem/ruby/system/TBETable.hh 998b217dcae7 
   src/mem/slicc/ast/ActionDeclAST.py 998b217dcae7 
 
 Diff: http://reviews.m5sim.org/r/358/diff
 
 
 Testing
 ---
 
 I have tested these changes using the MOESI_CMP_directory protocol.
 
 
 Thanks,
 
 Nilay
 

