OK, I have found something useful in the gem5 online doc, 
http://www.m5sim.org/SLICC#Stalling.2FRecycling.2FWaiting_input_ports, where 
this is explained in the Special Functions section. When an event arrives 
for a cache block that is in a transient state, the cache controller puts 
the request at the back of the mandatory queue, which frees the port for 
other, unrelated requests and improves performance. However, I notice that 
the doc says this implementation is unrealistic. What I want to ask is how 
this is implemented in a real hardware cache controller. Can anyone shed 
more light on this?

Yuan

On 16 Aug, 2013, at 12:33 PM, Yuan Yao <[email protected]> wrote:

> Hi, all,
> 
> I ran some benchmarks with Ruby and the O3CPU. Looking into the debug trace 
> file, I find many lines saying "[Version 0, L1Cache, mandatoryQueue_in]: 
> Recycling." After grepping for the recycle() function in the src/mem folder, 
> I find it is invoked in an action in the SLICC files, e.g. the code snippet 
> below from MOESI_CMP_directory-L1cache.sm.
> 
>  action(zz_recycleMandatoryQueue, "\z", desc="Send the head of the mandatory 
> queue to the back of the queue.") {
>    mandatoryQueue_in.recycle();
>  }
> 
> My question is: why do we need to put the request at the head of the 
> mandatory queue to the back of the queue? Any help would be highly appreciated.
> 
> 
> Yuan

_______________________________________________
gem5-users mailing list
[email protected]
http://m5sim.org/cgi-bin/mailman/listinfo/gem5-users