The O3 CPU counts the number of requests it has submitted and limits
it to the configured maximum. I'm not sure how you got the trace you
posted, but in our model it is the CPU's responsibility to limit the
number of requests it sends to the cache.
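
As a rough illustration, a counter-based limit of that kind could look
something like the sketch below. This is not the actual gem5 O3 code;
the class and member names are made up purely for illustration.

    // Hypothetical counter-based limit on outstanding cache requests.
    // None of these names are real gem5 identifiers.
    class PendingRequestLimiter {
      public:
        explicit PendingRequestLimiter(unsigned max) : maxOutstanding(max) {}

        // Checked before the CPU tries to send a request to the dcache.
        bool canIssue() const { return outstanding < maxOutstanding; }

        // Count a request when it is actually sent out.
        void onIssue() { ++outstanding; }

        // Release a slot when the matching response comes back.
        void onResponse() { --outstanding; }

      private:
        unsigned outstanding = 0;       // sent but not yet answered
        const unsigned maxOutstanding;  // configured limit
    };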

Ali 

On 12.06.2012 15:40, Amin Farmahini wrote:

> I only modified the code on the CPU side and not the cache. And
> CPU::DcachePort::recvTiming() is called when the L1 cache has made a
> response packet ready. I would guess the same behavior can be seen in
> the O3 CPU, because what I am doing is similar to the LSQ
> implementation in O3.
> I totally agree that this should be decided by the L1 cache bandwidth,
> but I don't know if this bandwidth is modeled in gem5.
> BTW, I am using the classic model, not Ruby.
> 
> Thanks,
> Amin
> 
> On Tue, Jun 12, 2012 at 3:24 PM, Nilay <ni...@cs.wisc.edu> wrote:
> 
>> ... physically such a design would not make any sense (maybe only
>> for now). But this should be decided by the bandwidth of the L1
>> cache and of the link between the CPU and the L1 cache.
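
On the bandwidth question in the quoted mail: one way to picture a
cache-side limit is a port that accepts at most one request per fixed
interval and makes the sender retry otherwise. The sketch below is
purely illustrative and not actual gem5 code; all names in it are
made up.

    #include <cstdint>

    // Illustrative throttle: accept at most one request per 'interval'
    // ticks, mimicking a fixed-bandwidth cache port. Not a gem5 class.
    class ThrottledPort {
      public:
        explicit ThrottledPort(uint64_t intervalTicks)
            : interval(intervalTicks) {}

        // Returns true if the request is accepted; false means the
        // sender must retry later (much like a timing port refusing a
        // request).
        bool tryAccept(uint64_t now) {
            if (now < nextFree)
                return false;            // port busy this tick
            nextFree = now + interval;   // reserve the next slot
            return true;
        }

      private:
        const uint64_t interval;   // ticks between accepted requests
        uint64_t nextFree = 0;     // earliest tick the port is free
    };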

 
