Hi Prathap

I have one doubt, though.
Even if we statically change the number of MSHRs in Caches.py (for all
cores) or in CacheConfig.py (for individual cores), how do we confirm the
updated MSHR value? When I look at config.ini, I see the following:

[system.cpu0.dcache]
demand_mshr_reserve=1
mshrs=6
tgts_per_mshr=8

[system.cpu0.icache]
demand_mshr_reserve=1
mshrs=2
tgts_per_mshr=8

But in Caches.py, the configuration is:

class L1Cache(BaseCache):
    assoc = 2
    hit_latency = 2
    response_latency = 2
    mshrs = 4
    tgts_per_mshr = 20
    is_top_level = True


So where do those values come from?
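
One way I can think of to pin this down is to set mshrs explicitly at the
point where CacheConfig.py instantiates the per-core L1 caches and then
re-check config.ini. A rough sketch along the lines of the stock
configs/common/CacheConfig.py (the exact surrounding code may differ in your
tree, and the hard-coded value is only for illustration):

# configs/common/CacheConfig.py (sketch)
from Caches import L1Cache

def config_cache(options, system):
    for i in range(options.num_cpus):
        # Per-core L1s; the class defaults come from Caches.py.
        icache = L1Cache(size=options.l1i_size, assoc=options.l1i_assoc)
        dcache = L1Cache(size=options.l1d_size, assoc=options.l1d_assoc)

        # Override mshrs explicitly so there is no ambiguity about which
        # value is in effect; config.ini should then report this number.
        icache.mshrs = 4   # illustrative value only
        dcache.mshrs = 4   # illustrative value only

        system.cpu[i].addPrivateSplitL1Caches(icache, dcache)

Also, if I read CacheConfig.py correctly, the cache classes it instantiates
depend on options.cpu_type (for example, the arm_detailed configuration takes
its L1 classes from O3_ARM_v7a.py rather than Caches.py), so could the values
in config.ini be coming from one of those other cache classes?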

On Tue, Jul 21, 2015 at 1:46 PM, Prathap Kolakkampadath <[email protected]> wrote:

> Hello Davesh,
>
> I did this by manipulating the isFull function as you have rightly pointed
> out.
> Thanks for the reply.
>
> Regards,
> Prathap
>
> On Tue, Jul 21, 2015 at 2:20 PM, Davesh Shingari <[email protected]> wrote:
>
>> Hi
>>
>> I think you should look at the isFull function, which checks whether the
>> MSHR queue is full. There you can check whether the request is a miss and
>> allocate the size of the MSHR queue per core dynamically.
>>



-- 
Have a great day!

Thanks and Warm Regards
Davesh Shingari
Master's in Computer Engineering [EE]
Arizona State University

[email protected]
_______________________________________________
gem5-users mailing list
[email protected]
http://m5sim.org/cgi-bin/mailman/listinfo/gem5-users
