(There was a typo in previous mail)

Hello Nikos and Everyone,

In src/mem/cache/cache.cc, handleTimingReqMiss() only checks whether there
is an existing MSHR entry corresponding to the miss request, and then calls
BaseCache::handleTimingReqMiss(pkt, mshr, blk, forward_time, request_time).

In BaseCache::handleTimingReqMiss(pkt, mshr, blk, forward_time,
request_time), if this is the first time the cache line misses, we create an
MSHR entry by calling allocateMissBuffer(), which is defined in base.hh.
This function creates a new MSHR entry and the first target of that entry,
so I have modified allocateMissBuffer() in base.hh to:

    MSHR *allocateMissBuffer(PacketPtr pkt, Tick time, bool sched_send = true)
    {
        MSHR *mshr = mshrQueue.allocate(pkt->getBlockAddr(blkSize), blkSize,
                                        pkt, time, order++,
                                        allocOnFill(pkt->cmd));

        if (mshrQueue.isFull()) {
            setBlocked((BlockedCause)MSHRQueue_MSHRs);
        }

        // highlighted part: block as soon as the first target is allocated
        if (mshr->getNumTargets() == numTarget) {
            //cout << "Blocked: " << name() << endl;
            noTargetMSHR = mshr;
            setBlocked(Blocked_NoTargets);
        }

        if (sched_send) {
            // schedule the send
            schedMemSideSendEvent(time);
        }

        return mshr;
    }

The highlighted part blocks the cache from receiving any request from the
LSQ until the MSHR entry is cleared through recvTimingResp(). Note that
mshr->getNumTargets() will always be one here, as this is the very first
time the MSHR entry is created. But I do not think this is correct, as the
simulation does not terminate (although it does when the hello binary is
tested).
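For intuition, here is a toy Python model (not gem5 code; the class and
method names below are made up for illustration) of the behavior the
modification is trying to achieve: every first miss blocks the cache, and
each response must unblock it again before the LSQ can make progress.

```python
class BlockingCacheModel:
    """Toy model of a blocking cache with one target per MSHR.

    This is a simplification, not gem5 code: a single outstanding miss
    and a blocked flag stand in for the real noTargetMSHR /
    setBlocked / clearBlocked machinery.
    """

    def __init__(self):
        self.blocked = False
        self.outstanding = None

    def recv_timing_req(self, addr):
        # While blocked, the cache rejects requests; the LSQ must retry.
        if self.blocked:
            return False
        # allocateMissBuffer(): new MSHR with its first (and only)
        # target; with numTarget == 1 the added check fires immediately.
        self.outstanding = addr
        self.blocked = True
        return True

    def recv_timing_resp(self):
        # The response clears the MSHR and unblocks the cache, as
        # clearBlocked(Blocked_NoTargets) would in the real code.
        self.outstanding = None
        self.blocked = False


cache = BlockingCacheModel()
assert cache.recv_timing_req(0x40)       # first miss accepted, cache blocks
assert not cache.recv_timing_req(0x80)   # rejected until the response comes
cache.recv_timing_resp()
assert cache.recv_timing_req(0x80)       # accepted again after unblocking
```

If the unblock step never runs (for example, if the response path never
clears Blocked_NoTargets for this MSHR), the model above rejects requests
forever, which would be consistent with a simulation that stops making
progress.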

Now, in baseline gem5 at the current commit, if we set tgts_per_mshr in
configs/common/Caches.py for the dcache to 1 and compare the stats against
tgts_per_mshr set to 2, we can clearly see that tgts_per_mshr = 2 spends
more cycles in the stat system.cpu.dcache.blocked::no_targets than one
target per MSHR does. This is because we never block the requests coming to
the dcache when we have only one target per MSHR.
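That stats difference can be reproduced with a small Python sketch (again a
simplification, not gem5 code): in the stock path, the target-count check
runs only after a second target has been allocated to an existing MSHR, so
with tgts_per_mshr == 1 the equality getNumTargets() == numTarget compares
2 against 1 and never fires.

```python
class MSHR:
    """Minimal stand-in for gem5's MSHR: just a list of targets."""

    def __init__(self):
        self.targets = []

    def allocate_target(self, pkt):
        self.targets.append(pkt)

    def num_targets(self):
        return len(self.targets)


def handle_timing_req_miss(mshr, pkt, num_target):
    """Sketch of the stock check: allocate the target first, then
    compare the count against tgts_per_mshr (numTarget)."""
    mshr.allocate_target(pkt)
    return mshr.num_targets() == num_target  # True -> Blocked_NoTargets


# First miss to a line: allocateMissBuffer() creates the MSHR with its
# first target and performs no target-count check on that path.
mshr = MSHR()
mshr.allocate_target("load A")

# Second miss to the same line: with tgts_per_mshr == 1 the count is
# already 2, so the equality never holds, the cache never blocks, and
# blocked::no_targets stays at zero.
assert not handle_timing_req_miss(mshr, "load A again", num_target=1)

# With tgts_per_mshr == 2 the same second miss does trigger the block,
# which is why that configuration shows more no_targets cycles.
mshr2 = MSHR()
mshr2.allocate_target("load B")
assert handle_timing_req_miss(mshr2, "load B again", num_target=2)
```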

On Mon, Mar 25, 2019 at 5:43 PM Abhishek Singh <
abhishek.singh199...@gmail.com> wrote:

>
> On Mon, Mar 25, 2019 at 5:09 PM Nikos Nikoleris <nikos.nikole...@arm.com>
> wrote:
>
>> Abhishek,
>>
>> In that case the code you should be looking at is in
>> src/mem/cache/cache.cc in the function handleTimingReqMiss()
>>
>> mshr->allocateTarget(pkt, forward_time, order++,
>>                       allocOnFill(pkt->cmd));
>> if (mshr->getNumTargets() == numTarget) {
>>      noTargetMSHR = mshr;
>>      setBlocked(Blocked_NoTargets);
>> }
>>
>> I'm not sure why you don't see any stalls due to no_targets. You might
>> want to check how we measure the relevant stats and also make sure that
>> the simulation you run will trigger this.
>>
>> Nikos
>>
>>
>> On 25/03/2019 20:19, Abhishek Singh wrote:
>> > I want to have just one target per cache line in the MSHR queue, i.e.,
>> > one target per MSHR entry, but if I set the parameter tgts_per_mshr to
>> > 1, I get zero blocked cycles, i.e., there is no blocking due to full
>> > targets.
>> >
>> > If we look at the allocateMissBuffer calls in base.cc and base.hh: the
>> > first time we allocate an entry for a miss, we create an MSHR entry and
>> > a target for it, and do not check the tgts_per_mshr parameter. The
>> > second time a miss occurs to the same cache line, we create a target
>> > for it and then check tgts_per_mshr to block future requests.
>> >
>> > The quicker way to see whether tgts_per_mshr is working when set to 1
>> > is to check the number for system.cpu.dcache.blocked::no_targets in
>> > m5out/stats.txt.
>> >
>> >
>> > Best regards,
>> >
>> > Abhishek
>> >
>> >
>> >
>> > On Mon, Mar 25, 2019 at 4:05 PM Nikos Nikoleris <
>> nikos.nikole...@arm.com
>> > <mailto:nikos.nikole...@arm.com>> wrote:
>> >
>> >     Hi Abhishek,
>> >
>> >     A single MSHR can keep track of more than one request for a given
>> >     cache line. If you set tgts_per_mshr to 1, then the MSHR will only
>> >     be able to keep track of a single request for any given cache line.
>> >     But it can still service requests to other cache lines by
>> >     allocating more MSHRs.
>> >
>> >     If you want the cache to block on a single request you will need to
>> >     limit both the number of MSHRs and tgts_per_mshr to 1.
>> >
>> >     I hope this helps.
>> >
>> >     Nikos
>> >
>> >     On 25/03/2019 19:03, Abhishek Singh wrote:
>> >      > Hello Everyone,
>> >      >
>> >      > I am trying to simulate a D-cache with one target per MSHR. I
>> >      > tried changing the parameter "tgts_per_mshr" defined in
>> >      > "configs/common/Caches.py" to 1, but it does not work.
>> >      >
>> >      > This is because when we allocate a target for an existing MSHR,
>> >      > we check the "tgts_per_mshr" parameter only after allocating the
>> >      > second target, and then block the requests coming from the LSQ.
>> >      >
>> >      > I tried copying the blocking-target code into the
>> >      > allocateMissBuffer function defined in "src/mem/cache/base.hh" as:
>> >      >
>> >      >
>> >      > BEFORE:
>> >      >
>> >      > MSHR *allocateMissBuffer(PacketPtr pkt, Tick time,
>> >      >                          bool sched_send = true)
>> >      > {
>> >      >     MSHR *mshr = mshrQueue.allocate(pkt->getBlockAddr(blkSize),
>> >      >                                     blkSize, pkt, time, order++,
>> >      >                                     allocOnFill(pkt->cmd));
>> >      >
>> >      >     if (mshrQueue.isFull()) {
>> >      >         setBlocked((BlockedCause)MSHRQueue_MSHRs);
>> >      >     }
>> >      >
>> >      >     if (sched_send) {
>> >      >         // schedule the send
>> >      >         schedMemSideSendEvent(time);
>> >      >     }
>> >      >
>> >      >     return mshr;
>> >      > }
>> >      >
>> >      >
>> >      >
>> >      > AFTER:
>> >      >
>> >      > MSHR *allocateMissBuffer(PacketPtr pkt, Tick time,
>> >      >                          bool sched_send = true)
>> >      > {
>> >      >     MSHR *mshr = mshrQueue.allocate(pkt->getBlockAddr(blkSize),
>> >      >                                     blkSize, pkt, time, order++,
>> >      >                                     allocOnFill(pkt->cmd));
>> >      >
>> >      >     if (mshrQueue.isFull()) {
>> >      >         setBlocked((BlockedCause)MSHRQueue_MSHRs);
>> >      >     }
>> >      >
>> >      >     // added: block as soon as the first target is allocated
>> >      >     if (mshr->getNumTargets() == numTarget) {
>> >      >         //cout << "Blocked: " << name() << endl;
>> >      >         noTargetMSHR = mshr;
>> >      >         setBlocked(Blocked_NoTargets);
>> >      >     }
>> >      >
>> >      >     if (sched_send) {
>> >      >         // schedule the send
>> >      >         schedMemSideSendEvent(time);
>> >      >     }
>> >      >
>> >      >     return mshr;
>> >      > }
>> >      >
>> >      >
>> >      > I also changed the "tgts_per_mshr" parameter defined in
>> >      > "configs/common/Caches.py" to 1.
>> >      >
>> >      > But the simulation goes into an infinite loop and does not end.
>> >      >
>> >      > Does anyone know a technique to create blocking caches with one
>> >      > target per MSHR entry?
>> >      >
>> >      >
>> >      >
>> >      > Best regards,
>> >      >
>> >      > Abhishek
>> >      >
>> >     IMPORTANT NOTICE: The contents of this email and any attachments are
>> >     confidential and may also be privileged. If you are not the intended
>> >     recipient, please notify the sender immediately and do not disclose
>> >     the contents to any other person, use it for any purpose, or store
>> >     or copy the information in any medium. Thank you.
>> >     _______________________________________________
>> >     gem5-users mailing list
>> >     gem5-users@gem5.org <mailto:gem5-users@gem5.org>
>> >     http://m5sim.org/cgi-bin/mailman/listinfo/gem5-users
>> >
>
>