Abhishek,

In that case, the code you should be looking at is in
src/mem/cache/cache.cc, in the function handleTimingReqMiss():

mshr->allocateTarget(pkt, forward_time, order++,
                     allocOnFill(pkt->cmd));
if (mshr->getNumTargets() == numTarget) {
    noTargetMSHR = mshr;
    setBlocked(Blocked_NoTargets);
}

I'm not sure why you don't see any stalls due to no_targets. You might
want to check how we measure the relevant stats and also make sure that
the simulation you run will trigger this.
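
If it helps, here is a minimal sketch of a fully blocking data cache in the
config scripts, assuming the standard Cache SimObject parameters (this is the
mshrs = 1 plus tgts_per_mshr = 1 combination from my previous mail below; the
class name and the size/latency values are just placeholders):

from m5.objects import Cache

# A data cache that blocks on any outstanding miss: a single MSHR with a
# single target per MSHR entry.
class BlockingL1DCache(Cache):
    size = '32kB'
    assoc = 2
    tag_latency = 2
    data_latency = 2
    response_latency = 2
    mshrs = 1
    tgts_per_mshr = 1

With a cache like this you can then compare blocked::no_mshrs and
blocked::no_targets in m5out/stats.txt to see which condition actually causes
the cache to block.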

Nikos


On 25/03/2019 20:19, Abhishek Singh wrote:
> I want to have just one target per cache line in the MSHR queue, i.e., one
> target per MSHR entry. But if I set the parameter tgts_per_mshr to 1, the
> number of blocked cycles is zero, i.e., there is no blocking due to full
> targets.
>
> If we look at the allocateMissBuffer code in base.cc and base.hh, the first
> time we allocate an entry for a miss we create an MSHR entry and a target
> for it without checking the tgts_per_mshr parameter. Only when a second miss
> occurs to the same cache line do we create another target and then check
> tgts_per_mshr to block future requests.
>
> A quicker way to check whether tgts_per_mshr is working when set to 1 is to
> look at system.cpu.dcache.blocked::no_targets in m5out/stats.txt.
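>
> For instance, a minimal sketch of pulling that counter (and the matching
> blocked_cycles stat, assuming the default classic-cache stat names) out of
> stats.txt:
>
> with open('m5out/stats.txt') as f:
>     for line in f:
>         # blocked::no_targets counts blocking events;
>         # blocked_cycles::no_targets counts the cycles spent blocked
>         if 'dcache.blocked' in line and 'no_targets' in line:
>             print(line.rstrip())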
>
>
> Best regards,
>
> Abhishek
>
>
>
> On Mon, Mar 25, 2019 at 4:05 PM Nikos Nikoleris <nikos.nikole...@arm.com>
> wrote:
>
>     Hi Abhishek,
>
>     A single MSHR can keep track of more than one request for a given cache
>     line. If you set tgts_per_mshr to 1, then the MSHR will only be able to
>     keep track of a single request for any given cache line. But it can
>     still service requests to other cache lines by allocating more MSHRs.
>
>     If you want the cache to block on a single outstanding request, you will
>     need to limit both the number of MSHRs (mshrs) and tgts_per_mshr to 1.
>
>     I hope this helps.
>
>     Nikos
>
>     On 25/03/2019 19:03, Abhishek Singh wrote:
>      > Hello Everyone,
>      >
>      > I am trying to simulate a D-cache with one target per MSHR. I tried
>      > changing the parameter "tgts_per_mshr" defined in
>      > "configs/common/Caches.py" to 1, but it does not work.
>      >
>      > This is because when we allocate a target for an existing MSHR, we only
>      > check the "tgts_per_mshr" parameter after allocating the second target,
>      > and only then block the requests coming from the LSQ.
>      >
>      > I tried copying the blocking-on-targets code into the allocateMissBuffer
>      > function declared and defined in "src/mem/cache/base.hh", as shown below:
>      >
>      >
>      > BEFORE:
>      >
>      > MSHR *allocateMissBuffer(PacketPtr pkt, Tick time, bool sched_send = true)
>      > {
>      >     MSHR *mshr = mshrQueue.allocate(pkt->getBlockAddr(blkSize), blkSize,
>      >                                     pkt, time, order++,
>      >                                     allocOnFill(pkt->cmd));
>      >
>      >     if (mshrQueue.isFull()) {
>      >         setBlocked((BlockedCause)MSHRQueue_MSHRs);
>      >     }
>      >
>      >     if (sched_send) {
>      >         // schedule the send
>      >         schedMemSideSendEvent(time);
>      >     }
>      >
>      >     return mshr;
>      > }
>      >
>      > AFTER:
>      >
>      > MSHR *allocateMissBuffer(PacketPtr pkt, Tick time, bool sched_send = true)
>      > {
>      >     MSHR *mshr = mshrQueue.allocate(pkt->getBlockAddr(blkSize), blkSize,
>      >                                     pkt, time, order++,
>      >                                     allocOnFill(pkt->cmd));
>      >
>      >     if (mshrQueue.isFull()) {
>      >         setBlocked((BlockedCause)MSHRQueue_MSHRs);
>      >     }
>      >
>      >     // added: block as soon as this MSHR has reached its target limit
>      >     if (mshr->getNumTargets() == numTarget) {
>      >         //cout << "Blocked: " << name() << endl;
>      >         noTargetMSHR = mshr;
>      >         setBlocked(Blocked_NoTargets);
>      >     }
>      >
>      >     if (sched_send) {
>      >         // schedule the send
>      >         schedMemSideSendEvent(time);
>      >     }
>      >
>      >     return mshr;
>      > }
>      >
>      >
>      > I also changed "tgts_per_mshr" in "configs/common/Caches.py" to 1.
>      >
>      > But the simulation then goes into an infinite loop and never finishes.
>      >
>      > Does anyone know how to create blocking caches with one target per
>      > MSHR entry?
>      >
>      >
>      >
>      > Best regards,
>      >
>      > Abhishek
>      >