Hello Andreas,
Thanks a lot for the reply. Yes, it definitely helped.
I am posting what I found (it might save some time for anyone looking in the
same direction).
In the cache_impl.hh file, in the function recvTimingReq:
- Whether the access is a hit or a miss is checked by "bool satisfied =
access(pkt, blk, lat, writebacks);"
- Then, in case of a miss with no matching MSHR (see the "else { // no MSHR"
branch), there is a call to allocateMissBuffer, which internally calls
allocateBufferInternal. In that path isFull is called, which checks
"allocated > numEntries - numReserve" (a toy illustration of this check
follows below).
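Just to make the arithmetic of that check concrete, here is a toy standalone
model of it (plain Python, not gem5 code; the sizes below are made up):

# Toy model of the MSHRQueue occupancy check described above -- not the gem5
# source, only to illustrate the "allocated > numEntries - numReserve" test.
def is_full(allocated, num_entries, num_reserve):
    return allocated > num_entries - num_reserve

num_entries, num_reserve = 20, 4      # made-up sizes
for allocated in range(num_entries + 2):
    if is_full(allocated, num_entries, num_reserve):
        print("queue reports full once allocated =", allocated)
        break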
I have one question: is there a configuration-level way (i.e., by changing
only the Python side, such as Caches.py) to give each core a different number
of MSHRs, say the L1 cache of core 0 has 10 MSHRs while the L1 cache of core
1 has 20 (a sketch of what I mean follows below)? If not, we could do it in
the isFull function with case statements.
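Concretely, this is roughly what I would like to be able to write on the
configuration side (a hypothetical sketch only; whether something like this
is actually supported is exactly what I am asking. I am assuming the stock
L1Cache class from configs/common/Caches.py, a script in which system.cpu has
already been created, and made-up MSHR counts):

# Hypothetical sketch of per-core MSHR counts -- assumes the stock L1Cache
# class from configs/common/Caches.py and that system.cpu already exists; the
# port/crossbar hook-up done by the standard scripts is omitted here.
from Caches import L1Cache

per_core_mshrs = {0: 10, 1: 20}       # made-up example values, one per core

for i, cpu in enumerate(system.cpu):
    icache = L1Cache(size='32kB')
    dcache = L1Cache(size='32kB', mshrs=per_core_mshrs[i])  # per-core value
    # helper used by the standard config scripts to attach private L1s
    cpu.addPrivateSplitL1Caches(icache, dcache)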
On Tue, Jul 14, 2015 at 2:44 PM, Andreas Hansson <[email protected]>
wrote:
> Hi Davesh,
>
>
>    1. promoteDeferredTargets is a “trick” used in the classic memory
> system where MSHR targets get put “on hold” if we already have a target
> outstanding that will e.g. give us a block in modified or owned state. Once
> the first chunk of targets is resolved, and all caches agree on the new
> state, we then continue with the deferred targets (after promoting them).
> Thus, the deferred targets are serialised with respect to the normal
> targets, simplifying the handling of any transient states.
> 2. When the queue is full, we block the cpu-side port. This is quite
> crude, as there could be hits that would have been fine. However, at the
> moment we take the easy way out. Have a look at setBlocked and
> recvTimingReq.
>
> I hope that helps.
>
> Andreas
>
> From: gem5-users <[email protected]> on behalf of Davesh
> Shingari <[email protected]>
> Reply-To: gem5 users mailing list <[email protected]>
> Date: Tuesday, 14 July 2015 18:59
> To: gem5 users mailing list <[email protected]>
> Subject: [gem5-users] MSHR Queue Full Handling
>
> Hi
>
> In cache_impl.hh, I can see the following:
>
> MSHRQueue *mq = mshr->queue;
> bool wasFull = mq->isFull();
>
> But wasFull is used only in the following code:
>
> if (mshr->promoteDeferredTargets()) {
>     // avoid later read getting stale data while write miss is
>     // outstanding.. see comment in timingAccess()
>     if (blk) {
>         blk->status &= ~BlkReadable;
>     }
>     mq = mshr->queue;
>     mq->markPending(mshr);
>     requestMemSideBus((RequestCause)mq->index, clockEdge() +
>                       pkt->lastWordDelay);
> } else {
>     mq->deallocate(mshr);
>     if (wasFull && !mq->isFull()) {
>         clearBlocked((BlockedCause)mq->index);
>     }
>
> I have 2 questions:
>
>    1. What does the promoteDeferredTargets function do?
>    2. What happens when the queue is full? For example, in Caches.py I can
> see that mshrs for the L2 is 20. What happens when the number of misses
> exceeds 20? Can someone point me to the handling code?
>
>
>
> --
> Have a great day!
>
> Thanks and Warm Regards
> Davesh Shingari
> Master's in Computer Engineering [EE]
> Arizona State University
>
> [email protected]
--
Have a great day!
Thanks and Warm Regards
Davesh Shingari
Master's in Computer Engineering [EE]
Arizona State University
[email protected]
_______________________________________________
gem5-users mailing list
[email protected]
http://m5sim.org/cgi-bin/mailman/listinfo/gem5-users