Hi Nikos,

Thanks for your response.

For a packet that misses in L3, I have observed that the time its
response packet takes to reach the cpu_side port of L3 from the mem_side
port of L3 (after the data has been fetched from main memory and is
available at the mem_side port of L3) is always equal to the response
latency. I checked this by varying only the response latency of L3 while
keeping the other latency values the same. Any comment on this? From this
observation, I concluded that the response latency is the latency a miss
fill incurs at L3.
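
For reference, this is the kind of configuration I mean (the parameter
names are those of gem5's classic Cache SimObject; the sizes and cycle
counts below are placeholders, not the exact values from my runs):

    from m5.objects import Cache

    class L3Cache(Cache):
        # Illustrative values; only response_latency changes between runs.
        size = '4MB'
        assoc = 16
        tag_latency = 20         # kept fixed
        data_latency = 20        # kept fixed
        response_latency = 20    # the only knob varied across runs
        mshrs = 32
        tgts_per_mshr = 20

Rerunning with a different response_latency (say 40 instead of 20) shifts
the time at which the response reaches the cpu_side port of L3 by exactly
that difference, which is what led me to the conclusion above.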

Thanks and regards,
Aritra

On Wed, 9 Sep, 2020, 9:40 PM Nikos Nikoleris, <[email protected]>
wrote:

> Hi,
>
> The response_latency doesn't necessarily correspond to the time it takes
> to fill the cache with the data from the response, but rather the time it
> takes for a cache to respond to a request from the point at which it has
> the data. In some cache designs this will include the fill time, but in
> other designs the fill will happen in parallel.
>
> If you wish to model a cache that sends out responses faster, you can
> change the response_latency. You could even set it to 0.
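>
> As a tiny illustration (assuming the L3 is a Cache-derived object called
> l3 in your configuration script; the name is hypothetical):
>
>     l3.response_latency = 0  # respond as soon as the data is available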
>
> Nikos
>
> On 09/09/2020 17:01, Aritra Bagchi via gem5-users wrote:
> > Hi all,
> >
> > I didn't hear back from anybody, so this is just a gentle reminder. It
> > would be helpful if someone could respond. Thanks!
> >
> > On Tue, 8 Sep, 2020, 12:00 PM Aritra Bagchi, <[email protected]> wrote:
> >
> >     Hi all,
> >     I am using the classic cache models in gem5. I have three levels
> >     of caches in the hierarchy: L1-D/I, L2, and L3. Whenever there is
> >     an L3 miss, the data is fetched from memory and written to L3 with
> >     a latency equal to the response latency of L3.
> >     After tracing a memory request packet, I have found that the data
> >     is then written to L2, and next to L1-D, and after that it is
> >     available at the cpu_side port of L1-D so that the core can get it.
> >
> >     Instead of this, if I wanted to forward the data fetched from
> >     main memory directly to the requesting core, and let these fills
> >     happen independently so that the core doesn't have to wait
> >     unnecessarily for its data, what would I need to do? Can it be
> >     done? What changes need to be made, and where? Any suggestions on
> >     where to start would be appreciated.
> >
> >     Thanks and regards,
> >
> >     Aritra Bagchi
> >     Research Scholar, CSE
> >     Indian Institute of Technology Delhi
> >
> >
_______________________________________________
gem5-users mailing list -- [email protected]
To unsubscribe send an email to [email protected]
