[gem5-users] Multi-threading in Gem5

2023-11-13 Thread Aritra Bagchi via gem5-users
Hi,

Can anyone share the methodology for running multi-threaded programs in
gem5 FS mode with the Ruby memory model? To be specific, I am interested in
the PARSEC benchmark suite. I have the following question:

Let 'A' be a program from the PARSEC suite, and let there be 8 cores in the
simulated gem5 system. How can we execute 'K' (e.g., K = 8) threads of 'A'
on the 8 available gem5 cores? Is 'K' configurable? Can we choose any value
for 'K'?
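To illustrate what I am after: my understanding is that in FS mode the guest OS schedules the threads, so 'K' would be set by the benchmark invocation rather than by gem5, and need not equal the core count. The sketch below shows the kind of setup I mean; the paths, benchmark name, and runscript contents are my assumptions, not tested commands.

```python
# Sketch: hand the benchmark command to a gem5 FS simulation via a runscript.
# The guest OS then schedules the K threads onto the simulated cores.

K = 8          # number of PARSEC threads; configurable, need not equal core count
NUM_CPUS = 8   # number of simulated cores

# Contents of the runscript the simulated OS executes after boot
# (PARSEC's management script takes the thread count with -n).
runscript = f"""#!/bin/sh
cd /parsec
./bin/parsecmgmt -a run -p blackscholes -i simsmall -n {K}
/sbin/m5 exit
"""

with open("parsec_run.rcS", "w") as f:
    f.write(runscript)

# The file would then be passed to the FS config, e.g.:
#   build/X86/gem5.opt configs/example/fs.py --num-cpus=8 --ruby \
#       --script=parsec_run.rcS --disk-image=<parsec-image>
```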


Thanks in advance,
Aritra
___
gem5-users mailing list -- gem5-users@gem5.org
To unsubscribe send an email to gem5-users-le...@gem5.org


[gem5-users] Re: Counters for # DRAM reads, writes, page hits, and page misses

2023-09-08 Thread Aritra Bagchi via gem5-users
Hi Eliot,

In the stats, I got some of the counters I wanted. In the source code, I
can now look at how they are computed and get ideas. I wanted these data
not only at the end of the simulation but also at intermediate times, and I
could obtain them by controlling some parameters.
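For other readers who land here: one way I understand intermediate dumps can be requested is from the configuration script itself, via gem5's embedded m5 module. This is a sketch, untested, and only meaningful when run under gem5:

```python
# Sketch: dump stats at intervals instead of only at the end of simulation.
# Runs only inside gem5; `m5` is gem5's embedded Python module.
import m5

m5.instantiate()

interval = 10_000_000_000  # ticks between dumps (e.g. 10 ms at the default
                           # 1 THz tick rate)
while True:
    event = m5.simulate(interval)
    m5.stats.dump()    # append a new stats section to stats.txt
    m5.stats.reset()   # the next section then covers only the next interval
    if event.getCause() != "simulate() limit reached":
        break
```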

Thanks,
Aritra



On Fri, Sep 8, 2023 at 6:06 PM Eliot Moss  wrote:

> On 9/8/2023 2:55 AM, Aritra Bagchi via gem5-users wrote:
> > Hi all,
> >
> > Can anyone indicate how to extract performance counters such as the
> number of DRAM read operations,
> > the number of DRAM write operations, the number of times a page miss
> occurs, etc.?
> >
> > Inside src/mem/mem_ctrl.cc, MemCtrl::recvTimingReq( ) method, there are
> two methods for inserting
> > new read and write operations into their respective queues,
> namely addToReadQueue( )
> and addToWriteQueue( ). Can the #reads and #writes be obtained from
> here? And what about # page
> > hits/misses? Any help is appreciated.
>
> The way things generally work in gem5 is that you get a stats dump at
> the end of a run.  There are also ways to request such dumps more
> frequently.
> You get a lot of details about accesses to caches and memories.  Are you
> looking at stats dumps and not seeing what you hope for?
>
> Best - Eliot Moss
>


[gem5-users] Counters for # DRAM reads, writes, page hits, and page misses

2023-09-08 Thread Aritra Bagchi via gem5-users
Hi all,

Can anyone indicate how to extract performance counters such as the number
of DRAM read operations, the number of DRAM write operations, the number of
times a page miss occurs, etc.?

Inside src/mem/mem_ctrl.cc, MemCtrl::recvTimingReq( ) method, there are two
methods for inserting new read and write operations into their respective
queues, namely addToReadQueue( ) and addToWriteQueue( ). Can the #reads and
#writes be obtained from here? And what about # page hits/misses? Any
help is appreciated.
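In the meantime, here is the kind of script I use to pull such counters out of stats.txt once I know their names. The stat names in the sample are only illustrative (they vary across gem5 versions); check your own dump for the exact ones.

```python
import re

def parse_stats(text, patterns):
    """Collect stat values whose names match any of the given regexes."""
    found = {}
    for line in text.splitlines():
        parts = line.split()
        if len(parts) < 2:
            continue
        name, value = parts[0], parts[1]
        for pat in patterns:
            if re.search(pat, name):
                found[name] = float(value)
    return found

# Example stats.txt fragment (names are illustrative; check your own dump).
sample = """
system.mem_ctrls.readReqs        123456   # Number of read requests
system.mem_ctrls.writeReqs        65432   # Number of write requests
system.mem_ctrls.dram.pageHitRate  87.5   # Row-buffer (page) hit rate
"""

counters = parse_stats(sample, [r"readReqs$", r"writeReqs$", r"pageHitRate$"])
print(counters)
```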

Regards,
Aritra


[gem5-users] Benchmark support in gem5 SE mode

2023-01-02 Thread Aritra Bagchi via gem5-users
Hello,

I am running gem5 version 21. I use it in SE mode with the classic memory
model to run SPEC CPU 2006 benchmarks. I have the following two queries
regarding gem5's benchmark support:

1. Does gem5 SE support running multithreaded benchmarks such as PARSEC? If
yes, could someone please point to some available documentation discussing
the necessary steps?

2. I tried running SPEC CPU 2017 benchmarks in SE mode, but to no avail. Does
gem5 SE support running SPEC CPU 2017 benchmarks? If yes, could someone
please point to some available documentation discussing the necessary
steps?

Any help from anyone is highly appreciated. Thanks in advance.

Regards,
Aritra


[gem5-users] Read Clean Request Packets

2021-12-08 Thread Aritra Bagchi via gem5-users
Hi all,

I am observing a lot of *ReadCleanReq* packets in the classic cache of
gem5. Could anyone tell me what is the function/significance of these
packets?

Thanks and regards,
Aritra Bagchi
Research Scholar,
Department of Computer Science and Engineering,
Indian Institute of Technology Delhi,
New Delhi - 110016

[gem5-users] Knowing the number of response packets from the stats generated in gem5 v20+

2021-10-24 Thread Aritra Bagchi via gem5-users
Hi all,

I am using gem5 version 21. I can find stats such as
*system.cpu.l2..overall_accesses::total*, which indicates the
total number of L2 cache accesses of a specific type.

Could anyone tell me which stats in the stats.txt file give a) the number
of responses (of a specific type) that reach the "membus" from the memory
controller, b) the number of responses that reach the LLC (the Li-level
cache) from the "membus", and so on; in general, the number of responses
(of a specific type) from the L(i+1) cache to the L(i) cache? Thanks in
advance.

Regards,
Aritra

[gem5-users] Re: Write Buffer in Classic Cache in gem5

2021-08-06 Thread Aritra Bagchi via gem5-users
Just a gentle reminder. Any comment from anyone?

Thanks and regards,
Aritra



On Fri, Aug 6, 2021 at 11:51 AM Aritra Bagchi 
wrote:

> Hello All,
>
> Could anyone confirm which types of requests a cache write-buffer
> holds? The documentation claims that a write-buffer stores a) uncached
> writes, and b) writebacks from evicted (dirty) cache lines. Does it also
> store writebacks of evicted *clean* lines?
>
> Thanks and regards,
> Aritra Bagchi
> Research Scholar,
> Department of Computer Science and Engineering,
> Indian Institute of Technology Delhi,
> New Delhi - 110016
> Mobile: +91-9382370400
>
>

[gem5-users] Write Buffer in Classic Cache in gem5

2021-08-05 Thread Aritra Bagchi via gem5-users
Hello All,

Could anyone confirm which types of requests a cache write-buffer holds?
The documentation claims that a write-buffer stores a) uncached writes, and
b) writebacks from evicted (dirty) cache lines. Does it also store
writebacks of evicted *clean* lines?

Thanks and regards,
Aritra Bagchi
Research Scholar,
Department of Computer Science and Engineering,
Indian Institute of Technology Delhi,
New Delhi - 110016
Mobile: +91-9382370400

[gem5-users] Re: Read request and writebacks in gem5

2021-06-22 Thread Aritra Bagchi via gem5-users
Any help from anyone is appreciated. Thanks!

Regards,
Aritra



On Tue, Jun 22, 2021, 01:18 Aritra Bagchi  wrote:

> Hi,
>
> Could anybody help me understand what happens in gem5 when a read request
> reaches a cache (say L3) and L2's write queue has a pending writeback
> (one that has not yet been written to L3) for the same block as the
> read request? Does the read request get serviced from the write queue,
> since the writeback has the most recent data? If so, where in gem5 can I
> find the code for this?
>
> Thanks and regards,
> Aritra
>
>

[gem5-users] Read request and writebacks in gem5

2021-06-21 Thread Aritra Bagchi via gem5-users
Hi,

Could anybody help me understand what happens in gem5 when a read request
reaches a cache (say L3) and L2's write queue has a pending writeback (one
that has not yet been written to L3) for the same block as the read
request? Does the read request get serviced from the write queue, since the
writeback has the most recent data? If so, where in gem5 can I find the
code for this?
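To make my question concrete, here is a conceptual sketch of the mechanism I am asking about (my mental model, not gem5 code): before forwarding a read downstream, the cache checks its write buffer for a pending writeback to the same block and, on a match, services the read from the buffered data.

```python
# Conceptual model of servicing a read from a pending writeback.
# This mirrors the idea (not the code) of a write-buffer lookup.

class WriteBuffer:
    def __init__(self):
        self.pending = {}  # block address -> data awaiting writeback

    def add_writeback(self, addr, data):
        self.pending[addr] = data

    def find_match(self, addr):
        return self.pending.get(addr)

def handle_read(addr, write_buffer, next_level):
    # Service from the write buffer if it holds newer data for this block...
    data = write_buffer.find_match(addr)
    if data is not None:
        return data, "serviced-from-write-buffer"
    # ...otherwise forward the read to the next memory level.
    return next_level[addr], "serviced-from-next-level"

wb = WriteBuffer()
memory = {0x40: b"old"}
wb.add_writeback(0x40, b"new")
print(handle_read(0x40, wb, memory))   # the buffered (newer) data wins
```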

Thanks and regards,
Aritra

[gem5-users] Re: Handling a cache miss in gem5

2020-09-25 Thread Aritra Bagchi via gem5-users
Hi Daniel,

Thanks for your response! As I understand from the code, the access latency
for a parallel-access cache (where the cache lookup happens in parallel
with the data access) is determined in gem5 by max(lookup latency, data
latency), and for sequential access mode it becomes (lookup latency + data
latency).
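In other words (my paraphrase of the rule, not gem5 source):

```python
def access_latency(tag_lat, data_lat, sequential):
    """Cache access latency in cycles for the two lookup modes."""
    if sequential:
        # Sequential mode: the data array is read only after the tag check.
        return tag_lat + data_lat
    # Parallel mode: tag and data arrays are probed together.
    return max(tag_lat, data_lat)

print(access_latency(2, 4, sequential=False))  # -> 4
print(access_latency(2, 4, sequential=True))   # -> 6
```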

I was particularly interested in observing how many cycles the classic
cache spends handling a miss.

The gem5 code designates two such latencies: *lookup latency* and *forward
latency*. *Both of them are, by default, exactly equal to the tag latency*.
When a cache miss occurs, gem5 assigns "*lat = lookupLatency*", where "lat"
is gem5's internal variable holding the appropriate latency in clock
cycles. This assignment is quite intuitive, and it *creates the impression
that whatever value the lookup latency contains is used to handle a cache
miss*. But if I assign some value "X" cycles with "lat = X", it is not
reflected in the system; if I instead force the forward latency to "X", I
do see it reflected while the system handles the cache miss.

*So, even though the assignment "lat = lookupLatency" creates the
impression that "lookupLatency" is used to handle a miss, "forwardLatency"
is actually used.*

None of this matters in practice because lookup latency = forward latency =
tag latency (by default).

Regards,
Aritra

On Fri, Sep 25, 2020 at 3:42 PM Daniel Carvalho  wrote:

> Hello Aritra,
>
> It seems that the tag lookup latency is indeed disregarded on misses
> (except for SW prefetches). The cache behaves as if a miss is always
> assumed to happen and "pre-prepared" in parallel with the tag lookup. I am
> not sure if this was a design decision, or an implementation consequence,
> but my guess is the latter - there is no explicit definition of the cache
> model pursued by the classic cache.
>
> Regards,
> Daniel
> On Friday, September 25, 2020, 11:00:39 GMT+2, Aritra Bagchi via
> gem5-users  wrote:
>
>
> Just a humble reminder. Any comment would be highly appreciated.
>
> Thanks,
> Aritra
>
> On Thu, 24 Sep, 2020, 12:22 PM Aritra Bagchi, 
> wrote:
>
> Hi all,
>
> While experimenting with the gem5 classic cache, I tried to find out how
> an access miss is handled and with what latency.
>
> Although in *cache/tags/base_set_assoc.hh* the access (here, miss)
> handling latency *"lat"* is assigned from *"lookupLatency"*, the
> actual latency used to handle a miss (in the *cache/base.cc:
> handleTimingReqMiss( )* method) is *"forwardLatency"*. This is my
> observation.
>
> Both *"lookupLatency"* and *"forwardLatency"* are assigned the cache's
> *"tag_latency"*, which is fine. But I experimented with different values
> for them and observed that it is the value of *"forwardLatency"* that gets
> reflected (in terms of the clock-cycle delay from the *cpu_side* port to
> the *mem_side* port) in the system when handling a cache miss.
>
> Could someone please confirm whether my observation and understanding are
> correct?
>
> Regards,
> Aritra
>

[gem5-users] Re: Handling a cache miss in gem5

2020-09-25 Thread Aritra Bagchi via gem5-users
Just a humble reminder. Any comment would be highly appreciated.

Thanks,
Aritra

On Thu, 24 Sep, 2020, 12:22 PM Aritra Bagchi, 
wrote:

> Hi all,
>
> While experimenting with the gem5 classic cache, I tried to find out how
> an access miss is handled and with what latency.
>
> Although in *cache/tags/base_set_assoc.hh* the access (here, miss)
> handling latency *"lat"* is assigned from *"lookupLatency"*, the
> actual latency used to handle a miss (in the *cache/base.cc:
> handleTimingReqMiss( )* method) is *"forwardLatency"*. This is my
> observation.
>
> Both *"lookupLatency"* and *"forwardLatency"* are assigned the cache's
> *"tag_latency"*, which is fine. But I experimented with different values
> for them and observed that it is the value of *"forwardLatency"* that gets
> reflected (in terms of the clock-cycle delay from the *cpu_side* port to
> the *mem_side* port) in the system when handling a cache miss.
>
> Could someone please confirm whether my observation and understanding are
> correct?
>
> Regards,
> Aritra
>
>

[gem5-users] Handling a cache miss in gem5

2020-09-23 Thread Aritra Bagchi via gem5-users
Hi all,

While experimenting with the gem5 classic cache, I tried to find out how an
access miss is handled and with what latency.

Although in *cache/tags/base_set_assoc.hh* the access (here, miss) handling
latency *"lat"* is assigned from *"lookupLatency"*, the actual latency used
to handle a miss (in the *cache/base.cc: handleTimingReqMiss( )* method) is
*"forwardLatency"*. This is my observation.

Both *"lookupLatency"* and *"forwardLatency"* are assigned the cache's
*"tag_latency"*, which is fine. But I experimented with different values
for them and observed that it is the value of *"forwardLatency"* that gets
reflected (in terms of the clock-cycle delay from the *cpu_side* port to
the *mem_side* port) in the system when handling a cache miss.

Could someone please confirm whether my observation and understanding are
correct?

Regards,
Aritra

[gem5-users] Re: Queries regarding banking and response bypass in gem5 classic cache

2020-09-16 Thread Aritra Bagchi via gem5-users
Hi all,

Haven't heard anything from anyone. It would really be appreciated if
someone can provide any comment/suggestion/clarification on this.

Thanks,
Aritra

On Tue, 15 Sep, 2020, 3:41 PM Aritra Bagchi, 
wrote:

> Hi all,
>
> I have two questions regarding the classic cache of gem5. They are as
> follows:
>
> 1. Last-level caches in real hardware are usually not monolithic but are
> multi-banked. It seems multiple banking can be efficient only if the memory
> accesses are spread uniformly across all banks. Can such banking be
> implemented in gem5 classic cache? Could someone provide any hint on how to
> do that?
>
> 2. I have experimented with gem5 caches and found that a memory request
> which is a miss at the last-level cache (L3) has to traverse the entire
> memory hierarchy: L1-D (miss) > L2 (miss) > L3 (miss) > main memory (fetch
> data) > L3 (miss-fill/write) > L2 (miss-fill/write) > L1-D
> (miss-fill/write). When the response comes from the main memory, I want to
> bypass it to the L2 cache and let the miss-fill happen independently at L3
> taking whatever latency it should take for an L3 write operation. I wanted
> to make sure that the requesting core does not have to stall (wait) for the
> miss-fill to finish, and can get the data as soon as it becomes available
> from the main memory. Could someone shed some light on how this can be
> implemented in gem5?
>
> Any comment/suggestion/clarification will be highly appreciated.
>
> Thanks and regards,
> Aritra
>

[gem5-users] Queries regarding banking and response bypass in gem5 classic cache

2020-09-15 Thread Aritra Bagchi via gem5-users
Hi all,

I have two questions regarding the classic cache of gem5. They are as
follows:

1. Last-level caches in real hardware are usually not monolithic but are
multi-banked. It seems multiple banking can be efficient only if the memory
accesses are spread uniformly across all banks. Can such banking be
implemented in gem5 classic cache? Could someone provide any hint on how to
do that?

2. I have experimented with gem5 caches and found that a memory request
which is a miss at the last-level cache (L3) has to traverse the entire
memory hierarchy: L1-D (miss) > L2 (miss) > L3 (miss) > main memory (fetch
data) > L3 (miss-fill/write) > L2 (miss-fill/write) > L1-D
(miss-fill/write). When the response comes from the main memory, I want to
bypass it to the L2 cache and let the miss-fill happen independently at L3
taking whatever latency it should take for an L3 write operation. I wanted
to make sure that the requesting core does not have to stall (wait) for the
miss-fill to finish, and can get the data as soon as it becomes available
from the main memory. Could someone shed some light on how this can be
implemented in gem5?

Any comment/suggestion/clarification will be highly appreciated.

Thanks and regards,
Aritra

[gem5-users] Resource contention and queuing delay for timing accesses in gem5

2020-09-10 Thread Aritra Bagchi via gem5-users
Hi all,

A monolithic (single-bank), single-port cache in real hardware should
ideally block incoming requests whenever another request is being served in
the cache. By "request", I mean both requests from any core to that cache
and the responses to those requests (responses cause miss-fills, hence
write operations) coming from a lower-level cache or main memory. Due to
this blocking, queuing delays arise for memory requests.

I wanted to ask whether recent versions of gem5 implement this blocking
mechanism and the resulting queuing delay. If they do, could anyone please
shed some light on how it is modelled?
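To be concrete, the behaviour I have in mind can be sketched as follows (a generic single-ported-resource model, not gem5 code):

```python
def queued_finish_times(arrivals, service_time):
    """Finish times for requests to a single-ported, blocking resource.

    Each request starts at max(its arrival, previous finish); the gap
    between arrival and start is its queuing delay.
    """
    finish, result = 0, []
    for arrival in arrivals:
        start = max(arrival, finish)     # blocked until the port frees up
        queuing_delay = start - arrival
        finish = start + service_time
        result.append((arrival, queuing_delay, finish))
    return result

# Three back-to-back requests to a cache with a 10-cycle service time.
for arrival, delay, finish in queued_finish_times([0, 2, 4], 10):
    print(f"arrival={arrival:2d} queuing_delay={delay:2d} finish={finish:2d}")
```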

Thanks and regards,
Aritra

[gem5-users] Re: Bypassing the last level cache in the response path

2020-09-09 Thread Aritra Bagchi via gem5-users
Hi Nikos,

Thanks for your response.

For a packet that misses at L3, I have observed that the time its response
packet takes to reach the cpu_side port of L3 from the mem_side port of L3
(after the data is fetched from the main memory and is available at the
mem_side port of L3) is always equal to the response latency. I checked
this by varying only the response latency of L3 while keeping the other
latency values the same. Any comment on this? From this observation, I
concluded that the response latency is the latency a miss-fill incurs at
L3.

Thanks and regards,
Aritra

On Wed, 9 Sep, 2020, 9:40 PM Nikos Nikoleris, 
wrote:

> Hi,
>
> The response_latency doesn't necessarily correspond to the time it takes
> to fill in with the data from the response, but rather the time it takes
> for a cache to respond to a request from the point it has the data. In
> some cache designs, this will include the time to fill in but in other
> designs the fill will happen in parallel.
>
> If you wish to model a cache that sends out responses faster then you
> can change the response_latency. You could even set it to 0.
>
> Nikos
>
> On 09/09/2020 17:01, Aritra Bagchi via gem5-users wrote:
> > Hi all,
> >
> > I didn't hear from anybody. So this is just a gentle reminder. It would
> > be helpful if someone can respond. Thanks!
> >
> > On Tue, 8 Sep, 2020, 12:00 PM Aritra Bagchi,  wrote:
> >
> > Hi all,
> >   I am using classic cache models in gem5. I have three
> > levels of caches in the hierarchy: L1-D/I, L2, L3. Whenever there is
> > an L3 miss, the data is fetched from memory and written to L3 using
> > a latency equal to the response latency of L3.
> > After tracing a memory request packet, I have found that the data is
> > then written to L2, and next to L1-D, and after that it is available
> > at the cpu_side port of L1-D so that the core can get it.
> >
> >   Instead of this, if I wanted to forward the data fetched
> > from the main memory directly to the requesting core, and let these
> > writes happen independently so that the core doesn't have to
> > unnecessarily wait for its data, what do I need to do? I want
> > suggestions to start. Can it be done? What changes need to be made,
> > and where? Can anyone help me with this?
> >
> > Thanks and regards,
> >
> > Aritra Bagchi
> > Research Scholar, CSE
> > Indian Institute of Technology Delhi
> >
> >

[gem5-users] Re: Bypassing the last level cache in the response path

2020-09-09 Thread Aritra Bagchi via gem5-users
Hi all,

I didn't hear from anybody. So this is just a gentle reminder. It would be
helpful if someone can respond. Thanks!

On Tue, 8 Sep, 2020, 12:00 PM Aritra Bagchi, 
wrote:

> Hi all,
>  I am using classic cache models in gem5. I have three levels of
> caches in the hierarchy: L1-D/I, L2, L3. Whenever there is an L3 miss, the
> data is fetched from memory and written to L3 using a latency equal to the
> response latency of L3.
> After tracing a memory request packet, I have found that the data is then
> written to L2, and next to L1-D, and after that it is available at the
> cpu_side port of L1-D so that the core can get it.
>
>  Instead of this, if I wanted to forward the data fetched from the
> main memory directly to the requesting core, and let these writes happen
> independently so that the core doesn't have to unnecessarily wait for its
> data, what do I need to do? I want suggestions to start. Can it be done?
> What changes need to be made, and where? Can anyone help me with this?
>
> Thanks and regards,
>
> Aritra Bagchi
> Research Scholar, CSE
> Indian Institute of Technology Delhi
>
>

[gem5-users] Bypassing the last level cache in the response path

2020-09-07 Thread Aritra Bagchi via gem5-users
Hi all,
 I am using classic cache models in gem5. I have three levels of
caches in the hierarchy: L1-D/I, L2, L3. Whenever there is an L3 miss, the
data is fetched from memory and written to L3 using a latency equal to the
response latency of L3.
After tracing a memory request packet, I have found that the data is then
written to L2, and next to L1-D, and after that it is available at the
cpu_side port of L1-D so that the core can get it.

 Instead of this, if I wanted to forward the data fetched from the
main memory directly to the requesting core, and let these writes happen
independently so that the core doesn't have to unnecessarily wait for its
data, what do I need to do? I want suggestions to start. Can it be done?
What changes need to be made, and where? Can anyone help me with this?

Thanks and regards,

Aritra Bagchi
Research Scholar, CSE
Indian Institute of Technology Delhi

[gem5-users] Regarding "CleanEvict" and "WritebackDirty" in gem5

2020-07-22 Thread Aritra Bagchi via gem5-users
Hi,

What is the difference between "*CleanEvict*" requests and
"*WritebackClean*" requests in the gem5 cache? "*CleanEvict*" returns false
for both the *isRead( )* and *isWrite( )* checks, whereas "*WritebackClean*"
is of type *isWrite( )*. What do they actually mean, and how are they
handled in gem5? Is there documentation specifying the meaning of these
different types of memory requests in gem5?
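For concreteness, here is my current understanding encoded as data: CleanEvict merely announces the eviction of a clean block and carries no data, while WritebackClean carries the clean data so a downstream cache can allocate it. This is a hypothesis to be verified against the MemCmd definitions in src/mem/packet.cc, not an authoritative answer.

```python
# My understanding of the commands' attributes, encoded as data.
# Verify against the MemCmd definitions in gem5's src/mem/packet.cc.
commands = {
    "CleanEvict":     {"is_read": False, "is_write": False, "has_data": False,
                       "purpose": "announce eviction of a clean block"},
    "WritebackClean": {"is_read": False, "is_write": True,  "has_data": True,
                       "purpose": "push clean data to the next level"},
    "WritebackDirty": {"is_read": False, "is_write": True,  "has_data": True,
                       "purpose": "write modified data back"},
}

for name, attrs in commands.items():
    print(f"{name:15s} isWrite={attrs['is_write']} hasData={attrs['has_data']}")
```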

Thanks,
Aritra

[gem5-users] Modelling Non-Volatile Cache in gem5

2020-07-22 Thread Aritra Bagchi via gem5-users
Hi,

How can we model a non-volatile cache, with asymmetric read and write
latencies, in gem5? Which files need to be changed? How can we
differentiate between reads and writes?
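To make the question concrete, this is the asymmetry I have in mind, as a minimal model (illustrative only, not gem5 code; in gem5 one would presumably have to split the cache's single data latency by the packet's read/write type):

```python
# Minimal model of a cache with asymmetric read/write latencies,
# e.g. an STT-RAM-like NVM cache where writes cost more than reads.

class NVCacheModel:
    def __init__(self, tag_lat, read_lat, write_lat):
        self.tag_lat = tag_lat
        self.read_lat = read_lat
        self.write_lat = write_lat   # typically much larger than read_lat for NVM

    def access_latency(self, is_write):
        # Sequential access: tag check, then the type-dependent data access.
        data_lat = self.write_lat if is_write else self.read_lat
        return self.tag_lat + data_lat

cache = NVCacheModel(tag_lat=2, read_lat=4, write_lat=12)
print(cache.access_latency(is_write=False))  # -> 6
print(cache.access_latency(is_write=True))   # -> 14
```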

- Aritra