(Moving this discussion to the users mailing list, as it is better suited
there)
Hello, Patrick,
DictionaryCompressor::CompData contains the patterns (in general, any
compressor's "CompData" structure contains the compressed data, not the
original data - there are some exceptions). The
Hello, Deepika,
Prefetches are currently only considered useful when they are not late. To add
late prefetches to the useful ones you will have to modify
serviceMSHRTargets(), so that in the case MSHR::Target::FromCPU the prefetcher
is notified when the MSHR of a blk->wasPrefetched() is
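The distinction between on-time and late prefetches can be modeled outside gem5. The toy class below is only an illustration of the bookkeeping described above (it is not gem5 code; the method names are made up): a prefetch counts as useful when a demand access hits a block it brought in, and as late when the demand access arrives while the prefetch's MSHR is still in flight.

```python
class ToyPrefetchStats:
    """Toy model of prefetch-usefulness accounting (not gem5 code)."""

    def __init__(self, count_late_as_useful=False):
        # Set to True to apply the modification described above.
        self.count_late_as_useful = count_late_as_useful
        self.useful = 0
        self.late = 0

    def on_demand_hit_prefetched_block(self):
        """Demand access hits a block a prefetch brought in on time."""
        self.useful += 1

    def on_demand_hit_inflight_prefetch_mshr(self):
        """Demand access coalesces into the MSHR of an in-flight prefetch."""
        self.late += 1
        if self.count_late_as_useful:
            self.useful += 1
```

With `count_late_as_useful=True`, a demand access hitting an in-flight prefetch's MSHR increments both counters, which is the effect the suggested change to serviceMSHRTargets() would have.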
Hello, John,
A few questions:
- Did you add the respective SHIPRP class in ReplacementPolicies.py?
- Are you making sure the namespace is properly applied in the Python
declaration (something like cxx_class='ReplacementPolicy::SHiP')?
- When you downloaded the patches for DRRIP, did you
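For reference, a declaration along the lines of the checklist above could look as follows. This is only a sketch: the class name, base class, and header path are assumptions about John's implementation, not code that exists upstream.

```python
# Hypothetical entry in src/mem/cache/replacement_policies/ReplacementPolicies.py.
# SHiPRP, BaseReplacementPolicy, and the header path are placeholders.
class SHiPRP(BaseReplacementPolicy):
    type = 'SHiPRP'
    # Must be the fully qualified C++ name, including the namespace,
    # or the generated wrapper will not find the class.
    cxx_class = 'ReplacementPolicy::SHiP'
    cxx_header = "mem/cache/replacement_policies/ship_rp.hh"
```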
Can you please provide me with the entire patch file (all the changes required
for DRRIP for your implementation), if that's ok with you. That will really
help me to understand it.
Also, is SRRIP implemented in the new gem5?
Thanks
John
On Sat, Nov 21, 2020 at 10:27 AM Daniel Carvalho wrote:
Hello, John,
I have uploaded for review some patches to make DRRIP work:
https://gem5-review.googlesource.com/c/public/gem5/+/37898
I believe the code is well documented enough to help you understand how it works.
To use DRRIPRP you must set the constituency size and the number of entries per
Hello,
A few years ago I implemented a few PC-reliant RPs for fun, but did not
merge them upstream because I did not have time to fully test them. One day
they shall see the light of day, though :)
I don't remember what is required for the PC change in particular, but here are
the changes
Hello Aritra,
It seems that the tag lookup latency is indeed disregarded on misses (except
for SW prefetches). The cache behaves as if a miss is always assumed to happen
and "pre-prepared" in parallel with the tag lookup. I am not sure if this was a
design decision, or an implementation
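The behaviour just described can be sketched as a standalone toy function (this is an illustration, not gem5 code): the tag lookup latency contributes to a hit, while a miss behaves as if it were detected in parallel with the lookup, so the lookup latency is not paid on top of the miss handling.

```python
def toy_access_latency(hit, tag_lookup_lat, hit_data_lat, miss_handling_lat):
    """Toy model (not gem5 code) of latency accounting where misses are
    assumed to be detected in parallel with the tag lookup."""
    if hit:
        # Hits pay the tag lookup before the data access.
        return tag_lookup_lat + hit_data_lat
    # The miss path is "pre-prepared" in parallel with the lookup,
    # so the tag lookup latency is disregarded.
    return miss_handling_lat
```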
Hello,
This message only concerns those who use the *develop* branch.
We have recently merged another patch creating a namespace
(https://gem5-review.googlesource.com/c/public/gem5/+/33294). Due to a small
issue with the SCons configuration, it does not trigger automatic recompilation
of the
Hello Sourjya,
First of all, welcome!
gem5 is very versatile, and there is an infinitude of things you can do with
it. The first thing you will need to decide is whether you are going to use the
Classic Cache
(https://www.gem5.org/documentation/general_docs/memory_system/classic_caches/)
or
Hello,
doWritebacks will only populate the write queue, which would then be emptied
when possible. Since you are evicting a multitude of blocks simultaneously, the
queue will become full and the assertion will trigger.
You will either have to implement a specialized version of doWritebacks()
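The overflow mechanism can be illustrated with a toy bounded queue (this is not gem5 code; the class and assertion message are made up). doWritebacks() only enqueues, so evicting many blocks at once fills the queue before it has a chance to drain, and the capacity check fires.

```python
class ToyWriteQueue:
    """Toy model (not gem5 code) of a bounded writeback queue."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = []

    def enqueue(self, pkt):
        # Stand-in for the assertion that trips in the cache model
        # when too many writebacks arrive before the queue drains.
        assert len(self.entries) < self.capacity, "write queue full"
        self.entries.append(pkt)

    def drain_one(self):
        """Service the oldest queued writeback, freeing one slot."""
        if self.entries:
            self.entries.pop(0)
```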
Hello,
I believe what might be missing is a call to perform the writebacks
(doWritebacks(wb_pkts, clockEdge(lat+forwardLatency))) before any other cache
operations (e.g., access()). This will make sure that the coherence is kept,
and you will not mistakenly use stale data.
Regards,
Daniel
In particular, the following description seems to be relevant to your
questions:
"Simple CPU model based on the atomic CPU. Unlike the atomic CPU, this model
causes the memory system to bypass caches and is therefore slightly faster in
some cases. However, its main purpose is as a substitute
Hello Victor,
I've never used functor(), can't compile right now, and there are very few
examples in gem5 of how to use it (check src/sim/stat_control.cc and
src/unittest/stattest.cc); however, if you are just interested in calculating
the number of zero bytes to extract the percentage, as in
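If a plain calculation suffices, the computation mentioned above can be sketched as follows (standalone Python, outside gem5's stats framework, for illustration only):

```python
def zero_byte_percentage(data: bytes) -> float:
    """Return the percentage of bytes in `data` that are zero."""
    if not data:
        return 0.0
    # bytes.count(0) counts the zero-valued bytes in the buffer.
    return 100.0 * data.count(0) / len(data)
```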
- I think master should be stable
- I think gem5 should be released three times per year
Regards,
Daniel
On Monday, December 16, 2019 at 22:33:14 GMT+1, Bobby Bruce wrote:
* I think master should be stable
* I think gem5 should be released three times per year
--
Dr.
Hello,
I am unable to test or fix it until next week, but it seems that both
implementations of fromDictionaryEntry() and toDictionaryEntry() must be moved
from the _impl.hh to the main header file, since the patterns that have been
added to the latter use them.
Kindly let me know if that
Hello,
These blocks are invalid (valid: 0). Due to that, CacheBlk's invalidate() has
been called at some point previously, setting the tag address to MaxAddr (see
src/mem/cache/cache_blk.hh). Since set and way belong to ReplaceableEntry
Victor,
It depends on how you want the latency to be added. recvTimingResp() will
receive the packet at tick X, and will start the filling process, which is done
off the critical path, and thus we only need to care about this latency to
schedule the evictions caused by this fill. In any case,
Hello Victor,
Everything depends on your design and when the extra latency should be applied.
If it is within the tag-data access, you should likely put it inside
calculateXLatency. The compressor, for example, adds latency after the data has
been accessed, so the decompression latency is
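The two placements can be sketched as a toy latency calculation (illustration only, not gem5 code): latency that belongs to the tag/data access itself versus latency, like decompression, applied only once the data has been read out.

```python
def toy_hit_latency(tag_lat, data_lat, decompression_lat=0, compressed=False):
    """Toy model (not gem5 code): cycles to serve a hit, with decompression
    latency applied after the data access when the block is compressed."""
    lat = tag_lat + data_lat
    if compressed:
        # Decompression can only start once the data is available,
        # so it is added after the tag/data access.
        lat += decompression_lat
    return lat
```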
Hello Debiprasanna,
Since you did not specify which level you are talking about, how you are
measuring zero data, system config, etc, I cannot give you a precise answer,
but these results seem incorrect to me. Have you tried using BDI and CPack,
which are present in your version, to verify if
the metrics of
the CPU in the stats file are the same for both of them. Do you know why this
happens for me, given the different miss rates for the L2 cache?
Many thanks again!
Best,
Pooneh
On Thu, May 23, 2019 at 6:58 AM Daniel Carvalho wrote:
Hello Pooneh,
You can check papers that discuss turning on and off compre
Hello Pooneh,
You can check papers that discuss turning on and off compression (among
others), for common explanations of the negative influence of compression in
some workloads. Here is an extract of one of my simulation results both for mcf
and geo mean of all SPEC 2017 benchmarks:
BDI on L3
Hello Pooneh,
There is currently no support for compressed L1 caches (and there is no plan to
add it, since it would require great modifications to the caches), therefore if
you set up the configuration in src/mem/cache/Cache.py it is going to break
(that file sets the option for all caches, including L1).
What
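One way around the Cache.py problem is to set the compressor only on the caches that should be compressed, from your own configuration script. The sketch below is hypothetical: the object names (system.l2cache, etc.) are placeholders for whatever your script defines, and it assumes a cache `compressor` parameter and a BDI compressor class as in the poster's gem5 version.

```python
# Hypothetical config fragment: apply the compressor per cache instead of
# editing src/mem/cache/Cache.py (which would affect L1 as well).
from m5.objects import BDI

system.l2cache.compressor = BDI()   # compress the L2 only
# system.cpu.icache / system.cpu.dcache are left uncompressed
```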
If
I have to use the development version, where can I find new tutorials?
Best Regards
Zheng Liang
EECS, Peking University
----- Original Message -----
From: "Daniel Carvalho"
Sent Time: 2018-11-12 13:52:06 (Monday)
To: gem5-users@gem5.org, gem5-...@gem5.org
Cc:
Subject: Re: [g
Hello Liang,
Regarding gem5 and GCC, you can pull more recent versions of gem5 which support
newer versions of GCC:
- Up to GCC 7 support: https://gem5-review.googlesource.com/c/public/gem5/+/9101
- Up to GCC 8 support: https://gem5-review.googlesource.com/c/public/gem5/+/11949
- Up to GCC 8.1
Hello Liang,
The cache timing model is something that Jason, Nikos, and I have recently been
discussing. You can follow part of the discussion at the following links:
https://gem5-review.googlesource.com/c/public/gem5/+/13697 and
https://gem5-review.googlesource.com/c/public/gem5/+/13835.
Hello!
First I will give some background of what is happening. This will be used to
formulate some questions.
I've got an error with the SnoopFilter after trying to implement cache
compression:
panic: panic condition !(sf_item.holder & req_port) occurred: requester 1 is
not a holder :( SF