Hi Xihui,
That's just a warning and you can safely ignore it. The most recent hotfix
release should remove this warning as well.
Cheers,
Jason
On Sun, Sep 19, 2021 at 10:02 PM Xihui Yuan via gem5-users <
gem5-users@gem5.org> wrote:
> Hello everyone:
>
> I am a beginner with GEM5.
>
Hi Sam,
I would *guess* it's the draining code getting stuck in an infinite loop.
The draining code calls "drain" on all SimObjects in the system, and they
do their thing. Then, the drain code asks all SimObjects if they're done
draining. If not, it starts over and calls drain on all objects
Hi Emil,
You can remove that check. However, you should note that the classic caches
aren't designed to support high-bandwidth operation. Also, this assert
triggering could be a sign that there's infinite queuing somewhere (which
is one reason why the classic caches aren't great for high
Hi Sam,
Sorry for the frustration. Writing better documentation is always #2 on the
priority list :(.
I always tell people not to trust any of the "options" to fs.py and se.py.
Those scripts have gotten so far beyond "out of hand" at this point that
they are almost useless. They are trying to be
Hi Sindhuja,
Yes, there is an expectation that valgrind causes a slowdown. Let me give
you a couple of suggestions.
1. Make sure you compile without tcmalloc (e.g., scons
build//gem5.opt --without-tcmalloc). Using tcmalloc will make
valgrind miss all allocations.
2. Use the suppressions file in
Hello,
It should be X86 (capital X) instead of x86. You can see the files in
gem5/build_opts for the different possibilities for default build variables.
Cheers,
Jason
On Thu, Sep 2, 2021 at 3:58 AM Sravani Sravanam 20PHD7041 via gem5-users <
gem5-users@gem5.org> wrote:
> sir,
> i am sravani
Hi Sam,
This is a use case that I don't think we've thought about in the mainline
gem5. I think the easiest solution would be to add some custom Statistics
objects to track the info from the function you're interested in.
Cheers,
Jason
On Wed, Aug 11, 2021 at 10:59 AM Thomas, Samuel via
Hi Liyichao,
We welcome contributions to the gem5 resources! Currently, we have full
system resources available for x86 and one available for RISC-V. We don't
have any Arm resources available right now, but that's only because we
haven't had the time (or resources ;)) to get around to it. Again,
To use hardware-accelerated virtualization (i.e., KVM) your host and guest
must have the same ISA (and the host must have virtualization extension).
Cheers,
Jason
On Mon, Jul 12, 2021 at 12:17 PM Νικόλαος Ταμπουρατζής via gem5-users <
gem5-users@gem5.org> wrote:
> Dear gem5 community,
>
> I
Hi Sam,
My suggestion would be to use gdb. You can run gem5 in gdb and then use
ctrl-c to stop the execution and see where the program is getting stuck.
Also, enabling debug flags (or just good ole printf debugging) can also be
useful in these cases.
Another option with gdb would be to put
See https://gem5.atlassian.net/browse/GEM5-618
On Sat, Jul 3, 2021 at 5:23 PM lovline via gem5-users
wrote:
> Hi,
> We are working on an important project, and we want to use RISC-V V1.0
> vector instructions on gem5.
> But we can't find any features or code about RISC-V V on gem5.
> We
Hello,
Unfortunately, I don't think gem5 is the right tool for this job. When you
run that command, gem5's embedded python interpreter is executing `se.py`.
There's not really a way to easily get around this. You could try to
compile gem5 without python (--without-python, IIRC), but then
Hello,
It's somewhat possible. You can compile gem5 as a library (e.g., scons
build//libgem5-opt.so). However, gem5 *is a python
interpreter* and is configured via python scripts. Getting that to work
with an external program is "exciting". It's possible to get python
working, and there are other
Hi Adrian,
The AMD GPU model has never been tested with Arm. I doubt the ROCm stack
will compile/work with any ISA other than x86, unfortunately.
For multi-GPU support see
http://www.gem5.org/2020/05/30/enabling-multi-gpu.html
Of course, multiple CPUs will work with no problem with or without
Hi Adrian,
gem5 has support for AMD's GCN3 (compute) GPU in SE mode, and we're working
on merging both Vega support (AMD's newer GPU ISA) and full system support.
The status of these new features can be followed on Jira.
Here's documentation on the current GPU support:
What version of gem5 are you using? I believe gem5-20.0+ has the getdents
syscall implemented. I'm sure that 21.0 has the syscall implemented.
Whether you're using Ruby or classic caches shouldn't make any difference
to whether the syscalls are implemented.
Cheers,
Jason
On Fri, Jun 25, 2021
Hi everyone,
These details on gem5-resources have also been tested multiple times. We
have also gotten unmodified OpenSBI working with gem5 as well. Ayaz can
provide more details if you need.
https://gem5.googlesource.com/public/gem5-resources/+/refs/heads/stable/src/riscv-fs/
Cheers,
Jason
On
Hi Vincent,
It depends on when/how you're ending the simulation. If you end the
simulation at some particular tick, then you'll see writes left in the
write queue. Just like a real machine, writes don't happen instantaneously,
and at some point in time, there are writes sitting in the write
Hi Sam,
Are the (virtual, physical?) addresses different when you use the larger
arrays? I wonder if the underlying mmap or malloc calls are breaking in SE
mode somehow. Maybe, after you allocate in your guest code you can print
out the virtual address to make sure it looks reasonable. You can
Hi Xijing,
You can set specific mappings from virtual to physical addresses by calling
the `map()` function on the Process object from your python configuration
file. See
https://gem5.googlesource.com/public/gem5/+/refs/heads/stable/src/sim/Process.py#37
Then, once you have a virtual->physical
Hi Pedro,
No, I don't think there's an easy way to run m5_write_file on the guest
from the host. That is an instruction that is executed on the guest, and
the host can't easily control what is executing on the guest (especially
when you consider that it has to execute in the right context, etc.).
Hi Deepak,
Yeah, the cache disable bit may not work correctly in the page table
walker/TLB. You can check the code there to see if that's what's going
wrong.
You can also try adding an E820 entry to the workload object (e.g.,
Hi Deepak,
Have you tried the latest gem5? There's been a lot of work in both
gem5-21.0 and on gem5-develop to improve the RISC-V FS support. Another
option would be to look at how the RISC-V code has changed to see if that
helps diagnose this problem.
Cheers,
Jason
On Fri, Jun 18, 2021 at
Hi Sam,
No, there's not good documentation on this (yet ;)). It's relatively easy
to set up, though. Instead of using a single packet ptr, you can have a
queue (or whatever data structure you would like), and you can set the
blocked flag only when it is "full" (e.g., the number of items in the
Hi Patrick,
gem5 doesn't support multiple cache line sizes "out of the box", but
there's no reason you couldn't add the support. Creating a memory-side
cache with a larger cache line is certainly possible! You might need to
make some modifications to the Cache SimObject or create your own object.
It seems to be back now, but please let us know if you can't access it!
I think it was related to the Fastly outage. See the CNN article for more
details :).
https://www.cnn.com/2021/06/08/tech/internet-outage-fastly/index.html
Cheers,
Jason
On Tue, Jun 8, 2021 at 3:21 AM Pedro Henrique
Hi Travix,
I believe Garnet (and/or Ruby) can only support 64 cores. They use a 64 bit
number as a mask for the cores. To fix this, we'll need to dig in and
replace all of those explicitly sized variables with vector. I don't
think it's actually that many places that need to change, but it's been
Hmm... That's interesting. It could have something to do with the I/O hole
for x86 at 3-4GB. This is a huge pain for gem5's config scripts. We hack
around this in our config scripts (e.g., for gem5-resources). See
Hello,
I wonder if you have a maximum number of vcpus set on your host system.
Otherwise, I can't think of any specific limitation to creating vcpus.
Cheers,
Jason
On Tue, May 11, 2021 at 2:36 AM Liyichao wrote:
> Hi Jason:
>
>
>
> I use strace to follow the call stack, I find that
Hmm, I don't immediately know what's going wrong. I would extend the panic
on line 559 of vm.cc to also print the error code number so you can look it
up. I believe you can use `errno` like normal after calling `ioctl`. For
instance, you could add `strerror(errno)` to the panic.
Cheers,
Jason
Hi Arun,
Two quick ideas...
1. The address 0x56318e53ed40 looks suspect. That's not on the stack, in
the OS, on the heap... I think it's probably a bad address. Most likely
some other instruction before this one is causing a bad address to be
emitted.
2. CLWB/CLFLUSH/CLFLUSHOPT may or may not
Hello,
Check out the config.ini file in m5out/ and see what your CPU clock is
actually set to. I would guess that the options are not behaving the way
you expect (well, the way anyone would expect). se.py (and the options in
Options.py) is pretty fundamentally broken. There are tons of special
Hi Human,
On Sun, Apr 25, 2021 at 2:43 AM MOHAMMAD HUMAM KHAN via gem5-users <
gem5-users@gem5.org> wrote:
> Hello all,
>
> I am studying MESI Two Level Protocol in gem5 using garnet2.0 network and
> CPU2006 benchmarks. I want to know the L1 Cache MSHR entries that are
> present at any point in
Hello,
As far as I know, TLB misses are not modeled in SE mode at all.
Cheers,
Jason
On Thu, Apr 22, 2021 at 12:50 PM Θοδωρής Τροχάτος via gem5-users <
gem5-users@gem5.org> wrote:
> Hi Jason! Thanks for the info!
>
> Do you know what is happening when there is a TLB miss in SE mode?
> Is the
Hi Leon,
I believe you're correct. When there is a TLB hit, it's up to the *CPU
model* to model the latency of the TLB access. I think this implementation
was designed this way to give flexibility to the CPU models. Since the TLB
is deeply embedded in the pipeline, we wouldn't want to always have
Hi Leon,
This is exactly what gem5's region of interest (ROI) markers are for. You
can use these "special instructions" as either magic instructions (using
unused opcodes in the ISA) or as memory-mapped IO (useful for KVM CPUs).
You can embed these markers in your program by calling the m5ops
I *think* it's possible... At one point, I got c++ std::thread to work.
I've never tried something as complex as parsec, though.
Jason
On Fri, Apr 16, 2021 at 11:17 AM John Smith wrote:
> Does that mean I don't have to use m5threads and just use the regular
> pthread library ?
>
> On Fri, Apr
Hi John,
Yeah, it's something like that. We usually suggest using N + 1 cores where
N is the number of threads. You can always use more ;).
As a side note, if you configure things correctly (whatever that means...)
I believe you can get pthreads to work. You can link to the pthreads on the
host
Soon! https://gem5.atlassian.net/browse/GEM5-195
We're hopeful that in the next month or so all of this code will be public.
Cheers,
Jason
On Fri, Apr 16, 2021 at 9:55 AM John Smith wrote:
> Will I also be able to run the GPU model in the FS mode ?
>
> On Fri, Apr 16, 2021 at 11:39 AM Jason
Hi John,
I suggest using full system mode instead of SE mode if you're running a
multithreaded workload. In FS mode, there's a full OS so it can handle
thread switching, etc. For Parsec on x86 we've created a set of resources
for you to get started. See
Hi Gabriel,
I agree it's not intuitive and it's a bit awkward.
Is there a reason for adopting that design? My guess is that it allows to
> build the system top to bottom in the python scripts.
>
Haha! No, there's not an underlying reason for this. In fact, I would guess
that there is a much
Hi Gabriel,
First, Ruby is a bit of a mess as far as circular dependencies go. Some of
this is historic, and some of it is inherent to the design. I'm not too
surprised you're running into this issue.
The SimObject initialization is documented here:
Hi Chris,
Using Garnet or SimpleNetwork with Ruby will allow you to set the latency
of each link to anything you'd like and create any topology you'd like. You
should be able to configure this to model a multi-socket system. That said,
it's unclear if any of the current protocols will model a
Yes! That's no problem at all. The default memory controller is FRFCFS and
it snoops the write queue, so it is already out of order :).
Cheers,
Jason
On Tue, Mar 30, 2021 at 12:32 PM lsteiner--- via gem5-users <
gem5-users@gem5.org> wrote:
> Hi Jason,
> thanks for your help. So if I have
Hello,
Generally, there are no requirements or restrictions on memory access order
in gem5. In Ruby, you can create a protocol that requires the network to be
in order, but most protocols assume networks that are not explicitly
ordered. The classic caches have no ordering restrictions.
Cheers,
Hi Fugelin,
This is an interesting bug! I would guess that there's a packet being
copied in the cache when it should be reused. The simple cache isn't tested
with Arm, and it's really just an example and shouldn't be used for
anything "real". If you do figure out the bug, we'd love to accept your
What I can say confidently is that I've never been able to get a virtual
environment to work with gem5. It will 100% definitely work with python3
and scons installed on the system.
Jason
On Fri, Mar 26, 2021 at 10:18 PM haurunis--- via gem5-users <
gem5-users@gem5.org> wrote:
> Hi Jason,
>
>
Hello,
> I wonder how I can make gem5 detect the Python in my current anaconda
> env, which I can run directly via `python`?
This is difficult, and I've really struggled to get this to work with gem5.
You need to make sure that scons picks up the correct `python3-config`
binary. It *might* work to
Hi all,
We're having an issue with the billing for our Google cloud infrastructure.
There may be intermittent issues with gem5 infrastructure for a little
while. We hope to get this resolved within a few hours.
Cheers,
Jason
It's not terribly useful, but you can list all SimObjects with `gem5.opt
--list-simobjects` (or something like that, use `gem5.opt --help` to find
the exact argument).
You can also check in the src/mem/ directory for files that end in .py.
These files are the SimObject description files and all
Hi Krishnan,
There is also a native NVM model in gem5 now. See
http://www.gem5.org/2020/05/27/memory-controller.html for details.
Also, though not as well integrated with gem5, there is VANS from UCSD:
https://github.com/TheNetAdmin/VANS
Cheers,
Jason
On Tue, Feb 16, 2021 at 2:51 PM Samuel
Hello,
I don't think there's a simple way to do this in SE mode. In FS mode, you
could write a simple shell script to execute the program in a loop. One
thing you can do is to modify the program to execute the ROI multiple times.
Cheers,
Jason
On Sun, Jan 31, 2021 at 5:20 PM ABD ALRHMAN ABO
Hello,
If you want to trigger two different transitions based on different inputs,
this should always go in the in_port definition. For instance, see
http://www.gem5.org/documentation/learning_gem5/part3/cache-in-ports/. You
can base this on the message information by using peek() on the buffer.
I believe that's referring to RAM generally (e.g., registers, caches, DRAM,
etc.)
Cheers,
Jason
On Mon, Jan 11, 2021 at 10:23 PM husin alhaj ahmade via gem5-users <
gem5-users@gem5.org> wrote:
> "Gem5 already includes all key microarchitecture components which model
> hardware arrays on which
Hi Zhen,
Sorry for missing your previous message.
(1) I think the biggest difference is that the former does not implement a
port for each bank, is it right?
- I guess it assumes that the banks are the bottleneck, not the ports. It
assumes that the banks are distributed and have separate ports,
Hi Balazs,
That sounds a lot like elastic traces. See the documentation:
https://www.gem5.org/documentation/general_docs/cpu_models/TraceCPU and the
paper: https://ieeexplore.ieee.org/document/7482084.
Even if elastic traces don't work for your purpose, the code for them
should give you hints on
Hi Sam,
There's the "m5 utility" and the "m5 magic operations" for exactly what
you're describing! See util/m5 for details. There's some documentation here
http://www.gem5.org/documentation/general_docs/m5ops/ and in the code
https://gem5.googlesource.com/public/gem5/+/refs/heads/develop/util/m5/
Hello,
The quick answer is "no," we don't have any CXL implementation. However,
that seems like a great idea, and would be a great contribution to the
community!
I assume that you would be doing this Ruby. Feel free to let me know if you
have any questions or run into any issues.
Cheers,
Jason
Hi Leon,
SLICC really is its own language. It looks like C++ only to make it simpler
to implement, not because you should be able to write C++ code. If you
haven't read the Learning gem5 Ruby section, I would suggest starting
there: http://www.gem5.org/documentation/learning_gem5/part3/MSIintro/.
Hi Leon,
Scheduling arbitrary events is not allowed in SLICC. The SLICC language is
meant only for defining state machines and their actions. Any scheduling,
etc. should either be done within the restrictions of SLICC or outside of
SLICC in some other way. For instance, you could add a new method
Hey Patrick,
This isn't exactly an answer to your question, but you can find a similarly
"simple" x86 FS configuration in the gem5-resources repository. E.g.,
https://gem5.googlesource.com/public/gem5-resources/+/refs/heads/stable/src/boot-exit/configs/system/
.
I'm sure that it's *possible* to
Hi Arun,
That time is in simulator *ticks*, not cycles. By default, the tick time is
1ps, so that would be an average latency of 3.8us, which is high, but seems
possible for non-volatile memory.
Cheers,
Jason
On Sun, Nov 15, 2020 at 11:43 PM Arun Kavumkal via gem5-users <
gem5-users@gem5.org>
Hi Pedro,
No, I don't have any specific pointers beyond the code in src/sim/. One
quick note: on develop there is something in flux about how syscalls work.
There's been some recent changes from Gabe to the "Workload" and the
syscall dispatch. I have to admit I don't understand them, but it might
Hello,
(1) Yes, I believe so.
(2) I thought MOESI_hammer was annotated, but it doesn't look like it
(huh...). However, AMD MOESI Base is annotated. See all of the transitions
in the core-pair file, for instance:
Hello,
I've been using gem5 for ~10 years, and this is the first time I've ever
seen this code :D. It was committed 14 years ago, and it hasn't been
touched since.
It looks to me like it's used for gathering statistics about the activity
of different CPU pipeline stages. However, I *know* it's
Hello,
For (1), yes. You can set this *in the python configuration file*. You
should not modify the SimObject description file to change a default
parameter.
For (2), yes, that's exactly where you should modify.
Cheers,
Jason
On Wed, Oct 28, 2020 at 9:47 AM zhen bang via gem5-users <
Yes, this is possible, and I believe it's already implemented for Arm.
The best place to start is src/arch//tlb.cc
Cheers,
Jason
On Wed, Oct 28, 2020 at 1:27 AM Laney Laney via gem5-users <
gem5-users@gem5.org> wrote:
> Hi,all. I would like to know if it is possible to implement multi-level
>
Hi Pedro,
It would certainly be easier in FS mode :D. Also, I would worry that the
system call emulation layer might not model your application with high
enough fidelity if you care about multithreaded apps (e.g., the futex
system call will take 0 time in SE mode).
If you dynamically link your
Hi Hasan,
I agree with Abhishek. Something as complex as tensorflow is going to be
very difficult to get working in syscall emulation mode. Using full system
mode should work (though without things like GPU acceleration, of course).
Cheers,
Jason
On Fri, Oct 23, 2020 at 1:55 PM Abhishek Singh
In this case, I would use the resource stalls to model banking. You can
extend the BankedCache implementation to model arbitrary address
interleaving, if that's important to your model. To do this, you'll have to
add annotations to the transitions in the L0 and L1 cache, but this should
be easier
Honestly, I'm not sure. I would need to dig much deeper into the
MESI_Three_Level protocol to be able to help.
Jason
On Tue, Oct 20, 2020 at 9:06 PM 1154063264--- via gem5-users <
gem5-users@gem5.org> wrote:
> In MOESI_hammer, the state transition in the I state is defined as follows,
>
>
I would look to see how it's done in MOESI_hammer.
https://gem5.googlesource.com/public/gem5/+/refs/heads/stable/src/mem/ruby/protocol/MOESI_hammer-cache.sm#902
Cheers,
Jason
On Tue, Oct 20, 2020 at 9:04 AM 1154063264--- via gem5-users <
gem5-users@gem5.org> wrote:
> Hello Jason:
> It is
Hello,
The lack of error when writing a checkpoint doesn't mean it was successful.
Likely, the data was not written back to memory correctly if the random
test is failing.
No, you cannot use `writeCallbackScFail`. This is to signal that a store
conditional has failed.
It might help to learn
Hello,
DRAMSim isn't a drop-in replacement for the memory object anymore. Since
the change in the memory interface (
http://www.gem5.org/project/2020/10/01/gem5-20-1.html#new-dram-interface-contributed-by-wendy-elsasser)
you can't just drop in a different type of DRAM model.
You'll probably have
Hello,
It depends on how you want to model banking. If you just want to set and
limit the bandwidth to a cache, you can use the "resourceStalls = true"
option on the RubyCache object and set the tag and data array values. You
will also have to tag every transition in the cache controller (i.e.,
Hello,
It's difficult for me to say for certain without digging much deeper.
However, my gut says the latter is probably closer to correct. I doubt that
you can drop the line without first receiving an ack (somehow).
I'm not sure if this was said before, but you can use the Ruby random
tester in
That's good to know, thanks, Ryan!
Is there any reason not to merge this? I know it's not a "perfect"
solution, but it would be nice if people didn't keep running into this
issue.
Bobby, can you test on both Intel and AMD (let me know if you need access
to an Intel machine). If it works, can you
Hello,
If the block is already in I, then there shouldn't be anything to
flush/write back. You should be able to simply do something like
transition(I, Flush_line) {
    flushResponse;
}
action(flushResponse) {
    sequencer.flushResponse(); (or whatever this function is on the sequencer)
}
Cheers,
Hi Teo,
That error is because you're still using MESI_Two_Level. You need to
recompile gem5 to use a different protocol.
I.e., to use MESI_Two_Level you could do the following:
> scons build/X86_MESI_Two_Level/gem5.opt
> build/X86_MESI_Two_Level/gem5.opt
To use MOESI_hammer you could do the
Hi Davide,
I echo 100% of what Giacomo said. Also, there's a proposal for updating the
Python API here: https://gem5.atlassian.net/browse/GEM5-432. That proposal
is a pretty extreme example, and we'll probably end up with something
closer to the status quo.
You can also check out Learning gem5
Hi Yifan,
To use the MMIO version of the m5 utility, you simply need to pass the
--addr parameter (if I remember correctly). Running `m5 --help` should
explain the options.
Cheers,
Jason
On Thu, Oct 8, 2020 at 4:14 AM wrote:
> Hi Jason,
>
> Thank you for your quick reply. I'll check out the
Hi Yifan,
First of all, the branch that you referred to is pretty old. I would use
gem5-20.1. You can check out the gem5-resources for information on how to
get KVM+x86 working with SPEC, Parsec, and other benchmarks.
http://www.gem5.org/documentation/general_docs/gem5_resources/
Switching CPUs
Hi Daecheol,
The complication comes from handling all of the corner cases. What happens
if you get a flush for a line when you've sent a request but haven't
received a response? What about when you've received an invalidation, but
you haven't responded yet? There are many intermediate states that
Hi Kavya,
This looks like a bug! The RubyPrefetcher has never been regularly tested,
as far as I know. I would guess the place where these were updated got
deleted at some point and no one noticed. I'd try checking out a gem5 from
5 years ago and see if it's there. We would welcome a contribution
Hi Balazs,
What you suggest sounds like a panacea! I'm not sure it's possible, though
:(.
There are multiple different levels of fidelity between the O3 CPU and the
Timing Simple CPU. You could imagine other CPU models with more or less
fidelity as well, but currently these are the two main
Hi Daecheol,
Unfortunately, the flush command is a bit more complicated to implement
than just a simple replacement. I responded to another message about this
on the mailing list a few minutes ago that you can see for more information.
Cheers,
Jason
On Mon, Sep 28, 2020 at 8:59 PM Daecheol You
Hi Shougang,
I think you can use .value() to get the actual value out of the stat.
That should be easy to cast to an unsigned, if needed.
However, I think there might be some confusion on how to register/use the
stats. If you've registered it correctly and it is updated during
simulation (e.g.,
Hello,
Yeah, adding flush to a protocol is a pretty large task, but it shouldn't
be too difficult. The key difficulty will be testing it, but the Ruby
tester does support testing flushes (probably not out of the box with
ruby_random_test.py, though).
There's no particular reason except that we
Hi everyone,
We are sending this email to be as transparent as possible. The gem5
community is respectful and inclusive of all people, and we enforce these
standards in all community spaces. We as members, contributors, and leaders
pledge to make participation in our community a harassment-free
opinions. Let's work
> together to keep it away from it.
>
> Hope you all have a good weekend.
>
> Thanks,
>
> [image: Screen Shot 2020-10-02 at 5.41.55 PM.png]
>
>
> -Tao
>
>
> On Fri, Oct 2, 2020 at 10:26 AM Jason Lowe-Power via gem5-users <
> gem5-use
Hi everyone,
A few things:
1. Being polite is important! We strive to make the gem5 community
inclusive and welcoming. :)
2. Email can be very impersonal. Adding a greeting and a signature with
your name helps create a more welcoming environment!
3. When asking questions on the mailing list, help
Hi Daecheol,
You're correct that MESI_Three_Level, like MESI_Two_Level, has a shared
(banked) LLC.
The L2 cache is chosen based on the function "mapAddressToRange" (see
https://gem5.googlesource.com/public/gem5/+/refs/heads/stable/src/mem/ruby/protocol/MESI_Three_Level-L1cache.sm#403
).
The
See the `readFunc` and `writeFunc` implementations in sycall_emul.hh:
https://gem5.googlesource.com/public/gem5/+/refs/heads/stable/src/sim/syscall_emul.hh#2454
On Tue, Sep 22, 2020 at 12:25 PM ABD ALRHMAN ABO ALKHEEL via gem5-users <
gem5-users@gem5.org> wrote:
> Hi All, can I track the
Hi Shyam,
I don't think so! However, we'd be happy to accept the contribution of new
indirect predictors!
Cheers,
Jason
On Fri, Sep 18, 2020 at 4:07 PM Shyam Murthy via gem5-users <
gem5-users@gem5.org> wrote:
> Hi All,
>
> Is there any other target predictor for indirect jumps on gem5 apart
Hi Shaikhul,
"trigger" isn't really a function. It's a language statement (like, `if` in
C/C++).
SLICC doesn't always give the most helpful errors. However, I notice that
the error is actually showing line 124, not line 219. That seems suspicious
to me, and I'd look more into that.
Also, you can
Hi Aamir,
Slight changes (e.g., less than 1%) wouldn't be too surprising as SE mode
isn't always deterministic. Additionally, you're changing the number of
instructions and the layout of the binary when you add instructions. So,
again, slight changes wouldn't be surprising.
If the changes in
Hi Aritra,
See answers inline below.
Cheers,
Jason
On Tue, Sep 15, 2020 at 3:12 AM Aritra Bagchi via gem5-users <
gem5-users@gem5.org> wrote:
> Hi all,
>
> I have two questions regarding the classic cache of gem5. They are as
> follows:
>
> 1. Last-level caches in real hardware are usually not
On Tue, Sep 15, 2020 at 1:47 AM Liyichao wrote:
> I believe we make many small writes to disk (i.e., each object is a
> separate write).
>
>
>
> - yes, this is a point we have modified and it is to be tested.
>
>
>
> It may be possible to instead use an mmap file and flush it to disk at the
>
This isn't a problem we've seen before, but it's not particularly
surprising. I believe we make many small writes to disk (i.e., each object
is a separate write). I'm not sure how to fix this, but it might be a place
to start.
Another place to look is that most of the data written is the contents
Hi Farhad,
The short answer is "no". You're modifying the cache behavior, which is
going to have implications on the coherence protocol. This is both the good
thing and the bad thing about using Ruby... it forces you to think through
many of the actual implementation details.
Cheers,
Jason
On