[gem5-users] Generate Multiple Trace Files for Multi-Threaded Workloads on FS

2023-09-08 Thread Abdelrahman S. Hussein via gem5-users
Hi,

I am seeking to generate multiple trace files for multi-threaded workloads
that run in FS (Full-System simulation mode). My plan is to configure the
simulation to have multiple cores, boot the image, run the workload, and
record the traces of instructions that run on each core, such that each
core/thread has its own trace file. My end goal is to have a number of
trace files, each representing a core or a thread.

Questions:

   - Is gem5 capable of generating multiple trace files?
   - I am checking the Trace CPU Model page on the gem5 website. It has this
   statement: "The traces have been developed for
   single-threaded benchmarks simulating in both SE and FS mode". Does this
   mean generating multiple trace files for different threads/cores
   is unsupported by gem5?
   - Is the O3 CPU capable of recording such traces? The goal is to
   generate traces using an Out-of-Order superscalar CPU.
   - I may have to add a few more fields to the instruction class, such as
   a boolean flag indicating whether the instruction is a branch. Which
   file(s) should I look at?


Looking forward to your answer.

Thanks,
~Abdelrahman

--

*Best,Abdelrahman Hussein*
___
gem5-users mailing list -- gem5-users@gem5.org
To unsubscribe send an email to gem5-users-le...@gem5.org


[gem5-users] Re: Not being able to execute GPU FS example 'hip_samples.py'

2023-09-08 Thread Poremba, Matthew via gem5-users

This is not the first time I am hearing about this issue.  It seems stable 
needs to be hotfixed for GPU.

For now, you can try the develop branch instead.  It is tested quite well so it 
is relatively stable anyway despite the name.


-Matt

From: Pau Galindo Figuerola via gem5-users 
Sent: Friday, September 8, 2023 10:02 AM
To: gem5-users@gem5.org
Cc: Pau Galindo Figuerola 
Subject: [gem5-users] Not being able to execute GPU FS example 'hip_samples.py'


Hi,

I'm trying to set up a GPU FS environment following the guidelines from here:  
https://github.com/gem5/gem5-resources/blob/stable/src/gpu-fs/README.md

When I first tried to execute 'hip_samples.py', I got an error saying that 
'exit_at_gpu_kernel' didn't exist, so I added the parameter to 'runfs.py' to 
work around that. But when I execute again, a fatal error is raised saying the 
following:

src/mem/physical.cc:247: fatal: Could not mmap 17179869184 bytes for range 
[0:0x4]!
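
For what it's worth, the byte count in that fatal message is exactly 16 GiB of host address space, which the host must be able to mmap for the simulated memory range; a quick arithmetic check:

```python
# The failing mmap in src/mem/physical.cc requests this many bytes:
requested_bytes = 17179869184

# That is exactly 16 GiB, so the host needs that much mappable address
# space (not necessarily resident RAM) for the simulated memory range.
print(requested_bytes / 2**30)        # 16.0
print(requested_bytes == 16 * 2**30)  # True
```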

It does also happen with 'hip_cookbook.py' and 'hip_rodinia.py' examples.

Could someone help me understand this and how to fix it? Aren't the examples 
meant to work without modifying anything?

I'm using the current stable branch github version for both gem5 and 
gem5-resources.

Thank you in advance!

Best regards,
Pau


Command Executed:

build/VEGA_X86/gem5.opt configs/example/gpufs/hip_samples.py --disk-image 
/home/pau-blackton/TFG/gem5-resources/src/gpu-fs/disk-image/rocm42/rocm42-image/rocm42
 --kernel 
/home/pau-blackton/TFG/gem5-resources/src/gpu-fs/vmlinux-5.4.0-105-generic 
--gpu-mmio-trace /home/pau-blackton/TFG/gem5-resources/src/gpu-fs/vega_mmio.log 
--app PrefixSum

Full trace:

gem5 Simulator System.  https://www.gem5.org
gem5 is copyrighted software; use the --copyright option for details.

gem5 version 23.0.1.0
gem5 compiled Sep  8 2023 15:36:02
gem5 started Sep  8 2023 18:35:16
gem5 executing on paublackton-MS-7A72, pid 5907
command line: build/VEGA_X86/gem5.opt configs/example/gpufs/hip_samples.py 
--disk-image 
/home/pau-blackton/TFG/gem5-resources/src/gpu-fs/disk-image/rocm42/rocm42-image/rocm42
 --kernel 
/home/pau-blackton/TFG/gem5-resources/src/gpu-fs/vmlinux-5.4.0-105-generic 
--gpu-mmio-trace /home/pau-blackton/TFG/gem5-resources/src/gpu-fs/vega_mmio.log 
--app PrefixSum

/home/pau-blackton/TFG/gem5-resources/src/gpu-fs/disk-image/rocm42/rocm42-image/rocm42
warn: The `get_runtime_isa` function is deprecated. Please migrate away from 
using this function.
Global frequency set at 1 ticks per second
warn: system.workload.acpi_description_table_pointer.rsdt adopting orphan 
SimObject param 'entries'
warn: No dot file generated. Please install pydot to generate the dot file and 
pdf.
src/mem/dram_interface.cc:690: warn: DRAM device capacity (8192 Mbytes) does 
not match the address range assigned (4096 Mbytes)
src/sim/kernel_workload.cc:46: info: kernel located at: 
/home/pau-blackton/TFG/gem5-resources/src/gpu-fs/vmlinux-5.4.0-105-generic
src/base/statistics.hh:279: warn: One of the stats is a legacy stat. Legacy 
stat is a stat that does not belong to any statistics::Group. Legacy stat is 
deprecated.
src/base/stats/storage.hh:278: warn: Bucket size (5) does not divide range 
[1:75] into equal-sized buckets. Rounding up.
src/base/stats/storage.hh:278: warn: Bucket size (2) does not divide range 
[1:10] into equal-sized buckets. Rounding up.
src/base/stats/storage.hh:278: warn: Bucket size (2) does not divide range 
[1:64] into equal-sized buckets. Rounding up.
src/base/stats/storage.hh:278: warn: Bucket size (1) does not divide range 
[1:1e+06] into equal-sized buckets. Rounding up.
src/base/stats/storage.hh:278: warn: Bucket size (5) does not divide range 
[1:75] into equal-sized buckets. Rounding up.
src/base/stats/storage.hh:278: warn: Bucket size (2) does not divide range 
[1:10] into equal-sized buckets. Rounding up.
src/base/stats/storage.hh:278: warn: Bucket size (2) does not divide range 
[1:64] into equal-sized buckets. Rounding up.
src/base/stats/storage.hh:278: warn: Bucket size (1) does not divide range 
[1:1e+06] into equal-sized buckets. Rounding up.
src/base/stats/storage.hh:278: warn: Bucket size (5) does not divide range 
[1:75] into equal-sized buckets. Rounding up.
src/base/stats/storage.hh:278: warn: Bucket size (2) does not divide range 
[1:10] into equal-sized buckets. Rounding up.
src/base/stats/storage.hh:278: warn: Bucket size (2) does not divide range 
[1:64] into equal-sized buckets. Rounding up.
src/base/stats/storage.hh:278: warn: Bucket size (1) does not divide range 
[1:1e+06] into equal-sized buckets. Rounding up.
src/base/stats/storage.hh:278: warn: Bucket size (5) does not divide range 
[1:75] into equal-sized buckets. Rounding up.
src/base/stats/storage.hh:278: warn: Bucket size (2) does not divide range 
[1:10] into equal-

[gem5-users] Not being able to execute GPU FS example 'hip_samples.py'

2023-09-08 Thread Pau Galindo Figuerola via gem5-users
Hi,

I'm trying to set up a GPU FS environment following the guidelines from
here:
https://github.com/gem5/gem5-resources/blob/stable/src/gpu-fs/README.md

When I first tried to execute 'hip_samples.py', I got an error saying that
'exit_at_gpu_kernel' didn't exist, so I added the parameter to 'runfs.py' to
work around that. But when I execute again, a fatal error is raised saying the
following:

src/mem/physical.cc:247: fatal: Could not mmap 17179869184 bytes for range
[0:0x4]!

It does also happen with 'hip_cookbook.py' and 'hip_rodinia.py' examples.

Could someone help me understand this and how to fix it? Aren't the
examples meant to work without modifying anything?

I'm using the current stable branch github version for both gem5 and
gem5-resources.

Thank you in advance!

Best regards,
Pau


Command Executed:

build/VEGA_X86/gem5.opt configs/example/gpufs/hip_samples.py --disk-image
/home/pau-blackton/TFG/gem5-resources/src/gpu-fs/disk-image/rocm42/rocm42-image/rocm42
--kernel
/home/pau-blackton/TFG/gem5-resources/src/gpu-fs/vmlinux-5.4.0-105-generic
--gpu-mmio-trace
/home/pau-blackton/TFG/gem5-resources/src/gpu-fs/vega_mmio.log --app
PrefixSum

Full trace:

gem5 Simulator System.  https://www.gem5.org
gem5 is copyrighted software; use the --copyright option for details.

gem5 version 23.0.1.0
gem5 compiled Sep  8 2023 15:36:02
gem5 started Sep  8 2023 18:35:16
gem5 executing on paublackton-MS-7A72, pid 5907
command line: build/VEGA_X86/gem5.opt configs/example/gpufs/hip_samples.py
--disk-image
/home/pau-blackton/TFG/gem5-resources/src/gpu-fs/disk-image/rocm42/rocm42-image/rocm42
--kernel
/home/pau-blackton/TFG/gem5-resources/src/gpu-fs/vmlinux-5.4.0-105-generic
--gpu-mmio-trace /home/pau-blackton/TFG/gem5-resources/src/gpu-fs/vega_mmio.log
--app PrefixSum

/home/pau-blackton/TFG/gem5-resources/src/gpu-fs/disk-image/rocm42/rocm42-image/rocm42
warn: The `get_runtime_isa` function is deprecated. Please migrate away from
using this function.
Global frequency set at 1 ticks per second
warn: system.workload.acpi_description_table_pointer.rsdt adopting orphan
SimObject param 'entries'
warn: No dot file generated. Please install pydot to generate the dot file and
pdf.
src/mem/dram_interface.cc:690: warn: DRAM device capacity (8192 Mbytes) does
not match the address range assigned (4096 Mbytes)
src/sim/kernel_workload.cc:46: info: kernel located at:
/home/pau-blackton/TFG/gem5-resources/src/gpu-fs/vmlinux-5.4.0-105-generic
src/base/statistics.hh:279: warn: One of the stats is a legacy stat. Legacy
stat is a stat that does not belong to any statistics::Group. Legacy stat is
deprecated.
src/base/stats/storage.hh:278: warn: Bucket size (5) does not divide range [1:75] into equal-sized buckets. Rounding up.
src/base/stats/storage.hh:278: warn: Bucket size (2) does not divide range [1:10] into equal-sized buckets. Rounding up.
src/base/stats/storage.hh:278: warn: Bucket size (2) does not divide range [1:64] into equal-sized buckets. Rounding up.
src/base/stats/storage.hh:278: warn: Bucket size (1) does not divide range [1:1e+06] into equal-sized buckets. Rounding up.
src/base/stats/storage.hh:278: warn: Bucket size (5) does not divide range [1:75] into equal-sized buckets. Rounding up.
src/base/stats/storage.hh:278: warn: Bucket size (2) does not divide range [1:10] into equal-sized buckets. Rounding up.
src/base/stats/storage.hh:278: warn: Bucket size (2) does not divide range [1:64] into equal-sized buckets. Rounding up.
src/base/stats/storage.hh:278: warn: Bucket size (1) does not divide range [1:1e+06] into equal-sized buckets. Rounding up.
src/base/stats/storage.hh:278: warn: Bucket size (5) does not divide range [1:75] into equal-sized buckets. Rounding up.
src/base/stats/storage.hh:278: warn: Bucket size (2) does not divide range [1:10] into equal-sized buckets. Rounding up.
src/base/stats/storage.hh:278: warn: Bucket size (2) does not divide range [1:64] into equal-sized buckets. Rounding up.
src/base/stats/storage.hh:278: warn: Bucket size (1) does not divide range [1:1e+06] into equal-sized buckets. Rounding up.
src/base/stats/storage.hh:278: warn: Bucket size (5) does not divide range [1:75] into equal-sized buckets. Rounding up.
src/base/stats/storage.hh:278: warn: Bucket size (2) does not divide range [1:10] into equal-sized buckets. Rounding up.
src/base/stats/storage.hh:278: warn: Bucket size (2) does not divide range [1:64] into equal-sized buckets. Rounding up.
src/base/stats/storage.hh:278: warn: Bucket size (1) does not divide range [1:1e+06] into equal-sized buckets. Rounding up.
src/base/stats/storage.hh:278: warn: Bucket size (1) does not divide range [1:1.6e+06] into equal-sized buckets. Rounding up.
src/base/stats/storage.hh:278: warn: Bucket size (1) does not divide range [1:1.6e+06] into equal-sized buckets. Rounding up.
src/base/stats/storage.h

[gem5-users] Re: Error in an application running on gem5 GCN3 (with apu_se.py)

2023-09-08 Thread Poremba, Matthew via gem5-users

Hi Anoop,


Based on that register count, I am going to guess you built the application 
with -O0 or some other debugging flags?  If you do this, the compiler requests 
a very large number of registers; I assume that is so a real GPU will not run 
any other applications simultaneously.

Similarly, if you are seeing s_sendmsg, I am going to guess there is a printf() 
in your GPU kernel.  These aren't currently supported in gem5, though they 
would be very nice to have.

If these are true you will need to remove any printfs and compile with at least 
-O1 to run in gem5.


-Matt

From: Anoop Mysore 
Sent: Friday, September 8, 2023 7:33 AM
To: Matt Sinclair 
Cc: The gem5 Users mailing list ; Poremba, Matthew 

Subject: Re: [gem5-users] Re: Error in an application running on gem5 GCN3 
(with apu_se.py)


Hi Matt,
I'm facing a few other problems:
1. `panic: panic condition (numWfs * vregDemandPerWI) > (numVectorALUs * 
numVecRegsPerSimd) occurred: WG with 1 WFs and 29285 VGPRs per WI can not be 
allocated to CU that has 8192 VGPRs`
The corresponding line of the code in gem5: 
https://github.com/gem5/gem5/blob/f29bfc0640c88a79eb7f94454ce31b3237ec0066/src/gpu-compute/compute_unit.cc#L565
One of the variables (vregDemandPerWI) is ultimately derived from reading the 
executable for the kernel code. Is it possible to reduce this VGPR demand 
somehow, or is increasing the VGPRs (to what seems like an unrealistically high 
value) the only solution? A similar error occurs for SGPRs as well.
2. Some kernels (compiled for gfx801/3) have instructions such as ds_add_u32 
(Data Store instruction page: 12-161) and s_sendmsg (send message to host CPU) 
which do not have their relevant decoding code available. Is this intentional, 
or was this just punted for later -- anything to keep in mind when coding for 
these?
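
The panic condition in question 1 can be restated with the numbers from the message (plain arithmetic, not gem5 code):

```python
# Restating the workgroup admission check behind the panic: a WG is
# rejected when its total VGPR demand exceeds the CU's register capacity.
num_wfs = 1                  # wavefronts in the workgroup
vreg_demand_per_wi = 29285   # VGPRs per work-item, read from the binary
cu_vgpr_capacity = 8192      # numVectorALUs * numVecRegsPerSimd

print(num_wfs * vreg_demand_per_wi > cu_vgpr_capacity)  # True -> panic
```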




On Thu, Aug 17, 2023 at 5:13 PM Matt Sinclair wrote:
Hi Anoop,

I'm glad that increasing -n helped.  It's hard to say what exactly the problem 
is without digging in further, but often the ROCm stack will launch additional 
processes to do a variety of things (e.g., check which version of LLVM is being 
used).  In gem5, each of these require a separate CPU thread context -- which 
increasing -n handles in SE mode.  So if I had to guess, I would say that this 
is what is happening.

If you added gdb locally to your docker, and you built the docker properly, 
then I would expect gdb to work with gem5.

Thanks,
Matt

On Wed, Aug 16, 2023 at 11:41 PM Anoop Mysore wrote:
Thank you, Matt, having 10 CPUs (up from previous 3) in the simulated system 
seems to make it work! (At least, I don't see that error at that point 
anymore). Is "resource temporarily unavailable" commonly due to CPU count? 
Curious to know how you made that connection.

Re gdb: I am indeed using a local docker build (gem5/util/dockerfiles/gcn-gpu) 
with an added gdb installation -- is that what you meant?

Will send in a PR to the repo soon as I'm done :)
On Wed, Aug 16, 2023, 5:03 PM Matt Sinclair wrote:
Hi Anoop,

A few things here:

- Regarding the original failure (at least the !FS part), this is normally 
happening either because of the GPU Target ISA (e.g., gfx900) you used in your 
Makefile (e.g., it is not supported) or because you didn't properly specify 
what GPU ISA you are using when running the program.  So, what is your command 
line for running this application and what ISA are you specifying in your 
Makefile?
- If the "what()" is the real source of the error, then I think this could be 
related to the number of CPU thread contexts you are running with gem5.  What 
did you set "-n" to?
- Regarding gdb, @Matt P: did you remove gdb from what is installed in the 
Docker a while back?  If so, I think Anoop would need to add it back and create 
a local docker or something like that.
- Setting aside the above, it would be wonderful if you contribute the CHAI 
benchmarks to gem5-resources once you get them working!  Please let us know if 
we can do anything to help with that.

Thanks,
Matt

On Wed, Aug 16, 2023 at 9:51 AM Anoop Mysore via gem5-users wrote:
Curiously, running the gem5.debug executable with gdb within docker results in:
Reading symbols from gem5/build/GCN3_X86/gem5.debug...
(gdb) quit
(the quit wasn't a command I provided, it just quits automatically). Is gdb 
working with gem5 GCN3 in Docker?

I ran gem5.opt with ExecAll and SyscallAll debug flags, the debug tail and the 
simerr logs are attached.
I don't see anything peculiar other than a tgkill syscall with a SIGABRT sent 
to a thread thereafter halting within a few instructions.

[gem5-users] Re: Error in an application running on gem5 GCN3 (with apu_se.py)

2023-09-08 Thread Anoop Mysore via gem5-users
Hi Matt,
I'm facing a few other problems:
1. `panic: panic condition (numWfs * vregDemandPerWI) > (numVectorALUs *
numVecRegsPerSimd) occurred: WG with 1 WFs and 29285 VGPRs per WI can not
be allocated to CU that has 8192 VGPRs`
The corresponding line of the code in gem5:
https://github.com/gem5/gem5/blob/f29bfc0640c88a79eb7f94454ce31b3237ec0066/src/gpu-compute/compute_unit.cc#L565
One of the variables (vregDemandPerWI) is ultimately derived from reading
the executable for the kernel code. Is it possible to reduce this VGPR
demand somehow, or is increasing the VGPRs (to what seems like an
unrealistically high value) the only solution? A similar error occurs for
SGPRs as well.
2. Some kernels (compiled for gfx801/3) have instructions such as ds_add_u32
(Data Store instruction page: 12-161) and s_sendmsg (send message to host CPU)
which do not have their relevant decoding code available. Is this intentional,
or was this just punted for later -- anything to keep in mind when coding for
these?




On Thu, Aug 17, 2023 at 5:13 PM Matt Sinclair 
wrote:

> Hi Anoop,
>
> I'm glad that increasing -n helped.  It's hard to say what exactly the
> problem is without digging in further, but often the ROCm stack will launch
> additional processes to do a variety of things (e.g., check which version
> of LLVM is being used).  In gem5, each of these require a separate CPU
> thread context -- which increasing -n handles in SE mode.  So if I had to
> guess, I would say that this is what is happening.
>
> If you added gdb locally to your docker, and you built the docker
> properly, then I would expect gdb to work with gem5.
>
> Thanks,
> Matt
>
> On Wed, Aug 16, 2023 at 11:41 PM Anoop Mysore  wrote:
>
>> Thank you, Matt, having 10 CPUs (up from previous 3) in the simulated
>> system seems to make it work! (At least, I don't see that error at that
>> point anymore). Is "resource temporarily unavailable" commonly due to CPU
>> count? Curious to know how you made that connection.
>>
>> Re gdb: I am indeed using a local docker build
>> (gem5/util/dockerfiles/gcn-gpu) with an added gdb installation -- is that
>> what you meant?
>>
>> Will send in a PR to the repo soon as I'm done :)
>>
>> On Wed, Aug 16, 2023, 5:03 PM Matt Sinclair 
>> wrote:
>>
>>> Hi Anoop,
>>>
>>> A few things here:
>>>
>>> - Regarding the original failure (at least the !FS part), this is
>>> normally happening either because of the GPU Target ISA (e.g., gfx900) you
>>> used in your Makefile (e.g., it is not supported) or because you didn't
>>> properly specify what GPU ISA you are using when running the program.  So,
>>> what is your command line for running this application and what ISA are you
>>> specifying in your Makefile?
>>> - If the "what()" is the real source of the error, then I think this
>>> could be related to the number of CPU thread contexts you are running with
>>> gem5.  What did you set "-n" to?
>>> - Regarding gdb, @Matt P: did you remove gdb from what is installed in
>>> the Docker a while back?  If so, I think Anoop would need to add it back
>>> and create a local docker or something like that.
>>> - Setting aside the above, it would be wonderful if you contribute the
>>> CHAI benchmarks to gem5-resources once you get them working!  Please let us
>>> know if we can do anything to help with that.
>>>
>>> Thanks,
>>> Matt
>>>
>>> On Wed, Aug 16, 2023 at 9:51 AM Anoop Mysore via gem5-users <
>>> gem5-users@gem5.org> wrote:
>>>
 Curiously, running the gem5.debug executable with gdb within docker results
 in:
 Reading symbols from gem5/build/GCN3_X86/gem5.debug...
 (gdb) quit
 (the quit wasn't a command I provided, it just quits automatically). Is
 gdb working with gem5 GCN3 in Docker?

 I ran gem5.opt with ExecAll and SyscallAll debug flags, the debug tail
 and the simerr logs are attached.
 I don't see anything peculiar other than a tgkill syscall with a
 SIGABRT sent to a thread thereafter halting within a few instructions.

 On Tue, Aug 15, 2023 at 9:00 PM Anoop Mysore 
 wrote:

> I am trying to port CHAI benchmarks similarly to
> gem5-resources/src/gpu/pannotia.
> I was able to HIPify (through the perl script + some manual changes) all
> the code files, and ran the BFS program. I see the following error message
> at the point of launching the CPU threads here
> 
>  (fork
> of HIPified CHAI). I do not see any of the prints from the CPU threads
> which leads me to believe the error is to do with the threads not being
>

[gem5-users] Simulation of MSHR and write-back buffer in Ruby?

2023-09-08 Thread Ghadeer Almusaddar via gem5-users
Hello,

How are MSHRs and write-back buffers implemented in Ruby? Is the TBE used as an
MSHR and write-back buffer? If so, is it possible to control the sizes of the
MSHR and write-back buffer independently?

Thanks,
Ghadeer


[gem5-users] Re: Counters for # DRAM reads, writes, page hits, and page misses

2023-09-08 Thread Aritra Bagchi via gem5-users
Hi Eliot,

In the stats, I found some of the counters I wanted, and in the source code I
can now look at how they are computed and get ideas. I wanted these data not at
the end of simulation but at intermediate times; I could obtain them by
controlling some parameters.

Thanks,
Aritra



On Fri, Sep 8, 2023 at 6:06 PM Eliot Moss  wrote:

> On 9/8/2023 2:55 AM, Aritra Bagchi via gem5-users wrote:
> > Hi all,
> >
> > Can anyone indicate how to extract performance counters such as the
> number of DRAM read operations,
> > the number of DRAM write operations, the number of times a page miss
> occurs, etc.?
> >
> > Inside src/mem/mem_ctrl.cc, MemCtrl::recvTimingReq( ) method, there are
> two methods for inserting
> > new read and write operations into their respective queues,
> namely addToReadQueue( )
> and addToWriteQueue( ). Can the #reads and #writes be obtained from
> here? And what about # page
> > hits/misses? Any help is appreciated.
>
> The way things generally work in gem5 is that you get a stats dump at
> the end of a run.  There are also ways to request such dumps more
> frequently.
> You get a lot of details about accesses to caches and memories.  Are you
> looking at stats dumps and not seeing what you hope for?
>
> Best - Eliot Moss
>


[gem5-users] Re: Counters for # DRAM reads, writes, page hits, and page misses

2023-09-08 Thread Eliot Moss via gem5-users

On 9/8/2023 2:55 AM, Aritra Bagchi via gem5-users wrote:

Hi all,

Can anyone indicate how to extract performance counters such as the number of DRAM read operations, 
the number of DRAM write operations, the number of times a page miss occurs, etc.?


Inside src/mem/mem_ctrl.cc, MemCtrl::recvTimingReq( ) method, there are two methods for inserting 
new read and write operations into their respective queues, namely addToReadQueue( ) 
and addToWriteQueue( ). Can the #reads and #writes be obtained from here? And what about # page 
hits/misses? Any help is appreciated.


The way things generally work in gem5 is that you get a stats dump at
the end of a run.  There are also ways to request such dumps more frequently.
You get a lot of details about accesses to caches and memories.  Are you
looking at stats dumps and not seeing what you hope for?
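
One way to request intermediate dumps from a run script, sketched from the
Python stats API in src/python/m5/stats (names should be checked against your
gem5 version):

```python
# Hedged sketch: periodic and manual stats dumps in a gem5 run script.
# Not runnable standalone -- assumes it executes inside a standard gem5
# configuration script after the system has been instantiated.
import m5

# Ask for a stats dump every simulated millisecond.
m5.stats.periodicStatDump(m5.ticks.fromSeconds(0.001))

# Or bracket a region of interest manually:
m5.stats.reset()
event = m5.simulate(100_000_000)  # simulate some number of ticks
m5.stats.dump()
```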

Best - Eliot Moss


[gem5-users] Counters for # DRAM reads, writes, page hits, and page misses

2023-09-08 Thread Aritra Bagchi via gem5-users
Hi all,

Can anyone indicate how to extract performance counters such as the number
of DRAM read operations, the number of DRAM write operations, the number of
times a page miss occurs, etc.?

Inside src/mem/mem_ctrl.cc, MemCtrl::recvTimingReq( ) method, there are two
methods for inserting new read and write operations into their respective
queues, namely addToReadQueue( ) and addToWriteQueue( ). Can the #reads and
#writes be obtained from here? And what about # page hits/misses? Any
help is appreciated.

Regards,
Aritra