Hi,
As far as I know, you can't send data of that size unless you change the
parameter that controls how big a packet can be, or issue multiple packets
at the same time.
For the memory port, you can use the virtual-to-physical mapping feature,
which I believe is much easier. To do this you can
Hi Dan,
Attempting to answer your questions in order:
- Yes, by data cache I meant a global memory array. You've highlighted the
issue exactly, though: by making the array a global memory array instead, it
will now be subject to thrashing with other global memory data. You could
try
Matt,
Thanks for the detailed response. Yeah that sounds pretty involved, I
probably won't go down that path unless I see no other way.
When you say the data cache, do you mean making it a global memory array?
This is actually what I already have, and I wanted to keep the "constant"
data from
Hi All,
I am using the gem5 simulator to collect statistics for a
micro-benchmark program. I am encountering a "functional read access
failed" error for address "0x".
I have attached the source file of the micro-benchmark program. The
simulation runs fine for cases "1" and "4" in the switch
Hi Muhammad,
I want to read 256 bytes of data from a memory address in gem5. I did the
following, but I have two issues: how to specify the data size (256
bytes), and how to get the memory port. Any help would be appreciated.
RequestPtr req1;
PacketPtr newPkt = new
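For what it's worth, a hedged sketch of how such a read is typically constructed in recent gem5. This is framework code, not standalone-compilable; the address, the flags, the `requestorId`, and the `memPort` member are assumptions standing in for whatever your SimObject actually has:

```cpp
// Sketch only: assumes gem5 headers and a SimObject with a RequestPort.
Addr addr = 0x1000;          // hypothetical physical address
unsigned size = 256;         // the size is carried by the Request

// The Request constructor takes the address, size, flags, and requestor ID.
RequestPtr req1 = std::make_shared<Request>(
    addr, size, Request::PHYSICAL, requestorId);  // requestorId assumed

PacketPtr newPkt = new Packet(req1, MemCmd::ReadReq);
newPkt->allocate();          // back the packet with its own storage

// The memory port is usually a RequestPort member of your SimObject;
// a functional access fills the packet immediately, outside of timing.
memPort.sendFunctional(newPkt);           // memPort: assumed member name
uint8_t *data = newPkt->getPtr<uint8_t>();
```

If you need a timed access instead, you would use `sendTimingReq` and handle the response in the port's `recvTimingResp` callback rather than reading the data right away.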
Hi Daniel,
If you don't mind, can you please post the patch(es) to develop? It would
be great to have these included in the publicly available code.
Thanks,
Matt
On Tue, Aug 18, 2020 at 11:22 AM Sampad Mohapatra wrote:
> Hey Daniel,
>
> Thanks for the patch. I am using the staging branch.
>
Hi Dan,
Tony will have to confirm, but I believe AMD didn’t add support for
constant memory because none of the applications they looked at used it.
The mincore error is kind of a catch-all, saying that something bad
happened and you went down a failure path.
Assuming the above is correct, if
Hey Daniel,
Thanks for the patch. I am using the staging branch.
If you have added any stats to the L3, can you please provide a patch for
that as well?
Thanks again,
Sampad
On Tue, Aug 18, 2020 at 12:06 PM Daniel Gerzhoy
wrote:
> Sampad,
>
> I thought the L3 would be inclusive too, but the
Sampad,
I thought the L3 would be inclusive too, but the code reads otherwise: it
only caches entries on a writeback (always) or a writethrough (if enabled).
I am considering changing it to be inclusive for my own purposes (I don't
know if that is something the community would want or not). Which
Hi Daniel,
I am just starting out so it would be really helpful if you could kindly
provide your patches.
Have you verified the changes? Otherwise I will try to verify them.
Also, isn't the L3 inclusive?
Thank you,
Sampad
On Tue, Aug 18, 2020 at 9:57 AM Daniel Gerzhoy
wrote:
> Hi Sampad,
>
Hi,
I was able to use Gem5ToMcPAT-Parser.py, template.xml, and output-gem5 to
generate input-mcpat, run it on McPAT, and calculate power.
But I have some questions about McPAT and gem5 when running a program or
DVFS on gem5:
1: How can I convert the gem5 output (stats.txt) to the input
Hi,
Thanks for your answer.
I have some questions about McPAT again:
1: How can I convert the gem5 output (stats.txt) to the McPAT input
(template.xml) and run DVFS on McPAT when I run DVFS on gem5?
2: There may be multiple "Begin Simulation Statistics" sections in stats.txt
when I run a program
Hi Sampad,
I've added CorePair profiling to MOESI_AMD_Base-CorePair.sm; if you haven't
already done so yourself, I can create a patch for you (or I'd be happy to
review if you end up submitting one).
I was confused about the L3Cache in the <...>-dir.sm file as well.
The MOESI_AMD_Base-L3cache.sm file
Hey all,
Is there a way to use constant memory in the GPU Model right now?
Using the
*__constant__ float variable[SIZE];*
and
*hipMemcpyToSymbol(...)*
results in a
*fatal: syscall mincore (#27) unimplemented.*
I've been looking through the code to find a way, but I haven't yet.
I guess a
Hi Jason,
I was able to solve the problem: I had set my MSHRs to 1, thus making it a
blocking cache, and needed to change this parameter. Thank you for pointing
me toward my memory system.
Thanks,
Aamir
On Tue, 18 Aug 2020 at 12:43, Muhammad Aamir
wrote:
> Hi Jason,
>
> Is there a parameter
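For reference, the MSHR count involved here is a parameter of the classic cache model's Python configuration. A hedged config fragment (the class name and the particular values are assumptions, and this only runs inside a gem5 config script):

```python
# Fragment of a gem5 config script; assumes gem5's Python environment.
from m5.objects import Cache

class L1DCache(Cache):
    size = '32kB'
    assoc = 2
    tag_latency = 2
    data_latency = 2
    response_latency = 2
    mshrs = 4           # more than 1 makes the cache non-blocking
    tgts_per_mshr = 8   # outstanding targets per MSHR entry
```

With `mshrs = 1` the cache can track only one outstanding miss, which matches the blocking behavior described above.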
Hi Jason,
Is there a parameter I can check to see whether my caches are accepting more
than one request per cycle or not? Also, even assuming that I am not sending
requests in the same cycle but in different cycles, they still wait for one
response to come back before another one is issued. Also after