Re: [gem5-users] Cache bank model

2019-03-29 Thread Gambord, Ryan
Do you have some example config you are working off of? The following file
may be a good place to start. I encountered address resolution problems
with it as implemented, but I am sure you can figure it out :)

https://github.com/darchr/gem5/blob/jason/kvm-testing/configs/myconfigs/system/caches.py
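For intuition, here is a rough editorial sketch (plain Python, not gem5 code; all names are made up) of what a cache bank model usually involves: banks are selected by low-order line-address bits, concurrent accesses to different banks proceed in parallel, and same-cycle accesses to the same bank conflict.

```python
# Toy sketch of cache banking (not gem5 code): addresses map to banks by
# low-order line-address bits; same-bank accesses in one cycle conflict.
LINE_BYTES = 64
NUM_BANKS = 4  # assumed power of two

def bank_of(addr):
    """Select a bank from the line address (bits above the line offset)."""
    return (addr // LINE_BYTES) % NUM_BANKS

def count_conflicts(addrs):
    """Count accesses beyond the first to each bank in one cycle."""
    per_bank = {}
    for a in addrs:
        b = bank_of(a)
        per_bank[b] = per_bank.get(b, 0) + 1
    return sum(n - 1 for n in per_bank.values())

# Lines 0..3 hit banks 0..3: fully parallel. Lines 0 and 4 collide,
# since (4 * 64 // 64) % 4 == 0, the same bank as line 0.
```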

Ryan Gambord 


On Wed, Mar 27, 2019 at 11:20 PM Muhammad Avais 
wrote:

> Dear all,
> Can anyone guide me on how to implement a bank model for caches in gem5?
> Is there any patch that implements a bank model?
> Many thanks,
> Best Regards,
> Avais
> ___
> gem5-users mailing list
> gem5-users@gem5.org
> http://m5sim.org/cgi-bin/mailman/listinfo/gem5-users

[gem5-users] Addition of gem5 x86 instructions

2019-03-29 Thread Shyam Murthy
Apologies for the wrong title in my previous email; correcting it here.

Thanks,
Shyam


[gem5-users] gabebl...@google.com

2019-03-29 Thread Shyam Murthy
Hi Gabe,

As I am trying to run SPEC 2017 on gem5 in SE mode, I ran into some
unimplemented instructions, namely *frndint*, *fsqrt*, and *fistp*, to name a
few. I see that within the *src/arch/x86/isa/insts/x87/arithmetic* folder,
there are placeholder files for implementations of some of the macro
operations, like square root and rounding. Can I write my implementations and
have my code reviewed so that they can be checked in?
In addition, for the float-to-integer operation, I did not find any
corresponding micro-op in the folder *src/arch/x86/isa/microops*; is there
already a corresponding micro-op (that I missed), or should I write my own?

Thanks,
Shyam

[gem5-users] Bypassing Dcache on MSHR 1

2019-03-29 Thread Abhishek Singh
Hello Everyone,

I want to bypass the Dcache, i.e., not allocate anything in it. To do that, I
use tempBlock in the *handleFill* function in src/mem/cache/base.cc.


*Before:*

    blk = allocate ? allocateBlock(pkt, writebacks) : nullptr;

*After:*

    if (name() == "system.cpu.dcache")
        blk = nullptr;
    else
        blk = allocate ? allocateBlock(pkt, writebacks) : nullptr;

The problem I run into is that I get stuck in a continuous request/response
cycle: I keep getting a request for the same address, and I keep satisfying
the response for it.

This only happens when I set the number of MSHRs for the dcache to 1.
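As a toy illustration (plain Python, not gem5 code; names are made up), never allocating means every access to the same address misses and becomes a fresh downstream request, which is why the same address keeps coming back:

```python
# Toy model: a cache that never allocates misses every time, so every
# access to the same address becomes a new downstream request.
class ToyCache:
    def __init__(self, allocate):
        self.allocate = allocate
        self.blocks = set()
        self.downstream_requests = 0

    def access(self, addr):
        if addr in self.blocks:
            return "hit"
        self.downstream_requests += 1      # fetch from the next level
        if self.allocate:
            self.blocks.add(addr)          # normal fill
        return "miss"                      # bypass: data used, block dropped

normal = ToyCache(allocate=True)
bypass = ToyCache(allocate=False)
for _ in range(3):
    normal.access(0x1000)
    bypass.access(0x1000)
# normal issues 1 downstream request; bypass issues one per access.
```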

Re: [gem5-users] Write through cache

2019-03-29 Thread Nikos Nikoleris
Hi Shougang,

The cache in gem5 doesn't support write-through; you would have to
implement it yourself. The write-through flag in the packet is currently only
used for cache maintenance operations, but it might be useful for implementing
write-through requests in general.
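To make the policy difference concrete, here is a minimal, hypothetical sketch (plain Python, not gem5's classes) of what Nikos describes: a write-through cache forwards every write to the level below immediately, while a write-back cache only marks the block dirty and writes it out on eviction.

```python
# Minimal sketch of write-through vs. write-back (not gem5 code).
class ToyMemory:
    def __init__(self):
        self.writes = 0
    def write(self, addr, data):
        self.writes += 1

class ToyCache:
    def __init__(self, memory, write_through):
        self.memory = memory
        self.write_through = write_through
        self.dirty = {}                    # addr -> data not yet written back

    def write(self, addr, data):
        if self.write_through:
            self.memory.write(addr, data)  # propagate immediately
        else:
            self.dirty[addr] = data        # defer until eviction

    def evict(self, addr):
        if addr in self.dirty:
            self.memory.write(addr, self.dirty.pop(addr))

mem_wt, mem_wb = ToyMemory(), ToyMemory()
wt = ToyCache(mem_wt, write_through=True)
wb = ToyCache(mem_wb, write_through=False)
for _ in range(3):
    wt.write(0x40, "x")
    wb.write(0x40, "x")
wb.evict(0x40)
# write-through reaches memory on every store; write-back only on eviction.
```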

Nikos

On 28/03/2019 18:28, yuan wrote:
> Hi, there,
>
> I am wondering how to enable the write-through policy in gem5. I tried to
> use pkt->setWriteThrough() for each write request in the cache class, but it
> seems that these write requests never go downstream to the lower-level
> cache and eventually to the memory controller. Does anyone know how to
> enable write-through in gem5?
>
> Best,
>
> Shougang
>

Re: [gem5-users] Simulate limit reached, MinorCPU, related to PF_EXCLUSIVE

2019-03-29 Thread Nikos Nikoleris
It might be obvious, and I guess that's what you are doing already, but I
forgot to mention that another solution would of course be to revert the
patches that add support for the prefetch-exclusive instruction.

Nikos


Re: [gem5-users] Simulate limit reached, MinorCPU, related to PF_EXCLUSIVE

2019-03-29 Thread Nikos Nikoleris
Hi William,

This is indeed a bug, due to the change that implements the
prefetch-exclusive instruction.

The classic memory system in gem5 was designed with two assumptions
which are relevant here:

* A cache which fetches a block with an intention to modify it is
expected to become the point of ordering and therefore commits to
respond to any snoop requests [1].
* A cache that fetches an exclusive copy of the block, does so with the
intention to modify it [2]. Immediately after it receives the block, it
will mark it as dirty and it will become the point of ordering. As the
point of ordering it responds to any pending snoops.

Unfortunately, the prefetch-exclusive request breaks the second
assumption: it fetches an exclusive block without a clear intention to
modify it. In cases where the block is not modified, it won't be marked
as dirty and the cache doesn't become the point of ordering, although
it has committed to respond to pending snoops. This is the reason for
the deadlock: there are snoops stuck waiting for responses.

One simple solution [3] is to force the prefetch-exclusive request to
unconditionally mark the block dirty. This should work, as it doesn't
break any of the assumptions, *but* it might mark blocks as dirty even
though they are not. As a result, it might increase writebacks. In
practice this shouldn't be a big problem unless the application is
unnecessarily using prefetch-exclusive instructions.
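A toy sketch (plain Python, not gem5 code) of the trade-off just described: before the fix, a block that was prefetched exclusive but never written stays clean, so the cache cannot answer the snoops it committed to (the deadlock); marking it dirty unconditionally lets snoops be answered, at the cost of a writeback on eviction even when the block was never modified.

```python
# Toy model of the proposed fix (not gem5 code): a cache only answers
# pending snoops once it holds the block dirty/owned, and a dirty block
# must be written back on eviction.
def fill_from_pf_exclusive(mark_dirty_unconditionally, later_written):
    """Return (can_answer_pending_snoops, writeback_needed_on_evict)."""
    dirty = mark_dirty_unconditionally or later_written
    return dirty, dirty

# Before the fix, prefetched exclusive and never written:
# (False, False) -> pending snoops wait forever (the deadlock).
# After the fix: always (True, True) -> snoops are answered, but an
# extra writeback happens even if the block was never modified.
```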

Another solution is to change the design of the memory system to handle
this properly; I've been thinking about it, and it's not that
straightforward.

[1]: When a cache commits to respond, it "informs" the xbar/PoC (point
of coherence) and the other caches of its intention to respond. As a
result, the request will not be sent to the main memory.
[2]: In fact the assumption is that in the needsWritable MSHR there is
at least one WriteReq before any snoops from other caches.
[3]: https://gem5-review.googlesource.com/c/public/gem5/+/17729

Hope this helps.

Nikos


[gem5-users] Simulate limit reached, MinorCPU, related to PF_EXCLUSIVE

2019-03-29 Thread Simon William
Hello all,

I have been using gem5 for about 2 years now, and have run into a problem that 
I cannot seem to surmount, though I have located the root cause. I have been 
using workload automation to mount a local folder from host into my gem5 fs 
simulation, allowing me to modify and cross compile applications on the host 
and then run them easily in gem5. To do this, I followed the instructions on 
http://gem5.org/WA-gem5#Using_gem5_with_Workload_Automation, and run the command

mount -t 9p -n gem5 /mnt 
-oaname=/home/wsimon/temp,version=9p2000.L,uname=root,access=user

to mount a 9p filesystem within gem5. This worked very well until I recently
updated to the newest version of gem5. Now, while mounting still works with the
O3 CPU, I get a simulation timeout with the Minor CPU:

Exiting @ tick 18446744073709551615 because simulate() limit reached
diod: caught SIGTERM: shutting down
info: Trying to kill diod with SIGKILL as SIGTERM failed

I tracked down the cause of this error to git commit
59e3585a84ef172eba57c9936680c0248f9a97db, in which prefetching for stores is
added to the ARM ISA (I am using the aarch64 ISA), specifically the four lines
in src/arch/arm/isa/insts/ldr64.isa.

LoadImm64("prfm", "PRFM64_IMM", 8, flavor="mprefetch").emit()
LoadReg64("prfm", "PRFM64_REG", 8, flavor="mprefetch").emit()
LoadLit64("prfm", "PRFM64_LIT", 8, literal=True,
  flavor="mprefetch").emit()
LoadImm64("prfum", "PRFUM64_IMM", 8, flavor="mprefetch").emit()

The "mprefetch" flavor modifies the memFlags in the following way (also in 
src/arch/arm/isa/insts/ldr64.isa):

if self.flavor == "dprefetch":
    self.memFlags.append("Request::PREFETCH")
    self.instFlags = ['IsDataPrefetch']
elif self.flavor == "iprefetch":
    self.memFlags.append("Request::PREFETCH")
    self.instFlags = ['IsInstPrefetch']
elif self.flavor == "mprefetch":
    self.memFlags.append("((((dest>>3)&3)==2)? \
                         (Request::PF_EXCLUSIVE):(Request::PREFETCH))")
    self.instFlags = ['IsDataPrefetch']
if self.micro:
    self.instFlags.append("IsMicroop")
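For context on the conditional in the snippet above: in PRFM the Rt slot (here `dest`) holds the prefetch-operation field, and, as I understand the encoding, bits [4:3] of that field select the prefetch type, with the value 2 meaning PST (prefetch for store), which this change maps to PF_EXCLUSIVE. A small Python illustration of the same selection:

```python
# Sketch of the flag selection above: 'dest' is the PRFM prfop field;
# bits [4:3] give the prefetch type (2 == PST, prefetch for store),
# which gets tagged PF_EXCLUSIVE instead of a plain PREFETCH.
PF_EXCLUSIVE = "Request::PF_EXCLUSIVE"
PREFETCH = "Request::PREFETCH"

def mem_flag_for(dest):
    return PF_EXCLUSIVE if ((dest >> 3) & 3) == 2 else PREFETCH

# prfop 0b00000 (PLDL1KEEP, prefetch for load) -> plain prefetch
# prfop 0b10000 (PSTL1KEEP, prefetch for store) -> exclusive prefetch
```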

If I modify the emit commands above to emit with the flavor "dprefetch", as was 
previously the case, I have no problems, so the timeout error must arise from a 
memory request being tagged as PF_EXCLUSIVE instead of PREFETCH.

If I understand correctly, the PF_EXCLUSIVE tag requests that a memory location 
be prefetched and marked as exclusive, to allow storing. I'm not sure why this 
would cause a hang in the code. Can anyone shed further light on the possible 
cause of this?

Sincerely,
William