Re: [gem5-users] About the BDI compression file

2019-05-22 Thread Muhammad Avais
I hope you are using the online BDI compression file. I think you should
modify the cache.cc file to use BDI. I could not understand how you have
modified the BaseSetAssoc file to accommodate BDI compression.
Furthermore, I think slightly different results are sometimes possible in
otherwise identical experiments.

On Wed, May 22, 2019 at 12:50 PM Pooneh Safayenikoo 
wrote:

> Hi,
>
> I want to apply BDI compression on the L2 cache. So, I changed the config
> file for the caches (gem5/configs/common/Caches.py) as follows:
>
> class L1Cache(Cache):
>     tags = BaseSetAssoc()
>     compressor = NULL
>
> class L2Cache(Cache):
>     tags = CompressedTags()
>     compressor = BDI()
>
> After that, I got results for some SPEC benchmarks (I used a
> configuration like the one in the BDI paper) to compare the L2 miss rate
> between this compression scheme and the baseline (without BDI and
> CompressedTags). But the miss rate increases a little for some benchmarks
> (like mcf and bzip). Why does BDI have a higher L2 miss rate? I cannot
> make sense of it.
>
> Many thanks for any help!
>
> Best,
> Pooneh
> ___
> gem5-users mailing list
> gem5-users@gem5.org
> http://m5sim.org/cgi-bin/mailman/listinfo/gem5-users

Re: [gem5-users] (no subject)

2019-05-12 Thread Muhammad Avais
Dear Abhishek,

Many thanks for your reply. I will set a flag in the response packet for an
L2 hit. This flag will be reset by default; therefore, I think I will not
need a main-memory flag in this case.
Please let me know if you see a problem with this logic.
For a multicore simulation, what should be different?

Many thanks for your response,
Best regards,
Avais

On Sat, May 11, 2019 at 8:15 AM Abhishek Singh <
abhishek.singh199...@gmail.com> wrote:

> What you do is create flags in src/mem/packet.hh for the various cache levels.
> Whenever you hit in L2, you can set the L2 flag in the response packet.
> And if it misses in L2, set the main-memory flag in the response packet, as
> you are sure you will get the data from main memory.
> Here we are assuming it's a single-core simulation.
>
> On Fri, May 10, 2019 at 5:42 AM Muhammad Avais 
> wrote:
>
>> Dear All,
>>
>> 1- For blocks loaded into the L1 cache, how can I distinguish whether a
>> block was loaded into the L1 cache from the L2 cache (an L2 hit) or from
>> main memory (an L2 miss)?
>>
>> Many thanks,
>> Best Regards,
>> Avais

[gem5-users] (no subject)

2019-05-10 Thread Muhammad Avais
Dear All,

1- For blocks loaded into the L1 cache, how can I distinguish whether a
block was loaded into the L1 cache from the L2 cache (an L2 hit) or from
main memory (an L2 miss)?

Many thanks,
Best Regards,
Avais

Re: [gem5-users] (no subject)

2019-05-10 Thread Muhammad Avais
Dear All,

  I have one question. For blocks loaded into the L1 cache, how can I
distinguish whether a block was loaded from the L2 cache or from main
memory?

Many thanks,
Best regards,
Avais


On Wed, May 8, 2019 at 5:21 AM Abhishek Singh <
abhishek.singh199...@gmail.com> wrote:

> Hi Muhammad,
>
>
> If you want the block, on an L2 hit, to be invalidated from the L2 cache
> and filled into the dcache, with the rest of the behavior the same as you
> explained in the diagram, you can use gem5's "most_excl" option in the
> "gem5/src/mem/cache/Cache.py" file. You may need to take care of the
> "clean victim" case from the dcache, which is not a difficult modification.
>
> Best regards,
>
> Abhishek
>
>
> On Tue, May 7, 2019 at 1:48 AM Muhammad Avais 
> wrote:
>
>> Dear All,
>>   Is the 'mostly exclusive cache' supported in the gem5 classic model a
>> strictly non-exclusive cache? If it is not, how can I make it a
>> non-exclusive cache?
>>
>>   The non-exclusive cache is shown in Fig. below.
>> [image: image.png]
>>  Can anyone guide me?
>>
>> Many thanks,
>> best regards,
>> Avais
>>

Re: [gem5-users] (no subject)

2019-05-07 Thread Muhammad Avais
Dear Abhishek,

  Many thanks for the useful response. I will try to modify the clean-victim
eviction from the "dcache".

Many thanks,
Best regards,
Avais

On Wed, May 8, 2019 at 5:21 AM Abhishek Singh <
abhishek.singh199...@gmail.com> wrote:

> Hi Muhammad,
>
>
> If you want the block, on an L2 hit, to be invalidated from the L2 cache
> and filled into the dcache, with the rest of the behavior the same as you
> explained in the diagram, you can use gem5's "most_excl" option in the
> "gem5/src/mem/cache/Cache.py" file. You may need to take care of the
> "clean victim" case from the dcache, which is not a difficult modification.
>
> Best regards,
>
> Abhishek
>
>
> On Tue, May 7, 2019 at 1:48 AM Muhammad Avais 
> wrote:
>
>> Dear All,
>>   Is the 'mostly exclusive cache' supported in the gem5 classic model a
>> strictly non-exclusive cache? If it is not, how can I make it a
>> non-exclusive cache?
>>
>>   The non-exclusive cache is shown in Fig. below.
>> [image: image.png]
>>  Can anyone guide me?
>>
>> Many thanks,
>> best regards,
>> Avais
>>

[gem5-users] (no subject)

2019-05-06 Thread Muhammad Avais
Dear All,
  Is the 'mostly exclusive cache' supported in the gem5 classic model a
strictly non-exclusive cache? If it is not, how can I make it a
non-exclusive cache?

  The non-exclusive cache is shown in Fig. below.
[image: image.png]
 Can anyone guide me?

Many thanks,
best regards,
Avais

[gem5-users] Non-exclusive cache in GEM5

2019-04-26 Thread Muhammad Avais
Dear All,
  Is the 'mostly exclusive cache' supported in the gem5 classic model a
strictly non-exclusive cache? If it is not, how can I make it a
non-exclusive cache?

 Can anyone guide me?

Many thanks,
best regards,
Avais

[gem5-users] Cache bank model

2019-03-28 Thread Muhammad Avais
Dear all,
   Can anyone guide me on how to implement a bank model for caches in gem5?
   Is there any patch that implements a bank model?
Many thanks,
Best Regards,
Avais

[gem5-users] Strange: Cache data not maintained in gem5

2018-08-21 Thread Muhammad Avais
Dear All,

  I was tracking zero blocks (64 B of all-zero data) written to the cache.
During my experiments, I found that a cache block sometimes does not hold
its data: for some cache lines, a non-zero block is written from the
lower-level memory, but the data is read back as a zero block when the
upper-level cache accesses it.

 I could not find where this data is changed.

Many Thanks,
Best Regards,
Avais

Re: [gem5-users] Fix periods in gem5 cache

2018-08-14 Thread Muhammad Avais
Dear Parmidra,

  I am thankful to you for this help. I am not experienced with gem5 and
find it difficult to learn.

Best Regards,
Avais

On Mon, Aug 13, 2018 at 4:30 PM, Parmida Vahdatnia <
parmida.vahdat...@gmail.com> wrote:

> Yes, the if statements in the code; sorry if it was confusing.
> I have used them in Python, in the se.py and
> garnet_synth_traffic.py scripts, although I have not done exactly what you
> are trying to do.
> Regards
> Parmida
>
> On Mon, Aug 13, 2018 at 11:40 AM, Muhammad Avais 
> wrote:
>
>> Dear Parmida,
>>
>>   Many thanks for your reply. Does "ifs" mean if statements in C, or
>> something else?
>>
>> Best Regards,
>> Avais
>>
>> On Mon, Aug 13, 2018 at 2:28 PM, Parmida Vahdatnia <
>> parmida.vahdat...@gmail.com> wrote:
>>
>>> I usually use a combination of the getTick() function and if statements
>>> in the code. There is also a sleep function, but I don't know if that's
>>> what you want.
>>>
>>> On Mon, 13 Aug 2018, 09:43 Muhammad Avais, 
>>> wrote:
>>>
>>>> Dear all,
>>>>
>>>>I want to adjust the associativity of a cache after fixed
>>>> periods of time. Can anyone suggest how I can measure time in gem5?
>>>>
>>>> Best Regards,
>>>> Thanks,
>>>> Avais

Re: [gem5-users] Fix periods in gem5 cache

2018-08-13 Thread Muhammad Avais
Dear Parmida,

  Many thanks for your reply. Does "ifs" mean if statements in C, or
something else?

Best Regards,
Avais

On Mon, Aug 13, 2018 at 2:28 PM, Parmida Vahdatnia <
parmida.vahdat...@gmail.com> wrote:

> I usually use a combination of the getTick() function and if statements
> in the code. There is also a sleep function, but I don't know if that's
> what you want.
>
> On Mon, 13 Aug 2018, 09:43 Muhammad Avais,  wrote:
>
>> Dear all,
>>
>>I want to adjust the associativity of a cache after fixed
>> periods of time. Can anyone suggest how I can measure time in gem5?
>>
>> Best Regards,
>> Thanks,
>> Avais

[gem5-users] Fix periods in gem5 cache

2018-08-12 Thread Muhammad Avais
Dear all,

   I want to adjust the associativity of a cache after fixed
periods of time. Can anyone suggest how I can measure time in gem5?

Best Regards,
Thanks,
Avais

Re: [gem5-users] Terminating multi-core simulation

2018-05-29 Thread Muhammad Avais
Dear Haeyoon,

  Many thanks for your help and guidance. I will try to apply the
first scheme you suggested in your previous email.

Best Regards,
Avais

On Mon, May 28, 2018 at 1:18 PM, 조해윤  wrote:

> Dear Avais,
>
> I think it is reasonable to normalize by the number of executed
> instructions, but it is still a weakness that the executed sections of
> the benchmarks are not exactly the same.
> In my case, the best I could do was to exit the simulation based on the
> total number of executed instructions.
> But if you can apply the FIESTA methodology, it will be better.
>
> Best Regards,
> Haeyoon Cho.
>
> 2018-05-25 12:17 GMT+09:00 Muhammad Avais :
>
>> Dear Haeyoon Cho.,
>>
>> I am really thankful to you for this help. Actually, I am not very
>> experienced at modifying gem5, and this code will be very helpful for me.
>>
>> I have one more question: is it a good idea to normalize the stats by the
>> number of simulated instructions to calculate energy or other metrics? Do
>> people use this, or some other metric to compare energy?
>>
>> Many thanks for your help,
>> Best Regards,
>> Avais
>>
>> On Thu, May 24, 2018 at 5:45 PM, 조해윤  wrote:
>>
>>> Dear Avais,
>>>
>>> I think running workloads fairly is very important in multi-core
>>> experiments, because the number of executed instructions on each core can
>>> change depending on the experimental configuration.
>>> There is prior work on how to experiment fairly on a multi-core system: A.
>>> Hilton et al., "FIESTA: A Sample-Balanced Multi-Program Workload
>>> Methodology", MoBS, 2009.
>>> However, implementing this methodology in gem5 is another problem, and I
>>> couldn't do that.
>>>
>>> Alternatively, I modified the gem5 code to terminate based on the total
>>> number of executed instructions across all cores.
>>> The existing gem5 code can only terminate based on the maximum or minimum
>>> number of executed instructions per core.
>>> Since LocalSimLoopExitEvent() is called in the CPU class in the existing
>>> gem5 code, I modified the system class code to collect the number of
>>> executed instructions of all cores and to call LocalSimLoopExitEvent()
>>> from the system class.
>>> As I see it, the most important part is whether you can call
>>> LocalSimLoopExitEvent() when you want.
>>> I attach total_sim_exit.patch just for reference.
>>> I modified the following six files:
>>> /configs/common/Simulation.py
>>> /src/sim/system.hh
>>> /src/sim/system.cc
>>> /src/sim/System.py
>>> /src/cpu/simple/base.hh
>>> /src/cpu/o3/cpu.cc
>>> The attached file may not be compatible with the current gem5 code,
>>> because I modified the code based on a stable version of gem5.
>>> Also, this modification only covers the restricted situation of one fast
>>> forward and one real simulation, and the coding style is not good.
>>>
>>> If you can modify the gem5 code better than I did, please let me know.
>>>
>>> Best Regards,
>>> Haeyoon Cho.
>>>
>>>
>>> 2018-05-23 15:55 GMT+09:00 Muhammad Avais :
>>>
>>>> Dear All,
>>>>
>>>>  I want to measure dynamic energy of L2 cache for multi-core
>>>> simulations. For this purpose, i measure stats from gem5 like # of hits,  #
>>>> of misses and # of writebacks.
>>>>  As, multi-core simulation in gem5 terminates, as soon as, any
>>>> workload reaches maximum count. Therefore, while comparing different
>>>> schemes, each scheme terminates after different number of instructions, so
>>>> stats like  # of hits,  # of misses and # of writebacks are not
>>>> useful.
>>>>Is there any  other metric that can be used to compare energy in
>>>> multicore systems like weighted speed up for performance. Or is it possible
>>>> that simulation always runs for fixed number of instruction.
>>>>
>>>> Many Thanks,
>>>> Best Regards,
>>>> Avais
>>>>

Re: [gem5-users] Terminating multi-core simulation

2018-05-24 Thread Muhammad Avais
Dear Haeyoon Cho.,

I am really thankful to you for this help. Actually, I am not very
experienced at modifying gem5, and this code will be very helpful for me.

I have one more question: is it a good idea to normalize the stats by the
number of simulated instructions to calculate energy or other metrics? Do
people use this, or some other metric to compare energy?

Many thanks for your help,
Best Regards,
Avais

On Thu, May 24, 2018 at 5:45 PM, 조해윤 <chohy2...@gmail.com> wrote:

> Dear Avais,
>
> I think running workloads fairly is very important in multi-core
> experiments, because the number of executed instructions on each core can
> change depending on the experimental configuration.
> There is prior work on how to experiment fairly on a multi-core system: A.
> Hilton et al., "FIESTA: A Sample-Balanced Multi-Program Workload
> Methodology", MoBS, 2009.
> However, implementing this methodology in gem5 is another problem, and I
> couldn't do that.
>
> Alternatively, I modified the gem5 code to terminate based on the total
> number of executed instructions across all cores.
> The existing gem5 code can only terminate based on the maximum or minimum
> number of executed instructions per core.
> Since LocalSimLoopExitEvent() is called in the CPU class in the existing
> gem5 code, I modified the system class code to collect the number of
> executed instructions of all cores and to call LocalSimLoopExitEvent()
> from the system class.
> As I see it, the most important part is whether you can call
> LocalSimLoopExitEvent() when you want.
> I attach total_sim_exit.patch just for reference.
> I modified the following six files:
> /configs/common/Simulation.py
> /src/sim/system.hh
> /src/sim/system.cc
> /src/sim/System.py
> /src/cpu/simple/base.hh
> /src/cpu/o3/cpu.cc
> The attached file may not be compatible with the current gem5 code,
> because I modified the code based on a stable version of gem5.
> Also, this modification only covers the restricted situation of one fast
> forward and one real simulation, and the coding style is not good.
>
> If you can modify the gem5 code better than I did, please let me know.
>
> Best Regards,
> Haeyoon Cho.
>
>
> 2018-05-23 15:55 GMT+09:00 Muhammad Avais <avais.suh...@gmail.com>:
>
>> Dear All,
>>
>>  I want to measure dynamic energy of L2 cache for multi-core
>> simulations. For this purpose, i measure stats from gem5 like # of hits,  #
>> of misses and # of writebacks.
>>  As, multi-core simulation in gem5 terminates, as soon as, any
>> workload reaches maximum count. Therefore, while comparing different
>> schemes, each scheme terminates after different number of instructions, so
>> stats like  # of hits,  # of misses and # of writebacks are not useful.
>>Is there any  other metric that can be used to compare energy in
>> multicore systems like weighted speed up for performance. Or is it possible
>> that simulation always runs for fixed number of instruction.
>>
>> Many Thanks,
>> Best Regards,
>> Avais
>>

[gem5-users] Terminating multi-core simulation

2018-05-23 Thread Muhammad Avais
Dear All,

 I want to measure the dynamic energy of the L2 cache for multi-core
simulations. For this purpose, I collect stats from gem5 such as the number
of hits, misses, and writebacks.
 However, a multi-core simulation in gem5 terminates as soon as any workload
reaches its maximum instruction count. Therefore, when comparing different
schemes, each scheme terminates after a different number of instructions, so
stats like the number of hits, misses, and writebacks are not directly
comparable.
   Is there any other metric that can be used to compare energy in
multicore systems, analogous to weighted speedup for performance? Or is it
possible to make the simulation always run for a fixed number of
instructions?

Many Thanks,
Best Regards,
Avais

[gem5-users] Problem Solved: Dynamic Associative cache in gem5

2018-05-08 Thread Muhammad Avais
Dear Nikos,

   Many thanks for your help. Actually, I had not handled snoops for
blocks retained outside the associativity, and this was causing the snoop
filter problem. Your suggestion proved helpful in solving it.

Best Regards,
Thanks,
Avais

On Fri, May 4, 2018 at 7:43 PM, Nikos Nikoleris <nikos.nikole...@arm.com>
wrote:

> Hi Avais,
>
>
>
> You might need to remove the snoop filter from the system as well,
> depending on what exactly you want to do. Doing so will significantly
> increase the number of snoops in the system. But in your case, the
> snoop filter performs sanity checks on the coherence protocol, and if
> these checks fail then the memory system is no longer in a consistent
> state. Unless I am missing something, you will have to change
> WritebackDirty to WriteClean as well.
>
>
>
> Nikos
>
>
>
> *From: *gem5-users <gem5-users-boun...@gem5.org> on behalf of Muhammad
> Avais <avais.suh...@gmail.com>
> *Reply-To: *gem5 users mailing list <gem5-users@gem5.org>
> *Date: *Friday, 4 May 2018 at 11:16
> *To: *gem5 users mailing list <gem5-users@gem5.org>
> *Subject: *Re: [gem5-users] Dynamic Associative cache in gem5
>
>
>
> Dear Nikos,
>
>
>
>  I have seen that we can bypass the snoop filter in "coherent_xbar.cc"
> by passing the snoop filter variables as nullptr.
>
> Is it a good solution to avoid the snoop filter checks if we do not care
> about coherence protocols in the simulation?
>
>
>
> Best Regards,
>
> Thanks,
>
> Avais
>
>
>
>
>
> On Fri, May 4, 2018 at 2:28 PM, Muhammad Avais <avais.suh...@gmail.com>
> wrote:
>
> Dear Nikos,
>
>
>
> Many thanks for your reply. Your suggestions are always helpful for
> solving issues in gem5. I will try to use WriteClean packets.
>
>
>
> Many thanks,
>
> Kind Regards,
>
> Avais
>
>
>
> On Thu, May 3, 2018 at 11:24 PM, Nikos Nikoleris <nikos.nikole...@arm.com>
> wrote:
>
> Hi Avais,
>
>
>
> I am not sure exactly what you mean by not invalidating but based on the
> assertion I am assuming that for some blocks you write back any dirty data
> and then you retain them in the cache, but it is not clear how you handle
> subsequent accesses and snoops to any of these blocks.
>
>
>
> I can only speculate about the problem, but this might help. I suppose
> that you use WritebackDirty packets to write dirty data to the memory
> below. WritebackDirty packets are treated as evicts and the snoop filter
> believes that the cache doesn’t have the data any longer. If you are on a
> reasonably recent version of gem5, you could try using WriteClean packets
> which have the exact same property of carrying dirty data without the
> additional property of implying an eviction.
>
> Nikos
>
>
>
>
>
> *From: *gem5-users <gem5-users-boun...@gem5.org> on behalf of Muhammad
> Avais <avais.suh...@gmail.com>
> *Reply-To: *gem5 users mailing list <gem5-users@gem5.org>
> *Date: *Thursday, 3 May 2018 at 06:44
> *To: *gem5 users mailing list <gem5-users@gem5.org>
> *Subject: *[gem5-users] Dynamic Associative cache in gem5
>
>
>
> Dear All,
>
>I am trying to implement a dynamically associative cache. I have found
> that after decreasing the associativity, if I do not invalidate blocks then
> the following problem occurs.
>
>
>
> Actually, I do not want to invalidate blocks that are outside the
> associativity; can anyone suggest a solution?
>
>
>
> #0  0x76401035 in raise () from /lib/x86_64-linux-gnu/libc.so.6
>
> #1  0x7640479b in abort () from /lib/x86_64-linux-gnu/libc.so.6
>
> #2  0x009c4327 in SnoopFilter::lookupRequest(Packet const*,
> SlavePort const&) () at build/X86/mem/snoop_filter.cc:137
>
> #3  0x0099e3ec in CoherentXBar::recvTimingReq(Packet*, short) ()
> at build/X86/mem/coherent_xbar.cc:192
>
> #4  0x0134117a in Cache::sendWriteQueuePacket(WriteQueueEntry*)
> () at build/X86/mem/cache/cache.cc:3528
>
> #5  0x01341a61 in Cache::CacheReqPacketQueue::sendDeferredPacket()
> () at build/X86/mem/cache/cache.cc:3731
>
> #6  0x01417b41 in EventQueue::serviceOne() () at
> build/X86/sim/eventq.cc:228
>
> #7  0x01426e08 in doSimLoop(EventQueue*) () at
> build/X86/sim/simulate.cc:219
>
> #8  0x014274eb in simulate(unsigned long) () at
> build/X86/sim/simulate.cc:132
>
>
>
>
>
> Best Regards,
>
> Thanks,
>
> Avais
>
>
>
> IMPORTANT NOTICE: The contents of this email and any attachments are
> confidential and may also be privileged. If you are not the intended
> recipient, please notify the sender immediately and do not disclose the
> contents to any other person, use it for any purpose, or store or copy the
> information in any medium. Thank you.

Re: [gem5-users] Dynamic Associative cache in gem5

2018-05-04 Thread Muhammad Avais
Dear Nikos,

 I have seen that we can bypass the snoop filter in "coherent_xbar.cc"
by passing the snoop filter variables as nullptr.
Is it a good solution to avoid the snoop filter checks if we do not care
about coherence protocols in the simulation?

Best Regards,
Thanks,
Avais


On Fri, May 4, 2018 at 2:28 PM, Muhammad Avais <avais.suh...@gmail.com>
wrote:

> Dear Nikos,
>
> Many thanks for your reply. Your suggestions are always helpful for
> solving issues in gem5. I will try to use WriteClean packets.
>
> Many thanks,
> Kind Regards,
> Avais
>
> On Thu, May 3, 2018 at 11:24 PM, Nikos Nikoleris <nikos.nikole...@arm.com>
> wrote:
>
>> Hi Avais,
>>
>>
>>
>> I am not sure exactly what you mean by not invalidating but based on the
>> assertion I am assuming that for some blocks you write back any dirty data
>> and then you retain them in the cache, but it is not clear how you handle
>> subsequent accesses and snoops to any of these blocks.
>>
>>
>>
>> I can only speculate about the problem, but this might help. I suppose
>> that you use WritebackDirty packets to write dirty data to the memory
>> below. WritebackDirty packets are treated as evicts and the snoop filter
>> believes that the cache doesn’t have the data any longer. If you are on a
>> reasonably recent version of gem5, you could try using WriteClean packets
>> which have the exact same property of carrying dirty data without the
>> additional property of implying an eviction.
>>
>> Nikos
>>
>>
>>
>>
>>
>> *From: *gem5-users <gem5-users-boun...@gem5.org> on behalf of Muhammad
>> Avais <avais.suh...@gmail.com>
>> *Reply-To: *gem5 users mailing list <gem5-users@gem5.org>
>> *Date: *Thursday, 3 May 2018 at 06:44
>> *To: *gem5 users mailing list <gem5-users@gem5.org>
>> *Subject: *[gem5-users] Dynamic Associative cache in gem5
>>
>>
>>
>> Dear All,
>>
>>I am trying to implement a dynamically associative cache. I have found
>> that after decreasing the associativity, if I do not invalidate blocks
>> then the following problem occurs.
>>
>> Actually, I do not want to invalidate blocks that are outside the
>> associativity; can anyone suggest a solution?
>>
>>
>>
>> #0  0x76401035 in raise () from /lib/x86_64-linux-gnu/libc.so.6
>>
>> #1  0x7640479b in abort () from /lib/x86_64-linux-gnu/libc.so.6
>>
>> #2  0x009c4327 in SnoopFilter::lookupRequest(Packet const*,
>> SlavePort const&) () at build/X86/mem/snoop_filter.cc:137
>>
>> #3  0x0099e3ec in CoherentXBar::recvTimingReq(Packet*, short) ()
>> at build/X86/mem/coherent_xbar.cc:192
>>
>> #4  0x0134117a in Cache::sendWriteQueuePacket(WriteQueueEntry*)
>> () at build/X86/mem/cache/cache.cc:3528
>>
>> #5  0x01341a61 in Cache::CacheReqPacketQueue::sendDeferredPacket()
>> () at build/X86/mem/cache/cache.cc:3731
>>
>> #6  0x01417b41 in EventQueue::serviceOne() () at
>> build/X86/sim/eventq.cc:228
>>
>> #7  0x01426e08 in doSimLoop(EventQueue*) () at
>> build/X86/sim/simulate.cc:219
>>
>> #8  0x014274eb in simulate(unsigned long) () at
>> build/X86/sim/simulate.cc:132
>>
>>
>>
>>
>>
>> Best Regards,
>>
>> Thanks,
>>
>> Avais
>>
>>

[gem5-users] Error message

2018-05-04 Thread Muhammad Avais
Dear All,

 Can anyone suggest what this error message indicates?

read error, exit
Exiting @ tick 104585181462 because exiting with last active thread context

And how can I debug it?

Many thanks,
Avais

Re: [gem5-users] Dynamic Associative cache in gem5

2018-05-03 Thread Muhammad Avais
Dear Nikos,

Many thanks for your reply. Your suggestions are always helpful for
solving issues in gem5. I will try to use WriteClean packets.

Many thanks,
Kind Regards,
Avais

On Thu, May 3, 2018 at 11:24 PM, Nikos Nikoleris <nikos.nikole...@arm.com>
wrote:

> Hi Avais,
>
>
>
> I am not sure exactly what you mean by not invalidating but based on the
> assertion I am assuming that for some blocks you write back any dirty data
> and then you retain them in the cache, but it is not clear how you handle
> subsequent accesses and snoops to any of these blocks.
>
>
>
> I can only speculate about the problem, but this might help. I suppose
> that you use WritebackDirty packets to write dirty data to the memory
> below. WritebackDirty packets are treated as evicts and the snoop filter
> believes that the cache doesn’t have the data any longer. If you are on a
> reasonably recent version of gem5, you could try using WriteClean packets
> which have the exact same property of carrying dirty data without the
> additional property of implying an eviction.
>
> Nikos
>
>
>
>
>
> *From: *gem5-users <gem5-users-boun...@gem5.org> on behalf of Muhammad
> Avais <avais.suh...@gmail.com>
> *Reply-To: *gem5 users mailing list <gem5-users@gem5.org>
> *Date: *Thursday, 3 May 2018 at 06:44
> *To: *gem5 users mailing list <gem5-users@gem5.org>
> *Subject: *[gem5-users] Dynamic Associative cache in gem5
>
>
>
> Dear All,
>
>I am trying to implement a dynamically associative cache. I have found
> that after decreasing the associativity, if I do not invalidate blocks then
> the following problem occurs.
>
>
>
> Actually, I do not want to invalidate blocks that are outside the
> associativity; can anyone suggest a solution?
>
>
>
> #0  0x76401035 in raise () from /lib/x86_64-linux-gnu/libc.so.6
>
> #1  0x7640479b in abort () from /lib/x86_64-linux-gnu/libc.so.6
>
> #2  0x009c4327 in SnoopFilter::lookupRequest(Packet const*,
> SlavePort const&) () at build/X86/mem/snoop_filter.cc:137
>
> #3  0x0099e3ec in CoherentXBar::recvTimingReq(Packet*, short) ()
> at build/X86/mem/coherent_xbar.cc:192
>
> #4  0x0134117a in Cache::sendWriteQueuePacket(WriteQueueEntry*)
> () at build/X86/mem/cache/cache.cc:3528
>
> #5  0x01341a61 in Cache::CacheReqPacketQueue::sendDeferredPacket()
> () at build/X86/mem/cache/cache.cc:3731
>
> #6  0x01417b41 in EventQueue::serviceOne() () at
> build/X86/sim/eventq.cc:228
>
> #7  0x01426e08 in doSimLoop(EventQueue*) () at
> build/X86/sim/simulate.cc:219
>
> #8  0x014274eb in simulate(unsigned long) () at
> build/X86/sim/simulate.cc:132
>
>
>
>
>
> Best Regards,
>
> Thanks,
>
> Avais
>
>

[gem5-users] Dynamic Associative cache in gem5

2018-05-02 Thread Muhammad Avais
 Dear All,
   I am trying to implement a dynamically associative cache. I have found
that after decreasing the associativity, if I do not invalidate blocks then
the following problem occurs.

Actually, I do not want to invalidate blocks that are outside the
associativity; can anyone suggest a solution?

#0  0x76401035 in raise () from /lib/x86_64-linux-gnu/libc.so.6
#1  0x7640479b in abort () from /lib/x86_64-linux-gnu/libc.so.6
#2  0x009c4327 in SnoopFilter::lookupRequest(Packet const*,
SlavePort const&) () at build/X86/mem/snoop_filter.cc:137
#3  0x0099e3ec in CoherentXBar::recvTimingReq(Packet*, short) () at
build/X86/mem/coherent_xbar.cc:192
#4  0x0134117a in Cache::sendWriteQueuePacket(WriteQueueEntry*) ()
at build/X86/mem/cache/cache.cc:3528
#5  0x01341a61 in Cache::CacheReqPacketQueue::sendDeferredPacket()
() at build/X86/mem/cache/cache.cc:3731
#6  0x01417b41 in EventQueue::serviceOne() () at
build/X86/sim/eventq.cc:228
#7  0x01426e08 in doSimLoop(EventQueue*) () at
build/X86/sim/simulate.cc:219
#8  0x014274eb in simulate(unsigned long) () at
build/X86/sim/simulate.cc:132


Best Regards,
Thanks,
Avais
___
gem5-users mailing list
gem5-users@gem5.org
http://m5sim.org/cgi-bin/mailman/listinfo/gem5-users

Re: [gem5-users] findBlockBySetAndWay(int set, int way) in LRU

2018-04-27 Thread Muhammad Avais
Dear All,
 I found it was an inheritance problem.
Many thanks,
Best Regards,
Avais

On Fri, Apr 27, 2018 at 8:45 PM, Muhammad Avais <avais.suh...@gmail.com>
wrote:

> Dear Eliot,
>
>Just after returning from this function, i am checking the way
> number of returned block, but this way number is wrong. There are no
> accesses between invocation of function and its output.
>
> Many thanks for your help,
> Best Regards,
> Avais
>
> On Fri, Apr 27, 2018 at 8:42 PM, Eliot Moss <m...@cs.umass.edu> wrote:
>
>> On 4/27/2018 7:36 AM, Muhammad Avais wrote:
>>
>>> Dear all,
>>>
>>>   I implemented the function findBlockBySetAndWay(int set, int
>>> way_no) in the lru.cc file, as its parent class (base_set_assoc.hh)
>>> function cannot be applied to the LRU class.
>>>
>>> Code is simple but it gives block with wrong way_no.
>>>
>>> CacheBlk* LRU::findBlockBySetAndWay(int set, int way_no){
>>>   for(int i=0; i<sets[set].assoc; i++){
>>> if(sets[set].blks[i]->way == way_no){
>>>   return sets[set].blks[i];
>>> }
>>>   }
>>>   return nullptr;
>>> }
>>>
>>> It is very strange. Can anyone guide about mistake in code?
>>>
>>
>> My first thought is that there is nothing wrong, per se, with the
>> code, but that later cache accesses caused a replacement and the
>> block's information changed accordingly.  I suspect that if you
>> want stable information, you will need to copy the block.
>>
>> Regards - Eliot Moss
>> ___
>> gem5-users mailing list
>> gem5-users@gem5.org
>> http://m5sim.org/cgi-bin/mailman/listinfo/gem5-users
>
>
>
___
gem5-users mailing list
gem5-users@gem5.org
http://m5sim.org/cgi-bin/mailman/listinfo/gem5-users

Re: [gem5-users] findBlockBySetAndWay(int set, int way) in LRU

2018-04-27 Thread Muhammad Avais
Dear Eliot,

   Just after returning from this function, I am checking the way
number of the returned block, but this way number is wrong. There are no
accesses between the invocation of the function and its output.

Many thanks for your help,
Best Regards,
Avais

On Fri, Apr 27, 2018 at 8:42 PM, Eliot Moss <m...@cs.umass.edu> wrote:

> On 4/27/2018 7:36 AM, Muhammad Avais wrote:
>
>> Dear all,
>>
>>   I implemented the function findBlockBySetAndWay(int set, int
>> way_no) in the lru.cc file, as its parent class (base_set_assoc.hh)
>> function cannot be applied to the LRU class.
>>
>> Code is simple but it gives block with wrong way_no.
>>
>> CacheBlk* LRU::findBlockBySetAndWay(int set, int way_no){
>>   for(int i=0; i<sets[set].assoc; i++){
>> if(sets[set].blks[i]->way == way_no){
>>   return sets[set].blks[i];
>> }
>>   }
>>   return nullptr;
>> }
>>
>> It is very strange. Can anyone guide about mistake in code?
>>
>
> My first thought is that there is nothing wrong, per se, with the
> code, but that later cache accesses caused a replacement and the
> block's information changed accordingly.  I suspect that if you
> want stable information, you will need to copy the block.
>
> Regards - Eliot Moss
> ___
> gem5-users mailing list
> gem5-users@gem5.org
> http://m5sim.org/cgi-bin/mailman/listinfo/gem5-users
___
gem5-users mailing list
gem5-users@gem5.org
http://m5sim.org/cgi-bin/mailman/listinfo/gem5-users

[gem5-users] findBlockBySetAndWay(int set, int way) in LRU

2018-04-27 Thread Muhammad Avais
Dear all,

 I implemented the function findBlockBySetAndWay(int set, int
way_no) in the lru.cc file, as its parent class (base_set_assoc.hh)
function cannot be applied to the LRU class.

The code is simple, but it returns a block with the wrong way_no.

CacheBlk* LRU::findBlockBySetAndWay(int set, int way_no){
  for(int i=0; i<sets[set].assoc; i++){
    if(sets[set].blks[i]->way == way_no){
      return sets[set].blks[i];
    }
  }
  return nullptr;
}

It is very strange. Can anyone point out the mistake in the code?

Many thanks,
Best Regards,
Avais
___
gem5-users mailing list
gem5-users@gem5.org
http://m5sim.org/cgi-bin/mailman/listinfo/gem5-users

Re: [gem5-users] Write non allocate policy for L2 cache

2018-04-25 Thread Muhammad Avais
Dear Nikos,

 Many thanks for the guidance; I will try to use this patch.

Best Regards,
Avais

On Wed, Apr 25, 2018 at 8:34 PM, Nikos Nikoleris <nikos.nikole...@arm.com>
wrote:

> If this related to one of the previous problem you observed with
> unnecessary writebacks in exclusive caches, the issue is already addressed
> with this commit https://github.com/gem5/gem5/commit/
> e8236503ce70ea83f4f61716f54421b32ce009ce#diff-
> f52105df841ff570a96503e5df9d356e . But in any case, it is certainly a
> good idea, to test changes one by one to get to the source of the problem.
>
>
>
> Nikos
>
>
>
> *From: *gem5-users <gem5-users-boun...@gem5.org> on behalf of Muhammad
> Avais <avais.suh...@gmail.com>
> *Reply-To: *gem5 users mailing list <gem5-users@gem5.org>
> *Date: *Wednesday, 25 April 2018 at 12:25
>
> *To: *gem5 users mailing list <gem5-users@gem5.org>
> *Subject: *Re: [gem5-users] Write non allocate policy for L2 cache
>
>
>
> Dear Nikos,
>
>
>
>   Many thanks for your help.
>
> May be assertion problem is because of some other part. I will try to find
> it out.
>
>
>
> In gem5, dirty blocks in L2 cache are sent to L1 as dirty blocks and they
> are marked as clean in L2 cache. Actually, i am also trying to skip this
> behavior. May be it has caused problem.
>
>
>
> Many thanks,
>
> Best Regards,
>
> Avais
>
>
>
> On Wed, Apr 25, 2018 at 7:40 PM, Nikos Nikoleris <nikos.nikole...@arm.com>
> wrote:
>
> Hi Avais,
>
>
>
> From the code you just sent, I think what you are trying to do is actually
> even simpler: essentially you want writebacks to pass through the L2
> (without allocating) down to the L3, where they allocate. If that’s correct,
> you actually don’t even have to use the tempBlock. In Cache::access(), for
> writebacks, bypass the call to allocateBlock, assign blk = nullptr, and make
> sure that you return false. That will have the effect you’re looking for.
>
>
>
> As for the assertion you’re hitting, I am not entirely sure why it
> happens. The coherent xbar uses the pkt->req pointer for its routing
> decisions. For some reason, after handling a request, it didn’t clean up
> the routeTo map and, after some time, when a new packet reuses the same
> memory and therefore the pkt->req, it finds the old route in the map and
> crashes. It might be worth making sure that you are on the latest version
> of gem5.
>
>
>
> Nikos
>
>
>
>
>
> *From: *gem5-users <gem5-users-boun...@gem5.org> on behalf of Muhammad
> Avais <avais.suh...@gmail.com>
> *Reply-To: *gem5 users mailing list <gem5-users@gem5.org>
> *Date: *Wednesday, 25 April 2018 at 09:59
> *To: *gem5 users mailing list <gem5-users@gem5.org>
> *Subject: *Re: [gem5-users] Write non allocate policy for L2 cache
>
>
>
> Dear Nikos,
>
>
>
>  Many thanks for your reply. I am trying to implement Write non
> allocation policy for L2 cache. As suggested by you, I used tempblock to
> fill in case of writeback miss, still following error appears
>
>
>
> gem5.opt: build/X86/mem/coherent_xbar.cc:303: bool
> CoherentXBar::recvTimingReq(PacketPtr, PortID): Assertion
> `routeTo.find(pkt->req) == routeTo.end()' failed.
>
>
>
> I made following modification in gem5
>
>
>
> if(WR_NON_ALLOC){
>
> //assert(!tempBlock->isValid());
>
> incMissCount(pkt);
>
> blk = tempBlock;
>
> blk->set = tags->extractSet(pkt->getAddr());
>
> blk->tag = tags->extractTag(pkt->getAddr());
>
> blk->status |= BlkValid;
>
> if (pkt->cmd == MemCmd::WritebackDirty) {
>
> blk->status |= BlkDirty;
>
> }
>
> std::memcpy(blk->data, pkt->getConstPtr<uint8_t>(), blkSize);
>
> return true;
>
> }
>
> else{
>
>   Allocate as previously
> }
>
>
>
> Is there still some mistake in my implementation?
>
>
>
> Best Regards,
>
> Many thanks,
>
> Avais
>
>
>
>
>
> On Tue, Apr 24, 2018 at 8:43 PM, Nikos Nikoleris <nikos.nikole...@arm.com>
> wrote:
>
> Hi Avais,
>
>
>
> A quick and easy way to achieve this, would be to actually use the
> tempBlock to fill-in the dcache. The tempBlock will be automatically
> written back to the L2 as soon as the WriteReq is satisfied. This solution
> would actually incur a bit of extra traffic between the L1 and L2 but at
> least it won’t trigger any replacements/evictions in the L1 and it will
> fill the L2.
>
>
>
> Alternative solutions would require changes to the way we handle
> coherence. A need

Re: [gem5-users] Write non allocate policy for L2 cache

2018-04-25 Thread Muhammad Avais
Dear Nikos,

  Many thanks for your help.
Maybe the assertion problem is caused by some other part. I will try to find
it out.

In gem5, dirty blocks in the L2 cache are sent to L1 as dirty blocks, and they
are marked as clean in the L2 cache. Actually, I am also trying to skip this
behavior. Maybe that has caused the problem.

Many thanks,
Best Regards,
Avais

On Wed, Apr 25, 2018 at 7:40 PM, Nikos Nikoleris <nikos.nikole...@arm.com>
wrote:

> Hi Avais,
>
>
>
> From the code you just sent, I think what you are trying to do is actually
> even simpler: essentially you want writebacks to pass through the L2
> (without allocating) down to the L3, where they allocate. If that’s correct,
> you actually don’t even have to use the tempBlock. In Cache::access(), for
> writebacks, bypass the call to allocateBlock, assign blk = nullptr, and make
> sure that you return false. That will have the effect you’re looking for.
>
>
>
> As for the assertion you’re hitting, I am not entirely sure why it
> happens. The coherent xbar uses the pkt->req pointer for its routing
> decisions. For some reason, after handling a request, it didn’t clean up
> the routeTo map and, after some time, when a new packet reuses the same
> memory and therefore the pkt->req, it finds the old route in the map and
> crashes. It might be worth making sure that you are on the latest version
> of gem5.
>
>
>
> Nikos
>
>
>
>
>
> *From: *gem5-users <gem5-users-boun...@gem5.org> on behalf of Muhammad
> Avais <avais.suh...@gmail.com>
> *Reply-To: *gem5 users mailing list <gem5-users@gem5.org>
> *Date: *Wednesday, 25 April 2018 at 09:59
> *To: *gem5 users mailing list <gem5-users@gem5.org>
> *Subject: *Re: [gem5-users] Write non allocate policy for L2 cache
>
>
>
> Dear Nikos,
>
>
>
>  Many thanks for your reply. I am trying to implement Write non
> allocation policy for L2 cache. As suggested by you, I used tempblock to
> fill in case of writeback miss, still following error appears
>
>
>
> gem5.opt: build/X86/mem/coherent_xbar.cc:303: bool
> CoherentXBar::recvTimingReq(PacketPtr, PortID): Assertion
> `routeTo.find(pkt->req) == routeTo.end()' failed.
>
>
>
> I made following modification in gem5
>
>
>
> if(WR_NON_ALLOC){
>
> //assert(!tempBlock->isValid());
>
> incMissCount(pkt);
>
> blk = tempBlock;
>
> blk->set = tags->extractSet(pkt->getAddr());
>
> blk->tag = tags->extractTag(pkt->getAddr());
>
> blk->status |= BlkValid;
>
> if (pkt->cmd == MemCmd::WritebackDirty) {
>
> blk->status |= BlkDirty;
>
> }
>
> std::memcpy(blk->data, pkt->getConstPtr<uint8_t>(), blkSize);
>
> return true;
>
> }
>
> else{
>
>   Allocate as previously
> }
>
>
>
> Is there still some mistake in my implementation?
>
>
>
> Best Regards,
>
> Many thanks,
>
> Avais
>
>
>
>
>
> On Tue, Apr 24, 2018 at 8:43 PM, Nikos Nikoleris <nikos.nikole...@arm.com>
> wrote:
>
> Hi Avais,
>
>
>
> A quick and easy way to achieve this, would be to actually use the
> tempBlock to fill-in the dcache. The tempBlock will be automatically
> written back to the L2 as soon as the WriteReq is satisfied. This solution
> would actually incur a bit of extra traffic between the L1 and L2 but at
> least it won’t trigger any replacements/evictions in the L1 and it will
> fill the L2.
>
>
>
> Alternative solutions would require changes to the way we handle
> coherence. A needsWritable MSHR that handles WriteReq misses becomes the
> point of ordering and not filling in the L1 at all would cause problems
> with ordering.
>
>
>
> Nikos
>
>
>
> *From: *gem5-users <gem5-users-boun...@gem5.org> on behalf of Muhammad
> Avais <avais.suh...@gmail.com>
> *Reply-To: *gem5 users mailing list <gem5-users@gem5.org>
> *Date: *Tuesday, 24 April 2018 at 12:33
> *To: *gem5 users mailing list <gem5-users@gem5.org>
> *Subject: *[gem5-users] Write non allocate policy for L2 cache
>
>
>
> Dear all,
>
>  I want to implement write-non-allocate policy in gem5. Can
> any one give some hint?
>
>
>
> In the "bool Cache::access(PacketPtr pkt, CacheBlk *&blk, Cycles &lat,
> PacketList &writebacks)" function in cache.cc, where blocks are
> allocated, I have added the following line:
>
>
>
> if(WR_NON_ALLOC){
>
> return false;
>
> }
>
>
>
> but it gives following error
>
>
>
> gem5.opt: build/X86/mem/coherent_xbar.cc:303: bool
> CoherentXBar::recvTimingReq(PacketPtr, PortID): Assertion
> `routeTo.find(pkt->req) == routeTo.end()' failed.

[gem5-users] Write non allocate policy for L2 cache

2018-04-24 Thread Muhammad Avais
Dear all,
 I want to implement a write-non-allocate policy in gem5. Can
anyone give some hints?

In the "bool Cache::access(PacketPtr pkt, CacheBlk *&blk, Cycles &lat,
PacketList &writebacks)" function in the cache.cc file, where blocks are
allocated, I have added the following lines:

if(WR_NON_ALLOC){
    return false;
}

but it gives following error

gem5.opt: build/X86/mem/coherent_xbar.cc:303: bool
CoherentXBar::recvTimingReq(PacketPtr, PortID): Assertion
`routeTo.find(pkt->req) == routeTo.end()' failed.
Program aborted at tick 56513387376

Can anyone point out the problem or suggest a better solution?


Many thanks,
Best Regards,
Avais
___
gem5-users mailing list
gem5-users@gem5.org
http://m5sim.org/cgi-bin/mailman/listinfo/gem5-users

Re: [gem5-users] L2 stores data or instruction

2018-01-18 Thread Muhammad Avais
Dear Nikos,
  Many thanks,
  I will do accordingly,

Best Regards
Avais

On Thu, Jan 18, 2018 at 8:29 PM, Nikos Nikoleris <nikos.nikole...@arm.com>
wrote:

> Hi Avais,
>
> If I am not missing something, it should be quite easy to mark the
> blocks that are filled due to an instruction fetch.
>
> You would first need to add the relevant flag in the CacheBlk class and
> set the flag in the Cache::handleFill if pkt->req->isInstFetch(). Make
> sure that you initialize it to false and you reset when you invalidate
> the block.
>
> Nikos
>
>
> On 01/18/18 11:13, Muhammad Avais wrote:
>
>> Dear All,
>>   Is there any way to know that block in L2 cache stores
>> data or instruction?
>>
>> Many Thanks
>> Best Regards
>> Avais
>>
>>
>> ___
>> gem5-users mailing list
>> gem5-users@gem5.org
>> http://m5sim.org/cgi-bin/mailman/listinfo/gem5-users
>>
>> IMPORTANT NOTICE: The contents of this email and any attachments are
> confidential and may also be privileged. If you are not the intended
> recipient, please notify the sender immediately and do not disclose the
> contents to any other person, use it for any purpose, or store or copy the
> information in any medium. Thank you.
> ___
> gem5-users mailing list
> gem5-users@gem5.org
> http://m5sim.org/cgi-bin/mailman/listinfo/gem5-users
___
gem5-users mailing list
gem5-users@gem5.org
http://m5sim.org/cgi-bin/mailman/listinfo/gem5-users

[gem5-users] (no subject)

2018-01-18 Thread Muhammad Avais
Dear All,
  Has anyone made a list of read-intensive or data-intensive
SPEC2006 benchmarks, or some other benchmarks?

Many Thanks
Best Regards
Avais
___
gem5-users mailing list
gem5-users@gem5.org
http://m5sim.org/cgi-bin/mailman/listinfo/gem5-users

[gem5-users] L2 stores data or instruction

2018-01-18 Thread Muhammad Avais
Dear All,
 Is there any way to know whether a block in the L2 cache stores
data or an instruction?

Many Thanks
Best Regards
Avais
___
gem5-users mailing list
gem5-users@gem5.org
http://m5sim.org/cgi-bin/mailman/listinfo/gem5-users

Re: [gem5-users] Invalid Program counter of Packet

2018-01-16 Thread Muhammad Avais
Dear Nikos,

  Many thanks for your reply. I have not intentionally
specified any prefetcher.
But the invalid program counter problem is solved by your hint. In the
cache.cc file, inside the recvTimingReq() function, I saw that one new
request is generated for the software prefetch packet. By copying the
program counter value (as highlighted in the code below), I was able to get
valid program counter values.

 Does this mean there is some software prefetcher enabled by
default?

Many Thanks
Best Regards
Avais

if (pkt->cmd.isSWPrefetch()) {
assert(needsResponse);
assert(pkt->req->hasPaddr());
assert(!pkt->req->isUncacheable());

// There's no reason to add a prefetch as an additional target
// to an existing MSHR. If an outstanding request is already
// in progress, there is nothing for the prefetch to do.
// If this is the case, we don't even create a request at all.
PacketPtr pf = nullptr;

if (!mshr) {
// copy the request and create a new SoftPFReq packet
RequestPtr req = new Request(pkt->req->getPaddr(),
 pkt->req->getSize(),
 pkt->req->getFlags(),
 pkt->req->masterId());
if(!pkt->req->hasPC())
invalid_onmiss[Request::wbMasterId]++;
else
req->setPC(pkt->req->getPC());
pf = new Packet(req, pkt->cmd);
pf->allocate();
assert(pf->getAddr() == pkt->getAddr());
assert(pf->getSize() == pkt->getSize());
}

On Mon, Jan 15, 2018 at 8:23 PM, Nikos Nikoleris <nikos.nikole...@arm.com>
wrote:

> Hi Avais,
>
> Are you using any kind of prefetcher? Requests issued by a prefetcher
> won't have a valid PC either.
>
> Generally the PC is stored in the request object of a packet. If the
> request has been instantiated and initialized in the memory system
> (e.g., writebacks, prefetches), it won't have a valid PC.
>
> Thanks,
>
> Nikos
>
> On 01/15/18 11:17, Muhammad Avais wrote:
>
>> Dear Nikos,
>>
>>   Many thanks for your reply. Actually, I wanted to see
>> the Program counter value of Load/Store instructions that brought blocks
>> in L2 cache
>> For evicted blocks from L1 cache, i have copied the Program counter
>> value in Request of Packet in writebackblock() function. Now, I can see
>> the Program counter of L2 blocks brought into L2 because of writeback miss
>>
>> Problem is for blocks brought into L2 because of miss in L2. For some
>> benchmarks(astar, gobmk), i see invalid PC value for many packets.
>>
>> How can i modify gem5 to get valid PC value for  blocks brought into L2
>> because of miss in L2
>>
>> Many Thanks,
>> Best Regards,
>> Avais
>>
>> On Sat, Jan 13, 2018 at 12:21 AM, Nikos Nikoleris
>> <nikos.nikole...@arm.com <mailto:nikos.nikole...@arm.com>> wrote:
>>
>> Hi Avais,
>>
>> If I remember correctly, this is expected. Evictions, for example,
>> won't
>> have a valid program counter.
>>
>> Nikos
>>
>> On 01/12/18 11:44, hassan yamin wrote:
>>
>> For the packets you are getting invalid program counter, can you
>> check
>> is it read or write packet?
>>
>> On Jan 12, 2018 8:20 PM, "Muhammad Avais"
>> <avais.suh...@gmail.com <mailto:avais.suh...@gmail.com>
>> <mailto:avais.suh...@gmail.com <mailto:avais.suh...@gmail.com>>>
>> wrote:
>>
>>  Dear All,
>> I want to get the Program counter value of
>> packets at
>>  some points in gem5.
>>  For some packets, i am getting invalid program counter
>> value(in
>>  cache::handlefill() function mostly)
>>
>>  Can anyone suggest, how can i get valid program counter.
>>
>>
>>  Many Thanks
>>  Best Regards
>>  Avais
>>
>>  ___
>>  gem5-users mailing list
>> gem5-users@gem5.org <mailto:gem5-users@gem5.org>
>> <mailto:gem5-users@gem5.org <mailto:gem5-users@gem5.org>>
>> http://m5sim.org/cgi-bin/mailman/listinfo/gem5-users
>> <http://m5sim.org/cgi-bin/mailman/listinfo/gem5-users>

Re: [gem5-users] Invalid Program counter of Packet

2018-01-15 Thread Muhammad Avais
Dear Nikos,

 Many thanks for your reply. Actually, I wanted to see the
program counter value of the load/store instructions that brought blocks
into the L2 cache.
For blocks evicted from the L1 cache, I have copied the program counter value
into the Request of the Packet in the writebackblock() function. Now, I can
see the program counter of blocks brought into L2 because of a writeback miss.

The problem is with blocks brought into L2 because of a miss in L2. For some
benchmarks (astar, gobmk), I see an invalid PC value for many packets.

How can I modify gem5 to get a valid PC value for blocks brought into L2
because of a miss in L2?

Many Thanks,
Best Regards,
Avais

On Sat, Jan 13, 2018 at 12:21 AM, Nikos Nikoleris <nikos.nikole...@arm.com>
wrote:

> Hi Avais,
>
> If I remember correctly, this is expected. Evictions, for example, won't
> have a valid program counter.
>
> Nikos
>
> On 01/12/18 11:44, hassan yamin wrote:
>
>> For the packets you are getting invalid program counter, can you check
>> is it read or write packet?
>>
>> On Jan 12, 2018 8:20 PM, "Muhammad Avais" <avais.suh...@gmail.com
>> <mailto:avais.suh...@gmail.com>> wrote:
>>
>> Dear All,
>>I want to get the Program counter value of packets at
>> some points in gem5.
>> For some packets, i am getting invalid program counter value(in
>> cache::handlefill() function mostly)
>>
>> Can anyone suggest, how can i get valid program counter.
>>
>>
>> Many Thanks
>> Best Regards
>> Avais
>>
>> ___
>> gem5-users mailing list
>> gem5-users@gem5.org <mailto:gem5-users@gem5.org>
>> http://m5sim.org/cgi-bin/mailman/listinfo/gem5-users
>> <http://m5sim.org/cgi-bin/mailman/listinfo/gem5-users>
>>
>>
>>
>> ___
>> gem5-users mailing list
>> gem5-users@gem5.org
>> http://m5sim.org/cgi-bin/mailman/listinfo/gem5-users
>>
>> IMPORTANT NOTICE: The contents of this email and any attachments are
> confidential and may also be privileged. If you are not the intended
> recipient, please notify the sender immediately and do not disclose the
> contents to any other person, use it for any purpose, or store or copy the
> information in any medium. Thank you.
>
> ___
> gem5-users mailing list
> gem5-users@gem5.org
> http://m5sim.org/cgi-bin/mailman/listinfo/gem5-users
>
___
gem5-users mailing list
gem5-users@gem5.org
http://m5sim.org/cgi-bin/mailman/listinfo/gem5-users

[gem5-users] Invalid Program counter of Packet

2018-01-12 Thread Muhammad Avais
Dear All,
  I want to get the program counter value of packets at some points
in gem5.
For some packets, I am getting an invalid program counter value (mostly in
the Cache::handleFill() function).

Can anyone suggest how I can get a valid program counter?


Many Thanks
Best Regards
Avais
___
gem5-users mailing list
gem5-users@gem5.org
http://m5sim.org/cgi-bin/mailman/listinfo/gem5-users

Re: [gem5-users] MOESI protocol study

2017-10-19 Thread Muhammad Avais
Dear Boris,
   Thanks for help.
Best Regards
Avais

On Fri, Oct 20, 2017 at 3:46 AM, Boris Shingarov <shinga...@labware.com>
wrote:

> Avais,
>
> I think the standard textbook is:
> Sorin, Hill, Wood: A Primer on Memory Consistency and Cache Coherence.
> Published by Morgan and Claypool.
>
> Boris
>
> -"gem5-users" <gem5-users-boun...@gem5.org> wrote: -
> To: gem5 users mailing list <gem5-users@gem5.org>
> From: Muhammad Avais
> Sent by: "gem5-users"
> Date: 10/19/2017 03:47AM
> Subject: [gem5-users] MOESI protocol study
>
>
> Dear All,
>
>I want to study the MOESI cache coherence protocol. Can
> anyone suggest some study material that can be useful in understanding
> MOESI protocol for cache coherence.
>
> Many Thanks
>
> Avais
> ___
> gem5-users mailing list
> gem5-users@gem5.org
> http://m5sim.org/cgi-bin/mailman/listinfo/gem5-users
>
> ___
> gem5-users mailing list
> gem5-users@gem5.org
> http://m5sim.org/cgi-bin/mailman/listinfo/gem5-users
>
___
gem5-users mailing list
gem5-users@gem5.org
http://m5sim.org/cgi-bin/mailman/listinfo/gem5-users

[gem5-users] MOESI protocol study

2017-10-19 Thread Muhammad Avais
Dear All,

   I want to study the MOESI cache coherence protocol. Can
anyone suggest some study material that would be useful for understanding
the MOESI protocol?

Many Thanks

Avais
___
gem5-users mailing list
gem5-users@gem5.org
http://m5sim.org/cgi-bin/mailman/listinfo/gem5-users

[gem5-users] (no subject)

2017-10-10 Thread Muhammad Avais
Hi,
Does gem5 follow any specific cache coherence protocol?
If yes, which one?
Thanks
___
gem5-users mailing list
gem5-users@gem5.org
http://m5sim.org/cgi-bin/mailman/listinfo/gem5-users

[gem5-users] Small entry table creation in gem5

2017-10-09 Thread Muhammad Avais
Hi,

I want to create a small table (256 entries) in gem5 that is accessed on each
cache miss and follows an LRU replacement policy.

Can someone guide me on how to do it? (Which classes should I use or inherit?)

Many Thanks

Avais
___
gem5-users mailing list
gem5-users@gem5.org
http://m5sim.org/cgi-bin/mailman/listinfo/gem5-users

[gem5-users] Dirty writeback stat in GEM5

2017-10-03 Thread Muhammad Avais
Hi,



  I want to share one problem with the dirty-writeback stat in gem5.


  I think that if a dirty block is loaded from the L2 cache into the L1
cache, then this block is marked dirty in the L1 cache (in the handleFill()
function in cache.cc) and clean in the L2 cache (in the satisfyRequest()
function in cache.cc). Further, a clean eviction of such a block from the L1
cache into the L2 cache is treated as a dirty writeback and adds to the
dirty-writeback stat in gem5.


 Should it not be treated as a clean writeback?


Thanks

Avais
___
gem5-users mailing list
gem5-users@gem5.org
http://m5sim.org/cgi-bin/mailman/listinfo/gem5-users

[gem5-users] Simpoints

2017-08-28 Thread Muhammad Avais
Hi,

I want to create simpoints for the SPEC2006 benchmarks. Can anyone tell me
what to set the "--maxinsts=" parameter to while creating simpoints
(particularly for the sjeng and namd benchmarks)?

Many Thanks
Avais
___
gem5-users mailing list
gem5-users@gem5.org
http://m5sim.org/cgi-bin/mailman/listinfo/gem5-users

[gem5-users] Simpoints for Spec2006 benchmarks

2017-08-28 Thread Muhammad Avais
Hi

 Can anyone share with me simpoints for the SPEC2006 benchmarks
(particularly sjeng, namd, and bzip2)?

Many Thanks
Best Regards
Avais
___
gem5-users mailing list
gem5-users@gem5.org
http://m5sim.org/cgi-bin/mailman/listinfo/gem5-users

[gem5-users] (no subject)

2017-08-07 Thread Muhammad Avais
Hi,

  I have a question regarding the 'ResponseLatency' of the Cache. Is
it technology dependent, or does it have a fixed value?

 How should I choose its value?

Thanks
___
gem5-users mailing list
gem5-users@gem5.org
http://m5sim.org/cgi-bin/mailman/listinfo/gem5-users

[gem5-users] Mostly exclusive cache

2017-08-07 Thread Muhammad Avais
Hi,

  I have one question regarding the mostly exclusive cache in gem5.

Although a 'mostly exclusive' cache in gem5 does not allocate blocks on a
miss from higher-level caches, I think the fill latency and response latency
are still added for these blocks.

Is that true? How can it be avoided?


Thanks
___
gem5-users mailing list
gem5-users@gem5.org
http://m5sim.org/cgi-bin/mailman/listinfo/gem5-users

Re: [gem5-users] allocOnFill() function in cache.hh

2017-08-04 Thread Muhammad Avais
Dear Andreas,

 Thanks for your reply. If I want to implement an
exclusive or non-exclusive cache (as the cache implemented by gem5 is not
perfectly exclusive), do I need to check these commands?

Many Thanks
Best Regards
Avais

On Fri, Aug 4, 2017 at 4:29 PM, Andreas Hansson <andreas.hans...@arm.com>
wrote:

> Hi,
>
> As the comment says:
>
> In the case of a mostly exclusive cache, we allocate on fill *if the
> packet did not come from a cache*, thus if we are dealing with a
> whole-line write (the latter behaves much like a writeback), the original
> target packet came from a non-caching source, or if we are performing a
> prefetch or LLSC.
>
> Andreas
>
> From: gem5-users <gem5-users-boun...@gem5.org> on behalf of Muhammad
> Avais <avais.suh...@gmail.com>
> Reply-To: gem5 users mailing list <gem5-users@gem5.org>
> Date: Friday, 4 August 2017 at 03:31
> To: gem5 users mailing list <gem5-users@gem5.org>
> Subject: [gem5-users] allocOnFill() function in cache.hh
>
> Hi,
>
> There is alloconfill() function in gem5. This determines whether data is
> allocated in cache upon miss in L1 cache or not.
>
> Ideally, If cache is mostly exclusive then data should not be allocated in
> cache upon miss in L1 cache.
>
> But this function loads data in 'mostly exclusive' upon miss in L1 cache
> in some cases( if commands are WriteLineReq, ReadReq and WriteReq).
>
> Can anyone explain, why this function checks  commands(WriteLineReq, ReadReq
> and WriteReq) in order to fill data in mostly exclusive cache
>
> https://github.com/gem5/gem5/blob/master/src/mem/cache/cache.hh
> /**
>   * Determine whether we should allocate on a fill or not. If this
>   * cache is mostly inclusive with regards to the upstream cache(s)
>   * we always allocate (for any non-forwarded and cacheable
>   * requests). In the case of a mostly exclusive cache, we allocate
>   * on fill if the packet did not come from a cache, thus if we:
>   * are dealing with a whole-line write (the latter behaves much
>   * like a writeback), the original target packet came from a
>   * non-caching source, or if we are performing a prefetch or LLSC.
>   *
>   * @param cmd Command of the incoming requesting packet
>   * @return Whether we should allocate on the fill
>   */
>   inline bool allocOnFill(MemCmd cmd) const override
>   {
>   return clusivity == Enums::mostly_incl ||
>   cmd == MemCmd::WriteLineReq ||
>   cmd == MemCmd::ReadReq ||
>   cmd == MemCmd::WriteReq ||
>   cmd.isPrefetch() ||
>   cmd.isLLSC();
>   }
>
>
> IMPORTANT NOTICE: The contents of this email and any attachments are
> confidential and may also be privileged. If you are not the intended
> recipient, please notify the sender immediately and do not disclose the
> contents to any other person, use it for any purpose, or store or copy the
> information in any medium. Thank you.
>
> ___
> gem5-users mailing list
> gem5-users@gem5.org
> http://m5sim.org/cgi-bin/mailman/listinfo/gem5-users
>
___
gem5-users mailing list
gem5-users@gem5.org
http://m5sim.org/cgi-bin/mailman/listinfo/gem5-users

[gem5-users] allocOnFill() function in cache.hh

2017-08-03 Thread Muhammad Avais
Hi,

There is an allocOnFill() function in gem5. It determines whether data is
allocated in the cache upon a miss in the L1 cache or not.

Ideally, if a cache is mostly exclusive, then data should not be allocated in
it upon a miss in the L1 cache.

But this function fills data into a 'mostly exclusive' cache upon a miss in
the L1 cache in some cases (if the commands are WriteLineReq, ReadReq, or
WriteReq).

Can anyone explain why this function checks these commands (WriteLineReq,
ReadReq, and WriteReq) in order to fill data into a mostly exclusive cache?

https://github.com/gem5/gem5/blob/master/src/mem/cache/cache.hh
/**
  * Determine whether we should allocate on a fill or not. If this
  * cache is mostly inclusive with regards to the upstream cache(s)
  * we always allocate (for any non-forwarded and cacheable
  * requests). In the case of a mostly exclusive cache, we allocate
  * on fill if the packet did not come from a cache, thus if we:
  * are dealing with a whole-line write (the latter behaves much
  * like a writeback), the original target packet came from a
  * non-caching source, or if we are performing a prefetch or LLSC.
  *
  * @param cmd Command of the incoming requesting packet
  * @return Whether we should allocate on the fill
  */
  inline bool allocOnFill(MemCmd cmd) const override
  {
  return clusivity == Enums::mostly_incl ||
  cmd == MemCmd::WriteLineReq ||
  cmd == MemCmd::ReadReq ||
  cmd == MemCmd::WriteReq ||
  cmd.isPrefetch() ||
  cmd.isLLSC();
  }

[gem5-users] Effect of write latency on performance

2017-07-25 Thread Muhammad Avais
I want to measure the performance improvement from decreasing the write
latency of the L2 cache. But as I reduce the L2 write latency, I observe no
increase in IPC.

Is this always the case, or have I made some mistake?

Can anyone comment on this?

Should I look at some other parameter as an indication of performance
improvement?

Thanks
Avais
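For reference, the cache latencies are set on the Cache SimObject in the
config scripts. A minimal sketch, assuming a gem5 version whose Cache
SimObject exposes tag_latency, data_latency, and response_latency (the sizes
and cycle counts below are made-up values for illustration, not a
recommendation):

```python
# Hedged sketch of a gem5 config fragment (not a complete runnable script):
# the parameter names below are assumed to exist in your gem5 version, and
# the concrete values are illustrative only.
from m5.objects import Cache

class L2Cache(Cache):
    size = '1MB'
    assoc = 8
    tag_latency = 10       # cycles to access the tag array
    data_latency = 10      # cycles to access the data array; lower this to
                           # model a reduced L2 access (write) latency
    response_latency = 10
    mshrs = 20
    tgts_per_mshr = 12
```

Note that a lower L2 latency only helps IPC if the workload actually spends
time waiting on L2 hits; if the L1 filters most accesses, or the program is
bound by DRAM or compute, the change can be invisible in IPC, so it is worth
also checking stats such as the L2 miss latency and memory-stall cycles.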

[gem5-users] subname() function

2017-07-16 Thread Muhammad Avais
Hi,

What is the purpose of the subname() function used in the base.cc file?

Many Thanks
Avais

Re: [gem5-users] Non exclusive cache in gem5

2017-07-15 Thread Muhammad Avais
Hi,
 Thanks for your response.
 In gem5, I have declared the cache 'mostly
exclusive', as recommended by you. I have also set the "writeback_clean"
parameter of the cache closer to the CPU to true.
 Now, I want this 'mostly exclusive' cache to act
as a 'non-exclusive' cache.
 A non-exclusive cache is one that does not fill
on a miss from the cache closer to the CPU; also, when a block is accessed
from a non-exclusive cache, the block is not invalidated.
 I think that in gem5 an accessed block in a
non-exclusive cache is invalidated by the maintainClusivity() function, so
by removing that function I can make the 'mostly exclusive' cache behave as
a 'non-exclusive' cache.
 Is my thinking correct?
 Also, maintainClusivity() is called in two places
in the cache.cc file: in the access() function and in the recvTimingResp()
function. I do not completely understand either function. Can anyone
suggest whether I need to remove the maintainClusivity() call from both
places or not?

Many Thanks
Avais
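For what it's worth, the parameter combination discussed in this thread can
be written down in a config fragment roughly as follows. This is a sketch
only: it assumes a gem5 version whose Cache SimObject exposes the
'clusivity' and 'writeback_clean' parameters, and all sizes and latencies
are placeholder values.

```python
# Sketch of a gem5 config fragment (assumptions: 'clusivity' and
# 'writeback_clean' exist on the Cache SimObject in this gem5 version;
# sizes and latencies are placeholders).
from m5.objects import Cache

class L1DCache(Cache):
    size = '32kB'
    assoc = 2
    tag_latency = 2
    data_latency = 2
    response_latency = 2
    mshrs = 4
    tgts_per_mshr = 20
    # Write clean victims back so the mostly-exclusive L2 gets a copy.
    writeback_clean = True

class L2Cache(Cache):
    size = '256kB'
    assoc = 8
    tag_latency = 10
    data_latency = 10
    response_latency = 10
    mshrs = 20
    tgts_per_mshr = 12
    clusivity = 'mostly_excl'  # do not allocate on fills caused by L1 misses
```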


On Fri, Jul 14, 2017 at 11:37 PM, Jason Lowe-Power <ja...@lowepower.com>
wrote:

> Hello,
>
> The classic cache in gem5 (src/mem/cache and Cache()) is always
> non-inclusive (i.e., it is neither inclusive nor exclusive). You can set
> whether it is "mostly-inclusive" or "mostly-exclusive" as a parameter to
> the Cache SimObject. If the cache is "mostly-exclusive" it will not fill on
> a miss from a cache closer to the CPU (and the opposite for a
> mostly-inclusive). Thus, if you want a mostly-exclusive cache, the caches
> closer to the CPU should set the "writeback_clean" parameter to true (and
> to false if the further cache is mostly-inclusive).
>
> Jason
>
> On Fri, Jul 14, 2017 at 3:16 AM Muhammad Avais <avais.suh...@gmail.com>
> wrote:
>
>> Dear all,
>>
>>  gem5 supports a 'mostly exclusive' cache. How can I
>> modify the code to make it a non-exclusive cache?
>>
>> I think I can do it by removing the maintainClusivity() function from the
>> cache.cc file.
>>
>> Can someone comment on how to do this?
>>
>>
>>
>> Many Thanks

[gem5-users] Packet.hh function

2017-07-12 Thread Muhammad Avais
Can someone explain the following functions in the Packet.hh file?

 Addr getOffset(unsigned int blk_size) const

 Addr getBlockAddr(unsigned int blk_size) const

Many Thanks
Avais
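From reading the code, both helpers appear to be simple bit arithmetic on
the packet's address: getOffset() returns the byte offset within a cache
block, and getBlockAddr() returns the block-aligned address. A Python sketch
of that arithmetic (an interpretation, not gem5 code; blk_size is assumed to
be a power of two):

```python
def get_offset(addr: int, blk_size: int) -> int:
    """Byte offset of addr within its cache block."""
    return addr & (blk_size - 1)

def get_block_addr(addr: int, blk_size: int) -> int:
    """Block-aligned address of the cache block containing addr."""
    return addr & ~(blk_size - 1)

addr = 0x12345
print(hex(get_offset(addr, 64)))      # -> 0x5 (byte 5 within the block)
print(hex(get_block_addr(addr, 64)))  # -> 0x12340 (64-byte-aligned)
```

So for a 64-byte block, getOffset() keeps the low six address bits and
getBlockAddr() clears them; the two always recombine into the original
address.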

[gem5-users] Multiple BlkDirty bits per cacheline

2017-07-12 Thread Muhammad Avais
Dear All,
  gem5 contains a BlkDirty bit that indicates whether a cache line
is modified or not. How can I assign a dirty bit to each byte in a cache
line?

How can I determine which bytes of a cache line have been modified?
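One way to prototype the idea, independent of gem5: replace the single dirty
flag with a per-byte bitmask that is updated on every write. A Python sketch
of the bookkeeping (illustrative only, all names are made up; wiring this
into CacheBlk and the write path in gem5 is a separate exercise):

```python
# Illustrative sketch (not gem5 code): track per-byte dirtiness with a
# bitmask, analogous to replacing the single BlkDirty status bit with one
# bit per byte of the cache line.
class CacheLineDirtyMask:
    def __init__(self, blk_size: int = 64):
        self.blk_size = blk_size
        self.mask = 0  # bit i set => byte i is dirty

    def mark_write(self, offset: int, nbytes: int) -> None:
        """Record that bytes [offset, offset + nbytes) were written."""
        for i in range(offset, offset + nbytes):
            self.mask |= 1 << i

    def is_dirty(self) -> bool:
        return self.mask != 0

    def dirty_bytes(self):
        """List of byte offsets that have been written."""
        return [i for i in range(self.blk_size) if self.mask & (1 << i)]

line = CacheLineDirtyMask()
line.mark_write(4, 2)          # a 2-byte store at offset 4
print(line.dirty_bytes())      # -> [4, 5]
```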

Many Thanks

[gem5-users] Hybrid cache in gem5

2017-07-02 Thread Muhammad Avais
Dear all,
  Can anyone guide me on how to implement a hybrid cache in
gem5? In particular, I want to implement a hybrid cache in which each set
is composed of two types of memory with different latencies.
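There is no built-in hybrid-cache model that I know of, but the latency
asymmetry itself is easy to prototype outside gem5 before touching the
tag/data access path. An illustrative Python sketch (all names and cycle
counts are made up; in gem5 this would correspond to making the data-access
latency depend on which way of the set hit):

```python
# Illustrative sketch (not gem5 code): the first half of each set's ways
# models a fast memory technology, the second half a slow one.
FAST_LATENCY = 2   # hypothetical hit latency, in cycles
SLOW_LATENCY = 6   # hypothetical hit latency, in cycles

def hit_latency(way: int, assoc: int = 8) -> int:
    """Hit latency for a given way in a hybrid set."""
    return FAST_LATENCY if way < assoc // 2 else SLOW_LATENCY

print([hit_latency(w) for w in range(8)])  # -> [2, 2, 2, 2, 6, 6, 6, 6]
```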

Many Thanks
Avais

[gem5-users] Problem in Gem5 Code regarding 'Mostly Exclusive' cache

2017-06-18 Thread Muhammad Avais
Hi,
 I found one misleading stat in gem5. I added a 'mostly
exclusive' cache just below the data cache that stores only dirty blocks
coming out of the data cache. In the stats file, I found that the dirty
writebacks out of this 'mostly exclusive' cache are greater than the
WritebackDirty hits into this cache:
 WritebackDirty_hits::total < writebacks::total (which does
not seem correct).
 During debugging, I found the following reason for this problem.
 In a 'mostly exclusive' cache, data coming from the lower-level
cache is stored in the 'tempBlock' rather than in the actual cache, because
mostly exclusive caches are not required to store data coming from
lower-level caches.
 When a dirty block temporarily stored in the 'tempBlock' is flushed
back to the lower-level cache (L2), extra writebacks from the 'mostly
exclusive' cache are generated.
 I think this is not correct: blocks stored in the 'tempBlock'
should not be written back to the lower-level cache, or at least they
should not contribute to the 'writebacks' stat in gem5.

 The 'tempBlock' writebacks are generated by the following code in
the "void Cache::recvTimingResp(PacketPtr pkt)" function in the
gem5/src/mem/cache/cache.cc file:

if (blk == tempBlock && tempBlock->isValid()) {
    // We use forwardLatency here because we are copying
    // Writebacks/CleanEvicts to write buffer. It specifies the
    // latency to allocate an internal buffer and to schedule an
    // event to the queued port.
    if (blk->isDirty() || writebackClean) {
        PacketPtr wbPkt = writebackBlk(blk);
        allocateWriteBuffer(wbPkt, forward_time);
        // Set BLOCK_CACHED flag if cached above.
        if (isCachedAbove(wbPkt))
            wbPkt->setBlockCached();
    } else {
        PacketPtr wcPkt = cleanEvictBlk(blk);
        // Check to see if block is cached above. If not allocate
        // write buffer
        if (isCachedAbove(wcPkt))
            delete wcPkt;
        else
            allocateWriteBuffer(wcPkt, forward_time);
    }
    invalidateBlock(blk);
}
Can anyone please tell me whether the above code needs some modification or
not?

Thanks