Hi Radhika,

I solved my problem by removing the 100-packet limitation in the gem5 source code. It may not be the best solution, but it works. I will try your solution as soon as possible.
Thanks

+-------------------------------+
| Louisa Bessad                 |
| PhD student - LIRMM - Sysmic  |
| Bâtiment 4 Bureau H2.2        |
+-------------------------------+

On 22/01/2016 14:31, Radhika Jagtap wrote:
> Hi Louisa,
>
> For trace generation, the ROB, Load Queue and Store Queue are set to very
> large sizes: 512, 128 and 128 entries respectively.
>
> Because of this artificially large dimensioned core, hundreds of requests
> can be outstanding. In all my validation so far, I have relied on these
> requests causing misses in the L1 cache, which makes the cache block (no
> free MSHRs/targets). This in turn means the core stops sending requests.
> In your setup, it could be that there are many hits, so the cache does not
> block but has too many requests/responses queued up, and we fill up the
> 100-entry port queue.
>
> Please could you check a few things for me while I look into a solution?
>
> 1. Is the clock for the O3 core and the L1 caches the same?
>
> 2. Please could you confirm that there is no panic when you run with a
>    normally dimensioned core? In configs/common/CpuConfig.py, in the
>    function config_etrace, comment out the following lines:
>
>    cpu.numROBEntries = 512
>    cpu.LQEntries = 128
>    cpu.SQEntries = 128
>
> Thanks,
> Radhika

_______________________________________________
gem5-users mailing list
[email protected]
http://m5sim.org/cgi-bin/mailman/listinfo/gem5-users
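The quoted suggestion can be sketched as follows. This is a minimal, self-contained illustration, not the actual gem5 source: the `Cpu` stub below is a hypothetical stand-in for gem5's O3 CPU object, and only the function name `config_etrace` and the three attribute names come from the email above.

```python
# Hedged sketch of the config_etrace sizing described in the email.
# The Cpu class here is a stand-in for gem5's O3 CPU SimObject; in gem5
# the assignments live in configs/common/CpuConfig.py.

class Cpu:
    """Stand-in for gem5's O3 CPU SimObject (illustration only)."""
    pass

def config_etrace(cpu):
    # Trace generation sizes the core artificially large so that
    # hundreds of requests can be in flight at once.
    cpu.numROBEntries = 512
    cpu.LQEntries = 128
    cpu.SQEntries = 128
    # To check behaviour with a normally dimensioned core, comment out
    # the three assignments above, as the email suggests.

cpu = Cpu()
config_etrace(cpu)
print(cpu.numROBEntries, cpu.LQEntries, cpu.SQEntries)  # 512 128 128
```

With the three assignments commented out, the core keeps its default dimensions and far fewer requests stay outstanding, which is the condition Radhika asks Louisa to test.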
