Hi Varun,

> How do I know if the benchmarks ran correctly?
This isn't straightforward. One way is to check the benchmark's output.
However, sometimes the run takes too long for that to be practical. Another
option is to run a smaller input size and check its output. If it is
correct on the smaller input, you can reasonably assume it will also be
correct on the larger one.
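
For example, with the flags from your command below (--bench and --maxinst
come from your modified se.py, so adjust to whatever your script actually
accepts), you could first cap the run at something small:

  build/ALPHA/gem5.opt configs/example/se.py --cpu-type=TimingSimpleCPU \
      --caches --l2cache --bench leslie3d --maxinst=1000000

and diff the program's output against a known-good run of the same input.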

> Any way to track dirty blocks in the caches?
Not without modifying the code. You'll have to dig into the cache code to
do this. The blocks have a "dirty" field that is likely tracked correctly,
but you'll have to add a new interface to access that information from
outside the cache. While you're at it, consider how this is done in
hardware :).
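
As a rough sketch of what that interface could look like (names here are
hypothetical; CacheBlk::isDirty() is real, but the block-array members
differ between gem5 versions, so adapt this to your tags class):

  // Hypothetical helper on a set-associative tags class: walk the
  // block array and count the blocks currently marked dirty.
  unsigned
  BaseSetAssoc::countDirtyBlocks() const
  {
      unsigned dirty = 0;
      for (unsigned i = 0; i < numSets * assoc; ++i) {
          if (blks[i].isDirty())
              ++dirty;
      }
      return dirty;
  }

You could then expose that through a stat, or call it from your footprint
logic on each fill.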

If there are zero writes to the DRAM, something is wrong: either your
benchmark isn't working, the statistics you're looking at are wrong, you're
not actually using the memory controllers you think you are, or something
else entirely :). It's impossible for me to debug this remotely, but these
are some ideas.
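
One quick sanity check: grep the stats file for the memory controller's
write counters (the exact stat names vary by gem5 version; writeReqs and
readReqs are what the DRAM controller called them in versions I've seen):

  grep -E "mem_ctrls.*(writeReqs|readReqs)" \
      /home/aaron/dramcache/leslie3d_idea/stats.txt

If readReqs is nonzero but writeReqs is zero, the writeback path is the
place to look.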

If I were you, I would start with a much simpler, smaller program. Then I
would check that gem5 is behaving the way I expect before trying to run
full-sized SPEC workloads.
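
For example, something like this (the hello binary ships in the gem5 tree;
adjust the path if yours differs):

  build/ALPHA/gem5.opt configs/example/se.py --cpu-type=TimingSimpleCPU \
      --caches --l2cache -c tests/test-progs/hello/bin/alpha/linux/hello

If that prints its output and stats.txt shows nonzero writes at the memory
controller, the problem is more likely in the workload setup than in gem5.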

Cheers,
Jason

On Wed, Feb 28, 2018 at 8:40 PM Saivarun R <rsvaru...@gmail.com> wrote:

> Hi Jason,
>
> I'm using a timing simulation, and I checked the writes to the DRAM
> controller after simulating the leslie3d benchmark. I'm still not sure
> whether the benchmarks ran properly, *how do I know if the benchmarks ran
> correctly?*, and there are zero writes to mem_ctrls in the stats file.
>
> This is the command that I used, and I got some warnings like the ones
> below.
>
> build/ALPHA/gem5.opt --outdir=/home/aaron/dramcache/leslie3d_idea
> configs/example/se.py --cpu-type=TimingSimpleCPU --caches --l1i_assoc=4
> --l1d_assoc=4 --l2cache --l2_size=256kB --l2_assoc=8 --l3cache
> --l3_size=16MB --l3_assoc=32 --mem-size=8192MB --bench leslie3d
> --maxinst=250000000
>
> warn: subt/sud   f12,f22,f11: non-standard trapping mode not supported
> warn: mult/sud   f12,f12,f13: non-standard trapping mode not supported
> warn: addt/sud   f12,f10,f14: non-standard trapping mode not supported
>
> Sorry for not stating the context in which I want to track the dirty
> blocks in the cache in my earlier mail. I'm trying to implement footprint
> caching in DRAM caches, and I have a running model of a DRAM cache
> integrated with DRAMSim2. According to the paper [1], blocks are
> classified as referenced if they are dirty, and I need to use this
> information when handling further fetch requests to the page. I've
> implemented all the required logic for footprint caching but am unable to
> track the dirty blocks in the cache. *Any way to track dirty blocks in
> the caches?*
>
> It would be helpful if you could answer any of the *questions raised*.
>
> [1]: "Die-Stacked DRAM Caches for Servers: Hit Ratio, Latency, or
> Bandwidth? Have It All with Footprint Cache"
>
> Thank you
> Varun
>
> On Wed, Feb 28, 2018 at 10:44 PM, Jason Lowe-Power <ja...@lowepower.com>
> wrote:
>
>> Hi Varun,
>>
>> I imagine there are other code paths that write back dirty data. The
>> cache code is pretty complicated, so I can't tell you exactly where to
>> look off the top of my head. One thing you could do is put an inform (or
>> use debug flags) at the memory controller. There, you can see when there
>> are writes to memory (which are cache writebacks).
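>>
>> For example (flag availability depends on your gem5 version, so treat
>> this as a sketch):
>>
>>   build/ALPHA/gem5.opt --debug-flags=DRAM configs/example/se.py ...
>>
>> will trace the DRAM controller's activity, including writes.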
>>
>> Another thing to check is to make sure you're using timing (not atomic)
>> simulation.
>>
>> Cheers,
>> Jason
>>
>> On Tue, Feb 27, 2018 at 9:20 PM Saivarun R <rsvaru...@gmail.com> wrote:
>>
>>> Hi,
>>>
>>> I want to track the number of dirty blocks during the simulation.
>>> After running a SPEC CPU2006 benchmark, calculix, I found that the
>>> number of dirty blocks is zero.
>>>
>>> So I put an inform statement in the function allocateBlock as follows:
>>>
>>>     if (blk->isDirty() || writebackClean) {
>>>         // Save writeback packet for handling by caller
>>>         // correlate[sector_index]++;
>>>         // predicted[i] = 1;
>>>         inform("Block is Dirty");
>>>         writebacks.push_back(writebackBlk(blk));
>>>     } else {
>>>         writebacks.push_back(cleanEvictBlk(blk));
>>>     }
>>>
>>> And throughout the simulation, not even one block was declared dirty.
>>> What could be the reason, or is it that the blocks are not dirty at all?
>>>
>>> Can anyone help me find a reason for such an observation?
>>>
>>> Thank you
>>> Varun