Dear list,

Any ideas on this?


On Wed, Nov 6, 2013 at 9:20 AM, Hossein Nikoonia <[email protected]> wrote:

> Thanks Andreas ...
>
> I also think this bandwidth is quite large! I will also try with the
> latest source code.
>
> To complicate things further, I also repeated the above experiment with
> only one benchmark (2 CPUs: one idle, one running the benchmark). I
> expected the runtime of the benchmark (not the simulation time) to be much
> lower in this case than in the previous experiment (because the concurrent
> run should suffer from cache contention, memory bandwidth contention,
> etc.). But surprisingly this is not the case: the runtime of blackscholes
> running concurrently with fluidanimate is 27.217 s, while the runtime of
> blackscholes alone is 27.145 s.
>
> When I look at the memory bandwidth stats, it seems that the memory
> bandwidths of fluidanimate and blackscholes are simply summed, with no
> sign of contention!
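>
> For what it's worth, here is a rough sketch of how I compare the
> per-requestor numbers from stats.txt (this assumes the usual "::"
> breakdown suffixes on bw_total; the exact stat names may differ between
> builds):
>
> import re
>
> # Collect the bw_total breakdown for the memory controller from a gem5
> # stats.txt dump. Assumes entries of the form
> # "system.mem_ctrls.bw_total::<requestor>  <value>".
> pattern = re.compile(r'^system\.mem_ctrls\.bw_total::(\S+)\s+(\d+)')
>
> bandwidth = {}
> with open('m5out/stats.txt') as stats:
>     for line in stats:
>         match = pattern.match(line)
>         if match:
>             bandwidth[match.group(1)] = int(match.group(2))
>
> total = bandwidth.pop('total', 0)
> print('sum of breakdown entries: %d bytes/s' % sum(bandwidth.values()))
> print('reported total:           %d bytes/s' % total)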
>
> I'm using MOESI_CMP_token with a recent dev version of gem5 (I believe it
> is about one month old).
>
> Do you agree that the difference should be more significant?
> Could it be related to --sys-clock and --cpu-clock? I am using the default
> values.
>
> Thank you in advance
>
>
> On Wed, Nov 6, 2013 at 8:53 AM, Andreas Hansson
> <[email protected]> wrote:
>
>>  Hi,
>>
>>  That does look rather surprising indeed, but it’s not impossible.
>> peakBW is what the memory controller deduces by looking at the burst time
>> and interface width (i.e. it is the peak interface bandwidth). The other
>> stat, bw_total, is based on the underlying memory model counting all
>> reads and writes, which includes all the merges in the write buffer as
>> well as reads that got their data from the write buffer rather than from
>> the DRAM. If you are running a recent version of gem5 (from last Friday
>> onwards), we have incorporated some additional stats in the controller
>> that show these effects more clearly.
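>>
>>  As a back-of-the-envelope check, the peakBW value is consistent with the
>> interface parameters implied by LPDDR2_S4_1066_x32 (assuming a 32-bit
>> interface and a 533 MHz clock, i.e. tCK = 1.875 ns):
>>
>> # Rough peak-bandwidth estimate for an LPDDR2-1066 x32 interface
>> # (assumed parameters; check the actual timings in your gem5 source).
>> bus_width_bytes = 32 / 8          # x32 interface -> 4 bytes per transfer
>> clock_mhz = 1e3 / 1.875           # tCK = 1.875 ns -> ~533.3 MHz
>> data_rate_mts = 2 * clock_mhz     # double data rate -> ~1066.7 MT/s
>>
>> peak_mbps = bus_width_bytes * data_rate_mts
>> print('peak bandwidth: %.2f MB/s' % peak_mbps)   # ~4266.67 MB/s
>>
>> which lines up with the 4266 MB/s the controller reports.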
>>
>>  What I find more surprising is that with ruby_fs you see so much
>> bandwidth at the memory controller. When you use ruby_fs, the normal
>> memory controller organisation is a bit contrived (to say the least): the
>> mem_ctrls actually sit only on the piobus and (as far as I know) are only
>> used by devices. Thus, the bandwidth you see does sound quite large.
>> Could some Ruby ninja out there verify that this is right? Also beware
>> that the “CPU memory”, i.e. the one baked into Ruby, does not care about
>> the --mem-type option on the command line.
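>>
>>  One way to verify where the mem_ctrls actually sit is to look at the
>> port wiring in the config dump. A minimal sketch (assuming the default
>> m5out/config.ini location and section naming):
>>
>> # Print the port connections of any mem_ctrls section in a gem5
>> # config.ini dump, to see which bus the controllers hang off.
>> section = None
>> with open('m5out/config.ini') as cfg:
>>     for line in cfg:
>>         line = line.strip()
>>         if line.startswith('[') and line.endswith(']'):
>>             section = line[1:-1]
>>         elif section and 'mem_ctrls' in section and line.startswith('port'):
>>             print('%s: %s' % (section, line))
>>
>> If the peer of the controller port turns out to be the piobus, that would
>> match the organisation described above.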
>>
>>  Andreas
>>
>>   From: Hossein Nikoonia <[email protected]>
>> Reply-To: gem5 users mailing list <[email protected]>
>> Date: Wednesday, 6 November 2013 07:27
>> To: gem5 users mailing list <[email protected]>
>> Subject: [gem5-users] More than peak bw ?
>>
>>   Dear List,
>>
>>  I am running a system with 2 CPUs in order to run two concurrent PARSEC
>> benchmarks. The problem is that I see a memory controller bandwidth of
>> 6.3 GB/s, while the peakBW is only 4.2 GB/s!
>>
>>   system.mem_ctrls.bw_total::total           6311068945
>>       # Total bandwidth to/from this memory (bytes/s)
>>
>>  system.mem_ctrls.peakBW                       4266.00
>>     # Theoretical peak bandwidth in MB/s
>>
>>  The reported total (6311068945 bytes/s, i.e. roughly 6311 MB/s) is about
>> 1.5x the 4266 MB/s peak. Am I misunderstanding something?
>>
>>  The benchmarks are blackscholes and fluidanimate.
>>
>>
>> ---------------------------------------------------------------------------------------------------------------------------------------------
>> Command line: ./build/ALPHA/gem5.fast -d /root/gem5run-large/busy -r -e
>> configs/example/ruby_fs.py --kernel=alpha-vmlinux_2.6.27-gcc_4.3.4
>> --disk-image=linux-parsec-2-1-m5-with-test-inputs.img --topology=Mesh
>> --mesh-rows=1 --l2cache --l2_size=2MB --num-l2caches=1 --num-dirs=1
>> --mem-type=LPDDR2_S4_1066_x32 --mem-size=1024MB
>> --script=large-17sh.conf/busy.rcS -n 2
>>
>> -------------------------------------------------------------------------------------------------------------------------------------------
>>
>>
>>  and the busy.rcS:
>>
>> ------------------------------------------------------------------------------------------------------------------------------------------
>>  #!/bin/sh
>>
>>  cd /parsec/install/bin
>>
>>  /sbin/m5  dumpresetstats 0 100000000
>> ./fluidanimate 1 5 /parsec/install/inputs/fluidanimate/in_300K.fluid
>> /parsec/install/inputs/fluidanimate/out.fluid &
>> ./blackscholes 1 /parsec/install/inputs/blackscholes/in_64K.txt
>> /parsec/install/inputs/blackscholes/prices.txt
>> echo "Done :D"
>> /sbin/m5 exit
>> /sbin/m5 exit
>>
>> -------------------------------------------------------------------------------------------------------------------------------------------
>>
>>  I would appreciate it if someone could let me know what is wrong here.
>>
>>  Thanks in advance :)
>>
>
>
