then it's a different issue.

On Sat, Nov 12, 2011 at 1:25 AM, Dibakar Gope <[email protected]> wrote:

> I am observing the same for other PARSEC benchmarks as well, for example
> x264, swaptions, and fluidanimate. However, I can cross-check by increasing
> the maxinsts count.
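
[Editor's note: for reference, a hedged sketch of what such a re-run might look like. Only --debug-flags=Exec and the 200M cap come from this thread; the binary name, config path, and the --script/--maxinsts options are assumptions about the poster's gem5 build and should be adjusted to match it.]

```
# Hedged sketch: raise the instruction cap well past 200M to see
# whether the later threads ever appear in the trace. Paths and the
# rcS filename are placeholders, not from the thread.
./build/ALPHA_FS/gem5.opt --debug-flags=Exec \
    configs/example/fs.py \
    --maxinsts=500000000 \
    --script=streamcluster.rcS
```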
>
> Thanks,
> Dibakar
>
> On 11/11/11, biswabandan panda   wrote:
> > I guess it&#39;s for MAXINSTS as streamcluster is a large benchmark in
> terms of number of instructions.
> >
> > On Sat, Nov 12, 2011 at 1:14 AM, Dibakar Gope <[email protected]> wrote:
> >
> > > Hi All,
> > >
> > > I am trying to get selective traces for a few stages of the ALPHA_FS O3
> > > system. To do that, I have enabled --debug-flags=Exec (with the
> > > non-required switches turned off) and I am getting the intended trace
> > > section after Linux boots. I am running benchmarks from the PARSEC
> > > suite, and for experimental purposes maxinsts is set to 200M. Further,
> > > to pin each thread to a particular core, I am using the
> > > GOMP_CPU_AFFINITY option in my rcS script. As shown below, this rcS
> > > script runs 4 threads of the streamcluster benchmark on a 4-core CPU.
> > >
> > > #!/bin/sh
> > >
> > > # File to run the streamcluster benchmark
> > >
> > > cd /parsec/install/bin
> > >
> > > #/sbin/m5 switchcpu
> > > export GOMP_CPU_AFFINITY="0 1 2 3"
> > > /sbin/m5 dumpstats
> > > /sbin/m5 resetstats
> > > echo "Simulation Beginning :D"
> > > ./streamcluster 10 20 32 4096 4096 1000 none /parsec/install/inputs/streamcluster/output.txt 4
> > > echo "Done :D"
> > > /sbin/m5 exit
> > > /sbin/m5 exit
> >
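
[Editor's note: one hedged thing worth double-checking, not from the thread itself: GOMP_CPU_AFFINITY only affects threads that libgomp itself spawns, so it can help to pin the OpenMP team size explicitly and make sure it matches the affinity list and streamcluster's final thread-count argument.]

```
# Hedged sketch: set the OpenMP team size explicitly so it matches
# both the affinity list and streamcluster's nthreads argument (4).
export OMP_NUM_THREADS=4
export GOMP_CPU_AFFINITY="0 1 2 3"
```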
> > > However, in my generated traces (deep into the ROI), I only ever see
> > > traces for thread 0 running on the different cores (cpu0, cpu1, cpu2,
> > > cpu3), as shown below, even though my streamcluster rcS script sets
> > > the thread count to 4.
> > >
> > > 2394499819584: system.cpu0.BPredUnit: [tid:0] <some custom print msgs>
> > > 2394499820918: system.cpu1.BPredUnit: [tid:0] <some custom print msgs>
> > > 2394499824253: system.cpu0.BPredUnit: [tid:0] <some custom print msgs>
> > > 2394499827588: system.cpu2.BPredUnit: [tid:0] <some custom print msgs>
> > > 2394499830923: system.cpu3.BPredUnit: [tid:0] <some custom print msgs>
> > > 2394499834258: system.cpu1.BPredUnit: [tid:0] <some custom print msgs>
> > > 2394499837593: system.cpu0.BPredUnit: [tid:0] <some custom print msgs>
> > > 2394499841595: system.cpu2.BPredUnit: [tid:0] <some custom print msgs>
> > > 2394499844930: system.cpu3.BPredUnit: [tid:0] <some custom print msgs>
> >
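
[Editor's note: a hedged sanity check one could run on the trace itself. "trace.out" is an assumed name for wherever the --debug-flags=Exec output was redirected; the sample lines below just stand in for it.]

```shell
# Hedged sketch: tally trace lines per simulated core. The heredoc
# below is placeholder data standing in for the real Exec trace.
cat > trace.out <<'EOF'
2394499819584: system.cpu0.BPredUnit: [tid:0] <some custom print msgs>
2394499820918: system.cpu1.BPredUnit: [tid:0] <some custom print msgs>
2394499827588: system.cpu2.BPredUnit: [tid:0] <some custom print msgs>
2394499830923: system.cpu3.BPredUnit: [tid:0] <some custom print msgs>
EOF
# If all four worker threads really run, cpu0..cpu3 should each appear
# with comparable counts; note that single-threaded O3 cores always
# report [tid:0], so the tid field alone cannot distinguish threads.
grep -oE 'system\.cpu[0-9]+' trace.out | sort | uniq -c
```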
> > > I am observing the same for other benchmarks as well. Is this due to
> > > the limited maxinsts (I set it to 200M), or is there anything I am
> > > missing?
> >
> > > Thanks,
> > > Dibakar Gope
> > > PhD Student, UW-Madison
> >
> > > _______________________________________________
> > > gem5-users mailing list
> > > [email protected]
> > > http://m5sim.org/cgi-bin/mailman/listinfo/gem5-users
> >
> > --
> > thanks & regards,
> > BISWABANDAN



--
thanks & regards,
BISWABANDAN
