I would suggest taking a checkpoint after the system boots and before
BBench launches. You can do this in the bbench.rcS script by adding
/sbin/m5 checkpoint right before BBench is launched. Then you can restore
into O3. This should shave about a day off of simulation time since, in my
experience, booting alone can take over a day in O3 mode.
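For reference, a rough sketch of the two steps. The option names below are
from the fs.py/Options.py of roughly this gem5 vintage and may differ in your
tree, and "<your usual BBench options>" stands in for whatever arguments you
already pass, so treat this as an outline rather than exact commands:

  # In bbench.rcS, right before the command that launches BBench:
  /sbin/m5 checkpoint

  # Run 1: boot with the default atomic CPU; the rcS line above dumps a checkpoint
  ./build/ARM/gem5.fast configs/example/fs.py \
      --checkpoint-dir=m5out/cpts <your usual BBench options>

  # Run 2: restore from that checkpoint straight into the O3 (detailed) CPU
  ./build/ARM/gem5.fast configs/example/fs.py \
      --checkpoint-dir=m5out/cpts -r 1 \
      --cpu-type=detailed --caches <your usual BBench options>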
On Mar 21, 2012 12:36 PM, "Ali Saidi" <[email protected]> wrote:

>
> Hi Anirudh,
>
>
>
> By default gem5 uses the AtomicSimpleCPU. If you would like to get timing
> information you'll probably need to use the out-of-order CPU model, which
> will take longer to run. It's an open question how to best sample with a
> multi-threaded, browser-type workload. It would be great to know the answer,
> but I don't think there is any consensus yet. It is possible to run the
> workload to completion with a detailed CPU model; it just takes a couple of
> days of simulation, and that is what some people do at the moment.
>
>
>
> Thanks,
>
> Ali
>
>
>
> On 21.03.2012 11:31, Anirudh Sivaraman wrote:
>
> Hi
>
> Running BBench takes me ~12 hours. I was wondering if there was any
> way to speed that up by sampling and/or checkpointing. BBench runs
> using fs.py, and I notice that by default the DriveCPUClass in fs.py
> is set to AtomicSimpleCPU in the line: DriveCPUClass = AtomicSimpleCPU.
> Does this mean that by default BBench runs the AtomicSimpleCPU model,
> which is purely functional? Would this, in turn, imply that no kind of
> sampling would improve performance? Am I understanding things
> correctly here?
>
> Anirudh
_______________________________________________
gem5-users mailing list
[email protected]
http://m5sim.org/cgi-bin/mailman/listinfo/gem5-users
