I can't really answer your question about the segfault, but I think I know why ALPHA_FS and ALPHA_SE behave differently when you use lots of memory. In ALPHA_SE, the benchmark uses memory only as it needs it; any extra memory just sits there unused and doesn't affect anything. In ALPHA_FS, though, Linux clears out all of physical memory as part of the boot process. By increasing the memory size by a factor of 16 (128MB to 2GB), you're making that part of boot, which is already a decent chunk of the total time, take at least 16 times as long.
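
On the size itself: it comes from the SysConfig entries in configs/common/Benchmarks.py. Roughly speaking (this is a sketch from memory, so the exact field and entry names in your tree may differ, and the 'ParsecBlackscholes' entry below is just a made-up example), an entry with no memory size falls back to the 128MB default, and a per-benchmark entry can override it:

    # Sketch of how the per-benchmark memory size gets picked up -- modeled on
    # configs/common/Benchmarks.py; treat the exact names here as approximate.
    class SysConfig:
        def __init__(self, script=None, mem=None, disk=None):
            self.scriptname = script
            self.diskname = disk
            self.memsize = mem

        def mem(self):
            # Unset entries fall back to the 128MB default mentioned below.
            return self.memsize if self.memsize else '128MB'

    Benchmarks = {
        # Hypothetical entry: give a PARSEC run 512MB instead of the default.
        'ParsecBlackscholes': [SysConfig('blackscholes.rcS', '512MB')],
    }

Given the memory clearing at boot, it's worth picking the smallest size your workload actually needs rather than jumping straight to 2GB.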

Gabe

Quoting Lide Duan <[email protected]>:

Hi,

I noticed that the default memory size set in Benchmarks.py is 128MB; isn't that too small for reasonable simulations?

Previously, when I was using ALPHA_SE, physmem was set to "2GB" and the simulation ran well. In FS mode, however, if 2GB is used, booting Linux (with the atomic CPU) becomes extremely slow; if 1GB or 512MB is used, I can boot the OS, start the program, and make a checkpoint successfully. However, restoring from the checkpoint directly with the detailed CPU (--detailed) gives me a segmentation fault. The interesting thing is that if I restore the checkpoint with the atomic CPU and then switch to the timing and detailed ones (--standard-switch), the simulation runs well. With the default value of 128MB, both --detailed and --standard-switch run fine. I am confused by this observation. Am I missing anything here? What is a reasonable memory size in FS mode (say, for PARSEC programs)?

Thanks,
Lide


