Hello,

I am using the Marshall University BigGreen cluster to run a simulation with 
5 Carpet time levels on a 200^3 spatial domain, with coarse grid spacing 
d(x,y,z)_coarse = 1 and dtfac = 0.03125.
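
For context, if I understand the Time thorn correctly, the coarse time step is 
dt_coarse = dtfac * dx_coarse = 0.03125, so reaching t = 200 takes 
200 / 0.03125 = 6400 coarse steps, and with 5 time levels (factor-of-2 
refinement) the finest level is subcycled 2^4 = 16 times per coarse step.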

The total memory required looks to be somewhere around 40 GB.

My aim is to run it up to t = 200, but I am limited to 72 hours of wall-clock time on the cluster.

I naively expected that increasing the number of processes would make the run 
go faster. However, this was not the case.

I ran an experiment, first with 48 procs and then with 96 procs. In 72 hours, 
the 48-proc run reached t = 55.25, while the 96-proc run only reached 
t = 48.50.

Since that clearly did not work, I next tried decreasing the time refinement.

With the same 48 procs, the run with the larger time step, dtfac = 0.0625, 
stopped at t = 54.17 after 72 hours.
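
Naively, doubling dtfac should halve the number of coarse steps needed to 
reach t = 200 (3200 instead of 6400), so I expected roughly a factor of two 
in speed, yet this run ended up slightly behind the dtfac = 0.03125 one.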


I would very much appreciate an explanation of what is happening, and advice 
on how to speed up the run so that it finishes within the 72-hour time limit 
imposed on me.


I am attaching the parameter file and the SLURM script I am using to run it.
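
In case it is useful, here is a simplified sketch of the kind of submit script 
I use for the 96-proc run; the attached submit1.sh has the real details, and 
the node/task layout, module line, and executable path below are placeholders.

  #!/bin/bash
  #SBATCH --job-name=giraffe_magnetowald
  #SBATCH --time=72:00:00          # wall-clock limit on BigGreen
  #SBATCH --nodes=2                # placeholder: nodes for the 96-proc run
  #SBATCH --ntasks-per-node=48     # placeholder: MPI ranks per node
  #SBATCH --cpus-per-task=1        # placeholder: OpenMP threads per rank

  # module load ...                # actual modules are in submit1.sh

  export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK

  # cactus_sim is the default Einstein Toolkit executable name; mine may differ.
  srun ./exe/cactus_sim GiRaFFE_MagnetoWald_Poyn_omp.par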


thanks,

Maria


_______________________
Dr. Maria C. Babiuc Hamilton
Department of Physics
Marshall University
S257 Science Building
Huntington, WV, 25755
Phone: (304)696-2754

Attachment: submit1.sh

Attachment: GiRaFFE_MagnetoWald_Poyn_omp.par
