Hello Maria,

> I am using the Marshall University BigGreen cluster to run a
> parameter file with 5 Carpet time levels, on a 200^3 spatial domain,
> starting with d(x,y,z)_coarse = 1, and dtfac = 0.03125.
> 
> The total necessary memory looks to be somewhere around 40 GB.
> 
> My aim is to run it up to t=200, but I have a time limit on the cluster
> of 72 hours.
You can run longer by checkpointing and recovering from the checkpoints.
In fact, Cactus / Simfactory will do this automatically for you if you
request a runtime longer than the maxwalltime setting of your machine.
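
For example, with Simfactory something like the following requests a total
walltime longer than the machine's maxwalltime (the simulation name and
parameter file here are just placeholders, and the exact options depend on
how you normally submit):

  ./simfactory/bin/sim create-submit mysim \
      --parfile=mypar.par --procs=48 --walltime=200:00:00

Each job segment then checkpoints before its queue limit and the following
segment recovers from that checkpoint, as described above.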

> That obviously did not work, so I tried next to decrease the time
> refinement.
> 
> For the same 48 procs, the run with bigger time step dtfac = 0.0625
> stopped at t = 54.17 after 72 hours.
You usually cannot change the timestep and spatial resolution independently,
since our runs are limited by the Courant–Friedrichs–Lewy (CFL) condition,
which links the timestep to the spatial resolution. Instead, you should
increase the spatial grid spacing (i.e. reduce the resolution), which will
make the run go faster. Note that dtfac = 0.03125 seems very small to me;
usually we end up with a dtfac of 0.25 - 0.4, depending on the timestepper
used. Keep in mind that dtfac is the CFL factor for the coarsest level and
that Carpet will automatically take smaller timesteps on the finer levels.
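
As a rough illustration (this assumes the usual relation
dt_coarse = dtfac * dx_coarse and Carpet's default time refinement factor
of 2 per level), with your dx_coarse = 1:

  dtfac = 0.03125  ->  dt_coarse = 0.03125, i.e. 6400 coarse steps to t = 200
  dtfac = 0.25     ->  dt_coarse = 0.25,    i.e.  800 coarse steps to t = 200

With 5 time levels the finest level takes 2^4 = 16 substeps per coarse step
in either case, so a dtfac in the usual range already cuts the number of
timesteps by a factor of 8.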

In your parfile, you would need to add:

IO::recover = "autoprobe"
IO::checkpoint_on_terminate = "yes"
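
Depending on your setup you may also want periodic checkpoints switched on
as a safety net; a minimal sketch (this assumes CarpetIOHDF5 is active, and
the directory names and interval are just placeholders):

IOHDF5::checkpoint                  = "yes"
IO::checkpoint_every_walltime_hours = 6.0
IO::checkpoint_dir                  = "checkpoints"
IO::recover_dir                     = "checkpoints"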

Yours,
Roland

-- 
My email is as private as my paper mail. I therefore support encrypting
and signing email messages. Get my PGP key from http://pgp.mit.edu .
