Roland, I did get it to build all the way through with my MPI setup. I
ended up recompiling with gnu99 rather than c99 to keep from having to
define M_PI to 20 digits in the flags (M_PI is a POSIX extension, so strict
-std=c99 hides it). The build finished; are you saying there's a chance it
built wrong with my setup? mpicc is just a wrapper around GCC that handles
the MPI includes and libraries (I used it because the -lmpi flag wasn't
being found with plain GCC), so I think it should be alright. If nothing
turns out to be wrong, I'll probably just leave the hacky stuff in place.
Ian, that's just what I was looking for! Thanks. We use srun instead of
mpirun, and I was scratching my head over how to get Slurm to submit the
jobs. I'll keep you posted on how it works!
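In case it's useful to anyone reading the archives later, the batch script
I'm planning to try looks roughly like this (the executable and parameter
file names are placeholders, and the resource numbers are made up):

    #!/bin/bash
    #SBATCH --job-name=cactus-test
    #SBATCH --nodes=2
    #SBATCH --ntasks-per-node=16
    #SBATCH --time=01:00:00

    # srun plays the role mpirun would otherwise play: it launches one
    # MPI rank per task in the allocation requested above.
    srun ./cactus_sim my_parfile.par

If I follow Ian's suggestion below, the srun line would instead go in the
run script under simfactory/mdb/runscripts, referenced from the machine's
.ini file as he describes.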
Claudia

On Fri, 6 May 2016 at 05:14 Ian Hinder <[email protected]> wrote:

> On 6 May 2016, at 00:42, Claudia Richoux <[email protected]> wrote:
>
> Oh, I built using mpicc as the C compiler, and then I had to define M_PI
> to 20 digits in the CFLAGS... it was messy, but it worked. I'm not really
> sure how to get it running with Slurm, but I'm sure I'll figure it out if
> I can find an MPI executable.
> Thank you all for your help!!
>
>
> Hi,
>
> I realised after writing the email that you already have MPI installed
> since it is a cluster.
>
> For getting it running with SLURM, you need to create a submit script and
> a run script for the cluster in simfactory/mdb/{submit,run}scripts, and
> then reference them in your <machine>.ini file.  It is best to copy the
> existing scripts from a machine as similar as possible to your cluster.
> For SLURM, you could take comet.sub as an example.  For the run script,
> you will need to adjust the mpirun command to match your MPI installation.
>
> --
> Ian Hinder
> http://members.aei.mpg.de/ianhin
>
>
