Gwyneth,

This question is unrelated to Cactus; it concerns only the MPI
implementation you are using and the queuing system. Since you already
received advice on how to call mpirun, it would be best to ask whoever
gave you that advice for help.
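
One thing worth checking first (a guess on my part; I don't know how
your Open MPI was configured): whether it was built with PBS ("tm")
support at all. If the tm components are missing, mpirun will not see
the PBS allocation:

  # If this prints no "tm" components, this Open MPI was built
  # without PBS/Torque integration:
  ompi_info | grep tm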

A few questions:

- Is there an example MPI program that you can run to test this? (See
the sketch after this list.)
- Which MPI implementation did you use when building Cactus? Did you
check that this worked correctly? How?
- Can you point us to documentation regarding PBS and MPI for your cluster?
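
Regarding the first question: here is a minimal sketch of such a test
(untested; adjust the resource requests to your queue). It launches a
trivial command under mpirun inside the same kind of PBS allocation,
so any misbehavior shows up without Cactus being involved:

  #!/bin/bash
  #PBS -l nodes=1:ppn=3
  #PBS -l walltime=00:05:00
  cd $PBS_O_WORKDIR
  # Each MPI process prints its host name; "uniq -c" then shows how
  # many processes were actually started on each host:
  mpirun -hostfile $PBS_NODEFILE hostname | sort | uniq -c

If this starts more than 3 processes, the problem reproduces without
Cactus.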

To be able to actually help, we will need more details. Consider
opening a help ticket and posting them there: your option list, submit
script, run script, relevant output, etc.
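
For instance, the output of the following commands from inside a job
would already narrow things down (a hypothetical list; adapt it to
your setup):

  which mpirun            # which MPI installation the job actually uses
  mpirun --version        # its exact version
  cat $PBS_NODEFILE       # the slots PBS granted to the job
  wc -l < $PBS_NODEFILE   # should equal nodes * ppn, i.e. 3 in your case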

-erik



On Sat, Feb 4, 2017 at 2:13 PM, Gwyneth Allwright <[email protected]> wrote:
> Hi All,
>
> I'm trying to get the Einstein Toolkit installed on an HPC cluster running
> SLES. The trouble is that Cactus tries to use all the available processors
> even when I specify a smaller number (by setting ppn in my PBS script).
>
> As a test, I tried running the compiled executable with a parameter
> file that required about 5 GB of RAM. In my PBS script, I set nodes=1
> and ppn=3, and then ran using openmpi-1.10.1:
>
> mpirun -hostfile $PBS_NODEFILE <ET exe> <parameter file>
>
> This resulted in the simulation running on all 24 available processors, even
> though I'd only requested 3. Since PBS and MPI are integrated, I was told
> that using -np with mpirun wouldn't help.
>
> Does anyone know how to address this issue?
>
> Thanks,
>
> Gwyneth
>
>



-- 
Erik Schnetter <[email protected]>
http://www.perimeterinstitute.ca/personal/eschnetter/
