On Thu, Jun 22, 2017 at 12:41 PM, r...@open-mpi.org wrote:
> I gather you are using OMPI 2.x, yes? And you configured it
> --with-pmi=, then moved the executables/libs to your
> workstation?
correct
> I suppose I could state the obvious and say “don’t do that - just rebuild it”,
I gather you are using OMPI 2.x, yes? And you configured it
--with-pmi=, then moved the executables/libs to your workstation?
I suppose I could state the obvious and say “don’t do that - just rebuild it”,
and I fear that (after checking the 2.x code) you really have no choice. OMPI
v3.0 will
On Thu, Jun 22, 2017 at 10:43 AM, John Hearns via users wrote:
> Having had some problems with ssh launching (a few minutes ago) I can
> confirm that this works:
>
> --mca plm_rsh_agent "ssh -v"
this doesn't do anything for me
if i set OMPI_MCA_sec=^munge
i can clear
Having had some problems with ssh launching (a few minutes ago) I can
confirm that this works:
--mca plm_rsh_agent "ssh -v"
Stupidly I thought there was a major problem - when it turned out I could
not ssh into a host.. ahem.
On 22 June 2017 at 16:35, r...@open-mpi.org
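[For later readers, the full invocation John describes, as a sketch: ./hello is a placeholder binary name, and the command is kept in a variable so the snippet runs anywhere - paste the echoed line on a machine with Open MPI installed. The -v makes ssh print its connection debugging during the launch, which is what confirmed the ssh problem here.]

```shell
# The launch line described above; "ssh -v" is quoted so the -v
# stays part of the agent string.  ./hello is a placeholder binary.
launch='mpirun --mca plm_rsh_agent "ssh -v" -np 2 ./hello'
echo "$launch"
```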
that took care of one of the errors, but i missed a re-type on the second error
mca_base_component_repository_open: unable to open mca_pmix_pmix112:
libmunge missing
and the opal_pmix_base_select error is still there (which is what's
actually halting my job)
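[One way to see what the failing component actually contains and links against, as a sketch: ompi_info is the standard Open MPI inspection tool, but the shared-library path below is an assumption - substitute your own install prefix. The commands are held in variables and echoed so the snippet runs anywhere; run them on the workstation itself.]

```shell
# Inspection commands, echoed rather than executed so this runs
# without Open MPI present.  The library path is illustrative.
check_components='ompi_info | grep -i pmix'
check_deps='ldd $PREFIX/lib/openmpi/mca_pmix_pmix112.so | grep -i munge'
printf '%s\n' "$check_components" "$check_deps"
```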
On Thu, Jun 22, 2017 at 10:35 AM,
You can add "OMPI_MCA_plm=rsh OMPI_MCA_sec=^munge” to your environment
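[Spelled out as a minimal sketch: any MCA parameter can be set from the environment as OMPI_MCA_<name>=<value>, and a leading "^" excludes the listed component(s) instead of selecting them. ./hello below is a placeholder binary name.]

```shell
# Environment-variable form of the MCA settings suggested above.
export OMPI_MCA_plm=rsh        # force the rsh/ssh launcher, not slurm
export OMPI_MCA_sec='^munge'   # "^" = any sec component except munge
# then launch as usual, e.g.:  mpirun -np 2 ./hello   (placeholder)
echo "$OMPI_MCA_plm $OMPI_MCA_sec"
```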
> On Jun 22, 2017, at 7:28 AM, John Hearns via users wrote:
>
> Michael, try
> --mca plm_rsh_agent ssh
>
> I've been fooling with this myself recently, in the context of a PBS cluster
>
> On
Michael, try
--mca plm_rsh_agent ssh
I've been fooling with this myself recently, in the context of a PBS cluster
On 22 June 2017 at 16:16, Michael Di Domenico wrote:
> is it possible to disable slurm/munge/psm/pmi(x) from the mpirun
> command line or (better) using environment variables?
is it possible to disable slurm/munge/psm/pmi(x) from the mpirun
command line or (better) using environment variables?
i'd like to use the installed version of openmpi i have on a
workstation, but it's linked with slurm from one of my clusters.
mpi/slurm work just fine on the cluster, but when i
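[The "just rebuild it" advice upstream would amount to configuring a second, workstation-local copy of Open MPI with the cluster integrations left out - a hedged sketch only: --without-slurm and --without-pmi are the negative forms of the configure options referenced in this thread, and the prefix is illustrative.]

```shell
# Rebuild sketch: a workstation-local Open MPI without slurm/pmi.
# Prefix and flags are illustrative; adjust to your source tree.
configure_cmd='./configure --prefix=$HOME/ompi-plain --without-slurm --without-pmi'
echo "$configure_cmd && make -j && make install"
```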