Hi Jeff,
As written in my original post, I'm using a custom build of 4.0.0 which
was configured with nothing more than a --prefix and
--enable-mpi-fortran. I checked for updates, and it appears that there
was an issue with oversubscription that was fixed in 4.0.1. The changelog states:
> - Fix a problem with t
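For reference, a minimal sketch of such a build from the release tarball;
the /opt/openmpi-4.0.1 prefix is hypothetical, so adjust paths to your site:
$ tar xf openmpi-4.0.1.tar.bz2 && cd openmpi-4.0.1
$ ./configure --prefix=/opt/openmpi-4.0.1 --enable-mpi-fortran
$ make -j4 && make install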
Hi,
A multi-node, multi-process QuantumEspresso MPI job was terminated with
the following messages in the log file:
total cpu time spent up to now is 63540.4 secs
total energy              =  -14004.61932175 Ry
Harris-Foulkes estimate   =  -14004.73511665 Ry
estimated scf accuracy    <  ...
I would do the normal things: log into those nodes, run dmesg, and look
at /var/log/messages.
Look at the Slurm log on the node and look for the job ending.
Also look at the sysstat files and see if there was a lot of memory being
used: http://sebastien.godard.pagesperso-orange.fr/
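A quick sketch of those checks; the job id and the Slurm/sysstat log
paths are assumptions, since both vary by site and distro:
$ dmesg -T | grep -iE 'oom|killed'        # kernel messages, e.g. OOM-killer activity
$ grep 1234567 /var/log/slurm/slurmd.log  # 1234567 = hypothetical job id
$ sar -r -f /var/log/sysstat/sa17         # memory usage records for the 17th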
On Wed, 17 Apr
Hi,
After a successful installation of v4 in a custom location, I see some
errors that the default installation (v2) doesn't produce.
$ /share/apps/softwares/openmpi-4.0.1/bin/mpirun --version
mpirun (Open MPI) 4.0.1
Report bugs to http://www.open-mpi.org/community/help/
$ /share/apps/softwares/openmpi-4.0
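Besides the version banner, it may help to confirm how the custom build
was configured; ompi_info ships with every Open MPI installation:
$ /share/apps/softwares/openmpi-4.0.1/bin/ompi_info | grep 'Configure command'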
On Apr 17, 2019, at 3:38 AM, Steffen Christgau wrote:
>
> as written in my original post, I'm using a custom build of 4.0.0 which
I'm sorry -- I missed that (it was at the bottom; my bad).
> was configured with nothing more than a --prefix and
> --enable-mpi-fortran. I checked for updates and
Hi everyone,
I've been trying to track down the source of TCP connections when running
MPI singletons, with the goal of avoiding all TCP communication to free up
ports for other processes. I have a local apt install of OpenMPI 2.1.1 on
Ubuntu 18.04 which does not establish any TCP connections by default.
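One way to observe such connections, sketched with iproute2's ss; a.out
stands in for the actual singleton binary:
$ ./a.out &             # the MPI singleton under test
$ ss -tnp | grep a.out  # TCP sockets owned by that process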
Hi,
On 17.04.2019 at 11:07, Mahmood Naderan wrote:
> Hi,
> After a successful installation of v4 in a custom location, I see some
> errors that the default installation (v2) doesn't produce.
Did you also recompile your application with this version of Open MPI?
-- Reuti
> $ /share/apps/softwares/open
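A sketch of that suggestion: rebuild against the 4.0.1 compiler wrapper
and check which MPI library the binary resolves to (app.c is a placeholder):
$ /share/apps/softwares/openmpi-4.0.1/bin/mpicc -o app app.c
$ ldd ./app | grep -i libmpi  # should point into .../openmpi-4.0.1/lib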
Daniel,
If your MPI singleton will never call MPI_Comm_spawn(), then you can use
the isolated mode like this:
OMPI_MCA_ess_singleton_isolated=true ./program
You can also save some ports by blacklisting the btl/tcp component:
OMPI_MCA_ess_singleton_isolated=true OMPI_MCA_pml=ob1
OMPI_MCA_btl=vader,self ./program
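Equivalently, the same MCA parameters can be exported once and reused
across runs (./program is the placeholder from above):
$ export OMPI_MCA_ess_singleton_isolated=true  # isolated mode: MPI_Comm_spawn() unavailable
$ export OMPI_MCA_pml=ob1
$ export OMPI_MCA_btl=vader,self               # shared memory + self only; no btl/tcp ports
$ ./program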