Re: [QE-users] Installation verification

2019-04-30 Thread Paolo Giannozzi
On Tue, Apr 30, 2019 at 8:23 PM Mahmood Naderan wrote:

> From this line
>
> As of version 3.0.0, the "sm" BTL is no longer available in Open MPI.
>
> I think that QE is using a feature from Open MPI which is not available in
> Open MPI 4. Do you confirm that?
>

No idea, but it seems very unlikely to me. I would first of all try to
compile and run a code that says "Hello world" in parallel.
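
A minimal sketch of such a test, using the Open MPI 4.0.1 wrappers mentioned in
this thread (the file name hello.f90 is just an example), could be:

$ cat > hello.f90 << 'EOF'
program hello
  use mpi                                   ! Fortran MPI module
  implicit none
  integer :: ierr, rank, nproc
  call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
  call MPI_Comm_size(MPI_COMM_WORLD, nproc, ierr)
  print *, 'Hello world from rank', rank, 'of', nproc
  call MPI_Finalize(ierr)
end program hello
EOF
$ /share/apps/softwares/openmpi-4.0.1/bin/mpif90 hello.f90 -o hello
$ /share/apps/softwares/openmpi-4.0.1/bin/mpirun -np 2 ./hello

If this already fails with the same "sm" BTL error, the problem is likely in
the MPI installation or environment rather than in QE.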

> Should I use OpenMPI-3?
>

You should use whatever MPI version works for you.

Paolo
-- 
Paolo Giannozzi, Dip. Scienze Matematiche Informatiche e Fisiche,
Univ. Udine, via delle Scienze 208, 33100 Udine, Italy
Phone +39-0432-558216, fax +39-0432-558222

Re: [QE-users] Installation verification

2019-04-30 Thread Mahmood Naderan
I think I did everything correctly. In order to override the default mpirun
(which is version 2.0.0), I ran:

# export PATH=/usr/lib64/qt-3.3/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/share/apps/softwares/openmpi-4.0.1/bin
# export LD_LIBRARY_PATH=/share/apps/softwares/openmpi-4.0.1/lib
# mpirun --version
mpirun (Open MPI) 4.0.1

Report bugs to http://www.open-mpi.org/community/help/
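
(For reference, a common variant of the same override, shown here purely as an
illustration, is to prepend to the existing variables instead of replacing
them outright, so that system tools outside the Open MPI tree keep working:

# export PATH=/share/apps/softwares/openmpi-4.0.1/bin:$PATH
# export LD_LIBRARY_PATH=/share/apps/softwares/openmpi-4.0.1/lib:$LD_LIBRARY_PATH
# hash -r          # drop the shell's cached command locations
# which mpirun && mpirun --version
)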

Then I tried to build QE:

# ./configure --prefix=/share/apps/softwares/q-e-qe-6.4/build
# make all -j6
# make install
...
'XSpectra/src/xspectra.x' ->
'/share/apps/softwares/q-e-qe-6.4/build/bin/xspectra.x'

Quantum ESPRESSO binaries are installed in
/share/apps/softwares/q-e-qe-6.4/build/bin
# ldd build/bin/pw.x
linux-vdso.so.1 =>  (0x7ffd51f58000)
libmpi_usempi.so.40 =>
/share/apps/softwares/openmpi-4.0.1/lib/libmpi_usempi.so.40
(0x7f16552c8000)
libmpi_mpifh.so.40 =>
/share/apps/softwares/openmpi-4.0.1/lib/libmpi_mpifh.so.40
(0x7f165507)
libmpi.so.40 => /share/apps/softwares/openmpi-4.0.1/lib/libmpi.so.40
(0x7f1654d58000)
libgfortran.so.3 => /lib64/libgfortran.so.3 (0x7f1654a1)
libm.so.6 => /lib64/libm.so.6 (0x7f1654708000)
libgcc_s.so.1 => /lib64/libgcc_s.so.1 (0x7f16544f)
libquadmath.so.0 => /lib64/libquadmath.so.0 (0x7f16542b)
libpthread.so.0 => /lib64/libpthread.so.0 (0x7f165409)
libc.so.6 => /lib64/libc.so.6 (0x7f1653cc8000)
libopen-rte.so.40 =>
/share/apps/softwares/openmpi-4.0.1/lib/libopen-rte.so.40
(0x7f1653a08000)
libopen-pal.so.40 =>
/share/apps/softwares/openmpi-4.0.1/lib/libopen-pal.so.40
(0x7f16536f8000)
libdl.so.2 => /lib64/libdl.so.2 (0x7f16534f)
libudev.so.1 => /lib64/libudev.so.1 (0x7f16534d8000)
librt.so.1 => /lib64/librt.so.1 (0x7f16532d)
libutil.so.1 => /lib64/libutil.so.1 (0x7f16530c8000)
libz.so.1 => /lib64/libz.so.1 (0x7f1652eb)
/lib64/ld-linux-x86-64.so.2 (0x5652f0f3b000)
libcap.so.2 => /lib64/libcap.so.2 (0x7f1652ca8000)
libdw.so.1 => /lib64/libdw.so.1 (0x7f1652a6)
libattr.so.1 => /lib64/libattr.so.1 (0x7f1652858000)
libelf.so.1 => /lib64/libelf.so.1 (0x7f165264)
liblzma.so.5 => /lib64/liblzma.so.5 (0x7f1652418000)
libbz2.so.1 => /lib64/libbz2.so.1 (0x7f1652208000)

So it seems that QE 6.4 has been built with Open MPI 4 correctly.


Back in my user account, I again set PATH and LD_LIBRARY_PATH in order to
override the default Open MPI on the system.
When I tried my own input file, I received the following error:


$ which mpirun
/share/apps/softwares/openmpi-4.0.1/bin/mpirun
$ mpirun --version
mpirun (Open MPI) 4.0.1

Report bugs to http://www.open-mpi.org/community/help/
$ which pw.x
/share/apps/softwares/q-e-qe-6.4/build/bin/pw.x
[mahmood@rocks7 job]$ mpirun -np 2 pw.x mos64.scf.in
--
As of version 3.0.0, the "sm" BTL is no longer available in Open MPI.

Efficient, high-speed same-node shared memory communication support in
Open MPI is available in the "vader" BTL.  To use the vader BTL, you
can re-run your job with:

mpirun --mca btl vader,self,... your_mpi_application
--
--
A requested component was not found, or was unable to be opened.  This
means that this component is either not installed or is unable to be
used on your system (e.g., sometimes this means that shared libraries
that the component requires are unable to be found/loaded).  Note that
Open MPI stopped checking at the first component that it did not find.

Host:  rocks7.jupiterclusterscu.com
Framework: btl
Component: sm
--
--
It looks like MPI_INIT failed for some reason; your parallel process is
likely to abort.  There are many reasons that a parallel process can
fail during MPI_INIT; some of which are due to configuration or environment
problems.  This failure appears to be an internal failure; here's some
additional information (which may only be relevant to an Open MPI
developer):

  mca_bml_base_open() failed
  --> Returned "Not found" (-13) instead of "Success" (0)
--
[rocks7:08912] *** An error occurred in MPI_Init
[rocks7:08912] *** reported by process [1787822081,3255307776955514881]
[rocks7:08912] *** on a NULL communicator
[rocks7:08912] *** Unknown error
[rocks7:08912] *** MPI_ERRORS_ARE_FATAL (processes in this communicator
will now abort,
[rocks7:08912] ***and potentially your MPI job)
[rocks7.jupiterclusterscu.com:08903] 1 more process has sent help message
help-mpi-btl-sm.txt / btl sm is dead
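
(As an aside, the workaround suggested by the error message itself would look
something like the line below for this run; whether the missing "sm" component
is the real cause, or just a symptom of mixing two MPI installations, is not
clear from the output alone:

$ mpirun --mca btl vader,self -np 2 pw.x -i mos64.scf.in

Here -i is the standard pw.x option for naming the input file.)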

Re: [QE-users] Installation verification

2019-04-30 Thread Paolo Giannozzi
The MPI run-time environment (e.g. mpirun) must be consistent with the MPI
libraries you are linking. Be very careful when setting your PATH, LIBPATH,
etc. environment variables. They must be the same when you configure,
compile, and run.
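
A quick consistency check along these lines (paths as used earlier in this
thread; shown only as a sketch) usually reveals a mismatch:

$ which mpirun mpif90
$ mpif90 --showme                     # Open MPI wrapper: print the real compile/link line
$ ldd $(which pw.x) | grep libmpi     # which MPI library pw.x actually loads
$ echo $PATH
$ echo $LD_LIBRARY_PATH

All of these should point into the same Open MPI installation, here
/share/apps/softwares/openmpi-4.0.1.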

Paolo

On Mon, Apr 29, 2019 at 3:07 PM Mahmood Naderan wrote:

> Hi
> I want to know if I have correctly built qe with openmpi-4.0. So, I ran
> the following command and got this error
>
> [root@rocks7 q-e-qe-6.4]# ./configure
> --prefix=/share/apps/softwares/q-e-qe-6.4
> MPIF90=/share/apps/softwares/openmpi-4.0.1/bin/mpif90
> ...
> [root@rocks7 q-e-qe-6.4]# make all
> ...
> [mahmood@rocks7 job]$ /share/apps/softwares/openmpi-4.0.1/bin/mpirun
> /share/apps/softwares/q-e-qe-6.4/bin/pw.x
> --
> As of version 3.0.0, the "sm" BTL is no longer available in Open MPI.
>
> Efficient, high-speed same-node shared memory communication support in
> Open MPI is available in the "vader" BTL.  To use the vader BTL, you
> can re-run your job with:
>
> mpirun --mca btl vader,self,... your_mpi_application
> --
> [... the remainder of the error output is identical to that in the original
> message, which is quoted in full below ...]

[QE-users] Installation verification

2019-04-29 Thread Mahmood Naderan
Hi
I want to know whether I have correctly built QE with Open MPI 4.0. So I ran
the following command and got this error:

[root@rocks7 q-e-qe-6.4]# ./configure
--prefix=/share/apps/softwares/q-e-qe-6.4
MPIF90=/share/apps/softwares/openmpi-4.0.1/bin/mpif90
...
[root@rocks7 q-e-qe-6.4]# make all
...
[mahmood@rocks7 job]$ /share/apps/softwares/openmpi-4.0.1/bin/mpirun
/share/apps/softwares/q-e-qe-6.4/bin/pw.x
--
As of version 3.0.0, the "sm" BTL is no longer available in Open MPI.

Efficient, high-speed same-node shared memory communication support in
Open MPI is available in the "vader" BTL.  To use the vader BTL, you
can re-run your job with:

mpirun --mca btl vader,self,... your_mpi_application
--
--
A requested component was not found, or was unable to be opened.  This
means that this component is either not installed or is unable to be
used on your system (e.g., sometimes this means that shared libraries
that the component requires are unable to be found/loaded).  Note that
Open MPI stopped checking at the first component that it did not find.

Host:  rocks7.jupiterclusterscu.com
Framework: btl
Component: sm
--
--
It looks like MPI_INIT failed for some reason; your parallel process is
likely to abort.  There are many reasons that a parallel process can
fail during MPI_INIT; some of which are due to configuration or environment
problems.  This failure appears to be an internal failure; here's some
additional information (which may only be relevant to an Open MPI
developer):

  mca_bml_base_open() failed
  --> Returned "Not found" (-13) instead of "Success" (0)
--
[rocks7:19796] *** An error occurred in MPI_Init
[rocks7:19796] *** reported by process [85917697,3255307776955514882]
[rocks7:19796] *** on a NULL communicator
[rocks7:19796] *** Unknown error
[rocks7:19796] *** MPI_ERRORS_ARE_FATAL (processes in this communicator
will now abort,
[rocks7:19796] ***and potentially your MPI job)
[rocks7.jupiterclusterscu.com:19784] PMIX ERROR: UNREACHABLE in file
server/pmix_server.c at line 2079
[... the above PMIX ERROR line repeats many more times ...]