Note that what Gilles said is correct: it's not just the dependent libraries of 
libmpi.so (and friends) that matter -- the dependent libraries of all of Open 
MPI's plugins matter, too.

You can run "ldd *.so" in the lib directory where you installed Open MPI, but 
you'll also need to run "ldd *.so" in the lib/openmpi directory -- that's where 
Open MPI installs its plugins.
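One way to scan both directories in one pass (a sketch; /opt/openmpi is a hypothetical prefix -- substitute wherever Open MPI is actually installed, and run it on the head node and on a compute node to compare):

```shell
# Hypothetical install prefix -- substitute your actual Open MPI prefix.
MPI_PREFIX="${MPI_PREFIX:-/opt/openmpi}"

missing=0
# Check the core libraries *and* every plugin for unresolved dependencies.
for so in "$MPI_PREFIX"/lib/*.so "$MPI_PREFIX"/lib/openmpi/*.so; do
    [ -e "$so" ] || continue            # skip globs that matched nothing
    if ldd "$so" | grep -q 'not found'; then
        echo "== $so has unresolved dependencies:"
        ldd "$so" | grep 'not found'
        missing=1
    fi
done
```

Any plugin that prints here on a compute node but not on the head node is the one to chase.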

I suspect that if you run "ldd lib/openmpi/mca_plm_tm.so" on the head node, 
you'll see all the dependent libraries listed.  But if you run the same command 
on your back-end compute nodes, it might say "not found" for some of the 
libraries.



> On Oct 4, 2018, at 9:12 AM, John Hearns via users <users@lists.open-mpi.org> 
> wrote:
> 
> Michele, the command is   ldd ./code.o
> I just Googled it -- ldd stands for "list dynamic dependencies".
> 
> To find out the PBS batch system type - that is a good question!
> Try this:     qstat --version
> 
> 
> 
> On Thu, 4 Oct 2018 at 10:12, Castellana Michele
> <michele.castell...@curie.fr> wrote:
>> 
>> Dear John,
>> Thank you for your reply. I have tried
>> 
>> ldd mpirun ./code.o
>> 
>> but I get an error message; I do not know the proper syntax for the ldd 
>> command. Here is the information about the Linux version:
>> 
>> $ cat /etc/os-release
>> NAME="CentOS Linux"
>> VERSION="7 (Core)"
>> ID="centos"
>> ID_LIKE="rhel fedora"
>> VERSION_ID="7"
>> PRETTY_NAME="CentOS Linux 7 (Core)"
>> ANSI_COLOR="0;31"
>> CPE_NAME="cpe:/o:centos:centos:7"
>> HOME_URL="https://www.centos.org/"
>> BUG_REPORT_URL="https://bugs.centos.org/"
>> 
>> CENTOS_MANTISBT_PROJECT="CentOS-7"
>> CENTOS_MANTISBT_PROJECT_VERSION="7"
>> REDHAT_SUPPORT_PRODUCT="centos"
>> REDHAT_SUPPORT_PRODUCT_VERSION="7"
>> 
>> Could you please tell me how to check whether the batch system is PBSPro or 
>> OpenPBS?
>> 
>> Best,
>> 
>> 
>> 
>> 
>> On Oct 4, 2018, at 10:30 AM, John Hearns via users 
>> <users@lists.open-mpi.org> wrote:
>> 
>> Michele, one tip: log into a compute node over ssh as your own username.
>> If you use the Modules environment, load the same modules as in your
>> job script, then use the ldd utility to check whether all the libraries
>> needed by the code.o executable can be found.
>> 
>> Actually, you are better off submitting a short batch job that runs ldd
>> instead of mpirun -- a proper batch job will duplicate the environment
>> you actually wish to run in:
>> 
>>   ldd ./code.o
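A minimal test job along those lines (a sketch; the resource limits are copied from the original script, and PBS_O_WORKDIR is assumed to contain the executable) might look like:

```shell
#!/bin/bash
# Test job: same environment as a real run, but it only prints the
# dynamic dependencies of the executable as seen from a compute node.
#PBS -l walltime=00:01:00
#PBS -l nodes=1:ppn=1
#PBS -q batch
#PBS -N ldd-test
cd "$PBS_O_WORKDIR"
ldd ./code.o
```

Any "not found" line in the job's output names a library missing on the compute node.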
>> 
>> By the way, is the batch system PBSPro or OpenPBS? Version 6 seems a bit 
>> old. Can you say what version of Red Hat or CentOS this cluster is 
>> installed with?
>> 
>> 
>> 
>> On Thu, 4 Oct 2018 at 00:02, Castellana Michele
>> <michele.castell...@curie.fr> wrote:
>> 
>> I fixed it, the correct file was in /lib64, not in /lib.
>> 
>> Thank you for your help.
>> 
>> On Oct 3, 2018, at 11:30 PM, Castellana Michele 
>> <michele.castell...@curie.fr> wrote:
>> 
>> Thank you, I found some libcrypto files in /usr/lib indeed:
>> 
>> $ ls libcry*
>> libcrypt-2.17.so  libcrypto.so.10  libcrypto.so.1.0.2k  libcrypt.so.1
>> 
>> but I could not find libcrypto.so.0.9.8. One suggestion is to create a 
>> symbolic link, but if I do I still get an error from MPI. Is there another 
>> way around this?
>> 
>> Best,
>> 
>> On Oct 3, 2018, at 11:00 PM, Jeff Squyres (jsquyres) via users 
>> <users@lists.open-mpi.org> wrote:
>> 
>> It's probably in your Linux distro somewhere -- I'd guess you're missing a 
>> package (e.g., an RPM or a deb) out on your compute nodes...?
>> 
>> 
>> On Oct 3, 2018, at 4:24 PM, Castellana Michele <michele.castell...@curie.fr> 
>> wrote:
>> 
>> Dear Ralph,
>> Thank you for your reply. Do you know where I could find libcrypto.so.0.9.8 ?
>> 
>> Best,
>> 
>> On Oct 3, 2018, at 9:41 PM, Ralph H Castain <r...@open-mpi.org> wrote:
>> 
>> Actually, I see that you do have the tm components built, but they cannot be 
>> loaded because you are missing libcrypto from your LD_LIBRARY_PATH
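A quick check along those lines (assuming the libcrypto files actually live in /usr/lib64, as they do on stock CentOS 7 -- adjust the path if not):

```shell
# Prepend the directory that contains libcrypto to the runtime
# library search path before launching the job.
export LD_LIBRARY_PATH=/usr/lib64${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}

# Then rerun; the tm components may now load:
# mpirun -np 2 ./code.o
```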
>> 
>> 
>> On Oct 3, 2018, at 12:33 PM, Ralph H Castain <r...@open-mpi.org> wrote:
>> 
>> Did you configure OMPI --with-tm=<path-to-PBS-libs>? It looks like we 
>> didn't build PBS support, so we only see one node with a single slot 
>> allocated to it.
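A rebuild with tm support would look roughly like this (a sketch; /opt/torque and the install prefix are hypothetical -- point --with-tm at wherever the PBS/Torque libraries actually live on the head node):

```shell
# Reconfigure and rebuild Open MPI with tm (Torque/PBS) support.
./configure --prefix=$HOME/openmpi --with-tm=/opt/torque
make -j 4
make install
```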
>> 
>> 
>> On Oct 3, 2018, at 12:02 PM, Castellana Michele 
>> <michele.castell...@curie.fr> wrote:
>> 
>> Dear all,
>> I am having trouble running an MPI code across multiple nodes on a new 
>> computer cluster, which uses PBS. Here is a minimal example, where I want to 
>> run two MPI processes, each on a different node. The PBS script is
>> 
>> #!/bin/bash
>> #PBS -l walltime=00:01:00
>> #PBS -l mem=1gb
>> #PBS -l nodes=2:ppn=1
>> #PBS -q batch
>> #PBS -N test
>> mpirun -np 2 ./code.o
>> 
>> and when I submit it with
>> 
>> $ qsub script.sh
>> 
>> I get the following message in the PBS error file
>> 
>> $ cat test.e1234
>> [shbli040:08879] mca_base_component_repository_open: unable to open 
>> mca_plm_tm: libcrypto.so.0.9.8: cannot open shared object file: No such file 
>> or directory (ignored)
>> [shbli040:08879] mca_base_component_repository_open: unable to open 
>> mca_oob_ud: libibverbs.so.1: cannot open shared object file: No such file or 
>> directory (ignored)
>> [shbli040:08879] mca_base_component_repository_open: unable to open 
>> mca_ras_tm: libcrypto.so.0.9.8: cannot open shared object file: No such file 
>> or directory (ignored)
>> --------------------------------------------------------------------------
>> There are not enough slots available in the system to satisfy the 2 slots
>> that were requested by the application:
>> ./code.o
>> 
>> Either request fewer slots for your application, or make more slots available
>> for use.
>> --------------------------------------------------------------------------
>> 
>> The PBS version is
>> 
>> $ qstat --version
>> Version: 6.1.2
>> 
>> and here is some additional information on the MPI version
>> 
>> $ mpicc -v
>> Using built-in specs.
>> COLLECT_GCC=/bin/gcc
>> COLLECT_LTO_WRAPPER=/usr/libexec/gcc/x86_64-redhat-linux/4.8.5/lto-wrapper
>> Target: x86_64-redhat-linux
>> […]
>> Thread model: posix
>> gcc version 4.8.5 20150623 (Red Hat 4.8.5-28) (GCC)
>> 
>> Do you guys know what may be the issue here?
>> 
>> Thank you
>> Best,
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> _______________________________________________
>> users mailing list
>> users@lists.open-mpi.org
>> https://lists.open-mpi.org/mailman/listinfo/users
>> 
>> 
>> 
>> 
>> 
>> --
>> Jeff Squyres
>> jsquy...@cisco.com
>> 
>> 


-- 
Jeff Squyres
jsquy...@cisco.com

