Hi Ray and list

I have Intel ifort 10.1.017 on a Rocks 4.3 cluster.
The OpenMPI compiler wrappers (i.e. "opal_wrapper") work fine,
and find the shared libraries (Intel or other) without a problem.

My guess is that this is not an OpenMPI problem, but an Intel compiler environment glitch. I wonder if your .profile/.tcshrc/.bashrc files initialize the Intel compiler environment properly, i.e., "source /share/apps/intel/fce/10.1.018/bin/ifortvars.csh" or similar, to get the right
Intel environment variables inserted into
PATH, LD_LIBRARY_PATH, MANPATH, and INTEL_LICENSE_FILE.
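
As a sketch, the corresponding lines in a login file might look like this (the install path is the one mentioned in this thread; adjust it to your own, and note that Intel ships both an sh and a csh variant of the script):

```shell
# In ~/.bashrc (bash): source the Intel setup script if present.
# ifortvars.sh prepends the compiler directories to PATH, LD_LIBRARY_PATH,
# and MANPATH, and sets INTEL_LICENSE_FILE.
if [ -f /share/apps/intel/fce/10.1.018/bin/ifortvars.sh ]; then
    source /share/apps/intel/fce/10.1.018/bin/ifortvars.sh
fi

# In ~/.tcshrc (tcsh), the csh variant instead:
#   source /share/apps/intel/fce/10.1.018/bin/ifortvars.csh
```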

Not doing this caused trouble for me in the past.
Double or inconsistent assignments of LD_LIBRARY_PATH and PATH
(say, in ifortvars.csh and again in the user login files) have also caused conflicts.
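
A quick way to spot such double assignments is to list the path components and look for duplicates, e.g.:

```shell
# Print each LD_LIBRARY_PATH component on its own line and show
# any entry that appears more than once (duplicates suggest the
# variable is being set in two places, e.g. ifortvars.csh and ~/.bashrc).
echo "$LD_LIBRARY_PATH" | tr ':' '\n' | sort | uniq -d
```

The same one-liner works for PATH and MANPATH by substituting the variable name.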

I am not sure if this needs to be done before you configure and install OpenMPI,
but doing it after you build OpenMPI may still be OK.

I hope this helps,
Gus Correa

--
---------------------------------------------------------------------
Gustavo J. Ponce Correa, PhD - Email: g...@ldeo.columbia.edu
Lamont-Doherty Earth Observatory - Columbia University
P.O. Box 1000 [61 Route 9W] - Palisades, NY, 10964-8000 - USA
---------------------------------------------------------------------


Ray Muno wrote:

We have recently installed the Intel 10.1 compiler suite on our cluster.

I built OpenMPI (1.2.7 and 1.2.8) with

./configure CC=icc CXX=icpc F77=ifort FC=ifort

It configures, builds and installs.

However, the MPI compiler drivers (mpicc, mpif90, etc.) fail immediately with an error of the sort

mpif90: error while loading shared libraries: libimf.so: cannot open shared object file: No such file or directory

I updated LD_LIBRARY_PATH to point to the directories that contain the installed copies of libimf.so. (This is not something I have had to do for other compiler/OpenMPI combinations.)
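
For reference, that workaround amounts to something along these lines (the library path is taken from the warning quoted in this thread; adjust to your install):

```shell
# Prepend the Intel Fortran runtime directory (where libimf.so lives)
# so the dynamic linker can resolve it at run time.
export LD_LIBRARY_PATH=/share/apps/Intel/fce/10.1.018/lib:$LD_LIBRARY_PATH
```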

At that point, the program will compile but I get warnings like:

[muno@titan ~]$ mpif90 test.f

/share/apps/Intel/fce/10.1.018/lib/libimf.so: warning: warning: feupdateenv is not implemented and will always fail

In a Google search, I found a reference to this in the OpenMPI lists. When I follow the link, however, it leads to a different thread. Searching the OpenMPI lists from the web page does not find any matches. Strange.

I found some references to this at some other sites using OpenMPI on clusters and they said to use

-i_dynamic

on the compile line.

This removes the warning.
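
That is, roughly (-i_dynamic tells ifort to link against the shared Intel runtime libraries rather than the static ones):

```shell
# Compile with the Intel runtime linked dynamically; this suppresses
# the feupdateenv warning from the statically linked libimf.
mpif90 -i_dynamic test.f -o test
```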

Is there something I should be doing at OpenMPI configure time to take care of these issues?

--
Ray Muno
University of Minnesota
Aerospace Engineering
_______________________________________________
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users

