On Jan 2, 2006, at 3:52 AM, Jyh-Shyong Ho wrote:

I am trying to install Open MPI 1.0.1 on my Athlon X2 computer running SuSE 10.0. The installation failed when I included the --with-tm=/opt/torque option, with the error message:
...
gcc -shared .libs/pls_tm_component.o .libs/pls_tm_module.o -Wl,--rpath -Wl,/home/c00jsh00/openmpi-1.0.1/orte/.libs -Wl,--rpath -Wl,/home/c00jsh00/openmpi-1.0.1/opal/.libs -Wl,--rpath -Wl,/opt/openmpi/lib -L/opt/torque/lib -lpbs /home/c00jsh00/openmpi-1.0.1/orte/.libs/liborte.so -L/home/c00jsh00/openmpi-1.0.1/opal/.libs /home/c00jsh00/openmpi-1.0.1/opal/.libs/libopal.so -lm -lutil -lnsl -pthread -Wl,-soname -Wl,mca_pls_tm.so -o .libs/mca_pls_tm.so
/usr/lib64/gcc/x86_64-suse-linux/4.0.2/../../../../x86_64-suse-linux/bin/ld: /opt/torque/lib/libpbs.a(tm.o): relocation R_X86_64_32S against `a local symbol' can not be used when making a shared object; recompile with -fPIC
/opt/torque/lib/libpbs.a: could not read symbols: Bad value

My TORQUE is 2.0.0p4, the latest version. Any hint?

The problem is that Torque (and all the PBS derivatives I've seen) provides only static libraries, and we are trying to build a shared library linked against them. That happens to work for x86 code, but not for x86_64, where objects must be compiled as position-independent code (-fPIC) before they can be linked into a shared object; hence the relocation error from ld above. Unfortunately, the only solution at this time is to build Open MPI as static libraries with the configure options "--enable-static --disable-shared".
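
For example, something along these lines should work (a minimal sketch; the --prefix of /opt/openmpi is just a guess based on the rpath in your link line, so adjust it to your setup):

   ./configure --prefix=/opt/openmpi --with-tm=/opt/torque \
       --enable-static --disable-shared
   make all install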

With a couple of changes to the build system, we should be able to allow the TM component to be built as part of libmpi.so (as opposed to a dynamically opened shared object), but that will not work with Open MPI 1.0.1 (or the upcoming 1.0.2 release). Of course, the easiest (and most flexible) solution to your problem would be for the Torque team to release their libraries as both shared and static libraries. Their build system does not appear to support this at present, which is most unfortunate, as it prevents us from building TM support as a DSO...
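
For what it's worth, a rough sketch of what that would look like on the Torque side (hypothetical; their build system does not actually do this today) is to compile the objects as PIC and then produce both flavors of the library:

   gcc -fPIC -c tm.c             # tm.o from the error above, built as PIC
   ar rcs libpbs.a *.o           # static archive, as Torque ships now
   gcc -shared -o libpbs.so *.o  # shared library, which would let us build a DSO

Since PIC objects can live in a static archive as well, a single -fPIC build is enough to produce both.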

Hope this helps,

Brian

--
  Brian Barrett
  Open MPI developer
  http://www.open-mpi.org/
