FWIW, on an Indiana University Opteron system running RHEL4, I was able
to compile Open MPI v1.0.2 in 32-bit mode with:

./configure --prefix=/u/jsquyres/x86_64-unknown-linux-gnu/bogus \
    CFLAGS=-m32 CXXFLAGS=-m32 FFLAGS=-m32 FCFLAGS=-m32

I then successfully built and ran an MPI executable with:

shell$ mpicc hello.c -o hello -m32
shell$ file hello
hello: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), for
GNU/Linux 2.2.5, dynamically linked (uses shared libs), not stripped
shell$ mpirun -np 4 hello
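
(hello.c itself isn't shown above; any minimal MPI program will do.
Purely as an illustrative sketch -- not necessarily the exact file
compiled above -- something like this is enough to exercise the build:)

/* hello.c -- minimal illustrative MPI program (hypothetical stand-in) */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    printf("Hello, world, from rank %d of %d\n", rank, size);
    MPI_Finalize();
    return 0;
}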

The extra "-m32" was necessary because the wrapper compiler did not
include the CFLAGS from the configure line (we don't do this by default
on the assumption that you may want to build Open MPI with different
flags than your MPI executables).  You can get the wrapper compilers to
automatically include additional flags by supplying
--with-wrapper-[cflags|cxxflags|...].  For example, I could have used
the following configure line:

./configure --prefix=/u/jsquyres/x86_64-unknown-linux-gnu/bogus \
    CFLAGS=-m32 CXXFLAGS=-m32 FFLAGS=-m32 FCFLAGS=-m32 \
    --with-wrapper-cflags=-m32 --with-wrapper-cxxflags=-m32 \
    --with-wrapper-fflags=-m32 --with-wrapper-fcflags=-m32

Then you can leave off the -m32 when using mpicc:

shell$ mpicc hello.c -o hello
shell$ file hello
hello: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), for
GNU/Linux 2.2.5, dynamically linked (uses shared libs), not stripped
shell$ mpirun -np 4 hello

Both of the examples I listed above worked fine for me with Open MPI
v1.0.2.

FYI: Per Brian's suggestion, I would strongly recommend using the output
of "mpicc -showme" as the basis for loading the LANL MPI_FLAGS environment
variable when compiling with different compilers.  Also note that Open
MPI's wrapper compilers do allow you to change the base compiler, but this
sometimes gives unexpected results (e.g., not all the flags shown by
"mpicc -showme" will work properly with a different compiler, there may be
linker/bootstrap issues, there may be size differences between intrinsic
types [most often with Fortran and C++ compilers -- C compiler
cross-linkability is almost never a problem], etc.).  You can find out
exactly which compiler and flags were used to build Open MPI with
"ompi_info --all" (be sure to pipe the output through a pager like
more/less).
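
For example (the output and paths below are purely illustrative, and the
":compile"/":link" variants are an assumption about the wrapper options
available on your install -- plain "-showme" is the safe bet):

shell$ mpicc -showme
gcc -I/opt/openmpi/include ... -L/opt/openmpi/lib -lmpi ...
shell$ mpicc -showme:compile      # compile-time flags only
shell$ mpicc -showme:link         # link-time flags only
shell$ # hypothetical: capture the flags for use with a different compiler
shell$ export MPI_FLAGS="`mpicc -showme:compile` `mpicc -showme:link`"
shell$ icc $MPI_FLAGS send4.c -o send4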



> -----Original Message-----
> From: users-boun...@open-mpi.org 
> [mailto:users-boun...@open-mpi.org] On Behalf Of David Gunter
> Sent: Monday, April 10, 2006 9:43 AM
> To: Open MPI Users
> Cc: David R. (Chip) Kent IV
> Subject: Re: [OMPI users] Building 32-bit OpenMPI package for
> 64-bit Opteron platform
> 
> After much fiddling around, I managed to create a version of open-mpi
> that would actually build.  Unfortunately, I can't run the simplest
> of applications with it.  Here's the setup I used:
> 
> export CC=gcc
> export CXX=g++
> export FC=gfortran
> export F77=gfortran
> export CFLAGS="-m32"
> export CXXFLAGS="-m32"
> export FFLAGS="-m32"
> export FCFLAGS="-m32"
> export LDFLAGS="-L/usr/lib"
> 
> ./configure --prefix=/net/scratch1/dog/flash64/openmpi/openmpi-1.0.2-32b \
>     --build=i686-pc-linux-gnu --with-bproc --with-gm --enable-io-romio \
>     --with-romio --with-io-romio-flags='--build=i686-pc-linux-gnu'
> 
> Configure completes, as does 'make' and then 'make install'.  Next I  
> tried to compile a simple MPI_Send test application, which 
> fails to run:
> 
> (flashc 104%) gcc -m32 \
>     -I/net/scratch1/dog/flash64/openmpi/openmpi-1.0.2-32b/include \
>     -o send4 send4.c \
>     -L/net/scratch1/dog/flash64/openmpi/openmpi-1.0.2-32b/lib -lmpi
> /net/scratch1/dog/flash64/openmpi/openmpi-1.0.2-32b/lib/libopal.so.0:
> warning: epoll_wait is not implemented and will always fail
> /net/scratch1/dog/flash64/openmpi/openmpi-1.0.2-32b/lib/libopal.so.0:
> warning: epoll_ctl is not implemented and will always fail
> /net/scratch1/dog/flash64/openmpi/openmpi-1.0.2-32b/lib/libopal.so.0:
> warning: epoll_create is not implemented and will always fail
> 
> (flashc 105%) which mpiexec
> /net/scratch1/dog/flash64/openmpi/openmpi-1.0.2-32b/bin/mpiexec
> 
> (flashc 106%) mpiexec -n 4 ./send4
> [flashc.lanl.gov:32373] mca: base: component_find: unable to open:
> /lib/libc.so.6: version `GLIBC_2.3.4' not found (required by
> /net/scratch1/dog/flash64/openmpi/openmpi-1.0.2-32b/lib/openmpi/mca_paffinity_linux.so)
> (ignored)
> [flashc.lanl.gov:32373] mca: base: component_find: unable to open:
> libbproc.so.4: cannot open shared object file: No such file or directory (ignored)
> [flashc.lanl.gov:32373] mca: base: component_find: unable to open:
> libbproc.so.4: cannot open shared object file: No such file or directory (ignored)
> [flashc.lanl.gov:32373] mca: base: component_find: unable to open:
> libbproc.so.4: cannot open shared object file: No such file or directory (ignored)
> [flashc.lanl.gov:32373] mca: base: component_find: unable to open:
> libbproc.so.4: cannot open shared object file: No such file or directory (ignored)
> [flashc.lanl.gov:32373] mca: base: component_find: unable to open:
> libbproc.so.4: cannot open shared object file: No such file or directory (ignored)
> mpiexec: relocation error:
> /net/scratch1/dog/flash64/openmpi/openmpi-1.0.2-32b/lib/openmpi/mca_soh_bproc.so:
> undefined symbol: bproc_nodelist
> 
> I'm still open to suggestions.
> 
> -david
> 
> 
> On Apr 10, 2006, at 7:11 AM, David R. (Chip) Kent IV wrote:
> 
> > When running the tests, is the LD_LIBRARY_PATH getting set to lib64
> > instead of lib or something like that?
> >
> > Chip
> >
> >
> > On Sat, Apr 08, 2006 at 02:45:01AM -0600, David Gunter wrote:
> >> I am trying to build a 32-bit compatible OpenMPI for our 64-bit Bproc
> >> Opteron systems.  I saw the thread from last August-September 2005
> >> regarding this but didn't see where it ever succeeded or if any of
> >> the problems had been fixed.  Most importantly, romio is required to
> >> work as well.
> >>
> >> Is this possible and how is it done?  Here's what I have tried so  
> >> far:
> >>
> >> setenv CFLAGS -m32
> >> setenv CXXFLAGS -m32
> >> setenv FFLAGS -m32
> >> setenv F90FLAGS -m32
> >>
> >> I have used the '--build=i686-pc-linux-gnu' option to the configure
> >> setup as well as --with-io-romio-flags="--build=i686-pc-linux-gnu"
> >>
> >> configure halts with errors when trying to run the Fortran 77 tests.
> >> If I remove those env settings and just use the --build option,
> >> configure will proceed to the end but the make will eventually halt
> >> with errors due to a mix of lib64 libs being accessed at some point.
> >>
> >> Any ideas?
> >>
> >> -david
> >> --
> >> David Gunter
> >> CCN-8: HPC Environments: Parallel Tools Team
> >> Los Alamos National Laboratory
> >>
> >>
> >>
> >>
> >
> > -- 
> >
> >
> > -----------------------------------------------------
> > David R. "Chip" Kent IV
> >
> > Parallel Tools Team
> > High Performance Computing Environments Group (CCN-8)
> > Los Alamos National Laboratory
> >
> > (505)665-5021
> > drk...@lanl.gov
> > -----------------------------------------------------
> >
> > This message is "Technical data or Software  Publicly
> > Available" or "Correspondence".
> 
> 
