Dear Gus,

Thank you for the detailed explanation. It is quite helpful. I think I now
understand how to handle the problem.

Best regards,

Mahmoud Payami
Theoretical Physics Group,
Atomic Energy Organization of Iran
Tehran-Iran
mpay...@aeoi.org.ir


On Mon, Jan 5, 2009 at 12:21 PM, Gus Correa <g...@ldeo.columbia.edu> wrote:

> Mahmoud Payami wrote:
>
>>
>>
>> On Fri, Jan 2, 2009 at 9:08 AM, doriankrause <doriankra...@web.de> wrote:
>>
>>    Mahmoud Payami wrote:
>>
>>
>>        Dear OpenMPI Users,
>>
>>        I have two systems, one with an Intel64 processor and one with
>>        IA32. The OS on the first is CentOS x86_64 and on the other is
>>        CentOS i386. I installed the Intel Fortran compiler 10.1 on both.
>>        On the first I use the fce directory and on the second the fc
>>        directory (ifortvars.sh/csh). I have compiled OpenMPI
>>        separately on each machine. Now I cannot run my
>>        application, which was compiled on the ia32 machine. Should I use
>>        "fc" instead of "fce" on the Intel64 box and then compile OpenMPI
>>        with that?
>>
>>
>>    Could you give us some more information? What is the error message?
>>    You said that the application is compiled for the 32 bit
>>    architecture. I'm not used to mixing 32/64-bit architectures. Does
>>    the application run on each host separately?
>>
>>    Dorian
>>
>>
>>
> Hi Mahmoud, list
>
>> Dear Dorian,
>> Thank you for your contribution. The application, compiled on each box
>> separately, runs fine with MPI, no problem. I recently checked that a
>> binary created on ia32 also works on x86_64, but the reverse is not
>> true.
>>
> That is correct.
> x86-64 architecture can run 32-bit binaries,
> but 64-bit binaries don't work on x86 machines.
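>
> A quick way to check what a given binary or library was built for is
> the "file" command; the paths below are only examples:
>
>   file ./my_application
>   file /usr/local/lib/libmpi.so.0    # reports a 32-bit or 64-bit ELF object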
>
>> So why not a parallel program compiled on the ia32 box? I think that if
>> I configure and install OpenMPI using the ia32 Intel compiler on the x86_64
>> box, then the problem will be resolved.
>>
> 1. You need to compile OpenMPI separately on each architecture.
> If you want, use the "--prefix=/path/to/my/openmpi/32bit/" configure option
> (a 32-bit example/suggestion) to install the two libraries in different
> locations. This makes it clear which architecture each library was
> built for.
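>
> For instance, something along these lines on each box (the install
> paths are just placeholders, adjust them to your setup):
>
>   # on the 32-bit (ia32) box
>   ./configure --prefix=/path/to/my/openmpi/32bit F77=ifort FC=ifort
>   make all install
>
>   # on the 64-bit (x86_64) box
>   ./configure --prefix=/path/to/my/openmpi/64bit F77=ifort FC=ifort
>   make all install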
>
> 2. You need to compile your application separately on each architecture,
> and link to the OpenMPI libraries built for that specific architecture
> according to item 1  above.
> (I.e. don't mix apples and oranges.)
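>
> For example, with a hypothetical source file and the install paths
> from item 1:
>
>   # on the 32-bit box
>   /path/to/my/openmpi/32bit/bin/mpif90 -o myapp myapp.f90
>
>   # on the 64-bit box
>   /path/to/my/openmpi/64bit/bin/mpif90 -o myapp myapp.f90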
>
> 3. You need to have the correct environment variables set
> on each machine architecture.
> They are *different* on each architecture.
>
> I.e., if you use Intel Fortran,
> source the ifortvars script from the fc directory on the 32-bit machine,
> and the one from the fce directory on the 64-bit machine.
>
> This can be done in the .bashrc or .tcshrc file.
> If you have a different home directory on each machine,
> you can write a separate .bashrc or .tcshrc file for each architecture.
> If you have a single NFS-mounted home directory,
> use a trick like this (tcsh example):
>
> if ( $HOST == "my_32bit_hostname" ) then
>   source /path/to/intel/fc/bin/ifortvars.csh             # Note "fc" here.
> else if ( $HOST == "my_64bit_hostname"  ) then
>   source /path/to/intel/fce/bin/ifortvars.csh           # Note "fce" here.
> endif
>
> whatever your "my_32bit_hostname", "my_64bit_hostname",
> /path/to/intel/fc/, and /path/to/intel/fce/ actually are.
> (Run "hostname" on each machine to find out the right name to use.)
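>
> If you prefer bash, a rough .bashrc equivalent would be something like
> this (a sketch with the same placeholder hostnames and paths; bash sets
> $HOSTNAME for you):
>
> if [ "$HOSTNAME" = "my_32bit_hostname" ]; then
>   source /path/to/intel/fc/bin/ifortvars.sh              # Note "fc" here.
> elif [ "$HOSTNAME" = "my_64bit_hostname" ]; then
>   source /path/to/intel/fce/bin/ifortvars.sh             # Note "fce" here.
> fi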
>
> Likewise for the OpenMPI binaries (mpicc, mpif90, mpirun, etc):
>
> if ( $HOST == "my_32bit_hostname" ) then
>   setenv PATH /path/to/my/openmpi/32bit/bin:$PATH   # Note "32bit" here.
> else if ( $HOST == "my_64bit_hostname"  ) then
>   setenv PATH /path/to/my/openmpi/64bit/bin:$PATH    # Note "64bit" here.
> endif
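>
> The bash counterpart of the PATH setup would look roughly like this
> (again with placeholder hostnames and install paths):
>
> if [ "$HOSTNAME" = "my_32bit_hostname" ]; then
>   export PATH=/path/to/my/openmpi/32bit/bin:$PATH        # Note "32bit" here.
> elif [ "$HOSTNAME" = "my_64bit_hostname" ]; then
>   export PATH=/path/to/my/openmpi/64bit/bin:$PATH        # Note "64bit" here.
> fi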
>
> This approach also works for separate home directories "per machine"
> (not NFS mounted), and is probably the simplest way to solve the problem.
>
> There are more elegant ways to set up the environment of choice
> than changing the user startup files.
> For instance, you can put intel.csh and intel.sh scripts in the
> /etc/profile.d directory
> to set up the appropriate environment as each user logs in.
> See also the "environment modules" package:
> http://modules.sourceforge.net/
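>
> With environment modules the day-to-day usage would be something like
> the lines below; the module names are hypothetical, and you would have
> to write the corresponding modulefiles yourself:
>
>   module load intel/10.1
>   module load openmpi/32bit          # or openmpi/64bit on the other box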
>
> 4. If you run MPI programs across the two machines/architectures,
> make sure you use the MPI datatypes in your MPI calls correctly,
> and match them properly to the native Fortran (or C) types
> on each machine/architecture.
>
> I hope this helps.
> Gus Correa
> ---------------------------------------------------------------------
> Gustavo Correa, PhD - Email: g...@ldeo.columbia.edu
> Lamont-Doherty Earth Observatory - Columbia University
> Palisades, NY, 10964-8000 - USA
> ---------------------------------------------------------------------
>
>> I have to check it and will report the result. In the present case, it is
>> searching for a shared lib.so.0 which has some extension "..ELF...64". I have
>> already added "/usr/local/lib", which contains the MPI libs, to
>> LD_LIBRARY_PATH; otherwise they would not work on each box even separately.
>> Best wishes, and Happy 2009,
>> mahmoud
