Hi Dominik,

Well, the OSCAR OpenMPI requires "tm" support. Please recompile OpenMPI 
on the current cluster with the torque and torque-devel RPMs installed, e.g.:

  rpmbuild --rebuild --define "oscar 1" \
    --define "_packager Dominik Schips <[EMAIL PROTECTED]>" \
    --define "_vendor OSCAR" \
    --define "configure_options --with-tm=/opt/pbs" \
    --target x86_64 openmpi-1.2.4.src.rpm
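
Once the rebuilt RPM is installed, a quick sanity check (just a sketch; the 
exact ompi_info output depends on the build) would be:

  module load openmpi
  ompi_info | grep tm
  # with tm support you should see lines such as "MCA ras: tm ..." and
  # "MCA pls: tm ..."; no output here means tm is still missing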

Also, please double-check whether SLES 10.0 provides its own OpenMPI. If so, 
please make yume pick up the OSCAR one (openmpi-1.2.4-1) instead of the SLES 
package. The OSCAR one has the openmpi modulefile embedded. (Note that the 
openmpi-switcher module is different from this one and comes in a separate file.)
FYI, what I mentioned in my previous email is the openmpi-switcher module, which 
is in a separate file: openmpi-switcher-modulefile-1.2.4-1.noarch.rpm.
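
If the SLES package is getting picked up, one way to keep it out (a sketch 
only, assuming a yum-style repo file is used for the SLES media; the file 
name below is hypothetical) is:

  rpm -q openmpi    # should report openmpi-1.2.4-1, not the SLES build
  # add an exclude line to the SLES repo section, e.g. in
  # /etc/yum.repos.d/sles-media.repo (hypothetical name):
  #   exclude=openmpi*
  # then rerun yume so it installs the OSCAR openmpi-1.2.4-1 package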

I hope this is not too confusing and that you can figure it all out.

Regards,

- DongInn


Dominik Schips wrote:
> Hello DongInn,
> 
> On Monday, 14.01.2008, at 10:12 -0500, DongInn Kim wrote:
>> Hi Dominik,
>>
>> If your openmpi is properly compiled and your modules program works fine, but 
>> the openmpi test fails because of the switcher issue, then here is my simple 
>> remedy.
>>
>> 1. First of all, check whether openmpi was compiled with "tm" support:
>>    module load openmpi
>>    ompi_info | grep tm     # if you see any output about tm here, that is good
> 
> There is no "tm" support in the package, but I used the OSCAR 5.0 source RPM
> and didn't change anything. Is tm support recommended for the OSCAR OpenMPI?
> 
>                 Open MPI: 1.1.1
>    Open MPI SVN revision: r11473
>                 Open RTE: 1.1.1
>    Open RTE SVN revision: r11473
>                     OPAL: 1.1.1
>        OPAL SVN revision: r11473
>                   Prefix: /usr
>  Configured architecture: x86_64-suse-linux-gnu
>            Configured by: root
>            Configured on: Mon Jan 14 11:09:34 CET 2008
>           Configure host: sles10oscar
>                 Built by: root
>                 Built on: Mo Jan 14 11:15:42 CET 2008
>               Built host: sles10oscar
>               C bindings: yes
>             C++ bindings: yes
>       Fortran77 bindings: yes (all)
>       Fortran90 bindings: yes
>  Fortran90 bindings size: small
>               C compiler: gcc
>      C compiler absolute: /usr/bin/gcc
>             C++ compiler: g++
>    C++ compiler absolute: /usr/bin/g++
>       Fortran77 compiler: gfortran
>   Fortran77 compiler abs: /usr/bin/gfortran
>       Fortran90 compiler: gfortran
>   Fortran90 compiler abs: /usr/bin/gfortran
>              C profiling: yes
>            C++ profiling: yes
>      Fortran77 profiling: yes
>      Fortran90 profiling: yes
>           C++ exceptions: no
>           Thread support: posix (mpi: no, progress: no)
>   Internal debug support: no
>      MPI parameter check: runtime
> Memory profiling support: no
> Memory debugging support: no
>          libltdl support: yes
>               MCA memory: ptmalloc2 (MCA v1.0, API v1.0, Component v1.1.1)
>            MCA paffinity: linux (MCA v1.0, API v1.0, Component v1.1.1)
>            MCA maffinity: first_use (MCA v1.0, API v1.0, Component v1.1.1)
>            MCA maffinity: libnuma (MCA v1.0, API v1.0, Component v1.1.1)
>                MCA timer: linux (MCA v1.0, API v1.0, Component v1.1.1)
>            MCA allocator: basic (MCA v1.0, API v1.0, Component v1.0)
>            MCA allocator: bucket (MCA v1.0, API v1.0, Component v1.0)
>                 MCA coll: basic (MCA v1.0, API v1.0, Component v1.1.1)
>                 MCA coll: hierarch (MCA v1.0, API v1.0, Component v1.1.1)
>                 MCA coll: self (MCA v1.0, API v1.0, Component v1.1.1)
>                 MCA coll: sm (MCA v1.0, API v1.0, Component v1.1.1)
>                 MCA coll: tuned (MCA v1.0, API v1.0, Component v1.1.1)
>                   MCA io: romio (MCA v1.0, API v1.0, Component v1.1.1)
>                MCA mpool: sm (MCA v1.0, API v1.0, Component v1.1.1)
>                  MCA pml: ob1 (MCA v1.0, API v1.0, Component v1.1.1)
>                  MCA bml: r2 (MCA v1.0, API v1.0, Component v1.1.1)
>               MCA rcache: rb (MCA v1.0, API v1.0, Component v1.1.1)
>                  MCA btl: self (MCA v1.0, API v1.0, Component v1.1.1)
>                  MCA btl: sm (MCA v1.0, API v1.0, Component v1.1.1)
>                  MCA btl: tcp (MCA v1.0, API v1.0, Component v1.0)
>                 MCA topo: unity (MCA v1.0, API v1.0, Component v1.1.1)
>                  MCA osc: pt2pt (MCA v1.0, API v1.0, Component v1.0)
>                  MCA gpr: null (MCA v1.0, API v1.0, Component v1.1.1)
>                  MCA gpr: proxy (MCA v1.0, API v1.0, Component v1.1.1)
>                  MCA gpr: replica (MCA v1.0, API v1.0, Component v1.1.1)
>                  MCA iof: proxy (MCA v1.0, API v1.0, Component v1.1.1)
>                  MCA iof: svc (MCA v1.0, API v1.0, Component v1.1.1)
>                   MCA ns: proxy (MCA v1.0, API v1.0, Component v1.1.1)
>                   MCA ns: replica (MCA v1.0, API v1.0, Component v1.1.1)
>                  MCA oob: tcp (MCA v1.0, API v1.0, Component v1.0)
>                  MCA ras: dash_host (MCA v1.0, API v1.0, Component v1.1.1)
>                  MCA ras: hostfile (MCA v1.0, API v1.0, Component v1.1.1)
>                  MCA ras: localhost (MCA v1.0, API v1.0, Component v1.1.1)
>                  MCA ras: slurm (MCA v1.0, API v1.0, Component v1.1.1)
>                  MCA rds: hostfile (MCA v1.0, API v1.0, Component v1.1.1)
>                  MCA rds: resfile (MCA v1.0, API v1.0, Component v1.1.1)
>                MCA rmaps: round_robin (MCA v1.0, API v1.0, Component v1.1.1)
>                 MCA rmgr: proxy (MCA v1.0, API v1.0, Component v1.1.1)
>                 MCA rmgr: urm (MCA v1.0, API v1.0, Component v1.1.1)
>                  MCA rml: oob (MCA v1.0, API v1.0, Component v1.1.1)
>                  MCA pls: fork (MCA v1.0, API v1.0, Component v1.1.1)
>                  MCA pls: rsh (MCA v1.0, API v1.0, Component v1.1.1)
>                  MCA pls: slurm (MCA v1.0, API v1.0, Component v1.1.1)
>                  MCA sds: env (MCA v1.0, API v1.0, Component v1.1.1)
>                  MCA sds: pipe (MCA v1.0, API v1.0, Component v1.1.1)
>                  MCA sds: seed (MCA v1.0, API v1.0, Component v1.1.1)
>                  MCA sds: singleton (MCA v1.0, API v1.0, Component v1.1.1)
>                  MCA sds: slurm (MCA v1.0, API v1.0, Component v1.1.1)
> 
>> 2. Chroot into the image and then set up the switcher manually:
>>    chroot /var/lib/systemimager/images/oscarimage
>>    cd /opt/env-switcher/share/env-switcher/mpi
>>    ls  # check which mpi switcher names are configured.
>>        # OSCAR is supposed to configure three mpi names (lam, mpich, and openmpi) by default.
>>        # If any of them is missing, the corresponding test will fail.
>>    # Assuming the openmpi name is missing:
>>    cp /opt/openmpi-switcher-modulefile-1.2.4/share/openmpi/openmpi-1.2.4 .
>> 3. Once the image is updated, reimage your client nodes with the new image.
>> 4. Or, although not recommended, cpush the missing mpi modulefiles to the client nodes:
>>    cd /opt/env-switcher/share/env-switcher/mpi
>>    cpush openmpi-1.2.4
> 
> The openmpi module was missing from this directory, just as you described.
> I copied the module into the image and reimaged the cluster nodes.
> 
> But I get this error if I try to load the module:
> 
> oscarnode1:~ # module load openmpi
> ModuleCmd_Load.c(199):ERROR:105: Unable to locate a modulefile for
> 'openmpi'
> oscarnode1:~ # cd /opt/env-switcher/
> bin/   etc/   man/   share/
> oscarnode1:~ # cd /opt/env-switcher/share/env-switcher/mpi/
> oscarnode1:/opt/env-switcher/share/env-switcher/mpi # module load
> openmpi-1.1.1
> ModuleCmd_Load.c(199):ERROR:105: Unable to locate a modulefile for
> 'openmpi-1.1.1'
> oscarnode1:/opt/env-switcher/share/env-switcher/mpi #
> 
> Same on the headnode:
> 
> sles10oscar:~ # module load openmpi
> ModuleCmd_Load.c(199):ERROR:105: Unable to locate a modulefile for
> 'openmpi'
> sles10oscar:~ #
> 
> Any idea why the module couldn't be loaded?
> Could the env setting for the module path be wrong or missing?
> Do I have to make any special path settings?
> 
