On Sep 4, 2012, at 3:09 PM, Mariana Vargas wrote:

> I'm new to this. I have some code that uses MPI for Python, and I just  
> installed openmpi, mrmpi, and mpi4py in my home directory (on a cluster  
> account) without any apparent errors. When I tried to run this simple  
> test in Python, I got the following error related to Open MPI. Could  
> you help me figure out what is going on? I've attached as much  
> information as possible...

I think I know what's happening here.

It's a complicated linker issue that we've discussed before -- I'm not sure 
whether it was on this users list or the OMPI developers list.  The upshot is 
that the plugins under /home/mvargas/lib/openmpi can't resolve symbols from the 
Open MPI core libraries (hence all of the "undefined symbol ... (ignored)" 
messages), which commonly happens when libmpi is pulled in via Python's 
dlopen(), or when plugins from an older installation are still lying around.

The short version is that you should remove your prior Open MPI installation, 
and then rebuild Open MPI with the --disable-dlopen configure switch, which 
builds the plugins directly into the Open MPI libraries instead of opening them 
at run time.  See if that fixes the problem.
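
Roughly, the rebuild would look like the following.  This is only a sketch: I'm 
guessing from your output that you configured with --prefix=$HOME (since the 
plugins landed in /home/mvargas/lib/openmpi), so substitute whatever prefix and 
source directory you actually used:

    # clear out the old plugin directory (and any other files left over
    # from the previous install) so stale plugins can't be picked up
    rm -rf $HOME/lib/openmpi

    # rebuild from the 1.6 source tree with plugins compiled into the libraries
    cd openmpi-1.6
    make distclean
    ./configure --prefix=$HOME --disable-dlopen
    make -j4 all
    make install

After that, you'll likely also need to rebuild mpi4py and mrmpi against the new 
installation so that they link against the freshly built libmpi, and then 
re-run your test.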

> Thanks.
> 
> Mariana
> 
> 
>  From a Python console
>  >>> from mrmpi import mrmpi
>  >>> mr=mrmpi()
> [ferrari:23417] mca: base: component_find: unable to open /home/mvargas/lib/openmpi/mca_paffinity_hwloc: /home/mvargas/lib/openmpi/mca_paffinity_hwloc.so: undefined symbol: opal_hwloc_topology (ignored)
> [ferrari:23417] mca: base: component_find: unable to open /home/mvargas/lib/openmpi/mca_carto_auto_detect: /home/mvargas/lib/openmpi/mca_carto_auto_detect.so: undefined symbol: opal_carto_base_graph_get_host_graph_fn (ignored)
> [ferrari:23417] mca: base: component_find: unable to open /home/mvargas/lib/openmpi/mca_carto_file: /home/mvargas/lib/openmpi/mca_carto_file.so: undefined symbol: opal_carto_base_graph_get_host_graph_fn (ignored)
> [ferrari:23417] mca: base: component_find: unable to open /home/mvargas/lib/openmpi/mca_shmem_mmap: /home/mvargas/lib/openmpi/mca_shmem_mmap.so: undefined symbol: opal_show_help (ignored)
> [ferrari:23417] mca: base: component_find: unable to open /home/mvargas/lib/openmpi/mca_shmem_posix: /home/mvargas/lib/openmpi/mca_shmem_posix.so: undefined symbol: opal_show_help (ignored)
> [ferrari:23417] mca: base: component_find: unable to open /home/mvargas/lib/openmpi/mca_shmem_sysv: /home/mvargas/lib/openmpi/mca_shmem_sysv.so: undefined symbol: opal_show_help (ignored)
> --------------------------------------------------------------------------
> It looks like opal_init failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during opal_init; some of which are due to configuration or
> environment problems.  This failure appears to be an internal failure;
> here's some additional information (which may only be relevant to an
> Open MPI developer):
> 
>   opal_shmem_base_select failed
>   --> Returned value -1 instead of OPAL_SUCCESS
> --------------------------------------------------------------------------
> [ferrari:23417] [[INVALID],INVALID] ORTE_ERROR_LOG: Error in file runtime/orte_init.c at line 79
> --------------------------------------------------------------------------
> It looks like MPI_INIT failed for some reason; your parallel process is
> likely to abort.  There are many reasons that a parallel process can
> fail during MPI_INIT; some of which are due to configuration or environment
> problems.  This failure appears to be an internal failure; here's some
> additional information (which may only be relevant to an Open MPI
> developer):
> 
>   ompi_mpi_init: orte_init failed
>   --> Returned "Error" (-1) instead of "Success" (0)
> --------------------------------------------------------------------------
> *** An error occurred in MPI_Init
> *** on a NULL communicator
> *** MPI_ERRORS_ARE_FATAL: your MPI job will now abort
> [ferrari:23417] Local abort before MPI_INIT completed successfully; not able to aggregate error messages, and not able to guarantee that all other processes were killed!
> 
> 
> 
> echo $PATH
> 
> /home/mvargas/idl/pro/LibsSDSSS/idlutilsv5_4_15/bin:/usr/local/itt/idl70/bin:/opt/local/bin:/home/mvargas/bin:/home/mvargas/lib:/home/mvargas/lib/openmpi/:/home/mvargas:/home/vargas/bin/:/home/mvargas/idl/pro/LibsSDSSS/idlutilsv5_4_15/bin:/usr/local/itt/idl70/bin:/opt/local/bin:/home/mvargas/bin:/home/mvargas/lib:/home/mvargas/lib/openmpi/:/home/mvargas:/home/vargas/bin/:/usr/lib64/qt3.3/bin:/usr/kerberos/bin:/usr/local/bin:/bin:/usr/bin:/opt/pbs/bin:/opt/pbs/lib/xpbs/bin:/opt/envswitcher/bin:/opt/pvm3/lib:/opt/pvm3/lib/LINUX64:/opt/pvm3/bin/LINUX64:/opt/c3-4/
> 
> echo $LD_LIBRARY_PATH
> /usr/local/mpich2/lib:/home/mvargas/lib:/home/mvargas/:/home/mvargas/lib64:/home/mvargas/lib/openmpi/:/usr/lib64/openmpi/1.4-gcc/lib/:/user/local/:/usr/local/mpich2/lib:/home/mvargas/lib:/home/mvargas/:/home/mvargas/lib64:/home/mvargas/lib/openmpi/:/usr/lib64/openmpi/1.4-gcc/lib/:/user/local/:
> 
> Version: openmpi-1.6
> 
> 
> 
> mpirun --bynode --tag-output ompi_info -v ompi full --parsable
> [1,0]<stdout>:package:Open MPI mvargas@ferrari Distribution
> [1,0]<stdout>:ompi:version:full:1.6
> [1,0]<stdout>:ompi:version:svn:r26429
> [1,0]<stdout>:ompi:version:release_date:May 10, 2012
> [1,0]<stdout>:orte:version:full:1.6
> [1,0]<stdout>:orte:version:svn:r26429
> [1,0]<stdout>:orte:version:release_date:May 10, 2012
> [1,0]<stdout>:opal:version:full:1.6
> [1,0]<stdout>:opal:version:svn:r26429
> [1,0]<stdout>:opal:version:release_date:May 10, 2012
> [1,0]<stdout>:mpi-api:version:full:2.1
> [1,0]<stdout>:ident:1.6
> 
> 
> eth0      Link encap:Ethernet  HWaddr 00:30:48:95:99:CC
>          inet addr:192.168.2.1  Bcast:192.168.2.255  Mask:255.255.255.0
>          inet6 addr: fe80::230:48ff:fe95:99cc/64 Scope:Link
>          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
>          RX packets:4739875255 errors:0 dropped:1636 overruns:0 frame:0
>          TX packets:5196871012 errors:0 dropped:0 overruns:0 carrier:0
>          collisions:0 txqueuelen:1000
>          RX bytes:4959384349297 (4.5 TiB)  TX bytes:3933641883577 (3.5 TiB)
>          Memory:ef300000-ef320000
> 
> eth1      Link encap:Ethernet  HWaddr 00:30:48:95:99:CD
>          inet addr:128.2.116.104  Bcast:128.2.119.255  Mask:255.255.248.0
>          inet6 addr: fe80::230:48ff:fe95:99cd/64 Scope:Link
>          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
>          RX packets:2645952109 errors:0 dropped:13353 overruns:0 frame:0
>          TX packets:2974763570 errors:0 dropped:0 overruns:0 carrier:0
>          collisions:0 txqueuelen:1000
>          RX bytes:2024044043824 (1.8 TiB)  TX bytes:3390935387820 (3.0 TiB)
>          Memory:ef400000-ef420000
> 
> lo        Link encap:Local Loopback
>          inet addr:127.0.0.1  Mask:255.0.0.0
>          inet6 addr: ::1/128 Scope:Host
>          UP LOOPBACK RUNNING  MTU:16436  Metric:1
>          RX packets:143359307 errors:0 dropped:0 overruns:0 frame:0
>          TX packets:143359307 errors:0 dropped:0 overruns:0 carrier:0
>          collisions:0 txqueuelen:0
>          RX bytes:80413513464 (74.8 GiB)  TX bytes:80413513464 (74.8 GiB)
> 
> 
> 
> 
> _______________________________________________
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users
> <files.tar.gz>


-- 
Jeff Squyres
jsquy...@cisco.com
For corporate legal information go to: 
http://www.cisco.com/web/about/doing_business/legal/cri/

