Hi,

    I tried to run the IMB Bcast test with 144 processes on 6
nodes (24 cores/node) with the Open MPI trunk, like this:
mpirun --hostfile ~/host --bynode -np 144  ./IMB-MPI1 Bcast -npmin 144

    Most of the time, it gets stuck after IMB calls MPI_Finalize.  When I
quit with Ctrl+C, it prints a complaint like this:
[frennes.rennes.grid5000.fr:32047] [[38098,0],0,0]:route_callback trying to get message from [[38098,0],0,0] to [[38098,0],4,0]:1, routing loop
[0] func:/home/tma/opt/ompitrunk/lib/libopen-rte.so.0(opal_backtrace_print+0x1f) [0x7fb46a06d06f]
[1] func:/home/tma/opt/ompitrunk/lib/openmpi/mca_rml_oob.so [0x7fb467d69e41]
[2] func:/home/tma/opt/ompitrunk/lib/openmpi/mca_oob_tcp.so [0x7fb467b606de]
[3] func:/home/tma/opt/ompitrunk/lib/openmpi/mca_oob_tcp.so [0x7fb467b641a7]
[4] func:/home/tma/opt/ompitrunk/lib/openmpi/mca_rml_oob.so [0x7fb467d6c23a]
[5] func:/home/tma/opt/ompitrunk/lib/libopen-rte.so.0(orte_plm_base_orted_exit+0x182) [0x7fb46a02a472]
[6] func:/home/tma/opt/ompitrunk/lib/openmpi/mca_plm_rsh.so [0x7fb467f7124b]
[7] func:/home/tma/opt/ompitrunk/lib/openmpi/mca_errmgr_hnp.so [0x7fb46693abf7]
[8] func:/home/tma/opt/ompitrunk/lib/openmpi/mca_errmgr_hnp.so [0x7fb46693c531]
[9] func:/home/tma/opt/ompitrunk/lib/libopen-rte.so.0 [0x7fb46a026b3f]
[10] func:/home/tma/opt/ompitrunk/lib/libopen-rte.so.0(opal_libevent207_event_base_loop+0x3f5) [0x7fb46a07b7f5]
[11] func:/home/tma/opt/mpi/bin/mpirun [0x403b45]
[12] func:/home/tma/opt/mpi/bin/mpirun [0x402fd7]
[13] func:/lib/libc.so.6(__libc_start_main+0xe6) [0x7fb468fc81a6]
[14] func:/home/tma/opt/mpi/bin/mpirun [0x402ef9]



Thanks for your help,
Teng Ma
