Hi Ralph,

 

Thanks for the reply. The Open MPI version is 1.2b2 (because I would like
to integrate it with SGE).

 

Here is what is happening:

 

(1)     When I run with -debug-daemons (but WITHOUT -d), I get "Daemon
[0,0,27] checking in as pid 7620 on host blade28" (for example) messages
for most, but not all, of the daemons that should start up, and then it
hangs. I also notice "reconnecting to LDAP server" messages in various
/var/log/secure files, and I cannot log in while things are hung ("su:
pam_ldap: ldap_result Can't contact LDAP server" appears in
/var/log/messages). So apparently the LDAP server hits some limit on the
number of concurrent ssh sessions it can authenticate, and I'm not sure
how to address this (the checks I'm running are shown after the output
below).

(2)     When I run with -debug-daemons AND the debug option -d, all
daemons start up and check in, albeit slowly (perhaps the debug output
slows things down enough that LDAP can handle all the requests?). Then
the cpi process is apparently started for each task, but it then hangs:

 

[blade1:23816] spawn: in job_state_callback(jobid = 1, state = 0x4)

[blade1:23816] Info: Setting up debugger process table for applications

  MPIR_being_debugged = 0

  MPIR_debug_gate = 0

  MPIR_debug_state = 1

  MPIR_acquired_pre_main = 0

  MPIR_i_am_starter = 0

  MPIR_proctable_size = 800

  MPIR_proctable:

    (i, host, exe, pid) = (0, blade1, /home4/itstaff/heywood/ompi/cpi, 24193)

    ...

    (i, host, exe, pid) = (799, blade213, /home4/itstaff/heywood/ompi/cpi, 4762)

 

A "ps" on the head node shows 200 open ssh sessions, and 4 cpi processes
doing nothing. A ^C gives this:

 

mpirun: killing job...

 

--------------------------------------------------------------------------

WARNING: A process refused to die!

 

Host: blade1

PID:  24193

 

This process may still be running and/or consuming resources.


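For reference, here is roughly how I'm watching the ssh sessions and the
LDAP errors from the head node while the job is hung. This is just a
sketch of the checks behind the numbers above; the log paths are from my
setup:

    # count the ssh sessions mpirun currently has open from the head node
    ps -ef | grep '[s]sh' | wc -l

    # watch for pam_ldap failures as the launch proceeds
    tail -f /var/log/messages /var/log/secure | grep -i ldap
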
Still got a ways to go, but any ideas/suggestions are welcome!

 

Thanks,

 

Todd

________________________________

From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On
Behalf Of Ralph Castain
Sent: Friday, February 02, 2007 5:20 PM
To: Open MPI Users
Subject: Re: [OMPI users] large jobs hang on startup (deadlock?)

 

Hi Todd

To help us provide advice, could you tell us what version of Open MPI
you are using?

Meantime, try adding "-mca pls_rsh_num_concurrent 200" to your mpirun
command line. You can up the number of concurrent daemons we launch to
anything your system will support - basically, we limit the number only
because some systems have limits on the number of ssh calls we can have
active at any one time. Because we hold stdio open when running with
-debug-daemons, the number of concurrent daemons must match or exceed
the number of nodes you are trying to launch on.
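
For example, for an 800-process job across 200 nodes, something like:

    mpirun -mca pls_rsh_num_concurrent 200 -debug-daemons -np 800 ./cpi

(the process count and executable here are illustrative; the flag is
exactly as above). With -debug-daemons, just make sure the value you
pass is at least the number of nodes in the job.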

I have a "fix" in the works that will help relieve some of that
restriction, but that won't come out until a later release.

Hopefully, that will allow you to obtain more debug info about why/where
things are hanging.

Ralph


On 2/2/07 11:41 AM, "Heywood, Todd" <heyw...@cshl.edu> wrote:

I have Open MPI running fine for a small/medium number of tasks (a
simple hello or cpi program). But when I try 700 or 800 tasks, it hangs,
apparently on startup. I think this might be related to LDAP, since if I
try to log into my account while the job is hung, I am told my username
doesn't exist. However, when I added -debug to mpirun, I got the same
sequence of output as for successful smaller runs, until it hung again.
So I added --debug-daemons and got this (with an exit, i.e. no hang):
...
[blade1:31733] [0,0,0] wrote setup file
--------------------------------------------------------------------------
The rsh launcher has been given a number of 128 concurrent daemons to
launch and is in a debug-daemons option. However, the total number of
daemons to launch (200) is greater than this value. This is a scenario
that will cause the system to deadlock.
 
To avoid deadlock, either increase the number of concurrent daemons, or
remove the debug-daemons flag.
--------------------------------------------------------------------------
[blade1:31733] [0,0,0] ORTE_ERROR_LOG: Fatal in file
../../../../../orte/mca/rmgr/urm/rmgr_urm.c at line 455
[blade1:31733] mpirun: spawn failed with errno=-6
[blade1:31733] sess_dir_finalize: proc session dir not empty - leaving
 
Any ideas or suggestions appreciated.
 
Todd Heywood