Re: [OMPI users] Restart after code hangs

2016-06-16 Thread Gus Correa

Hi Alex

You know all this, but just in case ...

Restartable code goes like this:

.start

    read the initial/previous configuration from a file
    ...
    final_step = first_step + nsteps - 1
    time_step = first_step
    while ( time_step .le. final_step )
        ... march in time ...
        time_step = time_step + 1
    end

    save the final_step configuration (or phase space) to a file
    [depending on the algorithm you may need to save the
    previous config also, or perhaps a few more]

.end
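
For concreteness, a minimal sketch of the same pattern in C (an illustration
only, not from any particular model; the checkpoint file name, the scalar
"state", and the steps-per-job count are placeholders):

/* Restartable time-stepping sketch: read checkpoint, march nsteps, write
 * checkpoint.  Everything here stands in for the real model. */
#include <stdio.h>
#include <stdlib.h>

#define NSTEPS_PER_JOB 1000          /* time granularity per job submission */
#define CHECKPOINT     "checkpoint.dat"

int main(void)
{
    long first_step = 0;
    double state = 0.0;              /* stands in for the real configuration */

    /* Read the initial/previous configuration, if a checkpoint exists. */
    FILE *fp = fopen(CHECKPOINT, "r");
    if (fp != NULL) {
        if (fscanf(fp, "%ld %lf", &first_step, &state) != 2) {
            fprintf(stderr, "corrupt checkpoint\n");
            return 1;
        }
        fclose(fp);
    }

    long final_step = first_step + NSTEPS_PER_JOB - 1;
    for (long step = first_step; step <= final_step; step++)
        state += 1.0;                /* ... march in time ... */

    /* Save the final configuration so the next job picks up from here. */
    fp = fopen(CHECKPOINT, "w");
    if (fp == NULL) { perror(CHECKPOINT); return 1; }
    fprintf(fp, "%ld %.17g\n", final_step + 1, state);
    fclose(fp);
    return 0;
}

In a real MPI code, rank 0 would typically handle the checkpoint I/O, or each
rank would write its own piece of the state.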


Then restart the job time and again, until the desired
number of time steps is completed.

Job queue systems/resource managers allow a job to resubmit itself,
so unless a job fails the whole sequence feels like a single time integration.

If a job fails in the middle, you don't lose all the work:
just restart from the last saved configuration.
That is the only situation where you need to "monitor" the code.
Resource managers/queue systems can also email you if a job fails,
prompting you to intervene manually.

The time granularity per job (nsteps) is up to you.
Normally it is adjusted to the max walltime of job queues
(in a shared computer/cluster),
but in your case it can be adjusted to how often the program fails.

All atmosphere/ocean/climate/weather-forecast models work
this way (that's what we mostly run here).
I guess most CFD, computational chemistry, etc., programs do as well.

I hope this helps,
Gus Correa


On 06/16/2016 05:25 PM, Alex Kaiser wrote:

Hello,

I have an MPI code which sometimes hangs, simply stops running. It is
not clear why and it uses many large third party libraries so I do not
want to try to fix it. The code is easy to restart, but then it needs to
be monitored closely by me, and I'd prefer to do it automatically.

Is there a way, within an MPI process, to detect the hang and abort if so?

In pseudocode, I would like to do something like

for (all time steps)
    step
    if (nothing has happened for x minutes)
        call mpi abort to return control to the shell
    endif
endfor

This structure does not work, however, because once the code is stuck it
can no longer do anything, including check on itself.


Thank you,
Alex








Re: [OMPI users] Restart after code hangs

2016-06-16 Thread Ralph Castain
Which version of OMPI are you using?

> On Jun 16, 2016, at 2:25 PM, Alex Kaiser  wrote:
> 
> Hello, 
> 
> I have an MPI code which sometimes hangs, simply stops running. It is not 
> clear why and it uses many large third party libraries so I do not want to 
> try to fix it. The code is easy to restart, but then it needs to be monitored 
> closely by me, and I'd prefer to do it automatically.
> 
> Is there a way, within an MPI process, to detect the hang and abort if so? 
> 
> In pseudocode, I would like to do something like
> 
> for (all time steps)
>     step
>     if (nothing has happened for x minutes)
>         call mpi abort to return control to the shell
>     endif
> endfor
> 
> This structure does not work, however, because once the code is stuck it
> can no longer do anything, including check on itself.
> 
> Thank you, 
> Alex 
> 



[OMPI users] Restart after code hangs

2016-06-16 Thread Alex Kaiser
Hello,

I have an MPI code which sometimes hangs, simply stops running. It is not
clear why and it uses many large third party libraries so I do not want to
try to fix it. The code is easy to restart, but then it needs to be
monitored closely by me, and I'd prefer to do it automatically.

Is there a way, within an MPI process, to detect the hang and abort if so?

In pseudocode, I would like to do something like

for (all time steps)
    step
    if (nothing has happened for x minutes)
        call mpi abort to return control to the shell
    endif
endfor

This structure does not work, however, because once the code is stuck it
can no longer do anything, including check on itself.
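
(A hedged sketch of one possible workaround, for illustration only: run a
watchdog thread alongside the time loop; it watches a heartbeat counter that
the loop bumps each step and calls MPI_Abort if the counter stops advancing,
so it keeps running even when the main thread is stuck inside an MPI call.
It assumes the MPI library provides MPI_THREAD_MULTIPLE; the timeout, step
count, and the body of the step are placeholders.)

#include <mpi.h>
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>
#include <unistd.h>

#define TIMEOUT_SECONDS 600            /* "x minutes" from the pseudocode */

static atomic_long heartbeat;          /* last completed time step */

static void *watchdog(void *arg)
{
    (void)arg;
    long last = atomic_load(&heartbeat);
    for (;;) {
        sleep(TIMEOUT_SECONDS);        /* also a cancellation point */
        long now = atomic_load(&heartbeat);
        if (now == last) {             /* no step finished since last check */
            fprintf(stderr, "no progress for %d s, aborting\n", TIMEOUT_SECONDS);
            MPI_Abort(MPI_COMM_WORLD, 1);
        }
        last = now;
    }
    return NULL;
}

int main(int argc, char **argv)
{
    int provided;
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);

    pthread_t tid;
    pthread_create(&tid, NULL, watchdog, NULL);

    for (long step = 0; step < 100000; step++) {
        /* step(): the real computation and MPI communication would go here */
        atomic_fetch_add(&heartbeat, 1);
    }

    pthread_cancel(tid);               /* stop the watchdog before finalizing */
    pthread_join(tid, NULL);
    MPI_Finalize();
    return 0;
}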


Thank you,
Alex


[OMPI users] Avoiding the memory registration costs by having memory always registered, is it possible with Linux ?

2016-06-16 Thread Audet, Martin
Hi,

After reading the FAQ about the methods Open MPI uses to deal with memory 
registration (or pinning) for InfiniBand adapters, it seems that we could avoid 
all the overhead and complexity of memory registration/deregistration, 
registration-cache lookups and updates, and memory management (ummunotify), and 
also get better overlap of communication with computation (the communication 
hardware could do its job independently, without resorting to 
registration/transfer/deregistration pipelines), simply by having all user 
process memory registered all the time.
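
(For illustration only, and assuming a node with a working InfiniBand HCA and 
libibverbs: a minimal sketch of the per-buffer registration whose cost is being 
discussed, with a rough timing of the ibv_reg_mr() call. The 64 MiB buffer size 
is arbitrary and error handling is minimal.)

/* Time a single memory registration (pinning) of a user buffer. */
#include <infiniband/verbs.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void)
{
    int num = 0;
    struct ibv_device **devs = ibv_get_device_list(&num);
    if (devs == NULL || num == 0) { fprintf(stderr, "no IB devices found\n"); return 1; }

    struct ibv_context *ctx = ibv_open_device(devs[0]);
    if (ctx == NULL) { fprintf(stderr, "ibv_open_device failed\n"); return 1; }
    struct ibv_pd *pd = ibv_alloc_pd(ctx);
    if (pd == NULL) { fprintf(stderr, "ibv_alloc_pd failed\n"); return 1; }

    size_t len = 64UL << 20;                       /* 64 MiB user buffer */
    void *buf = malloc(len);
    if (buf == NULL) { perror("malloc"); return 1; }

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_READ |
                                   IBV_ACCESS_REMOTE_WRITE);
    clock_gettime(CLOCK_MONOTONIC, &t1);
    if (mr == NULL) { perror("ibv_reg_mr"); return 1; }

    double ms = (t1.tv_sec - t0.tv_sec) * 1e3 + (t1.tv_nsec - t0.tv_nsec) / 1e6;
    printf("registering %zu bytes took %.3f ms\n", len, ms);

    ibv_dereg_mr(mr);                              /* deregistration costs time too */
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    free(buf);
    return 0;
}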

Of course a configuration with all memory registered is not appropriate in a 
general setting (e.g., a desktop environment), as it would make swapping almost 
impossible.

But in the context of an HPC node, where the processes are not supposed to swap 
and the OS is not supposed to overcommit memory, not being able to swap doesn't 
appear to be a problem.

Moreover, since the maximum total memory used per process is often predefined at 
application start as a resource specified to the queuing system, the OS could 
easily reserve a defined amount of extra memory for its own needs instead of 
swapping out user process memory.

I guess that specialized (non-Linux) compute-node operating systems do this.

But is it possible, and does it make sense, with Linux?

Thanks,

Martin Audet



[OMPI users] "failed to create queue pair" problem, but settings appear OK

2016-06-16 Thread Sasso, John (GE Power, Non-GE)
Thank you, Nathan.  Since the default btl_openib_receive_queues setting is:

P,128,256,192,128:S,2048,1024,1008,64:S,12288,1024,1008,64:S,65536,1024,1008,64

this would mean that, with max_qp = 392632 and 4 QPs above, the "actual" max 
would be 392632 / 4 = 98158.   Using this value in my prior math, the upper 
bound on the number of 24-core nodes would be  98158 / 24^2 ~ 170.This 
comes closer to the limit I encountered while testing.   I'm sure there are 
other particulars I am not accounting for in this math, but the approximation 
is reasonable.  

Thanks for the clarification, Nathan!

--john

-Original Message-
From: users [mailto:users-boun...@open-mpi.org] On Behalf Of Nathan Hjelm
Sent: Thursday, June 16, 2016 9:56 AM
To: Open MPI Users
Subject: EXT: Re: [OMPI users] "failed to create queue pair" problem, but 
settings appear OK

XRC support is greatly improved in 1.10.x and 2.0.0. Would be interesting to 
see if a newer version fixed the shutdown hang.

When calculating the required number of queue pairs you also have to divide by 
the number of queue pairs in the btl_openib_receive_queues parameter. 
Additionally Open MPI uses 1 qp/rank for connections (1.7+) and there are some 
in use by IPoIB and other services.

-Nathan

> On Jun 16, 2016, at 7:15 AM, Sasso, John (GE Power, Non-GE) 
>  wrote:
> 
> Nathan,
> 
> Thank you for the suggestion.   I tried your btl_openib_receive_queues 
> setting with a 4200+ core IMB job, and the job ran (great!).   The shutdown 
> of the job took such a long time that after 6 minutes, I had to 
> force-terminate the job.
> 
> When I tried using this scheme before, with the following recommended by the 
> OpenMPI FAQ, I got some odd errors:
> 
> --mca btl openib,sm,self --mca btl_openib_receive_queues 
> X,128,256,192,128:X,2048,256,128,32:X,12288,256,128,32:X,65536,256,128,32
> 
> However, when I tried:
> 
> --mca btl openib,sm,self --mca btl_openib_receive_queues 
> X,4096,1024:X,12288,512:X,65536,512
> 
> I got success with my aforementioned job.
> 
> I am going to do more testing, with the goal of getting a 5000 core job to 
> run successfully.  If I can, then down the road my concern is the impact the 
> btl_openib_receive_queues mca parameter (above) will have on lower-scale (< 
> 1024 cores) jobs, since the parameter change to the global openmpi config 
> file would impact ALL users and jobs of all scales.
> 
> Chuck – as I noted in my first email, log_num_mtt was set fine, so that is 
> not the issue here.
> 
> Finally, regarding running out of QPs, I examined the output of 'ibv_devinfo 
> -v' on our compute nodes.  I see the following pertinent settings:
> 
> max_qp: 392632
> max_qp_wr:  16351
> max_qp_rd_atom: 16
> max_qp_init_rd_atom:128
> max_cq: 65408
>max_cqe:4194303
> 
> Figuring that max_qp is the prime limitation I am running into here when 
> using the PP and SRQ QPs, and considering 24 cores per node, this would seem to 
> imply that an upper bound on the number of nodes would be 392632 / 24^2 ~ 681 
> nodes.  This does not make sense, because I saw the QP creation failure error 
> (again, NO error about failing to register enough memory) with as few as 177 
> 24-core nodes!  I don't know how to make sense of this, though I don't question 
> that we were running out of QPs.
> 
> --john
> 
> 
> From: users [mailto:users-boun...@open-mpi.org] On Behalf Of Nathan 
> Hjelm
> Sent: Wednesday, June 15, 2016 2:43 PM
> To: Open MPI Users
> Subject: EXT: Re: [OMPI users] "failed to create queue pair" problem, 
> but settings appear OK
> 
> You ran out of queue pairs. There is no way around this for larger all-to-all 
> transfers when using the openib btl and SRQ. Need O(cores^2) QPs to fully 
> connect with SRQ or PP QPs. I recommend using XRC instead by adding:
> 
> btl_openib_receive_queues = X,4096,1024:X,12288,512:X,65536,512
> 
> 
> to your openmpi-mca-params.conf
> 
> or
> 
> -mca btl_openib_receive_queues X,4096,1024:X,12288,512:X,65536,512
> 
> 
> to the mpirun command line.
> 
> 
> -Nathan
> 
> On Jun 15, 2016, at 12:35 PM, "Sasso, John (GE Power, Non-GE)" 
>  wrote:
> 
> Chuck,
> 
> The per-process limits appear fine, including those for the resource mgr 
> daemons:
> 
> Limit Soft Limit Hard Limit Units
> Max address space unlimited unlimited bytes
> Max core file size 0 0 bytes
> Max cpu time unlimited unlimited seconds
> Max data size unlimited unlimited bytes
> Max file locks unlimited unlimited locks
> Max file size unlimited unlimited bytes
> Max locked memory unlimited unlimited bytes
> Max msgqueue size 819200 819200 bytes
> Max nice priority 0 0
> Max open files 16384 16384 files
> Max pending signals 515625 515625 signals
> Max processes 515625 515625 processes
> Max realtime priority 0 0
> Max realtime timeout unlimited

Re: [OMPI users] "failed to create queue pair" problem, but settings appear OK

2016-06-16 Thread Nathan Hjelm
XRC support is greatly improved in 1.10.x and 2.0.0. Would be interesting to 
see if a newer version fixed the shutdown hang.

When calculating the required number of queue pairs you also have to divide by 
the number of queue pairs in the btl_openib_receive_queues parameter. 
Additionally Open MPI uses 1 qp/rank for connections (1.7+) and there are some 
in use by IPoIB and other services.

-Nathan

> On Jun 16, 2016, at 7:15 AM, Sasso, John (GE Power, Non-GE) 
>  wrote:
> 
> Nathan,
> 
> Thank you for the suggestion.   I tried your btl_openib_receive_queues 
> setting with a 4200+ core IMB job, and the job ran (great!).   The shutdown 
> of the job took such a long time that after 6 minutes, I had to 
> force-terminate the job.
> 
> When I tried using this scheme before, with the following recommended by the 
> OpenMPI FAQ, I got some odd errors:
> 
> --mca btl openib,sm,self --mca btl_openib_receive_queues 
> X,128,256,192,128:X,2048,256,128,32:X,12288,256,128,32:X,65536,256,128,32
> 
> However, when I tried:
> 
> --mca btl openib,sm,self --mca btl_openib_receive_queues 
> X,4096,1024:X,12288,512:X,65536,512
> 
> I got success with my aforementioned job.
> 
> I am going to do more testing, with the goal of getting a 5000 core job to 
> run successfully.  If I can, then down the road my concern is the impact the 
> btl_openib_receive_queues mca parameter (above) will have on lower-scale (< 
> 1024 cores) jobs, since the parameter change to the global openmpi config 
> file would impact ALL users and jobs of all scales.
> 
> Chuck – as I noted in my first email, log_num_mtt was set fine, so that is 
> not the issue here.
> 
> Finally, regarding running out of QPs, I examined the output of 'ibv_devinfo 
> -v' on our compute nodes.  I see the following pertinent settings:
> 
> max_qp: 392632
> max_qp_wr:  16351
> max_qp_rd_atom: 16
> max_qp_init_rd_atom:128
> max_cq: 65408
>max_cqe:4194303
> 
> Figuring that max_qp is the prime limitation I am running into here when 
> using the PP and SRQ QPs, and considering 24 cores per node, this would seem to 
> imply that an upper bound on the number of nodes would be 392632 / 24^2 ~ 681 
> nodes.  This does not make sense, because I saw the QP creation failure error 
> (again, NO error about failing to register enough memory) with as few as 177 
> 24-core nodes!  I don't know how to make sense of this, though I don't question 
> that we were running out of QPs.
> 
> --john
> 
> 
> From: users [mailto:users-boun...@open-mpi.org] On Behalf Of Nathan Hjelm
> Sent: Wednesday, June 15, 2016 2:43 PM
> To: Open MPI Users
> Subject: EXT: Re: [OMPI users] "failed to create queue pair" problem, but 
> settings appear OK
> 
> You ran out of queue pairs. There is no way around this for larger all-to-all 
> transfers when using the openib btl and SRQ. Need O(cores^2) QPs to fully 
> connect with SRQ or PP QPs. I recommend using XRC instead by adding:
> 
> btl_openib_receive_queues = X,4096,1024:X,12288,512:X,65536,512
> 
> 
> to your openmpi-mca-params.conf
> 
> or
> 
> -mca btl_openib_receive_queues X,4096,1024:X,12288,512:X,65536,512
> 
> 
> to the mpirun command line.
> 
> 
> -Nathan
> 
> On Jun 15, 2016, at 12:35 PM, "Sasso, John (GE Power, Non-GE)" 
>  wrote:
> 
> Chuck,
> 
> The per-process limits appear fine, including those for the resource mgr 
> daemons:
> 
> Limit Soft Limit Hard Limit Units
> Max address space unlimited unlimited bytes
> Max core file size 0 0 bytes
> Max cpu time unlimited unlimited seconds
> Max data size unlimited unlimited bytes
> Max file locks unlimited unlimited locks
> Max file size unlimited unlimited bytes
> Max locked memory unlimited unlimited bytes
> Max msgqueue size 819200 819200 bytes
> Max nice priority 0 0
> Max open files 16384 16384 files
> Max pending signals 515625 515625 signals
> Max processes 515625 515625 processes
> Max realtime priority 0 0
> Max realtime timeout unlimited unlimited us
> Max resident set unlimited unlimited bytes
> Max stack size 30720 unlimited bytes
> 
> 
> 
> As for the FAQ re registered memory, checking our OpenMPI settings with 
> ompi_info, we have:
> 
> mpool_rdma_rcache_size_limit = 0 ==> Open MPI will register as much user 
> memory as necessary
> btl_openib_free_list_max = -1 ==> Open MPI will try to allocate as many 
> registered buffers as it needs
> btl_openib_eager_rdma_num = 16
> btl_openib_max_eager_rdma = 16
> btl_openib_eager_limit = 12288
> 
> 
> Other suggestions welcome. Hitting a brick wall here. Thanks!
> 
> --john
> 
> 
> 
> -Original Message-
> From: users [mailto:users-boun...@open-mpi.org] On Behalf Of Gus Correa
> Sent: Wednesday, June 15, 2016 1:39 PM
> To: Open MPI Users
> Subject: EXT: Re: [OMPI users] "failed to create queue pair" problem, but 
> settings appear OK

[OMPI users] "failed to create queue pair" problem, but settings appear OK

2016-06-16 Thread Sasso, John (GE Power, Non-GE)
Nathan,

Thank you for the suggestion.   I tried your btl_openib_receive_queues setting 
with a 4200+ core IMB job, and the job ran (great!).   The shutdown of the job 
took such a long time that after 6 minutes, I had to force-terminate the job.

When I tried using this scheme before, with the following recommended by the 
OpenMPI FAQ, I got some odd errors:

--mca btl openib,sm,self --mca btl_openib_receive_queues 
X,128,256,192,128:X,2048,256,128,32:X,12288,256,128,32:X,65536,256,128,32

However, when I tried:

--mca btl openib,sm,self --mca btl_openib_receive_queues 
X,4096,1024:X,12288,512:X,65536,512

I got success with my aforementioned job.

I am going to do more testing, with the goal of getting a 5000 core job to run 
successfully.  If I can, then down the road my concern is the impact the 
btl_openib_receive_queues mca parameter (above) will have on lower-scale (< 
1024 cores) jobs, since the parameter change to the global openmpi config file 
would impact ALL users and jobs of all scales.

Chuck - as I noted in my first email, log_num_mtt was set fine, so that is not 
the issue here.

Finally, regarding running out of QPs, I examined the output of 'ibv_devinfo 
-v' on our compute nodes.  I see the following pertinent settings:

max_qp: 392632
max_qp_wr:  16351
max_qp_rd_atom: 16
max_qp_init_rd_atom:128
max_cq: 65408
   max_cqe:4194303

Figuring that max_qp is the prime limitation I am running into here when using 
the PP and SRQ QPs, and considering 24 cores per node, this would seem to imply 
that an upper bound on the number of nodes would be 392632 / 24^2 ~ 681 nodes.  
This does not make sense, because I saw the QP creation failure error (again, 
NO error about failing to register enough memory) with as few as 177 24-core 
nodes!  I don't know how to make sense of this, though I don't question that we 
were running out of QPs.

--john


From: users [mailto:users-boun...@open-mpi.org] On Behalf Of Nathan Hjelm
Sent: Wednesday, June 15, 2016 2:43 PM
To: Open MPI Users
Subject: EXT: Re: [OMPI users] "failed to create queue pair" problem, but 
settings appear OK

You ran out of queue pairs. There is no way around this for larger all-to-all 
transfers when using the openib btl and SRQ. Need O(cores^2) QPs to fully 
connect with SRQ or PP QPs. I recommend using XRC instead by adding:


btl_openib_receive_queues = X,4096,1024:X,12288,512:X,65536,512

to your openmpi-mca-params.conf

or

-mca btl_openib_receive_queues X,4096,1024:X,12288,512:X,65536,512


to the mpirun command line.


-Nathan

On Jun 15, 2016, at 12:35 PM, "Sasso, John (GE Power, Non-GE)" 
> wrote:
Chuck,

The per-process limits appear fine, including those for the resource mgr 
daemons:

Limit Soft Limit Hard Limit Units
Max address space unlimited unlimited bytes
Max core file size 0 0 bytes
Max cpu time unlimited unlimited seconds
Max data size unlimited unlimited bytes
Max file locks unlimited unlimited locks
Max file size unlimited unlimited bytes
Max locked memory unlimited unlimited bytes
Max msgqueue size 819200 819200 bytes
Max nice priority 0 0
Max open files 16384 16384 files
Max pending signals 515625 515625 signals
Max processes 515625 515625 processes
Max realtime priority 0 0
Max realtime timeout unlimited unlimited us
Max resident set unlimited unlimited bytes
Max stack size 30720 unlimited bytes



As for the FAQ re registered memory, checking our OpenMPI settings with 
ompi_info, we have:

mpool_rdma_rcache_size_limit = 0 ==> Open MPI will register as much user memory 
as necessary
btl_openib_free_list_max = -1 ==> Open MPI will try to allocate as many 
registered buffers as it needs
btl_openib_eager_rdma_num = 16
btl_openib_max_eager_rdma = 16
btl_openib_eager_limit = 12288


Other suggestions welcome. Hitting a brick wall here. Thanks!

--john



-Original Message-
From: users [mailto:users-boun...@open-mpi.org] On Behalf Of Gus Correa
Sent: Wednesday, June 15, 2016 1:39 PM
To: Open MPI Users
Subject: EXT: Re: [OMPI users] "failed to create queue pair" problem, but 
settings appear OK

Hi John

1) For diagnostic, you could check the actual "per process" limits on the nodes 
while that big job is running:

cat /proc/$PID/limits

2) If you're using a resource manager to launch the job, the resource manager 
daemons (local to the nodes) may have to set the memlock and other 
limits, so that the Open MPI processes inherit them.
I use Torque, so I put these lines in the pbs_mom (Torque local daemon) 
initialization script:

# pbs_mom system limits
# max file descriptors
ulimit -n 32768
# locked memory
ulimit -l unlimited
# stacksize
ulimit -s unlimited

3) See also this FAQ related to registered memory.
I set these parameters in /etc/modprobe.d/mlx4_core.conf, but where they're set 
may depend