Re: [OMPI users] Hybrid OpenMPI/OpenMP leading to deadlocks?

2014-10-16 Thread McGrattan, Kevin B. Dr.
Yes, the code is restartable, and our users often do this. We have users in 
countries with unreliable power supplies. However, we still try to make the 
code as robust as possible. Usually, if I do something improper in my MPI 
coding, the failure occurs right away, but I have run out of ideas as to why the 
code would fail after two days, other than network hiccups beyond my control. In 
this case, I run 15 jobs simultaneously on a new Linux cluster that is 
dedicated solely to our program. Of the 15 jobs, at most one will fail. 
That makes me think something random is at play, but I can't think of 
ways to further bullet-proof the code. 

When you say "poorly designed I/O", what do you mean? This is something that I 
have not really considered.

-----Original Message-----
From: users [mailto:users-boun...@open-mpi.org] On Behalf Of Gus Correa
Sent: Thursday, October 16, 2014 11:40 AM
To: Open MPI Users
Subject: Re: [OMPI users] Hybrid OpenMPI/OpenMP leading to deadlocks?

Hi Kevin

Wouldn't it be possible to make your code restartable, by saving the 
appropriate fluid configuration/phase space variables, and splitting your long 
run into smaller pieces?
That is a very common strategy for large PDE integrations.
Time invested in programming the restart features may pay off in more steady 
jobs, and in integrations that actually complete.

If a long run fails, because of a network glitch, exhaustion of resources, a slow 
NFS server, or any other reason, you lose a long stretch of computing time.
If a short run fails, you can always restart from the previous leg/job, and 
your loss is small.

Nearly all parallel code that we run here is restartable (GFD is a form of CFD 
:) ), both to prevent long-run failures like the one you mentioned and to 
provide better throughput and user time-sharing on the cluster as a whole.
This is nothing new: restartable codes plus a short-job queue-time policy 
are very common out there.

Here, most problems with long runs
(we have some non-restartable serial-code die-hards) happen due to NFS issues 
(busy server, slow response, etc.) and code with poorly designed I/O.

My two cents,
Gus Correa


Re: [OMPI users] Hybrid OpenMPI/OpenMP leading to deadlocks?

2014-10-16 Thread Gus Correa

Hi Kevin

Wouldn't it be possible to make your code restartable, by saving
the appropriate fluid configuration/phase space variables,
and splitting your long run into smaller pieces?
That is a very common strategy for large PDE integrations.
Time invested in programming the restart features may pay off
in more steady jobs, and in integrations that actually complete.

If a long run fails, because of a network glitch, exhaustion
of resources, or a slow NFS server, or any other reason,
you lose a long stretch of computing time.
If a short run fails, you can always restart from the previous leg/job,
and your loss is small.

Nearly all parallel code that we run here is restartable
(GFD is a form of CFD :) ),
both to prevent long-run failures like the one you mentioned
and to provide better throughput and user time-sharing
on the cluster as a whole.
This is nothing new:
restartable codes plus a short-job queue-time policy
are very common out there.

Here, most problems with long runs
(we have some non-restartable serial-code die-hards)
happen due to NFS issues (busy server, slow response, etc.)
and code with poorly designed I/O.
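
The restart strategy above can be sketched roughly as follows. This is a 
minimal illustration, not the actual solver's code: the routine names, the 
per-rank file naming, and the single state array are hypothetical stand-ins 
for whatever variables the fluid code actually carries.

```fortran
! Sketch of a per-rank checkpoint/restart pattern (hypothetical names).
subroutine write_checkpoint(rank, step, u, n)
  implicit none
  integer, intent(in) :: rank, step, n
  double precision, intent(in) :: u(n)
  character(len=64) :: fname
  integer :: iu
  ! One file per MPI rank; overwrite the previous leg's checkpoint.
  write(fname, '(A,I0,A)') 'chkpt_rank', rank, '.dat'
  open(newunit=iu, file=fname, form='unformatted', status='replace')
  write(iu) step, n
  write(iu) u
  close(iu)
end subroutine write_checkpoint

subroutine read_checkpoint(rank, step, u, n, found)
  implicit none
  integer, intent(in)  :: rank, n
  integer, intent(out) :: step
  double precision, intent(out) :: u(n)
  logical, intent(out) :: found
  character(len=64) :: fname
  integer :: iu, ios, nsaved
  write(fname, '(A,I0,A)') 'chkpt_rank', rank, '.dat'
  open(newunit=iu, file=fname, form='unformatted', status='old', iostat=ios)
  found = (ios == 0)
  if (.not. found) return   ! no checkpoint: start from initial conditions
  read(iu) step, nsaved
  read(iu) u
  close(iu)
end subroutine read_checkpoint
```

At startup each rank calls read_checkpoint and resumes from the saved step if 
a file exists; during the run, write_checkpoint is called every N steps, so a 
failed job loses at most one checkpoint interval.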

My two cents,
Gus Correa

___
users mailing list
us...@open-mpi.org <mailto:us...@open-mpi.org>
Subscription: http://www.open-mpi.org/mailman/listinfo.cgi/users
Link to this post: http://www.open-mpi.org/community/lists/users/2014/10/25500.php




Re: [OMPI users] Hybrid OpenMPI/OpenMP leading to deadlocks?

2014-10-16 Thread McGrattan, Kevin B. Dr.
The individual MPI processes appear to be using a few percent of the system 
memory.

I have created a loop containing repeated calls to MPI_TESTALL. When the 
process is in this loop for more than 10 s, it calls MPI_ABORT. So the only 
error message I see is related to all the processes being stopped suddenly.

Is it reasonable to ABORT after 10 s? If I just use MPI_WAITALL, the job hangs 
indefinitely. With the timeout I at least know which MPI exchange and which MPI 
process are hanging.

Is there a way to change the number of retries for a given MPI send/receive, or 
a more graceful timeout function than what I have coded?
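
The guarded wait loop described above might look something like this sketch 
(the routine and variable names are illustrative, not from the actual code): 
instead of blocking forever in MPI_WAITALL, poll MPI_TESTALL and abort once a 
wall-clock budget is exhausted, so the hang is at least reported.

```fortran
! Poll MPI_TESTALL with a wall-clock timeout instead of blocking in
! MPI_WAITALL; abort the job if the exchange never completes.
subroutine waitall_with_timeout(nreq, req, timeout_s, tag)
  use mpi
  implicit none
  integer, intent(in)    :: nreq, tag
  integer, intent(inout) :: req(nreq)
  double precision, intent(in) :: timeout_s
  integer :: ierr, stats(MPI_STATUS_SIZE, nreq)
  logical :: done
  double precision :: t0
  t0 = MPI_WTIME()
  do
    call MPI_TESTALL(nreq, req, done, stats, ierr)
    if (done) return
    if (MPI_WTIME() - t0 > timeout_s) then
      write(*,'(A,I0)') 'Timed out waiting for MPI exchange, tag ', tag
      call MPI_ABORT(MPI_COMM_WORLD, 1, ierr)
    end if
  end do
end subroutine waitall_with_timeout
```

A busy-poll like this burns a core while waiting; adding a short sleep inside 
the loop is a possible refinement if that matters.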




Re: [OMPI users] Hybrid OpenMPI/OpenMP leading to deadlocks?

2014-10-16 Thread Ralph Castain
If you only have one thread doing MPI calls, then single and funneled are 
indeed the same. If this is only happening after long run times, I'd suspect 
resource exhaustion. You might check your memory footprint to see if you are 
running into leak issues (could be in our library as well as your app). When 
you eventually deadlock, do you get any error output? If you are using 
InfiniBand and run out of queue pairs (QPs), you should at least get something 
saying that.
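
One low-tech way to watch each rank's memory footprint over time, as suggested 
above, is to parse /proc/self/status periodically. This sketch assumes a Linux 
procfs; the subroutine name is made up.

```fortran
! Report this process's resident set size (VmRSS) from Linux procfs,
! so each rank can log its memory footprint and expose a slow leak.
subroutine report_rss(rank)
  implicit none
  integer, intent(in) :: rank
  character(len=256) :: line
  integer :: iu, ios
  open(newunit=iu, file='/proc/self/status', status='old', &
       action='read', iostat=ios)
  if (ios /= 0) return
  do
    read(iu, '(A)', iostat=ios) line
    if (ios /= 0) exit
    if (line(1:6) == 'VmRSS:') then
      write(*,'(A,I0,2A)') 'rank ', rank, ': ', trim(line)
      exit
    end if
  end do
  close(iu)
end subroutine report_rss
```

Calling this every few thousand time steps and comparing the numbers across 
days of runtime would show whether either the application or the MPI library 
is slowly leaking.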





[OMPI users] Hybrid OpenMPI/OpenMP leading to deadlocks?

2014-10-15 Thread McGrattan, Kevin B. Dr.
I am using OpenMPI 1.8.3 on a Linux cluster to run fairly long CFD 
(computational fluid dynamics) simulations using 16 MPI processes. The 
calculations last several days and typically involve millions of MPI exchanges. 
I use the Intel Fortran compiler, and when I compile with the -openmp option 
and run with only one OpenMP thread per MPI process, I tend to get deadlocks 
after several days of computing. These deadlocks only occur in about 1 out of 
10 calculations, and only after running for several days. I have eliminated 
things like network glitches, power spikes, etc., as possibilities. The only 
thing left is the inclusion of the OpenMP option - even though I am running 
with just one OpenMP thread per MPI process.

I have read about the issues with MPI_INIT_THREAD, and I have reduced the 
REQUIRED level of support to MPI_THREAD_FUNNELED, down from 
MPI_THREAD_SERIALIZED. The latter was not necessary for my application, and I 
think the reduction in the level of support has helped, but not completely 
removed, the deadlocking. Of course, there is always the possibility that I 
have coded my MPI calls improperly, even though the code runs for days on end. 
Maybe there's a one-in-a-million chance that rank x gets to a point in the code 
so far ahead of all the other ranks that a deadlock occurs. Placing 
MPI_BARRIERs has not helped me find any such situation.

So I have two questions. First, has anyone experienced something similar to 
this where inclusion of OpenMP in an MPI code has caused deadlocks? Second, is 
it possible that reducing the REQUIRED level of support to MPI_THREAD_SINGLE 
will cause the code to behave differently than FUNNELED? I have read in another 
post that SINGLE and FUNNELED are essentially the same thing, and I have noted 
that I can spawn OpenMP threads even when I use SINGLE.
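
The initialization described above can be sketched as follows (assuming the 
Fortran mpi module; a minimal illustration, not the actual application): 
request FUNNELED support and check what the library actually granted before 
any OpenMP threads are spawned. The thread-level constants are guaranteed to 
be monotonically ordered, so a simple comparison suffices.

```fortran
! Request MPI_THREAD_FUNNELED and verify the provided level before
! spawning OpenMP threads; MPI calls then come from the master thread only.
program init_funneled
  use mpi
  implicit none
  integer :: required, provided, ierr
  required = MPI_THREAD_FUNNELED
  call MPI_INIT_THREAD(required, provided, ierr)
  if (provided < required) then
    write(*,'(A,I0)') 'MPI library only provides thread level ', provided
    call MPI_ABORT(MPI_COMM_WORLD, 1, ierr)
  end if
  ! ... OpenMP-parallel compute; MPI calls from the master thread only ...
  call MPI_FINALIZE(ierr)
end program init_funneled
```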

Thanks

Kevin McGrattan
National Institute of Standards and Technology
100 Bureau Drive, Mail Stop 8664
Gaithersburg, Maryland 20899

301 975 2712