On Oct 15, 2014, at 11:46 AM, Gus Correa wrote:
> Thank you Ralph and Jeff for the help!
>
> Glad to hear the segmentation fault is reproducible and will be fixed.
>
> In any case, one can just avoid the old parameter name
> (rmaps_base_schedule_policy),
> and use instead the new parameter name
> (rmaps_base_mapping_policy)
> without any problem in OMPI 1.8.3.
If you only have one thread doing MPI calls, then single and funneled are
indeed the same. If this is only happening after long run times, I'd suspect
resource exhaustion. You might check your memory footprint to see if you are
running into leak issues (could be in our library as well as your application).
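As a rough way to track the footprint Ralph suggests checking, one can log the peak resident set size once per outer iteration of the solver and watch whether it keeps growing. A minimal sketch in Python (the logging cadence and the leak interpretation are my assumptions, not anything stated in the thread; `ru_maxrss` is reported in kB on Linux):

```python
import resource

def rss_kb():
    """Peak resident set size of this process, in kB (Linux semantics)."""
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss

# A steadily growing value across otherwise-identical iterations
# suggests a leak, whether in the application or the MPI library.
if __name__ == "__main__":
    print("peak RSS:", rss_kb(), "kB")
```

The same check can be done without instrumentation by sampling `/proc/<pid>/status` (the `VmRSS` field) while the job runs.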
Thank you Ralph and Jeff for the help!
Glad to hear the segmentation fault is reproducible and will be fixed.
In any case, one can just avoid the old parameter name
(rmaps_base_schedule_policy),
and use instead the new parameter name
(rmaps_base_mapping_policy)
without any problem in OMPI 1.8.3.
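For anyone hitting the same deprecation, the switch is just a matter of renaming the MCA parameter on the command line. A sketch of the invocation (the policy value `core` and the executable name are only placeholders, not from this thread):

```shell
# Old (pre-1.8) parameter name, now deprecated:
#   mpirun --mca rmaps_base_schedule_policy core -np 16 ./my_cfd_solver
# New name, accepted by OMPI 1.8.3:
mpirun --mca rmaps_base_mapping_policy core -np 16 ./my_cfd_solver
```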
I am using Open MPI 1.8.3 on a Linux cluster to run fairly long CFD
(computational fluid dynamics) simulations using 16 MPI processes. The
calculations last several days and typically involve millions of MPI exchanges.
I use the Intel Fortran compiler, and when I compile with the -openmp option
We talked off-list -- fixed this on master and just filed
https://github.com/open-mpi/ompi-release/pull/33 to get this into the v1.8
branch.
On Oct 14, 2014, at 7:39 PM, Ralph Castain wrote:
>
> On Oct 14, 2014, at 5:32 PM, Gus Correa wrote:
>
>> Dear Open MPI fans and experts
>>
>> This