Max,
The recursive call should not be an issue: MPI_Allreduce is a blocking
operation, so you cannot recurse before the previous call completes.
What is the size of the data exchanged in the MPI_Alltoall?
George.
On Sep 30, 2013, at 17:09 , Max Staufer wrote:
> Well, haven't tried 1.7.2 y
Geoffroy,
Good catch. I pushed in the trunk in r29303. Thanks for the patch.
George.
On Sep 30, 2013, at 23:29 , "Vallee, Geoffroy R." wrote:
> Hi,
>
> There are a few direct references to ORTE symbols in the current OMPI layer
> instead of references through the RTE layer. The attached patc
Hi,
There are a few direct references to ORTE symbols in the current OMPI layer
instead of references through the RTE layer. The attached patches fix the problem.
Thanks,
proc_c.patch
Description: proc_c.patch
comm_c.patch
Description: comm_c.patch
All should be fixed with regard to the neighborhood collectives Fortran
interface now.
Please let Nathan know if you have any further issues; thanks.
On Sep 29, 2013, at 7:59 AM, Jeff Squyres (jsquyres) wrote:
> FYI: I discovered yesterday (and Mellanox reminded me today) that Fortran
> bui
Per some off-list emails, the commit message was referring to alignment issues
when RTE_DEBUG was set to 1.
I agree: it wasn't the most descriptive/accurate commit message. :-\
On Sep 30, 2013, at 11:05 AM, Tim Mattox wrote:
> FYI - The description does not seem to match the contents of this
FYI - The description does not seem to match the contents of this change.
On Mon, Sep 30, 2013 at 2:18 AM, wrote:
> Author: miked (Mike Dubman)
> Date: 2013-09-30 02:18:12 EDT (Mon, 30 Sep 2013)
> New Revision: 29293
> URL: https://svn.open-mpi.org/trac/ompi/changeset/29293
>
> Log:
> fix memory
On 25/09/2013 19:08, Ralph Castain wrote:
> Wow - that is hard to understand as that code path hasn't changed in quite
> some time. Could you please do two things for us?
>
> 1. tell us how you are configuring OMPI
Sure.
Here are the options list:
configure: running /bin/bash './configure' CFLA
Well, haven't tried 1.7.2 yet, but to elaborate on the problem a little
more:
the growth happens if we use an MPI_ALLREDUCE in a recursive
subroutine call; that is, in Fortran 90 terms, the
subroutine calls itself again, and is specially marked (RECURSIVE) in order
to work properly. Apart from that no
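
For illustration, the pattern Max describes might look like the minimal
Fortran 90 sketch below. This is a hypothetical reproducer, not Max's actual
code; the subroutine name, data sizes, and recursion depth are all invented:

```fortran
! Hypothetical minimal reproducer: MPI_ALLREDUCE called from a
! RECURSIVE Fortran 90 subroutine. Names and values are illustrative.
program recursive_allreduce
  implicit none
  include 'mpif.h'
  integer :: ierr
  call MPI_INIT(ierr)
  call reduce_step(4)                ! recurse a few levels deep
  call MPI_FINALIZE(ierr)
contains
  recursive subroutine reduce_step(depth)
    integer, intent(in) :: depth
    integer :: ierr
    double precision :: local, global
    if (depth == 0) return
    local = dble(depth)
    ! MPI_ALLREDUCE is a blocking collective: it completes before the
    ! recursive call below, so two calls never overlap on one process.
    call MPI_ALLREDUCE(local, global, 1, MPI_DOUBLE_PRECISION, &
                       MPI_SUM, MPI_COMM_WORLD, ierr)
    call reduce_step(depth - 1)
  end subroutine reduce_step
end program recursive_allreduce
```

Since each MPI_ALLREDUCE returns before the next recursive call starts, the
recursion itself should not cause overlapping collectives, which is the point
George makes above.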