It crashes in the Fortran routine calling the MPI functions. When I run the
debugger, the crash seems to be in libmpi_f77.lib, but I cannot go further
since the library is not built in debug mode.

Attached to this email are the files of my small test case. With less
aggressive optimization options, it works.

I did not know the lowest optimization level is /O: I am going to try it.


On Mon, Oct 29, 2012 at 5:08 PM, Damien <dam...@khubla.com> wrote:

>  Mathieu,
>
> Where is the crash?  Without that info, I'd suggest turning off all the
> optimisations and just compile it without any flags other than what you
> need to compile it cleanly (so no /O flags) and see if it crashes.
>
> Damien
>
>
> On 26/10/2012 10:27 AM, Mathieu Gontier wrote:
>
> Dear all,
>
>  I would like to use OpenMPI on Windows for a CFD solver instead of MPICH2. My
> solver is developed in Fortran 77 and driven by a C++ interface; both
> levels call MPI functions.
>
>  So, I installed OpenMPI-1.6.2-x64 on my system and compiled my code
> successfully. But at runtime it crashed.
> I reproduced the problem in a small C++ application calling a Fortran
> function that uses MPI_Allreduce; when I removed some aggressive optimization
> options from the Fortran build, it worked:
>
>    - Optimization: Disable (/Od)
>    - Inline Function Expansion: Any Suitable (/Ob2)
>    - Favor Size or Speed: Favor Fast Code (/Ot)
>
>  So, I removed the same options from the Fortran parts of my solver, but
> it still crashes. I tried some other options, but it keeps
> crashing. Does anybody have an idea? Should I (de)activate some compilation
> options? Are there specific settings to build and link against libmpi_f77.lib?
>
>  Thanks for your help.
> Mathieu.
>
>  --
> Mathieu Gontier
> - MSN: mathieu.gont...@gmail.com
> - Skype: mathieu_gontier
>
>
>
>
> _______________________________________________
> users mailing list
> us...@open-mpi.org
> http://www.open-mpi.org/mailman/listinfo.cgi/users
>



-- 
Mathieu Gontier
- MSN: mathieu.gont...@gmail.com
- Skype: mathieu_gontier
#include <iostream>
#include <mpi.h>

// Fortran routine from red.f90: receives a Fortran communicator handle and
// returns the result of an MPI_Allreduce.
extern "C" void RED( int*, int* );

int main( int argc, char* argv[] )
{
    int ier = MPI_SUCCESS, rank = -1, size = 0;
    ier = MPI_Init( &argc, &argv );
    ier = MPI_Comm_size( MPI_COMM_WORLD, &size );
    ier = MPI_Comm_rank( MPI_COMM_WORLD, &rank );

    // Each rank prints a greeting in turn.
    for( int i = 0; i < size; ++i )
    {
        ier = MPI_Barrier( MPI_COMM_WORLD );
        if( i == rank ) std::cout << "Hello world, I am the processor #" << rank << " on " << size << std::endl;
        ier = MPI_Barrier( MPI_COMM_WORLD );
    }

    // Broadcast rank 0's value to every process.
    int k = rank;
    ier = MPI_Bcast( &k, 1, MPI_INT, 0, MPI_COMM_WORLD );

    for( int i = 0; i < size; ++i )
    {
        ier = MPI_Barrier( MPI_COMM_WORLD );
        if( i == rank ) std::cout << "I am still the processor #" << rank << " and I received " << k << std::endl;
        ier = MPI_Barrier( MPI_COMM_WORLD );
    }

    // Convert the C communicator to a Fortran handle and pass it to the
    // Fortran routine, which performs the MPI_Allreduce.
    int c = MPI_Comm_c2f( MPI_COMM_WORLD );
    int rr = -1;
    RED( &c, &rr );
    for( int i = 0; i < size; ++i )
    {
        ier = MPI_Barrier( MPI_COMM_WORLD );
        if( i == rank ) std::cout << "red=" << rr << std::endl;
        ier = MPI_Barrier( MPI_COMM_WORLD );
    }

#if 0
    // Same reduction done directly from the C++ side (disabled).
    int buf[2], recv[2];
    buf[0] = rank;
    buf[1] = rank * -1;
    MPI_Allreduce( buf, recv, 2, MPI_INT, MPI_MAX, MPI_COMM_WORLD );

    for( int i = 0; i < size; ++i )
    {
        ier = MPI_Barrier( MPI_COMM_WORLD );
        if( !rank ) std::cout << "RECV[]=" << recv[0] << "," << recv[1] << std::endl;
        ier = MPI_Barrier( MPI_COMM_WORLD );
    }
#endif

    ier = MPI_Finalize();
    return 0;
}

Attachment: red.f90
Description: Binary data
