Re: [OMPI users] False positives with OpenMPI and memchecker (seems fixed between 3.0.0 and 3.0.1-rc1)

2018-01-06 Thread yvan . fournier
... Sorry for the (too-late) report... Yvan - Original message - From: "yvan fournier" To: users@lists.open-mpi.org Sent: Sunday January 7 2018 01:52:04 Subject: Re: False positives with OpenMPI and memchecker (with attachment) Hello, Sorry, I forgot the attached test case in m

Re: [OMPI users] False positives with OpenMPI and memchecker (with attachment)

2018-01-06 Thread yvan . fournier
Hello, Sorry, I forgot the attached test case in my previous message... :( Best regards, Yvan Fournier - Forwarded message - From: "yvan fournier" To: users@lists.open-mpi.org Sent: Sunday January 7 2018 01:43:16 Subject: False positives with OpenMPI and memchecker Hello,

[OMPI users] False positives with OpenMPI and memchecker

2018-01-06 Thread yvan . fournier
i_isend_irecv.c:7) The first 2 warnings seem to relate to initialization, so they are not a big issue, but the last one occurs whenever I use MPI_Isend, so it is a more significant issue. Using a version built without --enable-memchecker, I also have the two initialization warnings, but not the

Re: [OMPI users] False positives and even failure with OpenMPI and memchecker

2016-11-05 Thread Yvan Fournier
submitted the issue sooner... Best regards, Yvan Fournier > Message: 5 > Date: Sat, 5 Nov 2016 22:08:32 +0900 > From: Gilles Gouaillardet > To: Open MPI Users > Subject: Re: [OMPI users] False positives and even failure with Open > MPI and memchecker > Message-ID: &g

[OMPI users] False positives and even failure with Open MPI and memchecker

2016-11-05 Thread Yvan Fournier
contain an obvious mistake that I am missing? I initially thought of possible alignment issues, but saw nothing in the standard that requires that, and the "malloc"-based variant exhibits the same behavior, while I assume 64-bit alignment for allocated arrays is the default. Best reg

Re: [OMPI users] Bad parallel scaling using Code Saturne with openmpi

2012-07-11 Thread Yvan Fournier
then XE-6 machine. I am interested in trying to improve performance on Ethernet clusters, and I may have a few suggestions for options you can test, but this conversation should probably move to the Code_Saturne forum (http://code-saturne.org), as we will go into some options of our linear solvers which are specific to that code, not to Open MPI. Best regards, Yvan Fournier

Re: [OMPI users] Latest Intel Compilers (ICS, version 12.1.0.233 Build 20110811) issues

2012-01-03 Thread Yvan Fournier
Hello, I am not sure your issues are related, and I have not tested this version of ICS, but I have actually had issues with an Intel compiler build of Open MPI 1.4.3 on a cluster using Westmere processors and Infiniband (Qlogic), using a Debian distribution, with our in-house code (www.code-satur

[OMPI users] MPI IO bug test case for OpenMPI 1.3

2009-07-09 Thread yvan . fournier
gths[0]), MPI_BYTE, &status); #if USE_FILE_TYPE MPI_Type_free(&file_type); #endif - Using the file type indexed datatype, I exhibit the bug with both versions 1.3.0 and 1.3.2 of OpenMPI. Best regards, Yvan Fournier #include #include #include #include #define

Re: [OMPI users] Incorrect results with MPI-IO under OpenMPI v1.3.1

2009-04-06 Thread Yvan Fournier
require a few extra hours of work. If the bug is not reproduced in a simpler manner first, I will try to build a simple program reproducing the bug within a week or 2, but in the meantime, I just want to confirm Scott's observation (hoping it is the same bug). Best regards, Yvan Fournie

Re: [OMPI users] bug in MPI_File_get_position_shared ?

2008-08-17 Thread Yvan Fournier
experimenting with the MPI-IO using explicit offsets, individual pointers, and shared pointers, and have workarounds, so I'll just avoid shared pointers on NFS. Best regards, Yvan Fournier EDF R&D On Sat, 2008-08-16 at 08:19 -0400, users-requ...@open-mpi.org wrote: > D

[OMPI users] bug in MPI_File_get_position_shared ?

2008-08-13 Thread Yvan Fournier
hangs (in more complete code, after writing data). I encounter the same problem with Open MPI 1.2.6 and MPICH2 1.0.7, so I may have misread the documentation, but I suspect a ROMIO bug. Best regards, Yvan Fournier

[OMPI users] Bug in Open MPI 1.2.3 using MPI_Recv with an indexed datatype

2007-09-24 Thread Yvan Fournier
57-58). It works with LAM 7.1.1 and MPICH2, but fails under Open MPI. This is a (much) simplified extract from a part of Code_Saturne's FVM library (http://rd.edf.com/code_saturne/), which otherwise works fine on most data using Open MPI. Best regards, Yvan Fournier

Re: [OMPI users] users Digest, Vol 328, Issue 1

2006-07-10 Thread Yvan Fournier
ompi_info output. I have also encountered the bug on the "parent" case (similar, but more complex) on my work machine (dual Xeon under Debian Sarge), but I'll check this simpler test on it just in case. Best regards, Yvan Fournier On Sun, 2006-07-09 at 12:00 -0400, users-requ.

[OMPI users] Datatype bug regression from Open MPI 1.0.2 to Open MPI 1.1

2006-06-30 Thread Yvan Fournier
file), the bug disappears. -- Best regards, Yvan Fournier ompi_datatype_bug.tar.gz Description: application/compressed-tar

[OMPI users] Bug in OMPI 1.0.1 using MPI_Recv with indexed datatypes

2006-02-10 Thread Yvan Fournier
an indexed datatype (i.e. not defining USE_INDEXED_DATATYPE in the gather_test.c file), the bug disappears. Using the indexed datatype with LAM MPI 7.1.1 or MPICH2, we do not reproduce the bug either, so it does seem to be an Open MPI issue. -- Best regards, Yvan