Re: [OMPI users] [EXTERNAL] Re: MPI One-Sided Communication, indexed datatype and segmentation fault.

2012-06-06 Thread Jim Dinan
Hi Brian, I filed a bug several months ago about derived datatypes in RMA: https://svn.open-mpi.org/trac/ompi/ticket/2656 Could this be the same issue? ~Jim.

Re: [OMPI users] [EXTERNAL] Re: MPI One-Sided Communication, indexed datatype and segmentation fault.

2012-06-06 Thread Ziaul Haque Olive
Hello Brian, You do not have to be sorry; my code was not that clear. My Open MPI version is 1.6 and the network is Ethernet. Regarding MPI_Alloc_mem: I also tried plain malloc, but got the same segfault. Thanks, Ziaul

Re: [OMPI users] [EXTERNAL] Re: MPI One-Sided Communication, indexed datatype and segmentation fault.

2012-06-06 Thread Barrett, Brian W
Ziaul - You're right, I totally misread the code, sorry about that. What version of Open MPI are you using and over what network? As an aside, there's no point in using MPI_ALLOC_MEM for the displacement arrays. MPI_ALLOC_MEM is really only advantageous for allocating communication buffers.
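
A minimal sketch of that suggestion, with hypothetical names and sizes (win_buf, source_disp, NELEMS, NDISP are illustrative, not from the original code): MPI_Alloc_mem is reserved for the buffer that MPI itself reads and writes through the window, while the displacement array is ordinary local metadata and plain malloc is enough.

#include <stdlib.h>
#include <mpi.h>

#define NELEMS 1024   /* hypothetical window size      */
#define NDISP    16   /* hypothetical number of blocks */

int main(int argc, char **argv)
{
    int *win_buf, *source_disp;
    MPI_Win win;

    MPI_Init(&argc, &argv);

    /* MPI_Alloc_mem can help for memory that MPI itself transfers,
       e.g. the buffer exposed through the window. */
    MPI_Alloc_mem(NELEMS * sizeof(int), MPI_INFO_NULL, &win_buf);
    MPI_Win_create(win_buf, NELEMS * sizeof(int), sizeof(int),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    /* The displacement array is ordinary local data; malloc suffices. */
    source_disp = malloc(NDISP * sizeof(int));

    /* ... build indexed datatypes, communicate, etc. ... */

    free(source_disp);
    MPI_Win_free(&win);
    MPI_Free_mem(win_buf);
    MPI_Finalize();
    return 0;
}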

Re: [OMPI users] MPI One-Sided Communication, indexed datatype and segmentation fault.

2012-06-06 Thread Ziaul Haque Olive
Hello Brian, Actually, I am not modifying the local communication buffer that contains the data. I am modifying the buffer that contains the indices into the data buffer (source_disp and target_disp). In MPICH2 this is not a problem; I am not sure about Open MPI. Thanks, Ziaul

Re: [OMPI users] MPI One-Sided Communication, indexed datatype and segmentation fault.

2012-06-06 Thread Barrett, Brian W
Ziaul - Your program is erroneous; you cannot modify the local communication buffer of an MPI_ACCUMULATE call until after the next synchronization call (Section 11.3 of MPI 2.2). In your example, that would be after the MPI_FENCE call following the call to MPI_ACCUMULATE. Brian
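
A minimal sketch of the rule Brian cites, assuming hypothetical names (win, origin_buf, origin_type, target_type): the origin buffer handed to MPI_Accumulate, and anything the origin datatype refers to, stays untouched until the fence that closes the epoch.

#include <mpi.h>

/* All arguments are hypothetical; they stand in for whatever the real
   program sets up before the epoch. */
void accumulate_then_modify(MPI_Win win, int *origin_buf, int count,
                            MPI_Datatype origin_type,
                            MPI_Datatype target_type, int target_rank)
{
    MPI_Win_fence(0, win);                 /* open the access epoch        */

    MPI_Accumulate(origin_buf, count, origin_type,
                   target_rank, 0, count, target_type,
                   MPI_SUM, win);

    /* Erroneous: modifying origin_buf here, before the next
       synchronization call, violates Section 11.3 of MPI 2.2.            */

    MPI_Win_fence(0, win);                 /* synchronization ends epoch   */

    origin_buf[0] = 0;                     /* safe only after this fence   */
}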

Re: [OMPI users] Building openmpi from src rpm: rpmbuild --rebuild errors with 'cpio: MD5 sum mismatch' (since openmpi 1.4.5)

2012-06-06 Thread Prentice Bisbal
On 05/31/2012 07:26 AM, Jeff Squyres wrote: > On May 31, 2012, at 2:04 AM, livelfs wrote: > >> Since the openmpi 1.4.5 release, it is no longer possible to build an openmpi >> binary with rpmbuild --rebuild if the system rpm package version is 4.4.x, like >> in SLES10, SLES11, RHEL/CentOS 5.x. >> >> For

Re: [OMPI users] Building openmpi from src rpm: rpmbuild --rebuild errors with 'cpio: MD5 sum mismatch' (since openmpi 1.4.5)

2012-06-06 Thread Prentice Bisbal
On 05/31/2012 02:04 AM, livelfs wrote: > Hi > Since the openmpi 1.4.5 release, it is no longer possible to build an openmpi > binary with rpmbuild --rebuild if the system rpm package version is 4.4.x, > like in SLES10, SLES11, RHEL/CentOS 5.x. > > For instance, on CentOS 5.8 x86_64 with rpm 4.4.2.3-28.el5_8:

[OMPI users] MPI One-Sided Communication, indexed datatype and segmentation fault.

2012-06-06 Thread Ziaul Haque Olive
Hello, I am not sure if my code is correct according to Open MPI (v1.6). The code is given as follows. I am doing MPI one-sided communication inside a function, data_transfer, which is being called inside a fence epoch. Inside data_transfer, I am allocating memory for non-contiguous
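
A rough reconstruction of the structure described here, under the assumption that data_transfer allocates displacement arrays locally, builds MPI_Type_indexed datatypes from them, and then calls MPI_Accumulate inside the surrounding fence epoch; all names, sizes, and displacements are hypothetical, not the original code.

#include <stdlib.h>
#include <mpi.h>

/* Hypothetical reconstruction of the described data_transfer function. */
void data_transfer(MPI_Win win, double *local_buf, int nblocks,
                   int target_rank)
{
    int *source_disp = malloc(nblocks * sizeof(int));
    int *target_disp = malloc(nblocks * sizeof(int));
    int *blocklens   = malloc(nblocks * sizeof(int));
    MPI_Datatype origin_type, target_type;
    int i;

    for (i = 0; i < nblocks; i++) {     /* describe the non-contiguous layout */
        blocklens[i]   = 1;
        source_disp[i] = 2 * i;
        target_disp[i] = 2 * i;
    }

    MPI_Type_indexed(nblocks, blocklens, source_disp, MPI_DOUBLE, &origin_type);
    MPI_Type_indexed(nblocks, blocklens, target_disp, MPI_DOUBLE, &target_type);
    MPI_Type_commit(&origin_type);
    MPI_Type_commit(&target_type);

    MPI_Accumulate(local_buf, 1, origin_type,
                   target_rank, 0, 1, target_type, MPI_SUM, win);

    MPI_Type_free(&origin_type);
    MPI_Type_free(&target_type);
    free(blocklens);
    free(source_disp);
    free(target_disp);
}

/* Called inside a fence epoch, e.g.:
       MPI_Win_fence(0, win);
       data_transfer(win, local_buf, nblocks, target_rank);
       MPI_Win_fence(0, win);                                               */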

Re: [OMPI users] "-library=stlport4" neccessary for Sun C

2012-06-06 Thread TERRY DONTJE
On 6/6/2012 4:38 AM, Siegmar Gross wrote: Hello, I compiled "openmpi-1.6" on "Solaris 10 sparc", "Solaris 10 x86", and Linux (openSuSE 12.1) with "Sun C 5.12". Today I searched my log files for "WARNING" and found the following message. WARNING:

[OMPI users] "-library=stlport4" neccessary for Sun C

2012-06-06 Thread Siegmar Gross
Hello, I compiled "openmpi-1.6" on "Solaris 10 sparc", "Solaris 10 x86", and Linux (openSuSE 12.1) with "Sun C 5.12". Today I searched my log-files for "WARNING" and found the following message. WARNING: ** WARNING: *** VampirTrace