Gilles,

I see. Thanks!

Takahiro Kawashima,
MPI development team,
Fujitsu

> Kawashima-san,
> 
> We always duplicate the communicator and use the CID of the duplicated
> communicator, so, bottom line, there cannot be more than one window per
> communicator.
> 
> I will double-check the PID approach. If the file name has to be agreed
> upon anyway, I would rather derive it from the process name of rank 0,
> which every rank already knows locally, in order to avoid a broadcast.
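
For illustration only, a minimal C sketch of that idea: every rank composes the
backing-file name locally from rank 0's process name (represented here by
made-up jobid/vpid values) plus the CID of the duplicated communicator, so no
broadcast is needed. The path prefix, variable names, and values are invented
for this sketch; this is not the actual osc/sm code.

    /* Sketch only: name derived from rank 0's identity + dup'ed CID,
     * computable locally by every rank without communication. */
    #include <stdio.h>

    int main(void)
    {
        /* Hypothetical identity of rank 0 and CID of the dup'ed communicator. */
        unsigned int rank0_jobid = 0x1234u;
        unsigned int rank0_vpid  = 0u;
        unsigned int dup_cid     = 7u;

        char path[256];
        snprintf(path, sizeof(path), "/dev/shm/osc_sm.%x.%u.%u",
                 rank0_jobid, rank0_vpid, dup_cid);
        printf("backing file: %s\n", path);
        return 0;
    }
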
> 
> Cheers,
> 
> Gilles
> 
> On 2/3/2016 8:40 AM, Kawashima, Takahiro wrote:
> > Nathan,
> >
> > Is that sufficient?
> > Multiple windows can be created on a single communicator,
> > so I think PID + CID is not sufficient.
> >
> > Possible fixes:
> > - The root process creates a filename containing a random number
> >   and broadcasts it over the communicator (sketched below).
> > - Keep a per-communicator counter and use it in the filename.
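
For illustration only, a minimal MPI/C sketch of the first fix: the root draws
a random number and broadcasts it, so all ranks agree on one name. The path
prefix and naming scheme are invented, and the real fix would of course live
inside the osc/sm component rather than in user code.

    /* Sketch only: root picks a random tag, broadcasts it, and every rank
     * derives the same (probabilistically unique) backing-file name. */
    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        int tag = 0;
        if (rank == 0) {
            srand((unsigned int)time(NULL));
            tag = rand();
        }
        MPI_Bcast(&tag, 1, MPI_INT, 0, MPI_COMM_WORLD);

        char path[256];
        snprintf(path, sizeof(path), "/dev/shm/osc_sm.%d", tag);
        if (rank == 0)
            printf("agreed backing file: %s\n", path);

        MPI_Finalize();
        return 0;
    }
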
> >
> > Regards,
> > Takahiro Kawashima,
> > MPI development team,
> > Fujitsu
> >
> >> Hmm, I think you are correct. There may be instances where two different
> >> local processes use the same CID for different communicators. It
> >> should be sufficient to add the PID of the current process to the
> >> filename to ensure it is unique.
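
For illustration only, a minimal C sketch of the PID + CID naming idea, which
also shows the limitation raised elsewhere in the thread: two windows created
on the same communicator by the same process still end up with the same name.
The make_name helper, path prefix, and values are invented for this sketch.

    /* Sketch only: compose a name from PID + CID and show that a second
     * window on the same communicator collides with the first. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>   /* getpid() */

    static void make_name(char *buf, size_t len, long pid, unsigned int cid)
    {
        snprintf(buf, len, "/dev/shm/osc_sm.%ld.%u", pid, cid);
    }

    int main(void)
    {
        unsigned int cid = 3u;          /* hypothetical communicator CID */
        long pid = (long)getpid();      /* PID of the creating process   */

        char win1[256], win2[256];
        make_name(win1, sizeof(win1), pid, cid);  /* first window          */
        make_name(win2, sizeof(win2), pid, cid);  /* second window, same comm */

        printf("window 1: %s\nwindow 2: %s\n", win1, win2);
        printf("%s\n", strcmp(win1, win2) == 0 ? "collision" : "unique");
        return 0;
    }
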
> >>
> >> -Nathan
> >>
> >> On Tue, Feb 02, 2016 at 09:33:29PM +0900, Gilles Gouaillardet wrote:
> >>>     Nathan,
> >>>     the sm osc component uses the communicator CID to name the file
> >>>     that will be used to create shared memory segments.
> >>>     If I understand correctly, two different communicators coming from
> >>>     the same MPI_Comm_split might share the same CID, so the CID (alone)
> >>>     cannot be used to generate a unique per-communicator file name.
> >>>     Makes sense?
> >>>     Cheers,
> >>>     Gilles
> >>>
> >>>     ---------- Forwarded message ----------
> >>>     From: Peter Wind <peter.w...@met.no>
> >>>     Date: Tuesday, February 2, 2016
> >>>     Subject: [OMPI users] shared memory under fortran, bug?
> >>>     To: us...@open-mpi.org
> >>>
> >>>     Enclosed is a short (< 100 lines) Fortran code example that uses
> >>>     shared memory.
> >>>     It seems to me that it behaves incorrectly when Open MPI is used.
> >>>     Compiled with SGI/MPT, it gives the right result.
> >>>
> >>>     To fail, the code must be run on a single node.
> >>>     It creates two groups of 2 processes each. Within each group, memory
> >>>     is shared.
> >>>     The error is that the two groups get the same memory allocated, but
> >>>     they should not.
> >>>
> >>>     Tested with Open MPI 1.8.4, 1.8.5, and 1.10.2, with gfortran,
> >>>     Intel 13.0, and Intel 14.0; all fail.
> >>>
> >>>     The call
> >>>        call MPI_Win_allocate_shared(win_size, disp_unit, MPI_INFO_NULL, &
> >>>             comm_group, cp1, win, ierr)
> >>>
> >>>     should allocate memory only within the group. But when the other group
> >>>     allocates memory, the pointers from the two groups point to the same
> >>>     address in memory.
> >>>
> >>>     Could you please confirm that this is the wrong behaviour?
> >>>
> >>>     Best regards,
> >>>     Peter Wind
> >>> program shmem_mpi
> >>>
> >>>     !
> >>>     ! In this example two groups are created; within each group memory is shared.
> >>>     ! Still, the other group gets allocated the same address space, which it shouldn't.
> >>>     !
> >>>     ! Run with 4 processes, mpirun -np 4 a.out
> >>>
> >>>
> >>>     use mpi
> >>>
> >>>     use, intrinsic :: iso_c_binding, only : c_ptr, c_f_pointer
> >>>
> >>>     implicit none
> >>> !   include 'mpif.h'
> >>>
> >>>     integer, parameter :: nsize = 100
> >>>     integer, pointer   :: array(:)
> >>>     integer            :: num_procs
> >>>     integer            :: ierr
> >>>     integer            :: irank, irank_group
> >>>     integer            :: win
> >>>     integer            :: comm = MPI_COMM_WORLD
> >>>     integer            :: disp_unit
> >>>     type(c_ptr)        :: cp1
> >>>     type(c_ptr)        :: cp2
> >>>     integer            :: comm_group
> >>>
> >>>     integer(MPI_ADDRESS_KIND) :: win_size
> >>>     integer(MPI_ADDRESS_KIND) :: segment_size
> >>>
> >>>     call MPI_Init(ierr)
> >>>     call MPI_Comm_size(comm, num_procs, ierr)
> >>>     call MPI_Comm_rank(comm, irank, ierr)
> >>>
> >>>     disp_unit = sizeof(1)
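> >>>     ! Split the world communicator into two halves; memory is shared only within each half.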
> >>>     call MPI_COMM_SPLIT(comm, irank*2/num_procs, irank, comm_group, ierr)
> >>>     call MPI_Comm_rank(comm_group, irank_group, ierr)
> >>> !   print *, 'irank=', irank, ' group rank=', irank_group
> >>>
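> >>>     ! Only rank 0 of each group allocates the full segment; the other ranks request size 0.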
> >>>     if (irank_group == 0) then
> >>>        win_size = nsize*disp_unit
> >>>     else
> >>>        win_size = 0
> >>>     endif
> >>>
> >>>     call MPI_Win_allocate_shared(win_size, disp_unit, MPI_INFO_NULL, &
> >>>          comm_group, cp1, win, ierr)
> >>>     call MPI_Win_fence(0, win, ierr)
> >>>
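> >>>     ! Every rank queries rank 0's segment to obtain a pointer to the shared memory.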
> >>>     call MPI_Win_shared_query(win, 0, segment_size, disp_unit, cp2, ierr)
> >>>
> >>>     call MPI_Win_fence(0, win, ierr)
> >>>     CALL MPI_BARRIER(comm, ierr)! allocations finished
> >>> !   print *, 'irank=', irank, ' size ', segment_size
> >>>
> >>>     call c_f_pointer(cp2, array, [nsize])
> >>>
> >>>     array(1)=0;array(2)=0
> >>>     CALL MPI_BARRIER(comm, ierr)!
> >>> 77 format(4(A,I3))
> >>>     if(irank<num_procs/2)then
> >>>        if (irank_group == 0)array(1)=11
> >>>        CALL MPI_BARRIER(comm, ierr)
> >>>        print 77, 'Group 0, rank', irank, ':  array ', array(1), ' ', array(2)
> >>>        CALL MPI_BARRIER(comm, ierr)!Group 1 not yet start writing
> >>>        CALL MPI_BARRIER(comm, ierr)!Group 1 finished writing
> >>>        print 77, 'Group 0, rank', irank, ':  array ', array(1), ' ', array(2)
> >>>        if(array(1)==11.and.array(2)==0)then
> >>>           print *,irank,' correct result'
> >>>        else
> >>>           print *,irank,' wrong result'
> >>>        endif
> >>>     else
> >>>        CALL MPI_BARRIER(comm, ierr)
> >>>        CALL MPI_BARRIER(comm, ierr)!Group 0 finished writing
> >>>        print 77, 'Group 1, rank', irank, ':  array ', array(1), ' ', array(2)
> >>>        if (irank_group == 0)array(2)=22
> >>>        CALL MPI_BARRIER(comm, ierr)
> >>>        print 77, 'Group 1, rank', irank, ':  array ', array(1), ' ', array(2)
> >>>        if(array(1)==0.and.array(2)==22)then
> >>>           print *,irank,' correct result'
> >>>        else
> >>>           print *,irank,' wrong result'
> >>>        endif
> >>>     endif
> >>>
> >>>     call MPI_Finalize(ierr)
> >>>
> >>> end program
