Dear All,
attached you find the 2D example; now I will try the 3D example.

What do you think of it? Is it correct?
The idea is to build a 2D datatype in order to send 3D data.

Diego


On 16 January 2015 at 18:19, Diego Avesani <diego.aves...@gmail.com> wrote:

> Dear George, Dear All,
>
> And what do you think about the previous post?
>
> Thanks again
>
> Diego
>
>
> On 16 January 2015 at 18:11, George Bosilca <bosi...@icl.utk.edu> wrote:
>
>> You could, but you don’t need to. The datatype engine of Open MPI does a
>> fair job of packing/unpacking the data on the fly, so you don’t have
>> to.
>>
>>   George.
>>
>> On Jan 16, 2015, at 11:32 , Diego Avesani <diego.aves...@gmail.com>
>> wrote:
>>
>> Dear all,
>>
>> Could I use MPI_PACK?
>>
>>
>> Diego
>>
>>
>> On 16 January 2015 at 16:26, Diego Avesani <diego.aves...@gmail.com>
>> wrote:
>>
>>> Dear George, Dear all,
>>>
>>> I have been studying. The 2D case QQ(:,:) is clear to me.
>>>
>>> For example, if
>>> real :: QQ(npt,9), with 9 being the characteristics of each particle,
>>>
>>> I can simply do:
>>>
>>>  call MPI_TYPE_VECTOR(50, 9, 9, MPI_REAL, my_2D_type, ierr)
>>>
>>> to send 50 elements of QQ. I am in Fortran, so a two-dimensional array
>>> is organized as a 1D array, and a new row starts after the 9 elements of
>>> a column.
>>>
>>> The problem is a 3D array. I believe that I have to create a sort of
>>> vector of vectors.
>>> More or less like:
>>>
>>> call MPI_TYPE_VECTOR(xxx, xxx, xxx, MPI_REAL,  my_row, ierr)
>>>
>>>      and then
>>>
>>> call MPI_TYPE_VECTOR(xxx, xxx, xxx, my_row, my_type, ierr).
>>>
>>> Note that in the second case I have my_row instead of
>>> MPI_REAL.
>>>
>>> I found something about it in a tutorial, but I am not able to find it
>>> again on Google. I think that using a struct is not convenient in this
>>> case, since I have only reals. Moreover, MPI_STRUCT is meant to emulate
>>> Fortran90 and C structures, as Gus suggested.
>>>
>>> Let me look for that tutorial.
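>>>
>>> Concretely, something like this untested sketch is what I have in mind,
>>> for a 3D array like my QQmls(9,npt,18) from before (nsend, iStart, dest
>>> and tag are placeholders, and I use MPI_TYPE_CONTIGUOUS for the inner
>>> block):
>>>
>>> ! inner type: the 9 contiguous reals of one particle
>>> call MPI_TYPE_CONTIGUOUS(9, MPI_DOUBLE_PRECISION, my_row, ierr)
>>> ! outer type: nsend particles out of npt, once for each of the 18 slabs;
>>> ! the stride is counted in units of my_row (npt particles per slab)
>>> call MPI_TYPE_VECTOR(18, nsend, npt, my_row, my_type, ierr)
>>> call MPI_TYPE_COMMIT(my_type, ierr)
>>> call MPI_SEND(QQmls(1,iStart,1), 1, my_type, dest, tag, MPI_COMM_WORLD, ierr)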
>>> What do you think?
>>>
>>> Thanks again
>>>
>>> Diego
>>>
>>>
>>> On 16 January 2015 at 16:02, George Bosilca <bosi...@icl.utk.edu> wrote:
>>>
>>>> The operation you describe is a pack operation, agglomerating originally
>>>> discontiguous elements into a contiguous buffer. As a result there is no
>>>> need to use MPI_TYPE_VECTOR; instead you can just use the type you
>>>> created so far (MPI_my_STRUCT) with a count.
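>>>>
>>>> For instance, a minimal sketch on top of the loop quoted below (dest and
>>>> tag are placeholders, and nsend = icount - 1 after the loop):
>>>>
>>>> CALL MPI_SEND(DATASEND, nsend, MPI_my_STRUCT, dest, tag, MPI_COMM_WORLD, ierr)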
>>>>
>>>>   George.
>>>>
>>>>
>>>> On Fri, Jan 16, 2015 at 5:32 AM, Diego Avesani <diego.aves...@gmail.com> wrote:
>>>>
>>>>> Dear All,
>>>>> I'm sorry to insist, but I am not able to understand. Moreover, I have
>>>>> realized that I have to explain myself better.
>>>>>
>>>>> I will try to explain with my program. Each CPU has npt particles. My
>>>>> program determines how many particles each CPU has to send, according to
>>>>> their positions. Then I can do:
>>>>>
>>>>> icount=1
>>>>> DO i=1,npt
>>>>>    IF(i is a particle to send)THEN
>>>>>
>>>>>        DATASEND(icount)%ip     = PART(i)%ip
>>>>>        DATASEND(icount)%mc     = PART(i)%mc
>>>>>
>>>>>        DATASEND(icount)%RP     = PART(i)%RP
>>>>>        DATASEND(icount)%QQ     = PART(i)%QQ
>>>>>
>>>>>        icount=icount+1
>>>>>    ENDIF
>>>>> ENDDO
>>>>>
>>>>> After that, I can send DATASEND.
>>>>>
>>>>> DATASEND is an MPI_my_STRUCT; I can allocate it according to
>>>>> the number of particles that I have to send:
>>>>>
>>>>> TYPE(tParticle), ALLOCATABLE, DIMENSION(:) :: DATASEND, DATARECV
>>>>>
>>>>> This means that the number of particles that I have to send can
>>>>> change every time.
>>>>>
>>>>> After that, I compute, for each particle, something called
>>>>> QQmls(:,:,:).
>>>>> All elements of QQmls are real. Now I would like to do the same as I
>>>>> did with PART, but in this case:
>>>>>
>>>>> icount=1
>>>>> DO i=1,npt
>>>>>    IF(i is a particle to send)THEN
>>>>>
>>>>>        DATASEND_REAL(:,icount,:)=QQmls(:,i,:)
>>>>>        icount=icount+1
>>>>>
>>>>>    ENDIF
>>>>> ENDDO
>>>>>
>>>>> I would like to have a sort of MPI_my_TYPE to do that (like
>>>>> MPI_my_STRUCT), and not to create an MPI_TYPE_VECTOR every time,
>>>>> because DATASEND_REAL changes size every time.
>>>>>
>>>>> I hope I have made myself clear.
>>>>>
>>>>> So is it correct to use MPI_TYPE_VECTOR? Can I do what I want?
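>>>>>
>>>>> Or, since DATASEND_REAL is contiguous after the copy, maybe a plain
>>>>> send with a count is enough? Something like this untested sketch
>>>>> (assuming QQmls is 9 x npt x 18 as in my first message; dest and tag
>>>>> are placeholders):
>>>>>
>>>>> nsend = icount - 1
>>>>> CALL MPI_SEND(DATASEND_REAL, 9*nsend*18, MPI_DOUBLE_PRECISION, &
>>>>>               dest, tag, MPI_COMM_WORLD, ierr)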
>>>>>
>>>>> In the meantime, I will study some examples.
>>>>>
>>>>> Thanks again
>>>>>
>>>>> Diego
>>>>>
>>>>>
>>>>> On 16 January 2015 at 07:39, George Bosilca <bosi...@icl.utk.edu>
>>>>> wrote:
>>>>>
>>>>>>  The subarray creation is a multi-dimensional extension of the vector
>>>>>> type. You can see it as a vector of vectors of vectors and so on, one
>>>>>> vector per dimension. The array of start indices declares, for each
>>>>>> dimension, the relative displacement (in number of elements) from the
>>>>>> beginning of the array.
>>>>>>
>>>>>> It is important to use the regular type-creation functions when you can
>>>>>> take advantage of such regularity, instead of resorting to struct or the
>>>>>> h* variants. This ensures better packing/unpacking performance, as well
>>>>>> as possible future support for one-sided communications.
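>>>>>>
>>>>>> For the QQMLS(9,npt,18) case below, selecting particles 50:100 might
>>>>>> look like this minimal sketch (my_sub is a new type handle; dest and
>>>>>> tag are placeholders):
>>>>>>
>>>>>> sizes    = (/ 9, npt, 18 /)
>>>>>> subsizes = (/ 9, 51, 18 /)   ! particles 50..100
>>>>>> starts   = (/ 0, 49, 0 /)    ! starts are 0-based
>>>>>> CALL MPI_TYPE_CREATE_SUBARRAY(3, sizes, subsizes, starts, &
>>>>>>      MPI_ORDER_FORTRAN, MPI_DOUBLE_PRECISION, my_sub, ierr)
>>>>>> CALL MPI_TYPE_COMMIT(my_sub, ierr)
>>>>>> CALL MPI_SEND(QQMLS, 1, my_sub, dest, tag, MPI_COMM_WORLD, ierr)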
>>>>>>
>>>>>> George.
>>>>>>
>>>>>>
>>>>>>
>>>>>> > On Jan 15, 2015, at 19:31, Gus Correa <g...@ldeo.columbia.edu> wrote:
>>>>>> >
>>>>>> > I never used MPI_Type_create_subarray, only MPI_Type_Vector.
>>>>>> > What I like about MPI_Type_Vector is that you can define a stride,
>>>>>> > hence you can address any regular pattern in memory.
>>>>>> > However, it envisages the array layout in memory as a big 1-D array,
>>>>>> > with a linear index progressing in either Fortran or C order.
>>>>>> >
>>>>>> > Somebody correct me please if I am wrong, but at first sight
>>>>>> > MPI_Type_Vector sounds more flexible to me than
>>>>>> > MPI_Type_create_subarray, exactly because the latter doesn't have
>>>>>> > strides.
>>>>>> >
>>>>>> > The downside is that you need to do some index arithmetic to figure
>>>>>> > the right strides, etc, to match the corresponding
>>>>>> > Fortran90 array sections.
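>>>>>> >
>>>>>> > For instance, for a double precision QQMLS(9,npt,18) and the section
>>>>>> > QQMLS(:,50:100,:), the arithmetic works out to the following sketch
>>>>>> > (my_vec is a new type handle; dest and tag are placeholders):
>>>>>> >
>>>>>> > ! 18 blocks (one per third-dimension slab) of 9*51 contiguous
>>>>>> > ! doubles, with consecutive block starts 9*npt elements apart
>>>>>> > CALL MPI_TYPE_VECTOR(18, 9*51, 9*npt, MPI_DOUBLE_PRECISION, my_vec, ierr)
>>>>>> > CALL MPI_TYPE_COMMIT(my_vec, ierr)
>>>>>> > CALL MPI_SEND(QQMLS(1,50,1), 1, my_vec, dest, tag, MPI_COMM_WORLD, ierr)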
>>>>>> >
>>>>>> > There are good examples in the "MPI - The Complete Reference" books
>>>>>> > I suggested to you before (actually in vol 1).
>>>>>> >
>>>>>> > Online I could find the two man pages (good information, but no
>>>>>> > example):
>>>>>> >
>>>>>> > http://www.open-mpi.org/doc/v1.8/man3/MPI_Type_vector.3.php
>>>>>> > http://www.open-mpi.org/doc/v1.8/man3/MPI_Type_create_subarray.3.php
>>>>>> >
>>>>>> > There is a very simple 2D example of MPI_Type_vector using strides
>>>>>> > here:
>>>>>> >
>>>>>> > https://computing.llnl.gov/tutorials/mpi/#Derived_Data_Types
>>>>>> >
>>>>>> > and a similar one here:
>>>>>> >
>>>>>> > http://static.msi.umn.edu/tutorial/scicomp/general/MPI/content6.html
>>>>>> >
>>>>>> > Gus Correa
>>>>>> >
>>>>>> >> On 01/15/2015 06:53 PM, Diego Avesani wrote:
>>>>>> >> Dear George, dear Gus, dear all,
>>>>>> >> could you please tell me where I can find a good example?
>>>>>> >> I am sorry, but I cannot understand the 3D array case.
>>>>>> >>
>>>>>> >>
>>>>>> >> Really, thanks
>>>>>> >>
>>>>>> >> Diego
>>>>>> >>
>>>>>> >>
>>>>>> >> On 15 January 2015 at 20:13, George Bosilca <bosi...@icl.utk.edu> wrote:
>>>>>> >>
>>>>>> >>
>>>>>> >>>    On Jan 15, 2015, at 06:02, Diego Avesani <diego.aves...@gmail.com> wrote:
>>>>>> >>>
>>>>>> >>>    Dear Gus, Dear all,
>>>>>> >>>    Thanks a lot.
>>>>>> >>>    MPI_Type_Struct works well for the first part of my problem,
>>>>>> so I
>>>>>> >>>    am very happy to be able to use it.
>>>>>> >>>
>>>>>> >>>    Regarding MPI_TYPE_VECTOR.
>>>>>> >>>
>>>>>> >>>    I have studied it, and for simple cases it is clear to me what it
>>>>>> >>>    does (at least I believe). For example, if I have a matrix
>>>>>> >>>    defined as:
>>>>>> >>>    REAL, ALLOCATABLE :: AA(:,:)
>>>>>> >>>    ALLOCATE(AA(100,5))
>>>>>> >>>
>>>>>> >>>    I could send part of it by defining
>>>>>> >>>
>>>>>> >>>    CALL MPI_TYPE_VECTOR(5, 1, 5, MPI_DOUBLE_PRECISION, MY_NEW_TYPE, ierr)
>>>>>> >>>
>>>>>> >>>    and after that I can send part of it with
>>>>>> >>>
>>>>>> >>>    CALL MPI_SEND(AA(1:10,:), 10, MY_NEW_TYPE, 1, 0, &
>>>>>> >>>                  MPI_COMM_WORLD, ierr)
>>>>>> >>>
>>>>>> >>>    Have I understood correctly?
>>>>>> >>>
>>>>>> >>>    What can I do in the case of a three-dimensional array, for
>>>>>> >>>    example AA(:,:,:)? I am looking at MPI_TYPE_CREATE_SUBARRAY.
>>>>>> >>>    Is that the correct way?
>>>>>> >>>
>>>>>> >>>    Thanks again
>>>>>> >>
>>>>>> >>    Indeed, using the subarray is the right approach independent of
>>>>>> >>    the number of dimensions of the data (you can use it instead of
>>>>>> >>    MPI_TYPE_VECTOR as well).
>>>>>> >>
>>>>>> >>       George.
>>>>>> >>
>>>>>> >>
>>>>>> >>>
>>>>>> >>>    Diego
>>>>>> >>>
>>>>>> >>>
>>>>>> >>>    On 13 January 2015 at 19:04, Gus Correa <g...@ldeo.columbia.edu> wrote:
>>>>>> >>>
>>>>>> >>>        Hi Diego
>>>>>> >>>        I guess MPI_Type_Vector is the natural way to send and
>>>>>> >>>        receive Fortran90 array sections (e.g. your QQMLS(:,50:100,:)).
>>>>>> >>>        I used that before and it works just fine.
>>>>>> >>>        I think that is pretty standard MPI programming style.
>>>>>> >>>        I guess MPI_Type_Struct tries to emulate Fortran90 and C
>>>>>> >>>        structures
>>>>>> >>>        (as you did in your previous code, with all the surprises
>>>>>> >>>        regarding alignment, etc.), not array sections.
>>>>>> >>>        Also, MPI type vector should be more easygoing (and probably
>>>>>> >>>        more efficient) than MPI type struct, with fewer memory
>>>>>> >>>        alignment problems.
>>>>>> >>>        I hope this helps,
>>>>>> >>>        Gus Correa
>>>>>> >>>
>>>>>> >>>        PS - These books have a quite complete description and
>>>>>> >>>        several examples of all MPI objects and functions, including
>>>>>> >>>        MPI types (native and user defined):
>>>>>> >>>        http://mitpress.mit.edu/books/mpi-complete-reference-0
>>>>>> >>>        http://mitpress.mit.edu/books/mpi-complete-reference-1
>>>>>> >>>
>>>>>> >>>        [They cover MPI 1 and 2. I guess there is a new/upcoming
>>>>>> >>>        book with MPI 3, but for what you're doing 1 and 2 are more
>>>>>> >>>        than enough.]
>>>>>> >>>
>>>>>> >>>
>>>>>> >>>        On 01/13/2015 09:22 AM, Diego Avesani wrote:
>>>>>> >>>
>>>>>> >>>            Dear all,
>>>>>> >>>
>>>>>> >>>            I had some wonderful discussions about
>>>>>> >>>            MPI_type_create_struct and isend/irecv with
>>>>>> >>>            Gilles, Gustavo, George, Gus, Tom and Jeff. Now everything
>>>>>> >>>            is clearer and my
>>>>>> >>>            program works.
>>>>>> >>>
>>>>>> >>>            Now I have another question. In my program I have a
>>>>>> >>>            matrix,
>>>>>> >>>
>>>>>> >>>            QQMLS(:,:,:), that is allocated as
>>>>>> >>>
>>>>>> >>>            ALLOCATE(QQMLS(9,npt,18)), where npt is the number of
>>>>>> >>>            particles.
>>>>>> >>>
>>>>>> >>>            QQMLS is double precision.
>>>>>> >>>
>>>>>> >>>            I would like to send, from one CPU to another, part of
>>>>>> >>>            it, for example QQMLS(:,50:100,:). I mean sending the
>>>>>> >>>            QQMLS of the particles between 50 and 100.
>>>>>> >>>            I suppose that I could use MPI_Type_vector, but I am not
>>>>>> >>>            sure. The particles that I want to send could also be
>>>>>> >>>            from 25 to 50, etc., so the blocklength changes every
>>>>>> >>>            time.
>>>>>> >>>
>>>>>> >>>            Do I have to use MPI_type_create_struct?
>>>>>> >>>            Have I correctly understood MPI_Type_vector?
>>>>>> >>>
>>>>>> >>>            Thanks a lot
>>>>>> >>>
>>>>>> >>>
>>>>>> >>>            Diego
>>>>>> >>>
>>>>>> >>>
>>>>>> >>>
>>>>>>
>>>>>
>>>>>
>>>>
>>>
>>>
>>
>
>
MODULE MOD_PRECISION
  integer, parameter :: dp = selected_real_kind(15, 307)
ENDMODULE MOD_PRECISION


PROGRAM test_2Darray_send
USE MOD_PRECISION
USE MPI
IMPLICIT NONE
INTEGER :: i
INTEGER :: NPT
INTEGER :: status(MPI_STATUS_SIZE) 
TYPE tMPI
      INTEGER  :: myrank, nCPU, iErr, status
END TYPE tMPI

TYPE(tMPI)         :: MPIdata
INTEGER            :: MPI_TYPE_VECTOR_1D

REAL(DP),ALLOCATABLE,DIMENSION(:,:) :: AA

  CALL MPI_INIT(MPIdata%iErr)
  CALL MPI_COMM_RANK(MPI_COMM_WORLD, MPIdata%myrank, MPIdata%iErr)
  CALL MPI_COMM_SIZE(MPI_COMM_WORLD, MPIdata%nCPU,   MPIdata%iErr)
  
  NPT=10
  ALLOCATE(AA(NPT,5))
  AA=0.d0
  
  ! NOTE: with count=5, blocklength=1, stride=1 this type is just 5
  ! contiguous doubles; a row AA(i,:) of AA(NPT,5) is actually strided
  ! by NPT in memory (Fortran is column-major).
  CALL MPI_TYPE_VECTOR(5, 1, 1, MPI_DOUBLE_PRECISION, MPI_TYPE_VECTOR_1D,MPIdata%iErr)
  CALL MPI_TYPE_COMMIT(MPI_TYPE_VECTOR_1D,MPIdata%iErr)
  
  
  IF(MPIdata%myrank==0)THEN
     DO I=1,NPT
        AA(I,1)=REAL(I,DP)
        AA(I,2)=10.d0*REAL(I,DP)
        AA(I,3)=100.d0*REAL(I,DP)
     ENDDO
     !
     ! AA(1:6,:) is not contiguous, so the compiler passes a contiguous
     ! copy (6x5 = 30 doubles); 6 blocks of 5 contiguous doubles match it.
     CALL MPI_SEND(AA(1:6,:),6,MPI_TYPE_VECTOR_1D,1,5,MPI_COMM_WORLD,MPIdata%iErr)
     !
  ENDIF
  
  CALL MPI_BARRIER(MPI_COMM_WORLD,MPIdata%iErr)  ! not required for correctness here
  
  IF(MPIdata%myrank==1)THEN
     !
     ! the receive unpacks into a contiguous temporary that is copied
     ! back into the section AA(1:6,:) on return
     CALL MPI_RECV(AA(1:6,:),6,MPI_TYPE_VECTOR_1D,0,5,MPI_COMM_WORLD,status,MPIdata%iErr)
     WRITE(*,*) AA(1:6,3)   ! expected: 100, 200, ..., 600
     !
  ENDIF


  CALL MPI_Finalize(MPIdata%iErr)
END PROGRAM test_2Darray_send
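
A possible shape for the 3D example mentioned at the top (an untested
sketch following George's subarray suggestion; it assumes a 9 x NPT x 18
array and the same pair of ranks and tag as the 2D program):

PROGRAM test_3Darray_send_sketch
USE MOD_PRECISION
USE MPI
IMPLICIT NONE
INTEGER            :: ierr, myrank, my_sub
INTEGER            :: sizes(3), subsizes(3), starts(3)
INTEGER            :: status(MPI_STATUS_SIZE)
INTEGER, PARAMETER :: NPT = 10
REAL(DP)           :: QQ(9,NPT,18)

  CALL MPI_INIT(ierr)
  CALL MPI_COMM_RANK(MPI_COMM_WORLD, myrank, ierr)

  QQ = 0.d0
  ! describe the section QQ(:,1:6,:) inside the full 9 x NPT x 18 array
  sizes    = (/ 9, NPT, 18 /)
  subsizes = (/ 9, 6, 18 /)
  starts   = (/ 0, 0, 0 /)   ! starts are 0-based
  CALL MPI_TYPE_CREATE_SUBARRAY(3, sizes, subsizes, starts, &
       MPI_ORDER_FORTRAN, MPI_DOUBLE_PRECISION, my_sub, ierr)
  CALL MPI_TYPE_COMMIT(my_sub, ierr)

  IF(myrank==0)THEN
     CALL MPI_SEND(QQ, 1, my_sub, 1, 5, MPI_COMM_WORLD, ierr)
  ELSEIF(myrank==1)THEN
     CALL MPI_RECV(QQ, 1, my_sub, 0, 5, MPI_COMM_WORLD, status, ierr)
  ENDIF

  CALL MPI_Finalize(ierr)
END PROGRAM test_3Darray_send_sketch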
