Dear All, Dear Gus, Dear George,
I have almost got it; the program is in the attachment.

All the data arrived; however, I got a segmentation fault.
The idea is to have an mpi_type_row and then construct an mpi_rowcolumn_type from it.
Probably I am not able to work out the correct positions (strides) in Fortran.
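As an alternative to chaining vector types, the MPI_TYPE_CREATE_SUBARRAY route that George suggested below should describe the same kind of 3D block and hides the stride arithmetic. A minimal sketch, where the array name QQmls, npt = 200 and the slice 50:100 are only illustrative:

program subarray_sketch
   implicit none
   include 'mpif.h'
   integer, parameter :: npt = 200          ! illustrative only
   real    :: QQmls(9,npt,18), RECVBUF(9,51,18)
   integer :: sizes(3), subsizes(3), starts(3)
   integer :: slice_type, rank, ierr, stat(MPI_STATUS_SIZE)

   call MPI_INIT(ierr)
   call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)

   sizes    = (/ 9, npt, 18 /)              ! shape of the full array
   subsizes = (/ 9,  51, 18 /)              ! particles 50..100
   starts   = (/ 0,  49,  0 /)              ! starts are 0-based, also in Fortran
   call MPI_TYPE_CREATE_SUBARRAY(3, sizes, subsizes, starts, &
                                 MPI_ORDER_FORTRAN, MPI_REAL, slice_type, ierr)
   call MPI_TYPE_COMMIT(slice_type, ierr)

   QQmls = real(rank)
   if (rank == 0) then
      ! the datatype picks QQmls(:,50:100,:) out of the full array
      call MPI_SEND(QQmls, 1, slice_type, 1, 100, MPI_COMM_WORLD, ierr)
   else if (rank == 1) then
      ! the sent data arrive packed, so a plain contiguous receive works
      call MPI_RECV(RECVBUF, 9*51*18, MPI_REAL, 0, 100, MPI_COMM_WORLD, stat, ierr)
   end if

   call MPI_TYPE_FREE(slice_type, ierr)
   call MPI_FINALIZE(ierr)
end program subarray_sketch

Since the subarray type carries the full array shape, the 0-based starts are the only index arithmetic needed.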

(P.S. I am sorry for the other post in the other thread.)

What do you think?

Thanks again



Diego


On 16 January 2015 at 20:04, Diego Avesani <diego.aves...@gmail.com> wrote:

> Dear all,
> here is the 3D example, but unfortunately it does not work.
> I believe that there is some problem with the stride.
>
> What do you think?
>
> Thanks again to everyone
>
> Diego
>
>
> On 16 January 2015 at 19:20, Diego Avesani <diego.aves...@gmail.com>
> wrote:
>
>> Dear All,
>> the 2D example is in the attachment; now I will try the 3D one.
>>
>> What do you think of it? Is it correct?
>> The idea is to build a 2D datatype to send 3D data.
>>
>> Diego
>>
>>
>> On 16 January 2015 at 18:19, Diego Avesani <diego.aves...@gmail.com>
>> wrote:
>>
>>> Dear George, Dear All,
>>>
>>> and what do you think about the previous post?
>>>
>>> Thanks again
>>>
>>> Diego
>>>
>>>
>>> On 16 January 2015 at 18:11, George Bosilca <bosi...@icl.utk.edu> wrote:
>>>
>>>> You could, but you don’t need to. The datatype engine of Open MPI does
>>>> a fair job of packing/unpacking the data on the fly, so you don’t
>>>> have to.
>>>>
>>>>   George.
>>>>
>>>> On Jan 16, 2015, at 11:32 , Diego Avesani <diego.aves...@gmail.com>
>>>> wrote:
>>>>
>>>> Dear all,
>>>>
>>>> Could I use  MPI_PACK?
>>>>
>>>>
>>>> Diego
>>>>
>>>>
>>>> On 16 January 2015 at 16:26, Diego Avesani <diego.aves...@gmail.com>
>>>> wrote:
>>>>
>>>>> Dear George, Dear all,
>>>>>
>>>>> I have been studying. It's clear for the 2D case QQ(:,:).
>>>>>
>>>>> For example, if
>>>>> real :: QQ(npt,9), with 9 being the characteristics of each particle,
>>>>>
>>>>> I can simply do:
>>>>>
>>>>>  call MPI_TYPE_VECTOR(QQ(1:50), 9, 9, MPI_REAL,  my_2D_type, ierr)
>>>>>
>>>>> I send 50 elements of QQ. I am in Fortran, so a two-dimensional array is
>>>>> organized as a 1D array and a new row starts after the 9 elements of a
>>>>> column.
>>>>>
>>>>> The problem is a 3D array. I believe that I have to create a sort of
>>>>> *vector of vectors*.
>>>>> More or less like:
>>>>>
>>>>> call MPI_TYPE_VECTOR(xxx, xxx, xxx, MPI_REAL,  my_row, ierr)
>>>>>
>>>>>      and then
>>>>>
>>>>> call MPI_TYPE_VECTOR(xxx, xxx, xxx, *my_row*,  my_type, ierr).
>>>>>
>>>>> Note that in the second case I have *my_row* instead of
>>>>> MPI_REAL.
>>>>>
>>>>> I found something about it in a tutorial, but I am not able to find it
>>>>> again on Google. I think that using a struct is not convenient in this
>>>>> case, since I have only reals. Moreover, mpi_struct is meant to emulate
>>>>> Fortran90 and C structures, as Gus suggested.
>>>>>
>>>>> Let me look for that tutorial.
>>>>> What do you think?
>>>>>
>>>>> Thanks again
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> Diego
>>>>>
>>>>>
>>>>> On 16 January 2015 at 16:02, George Bosilca <bosi...@icl.utk.edu>
>>>>> wrote:
>>>>>
>>>>>> The operation you describe is a pack operation, agglomerating
>>>>>> originally non-contiguous elements into a contiguous buffer. As a
>>>>>> result there is no need to use MPI_TYPE_VECTOR; instead you can
>>>>>> just use the type you created so far (MPI_my_STRUCT) with a count.
>>>>>>
>>>>>>   George.
>>>>>>
>>>>>>
>>>>>> On Fri, Jan 16, 2015 at 5:32 AM, Diego Avesani <
>>>>>> diego.aves...@gmail.com> wrote:
>>>>>>
>>>>>>> Dear All,
>>>>>>> I'm sorry to insist, but I am not able to understand. Moreover, I
>>>>>>> have realized that I have to explain myself better.
>>>>>>>
>>>>>>> I will try to explain using my program. Each CPU has *npt* particles. My
>>>>>>> program understands how many particles each CPU has to send, according to
>>>>>>> their positions. Then I can do:
>>>>>>>
>>>>>>> *icount=1*
>>>>>>> * DO i=1,npt*
>>>>>>> *    IF(i is a particle to send)THEN*
>>>>>>>
>>>>>>> *        DATASEND(icount)%ip     = PART(i)%ip*
>>>>>>> *        DATASEND(icount)%mc     = PART(i)%mc*
>>>>>>>
>>>>>>> *        DATASEND(icount)%RP     = PART(i)%RP*
>>>>>>> *        DATASEND(icount)%QQ     = PART(i)%QQ*
>>>>>>>
>>>>>>> *        icount=icount+1*
>>>>>>> *    ENDIF*
>>>>>>> * ENDDO*
>>>>>>>
>>>>>>> After that, I can send *DATASEND*
>>>>>>>
>>>>>>> *DATASEND* is an *MPI_my_STRUCT*. I can allocate it according to
>>>>>>> the number of particles that I have to send:
>>>>>>>
>>>>>>> TYPE(tParticle)  ,ALLOCATABLE,DIMENSION(:) :: DATASEND,DATARECV
>>>>>>>
>>>>>>> This means that the number of particles which I have to send can
>>>>>>> change every time.
>>>>>>>
>>>>>>> After that, I compute, for each particle, something called
>>>>>>> QQmls(:,:,:).
>>>>>>> QQmls has all real elements. Now I would like to do the same as I
>>>>>>> did with PART, but in this case:
>>>>>>>
>>>>>>> *icount=1*
>>>>>>> *DO i=1,npt*
>>>>>>> *    IF(i is a particle to send)THEN*
>>>>>>>
>>>>>>>        *DATASEND_REAL(:,icount,:)=QQmls(:,i,:)*
>>>>>>> *      icount=icount+1*
>>>>>>>
>>>>>>> *    ENDIF*
>>>>>>> *ENDDO*
>>>>>>>
>>>>>>> I would like to have a sort of *MPI_my_TYPE* to do that (like
>>>>>>> *MPI_my_STRUCT*), and not create an *MPI_TYPE_VECTOR* every time,
>>>>>>> because *DATASEND_REAL* changes size every time.
>>>>>>>
>>>>>>> I hope I have made myself clear.
>>>>>>>
>>>>>>> So is it correct to use *MPI_TYPE_VECTOR*? Can I do what I want?
>>>>>>>
>>>>>>> In the meantime, I will study some examples.
>>>>>>>
>>>>>>> Thanks again
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> Diego
>>>>>>>
>>>>>>>
>>>>>>> On 16 January 2015 at 07:39, George Bosilca <bosi...@icl.utk.edu>
>>>>>>> wrote:
>>>>>>>
>>>>>>>>  The subarray creation is a multi-dimensional extension of the
>>>>>>>> vector type. You can see it as a vector of vectors of vectors and so
>>>>>>>> on, one vector per dimension. The stride array is used to declare,
>>>>>>>> for each dimension, the relative displacement (in number of elements)
>>>>>>>> from the beginning of that dimension's array.
>>>>>>>>
>>>>>>>> It is important to use the regular type constructors when you can
>>>>>>>> take advantage of such regularity, instead of resorting to struct or
>>>>>>>> the h* variants.
>>>>>>>> This ensures better packing/unpacking performance, as well as possible
>>>>>>>> future support for one-sided communications.
>>>>>>>>
>>>>>>>> George.
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> > On Jan 15, 2015, at 19:31, Gus Correa <g...@ldeo.columbia.edu>
>>>>>>>> wrote:
>>>>>>>> >
>>>>>>>> > I never used MPI_Type_create_subarray, only MPI_Type_Vector.
>>>>>>>> > What I like about MPI_Type_Vector is that you can define a stride,
>>>>>>>> > hence you can address any regular pattern in memory.
>>>>>>>> > However, it envisages the array layout in memory as a big 1-D
>>>>>>>> array,
>>>>>>>> > with a linear index progressing in either Fortran or C order.
>>>>>>>> >
>>>>>>>> > Somebody correct me please if I am wrong, but at first sight
>>>>>>>> MPI_Type_Vector sounds more flexible to me than 
>>>>>>>> MPI_Type_create_subarray,
>>>>>>>> exactly because the latter doesn't have strides.
>>>>>>>> >
>>>>>>>> > The downside is that you need to do some index arithmetic to
>>>>>>>> figure
>>>>>>>> > the right strides, etc, to match the corresponding
>>>>>>>> > Fortran90 array sections.
>>>>>>>> >
>>>>>>>> > There are good examples in the "MPI - The complete reference"
>>>>>>>> books I suggested to you before (actually in vol 1).
>>>>>>>> >
>>>>>>>> > Online I could find the two man pages (good information, but no
>>>>>>>> example):
>>>>>>>> >
>>>>>>>> > http://www.open-mpi.org/doc/v1.8/man3/MPI_Type_vector.3.php
>>>>>>>> >
>>>>>>>> http://www.open-mpi.org/doc/v1.8/man3/MPI_Type_create_subarray.3.php
>>>>>>>> >
>>>>>>>> > There is a very simple 2D example of MPI_Type_vector using
>>>>>>>> strides here:
>>>>>>>> >
>>>>>>>> > https://computing.llnl.gov/tutorials/mpi/#Derived_Data_Types
>>>>>>>> >
>>>>>>>> > and a similar one here:
>>>>>>>> >
>>>>>>>> >
>>>>>>>> http://static.msi.umn.edu/tutorial/scicomp/general/MPI/content6.html
>>>>>>>> >
>>>>>>>> > Gus Correa
>>>>>>>> >
>>>>>>>> >> On 01/15/2015 06:53 PM, Diego Avesani wrote:
>>>>>>>> >> dear George, dear Gus, dear all,
>>>>>>>> >> Could you please tell me where I can find a good example?
>>>>>>>> >> I am sorry, but I cannot understand the 3D array.
>>>>>>>> >>
>>>>>>>> >>
>>>>>>>> >> Really Thanks
>>>>>>>> >>
>>>>>>>> >> Diego
>>>>>>>> >>
>>>>>>>> >>
>>>>>>>> >> On 15 January 2015 at 20:13, George Bosilca <bosi...@icl.utk.edu
>>>>>>>> >> <mailto:bosi...@icl.utk.edu>> wrote:
>>>>>>>> >>
>>>>>>>> >>
>>>>>>>> >>>    On Jan 15, 2015, at 06:02 , Diego Avesani <
>>>>>>>> diego.aves...@gmail.com
>>>>>>>> >>>    <mailto:diego.aves...@gmail.com>> wrote:
>>>>>>>> >>>
>>>>>>>> >>>    Dear Gus, Dear all,
>>>>>>>> >>>    Thanks a lot.
>>>>>>>> >>>    MPI_Type_Struct works well for the first part of my problem,
>>>>>>>> so I
>>>>>>>> >>>    am very happy to be able to use it.
>>>>>>>> >>>
>>>>>>>> >>>    Regarding MPI_TYPE_VECTOR.
>>>>>>>> >>>
>>>>>>>> >>>    I have studied it, and for a simple case it is clear to me
>>>>>>>> >>>    what it does (at least I believe). For example, if I have a
>>>>>>>> >>>    matrix defined as:
>>>>>>>> >>>    REAL, ALLOCATABLE :: AA(:,:)
>>>>>>>> >>>    ALLOCATE(AA(100,5))
>>>>>>>> >>>
>>>>>>>> >>>    I could send part of it defining
>>>>>>>> >>>
>>>>>>>> >>>    CALL
>>>>>>>> MPI_TYPE_VECTOR(5,1,5,MPI_DOUBLE_PRECISION,/MY_NEW_TYPE/)
>>>>>>>> >>>
>>>>>>>> >>>    after that I can send part of it with
>>>>>>>> >>>
>>>>>>>> >>>    CALL MPI_SEND( AA(1:/10/,:), /10/, /MY_NEW_TYPE/, 1, 0,
>>>>>>>> >>>    MPI_COMM_WORLD );
>>>>>>>> >>>
>>>>>>>> >>>    Have I understood correctly?
>>>>>>>> >>>
>>>>>>>> >>>    What can I do in the case of a three-dimensional array? For
>>>>>>>> >>>    example AA(:,:,:); I am looking at MPI_TYPE_CREATE_SUBARRAY.
>>>>>>>> >>>    Is that the correct way?
>>>>>>>> >>>
>>>>>>>> >>>    Thanks again
>>>>>>>> >>
>>>>>>>> >>    Indeed, using the subarray is the right approach independent
>>>>>>>> of the
>>>>>>>> >>    number of dimensions of the data (you can use it instead of
>>>>>>>> >>    MPI_TYPE_VECTOR as well).
>>>>>>>> >>
>>>>>>>> >>       George.
>>>>>>>> >>
>>>>>>>> >>
>>>>>>>> >>>
>>>>>>>> >>>
>>>>>>>> >>>
>>>>>>>> >>>
>>>>>>>> >>>    Diego
>>>>>>>> >>>
>>>>>>>> >>>
>>>>>>>> >>>    On 13 January 2015 at 19:04, Gus Correa <
>>>>>>>> g...@ldeo.columbia.edu
>>>>>>>> >>>    <mailto:g...@ldeo.columbia.edu>> wrote:
>>>>>>>> >>>
>>>>>>>> >>>        Hi Diego
>>>>>>>> >>>        I guess MPI_Type_Vector is the natural way to send and
>>>>>>>> receive
>>>>>>>> >>>        Fortran90 array sections (e.g. your QQMLS(:,50:100,:)).
>>>>>>>> >>>        I used that before and it works just fine.
>>>>>>>> >>>        I think that is pretty standard MPI programming style.
>>>>>>>> >>>        I guess MPI_Type_Struct tries to emulate Fortran90 and C
>>>>>>>> >>>        structures
>>>>>>>> >>>        (as you did in your previous code, with all the surprises
>>>>>>>> >>>        regarding alignment, etc), not array sections.
>>>>>>>> >>>        Also, MPI type vector should be easier to use (and
>>>>>>>> >>>        probably more efficient) than MPI type struct, with fewer
>>>>>>>> >>>        memory alignment problems.
>>>>>>>> >>>        I hope this helps,
>>>>>>>> >>>        Gus Correa
>>>>>>>> >>>
>>>>>>>> >>>        PS - These books have a quite complete description and
>>>>>>>> several
>>>>>>>> >>>        examples
>>>>>>>> >>>        of all MPI objects and functions, including MPI types
>>>>>>>> (native
>>>>>>>> >>>        and user defined):
>>>>>>>> >>>        http://mitpress.mit.edu/books/mpi-complete-reference-0
>>>>>>>> >>>        http://mitpress.mit.edu/books/mpi-complete-reference-1
>>>>>>>> >>>
>>>>>>>> >>>        [They cover MPI 1 and 2. I guess there is a new/upcoming
>>>>>>>> book
>>>>>>>> >>>        with MPI 3, but for what you're doing 1 and 2 are more
>>>>>>>> than
>>>>>>>> >>>        enough.]
>>>>>>>> >>>
>>>>>>>> >>>
>>>>>>>> >>>        On 01/13/2015 09:22 AM, Diego Avesani wrote:
>>>>>>>> >>>
>>>>>>>> >>>            Dear all,
>>>>>>>> >>>
>>>>>>>> >>>            I had some wonderful discussions about
>>>>>>>> >>>            MPI_type_create_struct and isend/irecv with
>>>>>>>> >>>            Gilles, Gustavo, George, Gus, Tom and Jeff. Now
>>>>>>>> >>>            everything is clearer and my program works.
>>>>>>>> >>>
>>>>>>>> >>>            Now I have another question. In my program I have a
>>>>>>>> >>>            matrix:
>>>>>>>> >>>
>>>>>>>> >>>            /QQMLS(:,:,:)/ that is allocated as
>>>>>>>> >>>
>>>>>>>> >>>            /ALLOCATE(QQMLS(9,npt,18))/, where npt is the number
>>>>>>>> >>>            of particles
>>>>>>>> >>>
>>>>>>>> >>>            QQMLS is double precision.
>>>>>>>> >>>
>>>>>>>> >>>            I would like to send part of it from one CPU to another,
>>>>>>>> >>>            for example QQMLS(:,50:100,:). I mean sending the QQMLS
>>>>>>>> >>>            of the particles between 50 and 100.
>>>>>>>> >>>            I suppose that I could use MPI_Type_vector, but I am not
>>>>>>>> >>>            sure. The particles that I want to send could be from 25
>>>>>>>> >>>            to 50, etc., so the blocklength changes every time.
>>>>>>>> >>>
>>>>>>>> >>>            Do I have to use MPI_type_create_struct?
>>>>>>>> >>>            Have I correctly understood MPI_Type_vector?
>>>>>>>> >>>
>>>>>>>> >>>            Thanks a lot
>>>>>>>> >>>
>>>>>>>> >>>
>>>>>>>> >>>            Diego
>>>>>>>> >>>
>>>>>>>> >>>
>>>>>>>> >>>
>>>>>>>> >>>
>>>>>>>> >>>
>>>>>>>> >>>
>>>>>>>> >>>
>>>>>>>> >>
>>>>>>>> >>
>>>>>>>> >>
>>>>>>>> >>
>>>>>>>> >>
>>>>>>>> >>
>>>>>>>> >
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>>
>>>>
>>>
>>>
>>
>
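As an aside, George suggested above to keep the committed particle struct and simply vary the count in the send, instead of building a new type each time. A minimal sketch of that idea follows; the field list of tParticle (two integers and two reals) is only an assumption for illustration. The attached test program for the row / row-column types comes after it.

program struct_count_sketch
   implicit none
   include 'mpif.h'
   type tParticle
      sequence
      integer :: ip, mc
      real    :: RP, QQ
   end type tParticle
   type(tParticle), allocatable :: DATASEND(:), DATARECV(:)
   type(tParticle) :: sample
   integer :: MPI_my_STRUCT, rank, ierr, n, stat(MPI_STATUS_SIZE)
   integer :: blocks(2), types(2)
   integer(kind=MPI_ADDRESS_KIND) :: displs(2), base

   call MPI_INIT(ierr)
   call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)

   ! two blocks per particle: the two integers, then the two reals,
   ! with displacements measured from the start of one particle
   blocks = (/ 2, 2 /)
   types  = (/ MPI_INTEGER, MPI_REAL /)
   call MPI_GET_ADDRESS(sample%ip, base,      ierr)
   call MPI_GET_ADDRESS(sample%RP, displs(2), ierr)
   displs(1) = 0
   displs(2) = displs(2) - base
   call MPI_TYPE_CREATE_STRUCT(2, blocks, displs, types, MPI_my_STRUCT, ierr)
   call MPI_TYPE_COMMIT(MPI_my_STRUCT, ierr)

   n = 5                              ! number of particles to exchange
   allocate(DATASEND(n), DATARECV(n))
   if (rank == 0) then
      DATASEND(:)%ip = 1 ; DATASEND(:)%mc = 2
      DATASEND(:)%RP = 3.0 ; DATASEND(:)%QQ = 4.0
      ! one committed type, variable count: no new type per message
      call MPI_SEND(DATASEND, n, MPI_my_STRUCT, 1, 200, MPI_COMM_WORLD, ierr)
   else if (rank == 1) then
      call MPI_RECV(DATARECV, n, MPI_my_STRUCT, 0, 200, MPI_COMM_WORLD, stat, ierr)
      write(*,*) 'received RP = ', DATARECV(1)%RP
   end if

   call MPI_TYPE_FREE(MPI_my_STRUCT, ierr)
   call MPI_FINALIZE(ierr)
end program struct_count_sketch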
program vector
IMPLICIT NONE
include 'mpif.h'
integer SIZE_
parameter(SIZE_=4)
integer numtasks, rank, source, ierr
real*4 AA(SIZE_,3,4), BB(SIZE_,3,4)
integer stat(MPI_STATUS_SIZE), rowtype, colrowtype

! Fortran stores this array in column-major order:
! the first index varies fastest in memory
AA = 0.
AA(1,1,1) = 1.0
AA(1,1,2) = 4.0
AA(1,1,3) = 10.0
AA(1,1,4) = 33.0
! AA(1,:,2)=[4.0,5.0,6.0,2.0]
!
! AA(2,:,1)=[8.0,9.0,10.0,11.0]
! AA(2,:,2)=[11.0,12.0,13.0,12.0]

   CALL MPI_INIT(ierr)
   CALL MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)
   CALL MPI_COMM_SIZE(MPI_COMM_WORLD, numtasks, ierr)

   ! rowtype describes one 2D slab AA(:,:,k): 3 columns of SIZE_ reals,
   ! with SIZE_ reals between the starts of consecutive columns
   CALL MPI_TYPE_VECTOR(3, SIZE_, SIZE_, MPI_REAL, rowtype, ierr)
   CALL MPI_TYPE_COMMIT(rowtype, ierr)

   ! colrowtype stacks the 4 slabs along the third dimension; the stride
   ! is counted in extents of rowtype (SIZE_*3 reals), so a stride of 1
   ! places the slabs back to back and covers the whole array
   CALL MPI_TYPE_VECTOR(4, 1, 1, rowtype, colrowtype, ierr)
   CALL MPI_TYPE_COMMIT(colrowtype, ierr)

   CALL MPI_BARRIER(MPI_COMM_WORLD, ierr)

      IF(rank==0)THEN
         call MPI_SEND(AA(1,1,1), 1, colrowtype, 1, 300, MPI_COMM_WORLD, ierr)
      ENDIF

      IF(rank==1)THEN
         source = 0
         call MPI_RECV(BB(1,1,1), 1, colrowtype, source, 300, MPI_COMM_WORLD, stat, ierr)
         WRITE(*,*) ' b= ', BB(1,1,:)
      ENDIF

   CALL MPI_TYPE_FREE(rowtype, ierr)
   CALL MPI_TYPE_FREE(colrowtype, ierr)
   call MPI_FINALIZE(ierr)
END PROGRAM vector
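If only part of each slab were needed, say AA(1:3,1:3,:), the inner vector would no longer be contiguous: its extent (11 reals) would not match the slab spacing (SIZE_*3 = 12 reals), so the outer stride has to be given in bytes with MPI_TYPE_CREATE_HVECTOR. A minimal sketch of that variant (same 4x3x4 array; names are only illustrative):

program hvector_sketch
   implicit none
   include 'mpif.h'
   integer, parameter :: SIZE_ = 4
   real*4  :: AA(SIZE_,3,4), BB(SIZE_,3,4)
   integer :: rank, ierr, stat(MPI_STATUS_SIZE), rowtype, blocktype
   integer(kind=MPI_ADDRESS_KIND) :: slab_bytes

   call MPI_INIT(ierr)
   call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)

   ! one partial slab AA(1:3,1:3,k): 3 columns, 3 reals each, SIZE_ reals apart
   call MPI_TYPE_VECTOR(3, 3, SIZE_, MPI_REAL, rowtype, ierr)
   call MPI_TYPE_COMMIT(rowtype, ierr)

   ! stack 4 such slabs; the slabs are SIZE_*3 reals (48 bytes) apart, which
   ! is not a multiple of the extent of rowtype, so give the stride in bytes
   slab_bytes = SIZE_*3*4                 ! 4 bytes per real*4
   call MPI_TYPE_CREATE_HVECTOR(4, 1, slab_bytes, rowtype, blocktype, ierr)
   call MPI_TYPE_COMMIT(blocktype, ierr)

   AA = 1.0 ; BB = -1.0
   if (rank == 0) then
      call MPI_SEND(AA(1,1,1), 1, blocktype, 1, 300, MPI_COMM_WORLD, ierr)
   else if (rank == 1) then
      call MPI_RECV(BB(1,1,1), 1, blocktype, 0, 300, MPI_COMM_WORLD, stat, ierr)
      ! only BB(1:3,1:3,:) is overwritten; BB(4,:,:) keeps -1.0
      write(*,*) 'BB(1:3,1,1) = ', BB(1:3,1,1)
   end if

   call MPI_TYPE_FREE(rowtype, ierr)
   call MPI_TYPE_FREE(blocktype, ierr)
   call MPI_FINALIZE(ierr)
end program hvector_sketch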
