Sorry about that; I had misinterpreted your original post as showing the matching send-receive pair. The example you give below does indeed seem correct, which means you might have to show us the code that doesn't work. Note that I am by no means a Fortran expert; I'm more versed in C. The one hint I'd give a C programmer in this case is: make sure your receiving buffers are really large enough (i.e., you send 3 doubles but eventually receive 4; did you allocate the receiving array for 3 elements or for 4?).
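For instance (a minimal sketch, reusing the names toroot/tonode/root/n from your post; the declarations and the allocatable buffers are my guess, since you didn't show them):

       double precision, allocatable :: toroot(:), tonode(:)
       integer :: status(MPI_STATUS_SIZE), ierr
       allocate(toroot(3))   ! 3 doubles go out...
       allocate(tonode(4))   ! ...but 4 come back, so the receive buffer must hold 4
       call MPI_SEND(toroot,3,MPI_DOUBLE_PRECISION,root,n,MPI_COMM_WORLD,ierr)
       ! ... root enlarges the array, then ...
       call MPI_RECV(tonode,4,MPI_DOUBLE_PRECISION,root,n,MPI_COMM_WORLD,status,ierr)

If tonode were allocated for only 3 elements, receiving 4 would write past the end of the buffer, which is exactly the kind of thing that produces a segmentation fault.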

Eric

Enrico Barausse wrote:
Sorry, I hadn't changed the subject. I'm reposting:

Hi

I think it's correct. What I want to do is send an array of 3 doubles from
process 1 to process 0 (= root):

call MPI_SEND(toroot,3,MPI_DOUBLE_PRECISION,root,n,MPI_COMM_WORLD,ierr)

In some other part of the code, process 0 acts on those 3 doubles, turns
them into 4, and sends them back to process 1, which receives them with

call MPI_RECV(tonode,4,MPI_DOUBLE_PRECISION,root,n,MPI_COMM_WORLD,status,ierr)
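One way to double-check what actually arrives is to query the status after the receive (a sketch; the variable nrecv is mine):

       integer :: nrecv
       call MPI_GET_COUNT(status,MPI_DOUBLE_PRECISION,nrecv,ierr)
       if(nrecv/=4) print *, 'expected 4 doubles, got ', nrecv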

In practice, what I do is basically given by this simple code (which,
unfortunately, does not reproduce the segmentation fault):



       program sendrecv
       use mpi
       implicit none

       integer :: a(5), b(4)
       integer :: id, numprocs, k, ierr
       integer :: status(MPI_STATUS_SIZE)

       a=(/1,2,3,4,5/)

       call MPI_INIT(ierr)
       call MPI_COMM_RANK(MPI_COMM_WORLD, id, ierr)
       call MPI_COMM_SIZE(MPI_COMM_WORLD, numprocs, ierr)

       ! this example needs exactly two processes
       if(numprocs/=2) then
               call MPI_FINALIZE(ierr)
               stop
       end if

       if(id==0) then
               ! rank 0: send all 5 integers, receive 4 back
               do k=1,5
                       a=a+1
                       call MPI_SEND(a,5,MPI_INTEGER,1,k,MPI_COMM_WORLD,ierr)
                       call MPI_RECV(b,4,MPI_INTEGER,1,k,MPI_COMM_WORLD,status,ierr)
               end do
       else
               ! rank 1: receive 5 integers, send the first 4 back
               do k=1,5
                       call MPI_RECV(a,5,MPI_INTEGER,0,k,MPI_COMM_WORLD,status,ierr)
                       b=a(1:4)
                       call MPI_SEND(b,4,MPI_INTEGER,0,k,MPI_COMM_WORLD,ierr)
               end do
       end if

       call MPI_FINALIZE(ierr)
       end program sendrecv
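(I run this with two processes, e.g. mpirun -np 2 a.out, and it completes without any segmentation fault.)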