Hi,

Is there an overlapping section in the MPI part?

Otherwise, please check:
- the declared type of all the variables (consistency)
- correct initialization of the array "wave" (to zero)
- maybe use temporary variables like
      real :: size1, size2, factor
      size1  = dx + dy
      size2  = dhx + dhy
      factor = dt*size2/(size1**2)
  and then in the big loop:
      wave(it,j,k) = wave(it,j,k)*factor
The code will also run faster.
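
For example, a minimal sketch of what that could look like (the argument
list and declarations here are assumptions; adapt them to your actual
subroutine):

      subroutine scale_wave(wave, nt, nhx, k1, k2, dt, dx, dy, dhx, dhy)
         implicit none
         integer, intent(in) :: nt, nhx, k1, k2
         real, intent(in)    :: dt, dx, dy, dhx, dhy
         real, intent(inout) :: wave(nt, nhx, *)
         real    :: size1, size2, factor
         integer :: it, j, k

         size1  = dx + dy
         size2  = dhx + dhy
         factor = dt*size2/(size1**2)   ! computed once, outside the loop

         do k = k1, k2
            do j = 1, nhx
               do it = 1, nt
                  wave(it,j,k) = wave(it,j,k)*factor
               end do
            end do
         end do
      end subroutine scale_wave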

Yann

On 16/01/2017 at 14:28, Oscar Mojica wrote:
Hello everybody

I'm having a problem with a parallel program written in Fortran. I have a 3D array which is divided in two along the third dimension, so that two processes perform some operations on a part of the cube using a subroutine. Each process also has the complete cube. Before each process calls the subroutine, I compare its sub-array with the corresponding part of the whole cube; they are the same. The subroutine simply performs point-to-point operations in a loop, i.e.


      do k = k1, k2
         do j = 1, nhx
            do it = 1, nt
               wave(it,j,k) = wave(it,j,k)*dt/(dx+dy)*(dhx+dhy)/(dx+dy)
            end do
         end do
      end do


where wave is the 3D array and the other values are constants.
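
For reference, a simplified sketch of how the cube is split along the
third dimension (the names, sizes, and slab bookkeeping here are
illustrative, not my actual code):

      program split_cube
         use mpi
         implicit none
         integer, parameter :: nt = 100, nhx = 50, nhy = 64
         real, allocatable  :: wave(:,:,:)
         integer :: ierr, rank, nprocs, chunk, k1, k2

         call MPI_Init(ierr)
         call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
         call MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierr)

         allocate(wave(nt, nhx, nhy))
         wave = 0.0                          ! whole cube on every rank

         ! each rank updates only its own slab k1..k2
         chunk = nhy/nprocs
         k1 = rank*chunk + 1
         k2 = merge(nhy, (rank+1)*chunk, rank == nprocs-1)

         wave(:, :, k1:k2) = wave(:, :, k1:k2)*0.5

         call MPI_Finalize(ierr)
      end program split_cube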


After leaving the subroutine, I notice a difference between the values calculated by process 1 and the values I get when the whole cube is passed to the subroutine but the subroutine works only on that process's part, i.e.


--- complete    2017-01-12 10:30:23.000000000 -0400
+++ half        2017-01-12 10:34:57.000000000 -0400
@@ -4132545,7 +4132545,7 @@
   -2.5386049E-04
   -2.9899486E-04
   -3.4697619E-04
-  -3.7867704E-04
+  -3.7867710E-04
    0.0000000E+00
    0.0000000E+00
    0.0000000E+00


When I do this with more processes, the same thing happens on every process other than process zero. I find it very strange. I am disabling optimization when compiling.

In the end the results are visually the same, but not numerically. I am working with single precision.
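
To illustrate what I mean by "visually the same, but not numerically":
in single precision, two algebraically equivalent ways of evaluating the
same expression can round differently in the last bit (the values below
are illustrative, not my actual data):

      program last_bit
         implicit none
         real :: w, dt, dx, dy, dhx, dhy, a, b
         w  = -2.3456789e-01
         dt = 1.0e-3
         dx = 12.5; dy = 12.5; dhx = 25.0; dhy = 25.0
         a = w*dt/(dx+dy)*(dhx+dhy)/(dx+dy)   ! evaluated left to right
         b = w*(dt*(dhx+dhy)/((dx+dy)**2))    ! common factor computed first
         print *, a, b, a - b                 ! may differ in the last bit
      end program last_bit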


Any idea what may be going on? I do not know if this is related to MPI.



Oscar Mojica
Geologist Ph.D. in Geophysics
SENAI CIMATEC Supercomputing Center
Lattes: http://lattes.cnpq.br/0796232840554652



_______________________________________________
users mailing list
users@lists.open-mpi.org
https://rfd.newmexicoconsortium.org/mailman/listinfo/users

