Hi Jelena,

 int* ttt = (int*)malloc(2 * sizeof(int));
 ttt[0] = myworldrank + 1;
 ttt[1] = myworldrank * 2;
 /* sum the vector across all nodes; the result ends up on rank 0 */
 if (myworldrank == 0)
   MPI_Reduce(MPI_IN_PLACE, ttt, 2, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
 else
   MPI_Reduce(ttt, NULL, 2, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

 FOR_WORLDNODE0 printf("%d, %d\n", ttt[0], ttt[1]);


That works. Thanks so much.

Anyi

On 7/11/07, Jelena Pjesivac-Grbovic <pj...@cs.utk.edu> wrote:
Hi Anyi,

you are using reduce incorrectly: the MPI standard does not allow the send and
receive buffers to be the same, so passing the same buffer as both input and
output is erroneous.
If you want to do the operation in place, you must specify "MPI_IN_PLACE" as
the send buffer at the root process; the root's receive buffer then supplies
its own contribution and is overwritten with the result.
Thus, your code should look something like:
--------
   int* ttt = (int*)malloc(2 * sizeof(int));
   ttt[0] = myworldrank + 1;
   ttt[1] = myworldrank * 2;
   if (root == myworldrank) {
      MPI_Reduce(MPI_IN_PLACE, ttt, 2, MPI_INT, MPI_SUM, root, MPI_COMM_WORLD);
   } else {
      MPI_Reduce(ttt, NULL, 2, MPI_INT, MPI_SUM, root, MPI_COMM_WORLD);
   }
   FOR_WORLDNODE0 printf("%d, %d\n" , ttt[0],ttt[1]);
--------

hope this helps,
Jelena
PS. If I remember the standard correctly, you must specify a real send buffer
on the non-root nodes - it cannot be MPI_IN_PLACE (if you try it, you'll get a
segfault).
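
For completeness, here is the whole pattern as a small self-contained program
(just a quick sketch of the same idea, not code from your application); on 4
processes it should print 10, 12:
--------
/* In-place reduction sketch: each rank fills a 2-element vector and the
 * element-wise sums end up on the root. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int myworldrank;
    const int root = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &myworldrank);

    int *ttt = (int *)malloc(2 * sizeof(int));
    ttt[0] = myworldrank + 1;
    ttt[1] = myworldrank * 2;

    if (myworldrank == root) {
        /* Root: its own contribution is read from ttt and the summed
         * result is written back into ttt. */
        MPI_Reduce(MPI_IN_PLACE, ttt, 2, MPI_INT, MPI_SUM, root, MPI_COMM_WORLD);
        printf("%d, %d\n", ttt[0], ttt[1]);
    } else {
        /* Non-root: ttt is the send buffer; the receive buffer is ignored. */
        MPI_Reduce(ttt, NULL, 2, MPI_INT, MPI_SUM, root, MPI_COMM_WORLD);
    }

    free(ttt);
    MPI_Finalize();
    return 0;
}
--------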

On Wed, 11 Jul 2007 any...@pa.uky.edu wrote:

> Hi,
>  I have a code in which every node holds a vector of the same size, and I want
> to do an element-wise vector sum and return the result to the root, like this:
>
>  int* ttt = (int*)malloc(2 * sizeof(int));
>  ttt[0] = myworldrank + 1;
>  ttt[1] = myworldrank * 2;
>   MPI_Reduce(ttt, ttt, 2, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
>  FOR_WORLDNODE0 printf("%d, %d\n" , ttt[0],ttt[1]);
>
>  myworldrank is the rank of the local node. If I run this code on 4 nodes, the
> result I expect is 10, 12 (ranks 0-3 contribute 1+2+3+4 and 0+2+4+6), but what
> I get is 18, 24. So I'm confused about MPI_Reduce - isn't it supposed to do the
> vector sum?
>  I tried MPI_Allreduce and it gave me the correct answer, 10, 12.
>
>  Has anyone run into the same problem, or am I calling MPI_Reduce() incorrectly?
>
>  Thanks.
> Anyi
>

--
Jelena Pjesivac-Grbovic, Pjesa
Graduate Research Assistant
Innovative Computing Laboratory
Computer Science Department, UTK
Claxton Complex 350
(865) 974 - 6722
(865) 974 - 6321
jpjes...@utk.edu

Murphy's Law of Research:
         Enough research will tend to support your theory.
