Please check the value returned from strlen() in MPI_Send. Is it 99 or 
100? I suspect it is 25 or 26. The message size for send and recv should 
match.
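
A minimal sketch (illustrative only, reusing the variable names from the
program quoted below) of how to check on the receive side how many
characters actually arrived, via MPI_Get_count:

    int count;
    MPI_Recv(message, 100, MPI_CHAR, source, tag, MPI_COMM_WORLD, &status);
    MPI_Get_count(&status, MPI_CHAR, &count);   /* chars actually received */
    printf("got %d chars from rank %d\n", count, status.MPI_SOURCE);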

Wei-keng


On Tue, 13 May 2008, Rob Ross wrote:

> On rank 0 you're opening the file N-1 times and closing it only once. This
> isn't your problem (I will leave that to others), but you should probably fix
> that in your code. The unclosed handles might be buffering data and never
> writing it out.
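> 
> A minimal sketch of that pattern, reusing the variable names from the
> program quoted below: rank 0 opens the file once, appends each message,
> and flushes/closes once afterward:
> 
>     pfile = fopen("teste.txt", "w");
>     for (source = 1; source < p; source++) {
>         MPI_Recv(message, 100, MPI_CHAR, source, tag, MPI_COMM_WORLD, &status);
>         fprintf(pfile, "%s\n", message);
>         fflush(pfile);   /* push buffered data out to the file system */
>     }
>     fclose(pfile);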
> 
> Rob
> 
> On May 13, 2008, at 2:32 PM, Davi Vercillo C. Garcia wrote:
> >Hi all,
> >
> >I'm trying to run some programs that use MPI-IO, like MPI-IO Test, and
> >I'm having some problems. To figure out what is happening, I built a
> >simple distributed I/O program with MPI and the "fprintf" function,
> >shown below, but the output is not correct when it is executed on PVFS.
> >
> > #include <stdio.h>
> > #include <stdlib.h>
> > #include <string.h>
> > #include "mpi.h"
> >
> >int main(int argc, char** argv) {
> >   int my_rank;          /* Rank of process */
> >   int p;                /* Number of processes */
> >   int source;           /* Rank of sender */
> >   int dest;             /* Rank of receiver */
> >   int tag = 50;         /* Tag for messages */
> >   char message[100];    /* Storage for the message */
> >   MPI_Status status;    /* Return status for receive */
> >   MPI_Init(&argc, &argv);
> >   MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);
> >   MPI_Comm_size(MPI_COMM_WORLD, &p);
> >
> >   FILE *pfile;
> >
> >   if (my_rank != 0) {
> >       sprintf(message, "Greetings from process %d!", my_rank);
> >       dest = 0;
> >       pfile = fopen("teste.txt","w");
> >       /* Use strlen(message)+1 to include '\0' */
> >       MPI_Send(message, strlen(message)+1, MPI_CHAR, dest, tag,
> >                MPI_COMM_WORLD);
> >
> >   } else { /* my_rank == 0 */
> >       for (source = 1 ; source < p ; source++) {
> >            MPI_Recv(message, 100, MPI_CHAR, source, tag,
> >                     MPI_COMM_WORLD, &status);
> >            pfile = fopen("teste.txt","a");
> >            fprintf(pfile,"%s\n", message);
> >       }
> >   }
> >   fclose(pfile);
> >   MPI_Finalize();
> >
> >   return 0;
> >} /* main */
> >
> >When I execute it with 25 nodes on NFS, the output is:
> >
> >Greetings from process 24!
> >Greetings from process 23!
> >Greetings from process 22!
> >Greetings from process 21!
> >Greetings from process 20!
> >Greetings from process 19!
> >Greetings from process 18!
> >Greetings from process 17!
> >Greetings from process 16!
> >Greetings from process 15!
> >Greetings from process 14!
> >Greetings from process 13!
> >Greetings from process 12!
> >Greetings from process 11!
> >Greetings from process 10!
> >Greetings from process 9!
> >Greetings from process 8!
> >Greetings from process 7!
> >Greetings from process 6!
> >Greetings from process 5!
> >Greetings from process 4!
> >Greetings from process 3!
> >Greetings from process 2!
> >Greetings from process 1!
> >
> >But with PVFS, the output is only:
> >
> >Greetings from process 1!
> >
> >I'm using the latest version of PVFS2 with 1 MDS and 3 I/O servers. The
> >content of the pvfs2tab file is:
> >
> >tcp://campogrande01:3334/pvfs2-fs /mnt/pvfs2 pvfs2 rw,user,noauto 0 0
> >
> >
> >Can someone help me, please?!
> >
> >-- 
> >Davi Vercillo Carneiro Garcia
> >
> >Universidade Federal do Rio de Janeiro
> >Departamento de Ciência da Computação
> >DCC-IM/UFRJ - http://www.dcc.ufrj.br
> >
> >"Good things come to those who... wait." - Debian Project
> >
> >"A computer is like air conditioning: it becomes useless when you open
> >windows." - Linus Torvalds
> >
> 
_______________________________________________
Pvfs2-users mailing list
[email protected]
http://www.beowulf-underground.org/mailman/listinfo/pvfs2-users
