Re: [OMPI users] How to create multi-thread parallel program using thread-safe send and recv?

2009-09-27 Thread guosong

The reason I asked about the background thread is that I need to make MPI calls 
in that thread, and that is also why I got errors in the little test program. 
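
A minimal sketch of the pattern in question (an illustration, not the original 
program): request MPI_THREAD_MULTIPLE up front and refuse to start any 
MPI-calling background thread when the library does not actually provide it.

#include <cstdio>
#include "mpi.h"

int main(int argc, char **argv)
{
 int provided = MPI_THREAD_SINGLE;
 MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
 if (provided < MPI_THREAD_MULTIPLE)
 {
  // Concurrent MPI calls from additional threads would not be safe here.
  std::fprintf(stderr, "needed MPI_THREAD_MULTIPLE, got provided = %d\n", provided);
  MPI_Finalize();
  return 1;
 }
 // ... safe from this point on to let a background thread make MPI calls ...
 MPI_Finalize();
 return 0;
}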



Date: Sun, 27 Sep 2009 14:00:00 -0700
From: eugene@sun.com
To: us...@open-mpi.org
Subject: Re: [OMPI users] How to create multi-thread parallel program using 
thread-safe send and recv?

guosong wrote: 

> I used MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided); in my
> program and got provided = 0, which turns out to be MPI_THREAD_SINGLE. Does
> this mean that I cannot use the MPI_THREAD_MULTIPLE model?

Right.  I've not done much multithreaded MPI work.  Someone else on this list 
can advise you better on what you need to do to get provided = MPI_THREAD_MULTIPLE.

> I wrote a little program to test the multithreading and it worked sometimes 
> and failed sometimes. It also hangs sometimes. Is this only because 
> MPI_THREAD_MULTIPLE is not supported, or are there bugs in the program?

I don't know if there are bugs in the program, but without the MPI threads 
support you can't really test it.  

> BTW, if I want to create a background thread which is sort of like a daemon 
> thread, how can I achieve that in MPI programs? Thanks.

I'm not sure I understand the question.  Creating a background thread isn't 
part of MPI.  You would use something else, like POSIX threads or OpenMP.


Re: [OMPI users] How to create multi-thread parallel program using thread-safe send and recv?

2009-09-27 Thread Eugene Loh




guosong wrote:

> I used MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided); in my
> program and got provided = 0, which turns out to be MPI_THREAD_SINGLE. Does
> this mean that I cannot use the MPI_THREAD_MULTIPLE model?

Right.  I've not done much multithreaded MPI work.  Someone else on this list
can advise you better on what you need to do to get
provided = MPI_THREAD_MULTIPLE.
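
(One thing that may be worth checking, though this is an assumption about the
installation rather than something established in this thread: whether the
Open MPI build in use was configured with thread support at all.  ompi_info
reports this; something like

 ompi_info | grep -i thread

should show whether the library can provide MPI_THREAD_MULTIPLE.)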
> I wrote a little program to test the multithreading and it worked sometimes
> and failed sometimes. It also hangs sometimes. Is this only because
> MPI_THREAD_MULTIPLE is not supported, or are there bugs in the program?

I don't know if there are bugs in the program, but without the MPI threads
support you can't really test it. 

> BTW, if I want to create a background thread which is sort of like a daemon
> thread, how can I achieve that in MPI programs? Thanks.

I'm not sure I understand the question.  Creating a background thread isn't
part of MPI.  You would use something else, like POSIX threads or OpenMP.
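
To make that suggestion concrete, here is a minimal POSIX-threads sketch of a
daemon-style receiver thread.  It assumes the MPI library really does provide
MPI_THREAD_MULTIPLE; the function name drain_messages and the shutdown tag 999
are made up for the example.

#include <pthread.h>
#include <cstdio>
#include "mpi.h"

// Background receiver: loops on MPI_Recv until a shutdown tag arrives.
// Only valid when MPI_Init_thread returned MPI_THREAD_MULTIPLE.
static void* drain_messages(void*)
{
 int msg = 0;
 MPI_Status status;
 while (true)
 {
  MPI_Recv(&msg, 1, MPI_INT, MPI_ANY_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD, &status);
  if (status.MPI_TAG == 999)   // made-up shutdown tag
   break;
  std::printf("background thread got %d from rank %d\n", msg, status.MPI_SOURCE);
 }
 return NULL;
}

int main(int argc, char **argv)
{
 int provided;
 MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
 if (provided == MPI_THREAD_MULTIPLE)
 {
  pthread_t tid;
  pthread_create(&tid, NULL, drain_messages, NULL);
  // ... the main thread is free to make its own MPI calls concurrently ...
  int me, dummy = 0;
  MPI_Comm_rank(MPI_COMM_WORLD, &me);
  MPI_Send(&dummy, 1, MPI_INT, me, 999, MPI_COMM_WORLD);  // tell our own receiver to stop
  pthread_join(tid, NULL);
 }
 MPI_Finalize();
 return 0;
}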




Re: [OMPI users] How to create multi-thread parallel program using thread-safe send and recv?

2009-09-27 Thread guosong

Hi Loh,

I used MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided); in my 
program and got provided = 0, which turns out to be MPI_THREAD_SINGLE. Does 
this mean that I cannot use the MPI_THREAD_MULTIPLE model? I wrote a little 
program to test the multithreading and it worked sometimes and failed 
sometimes. It also hangs sometimes. Is this only because MPI_THREAD_MULTIPLE 
is not supported, or are there bugs in the program? I attached the little 
program below:



#include <iostream>
#include <fstream>
#include <sstream>
#include <string>
#include <cstdlib>
#include <pthread.h>
#include "mpi.h"
using namespace std;
#define MSG_QUERY_SIZE 16  //sizeof(MPI_query_msg) = 16

struct MPI_query_msg
{
 int flag;   // -1:request cell; 0:query coordinate; 1:there is no cell to grant
 int x;
 int y;
 int ignited;   // meaningful only when x and y are non-negative: 0 = not ignited, 1 = ignited
};

pthread_mutex_t _dealmutex, _dealmutex2;   // assumed global declarations; they are not in the posted excerpt
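
// Note: the sends and receives below move this struct as MSG_QUERY_SIZE raw
// MPI_CHAR bytes, which relies on sizeof(MPI_query_msg) == 16 and on every
// rank using the same struct layout.  A derived datatype (for example one
// built with MPI_Type_contiguous(4, MPI_INT, &newtype)) would be a more
// portable alternative to the raw-byte approach used here.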

void* backRecv(void* arg)
{
 int myid, nprocs;
 pthread_mutex_init(&_dealmutex2, NULL);
 stringstream RANK;
 MPI_Status status;
 MPI_Request  req2;
 MPI_Comm_rank(MPI_COMM_WORLD, &myid);
 MPI_Comm_size(MPI_COMM_WORLD, &nprocs);
 int left = (myid - 1 + nprocs - 1) % (nprocs - 1);
 int right = (myid + 1) % (nprocs - 1);
 MPI_query_msg rMSG;
 RANK << myid;
 cout << myid << " create background message recv" << endl;
 int x, y;
 //char c;
 int m;
 int count = 0;
 string filename("f_");
 filename += RANK.str();
 filename += "_backRecv.txt";
 fstream fout(filename.c_str(), ios::out);
 if(!fout)
 {
  cout << "can not create the file " << filename << endl;
  fout.close();
  exit(1);
 }

 while(true)
 {
  MPI_Recv(&rMSG, MSG_QUERY_SIZE, MPI_CHAR, MPI_ANY_SOURCE, 222,
           MPI_COMM_WORLD, &status);
  //MPI_Irecv(&rMSG, MSG_QUERY_SIZE, MPI_CHAR, MPI_ANY_SOURCE, 222,
  //          MPI_COMM_WORLD, &req2);
  //MPI_Wait(&req2, &status);
  fout << "BACKREV:" << myid << " recv from " << status.MPI_SOURCE
       << " rMSG.flag = " << rMSG.flag << " with tag 222" << endl;
  fout.flush();
  if(rMSG.flag == -1)
  {
   fout << "***backRecv FINISHED IN " << myid << "" << endl;
   fout.flush();
   fout.close();
   pthread_exit(NULL);
   return 0;
  } 
 };
}

int main(int argc, char **argv) 
{
 int myid = 0;
 int provided;
 int nprocs = 0;
 pthread_t pt1 = 0;
 pthread_t pt2 = 0;
 int pret1 = 0;
 int pret2 = 0;
 int i = 0, j = 0, m = 0;
 //MPI_Status status;
 MPI_Request  requ1, requ2;
 MPI_Status status1, status2;

 MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
 //MPI_Init(&argc, &argv);
 MPI_Comm_size(MPI_COMM_WORLD, &nprocs);
 MPI_Comm_rank(MPI_COMM_WORLD, &myid);
 pthread_mutex_init(&_dealmutex, NULL);

 if(myid == nprocs - 1)  // the last one
 {
  if(provided == MPI_THREAD_MULTIPLE)
  {
   cout << myid << " got MPI_THREAD_MULTIPLE " << endl;
  }
  else
  {
   cout << myid << " MPI_THREAD_MULTIPLE = " << MPI_THREAD_MULTIPLE << endl;
   cout << myid << " MPI_THREAD_SINGLE = " << MPI_THREAD_SINGLE << endl;
   cout << myid << " got provided = " << provided << endl;
  }
  MPI_query_msg sMSGqueue[50], rMSG;
  for(i=0; i<50; i++)
  {
   sMSGqueue[i].flag = i;
   sMSGqueue[i].x = i;
   sMSGqueue[i].y = i;
   sMSGqueue[i].ignited = i;
  }
  while(j < 50)
  {
   MPI_Recv(&rMSG, MSG_QUERY_SIZE, MPI_CHAR, MPI_ANY_SOURCE, MPI_ANY_TAG,
            MPI_COMM_WORLD, &status2);
   //MPI_Irecv(&rMSG, MSG_QUERY_SIZE, MPI_CHAR, MPI_ANY_SOURCE, MPI_ANY_TAG,
   //          MPI_COMM_WORLD, &requ2);
   //MPI_Wait(&requ2, &status2);
   cout << "MAIN(" << j << "): " << myid << " recvs from "<< status2.MPI_SOURCE 
<< " with tag = " << status2.MPI_TAG << " rMSG.flag = " << rMSG.flag << endl;

   MPI_Send(&(sMSGqueue[j]), MSG_QUERY_SIZE, MPI_CHAR, status2.MPI_SOURCE, 
status2.MPI_TAG, MPI_COMM_WORLD);
   //MPI_Isend(&(sMSGqueue[j]), MSG_QUERY_SIZE, MPI_CHAR, status2.MPI_SOURCE,
   //          status2.MPI_TAG, MPI_COMM_WORLD, &requ1);
   //MPI_Wait(&requ1, &status1);
   cout << "MAIN(" << j << "): " << myid << " sends to "<< status2.MPI_SOURCE 
<< " with tag = " << status2.MPI_TAG << " sMSGqueue[j].flag = " << 
sMSGqueue[j].flag << endl;
   j++;
  };
  int count = 0;
  while(true)
  {
   MPI_Recv(&rMSG, MSG_QUERY_SIZE, MPI_CHAR, MPI_ANY_SOURCE, MPI_ANY_TAG,
            MPI_COMM_WORLD, &status2);
   //MPI_Irecv(&rMSG, MSG_QUERY_SIZE, MPI_CHAR, MPI_ANY_SOURCE, MPI_ANY_TAG,
   //          MPI_COMM_WORLD, &requ2);
   //MPI_Wait(&requ2, &status2);
   rMSG.flag = -1;
   MPI_Send(&rMSG, MSG_QUERY_SIZE, MPI_CHAR, status2.MPI_SOURCE,
            status2.MPI_TAG, MPI_COMM_WORLD);
   //MPI_Isend(&rMSG, MSG_QUERY_SIZE, MPI_CHAR, status2.MPI_SOURCE,
   //          status2.MPI_TAG, MPI_COMM_WORLD, &requ1);
   //MPI_Wait(&requ1, &status1);
   cout << "MAIN sends termination to " << status2.MPI_SOURCE << endl;
   count++;
   if(count == myid)
break;
  };
  cout << "***MAIN SUCESS!" << endl;
 }
 else
 {
  pret1 = pthread_create(&pt1, NULL, backRecv, NULL);
  if(pret1 != 0)
  {
   cout << myid << "backRecv Thread Create Failed." << endl;
   exit(1);
  }
  MPI_query_msg sMSG, rMSG;
  rMSG.flag = myid;
  rMSG.x = myid;
  rMSG.y = myid;
  rMSG.ignited = myid;
  sMSG.flag = myid;
  sMSG.x = myid;
  sMSG.y = myid;
  sMSG.ignited = myid;
  int left = (myid - 1 + nprocs - 1) % (nprocs - 1);
  int right = (myid + 1) % (nprocs - 1);
  while(true)
  {
   MPI_Send(&sMSG, MSG_QUERY_SIZE, MPI_CHAR, nprocs-1, myid, 

Re: [OMPI users] Is there an "flush()"-like function in MPI?

2009-09-27 Thread Ashley Pittman

There are tools available to allow you to see the "message queues" of a
process; this might help you identify why the messages you are waiting on
are not completing.  One such tool is linked to in my signature; you could
also look into TotalView or DDT.

I would also suggest that, as you are seeing random hangs and crashes,
running your code under Valgrind might be advantageous.
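
For example, something along these lines (the executable name is a placeholder)
is a typical way to run an MPI job under Valgrind:

 mpirun -np 4 valgrind --leak-check=full ./your_program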

Ashley Pittman.

On Sun, 2009-09-27 at 02:05 +0800, guosong wrote:
> Yes, I know there should be a bug. But I do not know where and why.
> The strange thing was that sometimes it worked, but then there would
> be a segmentation fault. If it did not work, some process must be sitting
> there waiting for the message. There are many iterations in my program
> (using a loop). The "bug" would appear only after a few iterations, which
> means the communication worked in the earlier iterations. I am quite
> confused now.


-- 

Ashley Pittman, Bath, UK.

Padb - An open source job inspection tool for parallel computing
http://padb.pittman.org.uk