Re: [OMPI users] Fwd: Open MPI v1.4 cant find default hostfile

2010-04-18 Thread Ralph Castain
Afraid I can't help you - I've never seen that behavior on any system, can't 
replicate it anywhere, and have no idea what might cause it.

On Apr 18, 2010, at 9:15 AM, Mario Ogrizek wrote:

> It is the Parallel Tools Platform (PTP) for the Eclipse IDE, a plugin.
> I don't think it is the source of the problem.
> 
> The same thing happens when I run it from the shell. It has something to do 
> with the mapping or something else, since it always maps for job 0, whatever 
> that means.
> 
> On Sun, Apr 18, 2010 at 4:50 PM, Ralph Castain  wrote:
> Again, what is PTP?
> 
> I can't replicate this on any system we can access, so it may be something 
> about this PTP thing.
> 
> On Apr 18, 2010, at 1:37 AM, Mario Ogrizek wrote:
> 
>> Of course I checked that; I have all of these things.
>> I simplified the program, and it's the same.
>> Nothing gave me a clue, except the more detailed write-out from PTP.
>> Here is the critical part of it:
>> (1.2 one, this is correct)
>> [Mario.local:05548]  Map for job: 1  Generated by mapping mode: byslot
>>  Starting vpid: 0   Vpid range: 4   Num app_contexts: 1
>> ...
>> ...
>> 
>> (1.4 one)
>> [Mario.local:05542]  Map for job: 0  Generated by mapping mode: byslot
>>  Starting vpid: 0   Vpid range: 1   Num app_contexts: 1
>> ...
>> ...
>> 
>> It seems 1.4 maps the wrong job. I'm not sure what that refers to, 
>> but I hope it gives you some clues.
>>  
>> On Sun, Apr 18, 2010 at 4:07 AM, Ralph Castain  wrote:
>> Just to check what is going on, why don't you remove that message passing 
>> code and just
>> 
>> printf("Hello MPI World from process %d!", my_rank
>> 
>>  in each process? Much more direct - avoids any ambiguity.
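>> 
>> Something like this stripped-down test would do (just a sketch; it assumes 
>> only stdio.h and mpi.h):
>> 
>> #include <stdio.h>
>> #include "mpi.h"
>> 
>> int main(int argc, char* argv[]){
>>     int my_rank;                              /* rank of this process */
>>     MPI_Init(&argc, &argv);                   /* start up MPI */
>>     MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);  /* ask for our rank */
>>     printf("Hello MPI World from process %d!\n", my_rank);
>>     MPI_Finalize();                           /* shut down MPI */
>>     return 0;
>> }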
>> 
>> Also, be certain that you compile this program for the specific OMPI version 
>> you are running it under. OMPI is NOT binary compatible across releases - 
>> you have to recompile the program for the specific release you are going to 
>> use.
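>> 
>> A quick sanity check (assuming both the 1.2 and 1.4 installs are still on 
>> this machine) is to confirm that the mpicc you compile with and the mpirun 
>> you launch with come from the same installation, e.g.:
>> 
>> which mpicc mpirun        # both should point into the same OMPI install
>> mpirun --version          # should report the release you compiled against
>> mpicc hello.c -o hello    # "hello.c" here stands for whatever your source file is
>> mpirun -np 4 ./hello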
>> 
>> 
>> On Apr 17, 2010, at 4:52 PM, Mario Ogrizek wrote:
>> 
>>> Of course, it's the same program; it wasn't recompiled for a week.
>>> 
>>> 
>>> #include <stdio.h>
>>> #include <string.h>
>>> #include "mpi.h"
>>> 
>>> int main(int argc, char* argv[]){
>>> int  my_rank; /* rank of process */
>>> int  p;   /* number of processes */
>>> int source;   /* rank of sender */
>>> int dest; /* rank of receiver */
>>> int tag=0;/* tag for messages */
>>> char message[100];/* storage for message */
>>> MPI_Status status ;   /* return status for receive */
>>> 
>>> /* start up MPI */
>>> 
>>> MPI_Init(&argc, &argv);
>>> 
>>> /* find out process rank */
>>> MPI_Comm_rank(MPI_COMM_WORLD, &my_rank); 
>>> 
>>> 
>>> /* find out number of processes */
>>> MPI_Comm_size(MPI_COMM_WORLD, &p);
>>> 
>>> 
>>> if (my_rank !=0){
>>> /* create message */
>>> sprintf(message, "Hello MPI World from process %d!", my_rank);
>>> dest = 0;
>>> /* use strlen+1 so that '\0' get transmitted */
>>> MPI_Send(message, strlen(message)+1, MPI_CHAR,
>>>dest, tag, MPI_COMM_WORLD);
>>> }
>>> else{
>>> printf("Hello MPI World From process 0: Num processes: %d\n",p);
>>> for (source = 1; source < p; source++) {
>>> MPI_Recv(message, 100, MPI_CHAR, source, tag,
>>>   MPI_COMM_WORLD, &status);
>>> printf("%s\n",message);
>>> }
>>> }
>>> /* shut down MPI */
>>> MPI_Finalize(); 
>>> 
>>> 
>>> return 0;
>>> }
>>> 
>>> I triple-checked:
>>> v1.2 output
>>> Hello MPI World From process 0: Num processes: 4
>>> Hello MPI World from process 1!
>>> Hello MPI World from process 2!
>>> Hello MPI World from process 3!
>>> 
>>> v1.4 output:
>>> Hello MPI World From process 0: Num processes: 1
>>> Hello MPI World From process 0: Num processes: 1
>>> Hello MPI World From process 0: Num processes: 1
>>> Hello MPI World From process 0: Num processes: 1
>>> 
>>> 
>>> 
>>> 
>>> 
>>> 
>>> 
>>> On Sat, Apr 17, 2010 at 9:13 PM, Ralph Castain  wrote:
>>> 
>>> On Apr 17, 2010, at 11:17 AM, Mario Ogrizek wrote:
>>> 
 Hahaha, ok then that WAS silly! :D
 So there is no way to utilize both cores with mpi?
>>> 
>>> We are using both cores - it is just that they are on the same node. Unless 
>>> told otherwise, the processes will use shared memory for communication.
>>> 
 
 Ah well, I'll correct that.
 
 From the console, I'm starting a job like this: mpirun -np 4 Program, where I 
 want to run Program on 4 processors.
 I was just stumped when I got the same output 4 times, as if there were 4 
 processes ranked 0.
 With the old version of MPI (1.2), the same command would give 4 
 processes ranked 0..3.
>>> 
>>> And so you should - if not, then there is something wrong. No way mpirun 
>>> would start 4 processes ranked 0. How are you printing the rank? Are you 
>>> sure you are getting it correctly?
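>>> 
>>> Here is one more quick check (a sketch; it assumes an Open MPI 1.3-or-later 
>>> mpirun, which exports OMPI_COMM_WORLD_RANK into each process's environment):
>>> 
>>> #include <stdio.h>
>>> #include <stdlib.h>
>>> 
>>> int main(void){
>>>     /* If mpirun actually launched this process, the variable holds its rank.
>>>        If it is missing, the process was not started under that mpirun's
>>>        control and is running as an independent singleton (rank 0, size 1). */
>>>     const char *r = getenv("OMPI_COMM_WORLD_RANK");
>>>     printf("OMPI_COMM_WORLD_RANK = %s\n", r ? r : "(not set)");
>>>     return 0;
>>> }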

Re: [OMPI users] Fwd: Open MPI v1.4 cant find default hostfile

2010-04-18 Thread Mario Ogrizek
It is the Parallel Tools Platform (PTP) for the Eclipse IDE, a plugin.
I don't think it is the source of the problem.

The same thing happens when I run it from the shell. It has something to do
with the mapping or something else, since it always maps for job 0, whatever
that means.

On Sun, Apr 18, 2010 at 4:50 PM, Ralph Castain  wrote:

> Again, what is PTP?
>
> I can't replicate this on any system we can access, so it may be something
> about this PTP thing.
>
> On Apr 18, 2010, at 1:37 AM, Mario Ogrizek wrote:
>
> Of course I checked that; I have all of these things.
> I simplified the program, and it's the same.
> Nothing gave me a clue, except the more detailed write-out from PTP.
> Here is the critical part of it:
> (1.2 one, this is correct)
> [Mario.local:05548]  Map for job: 1 Generated by mapping mode: byslot
>   Starting vpid: 0 Vpid range: 4 Num app_contexts: 1
> ...
> ...
>
> (1.4 one)
> [Mario.local:05542]  Map for job: 0 Generated by mapping mode: byslot
>   Starting vpid: 0 Vpid range: 1 Num app_contexts: 1
> ...
> ...
>
> It seems 1.4 maps the wrong job. I'm not sure what that refers to,
> but I hope it gives you some clues.
>
> On Sun, Apr 18, 2010 at 4:07 AM, Ralph Castain  wrote:
>
>> Just to check what is going on, why don't you remove that message passing
>> code and just
>>
>> printf("Hello MPI World from process %d!", my_rank
>>
>>  in each process? Much more direct - avoids any ambiguity.
>>
>> Also, be certain that you compile this program for the specific OMPI
>> version you are running it under. OMPI is NOT binary compatible across
>> releases - you have to recompile the program for the specific release you
>> are going to use.
>>
>>
>> On Apr 17, 2010, at 4:52 PM, Mario Ogrizek wrote:
>>
>> Of course, it's the same program; it wasn't recompiled for a week.
>>
>>
>> #include <stdio.h>
>> #include <string.h>
>> #include "mpi.h"
>>
>> int main(int argc, char* argv[]){
>>  int  my_rank; /* rank of process */
>>  int  p;   /* number of processes */
>>  int source;   /* rank of sender */
>>  int dest; /* rank of receiver */
>>  int tag=0;/* tag for messages */
>>  char message[100];/* storage for message */
>>  MPI_Status status ;   /* return status for receive */
>>
>>  /* start up MPI */
>>
>>  MPI_Init(&argc, &argv);
>>
>> /* find out process rank */
>>  MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);
>>
>>
>> /* find out number of processes */
>>  MPI_Comm_size(MPI_COMM_WORLD, &p);
>>
>>
>> if (my_rank !=0){
>>  /* create message */
>>  sprintf(message, "Hello MPI World from process %d!", my_rank);
>>  dest = 0;
>> /* use strlen+1 so that '\0' get transmitted */
>>  MPI_Send(message, strlen(message)+1, MPI_CHAR,
>>dest, tag, MPI_COMM_WORLD);
>>  }
>> else{
>>  printf("Hello MPI World From process 0: Num processes: %d\n",p);
>>  for (source = 1; source < p; source++) {
>>  MPI_Recv(message, 100, MPI_CHAR, source, tag,
>>   MPI_COMM_WORLD, &status);
>>  printf("%s\n",message);
>>  }
>> }
>>  /* shut down MPI */
>>  MPI_Finalize();
>>
>>
>>  return 0;
>> }
>>
>> I triple-checked:
>> v1.2 output
>> Hello MPI World From process 0: Num processes: 4
>> Hello MPI World from process 1!
>> Hello MPI World from process 2!
>> Hello MPI World from process 3!
>>
>> v1.4 output:
>>
>> Hello MPI World From process 0: Num processes: 1
>>
>> Hello MPI World From process 0: Num processes: 1
>>
>> Hello MPI World From process 0: Num processes: 1
>>
>> Hello MPI World From process 0: Num processes: 1
>>
>>
>>
>>
>>
>>
>>
>> On Sat, Apr 17, 2010 at 9:13 PM, Ralph Castain  wrote:
>>
>>>
>>> On Apr 17, 2010, at 11:17 AM, Mario Ogrizek wrote:
>>>
>>> Hahaha, ok then that WAS silly! :D
>>> So there is no way to utilize both cores with mpi?
>>>
>>>
>>> We are using both cores - it is just that they are on the same node.
>>> Unless told otherwise, the processes will use shared memory for
>>> communication.
>>>
>>>
>>> Ah well, I'll correct that.
>>>
>>> From the console, I'm starting a job like this: mpirun -np 4 Program, where I
>>> want to run Program on 4 processors.
>>> I was just stumped when I got the same output 4 times, as if there were 4
>>> processes ranked 0.
>>> With the old version of MPI (1.2), the same command would give 4
>>> processes ranked 0..3.
>>>
>>>
>>> And so you should - if not, then there is something wrong. No way mpirun
>>> would start 4 processes ranked 0. How are you printing the rank? Are you
>>> sure you are getting it correctly?
>>>
>>>
>>>
>>> Hope you see my question.
>>>
>>> On Sat, Apr 17, 2010 at 6:29 PM, Ralph Castain  wrote:
>>>

 On Apr 17, 2010, at 1:16 AM, Mario Ogrizek wrote:

 I am new to MPI, so I'm sorry for any silly questions.

 My idea was to try to use a dual-core machine as two nodes. I have
 limited access to a cluster, so this was just for "testing" purposes.
 My default hostfile contains the usual comments and these two nodes:

 node0
 node1

 I thought that each processor is a node for MPI purposes.

Re: [OMPI users] Fwd: Open MPI v1.4 cant find default hostfile

2010-04-18 Thread Ralph Castain
Again, what is PTP?

I can't replicate this on any system we can access, so it may be something 
about this PTP thing.
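
To see what map your command line actually produces outside of PTP, you can ask 
mpirun to print it directly (assuming your build has the option; check "mpirun -h"):

mpirun --display-map -np 4 ./Program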

On Apr 18, 2010, at 1:37 AM, Mario Ogrizek wrote:

> Of course I checked that; I have all of these things.
> I simplified the program, and it's the same.
> Nothing gave me a clue, except the more detailed write-out from PTP.
> Here is the critical part of it:
> (1.2 one, this is correct)
> [Mario.local:05548]  Map for job: 1   Generated by mapping mode: byslot
>   Starting vpid: 0   Vpid range: 4   Num app_contexts: 1
> ...
> ...
> 
> (1.4 one)
> [Mario.local:05542]  Map for job: 0   Generated by mapping mode: byslot
>   Starting vpid: 0   Vpid range: 1   Num app_contexts: 1
> ...
> ...
> 
> It seems 1.4 maps the wrong job. I'm not sure what that refers to, but 
> I hope it gives you some clues.
>  
> On Sun, Apr 18, 2010 at 4:07 AM, Ralph Castain  wrote:
> Just to check what is going on, why don't you remove that message passing 
> code and just
> 
> printf("Hello MPI World from process %d!", my_rank
> 
>  in each process? Much more direct - avoids any ambiguity.
> 
> Also, be certain that you compile this program for the specific OMPI version 
> you are running it under. OMPI is NOT binary compatible across releases - you 
> have to recompile the program for the specific release you are going to use.
> 
> 
> On Apr 17, 2010, at 4:52 PM, Mario Ogrizek wrote:
> 
>> Of course, it's the same program; it wasn't recompiled for a week.
>> 
>> 
>> #include <stdio.h>
>> #include <string.h>
>> #include "mpi.h"
>> 
>> int main(int argc, char* argv[]){
>>  int  my_rank; /* rank of process */
>>  int  p;   /* number of processes */
>>  int source;   /* rank of sender */
>>  int dest; /* rank of receiver */
>>  int tag=0;/* tag for messages */
>>  char message[100];/* storage for message */
>>  MPI_Status status ;   /* return status for receive */
>>  
>>  /* start up MPI */
>>  
>>  MPI_Init(&argc, &argv);
>> 
>>  /* find out process rank */
>>  MPI_Comm_rank(MPI_COMM_WORLD, &my_rank); 
>>  
>> 
>>  /* find out number of processes */
>>  MPI_Comm_size(MPI_COMM_WORLD, &p);
>> 
>>  
>>  if (my_rank !=0){
>>  /* create message */
>>  sprintf(message, "Hello MPI World from process %d!", my_rank);
>>  dest = 0;
>>  /* use strlen+1 so that '\0' get transmitted */
>>  MPI_Send(message, strlen(message)+1, MPI_CHAR,
>> dest, tag, MPI_COMM_WORLD);
>>  }
>>  else{
>>  printf("Hello MPI World From process 0: Num processes: %d\n",p);
>>  for (source = 1; source < p; source++) {
>>  MPI_Recv(message, 100, MPI_CHAR, source, tag,
>>    MPI_COMM_WORLD, &status);
>>  printf("%s\n",message);
>>  }
>>  }
>>  /* shut down MPI */
>>  MPI_Finalize(); 
>>  
>>  
>>  return 0;
>> }
>> 
>> I triple-checked:
>> v1.2 output
>> Hello MPI World From process 0: Num processes: 4
>> Hello MPI World from process 1!
>> Hello MPI World from process 2!
>> Hello MPI World from process 3!
>> 
>> v1.4 output:
>> Hello MPI World From process 0: Num processes: 1
>> Hello MPI World From process 0: Num processes: 1
>> Hello MPI World From process 0: Num processes: 1
>> Hello MPI World From process 0: Num processes: 1
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> On Sat, Apr 17, 2010 at 9:13 PM, Ralph Castain  wrote:
>> 
>> On Apr 17, 2010, at 11:17 AM, Mario Ogrizek wrote:
>> 
>>> Hahaha, ok then that WAS silly! :D
>>> So there is no way to utilize both cores with mpi?
>> 
>> We are using both cores - it is just that they are on the same node. Unless 
>> told otherwise, the processes will use shared memory for communication.
>> 
>>> 
>>> Ah well, I'll correct that.
>>> 
>>> From the console, I'm starting a job like this: mpirun -np 4 Program, where I 
>>> want to run Program on 4 processors.
>>> I was just stumped when I got the same output 4 times, as if there were 4 
>>> processes ranked 0.
>>> With the old version of MPI (1.2), the same command would give 4 
>>> processes ranked 0..3.
>> 
>> And so you should - if not, then there is something wrong. No way mpirun 
>> would start 4 processes ranked 0. How are you printing the rank? Are you 
>> sure you are getting it correctly?
>> 
>> 
>>> 
>>> Hope you see my question.
>>> 
>>> On Sat, Apr 17, 2010 at 6:29 PM, Ralph Castain  wrote:
>>> 
>>> On Apr 17, 2010, at 1:16 AM, Mario Ogrizek wrote:
>>> 
 I am new to MPI, so I'm sorry for any silly questions.
 
 My idea was to try to use a dual-core machine as two nodes. I have limited 
 access to a cluster, so this was just for "testing" purposes.
 My default hostfile contains the usual comments and these two nodes:
 
> node0
> node1
 I thought that each processor is a node for MPI purposes.
>>> 
>>> I'm afraid not - it is just another processor on that node. So you only 
>>> have one node as far as OMPI is concerned.

Re: [OMPI users] Fwd: Open MPI v1.4 cant find default hostfile

2010-04-18 Thread Mario Ogrizek
Of course I checked that; I have all of these things.
I simplified the program, and it's the same.
Nothing gave me a clue, except the more detailed write-out from PTP.
Here is the critical part of it:
(1.2 one, this is correct)

[Mario.local:05548]  Map for job: 1 Generated by mapping mode: byslot

  Starting vpid: 0 Vpid range: 4 Num app_contexts: 1

...

...

(1.4 one)

[Mario.local:05542]  Map for job: 0 Generated by mapping mode: byslot

  Starting vpid: 0 Vpid range: 1 Num app_contexts: 1

...

...

It seems 1.4 maps the wrong job. I'm not sure what that refers to,
but I hope it gives you some clues.

On Sun, Apr 18, 2010 at 4:07 AM, Ralph Castain  wrote:

> Just to check what is going on, why don't you remove that message passing
> code and just
>
> printf("Hello MPI World from process %d!", my_rank
>
>  in each process? Much more direct - avoids any ambiguity.
>
> Also, be certain that you compile this program for the specific OMPI
> version you are running it under. OMPI is NOT binary compatible across
> releases - you have to recompile the program for the specific release you
> are going to use.
>
>
> On Apr 17, 2010, at 4:52 PM, Mario Ogrizek wrote:
>
> Of course, it's the same program; it wasn't recompiled for a week.
>
>
> #include <stdio.h>
> #include <string.h>
> #include "mpi.h"
>
> int main(int argc, char* argv[]){
> int  my_rank; /* rank of process */
> int  p;   /* number of processes */
> int source;   /* rank of sender */
> int dest; /* rank of receiver */
> int tag=0;/* tag for messages */
> char message[100];/* storage for message */
> MPI_Status status ;   /* return status for receive */
>
>
> /* start up MPI */
>
>
> MPI_Init(&argc, &argv);
>
> /* find out process rank */
> MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);
>
>
>
> /* find out number of processes */
> MPI_Comm_size(MPI_COMM_WORLD, &p);
>
>
> if (my_rank !=0){
> /* create message */
> sprintf(message, "Hello MPI World from process %d!", my_rank);
> dest = 0;
> /* use strlen+1 so that '\0' get transmitted */
> MPI_Send(message, strlen(message)+1, MPI_CHAR,
>dest, tag, MPI_COMM_WORLD);
> }
> else{
> printf("Hello MPI World From process 0: Num processes: %d\n",p);
> for (source = 1; source < p; source++) {
> MPI_Recv(message, 100, MPI_CHAR, source, tag,
>   MPI_COMM_WORLD, &status);
> printf("%s\n",message);
> }
> }
> /* shut down MPI */
> MPI_Finalize();
>
>
>
> return 0;
> }
>
> I triple-checked:
> v1.2 output
> Hello MPI World From process 0: Num processes: 4
> Hello MPI World from process 1!
> Hello MPI World from process 2!
> Hello MPI World from process 3!
>
> v1.4 output:
>
> Hello MPI World From process 0: Num processes: 1
>
> Hello MPI World From process 0: Num processes: 1
>
> Hello MPI World From process 0: Num processes: 1
>
> Hello MPI World From process 0: Num processes: 1
>
>
>
>
>
>
>
> On Sat, Apr 17, 2010 at 9:13 PM, Ralph Castain  wrote:
>
>>
>> On Apr 17, 2010, at 11:17 AM, Mario Ogrizek wrote:
>>
>> Hahaha, ok then that WAS silly! :D
>> So there is no way to utilize both cores with mpi?
>>
>>
>> We are using both cores - it is just that they are on the same node.
>> Unless told otherwise, the processes will use shared memory for
>> communication.
>>
>>
>> Ah well, I'll correct that.
>>
>> From the console, I'm starting a job like this: mpirun -np 4 Program, where I
>> want to run Program on 4 processors.
>> I was just stumped when I got the same output 4 times, as if there were 4
>> processes ranked 0.
>> With the old version of MPI (1.2), the same command would give 4
>> processes ranked 0..3.
>>
>>
>> And so you should - if not, then there is something wrong. No way mpirun
>> would start 4 processes ranked 0. How are you printing the rank? Are you
>> sure you are getting it correctly?
>>
>>
>>
>> Hope you see my question.
>>
>> On Sat, Apr 17, 2010 at 6:29 PM, Ralph Castain  wrote:
>>
>>>
>>> On Apr 17, 2010, at 1:16 AM, Mario Ogrizek wrote:
>>>
>>> I am new to MPI, so I'm sorry for any silly questions.
>>>
>>> My idea was to try to use a dual-core machine as two nodes. I have
>>> limited access to a cluster, so this was just for "testing" purposes.
>>> My default hostfile contains the usual comments and these two nodes:
>>>
>>> node0
>>> node1
>>>
>>> I thought that each processor is a node for MPI purposes.
>>>
>>>
>>> I'm afraid not - it is just another processor on that node. So you only
>>> have one node as far as OMPI is concerned.
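>>> 
>>> If you do want OMPI to treat both cores as schedulable slots explicitly, the 
>>> usual hostfile form (just a sketch; "myhostfile" is whatever you name the 
>>> file) is a single line with a slot count:
>>> 
>>> localhost slots=2
>>> 
>>> and then something like: mpirun -np 2 --hostfile myhostfile ./Program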
>>>
>>> I'm not sure what you mean by "mpirun cmd line"?
>>>
>>>
>>> How are you starting your job? The usual way is with "mpirun -n N ...".
>>> That is what we mean by the "mpirun cmd line" - i.e., what command are you
>>> using to start your job?
>>>
>>> It sounds like things are actually working correctly. You might look at
>>> "mpirun -h" for possible options of interest.
>>>
>>>
>>>
>>> Regards,
>>>
>>> Mario
>>>
>>> On Sat, Apr 17, 2010 at 1:54 AM, Ralph Castain  wrote:
>>>

 On Apr 16, 2010, at 5:08 PM,