Re: [OMPI users] Problem with MPI_Barrier (Inter-communicator)

2012-04-09 Thread Thatyene Louise Alves de Souza Ramos
Edgar,

I forgot to answer your previous question. I used Open MPI 1.5.4 and the C++ API.

Thatyene Ramos

On Mon, Apr 9, 2012 at 8:00 PM, Thatyene Louise Alves de Souza Ramos <
thaty...@gmail.com> wrote:

> Hi Edgar, sorry about the late response. I've been travelling without
> Internet access.
>
> Well, I took the code Rodrigo provided and modified the client to make the
> dup after the creation of the new inter-communicator, without 1 process.
> That is, I just replaced lines 54-55 in the removeRank method with
> my if-else block.
>
> I tried this because calling a new Create after the first Create did not
> work, and I thought the problem might be the communicator. So, I tried
> duplicating the inter-communicator to see if that worked.
>
> Thanks.
>
> Thatyene Ramos.
>
>
> On Thu, Apr 5, 2012 at 5:10 PM, Edgar Gabriel <gabr...@cs.uh.edu> wrote:
>
>> so just to confirm, I ran our test suite for inter-communicator
>> collective operations and communicator duplication, and everything still
>> works. Specifically comm_dup on an intercommunicator is not
>> fundamentally broken, but worked for my tests.
>>
>> Having your code, to see what it precisely does, would help me to
>> hunt the problem down, since I am otherwise not able to reproduce the
>> problem.
>>
>> Also, which version of Open MPI did you use?
>>
>> Thanks
>> Edgar
>>
>> On 4/4/2012 3:09 PM, Thatyene Louise Alves de Souza Ramos wrote:
>> > Hi Edgar, thank you for the response.
>> >
>> > Unfortunately, I've tried with and without this option. In both cases
>> > the result was the same... =(
>> >
>> > On Wed, Apr 4, 2012 at 5:04 PM, Edgar Gabriel <gabr...@cs.uh.edu
>> > <mailto:gabr...@cs.uh.edu>> wrote:
>> >
>> > did you try to start the program with the --mca coll ^inter switch that
>> > I mentioned? Collective dup for intercommunicators should work; it's
>> > probably again the bcast over a communicator of size 1 that is causing
>> > the hang, and you could avoid it with the flag that I mentioned above.
>> >
>> > Also, if you could attach your test code, that would help in hunting
>> > things down.
>> >
>> > Thanks
>> > Edgar
>> >
>> > On 4/4/2012 2:18 PM, Thatyene Louise Alves de Souza Ramos wrote:
>> > > Hi there.
>> > >
>> > > I've made some tests related to the problem reported by Rodrigo. And I
>> > > think, though I'd rather be wrong, that collective calls like Create
>> > > and Dup do not work with inter-communicators. I've tried this in the
>> > > client group:
>> > >
>> > > MPI::Intercomm tmp_inter_comm;
>> > >
>> > > tmp_inter_comm = server_comm.Create(server_comm.Get_group().Excl(1, &rank));
>> > >
>> > > if (server_comm.Get_rank() != rank)
>> > >     server_comm = tmp_inter_comm.Dup();
>> > > else
>> > >     server_comm = MPI::COMM_NULL;
>> > >
>> > > The server_comm is the original inter communicator with the server
>> > group.
>> > >
>> > > I've noticed that the program hangs in the Dup call. It seems that the
>> > > tmp_inter_comm created without one process still includes this process,
>> > > because the other processes are waiting for it to call the Dup too.
>> > >
>> > > What do you think?
>> > >
>> > > On Wed, Mar 28, 2012 at 6:03 PM, Edgar Gabriel <gabr...@cs.uh.edu
>> > <mailto:gabr...@cs.uh.edu>
>> > > <mailto:gabr...@cs.uh.edu <mailto:gabr...@cs.uh.edu>>> wrote:
>> > >
>> > > it just uses a different algorithm which avoids the bcast on a
>> > > communicator of 1 (which is causing the problem here).
>> > >
>> > > Thanks
>> > > Edgar
>> > >
>> > > On 3/28/2012 12:08 PM, Rodrigo Oliveira wrote:
>> > > > Hi Edgar,
>> > > >
>> > > > I tested the execution of my code using the option -mca coll
>> > ^inter as
>> > > > you suggested and the program worked fine, even when I use 1
>> > server
>>

Re: [OMPI users] Problem with MPI_Barrier (Inter-communicator)

2012-04-09 Thread Thatyene Louise Alves de Souza Ramos
Hi Edgar, sorry about the late response. I've been travelling without
Internet access.

Well, I took the code Rodrigo provided and modified the client to make the
dup after the creation of the new inter-communicator, without 1 process.
That is, I just replaced lines 54-55 in the removeRank method with my
if-else block.

I tried this because calling a new Create after the first Create did not
work, and I thought the problem might be the communicator. So, I tried
duplicating the inter-communicator to see if that worked.

Thanks.

Thatyene Ramos.

On Thu, Apr 5, 2012 at 5:10 PM, Edgar Gabriel <gabr...@cs.uh.edu> wrote:

> so just to confirm, I ran our test suite for inter-communicator
> collective operations and communicator duplication, and everything still
> works. Specifically comm_dup on an intercommunicator is not
> fundamentally broken, but worked for my tests.
>
> Having your code, to see what it precisely does, would help me to
> hunt the problem down, since I am otherwise not able to reproduce the
> problem.
>
> Also, which version of Open MPI did you use?
>
> Thanks
> Edgar
>
> On 4/4/2012 3:09 PM, Thatyene Louise Alves de Souza Ramos wrote:
> > Hi Edgar, thank you for the response.
> >
> > Unfortunately, I've tried with and without this option. In both cases
> > the result was the same... =(
> >
> > On Wed, Apr 4, 2012 at 5:04 PM, Edgar Gabriel <gabr...@cs.uh.edu
> > <mailto:gabr...@cs.uh.edu>> wrote:
> >
> > did you try to start the program with the --mca coll ^inter switch that
> > I mentioned? Collective dup for intercommunicators should work; it's
> > probably again the bcast over a communicator of size 1 that is causing
> > the hang, and you could avoid it with the flag that I mentioned above.
> >
> > Also, if you could attach your test code, that would help in hunting
> > things down.
> >
> > Thanks
> > Edgar
> >
> > On 4/4/2012 2:18 PM, Thatyene Louise Alves de Souza Ramos wrote:
> > > Hi there.
> > >
> > > I've made some tests related to the problem reported by Rodrigo. And I
> > > think, though I'd rather be wrong, that collective calls like Create and
> > > Dup do not work with inter-communicators. I've tried this in the
> > > client group:
> > >
> > > MPI::Intercomm tmp_inter_comm;
> > >
> > > tmp_inter_comm = server_comm.Create(server_comm.Get_group().Excl(1, &rank));
> > >
> > > if (server_comm.Get_rank() != rank)
> > >     server_comm = tmp_inter_comm.Dup();
> > > else
> > >     server_comm = MPI::COMM_NULL;
> > >
> > > The server_comm is the original inter communicator with the server
> > group.
> > >
> > > I've noticed that the program hangs in the Dup call. It seems that the
> > > tmp_inter_comm created without one process still includes this process,
> > > because the other processes are waiting for it to call the Dup too.
> > >
> > > What do you think?
> > >
> > > On Wed, Mar 28, 2012 at 6:03 PM, Edgar Gabriel <gabr...@cs.uh.edu
> > <mailto:gabr...@cs.uh.edu>
> > > <mailto:gabr...@cs.uh.edu <mailto:gabr...@cs.uh.edu>>> wrote:
> > >
> > > it just uses a different algorithm which avoids the bcast on a
> > > communicator of 1 (which is causing the problem here).
> > >
> > > Thanks
> > > Edgar
> > >
> > > On 3/28/2012 12:08 PM, Rodrigo Oliveira wrote:
> > > > Hi Edgar,
> > > >
> > > > I tested the execution of my code using the option -mca coll
> > ^inter as
> > > > you suggested and the program worked fine, even when I use 1
> > server
> > > > instance.
> > > >
> > > > What is the modification caused by this parameter? I did not
> > find an
> > > > explanation about the utilization of the module coll inter.
> > > >
> > > > Thanks a lot for your attention and for the solution.
> > > >
> > > > Best regards,
> > > >
> > > > Rodrigo Oliveira
> > > >
> > > > On Tue, Mar 27, 2012 at 1:10 PM, Rodrigo Oliveira
> > > >

Re: [OMPI users] Problem with MPI_Barrier (Inter-communicator)

2012-04-04 Thread Thatyene Louise Alves de Souza Ramos
Hi Edgar, thank you for the response.

Unfortunately, I've tried with and without this option. In both cases the
result was the same... =(

On Wed, Apr 4, 2012 at 5:04 PM, Edgar Gabriel <gabr...@cs.uh.edu> wrote:

> did you try to start the program with the --mca coll ^inter switch that
> I mentioned? Collective dup for intercommunicators should work; it's
> probably again the bcast over a communicator of size 1 that is causing
> the hang, and you could avoid it with the flag that I mentioned above.
>
> Also, if you could attach your test code, that would help in hunting
> things down.
>
> Thanks
> Edgar
>
> On 4/4/2012 2:18 PM, Thatyene Louise Alves de Souza Ramos wrote:
> > Hi there.
> >
> > I've made some tests related to the problem reported by Rodrigo. And I
> > think, though I'd rather be wrong, that collective calls like Create and Dup
> > do not work with inter-communicators. I've tried this in the client group:
> >
> > MPI::Intercomm tmp_inter_comm;
> >
> > tmp_inter_comm = server_comm.Create(server_comm.Get_group().Excl(1, &rank));
> >
> > if (server_comm.Get_rank() != rank)
> >     server_comm = tmp_inter_comm.Dup();
> > else
> >     server_comm = MPI::COMM_NULL;
> >
> > The server_comm is the original inter communicator with the server group.
> >
> > I've noticed that the program hangs in the Dup call. It seems that the
> > tmp_inter_comm created without one process still includes this process,
> > because the other processes are waiting for it to call the Dup too.
> >
> > What do you think?
> >
> > On Wed, Mar 28, 2012 at 6:03 PM, Edgar Gabriel <gabr...@cs.uh.edu
> > <mailto:gabr...@cs.uh.edu>> wrote:
> >
> > it just uses a different algorithm which avoids the bcast on a
> > communicator of 1 (which is causing the problem here).
> >
> > Thanks
> > Edgar
> >
> > On 3/28/2012 12:08 PM, Rodrigo Oliveira wrote:
> > > Hi Edgar,
> > >
> > > I tested the execution of my code using the option -mca coll
> ^inter as
> > > you suggested and the program worked fine, even when I use 1 server
> > > instance.
> > >
> > > What is the modification caused by this parameter? I did not find
> an
> > > explanation about the utilization of the module coll inter.
> > >
> > > Thanks a lot for your attention and for the solution.
> > >
> > > Best regards,
> > >
> > > Rodrigo Oliveira
> > >
> > > On Tue, Mar 27, 2012 at 1:10 PM, Rodrigo Oliveira
> > > <rsilva.olive...@gmail.com <mailto:rsilva.olive...@gmail.com>
> > <mailto:rsilva.olive...@gmail.com
> > <mailto:rsilva.olive...@gmail.com>>> wrote:
> > >
> > >
> > > Hi Edgar.
> > >
> > > Thanks for the response. I just did not understand why the
> Barrier
> > > works before I remove one of the client processes.
> > >
> > > I tried it with 1 server and 3 clients and it worked properly. After
> > > I removed 1 of the clients, it stopped working. So, the removal is
> > > affecting the functionality of Barrier, I guess.
> > >
> > > Anyone has an idea?
> > >
> > >
> > > On Mon, Mar 26, 2012 at 12:34 PM, Edgar Gabriel
> > <gabr...@cs.uh.edu <mailto:gabr...@cs.uh.edu>
> > > <mailto:gabr...@cs.uh.edu <mailto:gabr...@cs.uh.edu>>> wrote:
> > >
> > > I do not recall on what the agreement was on how to treat
> > the size=1
> > >
> > >
> > >
> > >
> > >
> > > ___
> > > users mailing list
> > > us...@open-mpi.org <mailto:us...@open-mpi.org>
> > > http://www.open-mpi.org/mailman/listinfo.cgi/users
> >
> >
> >
> >
> >
> >
>
> --
> Edgar Gabriel
> Associate Professor
> Parallel Software Technologies Lab  http://pstl.cs.uh.edu
> Department of Computer Science  University of Houston
> Philip G. Hoffman Hall, Room 524Houston, TX-77204, USA
> Tel: +1 (713) 743-3857  Fax: +1 (713) 743-3335
>
>
>


Re: [OMPI users] Problem with MPI_Barrier (Inter-communicator)

2012-04-04 Thread Thatyene Louise Alves de Souza Ramos
Hi there.

I've made some tests related to the problem reported by Rodrigo. And I
think, though I'd rather be wrong, that collective calls like Create and Dup
do not work with inter-communicators. I've tried this in the client group:

MPI::Intercomm tmp_inter_comm;

tmp_inter_comm = server_comm.Create(server_comm.Get_group().Excl(1, &rank));

if (server_comm.Get_rank() != rank)
    server_comm = tmp_inter_comm.Dup();
else
    server_comm = MPI::COMM_NULL;

The server_comm is the original inter communicator with the server group.

I've noticed that the program hangs in the Dup call. It seems that the
tmp_inter_comm created without one process still includes this process, because
the other processes are waiting for it to call the Dup too.

What do you think?

On Wed, Mar 28, 2012 at 6:03 PM, Edgar Gabriel  wrote:

> it just uses a different algorithm which avoids the bcast on a
> communicator of 1 (which is causing the problem here).
>
> Thanks
> Edgar
>
> On 3/28/2012 12:08 PM, Rodrigo Oliveira wrote:
> > Hi Edgar,
> >
> > I tested the execution of my code using the option -mca coll ^inter as
> > you suggested and the program worked fine, even when I use 1 server
> > instance.
> >
> > What is the modification caused by this parameter? I did not find an
> > explanation about the utilization of the module coll inter.
> >
> > Thanks a lot for your attention and for the solution.
> >
> > Best regards,
> >
> > Rodrigo Oliveira
> >
> > On Tue, Mar 27, 2012 at 1:10 PM, Rodrigo Oliveira
> > > wrote:
> >
> >
> > Hi Edgar.
> >
> > Thanks for the response. I just did not understand why the Barrier
> > works before I remove one of the client processes.
> >
> > I tried it with 1 server and 3 clients and it worked properly. After
> > I removed 1 of the clients, it stopped working. So, the removal is
> > affecting the functionality of Barrier, I guess.
> >
> > Anyone has an idea?
> >
> >
> > On Mon, Mar 26, 2012 at 12:34 PM, Edgar Gabriel wrote:
> >
> > I do not recall on what the agreement was on how to treat the
> size=1
> >
> >
> >
> >
> >
>
>
>
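For reference, the workaround Edgar describes above is applied at launch time via Open MPI's MCA parameter syntax. A minimal sketch (the `coll` framework and `inter` component names are taken from the thread; the binary name and the 1.5-era `ompi_info` invocation are illustrative assumptions):

```
# Exclude the 'inter' collective component so intercommunicator
# collectives fall back to other algorithms, avoiding the bcast over
# a size-1 communicator that causes the hang. './server' is a
# placeholder for your own binary.
mpirun --mca coll ^inter -np 1 ./server

# Inspect the parameters the inter coll component exposes:
ompi_info --param coll inter
```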


Re: [OMPI users] MPI_Comm_split and intercommunicator - Problem

2012-01-25 Thread Thatyene Louise Alves de Souza Ramos
It seems the split blocks when it must return MPI_COMM_NULL, that is, when
one process has a color that does not exist in the other group or has
color = MPI_UNDEFINED.

On Wed, Jan 25, 2012 at 4:28 PM, Rodrigo Oliveira  wrote:

> Hi Thatyene,
>
> I took a look in your code and it seems to be logically correct. Maybe
> there is some problem when you call the split function having one client
> process with color = MPI_UNDEFINED. I understood you are trying to isolate
> one of the client processes to do something applicable only to it, am I
> wrong? According to the Open MPI documentation, this function can be used
> to do that, but it is not working. Does anyone have an idea about what the
> problem can be?
>
> Best regards
>
> Rodrigo Oliveira
>
>


[OMPI users] MPI_Comm_split and intercommunicator - Problem

2012-01-23 Thread Thatyene Louise Alves de Souza Ramos
Hi there!

I've been trying to use the MPI_Comm_split function on an
intercommunicator, but without success. My application is very simple
and consists of a server that spawns 2 clients. After that, I want to split
the intercommunicator between the server and the clients so that one client
ends up not connected to the server.

The processes block in the split call and do not return. Can anyone help me?

== Simplified server code ==

int main( int argc, char *argv[] ) {
    MPI::Intracomm spawn_communicator = MPI::COMM_SELF;
    MPI::Intercomm group1;

    MPI::Init(argc, argv);
    group1 = spawn_client( /* spawns 2 processes and returns the
                              intercommunicator with them */ );

    /* Tries to split the intercommunicator */
    int color = 0;
    MPI::Intercomm new_G1 = group1.Split(color, 0);
    group1.Free();
    group1 = new_G1;

    cout << "server after splitting - size G1 = " << group1.Get_remote_size()
         << endl << endl;
    MPI::Finalize();
    return 0;
}

== Simplified client code ==

int main( int argc, char *argv[] ) {
    MPI::Intracomm group_communicator;
    MPI::Intercomm parent;
    int group_rank;
    int color;   /* declaration was missing in the original listing */

    MPI::Init(argc, argv);
    parent = MPI::Comm::Get_parent();
    group_communicator = MPI::COMM_WORLD;
    group_rank = group_communicator.Get_rank();

    if (group_rank == 0) {
        color = 0;
    }
    else {
        color = MPI_UNDEFINED;
    }

    /* inter_rank (the ordering key for Split) was not declared in the
       original simplified listing; the local rank in the parent
       communicator is a plausible reconstruction. */
    int inter_rank = parent.Get_rank();
    MPI::Intercomm new_parent = parent.Split(color, inter_rank);

    if (new_parent != MPI::COMM_NULL) {
        parent.Free();
        parent = new_parent;
    }

    group_communicator.Free();
    parent.Free();
    MPI::Finalize();
    return 0;
}

Thanks in advance.

Thatyene Ramos


Re: [OMPI users] MPI_Comm_accept - Busy wait

2011-10-14 Thread Thatyene Louise Alves de Souza Ramos
Thank you for the explanation! I use "-mca mpi_yield_when_idle 1" already!

Thank you again!
---
Thatyene Ramos

On Fri, Oct 14, 2011 at 3:43 PM, Ralph Castain <r...@open-mpi.org> wrote:

> Sorry - been occupied. This is normal behavior. As has been discussed on
> this list before, OMPI made a design decision to minimize latency. This
> means we aggressively poll for connections. Only thing you can do is tell it
> to yield the processor when idle so, if something else is trying to run, we
> will let it get in there a little earlier. Use -mca mpi_yield_when_idle 1
>
> However, we have seen that if no other user processes are trying to run,
> then the scheduler hands the processor right back to you - and you'll still
> see that 100% number. It doesn't mean we are being hogs - it just means that
> nothing else wants to run, so we happily accept the time.
>
>
> On Oct 14, 2011, at 12:21 PM, Thatyene Louise Alves de Souza Ramos wrote:
>
> Does anyone have any idea?
>
> ---
> Thatyene Ramos
>
> On Fri, Oct 7, 2011 at 12:01 PM, Thatyene Louise Alves de Souza Ramos <
> thaty...@gmail.com> wrote:
>
>> Hi there!
>>
>> In my code I use MPI_Comm_accept in a server-client communication. I
>> noticed that the server remains in a busy wait while waiting for client
>> connections, using 100% of the CPU if there are no other processes running.
>>
>> I wonder if there is any way to prevent this from happening.
>>
>> Thanks in advance.
>>
>> Thatyene Ramos
>>
>
>
>
>
>
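Ralph's suggestion above translates to a launch-time MCA setting; a sketch (the parameter name is taken from his message, the binary name is an illustrative assumption):

```
# Yield the processor whenever OMPI's progress loop is idle, so the
# aggressive polling inside MPI_Comm_accept gives way to other runnable
# processes. As Ralph notes, top may still show ~100% when nothing else
# wants the CPU.
mpirun --mca mpi_yield_when_idle 1 -np 1 ./server
```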


Re: [OMPI users] MPI_Comm_accept - Busy wait

2011-10-14 Thread Thatyene Louise Alves de Souza Ramos
Does anyone have any idea?

---
Thatyene Ramos

On Fri, Oct 7, 2011 at 12:01 PM, Thatyene Louise Alves de Souza Ramos <
thaty...@gmail.com> wrote:

> Hi there!
>
> In my code I use MPI_Comm_accept in a server-client communication. I
> noticed that the server remains in a busy wait while waiting for client
> connections, using 100% of the CPU if there are no other processes running.
>
> I wonder if there is any way to prevent this from happening.
>
> Thanks in advance.
>
> Thatyene Ramos
>


[OMPI users] MPI_Comm_accept - Busy wait

2011-10-07 Thread Thatyene Louise Alves de Souza Ramos
Hi there!

In my code I use MPI_Comm_accept in a server-client communication. I noticed
that the server remains in a busy wait while waiting for client
connections, using 100% of the CPU if there are no other processes running.

I wonder if there is any way to prevent this from happening.

Thanks in advance.

Thatyene Ramos


Re: [OMPI users] Problems with MPI_Iprobe

2011-08-02 Thread Thatyene Louise Alves de Souza Ramos
I am having this problem too. If someone could help, I would appreciate it!

On Fri, Jul 22, 2011 at 5:29 PM, Rodrigo Oliveira  wrote:

> Hi there.
>
> I have an application in which I need to terminate a process at any time due
> to an external command. In order to maintain the consistency of the
> processes, I need to receive the messages that were already sent to the
> terminating process. I used MPI_Iprobe to check whether there are messages
> to be received, but I noticed that I have to call this function twice.
> Otherwise it does not work properly. The code below exemplifies what
> happens. Can
> anyone help me? Is there another way to do what I need?
>
> Thanks in advance.
>
>
> #include "mpi.h"
> #include <stdio.h>
> #include <unistd.h>
>
> int main(int argc, char *argv[]) {
>     int rank, size, i;
>     MPI_Status status;
>
>     MPI_Init(&argc, &argv);
>     MPI_Comm_size(MPI_COMM_WORLD, &size);
>     MPI_Comm_rank(MPI_COMM_WORLD, &rank);
>     if (size < 2) {
>         printf("Please run with two processes.\n"); fflush(stdout);
>         MPI_Finalize();
>         return 0;
>     }
>     if (rank == 0) {
>         for (i = 0; i < 10; i++) {
>             MPI_Send(&i, 1, MPI_INT, 1, 123, MPI_COMM_WORLD);
>         }
>     }
>     if (rank == 1) {
>         int value, has_message;
>         MPI_Status status;
>         sleep(2);
>         /* Code below does not work properly */
>         MPI_Iprobe(0, 123, MPI_COMM_WORLD, &has_message, &status);
>         while (has_message) {
>             MPI_Recv(&value, 1, MPI_INT, 0, 123, MPI_COMM_WORLD, &status);
>             printf("Process %d received message %d.\n", rank, value);
>             MPI_Iprobe(0, 123, MPI_COMM_WORLD, &has_message, &status);
>         }
>
>         /* Calling MPI_Iprobe twice for each incoming message makes the
>            code work. */
>         /*
>         MPI_Iprobe(0, 123, MPI_COMM_WORLD, &has_message, &status);
>         MPI_Iprobe(0, 123, MPI_COMM_WORLD, &has_message, &status);
>         while (has_message) {
>             MPI_Recv(&value, 1, MPI_INT, 0, 123, MPI_COMM_WORLD, &status);
>             printf("Process %d received message %d.\n", rank, value);
>             MPI_Iprobe(0, 123, MPI_COMM_WORLD, &has_message, &status);
>             MPI_Iprobe(0, 123, MPI_COMM_WORLD, &has_message, &status);
>         }
>         */
>         fflush(stdout);
>     }
>     MPI_Finalize();
>     return 0;
> }
>
>


Re: [OMPI users] Scheduling dynamically spawned processes

2011-05-16 Thread Thatyene Louise Alves de Souza Ramos
Ralph,

I have the same issue and I've been searching for how to do this, but I
couldn't find an answer.

What exactly must be the string in the host info key to do what Rodrigo
described?

<<< Inside your master, you would create an MPI_Info key "host" that has a
<<< value consisting of a string "host1,host2,host3" identifying the hosts
<<< you want your slave to execute upon. Those hosts must have been included
<<< in my_hostfile. Include that key in the MPI_Info array passed to your
<<< Spawn.

I tried to do what you said above but ompi ignores the repetition of hosts.
Using Rodrigo's example I did:

host info key = "m1,m2,m2,m2,m3" and number of processes = 5 and the result
was

m1 -> 2
m2 -> 2
m3 -> 1

and not

m1 -> 1
m2 -> 3
m3 -> 1

as I wanted.

Thanks in advance.

Thatyene Ramos

On Fri, May 13, 2011 at 9:16 PM, Ralph Castain  wrote:

> I believe I answered that question. You can use the hostfile info key, or
> you can use the host info key - either one will do what you require.
>
> On May 13, 2011, at 4:11 PM, Rodrigo Silva Oliveira wrote:
>
> Hi,
>
> I think I was not specific enough. I need to spawn the copies of a process
> in a single mpi_spawn call. That is, I have to specify a list of machines
> and how many copies of the process will be spawned on each one. Is it
> possible?
>
> It would be something like this:
>
> machines  #copies
> m1        1
> m2        3
> m3        1
>
> After a single call to spawn, I want the copies running in this fashion. I
> tried using a hostfile with the slots option, but I'm not sure if it is the
> best way.
>
> hostfile:
>
> m1 slots=1
> m2 slots=3
> m3 slots=1
>
> Thanks
>
> --
> Rodrigo Silva Oliveira
> M.Sc. Student - Computer Science
> Universidade Federal de Minas Gerais
> www.dcc.ufmg.br/~rsilva
>
>
>
>
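The hostfile variant Rodrigo mentions can be written down concretely; a sketch using the machine names from his example (whether the `slots` counts interact with the `host`/`hostfile` info keys exactly as desired is the open question of this thread, and the `./master` binary name is an illustrative assumption):

```shell
# Per-node slot counts from Rodrigo's example: at most 1, 3 and 1
# spawned ranks on m1, m2 and m3 respectively.
cat > my_hostfile <<'EOF'
m1 slots=1
m2 slots=3
m3 slots=1
EOF
cat my_hostfile

# The master would then be launched with:
#   mpirun -np 1 --hostfile my_hostfile ./master
# and pass "hostfile" = "my_hostfile" in the MPI_Info given to Spawn,
# as Ralph describes above.
```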