[OMPI users] Gigabit ethernet (PCI Express) and openmpi v1.2.4

2007-12-16 Thread Allan Menezes

Hi,
How many PCI Express gigabit ethernet cards does Open MPI version 1.2.4 
support with a corresponding linear increase in bandwidth, as measured 
with NetPIPE (NPmpi) and Open MPI's mpirun?
With two PCI Express cards I get a bandwidth of 1.75 Gbps (about 892 Mbps 
per card), but with three PCI Express cards (one built into the 
motherboard) I get only 1.95 Gbps. Each card individually measures around 
890 Mbps with NetPIPE, using both NPtcp and NPmpi under Open MPI. So for 
two cards the bandwidth increase is roughly linear, but not for three.
I have tuned the cards using NetPIPE and the 
$HOME/.openmpi/mca-params.conf file, setting latency and percentage 
bandwidth.
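For reference, here is roughly what my tuning file looks like (a sketch, 
not verbatim; eth0/eth1/eth2 stand in for my actual interface names, and 
I am quoting the MCA parameter names from memory):

    # $HOME/.openmpi/mca-params.conf
    # Use the TCP BTL over all three gigabit interfaces, plus
    # shared memory and self for on-node communication.
    btl = tcp,sm,self
    btl_tcp_if_include = eth0,eth1,eth2
    # Hints used when striping messages across the NICs.
    btl_tcp_bandwidth = 890
    btl_tcp_latency = 30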

Please advise.
Regards,
Allan Menezes


Re: [OMPI users] MPI::Intracomm::Spawn and cluster configuration

2007-12-16 Thread Bruno Coutinho
Try using the info parameter of MPI::Intracomm::Spawn().
Through it you can specify the hosts on which the processes should be spawned.

Info parameters for MPI spawn:
http://www.mpi-forum.org/docs/mpi-20-html/node97.htm
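Something like this minimal sketch (untested; "slave" and "node1" are 
placeholders for your own executable and host name, and the "host" info 
key is the one defined by the MPI-2 standard for Spawn):

    // master.cpp: spawn two "slave" processes on host "node1".
    #include <mpi.h>

    int main(int argc, char* argv[])
    {
        MPI::Init(argc, argv);

        // Tell Spawn where to place the children via the "host" info key.
        MPI::Info info = MPI::Info::Create();
        info.Set("host", "node1");

        MPI::Intercomm children =
            MPI::COMM_WORLD.Spawn("slave", MPI::ARGV_NULL, 2, info, 0);

        info.Free();
        children.Disconnect();
        MPI::Finalize();
        return 0;
    }

Note that the target hosts must also be known to the Open MPI runtime, 
e.g. by starting the master with 'mpirun --hostfile openmpi.hosts -np 1 
./master'.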


2007/12/12, Elena Zhebel :
>
>  Hello,
>
> I'm working on an MPI application where I'm using OpenMPI instead of MPICH.
>
> In my "master" program I call the function MPI::Intracomm::Spawn, which
> spawns "slave" processes. It is not clear to me how to spawn the "slave"
> processes over the network; currently the "master" creates the "slaves" on
> the same host.
> If I use 'mpirun --hostfile openmpi.hosts', then processes are spawned over
> the network as expected. But now I need to spawn processes over the network
> from my own executable using MPI::Intracomm::Spawn. How can I achieve this?
>
> Thanks in advance for any help.
> Elena
>