Hi George,

The following test peaks at 8392 Mbps on a1:

mpirun --prefix /opt/opnmpi124b --host a1,a1 -mca btl tcp,sm,self -np 2 ./NPmpi

and the same test on a2:

mpirun --prefix /opt/opnmpi124b --host a2,a2 -mca btl tcp,sm,self -np 2 ./NPmpi

gives 8565 Mbps.
--(a)
on a1:
mpirun --prefix /opt/opnmpi124b --host
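For readers unfamiliar with how NetPIPE arrives at a figure like "8392 Mbps", the computation is essentially bits moved divided by elapsed time. A minimal sketch (assuming Mbps here means 10^6 bits per second, which is the usual NetPIPE convention):

```python
# NetPIPE-style throughput calculation: a minimal illustration,
# not NetPIPE's actual source code.
def throughput_mbps(message_bytes: int, elapsed_seconds: float) -> float:
    """Bits transferred divided by elapsed time, in megabits per second."""
    return (message_bytes * 8) / elapsed_seconds / 1e6

# Example: 1 MiB moved in 1 ms works out to ~8389 Mbps,
# in the same ballpark as the figures quoted above.
print(round(throughput_mbps(1 << 20, 0.001)))  # 8389
```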
--
Message: 2
Date: Sun, 16 Dec 2007 18:49:30 -0500
From: Allan Menezes
Subject: [OMPI users] Gigabit ethernet (PCI Express) and openmpi v1.2.4
To: us...@open-mpi.org
Hi Marco and Jeff,
My own knowledge of OpenMPI's internals is limited, but I thought I'd add
my less-than-two-cents...
> > I've found a way to have TCP connections bound only to
> > the eth1 interface, using both of the following MCA directives on the
> > command line:
> >
> > mpirun
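Marco's exact directives are truncated above. For illustration only (an assumption, not a quote of what Marco used), a pair of MCA parameters commonly used to pin Open MPI's TCP traffic to a single interface looks like this:

```shell
# Hypothetical sketch: restrict the TCP BTL and out-of-band traffic
# to eth1 (the interface name is illustrative).
mpirun --mca btl_tcp_if_include eth1 \
       --mca oob_tcp_if_include eth1 \
       -np 2 ./NPmpi
```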
You should run a shared-memory test to see the maximum memory
bandwidth you can get.
Thanks,
george.
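The shared-memory run George suggests could mirror the earlier NetPIPE invocations, but with the TCP BTL left out so all traffic stays in the sm BTL (a sketch, assuming NPmpi sits in the current directory and both ranks land on one host):

```shell
# Both ranks on the same host; only the shared-memory and self BTLs enabled.
mpirun --host a1,a1 --mca btl sm,self -np 2 ./NPmpi
```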
On Dec 17, 2007, at 7:14 AM, Gleb Natapov wrote:
On Sun, Dec 16, 2007 at 06:49:30PM -0500, Allan Menezes wrote:
Hi,
How many PCI-Express Gigabit ethernet cards does OpenMPI
On Dec 17, 2007, at 8:35 AM, Marco Sbrighi wrote:
I'm using Open MPI 1.2.2 over OFED 1.2 on a 256-node, dual-Opteron,
dual-core Linux cluster, with an InfiniBand 4x interconnect of course.
Each cluster node is equipped with 4 (or more) Ethernet interfaces,
namely 2 gigabit ones plus 2 IPoIB.
I see... I wasn't aware of the protocol with regards to these accounts.
Having a 'ubc' account is sufficient for us. You can remove the kmroz
and penoff accounts.
Thanks, and sorry for the confusion.
Ethan Mallove wrote:
> I thought maybe there was a reason Karol wanted separate
> accounts that
On 12/17/07 8:19 AM, "Elena Zhebel" wrote:
> Hello Ralph,
>
> Thank you for your answer.
>
> I'm using OpenMPI 1.2.3, glibc 2.3.2, Linux SuSE 10.0.
> My "master" executable runs only on the local host; it then spawns
> "slaves" (with
If you care, this is actually the result of a complex issue that was
just recently discussed on the OMPI devel list. You can see a full
explanation there if you're interested.
On Dec 17, 2007, at 10:46 AM, Brian Granger wrote:
This should be fixed in the subversion trunk of mpi4py. Can you
update to that version and retry? If it still doesn't work, post to
the mpi4py list and we will see what we can do.
Brian
On Dec 17, 2007 8:25 AM, de Almeida, Valmor F. wrote:
>
> Hello,
>
> I am
Hello,
I am getting these messages (below) when running mpi4py Python codes,
always one message per MPI process. The codes seem to run correctly. Any
ideas why this is happening and how to avoid it?
Thanks,
--
Valmor de Almeida
>mpirun -np 2 python helloworld.py
[xeon0:05998] mca: base:
On 12/12/07 5:46 AM, "Elena Zhebel" wrote:
>
> Hello,
>
> I'm working on an MPI application where I'm using OpenMPI instead of MPICH.
>
> In my "master" program I call the function MPI::Intracomm::Spawn, which spawns
> "slave" processes. It is not
On Sun, Dec 16, 2007 at 06:49:30PM -0500, Allan Menezes wrote:
> Hi,
> How many PCI-Express Gigabit ethernet cards does OpenMPI version 1.2.4
> support with a corresponding linear increase in bandwidth, measured with
> NetPIPE NPmpi and openmpi mpirun?
> With two PCI express cards I get a B/W of