
------------------------------

Message: 2
Date: Sun, 16 Dec 2007 18:49:30 -0500
From: Allan Menezes <amenezes...@sympatico.ca>
Subject: [OMPI users] Gigabit ethernet (PCI Express) and openmpi
        v1.2.4
To: us...@open-mpi.org
Message-ID: <4765b98a.30...@sympatico.ca>
Content-Type: text/plain; charset=ISO-8859-1; format=flowed

Hi,
How many PCI Express gigabit ethernet cards does Open MPI version 1.2.4 support with a corresponding linear increase in bandwidth, as measured with NetPIPE (NPmpi) under Open MPI's mpirun? With two PCI Express cards I get a bandwidth of 1.75 Gbps (892 Mbps each), and with three PCI Express cards (one built into the motherboard) I get 1.95 Gbps. Measured individually with NetPIPE's NPtcp and NPmpi under Open MPI, each card delivers around 890 Mbps. So for two cards the increase in b/w is roughly linear, but not for three PCI Express gigabit ethernet cards. I have tuned the cards for latency and percentage b/w using NetPIPE and the $HOME/.openmpi/mca-params.conf file.
Please advise.
Regards,
Allan Menezes
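
For reference, NPmpi measures point-to-point bandwidth with a timed ping-pong between two ranks. A minimal sketch of that kind of test in C, using only standard MPI calls (the 4 MB message size and 50 trials are arbitrary choices for illustration, not NetPIPE's actual sweep):

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int rank;
    const int n = 4 * 1024 * 1024;      /* 4 MB message, illustrative only */
    const int trials = 50;
    char *buf = malloc(n);

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double t0 = MPI_Wtime();
    for (int i = 0; i < trials; i++) {
        if (rank == 0) {                /* rank 0: send, then wait for the echo */
            MPI_Send(buf, n, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, n, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else if (rank == 1) {         /* rank 1: echo it straight back */
            MPI_Recv(buf, n, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(buf, n, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    double t1 = MPI_Wtime();

    if (rank == 0) {
        /* each trial moves the buffer twice (out and back) */
        double gbps = 2.0 * trials * n * 8.0 / (t1 - t0) / 1e9;
        printf("average bandwidth: %.2f Gbps\n", gbps);
    }
    free(buf);
    MPI_Finalize();
    return 0;
}

Compiled with mpicc and launched with the same mpirun MCA flags as NPmpi below, this should exercise the same striping across the configured interfaces.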


------------------------------

Message: 3
Date: Mon, 17 Dec 2007 14:14:42 +0200
From: gl...@voltaire.com (Gleb Natapov)
Subject: Re: [OMPI users] Gigabit ethernet (PCI Express) and openmpi
        v1.2.4
To: Open MPI Users <us...@open-mpi.org>
Message-ID: <20071217121442.gd28...@minantech.com>
Content-Type: text/plain; charset=us-ascii

On Sun, Dec 16, 2007 at 06:49:30PM -0500, Allan Menezes wrote:
> Hi,
> How many PCI Express gigabit ethernet cards does Open MPI version 1.2.4 support with a corresponding linear increase in bandwidth, as measured with NetPIPE (NPmpi) under Open MPI's mpirun? With two PCI Express cards I get a bandwidth of 1.75 Gbps (892 Mbps each), and with three PCI Express cards (one built into the motherboard) I get 1.95 Gbps. Measured individually with NetPIPE's NPtcp and NPmpi under Open MPI, each card delivers around 890 Mbps. So for two cards the increase in b/w is roughly linear, but not for three PCI Express gigabit ethernet cards. I have tuned the cards for latency and percentage b/w using NetPIPE and the $HOME/.openmpi/mca-params.conf file.
> Please advise.
What is in your $HOME/.openmpi/mca-params.conf? Maybe you are hitting your
chipset limit here. What is your HW configuration? Can you try to run
NPtcp on each interface simultaneously and see what BW you get?

--
                        Gleb.
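
One way to run that simultaneous test, extending the NPtcp commands shown later in the thread (a sketch only: NPtcp pairs normally rendezvous on a single fixed port, so the three server instances would each need their own port -- the -p PORT flag here is a hypothetical placeholder, check your NetPIPE build's usage output for the actual option):

a1#> ./NPtcp -p 5001 &
a1#> ./NPtcp -p 5002 &
a1#> ./NPtcp -p 5003 &
a2#> ./NPtcp -h 192.168.1.1 -p 5001 -n 50 &
a2#> ./NPtcp -h 192.168.5.1 -p 5002 -n 50 &
a2#> ./NPtcp -h 192.168.8.1 -p 5003 -n 50 &

If the three concurrent streams sum to well under 3 x 890 Mbps, the bottleneck sits below the NICs (chipset or memory), not in Open MPI.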


Hi,
My mca-params.conf file is:
btl_tcp_latency_eth0=171
btl_tcp_latency_eth2=50
btl_tcp_latency_eth3=71
btl_tcp_bandwidth_eth0=34
btl_tcp_bandwidth_eth2=33
btl_tcp_bandwidth_eth3=33
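(As I understand it, Open MPI's TCP BTL uses these values to weight how large messages are striped across the available interfaces: with bandwidth weights of 34/33/33, roughly 34% of each large message should go over eth0 and 33% over each of eth2 and eth3 -- about 1.02 MB + 0.99 MB + 0.99 MB for a 3 MB message.)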

HW config:
host a1:
On an x4 PCI Express slot: a SysKonnect PCI Express x1 gigabit ethernet card.
On an x16 PCI Express slot: an Intel PRO/1000 PT PCI Express x1 gigabit ethernet card.
Built into the motherboard: a PCI Express gigabit ethernet port (Intel 82566DM chipset, e1000 driver).
All MTUs = 1500.
host a2: same hardware config as host a1.
I measure the latency and b/w this way:
a1#> ./NPtcp
a2#> ./NPtcp -h 192.168.1.1 -n 50    (for eth0)
a2#> ./NPtcp -h 192.168.5.1 -n 50    (for eth2)
a2#> ./NPtcp -h 192.168.8.1 -n 50    (for eth3)
For latency I take the 64-byte measurement straight off (171 microseconds for eth0, etc.), and for bandwidth I take the highest reading.
The bandwidth measured for eth0 (SysKonnect) is 892 Mbps, latency 171 microseconds.
The bandwidth measured for eth2 (Intel PRO/1000 PT) is 892 Mbps, latency 50 microseconds.
The bandwidth measured for eth3 (built-in Intel PCI Express) is 888 Mbps, latency 71 microseconds.
Linux: FC8, kernel 2.6.23.11, with the Marvell drivers patch 10.22.4.3
and the Intel e1000 driver version 7.6.12 from the Intel website.
This is how I use /opt/openmpi124b to check the b/w:
a1$> mpirun --prefix /opt/openmpi124b --host a1,a2 -mca btl tcp,sm,self \
    -mca btl_tcp_if_include eth0,eth3,eth2 \
    -mca btl_tcp_if_exclude lo,eth1,eth4 -mca oob_tcp_include eth0,eth3,eth2 \
    -mca oob_tcp_exclude lo,eth1,eth4 -np 2 ./NPmpi
The maximum b/w I measure this way is 1950 Mbps for three ~890 Mbps gigabit PCI Express ethernet cards, with a gigabit switch for each subnet!
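If the scaling stayed linear, three cards at ~890 Mbps each would give about 2.67 Gbps, so 1.95 Gbps is only about 73% of that. That shortfall would be consistent with the chipset limit Gleb suggested (on boards of this era the x1 slots and the onboard NIC typically hang off the southbridge and share its single uplink to the northbridge), which the simultaneous NPtcp test should confirm or rule out.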
The motherboards are Asus P5B-VM DO with Intel Pentium D 945 processors,
each with 2 gigabytes of DDR2 667 MHz RAM.
Any help would be appreciated.
Thank you,
Allan Menezes
