Re: [OMPI users] quick patch to buildrpm.sh to enable building on SuSE

2006-10-23 Thread Jeff Squyres
Committed -- thanks! On Oct 23, 2006, at 7:14 PM, Joe Landman wrote:
--- buildrpm.sh 2006-10-23 17:59:33.729764603 -0400
+++ buildrpm-fixed.sh 2006-10-23 17:58:33.145635240 -0400
@@ -11,6 +11,7 @@
 #
 prefix="/opt/openmpi"
+#/1.1.2/pgi"
 specfile="openmpi.spec"
 rpmbuild_options="--define

[OMPI users] quick patch to buildrpm.sh to enable building on SuSE

2006-10-23 Thread Joe Landman
--- buildrpm.sh 2006-10-23 17:59:33.729764603 -0400
+++ buildrpm-fixed.sh 2006-10-23 17:58:33.145635240 -0400
@@ -11,6 +11,7 @@
 #
 prefix="/opt/openmpi"
+#/1.1.2/pgi"
 specfile="openmpi.spec"
 rpmbuild_options="--define 'mflags -j4'"
 configure_options=
@@ -22,10 +23,10 @@
 # Some distro's

Re: [OMPI users] dual Gigabit ethernet support

2006-10-23 Thread Lisandro Dalcin
On 10/23/06, Tony Ladd wrote: A couple of comments regarding issues raised by this thread. 1) In my opinion Netpipe is not such a great network benchmarking tool for HPC applications. It measures timings based on the completion of the send call on the transmitter not the

[OMPI users] dual Gigabit ethernet support

2006-10-23 Thread Tony Ladd
A couple of comments regarding issues raised by this thread. 1) In my opinion Netpipe is not such a great network benchmarking tool for HPC applications. It measures timings based on the completion of the send call on the transmitter, not the completion of the receive. Thus, if there is a delay in

Re: [OMPI users] dual Gigabit ethernet support

2006-10-23 Thread Brock Palen
We manage to get 900+ Mbps on a Broadcom 570x chip. We run jumbo frames and use a Force10 switch. This is also with openmpi-1.0.2 (we have not tried rebuilding netpipe with 1.1.2). We also see great results with netpipe (MPI) on InfiniBand. Great work so far, guys. 120: 6291459 bytes 3
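For anyone trying to reproduce this, jumbo frames just mean raising the interface MTU on the nodes (the switch must also allow the larger frame size); a minimal sketch on Linux, with the interface names and the 9000-byte MTU as example values, not taken from this thread:

# raise the MTU on both GigE interfaces to 9000-byte jumbo frames
ip link set dev eth0 mtu 9000
ip link set dev eth1 mtu 9000
# equivalent with the older tool: ifconfig eth0 mtu 9000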

Re: [OMPI users] dual Gigabit ethernet support

2006-10-23 Thread Durga Choudhury
What I think is happening is this: the initial transfer rate you are seeing is the burst rate; averaged over a long time, your sustained transfer rate emerges. Like George said, you should use a proven tool to measure your bandwidth. We use netperf, freeware from HP. That said, the Ethernet
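For reference, a minimal netperf run looks roughly like this (the hostname node02 and the 30-second test length are placeholders, not values from the thread):

# on the receiving node: start the netperf daemon
netserver
# on the sending node: run a 30-second TCP bulk-transfer test against it
netperf -H node02 -t TCP_STREAM -l 30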

Re: [OMPI users] dual Gigabit ethernet support

2006-10-23 Thread Jayanta Roy
Hi George, Yes, it is duplex BW. The BW benchmark is a simple timing call around an MPI_Alltoall call; you then estimate the network traffic from the sending buffer size and get the rate. Regards, Jayanta On Mon, 23 Oct 2006, George Bosilca wrote: I don't know what your bandwidth tester look

Re: [OMPI users] dual Gigabit ethernet support

2006-10-23 Thread George Bosilca
I don't know what your bandwidth tester looks like, but 140MB/s is way too much for a single GigE card, unless it's bidirectional bandwidth. Usually, on a new-generation GigE card (Broadcom Corporation NetXtreme BCM5751 Gigabit Ethernet PCI Express) with an AMD processor (AMD
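For reference, the back-of-the-envelope limit behind this remark (my own arithmetic, not from the thread):

1 Gbit/s = 10^9 bits/s / 8 ≈ 125 MB/s of raw capacity per direction (less after Ethernet/IP/TCP overhead),
so ~140 MB/s on a single GigE link is only plausible if the send and receive directions are being summed.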

Re: [OMPI users] dual Gigabit ethernet support

2006-10-23 Thread Miguel Figueiredo Mascarenhas Sousa Filipe
Hello, On 10/23/06, Jayanta Roy wrote: Hi, Some time ago I posted doubts about fully using dual gigabit support. See, I get ~140MB/s full-duplex transfer rate in each of the following runs. That's impressive, since it's _more_ than the theoretical limit of

Re: [OMPI users] dual Gigabit ethernet support

2006-10-23 Thread Durga Choudhury
Did you try channel bonding? If your OS is Linux, there are plenty of "howto" guides on the internet that will tell you how to do it. However, your CPU might be the bottleneck in this case. How much CPU horsepower is available at 140MB/s? If the CPU *is* the bottleneck, changing your network
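For reference, a minimal sketch of how channel bonding was typically set up on Linux at the time; the interface names, IP address, and balance-rr mode are assumptions, and most distributions also have their own config-file mechanism for making this persistent:

# load the bonding driver in round-robin mode with link monitoring
modprobe bonding mode=balance-rr miimon=100
# create the bond interface and enslave both GigE NICs (example address)
ifconfig bond0 192.168.0.10 netmask 255.255.255.0 up
ifenslave bond0 eth0 eth1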

[OMPI users] dual Gigabit ethernet support

2006-10-23 Thread Jayanta Roy
Hi, Some time ago I posted doubts about fully using dual gigabit support. See, I get ~140MB/s full-duplex transfer rate in each of the following runs.
mpirun --mca btl_tcp_if_include eth0 -n 4 -bynode -hostfile host a.out
mpirun --mca btl_tcp_if_include eth1 -n 4 -bynode -hostfile
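If the intent is to have a single MPI job use both NICs at once, Open MPI's TCP BTL also accepts a comma-separated interface list, so a sketch using the same hostfile and binary as above would be:

# let the TCP BTL use both interfaces in one run; Open MPI can then stripe large messages across them
mpirun --mca btl_tcp_if_include eth0,eth1 -n 4 -bynode -hostfile host a.out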