From: Yevgeny Kliteynik <klit...@dev.mellanox.co.il>
To: Randolph Pullen <randolph_pul...@yahoo.com.au>
Cc: OpenMPI Users <us...@open-mpi.org>
Sent: Monday, 10 September 2012 9:11 PM
Subject: Re: [OMPI users] Infiniband performance Problem and stalling
Randolph,
So what you're saying, in short, leaving all the numbers aside
From: Yevgeny Kliteynik <klit...@dev.mellanox.co.il>
To: Randolph Pullen <randolph_pul...@yahoo.com.au>
Cc: OpenMPI Users <us...@open-mpi.org>
Sent: Sunday, 9 September 2012 6:18 PM
Subject: Re: [OMPI users] Infiniband performance Problem and stalling
Randolph,
On 9/7/2012 7:43 AM, Randolph Pullen wrote:
> Yevgeny,
> The ibstat results:
> CA 'mthca0'
> CA type: MT25208 (MT23108 compat mode)
What you have is an InfiniHost III HCA, which is a 4x SDR card.
This card has a theoretical peak of 10 Gb/s, which is 1 GB/s in IB bit coding.
> And more
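A quick way to confirm the card type and active link rate is ibstat (from infiniband-diags); a rough sketch for the mthca0 device named above, with illustrative, abridged output - exact fields vary by OFED version:

$ ibstat mthca0
# abridged, illustrative output:
#   CA type: MT25208
#   Rate: 10          <- 4x SDR signalling rate
#   State: Active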
To: Randolph Pullen <randolph_pul...@yahoo.com.au>; OpenMPI Users <us...@open-mpi.org>
Sent: Thursday, 6 September 2012 6:03 PM
Subject: Re: [OMPI users] Infiniband performance Problem and stalling
On 9/3/2012 4:14 AM, Randolph Pullen wrote:
> No RoCE, Just native IB with TCP over the top.
Sorry, I'm confused - still not clear what is "
From: Yevgeny Kliteynik <klit...@dev.mellanox.co.il>
To: Randolph Pullen <randolph_pul...@yahoo.com.au>; Open MPI Users <us...@open-mpi.org>
Sent: Sunday, 2 September 2012 10:54 PM
Subject: Re: [OMPI users] Infiniband performance Problem and stalling
Randolph,
Some clarification on the setup:
"Melanox III HCA 10G cards" - are those ConnectX 3 cards configured to Ethernet?
That is, when you're using openib BTL, you mean RoCE, right?
Also, have you had a chance to try some newer OMPI release?
Any 1.6.x would do.
-- YK
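Whether a port is running native InfiniBand or Ethernet (RoCE) can usually be read from its link layer; a hedged sketch using ibv_devinfo from libibverbs (very old OFED releases may not print the link_layer field):

$ ibv_devinfo | grep -E 'hca_id|link_layer'
# illustrative output for a native IB setup:
#   hca_id: mthca0
#           link_layer: InfiniBand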
On 8/31/2012 10:53, Randolph Pullen wrote:
(reposted with consolidated information)
I have a test rig comprising 2 i7 systems with 8GB RAM and Mellanox III HCA 10G cards
running CentOS 5.7, kernel 2.6.18-274
Open MPI 1.4.3
MLNX_OFED_LINUX-1.5.3-1.0.0.2 (OFED-1.5.3-1.0.0.2):
On a Cisco 24-port switch
Normal performance is:
$ mpirun --mca btl openib,self -n 2 -hostfile mpi.hosts PingPong
To: Open MPI Users <us...@open-mpi.org>
Sent: Thursday, 30 August 2012 11:46 AM
Subject: Re: [OMPI users] Infiniband performance Problem and stalling
Interesting, the log_num_mtt and log_mtts_per_seg params were not set.
Setting them to utilise 2 * 8G of my RAM resulted in no change to the stalls or
run time, i.e. (
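For reference, a minimal sketch of how these two options are typically made persistent and then verified, assuming an mlx4-based HCA (the mthca driver behind InfiniHost III cards exposes differently named parameters); the values are illustrative, sized for roughly 2 * 8 GB of registerable memory:

# /etc/modprobe.d/mlx4_core.conf (or /etc/modprobe.conf on CentOS 5) - illustrative values
options mlx4_core log_num_mtt=19 log_mtts_per_seg=3
# after reloading the module (or rebooting), check what the running driver picked up
$ cat /sys/module/mlx4_core/parameters/log_num_mtt
$ cat /sys/module/mlx4_core/parameters/log_mtts_per_seg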
To: Open MPI Users <us...@open-mpi.org>
Sent: Tuesday, 28 August 2012 6:13 PM
Subject: Re: [OMPI users] Infiniband performance Problem and stalling
Randolph,
after reading this:
On 08/28/12 04:26, Randolph Pullen wrote:
> - On occasions it seems to stall indefinitely, waiting on a single receive.
... I would make a blind guess: are you aware of the IB card parameters for
registered memory?
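The arithmetic behind that guess, as commonly stated for mlx4-based HCAs (a hedged sketch; defaults and parameter names differ for the mthca driver used here):

#   max_reg_mem = (2^log_num_mtt) * (2^log_mtts_per_seg) * PAGE_SIZE
# e.g. log_num_mtt=19, log_mtts_per_seg=3 and 4 KiB pages cover 2 * 8 GB of RAM:
$ echo $(( (1 << 19) * (1 << 3) * 4096 / (1024 * 1024 * 1024) ))
# prints 16 (GiB)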
I have a test rig comprising 2 i7 systems with Mellanox III HCA 10G cards
running CentOS 5.7, kernel 2.6.18-274
Open MPI 1.4.3
MLNX_OFED_LINUX-1.5.3-1.0.0.2 (OFED-1.5.3-1.0.0.2):
On a Cisco 24-port switch
Normal performance is:
$ mpirun --mca btl openib,self -n 2 -hostfile mpi.hosts PingPong
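One way to isolate whether the stalls are specific to the openib BTL is to repeat the same benchmark over TCP; a sketch assuming the same PingPong binary and hostfile as above:

# same benchmark over the TCP BTL (IPoIB or Ethernet); slower is expected, but
# indefinite stalls should not reproduce here if the problem is openib-specific
$ mpirun --mca btl tcp,self -n 2 -hostfile mpi.hosts PingPong
# extra BTL selection chatter can show which transport is actually being used
$ mpirun --mca btl openib,self --mca btl_base_verbose 30 -n 2 -hostfile mpi.hosts PingPong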