IBM ESS building blocks can have up to 3 dual-port adapter cards per I/O node (10GbE, 40GbE, 56Gb IB, or 100Gb IB Mellanox adapters). Because there are 2 I/O nodes, that is up to a total of 12 ports per building block,
so there should not be any reason for this to fail.
 
I regularly see a mix of 10GbE/40GbE or 10GbE/IB configurations.
 
 
 
Regards
Andrew Beattie
Software Defined Storage  - IT Specialist
Phone: 614-2133-7927
 
 
----- Original message -----
From: "J. Eric Wonderley" <[email protected]>
Sent by: [email protected]
To: gpfsug main discussion list <[email protected]>
Cc:
Subject: Re: [gpfsug-discuss] more than one mlx connectx-4 adapter in same host
Date: Thu, Dec 21, 2017 6:37 AM
 
Just plain TCP/IP.
 
We have dual-port ConnectX-4s in our NSD servers.  Upon adding a second ConnectX-4 HBA, no links come up or show "up".  I have one port on each HBA configured for Ethernet, and ibv_devinfo output looks sane.
 
I cannot find anything indicating that this should not work.  I have a ticket open with Mellanox.
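One thing worth double-checking is the port personality (IB vs. Eth) burned into the card's firmware configuration. A minimal sketch using Mellanox's mlxconfig, assuming the MFT tools are installed; the device path below is an example (mt4115 is the ConnectX-4 PCI device), so substitute the one reported by `mst status`:

```shell
# Start the Mellanox Software Tools service (requires MFT installed, root)
mst start

# List config interfaces; note the /dev/mst/... path for your ConnectX-4
mst status

# Query the current port personality.
# LINK_TYPE_P1 / LINK_TYPE_P2: 1 = InfiniBand, 2 = Ethernet
mlxconfig -d /dev/mst/mt4115_pciconf0 query | grep LINK_TYPE

# Force both ports to Ethernet; takes effect after a reboot or driver reload
mlxconfig -d /dev/mst/mt4115_pciconf0 set LINK_TYPE_P1=2 LINK_TYPE_P2=2
```

If the second card shipped with both ports defaulting to IB, its links would stay down against an Ethernet switch even though ibv_devinfo looks sane.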
 
On Wed, Dec 20, 2017 at 3:25 PM, Knister, Aaron S. (GSFC-606.2)[COMPUTER SCIENCE CORP] <[email protected]> wrote:
 


We’ve done a fair amount of VPI work but admittedly not with connectx4. Is it possible the cards are trying to talk IB rather than Eth? I figured you’re Ethernet based because of the mention of Juniper. 
 
Are you attempting to do RoCE or just plain TCP/IP?
 
On December 20, 2017 at 14:40:48 EST, J. Eric Wonderley <[email protected]> wrote:
Hello:
 
Does anyone have this type of config?
 
The host configuration looks sane but we seem to observe link-down on all mlx adapters no matter what we do.
 
Big picture is that we are attempting to do MC-LAG (multi-chassis LAG) to a core switch.  I'm somewhat fearful of how this is implemented in the Juniper switch we are about to test.
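For an MC-LAG on the switch side, the host side is typically a plain LACP (802.3ad) bond spanning one port on each adapter. A minimal sketch with iproute2; the interface names (ens1f0, ens2f0) and the address are examples only:

```shell
# Create an LACP bond; miimon polls link state every 100 ms
ip link add bond0 type bond mode 802.3ad miimon 100 lacp_rate fast

# Slaves must be down before enslaving; use one port from each HBA
ip link set ens1f0 down
ip link set ens1f0 master bond0
ip link set ens2f0 down
ip link set ens2f0 master bond0

# Bring up the bond and assign the NSD server's data address
ip link set bond0 up
ip addr add 10.0.0.10/24 dev bond0

# Verify that both slaves negotiated into the aggregate
cat /proc/net/bonding/bond0
```

The LACP negotiation only succeeds once the physical links are up, so the link-down issue above has to be resolved first.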

_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss
 
 

