I don't know if that works with Cisco, but I use 50 and 100 m cables for 40 as well as 100 Gbit in my lab between two Mellanox switches: http://www.mellanox.com/products/interconnect/ethernet-active-optical-cables.php

As Paul pointed out, one of the very first things you need to do after adding an adapter is to flash the firmware to a recent level. Especially if you have two adapters with different firmware levels, I have seen cases where even the one with the higher level did not work properly. So before you do anything else, get them to a recent level, and especially to the same level if they are the same adapter type.
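For reference, this is roughly what I run to check and level the firmware with the Mellanox MFT tools. Just a sketch: it assumes MFT is installed, and the PCI addresses are examples from my lab, so adjust to your own devices:

  # query the current firmware level on each adapter (PCI addresses are examples)
  mstflint -d 04:00.0 query
  mstflint -d 82:00.0 query

  # or let mlxfwmanager discover all Mellanox devices and compare
  # against the latest published firmware
  mlxfwmanager --query

  # update everything online; note Paul's point below that a
  # power-cycle (not just a reboot) may be needed to finish
  mlxfwmanager -u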
Sven

On Wed, Dec 20, 2017 at 10:01 PM David D Johnson <[email protected]> wrote:

> We're trying to get a 40 GbE connection between Mellanox switches and Cisco
> switches down at the other end of the machine room.
> The BiDi part seems to be the best option given the roughly 30 m run on
> multimode. However, Mellanox support says it's not supported.
> We want to use this to get close to IB speeds for GPFS on nodes that aren't
> on the IB fabric.
> Has anyone had any luck getting 40 or 100 gig at 20-30 m when the
> switches are different brands?
>
> Thanks,
> -- ddj
>
> On Dec 20, 2017, at 4:53 PM, Sanchez, Paul <[email protected]> wrote:
>
> We have run multiple ConnectX-4 NICs in bonded MLAG (Arista) and VPC
> (Cisco) switch configurations on our NSD servers. We used to see issues
> with firmware versions that didn't support the optics we wanted to use
> (e.g. early CX3/CX4 and Cisco 40G-BiDi). You may also want to check
> mstflint to see whether the firmware levels match on the MLX cards; if
> you upgrade firmware, in some cases a power-cycle (not a reboot) can be
> required to finish the process.
>
> -Paul
>
> From: [email protected] On Behalf Of Andrew Beattie
> Sent: Wednesday, December 20, 2017 4:47 PM
> To: [email protected]
> Subject: Re: [gpfsug-discuss] more than one mlx connectx-4 adapter in same host
>
> IBM ESS building blocks can have up to 3 dual-port 10 GbE, 40 GbE, 56 Gb
> IB, or 100 Gb IB Mellanox adapter cards. Because we have 2 I/O nodes, that
> is up to a total of 12 ports per building block,
> so there should not be any reason for this to fail.
>
> I regularly see a mix of 10 GbE / 40 GbE or 10 GbE / IB configurations.
>
> Regards,
> Andrew Beattie
> Software Defined Storage - IT Specialist
> Phone: 614-2133-7927
> E-mail: [email protected]
>
> ----- Original message -----
> From: "J. Eric Wonderley" <[email protected]>
> Sent by: [email protected]
> To: gpfsug main discussion list <[email protected]>
> Subject: Re: [gpfsug-discuss] more than one mlx connectx-4 adapter in same host
> Date: Thu, Dec 21, 2017 6:37 AM
>
> Just plain TCP/IP.
>
> We have dual-port ConnectX-4s in our NSD servers. Upon adding a second
> ConnectX-4 HBA, no links go up or show "up". I have one port on each HBA
> configured for Eth, and ibv_devinfo looks sane.
>
> I cannot find anything indicating that this should not work. I have a
> ticket opened with Mellanox.
>
> On Wed, Dec 20, 2017 at 3:25 PM, Knister, Aaron S. (GSFC-606.2)[COMPUTER
> SCIENCE CORP] <[email protected]> wrote:
>
> We've done a fair amount of VPI work, but admittedly not with ConnectX-4.
> Is it possible the cards are trying to talk IB rather than Eth? I figured
> you're Ethernet-based because of the mention of Juniper.
>
> Are you attempting to do RoCE or just plain TCP/IP?
>
> On December 20, 2017 at 14:40:48 EST, J. Eric Wonderley
> <[email protected]> wrote:
>
> Hello:
>
> Does anyone have this type of config?
>
> The host configuration looks sane, but we seem to observe link-down on
> all mlx adapters no matter what we do.
>
> The big picture is that we are attempting to do MC (multi-chassis) LAGs
> to a core switch. I'm somewhat fearful as to how this is implemented in
> the Juniper switch we are about to test.
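P.S. On Aaron's IB-vs-Eth question quoted above: on ConnectX-4 you can check and force the port protocol with mlxconfig from the same MFT tools. Again only a sketch; the /dev/mst device path is an example, and per Mellanox's documentation LINK_TYPE is 1 for IB and 2 for Ethernet:

  # start the MST service so the /dev/mst device nodes exist
  mst start

  # show the current port protocol settings (device path is an example)
  mlxconfig -d /dev/mst/mt4115_pciconf0 query | grep LINK_TYPE

  # force both ports to Ethernet (takes effect after a reboot)
  mlxconfig -d /dev/mst/mt4115_pciconf0 set LINK_TYPE_P1=2 LINK_TYPE_P2=2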
_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss
