To add to the excellent advice others have already provided, I think you fundamentally have two choices:

- Establish additional OPA connections from NSD-A and NSD-B to cluster C2 and from NSD-C and NSD-D to cluster C1

*or*

- Add NSD-A and NSD-B as NSD servers for the NSDs behind FS2, and add NSD-C and NSD-D as NSD servers for the NSDs behind FS1. (Note: if you're running Scale 5.0 you can change the NSD server list with the file system available and mounted; otherwise you'll need an outage to unmount the file system and change the NSD server list. A sketch follows below.)
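
For reference, a rough sketch of what that change could look like with mmchnsd and a stanza file (the NSD names fs2nsd1/fs2nsd2 and the server ordering here are just placeholders for illustration):

   # new_servers.stanza: extend the server list for FS2's NSDs
   %nsd: nsd=fs2nsd1
     servers=nsd-c,nsd-d,nsd-a,nsd-b
   %nsd: nsd=fs2nsd2
     servers=nsd-c,nsd-d,nsd-a,nsd-b

   mmchnsd -F new_servers.stanza

The same pattern in reverse would add NSD-C and NSD-D to the server lists for FS1's NSDs.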

It's a matter of what's preferable (easier, cheaper, etc.): adding OPA connections to the NSD servers, or adding additional LUN presentations (which may involve SAN connections, of course) to the NSD servers.

In our environment we do the latter and it works very well for us.

-Aaron

On 7/19/18 11:42 AM, Simon Thompson wrote:
I think what you want is to use fabric numbers with verbsPorts, e.g. we have two IB fabrics and in the config we do things like:

[nodeclass1]
verbsPorts mlx4_0/1/1

[nodeclass2]
verbsPorts mlx5_0/1/3

GPFS recognises the /1 or /3 at the end as a fabric number, knows those fabrics are separate, and will use Ethernet between nodes on different fabrics instead.
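
Those settings can be applied per node class with mmchconfig; a minimal sketch (the node class and device names are just the ones from the snippet above):

   mmchconfig verbsPorts="mlx4_0/1/1" -N nodeclass1
   mmchconfig verbsPorts="mlx5_0/1/3" -N nodeclass2
   mmchconfig verbsRdma=enable -N nodeclass1,nodeclass2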

Simon

From: <[email protected]> on behalf of "[email protected]" <[email protected]>
Reply-To: "[email protected]" <[email protected]>
Date: Thursday, 19 July 2018 at 15:13
To: "[email protected]" <[email protected]>
Subject: [gpfsug-discuss] Mixing RDMA Client Fabrics for a single NSD Cluster

Dear GPFS List,

Does anyone know whether it is possible to have multiple file systems in a GPFS cluster that are all served primarily via Ethernet, but for which different “booster” connections to various IB/OPA fabrics exist?

For example, let’s say that in my central storage/NSD cluster I implement two file systems, FS1 and FS2. FS1 is served by NSD-A and NSD-B, and FS2 is served by NSD-C and NSD-D.

Now I have two client clusters, C1 and C2, which have different OPA fabrics. Both clusters can mount the two file systems via Ethernet, but I now add OPA connections for NSD-A and NSD-B to C1’s fabric and OPA connections for NSD-C and NSD-D to C2’s fabric, and just switch on RDMA.
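
“Switching on RDMA” would then amount to something like the following on the storage cluster (the HFI device name hfi1_0 and the node names are only assumptions for this sketch):

   mmchconfig verbsPorts="hfi1_0/1" -N NSD-A,NSD-B,NSD-C,NSD-D
   mmchconfig verbsRdma=enable -N NSD-A,NSD-B,NSD-C,NSD-D

with the corresponding settings applied to the client nodes in C1 and C2.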

As far as I understand, GPFS will use RDMA if it is available between two nodes but fall back to Ethernet if it is not. Given that, the above scenario could work in principle. But will it work in reality, and will it be supported by IBM?

Many thanks in advance.

Best Regards,

Stephan Peinkofer

--
Stephan Peinkofer
Leibniz Supercomputing Centre
Data and Storage Division
Boltzmannstraße 1, 85748 Garching b. München
URL: http://www.lrz.de





--
Aaron Knister
NASA Center for Climate Simulation (Code 606.2)
Goddard Space Flight Center
(301) 286-2776
_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss
