Hi Stephan,

I think every node in C1 and in C2 has to be able to see every node in the server cluster, NSD-[A-D].
We have a 10-node server cluster where 2 nodes do nothing but serve out NFS. Since these two are part of the server cluster, client clusters wanting to mount the server cluster via GPFS need to see them. I think both OPA fabrics need to be on all 4 of your server nodes; a rough configuration sketch follows below the quoted message.

Eric

On Thu, Jul 19, 2018 at 10:05 AM, Peinkofer, Stephan <[email protected]> wrote:
> Dear GPFS List,
>
> Does anyone of you know whether it is possible to have multiple file systems
> in a GPFS cluster that are all served primarily via Ethernet, but for which
> different "booster" connections to various IB/OPA fabrics exist?
>
> For example, let's say that in my central Storage/NSD cluster I implement two
> file systems, FS1 and FS2. FS1 is served by NSD-A and NSD-B, and FS2 is
> served by NSD-C and NSD-D.
> Now I have two client clusters, C1 and C2, which have different OPA fabrics.
> Both clusters can mount the two file systems via Ethernet, but I now add
> OPA connections for NSD-A and NSD-B to C1's fabric and OPA connections for
> NSD-C and NSD-D to C2's fabric and just switch on RDMA.
> As far as I understand, GPFS will use RDMA if it is available between two
> nodes but switch to Ethernet if RDMA is not available between the two
> nodes. So given just this, the above scenario could work in principle. But
> will it work in reality, and will it be supported by IBM?
>
> Many thanks in advance.
> Best Regards,
> Stephan Peinkofer
> --
> Stephan Peinkofer
> Leibniz Supercomputing Centre
> Data and Storage Division
> Boltzmannstraße 1, 85748 Garching b. München
> URL: http://www.lrz.de
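For what it's worth, here is a minimal sketch of how the per-fabric RDMA settings could be expressed with node classes. The node class names and the hfi1_0 port identifier are assumptions for illustration, not taken from Stephan's actual setup:

  # Group the OPA-attached NSD servers per file system (hypothetical names).
  mmcrnodeclass fs1servers -N nsd-a,nsd-b
  mmcrnodeclass fs2servers -N nsd-c,nsd-d

  # Enable verbs RDMA only on nodes that actually have an OPA HFI;
  # verbsPorts names the HFI port GPFS should use (hfi1_0 assumed here).
  mmchconfig verbsRdma=enable,verbsPorts="hfi1_0" -N fs1servers
  mmchconfig verbsRdma=enable,verbsPorts="hfi1_0" -N fs2servers

  # Repeat on the OPA-attached nodes in client clusters C1 and C2 with
  # their own node classes. Node pairs that do not share an RDMA fabric
  # simply keep talking TCP/IP over the daemon (Ethernet) network.
  # verbsRdma/verbsPorts typically take effect after mmfsd is restarted
  # on the affected nodes.

This only covers the RDMA side; the client clusters would still do the usual mmauth/mmremotecluster/mmremotefs remote-mount setup over the daemon network.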
