[gpfsug-discuss] Mixing RDMA Client Fabrics for a single NSD Cluster
I think what you want is to use fabric numbers with verbsPorts, e.g. we have
two IB fabrics and in the config we do things like:
[nodeclass1]
verbsPorts mlx4_0/1/1
[nodeclass2]
verbsPorts mlx5_0/1/3
GPFS recognises the /1 and /3 suffixes as fabric numbers, so it knows which
adapters are on different fabrics and will use Ethernet between those
nodes instead.
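For reference, per-node-class verbsPorts settings like the above are normally applied with mmchconfig. A minimal sketch, assuming node classes named nodeclass1 and nodeclass2 already exist (the adapter names are taken from the config snippet above):

```shell
# Enable RDMA for both node classes (assumed class names).
mmchconfig verbsRdma=enable -N nodeclass1,nodeclass2

# verbsPorts uses device/port/fabric-number; the trailing /1 and /3
# mark which IB fabric each adapter sits on, so GPFS only attempts
# RDMA between nodes sharing a fabric number.
mmchconfig verbsPorts="mlx4_0/1/1" -N nodeclass1
mmchconfig verbsPorts="mlx5_0/1/3" -N nodeclass2

# Verify the resulting per-node-class settings:
mmlsconfig verbsPorts
```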
Simon
*From: * on behalf of
"stephan.peinko...@lrz.de"
*Reply-To: *"gpfsug-discuss@spectrumscale.org"
*Date: *Thursday, 19 July 2018 at 15:13
*To: *"gpfsug-discuss@spectrumscale.org"
*Subject: *[gpfsug-discuss] Mixing RDMA Client Fabrics for a single NSD Cluster
Hi Stephan:
I think every node in C1 and in C2 has to be able to see every node in the
server cluster NSD-[AD].
We have a 10-node server cluster where 2 nodes do nothing but serve out
NFS. Since these two are part of the server cluster, client clusters
wanting to mount the server cluster via GPFS need
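For completeness, cross-cluster mounts of the kind described above are configured with mmauth, mmremotecluster, and mmremotefs. A rough sketch; the cluster names, key file names, contact nodes, and the fs1 device are all hypothetical, not taken from this thread:

```shell
# On each cluster, generate an authentication key pair (once):
mmauth genkey new

# On the server (storage) cluster, authorise client cluster C1
# and grant it access to file system fs1 (assumed names):
mmauth add c1.example.com -k c1_id_rsa.pub
mmauth grant c1.example.com -f fs1

# On client cluster C1, register the storage cluster; the contact
# nodes listed here must be reachable from every C1 node:
mmremotecluster add storage.example.com -n nsd-a,nsd-b -k storage_id_rsa.pub
mmremotefs add fs1 -f fs1 -C storage.example.com -T /gpfs/fs1
mmmount fs1 -a
```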
Dear GPFS List,
does any of you know whether it is possible to have multiple file systems in a
GPFS cluster that are all served primarily via Ethernet, but for which different
“booster” connections to various IB/OPA fabrics exist?
For example let’s say in my central Storage/NSD Cluster, I