We had an environment like this, with gateway nodes between the OPA and IB
networks that had both cards. The DDN GRIDScaler in this case was IB only.
That doesn't answer your question directly, but that setup does work.

--
#BlackLivesMatter
____
|| \\UTGERS,     |---------------------------*O*---------------------------
||_// the State  |         Ryan Novosielski - [email protected]
|| \\ University | Sr. Technologist - 973/972.0922 (2x0922) ~*~ RBHS Campus
||  \\    of NJ  | Office of Advanced Research Computing - MSB A555B, Newark
     `'

On Sep 6, 2024, at 11:52, Sean Mc Grath <[email protected]> wrote:

Hi,

We have a GPFS cluster where the NSD servers mount the storage over fibre 
channel and export the file system over InfiniBand for clients.

We will be getting some used equipment that uses OmniPath.

The "IBM Storage Scale Frequently Asked Questions and Answers" states [1]:

RDMA is not supported on a node when both Mellanox HCAs and Cornelis Networks 
Omni-Path HFIs are enabled for RDMA.

Does this mean that we wouldn't be able to consolidate both IB HCAs and OPA
HFIs in the same NSD servers, and would instead need two types of NSD servers:
1) InfiniBand-exporting and 2) OmniPath-exporting?

If so, is it then a matter of using the Multi-Rail over TCP "subnets=" setting
in mmchconfig to distinguish which NSD server the clients should connect to? [2]
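A minimal sketch of what I mean, assuming two hypothetical daemon subnets,
10.10.1.0 on the InfiniBand side and 10.10.2.0 on the OmniPath side (the
addresses are placeholders):

    # Placeholder subnets: 10.10.1.0 = IB-side network, 10.10.2.0 = OPA-side
    mmchconfig subnets="10.10.1.0 10.10.2.0"
    # subnets takes effect when the daemon is restarted on the affected nodes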

Or am I completely misunderstanding all this?

Many thanks in advance.

Sean

[1] https://www.ibm.com/docs/en/STXKQY/pdf/gpfsclustersfaq.pdf
[2] https://www.ibm.com/docs/en/storage-scale/5.1.6?topic=configuring-multi-rail-over-tcp-mrot

---

Sean McGrath

[email protected]
Senior Systems Administrator
Research IT, IT Services, Trinity College Dublin
https://www.tcd.ie/itservices/
https://www.tchpc.tcd.ie/

_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at gpfsug.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss_gpfsug.org
