If I understand the question correctly, we used to have a cluster that did exactly this: GPFS storage was on InfiniBand, some of the compute nodes were too, and the rest were on Omni-Path, with routers in between that had adapters of both types.
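For what it's worth, a rough sketch of how the Scale side of such a setup might look (the node class names and device names below are just placeholders, not taken from our actual cluster):

  # Enable verbs RDMA on the InfiniBand-attached nodes (HCA device/port is an example)
  mmchconfig verbsRdma=enable,verbsPorts="mlx5_0/1" -N ibNodes

  # Enable verbs RDMA on the Omni-Path nodes (hfi1 is the OPA verbs device)
  mmchconfig verbsRdma=enable,verbsPorts="hfi1_0/1" -N opaNodes

  # Review the resulting non-default settings
  mmlsconfig

My understanding is that RDMA only applies within each fabric; anything that has to cross between the two fabrics goes over ordinary TCP/IP, so the routers in the middle just need IP connectivity to both sides rather than any RDMA "routing".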
Sent from my iPhone

On Aug 21, 2023, at 13:55, Kidger, Daniel <[email protected]> wrote:

I know in the Lustre world that LNET routers are used to provide RDMA over heterogeneous networks. Is there an equivalent for Storage Scale?

e.g. if an ESS uses InfiniBand to connect directly to Cluster A, could that InfiniBand RDMA fabric be "routed" to Cluster B, which has RoCE connecting all its nodes together, and hence the filesystem mounted?

ps. The same question would apply to other usually incompatible RDMA networks like Omni-Path, Slingshot, Cornelis, ...?

Daniel

Daniel Kidger
HPC Storage Solutions Architect, EMEA
[email protected]
+44 (0)7818 522266
hpe.com
_______________________________________________ gpfsug-discuss mailing list gpfsug-discuss at gpfsug.org http://gpfsug.org/mailman/listinfo/gpfsug-discuss_gpfsug.org
