Ryan,

This sounds very interesting. Do you have more details or references for how they were connected together, and what the pain points were?
Daniel

From: gpfsug-discuss <[email protected]> On Behalf Of Ryan Novosielski
Sent: 21 August 2023 19:07
To: gpfsug main discussion list <[email protected]>
Cc: [email protected]
Subject: Re: [gpfsug-discuss] Joining RDMA over different networks?

If I understand what you're asking correctly, we used to have a cluster that did this. GPFS was on InfiniBand, some of the compute nodes were too, and the rest were on Omni-Path. There were routers in between with both types.

Sent from my iPhone

On Aug 21, 2023, at 13:55, Kidger, Daniel <[email protected]> wrote:

I know that in the Lustre world, LNET routers are used to provide RDMA over heterogeneous networks. Is there an equivalent for Storage Scale?

E.g. if an ESS uses InfiniBand to connect directly to Cluster A, could that InfiniBand RDMA fabric be "routed" to Cluster B, which has RoCE connecting all its nodes together, so that the filesystem could be mounted there?

P.S. The same question would apply to other usually incompatible RDMA networks like Omni-Path, Slingshot, Cornelis, … ?

Daniel

Daniel Kidger
HPC Storage Solutions Architect, EMEA
[email protected]
+44 (0)7818 522266
hpe.com
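For illustration, here is a rough sketch of what per-fabric RDMA enablement looks like on the Scale side of Daniel's example. It is only a sketch, not a tested configuration: the node class names (ibNodes, roceNodes) and the verbs ports are hypothetical, and it configures the two fabrics independently. As far as I know, Scale has no direct equivalent of an LNET router, so traffic crossing between the InfiniBand and RoCE sides would fall back to TCP/IP rather than being RDMA-routed.

    # Cluster A / ESS side (InfiniBand): enable verbs RDMA on the IB HCA ports
    # ("ibNodes" is a hypothetical node class; changes take effect after a daemon restart)
    mmchconfig verbsRdma=enable -N ibNodes
    mmchconfig verbsPorts="mlx5_0/1" -N ibNodes

    # Cluster B (RoCE): RoCE additionally needs RDMA CM enabled, I believe
    # ("roceNodes" is a hypothetical node class)
    mmchconfig verbsRdma=enable,verbsRdmaCm=enable -N roceNodes
    mmchconfig verbsPorts="mlx5_1/1" -N roceNodes

The Lustre analogy Daniel mentions would be something along the lines of "lnetctl set routing 1" on the router nodes and "lnetctl route add --net o2ib0 --gateway <router NID>" on the clients; I am not aware of Scale exposing anything comparable at the RDMA layer.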
