> -----Original Message-----
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of 
> Denny Fuchs
> Sent: 05 October 2016 12:43
> To: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] 6 Node cluster with 24 SSD per node: 
> Hardwareplanning/ agreement
> 
> hi,
> 
> I got a call from Mellanox, and we now have an offer for the following
> network:
> 
> * 2 x SN2100 100Gb/s Switch 16 ports
> * 10 x ConnectX 4LX-EN 25Gb card for hypervisor and OSD nodes
> * 4 x Mellanox QSA-to-SFP+ adapters for interconnecting to our HP
> 2920 switches
> * 3 x Copper split cables 1 x 100Gb -> 4 x 25Gb

Even better than 10G: 25Gb/s is clocked faster than 10Gb/s, so you should see
slightly lower latency versus 10G. Just make sure the kernel you will be
running supports those NICs.
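A quick sanity check on a Linux host is to see whether the kernel ships the mlx5 driver that the ConnectX-4 Lx cards use, and (once a card is in the box) what driver and link speed the interface reports. This is just a sketch; the interface name enp3s0 below is a placeholder for your actual device:

```shell
# Does this kernel ship the mlx5 core driver (used by ConnectX-4 Lx)?
modinfo mlx5_core 2>/dev/null | head -n 3 \
  || echo "mlx5_core not found in this kernel"

# With a card installed, confirm driver and negotiated link speed
# (replace enp3s0 with your interface name)
ethtool -i enp3s0 2>/dev/null | grep -i driver
ethtool enp3s0 2>/dev/null | grep -i speed
```

If modinfo comes up empty you would need a newer kernel or the out-of-tree Mellanox OFED driver package.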

> 
> 
> So, if the price fits, that should be O.K for anything else ....  :-)
> 
> cu denny
> _______________________________________________
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
