>>I'm seeking an explanation of how Ceph utilizes two (or more) 
>>independent links on both the public and the cluster network.

public network: client -> OSD
cluster network: OSD -> OSD (mainly replication)


>>If I configure two IPs for the public network on two NICs, will Ceph route 
>>traffic from its (multiple) OSDs on this node over both IPs?

No.

You need one IP for the public network and one for the cluster network, on 
different subnets of course.
You'll gain bandwidth because the replication traffic goes over the other link, 
but there is no magic that uses both links for the public (client -> OSD) 
traffic. There is no multipathing as with iSCSI, for example.
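
To make the public/cluster split concrete, a minimal ceph.conf fragment might 
look like this (the subnets are hypothetical examples; each OSD host needs one 
address in each subnet):

```
[global]
# client <-> OSD traffic
public network  = 192.168.1.0/24
# OSD <-> OSD replication and heartbeat traffic
cluster network = 192.168.2.0/24
```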




----- Original message ----- 

From: "Sven Budde" <sven.bu...@itgration-gmbh.de> 
To: "Alexandre DERUMIER" <aderum...@odiso.com> 
Cc: ceph-users@lists.ceph.com 
Sent: Thursday, 5 June 2014 18:27:32 
Subject: RE: [ceph-users] Ceph networks, to bond or not to bond? 

Hi Alexandre, 

thanks for the reply. As said, my switches are not stackable, so LACP does not 
seem to be an option for me. 

I'm seeking an explanation of how Ceph utilizes two (or more) independent 
links on both the public and the cluster network. 

If I configure two IPs for the public network on two NICs, will Ceph route 
traffic from its (multiple) OSDs on this node over both IPs? 

Cheers, 
Sven 

-----Original message----- 
From: Alexandre DERUMIER [mailto:aderum...@odiso.com] 
Sent: Thursday, 5 June 2014 18:14 
To: Sven Budde 
Cc: ceph-users@lists.ceph.com 
Subject: Re: [ceph-users] Ceph networks, to bond or not to bond? 

Hi, 

>>My low-budget setup consists of two gigabit switches, capable of LACP, 
>>but not stackable. For redundancy, I'd like to have my links spread 
>>evenly over both switches. 

If you want to do LACP across both switches, the switches need to be stackable. 

(or use active-backup bonding) 
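
For reference, an active-backup bond can be declared roughly like this on a 
Debian-style system with ifenslave (interface names and addresses are 
assumptions for illustration):

```
# /etc/network/interfaces sketch; eth0/eth1 and the address are assumed
auto bond0
iface bond0 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    bond-slaves eth0 eth1
    bond-mode active-backup
    bond-miimon 100
```

Active-backup uses only one link at a time, so it gives switch redundancy 
without requiring stackable switches, but no extra bandwidth.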

>>My question where I didn't find a conclusive answer in the 
>>documentation and mailing archives: 
>>Will the OSDs utilize both 'single' interfaces per network, if I 
>>assign two IPs per public and per cluster network? Or will all OSDs 
>>just bind on one IP and use only a single link? 

You just need one IP per bond. 

With LACP, the load balancing uses a hash algorithm to spread TCP connections 
across the links. 
(That also means a single connection can't use more than one link.) 

Check that your switch supports an IP+port hash algorithm 
(xmit_hash_policy=layer3+4 in Linux LACP bonding). 

This way, each osd->osd connection can be load-balanced, and the same goes for 
your client->osd connections. 
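
To illustrate why a single TCP connection stays on one link, here is a rough 
Python sketch of a layer3+4-style transmit hash, simplified from the formula 
described in the Linux bonding driver documentation (the function names and 
addresses are mine, not Ceph's or the kernel's):

```python
# Simplified layer3+4 transmit hash, in the spirit of the Linux bonding
# driver's documented xmit_hash_policy=layer3+4 behavior. This is an
# illustration, not the exact kernel implementation.

def layer3_4_hash(src_ip: int, dst_ip: int, src_port: int, dst_port: int) -> int:
    """Hash the connection 4-tuple; IPs are 32-bit integers."""
    return (src_port ^ dst_port) ^ ((src_ip ^ dst_ip) & 0xFFFF)

def pick_link(src_ip: int, dst_ip: int, src_port: int, dst_port: int,
              num_links: int = 2) -> int:
    """Every packet of one TCP connection hashes to the same link."""
    return layer3_4_hash(src_ip, dst_ip, src_port, dst_port) % num_links

# A single client->OSD connection always rides one link...
a = pick_link(0x0A000001, 0x0A000002, 50000, 6800)
assert a == pick_link(0x0A000001, 0x0A000002, 50000, 6800)

# ...but many connections (different source ports) spread over both links.
links = {pick_link(0x0A000001, 0x0A000002, p, 6800) for p in range(50000, 50010)}
assert links == {0, 1}
```

With many parallel OSD connections both links tend to be used, but any single 
stream is capped at one link's bandwidth, which is why bonding cannot speed up 
a lone client stream.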






----- Original message ----- 

From: "Sven Budde" <sven.bu...@itgration-gmbh.de> 
To: ceph-users@lists.ceph.com 
Sent: Thursday, 5 June 2014 16:20:04 
Subject: [ceph-users] Ceph networks, to bond or not to bond? 

Hello all, 

I'm currently building a new small cluster with three nodes, each node having 
4x 1 Gbit/s network interfaces available and 8-10 OSDs running per node. 

I planned to assign 2x 1 Gb/s to the public network, and the other 2x 1 Gb/s to 
the cluster network. 

My low-budget setup consists of two gigabit switches, capable of LACP, but not 
stackable. For redundancy, I'd like to have my links spread evenly over both 
switches. 

My question, to which I didn't find a conclusive answer in the documentation or 
the mailing list archives: 
Will the OSDs utilize both 'single' interfaces per network, if I assign two IPs 
per public and per cluster network? Or will all OSDs just bind on one IP and 
use only a single link? 

I'd rather avoid bonding the NICs, because if one switch fails, at least one 
node would become unavailable, in the worst case two (out of three), rendering 
the cluster inoperable. 

Are there other options I missed? 10 GE is currently out of our budget ;) 

Thanks, 
Sven 


_______________________________________________ 
ceph-users mailing list 
ceph-users@lists.ceph.com 
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com 