Hi,

>>My low-budget setup consists of two gigabit switches, capable of LACP, 
>>but not stackable. For redundancy, I'd like to have my links spread 
>>evenly over both switches.

If you want to run LACP across both switches, the switches need to be stackable.

(Or use active-backup bonding instead, which works across two independent switches.)
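
For example, a minimal active-backup bond on Debian/Ubuntu with ifenslave could 
look like this (interface names and addresses are just placeholders):

# /etc/network/interfaces (Debian/Ubuntu + ifenslave) -- sketch only;
# interface names and addresses are placeholders.
auto bond0
iface bond0 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    bond-slaves eth0 eth1        # eth0 -> switch A, eth1 -> switch B
    bond-mode active-backup      # only one link carries traffic at a time
    bond-miimon 100              # link monitoring interval in ms
    bond-primary eth0

With that, a switch failure just triggers a failover to the other NIC, no 
stacking required.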

>>My question, for which I didn't find a conclusive answer in the documentation 
>>and mailing list archives: 
>>Will the OSDs utilize both 'single' interfaces per network, if I assign 
>>two IPs per public and per cluster network? Or will all OSDs just bind 
>>on one IP and use only a single link? 

You just need one IP per bond.
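
So in ceph.conf you'd declare just one subnet per network, something like this 
(the subnets here are placeholders):

# ceph.conf fragment -- one subnet per network; each node has one IP
# (on its bond) in each subnet.
[global]
public network  = 192.168.1.0/24
cluster network = 192.168.2.0/24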

With LACP, the load balancing uses a hash algorithm to spread TCP connections 
across the links.
(That also means a single connection can't use more than one link.)

Check that your switch supports an IP+port hash algorithm 
(xmit_hash_policy=layer3+4 in Linux LACP bonding).

That way, each OSD->OSD connection can be load balanced, and the same goes for 
your client->OSD connections.
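
A rough sketch of such an LACP bond with ifenslave (again, interface names and 
addresses are placeholders, and both links must terminate on the same switch or 
stack):

# /etc/network/interfaces -- LACP (802.3ad) bond sketch.
auto bond0
iface bond0 inet static
    address 192.168.2.10
    netmask 255.255.255.0
    bond-slaves eth2 eth3
    bond-mode 802.3ad                  # LACP
    bond-miimon 100                    # link monitoring interval in ms
    bond-xmit-hash-policy layer3+4     # hash on IP address + TCP/UDP port

You can verify the negotiated mode and hash policy afterwards in 
/proc/net/bonding/bond0.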






----- Original Message ----- 

From: "Sven Budde" <[email protected]> 
To: [email protected] 
Sent: Thursday, June 5, 2014 16:20:04 
Subject: [ceph-users] Ceph networks, to bond or not to bond? 

Hello all, 

I'm currently building a new small cluster with three nodes, each node 
having 4x 1 Gbit/s network interfaces available and 8-10 OSDs running 
per node. 

I thought I assign 2x 1 Gb/s for the public network, and the other 2x 1 
Gb/s for the cluster network. 

My low-budget setup consists of two gigabit switches, capable of LACP, 
but not stackable. For redundancy, I'd like to have my links spread 
evenly over both switches. 

My question, for which I didn't find a conclusive answer in the documentation 
and mailing list archives: 
Will the OSDs utilize both 'single' interfaces per network, if I assign 
two IPs per public and per cluster network? Or will all OSDs just bind 
on one IP and use only a single link? 

I'd rather avoid bonding the NICs, as if one switch fails, there would 
be at least one node unavailable, in the worst case 2 (out of 3), 
rendering the cluster inoperable. 

Are there other options I missed? 10 GE is currently out of our budget ;) 

Thanks, 
Sven 


_______________________________________________ 
ceph-users mailing list 
[email protected] 
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com 