On 05/06/2014 18:27, Sven Budde wrote:
> Hi Alexandre,
>
> thanks for the reply. As said, my switches are not stackable, so using LACP 
> does not seem to be my best option.
>
> I'm seeking an explanation of how Ceph utilizes two (or more) 
> independent links on both the public and the cluster network.
AFAIK, Ceph does not support multiple IP links in the same "designated
network" (aka client/OSD networks). Ceph is not aware of link
aggregation; it has to be done at the Ethernet layer, so:

- if your switches are stackable, you can use traditional LACP on both
sides (switch and Ceph)
- if they are not, then, as Mariusz said, use the appropriate bonding
mode on the Ceph side and do not use LACP on the switches.

More info here:
http://www.linuxfoundation.org/collaborate/workgroups/networking/bonding
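For the non-stackable case, here is a minimal active-backup sketch for a
Debian-style /etc/network/interfaces (interface names, addresses, and
timers are assumptions, not taken from this thread):

```
# /etc/network/interfaces -- active-backup bond, no switch-side LACP needed
auto bond0
iface bond0 inet static
    address 10.0.0.1          # assumed public-network IP of this node
    netmask 255.255.255.0
    bond-slaves eth0 eth1     # one link per switch for redundancy
    bond-mode active-backup   # mode 1: only one link carries traffic at a time
    bond-miimon 100           # link-failure check interval in ms
```

With active-backup you get failover across the two switches but no extra
bandwidth; only LACP (mode 802.3ad) aggregates throughput.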

Cheers !
>
> If I configure two IPs for the public network on two NICs, will Ceph route 
> traffic from its (multiple) OSDs on this node over both IPs?
>
> Cheers,
> Sven
>
> -----Original Message-----
> From: Alexandre DERUMIER [mailto:[email protected]] 
> Sent: Thursday, 5 June 2014 18:14
> To: Sven Budde
> Cc: [email protected]
> Subject: Re: [ceph-users] Ceph networks, to bond or not to bond?
>
> Hi,
>
>>> My low-budget setup consists of two gigabit switches, capable of LACP, 
>>> but not stackable. For redundancy, I'd like to have my links spread 
>>> evenly over both switches.
> If you want to do LACP across both switches, they need to be stackable.
>
> (or use active-backup bonding)
>
>>> My question where I didn't find a conclusive answer in the 
>>> documentation and mailing archives:
>>> Will the OSDs utilize both 'single' interfaces per network, if I 
>>> assign two IPs per public and per cluster network? Or will all OSDs 
>>> just bind on one IP and use only a single link?
> You just need one IP per bond.
>
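Concretely, with one bond (and therefore one IP) per designated network,
a minimal ceph.conf fragment might look like this (the subnets are
assumptions for illustration):

```
[global]
# one subnet per designated network; each node binds the single IP
# configured on its bond interface in that subnet
public network  = 192.168.1.0/24
cluster network = 192.168.2.0/24
```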
> With LACP, the load balancing uses a hash algorithm to distribute TCP 
> connections across the links.
> (That also means that one connection can't use more than one link.)
>
> Check that your switch supports an IP+port hash algorithm 
> (xmit_hash_policy=layer3+4 in Linux LACP bonding).
>
> This way, each osd->osd connection can be load-balanced, and the same goes for client->osd traffic.
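The hash policy Alexandre describes can be sketched as follows. This is a
simplified model of layer3+4 hashing (the real kernel formula folds the
address and port bytes differently), just to show why one TCP connection
always stays on one link while many connections spread out:

```python
def layer3_4_hash(src_ip: int, dst_ip: int, src_port: int, dst_port: int,
                  n_links: int) -> int:
    """Pick a link index from the TCP 4-tuple, in the spirit of bonding's
    xmit_hash_policy=layer3+4 (simplified; not the exact kernel formula)."""
    h = (src_port ^ dst_port) ^ (src_ip ^ dst_ip)
    return h % n_links

# The same connection always lands on the same link...
conn = (0x0A000001, 0x0A000002, 45000, 6800, 2)  # 10.0.0.1 -> 10.0.0.2
assert layer3_4_hash(*conn) == layer3_4_hash(*conn)

# ...but many connections (e.g. the many OSD-to-OSD sessions on different
# source ports) spread across both links of a 2-link bond.
links_used = {layer3_4_hash(0x0A000001, 0x0A000002, p, 6800, 2)
              for p in range(45000, 45100)}
print(links_used)  # both link indices appear: {0, 1}
```

This is why a single large transfer is capped at one link's bandwidth,
while a Ceph node with many concurrent peer connections can use all links.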
>
>
>
>
>
>
> ----- Original Message ----- 
>
> From: "Sven Budde" <[email protected]>
> To: [email protected]
> Sent: Thursday, 5 June 2014 16:20:04
> Subject: [ceph-users] Ceph networks, to bond or not to bond? 
>
> Hello all, 
>
> I'm currently building a new small cluster with three nodes, each with 4x 
> 1 Gbit/s network interfaces available and 8-10 OSDs running per node. 
>
> My plan is to assign 2x 1 Gb/s to the public network and the other 2x 1 Gb/s 
> to the cluster network. 
>
> My low-budget setup consists of two gigabit switches, capable of LACP, but 
> not stackable. For redundancy, I'd like to have my links spread evenly over 
> both switches. 
>
> My question, for which I didn't find a conclusive answer in the documentation 
> or mailing archives: 
> Will the OSDs utilize both 'single' interfaces per network, if I assign two 
> IPs per public and per cluster network? Or will all OSDs just bind on one IP 
> and use only a single link? 
>
> I'd rather avoid bonding the NICs: if one switch fails, at least one node 
> would become unavailable, in the worst case 2 (out of 3) ...rendering the 
> cluster inoperable. 
>
> Are there other options I missed? 10 GE is currently out of our budget ;) 
>
> Thanks,
> Sven 
>
>
> _______________________________________________ 
> ceph-users mailing list 
> [email protected] 
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com 
>
>

-- 
Cédric
