This is simply how bonding behaves; it has little to do with ceph-osd.

Make sure your hash policy is set appropriately, so that you even have a
chance of using both links.

https://support.packet.com/kb/articles/lacp-bonding
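
As a rough illustration, a bond configured for LACP with a layer3+4 hash
policy looks something like the Debian-style ifupdown sketch below
(interface names and addresses are placeholders; the switch side must be
configured for LACP as well):

    auto bond0
    iface bond0 inet static
        address 192.168.1.10/24
        bond-slaves eno1 eno2
        bond-mode 802.3ad            # LACP
        bond-miimon 100
        bond-lacp-rate fast
        # hash on IP + port so different TCP flows can land on different
        # slaves; the default layer2 policy often pins everything to one link
        bond-xmit-hash-policy layer3+4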

The larger the set of destinations, the more likely you are to spread traffic 
across both links.
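
The reason is that with a layer3+4 policy every (src IP, dst IP, src port,
dst port) flow is pinned to exactly one slave, so a single stream never
exceeds one link, but many flows to many peers tend toward an even split. A
toy Python sketch of the idea (not the kernel's actual hash function, and
the addresses/ports are made up):

    def pick_slave(flow, n_slaves=2):
        # a given flow always hashes to the same slave, so one flow can
        # never use more than one link's worth of bandwidth
        return hash(flow) % n_slaves

    # ten hypothetical peer OSDs, one TCP flow each
    flows = [("10.0.0.1", "10.0.0.%d" % peer, 6800 + peer, 45000 + peer)
             for peer in range(2, 12)]

    per_slave = [0, 0]
    for f in flows:
        per_slave[pick_slave(f)] += 1

    # many flows -> roughly even split; a single flow -> always [1, 0] or [0, 1]
    print(per_slave)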



> OSDs do not even use bonding efficiently. If they were to use 2 links 
> concurrently it would be a lot better. 
> 
> https://www.mail-archive.com/[email protected]/msg35474.html 
> 
> 
> 
> -----Original Message-----
> To: [email protected]
> Subject: [ceph-users] Re: small cluster HW upgrade
> 
> Hi Philipp,
> 
> More nodes is better: more availability, more CPU and more RAM. But I 
> agree that your 1GbE link will be the most limiting factor, especially if 
> there are some SSDs. I suggest you upgrade your networking to 10GbE (or 
> 25GbE, since it will cost you nearly the same as 10GbE). Upgrading your 
> networking is better than using bonding, since bonding cannot give you 
> 100% of the total link bandwidth.
> 
> Best regards,
> 
_______________________________________________
ceph-users mailing list -- [email protected]
To unsubscribe send an email to [email protected]