I tested something in the past [1] where I noticed that an osd saturated 
one link of a bond and did not use the available 2nd one. I may have 
made a mistake in writing down that it was a 1x replicated pool. 
However, it has been written here multiple times that these osd 
processes are single threaded, so afaik they cannot use more than one 
link, and the moment your osd has a saturated link, your clients will 
notice it.


[1]
https://www.mail-archive.com/ceph-users@lists.ceph.com/msg35474.html
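For what it is worth, balance-tcp / layer3+4 style bonding hashes each 
TCP connection onto one bond member, so a single connection can never 
exceed the bandwidth of one link; the aggregate only helps when there 
are many connections. A rough Python sketch of that per-flow hashing 
behaviour (the hash function, addresses and link count are only 
illustrative assumptions, not the actual LACP implementation):

  import hashlib

  def pick_link(src_ip, src_port, dst_ip, dst_port, n_links=2):
      # Per-flow hashing in the spirit of LACP layer3+4 / OVS balance-tcp:
      # the whole 4-tuple of a TCP connection maps to exactly one bond
      # member, so packets of that flow never spread across links.
      key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
      return int(hashlib.md5(key).hexdigest(), 16) % n_links

  # A single osd-to-osd connection always lands on the same link, so it
  # can saturate at most one member of the bond.
  print(pick_link("10.0.0.1", 6801, "10.0.0.2", 6805))

  # Many client connections tend to spread over both links, which is why
  # bonding still helps a busy cluster overall.
  print([pick_link("10.0.0.%d" % i, 40000 + i, "10.0.0.2", 6805)
         for i in range(8)])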



-----Original Message-----
From: Lindsay Mathieson [mailto:lindsay.mathie...@gmail.com] 
Sent: Monday, 21 September 2020 2:42
To: ceph-users@ceph.io
Subject: [ceph-users] Re: Setting up a small experimental CEPH network

On 21/09/2020 5:40 am, Stefan Kooman wrote:
> My experience with bonding and Ceph is pretty good (OpenvSwitch). Ceph 
> uses lots of tcp connections, and those can get shifted (balanced) 
> between interfaces depending on load.

Same here - I'm running 4x1GbE (LACP, Balance-TCP) on a 5 node cluster 
with 19 OSDs. 20 active VMs and it idles at under 1 MiB/s, spikes up 
to 100 MiB/s no problem. When doing a heavy rebalance/repair, data rates 
on any one node can hit 400 MiB/s+.


It scales out really well.

--
Lindsay
