Hi,

We use fq on both the hypervisors and the Ceph servers; it seems to be working quite well.
We use fq_codel on the router sending traffic from the hypervisors to slower segments of the network. I was afraid to use fq_codel on the hypervisors, as it could affect latency (we run a full SSD and NVMe Ceph cluster) and we need to keep latency under 1 ms. My recommendation would be to be careful when using fq_codel on hypervisors, or at least to tweak the default values. But we don't have serious test data to back up that configuration.

Best regards,

Xavier Trilla P.
Clouding.io <https://clouding.io/>
A Cloud Server with SSDs, redundant and available in under 30 seconds? Try it now at Clouding.io!

On 2 Jan 2019, at 22:47, Kevin Olbrich <k...@sv01.de> wrote:

Hi!

I wonder if changing the qdisc and congestion control algorithm (for example fq with Google BBR) on Ceph servers / clients has positive effects during high load.

Google BBR: https://cloud.google.com/blog/products/gcp/tcp-bbr-congestion-control-comes-to-gcp-your-internet-just-got-faster

I am running a lot of VMs with BBR, but the hypervisors run fq_codel + cubic (the OSDs run Ubuntu defaults).

Did someone test qdisc and congestion control settings?

Kevin

_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
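[For reference, a minimal sketch of the qdisc / congestion-control changes discussed in this thread. This assumes a Linux kernel with the tcp_bbr module available (4.9 or later) and uses eth0 as a placeholder interface name; the fq_codel target/interval values are illustrative examples of "tweaking the defaults" for a low-latency network, not tested recommendations.]

```shell
# Check which congestion control algorithms the kernel offers
sysctl net.ipv4.tcp_available_congestion_control

# Switch to fq as the default qdisc and BBR for TCP (runtime only)
sysctl -w net.core.default_qdisc=fq
sysctl -w net.ipv4.tcp_congestion_control=bbr

# Persist the settings across reboots
cat <<'EOF' > /etc/sysctl.d/90-bbr.conf
net.core.default_qdisc=fq
net.ipv4.tcp_congestion_control=bbr
EOF

# Alternatively, apply fq_codel on a single interface with tightened
# parameters instead of the defaults (example values only)
tc qdisc replace dev eth0 root fq_codel target 500us interval 10ms
```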