[ceph-users] Re: RadosGW public HA traffic - best practices?
An easy setup, if you use PowerDNS, is to establish LUA records on the gateway:
https://doc.powerdns.com/authoritative/lua-records/
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
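For illustration, a minimal sketch of what such a LUA record might look like; the gateway IPs are hypothetical placeholders, and this assumes LUA records are enabled in the authoritative server (`enable-lua-records=yes` in pdns.conf):

```text
; Zone snippet (example IPs): s3.example.com resolves only to gateways
; that currently answer on port 443, one picked at random per query.
s3.example.com. 10 IN LUA A "ifportup(443, {'192.0.2.10', '192.0.2.11', '192.0.2.12'}, {selector='random'})"
```

With a short TTL like the 10 seconds above, a gateway that stops answering on port 443 drops out of resolution within roughly one TTL.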
[ceph-users] Re: RadosGW public HA traffic - best practices?
On Fri, Nov 17, 2023 at 11:09:22AM +0100, Boris Behrens wrote:
> Hi,
> I am looking for some experience on how people make their RGW public.

What level of fine-grained control do you have over DNS for your environment?

If you can use a very short TTL and dynamically update DNS rapidly, maybe a
DNS-based routing solution would be the quickest win for you:
s3.example.com => A/AAAA record that resolves only to the pod(s) that are
online AND least loaded with traffic, with a 10-second TTL on the DNS entry.

Right now those pods might be direct RGW, or L7LB+RGW (HAProxy, Envoy). In
future, you might iterate the design to be an L4LB ingress on those pods, and
have the L7LB+RGW pods doing direct server return.

If a pod goes offline:
0 to TTL seconds: some clients might have to retry on a different IP.
TTL+ seconds: the failed pod is no longer in the DNS records.

A good piece of overall reading is vbernat's load-balancing with Linux page:
https://vincent.bernat.ch/en/blog/2018-multi-tier-loadbalancer
It doesn't have the above dynamic DNS solution directly in front of pods,
because it mostly focuses on what can be done with BGP as a common point. It
does however suggest DNS for regional failover.

-- 
Robin Hugh Johnson
Gentoo Linux: Dev, Infra Lead, Foundation President & Treasurer
E-Mail   : robb...@gentoo.org
GnuPG FP : 11ACBA4F 4778E3F6 E4EDF38E B27B944E 34884E85
GnuPG FP : 7D0B3CEB E9B85B1F 825BCECF EE05E6F6 A48F6136
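The "online AND least loaded" record selection above could be sketched as follows; the pod IPs, the load metric, and the record count are hypothetical placeholders, and the resulting list would then be published to DNS (e.g. via RFC 2136 dynamic updates) with the short TTL:

```python
# Sketch: choose which gateway pods to publish in DNS, following the
# "online AND least loaded" idea. All pods, IPs, and load numbers here
# are invented examples, not values from the thread.
from dataclasses import dataclass

@dataclass
class Pod:
    ip: str
    online: bool
    load_mbps: float  # current traffic, however your monitoring measures it

def pods_to_publish(pods, max_records=2):
    """Return the IPs of the least-loaded online pods, lowest load first."""
    healthy = [p for p in pods if p.online]
    healthy.sort(key=lambda p: p.load_mbps)
    return [p.ip for p in healthy[:max_records]]

pods = [
    Pod("192.0.2.10", True, 800.0),
    Pod("192.0.2.11", True, 120.0),
    Pod("192.0.2.12", False, 0.0),  # failed pod: excluded from DNS
]
print(pods_to_publish(pods))  # ['192.0.2.11', '192.0.2.10']
```

A small loop running this every few seconds and replacing the A/AAAA records gives the failover behavior described: within one TTL of a pod failing, clients stop resolving to it.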
[ceph-users] Re: RadosGW public HA traffic - best practices?
I apologize, I somehow missed that you cannot do BGP. I don't know of a better
solution for you if that is the case. You'll just want to make sure to do
graceful shutdowns of haproxy when necessary for maintenance work, to avoid
severing active connections. At some point, though, timeouts will likely
happen, so the impact won't be zero, but it also won't be catastrophic.

David

On Fri, Nov 17, 2023, at 10:09, David Orman wrote:
> Use BGP/ECMP with something like exabgp on the haproxy servers.
>
> David
>
> On Fri, Nov 17, 2023, at 04:09, Boris Behrens wrote:
>> Hi,
>> I am looking for some experience on how people make their RGW public.
>>
>> Currently we use the following:
>> 3 IP addresses that get distributed via keepalived between three HAProxy
>> instances, which then balance to three RGWs.
>> The caveat is that keepalived is a PITA to get working when distributing
>> a set of IP addresses, and it doesn't scale very well (up and down).
>> The upside is that it is really stable and customers nearly never have an
>> availability problem. And we have 3 IPs that provide some sort of LB. It
>> serves up to 24 Gbit at peak times, when all those backup jobs are
>> running at night.
>>
>> But today I thought: what would happen if I just ditched keepalived and
>> configured those addresses statically on the haproxy hosts?
>> How bad would the impact be for a customer if I reboot one haproxy? Is
>> there an easier, more scalable way if I want to spread the load even
>> further without having an ingress HW LB (which I don't have)?
>>
>> I have a lot of hosts that would be able to run a pod with a haproxy and
>> a RGW as containers together, or even host the RGW alone in a container.
>> It would just need to bridge two networks.
>> But I currently do not have a way to use BGP to have one IP address split
>> between a set of RGW instances.
>>
>> So long story short:
>> What are your easy setups to serve public RGW traffic with some sort of
>> HA and LB (without using a big HW LB that is capable of 100 Gbit of
>> traffic)?
>> And have you experienced problems when you do not shift around IP
>> addresses?
>>
>> Cheers
>>  Boris
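For the graceful-shutdown point above, one way to drain an haproxy instance before a reboot is via its runtime socket; the socket path and the backend/server names below are examples, not taken from the thread:

```text
# haproxy.cfg: expose an admin-level runtime socket (path is an example)
global
    stats socket /var/run/haproxy.sock mode 600 level admin

# Before maintenance, stop sending new connections to a backend server
# and let established ones finish:
#   echo "set server rgw_backend/rgw1 state drain" | socat stdio /var/run/haproxy.sock
#
# To restart the haproxy process itself without dropping listeners,
# a soft-stop handoff to a new process can be used:
#   haproxy -f /etc/haproxy/haproxy.cfg -sf $(cat /var/run/haproxy.pid)
```

Long-lived connections that never complete will still hit a timeout eventually, which matches the "won't be zero, but won't be catastrophic" impact described above.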
[ceph-users] Re: RadosGW public HA traffic - best practices?
Use BGP/ECMP with something like exabgp on the haproxy servers.

David

On Fri, Nov 17, 2023, at 04:09, Boris Behrens wrote:
> Hi,
> I am looking for some experience on how people make their RGW public.
>
> Currently we use the following:
> 3 IP addresses that get distributed via keepalived between three HAProxy
> instances, which then balance to three RGWs.
> The caveat is that keepalived is a PITA to get working when distributing a
> set of IP addresses, and it doesn't scale very well (up and down).
> The upside is that it is really stable and customers nearly never have an
> availability problem. And we have 3 IPs that provide some sort of LB. It
> serves up to 24 Gbit at peak times, when all those backup jobs are running
> at night.
>
> But today I thought: what would happen if I just ditched keepalived and
> configured those addresses statically on the haproxy hosts?
> How bad would the impact be for a customer if I reboot one haproxy? Is
> there an easier, more scalable way if I want to spread the load even
> further without having an ingress HW LB (which I don't have)?
>
> I have a lot of hosts that would be able to run a pod with a haproxy and
> a RGW as containers together, or even host the RGW alone in a container.
> It would just need to bridge two networks.
> But I currently do not have a way to use BGP to have one IP address split
> between a set of RGW instances.
>
> So long story short:
> What are your easy setups to serve public RGW traffic with some sort of
> HA and LB (without using a big HW LB that is capable of 100 Gbit of
> traffic)?
> And have you experienced problems when you do not shift around IP
> addresses?
>
> Cheers
>  Boris
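A sketch of what the exabgp side of this could look like: each haproxy node announces the same shared service IP to an upstream router, which then ECMPs traffic across the identical announcements. All addresses and AS numbers below are invented placeholders:

```text
# exabgp.conf on one haproxy node (example values throughout)
neighbor 192.0.2.1 {                 # upstream router
    router-id 192.0.2.10;            # this haproxy node
    local-address 192.0.2.10;
    local-as 65001;
    peer-as 65000;

    static {
        # Shared service IP announced by every haproxy node;
        # the router load-balances across all announcers via ECMP.
        route 198.51.100.1/32 next-hop self;
    }
}
```

Pairing this with a health check that withdraws the route when the local haproxy stops responding makes a node drop out of the ECMP set automatically when it fails or is taken down for maintenance.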