On Fri, 23 May 2025 at 04:23, Michel Jouvin <michel.jou...@ijclab.in2p3.fr> wrote:
> One good reason to use cephadm and the container based deployment!
I would say that the Reef release has been an interesting ride. You hold off on 18.2.0 and the early releases to be safe from the worst burn-in bugs, and by the time you get your first test cluster onto Reef it's around 18.2.2, so this is when the "fun" starts. First 18.2.3 leaks out, so 18.2.4 needs to get pushed so people who accidentally got the not-good 18.2.3 can upgrade. Around this time, RGW uploads start to fail due to how AWS-compatible clients want their checksumming done, so all the "we run the latest aws-sdk" clients can't upload to S3 anymore. Then comes 18.2.5 with a solution for this, so you upgrade to that, which in turn makes IPv6 configs fail or complain and breaks dmcrypt for OSDs, so you quickly upgrade to the fresh 18.2.6, but that seems to make OSDs on certain clusters crash, so you hold your breath while waiting for 18.2.7. And if you run AlmaLinux 9 (and presumably other RHEL/CentOS derivatives that have worked for the other 18.2.x releases), that upgrade now fails due to changed SSL dependencies, and suddenly you need to flip your whole cluster deployment to containers.

If anyone had the idea that upgrading minors Should Be Fine(tm), then Ceph 18 will be a fresh break from that. No, I did not experience ALL of these issues myself, only some of them, but it's been quite the time to get the popcorn out..

> > Some weeks ago we updated our ceph cluster (installed on AlmaLinux9
> > servers) to Ceph reef v. 18.2.6.
> > The day after, the email talking about the serious regression bug in that
> > release was sent to this mailing list.
> >
> > 18.2.7 can't be installed because it needed openssl-libs v. 3.4 (as
> > discussed in this mailing list), which is in CentOS9 Stream but not yet in
> > AlmaLinux9

--
May the most significant bit of your life be positive.
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io