[ceph-users] MDS daemons don't report any more

2023-09-09 Thread Frank Schilder
Hi all, I'm seeing something weird: 8 out of 12 MDS daemons seem not to report to the cluster any more:
# ceph fs status
con-fs2 - 1625 clients
===
RANK  STATE   MDS      ACTIVITY      DNS  INOS
 0    active  ceph-16  Reqs: 0 /s      0     0
 1    active  ceph-09  Reqs: 128 /s
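For anyone hitting a similar symptom, a few commands help narrow down whether the daemons are actually unhealthy or merely not reporting stats. This is only a sketch against a live cluster: `con-fs2` is the filesystem name from the output above, and `ceph-16` stands in for whichever daemon looks stuck.

```shell
# Overall cluster and per-filesystem MDS view
ceph status
ceph fs status con-fs2

# Per-daemon metadata; a daemon that stopped reporting stats to the
# mgr often still answers queries directly
ceph mds metadata ceph-16
ceph tell mds.ceph-16 perf dump | head
```

If the daemon answers `ceph tell` but shows zeroed counters in `ceph fs status`, the problem is more likely on the reporting/mgr side than in the MDS itself.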

[ceph-users] Re: Separating Mons and OSDs in Ceph Cluster

2023-09-09 Thread Anthony D'Atri
That may be the very one I was thinking of, though the OP seemed to be preserving the IP addresses, so I suspect containerization is in play. > On Sep 9, 2023, at 11:36 AM, Tyler Stachecki wrote: > > On Sat, Sep 9, 2023 at 10:48 AM Anthony D'Atri wrote: >> There was also at one point an

[ceph-users] Re: Separating Mons and OSDs in Ceph Cluster

2023-09-09 Thread Tyler Stachecki
On Sat, Sep 9, 2023 at 10:48 AM Anthony D'Atri wrote: > There was also at one point an issue where clients wouldn’t get a runtime update > of new mons. There are also 8+-year-old unresolved bugs like this in OpenStack Cinder that will bite you if the relocated mons have new IP addresses:

[ceph-users] Re: Separating Mons and OSDs in Ceph Cluster

2023-09-09 Thread Anthony D'Atri
Which Ceph release are you running, and how was it deployed? With some older releases I experienced mons behaving unexpectedly when one of the quorum bounced, so I still like to segregate them for isolation. There was also at one point an issue where clients wouldn’t get a runtime update of new

[ceph-users] Best practices regarding MDS node restart

2023-09-09 Thread Alexander E. Patrakov
Hello, I am interested in the best-practice guidance for the following situation. There is a Ceph cluster with CephFS deployed. There are three servers dedicated to running MDS daemons: one active, one standby-replay, and one standby. There is only a single rank. Sometimes, servers need to be
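A common pattern for draining the active MDS ahead of a host restart looks roughly like the following. This is a sketch assuming a single rank with a standby-replay daemon available, as described above; `myfs` is a placeholder filesystem name.

```shell
# See which daemon currently holds rank 0
ceph fs status myfs

# Fail rank 0: the standby-replay daemon takes over quickly because it
# has been tailing the journal of the active MDS
ceph mds fail myfs:0

# Confirm the new active reaches up:active and clients reconnect
ceph fs status myfs

# The host that held the old active MDS can now be rebooted; its daemon
# rejoins the cluster as a standby afterwards
```

Failing over proactively is usually preferable to rebooting the active host cold, since the standby-replay daemon replays far less journal than a plain standby would.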

[ceph-users] Re: Separating Mons and OSDs in Ceph Cluster

2023-09-09 Thread Eugen Block
Hi, is it an actual requirement to redeploy the MONs? Almost all clusters we support or assist with have MONs and OSDs colocated. MON daemons are quite lightweight services, so if it's not really necessary, I'd leave things as they are. If you really need to move the MONs to different

[ceph-users] Separating Mons and OSDs in Ceph Cluster

2023-09-09 Thread Ramin Najjarbashi
Hi, I am writing to seek guidance and best practices for a maintenance operation in my Ceph cluster. I have an older cluster in which the Monitors (Mons) and Object Storage Devices (OSDs) are currently deployed on the same hosts. I am interested in separating them while ensuring zero downtime and
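One common zero-downtime approach, sketched here under the assumption of a cephadm-managed cluster and with hypothetical hostnames (`mon1`..`mon3`, `osd1`), is to grow the monitor quorum onto the new dedicated hosts first and only then shrink it off the OSD hosts, one monitor at a time:

```shell
# Label the new dedicated hosts and let the orchestrator place mons there
ceph orch host label add mon1 mon
ceph orch host label add mon2 mon
ceph orch host label add mon3 mon
ceph orch apply mon label:mon

# Verify the new monitors have joined the quorum before touching old ones
ceph quorum_status --format json-pretty

# Remove the mon label (and with it the daemon) from an old OSD host,
# then wait for the quorum to settle before repeating on the next host
ceph orch host label rm osd1 mon
```

Keeping an odd number of monitors in quorum at every step avoids losing availability; clients learn the new monitor addresses at runtime, though long-lived clients with hardcoded mon IPs in their config (as mentioned elsewhere in this thread) may need their `mon_host` settings updated.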