On Sun, Jan 4, 2026 at 1:48 AM lejeczek via ceph-users <[email protected]> wrote:
>
> Hi guys.
>
> Is it ok & expected that a standby MDS for a filesystem
> looks like this:
>
> -> $ ceph fs status APKI
> APKI - 4 clients
> ====
> RANK  STATE           MDS             ACTIVITY     DNS    INOS   DIRS   CAPS
>  0    active  APKI.podster2.gsniot  Reqs:    0 /s  8505   5516    143   2757
>        POOL          TYPE     USED  AVAIL
> cephfs.APKI.meta   metadata   671M  74.0G
> cephfs.APKI.data     data    16.1G  74.0G
>      STANDBY MDS
> APKI.podster3.utwwpp
> MONERO.podster3.gwtegj
> MDS version: ceph version 20.1.1 (dd9c546413d50a90668289255a256022ea21f0c0) tentacle (rc - RelWithDebInfo)
>
> What puzzles me in the output above is: MONERO.podster3.gwtegj
> I have only two file systems - this is a small lab with
> ceph - and both were created with the "default" ceph cmd,
> meaning that the pools were created automatically by the cmd.
> Is there no special and exclusive relationship between a file
> system & its MDS? MONERO.podster3 was created when the file
> system/volume MONERO was created, yet it is available to a
> different file system?
Yes, this is fine. The monitors can generally assign a standby MDS to any filesystem, and if you look at the overall "ceph fs status" view you will see the standby daemons listed separately from the active FSes for that reason.

> (what also boggles my mind is the "size" the two file systems
> report, like they share something - but maybe that is not
> relevant, unless I have something seriously misconfigured
> relating to what "status" showed)

The size statistics you are seeing here are drawn from RADOS reporting, so unless you have the FSes configured to use different OSDs (via different CRUSH rules for the relevant pools) or have set pool quotas, they are going to report the same available capacity.
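If you ever wanted the two file systems to report independent capacities, a pool quota or a dedicated CRUSH rule on the data pools would do it. A rough sketch, untested here - pool and rule names are taken from your output, and the "ssd" device class is just an assumption for illustration:

  # Cap the APKI data pool at ~50 GiB (quota is given in bytes):
  ceph osd pool set-quota cephfs.APKI.data max_bytes 53687091200

  # Or carve out distinct OSDs: create a CRUSH rule (assuming an
  # "ssd" device class exists) and point the data pool at it:
  ceph osd crush rule create-replicated apki-rule default host ssd
  ceph osd pool set cephfs.APKI.data crush_rule apki-rule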
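And back on the standby question: if you do want the monitors to prefer a particular daemon for a particular file system, there is an affinity setting, mds_join_fs, for exactly that. Something along these lines (again using the daemon name from your output) should work, though note it is a preference rather than a hard pin - a failover can still grab the daemon if nothing else is available:

  # Tell the monitors this daemon should serve MONERO when possible:
  ceph config set mds.MONERO.podster3.gwtegj mds_join_fs MONERO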
-Greg

>
> many thanks, L.
_______________________________________________
ceph-users mailing list -- [email protected]
To unsubscribe send an email to [email protected]