Hi,

Just restart your manager daemons using `ceph orch restart mgr` and check the status again. AFAIK Quincy and earlier releases had an issue where the mgr would show stale data, and a simple daemon restart fixes it.
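A minimal sequence, assuming the cluster is managed by cephadm (which your `ceph orch` output suggests), would be:

    ceph orch restart mgr     # restart all mgr daemons managed by cephadm
    ceph mgr stat             # confirm which mgr is active again
    ceph -s                   # the health/usage summary should now be refreshed
    ceph health detail        # lists any warnings the refreshed mgr reports

Alternatively, `ceph mgr fail` just forces a failover to the standby mgr without restarting the containers.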
Additionally, it is strongly recommended to run a monitoring system for the operating system as well as for Ceph. The Ceph dashboard should also show you problems with Ceph, and your OS monitoring should catch any abnormal disk usage.
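Your `ceph orch ls` output below shows node-exporter, Prometheus and Alertmanager already deployed, so host filesystem metrics are being scraped; an alert rule on node_filesystem_avail_bytes would have flagged the full root filesystem. As a quick ad-hoc check in the meantime, something like this sketch works (hostnames taken from your output; it assumes SSH access from the admin node to all hosts):

    # one-off check of root filesystem usage on every node
    for host in ceph-node001 ceph-node002 ceph-node003 ceph-node004 ceph-node005 ceph-node006; do
        echo "== $host =="
        ssh "$host" df -h /
    done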
Thanks

________________________________
From: 苏察哈尔灿 via ceph-users <[email protected]>
Sent: Friday, February 13, 2026 7:55:51 AM
To: ceph-users <[email protected]>
Subject: [ceph-users] Why did this situation occur?

Please help me. I have a Ceph cluster with six nodes. Yesterday, one of the nodes reported insufficient disk space (image01.png), but "ceph -s" kept showing the cluster as healthy (image02.png). Previously, when a node ran low on disk space, "ceph -s" would report the warning and the affected node. Why is this happening now?

df -h:

Filesystem                         Type     Size  Used Avail Use% Mounted on
tmpfs                              tmpfs    3.2G  3.6M  3.1G   1% /run
/dev/mapper/ubuntu--vg-ubuntu--lv  ext4      98G   93G  863M 100% /
tmpfs                              tmpfs     16G     0   16G   0% /dev/shm
tmpfs                              tmpfs    5.0M     0  5.0M   0% /run/lock
/dev/sdj2                          ext4     2.0G  253M  1.6G  14% /boot
/dev/sdj1                          vfat     1.1G  6.1M  1.1G   1% /boot/efi
tmpfs                              tmpfs    3.2G  4.0K  3.2G   1% /run/user/0
overlay                            overlay   98G   93G  863M 100% /var/lib/docker/overlay2/b5a55615f548ae182d48dee0eced64eb2ed43c37d696d951618e66b01b700/merged
overlay                            overlay   98G   93G  863M 100% /var/lib/docker/overlay2/3c6ecfc4d8a3561835504c3aaee7f69ea7a117c26d3ec59ea1ceabe5994fd57/merged
overlay                            overlay   98G   93G  863M 100% /var/lib/docker/overlay2/a504698e7cc386ebefd506402638c2f8af83ad9a35a8d0c49a7c287d8d26c24/merged
overlay                            overlay   98G   93G  863M 100% /var/lib/docker/overlay2/380d370bc8313724d95b8fc7c70d7283c69439b014ce82936b245ee6b810c/merged
overlay                            overlay   98G   93G  863M 100% /var/lib/docker/overlay2/71ba8a75e186df7542a2fa9667580e4b2b0db2fb78eacfea3711abe6c9d22ad/merged
overlay                            overlay   98G   93G  863M 100% /var/lib/docker/overlay2/8daea1eaddbc8c3b0f7afa35b58524ec3adcfcf48a304f988e5adbec2c4ffa/merged
overlay                            overlay   98G   93G  863M 100% /var/lib/docker/overlay2/04f8e8635d8bc88c953b00815bc3234f26499f6eb066ae998121bf0dd4a03b4/merged
overlay                            overlay   98G   93G  863M 100% /var/lib/docker/overlay2/bafbe7cbad3f4ab94a92eb5f098591ac316de90def61c2ca1b69559d96b62/merged
overlay                            overlay   98G   93G  863M 100% /var/lib/docker/overlay2/4a0aa4bad8692e397ca98ed91a042e22caeb7509d3825f597bb37e9fb98a1cca/merged
overlay                            overlay   98G   93G  863M 100% /var/lib/docker/overlay2/9c2e8075696b5edb2a38e4f8d7e866ada67b5d238ff73f8ea7b0f0669cf06660a/merged
overlay                            overlay   98G   93G  863M 100% /var/lib/docker/overlay2/8251b5ef9df5394947e5d8018a163822aac69ab1fe378cfa7359cb42d1788db/merged
overlay                            overlay   98G   93G  863M 100% /var/lib/docker/overlay2/0762e8c175c6b568042a8e8a12b90bb46dc76b9bc41ccc972ee56c6670e2f5/merged
overlay                            overlay   98G   93G  863M 100% /var/lib/docker/overlay2/c7f232a9a0abe297fbcfe58f5242754078a669b826726a4ed7146d1e5619cefc/merged
overlay                            overlay   98G   93G  863M 100% /var/lib/docker/overlay2/8186f868a3e87299a360a60f780cef452556fe00ace340433f11e7f922c12124/merged

ceph -s:

  cluster:
    id:     f68c4e2e-e880-11ef-b13e-512ad216ec40
    health: HEALTH_OK

  services:
    mon: 5 daemons, quorum ceph-node001,ceph-node006,ceph-node002,ceph-node003,ceph-node005 (age 44h)
    mgr: ceph-node001.zcyayo(active, since 12h), standbys: ceph-node002.wvpazs
    mds: 1/1 daemons up, 1 standby
    osd: 60 osds: 60 up (since 5w), 60 in (since 7M)

  data:
    volumes: 1/1 healthy
    pools:   6 pools, 2625 pgs
    objects: 42.59M objects, 160 TiB
    usage:   477 TiB used, 396 TiB / 873 TiB avail
    pgs:     2591 active+clean
             28   active+clean+scrubbing+deep
             6    active+clean+scrubbing

ceph version 17.2.7 (b12291d11b049b2f35a32e0de30d70e9a4c060d2) quincy (stable)

ceph df:

CLASS  SIZE     AVAIL    USED     RAW USED  %RAW USED
hdd    873 TiB  396 TiB  477 TiB  477 TiB       54.68
TOTAL  873 TiB  396 TiB  477 TiB  477 TiB       54.68

POOL             ID  PGS   STORED   OBJECTS  USED     %USED  MAX AVAIL
.mgr              1     1  978 MiB      161  2.9 GiB      0    108 TiB
vmware-server     2  1024  155 TiB    41.5M  464 TiB  58.79    108 TiB
cephfs-metadata   4    32  195 MiB       69  584 MiB      0    108 TiB
cephfs-data       5   512  1.9 GiB      490  5.7 GiB      0    108 TiB
.nfs              6    32      0 B        0      0 B      0    108 TiB
backup-pool       8  1024  4.1 TiB     1.0M   12 TiB   3.63    108 TiB

ceph orch ls:

NAME                      PORTS        RUNNING  REFRESHED  AGE  PLACEMENT
alertmanager              ?:9093,9094  1/1      12s ago    12M  count:1
crash                     -            6/6      6m ago     12M  *
grafana                   ?:3000       1/1      12s ago    12M  count:1
mds.cephfs-vmware-server  -            2/2      4m ago     12M  count:2
mgr                       -            2/2      4m ago     12M  count:2
mon                       -            5/5      6m ago     12M  count:5
node-exporter             ?:9100       6/6      6m ago     12M  *
osd                       -            60       -          -    <unmanaged>
prometheus                ?:9095       1/1      12s ago    12M  count:1

ceph orch ps:

NAME                                          HOST          PORTS        STATUS         REFRESHED  AGE  MEM USE  MEM LIM  VERSION  IMAGE ID       CONTAINER ID
alertmanager.ceph-node001                     ceph-node001  *:9093,9094  running (12M)  10m ago    12M  32.2K    -        0.25.0   c8568f9114cd2  0bda2adb384
crash.ceph-node001                            ceph-node001  -            running (12M)  10m ago    12M  3479k    -        17.2.8   259b35566514   6bdcadd825e5
crash.ceph-node002                            ceph-node002  -            running (5M)   3m ago     12M  5227k    -        17.2.8   259b35566514   4f315378f8fb
crash.ceph-node003                            ceph-node003  -            running (5M)   3m ago     12M  4328k    -        17.2.8   259b35566514   c89f20d5244
crash.ceph-node004                            ceph-node004  -            running (7M)   5m ago     7M   10.9M    -        17.2.8   259b35566514   e3f0a0a49d6a
crash.ceph-node005                            ceph-node005  -            running (5M)   3m ago     12M  4551k    -        17.2.8   259b35566514   a3028d149a9c
crash.ceph-node006                            ceph-node006  -            running (12M)  5m ago     12M  4219k    -        17.2.8   259b35566514   5458ecc46771
grafana.ceph-node001                          ceph-node001  *:3000       running (12M)  10m ago    12M  133M     -        9.4.7    954c88fa6188   5536bc4858e4
mds.cephfs-vmware-server.ceph-node002.gryudx  ceph-node002  -            running (5M)   3m ago     12M  15.3M    -        17.2.8   259b35566514   b7bd424d9996
mds.cephfs-vmware-server.ceph-node005.qyxswr  ceph-node005  -            running (5M)   3m ago     12M  25.8M    -        17.2.8   259b35566514   3be6f2e7c7ce
mgr.ceph-node001.zcyayo                       ceph-node001  *:9283       running (12M)  10m ago    12M  891M     -        17.2.8   259b35566514   daa1ea0d1168
mgr.ceph-node002.uvpazs                       ceph-node002  *:8443,9283  running (5M)   3m ago     12M  57.3M    -        17.2.8   259b35566514   81a8cd1fe2c3
mon.ceph-node001                              ceph-node001  -            running (7M)   10m ago    12M  477M     2048M    17.2.8   259b35566514   3e5c22bfedd4
mon.ceph-node002                              ceph-node002  -            running (5M)   10m ago    12M  398M     2048M    17.2.8   259b35566514   c15e5b4c21c8
mon.ceph-node003                              ceph-node003  -            running (12M)  5m ago     12M  290M     2048M    17.2.8   259b35566514   72facfdf82a8
mon.ceph-node005                              ceph-node005  -            running (7M)   3m ago     7M   832M     2048M    17.2.8   259b35566514   3f94eb72a5e3
mon.ceph-node006                              ceph-node006  -            running (12M)  5m ago     12M  285M     2048M    17.2.8   259b35566514   97a4a1d76116
node-exporter.ceph-node001                    ceph-node001  *:9100       running (12M)  10m ago    12M  17.3M    -        1.5.0    0da6a335fe13   81b476bbe6fc
node-exporter.ceph-node002                    ceph-node002  *:9100       running (5M)   3m ago     12M  20.2M    -        1.5.0    0da6a335fe13   ba20a215dd70
node-exporter.ceph-node003                    ceph-node003  *:9100       running (5M)   3m ago     12M  17.4M    -        1.5.0    0da6a335fe13   ef75144ce2b3
node-exporter.ceph-node004                    ceph-node004  *:9100       running (7M)   5m ago     7M   20.7M    -        1.5.0    0da6a335fe13   a04852da78e8
node-exporter.ceph-node005                    ceph-node005  *:9100       running (12M)  3m ago     12M  17.1M    -        1.5.0    0da6a335fe13   d07533213e5c
node-exporter.ceph-node006                    ceph-node006  *:9100       running (12M)  5m ago     12M  16.8M    -        1.5.0    0da6a335fe13   23f1ceec45a
osd.0                                         ceph-node001  -            running (12M)  10m ago    12M  1885M    1287M    17.2.8   259b35566514   9b33a19ff54c
osd.1                                         ceph-node001  -            running (12M)  10m ago    12M  1711M    1287M    17.2.8   259b35566514   49eea8cdb06d
osd.2                                         ceph-node001  -            running (12M)  10m ago    12M  2118M    1287M    17.2.8   259b35566514   25cdd9efdd95
osd.3                                         ceph-node001  -            running (12M)  10m ago    12M  1639M    1287M    17.2.8   259b35566514   ed50fbe841f
osd.4                                         ceph-node001  -            running (12M)  10m ago    12M  1828M    1287M    17.2.8   259b35566514   56109a380a52
osd.5                                         ceph-node001  -            running (8w)   10m ago    12M  2335M    1287M    17.2.8   259b35566514   b1f65b1df237
osd.6                                         ceph-node001  -            running (12M)  10m ago    12M  2088M    1287M    17.2.8   259b35566514   ca7d1a2835ab
osd.7                                         ceph-node001  -            running (12M)  10m ago    12M  2055M    1287M    17.2.8   259b35566514   fafd75e72d5f
osd.8                                         ceph-node001  -            running (12M)  10m ago    12M  2033M    1287M    17.2.8   259b35566514   1cdc6b774429
osd.9                                         ceph-node002  -            running (5M)   3m ago     12M  2307M    1185M    17.2.8   259b35566514   117c76b6d10
osd.10                                        ceph-node002  -            running (5M)   3m ago     12M  1683M    1185M    17.2.8   259b35566514   d1ce2269d143
osd.11                                        ceph-node002  -            running (5M)   3m ago     12M  2008M    1185M    17.2.8   259b35566514   b69fbf088ee
osd.12                                        ceph-node002  -            running (5M)   3m ago     12M  2442M    1185M    17.2.8   259b35566514   df71fa4dd602
osd.13                                        ceph-node002  -            running (5M)   3m ago     12M  1803M    1185M    17.2.8   259b35566514   86d54722e9ab
osd.14                                        ceph-node002  -            running (5M)   3m ago     12M  1999M    1185M    17.2.8   259b35566514   ab1f5734cf6f

My cluster is running version 17.2.7 (Quincy). The basic information about the Ceph cluster is in the other attached pictures. Thank you!

_______________________________________________
ceph-users mailing list -- [email protected]
To unsubscribe send an email to [email protected]
