Dear all,

I am currently in the process of adding SSDs for DB/WAL to our "converged" 3-node Ceph cluster. After doing so on two of the three nodes, the PVE Ceph dashboard now reports "5 pool(s) nearfull":

     HEALTH_WARN: 5 pool(s) nearfull
     pool 'pve-pool1' is nearfull
     pool 'cephfs_data' is nearfull
     pool 'cephfs_metadata' is nearfull
     pool '.mgr' is nearfull
     pool '.rgw.root' is nearfull

(see also the attached Ceph_dashboard_nearfull_warning.jpg). Overall, the storage is 73% full ("40.87 TiB of 55.67 TiB").
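
We have not run any CLI diagnostics yet (see the note below), but from what I read in the Ceph documentation, the following commands should show what is actually behind the warning. If I understand correctly, the nearfull state is driven by individual OSDs crossing the nearfull ratio, which would also explain why even empty pools get flagged:

     # list the concrete warnings behind HEALTH_WARN
     ceph health detail

     # per-OSD utilization; my understanding is that the warning
     # fires once any single OSD crosses the nearfull ratio
     ceph osd df tree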

However, when looking at the pool overview in PVE, the pools don't seem to be very full at all; some of them are even reported as completely empty (see the attached Ceph_pool_overview.jpg).
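
If it helps, I can also post the per-pool numbers from the CLI. As far as I can tell, the following would show the actual per-pool usage together with the MAX AVAIL value, which (if I understand the docs correctly) shrinks for every pool as the underlying OSDs fill up:

     # per-pool usage and MAX AVAIL, to cross-check the
     # PVE pool overview
     ceph df detail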

Please note: all Ceph operations so far have been performed via the PVE UI, as we are not very experienced with the Ceph CLI.

We are running PVE 8.2.3 with Ceph 17.2.7.

Is this inconsistency normal, or does it indicate a problem? And if the latter, how can it be fixed?
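
For completeness, in case the thresholds themselves matter: according to the Ceph documentation, the defaults are 0.85 / 0.90 / 0.95, and this should print the values currently in effect on our cluster:

     # currently configured fullness thresholds
     # (defaults per the Ceph docs: nearfull_ratio 0.85,
     #  backfillfull_ratio 0.90, full_ratio 0.95)
     ceph osd dump | grep ratio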

Cheers, Frank
_______________________________________________
pve-user mailing list
pve-user@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-user
