Hi,

We have a 5-node cluster with 335 OSDs running Kraken 11.2.0 with BlueStore,
and an EC 4+1 pool.
There was a disk failure on one of the OSDs and the disk was replaced.
After the replacement, we noticed a drop of ~30TB in the MAX_AVAIL value
shown in the pool storage details of 'ceph df'.
Even though the disk was replaced and the OSD is now running properly, the
value has not recovered to its original level. The replaced disk is only
4TB, so a ~30TB drop in MAX_AVAIL doesn't seem right. Has anyone seen a
similar issue before?
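For reference, my (possibly incomplete) understanding is that MAX_AVAIL is
not a simple sum of free space: it is estimated from the OSD that would fill
up first relative to its share of the CRUSH weight, then scaled by k/(k+m)
for an EC pool, so a change on a single disk gets multiplied across the
whole pool. A rough illustration of that arithmetic (the numbers below are
made up, not taken from our cluster, and the helper is just a sketch):

# Rough sketch of how I understand the MAX_AVAIL estimate works; purely
# illustrative, with invented numbers and a hypothetical helper function.
def max_avail_tb(osd_free_tb, osd_weight_tb, k, m):
    """Approximate pool MAX_AVAIL (TB) from per-OSD free space and CRUSH weights."""
    total_weight = sum(osd_weight_tb)
    # The OSD that would fill first (lowest free space relative to its
    # weight share) caps the projected raw capacity of the whole pool.
    raw_avail = min(
        free / (weight / total_weight)
        for free, weight in zip(osd_free_tb, osd_weight_tb)
    )
    # For an EC k+m pool, only k/(k+m) of the raw space holds usable data.
    return raw_avail * k / (k + m)

# 335 OSDs of 4TB each, ~1.5TB free on every OSD.
free = [1.5] * 335
weight = [4.0] * 335
print(max_avail_tb(free, weight, k=4, m=1))  # ~402 TB

# If the fullest OSD ends up with only ~1.4TB free after the backfill,
# the pool-wide figure drops by roughly 27TB, even though only a single
# 4TB disk was touched.
free[0] = 1.4
print(max_avail_tb(free, weight, k=4, m=1))  # ~375 TB

If that model is roughly right, the drop might be explained by one of the
OSDs sitting fuller (or carrying a different CRUSH weight) after the
backfill, rather than by the 4TB of raw capacity itself, but I'd like to
confirm that.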

Thanks.
