So, just a little update... after replacing the original failed drive things
seem to be progressing a little better, but I noticed something else odd.
Looking at 'rados df', the system thinks the data pool holds 32 TB of data,
yet this is only an 18 TB raw system.
pool name cat
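
(For reference, here's roughly how I'm cross-checking those numbers; 'data'
is just the default data pool on my cluster, and I'm assuming the stock 2x
replication that bobtail sets up by default:)

    rados df                          # per-pool usage as the cluster reports it
    ceph osd dump | grep 'rep size'   # replication factor ("rep size") for each pool
    ceph -s                           # overall raw used / available

With 2x replication, 18 TB raw should top out somewhere around 9 TB of actual
data, so if I'm reading it right a 32 TB figure for a single pool can't be
real.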
TL;DR
Bobtail Ceph cluster is unable to finish rebalancing after a drive failure;
usage keeps growing even with no clients connected.
I've been running a test bobtail cluster for a couple of months and it's
been working great. Last week I had a drive die and the cluster started
rebalancing; during that time another OSD crashed.