I don't care about getting that space back.
I just want to know whether it's expected or not,
because I ran several rados bench runs with the |--no-cleanup| flag
and may have left something behind.
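If bench objects had been left behind, they could be listed and removed per pool. A sketch, assuming a pool named `rbd` (substitute your own pool name; since `ceph -s` below reports 0 pools and 0 objects, nothing actually remains here):

```shell
# Objects written by "rados bench" are named
# benchmark_data_<host>_<pid>_object<N> by default; list any leftovers:
rados -p rbd ls | grep benchmark_data

# Remove them; "rados cleanup" deletes benchmark objects matching the
# default benchmark_data prefix.
rados -p rbd cleanup
```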
On 02/03/2018 11:35, Janne Johansson wrote:
2018-03-02 11:21 GMT+01:00 Max Cuttins <m...@phoenixweb.it
I deleted everything from the cluster after some tests with RBD.
Now I see that there is still something in use:
pools: 0 pools, 0 pgs
objects: 0 objects, 0 bytes
usage: *9510 MB used*, 8038 GB / 8048 GB avail
Is this the overhead of the BlueStore journal/WAL?
Or is something wrong and this should be zero?
People setting up new clusters see this too; there are overhead items
and other things that eat some space, so it will never be zero. In your
case it seems to be close to 0.1%, so just live with it and move on to
using your 8 TB for what you really needed it for.
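For reference, the ratio implied by the status line above works out as follows (illustrative arithmetic only):

```python
# Overhead fraction from the "ceph -s" output above:
# 9510 MB used out of 8048 GB total capacity.
used_mb = 9510
total_gb = 8048

overhead = used_mb / (total_gb * 1024)  # both sides in MB
print(f"{overhead:.2%}")  # roughly 0.12%, i.e. "close to 0.1%"
```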
In almost no case would I think "if only I could get that 0.1% back,
then my cluster would be great".
Storage clusters should probably keep something like a 10% "admin"
margin, so if Ceph warns and whines when OSDs are 85% full, then at 75%
you should already be writing orders for more disks and/or more
hardware.
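The 85% figure matches Ceph's default nearfull warning threshold. A sketch of inspecting and adjusting it (Luminous-era commands; verify against your release):

```shell
# Show the current ratios (defaults: full 0.95, nearfull 0.85)
ceph osd dump | grep ratio

# Lower the nearfull threshold if you want to be warned earlier
ceph osd set-nearfull-ratio 0.80
```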
At that point, regardless of where the "miscalculation" is, or why
Ceph manages to use 9500 MB where you think it should be zero, it would
be all but impossible to do anything meaningful with that 0.1% even if
you could get it back with some magic command.
May the most significant bit of your life be positive.
ceph-users mailing list