How can I analyze this?

On 02/03/2018 12:18, Gonzalo Aguilar Delgado wrote:

Hi Max,

No, that's not normal: 9 GB for an empty cluster. Maybe you reserved some space, or another service is taking it. But it seems way too much to me.

On 02/03/18 at 12:09, Max Cuttins wrote:

I don't care about getting that space back.
I just want to know whether it's expected or not,
because I ran several rados bench tests with the |--no-cleanup| flag,

and maybe I left something behind.
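For what it's worth, `rados bench` run with `--no-cleanup` leaves its benchmark objects in the target pool, and they can be listed and removed afterwards. A hedged sketch against a live cluster (the pool name `testpool` is a placeholder, not from the thread):

```shell
# List objects left behind in the pool, e.g. by a previous
# "rados bench ... --no-cleanup" run ("testpool" is a placeholder).
rados -p testpool ls

# Remove the benchmark objects that rados bench wrote;
# "cleanup" deletes objects matching the benchmark prefix.
rados -p testpool cleanup
```

Of course, once the pools themselves are deleted (as in the status output below), any bench objects are gone with them, so this would not explain the remaining usage.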

On 02/03/2018 11:35, Janne Johansson wrote:
2018-03-02 11:21 GMT+01:00 Max Cuttins:

    Hi everybody,

    I deleted everything from the cluster after some tests with RBD.
    Now I see that there is something still in use:

            pools:   0 pools, 0 pgs
            objects: 0 objects, 0 bytes
            usage: *9510 MB used*, 8038 GB / 8048 GB avail

    Is this the overhead of the BlueStore journal/WAL?
    Or is there something wrong, and should this be zero?

People setting up new clusters see this too; there are overhead items and other bits that eat some space, so it would never be zero. In your case it is close to 0.1%, so just live with it and move
on to using your 8 TB for what you really needed it for.
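As a sanity check, the reported usage really is about 0.1% of the raw capacity; a quick back-of-the-envelope calculation with the numbers from the status output above:

```shell
# 9510 MB used out of 8048 GB raw capacity (from the status output)
used_mb=9510
total_gb=8048

# overhead fraction = used / total, with GB converted to MB
awk -v u="$used_mb" -v t="$total_gb" \
    'BEGIN { printf "overhead: %.3f%%\n", 100 * u / (t * 1024) }'
# prints "overhead: 0.115%"
```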

In almost no case will I think "if only I could get those 0.1% back, then my cluster would be great."

Storage clusters should probably keep something like a 10% "admin" margin: if Ceph warns and whines at OSDs being 85% full, then at 75% you should already be writing orders for more disks and/or more
storage nodes.
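The 85% warning threshold mentioned above is the cluster's nearfull ratio. On a Luminous-era cluster the configured ratios are carried in the OSD map and can be inspected with a one-liner (a sketch, assuming defaults have not been changed):

```shell
# Show the configured full/backfillfull/nearfull ratios from the OSD map
# (defaults: full_ratio 0.95, backfillfull_ratio 0.9, nearfull_ratio 0.85).
ceph osd dump | grep ratio
```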

At that point, regardless of where the "miscalculation" is, or where Ceph manages to use those 9500 MB you think should be zero, it will be all but impossible to do anything meaningful with the 0.1%
you might get back with some magic command.

May the most significant bit of your life be positive.

ceph-users mailing list
