On Thu, 16 Nov 2023 at 00:30, Giuliano Maggi
<giuliano.maggi.olm...@gmail.com> wrote:
> Hi,
>
> I'd like to remove some "spurious" data:
>
> root@nerffs03:/# ceph df
> --- RAW STORAGE ---
> CLASS     SIZE    AVAIL    USED  RAW USED  %RAW USED
> hdd    1.0 PiB  1.0 PiB  47 GiB    47 GiB          0
> TOTAL  1.0 PiB  1.0 PiB  47 GiB    47 GiB          0
> The 47 GiB could be from previous pools/filesystems that I used for testing.

BlueStore counts DB/WAL space as "storage", and those areas show a lot
of space "used" due to preallocation and the like, so the cluster
looks more used than it actually is, at least if you only count
--data-drive space. You can see the same thing on a freshly created
cluster that has not had a single byte written to it yet: each OSD
starts off with some data USED, because the WAL/DB consumes space
before the first written byte ever hits the disks.
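
If you want to confirm that the 47 GiB really is WAL/DB overhead
rather than leftovers from your old test pools, a couple of read-only
commands are enough (just a sketch, not output from your cluster;
exact columns vary a bit by release):

  # Per-OSD breakdown: on recent releases the META column is the
  # BlueFS/RocksDB allocation, while DATA is actual object data.
  ceph osd df

  # Per-pool view, to rule out an old pool still holding objects.
  ceph df detail

If DATA is near zero on every OSD and META accounts for the 47 GiB,
there is nothing to remove; it is just the preallocation described
above.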

-- 
May the most significant bit of your life be positive.