osd df is misleading when using external DB devices; they are always
counted as 100% full there.
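
To see how much of the DB device is actually in use, the bluefs perf
counters are more useful, e.g. (assuming jq is available; counter names
can differ slightly between releases):

    ceph daemon osd.8 perf dump | jq .bluefs
    # db_total_bytes / db_used_bytes: real usage of the external DB device
    # slow_used_bytes: DB data that has spilled over to the primary device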


Paul

-- 
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90


On Wed, May 13, 2020 at 11:40 AM Denis Krienbühl <de...@href.ch> wrote:

> Hi
>
> On one of our Ceph clusters, some OSDs have been marked as full. Since
> this is a staging cluster that does not have much data on it, this is
> strange.
>
> Looking at the full OSDs through “ceph osd df”, I figured out that the
> space is mostly used by metadata:
>
>     SIZE: 122 GiB
>     USE: 118 GiB
>     DATA: 2.4 GiB
>     META: 116 GiB
>
> We run Mimic, and for the affected OSDs we use a DB device (NVMe) in
> addition to the primary device (HDD).
>
> In the logs we see the following errors:
>
>     2020-05-12 17:10:26.089 7f183f604700  1 bluefs _allocate failed to
> allocate 0x400000 on bdev 1, free 0x0; fallback to bdev 2
>     2020-05-12 17:10:27.113 7f183f604700  1
> bluestore(/var/lib/ceph/osd/ceph-8) _balance_bluefs_freespace gifting
> 0x180a000000~400000 to bluefs
>     2020-05-12 17:10:27.153 7f183f604700  1 bluefs add_block_extent bdev 2
> 0x180a000000~400000
>
> We assume it is an issue with RocksDB, as the following call quickly
> fixes the problem:
>
>     ceph daemon osd.8 compact
>
> The question is, why is this happening? I would think that “compact” is
> something that runs automatically from time to time, but I’m not sure.
>
> Is it on us to run this regularly?
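>
> (If we do, we would probably just loop over the OSD admin sockets on
> each host, roughly like this untested sketch, assuming the default
> socket paths under /var/run/ceph:)
>
>     for sock in /var/run/ceph/ceph-osd.*.asok; do
>         ceph daemon "$sock" compact   # same as "ceph daemon osd.N compact"
>     done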
>
> Any pointers are welcome. I’m quite new to Ceph :)
>
> Cheers,
>
> Denis