Hi Florent,

The BLUESTORE_FREE_FRAGMENTATION alert is triggered at a threshold of 0.8.
From my observations, fragmentation between 0.8 and 0.9 is common and safe,
while performance issues might occur above 0.9 [1]. Therefore, you should
be safe raising the bluestore_warn_on_free_fragmentation value to 0.85 or
even 0.9.
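
For example, to raise the threshold to 0.85 for all OSDs at runtime:

$ ceph config set osd bluestore_warn_on_free_fragmentation 0.85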

$ ceph config help bluestore_warn_on_free_fragmentation
bluestore_warn_on_free_fragmentation - Level at which disk free
fragmentation causes health warning. Set "1" to disable. This is same value
as admin command "bluestore allocator score block".
  (float, basic)
  Default: 0.800000
  Can update at runtime: true
  See also: [bluestore_fragmentation_check_period]
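
As the help text notes, this is the same score returned by the
"bluestore allocator score block" admin socket command, so you can check
an OSD's current value on demand (osd.0 here is just an example; on a
cephadm deployment you may need to run this from "cephadm shell" on the
host where the OSD runs):

$ ceph daemon osd.0 bluestore allocator score block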

Currently, the only way to fix excessive fragmentation is to redeploy your
OSDs. The default warning threshold was set to 0.8 to give you enough time
to perform this task when necessary. Note that work is in progress to allow
OSDs to be defragmented during scrubbing [2], but that feature isn't
available yet.
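
With cephadm, one way to redeploy an OSD (sketched here for osd.0, and
assuming your OSDs are managed by a service spec; the --zap flag may not
exist on older releases) is to remove it, zap its device, and let the
orchestrator recreate it:

$ ceph orch osd rm 0 --replace --zap

Do this one OSD at a time and wait for the cluster to get back to
HEALTH_OK before moving on to the next one.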

Best regards,
Frédéric.

[1]
https://docs.ceph.com/en/latest/rados/operations/health-checks/#bluestore-fragmentation
[2] https://github.com/ceph/ceph/pull/57631

--
Frédéric Nass
Ceph Ambassador France | Senior Ceph Engineer @ CLYSO
Try our Ceph Analyzer -- https://analyzer.clyso.com/
https://clyso.com | frederic.n...@clyso.com


On Sat, Aug 16, 2025 at 09:39, Florent Carli <fca...@gmail.com> wrote:

> Hello,
>
> I’m running Ceph 19.2.3 with cephadm on a 3-node cluster, and I
> recently started seeing the following warning related to free
> fragmentation:
>
> HEALTH_WARN 3 OSD(s)
> [WRN] BLUESTORE_FREE_FRAGMENTATION: 3 OSD(s)
>      osd.0 0.802713
>      osd.1 0.803276
>      osd.2 0.803870
>
> I haven’t been able to find clear guidance on how to address this issue.
> Are there any recommended approaches or best practices for reducing
> BlueStore free space fragmentation?
>
> Thanks in advance for your help!
>
> Florent.