> 
>> I'm about at the point of ignoring the "rbd du" USED number.  It doesn't 
>> seem terribly meaningful.
> 

If all of your clients are the equivalent of Luminous or later, do you have the 
fast-diff and object-map image features enabled, both as the default for new 
images and on existing volumes?  They dramatically speed up `rbd du`, which 
otherwise has to scan every object.

Also, ISTR that freed capacity is reclaimed asynchronously and can take some 
time to show up.  Maybe run the du again after a while and see if it’s changed?

> Sparsify really tries to find 0-filled objects that can be removed. It's best 
> to stick with
> fstrim (or equivalent) in the guest.


Agreed.  Sparsify is best reserved for volumes that are currently unattached.
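A sketch of the in-guest approach, assuming a Linux guest with a mounted filesystem; note that discards only reach the RBD image if the virtual disk passes them through (e.g. virtio-scsi with discard=unmap in QEMU/libvirt):

```shell
# One-off trim; -v reports how much was discarded
fstrim -v /

# Or enable the periodic timer most distros ship,
# rather than mounting with the continuous 'discard' option
systemctl enable --now fstrim.timer
```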

> An OSD has a 4K (SSD) / 64K (HDD) threshold (bluestore_min_alloc_size)

This used to default to 64 KiB; in the Octopus/Pacific era the HDD and SSD 
defaults were split and varied a bit before both settling on 4 KiB.  In recent 
releases both are 4 KiB, which reduces space amplification, e.g. for tiny RGW 
objects.  Note that despite some references on the net, you cannot effectively 
retrofit this value on an existing OSD: setting it at runtime will appear to 
succeed, but the value is baked in at OSD creation and in practice cannot be 
changed without redeploying.  From Quincy on, I think it was, the baked-in 
value is reported by `ceph osd metadata`.  Some outdated references suggest 
tuning the value per workload; I suggest not doing that unless you are using 
coarse-IU SSDs, e.g. P4326, P5316, P5336, 6550, et al.
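To check what a given OSD was actually built with (osd.0 here is an example ID, and the exact metadata key name may vary slightly by release):

```shell
# Inspect the baked-in allocation size for one OSD
ceph osd metadata 0 | grep min_alloc_size
```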

I suggest setting bluestore_use_optimal_io_size_for_min_alloc_size to true 
before creating OSDs.  I have never seen a case where it does the wrong thing.
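Since this only takes effect at OSD creation, the option has to be in place before the OSDs are deployed; with the centralized config database that is something like:

```shell
# Set before deploying OSDs; existing OSDs are unaffected
ceph config set osd bluestore_use_optimal_io_size_for_min_alloc_size true
```

With this set, BlueStore derives min_alloc_size from the device's reported optimal I/O size instead of the static default.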

> EXT4 uses 
> 
All the cool kids skate XFS, warts and all ;)
_______________________________________________
ceph-users mailing list -- [email protected]
To unsubscribe send an email to [email protected]