Hi Joel,
Generally speaking, you need OSD redeployment to apply the 64K to 4K
min_alloc_size downgrade, and that is for the block device only. Other
improvements (including 4K allocation unit support for BlueFS) are applied
to existing OSDs automatically when the relevant Ceph release is installed.
So yes - if you have octopus-built OSDs still carrying the 64K allocation
unit, redeploying them with default settings is the way to get them onto 4K.
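A quick way to see which value an OSD actually carries on disk - a sketch,
assuming pacific or later, where "ceph osd metadata" reports the on-disk
value (OSD id 0 is just an example):

  # on-disk allocation unit this OSD was created with
  ceph osd metadata 0 | grep min_alloc_size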
Hi Igor,
Thanks, that's very helpful.
So in this case the Ceph developers recommend that all OSDs originally
built under octopus be redeployed with default settings, and that default
settings continue to be used going forward. Is that correct?
Thanks for your assistance,
Joel
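(For the mechanics of such a redeployment on a cephadm-managed cluster, a
minimal sketch - the OSD id is illustrative, and the flags should be
checked against your release:)

  # remove the OSD, keep its CRUSH position, and wipe the device for reuse;
  # the orchestrator then redeploys it with the current defaults (4K)
  ceph orch osd rm 12 --replace --zap
  # watch the drain/removal progress
  ceph orch osd rm status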
On Tue, Mar 12, 2024, Igor wrote:
Hi Joel,
my primary statement would be - do not adjust "alloc size" settings on
your own, and use the default values!
We've had a pretty long and convoluted evolution of this stuff, so tuning
recommendations and their aftermath depend greatly on the exact Ceph
version, and using improper settings can easily do more harm than good.
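(To confirm you are on the defaults - a sketch; these option names exist on
pacific, where both default to 4096:)

  # current configured values for rotational and solid-state devices
  ceph config get osd bluestore_min_alloc_size_hdd
  ceph config get osd bluestore_min_alloc_size_ssd
  # no output here means no override is set anywhere
  ceph config dump | grep min_alloc_size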
Hello Joel,
Please be aware that it is not recommended to keep a mix of OSDs
created with different bluestore_min_alloc_size values within the same
CRUSH device class. The consequence of such a mix is that the balancer
will not work properly - instead of evening out the OSD space
utilization, it will keep equalizing PG counts across OSDs that consume
different amounts of space for the same data, so utilization stays skewed.
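(A small audit loop to spot such a mix - a sketch assuming pacific or
later, where "ceph osd metadata" reports bluestore_min_alloc_size:)

  # print CRUSH device class and on-disk allocation unit for every OSD
  for id in $(ceph osd ls); do
      cls=$(ceph osd crush get-device-class osd.$id)
      alloc=$(ceph osd metadata $id | grep bluestore_min_alloc_size)
      echo "osd.$id class=$cls $alloc"
  done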
For OSDs that are added new, bfm_bytes_per_block is 4096. However, for OSDs
that were added when the cluster was running octopus, bfm_bytes_per_block
remains 65536.
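(For reference, one way to read that key directly from a stopped OSD - a
sketch; the path is an example, and the "B" prefix is the allocator prefix
assumed from the BlueStore source:)

  # read the freelist manager's bytes-per-block value from the OSD's kv store
  ceph-kvstore-tool bluestore-kv /var/lib/ceph/osd/ceph-0 get B bfm_bytes_per_block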
Based on
https://github.com/ceph/ceph/blob/1c349451176cc5b4ebfb24b22eaaa754e05cff6c/src/os/bluestore/BitmapFreelistManager.cc
and the
> On Feb 28, 2024, at 17:55, Joel Davidow wrote:
>
> Current situation
> -----------------
> We have three Ceph clusters that were originally built via cephadm on
> octopus and later upgraded to pacific. All OSDs are HDD (we will be moving
> to WAL+DB on SSD) and were resharded after the upgrade.