[ceph-users] CephFS space usage

2024-03-12 Thread Thorne Lawler
Hi everyone! My Ceph cluster (17.2.6) has a CephFS volume whose data pool shows 41TB of usage, but there are only 5.5TB of files in it. There are fewer than 100 files on the filesystem in total, so where is all that space going? How can I analyze my CephFS to understand what is
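
A hedged starting point for this kind of investigation (a sketch only; the pool name and the mount point /mnt/cephfs are placeholders) is to compare what the data pool reports against what the filesystem itself accounts for:

    ceph df detail                              # STORED vs USED for the data pool; USED includes replication overhead
    ceph osd pool get <data-pool> size          # replication factor that multiplies raw usage
    ceph fs status                              # per-pool usage as CephFS sees it
    getfattr -n ceph.dir.rbytes /mnt/cephfs     # recursive byte count of the directory tree
    getfattr -n ceph.dir.rfiles /mnt/cephfs     # recursive file count

If USED is much larger than STORED times the replication factor, the gap is likely held below the file layer rather than in the files themselves.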

[ceph-users] Re: bluestore_min_alloc_size and bluefs_shared_alloc_size

2024-03-12 Thread Joel Davidow
Hi Igor, Thanks, that's very helpful. So in this case, the Ceph developers recommend that all OSDs originally built under Octopus be redeployed with default settings and that default settings continue to be used going forward. Is that correct? Thanks for your assistance, Joel On Tue, Mar 12,
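
A sketch of how one might check which OSDs are affected before redeploying, using osd.0 as an example ID; on recent releases the allocation size an OSD was built with is reported in its metadata, and the configured defaults can be read from the mon config store:

    ceph osd metadata 0 | grep min_alloc_size          # value baked in when this OSD was created (recent releases)
    ceph config get osd bluestore_min_alloc_size_hdd   # default a redeployed HDD OSD would pick up
    ceph config get osd bluestore_min_alloc_size_ssd   # default for SSD OSDs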

[ceph-users] Ceph Users Feedback Survey

2024-03-12 Thread Neha Ojha
Hi everyone, On behalf of the Ceph Foundation Board, I would like to announce the creation of, and cordially invite you to, the first of a recurring series of meetings focused solely on gathering feedback from the users of Ceph. The overarching goal of these meetings is to elicit feedback from

[ceph-users] Re: bluestore_min_alloc_size and bluefs_shared_alloc_size

2024-03-12 Thread Igor Fedotov
Hi Joel, my primary statement would be: do not adjust "alloc size" settings on your own; use the default values! We've had a pretty long and convoluted evolution of this stuff, so tuning recommendations and their aftermath greatly depend on the exact Ceph version. While using improper
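
One way to confirm a cluster is actually running with the defaults recommended here (a sketch; option names assume a BlueStore release in the Pacific/Quincy era):

    ceph config dump | grep alloc_size                 # any hit here is an explicit override of the defaults
    ceph config help bluestore_min_alloc_size_hdd      # shows the built-in default for this release
    ceph config rm osd bluestore_min_alloc_size_hdd    # drop an override; only affects OSDs created afterwards

Note that min_alloc_size is fixed when an OSD is created, so removing an override changes nothing for existing OSDs; they would need to be redeployed, as discussed above.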

[ceph-users] Re: Hanging request in S3

2024-03-12 Thread Christian Kugler
Hi Casey, Interesting. Especially since the request it hangs on is a GET request. I set the option and restarted the RGW I test with. The POSTs for deleting take a while, but there are no longer any blocking GET or POST requests. Thank you! Best, Christian PS: Sorry for pressing the wrong reply
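
For context, the mechanics of applying such a change on a cephadm-managed cluster might look like the sketch below; the actual option name comes from Casey's earlier reply and is only a placeholder here, as is the RGW service name:

    ceph config set client.rgw <option_from_caseys_reply> <value>   # placeholder: substitute the option suggested in the earlier reply
    ceph orch restart rgw.<service_name>                            # restart the RGW daemons so the change takes effect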

[ceph-users] Re: 18.2.2 dashboard really messed up.

2024-03-12 Thread Nizamudeen A
Hi, The warning and danger indicators in the capacity chart correspond to the nearfull and full ratios set for the cluster, whose default values are 85% and 95% respectively. You can run `ceph osd dump | grep ratio` to see them. When this got introduced, there was a blog post
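
For reference, these thresholds can be inspected and, if needed, adjusted from the CLI (the values below are simply the defaults mentioned above):

    ceph osd dump | grep ratio          # shows full_ratio, backfillfull_ratio and nearfull_ratio
    ceph osd set-nearfull-ratio 0.85    # warning threshold reflected in the dashboard's capacity chart
    ceph osd set-full-ratio 0.95        # danger/full threshold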