Hi everyone!
My Ceph cluster (17.2.6) has a CephFS volume which is showing 41TB usage
for the data pool, but there are only 5.5TB of files in it. There are
fewer than 100 files on the filesystem in total, so where is all that
space going?
How can I analyze my CephFS to understand what is consuming that space?
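For reference, this is roughly how I am comparing the two numbers (the
/mnt/cephfs mount point and the cephfs_data pool name are just examples
from my setup):

    # raw/stored usage as the pools report it
    ceph df detail
    rados df
    # what the MDS accounts for the file data (recursive bytes at the root)
    getfattr -n ceph.dir.rbytes /mnt/cephfs
    # replication factor of the data pool, to rule out plain replication overhead
    ceph osd pool get cephfs_data size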
Hi Igor,
Thanks, that's very helpful.
So in this case the Ceph developers recommend that all OSDs originally
built under Octopus be redeployed with default settings, and that default
settings continue to be used going forward. Is that correct?
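If so, my plan for each OSD would be roughly the following (cephadm-managed
cluster; osd.12, the host name and the device are placeholders, and I would
wait for rebalancing to finish before moving on to the next one):

    # drop any custom alloc-size overrides, if we ever set them, so new
    # OSDs pick up the defaults
    ceph config rm osd bluestore_min_alloc_size_hdd
    ceph config rm osd bluestore_min_alloc_size_ssd
    # drain the OSD and keep its ID for the replacement
    ceph orch osd rm 12 --replace
    # once it is removed, wipe the old device; the OSD service spec then
    # recreates it with current defaults
    ceph orch device zap ceph-node1 /dev/sdb --force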
Thanks for your assistance,
Joel
Hi everyone,
On behalf of the Ceph Foundation Board, I would like to announce the
creation of, and cordially invite you to, the first of a recurring series
of meetings focused solely on gathering feedback from the users of
Ceph. The overarching goal of these meetings is to elicit feedback from
the Ceph user community.
Hi Joel,
my primary advice would be: do not adjust "alloc size" settings on
your own - use the default values!
We've had a pretty long and convoluted evolution of this stuff, so tuning
recommendations and their consequences depend greatly on the exact Ceph
version. Using improper values can cause significant space overhead, and
since the allocation size is fixed when an OSD is created, the only way to
change it afterwards is to redeploy the OSD.
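To check what you currently have configured (note that this shows the
configured values only, not the value an existing OSD was built with -
that one is baked in when the OSD is created):

    # any explicit overrides present in the cluster configuration?
    ceph config dump | grep alloc
    # the defaults a newly built OSD would pick up
    ceph config get osd bluestore_min_alloc_size_hdd
    ceph config get osd bluestore_min_alloc_size_ssd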
Hi Casey,
Interesting. Especially since the request it hangs on is a GET request.
I set the option and restarted the RGW I test with.
The POSTs for deleting take a while, but there are no longer any blocking
GET or POST requests.
Thank you!
Best,
Christian
PS: Sorry for pressing the wrong reply button.
Hi,
The warning and danger indicators in the capacity chart correspond to the
nearfull and full ratios set on the cluster; the default values for them
are 85% and 95% respectively. You can run `ceph osd dump | grep ratio` to
see them.
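If you ever need to adjust them, they can be set cluster-wide, e.g.:

    # values are fractions of raw capacity
    ceph osd set-nearfull-ratio 0.85
    ceph osd set-backfillfull-ratio 0.90
    ceph osd set-full-ratio 0.95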
When this got introduced, there was a blog post