[ceph-users] RGW: Quincy 17.2.7 and rgw_crypt_default_encryption_key

2023-11-04 Thread Jayanth Reddy
Hello Users, It is great to see the note about RGW "S3 multipart uploads using Server-Side Encryption now replicate correctly in multi-site" in the Quincy v17.2.7 release. But I see that users who are using [1] still have a dependency on the item tracked at [2]. I tested with Reef 18.2.0 as well and
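For context, the option named in the subject, rgw_crypt_default_encryption_key, takes a base64-encoded 256-bit key and is documented for testing rather than production use. A minimal sketch of how such a key is typically set, assuming a `ceph config` workflow; the key value and the client.rgw target are placeholders, not taken from this thread:

    # Generate a base64-encoded 256-bit key (placeholder, do not reuse a logged value)
    openssl rand -base64 32
    # Apply it to the RGW daemons; 'client.rgw' is an assumed config target
    ceph config set client.rgw rgw_crypt_default_encryption_key <base64-key>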

[ceph-users] Re: OSD not starting

2023-11-04 Thread Alex Gorbachev
Hi Amudhan, Have you checked the time sync? This could be an issue: https://tracker.ceph.com/issues/17170 -- Alex Gorbachev Intelligent Systems Services Inc. http://www.iss-integration.com https://www.linkedin.com/in/alex-gorbachev-iss/ On Sat, Nov 4, 2023 at 11:22 AM Amudhan P wrote: >
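A minimal sketch of checking time sync on the affected node, assuming chrony is the time service (substitute ntp/timedatectl as appropriate); clock skew also shows up in the monitors' health output:

    # On the node that lost power: is the clock synchronized?
    timedatectl status
    chronyc tracking
    # From an admin node: skew appears as a health warning
    ceph status
    ceph health detail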

[ceph-users] OSD not starting

2023-11-04 Thread Amudhan P
Hi, One of the servers in the Ceph cluster shut down abruptly due to a power failure. After restarting, the OSDs are not coming up, and the Ceph health check shows them as down. Checking the OSD status shows "osd.26 18865 unable to obtain rotating service keys; retrying". Every 30 seconds it's just

[ceph-users] Re: Ceph OSD reported Slow operations

2023-11-04 Thread Zakhar Kirpichenko
You have an IOPS budget, i.e. how much I/O your spinners can deliver. Space utilization doesn't affect it much. You can try disabling write (not read!) cache on your HDDs with sdparm (for example, sdparm -c WCE /dev/bla); in my experience this allows HDDs to deliver 50-100% more write IOPS. If
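A short sketch of the sdparm suggestion above; /dev/sdX is a placeholder, and it is worth checking the current setting first (the runtime change may not survive a power cycle unless saved):

    # Show the current write cache enable (WCE) setting
    sdparm --get=WCE /dev/sdX
    # Disable the volatile write cache at runtime (same as 'sdparm -c WCE')
    sdparm --clear=WCE /dev/sdX
    # Optionally persist it in the drive's saved mode page
    sdparm --clear=WCE --save /dev/sdX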

[ceph-users] Many pgs inactive after node failure

2023-11-04 Thread Matthew Booth
I have a 3-node Ceph cluster in my home lab. One of the pools spans 3 HDDs, one on each node, and has size 2, min size 1. One of my nodes is currently down, and I have 160 PGs in 'unknown' state. The other 2 hosts are up and the cluster has quorum. Example `ceph health detail` output: pg 9.0 is
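Some commands commonly used to see which OSDs the affected PGs map to; pg 9.0 is just the example ID from the health output above, and a query may not return while a PG is still 'unknown':

    # Which OSDs/hosts are down and the overall PG summary
    ceph osd tree
    ceph health detail
    # Mapping and detailed state of one example PG
    ceph pg map 9.0
    ceph pg 9.0 query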

[ceph-users] Re: Ceph OSD reported Slow operations

2023-11-04 Thread V A Prabha
Now, in this situation, how can I stabilize my production setup, since as you have mentioned the cluster is very busy? Is there any configuration parameter tuning that will help, or is the only option to reduce the applications running on the cluster? Though if I have 1.6 TB of free storage available in each

[ceph-users] Re: Ceph OSD reported Slow operations

2023-11-04 Thread V A Prabha
Thanks for your prompt reply; that clears my doubt. 4 of the OSDs across 2 different nodes go down daily with multipath failure errors. All four paths go into a failed state, which takes the OSD down. My query is: since the Ceph cluster is overloaded with IOPS, do the multipaths
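For reference, a minimal sketch of inspecting the per-path state behind a multipath device on the affected node (device names are whatever multipathd reports there):

    # Show multipath topology and per-path status (active/failed/faulty)
    multipath -ll
    # Recent kernel messages around the time the paths failed
    dmesg -T | grep -i multipath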