[ceph-users] Removing failing OSD with cephadm?

2023-02-17 Thread Matt Larson
I have an OSD that is causing slow ops and appears to be backed by a failing drive according to smartctl output. I am using cephadm and am wondering what the best way is to remove this drive from the cluster, and what the proper steps are to replace the disk. Mark osd.35 as out: `sudo ceph osd out osd.35`
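
A rough sketch of the cephadm replacement flow (osd id 35 is taken from above; the hostname and device path are placeholders, and flags such as --zap may vary by release):

# mark the OSD out so PGs start draining off it
sudo ceph osd out osd.35

# have the orchestrator drain and remove it, keeping the id free for the replacement
sudo ceph orch osd rm 35 --replace --zap

# watch progress of the drain/removal
sudo ceph orch osd rm status

# after the physical swap, zap the replacement disk if it carries old data
# so cephadm can redeploy an OSD onto it
sudo ceph orch device zap ceph-host-01 /dev/sdX --force

With --replace, the OSD is marked destroyed rather than fully removed, so the replacement drive can take over the same id.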

[ceph-users] Re: RGW cannot list or create openidconnect providers

2023-02-17 Thread mat
Hi Pritha, The caps were set correctly. I actually discovered that the SHA1 hash in my ThumbprintList was wrong. I had to attach the Python debugger to find the real issue, because boto3 seems to suppress the error returned from the IAM API. The radosgw response is pretty explicit about the
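
For reference, one way to compute a SHA1 thumbprint with openssl looks roughly like this (idp.example.com is a placeholder, and which certificate in the chain the thumbprint has to match may depend on the provider setup):

# print the SHA1 fingerprint of the certificate presented by the IdP,
# with colons stripped and lower-cased to give a 40-character hex string
openssl s_client -servername idp.example.com -connect idp.example.com:443 \
    < /dev/null 2>/dev/null \
  | openssl x509 -fingerprint -sha1 -noout \
  | cut -d= -f2 | tr -d ':' | tr 'A-F' 'a-f'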

[ceph-users] Re: RGW Service SSL HAProxy.cfg

2023-02-17 Thread Jimmy Spets
The config file for HAProxy is generated by Ceph, and I think it should include "ssl verify none" on each backend line, as the config uses plain ip:port notation. What I wonder is whether my YAML config for the RGW and Ingress services is missing something, or whether it is a bug in the HAProxy config file generator.
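
For context, an ingress spec of the kind being discussed is applied roughly like this sketch (service names, the VIP and the certificate are placeholders, not the actual config from this cluster):

sudo ceph orch apply -i - <<'EOF'
service_type: ingress
service_id: rgw.default
placement:
  count: 2
spec:
  backend_service: rgw.default   # must match the RGW service name
  virtual_ip: 192.0.2.10/24      # placeholder VIP
  frontend_port: 443
  monitor_port: 1967
  ssl_cert: |
    -----BEGIN CERTIFICATE-----
    ...
    -----END CERTIFICATE-----
EOF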

[ceph-users] Re: ceph noout vs ceph norebalance, which is better for minor maintenance

2023-02-17 Thread Konstantin Shalygin
> On 17 Feb 2023, at 23:20, Anthony D'Atri wrote:
>
>> * if a rebalance starts due to EDAC or SFP degradation, it is faster to fix
>> the issue via DC engineers and put the node back to work
>
> A judicious mon_osd_down_out_subtree_limit setting can also do this by not
> rebalancing when an entire node is detected down.
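
For reference, that limit can be inspected and adjusted roughly like this (a sketch; 'host' is just an example value):

# the smallest failure domain that is NOT auto-marked out when it goes down (default: rack)
sudo ceph config get mon mon_osd_down_out_subtree_limit

# with 'host', a whole node going down is not auto-marked out,
# so no rebalance starts while the DC engineers fix it
sudo ceph config set mon mon_osd_down_out_subtree_limit host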

[ceph-users] Re: Next quincy release (17.2.6)

2023-02-17 Thread Laura Flores
And make sure the PR is passing all required checks and is approved.

On Fri, Feb 17, 2023 at 10:25 AM Yuri Weinstein wrote:
> Hello
>
> We are planning to start QE validation of the release next week.
> If you have PRs that are to be part of it, please let us know by
> adding "needs-qa" for the 'quincy' milestone ASAP.

[ceph-users] Next quincy release (17.2.6)

2023-02-17 Thread Yuri Weinstein
Hello,

We are planning to start QE validation of the release next week. If you have PRs that are to be part of it, please let us know by adding "needs-qa" for the 'quincy' milestone ASAP.

Thx
YuriW

[ceph-users] Re: ceph noout vs ceph norebalance, which is better for minor maintenance

2023-02-17 Thread Anthony D'Atri
> * if a rebalance starts due to EDAC or SFP degradation, it is faster to fix
> the issue via DC engineers and put the node back to work

A judicious mon_osd_down_out_subtree_limit setting can also do this by not rebalancing when an entire node is detected down.

> * noout prevents unwanted OSD's fi
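
For completeness, the flags being compared are toggled cluster-wide like this (sketch):

# before maintenance: down OSDs are not marked out, so no backfill starts
sudo ceph osd set noout

# optionally also suspend PG rebalancing entirely
sudo ceph osd set norebalance

# ... perform the maintenance ...

# afterwards, clear both flags
sudo ceph osd unset norebalance
sudo ceph osd unset noout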

[ceph-users] bluefs_db_type

2023-02-17 Thread Stolte, Felix
Hey guys, most of my OSDs have an HDD for block and an SSD for db, but according to "ceph osd metadata" bluefs_db_type = hdd and bluefs_db_rotational = 1. `lsblk -o name,rota` reveals the following (sdb is the db device for 3 HDDs): sdb
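
For comparison, the two views can be pulled side by side roughly like this (osd.0 and /dev/sdb are placeholders):

# what the OSD itself records about its DB device
sudo ceph osd metadata 0 | grep -E '"bluefs_db_(type|rotational)"'

# what the kernel reports for the device (ROTA 0 = non-rotational, i.e. SSD)
lsblk -o NAME,ROTA /dev/sdb
cat /sys/block/sdb/queue/rotational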