I have an OSD that is causing slow ops and, according to smartctl output,
appears to be backed by a failing drive. I am using cephadm. What is the best
way to remove this drive from the cluster, and what are the proper steps to
replace the disk?
Mark osd.35 as out:
`sudo ceph osd out osd.35`
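With cephadm, the usual replacement flow goes beyond marking the OSD out. A sketch, assuming the failing OSD is id 35 (the host and device path in the last step are placeholders you would substitute):

```shell
# Mark the OSD out so PGs migrate off it
sudo ceph osd out osd.35

# Remove the OSD but keep its id reserved for the replacement disk;
# --zap wipes the old device's LVM/bluestore metadata on removal
sudo ceph orch osd rm 35 --replace --zap

# Watch the drain and removal progress
sudo ceph orch osd rm status

# After physically swapping the disk, cephadm redeploys the OSD
# automatically if a matching OSD service spec covers the device;
# otherwise add it explicitly (placeholders: host name, device path)
sudo ceph orch daemon add osd <host>:/dev/sdX
```

Because of `--replace`, the new disk should come back as osd.35 rather than consuming a fresh id, which keeps the CRUSH map stable.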
Hi Pritha,
The caps were set correctly. I actually discovered that the SHA1 hash in my
ThumbprintList was wrong. I had to attach the Python debugger to find the real
issue, because boto3 seems to suppress the error returned from the IAM API.
The radosgw response is pretty explicit about the
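For reference, the ThumbprintList entry is the hex SHA1 fingerprint of the OIDC provider's certificate, lowercased and with colons stripped. A sketch of computing it with openssl (`oidc.example.com` is a placeholder for the actual provider host):

```shell
# Grab the provider's certificate and print its SHA1 fingerprint in the
# form expected by the ThumbprintList (40 lowercase hex chars, no colons)
echo | openssl s_client -connect oidc.example.com:443 \
       -servername oidc.example.com 2>/dev/null \
  | openssl x509 -fingerprint -sha1 -noout \
  | sed -e 's/.*=//' -e 's/://g' \
  | tr '[:upper:]' '[:lower:]'
```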
The config file for HAProxy is generated by Ceph, and I think it should include
"ssl verify none" on each backend server line, since the config uses plain
ip:port notation. What I wonder is whether my YAML config for the RGW and
Ingress services is missing something, or whether this is a bug in the HAProxy
config file generator.
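For illustration, this is roughly what a backend section would need when the RGW daemons themselves terminate TLS; the server names and addresses here are hypothetical, not taken from an actual generated config:

```
backend backend
    mode http
    balance static-rr
    # "ssl" makes HAProxy speak TLS to the backend; "verify none" skips
    # certificate validation, needed when servers are addressed by bare IP
    server rgw0 192.168.1.10:443 ssl verify none check
    server rgw1 192.168.1.11:443 ssl verify none check
```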
> On 17 Feb 2023, at 23:20, Anthony D'Atri wrote:
>
>
>
>> * if a rebalance starts due to EDAC or SFP degradation, it is faster to fix
>> the issue via DC engineers and put the node back to work
>
> A judicious mon_osd_down_out_subtree_limit setting can also do this by not
> rebalancing when an entire node is detected down.
And make sure the PR passes all required checks and is approved.
On Fri, Feb 17, 2023 at 10:25 AM Yuri Weinstein wrote:
> Hello
>
> We are planning to start QE validation release next week.
> If you have PRs that are to be part of it, please let us know by
> adding "needs-qa" for 'quincy' milestone ASAP.
Hello
We are planning to start QE validation release next week.
If you have PRs that are to be part of it, please let us know by
adding "needs-qa" for 'quincy' milestone ASAP.
Thx
YuriW
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-leave@ceph.io
> * if a rebalance starts due to EDAC or SFP degradation, it is faster to fix
> the issue via DC engineers and put the node back to work
A judicious mon_osd_down_out_subtree_limit setting can also do this by not
rebalancing when an entire node is detected down.
> * noout prevents unwanted OSD's fi
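The subtree-limit suggestion above can be applied with a single config change; a sketch, assuming the failure domain you want to protect is the host level:

```shell
# Don't automatically mark OSDs out when an entire subtree at or above
# "host" goes down -- this avoids a full rebalance while DC engineers
# repair the node, instead of relying on a manually set noout flag
sudo ceph config set mon mon_osd_down_out_subtree_limit host
```

Unlike `ceph osd set noout`, which suppresses out-marking cluster-wide until you unset it, the subtree limit only kicks in when a whole failure domain drops at once; single-disk failures still rebalance normally.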
Hey guys,
most of my OSDs have an HDD for block and an SSD for db. But according to
"ceph osd metadata", bluefs_db_type = hdd and bluefs_db_rotational = 1.
`lsblk -o name,rota` reveals the following (sdb is the db device for 3 HDDs):
sdb
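One way to cross-check what the kernel itself reports for the device, independent of lsblk (sdb is the db device from the description above):

```shell
# 0 = non-rotational (SSD), 1 = rotational (HDD); this is the flag the
# OSD samples at startup to populate bluefs_db_rotational
cat /sys/block/sdb/queue/rotational

# Same information via lsblk, for all devices (note: no space after the
# comma in the column list)
lsblk -o NAME,ROTA
```

If sysfs reports 0 but `ceph osd metadata` still shows 1, the metadata was presumably captured while the flag was misreported (e.g., a device behind a RAID controller before a udev rule corrected it); restarting the OSD should refresh the recorded value, though whether that applies here is an assumption.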