[ceph-users] mismatch between min-compat-client and connected clients

2021-10-24 Thread gustavo panizzo
Hello, in a cluster running Octopus I've set up the upmap balancer; in order to do so I had to set set-require-min-compat-client to luminous. However, I can still mount my CephFS exports using old kernels (3.10.0-1160.el7), and more surprisingly the clients appear in ceph features as jewel
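For reference, a minimal sketch of the commands involved (the balancer steps are as described above; ceph features is the check that reports the client releases):

    # require at least luminous clients, then enable the upmap balancer
    ceph osd set-require-min-compat-client luminous
    ceph balancer mode upmap
    ceph balancer on
    # list the release/feature bits each connected client session advertises
    ceph features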

[ceph-users] Re: Consul as load balancer

2021-10-24 Thread gustavo panizzo
Hi, On Tue, Oct 06, 2020 at 06:36:43 AM, Szabo, Istvan (Agoda) wrote: Hi, Has anybody tried Consul as a load balancer? I'm doing it, and it works fine. Users connect to the .consul FQDN. The only downside of Consul compared to a "real" LB is that you cannot set weights for individual
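A rough sketch of what such a setup might look like; the service name "rgw", the address, and the port are hypothetical, and this is not taken from the original thread:

    # register a backend instance with the local Consul agent
    # (a production setup would also define a health check in a service file)
    consul services register -name=rgw -port=7480 -address=10.0.0.11 -tag=ceph
    # clients resolve the .consul name; Consul round-robins the healthy instances
    dig @127.0.0.1 -p 8600 rgw.service.consul +short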

[ceph-users] Re: Fwd: Dashboard URL

2021-10-24 Thread Yury Kirsanov
Hi E Taka, thanks a lot for sharing your script; yes, that could be one way to fix the issue! The strange thing is that on another Ceph test deployment I somehow get valid FQDNs reported by 'ceph mgr services' instead of IP addresses. I've compared their configurations and it seems to be
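A quick way to compare what each cluster's active mgr advertises (a sketch; assumes jq is installed and that the dashboard module is enabled):

    # URL the active mgr currently reports for the dashboard
    ceph mgr services | jq -r '.dashboard'
    # any explicitly pinned dashboard address (empty/unset means the mgr
    # advertises whatever address it happens to bind to)
    ceph config get mgr mgr/dashboard/server_addr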

[ceph-users] Re: Upgrade to 16.2.6 and osd+mds crash after bluestore_fsck_quick_fix_on_mount true

2021-10-24 Thread Igor Fedotov
Hey Marek, one more attempt, please: set debug_bluestore to 10/30 and share the OSD startup log. Thanks, Igor. On 10/22/2021 10:41 PM, mgrzybowski wrote: Hi Igor, in ceph.conf I added: [osd] debug bluestore = 20, then: systemctl start ceph-osd@2. The log is large: # ls -alh
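The same request expressed with the config CLI rather than ceph.conf (a sketch; osd.2 and the log path are taken from the quoted message and the usual packaging defaults):

    # equivalent to "debug bluestore = 10/30" under [osd] in ceph.conf,
    # but scoped to the one affected OSD via the mon config database
    ceph config set osd.2 debug_bluestore 10/30
    systemctl restart ceph-osd@2
    # the resulting startup log compresses well before sharing
    gzip -k /var/log/ceph/ceph-osd.2.log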

[ceph-users] Re: deep-scrubs not respecting scrub interval (ceph luminous)

2021-10-24 Thread Anthony D'Atri
This page might also help: https://docs.ceph.com/en/pacific/dev/osd_internals/scrub/
> osd_scrub_begin_hour = 10 <= this works, great
> osd_scrub_end_hour = 17 <= this works, great
Does your workload vary that much over the course of a day? This limits scrubs to ~29% of the day, so during
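The options under discussion, spelled out for a Luminous cluster (a sketch; the deep-scrub interval value is illustrative, not a recommendation):

    # ceph.conf on the OSD hosts
    [osd]
        osd_scrub_begin_hour = 10
        osd_scrub_end_hour = 17
        osd_deep_scrub_interval = 1209600   # seconds; default is 604800 (one week)

    # or apply the scrub window at runtime without a restart
    ceph tell osd.* injectargs '--osd_scrub_begin_hour=10 --osd_scrub_end_hour=17'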

[ceph-users] Re: CephFS multi active MDS high availability

2021-10-24 Thread Denis Polom
Hi, even better is to set allow_standby_replay and have, for example, 2 active and 2 standby MDS. More here: https://docs.ceph.com/en/latest/cephfs/standby/#configuring-standby-replay dp On 10/24/21 09:52, huxia...@horebdata.cn wrote: Dear Cephers, When setting up multiple active CephFS MDS,
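The setting referenced above, assuming a filesystem named "cephfs" (the name is only a placeholder):

    # keep a hot standby tailing each active MDS's journal for faster takeover
    ceph fs set cephfs allow_standby_replay true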

[ceph-users] Fwd: Dashboard URL

2021-10-24 Thread E Taka
Hi Yury, I faced the same problem: recently only IP addresses are listed in `ceph mgr services`. As an easy workaround I installed a lighttpd for just one CGI script: #!/bin/bash DASHBOARD=$(ceph mgr services | jq
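The quoted script is cut off by the digest; a minimal sketch of what such a redirect CGI could look like (assumes jq is available and the dashboard module is enabled, and is not the author's exact script):

    #!/bin/bash
    # look up the URL the active mgr currently advertises for the dashboard
    DASHBOARD=$(ceph mgr services | jq -r '.dashboard')
    # emit an HTTP redirect so the browser lands on the current dashboard URL
    echo "Status: 302 Found"
    echo "Location: ${DASHBOARD}"
    echo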

[ceph-users] Re: CephFS multi active MDS high availability

2021-10-24 Thread E Taka
See https://docs.ceph.com/en/pacific/cephfs/multimds/ If I understand it correctly, do this: ceph fs set max_mds 2, ceph fs set standby_count_wanted 1, ceph orch apply mds 3. On Sun, 24 Oct 2021 at 09:52, huxia...@horebdata.cn <huxia...@horebdata.cn> wrote: > Dear Cephers, > > When setting up
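The same commands spelled out with the filesystem name included, which they require; "cephfs" is only a placeholder:

    # allow two active MDS ranks and ask for one spare standby
    ceph fs set cephfs max_mds 2
    ceph fs set cephfs standby_count_wanted 1
    # have cephadm run three MDS daemons for this filesystem (2 active + 1 standby)
    ceph orch apply mds cephfs --placement=3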

[ceph-users] CephFS multi active MDS high availability

2021-10-24 Thread huxia...@horebdata.cn
Dear Cephers, When setting up multiple active CephFS MDS, how do I make these MDS highly available? I.e., whenever an MDS fails, another MDS would quickly take over. Does that mean that for N active MDS I need to set up N standby MDS, and associate one standby MDS with each active MDS?