Hello,
In a cluster running Octopus I've set the upmap balancer; in order to
do so I had to set the set-require-min-compat-client flag to luminous.
However, I can still mount my CephFS exports using old kernels (3.10.0-1160.el7),
and, more surprisingly, the clients appear in 'ceph features' as jewel.
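For reference, the sequence was roughly the following (standard commands, listed here for completeness):

# allow only luminous+ clients, required for pg-upmap entries
ceph osd set-require-min-compat-client luminous
# switch the balancer to upmap mode and enable it
ceph balancer mode upmap
ceph balancer on
# this is where the connected clients still report jewel
ceph features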
Hi,
On Tue, Oct 06, 2020 at 06:36:43AM +0000, Szabo, Istvan (Agoda) wrote:
Hi,
Has anybody tried Consul as a load balancer?
I'm doing it and it works fine. Users connect to the .consul FQDN.
The only downside of Consul compared to a "real" LB is that you
cannot set weights for individual
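A minimal sketch of how a service can be registered so that users get a .consul FQDN (the service name, port and health check here are assumptions; an RGW endpoint is just an example):

# register the local gateway with the Consul agent
cat > /etc/consul.d/rgw.json <<'EOF'
{
  "service": {
    "name": "rgw",
    "port": 8080,
    "check": {
      "http": "http://localhost:8080/",
      "interval": "10s"
    }
  }
}
EOF
consul reload
# clients then resolve rgw.service.consul via Consul DNS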
Hi E Taka,
Thanks a lot for sharing your script; yes, that could be one of the
solutions to fix that issue! Strangely, though, on another Ceph test
deployment I somehow get valid FQDNs reported by 'ceph mgr services'
instead of IP addresses. I've compared their configurations and it seems to
be
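(A quick way to compare the two deployments, assuming jq is available and the dashboard module is enabled:)

# prints the advertised dashboard endpoint, FQDN or IP
ceph mgr services | jq -r '.dashboard'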
Hey Marek,
one more attempt, please:
set debug_bluestore to 10/30 and share the OSD startup log.
Thanks,
Igor.
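In case it helps, a sketch of how to set that (run on the node hosting the OSD; osd.2 is taken from your earlier mail):

# runtime change via the admin socket, no restart needed
ceph daemon osd.2 config set debug_bluestore 10/30

Alternatively, put 'debug bluestore = 10/30' under [osd] in ceph.conf and restart the daemon.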
On 10/22/2021 10:41 PM, mgrzybowski wrote:
Hi Igor
In ceph.conf I added:
[osd]
debug bluestore = 20
next: systemctl start ceph-osd@2
The log is large:
# ls -alh
This page might also help:
https://docs.ceph.com/en/pacific/dev/osd_internals/scrub/
> osd_scrub_begin_hour = 10 <= this works, great
> osd_scrub_end_hour = 17 <= this works, great
Does your workload vary that much over the course of a day? This limits scrubs
to ~29% of the day, so during
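(If it helps, a sketch of setting these cluster-wide at runtime, assuming a release with the centralized config database; values taken from the quoted settings:)

# limit scrubbing to the 10:00-17:00 window on all OSDs
ceph config set osd osd_scrub_begin_hour 10
ceph config set osd osd_scrub_end_hour 17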
Hi,
Even better is to set allow_standby_replay and have, for example, 2 active
and 2 standby daemons. More here:
https://docs.ceph.com/en/latest/cephfs/standby/#configuring-standby-replay
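A sketch with a placeholder filesystem name:

# let a standby continuously replay the journal of an active MDS
ceph fs set <fs_name> allow_standby_replay true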
dp
On 10/24/21 09:52, huxia...@horebdata.cn wrote:
Dear Cephers,
When setting up multiple active CephFS MDS,
Hi Yuri,
I faced the same problem, that recently only IP addresses are listed in
`ceph mgr services`.
As an easy workaround I installed lighttpd for just one CGI script:
#!/bin/bash
DASHBOARD=$(ceph mgr services | jq
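The script is cut off above; a sketch of how such a redirect CGI can be completed (the jq filter assumes the dashboard module is the target):

#!/bin/bash
# look up the URL the active mgr advertises for the dashboard
DASHBOARD=$(ceph mgr services | jq -r '.dashboard')
# reply with an HTTP redirect to it
echo "Status: 302 Found"
echo "Location: $DASHBOARD"
echo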
see https://docs.ceph.com/en/pacific/cephfs/multimds/
If I understand it correctly, do this:
ceph fs set <fs_name> max_mds 2
ceph fs set <fs_name> standby_count_wanted 1
ceph orch apply mds <fs_name> --placement=3
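To verify afterwards (not from the original mail, just a quick check):

# should show two active ranks plus standby daemons
ceph fs status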
On Sun, Oct 24, 2021 at 09:52, huxia...@horebdata.cn <huxia...@horebdata.cn> wrote:
> Dear Cephers,
>
> When setting up
Dear Cephers,
When setting up multiple active CephFS MDS daemons, how do I make these MDS
highly available? I.e., whenever an MDS fails, another MDS would quickly take
over. Does that mean that for N active MDS I need to set up N standby MDS, and
associate one standby MDS with each active MDS?