ceph version 17.2.7 (b12291d110049b2f35e32e0de30d70e9a4c060d2) quincy (stable)
Why is the MDS (in error state) for the backupfs filesystem shown with
the cloudfs filesystem?
Now... Is there a way back to normal?
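In case it helps others, a first sanity check could be to see how the MDS
daemons map to the filesystems (a hedged sketch; the daemon name in the
restart line is a placeholder, not from the original report):

# ceph fs status
# ceph orch ps --daemon_type mds
# ceph orch daemon restart mds.backupfs.<host>.<suffix>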
/Johan
OSDs that remained in the host (after the pools recovered/rebalanced).
/Johan
On 2024-05-07 at 12:09, Eugen Block wrote:
Hi, did you remove the host from the host list [0]?
ceph orch host rm [--force] [--offline]
[0]
https://docs.ceph.com/en/latest/cephadm/host-management/#offline-host-removal
… and simply decided to force the removal from the cluster.
The host is now removed, but Ceph (17.2.7) keeps complaining about it
being in maintenance mode.
How can I remove the last remnants of this host from the cluster?
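One sequence that might clear the remnants (a sketch, assuming a
cephadm-managed cluster; <host> is a placeholder):

# ceph orch host maintenance exit <host>
# ceph orch host rm <host> --offline --force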
/Johan
test cluster.
Regards
For reference, this was also discussed about two years ago:
https://www.spinics.net/lists/ceph-users/msg70108.html
Worked for me.
// Johan
I have checked my disks as well; all devices are hot-swappable HDDs and
have the removable flag set.
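For reference, the flag can be read straight from sysfs; hot-swap bays
typically report 1 (the device name below is just an example):

# cat /sys/block/sda/removable
1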
/Johan
On 2023-10-24 at 13:38, Patrick Begou wrote:
Hi Eugen,
Yes Eugen, all the devices /dev/sd[abc] have the removable flag set to
1. Maybe because they are hot-swappable hard drives.
Which OS are you running?
What is the outcome of these two tests?
cephadm --image quay.io/ceph/ceph:v16.2.10-20220920 ceph-volume inventory
cephadm --image quay.io/ceph/ceph:v16.2.11-20230125 ceph-volume inventory
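Possibly also worth comparing against whatever image the host currently
uses, i.e. without pinning an image (just a suggestion):

# cephadm ceph-volume inventory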
/Johan
The problem appears in v16.2.11-20230125.
I have no insight into the different commits.
/Johan
On 2023-10-16 at 08:25, 544463...@qq.com wrote:
I encountered a similar problem on Ceph 17.2.5; could you find out
which commit caused it?
n1p3                             False  False
/dev/md2      279.27 GB  nvme1n1p4,nvme0n1p4  False  False
/dev/nvme0n1  931.51 GB  nvme0n1              False  False  KINGSTON SNV2S1000G
/dev/nvme1n1  931.51 GB  nvme1n1              False  False  KINGSTON SNV2S1000G
"-1 0 root default" is a bit strange
On 1 April 2023 at 01:01:39 MESZ, Johan Hattne wrote:
Here goes:
# ceph -s
  cluster:
    id:     e1327a10-8b8c-11ed-88b9-3cecef0e3946
    health: HEALTH_OK
  services:
    mon: 5 daemons, quorum bcgonen-a,bcgonen-b,bcgo
those rack
buckets were sitting next to the default root as opposed to under it.
Now that's fixed, and the cluster is backfilling remapped PGs.
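For anyone hitting the same thing: moving a misplaced bucket under the
default root is a single command (the bucket name below is an example,
not from the original cluster):

# ceph osd crush move rack1 root=default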
// J
ost).
// J
On 2023-03-31 15:37, c...@elchaka.de wrote:
Need to know some more about your cluster...
ceph -s
ceph osd df tree
Replica or EC?
...
Perhaps this can give us some insight
Mehmet
On 31 March 2023 at 18:08:38 MESZ, Johan Hattne wrote:
Dear all;
Up until a few hours ago, I had
// Best wishes; Johan
error message ...
Thank you for your help!
/Johan
On 2022-11-07 at 09:09, Eugen Block wrote:
Hi,
how does your mon section of the myservice.yaml look? Could you please
paste it?
How did you configure the public network? Can you share
# ceph config get mon public_network
It sounds like you have
192.168.119.1/24 has host bits set
What have I done wrong and how can I correct it?
/Johan
Failed to apply mon spec ServiceSpec.from_json(yaml.safe_load('''
service_type: mon
service_name: mon
placement:
  count: 3
''')): 192.168.119.1/24 has host bits set
Traceback (most recent c
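If I read the error right, 192.168.119.1/24 is a host address combined
with a /24 mask; the network address form should pass validation. A
hedged sketch of the fix (adjust the subnet and spec file name to your
setup):

# ceph config set mon public_network 192.168.119.0/24
# ceph orch apply -i mon.yaml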
// Best wishes; Johan
On 2021-07-27 23:48, Eugen Block wrote:
Alright, it's great that you could fix it!
In my one-node test cluster (Pacific) I see this smartctl version:
[ceph: root@pacific /]# rpm -q smartmontools
smartmontools-7.1-1.el8.x86_64
Quoting Johan Hattne:
Thanks a lot, Eugen!
(https://www.smartmontools.org/ticket/1404). Unfortunately, the
Octopus image ships with smartmontools 7.1, which will crash the kernel
on e.g. "smartctl -a /dev/nvme0". Before switching to Octopus
containers, I was using smartmontools from Debian backports, which does
not have this problem.
Does Pacific ship a newer smartmontools?
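A quick way to check which smartmontools a given container ships before
poking NVMe devices (assuming a cephadm-managed cluster; this just
prints the version):

# cephadm shell -- smartctl --version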
: 0 up (since 47h), 1 in (since 47h); epoch: e4378
Is there any way to get past this? For instance, could I coax the OSDs
into epoch 4378? This is the first time I've dealt with a Ceph
disaster, so there may be all kinds of obvious things I'm missing.
// Best wishes; Johan
become EOL by the end of 2019.
Or am I totally wrong?
Thank you in advance for your reply,
/Johan