[ceph-users] MDS crashing

2024-05-29 Thread Johan
version 17.2.7 (b12291d110049b2f35e32e0de30d70e9a4c060d2) quincy (stable). Why is the MDS (in error state) for the backupfs filesystem shown with the cloudfs filesystem? Now... is there a way back to normal? /Johan
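
A minimal sketch of commands that could help map MDS daemons to filesystems in this situation (assuming a cephadm-managed Quincy cluster; the filesystem names backupfs and cloudfs are taken from the post):

  ceph fs status                      # active/standby MDS daemons per filesystem
  ceph fs dump                        # full FSMap, including daemons in failed/error state
  ceph orch ps --daemon-type mds      # state of each mds daemon as cephadm sees it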

[ceph-users] Re: Removed host in maintenance mode

2024-05-07 Thread Johan
OSDs that remained in the host (after the pools recovered/rebalanced). /Johan On 2024-05-07 at 12:09, Eugen Block wrote: Hi, did you remove the host from the host list [0]? ceph orch host rm [--force] [--offline] [0] https://docs.ceph.com/en/latest/cephadm/host-management/#offline-hos
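
A minimal sketch of how that removal might look (the hostname is a placeholder; --offline and --force are only needed because the host is unreachable or stuck in maintenance mode):

  ceph orch host ls                                  # confirm the host still appears in the host list
  ceph orch host rm <hostname> --offline --force     # drop the unreachable host from the cluster
  ceph orch host ls                                  # verify it is gone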

[ceph-users] Removed host in maintenance mode

2024-05-07 Thread Johan
n and simply decided to force the removal from the cluster. The host is now removed but ceph (17.2.7) keeps complaining about it being in maintenance mode. How can I remove the last remnants of this host from the cluster? /Johan

[ceph-users] Re: Building new cluster had a couple of questions

2023-12-22 Thread Johan Hattne
test cluster. Regards For reference, this was also discussed about two years ago: https://www.spinics.net/lists/ceph-users/msg70108.html Worked for me. // Johan

[ceph-users] Re: [EXTERNAL] [Pacific] ceph orch device ls does not return any HDD

2023-10-24 Thread Johan
I have checked my disks as well; all devices are hot-swappable HDDs and have the removable flag set. /Johan On 2023-10-24 at 13:38, Patrick Begou wrote: Hi Eugen, Yes Eugen, all the devices /dev/sd[abc] have the removable flag set to 1. Maybe because they are hot-swappable hard drives. I
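
A minimal sketch of how the removable flag can be checked on a host (/dev/sda is one of the devices mentioned in the post):

  lsblk -o NAME,RM,TYPE,SIZE,MODEL     # the RM column shows the removable flag per device
  cat /sys/block/sda/removable         # 1 = kernel reports the device as removable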

[ceph-users] Re: [EXTERNAL] [Pacific] ceph orch device ls does not return any HDD

2023-10-17 Thread Johan
Which OS are you running? What is the outcome of these two tests? cephadm --image quay.io/ceph/ceph:v16.2.10-20220920 ceph-volume inventory cephadm --image quay.io/ceph/ceph:v16.2.11-20230125 ceph-volume inventory /Johan On 2023-10-16 at 08:25, 544463...@qq.com wrote: I encountered a

[ceph-users] Re: [EXTERNAL] [Pacific] ceph orch device ls does not return any HDD

2023-10-17 Thread Johan
The problem appears in v16.2.11-20230125. I have no insight into the different commits. /Johan On 2023-10-16 at 08:25, 544463...@qq.com wrote: I encountered a similar problem on ceph 17.2.5; could you find which commit caused it?

[ceph-users] Re: [EXTERNAL] [Pacific] ceph orch device ls does not return any HDD

2023-10-13 Thread Johan
n1p3 False False
/dev/md2      279.27 GB  nvme1n1p4,nvme0n1p4  False  False
/dev/nvme0n1  931.51 GB  nvme0n1              False  False  KINGSTON SNV2S1000G
/dev/nvme1n1  931.51 GB  nvme1n1              False  False  KINGSTON SNV2S1000G

[ceph-users] Re: Misplaced objects greater than 100%

2023-04-05 Thread Johan Hattne
-1 0 root default" is a bit strange On 1 April 2023 at 01:01:39 CEST, Johan Hattne wrote: Here goes: # ceph -s cluster: id: e1327a10-8b8c-11ed-88b9-3cecef0e3946 health: HEALTH_OK services: mon: 5 daemons, quorum bcgonen-a,bcgonen-b,bcgo

[ceph-users] Re: Misplaced objects greater than 100%

2023-04-03 Thread Johan Hattne
those rack buckets were sitting next to the default root as opposed to under it. Now that's fixed, and the cluster is backfilling remapped PGs. // J On 2023-03-31 16:01, Johan Hattne wrote: Here goes: # ceph -s   cluster:     id: e1327a10-8b8c-11ed-88b9-3cecef0e3946     health: HEALTH_
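
A minimal sketch of the kind of CRUSH adjustment described above (the rack bucket name rack-a is a placeholder; the actual fix depends on the intended hierarchy):

  ceph osd crush tree                       # inspect the hierarchy; stray buckets show up beside root default
  ceph osd crush move rack-a root=default   # re-parent a rack bucket under the default root
  ceph osd crush tree                       # confirm the rack now sits under root default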

[ceph-users] Re: Misplaced objects greater than 100%

2023-03-31 Thread Johan Hattne
ost). // J On 2023-03-31 15:37, c...@elchaka.de wrote: Need to know some more about your cluster... ceph -s, ceph osd df tree, replica or EC? ... Perhaps this can give us some insight. Mehmet On 31 March 2023 at 18:08:38 CEST, Johan Hattne wrote: Dear all; Up until a few hours ago, I had

[ceph-users] Misplaced objects greater than 100%

2023-03-31 Thread Johan Hattne
; Johan

[ceph-users] Re: Failed to apply 1 service(s): mon

2022-11-07 Thread Johan
error message ... Thank you for your help! /Johan On 2022-11-07 at 09:09, Eugen Block wrote: Hi, how does your mon section of the myservice.yaml look? Could you please paste it? How did you configure the public network? Can you share # ceph config get mon public_network It sounds like you have

[ceph-users] Failed to apply 1 service(s): mon

2022-11-05 Thread Johan
/24 has host bits set. What have I done wrong and how can I correct it? /Johan Failed to apply mon spec ServiceSpec.from_json(yaml.safe_load('''service_type: mon service_name: mon placement: count: 3 ''')): 192.168.119.1/24 has host bits set Traceback (most recent c
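
A minimal sketch of the likely correction (assuming the intended subnet is 192.168.119.0/24, i.e. the address from the error with the host bits cleared; myservice.yaml is the spec file from the thread):

  ceph config set mon public_network 192.168.119.0/24   # network address only, no host bits
  ceph orch apply -i myservice.yaml                      # re-apply the mon spec afterwards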

[ceph-users] Re: OSD failed to load OSD map for epoch

2021-07-28 Thread Johan Hattne
Best wishes; Johan On 2021-07-27 23:48, Eugen Block wrote: Alright, it's great that you could fix it! In my one-node test cluster (Pacific) I see this smartctl version: [ceph: root@pacific /]# rpm -q smartmontools smartmontools-7.1-1.el8.x86_64 Quoting Johan Hattne: Thanks a lot, Eugen!

[ceph-users] Re: OSD failed to load OSD map for epoch

2021-07-27 Thread Johan Hattne
ols.org/ticket/1404). Unfortunately, the Octopus image ships with smartmontools 7.1, which will crash the kernel on e.g. "smartctl -a /dev/nvme0". Before switching to Octopus containers, I was using smartmontools from Debian backports, which does not have this problem. Does Pacific h
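
A minimal sketch of how one might check which smartmontools version a given container image ships (the image tag is a placeholder):

  cephadm --image quay.io/ceph/ceph:<tag> shell -- smartctl --version    # smartctl version inside the image
  cephadm --image quay.io/ceph/ceph:<tag> shell -- rpm -q smartmontools  # package version, as in the reply above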

[ceph-users] OSD failed to load OSD map for epoch

2021-07-23 Thread Johan Hattne
: 0 up (since 47h), 1 in (since 47h); epoch: e4378. Is there any way to get past this? For instance, could I coax the OSDs into epoch 4378? This is the first time I've dealt with a ceph disaster, so there may be all kinds of obvious things I'm missing. // Best wishes; Johan
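
A minimal diagnostic sketch for comparing the cluster's current OSD map epoch with the epoch mentioned above (4378 is taken from the post; whether the monitors still have that historical map depends on trimming):

  ceph osd dump | head -1               # first line reports the current OSD map epoch
  ceph osd getmap 4378 -o /tmp/osdmap   # fetch a specific historical epoch from the monitors, if still available
  osdmaptool --print /tmp/osdmap        # inspect the fetched map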

[ceph-users] Fwd: Kinetic support

2019-09-02 Thread Johan Thomsen
become EOL by the end of 2019. Or am I totally wrong? ^ Thank you in advance for your reply, /Johan