[ceph-users] Re: Unable to add OSD after removing completely

2024-02-13 Thread Anthony D'Atri
Glad you're up and running. Part of this process IIRC is ensuring that the drives don't have partitions and perhaps even GPT labels on them, otherwise orchestrators may not consider them available, out of an abundance of caution. > On Feb 13, 2024, at 01:43, sa...@dcl-online.com wrote: > > Tha
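For reference, a minimal sketch of clearing partitions and GPT labels so an orchestrator will treat a drive as available again; the host name and device path are placeholders, not from the original mail:

    # orchestrator way: zap the device so it shows up as available again
    ceph orch device zap ceph-node3 /dev/sdX --force

    # manual alternative: remove filesystem signatures and the GPT label
    wipefs -a /dev/sdX
    sgdisk --zap-all /dev/sdX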

[ceph-users] Re: concept of ceph and 2 datacenters

2024-02-13 Thread Vladimir Sigunov
Hi Ronny, This is a good starting point for your design. https://docs.ceph.com/en/latest/rados/operations/stretch-mode/ My personal experience says that a 2-DC Ceph deployment can suffer from a 'split brain' situation. If you have any chance to create a 3-DC configuration, I would suggest to con
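For context, a rough sketch of the stretch-mode setup described in the linked documentation, assuming five mons (a-e), two data centres plus a tiebreaker site, and a CRUSH rule named stretch_rule; the names are illustrative only:

    ceph mon set election_strategy connectivity
    ceph mon set_location a datacenter=dc1
    ceph mon set_location b datacenter=dc1
    ceph mon set_location c datacenter=dc2
    ceph mon set_location d datacenter=dc2
    ceph mon set_location e datacenter=dc3        # tiebreaker monitor at a third site
    ceph mon enable_stretch_mode e stretch_rule datacenter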

[ceph-users] concept of ceph and 2 datacenters

2024-02-13 Thread ronny . lippold
Hi there, I have a design/concept question, to see what's out there and which kind of redundancy you use. Currently we use two Ceph clusters with rbd-mirror to have a cold-standby clone. But rbd-mirror is not application consistent, so we cannot be sure that all VMs (kvm/proxy) are running. We

[ceph-users] Re: Unable to add OSD after removing completely

2024-02-13 Thread salam
Thank you for your prompt response, Dear Anthony. I have fixed the problem. As I had already removed all the OSDs from my third node, this time I removed the ceph-node3 node from my Ceph cluster. Then I re-added it as a new cluster node. I used the following method: ceph osd crush remove c
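The mail is truncated, but the usual manual sequence for removing OSDs and their host bucket looks roughly like the sketch below; osd.N stands in for each OSD on the node, and this is an assumption about the steps rather than a quote from the original:

    ceph osd out osd.N
    systemctl stop ceph-osd@N          # on ceph-node3
    ceph osd crush remove osd.N
    ceph auth del osd.N
    ceph osd rm N
    ceph osd crush remove ceph-node3   # remove the now-empty host bucket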

[ceph-users] Re: Slow RGW multisite sync due to "304 Not Modified" responses on primary zone

2024-02-13 Thread Alam Mohammad
Hi All, I just wanted to quickly follow up on my previous mail about "Slow RGW multisite sync due to '304 Not Modified' responses on primary zone". I wanted to highlight that I'm still facing the issue and urgently need your guidance to resolve it. I appreciate your attention to this matter. Th
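A few hedged diagnostic commands that are commonly used to narrow down multisite sync issues (the zone name is a placeholder; these were not part of the original mail):

    radosgw-admin sync status
    radosgw-admin data sync status --source-zone=primary-zone
    radosgw-admin sync error list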

[ceph-users] RECENT_CRASH: x daemons have recently crashed

2024-02-13 Thread Jaemin Joo
Hi everyone, I'd like to ask why this message appeared. There are no symptoms besides the warning message. After archiving the message, it happened again. It would be very helpful if you could give me a hint about what part I should look at. [log]--- root@ce
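For anyone hitting the same warning, a short sketch of inspecting and clearing RECENT_CRASH via the crash module (the crash ID is a placeholder):

    ceph crash ls                  # list recent crashes
    ceph crash info <crash-id>     # show the backtrace for one crash
    ceph crash archive-all         # acknowledge them and clear the health warning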

[ceph-users] Pacific Bug?

2024-02-13 Thread Alex
Hello Ceph Gurus! I'm running the Ceph Pacific version. If I run ceph orch host ls --label osds, it shows all hosts with the osds label; if I run ceph orch host ls --host-pattern host1, it shows just host1. Both work as expected. But when combining the two, the label flag seems to "take over": ceph orch host ls --label osds --host-pa
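Reconstructing the commands described above (host and label names as given in the mail), the observed behaviour is presumably:

    ceph orch host ls --label osds                        # lists all hosts with the osds label
    ceph orch host ls --host-pattern host1                # lists just host1
    ceph orch host ls --label osds --host-pattern host1   # expected the intersection; the label filter appears to win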

[ceph-users] Re: RBD Mirroring

2024-02-13 Thread Eugen Block
So the error you reported first is now resolved? What does the mirror daemon log say? Quoting Michel Niyoyita: I have configured it as follows: ceph_rbd_mirror_configure: true ceph_rbd_mirror_mode: "pool" ceph_rbd_mirror_pool: "images" ceph_rbd_mirror_remote_cluster: "prod" ceph_rbd_mirror_re

[ceph-users] Help with setting-up Influx MGR module: ERROR - queue is full

2024-02-13 Thread Fulvio Galeazzi
Hi there! Does anyone have experience with the Influx Ceph mgr module? I am using 17.2.7 on CentOS8-Stream. I configured one of my clusters and I test with "ceph influx send" (whereas the official doc https://docs.ceph.com/en/quincy/mgr/influx/ mentions the non-existent "ceph influx self-test"), but no
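For comparison, a minimal sketch of an influx module setup as I understand the documentation; the hostname, database and credentials are placeholders, and the exact option names should be double-checked against the docs for 17.2.7:

    ceph mgr module enable influx
    ceph config set mgr mgr/influx/hostname influxdb.example.com
    ceph config set mgr mgr/influx/database ceph
    ceph config set mgr mgr/influx/username ceph
    ceph config set mgr mgr/influx/password secret
    ceph influx send               # push one batch of stats manually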

[ceph-users] Announcing go-ceph v0.26.0

2024-02-13 Thread Sven Anderson
We are happy to announce another release of the go-ceph API library. This is a regular release following our every-two-months release cadence. https://github.com/ceph/go-ceph/releases/tag/v0.26.0 The library includes bindings that aim to play
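To pull the new release into a Go module (assuming the usual librados/librbd development packages are installed for cgo):

    go get github.com/ceph/go-ceph@v0.26.0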

[ceph-users] Re: RBD Mirroring

2024-02-13 Thread Michel Niyoyita
I have configured it as follows: ceph_rbd_mirror_configure: true ceph_rbd_mirror_mode: "pool" ceph_rbd_mirror_pool: "images" ceph_rbd_mirror_remote_cluster: "prod" ceph_rbd_mirror_remote_user: "admin" ceph_rbd_mirror_remote_key: "AQDGVctluyvAHRAAtjeIB3ZZ75L8yT/erZD7eg==" ceph_rbd_mirror_remote_mon

[ceph-users] Re: RBD Mirroring

2024-02-13 Thread Eugen Block
You didn't answer whether the remote_key is defined. If it's not, then your rbd-mirror daemon won't work, which confirms what you pasted (daemon health: ERROR). You need to fix that first. Quoting Michel Niyoyita: Thanks Eugen, On my prod Cluster (as named it) this is the output the following

[ceph-users] Re: RBD Mirroring

2024-02-13 Thread Michel Niyoyita
Thanks Eugen, On my prod cluster (as I named it), this is the output of the following command checking the status: rbd mirror pool status images --cluster prod health: WARNING daemon health: UNKNOWN image health: WARNING images: 4 total 4 unknown but on the bup cluster there are some errors which I am
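For completeness, the status can be compared on both sides roughly like this, using the cluster names from this thread (prod and bup):

    rbd mirror pool status images --cluster prod
    rbd mirror pool status images --cluster bup
    rbd mirror pool info images --cluster bup     # shows the configured peer, if any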

[ceph-users] Re: Accumulation of removed_snaps_queue After Deleting Snapshots in Ceph RBD

2024-02-13 Thread Josh Baergen
> 24 active+clean+snaptrim I see snaptrimming happening in your status output - do you know if that was happening before restarting those OSDs? This is the mechanism by which OSDs clean up deleted snapshots, and once all OSDs have completed snaptrim for a given snapshot it should be removed from
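A hedged way to watch this from the client side, assuming an Octopus-or-later cluster where the pool detail output includes the removed_snaps_queue:

    ceph osd pool ls detail                   # look for removed_snaps_queue on the affected pool
    ceph pg dump pgs_brief | grep snaptrim    # PGs still trimming deleted snapshots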

[ceph-users] Re: RBD Mirroring

2024-02-13 Thread Eugen Block
Did you define ceph_rbd_mirror_remote_key? According to the docs [1]: ceph_rbd_mirror_remote_key : This must be the same value as the user ({{ ceph_rbd_mirror_local_user }}) keyring secret from the primary cluster. [1] https://docs.ceph.com/projects/ceph-ansible/en/latest/rbdmirror/index.ht
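In other words, the value for ceph_rbd_mirror_remote_key should match the secret of that user on the primary cluster, which can be read out roughly like this (cluster and user names taken from this thread, so treat it as a sketch):

    ceph auth get-key client.admin --cluster prod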

[ceph-users] Remove cluster_network without routing

2024-02-13 Thread Torkil Svensgaard
Hi Cephadm Reef 18.2.0. We would like to remove our cluster_network without stopping the cluster and without having to route between the networks. global advanced cluster_network 192.168.100.0/24 * global advanced public_network 172.21.12
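One possible approach, sketched here as an assumption rather than a tested procedure: drop the option from the config database and then restart the OSDs so they stop binding to the cluster network:

    ceph config rm global cluster_network
    ceph orch daemon restart osd.<id>   # repeat per OSD, one failure domain at a time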

[ceph-users] RBD Mirroring

2024-02-13 Thread Michel Niyoyita
Hello team, I have two clusters in a testing environment deployed using ceph-ansible, running on Ubuntu 20.04 with the Ceph Pacific version. I am testing mirroring between the two clusters, in pool mode. Our production cluster is backend storage for OpenStack. This is how I configured the rbdmirros.y