[ceph-users] Re: RBD Mirroring with Journaling and Snapshot mechanism

2024-05-09 Thread Ramana Krisna Venkatesh Raja
On Tue, May 7, 2024 at 7:54 AM Eugen Block wrote: > > Hi, > > I'm not the biggest rbd-mirror expert. > As I understand it, if you use one-way mirroring you can fail over to the > remote site and continue to work there, but there's no failback to the > primary site. You would need to stop client IO on
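For reference, the manual failback flow described here looks roughly like the sketch below; the cluster names site-a/site-b and the image data/vm1 are placeholders, and the exact steps depend on the setup.

  # On the DR site, stop client IO first, then demote the image
  # so it is no longer writable there.
  rbd --cluster site-b mirror image demote data/vm1

  # With one-way mirroring nothing replicates back automatically, so
  # copy the image to the primary manually, e.g. via export/import:
  rbd --cluster site-b export data/vm1 - | rbd --cluster site-a import - data/vm1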

[ceph-users] Re: RBD Mirroring with Journaling and Snapshot mechanism

2024-05-09 Thread Ramana Krisna Venkatesh Raja
On Thu, May 2, 2024 at 2:56 AM V A Prabha wrote: > > Dear Eugen, > We have a scenario of DC and DR replication, and plan to explore RBD > mirroring with both the journaling and snapshot mechanisms. > I have 5 TB of storage at the primary DC and 5 TB of storage at the DR site with 2 > different > ceph clusters
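Both mechanisms are chosen per image once mirroring is active on the pool; a minimal sketch (the pool dc_pool and the image names are placeholders):

  # Image mode: choose the mechanism per image.
  rbd mirror pool enable dc_pool image

  # Journal-based: adds the journaling feature and replays every write.
  rbd mirror image enable dc_pool/vm-disk-1 journal

  # Snapshot-based: replicates periodic mirror snapshots instead.
  rbd mirror image enable dc_pool/vm-disk-2 snapshot
  rbd mirror snapshot schedule add --pool dc_pool 30m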

[ceph-users] Re: RBD Mirroring with Journaling and Snapshot mechanism

2024-05-07 Thread Eugen Block
Hi, I'm not the biggest rbd-mirror expert. As I understand it, if you use one-way mirroring you can fail over to the remote site and continue to work there, but there's no failback to the primary site. You would need to stop client IO on DR, demote the image, and then import the remote images

[ceph-users] Re: RBD Mirroring with Journaling and Snapshot mechanism

2024-05-05 Thread V A Prabha
Dear Eugen, Awaiting your response to the query below. Please guide me to a solution. On May 2, 2024 at 12:25 PM V A Prabha wrote: > Dear Eugen, > We have a scenario of DC and DR replication, and plan to explore RBD > mirroring with both the journaling and snapshot mechanisms. > I have a 5 TB

[ceph-users] Re: RBD Mirroring

2024-02-15 Thread Michel Niyoyita
Thank you Eugen. Could those who are familiar with ceph-ansible help explain and guide? Thank you. On Thu, Feb 15, 2024 at 12:28 PM Eugen Block wrote: > Sounds like ceph-ansible only supports one pool? I don't know, I've > never used ceph-ansible. But if it created an rbd-mirror setup >

[ceph-users] Re: RBD Mirroring

2024-02-15 Thread Eugen Block
Sounds like ceph-ansible only supports one pool? I don't know, I've never used ceph-ansible. But if it created an rbd-mirror setup successfully, you should be able to configure more pools to be mirrored manually, as described in the docs [1]. [1]
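A rough sketch of what configuring more pools manually could look like once the daemon and peer exist; the pool name images2 is a placeholder, and prod/bup follow the cluster names used later in this thread:

  # Enable pool-mode mirroring for the extra pool on both clusters.
  rbd --cluster prod mirror pool enable images2 pool
  rbd --cluster bup mirror pool enable images2 pool

  # Register the existing peer for the new pool on the backup side.
  rbd --cluster bup mirror pool peer add images2 client.admin@prod

  # Verify.
  rbd --cluster bup mirror pool status images2 --verbose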

[ceph-users] Re: RBD Mirroring

2024-02-15 Thread Michel Niyoyita
Thank you Eugen. All errors have been solved and it is now syncing at the pool-mode level. I am trying to use two or more pools, but I am still only able to use the first one defined, as in this configuration: ceph_rbd_mirror_configure: true ceph_rbd_mirror_mode: "pool" ceph_rbd_mirror_pool: "data"

[ceph-users] Re: RBD Mirroring

2024-02-13 Thread Eugen Block
So the error you reported first is now resolved? What does the mirror daemon log? Quoting Michel Niyoyita: I have configured it as follows: ceph_rbd_mirror_configure: true ceph_rbd_mirror_mode: "pool" ceph_rbd_mirror_pool: "images" ceph_rbd_mirror_remote_cluster: "prod"

[ceph-users] Re: RBD Mirroring

2024-02-13 Thread Michel Niyoyita
I have configured it as follows: ceph_rbd_mirror_configure: true ceph_rbd_mirror_mode: "pool" ceph_rbd_mirror_pool: "images" ceph_rbd_mirror_remote_cluster: "prod" ceph_rbd_mirror_remote_user: "admin" ceph_rbd_mirror_remote_key: "AQDGVctluyvAHRAAtjeIB3ZZ75L8yT/erZD7eg=="

[ceph-users] Re: RBD Mirroring

2024-02-13 Thread Eugen Block
You didn't answer whether the remote_key is defined. If it's not, then your rbd-mirror daemon won't work, which confirms what you pasted (daemon health: ERROR). You need to fix that first. Quoting Michel Niyoyita: Thanks Eugen, On my prod cluster (as I named it) this is the output of the following
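To dig into a "daemon health: ERROR", the verbose status and the daemon's own log are the usual starting points; a sketch (the exact unit name varies by host, so list the units first):

  # Per-daemon state and the last error, if any.
  rbd --cluster bup mirror pool status images --verbose

  # Find the exact rbd-mirror unit name on this host, then read its log.
  systemctl list-units 'ceph-rbd-mirror*'
  journalctl -u 'ceph-rbd-mirror@*' --since '1 hour ago'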

[ceph-users] Re: RBD Mirroring

2024-02-13 Thread Michel Niyoyita
Thanks Eugen. On my prod cluster (as I named it) this is the output of the following command checking the status: rbd mirror pool status images --cluster prod -> health: WARNING, daemon health: UNKNOWN, image health: WARNING, images: 4 total, 4 unknown. But on the bup cluster there are some errors which I

[ceph-users] Re: RBD Mirroring

2024-02-13 Thread Eugen Block
Did you define ceph_rbd_mirror_remote_key? According to the docs [1]: ceph_rbd_mirror_remote_key : This must be the same value as the user ({{ ceph_rbd_mirror_local_user }}) keyring secret from the primary cluster. [1]

[ceph-users] Re: RBD mirroring to an EC pool

2024-02-05 Thread Eugen Block
On the other hand, the OpenStack docs [3] report this: The mirroring of RBD images stored in Erasure Coded pools is not currently supported by the ceph-rbd-mirror charm due to limitations in the functionality of the Ceph rbd-mirror application. But I can't tell if it's a limitation within

[ceph-users] Re: RBD mirroring to an EC pool

2024-02-05 Thread Eugen Block
Hi, I think you still need a replicated pool for the rbd metadata; check out this thread [1]. Although I don't know if a mixed setup will work. IIUC, in the referenced thread the pools are set up identically on both clusters; I'm not sure if it will work if you only have one replicated pool in
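The usual pattern from the referenced thread: the image and its metadata live in a small replicated pool, and only the data objects go to the EC pool via --data-pool. A sketch with placeholder pool names rbd_meta and rbd_ec:

  # RBD on EC requires partial-overwrite support on the EC pool.
  ceph osd pool set rbd_ec allow_ec_overwrites true

  # Create the image in the replicated pool; its data objects
  # are placed in the EC pool.
  rbd create --size 100G --data-pool rbd_ec rbd_meta/myimage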

[ceph-users] Re: RBD mirroring, asking for clarification

2023-05-03 Thread Eugen Block
Hi, the question is whether both sites are used as primary clusters by different clients, or if it's for disaster recovery only (site1 fails, make site2 primary). If both clusters are used independently with different clients, I would prefer to separate the pools, so this option: PoolA

[ceph-users] Re: RBD mirroring, asking for clarification

2023-05-03 Thread wodel youchi
Hi, The goal is to sync some VMs from site1 to site2, and vice versa to sync some VMs the other way. I am thinking of using rbd mirroring for that, but I have little experience with Ceph management. I am searching for the best way to do that. I could create two pools on each site, and cross

[ceph-users] Re: RBD mirroring, asking for clarification

2023-05-03 Thread Eugen Block
Hi, just to clarify: do you mean that in addition to the rbd mirroring you want another sync of different VMs between those clusters (potentially within the same pools), or are you looking for one option only? Please clarify. Anyway, I would use dedicated pools for rbd mirroring and then

[ceph-users] Re: RBD mirroring, asking for clarification

2023-05-03 Thread wodel youchi
Hi, thanks. I am trying to find the best way to synchronize VMs between two HCI Proxmox clusters. Each cluster will contain 3 compute/storage nodes, and each node will contain 4 NVMe OSD disks. There will be a 10 Gb/s link between the two platforms. The idea is to be able to sync VMs

[ceph-users] Re: RBD mirroring, asking for clarification

2023-05-02 Thread Eugen Block
Hi, your assumptions are correct (you can use the rest of the pool for other non-mirrored images); at least I'm not aware of any limitations. May I ask for the motivation behind this question? Mixing different use cases doesn't seem like a good idea to me. There's always a chance
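Mixing mirrored and non-mirrored images in one pool works via image-mode mirroring, where nothing replicates unless enabled explicitly; a minimal sketch with placeholder names:

  # Image mode: images are not mirrored by default.
  rbd mirror pool enable mypool image

  # Only this image replicates; everything else in mypool stays local.
  rbd mirror image enable mypool/important-vm snapshot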

[ceph-users] Re: rbd mirroring - journal growing and snapshot high io load

2022-09-19 Thread Ilya Dryomov
On Thu, Sep 15, 2022 at 3:33 PM Arthur Outhenin-Chalandre wrote: > > Hi Ronny, > > > On 15/09/2022 14:32 ronny.lippold wrote: > > hi arthur, some time has passed ... > > > > i would like to know if there is any news on your setup. > > do you have replication actively running? > > No, there was no

[ceph-users] Re: rbd mirroring - journal growing and snapshot high io load

2022-09-16 Thread Josef Johansson
Hi, I've added as much logging as I can; it still shows nothing. On Fri, 16 Sep 2022 at 21:35, Arthur Outhenin-Chalandre < arthur.outhenin-chalan...@cern.ch> wrote: > Hi Josef, > > > On 16/09/2022 14:15 Josef Johansson wrote: > > Are you guys affected by > > https://tracker.ceph.com/issues/57396

[ceph-users] Re: rbd mirroring - journal growing and snapshot high io load

2022-09-16 Thread Arthur Outhenin-Chalandre
Hi Josef, > On 16/09/2022 14:15 Josef Johansson wrote: > Are you guys affected by > https://tracker.ceph.com/issues/57396 ? The issue with journal mode for me was more that the journal replay was slow, which also made the journal grow... You should probably inspect your rbd-mirror logs (and
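Two ways to check whether journal replay is keeping up; a sketch, with pool and image names as placeholders:

  # Journal object count and the registered clients' commit positions.
  rbd journal status --pool mypool --image myimage

  # On the secondary, the status description includes a replay lag
  # counter (entries_behind_master) for journal-based images.
  rbd mirror image status mypool/myimage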

[ceph-users] Re: rbd mirroring - journal growing and snapshot high io load

2022-09-16 Thread Josef Johansson
Hi, Are you guys affected by https://tracker.ceph.com/issues/57396 ? On Fri, 16 Sep 2022 at 09:40, ronny.lippold wrote: > hi and thanks a lot. > good to know i'm not alone and understood some of it right :) > > i will also report if there is something new. > > > so from my point of view, the only

[ceph-users] Re: rbd mirroring - journal growing and snapshot high io load

2022-09-16 Thread ronny.lippold
hi and thanks a lot. good to know i'm not alone and understood some of it right :) i will also report if there is something new. so from my point of view, the only consistent way is to freeze the fs or shut down the vm, and after that start journal mirroring. so i think only journal can work. you helped me a lot,
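A sketch of the freeze-then-snapshot idea for filesystem-consistent replication; the mountpoint and image names are placeholders, and in a VM this would typically be driven through qemu-guest-agent rather than by hand:

  # Inside the guest: quiesce the filesystem.
  fsfreeze --freeze /mnt/data

  # From the storage side: take an on-demand mirror snapshot.
  rbd mirror image snapshot mypool/vm-disk

  # Inside the guest: thaw.
  fsfreeze --unfreeze /mnt/data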

[ceph-users] Re: rbd mirroring - journal growing and snapshot high io load

2022-09-15 Thread Arthur Outhenin-Chalandre
Hi Ronny, > On 15/09/2022 14:32 ronny.lippold wrote: > hi arthur, some time has passed ... > > i would like to know if there is any news on your setup. > do you have replication actively running? No, there was no change at CERN. I am switching jobs as well, actually, so I won't have much news for

[ceph-users] Re: rbd mirroring - journal growing and snapshot high io load

2022-09-15 Thread ronny.lippold
hi arthur, some time has passed ... i would like to know if there is any news on your setup. do you have replication actively running? we are actually using snapshot-based mirroring and recently had a move of both clusters. after that, we had some damaged filesystems in the kvm vms. did you ever have such

[ceph-users] Re: rbd mirroring - journal growing and snapshot high io load

2022-05-24 Thread Arthur Outhenin-Chalandre
Hi Ronny, Not sure what could have caused your outage with journaling TBH :/. Best of luck with the Ceph/Proxmox bug! On 5/23/22 20:09, ronny.lippold wrote: > hi arthur, > > just for information. we had some horrible days ... > > last week, we shut some virtual machines down. > most of them did

[ceph-users] Re: rbd mirroring - journal growing and snapshot high io load

2022-05-23 Thread ronny.lippold
hi arthur, just for information. we had some horrible days ... last week, we shut some virtual machines down. most of them did not come back. timeout on the qmp socket ... and no kvm console. so, we switched to our rbd-mirror cluster and ... yes, it was working, phew. some days later, we tried to

[ceph-users] Re: rbd mirroring - journal growing and snapshot high io load

2022-05-16 Thread ronny.lippold
On 2022-05-12 15:29, Arthur Outhenin-Chalandre wrote: We are going towards mirror snapshots, but we didn't advertise it internally so far and we won't enable it on every image; it would only be for new volumes if people explicitly want that feature. So we are probably not going to hit these

[ceph-users] Re: rbd mirroring - journal growing and snapshot high io load

2022-05-12 Thread Arthur Outhenin-Chalandre
On 5/12/22 14:31, ronny.lippold wrote: > many thanks, we will check the slides ... they are looking great > > >>> ok, you mean that the growth came because replication is too slow? >>> strange ... i thought our cluster was not so big ... but ok. >>> so, we cannot use journal ... >>> maybe some

[ceph-users] Re: rbd mirroring - journal growing and snapshot high io load

2022-05-12 Thread ronny.lippold
many thanks, we will check the slides ... they are looking great. ok, you mean that the growth came because replication is too slow? strange ... i thought our cluster was not so big ... but ok. so, we cannot use journal ... maybe someone else has the same result? If you want a bit more details on

[ceph-users] Re: rbd mirroring - journal growing and snapshot high io load

2022-05-12 Thread ronny.lippold
hi arthur and thanks for answering. On 2022-05-12 13:06, Arthur Outhenin-Chalandre wrote: Hi Ronny, yes, according to my tests we were not able to get a good replication speed on a single image (I think it was around 30 Mb/s per image). So you probably have a few images that

[ceph-users] Re: rbd mirroring - journal growing and snapshot high io load

2022-05-12 Thread Arthur Outhenin-Chalandre
Hi Ronny, On 5/12/22 12:47, ronny.lippold wrote: > hi to all here, > we tried a lot and now we need your help ... > > we are using 5 proxmox 7.2-3 servers with kernel 5.15.30-2-pve and ceph > 16.2.7. > per server, we use 9 osds (8x 2 tb, 1x 8 tb, both sas ssd, connected via sas > hba) the second

[ceph-users] Re: RBD mirroring bootstrap peers - direction

2021-12-16 Thread Arthur Outhenin-Chalandre
Hi Torkil, On 12/16/21 08:01, Torkil Svensgaard wrote: > I set up one peer with rx-tx, and it seems to replicate as it should, > but the site-a status looks a little odd. Why down+unknown and status > not found? Because of an rx-tx peer with only one way active? If you don't have any mirror from
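The down+unknown on site-a reflects that no rbd-mirror daemon there is reporting in; if failback matters, one can be added later. A sketch assuming a cephadm-managed cluster (the placement hostname is a placeholder):

  # Deploy an rbd-mirror daemon on site-a.
  ceph orch apply rbd-mirror --placement="site-a-host1"

  # The local side of the peer should then report a proper state.
  rbd mirror pool status mypool --verbose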

[ceph-users] Re: RBD mirroring bootstrap peers - direction

2021-12-15 Thread Torkil Svensgaard
On 12/15/21 14:18, Arthur Outhenin-Chalandre wrote: On 12/15/21 13:50, Torkil Svensgaard wrote: Ah, so as long as I don't run the mirror daemons on site-a there is no risk of overwriting production data there? To be perfectly clear, there should be no risk whatsoever (as Ilya also said). I

[ceph-users] Re: RBD mirroring bootstrap peers - direction

2021-12-15 Thread Arthur Outhenin-Chalandre
On 12/15/21 13:50, Torkil Svensgaard wrote: > Ah, so as long as I don't run the mirror daemons on site-a there is no > risk of overwriting production data there? To be perfectly clear, there should be no risk whatsoever (as Ilya also said). I suggested not running rbd-mirror on site-a so that

[ceph-users] Re: RBD mirroring bootstrap peers - direction

2021-12-15 Thread Torkil Svensgaard
On 15/12/2021 13.58, Ilya Dryomov wrote: Hi Torkil, Hi Ilya I would recommend sticking to rx-tx to make potential failback to the primary cluster easier. There shouldn't be any issue with running rbd-mirror daemons at both sites either -- it doesn't start replicating until it is

[ceph-users] Re: RBD mirroring bootstrap peers - direction

2021-12-15 Thread Ilya Dryomov
Hi Torkil, I would recommend sticking to rx-tx to make potential failback to the primary cluster easier. There shouldn't be any issue with running rbd-mirror daemons at both sites either -- it doesn't start replicating until it is instructed to, either per-pool or per-image. Thanks,

[ceph-users] Re: RBD mirroring bootstrap peers - direction

2021-12-15 Thread Torkil Svensgaard
On 15/12/2021 13.44, Arthur Outhenin-Chalandre wrote: Hi Torkil, Hi Arthur On 12/15/21 13:24, Torkil Svensgaard wrote: I'm confused by the direction parameter in the documentation [1]. If I have my data at site-a and want one-way replication to site-b, should the mirroring be configured as

[ceph-users] Re: RBD mirroring bootstrap peers - direction

2021-12-15 Thread Arthur Outhenin-Chalandre
Hi Torkil, On 12/15/21 13:24, Torkil Svensgaard wrote: > I'm confused by the direction parameter in the documentation [1]. If I > have my data at site-a and want one-way replication to site-b, should the > mirroring be configured as in the documentation example, direction-wise? What you are
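The direction is set when the bootstrap token is imported on the receiving cluster; a sketch of one-way replication into site-b (the pool name mypool is a placeholder):

  # On site-a (source): create a bootstrap token for the pool.
  rbd --cluster site-a mirror pool peer bootstrap create \
      --site-name site-a mypool > token

  # On site-b (target): import it. rx-only gives one-way replication
  # into site-b; rx-tx would also allow the reverse direction later.
  rbd --cluster site-b mirror pool peer bootstrap import \
      --site-name site-b --direction rx-only mypool token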

[ceph-users] Re: RBD Mirroring down+unknown

2020-05-29 Thread Jason Dillaman
On Fri, May 29, 2020 at 12:09 PM Miguel Castillo wrote: > Happy New Year Ceph Community! > > I'm in the process of figuring out RBD mirroring with Ceph and having a > really tough time with it. I'm trying to set up just one-way mirroring > right now on some test systems (baremetal servers, all

[ceph-users] Re: RBD Mirroring down+unknown

2020-01-07 Thread miguel . castillo
Ah, I tried every combination of cluster references in every other command but the verification :) Thanks Jason for saving my forehead from more wall punishment! I'm all set with this one now. (On ceph1-dc2): rbd --cluster ceph mirror pool status fs_data --verbose -> health: OK, images: 1 total

[ceph-users] Re: RBD Mirroring down+unknown

2020-01-07 Thread Jason Dillaman
On Tue, Jan 7, 2020 at 11:17 AM wrote: > > Thanks for the reply Jason! > > We don't have SELinux running on these machines, but I did fix the ownership > on that file now so the ceph user can access it properly. The rbd-mirror > daemon does start up now, but the test image still shows

[ceph-users] Re: RBD Mirroring down+unknown

2020-01-07 Thread miguel . castillo
Thanks for the reply Jason! We don't have SELinux running on these machines, but I did fix the ownership on that file now so the ceph user can access it properly. The rbd-mirror daemon does start up now, but the test image still shows down+unknown. I'll continue poking at it, but if you or
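The ownership fix described here amounts to making the keyring readable by the user the daemon runs as; a sketch with a placeholder keyring path and a glob for the unit name (check the exact names on your host first):

  # Match the path to your actual rbd-mirror keyring.
  chown ceph:ceph /etc/ceph/ceph.client.rbd-mirror.keyring
  chmod 600 /etc/ceph/ceph.client.rbd-mirror.keyring

  # Restart the daemon; list units to confirm the exact name.
  systemctl restart 'ceph-rbd-mirror@*'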

[ceph-users] Re: RBD Mirroring down+unknown

2020-01-06 Thread Jason Dillaman
On Mon, Jan 6, 2020 at 4:59 PM wrote: > > Happy New Year Ceph Community! > > I'm in the process of figuring out RBD mirroring with Ceph and having a > really tough time with it. I'm trying to set up just one-way mirroring right > now on some test systems (baremetal servers, all Debian 9). The