On Tue, May 7, 2024 at 7:54 AM Eugen Block wrote:
>
> Hi,
>
> I'm not the biggest rbd-mirror expert.
> As I understand it, if you use one-way mirroring you can failover to the
> remote site, continue to work there but there's no failover back to
> primary site. You would need to stop client IO on
On Thu, May 2, 2024 at 2:56 AM V A Prabha wrote:
>
> Dear Eugen,
> We have a scenario of DC and DR replication, and planned to explore RBD
> mirroring with both Journaling and Snapshot mechanism.
> I have a 5 TB storage at Primary DC and 5 TB storage at DR site with 2
> different
> ceph clusters
Hi,
I'm not the biggest rbd-mirror expert.
As I understand it, if you use one-way mirroring you can fail over to the
remote site and continue to work there, but there's no failback to the
primary site. You would need to stop client IO on DR, demote the image
and then import the remote images
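For reference, a manual failback along those lines would look roughly like
this (a sketch only; "data"/"vm1" and the cluster names are placeholders,
and with one-way mirroring you would first need an rbd-mirror daemon that
can pull from DR):

# on DR, after stopping client IO, give up the primary role
rbd --cluster dr mirror image demote data/vm1
# let the old primary catch up, forcing a resync if the copies diverged
rbd --cluster dc mirror image resync data/vm1
# once it is back in sync, promote the image on the primary site again
rbd --cluster dc mirror image promote data/vm1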
Dear Eugen,
I am expecting your response to the query below. Please guide me to a solution.
On May 2, 2024 at 12:25 PM V A Prabha wrote:
> Dear Eugen,
> We have a scenario of DC and DR replication, and planned to explore RBD
> mirroring with both Journaling and Snapshot mechanism.
> I have a 5 TB
Thank you Eugen.
Perhaps those who are familiar with ceph-ansible can help to explain and guide.
Thank you
On Thu, Feb 15, 2024 at 12:28 PM Eugen Block wrote:
> Sounds like ceph-ansible only supports one pool? I don't know, I've
> never used ceph-ansible. But if it created a rbd-mirror setup
>
Sounds like ceph-ansible only supports one pool? I don't know, I've
never used ceph-ansible. But if it created an rbd-mirror setup
successfully you should be able to configure more pools to be mirrored
manually as described in the docs [1].
[1]
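Manually, that would be something along these lines (just a sketch; the pool
name "volumes" and the peer/user names are placeholders, and the peer add
step is not needed if you use bootstrap tokens):

rbd --cluster site-a mirror pool enable volumes pool
rbd --cluster site-b mirror pool enable volumes pool
rbd --cluster site-b mirror pool peer add volumes client.rbd-mirror-peer@site-a
# verify the peer is registered
rbd --cluster site-b mirror pool info volumes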
Thank you Eugen, all errors have been solved and it is now syncing at pool
mode level. I am trying to use two or more pools, but I am still only able
to use the first one defined, as in this configuration:
ceph_rbd_mirror_configure: true
ceph_rbd_mirror_mode: "pool"
ceph_rbd_mirror_pool: "data"
So the error you reported first is now resolved? What does the mirror
daemon log?
Quoting Michel Niyoyita:
I have configured it as follows:
ceph_rbd_mirror_configure: true
ceph_rbd_mirror_mode: "pool"
ceph_rbd_mirror_pool: "images"
ceph_rbd_mirror_remote_cluster: "prod"
I have configured it as follows:
ceph_rbd_mirror_configure: true
ceph_rbd_mirror_mode: "pool"
ceph_rbd_mirror_pool: "images"
ceph_rbd_mirror_remote_cluster: "prod"
ceph_rbd_mirror_remote_user: "admin"
ceph_rbd_mirror_remote_key: "AQDGVctluyvAHRAAtjeIB3ZZ75L8yT/erZD7eg=="
You didn't answer whether the remote_key is defined. If it's not, then your
rbd-mirror daemon won't work, which matches what you pasted (daemon
health: ERROR). You need to fix that first.
Quoting Michel Niyoyita:
Thanks Eugen,
On my prod cluster (as I named it) this is the output of the following
Thanks Eugen,
On my prod cluster (as I named it) this is the output of the following command
checking the status: rbd mirror pool status images --cluster prod
health: WARNING
daemon health: UNKNOWN
image health: WARNING
images: 4 total
4 unknown
but on bup cluster there are some errors which I
Did you define ceph_rbd_mirror_remote_key? According to the docs [1]:
ceph_rbd_mirror_remote_key : This must be the same value as the user
({{ ceph_rbd_mirror_local_user }}) keyring secret from the primary
cluster.
[1]
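So, roughly, you would read the secret of that user on the primary cluster
and paste it into the variable on the backup side, e.g. (the user name is
whatever {{ ceph_rbd_mirror_local_user }} resolves to, shown here just as a
placeholder):

# on the primary ("prod") cluster
ceph auth get-key client.rbd-mirror-peer
# then in the backup cluster's group_vars
ceph_rbd_mirror_remote_key: "<key printed above>"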
On the other hand, the openstack docs [3] report this:
The mirroring of RBD images stored in Erasure Coded pools is not
currently supported by the ceph-rbd-mirror charm due to limitations
in the functionality of the Ceph rbd-mirror application.
But I can't tell if it's a limitation within
Hi,
I think you still need a replicated pool for the rbd metadata, check
out this thread [1]. Although I don't know if a mixed setup will work.
IIUC, in the referred thread the pools are set up identically on both
clusters, not sure if it will work if you only have one replicated
pool in
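For context, the usual layout is to keep the image metadata in a small
replicated pool and point the data objects at the EC pool, roughly like this
(pool names are placeholders; a recent Ceph release, where pg counts can be
autoscaled, is assumed):

ceph osd pool create rbd-meta replicated
ceph osd pool create rbd-data erasure
ceph osd pool set rbd-data allow_ec_overwrites true
rbd pool init rbd-meta
rbd create --size 100G --data-pool rbd-data rbd-meta/vm1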
Hi,
the question is whether both sites are used as primary clusters by
different clients or if it's for disaster recovery only (site1 fails,
make site2 primary). If both clusters are used independently with
different clients I would prefer to separate the pools, so this option:
PoolA
Hi,
The goal is to sync some VMs from site1 to site2 and, vice versa, sync
some VMs in the other direction.
I am thinking of using rbd mirroring for that. But I have little experience
with Ceph management.
I am searching for the best way to do that.
I could create two pools on each site, and cross
Hi,
just to clarify, you mean in addition to the rbd mirroring you want to
have another sync of different VMs between those clusters (potentially
within the same pools) or are you looking for one option only? Please
clarify. Anyway, I would use dedicated pools for rbd mirroring and
then
Hi,
Thanks
I am trying to find out the best way to synchronize VMs between two
HCI Proxmox clusters.
Each cluster will contain 3 compute/storage nodes and each node will
contain 4 NVMe OSD disks.
There will be a 10 Gb/s link between the two platforms.
The idea is to be able to sync VMs
Hi,
while your assumptions are correct (you can use the rest of the pool
for other non-mirrored images; at least I'm not aware of any
limitations), can I ask about the motivation behind this question? Mixing
different use-cases doesn't seem like a good idea to me. There's
always a chance
On Thu, Sep 15, 2022 at 3:33 PM Arthur Outhenin-Chalandre
wrote:
>
> Hi Ronny,
>
> > On 15/09/2022 14:32 ronny.lippold wrote:
> > hi arthur, some time went ...
> >
> > i would like to know, if there are some news of your setup.
> > do you have replication active running?
>
> No, there was no
Hi,
I've added as much logging as I can; it still shows nothing.
On Fri, 16 Sep 2022 at 21:35, Arthur Outhenin-Chalandre <
arthur.outhenin-chalan...@cern.ch> wrote:
> Hi Josef,
>
> > On 16/09/2022 14:15 Josef Johansson wrote:
> > Are you guys affected by
> > https://tracker.ceph.com/issues/57396
Hi Josef,
> On 16/09/2022 14:15 Josef Johansson wrote:
> Are you guys affected by
> https://tracker.ceph.com/issues/57396 ?
The issue with journal mode for me was more that the journal replay was slow,
which also made the journal grow... You should probably inspect your
rbd-mirror logs (and
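If it helps, this is roughly what I would look at (a sketch; the image name
and the rbd-mirror client id are placeholders, and the config command assumes
a release with the centralized `ceph config` store):

# how far the journal has grown and how far the remote client has replayed
rbd journal status --pool data --image vm1
rbd mirror image status data/vm1
# raise rbd-mirror verbosity, then watch the daemon log
ceph config set client.rbd-mirror.a debug_rbd_mirror 15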
Hi,
Are you guys affected by
https://tracker.ceph.com/issues/57396 ?
On Fri, 16 Sep 2022 at 09:40, ronny.lippold wrote:
> hi and thanks a lot.
> good to stay not alone and understand some right :)
>
> i will also tell, if there is something new.
>
>
> so from my point of view, the only
hi and thanks a lot.
good to know i'm not alone and understood some of it right :)
i will also let you know if there is something new.
so from my point of view, the only consistent way is to freeze the fs or
shut down the vm.
after that, start journal mirroring. so i think only journal can work.
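just for the record, the sequence i mean looks roughly like this (only a
sketch, mountpoint and image names are placeholders):

# inside the vm: flush and freeze the filesystem
fsfreeze --freeze /mnt/data
# on the ceph side: enable journaling and journal-based mirroring
rbd feature enable vms/vm1 journaling
rbd mirror image enable vms/vm1 journal
# inside the vm: thaw again
fsfreeze --unfreeze /mnt/data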
you helped me a lot,
Hi Ronny,
> On 15/09/2022 14:32 ronny.lippold wrote:
> hi arthur, some time went ...
>
> i would like to know, if there are some news of your setup.
> do you have replication active running?
No, there was no change at CERN. I am switching jobs as well actually so I
won't have much news for
hi arthur, some time has passed ...
i would like to know if there is any news about your setup.
do you have replication actively running?
we are actually using snapshot based mirroring and last time had a move of
both clusters.
after that, we had some damaged filesystems in the kvm vms.
did you ever have such
Hi Ronny,
Not sure what could have caused your outage with journaling TBH :/. Best
of luck for the Ceph/Proxmox bug!
On 5/23/22 20:09, ronny.lippold wrote:
> hi arthur,
>
> just for information. we had some horrible days ...
>
> last week, we shut some virtual machines down.
> most of them did
hi arthur,
just for information. we had some horrible days ...
last week, we shut some virtual machines down.
most of them did not come back. timeout on the qmp socket ... and no kvm
console.
so, we switched to our rbd-mirror cluster and ... yes, it was working, phew.
some days later, we tried to
On 2022-05-12 15:29, Arthur Outhenin-Chalandre wrote:
We are going towards mirror snapshots, but we haven't advertised it
internally so far and we won't enable it on every image; it would only
be for new volumes if people explicitly want that feature. So we are
probably not going to hit these
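For reference, enabling it only on selected images (instead of the whole
pool) would be roughly this; pool/image names and the interval are
placeholders:

rbd mirror pool enable volumes image
rbd mirror image enable volumes/vm1 snapshot
# create mirror snapshots automatically, e.g. every two hours
rbd mirror snapshot schedule add --pool volumes --image vm1 2h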
On 5/12/22 14:31, ronny.lippold wrote:
> many thanks, we will check the slides ... are looking great
>
>
>>>
>>> ok, you mean, that the growing came, cause of replication is to slow?
>>> strange ... i thought our cluster is not so big ... but ok.
>>> so, we cannot use journal ...
>>> maybe some
many thanks, we will check the slides ... they are looking great
ok, you mean that the growth came because the replication is too slow?
strange ... i thought our cluster is not so big ... but ok.
so, we cannot use journal ...
maybe someone else has the same result?
If you want a bit more details on
hi arthur and thanks for answering,
On 2022-05-12 13:06, Arthur Outhenin-Chalandre wrote:
Hi Ronny
Yes, according to my tests we were not able to get a good replication
speed on a single image (I think it was 30Mb/s per image, something like
that). So you probably have a few images that
Hi Ronny
On 5/12/22 12:47, ronny.lippold wrote:
> hi to all here
> we tried a lot and now, we need your help ...
>
> we are using 5 proxmox 7.2-3 server with kernel 5.15.30-2-pve and ceph
> 16.2.7..
> per server, we use 9 osd (8x 2tb, 1x8tb both sas ssd, connected via sas
> hba)
> the second
Hi Torkil,
On 12/16/21 08:01, Torkil Svensgaard wrote:
> I set up one peer with rx-tx, and it seems to replicate as it should,
> but the site-a status looks a little odd. Why down+unknown and status
> not found? Because of rx-tx peer with only one way active?
If you don't have any mirror from
On 12/15/21 14:18, Arthur Outhenin-Chalandre wrote:
On 12/15/21 13:50, Torkil Svensgaard wrote:
Ah, so as long as I don't run the mirror daemons on site-a there is no
risk of overwriting production data there?
To be perfectly clear there should be no risk whatsoever (as Ilya also
said). I
On 12/15/21 13:50, Torkil Svensgaard wrote:
> Ah, so as long as I don't run the mirror daemons on site-a there is no
> risk of overwriting production data there?
To be perfectly clear there should be no risk whatsoever (as Ilya also
said). I suggested to not run rbd-mirror on site-a so that
On 15/12/2021 13.58, Ilya Dryomov wrote:
Hi Torkil,
Hi Ilya
I would recommend sticking to rx-tx to make potential failback back to
the primary cluster easier. There shouldn't be any issue with running
rbd-mirror daemons at both sites either -- it doesn't start replicating
until it is
Hi Torkil,
I would recommend sticking to rx-tx to make potential failback back to
the primary cluster easier. There shouldn't be any issue with running
rbd-mirror daemons at both sites either -- it doesn't start replicating
until it is instructed to, either per-pool or per-image.
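For reference, a two-way (rx-tx) peer setup with the bootstrap workflow
would look roughly like this (site names and the pool are placeholders):

# on site-a
rbd --cluster site-a mirror pool peer bootstrap create --site-name site-a data > token
# on site-b, import with rx-tx so a later failback stays easy
rbd --cluster site-b mirror pool peer bootstrap import --site-name site-b --direction rx-tx data token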
Thanks,
On 15/12/2021 13.44, Arthur Outhenin-Chalandre wrote:
Hi Torkil,
Hi Arthur
On 12/15/21 13:24, Torkil Svensgaard wrote:
I'm confused by the direction parameter in the documentation[1]. If I
have my data at site-a and want one way replication to site-b should the
mirroring be configured as
Hi Torkil,
On 12/15/21 13:24, Torkil Svensgaard wrote:
> I'm confused by the direction parameter in the documentation[1]. If I
> have my data at site-a and want one way replication to site-b should the
> mirroring be configured as the documentation example, directionwise?
What you are
On Fri, May 29, 2020 at 12:09 PM Miguel Castillo
wrote:
> Happy New Year Ceph Community!
>
> I'm in the process of figuring out RBD mirroring with Ceph and having a
> really tough time with it. I'm trying to set up just one way mirroring
> right now on some test systems (baremetal servers, all
Ah, I tried every combination of cluster references in every command except
the verification one :) Thanks Jason for saving my forehead from more wall
punishment! I'm all set with this one now.
(On ceph1-dc2): rbd --cluster ceph mirror pool status fs_data --verbose
health: OK
images: 1 total
On Tue, Jan 7, 2020 at 11:17 AM wrote:
>
> Thanks for the reply Jason!
>
> We don't have selinux running on these machines, but I did fix the ownership
> on that file now so the ceph user can access it properly. The rbd mirror
> daemon does start up now, but the test image still shows
Thanks for the reply Jason!
We don't have selinux running on these machines, but I did fix the ownership on
that file now so the ceph user can access it properly. The rbd mirror daemon
does start up now, but the test image still shows down+unknown. I'll continue
poking at it, but if you or
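For anyone finding this later, the ownership fix was along these lines (the
keyring path and the systemd unit name are guesses and depend on how the
daemon was deployed):

chown ceph:ceph /etc/ceph/ceph.client.rbd-mirror.keyring
systemctl restart ceph-rbd-mirror@rbd-mirror.$(hostname)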
On Mon, Jan 6, 2020 at 4:59 PM wrote:
>
> Happy New Year Ceph Community!
>
> I'm in the process of figuring out RBD mirroring with Ceph and having a
> really tough time with it. I'm trying to set up just one way mirroring right
> now on some test systems (baremetal servers, all Debian 9). The