I was able to trigger the issue again.
- On the primary I created a snapshot named TestSnapper for disk vm-100-disk-1 (commands sketched below)
- Allowed the next scheduled rbd-mirror snapshot to complete
- At this point the snapshot shows up on the remote side:
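For reference, the steps above map to roughly the following commands (a sketch assuming the standard rbd CLI; a mirror snapshot can also be forced by hand instead of waiting for the schedule):

rbd snap create CephTestPool1/vm-100-disk-1@TestSnapper   # user snapshot on the primary
rbd mirror image snapshot CephTestPool1/vm-100-disk-1     # optional: force the next mirror snapshot
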
root@Bunkcephtest1:~# rbd mirror image status CephTestPool1/vm-100-disk-1
vm-100-disk-1:
global_id: a04e92df-3d64-4dc4-8ac8-eaba17b45403
state: up+replaying
description: replaying, {"bytes_per_second":0.0,"bytes_per_snapshot":0.0,"local_snapshot_timestamp":1611247200,"remote_snapshot_timestamp":1611247200,"replay_state":"idle"}
service: admin on Bunkcephmon1
last_update: 2021-01-21 11:46:24
peer_sites:
name: ccs
state: up+stopped
description: local image is primary
last_update: 2021-01-21 11:46:28
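What indicates the replay caught up is "replay_state":"idle" together with matching local/remote snapshot timestamps in the description. A quick way to pull those out (a sketch, assuming jq is available and the description keeps the layout shown above):

rbd mirror image status CephTestPool1/vm-100-disk-1 --format json |
  jq '.description | sub("^[^{]*"; "") | fromjson |
      {replay_state, local_snapshot_timestamp, remote_snapshot_timestamp}'
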
root@Ccscephtest1:/etc/pve/priv# rbd snap ls --all CephTestPool1/vm-100-disk-1
SNAPID NAME SIZE PROTECTED TIMESTAMP NAMESPACE
11532 TestSnapper 2 TiB Thu Jan 21 11:21:25 2021 user
11573 .mirror.primary.a04e92df-3d64-4dc4-8ac8-eaba17b45403.9525e4eb-41c0-499c-8879-0c7d9576e253 2 TiB Thu Jan 21 11:35:00 2021 mirror (primary peer_uuids:[debf975b-ebb8-432c-a94a-d3b101e0f770])
The sync appears to be complete, so I then clone the snapshot, map it, and
attempt to mount it.
root@Bunkcephtest1:~# rbd clone CephTestPool1/vm-100-disk-1@TestSnapper CephTestPool1/vm-100-disk-1-CLONE
root@Bunkcephtest1:~# rbd-nbd map CephTestPool1/vm-100-disk-1-CLONE --id admin --keyring /etc/ceph/ceph.client.admin.keyring
/dev/nbd0
root@Bunkcephtest1:~# mount /dev/nbd0 /usr2
mount: /usr2: wrong fs type, bad option, bad superblock on /dev/nbd0, missing codepage or helper program, or other error.
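To see what actually landed on the device before blaming the filesystem, something like this can help (hypothetical diagnostics using standard tools, not from the original transcript):

blkid /dev/nbd0          # should identify an ext4 superblock if it replicated intact
file -s /dev/nbd0        # same idea, reads the first bytes of the device directly
rbd-nbd unmap /dev/nbd0  # clean up the mapping afterwards
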
On the primary there are still no issues:
root@Ccscephtest1:/etc/pve/priv# rbd clone CephTestPool1/vm-100-disk-1@TestSnapper CephTestPool1/vm-100-disk-1-CLONE
root@Ccscephtest1:/etc/pve/priv# rbd-nbd map CephTestPool1/vm-100-disk-1-CLONE --id admin --keyring /etc/ceph/ceph.client.admin.keyring
/dev/nbd0
root@Ccscephtest1:/etc/pve/priv# mount /dev/nbd0 /usr2
From: "Jason Dillaman" <[email protected]>
To: "adamb" <[email protected]>
Cc: "Eugen Block" <[email protected]>, "ceph-users" <[email protected]>, "Matt
Wilder" <[email protected]>
Sent: Thursday, January 21, 2021 9:42:26 AM
Subject: Re: [ceph-users] Re: RBD-Mirror Snapshot Backup Image Uses
On Thu, Jan 21, 2021 at 9:40 AM Adam Boyhan <[email protected]> wrote:
>
> After the resync finished, I can mount it now.
>
> root@Bunkcephtest1:~# rbd clone CephTestPool1/vm-100-disk-0@TestSnapper1 CephTestPool1/vm-100-disk-0-CLONE
> root@Bunkcephtest1:~# rbd-nbd map CephTestPool1/vm-100-disk-0-CLONE --id admin --keyring /etc/ceph/ceph.client.admin.keyring
> /dev/nbd0
> root@Bunkcephtest1:~# mount /dev/nbd0 /usr2
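>
> (For reference, the resync was presumably requested beforehand with the standard command, run against the non-primary image:)
>
> rbd mirror image resync CephTestPool1/vm-100-disk-0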
>
> It makes me a bit nervous how it got into that state while everything
> appeared OK.
We unfortunately need to create the snapshots that are being synced as
a first step, but perhaps there are some extra guardrails we can put
on the system to prevent premature usage if the sync status doesn't
indicate that it's complete.
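Until then, a client-side guard could approximate that check before anything touches the snapshot. A rough sketch only (assuming jq and the status/description layout shown earlier in the thread; not an official tool):

IMG=CephTestPool1/vm-100-disk-1
json=$(rbd mirror image status "$IMG" --format json | jq -r '.description | sub("^[^{]*"; "")')
state=$(printf '%s\n' "$json" | jq -r '.replay_state')
local_ts=$(printf '%s\n' "$json" | jq -r '.local_snapshot_timestamp')
remote_ts=$(printf '%s\n' "$json" | jq -r '.remote_snapshot_timestamp')
# only trust the replicated snapshot once replay is idle and both sides
# agree on the last mirror snapshot
if [ "$state" = "idle" ] && [ "$local_ts" = "$remote_ts" ]; then
    rbd clone "$IMG"@TestSnapper "${IMG}-CLONE"
else
    echo "sync incomplete (state=$state local=$local_ts remote=$remote_ts)" >&2
    exit 1
fi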
> ________________________________
> From: "Jason Dillaman" <[email protected]>
> To: "adamb" <[email protected]>
> Cc: "Eugen Block" <[email protected]>, "ceph-users" <[email protected]>, "Matt
> Wilder" <[email protected]>
> Sent: Thursday, January 21, 2021 9:25:11 AM
> Subject: Re: [ceph-users] Re: RBD-Mirror Snapshot Backup Image Uses
>
> On Thu, Jan 21, 2021 at 8:34 AM Adam Boyhan <[email protected]> wrote:
> >
> > When cloning the snapshot on the remote cluster, I can't see my ext4
> > filesystem.
> >
> > I'm using the exact same snapshot on both sides. Shouldn't this be consistent?
>
> Yes. Has the replication process completed ("rbd mirror image status CephTestPool1/vm-100-disk-0")?
>
> > Primary Site
> > root@Ccscephtest1:~# rbd snap ls --all CephTestPool1/vm-100-disk-0 | grep TestSnapper1
> > 10621 TestSnapper1 2 TiB Thu Jan 21 08:15:22 2021 user
> >
> > root@Ccscephtest1:~# rbd clone CephTestPool1/vm-100-disk-0@TestSnapper1 CephTestPool1/vm-100-disk-0-CLONE
> > root@Ccscephtest1:~# rbd-nbd map CephTestPool1/vm-100-disk-0-CLONE --id admin --keyring /etc/ceph/ceph.client.admin.keyring
> > /dev/nbd0
> > root@Ccscephtest1:~# mount /dev/nbd0 /usr2
> >
> > Secondary Site
> > root@Bunkcephtest1:~# rbd snap ls --all CephTestPool1/vm-100-disk-0 | grep TestSnapper1
> > 10430 TestSnapper1 2 TiB Thu Jan 21 08:20:08 2021 user
> >
> > root@Bunkcephtest1:~# rbd clone CephTestPool1/vm-100-disk-0@TestSnapper1 CephTestPool1/vm-100-disk-0-CLONE
> > root@Bunkcephtest1:~# rbd-nbd map CephTestPool1/vm-100-disk-0-CLONE --id admin --keyring /etc/ceph/ceph.client.admin.keyring
> > /dev/nbd0
> > root@Bunkcephtest1:~# mount /dev/nbd0 /usr2
> > mount: /usr2: wrong fs type, bad option, bad superblock on /dev/nbd0, missing codepage or helper program, or other error.
> >
> >
> >
> > ________________________________
> > From: "adamb" <[email protected]>
> > To: "dillaman" <[email protected]>
> > Cc: "Eugen Block" <[email protected]>, "ceph-users" <[email protected]>, "Matt
> > Wilder" <[email protected]>
> > Sent: Wednesday, January 20, 2021 3:42:46 PM
> > Subject: Re: [ceph-users] Re: RBD-Mirror Snapshot Backup Image Uses
> >
> > Awesome information. I knew I had to be missing something.
> >
> > All of my clients will be far newer than Mimic, so I don't think that
> > will be an issue.
> >
> > Added the following to my ceph.conf on both clusters.
> >
> > rbd_default_clone_format = 2
> >
> > root@Bunkcephmon2:~# rbd clone CephTestPool1/vm-100-disk-0@TestSnapper1 CephTestPool2/vm-100-disk-0-CLONE
> > root@Bunkcephmon2:~# rbd ls CephTestPool2
> > vm-100-disk-0-CLONE
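> >
> > (A quick way to double-check that the clone really used the v2 format; assuming op_features are reported as on recent releases, a v2 child carries the clone-child op feature:)
> >
> > rbd info CephTestPool2/vm-100-disk-0-CLONE | grep op_features
> > # expected: op_features: clone-child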
> >
> > I am sure I will be back with more questions. Hoping to replace our Nimble
> > storage with Ceph and NVMe.
> >
> > Appreciate it!
> >
> > ________________________________
> > From: "Jason Dillaman" <[email protected]>
> > To: "adamb" <[email protected]>
> > Cc: "Eugen Block" <[email protected]>, "ceph-users" <[email protected]>, "Matt
> > Wilder" <[email protected]>
> > Sent: Wednesday, January 20, 2021 3:28:39 PM
> > Subject: Re: [ceph-users] Re: RBD-Mirror Snapshot Backup Image Uses
> >
> > On Wed, Jan 20, 2021 at 3:10 PM Adam Boyhan <[email protected]> wrote:
> > >
> > > That's what I thought as well, especially based on this.
> > >
> > >
> > >
> > > Note
> > >
> > > You may clone a snapshot from one pool to an image in another pool. For
> > > example, you may maintain read-only images and snapshots as templates in
> > > one pool, and writeable clones in another pool.
> > >
> > > root@Bunkcephmon2:~# rbd clone CephTestPool1/vm-100-disk-0@TestSnapper1 CephTestPool2/vm-100-disk-0-CLONE
> > > 2021-01-20T15:06:35.854-0500 7fb889ffb700 -1 librbd::image::CloneRequest: 0x55c7cf8417f0 validate_parent: parent snapshot must be protected
> > >
> > > root@Bunkcephmon2:~# rbd snap protect CephTestPool1/vm-100-disk-0@TestSnapper1
> > > rbd: protecting snap failed: (30) Read-only file system
> >
> > You have two options: (1) protect the snapshot on the primary image so
> > that the protection status replicates, or (2) utilize RBD clone v2,
> > which doesn't require protection but does require Mimic or later
> > clients [1].
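> >
> > (Sketches of both options, assuming the standard CLI; the per-command override is the one described in the clone v2 post [1]:)
> >
> > # option 1: run on the primary cluster so the protected status replicates
> > rbd snap protect CephTestPool1/vm-100-disk-0@TestSnapper1
> > # option 2: request a v2 clone explicitly; no protection needed
> > rbd clone --rbd-default-clone-format 2 CephTestPool1/vm-100-disk-0@TestSnapper1 CephTestPool2/vm-100-disk-0-CLONE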
> >
> > >
> > > From: "Eugen Block" <[email protected]>
> > > To: "adamb" <[email protected]>
> > > Cc: "ceph-users" <[email protected]>, "Matt Wilder"
> > > <[email protected]>
> > > Sent: Wednesday, January 20, 2021 3:00:54 PM
> > > Subject: Re: [ceph-users] Re: RBD-Mirror Snapshot Backup Image Uses
> > >
> > > But you should be able to clone the mirrored snapshot on the remote
> > > cluster even though it’s not protected, IIRC.
> > >
> > >
> > > Quoting Adam Boyhan <[email protected]>:
> > >
> > > > Two separate 4-node clusters with 10 OSDs in each node; the OSD
> > > > drives are Micron 9300 NVMes. The design is heavily based on the
> > > > Micron/Supermicro white papers.
> > > >
> > > > When I attempt to protect the snapshot on a remote image, it fails
> > > > with a read-only error.
> > > >
> > > > root@Bunkcephmon2:~# rbd snap protect CephTestPool1/vm-100-disk-0@TestSnapper1
> > > > rbd: protecting snap failed: (30) Read-only file system
> >
> > [1] https://ceph.io/community/new-mimic-simplified-rbd-image-cloning/
> >
> > --
> > Jason
> >
>
>
> --
> Jason
--
Jason