Re: [ceph-users] RBD mirroring feat not supported

2019-01-11 Thread Jason Dillaman
krbd doesn't yet support several RBD features, including journaling
[1]. The only current way to use object-map, fast-diff, deep-flatten,
and/or journaling features against a block device is to use "rbd
device map --device-type nbd " (or use a TCMU loopback
device to create an librbd-backed SCSI block device).
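
As a rough sketch (the pool/image names below are placeholders, and rbd-nbd
usually has to be installed as a separate package):

  # enable the extra features on the image (layering is normally already on)
  rbd feature enable mypool/myimage exclusive-lock object-map fast-diff journaling

  # map it through the NBD driver instead of krbd; this returns a /dev/nbdX device
  rbd device map --device-type nbd mypool/myimage

  # unmap when done
  rbd device unmap --device-type nbd mypool/myimage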

On Fri, Jan 11, 2019 at 1:20 AM Hammad Abdullah
 wrote:
>
> Hey guys,
>
> I'm trying to mount a Ceph image with the journaling, layering and
> exclusive-lock features enabled (it is a mirror image), but I keep getting the
> error "feature not supported". I upgraded the kernel from 4.4 to 4.18 but I
> still get the same error message. Any idea what the issue might be?
>
> screenshot attached.
>

[1] http://docs.ceph.com/docs/master/rbd/rbd-config-ref/#rbd-features

-- 
Jason
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] RBD mirroring replicated and erasure coded pools

2018-08-13 Thread Jason Dillaman
On Tue, Jul 31, 2018 at 11:10 AM Ilja Slepnev  wrote:

> Hi,
>
> is it possible to establish RBD mirroring between replicated and erasure
> coded pools?
> I'm trying to set up replication as described at
> http://docs.ceph.com/docs/master/rbd/rbd-mirroring/ without success.
> Ceph 12.2.5 Luminous
>
> root@local:~# rbd --cluster local mirror pool enable rbd-2 pool
> 2018-07-31 17:35:57.350506 7fa0833af0c0 -1 librbd::api::Mirror: mode_set:
> failed to allocate mirroring uuid: (95) Operation not supported
>
> No problem with replicated pool:
> root@local:~# rbd --cluster local mirror pool enable rbd pool
> root@local:~#
>
> Pool configuration:
> root@local:~# ceph --cluster local osd pool ls detail
> pool 13 'rbd' replicated size 3 min_size 2 crush_rule 0 object_hash
> rjenkins pg_num 256 pgp_num 256 last_change 2219 flags hashpspool
> stripe_width 0 compression_mode none application rbd
> pool 15 'rbd-2' erasure size 6 min_size 5 crush_rule 4 object_hash
> rjenkins pg_num 64 pgp_num 64 last_change 2220 flags
> hashpspool,ec_overwrites stripe_width 16384 application rbd
>
> BR,
> --
> Ilja Slepnev
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>

RBD requires that its base (metadata) pool be a replicated pool, but you can
place the RBD image data objects in an erasure-coded pool. For the rbd-mirror
daemon to do this by default for the images it creates, you can set the
"rbd default data pool = XYZ" configuration option in the ceph.conf on your
rbd-mirror daemon host.
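
As a rough sketch (pool names are placeholders; overwrites must be enabled on
the EC pool for RBD, and this assumes BlueStore OSDs on Luminous or later):

  # on each cluster: a replicated metadata pool plus an EC data pool
  ceph osd pool create rbd-meta 64 64 replicated
  ceph osd pool create rbd-ec-data 64 64 erasure
  ceph osd pool set rbd-ec-data allow_ec_overwrites true

  # mirroring is enabled on the replicated pool only
  rbd mirror pool enable rbd-meta pool

  # images keep their header in the replicated pool, data objects in the EC pool
  rbd create --size 10G --data-pool rbd-ec-data rbd-meta/test1

and, in the ceph.conf used by the rbd-mirror daemon, so the images it creates
on the secondary side get the same layout:

  [client]
      rbd default data pool = rbd-ec-data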

-- 
Jason
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] RBD mirroring replicated and erasure coded pools

2018-07-31 Thread Ilja Slepnev
Hi,

is it possible to establish RBD mirroring between replicated and erasure
coded pools?
I'm trying to set up replication as described at
http://docs.ceph.com/docs/master/rbd/rbd-mirroring/ without success.
Ceph 12.2.5 Luminous

root@local:~# rbd --cluster local mirror pool enable rbd-2 pool
2018-07-31 17:35:57.350506 7fa0833af0c0 -1 librbd::api::Mirror: mode_set:
failed to allocate mirroring uuid: (95) Operation not supported

No problem with replicated pool:
root@local:~# rbd --cluster local mirror pool enable rbd pool
root@local:~#

Pool configuration:
root@local:~# ceph --cluster local osd pool ls detail
pool 13 'rbd' replicated size 3 min_size 2 crush_rule 0 object_hash
rjenkins pg_num 256 pgp_num 256 last_change 2219 flags hashpspool
stripe_width 0 compression_mode none application rbd
pool 15 'rbd-2' erasure size 6 min_size 5 crush_rule 4 object_hash rjenkins
pg_num 64 pgp_num 64 last_change 2220 flags hashpspool,ec_overwrites
stripe_width 16384 application rbd

BR,
--
Ilja Slepnev
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] RBD mirroring to DR site

2018-02-28 Thread Jason Dillaman
On Wed, Feb 28, 2018 at 2:56 PM, Brady Deetz  wrote:
> Great. We are read heavy. I assume the journals do not replicate reads. Is
> that correct?

Correct -- only writes (plus discards, snapshots, etc.) are replicated.

> On Wed, Feb 28, 2018 at 1:50 PM, Jason Dillaman  wrote:
>>
>> On Wed, Feb 28, 2018 at 2:42 PM, Brady Deetz  wrote:
>> > I'm considering doing one-way rbd mirroring to a DR site. The
>> > documentation
>> > states that my link to the DR site should have sufficient throughput to
>> > support replication.
>> >
>> > Our write activity is bursty. As such, we tend to see moments of high
>> > throughput (4-6 Gbps) followed by long bouts of basically no activity.
>> >
>> > 1) how sensitive is rbd mirroring to latency?
>>
>> It's not sensitive at all -- in the worst case, your journals will
>> expand during the burst period and shrink again during the idle
>> period.
>>
>> > 2) how sensitive is rbd mirroring to falling behind on replication and
>> > having to catch up?
>>
>> It's designed as asynchronous replication with consistency, so it
>> doesn't matter to rbd-mirror if it's behind. In fact, you can even
>> configure it to always be X hours behind if you want to have a window
>> for avoiding accidents from propagating to the DR site.
>>
>> >
>> > ___
>> > ceph-users mailing list
>> > ceph-users@lists.ceph.com
>> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>> >
>>
>>
>>
>> --
>> Jason
>
>



-- 
Jason
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] RBD mirroring to DR site

2018-02-28 Thread Brady Deetz
Great. We are read heavy. I assume the journals do not replicate reads. Is
that correct?

On Wed, Feb 28, 2018 at 1:50 PM, Jason Dillaman  wrote:

> On Wed, Feb 28, 2018 at 2:42 PM, Brady Deetz  wrote:
> > I'm considering doing one-way rbd mirroring to a DR site. The
> documentation
> > states that my link to the DR site should have sufficient throughput to
> > support replication.
> >
> > Our write activity is bursty. As such, we tend to see moments of high
> > throughput (4-6 Gbps) followed by long bouts of basically no activity.
> >
> > 1) how sensitive is rbd mirroring to latency?
>
> It's not sensitive at all -- in the worst case, your journals will
> expand during the burst period and shrink again during the idle
> period.
>
> > 2) how sensitive is rbd mirroring to falling behind on replication and
> > having to catch up?
>
> It's designed as asynchronous replication with consistency, so it
> doesn't matter to rbd-mirror if it's behind. In fact, you can even
> configure it to always be X hours behind if you want to have a window
> for avoiding accidents from propagating to the DR site.
>
> >
> > ___
> > ceph-users mailing list
> > ceph-users@lists.ceph.com
> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> >
>
>
>
> --
> Jason
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] RBD mirroring to DR site

2018-02-28 Thread Jason Dillaman
On Wed, Feb 28, 2018 at 2:42 PM, Brady Deetz  wrote:
> I'm considering doing one-way rbd mirroring to a DR site. The documentation
> states that my link to the DR site should have sufficient throughput to
> support replication.
>
> Our write activity is bursty. As such, we tend to see moments of high
> throughput (4-6 Gbps) followed by long bouts of basically no activity.
>
> 1) how sensitive is rbd mirroring to latency?

It's not sensitive at all -- in the worst case, your journals will
expand during the burst period and shrink again during the idle
period.
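
(If you want to keep an eye on how far behind the replay is during those
bursts, something like the following should work -- pool/image names are
placeholders:

  rbd journal info --pool mypool --image myimage
  rbd mirror image status mypool/myimage

On the DR side, the mirror image status description should indicate how many
journal entries are still left to replay.)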

> 2) how sensitive is rbd mirroring to falling behind on replication and
> having to catch up?

It's designed as asynchronous replication with consistency, so it
doesn't matter to rbd-mirror if it's behind. In fact, you can even
configure it to always be X hours behind if you want to have a window
for avoiding accidents from propagating to the DR site.
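
(A minimal sketch of that, assuming the "rbd mirroring replay delay" option,
which takes a value in seconds, set in the ceph.conf used by the rbd-mirror
daemon on the DR site:

  [client]
      # keep the DR copies roughly 6 hours behind the primary
      rbd mirroring replay delay = 21600

I believe it can also be overridden per-image via image metadata if you only
want the delay window on selected images.)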

>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>



-- 
Jason
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] RBD mirroring to DR site

2018-02-28 Thread Brady Deetz
I'm considering doing one-way rbd mirroring to a DR site. The documentation
states that my link to the DR site should have sufficient throughput to
support replication.

Our write activity is bursty. As such, we tend to see moments of high
throughput (4-6 Gbps) followed by long bouts of basically no activity.

1) how sensitive is rbd mirroring to latency?
2) how sensitive is rbd mirroring to falling behind on replication and
having to catch up?
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] RBD mirroring

2017-01-07 Thread Klemen Pogacnik
Yes, disaster recovery can be solved at the application layer, but I think it
would be a nice OpenStack feature too, especially since the replication is
already solved by Ceph. I'll ask on other forums whether anything is being
done about that feature. Thanks again for pointing me in the right direction.
Kemo

On Fri, Jan 6, 2017 at 2:59 PM, Jason Dillaman  wrote:

> In all honesty, this unfortunately isn't one of my areas of expertise.
>
> OpenStack has such a large umbrella and is moving too quickly for me
> to stay 100% up-to-date. I don't think, from a cloud point-of-view,
> this is a problem too many purists are concerned about since a
> cloud-native app should be able to survive failures -- and any
> necessary replication of data should be handled by the
> application-layer instead of the infrastructure.  Also -- it's
> definitely a hard problem to solve in a generic fashion.
>
> On Fri, Jan 6, 2017 at 8:27 AM, Klemen Pogacnik  wrote:
> > That's what I was afraid of. So there aren't any commands available, and I
> > must somehow synchronize the Cinder DB to get access to the volumes on the
> > second site as well.
> > Do you maybe know if somebody is already thinking about, or even working
> > on, that? In the presentation the Kingbird project was mentioned, but I'm
> > not sure if their work will solve this problem.
> > Kemo
> >
> > On Thu, Jan 5, 2017 at 4:45 PM, Jason Dillaman 
> wrote:
> >>
> >> On Thu, Jan 5, 2017 at 7:24 AM, Klemen Pogacnik 
> wrote:
> >> > I'm playing with rbd mirroring with openstack. The final idea is to
> use
> >> > it
> >> > for disaster recovery of DB server running on Openstack cluster, but
> >> > would
> >> > like to test this functionality first.
> >> > I've prepared this configuration:
> >> > - 2 openstack clusters (devstacks)
> >> > - 2 ceph clusters (one node clusters)
> >> > Remote Ceph is used as a backend for Cinder service. Each devstack has
> >> > its
> >> > own Ceph cluster. Mirroring was enabled for volumes pool, and
> rbd-mirror
> >> > daemon was started.
> >> > When I create new cinder volume on devstack1, the same rbd storage
> >> > appeared
> >> > on both Ceph clusters, so it seems, mirroring is working.
> >> > Now I would like to see this storage as a Cinder volume on devstack2
> >> > too. Is
> >> > it somehow possible to do that?
> >>
> >> This level of HA/DR is not currently built in to OpenStack (it's
> >> outside the scope of Ceph). There are several strategies you could use
> >> to try to replicate the devstack1 database to devstack2. Here is a
> >> presentation from OpenStack Summit Austin [1] re: this subject.
> >>
> >> > The next question is, how to make switchover. On Ceph it can easily be
> >> > done
> >> > by demote and promote commands, but then the volumes are still not
> seen
> >> > on
> >> > Devstack2, so I can't use it.
> >> > On open stack there is cinder failover-host command, which is, as I
> can
> >> > understand, only useful for configuration with one openstack and two
> >> > ceph
> >> > clusters. Any idea how to make switchover with my configuration.
> >> > Thanks a lot for help!
> >>
> >> Correct -- Cinder's built-in volume replication feature is just a set
> >> of hooks available for backends that already support
> >> replication/mirroring. The hooks for Ceph RBD are scheduled to be
> >> included in the next release of OpenStack, but as you have discovered,
> >> it really only protects against a storage failure (where you can
> >> switch from Ceph cluster A to Ceph cluster B), but does not help with
> >> losing your OpenStack data center.
> >>
> >> > Kemo
> >> >
> >> >
> >> >
> >> >
> >> >
> >> >
> >> > ___
> >> > ceph-users mailing list
> >> > ceph-users@lists.ceph.com
> >> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> >> >
> >>
> >> [1]
> >> https://www.openstack.org/videos/video/protecting-the-galaxy-multi-region-disaster-recovery-with-openstack-and-ceph
> >>
> >> --
> >> Jason
> >
> >
>
>
>
> --
> Jason
>
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] RBD mirroring

2017-01-05 Thread Jason Dillaman
On Thu, Jan 5, 2017 at 7:24 AM, Klemen Pogacnik  wrote:
> I'm playing with rbd mirroring with openstack. The final idea is to use it
> for disaster recovery of DB server running on Openstack cluster, but would
> like to test this functionality first.
> I've prepared this configuration:
> - 2 openstack clusters (devstacks)
> - 2 ceph clusters (one node clusters)
> Remote Ceph is used as a backend for Cinder service. Each devstack has its
> own Ceph cluster. Mirroring was enabled for volumes pool, and rbd-mirror
> daemon was started.
> When I create new cinder volume on devstack1, the same rbd storage appeared
> on both Ceph clusters, so it seems, mirroring is working.
> Now I would like to see this storage as a Cinder volume on devstack2 too. Is
> it somehow possible to do that?

This level of HA/DR is not currently built in to OpenStack (it's
outside the scope of Ceph). There are several strategies you could use
to try to replicate the devstack1 database to devstack2. Here is a
presentation from OpenStack Summit Austin [1] re: this subject.

> The next question is, how to make switchover. On Ceph it can easily be done
> by demote and promote commands, but then the volumes are still not seen on
> Devstack2, so I can't use it.
> On open stack there is cinder failover-host command, which is, as I can
> understand, only useful for configuration with one openstack and two ceph
> clusters. Any idea how to make switchover with my configuration.
> Thanks a lot for help!

Correct -- Cinder's built-in volume replication feature is just a set
of hooks available for backends that already support
replication/mirroring. The hooks for Ceph RBD are scheduled to be
included in the next release of OpenStack, but, as you have discovered,
it really only protects against a storage failure (where you can
switch from Ceph cluster A to Ceph cluster B) and does not help with
losing your OpenStack data center.

> Kemo
>
>
>
>
>
>
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>

[1] 
https://www.openstack.org/videos/video/protecting-the-galaxy-multi-region-disaster-recovery-with-openstack-and-ceph

-- 
Jason
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] RBD mirroring

2017-01-05 Thread Klemen Pogacnik
I'm playing with RBD mirroring with OpenStack. The final idea is to use it
for disaster recovery of a DB server running on an OpenStack cluster, but I
would like to test this functionality first.
I've prepared this configuration:
- 2 OpenStack clusters (devstacks)
- 2 Ceph clusters (one-node clusters)
A remote Ceph cluster is used as the backend for the Cinder service; each
devstack has its own Ceph cluster. Mirroring was enabled for the volumes
pool, and the rbd-mirror daemon was started.
When I create a new Cinder volume on devstack1, the same RBD image appears on
both Ceph clusters, so it seems mirroring is working.
Now I would like to see this volume as a Cinder volume on devstack2 too.
Is it somehow possible to do that?
The next question is how to do a switchover. On Ceph it can easily be done
with the demote and promote commands, but then the volumes are still not
visible on devstack2, so I can't use them.
On OpenStack there is the cinder failover-host command, which, as I
understand it, is only useful for a configuration with one OpenStack and two
Ceph clusters. Any idea how to do a switchover with my configuration?
Thanks a lot for your help!
Kemo
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] RBD mirroring between a IPv6 and IPv4 Cluster

2016-07-04 Thread Nick Fisk
> -Original Message-
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> Wido den Hollander
> Sent: 04 July 2016 14:34
> To: ceph-users@lists.ceph.com; n...@fisk.me.uk
> Subject: Re: [ceph-users] RBD mirroring between a IPv6 and IPv4 Cluster
> 
> 
> > On 4 July 2016 at 9:25, Nick Fisk <n...@fisk.me.uk> wrote:
> >
> >
> > Hi All,
> >
> > Quick question. I'm currently in the process of getting ready to
> > deploy a 2nd cluster, which at some point in the next 12 months, I
> > will want to enable RBD mirroring between the new and existing
> > clusters. I'm leaning towards deploying this new cluster with IPv6,
> > because Wido says so ;-)
> >
> 
> Good job! More IPv6 is better :)
> 
> > Question is, will RBD mirroring still be possible between the two? I
> > know you can't dual stack the core Ceph components, but does RBD
> > mirroring have the same limitations?
> >
> 
> I haven't touched it yet, but looking at the docs it seems it will:
> http://docs.ceph.com/docs/master/rbd/rbd-mirroring/
> 
> "The cluster name in the following examples corresponds to a Ceph
> configuration file of the same name (e.g. /etc/ceph/remote.conf). See the
> ceph-conf documentation for how to configure multiple clusters."
> 
> So, in 'remote.conf' you can add the IPv6 addresses of a cluster running on
> IPv6 and on 'ceph.conf' the IPv4 addresses.
> 
> The rbd-mirror daemon will eventually talk to librbd/librados which will
> act as
> a 'proxy' between the two clusters.
> 
> I think it works, but that's just based on reading the docs and prior
> knowledge.

Ok, thanks Wido. It will be a while before we have this all set up, but I'll
report back to confirm.

> 
> Wido
> 
> > Thanks,
> > Nick
> >
> > ___
> > ceph-users mailing list
> > ceph-users@lists.ceph.com
> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] RBD mirroring between a IPv6 and IPv4 Cluster

2016-07-04 Thread Wido den Hollander

> On 4 July 2016 at 9:25, Nick Fisk wrote:
> 
> 
> Hi All,
> 
> Quick question. I'm currently in the process of getting ready to deploy a
> 2nd cluster, which at some point in the next 12 months, I will want to
> enable RBD mirroring between the new and existing clusters. I'm leaning
> towards deploying this new cluster with IPv6, because Wido says so ;-) 
> 

Good job! More IPv6 is better :)

> Question is, will RBD mirroring still be possible between the two? I know
> you can't dual stack the core Ceph components, but does RBD mirroring have
> the same limitations?
> 

I haven't touched it yet, but looking at the docs it seems it will: 
http://docs.ceph.com/docs/master/rbd/rbd-mirroring/

"The cluster name in the following examples corresponds to a Ceph configuration 
file of the same name (e.g. /etc/ceph/remote.conf). See the ceph-conf 
documentation for how to configure multiple clusters."

So, in 'remote.conf' you can add the IPv6 addresses of a cluster running on 
IPv6 and on 'ceph.conf' the IPv4 addresses.
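
Roughly like this (just a sketch -- the mon addresses below are made-up
documentation addresses, and I haven't actually tried it):

  # /etc/ceph/ceph.conf (local, IPv4 cluster)
  [global]
      mon host = 192.0.2.11:6789, 192.0.2.12:6789, 192.0.2.13:6789

  # /etc/ceph/remote.conf (remote, IPv6 cluster)
  [global]
      mon host = [2001:db8::11]:6789, [2001:db8::12]:6789, [2001:db8::13]:6789

The peer would then be added with something like
"rbd mirror pool peer add <pool> client.remote@remote", and rbd-mirror opens
each cluster with its own config file.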

The rbd-mirror daemon will eventually talk to librbd/librados which will act as 
a 'proxy' between the two clusters.

I think it works, but that's just based on reading the docs and prior knowledge.

Wido

> Thanks,
> Nick
> 
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] RBD mirroring between a IPv6 and IPv4 Cluster

2016-07-04 Thread Nick Fisk
Hi All,

Quick question. I'm currently in the process of getting ready to deploy a
2nd cluster, which at some point in the next 12 months, I will want to
enable RBD mirroring between the new and existing clusters. I'm leaning
towards deploying this new cluster with IPv6, because Wido says so ;-) 

Question is, will RBD mirroring still be possible between the two? I know
you can't dual stack the core Ceph components, but does RBD mirroring have
the same limitations?

Thanks,
Nick

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com