Re: [Openstack-operators] Configure cinder to allow volume delete when RBD snapshots present

2016-04-19 Thread Forrest Flagg
Eric,

Thanks for the reply and the info about the changes coming to the RBD driver
for Newton.  Since the changes are slated to land in Newton, is there a plan
to backport them?  This seems to me to be an important piece of being able to
use Ceph effectively for storage.  What have people been doing up to this
point for backup and disaster recovery?  Is there a current workaround for
dealing with this limitation?  Thanks,
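One interim workaround is to remove the RBD-level snapshots by hand before
asking Cinder to delete the volume.  A rough sketch, assuming the snapshots
were taken directly with the rbd CLI rather than through Cinder; the pool
name and volume UUID below are placeholders:

```shell
# List any RBD-level snapshots still attached to the volume's backing image.
rbd snap ls volumes/volume-3f2a9c1e-0000-0000-0000-000000000000

# Protected snapshots must be unprotected before they can be removed.
rbd snap unprotect volumes/volume-3f2a9c1e-0000-0000-0000-000000000000@daily-2016-04-18

# Purge removes all remaining snapshots on the image in one step.
rbd snap purge volumes/volume-3f2a9c1e-0000-0000-0000-000000000000

# With no snapshots left, the normal Cinder delete goes through.
cinder delete 3f2a9c1e-0000-0000-0000-000000000000
```

Of course, purging the snapshots gives up the backups they represented, so
this only helps when the volume's history is no longer needed.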

Forrest

On Tue, Apr 12, 2016 at 11:41 AM, Eric Harney <ehar...@redhat.com> wrote:

> On 04/11/2016 11:18 AM, Forrest Flagg wrote:
> > All,
> >
> > I have a working Kilo cloud running with ceph for the storage backend.
> > I'd like to use RBD snapshots for backups because they're so fast, but
> > cinder doesn't allow volume deletion when an RBD snapshot exists.  I
> > want to keep daily backups in case a user terminates an instance and we
> > need to recover it or for disaster recovery.  Is there a way to mark
> > the volumes as deleted when a tenant deletes them so they don't show up
> > in OpenStack but still exist within ceph for backup purposes?  Thanks,
> >
> > --
> > Forrest Flagg
> > Cloud System Administrator
> > Advanced Computing Group
> > (207) 561-3575
> > raymond.fl...@maine.edu
> >
>
> Hi Forrest,
>
> There is ongoing development for the RBD driver in Cinder which fixes
> this. This change [1] is currently slated to land in Newton.
>
> [1] https://review.openstack.org/#/c/281550/
>
> Thanks,
> Eric
>
>


-- 
Forrest Flagg
Cloud System Administrator
Advanced Computing Group
(207) 561-3575
raymond.fl...@maine.edu
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] Configure cinder to allow volume delete when RBD snapshots present

2016-04-11 Thread Forrest Flagg
All,

I have a working Kilo cloud running with ceph for the storage backend.  I'd
like to use RBD snapshots for backups because they're so fast, but cinder
doesn't allow volume deletion when an RBD snapshot exists.  I want to keep
daily backups in case a user terminates an instance and we need to recover
it or for disaster recovery.  Is there a way to mark the volumes as deleted
when a tenant deletes them so they don't show up in OpenStack but still
exist within ceph for backup purposes?  Thanks,
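For reference, the fast daily backups described here can be taken directly
with the rbd CLI.  A minimal sketch, with placeholder pool and volume names:

```shell
# Name the snapshot after today's date, e.g. daily-2016-04-11.
SNAP="daily-$(date +%F)"

# RBD snapshots are copy-on-write, so this is fast regardless of volume size.
rbd snap create "volumes/volume-3f2a9c1e@${SNAP}"

# Protect it so it can't be removed accidentally (and can be cloned later).
rbd snap protect "volumes/volume-3f2a9c1e@${SNAP}"
```

It is exactly these out-of-band snapshots that then block the Cinder delete,
as discussed above.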

-- 
Forrest Flagg
Cloud System Administrator
Advanced Computing Group
(207) 561-3575
raymond.fl...@maine.edu


[Openstack-operators] [Fuel][ceph] Fuel 7.0 Ceph dm-crypt and multi-site redundancy

2015-11-10 Thread Forrest Flagg
Hi all,

I'm having some trouble finding information about using Fuel and Ceph
together with more complex options such as encryption and multi-site
redundancy.  Does anyone know how to use Fuel to enable dm-crypt for ceph,
and if so, what sort of performance hit you take when doing so?  What about
having multi-site ceph nodes for offsite backup?  Would using federated
gateways be a workable option, or is there a better way to deal with
off-site redundancy/backup?  Thanks,
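As far as I know, Fuel 7.0 does not expose an encryption option in its UI,
so enabling it would likely mean preparing the OSDs outside of (or after)
Fuel.  Ceph's own ceph-disk tool supports dm-crypt directly; a sketch, with
the device path and key directory as placeholders:

```shell
# Prepare an OSD whose data and journal partitions are encrypted with
# dm-crypt; the keys are written to the given directory (keep it backed up).
ceph-disk prepare --dmcrypt --dmcrypt-key-dir /etc/ceph/dmcrypt-keys /dev/sdb

# Activate the encrypted OSD (ceph-disk unlocks it using the stored key).
ceph-disk activate /dev/sdb1
```

The performance cost is essentially that of dm-crypt itself on the OSD
hosts, so it depends heavily on CPU (AES-NI support in particular); I don't
have Fuel-specific numbers.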

Forrest


Re: [Openstack-operators] what is the different in use Qcow2 or Raw in Ceph

2015-05-28 Thread Forrest Flagg
I'm also curious about this.  Here are some other pieces of information
relevant to the discussion.  Maybe someone here can clear this up for me as
well.  The documentation for Fuel 6.0 (I'm not sure what changed for 6.1)
[1] states that when using Ceph one should disable qcow2 so that images are
stored in raw format.  This is because Ceph includes its own mechanisms for
copy-on-write and snapshots.  According to the Ceph documentation [2], this
is true only when using a BTRFS file system, but in Fuel 6.0 Ceph uses XFS,
which doesn't provide this functionality.  Also, [2] recommends against
BTRFS for production as it isn't considered fully mature.  In addition, the
Fuel 6.0 documentation [3] states that OpenStack doesn't support
snapshotting with raw images.

Given this, why does Fuel suggest not using qcow2 with Ceph?  How can Ceph
be useful if snapshotting isn't an option with raw images and qcow2 isn't
recommended?  Are there other factors to take into consideration that I'm
missing?

[1]
https://docs.mirantis.com/openstack/fuel/fuel-6.0/terminology.html#qcow2
[2]
http://ceph.com/docs/master/rados/configuration/filesystem-recommendations/
[3]
https://docs.mirantis.com/openstack/fuel/fuel-6.0/user-guide.html#qcow-format-ug

Thanks,

Forrest

On Thu, May 28, 2015 at 8:02 AM, David Medberry <openst...@medberry.net>
wrote:

 and better explained here:
 http://ceph.com/docs/master/rbd/qemu-rbd/

 On Thu, May 28, 2015 at 6:02 AM, David Medberry <openst...@medberry.net>
 wrote:

 The primary difference is the ability for Ceph to make zero-byte copies.
 When you use qcow2, Ceph must actually create a complete copy instead of a
 zero-byte copy, as it cannot do its own copy-on-write tricks with a qcow2
 image.

 So, yes, it will work fine with qcow2 images, but it won't be as
 performant as it is with RAW.  Also, it will actually use more of the
 native underlying storage.

 This is also shown as an Important Note in the Ceph docs:
 http://ceph.com/docs/master/rbd/rbd-openstack/
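That difference is why the usual advice for a Ceph-backed Glance is to
convert images to raw before uploading.  A minimal sketch; the image names
are placeholders, and the glance flags are as in the Juno/Kilo-era CLI:

```shell
# Convert the qcow2 image to raw so RBD can clone it copy-on-write.
qemu-img convert -f qcow2 -O raw ubuntu.qcow2 ubuntu.raw

# Upload the raw image to Glance.
glance image-create --name ubuntu-raw --disk-format raw \
  --container-format bare --file ubuntu.raw
```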

 On Thu, May 28, 2015 at 4:12 AM, Shake Chen <shake.c...@gmail.com> wrote:

 Hi

 Now I'm trying to use Fuel 6.1 to deploy OpenStack Juno, with Ceph as the
 cinder, nova, and glance backend.

 The Fuel documentation suggests using RAW-format images when Ceph is the
 backend.

 But if I upload a qcow2 image, it seems to work fine.

 What is the difference between using qcow2 and RAW in Ceph?


 --
 Shake Chen







