Something isn't right. Ceph won't delete an RBD image that has existing snapshots, even when those snapshots aren't protected. You can't delete a snapshot that's protected, and you can't unprotect a snapshot while a COW clone depends on it.
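Those three rules can be seen directly with the rbd CLI. This is a sketch that requires a running Ceph cluster; the pool and image names ("rbd", "base", "clone1") are made up for illustration, and the error comments paraphrase rather than quote the exact messages:

```shell
# Create a throwaway image with a snapshot (assumes a pool named "rbd")
rbd create rbd/base --size 128
rbd snap create rbd/base@snap1

# Rule 1: an image with snapshots can't be removed,
# even while the snapshot is unprotected
rbd rm rbd/base                      # fails: image has snapshots

# Rule 2: a protected snapshot can't be deleted
rbd snap protect rbd/base@snap1
rbd snap rm rbd/base@snap1           # fails: snapshot is protected

# Rule 3: a snapshot can't be unprotected while a COW clone depends on it
rbd clone rbd/base@snap1 rbd/clone1
rbd snap unprotect rbd/base@snap1    # fails: snapshot has child images

# Removing (or flattening) the clone unwinds the chain
rbd rm rbd/clone1
rbd snap unprotect rbd/base@snap1
rbd snap rm rbd/base@snap1
rbd rm rbd/base
```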
I'm not intimately familiar with OpenStack, but it must be deleting A without any snapshots. That would seem to indicate that at the point of deletion there are no COW clones of A, or that any clone is no longer dependent on A. A COW clone requires a protected snapshot, a protected snapshot can't be deleted, and existing snapshots prevent an RBD image from being deleted.

In my experience with OpenStack, booting a nova instance from a glance image causes a snapshot to be created, protected, and cloned on the RBD image backing the glance image. The clone becomes a cinder device that is then attached to the nova instance, which is how you're able to modify the contents of the volume within the instance. You wouldn't be able to delete the glance image at that point unless the cinder device were deleted first, or it was flattened and no longer dependent on the glance image. I haven't performed this particular test; it's possible that OpenStack does the flattening for you in this scenario. This issue will likely require some investigation at the RBD level throughout your testing process to understand exactly what's happening.

________________________________

Steve Taylor | Senior Software Engineer | StorageCraft Technology Corporation<https://storagecraft.com>
380 Data Drive Suite 300 | Draper | Utah | 84020
Office: 801.871.2799

________________________________

If you are not the intended recipient of this message or received it erroneously, please notify the sender and delete it, together with any attachments, and be advised that any dissemination or copying of this message is prohibited.
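At the RBD level, that boot-from-glance workflow looks roughly like the following sketch. The pool names ("glance", "volumes") and image names are hypothetical, and the actual commands are issued internally by the OpenStack drivers rather than typed by hand:

```shell
# Glance stores the image as an RBD image with a protected snapshot
rbd snap create glance/IMAGE_A@snap
rbd snap protect glance/IMAGE_A@snap

# Booting an instance clones that snapshot into the volumes pool;
# the clone is the cinder device attached to the nova instance
rbd clone glance/IMAGE_A@snap volumes/volume-ONE

# While the clone exists, the glance image can't be deleted.
# The dependency is visible from either side:
rbd children glance/IMAGE_A@snap     # lists volumes/volume-ONE
rbd info volumes/volume-ONE          # shows a "parent:" line

# Flattening copies all parent data into the clone and removes the
# parent link, after which the snapshot can be unprotected and the
# glance image deleted
rbd flatten volumes/volume-ONE
```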
________________________________

-----Original Message-----
From: Eugen Block [mailto:[email protected]]
Sent: Thursday, September 1, 2016 9:06 AM
To: Steve Taylor <[email protected]>
Cc: [email protected]
Subject: Re: [ceph-users] Turn snapshot of a flattened snapshot into regular image

Thanks for the quick response, but I don't believe I'm there yet ;-)

> cloned the glance image to a cinder device

I have configured these three services (nova, glance, cinder) to use Ceph as the storage backend, but cinder is not involved in the process I'm referring to. Now I wanted to reproduce this scenario to show a colleague, and couldn't, because this time I was able to delete image A even with a non-flattened snapshot! How is that even possible?

Eugen

Zitat von Steve Taylor <[email protected]>:

> You're already there. When you booted ONE you cloned the glance image
> to a cinder device (A', a separate RBD image) that was a COW clone of A.
> That's why you can't delete A until you flatten SNAP1. A' isn't a full
> copy until that flatten is complete, at which point you're able to
> delete A.
>
> SNAP2 is a second snapshot on A', and thus A' already has all of the
> data it needs from the previous flatten of SNAP1 to allow you to
> delete SNAP1. So SNAP2 isn't actually a full extra copy of the data.
> ________________________________
>
> -----Original Message-----
> From: ceph-users [mailto:[email protected]] On Behalf Of Eugen Block
> Sent: Thursday, September 1, 2016 6:51 AM
> To: [email protected]
> Subject: [ceph-users] Turn snapshot of a flattened snapshot into regular image
>
> Hi all,
>
> I'm trying to understand the idea behind rbd images and their
> clones/snapshots. I have tried this scenario:
>
> 1. upload image A to glance
> 2. boot instance ONE from image A
> 3. make changes to instance ONE (install a new package)
> 4. create snapshot SNAP1 from ONE
> 5. delete instance ONE
> 6. delete image A -> fails because of existing snapshot SNAP1
> 7. flatten snapshot SNAP1
> 8. delete image A -> succeeds
> 9. launch instance TWO from SNAP1
> 10. make changes to TWO (install a package)
> 11. create snapshot SNAP2 from TWO
> 12. delete TWO
> 13. delete SNAP1 -> succeeds
>
> This means that the second snapshot has the same (full) size as the
> first. Can I manipulate SNAP1 somehow so that snapshots are not
> flattened anymore and SNAP2 becomes a COW clone of SNAP1?
>
> I hope my description is not too confusing. The idea behind this
> question is: if I have one base image and want to adjust it from
> time to time, I don't want to keep several versions of that image,
> just one. But this way I would lose the protection against deleting
> the base image.
>
> Is there any config option in Ceph or OpenStack, or anything else I
> can do to "un-flatten" an image? I would assume that there is some
> kind of flag set for that image. Maybe someone can point me in the
> right direction.
>
> Thanks,
> Eugen

--
Eugen Block                             voice   : +49-40-559 51 75
NDE Netzdesign und -entwicklung AG      fax     : +49-40-559 51 77
Postfach 61 03 15
D-22423 Hamburg                         e-mail  : [email protected]

Vorsitzende des Aufsichtsrates: Angelika Mozdzen
Sitz und Registergericht: Hamburg, HRB 90934
Vorstand: Jens-U. Mozdzen
USt-IdNr. DE 814 013 983
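The RBD-level investigation suggested in this thread can start with commands like these, which show whether a snapshot still has COW children or whether a clone has been flattened. The pool and image names are hypothetical, and a running Ceph cluster is assumed:

```shell
# List snapshots on the glance image
rbd snap ls glance/IMAGE_A

# Check whether a specific snapshot is protected
rbd info glance/IMAGE_A@snap         # output includes "protected: True"

# List COW children of that snapshot; an empty list means every
# former clone has been flattened or deleted
rbd children glance/IMAGE_A@snap

# On the clone side, a "parent:" line means the image is still
# COW-dependent on the glance image; after "rbd flatten" that
# line no longer appears
rbd info volumes/volume-TWO
```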
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
