The following pull request was submitted through Github.
It can be accessed and reviewed at: https://github.com/lxc/lxd/pull/3781

This e-mail was sent by the LXC bot, direct replies will not reach the author
unless they happen to be subscribed to this list.

=== Description (from pull-request) ===
Ceph is similar to zfs in that storage volumes need to be kept around in case they have dependent snapshots. That is:

```
- <storage-volume-1>
- snapshot(<storage-volume-1>) = <snap1>
- protect(<snap1>)            = <snap1_protected>
- delete(<storage-volume-1>)
```

will fail since \<snap1_protected\> still depends on \<storage-volume-1\>. Because of this, the ceph driver I implemented maps the delete() operation onto a rename() operation:

```
- delete(<storage-volume-1>) -> rename(<storage-volume-1>) = zombie_<storage-volume-1>
```
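
At the rbd level that mapping is just a rename into a fixed `zombie_`-prefixed name. Below is a minimal standalone sketch of the operation, not the actual LXD code: it calls the `rbd` CLI through `os/exec` instead of LXD's internal `shared.RunCommand` wrapper, and the cluster, pool, volume and user names are illustrative.

```go
package main

import (
	"fmt"
	"os/exec"
)

// markDeleted renames an RBD volume into a "zombie_" name instead of removing
// it, so that protected snapshots cloned from it stay valid.
func markDeleted(cluster, pool, volType, volName, user string) error {
	out, err := exec.Command(
		"rbd",
		"--id", user,
		"--cluster", cluster,
		"mv",
		fmt.Sprintf("%s/%s_%s", pool, volType, volName),
		fmt.Sprintf("%s/zombie_%s_%s", pool, volType, volName),
	).CombinedOutput()
	if err != nil {
		return fmt.Errorf("rbd mv failed: %v: %s", err, out)
	}
	return nil
}

func main() {
	// Illustrative values only; running this needs a reachable ceph cluster.
	if err := markDeleted("ceph", "lxd", "container", "a", "admin"); err != nil {
		fmt.Println(err)
	}
}
```

The weak point is that the zombie name is derived purely from the original volume name, so deleting a second volume that is also called `a` tries to rename onto the same zombie target.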

But consider the following:

```
lxc init images:alpine/edge a = <storage-volume-a>                                  /* succeeds */
lxc copy a b                  = <storage-volume-b> depends <storage-volume-a>       /* succeeds */
lxc delete a                  = <storage-volume-a> rename zombie_<storage-volume-a> /* succeeds */
lxc init images:alpine/edge a = <storage-volume-a>                                  /* succeeds */
lxc copy a c                  = <storage-volume-c> depends <storage-volume-a>       /* succeeds */
lxc delete a                  = <storage-volume-a> rename zombie_<storage-volume-a> /* fails    */
```

The last delete fails because the zombie name zombie_\<storage-volume-a\> already exists from the first delete. To avoid this we suffix each zombie storage volume with a UUID, so the sequence above becomes:

```
lxc init images:alpine/edge a = <storage-volume-a>                                         /* succeeds */
lxc copy a b                  = <storage-volume-b> depends <storage-volume-a>              /* succeeds */
lxc delete a                  = <storage-volume-a> rename zombie_<storage-volume-a>_<uuid> /* succeeds */
lxc init images:alpine/edge a = <storage-volume-a>                                         /* succeeds */
lxc copy a c                  = <storage-volume-c> depends <storage-volume-a>              /* succeeds */
lxc delete a                  = <storage-volume-a> rename zombie_<storage-volume-a>_<uuid> /* succeeds */
```
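
The patch below therefore splits the helper's volume-name parameter into an old and a new name and lets the caller append a random UUID. Roughly, as a sketch rather than the actual LXD code (it assumes the `github.com/pborman/uuid` package, which provides the `uuid.NewRandom()` used in the patch; the other names are illustrative):

```go
package main

import (
	"fmt"
	"os/exec"

	"github.com/pborman/uuid" // assumed import; the patch calls uuid.NewRandom().String()
)

// markDeleted takes separate old and new volume names so the zombie name can
// carry a unique suffix chosen by the caller.
func markDeleted(cluster, pool, volType, oldName, newName, user string) error {
	out, err := exec.Command(
		"rbd",
		"--id", user,
		"--cluster", cluster,
		"mv",
		fmt.Sprintf("%s/%s_%s", pool, volType, oldName),
		fmt.Sprintf("%s/zombie_%s_%s", pool, volType, newName),
	).CombinedOutput()
	if err != nil {
		return fmt.Errorf("rbd mv failed: %v: %s", err, out)
	}
	return nil
}

func main() {
	volName := "a" // illustrative container volume name
	// Suffixing with a UUID means repeated deletes of volumes named "a"
	// never collide on the same zombie name.
	newName := fmt.Sprintf("%s_%s", volName, uuid.NewRandom().String())
	if err := markDeleted("ceph", "lxd", "container", volName, newName, "admin"); err != nil {
		fmt.Println(err)
	}
}
```

Image volumes keep the old naming: the two ImageCreate/ImageDelete call sites in the patch pass the fingerprint as both the old and the new name, since fingerprints are already unique.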
From e8a3d53673664b3f7a2fdc8a902bc6fb532aa0d1 Mon Sep 17 00:00:00 2001
From: Christian Brauner <[email protected]>
Date: Thu, 7 Sep 2017 11:48:07 +0200
Subject: [PATCH 1/2] ceph: use uuid when creating zombie storage volumes

Ceph is similar to zfs in that storage volumes need to be kept around in case
they have dependent snapshots. That is:

- <storage-volume-1>
- snapshot(<storage-volume-1>) = <snap1>
- protect(<snap1>)            = <snap1_protected>
- delete(<storage-volume-1>)

will fail since <snap1_protected> still depends on <storage-volume-1>. Because
of this, the ceph driver I implemented maps the delete() operation onto a
rename() operation:

- delete(<storage-volume-1>) -> rename(<storage-volume-1>) = zombie_<storage-volume-1>

But consider the following:

lxc init images:alpine/edge a = <storage-volume-a>                                  /* succeeds */
lxc copy a b                  = <storage-volume-b> depends <storage-volume-a>       /* succeeds */
lxc delete a                  = <storage-volume-a> rename zombie_<storage-volume-a> /* succeeds */
lxc init images:alpine/edge a = <storage-volume-a>                                  /* succeeds */
lxc copy a c                  = <storage-volume-c> depends <storage-volume-a>       /* succeeds */
lxc delete a                  = <storage-volume-a> rename zombie_<storage-volume-a> /* fails    */

The last delete fails because the zombie name zombie_<storage-volume-a> already
exists. To avoid this we suffix each zombie storage volume with a UUID so the
sequence above becomes:

lxc init images:alpine/edge a = <storage-volume-a>                                         /* succeeds */
lxc copy a b                  = <storage-volume-b> depends <storage-volume-a>              /* succeeds */
lxc delete a                  = <storage-volume-a> rename zombie_<storage-volume-a>_<uuid> /* succeeds */
lxc init images:alpine/edge a = <storage-volume-a>                                         /* succeeds */
lxc copy a c                  = <storage-volume-c> depends <storage-volume-a>              /* succeeds */
lxc delete a                  = <storage-volume-a> rename zombie_<storage-volume-a>_<uuid> /* succeeds */

Closes #3780.

Signed-off-by: Christian Brauner <[email protected]>
---
 lxd/storage_ceph.go       |  7 ++++---
 lxd/storage_ceph_utils.go | 11 +++++++----
 2 files changed, 11 insertions(+), 7 deletions(-)

diff --git a/lxd/storage_ceph.go b/lxd/storage_ceph.go
index efb184836..d665527c4 100644
--- a/lxd/storage_ceph.go
+++ b/lxd/storage_ceph.go
@@ -2354,8 +2354,8 @@ func (s *storageCeph) ImageCreate(fingerprint string) error {
                        }
 
                        err := cephRBDVolumeMarkDeleted(s.ClusterName,
-                               s.OSDPoolName, fingerprint,
-                               storagePoolVolumeTypeNameImage, s.UserName)
+                               s.OSDPoolName, storagePoolVolumeTypeNameImage,
+                               fingerprint, fingerprint, s.UserName)
                        if err != nil {
                                logger.Warnf(`Failed to mark RBD storage `+
                                        `volume for image "%s" on storage `+
@@ -2472,7 +2472,8 @@ func (s *storageCeph) ImageDelete(fingerprint string) error {
 
                // mark deleted
                err := cephRBDVolumeMarkDeleted(s.ClusterName, s.OSDPoolName,
-                       fingerprint, storagePoolVolumeTypeNameImage, s.UserName)
+                       storagePoolVolumeTypeNameImage, fingerprint,
+                       fingerprint, s.UserName)
                if err != nil {
                        logger.Errorf(`Failed to mark RBD storage volume for `+
                                `image "%s" on storage pool "%s" as zombie: %s`,
diff --git a/lxd/storage_ceph_utils.go b/lxd/storage_ceph_utils.go
index 4ad72aa0e..1200677dc 100644
--- a/lxd/storage_ceph_utils.go
+++ b/lxd/storage_ceph_utils.go
@@ -378,14 +378,15 @@ func cephRBDSnapshotListClones(clusterName string, poolName string,
 // creating a sparse copy of a container or when LXD updated an image and the
 // image still has dependent container clones.
 func cephRBDVolumeMarkDeleted(clusterName string, poolName string,
-       volumeName string, volumeType string, userName string) error {
+       volumeType string, oldVolumeName string, newVolumeName string,
+       userName string) error {
        _, err := shared.RunCommand(
                "rbd",
                "--id", userName,
                "--cluster", clusterName,
                "mv",
-               fmt.Sprintf("%s/%s_%s", poolName, volumeType, volumeName),
-               fmt.Sprintf("%s/zombie_%s_%s", poolName, volumeType, volumeName))
+               fmt.Sprintf("%s/%s_%s", poolName, volumeType, oldVolumeName),
+               fmt.Sprintf("%s/zombie_%s_%s", poolName, volumeType, newVolumeName))
        if err != nil {
                return err
        }
@@ -964,8 +965,10 @@ func cephContainerDelete(clusterName string, poolName string, volumeName string,
                                return 1
                        }
 
+                       newVolumeName := fmt.Sprintf("%s_%s", volumeName,
+                               uuid.NewRandom().String())
                        err := cephRBDVolumeMarkDeleted(clusterName, poolName,
-                               volumeName, volumeType, userName)
+                               volumeType, volumeName, newVolumeName, userName)
                        if err != nil {
                                logger.Errorf(`Failed to mark RBD storage `+
                                        `volume "%s" as zombie: %s`, logEntry,

From cdd0d36dde5dbcca9eddc15a5d2349d48ac6b2c5 Mon Sep 17 00:00:00 2001
From: Christian Brauner <[email protected]>
Date: Thu, 7 Sep 2017 12:08:49 +0200
Subject: [PATCH 2/2] tests: add more ceph tests

Signed-off-by: Christian Brauner <[email protected]>
---
 test/suites/storage_driver_ceph.sh | 17 +++++++++++++++++
 1 file changed, 17 insertions(+)

diff --git a/test/suites/storage_driver_ceph.sh b/test/suites/storage_driver_ceph.sh
index f0ddf9152..120f32c32 100644
--- a/test/suites/storage_driver_ceph.sh
+++ b/test/suites/storage_driver_ceph.sh
@@ -55,6 +55,23 @@ test_storage_driver_ceph() {
     lxc launch testimage c4pool2 -s "lxdtest-$(basename "${LXD_DIR}")-pool2"
     lxc list -c b c4pool2 | grep "lxdtest-$(basename "${LXD_DIR}")-pool2"
 
+    # Test whether dependency tracking is working correctly. We should be able
+    # to create a container and copy it, which leads to a dependency relation
+    # between the source container's storage volume and the copied container's
+    # storage volume. Now, we delete the source container which will trigger a
+    # rename operation and not an actual delete operation. Then we create a
+    # container of the same name as the source container again and create a
+    # copy of it to introduce another dependency relation. Now we delete the
+    # source container again. This should work. If it doesn't, it means the
+    # rename operation tries to map the two source volumes to the same name.
+    lxc init testimage a -s "lxdtest-$(basename "${LXD_DIR}")-pool1"
+    lxc copy a b
+    lxc delete a
+    lxc init testimage a -s "lxdtest-$(basename "${LXD_DIR}")-pool1"
+    lxc copy a c
+    lxc delete a
+    lxc delete c
+
     lxc storage volume create "lxdtest-$(basename "${LXD_DIR}")-pool1" c1pool1
    lxc storage volume attach "lxdtest-$(basename "${LXD_DIR}")-pool1" c1pool1 c1pool1 testDevice /opt
    ! lxc storage volume attach "lxdtest-$(basename "${LXD_DIR}")-pool1" c1pool1 c1pool1 testDevice2 /opt