[sheepdog] [PATCH] deb: escape forward slashes in DAEMON_ARGS
This allows arguments that take things like paths (e.g. -l dir=/somewhere)
to be handled properly.

Signed-off-by: Alexander Guy alexan...@andern.org
---
 debian/sheepdog.postinst | 1 +
 1 file changed, 1 insertion(+)

diff --git a/debian/sheepdog.postinst b/debian/sheepdog.postinst
index bc72f06..760ee67 100644
--- a/debian/sheepdog.postinst
+++ b/debian/sheepdog.postinst
@@ -19,6 +19,7 @@ if [ "$1" = "configure" ] ; then
 	sed -i -e "s/^[ \t]*START=.*/START=\"$SERVICE_START\"/g" /etc/default/sheepdog
 	db_get sheepdog/daemon_args
+	RET=$(echo $RET | sed -e 's:/:\\/:g')
 	sed -i -e "s/^[ \t]*DAEMON_ARGS=.*/DAEMON_ARGS=\"$RET\"/g" /etc/default/sheepdog
 fi
 db_stop || true
-- 
2.1.4
-- 
sheepdog mailing list
sheepdog@lists.wpkg.org
https://lists.wpkg.org/mailman/listinfo/sheepdog
Re: [sheepdog] [PATCH] deb: escape forward slashes in DAEMON_ARGS
At Mon, 2 Feb 2015 14:12:41 -0800, Alexander Guy wrote:
> This allows arguments that take things like paths (e.g. -l dir=/somewhere)
> to be handled properly.
>
> Signed-off-by: Alexander Guy alexan...@andern.org
> ---
>  debian/sheepdog.postinst | 1 +
>  1 file changed, 1 insertion(+)
> [...]

Applied, thanks.

Hitoshi
Re: [sheepdog] [PATCH 0/3] sheep, dog: configurable vid space
At Mon, 02 Feb 2015 18:23:03 +0900, Hitoshi Mitake wrote:
> At Mon, 2 Feb 2015 14:58:18 +0900, Takafumi Fujieda wrote:
> > Currently, deleted vids are not reused unless their relations are
> > cut. If snapshots of many online vdis in a cluster are created
> > continuously, the vid space will be exhausted.
>
> To be honest, it is hard for me to understand the motivation for
> expanding the VID space. As you pointed out, cutting the relation
> between VDIs enables fine-grained VID GC. Is this not enough?

I forgot to say that cutting the relation for online VDIs is already
possible:

$ dog vdi snapshot --no-share <VDI name>

This cuts the relation between the existing VDI (which becomes the
snapshot) and the newly created working VDI (whose name is <VDI name>).

Thanks,
Hitoshi

> > These patches make the vid space size configurable from 24 bits to
> > 26 bits by using the reserved bits in the oid. A new option "-s bits"
> > is added to "dog cluster format" to specify the vid space size. (The
> > default vid space size is 24 bits.)
> >
> > These patches do not preserve compatibility of object files and
> > cluster snapshot data created under different vid spaces. If you want
> > to change your cluster's vid space while keeping existing vdis, the
> > cluster must be shut down and all vdis should be exported using
> > qemu-img and "dog vdi backup".
> >
> > Takafumi Fujieda (3):
> >   sheep, dog: add vid space variables to the structs
> >   sheep, dog: make vid space size variable
> >   dog: add a new option to specify the vid space
> >
> >  dog/cluster.c             |  67 +++-
> >  dog/common.c              |  32 +-
> >  dog/dog.h                 |   3 +-
> >  dog/farm/farm.c           |  16 ++-
> >  dog/farm/farm.h           |   3 +-
> >  dog/node.c                |   3 +-
> >  dog/vdi.c                 |  22 ++-
> >  include/internal_proto.h  |   6 +++-
> >  include/sheepdog_proto.h  |  18 +++-
> >  sheep/config.c            |  20 -
> >  sheep/gateway.c           |  23 +---
> >  sheep/group.c             |  10 +-
> >  sheep/journal.c           |   3 +-
> >  sheep/nfs/fs.c            |   4 +-
> >  sheep/nfs/nfs.c           |   4 +-
> >  sheep/object_cache.c      |  13 +
> >  sheep/object_list_cache.c |   2 +-
> >  sheep/ops.c               |  35 +++-
> >  sheep/plain_store.c       |  25 ++---
> >  sheep/recovery.c          |  10 ---
> >  sheep/request.c           |   3 +-
> >  sheep/sheep_priv.h        |   6 +++-
> >  sheep/vdi.c               |  43 +---
> >  sheepfs/volume.c          |   2 +-
> >  24 files changed, 269 insertions(+), 104 deletions(-)
[sheepdog] [PATCH] Change tests outputs to be suitable for new cluster info format
From: Wang Dongxu wangdon...@cmss.chinamobile.com

Since commit 4fea6f95a2de90f45f90415f289083c6b29120a7, "dog cluster info"
changed its output format. Update the tests/functional expected outputs
to match the new format.

Signed-off-by: Wang Dongxu wangdon...@cmss.chinamobile.com
---
 tests/functional/001.out |  36 +++---
 tests/functional/002.out |  30 +-
 tests/functional/003.out |  30 +-
 tests/functional/004.out | 100 ++--
 tests/functional/005.out |  90
 tests/functional/007.out |  20
 tests/functional/010.out |   8 ++--
 tests/functional/025.out |  24
 tests/functional/030.out |   4 +-
 tests/functional/043.out |  82 +++---
 tests/functional/051.out |  18 +++---
 tests/functional/052.out | 116 +-
 tests/functional/053.out | 128 +++---
 tests/functional/054.out |  10 ++--
 tests/functional/055.out |  20
 tests/functional/056.out |  14 +++---
 tests/functional/057.out |  18 +++---
 tests/functional/063.out |   8 ++--
 tests/functional/064.out |  10 ++--
 tests/functional/065.out |   8 ++--
 tests/functional/066.out |  72 +-
 tests/functional/068.out |  36 +++---
 tests/functional/069.out |   8 ++--
 tests/functional/070.out |  36 +++---
 tests/functional/073.out |   4 +-
 tests/functional/085.out |   4 +-
 tests/functional/088.out |  12 ++--
 tests/functional/096.out |   8 ++--
 tests/functional/098.out |  26 +-
 29 files changed, 490 insertions(+), 490 deletions(-)

diff --git a/tests/functional/001.out b/tests/functional/001.out
index 82d27da..ef39e0a 100644
--- a/tests/functional/001.out
+++ b/tests/functional/001.out
@@ -5,29 +5,29 @@
 Cluster status: running, auto-recovery enabled
 Cluster created at DATE
-Epoch Time Version
-DATE 5 [127.0.0.1:7000, 127.0.0.1:7001, 127.0.0.1:7002]
-DATE 4 [127.0.0.1:7002]
-DATE 3 [127.0.0.1:7001, 127.0.0.1:7002]
-DATE 2 [127.0.0.1:7001]
-DATE 1 [127.0.0.1:7000, 127.0.0.1:7001]
+Epoch Time Version [Host:Port:V-Nodes,,,]
+DATE 5 [127.0.0.1:7000:128, 127.0.0.1:7001:128, 127.0.0.1:7002:128]
+DATE 4 [127.0.0.1:7002:128]
+DATE 3 [127.0.0.1:7001:128, 127.0.0.1:7002:128]
+DATE 2 [127.0.0.1:7001:128]
+DATE 1 [127.0.0.1:7000:128, 127.0.0.1:7001:128]
 Cluster status: running, auto-recovery enabled
 Cluster created at DATE
-Epoch Time Version
-DATE 5 [127.0.0.1:7000, 127.0.0.1:7001, 127.0.0.1:7002]
-DATE 4 [127.0.0.1:7002]
-DATE 3 [127.0.0.1:7001, 127.0.0.1:7002]
-DATE 2 [127.0.0.1:7001]
-DATE 1 [127.0.0.1:7000, 127.0.0.1:7001]
+Epoch Time Version [Host:Port:V-Nodes,,,]
+DATE 5 [127.0.0.1:7000:128, 127.0.0.1:7001:128, 127.0.0.1:7002:128]
+DATE 4 [127.0.0.1:7002:128]
+DATE 3 [127.0.0.1:7001:128, 127.0.0.1:7002:128]
+DATE 2 [127.0.0.1:7001:128]
+DATE 1 [127.0.0.1:7000:128, 127.0.0.1:7001:128]
 Cluster status: running, auto-recovery enabled
 Cluster created at DATE
-Epoch Time Version
-DATE 5 [127.0.0.1:7000, 127.0.0.1:7001, 127.0.0.1:7002]
-DATE 4 [127.0.0.1:7002]
-DATE 3 [127.0.0.1:7001, 127.0.0.1:7002]
-DATE 2 [127.0.0.1:7001]
-DATE 1 [127.0.0.1:7000, 127.0.0.1:7001]
+Epoch Time Version [Host:Port:V-Nodes,,,]
+DATE 5 [127.0.0.1:7000:128, 127.0.0.1:7001:128, 127.0.0.1:7002:128]
+DATE 4 [127.0.0.1:7002:128]
+DATE 3 [127.0.0.1:7001:128, 127.0.0.1:7002:128]
+DATE 2 [127.0.0.1:7001:128]
+DATE 1 [127.0.0.1:7000:128, 127.0.0.1:7001:128]
diff --git a/tests/functional/002.out b/tests/functional/002.out
index ce99957..0efa4be 100644
--- a/tests/functional/002.out
+++ b/tests/functional/002.out
@@ -5,26 +5,26 @@
 Cluster status: running, auto-recovery enabled
 Cluster created at DATE
-Epoch Time Version
-DATE 4 [127.0.0.1:7000, 127.0.0.1:7001, 127.0.0.1:7002]
-DATE 3 [127.0.0.1:7002]
-DATE 2 [127.0.0.1:7001, 127.0.0.1:7002]
-DATE 1 [127.0.0.1:7000, 127.0.0.1:7001, 127.0.0.1:7002]
+Epoch Time Version [Host:Port:V-Nodes,,,]
+DATE 4 [127.0.0.1:7000:128, 127.0.0.1:7001:128, 127.0.0.1:7002:128]
+DATE 3 [127.0.0.1:7002:128]
+DATE 2 [127.0.0.1:7001:128, 127.0.0.1:7002:128]
+DATE 1 [127.0.0.1:7000:128, 127.0.0.1:7001:128, 127.0.0.1:7002:128]
 Cluster status: running, auto-recovery enabled
 Cluster created at DATE
-Epoch Time Version
-DATE 4 [127.0.0.1:7000, 127.0.0.1:7001, 127.0.0.1:7002]
-DATE 3 [127.0.0.1:7002]
-DATE 2 [127.0.0.1:7001, 127.0.0.1:7002]
-DATE 1 [127.0.0.1:7000, 127.0.0.1:7001, 127.0.0.1:7002]
+Epoch Time Version [Host:Port:V-Nodes,,,]
+DATE 4 [127.0.0.1:7000:128, 127.0.0.1:7001:128, 127.0.0.1:7002:128]
+DATE 3 [127.0.0.1:7002:128]
+DATE 2 [127.0.0.1:7001:128, 127.0.0.1:7002:128]
Re: [sheepdog] [PATCH] Change tests outputs to be suitable for new cluster info format
On Tue, Feb 03, 2015 at 12:19:12PM +0800, Wang Dongxu wrote:
> Since commit 4fea6f95a2de90f45f90415f289083c6b29120a7, "dog cluster
> info" changed its output format. Update the tests/functional expected
> outputs to match the new format.
>
> Signed-off-by: Wang Dongxu wangdon...@cmss.chinamobile.com

Applied, since this is an easy and obvious fix for the tests.

Thanks
Yuan
Re: [sheepdog] [PATCH v2 3/4] sheep, dog: fast deep copy for snapshot
On Mon, Jan 19, 2015 at 07:28:53PM +0900, Hitoshi Mitake wrote:
> "dog vdi snapshot <vdi> --no-share" has a bottleneck: the dog process.
> This patch adds a new option --fast-deep-copy to "dog vdi snapshot",
> which avoids the bottleneck.

It seems to me that --no-share and --fast-deep-copy are related: they
both try to achieve the same purpose, right? But the wording differs
quite a bit, which might make it hard for users to understand.

> If the option is passed to the dog command, the actual copying of data
> objects is done by the sheep processes, so the dog process isn't a
> bottleneck in this case.

If sheep is busy, e.g. busy with VMs or in a recovery state, will this
option worsen the situation? That is, make the slow VMs slower, and in
return make this command execute slower and slower.

Thanks
Yuan
Re: [sheepdog] [PATCH] sheep: handle VID overflow correctly
On Mon, Feb 02, 2015 at 06:18:52PM +0900, Hitoshi Mitake wrote:
> Current sheep cannot handle a case like this:
> 1. iterate snapshot creation and let the latest working VDI have VID 0xff
> 2. create one more snapshot
>
> (The situation can be reproduced with the below sequence:
> $ dog vdi create 00471718 1G
> $ dog vdi snapshot 00471718   (repeat 7 times)
> )
>
> In this case, the new VID becomes 0x00. Current fill_vdi_info() and
> fill_vdi_info_range() cannot handle this case, for the below two
> reasons:
>
> 1. Recent change 00ecfb24ee46f2 introduced a bug which breaks
>    fill_vdi_info_range() in a case of underflow of its variable i.

I haven't yet looked closely into this problem, but from this
description, shouldn't we fix the bug somewhere else? Because:
1. fill_vdi_info_range() ran well in the past, so we'd better avoid
   modifying it to fix a bug introduced by other patch(es).
2. You mentioned it was introduced by 00ecfb24ee46f2, so we might
   better fix 00ecfb24ee46f2.

It would be great if you could elaborate on the bug: what it is, why it
breaks fill_vdi_info_range(), and whether modifying
fill_vdi_info_range() is the only way to fix the problem.

> 2. fill_vdi_info_range() assumes that its parameters, left and right,
>    are obtained from get_vdi_bitmap_range(). get_vdi_bitmap_range()
>    obtains left and right which mean the range of existing VIDs is
>    [left, right), in other words, [left, right - 1]. So
>    fill_vdi_info_range() starts checking from right - 1 down to left.
>    But this means fill_vdi_info_range() cannot check VID 0xff even if
>    VID overflow happens.
>
> So this patch lets fill_vdi_info_range() check from right down to
> left, and also changes the callers accordingly (they pass left and
> right - 1 in ordinary cases).
>
> Signed-off-by: Hitoshi Mitake mitake.hito...@lab.ntt.co.jp
> ---
>  sheep/vdi.c | 30 +++---
>  1 file changed, 23 insertions(+), 7 deletions(-)
>
> diff --git a/sheep/vdi.c b/sheep/vdi.c
> index 2889df6..3d14ccf 100644
> --- a/sheep/vdi.c
> +++ b/sheep/vdi.c
> @@ -1404,9 +1404,11 @@ static int fill_vdi_info_range(uint32_t left, uint32_t right,
>  		ret = SD_RES_NO_MEM;
>  		goto out;
>  	}
> -	for (i = right - 1; i < right && i >= left; i--) {
> +
> +	i = right;
> +	while (i >= left) {
>  		if (!test_bit(i, sys->vdi_inuse))
> -			continue;
> +			goto next;
> [...]
> @@ -1438,6 +1440,10 @@ static int fill_vdi_info_range(uint32_t left, uint32_t right,
>  		info->vid = inode->vdi_id;
>  		goto out;
>  	}
> +next:
> +		if (!i)
> +			break;
> +		i--;

I don't have the whole picture yet, but just for this fragment:
changing the for loop into gotos is not so good for code clarity.

Thanks
Yuan
Re: [sheepdog] [PATCH] sheep: handle VID overflow correctly
At Tue, 3 Feb 2015 11:55:34 +0800, Liu Yuan wrote:
> On Mon, Feb 02, 2015 at 06:18:52PM +0900, Hitoshi Mitake wrote:
> > Current sheep cannot handle a case like this:
> > 1. iterate snapshot creation and let the latest working VDI have
> >    VID 0xff
> > 2. create one more snapshot
> > [...]
> > 1. Recent change 00ecfb24ee46f2 introduced a bug which breaks
> >    fill_vdi_info_range() in a case of underflow of its variable i.
>
> I haven't yet looked closely into this problem, but from this
> description, shouldn't we fix the bug somewhere else? [...] It would
> be great if you could elaborate on the bug: what it is, why it breaks
> fill_vdi_info_range(), and whether modifying fill_vdi_info_range() is
> the only way to fix the problem.

The problem is related to --no-share for snapshots; its design seems to
have a problem. I'll redesign it and refine this patch as part of the
rework.

Thanks,
Hitoshi

> > 2. fill_vdi_info_range() assumes that its parameters, left and
> >    right, are obtained from get_vdi_bitmap_range(). [...]
> > So this patch lets fill_vdi_info_range() check from right down to
> > left, and also changes the callers accordingly.
> > [...]
>
> I don't have the whole picture yet, but just for this fragment:
> changing the for loop into gotos is not so good for code clarity.
>
> Thanks
> Yuan
Re: [sheepdog] [PATCH v2 3/4] sheep, dog: fast deep copy for snapshot
On 2015-02-03 04:47, Liu Yuan wrote:
> It seems to me that --no-share and --fast-deep-copy are related: they
> both try to achieve the same purpose, right? But the wording differs
> quite a bit, which might make it hard for users to understand.

How about --no-share and --no-share-fast?

Cheers
Bastian
[sheepdog] Redundancy policy via iSCSI
Hi Hitoshi,

Sorry to disturb you. I'm testing the redundancy policy of sheepdog via
iSCSI. I think if I create a 1G v-disk, the total space cost of this
device should be 3*1G under a 3-copies policy. But after testing, I find
the cost of this device is only 1G; it seems no additional copy is
created. I don't know what happened. I'd like to show my configurations
and hope you could take some time to help me. Many thanks!

linux-rme9:/mnt # dog cluster info
Cluster status: running, auto-recovery enabled
Cluster created at Tue Feb 3 23:07:13 2015
Epoch Time           Version [Host:Port:V-Nodes,,,]
2015-02-03 23:07:13      1 [130.1.0.147:7000:128, 130.1.0.148:7000:128, 130.1.0.149:7000:128]

linux-rme9:/mnt # dog node list
  Id   Host:Port          V-Nodes   Zone
   0   130.1.0.147:7000       128      0
   1   130.1.0.148:7000       128      0
   2   130.1.0.149:7000       128      0

linux-rme9:/mnt # dog vdi list
  Name   Id   Size    Used    Shared   Creation time     VDI id   Copies  Tag   Block Size Shift
  Hu0     0   1.0 GB  1.0 GB  0.0 MB   2015-02-03 23:12  6e7762        3            22

linux-rme9:/mnt # dog node info
Id      Size    Used    Avail   Use%
 0      261 GB  368 MB  260 GB    0%
 1      261 GB  336 MB  261 GB    0%
 2      261 GB  320 MB  261 GB    0%
Total   783 GB  1.0 GB  782 GB    0%

Total virtual image size        1.0 GB

linux-rme9:/mnt # tgtadm --op show --mode target
Target 1: iqn.2015.01.org.sheepdog
    System information:
        Driver: iscsi
        State: ready
    I_T nexus information:
        I_T nexus: 3
            Initiator: iqn.1996-04.de.suse:01:23a8f73738e7 alias: Fs-Server
            Connection: 0
                IP Address: 130.1.0.10
    LUN information:
        LUN: 0
            Type: controller
            SCSI ID: IET 0001
            SCSI SN: beaf10
            Size: 0 MB, Block size: 1
            Online: Yes
            Removable media: No
            Prevent removal: No
            Readonly: No
            SWP: No
            Thin-provisioning: No
            Backing store type: null
            Backing store path: None
            Backing store flags:
        LUN: 1
            Type: disk
            SCSI ID: IET 00010001
            SCSI SN: beaf11
            Size: 1074 MB, Block size: 512
            Online: Yes
            Removable media: No
            Prevent removal: No
            Readonly: No
            SWP: No
            Thin-provisioning: No
            Backing store type: sheepdog
            Backing store path: tcp:130.1.0.147:7000:Hu0
            Backing store flags:
    Account information:
    ACL information:
        ALL

Client:
# iscsiadm -m node --targetname iqn.2015.01.org.sheepdog --portal 130.1.0.147:3260 --rescan
Rescanning session [sid: 4, target: iqn.2015.01.org.sheepdog, portal: 130.1.0.147,3260]
# dd if=/dev/random of=/dev/sdg bs=2M
dd: writing `/dev/sdg': No space left on device
0+13611539 records in
0+13611538 records out
1073741824 bytes (1.1 GB) copied, 956.511 s, 1.1 MB/s

Thanks!
Hu
Re: [sheepdog] [PATCH] sheep: fix vid wrap around
Sorry, this patch didn't work well; please ignore it. But there is still
a problem with vid wrap-around.

From: fukumoto.yoshif...@lab.ntt.co.jp
To: sheepdog@lists.wpkg.org
Cc: FUKUMOTO Yoshifumi fukumoto.yoshif...@lab.ntt.co.jp
Subject: [sheepdog] [PATCH] sheep: fix vid wrap around
Message-ID: 1422855383-1231-1-git-send-email-fukumoto.yoshif...@lab.ntt.co.jp

From: FUKUMOTO Yoshifumi fukumoto.yoshif...@lab.ntt.co.jp

If the vid of a vdi reaches the maximum of the vid space, creating a
snapshot of that vdi fails. Example:

$ dog vdi create 00471718 1G
$ dog vdi snapshot 00471718   (repeat 7 times)
failed to read a response
Failed to create snapshot for 00471718: I/O error

This patch fixes the problem.

Signed-off-by: FUKUMOTO Yoshifumi fukumoto.yoshif...@lab.ntt.co.jp
---
 sheep/vdi.c | 17 ++++++++++++++---
 1 file changed, 14 insertions(+), 3 deletions(-)

diff --git a/sheep/vdi.c b/sheep/vdi.c
index 2889df6..4113a4f 100644
--- a/sheep/vdi.c
+++ b/sheep/vdi.c
@@ -1398,15 +1398,20 @@ static int fill_vdi_info_range(uint32_t left, uint32_t right,
 	uint32_t i;
 	const char *name = iocb->name;
 
+	if (!right)
+		return SD_RES_NO_VDI;
 	inode = malloc(SD_INODE_HEADER_SIZE);
 	if (!inode) {
 		sd_err("failed to allocate memory");
 		ret = SD_RES_NO_MEM;
 		goto out;
 	}
-	for (i = right - 1; i < right && i >= left; i--) {
-		if (!test_bit(i, sys->vdi_inuse))
+	for (i = right - 1; i >= left; i--) {
+		if (!test_bit(i, sys->vdi_inuse)) {
+			if (!i)
+				break;
 			continue;
+		}
 
 		ret = sd_read_object(vid_to_vdi_oid(i), (char *)inode,
 				     SD_INODE_HEADER_SIZE, 0);
@@ -1420,9 +1425,13 @@ static int fill_vdi_info_range(uint32_t left, uint32_t right,
 			/* Read, delete, clone on snapshots */
 			if (!vdi_is_snapshot(inode)) {
 				vdi_found = true;
+				if (!i)
+					break;
 				continue;
 			}
 			if (!vdi_tag_match(iocb, inode))
+				if (!i)
+					break;
 				continue;
 		} else {
 			/*
@@ -1438,6 +1447,8 @@ static int fill_vdi_info_range(uint32_t left, uint32_t right,
 		info->vid = inode->vdi_id;
 		goto out;
 	}
+		if (!i)
+			break;
 	}
 
 	ret = vdi_found ? SD_RES_NO_TAG : SD_RES_NO_VDI;
 out:
@@ -1458,7 +1469,7 @@ static int fill_vdi_info(unsigned long left, unsigned long right,
 	switch (ret) {
 	case SD_RES_NO_VDI:
 	case SD_RES_NO_TAG:
-		ret = fill_vdi_info_range(left, SD_NR_VDIS - 1, iocb, info);
+		ret = fill_vdi_info_range(left, SD_NR_VDIS, iocb, info);
 		break;
 	default:
 		break;
-- 
1.9.1
[sheepdog] [PATCH] sheep: handle VID overflow correctly
Current sheep cannot handle a case like this:
1. iterate snapshot creation and let the latest working VDI have VID 0xff
2. create one more snapshot

(The situation can be reproduced with the below sequence:
$ dog vdi create 00471718 1G
$ dog vdi snapshot 00471718   (repeat 7 times)
)

In this case, the new VID becomes 0x00. Current fill_vdi_info() and
fill_vdi_info_range() cannot handle this case, for the below two reasons:

1. Recent change 00ecfb24ee46f2 introduced a bug which breaks
   fill_vdi_info_range() in a case of underflow of its variable i.
2. fill_vdi_info_range() assumes that its parameters, left and right,
   are obtained from get_vdi_bitmap_range(). get_vdi_bitmap_range()
   obtains left and right which mean the range of existing VIDs is
   [left, right), in other words, [left, right - 1]. So
   fill_vdi_info_range() starts checking from right - 1 down to left.
   But this means fill_vdi_info_range() cannot check VID 0xff even if
   VID overflow happens.

So this patch lets fill_vdi_info_range() check from right down to left,
and also changes the callers accordingly (they pass left and right - 1
in ordinary cases).

Signed-off-by: Hitoshi Mitake mitake.hito...@lab.ntt.co.jp
---
 sheep/vdi.c | 30 +++++++++++++++++++++---------
 1 file changed, 23 insertions(+), 7 deletions(-)

diff --git a/sheep/vdi.c b/sheep/vdi.c
index 2889df6..3d14ccf 100644
--- a/sheep/vdi.c
+++ b/sheep/vdi.c
@@ -1404,9 +1404,11 @@ static int fill_vdi_info_range(uint32_t left, uint32_t right,
 		ret = SD_RES_NO_MEM;
 		goto out;
 	}
-	for (i = right - 1; i < right && i >= left; i--) {
+
+	i = right;
+	while (i >= left) {
 		if (!test_bit(i, sys->vdi_inuse))
-			continue;
+			goto next;
 
 		ret = sd_read_object(vid_to_vdi_oid(i), (char *)inode,
 				     SD_INODE_HEADER_SIZE, 0);
@@ -1420,10 +1422,10 @@ static int fill_vdi_info_range(uint32_t left, uint32_t right,
 			/* Read, delete, clone on snapshots */
 			if (!vdi_is_snapshot(inode)) {
 				vdi_found = true;
-				continue;
+				goto next;
 			}
 			if (!vdi_tag_match(iocb, inode))
-				continue;
+				goto next;
 		} else {
 			/*
 			 * Rollback snap create, read, delete on
@@ -1438,6 +1440,10 @@ static int fill_vdi_info_range(uint32_t left, uint32_t right,
 		info->vid = inode->vdi_id;
 		goto out;
 	}
+next:
+		if (!i)
+			break;
+		i--;
 	}
 
 	ret = vdi_found ? SD_RES_NO_TAG : SD_RES_NO_VDI;
 out:
@@ -1452,13 +1458,23 @@ static int fill_vdi_info(unsigned long left, unsigned long right,
 	int ret;
 
 	if (left < right)
-		return fill_vdi_info_range(left, right, iocb, info);
+		return fill_vdi_info_range(left, right - 1, iocb, info);
+
+	if (!right)
+		/*
+		 * a special case: right == 0.
+		 * Because the variables left and right have values obtained
+		 * by get_vdi_bitmap_range(), they mean the used bitmap range
+		 * is [left, right). If right == 0, it means the used bitmap
+		 * range is [left, SD_NR_VDIS].
+		 */
+		return fill_vdi_info_range(left, SD_NR_VDIS, iocb, info);
 
-	ret = fill_vdi_info_range(0, right, iocb, info);
+	ret = fill_vdi_info_range(0, right - 1, iocb, info);
 	switch (ret) {
 	case SD_RES_NO_VDI:
 	case SD_RES_NO_TAG:
-		ret = fill_vdi_info_range(left, SD_NR_VDIS - 1, iocb, info);
+		ret = fill_vdi_info_range(left, SD_NR_VDIS, iocb, info);
 		break;
 	default:
 		break;
-- 
1.9.1
Re: [sheepdog] [PATCH 0/3] sheep, dog: configurable vid space
At Mon, 2 Feb 2015 14:58:18 +0900, Takafumi Fujieda wrote:
> Currently, deleted vids are not reused unless their relations are cut.
> If snapshots of many online vdis in a cluster are created
> continuously, the vid space will be exhausted.

To be honest, it is hard for me to understand the motivation for
expanding the VID space. As you pointed out, cutting the relation
between VDIs enables fine-grained VID GC. Is this not enough?

Thanks,
Hitoshi

> These patches make the vid space size configurable from 24 bits to 26
> bits by using the reserved bits in the oid. A new option "-s bits" is
> added to "dog cluster format" to specify the vid space size. (The
> default vid space size is 24 bits.)
>
> These patches do not preserve compatibility of object files and
> cluster snapshot data created under different vid spaces. If you want
> to change your cluster's vid space while keeping existing vdis, the
> cluster must be shut down and all vdis should be exported using
> qemu-img and "dog vdi backup".
>
> Takafumi Fujieda (3):
>   sheep, dog: add vid space variables to the structs
>   sheep, dog: make vid space size variable
>   dog: add a new option to specify the vid space
> [...]