Change the output of functional tests using __vdi_list
in association with adding block_size_shift information
to the vdi list command.
Signed-off-by: Teruaki Ishizaki ishizaki.teru...@lab.ntt.co.jp
---
 tests/functional/016.out |  2 +-
 tests/functional/029.out | 18 +++---
 tests/functional/030.out |
This patch fixes a bug where the block_size_shift info was forgotten
after shutting down the cluster and starting sheepdog again.
Add the block_size_shift info to the cluster config file.
Signed-off-by: Teruaki Ishizaki ishizaki.teru...@lab.ntt.co.jp
---
 sheep/config.c | 6 ++++--
 1 files changed, 4 insertions(+), 2 deletions(-)
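For anyone who wants to reproduce the problem this patch fixes, a rough transcript of the scenario is below. It is only a sketch under assumptions: the store path is a placeholder, and the exact option used to pass block_size_shift to 'dog cluster format' is not confirmed here (the patch adding that option is mentioned later in this digest), so treat the flags as illustrative.
$ dog cluster format -c 3 -z 22       # assumed flag for a non-default block_size_shift
$ dog vdi create test 8M              # VDI created with the chosen object size
$ dog cluster shutdown                # shut the whole cluster down
$ sheep /var/lib/sheepdog/0           # start sheepdog again (placeholder store path)
$ dog cluster info                    # before this fix, the block_size_shift info was forgotten here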
Hi Hitoshi,
sorry, this is my second try at sending this to the list...
On 2014-12-16 10:51, Hitoshi Mitake wrote:
If I remove the VDI lock, the live migration works correctly:
$ dog vdi lock unlock test-vm-disk
but after the live migration I can't relock the VDI.
Thanks for your report. As you say, live
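For readers following along, the workaround described above looks roughly like this. It is only a sketch: 'dog vdi lock list' is an assumption on my part (only 'dog vdi lock unlock' appears in the report), and the VDI name is taken from the report.
$ dog vdi lock list                   # assumed subcommand: see which VDIs are locked
$ dog vdi lock unlock test-vm-disk    # drop the lock so live migration can proceed
# ... perform the live migration ...
# relocking the VDI after the migration is the part reported as broken above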
2014-12-15 10:36 GMT+01:00 Hitoshi Mitake mitake.hito...@lab.ntt.co.jp:
Current sheepdog never recycles VIDs. But this will cause problems,
e.g. VID space exhaustion and too many garbage inode objects.
I've been testing this branch and it seems to work.
I use a script that creates 3 VDIs and 3 snapshots.
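For context, a minimal sketch of the kind of test script described above, assuming the usual 'dog vdi snapshot -s <tag>' syntax; names and sizes are made up:
#!/bin/sh
# create 3 VDIs and take one snapshot of each
for i in 1 2 3; do
    dog vdi create "test$i" 1G
    dog vdi snapshot -s "snap$i" "test$i"
done
dog vdi list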
2014-12-11 8:00 GMT+01:00 Hitoshi Mitake mitake.hito...@lab.ntt.co.jp:
The current recovery process can cause revival of orphan objects. This
patch solves that problem.
sheep -v
Sheepdog daemon version 0.9.0_18_g7215788
It works fine!
Dec 16 15:00:20 INFO [main] main(966) shutdown
Dec 16
2014-12-16 15:07 GMT+01:00 Valerio Pachera siri...@gmail.com:
It works fine!
...
There's only this corner case to fix:
all VDIs are removed, then the disconnected node joins back into the cluster.
Please note that the same logic should apply to multi-device:
create some VDIs
unplug a disk
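To make the corner case easier to reproduce, here is a rough transcript of the sequence being described. Node ports, store paths and the way the node is disconnected are placeholders; the multi-device variant would replace the node kill/restart with unplugging one of its disks.
$ dog vdi create test1 1G                  # create some VDIs
$ dog vdi create test2 1G
$ pkill -f '/var/lib/sheepdog/node3'       # disconnect one node (placeholder match)
$ dog vdi delete test1                     # remove all VDIs while the node is away
$ dog vdi delete test2
$ sheep -p 7003 /var/lib/sheepdog/node3    # the disconnected node joins back the cluster
$ dog vdi list                             # the deleted VDIs should not come back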
See http://jenkins.sheepdog-project.org:8080/job/sheepdog-build/574/changes
Changes:
[mitake.hitoshi] sheep, dog: add block_size_shift option to cluster format
command
[mitake.hitoshi] sheep, dog: add selectable object_size support of VDI operation
[mitake.hitoshi] dog: revert the change for
From: Hitoshi Mitake mitake.hitoshi@lab.ntt.co.jp
To: Teruaki Ishizaki ishizaki.teruaki@lab.ntt.co.jp
Cc: sheepdog@lists.wpkg.org
Subject: Re: [sheepdog] [PATCH] func/test: change functional test output for __vdi_list
In-Reply-To: 1418725227-20464-1-git-send-email-ishizaki.teruaki@lab.ntt.co.jp
- Name        Id    Size    Used  Shared    Creation time   VDI id  Copies  Tag
- test         0  8.0 MB  8.0 MB  0.0 MB  DATE   7c2b25      1
- Name        Id    Size    Used  Shared    Creation time   VDI id  Copies  Tag
- test         0  8.0 MB  8.0 MB  0.0 MB  DATE
At Tue, 16 Dec 2014 19:32:17 +0900,
Teruaki Ishizaki wrote:
This patch fixes a bug where the block_size_shift info was forgotten
after shutting down the cluster and starting sheepdog again.
Add the block_size_shift info to the cluster config file.
Signed-off-by: Teruaki Ishizaki ishizaki.teru...@lab.ntt.co.jp
---
At Tue, 16 Dec 2014 15:18:18 +0100,
Valerio Pachera wrote:
2014-12-16 15:07 GMT+01:00 Valerio Pachera siri...@gmail.com:
It works fine!
...
There's only this corner case to fix:
all VDIs are removed, then the disconnected node joins back into the cluster.
Please note that the same
Current dog prints an odd error message in the case of VDI creation
before cluster formatting, like below:
$ dog/dog vdi create test 16M
VDI size is larger than 1.0 MB bytes, please use '-y' to create a hyper volume
with size up to 16 PB bytes or use '-z' to create larger object size volume
This
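For context, the confusing sequence looks like this (the store path is a placeholder); the point is that the size warning is printed even though the real problem is that the cluster has not been formatted yet:
$ sheep /var/lib/sheepdog/0           # start a daemon, but never run 'dog cluster format'
$ dog/dog vdi create test 16M
VDI size is larger than 1.0 MB bytes, please use '-y' to create a hyper volume
with size up to 16 PB bytes or use '-z' to create larger object size volume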
At Tue, 16 Dec 2014 15:11:49 +0100,
Valerio Pachera wrote:
2014-12-11 8:00 GMT+01:00 Hitoshi Mitake mitake.hito...@lab.ntt.co.jp:
The current recovery process can cause revival of orphan objects. This
patch solves that problem.
sheep -v
Sheepdog daemon version 0.9.0_18_g7215788
It works
(2014/12/17 10:48), Hitoshi Mitake wrote:
Current dog prints an odd error message in the case of VDI creation
before cluster formatting, like below:
$ dog/dog vdi create test 16M
VDI size is larger than 1.0 MB bytes, please use '-y' to create a hyper
volume with size up to 16 PB bytes or use
At Wed, 17 Dec 2014 12:32:08 +0900,
Teruaki Ishizaki wrote:
(2014/12/17 10:48), Hitoshi Mitake wrote:
Current dog prints an odd error message in the case of VDI creation
before cluster formatting, like below:
$ dog/dog vdi create test 16M
VDI size is larger than 1.0 MB bytes, please use
At Mon, 15 Dec 2014 23:14:55 +0900,
Hitoshi Mitake wrote:
When a cluster has gateway nodes only, it means the gateway nodes
don't contribute to I/O of VMs. So this patch simply lets them exit
and avoids the recovery issue below.
Related issue:
At Tue, 16 Dec 2014 12:28:29 +0100,
Valerio Pachera wrote:
2014-12-15 10:36 GMT+01:00 Hitoshi Mitake mitake.hito...@lab.ntt.co.jp:
Current sheepdog never recycles VIDs. But this will cause problems,
e.g. VID space exhaustion and too many garbage inode objects.
I've been testing this branch and
Hi Hitoshi,
we've tested the patch. Our test method is:
We attached a 20G sheepdog VDI to a VM managed by OpenStack, and we created
a 2G file in the VDI whose md5 we have in hand.
We killed the non-gateway nodes in the middle of the process, then
restarted the cluster. The process resumed
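A rough sketch of the in-guest check being described, assuming the 20G VDI shows up as /dev/vdb inside the VM (device name, mount point and file names are assumptions):
# inside the VM
$ mkfs.ext4 /dev/vdb && mount /dev/vdb /mnt
$ dd if=/dev/urandom of=/tmp/data bs=1M count=2048    # 2G file whose md5 we keep in hand
$ md5sum /tmp/data > /tmp/data.md5
$ cp /tmp/data /mnt/data                              # write it onto the sheepdog-backed disk
# ... kill the non-gateway nodes mid-copy, restart the cluster, let the copy resume ...
$ md5sum /mnt/data && cat /tmp/data.md5               # compare checksums afterwards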
At Wed, 17 Dec 2014 15:40:35 +0800,
$B=y.$(AAz(B wrote:
Hi Hitoshi,
we've tested the patch. Our test method is:
We attached a 20G sheepdog VDI to a VM managed by OpenStack, and we created
a 2G file in the VDI whose md5 we have in hand.
We killed the
2014-12-17 3:48 GMT+01:00 Hitoshi Mitake mitake.hito...@lab.ntt.co.jp:
There's only this corner case to fix:
all VDIs are removed, then the disconnected node joins back into the cluster.
Do you mean the problem is the error messages below?
The problem is that:
node 1, 2, 3, 4
create vdi
At Wed, 17 Dec 2014 16:42:27 +0900,
Hitoshi Mitake wrote:
At Wed, 17 Dec 2014 15:40:35 +0800,
$B=y.$(AAz(B wrote:
Hi Hitoshi,
we've tested the patch. Our test method is:
We attached a 20G sheepdog VDI to a VM managed by OpenStack, and we created
At Wed, 17 Dec 2014 08:50:35 +0100,
Valerio Pachera wrote:
2014-12-17 3:48 GMT+01:00 Hitoshi Mitake mitake.hito...@lab.ntt.co.jp:
There's only this corner case to fix:
all VDIs are removed, then the disconnected node joins back into the cluster.
Do you mean the problem is the below error