On 2015-02-03 09:54, hujianyang wrote:
I'm not clear with this. Just run "sheep /mnt/store/0 -z 0 -p 7000"
on each host.
I guess this is your problem. Sheep replicates across different zones;
since you have only one zone defined, sheep doesn't replicate the data,
it only distributes the data across y
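A minimal sketch of what that would look like with distinct zones (store path and port taken from the command above; zone IDs and host labels are illustrative):
$ sheep /mnt/store/0 -z 0 -p 7000   # host A, zone 0
$ sheep /mnt/store/0 -z 1 -p 7000   # host B, zone 1
$ sheep /mnt/store/0 -z 2 -p 7000   # host C, zone 2
With each host in its own zone, sheep can place the copies of an object in different zones.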
On 2015-02-03 04:47, Liu Yuan wrote:
It seems to me --no-share and --fast-deep-copy are related; they both try
to achieve the same purpose, right? But the wording differs quite a bit,
which might cause trouble for users to understand.
How about --no-share and --no-share-fast?
Cheers
Bastian
Hi Hitoshi,
sorry, second try to send to the list...
On 2014-12-16 10:51, Hitoshi Mitake wrote:
If I remove the VDI lock, the live migration works correctly:
$ dog vdi lock unlock test-vm-disk
but after the live migration I can't relock the VDI.
Thanks for your report. As you say, live mig
Your hosts are 10.198.3.141 and 10.198.4.108 and are
not in the same IP subnet. (Are you able to ping
these hosts from each other at all?)
Your corosync listens on 192.168.1.1, which does not seem
to be a local interface on your hosts.
And your sheep processes listen on 127.0.0.1, which
is the loopback interfa
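Two quick checks that follow from this (addresses taken from the report; the corosync config path is the usual default and may differ):
$ ping -c 3 10.198.4.108                             # run on 10.198.3.141: the hosts must reach each other
$ grep -A 5 'interface' /etc/corosync/corosync.conf  # bindnetaddr must be a network that exists locally on every host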
Hi,
would it be possible to include this one in
the 0.7-stable tree?
Cheers
Bastian
On 2013-12-17 14:26, Liu Yuan wrote:
...
Dec 17 19:04:27 ERROR [net 11088] do_read(220) connection is closed
(48 bytes left)
Dec 17 19:04:27 ERROR [net 11088] rx_work(684) failed to read a header
...
This is quite a
Hi Hitoshi,
On 2013-10-21 10:03, Hitoshi Mitake wrote:
Thanks a lot for reporting the problem. The cause of the error is
that
"make deb" assumes it is executed inside a checkout of the sheepdog git
repository.
If you need the deb package soon, could you try the below commands?
$ git clone http
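A sketch of the suggested workflow (the repository URL, tag, and autotools bootstrap are assumptions here, since the original command is cut off above):
$ git clone https://github.com/sheepdog/sheepdog.git
$ cd sheepdog
$ git checkout v0.7.4
$ ./autogen.sh && ./configure
$ make deb
The point of Hitoshi's suggestion is simply that "make deb" is run from inside the git checkout.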
Hi Hitoshi,
I tested version v0.7.4 today, but make deb
seems to be broken (earlier versions not
tested). The test environment is a Debian Wheezy
x64.
When calling "make deb" I get this error.
[...]
Making distclean in .
make[3]: Entering directory `/usr/src/sheepdog/sheepdog-0.7.4'
rm -rf sheepdog.spec
OKAY...
call me sheep-ripper...
On 2012-10-25 10:08, MORITA Kazutaka wrote:
So far, I've not encountered situations where my patch shows worse
performance. In most cases, queue_work is called only from one
thread, so serializing at queue_work is unlikely to be a problem.
Another contention i
On 2012-10-22 08:43, MORITA Kazutaka wrote:
Yes, we need more numbers with various conditions to change the
design. (I like this patch implementation, which uses the same code
with ordered work queue, though.)
I'm thinking of trying it, but I wish more users would test it too.
Hi Kazutaka,
If I
Hi Kazutaka,
this patch works fine for me.
Thanks
Bastian
On 2012-10-08 18:35, MORITA Kazutaka wrote:
SD_PROTO_VER is a protocol version between sheep and client, so the
check of SD_PROTO_VER_TRIM_ZERO_SECTORS must be in gateway_read_obj,
not peer_read_obj.
Signed-off-by: MORITA Kazutaka
Hi,
maybe a minor bug?
I am using the latest devel branch (version 0.4.0_194_g1d2ae7e)
and get the following output from collie node info...
# collie node info
Id   Size     Used     Use%
0    0.0 MB   0.0 MB   -2147483648%
1    0.0 MB   0.0 MB   -2147483648%
2    434 GB   115 GB   26%
[...]
Second try, this time to the list ;-)
Hi Kazutaka,
On 2012-08-25 20:09, MORITA Kazutaka wrote:
We shouldn't remove objects until object recovery completely
finishes.
With this patch, even if we wrongly stop more sheep than the
redundancy level at the same time, sheepdog can recover objects
Hi Dietmar, Hi Yuan,
On 2012-08-21 07:27, Dietmar Maurer wrote:
Membership change can happen for many reasons. It can happen if
something is
wrong on the switch (or if some admin configures the switch), a
damaged network cable,
a bug in the bonding driver, a damaged network card, or simply a
On 2012-07-26 23:06, David Douard wrote:
I've put a modified version of this in the wiki.
Never kill more than X sheep daemons (X being the number
of copies you formatted your cluster with) at a time.
Technically it is a little more complex; you can kill more
than X sheep, but avoid ki
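For reference, X here is the copies value given when the cluster was formatted; a sketch (flag form assumed from the collie documentation of that era):
$ collie cluster format --copies=3   # X = 3 in the wording above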
On 2012-06-08 14:46, David Douard wrote:
Hi, just asking: shouldn't this question (and the "Dead sheeps" one)
better be sent to the sheepdog-users mailing list? (I mean, we **should**
begin to use it).
I think you are right; if nobody disagrees, please
send answers to my questions to sheepdog-use
On 2012-06-07 12:07, Yibin Shen wrote:
On Thu, Jun 7, 2012 at 5:38 PM, Liu Yuan
wrote:
Maybe you refer to object cache?
Yes, sorry, I mixed those two up...
Without object cache, the answer is NO; the data will be known to the
cluster as soon as the request is complete.
With object cache ena
I have been using sheepdog 0.2.4 since it came out without any problems; yesterday
I started upgrading to the current git version, but one of my virtual
machines
crashed the whole cluster... Other machines seem to work without
problems.
Steps for reproducing it in a very stripped-down environment.
Using the im
I have a question about migrating a QEMU virtual machine from
one node to another...
Is there something I have to pay attention to when using the
farm cache mechanisms?
For example, is it possible that I run my virtual machine V
on host A, start a live migration to host B, but some content
in the
On 2012-06-06 14:19, Liu Yuan wrote:
Well, the membership management backend such as corosync can only
reliably support fewer than 20 nodes. This means you can't add more
nodes
into a running cluster with Corosync when the number exceeds 15~20. See
more
info about it at https://github.com/collie/sh
Hi all,
firstly, it's cool stuff you've made :-)
Thanks to all participants.
Maybe it would help if we collect some use cases here?
On 2012-06-06 12:59, Liu Yuan wrote:
On 06/06/2012 06:54 PM, Christoph Hellwig wrote:
I'd say performance numbers only start to really matter for 20,30+
nodes, or