that it doesn't happen again.
Is this a DRBD bug, or expected behavior? If it's somehow the latter, I think
the combination of the documentation and error messages is quite misleading and
should be fixed.
Thanks,
Zev Weiss
___
drbd-user mailing list
drbd-user@lists.linbit.com
http://lists.linbit.com/mailman/listinfo/drbd-user
On May 23, 2012, at 4:48 PM, Lars Ellenberg wrote:
On Wed, May 23, 2012 at 04:34:28PM -0500, Zev Weiss wrote:
I'm running DRBD 8.3.12, and recently hit what looks to me like
a bug that was listed as fixed in 8.3.13 -- getting into a
state where both nodes are in SyncSource (it's just stuck
if it wipes the resource or
anything, I'd just like to have my test device back in a working state without
disturbing anything else.
Thanks,
Zev Weiss
On May 23, 2012, at 3:22 PM, Florian Haas wrote:
On Wed, May 23, 2012 at 10:14 PM, Zev Weiss zwe...@scout.wisc.edu wrote:
Hi,
I'm running DRBD 8.3.12, and recently hit what looks to me like a bug that
was listed as fixed in 8.3.13 -- getting into a state where both nodes
On May 23, 2012, at 3:47 PM, Florian Haas wrote:
On Wed, May 23, 2012 at 10:34 PM, Zev Weiss zwe...@scout.wisc.edu wrote:
On May 23, 2012, at 3:22 PM, Florian Haas wrote:
On Wed, May 23, 2012 at 10:14 PM, Zev Weiss zwe...@scout.wisc.edu wrote:
Hi,
I'm running DRBD 8.3.12, and recently
On May 23, 2012, at 3:45 PM, Lars Ellenberg wrote:
On Wed, May 23, 2012 at 03:34:27PM -0500, Zev Weiss wrote:
On May 23, 2012, at 3:22 PM, Florian Haas wrote:
On Wed, May 23, 2012 at 10:14 PM, Zev Weiss zwe...@scout.wisc.edu wrote:
Hi,
I'm running DRBD 8.3.12, and recently hit what
On May 23, 2012, at 4:18 PM, Lars Ellenberg wrote:
On Wed, May 23, 2012 at 04:12:23PM -0500, Zev Weiss wrote:
On May 23, 2012, at 3:45 PM, Lars Ellenberg wrote:
On Wed, May 23, 2012 at 03:34:27PM -0500, Zev Weiss wrote:
On May 23, 2012, at 3:22 PM, Florian Haas wrote:
On Wed, May 23
On May 23, 2012, at 4:34 PM, Zev Weiss wrote:
On May 23, 2012, at 4:18 PM, Lars Ellenberg wrote:
You could also try ifdown, sleep a while, and ifup again.
(which obviously will impact the other resources, and everything going
via that interface).
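Lars's ifdown / sleep / ifup suggestion can be sketched as a small script. The interface name (eth1) and the sleep duration are assumptions, and DRY_RUN defaults to only printing the commands, since downing the replication link affects every resource (and anything else) using that interface:

```shell
# Sketch of the ifdown / sleep / ifup bounce suggested above.
# IFACE and SLEEP_SECS are assumptions; DRY_RUN=1 (the default)
# only prints what would be run, because actually downing the
# link disrupts all traffic on that interface.
IFACE=${IFACE:-eth1}
SLEEP_SECS=${SLEEP_SECS:-30}
DRY_RUN=${DRY_RUN:-1}

run() {
    if [ "$DRY_RUN" = "1" ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

run ifdown "$IFACE"
run sleep "$SLEEP_SECS"
run ifup "$IFACE"
```

Set DRY_RUN=0 only once the printed sequence looks right for your setup.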
Right, but I don't really want
, but not a whole lot (I say "felt" because I don't have any
hard before/after numbers on it).
Zev Weiss
On Aug 11, 2011, at 5:34 AM, Masashi Yamaguchi wrote:
Hello.
Was no-md-flushes's default value or behavior changed
in drbd-8.4?
I use DRBD on LVM and barriers are not supported.
drbd-8.3.10 with no-disk-barrier; works,
but drbd-8.4.0 with no-disk-barrier; (or disk-barrier: no;)
does not.
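For reference, the 8.3-era no-* options became boolean keywords in the disk section of 8.4's configuration. A minimal sketch of the 8.4 syntax follows; the resource name is a placeholder and the omitted device/address definitions depend on your setup, so check the exact semantics against the 8.4 drbd.conf man page:

```
resource r0 {
  disk {
    disk-barrier no;    # 8.3 equivalent: no-disk-barrier
    disk-flushes no;    # 8.3 equivalent: no-disk-flushes
    md-flushes   no;    # 8.3 equivalent: no-md-flushes
  }
  # device / disk / address definitions omitted
}
```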
source), but I'm not sure if I'm going
to be able to get a full two-node system set up that way in order to
really do a comprehensive test.
If you find out anything more about it or discover a solution, please
do post to the list!
Thanks,
Zev Weiss
On Jul 29, 2011, at 3:41 AM, Junko IKEDA wrote:
Hi,
I got it.
drbdadm wipe-md + create-md are needed.
0) start DRBD on 2 nodes, as Primary/Secondary
on Secondary;
1) stop DRBD
# drbdadm down all
# service drbd stop
2) remove old DRBD
# rpm -e drbd-km-2.6.18_238.el5-8.3.11-1
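The wipe-md + create-md recipe Junko describes, gathered into one sketch. The resource name r0 is an assumption (the original used "all"), and DRY_RUN defaults to only printing the commands, because wipe-md irreversibly discards the existing on-disk metadata:

```shell
# Sketch of the metadata re-creation steps from the recipe above.
# RES is an assumption; DRY_RUN=1 (the default) only prints the
# commands, since wipe-md destroys the existing metadata.
RES=${RES:-r0}
DRY_RUN=${DRY_RUN:-1}

run() {
    if [ "$DRY_RUN" = "1" ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

run drbdadm down "$RES"        # take the resource offline
run drbdadm wipe-md "$RES"     # discard the old on-disk metadata
run drbdadm create-md "$RES"   # write fresh metadata
run drbdadm up "$RES"          # bring the resource back up
```

After create-md the resource will need a full resync from the peer, so run this on the node whose data you intend to discard.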
On Jul 28, 2011, at 2:19 PM, lists...@outofoptions.net wrote:
Linux version 2.6.18-274.el5
(mockbu...@x86-002.build.bos.redhat.com) (gcc version 4.1.2 20080704
(Red Hat 4.1.2-51)) #1 SMP Fri Jul 8 17:36:59 EDT 2011
[root@thing1 ~]# cat /etc/redhat-release
Red Hat Enterprise Linux Server
Hi,
Ken, Mark -- any further information on the write performance problem,
and/or a solution to it?
I'm running 8.3.7 at the moment, I think I'm going to try updating to
8.3.11 and/or 8.4 and see if it magically fixes anything (unless there
are further suggestions on other things to
On Jul 25, 2011, at 3:59 PM, lists...@outofoptions.net wrote:
Quoting Zev Weiss zwe...@scout.wisc.edu:
Hi,
Ken, Mark -- any further information on the write performance problem,
and/or a solution to it?
I'm running 8.3.7 at the moment, I think I'm going to try updating to 8.3.11
listslut listslut@... writes:
I didn't set this cluster up but that information was documented by
the
previous admin here:
http://crunchtools.com/kvm-cluster-with-drbd-gfs2/
It is now to the point where provisioning space for a new VM is a day
long process with a load level that