Dan Lavu wrote:
###
version: 8.3.2 (api:88/proto:86-90)
GIT-hash: dd7985327f146f33b86d4bff5ca8c94234ce840e build by
mockbu...@v20z-x86-64.home.local, 2009-08-29 14:07:55
0: cs:SyncSource ro:Primary/Secondary ds:UpToDate/Inconsistent C r
. This sync operation brings the other node back to
being identical to the good node.
What, exactly, are you trying to accomplish?
--
Digimer
like it's a bit outside of the original goal of DRBD, but
it might be doable via the 'on-io-error' or 'fencing' options. Take a
look at this: http://www.drbd.org/users-guide/re-drbdconf.html
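For concreteness, here is roughly where those options live in drbd.conf (8.3
syntax; the resource name, policy choices and handler path are illustrative,
not a recommendation):

resource r0 {
  disk {
    on-io-error  detach;                 # drop the backing device on I/O error, go diskless
    fencing      resource-and-stonith;   # freeze I/O and call the fence-peer handler
  }
  handlers {
    fence-peer   "/path/to/your-fence-script.sh";   # hypothetical script
  }
}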
--
Digimer
On 10-03-23 02:50 PM, Florian Haas wrote:
On 03/23/2010 07:35 PM, Michael Schwartzkopff wrote:
On Tuesday, 23 March 2010 17:08:56, Digimer wrote:
On 10-03-23 10:02 AM, Florian Haas wrote:
On 2010-03-23 14:52, Michael Schwartzkopff wrote:
On Friday, 19 March 2010 14:12:30, … wrote
).
Should I be asking this on the LVM mailing list? I figured it's more
appropriate here, since fewer LVM users run DRBD than the other way
around. :)
--
Digimer
an idea
of what tests I could run to best test this, I'll be happy to load it up
on my test cluster this weekend.
--
Digimer
). Is there an updated guide I could
review?
Thanks!
--
Digimer
stable (I'm not getting random fencing as I'd expect with network
issues). Any idea what might be going on?
Thanks!
--
Digimer
On 10-09-09 01:06 PM, Adam Gandelman wrote:
On 09/09/2010 07:18 AM, Digimer wrote:
vgcreate -c y drbd_vg0 /dev/drbd0
Found duplicate PV 4BbAHN2mFIP5pJsUE1ktdjSBwgFegmaG: using /dev/md3
not /dev/drbd0
Error locking on node node name: Command timed out
Can't get lock for orphan PVs
connected...
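The 'Found duplicate PV' warning usually means LVM is scanning the raw
backing device (/dev/md3) as well as the DRBD device on top of it. A sketch
of the usual lvm.conf fix, assuming you only want LVM to see DRBD devices
(adjust the patterns to your layout):

# /etc/lvm/lvm.conf
filter = [ "a|^/dev/drbd|", "r|.*|" ]   # accept drbd*, reject everything else

Re-run 'pvscan' afterwards to confirm only /dev/drbd0 is reported.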
As an aside, have you considered simply freezing and dd'ing your VM when
you want to migrate it? It doesn't seem like you really need DRBD if
you're not concerned about keeping the data sync'ed across two nodes.
--
Digimer
On 10-09-12 07:35 PM, Sam Przyswa wrote:
On 12/09/2010 19:36, Digimer wrote:
On 10-09-12 09:31 AM, Sam Przyswa wrote:
Hi,
I use DRBD 8.3.7 on Debian 5.0.6 on Vserver on both sides; the 2nd
machine is a clone of the 1st.
Here is my drbd.conf
Diskless, it won't work.
--
Digimer
On 10-09-13 03:31 PM, David Coulson wrote:
Don't you really need cluster aware LVM for that?
In Primary/Primary mode, yes you do. You also need a working cluster
that's providing DLM.
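For reference, the clustered-locking switch is in lvm.conf (a sketch; this
assumes clvmd and the cluster's DLM are actually running):

# /etc/lvm/lvm.conf
locking_type = 3    # use the clustered locking library (clvmd/DLM)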
--
Digimer
# On the node whose data is to be discarded and resynced:
drbdadm invalidate r0
# Then, on both nodes:
drbdadm connect r0
drbdadm primary r0
--
Digimer
, mark itself as more UpToDate and then
invalidate the peer if and when it recovers from the fence action. Is
this a correct understanding?
Any further information would be much appreciated. :)
--
Digimer
at what step it failed and what, if
anything, was shown in either /proc/drbd or /var/log/messages.
HTH
--
Digimer
.
Can you copy your setup to a different set of hardware, again to test?
I'm throwing mud at a wall to see what sticks...
--
Digimer
.
HTH
--
Digimer
is an inexpensive route to much better uptime. :)
--
Digimer
On 01/05/2011 03:26 PM, J wrote:
On 1/5/2011 12:49 PM, Digimer wrote:
On 01/05/2011 11:48 AM, J wrote:
Hello,
This is my first go at setting up any HA services, so I wanted to see if
anyone on the list had a similar setup or any recommendations on doing
this better using drbd.
I'm setting
On 01/06/2011 01:13 PM, J. Ryan Earl wrote:
Reply inline:
On Thu, Jan 6, 2011 at 7:46 AM, Digimer <li...@alteeve.com> wrote:
On 01/06/2011 03:27 AM, Felix Frank wrote:
In any case, be sure to have (at least) RAID 1 on each node
backing the
DRBD
would fail for many other reasons, too).
--
Digimer
On 01/06/2011 06:25 PM, Lentes, Bernd wrote:
Digimer wrote:
On 01/06/2011 03:14 PM, Lentes, Bernd wrote:
How can a dual-primary solution work when I install the VMs in a plain
LV? Don't I need a cluster-aware filesystem (like OCFS2) for a
dual-primary solution?
Bernd
Dual
when using one LV per VM
though, so that live-migration is available, afaik.
Regards,
Felix
You are right. If you have one DRBD resource per LV/VM, then you will
avoid the scenario I described.
--
Digimer
On 01/07/2011 11:25 AM, Lentes, Bernd wrote:
Digimer wrote:
I thought dual-primary means that I have services (like a
VM) running on both nodes. Am I wrong?
Bernd
That is a use for dual-primary, but not technically the same thing.
All that dual-primary gives you is block-level
On 01/08/2011 11:32 AM, Lentes, Bernd wrote:
Digimer wrote:
On 01/07/2011 04:00 PM, Lentes, Bernd wrote:
VM1A and VM1B are identical VMs, offering the same service!
Here is the confusion. When you say identical, you are
referring to the underlying data being mirrored on both
nodes
On 01/09/2011 12:00 PM, Lentes, Bernd wrote:
Hi,
Currently everything is clear.
Thanks for your help
Bernd
Glad to hear it. Have fun! :)
--
Digimer
hard drive from the point of
view of the VM. To back it up, in a non-clustered environment, you can
use traditional LVM snapshotting. This occurs below the VM, so it need
not know anything about it.
--
Digimer
On 01/14/2011 07:11 AM, Lentes, Bernd wrote:
Digimer wrote:
Hi,
how can I back up a VM which is installed in a plain LV
with no fs?
No fs - no mount - no backup, right?
I'd like to back up the VMs on the host.
Could an LV snapshot be helpful?
Bernd
The LV is, essentially
On 01/14/2011 10:01 AM, Lentes, Bernd wrote:
Digimer wrote:
I suppose you could create the snapshot and then use 'dd' to
create a bit-level image of the LV.
If I have an LV of 50 GB, for example, I think dd will need a long time, right?
Bernd
Yup, won't be the fastest in the world
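A sketch of the snapshot-then-dd approach (VG/LV names and snapshot size are
illustrative; the snapshot only needs room for writes made while the copy
runs):

lvcreate -s -n vm1_snap -L 5G /dev/drbd_vg0/vm1    # snapshot the VM's LV
dd if=/dev/drbd_vg0/vm1_snap bs=1M | gzip > /backup/vm1.img.gz
lvremove -f /dev/drbd_vg0/vm1_snap                 # drop the snapshot when done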
).
--
Digimer
On 03/03/2011 12:09 PM, Florian Haas wrote:
On 2011-03-03 16:13, Digimer wrote:
On 03/03/2011 10:05 AM, Robinson, Eric wrote:
I am thinking about taking a secondary node offline for maintenance
during production hours. I assume it cannot stay offline forever because
eventually the primary
On 03/03/2011 01:37 PM, Florian Haas wrote:
On 03/03/2011 06:20 PM, Digimer wrote:
On 03/03/2011 12:09 PM, Florian Haas wrote:
On 2011-03-03 16:13, Digimer wrote:
On 03/03/2011 10:05 AM, Robinson, Eric wrote:
I am thinking about taking a secondary node offline for maintenance
during
that
starts, in order: drbd -> clvmd -> gfs2.
Then you can create normal services, failover groups and whatnot.
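In init-script terms, that order is (a sketch; service names as shipped on
RHEL-era systems):

service drbd start && service clvmd start && service gfs2 start
# ...and the reverse order on shutdown:
service gfs2 stop && service clvmd stop && service drbd stop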
--
Digimer
for the first time. Once it connects, it should begin a full sync.
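If this is a brand-new resource with no data on either side, the 8.3 command
to pick the sync source and kick off that initial full sync is (resource
name illustrative):

drbdadm -- --overwrite-data-of-peer primary r0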
--
Digimer
;
disk /dev/sda6;
}
}
--
Digimer
of the file system. Non-clustered file systems
expect that they are the only ones with access to the storage, and will
quickly corrupt if anything else changes the data.
--
Digimer
On 04/24/2011 09:57 PM, Whit Blauvelt wrote:
Digimer,
I really thank you for your long-form discussion. So much of the writing on
this stuff is terse, making for a steep learning curve.
You should be using Clustered LVM (clvmd). This way the LVM PV/VG/LVs
are in sync across both nodes
admit, you lost me somewhat in your reference to emailing people. :)
Any notable system events around here result in notices, whether through
Nagios or independently.
Ah
Best, and thanks again,
Whit
Best of luck. :)
--
Digimer
The 'on data {}' device values are supposed to point to the actual
backing devices for the resources (ie: device /dev/sda3, device
/dev/md1).
--
Digimer
our ecrit. :)
--
Digimer
On 04/28/2011 09:17 AM, Digimer wrote:
The 'on data {}' device values are supposed to point to the actual
backing devices for the resources (ie: device /dev/sda3, device
/dev/md1).
In English - This is totally wrong of me. I shall be more careful
posting on the morning train lacking coffee
server.
--
Digimer
about a cluster.
Any feedback is appreciated!
--
Digimer
the actual possible speed can hurt
performance. You said that you sustained ~200M before, right? Try
setting the sync rate to 180M and see if that works any better.
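In drbd.conf (8.3 syntax) that looks like this (a sketch):

syncer {
  rate 180M;   # hold the resync a bit below the measured sustained throughput
}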
--
Digimer
On 05/19/2011 02:16 PM, Daniel Meszaros wrote:
Hi!
On 05/19/2011 03:48 PM, Digimer wrote:
Setting a sync speed greater than the actual possible speed can hurt
performance. You said that you sustained ~200M before, right? Try
setting the sync rate to 180M and see if that works any better
try and improve this ( a lot
:-) )
Standard disclaimer: I am not responsible if this script destroys your
data and/or server. Use at your own risk
Asking people on a mailing list to open a zip file attachment is not
going to get very far. Can you set this up on github or the like?
--
Digimer
is not that difficult, but it does require
understanding the fundamentals. Once you have the cluster formed, you
will find that CLVM works just fine. It certainly doesn't lock LVs for
no reason. :)
--
Digimer
what speed you get (both alone and using ramdisk to back a DRBD
resource).
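A crude way to measure that (this destroys data on the target device, so
scratch/test resources only; the device name is illustrative):

dd if=/dev/zero of=/dev/drbd0 bs=1M count=1024 oflag=direct   # write test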
--
Digimer
. :)
--
Digimer
On 06/21/2011 09:51 PM, wang xuchen wrote:
Digimer,
Thanks for your reply.
I came up with the number 300M from the DRBD official website: "A good rule
of thumb for this value (the rate parameter) is to use about 30% of the
available replication bandwidth." I use a 10G Ethernet card for
replication traffic
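(For reference, the arithmetic: 10 Gbit/s is roughly 1250 MB/s on the wire,
and 30% of that is ~375 MB/s, which is presumably where a figure like 300M
comes from. The backing disks are usually the real limit, though.)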
if this is a drbd issue or an lvm one. I can't seem to find
anything on it.
Thanks
Ken Lowther
Obvious question first: is the DRBD resource Primary?
Please share more details about your config so that folks can better
help you, rather than making wild stabs in the dark. :)
--
Digimer
LVM will not touch
clustered LVs. As an aside, be sure to set 'fallback_to_local_locking'
to 0. You never want a clustered LV to ever use local locking. Though it
seems you may already have done this.
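A sketch of that setting:

# /etc/lvm/lvm.conf
fallback_to_local_locking = 0   # never silently fall back to local locking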
--
Digimer
On 07/19/2011 12:47 PM, lists...@outofoptions.net wrote:
Digimer. Thanks for the help. I went back and partitioned the drive
for lvm and then it didn't show up at all. I removed the partitions and
suddenly everything went according to plan. I'm guessing this just got
everything back
, I just need to know where. :)
--
Digimer
On 07/23/2011 03:30 PM, Andreas Hofmeister wrote:
On 23.07.2011 06:30, Digimer wrote:
Any way to debug this? If it looks like a DRBD bug,
I don't think so. I have DRBD running with jumbo frames on several
machines and it just works (albeit with 0.8.10).
Check your networking.
Try ping -c1
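Presumably the full test is something like this (a sketch; 8972 = a
9000-byte MTU minus 28 bytes of IP/ICMP headers):

ping -c1 -M do -s 8972 <peer-ip>   # -M do forbids fragmentation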
On 07/24/2011 03:48 PM, Lucian Torje wrote:
Hi,
Is it possible to use multiple network connections for the same node -
redundant network, in case one fails the other one should be used.
Using DRBD over a standard bonded connection works fine.
--
Digimer
the backup is missing. But I don't want to update without a backup. :-/
CU,
Mészi.
Sorry, but please speak English on this list. The people here don't speak
German or Japanese. ;)
--
Digimer
can do so without downtime
(ie: live-migrate a VM, if that is what is using the DRBD storage). The
reason being that if anything happens to the communication with the
secondary, the primary will drop to Secondary until its local storage
is UpToDate.
--
Digimer
fencing resource-and-stonith;
}
handlers {
outdate-peer "/sbin/obliterate-peer.sh";
}
=
If you're not using Red Hat's cluster service, replace that script with
one that calls the fence device of your choice.
--
Digimer
On 08/16/2011 04:22 PM, Lars Ellenberg wrote:
On Tue, Aug 16, 2011 at 12:55:28PM -0400, Digimer wrote:
A couple of comments:
net {
allow-two-primaries;
cram-hmac-alg sha1;
shared-secret 123456;
You're on a dedicated
'syncer' to no more than 30% of that speed.
--
Digimer
and
fence).
--
Digimer
On 08/30/2011 11:25 AM, William Seligman wrote:
On 8/29/11 4:42 PM, Digimer wrote:
On 08/29/2011 03:36 PM, William Seligman wrote:
A general question: I have a Corosync+Pacemaker with DRBD setup on Linux;
I'll
give the details if it's relevant. Corosync+Pacemaker controls DRBD start,
stop
Am I missing something obvious? If it is possible, please let me know.
If it's not possible, may I ask why not? Would it be possible to request
this ability as a feature request for a future release?
--
Digimer
relative to the bare drives underneath? How does that change when you
add simple GFS2? How about if you used CLVMd as a (test) alternative? If
the latency is fairly close between GFS2 and clvmd, it's possibly DLM
overhead.
--
Digimer
problem you are trying to solve
with your cluster. At this point, it's a lot of conjecture and guessing.
With details of what you need, we might be able to offer more specific
advice and ask more intelligent questions.
Cheers,
--
Digimer
On 10/12/2011 02:34 PM, Kushnir, Michael (NIH/NLM/LHC) [C] wrote:
Digimer,
Thanks again for holding my hand on this. I've already started reading your
wiki posts. I wish Google gave your site a better ranking. I've been doing
research for months, and your articles (especially comments
ISO, ~3.5GB written.
Node B saved an hour's worth of credit card transactions, ~1MB written.
Which node has the more valuable data?
The best you can do is configure and test fencing so that you can avoid
split brain conditions in the first place.
--
Digimer
.
Caveat - I did not read the thread before now. If this is totally out in
left field, my apologies. :)
--
Digimer
though.
--
Digimer
On 12/04/2011 09:25 PM, Ivan Pavlenko wrote:
Hi ALL,
Digimer, thank you again for your answer, I really appreciate it!
Unfortunately, I've tried to fix the split brain manually several times.
It doesn't work.
# drbdadm disconnect r0
[root@infplsm017 ~]# drbdadm secondary r0
1: State
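For reference, the manual split-brain recovery sequence documented for 8.3
is (resource name illustrative; run the first two commands on the node whose
changes you are willing to throw away):

# on the split-brain "victim":
drbdadm secondary r0
drbdadm -- --discard-my-data connect r0
# on the surviving node, if it is StandAlone:
drbdadm connect r0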
.elrepo.noarch.rpm
yum install drbd83-utils kmod-drbd83
--
Digimer
. Regardless, I've found it a worthy investment to have bonded
(mode=1 Active/Passive) interfaces dedicated to DRBD and a separate
network connection for other traffic.
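On RHEL-style systems that looks roughly like this (a sketch; interface
names and the miimon value are illustrative):

# /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
BONDING_OPTS="mode=1 miimon=100"   # mode=1 = active-backup
ONBOOT=yes
BOOTPROTO=none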
hth
--
Digimer
components explained.
* All testing steps covered.
* The configuration is used in production environments!
This tutorial is totally free (no ads, no registration) and released
under the Creative Commons 3.0 Share-Alike Non-Commercial license.
Feedback is always appreciated!
--
Digimer
to match yours.
Cheers
--
Digimer
.
None of the RHEL 6 Add-ons get you DRBD. HA just gives you
cman/rgmanager/etc. I think even pacemaker is still tech preview as of 6.2.
David
Ya, you get a linbit repository so installing and updating DRBD is the
same as updating the system itself.
--
Digimer
node lives, you found the issue.
You *are* using fencing, right? ;)
--
Digimer
handlers?
Thanks!
--
Digimer
:
https://github.com/digimer/rhcs_fence
The main differences are:
- No longer restricted to 2-node clusters. So long as DRBD_PEERS is set
to a name found in 'cman_tool', the proper node will be fenced.
- More sanity checks are made to help minimize the risk of dual-fencing.
First, dynamic
link or a saturated link, it appears the same to DRBD and will
trigger the fence handler.
--
Digimer
in production then,
I guess it is the only way.
thanks for the info nevertheless,
Generally a good idea. Test to see what your underlying storage is
capable of and then ensure your network bandwidth is higher, if you can.
Cheers
--
Digimer
, but 06a126f315b2f1cbcf2bc7485507815266d34
(https://github.com/digimer/rhcs_fence/commit/06a126f315b2f1cbcf2bc7485507815266d34926)
reflects your feedback.
Cheers!
--
Digimer
to a normal
operating state.
--
Digimer
://alteeve.com/w/2-Node_Red_Hat_KVM_Cluster_Tutorial#Configuring_DRBD
--
Digimer
when
they are in the archives. :)
--
Digimer
On 01/17/2012 01:09 PM, Luis M. Carril wrote:
On 17/01/2012 18:56, Digimer wrote:
On 01/17/2012 12:32 PM, Luis M. Carril wrote:
Hello,
Ok, the fencing and split-brain mechanisms only come into play when
both nodes meet again after some failure.
So... meanwhile the nodes doesn
that this value is
ignored and 102MB/s is used.
How can I increase the syncer rate in drbd84?
I have not played with 8.4 yet, but if it is the same as 8.3, you should
be able to push up the sync rate to 300MB/sec using:
drbdsetup /dev/drbdX syncer -r 300M
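Note that drbdsetup changes are runtime-only; to make the rate persistent,
set it in the syncer {} section of drbd.conf and apply it with
'drbdadm adjust r0' (resource name illustrative).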
--
Digimer
the issue though, you can use 'wfc-timeout 300',
which will tell DRBD to wait up to 5 minutes for its peer. After that
time, it will consider itself primary. Please don't use this until you've
exhausted all other ways of starting safely.
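A sketch of where that lives (drbd.conf, 8.3 syntax):

startup {
  wfc-timeout 300;   # wait up to 5 minutes for the peer before giving up
}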
--
Digimer
and configuring
external metadata.
--
Digimer
? If
not, I would suggest looking at DRBD Proxy instead, as it is designed to
run over slow/unstable links, but it is asynchronous, so be advised that
consistency is not always guaranteed.
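For reference, switching to asynchronous replication is a one-line change in
the resource definition (a sketch; whether the consistency trade-off is
acceptable is your call):

resource r0 {
  protocol A;   # async: a write is "complete" once it is on the local disk
                # and in the local TCP send buffer
}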
--
Digimer
On 02/16/2012 11:02 AM, Piotr Kandziora wrote:
Digimer,
Thanks for your answer. DRBD Proxy could be a solution, but it is not
possible for today.
I am thinking of trying DRBD with protocol A, but currently I am not
able to test on my environment as it is in production. I will have
cp instead of rsync?
--
Digimer
On 02/17/2012 11:42 AM, Brian O Mahony wrote:
(Apologies for top posting - working off a different laptop as I broke mine
this morning, and outlook is a pain)
Jake/Maurits/Digimer, Thanks for the replies.
Jake/Digimer:
What it sounds like you need is LVM's snapshotting capability. DRBD
in the main ring.
Quorum over a stretch cluster is always tricky. With sufficient speed,
you could look at qdisk, but that would likely induce as many problems
as it solved.
--
Digimer
On 03/02/2012 05:24 PM, Florian Haas wrote:
On Fri, Mar 2, 2012 at 11:21 PM, Digimer li...@alteeve.com wrote:
On 03/02/2012 04:41 PM, Robinson, Eric wrote:
We have two geographically separate data centers connected by 4 x
Gigabit links (in 2 trunks). Our HA clusters are distributed between
are confusing totem (corosync)'s communication (per the log file
entries) and DRBD.
Can you paste your cluster config to confirm?
Digimer
PS - I am tired and may be missing something obvious. :P
--
Digimer
you restart.
--
Digimer
On 03/14/2012 11:10 AM, Marcelo Pereira wrote:
If you're installing it, why don't you install the latest version (8.4)?
--Marcelo
For production systems, I'd recommend staying with 8.3.12, as it is much
better tested. The 8.4 series is very promising, but it's still young. I'd
recommend using it
On 03/19/2012 12:33 PM, Carlos Xavier wrote:
Hi.
We have an old cluster made of OCFS2 running over DRBD using the
heartbeat protocol. Now we are moving to DRBD + OCFS2 + Pacemaker.
Following the steps from the user's manual I tried to make this
configuration:
disk {
fencing