in active/passive mode on two nodes backing
NFS, and then mount the NFS export on the three nodes? That avoids the
need for a cluster FS (and its expensive cluster locking), and
avoids the unsupported active/active(/active).
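For what it's worth, that active/passive NFS head can be sketched in pcs. Every resource name, device path and IP below is made up, and pcs syntax shifts between versions (e.g. 'promotable' vs. the older master/slave form), so treat this as an outline only:

```
# Hypothetical example -- resource names, device paths and the IP are illustrative.
pcs resource create drbd_nfs ocf:linbit:drbd drbd_resource=r0 promotable
pcs resource create fs_nfs ocf:heartbeat:Filesystem \
    device=/dev/drbd0 directory=/srv/nfs fstype=ext4
pcs resource create nfs_srv ocf:heartbeat:nfsserver
pcs resource create nfs_ip ocf:heartbeat:IPaddr2 ip=192.168.0.100 cidr_netmask=24
pcs resource group add g_nfs fs_nfs nfs_srv nfs_ip
# The NFS group runs only where DRBD is promoted, and only after promotion.
pcs constraint colocation add g_nfs with drbd_nfs-clone INFINITY with-rsc-role=Promoted
pcs constraint order promote drbd_nfs-clone then start g_nfs
```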
--
Digimer
Papers and Projects: https://alteeve.com/w/
know that a given block hasn't flushed yet and, on read request,
read from cache not disk. This is a guess on my part.
What are your 'disk { disk-flushes [yes|no]; md-flushes
[yes|no]; }' options set to?
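For reference, those options live in the resource's disk section; a minimal sketch, assuming a resource named r0. Only disable flushes when the controller has a battery- or flash-backed write cache:

```
resource r0 {
    disk {
        disk-flushes no;   # safe only with BBU/FBWC; the default is 'yes'
        md-flushes   no;   # same caveat applies to metadata writes
    }
}
```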
On 2021-08-10 2:11 a.m., Eddie Chapman
wrote:
On
10/08/2021 00:01, Digimer wrote:
On 2021-08-05 5:53 p.m., Janusz Jaskiewicz
wrote:
Hello.
I'm experimenting a bit with DRBD in a cluster managed
dev/mapper/drbdpool-drbdata;
address 192.168.0.2:7789;
meta-disk internal;
}
}
What filesystem are you using? Is it cluster / multi-node aware?
o protect against hardware faults. This is true of the
filesystem inside a VM, or a file system directly on top of a DRBD resource.
The key takeaway here is the role of different technologies in your
overall corporate resilience planning. It's one (very powerful) tool in
a toolbox to protect you
layer.
HA protects against component failure. That's its job, and it does it
well, when well implemented.
ent_
your HA solution.
es during attach
> * implement 'blockdev --setro' in DRBD
> * following upstream changes to DRBD up to Linux 5.10 and ensure
>compatibility with Linux 5.8, 5.9, and 5.10
>
>
> https://www.linbit.com/downloads/drbd/9.0/drbd-9.0.26-1.tar.gz
> https://github.com/LINBIT/drbd/commit/8e0c552326815d9d2bf
into backups).
In the end, DRBD is fundamentally an availability solution, and not a
backup solution. (Same idea as how "RAID is not backup"). You really
need to be sure that your data is backed up safely and incrementally.
Any snapshot-based approach should be seen as a way to more rapidly
recover t
resource, stop cluster, start cluster, stop cluster OK
node 1 - https://pastebin.com/raw/haECJz8y
node 2 - https://pastebin.com/raw/tBSD0ZyJ
The best advice would be to first upgrade (or create a new test system)
with the latest 8.4 release, and see if the problem remains and can be
reproduced.
digimer
On 2020-03-11 9:47 a.m., Mona Sinha wrote:
> Hello Roland,
>
> Thanks for recognising me . But on a serious note, i
Without knowing your setup or what is limiting you, I can suggest two
options:
1. Faster hardware (links speed / peer disk)
2. Switch to Protocol C
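Option 2 is a one-line change in the resource config; a sketch, assuming a resource named r0 (in some DRBD versions 'protocol' sits directly in the resource body rather than the net section):

```
resource r0 {
    net {
        protocol C;   # synchronous: a write completes only once the peer has it on stable storage
    }
}
```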
digimer
On 2019-10-21 4:36 p.m., G C wrote:
> Is there anything that will force the OOS to push what is out of sync?
>
>
> On Mon,
Tuning is quite instance-specific. I would always suggest starting by
commenting out all tuning, seeing how it behaves, then tuning. Premature
optimization never pays off.
digimer
On 2019-10-21 1:31 p.m., G C wrote:
> Would any of these values being changed help or would it need to be the
> actual
likely unacceptable.
In short, you have a hardware/resource issue.
digimer
On 2019-10-21 12:19 p.m., G C wrote:
> version: 8.4.10
> Ran the resume-sync all and received:
> 0: Failure: (135) Sync-pause flag is already cleared
> Command 'drbdsetup-84 resume-sync 0' terminated with
go down.
What protocol are you using? A, B or C?
digimer
On 2019-10-21 11:31 a.m., G C wrote:
> I'm seeing OOS not being cleared for many days if not weeks, i.e. the
> OOS number stays the same.
>
> Is there a way to tell if the blocks that are OOS are changing or if
> it'
8.4.4 is old. Can you upgrade to the latest 8.4.11? I believe 8.4.4 was
older than the other reporters with a similar issue, so this may be
fixed. Upgrading to .11 should not cause any issues.
PS - Please keep replies on the list. These discussions help others by
being in the archives.
digimer
!
> Paras.
Do you have the system logs from when you started DRBD on the nodes
post-recovery? There should be DRBD log entries on both nodes as DRBD
started. The reason/trigger of the resync will likely be explained in
there. If not, please share the logs.
r, and may not be acceptable in your
use case (see back to point 1 above).
digimer
On 2019-10-07 10:40 a.m., G C wrote:
> I have an instance that seems to get OOS down lower and once in a while
> it hits 0 but not very often. Typically my oos is about 18-20,
> is there a way to clear this ou
owever, I would like to understand why
the split-brain condition takes sometimes on booting up and, more
importantly, how to prevent this from happening, if at all possible.
Suggestions?
Stonith in pacemaker and, once tested, fencing in DRBD. This is what
fencing is for.
digimer
The thing is, even though it's a test system, pacemaker and DRBD will
still operate as if it is critical. Turning off stonith won't properly
emulate production because when a node enters an unknown state, the
system will no longer behave predictably.
digimer
On 2019-04-15 12:13 p.m., Graham
For one: enable and test stonith in Pacemaker. When a node can be failed
and fenced, then configure DRBD to use 'fencing resource-and-stonith;'
and set the {un,}fence-handlers to crm-{un,}fence-peer.sh.
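In drbd.conf terms, that advice looks roughly like the following. The handler paths match the stock DRBD 8.4 pacemaker helpers; adjust them for your install:

```
resource r0 {
    disk {
        fencing resource-and-stonith;   # freeze I/O and call the fence-peer handler on connection loss
    }
    handlers {
        fence-peer          "/usr/lib/drbd/crm-fence-peer.sh";
        after-resync-target "/usr/lib/drbd/crm-unfence-peer.sh";
    }
}
```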
digimer
On 2019-04-12 11:51 p.m., Graham Smith wrote:
Hi
1st time user, trying to set up
) maximum of 16 nodes there’ll be 120 host pairs to connect.
====
, I'll rebuild under
an unprivileged user.
Below I will include the contents of the above links, just in case the
links disappear someday.
Thanks for any insight or help!
Digimer
* Current .spec:
Name: drbd-kernel
Summary: Kernel driver for DRBD
Version: 9.0.16
Release: 1
# always require
to be fenced in a way that informs DLM that the lost node has
been fenced. I do not know of any fence handlers that do this is a pure
DRBD install. So for practical purposes, no, you will need pacemaker
with a proper stonith configuration to avoid split-brains in the first
place.
on the stack). We can and have failed NICs,
cables and switches without interruption.
We've documented this setup here;
https://www.alteeve.com/w/Build_an_m2_Anvil!#Logical_Map.3B_Hardware_And_Plumbing
digimer
On 2018-10-18 4:00 p.m., Lars Ellenberg wrote:
> On Wed, Oct 17, 2018 at 03:11:42PM -0400, Digimer wrote:
>> On 2018-10-17 5:35 a.m., Adam Weremczuk wrote:
>>> Hi all,
>>>
>>> Yesterday I rebooted both nodes a couple of times (replacing BBU RAID
>>>
) adapts the resync rate to minimize impact on
applications using the storage. As it slows itself down to "stay out of
the way", the resync time increases of course. You won't have redundancy
until the resync completes.
u need the configuration (although it should be identical to
> similar drbd configs which are working without problems) I am happy to
> provide it.
>
> Best and many thanks if any body could shed some light on this,
> Hp
Can you share your config? Are you using thin LVM?
Also, 8.4.7 is _anc
1, run 'drbdadm connect
r0'. Wait for things to settle, then share the new status of /proc/drbd
and the log output from the two nodes please.
digimer
Yup, it's fine.
Note though; If the UpToDate node goes offline, the Inconsistent node
will force itself to Secondary and be unusable. So while it's possible
to mount and use, be careful that whatever is being used can handle
having the storage ripped out from under it.
digimer
On 2018-09
' will be problematic.
digimer
___
drbd-user mailing list
drbd-user@lists.linbit.com
http://lists.linbit.com/mailman/listinfo/drbd-user
e IOPs up to 30% on a system
>with fast backend storage; lowers CPU load caused by DRBD on every workload
> * compat for v4.18 kernel
>
> http://www.linbit.com/downloads/drbd/9.0/drbd-9.0.15-1.tar.gz
> https://github.com/LINBIT/drbd-9.0/releases/tag/drbd-9.0.15
>
&g
Fencing prevents split-brains, period. You absolutely want to set that up.
On 2018-06-12 04:02 AM, Christian Still wrote:
> Hello Roland, Hello Digimer,
>
> thank you for your fast response,
>
> at the moment I have upgraded one host from source to:
>
> cat /proc/drbd
>
gards, rck
Also, are you using fencing?
is a replication technology meant to protect against
hardware faults. Think of it like RAID level 1, but for a full machine.
You wouldn't use RAID 1 for backups. So DRBD is very useful, but it
scratches a different itch.
that the data on the disk had started to sync from an
UpToDate node, but that the resync has not finished. The data on the
disk is likely not usable (specifically, it might be a mix of new and
old data).
digimer
comments;
USE FENCING! That is, a node should be power cycled (or at least
disconnected from the network at the switch level) when it stops
responding. If both nodes are alive, you can either let the faster one
win, or, configure 'delay="15"' on the stonith config in pacemaker's
fence conf
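The delay trick can be sketched like this. All IPs, credentials and host names are made up, and fence_ipmilan parameter names vary by fence-agents version; the idea is that delay=15 on node1's fence device means node1 is shot 15 seconds later than node2, so node1 survives a simultaneous-fence race:

```
# Hypothetical pcs commands; IPs, credentials and host names are illustrative.
pcs stonith create fence_node1 fence_ipmilan \
    ip=10.0.0.1 username=admin password=secret pcmk_host_list=node1 delay=15
pcs stonith create fence_node2 fence_ipmilan \
    ip=10.0.0.2 username=admin password=secret pcmk_host_list=node2
```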
On 2018-02-13 09:39 AM, Lars Ellenberg wrote:
> On Sun, Feb 11, 2018 at 02:43:27AM -0500, Digimer wrote:
>> On 2018-02-11 01:42 AM, Digimer wrote:
>>
>>
>> Hi all,
>>
>> I've setup a 3-node cluster (config below). Basically, Node 1 & 2 are
know how
to set it up such that it doesn't care about the DR node's state
(specially because the DR is not part of the pacemaker cluster).
Is it possible with the current RA to setup such that it only monitors
the two nodes in pacemaker and ignores the state of the DR node?
Cheers
Did it somehow maintain connection through node 3?
If not, then a) Why didn't the fence-handler get invoked? b)
Why is it still showing connected?
If so, then is the connection between node 1 and 2 still
protocol C, even if the connection between 1 <->
Good fencing prevents split-brains, period. Set up pacemaker, set up and
test stonith, configure DRBD to use 'fencing resource-and-stonith;' and
enable the crm-{un,}fence-peer.sh {un,}fence-handlers. Then set up
pacemaker to colocate the DRBD resource -> FS -> IP.
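That colocation chain (DRBD -> FS -> IP) can be sketched in pcs with hypothetical resource names (on older Pacemaker, 'Promoted' is spelled 'Master'):

```
# Filesystem runs only where DRBD is promoted, and only after promotion.
pcs constraint colocation add fs_data with drbd_data-clone INFINITY with-rsc-role=Promoted
pcs constraint order promote drbd_data-clone then start fs_data
# Virtual IP follows the filesystem.
pcs constraint colocation add vip with fs_data INFINITY
pcs constraint order start fs_data then start vip
```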
digimer
On 2017-10-05 05
Will you be using pacemaker for auto-recovery, or are you planning to do
fully manual recovery?
digimer
On 2017-10-05 12:19 PM, José Andrés Matamoros Guevara wrote:
> OK. That's a procedure I have thought about, but didn't know if there were
> another more effective. I just have to che
to setup a DRBD based system, we'll need to know
more about what you want the system to do.
cheers,
digimer
On 2017-10-04 11:29 PM, José Andrés Matamoros Guevara wrote:
> Thanks for your answer.
>
> Yes, I need to copy a SAN to a new drbd system. And yes, I'll check with
> LinBit for
tside prod
thoroughly to be certain you have the steps down pat.
There are docs on how to do this openly available on LINBIT's website.
If you get stuck on certain steps, post specific questions and we'll help.
On 2017-09-25 07:28 AM, Lars Ellenberg wrote:
> On Sat, Sep 23, 2017 at 11:32:42PM -0400, Digimer wrote:
>> I tried updating an 8.3.19 DRBD install (on EL6.9), and when I tried to
>
> 8.3.16 is the latest I know... typo I assume.
Yup, typo. Was late...
>> connect the upda
uld truncate
and cause errors, so I assume drbd 8.4 updated the metadata and 8.3
doesn't recognize it now...
thinking almost the same thing. Even started writing a flame answer ;-)
>
> Can you unsubscribe such lame users?
>
> Julien
It's possible the sender sent the email incomplete by accident. There is
no evidence of malice so I think there is no need for a harsh reaction.
system. I
would stick with something fast, maybe md5 at the most.
This all said, I am not an expert. If someone else says I am wrong,
believe them. :)
On 2017-08-25 04:08 PM, Gionatan Danti wrote:
> Il 25-08-2017 22:01 Digimer ha scritto:
>> On 2017-08-25 03:37 PM, Gionatan Danti wrote:
>>
>> The overhead of clustered locking is likely such that your VM
>> performance would not be good, I think.
>
> Mmm.
hnology solution, where DRBD is pure
resiliency/replication. So for HA-focused platforms, DRBD makes a lot
more sense, and that is why we use it under the Anvil! platform.
On 2017-08-25 03:37 PM, Gionatan Danti wrote:
> Il 25-08-2017 14:34 Digimer ha scritto:
>>
>> Our Anvil! project (https://www.alteeve.com/w/Build_an_m2_Anvil!) is
>> basically this, except we put the VMs on clustered LVs and use gfs2 to
>> store install media and th
nstall media and the server XML files.
I would NOT put the image files on gfs2, as the distributed locking
overhead would hurt performance a fair bit.
will probably need to do a
pv/vg/lvscan to pick up changes. This is quite risky though and
certainly not recommended if you run DRBD in dual-primary.
od-drbd84-8.4.10-1.el6.anvil.x86_64.rpm
https://www.alteeve.com/an-repo/el6/SRPMS/drbd84-kmod-8.4.10-1.el6.anvil.src.rpm
patibility with Linux kernels up to 4.12
Thanks for the release!
Once utils is released, how urgent would you suggest an update is?
cheers
ources:
>
> vCluster-VirtualIP-10.168.10.199 (ocf::heartbeat:IPaddr2):
> Started server7ha
> vCluster-Stonith-server7ha (stonith:fence_ipmilan):Stopped
> vCluster-Stonith-server4ha (stonith:fence_ipmilan):Started
> server7ha
> Clone Set: dlm
>
>
> Can anyone help how to avoid running node hang when other node crashes?
>
>
> Attaching DRBD config file.
>
>
> --Raman
>
>
>
ptions twice
> * various JSON output fixes
> * udev fixes for newer ubuntu releases
> * set bitmap to 0 on metadata creation
>
> http://www.drbd.org/download/drbd/utils/drbd-utils-8.9.11rc1.tar.gz
> http://git.drbd.org/drbd-utils.git/tag/refs/tags/v8.9.11rc1
>
> best,
any problems.
On 02/03/17 02:40 PM, Lars Ellenberg wrote:
> On Thu, Mar 02, 2017 at 03:07:52AM -0500, Digimer wrote:
>> Hi all,
>>
>> We had an event last night on a system that's been in production for a
>> couple of years; DRBD 8.3.16. At almost exactly midnight, both
e years I've been using DRBD.
Let me know if there are any other logs or info
Thanks!
digimer
--
Digimer
Papers and Projects: https://alteeve.com/w/
"I am, somehow, less interested in the weight and convolutions of
Einstein’s brain than in the near certainty that people of equal talen
the same
versions of everything you have on prod, clone the images, and then try
the upgrade. If you run into trouble, you can "reset" by reloading the
images.
There is no other way to know for sure.
digimer
On 20/01/17 06:14 AM, pillai bs wrote:
>
> Thank you Trevor/Digimer.
han nothing) to match the
current version of your cluster and test the upgrade there first. We
will never know your environment well enough to say "this is safe".
If you want commercial support, LINBIT (the company behind DRBD)
offers commercial support and official releases.
/yum/rhel7/drbd-9.0/
> gpgcheck=0
>
> <https://www.drbd.org/en/doc/users-guide-90/s-upgrading-drbd>
The hash is provided when you purchase a support agreement from LINBIT.
On 22/12/16 04:22 PM, Bart Coninckx wrote:
> -Original message-
> *From:* Digimer <li...@alteeve.ca>
> *Sent:* Thu 22-12-2016 20:55
> *Subject:*Re: [DRBD-user] secondary DRBD node shows UpToDate but is not
> *To:* Bart Coninckx <i...@bitsa
a complete virtualization setup is running on top of
> it, this is not evident, hence the reason why I inquire first about
> possible known issues.
>
> Cheers,
Might not be helpful, but have you tried running a verify?
https://www.drbd.org/en/doc/users-guide-84/s-use-online-verify
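Online verify needs a verify-alg defined before it can run; a minimal sketch, assuming a resource named r0:

```
resource r0 {
    net {
        verify-alg md5;   # checksum used to compare blocks during the verify pass
    }
}
# After 'drbdadm adjust r0', kick off a run with: drbdadm verify r0
# Progress shows in /proc/drbd; out-of-sync blocks are logged, not auto-repaired.
```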
the debate to other forums.
How long would you last before you got emotional?
I very much agree that maintaining calm and civility is important. I
even think that Roland's reply was emotional and not overly
professional. However, he, like you and me, is human.
Cut him some slack.
digimer
On 26/1
no.
> Not yet, anyways.
> Various bits and pieces missing.
Is there a time line for expected support?
--
Digimer
Papers and Projects: https://alteeve.ca/w/
What if the cure for cancer is trapped in the mind of a person without
access to education?
available in stacked mode."
You will need to tell us more about your setup. What OS? What are the
pacemaker and DRBD configurations? Show the complete log output from
both nodes (starting from just before you tried to start until a little
after it failed, from both/all nodes).
ork anymore. Even removing the setting
> from the DRBD configuration does not help.
>
> I discussed this on the Pacemaker mailing list and from a Pacemaker point of
> view this should not happen. The nodes are still online so no fencing should
> happen, however DRBD fences the wrong node?
&g
resource to the
> global_common.conf file. Furthermore, I want to change it to
>
> fencing resource-and-stonith;
This is what it should have been, yes. However, I doubt it would have
helped in this failure scenario.
> 4) Finally, in the global "net" section I need to add:
>
tures not yet released. If, though, 8.4 is good
enough for you and if your tolerance for risk is very low, then you
might want to stick with 8.4 for the time being.
Keep in mind as well; The more people that use DRBD 9, the faster it
will mature.
On 20/07/16 12:42 PM, David Bruzos wrote:
> Hi, I'm attempting to post a message to this list and it is not showing up
> for some reason. Did anyone get this?
>
> David
Test succeeded. You can pick up your celebratory cupcake by the door on
the way out of the exam hall.
To anyone on EL6 with interest, we've updated our DRBD RPMs on the
AN!Repo[1].
digimer
1. https://alteeve.ca/an-repo/el6/
On 18/07/16 12:20 PM, Digimer wrote:
> \o/
>
> On 18/07/16 08:57 AM, Lars Ellenberg wrote:
>>
>> No one complained about the release candidate,
HEL 5)
>> * al_write_transaction: skip re-scanning of bitmap page pointer array.
>>Improves performance of random writes on "huge" volumes
>>by burning less CPU and avoiding CPU cache pressure.
>
> Thanks,
>
> Lars
&g
On 15/07/16 12:44 AM, Igor Cicimov wrote:
> On 15 Jul 2016 9:30 am, "Digimer" <li...@alteeve.ca
> <mailto:li...@alteeve.ca>> wrote:
>>
>> On 14/07/16 07:10 PM, Igor Cicimov wrote:
>> > Ok, this has been coming for a while now, does anyon
a while now"?
on the primary, the network
> services are stopped, the external ip address is removed, the file
> systems are unmounted, drbd is demoted to secondary. The other machine
> is promoted just like hardware failover.
>
> Klint.
t;
> How can I recover and use this data without introducing pacemaker to our
> configuration?
Manually, or rgmanager.
> Thanks for your help.
>
> -James Ault
>
>
/users-guide-84/s-replication-protocols
You probably want to look at DRBD proxy:
https://www.drbd.org/en/doc/users-guide-84/s-drbd-proxy
digimer
On 27/06/16 12:18 PM, Louis Munro wrote:
> Hello,
>
> I manage a pair of servers in two separate datacenters that replicate a
> dev
a test.
If I can get some feedback, I'll pass them up to ELRepo (if they want
them) for wider availability.
Please note; These are very much testing RPMs and they are unsigned.
Even if they appear to work well, I would not use them in production. :)
Thanks!
digimer
1. http://elrepo.org/linux
On 07/06/16 04:09 PM, Lars Ellenberg wrote:
> On Tue, Jun 07, 2016 at 12:24:48PM -0400, Digimer wrote:
>> On 07/06/16 08:46 AM, David Pullman wrote:
>>> Digimer,
>>>
>>> Thanks for the direction, this sounds right for sure. So to actually do
>>> this:
he I/O between the nodes goes
> down. We have two switches in a redundant configuration connecting the
> nodes. For unrelated reasons I can't change the interconnect.
>
> Any suggestions, referrals to docs, etc., would be greatly appreciated.
Fencing. 100% required, and will prevent split b
t
having fencing configured.
For a more useful answer, please share more information on your
environment and config.
,
>
> Yahagi(TGI)
>
Can you paste the output of 'cat /proc/drbd' from the machine you are
trying to detach? If the peer is not UpToDate and Primary, then it needs
the local machine to run. (Just a guess, /proc/drbd will give better
indication).
madi
s no meaning. The inode(s) are marked dirty and it's done.
attached DRBD configuration file and suggest correct
> configuration for DRBD. Should I able to mount this volume on both node
> with Read/write access and also mount this same volume third node with
> Read/Write access.
>
> Kindly guide for same.
>
> Regards,
> Vij
h (the former being ideal for
production, the latter being easier to set up but more fragile, so only
good for testing).
On 16/03/16 03:29 PM, Digimer wrote:
> On 16/03/16 01:51 PM, Tim Walberg wrote:
>> Is there a way to make this work properly without STONITH? I forgot to
>> mention
>> that both nodes are virtual machines (QEMU/KVM), which makes STONITH a minor
>> challenge. Also, sinc
aster (score:INFINITY) (with-rsc-role:Master)
> (id:colocation-drbd_fs-drbd_master-INFINITY)
>
> Resources Defaults:
> resource-stickiness: 100
> failure-timeout: 60
> Operations Defaults:
> No defaults set
>
> Cluster Properties:
> cluster-infrastructure: corosync
On 01/03/16 08:56 PM, Mark Wu wrote:
> Hi Digimer,
>
> Thanks for your reply!
>
> Yes, I understand fencing can prevent split-brains. But in my case it
> maybe not sufficient. Let me clarify my use case. Basically, you can
> get the architecture from http://ibin.co/2Y
e is written to the other node. I am pretty sure that the older,
smaller changes are a lot more valuable.
On 22/02/16 06:00 PM, Lars Ellenberg wrote:
> On Mon, Feb 22, 2016 at 12:10:12PM -0500, Digimer wrote:
>> On 22/02/16 11:59 AM, Lars Ellenberg wrote:
>>> On Thu, Feb 18, 2016 at 02:04:36PM -0500, Digimer wrote:
>>>> This software is purely for 8.4, so
On 22/02/16 11:59 AM, Lars Ellenberg wrote:
> On Thu, Feb 18, 2016 at 02:04:36PM -0500, Digimer wrote:
>> This software is purely for 8.4, so I am not worried about DRBD 9
>> compatibility at this time.
>>
>> That said, I would rather not use it anyway, so if I can get
This software is purely for 8.4, so I am not worried about DRBD 9
compatibility at this time.
That said, I would rather not use it anyway, so if I can get the
estimate resync time via another tool, I would prefer to do so.
digimer
On 18/02/16 02:03 PM, Julien Escario wrote:
> Hello,
>
to
seconds if the data is already available directly somewhere.
Thanks!
node 1. When fencing succeeds, Node 2 knows the state of node 1 (off)
and can safely promote itself without causing a split-brain and recovery
can proceed.
disk internal;
> }
> on filer06 {
> device /dev/drbd0;
> disk /dev/cciss/c0d0p4;
> address 192.168.2.11:7788;
> meta-disk internal;
> }
> }
>
> filter in lvm.conf
>
> filter = [ "a|drbd[0-9]|", "r|.*|" ]
>
>
>
anges is when the
primary considers the write to be complete, which depends on your
protocol (C == hits storage on the peer, B == received by the peer's
network layer, A == hits the local machine's network stack).
refs/tags/v8.9.5
>
>
> cheers,
> Phil