On 2017-10-03 06:57 PM, José Andrés Matamoros Guevara wrote:
> I have been consulted about moving multiple terabytes to a new system
> using DRBD to ensure high availability. I have been thinking about
> multiple scenarios for moving the data as fast as possible while keeping
> the maintenance window minimal
I have been consulted about moving multiple terabytes to a new system using
DRBD to ensure high availability. I have been thinking about multiple
scenarios for moving the data as fast as possible while keeping the
maintenance window for switching systems minimal.
Is there any how-to or recommendation about
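A common pattern for this kind of migration is to add the new machine as a DRBD peer and let background replication move the terabytes while the old system stays in service; the maintenance window then shrinks to the final switchover. A minimal sketch of such a resource definition (hostnames, devices, and addresses below are all assumptions, not from the original mail):

```
# /etc/drbd.d/r0.res -- sketch only; every name and address is hypothetical
resource r0 {
    device     /dev/drbd0;
    disk       /dev/vg0/data;     # hypothetical backing device
    meta-disk  internal;
    on old-server {
        address 10.0.0.1:7789;
    }
    on new-server {
        address 10.0.0.2:7789;
    }
}
```

With a resource like this brought up on both nodes, the initial sync copies the data online; only the cutover (demote the old node, promote the new one) needs downtime.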
On 02/10/2017 15:35, Lars Ellenberg wrote:
> Usually a result of having (temporarily?) only a "primitive", without a
> corresponding "ms" resource definition in the CIB.
Once you have fixed the config, you should no longer get it,
and you should be able to clear previous fail counts with a "resource cleanup".
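In crm shell syntax, the fix described above amounts to pairing the DRBD primitive with a master/slave ("ms") resource; a sketch, with the resource names being assumptions:

```
# crm configure sketch -- resource names (p_drbd_r0, ms_drbd_r0, r0) are hypothetical
primitive p_drbd_r0 ocf:linbit:drbd \
    params drbd_resource=r0 \
    op monitor interval=29s role=Master \
    op monitor interval=31s role=Slave
ms ms_drbd_r0 p_drbd_r0 \
    meta master-max=1 master-node-max=1 \
         clone-max=2 clone-node-max=1 notify=true
```

Old failures can then be cleared with `crm resource cleanup p_drbd_r0`.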
In addition, as long as you're using Proxmox, it would be far easier to
set up the native DRBD9 plugin for Proxmox instead of using the iSCSI
method. In that case both DRBD and Proxmox are hosted on the same
servers (a hyper-converged setup). Each VM will reside in a separate drbd9
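With the drbdmanage-based Proxmox plugin, the storage is declared directly in Proxmox's storage configuration rather than exported over iSCSI; a sketch (the storage ID and redundancy value are assumptions):

```
# /etc/pve/storage.cfg -- sketch; "drbdstorage" and redundancy 3 are hypothetical
drbd: drbdstorage
        content images,rootdir
        redundancy 3
```

Each VM disk then becomes its own DRBD resource, created and placed by drbdmanage.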
On 03/10/2017 at 14:50, Robert Altnoeder wrote:
> On 10/02/2017 06:20 PM, Julien Escario wrote:
>> Hello,
>> In the doc, I can read: "In this case drbdmanage chooses 3 nodes that fit
>> all requirements best, which is by default the set of nodes with the most
>> free space in the
On 10/02/2017 12:37 PM, Martyn Spencer wrote:
> I managed to put node1 into a state where it had pending actions that
> I could not remove, so decided to remove the node and then re-add it.
> Rather naively I did not check and the DRBD resources were all
> role:primary on node1. Now node1 is in a
On 10/02/2017 06:20 PM, Julien Escario wrote:
> Hello,
> In the doc, I can read: "In this case drbdmanage chooses 3 nodes that fit all
> requirements best, which is by default the set of nodes with the most free
> space in the drbdpool volume group."
>
> Is there a way to change this 'default
This seems to be the same bug that I hit on one of my test clusters.
It is a race condition that we are currently investigating.
If a somehow outdated resource is started, udev causes a read-only open
attempt on the resource, which will - at least in some cases - attempt
to wait for an
Note, all of the below relates to my use of DRBD 8.4 in production. I'm
assuming most of it will be equally applicable to DRBD9.
On 3/10/17 19:52, Gandalf Corvotempesta wrote:
Just trying to figure out if drbd9 can do the job.
Requirement: a scale-out storage for VMs image hosting (and other
Thanks for clarifying this ...
Regards,
Yannis
On Tue, Oct 3, 2017 at 12:30 PM, Roland Kammerer wrote:
> On Tue, Oct 03, 2017 at 12:05:50PM +0100, Yannis Milios wrote:
> > I think you have to use 'drbdmanage reelect' command to reelect a new
> > leader first.
> >
>
On Tue, Oct 03, 2017 at 12:05:50PM +0100, Yannis Milios wrote:
> I think you have to use 'drbdmanage reelect' command to reelect a new
> leader first.
>
> man drbdmanage-reelect
In general that is a bad idea, and I regret that I exposed it as a
subcommand and did not hide it behind a
I think you have to use 'drbdmanage reelect' command to reelect a new
leader first.
man drbdmanage-reelect
Yannis
On Mon, Oct 2, 2017 at 2:12 PM, Jason Fitzpatrick wrote:
> Hi all
>
> I am trying to get my head around the quorum-control features within
>
Just trying to figure out whether drbd9 can do the job.
Requirement: a scale-out storage for VM image hosting (and other
services, but those would be built by creating, for example, an NFS VM on
top of DRBD).
Let's assume a 3-nodes DRBDv9 cluster.
I would like to share this cluster by using iSCSI (or
On Tue, Sep 26, 2017 at 11:01:33PM +0200, Gionatan Danti wrote:
> Hi list,
> I would like to have a clarification how barriers and flushes work to
> preserve write ordering.
That is a bit hard to clarify,
because it has changed a few times in the Linux kernel.
"Today" the Linux block layer "contract"
On Mon, Sep 25, 2017 at 09:02:57PM, Eric Robinson wrote:
> Problem:
>
> Under high write load, DRBD exhibits data corruption. In repeated
> tests over a month-long period, file corruption occurred after 700-900
> GB of data had been written to the DRBD volume.
Interesting.
Actually,
I am testing a three-node DRBD 9.0.9 setup using packages I built for
CentOS 7, with the latest drbdmanage and drbd-utils versions. If I
lose the data on the resources, that is fine (I am only testing), but I
wanted to learn how to manage (if possible) the mess that I have just
caused