Hi all,
I'm trying to set up HA with DRBD and Heartbeat on two identical RedHat
VMs hosted on VirtualBox on Windows. The details about the servers' DRBD
build and config are given below:
[root@cpvmdc1 ~]# uname -a
Linux cpvmdc1 2.6.18-229.el5 #1 SMP Tue Oct 26 18:54:44 EDT 2010 x86_64
x86_64 x86_6
So strange - I'm sure I
formatted the DRBD partition with ext3, but it is showing as ext2 in mount:
[root@cpvmdc1 httpd]# mount
/dev/drbd0 on /drbd0 type ext2 (rw)
.
.
Thanks,
Igor
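For what it's worth, ext3 is essentially ext2 plus a journal, so one quick hedged check (using the device path from the mount output above) is whether the has_journal feature is actually set on the superblock:

# dumpe2fs -h /dev/drbd0 | grep -i features

If has_journal shows up there, the filesystem was most likely created as ext3 and is merely being mounted as ext2 (for example by an fstab entry or cluster agent specifying ext2).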
On Thu, Dec 22, 2011 at 8:42 PM, Rasto Levrinc wrote:
> On Thu, Dec 22, 2011 at 6:25 AM, Igor Cicimov wrote:
?
Thanks again for your help
Igor
On Fri, Dec 23, 2011 at 7:27 PM, Felix Frank wrote:
> Hi,
>
> On 12/23/2011 03:37 AM, Igor Cicimov wrote:
> > Thanks to both of you guys for your help. Now the error is gone but they
> > are still in StandAlone mode instead of connected:
Can't you just use an EBS-optimized instance?
On 14/05/2014 7:47 AM, "Csanad Novak" wrote:
>
> Hi Arnold,
>
> I guess the challenge here is that the instance storage gets reset
completely (back to the point where it's just a raw device without
partitions) on every stop/start cycle.
>
> Currently I’m u
Hi all,
I've been searching for a solution to this but couldn't find anything
except a couple of threads ending without any outcome.
I have drbd-8.4.4 installed in an Ubuntu-12.04 container running on an
Ubuntu-12.04 host. I can only create the metadata, and then the attempt to bring
the resource up fails wit
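Not from the original thread, just a hedged sketch of the usual sequence for context, with "r0" as a placeholder resource name; note the drbd kernel module can only be loaded on the host, never from inside the container:

# on the host
modprobe drbd
lsmod | grep drbd
# then, wherever drbdadm is run
drbdadm create-md r0
drbdadm up r0
cat /proc/drbd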
On 02/06/2014 6:14 AM, "Lars Ellenberg" wrote:
>
> On Sat, May 31, 2014 at 10:48:09PM +1000, Igor Cicimov wrote:
> > Hi all,
> >
> > I've been searching for a solution about this but couldn't find anything
> > but couple of threads ending withou
On 03/06/2014 7:50 AM, "Lars Ellenberg" wrote:
>
> On Mon, Jun 02, 2014 at 11:07:56AM +1000, Igor Cicimov wrote:
> > On 02/06/2014 6:14 AM, "Lars Ellenberg"
wrote:
> > >
> > > On Sat, May 31, 2014 at 10:48:09PM +1000, Igor Cicimov wrote:
> &g
On 04/06/2015 8:50 PM, "Kimly Heler" wrote:
>
> Hello,
>
> I have two two-node clusters in different data centers. I have
configured a drbd-on-lvm resource on each cluster. Now I need to configure a
stacked drbd on top of these 2 clusters.
>
> It should be similar to section 8.4.2 in
http://drbd.li
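For reference, a minimal sketch of a stacked resource along the lines of that guide section; resource names, device minors and addresses are placeholders, not the poster's actual config:

resource r0-U {
    protocol A;
    stacked-on-top-of r0 {
        device    /dev/drbd10;
        address   192.168.42.1:7788;
    }
    on backup-node {
        device    /dev/drbd10;
        disk      /dev/sdb1;
        address   192.168.42.2:7788;
        meta-disk internal;
    }
}

The lower-level resource (r0 here) has to be Primary on whichever node runs the stacked resource.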
On 20/08/2015 6:58 PM, "Matthew Vernon" wrote:
>
> Hi,
>
> > Are you sure LVM only uses the DRBD device to write data to and not the
> > backend disk? We've had this issue in the past and this was caused by
> > LVM, which scans all the devices for PVs, VGs and LVs and sometimes
> > picks the wron
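The usual fix is an LVM filter that accepts only the DRBD devices plus the disks LVM really should see; a hedged example for /etc/lvm/lvm.conf, where the device names are placeholders for the actual layout:

filter = [ "a|^/dev/drbd.*|", "a|^/dev/sda.*|", "r|.*|" ]
write_cache_state = 0

After changing the filter it is worth regenerating the initramfs so early boot applies the same rules.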
On Fri, Dec 11, 2015 at 3:08 AM, Digimer wrote:
> On 10/12/15 09:27 AM, Fabrizio Zelaya wrote:
> > Thank you Lars and Adam for your recommendations.
> >
> > I have ping-timeout set to 5 and it still happened.
> >
> > Lars with this fencing idea. I have been contemplating this, however I
> > am re
On 11/12/2015 4:40 PM, "Digimer" wrote:
>
> On 11/12/15 12:12 AM, Igor Cicimov wrote:
> >
> >
> > On Fri, Dec 11, 2015 at 3:08 AM, Digimer wrote:
> >
> > On 10/12/15 09:27 AM, Fabrizio Zelaya wrote:
Hi,
I'm testing 9.0.1.1 installed from git and have a resource with fencing in
the disk section:
disk {
    on-io-error detach;
    fencing resource-and-stonith;
}
but when running:
# drbdadm create-md vg1
drbd.d/vg1.res:10: Parse error: 'resync-rate | c-plan-ahead |
c-delay-tar
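As far as I can tell drbdadm 9 no longer accepts fencing as a disk option; a hedged sketch of the same settings with fencing moved to the net section, which is where DRBD 9 appears to expect it:

disk {
    on-io-error detach;
}
net {
    fencing resource-and-stonith;
}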
On Wed, Feb 17, 2016 at 8:18 PM, Roland Kammerer wrote:
> On Wed, Feb 17, 2016 at 04:20:12PM +1100, Igor Cicimov wrote:
> > Hi,
> >
> > I'm testing 9.0.1.1 installed from git and have a resource with fencing
> in
> > the disk section:
> >
>
Sorry, forgot to include the list in my reply.
-- Forwarded message --
From: Igor Cicimov
Date: Mon, Feb 22, 2016 at 9:33 PM
Subject: Re: [DRBD-user] DRBD9 drbdadm complains about fencing being in
wrong section
To: Roland Kammerer
Hi Roland,
On Thu, Feb 18, 2016 at 8:03 PM
On 25/02/2016 7:47 PM, "Eric Robinson" wrote:
>
> I have a 2-node cluster, where each node is primary for one drbd volume
and secondary for the other node’s drbd volume. Replication is A->B for
drbd0 and A<-B for drbd1. I have a logical volume and filesystem on each
drbd device. When I try to fail
On 26/02/2016 8:53 AM, "Eric Robinson" wrote:
>
> > And your pacemaker config is???
>
> > Run
>
> > # crm configure show
> > and paste it here.
>
> Pacemaker 1.1.12.
>
>
>
> Here’s the config…
>
>
>
>
>
> [root@ha13a /]# crm configure show
>
> node ha13a
>
> node ha13b
>
> primitive p_drbd0 oc
On 26/02/2016 9:51 AM, "Eric Robinson" wrote:
>
> > I'm confused - I don't see the VG(s) and LV(s) under cluster control. Have
you done that bit?
>
> (blank stare)
>
> This is where I admit that I have no idea what you mean. I’ve been
building clusters with drbd for a decade, and I’ve always had drb
On Fri, Feb 26, 2016 at 10:10 AM, Eric Robinson
wrote:
> > The usual problem with LVM on top of DRBD is that the
> > backing device gets seen as an LVM PV and is grabbed by
> > LVM before DRBD starts up. That means DRBD cannot access
> > it since it's already in use. Solution: adjust the fi
On Mon, Feb 22, 2016 at 9:36 PM, Igor Cicimov wrote:
> Sorry, forgot to include the list in my reply.
>
>
> -- Forwarded message ------
> From: Igor Cicimov
> Date: Mon, Feb 22, 2016 at 9:33 PM
> Subject: Re: [DRBD-user] DRBD9 drbdadm complains about fencing bei
On Fri, Feb 26, 2016 at 10:37 AM, Eric Robinson
wrote:
> > Those are not the backing devices. Backing devices are the ones named
> > on the "disk " lines in your DRBD resource files - for example "disk
> /dev/vg/lv1".
>
> Sorry, for a second I was thinking of the drbd disks as backing devices
> f
On 27/02/2016 9:44 am, "Eric Robinson" wrote:
>
> In the example you provided…
>
> ...
> filter = [ "a|/dev/vd.*|", "a|/dev/drbd*|", "r|.*|" ]
> write_cache_state = 0
> volume_list = [ "rootvg", "vg1", "vg2" ]
> ...
>
> …it looks like you are accepting anything that begins with '/dev/vd.' or
'/de
On Sat, Feb 27, 2016 at 11:18 AM, Eric Robinson
wrote:
> Sadly, it still isn’t working.
>
> Here is my crm config...
>
> node ha13a
> node ha13b
> primitive p_drbd0 ocf:linbit:drbd \
> params drbd_resource=ha01_mysql \
> op monitor interval=31s role=Slave \
> op monitor in
On 27/02/2016 4:10 pm, "Eric Robinson" wrote:
>
> > Can you please try following constraints instead the ones you have:
>
> > group g_drbd0 p_lvm_drbd0 p_fs_clust17 p_vip_clust17
> > group g_drbd1 p_lvm_drbd1 p_fs_clust18 p_vip_clust18
> > colocation c_clust17 inf: g_drbd0 ms_drbd0:Master
> > colo
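Spelled out in full, the suggested layout would look roughly like this in crm syntax - the resource names are taken from the thread, while ms_drbd1 and the order constraints are assumed to follow the same naming pattern:

group g_drbd0 p_lvm_drbd0 p_fs_clust17 p_vip_clust17
group g_drbd1 p_lvm_drbd1 p_fs_clust18 p_vip_clust18
colocation c_clust17 inf: g_drbd0 ms_drbd0:Master
colocation c_clust18 inf: g_drbd1 ms_drbd1:Master
order o_clust17 inf: ms_drbd0:promote g_drbd0:start
order o_clust18 inf: ms_drbd1:promote g_drbd1:start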
On Sat, Feb 27, 2016 at 5:05 PM, Igor Cicimov wrote:
>
> On 27/02/2016 4:10 pm, "Eric Robinson" wrote:
> >
> > > Can you please try following constraints instead the ones you have:
> >
> > > group g_drbd0 p_lvm_drbd0 p_fs_clust17 p_vip_clust1
On 28/02/2016 1:19 PM, "Eric Robinson" wrote:
>
> > That's exactly what this configuration gives you right? Each group is
collocated
> > with one and only one drbd device on the master node. Regarding
starting/stopping of
> > the resources tied up together in the same group. I guess after adding
M
On Mon, Feb 29, 2016 at 7:52 AM, Igor Cicimov wrote:
>
> On 28/02/2016 1:19 PM, "Eric Robinson" wrote:
> >
> > > That's exactly what this configuration gives you right? Each group is
> collocated
> > > with one and only one drbd device on t
On 29/02/2016 2:00 AM, "翟果" wrote:
>
> Hello,All:
> I tried googling for the solution, but got no answers.
> Somebody says DRBD 8.4 doesn't work with SDP? Really?
As far as I can see that's not true:
http://drbd.linbit.com/users-guide-8.4/s-replication-transports.html
> I have two nodes(C
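According to that page, selecting the transport in 8.4 is just a matter of the address-family prefix; a hedged sketch with placeholder node names and IPs:

resource r0 {
    on nodeA {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   sdp 10.1.1.31:7789;
        meta-disk internal;
    }
    on nodeB {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   sdp 10.1.1.32:7789;
        meta-disk internal;
    }
}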
On 01/03/2016 3:08 AM, "Eric Robinson" wrote:
>
> >> That approach does not really work because if you stop resource
> >> p_mysql_002 (for example) then all the other resources in the group
stop too!
> >>
> > Still don't understand what's your problem with that.
>
> Each mysql instance is for a diff
Did you read this?
https://www.drbd.org/en/doc/users-guide-90/s-enable-dual-primary
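The relevant bit from that guide section, as a hedged reminder: dual-primary needs protocol C and allow-two-primaries in the resource's net section, plus working fencing and a cluster file system on top:

resource r0 {
    net {
        protocol C;
        allow-two-primaries yes;
    }
}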
On 24 Apr 2016 6:03 am, "Ml Ml" wrote:
> Hello List,
>
> I am running DRBD9 with 4.2.8-1-pve (Proxmox 4.1) and in syslog I get:
>
> Apr 23 21:50:07 node02 kernel: [ 5018.813223] drbd vm-101-disk-1
> node01: ASSERT
On Tue, May 17, 2016 at 8:21 AM, Mats Ramnefors wrote:
> I am testing a DRBD 9 and 8.4 in simple 2 node active - passive clusters
> with NFS.
>
> Copying files form a third server to the NFS share using dd, I typically
> see an average of 20% CPU load (with v9) on the primary during transfer of
>
str(connection->info.conn_connection_state));
> +     if (connection->info.conn_connection_state == C_CONNECTED) {
> +         printI("_peer=%s\n", drbd_role_str(connection->info.conn_role));
On 7 Jun 2016 3:18 pm, "Stephano-Shachter, Dylan"
wrote:
>
> Hello all,
>
> I am building an HA NFS server using drbd and pacemaker. Everything is
working well except I am getting lower write speeds than I would expect. I
have been doing all of my benchmarking with bonnie++. I always get read
spee
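Not a diagnosis, just the knobs that usually come up first in these threads; a hedged starting point for the resource's config, with values that should be benchmarked rather than copied:

net {
    max-buffers     8000;
    max-epoch-size  8000;
    sndbuf-size     0;      # let the kernel auto-tune the send buffer
}
disk {
    al-extents      3389;   # larger activity log, fewer metadata updates
}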
On 1 Jul 2016 3:48 pm, "T.J. Yang" wrote:
>
> Hi All
>
> I am new to drbd performance tuning and I have been studying (R0).
> Also I am browsing others' efforts in the drbd-user archive (R1).
> I was able to get a 350MB/s rsync rate (R2) for two CentOS 7.2 VMs (A and B)
when they are on the same LAN with tuning
Ok, this has been coming for a while now - does anyone know when the
expected 9.1 release date is?
On 15 Jul 2016 9:30 am, "Digimer" wrote:
>
> On 14/07/16 07:10 PM, Igor Cicimov wrote:
> > Ok, this has been coming for a while now, does anyone know when is the
> > expected 9.1 release date?
>
> 9.0.3 just came out... So far as I know, nothing has been said ab
On Wed, Aug 31, 2016 at 3:49 PM, Mia Lueng wrote:
> Hi:
> I have a cluster with four drbd devices. I found that stopping Oracle
> timed out while drbd is in the resync state.
> Oracle is blocked like the following:
>
> oracle6869 6844 0.0 0.0 71424 12616 ?S16:28
> 00:00:00 pipe_wait
> /oracle/
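If application I/O is being starved by the resync, the dynamic resync controller is usually the first thing to look at; a hedged example for the disk section of the affected resources, with the numbers being placeholders to tune:

disk {
    c-plan-ahead   20;    # enable the dynamic resync-rate controller (units of 0.1s)
    c-fill-target  1M;
    c-min-rate     10M;   # resync may be throttled down to this when application I/O competes
    c-max-rate     100M;
}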
On 1 Sep 2016 1:16 am, "Mia Lueng" wrote:
>
> Yes, Oracle & drbd is running under pacemaker just in
> primary/secondary mode. I stopped the oracle resource during DRBD is
> resyncing and the oracle hangup
>
> 2016-08-31 14:38 GMT+08:00 Igor Cicimov :
> >
On 1 Sep 2016 9:02 am, "Igor Cicimov"
wrote:
>
> On 1 Sep 2016 1:16 am, "Mia Lueng" wrote:
> >
> > Yes, Oracle & drbd is running under pacemaker just in
> > primary/secondary mode. I stopped the oracle resource during DRBD is
> > resyncing an
On Thu, Sep 1, 2016 at 9:02 AM, Igor Cicimov wrote:
> On 1 Sep 2016 1:16 am, "Mia Lueng" wrote:
> >
> > Yes, Oracle & drbd is running under pacemaker just in
> > primary/secondary mode. I stopped the oracle resource during DRBD is
> > resyncing and the o
On 19 Sep 2016 5:45 pm, "Marco Marino" wrote:
>
> Hi, I'm trying to build an active/passive cluster with drbd and pacemaker
for a SAN. I'm using 2 nodes with one RAID controller (MegaRAID) on each
one. Each node has an SSD disk that works as a cache for reads (and writes?)
realizing the CacheCade prop
pastebin.com/BGR33jN6
>>
>> @digimer:
>> Using local-io-error should power off the node and switch the cluster to
the remaining node - is this a good idea?
>>
>> Regards,
>> Marco
>>
>> 2016-09-19 12:58 GMT+02:00 Adam Goryachev :
>>
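For reference, the handler that implements that behaviour; a hedged sketch only - the exact poweroff command, and whether this is preferable to letting Pacemaker fence the node, depends on the setup:

disk {
    on-io-error call-local-io-error;
}
handlers {
    # self-fence this node on a local I/O error so the cluster fails over
    local-io-error "echo o > /proc/sysrq-trigger";
}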
On Tue, Sep 20, 2016 at 7:13 PM, Marco Marino wrote:
> mmm... This means that I did not understand this policy. I thought that the I/O
> error happens only on the primary node, but it seems that all nodes become
> diskless in this case. Why? Basically I have an I/O error on the primary
> node because
and one of these disks fails,
> nothing should happen on the secondary node.
>
> Igor Cicimov: why does removing the write-back cache drive on the primary node
> cause problems on the secondary node as well? What are the dynamics involved?
>
> As Lars pointed out it is up to you to
On Fri, Sep 23, 2016 at 7:16 PM, mzlld1988 wrote:
> Hello, everyone
>
> I have a question about using DRBD9 with pacemaker 1.1.15 .
>
> Can DRBD9 be used with pacemaker?
>
No, not yet but Linbit is working on it as they say. For now you need to
apply the attached patch to the drbd ocf agent.
On Sun, Sep 25, 2016 at 11:15 AM, Igor Cicimov <
ig...@encompasscorporation.com> wrote:
>
>
> On Fri, Sep 23, 2016 at 7:16 PM, mzlld1988 wrote:
>
>> Hello, everyone
>>
>> I have a question about using DRBD9 with pacemaker 1.1.15 .
>>
>> Does DRBD9
't tried any other layout.
>
>
> Sent from my Mi phone
> On Sep 25, 2016 9:15 AM, Igor Cicimov
wrote:
>>
>>
>>
>> On Fri, Sep 23, 2016 at 7:16 PM, mzlld1988 wrote:
>>>
>>> Hello, everyone
>>>
>>> I have a question a
On 27 Sep 2016 11:51 pm, "刘丹" wrote:
>
> Failed to send to one or more email servers, so sending again.
>
>
>
> At 2016-09-27 15:47:37, "Nick Wang" wrote:
> >>> On 2016-9-26 at 19:17, in message, Igor Cicimov <
ig...@encompasscorpora
On 17 Oct 2016 6:11 pm, "Gandalf Corvotempesta" <
gandalf.corvotempe...@gmail.com> wrote:
>
> On 17 Oct 2016 09:01, "Jan Schermer" wrote:
> >
> > 3 storages, many more hypervisors, data triplicated... that's the usual
scenario
>
> Are you using drbd9 with 3-node replication?
> Could you pl
Hi Brandon,
On Wed, Nov 16, 2016 at 5:17 PM, Brandon Chapman
wrote:
> Hello.
>
> I am trying to integrate MySQL into my cluster, currently set up with
> drbd 8.4.5 on Ubuntu 16.04.1 LTS.
> I have mostly followed this guide here:
>
> http://www.tokiwinter.com/clustering-with-drbd-corosync-and-pacemake
On 21 Nov 2016 1:48 am, "Jasmin J." wrote:
>
> Hi!
>
> I am playing with Proxmox 4.3 and DRBD9.
> I followed this guide (https://pve.proxmox.com/wiki/DRBD#Disk_for_DRBD),
> because it explains how to set up DRBD (8.x) on top of a physical disk (I
don't
> want to use it on top of LVM). This guide is
On 28 Nov 2016 6:18 am, wrote:
>
> Hello,
>
> I don't speak English very well, but I hope to make my request partly
clear.
>
> My task is to look for an opportunity to optimize
> the reliability of a DRBD connection (version 9) over a LAN between three
> or four mobile devices
On 17 Dec 2016 1:07 pm, "Jasmin J." wrote:
Hi!
I have a machine (A) with a RAID1 and a BBU. On top of a partition of this
RAID
is LVM and then DRBD 8.4.
The other machine (B), which is the DRBD mirror for the aforementioned
partition, has a normal SATA disk. I try to use Protocol A, so it mak
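Protocol A itself is just one line in the resource; a hedged sketch, keeping in mind that asynchronous replication only decouples write latency from the peer, not sustained throughput:

resource r0 {
    net {
        protocol A;   # write is confirmed once it hit the local disk and the local TCP send buffer
    }
}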
try to find some time over the holidays and test this.
Cheers,
On 27 Dec 2016 4:15 am, "Jasmin J." wrote:
Hello Dietmar!
> The functionality you are looking for already exists using DRBD9. It is
> called drbdmanage, and Linbit provides a repository including all
> packages and a storage driver for PVE.
Yes it is ... BUT ...
> AFAIK DRBD9 is stable (or wi
On 04/01/2017 7:32 pm, "Roland Kammerer" wrote:
On Tue, Jan 03, 2017 at 10:38:38AM +, Enrica Ruedin wrote:
> I can't understand Linbit changing the license in such a way that Proxmox
> PVE has to remove their support completely. This unnecessary
> change leads to confusion.
But you do understan
On 9 Feb 2017 6:02 pm, "Dr. Volker Jaenisch"
wrote:
Hi!
I have two pairs of servers over the same 10 Gbit link:
1) Pair A is replicating a 1TB Sata Disk using drbd8 (Debian Jessie 8.6).
2) Pair B is replicating a 0.8TB volume with an underlying hardware RAID
10 consisting of 6 SAS disks, S
On 25 Feb 2017 3:32 am, "Dr. Volker Jaenisch"
wrote:
Servus !
On 24.02.2017 at 15:53 Lars Ellenberg wrote:
On Fri, Feb 24, 2017 at 03:08:04PM +0100, Dr. Volker Jaenisch wrote:
If both 10Gbit links fail then the bond0 aka the worker connection fails
and DRBD goes - as expected - into split bra
On 26 Feb 2017 10:58 pm, "Gandalf Corvotempesta" <
gandalf.corvotempe...@gmail.com> wrote:
2017-02-26 10:33 GMT+01:00 Rabin Yasharzadehe :
> what about putting DRBD over ZVOL ?
If possible, I have no issue in doing this.
Anyone using DRBD over ZFS ?
On Wed, Mar 1, 2017 at 2:52 AM, Sean M. Pappalardo <
spappala...@renegadetech.com> wrote:
>
> Only because that's all the distro had available as I was running an old
> one. This is part of the reason for the upgrade. Unfortunately,
> Proxmox's kernel includes the DRBD 9 module only. (Is there any
On Fri, Mar 24, 2017 at 7:19 PM, Raman Gupta wrote:
> Hi All,
>
> I am having a problem where, in a GFS2 dual-primary DRBD Pacemaker
> cluster, if a node crashes then the running node hangs! The CLVM commands
> hang, and the libvirt VM on the running node hangs.
>
> Env:
> -
> CentOS 7.3
> DRBD 8.4
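For a dual-primary/GFS2 setup the usual checklist is DRBD fencing wired into Pacemaker plus working STONITH; a hedged 8.4-style sketch, where the script paths may differ by distribution:

net {
    allow-two-primaries;
    fencing resource-and-stonith;
}
handlers {
    fence-peer          "/usr/lib/drbd/crm-fence-peer.sh";
    after-resync-target "/usr/lib/drbd/crm-unfence-peer.sh";
}

DLM/CLVM/GFS2 also rely on Pacemaker fencing actually completing; if STONITH cannot kill the lost node, everything above DRBD is expected to block.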
Raman,
On Sat, Mar 25, 2017 at 12:07 AM, Raman Gupta
wrote:
> Hi,
>
> Thanks for looking into this issue. Here is my 'pcs status' and attached
> is cib.xml pacemaker file
>
> [root@server4 cib]# pcs status
> Cluster name: vCluster
> Stack: corosync
> Current DC: server7ha (version 1.1.15-11.el7_
On 25 Mar 2017 11:00 am, "Igor Cicimov" wrote:
Raman,
On Sat, Mar 25, 2017 at 12:07 AM, Raman Gupta
wrote:
> Hi,
>
> Thanks for looking into this issue. Here is my 'pcs status' and attached
> is cib.xml pacemaker file
>
> [root@server4 cib]# pcs s
On 25 Mar 2017 8:16 pm, "Marco Marino" wrote:
Hi, I'm trying to understand how to configure raw devices when used with
drbd. I think I have a problem with data alignment. Let me describe my case:
I have a raw device /dev/sde on both nodes and on top of it there is the
drbd device. So, in the .re
ctly fine, single drbd device backed by a big block storage
exported as single lun over a target. Then a vg on the pv created from the
iscsi block device from which the lvs are created for the vm's. This is
fine for single vm host.
Thank you,
Marco
2017-03-27 23:40 GMT+02:00 Igor Cici
On 2 Apr 2017 3:45 am, "Gregor Burck" wrote:
Hi,
I'm testing drbd on a Debian system (drbd 8.9.2).
My setup is two nodes in a primary-primary setup with gfs2.
I mount the cluster resource in the local filesystem. (/dev/drbd0 on
/clusterdata type gfs2)
When I kill one node (cut the electrical
On Tue, Apr 4, 2017 at 9:42 PM, Roberto Resoli
wrote:
> Il 04/04/2017 13:08, Frank Rust ha scritto:
> > Hi folks,
> >
> > I am wondering if it would be possible to create a drbdmanage cluster
> where the hostnames don't match the IP address of the network interface to
> use.
> >
> > In detail:
> >
On 4 Apr 2017 11:08 pm, "Robert Altnoeder"
wrote:
On 04/04/2017 02:48 PM, Frank Rust wrote:
> That's what I tried, but it is not working, because the drbdmanage
software detects its own name by doing os.uname().
> And that reports the name from /etc/hostname, corresponding to the
external inter
On 10 Apr 2017 7:42 am, "Marco Certelli" wrote:
Hello. Thanks for the answer.
Maybe I was not clear: I do not want the automatic poweroff of the server.
Why do you have a problem with this? The server is already powering off, right?
My problem is that if I manually power off the primary node (i.e
ces into Pacemaker was
> the issue.
>
Which is the very thing pointed out in my answer to your previous thread,
http://marc.info/?l=drbd-user&m=14904721736&w=2, the *proper
integration*. There is no compromise regarding this: it *ALL* has to be
managed by pacemaker, not just parts of it, and
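"Managed by pacemaker" meaning roughly this shape of configuration - a hedged, generic crm sketch with placeholder names, not the poster's actual resources:

primitive p_drbd_r0 ocf:linbit:drbd \
    params drbd_resource=r0 \
    op monitor interval=29s role=Master \
    op monitor interval=31s role=Slave
ms ms_drbd_r0 p_drbd_r0 \
    meta master-max=1 clone-max=2 notify=true
primitive p_fs_r0 ocf:heartbeat:Filesystem \
    params device=/dev/drbd0 directory=/data fstype=ext4
colocation c_fs_with_master inf: p_fs_r0 ms_drbd_r0:Master
order o_promote_before_fs inf: ms_drbd_r0:promote p_fs_r0:start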
On 6 Jun 2017 7:23 pm, "Andrea del Monaco" <
andrea.delmon...@clustervision.com> wrote:
Hello everybody,
I am currently facing some issues with the DRBD synchronization.
Here is the config file:
global {
usage-count no;
}
common {
startup {
wfc-timeout 15;
rove with
> 9.0.8. If you can, that would be the perfect time to test rc2 and check
> if it is solved.
>
> Regards, rck
On 14 Jun 2017 5:23 pm, "Roland Kammerer"
wrote:
On Wed, Jun 14, 2017 at 11:26:25AM +1000, Igor Cicimov wrote:
> Hi Roland,
>
> I noticed issues with 8.4.x not being able to compile on this very
> 4.4.67-1-pve kernel (it is working fine on the 4.4.8 I upgraded from). Not
Hi Gionatan,
On Thu, Jul 27, 2017 at 12:14 AM, Gionatan Danti wrote:
> Hi all,
> I have a possibly naive question about a dual primary setup involving LVM
> devices on top of DRBD.
>
> The main question is: using cLVM or native LVM locking, can I safely use a
> LV block device on the first node,
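For the cLVM variant the prerequisite is cluster-wide LVM locking on both nodes; a hedged sketch for the legacy clvmd-based stack (newer LVM versions use lvmlockd instead):

# /etc/lvm/lvm.conf on both nodes
locking_type = 3      # use clvmd for cluster-wide locking
use_lvmetad  = 0      # lvmetad is not cluster-aware

dlm and clvmd then have to run (typically as cloned Pacemaker resources) before any LV on the dual-primary DRBD device is activated.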
On 27 Jul 2017 6:11 pm, "Gionatan Danti" wrote:
On 27-07-2017 09:38 Gionatan Danti wrote:
>
> Thanks for your input. I also read your excellent suggestions on link
> Igor posted.
>
>
To clarify: the main reason I am asking about the feasibility of a
dual-primary DRBD setup with LVs on top o
On Thu, Jul 27, 2017 at 9:04 PM, Igor Cicimov <
ig...@encompasscorporation.com> wrote:
> Hey Gionatan,
>
> On Thu, Jul 27, 2017 at 7:04 PM, Gionatan Danti
> wrote:
>
>> On 27-07-2017 10:23 Igor Cicimov wrote:
>>
>>>
>>> When in cluster m
Hey Gionatan,
On Thu, Jul 27, 2017 at 7:04 PM, Gionatan Danti wrote:
> On 27-07-2017 10:23 Igor Cicimov wrote:
>
>>
>> When in cluster mode LVM will not use the local cache; that's part of the
>> configuration you need to do during setup.
>>
>>
> H
On 11 Sep 2017 4:20 pm, "Ravi Kiran Chilakapati" <
ravikiran.chilakap...@gmail.com> wrote:
Thank you for the response Roland.
I will start going through the source code. In the meantime, it will be
great if these preliminary questions can be answered.
Q: Is Protocol C a variant of any standard a
On 12 Oct 2017 5:10 am, "Gandalf Corvotempesta" <
gandalf.corvotempe...@gmail.com> wrote:
Previously I've asked about DRBDv9+ZFS.
Let's assume a more "standard" setup with DRBDv8 + mdadm.
What I would like to achieve is a simple redundant SAN. (anything
preconfigured for this ?)
Which is best,
Hi Marco,
On 23 Nov 2017 7:05 am, "Marco Marino" wrote:
Hi, I'm trying to configure drbd9 with openstack-cinder.
Currently my (simplified) infrastructure is composed of:
- 2 drbd9 nodes with 2 NICs on each node, one for the "replication" network
(without using a switch) and one for the "storage"
On 9 Feb 2018 7:30 pm, "Paul O'Rorke" wrote:
In a functional classic 3-node setup, what is the expected state of the
secondary node that is the lower resource with regard to the upper resource?
trk-kvm-01 (primary for both resource levels)
root@trk-kvm-01:~/scripts# drbd-overview
[...]
12:convir
On 10 Feb 2018 5:02 am, "Julien Escario" wrote:
Hello,
I'm just doing a lab about zpool as storage backend for DRBD (storing VM
images
with Proxmox).
Right now, it's pretty good once tuned and I've been able to achieve 500MB/s
write speed with just a little curiosity about concurrent write from
On Tue, Feb 20, 2018 at 9:55 PM, Julien Escario wrote:
> On 10/02/2018 at 04:39, Igor Cicimov wrote:
> > Did you tell it to?
> > https://docs.linbit.com/doc/users-guide-84/s-configure-io-error-behavior/
>
> Sorry for the late answer : I moved on performance tests wi
Hi,
On Fri, Mar 23, 2018 at 9:01 AM, Lozenkov Sergei
wrote:
> Hello.
> I have two Debian 9 servers with configured Corosync-Pacemaker-DRBD. All
> work well for month.
> After some servers issues (with reboots) I have situation that pacemaker
> could not switch drbd node with such errors:
>
> Ma
Hi Jaco,
On Mon, Jul 23, 2018 at 11:10 PM, Jaco van Niekerk
wrote:
> Hi
> I am using the following packages:
> pcs-0.9.162-5.el7.centos.1.x86_64
> kmod-drbd84-8.4.11-1.el7_5.elrepo.x86_64
> drbd84-utils-9.3.1-1.el7.elrepo.x86_64
> pacemaker-1.1.18-11.el7_5.3.x86_64
> corosync-2.4.3-2.el7_5.1.x86
On Fri, Jul 27, 2018 at 3:51 AM, Eric Robinson
wrote:
> > On Thu, Jul 26, 2018 at 04:32:17PM +0200, Veit Wahlich wrote:
> > > Hi Eric,
> > >
> > > Am Donnerstag, den 26.07.2018, 13:56 + schrieb Eric Robinson:
> > > > Would there really be a PV signature on the backing device? I didn't
> > > >
Hi,
On Fri, Jul 27, 2018 at 1:36 AM, Lars Ellenberg
wrote:
> On Mon, Jul 23, 2018 at 02:46:25PM +0200, Michal Michaláč wrote:
> > Hello,
> >
> >
> >
> > after replacing the backing device of DRBD, the content of the DRBD volume
> > (not only the backing disk) is invalid on the node with the inconsisten
On Sun, 29 Jul 2018 1:13 am Eric Robinson wrote:
>
>
>
>
> > -Original Message-
> > From: Eric Robinson
> > Sent: Saturday, July 28, 2018 7:39 AM
> > To: Lars Ellenberg ;
> drbd-user@lists.linbit.com
> > Subject: RE: [DRBD-user] drbd+lvm no bueno
> >
> > > > > Lars,
> > > > >
> > > > > I
On Tue, Nov 13, 2018 at 3:37 AM Yannis Milios
wrote:
> As far as I know, Proxmox does not need 3 nodes and/or a quorum, and the
>> LINSTOR controller does not care either.
>>
>
> Thanks for confirming this Robert.
> In my experience, Proxmox requires a minimum of 3 nodes when HA is
> enabled/required
On Wed, Nov 14, 2018 at 8:31 AM Bryan K. Walton
wrote:
>
> I have a two-node DRBD 9 resource configured in Primary-Secondary mode
> with automatic failover configured with Pacemaker.
>
> I know that I need to configure STONITH in Pacemaker and then set DRBD's
> fencing to "resource-and-stonith".
>
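A hedged sketch of the DRBD side of that, assuming the fence handler scripts shipped with drbd-utils for DRBD 9 (paths may vary by distribution):

net {
    fencing resource-and-stonith;
}
handlers {
    fence-peer   "/usr/lib/drbd/crm-fence-peer.9.sh";
    unfence-peer "/usr/lib/drbd/crm-unfence-peer.9.sh";
}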
On Fri, Dec 14, 2018 at 2:57 AM Lars Ellenberg
wrote:
> On Wed, Dec 12, 2018 at 10:16:09AM +0100, Harald Dunkel wrote:
> > Hi folks,
> >
> > using drbd, unmounting /data1 takes >50 seconds, even though the file
> > system (ext4, noatime, default) wasn't accessed for more than 2h.
> > umount ran wit
Hi,
According to https://docs.linbit.com/docs/users-guide-8.4/#s-resizing,
when resizing a DRBD 8.4 device online one side of the mirror needs to
be Secondary. I have a dual-primary setup with Pacemaker and GFS2 as the
file system and wonder if I need to demote one side to Secondary
before I run:
drbdadm
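A hedged sketch of the online-grow sequence as the 8.4 guide describes it, with placeholder resource/LV/mountpoint names - whether GFS2 really requires demoting one side first is exactly the open question here:

# grow the backing device on BOTH nodes first
lvextend -L +10G /dev/vg0/lv_r0
# per the guide, demote one side (unmount the GFS2 filesystem there first)
drbdadm secondary r0
# on the remaining Primary
drbdadm resize r0
gfs2_grow /mnt/gfs2
# promote the demoted side again
drbdadm primary r0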
Never mind, decided not to be lazy (Saturday and all that) and do it
properly. Done now.
On Sat, 15 Dec 2018 12:40 pm, Igor Cicimov wrote:
> Hi,
>
> According to https://docs.linbit.com/docs/users-guide-8.4/#s-resizing,
> when resizing DRBD 8.4 device online one side of the mirror needs to
> be
On Sun, 3 Feb 2019 3:42 am, Yannis Milios wrote:
> You have to specify which storage pool to use for the resource, otherwise
> it will default to 'DfltStorPool', which does not exist. So that would be
> something like this...
>
> $ linstor resource create pve3 vm-400-disk-1 --storage-pool
>
>
> It might be
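For completeness, a hedged sketch of defining such a pool and then using it - node, pool and VG names are placeholders, and the exact CLI may differ between linstor-client versions:

# create an LVM-backed storage pool on the node
linstor storage-pool create lvm pve3 drbdpool vg_drbd
# then create the resource against it
linstor resource create pve3 vm-400-disk-1 --storage-pool drbdpool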
Yannis,
On Sun, Feb 3, 2019 at 9:39 PM Yannis Milios
wrote:
>
> Are you saying this needs to be done for every single resource, potentially
>> hundreds of VMs with multiple disks attached? This sounds like a huge pita.
>>
>
> Yes. However, I did a test. I temporarily reduced redundancy level in
On Tue, Oct 20, 2020 at 4:17 AM Jeremy Faith wrote:
> Hi,
>
> drbd90 kernel module version: 9.0.22-2 (also 9.0.25-1 compiled from source)
> drbd90-utils: 9.12.2-1
> kernel: 3.10.0-1127.18.2.el7.x86_64
>
> 4 nodes
> n1 primary
> n2,n3,n4 all secondary
>
> If I run the following script then sometimes, a