Re: [DRBD-user] drbd+lvm no bueno

2018-07-26 Thread Lars Ellenberg
On Thu, Jul 26, 2018 at 05:51:40PM +, Eric Robinson wrote: > > But really, most of the time, you really want LVM *below* DRBD, and NOT above it. Even though it may "appear" to be convenient, it is usually not what you want, for various reasons, one of them being performance.
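For orientation, these are the two stacking orders being contrasted, together with the lvm.conf filter that usually accompanies the recommended one (a sketch, not quoted from this message; device names follow the /dev/md* and /dev/drbd* naming used later in the thread):

    # LVM *below* DRBD (recommended):  disks -> md RAID -> PV/VG/LV -> DRBD -> filesystem
    # LVM *above* DRBD (this thread):  disks -> md RAID -> DRBD -> PV/VG/LV -> filesystem
    #
    # For the "below" layout, LVM only ever needs to scan the RAID device:
    global_filter = [ "a|^/dev/md.*$|", "r|.*|" ]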

Re: [DRBD-user] Content of DRBD volume is invalid during sync after disk replace

2018-07-26 Thread Igor Cicimov
Hi, On Fri, Jul 27, 2018 at 1:36 AM, Lars Ellenberg wrote: > On Mon, Jul 23, 2018 at 02:46:25PM +0200, Michal Michaláč wrote: > > Hello, after replacing the backing device of DRBD, the content of the DRBD volume (not only the backing disk) is invalid on the node with

Re: [DRBD-user] drbd+lvm no bueno

2018-07-26 Thread Igor Cicimov
On Fri, Jul 27, 2018 at 3:51 AM, Eric Robinson wrote: > > On Thu, Jul 26, 2018 at 04:32:17PM +0200, Veit Wahlich wrote: > > > Hi Eric, > > > On Thursday, 26.07.2018, at 13:56 +, Eric Robinson wrote: > > > > Would there really be a PV signature on the backing device? I didn't

Re: [DRBD-user] Pacemaker unable to start DRBD

2018-07-26 Thread Lars Ellenberg
On Wed, Jul 25, 2018 at 12:02:38PM +0300, Roman Makhov wrote: > Hi Jaco, > Maybe it is because crm is a core component of Pacemaker (https://wiki.clusterlabs.org/wiki/File:Stack.png)? > "crmd: Short for Cluster Resource Management Daemon. Largely a message broker for the PEngine and LRM, it

Re: [DRBD-user] drbd+lvm no bueno

2018-07-26 Thread Veit Wahlich
On Thursday, 26.07.2018, at 17:31 +0200, Lars Ellenberg wrote: > > global_filter = [ "a|^/dev/md.*$|", "r/.*/" ] > > or even more strict: > > global_filter = [ "a|^/dev/md4$|", "r/.*/" ] > Uhm, no. Not if he wants DRBD to be his PV... then he needs to exclude (reject) the
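Picking up that correction: when DRBD itself is the PV (LVM on top of DRBD, as in Eric's setup), the filter has to go the other way around, accepting the DRBD device and rejecting its backing device so that the PV signature visible on /dev/md4 is ignored. A minimal lvm.conf sketch of that idea, with the device names used in this thread:

    # LVM on top of DRBD: only /dev/drbd* may be scanned as a PV;
    # the backing device /dev/md4 (and everything else) is rejected
    global_filter = [ "a|^/dev/drbd.*$|", "r|.*|" ]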

Re: [DRBD-user] Content of DRBD volume is invalid during sync after disk replace

2018-07-26 Thread Lars Ellenberg
On Mon, Jul 23, 2018 at 02:46:25PM +0200, Michal Michaláč wrote: > Hello, after replacing the backing device of DRBD, the content of the DRBD volume (not only the backing disk) is invalid on the node with the inconsistent backing device, until the sync finishes. I think the correct behaviour is
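For context on the disk-replacement step under discussion, a rough sketch of re-attaching a replaced backing device on the affected node (the generic DRBD 9 procedure assuming internal metadata; the resource name r0 is a placeholder and not necessarily what Michal ran):

    drbdadm create-md r0   # write fresh DRBD metadata onto the new backing device
    drbdadm attach r0      # attach it; the local disk starts out Inconsistent
    drbdadm status r0      # watch the resync toward UpToDate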

Re: [DRBD-user] drbd+lvm no bueno

2018-07-26 Thread Lars Ellenberg
On Thu, Jul 26, 2018 at 04:32:17PM +0200, Veit Wahlich wrote: > Hi Eric, > On Thursday, 26.07.2018, at 13:56 +, Eric Robinson wrote: > > Would there really be a PV signature on the backing device? I didn't turn md4 into a PV (did not run pvcreate /dev/md4), but I did turn the

Re: [DRBD-user] linstor-proxmox-2.8

2018-07-26 Thread Julien Escario
On 26/07/2018 at 15:28, Roland Kammerer wrote: > Dear Proxmox VE users, > we released the first version of the linstor-proxmox plugin. This integrates LINSTOR (the successor of DRBDManage) into Proxmox. > It contains all the features the

Re: [DRBD-user] drbd+lvm no bueno

2018-07-26 Thread Veit Wahlich
Hi Eric, On Thursday, 26.07.2018, at 13:56 +, Eric Robinson wrote: > Would there really be a PV signature on the backing device? I didn't turn md4 into a PV (did not run pvcreate /dev/md4), but I did turn the drbd disk into one (pvcreate /dev/drbd1). Both DRBD and mdraid put their
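A quick way to see whether a PV signature really is visible on the backing device (a sketch using standard tools; /dev/md4 and /dev/drbd1 are the devices named in this thread): because DRBD with internal metadata keeps its own metadata at the end of the backing device, the first sectors of /dev/drbd1, where pvcreate writes the PV label, map straight onto the first sectors of /dev/md4, so both devices carry the same signature unless LVM is told to ignore one of them.

    blkid -p /dev/md4 /dev/drbd1   # low-level probe for LVM2_member signatures
    pvs -o pv_name,vg_name         # which block device does LVM currently use as the PV?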

Re: [DRBD-user] drbd+lvm no bueno

2018-07-26 Thread Eric Robinson
> -----Original Message----- > From: drbd-user-boun...@lists.linbit.com [mailto:drbd-user-boun...@lists.linbit.com] On Behalf Of Robert Altnoeder > Sent: Thursday, July 26, 2018 5:12 AM > To: drbd-user@lists.linbit.com > Subject: Re: [DRBD-user] drbd+lvm no bueno > On 07/26/2018 08:50 AM,

Re: [DRBD-user] drbd+lvm no bueno

2018-07-26 Thread Eric Robinson
Thank you, I will check that out. From: Jaco van Niekerk [mailto:j...@desktop.co.za] Sent: Thursday, July 26, 2018 3:34 AM To: Eric Robinson; drbd-user@lists.linbit.com Subject: Re: [DRBD-user] drbd+lvm no bueno Hi, check your LVM configuration:

[DRBD-user] linstor-proxmox-2.8

2018-07-26 Thread Roland Kammerer
Dear Proxmox VE users, we released the first version of the linstor-proxmox plugin. This integrates LINSTOR (the successor of DRBDManage) into Proxmox. It contains all the features the drbdmanage-proxmox plugin had (i.e., creating/deleting volumes with a configurable redundancy, VM
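Since the plugin talks to a LINSTOR controller rather than to drbdmanage, a quick sanity check of the LINSTOR cluster from a node with linstor-client installed might look like this before pointing Proxmox at it (a sketch, assuming the controller is already set up and reachable):

    linstor node list           # all cluster nodes registered and online?
    linstor storage-pool list   # storage pools the plugin can place volumes in
    linstor resource list       # resources/volumes LINSTOR currently manages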

Re: [DRBD-user] drbd+lvm no bueno

2018-07-26 Thread Robert Altnoeder
On 07/26/2018 08:50 AM, Eric Robinson wrote: > > Failed Actions: > > * p_lv_on_drbd1_start_0 on ha16b 'not running' (7): call=68, status=complete, exitreason='LVM: vg_on_drbd1 did not activate correctly', last-rc-change='Wed Jul 25 22:36:37 2018', queued=0ms, exec=401ms > > The
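For reference, the exitreason string quoted above is the kind produced by an LVM resource agent; a minimal crm-shell sketch of such a primitive (the names p_lv_on_drbd1 and vg_on_drbd1 come from the thread, but the exact agent and options in Eric's cluster are not shown in this excerpt):

    primitive p_lv_on_drbd1 ocf:heartbeat:LVM \
        params volgrpname="vg_on_drbd1" exclusive="true" \
        op monitor interval="30s" timeout="30s"

Such a primitive can only succeed once the underlying DRBD device is Primary and LVM is allowed to see it as a PV, which is exactly the filter question discussed elsewhere in this thread.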

Re: [DRBD-user] drbd+lvm no bueno

2018-07-26 Thread Jaco van Niekerk
Hi, check your LVM configuration: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/configuring_the_red_hat_high_availability_add-on_with_pacemaker/s1-exclusiveactive-haaa Regards, Jaco van Niekerk, Office: 011 608 2663, E-mail: j...@desktop.co.za
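The Red Hat document linked above is about exclusive activation: the clustered VG is kept out of the host's automatic activation so that only the cluster manager activates it. A minimal lvm.conf sketch of that idea (the VG names here are placeholders, not taken from the thread; after changing volume_list the initramfs normally has to be rebuilt so the setting also applies at boot):

    # activation section of /etc/lvm/lvm.conf:
    # activate only the local VGs at boot; the clustered VG (e.g. vg_on_drbd1)
    # is deliberately left out and gets activated by Pacemaker instead
    volume_list = [ "vg_root", "vg_swap" ]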

[DRBD-user] drbd+lvm no bueno

2018-07-26 Thread Eric Robinson
Using drbd 9.0.14, I am having trouble getting resources to move between nodes. I get... Failed Actions: * p_lv_on_drbd1_start_0 on ha16b 'not running' (7): call=68, status=complete, exitreason='LVM: vg_on_drbd1 did not activate correctly', last-rc-change='Wed Jul 25 22:36:37 2018',
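As a first check when a VG on top of DRBD refuses to activate during failover, something along these lines on the target node usually narrows it down (a sketch; the VG name is the one quoted above, everything else is generic):

    drbdadm status               # is the DRBD resource Primary and UpToDate on this node?
    pvs                          # does LVM report /dev/drbd1 as the PV, or the backing device?
    vgchange -ay vg_on_drbd1     # try the activation by hand and read the exact error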

Re: [DRBD-user] [DRBD-9.0.15-0rc1] Resource "stuck" during live migration

2018-07-26 Thread Yannis Milios
I've switched pve1 and pve2 to lvm-thin recently just for testing and left pve3 with zfs as the storage back end. However, I really miss some cool zfs features compared to lvm-thin, like on-the-fly compression of zero blocks and its fast, low-cost, point-in-time snapshots... What I don't miss, though, is

Re: [DRBD-user] [DRBD-9.0.15-0rc1] Resource "stuck" during live migration

2018-07-26 Thread Roland Kammerer
On Wed, Jul 25, 2018 at 08:49:02PM +0100, Yannis Milios wrote: > Hello, > Currently testing 9.0.15-0rc1 on a 3-node PVE cluster. > Pkg versions: > cat /proc/drbd > version: 9.0.15-0rc1 (api:2/proto:86-114) > GIT-hash: fc844fc366933c60f7303694ca1dea734dcb39bb build by