Re: [DRBD-user] drbd+lvm no bueno

2018-07-26 Thread Lars Ellenberg
On Thu, Jul 26, 2018 at 05:51:40PM +, Eric Robinson wrote:
> > But really, most of the time, you really want LVM *below* DRBD, and NOT
> > above it. Even though it may "appear" to be convenient, it is usually
> > not what you want, for various reasons, one of them being performance.
> 
> Lars,
> 
> I put MySQL databases on the drbd volume. To back them up, I pause
> them and do LVM snapshots (then rsync the snapshots to an archive
> server). How could I do that with LVM below drbd, since what I want is
> a snapshot of the filesystem where MySQL lives?

You just snapshot below DRBD, after quiescing the mysql db.

DRBD is transparent; the "garbage" (to the filesystem) of the trailing
DRBD meta data is of no concern.
You may have to "mount -t ext4" (or xfs or whatever) explicitly,
if your mount and libblkid decide that this is a "drbd" type device
and cannot be mounted. They are just trying to help, which is good,
but in that case they get it wrong.
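For illustration, a minimal sketch of that backup flow, assuming internal DRBD
meta data and hypothetical names (vg0/lv_mysql is the LV *below* DRBD, i.e.
DRBD's backing device, with ext4 inside DRBD):

  # 1. Quiesce MySQL. Note: FLUSH TABLES WITH READ LOCK only lasts as long as
  #    the session, so in practice keep one client session open across the
  #    snapshot (or use a tool like mylvmbackup); shown linearly here.
  mysql -e "FLUSH TABLES WITH READ LOCK;"

  # 2. Snapshot the backing LV. The snapshot ends with DRBD meta data, which
  #    the filesystem never sees, so it is harmless trailing "garbage".
  lvcreate --snapshot --size 5G --name lv_mysql_snap /dev/vg0/lv_mysql

  # 3. Release the lock.
  mysql -e "UNLOCK TABLES;"

  # 4. Mount the snapshot read-only; force the fs type in case blkid says "drbd".
  mount -t ext4 -o ro /dev/vg0/lv_mysql_snap /mnt/mysql-backup

  # 5. rsync to the archive server, then clean up.
  rsync -a /mnt/mysql-backup/ archive:/backups/mysql/
  umount /mnt/mysql-backup
  lvremove -f /dev/vg0/lv_mysql_snap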

> How severely does putting LVM on top of drbd affect performance?  

It's not the "putting LVM on top of drbd" part.
it's what most people think when doing that:
use a huge single DRBD as PV, and put loads of unrelated LVS
inside of that.

Which then all share the single DRBD "activity log" of the single DRBD
volume, which then becomes a bottleneck for IOPS.

-- 
: Lars Ellenberg
: LINBIT | Keeping the Digital World Running
: DRBD -- Heartbeat -- Corosync -- Pacemaker

DRBD® and LINBIT® are registered trademarks of LINBIT
___
drbd-user mailing list
drbd-user@lists.linbit.com
http://lists.linbit.com/mailman/listinfo/drbd-user


Re: [DRBD-user] Content of DRBD volume is invalid during sync after disk replace

2018-07-26 Thread Igor Cicimov
Hi,

On Fri, Jul 27, 2018 at 1:36 AM, Lars Ellenberg 
wrote:

> On Mon, Jul 23, 2018 at 02:46:25PM +0200, Michal Michaláč wrote:
> >  Hello,
> >
> >
> >
> > after replacing the backing device of DRBD, the content of the DRBD volume
> > (not only the backing disk) is invalid on the node with the inconsistent
> > backing device until the sync finishes. I think the correct behaviour is to
> > read data from the peer's (consistent) backing device if a process running
> > on the node with the inconsistent backing device wants to read an
> > unsynchronized part of the DRBD volume.
>
> ...
>
> > If I skip create-md (step 4), the situation is even worse - after attaching
> > the disk, DRBD says the volume is synchronized(!):
> >
> > log: Began resync as SyncTarget (will sync 0 KB [0 bits set])
> >
> > but after verification (drbdadm verify test), there are many out-of-sync
> > sectors.
> >
> > After a disconnect/connect of volume test, a resync is not started(!):
> >
> > log: No resync, but 3840 bits in bitmap!
> >
> > If I (on new DRBD volume) just disconnect -> write changes to primary ->
> > connect, sync works correctly.
>
> > Versions (on both nodes are identical):
> > # cat /proc/drbd
> > version: 9.0.14-1 (api:2/proto:86-113)
> > GIT-hash: 62f906cf44ef02a30ce0c148fec223b40c51c533 build by root@n2,
> > 2018-07-12 13:18:02
> >
> > Transports (api:16): tcp (9.0.14-1)
> >
> > # uname -a
> > Linux n2 4.15.17-1-pve #1 SMP PVE 4.15.17-9 (Wed, 9 May 2018 13:31:43
> +0200)
> > x86_64 GNU/Linux
> >
> > # lvm version
> >   LVM version: 2.02.168(2) (2016-11-30)
> >   Library version: 1.02.137 (2016-11-30)
> >   Driver version:  4.37.0
>
> > Is it a bug or am I doing something wrong?
>
> Thanks for the detailed and useful report,
> definitely a serious and embarrassing bug,
> now already fixed internally.
> The fix will go into 9.0.15 final.
> We are in the process of making sure
> we have covered all variants and loose ends of this.
>

Is this going to get backported to 8.4 as well?
___
drbd-user mailing list
drbd-user@lists.linbit.com
http://lists.linbit.com/mailman/listinfo/drbd-user


Re: [DRBD-user] drbd+lvm no bueno

2018-07-26 Thread Igor Cicimov
On Fri, Jul 27, 2018 at 3:51 AM, Eric Robinson 
wrote:

> > On Thu, Jul 26, 2018 at 04:32:17PM +0200, Veit Wahlich wrote:
> > > Hi Eric,
> > >
> > > On Thursday, 26.07.2018, at 13:56 +, Eric Robinson wrote:
> > > > Would there really be a PV signature on the backing device? I didn't
> > > > turn md4 into a PV (did not run pvcreate /dev/md4), but I did turn
> > > > the drbd disk into one (pvcreate /dev/drbd1).
> >
> > Yes (please view in a fixed-width font):
> >
> > | PV signature | VG extent pool ... |
> > | drbd1 ............................ | drbd meta data |
> > | md4 .............................................. | md meta data |
> > | component | drives | ... | ... | of | md4 | ... | ... |
> >
> > > both DRBD and mdraid put their metadata at the end of the block
> > > device, thus depending on LVM configuration, both mdraid backing
> > > devices as well as DRBD minors backing VM disks with direct-on-disk PVs
> > > might be detected as PVs.
> > >
> > > It is very advisable to set lvm.conf's global_filter to allow only the
> > > desired devices as PVs by matching a strict regexp, and to ignore all
> > > other devices, e.g.:
> > >
> > >  global_filter = [ "a|^/dev/md.*$|", "r/.*/" ]
> > >
> > > or even more strict:
> > >
> > >  global_filter = [ "a|^/dev/md4$|", "r/.*/" ]
> >
> > Uhm, no.
> > Not if he wants DRBD to be his PV...
> > then he needs to exclude (reject) the backend, and only include (accept)
> > the DRBD.
> >
> > But yes, I very much recommend putting an explicit whitelist of the
> > to-be-used PVs into the global filter, and rejecting anything else.
> >
> > Note that these are (by default unanchored) regexes, NOT glob patterns.
> > (The above examples get that one right, though r/./ would be enough...
> > but I have seen people get it wrong too many times, so I thought I'd
> > mention it here again.)
> >
> > > After editing the configuration, you might want to regenerate your
> > > distro's initrd/initramfs to reflect the changes directly at startup.
> >
> > Yes, don't forget that step ^^^ that one is important as well.
> >
> > But really, most of the time, you really want LVM *below* DRBD, and NOT
> > above it. Even though it may "appear" to be convenient, it is usually
> > not what you want, for various reasons, one of them being performance.
>
> Lars,
>
> I put MySQL databases on the drbd volume. To back them up, I pause them
> and do LVM snapshots (then rsync the snapshots to an archive server). How
> could I do that with LVM below drbd, since what I want is a snapshot of the
> filesystem where MySQL lives?
>
> How severely does putting LVM on top of drbd affect performance?
>
> >
> > Cheers,
> >
> > --
> > : Lars Ellenberg


It depends. I would say it is not unusual to end up with a setup where DRBD
is sandwiched between a top and a bottom LVM, due to requirements or
convenience. For example, in the case of master-master with GFS2:

iscsi,raid -> lvm -> drbd -> clvm -> gfs2

Apart from the clustered LVM on top of DRBD (which is the setup Red Hat
recommends), you also get the benefit of easily extending the DRBD device(s)
thanks to the underlying LVM.
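As a rough sketch of that last point (hypothetical names and sizes; the
lvextend below DRBD has to happen on every node backing the resource):

  # on each node: grow the LV below DRBD
  lvextend -L +50G /dev/vg_backing/lv_drbd0
  # on the Primary: let DRBD adopt the larger lower-level device
  drbdadm resize r0
  # then grow the stack on top: PV on the DRBD device, clustered LV, GFS2
  pvresize /dev/drbd0
  lvextend -L +50G /dev/vg_cluster/lv_gfs2
  gfs2_grow /path/to/gfs2/mountpoint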
___
drbd-user mailing list
drbd-user@lists.linbit.com
http://lists.linbit.com/mailman/listinfo/drbd-user


Re: [DRBD-user] Pacemaker unable to start DRBD

2018-07-26 Thread Lars Ellenberg
On Wed, Jul 25, 2018 at 12:02:38PM +0300, Roman Makhov wrote:
> Hi Jaco,
> 
> Maybe it is because crm is a core component of Pacemaker (
> https://wiki.clusterlabs.org/wiki/File:Stack.png)?
> "crmd Short for Cluster Resource Management Daemon. Largely a message
> broker for the PEngine and LRM, it also elects a leader to co-ordinate the
> activities (including starting/stopping resources) of the cluster."

:-)

crm here is the "crm shell",
which is why the mentioned package name is crmsh.
It is the "original" and well-established CLI for configuring pacemaker
(without resorting to xml snippets directly).

crmd there is the cluster resource manager daemon, aka pacemaker itself.

(be careful when assuming you understand more than the other guy)

> >> The following command doesn't work:
> >> pcs resource create my_iscsidata ocf:linbit:drbd
> >> drbd_resource=iscsidata op monitor interval=10s
> >> pcs resource master MyISCSIClone my_iscsidata master-max=1
> >> master-node-max=1 clone-max=2 clone-node-max=1 notify=true

BTW, pcs is also a cli for configuring pacemaker, but came later.
I feel that crm is much more convenient for interactive use,
but that is probably some "vim vs emacs" kind of discussion.

You really want the "pcs resource create x ..."
and the "pcs resource master y x ..."
to be committed together, without the "pcs resource create" committing the
"primitive" to the configuration on its own; otherwise pacemaker will attempt
to start that resource agent as a normal, primitive resource,
and the resource agent will then complain like below:

> >> I receive the following on pcs status:
> >> * my_iscsidata_monitor_0 on node2.san.localhost 'not configured' (6):
> >> call=9, status=complete, exitreason='meta parameter misconfigured, expected
> >> clone-max -le 2, but found unset.',

That's why the example you should have been following most likely
prepares the configuration in what we call a "shadow cib" first
(cib: cluster information base)
and then commits (or "pushes" in pcs speak) that shadow cib
with both primitive and master definitions, possible constraints
and other dependencies to the "live" cib.

As in:

  pcs cluster cib tmp_cfg
  pcs -f tmp_cfg resource create ...
  pcs -f tmp_cfg resource master ...
  pcs cluster cib-push tmp_cfg
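A fuller sketch of that sequence, using the resource from this thread (the
create/master lines are taken from the original post; everything else just
illustrates the shadow-CIB workflow and is not a verified configuration):

  pcs cluster cib tmp_cfg
  # add the DRBD primitive and its master/slave wrapper to the scratch CIB only
  pcs -f tmp_cfg resource create my_iscsidata ocf:linbit:drbd \
      drbd_resource=iscsidata op monitor interval=10s
  pcs -f tmp_cfg resource master MyISCSIClone my_iscsidata \
      master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 notify=true
  # commit primitive and master definition to the live CIB in one step
  pcs cluster cib-push tmp_cfg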

If you need to get things done,
don't take unknown shortcuts, because, as they say,
the unknown shortcut is the longest route to the destination.

Though you may learn a lot along the way,
so if you are in a position where the journey is the reward,
absolutely go for it ;-)

-- 
: Lars Ellenberg
: LINBIT | Keeping the Digital World Running
: DRBD -- Heartbeat -- Corosync -- Pacemaker

DRBD® and LINBIT® are registered trademarks of LINBIT
__
please don't Cc me, but send to list -- I'm subscribed
___
drbd-user mailing list
drbd-user@lists.linbit.com
http://lists.linbit.com/mailman/listinfo/drbd-user


Re: [DRBD-user] drbd+lvm no bueno

2018-07-26 Thread Veit Wahlich
On Thursday, 26.07.2018, at 17:31 +0200, Lars Ellenberg wrote:
> >  global_filter = [ "a|^/dev/md.*$|", "r/.*/" ]
> > 
> > or even more strict: 
> > 
> >  global_filter = [ "a|^/dev/md4$|", "r/.*/" ]
> 
> Uhm, no.
> Not if he wants DRBD to be his PV...
> then he needs to exclude (reject) the backend,
> and only include (accept) the DRBD.

Ah yes, sorry. In my mind Eric used LVM below DRBD, just like you
recommended:

> But really, most of the time, you really want LVM *below* DRBD,

Regards,
// Veit

___
drbd-user mailing list
drbd-user@lists.linbit.com
http://lists.linbit.com/mailman/listinfo/drbd-user


Re: [DRBD-user] Content of DRBD volume is invalid during sync after disk replace

2018-07-26 Thread Lars Ellenberg
On Mon, Jul 23, 2018 at 02:46:25PM +0200, Michal Michaláč wrote:
>  Hello,
> 
>  
> 
> after replacing the backing device of DRBD, the content of the DRBD volume
> (not only the backing disk) is invalid on the node with the inconsistent
> backing device until the sync finishes. I think the correct behaviour is to
> read data from the peer's (consistent) backing device if a process running on
> the node with the inconsistent backing device wants to read an unsynchronized
> part of the DRBD volume.

...

> If I skip create-md (step 4), the situation is even worse - after attaching
> the disk, DRBD says the volume is synchronized(!):
> 
> log: Began resync as SyncTarget (will sync 0 KB [0 bits set])
> 
> but after verification (drbdadm verify test), there are many out-of-sync
> sectors. 
> 
> After a disconnect/connect of volume test, a resync is not started(!):
> 
> log: No resync, but 3840 bits in bitmap!
> 
> If I (on new DRBD volume) just disconnect -> write changes to primary ->
> connect, sync works correctly.

> Versions (on both nodes are identical): 
> # cat /proc/drbd
> version: 9.0.14-1 (api:2/proto:86-113)
> GIT-hash: 62f906cf44ef02a30ce0c148fec223b40c51c533 build by root@n2,
> 2018-07-12 13:18:02
> 
> Transports (api:16): tcp (9.0.14-1)
> 
> # uname -a
> Linux n2 4.15.17-1-pve #1 SMP PVE 4.15.17-9 (Wed, 9 May 2018 13:31:43 +0200)
> x86_64 GNU/Linux
> 
> # lvm version
>   LVM version: 2.02.168(2) (2016-11-30)
>   Library version: 1.02.137 (2016-11-30)
>   Driver version:  4.37.0

> Is it a bug or am I doing something wrong?

Thanks for the detailed and useful report,
definitely a serious and embarrassing bug,
now already fixed internally.
The fix will go into 9.0.15 final.
We are in the process of making sure
we have covered all variants and loose ends of this.

-- 
: Lars Ellenberg
: LINBIT | Keeping the Digital World Running
: DRBD -- Heartbeat -- Corosync -- Pacemaker

DRBD® and LINBIT® are registered trademarks of LINBIT
__
please don't Cc me, but send to list -- I'm subscribed
___
drbd-user mailing list
drbd-user@lists.linbit.com
http://lists.linbit.com/mailman/listinfo/drbd-user


Re: [DRBD-user] drbd+lvm no bueno

2018-07-26 Thread Lars Ellenberg
On Thu, Jul 26, 2018 at 04:32:17PM +0200, Veit Wahlich wrote:
> Hi Eric,
> 
> On Thursday, 26.07.2018, at 13:56 +, Eric Robinson wrote:
> > Would there really be a PV signature on the backing device? I didn't
> > turn md4 into a PV (did not run pvcreate /dev/md4), but I did turn
> > the drbd disk into one (pvcreate /dev/drbd1).

Yes (please view in a fixed-width font):

| PV signature | VG extent pool ... |
| drbd1 ............................ | drbd meta data |
| md4 .............................................. | md meta data |
| component | drives | ... | ... | of | md4 | ... | ... |

> both DRBD and mdraid put their metadata at the end of the block device,
> thus depending on LVM configuration, both mdraid backing devices as well
> as DRBD minors backing VM disks with direct-on-disk PVs might be detected
> as PVs.
> 
> It is very advisable to set lvm.conf's global_filter to allow only the
> desired devices as PVs by matching a strict regexp, and to ignore all
> other devices, e.g.:
> 
>  global_filter = [ "a|^/dev/md.*$|", "r/.*/" ]
> 
> or even more strict: 
> 
>  global_filter = [ "a|^/dev/md4$|", "r/.*/" ]

Uhm, no.
Not if he wants DRBD to be his PV...
then he needs to exclude (reject) the backend,
and only include (accept) the DRBD.

But yes, I very much recommend putting an explicit whitelist
of the to-be-used PVs into the global filter, and rejecting anything else.

Note that these are (by default unanchored) regexes, NOT glob patterns.
(The above examples get that one right, though r/./ would be enough...
but I have seen people get it wrong too many times, so I thought I'd
mention it here again.)
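For the stack in this thread (md4 as the DRBD backend, /dev/drbd1 as the
intended PV), such a whitelist might look roughly like this -- a sketch,
assuming drbd1 is the only PV on the system:

  # lvm.conf: accept only the DRBD device as a PV, reject everything else
  devices {
      global_filter = [ "a|^/dev/drbd1$|", "r|.*|" ]
  }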

> After editing the configuration, you might want to regenerate your
> distro's initrd/initramfs to reflect the changes directly at startup.

Yes, don't forget that step ^^^ that one is important as well.

But really, most of the time, you really want LVM *below* DRBD,
and NOT above it. Even though it may "appear" to be convenient,
it is usually not what you want, for various reasons,
one of them being performance.

Cheers,

-- 
: Lars Ellenberg
: LINBIT | Keeping the Digital World Running
: DRBD -- Heartbeat -- Corosync -- Pacemaker

DRBD® and LINBIT® are registered trademarks of LINBIT
__
please don't Cc me, but send to list -- I'm subscribed
___
drbd-user mailing list
drbd-user@lists.linbit.com
http://lists.linbit.com/mailman/listinfo/drbd-user


Re: [DRBD-user] linstor-proxmox-2.8

2018-07-26 Thread Julien Escario

On 26/07/2018 at 15:28, Roland Kammerer wrote:
> Dear Proxmox VE users,
> 
> we released the first version of the linstor-proxmox plugin. This 
> integrates LINSTOR (the successor of DRBDManage) into Proxmox.
> 
> It contains all the features the drbdmanage-proxmox plugin had (i.e.,
> creating/deleting volumes with a configurable redundancy, VM
> live-migration, resizing, and snapshots (limited by the design differences
> between DRBD and Proxmox)).

Well, that's great.

> The missing feature is proper size reporting. Reporting meaningful values
> for thinly allocated storage is a TODO in LINSTOR itself, so currently the
> plugin reports 8 out of 10TB as free storage. Always. Besides that it
> should be complete.

Huh? Not certain I understand: 8 out of 10 (aka 80%?) or... why 10 TB?

> The brave find the source code here (be careful, it is Perl, your eyes may
> start bleeding ;-) ): https://github.com/LINBIT/linstor-proxmox

Wow, you're going a bit too far, Roland ;-) Perl's not the horror you describe,
but just an undead language. This will change with Perl 7, you'll see!
(sarcasm inside)

Julien
___
drbd-user mailing list
drbd-user@lists.linbit.com
http://lists.linbit.com/mailman/listinfo/drbd-user


Re: [DRBD-user] drbd+lvm no bueno

2018-07-26 Thread Veit Wahlich
Hi Eric,

On Thursday, 26.07.2018, at 13:56 +, Eric Robinson wrote:
> Would there really be a PV signature on the backing device? I didn't turn md4 
> into a PV (did not run pvcreate /dev/md4), but I did turn the drbd disk into 
> one (pvcreate /dev/drbd1).

both DRBD and mdraid put their metadata at the end of the block device,
thus depending on LVM configuration, both mdraid backing devices as well
as DRBD minors backing VM disks with direct-on-disk PVs might be detected
as PVs.

It is very advisable to set lvm.conf's global_filter to allow only the
desired devices as PVs by matching a strict regexp, and to ignore all
other devices, e.g.:

 global_filter = [ "a|^/dev/md.*$|", "r/.*/" ]

or even more strict: 

 global_filter = [ "a|^/dev/md4$|", "r/.*/" ]

After editing the configuration, you might want to regenerate your
distro's initrd/initramfs to reflect the changes directly at startup.
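For reference, that regeneration step is typically one of the following,
depending on the distribution:

  update-initramfs -u -k all   # Debian/Ubuntu
  dracut -f                    # RHEL/CentOS/Fedora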

Best regards,
// Veit

___
drbd-user mailing list
drbd-user@lists.linbit.com
http://lists.linbit.com/mailman/listinfo/drbd-user


Re: [DRBD-user] drbd+lvm no bueno

2018-07-26 Thread Eric Robinson


> -Original Message-
> From: drbd-user-boun...@lists.linbit.com [mailto:drbd-user-
> boun...@lists.linbit.com] On Behalf Of Robert Altnoeder
> Sent: Thursday, July 26, 2018 5:12 AM
> To: drbd-user@lists.linbit.com
> Subject: Re: [DRBD-user] drbd+lvm no bueno
> 
> On 07/26/2018 08:50 AM, Eric Robinson wrote:
> >
> >
> > Failed Actions:
> >
> > * p_lv_on_drbd1_start_0 on ha16b 'not running' (7): call=68,
> > status=complete, exitreason='LVM: vg_on_drbd1 did not activate
> > correctly',
> >
> >     last-rc-change='Wed Jul 25 22:36:37 2018', queued=0ms, exec=401ms
> >
> >
> >
> > The storage stack is:
> >
> >
> >
> > md4 -> drbd -> lvm -> filesystem
> >
> 
> This is most probably an LVM configuration error. Any LVM volume group on
> top of DRBD must be deactivated/stopped whenever DRBD is Secondary and
> must be started whenever DRBD is Primary, and LVM must be prevented from
> finding and using the storage device that DRBD uses as its backend, which it
> would normally do, because it can see the LVM physical volume signature not
> only on the DRBD device, but also on the backing device that DRBD uses.
> 

Would there really be a PV signature on the backing device? I didn't turn md4 
into a PV (did not run pvcreate /dev/md4), but I did turn the drbd disk into 
one (pvcreate /dev/drbd1). 

-Eric 
___
drbd-user mailing list
drbd-user@lists.linbit.com
http://lists.linbit.com/mailman/listinfo/drbd-user


Re: [DRBD-user] drbd+lvm no bueno

2018-07-26 Thread Eric Robinson
Thank you, I will check that out.

From: Jaco van Niekerk [mailto:j...@desktop.co.za]
Sent: Thursday, July 26, 2018 3:34 AM
To: Eric Robinson ; drbd-user@lists.linbit.com
Subject: Re: [DRBD-user] drbd+lvm no bueno


Hi

Check your LVM configuration:

https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/configuring_the_red_hat_high_availability_add-on_with_pacemaker/s1-exclusiveactive-haaa

Regards

Jaco van Niekerk

Office:   011 608 2663  E-mail:  j...@desktop.co.za
On 26/07/2018 11:35, Eric Robinson wrote:
Using drbd 9.0.14, I am having trouble getting resources to move between
nodes. I get...

Failed Actions:
* p_lv_on_drbd1_start_0 on ha16b 'not running' (7): call=68, status=complete, 
exitreason='LVM: vg_on_drbd1 did not activate correctly',
last-rc-change='Wed Jul 25 22:36:37 2018', queued=0ms, exec=401ms

The storage stack is:

md4 -> drbd -> lvm -> filesystem

--Eric

___
drbd-user mailing list
drbd-user@lists.linbit.com
http://lists.linbit.com/mailman/listinfo/drbd-user


[DRBD-user] linstor-proxmox-2.8

2018-07-26 Thread Roland Kammerer
Dear Proxmox VE users,

we released the first version of the linstor-proxmox plugin. This
integrates LINSTOR (the successor of DRBDManage) into Proxmox.

It contains all the features the drbdmanage-proxmox plugin had (i.e.,
creating/deleting volumes with a configurable redundancy, VM
live-migration, resizing, and snapshots (limited by the design
differences between DRBD and Proxmox)).

As it continues the drbdmanage-proxmox plugin and keeps its history in
the git repo, the release "had to be" somewhere in the 2.x range. As it
felt pretty good and stable during development, I chose 2.8 for these
reasons:

- pretty close to 3.0
- there will be bugs, so some space between 2.8 and 3.0
- there is one feature missing

The missing feature is proper size reporting. Reporting meaningful
values for thinly allocated storage is a TODO in LINSTOR itself, so
currently the plugin reports 8 out of 10TB as free storage. Always.
Besides that it should be complete.

Please see the updated UG:
https://docs.linbit.com/docs/users-guide-9.0/#ch-proxmox-linstor

As described there, you find the new software components in the free
Proxmox debian repository provided by LINBIT.

Tarballs can be found here:
https://www.linbit.com/en/drbd-community/drbd-download/

The brave find the source code here (be careful, it is Perl, your eyes
may start bleeding ;-) ):
https://github.com/LINBIT/linstor-proxmox

There is also a blog post supporting the information provided in the UG:
https://www.linbit.com/en/how-linstor-proxmox-ve-volumes/

Please report bugs on the drbd-user ML.

Regards, rck
___
drbd-user mailing list
drbd-user@lists.linbit.com
http://lists.linbit.com/mailman/listinfo/drbd-user


Re: [DRBD-user] drbd+lvm no bueno

2018-07-26 Thread Robert Altnoeder
On 07/26/2018 08:50 AM, Eric Robinson wrote:
>
>
> Failed Actions:
>
> * p_lv_on_drbd1_start_0 on ha16b 'not running' (7): call=68,
> status=complete, exitreason='LVM: vg_on_drbd1 did not activate correctly',
>
>     last-rc-change='Wed Jul 25 22:36:37 2018', queued=0ms, exec=401ms
>
>  
>
> The storage stack is:
>
>  
>
> md4 -> drbd -> lvm -> filesystem
>

This is most probably an LVM configuration error. Any LVM volume group
on top of DRBD must be deactivated/stopped whenever DRBD is Secondary
and must be started whenever DRBD is Primary, and LVM must be prevented
from finding and using the storage device that DRBD uses as its backend,
which it would normally do, because it can see the LVM physical volume
signature not only on the DRBD device, but also on the backing device
that DRBD uses.
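As a rough sketch of what that means in Pacemaker terms (pcs syntax for
illustration; the resource and VG names are from this thread, everything else
is an assumption, not a verified configuration):

  pcs cluster cib tmp_cfg
  # DRBD primitive plus master/slave wrapper for the resource behind drbd1
  pcs -f tmp_cfg resource create p_drbd1 ocf:linbit:drbd \
      drbd_resource=drbd1 op monitor interval=10s
  pcs -f tmp_cfg resource master ms_drbd1 p_drbd1 \
      master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 notify=true
  # activate vg_on_drbd1 only where DRBD is Primary, and only after promotion
  pcs -f tmp_cfg resource create p_lv_on_drbd1 ocf:heartbeat:LVM \
      volgrpname=vg_on_drbd1
  pcs -f tmp_cfg constraint colocation add p_lv_on_drbd1 with master ms_drbd1 INFINITY
  pcs -f tmp_cfg constraint order promote ms_drbd1 then start p_lv_on_drbd1
  pcs cluster cib-push tmp_cfg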

br,
Robert

___
drbd-user mailing list
drbd-user@lists.linbit.com
http://lists.linbit.com/mailman/listinfo/drbd-user


Re: [DRBD-user] drbd+lvm no bueno

2018-07-26 Thread Jaco van Niekerk
Hi

Check your LVM configuration:

https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/configuring_the_red_hat_high_availability_add-on_with_pacemaker/s1-exclusiveactive-haaa

Regards

Jaco van Niekerk

Office:   011 608 2663  E-mail:  j...@desktop.co.za
On 26/07/2018 11:35, Eric Robinson wrote:
Using drbd 9.0.14, I am having trouble getting resources to move between
nodes. I get...

Failed Actions:
* p_lv_on_drbd1_start_0 on ha16b 'not running' (7): call=68, status=complete, 
exitreason='LVM: vg_on_drbd1 did not activate correctly',
last-rc-change='Wed Jul 25 22:36:37 2018', queued=0ms, exec=401ms

The storage stack is:

md4 -> drbd -> lvm -> filesystem

--Eric

___
drbd-user mailing list
drbd-user@lists.linbit.com
http://lists.linbit.com/mailman/listinfo/drbd-user


[DRBD-user] drbd+lvm no bueno

2018-07-26 Thread Eric Robinson
Using drbd 9.0.14, I am having trouble getting resources to move between
nodes. I get...

Failed Actions:
* p_lv_on_drbd1_start_0 on ha16b 'not running' (7): call=68, status=complete, 
exitreason='LVM: vg_on_drbd1 did not activate correctly',
last-rc-change='Wed Jul 25 22:36:37 2018', queued=0ms, exec=401ms

The storage stack is:

md4 -> drbd -> lvm -> filesystem

--Eric
___
drbd-user mailing list
drbd-user@lists.linbit.com
http://lists.linbit.com/mailman/listinfo/drbd-user


Re: [DRBD-user] [DRBD-9.0.15-0rc1] Resource "stuck" during live migration

2018-07-26 Thread Yannis Milios
I've switched pve1,pve2 to lvm thin recently just for testing and left pve3
with zfs as a storage back end. However, I really miss some cool zfs
features, compared to lvm thin, like on-the-fly compression of zero blocks
and its fast, low-cost, point-in-time snapshots... What I don't miss, though,
is zfs memory consumption compared to lvm thin :-)

On Thu, Jul 26, 2018 at 8:26 AM Roland Kammerer 
wrote:

> On Wed, Jul 25, 2018 at 08:49:02PM +0100, Yannis Milios wrote:
> > Hello,
> >
> > Currently testing 9.0.15-0rc1 on a 3 node PVE cluster.
> >
> > Pkg versions:
> > --
> > cat /proc/drbd
> > version: 9.0.15-0rc1 (api:2/proto:86-114)
> > GIT-hash: fc844fc366933c60f7303694ca1dea734dcb39bb build by root@pve1,
> > 2018-07-23 18:47:08
> > Transports (api:16): tcp (9.0.15-0rc1)
> > ii  python-drbdmanage 0.99.18-1
> > ii  drbdmanage-proxmox2.2-1
> > ii  drbd-utils9.5.0-1
> > -
> > Resource=vm-122-disk-1
> > Replica count=3
> > PVE nodes=pve1,pve2,pve3
> > Resource is active on pve2 (Primary), the other two nodes (pve1,pve3) are
> > Secondary.
> >
> > Tried to live migrate the VM from pve2 to pve3 and the process got stuck
> > just before starting. By inspecting dmesg on both nodes (pve2,pve3), I get
> > the following crash..
> >
> >
> > pve2 (Primary) node:
> >
> https://privatebin.net/?fb5435a42b431af2#4xZpd9D5bYnB000+H3K0noZmkX20fTwGSziv5oO/Zlg=
> >
> > pve3(Secondary)node:
> >
> https://privatebin.net/?d3b1638fecb6728f#2StXbwDPT0JlFUKf686RJiR+4hl52jEmmij2UTtnSjs=
> >
>
> We will look into it closer. For now I saw "zfs" in the second trace and
> stopped. It is so freaking broken, it is not funny any more (it craps
> out with all kinds of BS in our internal infrastructure as well). For
> example we had to go back to a xenial kernel because the bionic one's zfs
> is that broken :-/
>
> Regards, rck
> ___
> drbd-user mailing list
> drbd-user@lists.linbit.com
> http://lists.linbit.com/mailman/listinfo/drbd-user
>
___
drbd-user mailing list
drbd-user@lists.linbit.com
http://lists.linbit.com/mailman/listinfo/drbd-user


Re: [DRBD-user] [DRBD-9.0.15-0rc1] Resource "stuck" during live migration

2018-07-26 Thread Roland Kammerer
On Wed, Jul 25, 2018 at 08:49:02PM +0100, Yannis Milios wrote:
> Hello,
> 
> Currently testing 9.0.15-0rc1 on a 3 node PVE cluster.
> 
> Pkg versions:
> --
> cat /proc/drbd
> version: 9.0.15-0rc1 (api:2/proto:86-114)
> GIT-hash: fc844fc366933c60f7303694ca1dea734dcb39bb build by root@pve1,
> 2018-07-23 18:47:08
> Transports (api:16): tcp (9.0.15-0rc1)
> ii  python-drbdmanage 0.99.18-1
> ii  drbdmanage-proxmox2.2-1
> ii  drbd-utils9.5.0-1
> -
> Resource=vm-122-disk-1
> Replica count=3
> PVE nodes=pve1,pve2,pve3
> Resource is active on pve2 (Primary), the other two nodes (pve1,pve3) are
> Secondary.
> 
> Tried to live migrate the VM from pve2 to pve3 and the process got stuck
> just before starting. By inspecting dmesg on both nodes (pve2,pve3), I get
> the following crash..
> 
> 
> pve2 (Primary) node:
> https://privatebin.net/?fb5435a42b431af2#4xZpd9D5bYnB000+H3K0noZmkX20fTwGSziv5oO/Zlg=
> 
> pve3(Secondary)node:
> https://privatebin.net/?d3b1638fecb6728f#2StXbwDPT0JlFUKf686RJiR+4hl52jEmmij2UTtnSjs=
> 

We will look into it closer. For now I saw "zfs" in the second trace and
stopped. It is so freaking broken, it is not funny any more (it craps
out with all kinds of BS in our internal infrastructure as well). For
example we had to go back to a xenial kernel because the bionic one's zfs
is that broken :-/

Regards, rck
___
drbd-user mailing list
drbd-user@lists.linbit.com
http://lists.linbit.com/mailman/listinfo/drbd-user