On Thu, Jul 26, 2018 at 05:51:40PM +, Eric Robinson wrote:
> > But really, most of the time, you really want LVM *below* DRBD, and NOT
> > above it. Even though it may "appear" to be convenient, it is usually not
> > what you want, for various reasons, one of them being performance.
>
>
Hi,
On Fri, Jul 27, 2018 at 1:36 AM, Lars Ellenberg
wrote:
> On Mon, Jul 23, 2018 at 02:46:25PM +0200, Michal Michaláč wrote:
> > Hello,
> >
> >
> >
> > after replacing backing device of DRBD, content of DRBD volume (not only
> > backing disk) is invalid on node with
On Fri, Jul 27, 2018 at 3:51 AM, Eric Robinson
wrote:
> > On Thu, Jul 26, 2018 at 04:32:17PM +0200, Veit Wahlich wrote:
> > > Hi Eric,
> > >
> > > On Thursday, 26.07.2018, 13:56 +, Eric Robinson wrote:
> > > > Would there really be a PV signature on the backing device? I didn't
> > >
On Wed, Jul 25, 2018 at 12:02:38PM +0300, Roman Makhov wrote:
> Hi Jaco,
>
> Maybe it is because crm is core component of Pacemaker (
> https://wiki.clusterlabs.org/wiki/File:Stack.png)?
> "crmd Short for Cluster Resource Management Daemon. Largely a message
> broker for the PEngine and LRM, it
On Thursday, 26.07.2018, 17:31 +0200, Lars Ellenberg wrote:
> > global_filter = [ "a|^/dev/md.*$|", "r/.*/" ]
> >
> > or even more strict:
> >
> > global_filter = [ "a|^/dev/md4$|", "r/.*/" ]
>
> Uhm, no.
> Not if he wants DRBD to be his PV...
> then he needs to exclude (reject) the
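Spelled out as the inverse of the filters quoted above, a sketch of what Lars is getting at (assuming, per the thread, that the DRBD device is the PV and /dev/md4 is its backing device; adjust names to the actual setup):

```
# /etc/lvm/lvm.conf (sketch): when DRBD itself is the PV, accept the
# drbd devices and reject everything else, so LVM never scans the same
# PV signature a second time through the md backing device.
global_filter = [ "a|^/dev/drbd.*$|", "r|.*|" ]
```

First match wins, so /dev/md4 (and every other device) falls through to the catch-all reject.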
On Mon, Jul 23, 2018 at 02:46:25PM +0200, Michal Michaláč wrote:
> Hello,
>
>
>
> after replacing backing device of DRBD, content of DRBD volume (not only
> backing disk) is invalid on node with inconsistent backing device, until
> sync finishes. I think, correct behaviour is
On Thu, Jul 26, 2018 at 04:32:17PM +0200, Veit Wahlich wrote:
> Hi Eric,
>
> On Thursday, 26.07.2018, 13:56 +, Eric Robinson wrote:
> > Would there really be a PV signature on the backing device? I didn't
> > turn md4 into a PV (did not run pvcreate /dev/md4), but I did turn
> > the
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA512
On 26/07/2018 at 15:28, Roland Kammerer wrote:
> Dear Proxmox VE users,
>
> we released the first version of the linstor-proxmox plugin. This
> integrates LINSTOR (the successor of DRBDManage) into Proxmox.
>
> It contains all the features the
Hi Eric,
On Thursday, 26.07.2018, 13:56 +, Eric Robinson wrote:
> Would there really be a PV signature on the backing device? I didn't turn md4
> into a PV (did not run pvcreate /dev/md4), but I did turn the drbd disk into
> one (pvcreate /dev/drbd1).
both DRBD and mdraid put their
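Veit's point (cut off in this preview) is about where the signatures live: internal DRBD metadata sits at the *end* of the backing device, while LVM writes its PV label near the *front*, so a `pvcreate /dev/drbd1` lands a label that is also visible at the start of /dev/md4. A toy sketch of the detection, against an in-memory buffer rather than a real device (the layout constants are real LVM behaviour; the buffer is hypothetical):

```python
# Toy sketch: LVM's PV label starts with the ASCII signature "LABELONE"
# and is written into one of the first four 512-byte sectors (normally
# sector 1). Because internal DRBD metadata occupies the TAIL of the
# backing device, the front of /dev/md4 is byte-identical to the front
# of /dev/drbd1 -- so an unfiltered LVM scan finds the PV label there too.
SECTOR = 512

def find_pv_label(device_bytes: bytes) -> int:
    """Return the sector index holding an LVM PV label, or -1 if none."""
    for sector in range(4):  # LVM only scans the first 4 sectors
        off = sector * SECTOR
        if device_bytes[off:off + 8] == b"LABELONE":
            return sector
    return -1

# Simulate a tiny "backing device": pvcreate through the DRBD device
# placed LABELONE in sector 1; the tail would hold DRBD metadata.
disk = bytearray(1024 * SECTOR)
disk[SECTOR:SECTOR + 8] = b"LABELONE"

print(find_pv_label(bytes(disk)))  # -> 1
```

This is why the global_filter discussion elsewhere in the thread matters: without a filter, both nodes' LVM sees the same label twice, once per device path.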
> -----Original Message-----
> From: drbd-user-boun...@lists.linbit.com [mailto:drbd-user-
> boun...@lists.linbit.com] On Behalf Of Robert Altnoeder
> Sent: Thursday, July 26, 2018 5:12 AM
> To: drbd-user@lists.linbit.com
> Subject: Re: [DRBD-user] drbd+lvm no bueno
>
> On 07/26/2018 08:50 AM,
Thank you, I will check that out.
From: Jaco van Niekerk [mailto:j...@desktop.co.za]
Sent: Thursday, July 26, 2018 3:34 AM
To: Eric Robinson ; drbd-user@lists.linbit.com
Subject: Re: [DRBD-user] drbd+lvm no bueno
Hi
Check your LVM configuration:
Dear Proxmox VE users,
we released the first version of the linstor-proxmox plugin. This
integrates LINSTOR (the successor of DRBDManage) into Proxmox.
It contains all the features the drbdmanage-proxmox plugin had (i.e.,
creating/deleting volumes with a configurable redundancy, VM
On 07/26/2018 08:50 AM, Eric Robinson wrote:
>
>
> Failed Actions:
>
> * p_lv_on_drbd1_start_0 on ha16b 'not running' (7): call=68,
> status=complete, exitreason='LVM: vg_on_drbd1 did not activate correctly',
>
> last-rc-change='Wed Jul 25 22:36:37 2018', queued=0ms, exec=401ms
>
>
>
> The
Hi
Check your LVM configuration:
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/configuring_the_red_hat_high_availability_add-on_with_pacemaker/s1-exclusiveactive-haaa
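The Red Hat guide linked above boils down to keeping boot-time LVM away from the shared VG so that only the cluster activates it. A sketch of that lvm.conf change (the VG name vg_on_drbd1 comes from the thread; "vg_root" is a placeholder for the node's local root VG):

```
# /etc/lvm/lvm.conf (sketch): activate only local VGs automatically;
# the cluster-managed VG (vg_on_drbd1) is deliberately absent, so
# Pacemaker's LVM resource agent is the only thing that activates it.
# Rebuild the initramfs afterwards so early boot honours the same list.
volume_list = [ "vg_root" ]
```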
Regards
Jaco van Niekerk
Office: 011 608 2663 E-mail: j...@desktop.co.za
Using drbd 9.0.14, I am having trouble getting resources to move between
nodes. I get...
Failed Actions:
* p_lv_on_drbd1_start_0 on ha16b 'not running' (7): call=68, status=complete,
exitreason='LVM: vg_on_drbd1 did not activate correctly',
last-rc-change='Wed Jul 25 22:36:37 2018',
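For context, a sketch of the kind of resource setup implied by that error (the resource and VG names are from the log; the DRBD resource name and the pcs syntax are assumptions, not taken from the thread):

```
# Sketch (pcs syntax): the failing p_lv_on_drbd1 is an ocf:heartbeat:LVM
# agent activating vg_on_drbd1. It must be colocated with, and ordered
# after, the DRBD master on the same node, or activation fails exactly
# like the log above. "p_drbd1-clone" is a placeholder name.
pcs resource create p_lv_on_drbd1 ocf:heartbeat:LVM \
    volgrpname=vg_on_drbd1 exclusive=true
pcs constraint colocation add p_lv_on_drbd1 with master p_drbd1-clone
pcs constraint order promote p_drbd1-clone then start p_lv_on_drbd1
```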
I've switched pve1,pve2 to lvm thin recently just for testing and left pve3
with zfs as a storage back end. However, I really miss some cool zfs
features, compared to lvm thin, like on-the-fly compression of zero blocks
and its fast, low-cost, point-in-time snapshots... What I don't miss though,
is
On Wed, Jul 25, 2018 at 08:49:02PM +0100, Yannis Milios wrote:
> Hello,
>
> Currently testing 9.0.15-0rc1 on a 3 node PVE cluster.
>
> Pkg versions:
> --
> cat /proc/drbd
> version: 9.0.15-0rc1 (api:2/proto:86-114)
> GIT-hash: fc844fc366933c60f7303694ca1dea734dcb39bb build by