Re: [DRBD-user] building v9

2020-12-10 Thread Yannis Milios
Thanks Christoph for the very clear explanation, I think that it was a piece which was missing for all of us! B.R. Yannis On Thu, 10 Dec 2020 at 12:07, Christoph Böhmwalder < christoph.boehmwal...@linbit.com> wrote: > Hi Pierre, > > As much as we may want it, DRBD's coccinelle-based compat

Re: [DRBD-user] building v9

2020-12-10 Thread Yannis Milios
I tested building on 5.8.0-31-generic (Ubuntu Focal) and I'm getting the same error as you do, so I'd assume that drbd cannot build against that kernel at the moment. I have reverted back to 5.4.0-48-generic which seems to be ok. On Thu, 10 Dec 2020 at 05:59, Pierre-Philipp Braun wrote: > Hey.

Re: [DRBD-user] drbd-9.0.26-rc1

2020-11-13 Thread Yannis Milios
Attaching another build issue on Arch, kernel 5.9.6-arch1-1 (x86_64) ... DKMS make.log for drbd-9.0.26-0rc1 for kernel 5.9.6-arch1-1 (x86_64) Fri Nov 13 10:14:12 GMT 2020 make: Entering directory '/var/lib/dkms/drbd/9.0.26-0rc1/build/src/drbd' Calling toplevel makefile of kernel source

Re: [DRBD-user] [DRBD-announce] linstor-server 1.10.0 release

2020-11-10 Thread Yannis Milios
Hello, Quick question, just wondering how "auto-evict" will affect a 3 node linstor cluster with a replica number of 2? Say that node1 goes down for more than 1h, linstor will try to replace its drbd resources on either node2 or node3 assuming that the redundancy level falls below 2 and there's

Re: [DRBD-user] Remove DRBD w/out Data Loss

2020-09-01 Thread Yannis Milios
You mean completely removing DRBD while preserving the data on its backing device? That should work out of the box, without any extra effort, as DRBD acts as a transparent layer and does not modify the data on the backing device. Yannis On Mon, 31 Aug 2020 at 09:39, Eric Robinson wrote:

Re: [DRBD-user] cross version sync failing

2020-05-26 Thread Yannis Milios
Indeed, but that normally shouldn't be a problem (?). For example, the combination of centos6 (drbd8) and centos7(drbd9) works without issues. On Tue, 26 May 2020 at 11:48, Trevor Hemsley wrote: > On 26/05/2020 11:41, Yannis Milios wrote: > > centos6 <-> centos7 [OK] > >

Re: [DRBD-user] cross version sync failing

2020-05-26 Thread Yannis Milios
I confirmed this as well, details below... --- centos 6.10 kernel 2.6.32-754.el6.x86_64 drbd84-utils-9.5.0-1.el6.elrepo.x86_64 kmod-drbd84-8.4.11-1.el6_10.elrepo.x86_64

Re: [DRBD-user] Linstor/DRBD9 : Initial sync stuck at 30Mbps

2019-05-06 Thread Yannis Milios
What happens if you temporarily disconnect the drbd resource in Primary mode from the rest during the "move" process? Does that speed up the process ? If yes, then you will have to tune drbd sync parameters for the 10Gbit link. G. On Mon, 6 May 2019 at 14:35, Julien Escario wrote: > Hello, >

Re: [DRBD-user] LINSTOR on PROXMOX : How to deploy ressources on a new satellite?

2019-02-03 Thread Yannis Milios
> Are you saying this needs to be done for every single resource potentially > hundreds of vm's with multiple disks attached? This sounds like a huge pita. > Yes. However, I did a test. I temporarily reduced the redundancy level in /etc/pve/storage.cfg and then created a new VM in PVE. Then I added

Re: [DRBD-user] linstor-proxmox broken after upgrading to PVE/libpve-storage-perl/stable 5.0-32

2018-12-10 Thread Yannis Milios
> Hello, > Yannis, did you managed to get rid of this warning ? Same thing here > since last upgrade. > > Nothing really bad happening except this annoying warning ... No, I haven't, but it doesn't bother me as long as everything is working properly. My understanding is that this warning is set

Re: [DRBD-user] Linstor-server 0.7.3/linstor-client 0.7.2 release

2018-11-30 Thread Yannis Milios
> > > I don't really understand why drbd8.4 module was loaded (from pve kernel > > package) instead of drbd-dkms (aka 9) module ... > > > > Just ran dpkg-reconfigure drbd-dkms and rebooted servers to check if > > correct version is loaded at boot time. > Personally, I choose to use 'apt-mark

Re: [DRBD-user] linstor-proxmox broken after upgrading to PVE/libpve-storage-perl/stable 5.0-32

2018-11-27 Thread Yannis Milios
Upgraded to linstor-proxmox (3.0.2-3) and it seems to be working well with libpve-storage-perl (5.0-32). There's a warning notification during live migrations about the upgraded storage API, but in the end the process completes successfully. "Plugin "PVE::Storage::Custom::LINSTORPlugin" is

Re: [DRBD-user] linstor-proxmox broken after upgrading to PVE/libpve-storage-perl/stable 5.0-32

2018-11-27 Thread Yannis Milios
Ok, I used "dist-upgrade" because that's what Proxmox recommends to do when upgrading their systems. On Tue, 27 Nov 2018 at 15:41, Roland Kammerer wrote: > On Tue, Nov 27, 2018 at 03:22:07PM +, Yannis Milios wrote: > > Hi Roland, > > > > I just did a simple

Re: [DRBD-user] linstor-proxmox broken after upgrading to PVE/libpve-storage-perl/stable 5.0-32

2018-11-27 Thread Yannis Milios
Hi Roland, I just did a simple ‘apt dist-upgrade’ and the rest followed ... Yannis On Tue, 27 Nov 2018 at 14:58, Roland Kammerer wrote: > On Tue, Nov 27, 2018 at 02:13:58PM +0000, Yannis Milios wrote: > > Just for the record, Proxmox has released libpve-storage-perl/stable > 5.0-

[DRBD-user] linstor-proxmox broken after upgrading to PVE/libpve-storage-perl/stable 5.0-32

2018-11-27 Thread Yannis Milios
Just for the record, Proxmox has released libpve-storage-perl/stable 5.0-32 today, which seems to break linstor-proxmox plugin. Reverting to libpve-storage-perl/stable 5.0-30 and reinstalling linstor-proxmox package fixes the problem. During the upgrade, the following action is taking place ...

Re: [DRBD-user] Linstor-Proxmox plugin 3.0.2-1 on PVE 5.2-11

2018-11-15 Thread Yannis Milios
Looks like they did a change on pve-storage >= 5.0-31 (on pvetest currently), which can potentially break things on the upcoming "stable" version (see Wolfgang's previous e-mail) ... :) On Thu, 15 Nov 2018 at 14:36, Roland Kammerer wrote: > On Thu, Nov 15, 2018 at 10:45:43

Re: [DRBD-user] Linstor-Proxmox plugin 3.0.2-1 on PVE 5.2-11

2018-11-15 Thread Yannis Milios
This probably has nothing to do with DRBD, better to confirm on PVE forum/ML. > Versions : > - PVE 5.2-11 > > I'm using the latest versions of both LINSTOR/PVE, no issues here. Just a thought, I noticed your pve-manager version is 5.2-11 where normally it should be 5.2-10, if you are using pve

Re: [DRBD-user] linstor-proxmox controller toggle tests

2018-11-12 Thread Yannis Milios
> > As far as I know, Proxmox does not need 3 nodes and/or a quorum, and the > LINSTOR controller does not care either. > Thanks for confirming this Robert. In my experience, Proxmox requires a minimum of 3 nodes when HA is enabled/required. When HA is enabled, and one of the 2 cluster nodes goes

Re: [DRBD-user] DRBD9 + PROXMOX 5.2 - can't initialise drbd with drbdmanage

2018-11-10 Thread Yannis Milios
instor but it's for > production use in little time, is it OK to use linstor for production ? > > Regards > On 10/11/2018 at 13:27, Yannis Milios wrote: > > drbdmanage is end of life, please use linstor instead (see drbd > documentation on how to configure it on proxmox). > >

Re: [DRBD-user] DRBD9 + PROXMOX 5.2 - can't initialise drbd with drbdmanage

2018-11-10 Thread Yannis Milios
drbdmanage is end of life, please use linstor instead (see drbd documentation on how to configure it on proxmox). On Sat, 10 Nov 2018 at 12:18, Sebastien CHATEAU-DUTIER < sebast...@chateau-dutier.com> wrote: > Dear all, > > On fresh install of proxmox 5.2 (2 nodes up to date with installation

Re: [DRBD-user] linstor-proxmox controller toggle tests

2018-11-10 Thread Yannis Milios
You need at least 3 nodes to have a proper working cluster (i.e. quorum). In addition, check the drbd/linstor documentation on how to create a linstor VM in PVE; it will save you time in doing all those steps manually. Yannis On Sat, 10 Nov 2018 at 12:18, Greb wrote: > Hello, > > I did the same

Re: [DRBD-user] (DRBD 9) promote secondary to primary with primary crashed

2018-11-02 Thread Yannis Milios
Try adding the --force parameter to the drbdadm command. On Friday, November 2, 2018, Daniel Hertanu wrote: > Hi, > > I'm running two nodes with DRBD 9 and I want to simulate a primary node > crash followed by restoring the access to the data on the secondary node > left. > > So, having the sync done
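A minimal sketch of the forced promotion suggested above, assuming the old primary is down and the surviving node's disk is UpToDate (resource name r0 is a placeholder):

    drbdadm primary --force r0    # promote despite the missing peer
    drbdadm status r0             # confirm the role change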

Re: [DRBD-user] 8 Zettabytes out-of-sync?

2018-11-02 Thread Yannis Milios
On Fri, 2 Nov 2018 at 09:25, Jarno Elonen wrote: > > This is getting quite worrisome. Is anyone else experiencing this with > DRBD 9? Is it something really wrong in my setup, or are there perhaps some > known instabilities in DRBD 9.0.15-1? > Yes, I have been facing this as well on all

Re: [DRBD-user] slow sync speed

2018-10-17 Thread Yannis Milios
Just a quick note .. You are correct, it shouldn't be required (v8.9.10) and I was surprised > with that too. > In the DRBD documentation, it is stated that ... "When multiple DRBD resources share a single replication/synchronization network, synchronization with a fixed rate may not be an
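For reference, the variable-rate resync controller the guide recommends instead of a fixed rate lives in the disk section; a hedged sketch with purely illustrative values:

    resource r0 {
      disk {
        c-plan-ahead  20;    # enable the dynamic resync controller
        c-fill-target 1M;
        c-min-rate    10M;
        c-max-rate    500M;  # ceiling, not a fixed rate
      }
    }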

Re: [DRBD-user] linstor-proxmox-3.0.0-rc1

2018-10-05 Thread Yannis Milios
Just came across this, not sure if it's a bug or a feature. When snapshotting a VM with the RAM checkbox disabled, a snapshot is created on the node where the VM is running. When snapshotting a VM with the RAM checkbox enabled, a new drbd resource is being created in the following format...

Re: [DRBD-user] DRBD 9 without Linstor

2018-10-04 Thread Yannis Milios
You can, but your life will be miserable without LINSTOR managing the resources (hence the existence of it in the first place) ... :) On Wed, 3 Oct 2018 at 13:29, M. Jahanzeb Khan wrote: > Hello, > > I would like to know that is it possible to use drbd 9 without using > Linstor on top of LVM ?

Re: [DRBD-user] Softlockup when using 9.0.15-1 version

2018-10-02 Thread Yannis Milios
Not sure if it's related, but I had a similar issue on one of my PVE hosts recently. I have the same kernel installed on all (3) nodes, but this machine was locking up after a few minutes with the softlockup messages you were getting. The only way to recover was to hard reboot the machine. Managed

[DRBD-user] How to split network traffic on LINSTOR cluster

2018-09-17 Thread Yannis Milios
Hello, I've got some questions regarding splitting/separating the DRBD network traffic on a 3 node LINSTOR/PVE cluster. Initially, both LINSTOR and DRBD traffic were using the "Default" network (i.e. 10.10.10.0/24), which was set during the initial cluster setup. Now, I used 'linstor n i c'
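For context, 'linstor n i c' abbreviates the command below; node name, interface name and address are placeholders, and the exact syntax is an assumption based on the LINSTOR client, so treat it as a sketch:

    linstor node interface create pve1 data 10.10.20.11
    linstor node interface list pve1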

Re: [DRBD-user] Linstor v0.6.2

2018-09-03 Thread Yannis Milios
:32, Yannis Milios wrote: > > Just tried 0.6.2-1 on PVE and it seems to fail with a different error > > this time (Migration Failed!) > > > > > https://privatebin.net/?832d42e56c2734a9#0ZBY7DAQhLAbLSmzc62rokuKSVOkduBAO28lt0UIqrA= > > Yes, moreover, do

Re: [DRBD-user] Linstor v0.6.2

2018-09-03 Thread Yannis Milios
Just tried 0.6.2-1 on PVE and it seems to fail with a different error this time (Migration Failed!) https://privatebin.net/?832d42e56c2734a9#0ZBY7DAQhLAbLSmzc62rokuKSVOkduBAO28lt0UIqrA= On Mon, 3 Sep 2018 at 14:31, Rene Peinthor wrote: > Hi again! > > While further testing the latest

Re: [DRBD-user] Linstor v0.6.0 release

2018-09-01 Thread Yannis Milios
FYI, I tested upgrading linstor-controller and linstor-client from 0.5.0-1 to 0.6.0-1 on Proxmox and it's failing to start the service, with the error below... Reported error: > === > > Category: RuntimeException > Class name:

Re: [DRBD-user] linstor-proxmox-2.9.0

2018-08-27 Thread Yannis Milios
> > Do we agree on that? Yes, thanks for clarifying ... > And that is the problem we have to fix. The linstor satellite deletes > its resource files from /var/lib/linstor.d on startup. So > linstor-satellite.service and drbd.service started more or less at the > same time. The satellite

Re: [DRBD-user] linstor-proxmox-2.9.0

2018-08-27 Thread Yannis Milios
> > This is just what came to my mind as a solution, but what storage pool > you used for controller vm resource? Is it the linstor-managed one or > another? In the former case, i guess that the controller vm resource is > a sort of 'foreign body'. I used the DRBD storage pool (the same used by

Re: [DRBD-user] linstor-proxmox-2.9.0

2018-08-27 Thread Yannis Milios
Sorry for interrupting, but I wanted to share my experience with this as well (see below) ... > What do you mean by that? The DRBD resource (== storage for the > controller VM) is brought up by the drbd.service and can then be > auto-promoted. The plugin code ignores that VM. The Proxmox HA

[DRBD-user] Linstor | Failed to restore snapshot

2018-08-24 Thread Yannis Milios
I created a snapshot for a VM using the Proxmox web interface, with the previous version of the linstor-proxmox plugin (2.8-1). Today, I upgraded to version 2.9.0-1, not sure if this problem affects this version as well. When trying to restore the snapshot from the command line to a new

Re: [DRBD-user] Linstor | Peer's disk size too small

2018-08-24 Thread Yannis Milios
> On 08/24/2018 12:21 PM, Yannis Milios wrote: > > Thanks for your answer. > > > > It should be possible to avoid this by setting an explicit size > > for the > > DRBD volume in the resource configuration file, so that DRBD will > only >

Re: [DRBD-user] Linstor | Peer's disk size too small

2018-08-24 Thread Yannis Milios
Thanks for your answer. It should be possible to avoid this by setting an explicit size for the > DRBD volume in the resource configuration file, so that DRBD will only > use that much space even if more is available. Do you mean by manually editing the resource configuration files in

[DRBD-user] Linstor | Peer's disk size too small

2018-08-24 Thread Yannis Milios
Hello, Trying to create a new resource by using Linstor, on a 3 node cluster. Two of the nodes are using LVM thin as the storage backend and one of them is using ZFS. I created a test RD, then a VD with a size of 10GB. Then, I created the resource on the LVM-backed nodes by using 'linstor r c test'
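The steps described (RD, VD, then 'linstor r c test' as the short form of 'resource create') correspond roughly to the following; node and storage-pool names are placeholders:

    linstor resource-definition create test
    linstor volume-definition create test 10G
    linstor resource create nodeA test --storage-pool thinpool
    linstor resource create nodeB test --storage-pool thinpool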

Re: [DRBD-user] DM to Linstor migration issue

2018-08-22 Thread Yannis Milios
Sorry, you are right, attaching it here ... https://privatebin.net/?f272071f6eb44ba1#qzYulRsVxV3CsEKe4LrNlVrAKGbB5x7DbMDr6Q1rbao= On Wed, 22 Aug 2018 at 12:59, Roland Kammerer wrote: > On Wed, Aug 22, 2018 at 12:34:32PM +0100, Yannis Milios wrote: > > Hi Roland, > >

Re: [DRBD-user] DM to Linstor migration issue

2018-08-22 Thread Yannis Milios
Hi Roland, > Do you still have the migration script? Could you post the part for that > resource? Would be interesting which value the script tried to set. > > Yes, I do. You can find it here https://privatebin.net/?a12ad8f1c97bcb15#XLlAENrDGQ7OYn/Mq4Uvq7vwZuZ+jyjRBLIUPMepYgE= The problem in my

Re: [DRBD-user] DM to Linstor migration issue

2018-08-21 Thread Yannis Milios
On Tue, 21 Aug 2018 at 10:23, Robert Altnoeder wrote: You could try to resize the volume (volume-definition set-size) to match > those 33587200 KiB that you see as the expected value, which will > effectively make it somewhat larger than that. If the peer count is > indeed different from what
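A hedged sketch of the resize suggested above, assuming volume number 0 and a target slightly above the expected value (size syntax per the LINSTOR client):

    linstor volume-definition set-size test 0 33G    # >= the 33587200 KiB DRBD expects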

[DRBD-user] DM to Linstor migration issue

2018-08-21 Thread Yannis Milios
Hello, I was testing DM to Linstor migration script by following the steps in the documentation on a 3 node test cluster. The migration script was completed successfully, resources and volume definitions were created normally. However, when rebooting the 3 nodes, none of the DRBD resources comes

Re: [DRBD-user] Linstor | changing cluster's interface ip addresses

2018-08-20 Thread Yannis Milios
Many thanks, this is going to be a very useful option... Yannis On Mon, 20 Aug 2018 at 17:27, Robert Altnoeder wrote: > On 08/20/2018 05:19 PM, Yannis Milios wrote: > > Quick question. I noticed the following command in Linstor: > > > > 'linstor node interface mod

[DRBD-user] Linstor | changing cluster's interface ip addresses

2018-08-20 Thread Yannis Milios
Hello, Quick question. I noticed the following command in Linstor: 'linstor node interface modify' By using this command someone can modify the network interface/ip address that Linstor is listening on. Let's assume that we have already created a 3 node DRBD/Linstor cluster in the ip range
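As an illustration only (node name, interface name and address are placeholders, syntax assumed from the LINSTOR client help):

    linstor node interface modify node1 default --ip 192.168.50.11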

Re: [DRBD-user] Migration from drbdmanage to linstor?

2018-08-15 Thread Yannis Milios
There's a migration plan from DM to LINSTOR on the way, check the following post (Roland's comment) ... https://lists.gt.net/drbd/users/29928?search_string=linstor%20migration;#29928 On Wed, 15 Aug 2018 at 10:22, Frank Rust wrote: > Hi all, > since drbdmanage will reach its end-of-life at the

Re: [DRBD-user] Extent XXX beyond end of bitmap!

2018-08-14 Thread Yannis Milios
Does this happen on both nodes? What’s the status of the backing device (lvm) ? Can you post the exact versions for both kernel module and utils? Any clue in the logs? On Tue, 14 Aug 2018 at 06:57, Oleksiy Evin wrote: > Hi, > > We have DRBD 8.4 over LVM volume setup on CentOS7. > > After a

Re: [DRBD-user] DRBD and ProxMox asymmetric cluster

2018-08-10 Thread Yannis Milios
This has to be asked in the PVE mailing list, nothing to do with DRBD. Check the PVE documentation for how HA works (keyword: nofailback). AFAIK PVE's HA manager has no way to interact with the underlying storage. It will just attempt to start the resource wherever you have set it to do so. If the

Re: [DRBD-user] DRBD 9 and internal metadisk v08

2018-08-08 Thread Yannis Milios
I'm not familiar with opensuse, but just guessing... Perhaps there was a kernel update on your system and you forgot to recompile drbd9 kernel module against it? Normally, this should be handled automatically by dkms depending on how the system is configured. Try rebooting on an older kernel to

Re: [DRBD-user] linstor-proxmox-2.8

2018-07-30 Thread Yannis Milios
> Yes, "start" is pretty obvious and in the article. Sure, "enable" is > also a good idea, but the interesting thing is: Did you really have to > "unmask" it? > > AFAIR yes, I had to unmask the service in order to enable and then eventually start it. But perhaps this was true when I installed

Re: [DRBD-user] linstor-proxmox-2.8

2018-07-30 Thread Yannis Milios
On Mon, 30 Jul 2018 at 09:09, Roland Kammerer wrote: > On Fri, Jul 27, 2018 at 01:52:55PM +0100, Yannis Milios wrote: > > One last thing I forgot to mention in the last post is ... > > > > When creating a VM or CT via PVE webgui it fails with the below: > >

Re: [DRBD-user] linstor-proxmox-2.8

2018-07-30 Thread Yannis Milios
> > > However, in your blog post you mention > > linstor-controller,linstor-satellite and linstor-client. > > That is what you should do. > You are right, that's what I ended up doing and now everything works perfectly. In addition, I had to enable/start linstor-satellite service on all satellite

Re: [DRBD-user] linstor-proxmox-2.8

2018-07-27 Thread Yannis Milios
> Satellite and Controller are quite obvious, Combined is a node that runs > a Satellite and may sometimes run a Controller, Auxiliary is a node that > runs neither but is registered for other reasons, this is mostly > reserved for future features. > Can these 'roles' be modified afterwards once

Re: [DRBD-user] linstor-proxmox-2.8

2018-07-27 Thread Yannis Milios
One last thing I forgot to mention in the last post is ... When creating a VM or CT via PVE webgui it fails with the below: https://privatebin.net/?dd4373728501c9eb#FsTXbEfRh43WIV4q7tO5wnm0HdW0O/gJbwavrYCgkeE= Did some investigation on the linstor side and realised as a possible problem to be

Re: [DRBD-user] linstor-proxmox-2.8

2018-07-27 Thread Yannis Milios
Thanks for the explanation, this was helpful. Currently testing on a 'lab' environment. I've got some questions, most are related to linstor itself and not linstor-proxmox specific, hopefully this is the correct thread to expand these questions... - What's the difference between installing

Re: [DRBD-user] linstor-proxmox-2.8

2018-07-27 Thread Yannis Milios
Quick question, can we use Linstor side-by-side with DM, without affecting one another? This may be good for testing or perhaps for migrating DM resources to Linstor in the future?

Re: [DRBD-user] [DRBD-9.0.15-0rc1] Resource "stuck" during live migration

2018-07-26 Thread Yannis Milios
, is zfs memory consumption compared to lvm thin :-) On Thu, Jul 26, 2018 at 8:26 AM Roland Kammerer wrote: > On Wed, Jul 25, 2018 at 08:49:02PM +0100, Yannis Milios wrote: > > Hello, > > > > Currently testing 9.0.15-0rc1 on a 3 node PVE cluster. > > > > Pkg version

[DRBD-user] [DRBD-9.0.15-0rc1] Resource "stuck" during live migration

2018-07-25 Thread Yannis Milios
Hello, Currently testing 9.0.15-0rc1 on a 3 node PVE cluster. Pkg versions: -- cat /proc/drbd version: 9.0.15-0rc1 (api:2/proto:86-114) GIT-hash: fc844fc366933c60f7303694ca1dea734dcb39bb build by root@pve1, 2018-07-23 18:47:08 Transports (api:16): tcp (9.0.15-0rc1) ii

[DRBD-user] drbdmanage-handlers out-of-sync ?

2018-07-03 Thread Yannis Milios
Hello, Currently, the drbdmanage-handlers command does not seem to support the "out-of-sync" handler. I'm aware that drbdmanage is about to be replaced by linstor, just wondering if there is a possibility for it to be added to drbdmanage somehow? Thanks

Re: [DRBD-user] Intro questions

2018-06-25 Thread Yannis Milios
I believe that most of your questions can be answered by reading the DRBD user's guide.. https://docs.linbit.com/docs/users-guide-8.4/#p-intro There are 2 versions of DRBD available at the moment, DRBD8 (for up to 2 nodes) and DRBD9 (for 2+ nodes). On Mon, Jun 25, 2018 at 7:07 AM Alex wrote:

Re: [DRBD-user] [proxmox]Move disk to drbd storage fails...

2018-06-06 Thread Yannis Milios
Looks like the new gmail has messed up the message, re-sending in plain text... create full clone of drive scsi1 (local-zfs:vm-108-disk-2) pong Operation completed successfully Operation completed successfully pong Operation completed successfully Operation completed successfully pong Operation

Re: [DRBD-user] [proxmox]Move disk to drbd storage fails...

2018-06-06 Thread Yannis Milios
This does not seem to work for me either, I'm on the latest pve and drbd9/drbdmanage versions. For me, it fails with a different kind of error though. I've executed the command on the "leader" node. The source vm disk is local zfs but I tried it also with a qemu raw image and had the same result.

Re: [DRBD-user] cannot remove snapshot

2018-05-16 Thread Yannis Milios
First make sure your DRBD cluster is in healthy state, that is, all drbd control volumes are in "normal" status. Then, I would use "drbdmanage resume-all" to resume all pending operations first. If that fails, then I would use "drbdmanage remove-snapshot -f resource snapshot" to force remove then
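Restated as commands, with resource and snapshot names as placeholders (drbdmanage only, since it has since been replaced by LINSTOR):

    drbdmanage resume-all                                # resume any pending operations
    drbdmanage remove-snapshot -f vm-100-disk-1 snap1    # force-remove the stuck snapshot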

Re: [DRBD-user] Strange drbdtop results

2018-05-11 Thread Yannis Milios
>drbdtop on a resource with detailed >status give me OutOfSync on some >nodes. I try to adjust all resources without any success on solving this problem. That can be solved by disconnecting/connecting the resource that has out-of-sync blocks.
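A minimal sketch of that disconnect/connect cycle (resource name is a placeholder):

    drbdadm disconnect r0
    drbdadm connect r0
    drbdadm status r0    # the out-of-sync blocks should now resync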

Re: [DRBD-user] One resource per disk?

2018-05-01 Thread Yannis Milios
I would prefer the 2nd option. Ideally all disks would be members of a RAID(10?) array, with DRBD sitting on top for the replication, and LVM for managing the volume. Another option would be ZFS managing the disks and the volume, while DRBD sitting on top for the replication. This very same

Re: [DRBD-user] failed exit code 1

2018-04-30 Thread Yannis Milios
> > > > With best regards > > > > *From:* Yannis Milios <yannis.mil...@gmail.com> > *Sent:* Monday, 30 April 2018 14:17 > *To:* Sebastian Blajszczak <sebastian.blajszc...@addmore.de> > *Cc:* drbd-user@lists.linbit.com > *Subject:* Re: [DRBD-user] faile

Re: [DRBD-user] failed exit code 1

2018-04-30 Thread Yannis Milios
When you say "it failed", how exactly did it fail? Anything in the logs? I'm on PVE5 as well, but did not have any issues updating both utils and kmod. dpkg -l | grep drbd-utils > ii drbd-utils 9.3.1-1 > amd64 RAID 1 over TCP/IP for Linux (user utilities)

Re: [DRBD-user] Not enough free bitmap slots when assigning a resource on an additional node

2018-04-11 Thread Yannis Milios
other resources seem to have the correct values. On Mon, Apr 9, 2018 at 2:44 PM, Yannis Milios <yannis.mil...@gmail.com> wrote: > Hello, > > On a 3 node/zfs backed drbd9 cluster, while trying to assign-resource on > an additional node, I'm getting "Not enough free bitmap

[DRBD-user] Not enough free bitmap slots when assigning a resource on an additional node

2018-04-09 Thread Yannis Milios
Hello, On a 3 node/zfs backed drbd9 cluster, while trying to assign-resource on an additional node, I'm getting "Not enough free bitmap slots" and the resync does not start. Removing/reassigning the resource does not help either. I couldn't find enough information about this error when searching

Re: [DRBD-user] Unable to init a node

2018-03-28 Thread Yannis Milios
> > Yes, I have three nodes: > root@dmz-pve1:~ # drbdmanage list-nodes > | Name | Pool Size | Pool Free | State | >

Re: [DRBD-user] Upgrade PVE 4.4 to 5.1 DRB9

2018-03-20 Thread Yannis Milios
>> Do you get new headers automatically when there is a new Proxmox kernel? My experience has been that, no, PVE does not install the pve-headers- automatically after each kernel upgrade (apt dist-upgrade). So, what I need to do is 2 additional steps: - Install pve-headers- matching the

Re: [DRBD-user] Upgrade PVE 4.4 to 5.1 DRB9

2018-03-19 Thread Yannis Milios
A simple 'apt install drbd-dkms --reinstall' should work as well. Remember to repeat this task each time you upgrade your kernel/headers. Y
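For reference, the two steps on a Proxmox node would look roughly like this (the header package naming is an assumption for PVE):

    apt install pve-headers-$(uname -r)    # headers matching the running kernel
    apt install --reinstall drbd-dkms      # triggers a dkms rebuild of the drbd module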

Re: [DRBD-user] Unable to init a node

2018-03-14 Thread Yannis Milios
>> root@dmz-pve1:~ # modinfo drbd >> filename: /lib/modules/4.13.13-6-pve/kernel/drivers/block/drbd/drbd.ko >> alias: block-major-147-* >> license: GPL >> version: 8.4.7 < (this is wrong!) Since version 5, PVE ships with the drbd8 kernel module (see version

Re: [DRBD-user] Problems with LVM over DRBD.

2018-02-16 Thread Yannis Milios
Are you using drbd in dual-primary mode ? Can you post the configuration of drbd resource (vm) and the output of "cat /proc/drbd" ? Any clue in the logs ? Y On Fri, Feb 16, 2018 at 10:16 AM, Carles Xavier Munyoz Baldó < car...@unlimitedmail.org> wrote: > Hi, > We have a two nodes Proxmox

Re: [DRBD-user] Proxmox repo release.gpg expired

2018-02-08 Thread Yannis Milios
Thanks for the hint, works now ... Y On Thu, Feb 8, 2018 at 10:43 AM, Lars Ellenberg <lars.ellenb...@linbit.com> wrote: > On Thu, Feb 08, 2018 at 10:06:00AM +0100, Christoph Lechleitner wrote: > > Am 08.02.18 um 09:38 schrieb Yannis Milios: > > > Can you please renew

[DRBD-user] Proxmox repo release.gpg expired

2018-02-08 Thread Yannis Milios
Can you please renew proxmox repo Release.gpg file ? Thanks W: An error occurred during the signature verification. The repository is not updated and the previous index files will be used. GPG error: http://packages.linbit.com/proxmox proxmox-5 Release: The following signatures were invalid:

Re: [DRBD-user] Understanding "classic" 3 node set up.

2018-02-07 Thread Yannis Milios
Did you fix the typo that Peter mentioned in his last post? That should be "internal" and not "inetrnal". Then you must copy the resource file to all 3 nodes. > >> meta-disk inetrnal; > > After fixing that you should be able to create the metadata and initialize the stacked device by: On
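For clarity, the corrected fragment of the stacked resource file, heavily abbreviated (resource name is a placeholder):

    resource r0-U {
      ...
      meta-disk internal;   # corrected spelling; re-copy the file to all 3 nodes
    }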

Re: [DRBD-user] online verification: how to query completion status?

2017-11-15 Thread Yannis Milios
>> prior to drbd 9.x I could easily check how far online verification has come by reading /proc/drbd. That has been deprecated in 9.x. To achieve something similar you will have to either use drbdtop, or otherwise you can have a look at this
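One non-interactive way to get a similar progress figure, assuming a reasonably recent drbd-utils (resource name is a placeholder):

    drbdsetup status r0 --verbose --statistics   # shows a done percentage while verify runs
    # or interactively:
    drbdtop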

Re: [DRBD-user] Control deployment?

2017-11-02 Thread Yannis Milios
Sorry, now that I read your email more carefully, I get what you mean... Obviously the quickest solution is to set replica to 3, but I guess that you don't want to do that, 1st because you don't want to sacrifice storage space and 2nd because you will still have the problem when you add a 4th, 5th

Re: [DRBD-user] Control deployment?

2017-11-02 Thread Yannis Milios
>>(Now I am solving this by migrate vm to node1, unassigned vm-image from node3, assign vm-image to node 3, migrate vm to node3, unassigned vm-image from node1, whats awful, error prone and somewhat waste of time) Why are you doing all these steps manually?! This is something that PVE handles

Re: [DRBD-user] query regarding existing data replication

2017-10-12 Thread Yannis Milios
If you mean that you want to preserve your existing data, then yes it's possible. Read carefully this section in documentation: http://docs.linbit.com/docs/users-guide-8.4/#s-prepare-storage ...and specifically the section "It is not necessary for this storage area to be empty before you create

Re: [DRBD-user] Some info

2017-10-11 Thread Yannis Milios
Are you planning to use DRBD8 or DRBD9? DRBD8 is limited to 2 nodes (max 3). DRBD9 can scale to multiple nodes. For DRBD8 the most common setup is RAID -> DRBD -> LVM or RAID -> LVM -> DRBD. Its management is way easier than DRBD9's. The most common DRBD9 setups are RAID -> LVM (thin or thick)

Re: [DRBD-user] DRBDv9 with iSCSI as scaleout SAN

2017-10-03 Thread Yannis Milios
In addition, as long as you're using proxmox, it would be way easier to setup the native drbd9 plugin for proxmox instead of using the iscsi method. In this case both drbd and proxmox should be hosted on the same servers (hyper-converged setup). Each vm will reside in a separate drbd9

Re: [DRBD-user] drbdmanage quorum control

2017-10-03 Thread Yannis Milios
Thanks for clarifying this ... Regards, Yannis On Tue, Oct 3, 2017 at 12:30 PM, Roland Kammerer <roland.kamme...@linbit.com > wrote: > On Tue, Oct 03, 2017 at 12:05:50PM +0100, Yannis Milios wrote: > > I think you have to use 'drbdmanage reelect' command to reelect a new &

Re: [DRBD-user] drbdmanage quorum control

2017-10-03 Thread Yannis Milios
I think you have to use 'drbdmanage reelect' command to reelect a new leader first. man drbdmanage-reelect Yannis On Mon, Oct 2, 2017 at 2:12 PM, Jason Fitzpatrick wrote: > Hi all > > I am trying to get my head around the quorum-control features within >

Re: [DRBD-user] drbdmanage-proxmox v2.0

2017-09-22 Thread Yannis Milios
Tried, and it seems to be working, but only when taking snapshots of a vm > which resides on a leader node. > > I can reproduce this as well...

Re: [DRBD-user] drbdmanage-proxmox v2.0

2017-09-20 Thread Yannis Milios
Sure, I get your point. Hope to see you come to an agreement again, there must be a way :) (don't want to start flame wars). On Wed, 20 Sep 2017 at 08:35, Roland Kammerer <roland.kamme...@linbit.com> wrote: > On Tue, Sep 19, 2017 at 05:26:02PM +0100, Yannis Milios wrote: > > >

Re: [DRBD-user] drbdmanage-proxmox v2.0

2017-09-19 Thread Yannis Milios
> I would also like to ask you as community which features you would like > to see in future releases. * I guess integrating a simple 'cluster health status' page in PVE gui is not a part of your development, right ? I mean something like showing the output of 'drbdadm status' or

Re: [DRBD-user] drbdmanage-proxmox v2.0

2017-09-19 Thread Yannis Milios
> > In this release we added: > - resize support > - creating and deleting of snapshots. Still, we consider creating (and deleting) > snapshots a good addition. Perfect! thanks a lot > I would also like to ask you as community which features you would like > to see in future releases, except

Re: [DRBD-user] drbd cluster initialize

2017-09-18 Thread Yannis Milios
If this is a clean pve installation, then you need to enable their repository first. By default it's not there ... https://pve.proxmox.com/wiki/Package_Repositories#_proxmox_ve_no_subscription_repository After adding it, you'll be able to find the kernel headers by doing an 'apt-get update' and
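A hedged sketch of those steps, assuming PVE 5 on Debian stretch (adjust the suite name for other releases):

    echo "deb http://download.proxmox.com/debian/pve stretch pve-no-subscription" \
      > /etc/apt/sources.list.d/pve-no-subscription.list
    apt-get update
    apt-get install pve-headers-$(uname -r)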

Re: [DRBD-user] Promote client-only to complete node ?

2017-09-13 Thread Yannis Milios
Usually when I need that, I live migrate the vm from the client node to the other node and then I use drbdmanage unassign/assign to convert client to a 'normal' satellite node with local storage. Then wait for sync to complete and move the vm back to the original node (if necessary). Yannis On

Re: [DRBD-user] Receiving a message "disk size of peer is too small"

2017-09-12 Thread Yannis Milios
>The partition takes all the disk, but in the new one, when I created it, was a little smaller. That's the case then. What about leaving the new disk unpartitioned and using it as /dev/sdb (instead of /dev/sdb1) ? Otherwise I guess 2 options remain: - Buy a bigger disk - Shrink the original in

Re: [DRBD-user] Receiving a message "disk size of peer is too small"

2017-09-12 Thread Yannis Milios
On Tue, Sep 12, 2017 at 1:50 AM, José Andrés Matamoros Guevara < amatamo...@ie-networks.com> wrote: > I have two different servers with drbd replication. One disk failed in one > of the servers. I have replaced it with a newer disk with physical sector > size of 4096 bytes. The old one is 512

Re: [DRBD-user] [DRBD9] on freshly created cluster, slave does not sync with master

2017-09-07 Thread Yannis Milios
Hi, Control volumes on the slave node, 10.9.4.192 , show up in 'Inconsistent' state 'before-adjust'. However 'after-adjust' they show up in 'Uptodate' state on all nodes, that is the expected state. Have you tried issuing 'drbdmanage restart -rq' on the slave node and then 'drbdmanage n' to

Re: [DRBD-user] drbdmanage for Proxmox 5.0

2017-09-07 Thread Yannis Milios
Yes, it's planned, check the below discussion: https://lists.gt.net/drbd/users/29178 On Wed, Sep 6, 2017 at 2:48 PM, Michał Szamocki wrote: > Hello, > > Are there any plans to support Proxmox 5.0? > > Greetings, > > Michał Szamocki > Cirrus - Aedificaremus Tibi > WWW:

Re: [DRBD-user] DRBD over ZFS - or the other way around?

2017-09-06 Thread Yannis Milios
...I mean by cloning it first, since snapshot does not appear as blockdev to the system but the clone does. On Wed, Sep 6, 2017 at 2:58 PM, Yannis Milios <yannis.mil...@gmail.com> wrote: > Even in that case I would prefer to assemble a new DRBD device ontop of > the ZVOL snapshot an

Re: [DRBD-user] DRBD over ZFS - or the other way around?

2017-09-06 Thread Yannis Milios
Even in that case I would prefer to assemble a new DRBD device ontop of the ZVOL snapshot and then mount the DRBD device instead :) On Wed, Sep 6, 2017 at 2:56 PM, Gionatan Danti <g.da...@assyoma.it> wrote: > On 06/09/2017 15:31, Yannis Milios wrote: > >> If your topology is

Re: [DRBD-user] DRBD over ZFS - or the other way around?

2017-09-06 Thread Yannis Milios
issue tracker that ZFS panics if you try to write to the snapshot or > something like that…) > > Other than that - yes, this should work fine. > > Jan > > > > On 6 Sep 2017, at 13:23, Gionatan Danti <g.da...@assyoma.it> wrote: > > > > On 19/08/2017 10:24,

Re: [DRBD-user] drbdmanage restart failed

2017-08-30 Thread Yannis Milios
Which version of drbdmanage/utils/kmod are you using? Many issues have been sorted out in latest versions. Yannis On Wed, 30 Aug 2017 at 07:58, 杨成伟 wrote: > Hi All, > > I'm following drbd 9.x user guide and has a 3 node(2 control, 1 > satellite) setup. > > However, I'm

Re: [DRBD-user] csums-alg,verify-alg algorithm

2017-08-29 Thread Yannis Milios
If there are only 2 nodes, it's better to stick to drbd8. Yannis On Tue, 29 Aug 2017 at 05:28, Digimer wrote: > On 2017-08-28 09:28 PM, 大川敬臣 wrote: > > I'm planning to build two MySQL DB servers that synchronized by DRBD 9.0 > > with RHEL 7.3. > > I want to enable
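For reference, the algorithms this thread is about are set in the resource's net section; a minimal sketch with an illustrative algorithm choice:

    resource r0 {
      net {
        verify-alg sha1;
        csums-alg  sha1;
      }
    }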

Re: [DRBD-user] DRBD Dual Primary + GFS2 for redundant KVM hosts

2017-08-26 Thread Yannis Milios
<g.da...@assyoma.it> wrote: On 26-08-2017 13:56 Yannis Milios wrote: > Have you considered a HA NFS over a 2-node DRBD8 cluster ? Should work > well on most hypervisors (qcow2,raw,vmdk based). > > Yannis > Hi Yannis, yes, I considered that. However, as this would be a conver
