Re: [pve-devel] [RFC v6 qemu-server] VM protection mode

2015-09-11 Thread Alen Grizonic
On 09/10/2015 06:44 PM, Dietmar Maurer wrote:

@@ -889,6 +897,7 @@ my $update_vm_api = sub {
     $modified->{$opt} = 1;
     $conf = PVE::QemuServer::load_config($vmid); # update/reload
     if ($opt =~ m/^unused/) {
+
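For context, a minimal sketch of how a protection flag could gate destructive config changes; the helper name, flag name, and error text below are assumptions for illustration, not necessarily what this RFC uses:

    # Hypothetical sketch - names and messages are assumptions, not the RFC's code:
    sub check_protection {
        my ($conf, $err_msg) = @_;
        # refuse destructive operations while the protection flag is set in the VM config
        die "$err_msg - VM is protected\n" if $conf->{protection};
    }

    # e.g. called before dropping an unused disk in $update_vm_api:
    # check_protection($conf, "can't remove unused disk 'unused0'");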

Re: [pve-devel] Extra Node Summary Information

2015-09-11 Thread Thomas Lamprecht
Hi, I appreciate your interest in PVE and open source development. To point it out first: http://pve.proxmox.com/wiki/Developer_Documentation may help you somewhat. While it may look like a lot at the beginning, after you initially set up a machine (maybe a VM with nested Proxmox), building

Re: [pve-devel] [PATCH] add influxdb stats plugin V2

2015-09-11 Thread Dietmar Maurer
applied

Re: [pve-devel] PVE 4 and problem with drbdmanage

2015-09-11 Thread Gilberto Nunes
That's right!

2015-09-11 13:07 GMT-03:00 Dietmar Maurer:
> > I performed a shutdown and a startup on both servers, and on PVE02, I get
> > this:
>
> I will try to reproduce - maybe related to the recent kernel update...
> I assume you run kernel 4.2 on both nodes?
>
> --

Re: [pve-devel] PVE 4 and problem with drbdmanage

2015-09-11 Thread Michael Rasmussen
Why is this thread duplicated on both the user and the developer list? Slightly annoying. -- Hilsen/Regards Michael Rasmussen Get my public GnuPG keys: michael rasmussen cc http://pgp.mit.edu:11371/pks/lookup?op=get=0xD3C9A00E mir datanom net http://pgp.mit.edu:11371/pks/lookup?op=get=0xE501F51C

Re: [pve-devel] PVE 4 and problem with drbdmanage

2015-09-11 Thread Dietmar Maurer
> I performed a shutdown and a startup on both servers, and on PVE02, I get this:

I will try to reproduce - maybe related to the recent kernel update...
I assume you run kernel 4.2 on both nodes?

Re: [pve-devel] Extra Node Summary Information

2015-09-11 Thread Thomas Lamprecht
Hi, Upfront note for the PVE developer list: I forgot to hit "reply all", so if Mark isn't subscribed to the devel list he didn't get the reply at all, hence this second email; sorry, my bad. :) I appreciate your interest in PVE and open source development. To point it out first

Re: [pve-devel] Proxmox VE 4.0 beta2 released!

2015-09-11 Thread Gilberto Nunes
drbdmanage works perfectly when I try to remove a "dead" volume... I used the --force option... I don't remember if this option already existed in previous versions, but now I am able to remove volumes from drbd... thanks

2015-09-10 12:05 GMT-03:00 Dietmar Maurer:
> > qcow2, as a

Re: [pve-devel] Extra Node Summary Information

2015-09-11 Thread Mark Davis
Thank you for your response. I have already set up a VM and been able to compile the stable-3 packages. I'm still trying to get my head around the git commands, but I guess that will come with time. I thought that Graphite and influxdb were for centralised collection of the metrics and

[pve-devel] PVE 4 and problem with drbdmanage

2015-09-11 Thread Gilberto Nunes
Hi all... I deployed two servers here, with PVE 4 and DRBD9. Some days ago, I was able to install and perform some live migrations. Now I can't do it anymore... I deployed a new VM with Windows XP, just for testing, but I can't migrate... I get this error:

Sep 11 10:57:03 starting migration of VM 101 to

Re: [pve-devel] Extra Node Summary Information

2015-09-11 Thread Alexandre DERUMIER
>> The first stage of what I want to do is make a way (possibly a plugin) that
>> enables Proxmox to collect a variable number of metrics per node so that it
>> could be stored in a RRD or sent to something like Graphite. Assuming my
>> understanding of Graphite and influxdb is correct.

yes.
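For illustration, a minimal sketch (not an existing PVE plugin; the host and metric names are made up) of pushing one per-node metric to Graphite's plaintext protocol on TCP port 2003:

    use IO::Socket::INET;

    # connect to the Graphite carbon receiver (hypothetical host)
    my $sock = IO::Socket::INET->new(
        PeerAddr => 'graphite.example.com',
        PeerPort => 2003,
        Proto    => 'tcp',
    ) or die "cannot connect to graphite: $!\n";

    # plaintext protocol: "<metric.path> <value> <unix timestamp>\n"
    printf {$sock} "pve.nodes.pve01.cpu %.2f %d\n", 0.42, time();
    close($sock);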

Re: [pve-devel] PVE 4 and problem with drbdmanage

2015-09-11 Thread Dietmar Maurer
> As you can see, disc 101 exists in PVE01 but not in PVE02.
> How can I force drbdmanage to sync or whatever to show disc 101 on both
> servers??

Is that reproducible? Did you wait until the initial sync was finished? Any hints in /var/log/syslog?

Re: [pve-devel] PVE 4 and problem with drbdmanage

2015-09-11 Thread Gilberto Nunes
I performed a shutdown and a startup on both servers, and on PVE02, I get this:

Sep 11 11:32:12 pve02 kernel: [ 3411.819799] drbd .drbdctrl/0 drbd0 pve01: uuid_compare()=-100 by rule 100
Sep 11 11:32:12 pve02 kernel: [ 3411.819810] drbd .drbdctrl/0 drbd0 pve01: helper command: /sbin/drbdadm

[pve-devel] [PATCH pve-ha-manager] simulator: fix random output of manager status

2015-09-11 Thread Thomas Lamprecht
Tell Data::Dumper to sort the keys before dumping. That fixes the manager status mess of jumping keys.

Signed-off-by: Thomas Lamprecht
---
 src/PVE/HA/Sim/RTHardware.pm | 1 +
 1 file changed, 1 insertion(+)

diff --git a/src/PVE/HA/Sim/RTHardware.pm
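For reference, the Sortkeys switch the patch describes, sketched in isolation (the $manager_status variable and its contents are just placeholders here):

    use Data::Dumper;

    # make Dumper emit hash keys in sorted order so repeated dumps of the
    # manager status stay stable instead of "jumping"; inside a sub one would
    # typically use "local" so the setting doesn't leak to other callers
    $Data::Dumper::Sortkeys = 1;

    my $manager_status = { node_status => {}, service_status => {} };  # placeholder data
    print Dumper($manager_status);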

Re: [pve-devel] PVE 4 and problem with drbdmanage

2015-09-11 Thread Gilberto Nunes
Yes... I waited for that... It worked fine until about 2/3 days ago... Since yesterday, I get this odd behavior... Right now, I tried to create another VM, and got this on PVE01:

pve01:/var/log# tail -f /var/log/syslog | grep drbd
Sep 11 11:21:44 pve01 kernel: [ 2838.206262] drbd .drbdctrl: Preparing

[pve-devel] [RFC] implement recovery policy for services

2015-09-11 Thread Thomas Lamprecht
RFC, variable names and some log messages may change or get removed.

We implement recovery policies almost identical to rgmanager's. The following policies kick in on a failed service start (a sketch follows below):

* restart: restart a service on the same node, whereas restarts are
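The snippet above is cut off by the archive; as a rough sketch only (variable and policy names are assumptions, not the RFC's code), restart/relocate counters could gate recovery of a failed service start like this:

    # per-service recovery bookkeeping, e.g. $sd = { restart_tries => 0, relocate_tries => 0 }
    my $MAX_RESTART  = 1;   # restarts allowed on the same node
    my $MAX_RELOCATE = 1;   # relocations allowed to other nodes

    sub next_recovery_action {
        my ($sd) = @_;
        return 'restart'  if $sd->{restart_tries}  < $MAX_RESTART;
        return 'relocate' if $sd->{relocate_tries} < $MAX_RELOCATE;
        return 'error';     # give up and put the service into the error state
    }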

Re: [pve-devel] [RFC] implement recovery policy for services

2015-09-11 Thread Thomas Lamprecht
The relocate implementation in this RFC may result in a relocation to a node which was already tried. Also, when the service is placed in a group with fewer members than the number of MAX_RELOCATE, or with even only one node member, this also results in two tries. This behavior is known to me and

Re: [pve-devel] [RFC] implement recovery policy for services

2015-09-11 Thread Dietmar Maurer
> The relocate implementation in this RFC may result in a relocation to a
> node which was already tried. Also, when the service is placed in a
> group with fewer members than the number of MAX_RELOCATE, or with even only
> one node member, this also results in two tries.
>
> This behavior is known