Re: [pve-devel] [PATCH] balloon: use qom-get for guest balloon statistics V5

2015-03-06 Thread Alexandre DERUMIER
Have you tested it?

I'm unsure about actual = stat-total-memory.

It seems that the guest always reports a little less memory (around 100MB)
than the balloon target.


----- Original Mail -----
From: dietmar diet...@proxmox.com
To: aderumier aderum...@odiso.com, pve-devel pve-devel@pve.proxmox.com
Sent: Friday, March 6, 2015 09:56:13
Subject: Re: [pve-devel] [PATCH] balloon: use qom-get for guest balloon statistics V5

applied, thanks! 
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH] qom ballon fix v2

2015-03-06 Thread Dietmar Maurer
 But we wanted to avoid multiple calls ... 
 
 Do you think that a small extra call can slow down pvestatd?

At least it does not make it faster.

I will try to fix the qemu patch ...



Re: [pve-devel] Proxmox VE developer release for Debian jessie

2015-03-06 Thread Dietmar Maurer

 On March 6, 2015 at 1:31 PM Alexandre DERUMIER aderum...@odiso.com wrote:
 
 
 How does that help exactly? You still need to update corosync on all nodes
 at the same time?
 
 yes, but I don't need to shutdown my vms.
 
 If I directly update a node to jessie with corosync2, how can I migrate a vm
 to this jessie node ?
 
 (If the corosync1-wheezy source node can't speak with the
 corosync2-jessie-target node)

You probably need some downtime ...

So I guess what you suggest makes sense, but it requires some work to
implement it. You would need to update several packages: libqb, corosync,
pve-cluster, ...



Re: [pve-devel] [PATCH] qom ballon fix v2

2015-03-06 Thread Alexandre DERUMIER
I will try to fix the qemu patch ... 

ok, thanks !

----- Original Mail -----
From: dietmar diet...@proxmox.com
To: aderumier aderum...@odiso.com
Cc: pve-devel pve-devel@pve.proxmox.com
Sent: Friday, March 6, 2015 15:44:25
Subject: Re: [pve-devel] [PATCH] qom ballon fix v2

 But we wanted to avoid multiple calls ... 
 
 Do you think that a small extra call can slow down pvestatd?

At least it does not make it faster. 

I will try to fix the qemu patch ... 


Re: [pve-devel] Proxmox VE developer release for Debian jessie

2015-03-06 Thread Alexandre DERUMIER
You probably need some downtime ... 

so, downtime for the whole cluster?
I manage around 800 VMs... It's impossible for me to manage that with all my
customers.




----- Original Mail -----
From: dietmar diet...@proxmox.com
To: aderumier aderum...@odiso.com
Cc: pve-devel pve-devel@pve.proxmox.com
Sent: Friday, March 6, 2015 15:43:32
Subject: Re: [pve-devel] Proxmox VE developer release for Debian jessie

 On March 6, 2015 at 1:31 PM Alexandre DERUMIER aderum...@odiso.com wrote: 
 
 
 How does that help exactly? You still need to update corosync on all nodes
 at the same time?
 
 yes, but I don't need to shutdown my vms. 
 
 If I directly update a node to jessie with corosync2, how can I migrate a vm 
 to this jessie node ? 
 
 (If the corosync1-wheezy source node can't speak with the
 corosync2-jessie-target node)

You probably need some downtime ... 

So I guess what you suggest makes sense, but it requires some work to
implement it. You would need to update several packages: libqb, corosync,
pve-cluster, ...


Re: [pve-devel] [PATCH] balloon: use qom-get for guest balloon statistics V5

2015-03-06 Thread Dietmar Maurer
 Have you tested it?

No. I assumed V5 is already well tested ;-)

I already applied it to both branches, and uploaded packages to the pvetest
repository.

Please mark patches if you do not want me to commit them.
 
 I'm unsure about actual = stat-total-memory.
 
 It seems that the guest always reports a little less memory (around 100MB)
 than the balloon target.

But this does no harm, and it does not explain the problem the user reported
on the forum?



Re: [pve-devel] [PATCH] balloon: use qom-get for guest balloon statistics V5

2015-03-06 Thread Alexandre DERUMIER
No. I assumed V5 is already well tested ;-) 
Oops, sorry, next time I'll mark it as RFC.



I think I have a little bug:

qmp query-balloon returns bytes,

+# @free_mem: #optional amount of memory (in bytes) free in the guest
+#
+# @total_mem: #optional amount of memory (in bytes) visible to the guest
+#
+# @max_mem: amount of memory (in bytes) assigned to the guest

but hmp uses MBytes.

I need to fix that.

I'll test by debugging pvestatd before sending the final patch.
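The unit mismatch boils down to a simple conversion. A small illustrative sketch (in Python, not the Perl of the real PVE code) of what the fixed patch does with the byte values from QMP's balloon guest-stats; the sample byte values are made up for demonstration:

```python
# Illustrative: QMP's 'stat-total-memory'/'stat-free-memory' come back in
# bytes, while pvestatd stores MiB, hence the int(.../1024/1024).

def balloon_stats_to_mib(stats):
    """Convert QMP balloon guest-stats byte values to MiB."""
    return {
        'balloon': int(stats['stat-total-memory'] / 1024 / 1024),
        'freemem': int(stats['stat-free-memory'] / 1024 / 1024),
    }

# A guest reporting 4 GiB visible and 2 GiB free, as raw QMP byte values:
d = balloon_stats_to_mib({
    'stat-total-memory': 4 * 1024**3,
    'stat-free-memory': 2 * 1024**3,
})
```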


----- Original Mail -----
From: dietmar diet...@proxmox.com
To: aderumier aderum...@odiso.com
Cc: pve-devel pve-devel@pve.proxmox.com
Sent: Friday, March 6, 2015 11:06:29
Subject: Re: [pve-devel] [PATCH] balloon: use qom-get for guest balloon statistics V5

 Have you tested it?

No. I assumed V5 is already well tested ;-) 

I already applied it to both branches, and uploaded packages to the pvetest 
repository. 

Please mark patches if you do not want me to commit them.

 I'm unsure about actual = stat-total-memory. 
 
 It seems that the guest always reports a little less memory (around 100MB)
 than the balloon target.

But this does no harm, and it does not explain the problem the user reported
on the forum?


Re: [pve-devel] Proxmox VE developer release for Debian jessie

2015-03-06 Thread Alexandre DERUMIER
Maybe:

Build a new empty cluster with jessie, with the same storage.cfg, copy ssh
keys, and allow live migration between both clusters.

It should be easy:
we simply need to scp the vm config file to the target host instead of
moving it, and remove it from the source host.

Like this it would be easy to test the new cluster with jessie, and migrate
slowly.

What do you think about it?
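The proposed move could look roughly like the sketch below. This is purely hypothetical, not an existing PVE implementation: the node names are invented, and the real migration code would do far more (locking, storage checks, the live migration itself); it only illustrates the "scp the config to the target, then remove it from the source" idea:

```python
# Hypothetical sketch of the cross-cluster config move: instead of renaming
# the config inside one cluster (as a normal intra-cluster migration does),
# push it to the matching node on the new cluster and drop it locally.

def move_config_commands(vmid, source_node, target_node):
    """Return the shell commands the config-move step would run."""
    conf = f"/etc/pve/nodes/{source_node}/qemu-server/{vmid}.conf"
    target_conf = f"/etc/pve/nodes/{target_node}/qemu-server/{vmid}.conf"
    return [
        ["scp", conf, f"root@{target_node}:{target_conf}"],  # push config to new cluster
        ["rm", conf],                                        # drop it from the source host
    ]

cmds = move_config_commands(100, "wheezy1", "jessie1")
```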



----- Original Mail -----
From: aderumier aderum...@odiso.com
To: dietmar diet...@proxmox.com
Cc: pve-devel pve-devel@pve.proxmox.com
Sent: Friday, March 6, 2015 18:03:39
Subject: Re: [pve-devel] Proxmox VE developer release for Debian jessie

You probably need some downtime ... 

so, downtime for the whole cluster?
I manage around 800 VMs... It's impossible for me to manage that with all my
customers.




----- Original Mail -----
From: dietmar diet...@proxmox.com
To: aderumier aderum...@odiso.com
Cc: pve-devel pve-devel@pve.proxmox.com
Sent: Friday, March 6, 2015 15:43:32
Subject: Re: [pve-devel] Proxmox VE developer release for Debian jessie

 On March 6, 2015 at 1:31 PM Alexandre DERUMIER aderum...@odiso.com wrote: 
 
 
 How does that help exactly? You still need to update corosync on all nodes
 at the same time?
 
 yes, but I don't need to shutdown my vms. 
 
 If I directly update a node to jessie with corosync2, how can I migrate a vm 
 to this jessie node ? 
 
 (If the corosync1-wheezy source node can't speak with the
 corosync2-jessie-target node)

You probably need some downtime ... 

So I guess what you suggest makes sense, but it requires some work to
implement it. You would need to update several packages: libqb, corosync,
pve-cluster, ...


Re: [pve-devel] Proxmox VE developer release for Debian jessie

2015-03-06 Thread Dietmar Maurer
 You probably need some downtime ... 
 
 so, downtime for the whole cluster?
 I manage around 800 VMs... It's impossible for me to manage that with all my
 customers.

As already mentioned, it would be possible to create special packages for a
smooth upgrade.
We just need to find someone to do all the work (and test it).



Re: [pve-devel] Proxmox VE developer release for Debian jessie

2015-03-06 Thread Dietmar Maurer


 Build a new empty cluster with jessie, with the same storage.cfg, copy ssh
 keys, and allow live migration between both clusters.
 
 It should be easy:
 we simply need to scp the vm config file to the target host instead of
 moving it, and remove it from the source host.
 
 Like this it would be easy to test the new cluster with jessie, and migrate
 slowly.
 
 What do you think about it?

Locking with different clusters does not work, so this is probably very
dangerous.



[pve-devel] [PATCH] balloon: use qom-get for guest balloon statistics V5

2015-03-06 Thread Alexandre Derumier
changelog:

we use MB, not bytes

$d->{balloon} = int($info->{stats}->{'stat-total-memory'}/1024/1024);
$d->{freemem} = int($info->{stats}->{'stat-free-memory'}/1024/1024);

Signed-off-by: Alexandre Derumier aderum...@odiso.com
---
 PVE/QemuServer.pm |   21 +++--
 1 file changed, 11 insertions(+), 10 deletions(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index bb7a7f3..2509a0a 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -2490,17 +2490,12 @@ sub vmstatus {
 	my ($vmid, $resp) = @_;
 
 	my $info = $resp->{'return'};
-	return if !$info->{max_mem};
-
 	my $d = $res->{$vmid};
 
-	# use memory assigned to VM
-	$d->{maxmem} = $info->{max_mem};
-	$d->{balloon} = $info->{actual};
-
-	if (defined($info->{total_mem}) && defined($info->{free_mem})) {
-	    $d->{mem} = $info->{total_mem} - $info->{free_mem};
-	    $d->{freemem} = $info->{free_mem};
+	if (defined($info->{stats}->{'stat-total-memory'}) && defined($info->{stats}->{'stat-free-memory'})) {
+	    $d->{balloon} = int($info->{stats}->{'stat-total-memory'}/1024/1024);
+	    $d->{freemem} = int($info->{stats}->{'stat-free-memory'}/1024/1024);
+	    $d->{mem} = $d->{maxmem} - $d->{freemem};
 	}
 
     };
@@ -2524,7 +2519,13 @@ sub vmstatus {
 	$qmpclient->queue_cmd($vmid, $blockstatscb, 'query-blockstats');
 	# this fails if balloon driver is not loaded, so this must be
 	# the last command (following commands are aborted if this fails).
-	$qmpclient->queue_cmd($vmid, $ballooncb, 'query-balloon');
+	# maybe it is fixed by
+	# http://git.qemu.org/?p=qemu.git;a=commit;h=38dbd48b247ebe05bdc6ef52ccdc60cc21274877
+
+	$qmpclient->queue_cmd($vmid, $ballooncb, 'qom-get',
+			      path => "machine/peripheral/balloon0",
+			      property => "guest-stats");
+
 
 	my $status = 'unknown';
 	if (!defined($status = $resp->{'return'}->{status})) {
-- 
1.7.10.4
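On the wire, the queued balloon callback amounts to a QMP `qom-get` request for the balloon device's `guest-stats` property. A rough illustrative sketch (plain Python, not PVE code) of the JSON message; actually sending it would go through the VM's QMP socket, which is omitted here:

```python
import json

def qom_get_balloon_stats_msg():
    """Build a QMP qom-get message with the same arguments the patch queues."""
    return json.dumps({
        'execute': 'qom-get',
        'arguments': {
            'path': 'machine/peripheral/balloon0',
            'property': 'guest-stats',
        },
    })

msg = qom_get_balloon_stats_msg()
```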



[pve-devel] [PATCH_V5] Bug Fix #602: now zfs will wait 5 sec if error msg is dataset is busy

2015-03-06 Thread Wolfgang Link

Signed-off-by: Wolfgang Link w.l...@proxmox.com
---
 PVE/Storage/ZFSPoolPlugin.pm |   17 +++--
 1 file changed, 15 insertions(+), 2 deletions(-)

diff --git a/PVE/Storage/ZFSPoolPlugin.pm b/PVE/Storage/ZFSPoolPlugin.pm
index 5cbd1b2..30efe58 100644
--- a/PVE/Storage/ZFSPoolPlugin.pm
+++ b/PVE/Storage/ZFSPoolPlugin.pm
@@ -166,7 +166,7 @@ sub zfs_request {
 	$msg .= "$line\n";
     };
 
-    run_command($cmd, outfunc => $output, timeout => $timeout);
+    run_command($cmd, errmsg => "ZFS ERROR", outfunc => $output, timeout => $timeout);
 
     return $msg;
 }
@@ -291,7 +291,20 @@ sub zfs_create_zvol {
 sub zfs_delete_zvol {
     my ($class, $scfg, $zvol) = @_;
 
-    $class->zfs_request($scfg, undef, 'destroy', '-r', "$scfg->{pool}/$zvol");
+    for (my $i = 0; $i < 5; $i++) {
+
+	eval { $class->zfs_request($scfg, undef, 'destroy', '-r', "$scfg->{pool}/$zvol"); };
+	if (my $err = $@) {
+	    if ($err =~ m/^ZFS ERROR:(.*): dataset is busy.*/) {
+		sleep(1);
+		die $err if $i == 4;
+	    } else {
+		die $err;
+	    }
+	} else {
+	    last;
+	}
+    }
 }
 
 sub zfs_list_zvol {
-- 
1.7.10.4




Re: [pve-devel] balloon bug in qemu 2.1 ?

2015-03-06 Thread Andrew Thrift
Hi Alexandre,

Yes, the affected VMs seem to have -machine type=pc-i440fx-2.1 in their KVM
parameters.

I found a bunch that have machine: pc-i440fx-1.4 in the config and KVM
parameters, and they output the full balloon stats, e.g.:

# info balloon
balloon: actual=4096 max_mem=4096 total_mem=4095 free_mem=2221
mem_swapped_in=0 mem_swapped_out=0 major_page_faults=0
minor_page_faults=43 last_update=1425629742




On Fri, Mar 6, 2015 at 4:17 AM, Alexandre DERUMIER aderum...@odiso.com
wrote:

 Ok, I spoke too fast.
 It's not related to this patch.


 on current pve-qemu-kvm 2.1.

 balloon is working fine
 # info balloon
 balloon: actual=1024 max_mem=1024 total_mem=1002 free_mem=941
 mem_swapped_in=0 mem_swapped_out=0 major_page_faults=120
 minor_page_faults=215272 last_update=1425568324


 But if the vm (qemu 2.1) is started with
  -machine type=pc-i440fx-2.1
 or
  -machine type=pc-i440fx-2.0   (this is the case when you do a live
 migration)

 It's not working

 # info balloon
 balloon: actual=1024 max_mem=1024



 @Andrew, have the VMs where you see the info balloon bug been migrated
 from an old proxmox (without stop/start)?
 Can you check via ssh whether the kvm process has -machine type in the
 command line?


 ----- Original Mail -----
 From: aderumier aderum...@odiso.com
 To: Andrew Thrift and...@networklabs.co.nz
 Cc: pve-devel pve-devel@pve.proxmox.com
 Sent: Thursday, March 5, 2015 16:05:03
 Subject: Re: [pve-devel] balloon bug in qemu 2.1 ?

 I need to do more tests,

 but it seems that this commit (applied on qemu 2.2 but not on qemu 2.1)
 
 http://git.qemu.org/?p=qemu.git;a=commit;h=22644cd2c60151a964d9505f4c5f7baf845f20d8
 
 fixes the problem with qemu 2.1.
 (I have tested with the patch, balloon works fine; I need to test without
 the patch to compare.)





 ----- Original Mail -----
 From: aderumier aderum...@odiso.com
 To: Andrew Thrift and...@networklabs.co.nz
 Cc: pve-devel pve-devel@pve.proxmox.com
 Sent: Thursday, March 5, 2015 15:41:51
 Subject: Re: [pve-devel] balloon bug in qemu 2.1 ?

 in the proxmox virtio-balloon-fix-query.patch,
 
 we have
 
 hw/virtio/virtio-balloon.c
 
 +
 + if (!(balloon_stats_enabled(dev) && balloon_stats_supported(dev) &&
 + dev->stats_last_update)) {
 + return;
 + }
 +
 + info->last_update = dev->stats_last_update;
 + info->has_last_update = true;
 +
 + info->mem_swapped_in = dev->stats[VIRTIO_BALLOON_S_SWAP_IN];
 + info->has_mem_swapped_in = info->mem_swapped_in >= 0 ? true : false;
 +
 + info->mem_swapped_out = dev->stats[VIRTIO_BALLOON_S_SWAP_OUT];
 + info->has_mem_swapped_out = info->mem_swapped_out >= 0 ? true : false;
 +
 + info->major_page_faults = dev->stats[VIRTIO_BALLOON_S_MAJFLT];
 + info->has_major_page_faults = info->major_page_faults >= 0 ? true : false;
 +
 + info->minor_page_faults = dev->stats[VIRTIO_BALLOON_S_MINFLT];
 + info->has_minor_page_faults = info->minor_page_faults >= 0 ? true : false;
 +
 
 so, that means that in qemu 2.1
 
 + if (!(balloon_stats_enabled(dev) && balloon_stats_supported(dev) &&
 + dev->stats_last_update)) {
 + return;
 + }
 
 one of these 3 functions is not working


 ----- Original Mail -----
 From: Andrew Thrift and...@networklabs.co.nz
 To: aderumier aderum...@odiso.com
 Cc: pve-devel pve-devel@pve.proxmox.com
 Sent: Thursday, March 5, 2015 15:17:58
 Subject: Re: [pve-devel] balloon bug in qemu 2.1 ?

 Hi Alexandre,
 This may be the cause of the crashes we have been experiencing. We
 reported them here:


 http://forum.proxmox.com/threads/21276-Kernel-Oops-Panic-on-3-10-5-and-3-10-7-Kernels

 These only started happening since we moved to qemu-2.1.x and we get the
 same output:

 # info balloon
 balloon: actual=16384 max_mem=16384

 and have noticed VMs with only 1-2GB usage in the guest reporting almost
 the entire amount of ram used to the host, even though we have the latest
 balloon driver loaded and the blnsvr.exe service running.



 On Thu, Mar 5, 2015 at 11:33 PM, Alexandre DERUMIER  aderum...@odiso.com
  wrote:


 Hi,

 I have seen a bug report here:
 
 http://forum.proxmox.com/threads/2-RAM-Problem-since-Upgrade-to-3-4?p=108367&posted=1#post108367

 about balloon.


 on my qemu 2.2

 #info balloon
 balloon: actual=1024 max_mem=2048 total_mem=985 free_mem=895
 mem_swapped_in=0 mem_swapped_out=0 major_page_faults=301
 minor_page_faults=61411 last_update=1425550707


 same vm with qemu 2.2 + -machine type=pc-i440fx-2.1
 #info balloon
 balloon: actual=1024 max_mem=2048



 (Don't have true qemu 2.1 for test currently)



[pve-devel] [PATCH_V6] Bug Fix #602: now zfs will wait 5 sec if error msg is dataset is busy

2015-03-06 Thread Wolfgang Link

Signed-off-by: Wolfgang Link w.l...@proxmox.com
---
 PVE/Storage/ZFSPoolPlugin.pm |   17 +++--
 1 file changed, 15 insertions(+), 2 deletions(-)

diff --git a/PVE/Storage/ZFSPoolPlugin.pm b/PVE/Storage/ZFSPoolPlugin.pm
index 5cbd1b2..30efe58 100644
--- a/PVE/Storage/ZFSPoolPlugin.pm
+++ b/PVE/Storage/ZFSPoolPlugin.pm
@@ -166,7 +166,7 @@ sub zfs_request {
 	$msg .= "$line\n";
     };
 
-    run_command($cmd, outfunc => $output, timeout => $timeout);
+    run_command($cmd, errmsg => "ZFS ERROR", outfunc => $output, timeout => $timeout);
 
     return $msg;
 }
@@ -291,7 +291,20 @@ sub zfs_create_zvol {
 sub zfs_delete_zvol {
     my ($class, $scfg, $zvol) = @_;
 
-    $class->zfs_request($scfg, undef, 'destroy', '-r', "$scfg->{pool}/$zvol");
+    for (my $i = 0; $i < 5; $i++) {
+
+	eval { $class->zfs_request($scfg, undef, 'destroy', '-r', "$scfg->{pool}/$zvol"); };
+	if (my $err = $@) {
+	    if ($err =~ m/^ZFS ERROR:(.*): dataset is busy.*/) {
+		sleep(1);
+		die $err if $i == 4;
+	    } else {
+		die $err;
+	    }
+	} else {
+	    last;
+	}
+    }
 }
 
 sub zfs_list_zvol {
-- 
1.7.10.4




[pve-devel] [PATCH_V6] Bug Fix #602: now zfs will wait 5 sec if error msg is dataset is busy

2015-03-06 Thread Wolfgang Link

Signed-off-by: Wolfgang Link w.l...@proxmox.com
---
 PVE/Storage/ZFSPoolPlugin.pm |   20 ++--
 1 file changed, 18 insertions(+), 2 deletions(-)

diff --git a/PVE/Storage/ZFSPoolPlugin.pm b/PVE/Storage/ZFSPoolPlugin.pm
index 5cbd1b2..d187e23 100644
--- a/PVE/Storage/ZFSPoolPlugin.pm
+++ b/PVE/Storage/ZFSPoolPlugin.pm
@@ -166,7 +166,7 @@ sub zfs_request {
 	$msg .= "$line\n";
     };
 
-    run_command($cmd, outfunc => $output, timeout => $timeout);
+    run_command($cmd, errmsg => "zfs error", outfunc => $output, timeout => $timeout);
 
     return $msg;
 }
@@ -291,7 +291,23 @@ sub zfs_create_zvol {
 sub zfs_delete_zvol {
     my ($class, $scfg, $zvol) = @_;
 
-    $class->zfs_request($scfg, undef, 'destroy', '-r', "$scfg->{pool}/$zvol");
+    my $err;
+
+    for (my $i = 0; $i < 6; $i++) {
+
+	eval { $class->zfs_request($scfg, undef, 'destroy', '-r', "$scfg->{pool}/$zvol"); };
+	if ($err = $@) {
+	    if ($err =~ m/^zfs error:(.*): dataset is busy.*/) {
+		sleep(1);
+	    } else {
+		die $err;
+	    }
+	} else {
+	    last;
+	}
+    }
+
+    die $err if $err;
 }
 
 sub zfs_list_zvol {
-- 
1.7.10.4
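The retry pattern in this patch can be sketched generically: attempt the destroy up to 6 times, sleeping between attempts while the error says "dataset is busy", and re-raising any other error immediately. The sketch below is illustrative Python, not the Perl above; `fake_destroy` is an invented stand-in for `zfs_request` that fails twice before succeeding:

```python
import re
import time

def destroy_with_retry(destroy, attempts=6, delay=0):
    """Retry `destroy` while it fails with a transient 'dataset is busy' error."""
    err = None
    for _ in range(attempts):
        try:
            destroy()
            return True                       # destroy succeeded, stop retrying
        except RuntimeError as e:
            if re.match(r'^zfs error:.*: dataset is busy', str(e)):
                err = e                       # transient error: wait and retry
                time.sleep(delay)             # the real code sleeps 1 second
            else:
                raise                         # any other error is fatal at once
    raise err                                 # still busy after all attempts

calls = {'n': 0}
def fake_destroy():
    calls['n'] += 1
    if calls['n'] < 3:
        raise RuntimeError('zfs error: rpool/vm-100-disk-1: dataset is busy')

ok = destroy_with_retry(fake_destroy)
```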




Re: [pve-devel] [PATCH_V6] Bug Fix #602: now zfs will wait 5 sec if error msg is dataset is busy

2015-03-06 Thread Dietmar Maurer

Applied, thanks!



Re: [pve-devel] [PATCH] balloon: use qom-get for guest balloon statistics V5

2015-03-06 Thread Dietmar Maurer

applied, thanks!
