Have you tested it?
I'm unsure about actual = stat-total-memory.
It seems that the guest always reports a little less memory (around 100 MB) than
the balloon target.
- Original Message -
From: dietmar diet...@proxmox.com
To: aderumier aderum...@odiso.com, pve-devel pve-devel@pve.proxmox.com
But we wanted to avoid multiple calls ...
Do you think that a small extra call can slow down pvestatd?
At least it does not make it faster.
I will try to fix the qemu patch ...
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
On March 6, 2015 at 1:31 PM Alexandre DERUMIER aderum...@odiso.com wrote:
How does that help exactly? You still need to update corosync on all nodes
at the same time?
Yes, but I don't need to shut down my VMs.
If I directly update a node to jessie with corosync2, how can I migrate
I will try to fix the qemu patch ...
OK, thanks!
- Original Message -
From: dietmar diet...@proxmox.com
To: aderumier aderum...@odiso.com
Cc: pve-devel pve-devel@pve.proxmox.com
Sent: Friday, March 6, 2015 15:44:25
Subject: Re: [pve-devel] [PATCH] qom ballon fix v2
But we wanted to avoid
You probably need some downtime ...
So, downtime for the whole cluster?
I manage around 800 VMs. It's impossible for me to manage that with all my
customers.
- Original Message -
From: dietmar diet...@proxmox.com
To: aderumier aderum...@odiso.com
Cc: pve-devel
Have you tested it?
No. I assumed V5 is already well tested ;-)
I already applied it to both branches, and uploaded packages to the pvetest
repository.
Please mark patches if you do not want me to commit them.
I'm unsure about actual = stat-total-memory.
It seems that the guest always
No. I assumed V5 is already well tested ;-)
Oops, sorry, next time I'll mark it as RFC.
I think I have a little bug:
QMP query-balloon returns bytes,
+# @free_mem: #optional amount of memory (in bytes) free in the guest
+#
+# @total_mem: #optional amount of memory (in bytes) visible to the guest
Maybe:
Build a new empty cluster with jessie, with the same storage.cfg, copy SSH keys,
and allow live migration between both clusters.
It should be easy:
we simply need to scp the VM config file to the target host instead of moving it,
and remove it from the source host.
Like this it can be easy to
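The "scp then remove" hand-off described above could be sketched roughly as
below. This is only an illustration, not PVE code: the function name, the
config path layout, and the idea of swapping the copy command out for testing
are all assumptions.

```shell
#!/bin/sh
# Hypothetical sketch: after a successful live migration to the new
# cluster, copy the VM config to the target and delete the source copy
# only if the copy succeeded. COPY_CMD defaults to scp for the real
# cross-cluster case; it can be set to cp for a local dry run.
COPY_CMD="${COPY_CMD:-scp}"

move_vm_conf() {
    src="$1"   # e.g. /etc/pve/qemu-server/100.conf on the source node
    dest="$2"  # e.g. root@target:/etc/pve/qemu-server/100.conf
    "$COPY_CMD" "$src" "$dest" && rm -- "$src"
}
```

The `&&` matters: the source config is removed only when the transfer
succeeded, so a failed copy never leaves the VM without a config anywhere.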
As already mentioned, it would be possible to create special packages for a smooth
upgrade.
We just need to find someone doing
changelog:
we use MB, not bytes

$d->{balloon} = int($info->{stats}->{'stat-total-memory'}/1024/1024);
$d->{freemem} = int($info->{stats}->{'stat-free-memory'}/1024/1024);
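The conversion in the hunk above is plain integer division by 1024 twice;
the same arithmetic can be checked standalone (the function name here is
just for illustration):

```shell
#!/bin/sh
# query-balloon reports byte counts; the status code wants whole MB,
# so divide by 1024 twice with integer (truncating) arithmetic.
bytes_to_mb() {
    printf '%s\n' $(( $1 / 1024 / 1024 ))
}
```

Note the truncation also explains off-by-one displays: a guest reporting
slightly under 4096 MB in bytes rounds down to 4095.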
Signed-off-by: Alexandre Derumier aderum...@odiso.com
---
PVE/QemuServer.pm | 21 +++--
1 file changed, 11
Signed-off-by: Wolfgang Link w.l...@proxmox.com
---
PVE/Storage/ZFSPoolPlugin.pm | 17 +++--
1 file changed, 15 insertions(+), 2 deletions(-)
diff --git a/PVE/Storage/ZFSPoolPlugin.pm b/PVE/Storage/ZFSPoolPlugin.pm
index 5cbd1b2..30efe58 100644
--- a/PVE/Storage/ZFSPoolPlugin.pm
Hi Alexandre,
Yes, the affected VMs seem to have -machine type=pc-i440fx-2.1 in their KVM
parameters.
I found a bunch that have machine: pc-i440fx-1.4 in the config and KVM
parameters, and they output the full balloon stats, e.g.:
# info balloon
balloon: actual=4096 max_mem=4096 total_mem=4095
Signed-off-by: Wolfgang Link w.l...@proxmox.com
---
PVE/Storage/ZFSPoolPlugin.pm | 20 ++--
1 file changed, 18 insertions(+), 2 deletions(-)
diff --git a/PVE/Storage/ZFSPoolPlugin.pm b/PVE/Storage/ZFSPoolPlugin.pm
index 5cbd1b2..d187e23 100644
---
Applied, thanks!
applied, thanks!