Re: [pve-devel] [PATCH] auto ballooning with mom algorithm implementation

2012-12-28 Thread Dietmar Maurer
Just uploaded a new implementation - should work better now.

> I have finally uploaded an auto-ballooning implementation for pvestatd.
> 
> The algorithm uses the new 'shares' property to distribute RAM accordingly.
> 
> Feel free to test :-)
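The mail doesn't spell out how the 'shares' property drives the distribution; as a rough illustration (an assumption on my part, not the actual pvestatd code), spare host RAM could be split among ballooning guests in proportion to their shares values:

```python
# Illustrative sketch only -- not the actual pvestatd code: split spare host
# RAM among guests in proportion to their 'shares' values.
def distribute_ram(spare_bytes, shares_by_vmid):
    """shares_by_vmid: dict mapping vmid -> shares; returns vmid -> extra bytes."""
    total = sum(shares_by_vmid.values())
    if total == 0:
        return {vmid: 0 for vmid in shares_by_vmid}
    return {vmid: spare_bytes * s // total
            for vmid, s in shares_by_vmid.items()}
```

With this rule, a VM configured with twice the shares of another would receive twice the extra RAM.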

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH] auto ballooning with mom algorithm implementation

2012-12-21 Thread Alexandre DERUMIER
I did some tests yesterday with around 10 VMs, heavily overcommitting the
host memory (mixing Windows and Linux VMs), and it seems to work fine.

I'll try to do more extensive tests next week.


- Original Message - 

From: Dietmar Maurer diet...@proxmox.com 
To: Alexandre Derumier aderum...@odiso.com, pve-devel@pve.proxmox.com 
Sent: Thursday, 20 December 2012 14:02:38 
Subject: RE: [pve-devel] [PATCH] auto ballooning with mom algorithm implementation 

Hi Alexandre, 

I have finally uploaded an auto-ballooning implementation for pvestatd. 

The algorithm uses the new 'shares' property to distribute RAM accordingly. 

Feel free to test :-) 

- Dietmar 


Re: [pve-devel] [PATCH] auto ballooning with mom algorithm implementation

2012-12-21 Thread Alexandre DERUMIER
The current algorithm has some bugs - I will try to improve it further. 

Not sure about that, but I have seen some VMs ballooning more than others
(with the same shares value).


- Original Message - 

From: Dietmar Maurer diet...@proxmox.com 
To: Alexandre DERUMIER aderum...@odiso.com 
Cc: pve-devel@pve.proxmox.com 
Sent: Friday, 21 December 2012 09:21:13 
Subject: RE: [pve-devel] [PATCH] auto ballooning with mom algorithm implementation 

> I did some tests yesterday with around 10 VMs, heavily overcommitting the
> host memory (mixing Windows and Linux VMs), and it seems to work fine.
> 
> I'll try to do more extensive tests next week.

The current algorithm has some bugs - I will try to improve it further. 


Re: [pve-devel] [PATCH] auto ballooning with mom algorithm implementation

2012-12-20 Thread Alexandre DERUMIER
Ok, thanks, I'll try that this afternoon!



- Original Message - 

From: Dietmar Maurer diet...@proxmox.com 
To: Alexandre Derumier aderum...@odiso.com, pve-devel@pve.proxmox.com 
Sent: Thursday, 20 December 2012 14:02:38 
Subject: RE: [pve-devel] [PATCH] auto ballooning with mom algorithm implementation 

Hi Alexandre, 

I have finally uploaded an auto-ballooning implementation for pvestatd. 

The algorithm uses the new 'shares' property to distribute RAM accordingly. 

Feel free to test :-) 

- Dietmar 


Re: [pve-devel] [PATCH] auto ballooning with mom algorithm implementation

2012-12-17 Thread Dietmar Maurer
> > Just committed the ballooning stats patches.
> Ok, thanks.
> > Also added a fix so that we can set the polling interval at VM startup.
> Great!
> 
> Any news on getting all stats values in one qom-get?

Just uploaded a patch for that.
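For readers following along, the QMP traffic involved looks roughly like this (a sketch: 'stats-polling-interval' follows the patch in this thread, while 'guest-stats' is the combined all-in-one property name used by later QEMU releases; the name introduced by the patch referenced here may differ):

```python
import json

# Sketch of the QMP messages for balloon stats polling and a combined query.
# Property names are assumptions -- verify against your QEMU version.
def qmp(execute, **arguments):
    return json.dumps({"execute": execute, "arguments": arguments})

# enable periodic stats collection in the guest balloon driver
set_poll = qmp("qom-set", path="machine/peripheral/balloon0",
               property="stats-polling-interval", value=10)

# fetch all balloon statistics with a single qom-get
get_all = qmp("qom-get", path="machine/peripheral/balloon0",
              property="guest-stats")
```

A single qom-get avoids one QMP round trip per statistic, which matters when pvestatd polls every VM on the host.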



Re: [pve-devel] [PATCH] auto ballooning with mom algorithm implementation

2012-12-12 Thread Alexandre DERUMIER
> Just committed the ballooning stats patches.
Ok, thanks.

> Also added a fix so that we can set the polling interval at VM startup.
Great!

Any news on getting all stats values in one qom-get?

- Original Message - 

From: Dietmar Maurer diet...@proxmox.com 
To: Alexandre Derumier aderum...@odiso.com, pve-devel@pve.proxmox.com 
Sent: Wednesday, 12 December 2012 14:27:07 
Subject: RE: [pve-devel] [PATCH] auto ballooning with mom algorithm implementation 

> -----Original Message----- 
> From: pve-devel-boun...@pve.proxmox.com [mailto:pve-devel-boun...@pve.proxmox.com] On Behalf Of Alexandre Derumier 
> Sent: Monday, 10 December 2012 14:06 
> To: pve-devel@pve.proxmox.com 
> Subject: [pve-devel] [PATCH] auto ballooning with mom algorithm implementation 
> 
> for testing! 
> 
> requires the ballooning stats patches on top of qemu-kvm 

Just committed the ballooning stats patches. Also added a fix so that we can 
set the polling interval at VM startup. 

https://git.proxmox.com/?p=pve-qemu-kvm.git;a=commit;h=904a90ccf761b43af8b6cae21be3a643cb228ae3
 


[pve-devel] [PATCH] auto ballooning with mom algorithm implementation

2012-12-10 Thread Alexandre Derumier
for testing!

requires the ballooning stats patches on top of qemu-kvm

Signed-off-by: Alexandre Derumier aderum...@odiso.com
---
 PVE/QemuServer.pm |  141 -
 1 file changed, 140 insertions(+), 1 deletion(-)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 81cc682..3df3f57 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -31,6 +31,25 @@ use Time::HiRes qw(gettimeofday);
 
 my $cpuinfo = PVE::ProcFSTools::read_cpuinfo();
 
+# If host free memory drops below this fraction of total memory, then
+# we will consider the host to be under memory pressure
+my $pressure_threshold = 0.20;
+
+# If host free memory drops below this level, then the pressure
+# is critical and more aggressive ballooning will be employed.
+my $pressure_critical = 0.05;
+
+# This is the minimum percentage of free memory that an unconstrained
+# guest would like to maintain
+my $min_guest_free_percent = 0.20;
+
+# Don't change a guest's memory by more than this percent of total memory
+my $max_balloon_change_percent = 0.5;
+
+# Only ballooning operations that change the balloon by this percentage
+# of current guest memory should be undertaken to avoid overhead
+my $min_balloon_change_percent = 0.0025;
+
+
 # Note about locking: we use flock on the config file protect
 # against concurent actions.
 # Aditionaly, we have a 'lock' setting in the config file. This
@@ -1966,6 +1985,9 @@ sub vmstatus {
 	$d->{diskread} = 0;
 	$d->{diskwrite} = 0;
 
+	$d->{freemem} = undef;
+	$d->{balloon} = $conf->{balloon} ? ($conf->{balloon} * 1024 * 1024) : undef;
+
 	$res->{$vmid} = $d;
     }
 
@@ -2042,10 +2064,28 @@ sub vmstatus {
 	$res->{$vmid}->{diskwrite} = $totalwrbytes;
     };
 
+    my $freememcb = sub {
+	my ($vmid, $resp) = @_;
+	my $value = $resp->{'return'} || 0;
+	$res->{$vmid}->{freemem} = $value;
+    };
+
+    my $totalmemcb = sub {
+	my ($vmid, $resp) = @_;
+	my $value = $resp->{'return'} || 0;
+	$res->{$vmid}->{mem} = $value;
+    };
+
     my $statuscb = sub {
 	my ($vmid, $resp) = @_;
 	$qmpclient->queue_cmd($vmid, $blockstatscb, 'query-blockstats');
 
+	if ($res->{$vmid}->{balloon}) {
+	    $qmpclient->queue_cmd($vmid, undef, 'qom-set', path => "machine/peripheral/balloon0", property => "stats-polling-interval", value => 10);
+	    $qmpclient->queue_cmd($vmid, $freememcb, 'qom-get', path => "machine/peripheral/balloon0", property => "stat-free-memory");
+	    $qmpclient->queue_cmd($vmid, $totalmemcb, 'qom-get', path => "machine/peripheral/balloon0", property => "stat-total-memory");
+	}
+
 	my $status = 'unknown';
 	if (!defined($status = $resp->{'return'}->{status})) {
 	    warn "unable to get VM status\n";
@@ -2068,9 +2108,109 @@ sub vmstatus {
 	$res->{$vmid}->{qmpstatus} = $res->{$vmid}->{status} if !$res->{$vmid}->{qmpstatus};
     }
 
+    # auto-ballooning
+    my $hostmeminfo = PVE::ProcFSTools::read_meminfo();
+    my $hostfreemem = $hostmeminfo->{memtotal} - $hostmeminfo->{memused};
+    my $host_free_percent = ($hostfreemem / $hostmeminfo->{memtotal});
+
+    warn "host free:".$hostfreemem." total:".$hostmeminfo->{memtotal}."\n";
+
+    foreach my $vmid (keys %$list) {
+	if ($res->{$vmid}->{pid} && $res->{$vmid}->{balloon} && $res->{$vmid}->{freemem}) {
+	    warn "vm $vmid: mem:".$res->{$vmid}->{mem}." maxmem:".$res->{$vmid}->{maxmem}." freemem:".$res->{$vmid}->{freemem}." min_mem:".$res->{$vmid}->{balloon}."\n";
+	    if ($host_free_percent < $pressure_threshold) {
+		balloon_shrink_guest($vmid, $host_free_percent, $res->{$vmid}->{mem}, $res->{$vmid}->{maxmem}, $res->{$vmid}->{freemem}, $res->{$vmid}->{balloon});
+	    } else {
+		balloon_grow_guest($vmid, $host_free_percent, $res->{$vmid}->{mem}, $res->{$vmid}->{maxmem}, $res->{$vmid}->{freemem}, $res->{$vmid}->{balloon});
+	    }
+	}
+    }
+
     return $res;
 }
 
+sub balloon_shrink_guest {
+    my ($vmid, $host_free_percent, $balloon_cur, $balloon_max, $freemem, $min_mem) = @_;
+
+    my $guest_free_percent = undef;
+    # Determine the degree of host memory pressure
+    if ($host_free_percent <= $pressure_critical) {
+	# Pressure is critical:
+	#   Force guest to swap by making free memory negative
+	$guest_free_percent = (-0.05 + $host_free_percent);
+    } else {
+	# Normal pressure situation
+	#   Scale the guest free memory back according to host pressure
+	$guest_free_percent = ($min_guest_free_percent * ($host_free_percent / $pressure_threshold));
+    }
+
+    # Given current conditions, determine the ideal guest memory size
+    #   $guest_used_mem = (guest.StatAvg "balloon_cur") - (guest.StatAvg "mem_unused");
+
+    my $guest_used_mem = $balloon_cur - $freemem; # do we need average ?
+
+    my $balloon_min = $guest_used_mem + ($guest_free_percent * $balloon_cur);
+
+    $balloon_min = $min_mem if $balloon_min < $min_mem;
+
+    # But do not change it too fast
+    my $balloon_size = $balloon_cur * (1 -
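The posted diff is cut off mid-expression above. For readers following along, here is a Python transcription of balloon_shrink_guest as posted, with the truncated rate-limit step filled in the way MOM's policy does it, capping each adjustment at $max_balloon_change_percent. That last step is an assumption, not taken from the patch:

```python
# Python transcription of balloon_shrink_guest from the patch above.
# The final rate-limit step is truncated in the posted diff; capping the
# change at MAX_BALLOON_CHANGE_PERCENT follows MOM and is an assumption here.
PRESSURE_THRESHOLD = 0.20
PRESSURE_CRITICAL = 0.05
MIN_GUEST_FREE_PERCENT = 0.20
MAX_BALLOON_CHANGE_PERCENT = 0.5

def balloon_shrink_guest(host_free_percent, balloon_cur, freemem, min_mem):
    if host_free_percent <= PRESSURE_CRITICAL:
        # critical pressure: drive guest free memory negative to force swapping
        guest_free_percent = -0.05 + host_free_percent
    else:
        # normal pressure: scale desired guest free memory with host pressure
        guest_free_percent = MIN_GUEST_FREE_PERCENT * (host_free_percent / PRESSURE_THRESHOLD)
    guest_used_mem = balloon_cur - freemem
    balloon_min = guest_used_mem + guest_free_percent * balloon_cur
    balloon_min = max(balloon_min, min_mem)
    # do not change the balloon too fast (assumed continuation of the patch)
    balloon_size = balloon_cur * (1 - MAX_BALLOON_CHANGE_PERCENT)
    return max(balloon_size, balloon_min)
```

The result is a target balloon value: the guest keeps its used memory plus a pressure-scaled free reserve, never drops below its configured minimum, and never shrinks by more than the rate limit in one pass.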