Re: [pve-devel] adding a vm workload scheduler feature

2015-11-17 Thread Alexandre DERUMIER
>>Unless all cluster nodes have identical hardware, how do you determine if a 
>>given node is a suitable target for a vm?

I think we could add a manual "host cpu weight" option, because it's difficult 
to compare CPU performance across nodes (frequencies, number of cores, Intel vs. AMD).


>>Also, should there be a flag where you can set the vm to 'allow 
>>auto-migration' vs. 'stick to current node'? 

Yes!


>>Last but not least, how do you keep the load that migration generates from 
>>impacting auto-migration decisions? 

Good question. I think we should use RRDs to compute averaged CPU usage stats.
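Something like this, roughly (untested sketch; the RRD location under
/var/lib/rrdcached/db and the 'cpu' DS name are from memory, so treat them as
assumptions):

    use strict;
    use warnings;
    use RRDs;

    # Sketch: average a node's CPU usage over the last $seconds seconds
    # from the pvestatd RRD, skipping unknown samples.
    sub node_cpu_average {
        my ($node, $seconds) = @_;
        my $rrd = "/var/lib/rrdcached/db/pve2-node/$node"; # assumed path

        my ($start, $step, $names, $data) =
            RRDs::fetch($rrd, 'AVERAGE', '--start', "-$seconds");
        die "RRD fetch failed: " . RRDs::error() . "\n" if RRDs::error();

        # find the column that holds the 'cpu' data source
        my ($idx) = grep { $names->[$_] eq 'cpu' } 0 .. $#$names;
        die "no 'cpu' DS found\n" if !defined($idx);

        my ($sum, $count) = (0, 0);
        foreach my $row (@$data) {
            next if !defined($row->[$idx]); # unknown sample
            $sum += $row->[$idx];
            $count++;
        }
        return $count ? $sum / $count : undef;
    }

    # e.g. only consider migrations when the 10-minute average is high
    my $avg = node_cpu_average('node1', 600);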



- Original Message -
From: "Martin Waschbüsch" 
To: "pve-devel" 
Sent: Tuesday 17 November 2015 13:04:16
Subject: Re: [pve-devel] adding a vm workload scheduler feature

> On 17.11.2015 at 08:23, Alexandre DERUMIER wrote: 
> 
> Hi, 
> 
> For next year, 
> I would like to implement a new feature: workload scheduler 
> 
> This could allow auto-migration of VMs, to try to balance the VMs across the 
> cluster, 
> based on defined rules (for example, try to balance CPU usage) 

Unless all cluster nodes have identical hardware, how do you determine if a 
given node is a suitable target for a vm? 

Also, should there be a flag where you can set the vm to 'allow auto-migration' 
vs. 'stick to current node'? 

Last but not least, how do you keep the load that migration generates from 
impacting auto-migration decisions? 

Martin 
___ 
pve-devel mailing list 
pve-devel@pve.proxmox.com 
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel 
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] adding a vm workload scheduler feature

2015-11-17 Thread Martin Waschbüsch

> On 17.11.2015 at 08:23, Alexandre DERUMIER wrote:
> 
> Hi,
> 
> For next year,
> I would like to implement a new feature: workload scheduler
> 
> This could allow auto-migration of VMs, to try to balance the VMs across the 
> cluster,
> based on defined rules (for example, try to balance CPU usage)

Unless all cluster nodes have identical hardware, how do you determine if a 
given node is a suitable target for a vm?

Also, should there be a flag where you can set the vm to 'allow auto-migration' 
vs. 'stick to current node'?

Last but not least, how do you keep the load that migration generates from 
impacting auto-migration decisions?

Martin
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] "bridge vlan show" truncated message kernel fix

2015-11-17 Thread Alexandre DERUMIER
Hi,

the "bridge vlan show" with truncated message error with too much vlans,
has been fixed in kernel 4.4

http://marc.info/?l=linux-netdev&m=144488823001097&w=2
http://marc.info/?l=linux-netdev&m=144489933906491&w=2

Once we run a fixed kernel (I don't know whether a backport to 4.2 is 
planned), 
we'll be able to remove the iproute2 patch which increases the buffer size.


Alexandre

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] adding a vm workload scheduler feature

2015-11-17 Thread Dietmar Maurer


> On November 17, 2015 at 4:37 PM Dietmar Maurer  wrote:
> 
> 
> > >>Last but not least, how do you keep the load that migration generates from
> > >>impacting auto-migration decisions? 
> > 
> > Good question. I think we should use RRDs to compute averaged CPU usage stats.
> 
> I think that any load/cpu metric will be difficult (unstable). 
> I would simply use static values like cpu count or max. memory.

or static weight.
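For illustration, a rough sketch of what selection from static values only
could look like (the data structures and the weight semantics are invented
here):

    use strict;
    use warnings;

    # each node gets a static capacity weight; each VM a static cost
    my $nodes = {
        node1 => { weight => 100, vms => [ 10, 20 ] },
        node2 => { weight => 80,  vms => [ 50 ] },
    };

    # pick the node with the most spare static capacity for a new VM
    sub pick_target {
        my ($nodes, $vm_weight) = @_;
        my ($best, $best_free);
        foreach my $name (sort keys %$nodes) {
            my $used = 0;
            $used += $_ foreach @{$nodes->{$name}->{vms}};
            my $free = $nodes->{$name}->{weight} - $used - $vm_weight;
            ($best, $best_free) = ($name, $free)
                if !defined($best_free) || $free > $best_free;
        }
        return $best;
    }

    print pick_target($nodes, 15), "\n"; # node1 (55 spare vs. 15)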

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] "bridge vlan show" truncated message kernel fix

2015-11-17 Thread Alexandre DERUMIER
>> Do they apply cleanly to 4.2.X?

No, they need to be backported.

(It doesn't look hard, but it needs testing.)

- Original Message -
From: "dietmar" 
To: "aderumier" , "pve-devel" 
Sent: Tuesday 17 November 2015 16:39:46
Subject: Re: [pve-devel] "bridge vlan show" truncated message kernel fix

> http://marc.info/?l=linux-netdev&m=144488823001097&w=2 
> http://marc.info/?l=linux-netdev&m=144489933906491&w=2 
> 
> Once we run a fixed kernel (I don't know whether a backport to 4.2 is 
> planned), 

Thanks for the links. Do they apply cleanly to 4.2.X? 
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH] add ovmf uefi roms support

2015-11-17 Thread Alexandre Derumier
Signed-off-by: Alexandre Derumier 
---
 PVE/QemuServer.pm | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 81a1c84..87b7d20 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -30,6 +30,7 @@ use PVE::ProcFSTools;
 use PVE::QMPClient;
 use PVE::RPCEnvironment;
 use Time::HiRes qw(gettimeofday);
+use File::Copy qw(copy);
 
 my $qemu_snap_storage = {rbd => 1, sheepdog => 1};
 
@@ -390,6 +391,12 @@ EODESCR
description => "Sets the protection flag of the VM. This will prevent 
the remove operation.",
default => 0,
 },
+ovmf => {
+   optional => 1,
+   type => 'boolean',
+   description => "Enable ovmf uefi roms.",
+   default => 0,
+},
 };
 
 # what about other qemu settings ?
@@ -2683,6 +2690,15 @@ sub config_to_command {
push @$cmd, '-smbios', "type=1,$conf->{smbios1}";
 }
 
+if ($conf->{ovmf}) {
+   my $ovmfvar = "OVMF_VARS-pure-efi.fd";
+   my $ovmfvar_src = "/usr/share/kvm/$ovmfvar";
+   my $ovmfvar_dst = "/tmp/$vmid-$ovmfvar";
+   copy $ovmfvar_src,$ovmfvar_dst if !(-e $ovmfvar_dst);
+   push @$cmd, '-drive', "if=pflash,format=raw,readonly,file=/usr/share/kvm/OVMF_CODE-pure-efi.fd";
+   push @$cmd, '-drive', "if=pflash,format=raw,file=$ovmfvar_dst";
+}
+
 if ($q35) {
# the q35 chipset support native usb2, so we enable usb controller
# by default for this machine type
-- 
2.1.4

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] adding a vm workload scheduler feature

2015-11-17 Thread Dietmar Maurer


> On November 17, 2015 at 4:43 PM Michael Rasmussen  wrote:
> 
> 
> On Tue, 17 Nov 2015 11:01:58 +0100 (CET)
> Alexandre DERUMIER  wrote:
> 
> > 
> > I was thinking of using rrd files to have stats over a longer time (maybe
> > averaging the last x minutes)
> > 
> > (for example we don't want to migrate a vm if the host is overloaded for 1 or
> > 2 minutes,
> > because of a spiky-CPU vm)
> > 
> > 
> Is it not the point of this to take care of short-term load? Long-term
> load should be handled by human intervention.

I thought exactly the opposite. You will get an 'unstable' system
if you react to short-term load changes.

See the theory of control systems:

https://en.wikipedia.org/wiki/Control_system

Any electronic engineer can teach you how simple it is to build an oscillator
;-)

Consider that a VM has highly unpredictable behavior.
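To make that concrete, a toy hysteresis sketch (illustrative only): two
thresholds plus a sustain counter, so a trigger cannot oscillate around a
single value:

    use strict;
    use warnings;

    my $HIGH    = 0.80; # consider migrating only above this
    my $LOW     = 0.60; # re-arm only after load falls below this
    my $SUSTAIN = 5;    # consecutive samples required above $HIGH

    my ($over_count, $armed) = (0, 1);

    sub should_migrate {
        my ($load) = @_;
        if (!$armed) {
            $armed = 1 if $load < $LOW; # cooled down, re-arm
            return 0;
        }
        $over_count = ($load > $HIGH) ? $over_count + 1 : 0;
        if ($over_count >= $SUSTAIN) {
            ($over_count, $armed) = (0, 0); # fire once, then hold off
            return 1;
        }
        return 0;
    }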

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH pve-manager 2/3] ext6migrate: move static class properties out of initComponent()

2015-11-17 Thread Dietmar Maurer
> Besides fitting more with the declarative style of ExtJS, this has the
> interesting side effect of allowing comboboxes to work with ExtJS6

applied, but I wonder if there is an alternative/correct way to
do it inside initComponent? 


> - Ext.apply(me, {
> - displayField: 'value',
> - valueField: 'key',
> - queryMode: 'local'
> - });
> -

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] adding a vm workload scheduler feature

2015-11-17 Thread Michael Rasmussen
On Tue, 17 Nov 2015 11:01:58 +0100 (CET)
Alexandre DERUMIER  wrote:

> 
> I was thinking of using rrd files to have stats over a longer time (maybe 
> averaging the last x minutes)
> 
> (for example we don't want to migrate a vm if the host is overloaded for 1 or 2 
> minutes,
> because of a spiky-CPU vm)
> 
> 
Is it not the point of this to take care of short-term load? Long-term
load should be handled by human intervention.

-- 
Hilsen/Regards
Michael Rasmussen

Get my public GnuPG keys:
michael  rasmussen  cc
http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xD3C9A00E
mir  datanom  net
http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE501F51C
mir  miras  org
http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE3E80917
--
/usr/games/fortune -es says:
May you do Good Magic with Perl.
-- Larry Wall's blessing


pgp5fdXVGwUk7.pgp
Description: OpenPGP digital signature
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] adding a vm workload scheduler feature

2015-11-17 Thread Michael Rasmussen
On Tue, 17 Nov 2015 11:01:58 +0100 (CET)
Alexandre DERUMIER  wrote:

> yes, sure. I just want to start with cpu, but memory could be added too.
> 
> I'm not sure about io-wait, as migrating the VM doesn't change the storage?
> 
Is that also the case for Gluster?

-- 
Hilsen/Regards
Michael Rasmussen

Get my public GnuPG keys:
michael  rasmussen  cc
http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xD3C9A00E
mir  datanom  net
http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE501F51C
mir  miras  org
http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE3E80917
--
/usr/games/fortune -es says:
May you do Good Magic with Perl.
-- Larry Wall's blessing


pgpeLj3xizrEP.pgp
Description: OpenPGP digital signature
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] adding a vm workload scheduler feature

2015-11-17 Thread Dietmar Maurer
> >>Last but not least, how do you keep the load that migration generates from
> >>impacting auto-migration decisions? 
> 
> Good question. I think we should use RRDs to compute averaged CPU usage stats.

I think that any load/cpu metric will be difficult (unstable). 
I would simply use static values like cpu count or max. memory.

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] "bridge vlan show" truncated message kernel fix

2015-11-17 Thread Dietmar Maurer


> http://marc.info/?l=linux-netdev&m=144488823001097&w=2
> http://marc.info/?l=linux-netdev&m=144489933906491&w=2
> 
> Once we run a fixed kernel (I don't know whether a backport to 4.2 is
> planned), 

Thanks for the links. Do they apply cleanly to 4.2.X?

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH pve-manager 3/3] ext6migrate: trigger working store reload when the component is rendered

2015-11-17 Thread Emmanuel Kasper
This fixes the following problem:
After a manual browser refresh, widgets were displayed without data
---
 www/manager6/dc/ACLView.js| 2 +-
 www/manager6/dc/AuthView.js   | 2 +-
 www/manager6/dc/Backup.js | 2 +-
 www/manager6/dc/GroupView.js  | 2 +-
 www/manager6/dc/OptionView.js | 3 +--
 www/manager6/dc/PoolView.js   | 2 +-
 www/manager6/dc/RoleView.js   | 2 +-
 www/manager6/dc/SecurityGroups.js | 2 +-
 www/manager6/dc/StorageView.js| 2 +-
 www/manager6/dc/Summary.js| 2 +-
 www/manager6/dc/Support.js| 2 +-
 www/manager6/dc/UserView.js   | 2 +-
 www/manager6/grid/BackupView.js   | 2 +-
 www/manager6/grid/PoolMembers.js  | 2 +-
 www/manager6/panel/IPSet.js   | 2 +-
 15 files changed, 15 insertions(+), 16 deletions(-)

diff --git a/www/manager6/dc/ACLView.js b/www/manager6/dc/ACLView.js
index 7ea314c..5e0d6fb 100644
--- a/www/manager6/dc/ACLView.js
+++ b/www/manager6/dc/ACLView.js
@@ -222,7 +222,7 @@ Ext.define('PVE.dc.ACLView', {
},
columns: columns,
listeners: {
-   show: reload
+   render: reload
}
});
 
diff --git a/www/manager6/dc/AuthView.js b/www/manager6/dc/AuthView.js
index 83e79c6..cadbee8 100644
--- a/www/manager6/dc/AuthView.js
+++ b/www/manager6/dc/AuthView.js
@@ -136,7 +136,7 @@ Ext.define('PVE.dc.AuthView', {
}
],
listeners: {
-   show: reload,
+   render: reload,
itemdblclick: run_editor
}
});
diff --git a/www/manager6/dc/Backup.js b/www/manager6/dc/Backup.js
index 9cc2f7d..b11aad5 100644
--- a/www/manager6/dc/Backup.js
+++ b/www/manager6/dc/Backup.js
@@ -448,7 +448,7 @@ Ext.define('PVE.dc.BackupView', {
}
],
listeners: {
-   show: reload,
+   render: reload,
itemdblclick: run_editor
}
});
diff --git a/www/manager6/dc/GroupView.js b/www/manager6/dc/GroupView.js
index 6950a46..011c7db 100644
--- a/www/manager6/dc/GroupView.js
+++ b/www/manager6/dc/GroupView.js
@@ -100,7 +100,7 @@ Ext.define('PVE.dc.GroupView', {
}
],
listeners: {
-   show: reload,
+   render: reload,
itemdblclick: run_editor
}
});
diff --git a/www/manager6/dc/OptionView.js b/www/manager6/dc/OptionView.js
index 4de75a1..bd6de1d 100644
--- a/www/manager6/dc/OptionView.js
+++ b/www/manager6/dc/OptionView.js
@@ -186,12 +186,11 @@ Ext.define('PVE.dc.OptionView', {
tbar: [ edit_btn ],
rows: rows,
listeners: {
+   render: reload,
itemdblclick: run_editor
}
});
 
me.callParent();
-
-   me.on('show', reload);
 }
 });
diff --git a/www/manager6/dc/PoolView.js b/www/manager6/dc/PoolView.js
index 4ae99e2..659576f 100644
--- a/www/manager6/dc/PoolView.js
+++ b/www/manager6/dc/PoolView.js
@@ -100,7 +100,7 @@ Ext.define('PVE.dc.PoolView', {
}
],
listeners: {
-   show: reload,
+   render: reload,
itemdblclick: run_editor
}
});
diff --git a/www/manager6/dc/RoleView.js b/www/manager6/dc/RoleView.js
index cbfe82d..d912dc9 100644
--- a/www/manager6/dc/RoleView.js
+++ b/www/manager6/dc/RoleView.js
@@ -52,7 +52,7 @@ Ext.define('PVE.dc.RoleView', {
}
],
listeners: {
-   show: function() {
+   render: function() {
store.load();
}
}
diff --git a/www/manager6/dc/SecurityGroups.js b/www/manager6/dc/SecurityGroups.js
index 0e31295..a8f17cb 100644
--- a/www/manager6/dc/SecurityGroups.js
+++ b/www/manager6/dc/SecurityGroups.js
@@ -178,7 +178,7 @@ Ext.define('PVE.SecurityGroupList', {
deselect: function() {
me.rule_panel.setBaseUrl(undefined);
},
-   show: reload
+   render: reload
}
});
 
diff --git a/www/manager6/dc/StorageView.js b/www/manager6/dc/StorageView.js
index 4bcf3b7..6200283 100644
--- a/www/manager6/dc/StorageView.js
+++ b/www/manager6/dc/StorageView.js
@@ -245,7 +245,7 @@ Ext.define('PVE.dc.StorageView', {
}
],
listeners: {
-   show: reload,
+   render: reload,
itemdblclick: run_editor
}
});
diff --git a/www/manager6/dc/Summary.js b/www/manager6/dc/Summary.js
index b0f8b32..109e45f 100644
--- a/www/manager6/dc/Summary.js
+++ b/www/manager6/dc/Summary.js
@@ -128,7 +128,7 @@ Ext.define('PVE.dc.Summary', {
layout: 'border',
items: [ nodegrid ],
listeners: {
-   show: function() {
+   render: function() {
nodegrid.fireEvent('show', 

[pve-devel] [PATCH pve-manager 2/3] Allow email addresses with a top level domain of up to 63 characters

2015-11-17 Thread Emmanuel Kasper
Reapply fix for https://bugzilla.proxmox.com/show_bug.cgi?id=716,
which was missing in the manager6/ directory
---
 www/manager6/Toolkit.js   | 8 +++-
 www/manager6/dc/OptionView.js | 2 +-
 www/manager6/dc/UserEdit.js   | 2 +-
 3 files changed, 9 insertions(+), 3 deletions(-)

diff --git a/www/manager6/Toolkit.js b/www/manager6/Toolkit.js
index 201d131..351cd8b 100644
--- a/www/manager6/Toolkit.js
+++ b/www/manager6/Toolkit.js
@@ -88,7 +88,13 @@ Ext.apply(Ext.form.field.VTypes, {
 DnsName: function(v) {
return (/^(([a-zA-Z0-9]([a-zA-Z0-9\-]*[a-zA-Z0-9])?)\.)*([A-Za-z0-9]([A-Za-z0-9\-]*[A-Za-z0-9])?)$/).test(v);
 },
-DnsNameText: gettext('This is not a valid DNS name')
+DnsNameText: gettext('This is not a valid DNS name'),
+
+// workaround for https://www.sencha.com/forum/showthread.php?302150
+pveMail: function(v) {
+return (/^(\w+)([\-+.][\w]+)*@(\w[\-\w]*\.){1,5}([A-Za-z]){2,63}$/).test(v);
+},
+pveMailText: gettext('This field should be an e-mail address in the format "u...@example.com"'),
 });
 
 // we dont want that a displayfield set the form dirty flag! 
diff --git a/www/manager6/dc/OptionView.js b/www/manager6/dc/OptionView.js
index 3a98bce..4de75a1 100644
--- a/www/manager6/dc/OptionView.js
+++ b/www/manager6/dc/OptionView.js
@@ -85,7 +85,7 @@ Ext.define('PVE.dc.EmailFromEdit', {
items: {
xtype: 'pvetextfield',
name: 'email_from',
-   vtype: 'email',
+   vtype: 'pveMail',
emptyText: gettext('Send emails from root@$hostname'),
deleteEmpty: true,
value: '',
diff --git a/www/manager6/dc/UserEdit.js b/www/manager6/dc/UserEdit.js
index bb110b5..2ef8bce 100644
--- a/www/manager6/dc/UserEdit.js
+++ b/www/manager6/dc/UserEdit.js
@@ -117,7 +117,7 @@ Ext.define('PVE.dc.UserEdit', {
xtype: 'textfield',
name: 'email',
fieldLabel: gettext('E-Mail'),
-   vtype: 'email'
+   vtype: 'pveMail'
}
];
 
-- 
2.1.4


___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH pve-manager 1/3] Correct whitespaces and comment non obvious class properties

2015-11-17 Thread Emmanuel Kasper
---
 www/manager6/form/RealmComboBox.js | 8 +++-
 1 file changed, 3 insertions(+), 5 deletions(-)

diff --git a/www/manager6/form/RealmComboBox.js b/www/manager6/form/RealmComboBox.js
index 7e6700f..ce59422 100644
--- a/www/manager6/form/RealmComboBox.js
+++ b/www/manager6/form/RealmComboBox.js
@@ -20,15 +20,13 @@ Ext.define('PVE.form.RealmComboBox', {
}
 },
 stateEvents: [ 'select' ],
-stateful: true,
-id: 'pveloginrealm', // fixme: remove (Stateful does not work without)
+stateful: true, // last chosen auth realm is saved between page reloads
+id: 'pveloginrealm', // We need stable ids when using stateful, not autogenerated
 stateID: 'pveloginrealm',
 
 needOTP: function(realm) {
var me = this;
-
var rec = me.store.findRecord('realm', realm);
-
return rec && rec.data && rec.data.tfa ? rec.data.tfa : undefined;
 },
 
@@ -39,7 +37,7 @@ Ext.define('PVE.form.RealmComboBox', {
model: 'pve-domains',
});
 
-me.callParent();
+   me.callParent();
 
me.store.load({
callback: function(r, o, success) {
-- 
2.1.4


___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] adding a vm workload scheduler feature

2015-11-17 Thread Thomas Lamprecht



On 17.11.2015 at 14:56, Martin Waschbüsch wrote:

>> On 17.11.2015 at 14:20, Alexandre DERUMIER wrote:
>>
>>>> Unless all cluster nodes have identical hardware, how do you determine if a 
>>>> given node is a suitable target for a vm?
>>
>> I think we could add a manual "host cpu weight" option, because it's difficult 
>> to compare CPU performance across nodes (frequencies, number of cores, Intel vs. AMD).

Manually weighting VMs and (maybe also) nodes would generally be a good 
option (which Dietmar once proposed to me as an idea); it shifts the control 
to the admin, who is normally better able to determine how the 
load should be spread, as there are also different use cases.

> Good point. Though, I was more thinking about situations where the cpu-type is 
> not set to default (kvm64, I think?) but to something like 'IvyBridge' or 
> Opteron_G5. (The primary use I had for using non-default cpu types was to 
> expose features such as AES-NI to a vm.)
>
> Another thing that just occurred to me: Backup schedules and settings are not 
> necessarily the same for each node, which means auto-migration could lead to 
> double, or (worse!) no backup for a vm that was moved around?
>
> Dunno if these are just corner cases?

Configuration like that done in the HA manager could be used, so that certain 
nodes are in a group and the VM is bound to this group; we could in fact 
use the group API from the HA manager.
Also, migrations could be blocked while a service is doing stuff like an 
online backup.




___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] adding a vm workload scheduler feature

2015-11-17 Thread Alexandre DERUMIER
>> Also migrations could be blocked while a service is doing stuff like an 
>> online backup.

Block jobs too (like mirroring). Maybe just checking whether the VM has a lock 
would be enough.
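Roughly like this (sketch; load_config() is the qemu-server call as far as I
remember, and @vmids stands for however the scheduler enumerates candidates):

    use PVE::QemuServer;

    foreach my $vmid (@vmids) {
        my $conf = PVE::QemuServer::load_config($vmid);
        # 'lock' is set during backup, migrate, clone, ... and block
        # jobs like drive mirroring also set one
        if (defined($conf->{lock})) {
            warn "skipping VM $vmid: locked ($conf->{lock})\n";
            next;
        }
        # ... consider $vmid for auto-migration ...
    }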

- Original Message -
From: "Thomas Lamprecht" 
To: "pve-devel" 
Sent: Tuesday 17 November 2015 15:26:15
Subject: Re: [pve-devel] adding a vm workload scheduler feature

On 17.11.2015 at 14:56, Martin Waschbüsch wrote: 
>> On 17.11.2015 at 14:20, Alexandre DERUMIER wrote: 
>> 
>>>> Unless all cluster nodes have identical hardware, how do you determine if 
>>>> a given node is a suitable target for a vm? 
>> I think we could add a manual "host cpu weight" option, because it's 
>> difficult to compare CPU performance across nodes (frequencies, number of cores, Intel vs. AMD). 
Manually weighting VMs and (maybe also) nodes would generally be a good 
option (which Dietmar once proposed to me as an idea); it shifts the control 
to the admin, who is normally better able to determine how the 
load should be spread, as there are also different use cases. 

> Good point. Though, I was more thinking about situations where the cpu-type 
> is not set to default (kvm64, I think?) but to something like 'IvyBridge' or 
> Opteron_G5. (The primary use I had for using non-default cpu types was to 
> expose features such as AES-NI to a vm.) 
> 
> Another thing that just occurred to me: Backup schedules and settings are not 
> necessarily the same for each node, which means auto-migration could lead to 
> double, or (worse!) no backup for a vm that was moved around? 
> 
> Dunno if these are just corner cases? 

Configuration like that done in the HA manager could be used, so that certain 
nodes are in a group and the VM is bound to this group; we could in fact 
use the group API from the HA manager. 
Also, migrations could be blocked while a service is doing stuff like an 
online backup. 



___ 
pve-devel mailing list 
pve-devel@pve.proxmox.com 
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel 
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] adding a vm workload scheduler feature

2015-11-17 Thread Alexandre DERUMIER
>> Good point. Though, I was more thinking about situations where the cpu-type 
>> is not set to default (kvm64, I think?) but to something like 'IvyBridge' or 
>> Opteron_G5. (The primary use I had for using non-default cpu types was to 
>> expose features such as AES-NI to a vm.) 

I think we could find a way to find a compatible host.
Another way could be to just start the migration: with the new CPU enforce 
feature in Proxmox 4, the migration will abort immediately if the target host 
is not compatible, and we can then try another host or another VM.
But this is solvable. 
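Something like (sketch; migrate_vm() and @candidate_nodes are placeholders,
not existing functions):

    foreach my $target (@candidate_nodes) {
        eval { migrate_vm($vmid, $target) };
        if (my $err = $@) {
            # e.g. CPU enforce aborted the migration -> try the next host
            warn "migration of $vmid to $target failed: $err";
            next;
        }
        last; # migrated successfully
    }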

>> Another thing that just occurred to me: Backup schedules and settings are not 
>> necessarily the same for each node, which means auto-migration could lead to 
>> double, or (worse!) no backup for a vm that was moved around? 

This occurs only if you restrict a backup job to a specific node + specific VMs.
(By default, you can back up VMs, and it works whichever node they are on.)

For this case, I think we can exclude these VMs from auto-migration.


- Original Message -
From: "Martin Waschbüsch" 
To: "pve-devel" 
Sent: Tuesday 17 November 2015 14:56:39
Subject: Re: [pve-devel] adding a vm workload scheduler feature

> On 17.11.2015 at 14:20, Alexandre DERUMIER wrote: 
> 
>>> Unless all cluster nodes have identical hardware, how do you determine if a 
>>> given node is a suitable target for a vm? 
> 
> I think we could add a manual "host cpu weight" option, because it's 
> difficult to compare CPU performance across nodes (frequencies, number of cores, Intel vs. AMD). 

Good point. Though, I was more thinking about situations where the cpu-type is 
not set to default (kvm64, I think?) but to something like 'IvyBridge' or 
Opteron_G5. (The primary use I had for using non-default cpu types was to 
expose features such as AES-NI to a vm.) 

Another thing that just occurred to me: Backup schedules and settings are not 
necessarily the same for each node, which means auto-migration could lead to 
double, or (worse!) no backup for a vm that was moved around? 

Dunno if these are just corner cases? 

Martin 
___ 
pve-devel mailing list 
pve-devel@pve.proxmox.com 
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel 
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH] add ovmf uefi roms support

2015-11-17 Thread Andreas Steinel
Nice!

On Tue, Nov 17, 2015 at 5:39 PM, Alexandre Derumier  wrote:

> Signed-off-by: Alexandre Derumier 
> ---
>  PVE/QemuServer.pm | 16 ++++++++++++++++
>  1 file changed, 16 insertions(+)
>
> diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
> index 81a1c84..87b7d20 100644
> --- a/PVE/QemuServer.pm
> +++ b/PVE/QemuServer.pm
> @@ -30,6 +30,7 @@ use PVE::ProcFSTools;
>  use PVE::QMPClient;
>  use PVE::RPCEnvironment;
>  use Time::HiRes qw(gettimeofday);
> +use File::Copy qw(copy);
>
>  my $qemu_snap_storage = {rbd => 1, sheepdog => 1};
>
> @@ -390,6 +391,12 @@ EODESCR
> description => "Sets the protection flag of the VM. This will
> prevent the remove operation.",
> default => 0,
>  },
> +ovmf => {
> +   optional => 1,
> +   type => 'boolean',
> +   description => "Enable ovmf uefi roms.",
> +   default => 0,
> +},
>  };
>
>  # what about other qemu settings ?
> @@ -2683,6 +2690,15 @@ sub config_to_command {
> push @$cmd, '-smbios', "type=1,$conf->{smbios1}";
>  }
>
> +if ($conf->{ovmf}) {
> +   my $ovmfvar = "OVMF_VARS-pure-efi.fd";
> +   my $ovmfvar_src = "/usr/share/kvm/$ovmfvar";
> +   my $ovmfvar_dst = "/tmp/$vmid-$ovmfvar";
> +   copy $ovmfvar_src,$ovmfvar_dst if !(-e $ovmfvar_dst);
> +   push @$cmd, '-drive',
> "if=pflash,format=raw,readonly,file=/usr/share/kvm/OVMF_CODE-pure-efi.fd";
> +   push @$cmd, '-drive', "if=pflash,format=raw,file=$ovmfvar_dst";
> +}
> +
>  if ($q35) {
> # the q35 chipset support native usb2, so we enable usb controller
> # by default for this machine type
> --
> 2.1.4
>
> ___
> pve-devel mailing list
> pve-devel@pve.proxmox.com
> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
>
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] qemu-server : add support for ovmf efi roms

2015-11-17 Thread Alexandre DERUMIER
I have sent another patch for qemu-kvm with the ROMs;
I think it's pending in the mailing list because of its 2MB size.

- Original Message -
From: "aderumier" 
To: "pve-devel" 
Sent: Tuesday 17 November 2015 17:39:51
Subject: [pve-devel] qemu-server : add support for ovmf efi roms

This patch adds support for OVMF EFI ROMs, with a new option "ovmf:1". 

Users have reported problems with VGA PCIe passthrough on Proxmox 4: 
https://forum.proxmox.com/threads/24362-PCIe-passthrough-does-not-work 

and OVMF fixes the problem with UEFI ROMs. 

Here is a good wiki page on Arch Linux: 

https://wiki.archlinux.org/index.php/PCI_passthrough_via_OVMF 

___ 
pve-devel mailing list 
pve-devel@pve.proxmox.com 
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel 

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] adding a vm workload scheduler feature

2015-11-17 Thread Alexandre DERUMIER
>> I think that any load/cpu metric will be difficult (unstable). 
>> I would simply use static values like cpu count or max. memory. 
>
> or static weight. 

For my personal usage, I don't think it'll work, because I have a lot of 
different workloads at different times,
and some VMs can use 0% CPU at some times and 80% at peak hours.

But indeed, dynamic stats are not easy to get right.


oVirt has a planned feature which uses OptaPlanner:

http://www.ovirt.org/Features/Optaplanner
This seems to do some kind of heuristics and maths (beyond my competence ;) to 
figure out which VM to migrate.





Another interesting whitepaper with algorithms:
"Dynamic Load Balancing of Virtual Machines using QEMU-KVM"
http://research.ijcaonline.org/volume46/number6/pxc3879263.pdf


- Original Message -
From: "dietmar" 
To: "aderumier" , "pve-devel" 
Sent: Tuesday 17 November 2015 16:51:23
Subject: Re: [pve-devel] adding a vm workload scheduler feature

> On November 17, 2015 at 4:37 PM Dietmar Maurer  wrote: 
> 
> 
> > >>Last but not least, how do you keep the load that migration generates 
> > >>from 
> > >>impacting auto-migration decisions? 
> > 
> > Good question. I think we should use RRDs to compute averaged CPU usage 
> > stats. 
> 
> I think that any load/cpu metric will be difficult (unstable). 
> I would simply use static values like cpu count or max. memory. 

or static weight. 
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] adding a vm workload scheduler feature

2015-11-17 Thread Dietmar Maurer

> But indeed, dynamic stats are not easy to get right.
> 
> 
> oVirt has a planned feature which uses OptaPlanner:
> 
> http://www.ovirt.org/Features/Optaplanner
> This seems to do some kind of heuristics and maths (beyond my competence ;) to
> figure out which VM to migrate.

IMHO you can always generate a load which leads to endless migrations...

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH pve-manager 3/3] ext6migrate: trigger working store reload when the component is rendered

2015-11-17 Thread Dietmar Maurer
> This fixes the following problem:
> After a manual browser refresh, widgets were displayed without data

>   listeners: {
> - show: reload
> + render: reload
>   }
>   });

We have a few places where we use:

fireEvent('show', ...)

Does that still work with these changes?

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH pve-manager 1/3] Correct whitespaces and comment non obvious class properties

2015-11-17 Thread Dietmar Maurer
applied

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH pve-manager 2/3] Allow email addresses with a top level domain of up to 63 characters

2015-11-17 Thread Dietmar Maurer
applied

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] adding a vm workload scheduler feature

2015-11-17 Thread Alexandre DERUMIER
>> I assume it is hard to get this stable (just a feeling). 

Yes, same for me.

>> On the other side, this would be simple 
>> to implement. Each node is responsible to move its own VMs, so you do not 
>> even 
>> need a lock. 

I was thinking more about a lock, to avoid node2 migrating a VM to node1 while 
node1 tries to migrate a VM to node3, for example.


>> My plan was to integrate this into the HA manager, but then you only have the 
>> feature for HA enabled VMs. 

It would be great to have it without HA too.


>> But the CRM code shows how to use a cluster wide lock to implement a 'master' 
>> role. 

Ok, thanks, I'll try to have a look at it.


I don't have time for now, but I'll begin testing in January.
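Something like this, maybe (untested sketch; cfs_lock_domain() is the pmxcfs
lock helper, while the 'rebalance' domain name and the three helpers are made
up):

    use PVE::Cluster;

    # only one node at a time gets past this, so node1 -> node3 and
    # node2 -> node1 decisions cannot run concurrently
    PVE::Cluster::cfs_lock_domain('rebalance', 10, sub {
        my $vmid   = pick_vm_to_migrate();    # hypothetical helper
        my $target = pick_target_node($vmid); # hypothetical helper
        migrate($vmid, $target) if $target;   # hypothetical helper
    });
    die "rebalance lock failed: $@\n" if $@;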



- Original Message -
From: "dietmar" 
To: "aderumier" , "pve-devel" 
Sent: Tuesday 17 November 2015 08:40:19
Subject: Re: [pve-devel] adding a vm workload scheduler feature

> What do you think about it? 

interesting 

> 
> As we don't have a master node, I don't know how to implement this: 
> 
> 1) each node tries to migrate its own VMs to another node with less CPU usage. 
> maybe with a global cluster lock so we don't have 2 nodes migrating in both 
> directions at the same time? 

I assume it is hard to get this stable (just a feeling). On the other side, 
this would be simple to implement. Each node is responsible to move its own 
VMs, so you do not even need a lock. 

> 2) have some kind of master service in the cluster (maybe with corosync 
> service?), 
> which reads global stats of all nodes and, through an algorithm, does the 
> migrations. 
> 
> Don't know which way is better? 

My plan was to integrate this into the HA manager, but then you only have the 
feature for HA enabled VMs. 

But the CRM code shows how to use a cluster wide lock to implement a 'master' 
role. 

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] adding a vm workload scheduler feature

2015-11-17 Thread Thomas Lamprecht


On 17 November 2015 08:40:19 CET, Dietmar Maurer wrote:
>
>> What do you think about it?
>
>interesting
> 
>> 
>> As we don't have a master node, I don't know how to implement this:
>> 
>> 1) each node tries to migrate its own VMs to another node with less CPU
>> usage.
>> maybe with a global cluster lock so we don't have 2 nodes migrating in
>> both directions at the same time?
>
> I assume it is hard to get this stable (just a feeling). On the other
> side, this would be simple to implement. Each node is responsible to
> move its own VMs, so you do not even need a lock.

Maybe a lock should be there so that only one rebalancing action can happen at 
any time, to avoid out-of-control feedback loops.

>> 2) have some kind of master service in the cluster (maybe with
>> corosync service?),
>> which reads global stats of all nodes and, through an algorithm, does
>> the migrations.
>> 
>> Don't know which way is better?
>
> My plan was to integrate this into the HA manager, but then you only
> have the feature for HA enabled VMs.

That could still be done, by adding a new service which uses the HA Environment 
class. With a lock and a status/command file, a master wouldn't necessarily 
be needed.

>
> But the CRM code shows how to use a cluster wide lock to implement a
> 'master' role.
>
>___
>pve-devel mailing list
>pve-devel@pve.proxmox.com
>http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] adding a vm workload scheduler feature

2015-11-17 Thread datanom.net

Hi all,

On 2015-11-17 08:23, Alexandre DERUMIER wrote:



> What do you think about it?

Sounds great, but I think memory and io-wait should be part of the list 
as well.




> As we don't have a master node, I don't know how to implement this:
> 
> 1) each node tries to migrate its own VMs to another node with less cpu 
> usage.
> 
> maybe with a global cluster lock so we don't have 2 nodes migrating in
> both directions at the same time?

How would you distinguish between operator-initiated migration and 
automatic migration?
I should think that operator-initiated migration should always overrule 
automatic migration.




> 2) have some kind of master service in the cluster (maybe with
> corosync service?),
> which reads global stats of all nodes and, through an algorithm, does
> the migrations.

Why not keep it simple? You could extend pvestatd to save the 
performance numbers in a file in a specific folder in /etc/pve, since 
pvestatd already has these numbers. Each node names this file after its node 
name, and if the numbers are written out through Data::Dumper, this could be 
a persisted hash like (a small writer sketch follows at the end of this message):

{
    'cpu' => {
        'cur' => 12,
        'max' => 80
    },
    'mem' => {
        'cur' => 28,
        'max' => 96
    },
    'wait' => {
        'cur' => 2,
        'max' => 10
    }
}
cur is the reading from pvestatd and max is the configured threshold on 
each node.


Another daemon on each node, on a regular interval, assembles a hash by 
reading every file in the /etc/pve folder:

{
    'node1' => {
        'cpu' => {
            'cur' => 12,
            'max' => 80
        },
        'mem' => {
            'cur' => 28,
            'max' => 96
        },
        'wait' => {
            'cur' => 2,
            'max' => 10
        }
    },
    ..
}
and makes decisions according to a well-defined algorithm, which should 
also take into account that some VMs can, by configuration, be locked to 
a specific node.


As for locking, I agree that there should be some kind of global locking 
scheme.
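A rough sketch of the writer side (the /etc/pve/loadstats folder name and the
numbers are invented; the folder would have to be created once):

    use strict;
    use warnings;
    use Data::Dumper;
    use PVE::INotify;

    my $stats = {
        cpu  => { cur => 12, max => 80 },
        mem  => { cur => 28, max => 96 },
        wait => { cur => 2,  max => 10 },
    };

    my $node = PVE::INotify::nodename();
    my $file = "/etc/pve/loadstats/$node"; # assumed folder

    local $Data::Dumper::Terse = 1; # dump a bare hashref, no '$VAR1 ='
    open(my $fh, '>', $file) or die "open $file: $!\n";
    print $fh Dumper($stats);
    close($fh);

    # readers can slurp each node's file and eval it back into a hash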


Just some quick thoughts.

--
Hilsen/Regards
Michael Rasmussen

Get my public GnuPG keys:
michael  rasmussen  cc
http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xD3C9A00E
mir  datanom  net
http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE501F51C
mir  miras  org
http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE3E80917
--



This mail was virus scanned and spam checked before delivery.
This mail is also DKIM signed. See header dkim-signature.

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] adding a vm workload scheduler feature

2015-11-17 Thread Alexandre DERUMIER
> What do you think about it?
> 
>> Sounds great, but I think memory and io-wait should be part of the list 
>> as well.

Yes, sure. I just want to start with cpu, but memory could be added too.

I'm not sure about io-wait, as migrating the VM doesn't change the storage?


>> Why not keep it simple? You could extend pvestatd to save the 
>> performance numbers in a file in a specific folder in /etc/pve since 
>> pvestatd already has these numbers. Each node names this file by node 
>> name and if writing the numbers through Data::Dumper then this could be 
>> a persisted hash like 

I was thinking of using rrd files to have stats over a longer time (maybe 
averaging the last x minutes).

(for example we don't want to migrate a VM if the host is overloaded for 1 or 2 
minutes,
because of a spiky-CPU VM)




- Original Message -
From: "datanom.net" 
To: "pve-devel" 
Sent: Tuesday 17 November 2015 10:11:40
Subject: Re: [pve-devel] adding a vm workload scheduler feature

Hi all, 

On 2015-11-17 08:23, Alexandre DERUMIER wrote: 
> 
> 
> What do you think about it? 
> 
Sounds great, but I think memory and io-wait should be part of the list 
as well. 

> 
> As we don't have a master node, I don't know how to implement this: 
> 
> 1) each node tries to migrate its own VMs to another node with less cpu 
> usage. 
> maybe with a global cluster lock so we don't have 2 nodes migrating in 
> both directions at the same time? 
> 
How would you distinguish between operator-initiated migration and 
automatic migration? 
I should think that operator-initiated migration should always overrule 
automatic migration. 

> 
> 2) have some kind of master service in the cluster (maybe with 
> corosync service?), 
> which reads global stats of all nodes and, through an algorithm, does 
> the migrations. 
> 
Why not keep it simple? You could extend pvestatd to save the 
performance numbers in a file in a specific folder in /etc/pve, since 
pvestatd already has these numbers. Each node names this file after its node 
name, and if the numbers are written out through Data::Dumper, this could be 
a persisted hash like 
{ 
    'cpu' => { 
        'cur' => 12, 
        'max' => 80 
    }, 
    'mem' => { 
        'cur' => 28, 
        'max' => 96 
    }, 
    'wait' => { 
        'cur' => 2, 
        'max' => 10 
    } 
} 
cur is the reading from pvestatd and max is the configured threshold on 
each node. 

Another daemon on each node, on a regular interval, assembles a hash by 
reading every file in the /etc/pve folder: 

{ 
    'node1' => { 
        'cpu' => { 
            'cur' => 12, 
            'max' => 80 
        }, 
        'mem' => { 
            'cur' => 28, 
            'max' => 96 
        }, 
        'wait' => { 
            'cur' => 2, 
            'max' => 10 
        } 
    }, 
    .. 
} 
and makes decisions according to a well-defined algorithm, which should 
also take into account that some VMs can, by configuration, be locked to 
a specific node. 

As for locking, I agree that there should be some kind of global locking 
scheme. 

Just some quick thoughts. 

-- 
Hilsen/Regards 
Michael Rasmussen 

Get my public GnuPG keys: 
michael  rasmussen  cc 
http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xD3C9A00E 
mir  datanom  net 
http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE501F51C 
mir  miras  org 
http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE3E80917 
-- 

 

This mail was virus scanned and spam checked before delivery. 
This mail is also DKIM signed. See header dkim-signature. 

___ 
pve-devel mailing list 
pve-devel@pve.proxmox.com 
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel 
___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


Re: [pve-devel] [PATCH] add ovmf uefi roms support

2015-11-17 Thread Dietmar Maurer
> +if ($conf->{ovmf}) {
> + my $ovmfvar = "OVMF_VARS-pure-efi.fd";
> + my $ovmfvar_src = "/usr/share/kvm/$ovmfvar";
> + my $ovmfvar_dst = "/tmp/$vmid-$ovmfvar";

So we lose all EFI settings after reboot or migrate? (/tmp/ is cleared at
reboot)
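One way out, as a sketch (the /var/lib/vz/ovmf location is only an example;
live migration would additionally have to transfer this file):

    # keep the per-VM EFI vars in a persistent directory instead of /tmp
    my $ovmfvar_dir = "/var/lib/vz/ovmf"; # assumed persistent location
    my $ovmfvar_dst = "$ovmfvar_dir/$vmid-OVMF_VARS-pure-efi.fd";
    mkdir $ovmfvar_dir if !-d $ovmfvar_dir;
    copy("/usr/share/kvm/OVMF_VARS-pure-efi.fd", $ovmfvar_dst)
        if !-e $ovmfvar_dst;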

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH pve-cluster] add 'ha/fence.cfg' to observed files

2015-11-17 Thread Thomas Lamprecht
fence.cfg will be used by the PVE HA manager for external fence
device configuration; this allows us to use the cfs_read_file and
cfs_write_file methods.

Signed-off-by: Thomas Lamprecht 
---
 data/PVE/Cluster.pm | 1 +
 1 file changed, 1 insertion(+)

diff --git a/data/PVE/Cluster.pm b/data/PVE/Cluster.pm
index 5575e15..0085316 100644
--- a/data/PVE/Cluster.pm
+++ b/data/PVE/Cluster.pm
@@ -72,6 +72,7 @@ my $observed = {
 'ha/manager_status' => 1,
 'ha/resources.cfg' => 1,
 'ha/groups.cfg' => 1,
+'ha/fence.cfg' => 1,
 'status.cfg' => 1,
 };
 
-- 
2.1.4
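For illustration, typical usage once a parser/writer pair has been registered
for the file with cfs_register_file() (hedged sketch, not part of this patch):

    use PVE::Cluster qw(cfs_read_file cfs_write_file);

    my $fence_cfg = cfs_read_file('ha/fence.cfg');  # parsed config
    # ... inspect or modify $fence_cfg ...
    cfs_write_file('ha/fence.cfg', $fence_cfg);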


___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel