Re: [pve-devel] Count monthly traffic
> > How large is a file to store 32 netin/netout values with our setup?
> No idea. I can check this if you want.

Please do, but this means we always have a bigger file, because most VMs have just one or two network cards. File size does not really matter as long as it stays within a reasonable size.

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
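For a rough sense of scale, here is a back-of-envelope sketch of the size question being discussed. The layout, sample rate, and retention are illustrative assumptions, not PVE's actual RRD format; the point is only that per-sample cost scales linearly with the number of counters stored.

```javascript
// Hypothetical flat storage estimate: one 8-byte double per counter per sample.
// A VM with 32 netin/netout counters costs 16x more per sample than one with 2.
function rrdBytes(counters, samplesPerDay, retentionDays) {
    const BYTES_PER_VALUE = 8; // IEEE-754 double
    return counters * BYTES_PER_VALUE * samplesPerDay * retentionDays;
}

// One month of per-minute samples:
const big = rrdBytes(32, 24 * 60, 31);  // 11427840 bytes, ~11 MiB
const small = rrdBytes(2, 24 * 60, 31); //   714240 bytes, ~0.7 MiB
```

Under these assumed numbers the "always bigger file" overhead is roughly an order of magnitude, which is why whether it stays "within a reasonable size" depends mostly on sample rate and retention.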
Re: [pve-devel] features ideas for 3.0 roadmap
> - if host total cpu > 80% since X minutes

Why do you think cpu usage is relevant? I guess server load, memory and disk usage are more important.
Re: [pve-devel] features ideas for 3.0 roadmap
> Why do you think cpu usage is relevant? I guess server load, memory and disk usage are more important.

What I had in mind was to live-migrate a VM if a server is overloaded on cpu.
- For memory, we already have KSM and ballooning locally.
- For disk, since we live-migrate, it must be shared storage, so migrating the VM will not help.

But indeed, we could also check memory or the network link, for example.

VMware seems to call this VMware Distributed Resource Scheduler (DRS). I have never used it, but here is a demo: http://www.youtube.com/watch?v=aC1F_PMslmg
They seem to handle cpu and memory overload.

----- Original message -----
From: Dietmar Maurer diet...@proxmox.com
To: Alexandre DERUMIER aderum...@odiso.com, James Devine fxmul...@gmail.com
Cc: pve-devel@pve.proxmox.com
Sent: Wednesday, 13 March 2013 07:34:52
Subject: RE: [pve-devel] features ideas for 3.0 roadmap
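The trigger being proposed ("cpu > 80% since X minutes") is essentially a counter with hysteresis: the condition must hold for several consecutive samples before a host is flagged. A minimal sketch in plain JavaScript; the function name, threshold, and sampling interval are illustrative assumptions, not PVE code:

```javascript
// Flags a host as overloaded only after `minutesRequired` consecutive
// per-minute samples above `threshold` (usage given as a fraction, 0..1).
// A single sample below the threshold resets the counter.
function makeOverloadDetector(threshold, minutesRequired) {
    let consecutive = 0;
    return function sample(cpuUsage) {
        consecutive = (cpuUsage > threshold) ? consecutive + 1 : 0;
        return consecutive >= minutesRequired; // true => candidate for migration
    };
}

const overloaded = makeOverloadDetector(0.8, 3);
overloaded(0.9); // false - 1 minute above 80%
overloaded(0.9); // false - 2 minutes
overloaded(0.9); // true  - 3 consecutive minutes, trigger rebalancing
```

The reset-on-dip behaviour is what keeps a brief cpu spike from triggering a live migration; a production scheduler (DRS-style) would additionally score candidate target nodes before moving anything.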
Re: [pve-devel] features ideas for 3.0 roadmap
> Having a Maintenance mode would be good: being able to mark a node as under maintenance, have all VMs migrate off it, and have it disappear from the target list in migrations. Also, being able to select multiple guests and migrate them in one action would be VERY useful.

I agree too. (I have some hosts with 70 VMs ;)
It shouldn't be too difficult. I'll try to have a look at it this month.

See also https://bugzilla.proxmox.com/show_bug.cgi?id=173

Martin
[pve-devel] pve-manager : add start/stop/migrate all buttons to node
Hi,

This adds start/stop all buttons to the node panel. I also add a new migrateall feature.

This doesn't implement the HA maintenance mode; I don't know exactly how Martin wants to manage that.
[pve-devel] [PATCH 1/5] add stopall/startall vm buttons
Signed-off-by: Alexandre Derumier aderum...@odiso.com
---
 www/manager/node/Config.js | 33 ++++++++++++++++++++++++++++++++-
 1 file changed, 32 insertions(+), 1 deletion(-)

diff --git a/www/manager/node/Config.js b/www/manager/node/Config.js
index 7a683a8..ea4c1ba 100644
--- a/www/manager/node/Config.js
+++ b/www/manager/node/Config.js
@@ -29,6 +29,37 @@ Ext.define('PVE.node.Config', {
 	});
     };
 
+    var startallvmBtn = Ext.create('PVE.button.Button', {
+	text: gettext('Start All VMs'),
+	confirmMsg: Ext.String.format(gettext('Do you really want to start all Vms on node {0}?'), nodename),
+	handler: function() {
+	    PVE.Utils.API2Request({
+		params: { force: 1 },
+		url: '/nodes/' + nodename + '/startall',
+		method: 'POST',
+		waitMsgTarget: me,
+		failure: function(response, opts) {
+		    Ext.Msg.alert('Error', response.htmlStatus);
+		}
+	    });
+	}
+    });
+
+    var stopallvmBtn = Ext.create('PVE.button.Button', {
+	text: gettext('Stop All VMs'),
+	confirmMsg: Ext.String.format(gettext('Do you really want to stop all Vms on node {0}?'), nodename),
+	handler: function() {
+	    PVE.Utils.API2Request({
+		url: '/nodes/' + nodename + '/stopall',
+		method: 'POST',
+		waitMsgTarget: me,
+		failure: function(response, opts) {
+		    Ext.Msg.alert('Error', response.htmlStatus);
+		}
+	    });
+	}
+    });
+
     var restartBtn = Ext.create('PVE.button.Button', {
 	text: gettext('Restart'),
 	disabled: !caps.nodes['Sys.PowerMgmt'],
@@ -67,7 +98,7 @@ Ext.define('PVE.node.Config', {
 	title: gettext('Node') + " '" + nodename + "'",
 	hstateid: 'nodetab',
 	defaults: { statusStore: me.statusStore },
-	tbar: [ restartBtn, shutdownBtn, shellBtn ]
+	tbar: [ startallvmBtn, stopallvmBtn, restartBtn, shutdownBtn, shellBtn ]
     });
 
     if (caps.nodes['Sys.Audit']) {
-- 
1.7.10.4
[pve-devel] [PATCH 4/5] add migrate_all form
Signed-off-by: Alexandre Derumier aderum...@odiso.com
---
 www/manager/Makefile             |  1 +
 www/manager/window/MigrateAll.js | 84 ++++++++++++++++++++++++++++++
 2 files changed, 85 insertions(+)
 create mode 100644 www/manager/window/MigrateAll.js

diff --git a/www/manager/Makefile b/www/manager/Makefile
index b2763da..78a3f4b 100644
--- a/www/manager/Makefile
+++ b/www/manager/Makefile
@@ -87,6 +87,7 @@ JSSRC=				\
 	node/Config.js		\
 	qemu/StatusView.js	\
 	window/Migrate.js	\
+	window/MigrateAll.js	\
 	qemu/Monitor.js		\
 	qemu/Summary.js		\
 	qemu/OSTypeEdit.js	\
diff --git a/www/manager/window/MigrateAll.js b/www/manager/window/MigrateAll.js
new file mode 100644
index 000..fb677f5
--- /dev/null
+++ b/www/manager/window/MigrateAll.js
@@ -0,0 +1,84 @@
+Ext.define('PVE.window.MigrateAll', {
+    extend: 'Ext.window.Window',
+
+    resizable: false,
+
+    migrate: function(target, maxworkers) {
+	var me = this;
+	PVE.Utils.API2Request({
+	    params: { target: target, maxworkers: maxworkers },
+	    url: '/nodes/' + me.nodename + '/migrateall',
+	    waitMsgTarget: me,
+	    method: 'POST',
+	    failure: function(response, opts) {
+		Ext.Msg.alert('Error', response.htmlStatus);
+	    },
+	    success: function(response, options) {
+		var upid = response.result.data;
+
+		var win = Ext.create('PVE.window.TaskViewer', {
+		    upid: upid
+		});
+		win.show();
+		me.close();
+	    }
+	});
+    },
+
+    initComponent : function() {
+	var me = this;
+
+	if (!me.nodename) {
+	    throw "no node name specified";
+	}
+
+	me.formPanel = Ext.create('Ext.form.Panel', {
+	    bodyPadding: 10,
+	    border: false,
+	    fieldDefaults: {
+		labelWidth: 100,
+		anchor: '100%'
+	    },
+	    items: [
+		{
+		    xtype: 'PVE.form.NodeSelector',
+		    name: 'target',
+		    fieldLabel: 'Target node',
+		    allowBlank: false,
+		    onlineValidator: true
+		},
+		{
+		    xtype: 'numberfield',
+		    name: 'maxworkers',
+		    minValue: 1,
+		    maxValue: 100,
+		    value: 1,
+		    fieldLabel: 'Parallel jobs',
+		    allowBlank: false
+		}
+	    ]
+	});
+
+	var form = me.formPanel.getForm();
+
+	var submitBtn = Ext.create('Ext.Button', {
+	    text: 'Migrate',
+	    handler: function() {
+		var values = form.getValues();
+		me.migrate(values.target, values.maxworkers);
+	    }
+	});
+
+	Ext.apply(me, {
+	    title: 'Migrate All VMs',
+	    width: 350,
+	    modal: true,
+	    layout: 'auto',
+	    border: false,
+	    items: [ me.formPanel ],
+	    buttons: [ submitBtn ]
+	});
+
+	me.callParent();
+    }
+});
-- 
1.7.10.4
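The maxworkers field above caps how many migrations run in parallel. On the server side that implies a bounded worker pool: at most N tasks in flight, with the next one starting as soon as a slot frees up. A generic sketch of that pattern in plain JavaScript; `runLimited` is a hypothetical helper for illustration, not the actual migrateall implementation (which runs as a Perl task):

```javascript
// Run an array of async task factories with at most `maxworkers` in flight.
// Each worker repeatedly claims the next unclaimed index until none remain,
// so results keep their original order regardless of completion order.
async function runLimited(tasks, maxworkers) {
    const results = [];
    let next = 0;
    async function worker() {
        while (next < tasks.length) {
            const i = next++;       // claim an index (single-threaded, so safe)
            results[i] = await tasks[i]();
        }
    }
    const n = Math.min(maxworkers, tasks.length);
    await Promise.all(Array.from({ length: n }, worker));
    return results;
}
```

With maxworkers = 1 this degrades to strictly sequential migration, which is the conservative default the form uses.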
[pve-devel] [PATCH 2/5] api2: node : startall : add force option
force start if onboot = 0

Signed-off-by: Alexandre Derumier aderum...@odiso.com
---
 PVE/API2/Nodes.pm | 11 +++++++++--
 1 file changed, 9 insertions(+), 2 deletions(-)

diff --git a/PVE/API2/Nodes.pm b/PVE/API2/Nodes.pm
index 0dac6af..da1737a 100644
--- a/PVE/API2/Nodes.pm
+++ b/PVE/API2/Nodes.pm
@@ -919,6 +919,11 @@ __PACKAGE__->register_method ({
 	additionalProperties => 0,
 	properties => {
 	    node => get_standard_option('pve-node'),
+	    force => {
+		optional => 1,
+		type => 'boolean',
+		description => "force if onboot=0.",
+	    },
 	},
     },
     returns => {
@@ -933,6 +938,8 @@ __PACKAGE__->register_method ({
 	my $nodename = $param->{node};
 	$nodename = PVE::INotify::nodename() if $nodename eq 'localhost';
 
+	my $force = $param->{force};
+
 	my $code = sub {
 	    $rpcenv->{type} = 'priv'; # to start tasks in background
 
 		last if PVE::Cluster::check_cfs_quorum($i != 0 ? 1 : 0);
 		sleep(1);
 	    }
-
-	    my $startList = $get_start_stop_list($nodename, 1);
+	    my $autostart = $force ? undef : 1;
+	    my $startList = $get_start_stop_list($nodename, $autostart);
 
 	    # Note: use numeric sorting with <=>
 	    foreach my $order (sort {$a <=> $b} keys %$startList) {
-- 
1.7.10.4
Re: [pve-devel] [PATCH] bridge: disable igmp querier by default
first user reports problems with that:
http://forum.proxmox.com/threads/13153-pve-manager-2-3-7946f1f1-PVETest-two-node-cluster-fail

> applied, and uploaded to pvetest - please test.
>
> works here without problems so far. Many thanks!

-----Original Message-----
From: pve-devel-boun...@pve.proxmox.com [mailto:pve-devel-boun...@pve.proxmox.com] On Behalf Of Alexandre Derumier
Sent: Tuesday, 12 March 2013 06:13
To: pve-devel@pve.proxmox.com
Subject: [pve-devel] [PATCH] bridge: disable igmp querier by default

and reenable multicast_snooping module
Re: [pve-devel] [PATCH] bridge: disable igmp querier by default
On Wed, 13 Mar 2013 15:33:42, Dietmar Maurer diet...@proxmox.com wrote:
> first user reports problems with that:
> http://forum.proxmox.com/threads/13153-pve-manager-2-3-7946f1f1-PVETest-two-node-cluster-fail

Yep, read that. Hopefully he returns with the message that he is not using a Cisco switch ;-) If he is using a Cisco switch, Alexandre needs to dig into his Cisco switch configuration :-)

If this turns out to be a case where Cisco does it the opposite way of every other manufacturer, what would be the correct way to handle it?
1) Revert to the old config and give Cisco users instructions on how to turn igmp snooping off on pve hosts
2) Keep the new kernel and instruct everybody else how to turn igmp snooping on on pve hosts
3) Implement some kind of auto-detection script to configure the kernel at runtime

--
Hilsen/Regards
Michael Rasmussen
Re: [pve-devel] [PATCH] bridge: disable igmp querier by default
> Martin is also testing here with that kernel, and reports strange behavior. For example, it takes much longer to wait for quorum at boot time.

I think this is because you need to wait for an igmp query before multicast works (my Cisco sends one around each minute by default).

Does Martin use managed or unmanaged switches?

----- Original message -----
From: Dietmar Maurer diet...@proxmox.com
To: Michael Rasmussen m...@datanom.net, pve-devel@pve.proxmox.com
Sent: Wednesday, 13 March 2013 17:33:46
Subject: Re: [pve-devel] [PATCH] bridge: disable igmp querier by default
Re: [pve-devel] [PATCH] bridge: disable igmp querier by default
> I think this is because you need to wait for an igmp query before multicast works (my Cisco sends one around each minute by default).
>
> Does Martin use managed or unmanaged switches?

I guess 'unmanaged'.
Re: [pve-devel] [PATCH] bridge: disable igmp querier by default
-----Original Message-----
From: pve-devel-boun...@pve.proxmox.com [mailto:pve-devel-boun...@pve.proxmox.com] On Behalf Of Dietmar Maurer
Sent: Wednesday, 13 March 2013 18:54
To: Alexandre DERUMIER
Cc: pve-devel@pve.proxmox.com
Subject: Re: [pve-devel] [PATCH] bridge: disable igmp querier by default

> > Does Martin use managed or unmanaged switches?
> I guess 'unmanaged'.

I do tests on an IMS (the switch is built in, AXXSW1GB, nothing configured) and on a 3Com Baseline Switch 2928, also in several virtualization setups, including VMware WS.

Martin