Re: [pve-devel] [PATCH] ZFSPoolPlugin: Added the ability to use nested ZVOLs
Hi,

> AFAIK having a setting to control whether auto-import of the pool is desirable would be a plus. As in some situations the import/export of the pool is controlled by other means, and an accidental import of the pool may be a destructive action (i.e. when the pool comes from a shared medium like iSCSI, and thus should not be mounted by two nodes at the same time).

I agree. Should I add another parameter for this? If yes, should this default to auto-import, or not?

Best regards,
Adrian Costin

___
pve-devel mailing list
pve-devel@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
Re: [pve-devel] GUI for DHCP
> I don't know if it's technically possible to use the same IP on each host per VLAN (for the DHCP responses to the VMs on that host), so no need to care (I think?) about ARP on the network. Or maybe filter ARP for this IP so it does not leave the host. Any idea about this?

Honestly, DHCP has too many drawbacks for me. I think we should really try to find something better. But so far I have no good ideas :-/
Re: [pve-devel] GUI for DHCP
On February 12, 2015 at 8:23 PM Michael Rasmussen m...@datanom.net wrote:

> > Honestly, DHCP has too many drawbacks for me. I think we should really try to find something better. But so far I have no good ideas :-/
>
> Could the config and leases files be stored on the cluster file system, so that a dnsmasq running on each host could share them? I think this should be possible using locks.

There is no need to do that if we use a static assignment?
[pve-devel] [PATCH] ZFSPoolPlugin: Added the ability to use nested ZVOLs
- Moved the zpool_import method of zfs_request() to its own zpool_request() function
- activate_storage() now uses "zfs list" to check whether the zpool is imported
- pool import now imports only the configured pool, not all accessible pools

Signed-off-by: Adrian Costin adr...@goat.fish
---
 PVE/Storage/ZFSPoolPlugin.pm | 39 +++++++++++++++++++++++++-------------
 1 file changed, 27 insertions(+), 12 deletions(-)

diff --git a/PVE/Storage/ZFSPoolPlugin.pm b/PVE/Storage/ZFSPoolPlugin.pm
index 5cbd1b2..3fc2978 100644
--- a/PVE/Storage/ZFSPoolPlugin.pm
+++ b/PVE/Storage/ZFSPoolPlugin.pm
@@ -149,16 +149,27 @@ sub zfs_request {
 
     $timeout = 5 if !$timeout;
 
-    my $cmd = [];
+    my $cmd = ['zfs', $method, @params];
 
-    if ($method eq 'zpool_list') {
-	push @$cmd, 'zpool', 'list';
-    } else {
-	push @$cmd, 'zfs', $method;
-    }
+    my $msg = '';
+
+    my $output = sub {
+	my $line = shift;
+	$msg .= "$line\n";
+    };
+
+    run_command($cmd, outfunc => $output, timeout => $timeout);
+
+    return $msg;
+}
+
+sub zpool_request {
+    my ($class, $scfg, $timeout, $method, @params) = @_;
+
+    $timeout = 5 if !$timeout;
+
+    my $cmd = ['zpool', $method, @params];
 
-    push @$cmd, @params;
-
     my $msg = '';
 
     my $output = sub {
@@ -428,12 +439,16 @@ sub volume_rollback_is_possible {
 sub activate_storage {
     my ($class, $storeid, $scfg, $cache) = @_;
 
-    my @param = ('-o', 'name', '-H');
+    my @param = ('-o', 'name', '-H', $scfg->{'pool'});
+
+    my $text = zfs_request($class, $scfg, undef, 'list', @param);
 
-    my $text = zfs_request($class, $scfg, undef, 'zpool_list', @param);
-
     if ($text !~ $scfg->{pool}) {
-	run_command("zpool import -d /dev/disk/by-id/ -a");
+	my ($pool_name) = $scfg->{pool} =~ /([^\/]+)/;
+
+	my @import_params = ('-d', '/dev/disk/by-id/', $pool_name);
+
+	zpool_request($class, $scfg, undef, 'import', @import_params);
     }
 
     return 1;
 }
-- 
1.9.3 (Apple Git-50)
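The core of the nested-ZVOL fix is deriving the top-level pool name from the configured dataset before calling `zpool import` (the Perl regex `/([^\/]+)/` in the patch). A minimal sketch of that extraction, in Python purely for illustration:

```python
def pool_name(dataset: str) -> str:
    """Return the top-level zpool for a (possibly nested) dataset.

    Mirrors the patch's  my ($pool_name) = $scfg->{pool} =~ /([^\/]+)/;
    "tank" -> "tank", "tank/vmdata/nested" -> "tank".
    """
    return dataset.split("/", 1)[0]
```

So a storage configured with pool `tank/vmdata` still imports the pool `tank`, which is what `zpool import` expects.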
Re: [pve-devel] GUI for DHCP
On Thu, 12 Feb 2015 20:37:04 +0100 (CET) Dietmar Maurer diet...@proxmox.com wrote:

> There is no need to do that if we use a static assignment?

What do you mean by static assignment?

-- 
Hilsen/Regards
Michael Rasmussen

Get my public GnuPG keys:
michael at rasmussen dot cc
http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xD3C9A00E
mir at datanom dot net
http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE501F51C
mir at miras dot org
http://pgp.mit.edu:11371/pks/lookup?op=get&search=0xE3E80917
--
/usr/games/fortune -es says:
If you want to see card tricks, you have to expect to take cards.
		-- Harry Blackstone
Re: [pve-devel] GUI for DHCP
One question I also have: do we install a dnsmasq on each Proxmox host of a cluster? I'm using a lot of VLANs with /24 subnets in a 16-node Proxmox cluster, which means I need 16 IPs per VLAN just to manage DHCP.

I don't know if it's technically possible to use the same IP on each host per VLAN (for the DHCP responses to the VMs on that host), so no need to care (I think?) about ARP on the network. Or maybe filter ARP for this IP so it does not leave the host. Any idea about this?

----- Original message -----
From: datanom.net m...@datanom.net
To: pve-devel pve-devel@pve.proxmox.com
Sent: Thursday, 12 February 2015 14:24:04
Subject: Re: [pve-devel] GUI for DHCP

On Thu, 12 Feb 2015 14:12:29 +0100 (CET) Alexandre DERUMIER aderum...@odiso.com wrote:

> From the doc, it's possible to listen on specific interfaces:
>
>     interface=eth1
>     interface=eth2
>
> or also like this:
>
>     dhcp-range=eth1,192.168.100.100,192.168.100.199,4h
>     dhcp-range=eth2,192.168.200.100,192.168.200.199,4h

I am aware of this. I was referring to the patches you pointed to.

-- 
Hilsen/Regards
Michael Rasmussen
Re: [pve-devel] GUI for DHCP
On Thu, 12 Feb 2015 19:59:30 +0100 (CET) Dietmar Maurer diet...@proxmox.com wrote:

> Honestly, DHCP has too many drawbacks for me. I think we should really try to find something better. But so far I have no good ideas :-/

Could the config and leases files be stored on the cluster file system, so that a dnsmasq running on each host could share them? I think this should be possible using locks.

-- 
Hilsen/Regards
Michael Rasmussen
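Sharing one leases file between per-host dnsmasq instances would indeed require locking around every update. A sketch of the lock-append pattern (Python for illustration only; a cluster file system would need its own cluster-wide locking primitive rather than a local `flock`):

```python
import fcntl

def append_lease(path: str, record: str) -> None:
    """Append one lease record under an exclusive advisory lock.

    fcntl.flock() serializes writers on a local filesystem; on a
    cluster file system an equivalent cluster-wide lock would be
    needed -- this only shows the shape of the idea.
    """
    with open(path, "a") as f:
        fcntl.flock(f, fcntl.LOCK_EX)  # block until we own the file
        f.write(record + "\n")
        # the lock is released when the file is closed
```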
Re: [pve-devel] [PATCH] ZFSPoolPlugin: Added the ability to use nested ZVOLs
Hi,

AFAIK having a setting to control whether auto-import of the pool is desirable would be a plus. As in some situations the import/export of the pool is controlled by other means, and an accidental import of the pool may be a destructive action (i.e. when the pool comes from a shared medium like iSCSI, and thus should not be mounted by two nodes at the same time).

Regards

On Thu, Feb 12, 2015 at 8:27 PM, Adrian Costin adr...@goat.fish wrote:

> [...]
Re: [pve-devel] GUI for DHCP
> Attached image shows what I have in mind. The idea is that users are able to create a DHCP service on every configured device or bridge.

There was a long discussion about DHCP in 2013. AFAIK I sent a patch, but I have no idea if that still works, or if someone else sent an implementation. So the first step would be to find the latest patches, test them, and re-send them to the list.
Re: [pve-devel] [PATCH] bug-fix: ZFSPoolPlugin
On 02/12/2015 11:26 AM, Wolfgang Link wrote:

> Improve error handling: inform the user only if there is really no device, i.e. if both checks fail.
>
> [...]
>
> +    my $warn = @$;

what is that? I guess you want:

    warn $@ if $@;

instead?

> +    my $create_ok;
> +
>      for (1..10) {
> -	last if -e "/dev/zvol/$scfg->{pool}/$name";
> +	if (-e "/dev/zvol/$scfg->{pool}/$name") {
> +	    $create_ok = 1;
> +	    last;
> +	}
>  	Time::HiRes::usleep(100);
>      }
>
> +    die "can't alloc image\n" unless $create_ok;

We just want to wait until udev creates the device under /dev/zvol; we do not want to raise an additional error (zfs allocated the image successfully). So the above change makes no sense to me.
[pve-devel] [PATCH] bug-fix: ZFSPoolPlugin
Improve error handling: inform the user only if there is really no device, i.e. if both checks fail.

Signed-off-by: Wolfgang Link w.l...@proxmox.com
---
 PVE/Storage/ZFSPoolPlugin.pm | 20 ++++++++++++++++----
 1 file changed, 16 insertions(+), 4 deletions(-)

diff --git a/PVE/Storage/ZFSPoolPlugin.pm b/PVE/Storage/ZFSPoolPlugin.pm
index 7dc7d3e..231d109 100644
--- a/PVE/Storage/ZFSPoolPlugin.pm
+++ b/PVE/Storage/ZFSPoolPlugin.pm
@@ -180,14 +180,26 @@ sub alloc_image {
     $name = $class->zfs_find_free_diskname($storeid, $scfg, $vmid) if !$name;
 
     $class->zfs_create_zvol($scfg, $name, $size);
-    run_command("udevadm trigger --subsystem-match block");
-    run_command("udevadm settle --timeout 5");
-
+
+    eval {
+	run_command("udevadm trigger --subsystem-match block");
+	run_command("udevadm settle --timeout 5");
+    };
+
+    my $warn = @$;
+
+    my $create_ok;
+
     for (1..10) {
-	last if -e "/dev/zvol/$scfg->{pool}/$name";
+	if (-e "/dev/zvol/$scfg->{pool}/$name") {
+	    $create_ok = 1;
+	    last;
+	}
 	Time::HiRes::usleep(100);
     }
 
+    die "can't alloc image\n" unless $create_ok;
+
     return $name;
 }
-- 
1.7.10.4
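The pattern in the patch -- trigger udev, then poll briefly for the device node instead of treating a missing node as instantly fatal -- looks like this in outline (Python purely for illustration; the function name is made up):

```python
import os
import time

def wait_for_device(path: str, attempts: int = 10, delay_s: float = 0.01) -> bool:
    """Poll for a device node that udev may create asynchronously.

    Returns True as soon as the path exists, or False once the
    attempts are exhausted; the caller decides whether a timeout
    is fatal.
    """
    for _ in range(attempts):
        if os.path.exists(path):
            return True
        time.sleep(delay_s)
    return False
```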
[pve-devel] [PATCH] bug-fix for size output
Signed-off-by: Wolfgang Link w.l...@proxmox.com
---
 PVE/Storage/ZFSPoolPlugin.pm | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/PVE/Storage/ZFSPoolPlugin.pm b/PVE/Storage/ZFSPoolPlugin.pm
index 7dc7d3e..8584bac 100644
--- a/PVE/Storage/ZFSPoolPlugin.pm
+++ b/PVE/Storage/ZFSPoolPlugin.pm
@@ -97,15 +97,17 @@ sub zfs_parse_zvol_list {
 	    my @parts = split /\//, $1;
 	    my $name = pop @parts;
 	    my $pool = join('/', @parts);
+	    my $size = $2;
+	    my $origin = $3;
 
 	    next unless $name =~ m!^(\w+)-(\d+)-(\w+)-(\d+)$!;
 	    $name = $pool . '/' . $name;
 
 	    $zvol->{pool} = $pool;
 	    $zvol->{name} = $name;
-	    $zvol->{size} = zfs_parse_size($2);
+	    $zvol->{size} = zfs_parse_size($size);
 	    if ($3 !~ /^-$/) {
-		$zvol->{origin} = $3;
+		$zvol->{origin} = $origin;
 	    }
 	    push @$list, $zvol;
 	}
-- 
1.7.10.4
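The bug this fixes is a classic Perl pitfall: `$2`/`$3` are reset by the next successful match (the `next unless $name =~ ...` line), so their values must be saved into variables first. A sketch of the same parsing done safely, in Python purely for illustration (column format and names are assumptions):

```python
import re

def parse_zvol_line(line: str):
    """Parse one "dataset  size  origin" line of `zfs list` output.

    The capture groups are saved into variables *before* the second
    regex runs -- in Perl, $2/$3 would be clobbered by that match,
    which is exactly the bug the patch fixes.
    """
    m = re.match(r"(\S+)\s+(\S+)\s+(\S+)", line)
    if not m:
        return None
    dataset, size, origin = m.groups()  # saved up front
    name = dataset.split("/")[-1]
    # this second match is what resets Perl's $1..$4
    if not re.match(r"^(\w+)-(\d+)-(\w+)-(\d+)$", name):
        return None
    return {"name": dataset, "size": size,
            "origin": None if origin == "-" else origin}
```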
Re: [pve-devel] [PATCH] ZFSPoolPlugin: Added the ability to use nested ZVOLs
Hi,

IMHO, I see no reason not to default to the most common case (i.e. auto-importing), as long as there's a way to override it, and that way is somewhat documented.. ;)

On Thu, Feb 12, 2015 at 8:35 PM, Adrian Costin adr...@goat.fish wrote:

> > AFAIK having a setting to control whether auto-import of the pool is desirable would be a plus. [...]
>
> I agree. Should I add another parameter for this? If yes, should this default to auto-import, or not?
>
> Best regards,
> Adrian Costin
[pve-devel] [PATCH] add cpu hotplug form
Signed-off-by: Alexandre Derumier aderum...@odiso.com
---
 www/manager/Makefile             |  1 +
 www/manager/qemu/HardwareView.js | 61 ++++++++++++++++++++++++++++++++++++++--
 2 files changed, 60 insertions(+), 2 deletions(-)

diff --git a/www/manager/Makefile b/www/manager/Makefile
index 6876c74..44977a9 100644
--- a/www/manager/Makefile
+++ b/www/manager/Makefile
@@ -128,6 +128,7 @@ JSSRC=				\
 	qemu/HDResize.js	\
 	qemu/HDMove.js		\
 	qemu/HDThrottle.js	\
+	qemu/CPUHotplug.js	\
 	qemu/DisplayEdit.js	\
 	qemu/KeyboardEdit.js	\
 	qemu/HardwareView.js	\
diff --git a/www/manager/qemu/HardwareView.js b/www/manager/qemu/HardwareView.js
index ba986c4..d704391 100644
--- a/www/manager/qemu/HardwareView.js
+++ b/www/manager/qemu/HardwareView.js
@@ -63,15 +63,20 @@ Ext.define('PVE.qemu.HardwareView', {
 	    'PVE.qemu.ProcessorEdit' : undefined,
 	    tdCls: 'pve-itype-icon-processor',
 	    defaultValue: 1,
-	    multiKey: ['sockets', 'cpu', 'cores', 'numa'],
+	    multiKey: ['sockets', 'cpu', 'cores', 'numa', 'vcpus'],
 	    renderer: function(value, metaData, record, rowIndex, colIndex, store, pending) {
+		var vcpus = me.getObjectValue('vcpus', undefined, pending);
 		var sockets = me.getObjectValue('sockets', 1, pending);
 		var model = me.getObjectValue('cpu', undefined, pending);
 		var cores = me.getObjectValue('cores', 1, pending);
 		var numa = me.getObjectValue('numa', undefined, pending);
 
-		var res = (sockets*cores) + ' (' + sockets + ' sockets, ' + cores + ' cores)';
+		if (!vcpus) {
+		    vcpus = sockets*cores;
+		}
+
+		var res = vcpus + ' (' + sockets + ' sockets, ' + cores + ' cores)';
 
 		if (model) {
 		    res += ' [' + model + ']';
@@ -114,6 +119,9 @@ Ext.define('PVE.qemu.HardwareView', {
 	    },
 	    hotplug: {
 		visible: false
+	    },
+	    vcpus: {
+		visible: false
 	    }
 	};
 
@@ -260,6 +268,31 @@ Ext.define('PVE.qemu.HardwareView', {
 	    win.on('destroy', reload);
 	};
 
+	var run_cpuhotplug = function() {
+	    var rec = sm.getSelection()[0];
+	    if (!rec) {
+		return;
+	    }
+
+	    var sockets = me.getObjectValue('sockets', 1);
+	    var cores = me.getObjectValue('cores', 1);
+	    var vcpus = me.getObjectValue('vcpus', 1);
+
+	    var win = Ext.create('PVE.qemu.CPUHotplug', {
+		maxvcpus: sockets * cores,
+		vcpus: vcpus,
+		vmid: vmid,
+		pveSelNode: me.pveSelNode,
+		confid: rec.data.key,
+		url: '/api2/extjs/' + baseurl
+	    });
+
+	    win.show();
+
+	    win.on('destroy', reload);
+	};
+
 	var run_move = function() {
 	    var rec = sm.getSelection()[0];
 	    if (!rec) {
@@ -305,6 +338,13 @@ Ext.define('PVE.qemu.HardwareView', {
 	    handler: run_diskthrottle
 	});
 
+	var cpuhotplug_btn = new PVE.button.Button({
+	    text: gettext('CPU Hotplug'),
+	    selModel: sm,
+	    disabled: true,
+	    handler: run_cpuhotplug
+	});
+
 	var remove_btn = new PVE.button.Button({
 	    text: gettext('Remove'),
 	    selModel: sm,
@@ -372,6 +412,7 @@ Ext.define('PVE.qemu.HardwareView', {
 		resize_btn.disable();
 		move_btn.disable();
 		diskthrottle_btn.disable();
+		cpuhotplug_btn.disable();
 		revert_btn.disable();
 		return;
 	    }
@@ -383,6 +424,15 @@ Ext.define('PVE.qemu.HardwareView', {
 	    var isDisk = !key.match(/^unused\d+/) && (rowdef.tdCls == 'pve-itype-icon-storage' && !value.match(/media=cdrom/));
 
+	    var hotplug = me.getObjectValue('hotplug');
+	    var cpuhotplug;
+	    if (hotplug) {
+		Ext.each(hotplug.split(','), function(el) {
+		    if (el === 'cpu') {
+			cpuhotplug = 1;
+		    }
+		});
+	    }
+
 	    remove_btn.setDisabled(rec.data['delete'] || (rowdef.never_delete === true));
 
 	    edit_btn.setDisabled(rec.data['delete'] || !rowdef.editor);
@@ -393,7 +443,13 @@ Ext.define('PVE.qemu.HardwareView', {
 	    diskthrottle_btn.setDisabled(pending || !isDisk);
 
	    cpuhotplug_btn.setDisabled(!cpuhotplug ||
[pve-devel] pve-manager : add cpu hotplug form
I finally added an extra button with a separate form for vCPU hotplug (to avoid confusion with the main Processor form, where pending values are displayed).

Feel free to adapt it.
Re: [pve-devel] GUI for DHCP
On February 12, 2015 at 7:53 PM Alexandre DERUMIER aderum...@odiso.com wrote:

> One question I also have is: do we install a dnsmasq on each proxmox host of a cluster?

Yes.

> I'm using a lot of VLANs with /24 subnets in a 16-node Proxmox cluster, which means I need 16 IPs per VLAN just to manage DHCP. I don't know if it's technically possible to use the same IP on each host per VLAN (for the DHCP responses to the VMs on that host), so no need to care (I think?) about ARP on the network. Or maybe filter ARP for this IP so it does not leave the host. Any idea about this?

Sorry, no idea.
Re: [pve-devel] GUI for DHCP
> > There is no need to do that if we use a static assignment?
>
> What do you mean by static assignment?

VM 100 == IP1
VM 200 == IP2
VM XXX == IPYYY

And we make sure these mappings do not overlap in our cluster.
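One way to realize such a non-overlapping static assignment is to derive the guest IP deterministically from the VMID. A sketch under an assumed addressing scheme (this is not anything PVE implements, just an illustration of why the mappings cannot collide):

```python
import ipaddress

def vmid_to_ip(vmid: int, network: str = "10.10.0.0/16") -> str:
    """Map a VMID deterministically into a subnet.

    Distinct VMIDs get distinct addresses, so the assignments cannot
    overlap across the cluster as long as VMIDs are cluster-unique
    (which Proxmox already guarantees).
    """
    net = ipaddress.ip_network(network)
    offset = vmid + 1  # skip the network address itself
    if offset >= net.num_addresses - 1:
        raise ValueError("VMID outside the address range")
    return str(net.network_address + offset)
```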
Re: [pve-devel] [PATCH] ZFSPoolPlugin: Added the ability to use nested ZVOLs
There is no need for a flag to disable auto-import, because first, you can disable the storage in storage.cfg, and then it will not be activated. Second, a pool with an iSCSI export should not be in storage.cfg at all.

On 12.02.2015 at 21:25, Pablo Ruiz pablo.r...@gmail.com wrote:

> Hi,
>
> IMHO, I see no reason not to default to the most common case (i.e. auto-importing), as long as there's a way to override it, and that way is somewhat documented.. ;)
>
> [...]
Re: [pve-devel] [PATCH] bugfix : add missing queues nic option in print_net
applied, thanks!
[pve-devel] [PATCH] add vmname to vm remove msg
Signed-off-by: Wolfgang Link w.l...@proxmox.com
---
 www/manager/openvz/Config.js | 4 +++-
 www/manager/qemu/Config.js   | 4 +++-
 2 files changed, 6 insertions(+), 2 deletions(-)

diff --git a/www/manager/openvz/Config.js b/www/manager/openvz/Config.js
index 8589f22..17f40b5 100644
--- a/www/manager/openvz/Config.js
+++ b/www/manager/openvz/Config.js
@@ -15,6 +15,8 @@ Ext.define('PVE.openvz.Config', {
 	    throw "no VM ID specified";
 	}
 
+	var vmname = me.pveSelNode.data.name;
+
 	var caps = Ext.state.Manager.get('GuiCap');
 
 	var base_url = '/nodes/' + nodename + '/openvz/' + vmid;
@@ -88,7 +90,7 @@ Ext.define('PVE.openvz.Config', {
 	    text: gettext('Remove'),
 	    disabled: !caps.vms['VM.Allocate'],
 	    dangerous: true,
-	    confirmMsg: Ext.String.format(gettext('Are you sure you want to remove VM {0}? This will permanently erase all VM data.'), vmid),
+	    confirmMsg: Ext.String.format(gettext('Are you sure you want to remove VM {0} ('+vmname+')? This will permanently erase all VM data.'), vmid),
 	    handler: function() {
 		PVE.Utils.API2Request({
 		    url: base_url,
diff --git a/www/manager/qemu/Config.js b/www/manager/qemu/Config.js
index 1d1c8a2..ed2dd24 100644
--- a/www/manager/qemu/Config.js
+++ b/www/manager/qemu/Config.js
@@ -15,6 +15,8 @@ Ext.define('PVE.qemu.Config', {
 	    throw "no VM ID specified";
 	}
 
+	var vmname = me.pveSelNode.data.name;
+
 	var caps = Ext.state.Manager.get('GuiCap');
 
 	var base_url = '/nodes/' + nodename + "/qemu/" + vmid;
@@ -97,7 +99,7 @@ Ext.define('PVE.qemu.Config', {
 	    text: gettext('Remove'),
 	    disabled: !caps.vms['VM.Allocate'],
 	    dangerous: true,
-	    confirmMsg: Ext.String.format(gettext('Are you sure you want to remove VM {0}? This will permanently erase all VM data.'), vmid),
+	    confirmMsg: Ext.String.format(gettext('Are you sure you want to remove VM {0} ('+vmname+')? This will permanently erase all VM data.'), vmid),
 	    handler: function() {
 		PVE.Utils.API2Request({
 		    url: base_url,
-- 
1.7.10.4
Re: [pve-devel] GUI for DHCP
> So the first step would be to find the latest patches,

maybe these patches?

[PATCH 1/2] add ip parameter to network settings
https://www.mail-archive.com/pve-devel@pve.proxmox.com/msg05026.html

[PATCH 2/2] setup DHCP server at vm_start
https://www.mail-archive.com/pve-devel@pve.proxmox.com/msg05027.html

----- Original message -----
From: dietmar diet...@proxmox.com
To: datanom.net m...@datanom.net, pve-devel pve-devel@pve.proxmox.com
Sent: Thursday, 12 February 2015 11:17:04
Subject: Re: [pve-devel] GUI for DHCP

> > Attached image shows what I have in mind. Idea is that users are able to create a DHCP service on every configured device or bridge.
>
> There was a long discussion about DHCP in 2013. AFAIK I sent a patch, but I have no idea if that still works, or if someone else sent an implementation. So the first step would be to find the latest patches, test them, and re-send them to the list.
[pve-devel] [PATCH] add multiqueues field to nic form
Signed-off-by: Alexandre Derumier aderum...@odiso.com
---
 www/manager/Parser.js           |  3 +++
 www/manager/qemu/NetworkEdit.js | 10 ++++++++++
 2 files changed, 13 insertions(+)

diff --git a/www/manager/Parser.js b/www/manager/Parser.js
index ce0ddf6..56ea250 100644
--- a/www/manager/Parser.js
+++ b/www/manager/Parser.js
@@ -66,6 +66,9 @@ Ext.define('PVE.Parser', { statics: {
 	if (net.rate) {
 	    netstr += ",rate=" + net.rate;
 	}
+	if (net.queues) {
+	    netstr += ",queues=" + net.queues;
+	}
 	if (net.disconnect) {
 	    netstr += ",link_down=" + net.disconnect;
 	}
diff --git a/www/manager/qemu/NetworkEdit.js b/www/manager/qemu/NetworkEdit.js
index d4c358b..033e7d2 100644
--- a/www/manager/qemu/NetworkEdit.js
+++ b/www/manager/qemu/NetworkEdit.js
@@ -19,6 +19,7 @@ Ext.define('PVE.qemu.NetworkInputPanel', {
 	}
 	me.network.macaddr = values.macaddr;
 	me.network.disconnect = values.disconnect;
+	me.network.queues = values.queues;
 
 	if (values.rate) {
 	    me.network.rate = values.rate;
@@ -150,6 +151,15 @@ Ext.define('PVE.qemu.NetworkInputPanel', {
 		allowBlank: true
 	    },
 	    {
+		xtype: 'numberfield',
+		name: 'queues',
+		fieldLabel: gettext('Multiqueues'),
+		minValue: 1,
+		maxValue: 8,
+		value: '',
+		allowBlank: true
+	    },
+	    {
 		xtype: 'pvecheckbox',
 		fieldLabel: gettext('Disconnect'),
 		name: 'disconnect'
-- 
1.7.10.4
[pve-devel] pve-manager : add multiqueues field to nic form
I think it's OK to expose this option now. The latest Windows drivers are fine with multiqueue too.

It improves rx traffic; I don't remember my last benchmarks (the results were posted on this mailing list), but the benefit is great.

I have put some doc in the wiki: https://pve.proxmox.com/wiki/Virtio-net-multiqueues
[pve-devel] [PATCH] bugfix : add missing queues nic option in print_net
Currently the NIC queues option is removed when we try to update the config.

Signed-off-by: Alexandre Derumier aderum...@odiso.com
---
 PVE/QemuServer.pm | 1 +
 1 file changed, 1 insertion(+)

diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
index 70e2ae6..814a673 100644
--- a/PVE/QemuServer.pm
+++ b/PVE/QemuServer.pm
@@ -1431,6 +1431,7 @@ sub print_net {
     $res .= ",tag=$net->{tag}" if $net->{tag};
     $res .= ",firewall=1" if $net->{firewall};
     $res .= ",link_down=1" if $net->{link_down};
+    $res .= ",queues=$net->{queues}" if $net->{queues};
 
     return $res;
 }
-- 
1.7.10.4
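The bug class here is easy to reproduce: the config line is parsed into a hash and re-serialized on every update, so any key the printer forgets is silently dropped. A round-trip sketch, in Python purely for illustration (simplified format, not the actual PVE parser):

```python
def parse_net(s: str) -> dict:
    """Parse "virtio=<mac>,bridge=vmbr0,queues=4" into a dict."""
    return dict(kv.split("=", 1) for kv in s.split(","))

def print_net(net: dict, known=("virtio", "bridge", "tag", "queues")) -> str:
    """Re-serialize only the keys the printer knows about.

    If "queues" were missing from `known`, a parse -> print round
    trip would drop it -- exactly the bug the patch fixes.
    """
    return ",".join(f"{k}={net[k]}" for k in known if k in net)
```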
Re: [pve-devel] GUI for DHCP
On Thu, 12 Feb 2015 13:26:23 +0100 (CET) Alexandre DERUMIER aderum...@odiso.com wrote:

> > So the first step would be to find the latest patches,
>
> maybe these patches?
>
> [PATCH 1/2] add ip parameter to network settings
> https://www.mail-archive.com/pve-devel@pve.proxmox.com/msg05026.html
>
> [PATCH 2/2] setup DHCP server at vm_start
> https://www.mail-archive.com/pve-devel@pve.proxmox.com/msg05027.html

From my understanding, it seems like dnsmasq binds unconditionally to every interface? Should it not bind only to selected interfaces?

An idea for the requirement of having an IP assigned to an interface: if the user chooses DHCP for a specific interface, we create an alias interface which listens on broadcast and on the first IP of the subnet for DHCP requests only. This also means the first IP of the subnet must be reserved if the user wants DHCP support.

-- 
Hilsen/Regards
Michael Rasmussen
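For reference, binding dnsmasq to selected interfaces only (rather than all of them) is done with `bind-interfaces` plus explicit `interface=` lines. A minimal config sketch for one bridge; the interface name and address range are purely illustrative:

```
# /etc/dnsmasq.d/vmbr0v100.conf -- illustrative values
bind-interfaces           # bind only the interfaces listed below
interface=vmbr0v100       # serve DHCP on this bridge/VLAN only
dhcp-range=vmbr0v100,192.168.100.100,192.168.100.199,4h
port=0                    # disable DNS, act as a DHCP server only
```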